Abstract

Dynamic analysis techniques have proven their effectiveness in security assessment. Nevertheless, the code under analysis must be executed, and this is often a challenge for firmware, most of which is deeply integrated into the hardware architecture of an embedded system. Emulation makes it possible to execute a large part of the code but quickly reaches its limits when the firmware needs to interact with specialized components. Here, the partial emulation, or hardware-in-the-loop, approach offers several advantages: it forwards accesses to hardware that is difficult to emulate properly and executes the firmware in turn on both entities. To date, this approach has been considered primarily for monolithic firmware, but less so for devices running advanced operating systems. In this thesis, we explore the challenges of security testing for processes running in an emulated environment where part of their execution must be forwarded to their original physical device. We first review the various techniques for intercepting system calls and their objectives. We highlight the fact that forwarding has not been explored in depth, yet is a promising approach for evaluating the security of embedded applications. We discuss the challenges raised by the different ways of running a process across two different Linux kernels. We implement, through a framework, these transfers for a Linux process, forwarding its system calls and memory accesses between its emulated environment and its original environment on the physical device. To overcome the challenges of using physical devices for these security tests, we also present a new test platform to reproduce hardware-in-the-loop security experiments.
Acknowledgments

First and foremost, I want to express my deepest gratitude to my esteemed advisor Aurélien, for his invaluable guidance, constructive feedback, and unwavering support. As a young scientist and researcher, I have found his mentorship a constant source of inspiration and motivation. I would also like to thank my colleagues and friends from the S3 group at Eurecom for providing a stimulating and positive research environment. I hope that future generations of students will also have the privilege of experiencing such conditions. Additionally, many thanks to the Eurecom administration for their assistance in navigating the bureaucratic processes that come with pursuing a PhD. I would also like to express my sincere appreciation to the members of the committee for their valuable comments. In particular, I am grateful to the reviewers, Prof. Guillaume HIET and Dr. Vincent ROCA, for their time and insightful feedback, which helped me improve my research work. Furthermore, I am glad to extend my gratitude to Siemens for their funding of my research. I am especially grateful for the opportunity I had to intern with their team in Munich for two months. Then, I would like to thank my family for their support and encouragement throughout my studies. Last but not least, I would like to express my heartfelt gratitude to my fiancée, Isabelle, for her patience and for having always been by my side during all these years.

Chapter 1: Introduction

In their arms race and the conquest of space, both the Soviet Union and the United States needed more sophisticated ways to control the movements of their rockets. With the increase in computing power in the 1960s, it became possible to replace existing analog and mechanical approaches with digital computers. This led to the development of the Autonetics D-17 guidance system [START_REF]Documents about the D-17(B) guidance system[END_REF] and the Apollo Guidance Computer (AGC) [START_REF]The virtual AGC project[END_REF], two of the earliest digital embedded systems. Their missions were critical and required high reliability. These systems were responsible for processing measured data and providing accurate information within a specific time frame. Their weight was a crucial factor, as additional weight meant the need for larger and more expensive rockets. Many of their characteristics are shared with their modern counterparts. Today, embedded systems can be found in a wide range of sectors, including industry, consumer products, military, health, home automation, and agriculture. It is difficult to provide an exact number of deployed embedded systems, but according to some reports [5,6], their global market was estimated at 102.82 billion USD in 2022 and is expected to reach 130.5 billion by 2027. They are at the heart of critical systems that can sometimes directly affect human lives. Critical incidents such as the one involving the Therac-25 radiotherapy machine, in which patients received radiation overdoses because of a race condition, demonstrate the importance of ensuring error-free software [START_REF]Therac-25[END_REF]. Their prevalence in our society and everyday life therefore makes them a key issue for security and privacy, and this is likely to continue in the future.
For instance, the automotive sector is undergoing a paradigm shift with the gradual abandonment of combustion engines in favor of electric ones. New vehicles incorporate many recent embedded systems for their advanced driver-assistance systems (ADAS). These help the driver by providing alerts (e.g., drowsiness detection), driving assistance (e.g., anti-lock braking system (ABS), cruise control, collision avoidance system) and environment monitoring (e.g., traffic sign recognition, vehicular communication systems). Compared to rocket guidance systems, these systems need to account for other moving objects such as cars, bicycles and pedestrians. Moreover, the importance of embedded systems in various industries, such as the automotive sector, is highlighted by the ongoing global chip shortage. As a result, it is also important to accurately assess the risk inherent in these systems. This led to the adoption of the secure software development life cycle (SDLC). This methodology defines best practices that integrate security measures at every stage of the product development process. Such measures consist of identifying security concerns, building a threat model, keeping it up to date during the product's lifespan, documenting the procedures for addressing security vulnerabilities, and monitoring for known vulnerabilities. Additionally, it covers security testing such as static code review, test automation with continuous integration, product integrity during deployment and penetration testing on the final product. However, firmware penetration testing presents additional challenges when compared to traditional software. A wide range of technologies has been developed to address the specialized requirements of embedded systems. This diversity has created a heterogeneous ecosystem in terms of solutions, designs, and architectures. Moreover, the rise of the Internet of Things (IoT) has led to an increase in the number of connected devices that can communicate with each other and exchange data over the Internet. In addition to increasing the attack surface, these online services make security analysis even more complex, as part of the environment is beyond control and may change continuously over time. Furthermore, the firmware is often closely integrated with the hardware, making it challenging to effectively test the firmware without its hardware. Modern software testing techniques are fruitful in finding bugs. In particular, dynamic analysis techniques such as fuzzing and symbolic execution perform very well and are automatable. These were at the core of the automatic defensive systems used to identify vulnerabilities during the DARPA Cyber Grand Challenge (CGC) [START_REF]Cyber grand challenge (CGC) (archived)[END_REF]. The best competitors employed a combination of fuzzing and symbolic execution to balance their respective weaknesses [START_REF] Avgerinos | The mayhem cyber reasoning system[END_REF][START_REF] Goodman | The past, present, and future of cyberdyne[END_REF][START_REF] Nguyen-Tuong | Xandra: An autonomous cyber battle system for the Cyber Grand Challenge[END_REF][START_REF] Shoshitaishvili | Mechanical phish: Resilient autonomous hacking[END_REF]. This event sparked dramatic progress in published research on bug-finding techniques in the following years [START_REF] Valentin | The art, science, and engineering of fuzzing: A survey[END_REF]. However, it is impractical to run these methods directly on the physical device, for several reasons.
Unlike more traditional computers, embedded systems have more limited resources in terms of computing power, memory size, network bandwidth and energy consumption. Additionally, they provide less introspection into the firmware execution. Symbolic execution requires identifying branch statements and solving computationally intensive equations. To select the next input to run, fuzzing relies on metrics derived from the feedback of a program execution trace. Furthermore, running the analysis on the device makes automation and scaling more arduous. To overcome these challenges, emulation is often used to enhance the observability of system states and improve control over their execution. This process of moving the firmware from its original host to an emulated environment is called rehosting [START_REF] Fasano | Enabling Security Analyses of Embedded Systems via Rehosting[END_REF]. It comes with new challenges regarding the inference of an environment suitable for the proper execution of the firmware. Although efforts have been made to automatically generate these virtualized environments, the need for accuracy leads to including the original device in the simulation in a hardware-in-the-loop (HIL) fashion. Pairing hardware with simulation has been a proven technique in various fields for several decades. It dates back to the early days of rocket testing [START_REF] Bailey | Contributions of hardware-in-theloop simulations to Navy test and evaluation[END_REF][START_REF] Eguchi | Benefits of HWIL simulation to develop guidance and control systems for missiles[END_REF][START_REF] Mitchell | Hardware-in-the-loop simulation for an active missile[END_REF] and was later employed in the development of fly-by-wire (FBW)1 systems in aircraft [START_REF] Martha | The role of simulation in the development and flight test of the HiMAT vehicle[END_REF]. The hardware-in-the-loop approach is particularly useful for scenarios involving complex environments that are difficult to model precisely. In the context of firmware rehosting, using hardware-in-the-loop offers several advantages. It allows focusing the rehosting process on the area of interest while still preserving the interactions with the original environment. In practice, it reduces the reverse engineering scope to the understanding of a suitable interface on the device. Furthermore, the hardware-in-the-loop approach leads to feeding more accurate inputs into the emulation. Additionally, the scalability of the analysis can be enhanced by caching interactions [START_REF] Kammerstetter | Embedded security testing with peripheral device caching and runtime program state approximation[END_REF] or by connecting multiple emulation instances to the same device [START_REF] Liu | Mousse: a system for selective symbolic execution of programs with untamed environments[END_REF]. Hardware-in-the-loop security testing has been considered primarily for monolithic firmware, but less so for devices running advanced operating systems. Because of their versatility and low cost, Linux-based systems are a popular choice for embedded systems. Although the kernel security model is robust, many flaws are discovered every day. This has an impact on billions of devices around the world and on the people relying on them. Therefore, it is crucial to develop appropriate security testing techniques for them.

1 Fly-by-wire replaces the traditional mechanical flight controls with electronic signals, allowing more precise and responsive control of the aircraft.
This thesis aims to explore and address the challenges of security testing with hardware-in-the-loop. In particular, the focus is on Linux-based operating systems with processes running in an emulated environment where part of their execution must be forwarded to their original physical device. In summary, the main contributions of this thesis consist of:

• a state of the art covering publications on embedded device security testing using hardware-in-the-loop and on Linux firmware rehosting approaches;
• a novel approach to filter and forward system calls between an emulator and a physical device, which addresses the shortcomings of existing methods;
• a prototype, Chestburster, implementing this technique;
• an infrastructure, BEERR, aiming to ease access to the physical devices used in system security publications and the reproducibility of their artifacts.

The work of this thesis led to an academic publication, "BEERR: Bench of Embedded system Experiments for Reproducible Research" [START_REF] Olivier | BEERR: Bench of Embedded system Experiments for Reproducible Research[END_REF], and to a second publication on the system call forwarding approach, which will be submitted soon. Throughout my graduate studies, I had the pleasure to maintain and extend the research project avatar2 [START_REF] Muench | Avatar 2: A multi-target orchestration platform[END_REF], which was the outcome of previous theses [START_REF] Muench | Dynamic binary firmware analysis: challenges & solutions[END_REF][START_REF] Zaddach | Development of novel binary analysis techniques for security applications[END_REF]. This thesis is organized into 7 chapters. After having introduced the context and the general challenges, Chapter 2 provides the necessary background for the rest of this thesis. Next, in Chapter 3, we cover the state of the art by surveying hardware-in-the-loop security testing publications and existing approaches for rehosting Linux firmware. Furthermore, we highlight their current limitations and investigate how system call interception mechanisms can improve them. In Chapter 4, we delve into the challenges of forwarding system calls and propose a novel approach. In Chapter 5, we detail the implementation of our new technique in a prototype: Chestburster. Afterward, we present a solution to facilitate access to physical devices and promote artifact reproducibility in Chapter 6. Finally, we discuss future work and conclude the thesis in Chapter 7.

Chapter 2: Background

This chapter discusses background information relevant to the rest of the thesis.

Embedded systems

Embedded systems are devices designed to perform a specific task within a larger system. They are optimized and tailored to meet specific requirements and to minimize development and production costs. As a result, these systems typically have limited resources in terms of computing power, memory size, network bandwidth and energy consumption. Moreover, these devices often interact with the physical world and are subject to real-time computing constraints. This has led to the creation of a wide variety of technologies to meet different design choices. It is illustrated, for example, by the large number of different Instruction Set Architectures (ISAs) present in embedded devices (e.g., x86, MIPS, ARM, PowerPC, m68k, AVR, MSP430, RISC-V), each of which can additionally enable extra features via diverse architecture extensions.

Firmware

The software driving an embedded system is commonly called firmware.
Besides this lexical difference, firmware presents other distinctions related to its intrinsic nature. Firmware is rarely portable across systems because it is often specific to a particular device and hardware platform. Although some embedded systems rely on safe languages such as Ada, and other languages such as Rust are becoming more popular, most firmware is written in low-level languages (e.g., C or assembly). It is usually designed to run with minimal intervention from the user, whereas traditional software often works with user inputs. Firmware is typically stored in read-only memory (ROM) or non-volatile memory (flash memory, EEPROM), which makes its update process more difficult to perform than traditional software updates. All of these particularities contribute to the complexity of its security analysis. In [START_REF] Muench | What You Corrupt Is Not What You Crash: Challenges in Fuzzing Embedded Devices[END_REF], the authors present a classification of firmware based on the type of operating system used. This classification is relevant to us because it takes into account the number of abstraction layers present in the firmware.

Type 0. The non-embedded devices category represents traditional systems used by smartphones, desktops, workstations and servers. Examples include Unix-like operating systems (e.g., Debian, Android), BSD variants (e.g., OpenBSD, macOS) and others (e.g., Windows).

Type I. General-purpose OS-based devices are Type 0 systems tailored for an embedded environment. They only include the features needed by the system, with lightweight user-mode applications. Their proximity to traditional operating systems simplifies the use of common analysis techniques. Examples include Minix and Linux-based systems paired with BusyBox.

Type II. Embedded OS-based devices aim to address the resource constraints some systems face. They are typically characterized by their small footprint, high performance, and ability to support real-time scheduling requirements. Despite the absence of advanced processor features such as a Memory Management Unit (MMU), there usually still exists a logical separation between the kernel and the application code. Examples include real-time OSes such as FreeRTOS, ZephyrOS, VxWorks and QNX, which frequently run on modem devices.

Type III. Devices without an OS-abstraction represent monolithic firmware where all components are linked together into one executable which runs directly on the hardware. Such firmware has no precise OS abstraction and instead relies on library calls.

Peripherals

Peripherals play an essential role in the composition of a System-on-Chip (SoC). They provide the majority of the input/output (I/O) for the processors and connect them to the external environment. Examples include timers, hardware accelerators (cryptographic primitives, network packet processing, graphics rendering), communication interfaces, memory controllers and power management units. Peripherals may be located either internally or externally to the SoC. In the latter case, a method of communication between the peripheral and the processor must be established. Common mechanisms, illustrated by the sketch after this list, include:

• Memory-mapped I/O (MMIO): This method of communication involves directly mapping the hardware registers of the peripheral into the firmware address space. This way, the processor can read and write the peripheral's status, configuration or any other data.
• Polling: The processor periodically checks the status of a peripheral to see if a desired condition is satisfied.
• Interrupt requests (IRQ): To avoid wasting the processor's time in polling, peripherals can raise interrupts to signal events, such as the completion of a task. When this happens, the processor stops its execution, saves its current context and switches to the interrupt handler corresponding to the raised interrupt.
• Direct memory access (DMA): To further improve processor usage efficiency, a DMA peripheral accesses memory directly to read or write large amounts of data. On completion, the processor is notified by an interrupt.
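The following minimal C sketch illustrates the first two mechanisms, an MMIO register access driven by polling; the UART peripheral, its base address and its status bit are hypothetical and stand in for whatever a given SoC's datasheet defines.

```c
#include <stdint.h>

/* Hypothetical UART mapped at 0x40000000; real addresses and bit
 * layouts come from the SoC's datasheet. */
#define UART_BASE     0x40000000u
#define UART_STATUS   (*(volatile uint32_t *)(UART_BASE + 0x0))
#define UART_DATA     (*(volatile uint32_t *)(UART_BASE + 0x4))
#define UART_TX_READY (1u << 0)

static void uart_putc(char c)
{
    /* Polling: busy-wait on the MMIO status register until the
     * peripheral can accept a new byte. The volatile qualifier keeps
     * the compiler from optimizing the repeated reads away. */
    while ((UART_STATUS & UART_TX_READY) == 0)
        ;
    UART_DATA = (uint32_t)c;  /* MMIO write to the data register */
}
```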
The Linux kernel

2.2.1 Importance of Linux in embedded devices

The Linux kernel has become so popular that it is now considered an essential component in the field of embedded systems. It is written in a highly portable language (C), has been designed to be easily adaptable to a wide range of hardware architectures, and provides a large selection of software tools and libraries. Its modular design offers developers the ability to tailor the kernel to the specific needs of an embedded system and its application. In addition, the kernel has a robust security model to protect the integrity of the system, with mechanisms such as memory protection and access controls. Finally, it has a large, active and open-source community that ensures continuous support.

The process abstraction

The process abstraction is a fundamental concept that refers to a program in execution. It is used by the operating system to manage concurrent tasks. Each process has unique identifiers such as the user ID (UID) and the group ID (GID). Processes can communicate with other processes using inter-process communication (IPC) methods, and in particular with their children. The process table organizes processes in a hierarchical structure including parent and child relationships. Additionally, a process has its own address space, which isolates processes from each other.

The memory model

The Linux kernel uses virtual memory management to enable processes to use more memory than is physically available on the system. This is accomplished through the use of paging. The kernel divides the process's virtual address space into small chunks, called pages, and swaps them in and out of physical memory as needed. Furthermore, the kernel maintains a page table for each process containing the mapping of its virtual addresses to physical addresses. When a process accesses a virtual address, the hardware looks into the page table for the corresponding physical address and retrieves the data at this location. If the page is not present in physical memory, a page fault is raised. The kernel then answers this fault by loading the page into memory. The kernel also supports various memory-related features. Memory permissions, such as read-write-execute (rwx) permissions, help to ensure that processes only access memory in an authorized manner. Memory mapping allows mapping files or devices into a process's virtual address space. However, it is important to note that different computer architectures may implement different memory models and management techniques [START_REF] Balzarotti | In the Land of MMUs: Multiarchitecture OS-Agnostic Virtual Memory Forensics[END_REF]. The most common types are radix trees for Intel x86, ARM and RISC-V, inverted page tables for PowerPC, and software-defined management for MIPS. It is the job of the kernel to provide a unified interface for processes, regardless of the underlying memory model.
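A minimal user-space sketch, using only standard POSIX calls, shows these mechanisms from the process's point of view: the first write to a freshly mapped anonymous page triggers a page fault that the kernel resolves transparently, and mprotect changes the permissions the kernel enforces on the page.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);

    /* Map one anonymous read-write page into the virtual address space. */
    char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    /* First access: the hardware raises a page fault and the kernel
     * backs the virtual page with a physical frame. */
    strcpy(p, "hello");

    /* Drop the write permission: any later write through p would be an
     * unauthorized access and raise SIGSEGV. */
    mprotect(p, page, PROT_READ);
    printf("%s\n", p);  /* reads are still permitted */

    munmap(p, page);
    return 0;
}
```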
Filesystem data structures

The filesystem is responsible for organizing and storing files and directories. The Linux kernel uses various data structures to manage the filesystem and enable file operations. A file descriptor is an abstraction that represents the connection between a process and a file or a device. Inodes are data structures that store information about files (e.g., size, permissions, location on disk). The open file table and the inode table are used to keep track of the kernel's connections with open files. In contrast, the file descriptor tables are used to keep track of the connections between processes and files. In addition, the kernel uses a virtual filesystem (VFS) to abstract the underlying physical storage and provide a uniform interface for accesses. Pipes allow processes to communicate with each other by passing data through a buffer. They can be used to redirect the output of one process as the input of another, creating a chain of processes working together. Special files, such as block devices and character devices, are used to abstract input/output (I/O) operations and provide a uniform interface for access to hardware devices.

Character devices

Character devices create an interface for user applications to interact with kernel and hardware components. The specificity of character devices is the way they handle data: the exchange is done through a continuous stream of characters. Character devices are accessed from user mode through the filesystem with a defined set of system calls. It is up to the character device to implement the supported system calls and their operations on the device. Beyond the typical file operations, such as open, read, write, mmap, llseek or lock, character devices support the ioctl system call. Its signature is: int ioctl(int fildes, int request, ... /* arg */). The request code specifies the operation to be performed, and the additional arguments provide its inputs. This design choice grants the device flexibility in the type of operations it can handle. However, it raises issues regarding consistency and portability across platforms. As a result, different device drivers may use different codes to represent the same operation, which can lead to confusion and difficulty when working with multiple devices.
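As a small illustration, the sketch below queries a fictitious character device; /dev/example0 and the request code are made up for the example, since, as noted, each driver defines its own codes (typically built with the _IOR/_IOW encoding macros).

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Hypothetical driver-specific request code. The _IOR macro encodes a
 * "magic" type, a command number, and the size of the argument. */
#define EXAMPLE_GET_VERSION _IOR('E', 1, int)

int main(void)
{
    int fd = open("/dev/example0", O_RDWR);  /* fictitious device node */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    int version = 0;
    if (ioctl(fd, EXAMPLE_GET_VERSION, &version) == 0)
        printf("driver reports version %d\n", version);
    else
        perror("ioctl");

    close(fd);
    return 0;
}
```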
Linux asynchronous I/O

The Linux asynchronous I/O mechanism enables concurrent processing of multiple I/O requests in the kernel without blocking the calling thread. It consists of two main primitives: submitting a request into the pending queue, and consuming an event notifying the completion of the request. The traditional Linux AIO subsystem has six system calls for managing the I/O contexts: io_setup, io_destroy, io_submit, io_cancel, io_getevents and io_pgetevents. Internally, it relies on a ring buffer to manage I/O request submissions, while completion events are stored in an array. However, this implementation has several limitations, such as not working with buffered I/O. To address these limitations, a new interface called io_uring was introduced in Linux 5.1. To reduce the number of context switches, it uses two ring buffers shared with user space: the submission queue for I/O requests and the completion queue for completion events. The user space application puts new I/O requests at the tail of the submission queue, and the kernel consumes them from the head. Conversely, the kernel puts completion events at the tail of the completion queue while the application consumes them from the head. As a result, managing the diverse operations related to the queues requires only three system calls: io_uring_setup, io_uring_enter and io_uring_register.
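A minimal sketch of this flow, written with the liburing helper library (which wraps the three system calls) rather than the raw interface; it submits a single read request and waits for its completion event.

```c
/* Build with: gcc uring_read.c -luring */
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct io_uring ring;
    if (io_uring_queue_init(8, &ring, 0) < 0)  /* io_uring_setup under the hood */
        return 1;

    int fd = open("/etc/hostname", O_RDONLY);
    char buf[256];

    /* Place one read request at the tail of the submission queue. */
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
    io_uring_submit(&ring);                    /* io_uring_enter */

    /* Consume the completion event from the head of the completion queue. */
    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);
    printf("read returned %d bytes\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    close(fd);
    io_uring_queue_exit(&ring);
    return 0;
}
```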
Process migration

Process snapshotting is a technique used to capture the state of a process, including its context and memory, at a particular point in time. This allows the process to be inspected statically, cloned, or restarted later or in a different location. One use of process snapshotting is cold migration, where the process is stopped and its state is transferred to another system in order to continue its execution there. Tools like CRIU [START_REF]Checkpoint/restart in userspace (CRIU)[END_REF] (Checkpoint/Restore In Userspace) are commonly used to migrate Linux containers. Another technique, hot migration, migrates virtual machines without downtime. QEMU uses the userfaultfd [16] kernel feature to register a special file descriptor that handles the process's page faults in user mode. This way, when the migrating process tries to access a memory location not yet migrated, a page fault is raised and QEMU fetches the missing pages before restarting it. Checkpoint-restart instead involves periodically saving the state of a process during its execution. Distributed operating systems make use of these techniques to improve the availability, scalability and load-balancing of applications [START_REF] Lottiaux | OpenMosix, OpenSSI and Kerrighed: a comparative study[END_REF]. For instance, MOSIX [START_REF] Barak | The MOSIX distributed operating system: load balancing for UNIX[END_REF][START_REF] Barak | Scalable cluster computing with MOSIX for Linux[END_REF] divides the migrating process in two parts. While the user context, composed of the program code, data, stack, memory maps and registers, is allowed to migrate many times, the system context residing in the kernel stays at the initial node. All interactions between these contexts are intercepted and forwarded across the network. Additionally, various resource-sharing algorithms are used to manage the load-balancing of processes between nodes.

Dynamic binary analysis

Static analysis is a method for analyzing a program without executing it. Dynamic analysis, on the other hand, involves executing the program and instrumenting its behavior while it is running. These two classes of analysis offer different advantages and drawbacks, which depend on the context of the analysis and its objectives [START_REF] Cono | SoK: Using dynamic binary instrumentation for security (and how you may get caught red handed)[END_REF][START_REF] Shoshitaishvili | ) the art of war: Offensive techniques in binary analysis[END_REF][START_REF] Vadayath | Arbiter: Bridging the Static and Dynamic Divide in Vulnerability Discovery on Binary Programs[END_REF]. In general, static analysis can achieve larger coverage and produce sound results by analyzing all possible execution paths of a program. However, lacking runtime context, it has to make approximations that are often arbitrary and lead to false positives. In contrast, dynamic analysis is performed in a given environment and for a given input. It is more precise in what it observes (instructions, contents of registers and memory), but offers smaller coverage because one program path is executed at a time. Examples include techniques such as runtime testing, debugging, profiling, and fuzzing. Static analysis examples include code review, data and control flow analysis, and model checking. For some analysis techniques, the distinction between the two can be unclear. For instance, symbolic execution may be considered either static or dynamic analysis depending on how it is implemented [START_REF] Corteggiani | Inception: System-wide security testing of real-world embedded systems software[END_REF][START_REF] Shoshitaishvili | Firmalice-automatic detection of authentication bypass vulnerabilities in binary firmware[END_REF]. Source code instrumentation leverages high-level semantics (e.g., variable types) to better reason about the program's behavior [START_REF] Corteggiani | Inception: System-wide security testing of real-world embedded systems software[END_REF][START_REF] Davidson | FIE on Firmware: Finding Vulnerabilities in Embedded Systems Using Symbolic Execution[END_REF][START_REF] Mohammadjavad | Charm: Facilitating dynamic analysis of device drivers of mobile systems[END_REF]. As a result, it is easier to discover vulnerabilities thanks to the richer context available. However, source code instrumentation may not always be feasible, because it requires access to the source code and the ability to recompile the program. In addition, it does not test exactly what is executed by the processor. From its writing to its execution, the program source code goes through multiple transformations (e.g., compilation, linking, loading), where each stage has a chance to introduce new bugs [START_REF] Thompson | Reflections on trusting trust[END_REF]. This is particularly relevant in the current context, with the increasing prevalence of supply chain attacks [START_REF] Sterle | On solarwinds orion platform security breach[END_REF][START_REF]Attack inception: Compromised supply chain within a supply chain poses new risks[END_REF]. Therefore, it is important to also test the program in its binary form, even when the source code is available. Both approaches are therefore complementary. Testing at the source code level is more reasonable during product development, because developers have access to the source code and may require feedback. In contrast, binary testing can happen before product release or during external audits, where a threat model may not have been initially considered by the developer team. In light of this, this thesis focuses on dynamic analysis targeting binary-only programs.
Hardware-in-the-loop

Hardware-in-the-loop (HIL) testing consists of the integration of physical components within a simulation [START_REF] Corteggiani | Inception: System-wide security testing of real-world embedded systems software[END_REF][START_REF] Kammerstetter | Prospect: peripheral proxying supported embedded code testing[END_REF][START_REF] Koscher | SURROGATES: Enabling near-real-time dynamic analyses of embedded systems[END_REF][START_REF] Liu | Mousse: a system for selective symbolic execution of programs with untamed environments[END_REF][START_REF] Mocanu | An open-source hardware-in-the-loop virtualization system for cybersecurity studies of scada systems[END_REF][START_REF] Muench | Avatar 2: A multi-target orchestration platform[END_REF][START_REF] Sharma Oruganti | Hardware-in-Loop Based Automotive Embedded Systems Cybersecurity Evaluation Testbed[END_REF][START_REF] Ruge | Frankenstein: Advanced wireless fuzzing to exploit new bluetooth escalation targets[END_REF][START_REF] Mohammadjavad | Charm: Facilitating dynamic analysis of device drivers of mobile systems[END_REF]. In this way, the physical devices are fed with inputs from the simulation while their outputs are monitored. This technique brings the benefit of only requiring access to the interface with the physical component. This interface is often standardized (JTAG, I2C, MMIO), which removes the burden of knowing the internal structure of the device: it is treated as a black box. For instance, cars are increasingly filled with embedded systems with complex and sensitive architectures, and the automotive sector has strong regulatory incentives to test its systems extensively. HIL helps to build realistic testbeds to establish the reliability of such systems [START_REF] Fowler | Towards a Testbed for Automotive Cybersecurity[END_REF][START_REF] Sharma Oruganti | Hardware-in-Loop Based Automotive Embedded Systems Cybersecurity Evaluation Testbed[END_REF]. In the context of system security, HIL has recently gained a lot of interest with rehosting [START_REF] Fasano | Enabling Security Analyses of Embedded Systems via Rehosting[END_REF][START_REF] Wright | Challenges in firmware rehosting, emulation, and analysis[END_REF]. A significant advantage of HIL lies in the high fidelity of the outputs returned to the simulation. The approach nonetheless presents several drawbacks. The devices need a debugging access to control their state; this interface often requires reverse engineering efforts to identify, and may be intentionally disabled by the manufacturer. Moreover, the execution overhead introduced by forwarding accesses can significantly impact the feasibility of the analysis because of speed and latency issues [START_REF] Mohammadjavad | Charm: Facilitating dynamic analysis of device drivers of mobile systems[END_REF]. Although the analysis remains limited by the number of physical devices, various optimization techniques such as caching [START_REF] Kammerstetter | Embedded security testing with peripheral device caching and runtime program state approximation[END_REF] and concurrent execution [START_REF] Liu | Mousse: a system for selective symbolic execution of programs with untamed environments[END_REF] can be used to improve its scalability.

Emulation

Emulation is the process of mimicking the behavior of a system by using another system.
The emulator acts as a translation layer between the software to execute and the current hardware, allowing the software to run as if it were on its original hardware [START_REF] Wind | [END_REF][START_REF] Bellard | QEMU, a fast and portable dynamic translator[END_REF]. Emulation is particularly useful for security analysis because the original environment may not be suitable for deep analysis [START_REF] Dolan-Gavitt | Repeatable reverse engineering with PANDA[END_REF][START_REF] Henderson | Decaf: A platform-neutral wholesystem dynamic binary analysis platform[END_REF]. It enhances system observability and introspection by making it possible to inspect and modify the sequence of states the software passes through. However, the sheer variety of hardware makes it difficult to create a versatile emulator that would support all existing devices. This is even more true in the context of embedded system security, where peripherals are often custom and proprietary, leaving little hope of accessing any public documentation. For this reason, recent research focuses on techniques that emulate an appropriate environment for the execution of a given firmware. This technique is called rehosting [START_REF] Fasano | Enabling Security Analyses of Embedded Systems via Rehosting[END_REF][START_REF] Wright | Challenges in firmware rehosting, emulation, and analysis[END_REF].

The avatar2 framework

The avatar2 framework [START_REF] Muench | Avatar 2: A multi-target orchestration platform[END_REF] is the worthy successor of Avatar [START_REF] Zaddach | AVATAR: A Framework to Support Dynamic Security Analysis of Embedded Systems' Firmwares[END_REF]. It is a tool developed to facilitate the integration and interoperability of various binary analysis tools such as debuggers, emulators, disassemblers, symbolic execution engines and fuzzers. The framework is particularly aimed at analyzing embedded systems and their firmware, as it allows for the combination of physical devices with emulators in a hardware-in-the-loop fashion. This allows the application of traditional software security testing techniques to complex firmware, which would not otherwise be possible. Additionally, avatar2 provides fine-grained control over the program execution. It allows live migration of a program between analysis tools and forwarding of special accesses, such as memory and I/O, to other analysis tools for hybrid execution. Avatar2 has been used in several security research works [START_REF] Clements | HALucinator: Firmware Re-hosting Through Abstraction Layer Emulation[END_REF][START_REF] Gustafson | Toward the analysis of embedded firmware through automated re-hosting[END_REF][START_REF] Hernandez | FIRMWIRE: Transparent dynamic analysis for cellular baseband firmware[END_REF][START_REF] Maier | Unicorefuzz: On the viability of emulation for kernelspace fuzzing[END_REF][START_REF] Scharnowski | Fuzzware: Using Precise MMIO Modeling for Effective Firmware Fuzzing[END_REF][START_REF] Spensky | Conware: Automated modeling of hardware peripherals[END_REF].

Rehosting

Firmware rehosting is the process of creating a virtual environment in which a firmware can be run as if it were in its original physical environment. This allows the application of general dynamic analysis techniques for security testing such as debugging, tracing, fuzz testing and symbolic execution.
While there has been significant progress in this area in recent years, many challenges remain around obtaining the firmware image and executing it [START_REF] Fasano | Enabling Security Analyses of Embedded Systems via Rehosting[END_REF][START_REF] Wright | Challenges in firmware rehosting, emulation, and analysis[END_REF]. The firmware image acquisition process is not consistent across devices. It may require intercepting updates, exploiting vulnerabilities, de-soldering and dumping memory chips, or connecting to a debug interface, and the analyst may be confronted with protections, such as an encrypted image or hardware memory read-protection, that require invasive attacks. To proceed, it is necessary to identify the ISA used by the firmware (e.g., ARM, MIPS, PowerPC, AVR, MSP430). Additionally, the emulator should be able to determine and model the peripherals used by the firmware. The approach chosen for rehosting a firmware depends on the individual case, as firmware, peripherals, and environments can vary significantly.

Fuzzing

Fuzz testing, or fuzzing, is a technique used to discover unexpected or erroneous behavior in a system by repeatedly feeding it modified inputs. It involves executing a large number of inputs and collecting feedback in order to mutate them. Early fuzzers [START_REF] Eddington | Peach fuzzer[END_REF][START_REF] Barton P Miller | An empirical study of the reliability of UNIX utilities[END_REF][START_REF] Portnoy | Sulley fuzzing framework[END_REF] used a blackbox testing approach to execute a target as often as possible. However, their lack of introspection hindered their ability to thoroughly explore the target code past conditional statements [START_REF] Godefroid | Random testing for security: blackbox vs. whitebox fuzzing[END_REF]. This led to the development of novel whitebox [START_REF] Cadar | Klee: unassisted and automatic generation of high-coverage tests for complex systems programs[END_REF][START_REF] Godefroid | DART: Directed automated random testing[END_REF][START_REF] Godefroid | Automated Whitebox Fuzz Testing[END_REF][START_REF] Stephens | Driller: Augmenting fuzzing through selective symbolic execution[END_REF] and greybox [START_REF] Schumilo | kAFL: Hardware-Assisted Feedback Fuzzing for OS Kernels[END_REF][168,194] fuzzers, which instrument the target to collect feedback and produce better inputs. Originally developed to find security bugs in software, fuzzing has evolved into a more general approach that can be used to explore the different possible states of a system [START_REF] Böhme | Directed greybox fuzzing[END_REF][START_REF] Ragab | BugsBunny: Hopping to RTL Targets with a Directed Hardware-Design Fuzzer[END_REF]. Fuzzing has attracted considerable interest in software testing and vulnerability research because of its efficiency in finding bugs [START_REF] Fioraldi | Dissecting American Fuzzy Lop-A FuzzBench Evaluation[END_REF]. Yet establishing good methodologies and metrics to compare fuzzing techniques and algorithms remains a challenge for researchers [START_REF] Fioraldi | LibAFL: A Framework to Build Modular and Reusable Fuzzers[END_REF]. For this reason, Google proposes the FuzzBench [START_REF] Metzman | FuzzBench: An Open Fuzzer Benchmarking Platform and Service[END_REF] service to evaluate and compare fuzzers against a set of benchmarks.
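To fix ideas, here is a minimal blackbox mutational fuzzing loop in the spirit of those early fuzzers; ./target and the seed input are placeholders, and a real fuzzer would add coverage feedback, corpus management and crash triage.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Flip a handful of random bits in the buffer. */
static void mutate(unsigned char *buf, size_t len)
{
    for (int i = 0; i < 8; i++)
        buf[rand() % len] ^= (unsigned char)(1u << (rand() % 8));
}

int main(void)
{
    unsigned char seed[] = "GET /index.html HTTP/1.0\r\n\r\n";
    size_t len = sizeof(seed) - 1;

    for (int iter = 0; iter < 100000; iter++) {
        unsigned char input[sizeof(seed)];
        memcpy(input, seed, sizeof(seed));
        mutate(input, len);

        FILE *f = fopen("input.bin", "wb");
        fwrite(input, 1, len, f);
        fclose(f);

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: run the (placeholder) target on the mutated input. */
            execl("./target", "target", "input.bin", (char *)NULL);
            _exit(127);
        }
        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))  /* crash observed on this input */
            printf("crash (signal %d) at iteration %d\n",
                   WTERMSIG(status), iter);
    }
    return 0;
}
```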
Embedded systems pose additional challenges for fuzz testing [START_REF] Muench | What You Corrupt Is Not What You Crash: Challenges in Fuzzing Embedded Devices[END_REF][START_REF] Salehi | Discovery and Identification of Memory Corruption Vulnerabilities on Bare-metal Embedded Devices[END_REF][START_REF] Yun | Fuzzing of Embedded Systems: A Survey[END_REF]. Their limited resources make it difficult to run many inputs efficiently [START_REF] Börsig | Fuzzing framework for ESP32 microcontrollers[END_REF]. Furthermore, the lack of introspection into their internals can hinder the ability to collect data and provide meaningful feedback for selecting interesting inputs to mutate. Additionally, embedded systems often have complex interactions with their environment, making it difficult to correctly identify their inputs [START_REF] Feng | P2im: Scalable and hardwareindependent firmware testing via automatic peripheral interface modeling[END_REF][START_REF] Mera | DICE: Automatic emulation of dma input channels for dynamic firmware analysis[END_REF][START_REF] Redini | Karonte: Detecting insecure multi-binary interactions in embedded firmware[END_REF][START_REF] Scharnowski | Fuzzware: Using Precise MMIO Modeling for Effective Firmware Fuzzing[END_REF][START_REF] Spensky | Conware: Automated modeling of hardware peripherals[END_REF]. Testing on real hardware requires a method to properly reset the system between runs [START_REF] Corteggiani | HardSnap: Leveraging hardware snapshotting for embedded systems security testing[END_REF]. There is also the risk of bricking the system, potentially posing a danger to the physical world and humans. Finally, as highlighted by Muench et al. [START_REF] Muench | What You Corrupt Is Not What You Crash: Challenges in Fuzzing Embedded Devices[END_REF], silent memory corruption can further complicate the detection of crashes.

Symbolic & concolic execution

Symbolic execution is a technique used to explore the possible states of a program by executing it symbolically. This method involves replacing certain inputs and variables during execution with symbolic expressions. As the program is executed, constraints are placed on these symbolic expressions. Solvers are then used to determine whether all the constraints are satisfiable and, if so, to generate an input that can reach those states in the program. For each conditional statement in the program, the symbolic engine typically forks execution to follow both paths. The number of paths to explore therefore grows exponentially over time and may make the analysis impractical for larger programs. This problem is referred to as path explosion. To address this issue, different approaches try to combine symbolic execution with concrete execution. Concolic execution [START_REF] Godefroid | DART: Directed automated random testing[END_REF][START_REF] Sen | CUTE: A concolic unit testing engine for C[END_REF] uses a concrete input to guide the symbolic execution. This approach was later improved with the help of fuzz testing [START_REF] Stephens | Driller: Augmenting fuzzing through selective symbolic execution[END_REF]. Differently, selective symbolic execution [START_REF] Chipounov | Selective symbolic execution[END_REF] limits the symbolic execution to a specific part of a program. This is especially useful when the program is composed of elements not relevant to the analysis.
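As a toy example of how such path constraints accumulate, consider the function below with a symbolic input x; the numbers are arbitrary.

```c
/* With x treated as symbolic, an engine forks at the branch:
 *   path A: constraint 2*x + 3 == 45 -> the solver yields x == 21,
 *           a concrete input that reaches the "interesting" state;
 *   path B: constraint 2*x + 3 != 45 -> any other value of x.
 */
int check(int x)
{
    int y = 2 * x + 3;
    if (y == 45)
        return 1;  /* reachable only when x == 21 */
    return 0;
}
```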
Symbolic execution has been used to analyze firmware. Studies such as [START_REF] Davidson | FIE on Firmware: Finding Vulnerabilities in Embedded Systems Using Symbolic Execution[END_REF][START_REF] Hernandez | Firmusb: Vetting usb device firmware using domain informed symbolic execution[END_REF][START_REF] Shoshitaishvili | Firmalice-automatic detection of authentication bypass vulnerabilities in binary firmware[END_REF][START_REF] Zaddach | AVATAR: A Framework to Support Dynamic Security Analysis of Embedded Systems' Firmwares[END_REF] have demonstrated its effectiveness in identifying vulnerabilities in firmware. In particular, Inception [START_REF] Corteggiani | Inception: System-wide security testing of real-world embedded systems software[END_REF] discovered a vulnerability in a bootloader before it was written to the mask ROM. Moreover, the framework highlights the challenges of applying symbolic execution to firmware that contains both low-level and high-level code, resulting in different levels of semantic information. More recently, work has been pushed to improve the execution speed of symbolic engines. QSYM [START_REF] Yun | QSYM: A practical concolic execution engine tailored for hybrid fuzzing[END_REF] introduces the idea of replacing the symbolic interpreter by instrumenting the execution with native machine code. Follow-up approaches [START_REF] Coppa | SymFusion: Hybrid Instrumentation for Concolic Execution[END_REF][START_REF] Poeplau | Symbolic execution with SymCC: Don't interpret[END_REF][START_REF] Poeplau | SymQEMU: Compilationbased symbolic execution for binaries[END_REF] embed the concolic execution in the binary code with the help of compilers to drastically improve the execution speed.

Multi-variant execution environment

Multi-Variant eXecution (MVX) systems are used to prevent exploits by executing multiple variants of the same program with the same inputs [START_REF] Emery | DieHard: Probabilistic memory safety for unsafe languages[END_REF][START_REF] Cox | N-Variant Systems: A Secretless Framework for Security through Diversity[END_REF][START_REF] Hosek | Varan the unbelievable: An efficient nversion execution framework[END_REF][START_REF] Koning | Secure and efficient multi-variant execution using hardware-assisted process virtualization[END_REF][START_REF] Volckaert | Secure and efficient application monitoring and replication[END_REF][START_REF] Volckaert | GHUMVEE: efficient, effective, and flexible replication[END_REF]. The goal is to detect any divergence, discrepancy, or difference in the execution of the variants, which would indicate that the program has been exploited. To do this, the variants run on the same machine and are synchronized at the system call interface. The MVX monitor, responsible for managing the variants, may be implemented as a loadable kernel module (LKM) [START_REF] Cox | N-Variant Systems: A Secretless Framework for Security through Diversity[END_REF][START_REF] Volckaert | Secure and efficient application monitoring and replication[END_REF] or run in user mode [START_REF] Emery | DieHard: Probabilistic memory safety for unsafe languages[END_REF][START_REF] Hosek | Varan the unbelievable: An efficient nversion execution framework[END_REF], trading instrumentation context against execution overhead. The main challenges in multi-variant execution are the methods of synchronization and the strategy used to handle variant discrepancies.
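Since MVX monitors, like the forwarding techniques studied in the following chapters, hinge on hooking the system call interface, a minimal user-mode interception sketch based on ptrace is shown below; it assumes an x86-64 target and merely logs the intercepted call numbers, where a monitor would compare, rewrite or forward them.

```c
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child == 0) {
        /* Child: request tracing, then execute the target program. */
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execl("/bin/ls", "ls", (char *)NULL);
        _exit(127);
    }

    int status;
    waitpid(child, &status, 0);  /* child stops on execve */
    for (;;) {
        /* Resume the child until the next system call entry or exit. */
        ptrace(PTRACE_SYSCALL, child, NULL, NULL);
        waitpid(child, &status, 0);
        if (WIFEXITED(status))
            break;

        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, child, NULL, &regs);
        /* On x86-64, orig_rax holds the system call number. A monitor
         * could decide here to emulate, deny or forward the call. */
        printf("syscall %lld\n", (long long)regs.orig_rax);
    }
    return 0;
}
```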
Chapter 3: State of the art

In this chapter, we present a comprehensive overview of the current state-of-the-art techniques in hardware-in-the-loop (HIL) security testing and rehosting. We begin by examining the various publications that include physical devices in their security analysis. We then delve into the challenges of rehosting Linux-based firmware and provide an in-depth examination of the existing approaches and their shortcomings. Starting from the observation that current methods are limited in reproducing the original firmware environment, we also explore interception mechanisms for the system call interface. Finally, we conclude with the promise this interface holds for rehosting processes.

Survey on embedded device security testing using HIL

In this section, we survey papers related to embedded device security testing using hardware-in-the-loop (HIL) methods. Furthermore, we inspect their artifacts to estimate their reproducibility. Table 3.1 reports all experiments from the surveyed papers, while Table 3.2 presents a summary of their artifacts' status.

Publications

Muench et al. [START_REF] Muench | What You Corrupt Is Not What You Crash: Challenges in Fuzzing Embedded Devices[END_REF] address the state of memory corruption in embedded devices and the lack of mechanisms to mitigate silent memory corruption. For this purpose, they insert multiple vulnerabilities with independent trigger conditions into different classes of embedded systems. They observe behaviors ranging from crashing, rebooting, hanging, and malfunctioning to no effect at all. They propose mitigations against these vulnerabilities and measure their performance costs on fuzz testing. Emulation facilitates the use of generic dynamic analysis techniques on firmware but suffers from limited device support and peripheral emulation. Therefore, many publications explore the idea of dynamically forwarding I/O operations to the physical device to improve emulation. Avatar2 [START_REF] Muench | Avatar 2: A multi-target orchestration platform[END_REF] is a framework written in Python that aims to facilitate the interoperability between different dynamic binary analysis tools. In particular, it offers the ability to use HIL techniques to plug devices into an emulator with the help of a debugger. Three use cases are presented in the publication. The first experiment reproduces the analysis of the HARVEY rootkit, while the second shows the ability to move the firmware execution state between concrete and symbolic execution modes. The third experiment demonstrates the capability of avatar2 to forward peripheral accesses from an emulator to the physical device. This helps to record traces in order to replay and analyze them later without the device. As the ancestor of avatar2, Avatar [START_REF] Zaddach | AVATAR: A Framework to Support Dynamic Security Analysis of Embedded Systems' Firmwares[END_REF] shares similar objectives and characteristics. Its experiments center on three case studies: backdoor detection in a masked-ROM bootloader from a hard drive, vulnerability research in a commercial Zigbee device (the Econotag), and assistance in reverse engineering the GSM stack of a Motorola C118 phone. Prospect [START_REF] Kammerstetter | Prospect: peripheral proxying supported embedded code testing[END_REF] targets embedded Linux systems by intercepting accesses to character devices in the filesystem and forwarding them to the physical device.
The performance of the system is evaluated with strace against an unnamed 324 MHz embedded Linux MIPS system with 16 MiB of RAM. In addition, an undisclosed proprietary fire alarm system is fuzzed as a security audit. The source code of Prospect has not been made public. Charm [START_REF] Mohammadjavad | Charm: Facilitating dynamic analysis of device drivers of mobile systems[END_REF] focuses on device drivers for smartphones. It claims to support four different device drivers on different smartphones: camera and audio for the LG Nexus 5X, GPU for the Huawei Nexus 6P, and IMU sensors for the Samsung Galaxy S7. The experiments address several questions on its feasibility, its performance, and its capability to support dynamic analysis techniques such as interactive debugging, fuzzing, and record-and-replay on a Nexus 5X smartphone. Surrogates [START_REF] Koscher | SURROGATES: Enabling near-real-time dynamic analyses of embedded systems[END_REF] leverages specialized hardware to enable low-latency communication between the emulator and the system under test. It uses a custom FPGA to bridge the device's JTAG interface to the host's PCI Express bus. The implementation uses a Pico Computing E17FX70T with a Xilinx Virtex-5 FX70T FPGA because of its included ready-to-use PCI Express card. Unfortunately, the tool was never released, and the FPGA board is no longer commercially available. The experiments measure Surrogates' performance impact on MMIO forwarding and the ease of porting it to two new target devices. Inception [START_REF] Corteggiani | Inception: System-wide security testing of real-world embedded systems software[END_REF] introduces symbolic execution to embedded systems. Similarly to Surrogates, it includes an FPGA-based debugger to provide high-speed and low-latency access to peripherals, but it differs in interfacing with the host via USB 3 instead of PCI Express. The experiments focus on validating the design, benchmarking the performance, detecting vulnerabilities, and several use cases on proprietary systems. Mousse [START_REF] Liu | Mousse: a system for selective symbolic execution of programs with untamed environments[END_REF] brings selective symbolic execution to environments that are too difficult to emulate because of specific hardware. The proposed system is evaluated on three aspects (performance, code coverage and vulnerability discovery) and against three smartphones: Pixel 3, Nexus 5X and Nexus 5. Pretender [START_REF] Gustafson | Toward the analysis of embedded firmware through automated re-hosting[END_REF] and Conware [START_REF] Spensky | Conware: Automated modeling of hardware peripherals[END_REF] focus on the challenges of automatically modeling hardware peripherals to enable better firmware emulation. Both follow a similar logic: first record traces of peripheral interactions, then use these traces to generate a model, and finally plug the model into an emulator to allow the firmware to execute. Their contributions differ in how they model the peripheral behavior from a recorded trace: Pretender uses machine learning while Conware employs automata representations. Both firmware datasets focus on the 32-bit ARM Cortex-M processor with a wide range of peripherals (e.g., timer, button, GPIO, I2C, USART, radio). Frankenstein [START_REF] Ruge | Frankenstein: Advanced wireless fuzzing to exploit new bluetooth escalation targets[END_REF] takes memory snapshots of the wireless firmware on the device, mainly for Bluetooth and Wi-Fi.
These captures are then patched to ease emulation and to fuzz them efficiently. The framework is used to discover three heap overflow vulnerabilities in implementations of the Bluetooth standard using the CYW20735 evaluation board.

FirmCorn [START_REF] Gui | Firmcorn: Vulnerability-oriented fuzzing of iot firmware via optimized virtual execution[END_REF] is a framework to fuzz IoT firmware. A collection of different firmware contexts is captured from the physical devices to be used as starting points for the fuzzing phase. The experiments target seven routers and three cameras. The authors evaluate multiple aspects of the proposed system, such as accuracy, efficiency, stability and effectiveness.

Incision [START_REF] Sam L Thomas | Cutting through the complexity of reverse engineering embedded devices[END_REF] tackles the challenge of combining static with dynamic analysis to help with the task of reverse engineering complex embedded systems. Execution traces are recorded in order to improve the static firmware analysis. The evaluation targets two physical devices, an LTE baseband unit and an automotive Body Control Module. The artifacts are not available, but at the time of writing, the authors plan to re-implement similar functionality in a new open-source framework called Fugue.

Published artifacts' status

We observe that most publications release the source code of their proposed system and their collected dataset. However, packaging the artifacts within containers or virtual machine images would improve their usability. In addition, the scripts used to process generated data and plot figures for the papers are rarely shared within the artifacts. It is worth noting that an initiative has been created to collect monolithic firmware used in publications: https://github.com/ucsb-seclab/monolithic-firmware-collection

Linux firmware rehosting approaches

As highlighted in the introduction, testing firmware poses a significant challenge: the systems are heterogeneous and operate in constrained environments. A recent trend attempts to rehost firmware to gain better control and deeper introspection on its execution. However, rehosting an entire system may not always be practical. Linux-based firmware is composed of various elements such as the kernel, drivers implemented as loadable kernel modules (LKM), libraries and applications. Previous research has focused on evaluating each of these elements, as shown in Table 3.3.
Table 3.3 compares the surveyed publications along two dimensions: their analysis focus (hardware, kernel, driver, user program or function) and the technique they employ (emulation, HIL or symbolic execution). The publications covered are RevNIC [START_REF] Chipounov | Reverse engineering of binary device drivers with RevNIC[END_REF], SymDrive [START_REF] Matthew J Renzelmann | SymDrive: Testing Drivers without Devices[END_REF], Prospect [START_REF] Kammerstetter | Prospect: peripheral proxying supported embedded code testing[END_REF], Surrogates [START_REF] Koscher | SURROGATES: Enabling near-real-time dynamic analyses of embedded systems[END_REF], Firmalice [START_REF] Shoshitaishvili | Firmalice-automatic detection of authentication bypass vulnerabilities in binary firmware[END_REF], Costin'16 [START_REF] Costin | Automated Dynamic Firmware Analysis at Scale: A Case Study on Embedded Web Interfaces[END_REF], Firmadyne [START_REF] Daming D Chen | Towards Automated Dynamic Analysis for Linux-based Embedded Firmware[END_REF], Charm [START_REF] Mohammadjavad | Charm: Facilitating dynamic analysis of device drivers of mobile systems[END_REF], Firm-AFL [START_REF] Zheng | FIRM-AFL: High-Throughput Greybox Fuzzing of IoT Firmware via Augmented Process Emulation[END_REF], FirmAE [START_REF] Kim | FirmAE: Towards Large-Scale Emulation of IoT Firmware for Dynamic Analysis[END_REF], Mousse [START_REF] Liu | Mousse: a system for selective symbolic execution of programs with untamed environments[END_REF], FirmCorn [START_REF] Gui | Firmcorn: Vulnerability-oriented fuzzing of iot firmware via optimized virtual execution[END_REF], ECMO [START_REF] Jiang | ECMO: Peripheral Transplantation to Rehost Embedded Linux Kernels[END_REF], Jetset [START_REF] Johnson | Jetset: Targeted firmware rehosting for embedded systems[END_REF], FirmGuide [START_REF] Liu | Firmguide: Boosting the capability of rehosting embedded linux kernels through modelguided kernel execution[END_REF], EQUAFL [START_REF] Zheng | Efficient greybox fuzzing of applications in Linux-based IoT devices via enhanced user-mode emulation[END_REF] and FirmSolo [START_REF] Angelakopoulos | FirmSolo: Enabling dynamic analysis of binary Linux-based IoT kernel modules[END_REF].

Table 3.3: Publications addressing Linux-based firmware rehosting.

Rehosting user mode applications

User mode processes are, however, a cornerstone of global system security. They implement most of the application logic and are often on the front line to parse external inputs. Although defense-in-depth mechanisms exist to mitigate the impact of vulnerabilities in user mode processes, they offer a large attack surface. Moreover, their code is often proprietary, contrary to what is found in the kernel and drivers, which makes it arduous to assess their security. This makes it important to be able to test the security of these applications and to understand what they achieve. In this regard, we systematically review the existing approaches in the literature focusing on rehosting and testing Linux processes, and we expose their limitations. In the following Figures 3.1, 3.2, 3.3, 3.4, 3.5 and 3.6, the elements originating from the embedded system are represented in red, while components participating in the emulation are highlighted in yellow and the parts located on the analysis workstation are shown in blue.

Costin et al. [START_REF] Costin | Automated Dynamic Firmware Analysis at Scale: A Case Study on Embedded Web Interfaces[END_REF] developed in 2016 an automated framework to analyze firmware images at scale, and web services in particular.
While their framework describes a pipeline for dynamic analysis, they discuss different approaches to rehost web services. Specifically, the approach they chose to implement executes the relevant programs in a chroot environment of the firmware filesystem. For architectural reasons, everything runs on top of a full-system emulation using a generic operating system. These generic operating systems are pre-compiled Debian images composed of the kernel and its filesystem. The downside of the approach is the excessive execution overhead generated by the combination of full-system emulation with a generic operating system, which is not directly relevant to the analysis. In addition, only the web interfaces are subject to testing, and any interactions with lower abstraction layers, such as the kernel or the hardware peripherals, do not work as on the original device.

Firmadyne [START_REF] Daming D Chen | Towards Automated Dynamic Analysis for Linux-based Embedded Firmware[END_REF] and later on FirmAE [START_REF] Kim | FirmAE: Towards Large-Scale Emulation of IoT Firmware for Dynamic Analysis[END_REF] propose to reduce the execution overhead by booting the firmware filesystem on top of a custom kernel. They argue that high-level behavior from the web services is sufficient to perform their dynamic analysis properly. Their approach describes a pipeline for dynamically analyzing firmware and its web services at scale. After extracting the firmware filesystem, an initial emulation phase records its booting interactions with the network and other system hardware interfaces. These records are then used to infer the expected environment. The firmware is then started in the inferred environment for deeper analysis. The main challenge in this approach is the setup of the environment for the kernel to boot correctly. In particular, failures relate to improper booting sequences, network interfaces expected by the firmware, interactions with non-volatile storage memory (e.g., NVRAM) where configurations are often stored, and various other kernel issues. This approach improves upon the state of the art but still suffers from high execution overhead because of full-system emulation. Furthermore, it may not always be possible to infer the correct environment for all firmware filesystems. Additionally, the interactions with hardware peripherals may not be inferred or supported at all. Despite these limitations, FirmAE has demonstrated encouraging results by improving the original success rate of Firmadyne from 16.28% to 79.36% on their dataset. However, these results may be put into perspective when considering a different dataset [START_REF] Dietz | Firmware re-hosting, an evaluation and verification of FirmAE[END_REF]. Nonetheless, both studies [START_REF] Dietz | Firmware re-hosting, an evaluation and verification of FirmAE[END_REF][START_REF] Kim | FirmAE: Towards Large-Scale Emulation of IoT Firmware for Dynamic Analysis[END_REF] provide valuable insights into the key instrumentations that contribute the most to the success rate of rehosting, namely NVRAM, network and boot arbitration. Their success rate is measured by network reachability, which is tested by launching a ping command against the network services under test. While their evaluations show that this approach is sufficient to find new vulnerabilities in web services, it is unlikely to be sufficient to accurately run more complex dynamic analyses, such as taint analysis, on the different applications.
Furthermore, as demonstrated by Karonte [START_REF] Redini | Karonte: Detecting insecure multi-binary interactions in embedded firmware[END_REF], multi-binary services are widespread in firmware applications.

FirmAFL [START_REF] Zheng | FIRM-AFL: High-Throughput Greybox Fuzzing of IoT Firmware via Augmented Process Emulation[END_REF] aims to fuzz processes from firmware. To improve execution speed, the authors have enhanced Firmadyne by providing the ability to switch a process between user mode and full-system emulation. They refer to their approach as augmented process emulation. The motivation behind the idea is to take advantage of the faster execution speed of process-level emulation while maintaining the accuracy of full-system mode when necessary. This approach addresses the challenges related to process synchronization and migration. However, the work only tackles the execution speed problem; most limitations regarding environment inference and accuracy remain valid for FirmAFL.

EQUAFL [START_REF] Zheng | Efficient greybox fuzzing of applications in Linux-based IoT devices via enhanced user-mode emulation[END_REF] seeks to further improve the execution speed of fuzzing by removing the need to switch to full-system emulation. The approach is twofold. Given a starting fuzzing point in a program, it first records the system environment setup in full-system emulation using Firmadyne. The recorded information is then used to replay the same setup in a chroot environment on the host. Lastly, the target program execution can be resumed in user mode emulation. The recording process involves intercepting system calls of interest and collecting their information for the replay phase. The two most difficult behaviors to reproduce are related to generating appropriate configuration files and handling network interactions. Because the approach builds upon Firmadyne, it inherits similar limitations. In addition, while the execution speed is improved, it comes at the cost of large-scale analysis: the method requires manually unpacking the firmware and identifying environment variables using a decompiler.

FirmCorn [START_REF] Gui | Firmcorn: Vulnerability-oriented fuzzing of iot firmware via optimized virtual execution[END_REF] also focuses on increasing the efficiency of firmware fuzzing. First, the method conducts a static analysis of the firmware programs to identify potentially vulnerable code. This is achieved by measuring the cyclomatic complexity of all functions and counting the number of times each function is called. Then, the program is started on the device to capture its context, which is migrated to the emulator. Three heuristics are used to further improve the emulation and fuzzing execution by replacing certain functions. Unresolved functions are functions that interact with newly dynamically allocated memory; they are emulated using the uClibc implementation. Unnecessary functions are functions not interesting for fuzzing, such as printing and logging. Hardware-specific functions are functions interacting with hardware components, such as NVRAM reads and writes. The main drawback of this approach lies in the fact that it prioritizes optimization for fuzzing at the expense of neglecting certain areas where vulnerabilities may reside. The algorithm for identifying vulnerable code is based on weak metrics. Additionally, the method only considers NVRAM accesses but omits other types of hardware interactions such as character devices.
Overall, the application of this rehosting technique is limited to the specific context of fuzzing and may not be easily adaptable to other scenarios.

PROSPECT [START_REF] Kammerstetter | Prospect: peripheral proxying supported embedded code testing[END_REF] aims to address the limitation of using generic kernels for rehosting in the absence of hardware peripherals. To this end, the paper describes a mechanism for intercepting system calls related to character devices and forwarding them to the physical device for re-execution. The reply is then returned to the process in the emulator. Later on, to reduce the execution overhead caused by forwarding system calls, the approach was improved to incorporate caching of peripheral accesses [START_REF] Kammerstetter | Embedded security testing with peripheral device caching and runtime program state approximation[END_REF]. However, one of the major limitations of this approach lies in the way system calls are intercepted. The interception happens inside the kernel at the filesystem subsystem layer and, as acknowledged by its authors, it relies on the FUSE filesystem framework and its supported system calls. This implies that it is not capable of instrumenting system calls unrelated to character devices. Although this is not the main objective of PROSPECT, it highlights its limitation in terms of interactions with the environment, such as those related to inter-process communication. Moreover, choosing to intercept system calls with a mechanism implemented inside the kernel prevents the use of user mode emulation.

Summary

The complexity of rehosting grows with the number of firmware components chosen. First, application data alone presents limited interest. Usually, it is not located at a single point but spread across the system, which makes its extraction cumbersome. It can be stored in different memory types (e.g., NVRAM, ROM), in files in the filesystem or even embedded in binaries. This also greatly limits the scope of analysis and vulnerability research. These are the reasons why current approaches prefer to focus on more comprehensive solutions.

Processes offer more interest for analysis, but migrating them from the device to the emulator can be challenging. This is because certain process resources, such as TCP connections and devices, are not part of the process itself but kernel abstractions. Rehosting the firmware's filesystem provides a more accurate emulation of the original environment. This eliminates the need for emulating all process interactions, such as loading dynamic libraries, opening configuration files or starting new programs.

During the boot process, the kernel is responsible for initializing the hardware components. Unlike the x86 architectures, which use the ACPI standard to dynamically discover and configure hardware components, many other architectures use a static device tree. This means that the kernel parses this data structure and, if errors are encountered, the kernel fails to boot. The vast variety of existing hardware components and peripherals makes it impossible for an emulator to support them all [START_REF] Fasano | Enabling Security Analyses of Embedded Systems via Rehosting[END_REF].
Therefore, some works have focused on inferring the expected hardware or on guiding the kernel booting process [START_REF] Angelakopoulos | FirmSolo: Enabling dynamic analysis of binary Linux-based IoT kernel modules[END_REF][START_REF] Jiang | ECMO: Peripheral Transplantation to Rehost Embedded Linux Kernels[END_REF][START_REF] Johnson | Jetset: Targeted firmware rehosting for embedded systems[END_REF][START_REF] Liu | Firmguide: Boosting the capability of rehosting embedded linux kernels through modelguided kernel execution[END_REF]. Others [START_REF] Daming D Chen | Towards Automated Dynamic Analysis for Linux-based Embedded Firmware[END_REF][START_REF] Costin | Automated Dynamic Firmware Analysis at Scale: A Case Study on Embedded Web Interfaces[END_REF][START_REF] Kim | FirmAE: Towards Large-Scale Emulation of IoT Firmware for Dynamic Analysis[END_REF][START_REF] Zheng | FIRM-AFL: High-Throughput Greybox Fuzzing of IoT Firmware via Augmented Process Emulation[END_REF] have instead chosen to replace the firmware kernel with one that is compatible with the emulator. This makes it easier to modify the kernel to add instrumentation. However, this approach presents new challenges, as programs expect a specific environment to execute properly, such as specific network interfaces.

The execution mode balances the trade-off between emulation speed and accuracy. By statically recompiling the application to the host ISA, the burden of emulating a different ISA is removed, resulting in increased execution speed as the program runs natively on the host. However, this requires either access to the source code of the application and a compatible toolchain, or the ability to correctly decompile the binary. The latter is still an ongoing research problem, because it can be difficult for certain ISAs to properly distinguish code from data in the disassembly [START_REF] Andriesse | An In-Depth Analysis of Disassembly on Full-Scale x86/x64 Binaries[END_REF]. User mode emulation allows for the execution of foreign-architecture binaries at the cost of slower execution. It is often used in conjunction with chroot to recreate the firmware filesystem environment. However, it lacks most of the hardware interactions. Full-system emulation, in contrast, emulates the hardware behavior to enable execution of programs that interact directly with the hardware logic, such as kernels and drivers. However, it is entirely dependent on emulator support. For example, QEMU does not support all CPU architectures and does not implement all existing peripherals and platforms. In addition, peripheral hardware is often custom designed, and the documentation is rarely publicly available, making emulation more difficult without significant reverse-engineering efforts [START_REF] Fasano | Enabling Security Analyses of Embedded Systems via Rehosting[END_REF]. A workaround consists of combining the emulation with the physical device to forward hardware accesses.

Contrary to rehosting approaches focusing on the kernel, drivers and peripherals, as presented in Table 3.3, to the best of our knowledge, no firmware rehosting methods that focus on user programs make use of symbolic models. Each of the approaches makes different trade-offs to enable rehosting of various parts of the embedded system.
They are summarized in Table 3.4.

Publication        Technique
Firmadyne [START_REF] Daming D Chen | Towards Automated Dynamic Analysis for Linux-based Embedded Firmware[END_REF]    Environment inferring
FirmAE [START_REF] Kim | FirmAE: Towards Large-Scale Emulation of IoT Firmware for Dynamic Analysis[END_REF]    Environment inferring
FirmAFL [START_REF] Zheng | FIRM-AFL: High-Throughput Greybox Fuzzing of IoT Firmware via Augmented Process Emulation[END_REF]    Shared memory
EQUAFL [START_REF] Zheng | Efficient greybox fuzzing of applications in Linux-based IoT devices via enhanced user-mode emulation[END_REF]
FirmCorn [START_REF] Gui | Firmcorn: Vulnerability-oriented fuzzing of iot firmware via optimized virtual execution[END_REF]    Process migration
PROSPECT [START_REF] Kammerstetter | Prospect: peripheral proxying supported embedded code testing[END_REF]    System call caching [START_REF] Kammerstetter | Embedded security testing with peripheral device caching and runtime program state approximation[END_REF]

Table 3.4: Summary of user mode rehosting approaches.

System call interception

Earlier, we examined various techniques for rehosting programs in user mode. The primary interface that separates processes from the kernel is system calls. In this Section, we provide background information and explore motivations for system call interception. In particular, we shed light on the inner workings of Linux system calls and present reasons for their interception (i.e., tracing, filtering, instrumentation, and forwarding). Next, we present in more detail the common interception techniques used in Linux-based systems. Table 3.6 summarizes their main characteristics.

Motivation

A modern computing system typically consists of a kernel and a user space. The privileged operating system runs in kernel mode, while user programs reside in user space. Generally, user mode applications interact with the kernel via the system call interface. Hence, whenever a user program requires a service provided by the kernel, it initiates a system call. The Linux kernel follows the "everything is a file" philosophy, and therefore many system calls are dedicated to files and filesystem management. Other examples of services exposed via system calls are address space management, process management, signal handling, and ISA-specific services. Table 3.5 depicts examples of these system calls.

User mode applications rarely need to issue system calls directly; they are abstracted away by the standard system libraries (e.g., libc, libstdc++, libm, libpthread). When an application issues a system call, the execution is usually transferred to the kernel via architecture-dependent instructions. For instance, Linux on x86 architectures uses software interrupts for 32-bit programs (int 0x80) and a dedicated fast system call instruction for 64-bit programs (syscall), while resorting to exceptions on the ARM (svc) and MIPS (syscall) architectures. The execution context of the user process is saved. The kernel then determines the number of the requested system call, usually stored in a pre-defined register. This system call number is then used as an index into the kernel's internal system call table, which stores pointers to the individual system call handlers. Once the handler finishes, the process context is restored and a single return value is passed on success. Conversely, a failure is typically indicated by a return value of -1, together with errno, a special variable indicating the type of error.
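To make this interface concrete, the following minimal C sketch issues system calls through the generic syscall(2) wrapper and inspects errno on failure; the file path used here is purely illustrative.

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    /* Issue getpid directly through its system call number. */
    long pid = syscall(SYS_getpid);
    printf("pid: %ld\n", pid);

    /* A failing system call: the wrapper returns -1 and sets errno. */
    if (syscall(SYS_openat, AT_FDCWD, "/nonexistent", O_RDONLY) == -1)
        printf("openat failed: %s (errno=%d)\n", strerror(errno), errno);
    return 0;
}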
OS subcomponent       Example system calls
Files                 open, read, write, close
Address space         brk, mmap, mprotect
Process management    fork, execve, wait4
Signal handling       kill, sigaction, sigreturn
ISA-specific          arch_prctl (x86), cacheflush (ARM, MIPS)

Table 3.5: Examples of system calls exposed by the kernel subcomponents.

Tracing

The least invasive form of system call interception is tracing: a technique that logs information about the executed system calls, namely the arguments, the return value, the potential errors and sometimes also information about the process context, such as the stack trace. Essentially, the resulting traces are recordings of the interaction between the programs and the kernel, and give deep insights into the programs' behavior. Its applications are manifold and range from simple debugging to the analysis of complex malware and profiling.

While naïve tracers just log the executed system calls and their arguments, the need for more advanced traces is indisputable. A variety of system calls have pointers to locations in user space memory as arguments, and without logging those memory contents, information is lost. Furthermore, additional information about the state of the process when issuing the system call, such as the content of the stack, may be relevant for the analysis. However, tracing can lead to considerable performance overhead, as dereferencing and copying memory is slow, and system calls are issued frequently by most software. As a result, developing system call tracing engines requires trade-offs between performance and the granularity of the retrieved information.

On Linux, the most established system call tracer is probably strace [14], which uses ptrace, a Unix system call enabling debugging and tracing of processes. While strace provides thorough information about the executed system calls, a process can identify whether it is ptraced by querying the Linux kernel, and adversarial software, such as malware, typically changes its behavior if this is the case. Hence, other solutions for system call tracing are usually used in this context. For instance, the malware analysis framework Ninja [START_REF] Ning | Ninja: Towards Transparent Tracing and Debugging on ARM[END_REF] uses the Embedded Trace Macrocell, a hardware feature of ARM CPUs, while Cozzi et al. [START_REF] Cozzi | Understanding linux malware[END_REF] used a modified version of SystemTap [START_REF] Jacob | SystemTap: instrumenting the Linux kernel for analyzing performance and functional problems[END_REF] to trace system calls for the analysis of Linux malware.
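As an illustration of the mechanism underlying strace, the sketch below shows the core loop of a minimal ptrace-based tracer. It assumes x86-64 (the system call number lives in orig_rax; register layouts differ on ARM and MIPS) and omits error handling.

#include <stdio.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>

int main(void) {
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);   /* ask to be traced */
        execlp("ls", "ls", NULL);                /* example target */
    }
    int status;
    waitpid(child, &status, 0);                  /* stopped at execve */
    while (!WIFEXITED(status)) {
        /* Run until the next system call boundary: the tracer is woken
         * twice per call, so entry and exit stops alternate. */
        ptrace(PTRACE_SYSCALL, child, NULL, NULL);
        waitpid(child, &status, 0);
        if (WIFSTOPPED(status)) {
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, child, NULL, &regs);
            printf("syscall %lld\n", (long long)regs.orig_rax);
        }
    }
    return 0;
}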
Instrumentation

There are several use cases for the instrumentation of system calls and their arguments or return values. For instance, modifying the arguments passed to the kernel can help during debugging and analysis of a program, as it allows the modification of the program's behavior. More common, however, is the modification of system call return values. In particular, changing the return value can exercise different paths in the program, which is useful during dynamic testing. For instance, the purposeful injection of error return codes helps identify bugs and vulnerabilities in programs [START_REF] Woodruff | How to write a rootkit without really trying[END_REF][START_REF] Zhang | Maximizing error injection realism for Chaos engineering with system calls[END_REF]. Furthermore, instrumentation of system calls is often used by malicious software. Rootkits frequently instrument system call arguments and return values, for instance to modify the behavior of other userland processes or to evade malware detection systems.

Filtering & Sandboxing

System calls are the main interface for processes to modify the state of the system. Hence, filtering out malicious or unwanted system calls is a central goal in any sandboxing approach. Moreover, filtering system calls can also be a component of the principle of least privilege: not all userland processes require access to all implemented system calls. Therefore, by restricting a process to the system calls it is supposed to need, the attack surface of the kernel is limited. The seccomp subsystem is a popular mechanism for system call filtering on Linux. It can be accessed through the prctl system call and allows a process to be placed in "secure computing mode", which only allows a limited subset of system calls. The seccomp-bpf extension offers more precise control over system calls by specifying policies through Berkeley Packet Filter (BPF) rules. In addition to actions such as killing the process, this extension allows for the handling of system call events in user space, through methods such as sending a signal or via a ptrace-based tracer.
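The following sketch installs a minimal seccomp-bpf policy that rejects every mkdir(2) with EPERM and lets everything else through. A production filter would also validate the architecture field of seccomp_data; this is omitted here for brevity.

#include <stdio.h>
#include <stddef.h>
#include <errno.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

int main(void) {
    struct sock_filter filter[] = {
        /* Load the system call number from struct seccomp_data. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        /* If it is mkdir, return EPERM to the caller... */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_mkdir, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | EPERM),
        /* ...otherwise let the system call through. */
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };
    /* Mandatory before installing a filter without CAP_SYS_ADMIN. */
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);

    if (mkdir("/tmp/seccomp-demo", 0755) == -1)
        perror("mkdir");   /* expected: Operation not permitted */
    return 0;
}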
Forwarding

An emerging trend for system call interception is forwarding system calls from one system to another. As a result, user space applications do not interact with their native kernel, but instead with a foreign one. While this may appear non-intuitive at first, it is the underlying concept of popular user mode emulators, such as qemu-user [START_REF] Bellard | QEMU, a fast and portable dynamic translator[END_REF]. Furthermore, an emerging use case for system call forwarding is the analysis of Linux-based embedded devices, as demonstrated with PROSPECT [START_REF] Kammerstetter | Embedded security testing with peripheral device caching and runtime program state approximation[END_REF][START_REF] Kammerstetter | Prospect: peripheral proxying supported embedded code testing[END_REF] and FirmAFL [START_REF] Zheng | FIRM-AFL: High-Throughput Greybox Fuzzing of IoT Firmware via Augmented Process Emulation[END_REF]. The key idea is that the resource-intensive analysis of user space applications is carried out on an analysis host, while system calls interacting with the hardware of the device are forwarded. Another notable example of system call forwarding, although not targeting Linux system calls, is the framework by Martignoni et al. [START_REF] Martignoni | A framework for behavior-based malware analysis in the cloud[END_REF]. It forwards and executes a subset of Windows system calls from a user host to an analysis host in the cloud to facilitate advanced runtime malware analysis. Distributed operating systems also rely on forwarding mechanisms to seamlessly continue the execution of migrated processes across different nodes within a cluster [START_REF] Barak | Scalable cluster computing with MOSIX for Linux[END_REF][START_REF] Lottiaux | OpenMosix, OpenSSI and Kerrighed: a comparative study[END_REF][START_REF] Vallee | Process migration based on gobelins distributed shared memory[END_REF][START_REF] Van Hensbergen | Grave Robbers from Outer Space: Using 9P2000 Under Linux[END_REF].

Main system call interception techniques

The need for observing, instrumenting, filtering, and forwarding system calls has led to a variety of techniques, each with its advantages and drawbacks. Although system call interception techniques differ widely in the way they operate and the capabilities they provide, they all have in common that the execution of a hook comes alongside the execution of the actual system call. Hence, for the sake of readability, we will refer to the different interception techniques as hooking techniques, regardless of the goal and motivation of the interception. Generally speaking, hooking techniques usually fall into one of the following three categories:

1. User mode-based. These techniques register and execute their hook in user space. While some of them require assistance from the kernel, the core logic of the hook resides in user mode.

2. Kernel-based. In contrast to the user mode-based techniques, the code of the hook is executed in the kernel context.

3. External. Another option is the insertion and execution of hooks outside the operating system. This scenario includes, for instance, hypervisor-based hooking or the usage of special debug interfaces.

While we will use this categorization to structure the remainder of this section, a variety of other properties can be used to characterize a given system call interception technique. For instance, one key property framing the performance, the requirements and the accessible information is the hooking mechanism, i.e., how execution is transferred from entering or exiting a system call to the actual hook. Typically, a hook is reached either via a trampoline or via a trap. The former is a direct transfer of execution to the hook via a jump instruction, whereas a trap uses some kind of interrupt mechanism during the execution, which eventually transfers the control flow to the hook. Other characteristics include whether the hook is inserted statically, e.g., at compile time, or dynamically during program execution, and whether the technique needs to be supported by a given kernel. We will highlight the diversity of system call hooking techniques on Linux by discussing several different techniques. While the list of presented techniques is not exhaustive, it gives a good overview of the current state of the art.

User mode-based

Approaches that implement instrumentation using user mode programs have the advantage of increased stability and portability across different platforms. However, this comes at the cost of reduced stealth, which can be a significant factor in the context of malware analysis.

Library Injection. The principle of this technique is to inject a shared library responsible for instrumentation into another process, either during startup or at run-time. The most famous example of this is probably LD_PRELOAD-based hooking. By exploiting the behavior of the dynamic linker/loader, this technique overrides the function references in the Global Offset Table (GOT) to redirect them to other locations. While this does not give full control and visibility over system calls, it allows the instrumentation of the corresponding wrapper functions of the standard C library. Although the simplicity of this approach is remarkable, it is very prone to missing system calls not issued by the instrumented libraries, and it is not applicable to statically linked binaries. Nonetheless, LD_PRELOAD serves as the building block for other techniques, such as syscall_intercept [START_REF]system call intercepting library[END_REF], which preloads a stub for disassembling the standard C library and replacing system call instructions with a trampoline. A similar, but more sophisticated approach is implemented by Frida [149], which injects a JavaScript interpreter inside a running process to provide various dynamic instrumentation capabilities, including system call interception. The script has full access to memory and to the process' functions, and can instrument its execution. However, the overhead introduced in memory and CPU time is significant [START_REF] Lopez | A Survey on Function and System Call Hooking Approaches[END_REF].
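Coming back to the simplest of these techniques, a minimal LD_PRELOAD interposition library could look as follows. It wraps the C library's open() wrapper rather than the system call itself and resolves the real implementation through dlsym(RTLD_NEXT, ...).

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdarg.h>
#include <stdio.h>
#include <fcntl.h>

/* Hooked open(): logs the path, then delegates to the real libc open(). */
int open(const char *path, int flags, ...) {
    static int (*real_open)(const char *, int, ...);
    if (!real_open)
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

    mode_t mode = 0;
    if (flags & O_CREAT) {        /* the third argument exists only for O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }
    fprintf(stderr, "[hook] open(\"%s\", %d)\n", path, flags);
    return real_open(path, flags, mode);
}

/* Build: gcc -shared -fPIC hook.c -o hook.so -ldl
 * Use:   LD_PRELOAD=./hook.so some_program */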
Binary Rewriting. Essentially, binary rewriting consists of disassembling the binary code, adding instrumentation, and reassembling the modified code before execution. This is performed either offline, using static binary rewriting tools such as Ramblr [START_REF] Wang | Ramblr: Making Reassembly Great Again[END_REF], Retrowrite [START_REF] Dinesh | Retrowrite: Statically instrumenting cots binaries for fuzzing and sanitization[END_REF] and ZAFL [START_REF] Nagy | Breaking through binaries: Compiler-quality instrumentation for better binary-only fuzzing[END_REF], or online during program run-time, which can be achieved with frameworks such as DynamoRIO [START_REF] Bruening | An infrastructure for adaptive dynamic optimization[END_REF] and PIN [START_REF] Luk | Pin: building customized program analysis tools with dynamic instrumentation[END_REF]. While static rewriting techniques are somewhat limited, as they rely on correct disassembly and cannot cope with obfuscated code and packed binaries, dynamic rewriting typically introduces a significant performance overhead during process execution, as instrumentation code is typically added to every basic block. Hence, some systems aim to provide lighter solutions, such as VARAN [START_REF] Hosek | Varan the unbelievable: An efficient nversion execution framework[END_REF], which uses selective binary rewriting where only system call instructions are replaced by either a trap or a trampoline whenever a code page is loaded into memory. While this beats the performance of more traditional binary rewriting frameworks, system calls originating from self-modifying code, as often seen in malware, can be missed by this approach.

Process Trace. One of the most versatile building blocks for user mode-based hooking on Linux is process trace, or, in short, ptrace, which is provided by the kernel via the ptrace system call. Despite its name, ptrace allows one process to introspect and modify the registers and memory contents of another process with the help of the kernel. Additionally, trap handlers for special events such as system calls can be registered. A variety of standard debugging tools are built on top of this facility, such as strace, GDB and ltrace, as well as advanced dynamic analysis tools [START_REF] Volckaert | GHUMVEE: efficient, effective, and flexible replication[END_REF][START_REF] Zheng | DroidTrace: A ptrace based Android dynamic analysis system with forward execution capability[END_REF]. The main drawback of this approach lies in the multiple context switches between the tracer and the tracee, resulting in high execution overhead. It also suffers from TOCTOU attacks [START_REF] Provos | Improving Host Security with System Call Policies[END_REF], which are difficult to resolve due to the lack of atomicity in this interception mechanism.

Seccomp. Like ptrace, seccomp allows registering hooks for system calls with the help of the kernel. In particular, it allows a process to register Berkeley Packet Filter (BPF) rules to match system calls and their arguments. The kernel will evaluate subsequent system calls against this filter and carry out an action encoded in the return value of the filter.
Possible actions include termination of the process, returning an error for the system call, passing control to the program via signals, or notifying a ptrace-based tracer. Although the interception mechanism resides in the kernel, this approach and the ptrace method are classified as user mode-based because the actual system call instrumentation is executed in user space.

Kernel-based

The Linux kernel offers various tracing, hooking, and probing facilities for all sorts of events, including the execution of system calls. A standard way to access these facilities is through Loadable Kernel Modules (LKM), which can be used for flexible instrumentation of user applications from kernel space. By running at a higher privilege level, an LKM can manipulate the target process address space while having access to all kernel data structures and functions, and can perform system call interception transparently to the user program. However, to build such modules, the toolchain and headers originally used to compile the kernel are required, and these are frequently impossible or very difficult to obtain [START_REF] Angelakopoulos | FirmSolo: Enabling dynamic analysis of binary Linux-based IoT kernel modules[END_REF]. In the following, we describe various APIs and techniques accessible from an LKM to enable system call hooking.

Tracepoints. The Linux kernel allows inserting tracepoints in its source code. They can later be hooked with handler functions, called probes, at runtime. A tracepoint can be turned on or off by registering or unregistering the probe. Various tracepoints, including some for system calls, are already present in the Linux kernel and can be accessed via the API function trace_tracepointname() and the TRACE_EVENT() macro. This mechanism introduces only minimal overhead, and probes can be attached without recompiling the kernel code, but it does not allow interception of the system call beyond simple tracing.

Kprobes. Dynamic insertion of hooking points can be achieved with Kprobes, the trap-based dynamic tracing system of the kernel. Kprobes allows placing a software breakpoint almost anywhere in the kernel code [START_REF] King | Linux patch: ARM: probes: avoid adding kprobes to sensitive kernel-entry/exit code[END_REF] and allows the registration of pre-, post- and fault-handler routines. When the breakpoint is hit, the CPU traps into an interrupt context, saves its state and executes the pre-handler instrumentation. Then, it executes the saved instruction and the post-handler function before returning after the breakpoint. As trapping and switching to interrupt context usually come with a significant performance penalty, optimized Kprobes replace the software interrupt instruction with a trampoline when a strict set of requirements is met. The trampoline handler simulates the breakpoint behavior by pushing the CPU's registers onto the stack. Then, the Kprobe handlers and the copied instruction are executed, before the state is restored and execution continues at the original location. For hooking system calls with Kprobes, two strategies are viable: registering a Kprobe for every possible system call routine, or registering a single Kprobe in the system call dispatching function. The former approach is insufficient in the presence of rootkits, as they typically alter the system call table, while the latter has additional hurdles to overcome: as system call dispatching is a critical operation, interrupts may be disabled and the code locations may be excluded from the authorized locations for Kprobes.
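As an illustration, a minimal kernel module registering a Kprobe on a system call handler could look as follows. The probed symbol name is architecture- and version-dependent; __x64_sys_mkdir is used here only as an example for a recent x86-64 kernel.

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/kprobes.h>
#include <linux/sched.h>

static struct kprobe kp = {
    .symbol_name = "__x64_sys_mkdir",   /* example symbol, arch-dependent */
};

/* Executed in interrupt context just before the probed instruction. */
static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
    pr_info("kprobe hit: %s (pid=%d)\n", p->symbol_name, current->pid);
    return 0;
}

static int __init kprobe_demo_init(void)
{
    kp.pre_handler = handler_pre;
    return register_kprobe(&kp);
}

static void __exit kprobe_demo_exit(void)
{
    unregister_kprobe(&kp);
}

module_init(kprobe_demo_init);
module_exit(kprobe_demo_exit);
MODULE_LICENSE("GPL");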
Uprobes. Uprobes allow kernel modules to instrument user space applications by replacing a given instruction with a software breakpoint. The instructions to instrument are identified by tuples composed of the inode of the binary and the offset from its start. When the breakpoint is hit, an additional filter, specified at uprobe creation, examines conditions to be met before transferring control to the hook or continuing execution in user space. While this mechanism is especially useful for user mode tracers such as perf [11] or ftrace [4], it has similar limitations to static binary rewriting, as system call instructions need to be identified before the program starts.

Character device. User space applications often interact with drivers and hardware using special character devices. Starting from this observation, a kernel module can provide a custom character device to cooperating user mode programs in order to intercept file-related system calls [START_REF] Kammerstetter | Prospect: peripheral proxying supported embedded code testing[END_REF][START_REF] Mulliner | Fuzzing the phone in your phone[END_REF]. Although this method has its benefits when the ultimate goal is forwarding interactions with device drivers, it has severe limitations as a generic system call hooking strategy.
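A sketch of this technique is shown below: the module registers a misc character device whose read handler stands in for arbitrary interception or forwarding logic. The device name is arbitrary and the handler merely returns a fixed string.

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>
#include <linux/uaccess.h>

/* read() on /dev/hooked_dev lands here instead of in a real driver,
 * giving the module a chance to log, rewrite or forward the request. */
static ssize_t hooked_read(struct file *f, char __user *buf,
                           size_t len, loff_t *off)
{
    static const char msg[] = "intercepted\n";
    size_t n = min(len, sizeof(msg));

    if (*off > 0)
        return 0;                       /* EOF after the first read */
    if (copy_to_user(buf, msg, n))
        return -EFAULT;
    *off += n;
    return n;
}

static const struct file_operations hooked_fops = {
    .owner = THIS_MODULE,
    .read  = hooked_read,
};

static struct miscdevice hooked_dev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "hooked_dev",              /* appears as /dev/hooked_dev */
    .fops  = &hooked_fops,
};

static int __init chardev_demo_init(void)
{
    return misc_register(&hooked_dev);
}

static void __exit chardev_demo_exit(void)
{
    misc_deregister(&hooked_dev);
}

module_init(chardev_demo_init);
module_exit(chardev_demo_exit);
MODULE_LICENSE("GPL");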
Kernel patching. If all the aforementioned methods are not applicable, a module can always fall back on the most direct hooking technique: overwriting kernel code or data at runtime [START_REF] Cama | Corrupting the ARM exception vector table[END_REF][START_REF]Android platform based linux kernel rootkit[END_REF][START_REF] Mueller | Hooking android system calls for pleasure and benefit[END_REF][START_REF] Talaat | Intercepting system calls and dispatchers[END_REF]. For instance, single entries in the system call table can be changed, the complete system call table can be replaced, or instructions from the dispatching routines can be rewritten. However, this method offers the lowest level of portability and usually requires evading various runtime protections that aim to prevent kernel memory tampering.

External

System call interception from outside the operating system usually has the advantage of being undetectable both by the operating system and by user space. Furthermore, it can lead to better performance and full access to memory and any low-level machine state information. However, most of the time a semantic gap has to be bridged, as internal details of the kernel, such as data structures, are lost.

Hypervisor. System call interception at the hypervisor level is deeply coupled with the virtualization approach used. Some hypervisors rely on CPU hardware virtualization extensions to run a guest operating system. When a sensitive instruction (e.g., a software interrupt for a system call) is issued, the control flow moves to the hypervisor, which then decides what action to take. Indeed, system calls can be intercepted directly by the hypervisor before reaching the guest kernel. This is particularly useful when the instrumentation needs to be undetectable by the program being analyzed, as in malware analysis [START_REF] Dinaburg | Ether: malware analysis via hardware virtualization extensions[END_REF]. On the other hand, when the hypervisor relies on dynamic binary translation to provide a different execution environment, it is possible to rewrite the system calls to include the desired instrumentation. Dedicated hypervisor tools, such as PANDA [START_REF] Dolan-Gavitt | Repeatable reverse engineering with PANDA[END_REF], provide hooks for various events on the guest, including the occurrence of system calls. A third case is illustrated by approaches based on KVM: Dune [START_REF] Belay | Dune: Safe user-level access to privileged CPU features[END_REF] and later ELKM [START_REF] Pester | [END_REF] use a library to convert system calls into hypercalls, which are then instrumented by the hypervisor.

Hardware Tracing. Modern CPUs are oftentimes equipped with specialized tracing capabilities such as Intel's Processor Trace (PT) or ARM's Embedded Trace Macrocell (ETM). Unfortunately, these capabilities do not allow for any kind of system call hooking beyond tracing itself, as the generation of the trace is typically completely decoupled from the execution of the binary. However, systems like Griffin [START_REF] Ge | Griffin: Guarding control flows using intel processor trace[END_REF] demonstrate that they can indeed be used to introduce blocking checks in the event of system calls. Moreover, tracing system calls with these hardware features is especially useful for advanced malware analysis, as illustrated by Ninja [START_REF] Ning | Ninja: Towards Transparent Tracing and Debugging on ARM[END_REF]. One limitation of hardware tracing is its lack of portability: the hardware and software used for hardware tracing are deeply coupled with the architecture. This makes it difficult to apply a generic solution or to compare the behavior of systems that use different architectures, which is particularly problematic given the wide range of architectures employed by embedded systems. A second limitation concerns the limited number of events that can be traced at once. Most tracing solutions have a fixed amount of hardware resources dedicated to tracing, which means that it may not be possible to trace all the events of interest at a very high frequency without discarding old records.

Hardware Debugging. In comparison to tracing, hardware debugging facilities can alter the CPU state and memory content in a way that is completely transparent to the underlying operating system and user space. While this effectively allows large flexibility in system call interception, it requires low-level knowledge about the target operating system. Nonetheless, the viability of this approach for system call interception is shown by OpenST [START_REF] Zheng | On-chip system call tracing: A feasibility study and open prototype[END_REF], which performs system introspection for Linux on ARM using JTAG and OpenOCD. However, retrieving and rebuilding information through a low-speed debug port can introduce high execution overhead. In addition, debugging ports are not always present, accessible, or simply enabled on devices.

Problem statement

The previously discussed methods for rehosting processes introduce promising concepts but, overall, offer a best-effort approach to approximate the minimum requirements needed to run a program. Although these methods have been successful in identifying new vulnerabilities via fuzzing, they are not sufficient for performing more complex analyses of program execution, such as taint tracking and symbolic execution. Furthermore, these methods do not fully reproduce the context of the embedded system and its comprehensive interactions with the environment.
For instance, Firmadyne focuses on the booting process of the firmware image but does not verify the correctness of the rest of the execution, particularly concerning hardware accesses. Moreover, the rehosting method is designed to analyze firmware at scale, and efforts focus on improving the overall success rate over the dataset. PROSPECT, on the other hand, is limited to character device accesses. Therefore, it is crucial to continue proposing new approaches that reproduce environments and interactions faithful to the original devices. In the next chapter, we will examine how different methods of system call instrumentation can be used to rehost processes.

Chapter 4
System call forwarding for Linux processes

Previous research has mainly focused on the ability to boot and run the firmware, with less emphasis placed on its execution accuracy. In this chapter, we start by highlighting the shortcomings of existing approaches through the example of an embedded system presenting a variety of interactions with a rich environment, including a local wireless network and cloud services. We then examine the challenges the process abstraction poses for rehosting and investigate solutions to handle them. Finally, we propose a novel approach for providing partial execution to a process by forwarding its system calls from an emulator to its original environment on a physical device.

Motivation of the approach

4.1.1 Motivational example

We observed that existing approaches primarily focus on extracting the programs from the firmware filesystem and applying best-effort methods to approximate the minimum required for the program to run. While this is sufficient to identify some vulnerabilities, it fails to capture the complete context of the embedded system and lacks comprehensive interactions with the environment, as we will see in the following example.

A program rarely runs alone in a Linux-based system; it is the very purpose of an operating system to share execution time between multiple programs. The rehosting technique implemented by Firmadyne and FirmAE illustrates this by trying to identify all the firmware services and start them properly, but that is not always sufficient to capture all firmware interactions with the rest of its environment. As highlighted before, the original environment is not reproduced accurately: the kernel version is different, which makes it impossible to load kernel modules; some hardware components may not be supported by the emulator; and not all firmware services can be started. Therefore, some key program inputs may be missing.

A representative example of firmware operating in a complex environment is the firmware of the Hue Bridge. Philips Hue is a popular smart home automation system that offers wireless control of lighting devices. It is composed of different types of devices: bulbs and light strips, as well as switches, motion sensors and a bridge. The bridge provides interoperability between the local network and the wireless protocols used by the devices. To enhance the user experience, the system integrates with different cloud services, for instance enabling users to control the lights from their smartphones when outside their home, or to launch voice commands via smart speakers. Figure 4.1 shows a typical Philips Hue home network environment. To facilitate interactions and support different use cases, multiple protocols are supported: Zigbee Light Link, Bluetooth Low Energy and Wi-Fi. The core of the system is the Hue Bridge.
It connects to the router via Ethernet and to the devices via the Zigbee mesh network. In addition to the Zigbee Light Link protocol, some devices support the Bluetooth Low Energy protocol, which allows sending commands directly to the device without the need for a bridge. Figure 4.2 illustrates the different protocols involved in the system.

Beyond its technological choices, the Philips Hue system presents security challenges [START_REF] Alrawi | Sok: Security evaluation of home-based iot deployments[END_REF][START_REF] Morgner | All your bulbs are belong to us: Investigating the current state of security in connected lighting systems[END_REF][START_REF] Eyal Ronen | IoT goes nuclear: Creating a ZigBee chain reaction[END_REF]. Its privacy implications have also been studied. Thiery et al. [START_REF] Thiery | Privacy implications of switching ON a light bulb in the IoT world[END_REF] discussed how the state of lights and other sensors within the house can reveal user habits. By cross-referencing these data with those of their partners, companies can infer broader behavior about their customers. This is all the more true as the Philips Hue system is advertised as being well integrated with other home automation systems, in particular Google Assistant, Amazon Alexa, Apple HomeKit and Microsoft Cortana. In their experiments with multiple devices, Thiery et al. [START_REF] Thiery | Privacy implications of switching ON a light bulb in the IoT world[END_REF] measure that 75.35% of the data is sent to servers located in the US while the device resides in Europe. This raises questions about data sovereignty.

Thiery et al. [START_REF] Thiery | Privacy implications of switching ON a light bulb in the IoT world[END_REF] analyzed the traffic from the network. However, most of the traffic was encrypted, and it was not always possible to set up a man-in-the-middle attack. The authors observed recurrent communication between the bridge and the cloud server (hosted by Google) but, because communications are encrypted, they were unable to analyze them precisely. Nonetheless, they surmised that this traffic is used either to report the light status or as a keep-alive mechanism. This is why it is valuable to gain deep insight into the device's internals. Because the Hue Bridge is at the heart of the whole system, rehosting and instrumenting its programs would let us better understand what kind of information is shared with devices and cloud services.

At the hardware level, the Hue Bridge can be divided into two parts: the Linux system and the Zigbee modem. Communication between both components is handled via a UART and appears as a tty device under the /dev directory: /dev/ttyZigbee. At the software level, the application ipbridge plays a central role in forwarding commands to the Zigbee modem, monitoring the status of the lamps and communicating with cloud services. This illustrates the fact that rehosting only a part of the system, for instance the Linux one, is not sufficient for it to function properly. Therefore, it is crucial for the rehosted application to maintain its relations with its original environment.

Our Approach

Our approach consists of intercepting system calls, deciding whether their execution should be carried out locally or remotely, invoking them on the chosen platform, and returning the results to the application. For certain system calls that modify the process structure, we perform additional operations to keep the representations of the process in the two kernels synchronized. This is particularly the case for the mmap family, which alters the structure of the address space (Figure 4).
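A minimal sketch of such a dispatch loop is given below. The classify(), execute_local() and forward_syscall() helpers are hypothetical placeholders standing in for the policy table, the emulator-side execution and the transport to the physical device; they are not the actual API of our framework.

#include <stdio.h>

enum where { LOCAL, REMOTE, BOTH };

/* Stubs standing in for the real mechanisms (hypothetical). */
static long execute_local(long nr, long args[6])
{
    (void)args; printf("local  syscall %ld\n", nr); return 0;
}
static long forward_syscall(long nr, long args[6])
{
    (void)args; printf("remote syscall %ld\n", nr); return 0;
}

/* Toy policy: syscall 4 touches device hardware, syscall 9 is mmap-like. */
static enum where classify(long nr)
{
    if (nr == 4) return REMOTE;
    if (nr == 9) return BOTH;
    return LOCAL;
}

long dispatch_syscall(long nr, long args[6])
{
    switch (classify(nr)) {
    case REMOTE:      /* hardware-related: run on the device */
        return forward_syscall(nr, args);
    case BOTH:        /* address-space changes: replicate on both kernels */
        forward_syscall(nr, args);
        return execute_local(nr, args);
    default:          /* everything else stays in the emulator */
        return execute_local(nr, args);
    }
}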
We started by observing the typical composition of Linux-based systems. We summarize those components in three layers: the hardware, the kernel and the user space. For a user-mode application, the system call interface is the main point of interaction with the kernel and its hardware resources. It is a public and stable API and ABI. Unlike low-level hardware accesses between the kernel and the hardware, such as I/O operations, system call interactions are more generic and suffer less from latency issues [START_REF] Mohammadjavad | Charm: Facilitating dynamic analysis of device drivers of mobile systems[END_REF]. Furthermore, many techniques exist for instrumenting system calls, as we saw in Section 3.3: tracing, sandboxing, filtering, forwarding, etc. All these characteristics make the system call interface a good candidate for rehosting programs. We have chosen to rely on interception methods that are external to the executing program to avoid the need for modifications in the programs. Thus, the compatible mechanisms are user mode emulation, ptrace, seccomp and other kernel-based interceptions. In our implementation described in Chapter 5, we use QEMU user mode emulation with several Linux namespaces to reproduce the original firmware environment.

Challenges

As seen in the motivation in Section 4.1, existing work provides limited forms of system call forwarding. FirmAFL [START_REF] Zheng | FIRM-AFL: High-Throughput Greybox Fuzzing of IoT Firmware via Augmented Process Emulation[END_REF] shares memory between the two emulators, but system calls are almost always executed in full-system emulation mode. PROSPECT [START_REF] Kammerstetter | Prospect: peripheral proxying supported embedded code testing[END_REF] can forward system calls to the physical device, but its interception mechanism lies in the FUSE filesystem, limiting it to a certain category of system calls. In this Section, we look at the different challenges faced when forwarding system calls between two separate kernels.

The process duality: between user and kernel mode

A process is composed of elements in both user and kernel mode, and its execution is divided between the two. In user mode, application code executes together with its linked libraries, while the kernel maintains a thread for each user process. When a system call is invoked, the control flow jumps to its associated thread, where the user context is saved and further operations are executed on behalf of the process. The kernel maintains several structures for the process. The hierarchy and relations between processes are stored in the kernel process table. The kernel maintains multiple tables for file management: the file descriptor table, the open file table and the inode table. Inter-process communication (IPC) mechanisms are partly controlled by the kernel and partly by the user process. For instance, the process registers handlers in the kernel to properly receive signals [13]. Shared memory lets processes create a bridge between different address spaces. To access user space memory safely, the kernel uses a dedicated API (defined in the uaccess.h headers). Table 4.1 shows the main functions of this interface. Therefore, from user mode, the process does not have full control over its state and the kernel resources it uses.
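For illustration, kernel code servicing a system call moves data across the user/kernel boundary with this API along the lines of the following sketch, which is not taken from the actual kernel sources.

#include <linux/uaccess.h>
#include <linux/errno.h>

/* Sketch of a write()-style handler: the kernel never dereferences the
 * user pointer directly but copies through the uaccess API instead. */
static long example_write(const char __user *ubuf, size_t len)
{
    char kbuf[128];

    if (len > sizeof(kbuf))
        len = sizeof(kbuf);
    /* copy_from_user() returns the number of bytes it could NOT copy. */
    if (copy_from_user(kbuf, ubuf, len))
        return -EFAULT;
    /* ... operate on kbuf on behalf of the process ... */
    return len;
}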
Filtering

Forwarding all system calls to the remote kernel is not always necessary, which opens optimization opportunities. In addition, as we will see in the rest of the chapter, certain system calls may need to be executed on both kernels to keep the process structures synchronized. To decide which system call to execute, and where to execute it, a method for filtering system calls and the data they carry in their arguments is necessary.

Classification

The two most crucial aspects of forwarding system calls are the information that needs to be transferred and the operations required to keep processes synchronized. Most existing classifications are based on the function of the system call, i.e., the type of operations the system call executes and the subsystem it interacts with. However, these groupings do not consider the interface in itself and the information transferred between user and kernel modes. There are mainly two types of arguments:

• Direct variables: integers where all the information is contained in their value (e.g., int option, int pid, uid_t uid, size_t len, off_t offset, unsigned int fd, int flags, umode_t mode),
• Indirect variables: pointers to memory blocks in user space such as buffers (char *buf, void *buf), strings (char *filename, const char *pathname), structures (struct tms *tbuf) and arrays (int *fildes).

These are often associated with a direction for the transfer: is the memory copied from user space to the kernel, the opposite, or both? In the kernel, this is reflected by the usage of the user space memory access API (Table 4.1). In user mode, clues exist in the argument definitions via the usage of qualifiers such as const and __user. However, this does not grasp the full range of corner cases offered by system calls with variable argument lengths such as ioctl() and fcntl().

The mmap system call family is particularly noteworthy. The vma argument represents an address that is the start of a memory region and could be misinterpreted as a pointer. But it does not refer to any data in itself: the information is the address, thus the value of the argument, so it belongs to the integer category. Another corner case concerns linked lists via set_robust_list(). This system call registers the list of robust futexes (fast userspace mutexes) of a process, i.e., it defines the behavior of the mutexes when the process terminates. It is a mechanism used to avoid deadlocks when a process terminates accidentally. The system call takes as an argument a pointer to a structure representing the head of the linked list: struct robust_list_head *head. However, the kernel only stores the value of the pointer. It is only when the process is terminated that user memory is accessed to traverse the linked list. Because the management of the futexes is delegated to the user process, most of the structures are stored in the user memory space. So the kernel keeps only pointers to these structures and therefore needs to perform many reads of user memory to set the state of the futexes. In our approach, we have set this particular case aside to handle all the futex management with the local kernel and avoid any forwarding. As we will see further in Chapter 5, not all system calls need to be forwarded, and the filtering mechanism helps us achieve that.

Taking these observations into account, we identified four categories of system calls:

• Category A modifies the structure of the process.
• Category B exchanges information with the kernel via pointers to user space buffers.
• Category C gathers simple functions where all the information is contained in their arguments.
• Category D is special because these calls are served by the vDSO library without any context switch to the kernel.

We will now discuss each group with examples.

A. Modifying the address space

These system calls operate directly on the kernel structures of the process address space. They modify it in a way that is hard to control from user mode. However, their number is limited, and it is possible to replicate their effects to keep both process structures synchronized.

B. Exchange user space data

These system calls are characterized by their interactions with the process address space through memory reads and writes. As mentioned in Table 4.1, the Linux kernel uses a specific interface that takes the pointer and the size as arguments. The size of structures is part of the system call ABI. In order to preserve backward compatibility for user space applications and libraries [START_REF]Kernel ABI readme[END_REF] [9], the Linux kernel developers are making great efforts not to change the ABI. In contrast, buffers that are not strings are often followed by another argument defining their size.

C. Simple functions

These are the simplest system calls to forward, as all the information exchanged with the kernel and the resulting effects (from the kernel routine) they trigger are contained in their parameters and the return value. This means that only the registers must be copied on the system call entry and exit.

pid_t getpid(void);
int socket(int domain, int type, int protocol);
int flock(int fd, int operation);
void sync(void);
int kill(pid_t pid, int sig);

Listing 4.3: Examples of simple system call prototypes.

D. vDSO - Virtual Dynamic Shared Object

For some system calls, the time spent in the kernel is negligible compared to the overhead of the context switching. For this reason, the kernel maps a shared library in the address space of user space processes to avoid the interrupt-handling procedure. Hence, invoking those system calls results in a function call rather than a classic system call. These system calls are often not intercepted by tracing tools as they use a different interface. This is usually a minor problem because it mainly concerns time-related system calls. The implementation of these functions in the vDSO is architecture specific [START_REF]vDSO -overview of the virtual ELF dynamic shared object[END_REF]. For example, on the ARM and MIPS architectures, only two functions are concerned, namely the gettimeofday() and clock_gettime() implementations.

Memory forwarding

As previously discussed, certain system calls expect to access user space memory. Hence, a problem arises: which memory blocks must be forwarded together with the system call? In our research, we have considered seven approaches; each has drawbacks but remains relevant in particular situations, making them complementary to each other. They can be grouped according to whether the instrumentation happens at runtime or is built into a model during an upstream analysis.

At runtime

I. The naive approach is to synchronize the complete process memory at the entry and exit of each system call. While this method does not require any knowledge of the type of system call, it is very inefficient as it introduces a significant delay. Indeed, the network throughput of embedded systems is often too limited to transfer all the process memory.
In general, a system call does not interact with the whole process memory but only with a region, as is often the case when a file is mapped in memory via mmap().

II. In contrast, the ideal approach would precisely identify and synchronize the memory regions modified by each system call. This is possible for many system calls because the size of the structure is known or the size of the buffer is located in another argument. However, the size of objects referred to by a parameter will not be known for custom system calls and ioctl() calls to custom character devices.

III. To handle the limitations of the ideal method with ioctl, PROSPECT [START_REF] Kammerstetter | Prospect: peripheral proxying supported embedded code testing[END_REF] proposed a novel mechanism called Dynamic Memory Tunneling. The idea consists of "always transferring a memory buffer to the target system if the IOCTL parameter is a possible pointer to a memory location". To identify if the argument is a pointer, the technique checks if the value corresponds to a valid address in the process address space and if the corresponding memory region has read and write permissions. The pages containing the address, as well as the surrounding ones, are synchronized. Upon the system call's return, the memory is compared to detect modifications, i.e., writes from the kernel. The number of bytes that were changed is used to decide if the pages need to be forwarded back. Heuristics are employed to identify the number of pages that need to be forwarded. The authors have determined empirically that three pages are usually sufficient in most cases, as structures are often located at the border of a page.

IV. An alternative method consists of implementing a form of distributed shared memory. To accomplish this, a mechanism to intercept the memory writes and reads is needed. On Linux systems, it is possible to take advantage of the kernel page fault handling mechanism. This is what the libsigsegv library [START_REF]GNU libsigsegv[END_REF] and the userfaultfd() system call allow doing, as sketched below. However, page faults are not free: raising and catching them introduces overhead in the instrumentation execution.
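As a concrete illustration of this approach, the following minimal sketch registers a page-aligned region with userfaultfd(); it is not code from our prototype. Faults would then be read from the returned descriptor as uffd_msg events and resolved, for instance with the UFFDIO_COPY ioctl, after fetching the page content from the remote kernel:

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Register [addr, addr+len) so that the first access to each missing
 * page traps to user space instead of being resolved by the kernel. */
int watch_region(void *addr, size_t len)
{
    long uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
    if (uffd < 0)
        return -1;

    struct uffdio_api api = { .api = UFFD_API };
    if (ioctl(uffd, UFFDIO_API, &api) < 0)
        return -1;

    struct uffdio_register reg = {
        .range = { .start = (unsigned long)addr, .len = len },
        .mode  = UFFDIO_REGISTER_MODE_MISSING,
    };
    if (ioctl(uffd, UFFDIO_REGISTER, &reg) < 0)
        return -1;

    return uffd;  /* poll() it, then read() struct uffd_msg events */
}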
Model-based

V. The proposed method is carried out in two phases. Firstly, the process is traced under normal use, during which system calls and user memory accesses are recorded. From these recordings, the data structure outlines are identified and associated with the system call arguments. Secondly, during emulation, the inferred rules are used to decide which blocks of memory need to be forwarded with the system call. However, it should be noted that tracing system calls and recording their memory accesses on the original environment can be difficult. Additionally, the traces may not cover all the program execution paths of interest during emulation.

VI. Alternatively, selective symbolic execution [START_REF] Chipounov | Selective symbolic execution[END_REF] may be used to identify the memory regions touched by a given system call. However, it requires the ability to symbolically execute the relevant kernel code. For example, Mousse [START_REF] Liu | Mousse: a system for selective symbolic execution of programs with untamed environments[END_REF] shows promising results in applying selective symbolic execution to OS services and kernel routines.

VII. As before, a model is built to determine which memory blocks should be forwarded during the system call. However, this method uses static analysis. The approach involves identifying the parts of the system call code that interact with user space memory and the associated data structures. As seen in Table 4.1, a user space memory access API is often used. Rules can then be inferred from this information and used at runtime to synchronize the memory before and after the forwarding. A similar approach has been demonstrated by DIFUZE [START_REF] Corina | Difuze: Interface aware fuzzing for kernel drivers[END_REF]. Using source code, DIFUZE focuses on ioctl system calls to recover the device driver interface. In particular, it reconstructs the driver filename, the ioctl commands and the associated data structures exchanged with user space. It should be possible to generalize this approach to binaries without source code access, in particular for Loadable Kernel Modules, because they need to export their symbols to be properly linked with the kernel.

Process resources consistency and synchronization

It is essential to maintain a minimum level of synchronization between the emulator and the physical device while the program executes in a distributed fashion. This is necessary to ensure that the program runs normally. This section discusses strategies to manage the process resources consistently. It is not a straightforward task because, as previously discussed in Section 4.2.1, the resources are partially managed by the kernel. Although it is possible to instrument the kernel on the host [START_REF] Jiang | ECMO: Peripheral Transplantation to Rehost Embedded Linux Kernels[END_REF][START_REF] Liu | Firmguide: Boosting the capability of rehosting embedded linux kernels through modelguided kernel execution[END_REF], we aim at not modifying the kernel running on the physical device; moreover, doing so may not always be feasible. Therefore, additional care must be taken in some cases when managing resources from user mode to ensure consistency in the program execution.

Process creation and identifiers

On the creation of a new process or thread using the fork() or clone() system call, the operation must be forwarded to the physical device. In addition, all identifiers reflecting the process hierarchy must be kept synchronized. This requires special attention as the choice of identifiers, in particular the PID, is rarely decided by the user process but imposed by the kernel. To address this, we chose to copy the identifiers chosen by the kernel on the remote device and rearrange them in the emulator. This approach is easier to implement as it provides more control over the emulator state. On Linux, several methods are available for setting the PID of a child process. One method is to repeatedly use the fork() system call until the desired PID is obtained. Another method, initially used by CRIU, utilizes the /proc/sys/kernel/ns_last_pid file to assign PIDs; it has the advantage of not requiring root privileges. Since kernel version 5.5, the set_tid array of the clone3() system call offers the ability to choose the PID of the child process, as sketched below. Moreover, these methods can be used within a PID namespace to prevent interference from other processes on the host system.
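The following sketch shows the clone3() variant. It assumes a kernel version of at least 5.5 and the capability to pick PIDs in the target namespace (CAP_SYS_ADMIN, or CAP_CHECKPOINT_RESTORE on newer kernels); it is illustrative rather than taken from our implementation:

#define _GNU_SOURCE
#include <linux/sched.h>   /* struct clone_args */
#include <signal.h>
#include <stdint.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Create a child whose PID mirrors the one assigned by the device
 * kernel. set_tid holds one desired ID per nested PID namespace. */
static pid_t clone_with_pid(pid_t wanted)
{
    pid_t tid = wanted;
    struct clone_args args;

    memset(&args, 0, sizeof(args));
    args.exit_signal  = SIGCHLD;
    args.set_tid      = (uintptr_t)&tid;
    args.set_tid_size = 1;

    return syscall(SYS_clone3, &args, sizeof(args));
}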
File descriptors

When a file or device is opened, the created file descriptor is only valid within the kernel where it was generated. As a result, any actions taken on the file descriptor, such as reading, writing or other system calls (e.g., mmap, lseek, ioctl, close), must be performed on the same kernel. The filtering system allows us to modify these rules at runtime.

To reduce the number of context switches introduced by I/O operations, such as the read() and write() system calls, a file can be mapped in the process memory via mmap(). Doing so allows the process to directly access the file content without asking the kernel; this is done transparently by the kernel. Similarly, the splice() and vmsplice() system calls remap memory pages internally without a round trip to user space; they are used to transfer data to a pipe. Both present different challenges for memory synchronization between the process memory living in the emulator and the kernel managing the read and write accesses on the remote device. For example, consider a file opened on the remote device and then mapped in the process memory. Besides allocating the same memory region locally in the emulator, the accesses must be forwarded, otherwise the program will execute with wrong values. The same holds for a splice() call between a file and a pipe that do not live in the same kernel.

Inter-Process Communication

IPCs are managed by the kernel. Most of them use the system call interface: files, pipes, message queues, sockets and signals. Therefore, for most of them, no further special care needs to be taken. Because it is the kernel that handles signal propagation and reception, signals need to be intercepted on the device, returned, and injected into the emulator. Shared memory with other processes can be handled by intercepting the accesses and synchronizing them with a distributed shared memory mechanism between the emulator and the device (a minimal interception sketch closes this section). Environment variables are stored within the memory of the process. They can be synchronized at initialization; any later change to an environment variable is intercepted in the same way as a change to any other memory region.

Remote events

So far, we have mainly discussed actions initiated by the user mode application. However, the kernel may also initiate operations on the process, such as memory accesses or sending signals. In such cases, these actions are initiated by the remote kernel located on the device and need to be forwarded to the emulator. As previously explained in the background in 2.2.6, asynchronous interfaces consist of two types of operations: a submission issued by the process and an event confirming the operation's status issued by the kernel to the process. The older asynchronous I/O interface handles both operations via system calls initiated by the process and can therefore be handled in a similar way to other system calls. Instead, the more recent io_uring interface uses two ring buffers shared between kernel and user space. They present a challenge because the kernel may read, write or modify pointers related to the ring buffers whenever it wants. Similarly, when a file located on the remote device is mapped in the process address space, accesses to this memory region have to be intercepted for proper synchronization.

Direct Memory Access (DMA) [START_REF] Mera | DICE: Automatic emulation of DMA input channels for dynamic firmware analysis[END_REF] allows hardware devices to transfer data directly to and from memory without involving the CPU. However, it may be difficult to intercept these accesses from user space because it is a more privileged operation. In addition, kernel modules may implement their own mechanisms to access user space memory. In this case, the behavior highly depends on what the developer intends to do and may not even follow the kernel guidelines. Finally, special architecture modes may also access user space memory. This is typically the case for the System Management Mode (SMM) on Intel, ARM TrustZone or MIPS VZ (Virtualization Extensions). Although embedded systems are the main scope of this tool, these hardware extensions may not be present. To monitor accesses to these regions and propagate the events back to the tracer, one approach is to use a shared memory. However, it is necessary to identify the correct memory regions in advance. In practice, the kernel on the device may not support the capabilities required to implement this shared memory, i.e., the userfaultfd interface for intercepting the memory accesses. Furthermore, more privileged accesses may not be possible to intercept from user space at all. In such cases, it may be necessary to use a shadow memory for these regions and a polling mechanism to monitor changes in their content, to later replicate them in the emulator.
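Where interception from user space is possible, a classic building block, in the spirit of libsigsegv, is to revoke write permission on the pages of interest and catch the resulting faults. A minimal sketch, assuming a page-aligned region and not taken from our prototype:

#define _GNU_SOURCE
#include <signal.h>
#include <stddef.h>
#include <sys/mman.h>

static void  *watched;
static size_t watched_len;

/* The first write to the read-only region raises SIGSEGV: mark it
 * dirty for later synchronization, then let the write proceed.
 * (mprotect() in a handler is not strictly async-signal-safe, but it
 * is the usual trick used by user-space fault handlers.) */
static void on_fault(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)si; (void)ctx;
    /* record si->si_addr as a dirty page here */
    mprotect(watched, watched_len, PROT_READ | PROT_WRITE);
}

void watch_writes(void *region, size_t len)
{
    struct sigaction sa = { 0 };
    sa.sa_sigaction = on_fault;
    sa.sa_flags     = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    watched = region;
    watched_len = len;
    mprotect(region, len, PROT_READ);  /* subsequent writes fault */
}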
System call forwarding

4.3.1 State machines

We propose a new approach able to trace and forward system calls from a running program. As we have seen in Section 3.3, each use case has different motivations for intercepting system calls at different abstraction layers. To avoid reinventing the wheel, we have decided to dissociate the interception mechanisms from the tracing instrumentation and to focus on the latter. In Chapter 5, we describe how we have integrated our tracer into QEMU user mode emulation. Another characteristic that influenced our design choices is that the analysis has to be carried out by different communicating components: the emulator and the physical devices. To address this, we have created a distributed system composed of three main components: a tracer, an executor and an orchestrator. The approach is generic enough for POSIX systems, but we have focused on Linux in particular. Figure 4.4 describes our approach.

First, the tracer takes over at the next entry to or exit from a system call. On system call entry, the context is transferred to the orchestrator, which filters it according to rules set by the analyst. The decision made is sent back to the tracer, which executes it. On system call exit, the saved decision is checked again before being carried out. The orchestrator operates over three different events: SEND_SYSCALL_ENTRY, SEND_SYSCALL_EXIT and RETURN_SYSCALL_EXIT. The first two are linked to the tracer while the last one is related to the executor. The orchestrator is responsible for instrumenting system calls and establishing connections between the tracer and the executor. Finally, the executor works in a loop, waiting for new system calls to execute and replying with their return values. The state machines for the three components are illustrated in Figures 4.5, 4.6 and 4.7.

A distributed system implies a means for its elements to communicate. For this purpose, we have designed a packet-based protocol with a fixed-length header and a variable-length payload. More details on its implementation are given in Section 5.2.4.

Filter & Rules

Because it is neither efficient nor desirable to forward all system calls to the remote kernel, we have implemented a new filtering mechanism. The filtering system allows defining rules for system call interception in a relatively simple grammar, fully depicted in Listing 4.5. Each filter consists of a set of rules and aims to return a decision based on the first matched rule. If no rule is matched, a default decision, which is often to execute the system call normally, is returned. A rule must at least define the system call to which it applies and a decision.
Optionally, a rule can be refined by adding a set of conditions that must be met by the system call's arguments before returning a match. For example, on the MIPS architecture, the following rule forwards the openat system call when it opens any file with the name filename with read and write permissions:

4288: *a2=="filename" and a4==2 -> FWD_ENTRY|FWD_EXIT|NO_EXEC;

In addition, multiple conditions for a single rule can be concatenated with the "and" keyword, leading to a logical conjunction of the individual conditions. While a single rule is not able to match multiple mutually exclusive statements, multiple rules for the same system call can be registered with different conditions to achieve the same goal. Effectively, this allows the creation of logically disjunct statements by combining multiple rules, while keeping the grammar and its parser simple.

digit      = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" ;
digits     = digit , { digit } ;
argument   = "a1" | "a2" | "a3" | "a4" | "a5" | "a6" ;
parameter  = ["*"] , argument ;
operator   = "==" | "!=" | ">" | "<" | ">=" | "<=" ;
sys_no     = digits ;
decision   = "CONTINUE" | "FWD_ENTRY" | "FWD_EXIT" | "NOTIFY_EXIT" | "NO_EXEC" | "KILL" ;
decisions  = decision , { "|" , decision } ;
condition  = parameter , operator , ( digits | parameter ) ;
conditions = "" | condition , { " and " , condition } ;
rule       = sys_no , ": " , conditions , " -> " , decisions , ";" ;
filter     = { rule } ;

Listing 4.5: Filter grammar in eBNF.

Decisions

Once a rule of a filter is successfully matched, the corresponding decision has to be carried out. Six different decisions can be made, possibly in combination. We carefully selected these decisions to ensure the mechanism is expressive enough to cover all use cases. In detail, the decisions are:

1. CONTINUE: This is the default decision; it executes the system call without any further interception.
2. FWD_ENTRY: Execution is transferred to a callback on system call entry, which allows the modification of the system call arguments. This can be used for system call instrumentation and forwarding.
3. FWD_EXIT: Similar to FWD_ENTRY, except that the callback is executed on system call exit and allows the modification of its return value.
4. NOTIFY_EXIT: Notify on system call exit of its return value, without any callback.
5. NO_EXEC: Suppresses the execution of the system call, which results in an undefined return value being passed to user space. This decision is typically selected in combination with FWD_EXIT to explicitly set a return value, for instance to implement system call forwarding.
6. KILL: Terminate the process issuing the system call.
Execution order

When a program issues a system call, it may be executed on one side, the other side, or both. It is crucial to consider the order of execution when executing the system call on both sides to keep both processes synchronized. It is also necessary to determine whether the system call behavior needs to be replicated on the other side or if it can be executed independently. To illustrate this concept, consider the example of allocating a memory region through the mmap() system call with the address argument set to NULL. On success, the kernel returns the starting address of the allocated pages. For instance, when tracing the ls program, the following system call is invoked: mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x77a78000. It informs the program that the kernel has allocated two pages of memory with read and write permissions located at the address 0x77a78000.

The question that arises is which kernel should decide the location of the new memory block. There may exist dissimilarities between the kernel versions and configurations, resulting in a different allocation algorithm. The Linux kernel implements different allocators such as SLOB, SLAB and SLUB. Additionally, other running programs may also influence the allocation via the allocator caches. Moreover, it is easier to instrument the kernel running on the host with the emulator than the device kernel. For these reasons, we chose to first invoke an anonymous mmap() on the device kernel, so that a second mmap() call is issued in the emulator with the address argument set to the former's return value. Other system calls do not always need to implement this ordering mechanism resulting from the instrumentation. Typically, system calls from category C (i.e., simple routines) can be replaced on the local kernel by a dummy call without any impact on the execution flow. Both system calls can then be performed in parallel, with a synchronization mechanism on their termination to substitute the return value.

Chapter 5

Improving Linux-based firmware emulation with process snapshot and syscall forwarding

In this chapter, we first describe our implementation of Chestburster before looking at its performance.

Design concept

Our approach is based on binary analysis rather than source code instrumentation. We have chosen this approach for several practical reasons related to the context of conducting security testing on embedded systems. First, access to the source code is not always possible. Vendors often prefer to protect their software components behind proprietary licenses. Common examples in Linux-based systems include user-mode applications, binary kernel modules (e.g., GPU drivers) [START_REF] Maier | BSOD: Binary-only scalable fuzzing of device drivers[END_REF] and peripheral firmware (e.g., WiFi or Bluetooth peripherals). Obtaining source code for firmware parts covered by free software licenses (e.g., under the GPL) is not always straightforward. Although vendors are legally obliged to comply with source code redistribution when inquired, the process can be cumbersome in practice. Second, recreating the toolchain and the building process for the target device can be challenging as it depends on many factors, such as the compiler, the kernel configuration, the kernel headers, and the patches. Finally, hardware protections may prevent flashing a newly compiled firmware image on the device. For instance, the boot process may check the image's integrity before loading the firmware.

Our goal is to maintain the relevance and generality of our approach. To achieve this, we have chosen to implement our instrumentation at the POSIX interface without modifying the system itself. By leveraging the stable and public POSIX API, we believe we will be able to extend our analysis to Type-II firmware such as QNX-based firmware. Given these considerations, practical analysis of real-world firmware often relies on binary firmware.

Process migration

Starting a dynamically linked program consists of two main steps. First, the kernel allocates the system resources and loads the program segments with its interpreter in memory. For an ELF file, this is the ELF interpreter. Next, the control flow is transferred to the entry point of the interpreter in user mode.
The interpreter then loads the required shared libraries in memory, relocates objects and resolves the required symbols. All these steps are complex and lead to significant changes to the process memory layout, using system calls. Forwarding all these system calls to keep the emulated process synchronized with the one on the device would incur a significant execution overhead. Furthermore, the runtime loader of the emulator is significantly different from the one of the kernel used on the device. As previously demonstrated [158] [31] [176] [START_REF] Nisi | Lost in the Loader: The Many Faces of the Windows PE File Format[END_REF], such a discrepancy can lead to significant variations in the interpretation and to significantly different memory layouts or symbol resolutions. This could, for example, lead to missing bugs both in the kernel runtime loader and in the program. Taking a snapshot of the process state, and allowing it to be transferred from the device to the emulator, helps to avoid those issues. However, snapshotting leads to another challenge: the restoration of the process, and in particular identifying which type of information needs to be captured and how to save it. We next explore possible techniques and tools for capturing, from userland, the state and resources of a process running on a Linux-based system. In particular, we will discuss CRIU, core dumps and GDBserver.

CRIU

The Checkpoint/Restore In Userspace project implements process migration [START_REF]Checkpoint/restart in userspace (CRIU)[END_REF] for Linux. It stops a process and all its children, captures their states to disk, and later, possibly in a different place, restores and restarts them. CRIU captures many different system resources of a process: identifiers, sockets, files, pending signals, mount points, namespaces, cgroups, etc. CRIU was initially created for snapshotting running containers. Unfortunately, its application to Linux-based embedded systems is problematic. CRIU requires specific kernel features [START_REF]CRIU linux kernel options[END_REF] such as CONFIG_CHECKPOINT_RESTORE, CONFIG_NAMESPACES and CONFIG_UNIX_DIAG to be configured at compile time. However, kernels found in embedded systems are often stripped of unnecessary options to keep storage and memory usage minimal, making CRIU unable to run. Furthermore, CRIU does not provide the option to select which resources to capture: it either captures all resources or aborts. Therefore, without implementing this selection feature in the project, using CRIU on embedded systems can be difficult. For instance, we found that the namespaces and socket monitoring interfaces are rarely enabled. Both are essential options needed by CRIU during a checkpoint.

Core Dump

A core dump is a file that captures the memory content and CPU context of a process at a specific time. Usually, it happens when the process receives a certain type of signal, such as SIGSEGV or SIGQUIT, although the exact behavior depends on the kernel configuration. Core dumps are mainly used for debugging purposes in the context of post-crash analysis. However, it should be noted that core dumps may provide incomplete information. Other resources such as file descriptors, identifiers, and mount points are not included. Moreover, the file /proc/[PID]/coredump_filter controls which memory segments are written to the core dump.
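For instance, before capturing a process, the filter can be widened so that more mappings end up in the dump; the bitmask semantics are documented in the core(5) manual page. A small illustrative helper:

#include <fcntl.h>
#include <unistd.h>

/* Opt in to dumping anonymous and file-backed mappings, ELF headers
 * and hugetlb memory: 0x7f sets bits 0-6 of the filter (see core(5)). */
void widen_coredump_filter(void)
{
    int fd = open("/proc/self/coredump_filter", O_WRONLY);
    if (fd >= 0) {
        write(fd, "0x7f", 4);
        close(fd);
    }
}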
The Extended Core File Snapshot (ECFS) project [START_REF]Extended core file snapshot[END_REF] enhances the core file format with additional features to examine processes in the context of memory forensic analysis. After generating the core dump file, the process procfs is inspected to extract complete information about it. A utility is available to continue the process execution from its snapshot. However, the support is mainly limited to x86 architectures.

GDBserver

A simpler alternative consists of using a debugger such as gdbserver for remotely starting or attaching to a process. This approach allows the inspection of various process resources, which can then be easily copied into the emulator.

The Chestburster Architecture

We now present our solution, Chestburster, which aims at providing snapshot and restore for Linux-based embedded devices. The architecture of Chestburster is presented in Figure 5.1. It consists of three main components: the executor on the physical device, the tracer inside an emulator, and the orchestrator managing both. This design centralizes the main features in the orchestrator while the targets implement basic operations. As a result, this design makes it possible to smoothly manage targets and register callbacks for various instrumentations at different stages of the analysis. The executor uses a statically compiled gdbserver to create process snapshots and execute the system calls remotely. We have developed a new library, sysforward, to trace system calls and communicate with the orchestrator in different execution environments. We have integrated this library into the QEMU user-mode emulator and extended it to load process snapshots. The avatar2 framework [START_REF] Muench | Avatar 2: A multi-target orchestration platform[END_REF] manages both the emulator and the physical device. Because avatar2 mainly focused on monolithic firmware (Type-III), we have extended it to include support for processes, system call filtering and forwarding between targets, as described in Section 4.3.

Analysis workflow

The analysis is done in two phases. First, the program is loaded on the physical device and the process context and memory are captured and transferred to the emulator. Then, the execution resumes and system calls are filtered to either be performed locally or forwarded to the device.

Process migration

The program is first loaded on the device and then migrated to the emulator for several reasons. The use of the kernel runtime loader on the embedded system allows the memory layout to be as close as possible to its original environment. Moreover, as explained in Section 5.1.1, loading the various libraries would force extensive instrumentation of symbol resolutions and memory mapping operations. Additionally, the emulator provides better control over the process and its environment, making the restoration of the process more straightforward. Nonetheless, Chestburster is flexible enough to support the opposite scenario: starting the process in the emulator and migrating a copy to the device. The gdbserver approach was selected due to its simplicity and versatility. It is the approach that requires the least amount of additional instrumentation compared to the alternatives, which come with significant challenges. CRIU needs special kernel configurations to be enabled and does not allow partial process snapshots. ECFS is limited to x86 architectures.
The core dump approach requires implementing a suitable loader for the QEMU emulator.

The choice to stop the execution of the process after its initialization by the loader, but prior to starting it (i.e., at the beginning of the main() function), simplifies the migration process by only requiring the capture of the memory mapping and the CPU context. This approach eliminates the difficulties associated with the forwarding of dynamic library loading and the migration of complex resources, such as network connections, file locks, and devices. In summary, Chestburster starts a gdbserver and the program to analyze on the physical device. A breakpoint is set after the loading but early in the life of the process; for example, a breakpoint near the main function is preferable. From there, the process state is captured via the /proc/[PID]/maps and /proc/[PID]/stat files and its memory content is dumped into an ELF file. This file is then transferred to the host, in namespaces that replicate the device environment. The QEMU runtime loader restores the process memory layout and its gdbstub is used to copy the saved CPU context. The process execution can now be resumed.

System call forwarding

All system calls are intercepted on their entry. This interception happens inside QEMU, before QEMU's own system call instrumentation. The system call number and its arguments are passed as raw values to the tracer in the sysforward library, which then sends them to Chestburster. It handles system call parameter decoding by assigning a type to each argument and dereferencing the potential pointers. This facilitates the application of user-defined filters to the system call. The filter outputs a decision on the action to carry out. The decision could be executing the system call locally on the host, remotely on the device, intercepting the system call exit or terminating the process. The choice of action is left to the discretion of the user. Figure 5.2 illustrates in a sequence diagram the flow of events for a typical forwarding decision.

For example, a typical forwarding strategy can be implemented with the decision NO_EXEC|FWD_ENTRY|FWD_EXIT. The NO_EXEC decision means the system call should not be executed by the emulator. This is achieved by replacing it with a dummy call such as sys_ni_syscall() to keep the opportunity to intercept its exit without causing any side effects on the kernel. The FWD_ENTRY decision copies the arguments to the remote process on the device, executes the system call there, saves the return value and system error number, and then returns to Chestburster. Finally, the FWD_EXIT decision ensures that both local and remote system calls are synchronized on exit.
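The underlying mechanism can be illustrated with plain ptrace on x86-64 for readability (Chestburster itself hooks QEMU user mode, and on MIPS the v0/a3 registers play the role of rax): at the entry stop, the system call number is replaced with an invalid one so that the local kernel does nothing, and at the exit stop the return value obtained from the device is injected. A sketch, not taken from our implementation:

#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>

/* Sketch of NO_EXEC|FWD_EXIT with ptrace: assumes the tracee is
 * currently stopped at a syscall entry, and that remote_retval was
 * produced by executing the call on the device. */
void neuter_and_inject(pid_t pid, long remote_retval)
{
    struct user_regs_struct regs;
    int status;

    ptrace(PTRACE_GETREGS, pid, 0, &regs);
    regs.orig_rax = -1;                  /* local kernel: -ENOSYS   */
    ptrace(PTRACE_SETREGS, pid, 0, &regs);

    ptrace(PTRACE_SYSCALL, pid, 0, 0);   /* run to the syscall exit */
    waitpid(pid, &status, 0);

    ptrace(PTRACE_GETREGS, pid, 0, &regs);
    regs.rax = remote_retval;            /* substitute device result */
    ptrace(PTRACE_SETREGS, pid, 0, &regs);
}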
Implementation

To demonstrate our system call forwarding approach, we have implemented a prototype of Chestburster targeting Linux-based systems on the MIPS 32-bit big-endian architecture. This platform was selected because of its widespread use in Linux embedded systems [START_REF] Daming D Chen | Towards Automated Dynamic Analysis for Linux-based Embedded Firmware[END_REF], including our case study, the Philips Hue Bridge. The implementation is generic enough to allow future extensions to other architectures and POSIX systems.

Enhancing avatar2 for Linux processes

Chestburster is based on the avatar2 framework [START_REF] Muench | Avatar 2: A multi-target orchestration platform[END_REF]. It acts as the central component of our system and is written in Python for rapid prototyping. Chestburster can generate and migrate process snapshots, as well as trace, filter and forward system calls between various analysis tools. To achieve this, we have extended the avatar2 messaging system to handle new events related to system calls such as SyscallEntry, SyscallExit, SyscallReturnExit, SyscallForwardEntry and NewProcess. Additionally, we have implemented new abstractions to better represent the process and its memory space, as well as to facilitate the instrumentation of system calls. These instrumentations include argument decoding, filtering and synchronization mechanisms for system calls which modify the process layout (category A from Section 4.2.3). Filters are implemented as dictionaries with the key corresponding to the system call number and the value storing the rules applied to this system call.

Process migration

We rely on the gdbserver approach to migrate the process running on the physical device to the emulator on the host. We set a breakpoint at the main() function to allow the program to load properly on the device. Once the breakpoint is reached, the memory mapping and the state of the process are collected through examination of the /proc/[PID]/maps and /proc/[PID]/stat files. The memory regions are then stored in an ELF file using segments. As a result, it can be loaded in the QEMU emulator like any program. Finally, the CPU context is transferred using GDB to resume the execution in the emulator.

The QEMU user-mode based tracer

We have implemented the tracer component of the state machine presented in Chapter 4, Figure 4.5, as a C library named sysforward. We made this decision to be able to dissociate the interception mechanisms from the tracing instrumentation, which allows the library to be integrated with multiple analysis tools. As a result, the library can be used in various contexts, such as a standalone tracer based on ptrace, or to enhance an emulator like QEMU user-mode. The sysforward library consists of a tracing thread for each thread being traced, and a listening thread responsible for communicating with the orchestrator. This design enables multiplexing tracing communications over a single communication link. Furthermore, it implements the forwarding protocol described in Section 5.2.4.

In particular, we have modified the QEMU user-mode emulator to include the sysforward library. We chose this emulator because we are interested in the execution of the user process part and not of the whole operating system. The kernel part of the execution is performed on the physical device. In addition, the execution overhead is reduced because no hardware needs to be emulated. It also removes the burden of rehosting all the OS services and their hardware dependencies, similar to what Firmadyne [START_REF] Daming D Chen | Towards Automated Dynamic Analysis for Linux-based Embedded Firmware[END_REF] tries to achieve. The process is resumed in namespaces to separate the analysis environment from the rest of the host. The systemd-nspawn command allows starting a command in a lightweight container. It combines the classic Mount namespace from chroot with others such as PID, User, Cgroup, IPC, Time, UTS and Network. The process PID is restored through the combination of the PID namespace with the set_tid parameter of the clone3() system call.

Protocol

The communication protocol corresponds to the implementation of the state machines from Figures 4.5, 4.6 and 4.7. Each packet consists of a header with a fixed size and a payload of variable length. For our prototype, we have implemented the protocol on top of TCP, but it could be adapted to work on top of any other transport such as UDP. The header is composed of three main fields:

1. The command.
2. The PID, to multiplex multiple process tracings over a single connection.
3. The length of the payload associated with the header.
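A possible C layout for such a header is sketched below; the field widths are illustrative assumptions, not the exact wire format of the prototype. A fixed-size header lets the receiver read exactly sizeof(struct fwd_header) bytes from the stream before deciding how many payload bytes to expect.

#include <stdint.h>

/* Illustrative fixed-size header preceding each variable payload. */
struct fwd_header {
    uint16_t command;      /* e.g., SendSyscallEntry, ReturnDecision */
    uint32_t pid;          /* multiplexes several traced processes   */
    uint32_t payload_len;  /* bytes of command-specific payload      */
} __attribute__((packed));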
The payload is dependent on the command. We define 16 main commands:

• Error to indicate a failure.
• NotifyNewProcess to notify the orchestrator that a new process is traced.
• SendSyscallEntry to send the system call number and its arguments.
• NotifySyscallExit to notify the results of a system call.
• ReturnSyscallExit to return to the tracer what the system call return value should be.
• ReturnDecision to return to the tracer the decision made by the filter.
• NotifySignal to inject signals.
• ReadArgs, WriteArgNo and WriteArgs to operate on the system call arguments.
• ReadRegs, WriteRegNo and WriteRegs to interact with CPU registers.
• ReadMemory, ReadString and WriteMemory to interact with memory content.

Executor

In order to execute the system calls on the device, we take advantage of the process attached to the gdbserver from the process migration. GDB is used to execute each system call in the following manner: the memory and the CPU context are copied, the system call is executed, its result is saved and the context is restored.

The clone system call has several options for creating new processes. Currently, Chestburster only supports creating new threads, using the CLONE_THREAD flag. This is because libsysforward has been designed to handle multiple threads only, due to its single communication link with the orchestrator. In addition, GDB does not permit debugging multiple processes simultaneously; it only supports multithreaded debugging. However, it is still possible to catch new process creations for fork and clone and initiate them under new debugger and emulator instances. Moreover, execve requires starting a new process migration from the device to avoid the complicated task of process loading.

The brk system call either allocates or deallocates memory by adjusting the program break, which represents the data segment size of the process. Chestburster first executes the system call on the device and retrieves its return value. It then compares it with the program break present in the emulator to determine if brk should be emulated with an mmap or munmap system call in the emulator.

Limitations

At the time of writing, the Chestburster prototype has the following limitations:

• As explained above, it does not directly support the creation of new processes. However, this can be addressed with additional instrumentation.
• Signals are not yet supported, but they do not pose a significant challenge: they can be caught and injected by the debuggers.
• Accesses to shared memory and memory-mapped files are not intercepted on the device. Therefore, events initiated on the device are not correctly forwarded to the emulator. This can be improved in the future using distributed shared memory.

Evaluation

In order to evaluate Chestburster, we performed two sets of experiments:

1. We look at the correctness of our approach, and in particular whether our instrumentation influences the execution flow.
2. We analyze the system performance in terms of execution time overhead on synthetic benchmarks.

The experiments are run on a laptop with an Intel i7-8650U CPU (8 cores) with a clock of 4.2 GHz and 32 GB of RAM.
The embedded system used in our experiments runs the Hue Bridge 2.X firmware version 1935074050 with OpenWRT on a Linux kernel 4.4.60 compiled with GCC 4.8.3. The hardware is composed of the QCA4531 System-on-Chip with a MIPS 24Kc CPU at 650 MHz and 64 MB of RAM. We initially started with the Lima development board, but for the evaluation we switched to the Philips Hue Bridge board, for which we reconstructed a compatible toolchain.

Execution correctness

Our evaluation starts by comparing the system call traces of the forwarded execution with the native execution on the physical device. Although it does not guarantee that the program under test follows exactly the same execution path as it does on the device, it does confirm that the program interacts similarly with the environment through its boundaries, i.e., the system call interface. Moreover, the choice to compare execution traces with system call granularity is justified by the location of our instrumentation. To trace the program's system calls on the device, we used a statically compiled strace.

Paxtest suite

The paxtest suite [140] was designed to simulate exploits and test that kernel security features work as intended. PaX is a security patch for the Linux kernel which implements mechanisms to prevent arbitrary read/write accesses from an exploit. The test suite is composed of two sets of tests. The first set of programs attempts various approaches to writing and running exploit code. The second set measures the system's randomization. We are interested in the first set because the programs test different memory layouts and permissions using corner cases. Listing 5.1 illustrates one such program: execstack. Running them by forwarding the system calls to the device's kernel assures us these corner cases are correctly handled.

The programs represent combinations of scenarios running in various memory areas: anonymous mappings, data, heap and stack. The first scenario imitates a buffer exploit: write data to the memory area and try to execute it. The second scenario tries to disable the memory protection using mprotect() before imitating a buffer exploit. The third scenario tries to overwrite code in the text segments. The fourth scenario simulates return-to-function attacks using (1) strcpy and (2) memcpy. Both attacks are performed with RANDEXEC enabled and disabled. The fifth scenario checks shared library BSS (Block Started by Symbol) and data areas, with and without disabling the memory protection as before.

We successfully ran the first four scenarios in a way similar to how they are executed on the device. However, the fifth scenario, which involves the mapping of shared libraries, encountered a bug during the forwarding that prevented their correct loading. As with the initialization step, a way around this problem is to perform the loading on the device and then migrate the process. Table 5.1 presents the number of system calls that are identical between the execution on the device and the execution in the emulator with forwarding. Furthermore, we manually inspected the memory mappings to ensure they were also identical. The column test result displays the output of the tests, identical on the device and during emulation. This confirms that the process migration and the various memory mapping operations were carried out correctly.

Linux kernel selftests

The Linux selftests suite is composed of many tests that focus on specific code paths in the kernel.
The nolibc tests check system calls issued by the eponymous kernel libc, which focuses on implementing a minimal C library to reduce the size of binaries used by the kernel, such as mkinitramfs [START_REF] Tarreau | Nolibc: a minimal c-library replacement shipped with the kernel[END_REF]. These tests were compiled using uClibc, which is part of the toolchain compatible with our Linux firmware. Out of 66 nolibc tests covering 31 system calls, 3 failed because of a return address corruption during the forwarding. Another test (waitpid_min) fails, but not because of our instrumentation: it fails in the same way on the device, therefore we consider it a correct behavior. The tested system calls include getpid, getppid, getpgid, kill, sbrk, brk, chdir, chmod, chown, chroot, close, dup, dup2, dup3, execve, gettimeofday, ioctl, link, lseek, mkdir, open, poll, read, sched_yield, select, stat, symlink, unlink, wait, waitpid and write. Table 5.2 displays the arguments used with the system calls and their corresponding return values.

Execution overhead

We have so far evaluated the correct execution of Chestburster; however, forwarding system calls to a different device has a significant performance impact. To evaluate the runtime overhead, we have used multiple scenarios to determine its origins.

• Scenario Baseline runs the program with plain QEMU user-mode version 7.2.0.
• Scenario Interception runs the program with our fork of QEMU, which integrates libsysforward.
• Scenario Strace enables the option -strace in plain QEMU, which prints the system call arguments and return values between the system call transitions.
• Scenario Interception-Strace runs the binary with the option -strace with our QEMU fork.
• In scenario Tracing, the program is run with Chestburster where all decisions are to trace and continue the execution in the emulator.
• Instead, scenario Forwarding forwards all system calls to the physical device where they are executed.

Both scenarios Interception and Interception-Strace do not send information to the orchestrator. Scenario Forwarding represents a worst-case scenario where all system calls are forwarded; in normal usage, the filtering mechanism helps to decide whether a system call should be executed locally or remotely. All presented values have been normalized with respect to scenario Baseline. Moreover, each scenario was conducted in a dedicated namespace using systemd-nspawn. Finally, we disabled CPU hyperthreading to minimize variations in all measurements.
System call                                            Result      Status
chmod("/proc/self", 0555)                              -1 EPERM    ✓
chown("/proc/self", 0, 0)                              -1 EPERM    ✓
chroot("/")                                            0           ✓
chroot("/proc/self/blah")                              -1 ENOENT   ✓
chroot("/proc/self/exe")                               -1 ENOTDIR  ✓
close(-1)                                              -1 EBADF    ✓
close(dup(0))                                          0           ✓
dup(0)                                                 3           ✓
dup(-1)                                                -1 EBADF    ✓
dup2(0, 100)                                           100         ✓
dup2(-1, 100)                                          -1 EBADF    ✓
dup3(0, 100, 0)                                        100         ✓
dup3(-1, 100, 0)                                       -1 EBADF    ✓
execve("/", (char*[]){ [0] = "/", [1] = NULL }, NULL)  -1 EACCES   ✓
gettimeofday(NULL, NULL)                               0           ✓
ioctl(0, TIOCINQ, &tmp)                                0           ✓
ioctl(0, TIOCINQ, &tmp)                                0           ✓
link("/", "/")                                         -1 EEXIST   ✓
link("/proc/self/blah", "/blah")                       -1 ENOENT   ✓
link("/", "/blah")                                     -1 EPERM    ✓
link("/proc/self/net", "/blah")                        -1 EXDEV    ✓
lseek(-1, 0, SEEK_SET)                                 -1 EBADF    ✓
lseek(0, 0, SEEK_SET)                                  -1 ESPIPE   ✓
mkdir("/", 0755)                                       -1 EEXIST   ✓
open("/dev/null", 0)                                   3           ✓
open("/proc/self/blah", 0)                             -1 ENOENT   ✓
poll(NULL, 0, 0)                                       0           ✓
poll(&fds, 1, 0)                                       BUG         ✘
poll((void *)1, 1, 0)                                  -1 EFAULT   ✓
read(-1, &tmp, 1)                                      -1 EBADF    ✓
sched_yield()                                          0           ✓
select(0,

System call microbenchmarks

To measure the runtime overhead introduced by the different elements of Chestburster, we ran microbenchmarks on various system calls. In particular, we measured the time from the perspective of the user-mode process. The methodology consists in executing each system call 1000 times in a loop. The monotonic time is measured at the entry and exit of the loop using the clock_gettime() system call with the CLOCK_MONOTONIC clock. The buffer for read and write was set to 256 bytes. The measured values are then normalized by dividing them by the corresponding ones from scenario Baseline, which involves execution in plain emulation with QEMU. Both getpid and close(-1) represent system calls that return rapidly to user space. The system calls creat, open, write, read, lseek, close and unlink are representative of common I/O operations in Linux.
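The measurement loop looks as follows; this is a simplified sketch, with getpid() standing in for the system call under test:

#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 1000; i++)
        getpid();                       /* system call under test */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    int64_t ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL
               + (t1.tv_nsec - t0.tv_nsec);
    printf("%lld ns per call\n", (long long)(ns / 1000));
    return 0;
}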
Coreutils

Coreutils programs are essential components of any embedded Linux-based system. They form a central part of the shell scripts that are used for initializing, updating and maintaining these systems, in particular routers. Using Chestburster, we managed to run 56 out of the 110 binaries from GNU coreutils. The programs that run focus on operations on file content, directory listing, file statistics, conditions, file name manipulation, user information, and system and working contexts. The 56 programs are cat, tac, nl, od, base32, base64, basenc, fmt, pr, fold, head, tail, split, csplit, wc, sum, cksum, b2sum, md5sum, sha1sum, sha224sum, sha256sum, sha384sum, sha512sum, sort, uniq, comm, ptx, tsort, cut, ls, dir, dircolors, dd, stat, sync, echo, true, false, test, expr, basename, dirname, pathchk, realpath, stty, printenv, id, logname, whoami, groups, who, date, hostid, nice and factor.

We conducted a performance evaluation on five coreutils programs: cat, csplit, wc, sha512sum and dd. To obtain accurate results, we calculated the arithmetic mean of five runs, preceded by an additional warm-up execution. Table 5.3 displays the total number of system calls and the most significant ones related to the filesystem. Figures 5.6, 5.7, 5.8, 5.9 and 5.10 provide a breakdown of the overhead composition when taking as input a file of 0, 128, 1024, 65k and 1M random bytes. We did not include the 1 MB case for dd because the program execution ends normally before copying all the bytes. We did not identify the root cause of this issue: no timeout is present and the trace only contains a succession of read and write system calls. However, in the experiment environment, we observed that the bs option has an influence on the number of bytes copied before exiting: switching bs from 512 to 1k moves the point at which dd ends from 130 kB to 930 kB.

The figures show that the majority of the overhead is caused by the forwarding, which also adds an extra order of magnitude compared to the Tracing scenario. Moreover, we do not observe any strong correlation between the file size and the runtime execution. This may be explained by the fact that some of the programs use a fixed-length buffer when exchanging data with the kernel.

Conclusion

In conclusion, this chapter provided insights into various methods for migrating processes from a device to an emulator. We described the implementation of Chestburster for hybrid execution, filtering system calls and forwarding memory to the device. Through its evaluation, we examined the correctness of program execution and measured the performance impact of decoding, filtering and forwarding system calls in various scenarios. Our findings showed that memory forwarding plays a significant role in the overall overhead, particularly due to the larger file sizes handled by the programs.

We aim to further examine the execution correctness by utilizing the Linux Test Project (LTP). It is a comprehensive test suite to validate the functionality of the Linux kernel and in particular the system call interface. By using LTP, we hope to provide additional insights into the accuracy of our forwarding implementation and the robustness of the process migration.

Embedded systems such as routers often include web servers for network services and device management. These systems often use certificates and cryptographic keys stored in NVRAM and offer web access for controlling other devices, such as cameras. Therefore, the performance of Chestburster on these web servers is worth investigating. Another network service that runs on an embedded system is the ipbridge application from the Hue Bridge. This application connects physical devices, such as Zigbee lights, to the local network and cloud services. The security and privacy of these systems are crucial, and it has been shown that they are critical [START_REF] Alrawi | Sok: Security evaluation of home-based iot deployments[END_REF][START_REF] Morgner | All your bulbs are belong to us: Investigating the current state of security in connected lighting systems[END_REF][START_REF] Eyal Ronen | IoT goes nuclear: Creating a ZigBee chain reaction[END_REF][START_REF] Thiery | Privacy implications of switching ON a light bulb in the IoT world[END_REF]. The application communicates with a Zigbee modem that is separate from the Linux system. This typical example illustrates the difficulty of analyzing such a system only in an emulator, as this would lack the context of the environment, such as other devices present on the network. As such, it would be useful to instrument its execution to assess privacy risks and existing vulnerabilities.

The benefits of our approach could be further demonstrated with fuzzing and symbolic execution. By leveraging these vulnerability research techniques, it would be possible to expand the scope to applications requiring a high level of integration with their environment. For instance, more programs now rely on trusted execution environments (TEE) to protect and store secrets such as cryptographic keys and certificates, which were previously stored in NVRAM.
Chapter 6: Bench of Embedded systems Experiments for Reproducible Research

Experiments including physical devices present challenges in terms of accessibility, sharing and reproducibility. In this chapter, we present the reasons to promote the reproducibility of experiments, particularly when physical devices are involved. Additionally, we propose an infrastructure architecture to address these challenges in the context of firmware security analysis.

Motivations

A core aspect of the journey to expand knowledge is experimenting. Experiments help to validate or refute hypotheses. One of the most important attributes of an experiment is reproducibility. Being able to modify experiment parameters is crucial for observing their impact on the results and for making improvements. In this way, experimenters better understand and study the phenomena at play. This is part of the reason why science is an iterative process. By sharing reproducible experiments, other scientists can in turn add their expertise to them and make progress. In particular, an emphasis has recently been put on making research more reproducible in computer science, and in system security in particular [SEARCCH, 22, 177, 187]. For this purpose, conferences promote the publication of the code and data used in the research together with the paper. Several conferences now award badges to publications whose artifacts have successfully been reproduced by an evaluation committee [ACM Artifact Review and Badging v2.0, 92, 131, 134, 178]. These badges characterize the way the artifacts have been audited and how reproducible they are. For instance, the ACM Artifact Review and Badging version 2.0 defines three independent types of badges: Artifacts Evaluated, Artifacts Available and Results Validated. Each describes a qualitative set of requirements applied to the artifacts associated with the research. In addition, it also provides ACM's three definitions to clear up any confusion:

• Repeatability: the measurement can be obtained with stated precision by the same team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same location on multiple trials. For computational experiments, this means that a researcher can reliably repeat her own computation.

• Reproducibility: the measurement can be obtained with stated precision by a different team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same or a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using the author's own artifacts.

• Replicability: the measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently.

Depending on the type of artifacts the experiment manipulates, different challenges may arise at the time of publication. First, it is often the source code that is most easily published with the paper. This brings transparency to experiments by providing the exact operations executed. However, source code alone is not sufficient to make an experiment reproducible.
Dependencies, compilers, and other toolchain elements have a significant impact on the binary produced. The second difficulty with reproducibility is whether, and how, to share the data used during an experiment. Data copyright may raise concerns, while malware studies face ethical issues in openly sharing datasets. User privacy also limits the scope of legitimate traffic, behavior and traces that can be collected and published. More trivially, the size of the dataset has a direct influence on how easily it can be shared. Third, the environment of the experiment has a great impact on its results. Thanks to virtualization technologies, sharing the exact environment for software is often straightforward. In contrast, trade-offs have to be made with hardware. For dynamic analysis, emulation is the primary solution to abstract hardware requirements. But it is not always feasible, as shown by Fasano et al.: the task itself can be onerous, and the variety of devices to support is too broad. To circumvent these obstacles, a different approach is to record the execution into traces that can be replayed later and somewhere else [Dolan-Gavitt et al.; Thomas et al.]. It is however impossible to explore new execution paths with a replay. Both approaches suffer from limitations, highlighting the importance of using physical devices in the analysis loop. Fourth, the experiment setup has an essential influence on the measured results. A wrong configuration can yield a different outcome from the original study. This is a crucial aspect for a fair comparison between methodologies. Experiments interacting with hardware devices introduce four challenges:

1. The cost of the device: How expensive is it? Is the budget available for purchasing it?
2. Availability: Is the device still produced and sold? Is the device available and legal to operate in the user's region of the world?
3. Storage: Where to keep the device? How to make it accessible for use?
4. Safety: Can the device be dangerous when manipulated?

We want to address these last questions by proposing the Bench of Embedded systems Experiments for Reproducible Research, BEERR. It is an infrastructure aiming to ease access to physical devices used in system security publications by making them available remotely. In addition, we want to congregate and propose a collection of experiment code and data that can easily be used with these devices. We focus on dynamic firmware analysis with a strong emphasis on studies looking at interactions with the hardware. We believe BEERR can help future researchers by simplifying access to published experiments, removing the burden of hardware management, promoting fair comparison between studies and being a key to fostering new ideas. The BEERR web interface is accessible at: https://beerr.s3.eurecom.fr.

Overview

Computer Testbeds

To overcome the difficulties in reproducing, sharing and comparing experiments, scientific communities are building testbeds. They take a wide variety of forms depending on their objectives and the problems to be studied.
They can be small robots placed in an arena to study swarm intelligence, as done in the Robotarium [Pickem et al.]; a grid of radio transceivers to analyze wireless interference and performance on different topologies (R2Lab [Parmentelat et al.]); platforms to inspect IoT device interactions within an environment (FIT IoT-LAB [Adjih et al.] and CorteXLab [Massouri et al.]); or facilities to study distributed systems, networking and services, such as Emulab [White et al.], CloudLab [Duplyakin et al.], PlanetLab [Peterson et al.] and, more recently, EdgeNet [Simioni et al.]. To simplify administrative work and management, and to be accessible to as many people as possible, front-end projects have been created to federate multiple testbeds [3, Berman et al.] and offer a common interface. A recent example is the Fed4FIRE+ project, which ran for 5 years. It gathered 20 different testbeds in Europe and led to many publications [3].

Continuous Integration (CI) helps developers automatically merge their changes into the main development tree. However, before merging, the changes need to be tested to minimize new bugs or regressions. Embedded systems are known to be a very heterogeneous ecosystem. It is therefore difficult to know upfront whether a specific code change will work on all the hardware the project wants to support. An example of such a case is the Linaro Automated Validation Architecture (LAVA), which aims to test deployments on Linux-based systems for the ARM architecture.

In the context of system security, building a testbed is an effective approach to studying a system [Fowler et al.; Oruganti et al.; Siboni et al.; Yamin et al.]. A diverse range of applications is feasible: benchmarking [Metzman et al.], training [EPIC; SWaT], or investigating multi-stage attacks on distributed systems. In other words, all situations where the reproduction of the environment is crucial to understanding the events at stake.
Examples of such cases are often related to networking attacks like DDoS [Hussain et al.], industrial control systems such as SCADA [EPIC; SWaT; Mocanu et al.; Qassim et al.], or the Internet of Things [Siboni et al.], where physical devices communicate with external services often located on the Internet.

Fuzzing has recently attracted considerable interest in software testing and vulnerability research because of its efficiency in bug finding. Yet establishing good methodologies and metrics to compare fuzzing techniques remains a challenge for researchers. For this reason, Google proposes a service called FuzzBench [Metzman et al.] to evaluate and compare fuzzers against a set of benchmarks.

Architecture

BEERR is divided into two parts. The first part is called the front node. It hosts the website with the scheduler and stores the experiment code and data. The website lets the user register an account and submit tasks to the scheduler. The latter takes care of allocating and setting up the resources as well as opening the connection to the selected experiment node. Code and data for experiments are combined into portable container images which are stored in a local registry. The second part is composed of independent experiment nodes. Each of these nodes holds a gateway, which is the user's main access point to the experiment node. From it, the user can interact with the available devices, upload files, download container images from the local registry and run the analysis. Other components such as debuggers and power switches help control the state of the device under study.

The initial design is strongly inspired by existing testbeds, in particular R2Lab [Parmentelat et al.] and FIT IoT-LAB [Adjih et al.], because they also target embedded systems. However, the type of analysis we want to perform is very different. Unlike R2Lab with its anechoic chamber, we do not aim to study radio propagation across different nodes in a controlled environment. Similarly, contrary to FIT IoT-LAB, there is no need to work with sensor networks, routing protocols or distributed applications on distinct topologies. As shown in the survey (Section 3.1), the analysis we target focuses on the firmware and code closely coupled with low-level hardware, rather than on a pool of different devices. It may require powerful computing resources to handle a partial emulation of the system and its analysis. That is why we made the trade-off of dividing the testbed into independently bookable experiment nodes, each with a more or less powerful gateway. This design still offers the possibility to build systems composed of multiple devices behind a single gateway.
User Workflow

The typical workflow first involves creating an account on the website and submitting an SSH public key. The user then has to wait for the validation of his account by the person in charge of his affiliation. To create such a group of users (e.g., per university or company), an application explaining their motivations has to be sent to the administrators. Access is free of charge and mainly targets scientific activities. The creation of an experiment requires selecting an experiment node, a time slot and a duration, selecting an image to boot the gateway, and optionally providing a public link to a git repository. This repository is a way for the user to prepare the experiment code ahead of the time slot. The repository may contain a Dockerfile, which is built and stored in the local registry. When the booked time comes, the gateway is powered on with the selected image and an SSH connection is opened between the front node and the gateway. The user can use this access to start a shell on the bare-metal gateway with root permissions. He can upload files, and install and configure the gateway as needed. No direct Internet access is provided on the gateway from our infrastructure; users can use SSH tunneling to share their own Internet connection. Finally, the front node closes the SSH connection, resets the devices, cleans up the gateway disk and powers everything off. The status of previous experiments is displayed on the website.

Implementation

Front Node

The front node hosts the website and the database with user and experiment data such as credentials, SSH keys, affiliation, scheduled experiment times, and repository links. The scheduler is integrated into the back end via the Advanced Python Scheduler library. A local Docker registry stores the experiments in the form of Docker images. They are built locally from the git repository link provided by the user on experiment submission. The gateways boot over the network to facilitate image selection and management. We use dnsmasq because it offers a complete and lightweight solution with DNS caching, DHCP and TFTP servers, and is easy to configure. The root filesystem is mounted using a union filesystem, OverlayFS. The read-only bottom layer is on an NFS server on the front node, while the read-write upper layer is on the local disk. In this way, images are easily added and updated by administrators while users can modify the system. Modifications are cleaned at the end of the experiment by removing the writable upper layer, as illustrated by the sketch below.
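The following minimal C sketch shows how such a root filesystem can be assembled with a single mount(2) call. The layer paths are illustrative assumptions, not the actual paths used by BEERR, and the same effect can be obtained with the equivalent mount -t overlay shell command.

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* Read-only base image exported over NFS below,
     * writable layer on the local disk above. */
    const char *opts = "lowerdir=/nfs/images/base,"
                       "upperdir=/local/overlay/upper,"
                       "workdir=/local/overlay/work";

    if (mount("overlay", "/mnt/root", "overlay", 0, opts) != 0) {
        perror("mount");
        return 1;
    }

    /* Deleting /local/overlay/upper after the experiment
     * restores the pristine image for the next user. */
    return 0;
}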
Experiment Nodes

Experiments are assembled into individual container images. We chose this solution because it facilitates the packaging, sharing, setup and resetting of experiments and parts of their environment. The essential advantage is that using the precise software versions from the time of publication circumvents the hard task of maintaining experiments across different environments. A corner case occurs when a special kernel version is required. In this case, because the user has root shell access on the bare-metal gateway, he is free to use a virtual machine. Nevertheless, we did not observe such a situation in our survey (Section 3.1). BEERR provides a set of pre-built images in the local registry. This allows the user to use them directly or as a base to build new images.

The first set of experiments we propose to reproduce is from the paper Avatar2: A Multi-target Orchestration Platform [Muench et al.]. The second set targets the paper What You Corrupt Is Not What You Crash: Challenges in Fuzzing Embedded Devices [Muench et al.] (see Section 3.1). In addition, we make available the already existing avatar2 examples. The experiments have been grouped into categories of nodes according to their nature and the devices they use. A node is composed of a gateway, where the analysis is executed, and one or multiple devices, which are the subjects of the analysis. The user has root shell access on the bare-metal gateway, where he can freely interact with the devices and build and run container experiments. The gateways are either a Raspberry Pi 4 Model B 4GB or an Intel NUC BOXNUC8I5BELS1 with an Intel Core i5-8260U CPU (quad-core 1.6/3.9 GHz, 8 threads, 6M cache), 16GB DDR4 and a 250GB NVMe SSD. The Raspberry Pi has the advantage of being relatively cheap, space efficient and power efficient. Yet its processor architecture is based on the ARM instruction set, which might cause compatibility problems with some experiments. Indeed, most research experiments work in a limited environment where portability is not always the main objective. For example, the PANDA emulator only supports host machines with the x86_64 architecture. In this situation, the experiment needs to be carried out on the second node type, which uses Intel NUC computers. We therefore use two categories of experiment nodes:

• Alpha nodes. The Alpha nodes are designed as low-cost entry nodes. They also allow new users to familiarize themselves with the environment through a set of basic experiments. Each is composed of a Raspberry Pi 4 with Nucleo boards and a USB switch.

• Bravo nodes. The Bravo nodes are more powerful and are used when rehosting with heavyweight emulation is needed. These experiments rely on an Intel NUC (instead of a Raspberry Pi). This makes it possible to run more complex experiments or experiments relying on tools that are not portable to the ARM architecture, e.g., those that rely on the PANDA emulator.

Physical devices

The following devices are available on the nodes. We refer to Table 3.2 in Section 3.1 to show which experiments they allow reproducing.

• STM32 Nucleo L152RE (ARM Cortex-M3) and F072RB (ARM Cortex-M0) boards,
• Nordic Semiconductor nRF51-DK for BLE (ARM Cortex-M0),
• Cypress CYW920735Q60EVB-01 BLE Evaluation Kit (SoC CYW20735 with ARM Cortex-M4),
• Allen Bradley 1769-L16ER-BB1B CompactLogix PLC.

In addition, other components are present to control the state of devices:

• Yepkit YKUSH, a USB switchable hub to turn a USB link on and off,
• a USB programmable surge protector to power devices through a power plug,
• SEGGER J-Link debug probes to interface with JTAG ports.

6.4 Discussion

Infrastructure security

BEERR is mainly intended for the scientific community, without excluding other potential collaborations. The affiliation link makes users accountable for their conduct. It is a best-effort approach where we rely on good behavior from the users. We plan to treat malicious behavior on a case-by-case basis and terminate the corresponding user or affiliation accesses. We nevertheless implemented minimal mechanisms to protect fair access to the service and preserve the integrity of the system.
Time spent using BEERR is divided into 55-minute time slots, with daily quotas, to avoid monopolizing all resources. To preserve a clean state at the start of a session, we use an overlay filesystem to store modifications. We do not protect against attacks attempting to modify the gateway bootloader. Systems under test are also not protected against modifications. This is a desired behavior, as the user might need to flash different firmware. However, nothing prevents permanent modifications, such as blowing a fuse. We have chosen, as a first step, to give freedom to the user rather than restrict his capabilities. We trust users to do their best not to brick the devices. If users have specific requirements, we encourage them to contact us to discuss their feasibility. In the future, as more expensive or harder-to-operate devices are incorporated, we might consider implementing a group policy for access to categories of experiment nodes.

Chapter 7: Conclusion and future work

Throughout this thesis, we explored various methods for improving the execution of firmware in an emulator by including the device in the analysis, a technique known as hardware-in-the-loop. This approach addresses the limitations of emulation support for specific hardware such as peripherals, while providing a degree of execution fidelity and accuracy close to that of the original environment. The BEERR testbed aims to tackle the challenge of hardware accessibility for security analysis, and rehosting in particular.

Through Chestburster, we focused on Linux user-mode programs and examined how the system call interface enables hybrid execution of processes between the emulator and the device. This presents numerous challenges due to the diversity and complexity of the system call interface and the limited control over the device and its running kernel. Indeed, Linux has approximately 380 system calls, and we have been able to confirm that forwarding works for about a hundred of them. To achieve this, we proposed a novel technique that can be implemented without making any modifications to the program or kernel and is versatile enough to be considered for other POSIX systems. This system-call forwarding method can be further improved to achieve a more comprehensive hybrid execution, drawing inspiration from research on distributed operating systems. Moreover, this distributed execution opens up new perspectives and applications, enabling more sophisticated security analyses of firmware devices such as fault injection, reverse engineering, fuzzing, taint analysis and symbolic execution.

Distributed operating systems

Distributed operating systems aim to provide user programs with a unified view of a computer cluster. They offer the possibility to migrate processes and forward system calls to their original kernel. MOSIX [Barak et al.; Lottiaux et al.] describes a group of mechanisms to convert a UNIX system into a distributed operating system. It divides a program into two parts: the user context, called the remote, and the system context, called the deputy. It allows the remote to be migrated to any node, while the deputy has to stay on its original home node. Instead, our approach keeps two system contexts synchronized, allowing certain system calls to be executed locally.
Gobelins [Lottiaux et al.; Morin et al.; Vallee et al.] inserts a middleware between the Linux kernel's virtual and physical layers. This allows various events to be intercepted and redirected to multiple nodes, enabling process migration. To achieve this, Gobelins modifies core kernel structures such as task_struct, mm_struct, vm_area_struct and file_struct. Plan 9 [Pike et al.; Van Hensbergen et al.] reduces the total number of system calls by presenting all system resources and services as files. In this way, a remote procedure call (RPC) mechanism, the 9P protocol, is used to access both local and remote resources indifferently.

Although distributed operating systems share many interesting techniques suitable for HIL firmware security analysis, such as process migration and system call forwarding, they present significant drawbacks hindering their reuse as-is. These limitations include a main focus on CPU-intensive workloads, a lack of support for certain I/O operations [Barak et al.; Lottiaux et al.], the absence of a filtering mechanism to choose between local and remote system calls, and a lack of emulation support for non-x86 architectures. Additionally, the closed-source nature of the available systems makes it difficult to extend their capabilities. Furthermore, implementing system call forwarding through kernel patches is not practical in the context of binary firmware analysis, though it appears that MOSIX release 4 has fully reimplemented its mechanisms in user mode [Barak et al.].

In our context, process migration is limited by the inability to capture the entirety of the process resources without instrumenting the kernel. Modifying the kernel on the device is challenging. Debugging ports may not be available, and the necessary toolchain and kernel headers to compile kernel modules may not be accessible. Generating a compatible toolchain and kernel headers requires identifying the kernel configuration and accurately reconstructing the layout of various data structures [Pagani et al.]. While this has been achieved to some extent for memory forensics profiles, it remains a challenge for recompiling kernel modules. Therefore, migrating processes in a limited context, where control over the kernel is not available, remains a challenge.

Performance

Performance plays a critical role in the usability of any system call analysis. As seen in the state of the art (Section 3.3), there is an incentive to minimize the runtime cost of interception and instrumentation techniques. With Chestburster, we aimed to demonstrate the feasibility of the approach and to explore its ability to handle various instrumentation scenarios related to tracing and forwarding.
To this end, the architecture is designed around an orchestrator developed in Python that controls simple targets. However, the evaluation has highlighted the main overheads, and we discuss potential improvements to enhance the overall performance. The general instrumentation of system call tracing, decoding and filtering could benefit from being integrated into the sysforward library. This would shift decoding and filtering from interpreted to compiled execution, in addition to reducing the number of messages exchanged with the orchestrator.

The implementation of the filtering with a dictionary of rules could be improved with a state-of-the-art filtering mechanism. For instance, Linux uses eBPF filters to monitor events in the kernel, including system calls. This provides greater flexibility in filtering system calls because, unlike rule-based systems, filters are small programs that can perform operations. These filters can be supported using a virtual machine or just-in-time (JIT) compilation, similar to the way seccomp-bpf operates in the kernel. However, this requires more effort from the analyst to write suitable filters, compared to the ease of using static rules. Therefore, efforts towards automating the creation of filters through static or dynamic analysis hold potential benefits. Previous works [DeMarinis et al.; Ghavamnia et al.] have pursued a similar goal of enforcing the principle of least privilege by reducing the number of authorized system calls for an application.

The memory forwarding in Chestburster could be enhanced by incorporating additional techniques discussed in Section 4.2.4. In particular, the use of userfaultfd and libsigsegv to intercept accesses to specific memory regions would facilitate synchronization between the device and the emulator; a sketch of the userfaultfd mechanism follows below. This is currently a limitation of Chestburster's design, which has an asymmetric capacity to initiate interactions between the process and the device environment. For instance, while a read() system call is an action started by the process, a write to shared memory by another process on the device may not be transmitted to the emulator.
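The following minimal sketch illustrates how userfaultfd can deliver missing-page faults on a registered region to a user-mode handler that fills the page on demand. It is a sketch under the assumption that a hypothetical fetch_page_from_device() helper performs the actual transfer from the physical device; error handling is elided for brevity.

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <pthread.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

#define PAGE_SIZE 4096UL

/* Hypothetical helper: fetch one page of device memory at addr. */
extern void fetch_page_from_device(unsigned long addr, void *buf);

static void *fault_handler(void *arg)
{
    int uffd = (int)(long)arg;
    static char buf[PAGE_SIZE];
    struct uffd_msg msg;

    for (;;) {
        /* Block until the kernel reports a fault in the region. */
        if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
            continue;
        if (msg.event != UFFD_EVENT_PAGEFAULT)
            continue;

        unsigned long addr = msg.arg.pagefault.address & ~(PAGE_SIZE - 1);
        fetch_page_from_device(addr, buf);

        /* Resolve the fault by installing the fetched page. */
        struct uffdio_copy copy = {
            .dst = addr,
            .src = (unsigned long)buf,
            .len = PAGE_SIZE,
            .mode = 0,
        };
        ioctl(uffd, UFFDIO_COPY, &copy);
    }
    return NULL;
}

int watch_region(void *base, size_t len)
{
    int uffd = syscall(SYS_userfaultfd, O_CLOEXEC);
    struct uffdio_api api = { .api = UFFD_API };
    ioctl(uffd, UFFDIO_API, &api);

    /* Missing-page faults in [base, base+len) are now reported. */
    struct uffdio_register reg = {
        .range = { .start = (unsigned long)base, .len = len },
        .mode = UFFDIO_REGISTER_MODE_MISSING,
    };
    ioctl(uffd, UFFDIO_REGISTER, &reg);

    pthread_t tid;
    pthread_create(&tid, NULL, fault_handler, (void *)(long)uffd);
    return uffd;
}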
Moreover, the ioctl system call lets driver programmers create custom interfaces, which complicates the identification of the memory blocks to forward during the system call. To address this challenge, taint tracking and symbolic execution could be applied to driver binaries to recover the interface and determine which memory blocks to forward. To further improve performance and remove the need to continuously forward system calls involving device interactions, a caching mechanism could be introduced. Kammerstetter et al. proposed this through runtime program state approximation. To uniquely identify a peripheral access in the cache, the program state, consisting of the CPU context and stack memory, is hashed as the key for the cache entry. This approach has the benefit of considering the state of the program during character device accesses, but it does not fully capture the driver behavior.

Execution traces could help build models from recorded data, reducing the number of interactions with the device and potentially eliminating the need for forwarding in certain circumstances. Similar work has been done, but for monolithic firmware [Gustafson et al.; Spensky et al.]. Another option is the augmented process emulation method [Zheng et al.], which alternates execution between user mode and full-system emulation to issue system calls not supported by the host kernel. Enhancing hybrid execution by offering a choice between the three modes of execution (native on the device, user-mode emulation and full-system emulation) would grant the possibility to benefit from the best of each. However, similar to the challenge of automatically generating filters, there would need to be a decision-making process to determine when and why to switch, based on the application behavior. Finally, for completeness, symbolic execution could also be used to assist in inferring a peripheral model.

Application

POSIX systems

Type-II firmware may also implement the POSIX interface, as is the case for real-time operating systems like FreeRTOS, VxWorks, QNX, and eCos. While the hybrid execution approach could be beneficial for these systems, a major challenge lies in the executor component. Currently, it is implemented using a debugger and the ptrace system call, which is not part of the POSIX API. Therefore, alternative methods for replaying system calls need to be explored.

Security testing techniques

The ability to run the firmware in an emulated environment is just a starting point for dynamic analysis. It opens up the possibility of conducting binary security testing techniques such as fuzzing and symbolic execution. Fuzzing efficiency in bug finding relies on the capacity to execute a large number of inputs in a short period of time. However, this can be hindered by the overhead of intercepting, filtering and forwarding system calls and of synchronizing the memory. It is therefore crucial to focus on improving performance and scalability. Additionally, fault injection at the system call level can be used to evaluate the responsiveness of user mode applications [Gario; Woodruff; Zhang et al.]. Tampering with system call arguments and return values provides opportunities to exercise error handling code paths [Levin]; a minimal sketch of this technique follows below. It can be used in combination with fuzzing to improve test coverage [Peng et al.]. In embedded systems, fault injection can be utilized to test proprietary peripherals that are challenging to evaluate through other means.
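As a concrete illustration of system-call fault injection, the following x86-64 sketch uses ptrace to intercept every write() of a traced child and overwrite its return value with an error. The choice of write() and -ENOSPC is an illustrative assumption, and this is not Chestburster's own implementation.

#include <errno.h>
#include <sys/ptrace.h>
#include <sys/syscall.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2)
        return 1;

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: ask to be traced, then run the target program. */
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execvp(argv[1], &argv[1]);
        _exit(1);
    }

    int status, entering = 1, target = 0;
    waitpid(pid, &status, 0); /* initial stop after execvp() */

    for (;;) {
        /* Resume until the next system call entry or exit. */
        ptrace(PTRACE_SYSCALL, pid, NULL, NULL);
        waitpid(pid, &status, 0);
        if (WIFEXITED(status))
            break;

        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, pid, NULL, &regs);

        if (entering) {
            /* Entry stop: orig_rax holds the syscall number. */
            target = (regs.orig_rax == SYS_write);
        } else if (target) {
            /* Exit stop: inject the fault by rewriting rax. */
            regs.rax = -ENOSPC;
            ptrace(PTRACE_SETREGS, pid, NULL, &regs);
        }
        entering = !entering;
    }
    return 0;
}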
Recovering semantics

The approach to decoding could be further improved to recover the system call semantics, leading to a deeper understanding of what the program is doing. Syzkaller is a fuzzer developed at Google that targets kernels through the various interfaces exposed via system calls. To facilitate program manipulation during the generation, mutation, minimization and validation phases, a description language called syzlang has been created. Programs are described as sequences of system calls. Each tested interface requires writing its specification manually and upfront. Listing 7.1 is an extract taken from the specification of the Linux audio MIDI interface.

The description language does not directly describe system calls, but rather the operations performed on, and the data exchanged with, the interface. Thinking at this higher level makes it possible to take into account relationships within the data. For instance, in the write$midi operation, the relationship between the second and third arguments is expressed as len bytesize[data]. In this way, it is possible to add more meaning to a trace of system calls, both by gathering the sequence of calls to the interface and by linking the data transferred with the kernel across calls. This would benefit a human analyst, who could read a trace and more easily understand what the program does, but also an automated tool, which could more comfortably reason about and operate on the trace, similarly to what syzkaller already does for its test program generation. Thus, this would be useful, first, for dynamically reverse engineering programs from their execution traces. For instance, semantics reconstruction from low-level traces has always been a difficult problem, as highlighted in [Nisi et al.]. Second, the idea may enhance the work of forwarding system calls for rehosting: being able to construct a model of the interface would remove the burden of forwarding system calls to the device.

Kernel drivers

Future work could also involve applying a similar hardware-in-the-loop approach to kernel drivers. The concept remains the same: using remote procedure calls (RPC) to transfer the control flow and copying the necessary memory. Despite the lack of a stable internal API in the kernel, drivers rely on exported symbols for linking at load time. If low-latency requirements are not a concern [Charm], it may be possible to intercept these functions and perform RPC on them. This would enable deeper introspection into kernel driver execution without the need to boot the entire kernel [Jiang et al.; Liu et al.]. It could make it easier to identify vulnerabilities through fuzzing and symbolic execution. However, a major challenge would still be executing and replaying the calls within the kernel on the device.

Testbed

The BEERR testbed aims to address the challenge of hardware accessibility for hardware-in-the-loop security analysis by providing remote access and experiment sharing. However, the true advantage of HIL lies in the integration of the device into an environment. BEERR does not consider this aspect beyond the different components present on the same board. Setting up a full environment consisting of multiple devices that interact with each other remains a significant challenge.
This raises questions about how to control the full environment, track its state, take snapshots, restore them, and have visibility into every aspect of the testbed, including the execution of firmware and wireless transmissions.

Figures and tables

Figure 3.1: Costin'16 rehosting approach for web services.
Figure 3.2: Firmadyne rehosting approach for web services.
Figure 3.3: FirmAFL rehosting approach for web services.
Figure 3.4: EQUAFL rehosting method.
Figure 3.5: FirmCorn rehosting approach.
Figure 3.6: PROSPECT rehosting approach.

Table 3.1: Experiments description in surveyed papers.
Wycinwyc [124]: study of the effects of memory corruption on different classes of embedded systems (Beaglebone Black, Linksys EA6300v1, Foscam FI8918W, STM32 L152RE); measurement of mitigation execution overheads with fuzzing (STM32 L152RE).
Avatar2 [123]: reproduction of an existing study (Allen Bradley 1769-L16ER-BB1B CompactLogix PLC); state transfer between concrete and symbolic execution modes, recording of firmware execution, and forwarding of memory accesses between emulators (STM32 L152RE).
Avatar2 examples: state transfer between different targets (STM32 L152RE); state transfer and peripheral modeling (nRF51-DK).
Avatar [193]: backdoor detection in a masked ROM bootloader (Seagate ST3320413AS HDD); vulnerability research in a commercial Zigbee device (Redwire Econotag); helping reverse engineering of the GSM stack of a phone (Motorola C118).
Charm [170]: feasibility (how long it takes to port a new driver); performance (driver fuzzing with Syzkaller, driver initialization); record-and-replay (recording a bug PoC, measuring execution overhead); bug finding (fuzzing with Syzkaller, sanitizing with KASAN); vulnerability analysis with GDB (CVE-2016-3903, CVE-2016-2501, CVE-2016-2061); building a driver exploit using GDB (Nexus 5X).
Prospect [97]: performance impact of forwarding driver accesses using strace (324 MHz embedded Linux MIPS with 16 MiB RAM); case study on a proprietary fire alarm system via network fuzzing (hardware not disclosed).
Surrogates [101]: measurement of the performance impact of MMIO forwarding (Pico Computing E17FX70T, custom JTAG adapter board, custom JTAG breakout/debug board).
Inception [52]: vulnerability detection via synthetic tests (Klocwork Test Suite) and validation tests (53200 tests) (FriendlyARM Mini2440, LPC1850-DB1, STM32 L152RE); comparison with binary-only approaches and timing overhead (Dhrystone benchmark and real-world applications) (Xilinx ZedBoard FPGA); comparison of semantics recovered from a binary to the source code with libopencm3 (STM32 L152RE); security flaw detection with Juliet Test Suite 1.3 on FreeRTOS; analysis of products during the development phase (bootloader, chip SDK, payment terminal; hardware not disclosed).
Mousse [105]: performance evaluation (Pixel 3); coverage measurement (Nexus 5X); bug and vulnerability research (Nexus 5).
Pretender [86]: generation of models for hardware peripherals (record, build and emulate) (STM32 L152RE, STM32 F072RB, Maxim MAX32600MBED).
Conware [165]: generation of models for hardware peripherals (record, build and emulate) (Arduino Due, Atmel SMART SAM3X/A).
Frankenstein [153]: heap overflow in device inquiry (CVE-2019-11516); heap overflow in the reception of BLE PDUs (CVE-2019-13916); heap overflow on the ACL packet buffer (CVE-2019-18614) (CYW20735, CYW20819).
FirmCorn [85]: accuracy between virtual execution approaches; efficiency (nbench benchmark); stability; effectiveness (D-Link DIR-816, DIR-629, DIR-859, DIR-823G; TP-Link WR940N, WR941N; Ezviz C6C; Dahua HFW5238M, HFW3236M).
Incision [174]: correctness (control flow extraction, region inference, database improvement and error correction); real-world usability (emulation of a Renault BCM, analysis of the cryptography of the Huawei R216h); human effort (qualitative measure of the complexity of manual intervention in database correction) (Huawei LTE R216h (ARMv7), Renault BCM (Renesas V850ES)).

Table 3.2: Artifacts status in hardware-in-the-loop papers surveyed, giving for each paper the artifact link, its packaging (source code, container or virtual machine), and the hardware, tool and dataset availability.

Comparison table of rehosting approaches (columns: rehosted element: process, filesystem, generic kernel, custom kernel; execution mode: CPU emulation, user-mode emulation, chroot, full-system emulation; other characteristics: system call forwarding), with rows from Costin'16 [54] onward.

Table 3.5: Example system calls for various Linux components.
Filesystem: open, read, write, close, mkdir, rmdir, rename
Process management: execve, fork, prctl
Address space management: mmap, brk
Signal handling: rt_sigaction, rt_sigprocmask, rt_sigreturn
Peripheral control: ioctl
Architecture specifics: arm_set_tls

Table 3.6: Summary of Linux hooking methods, covering the hook mechanism (trap, trampoline, substitution, branch condition), insertion method (static, dynamic), hook and instrumentation locations (user, kernel, external), and methods such as library injection, binary rewriting, ptrace, seccomp, Linux tracepoints, kprobes, uprobes, character devices, kernel patching, hypervisors, hardware tracing and hardware debugging.

Figure 4.1: Philips Hue devices in the home network.
Figure 4.2: Set of protocols used by the Philips Hue system.
Figure 4.3: Hue Bridge block diagram.
Figure 4.4: Proposed system call forwarding approach.

Listing 4.4: Examples of system call prototypes that exchange user space data.
int gettimeofday(struct timeval *restrict tv, struct timezone *restrict tz);
int clock_gettime(clockid_t clockid, struct timespec *tp);

Figure 4.5: The tracer state machine.
Figure 4.6: The orchestrator state machine.

Table 4.1: Linux user space memory access API.
get_user: macro to copy a variable from user space.
put_user: macro to copy a variable to user space.
strncpy_from_user: copy a string of at most count length from user to kernel space.
strlen_user: return the size of a string from user space.
clear_user: zero n bytes of memory in user space.
access_ok: check whether a pointer is valid for user space access.
copy_from_user: copy size bytes of memory from user to kernel space.
copy_to_user: copy size bytes of memory from kernel to user space.

Table 4.2: Summary of memory forwarding approaches (Naive, Ideal, Heuristic, Record, Model-based and Static analysis), with the use case and limitation of each.

Figure 5.1: Chestburster architecture.
Figure 5.2: Sequence diagram illustrating the forwarding of a system call during parallel execution.
Figure 5.3: Impact of Chestburster on common system calls (a).
Figure 5.4: Impact of Chestburster on common system calls (b).
Figure 5.5: Composition of overhead for common system calls.

Figures 5.3 and 5.4 present the normalized runtime overhead, while Figure 5.5 highlights the composition of the overhead. The Forwarding scenario adds an extra order of magnitude compared to the Tracing scenario, which itself represents three orders of magnitude compared to the Strace scenario. In contrast, the native QEMU system call logging adds an overhead of a factor of 10.

Table 5.3: Principal system calls issued by evaluated coreutils programs.

Figure 5.10: Composition of overhead for dd.
Figure 6.1: BEERR architecture.
Figure 6.2: An example layout for a Bravo node.

Listing 7.1: Syzkaller interface specification (extract from syzkaller/sys/linux/dev_snd_midi.txt).
[...]
syz_open_dev$midi(dev ptr[in, string["/dev/midi#"]], id intptr, flags flags[open_flags]) fd_midi
write$midi(fd fd_midi, data ptr[in, array[int8]], len bytesize[data])
read$midi(fd fd_midi, data ptr[out, array[int8]], len bytesize[data])
ioctl$SNDRV_RAWMIDI_IOCTL_PVERSION(fd fd_midi, cmd const[SNDRV_RAWMIDI_IOCTL_PVERSION], arg ptr[out, int32])
[...]
snd_rawmidi_params {
    stream flags[sndrv_rawmidi_stream, int32]
    buffer_size intptr
    avail_min intptr
    no_active_sensing int32:1
    mode int32
    reserved array[const[0, int8], 12]
}
[...]
define SNDRV_RAWMIDI_IOCTL_STATUS32 _IOWR('W', 0x20, char[36])
[...]

Notes

1. The availability of the devices has been checked on google.com, amazon.com, digikey.com and ebay.com as of 2021/12/15.
2. https://github.com/UoBAutoSec/INCISION
3. The cyclomatic complexity measures the complexity of a piece of code by counting the number of linearly independent paths; it is calculated from the number of conditional statements, such as if statements and loops.
4. A notable exception are system calls like gettimeofday or clock_gettime, which are dispatched in user space via the ELF virtual Dynamic Shared Object (vDSO).
5. SystemTap is a tool for writing scripts that collect information on a running Linux system.
6. The replacement of the traditional BPF virtual machine with the extended BPF (eBPF) version has been a topic of debate within the community, due to potential security concerns [10].
7. We do not consider the hypervisor in our case; it would lie between the hardware and the kernel.
8. https://www.8devices.com/products/lima

Acknowledgements

I would like to take this opportunity to share my sincerest thankfulness to those who have supported me throughout my PhD journey.
00411204
en
[ "info.info-mc" ]
2024/03/04 16:41:24
2009
https://hal.science/hal-00411204/file/chaumette-esmart-smartmobility-09.pdf
J Albert S Chaumette email: [email protected] D Dubernet J Ouoba SIM-Mee Mobilizing your social network Introduction Social Networks such as LinkedIn or FaceBook offer the opportunity to set up a group of friends or professional relationships .In such systems, the interaction most of the time takes place through a central Web site that can be accessed from a browser or from a mobile phone. In some sense the relationship thus remains virtual, disconnected from the real world. SIM-Mee bridges the gap between social networks and the real world, by making it possible to discover and interact with the members of your own network who happen to be geographically close to you in the real life, still ensuring privacy. From a technical point of view, it illustrates the convergence of several technologies (NFC, SAT, SCWS, Bluetooth) inside a single community oriented service. SIM-Mee relies on the secure exchange of virtual business cards, including signatures, each card being stored in the USIM of its owner's mobile phone. Exchanging cards is achieved in a contactless secure manner (by using NFC), by touching one phone with the other. Based on these cards and signatures, SIM-Mee offers a mobile social networking service in which a SIM-Mee user is able to discover those of his friends (i.e. the persons of who he owns the virtual business cards) who are in his neighborhood. The list of discoverable friends and the list of friends the user agrees to be discovered by are manageable through and address book like system (by means of a Servlet embedded in the USIM). This system can be used for instance in a train or an airport. One can select the persons he agrees to talk with, and if they are also present in the train/airport then both parties will receive a message saying that they are close to each other. They will then be able to initiate a classical call/text message and eventually meet somewhere in the train/airport. Main features and underlying technologies The main features that are supported are: Business card management (uses SAT, SCWS and NFC): The necessary support to edit and exchange business cards is provided. The edition of the user's personal business card takes place through a SAT[1] menu while the other cards are displayed using an embedded web application (SWCS [START_REF]SWCS: Smart Card Web Server[END_REF]). The cards are exchanged by NFC [START_REF]NFC: Near Field Communication Handbook Syed[END_REF]. Security and privacy (uses asymmetric cryptography[4]): Security and privacy are achieved by using asymmetric cryptography. Each user is provided with a pair of secret/public keys. His public key is part of his business card, while its secret key remains in his USIM card. It makes it possible to authenticate a user and to cipher communication. NFC power off: We have added a feature to put the application (more precisely its applet part) in « power off mode » in order to prevent a business card from being caught by any reader without the authorization of its owner. Without this feature, as soon as a mobile phone would get in reach of any reader, it would respond to any request to exchange its business card, which is highly intrusive. Discovery of contacts: The discovery of contacts makes it possible to work contacts of a user (the persons of which he have a business card) who are in his geographical area. This is achieved in a secure way by using the keys associated to each user (the public key of a user being part of his business card). 
High level architecture As explained above and show figure 1, SIM-Mee illustrates the convergence of different technologies. From a practical point of view, it consists of two parts: first, a Java Card Applet [START_REF]Java Card applet developer's guide[END_REF] which provides secure transfer (using NFC) and storage of business cards, and business card management (using the USIM embedded Smart Card Web Server [START_REF]SWCS: Smart Card Web Server[END_REF]); second, a J2ME MIDlet [START_REF]The Java Micro Edition technology[END_REF] that makes it possible for a user to discover those of his contacts who are in his neighbourhood. Figure 1-The global architecture of SIM-Mee Usage scenario and security considerations In this section we assume two mobile phones, A and B, and explain the interactions that take place between the different components of the SIM-Mee architecture, i.e. the applet, the MIDlet and the user of both A and B. These interactions rely on the following pieces of information: • Applet (of mobile) A knows: its own phone number; a pair of secret/public keys : SK A and PK A ; the phone number of (mobile) B; the public key of B: PK B . • Applet (of mobile) B knows: its own phone number; a pair of secret/public keys: SK B and PK B ; the phone number of (mobile) A; the public key of A: PK A . Phase 1. Business Card Management and Exchange A user creates his business card (that will be stored in the USIM of his phone) by providing his name and his first name. The phone number that is also part of this business card is retrieved from the USIM card. Once this card has been created it is possible to exchange it with another phone. To achieve this process, the user must set his NFC enabled phone in tag mode. The second phone involved in the process must be set in reader mode (more precisely its NFC chip) to receive the card. The two phones need to be physically put close to one another for the communication to take place. It should also be noted that SIM-Mee makes it possible to manage the business cards stored in the local USIM through the embedded Smart Card Web Server. When the web server is launched with the proper parameters (address, port and application name), a web page containing the business cards stored in the USIM are displayed and can be managed. Phase 2. Contact Discovery It is now possible to send an announcement message to discover those persons listed on the contact list who are present in the neighbourhood. This process relies on a MIDlet which communicates with the applet (to acquire the contact list, and for message encryption/signature)) via APDU commands (through the JSR 177 [START_REF]Security and Trust Services PAI for J2ME[END_REF] API). The MIDlet uses Bluetooth[8] to communicate any reachable contact. For example assume user A wants to check if used B is around. User A, who is running MIDlet A on his mobile phone, selects user B in his contact list and initiates the neighbourhood search. Let us recall that PK A and PK B respectively stand for the private key of user A and user B, as stored in their USIM. MIDlet A locally retrieves the message M1 = PK B (phone_number_A, random1) from Applet A and sends it to its neighbours. The neighbours are all the available Bluetooth devices that host the SIM-Mee service. The nounce random1 is used to prevent replay. If user B is in the neighbourhood, Midlet B receives M1 and forwards it to applet B for verification purpose. 
Applet B deciphers message M1 (the other neighbours will not be able to decipher it and will thus ignore the message and will consequently neither be discovered nor be aware of the presence of A) and verifies that user A is in the set of persons B wants be visible for. The other devices in the neighbourhood receiving M1 are not alerted since they are not the target of the announcement message (they work that out because the announcement message is ciphered with PK B). Conclusion SIM-Mee, as a social networking application, corresponds to a real demand of the users. It bridges the gap between social networks and the real world, by making it possible to discover and interact with the members of your own network who happen to be geographically close to you in the real life (still ensuring privacy). You can mobilize your social network at the airport, at the station, in the train, in the subway, in a nightclub, in a school, etc. SIM-Mee is easy to deploy because it does not require any infrastructure. The applets and MIDlets can be downloaded on the phones that are already deployed. Using SIM-Mee is free (but using SIM-Mee will arouse calls and text messages (SMS). Furthermore, future extensions will generate additional traffic, for instance business cards could be updated by using text messages. We are also working on a formal validation of the security of the whole system using Avispa [START_REF]Analysing Security Protocols with[END_REF]. If it is the case, applet B returns a message M2 = PK A (phone_number_B, random1, random2) to MIDlet B. The nounce random2 serves the same purpose as random1 above.MIDlet B sends M2 to MIDlet A via Bluetooth and at reception the message is forwarded to Applet A. Applet A verifies that M2.random1 is equal to M1.random1. If it is true, applet A returns M3 = PK B (telephone_number_A, random2) to MIDlet A. At this stage, if Midlet A gets M3, it displays a message to inform user A that user B is nearby. M3 is also sent via Bluetooth to Midlet B. MIDlet B forwards M3 to Applet B. Applet B verifies that M2.random2 is equal to M3.random2. If this is true, applet A returns a code meaning that user A is a neighbour, and Midlet B can display a message to inform user B that user A is nearby. Users A and B now know that they are close to each other and can decide on meeting.
04112062
en
[ "phys.astr" ]
2024/03/04 16:41:24
2021
https://theses.hal.science/tel-04112062/file/va_Yu_Miao.pdf
Miao Yu
Entropic Unbalanced Optimal Transport: Application to Full-Waveform Inversion and Numerical Illustration
Keywords: unbalanced entropic optimal transport, Sinkhorn divergence, full-waveform inversion

Seismic tomography methods aim at inferring the physical properties and reconstructing the "model", i.e. the structures of the Earth's interior, from the mechanical waves - radiated by natural or anthropogenic sources - recorded by surface receivers in the form of seismograms. Full-waveform inversion methods have been actively developed in academic and industrial contexts and have become powerful tools for improving the estimation of the physical properties and structures of geological objects, from global scales down to the local scales of exploration geophysics. Full-waveform inversion is formulated as a nonlinear optimisation problem associated with a system of partial differential equations. It is classically solved by local optimisation methods, through the iterative minimisation of a misfit function measuring the difference between observed and synthetic seismograms, using adjoint-state methods. Adjoint-state methods allow the computation of the derivatives of the misfit function with respect to the model parameters by combining the incident wavefield and an adjoint wavefield governed by a system of adjoint equations with complementary adjoint conditions. Full-waveform inversion methods, which invert the short and long wavelengths simultaneously, unfortunately suffer in practice from difficulties that restrict their practical use. Their capabilities deteriorate owing to the deficit of low frequencies in the observations and to the lack of a good initial model - limitations associated with the ill-posed nature of the inverse problem, which can easily be trapped in a local minimum. One proposed direction, in order to reduce the dependency on the initial model, is to replace the classical misfit function, based on a least-squares distance, by new misfit functions, possibly involving a nonlinear transformation of the signal, so as to promote convexity and enlarge the basin of attraction of the global minimum. Optimal Transport (OT) theory has recently been used in the framework of inverse problems and machine learning. Optimal transport generalises the properties of the squared Euclidean distance to the space of probability distributions. The optimal (squared) value of the transport itself defines a distance, called the 2-Wasserstein distance. This quantity is again convex, but now on the set of probability distributions. Optimal Transport is already used in FWI, and this thesis is part of that effort. The OT approach remains largely open on three fronts: seismic waveforms are neither positive nor of normalised total mass; convexity with respect to the model is not guaranteed; and the actual computation of the OT distance is expensive. In this work, we use and combine - from an academic standpoint - two recent extensions of OT in the FWI context. First, the "unbalanced" OT distance, which rigorously defines a distance on the set of positive Radon measures, thus bypassing the data normalisation issue (but not the positivity issue).
Second, the entropic Optimal Transport framework, and in particular the simple and easily computable variant called the Sinkhorn divergence, which provides a good approximation of the 2-Wasserstein distance. The Sinkhorn divergence can be naturally extended to "unbalanced" transport. We use these tools to build and implement an "unbalanced" OT misfit function. We discuss its use in the FWI context through a number of academic examples and classical benchmark problems.

Entropic Unbalanced Optimal Transport: Application to Full-Waveform Inversion and Numerical Illustration, by Miao Yu

Extended summary

Seismic tomography aims at inferring a model of the physical properties and structures of the Earth's interior from the mechanical waves radiated by natural or artificial seismic sources and recorded at the surface by receivers in the form of seismograms, also called seismic traces. This is an important problem in geophysics, with societal and industrial applications such as the discovery and exploitation of energy resources, site investigation for civil engineering, mineral deposits and underground water supplies, seismic hazard and risk mitigation. It is also a difficult and intrinsically ill-posed problem. The interactions between seismic waves and the geological medium depend on the frequency band, duration and energy of the sources; on the heterogeneity scales of the medium; and on the wave propagation path between the source and the receivers. Extracting information from the seismograms recorded at the surface is therefore not a simple task, in particular because these interactions are integrated along the propagation path. The development of new acquisition systems and the use of new sources (natural and/or artificial) have fostered new high-resolution seismic imaging methods exploiting an ever larger quantity and diversity of data. The full-waveform inversion (FWI) problem is formulated as a nonlinear optimisation problem associated with a misfit function measuring the differences between observed and predicted data, and constrained by a system of partial differential equations. These equations model (under some physical approximation and parametrisation) the propagation of the wavefield and implicitly define a mapping from the model parameter space into the data space. In practice, the optimisation problem is solved iteratively within gradient-type local optimisation methods. Over the last decades, FWI methods have been actively studied and have benefited from: (1) the development of new physical and numerical modelling methods for 3D wavefield propagation - from acoustic to viscoelastic - in heterogeneous, complex media; [START_REF] Gangbo | The geometry of optimal transportation[END_REF] (2) the rapid increase of the power and capacity of high-performance computing; (3) the growing amount of broadband data. FWI methods invert simultaneously, i.e. without scale separation, the short and long wavelengths of the model. In practice, however, these methods suffer from difficulties that restrict their use in production.
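For reference, the summary above corresponds to the standard PDE-constrained formulation made precise in Chapter 2 (the notation below simply anticipates equations (2.1)-(2.8) of the manuscript):

$$ m^{\ast} = \operatorname*{argmin}_{m \in \mathcal{M}} J(m), \qquad J(m) = h\big(d_{\mathrm{cal}}(m);\, d_{\mathrm{obs}}\big), \qquad \text{subject to } L[m]\,u = f, \quad d_{\mathrm{cal}}(m) = R\,u(m), $$

where $L[m]$ is the wave-propagation operator for the model $m$, $f$ the seismic source, $R$ the extraction operator at the receiver positions, and $h$ the misfit functional comparing predicted and observed seismograms.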
The optimisation problem can be trapped in a local minimum, possibly far from the global minimum, because of: the nonlinearity of the optimisation problem, which makes it dependent on a-priori knowledge of the initial model; the limited frequency band of the observations, induced by the nature of the sources and the geometry of the acquisition system, and in particular a deficit of low frequencies; the presence of noise in the observations, of multiple origins (environmental, instrumental) and relatively strong at low frequencies; and the complexity of seismic wave physics together with the weak sensitivity of the misfit function to model components whose wavenumbers are smaller than the seismic wavelengths resolved by the propagation operator. The thesis presents a quick overview of seismic imaging methods and of the foundations of the FWI method, as well as of the adjoint-state method which, within local optimisation methods, allows an efficient computation of the gradient of the misfit function via the point-wise correlation of the incident wavefield, predicted by the state equations of the system, and of the adjoint wavefield, governed by a system of adjoint equations. The method is illustrated within the scalar acoustic approximation, which is the setting of this thesis. The various limitations associated with the FWI method are discussed, together with a quick review of the different approaches developed to mitigate them, in the data and/or image domains. This thesis is a contribution to a new approach, actively developing today, for the formulation of new misfit functions in inverse problems (and statistical learning) based on optimal transport (OT) distances. OT distances turn the local, intensive least-squares measure into a global measure, sensitive to all mismatches in space, time and amplitude. OT-based distances are Lagrangian in nature and, in particular, convex with respect to translations and dilations. This property makes them good candidates to solve the "cycle skipping" problem.
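For completeness, recall the standard definition underlying these statements (a textbook definition, not specific to this thesis): for two probability measures $\mu$ and $\nu$ on $\mathbb{R}^d$,

$$ W_2^2(\mu, \nu) = \min_{\pi \in \Pi(\mu,\nu)} \int_{\mathbb{R}^d \times \mathbb{R}^d} |x - y|^2 \, \mathrm{d}\pi(x,y), $$

where $\Pi(\mu,\nu)$ is the set of couplings with marginals $\mu$ and $\nu$. In particular, $W_2(\mu, \tau_s \mu) = |s|$ for the translate $\tau_s \mu$ of $\mu$ by a shift $s$: the distance behaves convexly with respect to shifts, which is precisely the property invoked above against cycle skipping.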
To date, several families of OT-based misfit functions have been proposed. In this thesis, we have developed a third family of continuous and differentiable OT-based misfit functions, obtained by combining two recent extensions:
• The recently introduced "unbalanced" OT variant, which rigorously defines a distance on the space of positive Radon measures, thus bypassing the data normalisation issue. This class of OT distance has already been the object of an FWI application by Li and collaborators in 2021.
• The entropic regularisation of OT, which allows an efficient implementation using the Sinkhorn algorithm. Entropic OT does not actually define a distance, but provides a second-order approximation of the 2-Wasserstein distance in the so-called Sinkhorn divergence form.
The numerical illustrations rest on a number of simplifying physical assumptions:
• We assume that the seismic source is point-like and known, whereas in practice the source needs to be determined or calibrated, usually from a direct arrival.
• We assume that the observables are point samples of the solution of the acoustic wave equation, rather than filtered versions (both in space and time) of the pressure perturbations measured by the receivers.
• We do not deal with the issue of acquisition noise in the observations (e.g. malfunctioning detectors, ambient seismic noise, incoherent scattering from structures that are not to be imaged).

Chapter 1 Introduction

Seismic tomography aims at inferring physical properties and quantitatively reconstructing structures of the Earth's interior from the mechanical waves that are radiated by natural and man-made seismic sources and recorded at the surface by receivers in the form of seismograms, also called seismic traces. This is a fundamental problem in geophysics, with applications to societal concerns such as the discovery and exploitation of energy resources, site investigation for civil engineering, mineral deposits and underground water supplies, and seismic hazard and risk mitigation. With respect to other physics-based probing methods - e.g. gravimetry, geomagnetism, and electromagnetism - seismic waves have large source-dependent penetration depths and provide high-resolution reconstruction thanks to their source-dependent short wavelengths. They are today recorded with increasing accuracy by extended arrays with a large number of receivers - seismometers, accelerometers, hydrophones, geophones, fibre-optic cables - at the surface (land and sea) or near-surface (boreholes) of the Earth. In seismology, passive-source tomography, which makes use of natural seismic sources - e.g. earthquakes radiating very energetic seismic waves - has been actively developed in the last decades and provides significant insights into the deep Earth interior from global to regional scales, the main limitations being the distribution and frequency content of the natural sources, together with the available data coverage at these scales. In exploration geophysics, the development of controlled man-made sources - e.g. explosive and vibroseis sources on land, air-gun sources at sea - and of long and very-long-offset acquisition systems has fostered the development of high-resolution seismic imaging methods exploiting increasingly large amounts of data. This has led to higher and higher resolution subsurface models at local scales, from dozens of meters to dozens of kilometers, depending on the source energy and the acquisition system geometry.
Interactions between the waves and the geological medium depend on: the limited frequency band, duration and radiated energy of the seismic sources; the characteristic heterogeneity scales of the medium; and the source-receiver wave propagation path and distance. Extracting information from the recorded seismograms is not a trivial task, as these interactions get integrated over the propagation distance. Usable data in seismic tomography have long been restricted - by the limited capabilities of physical and numerical wave-propagation modelling - to secondary observables such as travel times, phase speeds or waveforms, i.e. a small portion of the full recorded seismograms. Most discoveries related to the structure of the Earth's interior - such as the Earth's inner core, the asthenosphere and the major seismic discontinuities - have long been based on ray theory, describing short-wavelength energy propagation and travel-time attributes. In the ray-theoretical framework, the arrival times of the seismic phases are sensitive to the wave speeds along the ray path connecting source and receiver. Travel-time tomography [START_REF] Aki | Determination of the three-dimensional seismic structure of the lithosphere[END_REF][START_REF] Luo | Wave-equation travel time inversion[END_REF][START_REF] Pratt | Combining wave-equation imaging with travel time tomography to form high-resolution images from crosshole data[END_REF][START_REF] Schuster | Wavepath eikonal travel time inversion: Theory[END_REF][START_REF] Trampert | Global phase velocity maps of love and raileigh waves between 40 and 150 seconds[END_REF][START_REF] Ekström | Measurements and global models of surface wave propagation[END_REF][START_REF] Nemeth | Dynamic smoothing in crosswell traveltime tomography[END_REF][START_REF] Zelt | Modelling strategies and model assessment for wide-angle seismic travel time data[END_REF][START_REF] Boschi | Whole earth tomography from delay times of p, pcp, and pkp phases: Latera heterogeneities in the outer core or radial anisotropy in the mantle?[END_REF]Rawlinson and Sambridge, 2003; [START_REF] Romanowicz | Global mantle tomography: progress status in the past 10 years[END_REF] is a fast and cost-effective tool. It is, however, limited to well-isolated body-wave phases or surface waves for which the fundamental and higher modes are well separated. The high-frequency approximation does not take into account the finite frequency band of the source and is only applicable to smooth media, in which the characteristic scales of the inhomogeneities are much larger than the dominant wavelength. As such, the velocity structure derived from travel-time tomography is only sub-optimal, and ray-based methods can yield distorted results and fail at caustics [START_REF] Čeverný | Seismic, ray theory[END_REF].
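In the ray-theoretical framework invoked above, the travel time of a phase is simply the integral of the slowness along the ray path $\gamma$ connecting source and receiver (a textbook relation, added here for reference):

$$ t = \int_{\gamma} \frac{\mathrm{d}s}{c(x)}, $$

so that travel-time tomography constrains the wave speed $c(x)$ only along the (model-dependent) ray paths, which is at the root of the smooth-medium limitations just described.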
Efforts to overcome the limitations of ray theory led to the development of finite-frequency travel-time tomography, a semi-analytical extension of the ray-based inverse problem accounting for spatially extended, finite-frequency 3D wave-sensitivity kernels, together with the integration of different data sets [START_REF] Čeverný | Fresnel volume ray tracing[END_REF][START_REF] Masters | A shear-velocity model of the [and discussion[END_REF][START_REF] Friederich | Propagation of seismic shear and surface waves in laterally heterogeneous mantle by multiple forward-scattering[END_REF][START_REF] Friederich | The s-velocity structure of the east asian mantle for inversion of shear and surface waveforms[END_REF][START_REF] Ritsema | Complex shear wave velocity structure imaged beneath africa and iceland[END_REF][START_REF] Dahlen | Fréchet kernels of finite-frequency traveltimes -i. theory[END_REF][START_REF] Romanowicz | Anomalous splitting of free oscillations: A reevaluation of possible interpretations[END_REF][START_REF] Gu | Models of the mantle shear velocity and discontinuities in the pattern of lateral heterogeneities[END_REF][START_REF] Montelli | Finite-frequency tomography reveals a variety of plumes in the mantle[END_REF]Yoshizawa and Kennett, 2004, 2005; [START_REF] Boschi | Global multiresolution models of surface wave propagation: comparing equivalently regularized born and ray theoritical solutions[END_REF][START_REF] Sigloch | Two-stage subduction history under north america inferred from multiple-frequency tomography[END_REF]. In particular, asymptotic finite-frequency kernels allowed the exploitation of weighted, low-amplitude energy wave packets in the seismograms [START_REF] Li | Global mantle shear velocity model developed using nonlinear asymptotic coupling theory[END_REF][START_REF] Mégnin | The three-dimensional shear velocity structure from the inversion of body, surface and higher-mode waveforms[END_REF][START_REF] Gung | Q tomography of the upper mantle using three-component long-period waveforms[END_REF], leading to improved global tomographic models. In exploration seismology, more controlled setups and multifold acquisition systems have enabled accurate seismic imaging methods making use of both travel-time and reflectivity-energy information. However, the limited offsets of seismic reflection surveys and the limited frequency band of the source make seismic imaging poorly sensitive to intermediate wavelengths [START_REF] Jannane | Wavelengths of earth structures that can be resolved from seismic reflection data[END_REF]. As such, two broad regimes of wave interactions were mainly considered: a transmission regime, in which wave interactions are dominated by the slow variations of the physical properties, and the seismic phases and their propagation directions are only slightly perturbed, with dominant forward scattering; and a reflectivity regime, in which wave interactions are dominated by the fast variations of the physical properties, and the seismic phases and their propagation directions are strongly perturbed, with significant back-scattering. It is worth noting, however, that other types of interactions, such as multiple scattering, may also contribute to the recorded traces. As each phase mixes different interaction regimes during its propagation, the recorded seismic traces contain both kinematic and reflectivity information.
Following the early breakthroughs by [START_REF] Claerbout | Toward a unified theory of reflector mapping[END_REF][START_REF] Claerbout | Fundamentals of geophysical data processing with applications to petroleum prospecting[END_REF], two-step seismic imaging workflows orchestrating a tomography mode and a migration mode were actively developed, assuming that reflectivity results from small perturbations in velocity under the Born approximation, and that the data - up to a first-order perturbation - depend linearly on the short wavelengths of the velocity model. In tomography mode, the long-wavelength components of the velocity model are reconstructed using the kinematic information of the wave propagation. In migration mode, short-wavelength images of the subsurface structures, which appear as discontinuities for a given seismic signal frequency, are reconstructed after kinematic corrections by amplitude summation and back-projection, using different types of migration methods [START_REF] Claerbout | Downward continuation of moveout-corrected seismograms[END_REF][START_REF] Gazdag | Wave equation migration with the phase shift methode[END_REF][START_REF] Stolt | Migration by fourier transform[END_REF][START_REF] Baysal | Reverse time migration[END_REF][START_REF] Nemeth | Least-square migration of incomplete reflection data[END_REF][START_REF] Yilmaz | Seismic data analysis[END_REF][START_REF] Biondi | Angle-domain common-image gathers for migration velocity analysis by wavefield continuation imaging[END_REF] formulated in the time or the frequency domain. The notion of full-waveform inversion (FWI), which is the context of this thesis, was first introduced to the geophysical community in the early '80s, together with the elegant and physically insightful adjoint-state method [START_REF] Bamberger | Une application de la théorie du contrôle à un problème inverse sismique[END_REF][START_REF] Bamberger | Inversion of normal incidence seismograms[END_REF]Lailly, 1983; Tarantola, 1984a,b, 1987; [START_REF] Tarantola | A theoretical background for the inversion of seismic waveforms including elasticity and attenuation[END_REF]. It was later extended to the frequency domain [START_REF] Pratt | Inverse inverse theory applied to multi-source cross-hole tomography. part 1: acoustic wave-equation method[END_REF][START_REF] Pratt | Inverse theory applied to multi-source cross-hole tomography. part 2: Elastic waveequation method[END_REF][START_REF] Pratt | Seismic waveform inversion in the frequency domain, part i: Theory and verification in a physical scale model[END_REF][START_REF] Sirgue | Efficient waveform inversion and imaging: A strategy for selecting temporal frequencies[END_REF]. FWI goes one step further by inverting simultaneously the short and long wavelengths, with the potential to bridge the gap between the transmission and reflectivity regimes, thus overcoming the limitations of the conventional sequential seismic-imaging workflow involving a tomography step followed by a migration step. FWI is formulated as a nonlinear PDE-based optimisation problem that considers the full wavefield information to estimate the Earth interior properties.
The adjoint-state method allows the computation of the derivative of an objective function - measuring the difference between the observations and the predicted observables - with respect to the model parameters, by combining the predicted incident wavefield, governed by the wave equations together with appropriate initial and boundary conditions, and the adjoint wavefield, governed by a set of adjoint equations together with subsidiary adjoint terminal and boundary conditions. In the last decades, FWI has been actively studied in both academia and industry, taking advantage of: (1) the development of new physical and numerical modelling methods for 3D full-wavefield propagation - from acoustic to viscoelastic - in heterogeneous, complex media; (2) the rapidly increasing capability and capacity of high-performance computing; (3) the increasing amount and coverage of broadband seismic data. FWI has proven to be a powerful tool that has dramatically improved the capability to estimate the physical properties and structures of various geological targets, from global to regional scales [START_REF] Komatitsch | The spectral-element method, beowulf computing, and global seismology[END_REF][START_REF] Tromp | Seismic tomography, adjoint methods, time reversal and bananadoughnut kernels[END_REF]Fichtner et al., 2006a,b; [START_REF] Tape | Finite-frequency tomography using adjoint methods -methodology and examples using membrane surface waves[END_REF][START_REF] Liu | Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods[END_REF][START_REF] Fichtner | Theoretical background for continental -and global -scale full waveform inversion in the time-frequency domain[END_REF][START_REF] Fichtner | Ful l seismic waveform model ling and inversion[END_REF][START_REF] Peter | Forward and adjoint simulations of seismiv wave propagation on fully unstructured hexahedral meshes[END_REF][START_REF] Fichtner | Multiscale full waveform inversion[END_REF][START_REF] Zhu | Seismic structure of the european upper mantle based on adjoint tomography[END_REF][START_REF] Komatitsch | Anelastic sensitivity kernels with parsimonius storage for adjoint tomography and full waveform inversion[END_REF][START_REF] Tromp | Seismic wavefield imaging of the earth'sinterior across scales[END_REF] and at local scales in exploration geophysics [START_REF] Pratt | Two-dimensional velocity models from wideangle seismic data wevefield inversion[END_REF][START_REF] Pratt | Seismic waveform inversion in the frequency domain -part 2: Faukt delineation in sediments using crosshole data[END_REF][START_REF] Virieux | An overview of full-waveform inversion in exploration geophysics[END_REF][START_REF] Brossier | Seismic imaging of complex onshore structures by 2d elastic frequency-domain full-waveform inversion[END_REF][START_REF] Plessix | Full waveform inversion of a deep water ocean bottom seismometer dataset[END_REF][START_REF] Sirgue | Full waveform inversion: the next leap forward in imaging at valhall[END_REF][START_REF] Plessix | Full waveform inversion and distance separated simultaneous sweeping: a study with a land seismic data set[END_REF][START_REF] Warner | Anisotropic 3d fullwaveform inversion[END_REF][START_REF] Stopin | Multiparameter waveform inversion of a large wide-azimuth low-frequency land data set in oman[END_REF][START_REF] Vigh | Elastic full-waveform inversion application using multicomponent measurements of seismic data collection[END_REF][START_REF] Operto | Efficient 3-d frequency-domain
mono-parameter full-waveform inversion of ocean-bottom cable data: application to valhall in the visco-acoustic vertical transverse isotropic approximation[END_REF][START_REF] Shen | Full waveform inversion: The next leap forward in subsalt imagin[END_REF][START_REF] Borisov | Application of 2d full waveform inversion on exploration land data[END_REF]. In practice, however, solving FWI problems using local, nonlinear optimisation methods faces challenges that preclude routine use. The physics of seismic waves is complex, and the objective functional is poorly sensitive to wavenumber components of the model that are shorter than the seismic wavelengths, and only through information that is homogenised by the wave propagation operator [START_REF] Cupillard | Non-periodic homogeneization of 3-d elastic media for the seismic wave equation[END_REF][START_REF] Capdeville | Elastic full-waveform inversion based on the homogeneization method: theoritical framework and 2-d numerical illustrations[END_REF][START_REF] Hedjazian | Multiscale seismic imaging with inverse homogenization[END_REF]. Accurate and computationally efficient models of wave propagation - from pure acoustic waves to anisotropic viscoelasticity - together with well-defined physics-based and data-based parametrisations of the model need to be carefully considered in FWI, especially in the case of multiparameter estimation and multi-mode modelling, which remain very challenging [START_REF] Operto | A guided tour of multiparameter full-waveform inversion with multi-component data: From theoty to practice[END_REF]. Despite the rapidly increasing computational capabilities, numerical models can only process limited frequencies in 3D. While increasing the level of parametrisation allows more realistic models, the inversion problem becomes more ill-posed, and even non-unique, as the model space dimension increases. The accuracy deteriorates with the lack of low frequencies, observation noise and poor starting models [START_REF] Gauthier | Two-dimensional nonlinear inversion of seismic waveforms: Numerical results[END_REF][START_REF] Mora | Nonlinear two-dimensional elastic inversion of multioffset seismic data[END_REF][START_REF] Luo | Wave-equation travel time inversion[END_REF][START_REF] Fichtner | Theoretical background for continental -and global -scale full waveform inversion in the time-frequency domain[END_REF][START_REF] Virieux | An introduction to full waveform inversion[END_REF], unless some prior information is employed [START_REF] Bunks | Multiscale seismic inversion[END_REF]. In exploration geophysics, while ultra-long offsets allow a partial recovery of low frequencies, the seismic traces contain a full variety of integrated, frequency-dependent information, associated with different resolution and interaction regimes during the propagation. This makes the inversion problem highly nonlinear, with a multimodal objective function. When using local gradient-based optimisation methods, a poor initial model can easily cause the nonlinear optimisation problem to be trapped in a local minimum, e.g. [START_REF] Tarantola | Inverse problem theory and methods for model parameter estimation[END_REF].
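To visualise why a poor initial model traps a local search, one can plot the least-squares misfit between a reference trace and its time-shifted copy; below is a minimal, self-contained sketch (the Ricker wavelet and the 25 Hz peak frequency are illustrative choices, not taken from the thesis):

import numpy as np

def ricker(t, f0=25.0):
    """Ricker wavelet with peak frequency f0 (Hz)."""
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt = 1e-3
t = np.arange(-0.5, 0.5, dt)
d_obs = ricker(t)

shifts = np.arange(-0.1, 0.1, dt)        # kinematic error (s)
misfit = [0.5 * np.sum((ricker(t - s) - d_obs) ** 2) * dt for s in shifts]

# The misfit oscillates with the shift: it decreases monotonically towards the
# global minimum only for shifts smaller than about half the dominant period,
# which is the cycle-skipping effect discussed in the text.
i_min = int(np.argmin(misfit))
print("global minimum at shift =", shifts[i_min])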
Global optimisation methods - such as Monte Carlo [START_REF] Sambridge | Inverse theory, monte carlo method[END_REF][START_REF] Ray | Frequency domain full waveform elastic inversion of marine seismic data from the alba field using bayesian trans-dimensional algorithm[END_REF][START_REF] Biswas | 2d full-waveform inversion and uncertainty estimation using the reversible jump hamiltonian monte carlo[END_REF][START_REF] Ray | Low frequency full waveform seismic inversion within a tree based bayesian framework[END_REF] and Hamiltonian Monte Carlo (Fichtner et al., 2019; Fichtner and Zunino, 2019; [START_REF] Gebraad | Bayesian elastic full-waveform inversion using hamiltonian monte carlo[END_REF]) sampling methods, simulated annealing [START_REF] Datta | Estmating a starting model for full-waveform inversion using global optimization method[END_REF], and particle swarm optimisation (Chen and Wang, 2017) - could theoretically avoid this problem and quantify the uncertainties. However, owing to the large dimension of the model space, these methods currently remain intractable and far from reaching the computational efficiency needed to handle realistic large-scale seismic inversion problems in exploration geophysics. Seismic data are oscillatory signals by nature, as the zero-frequency part is not recorded [START_REF] Chauris | Seismic imaging: a practical approach[END_REF]. Since the optimisation problem is highly nonlinear, with a multimodal objective function, a poor initial kinematic model may cause the solution to be trapped in local minima. In the standard form of FWI, the objective function, measuring the misfit between the predicted observables and the observations, is based on the least-squares distance, which is oscillatory and non-convex in directions associated with wavenumber components of the model that are longer than half the dominant wavelength. For large kinematic errors, local optimisation algorithms will match incorrect phases between the predicted and observed signals, making the method prone to the so-called cycle-skipping effect: the inversion converges to a local minimum, possibly far from the global minimum, due to the kinematic errors introduced by the poor starting model. Different strategies have been developed to mitigate the non-convexity of the FWI problem and to reduce the dependency on the initial model:
• Improve the kinematic accuracy of the initial model using, for example, reflection tomography [START_REF] Woodward | A decade of tomography[END_REF] or stereo-tomography methods [START_REF] Lambaré | Stereotomography[END_REF], so that the initial model is not too far from the correct model and the kinematic errors are reduced. Following a similar approach, it has also been proposed to enhance the tomographic components at the early iterations and to gradually reduce their weights towards convergence [START_REF] Tang | Tomographically enhanced full wavefield inversion[END_REF], or to further separate the gradient components based on the scattering angle at the imaging point [START_REF] Alkhalifah | Scattering-angle based filtering of the waveform inversion gradients[END_REF]. Although these methods do enhance the low-wavenumber components of the FWI gradient, they reduce the flexibility of the FWI workflow and can be rather time-consuming.
• Mitigate multi-modality and enlarge the basin of attraction of the objective function using a data-hierarchical approach: seeding the inversion with low-frequency data only and slowly enlarging the data bandwidth as the iterations of the gradient-based method progress [START_REF] Pratt | Seismic waveform inversion in the frequency domain, part i: Theory and verification in a physical scale model[END_REF][START_REF] Shipp | Two-dimensional full wavefield inversion of wide-aperture marine seismic streamer data[END_REF][START_REF] Brossier | Seismic imaging of complex onshore structures by 2d elastic frequency-domain full-waveform inversion[END_REF][START_REF] Wang | Reflection seismic waveform tomography[END_REF]. Extension to the time domain involves different temporal and/or offset selections of the observations. The idea is to reduce the number of propagated wavelengths to be recovered simultaneously. Among the many factors that affect the success of this approach in practice, the lowest starting frequency is an essential one. Broadband data with a high signal-to-noise ratio may not always be available or of sufficient quality, preventing reliable model reconstruction. Moreover, a smart selection of the data is required for the different inversion stages, which often remains rather empirical. Following a similar approach, low-frequency data extrapolation based on the phases and amplitudes in the observed band has also been proposed, fitting smooth non-oscillatory functions to represent and extrapolate the wave physics to the unrecorded frequency band [START_REF] Li | Phase and amplitude tracking for seismic-event separation[END_REF].
Other solutions, based on a reformulation of FWI, have been extensively studied in the past decades to reduce the nonlinearity of the continuous PDE-based optimisation problem and to increase the convexity of the inversion in higher dimensions. This has led to different extension strategies that aim to restore the dimensional balance between the model and the data space by considering artificial, extended models through the introduction of redundant coordinates in the model space. The common feature of all model extension strategies is their tendency to promote the same sort of long-wavelength velocity updates as travel-time tomography does, i.e. to extract kinematic information from the data. Enforcing a consistency condition that links the unphysical, extended model to the original physical one is widely viewed as a reasonable way of retrieving the background model in which waves propagate. There are at least two different ways of enforcing the consistency relation:
• A first strategy, in the image (reflectivity) domain, was originally developed within the framework of wave-equation migration velocity analysis [START_REF] Symes | Migration velocity analysis and waveform inversion[END_REF] and relies on a scale-separation assumption. The idea is to introduce additional degrees of freedom at the reflectivity level, extending the model space along subsurface offset, plane-wave ray-parameter, or time-lag [START_REF] Shen | Differential semblance velocity analysis by wave-equation migration[END_REF][START_REF] Shen | Wave equation migration velocity analysis by differential semblance optimization[END_REF][START_REF] Sava | Time-shift imaging condition in seismic migration[END_REF][START_REF] Sun | Waveform inversion via nonlinear differential semblance optimization[END_REF][START_REF] Biondi | Simultaneous inversion of full data bandwidth by tomographic fullwaveform inversion[END_REF].
The extended modelling operator is now surjective under very general conditions in 3D, and as such an extended model can always be found to match any data, regardless of whether the background kinematic model is correct. The construction delocalises scattering events, and the extended image (reflectivity) should take on near-zero values because the wavefields are not supposed to interact constructively. The consistency condition is then derived by penalising the additional parameters in order to focus the energy of the extended reflectivity image [START_REF] Chauris | Two-dimensional velocity macro model estimation from seismic reflection data by local differential semblance optimization: applications to synthetic and real data sets[END_REF][START_REF] Biondi | Angle-domain common-image gathers for migration velocity analysis by wavefield continuation imaging[END_REF][START_REF] Sava | Time-shift imaging condition in seismic migration[END_REF]. This adds a further computational cost to the already computationally demanding FWI, associated with the construction of the extended reflectivity images. Moreover, since only primary reflections are used to generate the images, considerable preprocessing - such as removing the multiples and muting the direct arrivals and diving waves - is required.
• A second strategy is the source-receiver extension, introducing the seismic source indexes as additional coordinates in the model parameter space [START_REF] Van Leeuwen | Mitigating local minima in full-waveform inversion by expanding the seach space[END_REF][START_REF] Warner | Adaptive waveform inversion: Theory[END_REF][START_REF] Huang | Full-waveform inversion via soure-receiver extension[END_REF][START_REF] Huang | Source-independent extended waveform inversion based on space-time source extension: Frequency-domain extension[END_REF], initially including trace-based Wiener filters to compare the observed and modelled data, and later on the whole wavefield, so that the reconstructed wavefield fits the data by design. The imaging condition involves incident and adjoint fields computed in the extended model independently for each source. The consistency condition again relies on the concept of differential semblance optimisation [START_REF] Symes | Velocity inversion by differential semblance optimization[END_REF], which is a good though not fully compelling choice of regularisation for model-extended waveform inversion. The applicability of these methods is by no means straightforward when multiple arrivals are present in the transmission data [START_REF] Plessix | Automatic cross-well tomography by semblance and differential semblance optimization: Theory and gradient computation[END_REF], as is their extension to the time domain [START_REF] Aghamiry | Accurate and efficient data-assimilated wavefield reconstruction in the time domain[END_REF].
Current solutions either rely on crude approximations [START_REF] Wang | Full waveform inversion with the reconstructed wavefield method[END_REF] or on advanced iterative solutions, which are computationally demanding [START_REF] Aghamiry | Accurate and efficient data-assimilated wavefield reconstruction in the time domain[END_REF].
Apart from finding an adequate initial kinematic model, another line of investigation is to reformulate FWI using alternative ways to measure the difference between the predicted observables and the observations, in an attempt to mitigate the non-convexity of the objective function with respect to the model parameters and to enlarge the basin of attraction of the global minimum. Different families of misfit functions with enhanced convexity have been proposed in the past decades:
• A first family involves transforming the signal before measuring the difference between the predicted observables and the observations in the least-squares sense. Examples of such transformations include: reconstruction of the envelope and the unwrapped phase of the signal [START_REF] Wu | Seismic envelope inversion and modulation signal model[END_REF][START_REF] Luo | Seismic envelope inversion: reduction of local minima and noise resistance[END_REF], extraction of the instantaneous phase and envelope of the signal by the Hilbert transform [START_REF] Fichtner | Theoretical background for continental -and global -scale full waveform inversion in the time-frequency domain[END_REF][START_REF] Bozdaǧ | Misfit functions for waveform inversion baseds on instantaneous phase and envelope measurements[END_REF][START_REF] Alkhalifah | Taming waveform inversion non-linearity through phase unwrapping of the model and objective function[END_REF][START_REF] Luo | Time-domain inversion using instantaneous phase information with damping[END_REF], and normalised integration of the signal, leading to trace cumulative distributions [START_REF] Liu | The normalised integration method -and alternative to full waveform inversion?[END_REF][START_REF] Donno | Estimating the background velocity model with the normalised integration method[END_REF].
• Another family involves more global measures in replacement of the least-squares distance. Correlation-based objective functions, promoting the minimisation of travel-time shifts, were initially proposed [START_REF] Luo | Wave-equation travel time inversion[END_REF][START_REF] Dahlen | Fréchet kernels of finite-frequency traveltimes -i. theory[END_REF][START_REF] Zhao | Three-dimensional fréchet differential kernels for seismicdelay time[END_REF]van Leeuwen and Mulder, 2008, 2010; [START_REF] Choi | Application of multi-source waveform inversion to marine streamer data using the global correlation norm[END_REF], leading to the so-called wave-equation tomography strategy. However, cross-correlation-based measures tend to fail in the case of multiple energetic seismic arrivals.
Matching-filter-based objective functions [START_REF] Luo | A decovolution-based objective function for wave-equation inversion[END_REF][START_REF] Warner | Adaptive waveform inversion: Theory[END_REF][START_REF] Huang | Full-waveform inversion via soure-receiver extension[END_REF][START_REF] Huang | Source-independent extended waveform inversion based on space-time source extension: Frequency-domain extension[END_REF][START_REF] Zhu | Building good starting model for full-waveform inversion using adaptive matching filter misfit[END_REF] were then developed to circumvent these limitations, making use of a normalised deconvolution, i.e. a Wiener filter, of the predicted and observed seismic traces, leading to the so-called adaptive waveform inversion method. The resulting objective function penalises the energy of the filter away from a band-pass Dirac filter, which would have resulted from a correct model. However, these objective functions can still suffer from cycle skipping and from a loss of resolution in the case of complex multi-arrival signals. Matching-filter-based objective functions can be seen as an attempt to regularise the ill-conditioned inverse problem by controlling the slowly decaying energy associated with multiple arrivals and by removing the effects of the small eigenvalues - associated with wavenumber components of the model that are shorter than the seismic wavelengths - of the linearised forward wave operator.
Finally, a new trend has recently emerged in inverse problems and machine learning, promoting optimal transport (OT) distances to measure the difference between the predicted observables and the observations:
• A first family relies on the 2-Wasserstein distance and requires transforming the oscillatory seismic signals into positive, normalised distributions [START_REF] Qiu | Full-waveform inversion with an exponentially encoded optimal-transport[END_REF](Yang and Engquist, 2018) to be compared using this metric. This can alter the phase and amplitude information, and some non-convexity with respect to time shifts may persist. Computing the 2-Wasserstein metric using the Monge-Ampère formulation, either with optimised finite-difference linear solvers [START_REF] Benamou | Numerical solution of the Optimal Transportation problem using the Monge-Ampère equation[END_REF][START_REF] Froese | A numerical method for the elliptic monge-amperè equation with transport boundary conditions[END_REF] or with a semi-discrete strategy [START_REF] Mérigot | A multiscale approach to optimal transport[END_REF], adds significant computational cost to the FWI, and in practice the application of the 2-Wasserstein distance has often been restricted to 1D optimal transport, i.e. trace-by-trace comparison, for which an analytical solution exists.
• A second family, proposed by Métivier and collaborators (Métivier et al., 2016a,b; Métivier et al., 2016), resorts to the 1-Wasserstein metric, a particular instance of the Kantorovich-Rubinstein metric, associated with the dual formulation of the optimal transport problem. This OT metric is now defined for signed measures and can be lifted to handle unbalanced optimal transport, avoiding ad-hoc signal transformation and normalisation. It leads to a convex optimisation problem under linear constraints that can be solved using proximal splitting methods [START_REF] Combettes | Proximal splitting methods in signal processing, in Fixed-point algorithms for inverse problems in science and engineering[END_REF], significantly reducing the additional computational cost compared to the 2-Wasserstein metric.
The convexity of the objective function is, however, not guaranteed with regard to time and amplitude transformations.
• A third family was recently introduced by Métivier et al. [START_REF] Métivier | Optimal transport for mitigating cycle skipping in full-waveform inversion: A graph-space transform approach[END_REF][START_REF] Metivier | A graph space optimal transport distance as a generalisation of the l p distances: application to a seismic imaging inverse problem[END_REF], based on a graph-space transform: each seismic trace is lifted to its graph, i.e. a discrete point cloud in the time-amplitude plane, and the resulting point clouds are compared using an OT distance, which sidesteps the positivity and normalisation issues. This new objective function shows promising convexity properties with respect to time and amplitude transformations, and in 2D the distance can be computed at moderate additional cost as a linear assignment problem. However, the extension to higher domain dimensions is still under development and remains to be evaluated, and it might increase the computational cost significantly.
To date, OT-based misfit functions have mostly been illustrated on 2D academic and benchmark cases, and only a few of them have been practically demonstrated on field data [START_REF] Poncet | Fwi with optimal transport: a 3d implementation and an application on a field dataset[END_REF][START_REF] Messud | Multidomensional optimal transport for 3d fwi: Demonstration of field data[END_REF][START_REF] Sedova | Acoustic land full waveform inversion on a broadband land dataset: The impact of optimal transport[END_REF]Górszczyk et al., 2020, 2021; [START_REF] Pladys | On cycle-skipping function modification for fullwaveform inversion: comparison of five recent approaches[END_REF]. In this thesis, we developed a further family of OT-based, continuous and differentiable objective functionals, derived from combining two recent extensions of OT:
• an "unbalanced" OT variant, introduced in (Chizat et al., 2018a; [START_REF] Chizar | Unbalanced optimal transport: Dynamic and kantorovich formulations[END_REF][START_REF] Liero | Optimal Entropy-Transport problems and a new Hellinger-Kantorovich distance between positive measures[END_REF]Kondratyev et al., 2016a), which rigorously defines a distance on the set of positive Radon measures, thus bypassing the data normalisation issue. This class of OT distances was proposed, but not investigated in detail, in [START_REF] Li | Application of an unbalanced optimal transport distance and a mixed l1/wasserstein distance to full waveform inversion[END_REF].
• an entropic regularisation of the OT framework and its efficient implementation using the Sinkhorn algorithm, Cuturi (2013) [START_REF] Peyré | Computational optimal transport[END_REF]. Entropic OT does not actually define a distance, but provides a second-order approximation of the 2-Wasserstein distance, and the so-called Sinkhorn divergence variant [START_REF] Genevay | Learning generative models with sinkhorn divergences[END_REF][START_REF] Feydy | Interpolating between optimal transport and mmd using sinkhorn divergences[END_REF] leads to a very efficient computational implementation. The Sinkhorn divergence can be naturally extended to unbalanced OT [START_REF] Séjourné | Sinkhorn divergences for unbalanced optimal transport[END_REF].
We provide an overview of the theoretical framework of these two recent OT extensions, leading to smooth and differentiable objective functions in the context of FWI. We numerically illustrate, in the 2D acoustic case, the use of these objective functions in the context of FWI with academic examples and the Marmousi benchmark data sets.
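To make the entropic OT machinery concrete, here is a minimal Sinkhorn iteration for two discrete, normalised 1D distributions. This is a generic textbook sketch, not the implementation used in the thesis; the regularisation eps, the grid and the Gaussian inputs are illustrative choices.

import numpy as np

def sinkhorn(p, q, x, y, eps=1e-2, n_iter=500):
    """Entropic OT between histograms p (on grid x) and q (on grid y).

    Returns the coupling and the unregularised transport cost <C, pi>,
    which approximates W_2^2 for small eps.
    """
    C = (x[:, None] - y[None, :]) ** 2          # squared-distance cost matrix
    K = np.exp(-C / eps)                        # Gibbs kernel
    u = np.ones_like(p)
    v = np.ones_like(q)
    for _ in range(n_iter):                     # alternate marginal scalings
        u = p / (K @ v)
        v = q / (K.T @ u)
    pi = u[:, None] * K * v[None, :]            # coupling with marginals p, q
    return pi, np.sum(pi * C)

x = np.linspace(0.0, 1.0, 100)
p = np.exp(-((x - 0.3) / 0.05) ** 2); p /= p.sum()
q = np.exp(-((x - 0.7) / 0.05) ** 2); q /= q.sum()
pi, cost = sinkhorn(p, q, x, x)
print("entropic OT cost, close to (0.7 - 0.3)^2 =", cost)

The residual bias for finite eps is what the Sinkhorn divergence S_eps(p, q) = OT_eps(p, q) - (OT_eps(p, p) + OT_eps(q, q))/2 removes, and the unbalanced variant relaxes the hard marginal constraints enforced by the two scaling updates into KL penalties.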
The numerical illustrations, however, make a number of physical assumptions for the sake of simplicity that need to be kept in mind:
• We use the classical acoustic approximation of elastic waves and assume constant density in the 2D illustrations; in this case, the sound speed is the only model parameter. In the acoustic approximation, mode conversion is not considered and the medium is isotropic. In practice, when dealing with heterogeneous media, such an approximation can be challenged, Cance and Capdeville (2015).
• We assume that the seismic source is point-like and known, whereas in practice the source needs to be determined or calibrated, usually from a direct arrival.
• We assume that the observables are point samples of the solution of the acoustic wave equation, rather than filtered versions (both in space and time) of the pressure disturbances measured by the receivers.
• We do not deal with the issue of acquisition noise in the observations (e.g. malfunctioning detectors, ambient seismic noise, incoherent scattering from structures that are not to be imaged).
As such, the 2D numerical illustrations presented in this work have to be considered as academic illustrations that need to be further investigated and evaluated in more realistic contexts.

Outline of the manuscript

Chapter 2 (Seismic Background) presents the full-waveform inversion (FWI) problem of reconstructing a background model and the classical adjoint-state method used to compute the gradient of the misfit function, and provides a discussion of the so-called "cycle skipping" problem in the case of the classical least-squares misfit function. Chapter 3 (OT Background) gives a general presentation of Optimal Transport (OT) theory and of the main contributions using OT misfit functions in the FWI context. Chapter 4 (Entropic regularization of OT and its generalizations) presents recent branches of OT: first its entropic penalization and the companion Sinkhorn algorithm; then its "unbalanced" extension, allowing the definition of a bona fide distance between positive distributions with arbitrary total mass; and finally how the two extensions can be combined, and how the bias introduced by the entropic penalization can be corrected (Sinkhorn divergence). Chapter 5 (The unbalanced optimal transport misfit function) summarizes the construction of the unbalanced misfit function from the model onward. We also illustrate its properties on two simple 1D and 2D parametric models.

A general setting

In the context of full-waveform inversion, the state of the dynamical system is determined by the wavefield $u(x,t) : \Omega \times [0,T] \to \mathbb{R}^{n_u}$ for some $n_u$. We assume that $u$ belongs to a subset of $W = L^2([0,T], [L^2(\Omega)]^{n_u})$. The physical or computational domain $\Omega \subset \mathbb{R}^{n_o}$ can be open or bounded, with boundary $\Gamma$. The evolution of the dynamical system is governed, under some physical approximation, by state equations, i.e. a system of partial differential equations, represented by the differential operator $\mathcal{L}$, the behaviour of which depends on the model parameters $m$:

$\mathcal{L}(u(x,t), m(x)) - f(x,t) = 0, \quad \forall x \in \Omega, \ \forall t \in [0,T],$  (2.1)

where the seismic source $f : \Omega \times [0,T] \to \mathbb{R}^{n_p}$ is in $W$, and $m : \Omega \to \mathcal{M} \subset \mathbb{R}^{n_m}$ are the model parameters, defining a vector-valued field, with $\mathcal{M}$ the admissible model parameter space. The solution of (2.1) defines a physical realisation of the system, $u(m) = u(x,t;m)$, for fixed model parameters $m$. The differential operator $\mathcal{L}$ governing the wavefield propagation, i.e.
from acoustic to viscoelastic, is generally a linear operator:

$\mathcal{L}(u, m) \equiv L[m]\, u,$  (2.2)

where $L[m]$ is the wave-propagation operator associated with the physical approximation. When $L[m]$ is of order $Q$ in time, the state equation (2.2) must be complemented with $Q$ initial conditions at $t = 0$ (we assume a null initial state to simplify),

$\frac{\partial^q u}{\partial t^q}(x, t=0) = 0, \quad \forall x \in \Omega, \ \forall q = 0, \ldots, Q-1,$  (2.3)

and possibly with appropriate boundary conditions along $\Gamma$ when the physical domain is bounded. In the data space $\mathcal{D}$, the observations $d_{obs}$ are seismograms recorded at $n_r$ receivers - e.g. seismometers, hydrophones, geophones - for a source $f$:

$d_{obs} : \{ d_{obs}(x_r, t),\ r = 1, \ldots, n_r \}, \quad t \in [0,T],$  (2.4)

where $x_r$ denotes the spatial position vector of receiver $r$. The predicted observables are defined accordingly as

$d_{cal}(m) : \{ d_{cal}(x_r, t; m),\ r = 1, \ldots, n_r \} = R\, u(x,t;m), \quad t \in [0,T],$  (2.5)

where $R : W \to \mathcal{D}$ is an extraction operator mapping the physical realisation $u(m)$ onto the receiver positions $x_r$:

$R : u(x,t;m) \to \{ u(x_1,t;m), \ldots, u(x_{n_r},t;m) \}.$  (2.6)

Note that the definition of $R$ supposes that $u(x,t;m)$ is continuous at the points $x = x_r$; this holds generally when the seismic source is regular enough and when the medium $m(x)$ is constant around the receivers $x = x_r$. FWI is classically formulated as a nonlinear optimisation problem associated with an objective functional $J : \mathcal{M} \to \mathbb{R}$, attached to the observations, that acts on the physical observables:

$J(m) = h(d_{cal}(m), m; d_{obs}) = h(R\,u(m), m; d_{obs}),$  (2.7)

where the functional $h(\cdot)$, which may depend explicitly on $m$, measures the difference between the observations and the predicted observables associated with the physical realisation $u(m) = u(x,t;m)$.

2.2 The optimisation problem and its solution

The optimisation problem can be stated as finding the optimal model parameters $m^\ast$ such that $J(m^\ast)$ is the global minimum of $J$ for a given set of observations:

$m^\ast = \operatorname{argmin}_{m \in \mathcal{M}} J(m), \quad \text{where} \quad J(m) = h(d_{cal}(m), m; d_{obs}).$  (2.8)

In practice, it is solved using iterative, gradient-based local optimisation methods - such as steepest descent, conjugate gradient or Newton-like methods - which can be formulated as

$m_{k+1} = m_k + \alpha_k g_k \quad \text{with} \quad g_k \cdot \nabla_m J(m_k) < 0,$  (2.9)

where $J(m_{k+1}) < J(m_k)$ and $g_k$ is the local descent direction, with $\nabla_m(\cdot)$ the gradient operator with respect to the model parameters, known as the Fréchet derivative. The step length $\alpha_k \in \mathbb{R}^+$ must be chosen by a line-search process so that $J(m_k + \alpha_k g_k)$ is minimal, and must satisfy the strong Wolfe conditions [START_REF] Wolfe | Convergence conditions for ascent methods[END_REF][START_REF] Wolfe | Convergence conditions for ascent methods. ii: some cirrections[END_REF], which are sufficient conditions to ensure convergence:

$J(m_k + \alpha_k g_k) \le J(m_k) + c_1 \alpha_k \nabla_m J(m_k) \cdot g_k,$  (2.10)
$|\nabla_m J(m_k + \alpha_k g_k) \cdot g_k| \le c_2 \, |\nabla_m J(m_k) \cdot g_k|,$  (2.11)

with $0 < c_1 < c_2 < 1$. The choice of the step length $\alpha_k$ is therefore a balance between stability and speed. The steepest-descent algorithm corresponds to $g_k = -\nabla_m J(m_k)$, and the conjugate-gradient algorithm to $g_k = -\nabla_m J(m_k) + \beta_k g_{k-1}$, with $\beta_k \ge 0$; whereas for Newton methods $g_k = -H_J^{-1}(m_k)\,\nabla_m J(m_k)$, where the functional Hessian $H_J$ is defined as

$H_J(m_k) = \nabla_m \frac{\partial J}{\partial m}(m_k),$  (2.12)

and a positive-definite Hessian ensures $J(m_{k+1}) < J(m_k)$.
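As an illustration of the update (2.9)-(2.10), here is a minimal steepest-descent loop with a backtracking line search enforcing the sufficient-decrease part of the Wolfe conditions (the curvature condition (2.11) is omitted for brevity). The quadratic misfit is a toy placeholder for the FWI functional, not taken from the thesis:

import numpy as np

def J(m):                      # toy misfit (stands in for the FWI functional)
    return 0.5 * np.sum((m - 1.0) ** 2)

def grad_J(m):                 # its gradient (adjoint-state in real FWI)
    return m - 1.0

m = np.zeros(5)
c1 = 1e-4                      # sufficient-decrease constant, 0 < c1 < 1
for k in range(50):
    g = -grad_J(m)             # steepest-descent direction, g . grad J < 0
    alpha = 1.0
    # backtrack until J(m + alpha g) <= J(m) + c1 alpha grad J . g   (2.10)
    while J(m + alpha * g) > J(m) + c1 * alpha * np.dot(grad_J(m), g):
        alpha *= 0.5
    m = m + alpha * g
print("recovered model:", m)   # converges to the minimiser m = 1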
Gradient descent typically converges slowly, which is a significant impediment for large-scale problems, whereas the Newton method, which takes into account the curvature of the objective function, converges faster than gradient-descent methods in the neighborhood of a local minimum when the Hessian of $\mathcal{J}$ is close to being positive semi-definite. The Newton method is in general much more complicated to set up than gradient-descent methods, and much more computationally demanding, as the Hessian of $\mathcal{J}$ is a large matrix, costly to store and to invert. Practical alternatives are the quasi-Newton methods, which attempt to partially invert the Hessian of $\mathcal{J}$:

$$m_{k+1} = m_k - \alpha_k\, Q_k\,\nabla_m\mathcal{J}(m_k), \tag{2.13}$$

where $Q_k\approx\mathcal{H}^{-1}_{\mathcal{J}}$ is symmetric and positive definite. Among the quasi-Newton methods, l-BFGS (Nocedal, 1980; Liu and Nocedal, 1989), a limited-memory implementation of the Broyden-Fletcher-Goldfarb-Shanno quasi-Newton method, leads to an efficient iterative algorithm computing $Q_k$ recursively, involving only the functional derivative of $\mathcal{J}$ at iteration $k$ and at the $l$ previous iterations (see the usage sketch below).

The objective functional (2.7) can be relatively insensitive to wavenumber components of the model that are shorter than the seismic wavelengths. The eigenvalues of the Hessian of $\mathcal{J}$ corresponding to these components are nearly zero, and the optimisation problem (2.8) becomes locally ill-posed. The eigenvectors associated to these eigenvalues are directions in which $\mathcal{J}(m)$ has very small curvature in the vicinity of $m^\star$, the curvature being twice the second directional derivative in the eigen-directions. Small perturbations of the data or of the model $m$ induce modifications of $\mathcal{J}$ that may result in large movements of its global minimum in problematic directions in the vicinity of the null-space of the Hessian of $\mathcal{J}$.

All gradient-based minimisation algorithms critically rely on the ability to efficiently compute the directional derivative of the objective function with respect to the model parameters.

Adjoint-state methods and gradient computation

Adjoint-state methods (Lions, 1971; Chavent, 1974; Chavent et al., 1975; Bamberger et al., 1979; Lailly, 1983; Tarantola, 1988; Plessix, 2006; Fichtner, 2011) allow to compute the directional derivatives of $\mathcal{J}$ with optimal efficiency. The adjoint wave equations can be derived from the wave equations, and the properties of the adjoint wavefield, solution of the adjoint wave equations, are determined by the adjoint source, which is completely specified by the misfit function (2.7). A generalisation of the adjoint method allows to compute the Hessian of $\mathcal{J}$, which plays a fundamental role in Newton-like methods of non-linear minimisation.
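Before deriving the adjoint equations, it is worth noting that the quasi-Newton update (2.13) is available off the shelf; a minimal usage sketch of SciPy's l-BFGS implementation, on a hypothetical toy quadratic misfit (not the FWI functional), is:

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical toy misfit: J(m) = 0.5 ||A m - d||^2, gradient A^T (A m - d)
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))
d = rng.normal(size=20)

def misfit(m):
    r = A @ m - d
    return 0.5 * r @ r, A.T @ r        # value and gradient together

res = minimize(misfit, x0=np.zeros(10), jac=True, method="L-BFGS-B",
               options={"maxcor": 5})  # 'maxcor' = number l of stored pairs
print(res.fun, res.nit)
```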
The variation of the wavefield $u$, a functional of $m$, with respect to $m$ in the direction $\delta m$ is given by the functional derivative, known as the Gâteaux derivative:

$$\delta u = \nabla_m u\,\delta m, \tag{2.14}$$

where $\nabla_m(\cdot)$ is the gradient operator with respect to the model parameters, i.e. the Fréchet derivative. The corresponding functional derivative of $\mathcal{J}$ is given by:

$$\delta\mathcal{J} = \nabla_m\mathcal{J}\,\delta m = \nabla_m h(Ru(m), m; d_{obs})\,\delta m + \big\langle \nabla_u h(Ru(m), m; d_{obs}),\ \delta u\big\rangle_{\mathcal{W}}, \tag{2.15}$$

where $\langle\cdot,\cdot\rangle_{\mathcal{W}}$ denotes the inner product in $\mathcal{W}$,

$$\langle u_1, u_2\rangle_{\mathcal{W}} = \int_0^T\!\left(\int_\Omega u_1(x,t)\,u_2(x,t)\,dx\right) dt, \tag{2.16}$$

and the chain rule is used here together with the definition of $\delta u$. The main problem with (2.15) is that explicitly computing the full kernel of the operator $\nabla_m u$ can be highly inefficient. The adjoint-state method is a very good way to eliminate $\nabla_m u\,\delta m$, so that $\delta\mathcal{J}$ can be computed with a more favorable complexity. In order to achieve this, the state equation $\mathcal{L}[m]u - f = 0$ can be linearised, assuming a sufficiently smooth seismic source, leading to:

$$[\nabla_m\mathcal{L}[m]\,\delta m]\,u + \mathcal{L}[m]\,\delta u = 0, \tag{2.17}$$

which shows that $\delta u$ can be eliminated by composition with $\mathcal{L}[m]$ on the left. The main idea of the adjoint-state method is that a copy of $\mathcal{L}[m]\,\delta u$ can materialise in (2.15), provided that $\nabla_u h$ is seen as the adjoint of $\mathcal{L}[m]$ applied to some field $\lambda$:

$$[\mathcal{L}[m]]^\dagger\lambda = -\nabla_u h(Ru(m), m; d_{obs}) = -R^T\nabla_{d_{cal}} h(d_{cal}(m), m; d_{obs}), \tag{2.18}$$

with $\lambda$ naturally called the adjoint field and (2.18) the adjoint-state equation. Then,

$$\delta\mathcal{J} = \nabla_m h(Ru(m), m; d_{obs})\,\delta m - \big\langle (\mathcal{L}[m])^\dagger\lambda,\ \delta u\big\rangle_{\mathcal{W}} = \nabla_m h(Ru(m), m; d_{obs})\,\delta m - \big\langle \lambda,\ \mathcal{L}[m]\,\delta u\big\rangle_{\mathcal{W}}.$$

Taking into account (2.17), the first variation of $\mathcal{J}$ now reads

$$\delta\mathcal{J} = \nabla_m h(Ru(m), m; d_{obs})\,\delta m + \big\langle \lambda,\ [\nabla_m\mathcal{L}[m]\,\delta m]\,u\big\rangle_{\mathcal{W}}, \tag{2.19}$$

which is often much easier to compute than (2.15).

Physics approximation: the acoustic wave

The physical approximation of the dynamical system is an important component in full-waveform inversion. In this thesis, the behaviour of the dynamical system is described by the 2D acoustic wave equation. Linearised isotropic, inviscid fluid flow, in the physical domain $\Omega\subset\mathbb{R}^2$, leads to

$$\frac{\partial u}{\partial t} = -\frac{1}{\rho^\star(x)}\nabla p, \qquad \frac{\partial p}{\partial t} = -\kappa^\star(x)\,\nabla\cdot u, \tag{2.20}$$

where the model parameters $m\in\mathcal{D}$ are $\rho^\star:\Omega\to\mathbb{R}^+$, the background mass density, and $\kappa^\star:\Omega\to\mathbb{R}$, the background bulk modulus, with $\kappa^\star = \rho^\star[c^\star]^2$ and $c^\star$ the sound speed. The physical state of the dynamical system is $w = (u,p)\in\mathcal{W}$, with $u:\Omega\times\mathbb{R}^+\to\mathbb{R}^2$ the displacement perturbation and $p:\Omega\times\mathbb{R}^+\to\mathbb{R}$ the pressure perturbation. The behaviour of the system is described by the linear wave operator

$$\mathcal{L}(m)\,w(x,t) = \left[\frac{\partial}{\partial t} + S(m)\right]w(x,t) \quad\text{with}\quad S(m,x) = \begin{pmatrix} 0 & \dfrac{1}{\rho^\star(x)}\nabla \\[1ex] \kappa^\star(x)\,\nabla\cdot & 0 \end{pmatrix}, \tag{2.21}$$

which defines a hyperbolic system of partial differential equations, and $S$ is anti-self-adjoint, i.e. $S^\dagger = -S$, with respect to the inner product

$$\langle w, \tilde w\rangle = \frac{1}{2}\int_\Omega\left(\rho^\star\, u\cdot\tilde u + \frac{1}{\kappa^\star}\,p\,\tilde p\right) dx, \tag{2.22}$$

in which the factor $1/2$ is chosen to be consistent with the physics convention for the energy. The acoustic wave operator can be rewritten in terms of the pressure perturbation only:

$$[\mathcal{L}_p(m)\,p](x,t) = \left[\frac{1}{\kappa^\star(x)}\frac{\partial^2}{\partial t^2} - \nabla\cdot\left(\frac{1}{\rho^\star(x)}\nabla\right)\right] p(x,t), \tag{2.23}$$

leading to the scalar state equation

$$[\mathcal{L}_p(m)\,p](x,t) - f_p(x,t) = \frac{1}{\kappa^\star(x)}\frac{\partial^2 p(x,t)}{\partial t^2} - \nabla\cdot\left(\frac{1}{\rho^\star(x)}\nabla p(x,t)\right) - f_p(x,t) = 0, \tag{2.24}$$

where $f_p:\Omega\times\mathbb{R}^+\to$
$\mathbb{R}$ is the pressure perturbation associated to a seismic source, generally considered as a point source: $f_p(x,t) = s(t)\,\delta(x-x_s)$, where $x_s\in\Omega$ is the position of the source and $s(t)$ the source time function, often modelled by a causal Ricker wavelet

$$s(t) = \left[1 - 2\pi^2 f_0^2 (t-t_0)^2\right] e^{-\pi^2 f_0^2 (t-t_0)^2}, \tag{2.25}$$

where the time shift $t_0$ ensures the causality, i.e. $s(t)=0$ for $t<0$, and $f_0$ is the characteristic frequency of the wavelet.

The state equation (2.24) needs to be completed by the initial conditions

$$\frac{\partial p}{\partial t}(x,0) = \dot p_0(x) = 0 \quad\text{and}\quad p(x,0) = p_0(x) = 0, \qquad \forall x\in\Omega. \tag{2.26}$$

When the physical domain is limited by an upper free surface $\Gamma_1$, the associated boundary condition is

$$p|_{\Gamma_1} = p(x,t) = 0, \qquad \forall x\in\Gamma_1,\ \forall t\in[0,T]. \tag{2.27}$$

When the physical domain is unbounded, the computational domain must be truncated by artificial interfaces $\Gamma_2$, together with an appropriate absorbing boundary condition (ABC). The simplest form of ABC is the zero-order absorbing boundary condition (Clayton and Engquist, 1977; Engquist and Majda, 1977; Nataf, 2013), derived from the approximation of the one-way wave equation,

$$\frac{\partial p}{\partial t}(x,t) + c^\star(x)\,\nabla p\cdot n = 0, \quad \forall x\in\Gamma_2, \qquad c^\star(x) = \sqrt{\frac{\kappa^\star(x)}{\rho^\star(x)}}, \tag{2.28}$$

where $n$ is the unit outward normal associated to $\Gamma_2$. Designing good absorbing boundary conditions is a somewhat difficult problem that has a long history. The currently most popular solution to this problem is to slightly expand the computational domain through an absorbing perfectly matched layer (PML) (Berenger, 1994). The PML shows great superiority in the context of acoustic wave simulation over the classical ABCs mentioned above (Abarbanel and Gottlieb, 1997; Diaz and Joly, 2006; Bermúdez et al., 2007; Gao et al., 2017). In particular, convolutional implementations of the complex-frequency-shifted perfectly matched layer (Roden and Gedney, 2000; Festa and Vilotte, 2005; Martin et al., 2008) have become widely used in acoustic wave simulation (Pasalic and McGarry, 2010; Gao et al., 2017). Here, to simplify the presentation, we adopt the first-order absorbing condition (2.28). This leads to the forward modelling problem

$$\begin{cases} \dfrac{1}{\kappa^\star(x)}\dfrac{\partial^2 p(x,t)}{\partial t^2} - \nabla\cdot\left(\dfrac{1}{\rho^\star(x)}\nabla p(x,t)\right) - f_p(x,t) = 0, & \forall x\in\Omega,\ \forall t\in[0,T] \\[1ex] \dfrac{\partial p}{\partial t}(x,0) = \dot p_0(x) = 0 \ \text{ and }\ p(x,0) = p_0(x) = 0, & \forall x\in\Omega \\[1ex] p(x,t) = 0, & \forall x\in\Gamma_1,\ \forall t\in[0,T] \\[1ex] \dfrac{\partial p}{\partial t}(x,t) + c^\star(x)\,\nabla p(x,t)\cdot n(x) = 0, & \forall x\in\Gamma_2,\ \forall t\in[0,T]. \end{cases} \tag{2.29}$$
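Before turning to the adjoint operator, note that the Ricker source (2.25) entering (2.29) is straightforward to implement; a minimal NumPy sketch (the delay $t_0 = 1/f_0$ below is an assumed choice, not prescribed above):

```python
import numpy as np

def ricker(t, f0, t0=None):
    """Causal Ricker wavelet s(t), eq. (2.25), with characteristic frequency f0."""
    if t0 is None:
        t0 = 1.0 / f0                  # assumed delay; makes s(t) ~ 0 for t < 0
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

t = np.linspace(0.0, 1.0, 1001)        # 1 s at 1 ms sampling
s = ricker(t, f0=10.0)                 # 10 Hz characteristic frequency
```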
In order to specify the adjoint acoustic wave operator $\mathcal{L}_p^\dagger$, let us consider, for all $\lambda\in\mathcal{W}^\dagger$,

$$\langle \mathcal{L}_p(m)p, \lambda\rangle_{\mathcal{W}} = \left\langle \frac{1}{\kappa^\star(x)}\frac{\partial^2 p}{\partial t^2} - \nabla\cdot\left(\frac{1}{\rho^\star(x)}\nabla p\right),\ \lambda\right\rangle_{\mathcal{W}} = \int_0^T\!\!\int_\Omega\left[\frac{1}{\kappa^\star}\frac{\partial^2 p}{\partial t^2} - \nabla\cdot\left(\frac{1}{\rho^\star}\nabla p\right)\right]\lambda\, dx\, dt. \tag{2.30}$$

After two integrations by parts in time:

$$\int_0^T\!\!\int_\Omega\left(\frac{1}{\kappa^\star}\frac{\partial^2 p}{\partial t^2}\right)\lambda\, dx\, dt = \int_0^T\!\!\int_\Omega\left(\frac{1}{\kappa^\star}\frac{\partial^2\lambda}{\partial t^2}\right) p\, dx\, dt + \int_\Omega\left[\frac{1}{\kappa^\star}\frac{\partial p}{\partial t}\,\lambda\right]_{t=0}^{t=T} dx - \int_\Omega\left[\frac{1}{\kappa^\star}\frac{\partial\lambda}{\partial t}\,p\right]_{t=0}^{t=T} dx. \tag{2.31}$$

The initial conditions (2.26) imply that the two last terms in the right-hand side can be eliminated when imposing the terminal conditions for $\lambda$:

$$\lambda(x,T) = 0 \quad\text{and}\quad \frac{\partial\lambda}{\partial t}(x,T) = 0, \qquad \forall x\in\Omega. \tag{2.32}$$

Similarly, applying the divergence theorem and the identity

$$\nabla\cdot\left(\frac{1}{\rho^\star}\,\lambda\,\nabla p\right) - \nabla\cdot\left(\frac{1}{\rho^\star}\,p\,\nabla\lambda\right) = \lambda\,\nabla\cdot\left(\frac{1}{\rho^\star}\nabla p\right) - p\,\nabla\cdot\left(\frac{1}{\rho^\star}\nabla\lambda\right), \tag{2.33}$$

together with the boundary condition (2.27), yields

$$\int_0^T\!\!\int_\Omega \nabla\cdot\left(\frac{1}{\rho^\star}\nabla p\right)\lambda\, dx\, dt = \int_0^T\!\!\int_\Omega \nabla\cdot\left(\frac{1}{\rho^\star}\nabla\lambda\right) p\, dx\, dt + \int_0^T\!\!\int_{\Gamma_1}\left(\frac{1}{\rho^\star}\,\lambda\,\nabla p\right)\cdot n\, ds\, dt + \int_0^T\!\!\int_{\Gamma_2}\left(\frac{1}{\rho^\star}\,\lambda\,\nabla p\right)\cdot n\, ds\, dt - \int_0^T\!\!\int_{\Gamma_2}\left(\frac{1}{\rho^\star}\,p\,\nabla\lambda\right)\cdot n\, ds\, dt, \tag{2.34}$$

where $ds$ is a line element and $n$ is the unit outward-pointing normal on the boundaries. Making use of the absorbing boundary condition (2.28),

$$\int_0^T\!\!\int_{\Gamma_2}\left(\frac{1}{\rho^\star}\,\lambda\,\nabla p\right)\cdot n\, ds\, dt = -\int_0^T\!\!\int_{\Gamma_2}\frac{1}{\rho^\star}\,\frac{1}{c}\,\frac{\partial p}{\partial t}\,\lambda\, ds\, dt = \int_0^T\!\!\int_{\Gamma_2}\frac{1}{\rho^\star}\,\frac{1}{c}\,\frac{\partial\lambda}{\partial t}\,p\, ds\, dt - \left[\int_{\Gamma_2}\frac{1}{\rho^\star}\,\frac{1}{c}\,\lambda\,p\, ds\right]_{t=0}^{t=T}, \tag{2.35}$$

where the last term can be eliminated when imposing the initial conditions (2.26) on $p$ and the terminal condition (2.32) on $\lambda$. Gathering the previous results leads to

$$\left\langle \frac{1}{\kappa^\star}\frac{\partial^2 p}{\partial t^2} - \nabla\cdot\left(\frac{1}{\rho^\star}\nabla p\right),\ \lambda\right\rangle_{\mathcal{W}} = \left\langle \frac{1}{\kappa^\star}\frac{\partial^2\lambda}{\partial t^2} - \nabla\cdot\left(\frac{1}{\rho^\star}\nabla\lambda\right),\ p\right\rangle_{\mathcal{W}} - \int_0^T\!\!\int_{\Gamma_2}\frac{1}{\rho^\star}\left(\frac{1}{c}\frac{\partial\lambda}{\partial t} - \nabla\lambda\cdot n\right) p\, ds\, dt - \int_0^T\!\!\int_{\Gamma_1}\left(\frac{1}{\rho^\star}\,\lambda\,\nabla p\right)\cdot n\, ds\, dt. \tag{2.36}$$

The two last terms in (2.36) can be eliminated by imposing on the adjoint field the boundary conditions

$$\begin{cases} \lambda(x,t) = 0, & \forall x\in\Gamma_1,\ \forall t\in[0,T] \\[0.5ex] \dfrac{\partial\lambda}{\partial t} - c\,\nabla\lambda\cdot n = 0, & \forall x\in\Gamma_2,\ \forall t\in[0,T]. \end{cases} \tag{2.37}$$

Finally, the equations governing the adjoint wavefield are

$$\begin{cases} \dfrac{1}{\kappa^\star}\dfrac{\partial^2\lambda}{\partial t^2} - \nabla\cdot\left(\dfrac{1}{\rho^\star}\nabla\lambda\right) = -R^T\nabla_{d_{cal}} h(d_{cal}=Rp, m; d_{obs}), & \forall x\in\Omega,\ \forall t\in[0,T] \\[1ex] \dfrac{\partial\lambda}{\partial t}(x,T) = 0 \ \text{ and }\ \lambda(x,T) = 0, & \forall x\in\Omega \\[1ex] \lambda|_{\Gamma_1} = \lambda(x,t) = 0, & \forall x\in\Gamma_1,\ \forall t\in[0,T] \\[1ex] \dfrac{\partial\lambda(x,t)}{\partial t} - c^\star(x)\,\nabla\lambda(x,t)\cdot n(x) = 0, & \forall x\in\Gamma_2,\ \forall t\in[0,T]. \end{cases} \tag{2.38}$$

The adjoint state is the solution of a backward problem in which the initial conditions (2.26) are replaced by the terminal conditions (2.32). The homogeneous free-surface boundary condition is the same for the forward and adjoint wavefields, whereas absorbing boundary conditions involving time derivatives need to be properly time-reversed. Introducing the change of variable $t\to T-t$, the adjoint problem can be rewritten in the same form as the forward problem, with the auxiliary unknown $\tilde\lambda = \lambda(x, T-t)$ and the time-reversed source $-R^T\nabla_{d_{cal}}h(d_{cal}; d_{obs})(T-t)$. The result then needs to be time-reversed at each point $x$. This is a nice property that enables solving the forward and the adjoint problems with the same solver. Note, however, that $\lambda$ is not in general the physical wavefield run backward in time, because of the limited sampling of the receivers; it is instead introduced purely out of computational convenience. It is interesting to notice that changing the way the distance between observed and modelled data is measured (i.e. modifying the definition of $h$) only amounts to modifying the source term in the adjoint equation.

A simplification: the constant density model

To go further, we will assume the following hypotheses:

1. The density is assumed to be constant. More precisely, we assume that $\rho^\star(x) = \rho_0$, $\kappa^\star(x) = \rho_0\,[c^\star(x)]^2$.
In this situation, the only parameter is $c^\star(x)$, the velocity law in $\Omega$.

2. The admissible parameter set $\mathcal{M}$ is a convex subset of a finite-dimensional vector space. Practically, we choose $M$ scalar non-negative functions $\phi_m(x)$, $x\in\Omega$, such that $\sum_m\phi_m(x) = 1$, $x\in\Omega$, and we define ($0 < v_- < v_+$ are some given parameters)

$$\mathcal{M} = \left\{ c^\star(x) = \sum_{m=1}^M v_m\,\phi_m(x), \quad v_-\leq v_m\leq v_+,\ \forall m = 1,\dots,M \right\}. \tag{2.39}$$

3. The misfit functional does not depend explicitly on the model: $h(d_{cal}, m, d_{obs}) = h(d_{cal}, d_{obs})$.

In this situation, (2.19) amounts to

$$\delta\mathcal{J} = \big\langle \lambda,\ [\nabla_{c^\star}\mathcal{L}_p(c^\star)\,\delta c^\star]\,p\big\rangle_{\mathcal{W}}. \tag{2.40}$$

With these simplifications, the pressure $p(x,t)$ is the solution of

$$\begin{cases} \dfrac{1}{[c^\star(x)]^2}\dfrac{\partial^2 p(x,t)}{\partial t^2} - \Delta p(x,t) = \rho_0\, f_p(x,t), & \forall x\in\Omega,\ \forall t\in[0,T] \\[1ex] \dfrac{\partial p}{\partial t}(x,0) = 0 \ \text{ and }\ p(x,0) = 0, & \forall x\in\Omega \\[1ex] p(x,t) = 0, & \forall x\in\Gamma_1,\ \forall t\in[0,T] \\[1ex] \dfrac{\partial p(x,t)}{\partial t} + c^\star(x)\,\nabla p(x,t)\cdot n(x) = 0, & \forall x\in\Gamma_2,\ \forall t\in[0,T]. \end{cases} \tag{2.41}$$

It is possible to show that when $f_p(x,t)$ is in $\mathcal{W}$ and $c^\star$ is in $\mathcal{M}$ as given in (2.39), then $p$ is in $C^1([0,T],L^2(\Omega))\cap C^0([0,T],H^1(\Omega))$. Now, we define the source term for the adjoint equation as

$$S_{ad}(x,t) = -R^T\nabla_{d_{cal}} h(d_{cal}=Rp, d_{obs})(x,t), \tag{2.42}$$

or, more explicitly,

$$S_{ad}(x,t) = -\sum_r \delta(x-x_r)\,\nabla_{d_{cal}} h(d_{cal}=Rp, d_{obs})(x_r,t). \tag{2.43}$$

The adjoint state $\lambda$ is the solution of

$$\begin{cases} \dfrac{1}{[c^\star(x)]^2}\dfrac{\partial^2\lambda(x,t)}{\partial t^2} - \Delta\lambda(x,t) = S_{ad}(x,t), & \forall x\in\Omega,\ \forall t\in[0,T] \\[1ex] \dfrac{\partial\lambda}{\partial t}(x,T) = 0 \ \text{ and }\ \lambda(x,T) = 0, & \forall x\in\Omega \\[1ex] \lambda(x,t) = 0, & \forall x\in\Gamma_1,\ \forall t\in[0,T] \\[1ex] \dfrac{\partial\lambda(x,t)}{\partial t} - c^\star(x)\,\nabla\lambda(x,t)\cdot n(x) = 0, & \forall x\in\Gamma_2,\ \forall t\in[0,T]. \end{cases} \tag{2.44}$$

Now, let us define the $M$-vector $v = (v_1, v_2, \dots, v_M)^T$ and $p(x,t;v)$, $\lambda(x,t;v)$ as $p(x,t)$ and $\lambda(x,t)$, the solutions of (2.41)-(2.44) with $c^\star(x) = \sum_m v_m\,\phi_m(x)\in\mathcal{M}$; the functional is now a simple function of the vector $v$, given by

$$J(v) = h(Rp(x,t;v); d_{obs}) = h\big((p(x_1,t;v), p(x_2,t;v),\dots,p(x_R,t;v)); d_{obs}\big),$$

and we have, according to (2.40),

$$\delta J(v) = \int_0^T\!\!\int_\Omega \lambda(x,t)\,\big([\nabla_{c^\star}\mathcal{L}_p(c^\star)\,\delta c^\star]\,p(x,t)\big)\, dx\, dt.$$

Since

$$\mathcal{L}_p(c^\star) = \frac{1}{[c^\star]^2}\frac{\partial^2}{\partial t^2} - \Delta + \delta_{\Gamma_2}\left(\frac{1}{c^\star}\frac{\partial}{\partial t} + n\cdot\nabla\right),$$

we have

$$[\nabla_{c^\star}\mathcal{L}_p(c^\star)\,\delta c^\star] = -\frac{2\,\delta c^\star}{[c^\star]^3}\frac{\partial^2}{\partial t^2} - \delta_{\Gamma_2}\left(\frac{\delta c^\star}{[c^\star]^2}\frac{\partial}{\partial t}\right),$$

or, when $c^\star(x) = \sum_m v_m\,\phi_m(x)$,

$$[\nabla_{c^\star}\mathcal{L}_p(c^\star)\,\delta c^\star] = -\sum_{m=1}^M\left(\frac{2\,\phi_m\,\delta v_m}{[c^\star]^3}\frac{\partial^2}{\partial t^2} + \delta_{\Gamma_2}\,\frac{\phi_m\,\delta v_m}{[c^\star]^2}\frac{\partial}{\partial t}\right),$$

so that

$$\delta J(v) = \sum_m\left[-\int_0^T\left(\int_\Omega \lambda(x,t)\,\frac{2\phi_m(x)}{[c^\star(x)]^3}\,\frac{\partial^2 p(x,t)}{\partial t^2}\, dx + \int_{\Gamma_2} \lambda(x,t)\,\frac{\phi_m(x)}{[c^\star(x)]^2}\,\frac{\partial p(x,t)}{\partial t}\, ds\right) dt\right]\delta v_m,$$

and finally, the gradient of $J$ at the point $v$ is given by

$$\frac{\partial J}{\partial v_m}(v) = -\int_0^T\left(\int_\Omega \lambda(x,t)\,\frac{2\phi_m(x)}{[c^\star(x)]^3}\,\frac{\partial^2 p(x,t)}{\partial t^2}\, dx + \int_{\Gamma_2} \lambda(x,t)\,\frac{\phi_m(x)}{[c^\star(x)]^2}\,\frac{\partial p(x,t)}{\partial t}\, ds\right) dt. \tag{2.45}$$

Discretisation of the problem

The discretisation of the wave equation in heterogeneous media is a classical topic and has been investigated for a long time. Here we adopt the simplest method, but others are possible. The starting point is to rewrite the wave problems (2.41) and (2.44) in a weak form, i.e.

Find $p$, $\lambda$ in $C^1([0,T],H^1(\Omega))\cap C^2([0,T],L^2(\Omega))$ such that, for all $p^t$, $\lambda^t$ in $H^1(\Omega)$,

$$\begin{cases} m\left(\dfrac{d^2p}{dt^2}, p^t; c^\star\right) + b\left(\dfrac{dp}{dt}, p^t; c^\star\right) + k(p, p^t) = S_p(p^t,t) \\[1ex] m\left(\dfrac{d^2\lambda}{dt^2}, \lambda^t; c^\star\right) - b\left(\dfrac{d\lambda}{dt}, \lambda^t; c^\star\right) + k(\lambda, \lambda^t) = S_\lambda(\lambda^t,t) \\[1ex] p(\cdot, t=0) = 0,\quad \dfrac{dp}{dt}(\cdot, t=0) = 0,\quad \lambda(\cdot, t=T) = 0,\quad \dfrac{d\lambda}{dt}(\cdot, t=T) = 0. \end{cases} \tag{2.46}$$

In this formulation appear three bilinear forms and two linear forms:

• The mass bilinear form $m(u, u^t; c^\star) = \displaystyle\int_\Omega \frac{u(x)\,u^t(x)}{[c^\star(x)]^2}\, dx$.

• The stiffness bilinear form $k(u, u^t) = \displaystyle\int_\Omega \nabla u(x)\cdot\nabla u^t(x)\, dx$.
• The boundary mass bilinear form $b(u, u^t; c^\star) = \displaystyle\int_{\Gamma_2} \frac{u|_{\Gamma_2}(x)\,u^t|_{\Gamma_2}(x)}{c^\star(x)}\, dx$.

• The two linear forms associated to the source terms

$$S_p(p^t,t) = \rho_0\int_\Omega f_p(x,t)\,p^t(x)\, dx, \qquad S_\lambda(\lambda^t,t) = \int_\Omega S_{ad}(x,t)\,\lambda^t(x)\, dx.$$

This weak formulation is obtained by multiplying each wave equation by a test function, then integrating the result over $\Omega$, and finally using Green's theorem as well as the boundary conditions.

Remark: when the sources are point sources, this formulation makes no sense, since $H^1(\Omega)$ contains functions that are not continuous on $\Omega$. This problem can be avoided by regularising the Dirac mass, i.e. replacing $\delta(x-x_o)$ by some smooth positive function $\delta_r(x-x_o)$ supported in a small neighborhood of $x_o$ and such that $\int_\Omega \delta_r(x-x_o) = 1$.

Semi-discretisation in space

We assume that $\Omega$ is a square in the plane, $\Gamma_1$ is its top horizontal side and $\Gamma_2$ is composed of the three other sides of the square boundary. We mesh $\Omega$ with a regular grid of step $h$. The set of the nodes of this grid is denoted by $\tilde{\mathcal{N}}$, and the node associated to $n$ is $x_n$; in order to distinguish between interior nodes and nodes located on the boundaries, we set $\tilde{\mathcal{N}} = \mathcal{N}\oplus\mathcal{N}_{\Gamma_1}$, $\mathcal{N} = \mathcal{N}_0\oplus\mathcal{N}_{\Gamma_2}$. The grid is decomposed into square cells of side $h$; the set of all cells is denoted by $\mathcal{C}$: $\bar\Omega = \bigcup_{C\in\mathcal{C}} C$. At each node $n\in\mathcal{N}$ we associate $\mathcal{C}[n]$, the set of cells having node $n$ as a vertex. This set is composed of four cells for $n\in\mathcal{N}_0$, and of two cells or one cell for $n\in\mathcal{N}_{\Gamma_2}$. We define $N^h_n(x)$, $n\in\mathcal{N}$, as the only continuous function whose restriction to each cell is linear and such that $N^h_n(x_n) = 1$, $N^h_n(x_{n'}) = 0$ for $n'\neq n$; it is easy to see that the support of $N^h_n(x)$ is $\bigcup_{C\in\mathcal{C}[n]} C$.

The semi-discretisation of problems (2.41) and (2.44) consists in looking for approximate solutions in $V_h(\Omega)\subset H^1(\Omega)$, the vector space spanned by the family of functions $N^h_n(x)$, $n\in\mathcal{N}$:

$$p(x,t) \simeq p_h(x,t) = \sum_{n\in\mathcal{N}} p_n(t)\,N^h_n(x), \qquad \lambda(x,t) \simeq \lambda_h(x,t) = \sum_{n\in\mathcal{N}} \lambda_n(t)\,N^h_n(x);$$

$p_h$ and $\lambda_h$ are defined as the solution of:

Find $p_h$, $\lambda_h$ in $C^2([0,T],V_h(\Omega))$ such that, for all $N^h_n$, $n\in\mathcal{N}$,

$$\begin{cases} m_h\left(\dfrac{d^2p_h}{dt^2}, N^h_n; c^\star\right) + b_h\left(\dfrac{dp_h}{dt}, N^h_n; c^\star\right) + k(p_h, N^h_n) = S_p(N^h_n,t) \\[1ex] m_h\left(\dfrac{d^2\lambda_h}{dt^2}, N^h_n; c^\star\right) - b_h\left(\dfrac{d\lambda_h}{dt}, N^h_n; c^\star\right) + k(\lambda_h, N^h_n) = S_\lambda(N^h_n,t) \\[1ex] p_h(\cdot, t=0) = 0,\quad \dfrac{dp_h}{dt}(\cdot, t=0) = 0,\quad \lambda_h(\cdot, t=T) = 0,\quad \dfrac{d\lambda_h}{dt}(\cdot, t=T) = 0. \end{cases}$$

To obtain this semi-discrete problem, we made the substitutions $p\to p_h$ and $\lambda\to\lambda_h$ and restricted the test functions to elements of $V_h(\Omega)$. The linear and bilinear forms have been approximated (the subscript $h$) by using a quadrature rule on each cell; more precisely, we assume that the restriction of $c^\star$ is constant on each cell:

$$c^\star(x) = \sum_{C\in\mathcal{C}} v_c\, 1_C(x).$$

We split the integrals according to

$$\int_\Omega \frac{u(x)\,N^h_n(x)}{[c^\star(x)]^2}\, dx = \sum_{C\in\mathcal{C}(n)}\int_C \frac{u(x)\,N^h_n(x)}{[c^\star(x)]^2}\, dx = \sum_{C\in\mathcal{C}(n)}\frac{1}{v_c^2}\int_C u(x)\,N^h_n(x)\, dx,$$

$$\int_{\Gamma_2} \frac{u|_{\Gamma_2}(x)\,N^h_n(x)}{c^\star(x)}\, dx = \sum_{C\in\mathcal{C}(n)}\int_{\Gamma_2\cap\partial C} \frac{u|_{\Gamma_2}(x)\,N^h_n(x)}{c^\star(x)}\, dx = \sum_{C\in\mathcal{C}(n)}\frac{1}{v_c}\int_{\Gamma_2\cap\partial C} u|_{\Gamma_2}(x)\,N^h_n(x)\, dx,$$

then we use the quadrature rule ($\mathcal{V}(C)$ is the set of the four vertices of cell $C$, and $\mathcal{V}(\partial C)$ is the set of the two vertices of $\Sigma_C = \partial C\cap\Gamma_2$ when $\Sigma_C$ is not empty)

$$\int_C u(x)\,N^h_n(x)\, dx \simeq \frac{h^2}{4}\sum_{x_m\in\mathcal{V}(C)} u(x_m)\,N^h_n(x_m) = \frac{h^2}{4}\,u(x_n),$$

$$\int_{\Gamma_2\cap\partial C} u|_{\Gamma_2}(x)\,N^h_n(x)\, dx \simeq \frac{h}{2}\sum_{x_m\in\mathcal{V}(\partial C)} u|_{\Gamma_2}(x_m)\,N^h_n(x_m) = \frac{h}{2}\,u|_{\Gamma_2}(x_n).$$

With this process, we obtain
$$m_h(u_h, N^h_n; c^\star) = \left(\frac{h^2}{4}\sum_{C\in\mathcal{C}(n)}\frac{1}{v_c^2}\right) u_h(x_n), \qquad b_h(u_h, N^h_n; c^\star) = \left(\frac{h}{2}\sum_{C\in\mathcal{C}(n)}\frac{1}{v_c}\right) [u_h]|_{\Gamma_2}(x_n).$$

This approximation method is known as mass lumping; it is essential to obtain in fine a time-explicit scheme. Now, we define the two diagonal mass matrices ($\delta_{mn}$ is the Kronecker symbol)

$$M(v)_{n,m} = \delta_{mn}\left(\frac{h^2}{4}\sum_{C\in\mathcal{C}(n)}\frac{1}{v_c^2}\right), \qquad B(v)_{n,m} = \begin{cases} \delta_{mn}\left(\dfrac{h}{2}\displaystyle\sum_{C\in\mathcal{C}(n)}\dfrac{1}{v_c}\right), & \text{if } x_n\in\mathcal{N}_{\Gamma_2} \\[1ex] 0, & \text{if } x_n\in\mathcal{N}_0 \end{cases} \tag{2.47}$$

and the stiffness matrix

$$K_{n,m} = \int_\Omega \nabla N^h_n(x)\cdot\nabla N^h_m(x)\, dx,$$

which corresponds, for $n$ or $m$ in $\mathcal{N}_0$ (interior nodes), to a nine-point stencil for $h^2$ times the Laplacian. Since the coefficients of the matrix $K$ are bounded by a pure constant, we deduce that there exists a pure non-negative number $\kappa$ such that

$$(KU, U) \leq \kappa\,(U, U). \tag{2.48}$$

These definitions being made, the semi-discrete problem can be put in the following matricial form:

Find $P(t) = (p_n(t))_{n\in\mathcal{N}}$ and $\Lambda(t) = (\lambda_n(t))_{n\in\mathcal{N}}$ in $C^2([0,T],\mathbb{R}^{\#\mathcal{N}})$ such that

$$\begin{cases} M(v)\,\dfrac{d^2P(t)}{dt^2} + B(v)\,\dfrac{dP(t)}{dt} + KP(t) = \bar S_p(t) \\[1ex] M(v)\,\dfrac{d^2\Lambda(t)}{dt^2} - B(v)\,\dfrac{d\Lambda(t)}{dt} + K\Lambda(t) = \bar S_{ad}(t) \\[1ex] P(t=0) = 0,\quad \dfrac{dP}{dt}(t=0) = 0,\quad \Lambda(t=T) = 0,\quad \dfrac{d\Lambda}{dt}(t=T) = 0. \end{cases}$$

Discretisation in time

We pick a time step $\Delta t$; our aim is to compute approximations of the vectors $P(t)$ and $\Lambda(t)$ at the discrete instants $k\Delta t$:

$$P^k \simeq P(k\Delta t), \qquad \Lambda^k \simeq \Lambda(k\Delta t), \qquad k = 0,\dots,K, \quad K\Delta t = T.$$

For that, we introduce the finite-difference operators

$$[\partial^2_{\Delta t^2} U]^k = \frac{U^{k+1} - 2U^k + U^{k-1}}{\Delta t^2}, \qquad [\partial_{2\Delta t} U]^k = \frac{U^{k+1} - U^{k-1}}{2\Delta t},$$

and define the totally discretised problem:

Find $P^k$ and $\Lambda^k$ in $\mathbb{R}^{\#\mathcal{N}}$, $k = 0,\dots,K$, such that for all $k>0$,

$$\begin{cases} M(v)[\partial^2_{\Delta t^2} P]^k + B(v)[\partial_{2\Delta t} P]^k + KP^k = \bar S^k_p = \bar S_p(t = k\Delta t) \\[0.5ex] M(v)[\partial^2_{\Delta t^2}\Lambda]^k - B(v)[\partial_{2\Delta t}\Lambda]^k + K\Lambda^k = \bar S_{ad}(t = k\Delta t) \\[0.5ex] P^0 = 0,\ P^1 = 0,\qquad \Lambda^K = 0,\ \Lambda^{K-1} = 0. \end{cases}$$

Note that computing $P^{k+1}$ (resp. $\Lambda^{k-1}$) from $P^k, P^{k-1}$ (resp. $\Lambda^{k+1}, \Lambda^k$) involves the inversion of the matrix $M(v) + \frac{\Delta t}{2}B(v)$; this matrix being diagonal (thanks to mass lumping), the scheme is explicit.

Stability and CFL condition.

The stability of this scheme relies on the conservation of a pseudo-energy. For simplicity, we drop the source term and consider instead initial conditions. The model problem is now:

Find $U^k$ in $\mathbb{R}^{\#\mathcal{N}}$, $k = 0,\dots,K$, such that for all $k>0$,

$$\begin{cases} M(v)[\partial^2_{\Delta t^2} U]^k + B(v)[\partial_{2\Delta t} U]^k + KU^k = 0 \\[0.5ex] \dfrac12(U^1 + U^0) = u^{(0)}, \qquad \dfrac{U^1 - U^0}{\Delta t} = u^{(1)}. \end{cases}$$

We take the scalar product of the equation in $U^k$ with $[\partial_{2\Delta t} U]^k$ and obtain

$$\left(M(v)[\partial^2_{\Delta t^2} U]^k, [\partial_{2\Delta t} U]^k\right) + \left(KU^k, [\partial_{2\Delta t} U]^k\right) + \left(B(v)[\partial_{2\Delta t} U]^k, [\partial_{2\Delta t} U]^k\right) = 0. \tag{2.49}$$

Let us define

$$V^{k+\frac12} = [\partial_{\Delta t} U]^{k+\frac12} = \frac{U^{k+1}-U^k}{\Delta t}, \qquad [\mu_{\Delta t} U]^{k+\frac12} = \frac{U^{k+1}+U^k}{2};$$

it is straightforward to show that

$$\Delta t\left(M(v)[\partial^2_{\Delta t^2} U]^k, [\partial_{2\Delta t} U]^k\right) = \frac12\left(M(v)\big([\partial_{\Delta t}U]^{k+\frac12} - [\partial_{\Delta t}U]^{k-\frac12}\big),\ \big([\partial_{\Delta t}U]^{k+\frac12} + [\partial_{\Delta t}U]^{k-\frac12}\big)\right) = \frac12\left(M(v)V^{k+\frac12}, V^{k+\frac12}\right) - \frac12\left(M(v)V^{k-\frac12}, V^{k-\frac12}\right),$$

the last equality being obtained thanks to the symmetry of the matrix $M(v)$. In the same way, we have

$$\Delta t\left(KU^k, [\partial_{2\Delta t}U]^k\right) = \frac12\left(KU^k, U^{k+1}\right) - \frac12\left(KU^k, U^{k-1}\right).$$

Defining

$$U^k_a = \left(B(v)[\partial_{2\Delta t}U]^k, [\partial_{2\Delta t}U]^k\right) \geq 0, \qquad E^{k+\frac12} = \frac12\left(M(v)V^{k+\frac12}, V^{k+\frac12}\right) + \frac12\left(KU^k, U^{k+1}\right),$$

we get from (2.49)

$$E^{k+\frac12} - E^{k-\frac12} + \Delta t\, U^k_a = 0, \qquad k\geq 1,$$

and, by a simple summation,

$$E^{k+\frac12} + \sum_{q=1}^k \Delta t\, U^q_a = E^{\frac12}. \tag{2.50}$$

At this point, the problem is that the potential energy $\frac12\left(KU^k, U^{k+1}\right)$ might be a negative term, since the vector $U$ is not taken at the same instant.
However, we can rewrite

$$U^{k+1} = [\mu_{\Delta t}U]^{k+\frac12} + \frac{\Delta t}{2}[\partial_{\Delta t}U]^{k+\frac12}, \qquad U^k = [\mu_{\Delta t}U]^{k+\frac12} - \frac{\Delta t}{2}[\partial_{\Delta t}U]^{k+\frac12},$$

and so (due to the symmetry of the matrix $K$),

$$\left(KU^k, U^{k+1}\right) = \left(K[\mu_{\Delta t}U]^{k+\frac12}, [\mu_{\Delta t}U]^{k+\frac12}\right) - \frac{\Delta t^2}{4}\left(K[\partial_{\Delta t}U]^{k+\frac12}, [\partial_{\Delta t}U]^{k+\frac12}\right).$$

The energy term can be rewritten

$$E^{k+\frac12} = \frac12\left(\tilde M(v)V^{k+\frac12}, V^{k+\frac12}\right) + \frac12\left(K[\mu_{\Delta t}U]^{k+\frac12}, [\mu_{\Delta t}U]^{k+\frac12}\right), \qquad \tilde M(v) = M(v) - \frac{\Delta t^2}{4}K.$$

Our claim is that if this matrix is positive definite, i.e.

$$\frac{\Delta t^2}{4}K < M(v),$$

then our explicit scheme is stable. We can get a crude approximation of this condition using (2.48):

$$\frac{\Delta t^2}{4}(KU,U) \leq \kappa\,\frac{\Delta t^2 v_+^2}{4h^2}\,\frac{h^2}{v_+^2}(U,U) \leq \frac{\kappa}{4}\,\frac{\Delta t^2 v_+^2}{h^2}\,(M(v)U,U),$$

with $v_+$ the maximum of the components of $v$. Finally, there exists a pure constant $\alpha^2\leq\kappa/4$ such that, when

$$C = \alpha^2\,\frac{\Delta t^2 v_+^2}{h^2} < 1 \qquad (\text{CFL condition}), \tag{2.51}$$

then $E^{k+\frac12}$ is always positive, and consequently (see (2.50) and use $U^q_a\geq 0$)

$$0\leq E^{k+\frac12}\leq E^{\frac12}.$$

It is easy to see that this a priori estimate implies that, for each node $n$,

$$\left(U^{k+1}_n - U^k_n\right)^2 \leq \frac{4 v_+^2\,\Delta t^2}{h^2}\,E^{\frac12} \leq \frac{4}{\alpha^2}\,E^{\frac12},$$

and $U^k_n$ can only grow linearly with $k$: the scheme is stable. The estimates for the discretised problem with a source term are more involved, and we fail to derive a simple estimate involving some norm of the source term in this case.

Accuracy: the dispersion effect

The analysis of the precision of such schemes is usually done for infinite grids (i.e. $\Omega = \mathbb{R}^2$) and constant velocity. The scheme is simply

$$\frac{h^2}{c^2}[\partial^2_{\Delta t^2}U]^k + KU^k = 0.$$

The scheme being invariant by time and spatial translations, there exist particular solutions of the form (discrete plane-wave solutions)

$$U^k = U^{(0)}\,e^{i\omega k\Delta t}, \quad\text{with}\quad \frac{1}{h^2}KU^\star = \frac{4\sin^2(\omega\frac{\Delta t}{2})}{c^2\Delta t^2}\,U^\star,$$

where $U^\star$ is an eigenvector of the matrix $K$. Let $x_n = ih\hat x + jh\hat z$; we look for eigenvectors of the form

$$U^\star_n = e^{i k^h(\omega)\cdot x_n} = e^{i(k^h_x(\omega)\,ih + k^h_y(\omega)\,jh)}.$$

Plugging in this expression, we obtain the dispersion relation of the scheme

$$\frac{4\sin^2(\omega\frac{\Delta t}{2})}{c^2\Delta t^2} = \frac{1}{h^2}\mathcal{K}\big(k^h_x(\omega)h,\ k^h_y(\omega)h\big); \tag{2.52}$$

$\mathcal{K}$ is the symbol of the matrix $K$. We skip its exact definition and only retain that, due to both the centered nature and the consistency of the scheme, we have

$$\frac{1}{h^2}\mathcal{K}\big(k^h_x(\omega)h, k^h_y(\omega)h\big) = \left[k^h_x(\omega)^2 + k^h_y(\omega)^2\right]\Big(1 + O\big((k^h_x(\omega)h)^2\big) + O\big((k^h_y(\omega)h)^2\big)\Big).$$

Now, the Taylor expansion of $\frac{\sin x}{x}$ immediately gives

$$\frac{\omega^2}{c^2} = \left[k^h_x(\omega)^2 + k^h_y(\omega)^2\right]\Big(1 + O\big((k^h_x(\omega)h)^2\big) + O\big((k^h_y(\omega)h)^2\big) + O\big((\omega\Delta t)^2\big)\Big),$$

and the dispersion relation $\frac{\omega^2}{c^2(k_x^2+k_y^2)} = 1$ is asymptotically recovered: the scheme is consistent. However, there is a second-order error in space and time, and this error is the cause of what is called the pollution effect: as the wave propagates in the medium, the errors accumulate, all the more as the propagation duration increases. It is usually mentioned that 10 points per wavelength are enough to get a "precise" result, let us say $\epsilon = 0.1\%$, for some $h$. As a matter of fact, this result was obtained in the seventies, when computers only allowed for a propagation over roughly 6 wavelengths. As the errors are second order in time and space, a propagation along $4\times 6$ wavelengths would induce an error of $2\epsilon$; to get the same error, we need to divide the space step $h$ (as well as the time step, for a constant CFL number) by $\sqrt 2$. The error of numerical schemes for wave equations in heterogeneous media has not been much studied.
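To close the discussion of the time discretisation, here is a minimal NumPy sketch of the explicit forward loop defined above (an illustration, not the thesis code; `M` and `B` are the diagonals of the lumped matrices (2.47), `K` a sparse stiffness matrix, and `S` the discrete source history, all assumed given):

```python
import numpy as np

def forward_leapfrog(M, B, K, S, dt, nsteps):
    """March P^{k+1} from P^k, P^{k-1}; M and B are 1-D arrays (diagonal matrices).

    Scheme: M [d2P]^k + B [d2dtP]^k + K P^k = S^k, solved for P^{k+1};
    the left-hand matrix M/dt^2 + B/(2 dt) is diagonal, hence the scheme is explicit.
    """
    n = M.size
    Pm, P = np.zeros(n), np.zeros(n)          # P^{k-1}, P^k (null initial state)
    lhs = M / dt**2 + B / (2.0 * dt)          # diagonal "matrix" to invert
    traj = []
    for k in range(1, nsteps):                # S is assumed of shape (nsteps, n)
        rhs = S[k] - K @ P + (2.0 * M / dt**2) * P - (M / dt**2 - B / (2.0 * dt)) * Pm
        Pm, P = P, rhs / lhs                  # element-wise division: no linear solve
        traj.append(P.copy())
    return np.array(traj)
```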
Usually, a rule of thumb consists of choosing $F_c$, a cut-off frequency of the seismic source, and taking $n$ points per wavelength at this frequency and for the slowest velocity in the medium. The formula is

$$h = \frac{v_-}{F_c\, n}. \tag{2.53}$$

The choice of $n$ depends, as we said above, on the size of the domain of propagation and on the required accuracy. Note that for our inverse problem we do not have a precise knowledge of the values of the velocities involved; the simplest way to control the dispersion effect is to constrain their values to lie in a given, not too large, interval.

The discrete functional and its gradient

At this point we have a process to compute an approximation of $p_h(x,t)$ at the nodes $x_n$ of the grid and at the times $k\Delta t$. Our discrete observed data set $\mathcal{D}_{h,\Delta t}$ consists of seismograms recorded at $n_r$ receivers located on the grid:

$$d_{obs}: \{d_{obs}(x_{n_r}, k\Delta t),\ n_r\in\mathcal{N}_r\subset\mathcal{N},\ k\in K_r\subset\{1,\dots,K\}\}. \tag{2.54}$$

The discrete predicted observables are defined as

$$d_{cal}(v): \left\{ P(v)^k_{n_r} = R_{h,\Delta t}P(v);\ n_r\in\mathcal{N}_r,\ k\in K_r \right\}, \tag{2.55}$$

where $R_{h,\Delta t}: [\mathbb{R}^{\#\mathcal{N}}]^K\to\mathcal{D}_{h,\Delta t}$ is the discrete extraction operator

$$R_{h,\Delta t}: P = (P^k_n)_{k=1,\dots,K,\ n\in\mathcal{N}} \to (P^k_n)_{k\in K_r,\ n\in\mathcal{N}_r}. \tag{2.56}$$

The discrete functional is now

$$\mathcal{J}(v) = h_{h,\Delta t}\big(d_{cal} = R_{h,\Delta t}P(v);\ d_{obs}\big).$$

To compute the first variation of the discrete functional $\mathcal{J}$, we move the parameter from $v$ to $v+\delta v$; the solution $P = P(v)$ of the direct problem moves to $P+\delta P$, and to first order we have

$$\delta\mathcal{J}(v) = \sum_{k\in K_r}\sum_{n_r\in\mathcal{N}_r} \nabla_{cal}h(x_{n_r}, k\Delta t)\,\delta P^k_{n_r} = \sum_{k=1}^K\left(R^T_{h,\Delta t}\nabla_{d_{cal}}h(R_{h,\Delta t}P(v); d_{obs}),\ \delta P^k\right). \tag{2.57}$$

Now, to first order, we have

$$\begin{cases} M(v)[\partial^2_{\Delta t^2}\delta P]^k + B(v)[\partial_{2\Delta t}\delta P]^k + K\,\delta P^k = -\delta M(v)[\partial^2_{\Delta t^2}P]^k - \delta B(v)[\partial_{2\Delta t}P]^k \\[0.5ex] \delta P^1 = \delta P^0 = 0. \end{cases}$$

We take the scalar product of each of these equations with $\Lambda^k$ and sum over $k = 1,\dots,K$. Thanks to discrete integrations by parts in time, and using the null initial and final conditions $\Lambda^K = \Lambda^{K-1} = \delta P^1 = \delta P^0 = 0$, we get

$$\begin{cases} \displaystyle\sum_{k=1}^K\left(M(v)[\partial^2_{\Delta t^2}\delta P]^k, \Lambda^k\right) = \sum_{k=1}^K\left(M(v)[\partial^2_{\Delta t^2}\Lambda]^k, \delta P^k\right) \\[1ex] \displaystyle\sum_{k=1}^K\left(B(v)[\partial_{2\Delta t}\delta P]^k, \Lambda^k\right) = -\sum_{k=1}^K\left(B(v)[\partial_{2\Delta t}\Lambda]^k, \delta P^k\right) \\[1ex] \displaystyle\sum_{k=1}^K\left(K\,\delta P^k, \Lambda^k\right) = \sum_{k=1}^K\left(K\Lambda^k, \delta P^k\right). \end{cases}$$

Adding the three equations and using the equations satisfied by $\Lambda^k$ and $\delta P^k$, we get

$$-\sum_{k=1}^K\left(\delta M(v)[\partial^2_{\Delta t^2}P]^k, \Lambda^k\right) - \sum_{k=1}^K\left(\delta B(v)[\partial_{2\Delta t}P]^k, \Lambda^k\right) = \sum_{k=1}^K\left(\bar S^k_{ad}, \delta P^k\right). \tag{2.58}$$

Returning to (2.57), we get

$$\delta\mathcal{J}_{h,\Delta t} = \sum_{k=1}^K\left(\delta M(v)[\partial^2_{\Delta t^2}P]^k, \Lambda^k\right) + \left(\delta B(v)[\partial_{2\Delta t}P]^k, \Lambda^k\right), \tag{2.59}$$

as soon as $(\bar S^k_{ad})_n = -\nabla_{cal}h(x_{n_r}, k\Delta t)$ for $k\in K_r$, $n\in\mathcal{N}_r$, and $0$ elsewhere. From this we deduce the expression of the gradient of $\mathcal{J}_{h,\Delta t}$:

$$\frac{\partial\mathcal{J}_{h,\Delta t}}{\partial v_c} = \sum_{k=1}^K\left(\frac{\partial M(v)}{\partial v_c}[\partial^2_{\Delta t^2}P]^k, \Lambda^k\right) + \left(\frac{\partial B(v)}{\partial v_c}[\partial_{2\Delta t}P]^k, \Lambda^k\right). \tag{2.60}$$

What we have done here is to compute the exact gradient of the discrete functional. It is nowadays well known that directly discretising the gradient of the continuous functional given in (2.45) by some other consistent scheme leads to coherency problems when used with a given solver. From the very expressions of both diagonal matrices $M$ and $B$ (see (2.47)), it is seen that the two scalar products in (2.60) involve only a finite number of terms at each time step $k\Delta t$. The remaining problem is that $P^k$ is computed forward in time, while $\Lambda^k$ is computed backward in time. Storing all the $P^k$'s is memory consuming (solving the scheme only requires storing two successive instants).
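In practice, (2.53) and the CFL bound (2.51) together fix the discretisation parameters; a small helper of the following kind summarises the procedure (a sketch: the safety factor and the constant $\alpha$ are assumed, scheme-dependent values):

```python
import numpy as np

def choose_discretisation(v_min, v_max, f_cut, n_ppw=10, cfl=0.9, alpha=1.0):
    """Grid step from the rule of thumb (2.53), time step from the CFL bound (2.51).

    n_ppw points per shortest wavelength; 'alpha' and the CFL safety factor
    are assumed here for illustration (alpha is a scheme-dependent constant).
    """
    h = v_min / (f_cut * n_ppw)          # eq. (2.53): resolve the slowest wavelength
    dt = cfl * h / (alpha * v_max)       # eq. (2.51): C = alpha^2 dt^2 v_+^2 / h^2 < 1
    return h, dt

h, dt = choose_discretisation(v_min=1500.0, v_max=4500.0, f_cut=25.0)
```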
To overcome this memory constraint, the $P^k$'s are first computed for $k = 1,\dots,K$; then we restart from the two last instants $P^K$ and $P^{K-1}$ and recompute backward in time the $P^k$'s together with the $\Lambda^k$'s, $k = K, K-1,\dots,1$. In order to avoid instability problems, it is nevertheless necessary to store all the $P^k_n$ at the nodes $n\in\mathcal{N}_{\Gamma_2}$ (i.e. on the absorbing boundary), since the boundary condition is only absorbing in the forward direction.

Algorithms and Computational framework

We have now all the ingredients for designing the algorithm for our Full-Waveform Inversion method. A simple FWI workflow, for a given seismic source and an initial model, can be summarised by the flow chart of Figure 2.1.

In PySIT, the 2D acoustic wave forward and adjoint problems are solved using standard second-order finite differences in space and time on a uniform mesh in space, together with a second-order explicit integration scheme in time with constant time step. Complex-frequency-shifted (CFS) PMLs are implemented following Sim and Grote. Thus, there are slight differences from what we have presented in the previous subsections (first-order absorbing conditions instead of PML, a different stencil for the discrete Laplacian, ...), but all the ideas remain the same: we have a CFL condition, see (2.51), and the dispersion of the waves must be controlled using a large enough number of points per shortest wavelength (i.e. for the maximum frequency of the signal and the minimum velocity).

Least-squares misfit function and the cycle-skipping problem

Since the introduction of full-waveform inversion, the most commonly used objective function measures the $L^2$-norm difference between the synthetic observables and the observations. As previously seen, in the data space $\mathcal{D}$, observations $d_{obs}$ are common-shot gathers of seismograms, often called seismic traces, recorded at $n_r$ receivers for a source $s$:

$$d^s_{obs}: \{d^s_{obs}(x_r,t),\ r=1,\dots,n_r\}, \quad t\in[0,T], \tag{2.61}$$

where $x_r$ denotes the spatial position vector of receiver $r$. The predicted observables are defined accordingly, for a physical realisation satisfying the state equation $\mathcal{L}[m]u - f^s = 0$ together with the appropriate initial and boundary conditions:

$$d^s_{cal}(m): \{d^s_{cal}(x_r,t;m),\ r=1,\dots,n_r\} = Ru^s(x,t;m), \quad t\in[0,T], \tag{2.62}$$

where $R:\mathcal{W}\to\mathcal{D}$ is defined in (2.6) and maps the physical realisation onto the receiver positions $x_r$. The least-squares-based objective function is defined as

$$\mathcal{J}(m) = h(d_{cal}(m); d_{obs}) = \frac12\sum_{s=1}^{n_s}\int_0^T |d^s_{cal}(m) - d^s_{obs}|^2\, dt = \frac12\sum_{s=1}^{n_s}\int_0^T |Ru^s(m) - d^s_{obs}|^2\, dt, \tag{2.63}$$

where $n_s$ is the number of seismic sources used in the seismic survey.

The monotonicity of the optimal transport with respect to time-and-space shifts and dilations, together with an improved robustness with respect to noise, has attracted the interest of the image processing and machine learning communities. Today, there is a growing interest in the FWI community to extend optimal transport to seismic oscillatory signals, in order to develop alternative continuous and differentiable objective functions. In the next section, a brief overview of the theoretical framework of optimal transport, and of some of its current limitations in the context of FWI, is provided.
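As a minimal sketch of the workflow of Figure 2.1 (pseudo-code in the spirit of PySIT, not its actual API; `forward`, `adjoint` and `gradient_from_fields` are assumed helpers implementing (2.41), (2.44) and (2.60)):

```python
import numpy as np

def fwi(v0, d_obs, forward, adjoint, gradient_from_fields,
        v_lo, v_hi, n_iter=20, alpha=1e-2):
    """Steepest-descent FWI loop sketched after Figure 2.1."""
    v = v0.copy()
    for it in range(n_iter):
        d_cal, P = forward(v)                   # forward problem (2.41), sampled at receivers
        residual = d_cal - d_obs                # L2 misfit: drives the adjoint source
        J = 0.5 * np.sum(residual ** 2)         # misfit (2.63), single source
        Lam = adjoint(v, residual)              # adjoint problem (2.44), backward in time
        g = gradient_from_fields(v, P, Lam)     # discrete gradient (2.60)
        v = np.clip(v - alpha * g, v_lo, v_hi)  # gradient step, kept in [v-, v+]
        print(f"iter {it}: J = {J:.3e}")
    return v
```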
Chapter 3

A brief overview of Optimal Transport

This chapter provides a very brief overview of optimal transport, introducing the main notions needed in this thesis; comprehensive presentations of optimal transport can be found, for example, in Santambrogio (2015) and Peyré and Cuturi (2019), and the references therein.

Optimal transport (OT) is an old problem, originally formulated by Monge (1781) as finding the optimal way of moving soil between two places through a volume-preserving transport in order to build embankments. Assuming constant density and no mass creation or loss during the transport, the problem is to find a transport map that minimises the mechanical work, i.e. the mass $\mu(x)$ times the moving distance, the latter being commonly defined as an $L^p$ transport distance with $p\geq 1$.

Monge's problem

Let us consider two probability measures: a "source" $\mu$ and a "target" $\nu$, with compact supports $X\subset\mathbb{R}^d$ and $Y\subset\mathbb{R}^d$, respectively, and densities defined with respect to the Lebesgue measure. Mathematically, the problem is to find a transport map $T$ from $X$ to $Y$ such that

$$MP(\mu,\nu): \quad T = \arg\inf_{T\in\mathcal{M}} \int_X \|x - T(x)\|^p\, d\mu(x), \tag{3.1}$$

where the displacement cost, i.e. the ground cost, is here defined as the $L^p$ distance, while the value function in (3.1) is called the transport cost. Let us denote by $\mathcal{M} := \{T: X\to Y\ |\ T_\#\mu = \nu\}$ the set of all measure-preserving maps between $\mu$ and $\nu$, with $T_\#\mu$ the pushforward of the measure $\mu$, defined as

$$[T_\#\mu](B) = \mu\big(T^{-1}(B)\big), \quad\text{for any measurable subset } B\subset Y.$$

The Monge formulation (3.1) defines a non-convex optimisation problem with nonlinear constraints. However when, for example, $\mu$ and $\nu$ have densities, the optimal transport map $T^\star$ exists and gives a natural interpolation between the two measures. In particular, for the $L^p$ distance, the map $T_t(x) = (1-t)x + tT^\star(x)$ describes the path of the particle $x$, and furthermore the measure $\mu$ pushed forward by $T_t$ is the geodesic, i.e. the shortest path for the OT-induced metric, between $\mu$ and $\nu$. If the map is smooth and the measures absolutely continuous, the constraint can be written as an equation for the Jacobian of $T$:

$$\mu(x) = \nu(T(x))\, |\det\nabla T(x)|. \tag{3.2}$$

3.2 Monge-Ampère and semi-discrete formulation

Since Brenier's theorem (Brenier, 1991) it is known that the quadratic-cost OT problem, i.e. $p=2$, has nice properties: under very general assumptions on the densities $\mu$ and $\nu$, there exists a unique transport map, which is characterised as the gradient of a convex potential $\varphi(x)$: $T(x) = \nabla\varphi(x)$. Using this result in (3.2) yields the Monge-Ampère form of the Monge problem:

$$\mu(x) = \nu(\nabla\varphi(x))\,\det\nabla^2\varphi(x), \qquad \nabla\varphi(X)\subset Y, \tag{3.3}$$

where $\varphi$ is a convex function. When $\nu(x) := \mu(x-\tau)$, the convex potential is given analytically by $\varphi(x) = \frac12\|x\|^2 + \tau\cdot x + C$ (unique up to a constant). Plugging this into (3.1), the transport cost is $\|\tau\|^2$, i.e. quadratic with respect to the shift $\tau$. We will illustrate this property in chapter 5.
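This quadratic behaviour with respect to a shift is easy to check numerically in 1D, where $W_2^2$ has a closed form through quantile functions, $W_2^2(\mu,\nu) = \int_0^1 |F_\mu^{-1}(q) - F_\nu^{-1}(q)|^2\, dq$; a small sketch (illustrative only):

```python
import numpy as np

def w2_squared_1d(x, mu, nu, n_q=2000):
    """W_2^2 between two 1-D densities sampled on the grid x, via quantile functions."""
    mu, nu = mu / np.trapz(mu, x), nu / np.trapz(nu, x)   # normalise to probabilities
    Fmu = np.cumsum(mu); Fmu /= Fmu[-1]
    Fnu = np.cumsum(nu); Fnu /= Fnu[-1]
    q = np.linspace(1e-4, 1 - 1e-4, n_q)
    inv_mu = np.interp(q, Fmu, x)                          # inverse CDFs by interpolation
    inv_nu = np.interp(q, Fnu, x)
    return np.trapz((inv_mu - inv_nu) ** 2, q)

x = np.linspace(-10, 10, 4001)
gauss = lambda c: np.exp(-(x - c) ** 2)
for tau in [0.5, 1.0, 2.0]:
    print(tau ** 2, w2_squared_1d(x, gauss(0.0), gauss(tau)))   # close to tau^2
```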
Several numerical methods have been proposed to discretise and solve the Monge-Ampère problem (3.3):

• A first class of methods (Benamou et al., 2014; Benamou and Duval, 2019) is based on the discretisation of $X$ using cartesian grids and optimised finite-difference stencils. The measure $\mu$ is approximated as a discrete measure prescribed on the grid, but $\nu$ requires a given analytic function, or an oracle based on an interpolation of $\nu$, evaluated at $\nabla\varphi(x)$ in $Y$. This is the approach followed in particular by Engquist and collaborators (Engquist and Froese, 2014; Engquist et al., 2016; Yang et al., 2018) in the context of full-waveform inversion.

• A second class of methods (Mérigot and Thibert, 2021; Lévy and Schwindt, 2018) relies on a semi-discrete approach, in which (3.3) is solved from a dual point of view. The measure $\nu$ is assumed to be an arbitrary empirical measure, i.e. a weighted point cloud, and a continuous oracle is still needed, but only for $\mu$.

Both approaches lead to a nonlinear set of equations that can be solved with a quasi-Newton solver in $O(N\log N)$ operations, $N$ being the number of points of the discrete measure, but with a problem-dependent constant that may be large.

Kantorovich Formulation

When both probability measures $\mu$ and $\nu$ have discrete support, e.g. as in numerical approximations, rearranging the mass using a one-to-one map $T$ may become impossible, see Figure 3.1. Kantorovich (1942) suggested to optimise instead a relaxed functional with respect to transport plans:

$$KP(\mu,\nu) = \inf_{\gamma\in\Pi(\mu,\nu)} \int_{X\times Y} \|x-y\|^p\, d\gamma(x,y). \tag{3.4}$$

The set of admissible transport plans

$$\Pi(\mu,\nu) = \{\gamma\in\mathcal{P}(X\times Y)\ |\ P^X_\#\gamma = \mu,\ P^Y_\#\gamma = \nu\} \tag{3.5}$$

is the set of probability measures on the product space $X\times Y$, also called "couplings", with prescribed marginal laws $\mu$ and $\nu$, where $P^X: X\times Y\to X$ and $P^Y: X\times Y\to Y$ stand for the projections onto each space. Transport plans still transport the $\mu$-mass repartition to the $\nu$-mass repartition, but the mass at location $x$ can be split and sent to different locations $y$; see Figure 3.1 for an illustration. The cost is again the total work, i.e. mass times ground cost. For the $L^p$ ground cost, $p>1$, the Monge and the Kantorovich formulations are equivalent: they both define the same distance on the set of probability measures, the Wasserstein distance denoted $W^p_p(\mu,\nu)$,

$$W^p_p(\mu,\nu) = \inf_{\gamma\in\Pi(\mu,\nu)} \int_{X\times Y}\|x-y\|^p\, d\gamma(x,y) = \inf_{T\in\mathcal{M}}\int_X \|x-T(x)\|^p\,\mu(x)\, dx, \tag{3.6}$$

and, when $\mu$ has a continuous density, the support of any optimal transport plan $\gamma^\star$ is contained in the graph of a function $T^\star$. In particular, this implies $\gamma^\star(A,B) = \mu(\{x: x\in A,\ T^\star(x)\in B\})$, and furthermore that the optimal plan defines a mapping between $\mu$ and $\nu$. The dual problem associated to (3.6) is formulated as

$$W^p_p(\mu,\nu) = \sup_{\substack{\phi\in C(X),\ \psi\in C(Y) \\ \text{s.t. } \phi(x)+\psi(y)\leq\|x-y\|^p,\ \forall(x,y)\in X\times Y}} \int_X \phi(x)\, d\mu(x) + \int_Y \psi(y)\, d\nu(y). \tag{3.7}$$
When freezing $\nu$ and considering the Wasserstein distance as a function of $\mu$, the Wasserstein distance is the supremum of affine functions of $\mu$, parameterised by $(\phi,\psi)$, and as such is convex with respect to $\mu$. The dual formulation (3.7) leads to a convex, linear optimisation problem with linear constraints. Under the primal form (3.6), the optimal transport is defined for probability measures, while the dual form (3.7) admits a notion of first variation over the space of general measures, provided they have the same total mass; see Santambrogio (2015), section 7.2, for details.

Discrete Kantorovich problem

When $X$ and $Y$ are discrete, i.e. $X = \{x_i,\ i=1,\dots,N\}$, $Y = \{y_j,\ j=1,\dots,N\}$, where the $x_i$ and $y_j$ are in $\mathbb{R}^d$, $\mu$ and $\nu$ can be written as sums of Dirac masses:

$$\mu(x) = \sum_{i=1}^N \mu_i\,\delta_{x_i} \quad\text{and}\quad \nu(y) = \sum_{j=1}^N \nu_j\,\delta_{y_j}.$$

The weights $\mu_i$ and $\nu_j$ have to satisfy the positivity and total mass balance constraints:

$$\mu_i > 0,\ \nu_j > 0, \qquad \sum_i\mu_i = 1,\ \sum_j\nu_j = 1. \tag{3.8}$$

The set of admissible plans becomes

$$\Pi(\mu,\nu) = \left\{\gamma\in S_{N,N}:\ \sum_{j=1}^N\gamma_{ij} = \mu_i,\ \sum_{i=1}^N\gamma_{ij} = \nu_j,\ \forall i,j\right\} \quad\text{with}\quad S_{N,N} = \left\{\gamma_{ij}\in\mathbb{R}^{N\times N}_+:\ \sum_{i=1}^N\sum_{j=1}^N\gamma_{ij} = 1\right\}.$$

The marginal constraints are obtained by summing the matrix $\gamma$ over lines and columns. Finally, the optimisation problem becomes

$$\min_{\gamma\in\Pi(\mu,\nu)} \sum_{i=1}^N\sum_{j=1}^N c_{ij}\,\gamma_{ij}, \qquad c_{ij} = |x_i - y_j|^p, \tag{3.9}$$

where $c_{ij}$ is the ground cost between $x_i$ and $y_j$. The dual formulation associated to (3.7) yields

$$OT(\mu,\nu) = \max_{\substack{u\in\mathbb{R}^N,\ v\in\mathbb{R}^N \\ u_i+v_j\leq c_{ij}}} \sum_i u_i\mu_i + \sum_j v_j\nu_j. \tag{3.11}$$

In the context of this thesis, the discretisation $(x_i, y_j)$ is assumed to be static, defined by the receiver positions and by the time sampling of the acquisition, and $(\mu,\nu)$ are vectors in $\mathbb{R}^N$ with coefficients $(\mu_i)$ and $(\nu_j)$. The convexity of the map $\mu\in\mathbb{R}^N\mapsto OT(\mu,\nu)$ again follows from it being a supremum of affine functions, and Danskin's theorem provides the gradient with respect to $\mu$:

$$\frac{\partial}{\partial\mu_i}OT(\mu,\nu) = u^\star_i, \qquad \frac{\partial}{\partial\nu_j}OT(\mu,\nu) = v^\star_j, \quad \forall i,j, \tag{3.12}$$

where $(u^\star, v^\star)$ are the maximisers of (3.11).

The $W_1$ distance

The case $p=1$ is special. The Kantorovich metric is well defined, but the Monge problem has no unique solution (Kantorovich, 1958; Villani, 2009). Assuming $X=Y$, the dual formulation — a particular instance of the Kantorovich-Rubinstein theorem, e.g. Santambrogio (2015), section 3.2.1 — yields

$$W_1(\mu,\nu) = \sup_{\mathrm{Lip}(\phi)\leq 1} \int_X \phi(x)\, d(\mu-\nu), \tag{3.13}$$

where $\mathrm{Lip}(\phi)$ denotes the minimal Lipschitz constant for $\phi$. The 1-Lipschitz functions $\phi$, for the ground cost associated to the $\ell^1$ distance on $\mathbb{R}^d$, satisfy

$$\phi: X\to\mathbb{R}, \quad |\phi(x)-\phi(y)|\leq |x-y|,\ \forall(x,y)\in X\times X, \quad\text{with}\quad |x-y| = \sum_{i=1}^d |x_i-y_i|,\ \forall(x,y)\in\mathbb{R}^d.$$

The Kantorovich problem can be generalised when the total mass is not conserved between $\mu$ and $\nu$, leading to the generalised $W_1$ distance

$$\tilde W_1(\mu,\nu) = \sup_{\mathrm{Lip}(\phi)\leq 1\ \text{s.t.}\ \|\phi\|_\infty\leq C} \int_X \phi(x)\, d(\mu-\nu), \tag{3.14}$$

which is a particular instance of the KR norm (Bogachev, 2007) defined on the space of Radon measures. This can also be seen as a generalisation of the $L^1$ norm (Lellmann et al., 2014). The generalised $W_1$ metric, also referred to as the KR distance, was first introduced in the context of full-waveform inversion by Métivier and collaborators (Métivier et al., 2016a,b; Métivier et al., 2016).
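As an aside on the discrete problem (3.9): for moderate $N$, it can be solved exactly with off-the-shelf linear-programming-based solvers. A sketch using the POT (Python Optimal Transport) package, assuming it is available, also recovers the dual potentials of (3.11)-(3.12):

```python
import numpy as np
import ot  # POT: Python Optimal Transport

# two discrete probability vectors on a common 1-D grid
x = np.linspace(0.0, 1.0, 50)[:, None]
mu = np.ones(50) / 50.0
nu = np.exp(-((np.linspace(0, 1, 50) - 0.7) ** 2) / 0.01)
nu /= nu.sum()

C = ot.dist(x, x, metric="sqeuclidean")    # ground cost c_ij = |x_i - y_j|^2
G, log = ot.emd(mu, nu, C, log=True)       # optimal plan gamma*, eq. (3.9)
cost = np.sum(G * C)                       # primal value
u, v = log["u"], log["v"]                  # dual potentials: gradients, eq. (3.12)
```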
One advantage of the KR norm in the context of FWI is that there is no need to normalise the data to be positive and mass balanced. The KR norm has, however, no direct connection with optimal transport when $\mu$ and $\nu$ are no longer required to be probability measures. The problem (3.14) leads to a convex optimisation problem under linear constraints that can be solved efficiently using proximal splitting methods (Combettes and Pesquet, 2011), significantly reducing the computational cost compared to the 2-Wasserstein metric.

Optimal Transport distances in the context of full-waveform inversion

Classical optimal transport defines distances $W^p_p$, $p\geq 1$, on the set of probability measures, assuming

$$\mu>0,\quad \nu>0, \quad\text{and}\quad \int_X d\mu = \int_Y d\nu,$$

where $\mu$ and $\nu$ have supports $X\subset\mathbb{R}^d$ and $Y\subset\mathbb{R}^d$, respectively. In the context of full-waveform inversion, the observables $d_{obs}$ and $d_{cal}$ are oscillatory signals in time, recorded at receiver positions over a time window. As such, the positivity assumption breaks down, and the total mass conservation needs to be carefully checked in practice. Apart from data normalisation, there exists a significant amount of work from the mathematics community to generalise optimal transport to what is called unbalanced optimal transport, either by penalising the marginal constraints (Benamou, 2003; Piccoli and Rossi, 2014, 2016; Lombardi and Maitre, 2015; Gangbo et al., 2019) or by rigorously defining an "unbalanced" distance on the set of positive Radon measures (Chizat et al., 2018a,b; Kondratyev et al., 2016b; Liero et al., 2018). The latter will be considered in connection with the entropic approximation of optimal transport and the Sinkhorn divergence formulation.

Optimal transport and oscillatory signals

In the mathematics community (Ambrosio et al., 2011; Mainini, 2012), different approaches have been explored to extend optimal transport to signed measures. One approach leads to splitting the predicted $d_{cal}$ and the observed $d_{obs}$ signals into their positive and negative parts, $d = d^+ - d^-$, and recombining them as $\tilde d_{cal} = d^+_{cal} + d^-_{obs}$ and $\tilde d_{obs} = d^+_{obs} + d^-_{cal}$. This amounts to computing the optimal transport distance between the positive and negative parts of $d_{obs} - d_{cal}$, leading to the objective function $h(d_{cal}, d_{obs}) = W^2_2(\tilde d_{cal}, \tilde d_{obs})$. This type of transformation suffers, however, from a convexity loss with respect to translation, and from a loss of sensitivity as the translation increases (Métivier et al., 2018). Another approach (Engquist and Froese, 2014; Engquist et al., 2016) is to transport the positive and negative parts of the signals separately. This strategy ensures convexity, but provides no guarantee that the positive part, respectively the negative part, of the predicted signal has the same total mass as the positive part, respectively the negative part, of the observed signal.
Moreover, such a transformation is not differentiable, which is problematic when computing the Fréchet derivatives of the objective function in the context of local gradient-based optimisation methods.

Signal transformation into probability distributions

Another approach to achieve data positivity and mass balance is to directly transform the seismic data into probability densities by linear or nonlinear scaling functions (Qiu et al., 2017; Yang and Engquist, 2018; Yang et al., 2018; Engquist and Yang, 2022). The objective function is now defined as

$$h(d_{cal}, d_{obs}) = W^2_2(\tilde d_{cal}, \tilde d_{obs}).$$

A first type of transformation is the affine transformation

$$\tilde d_{cal} = \frac{d_{cal} + b}{\langle d_{cal} + b\rangle}, \qquad \tilde d_{obs} = \frac{d_{obs} + b}{\langle d_{obs} + b\rangle}, \qquad b>0, \tag{3.17}$$

where $b$ is chosen such that $\tilde d_{cal}$ and $\tilde d_{obs}$ are both positive. This kind of transformation is straightforward and ensures the total mass conservation. The convexity of the objective with respect to translation is, however, affected under such a transformation, as the constant $b$ leads to the creation of artificial mass in the optimal transport.

A related strategy is based on the generalised $W_1$ distance (3.14). This is a particular instance of the KR norm defined on the space of signed measures (Bogachev, 2007; Lellmann et al., 2014), that is lifted by adding a simple bound constraint on the dual potential to deal with unbalanced mass transport. As such, there is no need to normalise the data to be positive and mass balanced. The resulting objective function is differentiable and allows for a simultaneous comparison of 2D and 3D shot gathers, following the numerical strategy of Métivier et al. (2016a); Métivier et al. (2016). This strategy was shown in Métivier et al. (2018) to be equivalent to the one proposed by Mainini (2012) in the case of the 1-Wasserstein distance. However, the KR norm has no direct connection with optimal transport once we no longer require $d_{obs}$ and $d_{cal}$ to be probability measures (Vershik, 2013), and the convexity is not guaranteed with respect to large translations.

An alternative route, detailed in chapter 4, is the entropic regularisation of OT. This research topic has been very active, for two reasons: the method comes with a simple computational scheme, called the Sinkhorn algorithm, for which there are now countless variants and acceleration techniques; and the class of problems it can be applied to is very flexible, and includes in particular the recent extension of OT to positive Radon measures called "unbalanced" optimal transport (Chizat et al., 2018a,b; Liero et al., 2018); see section 4.2. We discuss its use in connection with positive/negative mass splitting in chapter 5.

Two problems arise when using entropic OT as a proxy for the classical OT distance or the unbalanced OT distance. First, the numerical stability and the cost of the Sinkhorn algorithm deteriorate when $\varepsilon$ goes to 0. Second, the entropic OT cost is no longer a mathematical distance on the set of probability measures.
A simple variant, called the Sinkhorn divergence (Genevay et al., 2018; Séjourné et al., 2019), corrects this entropic bias.

Entropic regularisation and Sinkhorn algorithm

Entropic regularisation

This section is presented in the discrete Kantorovich setting of section 3.2.2. The entropic regularisation of (3.4)-(3.5) was introduced in the OT context by Cuturi (2013); the Shannon entropy is defined as

$$\mathrm{Ent}(\gamma) \stackrel{\text{def.}}{=} -\sum_{i,j}\gamma_{ij}\big(\log(\gamma_{ij}) - 1\big), \qquad \forall(i,j)\in[1,n]\times[1,m]. \tag{4.1}$$

The entropic regularised problem becomes

$$OT_\varepsilon(\mu,\nu) = \min_{\gamma\in\Pi(\mu,\nu)} \langle\gamma, c\rangle - \varepsilon\,\mathrm{Ent}(\gamma), \tag{4.2}$$

where $\varepsilon>0$ is a parameter and $c$ is the ground cost, with the associated pairwise cost matrix $c_{ij} = c(x_i, x_j)$ evaluated on the supports of $\mu$ and $\nu$. We will assume (as is the case in FWI) that $\mu$ and $\nu$ have the same support, and will restrict to the quadratic ground cost (i.e. $p=2$)

$$c_{ij} = \|x_i - x_j\|^2. \tag{4.3}$$

Later, in the context of FWI, $x = (t,x)$ will be a point in time $t$ and offset $x$. The problem (4.2) is an $\varepsilon$-strongly convex program with a unique optimal solution $\gamma^\star_\varepsilon$. The solution $\gamma^\star_\varepsilon$ converges to the solution with maximal entropy within the set of all optimal solutions of the Kantorovich problem, i.e. $OT_\varepsilon\xrightarrow{\varepsilon\to 0} OT$. Problem (4.2) can be reformulated as a projection, with respect to the Kullback-Leibler divergence, of the Gibbs kernel associated to the ground cost onto the set of transport plans:

$$OT_\varepsilon(\mu,\nu) = \min_{\gamma\in\Pi(\mu,\nu)} \langle\gamma, c\rangle + \varepsilon\sum_{i,j}\gamma_{ij}\big(\log(\gamma_{ij})-1\big) = \min_{\gamma\in\Pi(\mu,\nu)}\ \varepsilon\sum_{i,j}\mathrm{KL}\!\left(\gamma_{ij}\,\middle|\,K^\varepsilon_{ij}\right), \tag{4.4}$$

where $K^\varepsilon_{ij}\stackrel{\text{def.}}{=}\exp\{-c_{ij}/\varepsilon\}$ is the Gibbs kernel associated to the ground cost, and the Kullback-Leibler divergence is defined as

$$\mathrm{KL}(s\,|\,t) \stackrel{\text{def.}}{=} \begin{cases} s\log\!\left(\dfrac{s}{t}\right) - s & \text{for } (s,t)>0, \\[0.5ex] +\infty & \text{for } t=0. \end{cases} \tag{4.5}$$

The Kullback-Leibler divergence, or relative entropy, takes two arguments $s$ and $t$, and is a particular instance of the statistical divergences that enjoy nice analytical and computational properties. As illustrated in Figure 4.1, it is strictly convex with its minimum at $s=t$, e.g. $t=1$ in this case.

A key insight is that, as $\varepsilon$ increases, the entropic part of the cost function favours mass spreading and smooths out the small-scale transport of the non-regularised OT. The optimal coupling becomes less and less sparse. While this has the effect of both accelerating computational algorithms and promoting faster convergence, it introduces a penalisation bias in the transport that tends to destroy the distance properties. In particular, $OT_\varepsilon(\mu,\mu)>0$, and $\mu$ is no longer necessarily the minimum of $\nu\mapsto OT_\varepsilon(\mu,\nu)$, as it may become cheaper to diffuse from $\mu$ than not to move. This is illustrated by the 1D example in Figure 4.3.

Remark on the reference measure. In order to simplify the presentation of this chapter, we used the penalisation $\mathrm{Ent}(\gamma_{ij}) = -\mathrm{KL}(\gamma_{ij}\,|\,1)$. This is the relative entropy with respect to the uniform (Lebesgue) measure. A standard practice is to use $\mathrm{KL}(\gamma_{ij}\,|\,\mu_i\nu_j)$, which yields a smaller value for the penalisation, and also a more accurate (in the sense of the non-entropic problem) support of the entropic plan. All formulations and algorithms are easily extended to this more general case.
$$\inf_{\gamma_{ij}}\ \sup_{u_i,v_j}\ L(\gamma_{ij}, u_i, v_j) := \sum_{i,j}\gamma_{ij}c_{ij} + \varepsilon\sum_{i,j}\gamma_{ij}\big(\log(\gamma_{ij})-1\big) - \sum_i u_i\Big(\sum_j\gamma_{ij} - \mu_i\Big) - \sum_j v_j\Big(\sum_i\gamma_{ij} - \nu_j\Big). \tag{4.6}$$

The Lagrangian $L$ is convex in $\gamma$, and the first variation in $\gamma_{ij}$ yields

$$c_{ij} + \varepsilon\log(\gamma_{ij}) - u_i - v_j = 0, \qquad \forall(i,j)\in[1,n]\times[1,m], \tag{4.7}$$

where $\{u_i,\ i=1,\dots,n\}$ and $\{v_j,\ j=1,\dots,m\}$ are the Lagrange multipliers associated to the $n$ constraints $\sum_j\gamma_{ij} = \mu_i$ and the $m$ constraints $\sum_i\gamma_{ij} = \nu_j$, respectively. The unique optimal coupling (plan) for the entropic regularisation has the specific form

$$\gamma_{ij} = \exp\left\{\frac{u_i + v_j - c_{ij}}{\varepsilon}\right\} = a_i\, K^\varepsilon_{ij}\, b_j, \qquad \forall(i,j)\in[1,n]\times[1,m], \tag{4.8}$$

where

$$K^\varepsilon_{ij} = \exp\left\{-\frac{c_{ij}}{\varepsilon}\right\}; \qquad a_i = \exp\left\{\frac{u_i}{\varepsilon}\right\}; \qquad b_j = \exp\left\{\frac{v_j}{\varepsilon}\right\}, \tag{4.9}$$

with $a$ and $b$ non-negative vectors. Using the marginal constraints in $\Pi(\mu,\nu)$, this leads to the nonlinear system

$$a_i\Big(\sum_{j=1}^m K^\varepsilon_{ij}b_j\Big) = \mu_i \quad\text{and}\quad b_j\Big(\sum_{i=1}^n K^\varepsilon_{ji}a_i\Big) = \nu_j, \tag{4.10}$$

which involves the two unknowns $a_i$ and $b_j$:

$$a_i = \frac{\mu_i}{\sum_{j=1}^m K^\varepsilon_{ij}b_j} \quad\text{and}\quad b_j = \frac{\nu_j}{\sum_{i=1}^n K^\varepsilon_{ji}a_i}, \tag{4.11}$$

or, in compact form ($K^\varepsilon = (K^\varepsilon_{ij})_{(i,j)\in[1,n]\times[1,m]}$ is now a matrix),

$$a\odot(K^\varepsilon b) = \mu \quad\text{and}\quad b\odot(K^{\varepsilon T}a) = \nu, \tag{4.12}$$

where $\odot$ corresponds to the entrywise multiplication of vectors, and the unknowns $a$ and $b$ satisfy

$$a = \mu\oslash(K^\varepsilon b) \quad\text{and}\quad b = \nu\oslash(K^{\varepsilon T}a), \tag{4.13}$$

where $\oslash$ corresponds to the entrywise division of vectors. This problem is known in the numerical analysis community as the matrix scaling problem; see Nemirovski and Rothblum (1999) and references therein. An intuitive way is to solve it iteratively. The Sinkhorn algorithm (Sinkhorn and Knopp, 1967) constructs the solution iteratively as

$$\gamma^{(l+1)} = \mathrm{diag}(a^{(l+1)})\, K^\varepsilon\, \mathrm{diag}(b^{(l+1)}), \qquad \gamma^{(l+1)}_{ij} = a^{(l+1)}_i K^\varepsilon_{ij} b^{(l+1)}_j, \tag{4.14}$$

where

$$a^{(l+1)} = \mu\oslash(K^\varepsilon b^{(l)}) \quad\text{and}\quad b^{(l+1)} = \nu\oslash(K^{\varepsilon T} a^{(l+1)}), \qquad a^{(l+1)}_i = \frac{\mu_i}{\sum_{j=1}^m K^\varepsilon_{ij}b^{(l)}_j} \quad\text{and}\quad b^{(l+1)}_j = \frac{\nu_j}{\sum_{i=1}^n K^\varepsilon_{ji}a^{(l+1)}_i}. \tag{4.15}$$

The convergence error at iteration $l$ can be monitored using the residual in the Kullback-Leibler sense:

$$\mathrm{err}^{(l)}_{KL} = \sum_{i=1}^n \mathrm{KL}\Big(a^{(l+1)}_i\sum_{j=1}^m K^\varepsilon_{ij}b^{(l)}_j\ \Big|\ \mu_i\Big) + \sum_{j=1}^m \mathrm{KL}\Big(b^{(l+1)}_j\sum_{i=1}^n a^{(l+1)}_i K^\varepsilon_{ji}\ \Big|\ \nu_j\Big). \tag{4.16}$$

Alternatively, the residual of the Kantorovich potentials in sup norm between successive iterations can also be used:

$$\mathrm{err}^{(l)}_\infty = \varepsilon\,\big\|\log(a^{(l+1)}) - \log(a^{(l)})\big\|_\infty.$$

It is also known, e.g. Léonard (2014), proposition 13, that when $\varepsilon\to 0$, the unique minimiser $\gamma^\star_\varepsilon$ of $OT_\varepsilon(\mu,\nu)$ converges to the maximal-entropy plan among the possible optimal transport plans of $OT(\mu,\nu)$. An example borrowed from Benamou et al. (2015), and reproduced in Figure 4.3, illustrates the behaviour of the optimal transport plan when $\varepsilon$ decreases: the optimal entropic plan $\gamma^\star_\varepsilon$ converges to the unregularised optimal transport plan, solution of (3.10), and concentrates on the graph of the optimal transport map. The middle line shows how the mass is progressively shifted away from the diagonal during the Sinkhorn iterations, for a fixed $\varepsilon$.

Numerical stability

When implementing and using the Sinkhorn algorithm, it is important to keep in mind that the kernel $K^\varepsilon$ scales like $\exp(-\tau^d/\varepsilon)$, where $\tau$ is the scale of the ground transport and $d$ the dimension of the physical space, which depends on the data $\mu$ and $\nu$. The numerical stability therefore depends on these parameters, and the convergence of the Sinkhorn algorithm deteriorates when $\varepsilon\to 0$.
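For reference, the plain iterations (4.15), with a simple marginal residual as stopping test, take only a few lines in NumPy (a naive sketch, before the stabilisation caveats discussed next):

```python
import numpy as np

def sinkhorn(mu, nu, C, eps, n_iter=1000, tol=1e-9):
    """Entropic OT, eq. (4.2): returns the plan gamma and the dual potentials u, v."""
    K = np.exp(-C / eps)                      # Gibbs kernel, eq. (4.9)
    a, b = np.ones_like(mu), np.ones_like(nu)
    for _ in range(n_iter):
        a = mu / (K @ b)                      # updates (4.15)
        b = nu / (K.T @ a)
        # violation of the mu-marginal (a simple l1 residual instead of (4.16))
        if np.abs(a * (K @ b) - mu).sum() < tol:
            break
    gamma = a[:, None] * K * b[None, :]       # optimal plan, eq. (4.8)
    u, v = eps * np.log(a), eps * np.log(b)   # Kantorovich potentials, eq. (4.9)
    return gamma, u, v

# usage: two histograms on a 1-D grid
x = np.linspace(0, 1, 100)
mu = np.ones(100) / 100
nu = np.exp(-(x - 0.7) ** 2 / 0.01); nu /= nu.sum()
C = (x[:, None] - x[None, :]) ** 2
gamma, u, v = sinkhorn(mu, nu, C, eps=1e-2)
```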
Numerical stability

When implementing and using the Sinkhorn algorithm it is important to keep in mind that the kernel K^ε scales like exp(−τ_d/ε), where τ_d depends on the scale of the ground transport and on the dimension d of the physical space, which in turn depend on the data µ and ν. The numerical stability therefore depends on these parameters, and the convergence of the Sinkhorn algorithm deteriorates when ε → 0.

The Sinkhorn algorithm often fails to terminate as soon as some of the elements of the kernel K^ε become too small for the machine precision. This can result in a matrix product K^ε b or K^{εT} a with very small entries with respect to the machine precision, leading numerically to a division by 0 in the Sinkhorn update (4.15). Such issues can be partly resolved by carrying out the computations in the log domain or by using a technique called ε-scaling, e.g. [START_REF] Schmitzer | Stabilized Sparse Scaling Algorithms for Entropy Regularized Transport Problems[END_REF].

Cost

Denoting by N the number of points used in the discretisation of the marginal densities, i.e. {a_1,…,a_N} and {b_1,…,b_N}, a naive Sinkhorn algorithm involves a O(n_iter × N²) complexity, where n_iter is the number of Sinkhorn iterations needed to converge; this would be prohibitive in practical FWI applications. For small ε the sparsity of the kernel can be used to reduce the cost, e.g. [START_REF] Schmitzer | Stabilized Sparse Scaling Algorithms for Entropy Regularized Transport Problems[END_REF].

Speed-up for separable kernels

As discussed in Chizat (2017) and in Peyré and Cuturi (2019), Remark 4.15, an important particular case, relevant in FWI applications, for which the complexity of each Sinkhorn iteration can be significantly reduced, is when each index i and j considered in the ground cost matrix c_ij can be described as a d-uple taken in the cartesian product of d finite sets [1,n_1], …, [1,n_d]:

\[
i = (i_k)_{k=1}^d, \qquad j = (j_k)_{k=1}^d \in [1,n_1] \times \dots \times [1,n_d].
\]

When the ground cost is additive along the sub-indices, namely there exist d matrices C^1, …, C^d, of respective sizes n_1 × n_1, …, n_d × n_d, such that

\[
c_{ij} = \sum_{k=1}^d c^k_{i_k j_k},
\]

then the kernel appearing in the Sinkhorn iterations has a separable multiplicative structure

\[
K_{ij} = \prod_{k=1}^d K^k_{i_k j_k},
\]

leading to fast and exact matrix-vector multiplication, speeding up the Sinkhorn algorithm.

Let us illustrate the kernel separability in the context of FWI. For 2D horizontal shot-gathers, the predicted d_cal(x_i, t_j) = d^cal_ij and the observed d_obs(x_i, t_j) = d^obs_ij are discrete realisations at the i = 1,…,n_r receiver spatial positions and the j = 1,…,n_t time samples, which define a 2D grid in the space and time domain. As such d_cal and d_obs are instantiated as n_r × n_t matrices, and so are the Lagrange multipliers u and v. For the quadratic cost, c_iljm = |x_i − x_l|² + |t_j − t_m|², the associated kernel matrix factorises as

\[
K^{\varepsilon}_{iljm} = \exp\left( - \frac{|x_i - x_l|^2 + |t_m - t_j|^2}{\varepsilon} \right) = K^x_{il} \, K^t_{mj}, \tag{4.18}
\]

where K^x_il = exp(−|x_i − x_l|²/ε) and K^t_mj = exp(−|t_m − t_j|²/ε) are n_r × n_r and n_t × n_t convolution matrices defined on the receiver and time finite sets, respectively. In this case a multiplication by K^ε can be carried out more efficiently. Such a separable multiplicative structure allows for a fast and exact multiplication by K, applying each 1-D convolution matrix along a "slice" of the 2D shot-gather grid. Let us rewrite a and b as A and B, respectively, to emphasise the fact that the multipliers are reshaped as n_r × n_t matrices. Then computing KB, which would require (n_r × n_t)² operations with a naive implementation, can be obtained by applying the two 1-D convolutions separately, as

\[
\left( K^x \left( K^t B^T \right)^T \right)^T = K^x B K^t,
\]

recovering an n_r × n_t matrix in n_r² n_t + n_r n_t² operations, B being reshaped as a tensor of suitable size.
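A sketch of the corresponding separable multiplication for a 2D shot-gather grid, with illustrative sizes and our own function name, could read:

    import numpy as np

    def apply_separable_kernel(Kx, Kt, B):
        # Multiplication by the full kernel applied to a scaling vector
        # reshaped as an (n_r, n_t) matrix B: two 1-D convolutions,
        # Kx @ B @ Kt, in n_r^2 n_t + n_r n_t^2 operations instead of
        # (n_r n_t)^2.  Kt is symmetric for the quadratic cost.
        return Kx @ B @ Kt

    eps = 1e-2
    xr = np.linspace(0.0, 1.0, 128)   # receiver positions (illustrative)
    t = np.linspace(0.0, 1.0, 256)    # time samples (illustrative)
    Kx = np.exp(-(xr[:, None] - xr[None, :]) ** 2 / eps)
    Kt = np.exp(-(t[:, None] - t[None, :]) ** 2 / eps)
    B = np.ones((xr.size, t.size))
    KB = apply_separable_kernel(Kx, Kt, B)   # a (128, 256) matrix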
In 3D, with two receiver dimensions n_{r1} and n_{r2}, the relative gain is even higher: n_{r1}² n_{r2} n_t + n_{r1} n_{r2}² n_t + n_{r1} n_{r2} n_t² operations instead of n_{r1}² n_{r2}² n_t². In order to get a simplified view of the tensorisation gain, let us assume n_t ≃ n_r := N: one Sinkhorn iteration then costs O(N³) in 2D (resp. O(N⁴) in 3D) instead of O(N⁴) (resp. O(N⁶)).

Acceleration by Successive Over-Relaxation (SOR)

Since the Sinkhorn algorithm is a fixed-point algorithm, standard linear or even non-linear relaxation schemes can be used to enhance the conditioning of the fixed-point mapping near the solution and improve the linear convergence rate. A Successive Over-Relaxation (SOR) algorithm, with an acceleration parameter θ ∈ [1, 2[, is presented in Chizat (2017), and yields

\[
\begin{cases}
a^{(l+1)}_i = \left( a^{(l)}_i \right)^{1-\theta} \left( \dfrac{\mu_i}{\sum_{j=1}^m K^{\varepsilon}_{ij} b^{(l)}_j} \right)^{\theta}, \\[10pt]
b^{(l+1)}_j = \left( b^{(l)}_j \right)^{1-\theta} \left( \dfrac{\nu_j}{\sum_{i=1}^n K^{\varepsilon}_{ji} a^{(l+1)}_i} \right)^{\theta}.
\end{cases} \tag{4.19}
\]

Numerical experiments in [START_REF] Chizat | Transport optimal de mesures positives: modèles, méthodes numériques, applications[END_REF] illustrate that the convergence rate can be improved by up to orders of magnitude when using SOR, and a detailed convergence analysis is provided, which makes explicit the best choice of the acceleration parameter θ. All the experiments in this thesis have been performed with θ = 1.4.

Computation of the gradient

From (4.6), convex duality yields

\[
\mathrm{OT}_{\varepsilon}(\mu, \nu) = \sup_{(u_i, v_j)} \; \sum_i u_i \mu_i + \sum_j v_j \nu_j - \varepsilon \sum_{ij} a_i K^{\varepsilon}(x_i, y_j) b_j. \tag{4.20}
\]

As in the non-entropic case, the dual formulation shows that the map µ ∈ R^N ↦ OT_ε(µ, ν) is the supremum of linear functionals, hence convex. The gradient is easily deduced, as in (3.12):

\[
\frac{\partial}{\partial \mu_i} \mathrm{OT}_{\varepsilon}(\mu, \nu) = u^{\star}_i = \varepsilon \log a^{\star}_i, \qquad
\frac{\partial}{\partial \nu_j} \mathrm{OT}_{\varepsilon}(\mu, \nu) = v^{\star}_j = \varepsilon \log b^{\star}_j, \tag{4.21}
\]

where (u*, v*) are the maximisers of the entropic dual problem (4.20).
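A minimal sketch of the over-relaxed updates (4.19), with θ = 1.4 as in all our experiments, together with the gradient (4.21) (our function names; strictly positive marginals assumed):

    import numpy as np

    def sinkhorn_sor(mu, nu, K, theta=1.4, n_iter=1000):
        # Over-relaxed Sinkhorn updates (4.19): geometric interpolation
        # between the previous scaling and the plain Sinkhorn update,
        # with acceleration parameter theta in [1, 2[.
        a = np.ones_like(mu)
        b = np.ones_like(nu)
        for _ in range(n_iter):
            a = a ** (1.0 - theta) * (mu / (K @ b)) ** theta
            b = b ** (1.0 - theta) * (nu / (K.T @ a)) ** theta
        return a, b

    def grad_ot_mu(a, eps):
        # Gradient (4.21) of OT_eps with respect to mu: u* = eps log a*.
        return eps * np.log(a)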
Unbalanced Optimal Transport

In order to deal with unbalanced seismic data we want to drop the hard marginal constraints in Π(µ, ν). This allows mass to be transported but also created or destroyed, at a cost. The "good" additional cost turns out to be the Kullback-Leibler divergence between the data and the marginals of the plan. The Sinkhorn algorithm is very easily adapted to this distance, which eliminates the total mass balance constraint.

Penalisation of the marginal constraints

The hard marginal constraints are replaced by the following soft penalisations:

\[
\mathrm{OT}_{\lambda}(\mu, \nu) = \min_{\gamma \ge 0} \; \sum_{i=1}^N \sum_{j=1}^N c_{ij} \gamma_{ij}
+ \lambda_1 \sum_{i=1}^N \mathrm{KL}\Big( \sum_{j=1}^N \gamma_{ij} \,\Big|\, \mu_i \Big)
+ \lambda_2 \sum_{j=1}^N \mathrm{KL}\Big( \sum_{i=1}^N \gamma_{ij} \,\Big|\, \nu_j \Big), \tag{4.22}
\]

where KL is the Kullback-Leibler divergence defined in (4.5). The λ_1 > 0 and λ_2 > 0 are parameters quantifying the cost of the unbalanced mass relaxation. If µ and ν are balanced, the classical optimal transport is recovered by letting λ_{1,2} → +∞. A nice interpretation of (4.22) is that, depending on the value of λ, the available mass on both sides will either be transported, or just created or destroyed locally if it is cheaper, through the KL penalisation of the marginal constraints. In the case of the quadratic ground cost (4.3), the value function OT_λ is called the Gaussian Hellinger-Kantorovich (GHK) distance, [START_REF] Liero | Optimal Entropy-Transport problems and a new Hellinger-Kantorovich distance between positive measures[END_REF].

Entropic regularised unbalanced OT problem

Entropic regularisation can also be applied to (4.22), as in (4.4), which yields:

\[
\mathrm{OT}_{\varepsilon,\lambda}(\mu, \nu) = \min_{\gamma \ge 0} \; \varepsilon\, \mathrm{KL}\!\left( \gamma \,\middle|\, K^{\varepsilon} \right)
+ \lambda_1 \sum_{i=1}^N \mathrm{KL}\Big( \sum_{j=1}^N \gamma_{ij} \,\Big|\, \mu_i \Big)
+ \lambda_2 \sum_{j=1}^N \mathrm{KL}\Big( \sum_{i=1}^N \gamma_{ij} \,\Big|\, \nu_j \Big). \tag{4.23}
\]

The dual formulation of the entropic regularised unbalanced optimal transport problem in discrete form follows again from convex duality:

\[
\mathrm{OT}_{\varepsilon,\lambda}(\mu, \nu) = \sup_{(u_i, v_j)} \;
- \lambda_1 \sum_i \left( \exp\left\{ -\frac{u_i}{\lambda_1} \right\} - 1 \right) \mu_i
- \lambda_2 \sum_j \left( \exp\left\{ -\frac{v_j}{\lambda_2} \right\} - 1 \right) \nu_j
- \varepsilon \sum_{ij} a_i K^{\varepsilon}(x_i, y_j) b_j. \tag{4.24}
\]

As explained in Chizat et al. (2018b), the optimal plans are again of the form (4.8) and can be computed iteratively using the modified Sinkhorn iterations:

\[
a^{(l+1)}_i = \left( \frac{\mu_i}{\sum_j K^{\varepsilon}(x_i, y_j) b^{(l)}_j} \right)^{\frac{\lambda_1}{\lambda_1 + \varepsilon}}, \quad \forall i, \qquad
b^{(l+1)}_j = \left( \frac{\nu_j}{\sum_i K^{\varepsilon T}(x_i, y_j) a^{(l+1)}_i} \right)^{\frac{\lambda_2}{\lambda_2 + \varepsilon}}, \quad \forall j. \tag{4.25}
\]

As before a_i = exp{u_i/ε} and b_j = exp{v_j/ε}, and the Sinkhorn algorithm obeys the same convergence properties. In practice we use the error (4.17) to monitor the convergence. The dual formulation (4.24) shows again that the map µ ↦ OT_{ε,λ}(µ, ν) is convex. Applying Danskin's theorem once more, the differentials of OT_{ε,λ}(µ, ν) with respect to µ_i and ν_j can be obtained directly as the differentials of the right-hand side of (4.24) at the optimal (u*, v*):

\[
\frac{\partial}{\partial \mu_i} \mathrm{OT}_{\varepsilon,\lambda}(\mu, \nu) = -\lambda_1 \left( \exp\left\{ -\frac{u^{\star}_i}{\lambda_1} \right\} - 1 \right) = -\lambda_1 \left( (a^{\star}_i)^{-\varepsilon/\lambda_1} - 1 \right), \qquad
\frac{\partial}{\partial \nu_j} \mathrm{OT}_{\varepsilon,\lambda}(\mu, \nu) = -\lambda_2 \left( (b^{\star}_j)^{-\varepsilon/\lambda_2} - 1 \right). \tag{4.26}
\]

Sinkhorn divergence

Sinkhorn divergence for balanced OT problems

As previously seen in section 4.1, the entropic regularised optimal transport problem (4.4) can be efficiently solved with the Sinkhorn algorithm as long as the regularisation parameter ε is large enough. The larger ε is, the faster the convergence of the iterative Sinkhorn algorithm. The entropic regularisation however introduces a bias increasing with ε. In the continuous setting, asymptotic results by [START_REF] Conforti | A formula for the time derivative of the entropic cost and applications[END_REF] and Pal (2019) on OT_ε when ε → 0 (reminding that OT_ε → OT when ε → 0) offer a better understanding of the entropic bias:

\[
\mathrm{OT}_{\varepsilon}(\mu, \nu) - \mathrm{OT}(\mu, \nu) \simeq -\frac{d}{2}\, \varepsilon \log(2 \pi \varepsilon)
+ \varepsilon \left[ \mathrm{KL}(\mu \mid \mathrm{Leb}) + \mathrm{KL}(\nu \mid \mathrm{Leb}) \right] + O(\varepsilon^2). \tag{4.27}
\]

This result holds for measures (µ, ν) which have smooth densities with respect to the Lebesgue measure, still denoted here µ, ν by abuse of notation. The definition of the KL divergence in (4.5) is also an abuse of notation:

\[
\mathrm{KL}(\mu \mid \nu) \overset{\text{def.}}{=}
\begin{cases}
\displaystyle \int \mu(x) \left( \log\left( \frac{\mu(x)}{\nu(x)} \right) - 1 \right) dx & \text{if } \mu \text{ is absolutely continuous w.r.t. } \nu, \\
+\infty & \text{otherwise.}
\end{cases} \tag{4.28}
\]

It is worth noting that the densities µ and ν are decoupled in the first-order term. A simple idea to remove these bias terms is to use the symmetric entropic OT costs of the marginals and subtract them, keeping in mind that OT(µ, µ) = 0:

\[
S_{\varepsilon}(\mu, \nu) = \mathrm{OT}_{\varepsilon}(\mu, \nu) - \frac{1}{2} \left\{ \mathrm{OT}_{\varepsilon}(\mu, \mu) + \mathrm{OT}_{\varepsilon}(\nu, \nu) \right\}, \tag{4.29}
\]

which is, formally at least, a second-order approximation of OT(µ, ν) with respect to ε. This formulation has been studied under the name "Sinkhorn divergence" in [START_REF] Genevay | Learning generative models with sinkhorn divergences[END_REF][START_REF] Feydy | Interpolating between optimal transport and mmd using sinkhorn divergences[END_REF]. It does not define a distance but has nice properties. We immediately see that S_ε(µ, µ) = 0 for any ε > 0, a property that was lost for OT_ε. It is also proven in [START_REF] Séjourné | Sinkhorn divergences for unbalanced optimal transport[END_REF]; [START_REF] Feydy | Interpolating between optimal transport and mmd using sinkhorn divergences[END_REF] that it is symmetric in µ and ν, remains positive and convex with respect to µ and ν, and metrises the weak convergence of measures.
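Back to the unbalanced problem: a minimal sketch of the iterations (4.25) and of the gradient (4.26) (our function names; λ_1 = λ_2 = λ as used later in the experiments, strictly positive data assumed):

    import numpy as np

    def sinkhorn_unbalanced(mu, nu, K, eps, lam, n_iter=1000):
        # Unbalanced Sinkhorn iterations (4.25): the plain updates are
        # damped by the exponent lam / (lam + eps) coming from the KL
        # penalisation of the marginals (lam_1 = lam_2 = lam here).
        a = np.ones_like(mu)
        b = np.ones_like(nu)
        p = lam / (lam + eps)
        for _ in range(n_iter):
            a = (mu / (K @ b)) ** p
            b = (nu / (K.T @ a)) ** p
        return a, b

    def grad_ot_unbalanced_mu(a, eps, lam):
        # Differential (4.26) of OT_{eps,lam} with respect to mu.
        return -lam * (a ** (-eps / lam) - 1.0)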
The Sinkhorn divergence (4.29), as well as its gradient, can be computed by applying the Sinkhorn algorithm three times. For small ε, the Sinkhorn divergence S_ε for the quadratic ground cost is therefore a better approximation of OT_0 = W_2² than OT_ε, inheriting the good properties of entropic optimal transport: strict convexity, smoothness and the Sinkhorn computational algorithm.

Sinkhorn divergence for unbalanced OT problems

The Sinkhorn divergence has a natural extension to the unbalanced case [START_REF] Séjourné | Sinkhorn divergences for unbalanced optimal transport[END_REF], yielding

\[
S_{\varepsilon,\lambda}(\mu, \nu) = \mathrm{OT}_{\varepsilon,\lambda}(\mu, \nu) - \frac{1}{2} \left\{ \mathrm{OT}_{\varepsilon,\lambda}(\mu, \mu) + \mathrm{OT}_{\varepsilon,\lambda}(\nu, \nu) \right\} + \frac{\varepsilon}{2} \left( m(\mu) - m(\nu) \right)^2, \tag{4.30}
\]

where OT_{ε,λ}(µ, ν) is the entropic regularised unbalanced optimal transport problem defined in (4.23), while m(µ) = Σ_i µ_i and m(ν) = Σ_j ν_j are the total masses of µ and ν, respectively, which can be different. It is proved in [START_REF] Séjourné | Sinkhorn divergences for unbalanced optimal transport[END_REF] that S_{ε,λ}(µ, ν) is again positive, definite, and convex. A comparison between OT_{ε,λ}(µ, ν) and S_{ε,λ}(µ, ν) is illustrated using 1D wavelets in the next chapter.

Computing the unbalanced Sinkhorn divergence and its gradient

In order to solve problem (4.30), let us first look at OT_{ε,λ}(µ, ν). Following section 4.2.2, OT_{ε,λ}(µ, ν) can be solved as a fixed-point problem with the Sinkhorn iterations (4.25). For the symmetric case OT_{ε,λ}(µ, µ), the Sinkhorn algorithm simplifies [START_REF] Séjourné | Sinkhorn divergences for unbalanced optimal transport[END_REF]. The dual problem becomes a concave maximisation problem, symmetric with respect to a unique unknown vector u_i:

\[
\mathrm{OT}_{\varepsilon,\lambda}(\mu, \mu) = \sup_{u_i} \; -2 \lambda \sum_i \left( \exp\left\{ -\frac{u_i}{\lambda} \right\} - 1 \right) \mu_i - \varepsilon \sum_{ij} a_i K^{\varepsilon}(x_i, y_j) a_j, \tag{4.31}
\]

where a_i = exp(u_i/ε). At iteration l + 1, the optimal vector a^{(l+1)} can be computed as

\[
a^{(l+1)}_i = \left( \frac{\mu_i}{\sum_j K^{\varepsilon}(x_i, x_j) a^{(l)}_j} \right)^{\frac{\lambda}{\lambda + \varepsilon}}. \tag{4.32}
\]

As in (4.26), the first variation of OT_{ε,λ}(µ, µ) with respect to µ can be obtained directly from the dual formula (4.31):

\[
\frac{\partial}{\partial \mu_i} \mathrm{OT}_{\varepsilon,\lambda}(\mu, \mu) = -2 \lambda \left( a_i^{-\varepsilon/\lambda} - 1 \right). \tag{4.33}
\]

The Sinkhorn divergence S_{ε,λ} can thus be computed using the Sinkhorn iterations (4.25) to solve the standard problem OT_{ε,λ}(µ, ν), and the Sinkhorn iterations (4.32) to solve the symmetric problems OT_{ε,λ}(µ, µ) and OT_{ε,λ}(ν, ν). The gradient of the unbalanced Sinkhorn divergence S_{ε,λ}(µ, ν) with respect to µ_i is finally given by:

\[
\frac{\partial}{\partial \mu_i} S_{\varepsilon,\lambda}(\mu, \nu) = \frac{\partial}{\partial \mu_i} \mathrm{OT}_{\varepsilon,\lambda}(\mu, \nu) - \frac{1}{2} \frac{\partial}{\partial \mu_i} \mathrm{OT}_{\varepsilon,\lambda}(\mu, \mu) + \varepsilon \left( m(\mu) - m(\nu) \right), \tag{4.34}
\]

where the first term is given by (4.26) and the second term by (4.33).
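A sketch of the assembly of S_{ε,λ}, with the symmetric iterations (4.32) and the dual value (4.24) used to evaluate the three OT costs (our function names; λ_1 = λ_2 = λ and strictly positive data assumed):

    import numpy as np

    def sym_scaling(mu, K, eps, lam, n_iter=500):
        # Symmetric unbalanced Sinkhorn (4.32) for OT_{eps,lam}(mu, mu):
        # a single scaling vector, iterated as a fixed point.
        a = np.ones_like(mu)
        p = lam / (lam + eps)
        for _ in range(n_iter):
            a = (mu / (K @ a)) ** p
        return a

    def ot_unbalanced_value(mu, nu, K, eps, lam, a, b):
        # Dual value (4.24) at the optimal scalings a, b.
        du = -lam * np.sum((a ** (-eps / lam) - 1.0) * mu)
        dv = -lam * np.sum((b ** (-eps / lam) - 1.0) * nu)
        return du + dv - eps * np.sum(a[:, None] * K * b[None, :])

    def sinkhorn_divergence(ot_mu_nu, ot_mu_mu, ot_nu_nu, mu, nu, eps):
        # Unbalanced Sinkhorn divergence (4.30): debiased entropic cost
        # plus a total-mass term.
        return (ot_mu_nu - 0.5 * (ot_mu_mu + ot_nu_nu)
                + 0.5 * eps * (mu.sum() - nu.sum()) ** 2)

The symmetric costs are obtained by calling ot_unbalanced_value(mu, mu, K, eps, lam, a, a) with a = sym_scaling(mu, K, eps, lam), and similarly for ν.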
Metric and domain scaling

A 2D illustration of metric scaling

In the context of FWI, the data are just pixelised images to be compared. Using a local misfit function (L² for instance), the relative size of the domain in time and offset does not impact the minimisation. This is not the case when using a non-local OT misfit. When comparing seismograms, a natural question is how to measure the displacement of mass in time and offset space and how to choose the dimensions along these two axes. This is equivalent to using a normalised domain [0, 1]² and scaling the ground cost (4.3) (with x = (t, x)). We introduce a rescaled cost

\[
c_{ijml} = \cos^2(\theta_M)\, |x_i - x_m|^2 + \sin^2(\theta_M)\, |t_j - t_l|^2, \tag{4.35}
\]

depending on an angle θ_M ∈ ]0, π/2[. We are going to discuss how the translation of specific data depends on θ_M.

Let the observed data µ(t, x) = δ_{{cos(θ_D) x + sin(θ_D) t = 0}} represent an idealised plane-wave event: a single straight line carrying a weight of 1 and making an angle θ_D with the offset axis. The other part of the data (the simulation), ν_τ(t, x) = µ(t − τ sin(θ_D), x − τ cos(θ_D)), is a simple translation, depending on a parameter τ, in the direction (cos(θ_D), sin(θ_D)). This emulates the misfit associated with an erroneous constant background velocity and, as explained in the introduction, will cause cycle-skipping when using the L² misfit.

Assuming for simplicity that we are in a periodic box, the classical (non-entropic) optimal transport map is a pure parallel translation of the support of µ. This is a consequence of Brenier's theorem, which can be extended to ground costs formed as convex functions of the vector y − x, such as (4.35) [START_REF] Gangbo | The geometry of optimal transportation[END_REF]. The translation may however not occur in the normal direction θ_D + π/2, because the shortest path now depends on the distance induced by (4.35) and on the angle θ_M. The OT misfit can therefore be reduced to computing the squared shortest path from the origin to the translated line. It is given by solving the convex program:

\[
d^2_{\tau} = \min_{(x,t) \;\text{s.t.}\; \cos(\theta_D)\, x + \sin(\theta_D)\, t = \tau} \; \frac{\cos^2(\theta_M)}{2} |x|^2 + \frac{\sin^2(\theta_M)}{2} |t|^2 = C \frac{\tau^2}{2}, \tag{4.36}
\]

where

\[
C = \left( \frac{\cos^2(\theta_D)}{\cos^2(\theta_M)} + \frac{\sin^2(\theta_D)}{\sin^2(\theta_M)} \right)^{-1}. \tag{4.37}
\]

Real or synthetic complex seismic data may be interpreted as a collection of local events, possibly misplaced. The discussion above raises the interesting, but still untouched, question of building a data-dependent ground cost function where the weights (α, β) depend on (t, x). One could even consider more general metrics, such as Finsler metrics, see for example Benamou et al. (2018).

On the (t, x) range of entropic OT plans

In the entropic version of OT, the plan (4.8) is a diagonal scaling of the interaction kernel, which "governs" mass displacement. In its separable version (4.18) it becomes

\[
K_{ijml} = e^{-\frac{|x_i - x_m|^2}{2\varepsilon}} \, e^{-\frac{|t_j - t_l|^2}{2\varepsilon}}. \tag{4.38}
\]

Let us discuss the choice of the relative size of the domain in time and offset and their discretisation. Let (T, N_t) and (X, N_x) be the maximum time, respectively offset, and the number of points in time, respectively offset, used in the computation of the misfit. That gives a computational grid size (dt, dx) = (T/N_t, X/N_x). Setting k_x dx = x_i − x_m and k_t dt = t_j − t_l we rewrite (4.38) as

\[
K^{\varepsilon}_{ijml} = e^{-\frac{(k_x\, dx)^2}{2\varepsilon}} \, e^{-\frac{(k_t\, dt)^2}{2\varepsilon}} = e^{-\frac{(k_x X)^2}{2\varepsilon N_x^2}} \, e^{-\frac{(k_t T)^2}{2\varepsilon N_t^2}}. \tag{4.39}
\]

Freezing ε and assuming (N_t, N_x) are given, either by the data (number of lines) or through time sampling to limit the computational cost, the range of "horizontal" (in offset), respectively "vertical" (in time), mass displacement is therefore governed by (k_t T, k_x X). Setting the finite precision 0 at exp(−36), the kernel above will vanish when k_t T = 6√(2ε) N_t or k_x X = 6√(2ε) N_x. Entropic optimal transport therefore allows to choose a smooth transport window of grid points in time and offset. It is possible to use (T, X) to control the width of this window. Let (k_t^max, k_x^max) be a prescribed maximum allowed displacement in terms of grid points and set

\[
T = \frac{6\sqrt{2\varepsilon}\, N_t}{k_t^{\max}} \qquad \text{or} \qquad X = \frac{6\sqrt{2\varepsilon}\, N_x}{k_x^{\max}} \tag{4.40}
\]

(also note that (T, X) can be interpreted as a penalisation of time/offset displacement in the ground cost, as in the previous section).
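A small helper implementing the window rule (4.40) (a minimal sketch; the function name is ours):

    import numpy as np

    def domain_scaling(eps, n_t, n_x, k_t_max, k_x_max):
        # Choose the time/offset extents (T, X) so that the separable
        # Gibbs kernel (4.39) numerically vanishes (below exp(-36))
        # beyond k_max grid points of displacement, cf. (4.40).
        T = 6.0 * np.sqrt(2.0 * eps) * n_t / k_t_max
        X = 6.0 * np.sqrt(2.0 * eps) * n_x / k_x_max
        return T, X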
Trace-by-trace misfit with entropic OT

We recall that the system solved by the Sinkhorn algorithm (4.11) is equivalent, in tensorised form (4.38), to the marginal constraints:

\[
\begin{cases}
\displaystyle \sum_{ml} A_{ij} K^x_{im} K^t_{jl} B_{ml} = \mu_{ij} & \forall\, i, j, \\[6pt]
\displaystyle \sum_{ij} A_{ij} K^x_{im} K^t_{jl} B_{ml} = \nu_{ml} & \forall\, m, l.
\end{cases} \tag{4.41}
\]

Setting k_x^max = 1 in particular corresponds to a 1D "trace by trace" misfit. The offset part of the kernel K^x_im in (4.38) will vanish except when k_x = 0, i.e. x_i = x_m, where it is just 1. The 2D marginal constraints (4.41) therefore reduce, for all offsets x_i, to the 1D marginal constraints:

\[
\begin{cases}
\displaystyle \sum_{l} A_{ij} K^t_{jl} B_{il} = \mu_{ij} & \forall\, j, \\[6pt]
\displaystyle \sum_{j} A_{ij} K^t_{jl} B_{il} = \nu_{il} & \forall\, l.
\end{cases}
\]

The unbalanced optimal transport misfit function

The general abstract form of the misfit to minimise (see (2.8)) is

\[
J(m) = h\big( d_{cal}(m), d_{obs} \big), \tag{5.1}
\]

where m is the model we strive to reconstruct and d_obs is some given observation. The map m ↦ d_cal(m) is the forward model; in this thesis we use 2D constant-density acoustic finite-difference wave modeling. We also limit ourselves to synthetic data sets, that is d_obs := d_obs(m_true): the observation is constructed with the same forward modeling, using a true model which is known a priori. We recall that we have implemented our misfit functions on top of PySIT (https://pysit.org/), "an open source toolbox for seismic inversion and seismic imaging developed by Russell J. Hewett and Laurent Demanet at MIT". It was designed to allow for fast prototyping of the h component of the misfit function (5.1). Developers can implement their own misfit on top of the available forward models (in our case 2D constant-density acoustic finite-difference wave modeling) and optimisation methods (we have been using the limited-memory BFGS method). The rest of this chapter explains how we constructed the h function using the entropic OT costs and shows its behavior on simple parametric models.

The misfit function

In an attempt to preserve phase information, we follow the strategy of comparing separately the positive and negative parts (f_+ = max(f, 0) and f_− = max(−f, 0)) of the observed and predicted seismic signals, as proposed in [START_REF] Engquist | Application of the wasserstein metric to seismic signals[END_REF][START_REF] Engquist | Optimal transport for seismic full waveform inversion[END_REF]. This takes care of the positivity constraint on the data. As explained in chapter 4, instead of normalising the data and using the classic W_2² distance, we use the unbalanced entropic costs applied separately to the positive and negative parts:

\[
h_{\mathrm{OT}_{\varepsilon,\lambda}}(d_{cal}, d_{obs}) = \mathrm{OT}_{\varepsilon,\lambda}\big( (d_{cal})_+, (d_{obs})_+ \big) + \mathrm{OT}_{\varepsilon,\lambda}\big( (d_{cal})_-, (d_{obs})_- \big), \tag{5.2}
\]

\[
h_{S_{\varepsilon,\lambda}}(d_{cal}, d_{obs}) = S_{\varepsilon,\lambda}\big( (d_{cal})_+, (d_{obs})_+ \big) + S_{\varepsilon,\lambda}\big( (d_{cal})_-, (d_{obs})_- \big). \tag{5.3}
\]

This hard splitting will not be used in this chapter, but in order to ensure differentiability of the misfit function we will split the signal, here denoted f, into a smooth positive part f_+ = P(f) and a smooth negative part f_− = P(−f), where P is given by

\[
P(f) = \frac{1}{2} \left( f + \sqrt{f^2 + (\eta f_0)^2} \right). \tag{5.4}
\]

The parameter η is a small parameter that controls the smoothness of P, and f_0 carries the dimension of the signal (typically f_0 = ‖d_obs‖_∞). This function is plotted in Figure 5.1 for a simple function f(t) = t, f_0 = 1 and different values of η.

In summary, the misfit function used for FWI in the last chapter is

\[
h_{S_{\varepsilon,\lambda}}(d_{cal}, d_{obs}) = S_{\varepsilon,\lambda}\big( (d_{cal})_+, (d_{obs})_+ \big) + S_{\varepsilon,\lambda}\big( (d_{cal})_-, (d_{obs})_- \big), \tag{5.5}
\]

with the smooth parts (f)_± = P(±f), and for completeness the adjoint source to be used in the adjoint wave propagation (2.42) is given by

\[
\frac{\partial}{\partial d_{cal}} h(d_{cal}; d_{obs}) =
\frac{\partial}{\partial d_{cal}} S_{\varepsilon,\lambda}\big( (d_{cal})_+, (d_{obs})_+ \big) \odot \frac{\partial P}{\partial f}(d_{cal})
- \frac{\partial}{\partial d_{cal}} S_{\varepsilon,\lambda}\big( (d_{cal})_-, (d_{obs})_- \big) \odot \frac{\partial P}{\partial f}(-d_{cal}), \tag{5.6}
\]

where the differentials of the Sinkhorn divergence (4.30) with respect to d_cal are given by (4.34).
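A sketch of the smooth sign splitting (5.4) (our function names; the smoothness parameter is the one denoted η above):

    import numpy as np

    def p_transform(f, eta, f0):
        # Smooth positive part (5.4): P(f) = (f + sqrt(f^2 + (eta f0)^2)) / 2.
        return 0.5 * (f + np.sqrt(f ** 2 + (eta * f0) ** 2))

    def split_signal(f, eta=1e-3):
        # Smooth positive/negative decomposition used in the misfit (5.5);
        # f0 carries the dimension of the signal (here its sup norm).
        f0 = np.max(np.abs(f))
        return p_transform(f, eta, f0), p_transform(-f, eta, f0)

Note that the derivative P'(f) = (1 + f/√(f² + (η f_0)²))/2 entering the adjoint source (5.6) follows immediately from (5.4).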
Figure 5.2 shows that the smoothness of the adjoint source, given by the pixel-wise multiplication of the S_{ε,λ} gradient and the P derivative, depends on the parameter η.

1/2D parametric examples

In this section we skip the smoothing related to the sign splitting of the data. We will use the misfits (5.2) and (5.3), and they will be referred to as OT_{ε,λ} and S_{ε,λ}. We will omit the λ when the data is balanced and there is no need for unbalanced OT.

Translation/shift of a 1D signal

We consider two copies of the same Ricker wavelet with 20 Hz peak frequency: f(t), playing the role of the observed signal d_obs(t), and g(t), playing the role of the predicted signal d_cal(t). The two wavelets are shifted in time such that g(t; s) = f(t − s), see Figure 5.3. We will study the effect on the misfit of variations in the time shift s. We recall the classical result (see section 3.2): the squared Wasserstein distance W_2² is quadratic, hence convex, with respect to the translation of identical data. Sign splitting preserves the mass balance and the translation, and as such will not change this property. This was already verified numerically in [START_REF] Engquist | Application of the wasserstein metric to seismic signals[END_REF]. We here compare the behaviour of the L², OT_ε, and S_ε misfit functions, Figure 5.4. In this example the data is balanced, so we do not need the unbalanced version of the transport distance. For such a simple example there is not much difference between the entropic OT_ε and the Sinkhorn divergence, but the debiasing effect discussed in section 4.3.1 is noticeable. We are very close to a perfect s ↦ 2s² (the factor 2 comes from sign splitting). The Sinkhorn divergence is known to be a good proxy for non-entropic OT. In all experiments ε = 10⁻² and λ = 10.

In the next experiment, Figure 5.6, we consider the misfit behaviour as a function of the time shift between the two Ricker wavelets when Gaussian noise is added to one of the wavelets (linked for instance to the acquisition of d_obs), see Figure 5.5.

We compare the L² and the unbalanced OT_{ε,λ} and S_{ε,λ} misfit functions for an increasing level of Gaussian noise (1, 5, 10, 20 %). The data is now unbalanced because of the noise. We also consider the balanced S_ε after normalisation of the data (as in [START_REF] Engquist | Application of the wasserstein metric to seismic signals[END_REF][START_REF] Engquist | Optimal transport for seismic full waveform inversion[END_REF]). As expected the L² global minimum remains centred, but more local minima appear due to the noise. The balanced (normalised) S_ε remains convex as expected, but the global minimum is shifted up because of the noise: S_ε is built to be positive, convex, and to vanish when comparing two identical measures, which is never the case with added noise; a constant is added by all the mass creation. The position of the minimum, depicted by a black cross, is biased because the noisy and normalised signal does not satisfy the initial Ricker symmetry anymore. We recall that unbalanced OT remains a distance on Radon measures; the marginal constraints on the data are penalised using the Kullback-Leibler divergence KL, see (4.22) (we have used λ_1 = λ_2 = λ). The unbalanced Sinkhorn divergence S_{ε,λ} is also convex and positive, and corrects the unbalanced OT_{ε,λ} by removing the diagonal terms, therefore achieving a lower minimum. Again, it cannot be zero as we are comparing different measures. The unbalanced version seems however to do a better job, as the minimum is closer to the zero shift. Finally, we remark that the noise is akin to adding mass everywhere, therefore impacting the transport. Because we use the quadratic ground cost (4.3), the transport will favor moving the nearby noise mass, when possible, instead of fetching the shifted Ricker. This effect increases with the noise level, and reduces the range of the transport and the modulus of convexity of the misfit.
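As a usage sketch of the pieces above (our code; ε = 10⁻² as in the experiments), the qualitative shape of the translation curve of Figure 5.4 can be reproduced with a plain Sinkhorn loop on the normalised positive parts of the two wavelets:

    import numpy as np

    def ricker(t, f_peak=20.0):
        # Ricker wavelet with peak frequency f_peak (Hz).
        a = (np.pi * f_peak * t) ** 2
        return (1.0 - 2.0 * a) * np.exp(-a)

    t = np.linspace(-0.5, 1.0, 512)
    eps = 1e-2
    c = (t[:, None] - t[None, :]) ** 2
    K = np.exp(-c / eps)

    def entropic_plan(mu, nu, n_iter=500):
        a, b = np.ones_like(mu), np.ones_like(nu)
        for _ in range(n_iter):
            a = mu / (K @ b)
            b = nu / (K.T @ a)
        return a[:, None] * K * b[None, :]

    f_pos = np.maximum(ricker(t), 0.0)
    f_pos /= f_pos.sum()                 # balanced setting: masses normalised
    costs = []
    for s in np.linspace(-0.25, 0.25, 11):
        g_pos = np.maximum(ricker(t - s), 0.0)
        g_pos /= g_pos.sum()
        gamma = entropic_plan(g_pos, f_pos)
        costs.append(np.sum(gamma * c))  # transport part of the entropic cost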
In Figure 5.7, for a fixed noise level (5%), we explore the behavior of the different misfits for different values of ε and λ. In the first line ε is decreased; as expected, OT_{ε,λ} and S_{ε,λ} (the second-order approximation in ε of OT) converge. We also notice that normalising the data induces a significant bias in the minimum. In the second line, when the marginal constraints are locally relaxed, the relative diminution of the mass transport leads to a degeneracy of the convexity property with respect to translation (the amount of mass being transported decreases).

We perform the same experiment, Figure 5.9, when instead of adding noise to the recorded wavelet we change its frequency content, see Figure 5.8. We make roughly the same observations as in Figure 5.7. We notice that the normalisation approach does much better here, probably because the support of the wavelet is localised.

Linear 2D model

A simple two-parameter model, borrowed from Métivier et al. (2016a), allows a 2D representation of the L², OT_{ε,λ} and S_{ε,λ} misfit functions. We use sign splitting for the positivity of the transport misfits. Let us consider a 2D acoustic wave approximation model, with constant density, parameterised by the background acoustic velocity v_0 and a depth gradient α, such that

\[
v = v_0 + \alpha z. \tag{5.7}
\]

The physical domain is a 2D rectangular domain 17 km long and 3.5 km deep. A single seismic source is located at x = 8.45 km at a depth of 50 m. The radiated acoustic waves are recorded at 168 receivers located at the same depth as the source and regularly deployed with a spacing of 100 m for x ∈ [0.15, 16.85] km. The source wavelet is a Ricker wavelet centred at 5 Hz. The observed signals d_obs(x_r, t) correspond to a numerical physical realisation, using second-order finite-difference stencils on a regular finite-difference grid and a second-order leap-frog time integration, for a model parameterised with v*_0 = 2 km/s and α* = 0.67 s⁻¹.

The L², OT_{ε,λ} and S_{ε,λ} misfit functions are compared for predicted signals d_cal(x_r, t) corresponding to numerical physical realisations when exploring the parameter space v_0 ∈ [1.75, 2.25] km/s and α ∈ [0.49, 0.91] s⁻¹, with Δv_0 = 0.0125 km/s and Δα = 0.015 s⁻¹, leading to a discretised model space of 29 × 41 points. For the OT misfit functions, the entropic parameter is chosen as ε = 0.01.

The L² misfit function, in Figure 5.10, exhibits multiple local minima, and convergence to the global minimum with local gradient-descent methods requires starting with a good enough initial model, otherwise the optimisation will be trapped in a local minimum.
The S_{ε,λ} misfit function, in Figure 5.12, exhibits nice convexity and sensitivity properties, together with a large basin of attraction toward the global minimum. It is worth reminding that, while the mapping d_cal ↦ S_{ε,λ}(d_obs, d_cal) is strictly convex, it is the mapping (v_0, α) ↦ S_{ε,λ}(d_obs, d_cal(v_0, α)) that is actually plotted here.

2D FWI numerical illustrations

Preliminaries

This section provides numerical illustrations of the use of the misfit function (5.6), based on the Sinkhorn divergence formulation of unbalanced, entropic optimal transport. The Sinkhorn iterations are stopped at a tolerance tol = 10⁻⁹ on the convergence error (4.17). With the choice of parameters above and an optimal transport grid size of the order of 10² in space and 128 in time, the number of iterations needed to achieve this precision is always of the order of 10², and never exceeded 10³. The transport window in space (in number of points) is set as discussed in section 4.4.2.

Velocity anomaly inclusion

This is the classical circular velocity anomaly inclusion model in a homogeneous medium, inspired from [START_REF] Pladys | On cycle-skipping function modification for fullwaveform inversion: comparison of five recent approaches[END_REF]. The physical model is defined on a 1000 × 1000 m square domain, and is parametrised by a homogeneous background acoustic wave velocity set to 1300 m/s and a circular inclusion of 100 m radius, centred in space, with a homogeneous wave velocity of 1700 m/s (≈ 30% anomaly). The computational domain includes additional perfectly matched layers along each of the boundaries. The constant-density acoustic wave equation is solved in space on a regular grid of 256 × 256 points using a sixth-order, centred finite-difference scheme, and in time with a second-order leap-frog integration scheme with constant time step satisfying the CFL condition, as implemented in PySIT. The FWI inversion problem is solved using the l-BFGS method, as implemented in PySIT, and the number of FWI iterations is initially fixed to 25 iterations. Two homogeneous initial models are considered: initial model A, with acoustic wave velocity set to 1300 m/s; and initial model B, with acoustic wave velocity set to 1700 m/s.

Both the predicted d_cal and the observed d_obs data, the latter corresponding to the "true" model, are physical realisations obtained with the same wave propagation solver, and without considering additional noise on the data, i.e. the inversion crime setting. Unless otherwise specified, the S_{ε,λ} misfit function is constructed as discussed in Chapter 5. The observed and predicted signals are decomposed into positive and negative parts using the P-transform in (5.4), and transported separately using the Sinkhorn divergence formulation of unbalanced optimal transport. The numerical experiments focus on transmission and reflection configurations.

Transmission configuration

The acquisition system is composed of 48 sources evenly spaced below the upper boundary of the physical domain. As expected, the resolution for both initial models is higher in the horizontal direction than in the vertical direction due to the acquisition geometry, with a vertical smoothing of the reconstructed velocity anomaly.
The shape of the inclusion is not fully recovered, possibly due to the limited offset, the deficit in long-wavelength components of the transmitted waves, and the lack of reflected wave information.

[Figure: Transmission (initial model B), panels after 25 iterations, 100 iterations, and convergence curves.]

The S_{ε,λ} misfit function provides reasonably accurate results, and the observed and predicted seismic traces are in phase after 25 iterations, as illustrated in Figure 6.5, even though the exact background velocity is not fully recovered. As expected, the resolution is higher in the horizontal direction than in the vertical direction due to the acquisition geometry, leading to a vertical smoothing of the reconstructed velocity anomaly. The peak amplitude of the velocity anomaly is not fully recovered after 25 iterations, see Figure 6.4.

With both the L² and S_{ε,λ} based inversion, the shape of the inclusion is not fully reconstructed, due to the limited offset and the relative deficit in long-wavelength components. The resolution in the horizontal direction is higher than in the vertical one, leading to vertical smoothing. The L² and S_{ε,λ} adjoint sources are different, as shown in Figure 6.10, leading to different sensitivity kernels.

The S_{ε,λ} adjoint source is more complicated, with the signature of a mass-spreading over the whole domain. This is primarily due to the penalisation of the marginals in the unbalanced Sinkhorn divergence formulation, more specifically to the gradient of the Kullback-Leibler divergence, and possibly to the separate 2D transport of the positive and negative parts of the seismic signals. The initial and final predicted data, in the time-receiver domain, are shown in Figure 6.11 together with the "true" (observed) data. A diffraction pattern can be observed, as well as a resolution deficit in the long-wavelength components, which precludes an accurate reconstruction of the inclusion shape.

FWI results using different implementations of the S_{ε,λ} misfit function are shown in Figures 6.12 and 6.13.

Figure 6.13: Velocity profile along a vertical section of the physical domain for different FWI iterations and different implementations of the S_{ε,λ} misfit function: 2D transport with P-splitting, 2D transport of squared amplitudes, 1D offset-by-offset transport with P-splitting, and 1D trace-by-trace (time) transport with P-splitting.

Reflection configuration

The peak amplitude of the velocity anomaly is reasonably well retrieved, see Figure 6.17. The initial and final predicted data are shown in the time-receiver domain in Figure 6.18. The reflection phase associated with the lower boundary of the inclusion, and the weaker multiple reflections behind it, are poorly resolved. The background velocity is also not exactly retrieved. The S_{ε,λ} based inversion exhibits a monotonous convergence rate, and the misfit keeps decreasing after 25 iterations, see Figure 6.16.

Initial model B

Reconstructed models after 25 iterations with the L² and the S_{ε,λ} based inversion are shown in Figure 6.19. The L² misfit function now does not suffer from cycle-skipping, and both the L² and S_{ε,λ} based inversions reconstruct meaningful models. The observed and predicted data are in phase, see Figure 6.9 for the S_{ε,λ} misfit function. The peak amplitude of the velocity anomaly is not yet fully recovered after 25 iterations with the L² misfit function, whereas with the S_{ε,λ} misfit function a relatively more accurate peak amplitude is recovered, see Figure 6.19.
S ", (1D-Offset-P )S ", (1D-Time-P ) S ", (2D-P )S ", (2D-Squared Amp.) S ", (1D-Offset-P ) SD (1D-Time-P ) 6.3 Layered reflection Models One-layer reflection model The one-layer model is inspired from [START_REF] Pladys | On cycle-skipping function modification for fullwaveform inversion: comparison of five recent approaches[END_REF]. The physical model is defined on a rectangular domain of 2000 ⇥ 1000 m dimension, and is parametrised by a homogeneous background acoustic wave velocity set to1500 m/s and by a 100 m thick layer at 300 m depth with a homogeneous wave velocity of 1600 m/s (postive anomaly) or 1400 m/s (negative anomaly). The surface acquisition is the same for both models with 48 evenly-spaced seismic sources and 251 evenly- Reconstructed models, after 50 iterations, with the L 2 and S ", based inversion are shown in Figure 6.27. The L 2 and S ", based inversion provide similar meaningful reconstructions. Results for the L 2 based inversion were expected since the L 2 -misfit function is sensitive to amplitude variation and polarity. Results for the S ", -based inversion are less intuitive since the S ", -misfit function is based on signal decomposition into positive and negative parts and does not guarantees the preservation of phase and polarity information. The data are in phase for the S ", based inversion, after 50 iterations, see Figure 6.28. The initial and final predicted data for both L 2 and S ", based inversion are shown in the time-receiver domain Figure 6.32. The peak-amplitude of the velocity anomaly is correctly retrieved by the L 2 based inversion after 50 iterations, but only partially retrieved by the S ", based inversion, see Figure 6.29. The S ", based inversion recovers only half of the anomaly, with relatively more vertical smoothing than in the L 2 case, see also Figure This could possibly result from the entropic regularisation, which tends to spread mass and smooth discontinuity. The S ", based inversion shows slower convergence rate compared to the L 2 based inversion, see Figure 6.30, but the misfit distance keep continuing to decrease after 50 iterations. The L 2 and S ", adjoint sources and optimisation gradients, at the first iteration, are shown in Figure 6.31. The S ", adjoint source shows the signature of a mass-spreading over the whole domain as previously discussed, and the gradient is smoother than the L 2 one, even though both capture similar features. Results obtained with the offset-by-offset S ", , i.e. 1-D unbalanced transport in space, are less satisfactory. In particular the reconstruction of the positive anomaly layer is altered by strong horizontal smoothing artefacts. S ", (2D -P )S ", (2D-Squared Amp.) S ", (1D-Offset-P )S ", (1D-Time-P ) Reconstructed models, after 50 iterations, with the L 2 and S ", based inversion are shown in Figure 6.27. Both inversion provide similar meaningful reconstructions. For the L 2 based inversion, the recovered peak-amplitude of the velocity anomaly slightly exceeds the "true" one, see Figure 6.35, whereas for the S ", based inversion, the peak-amplitude is only partially recovered after 50 iterations. Institut de Physique du Globe de Paris Miao YU, Ph.D. Thesis, 2021 S ", (2D -P )S ", (2D-Squared Amp.) S ", (1D-Offset-P )S ", (1D-Time-P ) The L 2 and S ", adjoint sources and optimisation gradients, at the first iteration, are shown in Figure 6.37. The gradient associated to S ", appears smoother compared to the L 2 one, even though both capture similar features. 
The convergence rate of the S_{ε,λ} based inversion is similar to the L² one, and the misfit distance keeps decreasing after 50 iterations, see Figure 6.36. Inversion results for different implementations of the S_{ε,λ} misfit function are shown in Figures 6.38 and 6.39. The results obtained with the offset-by-offset S_{ε,λ} misfit function, i.e. 1-D unbalanced transport in space, are more meaningful than in the case of a positive velocity anomaly, see section 6.3.2.

Figure 6.39: Velocity profile along a vertical section of the physical domain for different FWI iterations and different implementations of the S_{ε,λ} misfit function (2D with P-splitting, 2D squared amplitudes, 1D offset-by-offset, 1D trace-by-trace).

Three-layer reflection model

Model configuration

The three-layer model is inspired from [START_REF] Yang | Optimal transport for seismic inverse problems[END_REF]. The physical model is defined on a rectangular domain of 15 × 6 km, with three homogeneous layers: a first layer [0, 1] km, with a homogeneous wave velocity set to 1 km/s; a second layer [1, 2] km, with a homogeneous wave velocity set to 2 km/s; and a third layer below 2 km depth.

Both inversions converge slowly, see Figure 6.43, with a relatively better convergence rate after 500 iterations for S_{ε,λ}. The first arrivals in the observed and predicted data are in phase, but the smaller-amplitude reflected phases behind are not properly recovered at larger offsets, see Figure 6.44. A closer look at the reconstructed velocity models during the L² and S_{ε,λ} based inversion iterations is instructive: it is not intuitive that the S_{ε,λ} based inversion can recover velocity information in the third layer while no waves travelling through this region return to the receivers. Velocity information on the third layer is imprinted in the amplitude of the reflected phases associated with the third-layer interface and in the head waves, the latter being more easily retrieved with the 2D S_{ε,λ} misfit function, even for limited offset.

As the S_{ε,λ} based inversion gradually recovers deep velocity information, the reconstructed velocity model in the shallower region exhibits higher-amplitude, zero-mean oscillations around the "true" velocity model, compared to the L² based inversion. The amplitude of these oscillations slowly decreases as the inversion iterates. Both the L² and S_{ε,λ} based inversions overshoot the peak amplitude of the velocity contrast at the third-layer interface. The initial and final predicted data are shown in the time-receiver domain in Figure 6.46 for both the L² and S_{ε,λ} based inversion. The L² and S_{ε,λ} adjoint sources and optimisation gradients at the first iteration are shown in Figure 6.45. The initial predicted data do not contain the reflected phases at the third-layer interface, which have a strong signature in the L² and S_{ε,λ} adjoint sources, besides kinematic errors on the reflected phases associated with the second-layer interface and on the diving waves.

The S_{ε,λ} adjoint source contains an additional signature of a mass-spreading over the whole domain. This can be associated with the penalisation of the marginals in the Sinkhorn divergence formulation of unbalanced entropic optimal transport, and more specifically with the local gradient of the Kullback-Leibler divergence. The level of this mass is linked to how mass-unbalanced the predicted and observed data are, and as such to the initial model.
For strongly unbalanced observed and predicted data the transport cost relatively decreases. The gradient in the S_{ε,λ} based inversion is smoother at the third-layer interface, with fewer subsurface artefacts compared to the L² based inversion.

The velocity contrast at the third-layer interface is not correctly retrieved. This is actually not surprising, as the polarity of the reflected waves is lost in the squared-amplitude transformation. Interestingly, with the offset-by-offset S_{ε,λ} misfit function, i.e. 1-D unbalanced transport in space, the amplitude of the zero-mean oscillations in the subsurface region seems better mitigated than with 2D transport.

Model configuration

This model is derived from the synthetic Marmousi I, originally generated at the Institut Français du Pétrole (IFP) using a 2D acoustic wave solver [START_REF] Versteeg | The marmousi experience: velocity model determination on a synthetic complex data set[END_REF]. The model is defined on a 9000 × 3000 m domain and contains many reflectors, steep dips, and strong velocity variations in both the lateral and vertical dimensions, with a minimum velocity of 1500 m/s and a maximum velocity of 5500 m/s. The physical model here is defined on a mini-square domain extracted from the Marmousi I model, with origin x = 4680 m and 3000 × 3000 m dimension. The constant-density acoustic wave velocity model is shown in Figure 6.49.

The surface acquisition is composed of 48 evenly-spaced seismic sources and 150 evenly-spaced receivers, both at 500 m depth below the upper interface. The sources are point sources and the source-time function is a Ricker wavelet with central frequency f_c = 10 Hz; the recording time is set to 3 s. The computational domain includes additional perfectly matched layers along each side of the physical domain. The constant-density acoustic wave equation is solved in space on a regular grid with 150 × 150 points, i.e. dx = dz = 20 m, using a sixth-order finite-difference scheme, and in time using a second-order leap-frog integration scheme with constant time step satisfying the CFL condition, as implemented in PySIT. The inversion problem is solved using the l-BFGS gradient-based method, as implemented in PySIT, with a fixed number of FWI iterations set to 200.

The initial model, Figure 6.49, is derived from the true velocity model using a smooth low-pass filter with a frequency of 1/300, which preserves part of the long-wavelength components of the true model. Reconstructed models with the L² and S_{ε,λ} based inversion are presented in Figure 6.50. Both the L² and S_{ε,λ} based inversions recover similar velocity models in the shallow region (above 1.5 km). The initial and final predicted data are shown in the time-receiver domain in Figure 6.51 for both the L² and S_{ε,λ} based inversion. The L² and S_{ε,λ} adjoint sources and optimisation gradients at the first iteration are shown in Figure 6.52. The predicted data at the first iteration miss, as expected, the reflected phases and are only reasonably in phase at short offset for the diving waves. This is reflected in the L² adjoint source. The S_{ε,λ} adjoint source is more complicated, with a signature of mass-spreading over the whole domain as a result of the mass-unbalanced predicted and observed data induced by the smooth initial model.
The gradients in the L² and S_{ε,λ} based inversion are different, and smoother for S_{ε,λ}. The signature of the limited offset is visible in both cases. The convergence rate of the S_{ε,λ} based inversion, over 200 iterations, is relatively faster than the L² one, rapidly reducing the misfit distance by one order of magnitude after 20 iterations. The misfit distance keeps slowly decreasing after 200 iterations.

The computational domain includes additional perfectly matched layers along each side of the physical domain. The constant-density acoustic wave equation is solved in space on a regular grid of 176 × 150 points, with dx = dz = 20 m, using a sixth-order finite-difference scheme, and in time using a second-order leap-frog integration scheme with constant time step satisfying the CFL condition, as implemented in PySIT. The inversion problem is solved using the l-BFGS gradient-based method, as implemented in PySIT, with a fixed number of FWI iterations set to 250.

Two initial models are considered: an S-500 initial model, derived from the "true" velocity model by applying a smooth low-pass filter with a frequency set to 1/500; and a 1D-GRAD initial model, which is a homogeneous model with a linearly depth-increasing velocity, from 1500 m/s at the bottom of the water layer to 3500 m/s at depth. While the S-500 initial model conserves some of the long-wavelength content of the true model, the 1D-GRAD initial model does not contain any signature of the long-wavelength components of the true model. With both initial models it is a challenging task for FWI to recover the large low-wavenumber discrepancies between the initial and "true" models.

Both the predicted d_cal and the observed d_obs data, the latter corresponding to the "true" model, are physical realisations obtained with the same wave propagation solver, without additional noise on the data, i.e. the inversion crime setting. Unless otherwise specified, the S_{ε,λ} misfit function is formulated as in Chapter 5: the observed and predicted signals are each decomposed into positive and negative parts, using the P-transform defined in (5.4), and transported separately using the Sinkhorn divergence formulation for unbalanced optimal transport.

Results

S-500 initial model

Reconstructed physical models with the L² and S_{ε,λ} based inversion are shown for the S-500 initial model. The observed data and the predicted data at the last iteration, for both the L² and S_{ε,λ} based inversion, are reasonably in phase in the shallow region (above 1500 m). In the predicted data at the first iteration the reflected phases are, as expected, missing, whereas the diving phases are reasonably in phase only at short offset. This is reflected in the L² and S_{ε,λ} adjoint sources. The S_{ε,λ} adjoint source is smooth and captures information from both the diving and the reflected phases. The source also captures the signature of a mass-spreading over the whole domain, as a result of the unbalanced mass between the predicted and observed data. At the first iteration, the L² and S_{ε,λ} based inversion gradients look different, with different sensitivity regions at depth. The S_{ε,λ} based inversion convergence rate appears faster than the L² based one, and the misfit distance keeps continuously decreasing after 250 iterations.

1D-GRAD initial model

Reconstructed models with the L² and S_{ε,λ} based inversion are not satisfactory after 250 iterations, even if some long-wavelength components start emerging in a more satisfactory way for S_{ε,λ}.
It is an extremely challenging configuration for FWI. The predicted data at the last iteration do not fit the observed data well, for either the L² or the S_{ε,λ} based inversion. Long-wavelength error components appear for L², possibly as a result of cycle-skipping; this appears to be less of an issue for S_{ε,λ}. The L² and S_{ε,λ} adjoint sources are again different, and do not capture the same space and time information in the early and late times of the source. The gradient of the S_{ε,λ} based inversion exhibits large lobe-sided, long-wavelength updates in the shallow part (above 800 m) and relatively less sensitivity in the deeper region. The gradient of the L² based inversion exhibits more complicated and stronger updates in the region between 800 and 2000 m depth. The convergence rate over the first 250 iterations appears faster with the S_{ε,λ} based inversion compared to the L² based inversion, and the misfit keeps decreasing.

Chapter 7

Conclusion

This thesis investigates the application of entropic unbalanced optimal transport in the context of full-waveform seismology. We provide a brief overview of full-waveform inversion in seismology and of the main issues encountered in practice, which are associated with the inherent ill-posedness and the non-convexity of the inversion problem, together with a brief review of the different research directions that have been attempting to resolve these issues. The premise of this study is that transport-based distances can be a reasonably effective tool, partly because of their Lagrangian nature, for designing objective functions, alternative to the classical L^p based misfit functions, that improve convexity and enlarge the basin of attraction of the full-waveform inversion problem. This has already been promoted by a number of studies investigating transport distances, such as the 2-Wasserstein distance, the earth mover's distance and the graph-space optimal transport distance. This thesis can be seen as a contribution to this new trend.

More specifically, in Chapter 4 we present the Sinkhorn divergence formulation of entropic unbalanced optimal transport, correcting the bias introduced by the entropic penalisation, which benefits from the easy implementation and the numerical efficiency of the Sinkhorn algorithm.

The numerical illustrations provided in this thesis make, however, a number of assumptions, for the sake of simplicity and computing cost, that need to be kept in mind when reaching conclusions. First, they all assume constant-density acoustic wave propagation (only one model parameter), and as such mode conversion and anisotropy are not considered. Second, all observables are point samples of solutions of the acoustic wave equation, using the same solver and without additional noise such as acquisition noise (a fully controlled set-up, the so-called inversion crime setting). Third, seismic sources are assumed to be known and are modelled as point sources with a Ricker wavelet source-time function. As such, the numerical illustrations provided in this thesis have to be considered as academic illustrations that need to be further investigated and evaluated in more realistic contexts before drawing any practical conclusion in terms of field data applications.

At the end of this document, we unfortunately cannot provide a definitive conclusion on the relevance of using entropic unbalanced OT in FWI, even just in the context of academic models. We state here where we stand and the various ideas arising from this work. Based on chapter 6 we can say, as expected, that S_{ε,λ} provides results similar to the 2-Wasserstein distance promoted by the Engquist group, under the use of signal sign splitting.
It is yet unclear whether the introduction of unbalanced OT, avoiding signal normalisation, improves the inversion or not. Introducing unbalanced (to avoid normalisation) and entropic (for its numerical performance and flexibility) OT is appealing, but introduces two new parameters, ε and λ, which have so far simply been set by trial and visual assessment on the numerical tests. We do not have a clear understanding of the impact of these parameters on the convexity of the misfit, nor of how they can be optimised. One of the conclusions is therefore that this should first be investigated in more detail on the 2D parametric example (section 5.2). The potential use of these parameters in simulated annealing strategies could also be investigated.

On the positive side, our entropic misfit function is easy to implement, and the Sinkhorn divergence formulation really allows to set ε such that the number of Sinkhorn iterations to convergence remains acceptable. In all our experiments, it remained of the order of the discretisation of the offset line; in practice, one OT gradient step never exceeded 10 times the cost of one L² gradient step, and was more like 5 times on average. This is encouraging and indicates that the extension of this method to 3D FWI is possible thanks to the tensorisation technique (section 4.1.2).

Another interesting research direction raised by this study is the choice of the ground metric, the metric used to measure the distance (in time × offset) traveled by the mass (here the positive or the negative part of the signal). A first discussion, just on scaling the dimensions of the common shot gather, is given in section 4.4, with two unrelated conclusions: first, scaling the domain will affect the modulus of convexity for simple translating signals; second, when using entropic OT, the scale of the domain will constrain the transport range and can indeed be used to control it. In OT theory, and also in numerical practice, it is legitimate to replace the quadratic ground cost by more general, possibly data-dependent, costs.

The strategy developed in this thesis remains based on the awkward interpretation of seismic wavefields as probability densities, and we pay the heavy price of poorly understood non-linear signal transformations, even with unbalanced OT (see above), although it avoids the signal normalisation that often dampens the features and reduces the intensity range. In contrast with the L^p distances, which only consider differences in intensity, OT distances are transport distances and as such less sensitive to high-frequency perturbations, the transport being of the order of the wavelength of the perturbation.

The representation of the data as a graph in the (R_amplitude × R_time × R^{d−1}_offset) space, proposed by the Métivier group, is more natural and gives good results. There is a numerical price to pay, though, and it has until today remained limited to trace-by-trace comparisons. The application of entropic OT may be of interest there, for its numerical efficiency and also for the range-controlling property mentioned above.

Finally, let us remark that the "graph" strategy proposed above can be pushed further: e.g. a discretised seismogram line (a collection of ordered points) can be interpreted as a Dirac measure in R^{d−1}_offset × R^{n_t}_time × R_amplitude, and the OT distance can be defined on probability distributions over this space. This strongly reduces the number of samples of the data distributions, but the computational burden now resides in the computation and storage of the ground cost. Interestingly, an efficient software exists (developed by J.
Feydy, https://www.kernel-operations.io/geomloss/), based on an online strategy to evaluate the ground cost and on a sophisticated GPU implementation, to compute Sinkhorn divergences in high-dimensional spaces. It may be interesting to test whether OT distances based on deformations in these higher-dimensional spaces can be useful.
5.7 Comparison between the same misfit functions as a function of the time shift between the two Ricker wavelets (one with 5% added noise), for decreasing ε and ρ.
5.8 Ricker wavelet g(t) and Ormsby wavelet f(t) with characteristic frequencies (5, 10, 30, 35 Hz) (top); their negative and positive parts (bottom).
6.4 Final reconstructed models: initial and targeted models; FWI results with the L2-misfit and the S_{ε,ρ}-misfit functions.
6.5 Seismic traces with the S_{ε,ρ}-misfit function: initial and predicted data after 25 FWI iterations, together with observed data.
6.6 Velocity profile along a vertical section of the physical domain at surface position x = 600 m, as reconstructed by the S_{ε,ρ} based inversion.
6.7 Wavefield in the time-receiver domain for the common shot gather associated to the centred source: "true", initial and final wavefields, and difference between the "true" and final wavefields.
6.8 Final reconstructed models: initial and targeted models; FWI results with the L2-misfit and the S_{ε,ρ}-misfit functions.
6.9 Seismic traces: initial and predicted data after 25 FWI iterations, together with observed data.
6.10 Adjoint source in the time-receiver domain at the first FWI iteration.
6.11 Wavefield in the time-receiver domain for the common shot gather associated to the centred source: "true", initial and final wavefields, and difference between the "true" and final wavefields.
6.12 Reconstructed models for different implementation strategies of the S_{ε,ρ}-misfit function: 2-D transport after decomposition of the signals into positive and negative parts; squared transform of the signals before 2-D transport; 1-D transport in space (offset-by-offset) after decomposition; 1-D transport in time (trace-by-trace) after decomposition.
6.13 Velocity profile along a vertical section of the physical domain for different FWI iterations and different implementations of the S_{ε,ρ}-misfit function.
6.14 Final reconstructed models: initial and targeted models; FWI results with the L2-misfit and the S_{ε,ρ}-misfit functions.
6.15 Seismic traces: initial and predicted data after 25 FWI iterations, together with observed data.
6.16 Convergence rate for the S_{ε,ρ} based inversion.

The third family is linked to a graph-space optimal transport distance, following original developments in the transport Lp distance for signal analysis by Thorpe et al. (2017). Discrete graphs of the predicted and observed signals are compared rather than the signals themselves. The time signal is represented, after time discretisation, by a point cloud in the 2D time-amplitude domain, which ensures the positivity of the compared data. Chapter 6 (2D FWI numerical illustrations) illustrates numerically, on synthetic data and through simple canonical models, the use in the FWI context of the misfit function presented in chapter 4, and discusses the results. Chapter 7 (Conclusion) gives partial conclusions supported by this study; we also describe a possible follow-up work program.

Chapter 2. Seismic full-waveform inversion: an overview

This chapter provides a brief overview of full-waveform inversion and of the adjoint-state formulation, following closely the reviews of Tarantola (1988); Plessix (2006); Virieux and Operto (2009); Fichtner (2011); Virieux et al. (2014); Chauris (2021). Minimising (2.8) is not a trivial task, as J depends on m through d_cal, and the relation between d_cal and m, given by the state equations (2.1)-(2.2) and (2.6), is clearly non-linear. Global optimisation methods, which imply exploring the admissible model space M, still remain unfeasible today for most seismology applications, owing to the large number of model parameters. In practice, the minimisation proceeds iteratively toward the global minimum m⋆, starting from a plausible initial model m0, using a local optimisation approach based on gradient methods (Nocedal and Wright, 2006). For example, given the partition Ω = ∪_m C_m, the function χ_m(x) can be 1 on C_m and 0 in Ω \ C_m; other choices are of course possible. 3. The function h(d_cal, m, d_obs) does not depend on m.
This can be illustrated in Figure 2.2 with a very simple example where both the observed and predicted signals are sinusoidal with period T. Apart from finding a good initial kinematic model, reformulating the FWI with alternative ways to measure the difference between predicted observables and observations has long been recognised in the FWI community as a way to mitigate the non-convexity with respect to time and space shifts, the latter being a proxy for the convexity of the FWI problem with respect to the model wave velocities (Jannane et al., Wavelengths of earth structures that can be resolved from seismic reflection data).

Figure 2.2: Simple example illustrating the cycle-skipping artefact. The solid black line in the middle represents the observed sinusoidal signal with period T. The upper dashed line represents a predicted signal, simply the observed signal shifted by a time delay greater than T/2: in this case the local optimisation will update the model so that the (n+1)-th phase of the predicted signal matches the n-th phase of the observed signal. The lower dashed line represents a predicted signal shifted by a time delay of less than T/2: in that case the local optimisation will rightly update the model such that the n-th phases of the observed and predicted signals match. From Virieux and Operto (2009).

Figure 3.1: Kantorovich's transport plan between two discrete distributions of masses defined on spaces X and Y, allowing to distribute mass from one location to multiple locations.

One strategy with regard to the positivity assumption is to split the predicted d_cal and the observed d_obs signals into a positive and a negative part and to recombine them as

    d̃_cal = d⁺_cal + d⁻_obs,    d̃_obs = d⁺_obs + d⁻_cal.    (3.15)

Another strategy is to split the predicted d_cal and the observed d_obs signals into a positive and a negative part, and to scale them by their respective total masses. The objective function h(d_cal, d_obs) is then defined as the sum of the associated Wasserstein distances,

    h(d_cal, d_obs) = W₂²(d̃⁺_cal, d̃⁺_obs) + W₂²(d̃⁻_cal, d̃⁻_obs),

where d⁺ = max{d, 0} and d⁻ = max{−d, 0}.

A more recent strategy, introduced by Métivier et al. (2018); Metivier et al. (2019), is linked to a graph-space optimal transport distance, following the original developments of the transport Lp distance for signal analysis by Thorpe et al. (2017). After discretisation in time, the predicted d_cal and the observed d_obs signals are mapped into the 2D time-amplitude graph space as point clouds, which are compared rather than the signals themselves. The graph-space optimal transport distance can be efficiently computed as a linear assignment problem. This signal graph representation ensures positivity, and the new objective function shows promising convexity properties with respect to translation and amplitude variation, as demonstrated on field data applications (Górszczyk et al., 2020, 2021; Pladys et al., 2021). However, the extension to 2D and 3D source gathers remains to be evaluated, as it might increase significantly the computational cost.
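To make the two positivity strategies above concrete, here is a minimal NumPy sketch of the recombination trick (3.15) and of the split-and-scale variant; this is illustrative code of ours, not an excerpt from an existing FWI code.

import numpy as np

def recombine(d_cal, d_obs):
    # Positivity trick of (3.15): each transformed signal receives the
    # negative part of the other one, so both become non-negative while
    # their difference d_cal - d_obs is preserved.
    pos_cal, pos_obs = np.maximum(d_cal, 0.0), np.maximum(d_obs, 0.0)
    neg_cal, neg_obs = np.maximum(-d_cal, 0.0), np.maximum(-d_obs, 0.0)
    return pos_cal + neg_obs, pos_obs + neg_cal

def split_and_scale(d):
    # Split into positive/negative parts and normalise each to unit mass
    # (assumes both parts carry non-zero mass).
    p, n = np.maximum(d, 0.0), np.maximum(-d, 0.0)
    return p / p.sum(), n / n.sum()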
Chapter 4. Entropic regularization of OT and its generalizations

A new class of numerical methods based on "entropic regularisation" has been introduced for OT computations in Cuturi (2013) (see also the book by Peyré and Cuturi (2019)). It penalises OT(µ, ν) with the Shannon entropy of the transport plan; a small parameter, often denoted ε, controls this penalisation.

Figure 4.1: KL function s ↦ KL(s|1) + 1.

Sinkhorn's algorithm (4.15) is initialised with an arbitrary vector b⁰ = 𝟙_m, where 𝟙_m is the m-vector of ones. It converges linearly, see e.g. Remark 4.13 in Peyré and Cuturi (2019), but the asymptotic convergence rate deteriorates when ε → 0; in the case of quadratic optimal transport (4.3) the algorithm behaves as O(1/ε). An example borrowed from Peyré and Cuturi (2019), reproduced in Figure 4.2, illustrates this effect.

Figure 4.2: Influence of the regularization parameter ε on the convergence rate (4.16) of Sinkhorn's algorithm (source: Peyré and Cuturi (2019)).

Figure 4.3: (top) The input densities p (blue) and q as functions of x and y. (center) Evolution of the couplings γ_l at iteration l of the Sinkhorn algorithm. (bottom) Solution γ⋆_ε of problem (4.4) for several values of ε (source: Benamou et al. (2015)). The couplings are plotted in grey level in x × y space.

To handle general positive measures, a new metric on positive Radon measures has been independently introduced in Chizat et al. (2018b), Kondratyev et al. (2016b) and Liero et al. (2018).

Figure 4.4 shows the value of this parameter for different regimes (θ_D, θ_M).

Figure 4.4: Colormap of C (4.37) as a function of θ_M (the Metric Angle) and θ_D (the Propagation Angle).

The same separability argument can be used to solve the 1D trace-by-trace OT_ε problem. This remark naturally extends to the unbalanced (soft marginal constraints) and Sinkhorn divergence variants.
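For concreteness, here is a minimal NumPy sketch of the balanced Sinkhorn iteration described above, with a dense cost matrix; the variable names are ours. For small ε, a log-domain implementation is required in practice to avoid underflow in the Gibbs kernel.

import numpy as np

def sinkhorn(mu, nu, C, eps, n_iter=1000, tol=1e-9):
    # Entropic OT between histograms mu (n,) and nu (m,) for the cost C (n, m).
    K = np.exp(-C / eps)               # Gibbs kernel
    b = np.ones_like(nu)               # b^0 = 1_m
    for _ in range(n_iter):
        a = mu / (K @ b)               # enforce the row marginal
        b_prev, b = b, nu / (K.T @ a)  # enforce the column marginal
        if np.max(np.abs(b - b_prev)) < tol:
            break
    P = a[:, None] * K * b[None, :]    # primal transport plan
    return np.sum(P * C), P            # transport cost <P, C> and the plan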
Chapter 5

Figure 5.1: Illustration of the effect of λ for the transformation P_λ defined by (5.4), applied to the function f(t) = t, ∀t ∈ [−1, 1], which changes sign at t = 0. As λ increases, P_λ(f) becomes smoother and smoother in the vicinity of t = 0, but increasing artificial mass is created, which can lead to a possible bias in optimal transport.

Figure 5.2: The top row shows a three-layer synthetic model recording (d_obs) and data (d_cal) for one of the intermediate models in the simulation. The bottom row shows the S_{ε,ρ} gradient for the positive part and the P_λ derivative of the signal needed to smooth the gradient.

Figure 5.3: Top figure: Ricker wavelets f(t) (solid blue line) and g(t) (solid red line), the latter being a shifted version for a translation s = 0.25. Bottom figures: the negative (bottom-left) and positive (bottom-right) parts of the two Ricker wavelets.

Figure 5.4: Misfit with respect to the shift s.

Figure 5.5: Top figure: Ricker wavelets g(t) (solid blue line) and f(t) (solid red line), with added Gaussian noise. Bottom figures: the negative (bottom-left) and positive (bottom-right) parts of the two Ricker wavelets.

Figure 5.6: Comparison between the L2 (solid blue curve), the balanced (normalised) S_{ε,ρ} (solid purple curve), the unbalanced OT_{ε,ρ} (solid red curve) and S_{ε,ρ} (solid yellow curve) misfit functions as a function of the time shift between the two Ricker wavelets, for increasing Gaussian noise added to one of the wavelets. From top-left to bottom-right: N(0, 0.01), N(0, 0.05), N(0, 0.1), N(0, 0.2), i.e. 1, 5, 10 and 20%.

Figure 5.7: Same comparison as a function of the time shift between the two Ricker wavelets (one with 5% added noise), for decreasing ε and ρ.

Figure 5.8: Top figure: Ricker wavelet g(t) (solid blue line) and Ormsby wavelet f(t) (solid red line) with characteristic frequencies (5, 10, 30, 35 Hz). Bottom figures: the negative (bottom-left) and positive (bottom-right) parts of the two wavelets.

Figure 5.9: Comparison between the L2, the balanced (normalised) S_{ε,ρ}, the unbalanced OT_{ε,ρ} and S_{ε,ρ} misfit functions as a function of the time shift between the Ricker and Ormsby wavelets, for decreasing ε and ρ.

Figure 5.10: The L2 misfit function as a function of the model parameters v₀ and α. The red star indicates the location of the global minimum; in the right figure, the intersection of the two dashed lines indicates the reference model (v₀⋆, α⋆) in the model space.

Figure 5.11: The OT_{ε,ρ} misfit function as a function of the model parameters v₀ and α. The red star indicates the location of the global minimum. The blue star indicates the position of the reference model (v₀⋆, α⋆) in the left figure, while in the right figure this position is indicated by the intersection of the two dashed lines.

Figure 5.12: Misfit function based on the Sinkhorn divergence (S_{ε,ρ}) with respect to v₀ and α. The red star is the location of the global minimum, while the position of the reference model (v₀⋆, α⋆) is indicated by the intersection of the two dashed lines.

This chapter assesses the misfit function (4.30) through canonical 2D models. The first model, the Camembert model, focuses on transmission and reflection configurations. The second, a layered model, focuses on simple reflection configurations. The third, the Marmousi model, focuses on a more realistic configuration. All the numerical examples are performed using 2D constant-density acoustic wave modelling and local gradient-based optimisation as implemented in the PySIT platform, reviewed in section 2. The seismic source is assumed to be a point source with a Ricker wavelet as time function. For all the numerical experiments, the main parameter values associated to the S_{ε,ρ}-misfit function and the iterative Sinkhorn algorithm are summarised below.

Parameter (used value)    Use
ε = 10⁻²                  Entropic regularisation, see section 4.1.1.
ρ = 10¹                   KL unbalanced penalisation of the marginal constraints on d_cal and d_obs, see section 4.2.2.
λ = 10⁻³                  P_λ-transform into positive/negative parts of d_cal and d_obs, see section 5.1.
n_t = 128                 Number of discretisation points in time used for the OT computation; this implies a resampling of the signals, whose discretisation varies according to the model and n_r.
n_r (problem-dependent)   Number of discretisation points in offset for the OT computation; by default it is also the number of receivers in the acquisition configuration and in the FD discretisation of the forward model.
k_max = [25, 1]           Transport window in time (in number of points), see section 4.4.2; k_max = 1 corresponds to the trace-by-trace misfit.

Figure 6.1: Numerical inversion after 25 iterations.

Figure 6.2: Inversion convergence.

Figure 6.3: S_{ε,ρ} inversion: transmission (initial model B).

The reconstructed velocity profile is shown in Figure 6.6, together with the exact background velocity. The shape of the inclusion is not fully recovered, possibly due to the limited offset and a relative deficit in long-wavelength components. The initial and final predicted data are shown in the time-receiver domain in Figure 6.7. A diffraction pattern can be observed, due to the relatively strong velocity anomaly in the inclusion. The improved convexity of the S_{ε,ρ}-misfit function prevents the FWI problem from being trapped in local minima and enables the convergence toward the global minimum.

Figure 6.5: Seismic traces with the S_{ε,ρ}-misfit function: initial (solid green line) and predicted data (solid red line) after 25 FWI iterations, together with observed data (solid blue line).

Figure 6.6: Velocity profile along a vertical section of the physical domain at surface position x = 600 m, as reconstructed by the S_{ε,ρ} based inversion.

Figure 6.7: Wavefield in the time-receiver domain for the common shot gather associated to the centred source: "true" wavefield (upper-right); initial wavefield (upper-left); wavefield at the final FWI iteration (lower-right); difference between the "true" wavefield and the wavefield at the final FWI iteration (lower-left).

Figure 6.8: Final reconstructed models: initial model (upper-left) and targeted model (upper-right); FWI results with the L2-misfit function (bottom-left) and the S_{ε,ρ}-misfit function (bottom-right).

Figure 6.9: Seismic traces: initial (solid green line) and predicted data (solid red line) after 25 FWI iterations, together with observed data (solid blue line).

Figure 6.10: Adjoint source in the time-receiver domain at the first FWI iteration.

Figure 6.11: Wavefield in the time-receiver domain for the common shot gather associated to the centred source: "true" wavefield (upper-right); initial wavefield (upper-left); wavefield at the final FWI iteration (lower-right); difference between the "true" wavefield and the wavefield at the final FWI iteration (lower-left).

The different implementations are compared in Figure 6.13. A first implementation makes use of the squared transformation, without normalisation, of the observed and predicted signals. With this signal transformation, the 2D S_{ε,ρ}-misfit function leads to a reasonably correct model reconstruction, and a relatively improved estimation of the peak amplitude of the velocity anomaly compared with the 2D S_{ε,ρ}-misfit function based on the decomposition of the signals into positive and negative parts. Another implementation makes use of the separability of the Sinkhorn kernel matrix and of an appropriate scaling of the transport domain, see section 4.4.2: the S_{ε,ρ}-misfit can then be reduced to either 1-D unbalanced transport in time (trace-by-trace) or 1-D unbalanced transport in space (offset-by-offset).
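The computational content of the separability argument can be sketched as follows (plain NumPy, our notation): on a Cartesian time × offset grid with a separable quadratic ground cost, the Gibbs kernel factorises as a tensor product and is never formed explicitly.

import numpy as np

def gibbs_1d(x, eps):
    # 1D Gibbs kernel for the quadratic cost |x_i - x_j|^2.
    return np.exp(-(x[:, None] - x[None, :])**2 / eps)

def apply_gibbs_2d(Kt, Kx, v):
    # Apply K = Kt (tensor) Kx to a field v sampled on the (n_t, n_x) grid:
    # cost O(n_t^2 n_x + n_t n_x^2) instead of O((n_t n_x)^2) for the dense kernel.
    return Kt @ v @ Kx.T

# One Sinkhorn matrix-vector product on a 128 x 64 grid:
t, x = np.linspace(0.0, 1.0, 128), np.linspace(0.0, 1.0, 64)
w = apply_gibbs_2d(gibbs_1d(t, 1e-2), gibbs_1d(x, 1e-2), np.random.rand(128, 64))

Rescaling the t and x axes before building the 1D kernels is exactly the transport-domain scaling used above to restrict or widen the transport range along each dimension.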
Figure 6.12: Reconstructed models for different implementation strategies of the S_{ε,ρ}-misfit function: 2-D transport after decomposition of the signals into positive and negative parts (upper-left); squared transform of the signals before 2-D transport (upper-right); 1-D transport in space (offset-by-offset) after decomposition of the signals into positive and negative parts (lower-left); 1-D transport in time (trace-by-trace) after decomposition of the signals into positive and negative parts (lower-right). The latter two implementations make use of the separability of the Gibbs matrix in the Sinkhorn algorithm and of the appropriate scaling of the transport domain.

Figure 6.14: Final reconstructed models: initial model (upper-left) and targeted model (upper-right); FWI results with the L2-misfit function (bottom-left) and the S_{ε,ρ}-misfit function (bottom-right).

Figure 6.15: Seismic traces: initial (solid green line) and predicted data (solid red line) after 25 FWI iterations, together with observed data (solid blue line).

The upper interface and the horizontal extension of the inclusion are meaningful, whereas the lower inclusion boundary is not properly resolved and appears "melted", because reflections from the lower inclusion boundary have not been sufficiently recorded due to the limited offset.

Figure 6.16: Convergence rate for the S_{ε,ρ} based inversion.

Figure 6.19: Final reconstructed models: initial model (upper-left) and targeted model (upper-right); FWI results with the L2-misfit function (bottom-left) and the S_{ε,ρ}-misfit function (bottom-right).

Figure 6.20: Seismic traces: initial (solid green line) and predicted data (solid red line) after 25 FWI iterations, together with observed data (solid blue line).

Figure 6.22: Adjoint source in the time-receiver domain at the first FWI iteration.

Figure 6.24: Reconstructed models for different implementation strategies of the S_{ε,ρ}-misfit function (same four-panel layout as Figure 6.12).

Figure 6.25: Velocity profile along a vertical section of the physical domain for different FWI iterations and different implementations of the S_{ε,ρ}-misfit function.

Figure 6.26: Convergence rate of the L2 (left) and S_{ε,ρ} (right) based inversions.

Figure 6.27: Final reconstructed models: initial model (upper-left) and targeted model (upper-right); FWI results with the L2-misfit function (bottom-left) and the S_{ε,ρ}-misfit function (bottom-right).

Figure 6.28: Velocity profiles reconstructed with the L2 and the S_{ε,ρ} based inversions at receivers 67 and 84: initial (solid green line) and predicted data (solid red line) after 50 iterations, together with the velocity profile in the "true" model (solid blue line).

Figure 6.30: Convergence rate with the L2 (left) and the S_{ε,ρ} (right) based inversions.

Figure 6.31: Adjoint source and gradient at first iteration for the L2 and S_{ε,ρ} misfit functions.
Figure 6.33: Reconstructed models for different implementation strategies of the S_{ε,ρ}-misfit function (same four-panel layout as Figure 6.12).

Figure 6.35: Velocity profiles in the physical domain at surface position x = 1000 m for different iterations of the L2 and the S_{ε,ρ} based inversions.

Figure 6.38: Reconstructed models for different implementation strategies of the S_{ε,ρ}-misfit function (same four-panel layout as Figure 6.12).

The results are shown in Figure 6.39. The 2-D S_{ε,ρ} misfit function based on the squared-amplitude signal transformation leads to a meaningful reconstruction, similar to the L2 results in Figure 6.34. The peak amplitude of the velocity anomaly is only partially reconstructed after 50 iterations, similar to the one obtained with the 2-D S_{ε,ρ}-misfit function based on the decomposition of the signals into positive and negative parts, see Figure 6.39. The results obtained with the trace-by-trace S_{ε,ρ}-misfit function, i.e. 1-D unbalanced transport in time, are very similar to the L2 results, Figure 6.27; the peak amplitude of the velocity anomaly is well retrieved.

Figure 6.41: Final reconstructed models: initial model (upper-left) and targeted model (upper-right); FWI results with the L2-misfit function (bottom-left) and the S_{ε,ρ}-misfit function (bottom-right).

Figure 6.42: Velocity profiles in the physical domain at surface position x = 7500 m for different iterations of the L2 and the S_{ε,ρ} based inversions.

Figure 6.43: Convergence rate for the L2 (left) and the S_{ε,ρ} (right) based inversions.

Figure 6.44: Seismic traces at receivers 84 and 101: initial (solid green line) and predicted data (solid red line) after 500 FWI iterations with the S_{ε,ρ}-misfit function, together with observed data (solid blue line).

Figure 6.42 shows that the S_{ε,ρ} based inversion gradually recovers part of the velocity information below the upper third-layer interface. Both the L2 and the S_{ε,ρ} based inversions seem to converge toward a local minimum with more artificial deep reflectors, see Figure 6.41, which could be the signature of an ill-posed inversion problem rather than of a cycle-skipping problem.
Figure 6.46: Wavefield in the time-receiver domain for a common shot gather associated to the centred source: "true" wavefield (upper-right); initial wavefield (upper-left); inverted wavefield and difference between the "true" and inverted wavefields for L2 (lower-left) and S_{ε,ρ} (lower-right).

Figure 6.47: Reconstructed models for different implementation strategies of the S_{ε,ρ}-misfit function (same four-panel layout as Figure 6.12).

The results obtained with the offset-by-offset S_{ε,ρ}-misfit function, i.e. 1-D unbalanced transport in space, are meaningful and similar to the 2-D S_{ε,ρ} results based on the decomposition of the signals into positive and negative parts. The inversion also gradually recovers the velocity below the third-layer interface. In this configuration, i.e. a homogeneous, horizontally-layered medium, 1-D unbalanced transport in space retrieves velocity information associated to head waves and to waves reflected at the third-layer interface. The peak amplitude of the velocity anomaly slightly exceeds the one retrieved by the 2D S_{ε,ρ} based inversion. The results obtained with the trace-by-trace S_{ε,ρ}-misfit function, i.e. 1-D unbalanced transport in time, are also meaningful in this case, and the inversion gradually recovers the velocity below the third-layer interface.

Figure 6.48: Velocity profile along a vertical section of the physical domain for different FWI iterations and different implementations of the S_{ε,ρ}-misfit function.

Figure 6.49: Mini-square Marmousi I: initial (upper-left) and true (upper-right) models.

Both inversions resolve the shallow part of the model (above roughly 1.5 km), with accurately resolved fine layers and normal faults, but have more trouble resolving a high-resolution and accurate velocity model in the deep region, because reflections from the dipping reflectors and the anticline structure have not been sufficiently recorded due to the limited offset. Pseudo velocity logs at two surface locations reconstructed with the L2 and the S_{ε,ρ} based inversions are compared in Figure 6.54. Both inversions recover reasonably well the true velocity model above 1500 m, whereas the quality of the inverted models degrades with depth. Both inversions fail to update the low-wavenumber structure of the velocity model at depth (below 2000 m) and place reflectors at wrong positions.

Figure 6.52: Adjoint source and gradient at first iteration for the L2 and S_{ε,ρ} misfit functions.
Figure 6.54: Pseudo velocity logs at two surface locations, x = 6480 m (first row) and x = 7380 m (second row), for the L2 (left) and S_{ε,ρ} based inversions.

Figure 6.55: Fracture zone Marmousi II model: results for the L2 (second row) and S_{ε,ρ} (third row) misfit functions, with the S-500 (second column) and 1D-GRAD (third column) initial models.

Figure 6.56: S-500 initial model: pseudo velocity logs at two surface locations, x = 10730 m (first row) and x = 11430 m (second row), for the L2 (left) and S_{ε,ρ} based inversions.

Figure 6.57: S-500 initial model: wavefield in the time-receiver domain for a common shot gather associated to a source at the middle of the domain: observed wavefield (upper-right); initial wavefield (upper-left); final wavefield for the inverted model and difference between the observed and final wavefields for L2 (lower-left) and S_{ε,ρ} (lower-right).

Figure 6.59: S-500 initial model: FWI convergence with the L2 (left) and the S_{ε,ρ} (right) misfit functions.

Figure 6.60: Reconstructed models for different implementation strategies of the S_{ε,ρ}-misfit function (same four-panel layout as Figure 6.12).

Figure 6.61: 1D-GRAD initial model: FWI convergence with the L2 (left) and the S_{ε,ρ} (right) misfit functions.

Figure 6.64: 1D-GRAD initial model: wavefield in the time-receiver domain for a common shot gather associated to a source at the middle of the domain: observed wavefield (upper-right); initial wavefield (upper-left); final wavefield for the inverted model and difference between the observed and final wavefields for L2 (lower-left) and S_{ε,ρ} (lower-right).

These results rely on the easy-to-use and cost-effective Sinkhorn algorithm associated to the entropic penalisation, whose use in the context of full-waveform inversion we specifically discuss. The S_{ε,ρ} distance still interprets signals as positive measures, requiring an ad-hoc signal transformation into positive and negative parts (discussed in chapter 5), but unbalanced transport avoids the ad-hoc normalisation methods that often dampen features and reduce the intensity range of a signal. This new misfit function is assessed in the most generic and reproducible way through several synthetic cases (presented in chapters 5 and 6), from 1D toy models (focussing on shifted patterns) to 2D FWI numerical illustrations on simple canonical configurations in transmission and reflection, and on more realistic heterogeneous configurations extracted from the Marmousi models. The comparison between misfit functions based on L2 and on different formulations of S_{ε,ρ}, associated to different signal transformations and to ground cost metrics linked to the scaling of the transport domain, is made simple by the adjoint-state formalism of FWI (presented in Chapter 2).
For reproducibility, all misfit functions and numerical tests developed in this thesis have been implemented within the Python Seismic Imaging Toolbox (PySIT), an open-source platform developed by Russell J. Hewett and Laurent Demanet in the Imaging and Computing group of the Department of Mathematics at MIT.

To date, several families of OT-based misfit functions have been proposed:

• A first family, proposed originally by Engquist and collaborators, is linked to the 2-Wasserstein metric. It is limited to probability measures, so seismic signals require an ad-hoc transformation and normalisation to be compared with this metric. This can alter phase and amplitude information and favour non-convexity with respect to time shifts. Computing the 2-Wasserstein metric through the Monge-Ampère formulation, whether with optimised finite differences, linear solvers, or a semi-discrete strategy, adds a significant computational cost to FWI. In practice, the application of the 2-Wasserstein distance has often been restricted to 1D optimal transport, i.e. to the cheaper trace-by-trace comparison.

• A second family, proposed by Métivier and collaborators, calls on the 1-Wasserstein metric, a particular instance of the Kantorovich-Rubinstein metric, associated to the dual formulation of the optimal transport problem. This OT metric is then defined for signed measures and can be used to handle unbalanced optimal transport, avoiding ad-hoc signal transformation and normalisation. This leads to a convex optimisation problem under linear constraints that can be solved using non-smooth convex optimisation methods, which significantly reduce the computational cost compared to the 2-Wasserstein metric. The convexity of the objective function is however not guaranteed with respect to time and amplitude transformations.

• A third family, recently introduced by Métivier and collaborators, is linked to an optimal transport distance in the extended space of the data graph, following the original developments of the transport Lp distance for signal analysis by Thorpe and collaborators. Discrete graphs of the predicted and observed signals are compared rather than the signals themselves. The time signal is represented, after time discretisation, by an empirical measure in the 2D time-amplitude domain, which ensures the positivity of the compared data. This new objective function shows promising convexity properties with respect to time and amplitude transformations. In 2D, the distance can be computed at a moderate extra cost as a linear assignment problem. However, the extension to higher domain dimensions remains to be developed and evaluated, and it could significantly increase the computational cost.
We use the classical acoustic approximation of elastic waves and assume a constant density in the 2D illustrations; in this case, the sound velocity is the only model parameter. In the acoustic approximation, mode conversion is not taken into account and the medium is isotropic. In practice, when dealing with heterogeneous media, such an approximation can be questioned (Cance et al., Validity of the acoustic approximation for elastic waves in heterogeneous media).

These OT variants can be computed with the Sinkhorn algorithm, which significantly reduces the computational cost, and the Sinkhorn divergence can be extended to unbalanced OT. This approach requires a separation of the signal into positive and negative parts, as well as a filtering. We give in this document an overview of the theoretical framework of these two recent extensions of OT, leading to smooth and differentiable objective functions in the context of FWI, within the general context of optimal transport theory, as well as a review of the different existing approaches. These recent extensions are in particular analysed in the context of an application to the FWI method through 1D and 2D parametric studies. Their implementation, exploiting the tensorisation properties of Cartesian discrete spaces, is presented, together with an evaluation of the complexity of the Sinkhorn algorithm. We numerically illustrate the use of these misfit functions in the FWI framework through academic cases and a reference example (Marmousi-I). The numerical illustrations nevertheless make a number of simplifying physical assumptions that must be kept in mind.

Several authors (Qiu et al.; Yang and Engquist, 2018; Yang et al., 2018; Sun et al.) have proposed new objective or fidelity functions. These newly proposed methods transform the local, sample-by-sample least-squares measure into a global one, trace by trace, or even source gather by source gather. OT-based distances are appealing because of their Lagrangian nature and their convexity with respect to translations and dilations of a prescribed probability distribution. To date, several families of OT-based functions have been proposed:

• A first family, proposed originally by Engquist and collaborators (Engquist and Froese, 2014; Engquist et al., 2016; Engquist and Yang, 2019; Engquist et al., 2020; Engquist and Yang, 2021), is linked to the 2-Wasserstein metric associated to balanced optimal transport, which shows nice convexity properties with respect to time shifts and rotations. The classical 2-Wasserstein distance however only applies to probability measures: raw oscillatory seismic signals require ad-hoc transformation and normalisation.

Let ⟨•, •⟩ denote the L2 scalar product on R^{N×N}. Then (3.9) can be written in short form

    OT(µ, ν) = min_{γ ∈ Π(µ,ν)} ⟨γ, c⟩.    (3.10)

This is a linear optimisation problem with N² unknowns and 2N linear constraints. The best known linear programming methods have cubic complexity, see Remark 3.3 in Peyré and Cuturi (2019), and cannot be practically used for realistic seismic applications.
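As an illustration of this scaling problem, the following sketch (SciPy, our own toy code) solves (3.10) directly as a dense linear program; it is only feasible for very small N.

import numpy as np
from scipy.optimize import linprog

def ot_lp(mu, nu, C):
    # Kantorovich problem (3.10) as a dense LP: N^2 unknowns, 2N equality constraints.
    n, m = len(mu), len(nu)
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):                      # row marginals: sum_j P[i, j] = mu[i]
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):                      # column marginals: sum_i P[i, j] = nu[j]
        A_eq[n + j, j::m] = 1.0
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                  bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(n, m)     # optimal cost and transport plan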
To mitigate this effect, it has been proposed to add in (3.18) and (3.19) the same transformation applied to d_cal and d_obs, leading to symmetric transformations. Other types of transformations, originally proposed in the context of the FWI least-squares objective function by Liu et al. and Donno et al., are of relevance for optimal transport. They consider the normalised cumulative distributions associated to positive transforms of d_cal and d_obs,

    Q_cal(x_r, t) = ∫₀ᵗ d̃_cal(x_r, τ) dτ / ∫₀ᵀ d̃_cal(x_r, τ) dτ,    Q_obs(x_r, t) = ∫₀ᵗ d̃_obs(x_r, τ) dτ / ∫₀ᵀ d̃_obs(x_r, τ) dτ,    (3.20)

where

    d̃ = |d| (absolute value of the signal), d² (square of the signal), or E(d) (envelope of the signal).    (3.21)

A second class of transformations is the exponential transformation

    d̃_cal = exp(b d_cal) / ⟨exp(b d_cal)⟩,    d̃_obs = exp(b d_obs) / ⟨exp(b d_obs)⟩,    b > 0,    (3.18)

or the more stable Softplus version of the scaling,

    d̃_cal = log(exp(b d_cal) + 1) / ⟨log(exp(b d_cal) + 1)⟩,    d̃_obs = log(exp(b d_obs) + 1) / ⟨log(exp(b d_obs) + 1)⟩,    b > 0.    (3.19)

Under such transformations, the influence of the high positive values and of the high negative values is not the same.

Other strategies for full-waveform inversion. Seemingly uncorrelated with the previously mentioned strategies, Métivier et al. (Métivier et al., 2016a,b; Métivier et al., 2016) developed an approach based on a generalised dual W1 distance (3.14), see Section 3.2.2.

Using the separability of the Gibbs kernel, where N is the total number of points in time × offset, the cost of one Sinkhorn iteration is O(N^{1+1/d}) in place of O(N²). Of course, one still has to iterate Sinkhorn until convergence.

Instead of the quadratic Wasserstein distance, we can use the unbalanced OT modification and its various entropic approximations. A first choice is to use (4.23),

    h_{OT_{ε,ρ}}(d_cal, d_obs) = OT_{ε,ρ}(d⁺_cal, d⁺_obs) + OT_{ε,ρ}(d⁻_cal, d⁻_obs),    (5.2)

but we will mostly focus on its "debiased" Sinkhorn divergence version (4.30),

    h_{S_{ε,ρ}}(d_cal, d_obs) = S_{ε,ρ}(d⁺_cal, d⁺_obs) + S_{ε,ρ}(d⁻_cal, d⁻_obs).

The wave equation is discretised in time with a constant time step satisfying the CFL condition, as implemented in PySIT. The inversion problem is solved using the l-BFGS gradient-based method, as implemented in PySIT, with a fixed number of iterations set to 500. The initial model is a model where the deepest, third layer is unknown, see Figure 6.40. Both the predicted d_cal and the observed d_obs data, the latter corresponding to the "true" model, are physical realisations obtained with the same wave propagation solver, without additional noise added to the data, i.e. in the inversion-crime setting. Unless otherwise specified, the S_{ε,ρ}-misfit function is formulated as in Chapter 5: the observed and predicted signals are decomposed into positive and negative parts, with the P_λ-transform defined in (5.4), and transported separately using the Sinkhorn divergence formulation for unbalanced optimal transport. In this model configuration, there is no back-scattered information from the interior of the third layer returning to the receivers.
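Putting the pieces together, the evaluation of the S_{ε,ρ}-misfit described above can be sketched as follows. This is illustrative code of ours: the smooth split below is a hypothetical stand-in for the P_λ-transform (5.4), whose exact form is defined in the thesis, and ot_eps stands for any entropic (unbalanced) OT solver, such as the Sinkhorn routine sketched in chapter 4.

import numpy as np

def p_split(d, lam=1e-3):
    # Smooth positive/negative decomposition: d_plus - d_minus = d,
    # both parts non-negative; placeholder for P_lambda of (5.4).
    d_plus = 0.5 * (np.sqrt(d**2 + lam**2) + d)
    return d_plus, d_plus - d

def sinkhorn_divergence(a, b, C, eps, ot_eps):
    # Debiased divergence S(a, b) = OT(a, b) - (OT(a, a) + OT(b, b)) / 2
    # (the unbalanced variant adds a mass-difference term, omitted here).
    return ot_eps(a, b, C, eps) - 0.5 * (ot_eps(a, a, C, eps) + ot_eps(b, b, C, eps))

def s_misfit(d_cal, d_obs, C, eps, ot_eps):
    # h_S(d_cal, d_obs) = S(d_cal^+, d_obs^+) + S(d_cal^-, d_obs^-).
    p_cal, n_cal = p_split(d_cal)
    p_obs, n_obs = p_split(d_obs)
    return (sinkhorn_divergence(p_cal, p_obs, C, eps, ot_eps)
            + sinkhorn_divergence(n_cal, n_obs, C, eps, ot_eps))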
04112071
en
[ "sdv.bbm" ]
2024/03/04 16:41:24
2022
https://theses.hal.science/tel-04112071/file/these_A_ZAITER_Tahgrid_2022.pdf
Thomas Stalder, Taghrid Zaiter, Wassim El Basset, Raphaël Cornu, Hélène Martin, Mona Diab, Arnaud Béduneau. Email: [email protected].

List of abbreviations: WOR, water/oil ratio; W/O, water-in-oil; ZnO, zinc oxide; Zn, zinc; MWNTs, multi-walled nanotubes; MCT, medium chain triglyceride; MUC5AC, mucin 5AC; NPs, nanoparticles; PDI, polydispersity index; PLA, poly-(lactic acid); PLGA, poly-(lactide-co-glycolide); PIT, phase inversion temperature; QDs, quantum dots; ROS, reactive oxygen species; SG, stratum granulosum; SS, stratum spinosum; SC, stratum corneum; SGF, simulated gastric fluid; SIF, simulated intestinal fluid; SOR, surfactant/oil ratio; SWNTs, single-walled nanotubes; SLN, solid lipid nanoparticles; SiO2, silicon dioxide; TEER, transepithelial electrical resistance; TiO2, titanium dioxide; TEM, transmission electron microscopy.

CHAPTER I: GENERAL INTRODUCTION

I. Nanomaterials and nanoparticles

Definition

The definition of nanomaterials (NM) published in 2011 in the Official Journal of the European Commission (2011/696/EU) (1) has very recently been updated (2). NM are defined as natural, incidental, or manufactured materials consisting of solid particles, including single, agglomerated, and aggregated particles, where at least 50% of the particles in the number size distribution have one or more external dimensions between 1 nm and 100 nm. Compared with the previous definition, two other conditions were added to take the shape into account:

- particles with an elongated shape, such as rods, fibers, or tubes, where two external dimensions are smaller than 1 nm and the other one is larger than 100 nm;
- particles with a plate-like shape, where one external dimension is smaller than 1 nm and the other ones are larger than 100 nm.

The condition on the volume-specific surface area (greater than 60 m²/cm³), present in the 2011 definition, is no longer considered: a high surface area is not always directly correlated to the dimensions of the particles but rather to their internal structure. Nanoparticles (NPs) are a subclass of NM with three dimensions of less than 100 nm (3).

Economic market

Due to their physicochemical properties, NM are used in many fields, including the electronic, healthcare, aerospace, food, and textile industries. Their use is constantly increasing, with an expected annual growth rate of 13.1% between 2021 and 2028 (4) (Figure 1).

Figure 1: Estimation of various nanomaterial uses over a span of 10 years, according to Market Analysis Report (4).

The medical segment of the NM market is the largest, with 29.98% of revenues in 2020. Other applications of NPs, such as electronics, paints, and energy, are also continuously growing (Figure 2).

Figure 2: Application insights of nanomaterials in 2020, according to Market Analysis Report (4).

Considering the rapid development of nanofoods, the share of the food industry in the global NM market should rapidly increase.

The different types of nanoparticles

NPs are generally divided into organic, inorganic, and carbon-based NPs. Organic NPs mainly include polymeric nanospheres and nanocapsules, polymeric micelles, liposomes, nanoemulsions (NE), and dendrimers. Inorganic NPs are represented by silica NPs, metal oxide NPs, metal NPs and quantum dots. Carbon-based nanomaterials include fullerenes, graphenes and carbon nanotubes (5) (Figure 3).

Organic NPs

Polymeric NPs

In 1979, Couvreur et al. developed the first polymeric NPs, based on poly-(alkylcyanoacrylates), as a drug delivery system for cancer therapy.
New polymers have since been used, such as poly-(lactic acid) (PLA), poly-(glycolic acid) (PGA) and their copolymer poly-(lactide-co-glycolide) (PLGA). These biocompatible and biodegradable polymers are approved as pharmaceutical inactive ingredients. Their degradation does not lead to the formation of toxic metabolites, and their high stability gives them essential characteristics for applications in the biomedical field. There are two types of polymeric NPs, nanospheres and nanocapsules. Nanospheres are composed of a polymeric matrix in which the drug is entrapped. Nanocapsules are composed of a core containing the drug, surrounded by a polymer shell (Figure 4) (6).

Polymeric micelles

Used as drug delivery systems for cancer therapy, polymeric micelles result from the self-assembly of amphiphilic polymers, usually in aqueous solution (Figure 5). The inner core of these micelles includes the hydrophobic region of the polymer, which contains the drug, while the hydrophilic region provides stability in the aqueous environment. Depending on the route of administration, the stability of the micelles can be affected by environmental changes such as pH and temperature (7).

Liposomes

The use of liposomes is focused on biomedical applications; they can easily be used for precise active targeting. Liposomes are lipid vesicles composed of aqueous cavities enclosed by one or several lipid bilayers. Their diameter can vary from nanometers to microns, depending on their chemical composition and the preparation method. Hydrophilic and lipophilic drugs can be encapsulated within the inner aqueous compartments and the lipid bilayers, respectively (Figure 6) (9). Liposomes are also used for the protection and transport of nutrients. The stability of liposomes used as drug delivery systems can be affected by pH and temperature changes, causing rapid degradation and hence early release of the active compound (10).

Nanoemulsions

NE are lipid drug delivery systems composed of two immiscible liquids, such as water and oil, stabilized by an appropriate surfactant. Many surfactants with various characteristics (ionic or nonionic) are used for their preparation. NE are classified in two categories: oil-in-water (O/W) and water-in-oil (W/O) systems. They can incorporate lipophilic and hydrophilic drugs and enhance their oral bioavailability thanks to their physicochemical characteristics, especially the small droplet size (Figure 7) (11).

Dendrimers

Branched molecules were discovered in 1978 by Fritz Vögtle. The dendrimer structure begins from a central body formed by one or more atoms. From this central structure, branches of other atoms called "dendrons" grow through a variety of chemical reactions (Figure 8). In the biomedical field, dendrimers are used as drug delivery systems. The active molecules can be covalently bonded to the ends of the branches or entrapped in the hydrophobic cavities (13).

Inorganic NPs

Metallic and metallic oxide NPs

The most common metallic NPs are synthesized from aluminum (Al), cobalt (Co), gold (Au), silver (Ag), zinc (Zn), iron (Fe), copper (Cu) and titanium (Ti). Metallic oxides such as iron oxide (Fe2O3), ZnO and TiO2 are also widely used. They are prepared by chemical, photochemical, or biological methods. Conjugations with several chemical compounds, such as enzymes, ligands and drugs, have been performed for biomedical applications (15,16). Metallic nanoparticles have also been developed in other fields, including electronics, cosmetics, and food industries.
TiO2 NPs are commonly used in sunscreens and as thickening and opacifying agents in food products.

Quantum dots

Quantum dots (QDs) are spherical crystals with a size between 2 and 10 nm. They are composed of semiconductor materials (CdS, CdSe, CdTe, ZnS, PbS). Generally, they consist of a semiconductor core, covered with a shell such as ZnS, and a cap that allows a better solubility in aqueous buffers. QDs are used in biomedical domains, especially in cancer therapy, where their selective binding to malignant cells spares normal ones from unwanted side effects (17).

Silica NPs

Silicon dioxide appears as the simplest model of tectosilicates; eight crystallized forms of silica are known today. Their many applications in cosmetics, pharmaceuticals, and food industries nevertheless require a careful evaluation of their toxicity in humans (18). Silica NPs can be synthesized by three different techniques. In the inverse microemulsion method, spherical micelles are formed by the dissolution of surfactant molecules in the presence of water; this method has been successfully used for nanoparticle coating and the attachment of functional groups. The high-temperature decomposition of organometallic precursors is another method for the synthesis of silica nanoparticles; it has been widely used for the commercial synthesis of silica nanoparticles in powder form. The sol-gel method involves the hydrolysis and condensation of tetraethyl orthosilicate (TEOS) or of inorganic salts, such as sodium silicate, in the presence of a mineral acid (19).

Carbon-based nanomaterials

Carbon nanotubes are cylindrical particles with an arrangement of carbon atoms in hybridized forms. Their internal structure is hollow, and the surface consists of one or more layers of graphene sheets. Based on the number of layers, carbon nanotubes are divided into two categories: single-walled carbon nanotubes (SWNTs) and multi-walled carbon nanotubes (MWNTs) (Figure 9). They have a wide range of applications, especially in the biomedical field (20). Unlike carbon nanotubes, the C60 fullerene is characterized by a spherical shape.

Applications of nanoparticles

The NPs currently on the market have large-scale applications in cosmetics, the food industry, medicine, renewable energies, electronics, and other fields (Figure 10).

Cosmetics field

NPs are used in cosmetics to increase the stability, solubility, and efficiency of active compounds. Cosmetic products should not reach the bloodstream but rather act only on the superficial layers of the skin. TiO2 NPs are used as sunscreens: their small size increases the surface area, facilitates spreading, and reduces the whitish appearance compared with microparticles. NPs also improve the quality of make-up and hair dyes (23). Cosmetic NE facilitate skin moisturization and penetration; in addition, they offer a uniform distribution of the product on the skin, and transparency. Kérastase Nutritive is a NE developed for moisturizing dry hair (24). Due to their ability to incorporate both hydrophilic and lipophilic substances, liposomes are essential compounds in some anti-aging products, sunscreens, and moisturizers; marketed products include liposomes carrying ceramides or tanning agents (25). Solid lipid nanoparticles (SLN), characterized by a solid lipid core unlike NE, are used in cosmetics for their stability, their ability to control the release of active compounds, and their capacity to protect them from degradation (26).
Carbon nanotubes, dendrimers, nanospheres and gold nanoparticles are also used in hair coloring products, lotions, creams and hairstyling gels (Kaul et al., Role of Nanotechnology in Cosmeceuticals: A Review of Recent Advances). The small size of NPs confers very interesting properties on cosmetic products, but at the same time it could be considered as a toxicity factor: NPs could be internalized in cutaneous cells or even reach the bloodstream after penetrating the deep cutaneous layers.

Pharmaceutical field

The pharmacokinetic profile and biodistribution of active compounds can be improved by nanomedicines. Depending on their characteristics, NPs can modulate the distribution, metabolism, and elimination of drugs. They allow the transport of lipophilic drugs and can control the release kinetics of active ingredients (28). Due to their small size, and hence their high surface area, the dissolution rate of nanosized drugs can be widely improved (29). NPs also protect the encapsulated active compound from enzymatic and chemical degradation (30). Liposomes are used in drug delivery because of their ability to encapsulate hydrophilic molecules in the internal phase, as well as hydrophobic molecules in the phospholipid bilayers. Many liposome-based products are currently on the market (Caelyx®, Daunoxome®, Myocet®...). Their main advantages compared to conventional forms are to modify the pharmacokinetics and bioavailability of drugs, usually anticancer agents, and to reduce side effects. Polymeric NPs are used for the sustained release of active compounds. Metallic nanoparticles can be effectively internalized in organs and cells; ferumoxytol (Feraheme™) was approved by the FDA for the treatment of iron deficiency anemia in adult patients. Other forms, such as nanoemulsions, dendrimers and inorganic nanoparticles, have shown a real interest for biomedical applications. Even if nanoparticles exhibit numerous advantages as drug delivery systems, their toxicity, including that of the empty form (without active compound), must be evaluated using tools adapted to their physicochemical properties. Their biodistribution in the body and their elimination also need to be fully characterized for safety concerns (31).

Food industry field

The food industry uses nanotechnologies to create functional and preservative ingredients. NM increase the physical stability of food dispersions by reducing sedimentation and creaming. NE and phospholipid vesicles improve the solubility of active ingredients and flavors (32). Metal oxide nanoparticles, including zinc oxide, titanium dioxide (known as E171 in Europe) or silicon dioxide (E551), are added to some pastries as coloring or opacifying agents. NM are also present in packaging to increase the shelf life of food; they are usually embedded in plastic matrices to limit bacterial growth or prevent the permeation of gases or UV rays. Silver, zinc oxide and titanium dioxide nanoparticles possess antibacterial properties (34). Other applications of nanotechnologies have been reported in the food sector: nanoporous materials were developed for water filtration and for removing unwanted flavors or allergens from food products (32). Few studies have focused on the consequences of the consumption of NM-containing food in humans. Due to their small size, NM could accumulate in the intestinal mucus or in Peyer's patches, and a chronic exposure to NPs could alter the integrity of the epithelium and modulate the intestinal permeability (33).
Routes of human exposure to nanoparticles
The effects of NPs on the human body are related to their route of entry into the organism and to their particulate form. NPs can enter the body by ingestion, inhalation, transdermal penetration, or intravascular injection and then be randomly distributed to organs (Figure 11).

Dermal route
Human skin has a surface area of approximately 2 m². It is composed of three distinct layers: the epidermis, which is the outermost layer, the dermis, and the hypodermis. The skin forms an effective barrier against the invasion of pathogens, chemical and physical attacks, and the uncontrolled loss of water and solutes (35). Engineered NPs are used in many skin products as sunscreens, texture agents, colorants, and drug delivery systems. The small size of NPs could promote their interaction with skin cells. NPs could cross the barrier by the paracellular pathway, crossing the stratum corneum and passing through the lipid matrix between keratinocytes or along the hair follicles (Figure 12) (36). However, most NPs usually remain in the epidermis, which represents a very efficient protective barrier.

Oral route
A large part of NPs enters the human body by the oral route. NPs are used as anti-caking agents to obtain a more homogeneous, smoother mixture, but also to improve the assimilation of nutrients. TiO2 nanoparticles are present in most toothpastes (1-10 µg/mg of product). Their use in food products represents a significant part of the oral exposure to NPs in humans (38). Natural and anthropogenic NPs present in the environment also contribute to this exposure. Silica nanoparticles are used as anti-caking agents (food additives E550/551) in most food powders, such as salt. Both in vitro and in vivo studies have reported that some NPs are able to cross the intestinal barrier, depending on their physicochemical characteristics and composition (39). However, the translocation pathway remains unclear: immunohistochemical monitoring is hampered by the nanometric scale, which requires high-resolution imaging instruments. The translocation of ingested NPs can occur by several routes, including the paracellular and transcellular routes (Figure 13) (40). The paracellular passage is classically an exchange pathway for water and electrolytes. Only small hydrophilic molecules cross the intestinal epithelium by this route in healthy individuals (41). However, NPs can increase the intestinal permeability by acting on the tight junctions and thus promote paracellular transport (42). Transcellular transport occurs by endocytosis and transcytosis through enterocytes, and through M cells in Peyer's patches for small NPs with diameters below 100 nm (43).

Pulmonary route
Inhalation is a very common route of exposure to NPs. Once inhaled, NM can either be exhaled or be deposited in the different regions of the respiratory tree. The upper airways include the nasal cavities, mouth, pharynx, and larynx. The tracheo-bronchial tree is composed of the trachea, bronchi, and bronchioles, and leads to the pulmonary alveoli (Figure 14). The deposition of NPs depends on their diameter, their aggregation/agglomeration degree, and their density. Particles with a diameter between 10 and 100 nm are mainly deposited in the deep lung, at the level of the pulmonary alveoli. The penetration of NM through the respiratory tract is greater in impaired lung function due, for example, to chronic bronchitis (45).
Ocular route
The ocular route concerns fine particles in the air but also NPs for therapeutic purposes (Figure 15). They can be administered by periocular or intravitreal injection, or by corneal absorption (Souto et al.).

Parenteral route
The parenteral route is used for the administration of NPs for biomedical applications. NPs can be injected intravenously, subcutaneously, intradermally, or intramuscularly. The intravenous route provides an instantaneous response. It is also suitable for drugs that cannot be absorbed by the gastrointestinal tract or cannot be injected into muscles or other tissues (Parker et al.). Intravenously injected NPs improve the pharmacokinetic profile of drugs by extending their plasma half-life and preserving them from chemical degradation. They can also control the biodistribution by targeting specific tissues and cells. Abraxane ® , an albumin-based NP formulation of paclitaxel, was approved by the FDA in 2006 (Kalepu et al.). A better therapeutic response was observed in women with metastatic breast cancer compared with the standard formulation of paclitaxel (Chenthamara et al.). Distribution of untargeted NPs after intravenous administration was observed in many organs: 60 % of the injected dose of gold NPs was detected in the liver, lungs, spleen, heart, kidneys and brain after intravenous injection (Bednarski et al.). Uncoated NPs with a hydrophobic surface are mainly phagocytosed by macrophages in the liver, spleen, and lungs, especially when their diameter is larger than 5 nm. Smaller NPs are eliminated from the body by renal excretion. Surface modification is an effective way to reduce clearance and enhance cellular uptake for maximum drug accumulation in target sites (Chenthamara et al.).

Regulations of nanomaterials
Articles L523-1 to L523-3 of the French Environmental Code establish an annual declaration system for "substances in nanoparticle state". Since January 1st, 2013, all manufacturers, importers, and distributors of NM must file a declaration in the R-nano register when quantities exceed 100 g per year. It must include the quantity, the physicochemical properties, and the application. The available information regarding health and environmental hazards must also be mentioned ("Nanomatériaux, nanoparticules"). Since January 1st, 2020, the European Commission has required the identification of the size, shape and surface structure of nanomaterials, as well as their dissolution and stability parameters. Regulations govern the use of nanoparticles in various fields, including the food industry, cosmetics, and pharmaceutical applications.
Since 2014, the European INCO regulation (Information du Consommateur, EU 1169/2011) has required the mention "nano" to appear on the labels of nanoparticle-containing foods (Aschberger et al.). In 2011, the European Food Safety Authority (EFSA) published a guide for the risk assessment of nanoscience and nanotechnology applications in the food chain. This guide details the risk assessment procedure in three steps: identification, exposure assessment and risk characterization (55). Titanium dioxide was authorized as a food additive, named E171, by European regulations in foodstuffs, mainly in confectionery, bakery products and sauces. It is also used in other fields such as cosmetics, paints, and medicines. In May 2021, EFSA concluded that titanium dioxide could no longer be considered safe as a food additive, due to its genotoxic potential and its accumulation in the organism ("Dioxyde de titane : le E171 n'est plus considéré comme sûr en tant qu'additif alimentaire"). The EFSA panel published in 2018 a re-evaluation of E551 as a food additive. The panel concluded that the European specifications for this additive are still insufficient and requested a better size characterization of silica particles to identify the presence and proportion of nanosized silica. Moreover, the current acceptable daily intake ("not specified") could not be confirmed due to the limited toxicological data. The Cosmetics Regulation made it mandatory, from July 2013, to report the presence of nanomaterials in the ingredient list of cosmetic products. In February 2022, the European Commission notified the World Trade Organization of a draft amendment to the Cosmetics Regulation. The aim is to prohibit the use in cosmetic products of nanomaterials for which the Scientific Committee on Consumer Safety (SCCS) identified a risk for human health. This is the case for copper NPs. In France, nanoparticles used in cosmetics are subject, like other nanomaterials, to the obligation of declaration in the R-nano register ("Scientific advice on the safety of nanomaterials in cosmetics"). Concerning the use of nanoparticles in the pharmaceutical field, it has been extremely difficult to establish a single regulation, given the diversity of nanomedicines, their chemical nature and their many applications ("Nanotoxicologie et réglementation des nanomédicaments"). In general, there is a lack of studies on the evaluation of nanomedicines. Currently, new nanoparticle-based drugs are evaluated by the Food and Drug Administration (FDA), the European Medicines Agency (EMA) and other agencies using a benefit/risk analysis approach (Desai et al.).

CHAPTER II: NANOTOXICITY: BIBLIOGRAPHIC PART

1. Nanotoxicology
The fundamental prerequisite for understanding the toxicity of NPs is their physicochemical characterization. Their small size, surface charge and composition are critical factors governing their interactions with the biological environment. These bio-nano interactions are essential for understanding the biodistribution and toxic effects of many nanomaterials in both in vitro and in vivo models. Nanomaterials can form "aggregates" or "agglomerates" in contact with biological fluids, modulating the toxicity of the original nanomaterial (1).
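Because toxicity scales with the surface available for bio-nano interactions, a quick back-of-the-envelope calculation helps illustrate why size is such a critical factor. The minimal Python sketch below assumes idealized smooth, dense spheres and an indicative density of about 2.2 g/cm³ for amorphous silica; both assumptions are illustrative, not measured values from this work. It computes the specific surface area, which scales as 6/(ρ·d):

```python
def specific_surface_area(diameter_nm: float, density_g_cm3: float) -> float:
    """Specific surface area (m^2/g) of an ideal dense sphere: SSA = 6 / (rho * d)."""
    d_m = diameter_nm * 1e-9                 # nm -> m
    rho = density_g_cm3 * 1000.0             # g/cm^3 -> kg/m^3
    return 6.0 / (rho * d_m) / 1000.0        # m^2/kg -> m^2/g

# Illustrative values for amorphous silica (assumed density ~2.2 g/cm^3)
for d in (20, 70, 200):
    print(f"{d:>4} nm -> {specific_surface_area(d, 2.2):6.1f} m^2/g")
# 20 nm -> 136.4 m^2/g ; 70 nm -> 39.0 m^2/g ; 200 nm -> 13.6 m^2/g
```

A ten-fold reduction in diameter thus yields a ten-fold larger surface per unit mass, which is one reason why particles of identical composition can show very different levels of cytotoxicity.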
Due to their small size, NPs have the ability to penetrate the human body, cross the various biological barriers and reach the most sensitive organs (2). For particles with the same composition, a change in size leads to different levels of cytotoxicity (3). Small NPs can be internalized in cells, causing toxicity issues (4). Changes in the surface charge also result in dramatic differences in the cell internalization and biodistribution of NPs. NPs with a positively charged surface are known to have the highest toxicity. The chemical composition also influences the toxicity of NPs. In the literature, the most documented toxic NPs are metallic NPs (5).

Toxicological model studies
To study the toxic effects of nanomaterials, cellular and animal models are usually used. The choice of model depends essentially on the study objective and the expected result. According to the literature, rats and mice are the most common species for in vivo toxicity studies. However, pharmaceutical regulations require the inclusion of non-rodent species in safety studies. The number of in vitro toxicity studies has multiplied in recent years owing to the growing development of alternative methods to animal experimentation (5). The cell types used are mainly macrophages, blood cells, hepatocytes, and epithelial and endothelial cells (7). An in vitro/in vivo correlation is necessary to validate the cellular model.

Oral toxicity
Ingested nanoparticles mainly come from drug delivery systems and food products. They can also be released from packaging materials. Another origin is their presence in the air and in drinking water, including natural and anthropogenic NPs. After ingestion, they come into contact with the gastrointestinal tract, especially the intestinal barrier, which offers a large exposure surface. A review of the literature on the intestinal toxicity of ingested NPs was carried out, covering both organic (lipid and polymeric NPs) and inorganic (silica, titanium dioxide, metal and metal oxide NPs) materials (Maroof et al.). Some of them, especially inorganic NPs, are present in food products due to their coloring or texturing properties. Organic NPs, especially lipid and polymeric NPs, are mainly used as drug delivery systems to enhance the solubility, stability, and bioavailability of drug molecules (Dilnawaz et al.). Humans can also ingest natural and incidental anthropogenic NPs present in the environment. Silica and iron NPs are released from volcanic ash clouds, while carbon nanotubes were detected during the combustion of pines (Griffin et al.). After oral administration, NPs reach the Gastrointestinal Tract (GIT) and in particular the intestine, which offers a large exposure surface. Their small dimensions and their high surface area grant them many advantages as food and pharmaceutical ingredients. At the same time, these features promote their interactions with the biological environment, including cells, mucus and bacteria. An alteration of the physical and functional properties of the intestine could reasonably be assumed. However, the fate and toxicity of NPs after oral ingestion have not yet been fully elucidated, due to the growing number of nanomaterials and the lack of toxicological studies. Are they able to modulate the intestinal permeability or reach the systemic circulation after translocation?
Can they affect the microbiota and the epithelial cells, inducing digestive disorders, inflammatory responses, genotoxicity or oxidative stress? The present work confronts the published toxicological data on ingested nanomaterials to identify the critical factors responsible for intestinal nanotoxicity.

II- The multilayer intestinal barrier
The intestine is an important organ which prevents microorganisms and xenobiotics, including NPs, from reaching the systemic circulation (Lundquist et al.; Segata et al.). The intestine can be considered as a barrier composed of multiple layers. The first one is the bacterial microbiota. It is responsible for many functions, such as regulating host immunity, protecting against pathogens and preserving gut integrity (Thursby et al.). It also plays an important role in harvesting energy from food to synthesize vitamins and amino acids. Disruption of the bacterial microbiota can lead to many diseases, such as Inflammatory Bowel Disease (IBD) and metabolic syndrome (Barko et al.). The mucus represents the second layer, which protects the epithelium from mechanical damage. Mucus acts as a physical barrier, hindering pathogens from reaching the epithelium (Pelaseyed et al.). This hydrogel is composed of large glycoproteins, mainly mucins. Mucins are organized in two adjacent layers: the inner one is thin and strongly adherent, in contrast to the outer one. The main mucins secreted in the small intestine by goblet cells are MUC2 and MUC5AC. In the jejunum, each MUC2 binds to three other mucins to form a hexagonal mesh. The synthesis of these mucins is influenced by the intestinal microbiota as well as by luminal substances, including bioactive compounds present in functional food (Damiano et al.; Walsh et al.). The intestinal epithelium represents the third layer. It is formed of a continuous polarized monolayer of interconnected cells bound to a basement membrane by proteins, mainly integrins (Pompili et al.). It regulates solute transport through selective transporters for amino acids, electrolytes, short-chain fatty acids and sugars (Groschwitz et al.). The epithelium is mainly composed of absorptive enterocytes, which cover a large surface and make up 80-90 % of the epithelium, and of goblet cells (8 %).
The apical membrane of the enterocytes is covered by microvilli that increase the intestine's surface area (Faust et al., 2014b; Lundquist et al.). Paneth cells, enteroendocrine cells, Microfold cells (M cells) and stem cells are also present in the intestinal epithelium, but in smaller proportions (Gehart et al.). M cells, located in the Peyer's patches, are responsible for the transport of antigens from the lumen of the small intestine to the lymphoid follicles. They are involved in the mucosal immune response (Corr et al.). Intestinal permeability is governed by two major pathways: the paracellular and transcellular routes. The intercellular space is regulated by proteins (Salvo Romero et al.). They are involved in different types of junctions, such as Tight Junctions (TJs), anchoring junctions and communicating junctions. TJ proteins (such as occludins and claudins), Junctional Adhesion Molecules and tricellulin regulate the paracellular trafficking of molecules. They are arranged as multiprotein complexes that form a selectively permeable seal between adjacent epithelial cells (Lee et al.). The expression of claudins 1, 4 and 5 reinforces the TJ seal, while claudin-2 depletion enhances the barrier function of the intestine (Suzuki et al.; Weber et al.). TJs are associated with Zonula Occludens proteins (ZO-1, ZO-2 and ZO-3), which are bound to F-actin (Rodgers et al.). Thus, ZO proteins anchor TJs to the actin cytoskeleton. Loss of interaction between TJs and the cytoskeleton through actin depolymerization can lead to a drastic increase in intestinal paracellular permeability (Shen et al.). Occludin phosphorylation by protein kinase C (PKC) can also weaken the interaction with actin, leading to dissociation from the junctional complex (Lee et al.). The inflammatory response, involving the activation of the subepithelial immune system with dendritic cells, neutrophils, and macrophages, can affect TJ integrity. Interleukin 8 (IL-8), a major neutrophil-recruiting chemokine, induces actin rearrangement, thus increasing TJ permeability (Gerloff et al.; Talavera et al.).
Other cytokines, such as TNF-α, IL-1β, IL-6, IL-13 and IL-17, can affect the actin arrangement and increase claudin-2 expression through the MEK/ERK and PI3K pathways (Mankertz et al.; Nighot et al.; Shen et al.; Suzuki et al.; Talavera et al.). On the contrary, anti-inflammatory cytokines secreted by regulatory T cells, such as IL-10 and TGF-β, increase TJ expression through MEK/ERK signalling and therefore reduce the intestinal permeability (Sun et al.). The anchoring junctions connect the cytoskeleton of each cell to that of neighbouring cells or to the extracellular matrix. They include adherens junctions and desmosomes and play an important role in various cell processes such as differentiation, proliferation, and morphogenesis. The communicating junctions link the cytoplasm of adjacent cells by forming channels of connexins through their membranes. They also play a crucial role in the growth and development of epithelial cells, as well as in the maintenance of the barrier function (Salvo Romero et al.). The transcellular pathway consists of transporting molecules across the enterocytes by different mechanisms. These include energy-dependent selective transporters (active transport), passive or facilitated diffusion, and receptor- or absorptive-mediated endocytosis. The type of mechanism involved depends on the size and physicochemical properties of the molecules (Turner et al.). Passive transport enables the diffusion of small lipophilic molecules through the enterocyte membrane. The solute carriers, termed SLC, carry drugs down their concentration gradient without consuming adenosine triphosphate (ATP) (Liu and Liu, 2013; Oostendorp et al.). Transporters of the ATP-binding cassette family, such as P-glycoprotein (P-gp), are involved in efflux mechanisms. They prevent the systemic distribution of toxins and some xenobiotics located in the intestinal lumen (Fu et al.). Endocytosis is a vesicle-mediated transcellular transport.
Various endocytic processes have been reported, such as pinocytosis, clathrin-mediated endocytosis, caveolae-mediated endocytosis, and clathrin- and caveolae-independent endocytosis (Panariti et al.; Talkar et al.; Zhang et al.).

III- Ingested nanoparticles

1. Nanoparticle description
NPs administered by the oral route are used as drug delivery systems, imaging agents and food additives. Much research has focused on innovative biomedical applications of NPs, especially for the diagnosis and treatment of colorectal cancer and IBD (Pridgen et al.; Tyagi et al.; Abbas et al.). Moreover, the oral route of drug administration offers many advantages, such as convenience and medication compliance, especially in the treatment of chronic or long-term diseases that require frequent administration (Pridgen et al.). Active pharmaceutical ingredients and imaging agents entrapped in NPs are protected from chemical and enzymatic degradation (Araújo et al.; Ojer et al.). Insulin and calcitonin were protected against proteases after encapsulation in polymeric NPs (Lowe et al.). Encapsulation improves the solubility and the systemic absorption of drugs (Tan et al.). A 7.3-fold increase in curcumin bioavailability was observed with nanoemulsions of ethyl oleate (Wan et al.). The E551 food additive, composed of amorphous silica NPs, is commonly used as an anti-caking or thickening agent (Fruijtier-Pölloth, 2016). Titanium dioxide (TiO2) NPs contained in the E171 additive are widely present in food products due to their colouring and opacifying properties (Younes et al.). Inorganic NPs were also used in packaging materials to protect food products. Containers were designed with antimicrobial titanium, zinc (Zn) or other inorganic NPs to prevent the proliferation of bacteria (Carrillo-Inungaray et al.). However, their release into food products has been reported (Huang et al.). Humans are also exposed to natural nanomaterials.
Volcanic activity, rock weathering and forest fires produce inorganic NPs such as silica and iron NPs, as well as carbon NPs (Barhoum et al., 2022). Anthropogenic NPs, such as carbonaceous NPs, are incidentally released into the environment during industrial processes and the combustion of fuels (D'Anna, 2009).

Fate of ingested NPs
After ingestion, NPs transit through the GIT. The GIT has a pH gradient varying from highly acidic in the stomach to slightly basic, with a pH of 7.5, in the colon (Kararli et al.). During the digestion process, NPs undergo numerous chemical modifications such as oxidation, deamidation and hydrolysis (Sood et al.). In addition, gut enzymes such as proteases, nucleases and lipases present in the GIT are also involved in the degradation of NPs (Ganapathy et al.). The feeding status and meal content also need to be considered (Lundquist et al.). Exposure of NPs to proteins, carbohydrates and lipids from food products in the gastrointestinal fluid can therefore also modify their physicochemical properties. Changes in surface properties due to adsorption at the surface of NPs, as well as aggregation/agglomeration or dissolution processes, have been reported (Wang et al.; Shi et al.; Mbanga et al.). The aggregation state, surface charge and morphology of polyvinylpyrrolidone-coated silver (Ag-PVP) NPs were modified in contact with simulated gastrointestinal fluids. The acidic pH of simulated human gastric fluid in the fasted state, termed FaSSGF, induced the release of silver (Ag) ions, subsequently forming an AgCl precipitate (Jiang et al.). Agglomerates of TiO2 NPs were observed in simulated gastric fluid compared to water (Jones et al.). Gastric digestion induced the clustering of silica (SiO2) NPs due to the acidic pH and high electrolyte concentrations. However, single NPs re-formed in the intestinal fluid, suggesting a pH-dependent agglomeration state (Peters et al.). The interaction of NPs with the mucus layer is mainly modulated by their physicochemical properties, especially size, surface charge and surface chemistry (Figure 1) (Lamprecht et al.).
The mucus mesh size of approximately 100 nm allows only the diffusion of small particles (Fröhlich et al.; Olmsted et al.). For example, Ag particles with a size of 200 nm were largely entrapped in the mucus layer of a TC7/Caco-2 co-culture model, unlike 20 nm NPs (Georgantzopoulou et al., 2016a). Similarly, SiO2 NPs with hydrodynamic diameters of 20 and 30 nm were able to reach the HT29-MTX goblet cells, while NPs of 70 and 200 nm were mainly trapped in the mucus layer (Zaiter et al.). Mucins facilitate mucoadhesion through electrostatic interactions with positively charged NPs (Fröhlich et al.). Moreover, interactions with mucins can occur by hydrogen bonding, Van der Waals interactions, hydrophobic forces or polymer chain interpenetration (Ojer et al.; Pridgen et al.; Talkar et al.; Woodley et al.). Some NPs, such as polyethylene glycol (PEG)-coated NPs, display mucus-penetrating properties (Fröhlich et al.; Lai et al.; Pridgen et al.). PEG confers hydrophilic properties and a neutral charge to NPs, reducing their interactions with mucins. However, long PEG chains limit mucus penetration due to steric hindrance (Lundquist et al.). Mucus-penetrating NPs reach the epithelial cell surface. Different basic mechanisms of particle internalization in cells have been reported. Macropinocytosis is non-specific and induces the formation of vesicles of about 1 µm in diameter. It allows the internalization of large NPs or agglomerates in cells (Sahay et al.). Clathrin-mediated and caveolae-mediated endocytosis form vesicles at specific regions of the membrane, with a diameter of less than 0.1 µm. They are the main cellular internalization pathways of nanomaterials such as PEG-polylactic acid (PLA), poly(lactic-co-glycolic) acid (PLGA), SiO2, chitosan and gold (Au) NPs (Sahay et al.). Clathrin-independent and caveolae-independent endocytosis induce the formation of small vesicles and occur continuously in the cell. They are also involved in the cell uptake of polymeric NPs such as PLGA NPs (Palocci et al.).
The caveolae/lipid raft pathway is the main internalization route of nanoemulsions (Fan et al.). The physicochemical properties of NPs, in particular their size and surface charge, are the most important factors involved in cell uptake (Sahin et al.). A size range between 10 and 60 nm is considered optimal. As for the charge, positively charged NPs have been reported to bind electrostatically to the negatively charged plasma membrane and are then endocytosed (Sabourian et al.). However, they are not able to cross the mucus layer to a significant extent. The interaction of negatively charged NPs with the cell membrane should be low compared with cationic and neutral particles. However, a significant internalization of carboxymethyl dextran-coated NPs with a surface charge of -50 mV was observed in Caco-2 cells (Ayala et al.).

Figure 1: Schematic representation of the main factors affecting the intestinal distribution of NPs

Very few studies have investigated the transport of NPs across the intestinal barrier. After oral administration, SiO2 NPs were found in the kidneys and liver, suggesting their ability to cross the intestinal barrier (Lee et al.). A translocation of TiO2 NPs through the ileum and Peyer's patches was reported (Brun et al.). Oral administration of 75 nm TiO2 NPs in young rats for 30 days also caused hepatic damage (Wang et al.). This is in accordance with the elimination pathway of circulating NPs, which involves Kupffer cells (Moghimi et al.). However, the mechanism involved in the transepithelial transport of NPs remains unclear, even if transcytosis across the Peyer's patches seems the most likely route (Jani et al., 1992).

IV. Nanotoxicology
The main toxicity endpoints reported in the literature include cytotoxicity, genotoxicity, and inflammatory responses in intestinal cells, using both in vitro and in vivo models. Inorganic NPs were mainly responsible for intestinal damage (Table 1), while very few studies clearly demonstrated a toxicity of organic NPs (Table 2). Their influence on the microbiota and on the protective mucus barrier has been scarcely investigated, despite their critical role in intestinal homeostasis.

Microbiota and mucus interactions
The influence of NPs on GIT bacteria is mainly controlled by electrostatic interactions, which depend on the pH and the composition of the gastrointestinal fluids (Gangadoo et al.).
Their effect is mediated through various mechanisms, such as Reactive Oxygen Species (ROS) production, release of cationic ions and disruption of the cell membrane (Figure 2). Cationic Au NPs were more toxic to both Gram-positive and Gram-negative bacteria than anionic NPs (Vivian Feng et al.). The toxicity was explained by an accumulation at the surface of the bacteria, likely causing membrane destabilization. Modifications of the microbiota after oral exposure to TiO2 NPs were also reported. A significant increase in Lactobacillus reuteri was observed after a 90-day oral administration (Chen et al.). This bacterium is beneficial for human health due to its ability to produce metabolic molecules and to prevent the migration and expansion of opportunistic pathogens (Mu et al., 2018). In contrast, chronic ingestion of rutile TiO2 NP-containing foods reduced the proliferation of Bifidobacterium and facilitated the invasion of opportunistic pathogens such as Escherichia-Shigella (Li et al.). Toxicity of Ag NPs towards commensal bacteria in the human GIT was also identified, owing to their antibacterial properties (Mercier-Bonin et al.). Two mechanisms were hypothesized: an accumulation of Ag NPs in the bacterial cell membrane, or the release of Ag cations inducing ROS production (Ladaycia et al.). SiO2 NPs in the GIT increased the abundance of the Lactobacillus genus after oral ingestion. A complexation of SiO2 NPs with bacteria was reported, reducing their cytotoxicity towards human epithelial cells (Siemer et al.). The interactions of inorganic NPs with the microbiota seem to depend on the bacterial type, especially the composition of the cell membrane, and on the features of the NPs (Vivian Feng et al.). Metallic NPs are well known for their antibacterial activity, while SiO2 and TiO2 inhibit or induce the proliferation of commensal bacteria, depending on the strain (Slavin et al.). Gut microbiota dysbiosis was also observed in mice after oral exposure to polystyrene (PS) NPs with a size of 500 nm for 5 weeks. The populations of Firmicutes and α-Proteobacteria were significantly affected at a NP concentration of 1 mg/L (Lu et al.).
Multi-walled carbon nanotubes (MWCNTs) administered by intratracheal instillation to doxorubicin-treated mice affected the gut microbiota through the proliferation of Helicobacteraceae and Coriobacteriaceae (Liu et al.). Changes in the composition of neutral and acidic mucins were also observed (Mercier-Bonin et al.). Despite the limited number of studies, the NPs tested induced a modification of the biochemical composition of mucus. This could weaken the protective effect of mucus; further investigations are needed to confirm this hypothesis.

Intestinal cell damages
NPs in contact with intestinal epithelial cells can damage the brush border of enterocytes and thus their absorptive properties. Scanning Electron Microscopy (SEM) revealed an alteration of the microvilli structure of Caco-2 cells after exposure to TiO2 NPs at a concentration of 10 µg/ml (Koeneman et al.). This finding was supported by other studies that showed a 42 % loss of microvilli (Faust et al., 2014a) and brush border disruption by TiO2 NPs (Faust et al., 2014b). The cytotoxicity of NPs contained in food products towards intestinal cells has been widely reported (Fröhlich et al.; Cornu et al.). After 24 and 48 h of exposure to ZnO NPs, a decrease in the viability of human colon cells was observed. The viability loss was concomitant with an increase in ROS content, suggesting an oxidative stress-related mechanism. Direct interaction of ZnO NPs with the cell surface could trigger intracellular signal activation, altering mitochondrial and/or endoplasmic reticulum functions (De Berardis et al.). Similarly, TiO2 NPs injected into the abdominal cavity of mice for 14 days significantly increased lipid peroxidation, decreased glutathione (GSH) levels and altered antioxidant enzyme activities in a dose-dependent manner (Ma et al.). A viability drop was also seen after 24 h of exposure of human colorectal adenocarcinoma HT29 cells to metallic and metal oxide NPs (Schneider et al., 2017a). Morphological changes of the cells and granular inclusions suggest an apoptotic process. Early and late apoptotic cells were observed by flow cytometry after contact with most of the NPs, except for Au and cerium oxide NPs (Schneider et al., 2017a). The digestion process of NPs, involving enzymes and acidic conditions, has been considered in viability assays. The cytotoxicity of copper oxide NPs increased by 30 % compared to native NPs when pre-treated with simulated gastrointestinal fluids.
This was explained by a decrease in NP diameter after incubation with pepsin (Henson et al.). The same effect was observed with digested TiO2 NPs after exposure of mucin-producing cells, termed HT29-MTX-E12: a cytotoxicity increase was observed compared to undigested TiO2 NPs, due to a reduction of the agglomeration state (Bettencourt et al.). Unlike inorganic NPs, neutral or negatively charged polymeric NPs, including PLGA NPs, did not damage epithelial cells under conditions of use. Their biodegradability could reduce the exposure time of cellular compounds and thus limit their toxicity. Cytotoxicity was only observed with cationic polymeric NPs in Caco-2 cell models. Chitosan NPs with a size of 25 nm reduced the mitochondrial dehydrogenase activity at pH 6, while no effect was observed at pH 7.4 (Loh et al.). This was explained by the positive charge of chitosan at acidic pH. The toxicity of poly(amidoamine) dendrimers, termed PAMAM, towards Caco-2 cells was dependent on the generation number. While lactate dehydrogenase release was observed with generations (G) 3 and 4, no toxic effect was reported with G0-G2 (El-Sayed et al., 2002). This result suggests a synergistic effect of size and positive surface charge on the cytotoxicity of dendrimers. However, the lack of a mucus layer in the Caco-2 model does not allow an extrapolation of the results to in vivo conditions, especially in the case of positively charged NPs.

Genotoxicity
Nanomaterials can lead to DNA damage and mutagenic events (García-Rodríguez et al., 2018a; Asharani et al.). Guidelines for the genotoxicity evaluation of manufactured nanomaterials were established by the OECD (OECD, 2014). The Ames test was excluded for nanogenotoxicity investigation, while the in vitro micronucleus assay was adopted, with the requirement of an exposure period without cytochalasin B. Another recommendation is the need to conduct a toxicokinetic investigation prior to the genotoxicity study, in order to ensure that the nanomaterials will reach the target cells/tissues. Ag NPs led to DNA double-strand breaks in Caco-2 cells: an increase in the percentage of γH2AX-positive cells was observed in a dose- and time-dependent manner (Gillois et al.). A Caco-2/HT29 co-culture was exposed to TiO2 nanospheres (NS), nanorods (NR) and nanowires (NW) for 48 h (García-Rodríguez et al., 2018b). NS, NR and NW were characterized by electron microscopy, with sizes of 70-80, 40-70 and 8-14 nm, respectively. TiO2 NS did not modulate the gene expression of ZO-1, while it was downregulated in the presence of TiO2 NR at 50 µg/ml and upregulated with NW at 150 µg/ml. Only NR induced genotoxic damage, as evaluated by the comet assay after 48 h of exposure, while no oxidative DNA damage was reported. After oral gavage of E171 food-grade TiO2 in rats, titanium accumulation was detected in the nucleus of cells located in the Peyer's patches.
However, no DNA damage, including DNA strand break formation and oxidative DNA damage, was reported after 7 days of treatment (Bettini et al.). Preneoplastic lesions were nevertheless detected in the colon after a long-term oral exposure of 100 days in rats.

Intestinal permeability modulation
NPs can modulate the intestinal permeability, especially paracellular transport, through their toxicity towards epithelial cells or by interacting with the TJ network. Intestinal integrity can be assessed by measuring the TransEpithelial Electrical Resistance (TEER) in intestinal barrier models and the paracellular flux of molecular markers such as lucifer yellow, fluorescein isothiocyanate-dextran and mannitol (Graziani et al.). An increase in the intestinal paracellular permeability could facilitate the transport of pathogenic substances into the blood compartment or modify the bioavailability of drugs and nutrients. A disruption of the intestinal barrier would facilitate bacterial translocation, causing severe sequelae such as sepsis. Ag NPs increased the permeability of a monolayer of T84 human colorectal carcinoma cells in a dose- and size-dependent manner. A downregulation of cytoplasmic actin, which plays a major role in the maintenance of epithelial integrity and the regulation of TJs, was observed (Baranwal et al.; Ku et al.; Georgantzopoulou et al., 2016a). SiO2 NPs smaller than 200 nm caused a reversible relaxation of TJs in Caco-2 monolayers due to the activation of Myosin Light Chain Kinase (MLCK). Activated MLCK phosphorylates the myosin part of the cytoskeleton, inducing TJ contraction (Lamson et al.). A reversible TEER drop and a paracellular permeability increase were also observed in a Caco-2/HT29-MTX intestinal barrier model after exposure to amorphous 20 and 30 nm SiO2 NPs (Cornu et al., 2020). In the same study, no significant modulation of the permeability was observed with 200 nm SiO2 NPs, due to the lack of cell internalization. In addition, no downregulation of TJ proteins such as claudin-2 and ZO-1 was observed, suggesting a TJ rearrangement after cytoskeleton disruption. The opposite effect was observed with ZnO NPs in a Caco-2/HT29-MTX co-culture model: after 24 h of incubation, a significant TEER increase was observed (Mittag et al.). Zn increased the expression of ZO-1 in a Caco-2 model through the activation of the PI3K/AKT/mTOR signaling pathway (Shao et al.). Oral exposure to Single-Walled Carbon Nanotubes (SWCNTs) at a daily dose of 2.5 mg/kg for 7 days increased the intestinal permeability in mice (Chen et al., 2018).
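Since TEER and marker fluxes recur throughout this section, a short numerical sketch may help make these endpoints concrete. In the Python snippet below, all numerical values are illustrative assumptions rather than data from the studies cited above; the 1.12 cm² insert area is merely a typical value for a 12-well Transwell ® insert. It shows the two standard calculations: unit-area TEER, and the apparent permeability coefficient Papp = (dQ/dt)/(A·C0), whose basolateral-to-apical over apical-to-basolateral ratio gives the efflux ratio discussed below:

```python
def teer_ohm_cm2(r_total_ohm: float, r_blank_ohm: float, area_cm2: float) -> float:
    """Unit-area TEER: subtract the blank (cell-free) insert resistance,
    then multiply by the membrane area."""
    return (r_total_ohm - r_blank_ohm) * area_cm2

def papp_cm_s(dq_dt_ug_s: float, area_cm2: float, c0_ug_ml: float) -> float:
    """Apparent permeability Papp = (dQ/dt) / (A * C0).
    1 ml = 1 cm^3, so ug/ml is directly ug/cm^3."""
    return dq_dt_ug_s / (area_cm2 * c0_ug_ml)

# Illustrative (assumed) numbers for a 12-well insert of 1.12 cm^2
print(teer_ohm_cm2(450.0, 120.0, 1.12))         # 369.6 Ohm.cm^2
papp_ab = papp_cm_s(1.12e-4, 1.12, 100.0)       # 1.0e-6 cm/s, apical -> basolateral
papp_ba = papp_cm_s(1.90e-4, 1.12, 100.0)       # basolateral -> apical
print(f"efflux ratio = {papp_ba / papp_ab:.2f}")  # >1 suggests active efflux (e.g. P-gp)
```

In practice, a drop in unit-area TEER or a rise in the Papp of a paracellular marker after NP exposure is read as a loss of barrier integrity, as in the studies above.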
Intestinal permeability modulation by organic nanomaterials is usually attributed to the additives contained in the formulation rather than to the nanoparticle structure itself. The manufacturing of lipid drug delivery systems such as nanoemulsions and solid lipid NPs requires surfactants to decrease the interfacial tension between the immiscible phases and to guarantee long-term storage (Chen et al.). Due to their amphiphilic properties, surfactants insert into the membrane of epithelial cells, increasing its fluidity (Danielsen et al.; Dimitrijevic et al.). Direct interaction of surfactants with TJs has also been reported: structural separation of TJs was observed in a Caco-2 model in the presence of sucrose monoester fatty acids (Mine et al.). Modulation of efflux pumps was also observed in the presence of surfactants (Nieto Montesinos et al.). Nanoemulsions increased the apical-to-basolateral flux of paeonol across a Caco-2 monolayer by a factor of 1.7 compared with the solution, and decreased the efflux ratio in the same range. This effect was attributed to P-gp inhibition by the Cremophor EL35 surfactant (S. Chen et al., 2018). Amphiphilic polymers used in the preparation of micelles and NPs, such as Pluronic block copolymers, were also able to inhibit the multidrug resistance mechanism in the intestinal epithelium (Fang et al.; Kabanov et al.). Mixed micelles of Pluronic P85 and F68 reversed the efflux activity of the intestinal multidrug resistance protein 2 by affecting the structure and function of mitochondria (Chen et al.). Pluronic P85 was able to inhibit P-gp drug efflux through ATP depletion associated with membrane fluidization (Batrakova et al.).

Inflammatory response
Inflammatory reactions after oral administration of NPs have been widely reported in the literature (Martirosyan et al.). Ag NPs with a size of 20 nm, exposed to Caco-2/TC7 cells at a concentration of 30 mg/L, increased the expression of the pro-inflammatory cytokine IL-8, while no effect was observed with 200 nm NPs (Georgantzopoulou et al., 2016a; Őrfi et al.), suggesting a size effect.
This finding was confirmed by an in vivo study on female rats, which showed a pro-inflammatory response in the intestinal mucosa after oral gavage of Ag NPs for 13 days (Orr et al.). However, no size effect was observed with TiO2 NPs after a 10-day gavage in mice: both 66 nm NPs and 260 nm microparticles induced, in the same manner, the secretion of pro-inflammatory cytokines such as TNF-α and IFN-γ (Nogueira et al., 2012). The differences observed between nanomaterials could be explained by their composition. The small size of Ag NPs increases their surface area, facilitating the release of Ag ions responsible for the inflammatory response. By contrast, TiO2 NPs directly interact with immune cells, especially macrophages, which are able to phagocytose both small and large NPs (Becker et al.; Huang et al.). The crystal structure of TiO2 NPs also needs to be considered. Unlike rutile NPs, anatase TiO2 NPs at a concentration of 50 µg/ml increased the expression of IL-8 and IL-1β in Caco-2 cells and THP-1-derived macrophages, respectively (Tada-Oikawa et al.). Concomitant production of IL-8, H2O2/OH•, and intracellular GSH by LoVo human colon cells was observed after 24 h of exposure to ZnO NPs (De Berardis et al.). The inflammatory response in this case was correlated with oxidative stress. Another indirect mechanism, involving the microbiota, was reported. Oral administration of single-walled carbon nanotubes at 2.5 mg/kg per day for 1 week induced the proliferation of pro-inflammatory bacteria, namely Alistipes uncultured bacterium and Lachnospiraceae bacterium A4. They were responsible for inflammatory cell infiltrations and the production of IL-1β, IL-6 and TNF-α in the duodenum and colon (H. Chen et al., 2018). The general lymphocyte population was characterized by flow cytometry after oral ingestion of SiO2 NPs in rats for 3 months. Reductions of 33 % and 13 % in the numbers of leukocytes and T helper cells were observed, and the CD4/CD8 ratio decreased by 27 %. This change in the immune cell populations is characteristic of an immunosuppression mechanism (Shumakova et al.). The immune response in the gut was affected by Ag NPs after a 13-day exposure: a decrease in the expression of immunomodulatory genes such as FOXP3, GPR43, TLR2 and TLR4 was identified (Williams et al.).
The E171 TiO2 additive reduced Treg cell activity after 100-day oral exposure in rats, while TiO2 nanotubes affected the immune response by inhibiting the MAPK and NF-κB inflammatory signalling pathways (Bettini et al.; Neacsu et al.). Zebrafish exposed to 0.5 µm PS NPs for 14 days showed increased mRNA and protein levels of IL-1α, IL-1β and interferons in the gut (Jin et al.). Intratracheal instillation of MWCNTs in mice increased the M1-like polarization of macrophages in the colon. This effect, associated with a gut microbiota dysbiosis, exacerbated the cardiotoxicity of doxorubicin (Liu et al.). This combined toxicity of the loaded active ingredient and the MWCNTs emphasizes the importance of considering the intrinsic toxicity of nanomaterials, especially when they are used as drug delivery systems. Harmonized guidelines should be established for a better prediction of intestinal nanotoxicity. For example, the presence of a mucus layer in in vitro models would prevent an overestimation of the nanotoxicity. Toxicokinetic investigations and consideration of the digestion process would guarantee the relevancy of toxicological endpoints.

IV- Conclusion

Dermal toxicity

Nanoparticles are used in cosmetic and dermatology products. The environment is also a source of human exposure to nanoparticles. Their toxicity after contact with the skin is still questioned, mainly due to the lack of relevant in vitro models accurately mimicking the skin barrier. While nanotoxicity by direct exposure to cutaneous cells was widely demonstrated, their ability to reach the viable epidermis remains unclear. The present review focused on the in vitro models and the toxicological effects of nanoparticles in contact with the skin barrier.

CHAPTER III: Nanotoxicity: Experimental Part

1. SILICATE NANOPARTICLES

Contexts and objectives

Silica particles are widely used in food products to prevent powder agglomeration. Despite their presence on the market for a long time, doubts regarding their oral toxicity remain, as mentioned by the EFSA in its report on the re-evaluation of the food additive E551. This could be explained by insufficient toxicological data but also by the variability in the size distribution of additives between manufacturers. No specifications have yet been established, allowing the presence of nano-sized silica. Due to their small size, these particles could interact with intestinal cells and then alter the intestinal barrier. Thus, an investigation of the intestinal toxicity of silica particles was performed using in vitro models of the intestinal barrier. The influence of mucus, of the digestion process and of particle size was evaluated using the E551 additive and engineered silica NPs.
Methodology

The effects of silica NPs and of native and digested E551 additives on the intestinal barrier were evaluated in vitro on Caco-2 and HT29-MTX cell lines, in mono- and coculture, cultivated in Transwell®. This support mimics the in vivo configuration of the intestine. It is composed of an apical compartment that represents the intestinal lumen and a basolateral compartment corresponding to the blood vessels. These two compartments are separated by a membrane on which the cells forming the intestinal barrier are seeded. The Transwell® is used to study the intestinal permeability of drugs or the passage of toxins (Figure 16). The E551 additive was incubated in simulated gastric and intestinal fluids to accurately mimic gastrointestinal digestion before exposure. The engineered silica NPs and the native and digested E551 were then incubated with the cells for 7 days. Cytotoxicity, ROS production, cytokine expression and transepithelial electrical resistance modulation were investigated.

Figure 16: Schematic representation of a Transwell®.

Main Findings/Conclusions

This study showed that:
- Digestion did not affect the physicochemical properties of the E551 additive.
- The mucus layer produced by a single culture of HT29-MTX acted as an effective protective barrier against both small and large silica particles.
- Small nanoparticles affected the transepithelial electrical resistance of the Caco-2/HT29-MTX co-culture despite the presence of a mucus layer.
- 70 nm was considered a threshold diameter above which silica particles do not cause intestinal toxicity.

2. NANOEMULSIONS

Methodology

In our study, the effect of the process temperature on the nanoemulsification of medium and long chain triglycerides (MCT and LCT) was investigated. The impact on the size and the stability of NEs of the lipid composition, the molecular weight and the viscosity of triglycerides, as well as of the surfactant-to-oil ratio, was also studied.

Main Findings/Conclusions

This study showed that:
- Heating at 90°C was required to obtain nanodroplets of LCT, while nanoemulsions of MCT were obtained at both 90°C and 37°C.
- Due to a low molar volume, the NEs of glyceryl trioctanoate (GT) were destabilized by Ostwald ripening, unlike lipid mixtures of GT with glyceryl tridecanoate (Labrafac®) or LCT.
- Mixtures of LCT and MCT allowed designing stable and fine NEs at 37°C.
- The surfactant-to-oil ratio influences the size of NEs.

Scientific article

Cytotoxicity of NEs

NE preparation

NEs were formulated by the PIT method. The non-ionic surfactant (Kolliphor® HS15) was provided by Sigma-Aldrich (Saint Quentin Fallavier, France). Labrafac® WL 1349, a mixture of glyceryl trioctanoate (56%) and glyceryl tridecanoate (43%), was obtained from Gattefossé (Saint Priest, France). Oil and Kolliphor® were added to a 20 ml vial. Approximately 15 ml of purified water were added to another 20 ml vial. Both vials, containing the water and the oil mixture, were closed using a septum and heated in a water bath at 90°C for 15 minutes. After heating, 6.2 ml of water was transferred into the lipid phase and mixed under magnetic stirring for 5 minutes at 90°C. The formulation was then removed from the water bath and magnetically stirred at room temperature. Samples were sterilized by filtration across a 0.22 µm membrane and transferred into 2 ml sterile vials under a laminar flow hood. Vials were closed with an elastomer cap and crimped with an aluminium seal. Samples were stored at 4°C. The quantities of Kolliphor® and Labrafac® were adjusted according to the desired size (Table 1).
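As a worked illustration of how the surfactant-to-oil ratio fixes the batch composition, a minimal Python sketch is given below; the helper functions are hypothetical and simply invert SOR = m(Kolliphor)/m(Labrafac) for the fixed dispersed-phase mass of 1.26 g implied by Table 1. The Kolliphor mass fraction SOR/(1 + SOR) computed here is also the conversion used further down when doses are re-expressed as Kolliphor concentrations.

```python
# Minimal sketch (hypothetical helpers): recover the Table 1 batch masses from
# the target surfactant-to-oil ratio, SOR = m_kolliphor / m_labrafac.

def batch_masses(sor: float, total_g: float = 1.26):
    """Return (kolliphor_g, labrafac_g) for a given SOR and total dispersed mass."""
    labrafac = total_g / (1.0 + sor)
    return total_g - labrafac, labrafac

def kolliphor_fraction(sor: float) -> float:
    """Mass fraction of Kolliphor in the dispersed phase: SOR / (1 + SOR)."""
    return sor / (1.0 + sor)

for name, sor in [("NE20", 3.5), ("NE30", 2.0), ("NE70", 1.0), ("NE200", 0.5)]:
    k, o = batch_masses(sor)
    print(f"{name}: Kolliphor {k:.2f} g, Labrafac {o:.2f} g "
          f"(Kolliphor fraction {kolliphor_fraction(sor):.2f})")

# NE20 carries ~2.3-fold more Kolliphor per gram of NE than NE200:
print(kolliphor_fraction(3.5) / kolliphor_fraction(0.5))  # ~2.33
```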
Hydrodynamic diameters, polydispersity indexes (PDI) and zeta potentials of all the NEs were measured by dynamic light scattering and laser Doppler electrophoresis using a Zetasizer® Nano ZS90 (Malvern, UK). The stability of the nanoemulsions during the incubation times with cells was assessed by measuring the size over 2 hours at 37°C. Three independent measurements were performed for each time point. As demonstrated in the previous study, the surfactant-to-oil ratio (SOR) controls the final size of NEs (Table 2). Immediately after preparation, the hydrodynamic diameters were 21.8 nm and 227.6 nm for SORs of 3.5 and 0.5, respectively. The polydispersity index was below 0.1, suggesting a monomodal distribution, except for NE200 with a value of 0.32. No size variation was observed during the 2 h incubation at 37°C, demonstrating the stability of the NEs.

Cytotoxicity study of NEs on intestinal cells

NEs were incubated with Caco-2 and HT29-MTX cells at different concentrations for 2 hours. An MTS assay was then performed to evaluate cell viability. The relationship between cell viability and Kolliphor® concentration is shown in Figure 17 to account for the well-known intrinsic toxicity of the surfactant. At 0.01 g Kolliphor®/ml, the viability of the Caco-2 cell line was 76% and 44% after exposure to NE20 and NE200, respectively (Figure 17A). This suggests that the cytotoxicity of NEs was not directly correlated to the Kolliphor® concentration. The same finding was observed with HT29-MTX (Figure 17B): a viability drop was observed from Kolliphor® concentrations of 0.086 and 0.025 g/ml for NE20 and NE200, respectively. For both cell lines, at the same Kolliphor® concentration, NE200 seemed much more cytotoxic than NE20.

Figure 17: Influence of the concentration in Kolliphor® on the viability of Caco-2 (A) and HT29-MTX (B).

In Figure 18, the viability results were expressed as a function of the NE concentration (oil and Kolliphor®). For all the NEs, a viability drop was observed from concentrations of 0.05 and 0.11 g/ml for Caco-2 and HT29-MTX, respectively. HT29-MTX cells were less sensitive to NEs than Caco-2, likely due to the protective mucus barrier. Unlike the previous results, no size influence was observed between the different NEs: cytotoxicity was only dependent on the NE concentration, which includes both the oil and the Kolliphor® amounts. NE200 were prepared with 2.3-fold less Kolliphor® than NE20; thus, a high amount of NE200 was necessary to reach the same surfactant concentration as the small NEs. This explains the differences observed between the NEs in Figure 17. The lack of size influence could be due to the ability of NEs to be internalized in intestinal cells at the same levels, even for the large ones, suggesting an internalization process different from that of silica NPs. Surprisingly, despite high concentrations in Kolliphor®, small NEs were not more toxic than the large ones. This could be explained by the adsorption of Kolliphor® at the surface of the droplets, characterized by a large surface area.

Nanotechnology encompasses all sciences and techniques aimed at producing and using objects at the nanometer scale, more precisely between 0.1 and 100 nm for at least one of their dimensions. The challenge for researchers is to industrialize the manufacture of nano-objects in order to exploit their characteristics.
The nanotechnology revolution has quickly resulted in the emergence of a very large number of nanoproducts with multiple applications on the market (1). Some of them, so-called nanomedicines, are used as drug delivery systems to carry drugs to their site of action. For example, nanocapsules with optimized physicochemical properties were designed to cross both the mucus layer and the intestinal epithelium to finally reach the blood compartment (2). Biodegradable nano-objects such as NEs, liposomes, polymeric micelles and nanoparticles were designed for the transport of anticancer agents (3). Inorganic nanomaterials are used as food additives in confectionery, pastry and many culinary preparations, such as the E171 additive. Colloidal silica particles contained in the E551 additive prevent the agglomeration of food powders such as sugar and salt (4). However, their use in food and pharmaceutical products requires a thorough investigation of their potential effects along the gastrointestinal tract, especially on the intestinal barrier. It is also important to understand the mechanisms involved and to determine whether these particles are able to cross biological barriers and then be disseminated in the body (5). The numerous applications of silica particles in food products (6) and the emergence of oral lipid-based formulations such as nanoemulsions in therapy and nutrition require a full characterization of their effects on the intestinal barrier (7). This barrier is necessary for the absorption of vital nutrients and the protection of the organism against foreign bodies. Its alteration would be a source of several pathologies such as infection and local or systemic inflammation (8). Technological advances over the past 40 years in the field of cell culture have facilitated in vitro investigations of intestinal disorders and of the physiological mechanisms of the intestinal barrier, thus reducing the number of in vivo experiments. Despite properties very close to the in vivo situation, the use of primary cell cultures faces availability issues and a short lifespan, limiting long-exposure treatments. Cell lines represent a good alternative due to their homogeneity in terms of phenotype and genotype, their low cost and their easy access (9). The first cell model of the intestinal barrier was the Caco-2 model, defined as the gold standard for predictive studies of permeability and transport of molecules through the intestinal epithelium. This cell line was isolated from a human colorectal adenocarcinoma and established by Fogh & Trempe in 1975 (10). This lineage is well characterized as an enterocyte model regarding the morphological and functional characteristics expressed after differentiation. After reaching confluence, the cells spontaneously differentiate over approximately 21 days of culture. Once differentiated, they form a polarized cell monolayer, an intercellular junction complex, and a brush border with microvilli on the apical side (11). They express enzymes and membrane transporters (12). However, this cell line has some limitations, including the formation of a heterogeneous monolayer related to culture time and number of passages (13). Moreover, the Caco-2 model should not be considered a relevant model for toxicity studies due to the lack of a mucus layer: considering that mucus can act as a protective barrier, an overestimation of the intestinal toxicity is usually observed. A model composed of mucus-secreting HT29-MTX cells associated with Caco-2 cells was developed to overcome this limitation.
The HT-29 cell line is another intestinal cell line established by Fogh & Trempe. It is derived from a colon adenocarcinoma and has been used as an intestinal model for bioavailability and mechanistic studies (14). The HT-29 line is considered a pluripotent gut line because changes in the culture medium can lead to different differentiation pathways. Unlike the Caco-2 cell line, HT-29 differentiation is not spontaneous but rather depends on nutritional and culture conditions (15). The major difference between the HT-29 and Caco-2 cell lines is that, under appropriate culture conditions, HT-29 cells differentiate into goblet cells and then produce mucus. They moderately express tight junctions (16). The stable clone HT29-MTX is a sub-population of HT-29 cells resistant to high concentrations of methotrexate (MTX). This clone is thus capable of spontaneously differentiating into mucus-producing cells (17). The mucus produced by HT29-MTX is a water-insoluble gel composed mainly of mucins such as MUC2 and MUC5AC that form a protective layer in the intestine. The association of HT29-MTX with Caco-2 cell lines allows mimicking the human intestinal barrier more accurately by considering the mucus layer (18). In addition, the ratio between the numbers of Caco-2 and HT29-MTX cells can be adapted for a better correlation with the different segments of the intestine.

Oral toxicity of silica nanoparticles

Methodology

Regarding the first study, a re-evaluation of the E551 food additive was carried out according to the European Food Safety Authority (EFSA) recommendation. Structurally, this additive is composed of agglomerates and aggregates of primary NPs strongly bonded to each other. There are currently no available specifications concerning their physical properties in terms of size distribution and polydispersity (19). To cover the full size range of NPs potentially present in E551, four sizes of engineered silica NPs were evaluated in addition to the food additive. Besides, most in vitro tests using human cells do not consider the potential changes that NPs may undergo in physiological conditions, for example after digestion. NPs can agglomerate, especially in acidic media, or react with components of the food bolus or with digestive enzymes (20). The modification of their physicochemical properties could then influence the interaction of silica particles with the gastrointestinal tract and therefore their toxicity (21). To account for the impact of the digestion process, a digestion protocol was established according to the standardized INFOGEST protocol (22). Several studies have clearly shown the reproducibility of this standardized protocol (23), which is considered an alternative to animal experiments for mimicking static in vitro digestion (21). The composition of the gastric and intestinal fluids comes from the US Pharmacopeia. After E551 digestion, single and cocultures of Caco-2 and HT29-MTX were exposed to the engineered silica NPs and to native and digested E551 at 1 mg/ml for 7 days. The choice of concentration was in agreement with the recommendations of the EFSA panel. The scientific committee noted that the highest exposure doses in the various available toxicity studies were always below the no-observed-adverse-effect level (NOAEL). This explains why the panel could not confirm the acceptable daily intake of "not specified". Thus, concentrations of silica NPs above the daily dietary intake, estimated between 0.3 and 0.8 mg/kg, were evaluated to identify a toxic concentration threshold.
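To make the dose rationale concrete, a minimal arithmetic sketch is given below; the 70 kg body weight and the 250 ml luminal fluid volume are illustrative assumptions, not values from the study.

```python
# Estimated daily dietary intake of silica (range quoted in the text) scaled
# to an adult, versus the in vitro exposure used here. Body weight and
# gastrointestinal fluid volume are illustrative assumptions.
body_weight_kg = 70.0
intake_low, intake_high = 0.3, 0.8          # mg/kg bw/day (from the text)
daily_mg = (intake_low * body_weight_kg, intake_high * body_weight_kg)
print(f"Estimated adult intake: {daily_mg[0]:.0f}-{daily_mg[1]:.0f} mg/day")

# Diluting the upper-bound daily intake in an assumed 250 ml of luminal fluid:
c_lumen = daily_mg[1] / 250.0               # mg/ml
print(f"Rough luminal concentration: {c_lumen:.2f} mg/ml (tested: 1 mg/ml)")
```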
Besides, in the absence of a long-term study with nano-silicon dioxide, the scientific panel of the EFSA could not extrapolate the toxic effects from the available chronic studies (24). For this reason, a long-term exposure in accordance with the lifespan of the in vitro models (7 days) was performed.

Results

Transmission electron microscopy and BET specific surface area results confirmed the classification of E551 as a nanomaterial. E551 is composed of aggregates of primary particles strongly bonded to each other, with a primary size of 14 nm (25). After successive digestions in simulated gastric and intestinal fluids (SGF, SIF), the physicochemical characteristics of E551 did not change in a significant way: the size of the digested additive stayed around 200 nm. Our results are in agreement with those of Desai et al., where transmission electron microscopy images showed that the mesoporous silica NP structure was intact after incubation with SGF and SIF at pH 1.2 and 6.8, respectively (26). However, other studies reported in the literature were not in accordance with our findings. Sakai-Kato et al. observed an agglomerated state of amorphous silica NPs after incubation with simulated intestinal fluids (27). McCracken et al. showed that the incubation of negatively charged silica NPs in SGF led to agglomeration (28). This was explained by the modification of the surface charge at low pH and the presence of ions and enzymes in the simulated gastric fluid (29). Incubation in SIF at a pH of 6.8 could nevertheless reverse the agglomeration state of the silica particles. Thermogravimetric analysis showed the absence of organic compounds at the surface of the digested particles, likely due to the washing steps after digestion. Besides, the isoelectric point was the same before and after the digestion process, indicating the lack of surface modification. After full physicochemical characterization of the native and digested E551 food additives and of the engineered silica NPs, single and cocultures of Caco-2 and HT29-MTX were exposed to the NPs. After 7-day exposure, the cellular viability decreased in a size-dependent manner in the Caco-2 single culture. 70 nm was identified as a cutoff size above which cell viability was preserved: unlike the larger NPs, the 20 nm and 30 nm nanoparticles induced a viability loss in the Caco-2 single culture (30). This was in accordance with the literature showing the size-dependent cytotoxicity of silica NPs on different intestinal cell lines. Small silica nanoparticles of 12 and 40 nm induced a viability decrease of Caco-2 (31), of the human hepatoma cell line HepG2 (32) and of the human colon cancer epithelial cell line HCT116 (33,34). This result could be explained by an accumulation over time of NP20 and NP30 in the Caco-2 cell line. A time-dependent disruption of the actin filaments of Caco-2 was observed with the 30 nm NPs. Our findings were similar to those of Cornu et al., where Caco-2 cells were exposed to 30 nm silica NPs for 2 h at a dose of 10 mg/ml (35). The actin cytoskeleton remained intact with the larger NPs and the food additives, confirming the lack of cell uptake.

Perspectives

It would be interesting to investigate the mechanism involved in the proliferative effect of silica NPs in contact with the HT29-MTX single culture. Previous studies showed that silica NPs can stimulate the proliferation of the human gastric carcinoma cell line GXF251L: the level of the nuclear proliferation marker Ki-67 was enhanced in response to high concentrations of silica NPs.
The authors suggested the involvement of the MAPK/EGFR pathway in the proliferation at different levels. Moreover, the mechanism involved in the actin disruption observed after Caco-2 exposure to NP30 could be investigated more deeply. Several mechanisms can be at the origin of the actin disruption: a direct physical interaction between NPs and actin filaments could be suggested, but the cytoskeleton could also be affected indirectly through ROS or cytokine production (37). To understand the differences observed between the HT29-MTX single culture and the co-culture, a characterization of the mucus could be performed; its biochemical composition, thickness and mesh size could be studied. To assess the safety of the E551 food additive in humans, it is necessary to confirm our results with in vivo models. It would then be possible to evaluate the distribution, the excretion and the toxic effects on the GIT, but also in other organs such as the liver in the case of translocation. Long-term exposure could also be performed (38). An assessment of the impact of nanomaterials on our health and an understanding of the toxicity mechanisms involved are recommended. This would help define an appropriate regulatory and health framework for the safe use of NP systems.

Oral toxicity of lipid nanoparticles

Methodology

A wide range of pharmaceutical nanoforms has been developed as delivery systems for active substances. Among them, NEs have been designed for the administration of pharmaceutical active ingredients and for lipid supplementation (39). Oil-in-water NEs allow the delivery of active ingredients with a poor water solubility (40). The solubility of the active substance is a great challenge in biopharmaceutics, especially for drug absorption. Thanks to the solubility improvement and the protection against gastrointestinal enzymatic degradation, NEs increase the bioavailability of orally administered active substances (41). Due to their size in the nanometer range, NEs can also facilitate the transport of drugs through the intestinal epithelium by interacting with intestinal cells. However, this mechanism could also cause toxicity by altering the integrity of the intestinal barrier. Methods such as high-shear/high-pressure homogenization, ultrasonication or microfluidization are used to generate nanoscale droplets; the desired size of NEs can be controlled by the amount of external energy applied. However, an excess of applied energy can destabilize the NEs due to local heating and affect the stability of the incorporated drug; 99% of the applied external energy is usually dissipated as heat. In contrast, low-energy methods rely on the drastic reduction of the oil/water interfacial tension using surfactants and co-surfactants (42). Some of them require heating to shift the hydrophilic-lipophilic balance (HLB) of the surfactants, inducing a phase inversion of the emulsion (43). Unlike the phase inversion temperature (PIT) method, spontaneous emulsification does not require any intense heating; the process depends mainly on the interfacial tension, the viscosity, and the structures and concentrations of surfactants and co-surfactants (44). Low-energy processes often require a high concentration of surfactant. Due to their amphiphilic properties, surfactants can insert into the cell membrane and reach the cytoplasmic compartment, affecting the viability and biological functions of cells. Thus, besides the nanometric size of the droplets, the presence of a high amount of surfactant in NE preparations could cause intestinal toxicity.
In this second part, the influence of different critical parameters on the features of nanoemulsions was evaluated. This investigation aimed to prepare NEs with different physicochemical properties and compositions in order to identify, in a second step, toxicity factors such as the size and the surfactant concentration. This was initiated through a viability assessment of Caco-2 and HT29-MTX cells exposed to the different NEs.

Results

Nanoemulsification at 37°C of medium chain triglycerides (MCT) led to the formation of 25 nm lipid droplets. This process temperature is slightly above the melting point of the Kolliphor® but does not lead to the phase inversion. Larger sizes, above 100 nm, were obtained with long chain triglycerides (LCT) at the same temperature. This agreed with Chang et al., who demonstrated that the oil phase composition has a major effect on the size of NEs. Very fine nanoemulsions of LCT were only obtained using the PIT method. The molar volume (MV) of the triglycerides was considered a critical factor for spontaneous nanoemulsification: the large MV of LCT could explain their limited diffusion in the aqueous phase (45). Addition of MCT to LCT at a 1:1 ratio reduced the average MV of the oil mixture, allowing spontaneous nanoemulsification at 37°C. The surfactant concentration also played a major role in the nanoemulsification process by affecting the size and the polydispersity of the droplets: high surfactant concentrations facilitated the production of fine NEs, and at least two-fold more surfactant was required to obtain droplets in the nanometer range. Despite the presence of surfactant, NEs of glyceryl trioctanoate (GT) were not stable, and Ostwald ripening was observed after a few days of storage. GT mixed with glyceryl tridecanoate (Labrafac®) or with LCT oils such as glyceryl trioleate stabilized the nanoemulsions for at least 3 months. The mixture increased the MV value of the lipid core of the NEs, limiting the diffusion of the oils in the bulk water phase and thus the Ostwald ripening. By predicting the water solubility of triglycerides, the MV could be a very interesting indicator for both the spontaneous emulsification and the Ostwald ripening-mediated destabilization.

In vitro study

NEs have been developed as promising nano-objects for the administration of pharmaceutical active substances (39). The major interest in this field focuses on O/W NEs for the delivery of hydrophobic active ingredients with a poor water solubility and to increase their oral bioavailability (40,41). In our study, we demonstrated that all NEs affected the viability of Caco-2 from a concentration of 0.05 g/ml, without size influence. The same result was observed with HT29-MTX but at a higher NE concentration, 0.11 g/ml, suggesting a protective effect of the mucus. Surprisingly, the Kolliphor® concentration, which is inversely proportional to the size of the NEs, did not play a major role in their cytotoxicity. Despite high Kolliphor® concentrations in the nanoemulsions of 20 nm and 30 nm, no cytotoxicity difference was observed compared with the larger NEs. The majority of the surfactant could be adsorbed at the surface of the droplets, characterized by a large surface area, limiting its interaction with cells. The lack of size influence could be explained by the ability of NEs to be internalized into the cells, even for lipid droplets of 200 nm. Thus, rather than the Kolliphor® concentration and the size, only the NE concentration affected the viability of intestinal cells.
This finding suggests that nanotoxicity factors depend on the nature of the NPs. While size governs the toxicity of silica NPs, only the exposure dose influenced the cytotoxicity of nanoemulsions.

Perspectives

This toxicity study needs to be completed by evaluating the influence of the digestion process on the physicochemical properties of NEs. The presence of lipase in the intestinal fluid could affect their structure. The potential release of surfactants from the surface of nanodroplets into gastrointestinal fluids could also exacerbate the toxicity of NEs. Free surfactants would then interact with the biological environment, especially the intestinal cells. The influence of NEs on the permeability of the intestinal barrier should also be investigated. Surfactants are well known to interact with the multidrug resistance mechanism, especially P-gp. An alteration of P-gp would enhance the bioavailability of some xenobiotics and cause adverse effects, especially when NEs are used as drug delivery systems. The expression and function of tight junction proteins such as ZO-1 and claudins could also be affected. The influence of NEs on the gut microbiota and the mucus layer also needs to be evaluated to identify a potential intestinal toxicity.

General conclusion

Despite the wide presence of nanomaterials in numerous manufactured products, their potential health hazard is still unclear due to the lack of available toxicological data and harmonized protocols. In addition, the exposure doses and times in toxicity studies often do not correlate with real use conditions, especially for daily products. In the present thesis work, we have demonstrated that mucus needs to be considered in nanotoxicity studies due to its protective role. A size effect was identified in the toxicity mechanism of silica NPs: from a hydrodynamic diameter of 70 nm, silica particles were unable to be internalized in intestinal cells and to interact with the cytoskeleton. Our study also demonstrated that the exposure time is a critical parameter, especially for poorly biodegradable particles: small silica NPs accumulate in the cells until reaching a toxic dose. The safety of the Emprove® E551 food additive was demonstrated after subacute exposure to intestinal cells. This was explained by the large mean size of the silica particles, close to 200 nm, even after the digestion process. Due to the lack of specifications on the dimensions of the particles, nanosilicates could nevertheless be present in E551 additives provided by other manufacturers, affecting their toxicological profile. In contrast to silica NPs, no size effect was reported in the cytotoxicity of nanoemulsions: they provoked a cell viability drop in a concentration-dependent manner for all the tested NEs. This demonstrates that the mechanisms involved in nanotoxicity are dependent on the composition of the nanoparticles and that, consequently, toxicity factors cannot be extrapolated to all NPs.
Annexes

List of Figures

Figure 1: Estimation of various nanomaterials uses over a span of 10 years. According to Market Analysis Report.
Figure 2: Applications insights of nanomaterials in 2020. According to Market Analysis Report.
Figure 3: Classification of NPs. According to Shah et al 2020 (5).
Figure 4: The types of polymeric nanoparticles used as drug delivery systems. According to Christoforidis et al 2012 (6).
Figure 5: Schematic representation of polymeric micelles. According to Ghezzi et al 2021 (8).
Figure 6: Schematic representation of a liposome. According to Lembo et al 2010 (9).
Figure 7: Schematic representation of nanoemulsion types. According to Che Marzuki et al 2019 (12).
Figure 8: Schematic representation of dendrimer structure. According to Santos et al 2013 (14).
Figure 9: Schematic representation of carbon nanotubes. According to Abazari et al 2020 (21).
Figure 10: Global nanotechnology market. According to Talebian et al 2021 (22).
Figure 11: Main exposure routes of the human body to nanoparticles. According to Naseer et al 2018 (34).
Figure 12: Schematic representation of the different NP penetration pathways through the skin barrier. According to Nafisi et al 2018 (37).
Figure 13: Schematic representation of the different NP penetration pathways through the gastrointestinal tract. According to Bellmann et al 2015 (44).
Figure 14: Deposition of nanoparticles in the respiratory tract depending on their size. According to Geiser et al 2010 (46).
Figure 15: Schematic representation of the different NP penetration routes through the eye. According to Souto et al 2019 (47).
Figure 16: Schematic representation of a Transwell®.
Figure 17: Influence of the concentration in Kolliphor® on the viability of Caco-2 (A) and HT29-MTX (B).
Figure 18: Influence of the concentration in NEs on the viability of Caco-2 (A) and HT29-MTX (B).

Toxicokinetic and toxicological endpoints, including cytotoxicity, genotoxicity, oxidative stress and inflammatory response, were considered. According to the definition issued by the European Commission (EC) in 2011, nanomaterials were described as solid particles, including single, agglomerated and aggregated particles, where at least 50% of the particles in the number size distribution have one or more external dimensions between 1 nm and 100 nm (Commission Européenne, 2011). Two other conditions were added in the new definition published in 2022 by the EC (Commission Européenne, 2022). This includes elongated particles characterized by two dimensions smaller than 1 nm and another one larger than 100 nm, such as rods, fibers and tubes. Platelet particles with one dimension below 1 nm and two others above 100 nm are also defined as nanomaterials. According to the ISO/TS 80004-2:2015 guidelines, nanoparticles (NPs) are characterized by three external dimensions in the nanoscale, comprised between 1 and 100 nm and approximately in the same range (ISO/TS 80004-1:2015, Nanotechnologies). NPs represent a sub-class of nanomaterials. They are composed of organic (lipid and polymeric NPs, dendrimers, micelles) or inorganic materials.

Figure 2: Schematic representation of interactions between nanomaterials and bacteria. Figure reproduced with permission from Ladaycia et al.

Physicochemical properties of nanomaterials are critical factors mediating their distribution in the intestine. A small size, usually below 100 nm, and a neutral or negative surface charge facilitate penetration across the mucus layer, which acts as a protective barrier for the intestinal epithelium. Positively charged NPs, potentially more toxic for cells, are trapped in the mucus layer and are unable to reach the epithelial cells in healthy models. Physical and chemical modifications of nanomaterials after contact with gastrointestinal fluids also need to be considered in toxicity studies: changes in the surface charge and agglomeration state were reported, impacting their toxicological profile. Some nanomaterials are able to interact with the different layers of the intestinal barrier, including the microbiota, the mucus layer and the epithelial monolayer. Gut microbiota dysbiosis and changes in the secretion and composition of mucus were observed. Intestinal permeability modulation can be directly correlated to epithelial cell damage. Another mechanism is attributed to the destabilization of the actin cytoskeleton of epithelial cells and the rearrangement of TJs.
Besides the physicochemical properties of nanomaterials, the composition is another critical factor in intestinal nanotoxicity. Inorganic materials are usually more toxic than polymeric and lipid-based NPs. Surfactants used as additives for the preparation of organic particles were mainly responsible for the cytotoxicity and the intestinal permeability modulation. Two toxicity pathways are suggested for inorganic nanomaterials. First, the high surface area of metallic NPs such as Ag particles facilitates the release of ions, affecting the biological functions of cells. The second pathway could be attributed to the lower biodegradation rate of inorganic NPs, such as SiO2 and TiO2 NPs, compared with organic materials: the contact time with the cellular compounds is then prolonged, and chronic exposure would result in their accumulation in the cells. The microbiota was affected by some NPs through the proliferation of pathogenic and pro-inflammatory bacteria, while the antibacterial features of some metallic particles reduced the population of commensal microorganisms. Unlike epithelial damage, the inflammatory response triggered by nanomaterials was less dependent on their physicochemical properties. Despite the numerous investigations, consistency issues were usually observed in the toxicological data reported in the literature.

Lipid nanoparticles are composed of a solid (SLN) or liquid (NE) lipid core surrounded by an external corona of surfactants, including phospholipids and PEGylated lipids. They exhibit numerous advantages as drug delivery systems: hydrophobic drugs can be solubilized in the lipid core of the NPs, and the small size of the droplets and the presence of surfactants improve the transport of drugs across biological barriers. High- and low-energy methods can be used to produce nanoemulsions. High shearing rates are required to reduce the size of oil droplets to the nanometer scale. Another approach consists in lowering the tension at the oil/water interface using high quantities of surfactants and co-surfactants; a spontaneous nanoemulsification can then occur. Phase Inversion Temperature (PIT) is an alternative method based on the modification of the hydrophilic-lipophilic balance (HLB) of PEGylated surfactants with temperature: a hydrophilic surfactant at low temperature becomes hydrophobic upon heating, leading to the inversion of the emulsion from O/W to W/O. The present work evaluates the critical parameters for the preparation of small and stable nanoemulsions containing medium and long chain triglycerides.

The interactions of silica NPs with HT29-MTX cells exhibited a different pattern compared with Caco-2. The continuous mucus layer spread over the HT29-MTX single culture prevented the cell internalization of silica NPs, even the smallest ones. No toxic effect, including cytotoxicity, actin disruption, ROS production or inflammatory response, was reported. Surprisingly, a proliferative effect on HT29-MTX cells was identified in the presence of silica particles. The mucus layer secreted by HT29-MTX served as an effective barrier, preventing both small and large NPs from reaching the cells. A rapid accumulation of engineered silica NPs in the mucus layer was observed, due to the strong interaction of the NPs with the mucus gel (36).
In the absence of a mucus layer on HT29-MTX, a size-dependent cell uptake was noticed, similar to the Caco-2 cell line, proving the role of mucus. Despite the mucus on the co-culture, a transepithelial electrical resistance drop was noticed with the 20 and 30 nm nanoparticles. The low ratio of HT29-MTX in the co-culture model could change the mucus layer properties in comparison with the one spread over the HT29-MTX single culture. No damage to the co-culture was reported with the E551 additive and the engineered NPs of 70 and 200 nm. The safety of E551 food additives could then be guaranteed if they contain only silica NPs with a size of at least 70 nm.

In vitro investigation of the toxicological profile of silicate nanoparticles and nanoemulsions

Keywords: silica nanoparticles, nanoemulsions, toxicity, intestinal barrier, mucus

Abstract: The quantity of manufactured nanoparticles (NPs) on the market is constantly increasing due to their unique properties. They are used as cosmetic ingredients, food additives and drug delivery systems. Their size in the nanometer range confers them a strong ability to interact with the biological environment.
This work aims to evaluate the intestinal toxicity of two types of nanoparticles (NPs): silica NPs potentially present in the E551 food additive and nanoemulsions used as drug/lipid delivery systems. The toxicological evaluation was carried out using in vitro models consisting of single cultures and a co-culture of intestinal cells (Caco-2 and HT29-MTX). Prior to the incubations, the E551 additive was digested with simulated gastric and intestinal fluids to accurately mimic the in vivo conditions. The smallest silica NPs, characterized by hydrodynamic diameters of 20 and 30 nm, altered the viability of enterocytes and induced ROS production after 7-day exposure. However, the single culture of goblet cells was not affected, due to the protective barrier of mucus. Silica NPs with a size of at least 70 nm and the E551 additive were not toxic, even in the absence of mucus. Unlike NP20 and NP30, they did not significantly modulate the transepithelial electrical resistance of the co-culture. Regarding nanoemulsions, a dose-dependent cytotoxicity was observed in Caco-2 and HT29-MTX cultures, without any size effect. The lipid composition and the presence of surfactant could facilitate the cell internalization of the droplets, even the largest ones. This study demonstrates that nanotoxicity factors vary according to the composition of the NPs. Unlike nanoemulsions, the toxicological profile of silica particles is strongly dependent on the hydrodynamic diameter. Mucus also needs to be considered in toxicity studies for a better prediction.

Université Bourgogne Franche-Comté, 32 avenue de l'Observatoire, 25000 Besançon

Table 1: Intestinal toxicity of inorganic NPs.

| NPs | Size / surface charge | Concentration | Exposure time | In vitro models | Observations | References |
|---|---|---|---|---|---|---|
| Ag | 20 and 200 nm / -12.8 and -13.9 mV | 0.1-100 mg/mL | 24 h (short-time exposure) | Coculture of human colon colorectal adenocarcinoma Caco-2/TC7 and HT29-MTX cells | Increase in IL-8 in a dose- and size-dependent manner, with a lack of cytotoxicity and oxidative stress | (Georgantzopoulou et al., 2016b) |
| Bare Ag (Ag-B); citrate-coated Ag (Ag-CIT); poly(N-vinyl-2-pyrrolidone)-coated Ag (Ag-PVP) | 23 nm / -7.7 mV; 24 nm / -9.6 mV; 30 nm / -8.4 mV | 0.1-2.0 µg/mL | 24 h (short-time); 21 days (long-time) | Human colon colorectal adenocarcinoma Caco-2 cells | Decrease in cellular viability in a dose- and coating-dependent manner; increase in IL-8 release in a dose- and time-dependent manner | (Chen et al., 2016) |
| TiO2 | 265 nm / -14.1 mV | 62.5, 250 and 1000 µM | 12 h and 24 h | Human intestinal carcinoma epithelial cell line SW480 / normal human intestinal mucosa epithelial cell line NCM460 | Change of cell morphology; decrease in cellular viability in a dose- and time-dependent manner | (Setyawati et al., 2015; García-Rodríguez et al., 2018c) |
| Pure anatase TiO2 nanospheres; pure rutile TiO2 nanorods | 70-80 nm; 40-70 nm | 12.5-350 µg/mL | 24 and 48 h | Coculture of human colon colorectal adenocarcinoma Caco-2 and HT29-MTX cells | Increase in DNA damage | (García-Rodríguez et al., 2018c) |

Table 2: Intestinal toxicity of organic NPs.

In vitro models:

| NPs | Size / surface charge | Concentration | Exposure time | Intestinal models | Observations | Reference |
|---|---|---|---|---|---|---|
| Chitosan (CS) loaded with a model antigen, ovalbumin (OVA) | 290 nm / 43.3 mV | 1 mg/mL | 1 h | Human colon colorectal adenocarcinoma Caco-2 cells | No significant cytotoxicity of CS NPs was observed | (Slütter et al., 2009) |
| Chitosan conjugated with a goblet cell-targeting peptide (CSK) | 135.2 nm / 6.3 mV | 0.125, 0.25, 0.375 and 0.5 mg/mL | 3 h | Human colon colorectal adenocarcinoma Caco-2 cells | No cytotoxicity was observed | - |
| Poly(amidoamine) PAMAM dendrimers (generations G0, G1, G2, G3, G4) | - | 0.1, 1 and 10 mM | 210 min | Human colon colorectal adenocarcinoma Caco-2 cells | G2, G3 and G4 induced a significant leakage of lactate dehydrogenase in a dose-dependent manner | (El-Sayed et al., 2002) |
| PLGA | 175 nm | 12.5-200 µg/mL | 8 h | Human colon colorectal adenocarcinoma Caco-2 cells | No cytotoxicity was observed | (Chaves et al., 2018) |
| PLGA | 211 nm | 15.63-250 µg/mL | 8 h | Human colon colorectal adenocarcinoma Caco-2 cells and HT29-MTX cells | No cytotoxicity was observed | (Chaves et al., 2018) |
In vivo models:

| NPs | Size / surface charge | Dose | Exposure time | Model | Observations | References |
|---|---|---|---|---|---|---|
| Chitosan | 253.2 nm / 28.2 mV | 5 mg/kg/day | 7 days | Male Wistar rats | No cytotoxicity was observed | (Sonaje et al., 2011; Liu et al., 2013) |

Disruption of epithelial cell microvilli was also observed in mice exposed to Ag NPs for 21 days at a dose of 20 mg/kg (van der Zande et al.). An alteration of actin filaments, reported for TiO2 and Ag NPs (Déciga-Alcaraz et al.; Xu et al.), could be hypothesized, considering that the brush border is supported by a bundle of cytoskeleton components (Costa de Beauregard et al.).

Table 1: The composition of blank nanoemulsions.

| Batch | Kolliphor® | Labrafac® | Osmosed water | Surfactant-to-oil ratio (SOR) |
|---|---|---|---|---|
| NE20 | 0.98 g | 0.28 g | 6.24 g | 3.5 |
| NE30 | 0.84 g | 0.42 g | 6.24 g | 2 |
| NE70 | 0.63 g | 0.63 g | 6.24 g | 1 |
| NE200 | 0.42 g | 0.84 g | 6.24 g | 0.5 |

Table 2: Hydrodynamic diameter and zeta potential of engineered blank NEs.

| | NE20 | NE30 | NE70 | NE200 |
|---|---|---|---|---|
| Surfactant-to-oil ratio | 3.5 | 2 | 1 | 0.5 |
| Hydrodynamic diameter (nm), T0 (dynamic light scattering) | 21.8±0.9 | 31.8±0.2 | 76.5±0.6 | 227.6±2.4 |
| Hydrodynamic diameter (nm), T2h | 21.4±0.1 | 32.4±0.6 | 75.8±1.5 | 213.4±3.6 |
| Polydispersity index, T0 | 0.04±0.01 | 0.057±0.05 | 0.049±0.03 | 0.32±0.03 |
| Polydispersity index, T2h | 0.04±0.02 | 0.039±0.03 | 0.075±0.03 | 0.23±0.03 |
| Zeta potential (mV), T0 (laser Doppler electrophoresis) | -6.7 | -1.9 | -3 | -3.1 |
| Zeta potential (mV), T2h | -4.2 | -3.3 | -2.7 | -3.2 |

Acknowledgement

This work was achieved in the Department of Pharmaceutical Engineering of PEPITE EA4267, University of Franche-Comté, Besançon, France. Allow me to thank the members of the jury: Dr. BOLAND Sonja and Pr. SAPIN-MINET Anne for accepting to be part of my thesis jury as reporters, and Dr. MARTIN Hélène and Pr. DAHER Ahmad for agreeing to examine this work. I would like to thank Dr. Raphael Cornu for his support and his presence during the period of my thesis. I would also like to express my immense gratitude for your guidance and your scientific and moral help. Thank you for attending as a guest member. I would like to thank my thesis director, Dr. Arnaud Béduneau, for all his advice and encouragement throughout my thesis, and for his scientific honesty. His teachings have always been of high quality; they have brought me a lot and will remain an example for me. I also thank him for the long hours of work on this manuscript. I also express my thanks to my co-director, Pr. Mona Diab-Assaf. Thank you for your guidance and help. I would like to thank Pr. Céline Demougeot for trusting and welcoming me to the laboratory. I would also like to thank Dr. Hélène Martin and Dr. Georges Moarbess for agreeing to follow this work during the thesis monitoring committees. I would also like to thank Pr. Yann Pellequer and Claire Chrétien for their help and advice. I thank Pr. Nadine Millot and her team for their participation in the physicochemical characterisation of the E551 food additive.
I thank my colleagues Wassim El basset and Thomas Antoine for the great moments we shared in the laboratory. I dedicate this work to my adorable mum Insaf and my dear dad Haidar, and to my loving brothers.

Acknowledgement

T. Zaiter is supported by a fellowship from the "Centre Islamique d'Orientation et de l'Enseignement Supérieur (CIOES)". Some figures were designed using BioRender.com.

Abstract

The gastrointestinal tract represents one of the primary routes of entry for many nanomaterials. Their size in the nanometer range and their high surface area confer them very interesting properties as food additives: they are used as texturizing, opacifying or anticaking agents, and food packaging contains nanomaterials with antimicrobial properties. Humans are also orally exposed to nanoparticles (NPs) present in the air or in drinking water. Ingested NPs can then reach the intestinal lumen and interact with the gastrointestinal fluids, the microbiota, the mucus layer and the epithelial barrier, allowing a potential translocation. The toxicological profile of ingested NPs is still unclear due to the variety of NPs in terms of composition and physicochemical properties, as well as the limited number of investigations. Their unique properties related to their small size could, however, affect the intestinal ecosystem but also the physical and functional properties of the intestinal barrier. This review focuses on the fate of ingested organic and inorganic NPs in the intestinal lumen and on their toxicity to the microbiota and epithelial cells.

Keywords: Nanoparticles, Intestinal cells, Microbiota, Intestinal permeability, Cytotoxicity, Genotoxicity

Conflicts of interest

There are no conflicts to declare.
04112105
en
[ "math" ]
2024/03/04 16:41:24
2023
https://ifp.hal.science/hal-04112105/file/ECM23.pdf
S. Poncet (email: [email protected]), C. Mehl, K. Truffin, O. Colin

Modified diffusion model adapted to non-unity Lewis number mixtures for low flame stretch using the thickened flame model

Keywords: Stretch, Diffusion, Non-unity Lewis number, Laminar premixed flame, Thickened Flame Model

This study proposes a methodology based on species diffusion adaptation to recover the flame response to stretch in the context of the Thickened Flame Model for non-unity Lewis number mixtures. First, the flame speed variation induced by low stretch, which is overestimated by the standard form of the TFM, is illustrated with stoichiometric C8H18/air (Le > 1) and lean H2/air (Le < 1) stretched thickened flame fronts. Secondly, the model correction parameter is deduced from spherical flame simulations for both mixture conditions. A posteriori validations, performed on spherical flames and flame-vortex interaction configurations, finally illustrate the effectiveness of this method.

Introduction

In Large Eddy Simulations (LES) of reactive flows, the flame thickness is usually small compared to the mesh size ($\Delta x \gg \delta_L^0$). The Thickened Flame Model (TFM), consisting in the artificial widening of the flame front, is a competitive approach which ensures the proper resolution of flame fronts [1], while preserving the correct propagation speed. In TFM with full species and transport resolution, chemistry is solved using Arrhenius laws, allowing various phenomena such as flame-wall interactions, ignition and flame stabilization to be predicted accurately [2,3]. However, a flame front thickened by a factor F is F times more sensitive to stretch than a real flame [2]. Various studies [4,5,6] proposed a tabulated adaptation of the species diffusion coefficients to account for the strain effect of turbulent flows for CH4/air mixtures (Le ∼ 1). With a similar approach, Comer et al. [7] adapted the TFM approach in the context of strain-induced extinction limits for lean C3H8/air mixtures (Le > 1). These studies only considered counter-flow strained flame configurations to determine their model correction parameters. Quillatre [8] proposed a diffusion correction to limit the error in the flame response to low stretch with TFM for non-unity Lewis number mixtures, based on asymptotic theories. However, the restrictive assumptions of asymptotic theories did not allow for the accurate preservation of flame speed in stretched-flame configurations. The aim of this study is to propose a model to recover the flame response to low stretch in the context of the TFM approach for non-unity Lewis number mixtures.
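For background — a standard scaling argument from laminar flame theory, not specific to this paper — the F-fold over-sensitivity to stretch can be traced to how thickening is constructed. With a one-step reaction of pre-exponential factor A and diffusivity D,

$$S_L^0 \propto \sqrt{A\,D}, \qquad \delta_L^0 \propto \frac{D}{S_L^0} \propto \sqrt{\frac{D}{A}},$$

so replacing $D \to F D$ and $A \to A/F$ preserves $S_L^0$ while the thickness becomes $F \delta_L^0$. Since Markstein lengths scale with the flame thickness, the thickened front reacts to a given stretch rate K as a real flame would to FK.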
Unlike previous studies [START_REF] Proch | Modeling heat loss effects in the large eddy simulation of a model gas turbine combustor with premixed flamelet generated manifolds[END_REF][START_REF] Han | Large eddy simulation/dynamic thickened flame modeling of a high karlovitz number turbulent premixed jet flame[END_REF][START_REF] Popp | An extended artificial thickening approach for strained premixed flames[END_REF][START_REF] Comer | A modified thickened flame model for simulating extinction[END_REF], we propose to address both strain and curvature effects by considering curved flame configurations. In Sec. 1, we briefly recall the theory of the stretch influence on flame speed and the impact of the TFM on stretched flame configurations. In Sec. 2, we develop a methodology, based on the modification of species diffusivities, to correct the over-sensitivity of thickened flame fronts to stretch. The newly introduced method is assessed for a lean H2/air (Le < 1) and a stoichiometric C8H18/air (Le > 1) mixture. A posteriori validations are finally performed in Sec. 3 on spherical flames and flame-vortex interactions.

1 Theoretical background

1.1 Premixed flame stretch sensitivity

The balance between the thermal energy released by oxidation reactions and the supply of chemical energy from reactive species, which sustains the combustion process in premixed flames, can be altered by flame stretching. Stretch is defined as the rate of change of the flame surface $A$ normalised by its area and is evaluated as $K = \frac{1}{A}\frac{dA}{dt}$. Under the assumptions of the asymptotic theory, the flame speed exhibits a linear trend that scales with stretch [START_REF] Matalon | Flames as gasdynamic discontinuities[END_REF]. The flame speed is linearly dependent on stretch whether one considers the flame consumption speed $S_c$ or the flame displacement speed $S_d^b$ evaluated close to the burnt gas side [START_REF] Giannakopoulos | Consumption and displacement speeds of stretched premixed flames -theory and simulations[END_REF]. On the one hand, the flame consumption speed represents the speed at which the flame consumes the reactants:

$$S_c = -\frac{1}{\rho_u \left(Y_k^u - Y_k^b\right)} \int_{x^-}^{x^+} \dot{\omega}_k \, dn \qquad (1)$$

where $\rho$, $Y_k$, $\dot{\omega}_k$ and $n$ are respectively the density, the mass fraction of species $k$, the reaction rate of species $k$ and the distance along the flame normal. $x^-$ and $x^+$ represent the coordinates on the unburnt and burnt gas sides of the reactive front. Indexes $u$ and $b$ refer to the unburnt and burnt gases, respectively. On the other hand, the flame displacement speed $S_d^b$ corresponds to the velocity of an iso-surface of the flame considered in the burnt gas. The impact of stretch on the flame speed is characterized by the Markstein length $L$. Giannakopoulos et al. [START_REF] Giannakopoulos | Consumption and displacement speeds of stretched premixed flames -theory and simulations[END_REF] distinguish the consumption Markstein length $L_c$ and the displacement Markstein length $L_d^b$:

$$S_c = S_L^0 - L_c K \qquad (2)$$

$$\tilde{S}_d^b = S_L^0 - L_d^b K \qquad (3)$$

where $\tilde{S}_d^b$ is the normalized displacement speed defined by $\tilde{S}_d^b = \sigma^{-1} S_d^b$, with the expansion ratio $\sigma = \rho_u/\rho_b$. The difference $L_d^b - L_c$ is always positive (see Eq. (34) in [START_REF] Giannakopoulos | Consumption and displacement speeds of stretched premixed flames -theory and simulations[END_REF]), i.e. $L_d^b > L_c$, implying that these Markstein lengths can have opposite signs.
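As a quick numerical illustration of Eq. (2) (the values below are assumptions chosen to be of the order of the lean H2/air case studied later, not results from a specific simulation): with $S_L^0 = 0.41$ m/s and a negative consumption Markstein length $L_c = -2\times10^{-4}$ m, a stretch rate $K = 100$ s$^{-1}$ gives

$$S_c = S_L^0 - L_c K = 0.41 - (-2\times10^{-4})(100) = 0.43\ \mathrm{m/s},$$

i.e. a Le < 1 flame burns faster under positive stretch; since thickening multiplies $L_c$ by $F$, the same stretch changes the speed of a thickened flame $F$ times more.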
1.2 Stretch sensitivity of thickened flames

Based on the asymptotic theory, the consumption Markstein length scales with the flame thickness as $L_c \propto \delta_L^0\,(\mathrm{Le}_i - 1)$, where $\mathrm{Le}_i$ is the Lewis number of the limiting reactant of the mixture, defined as the ratio between the thermal diffusivity $\alpha$ and the mass diffusivity $D_i$ of species $i$. Hence, for mixtures with large disparities between species mass diffusion and thermal diffusion, i.e. $\mathrm{Le}_i$ far from unity, the flame speed is significantly modified by stretch. As a consequence, with TFM, $L_c$ is proportional to the thickened flame thickness $F\delta_L^0$, where $F$ is the thickening factor. A modification of the original TFM model is then necessary to account for stretch effects in flames modeled with preferential diffusion. Quillatre [8] proposed a corrected Lewis number $\mathrm{Le}_k^*$ for all species $k$ based on the conservation of $L_c$, estimated from the asymptotic theory:

$$\mathrm{Le}_k^* = 1 + \frac{\mathrm{Le}_k - 1}{F} \;\Leftrightarrow\; \mathrm{Sc}_k^* = \mathrm{Pr} + \frac{\mathrm{Sc}_k - \mathrm{Pr}}{F} \qquad (4)$$

This model was shown to reduce the over-estimation of the stretch effect with TFM [8]. However, the asymptotic theory being only qualitative for real multi-species chemistries, this approach still leads to important errors. Therefore, we propose here a model based on flame simulations which allows the real Markstein length of the flame to be recovered accurately.

2 TFM with corrected stretch sensitivity

2.1 Model formulation

Following previous approaches [START_REF] Comer | A modified thickened flame model for simulating extinction[END_REF] [8], we propose to modify species diffusivities to correct the flame response to stretch. We define effective diffusion coefficients $D_k^*$, and thus Lewis numbers $\mathrm{Le}_k^*$, as follows:

$$D_k^* = \frac{D_k}{\gamma} \quad \text{or} \quad \mathrm{Le}_k^* = \frac{\alpha}{D_k^*} = \gamma\,\mathrm{Le}_k \qquad (5)$$

where $\gamma$ is a model parameter determined so that the flame sensitivity to stretch is correctly retrieved. $\gamma$ is a priori dependent on the local thermo-chemical conditions (pressure, temperature, equivalence ratio, ...) and the thickening factor $F$. The modification of the diffusive fluxes however leads to an alteration of the flame speed and thickness. To recover the unstretched laminar flame speed, the pre-exponential factors of all reaction rates are multiplied by a parameter $A$, evaluated as $A = \left(S_L^0 / S_L^\gamma\right)^2$, with $S_L^\gamma$ the flame speed obtained by using the effective diffusivity $D_k^*$ without correcting the pre-exponential factor.

2.2 Model parameter evaluation

The method to estimate the model parameter $\gamma$ is now described. In practice, we want to find the value of $\gamma$ for each thickening factor $F$, written $\gamma_F$, which satisfies:

$$L(F, \gamma_F) = L(F{=}1, \gamma{=}1) \qquad (6)$$

where $L$ denotes here either the consumption or displacement Markstein length. We consider the Markstein number $\mathrm{Ma}$ defined as:

$$\mathrm{Ma}(\gamma) = \frac{L(F, \gamma)}{F \delta_L^0} \qquad (7)$$

Dividing Eq. (6) by $F\delta_L^0$ leads to the following identity:

$$\mathrm{Ma}(\gamma = \gamma_F) = \frac{\mathrm{Ma}(\gamma = 1)}{F} \qquad (8)$$

The methodology adopted to determine $\gamma_F$ is then as follows: (i) we compute a series of stretched flames with varying values of $\gamma$, using each time the adequate correction factor $A(\gamma)$, and we record the Markstein number $\mathrm{Ma}(\gamma)$. These flames may be computed at any fixed thickening $F$ by virtue of the proportionality relationship given in Eq. (7). (ii) We solve Eq. (8) for the unknown variable $\gamma_F$ using an adequate optimization algorithm. The procedure is illustrated in Fig. 1. Step (i) might be performed by considering either curved or strained flames. In the present paper, the estimation of $\gamma_F$ is performed by considering spherical curved flames. A more detailed discussion of this choice is left for future work.
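Step (ii) reduces to a one-dimensional root-finding problem once $\mathrm{Ma}(\gamma)$ has been sampled in step (i). The following minimal sketch illustrates one possible implementation; the $(\gamma, \mathrm{Ma})$ samples are hypothetical placeholders, not values from the present simulations.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

F = 8.0  # thickening factor

# Hypothetical (gamma, Ma) samples from step (i). In practice, each sampled
# gamma also requires its own speed correction A = (S_L0 / S_L_gamma)**2
# before the Markstein number is measured.
gamma_samples = np.array([1.0, 1.2, 1.4, 1.6, 1.8, 2.0])
ma_samples = np.array([-0.60, -0.45, -0.32, -0.21, -0.12, -0.05])

ma = interp1d(gamma_samples, ma_samples, kind="cubic")

# Eq. (8): find gamma_F such that Ma(gamma_F) = Ma(gamma=1) / F.
target = ma_samples[0] / F
gamma_F = brentq(lambda g: float(ma(g)) - target,
                 gamma_samples[0], gamma_samples[-1])
print(f"gamma_F = {gamma_F:.3f}")
```

A bracketing method such as brentq is a natural choice here because $\mathrm{Ma}(\gamma)$ is sampled on a bounded interval and, for the mixtures considered, appears to vary monotonically with $\gamma$.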
Additionally, $\gamma_F$ might be determined considering either the consumption or the displacement Markstein length, leading to different estimations. A discussion on this aspect will be given in Sec. 3.

Figure 1: Illustration of the method to determine $\gamma_F$.

2.3 Validation conditions

The proposed methodology is assessed on stoichiometric C8H18/air (Le > 1) and lean H2/air (Le < 1) mixtures. Both mixtures are considered at temperature T = 300 K and pressure P = 1 bar. A 2-step mechanism involving 6 species [START_REF] Suillaud | Direct numerical simulations of high karlovitz number premixed flames for the analysis and modeling of the displacement speed[END_REF] is used for the isooctane case, while a detailed mechanism (21 reactions, 10 species) [START_REF] Conaire | A comprehensive modeling study of hydrogen oxidation: A Comprehensive Modeling Study of Hydrogen Oxidation[END_REF] is used for the lean hydrogen mixture (Φ = 0.47). The laminar flame characteristics, obtained from premixed adiabatic laminar flames computed with the CANTERA 1D solver [START_REF] Goodwin | Cantera: An objectoriented software toolkit for chemical kinetics, thermodynamics, and transport processes[END_REF], are (i) $S_L^0$ = 0.35 m/s and $\delta_L^0$ = 357 µm for the C8H18 case and (ii) $S_L^0$ = 0.41 m/s and $\delta_L^0$ = 466 µm for the H2 case.

3 Results and discussion

The application of the corrected TFM model to the H2 and C8H18 cases is explored here. It is first applied on spherical flames in Sec. 3.1, which serve both to fit the $\gamma$ parameter and as a first validation set-up. The model is then validated on flame-vortex interactions in Sec. 3.2.

3.1 Spherical flames

3.1.1 Numerical set-up

Spherical-flame simulations are performed with the CONVERGE CFD software [START_REF] Richards | [END_REF], which features a finite volume solver on Cartesian meshes. The present work considers in particular the resolution of 1-D spherical flame equations. This formulation enables an efficient computation of these flames and avoids the presence of intrinsic 3-D flame instabilities. The simulation domain length $a$ is scaled with the thickening factor $F$ such that $a = F \times 0.73$ m for both mixture cases. The mesh size is set as $\Delta x = F \times \delta_L^0/10$. Hence, for every thickening factor considered, the flame front is solved on approximately $n \sim 10$ grid points.

3.1.2 Markstein lengths for spherical flames

We consider a spherical flame front propagating outwards into quiescent premixed fresh gases. The rate of change of the flame surface is directly related to the flame radius $R_{BG}$, determined on the burnt gas side:

$$K = \frac{2}{R_{BG}} \frac{dR_{BG}}{dt} = \kappa S_d^b \qquad (9)$$

where $S_d^b$ and $\kappa$ represent respectively the flame front displacement speed and the flame front curvature considered in the burnt gas. The flame front radius is evaluated from the burnt gas mass in the computational domain. The flame displacement speed considered on the burnt gas side is $S_d^b = dR_{BG}/dt$. The consumption and displacement Markstein lengths are determined by a linear regression of $S_c(K)$ and $S_d^b(K)$ on the stretch rate interval $K = [20/F; 100/F]$ s$^{-1}$. Figure 2 shows the displacement and consumption Markstein numbers as a function of $\gamma$ for the H2 case. Table 1 provides the values of the Markstein lengths for thickening factors equal to 1 and 8; it emphasizes the amplification of the Markstein lengths by the thickening factor.

Figure 2: Markstein numbers with respect to γ for the H2 mixture (Φ = 0.47, T = 300K, P = 1bar).

Table 1: Markstein lengths evaluated from the linear regression of the flame consumption speed $S_c$ (Eq. 2) and the flame displacement speed (Eq. 3).
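As an illustration of this regression step, the sketch below extracts $S_L^0$ and $L_c$ from Eq. (2) by a linear fit; the $(K, S_c)$ samples are synthetic stand-ins for simulation output, and $\delta_L^0 = 466$ µm is the H2 value quoted above.

```python
import numpy as np

F = 1.0
delta_L0 = 466e-6  # laminar flame thickness [m], H2 case

# Synthetic (K, S_c) records over the fitting interval [20/F, 100/F] 1/s;
# the positive slope chosen here mimics a Le < 1 flame (negative L_c).
K = np.linspace(20.0 / F, 100.0 / F, 9)      # stretch rate [1/s]
S_c = 0.41 + 2.0e-4 * K                      # consumption speed [m/s]

# Eq. (2): S_c = S_L^0 - L_c * K, so slope = -L_c and intercept = S_L^0.
slope, intercept = np.polyfit(K, S_c, 1)
L_c, S_L0 = -slope, intercept
Ma_c = L_c / (F * delta_L0)                  # Eq. (7)
print(f"S_L0 = {S_L0:.3f} m/s, L_c = {L_c*1e6:.0f} um, Ma_c = {Ma_c:.2f}")
```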
3.1.3 Estimation of $\gamma_F$

The methodology presented in Sec. 2 is used to determine the appropriate $\gamma_F$ to retrieve the reference Markstein length for a thickened flame. In particular, the evolution of $\mathrm{Ma}(\gamma)$ computed from spherical flames, as represented in Fig. 2 for the H2 case, is used to determine $\gamma_F$. Table 2 gives the correction factor values $\gamma_F$ for various thickening factors $F$, based either on $\mathrm{Ma}_d^b(\gamma)$ or on $\mathrm{Ma}_c(\gamma)$.

Table 2: Correction factor values $\gamma_F$ based on $\mathrm{Ma}_c$ and $\mathrm{Ma}_d^b$.

Fuel    Method        γF=4    γF=8    γF=16
C8H18   displacement  0.545   0.510   0.495
C8H18   consumption   0.622   0.589   0.574
H2      displacement  1.621   1.829   1.959
H2      consumption   1.860   2.211   2.472

3.1.4 A posteriori validation on spherical flames

Fig. 3 illustrates the evolution of $S_c$ with the stretch $K$ for the H2 spherical flames. The reference case is represented by the continuous black line. For the present lean H2/air mixture (Le < 1), the consumption Markstein length is negative and $S_c$ is greatly overestimated for the F = 8 thickened flame. A simulation with the TFM correction proposed by Quillatre [8] is also added in Fig. 3. According to Eq. (4), the corrected Lewis numbers rapidly tend towards 1 as $F$ increases. This is evidenced here in Fig. 3 by the fact that $S_c$ computed with Quillatre's correction is insensitive to curvature. The flame consumption speed is overestimated when using $\gamma_{F=8}$ estimated from $\mathrm{Ma}_d^b(\gamma)$, as seen in Fig. 3. On the contrary, $S_c$ is perfectly recovered when using $\gamma_{F=8}$ estimated from $\mathrm{Ma}_c(\gamma)$. Similarly, the stretched-flame displacement speeds $S_d^b(K)$ obtained with those various approaches are provided in Fig. 4. The flame displacement speed is overestimated for the non-corrected thickened flame, while it is underestimated with the Quillatre correction and with the diffusion correction based on $\mathrm{Ma}_c(\gamma)$. The evolution of $S_d^b(K)$ is accurately recovered with the corrected TFM based on $\mathrm{Ma}_d^b(\gamma)$.

Figure 3: Flame consumption speed with respect to stretch for the H2 mixture (Φ = 0.47, T = 300K, P = 1bar).

Finally, the proposed corrected TFM enables an accurate flame response to stretch to be recovered, preserving either the flame consumption speed or the flame displacement speed of an iso-surface considered in the burnt gas. However, it appears that preserving both Markstein lengths $L_c$ and $L_d^b$ simultaneously is impossible. It is then necessary to understand which formulation performs better in actual flame-turbulence interactions. A first answer to this question is attempted in the next section, where flame-vortex interactions are considered.

3.2 Validation on flame-vortex interactions

3.2.1 Numerical set-up

The validation set-up consists of a planar flame propagating towards a vortex. The 2D computational domain is square with side length $l = 200\delta_L^0$. The incoming vortex is defined with radius $R = 10\delta_L^0$ and its azimuthal velocity at $R$ is $u' = 5S_L^0$. Hence, the characteristic vortex time is $\tau_{vortex} = R/u' = 2\tau_{flame}$, where $\tau_{flame} = \delta_L^0/S_L^0$ is the flame time. The base mesh size is such that the vortex radius is discretized on 12 grid points, i.e. $\Delta x = R/12 = 0.8\delta_L^0$, and the TFM-AMR model is used to ensure a flame front resolution of $n = 10$ points in the flame using Adaptive Mesh Refinement (AMR) [15]. The initial vortex position is adjusted so that the centre of the vortex is equidistant from the peak reaction location for the real and thickened flame cases. We consider here the C8H18 case only, since planar lean H2/air flames are subject to thermo-diffusive flame instabilities due to the low Lewis number: the small-scale instability structures that appear at F = 1 are nearly suppressed at F = 4, so the methodology cannot be assessed on the lean H2 case. Four simulations are carried out to evaluate the performance of the newly developed corrected TFM model: (i) a reference simulation with F = 1; (ii) a simulation with F = 4 and the standard TFM; (iii) a simulation with F = 4 and the corrected TFM based on $L_d^b$; (iv) a simulation with F = 4 and the corrected TFM based on $L_c$.

3.2.2 Results

Fig. 5 shows the fuel mass fraction field for the various configurations. The white line localizes the isotherm T = 400K. The reference case (F = 1) is a fully resolved non-thickened flame front. The result obtained with the standard TFM model for F = 4 is represented in the top right plot. A clear difference of flame front surface is observed between the F = 1 and F = 4 cases. The concave region towards the burnt gases enters deeper into the fresh gases for the reference flame than for the thickened flame. In fact, the local flame speed is underestimated since the Markstein length is overestimated by a factor F = 4 in the thickened case. Using the diffusion corrections based on $L_c$ and $L_d^b$, respectively on the bottom left and bottom right of Fig. 5, the surface of the thickened flame is well retrieved.

Figure 5: C8H18 mass fraction profiles at t = 17τ_flame with (F = 1; γ = 1) (top left), (F = 4; γ = 1) (top right), (F = 4; γ = 0.622) (bottom left) and (F = 4; γ = 0.545) (bottom right). The white line represents the T = 400K isotherm.

A more quantitative comparison is performed considering space-averaged properties of the flame. We define the flame length as:

$$L_{flame} = \int_S \|\nabla c\| \, dS \qquad (10)$$

where $c$ is the progress variable defined as $c = 1 - Y_{C8H18}/Y_{C8H18}^u$, with $Y_{C8H18}^u$ the value in the fresh gases, and $S$ refers to the 2-D computational domain. The evolution of $L_{flame}$ in time is represented in Fig. 6.

Figure 6: Flame length with respect to flame times.

Fig. 6 shows a significant under-estimation of the flame length for the thickened case with standard TFM compared to the non-thickened case. This is due to the flame speed underestimation induced by TFM. For all thickened flame fronts, with or without the corrected diffusion coefficients, the maximal flame length is the same. The underestimation of the maximal flame length for thickened flames compared to the reference flame is due to a loss of flame surface in the highly curved zones encountered after the penetration of the vortex into the flame (see the snapshots in Fig. 5). For the thickened flame front without correction, the maximal flame length is reached with a delay equivalent to 1.5τ_flame compared to the reference case (Fig. 6). On the contrary, the corrected TFM case based on displacement Markstein numbers reaches the maximal peak of flame length with an advance of 0.5τ_flame. The maximal flame length is obtained simultaneously for the real flame and the thickened front with the diffusion correction based on consumption Markstein numbers. For a given flame geometry, here characterized by the flame length $L_{flame}$, an averaged flame consumption speed can be defined as:

$$\overline{S_c} = \frac{1}{L_{flame}} \times \frac{-1}{\rho_u \left(Y_f^u - Y_f^b\right)} \int_S \dot{\omega}_f \, dS \qquad (11)$$

with index $f$ referring to the fuel species.
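On a discrete snapshot, Eqs. (10) and (11) reduce to two area-weighted sums. The sketch below evaluates both on a uniform 2-D grid; all fields and physical values (ρ_u, the fuel mass fractions, the synthetic profiles) are illustrative placeholders rather than data from the simulations described here.

```python
import numpy as np

# Synthetic 2-D fields on a uniform grid (placeholders for simulation output).
nx, ny = 256, 256
dx = dy = 0.8 * 357e-6                       # ~0.8 * delta_L^0, C8H18 case [m]
x = np.arange(nx)[:, None] * dx + np.zeros((1, ny))
c = 0.5 * (1.0 + np.tanh((x - 0.5 * nx * dx) / (5.0 * dx)))  # progress variable
omega_f = -40.0 * c * (1.0 - c)              # hypothetical fuel reaction rate

# Eq. (10): flame length as the surface integral of |grad c|.
gx, gy = np.gradient(c, dx, dy)
L_flame = np.sum(np.hypot(gx, gy)) * dx * dy

# Eq. (11): averaged consumption speed (rho_u, Y_f values are assumptions).
rho_u, Yf_u, Yf_b = 1.17, 0.062, 0.0
S_c_avg = -np.sum(omega_f) * dx * dy / (rho_u * (Yf_u - Yf_b) * L_flame)
print(f"L_flame = {L_flame * 1e3:.2f} mm, averaged S_c = {S_c_avg:.3f} m/s")
```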
Fig. 7 displays the averaged consumption speed as a function of the flame length. The laminar flame speed deduced from the simulation of a 1D planar flame with the CONVERGE solver [START_REF] Richards | [END_REF], which equals $S_L^0$ = 0.37 m/s, is added to the plot. The flame-vortex interaction induces a flame length increase with a predominantly concave curvature towards the burnt gases. As Le > 1, this configuration leads to a decrease of the flame speed, as illustrated by the reference case in Fig. 7. The thickened flame front without correction features an overestimated decrease of the averaged speed, in line with the over-sensitivity of the TFM model to stretch. The averaged consumption speed obtained with the correction based on the displacement Markstein length is on the contrary over-estimated, which is consistent with the conclusions drawn from Fig. 3. For the thickened flame front with the diffusion correction deduced from the consumption speed, $\overline{S_c}$ coincides with the reference case. Hence, the change of consumption speed induced by preferential diffusion effects is accurately recovered for a thickened flame front when the species diffusion is modified according to the proposed methodology based on the consumption Markstein number $\mathrm{Ma}_c$.

Figure 7: Averaged consumption speed with respect to the flame length.

Conclusions

The present study proposes a framework to correctly take into account stretch effects on artificially thickened flames. The standard TFM approach induces an over-sensitivity of thickened flame fronts to stretch, which leads to a multiplication of the Markstein length by the thickening factor $F$. To overcome this limitation, we proposed a TFM-adapted methodology to recover the flame speed of weakly stretched flames. The model, based on the adaptation of species Lewis numbers, was evaluated both for a Le < 1 mixture (lean H2/air case) and for a Le > 1 mixture (stoichiometric C8H18/air case). The present work illustrates the distinction between the influence of stretch (i) on the flame consumption speed and (ii) on the flame displacement speed, characterized respectively by a consumption Markstein length $L_c$ and a displacement Markstein length $L_d^b$ [START_REF] Giannakopoulos | Consumption and displacement speeds of stretched premixed flames -theory and simulations[END_REF]. It was shown that our TFM-adapted diffusion model can be used to recover the reference value of either $L_c$ or $L_d^b$; however, it is not possible to preserve both Markstein lengths simultaneously. Both approaches were shown to provide improved results compared to the standard TFM on a flame-vortex interaction case. Choosing $L_c$ as a target allows, in addition, the mean consumption speed to be recovered nearly exactly. Both approaches will need to be evaluated on practical burner cases in a future study.

Acknowledgements

The authors gratefully acknowledge Damien Aubagnac-Karkar (IFPEN) for his technical and scientific assistance on the implementation of the model in the Cantera software.
03872326
en
[ "sdv", "sdv.bbm.bc", "sdv.bc.ic" ]
2024/03/04 16:41:24
2001
https://amu.hal.science/hal-03872326/file/183-5-744.pdf
Dr. Emmanuel Fenouillet (email: [email protected]), Rym Barbouche, Joël Courageot, Raymond Miquelis

The Catalytic Activity of Protein Disulfide Isomerase Is Involved in Human Immunodeficiency Virus Envelope-Mediated Membrane Fusion after CD4 Cell Binding

Protein disulfide isomerase (PDI) is a multifunctional protein with thiol-disulfide redox-isomerase activities. It catalyzes thiol-disulfide interchange reactions on the cell surface that may cause structural modifications of exofacial proteins. PDI inhibitors alter human immunodeficiency virus (HIV) spread, and it has been suggested that PDI may be necessary to trigger HIV entry. This study examined this hypothesis by using cell-to-cell fusion assays, in which the HIV envelope (Env) expressed on the cell surface interacts with CD4+ lymphocytes. PDI is clustered at the lymphocyte surface in the vicinity of CD4-enriched regions, but the two antigens essentially do not colocalize. Anti-PDI antibodies and 2 inhibitors of its catalytic function altered Env-mediated membrane fusion at a post-CD4 cell binding step. The fact that the PDI catalytic activity present on lymphocytes is required for fusion supports the hypothesis that catalysts assist post-CD4 cell binding conformational changes within Env.

PDI is considered to be a multifunctional protein with an important role in the processing of secretory proteins in the endoplasmic reticulum. It has also been detected in association with the cell surface of various secretory cells (e.g., rat exocrine pancreatic cells [5], platelets [6], lymphoid cells [7], the erythroleukemia cell line [START_REF] Zai | Cell-surface protein disulfide isomerase catalyzes transnitrosation and regulates intracellular transfer of nitric oxide[END_REF], and thyrocytes [START_REF] Mezghrani | Protein disulfide isomerase in FRTL-5 cells: pH dependent thyroglobulin/PDI interactions determine a novel PDI function in the post endoplasmic reticulum of thyrocytes[END_REF]). Some reports have indicated that cell surface PDI catalyzes thiol-disulfide interchange reactions. This activity may control the functions of extracellular proteins through the induction of structural modifications. For example, PDI influences the cleavage of disulfides required for the activation of the diphtheria toxin through a thiol-disulfide interchange reaction [START_REF] Ryser | Cell surface sulfhydryls are required for the cytotoxicity of diphtheria toxin but not of ricin in Chinese hamster ovary cells[END_REF], and it mediates functional conformation changes that lead to the shedding of the human thyrotropin receptor ectodomain through the cleavage of disulfide bonds [START_REF] Couet | Cell surface protein disulfide isomerase is involved in the shedding of human thyrotropin receptor ectodomain[END_REF].

The mature human immunodeficiency virus (HIV) envelope (Env) is composed of the outer membrane glycoprotein (gp) 120 and transmembrane gp41 subunits. HIV binding to CD4+ lymphocytes is mediated by gp120 interaction with CD4 [START_REF] Ugolini | HIV-1 attachment: another look[END_REF]. After CD4 binding, various gp120 domains interact with various viral coreceptors present on the lymphocyte surface, including chemokine receptors and enzymes [START_REF] Ugolini | HIV-1 attachment: another look[END_REF][START_REF] Avril | Identification of the U937 membrane-associated proteinase interacting with the V3 loop of HIV-1 gp120 as cathepsin G[END_REF][START_REF] Broder | Chemokine receptors and HIV[END_REF].
Conformational changes within Env follow these interactions between the virus and the surface of the target cells [START_REF] Broder | Chemokine receptors and HIV[END_REF]. These post-CD4 binding events are necessary to activate the fusogenicity of gp41 and, consequently, to induce virus-cell membrane fusion and virus entry [START_REF] Chan | HIV entry and its inhibition[END_REF]. These structural modifications are usually thought to result primarily from the intrinsic properties of the Env interacting with HIV receptors. It may be assumed, however, that catalysts also assist in these conformation changes, which may include a partial reorganization of the network of the 10 Env disulfide bonds [START_REF] Leonard | Assignment of intrachain disulfide bonds and characterization of potential glycosylation sites of the type 1 recombinant human immunodeficiency virus envelope glycoprotein (gp120) expressed in Chinese hamster ovary cells[END_REF]. Such a hypothetical modification requires a redox-isomerase activity, and PDI is a prime candidate to exert this function. The observation made in a pioneering work by Ryser et al. [START_REF] Ryser | Inhibition of human immunodeficiency virus infection by agents that interfere with thiol-disulfide interchange upon virus-receptor interaction[END_REF] supports this hypothesis. These authors showed that the catalytic function of PDI is required for HIV spread in cell cultures, since various inhibitors blocking this activity interfere with HIV infection. On the basis of this result, they postulated, but did not show, that PDI is necessary for virus-cell fusion and HIV entry. Their work [START_REF] Ryser | Inhibition of human immunodeficiency virus infection by agents that interfere with thiol-disulfide interchange upon virus-receptor interaction[END_REF] is the only one on the role of PDI in HIV infection, and it is not clear why so much time has elapsed without further studies on this topic. Several years after the discovery of the coreceptors, however, the molecular basis of the receptor-induced conformational changes that take place in Env remains to be elucidated. The role that PDI plays in Env conformation changes is intriguing and deserves further study.

Materials and Methods

Reagents. We used PDI (Pierce) and anti-PDI rabbit polyclonal antibodies (SPA-890; StressGen). LY181984 was a generous gift from Lilly Laboratories [START_REF] Morre | A protein disulfidethiol interchange activity of HeLa plasma membranes inhibited by the antitumor sulfonylurea N-(4-methylphenylsulfonyl)-N'-(4-chlorophenyl) urea[END_REF]. Chemical products were of analytical grade (Sigma). VV9-1 is a recombinant vaccinia virus (rVV; gift of M. P. Kieny, Transgène S.A.) that encodes the native fusogenic HIV Lai Env [START_REF] Kieny | Improved antigenicity of the HIV env protein by cleavage site removal[END_REF]. VV1163 is an rVV that encodes a soluble form of gp160 Lai resulting from the disruption of both cleavage sites present on the native gp160 precursor and from the elimination of the transmembrane domain of the gp41 subunit; consequently, this antigen does not associate with the cell surface [START_REF] Kieny | Improved antigenicity of the HIV env protein by cleavage site removal[END_REF].

Confocal laser scanning microscopy (CLSM). We cultured human lymphoid CEM, MolT4, and C8166 CD4+ cells in RPMI 1640 medium supplemented with 5% fetal calf serum (FCS) and 1% glutamine (Gln).
In 2 of 4 experiments, cells were activated with the Phaseolus vulgaris leucoagglutinin lectin (PHA; Sigma; 5 µg/mL) for 12 h [START_REF] Hasegawa-Sasaki | Phytohemagglutinin induces rapid degradation of phosphatidylinositol 4,5-bisphosphate and transient accumulation of phosphatidic acid and diacylglycerol in a human T lymphoblastoid cell line, CCRF-CEM[END_REF][START_REF] Clevers | Wheat germ agglutinin activates human T lymphocytes by stimulation of phosphoinositide hydrolysis[END_REF]. After a wash step, cells were incubated with blocking buffer (PBS; 10% bovine serum albumin [BSA]) and then with the anti-PDI antibodies SPA-890 (1:50 in PBS-0.1% BSA) for 1 h at 4°C [START_REF] Mezghrani | Protein disulfide isomerase in FRTL-5 cells: pH dependent thyroglobulin/PDI interactions determine a novel PDI function in the post endoplasmic reticulum of thyrocytes[END_REF][START_REF] Courageot | Intracellular degradation of the HIV-1 envelope glycoprotein[END_REF]. After a wash with PBS-10% BSA, anti-rabbit antibodies (1:100) coupled to Texas red dye (Jackson Immunoresearch) were added for 1 h. The mouse anti-CD4 monoclonal antibody (MAb) OKT4 (1:30; Ortho Diagnostics Systems) and anti-mouse antibodies (1:100) coupled to fluorescein isothiocyanate dye were used to label CD4. Cells were then washed, fixed in 4% (wt/vol) paraformaldehyde for 15 min, and mounted on slides in Mowiol (Calbiochem). Samples were submitted to CLSM (Leica TCS 4D; Leica Lasertechnik). We monitored a focal series of 8-10 horizontal section planes (0.5 µm apart) for each staining. The individual 8-bit-encoded 512 × 512 pixel images, one for each channel for each section plane, were visualized with a true color display screen and were processed for analysis of colocalization. A single horizontal section plane representative of the cell sample was used for illustration. To assess fluorescence colocalization, the 512 × 512 pixel images corresponding to the single horizontal section plane used for illustrations were processed with our own computer analysis software. This method selects the regions of true overlap of green and red staining from the non-colocalized staining and allocates white dots to pixels corresponding to the regions where a yellow (red-green colocalization) signal is detected.
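The colocalization selection performed by the in-house software can be approximated with a few lines of array arithmetic. The sketch below is an illustrative reconstruction only: the intensity threshold is an assumption, since the original software's selection criteria are not detailed, and the random arrays merely stand in for the two aligned channel images.

```python
import numpy as np

# Two aligned 8-bit channels (placeholders for the acquired section plane).
rng = np.random.default_rng(0)
red = rng.integers(0, 256, (512, 512), dtype=np.uint8)    # PDI (Texas red)
green = rng.integers(0, 256, (512, 512), dtype=np.uint8)  # CD4 (FITC)

threshold = 50  # hypothetical cutoff separating specific label from background
red_on = red > threshold
green_on = green > threshold
coloc = red_on & green_on  # pixels carrying a "yellow" (red-green) signal

# Build the display image and allocate white dots to colocalized pixels.
overlay = np.stack([red, green, np.zeros_like(red)], axis=-1)
overlay[coloc] = (255, 255, 255)

frac = coloc.sum() / max((red_on | green_on).sum(), 1)
print(f"colocalized fraction of labeled pixels: {frac:.1%}")
```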
Aggregation and cell-to-cell fusion assays. CEM cells cultured in 48-well plates (3 × 10^5 cells/300 µL) were infected with VV9-1 (2 pfu/cell), as described elsewhere [START_REF] Fenouillet | Functional role of the glycan cluster of HIV-1 transmembrane glycoprotein gp41[END_REF]. Various molecules (bacitracin, 5,5′-dithiobis(2-nitrobenzoic acid) [DTNB], PDI, anti-PDI antibodies, and LY181984, at the concentrations indicated below) were added to the wells 5 h after rVV infection and before Env-mediated cell aggregation and cell-to-cell fusion appeared in the control well, which was free of inhibitor throughout the experiment. We obtained similar data when the compounds were added together with VV9-1 at the onset of the experiment. We assessed the ratio (percentage) of the total surface occupied either by cell aggregates (before syncytia appeared) or by syncytia (24 h after infection) versus the surface of the field of vision in 5 fields of vision as follows. First, we scanned a photograph of each field of vision. Then, with Adobe Photoshop software (Adobe Systems) and our own computer analysis software, the areas occupied by aggregates or syncytia were outlined on the photograph, and the sum of the corresponding surfaces was calculated and plotted versus the surface of the field of vision (a minimal sketch of this measurement is given at the end of this subsection). The cytotoxicity of DTNB and bacitracin was assessed by a thiazolyl blue (MTT) assay (Sigma): cells were incubated with MTT (0.5 mg/mL) for 3 h at 37°C. The reaction was stopped with a lysis buffer (50% dimethylformamide and 20% SDS, pH 4.5). The amount of MTT formazan product was determined (570/655 nm). In some experiments, CEM or baby hamster kidney (BHK-21) cells (10^5 cells/mL) were infected with VV9-1 (1 and 5 pfu/cell, respectively) in 24-well plates, as described elsewhere [START_REF] Fenouillet | Functional role of the glycan cluster of HIV-1 transmembrane glycoprotein gp41[END_REF][START_REF] Fenouillet | Recombinant HIV envelope expressed in an alpha glucosidase I deficient CHO cell line and its parental cell line in the presence of deoxynojirimycin is functional[END_REF][START_REF] Fenouillet | Biological properties of recombinant HIV envelope synthesized in glycosylation mutant CHO cell lines[END_REF], and were incubated with PDI inhibitors 8 h later. CD4-negative BHK-21 cells were cultured in Glasgow medium supplemented with 5% FCS, 5% tryptose phosphate, and 1% Gln. Such low cell densities do not enable cell aggregation or syncytium formation at postinfection day 2 in CEM cell culture [START_REF] Barbouche | An anti-human immunodeficiency virus multiple antigen peptide encompassing the cleavage region of the Env precursor interferes with membrane fusion at a post-CD4 binding step[END_REF]. Cell supernatants were removed 36 h after infection to prevent the subsequent infection by VV9-1 of the MolT4 fusion partner cells (5 × 10^5 cells/mL), which expressed the various HIV receptors and which were added, in the presence or absence of inhibitors, for another 10-15 h (see also [START_REF] Fenouillet | Recombinant HIV envelope expressed in an alpha glucosidase I deficient CHO cell line and its parental cell line in the presence of deoxynojirimycin is functional[END_REF][START_REF] Fenouillet | Biological properties of recombinant HIV envelope synthesized in glycosylation mutant CHO cell lines[END_REF]). Cell aggregates and fusions were assessed as described above. Finally, C8166 cells chronically infected with HIV MN were washed and incubated in 96-well plates (10^4 cells/100 µL) [START_REF] Benjouad | Effect of sialic acid removal on the antibody response to the third variable domain of human immunodeficiency virus type-1 envelope glycoprotein[END_REF]. MolT4 cells (5 × 10^4 cells/100 µL) were then added in the presence or absence of DTNB. Syncytia were counted 18 h later, as described elsewhere [START_REF] Benjouad | Effect of sialic acid removal on the antibody response to the third variable domain of human immunodeficiency virus type-1 envelope glycoprotein[END_REF].
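A minimal sketch of the surface-ratio measurement is given below, assuming the outlined aggregates or syncytia have already been converted into a boolean mask per photographed field; the masks shown are synthetic placeholders.

```python
import numpy as np

def occupied_surface_ratio(mask: np.ndarray) -> float:
    """Percentage of the field of vision covered by outlined regions."""
    return 100.0 * mask.sum() / mask.size

# Placeholder masks standing in for the 5 scanned fields of vision.
fields = []
for seed in range(5):
    mask = np.zeros((480, 640), dtype=bool)
    mask[100:220, 150 + 10 * seed:330] = True  # one hypothetical outline
    fields.append(mask)

ratios = [occupied_surface_ratio(m) for m in fields]
print("per-field ratios:", [f"{r:.1f}%" for r in ratios])
print(f"mean over fields: {np.mean(ratios):.1f}%")
```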
Detection of production of Env species by Western blot. To express Env, CEM cells were infected with VV9-1 (2 pfu/cell) in the presence or absence of DTNB (10^-3 M). Cell pellets (10^6 cells) were washed once in PBS (pH 7.4) and were lysed in sample buffer, as described elsewhere [START_REF] Barbouche | An anti-human immunodeficiency virus multiple antigen peptide encompassing the cleavage region of the Env precursor interferes with membrane fusion at a post-CD4 binding step[END_REF]. Cleared lysates and cell-free supernatants were analyzed by 10% SDS-PAGE. Gels were blotted onto a nitrocellulose filter. After blocking with PBS-2% casein (PBSC) and 0.5% Tween 20 for 1 h, the nitrocellulose strips were incubated for 1 h at 25°C with a pool of human HIV-positive or HIV-negative serum samples (1:100) [START_REF] Fenouillet | Functional role of the glycan cluster of HIV-1 transmembrane glycoprotein gp41[END_REF][START_REF] Fenouillet | Immunological analysis of human immunodeficiency virus type 1 envelope glycoprotein proteolytic cleavage[END_REF]. Incubations and washes were done in PBSC and 0.5% Tween 20. After 2 washes, staining was performed by using peroxidase-coupled antibodies (1:100; Dakopatts) and diaminobenzidine.

Detection of expression of Env at the cell surface. CEM cells were infected as described above. After a wash with PBS, cells were incubated for 2 h at 25°C with various dilutions (1:30-1:10,000 in PBSC) of a pool of HIV-positive serum samples to detect membrane Env, as described elsewhere [START_REF] Fenouillet | Recombinant HIV envelope expressed in an alpha glucosidase I deficient CHO cell line and its parental cell line in the presence of deoxynojirimycin is functional[END_REF][START_REF] Fenouillet | Biological properties of recombinant HIV envelope synthesized in glycosylation mutant CHO cell lines[END_REF][START_REF] Barbouche | An anti-HIV peptide construct derived from the cleavage region of Env acts on Env fusogenicity through the presence of a functional cleavage sequence[END_REF]. The use of a pool of serum samples instead of MAbs to quantify surface Env expression avoids potential drawbacks due to immunoreactivity changes within Env in relation to the use of inhibitors [START_REF] Fenouillet | Recombinant HIV envelope expressed in an alpha glucosidase I deficient CHO cell line and its parental cell line in the presence of deoxynojirimycin is functional[END_REF][START_REF] Fenouillet | Biological properties of recombinant HIV envelope synthesized in glycosylation mutant CHO cell lines[END_REF]. Incubations and washes were performed in PBSC in the presence of 0.1% NaN3, which blocks endocytosis [START_REF] Barbouche | An anti-human immunodeficiency virus multiple antigen peptide encompassing the cleavage region of the Env precursor interferes with membrane fusion at a post-CD4 binding step[END_REF][START_REF] Barbouche | An anti-HIV peptide construct derived from the cleavage region of Env acts on Env fusogenicity through the presence of a functional cleavage sequence[END_REF] and enables antibody binding to the cell surface Env to be monitored. To assess whether the signal induced by the pool of HIV-positive serum samples depended on the presence of cell surface-associated Env, we used the following controls: VV9-1-infected cells were incubated with a pool of HIV-negative serum samples, and VV1163-infected or uninfected cells were incubated with the pool of HIV-positive serum samples. After a wash step, peroxidase-coupled anti-human antibodies were added (1:100 in PBSC) for 1 h. After 2 washes, we added ortho-phenylenediamine to the cells. We measured the absorbance (492 nm/620 nm) of the cell supernatant 30 min after cell pelleting, to prevent interference with the absorbance measurement.
These conditions enable the specific detection of surface-associated Env in an antibody concentration-dependent manner [START_REF] Fenouillet | Functional role of the glycan cluster of HIV-1 transmembrane glycoprotein gp41[END_REF][START_REF] Fenouillet | Recombinant HIV envelope expressed in an alpha glucosidase I deficient CHO cell line and its parental cell line in the presence of deoxynojirimycin is functional[END_REF][START_REF] Barbouche | An anti-HIV peptide construct derived from the cleavage region of Env acts on Env fusogenicity through the presence of a functional cleavage sequence[END_REF].

Env and CD4 binding assay. Soluble CD4 (gift of I. M. Jones via the Medical Research Council [United Kingdom] Reagents Program) was labeled as described elsewhere [START_REF] Barbouche | An anti-human immunodeficiency virus multiple antigen peptide encompassing the cleavage region of the Env precursor interferes with membrane fusion at a post-CD4 binding step[END_REF]: 5 µg was incubated for 30 min at 25°C in a 4-nM iodogen-coated tube with 250 µCi of 125I-labeled Na. The labeling reaction was stopped by the addition of an excess of tyrosine. The labeled antigen (30 µCi/µg) was purified by chromatography. CD4-negative BHK-21 cells were infected with VV9-1 (10 pfu/cell) for 2 days, to enable a high level of Env expression. Cells (1.5 × 10^6 and 3 × 10^6) that expressed equivalent amounts of cell surface Env (determined as described above), regardless of the presence of DTNB during Env synthesis, were washed in PBS and incubated for 2 h at 37°C with 125I-labeled CD4 (3 × 10^5 cpm/100 µL) in PBSC-0.1% NaN3. After 3 washes, cell-associated radioactivity was assessed. Uninfected and VV1163-infected cells were similarly processed, to determine the background binding of CD4 to cell surfaces exhibiting no Env species. The specificity of the 125I-CD4 labeling was also assessed by incubating VV9-1-infected cells with unlabeled soluble CD4 (1 µg/10^6 cells) for 30 min before the addition of 125I-labeled CD4.

Results

PDI is present near CD4-enriched regions on lymphoid cell surfaces. We verified whether the target of the PDI inhibitors was present on the surface of the CD4+ lymphocytes used in our assays, and we investigated the surface distribution of the CD4 and PDI labeling. About 80% and 50% of CEM cells exhibited readily detectable CD4 (green) and PDI (red) labeling, respectively (figure 1). The labeling obtained by using the anti-CD4 antibody was rather homogeneous and was present on ~50%-70% of the labeled cell surfaces. In contrast, the labeling obtained with the anti-PDI antibodies was organized in small clusters on the cell surface. To study whether the surface expression of PDI depended on the state of lymphocyte activation, lymphocytes were treated with PHA under conditions that induce CEM cell activation [START_REF] Hasegawa-Sasaki | Phytohemagglutinin induces rapid degradation of phosphatidylinositol 4,5-bisphosphate and transient accumulation of phosphatidic acid and diacylglycerol in a human T lymphoblastoid cell line, CCRF-CEM[END_REF][START_REF] Clevers | Wheat germ agglutinin activates human T lymphocytes by stimulation of phosphoinositide hydrolysis[END_REF]. This treatment did not substantially modify the intensity of the PDI labeling, but it increased the CD4 labeling (figure 1).
PDI and CD4 were also detected on the surface of C8166 and MolT4 cells (data not shown). Of note, in these cell lines, the PDI labeling was usually near the cell surface regions where the CD4 labeling was important. However, PDI and CD4 exhibited essentially no colocalization on the cell surface, as indicated by the low level of yellow (red-green colocalization) signal on the true color display screen of our equipment in the surface regions where both labelings coexisted. The data shown in figure 1 were processed to further demonstrate the regions of true overlap of the anti-PDI and anti-CD4 labeling. With this approach, the low level of colocalization of CD4 and PDI is immediately obvious (figure 1).

PDI inhibitors, cell aggregation, and cell-to-cell fusion. The catalytic activity of PDI was specifically [START_REF] Ryser | Inhibition of human immunodeficiency virus infection by agents that interfere with thiol-disulfide interchange upon virus-receptor interaction[END_REF] inhibited by anti-PDI antibodies and by 2 membrane-impermeant PDI inhibitors, DTNB and the antibiotic bacitracin. Both have a narrow specificity: DTNB is a sulfhydryl blocker that covalently reacts with the catalytic sites of PDI and abolishes its capacity for cleaving disulfides, whereas bacitracin inhibits both its reductive and oxidative functions [START_REF] Couet | Cell surface protein disulfide isomerase is involved in the shedding of human thyrotropin receptor ectodomain[END_REF][START_REF] Ryser | Inhibition of human immunodeficiency virus infection by agents that interfere with thiol-disulfide interchange upon virus-receptor interaction[END_REF][START_REF] Mandel | Inhibition of a reductive function of the plasma membrane by bacitracin and antibodies against protein disulfide isomerase[END_REF]. We first determined whether these inhibitors could inhibit the Env-mediated cell-to-cell fusion process in human lymphocyte cultures infected by an rVV encoding mature native Env on the cell surface. This assay is reliable for addressing both Env bioactivity and its ability to interact with a cell surface presenting the various membrane components required for membrane fusion [START_REF] Ashorn | Vaccinia virus vectors for study of membrane fusion mediated by human immunodeficiency virus envelope glycoprotein and CD4[END_REF]. It enables the efficient discrimination between Env binding to CD4 and the membrane fusion step [START_REF] Kieny | Improved antigenicity of the HIV env protein by cleavage site removal[END_REF][START_REF] Fenouillet | Functional role of the glycan cluster of HIV-1 transmembrane glycoprotein gp41[END_REF][START_REF] Barbouche | An anti-human immunodeficiency virus multiple antigen peptide encompassing the cleavage region of the Env precursor interferes with membrane fusion at a post-CD4 binding step[END_REF]. Indeed, the presence of cell aggregates depends on the interaction of cell surface-associated gp120 with CD4, as shown elsewhere [START_REF] Barbouche | An anti-human immunodeficiency virus multiple antigen peptide encompassing the cleavage region of the Env precursor interferes with membrane fusion at a post-CD4 binding step[END_REF].
In brief, the relevance of these aggregates in terms of gp120-CD4 interaction is shown by the fact that these formations are blocked [START_REF] Barbouche | An anti-human immunodeficiency virus multiple antigen peptide encompassing the cleavage region of the Env precursor interferes with membrane fusion at a post-CD4 binding step[END_REF] when Env is expressed by using an rVV coding for a soluble form of Env that does not associate with the cell surface (soluble gp120 and soluble gp160, encoded by VV1132 and VV1163, respectively [START_REF] Kieny | Improved antigenicity of the HIV env protein by cleavage site removal[END_REF]), when soluble CD4 (1 µg/100 µL) is added to the cell culture medium, or when the anti-CD4 neutralizing Leu3a antibody (500 ng/100 µL) is present in the cell medium. When Env was expressed at the cell surface by use of an rVV coding for fusogenic Env (VV9-1) and when the fusion partner cells expressed surface CD4 in a context of various HIV coreceptors [START_REF] Ashorn | Vaccinia virus vectors for study of membrane fusion mediated by human immunodeficiency virus envelope glycoprotein and CD4[END_REF], the aggregates between Env-expressing cells and fusion partners developed in a few hours into large multinucleated cells (syncytia). These syncytia specifically reflect the coreceptor-dependent, gp41-mediated membrane fusion process, as demonstrated by the following data. First, the presence of anti-V3 neutralizing antibodies interfered with cell-to-cell fusion but not with cell aggregation. Second, large aggregates, but no syncytium, appeared when Env was expressed on the cell surface by a gene coding for a nonfusogenic Env. The latter resulted from the disruption of both gp160 cleavage sites (which impaired the activation of gp41 fusogenicity; VV1139 [START_REF] Kieny | Improved antigenicity of the HIV env protein by cleavage site removal[END_REF]), the removal of the cleavage site (REKR→NEHQ, which is essential for unmasking the gp41 fusion peptide; VV1134 [START_REF] Kieny | Improved antigenicity of the HIV env protein by cleavage site removal[END_REF]), or the elimination of the gp41 fusion peptide (VV1136) [START_REF] Kieny | Improved antigenicity of the HIV env protein by cleavage site removal[END_REF]. These control procedures were carried out during the development of the assays, to assess routine experimental procedures and to determine the reliability of the test format (see [START_REF] Barbouche | An anti-human immunodeficiency virus multiple antigen peptide encompassing the cleavage region of the Env precursor interferes with membrane fusion at a post-CD4 binding step[END_REF] for details). In each experiment, we also determined the specificity of the assay. First, cell aggregation was inhibited by 70%-80% by the addition of soluble CD4 (1 µg/well). Second, infection by VV0 (a VV vector coding for a membrane Env that exhibits CD4 binding capacity but not fusogenicity [START_REF] Fenouillet | Functional role of the glycan cluster of HIV-1 transmembrane glycoprotein gp41[END_REF]) induced cell aggregation but not cell-to-cell fusion. Third, infection by VV1163, which codes for soluble gp160, did not induce cell aggregation or syncytium formation.
Compared with control conditions (i.e., CEM cell cultures infected by VV9-1 in the absence of inhibitor), the presence of either bacitracin or DTNB in the culture medium from the time that VV9-1 was added to the cell culture exerted a dose-dependent inhibition of cell-to-cell fusion when the cultures were observed 24 h after infection. Inhibition was important at 3 × 10^-4 M (figure 2A). Syncytium formation was abolished at 10^-3 M, in contrast to cell aggregation, which remained unchanged. Similar data were obtained when cells were infected by VV9-1 in the absence of inhibitors, to allow VV infection to proceed under routine conditions, and when the anti-PDI compounds were added to the culture 5 h later. Cell viability in uninfected cell cultures was assessed in parallel experiments after 2 days of treatment with either bacitracin or DTNB, both by MTT assay and by monitoring cell counts (figure 2A). These parameters were compared with those obtained in untreated cell cultures. DTNB inhibited syncytium formation at concentrations that lacked cytotoxicity, in contrast to bacitracin. For these reasons, we used 10^-3 M DTNB in the following experiments to block cell-to-cell fusion. The sulfonylurea compound LY181984 inhibits thiol-disulfide interchange activities associated with the surface of human cell lines in the micromolar range, but not that of PDI [START_REF] Morre | A protein disulfidethiol interchange activity of HeLa plasma membranes inhibited by the antitumor sulfonylurea N-(4-methylphenylsulfonyl)-N'-(4-chlorophenyl) urea[END_REF]. Thus, its lack of effect on syncytium formation at concentrations of 10^-4 to 10^-10 M (figure 2B shows the results at 10^-4 and 10^-6 M) argues against a role in fusion for thiol-disulfide interchange activities other than that of PDI. Direct evidence of the involvement of PDI in the fusion process was obtained with anti-PDI antibodies (1:50), which inhibited cell-to-cell fusion, in contrast to irrelevant (preimmune) rabbit antibodies; cell aggregation remained unchanged. We also tested the effect of the presence of recombinant soluble PDI in the cell medium on cell-to-cell fusion and observed that a high concentration of the molecule (16 µg/mL) did not modify syncytium formation.

Env production and membrane expression in the presence of DTNB. We examined whether the inhibition by 10^-3 M DTNB of syncytium formation in the system described above was due to the inhibition of rVV infection or to an alteration of the synthesis, processing, and membrane expression of HIV Env. CEM cells expressed Env with or without DTNB. Cell pellets were lysed by using sample buffer. Cleared lysates and cell supernatants were analyzed by Western blot. Under either condition, similar quantities of the various Env species were detected in the cell pellet by using a pool of HIV-positive serum samples; gp120 was efficiently secreted into the cell supernatant (figure 3A).
The migration profiles of the Env species obtained here were similar to those reported with this expression system [START_REF] Fenouillet | Functional role of the glycan cluster of HIV-1 transmembrane glycoprotein gp41[END_REF][START_REF] Fenouillet | Recombinant HIV envelope expressed in an alpha glucosidase I deficient CHO cell line and its parental cell line in the presence of deoxynojirimycin is functional[END_REF][START_REF] Fenouillet | Biological properties of recombinant HIV envelope synthesized in glycosylation mutant CHO cell lines[END_REF][START_REF] Barbouche | An anti-HIV peptide construct derived from the cleavage region of Env acts on Env fusogenicity through the presence of a functional cleavage sequence[END_REF]. The 160-, 120-, and 41-kDa species detected with the pool of HIV-positive serum samples were HIV Env, since a pool of HIV-negative serum samples precluded their detection in samples derived from cells cultured under control conditions. Alternatively, we incubated cells with a pool of HIV-positive serum samples to semiquantify the surface expression of Env [START_REF] Fenouillet | Recombinant HIV envelope expressed in an alpha glucosidase I deficient CHO cell line and its parental cell line in the presence of deoxynojirimycin is functional[END_REF][START_REF] Fenouillet | Biological properties of recombinant HIV envelope synthesized in glycosylation mutant CHO cell lines[END_REF]. When we used saturating concentrations of serum samples (1:30-1:100), the amounts of Env expressed in the presence or absence of DTNB on the surface of the same number of cells led to similar signals (figure 3B). Nonsaturating concentrations (1:300-1:1000) also indicated that an alteration of Env expression by the DTNB treatment could not obviously account for the potent anti-Env activity of the compound. The cell surface labeling depended on the presence of surface-associated Env, as demonstrated by the following results. First, the signal obtained by incubation with HIV-positive serum samples of cells infected with VV1163, which codes for soluble gp160, represented 3%-8% of the signal obtained when VV9-1 was used. Second, a similar background signal was obtained when we used a pool of HIV-negative serum samples to detect Env at the surface of VV9-1-infected cells (data not shown) or when the HIV-positive serum pool was used for incubation with uninfected cells. These data indicate that 10^-3 M DTNB lacks important cytotoxicity: the amount of recombinant Env produced in its presence was not significantly affected, and DTNB did not significantly inhibit the cell surface expression of mature Env under our conditions.

DTNB acts on the fusion process at a post-CD4 cell binding step. We did additional aggregation/syncytium assays, in which DTNB was present during the synthesis of Env and its routing to the cell surface and/or during the coincubation between Env-expressing cells and CD4+ fusion partner cells. Our goals were to study whether DTNB exerts its inhibitory activity only during the coincubation between Env-expressing cells and fusion partners and to further determine whether DTNB inhibits gp120 binding to CD4 or the membrane fusion step. Table 1 shows the data obtained when Env was expressed at the CEM cell surface and when the fusion partners were MolT4 cells.

Table 1 NOTE. The human immunodeficiency virus envelope (Env) was expressed on the surface of CEM cells in the presence (synthesis+) or absence (synthesis−) of DTNB under conditions that do not enable cell aggregation or syncytium formation. After washing, the Env-expressing cells were incubated for 10 h with fusion partner cells in the presence (contact+) or absence (contact−) of DTNB. Alternatively, the fusion partners were incubated for 1 h with DTNB before washing and the fusion assay (MolT4 + DTNB/wash/fusion). The ratio of the total surface occupied either by cell aggregates or by syncytia vs. the surface of the field of vision was assessed (n = 4 experiments; a representative experiment is shown; means of the results obtained in 5 fields of vision are presented).
This format efficiently discriminates between the CD4-gp120 binding step and the membrane fusion step: the specificity of cell aggregation in terms of gp120-CD4 interaction and of syncytium formation in terms of coreceptor- and gp41-mediated fusion was demonstrated as described above and elsewhere [START_REF] Barbouche | An anti-human immunodeficiency virus multiple antigen peptide encompassing the cleavage region of the Env precursor interferes with membrane fusion at a post-CD4 binding step[END_REF]. In this assay, the presence of either DTNB (table 1) or anti-PDI antibodies during the coincubation of Env-expressing cells and CD4+ fusion partners (contact+ condition) significantly altered cell-to-cell fusion, compared with control conditions (synthesis−/contact−). This confirmed the results shown in figure 2A. Moreover, a substantial inhibition of fusion was obtained when the MolT4 cells were pretreated for 1 h with either DTNB or anti-PDI antibodies, to block surface PDI, before being washed and subsequently incubated with Env-expressing cells for 10 h (MolT4 + DTNB/wash/fusion); no potent effect was detected when DTNB was present only during Env synthesis (synthesis+/contact−). These 2 observations are consistent with the idea that the PDI molecules whose catalytic activity is required for membrane fusion are present at the surface of the fusion partners and are not those associated with Env+ cells. The fact that the presence of DTNB only during Env synthesis (synthesis+/contact−) could not potently inhibit the subsequent cell-to-cell fusion with fusion partner cells further demonstrates that 10^-3 M DTNB did not substantially alter the surface expression of mature and bioactive Env (figures 3A, 3B). When DTNB was present during the coincubation with fusion partner cells, cell aggregation occurred. Similar data were obtained with anti-PDI antibodies. The lack of a potent effect of both DTNB and anti-PDI antibodies on the cell aggregation process is consistent with the results obtained in the VV9-1-infected CEM cell cultures (figure 2). The observation that DTNB did not block the interaction between gp120 and the CD4 receptor present on adjacent cells was further illustrated by a molecular CD4-gp120 binding assay: the presence of DTNB during the incubation of Env-expressing BHK-21 cells with 125I-labeled CD4 did not alter CD4 binding to membrane Env, compared with control conditions in which DTNB was absent during both Env expression and coincubation with CD4 (figure 3C; competition with soluble CD4 was also determined; n = 2 experiments, duplicates done). Env expressed in cells treated with DTNB also bound 125I-labeled CD4. We controlled the specificity of the CD4 binding reaction in several ways. First, the binding of 125I-labeled CD4 to 1.5 × 10^6 Env-expressing cells was ~50% of that obtained with 3 × 10^6 cells. Second, the binding to uninfected BHK-21 cells or to cells infected with VV1163 represented ~5% of the binding to VV9-1-infected cells. Third, the binding was inhibited by ~70% by the preincubation of Env-expressing cells with soluble CD4 (1 µg/assay) before the addition of 125I-labeled CD4. The results obtained with both the aggregation/fusion assays and the molecular CD4 binding assay indicate that DTNB interferes with the cell-to-cell fusion process at a post-CD4 binding step. This conclusion was further supported by other fusion assays.
When Env was expressed at the surface of CD4⁻ BHK-21 cells, the fusion between the Env-expressing cells and the fusion partners [START_REF] Barbouche | An anti-human immunodeficiency virus multiple antigen peptide encompassing the cleavage region of the Env precursor interferes with membrane fusion at a post-CD4 binding step[END_REF] was inhibited only when DTNB was present during their coincubation. A significant inhibition also was achieved when MolT4 cells were preincubated with DTNB for 1 h before being washed and after incubation with Env-expressing BHK-21 cells. When C8166 cells chronically infected by HIV MN were washed and further incubated with MolT4 cells overnight to allow cell-to-cell fusion, a strong reduction (>80%) in syncytium formation was induced by the presence of DTNB, bacitracin, or anti-PDI antibodies, compared with results in the absence of the inhibitors. The specificity of the syncytium formation was verified by addition of the anti-CD4 neutralizing Leu3a antibody (500 ng/100 mL) into the incubation medium, which blocks this process. Inhibition of syncytium formation also was obtained with neutralizing anti-V3 antibodies, as described elsewhere [START_REF] Benjouad | Effect of sialic acid removal on the antibody response to the third variable domain of human immunodeficiency virus type-1 envelope glycoprotein[END_REF]. HIV MN infections were also performed as described elsewhere [START_REF] Fenouillet | Role of N-linked glycans of envelope glycoproteins in infectivity of human immunodeficiency virus type 1[END_REF]. C8166 cells were incubated for 3 h with serial dilutions (10¹–10⁻² mL) of a viral supernatant with or without DTNB. Cells then were grown for 12 days. A cytopathic effect correlated with p24 antigen production in the culture medium, and p24 antigen production correlated with the virus dose used for infection. HIV infectivity was reduced by ∼3 orders of magnitude in the presence of DTNB: p24 antigen production after infection with 10 mL of the virus supernatant in the presence of DTNB was lower than that in the absence of DTNB with 10⁻¹ mL. The data demonstrate the inhibition by specific PDI inhibitors (anti-PDI antibodies and DTNB) of a post-CD4 binding step of the Env-mediated membrane fusion process in various cell systems in which either recombinant or viral Env interacts with a cell surface presenting CD4 in the context of the various membrane components required for fusion.

Discussion

The main objective of this study was to address the postulate that PDI is involved in HIV Env-mediated membrane fusion. Ryser et al.
[START_REF] Ryser | Inhibition of human immunodeficiency virus infection by agents that interfere with thiol-disulfide interchange upon virus-receptor interaction[END_REF] provided convincing evidence that "HIV infection of human lymphoid cells is markedly inhibited by DTNB, by bacitracin, and by anti-PDI antibodies" (p. 4559). The presence of inhibitors of the catalytic activity of PDI during virus-cell interaction in these conditions, however, may have altered HIV infection at several steps of the infectious pathway. Inhibition of HIV entry is likely and can occur at 3 different steps: CD4 binding, the gp41-mediated membrane fusion process, and during uncoating events. It is also possible that HIV expression or budding events may be modified by cell metabolism changes or by cell membrane alterations that result either from entry of the inhibitors into the cells by endocytosis or from cell surface contact with these chemical agents. The first step of HIV infection, CD4 binding, may be impaired by PDI inhibition, inasmuch as this interaction is strongly dependent on the correct folding and disulfide network of both gp120 and CD4. Subsequent to CD4 binding and before virion fusion with the membrane of the host cell, PDI inhibition may also preclude the unfolding of gp120 as part of its interaction with the various HIV receptors (e.g., CXCR4 on CD4⁺ lymphocytes). Thus, the gp41-mediated membrane fusion is also a step that may be altered by DTNB, bacitracin, and anti-PDI antibodies, because it requires a stepwise reorganization of the conformation of Env to trigger the ability of gp41 to mediate fusion. Finally, PDI inhibitors may have interfered with virion disassembly, which releases the reverse-transcription complex. This uncoating step is poorly documented and difficult to study but is highlighted by the use of compounds (catalysts [START_REF] Fenard | Secreted phospholipases A(2), a new class of HIV inhibitors that block virus entry into host cells[END_REF], antibodies [START_REF] Corbeau | Ig CDR3-like region of the CD4 molecule is involved in HIV-induced syncytia formation but not in viral entry[END_REF][START_REF] Mcinerney | A human IgG1 (b12) specific for the CD4 binding site of HIV-1 neutralizes by inhibiting the virus fusion entry process, but b12 Fab neutralizes by inhibiting a postfusion event[END_REF], and bicyclams [START_REF] De Clercq | Potent and selective inhibition of human immunodeficiency virus (HIV)-1 and HIV-2 replication by a class of bicyclams interacting with a viral uncoating event[END_REF]) that inhibit virus entry without altering CD4 binding or membrane fusion. We addressed these issues by using biologically sound systems that reasonably mimic the natural interaction of HIV with the native lymphocyte surface. Cell surface-associated native Env was expressed via a vaccinia virus vector [START_REF] Kieny | Improved antigenicity of the HIV env protein by cleavage site removal[END_REF][START_REF] Fenouillet | Functional role of the glycan cluster of HIV-1 transmembrane glycoprotein gp41[END_REF]. This approach is recommended [START_REF] Ashorn | Vaccinia virus vectors for study of membrane fusion mediated by human immunodeficiency virus envelope glycoprotein and CD4[END_REF] and is widely used to specifically investigate the fusion mediated by Env and CD4 in the absence of other viral proteins that may skew the results, including those discussed by Ryser et al.
[START_REF] Ryser | Inhibition of human immunodeficiency virus infection by agents that interfere with thiol-disulfide interchange upon virus-receptor interaction[END_REF]. Alternatively, expression of Env at the lymphocyte surface has been obtained by HIV infection. In our format, membrane Env interacted with adjacent CD4⁺ lymphocytes that presented PDI on their surfaces in the vicinity of CD4-enriched membrane regions, as shown here for the first time, and in the context of HIV receptors and fusion partner molecules. First, we investigated whether PDI inhibitors modify the CD4 cell binding capacity of Env. Ryser et al. [START_REF] Ryser | Inhibition of human immunodeficiency virus infection by agents that interfere with thiol-disulfide interchange upon virus-receptor interaction[END_REF] did not deal with this point but observed that CD4 immunolabeling at the cell surface is unchanged by treating cells with DTNB and that virions pretreated with DTNB can subsequently interact with the CD4⁺ cell surface in the absence of DTNB. In contrast, we used semiquantitative assays to show that the presence of PDI inhibitors during Env synthesis or during interaction with CD4 cells had no significant influence on Env binding to the primary viral receptor: aggregation between CD4⁺ and Env⁺ cells occurred in the presence of either DTNB or anti-PDI antibodies, and DTNB had no effect on Env binding to CD4 at the molecular level. In contrast, when Env was expressed in the absence of DTNB on the cell surface, the presence of the PDI inhibitor or of anti-PDI antibodies during the incubation of Env-expressing cells with CD4⁺ fusion partner cells blocked syncytium formation. The various steps of the biosynthetic pathway of Env, including Env synthesis, processing, and membrane expression, apparently were unchanged by the inhibitor. This shows that both DTNB and anti-PDI antibodies act on the HIV receptor-dependent gp41-mediated fusion process per se through inhibition of a post-CD4 binding step. Our data are also consistent with the hypothesis that the PDI involved in membrane fusion was on the target CD4⁺ lymphocytes: syncytium formation was inhibited when fusion partner cells were pretreated with either DTNB or PDI antibodies before being washed and before subsequent incubation with Env⁺ cells, whereas no significant effect was observed when cells expressing Env were similarly pretreated with DTNB. Moreover, the lack of an effect of soluble PDI on syncytium formation supports the notion that only surface-associated PDI is involved in fusion. The fact that DTNB exerts its anti-HIV effect by inhibiting the catalytic activity of PDI is supported by the following observations: 2 other inhibitors of the catalytic activity of PDI (PDI antibodies and bacitracin) interfered with syncytium formation in various systems, whereas LY181984, which inhibits some disulfide thiol exchange activities but not that of PDI [START_REF] Morre | A protein disulfidethiol interchange activity of HeLa plasma membranes inhibited by the antitumor sulfonylurea N-(4-methylphenylsulfonyl)-N'-(4-chlorophenyl) urea[END_REF], had no activity in our assays.
In addition, the concentration of DTNB that blocked fusion was 3 × 10⁻⁴–10⁻³ M, which is comparable with that active in HIV infection experiments and with that required to inhibit the catalytic activity of PDI [START_REF] Ryser | Inhibition of human immunodeficiency virus infection by agents that interfere with thiol-disulfide interchange upon virus-receptor interaction[END_REF]. Altogether, our data suggest that lymphocyte surface PDI assists changes in the redox status of HIV Env. This, in turn, may destabilize the gp120 subunit and allow remodeling of the Env disulfide network and induction of profound conformation changes within Env, to trigger membrane fusion, in accord with current models [START_REF] Ugolini | HIV-1 attachment: another look[END_REF][START_REF] Broder | Chemokine receptors and HIV[END_REF][START_REF] Chan | HIV entry and its inhibition[END_REF]. Finally, we show that PDI labeling is present in the vicinity of regions of the lymphocyte surface enriched in CD4, whereas CD4 labeling also appears in membrane regions where PDI is lacking. This result, together with the finding that PDI catalytic activity is required for the fusion process at a post-CD4 binding step, suggests that only the subset of CD4 receptors present in PDI-enriched surface regions can be involved in HIV entry into cells. If one considers that PDI directly catalyzes disulfide changes within Env after CD4 binding, this suggests that Env binding to CD4⁺ cells should subsequently induce the redistribution of PDI and the Env/CD4 complex at the cell surface, to induce their colocalization, and hence contact between the catalyst and Env. Alternatively, it also suggests that Env/CD4 binding and the subsequent membrane remodeling [START_REF] Ugolini | HIV-1 gp120 induces an association between CD4 and the chemokine receptor CXCR4[END_REF] induce the colocalization of PDI and the viral coreceptor CXCR4 at the target lymphocyte surface. Future work will focus on these points. In contrast, if one considers that PDI indirectly influences the redox status of Env through the thiol content of defined membrane regions, such membrane remodeling may not be necessary to trigger fusion. In conclusion, our results support and expand earlier findings [START_REF] Ryser | Inhibition of human immunodeficiency virus infection by agents that interfere with thiol-disulfide interchange upon virus-receptor interaction[END_REF] that the catalyst function of PDI at the lymphocyte surface plays a role in HIV entry at the membrane fusion step. We propose that it assists post-CD4 binding Env conformation changes through changes of some of its critical disulfides. Our data also indicate that cell surface PDI is a novel target for anti-HIV therapies based on new sulfonyl urea compounds with activity on isomerase functions, such as those already used in anticancer trials [START_REF] Morre | A protein disulfidethiol interchange activity of HeLa plasma membranes inhibited by the antitumor sulfonylurea N-(4-methylphenylsulfonyl)-N'-(4-chlorophenyl) urea[END_REF].

Figure 1. Lymphocyte cell lines, CD4, and protein disulfide isomerase (PDI). Living CEM cells (a, control conditions; b, cells activated by Phaseolus vulgaris leucoagglutinin) were incubated with mouse anti-CD4 monoclonal antibody (Ab) or rabbit anti-PDI polyclonal Abs. Staining was done with anti-mouse Abs coupled to fluorescein isothiocyanate (green; column 1) or anti-rabbit Abs coupled to Texas red (red; column 2), respectively.
Both labelings are shown in column 3. Processing of images in column 3 enabled selection of regions of true overlap of green and red staining from non-colocalized staining; regions of overlap are shown as white dots (column 4).

Figure 2. A, Effect on syncytium formation and cytotoxicity of 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB) and bacitracin (BCT). Ratio (percentage) of total surface occupied by syncytia vs. surface of field of vision was assessed (Sync.) in VV9-1-infected CEM cell cultures done in presence of DTNB or bacitracin (n = 5; representative experiment shown with means of values obtained in 5 fields of vision; control, absence of inhibitor: 40% ± 5% syncytia). Cell viability (Viab.) was assessed in parallel. B, Effect of protein disulfide isomerase (PDI), LY181984, and anti-PDI antibodies on syncytium formation. Syncytium formation was assessed as described above in presence of 16 mg/mL PDI, sulfonyl urea compound LY181984 (LY1, 10⁻⁴ M; LY2, 10⁻⁶ M), anti-PDI rabbit polyclonal Abs (PAbs), or irrelevant rabbit polyclonal Abs (CAbs; n = 2 experiments shown with mean of data from 5 fields of vision; C, control).

Figure 3. A, Human immunodeficiency virus (HIV) envelope (Env) production in presence of 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB). Env was expressed in presence (+DT) or absence (−DT) of DTNB. Env in cell lysates (Cell) or secreted into cell supernatant (Supnt) was characterized by electrophoresis and Western blot with HIV-positive serum sample pool for staining. In parallel, lysate and supernatant of Env-expressing cells were analyzed with pool of HIV-negative serum samples (C). B, Membrane expression. Env was expressed at surface of VV9-1-infected CEM cells with or without DTNB. Semiquantification of membrane Env was done with pool of HIV-positive serum samples (1:100 or 1:300) and then with anti-human antibodies coupled to peroxidase. In parallel, cells infected with recombinant vaccinia virus (VV) encoding soluble gp160 and uninfected cells (C) were similarly processed (n = 2 experiments; duplicates were made). C, Labeled CD4 binding (bind.) to Env synthesized in presence (Env synt. +DT) or absence (Env synt. −DT) of DTNB at surface of VV9-1-infected baby hamster kidney (BHK-21) cells (gray column, 3 × 10⁶ cells; white column, 1.5 × 10⁶ cells) in presence (CD4 bind.: +DT) or absence (CD4 bind.: −DT) of DTNB. Cell-associated radioactivity was counted. Binding to cells infected by recombinant VV encoding soluble gp160 and to cells expressing native Env preincubated with soluble CD4 was also determined (n = 2 experiments; duplicates done).

Table 1. Effect of 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB) on cell aggregation and syncytium formation.

Condition                      Aggregation index   Syncytium index
Synthesis⁺/contact⁺            23% ± 4%            1% ± 1%
Synthesis⁻/contact⁺            26% ± 4%            3% ± 1%
Synthesis⁺/contact⁻            24% ± 2%            20% ± 5%
Synthesis⁻/contact⁻            35% ± 7%            28% ± 5%
MolT4 + DTNB/wash/fusion       29% ± 7%            5% ± 4%

NOTE. The human immunodeficiency virus envelope (Env) was expressed on the surface of CEM cells in the presence (synthesis⁺) or absence (synthesis⁻) of DTNB under conditions that do not enable cell aggregation or syncytium formation. After washing, Env-expressing cells were incubated for 10 h with fusion partner cells in the presence (contact⁺) or absence (contact⁻) of DTNB. Alternatively, fusion partners were incubated for 1 h with DTNB before washing and fusion assay (MolT4 + DTNB/wash/fusion). Ratio of total surface occupied either by cell aggregates or by syncytia vs. the surface of the field of vision was assessed (n = 4 experiments; representative experiment is shown; means of results obtained in 5 fields of vision are presented).
04112236
en
[ "sdv" ]
2024/03/04 16:41:24
2022
https://hal.science/hal-04112236/file/Gobet-MMB-v6-avecfigures.pdf
Alexia Gobet Christine Jaxel Sandrine Magnard Manuel Garrigos Stéphane Orlowski Nadège Jamin email: [email protected] Pierre Falson email: [email protected] Vincent Chaptal email: [email protected] Chapter Title: Overexpression of the ABC transporter BmrA within intracellular caveolae in Escherichia coli Keywords: ABC transporter, membrane protein overexpression, membrane caveolae, cytoplasmic vesicles

We describe here the overproduction and oriented membrane insertion of a membrane protein inside intracellular vesicles, named heterologous caveolae, within E. coli. The method is described with BmrA, a multidrug efflux pump from Bacillus subtilis. BmrA is produced in these vesicles thanks to co-expression with the canine caveolin-1b, one of the two isoforms of caveolin-1. Enriched by sucrose gradient, the caveolae-containing fraction allows probing of the ATPase and Hoechst 33342 transport activities, the latter displaying a higher specific activity than the same protein produced without caveolin-1b.

Introduction

Membrane proteins are key drug targets that drive much research on these objects [1]. Overexpression is thus often used to obtain the large quantity of protein required and to achieve adequate purity and homogeneity levels. For membrane proteins, this overexpression has often been a major bottleneck. Indeed, overexpression can be toxic to cells, frequently requiring a mild overexpression rather than a massive production, and the quality of the produced protein requires meticulous and rigorous testing. Some documented examples show that a gentler expression of membrane proteins often results in a protein more active than the same protein produced in a larger quantity [2]. The orientation of the membrane protein can also be a critical point for its function, a problem that especially concerns multidrug efflux pumps, which translocate drugs from the cytosol to the extracellular space. Membrane caveolae are submicroscopic pits of the membrane found in higher eukaryotes [START_REF] Parton | Caveolae as plasma membrane sensors, protectors and organizers[END_REF]. These 60-80 nm wide pits are distinct from typical membrane invaginations, being formed by the recruitment of a specific caveolin-1 protein and showing features distinct from clathrin-coated pits. Caveolae are cell-type specific, with some cells being devoid of them (kidney) while, at the opposite, they can represent 50% of the plasma membrane surface for endothelial cells or adipocytes [START_REF] Parton | Caveolae as plasma membrane sensors, protectors and organizers[END_REF][START_REF] Razani | Caveolae: from cell biology to animal physiology[END_REF]. Recently, heterologous caveolae have been successfully described in E. coli by heterologous expression of human caveolin-1 [START_REF] Walser | Constitutive formation of caveolae in a bacterium[END_REF], opening the way to their use in membrane protein overexpression, being a reservoir for cell membrane [START_REF] Shin | Display of membrane proteins on the heterologous caveolae carved by caveolin-1 in the Escherichia coli cytoplasm[END_REF]. Membrane caveolae are made from the plasma membrane, which imposes the orientation of the expressed protein: its intracellular region is exposed to the cytosol once the caveola is generated.
The Bacillus subtilis ABC transporter BmrA is a multidrug transporter recognizing and transporting many structurally unrelated drugs [START_REF] Orelle | Conformational change induced by ATP binding in the multidrug ATPbinding cassette transporter BmrA[END_REF][START_REF] Steinfels | Characterization of YvcC (BmrA), a multidrug ABC transporter constitutively expressed in Bacillus subtilis[END_REF][START_REF] Chaptal | Drug-bound and -free outward-facing structures of a multidrug ABC exporter point to a swing mechanism[END_REF]. Its overexpression confers resistance to B. subtilis against cervimycin C, an antibiotic secreted by a biotope competitor [START_REF] Krugel | Cervimycin C resistance in Bacillus subtilis is due to a promoter upmutation and increased mRNA stability of the constitutive ABC-transporter gene bmrA[END_REF]. Here, we report the functional overexpression of BmrA within heterologous caveolae in E. coli (Figure 1), allowing the transport of drugs to be probed.

Materials

2.1. Plasmids
- The genes coding for BmrA WT and BmrA E504A are inserted in the pET-15b vector (Invitrogen) bearing ampicillin resistance.
- The gene coding for the fusion protein Maltose-Binding Protein-caveolin-1β (MBP-Cav) is inserted into the pETM-40 vector (EMBL). The periplasm addressing sequence of MBP has been removed. The plasmid bears kanamycin resistance. Both proteins carry an N-terminal 6xHis tag.

Lab equipments

- N, N, N', N'-tetramethylethylenediamine (TEMED, Sigma-Aldrich)
- Running buffer stock solution: Tris-Glycine-SDS (TG-SDS) 10X (Euromedex)
- Pre-stained molecular weight marker (10-170 kDa) (Euromedex)
- Staining solution: Coomassie buffer: 0.025% (w/v) Coomassie blue R250, 1% (v/v) ethanol, 25% (v/v) isopropanol, 10% (v/v) acetic acid
- Destaining solution: 10% (v/v) acetic acid

2.4. Cells and media for cell culture
- Bacteria: E. coli C43(DE3) strain
- SOC medium: 2% (w/v) tryptone, 0.5% (w/v) yeast extract, 10 mM NaCl, 2.5 mM KCl, 10 mM MgCl2, 10 mM MgSO4 and 20 mM glucose (NEB)
- Antibiotics stock solutions: 40 mg/mL kanamycin (Sigma-Aldrich) and 50 mg/mL ampicillin (Sigma-Aldrich), resuspended in dH2O. Kept at -20 °C.
- LB media: 10 g/L tryptone, 5 g/L yeast extract and 10 g/L NaCl. The powder is dissolved in deionized water (making a total weight of 25 g/L) and sterilized by autoclaving 20 min at 121 °C. Right before use, the medium is supplemented with antibiotics: ampicillin 50 µg/mL for BmrA expression and kanamycin 40 µg/mL when caveolin is expressed.
- LB-agar media: LB media supplemented with 15 g/L agar and antibiotic(s)
- Isopropyl β-D-1-thiogalactopyranoside (IPTG) 0.5 M stock solution (Sigma-Aldrich)

Membrane preparation and sucrose gradient
- Bacterial lysis and membrane washing buffer: 50 mM Tris-HCl pH 8.0, 150 mM NaCl, 0.5 mM EDTA

Methods

E. coli co-transformation with genes coding for BmrA and MBP-Cav1b

To generate heterologous caveolae and to have BmrA inserted into them, MBP-Cav1b and BmrA are overexpressed simultaneously. This is achieved by having the two plasmids at the same time within E. coli cells, by co-transformation and selection using two different antibiotics.
1. Fifty nanograms of each supercoiled circular plasmid pET15b-bmra [START_REF] Chaptal | Drug-bound and -free outward-facing structures of a multidrug ABC exporter point to a swing mechanism[END_REF] and pETM40-mbpcav [START_REF] Perrot | Production dans Escherichia coli de vésicules enrichies en cavéoline-1(32-178) canine ou son fragment[END_REF] are added to 100 µL of C43(DE3) E. coli competent cells, for 30 minutes on ice.
2. Heat shock at 42 °C for 30 seconds, then place back on ice.
3. Add 300 µL SOC medium under a biological safety cabinet and incubate 1 h at 37 °C under agitation.
4. Spread 50 µL of the cell mix onto LB-agar medium supplemented with ampicillin and kanamycin.
5. Incubate overnight at 37 °C, upside-down.
Controls for downstream experiments are achieved by co-transformation of the mutant BmrA E504A (inactive) with pETM40-mbpcav, to compare to the WT protein. Controls with BmrA transformed alone, without the plasmid coding for MBP-Cav, are also performed to compare expression and activity levels.

Culture and overexpression
1. A freshly transformed colony is transferred into 20 mL LB medium (in a 100 mL flask) supplemented with the adequate antibiotics.
2. Incubate this pre-culture overnight at 37 °C under agitation (180 rpm).
3. Inoculate 4 x 500 mL of LB medium (with adequate antibiotics) in 5 L Erlenmeyer flasks to a final OD600 of 0.1 with the overnight pre-culture.
4. Incubate at 37 °C under agitation until OD600 reaches 0.6 and induce expression with 0.5 mM IPTG. Incubate overnight at 22 °C under agitation.
5. Centrifuge cultures at 5,000 xg, 4 °C for 15 min with a JLA 9.1000 rotor to collect bacteria.
6. With a spatula, transfer the pellet into a 50 mL tube and suspend it in 40 mL of 50 mM Tris-HCl pH 8.0, 150 mM NaCl to wash the bacteria. Centrifuge again as previously and discard the supernatant.
7. If cell lysis is not performed immediately, store pellets at -80 °C.

Membrane preparation
1. Resuspend bacteria in 50 mL per 2 L of culture with lysis buffer: 50 mM Tris-HCl pH 8.0, 150 mM NaCl, 0.5 mM EDTA and benzonase. From this point, keep everything at 4 °C or on ice.
2. Lyse cells with a cell disrupter such as the Cell-D (Constant Systems Ltd) with one pass at 1.4 kbar. If other cell disrupters are used, refer to the manufacturer's guidelines in order to keep vesicles intact (see Note 1).
3. Centrifuge the cell lysate at 5,000 xg, 15 min, 4 °C to pellet cell debris and intact cells. Collect the supernatant and discard the pellets.
4. Centrifuge supernatants at 100,000 xg, 1 h, 4 °C using a 50.2 Ti rotor and suspend the pellet in 1 mL of 50 mM Tris-HCl pH 8.0, 150 mM NaCl, 0.5 mM EDTA, anti-protease. Use a glass homogenizer and pestle to homogenize. Take a 10 µL aliquot to determine the protein concentration in membranes using a BCA assay (following the manufacturer's procedures, PIERCE) with an approximate 20-fold dilution of the membranes.
5. Homogenized membranes can be stored in cryo-tubes and frozen in liquid nitrogen until further use.

Probe BmrA expression on SDS-PAGE, quantify expression using pre-purified protein
1. Prepare a 10% separating gel by mixing 9.6 mL of water, 5 mL of 40% acrylamide/bis-acrylamide solution, 5 mL of 1.5 M Tris-HCl pH 8.8, 200 µL of 10% SDS, 200 µL of 10% APS (see Note 5) and 20 µL of TEMED. Pour it between 2 glass plates and add water at the surface of the gel to ensure a flat surface. Let polymerize and prepare a 4% stacking gel: 3.1 mL of water, 550 µL of 40% acrylamide/bis-acrylamide solution, 1.3 mL of 0.5 M Tris-HCl pH 6.8, 50 µL of 10% SDS, 50 µL of 10% APS and 5 µL of TEMED. Remove the water on the separating gel and add the stacking gel, place a 15-well comb and let polymerize.
2. Pipet 10 µg of total protein, add water to a final volume of 10 µL and add 3 µL of 4x loading buffer.
3. Load 12 µL of sample and run 10 min at 90 V then 1.5 h at 120 V.
4. Add known amounts of purified BmrA [START_REF] Chaptal | Drug-bound and -free outward-facing structures of a multidrug ABC exporter point to a swing mechanism[END_REF] on the gel for comparison. For the pure protein, incubate only 5 minutes with loading buffer to minimize aggregating artifacts on the gel.
5. After electrophoresis, incubate the gel in staining solution for 30 min and in destaining solution for 1 h.
6.
Quantify the intensity of bands corresponding to BmrA by densitometry, using for example ImageJ.

ATPase activity

For each type of membranes, 2 tubes are prepared, one with VO4⁻ and one without VO4⁻. Four solutions are used: the reaction buffer (30 mM Tris-HCl pH 8.0, 100 mM NaCl, 10 mM KCl, 2 mM MgCl2), the reaction mix, the "-VO4⁻ mix" and the "+VO4⁻ mix".
1. Calculate each necessary volume for the "-VO4⁻ mix", the "+VO4⁻ mix", the "reaction mix" and each reagent.
2. Prepare the Na2S and NaN3 stock solutions (they must be prepared just before the experiment and cannot be stored) (see Note 2).
3. Incubate the VO4⁻ stock solution for 10 minutes at 95 °C in a dry bath before dilution (see Note 3).
4. Prepare the reaction buffer and the ATP-Mg solution.
5. Prepare the "-VO4⁻ mix" and the "+VO4⁻ mix".
6. Place 24 µL of each type of membranes in 2 different tubes.
7. Add 46 µL of "-VO4⁻ mix" or "+VO4⁻ mix" to the corresponding tube and incubate for 10 min at 30 °C under shaking.
8. Prepare the reaction mix and incubate at 30 °C.
9. For each tube with or without VO4⁻, dispatch in 3 x 20 µL in a 96-well UV plate.
10. Add 180 µL of reaction mix in each well and immediately read NADH absorbance at 340 nm for 20 min at 30 °C.
Figure 2 shows a typical example of BmrA ATPase activity on membranes. BmrA specific activity is the difference between the total activity and the vanadate-inhibited activity. Overexpression of BmrA in the presence of caveolin-1b increases BmrA specific activity, the result of a smaller amount of BmrA produced but with better activity. Note that the use of a BmrA inactive mutant is also a good control to differentiate specific activities.

Sucrose gradient and SDS-PAGE analysis
1. Prepare 20%, 25%, 30%, 35%, 40%, 45%, 50%, 55% and 60% (w/v) sucrose in 50 mM Tris-HCl pH 8.0, 150 mM NaCl.
2. A sucrose gradient is prepared in an SW41 rotor tube by pipetting first 1 mL of 60% sucrose solution and placing it at the bottom of the tube. Then, 1 mL of each sucrose concentration is added one after another in decreasing concentration order, by gentle pipetting to minimize disruption of the sucrose layers. At the top, add the volume of membranes corresponding to 10 mg of protein (see Note 4). When adding the sucrose solutions, pour the solution slowly against the tube walls so as not to disturb the sucrose layers.
3. Centrifuge overnight (about 15 h) at 150,000 xg, 4 °C.
4.
Collect 1 mL fractions one by one from top to bottom and keep them in Eppendorf tubes. It is a good idea to store each fraction in aliquots of 0.5 mL in liquid nitrogen. Resuspend the pellet in 1 mL of 50 mM Tris-HCl pH 8.0, 150 mM NaCl.
5. Analyze fractions of the gradient by SDS-PAGE by mixing 10 µL of sample with loading buffer and incubating 30 min at room temperature before loading on an acrylamide/bis-acrylamide gel, as described before.
6. Each fraction can be analyzed for activity as described above.
Figure 3 shows a representative result of sucrose gradient separation of a membrane fraction. Noticeably, BmrA overexpression in the presence of caveolin-1b results in a lower amount of BmrA produced, but the ATPase activity is remarkably higher. This is not surprising, as it has been observed after changing BmrA overexpression on other E. coli cell lines [2].

3.7. Hoechst 33342 transport assays
- The Hoechst 33342 transport is recorded at 25 °C with λexcitation = 355 nm, λemission = 457 nm and a bandwidth of 8 nm.
- In the quartz cuvette, put membranes corresponding to 50 µg of proteins and complete to 1 mL with transport buffer.
- One minute later, add 10 µL of 200 µM Hoechst 33342 (final [Hoechst 33342] = 2 µM).
- One minute later, add 10 µL of 200 mM ATP (final [ATP] = 2 mM).
- Record fluorescence until equilibrium is reached.
04112339
en
[ "info" ]
2024/03/04 16:41:24
2023
https://inria.hal.science/hal-04112339/file/debs23industry-final36.pdf
Arthur Navarro email: [email protected] Julien Ponge email: [email protected] Frédéric Le Mouël email: [email protected] Clément Escoffier email: [email protected] Considerations for integrating virtual threads in a Java framework: a Quarkus example in a resource-constrained environment Keywords: reactive programming, java, benchmarking, virtual threads, concurrency

Virtual threads are a highly anticipated feature in the Java world, aiming at improving resource efficiency in the JVM for I/O-intensive operations while simplifying the developer experience. This feature keeps the traditional thread abstraction and makes it compatible with most of the existing Java applications, allowing developers preferring synchronous imperative abstractions to benefit from better performance without switching to asynchronous and reactive programming models. However, limitations currently hinder the usability of virtual threads. These limitations must be considered when building a piece of software around virtual threads, for they might have non-trivial effects. This paper (i) discusses the different strategies envisioned to leverage virtual threads in the Quarkus framework, (ii) gives an overview of the final implementation, (iii) presents the benchmark used to characterize the benefits of using virtual threads in a typical container environment where resources are scarce, compared to using Quarkus with traditional thread pools and Quarkus with reactive libraries; (iv) results are interpreted and discussed. Our study reveals that the integration of virtual threads in Quarkus doesn't perform as well as Quarkus-reactive. This seems to be due to a mismatch between the core hypothesis of Netty and virtual threads regarding the number of threads available.

INTRODUCTION

The emergence of distributed applications and services has brought forth significant challenges that cannot be effectively addressed by traditional monolithic architectures. Unequal and substantial traffic distribution across different parts of an application renders scaling a monolith uniformly inefficient. Additionally, the complexity of applications has increased, necessitating larger teams of developers working on various features and concerns. Furthermore, the process of deploying an application for every update or fix is burdensome and calls for an alternative approach. These challenges, encompassing scalability, modularity, resilience, and agility, have been addressed by decomposing large monoliths into multiple loosely coupled components [START_REF] Dragoni | Microservices: Yesterday, Today, and Tomorrow[END_REF][START_REF] Fortier | Dyninka: a FaaS framework for distributed dataflow applications[END_REF]. Consequently, modern applications are often composed of numerous cooperating microservices [START_REF] Dragoni | Microservices: Yesterday, Today, and Tomorrow[END_REF]. Cloud computing has enabled the hosting of a vast array of services at a lower cost, with the ability to dynamically allocate resources based on traffic demands. However, the pricing model of cloud computing services favors low-resource services due to their finer-grained control over expensive resources, such as memory and CPU time [START_REF] Williams | The Economics of Cloud Computing: An Overview For Decision Makers[END_REF]. As such, this study focuses on designing, integrating, and evaluating easy-to-use programming abstractions to achieve efficient event processing of distributed asynchronous applications in resource-constrained environments.
Network communications can incur significant costs, making efficient and timely communication between microservices a critical factor. Historically, better performance has been achieved by using non-blocking, asynchronous tools provided by the Operating System [START_REF] Bonér | The Reactive Manifesto v2.0[END_REF][START_REF] Kegel | The C10K Problem[END_REF]. This has led to the development of asynchronous user APIs such as asynchronous callbacks, promises, and reactive streams. However, this paradigm represents a fundamental shift from the traditional synchronous and imperative approach, potentially leading to increased development and maintenance overhead [START_REF] Kambona | An Evaluation of Reactive Programming and Promises for Structuring Collaborative Web Applications[END_REF][START_REF] Komolov | An Empirical Study of Multi-Threading Paradigms Reactive Programming vs Continuation-Passing Style[END_REF]. Consequently, mainstream programming languages like Golang, C#, Python, JavaScript, and Java are increasingly emphasizing the provision of asynchronous communication primitives that strike a balance between preserving developer experience and facilitating asynchronous communication effectively. Therefore, it is crucial to be able to characterize and compare different libraries or languages when constructing a distributed application, in order to ensure that any cost savings achieved through the use of asynchronous programming models are not negated by an increase in maintenance costs. In Java, for instance, several approaches exist to build communicating services. Strategies such as pooling have spread to alleviate the performance issues of the traditional threading model, experiments with callbacks, futures and reactive extensions were conducted, and entire frameworks such as Netty (https://netty.io/) became building blocks for many higher-level pieces of software. The Java language just introduced virtual threads (https://openjdk.org/jeps/425). This solution adds a layer of indirection between the threading model and the underlying non-blocking machinery to offer both a synchronous, imperative user API and better performance by leveraging non-blocking asynchronous primitives in the internals of the JVM. It aims at increasing the performance of existing Java applications using synchronous primitives simply by replacing traditional threads (referred to as platform threads in the rest of the paper) by virtual ones. This could be an important feature for applications built using frameworks relying on the thread and/or thread pool abstraction, but the Java ecosystem is split between frameworks using thread pools and reactive frameworks built above Netty. While reactive frameworks already provide superior performance, incorporating a synchronous programming model may enhance their development and maintenance. This paper presents a novel approach for integrating virtual threads into a Java reactive framework, which is the first contribution of the study. The proposed integration is expected to enable users to develop resource-efficient programs in a more straightforward manner, thereby reducing the costs associated with both development and maintenance. The Quarkus framework (https://quarkus.io/) was chosen due to its core reactive design and its primary focus on cloud-native applications, hence its relevance to low-resource environments.
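As a minimal illustration of that drop-in promise (a sketch using plain JDK APIs, not Quarkus code): the same Thread API is used, only the thread factory changes.

import java.lang.Thread;

class DropInSketch {
    static void serve(Runnable handler) {
        // Classic model: one OS-backed platform thread per task.
        Thread.ofPlatform().start(handler);

        // Drop-in alternative: a JVM-scheduled virtual thread, same Thread API.
        Thread.ofVirtual().start(handler);
    }
}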
Our contributions are:
• investigating virtual threads integration in Netty-based frameworks and implementing a solution in Quarkus,
• designing an experimental protocol for testing virtual threads integration in Quarkus,
• evaluating the performance of the implementation in terms of latency, CPU usage and memory footprint, and comparing it with existing programming models.
The structure of this paper is the following. Background information about the impact of message-driven architectures is given in Section 2, along with a description of virtual threads. This section also contains details about the implementation of similar constructs in other programming languages, as well as our integration of Quarkus-virtual-threads. Section 3 describes the experimental protocol used in this study. Results are presented in Section 4; a discussion highlighting the limitations and potential enhancements also appears in this section. A comparison with the existing frameworks and benchmarks is presented in Section 5. Finally, Section 6 summarizes the findings and discusses broader implications for reactive frameworks and virtual threads.

BACKGROUND

This introduction to resource management and modern abstractions aims at clarifying the goals, the constraints and the limitations of the current ecosystem. Through the limitations of both the legacy blocking imperative paradigm and the new non-blocking one, it also presents the motivation for virtual threads.

Efficient managing of resources in message-driven systems

Processes involved in a distributed system spend most of their time performing I/O operations. This is especially true for microservices architectures, where each microservice has a narrow functional scope and must frequently cooperate with other microservices to achieve progress [START_REF] Dragoni | Microservices: Yesterday, Today, and Tomorrow[END_REF][START_REF] Indrasiri | Integrating Microservices[END_REF]. In this context, efficient management of resources and scalable handling of network I/O is crucial [START_REF] Bonér | The Reactive Manifesto v2.0[END_REF]. Efficient resource management is even more important when we consider cloud environments where resources are scarce [START_REF] Williams | The Economics of Cloud Computing: An Overview For Decision Makers[END_REF]. Many abstractions to efficiently perform network I/O are embedded in modern Operating Systems: EPoll for Linux; KQueue for macOS, FreeBSD and NetBSD; and IOCP for Windows.

2.1.1 The old blocking model. The hegemonic model to handle concurrency, which led to the C10K problem [START_REF] Kegel | The C10K Problem[END_REF], was to spawn a new worker thread per incoming request. Each thread was supposed to execute the request handler logic. If I/O operations had to be performed in the handler, the worker thread would perform a blocking syscall that would give control back to the OS scheduler. The thread would then be put in a wait queue for the entire duration of the operation. In Java, the class java.lang.Thread is a wrapper around OS threads. Hence, creating a Java thread implicitly creates an OS thread. An operation over the network can last several seconds, during which the worker would remain idle.
It is a problem due to the following: (1) spawning a new thread takes time, (2) it is impossible to accumulate an arbitrary number of idle threads in memory at any given time, (3) performing a context-switch (loading the state of another thread in CPU registers) is not free.

2.1.2 Leveraging non-blocking primitives brought by modern operating systems. In the early 2000s, the limitations of the threading model led to the formalization of the C10K problem [START_REF] Kegel | The C10K Problem[END_REF], and non-blocking communications were put forward as the replacement for the previous model based on concurrent threads. Non-blocking I/O was made available in the JDK in 2002 and leveraged by Netty in 2004. The assumption of non-blocking network I/O is that no CPU work is necessary to wait for an operation to complete (no active waiting: the NIC will notify the OS when an event happens). What we describe here is a simplified vision of the Reactor Pattern [START_REF] Douglas | Reactor: An Object Behavioral Pattern for Concurrent Event Demultiplexing and Event Handler Dispatching[END_REF]. In Linux, for instance, EPoll is the mainstream non-blocking I/O facility, although the recent io_uring model is getting more and more traction. With EPoll, processes interested in performing I/O operations register with the epoll instance the file descriptors and events they are interested in, and can then interrogate the epoll instance through dedicated syscalls. This primitive is often used by the event-loop model, where a thread loops over a queue of tasks and checks the completion of pending I/O operations when it has time to do so. Thus, an event-loop must never be blocked: if long-running tasks are performed on the event-loop, then it won't be able to check the completion of I/O operations and make progress. This has direct effects on the programming model: asynchronous callbacks, promises, futures, etc. emerge, since a function must be invoked asynchronously when an event happens, thus leaving the imperative paradigm. However, the Java ecosystem didn't entirely shift to this programming model. The blocking, imperative paradigm is still widely used, and strategies such as pooling have been refined to mitigate the cost of creating new threads as well as to limit memory consumption by capping the size of the pool (i.e., the number of threads within). Since 2005, frameworks relying on blocking communications in thread pools and frameworks relying on non-blocking communications have coexisted.

Non-blocking mechanisms and imperative code

The available options were either to write inefficient imperative, synchronous, blocking code, which is generally considered simpler by most programmers, or to write asynchronous, non-blocking, sometimes declarative code (particularly when using libraries such as reactive streams), which is generally perceived as more complex by most programmers. Experiments were conducted to design abstractions that would benefit from both paradigms: a non-blocking, synchronous coding style.

Non-blocking imperative communications designs. As mentioned in the introduction, various programming languages have implemented non-blocking imperative abstractions with different levels of consistency. Here, we present an overview of the most popular approaches. On one hand, Python, JavaScript, C# and Kotlin employ the design of async functions, where non-blocking imperative code is restricted to special functions marked with a specific keyword.
This implementation introduces some transparency limitations, as the code can only reside within these designated functions. Consequently, using third-party libraries asynchronously can become a complex task, often necessitating the conversion of numerous functions into async functions. Developers refer to this issue as the "two colors functions" problem. These languages share a similar implementation approach: async functions undergo compile-time modifications and are rewritten using state machines in the case of C# and Kotlin, and generators and promises in the case of Python and JavaScript. In contrast, Golang revolves around the concept of goroutines, where every unit of execution is represented by a goroutine. It does not differentiate between async and non-async code. Golang was purposefully designed with this idea in mind, enabling the language and runtime to be built seamlessly around it. Java virtual threads, by modifying the runtime rather than the compiler, bear closer resemblance to goroutines.

Virtual threads. With the introduction of new features in JDK 19, it is promised that the resource efficiency and scalability of non-blocking asynchronous programming can be utilized while still displaying a synchronous imperative API. This is achieved through the concept of virtual threads, which are JVM-managed threads that are not bound to OS threads in a one-to-one relationship [START_REF] Pressler | Virtual Threads[END_REF]. Virtual threads are threads: the JVM and debugging tools consider them as such. However, they are not related to OS threads in the same way platform threads are. Creating a virtual thread doesn't require the creation of an OS thread. Instead, virtual threads are dynamically mounted on and unmounted from platform threads. Just like a platform thread can execute a function, it can now execute a virtual thread. When a platform thread is used to run a virtual thread, we call it a carrier thread. Since they must be executed by platform threads, virtual threads can also be seen as Runnable or Callable (the Java reification of the concept of task). However, virtual threads are unique in that they can suspend and resume their execution. Runnable and Callable must be executed fully, whereas virtual threads have the capacity to store their progress in memory, suspend their execution and resume it later. To do so, virtual threads possess a Continuation instance that is capable of performing stack manipulation to store/restore their progress in/from memory. This mechanism of suspension/resumption is triggered when a blocking call is issued by the application. The JDK has been partially rewritten such that every blocking call will eventually hit a specific method that saves the virtual thread's progress in memory and unmounts it from its carrier thread. In the case of I/O, the JDK leverages the non-blocking facilities brought by the OS to be notified when the operation completes, in order to resume the virtual thread's execution. We consider virtual threads a promising feature, and their introduction is an opportunity to study how they could interact with other non-blocking abstractions. If virtual threads perform reasonably well compared to other non-blocking asynchronous abstractions already available, Java would offer a wide variety of programming paradigms. The code below illustrates the difference between reactive streams and traditional array manipulation using virtual threads.
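The listing is a minimal sketch of the two styles being contrasted (the scenario is described in detail just after it): fetchNames/fetchQuotes and the zip helper are hypothetical stand-ins for the remote calls, and the asynchronous variant is expressed with CompletableFuture for self-containment (a reactive-streams API such as Mutiny would express the same composition).

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

class FetchStyles {
    // Asynchronous variant: the dependency between the two calls is expressed
    // through callback composition instead of sequential statements.
    static CompletableFuture<List<String>> asynchronousStyle() {
        return fetchNamesAsync()
            .thenCompose(names -> fetchQuotesAsync(names.size())  // second call needs the first result
                .thenApply(quotes -> zip(names, quotes)));
    }

    // Virtual-thread variant: plain imperative, blocking-looking code; the
    // blocking calls suspend the virtual thread, not its carrier.
    static List<String> virtualThreadStyle() throws InterruptedException {
        List<String> result = new ArrayList<>();
        Thread vt = Thread.ofVirtual().start(() -> {
            List<String> names = fetchNames();
            List<String> quotes = fetchQuotes(names.size());
            result.addAll(zip(names, quotes));
        });
        vt.join(); // join() establishes the happens-before needed to read result
        return result;
    }

    // Hypothetical stand-ins for the remote service and the merge step.
    static CompletableFuture<List<String>> fetchNamesAsync() { return CompletableFuture.completedFuture(fetchNames()); }
    static CompletableFuture<List<String>> fetchQuotesAsync(int n) { return CompletableFuture.completedFuture(fetchQuotes(n)); }
    static List<String> fetchNames() { return List.of("Ada", "Alan"); }
    static List<String> fetchQuotes(int n) { return List.of("quote-1", "quote-2"); }
    static List<String> zip(List<String> names, List<String> quotes) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < names.size(); i++) out.add(names.get(i) + " - " + quotes.get(i));
        return out;
    }
}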
A list of names and a list of quotes are both fetched from a remote service or from a database in a non-blocking fashion. The list of quotes must be fetched after the request for the list of names has completed, since it needs to know its size. A quote is then appended to each name, and the resulting list is returned.

Quarkus-virtual-threads

In this section we summarize the different options envisioned to leverage virtual threads in the Quarkus framework. Quarkus was designed around an ecosystem of modular extensions, and we implemented virtual threads support in the resteasy-reactive extension, which is used for exposing HTTP endpoints. Quarkus relies on Eclipse Vert.x [START_REF] Ponge | Vertx in Action[END_REF], which itself relies on Netty for non-blocking I/O. It is designed to work with an ecosystem of extensions, most of them leveraging the Vert.x non-blocking primitives. Its HTTP routing layer also uses the Vert.x router. Thus, an entire non-blocking stack is in charge of:
• listening to incoming HTTP requests, and
• routing the requests to correct endpoints and application code.
On the other hand, user code such as endpoint handlers can be executed either in a reactive way on the event-loops (by using asynchronous APIs and not performing blocking calls), or in a blocking way (by offloading the computation to a pool of worker threads). Three options were available in order to leverage virtual threads. They are discussed next; Table 1 summarizes the pros and cons of each strategy.

Forking the model of the worker thread. This is the easiest answer. Since Quarkus already provides a blocking execution model with a dedicated pool of threads, it requires minimal modification of the existing code base to add an unbounded pool of virtual threads and use them as workers. To do so, it is possible to use an Executor provided by the JDK, designed to create a new virtual thread every time a task is submitted, and then submit the execution of this virtual thread to a pool of carrier threads (a minimal sketch of this executor is shown below, after the description of the second option). However, this approach has two major drawbacks. There is now a separate pool of threads that will be used as workers. Hence, the overall number of threads, and thus the memory footprint of the application, is increased. Virtual threads use a FIFO variant of the ForkJoinPool executor to schedule virtual threads on carrier threads; it is hard to strictly bound the number of ForkJoinWorkers, since it is only possible to specify a target level of parallelism (the number of running threads; the number of blocked and running worker threads can be bigger). Furthermore, since the routing layer is executed on the event-loop and the endpoint handler on a worker thread, a context-switch is required: remove the event-loop from the CPU to put the carrier in its stead (an operation performed by the OS), which could harm performance.

Modifying Netty, the underlying non-blocking machinery. Netty uses a fixed, limited set of threads called event-loops and leverages the non-blocking facilities of the OS. It also provides a blocking mode (now discarded) where the small set of event-loops is replaced with a bigger pool of worker threads. Modifying the executor of the blocking mode to spawn virtual worker threads instead of platform worker threads was envisioned, but this idea was quickly discarded on the basis that (i) modifying Netty was not feasible for our team, and (ii) the blocking mode is now discarded, and using it would require its integration into the supported features of Netty.
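Returning to the first option — the one ultimately retained (see "Final choice" below) — here is a minimal sketch of the per-task executor it relies on; handleRequest is a hypothetical stand-in for a blocking endpoint handler, and the actual Quarkus wiring around it is more involved.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class WorkerModelSketch {
    // One fresh virtual thread per submitted task; the virtual threads are
    // multiplexed over the JDK's internal ForkJoinPool of carrier threads.
    void dispatch(Runnable handleRequest) {
        try (ExecutorService virtualWorkers = Executors.newVirtualThreadPerTaskExecutor()) {
            virtualWorkers.submit(handleRequest);
        } // close() waits for the submitted task to complete
    }
}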
Using the event-loops as carrier-threads. Virtual threads require an Executor to be dispatched among their carrier threads. It is possible to specify an executor, including an executor that would use the existing event-loops as workers. This idea has the advantage of minimizing the number of threads in the application, as well as the number of context switches (no need for the OS to unschedule the event-loop to put a carrier in its place). This strategy is the most promising of the three since (i) it requires little modification of the code base, (ii) it doesn't depend on any external project, and (iii) it minimizes the costs associated with context-switching between event-loops and carrier threads.

Deadlock issue. In this model the event-loop has three functions:
• it must fulfill its function as an event-loop for the entire non-blocking inner workings of the framework,
• it must fulfill its function as an event-loop for the non-blocking endpoint handlers declared and implemented by the developer,
• it must work as a carrier thread for the virtual threads on which certain endpoint handlers (declared and implemented by the developer) will be executed.
It appeared that when a carrier thread shares locks with virtual threads, it might cause a deadlock situation. To manage access to locks, the JDK uses a queue: a thread that can't access a lock will enqueue itself and wait to be notified by the previous member of the queue that the lock has been released. If both the virtual thread and its carrier are waiting for the same lock and the virtual thread is notified first, it will wake up and will want to resume its computation. Its continuation will be submitted to its carrier, which can't execute it since it is already in the queue for the lock. The virtual thread will be forced to wait for the carrier thread, itself waiting for the virtual thread. This is a deadlock situation, and fixing it would require implementing intricate scheduling strategies in the executor to be able to detect such situations and avoid them. Moreover, the current state of the JDK makes it impossible to implement such a strategy without (i) performing bytecode manipulation at compile time, or (ii) opening the access to restricted modules. Although it would be an interesting perspective for future work, it was decided to discard this strategy.

Final choice. The Netty option being practically infeasible for us, and the event-loop option presenting a high risk of deadlocks or requiring an intricate executor implementation to solve it, we were led to opt for the worker model.

EXPERIMENTAL PROTOCOL

The objective of this experiment is to evaluate the performance of Quarkus services in resource-constrained environments. To achieve this, the services were executed inside Docker containers running the Red Hat ubi8 operating system. Each container was allocated 512 MB of RAM and 0.5 CPU cores. JDK version 19.0.1 was used in this experiment. The host machine used for running these containers was a 64-bit Ubuntu 22.04.2 LTS with 32 GB of memory, equipped with 12 Intel® Core™ i7-10850H CPUs @ 2.70GHz. Details about how the experiment was designed, as well as how each component of the experiment (load generator, database, service) was integrated, are given in this section.

Discriminating tests on a large functional scope

Before designing the benchmark, the Techempower benchmark suite (https://www.techempower.com/benchmarks/) was used to rapidly cover a wide spectrum of behaviors.
Three versions of the same Quarkus application were compared: (i) the first one was powered by the non-blocking engine built above Netty, (ii) the second one was powered by virtual threads, (iii) the last one was powered by the blocking multithreading model. The initial hypothesis was that Quarkus-reactive was going to perform best for IO-intensive tasks, followed by Quarkus-virtual-threads, and finally Quarkus-blocking. For CPU-intensive tasks, we didn't expect much difference between the three implementations. This suite measures how a framework responds to different kinds of load: some CPU-bound, some IO-bound. We selected two tests of the suite, a CPU-bound one and an IO-bound one. Table 2 summarizes the findings. JSON is a CPU-bound test where the application only has to encode a static object in JSON and return it to the client; FORTUNE is an IO-bound, more involved test where the application has to query a database, sort the results and return them as a JSON object to the client. The results were appalling: Quarkus-virtual-threads performed worse than Quarkus-reactive and Quarkus-blocking for simply encoding an answer. It offered almost no improvement compared to the blocking version, both being about 33% slower than the reactive version. This benchmark revealed that the way Netty was using ThreadLocals put much pressure on the Garbage Collector, especially during serialization. Time spent in GC is time lost for the application, hence the significant difference between Quarkus-reactive and Quarkus-virtual-threads. This stems from the underlying assumption of Netty: the application uses a small set of event-loops (a few threads). Quarkus-virtual-threads, on the other hand, created thousands of virtual threads, leading to the creation of thousands of ThreadLocal instances (a simplified sketch of this pattern is shown after the list below). Table 3 describes the results after fixing Quarkus-virtual-threads. The behavior of Netty was modified so that it doesn't rely as much on the creation of ThreadLocals. The improvement is significant: the performance of Quarkus-virtual-threads is multiplied by 4 for CPU-intensive tasks and by 1.5 for IO-intensive tasks, confirming our initial hypothesis. These preliminary tests highlighted major mismatches between virtual threads and Netty. These are likely to happen for every reactive framework assuming a low number of event-loops, making the integration of virtual threads in existing reactive frameworks challenging. This low number of event-loops is to be understood in the context of constrained resources. As discussed in the introduction, microservices have limited resources and aim at using them as efficiently as possible.

Characterizing the load and the goals of the test

Based on a benchmark survey of 2015 [START_REF] Zhen | A Survey on Load Testing of Large-Scale Software Systems[END_REF], a load test can be of one of these kinds:
• load testing: "assessing the behavior of a system under load in order to detect load-related problems."
• performance testing: "measuring and/or evaluating performance related aspects of a software system (response time, throughput, resource utilization)"
• stress testing: "extreme conditions to verify the robustness of the system and/or to detect various load-related problems (e.g., memory leaks and deadlocks)"
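The sketch announced above: a deliberately simplified illustration of per-thread caching. Netty's actual mechanism (FastThreadLocal, per-thread allocator caches) is more elaborate, and the buffer size here is illustrative.

class PerThreadCache {
    // With a handful of event-loop threads, this caches a few buffers for the
    // whole process. With one virtual thread per request, every in-flight
    // request allocates its own 64 KiB copy, multiplying heap usage and
    // garbage collector pressure by the number of concurrent requests.
    static final ThreadLocal<byte[]> SCRATCH =
            ThreadLocal.withInitial(() -> new byte[64 * 1024]);

    static byte[] scratch() {
        return SCRATCH.get();
    }
}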
Characterizing the load and the goals of the test. Based on a benchmark survey from 2015 [START_REF] Zhen | A Survey on Load Testing of Large-Scale Software Systems[END_REF], a load test can be of one of these kinds:
• load testing: "assessing the behavior of a system under load in order to detect load-related problems";
• performance testing: "measuring and/or evaluating performance related aspects of a software system (response time, throughput, resource utilization)";
• stress testing: "extreme conditions to verify the robustness of the system and/or to detect various load-related problems (e.g., memory leaks and deadlocks)".
Our benchmark is an instance of performance testing, since metrics such as CPU usage, memory footprint, latency and throughput are measured. These measures are conducted under a given load, so the benchmark can also be characterized as load testing. Furthermore, the resources of the service are constrained through containerization, delay is added using chaos-engineering tools, and the load is increased up to values where the service displays behavior judged to be "failing"; the benchmark can hence also be described as an instance of stress testing. The benchmark is thus at the intersection of load testing, performance testing and stress testing.
Questions about the design of the load must also be addressed [START_REF] Zhen | A Survey on Load Testing of Large-Scale Software Systems[END_REF]. Typically, the distinction is between (i) a realistic load and (ii) a fault-inducing load. The goal of this experiment is to see how an application reacts to an IO-bound workload that forces it to create thousands of virtual threads. Putting the application under heavy load leads to the creation of numerous virtual threads; it is then possible to see how much time is spent creating them and mounting and unmounting them from their carriers. Delay is added between the database and the application to force the virtual threads to remain idle while waiting for results, thus making it possible to see the impact of thousands of idle virtual threads on the memory footprint. In this regard, the load is mostly fault-inducing, since it is specifically designed to maximize the memory footprint as well as the number of virtual-thread switches. To load/stress the service, a scenario inspired by the Fortunes test of the Techempower benchmark suite [15] is used. In this benchmark, the service must:
(1) fetch all "Fortune" objects from a database,
(2) add a Fortune object to the list it fetched from the database,
(3) sort the list,
(4) return it using a template engine; in our test we simply serialize the list in JSON and return it (a sketch of such a handler is given at the end of this subsection).
The service was deployed in a controlled environment to observe the evolution of CPU usage, memory footprint, latency and throughput across the different configurations.

Integrating different components into the experiment. In this section, the role of each component, as well as the constraints put on each of them, is discussed. Figure 3 gives an overview of the components involved in the benchmark, their relations, and the different variables we can tweak.

Load generator. The load is generated with Hyperfoil [16]. Pitfalls. Hyperfoil builds a report of what happened during the test. This report can indicate that something went wrong with an aspect that we do not measure and that is not supposed to be the limiting factor. For instance, the number of connections required for the test is specified before running the benchmark; it needs to be properly tuned so that there are enough connections to support the requested rate. For every run, Hyperfoil generates a report, making it possible to check that no connection is blocked.
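For concreteness, here is a sketch of the endpoint under test, implementing the four steps above. It is a hypothetical reconstruction rather than the benchmark's actual source: the Fortune record, the FortuneRepository interface and the JAX-RS package names (which vary across Quarkus versions) are all assumptions.

```java
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

@Path("/fortunes")
public class FortuneResource {

    public record Fortune(int id, String message) {}

    /** Hypothetical DAO; in the benchmark this is a query against the database container. */
    public interface FortuneRepository { List<Fortune> findAll(); }

    private final FortuneRepository repository;

    public FortuneResource(FortuneRepository repository) {
        this.repository = repository;
    }

    @GET
    public List<Fortune> fortunes() {
        List<Fortune> fortunes = new ArrayList<>(repository.findAll());            // (1) fetch
        fortunes.add(new Fortune(0, "Additional fortune added at request time.")); // (2) add
        fortunes.sort(Comparator.comparing(Fortune::message));                     // (3) sort
        return fortunes; // (4) the framework serializes the list to JSON
    }
}
```

In the blocking and virtual-thread variants, the repository call blocks the calling thread; the reactive variant would instead return a non-blocking type (a Mutiny Uni), so that no event-loop is ever blocked.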
Service. The service is the subject of the benchmark. The following metrics are collected.
response-time/latency: the longest response time registered in the run is considered. The infrastructure provides a detailed list of every request, so it is possible to check that the plotted number is not an outlier, for instance by comparing it with the longest 0.1% of requests.
memory consumption: memory consumption is measured by frequently probing the Resident Set Size (RSS) of the service.
throughput: throughput offers a different perspective on how the service responds to high loads. The latency may explode at some rate r req/s while the service is still able to deliver 2·r req/s; we hence need to measure both.
CPU usage: CPU usage is an indication of how efficiently resources are used. Moreover, it is useful to know whether the bottleneck is the service itself or located somewhere else (bad performance coupled with very low CPU usage might mean that the problem lies elsewhere). Just like the RSS, CPU usage is probed several times during the test.
worker-pool size: when in blocking mode, the worker pool needs to be configured at the right size (too small: too few threads, and performance will not be optimal; too big: too many threads, which might trigger an OutOfMemoryError).
DB connection-pool size: the throughput (and the latency) depends greatly on how fast data can move between the database and the service, which is directly linked to the number of connections available to handle the traffic.

Pitfalls. For the measures to be correct, the service needs not only to be the bottleneck of the benchmark, but to be so for the right reasons. Unequal tuning of one of the application versions (reactive, virtual threads, blocking) might result in an erroneous comparison. Three different versions of the application are tested: Quarkus-reactive, Quarkus-blocking, and Quarkus-virtual-threads. Quarkus-reactive and Quarkus-virtual-threads have their own strategy regarding the number of IO threads (event-loops) they use, but Quarkus-blocking does not set a worker-pool size depending on the available memory; it only has a default value that can be overridden. The correlation between the size of the worker pool and the maximum concurrency is straightforward: C_max = W / s̄, with W the number of workers and s̄ the average request service time (in seconds). The bigger the worker pool, the better the results. However, if the limit is set too high, worker threads occupy too much space in memory and pressure the garbage collector, leading the application to spend enough time in GC to actually degrade performance; this could even result in an OutOfMemoryError and kill the application. Thus, a preliminary test iterating over a range of worker-pool sizes was performed to find the optimal one. The conclusion was that setting the worker-pool size to the amount of memory in MB (if the container has 512 MB of memory available, use a worker pool of 512 worker threads) works best; this seems to be a good compromise between avoiding OutOfMemoryErrors and getting good performance (see the sizing sketch below). Since the application runs in a Java Virtual Machine (JVM), many things can be tuned to optimize performance, and properly tuning the JVM for a specific application is hard and time-consuming. We decided to use the Parallel garbage collector and to set the maximum heap size of the VM to half the available memory. Finally, when a Java process is started, the beginning of its lifetime is significantly impacted by the JVM warm-up. According to [START_REF] Lion | Don't Get Caught in the Cold, Warm-up Your JVM: Understand and Eliminate JVM Warm-up Overhead in Data-Parallel Systems[END_REF], the warm-up time is "commonly the bottleneck of short running jobs, even when the job is I/O-intensive". Hence, the first 20 s of a run are discarded, to make sure that all the needed classes have been loaded and that the hot methods have been optimized by the JIT. Since the JIT can take a few minutes to find and optimize the major hot methods, we run each test three times in the same VM. Some chaos introduced by the GC and the JIT might lead some tests to sporadically exhibit terrible performance under loads that are near their supported limit; to avoid this, we decided to keep the best of the three runs.
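A sketch of that sizing heuristic is given below; it assumes the container's memory budget is passed in explicitly, and the class and method names are ours.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WorkerPoolSizing {

    // Heuristic from the text: one platform worker thread per MB of
    // container memory, e.g. 512 MB -> 512 worker threads.
    static ExecutorService sizedWorkerPool(long containerMemoryMb) {
        int size = Math.toIntExact(containerMemoryMb);
        return Executors.newFixedThreadPool(size);
    }

    public static void main(String[] args) {
        ExecutorService workers = sizedWorkerPool(512);
        workers.submit(() -> System.out.println("handled on a platform worker"));
        workers.shutdown();
    }
}
```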
Database. The following points must be considered:
• properly dimensioning the number of connections,
• using a chaos-engineering tool, Pumba [17] (https://github.com/alexei-led/pumba), to add delay (on outgoing packets only),
• making sure that the chaos-engineering tool can handle the rate.
The use of a chaos-engineering tool makes it harder to characterize the service: it could be the source of an overhead that skews the results.

Pitfalls. Pumba makes it possible to specify what delay to apply to the outgoing packets of a given containerized process. The base hypothesis is that, no matter the delay, it should not impact the server, since it does not add any load to it. Thus, for a given request rate (one low enough for the server to keep response times low), a linear relation between the response time and the injected delay should be observed: if the maximum response time is T without Pumba, it should be T + 100 ms when Pumba adds 100 ms of delay, and T + d ms when we add d ms of delay. This hypothesis was proven wrong: after comparing several runs using different delays, it appeared that although latency was linearly correlated with the delay at first, there was a point at which latency increased exponentially. In order to make sure that Pumba is not the bottleneck of the benchmark, a baseline was required. This baseline is used to establish what a measure of an overloaded service should look like in terms of CPU usage, RSS and packet-drop rates. Every run done using Pumba is compared to the results of the baseline, to check whether the bottleneck comes from the service or from Pumba.

EXPERIMENT RESULTS

As mentioned earlier, delay is added between the database and the service to force virtual threads to stay in memory, in order to see how latency impacts the implementation. These results are compared to a baseline without any delay, to see how latency (inevitable in a distributed application) impacts the performance of Quarkus-blocking, Quarkus-virtual-threads and Quarkus-reactive. To better understand the results, an investigation is led using the Java Flight Recorder (JFR) for the capture of JVM events.

No delay induced by Pumba, baseline test. Figure 4 shows the evolution of the Resident Set Size (RSS) [18], CPU usage, latency and throughput of the service as the number of requests per second generated by Hyperfoil increases. This run is performed without Pumba; it is thus the baseline. Since the time it takes for a request to complete is in the order of 100 µs and does not involve any I/O operation, Quarkus-blocking is expected to be the best option, or close to it.

System load areas. The goal of this experiment is to correlate different metrics; ideally, we would like to be able to explain high latency and/or low throughput using CPU usage or memory consumption. Table 4 summarizes the limits for each working condition in terms of latency, throughput, memory and CPU.

Table 4: Different working conditions (r denotes the requested rate)
            Latency   Throughput   RSS (MB)   CPU
Normal      < 2 s     = r          < 330      < 80%
Critical    >= 2 s    = r          > 330      < 80%
Overload    >> 2 s    < r          > 330      > 80%

(1) Normal conditions: the throughput matches the requested rate r, and resource utilization is not maxed out.
(2) Critical conditions: latency starts increasing although the throughput still matches the requested rate; messages might pile up and cause long latencies after a long time.
Memory consumption or CPU usage should be nearing their limits.
(3) Overload conditions: the latency is high and the service cannot process as many messages as requested. CPU usage should have reached its limit, and memory consumption might have reached its limit as well.
Although the goal is to characterize the latency of a system, throughput is a critical measure that gives insights into the ability of the system to support a certain load. Memory consumption, measured through the RSS, might not be as reliable an indicator as CPU usage for determining whether the service is overloaded or nearing overload: although it steadily increases with the load, Figure 4 shows that systems in a critical state all have a CPU usage nearing 100%, whereas a discrepancy of 50 MB can be observed in terms of memory usage.

Comparing the three systems.
• Quarkus-virtual-threads rapidly reaches a plateau in terms of memory consumption at 2500-3000 req/s and still keeps a small response time and a satisfying throughput.
• Quarkus-blocking also reaches a high memory footprint early.
• Their respective CPU usages remain low and seem to increase linearly with the request rate.
• Quarkus-reactive has the lowest memory footprint during the entire experiment, as well as the lowest CPU usage.
We can derive a number of conclusions from this run:
(1) A high memory footprint alone does not seem to be enough to cause an increase in latency or a stagnation of throughput.
(2) When threads are blocked for a small amount of time (as low as tens of µs), Quarkus-blocking is less expensive than Quarkus-virtual-threads in terms of CPU usage.
(3) Quarkus-reactive is the most efficient system, especially in terms of memory consumption.

Constant 200 ms delay, increasing rate. As soon as delay is added, Quarkus-blocking behaves differently. Its maximum level of concurrency is easy to compute; as seen above, C_max = W / s̄, with W the number of workers and s̄ the average request service time (in seconds). In this case, W = 500 and s̄ = 200 ms, so a maximum concurrency level of C_max = 500 / 0.2 = 500 × 5 = 2500 is to be expected. Each of the three systems is expected to be affected by the additional delay, Quarkus-blocking especially, due to its hard scalability limit. Hypothesis: performance should decrease for the three systems (bigger resource consumption, smaller throughput, earlier latency increase), but Quarkus-blocking should be the most affected by the delay. This hypothesis is somewhat consistent with what we measure in Figure 5: although every system is impacted by the delay, Quarkus-blocking is especially affected, and the rate at which it becomes overloaded is now 2500 req/s. As seen in Figure 4, a high memory footprint alone is not enough to cause high-latency responses; adding delay shows that high CPU usage alone cannot cause it either. The CPU usage of Quarkus-virtual-threads and Quarkus-reactive stabilizes at 90%, and their latency starts increasing only when the memory footprint becomes high enough. The most obvious difference between the two runs is the CPU usage of each system, which increases by about 30%; some specific comparisons are displayed in Table 5.

Table 5: Comparing CPU usage (in %) at different concurrency levels, with and without delay (columns correspond to three increasing request rates)
Quarkus-virtual-threads        73.9           99.8    101.3
Quarkus-virtual-threads-200    92.9 (+19.0)   106.0   101.7

To understand why delaying the database response causes a drop in performance as well as an increase in CPU usage, an analysis using the Java Flight Recorder (JFR) was performed.
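Such a recording can be started either with the -XX:StartFlightRecording JVM flag or programmatically through the jdk.jfr API; the sketch below shows the programmatic route, with the sleep standing in for the measured load window.

```java
import java.nio.file.Path;
import jdk.jfr.Configuration;
import jdk.jfr.Recording;

public class JfrCapture {
    public static void main(String[] args) throws Exception {
        // "default" is the low-overhead event configuration shipped with the JDK.
        Configuration config = Configuration.getConfiguration("default");
        try (Recording recording = new Recording(config)) {
            recording.start();
            Thread.sleep(1_000); // placeholder: drive the load while recording
            recording.stop();
            recording.dump(Path.of("run.jfr")); // inspect with JDK Mission Control
        }
    }
}
```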
JFR analysis. We decided to analyze two situations:
(1) Quarkus-virtual-threads at 4500 req/s, with and without delay, to see whether JFR can provide insights into why performance is affected by delay;
(2) Quarkus-reactive at 4500 req/s, with and without delay, to see what changes in a service that remains responsive and efficient.
JDK Mission Control (JMC) [19] was used to analyze the recordings. Table 6 summarizes the important results; it is an extract of a larger set of data [20]. Using Table 6, it is clear that the garbage collector (GC) is under much higher pressure in the case of Quarkus-virtual-threads with 200 ms of delay. CPU usage is already fairly high (about 90% according to Figure 5); having to stop the computation to run the GC for tens of milliseconds on average might be the dealbreaker here. Additionally, among the 709 GCs performed by Quarkus-virtual-threads (scenario Quarkus-virtual-threads-200), 65 are Old Generation GCs. The old generation is the set of long-living objects: after a certain time alive, the JVM promotes live objects to the Old Generation space to free space in the Young Generation [START_REF] Williams | Java Garbage Collection Basics[END_REF]. Old Generation GC takes longer because it has to go through every living object, and it is explicitly stated in [START_REF] Williams | Java Garbage Collection Basics[END_REF] that "for Responsive applications, major garbage collections should be minimized". In this case, young collection itself takes more than 40 s, with an average pause longer than usual (65 ms against 19 ms without delay [21]). GC therefore seems to be the reason why the performance of Quarkus-virtual-threads degrades when delay is added. Several hypotheses regarding which objects trigger GC can be proposed:
• since requests take longer to be processed, more requests must be handled concurrently; hence more virtual threads might live in the system at the same time, occupying space;
• Quarkus-virtual-threads relies on Netty (which has not been optimized for this use case); when handling a high number of concurrent requests, buffers and caches might be used in a suboptimal way and occupy more memory than in the Quarkus-reactive implementation.
Conducting a comprehensive analysis of the heap can provide valuable insights into the veracity of these hypotheses. It is imperative to note that they have different implications. If virtual threads are indeed the root cause of prolonged garbage collection, then the optimal solution would involve modifying the JDK to reduce their size, if feasible. Conversely, if the increased memory footprint is attributed to the use of Netty in conjunction with virtual threads, then the solution would not impact the JDK, and its implementation would likely be more straightforward.

Heap analysis. Through a comparative analysis of heap dumps obtained from the Quarkus-reactive and Quarkus-virtual-threads implementations of the application, we can ascertain the factors contributing to garbage-collection inefficiencies. The results presented here are extracted from this repository [22].
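Heap dumps such as those compared here can be captured with the jmap tool or, as sketched below, programmatically through the HotSpotDiagnosticMXBean (the output file name is arbitrary).

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDump {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean diagnostics =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // live = true triggers a full GC first, so the dump contains only
        // reachable objects, i.e. the ones that actually pressure the collector.
        diagnostics.dumpHeap("service.hprof", true);
    }
}
```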
The top consumers for both Quarkus-virtual-threads and Quarkus-reactive are presented in Table 7. Although there are fewer byte arrays in Quarkus-virtual-threads than in Quarkus-reactive, they weigh about three times as much: about 81.3 MB is occupied by only 10 147 byte arrays containing the JSON array returned by the Fortune benchmark. The presence of the char arrays is explained the same way. After further investigation, it appeared that the Jackson library generates thousands of ThreadLocals in its management of buffers. These data structures occupy a lot of space and take time to be collected. In Quarkus-reactive there are only two of them, since they are spawned once per event-loop; in Quarkus-virtual-threads, on the other hand, thousands of them are created every second, since a virtual thread is spawned for every incoming request. As seen in Section 3.1, this comes from the mismatch between the core hypotheses of reactive programming and those of pool-based synchronous programming.

Discussing our results. The strategy used to get a clean dataset was to perform three runs and systematically choose the best one, to avoid outliers. A better approach would be to run the tests n times and perform a true statistical analysis on the results, to obtain clear indications of the likelihood of extreme latencies at each concurrency level. Running the benchmark takes a long time: about 20 minutes per level of concurrency. Considering that the experiments displayed in this paper range over 30 to 50 levels of concurrency, performing one run of the benchmark takes between 10 and 17 hours; running them n times, with n big enough for a statistical analysis to be meaningful, would take days if not weeks. Moreover, such results would be useful for making accurate predictions on how Quarkus-virtual-threads might behave in given situations, but this is beyond the scope of this paper. The current experiments already gave directions for future research and optimization; a more reliable characterization of performance is, however, an endeavor that we might undertake in the future.

RELATED WORKS

This experiment investigates and evaluates the integration of Java virtual threads in an existing reactive framework, Quarkus. A review of the state of the art was necessary for (i) existing tools for benchmarking Java microservices, (ii) existing tests of Java virtual threads, and (iii) characterizing existing frameworks to motivate the work on Quarkus.

Existing benchmarks for testing Java microservices. As outlined in Section 3, this study employs a predominantly fault-inducing workload and is situated at the confluence of load testing, performance testing and stress testing. We reviewed several existing benchmark suites. Most benchmarks aim at characterizing the application under test, not the JVM itself; benchmarks such as BenchLab [START_REF] Cecchet | BenchLab: An Open Testbed for Realistic Benchmarking of Web Applications[END_REF] are therefore of limited use in our context. While benchmark suites such as DaCapo [START_REF] Stephen | The DaCapo Benchmarks: Java Benchmarking Development and Analysis[END_REF] or the SPEC suites [START_REF] Shiv | SPECjvm2008 Performance Characterization[END_REF] are frequently used to tune components of the JVM (such as JIT compilers, garbage collectors and profilers), they were not designed for studying concurrency, which is the main objective of our work. Additionally, they were created before atomic operations, Java lambdas and lock-free data structures were added to the JDK. The Renaissance benchmark suite [START_REF] Prokopec | Renaissance: Benchmarking Suite for Parallel Applications on the JVM[END_REF] was designed to enhance the JDK. It consists of 21 different benchmarks; most of them are unsuitable for our purposes, since we are interested in IO-intensive operations.
Although it seems to study concurrency and parallelism better than previous benchmark suites, "all benchmarks run within a single JVM process, and only rely on the JDK as an external dependency. Some benchmarks use network communication, and they are encoded as multiple threads that exercise the network stack within a single process (using the loop-back interface)" [START_REF] Prokopec | Renaissance: Benchmarking Suite for Parallel Applications on the JVM[END_REF]. As explained in Section 3, our experiment requires control over the duration of I/O operations, in order to study how virtual threads compare to threads and reactive constructs in an environment where latency between services is non-negligible. This was implemented via a database running in another container and the use of chaos-engineering tools to add delay to the network interface of the database container, which goes against the design of the Renaissance suite. We can, however, observe similarities between the benchmark suites mentioned earlier and the benchmark developed here: a warm-up phase is typically found everywhere, and the actual measure is always performed on the steady-state execution after the warm-up.

Existing tests of virtual threads in Java. Several studies [START_REF] Beronić | Comparison of Structured Concurrency Constructs in Java and Kotlin -Virtual Threads and Coroutines[END_REF][START_REF] Beronić | On Analyzing Virtual Threads -a Structured Concurrency Model for Scalable Applications on the JVM[END_REF][START_REF] Pufek | Achieving Efficient Structured Concurrency through Lightweight Fibers in Java Virtual Machine[END_REF] led by the same team have investigated the performance of virtual threads compared to Java platform threads and Kotlin coroutines. The findings indicate that virtual threads offer superior performance to both Java platform threads and Kotlin coroutines; specifically, they can achieve better scalability and faster synchronization [START_REF] Beronić | On Analyzing Virtual Threads -a Structured Concurrency Model for Scalable Applications on the JVM[END_REF]. The benchmark used in the latest publication [START_REF] Beronić | Comparison of Structured Concurrency Constructs in Java and Kotlin -Virtual Threads and Coroutines[END_REF] is also a multithreaded HTTP server: for each request, an object is created, its information is written to a file, and its data is returned. The I/O operation performed is thus a disk I/O operation, and the benchmark is run on Linux. However, non-blocking disk I/O on Linux is notoriously complex (https://unixism.net/loti/async_intro.html), which makes the results harder to interpret. Moreover, these studies differ from ours in terms of service size. Specifically, the studies used: (i) an Ubuntu 18.04.3 64-bit virtual machine with a base memory of 9 GB, running on a Windows 10 64-bit OS with an x64-based processor and 16 GB of RAM [START_REF] Pufek | Achieving Efficient Structured Concurrency through Lightweight Fibers in Java Virtual Machine[END_REF]; (ii) an Ubuntu 20.04 64-bit machine with 13 GB of RAM and 8 cores, running as a VM on a Windows 10 64-bit OS with an x64-based processor and 16 GB of RAM [START_REF] Beronić | On Analyzing Virtual Threads -a Structured Concurrency Model for Scalable Applications on the JVM[END_REF]; and (iii) an Ubuntu 20.04.3 LTS 64-bit machine with 16 GB of memory and an Intel® Core™ i7-6700 CPU @ 3.40GHz processor [START_REF] Beronić | Comparison of Structured Concurrency Constructs in Java and Kotlin -Virtual Threads and Coroutines[END_REF].
Notably, virtual threads were not compared to existing reactive programming models in these studies; instead, they were compared to platform threads on machines with ample resources. In contrast, we evaluate virtual threads against reactive libraries on machines with limited resources.

Existing Java frameworks. On the Spring framework side (https://spring.io/), experiments replace Tomcat's standard thread pool with a virtual-thread-based executor (https://spring.io/blog/2023/02/27/web-applications-and-project-loom, https://spring.io/blog/2022/10/11/embracing-virtual-threads). Current users of the reactive version of Spring, Spring WebFlux (https://docs.spring.io/spring-framework/docs/current/reference/html/web-reactive.html), will not benefit from virtual threads. In contrast, Quarkus is a relatively newer framework focused on cloud-native applications. Its core is reactive, and users can choose to delegate some work to blocking thread pools; as such, the three programming models (Quarkus-blocking, Quarkus-reactive and Quarkus-virtual-threads) can be used in the same application, depending on the context. Helidon Níma [28] is an instance of a framework that was specifically designed for virtual threads. It was briefly tested against Netty and performed favorably [29]. However, that benchmark was limited in scope, and its conclusion should be nuanced: it takes place on the loopback interface, without any delay, and no long-running operation was performed. It is thus difficult to predict how an application will behave when thousands of virtual threads are kept in memory while waiting for I/O completion. Additionally, as Níma is built from the ground up, it needs to reimplement features already available in well-established frameworks such as Netty. As a result, further benchmarking will be needed, once the missing features are incorporated, to confirm that the performance gain remains substantial. To the best of our knowledge, Jakarta EE has not released any information regarding its support for virtual threads. Finally, several options were available for the reactive-streams library. The three most widespread libraries propose similar services and display similar performance [START_REF] Ponge | Analysing the Performance and Costs of Reactive Programming Libraries in Java[END_REF]. Mutiny was picked, since it is the default choice for Quarkus; according to prior benchmarks [START_REF] Ponge | Analysing the Performance and Costs of Reactive Programming Libraries in Java[END_REF], the performance of the application should not be impacted significantly by this choice.

CONCLUSION AND PERSPECTIVES

This paper aims at describing and motivating the current integration of an easy-to-use programming abstraction, virtual threads, in the Quarkus framework. An experimental protocol was developed to evaluate the performance of the implementation in the context of resource-constrained environments with a fault-inducing load. The study involved a thorough examination of the tools used to obtain meaningful performance metrics. It reveals that although Quarkus-virtual-threads outperformed Quarkus-blocking in the context of the experiment, it still does not scale to the concurrency levels that Quarkus-reactive is capable of achieving. Additionally, its resource management is less efficient than that of Quarkus-reactive, with a particularly high memory footprint.
However, the analysis indicates that virtual threads are not directly responsible for the high memory consumption: other data structures generated by Quarkus-Netty are the main memory consumers. The study suggests that it may be possible to optimize the way virtual threads are integrated into the Quarkus framework, so as to reduce the heap size and the GC pressure, and to improve scalability and the efficiency of resource management. It should be noted that the findings may not necessarily generalize to other scenarios, and further investigations with realistic loads may provide additional insights into the performance of the three variants (Quarkus-blocking, Quarkus-reactive and Quarkus-virtual-threads). The study nevertheless highlights the mismatch between the core assumptions of Netty, which expects a limited number of event-loop threads, and those of virtual threads, which are meant to be cheap to create and spun up as needed. The findings suggest that integrating virtual threads into frameworks built on top of Netty, or into frameworks that rely heavily on ThreadLocals under the presumption of a small thread count, is likely to cause significant garbage-collection (GC) pressure. Future investigations should explore the performance of the three variants on different architectures with more CPU cores and memory. Virtual threads should also be compared with similar programming abstractions in other languages, such as Golang goroutines, Python coroutines, Kotlin coroutines, C# tasks and JavaScript asynchronous functions. Additionally, performance degradation should be evaluated alongside productivity gains: if the utilization of virtual threads significantly simplifies the implementation of a particular feature and the accompanying performance impact is minimal, the trade-off can prove advantageous. Finally, since garbage collection appears to be the main problem, different garbage collectors should be tested in order to characterize their impact on the performance of the virtual-threads integration; for example, using more optimistic hypotheses could enable a quicker collection of unused ThreadLocals and improve performance. As mentioned in [START_REF] Beronić | Comparison of Structured Concurrency Constructs in Java and Kotlin -Virtual Threads and Coroutines[END_REF], work started in 2019 [START_REF] Pufek | Analysis of Garbage Collection Algorithms and Memory Management in Java[END_REF] to compare different garbage-collection algorithms, and it could be extended to virtual threads.
Figure 2: Concatenation of asynchronous results using virtual threads. (figure omitted)
Figure 3: Diagram of the different components involved in the benchmark. (figure omitted)
Figure 4: Evolution of the latency, CPU usage, RSS and throughput as a function of the request rate, no delay, using the default virtual-threads scheduler. (figure omitted)
Figure 5: Evolution of the latency, CPU usage, RSS and throughput as a function of the request rate, 200 ms delay, using the default virtual-threads scheduler. (figure omitted)

Table 1: Comparison of the different Quarkus-virtual-threads options
Forking worker model. Pros: simple, fits the virtual-threads model. Cons: context switches.
Using the event-loop as carrier. Pros: no context switch, fewer threads overall. Cons: potential deadlocks.
Modifying Netty event-loops to be virtual threads. Pros: integration done at the Netty level; Netty-based frameworks would benefit from it. Cons: cannot modify Netty upstream; unpredictable effects.

Table 2: Results of the benchmark suite for the JSON and FORTUNE tests for each technology (number of requests processed)
Test      quarkus-reactive   quarkus-v-thread   quarkus-blocking
JSON      43 955             10 682             43 863
FORTUNE   15 607             11 227             10 890

Table 3: Results of the benchmark suite for the JSON and FORTUNE tests for each technology after correction (number of requests processed)
Test      quarkus-reactive   quarkus-v-thread   quarkus-blocking
JSON      43 450             42 736             40 437
FORTUNE   17 673             16 011             11 461

Table 6: GC information summary
                        Quarkus-virt-threads-0   Quarkus-virt-threads-200
Max latency             1.44 s                   31.54 s
GC count                275                      709
Avg pause               18.662 ms                92.270 ms
Longest pause           89.723 ms                520.437 ms
Young collection time   5.132 s                  41.685 s
Old collection time     N/A                      23.734 s
Sum of pauses           5.132 s                  65 s

                        Quarkus-reactive-0       Quarkus-reactive-200
Max latency             1.08 s                   704.64 ms
GC count                192                      183
Avg pause               15.169 ms                14.968 ms
Longest pause           32.898 ms                49.312 ms
Young collection time   3.004 s                  2.739 s
Old collection time     N/A                      N/A
Sum of pauses           3.004 s                  2.739 s

Table 7: Top 5 consumers for each application. (table body missing)

Footnote URLs:
https://docs.oracle.com/javase/8/docs/api/java/nio/package-summary.html
https://unixism.net/loti/what_is_io_uring.html
https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/
https://quarkus.io/blog/resteasy-reactive/
https://vertx.io
https://netty.io
https://download.java.net/java/early_access/loom/docs/api/java.base/java/util/concurrent/Executors.html#newVirtualThreadPerTaskExecutor()
https://download.java.net/java/early_access/loom/docs/api/java.base/java/util/concurrent/Executors.html
https://www.techempower.com/benchmarks/#section=data-r20&hw=ph&test=fortune
https://hyperfoil.io/
https://docs.oracle.com/cd/E29584_01/webhelp/endeca_glossary/src/gend_resident_set_size_dgraph.html
https://www.oracle.com/java/technologies/javase/products-jmc8-downloads.html
https://github.com/anavarr/quarkus_virtual_threads_results
https://helidon.io/nima
https://medium.com/helidon/helidon-n%C3%ADma-helidon-on-virtual-threads-130bb2ea2088

ACKNOWLEDGMENTS
This work is partially supported by Red Hat Research.
04112405
en
[ "shs.eco" ]
2024/03/04 16:41:24
2022
https://theses.hal.science/tel-04112405/file/these_internet_khelifa_s.pdf
Sylvie Démurger, Béatrice Rey-Fournier.
This thesis was, for me, not only a means of acquiring the skills needed for the researcher's profession, but also a turning point in my personal development. I learned that patience, endurance and humility are necessities for a successful research life, because research can sometimes generate feelings of frustration, doubt and uncertainty that one must learn to control. Doing research also means meeting many other fascinating people from different disciplines, which pushes us to question ourselves, to be curious and to challenge our knowledge. This journey would never have been so enriching and formative had I not been so well accompanied by wonderful people. My first thanks go to my thesis supervisors, Sylvie Démurger and Béatrice Rey-Fournier, from whom I learned a great deal and whom I will never thank enough. Sylvie, an econometrician, and Béatrice, a theorist, formed the best duo any doctoral student could have wished for. Their support, their great availability, their patience and their many pieces of advice were among the main ingredients behind the completion of this project. These lines are far from sufficient to convey their exceptional qualities as researchers. I am so happy that our common interest in questions of migration and risk brought us together. Beyond the work, it is the people I would like to honor: thank you for your consideration and your kindness. I am deeply grateful for the enriching and human exchanges we have had. Thank you for opening so many doors for me. Thank you, above all, for the trust you placed in me during these four years. I thank the members of my thesis committee, Rémi Bazillier, Terry Sicular and David Crainich, for agreeing to be part of it. I am particularly grateful to Rémi Bazillier and Terry Sicular for kindly accepting to serve as referees for this thesis. It is an honor and a pleasure to present my work to you. I would also like to warmly thank my co-author Jie He, who welcomed me during my stay at the Université de Sherbrooke, Canada. It was a great honor for a doctoral student to work with such an exceptional and committed researcher. Thank you for granting me countless minutes of your precious time, and for being responsive, supportive and optimistic despite the difficulties we encountered while working together. Thank you for sharing with me your expertise and your passion, which were both inspiring and nourishing. I thank the entire administrative staff of the GATE laboratory and of the Doctoral School for the immense work they do to support us; they are always attentive and very responsive, so that we can develop our research under the best conditions. Special thanks to Izabela Jelovac, the director of GATE, and to Sonia Paty, the former director of the laboratory. Exceptional thanks to all the researchers of GATE, and particularly to Lise, who lent me a hand in difficult times: thank you for your availability and your precious advice. Thanks to all the doctoral and post-doctoral students of the laboratory, who have been such good colleagues. Finally, I thank my dear family.
There are no words that can describe my love and gratitude towards my parents, Dalila and Abdellaziz. Without them, I would not be where I am today. My sisters and my brother are also among the people I want to thank for always being there for me and giving me so much energy and motivation. Many thanks as well to my maternal grandmother and to my uncle, Zouhair, who gave me a great deal of courage and support throughout my studies.

As stated by Becker himself in the third edition of his seminal book, the term "human capital" was not widely accepted by economists back then: "It may seem odd now, but I hesitated a while before deciding to call my book Human Capital-and even hedged the risk by using a long subtitle. In the early days, many people were criticizing this term and the underlying analysis because they believed it treated people like slaves or machines." [START_REF] Becker | Human capital: A theoretical and empirical analysis, with special reference to education[END_REF].[3] The motivation for this concept started with the finding that the growth of physical capital accounted for only a small part of the variation in economic growth. The importance of the remaining unexplained part led some scholars, mainly Schultz (e.g., Schultz, 1961), to suggest a new perspective for economic analysis, in which the unmeasured skills and abilities of workers should be taken into consideration. However, it was Becker who constructed a theory about this unobserved factor, referred to as human capital and defined as the set of knowledge, skills, health, or values embodied in a person, that may affect observed outcomes of people such as earnings and education. The role of human capital in explaining the gaps in productivity and inequality among countries has therefore been recognized [START_REF] Becker | Investment in human capital: A theoretical analysis[END_REF][START_REF] Becker | Human capital (new york, national bureau of economic research)[END_REF], and ever since, human capital has been placed at the heart of economic analyses, and a substantial body of work showing its importance for enhancing economic growth has been developed (e.g., Romer, 1986; [START_REF] Lucas | On the mechanics of economic development[END_REF]). At the individual level, both cognitive and non-cognitive skills have been shown to predict success in different aspects of social and economic life, including wages, schooling and criminality (e.g., [START_REF] Murnane | The growing importance of cognitive skills in wage determination[END_REF][START_REF] Heckman | The importance of noncognitive skills: Lessons from the ged testing program[END_REF][START_REF] Bowles | The determinants of earnings: A behavioral approach[END_REF] Heckman et al., 2006; [START_REF] Borghans | The economics and psychology of personality traits[END_REF][START_REF] Huggett | Sources of lifetime inequality[END_REF][START_REF] Kautz | Fostering and measuring skills: Improving cognitive and non-cognitive skills to promote lifetime success[END_REF][START_REF] Heckman | The causal effects of education on earnings and health[END_REF][START_REF] Heckman | The nonmarket benefits of education and ability[END_REF]). Moreover, a large part of the variation in socioeconomic success across individuals can be explained by differences in abilities (Cunha and Heckman, 2007).
These gaps, which open up early in life, therefore play important roles in creating inequality [START_REF] Cunha | The economics and psychology of inequality and human development[END_REF]. However, human capital involves abilities and skill sets that decision-making units are able to invest in, in order to improve their levels. Alfred Marshall stated, in the Principles of Economics, that "The most valuable of all capital is that invested in human beings" [START_REF] Pujol | Gender and class in marshall's "principles of economics"[END_REF], and that is true whether these investments are made at the micro or the macro level. In order to maintain and increase a country's competitiveness, physical capital investments are therefore not enough: human capital investments are as important as, if not more important than, other types of investment [START_REF] Heckman | China's investment in human capital[END_REF].[4] Recent evidence on the effects of government human capital investments in developing countries, through educational and health policies, supports this view. Increases in the number of years of schooling and in wages have been documented following the increase in the number of schools in Indonesia [START_REF] Duflo | Schooling and labor market consequences of school construction in indonesia: Evidence from an unusual policy experiment[END_REF]. Providing monetary support for secondary school students improved the access of female winners to jobs with rents in Ghana [START_REF] Duflo | The impact of free secondary education: Experimental evidence from ghana[END_REF], while providing small cash transfers to fathers of school-aged children greatly increased school participation in poor rural areas of Morocco [START_REF] Benhassine | Turning a shove into a nudge? a "labeled cash transfer" for education[END_REF]. Removing school fees in primary schools increased educational attainment for children in Tanzania [START_REF] De Neve | Children's education and parental old age survival - quasi-experimental evidence on the intergenerational effects of human capital investment[END_REF]. Providing free learning assistance for those lagging behind in basic literacy and numeracy skills has been found effective in increasing average test scores (Banerjee et al., 2007), while monitoring and offering financial incentives to teachers reduced their absence from school and increased children's test scores in India [START_REF] Duflo | Incentives work: Getting teachers to come to school[END_REF]. Government health policies are also important investments in people's human capital. Public health spending, and particularly increased medical care, improves test scores and reduces child mortality rates [START_REF] Bharadwaj | Early life health interventions and academic achievement[END_REF][START_REF] Farag | Health expenditures, health outcomes and the role of good governance[END_REF]. Combining social and health services also affects health outcomes: a higher ratio of social to health spending has been shown to improve the following health conditions: adult obesity, asthma, mentally unhealthy days, days with activity limitations, and mortality rates for lung cancer, acute myocardial infarction, and type 2 diabetes [START_REF] Bradley | Variation in health outcomes: the role of spending on social services, public health, and health care, 2000-09[END_REF]. Positive effects of nutritional policies have also been documented.
The introduction of iodine fortification has been found to increase wages in the United States [START_REF] Adhvaryu | When it rains it pours: The long-run economic impacts of salt iodization in the united states[END_REF], while nutritional supplementation for pregnant women enhanced weight gain for children aged 0 to 3 in Senegal [START_REF] Linnemayr | Almost random: Evaluating a large-scale randomized nutrition program in the presence of crossover[END_REF].[5] Other policies that affect household resources are also important. Government policies affecting parental financial resources have positive effects on longevity (Aizer et al., 2016a), while policies promoting maternity leave led to more education and higher wages for exposed children [START_REF] Carneiro | A flying start? maternity leave benefits and long-run outcomes of children[END_REF]. National child-care policies, replacing parental care, were also found to increase future earnings [START_REF] Havnes | No child left behind: Subsidized child care and children's long-run outcomes[END_REF]. Beyond investment in the accumulation of human capital, "early life conditions" are also important determinants of human capital, and it is therefore important to take them into account, including in public policy decisions. Different risk factors for the deterioration of children's human capital, occurring as early as conception, have been documented, including, for example, diseases such as malaria [START_REF] Venkataramani | Early life exposure to malaria and cognition in adulthood: evidence from mexico[END_REF], maternal alcohol consumption [START_REF] Von Hinke Kessler Scholder | Alcohol exposure in utero and child academic achievement[END_REF], nutritional deprivation [START_REF] Almond | In utero ramadan exposure and children's academic performance[END_REF], maternal stress (Aizer et al., 2016b), family ruptures [START_REF] Persson | Family ruptures, stress, and the mental health of the next generation[END_REF], water pollution [START_REF] Currie | Something in the water: Contaminated drinking water and infant health[END_REF], and air pollution [START_REF] Zhang | Children's respiratory morbidity prevalence in relation to air pollution in four chinese cities[END_REF][START_REF] Black | This is only a test? long-run impacts of prenatal exposure to radioactive fallout[END_REF] Bharadwaj et al., 2017; [START_REF] Midouhas | Outdoor and indoor air quality and cognitive ability in young children[END_REF][START_REF] Heissel | Does pollution drive achievement? the effect of traffic pollution on academic performance[END_REF]. [START_REF] Zhang | Children's respiratory morbidity prevalence in relation to air pollution in four chinese cities[END_REF], for example, showed that air pollution is associated with respiratory problems for children, while [START_REF] Black | This is only a test? long-run impacts of prenatal exposure to radioactive fallout[END_REF], Bharadwaj et al. (2017), [START_REF] Heissel | Does pollution drive achievement? the effect of traffic pollution on academic performance[END_REF] and [START_REF] Midouhas | Outdoor and indoor air quality and cognitive ability in young children[END_REF] reported serious negative effects on their academic performance and cognitive abilities. Long-term outcomes, such as earnings, are also found to be associated with exposure to air pollution early in life (e.g., Isen et al., 2017).
Government remediation policies that aim to counter these risk factors are, therefore, of great importance for human capital formation. However, the timing of these policies' implementation over childhood also matters. Cunha and Heckman (2007) argued that, depending on whether early and late investments are complements or substitutes, and on whether government investments are anticipated by parents, the effects of national investments on the accumulation of human capital may differ. Early childhood government policies produce high returns for children from disadvantaged environments; later interventions for these children, however, would not be as effective if early and late investments are not perfect substitutes and if the productivity of late investments is higher, the higher the level of early investments (Cunha and Heckman, 2007). Empirically, evidence confirms the preeminence of early childhood interventions for the disadvantaged compared to remedial actions later in life [START_REF] Cunha | Estimating the technology of cognitive and noncognitive skill formation[END_REF]. Deworming programs in Kenya, for example, have been shown to improve educational outcomes and labor supply [START_REF] Baird | Worms at work: Long-run impacts of a child health investment[END_REF]. Policies implemented to reduce blood lead levels are found to increase the probability of performing better in reading and math tests in Rhode Island [START_REF] Aizer | Do low levels of blood lead reduce children's future test scores?[END_REF]. Interventions offered to lead-poisoned children in North Carolina, including lead remediation services, education for caregivers, and nutritional and medical assessments, reduced antisocial behavior and improved school performance [START_REF] Billings | Life after lead: Effects of early interventions for children exposed to lead[END_REF]. On the other hand, the theoretical literature suggests that parents are the main contributors to the formation of their children's human capital through genetics, investments, and the choice of the child's environment [START_REF] Becker | An equilibrium theory of the distribution of income and intergenerational mobility[END_REF] Cunha and Heckman, 2007). The monetary resources parents invest in their children, including investments in education, health care, the home environment and other goods and services, are indeed shown to be determinants of children's economic success, in terms of educational attainment, health, earnings and behavior (e.g., [START_REF] Yeung | How money matters for young children's development: Parental investment and family processes[END_REF][START_REF] Ho | Child's gender, parental monetary investments and care of elderly parents in china[END_REF][START_REF] Attanasio | Human capital development and parental investment in india[END_REF]). Parental time investment is just as important. It is found to be positively associated with children's educational outcomes and cognitive skills [START_REF] Fiorini | How the allocation of children's time affects cognitive and noncognitive development[END_REF][START_REF] Thomsen | Parental time investments in children: Evidence from denmark[END_REF][START_REF] Bono | Early maternal time investment and early child outcomes[END_REF][START_REF] Gayle | Intergenerational mobility and the effects of parental education, time investment, and income on children's educational attainment[END_REF][START_REF] Cordero-Coma | Parental time dedication and children's education: an analysis of west germany[END_REF],
to compensate for initial birth-weight differences among siblings [START_REF] Hsin | Is biology destiny? birth weight and differential parental treatment[END_REF], and to reduce gender gaps in reading and math scores in the early grades [START_REF] Baker | Boy-girl differences in parental time investments: Evidence from three countries[END_REF]. Many activities reflect decisions about time investment in children, including labor supply, divorce and migration. The latter can be thought of as a time disinvestment in children's human capital in the case where children are left behind, with migration duration measuring the level of this disinvestment. This is particularly common in developing countries, including China (UNICEF, 2018), Sri Lanka (Pinto-Jayawardena et al., 2006) and the Philippines (Cortes, 2015). In fact, economic conditions in developing countries are subject to frequent and unpredictable changes, making individuals and households, particularly those living around the subsistence level, vulnerable to different risks to their livelihood. Moreover, problems of moral hazard, information asymmetries, and the difficulty of enforcing contracts have implied a poor establishment of formal insurance and credit markets in those countries [START_REF] Alderman | Do the poor insure? a synthesis of the literature on risk and consumption in developing countries[END_REF].[6] As a result, different alternative informal risk-coping strategies have been used to counter the effects of income variation. Informal risk sharing, in terms of voluntary transfers and interpersonal loans among long-lasting interpersonal networks, has proven to be a major risk-coping strategy for rural residents (e.g., [START_REF] Ellsworth | Mutual insurance and non-market transactions among farmers in burkina faso[END_REF][START_REF] Coate | Reciprocity without commitment: Characterization and performance of informal insurance arrangements[END_REF][START_REF] Ligon | Mutual insurance, individual savings, and limited commitment[END_REF][START_REF] Foster | Imperfect commitment, altruism, and the family: Evidence from transfer behavior in low-income rural areas[END_REF]). As theory suggests that efficient agreements to share risk with others have to be at the "community level", most of the empirical tests are made at the village level, if not at the level of a larger group [START_REF] Genicot | Group formation in risk-sharing arrangements[END_REF]. Even though full/perfect insurance is rejected, Townsend (1994) found evidence consistent with substantial insurance at the village level in southern India. Similar results are reported by Udry (1994) in northern Nigeria, by [START_REF] Jalan | Are the poor less well insured? evidence on vulnerability to income risk in rural china[END_REF] in rural China, and by [START_REF] Kinnan | Distinguishing barriers to insurance in thai villages[END_REF] in Thai villages.
There is also evidence that some households engage in risk-sharing arrangements within the same ethnic group in Côte d'Ivoire [START_REF] Grimard | Household consumption smoothing through ethnic ties: evidence from cote d'ivoire[END_REF], and within caste-based risk-sharing groups in rural India [START_REF] Mazzocco | Testing efficient risk sharing with heterogeneous risk preferences[END_REF], while [START_REF] Fafchamps | Risk-sharing networks in rural philippines[END_REF], for example, found that risk-sharing networks are built among friends and relatives rather than at the village level in the rural Philippines. However, a limitation of this informal insurance mechanism is its ineffectiveness in managing spatially covariant risks, which are common at the regional level.[7] It was shown, for example, that informal risk sharing between households was reduced during the Covid-19 pandemic in Kenya [START_REF] Janssens | The short-term economic effects of covid-19 on low-income households in rural kenya: An analysis using weekly financial household data[END_REF]. Beyond risk-sharing networks, theory suggests that savings are used as a self-insurance strategy to deal with risk (Leland, 1978; Kimball et al., 1990; Baiardi et al., 2020). Several empirical studies have reported evidence that higher levels of risk are indeed compensated by higher levels of saving in different contexts, including, for example, rural Bangladesh [START_REF] Adnan | Adoption of contract farming and precautionary savings to manage the catastrophic risk of maize farming: Evidence from bangladesh[END_REF], rural Pakistan [START_REF] Lee | Precautionary saving under liquidity constraints: Evidence from rural pakistan[END_REF][START_REF] Ullah | Managing catastrophic risks in agriculture: Simultaneous adoption of diversification and precautionary savings[END_REF], and both rural and urban China (Jalan and Ravallion, 2001; [START_REF] Meng | Unemployment, consumption smoothing, and precautionary saving in urban china[END_REF] Giles and Yoo, 2007). These precautionary savings cover the accumulation of a wide range of assets, such as liquid, semi-liquid and fixed assets in the form of cash, farming tools, crop inventories and livestock (e.g., [START_REF] Rosenzweig | Credit market constraints, consumption smoothing, and the accumulation of durable production assets in low-income countries: Investments in bullocks in india[END_REF][START_REF] Alderman | Saving and economic shocks in rural pakistan[END_REF][START_REF] Fafchamps | Drought and saving in west africa: are livestock a buffer stock[END_REF][START_REF] Park | Risk and household grain management in developing countries[END_REF]). [START_REF] Park | Risk and household grain management in developing countries[END_REF] argued, for example, that grain stocks are an attractive form of precautionary saving for Chinese households, even in the presence of credit opportunities. Precautionary saving is, however, costly for savers, as it may prevent productive investments, and it may also become ineffective in the presence of frequent realizations of risks. In fact, evidence shows that households reduce their precautionary saving in the presence of migration opportunities, in the form of larger migrant networks (Giles and Yoo, 2007). In India, for example, Rosenzweig and Stark (1989) showed the effectiveness of marital migration by women as a way to extend the insurance network into faraway areas, in an attempt to create a source of monetary help that is uncorrelated with local earnings.
Indeed, according to the New Economics of Labor Migration, households may allocate some of their labor supply to migration markets, in case of ineffective informal strategies or if they want to preserve their savings and other assets, as a way to diversify their local income (Stark and Levhari, 1982; [START_REF] Stark | The new economics of labor migration[END_REF]). 8 9
This theory suggests, therefore, that, contrarily to what is stated in the human capital models of migration, migration of individuals to destinations with lower incomes or higher risks is possible as long as it helps the household to reduce its overall riskiness. 10 As migrants move with the purpose of insuring their households against risks, most of them may be migrating only temporarily and hence would not be relocating their entire family to the destination location. This is not an unusual behavior of migrant workers in real-world markets, even in the presence of higher earnings in the destination areas (Banerjee and Duflo, 2007). In addition to the above-cited insurance motive, the theoretical migration literature has offered multiple incentives for why migration would not be permanent, hence probably resulting in left-behind children. Early studies mostly referred to a preference for consumption in the home country as the motive for temporary migration (e.g., [START_REF] Hill | Immigrant decisions concerning duration of stay and migratory frequency[END_REF]; Djajić and Milbourne, 1988). As the marginal utility of consumption is higher in the country of origin, compared to that in the destination country, migrants save and accumulate assets while abroad and de-save after their return, providing themselves with a higher consumption. A similar motive concerns the higher purchasing power of the destination country's currency in the country of origin, in terms of lower price levels. In this scenario, migration will never be permanent in the absence of earnings differences between the two countries, as the incentive to migrate comes only from the destination country's currency's purchasing power in the home country (e.g., Dustmann, 1995, 2003). Skills accumulated in the destination country, which increase migrants' human capital and have higher returns in the country of origin, may also induce temporary migration (Dustmann, 1995), including when self-employment activities allow a higher return when established in the home country (Dustmann and Kirchkamp, 2002).
In fact, in the presence of borrowing constraints, self-employment may even motivate the decision to migrate with the aim of accumulating the initial required capital, and hence the return decision (e.g., Mesnard, 2004). However, migration as a risk-coping strategy is also imperfect, as it may be costly or itself risky [START_REF] Bryan | Underinvestment in a profitable technology: The case of seasonal migration in bangladesh[END_REF]. The theoretical migration literature proposed that the insurance contract between the household and the migrant can be reciprocal, since the migrant may also face different risks in the destination area [START_REF] Stark | Migration, remittances, and the family[END_REF]. The unemployment risk, which was introduced into migration-decision models by Todaro (1969) and [START_REF] Harris | Migration, unemployment and development: a two-sector analysis[END_REF], is one of the important examples of risks for migrants, as they may fail to find a job upon their arrival in the destination country [START_REF] Das | Migration as a risky enterprise: A diagnostic for bangladesh[END_REF]. This migration riskiness may even decrease the probability to migrate in the first place [START_REF] Bryan | Underinvestment in a profitable technology: The case of seasonal migration in bangladesh[END_REF].

8 In the case of migration, decisions may be characterized as being associated with both risk and uncertainty, even though the focus has been on risk rather than on uncertainty, both in economic theories [START_REF] Williams | Migration, risk, and uncertainty: Theoretical perspectives[END_REF] and in empirical applications, as risk has the property of being measurable. In different empirical contexts, both aggregate and idiosyncratic measures of risk have been used, including variability of GDP (e.g., [START_REF] Joon-Ho | Consumption growth, income growth and earnings uncertainty: Simple cross-country evidence[END_REF][START_REF] Menegatti | Uncertainty and consumption: new evidence in oecd countries[END_REF][START_REF] Mody | Precautionary savings in the great recession[END_REF]), employment probabilities (e.g., [START_REF] Barnum | Education, employment probabilities and rural-urban migration in Tanzania[END_REF]), unemployment rates (e.g., [START_REF] Mayda | International migration: A panel data analysis of the determinants of bilateral flows[END_REF][START_REF] Mody | Precautionary savings in the great recession[END_REF][START_REF] Bande | Private saving rates and macroeconomic uncertainty: evidence from spanish regional data[END_REF]), variability of weather variables (Rosenzweig and Stark, 1989; Dillon et al., 2011), and the standard deviation or variance of income or of income equation residuals (e.g., Carroll and Samwick, 1998; Jalan and Ravallion, 2001; [START_REF] Mishra | Measuring precautionary wealth using cross-sectional data: the case of farm households[END_REF]). The above are objective measures of income risk, but subjective measures can also be used. The latter can be constructed, for example, using survey questions on the probability distribution of earnings and inflation ([START_REF] Guiso | Earnings uncertainty and precautionary saving[END_REF]; Lusardi, 1997), or on unemployment expectations [START_REF] Carroll | Unemployment expectations, jumping (s, s) triggers, and household balance sheets[END_REF]. Risk differs from uncertainty in the sense that the probabilities of the different outcomes of a given event are known, while uncertainty refers to situations where information about the likelihood of possible outcomes is not available [START_REF] Tobler | Valuation for risky and uncertain choices[END_REF].

9 We distinguish between risk and realized risk, known as a shock. The two differ in the sense that, in the case of a negative shock, decision makers will try to minimize adverse effects already incurred, while in the case of a risk, they will try to adjust their behavior to account for the possibility that a negative shock may occur in the future. Most of the migration and child human capital investment literature has focused on studying the effects of shocks, rather than risks. Examples of these shocks include those induced by adverse weather conditions and other events of nature such as typhoons (e.g., Jensen, 2000; [START_REF] Yang | Are remittances insurance? evidence from rainfall shocks in the philippines[END_REF][START_REF] Aguilar | El nino and mexican children: medium-term effects of early-life weather shocks on cognitive and health outcomes[END_REF][START_REF] Gröger | Internal labor migration as a shock coping strategy: Evidence from a typhoon[END_REF][START_REF] Minale | Agricultural productivity shocks, labour reallocation and rural-urban migration in china[END_REF]), and nutritional shocks such as famine (Almond et al., 2010). In this case, endogeneity is rarely a problem for causal inference, as shocks usually occur beyond human control.

10 Human capital theory has long considered migration as an investment in one's human capital, in the sense that income is a return to this investment. This view of migration was first stated by [START_REF] Sjaastad | The costs and returns of human migration[END_REF]: the migrant considers the opportunity values of each possible destination relative to that at the area of origin, while also accounting for migration costs, and chooses the location that maximizes lifetime earnings. [START_REF] Becker | Human capital. The concise encyclopedia of economics[END_REF] directly calls migration a human capital investment by stating that "Most investments in human capital-such as formal education, on-the-job training, or migration-[...]". The micro-economic version of the neoclassical theory of migration is based on this simple framework, to which Todaro (1969) and [START_REF] Harris | Migration, unemployment and development: a two-sector analysis[END_REF] added the assumption that prospective migrants consider expected income at the destination area, instead of the actual income.

The present work aims at analyzing two types of investments in the human capital of children, in the presence of a risky context: migration and environmental policies. A large part of the literature has documented the different informal coping strategies used by rural households when faced with income riskiness, including migration. However, little has been done to understand the mechanisms driving the use of migration by households to diversify their income when faced with an aggregate income risk, particularly when parents have to migrate while children are to be left behind. Moreover, a high number of factors affecting the migration duration decision have been examined in the theoretical and empirical migration literature, but far less has been done on how monetary and child human capital riskiness determines the migration duration of parents. Chapter 1 models migration as a parental decision of time disinvestment in child human capital, with the aim of increasing future accumulated wealth. It explores the effects of different (idiosyncratic or aggregate) risks, as defined by stochastic dominance theory (e.g., Eeckhoudt and Schlesinger, 2006), on the migration duration of parents with left-behind children. Chapter 2 examines the income differential as a potential mechanism through which aggregate income risk affects the decision to send a parent for migration by a household that cares about both its income and its children's human capital. On the other hand, environmental policies, implemented early in children's life, can be considered as government investments in child human capital.
While a lot has been said on the effects of air pollution in developing countries, particularly using short-run analyses, little has been done on the long-term effects of early-life government interventions when children's human capital is at risk of deterioration because of air pollution. Environmental policies affect human capital through many channels other than changes in air quality. Chapter 3 investigates the long-term effects of exposure to an environmental policy, in the year of birth, on educational outcomes, as a way to empirically assess the importance of early-life conditions, particularly those that can be affected by policy makers. The contribution of this thesis is in part theoretical but essentially empirical. While the Expected Utility Theory (EUT) framework is adopted when theorizing the decision-making of individuals or households under risk, empirical illustrations are based on data from China. The choice of China is mainly driven by its suitability for the research questions.

Case study: China

Since the open-up strategy and the economic reforms implemented in the late 1970s, China has witnessed an accelerated development of its economy. Ever since, its real GDP per capita has grown by about 9 percent, on average, each year [START_REF] Li | Human capital and china's future growth[END_REF], and during the years between 2000 and 2007, China accounted for about 35 percent of the world's GDP growth at purchasing power parity prices [START_REF] Ding | Why has china grown so fast? the role of physical and human capital formation[END_REF]. An important factor in China's growth success is its substantial physical capital investments, reaching a level of about 30% of its GDP in 1995 and more than 45% in 2002 [START_REF] Heckman | China's human capital investment[END_REF]. Between 1979 and 2008, the total amount of foreign direct investment (FDI) reached US$1,096.6 billion, making FDI per capita higher than that in South Korea at the same stage of economic development [START_REF] Heckman | Human capital, economic growth, and inequality in china[END_REF]. This increase in physical capital investments led to an increasing demand for workers. However, starting from the mid-1990s, the one child policy, adopted in China in 1979, was responsible for the decrease in the urban-born labor force, raising the need for migrant workers. The relaxation of the rural-urban migration restrictions was, therefore, an important factor for the continuing growth of the urban economy. 11 The new jobs created in the industrial and service sectors were thus mostly filled with migrant workers from the agricultural sector [START_REF] Ding | Why has china grown so fast? the role of physical and human capital formation[END_REF]. This led to what has been called the largest labor migration in human history in recent decades. In 1990, the total "floating population" stock was only about 29 million, but it reached 68 million in 1996 [START_REF] Liang | The age of migration in china[END_REF]. 12

11 What makes rural labor migration to China's cities a unique process is its household registration system, implemented in the 1950s and maintained to this day. This system requires that every person born in China is classified according to two criteria: the type of hukou (urban (non-agricultural) or rural (agricultural)) and the location of hukou. It therefore allows the government to strictly regulate population flows, particularly between rural and urban areas, and to limit access of each citizen to
The momentum continued throughout the late 1990s to reach 121 million in 2000, then 221 million in 2010 and 247 million in 2015 (NBS, 2019). 13 Migration not only allowed rural residents to earn higher incomes (Zhu, 2002), but it was also a way for households to deal with negative agricultural shocks [START_REF] Giles | Is life more risky in the open? household risk-coping and the opening of china's labor markets[END_REF][START_REF] Minale | Agricultural productivity shocks, labour reallocation and rural-urban migration in china[END_REF]. Remittances are also found to particularly increase when families experience these income shocks (Du et al., 2005). The effect of risk on the decision to migrate is, however, less established. Jalan and Ravallion (2001) found that a higher household income risk reduced out-migration for work. 14

Internal labor migration in China is mainly temporary and circular [START_REF] Hu | Circular migration, or permanent stay? evidence from china's rural-urban migration[END_REF]. It is an important source of income for the remaining households in the place of origin, but it often leads to the separation of families. In 2015, for example, an estimated 69 million children were left behind by one or both migrant parents, representing about a third of all rural children in China (UNICEF, 2018). Due to the separation following parental migration, adverse effects may arise for the children left behind. Profoundly disturbing consequences have been documented for their educational outcomes. Compared to their peers with no migrant parents, they were found to be worse off in terms of school enrollment and years of schooling [START_REF] Lee | Migration and children's welfare in china: The schooling and health of children left behind[END_REF][START_REF] Wang | The effect of parental migration on the educational attainment of their left-behind children in rural china[END_REF]. Parents' migration is also found to negatively affect the school performance and cognitive skills of left-behind children (e.g., [START_REF] Zhang | Does parental absence reduce cognitive achievements? evidence from rural china[END_REF][START_REF] Zhao | The impact of parental migration on children's school performance in rural china[END_REF][START_REF] Hu | Parent migration and rural preschool children's early academic and social skill trajectories in china: Are 'left-behind' children really left behind?[END_REF][START_REF] Mao | The effects of parental absence on children development: Evidence from left-behind children in china[END_REF]), as well as to increase the risk of unhealthy behaviors [START_REF] Gao | The impact of parental migration on health status and health behaviours among left behind adolescent school children in china[END_REF][START_REF] Wen | Child development in rural china: Children left behind by their migrant parents and children of nonmigrant families[END_REF], and to result in poorer mental health outcomes [START_REF] Qin | The mental health of children left behind in rural china by migrating parents: A literature review[END_REF]. These adverse effects are found to be stronger when both parents migrate. Moreover, Meng and Yamauchi (2017) argued that focusing on the effects of contemporaneous parental migration may lead to an underestimation of the true effects. They showed that cumulative parental migration over the childhood has sizable adverse effects on children's health and education outcomes, while Zhou et al.
(2014) reported that children with longer parental migration durations had poorer educational performances. Long-term effects of exposure to parental migration, early in life, on labor market outcomes have also been investigated. [START_REF] Wang | Childhood left-behind experience and labour market outcomes in china[END_REF] showed that exposure to a left-behind experience, following the mother's migration, had adverse effects on the probability to find a job and on wages. The long-term negative effect on incomes was further confirmed by [START_REF] Feng | The effect of childhood left-behind experience on individual's income: evidence from china[END_REF]. [START_REF] Zheng | When left-behind children become adults and parents: The long-term human capital consequences of parental absence in china[END_REF] also reported damaging effects of early-life exposure to a left-behind experience on cognitive abilities, in terms of fewer schooling years and lower cognitive test scores, and on health, in terms of chronic diseases and depression. Left-behind children also tend to have poorer household socioeconomic outcomes, compared to their non-left-behind counterparts. These negative effects may even be transferred across generations, as the offspring of left-behind children are found to be adversely affected, in terms of birth weight and height-for-age z-scores [START_REF] Zheng | When left-behind children become adults and parents: The long-term human capital consequences of parental absence in china[END_REF]. These effects can plausibly be explained by the reduced time investment in children by the migrant parents. This time disinvestment may affect children's outcomes in different ways, including through the adverse effects parental absence may have on the physical and mental health of left-behind caregivers, including grandparents and spouses [START_REF] Chen | For better or worse: The health implications of marriage separation due to migration in rural china[END_REF][START_REF] Xiang | The impact of rural-urban migration on the health of the left-behind parents[END_REF][START_REF] Tong | Spousal migration and married adults' psychological distress in rural china: The roles of intimacy, autonomy and responsibility[END_REF]. Parental migration may also reduce parents' recognition of children's education, and thus may reduce educational investment in left-behind children [START_REF] Lu | The impact of parental migration on offspring's education investment: Evidence from left-behind children in china[END_REF]. The absence of parents may also affect the way left-behind children allocate their time to activities beneficial to the formation of their human capital, with evidence showing that they increase their time spent on farm work and domestic work (Chang et al., 2011). Despite these negative effects, rural-urban migration continued in China, as the economic growth continued. However, throughout the years, the demand for high-skilled workers increased, and the return to education as well. 15 If the country's economic success was initiated by a combination of medium-skilled workers and high levels of FDI inflows, it has been sustained by both physical capital and human capital investments [START_REF] Heckman | Human capital, economic growth, and inequality in china[END_REF]. 16 The government has, indeed, made substantial investments in improving the quality of its labor force. 17
[START_REF] Hongyi | Health, education, and economic growth in china: Empirical findings and implications[END_REF], using provincial data, showed that both education and health have had positive effects on economic growth in China. Public spending on education, for example, was limited to 2.4% of gross domestic product (GDP) in 1995, but increased to reach 3.3% of GDP in 2002 [START_REF] Heckman | China's human capital investment[END_REF], and 4% in 2012 [START_REF] Su | The impact of foreign direct investment and human capital on economic growth: Evidence from chinese cities[END_REF]. Government investments in education were important not only for the overall growth of the economy, but also for each individual's economic well-being. [START_REF] Xiao | Education on the cheap: The long-run effects of a free compulsory education reform in rural china[END_REF] found positive long-term effects on math test scores and completed years of schooling from the provision of free education by the government during the nine compulsory years of schooling, in rural China, starting from 2006. 18 In the short term, the reform was also found to stimulate enrollment in primary and junior high schools [START_REF] Chyi | The effects of tuition reforms on school enrollment in rural china[END_REF][START_REF] Shi | The impact of educational fee reduction reform on school enrolment in rural china[END_REF]. Education was further shown to be an important factor for increasing the incomes of rural workers [START_REF] Knight | Towards a labour market in China[END_REF]. Moreover, China has also implemented different health policies, with the aim of improving people's human capital. Access to on-premise tap water was initially provided to only 11% of rural households in China, but thanks to its rural drinking water program, initiated in the 1980s, the share of rural households having access to on-premise tap water reached 55% by 2015 (WHO and UNICEF, 2015). [START_REF] Chen | Early-life exposure to tap water and the development of cognitive skills[END_REF] examined the long-term effects of exposure to tap water during early childhood, and found positive effects on cognitive test scores at ages 10-15. In the short term, access to treated water in rural China was found to improve children's weight-for-height and height outcomes [START_REF] Zhang | The impact of water quality on health: Evidence from the drinking water infrastructure program in rural china[END_REF], and their completed grades of education [START_REF] Zhang | The long-run effects of treated water on education: The rural drinking water program in china[END_REF]. However, in addition to these education and health investments, government remediation policies that aim to counter risk factors for the deterioration of people's human capital have also been documented.

15 High-skill workers refer to those with tertiary education.

16 Primary or secondary education graduates are referred to as medium-skilled workers.

17 For China's future economic success, [START_REF] Li | Human capital and china's future growth[END_REF] argued that the government should increase investments in human capital and put less emphasis on physical capital investments, as the former would have a higher impact on the country's future growth.

18 The Chinese government has implemented different educational reforms to reduce education costs in 2001, 2003 and 2006. After the 2006 reform, all rural students were exempt from paying tuition, while eligible ones were offered free textbooks and living subsidies.
One example of these risk factors that has drawn much attention in recent decades is pollution. In fact, China's rapid economic growth has had severe repercussions for the environment, raising air pollution to levels harmful to human capital [START_REF] Zeng | Air pollution reduction in china: Recent success but great challenge for the future[END_REF]. There is evidence that different air pollutants have caused serious damage to both children's and adults' outcomes in China, including their health (e.g., [START_REF] Zhang | Children's respiratory morbidity prevalence in relation to air pollution in four chinese cities[END_REF]; Deng et al., 2015; Deschenes et al., 2020), education and cognitive functioning (e.g., [START_REF] Tang | Effects of prenatal exposure to coal-burning pollutants on children's development in china[END_REF]; Zhang et al., 2018a), their work productivity (e.g., Chang et al., 2019; He et al., 2019), labor supply (e.g., [START_REF] Zhang | Does environmental pollution affect labor supply? an empirical analysis based on 112 cities in china[END_REF]), migration (e.g., [START_REF] Lu | Could smog pollution lead to the migration of local skilled workers? evidence from the jing-jin-ji region in china[END_REF][START_REF] Qin | Run away? air pollution and emigration interests in china[END_REF]) and well-being (e.g., Zhang et al., 2017; [START_REF] Smyth | The environment and well-being in urban china[END_REF]). It was also shown that air quality degradation in China is inhibiting economic and social growth and sustainable development (Li and Zhang, 2019; [START_REF] Liang | Urbanization, economic growth and environmental pollution: Evidence from china[END_REF][START_REF] Zeng | Air pollution reduction in china: Recent success but great challenge for the future[END_REF]). The Chinese government has, therefore, invested huge resources in restoring and protecting its people's human capital from the impacts of air pollution, among which figures the Two Control Zones policy, implemented in 1998.

Overview of the chapters

Human capital is an important input for the social and economic success of a country as a whole and for individuals themselves. The present work proposes to study fundamental decisions that affect the human capital accumulation of individuals, namely the household decision to migrate and the migration duration, and government environmental interventions. Migration, being a disinvestment in child human capital, is, in this dissertation, alternatively a way for individuals to accumulate wealth in the future, and a risk-coping strategy for households to diversify income. Environmental policies are, however, large-scale government tools to affect individuals' outcomes.

Chapter 1 'Risks and optimal migration duration: The role of higher order risk attitudes'

In a world where many migrations are temporary, the stock of migrants in any destination country is, in part, determined by migration duration. What determines the optimal migration duration is, therefore, of immediate economic interest for both the home and the destination countries. Migration has long been theorized as both affected by and affecting risks. The latter may be encountered in various forms, hence the importance of understanding their impact on migration duration. Chapter 1 focuses on the temporary migration of parents with left-behind children, and considers migration duration a time disinvestment in child human capital.
Parents are faced with a resource transfer problem, in the sense that, by migrating, they choose to sacrifice an amount of their children's human capital in order to increase their accumulated wealth once back in the country of origin. Do risks affect the migration duration of these parents? Do all risk-averse migrant parents decrease their migration duration in the face of an income risk in the destination country, and increase it in the face of an income risk in the country of origin? How do they react to a risk on the human capital of their children or to a risk on their accumulated savings? Answers to these questions are important for both the sending and the receiving countries, because the expected duration of migration can affect migrants' different economic choices, including consumption, saving, remittances, labor-market participation, leisure, human and social capital investments (e.g., language skills and networking) and ensuing assimilation profiles. While a handful of papers have examined the effects of income shocks, so far no paper, to our knowledge, has empirically explored how risks may affect migration spells, despite the relevance of risk in different circumstances. At the origin of this gap may lie the scarcity of theoretical references for the mechanisms underlying these effects and for suitable measures of the different risks. [START_REF] Bodvarsson | The determinants of international migration: Theory[END_REF] argued that theoretical work in the migration literature is still very limited and that a huge gap between theory and empirical work persists. They showed the need for more theoretical work that would serve as a guide for future research. Chapter 1 provides a theoretical framework for modeling the temporary migration decisions of parents with left-behind children, and shows that migrant parents do not necessarily change their migration duration when faced with a change in risks, even if they are averse to risk. First, analogously to including a dummy variable for the presence of risk in empirical studies, I focus on examining how the parents' migration duration changes when they are faced with pure risks or zero-mean speculative risks on income, child human capital or accumulated savings, compared to a situation with no risks. I show that not all risk-averse migrant parents react to these risks. Conditions on their other risk preferences are required for a change in the return plans, and depending on the nature of these preferences, the direction of the change, being either an increase or a decrease of their stay abroad, is determined. Second, I explore changes in risk that increase the variance or higher-order moments of incomes, of children's human capital or of accumulated savings. In a way similar to the previous case, I provide sufficient conditions on preferences ensuring a change in migration behavior, and explaining the heterogeneity in the parents' optimal responses to changes in risks. Finally, in the absence of riskiness, I show that larger income differentials between the destination and home countries do not unavoidably increase the migration duration. They may lead to longer migration cycles only if an extra unit of child human capital becomes less valuable for parents following the increase in their income (i.e., income and child human capital are substitutes). The feature of parents' preferences that measures this behavior is referred to as correlation aversion.
The parents' correlation attitude is also shown to affect their migration duration decisions under risk.

Chapter 2 'Rural-urban migration as a risk coping strategy: The role of income differentials'

Climate change is becoming a serious problem for different economic agents, particularly agricultural households, by increasing the income risks they face, through rises in rainfall variance and temperature fluctuations, for example. Rural households in developing countries are particularly vulnerable to these effects of climate change, in view of the absence or ineffectiveness of formal insurance and credit markets. While there exists widespread evidence on the use of migration as a risk-coping strategy by these households, little is known about the mechanisms affecting the use of this particular strategy when risk-sharing arrangements may become ineffective. This question is particularly pressing for the debate over the welfare and poverty of agricultural households, as well as the labor allocation between local and migrant markets. Two possible reasons for this literature gap are, on the one hand, the difficulty of accessing databases with income and migration information at the household level, and on the other hand, the focus of migration theory on the different determinants of the migration decision separately, while disregarding potential interactions between them. In a risky context, migration is seen as an important risk-coping strategy even in the absence of income differentials. Empirical work, however, shows that, sometimes, risk inhibits migration from rural areas. To date, nothing is known about the way income differentials affect the migration decision in the presence of an aggregate income risk. To study this issue, Chapter 2 builds a model of an agricultural household, with at least one child, where the parent may migrate as a way to diversify income against riskiness. Using data from rural China and applying a Heckman and Lee procedure to compute the expected income differentials, we test the assumptions of the theoretical model. We find that a negative expected urban-to-rural income difference, in the case of an aggregate income risk, decreases the probability of migration as a risk-coping strategy, compared to a situation where the expected income differential is positive. We also show that this effect diminishes with higher levels of the income differential. Moreover, as we focus on the migration of parents, the household's welfare depends not only on its income but also on its children's human capital. Our model shows evidence that the marginal utility of household income increases as child human capital deteriorates, suggesting that the considered Chinese rural households are correlation averse, i.e., in the case of an agricultural income risk, each additional monetary unit is more desirable for the household when children have lower school test scores. This result suggests that, in a migration context, if the income differential is positive and the household is faced with an income risk, parents with children performing poorly at school may be more likely to migrate, compared to parents with better-performing children.

Chapter 3 'Long-Term Effects of Environmental Policies on Educational Performance: Evidence from China'

Chapter 3 is concerned with the importance of early-life conditions, particularly those that can be controlled by policy-makers, in determining long-term human capital outcomes.
While the short-term effects of air pollution in developing countries have been widely documented, little is known about whether exposure to the implementation of an environmental policy, early in life, affects long-term adult outcomes, and if it does, in which direction. Two main reasons may explain this gap. First, environmental regulations in developing countries are scarce, and when found, they are either not sufficiently enforced or do not allow a suitable empirical analysis. Second, an important difficulty in exploring the effects of fetal and early-childhood conditions on adult outcomes is the long time it may take to be able to do so. The "Two Control Zones" (TCZ) policy, implemented in China in 1998, offers a perfect case study to examine the effects of exposure to an environmental policy early in life, as it has been stringently enforced only in particular areas of the country. To overcome the second difficulty of getting data on long-term adult outcomes, impacts on outcomes from younger ages are explored, while showing how predictive the latter are of longer-term outcomes. Using data from rural and urban China and applying a difference-in-differences approach, we find positive and significant effects of exposure to the TCZ policy, in one's year of birth, on children's long-term educational outcomes, 15 years later. Particularly, we find that, in the absence of the TCZ policy, individuals born in counties designated as TCZ would have been less likely to obtain high scores in the high school entrance exam and, therefore, less likely to attend a high-quality high school. They would have also been less likely to opt for an academic high school, instead of a specialized/technical one, which focuses on manual labor training. Projecting forward, we also suggest better future higher education and labor market outcomes, stemming from being able to attend academic and better-quality high schools. Looking more specifically at the effects by gender and by socio-economic status, we find that important benefits associated with the TCZ policy relate to girls and to children born to low-educated fathers. These results suggest that environmental regulations may be used as a possible mechanism to reduce disparities in educational performance. However, we find no differential impacts among children exposed to the TCZ policy implementation at ages 1-5, in terms of the probability to attend a higher-quality high school or an academic high school, although that does not imply the absence of positive effects for these age cohorts.

Chapter 1

Risks and optimal migration duration: The role of higher order risk attitudes

Introduction

Temporary labor mobility characterizes migration, both internationally and domestically, in many countries and regions worldwide (IOM, 2020). Contrary to what would be expected, migrations can be temporary even in the absence of restrictions on the duration of stay in the destination country. An estimated 20 to 50 percent of migrants in the OECD countries were found to leave the host country within 5 years of arrival (OECD, 2008). Previous theoretical literature has reported various factors that rationalize the return of migrants, despite higher earnings in the destination country (see Dustmann and Görlach, 2016). However, little is known about the factors that affect the duration of the migrants' stay in the destination country.
Understanding migration duration is crucial, since it determines various important economic behaviors, such as consumption, savings and human capital investments, that may have substantial effects not only for migrants and their households, but also for the populations of the two countries, in terms of remittances, brain drain, fiscal impacts and others (see Dustmann and Görlach, 2016). Previous theoretical studies showed that migration duration depends crucially on future income streams in the home and destination countries. However, income is, generally, strongly affected by risk, both in developing and developed countries. In this paper, we investigate the effects of risks on migration duration, in the specific case of migrant parents with children left behind in the home country. Previous theoretical literature has long associated uncertainty and risks with either migration decisions or other economic decisions in a migration context (e.g., Stark and Levhari, 1982; [START_REF] Katz | Labor migration and risk aversion in less developed countries[END_REF][START_REF] Galor | The probability of return migration, migrants' work effort, and migrants' performance[END_REF][START_REF] Dustmann | Return migration, uncertainty and precautionary savings[END_REF][START_REF] Daveri | Where do migrants go?[END_REF][START_REF] Chen | Migration, family, and risk diversification[END_REF]). This paper, however, differs from previous work in at least two ways. First, the particular effect of income risks on the migration duration of parents with left-behind children has not been previously investigated. This type of migration is a widespread phenomenon, as shown by the growing number of left-behind children around the world. In China, for example, an estimated 69 million children were left behind by one or both parents migrating to cities in 2015, accounting for about one third of all rural children in China (UNICEF, 2018). This number was approximated to 1 million in Sri Lanka in 2005 (Pinto-Jayawardena et al., 2006), and to about 1.5 to 3 million in the Philippines (Cortes, 2015). This loss of parental time has been shown to induce non-monetary costs for the left-behind children, generating troubling consequences for their education and cognitive ability (e.g., Antman, 2013; [START_REF] Nguyen | Does parental migration really benefit left-behind children? comparative evidence from ethiopia, india, peru and vietnam[END_REF]), health and nutrition (e.g., Antman, 2013; [START_REF] Nguyen | Does parental migration really benefit left-behind children? comparative evidence from ethiopia, india, peru and vietnam[END_REF]) and their socio-psychological behavior (e.g., [START_REF] Fellmeth | Health impacts of parental migration on left-behind children and adolescents: a systematic review and meta-analysis[END_REF]). Given these additional costs compared to other migrants, cross-effects between income and children's human capital in determining migration duration may be in place. 1 For this reason, and based on Myerson (2017), we consider that a parent's preferences are defined over his own income and his child's human capital. 2
Second, understanding the complex effects of income riskiness on migration duration dates back to the paper by [START_REF] Dustmann | Return migration, uncertainty and precautionary savings[END_REF], where the effect of income risk is compared to a situation with no uncertainty, but he limited his analysis to Taylor approximations of order two, which is equivalent to considering weak risks. As a consequence, he only compares risk levels à la Rothschild-Stiglitz [START_REF] Rothschild | Increasing risk: I. a definition[END_REF]. In this paper, we extend the analysis to the case of increases in risk that imply different kinds of modifications in the migrants' income distributions, which is new with respect to the literature. This is important, as workers usually face situations that scale up the level of their labor income risk, rather than moving from a non-risky to a risky situation. Many OECD countries, for example, were shown to have recently undergone increasing dispersion in wages due to globalization and digitalization [START_REF] Berlingieri | The great divergence (s)[END_REF], suggesting greater financial uncertainty faced by workers. Moreover, we also examine the ex-ante migrant parents' responses to a risk on the savings accumulated in the destination country and to a non-monetary risk on their children's human capital.

1 As child human capital is a multidimensional variable [START_REF] Attanasio | The determinants of human capital formation during the early years of life: Theory, measurement, and policies[END_REF], three dimensions may be considered: health and nutritional status, education and socio-emotional skills. These dimensions vary over time according to a process that depends on three elements: the previous levels of the dimensions, environmental variables that are not varying over time and, finally, the investments by parents or institutions. Investments may affect different child outcomes differently, which raises the problem of how to aggregate these different effects in order to get the overall impact. For simplicity, in our model, we follow the theoretical literature on intergenerational mobility in using a summary metric of the child human capital status (see e.g., Becker et al., 2018).

2 In the theoretical literature of migration, Dustmann (2003a), focusing on the case where all family members are migrants and where the parent has to make a decision about a return to the home country, considers a parent's utility function defined over his consumption and the child's consumption. Dustmann and Görlach (2016) also propose that their theoretical model for migration duration can be extended to the case where the consumption of left-behind family members is added to the migrant's utility. Myerson (2017), investigating the sensitivity of the child's human capital to parental migration, models the parental preferences over income and child human capital.
To carry out our analysis, we build a model of temporary migration that is relatively close, in terms of the key assumptions of a lifetime maximization framework and endogenous migration duration, to a class of models considered in the literature (see e.g., Djajić and Milbourne, 1988; Dustmann, 1995; [START_REF] Stark | Migrants' savings, purchasing power parity, and the optimal duration of migration[END_REF][START_REF] Dustmann | Temporary migration, human capital, and language fluency of migrants[END_REF]; Dustmann and Kirchkamp, 2002; [START_REF] Dustmann | Return migration, wage differentials, and the optimal migration duration[END_REF]; Mesnard, 2004). However, our model presents three main differences. First, contrary to the above studies, we focus on the temporary migration of parents when children are left behind. Second, while the models in the above works define migration duration with a time variable and represent it, in the maximization problem, as a multiplier of the utility enjoyed in the destination country, we model migration duration as the equivalent of the decrease in child human capital caused by the absence of the migrant parent. The last difference is related to the maximization problem of the migrant. While previous models define the optimal migration duration as the solution of a lifetime-span allocation problem between migration and non-migration, we model the optimal migration duration as the solution of a resource transfer problem from the sub-period of migration to the sub-period of non-migration. 3

Our work makes a contribution to the understanding of temporary migrants' behavior. First, we provide a new perspective on the effect of income differentials on migration duration under certainty. We particularly confirm, in a new context, the results of the previous literature showing that considering the income differential on its own may result in misleading implications with respect to the stock of migrants in the destination country [START_REF] Carrington | Migration with endogenous moving costs[END_REF][START_REF] Dustmann | Return migration, wage differentials, and the optimal migration duration[END_REF]. Focusing on the temporary migration of parents, we find that a decrease in the home country's income always increases the optimal migration duration. However, an increase in the destination country's income has an ambiguous effect. Migration duration may, therefore, decrease if income differentials between home and destination countries increase, leading to a reduction in the migrant population, for a constant inflow of migrants. We show that such a behavior can be driven by the sign of the interaction between the human capital of children and wealth in migrating parents' utility function, referred to as their correlation attitude. 4 The importance of the decision maker's correlation attitude has been emphasized in the theoretical economic decision literature in topics including savings, health and portfolio decisions (e.g., [START_REF] Bleichrodt | Comorbidities and the willingness to pay for health improvements[END_REF][START_REF] Eeckhoudt | A good sign for multivariate risk taking[END_REF][START_REF] Crainich | Health and portfolio choices: A diffidence approach[END_REF]; Liu and Menegatti, 2019a). Second, we provide a further important mechanism for the effects of increases in risk on the migration duration of risk-averse parents.
Although intuition suggests, based on the risk aversion assumption, that migrant parents will always run away from increases in income riskiness, we show, in this paper, that this is not always the case. Risk aversion or correlation aversion is sufficient to generate a change in the parents' migration duration in the face of an increase in the unemployment risk. However, the effect of an increase in other types of income risk, which imply changes in higher-order moments, is less straightforward, and its sign cannot be determined by risk aversion alone. We provide conditions, including higher-order risk preferences, that ensure, under any type of income risk, a precautionary migration motive for risk-averse migrant parents. We also show that the sign of these preferences determines the sign of the income risk's effect on the migration duration. Under these conditions, an increase in the destination country's income risk or a decrease in the home country's income risk may lead to longer migration cycles. Similar conditions, following changes in the accumulated savings risk and in the migrant's child human capital risk, are also produced. Empirical work has long focused on the importance of risk aversion in the migration decision-making process (e.g., [START_REF] Jaeger | Direct evidence on risk attitudes and migration[END_REF]; Dustmann et al., 2020); our results suggest that future research should also consider (cross-)prudence, (cross-)temperance and other higher-order risk attitudes. 5 The latter may be important both for future research on the effects of risk on migration duration and for informing public policy-making. The above two results show the relevance of cross-effects, in the particular case of migrant parents compared to other types of migrants, in determining migration duration, both under certainty and under riskiness. This is new to the literature on temporary migration and suggests the importance of considering information on parents' preferences carrying cross-effects between income and children's human capital in future empirical research. The remainder of the paper is organized as follows. In Section 1.2, we introduce the basic model with no risks. Section 1.3 investigates the effect of the introduction of income, savings and child human capital risks on the optimal migration duration. Section 1.4 generalizes the results. Section 1.5 shows the applicability of our results in empirical work. Section 1.6 concludes.

Optimal migration decision

We consider a parent with at least one child. The parent (the decision maker in our model) has preferences represented by the bivariate utility function $G(y, Z)$, where $y$ denotes income and $Z$ the child human capital. We assume that $G$ is $n$ times continuously differentiable with respect to $y$ and $Z$. 6 $G$ is strictly increasing and concave in each argument, $G^{(1,0)} > 0$, $G^{(0,1)} > 0$, $G^{(2,0)} < 0$ and $G^{(0,2)} < 0$: the migrant parent's preferences are non-satiated and risk-averse with respect to income and child human capital. We do not introduce any assumption on the interaction between income and child human capital, i.e., on the sign of the cross-second derivative of $G$. Thus, we consider the three possible cases: $G^{(1,1)} = 0$, $G^{(1,1)} < 0$ and $G^{(1,1)} > 0$. In the terminology of Epstein and Tanny (1980), the migrant parent is said to be correlation averse (correlation neutral, correlation loving) if $G^{(1,1)} < 0$ ($= 0$, $> 0$). 7
For such a decision maker, the marginal utility of income is lower (unchanged, higher) when the child has higher levels of human capital. Depending on whether preferences are correlation loving or averse, child human capital can be a complement or a substitute for income.

We consider a model with two periods. At the beginning of period 1, the parent is offered the option to migrate to a country of destination, and thus has to choose his migration duration. In period 2, he returns to the home country. During migration, the parent earns higher wages and is able to make savings, which are used to increase the income of period 2. His absence implies, however, a non-monetary cost suffered by the left-behind child in terms of human capital. 8 Under the assumption that a longer parental migration duration implies a higher deterioration in the child's human capital (see e.g., Zhou et al., 2014; [START_REF] Cheng | Depression and anxiety among left-behind children in china: a systematic review[END_REF]), choosing the migration duration, $d$, amounts to choosing a level of reduction in the child's human capital, $m$, such that $m = f(d)$ and $f'(d) > 0$. The sacrifice in the child's human capital allows a benefit in terms of an increase in wealth in period 2, due to the parent's accumulated savings in the destination country, $g(m)$. Under the above assumptions, the objective of the decision maker is to maximize his lifetime utility:

$$\max_m W(m) = U(y_1, Z_1 - m) + \beta V(y_2 + g(m), Z_2) \qquad (1.1)$$

where $U$ and $V$ are the utility functions of periods 1 and 2, respectively, $\beta$ is the discount factor, $y_i$ is the income level of period $i$ ($i = 1, 2$), and $Z_i$ is the child human capital level in period $i$ ($i = 1, 2$). In what follows, we denote the optimal level of migration duration by $m^*$. We assume that $g(0) = 0$, $g'(m) > 0$ and $g''(m) \leq 0$ for all levels of $m$, i.e., an increase in $m$ increases the accumulated savings in the destination country, but at a decreasing rate. 9

In this paper, we model the choice of migration duration as a resource transfer problem from the first to the second period. In this sense, our model comes close to a savings problem with two dimensions where the second argument is a non-monetary variable, usually health status or environmental quality (see e.g., [START_REF] Courbage | Precautionary saving in the presence of other risks[END_REF][START_REF] Menegatti | Optimal saving in the presence of two risks[END_REF]), and to a tertiary prevention model [START_REF] Eeckhoudt | A good sign for multivariate risk taking[END_REF]. 10 The amount that the decision maker renounces and the benefit he gets, in these models, are expressed in the same unit. Our model differs from that in the sense that the decision maker gives up an amount expressed in non-monetary units (the child human capital), as in the tertiary prevention model, while the benefit obtained in period 2 is expressed in monetary units, as in the saving model. In this sense, our model is symmetric to the health investment model studied, for example, by [START_REF] Denuit | Correlated risks, bivariate utility and optimal choices[END_REF] and Liu and Menegatti (2019b), where the decision maker endures a monetary cost in order to get a better health status in the future.

6 The partial and cross-derivatives $f^{(k_1,k_2)}$ of a function $f$ with two arguments $x_1$ and $x_2$ are given by the following expression: $f^{(k_1,k_2)}(x_1, x_2) = \frac{\partial^{k_1+k_2} f(x_1, x_2)}{\partial x_1^{k_1} \partial x_2^{k_2}}$, $\forall k_1 \in \mathbb{N}$, $\forall k_2 \in \mathbb{N}$.

7 No particular functional form of the parental utility function is adopted, as the latter is related to the sign of the parents' correlation attitude. See Appendix 1.D for examples of the parental utility function.

8 In the absence of remittances, this constitutes the main effect on the human capital of the child. If the parent transfers resources that guarantee higher monetary investments for the child's human capital, a distinct positive effect is also possible. However, the overall effect of migration, being positive or negative, would remain ambiguous. In our model, we do not consider the effect of remittances.
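To fix ideas, the objective in Eq. (1.1) can be coded up. The sketch below is purely illustrative and uses hypothetical functional forms and parameter values that are not taken from the paper: exponential felicity $U = V = -e^{-ay - bZ}$ and a logarithmic savings technology $g(m) = \gamma \ln(1+m)$, which satisfies $g(0) = 0$, $g' > 0$ and $g'' < 0$. A grid evaluation shows the single interior peak of $W$ on $[0, Z_1]$ that the model relies on.

```python
import numpy as np

# Illustrative (hypothetical) primitives and parameter values, not from the paper.
beta, gamma, a, b = 0.95, 2.0, 0.5, 0.3
y1, y2, Z1, Z2 = 3.0, 1.0, 2.0, 2.0

U = V = lambda y, Z: -np.exp(-a * y - b * Z)  # increasing, concave in each argument
g = lambda m: gamma * np.log1p(m)             # g(0)=0, g'(m)>0, g''(m)<0

def W(m):
    """Lifetime utility of Eq. (1.1) as a function of the human-capital sacrifice m."""
    return U(y1, Z1 - m) + beta * V(y2 + g(m), Z2)

grid = np.linspace(0.0, Z1, 2001)
print(f"interior maximiser of W on [0, Z1]: m* ~ {grid[np.argmax(W(grid))]:.3f}")
```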
The first-order condition (FOC) for a maximum ($W'(m) = 0$) is given by

$$-U^{(0,1)}(y_1, Z_1 - m) + \beta g'(m) V^{(1,0)}(y_2 + g(m), Z_2) = 0.$$

The first part of the above equation, $U^{(0,1)}(y_1, Z_1 - m)$, represents the cost of remaining one additional unit of time abroad in terms of forgone utility, induced by being separated from the child during that period. It is positive, given our assumption on $U$, and increases in $m$. The second term, $g'(m) V^{(1,0)}(y_2 + g(m), Z_2)$, is the benefit of staying a further unit of time in the destination country. It is also positive, given our assumptions on $V$ and $g$, but decreases in $m$. It is easy to show that the difference between the benefit and the cost of one additional unit of time abroad decreases in $m$ ($W''(m) < 0$, $\forall m$); therefore, the optimal level $m^*$ is fully determined by the FOC:

$$-U^{(0,1)}(y_1, Z_1 - m^*) + \beta g'(m^*) V^{(1,0)}(y_2 + g(m^*), Z_2) = 0 \qquad (1.2)$$

The migrant parent returns, therefore, when the costs of remaining an extra unit of time at destination are equal to the benefits of doing so. 11 The parent migrates ($m^* > 0$) if the first unit of time's utility in the destination country is positive, i.e., $W'(0) > 0$, equivalently if $\beta g'(0) V^{(1,0)}(y_2, Z_2) > U^{(0,1)}(y_1, Z_1)$. 12 Particularly, the parent will migrate whenever his satisfaction from the present value of $g'(0)$ additional units of income in the future (when back in the home country) is strictly higher than the satisfaction he gets from each additional unit of the child's human capital during his migration. In our model, the parent will never stay permanently in the destination country, because the marginal utility of spending the last moment of the migrant's life in the destination country will always be negative. 13 We assume that these conditions are verified in all that follows.

11 In our theoretical model, we only study the case of an interior solution. Corner solutions of permanent migration, where the migrant parent never returns, and of no migration, where the parent never migrates, are not considered.

12 Given our assumption that $g'(0) > 0$.

13 If $d = d_{max}$, then $m = m_{max}$, where $d_{max}$ is the maximum duration of migration for the parent, i.e., given the migration time, he only returns when he dies or at the end of his working life. Given our assumptions, $g'$ is a decreasing and strictly positive function; therefore, $\lim_{m \to m_{max}} g'(m) = 0$, and thus $\lim_{m \to m_{max}} W'(m) = -U^{(0,1)}(y_1, Z_1 - m_{max}) < 0$.
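Continuing the illustrative parameterization above (exponential felicity, logarithmic savings; all values hypothetical), the sketch below locates $m^*$ as the root of the FOC (1.2) and verifies the migration condition $W'(0) > 0$. The analytic partials $U^{(0,1)}$ and $V^{(1,0)}$ are those of the assumed exponential example, not of the paper's general $U$ and $V$.

```python
import numpy as np
from scipy.optimize import brentq

beta, gamma, a, b = 0.95, 2.0, 0.5, 0.3   # hypothetical values, as above
y1, y2, Z1, Z2 = 3.0, 1.0, 2.0, 2.0

g = lambda m: gamma * np.log1p(m)
gp = lambda m: gamma / (1.0 + m)                  # g'(m)
U01 = lambda y, Z: b * np.exp(-a * y - b * Z)     # marginal cost of one more unit abroad
V10 = lambda x, Z: a * np.exp(-a * x - b * Z)     # marginal utility of period-2 wealth

def Wp(m):
    """W'(m): benefit minus cost of one more unit of time at destination."""
    return -U01(y1, Z1 - m) + beta * gp(m) * V10(y2 + g(m), Z2)

assert Wp(0.0) > 0, "migration condition beta*g'(0)*V10(y2,Z2) > U01(y1,Z1) fails"
m_star = brentq(Wp, 0.0, Z1)                      # FOC (1.2): W'(m*) = 0
print(f"m* = {m_star:.3f}; cost = benefit = {U01(y1, Z1 - m_star):.4f}")
```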
Given our assumptions, g' is a decreasing and strictly positive function; therefore, lim_{m→m_max} g'(m) = 0, and thus lim_{m→m_max} W'(m) = -U^{(0,1)}(y_1, Z_1 - m_max) < 0.

(iii) decreases [remains constant, increases] with the child's human capital level after the return of the migrant parent (Z_2) if the migrant parent is correlation averse [neutral, loving] (V^{(1,1)} < 0) [V^{(1,1)} = 0, V^{(1,1)} > 0].

Intuitive explanations of these results are the following. An increase in the home country income y_2 decreases the home country's marginal utility of wealth, leading to a reduction in the optimal migration duration, ceteris paribus. Now consider an increase in the level of the child's human capital during the parent's migration, Z_1, due, for example, to government policies that increase services for left-behind children of migrating parents.14 In this case, the marginal utility of child human capital decreases, inducing, therefore, an increase in the optimal migration duration, ceteris paribus.

The comparative statics show that, contrary to the variation of m* with respect to y_2 and Z_1, the variation of m* with respect to y_1 and Z_2 depends on the correlation attitude of the migrant parent, ceteris paribus. If the migrant parent's income in the destination country y_1 increases, then one would expect his migration duration to increase. However, (ii) shows that this is not necessarily the case. As the migration duration is negatively related to the child's human capital, the optimal migration duration depends on how the increase in y_1 affects the marginal utility of child human capital (U^{(1,1)}), ceteris paribus. This result implies that the effect of a sudden increase in income on the migration duration of parents with the same characteristics and from the same origin country, but with different child human capital levels, may differ if higher incomes affect differently the satisfaction they get from each additional unit of child human capital. It also suggests that a parent faced with two destinations with different levels of earnings will stay less time in the destination where the income is higher if the parent is correlation loving, and vice versa. In the case where the child's human capital Z_2 is expected to increase upon the return of the parent, the variation of the optimal migration duration depends on how the marginal utility of income reacts to an increase in Z_2. The length of the parent's migration would be reduced in the unique case where the migrant parent is correlation averse (V^{(1,1)} < 0), ceteris paribus.

Migration duration decision under risk

In this section, we extend the previous simple framework to a risky environment. More precisely, we introduce monetary or non-monetary risks and investigate their effects on the migration duration, compared to a situation with no risks.

Income risks

Income risk for workers may have various sources. It may occur at the aggregate level, including, for example, political instability, economic crises or adverse weather variability for agricultural activities. Workers may also face idiosyncratic income risks, such as in the case of earnings that include commissions, future incomes that are not known in advance as in the case of self-employment, unemployment risk, and risks of unfair dismissal or non-payment when working in the informal sector.

14 The child protection program in China is an example (Man et al., 2017).
In China, for example, one important source of insecurity for migrant workers is the risk of delayed payments and non-payment problems (Chan). Moreover, as discussed by Tressler and Menezes (1980), even when nominal wages are known with certainty, if workers cannot accurately predict inflation, real wages will also become uncertain. We analyze two cases: a risk on the destination country's income and a risk on the home country's income.

Risk on the destination country income

The migrant parent is now faced with a risk on his destination country's income (y_1 + ε_{1w}). We investigate how the riskiness of migration affects the parent's stay abroad, compared to a situation with no risks, ceteris paribus. The parent's maximization problem in the absence of risk is given by Eq. (1.1). In the case of an income risk in the destination country, he maximizes the following lifetime utility at the beginning of period 1:

W_{ε1w}(m) = E[U(y_1 + ε_{1w}, Z_1 - m)] + β V(y_2 + g(m), Z_2),    (1.3)

where E denotes the expectation operator over the random variable ε_{1w}. We denote by m*_{ε1w} the length of the parent's migration in this case, implying a FOC given by

W'_{ε1w}(m*_{ε1w}) = -E[U^{(0,1)}(y_1 + ε_{1w}, Z_1 - m*_{ε1w})] + β g'(m*_{ε1w}) V^{(1,0)}(y_2 + g(m*_{ε1w}), Z_2) = 0.    (1.4)

In our analysis, we distinguish two types of risks: an unfavorable pure risk and a speculative risk. Pure risk refers to a situation where no gain can be realized: a loss is possible, while the best outcome is the absence of loss. The expected gain is, in this case, negative. We can consider, for example, an unemployment risk, ε_{1w} = [0, -k; 1-p, p], where p is the probability of unemployment (0 < p < 1) and k is the decrease in income following unemployment (k > 0). Speculative risk, however, refers to a situation where the future outcome is subject to changes that may be higher or lower than anticipated. We consider the case of a zero-mean speculative risk, such that E[ε_{1w}] = 0. Considering equations (1.2) and (1.4), we obtain the following results.

Proposition 2 Under an unfavorable pure risk in the country of destination [first period], a migrant parent will stay longer (shorter) at destination than with no such risk, m*_{ε1w} ≥ (≤) m*, if U^{(1,1)} ≥ (≤) 0. Under a zero-mean risk on the country of destination's income [first period], a migrant parent will stay longer (shorter) at destination than with no such risk, m*_{ε1w} ≥ (≤) m*, if U^{(2,1)} ≤ (≥) 0.

Proof See Appendix 1.B.

We provide in Proposition 2 sufficient conditions that govern the direction of the change in migration duration when a migrant parent is faced with an unfavorable pure risk or a zero-mean income risk in the destination country. An unfavorable pure risk increases the migration duration of migrant parents only if they are correlation loving (U^{(1,1)} ≥ 0), i.e., those for whom a higher human capital level of their children increases their marginal utility of income. The proposition also shows that all risk averse migrant parents may dislike changes in their income distribution through a zero-mean risk, but not all of them would decrease their migration duration. Their behavior depends on the sign of U^{(2,1)}.
We refer to U^{(2,1)} as 'cross-prudence in the child human capital', following the previous literature calling U^{(2,1)} the 'cross-prudence in the background variable' (e.g., Eeckhoudt et al., 2007; Baiardi et al., 2020).15 A zero-mean income risk in the destination country decreases the migration duration of migrant parents if they are cross-prudent in the child human capital (U^{(2,1)} ≥ 0).

The cross-derivative of order 2 in the wealth argument and of order 1 in the background variable, U^{(2,1)}, is not new in the analysis of choice under risk. It was shown in the precautionary saving model, for example, that the sign of this third cross-derivative is part of the sufficient condition that determines precautionary saving, in a setting where the decision maker is faced, simultaneously, with a labor income risk and a background risk in the second period (see e.g., Courbage and Rey, 2007; Menegatti, 2009a,b; Denuit et al., 2011).16 In a context of temporary migration, the sign of U^{(2,1)} is the only condition needed in a setting where the decision maker is faced solely with a labor income risk, in the first period. Our setting is also different from that considered by Eeckhoudt et al. (2007) in a model of tertiary prevention. The sign of the cross-derivative of order 2 in the wealth argument and of order 1 in the background variable is the sufficient condition, in their model, for an increase in the investment in tertiary prevention under an income risk in the second period, rather than the first one.

In order to analyze our results, we refer to the interpretation of prudence provided by the precautionary saving literature. Following Menegatti (2010), condition U^{(2,1)} ≥ 0 ensures that the disutility suffered by the migrant parent because of the destination country's income risk is reduced if the level of the child's human capital is increased.17 This is done by decreasing the disinvestment in the child's human capital, and thus by decreasing the migration duration. Moreover, following Eeckhoudt and Schlesinger (2006), this effect can be explained by the migrant parent's preference for disaggregating the harm of a higher income risk and that of a lower child human capital.18

Risk on the home country income

The migrating parent is now faced with a future risk on his income in the country of origin (y_2 + ε_{2w}). We investigate how this future income risk affects the optimal migration duration of the parent, compared to a situation where there is no such risk, ceteris paribus. In other words, we investigate which of the two situations, one where the parent uses migration to hedge against the future risk and one where the parent uses migration for the sole purpose of increasing future wealth, induces a longer migration duration. In the case of a future income risk in the country of origin, he maximizes the following lifetime utility:

W_{ε2w}(m) = U(y_1, Z_1 - m) + β E[V(y_2 + ε_{2w} + g(m), Z_2)].    (1.5)

The optimal length of the parent's migration is denoted by m*_{ε2w}. Following the same reasoning as previously, we obtain the following results.19
Proposition 3 Under an unfavorable pure risk in the country of origin [second period], a migrant parent will stay longer (shorter) at destination than with no such risk, m*_{ε2w} ≥ (≤) m*, if V^{(2,0)} ≤ (≥) 0. Under a zero-mean risk on the country of origin's income [second period], a migrant parent will stay longer (shorter) at destination than with no such risk, m*_{ε2w} ≥ (≤) m*, if V^{(3,0)} ≥ (≤) 0.

We show in Proposition 3 that an unemployment risk in the country of origin increases the migration duration of migrant parents only if they are risk averse (V^{(2,0)} ≤ 0), i.e., if they do not like the risk they would be facing upon returning to the home country. The importance of risk aversion in migration decision-making has already been documented in the empirical literature (e.g., Jaeger et al., 2010; Dustmann et al., 2020). We also show that the third-order derivative V^{(3,0)} has a significant impact on the optimal migration duration in the presence of a zero-mean income risk in the home country. V^{(3,0)} refers to the feature of individual preferences known as prudence, and particularly prudence in wealth (Kimball, 1990). In the saving literature, Leland (1968) was the first to show that the presence of a future risky income results in positive extra saving if and only if the third-order derivative of the univariate utility function is positive. He called this additional amount of savings the 'precautionary demand for saving' (see also Sandmo, 1970; Drèze and Modigliani, 1972).20 The role of prudence has also been shown in the health literature (see e.g., Brianti et al.). In a migration context, we show that a zero-mean income risk increases the migration duration of parents only if they are prudent (V^{(3,0)} ≥ 0), i.e., those for whom a higher level of wealth in the home country reduces the disutility induced by the income risk (Menegatti, 2010), and those for whom a combination of the harm from a higher income risk with a high level of wealth is preferred (Eeckhoudt and Schlesinger, 2006).

16 Menegatti (2009a) and Denuit et al. (2011) further highlight the role of this cross-derivative in determining precautionary saving, in the case of small risks, and where no assumption is made on risk distributions or on the size of risks, respectively.
17 This disutility can be measured, following Menegatti (2010), by the difference between the expected utility when the migrant parent is bearing an income risk and the utility when there is no risk.
18 Child human capital becomes more valuable in utility terms under risk if U^{(2,1)} ≥ 0, i.e., remaining one additional unit of time in the destination country is more costly under the income risk than in the situation with no risks: E[U^{(0,1)}(y_1 + ε_{1w}, Z_1 - m*)] ≥ U^{(0,1)}(y_1, Z_1 - m*) if U^{(2,1)} ≥ 0.
19 Proofs are omitted, as they are analogous to the case of the destination country's risks, but they are available upon request.
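The precautionary lengthening in Proposition 3 can be illustrated numerically. The sketch below is purely illustrative and not from the paper: it assumes a log-additive utility (which is prudent, V^{(3,0)} > 0), the hypothetical savings technology g(m) = a√m, a two-point zero-mean risk on y_2, and arbitrary parameter values.

import numpy as np
from scipy.optimize import minimize_scalar

beta, y1, y2, Z1, Z2, a = 0.95, 2.0, 1.0, 3.0, 3.0, 1.5
eps = np.array([-0.5, 0.5])   # zero-mean two-point risk on y2 (hypothetical)
prob = np.array([0.5, 0.5])

def g(m):
    return a * np.sqrt(m)

def U(y, Z):
    return np.log(y) + np.log(Z)   # log utility: U_yyy > 0, hence prudent

def W(m, risky):
    u1 = U(y1, Z1 - m)
    if risky:   # expected period-2 utility under the zero-mean income risk
        u2 = prob @ np.array([U(y2 + e + g(m), Z2) for e in eps])
    else:
        u2 = U(y2 + g(m), Z2)
    return u1 + beta * u2

def argmax(risky):
    r = minimize_scalar(lambda m: -W(m, risky), bounds=(1e-9, Z1 - 1e-9),
                        method="bounded")
    return r.x

m_star, m_risk = argmax(False), argmax(True)
print(f"m* without risk = {m_star:.4f}, with zero-mean y2-risk = {m_risk:.4f}")
# With a prudent V, m_risk > m_star: a precautionary lengthening of migration.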
Risk on the child human capital

Children can also be faced with an exogenous risk on their human capital, and since parents care about the human capital of their children, their migration behavior can, as a consequence, be affected. This risk can be related to different factors, air pollution being one example: 93% of children worldwide were exposed to levels of air pollution above the World Health Organization guidelines in 2016 (World Health Organization, 2018). Air pollution may affect the child's human capital through its effect on child health, schooling and cognitive functioning (see e.g., Currie et al., 2014). In our model, we particularly focus on pollutants that induce harm following prolonged exposure (over months or even years), and we therefore introduce a risk on child human capital only in the second period.

The migrating parent is now faced with a future risk on his child's human capital (Z_2 + ε_{2z}). We investigate, therefore, how the future child human capital risk affects the optimal migration duration of the parent with respect to a situation where there is no such risk, ceteris paribus. In the case of no risks, the parent's maximization problem is given by Eq. (1.1). In the case of a risk on his child's human capital, he maximizes the following lifetime utility:

W_{ε2z}(m) = U(y_1, Z_1 - m) + β E[V(y_2 + g(m), Z_2 + ε_{2z})].    (1.6)

We denote by m*_{ε2z} the length of the parent's migration in this case. Applying the same reasoning as before, we get the following proposition.

Proposition 4 Under a pure risk on his child's human capital upon his return to the home country [second period], a migrant parent will stay longer (shorter) at destination than with no such risk, m*_{ε2z} ≥ (≤) m*, if V^{(1,1)} ≤ (≥) 0. Under a zero-mean risk on his child's human capital upon his return to the home country [second period], a migrant parent will stay longer (shorter) at destination than with no such risk, m*_{ε2z} ≥ (≤) m*, if V^{(1,2)} ≥ (≤) 0.

We show that, following a pure risk on the child's human capital, migrant parents will increase the duration of their migration only if they are correlation averse (V^{(1,1)} ≤ 0), i.e., those for whom a lower child human capital level increases their marginal utility of income: the extra income accumulated abroad becomes more valuable when the child's human capital may turn out to be lower. We also show that the sign of the effect of a zero-mean child human capital risk on the migration duration of parents depends on the sign of the third cross-derivative V^{(1,2)}, which we refer to as 'cross-prudence in wealth'. In the classic saving problem, Eeckhoudt et al. (2007) showed that, under a future risk on the background variable (health), precautionary saving is positive whenever V^{(1,2)} is positive, i.e., whenever the decision maker is 'cross-prudent in wealth'. Baiardi et al. also reported a complex condition depending, among other elements, on V^{(1,2)}, but when considering a background risk flanked by a labor income risk and an interest rate risk. Similarly, in a model of health investment, comparing a situation with simultaneous income and background risks to a situation with only an income risk, Denuit et al. (2011) found that the sufficient condition for a precautionary health investment depends, among other elements, on V^{(1,2)}.
In our analysis of the migration duration decision under a child human capital risk, we find that the migration duration of parents increases only if they are cross-prudent in wealth, i.e., those for whom a higher level of wealth in the country of origin reduces the disutility suffered because of the risk on the child's human capital (Menegatti, 2010), and those who would prefer to face the child human capital risk when it is coupled with higher wealth (Eeckhoudt and Schlesinger, 2006).24

Risk on the migrant's accumulated savings

In addition to income risks in the destination and origin countries, the accumulated savings generated from working in the destination country that the migrant parent wants to bring home may also become risky. An example is major and unexpected interest rate or exchange rate fluctuations that may change the potential value of these savings in the country of origin, making them risky.

Now, the migrant's accumulated savings are risky. We investigate how the riskiness of the parent's savings affects the duration of migration with respect to a situation with no risks, ceteris paribus. Previously, we considered that income risks are additive in nature, in the sense that they affect the parent independently of his income level. The assumption of additive risks would, in this section, yield results that are technically identical to the case of introducing a risk on the home country's income, given by Proposition 3. We consider, therefore, the multiplicative form of the risk, with risky accumulated savings g(m)η, where E(η) = 1, in the sense that the risk is proportional to the migration duration of the parent.25

For the sake of simplicity, we assume that y_2 = 0. The parent maximizes the following lifetime utility:

W_η(m) = U(y_1, Z_1 - m) + β E[V(g(m)η, Z_2)].    (1.7)

We denote by m*_η the length of the migration when the accumulated savings are risky, implying a FOC given by

W'_η(m*_η) = -U^{(0,1)}(y_1, Z_1 - m*_η) + β E[g'(m*_η) η V^{(1,0)}(g(m*_η) η, Z_2)] = 0.    (1.8)

In order to analyze the effects of a stochastic saving level on the optimal migration duration, we compare (1.8) with (1.2). We obtain the following result.

Proposition 5 A migrant parent will stay longer (shorter) at a destination where his accumulated savings are risky than at a destination where they are not, m*_η ≥ (≤) m*, if -g(m)η V^{(3,0)}(y, Z)/V^{(2,0)}(y, Z) ≥ (≤) 2.

Proof See Appendix 1.C.

23 Menegatti (2009b) and Denuit et al. (2011) show that, comparing a situation with simultaneous income and background risks to a situation with no risks, the sufficient condition for precautionary saving depends, among other elements, on both of the cross-derivatives V^{(1,2)} and V^{(2,1)}.
24 Remaining one additional unit of time in the destination country brings more benefits under the child human capital risk than in the situation with no risks: E[V^{(1,0)}(y_2 + g(m*), Z_2 + ε_{2z})] ≥ V^{(1,0)}(y_2 + g(m*), Z_2) if V^{(1,2)} ≥ 0.
25 Realizations of the random variable η are assumed to be all positive, to ensure that the accumulated savings brought to the home country are always positive. In the case where the realization of η is η̄, such that 0 < η̄ < 1 (η̄ = 1, η̄ > 1), the accumulated savings are lower than (equal to, higher than) under certainty.
The above proposition shows that, when the accumulated savings of the migrant parent in the country of destination are risky, a threshold level of 2 for -g(m*)η V^{(3,0)}/V^{(2,0)} determines whether the migrant increases or decreases the length of migration, with respect to the duration of migration in a similar destination with no such risk. -g(m*)η V^{(3,0)}/V^{(2,0)} is a relative prudence index.26 A risk on the accumulated savings of the migrant parent would increase his migration duration only if he has a relative prudence larger than 2, i.e., if his prudence is sufficiently "stronger" than his risk aversion. In fact, under prudence (V^{(3,0)} ≥ 0), a migrant parent would prefer to have higher wealth when facing a risk, and thus would have an incentive to increase the migration duration in order to increase the expected accumulated savings. However, when one increases the migration duration, one receives proportionally more risk. As the migrant parent is supposed to be risk averse (V^{(2,0)} ≤ 0), he does not like this second effect, and thus would have an incentive to reduce the migration duration. The overall impact therefore depends on which of these two effects dominates.

Previous work on choice under risk indicated, indeed, that risk on returns is related to a condition on relative prudence. In the saving literature, an index of relative prudence larger than 2 is required for precautionary saving to occur when the interest rate (the return on saving) is risky (e.g., Rothschild and Stiglitz, 1971; Eeckhoudt and Schlesinger, 2008). A similar condition has been found to govern the decision maker's behavior when the return on a health investment is risky (Liu and Menegatti, 2019b).27

Increase in risks

In the previous section, we investigated the effects of the introduction of some kinds of risks on the migration duration decision, compared to a situation with no risks. Now, we generalize our results to the case where migrants may move from a risky unemployment situation to an even riskier one. Moreover, the situations in Section 1.3 imply only changes in the variables' variance and/or expectation. We consider, therefore, other cases of increases in risk (comparing two risky situations) that imply changes in the other higher-order moments of the variables' distributions. We do so using the concept of nth-order stochastic dominance. The latter is concerned with the comparison of probability distributions of random variables that have different moments of higher order.

26 Following Kimball (1990), we define, for a bivariate function G(y, Z), absolute prudence as -G^{(3,0)}(y, Z)/G^{(2,0)}(y, Z) and relative prudence as -y G^{(3,0)}(y, Z)/G^{(2,0)}(y, Z). Note that if the second-period income were not zero, the results would involve "partial relative prudence" instead of "relative prudence" (see Chiu).
The particular case of Ekern (1980)'s concept of an increase in nth-degree risk is also considered.28

Increases in income risks

Consider the case where the parent has the option to migrate to two similar destinations D_1 and D_2, where earnings are risky, but those in D_2 (y_1 + θ_{1w}) are riskier than those in D_1 (y_1 + ε_{1w}) in terms of nth-order stochastic dominance (θ_{1w} ⪯_{SD-n} ε_{1w}). We investigate how the level of increase in income risk in the destination country affects the migration duration of the parent, ceteris paribus. In the case where the parent considers migrating to destination D_1, his maximization problem is given by Eq. (1.3). In the case where he considers migrating to destination D_2, he maximizes the following lifetime utility:

W_{θ1w}(m) = E[U(y_1 + θ_{1w}, Z_1 - m)] + β V(y_2 + g(m), Z_2).    (1.9)

We denote by m*_{θ1w} the optimal migration duration in this case, implying a FOC given by

W'_{θ1w}(m*_{θ1w}) = -E[U^{(0,1)}(y_1 + θ_{1w}, Z_1 - m*_{θ1w})] + β g'(m*_{θ1w}) V^{(1,0)}(y_2 + g(m*_{θ1w}), Z_2) = 0.    (1.10)

Comparing (1.10) and (1.4), we obtain the following result.

Proposition 6 A migrant parent will stay longer (shorter) in destination D_2, where his earnings are riskier, in terms of nth-order stochastic dominance, than in destination D_1, m*_{θ1w} ≥ (≤) m*_{ε1w}, if his preferences are such that (-1)^{1+k} U^{(k,1)} ≥ (≤) 0, ∀k = 1, ..., n.

Proof See Appendix 1.F.

Proposition 6 shows that changes in the migration duration of parents, following increases in the destination country's income risk, require conditions on the parents' preferences that involve the sign of higher-order derivatives of their utility function. This result is typical when considering increases in risk in terms of nth-order stochastic dominance. In particular, these conditions were first provided by Eeckhoudt and Schlesinger (2008) in the case of a univariate utility function. This work was generalized by Chiu and Eeckhoudt (2010) to the case of a bivariate utility, considering decisions on labor supply. If we restrict changes in the destination's income risk to the special case defined by Ekern (1980), then a migrant parent facing an nth-degree riskier income in destination D_2 than in destination D_1 will stay longer (shorter) in destination D_2 if his preferences are such that (-1)^{1+n} U^{(n,1)} ≥ (≤) 0. This result provides, therefore, a generalization of the migrant parent's behavior under a zero-mean risk, shown in Proposition 2, which considers the particular case of a second-order increase in risk (n = 2).

Consider now two migrant parents (referred to as A and B) that are similar in their characteristics but differ in the level of the income risk in their country of origin. The home country's income for migrant B (y_2 + θ_{2w}) is riskier than that for migrant A (y_2 + ε_{2w}) in terms of nth-order stochastic dominance (θ_{2w} ⪯_{SD-n} ε_{2w}). We investigate how the level of increase in income risk in the home country affects the duration of migration, ceteris paribus. The maximization problem of migrant parent A is given by Eq. (1.5), while migrant parent B maximizes the following lifetime utility:

W_{θ2w}(m) = U(y_1, Z_1 - m) + β E[V(y_2 + θ_{2w} + g(m), Z_2)].    (1.11)

We denote by m*_{θ2w} the optimal migration duration of migrant B.
Following a similar reasoning as previously, we get the following proposition.

Proposition 7 Migrant parent B, facing, in the country of origin, a riskier income, in terms of nth-order stochastic dominance, compared to a similar migrant parent A, will stay longer (shorter) at destination, m*_{θ2w} ≥ (≤) m*_{ε2w}, if his preferences are such that (-1)^{1+k} V^{(k+1,0)} ≤ (≥) 0, ∀k = 1, ..., n.

If we restrict changes in the country of origin's income risk to the special case defined by Ekern (1980), then migrant parent B, facing an nth-degree riskier income in the country of origin compared to migrant parent A, will stay longer (shorter) at destination if his preferences are such that (-1)^{1+n} V^{(n+1,0)} ≤ (≥) 0. The special case of introducing a zero-mean income risk in the country of origin is a second-order increase in risk (n = 2), and the migration behavior is given by Proposition 3.

Increases in the risk on the child human capital

Consider two parents (referred to as A and B). The two parents and their respective children are similar in their characteristics but differ in the level of the risk their children are exposed to. The human capital of migrant B's child (Z_2 + θ_{2z}) is riskier than that of migrant A's child (Z_2 + ε_{2z}) in terms of nth-order stochastic dominance (θ_{2z} ⪯_{SD-n} ε_{2z}). We investigate how the level of increase in the child human capital risk affects the duration of migration, ceteris paribus. The maximization problem of migrant parent A is given by Eq. (1.6), while migrant parent B maximizes the following lifetime utility:

W_{θ2z}(m) = U(y_1, Z_1 - m) + β E[V(y_2 + g(m), Z_2 + θ_{2z})].    (1.12)

We denote by m*_{θ2z} the optimal migration duration of migrant B. Following a similar reasoning as previously, we get the following proposition.

Proposition 8 Migrant parent B, facing a riskier human capital of his child, in terms of nth-order stochastic dominance, compared to migrant parent A, will stay longer (shorter) at destination, m*_{θ2z} ≥ (≤) m*_{ε2z}, if his preferences are such that (-1)^{1+k} V^{(1,k)} ≤ (≥) 0, ∀k = 1, ..., n.

If we restrict changes in the child human capital risk to the special case defined by Ekern (1980), then migrant parent B, facing an nth-degree riskier human capital of his child compared to migrant parent A, will stay longer (shorter) at destination if his preferences are such that (-1)^{1+n} V^{(1,n)} ≤ (≥) 0. Proposition 4 is the particular case of this result when n = 2.

Increases in risk on the accumulated savings

Consider a parent who has the option to migrate to two similar destinations D_1 and D_2, but where the accumulated savings of the migrant parent in D_2 (g(m)ζ) would be riskier than those in D_1 (g(m)η), in terms of nth-order stochastic dominance (ζ ⪯_{SD-n} η). We investigate how the level of increase in the riskiness of the savings accumulated while working in the destination country affects the duration of migration, ceteris paribus. In the case where the parent considers migrating to destination D_1, his maximization problem is given by Eq. (1.7). In the case where the parent considers migrating to destination D_2, he maximizes the following lifetime utility:

W_ζ(m) = U(y_1, Z_1 - m) + β E[V(g(m)ζ, Z_2)].    (1.13)

We denote by m*_ζ the length of the migration, had the parent chosen to migrate to destination D_2. The FOC of this problem is given by

W'_ζ(m*_ζ) = -U^{(0,1)}(y_1, Z_1 - m*_ζ) + β E[g'(m*_ζ) ζ V^{(1,0)}(g(m*_ζ) ζ, Z_2)] = 0.    (1.14)
Setting y = g(m*_η)η, where y > 0, and comparing (1.14) and (1.8), we obtain the following result.

Proposition 9 A migrant parent will stay longer (shorter) in destination D_2, where his accumulated savings are riskier, in terms of nth-order stochastic dominance, than in destination D_1, m*_ζ ≥ (≤) m*_η, if his preferences are such that -y V^{(k+1,0)}/V^{(k,0)} ≥ (≤) k, ∀(y, Z), ∀k = 1, ..., n.

Proof See Appendix 1.G.

Proposition 9 shows that the change in the optimal migration duration is governed by the value of the nth-degree relative risk aversion.29 If we restrict changes in the accumulated savings risk to the special case defined by Ekern (1980), then a migrant parent facing nth-degree riskier accumulated savings in destination D_2 than in destination D_1 will stay longer (shorter) in destination D_2 if his preferences are such that -y V^{(n+1,0)}/V^{(n,0)} ≥ (≤) n. Proposition 5 is the particular case of this result when n = 2.

Theoretical results versus empirical applications

In the previous section, we have shown how increases in risk imply different effects on the migration duration of parents. In order to relate our analysis to the empirical literature, we show how different measures of risk reflect different orders of risk changes and require different conditions to explain parental migration behavior.

In the empirical literature on migration, the increase in the home country's income risk has been measured by the variance of the residuals of the income regression (Jalan and Ravallion, 2001), the coefficient of variation of income (Munshi and Rosenzweig, 2016), and the coefficient of variation of temperature or rainfall variance (Rosenzweig and Stark, 1989; Dillon et al., 2011). When income is normally distributed, an increase in its variance is sufficient to guarantee a second-order increase in risk.30 All risk averse migrant parents, i.e., those with a marginal utility that is decreasing in wealth, should dislike such changes in the distribution of their incomes. However, a change in the migration duration is not guaranteed unless the migrants exhibit the specific preference known as prudence. Munshi and Rosenzweig (2016) and Jalan and Ravallion (2001) found that greater rural income risk inhibits rural-urban migration in India and China, respectively, while Dillon et al. (2011) and Rosenzweig and Stark (1989) found evidence of an increased probability of migration due to agricultural income risk in northern Nigeria and rural India, respectively. These studies suggest evidence for precautionary changes in the migration likelihood. If changes in the migration probability are similar to those in the migration duration in the face of income risk, our results suggest that a sufficient condition for these behaviors is the sign of prudence.

The implementation of the labor market reform Hartz IV, in January 2005 in Germany, typically generated a first-order increase in labor income risk, as it significantly decreased the generosity and toughened the eligibility requirements of unemployment benefits. Applying our results, the expected effect on migration spells for foreign migrant parents in Germany is an increase in the migration duration if they are correlation loving, ceteris paribus. For German parents abroad, by contrast, such a reform would increase their migration duration at destination if they are risk averse, ceteris paribus.
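The role of the correlation attitude in such pure-risk scenarios can also be checked numerically. The sketch below is purely illustrative and not from the paper: it uses the two hypothetical utility families of Appendix 1.D, the multiplicative form (correlation loving, U^{(1,1)} > 0) and the non-separable form with f(Z) = Z (correlation averse, U^{(1,1)} < 0), together with an unemployment-type pure risk on the destination income as in Proposition 2; all parameter values are arbitrary.

import numpy as np
from scipy.optimize import minimize_scalar

beta, y1, y2, Z1, Z2, a = 0.95, 2.0, 1.0, 3.0, 3.0, 1.5
p, k = 0.3, 1.0               # unemployment probability and income loss

def g(m):
    return a * np.sqrt(m)

U_loving = lambda y, Z: np.sqrt(y) * np.sqrt(Z)    # multiplicative: U^(1,1) > 0
U_averse = lambda y, Z: np.sqrt(y + Z)             # non-separable: U^(1,1) < 0

def m_star(U, risky):
    def W(m):
        if risky:   # unfavorable pure risk on the destination income
            u1 = (1 - p) * U(y1, Z1 - m) + p * U(y1 - k, Z1 - m)
        else:
            u1 = U(y1, Z1 - m)
        return u1 + beta * U(y2 + g(m), Z2)
    r = minimize_scalar(lambda m: -W(m), bounds=(1e-9, Z1 - 1e-9),
                        method="bounded")
    return r.x

for name, U in [("correlation loving", U_loving),
                ("correlation averse", U_averse)]:
    print(f"{name}: m* = {m_star(U, False):.4f} -> {m_star(U, True):.4f} with risk")

Under these assumptions the correlation-loving parent lengthens his stay when the pure risk is introduced, while the correlation-averse parent shortens it, in line with Proposition 2.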
During the 2000s, tax reforms in favor of flat personal income tax structures were adopted in a number of countries, including Iceland in 2007 and the Czech Republic in 2008. Davies and Hoy (2002) show how moving from an income tax with a graduated rate schedule (progressive tax schedule) to one with a single marginal rate levied on the same base (flat tax scheme), with a personal allowance adjusted to generate the same tax yield under the two schemes, may induce a decrease in the level of third-order risk within the after-tax income distribution.32 A tax flattening exercise will result in a mean-preserving contraction at the lower level of the income distribution (a progressive transfer) and a mean-preserving spread (a regressive transfer). If the two tax schedules maintain the same tax yield for the government, the mean is unchanged between the two post-tax income distributions. If their variance is also kept unchanged, then tax flattening parallels a decrease in downside risk as defined by Menezes et al. (1980).33 Migrant parents exhibiting prudence like this decrease in downside risk. However, a change in the migration duration is not guaranteed unless migrant parents exhibit a specific preference known as temperance. Foreign migrant parents in the Czech Republic, for example, may stay longer in the country after the tax flattening reform if they are cross-temperate (i.e., if U^{(3,1)} ≤ 0), ceteris paribus. On the other hand, Czech migrant parents may stay shorter at any destination after their country's tax reform if they are temperate in wealth (i.e., if V^{(4,0)} ≤ 0), ceteris paribus.

Furthermore, economic news has been found to affect the first moment of short- and medium-term interest rates in the United States (e.g., Tuysuz) and that of exchange rates (e.g., Jansen and De Haan, 2005). This might indicate that, following certain news announcements, interest and exchange rates are expected to be stochastically lower, i.e., their conditional distributions would deteriorate in the sense of first-order stochastic dominance. In this case, foreign migrant parents in the United States should stay longer, even though they are now faced with a more unfavourable situation in terms of stochastic dominance, if their relative risk aversion exceeds 1 (i.e., if -y V^{(2,0)}/V^{(1,0)} ≥ 1), ceteris paribus.

Finally, financial integration, in the sense that banks in different countries are able to smooth local liquidity shocks by borrowing on the world interbank market, has been shown to increase the skewness of the interest rate distribution (e.g., Castiglionesi et al.).

31 The toughening of the eligibility requirements may be expected to provoke changes in the migration duration of migrant parents similar to those provoked by decreases in unemployment payments.
32 Chiu (2010) also showed that the after-tax income distribution under a flat tax schedule is more skewed to the right (the third moment of the after-tax income distribution is higher) than that under a graduated-rate tax schedule. This suggests that tax flattening may decrease the downside income risk.
As an increase in skewness may reflect a third-degree decrease in risk, migrant parents may stay shorter at the destination where the financial integration took place if their relative temperance exceeds 3 (i.e., if -y V^{(4,0)}/V^{(3,0)} ≥ 3).

Conclusion

In this paper, we develop a simple model where migrant parents, with children left behind in the home country, determine their optimal migration duration. We focus on examining the effects of different risks on the desired length of stay in the destination country, compared to a situation with no risks. We then generalize our analysis to comparing decisions under different levels of the same risk. Based on the equivalence between decreases in the child's human capital and the length of the parental migration, our model produces a number of interesting and new results.

First, we show that, in the absence of risks, larger income differentials between home and destination countries may decrease the optimal migration duration. As a consequence, if the inflow of migrants is regulated, a growing economic disparity between the two countries may result in a lower stock of migrants in the destination country at any point in time. We show that such behavior can be explained by the parental correlation attitude.

Second, we show that the effect of the introduction of an unemployment risk in the home or the destination country on the migration duration can be explained by the risk aversion or the correlation attitude of the migrating parent, respectively. However, risk aversion alone is not a sufficient condition for a precautionary behavior in a context that introduces a zero-mean income risk. We show the need, in the latter context, for either prudence or cross-prudence, i.e., a risk preference of order three, to ensure an increase in the optimal migration duration. In the case where the migrant's accumulated savings from work in the destination country become risky, our analysis shows the existence of incentives of opposite directions, summarized by relative prudence, to adjust the optimal migration duration. The latter increases only if risk aversion is sufficiently "weaker" for parents than prudence. Considering a risk on the left-behind child's future human capital, cross-prudence is found to be the condition required for the migrant parent to change the duration of his migration abroad. We further show how our results can be generalized to the case of changes in risk of order n, as defined by Ekern (1980). In this case, the role of higher-order risk preferences in determining the variation of the optimal migration duration is shown.

One consequence of our results is that increases in income risks in the destination country or decreases in income risks in the home country may lead to longer migration cycles, and to an increase in the size of the migrant population in the destination country, as long as the inflow of migrants is kept constant (and vice versa).

Our results also suggest a number of implications. First, understanding the amount and characteristics of the risks that migrant parents face in the home or the destination country may, in part, provide policy makers with tools to control temporary migration, if coupled with information on the risk preferences of migrant parents. Second, our findings suggest that the presence of risks leads to a self-selection of migrants with respect to their risk preferences, in terms of who stays longer and who returns faster.
As some economically important attributes vary with the degree of these preferences, their distribution among migrants matters. Indeed, it has been shown that higher levels of prudence are correlated with better education and higher cognitive ability (Noussair et al., 2014; Breaban et al.), while other studies found that risk aversion is negatively, even if only weakly, related to cognitive ability (Lilleholt, 2019) and hinders entrepreneurship (Cramer et al., 2002; Zhang et al.). Our results on selectivity predict that, under increases in unemployment risk in the country of origin, the foreign labor market is more likely to keep, for a longer period, a labor force with possibly lower cognitive ability and lower entrepreneurial intentions, ceteris paribus. However, under increases in income risks in the home country that imply changes in the variance of the income distribution, the foreign labor market is more likely to benefit longer from a labor supply, among risk averse migrant parents, that has better education and cognitive abilities, ceteris paribus. It therefore appears crucial to control for risk preferences other than risk aversion in empirical migration studies, in order to net out the effect of the varying levels of these preferences from that of other personal characteristics. The development of approaches used to measure these preferences makes it easier to collect such data in administrative surveys.

Finally, the conditions governing the parents' migration duration, following increases in income differentials or in the different income and child human capital risks, involve preferences carrying cross-effects between income and children's human capital, which are themselves affected by the functional form of the parental utility. Hence the importance of these features in the selection of models describing migrant parents' preferences and for future empirical work.

Different developments can be suggested for future research. Our analysis considers the effect of risk on migration behavior under the assumption that the migration duration decision is made in isolation. However, that decision can be taken simultaneously with other choices, such as the monetary investment in the human capital of the child. Another extension is to consider the case where the migrant parent does not necessarily know the objective probabilities of a future event that may affect his decision, and does not even have a clear subjective opinion on them. This uncertainty is referred to as 'ambiguity', and the preference of a migrant who wants to avoid such a situation is called 'ambiguity aversion' (Klibanoff et al., 2005).

1.A Proof proposition 1

Let F(m; y_1, y_2, Z_1, Z_2) denote the left-hand side of the first-order condition (1.2), so that F = 0 at the optimum. It follows that, ∀k = y_1, y_2, Z_1, Z_2,

∂F/∂k + (∂F/∂m)(dm/dk) = 0  ⇔  dm/dk = -(∂F/∂k)/(∂F/∂m).

Since the SOC is verified (∂F/∂m < 0), Sign(dm/dk) = Sign(∂F/∂k). We then obtain the following:

∂F/∂y_1 = -U^{(1,1)}(y_1, Z_1 - m*). If U^{(1,1)} < 0 then ∂F/∂y_1 > 0; if U^{(1,1)} = 0 then ∂F/∂y_1 = 0; if U^{(1,1)} > 0 then ∂F/∂y_1 < 0.

∂F/∂y_2 = β g'(m*) V^{(2,0)}(y_2 + g(m*), Z_2) < 0.
∂F/∂Z_1 = -U^{(0,2)}(y_1, Z_1 - m*) > 0, by the concavity of U in its second argument.

∂F/∂Z_2 = β g'(m*) V^{(1,1)}(y_2 + g(m*), Z_2). The sign of this expression depends on the sign of the second cross-derivative V^{(1,1)} (since g'(m*) > 0 by assumption). If V^{(1,1)} < 0 then ∂F/∂Z_2 < 0; if V^{(1,1)} = 0 then ∂F/∂Z_2 = 0; if V^{(1,1)} > 0 then ∂F/∂Z_2 > 0. This ends the proof.

1.B Proof proposition 2

In order to compare m*_{ε1w} and m*, we evaluate the first-order condition (1.4) (where m*_{ε1w} is determined) at m*. We thus obtain

W'_{ε1w}(m*) = -E[U^{(0,1)}(y_1 + ε_{1w}, Z_1 - m*)] + β g'(m*) V^{(1,0)}(y_2 + g(m*), Z_2),    (1.15)

which is equivalent to (using Eq. (1.2))

W'_{ε1w}(m*) = -E[U^{(0,1)}(y_1 + ε_{1w}, Z_1 - m*)] + U^{(0,1)}(y_1, Z_1 - m*).    (1.16)

In the case of the unemployment risk, ε_{1w} = [0, -k; 1-p, p], E[U^{(0,1)}(y_1 + ε_{1w}, Z_1 - m*)] can be written as (1-p) U^{(0,1)}(y_1, Z_1 - m*) + p U^{(0,1)}(y_1 - k, Z_1 - m*). If U^{(0,1)} is increasing (decreasing) in y_1, i.e., U^{(1,1)} ≥ (≤) 0, then

U^{(0,1)}(y_1, Z_1 - m*) ≥ (≤) U^{(0,1)}(y_1 - k, Z_1 - m*)
⇔ p U^{(0,1)}(y_1, Z_1 - m*) ≥ (≤) p U^{(0,1)}(y_1 - k, Z_1 - m*)
⇔ (1-p) U^{(0,1)}(y_1, Z_1 - m*) + p U^{(0,1)}(y_1, Z_1 - m*) ≥ (≤) (1-p) U^{(0,1)}(y_1, Z_1 - m*) + p U^{(0,1)}(y_1 - k, Z_1 - m*)
⇔ U^{(0,1)}(y_1, Z_1 - m*) ≥ (≤) E[U^{(0,1)}(y_1 + ε_{1w}, Z_1 - m*)]
⇔ W'_{ε1w}(m*) ≥ (≤) 0 (Eq. (1.16) is positive (negative))
⇔ W'_{ε1w}(m*) ≥ (≤) W'_{ε1w}(m*_{ε1w})
⇔ m*_{ε1w} ≥ (≤) m* (by the second-order condition, W'_{ε1w} is decreasing in m).

In the case of a zero-mean risk, by Jensen's inequality, U^{(2,1)}(y, Z) ≤ (≥) 0, ∀(y, Z), implies that Eq. (1.16) is positive (negative), resulting in m*_{ε1w} ≥ (≤) m*. This ends the proof.

1.C Proof proposition 5

In order to compare m*_η and m*, we evaluate the first-order condition (1.8) at m*. Using Eq. (1.2), we obtain

W'_η(m*) = β E[g'(m*) η V^{(1,0)}(g(m*)η, Z_2)] - β g'(m*) V^{(1,0)}(g(m*), Z_2).    (1.17)

The parent will stay longer (shorter) at the destination with risky savings (m*_η ≥ (≤) m*) if E[η V^{(1,0)}(g(m*)η, Z_2)] ≥ (≤) V^{(1,0)}(g(m*), Z_2). By Jensen's inequality, this occurs if the function H(η) = η V^{(1,0)}(g(m)η, Z_2) is convex (concave) with respect to η, ∀η, and that is the case whenever -g(m)η V^{(3,0)}(g(m)η, Z_2)/V^{(2,0)}(g(m)η, Z_2) ≥ (≤) 2. This ends the proof.

1.D Functional forms of parental utility functions

A number of theoretical papers in the migration literature have assumed additive separability of the parental utility function (G(y, Z) = u_1(y) + u_2(Z), with u_1' > 0, u_1'' < 0, u_2' > 0 and u_2'' < 0) (Dustmann, 2003a; Myerson, 2017), suggesting that decision makers are correlation neutral.34 Other possible functional forms include the multiplicative separability between the parent's earnings and the child's human capital (G(y, Z) = u_1(y) u_2(Z), with u_1' > 0, u_1'' < 0, u_2' > 0 and u_2'' < 0), suggesting correlation loving, and the non-separable form (G(y, Z) = u(y + f(Z)), with u' > 0, u'' < 0, f' > 0 and f'' < 0, where f(Z) is the monetary equivalent of the child's human capital level Z), suggesting correlation aversion.

1.E Increases in risk

We define increases in risk, in this paper, via the concept of stochastic dominance. A stochastic change is the change from one probability distribution to another, and stochastic dominance allows us to establish a partial ordering of these probability distributions. It therefore allows the comparison of distributions that have different moments of higher orders.
Consider two random variables θ_k and ε_k valued in some interval [a, b] of the real line, with respective cumulative distribution functions F and G. Starting from F_1 = F and G_1 = G, define iteratively, for z ∈ [a, b],

F_{s+1}(z) = ∫_a^z F_s(t) dt and G_{s+1}(z) = ∫_a^z G_s(t) dt, for s ≥ 1.

θ_k is said to be riskier than ε_k (or θ_k is said to be dominated by ε_k) via nth-order stochastic dominance (θ_k ⪯_{SD-n} ε_k) if G_n(z) ≤ F_n(z) for all z, and if G_s(b) ≤ F_s(b) for s = 1, 2, ..., n-1. From Ingersoll (1987), we know that θ_k ⪯_{SD-n} ε_k is equivalent to E[f(θ_k)] ≤ E[f(ε_k)] for all functions f with derivatives f^{(1)}, f^{(2)}, ..., f^{(n)} such that (-1)^{s+1} f^{(s)} ≥ 0 for s = 1, 2, ..., n. Hence, ⪯_{SD-n} represents the common preferences of all the decision makers whose preferences satisfy risk apportionment of degrees 1 to n, in the terminology of Eeckhoudt and Schlesinger (2006). These decision makers prefer to disaggregate risks across equiprobable states of nature.

When the first n-1 moments of θ_k and ε_k are equal, nth-order stochastic dominance coincides with Ekern (1980)'s concept of an increase in nth-degree risk. In other words, the latter implies that only the nth moment of the distribution changes. For example, adding a zero-mean noise term to the income distribution, or equivalently constructing a mean-preserving spread of the income probability distribution that transfers weight from the center to the tails while preserving the same mean, is a second-degree increase in the income risk, as defined by Ekern (1980), or a "mean-preserving increase in risk" as defined by Rothschild and Stiglitz (1970).

A different type of change in the income distribution is a mean-variance-preserving transformation. The latter follows from pairing a mean-preserving spread and a mean-preserving contraction in such a way that the effect of the spread occurs before the contraction. The net effect of such an operation, when applied to an income distribution, is to transfer dispersion from the right to the left for each value of income without changing the mean and variance, resulting in a new distribution with more downside risk, as defined by Menezes et al. (1980), or equivalently a third-order increase in risk as defined by Ekern (1980). Equivalently, an increase in downside risk is characterized by a transfer of a zero-mean noise from higher wealth levels to equally likely lower ones, and the density function of income will be skewed to the left.35

An increase in outer risk, which is equivalent to an increase in fourth-degree risk, is defined as a change in the distribution that relocates dispersion from the center to the tails, increasing the kurtosis, while keeping the first three moments of the distribution (the mean, the variance and the skewness) unchanged (Menezes and Wang).

1.F Proof proposition 6

In order to compare the two optimal values, m*_{ε1w} and m*_{θ1w}, we evaluate the first-order condition (1.10) at m*_{ε1w}. We have

W'_{θ1w}(m*_{ε1w}) = -E[U^{(0,1)}(y_1 + θ_{1w}, Z_1 - m*_{ε1w})] + β g'(m*_{ε1w}) V^{(1,0)}(y_2 + g(m*_{ε1w}), Z_2).    (1.18)

Using Eq. (1.4), this becomes

W'_{θ1w}(m*_{ε1w}) = -E[U^{(0,1)}(y_1 + θ_{1w}, Z_1 - m*_{ε1w})] + E[U^{(0,1)}(y_1 + ε_{1w}, Z_1 - m*_{ε1w})].    (1.19)
Following Ingersoll (1987), we can prove that Eq. (1.19) is negative (positive), resulting in m*_{θ1w} ≤ (≥) m*_{ε1w} (since W'_{θ1w} is decreasing in m), if (-1)^{1+s} U^{(s,1)} ≤ (≥) 0, ∀s = 1, ..., n. This ends the proof.

1.G Proof proposition 9

In order to compare m*_ζ and m*_η, we evaluate W'_ζ at m*_η. Using (1.8), we have

W'_ζ(m*_η) = β E[g'(m*_η) ζ V^{(1,0)}(g(m*_η) ζ, Z_2)] - β E[g'(m*_η) η V^{(1,0)}(g(m*_η) η, Z_2)].    (1.20)

We introduce the following function G: G(η) = η V^{(1,0)}(g(m*_η)η, Z_2), ∀η. Using this notation, the previous equation can equivalently be written as

W'_ζ(m*_η) = β g'(m*_η) (E[G(ζ)] - E[G(η)]).    (1.21)

Following Ingersoll (1987), we can prove that Eq. (1.21) is negative (positive), resulting in m*_ζ ≤ (≥) m*_η, if (-1)^{1+k} G^{(k)} ≥ (≤) 0, ∀k = 1, ..., n. Moreover, calculations show that, for k = 1, G^{(1)}(η) ≤ 0 if and only if -η g(m*_η) V^{(2,0)}(g(m*_η)η, Z_2)/V^{(1,0)}(g(m*_η)η, Z_2) ≥ 1. And it follows from standard induction arguments that, for any k > 1, G^{(k)}(η) ≤ (≥) 0 if and only if -η g(m*_η) V^{(k+1,0)}(g(m*_η)η, Z_2)/V^{(k,0)}(g(m*_η)η, Z_2) ≥ (≤) k. This ends the proof.

Chapter 2

Rural-urban migration as a risk coping strategy: The role of income differentials

Introduction

In developing countries where formal insurance and credit markets are imperfect, absent, or inaccessible, rural households faced with high variability of income must adopt various strategies to insure against income risks. Self-insurance mechanisms include, inter alia, informal risk-sharing arrangements among households of the same region and temporary migration to urban areas. Regarding the latter, the New Economics of Labor Migration (NELM) has hypothesized that migration can be used as a household strategy not only to maximize expected earnings, but also to reduce risks and overcome constraints associated with market failures, notably in credit and insurance (Stark and Levhari, 1982; Stark and Taylor, 1991). Yet, the empirical literature investigating the effects of income risk at the place of origin on migration behavior reports mixed evidence. While Rosenzweig and Stark (1989) and Dillon et al. (2011) found evidence in support of the NELM in India and northern Nigeria, respectively, Jalan and Ravallion (2001) and Munshi and Rosenzweig (2016) showed that greater rural income risks inhibit migration to urban areas in China and India, respectively.1

Why would some rural households not rely on migration despite being faced with an income risk? In the case of mid-1980s China, Jalan and Ravallion (2001) explain their finding by the way rural land is managed. Chinese rural households are denied private ownership of agricultural land and are only granted land-use rights through contracts, according to criteria such as the household's labor force.

1 Measures of household income risk used in these studies include temperature-related variability interacted with household land holdings to form an idiosyncratic measure of agricultural income risk (Dillon et al., 2011), rainfall variance interacted with households' landholdings to predict each household's variance in profits (Rosenzweig and Stark, 1989), the variance of the estimated innovation errors from the household-specific income process (Jalan and Ravallion, 2001), and the coefficient of variation of household income (Munshi and Rosenzweig, 2016).
Under these circumstances, the migration of some household members generates a risk of land expropriation. It is therefore possible that, when agricultural household income is risky, the incentive to secure the allocated land increases, which decreases the probability to migrate. In a similar vein, Munshi and Rosenzweig (2016) suggest that what drives a low migration probability, even in the presence of income risk, is the households' fear that, in the case of migration of one of their members, the loss in informal network insurance would exceed the income gain.2 These studies propose that migration behavior under risk is mainly explained by one main opportunity cost, the loss of land or the loss of informal risk-sharing transfers, which may reduce income differentials, and therefore migration probabilities.3

Yet, this may not be the only explanation for the negative relationship between income risk and migration. Another explanation may lie in the households' preference for the certain resources they have in their place of origin over risky resources from migration, even with positive income differentials. Hence, in a risky context, whether income differentials affect the migration decision or not becomes unclear. To investigate this question, we build a theoretical model where rural farm households make the migration decision under an aggregate income risk, hence when informal risk-sharing arrangements offer little to no protection.4 Moreover, we consider that the rural household's welfare depends not only on income but also on child human capital.5 Indeed, relying on the empirical evidence of the negative impacts of income risk on children's outcomes (Jensen, 2000; Maccini and Yang, 2009; Björkman-Nyqvist, 2013), we conjecture that migration is used not only to diversify income sources but also to protect, at least in part, against child human capital reductions.

2 Morten (2019) shows that introducing risk sharing for households indeed reduces migration by 37 percentage points in rural India. The context that she considers is, however, different. Households respond ex post to realized income shocks and not ex ante to income risk. She suggests that this result can be explained by the fact that households, in the case of a realized income shock, would be net recipients of risk-sharing transfers, and therefore would consider post-transfer income differentials.
3 Expected income differential, accounting for the probability of employment, has long been shown to be a prime determinant of rural-to-urban migration (Todaro, 1969), driving the rural labor force towards urban labor markets (e.g., Agesa, 2001; Zhu, 2002).
4 Our focus on aggregate income risk stems from climate change, which increases income risks for rural households by increasing, for example, rainfall variance and temperature fluctuations (Pachauri et al., 2007).
5 Including children's human capital or consumption in the parents' utility, in addition to the parental income, is not new in the theoretical literature on migration (Dustmann, 2003; Dustmann and Görlach, 2016; Myerson, 2017).
Given the definition we consider for the household's utility function, our model also allows us to test how the marginal utility of income changes when the level of child human capital increases, which contrasts with some theoretical papers in the migration literature that assume additive separability of the parental utility function, implying no interaction between the income and the children's human capital levels in the parents' utility function (Dustmann, 2003; Myerson, 2017). On the one hand, private tutoring, learning activities and a better physical environment at home are important drivers of improvement for low school-performing children, which increases the marginal utility of income when children have low school test scores. On the other hand, other activities that are complements to good educational performance, such as leisure activities, may be less enjoyable when school performance is low, which therefore decreases the marginal utility of income. No study, to our knowledge, has ever empirically investigated this relationship.

The setting of our empirical testing is rural areas in China. Labor migration within this country has been increasing dramatically over time to reach a total of 291 million migrant workers in 2019 (National Bureau of Statistics, 2020). As the hukou system in China, which relates each citizen to her place of birth, entitles differential access to welfare benefits between local and non-local residents, as well as between urban and rural residents, migrants tend to leave their family behind in their place of origin, resulting in an estimated 69 million "left-behind" children in 2015 (UNICEF, 2018). We use a cross-sectional household survey for the year 2008 that we combine with county rainfall data used to construct the aggregate measure of agricultural income risk. Furthermore, in order to account for the non-randomness and the selectivity of migration, we employ an empirical framework based on a switching regression model with endogenous switching that was first adopted by [START_REF] Nakosteen | Migration and income: The question of selfselection[END_REF] in the context of migration.

The contribution of this paper is twofold. First, it aims to contribute to the literature on the determinants of migration by exploring the effectiveness of migration as a risk coping strategy in the absence of informal risk-sharing arrangements. We particularly investigate the role of income differentials in determining the migration decision under risk. 6 This exercise explicitly demonstrates that a negative expected urban-to-rural income difference, in the case of an aggregate income risk, decreases the probability of migration as a risk coping strategy, compared to a situation where the expected income differential is positive. We also show that this effect diminishes with higher levels of income differential. This result is policy-relevant. Households with a negative income differential may resort to other unwanted strategies to mitigate the risk. Particularly, it has been shown that, in the absence of migration opportunities, households may store unproductive forms of wealth as a precaution against risk. In the case of Chinese rural households, this precautionary wealth may go up to 15% of their savings (Giles and Yoo, 2007). 7 This self-insurance mechanism may decrease household welfare as it limits the available resources for consumption and investments in health, education or any other agricultural investments.
Such a strategy may even reinforce poverty traps for the poorer households. It can also lead to larger problems when precautionary wealth is kept away from formal financial institutions, such as negative effects on intermediation and macroeconomic growth (Giles and Yoo, 2007). It is therefore important to consider the role of income differentials in determining the ability of households to use migration to adapt to aggregate risks, and to design policies that can effectively reduce their vulnerability to these risks.

Second, this paper also fits into the theoretical body of literature on child human capital investment. In the context that we consider in this paper, we find statistically significant evidence that the marginal utility of the household income increases as the child human capital deteriorates. The best specification of the utility function to consider for these households is, therefore, the non-separability between the households' earnings and their children's human capital. More precisely, the household's preferences would be best captured when considering the monetary equivalent of the child's human capital status instead of its measured value. Moreover, this result suggests that, in the case of an agricultural income risk, each additional monetary unit is more desirable when children have lower school test scores. In the migration context, we can therefore expect that, if the income differential is positive and the household is faced with an income risk, parents with children performing poorly at school may be more likely to migrate, compared to parents with better performing children.

The remainder of the paper proceeds as follows. In Section 2, we present the theoretical model underlying the analysis. Section 3 outlines the empirical model and our strategy for testing the different effects, and Section 4 describes our data. Section 5 provides the estimation results.

The theoretical model

We consider a rural farm household endowed with an income $y_0$ and a child human capital level $z_0$. Facing an agricultural income risk, the household suffers, with probability $p$, two types of damages: a monetary one on its income ($D_y$, with $D_y > 0$) and a non-monetary one on the child's human capital ($D_z$, with $D_z > 0$). We assume that, facing an agricultural risk, the household would opt for farming and investment decisions that are the least sensitive to the risk, and therefore less profitable, making the household income lower (e.g., [START_REF] Rosenzweig | Wealth, weather risk, and the composition and profitability of agricultural investments[END_REF]). 11 Regarding the non-monetary loss, the negative effect on child human capital can be twofold. First, out of fear of having an even lower income if the risk is realized, households may reduce their expenditure on education or their consumption of nutritious food, health care and leisure. Second, financial difficulties may induce economic pressure for parents, causing a state of emotional distress and potential marital conflicts that would negatively affect the parent-child relationship, and therefore the children's human capital [START_REF] Conger | The role of economic pressure in the lives of parents and their adolescents: The family stress model[END_REF]. The household's utility is thus given by

$U_0 = p\,u(y_0 - D_y,\; z_0 - D_z) + (1 - p)\,u(y_0, z_0)$. (2.1)

If the household decides to send a parent for migration, two effects will be at play, for both the household income and the child human capital. First, the loss of the household's labor force to migration decreases their cropping income. We denote this decrease by $\delta_y$ (with $\delta_y > 0$).
However, the money sent back by the migrant parent helps alleviate the previous negative effect, directly by increasing the per capita income and indirectly by stimulating crop production ([START_REF] Lucas | Emigration to South Africa's mines[END_REF]; [START_REF] Rozelle | Migration, remittances, and agricultural productivity in China[END_REF]). We denote this effect by $\Delta_y$ (with $\Delta_y > 0$). Second, the inflow of remittances from the migrant parent may allow children left behind to have access to better nutritious food and better health care, as well as to support their education. Remittances may also be used in a way that decreases the need for child engagement in household activities, initially induced by the absence of the migrating parent. These effects are aggregated into $\Delta_z$ (with $\Delta_z > 0$). However, the loss of parental time and family disruption may also mean less attention, supervision and care, and fewer study and leisure hours for the children left behind, inducing an adverse effect on the different child outcomes. 12 We denote this negative effect by $\delta_z$ (with $\delta_z > 0$).

11. In an economy where insurance and credit markets are imperfect, absent, or inaccessible, the household may rely on various informal coping strategies to alleviate the effects of financial uncertainty (Townsend, 1994; Udry, 1994). While these mechanisms may be effective against idiosyncratic shocks, they are unlikely to provide much insurance when faced with an aggregate income risk because of high spatial correlation (e.g., [START_REF] Fafchamps | Solidarity networks in preindustrial societies: Rational peasants with a moral economy[END_REF]; Udry, 1994). Other geographical areas that are not affected by the risk may provide little or no help due to enforcement problems and information asymmetries (e.g., [START_REF] Morduch | Consumption smoothing across space: Testing theories of risk-sharing in the ICRISAT study region of South India[END_REF]). Such strategies are, therefore, unable to prevent a decrease in the household income.
12. See Antman (2013) and Askarov and Doucouliagos (2020) for a review.

Furthermore, we make the following assumptions:

• A1. The amount of remittances is "state-dependent": $\Delta_y = \Delta_y^{1-p}$ in the good state of nature (no damage) and $\Delta_y = \Delta_y^{p}$ in the bad state of nature (damage state), with $\Delta_y^{p} > \Delta_y^{1-p}$. This assumption, which is in accordance with the NELM theory, means that the amount of remittances is higher in the bad state of nature: under the altruistic motive for remittances, migrants send more money to their families with riskier incomes [START_REF] Roberts | Fortune, risk, and remittances: An application of option theory to participation in village-based migration networks[END_REF].

• A2. The benefits to the child human capital from remittances are "state-dependent": $\Delta_z = \Delta_z^{1-p}$ in the good state of nature and $\Delta_z = \Delta_z^{p}$ in the bad state of nature, with $\Delta_z^{p} > \Delta_z^{1-p}$. As more remittances are sent in the bad state of nature, it is expected that they induce a larger beneficial effect on the child human capital. Evidence shows that increases in income in a single year have positive effects on children's outcomes [START_REF] Mayer | The influence of parental income on children's outcomes[END_REF].

• A3. In the good state of nature, the migrant parent's absence is totally compensated by the remittances sent back home, i.e. $\Delta_y^{1-p} = \delta_y$ and $\Delta_z^{1-p} = \delta_z$. Empirical studies provided evidence for the positive net effect of migration and remittances on the household income (e.g.
Taylor and Lopez-Feldman, 2010) and on the child human capital (e.g. [START_REF] Macours | Seasonal migration and early childhood development[END_REF]; [START_REF] Azzarri | International migration and nutritional outcomes in Tajikistan[END_REF]). However, for simplicity, and as we are only interested in the case with agricultural income risk, we assume that, in the case of no agricultural income risk, the benefits directly and indirectly related to remittances are equal to the losses incurred by the child due to the absence of the migrant parent.

It follows that the household's utility in case of migration is given by

$U_1 = p\,u(y_0 - D_y + \hat{\Delta}_y,\; z_0 - D_z + \hat{\Delta}_z) + (1 - p)\,u(y_0, z_0)$ (2.2)

where $\hat{\Delta}_y = \Delta_y^{p} - \delta_y$ (with $\hat{\Delta}_y > 0$ by assumptions A1 and A3) and $\hat{\Delta}_z = \Delta_z^{p} - \delta_z$ (with $\hat{\Delta}_z > 0$ by assumptions A2 and A3). Migration occurs if and only if $U_1 - U_0 \ge \alpha$, where $\alpha$ is the psychological cost of migration 13. In order to simplify notations, we pose $Y_0 = y_0 - D_y$ and $Z_0 = z_0 - D_z$. Therefore, migration will rationally occur whenever

$\Delta U = u(Y_0 + \hat{\Delta}_y, Z_0 + \hat{\Delta}_z) - u(Y_0, Z_0) \ge \bar{\alpha}$ (2.3)

where $\bar{\alpha} = \alpha/p$ is the adjusted cost of migration. Following [START_REF] Rey | Health and wealth: How do they affect individual preferences?[END_REF], we assume that $\bar{\alpha}$ can be decomposed as follows: $\bar{\alpha} = \alpha(X) + \epsilon$, where $X$ is a vector of variables that affect the migration cost and $\epsilon$ is a random variable such that $\epsilon \sim N(0, 1)$ with cumulative distribution function $F$. Using a Taylor expansion of order 2 to approximate the utility function around $(Y_0, Z_0)$, the probability of migration, $q$ $(= P(\Delta U \ge \bar{\alpha}))$, can thus be approximated as: 14

$q \approx F[\beta_1 \hat{\Delta}_y + \beta_2 \hat{\Delta}_y^2 + \beta_3 \hat{\Delta}_z + \beta_4 \hat{\Delta}_z^2 + \beta_5 \hat{\Delta}_y \hat{\Delta}_z - \alpha(X)]$ (2.4)

where $\beta_1 = u^{(1,0)}$, $\beta_2 = \frac{1}{2}u^{(2,0)}$, $\beta_3 = u^{(0,1)}$, $\beta_4 = \frac{1}{2}u^{(0,2)}$, $\beta_5 = u^{(1,1)}$, and $\alpha(X)$ is a function of variables related to the pecuniary and non-pecuniary costs of migration. $\hat{\Delta}_y$ and $\hat{\Delta}_z$ are respectively the income differential, defined as the difference in income between a situation where the household has a migrant in the urban labor market ($Y_0 + \hat{\Delta}_y$) and a situation where all household members stay in the rural labor market ($Y_0$), and the child human capital differential, defined as the difference in the child human capital from having a migrant parent ($Z_0 + \hat{\Delta}_z$) versus not having one ($Z_0$).

Eq. (2.4) shows that in the presence of agricultural income risk ($p$), both the income differential $\hat{\Delta}_y$ and the child human capital differential $\hat{\Delta}_z$ affect the migration decision of the household. Given the assumption of non-satiation of the household, $\beta_1$ should be positive: in the case of an agricultural income risk, decreasing the income differential should decrease the probability of the parent's migration. Similarly, $\beta_3$ should be positive: in the case of an agricultural income risk, decreasing the child human capital differential should decrease the probability of the parent's migration. Moreover, given our assumption that households are strictly risk averse, $\beta_2$ should be negative: an inverse U-shaped relationship should exist between the household income differential and the probability to send a parent for migration, in the case of an agricultural income risk. Similarly, $\beta_4$ should be negative: an inverse U-shaped relationship should exist between the child human capital differential and the probability to send a parent for migration, in the case of an agricultural income risk. Finally, the sign of $\beta_5$ reflects how the marginal utility of the household's income changes with respect to the household's child human capital, in the case of an agricultural income risk.
Due to the scarcity of theoretical and empirical works on this measure, the determination of its sign remains a pure empirical question.

Empirical model and econometric strategy

Empirical model

In order to investigate the decision to migrate, given by Eq. (2.4), we consider the continuous latent variable $Mig_i^*$, such that if $Mig_i^* > 0$, household $i$ would send a parent for migration. We get the following empirical model:

$Mig_i^* = \beta_1 (y_{1i} - y_{0i}) + \beta_2 (y_{1i} - y_{0i})^2 + \beta_3 (z_{1i} - z_{0i}) + \beta_4 (z_{1i} - z_{0i})^2 + \beta_5 (y_{1i} - y_{0i})(z_{1i} - z_{0i}) + \beta_6 H_i + u_i$ (2.5)

We define $Mig_i$ as a binary variable such that $Mig_i = 1$ if $Mig_i^* > 0$ and $Mig_i = 0$ otherwise. $(y_{1i} - y_{0i})$ and $(z_{1i} - z_{0i})$ are the differences in the household income and in the child human capital, from having a migrant parent versus not having one in the case of an income risk, for each household $i$, respectively. The migration costs are related to a set of observable household human capital and demographic characteristics, $H_i$, found to be important in the empirical literature, and some other unobservable factors included in the equation's error term, $u_i$.

Our primary objective is to estimate the structural migration equation (2.5). However, we do not observe direct measures of the household income and of the child human capital for migrant households, had they not had a migrant parent, and for non-migrant households, had they had a migrant parent. To overcome this issue, we introduce into Eq. (2.5) fitted values of the household income and of the child human capital variables, resulting from an ordinary least squares (OLS) estimation of the household income and the child human capital equations. Accordingly, we define below the household income and the child human capital functions. Suppose that $y_{mivc}$ is the log of the total household income for household $i$ living in village $v$ in county $c$, with a non-migrant parent for $m = 0$ or a migrant parent for $m = 1$, such that:

$y_{mivc} = \gamma_m J_{mi} + \theta_m \sigma_c + \phi_m S_v + \epsilon_{mivc}$ (2.6)

where $J_{mi}$ is a vector of household productive assets, including human capital and physical capital characteristics and institutional benefits, and $\sigma_c$ is the county-level measure of agricultural income risk (…, 2005). We further control for locational characteristics, $S_v$, at the village level $v$, which allow us to account for the differences in economic conditions across villages. Similarly, suppose that $z_{mivc}$ is the outcome of children from a household $i$ living in village $v$ in county $c$, with a non-migrant parent for $m = 0$ or a migrant parent for $m = 1$, such that:

$z_{mivc} = \alpha_m K_{mi} + \tau_m \sigma_c + \psi_m S_v + \kappa_p + \eta_{mivc}$ (2.7)

where $K_{mi}$ is a vector, for each household $i$, of the household's children's characteristics, and $\kappa_p$ denotes province fixed effects.

Econometric strategy

Equations (2.5)-(2.7) constitute the basic structural form of our model. The problem with the above procedure is that estimating the household income and the child human capital equations using OLS yields biased results because households are not randomly assigned to migration status, and therefore, the income and the child human capital observed for each category of migrants are truncated non-random samples. To correct for truncation and selection bias, we adopt the Heckman and Lee two-step procedure and produce the correct fitted values of the household income and of the child human capital variables. 17 We can then estimate Eq. (2.5) by maximum likelihood probit techniques.
This three-step estimation procedure, including a switching regression model with endogenous switching and a structural decision equation, was first applied in the migration context by [START_REF] Nakosteen | Migration and income: The question of selfselection[END_REF]. In the first step, a reduced form of the migration equation is obtained by substituting the household income and the child human capital equations into Eq. (2.5). The procedure suggests, in the second step, correcting the income and the child human capital equations by introducing the appropriate selectivity variables, $\hat{\lambda}_m$ (with $m = 0, 1$), and zero-mean error terms, $\zeta_j$ (with $j = 1, 2, 3, 4$), as follows:

$y_{0i} = \gamma_0 J_{0i} + \gamma'_0 \hat{\lambda}_{1i} + \theta_0 \sigma_c + \phi_0 S_v + \zeta_{1i}$,
$y_{1i} = \gamma_1 J_{1i} + \gamma'_1 \hat{\lambda}_{0i} + \theta_1 \sigma_c + \phi_1 S_v + \zeta_{2i}$,
$z_{0i} = \alpha_0 K_{0i} + \alpha'_0 \hat{\lambda}_{1i} + \tau_0 \sigma_c + \psi_0 S_v + \kappa_p + \zeta_{3i}$,
$z_{1i} = \alpha_1 K_{1i} + \alpha'_1 \hat{\lambda}_{0i} + \tau_1 \sigma_c + \psi_1 S_v + \kappa_p + \zeta_{4i}$. (2.8)

Estimating these equations using OLS produces consistent results [START_REF] Heckman | The common structure of statistical models of truncation, sample selection and limited dependent variables and a simple estimator for such models[END_REF]. As error terms of households residing in the same county may be correlated, we cluster standard errors at the county level. We also compute them using a bootstrap procedure that accounts for the variation resulting from the estimated Inverse Mills Ratio (IMR). Finally, we can form consistent predictors for $y_{0i}$, $y_{1i}$, $z_{0i}$ and $z_{1i}$ and introduce them in the structural migration equation (Eq. (2.5)). The resulting estimates of the $\beta$'s should be consistent, while coefficient standard errors are bootstrapped to account for the use of the generated income and child human capital differentials.
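To fix ideas, the three steps just described can be sketched in a few lines of Python with statsmodels. This is only an illustrative sketch of the logic, not the code used for the paper: the dataframe layout, the column names (mig, y, z, imr) and the helper names are our own hypothetical choices, and the regressor lists are abbreviated.

```python
# Sketch of the three-step switching-regression procedure:
# (1) reduced-form probit, (2) selectivity-corrected OLS per regime,
# (3) structural probit on the generated differentials.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

def predicted_differential(df, outcome, out_vars):
    """Regime-specific OLS with the inverse Mills ratio, then
    unconditional counterfactual predictions for every household."""
    preds = {}
    Xall = sm.add_constant(df[out_vars])
    for m in (0, 1):
        sub = df[df["mig"] == m]
        Xo = sm.add_constant(sub[out_vars + ["imr"]])
        fit = sm.OLS(sub[outcome], Xo).fit()
        # drop the IMR coefficient when predicting counterfactuals
        preds[m] = Xall @ fit.params.drop("imr")
    return preds[1] - preds[0]

def three_step(df, red_vars, inc_vars, hc_vars, cost_vars):
    # Step 1: reduced-form probit, including the excluded instruments
    Xr = sm.add_constant(df[red_vars])
    step1 = sm.Probit(df["mig"], Xr).fit(disp=0)
    xb = np.asarray(Xr, dtype=float) @ step1.params
    # Inverse Mills ratios for the selected and non-selected regimes
    df = df.assign(imr=np.where(df["mig"].eq(1),
                                norm.pdf(xb) / norm.cdf(xb),
                                -norm.pdf(xb) / (1.0 - norm.cdf(xb))))
    # Step 2: predicted income and child-human-capital differentials
    dy = predicted_differential(df, "y", inc_vars)
    dz = predicted_differential(df, "z", hc_vars)
    # Step 3: structural probit with the generated regressors
    Xs = sm.add_constant(pd.concat(
        [dy.rename("dy"), dy.pow(2).rename("dy2"),
         dz.rename("dz"), dz.pow(2).rename("dz2"),
         (dy * dz).rename("dydz"), df[cost_vars]], axis=1))
    return sm.Probit(df["mig"], Xs).fit(disp=0)

# Because dy and dz are generated regressors, the step-3 standard
# errors would be obtained by re-running three_step on bootstrap
# resamples (households, or clusters of households within counties)
# and taking the spread of the resulting coefficient draws.
```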
Identification

The household income and the child human capital equations on the one hand, and the migration equation on the other hand, have a large set of variables in common. Even though we can assume that our model is technically identified through the non-linearity of the IMR, there is still a risk that the income and child human capital equations would yield fragile results if insufficient non-linearity occurs [START_REF] Puhani | The Heckman correction for sample selection and its critique[END_REF]. To avoid a collinearity problem in the second stage of the estimation, it is recommended to have at least one observed variable that affects why households may choose to participate in migration but does not have any influence on the outcome equations. Two instruments are used to satisfy this exclusion restriction. The first instrument, the past village-level migration network, consists of the ratio of migrant individuals in 2005 in the same village. 18 Historical migration networks have been used by a number of papers, as an instrument for the migration variable, to investigate the effects of migration on the household income (e.g. [START_REF] Taylor | Migration and incomes in source communities: A new economics of migration perspective from China[END_REF]) as well as on child outcomes (e.g. [START_REF] Meyerhoefer | The effect of parental labor migration on children's educational progress in rural China[END_REF]). These papers argue that previous migrants from the same village will form a network in the destination areas. The network access should help reduce the migration costs and risks, and provide more information, contacts and support for households back home in the subsequent years, thus facilitating their migration without having any direct effects on the household's income or on children's outcomes. The second instrument, village labor-out, is a dummy variable that takes the value of 1 if the village collective organizes labor, finding jobs outside the village, in 2008, and 0 otherwise. Its rationale is similar to having a historical network of migrants that facilitates migration and lowers its costs. Therefore, we expect that these two instruments will have a positive effect on the household decision to send a migrant in 2008 in the first step of the Heckman and Lee procedure, but will not be significantly correlated with the error terms of the outcome equations in the second step. Still, one may think that these two instruments may reflect unobserved factors of the local economy, and therefore, may be correlated with the current levels of community development in each village. Thus, they could affect the current household income and children's outcomes. We control for some public facilities at the village level that may indirectly be related to our outcome variables.

Data description

We use data from the Rural Household Survey of the Rural-Urban Migration in China project (RUMiC-RHS), conducted in 2009 and covering the year 2008. A detailed village survey is also carried out along with the household survey. Following the National Bureau of Statistics' definition, we consider as a migrant worker any person who lived at least six months outside the local countryside in 2008, for work or business purposes. 20 Both within- and outside-the-county-of-origin movements are considered. Regarding household incomes, the total net income includes wages (wages from local off-farm activities and remittances), net incomes from family agricultural and non-agricultural activities, net property incomes and net transfer incomes. Parents or guardians were also asked to report their children's Chinese and mathematics scores from the final exams of the last school term. 21 We use these scores as measures of the child human capital. 22 Following Meng and Yamauchi (2017), who exploited the RUMiC-RHS data to investigate the effect of cumulative parental migration on child education, we use normalized test scores, defined as the actual test score divided by the full score applied in the child's school and multiplied by 100. According to Meng and Yamauchi (2017), there are no differences in the textbooks used in schools of the same province or prefecture. Therefore, we follow them and introduce province fixed effects when studying the determinants of children's test scores in order to account for possible inconsistencies across schools from different provinces. 23

We measure agricultural income risk through county-level rainfall variation and use the ratio of the standard deviation to the mean of rainfall during the months of March to October, computed over 52 years of monthly rainfall data (January 1960 - December 2012) for different weather stations across China. 24 We collected the data from the National Oceanic and Atmospheric Administration's (NOAA) Global Summary Of the Month (GSOM) precipitation data, which contains monthly summaries computed from stations in the Global Historical Climatology Network (GHCN)-Daily dataset. 25 About 94% of the data is originally collected at China's National Meteorological Information Center and has received thorough quality checks. In addition to monthly precipitation records, the latitude and longitude of each station's location is included in the data. We rely on this location information to match each county represented in the RUMiC-RHS survey to the nearest weather station. 26 The rainfall data we use exhibit variability across counties as well as within counties over the years.
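The rainfall-risk variable and the county-station matching lend themselves to a compact illustration. The sketch below is a minimal pandas version under our own assumptions about the dataframe layouts; the column names (station, prcp, lat, lon) are hypothetical placeholders, not the GSOM field names.

```python
# Sketch: coefficient of variation of March-October rainfall per
# station over 1960-2012, matched to counties by nearest station.
import numpy as np
import pandas as pd

def growing_season_cv(rain):
    """rain: columns [station, year, month, prcp] with monthly totals."""
    season = rain[rain["month"].between(3, 10)
                  & rain["year"].between(1960, 2012)]
    annual = (season.groupby(["station", "year"])["prcp"]
                    .sum().reset_index())
    # std/mean of the March-October totals across years, per station
    cv = (annual.groupby("station")["prcp"]
                .agg(lambda x: x.std() / x.mean()).rename("rain_cv"))
    return cv.reset_index()

def nearest_station(counties, stations):
    """Attach to each county the nearest station (haversine distance)."""
    lat1 = np.radians(counties["lat"].to_numpy())[:, None]
    lon1 = np.radians(counties["lon"].to_numpy())[:, None]
    lat2 = np.radians(stations["lat"].to_numpy())[None, :]
    lon2 = np.radians(stations["lon"].to_numpy())[None, :]
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    dist_km = 6371.0 * 2 * np.arcsin(np.sqrt(a))
    idx = dist_km.argmin(axis=1)
    return counties.assign(station=stations["station"].to_numpy()[idx])

# county_risk = nearest_station(counties, stations).merge(
#     growing_season_cv(rain), on="station")
```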
We make a number of restrictions on the initial sample in order to conduct our empirical analysis, with a focus on agricultural households with children. First, we restrict the sample to rural households with a farming activity and we drop households with no agricultural land and/or no member working in agriculture in 2008. We also exclude households that do not have any member in the labor force 27 and those that do not have at least a child aged between 6 and 15 or a child aged between 16 and 22 and at school. We further drop households with intra-county migrants who worked in the agricultural industry, resulting in a sample of 3,711 households. Dropping households with missing values for their total net income and those whose migrants do not send remittances leads to a sample of 3,464 households. In addition, we exclude households whose children have missing test scores. Comparing the households included in the final sample with those excluded reveals some differences. First, as expected, the number of migrants is lower in the sample we are using, since we focus on households with only one migrating parent. We also find that the household income and the number of children aged between 6 and 15 are positively associated with inclusion in the final sample, while the household size and the labor ratio are negatively correlated with inclusion in the final sample. Included households have, therefore, fewer members, fewer members in the labor force, but higher incomes and more children aged between 6 and 15. Note that we control in all estimations for the characteristics that might drive these differences. Tables 2.C.3 to 2.C.5 display various summary statistics, both for the overall sample and by migration status.

Empirical results

Structural equation of migration

Our baseline estimates of the structural form of the household decision to send a parent for migration, as presented by Eq. (2.5), are shown in Table 2.5.1. The table provides estimated coefficients and bootstrapped standard errors, using different specifications. In our benchmark specification, presented in columns (1) and (2), the mean of the predicted school test scores of all children in each household is used to build the child human capital differential variable. In columns (3) to (6), we use two alternative ways to deal with the child human capital variable, by considering the human capital of either the eldest (columns (3) and (4)) or the youngest (columns (5) and (6)) child among the household's children, for each household. 28 We also use two separate measures of the child human capital variable. The first relies on math scores (columns (1), (3) and (5)) and the second on Chinese scores (columns (2), (4) and (6)). 29

Results in Panel A, where only one parent is a migrant, confirm the predictions of the theoretical model. Of particular interest is the estimated coefficient on the expected income differential, which is positive and significant. This result confirms our assumption about the risk aversion of the households. It also shows that when households face an agricultural income risk and use migration as a coping strategy, the income differential remains a significant determinant of the migration decision.
A positive expected income differential increases the probability to migrate for households that face an income risk. However, the effect significantly decreases with higher levels of income differential. This result suggests that households make their migration decision depending on the level of the income differential they are expecting. Focusing on the sample where only one parent is a migrant, we find that the effect of income differentials starts decreasing when the expected income of the household with a migrant parent is around 1.32 to 1.38 times their expected income when they do not have a migrant parent. 30 This result can be explained in two ways. First, a higher expected income differential may be due to higher remittances, hence to a higher income for the parent in the destination area. If the latter is related to expected higher risks, the income risk in the destination area would increase compared to that in the place of origin. Hence, given the risk aversion of the household, the probability of migration would decrease. Second, a higher expected income differential may also be due to a lower expected reduction in the household income due to the absence of the parent. If the latter is related to a lower income risk, the income risk in the area of origin would decrease compared to that in the destination area. Hence, given the risk aversion of the household, the migration probability would decrease. 31
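As a side note, the 1.32-1.38 range reported above can be read directly off the quadratic specification. Since the $y$'s are log incomes, the estimated marginal effect of the income differential in Eq. (2.5) changes sign at the turning point

$$\frac{\partial\, Mig_i^*}{\partial (y_{1i} - y_{0i})} = \hat{\beta}_1 + 2\hat{\beta}_2 (y_{1i} - y_{0i}) = 0 \;\Longleftrightarrow\; (y_{1i} - y_{0i})^* = -\frac{\hat{\beta}_1}{2\hat{\beta}_2},$$

so a turning point of roughly $0.28$ to $0.32$ log points (the value of $-\hat{\beta}_1/2\hat{\beta}_2$ implied by the estimates in Table 2.5.1) corresponds to an expected income ratio of $e^{0.28} \approx 1.32$ to $e^{0.32} \approx 1.38$.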
Similarly, coefficients on the child human capital differential and its square are statistically significant at the 1% level and of the expected signs. This result suggests that households also care about the human capital of their children, implying that economic incentives are not the only determinants of migration in China when households face an income risk.

Another key coefficient is the one on the interaction between the expected income differential and the expected child human capital differential. Table 2.5.1 shows a negative sign of this coefficient, although the effect is significant at the 1% level only with the Chinese score as the measure of the child human capital.

[Notes to Table 2.5.1: Columns (1), (3) and (5) use math test scores as the measure of child human capital, while Columns (2), (4) and (6) use Chinese test scores. In Columns (1) and (2), we use the average of test scores of children in the same household, while we use in Columns (3) and (4) the test score of the oldest child in the household, and in Columns (5) and (6) the test score of the youngest child in the household. The interaction variable refers to the interaction term between the income differential and the child human capital differential. All regressions control for household characteristics (land size, mean household age, mean age squared, mean schooling, gender ratio, labor ratio, household size, the number of preschool children, school children (age<16), school children (age≥16), elderly (>60) and disabled members). Bootstrapped standard errors are in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01.]

In the context that we consider in this paper, the negative sign provides evidence for a decrease in the marginal utility of income following an increase in the child human capital level, i.e., households are correlation averse ($u^{(1,1)} < 0$). In other words, when faced with an income risk and when migration is used as a coping strategy, the household derives a higher satisfaction from each additional monetary unit it earns if the human capital of its children is lower.

Furthermore, as explained in Appendix 2.A.1, this result also allows us to draw conclusions about which functional form of the household utility best describes the households' income-child human capital trade-off choices. In this context, our negative estimate of $u^{(1,1)}$ implies that the non-separability between the household's earnings and the child's human capital is to be considered. More precisely, the household's preferences would be best captured when considering the monetary equivalent of the child's human capital status instead of its measured value. 32

Results in Panels B and C, where one or two parents may be migrants and where any household member may be a migrant, respectively, confirm our findings on the effect of the income differential and the interaction term of the income differential and the child human capital differential. Moreover, as the effects are likely to vary with the duration of parental absence, we replicate our estimations considering as migrants parents who were away from home for at least 3 months or for at least 9 months. We report the structural estimation of the migration equation in Appendix Table 2.C.6. Overall, the results remain the same, except for the coefficients on the income differential and the interaction term, which become less precisely estimated in the case of 9-month migration.

Appendix Table 2.C.7 shows the details of the structural migration estimation, using the sample where only one parent may be a migrant. 33 34 We find that more years of schooling of the household's labor force members decrease the probability to send a parent for migration. This result shows that better educated households may rely on strategies other than short-term migration to diversify income. This is not surprising, as the majority of short-term rural migrants in China occupy manual jobs, which local residents are unwilling to take, in the destination areas [START_REF] Meng | Labor market outcomes and reforms in china[END_REF]. 35 Around 42% of migrants surveyed in the 2009 RUMiC-RHS worked in the manufacturing industry, while around 19% of them worked in the construction industry and around 11% in the services industry. In accordance with prior research (e.g. Zhu, 2002; [START_REF] Mullan | Land tenure arrangements and ruralśurban migration in China[END_REF]), we find a significantly negative effect of the household land size on the probability to send a parent for migration, in the specification using the math score as the measure of child human capital. This finding confirms that the tenure of land provides a key source of employment and livelihood for rural households and, therefore, reduces the probability to migrate. 36 We also find that the higher the mean age of the household, the more likely it is to send a parent for migration. 37 Indeed, age reflects personal connections and the accumulation of knowledge and experience for the household, hence motivating migration. However, this effect decreases over time. This may also be explained by the type of jobs short-term migration offers. The ability to land these jobs decreases for older workers. This quadratic relationship between migration probability and age has been confirmed by various studies in the case of individual migration decisions in China (e.g. [START_REF] Giles | Elderly parent health and the migration decisions of adult children: Evidence from rural China[END_REF]; Dustmann et al., 2020). Another interesting finding is the significant and positive coefficient on the gender ratio.
It shows that the higher the number of males among the household members, the higher the likelihood of the household to send a parent for migration. This result is consistent with the literature showing that migration from rural areas is dominated by males. Even though this trend has been decreasing in recent years, it is still important among new-generation migrants (those born after 1980) in China [START_REF] Zhao | New trends in internal migration in china: Proőles of the new-generation migrants[END_REF]. When migration is considered as a household decision, a larger household size may encourage migration by providing a device for risk diversification, in the absence of credit and insurance markets [START_REF] Stark | Migration incentives, migration types: The role of relative deprivation[END_REF]. Some previous studies have confirmed this positive relationship in China (Zhu, 2002; [START_REF] Taylor | Migration and incomes in source communities: A new economics of migration perspective from China[END_REF]), while others showed that the number of siblings and the number of working-age men and women in the household decrease the probability to migrate (e.g., [START_REF] Giles | Elderly parent health and the migration decisions of adult children: Evidence from rural China[END_REF]). In our analysis, we show that, overall, a larger household size hinders the migration of parents. Going into the details, we find that the labor ratio is positive but only significant at the 10% conventional level when using the Chinese test score of the youngest child in each household as a measure of the child human capital level. This means that households with a higher proportion of dependents compared to the proportion of workers are less likely to send a parent for migration. Considering the dependent members in the household, we find, contrary to previous research (e.g., [START_REF] Giles | Elderly parent health and the migration decisions of adult children: Evidence from rural China[END_REF]; [START_REF] Mullan | Land tenure arrangements and ruralśurban migration in China[END_REF]), that the presence of children increases the probability of migration. This may be explained by the additional economic responsibilities related to more children in the household. Moreover, children, particularly older ones, may be a source of labor both for domestic and farming activities, replacing, in part, the absent migrating parent (Chang et al., 2011), hence explaining the estimated positive coefficient.

35. Li and Zahniser (2002), using data from 1995, suggested that the education effect on the individuals' decision to migrate in China is quadratic, where the migration probability increases over the first seven years of schooling, then declines thereafter. [START_REF] Giles | Elderly parent health and the migration decisions of adult children: Evidence from rural China[END_REF], on the other hand, showed that the average number of years of education of household residents decreases the migration probability.
36. Other studies also found a significant positive relationship between land and migration in China (see e.g. [START_REF] Rozelle | Migration, remittances, and agricultural productivity in China[END_REF]; [START_REF] Taylor | Migration and incomes in source communities: A new economics of migration perspective from China[END_REF]).
37. Tsegai (2007) reported a positive but insignificant effect of the mean age of the household's adult members on the migration decision of Ghanaian households.
The effect of pre-school children on the migration of the parent may also be explained by the fact that this effect is migrant-gender-related. The majority (62%) of married migrants in the 2009 RUMiC-RHS dataset are males, while studies suggest that Chinese fathers are less involved with the care activities of these children (de Bruin and Liu, 2020). Indeed, [START_REF] Li | The determinants of temporary rural-to-urban migration in china[END_REF] found that more pre-school children in the household significantly decrease the migration of mothers but not that of fathers. However, we find that the migration probability is insensitive to the presence of dependent members who are aged over 60 or are disabled. Finally, in line with other studies (e.g., [START_REF] Rozelle | Migration, remittances, and agricultural productivity in China[END_REF]; [START_REF] Taylor | Migration and incomes in source communities: A new economics of migration perspective from China[END_REF]; [START_REF] Mckenzie | Network effects and the dynamics of migration and inequality: theory and evidence from mexico[END_REF]), we find that network variables, in terms of the share of migrants from the same village and the assistance of the village in finding work outside of the village, are positively associated with the migration probability.

The income and child human capital equations

Although the income and child human capital equations are incidental to getting consistent estimates of the structural migration equation, they deserve to be presented. Table 2.5.2 and Table 2.5.3 show the estimates for these equations corrected for sample selection bias. Overall, the coefficients exhibit the expected signs. 38 More years of education, a larger household size and a larger household labor force significantly increase a household's income. Richer households are also able to secure higher incomes. Access to formal credit through banks or credit unions significantly increases the household's income only for non-migrant households. Some services provided at the village level also significantly allow households to achieve higher incomes, particularly when the village provides a unified drainage and irrigation system or a unified production material purchasing service. The significant and positive coefficients on the regional dummies suggest that households living in the eastern and central regions earn substantially higher incomes compared to those living in the western regions. More importantly, we find that the rainfall coefficient of variation has a negative effect on the household total net income. This effect is significant at the conventional 5% level in the case where the household has no migrants, and at the 10% level if the household has a migrating parent. This means that facing riskier distributions of rainfall increases the likelihood of getting lower incomes. In particular, the income risk reduces the household income by around 8.6% if the household has no migrants in 2008 and by only 5.8% if the household has one migrating parent. In fact, the riskiness of the environment is expected to induce a decrease in the agricultural income and an increase in the amount of remittances; however, the amount of remittances is unlikely to cover all the loss generated by the increase in risk levels.

38. All explanatory variables are those of 2008.
Similarly, we find that a higher degree of uncertainty in the distribution of rainfall induces a decrease in children's school math test scores, even though the estimates are not statistically significant because of the high levels of standard errors. The effect of risk can be twofold. A negative effect is expected through the decrease in the agricultural income and the change in parental behavior (quantity and/or quality of time), while a positive effect is expected through the increase in remittances and the change in children's time use. 39 Moreover, males with no migrant parents, older children and wealthier children whose parents are migrants get lower scores. However, children with no siblings (compared to those with two siblings), those with more educated parents and those with a migrant parent but with more female adults in the household have higher scores. Finally, given the high standard errors of the selection variables in the different estimations, we cannot reject the hypothesis that the error terms of the migration and the outcome equations are uncorrelated. The income and child human capital regressions do not seem to suffer from the problem of selection on unobservable characteristics. A similar result has been found in related studies. Zhu (2002), for example, found non-significant coefficients on the selection variables in the case of the Chinese male and female migrants' samples and in the female non-migrants' sample.

Conclusion

In this paper, we investigate the ability of rural households to adapt to aggregate risk using migration. We particularly explore how income differentials interact with the risk-coping motive to shape the final migration decision. Understanding such effects is relevant for policy making in light of the recent increasing problems of climate change. To do so, we have established a model of migration behavior under agricultural income risk, where the rural household's utility depends on its income and its children's human capital. We show, theoretically, that when migration is used as a coping strategy, the household's likelihood to migrate still depends on the household income differential; however, differently from their traditional definition, expected income differentials are now additionally determined by the agricultural income risk the household is facing. We estimated this model using data from China, and we employed a switching regression model with endogenous switching in order to account for the possibility that rural-to-urban migration may be a self-selection mechanism. We show that the incidence of migration as a risk coping strategy is relatively higher for households with a positive expected income differential, compared to those with a negative one. However, this effect diminishes with higher levels of income differential. Our results suggest that part of the households with negative expected income differentials may fail to optimally cope with aggregate income risks, which may reinforce poverty and inequalities among rural households. Hence the need for policies that reduce the vulnerabilities of these households to aggregate risks. Our model also allows us to test how the marginal utility of income changes with child human capital levels, where school test scores are used as a measure of the child human capital. In the context considered here, we find evidence for a decrease in the household's marginal utility of income following an increase in the child's human capital level, suggesting that households are correlation averse.
This result suggests that, considering the framework that we assume in this paper, the best specification of the utility function to consider is the non-separability between the household's earnings and the household's children's human capital. More precisely, the child human capital should be modeled as its monetary equivalent in the household's utility function instead of its measured value. This result also means that, in the case of an agricultural income risk, each additional monetary unit is more enjoyable when children's school test scores are lower. We can, therefore, expect that, if the income differential is positive and migration is used as a risk coping strategy, households whose children have lower test scores may be more likely to send a parent for migration, compared to households whose children have higher test scores.

Our findings raise some interesting questions for future research. We performed our analysis using an aggregate measure of income riskiness; however, it would be interesting to explore whether idiosyncratic measures of the household income risk have the same effects. Examples of these measures include the variance of the household income ([START_REF] Carroll | How does future income affect current consumption?[END_REF]; Carroll and Samwick, 1998), the variance of the residuals in the household income regression (Jalan and Ravallion, 2001; [START_REF] Guariglia | Consumption, habit formation, and precautionary saving: evidence from the british household panel survey[END_REF]), or subjective measures of income uncertainty (see e.g. [START_REF] Guiso | An empirical analysis of earnings and employment risk[END_REF]; Lusardi, 1997). Our paper is also a first step in looking at how changes in the child human capital level affect the marginal utility of income. We do so using data from rural China and school test scores as a measure of the child human capital. Further research is needed to validate these results using different samples of children and adopting different measures of the child human capital, notably health measures.

Using our assumptions, $\Delta U$ (i.e. $u(Y_0 + \hat{\Delta}_y, Z_0 + \hat{\Delta}_z) - u(Y_0, Z_0)$) can be approximated by

$\Delta U \approx \hat{\Delta}_y u^{(1,0)} + \hat{\Delta}_z u^{(0,1)} + \frac{1}{2}(\hat{\Delta}_y)^2 u^{(2,0)} + \frac{1}{2}(\hat{\Delta}_z)^2 u^{(0,2)} + \hat{\Delta}_y \hat{\Delta}_z u^{(1,1)}$.

The probability of migration, $q$, is such that:

$q = P(\Delta U \ge \bar{\alpha}) \Leftrightarrow q = P(\epsilon \le -\beta_0 X + \Delta U) \Leftrightarrow q = F[-\beta_0 X + \Delta U] \Leftrightarrow q = F[\beta_1 \hat{\Delta}_y + \beta_2 \hat{\Delta}_y^2 + \beta_3 \hat{\Delta}_z + \beta_4 \hat{\Delta}_z^2 + \beta_5 \hat{\Delta}_y \hat{\Delta}_z - \beta_0 X]$

with $\beta_1 = u^{(1,0)}$, $\beta_2 = \frac{1}{2}u^{(2,0)}$, $\beta_3 = u^{(0,1)}$, $\beta_4 = \frac{1}{2}u^{(0,2)}$, $\beta_5 = u^{(1,1)}$.

2.B Rainfall and agriculture in China

There are two types of farming in China: rain-fed farming, which depends only on natural rainfall, and irrigated farming, where irrigation water is used in addition to rainwater. For the latter, rainwater contributes more than half of the needed water. Hence, in 2007 about 57% of the total available water for agricultural use in China came from rainfall, while only about 43% came from irrigation water. Breaking down these values for the different regions of China shows that, except for the Northwest areas, more than 50% of the agricultural water used in all regions comes from precipitation. Part of this is also linked to the fact that farmers usually do not fully benefit from the irrigation water supplied to them. According to a report by the Ministry of Water Resources, up to 55% of irrigation water is wasted during delivery before reaching its final point.
Given the large dependence of agricultural production on rainfall water, droughts are among the biggest problems for agricultural production in China, especially in areas with limited irrigation systems such as the North-East and the North-West. Droughts unavoidably result in considerable reductions of grain production, up to 150 million kg per year. Corn production, for instance, can be reduced to as much as 20-50% of potential yield when comparing wet and drought years [START_REF] Peng | Water resources strategy and agricultural development in China[END_REF]. As a result, optimal yields highly depend on the distribution of precipitation and the available soil moisture during the growth, flowering and filling stages of crops [START_REF] Zheng | The climatic resources for wheat production in China[END_REF].

Beyond their direct impact on human capital outcomes, the potential effects of an environmental policy may also work as reinforcement to other investments throughout childhood, through the potentially dynamic complementarities in human capital accumulation (Cunha and Heckman, 2007). 3

In recent decades, China has witnessed rapid urbanization and industrialization processes. This accelerated development of its economy has been associated with increasing ambient air pollution levels, raising concerns about the negative consequences on public health (Deng et al., 2015; Chen et al., 2017a), happiness and life satisfaction (Zhang et al., 2017; [START_REF] Zheng | Air pollution lowers chinese urbanites' expressed happiness on social media[END_REF]) and economic activity (Chang et al., 2019; He et al., 2019; Li and Zhang, 2019). 4

In an attempt to control the high levels of pollutant emissions and, particularly, sulfur dioxide (SO2), the Chinese government implemented, in 1998, the Two Control Zones (TCZ) policy. The latter imposed stringent environmental restrictions in various locations exceeding nationally mandated pollution standards. In this paper, we focus on the TCZ policy and investigate how it affected the long-term educational performance, measured 15 years later, of those exposed to the effects of its implementation in their year of birth (in utero and first year of life). Moreover, we compare the effects of exposure to the TCZ policy between older and younger children at the time of its implementation. This question is of clear policy relevance. Additional measures may be needed for the age group gaining less from the environmental policy, in order to reinforce its initial benefits. Finally, even though we focus on the overall effect of the TCZ environmental policy on long-term education outcomes, inclusive of any possible channel, we examine its particular effect that goes through the impact on air quality.

To carry out our analysis, we draw individual data from the fifth wave of the China Household Income Project survey (2013) and focus on three measures of educational performance at the age of 15. One challenge to our empirical design is the non-randomness in the assignment of the TCZ designation. However, we show that most of the observable characteristics of the TCZ and non-TCZ counties follow the same trends in the pre-TCZ period, suggesting that the TCZ status is less likely to be confounded by differential trends in unobservable characteristics. Our main results show positive and significant effects of exposure to the TCZ policy, in one's year of birth, on children's long-term educational outcomes, 15 years later.
Particularly, we find that, in the absence of the TCZ policy, individuals born in counties designated as TCZ would have been less likely to obtain high high-school entrance exam scores and, therefore, less likely to attend a high-quality high school. They would have also been less likely to choose an academic high school, instead of a specialized/technical one, which focuses on manual labor training. Results from our correlated random effects specification indicate that exposure to the TCZ policy, between conception and age 1, is associated with a 12.5 percentage point increase in the probability of joining a higher-quality high school and an 18.7 percentage point increase in the probability of joining an academic high school. Projecting forward, we also suggest better future higher-education and labor-market outcomes, stemming from being able to attend academic and better-quality high schools. Moreover, we find that important benefits associated with the TCZ policy relate to girls and to children born to low-educated fathers. These results suggest that environmental regulations may be used as a possible mechanism to reduce disparities in educational performance. However, we find no differential impacts between children exposed to the TCZ policy implementation at ages 1-5, in terms of the probability of attending a higher-quality high school or an academic high school, although that does not imply the absence of positive effects for these age cohorts. In the case of beneficial effects at these slightly older ages, it suggests that the benefits of exposure to the TCZ policy are larger than the ones studied here, extending to a wider age group during early life.

This paper contributes to the previous literature in three different ways. First, we add to the literature relating early-life pollution exposure to long-term human capital outcomes, particularly educational outcomes. Few studies have assessed this effect in the developing-world context (Bharadwaj et al., 2017; [START_REF] Rosales-Rueda | The persistent effects of early-life exposure to air pollution evidence from the indonesian forest fires[END_REF]; [START_REF] Molina | Pollution, ability, and gender-specific investment responses to shocks[END_REF]). 5 We further this research by considering the persistent effects of exposure to another common pollutant, sulfur dioxide (SO2), on long-term educational performance outcomes. Moreover, previous studies on the relationship between air quality and education in China have almost exclusively focused on the short-term effects of air pollution. 6 No study, to our knowledge, has examined how the air quality effects may persist in the long term, in the Chinese context. Second, this paper contributes to an even smaller body of research that investigates the overall long-term effects of environmental regulations on educational performance ([START_REF] Nilsson | The long-term effects of early childhood lead exposure: Evidence from the phase-out of leaded gasoline[END_REF]; Isen et al., 2017). Isen et al. (2017) reported the existence of a relationship between the Clean Air Act (CAA) of 1970 and later-life earnings, suggesting that this effect may have operated, in part, through the effects on total years of schooling in the United States.
[START_REF] Nilsson | The long-term effects of early childhood lead exposure: Evidence from the phase-out of leaded gasoline[END_REF], however, exploited geographical variation in air lead levels, induced by the gasoline lead regulations of the Swedish government, to investigate the effect of reduced early-childhood lead exposure on long-run education outcomes. These studies have focused on developed countries, where pollution levels are considered relatively small compared to those of China, 7 and where children enjoy different social and economic conditions. 8 In the case of non-linear relationships between air quality or parental economic conditions and long-term educational outcomes, the marginal effects may depend on their initial values. Therefore, it may be misleading to rely on developed countries' studies to predict the effects of air pollution regulations in a developing-country setting. No study, to the best of our knowledge, has investigated the long-term effects of early-life exposure to an environmental regulation in a developing-country context. We are providing, therefore, additional empirical support for the "fetal origins" hypothesis and its extension into the early-life determinants of long-run outcomes, by examining, in China, the long-run effects of the TCZ policy. Finally, several studies have documented the TCZ policy's different economic costs for the country. 9 Investigating its benefits and showing its effectiveness compared to its costs will, therefore, be helpful in maintaining and motivating the implementation of such environmental regulations in the developing world. [START_REF] Tanaka | Environmental regulations on air pollution in china and their impact on infant mortality[END_REF] showed the contemporaneous improvements in infant health, inducing a reduction in infant mortality, following the TCZ policy implementation. However, focusing on short-term gains may underestimate the true total benefits of an environmental policy, particularly since some effects, such as the effects on mental development, may first become apparent only later on. We show, in this paper, how the TCZ policy might improve long-term mental and cognitive ability development for the surviving cohorts.

The remainder of the paper is structured as follows. Section 2 presents the TCZ policy's background and implementation. Section 3 suggests a conceptual framework describing the relationship between the environmental policy and educational outcomes. Empirical models are detailed in Section 4. Section 5 describes the data, and results are presented in Section 6. Section 7 provides some robustness checks, and Section 8 concludes.

The Two Control Zones policy

Alongside years of China's fast economic growth, air pollution increased to levels harmful to human health. For example, total SO2 emissions went from 18.4 million tons in 1990 to 23.7 million tons in 1995 (SEPA, 1996). In response, the Chinese government developed a number of environmental regulatory laws and policies. One of them is the Air Pollution Prevention and Control Law of the People's Republic of China (APPCL), enacted in 1987 and amended in 1995 in order to include a section about the regulation of pollutant emissions and coal combustion [START_REF] Hao | Air pollution and its control in china[END_REF]. However, the 1995 APPCL was poorly enforced and had low efficacy in reducing air pollution, leading to the proposition of a new regional scheme, the so-called Two Control Zones policy.
The latter was approved and put into effect in January 1998 (State Council, 1998).

Criteria

Targeted locations of the TCZ policy were chosen based on records of acid rain and sulfur dioxide from previous years. Particularly, a location was designated as an SO2 pollution control zone if (1) the average annual ambient SO2 concentrations had been higher than the national Class II standard (i.e., 60 µg/m³), (2) the daily average concentrations of SO2 had been larger than the Class III standard (i.e., 100 µg/m³), or (3) the SO2 emissions were important. A location was designated as an acid rain control zone if (1) the average annual precipitation's pH values were equal to or less than 4.5, (2) the sulfate deposition exceeded the critical load, or (3) the SO2 emissions were significant. Areas designated as TCZ, at the second or the third administrative level, are reported in a document published by the State Council, known as the "Official Reply to the State Council Concerning Acid Rain Control Areas and Sulfur Dioxide Pollution Control Areas". Particularly, among 333 prefecture cities across 27 provinces in China, 175 were labeled as TCZ, which, according to [START_REF] Hao | Plotting of acid rain and sulfur dioxide pollution control zones and integrated control planning in china[END_REF], accounted for 11.4% of China's territory, 40.6% of its population, 62.4% of its GDP, and 58.9% of its total SO2 emissions in 1995. Geographically, the acid rain control zones are concentrated in the south, while SO2 pollution control areas are mainly located in the north (see Figure 3.A.1).

Enforcement

The TCZ policy set several measures that limit the use and production of high-sulfur coal and promote the establishment of clean coal technology (see, e.g., [START_REF] Hao | Plotting of acid rain and sulfur dioxide pollution control zones and integrated control planning in china[END_REF]). The establishment of the National Environmental Protection Bureau (NEPB) and the limited influence of local governments in setting the policy directives for their own cities reduce the concern about obtaining biased results due to a poorly implemented policy.

Effectiveness

The TCZ regulatory actions were proven effective in reducing pollution emissions at the national level (see, e.g., [START_REF] Hao | Plotting of acid rain and sulfur dioxide pollution control zones and integrated control planning in china[END_REF]; [START_REF] Yang | Air pollution control strategy for china's power sector[END_REF]). However, air pollution and acid rain fell substantially more in the designated TCZ locations. For example, the number of TCZ cities reaching the SO2 concentration standards (i.e., the Class II standard) rose from 81 in 1997 to 93 in 1998 and to 98 in 1999 (He et al., 2002). SO2 emissions also dropped by about 3 million tons in the TCZ locations, with about 71% of factories emitting more than 100 tons of pollutants per year decreasing their SO2 emissions to the standard between 1998 and 2000 [START_REF] He | Urban air pollution in china: current status, characteristics, and progress[END_REF]. This translated into a substantial fall in pollution emissions and improved overall air quality in TCZ locations.

Conceptual framework

In this section, we explore the possible channels through which the TCZ policy may have affected long-term human capital formation. The first channel concerns changes in air quality, which itself can affect long-term outcomes in two different ways.
First, damage to neurological development and physical health from early-life exposure to air pollution is biologically plausible. The reduced oxygen or the cellular and organ damage suffered by the exposed mother may result in lower nutrition and oxygen flows to the fetus, directly impairing brain development [START_REF] Sunyer | Pre-natal brain development as a target for urban air pollution[END_REF]. Such factors may even provoke a number of physical problems, such as a shorter gestation period or a low weight at birth ([START_REF] Dejmek | Fetal growth and maternal exposure to particulate matter during pregnancy[END_REF]; Liu et al., 2019). The latter was shown to have long-term effects on different outcomes, including IQ and education ([START_REF] Mccormick | The behavioral and emotional well-being of school-age children with different birth weights[END_REF]; [START_REF] Black | From the cradle to the labor market? the effect of birth weight on adult outcomes[END_REF]), poorer language and social skills [START_REF] Hack | The effect of very low birth weight and social risk on neurocognitive abilities at school age[END_REF], and behavioral problems such as increased attention deficit [START_REF] Pharoah | Prevalence of behaviour disorders in low birthweight infants[END_REF]. Pollutants do not affect fetal development only through their effect on the mother's health but can also migrate directly to the fetus' system through the bloodstream, causing potential damage to fetal neural cells and to the respiratory and cardiovascular systems. The latter potential health complications may, for example, have future effects on school attendance. The TCZ policy may, therefore, have affected long-term educational performance in the targeted locations by providing better air quality early in life. It was indeed shown that its implementation is associated with a lower incidence of low weights at birth [START_REF] Tanaka | Environmental regulations on air pollution in china and their impact on infant mortality[END_REF]. A more direct and separate effect, protecting brain development and the cardiovascular-respiratory system, is also possible.

Second, air pollution reductions in China were shown to increase the productivity and labor supply of workers ([START_REF] Fan | The impact of air pollution on labor supply in china[END_REF]; Chang et al., 2019; He et al., 2019), suggesting that parents may have been able to earn higher incomes following the TCZ policy. It has been shown that better nutrition and access to health care during pregnancy and early childhood is a stimulating factor for better long-term cognitive abilities ([START_REF] Martorell | Improved nutrition in the first 1000 days and adult human capital and health[END_REF]; [START_REF] Currie | Child health as human capital[END_REF]). As family income is one of the main determinants of parental inputs, the long-term human capital outcomes of the exposed children may further be affected if early-childhood parental investments are altered as a consequence of the changes in income.

The second channel concerns a direct income effect that is totally independent of changes in air quality. According to [START_REF] Hao | Plotting of acid rain and sulfur dioxide pollution control zones and integrated control planning in china[END_REF], collieries producing more than 50 million tons of high-sulfur coal and a number of small power plants had been closed by the end of 1999.
Decreases in employment by newly established foreign firms in TCZ locations have also been documented [START_REF] Cai | Does environmental regulation drive away inbound foreign direct investment? evidence from a quasi-natural experiment in china[END_REF]. The policy implementation may, therefore, have negatively affected the labor market opportunities and earnings capacity of some parents, particularly those working in the power industry. If parental investments in children decreased as a consequence of this effect, the long-term educational outcomes of the exposed children would have been negatively affected. [Footnote: In the case where the reduced earnings of parents are accompanied by a higher degree of stress and conflict at home, this may further reinforce the negative effects of the decrease in income, as exposure to high levels of cortisol in utero negatively affects children's cognition, health, and educational attainment later in life (Aizer et al., 2016).]

Given the above potential channels, we present a simplified framework of how the TCZ policy may have affected the educational performance of early-life exposed children. An individual's educational performance, E, is a function of cognitive ability, A, health stock, h, individual characteristics, X, family characteristics, F, and county characteristics, C: [Footnote: An extensive body of literature has shown that cognitive ability is strongly associated with a variety of educational outcomes, including standardized test scores, school completion, post-secondary education participation and other measures of student performance [START_REF] Marks | Education, social background and cognitive ability: The decline of the social[END_REF].]

E = e(A, h(I_1, I_2), X, F, C),

where I_1 represents the inputs into the child's stock of health from early childhood, and I_2 the inputs during the following years until the outcomes are observed. [Footnote: Modeling the exact structure of these relationships is beyond the scope of this paper, as they are much more complex and may involve other components that are not presented here. Given the data availability, we focus on the reduced-form relationship between early-childhood variation in inputs into health and cognitive ability and long-term educational performance.] Cognitive ability also depends on inputs in early childhood, i_1, and in later life, i_2, on the health stock, and on other characteristics. Formally, A = a(i_1, i_2, h(I_1, I_2), X, F, C). Hence, educational performance is given by

E = e(a(i_1, i_2, h(I_1, I_2), X, F, C), h(I_1, I_2), X, F, C)    (3.1)

Following the implementation of the TCZ policy, we assume that cohorts born in TCZ-designated counties right after the policy implementation should have better quality inputs, in terms of lower air pollution and probably lower parental income, compared to those born just before the TCZ policy in the same counties. However, both cohorts would be exposed to the same environment at older ages. We can therefore assume that, following the implementation of the TCZ policy, there will be variations in I_j, through changes in air quality or in parental income, as shown above (where j = 1, 2 for cohorts born just after the TCZ policy in TCZ counties, and j = 2 for cohorts born just before the TCZ policy in TCZ counties). Comparing these two groups allows us to detect the additional effect of changes to I_1 investments, following the TCZ policy, on educational performance at older ages:

∂E/∂I_1 = (∂e/∂a)(∂a/∂h)(∂h/∂I_1) + (∂e/∂h)(∂h/∂I_1)    (3.2)

Using this reduced-form model, a variation in I_1 affects the health stock, h, which itself affects educational performance via two channels: either directly ((∂e/∂h)(∂h/∂I_1)) or indirectly through its effect on cognitive ability ((∂e/∂a)(∂a/∂h)(∂h/∂I_1)). In what follows, we first identify the overall effect of the TCZ environmental policy on the future educational performance of children exposed during early life (∂E/∂I_1). Then, we focus on the effect of the policy operating through pollution levels.

Empirical framework

Long-term effects of exposure to the TCZ policy

The main goal of this study is to estimate the overall long-term effects of exposure to an environmental regulation, the TCZ policy, at a very early stage of life (in utero and before age 1). We adopt a difference-in-differences (DID) approach, which estimates the effects on long-term educational performance for cohorts born in TCZ counties after the policy's implementation, relative to the counterfactual:

Y_{ict+15} = β_0 + β_1 (TCZ_c × Post_t) + θ_1 X_i + θ_2 Z_{ct} + γ_c + κ_{pt} + u_{ict}    (3.3)

where the outcome Y_{ict+15} is a measure of educational performance, 15 years later, for individual i, born in county c in year t. In the context of our paper, using education measures instead of earnings to investigate the long-term effects of an environmental regulation presents several advantages. First, reported education is less likely to be subject to serious measurement errors than reported earnings, especially in rural areas where many workers are self-employed. Second, contrary to using earnings, life-cycle biases are less likely to affect estimations when using education measures, as education is generally completed in the early or mid-twenties [START_REF] Black | Recent developments in intergenerational mobility, handbook of labor economics[END_REF].

TCZ_c is an indicator variable equal to 1 if county c was designated as TCZ in 1998, and Post_t is an indicator that equals 1 for the years after the TCZ was implemented. The interaction term equals 1 for TCZ counties after 1997. β_1 is the key parameter that estimates the DID long-term effect of the TCZ policy. X_i is a vector of time-varying individual characteristics, including each individual i's parental characteristics, that may affect educational performance at older ages. Individual characteristics include gender, number of siblings, an ethnicity indicator and an urban hukou indicator. Parental characteristics include indicators for the mother's and father's levels of education (five separate indicators for each parent), the age of each parent at birth and continuous measures of both parents' education. Z_{ct} is a vector of weather variables and county characteristics in the year of birth. Weather controls include a second-degree polynomial in precipitation and seven variables for 10-Fahrenheit-degree temperature bins. Each 10-degree bin counts the number of days with the maximum temperature falling in the given bin: 33–39, 40–49, 50–59, 60–69, 70–79, 80–89 and 90–99. Another variable counting the number of days with a maximum temperature ≥ 100 is also added. We include precipitation and temperature variables since weather conditions may interfere with air pollution levels, with rain washing away air pollutants and temperature affecting their formation,
while it is also well documented that rain and temperature early in life affect later-life outcomes (Maccini and Yang, 2009; [START_REF] Hu | Too hot to hold: the effects of high temperatures during pregnancy on birth weight and adult welfare outcomes[END_REF]). Finally, county characteristics in the year of birth include a demographic variable (total population) and an economic variable (GDP per capita), used to control for their impact on both air pollution levels and long-term educational performance. [Footnote: County characteristics also allow us to control for selection into fertility.] γ_c are birth-county fixed effects that absorb any time-invariant, unobserved determinants of educational performance for individuals born in a specific county. Finally, κ_{pt} are province-by-year-of-birth fixed effects, which non-parametrically absorb any time-varying determinants of long-term educational performance for individuals born in a specific year and province. This helps purge, for example, any potential effects induced by policy changes at the province level. In all of our estimations, we cluster the standard errors at the county level to allow for county-specific correlated errors over time.

Early childhood is known as a critical and sensitive period of development. Moreover, the dynamic complementarities hypothesis suggests that earlier childhood investment in disadvantaged children is more effective for subsequent outcomes than later investments. Therefore, exposure to the beneficial effects of the environmental policy earlier in childhood may have had a larger positive effect than exposure later on. However, if sensitivity to air quality and to changes in parental income is higher at older ages, later childhood exposure to the environmental policy may be more, or at least as, effective as exposure in early childhood. This question is of clear policy relevance. Additional educational or social measures may be needed for the age group gaining less, in terms of educational performance, in order to reinforce the initial benefits of the environmental policy.

Eq. (3.3) focuses on the long-term effects of exposure to the TCZ implementation between conception and age 1, and does not allow us to test the variation in these effects across cohorts by age at the time of the policy's implementation. To do so, we expand our sample to children born in years 1992–1999. All individuals born in 1999 in TCZ counties should have been exposed to the impacts of the TCZ policy implementation from conception onward. Those born in 1998 benefit from the TCZ policy in their year of birth; they may, thus, be partially exposed to its impacts in utero and fully exposed from birth onward. Those born in years before 1998 experience the TCZ policy effects from age 1 or older, depending on their year of birth. The interaction term in Eq. (3.3) is, therefore, replaced by all the possible interactions between birth-year indicators and the TCZ indicator, inducing the following distributed lag model:

Y_{ict+15} = φ_0 + Σ_{k=1992}^{1999} φ_k (TCZ_c × 1(year = k)) + θ_1 X_i + θ_2 Z_{ct} + γ_c + κ_{pt} + η_{ict}    (3.4)

where 1(·) is an indicator variable that equals 1 if the year of birth is k. The coefficients of interest are the φ_k's, and as not all of them are identified in the presence of county fixed effects, we normalize the coefficient for the 1997 cohort to zero (φ_1997 = 0).
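To fix ideas, the following minimal sketch shows how the DID specification in Eq. (3.3) and the distributed lag model in Eq. (3.4) could be taken to data in Python with statsmodels. The dataframe and its column names (tcz, post, birth_year, etc.) are hypothetical placeholders, the controls are abbreviated, a complete-case dataframe is assumed, and a linear probability version is shown for simplicity rather than the exact estimators used in the paper.

```python
import statsmodels.formula.api as smf

# df: one row per child (hypothetical column names).
#   hq_hs  : 1 if the child attended a high quality high school
#   tcz    : 1 if the birth county was designated TCZ in 1998
#   post   : 1 if born after 1997
# County fixed effects (gamma_c) and province-by-birth-year fixed
# effects (kappa_pt) enter through the C() terms; X_i and Z_ct are
# abbreviated to a few representative controls.
did = smf.ols(
    "hq_hs ~ tcz:post + male + n_siblings + urban_hukou"
    " + mother_yrs_educ + father_yrs_educ"
    " + precip + I(precip**2) + gdp_pc + population"
    " + C(county) + C(province):C(birth_year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["county"]})
print(did.params["tcz:post"])  # beta_1, the DID effect in Eq. (3.3)

# Distributed lag version of Eq. (3.4): interact TCZ with birth-year
# dummies, normalizing the 1997 cohort to zero as in the text.
lags = smf.ols(
    "hq_hs ~ tcz:C(birth_year, Treatment(reference=1997))"
    " + C(county) + C(province):C(birth_year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["county"]})
```

Clustering at the county level mirrors the paper's treatment of county-specific correlated errors; clustering at the province level instead (as in the robustness checks) only requires swapping the groups argument.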
The other φ_k coefficients measure, therefore, the effect of the TCZ policy on individuals who were age (1998 − k) in 1998, in TCZ counties, with respect to those exposed to the policy's implementation at age 1. Control variables are those from Eq. (3.3).

TCZ policy and changes in air quality

In the previous subsection, we looked at the overall net effects of early-life exposure to the TCZ environmental policy on long-term educational performance. Under the assumption that the TCZ designation is responsible for a variation in county-level air pollution, Section 3.3 shows how the TCZ policy may have affected long-term outcomes through its effect on air quality. Panel A of Figure 3.C.1 shows the trends in county-level annual means of SO2 concentration for TCZ and non-TCZ counties from 1980 to 2000. As expected, there is a notable difference between TCZ and non-TCZ counties in SO2 concentration levels: they are higher in TCZ locations. We also notice that, before the TCZ policy implementation, the two groups of counties had followed the same trend, showing increasing SO2 concentrations until 1996, the year following the implementation of the 1995 APPCL. And even though reductions in SO2 concentrations started in 1997, those occurring after the TCZ policy was put into effect appear to be more important. SO2 concentrations decreased for both groups of counties; however, the reduction in TCZ counties seems more sustained, especially between 1999 and 2000.

In order to estimate the specific effect of the TCZ policy that operates through reductions in SO2 concentrations, we build a two-step model. We first consider a difference-in-differences regression that measures the effect of the TCZ policy on SO2 concentration levels:

SO2_{ct} = λ_0 + λ_1 (TCZ_c × Post_t) + α_1 X_i + α_2 Z_{ct} + γ_c + κ_{pt} + ε_{ict}    (3.5)

where SO2_{ct} is the annual mean SO2 concentration in county c and year t, measured in µg/m³. The TCZ_c and Post_t indicators and the other control variables are as defined in Eq. (3.3). In the second step, we investigate the effect of the TCZ-induced changes in air quality, in children's county and year of birth, on their long-term outcomes, using the predicted concentrations of SO2 from Eq. (3.5):

Y_{ict+15} = δ_0 + δ_1 ŜO2_{ct} + τ_1 X_i + τ_2 Z_{ct} + γ_c + κ_{pt} + v_{ict}    (3.6)

Section 3.3 suggests that the TCZ policy may have affected children's outcomes in a way other than pollution reduction, by decreasing parental income. If this income effect was in place, then the results of Eq. (3.6) will be downwardly biased.

Data and Descriptive Statistics

Qualify for High School. Compulsory education in China includes a total of nine years of schooling in primary and junior middle schools, starting typically at the age of six. Additional education in high school is determined by the child's willingness to obtain more years of schooling. Children may self-select into this additional education, so that only students who think they can do well in high school will choose to continue their education. This category includes individuals who reported having more than 9 years of formal education. In the absence of this information, we use the reported educational level, including those who attended, or were attending at the end of 2013, a senior middle school, a vocational senior secondary school, a technical school, a specialized secondary school, or a higher level of education.

High School Entrance Exam Scores.
High schools in China select students based on their high school entrance exam (HSEE) score, a standardized test at the province level known as zhongkao in China. Students usually get tested in Chinese, Mathematics, English, Physics, Chemistry, Political Science and PE. Better quality high schools have higher threshold scores than lower quality high schools. While students opt for the best school among the ones they were accepted in, it is reasonable to assume that HSEE test scores correlate with high school quality. We follow [START_REF] Sun | Estimating the earnings returns to exam-measured unobserved ability in china's urban labor market: Evidence for 2002–2013[END_REF] in proxying the HSEE scores by the quality of high school. The CHIP13 asked respondents to report the type of their high school if they ever studied in high school. Six types are proposed: (1) national or provincial level key middle school, (2) city or district level key middle school, (3) county level or other key middle school, (4) non-key middle school, (5) specialized secondary school/vocational senior secondary school/technical school, (6) others. As more and better education resources are allocated to "key" and higher administrative level schools, compared to "non-key" and lower administrative level schools, we define the high quality high schools as those belonging to types (1), (2) and (3), assuming that children who succeed in attending these schools have better educational performance than their peers joining schools of types (4), (5) or (6).

Academic High School. Junior middle school graduates have the choice between joining an academic high school or a specialized/technical high school. Students wanting to attend an academic senior middle school must pass the high school entrance examination. Students also have the option to attend vocational and technical schools, which provide a variety of training programs to produce all kinds of skilled workers. This educational path is opted for either by choice or because students failed the zhongkao exam. We define children as attending an academic high school if they have ever attended a high school of type (1), (2), (3) or (4).

Air Pollution and TCZ Regulatory Status

We use SO2 concentration data from the satellite-based Aerosol Optical Depth (AOD) retrievals provided by the National Aeronautics and Space Administration (NASA). We particularly make use of the M2TMNXAER version 5.12.4 from the Modern-Era Retrospective Analysis for Research and Applications version 2 (MERRA-2). This product provides monthly AOD data since 1980 in grids of 0.5 degrees (about 50 km) latitude × 0.625 degrees (about 60 km) longitude. We aggregate the SO2 concentration data from the grid to the county level for each month, and then average to the annual level for each county (a construction sketched in code below). This pollution dataset has been used in a number of previous studies (e.g., [START_REF] Chen | The effect of air pollution on migration: evidence from china[END_REF]; Deschenes et al., 2020). To identify whether each county in the CHIP13 data was designated as TCZ in 1998, we rely on the official State Council document listing TCZ areas (see Appendix 3.B.1 for the rules applied).

Weather data come from the Global Historical Climatology Network Daily dataset (GHCN-D). In addition to climate records, the latitude and longitude of each station's location is provided. We rely on this location information to match each county represented in CHIP13 to the nearest weather station.

County Characteristics

County-level characteristics are available from EPS China Data (County and City Data, available at http://www.epschinadata.com/). These characteristics include county-year information on GDP per capita and total population (see Appendix 3.B.3 for details about how county characteristics are manipulated).
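The county-year panel construction described above can be sketched as follows. The tables and column names (grid, counties, stations, etc.) are hypothetical, and the grid-to-county assignment is assumed to have been done beforehand (e.g., via a point-in-polygon join against the county shapefile described in Appendix 3.B.2).

```python
import numpy as np

# grid: pandas DataFrame of monthly SO2 values for each grid cell, with
# each cell already assigned to a county.
# Hypothetical columns: county, year, month, so2.
county_month = grid.groupby(["county", "year", "month"], as_index=False)["so2"].mean()
county_year = county_month.groupby(["county", "year"], as_index=False)["so2"].mean()

# Great-circle (haversine) distance in kilometers.
def haversine(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * np.arcsin(np.sqrt(a))

# Match each county (by centroid coordinates) to its nearest GHCN-D station.
def nearest_station(county_row, stations):
    d = haversine(county_row["lat"], county_row["lon"],
                  stations["lat"], stations["lon"])
    return stations.loc[d.idxmin(), "station_id"]

counties["station_id"] = counties.apply(nearest_station, axis=1, stations=stations)
```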
Sample

For our main analysis, we restrict the sample to cohorts born in 1995–1999, the years immediately surrounding the policy implementation. [Footnote: Educational performance variables should be observed at age 15 for each child. Those born in 1999 were aged 15 in 2014, the year in which CHIP13 was conducted. The CHIP13 survey took place in July and August of 2014, while the zhongkao exam is usually taken in early June and, in some cases, in late June or even early July. Therefore, some of those born in 1999 may already be in possession of their zhongkao results at the time of the survey, while others are not. This raises the issue of whether the 1999 sample is sufficiently representative of the 1999 cohort. As the missing outcome data are due to age-related inability to participate in the measurement, because of the survey timing, and are unrelated to the educational outcomes, missingness should not be a problem. Consequently, despite the smaller number of observations for the 1999 cohort, it is included in the analyses because it is more informative to use all available data that contribute to the estimation of the effects.] This results in a sample of 3363 rural (70%) and urban (30%) children, living in 216 different counties. In order to match this data with other datasets at the county×year-of-birth level, the GB codes of counties were replaced by the county names (Ministry of Civil Affairs, 2014). An additional 5 observations were excluded from the sample, due to the lack of information on the county name corresponding to these observations' county GB codes, which reduces the sample to 3358 children living in 214 different counties. Around 61% of these children live in a county designated as TCZ. About 32% of the sample cannot be matched to the county-level pollution data, which is why we match them to the prefecture-level pollution data. Moreover, 9% of the sample is not matched to the corresponding county characteristics data. The final sample used for each estimation differs depending on the data availability for the considered outcome variable. About 36% of the sample have missing information about whether they joined high school, while about 56% have missing information about the quality of the high school. Considering all the missing values of the control variables, there are 1539 children in the remaining sample for the outcome variable about qualification for high school, and 1005 children for the outcomes about the quality of the high school attended.

To examine the air pollution channel through which the TCZ policy implementation may affect long-term outcomes, we use only observations with available county-level SO2 concentrations. Moreover, our empirical strategy allows us to include all observations with missing values for the outcome variables when running the first step of the model (Eq. (3.5)). This results in 1760 observations collected from 91 different counties for the first step. For the comparison of the long-term effects by age of exposure at the time of the TCZ implementation, we expand our sample to individuals born between 1992 and 1999. This leads to a sample of 5696 rural and urban children. 9 observations are additionally dropped for the lack of information on the county names corresponding to their GB codes, which results in a sample of 5687 children.
Considering all the missing values of the control variables, there are 3189 children in the remaining sample for the outcome variable about qualification for high school, and 2073 children for the outcomes about the quality of the high school attended.

Descriptive statistics

Using the sample of individuals born between 1992 and 1997, Table 3.C.1 presents the coefficients on the TCZ indicator from separate regressions for each of the observable characteristics. As the TCZ status was not randomly assigned, differences between the two types of localities in the pre-TCZ period are expected. In particular, we find that TCZ counties tend to include children with a lower number of siblings, higher mother's and father's education, and a higher probability of having an urban hukou rather than a rural one. There also seem to be differences in weather indicators between the two areas: TCZ counties have a higher average temperature and higher total precipitation per year.

Empirical results

Exposure to TCZ policy in early childhood

Table 3.6.1 presents our difference-in-differences estimates of the TCZ implementation's effects on the educational outcomes of those born in TCZ counties, 15 years later. The dependent variable in panel A is the likelihood of attending a high quality high school. Column (1), which controls for weather and county characteristics in addition to county and province-by-year-of-birth fixed effects, shows that those exposed to the TCZ policy implementation during conception and before age 1 are more likely to attend a higher quality high school, compared to the counterfactual. More controls are added across the columns, addressing the concern that time-varying individual characteristics that may explain the long-term outcomes are correlated with the TCZ status. Children's outcomes may differ by parents' education, for example, while systematic differences exist in mother's and father's education between TCZ and non-TCZ counties (Table 3.C.1). We find that exposure to the TCZ policy implementation early in life is associated with obtaining better exam results, and therefore a higher predicted probability of joining a higher quality high school, in models including parental characteristics (columns (2) and (3)) and child characteristics (column (4)).

Panel B of Table 3.6.1 provides results using, as the dependent variable, the likelihood of attending an academic high school relative to attending a specialized/technical high school. Estimates show that cohorts exposed to the TCZ implementation in utero and before age 1 are more likely to attend an academic high school. The effects are positive and significant across the different specifications adding more controls. Finally, panel C estimates the effect on the probability of high school enrollment (the likelihood of attending high school relative to dropping out, either by choice or because of exam failure). The estimate is negative when we only control for weather and county characteristics, but becomes positive when we further control for parental and child characteristics. Differently from the previous outcomes, the coefficients fail to be statistically significant. In sum, our results suggest that the majority of the effect of the TCZ policy operates through exam results and, therefore, the type of high school attended, rather than through qualification to enter high school.
The key identifying assumption for the validity of these results is that trends in outcomes between TCZ and non-TCZ counties would have evolved similarly in the absence of the policy, conditional on covariates. Table 3.C.3 presents, from Eq. (3.4), estimates of the interaction terms between the TCZ indicator and children's year-of-birth dummies, compared to that of the 1997 year dummy, using the high quality high school, academic high school and qualify for high school variables as dependent variables. These pairwise comparisons support the identification assumption for Eq. (3.3): as the trends before the TCZ implementation between the treatment and control cohorts are almost similar for the three outcomes, we can suggest that the trends after 1998 would have been similar without the effects of the TCZ policy. We also notice that, for children born in TCZ counties during the two years after the TCZ policy implementation, there are some improvements in the cohorts' outcomes. These variations suggest additional benefits of exposure to the implementation of an environmental regulation between conception and age 1, compared with cohorts exposed to the same regulation at age 1 and later.

Table 3.C.4 provides additional evidence for the validity of our identification assumption. TCZ status is not randomly assigned; it is determined, in part, based on emission levels and concentrations of the SO2 pollutant in each location. Therefore, it is possible that the estimated TCZ effects reflect impacts of the factors that condition air quality in the respective counties. In Table 3.C.4, we test for potential differences in trends of observable characteristics, following the TCZ policy implementation, between TCZ and non-TCZ counties. Each characteristic is used as the dependent variable in a separate regression, to compute the coefficient on the interaction term from Eq. (3.3). As we cannot test whether the treatment status covaries with unobservable characteristics, we assume that the absence of significant relations with observable characteristics implies an absence of significant correlations with unobservables [START_REF] Altonji | Selection on observed and unobserved variables: Assessing the effectiveness of catholic schools[END_REF]. Results suggest that trends between TCZ and non-TCZ counties, in most of the observable characteristics that may be determinants of the educational outcomes, are similar, further supporting our research design. Two covariates exhibit significant differences: number of siblings and precipitation. For this reason, we later replicate all regressions dropping the number of siblings variable, to test for any changes in estimates. The small correlation with local economic trends, measured by the coefficient on GDP per capita, suggests little probability that economic shocks would bias our results. Other policies affecting educational outcomes, implemented around the time of the TCZ policy, may confound our results. The 9th Five-Year Plan decided the different policies to implement for the years 1996–2000. We present, therefore, the balancing test results for the period 1996–1999 as a robustness check. The TCZ designation seems to balance the trends in important determinants of educational outcomes.

Incidental parameter problem: In our basic specification, we estimate a fixed effects model where we include county and year-of-birth dummies to capture unobserved heterogeneity.
However, considering the large number of per-county observations for some counties but the few observations for others, the fixed effects estimations may suffer from an incidental parameter problem [START_REF] Neyman | Consistent estimates based on partially consistent observations[END_REF]. To address this issue, we consider two alternative estimation approaches. First, we estimate the Correlated Random Effects (CRE) model for the three educational outcomes, by adding county-specific year averages of all time-varying covariates, rather than the county dummies, and the number of yearly data entries of each county [START_REF] Wooldridge | Correlated random effects models with unbalanced panels[END_REF]. The inclusion of the within-county means of all time-varying variables accounts for county time-invariant unobserved heterogeneity. This approach approximates a "fixed effects" strategy without estimating many incidental parameters. Table 3.6.2 reports the results. The TCZ policy implementation is positively and statistically significantly associated with a higher probability of attending an academic high school, for those exposed between conception and age 1. The effect on the probability of attending a higher quality high school is positive but not precisely estimated.

We reexamine the pre-existing trends using the CRE estimations of Eq. (3.4), for the three educational outcomes. Estimates of the interaction terms between each TCZ indicator and children's year-of-birth dummy, compared to that of the 1997 year dummy, are presented in Table 3.C.5. Comparisons of each average marginal effect (AME), for years before 1998, with that of 1997 are close to zero and non-significant for the high quality high school outcome (except for the year 1992) and the academic high school outcome, but not for the qualification for high school variable. [Footnote: These results suggest no differential effects of exposure to the TCZ policy starting at age 1, 2, 3, 4, 5 or 6, for the academic high school outcome. Moreover, even though we find no differential effects of exposure to the TCZ policy starting at age 1, 2, 3, 4 or 5 for the high quality high school outcome, we see an additional benefit for exposure at age 6, relative to exposure at age 1. We also see statistically significant additional beneficial effects of exposure at ages 2, 3 and 6, relative to exposure at age 1, in terms of the probability to continue non-compulsory education. If, in fact, these differential effects are due to the TCZ policy, then they may be explained by the decrease in the effectiveness of the policy starting from year 2003, as shown in Figure 3.C.1. Those exposed to the TCZ policy at age 2, 3 or 6 benefit from more years of protection from air pollution during school years than those exposed at age 1. This suggests the need for a good implementation of the environmental policy throughout childhood.] These pairwise comparisons suggest that the trends for the two outcomes, high quality high school and academic high school, before the TCZ implementation, between the TCZ and non-TCZ cohorts, are almost similar, supporting the identification assumption of Eq. (3.3). The table also shows that the estimates of the differences in AMEs, for those born in 1998–99, are positive and larger than zero, suggesting improvements in the outcomes of those born in TCZ counties during the two years after the TCZ policy implementation. [Footnote: We also reexamine, using the Probit CRE, the potential differences in trends, following the TCZ policy, between TCZ and non-TCZ counties, for the three covariates estimated, in Table 3.C.4, by the pooled Probit with county dummies (Male, Han and Urban hukou). We find almost the same results as in Table 3.C.4, in terms of significance.]

To determine how large these effects are, we compute the DID estimates of the TCZ implementation effect, measured in the probability of outcomes. Results are presented in Table 3.6.3. The average difference-in-differences in the probability of attending a high quality high school is 12.5 percentage points, while it reaches 18.7 percentage points for the probability of attending an academic high school. This implies that our results are consistent with an important effect of exposure to the TCZ intervention between conception and age 1.
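As a rough illustration of the CRE device described above, the following sketch adds county means of the time-varying covariates (a Mundlak-type correction) and the per-county number of observations to a pooled probit, in place of county dummies. Column names are hypothetical, only a few covariates are shown, and this is not the paper's exact specification.

```python
import statsmodels.formula.api as smf

# Mundlak/CRE device (sketch): county means of the time-varying
# covariates replace the county dummies of the fixed effects model.
tv = ["post", "precip", "gdp_pc", "population"]
cmeans = df.groupby("county")[tv].transform("mean").add_suffix("_cmean")
df_cre = df.join(cmeans)
df_cre["n_obs_county"] = df_cre.groupby("county")["tcz"].transform("size")

cre = smf.probit(
    "academic_hs ~ tcz + tcz:post + " + " + ".join(tv)
    + " + " + " + ".join(v + "_cmean" for v in tv)
    + " + n_obs_county",
    data=df_cre,
).fit(cov_type="cluster", cov_kwds={"groups": df_cre["county"]})

# Average marginal effects express the DID term in the probability
# metric, as in Tables 3.6.2 and 3.6.3.
print(cre.get_margeff().summary())
```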
Second, we present the Linear Probability Model (LPM) estimates in the Appendix (Table 3.C.6). Limitations of the LPM include getting some of the predicted probabilities outside of the unit interval. However, that does not necessarily compromise the usefulness of the estimates [START_REF] Wooldridge | Econometric analysis of cross section and panel data[END_REF]. Moreover, to account for the violation of the homoskedasticity assumption, we report cluster-robust standard errors, adjusted at the county level. We find similar results in terms of significance and signs of the effects. [Footnote: We reexamine the pre-existing trends using the LPM estimations of Eq. (3.4). Figure 3.C.2, displaying estimates of the interaction terms between the TCZ indicator and children's year-of-birth dummies, provides some evidence supporting the identification assumption for Eq. (3.3): the trends before the TCZ implementation between the TCZ and non-TCZ cohorts are almost similar for the two outcomes, high quality high school and academic high school, at the 90% confidence interval. Moreover, reexamining the potential differences in trends, following the TCZ policy, between TCZ and non-TCZ counties, for the three covariates (Male, Han and Urban hukou) in Table 3.C.4, using the LPM, yields non-significant estimates under the two samples (1992–1999 and 1996–1999).] The TCZ policy implementation, during the first year of life, is associated with higher probabilities of attending a higher quality high school and an academic high school. [Footnote: The DID estimates of the TCZ implementation effect, measured in the probability of outcomes, estimated by the LPM are higher than those computed with the CRE models.] [Footnote: It is also possible to overcome the incidental parameter problem using the analytical and jackknife bias corrections proposed by Fernández-Val and Weidner (2016). Their approach can be used in the case of both time and cross-sectional fixed effects, but weak dependence is imposed. Conditional maximum likelihood estimations can also be used; however, they impose conditional independence across time [START_REF] Wooldridge | Econometric analysis of cross section and panel data[END_REF].]

Mechanism: changes in air quality

The conditional mixed-process (CMP) technique, developed by [START_REF] Roodman | Fitting fully observed recursive mixed-process models with cmp[END_REF], is used to explore how the TCZ policy affected long-term educational outcomes through its effect on air pollution. [Footnote: The CMP technique allows building a framework similar to that of two-stage least squares, while allowing for different sample sizes in each step of the estimations. We were, therefore, able to use a much larger dataset, covering more counties, in the first step (Eq. (3.5)), to estimate the relationship between the TCZ implementation and SO2 concentration levels.]
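Roodman's cmp is a Stata command with no drop-in Python equivalent, but under simplifying assumptions the two-step logic of Eqs. (3.5)–(3.6) can be sketched as a generated-regressor procedure, with the caveat, noted in the code, that second-step standard errors should be bootstrapped. The table and column names (panel, so2, etc.) are hypothetical.

```python
import statsmodels.formula.api as smf

# Step 1 (Eq. 3.5): county-year panel, which may cover more counties
# than the child sample, as the CMP framework permits.
step1 = smf.ols(
    "so2 ~ tcz:post + precip + I(precip**2) + gdp_pc + population"
    " + C(county) + C(province):C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["county"]})

# Predicted SO2 in each child's county and year of birth (assumes the
# child data carries the same county-year covariates and that all
# counties appear in the step-1 panel).
df["so2_hat"] = step1.predict(df.rename(columns={"birth_year": "year"}))

# Step 2 (Eq. 3.6): long-term outcome regressed on predicted SO2.
step2 = smf.ols(
    "academic_hs ~ so2_hat + male + mother_yrs_educ + father_yrs_educ"
    " + C(county) + C(province):C(birth_year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["county"]})
# Caveat: so2_hat is a generated regressor, so the step-2 standard
# errors should be bootstrapped over both steps.
```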
Although the data on SO2 concentration are imperfect (albeit the best we can get, given the availability of this information), the coefficients λ_1 from Eq. (3.5) (first step) are negative and significant, suggesting that the TCZ policy may have reduced the county-level annual mean SO2 concentrations by 0.256–0.265 µg/m³, relative to non-TCZ counties, in the post-TCZ period. [Footnote: Note that these estimations include only observations with available county-level SO2 concentrations.] The validity of these results relies on the assumption that trends in SO2 concentrations between TCZ and non-TCZ counties are similar in the years prior to the TCZ policy implementation, conditional on covariates. [Footnote: Coefficients for years 1992 and 1993 are significantly negative, indicating that the differences in SO2 concentrations between TCZ and non-TCZ counties in our estimation sample are smaller in years 1992 and 1993, compared to that of 1997.] Estimates from Eq. (3.6) (second step) suggest that a 1 µg/m³ decrease in the annual SO2 concentration, in the year of birth, increases the predicted probabilities of attending an academic high school and a higher quality high school. The estimated effects are significant at the conventional levels.

Incidental parameter problem: Panel B of Table 3.6.4 reports results using the LPM to estimate the relationship between SO2 concentrations and educational outcomes (second step). Estimates are compatible with an increase in the probabilities of attending an academic high school and a higher quality high school, following a decrease in SO2 concentrations. However, these effects are not precisely estimated. [Footnote: It is also possible to extend the CRE approach to produce the predicted SO2 concentrations following the TCZ implementation, by relying on the two-stage control function approach, as suggested by [START_REF] Papke | Panel data methods for fractional response variables with an application to test pass rates[END_REF] and [START_REF] Lin | Testing and correcting for endogeneity in nonlinear unobserved effects models[END_REF]. We do not present this approach as it was not possible, in our case, to produce bootstrapped standard errors to correct for adding the first-step residuals in the second stage.]

Treatment effect heterogeneity

Our main results show positive effects of exposure to the TCZ policy in one's year of birth on children's long-term outcomes, positively affecting their educational performance. However, this average effect may hide important differences in the effects of the TCZ policy across subgroups. First, based on the son-preference culture rooted in China, males may already receive better initial protection from exposure to air pollution and probably a higher share of the parents' monetary child investment. Biological arguments, however, suggest that female fetuses are less physiologically sensitive to environmental shocks than male fetuses [START_REF] Tanaka | Environmental regulations on air pollution in china and their impact on infant mortality[END_REF]. To examine whether girls benefit more from the TCZ policy, we estimate our model for the subsamples of boys and girls. The DID estimates, measured in the probability of outcomes, are reported in Table 3.6.5. Panel A highlights that boys are statistically significantly more likely to get additional beneficial effects from being exposed to the TCZ policy between conception and age 1, in terms of having higher exam results and, thus, attending better quality high schools. Moreover, even though both girls and boys exposed to the TCZ policy implementation early in life are more likely to choose an academic high school rather than a specialized/technical one, most of the increase in this probability relates to the girls. This indicates that environmental policies may contribute to reducing the gender gap in education.
We further examine heterogeneity in the treatment effect by parents' education. First, the beneficial effects may be concentrated among the children of low-educated parents. Higher parental education may reflect greater knowledge and a higher socioeconomic status, allowing better ex-ante and ex-post protection against children's exposure to air pollution. However, low-educated mothers may be more likely to work outdoors, being more exposed to air pollution during pregnancies, while fetuses of lower educated parents may already suffer from lower initial health endowments, making them more vulnerable to air pollution. Second, effects may be lower for the children of low-educated parents if these parents have learned to avoid outdoor pollution during pregnancies and in the early life of children, thanks to their long-term exposure to air pollution. A second reason for a lower effect, for these children, is the negative impact of the TCZ policy on their parents' incomes (as explained in Section 3.3). Moreover, Becker et al. (2018) show that wealthy parents invest, on average, more in their children than poorer ones. Therefore, if, by the potentially dynamic complementarities in human capital accumulation, early benefits of the TCZ policy are reinforced by later investments, then children of wealthier parents will benefit more from the TCZ policy than children of poorer parents.

In panels B and C of Table 3.6.5, we report results from sub-samples divided by the father's and the mother's educational attainment, respectively. We find that benefits associated with the TCZ policy significantly relate to children of low educated fathers, in terms of better exam results and, therefore, attending a higher quality high school, while exposed children of highly educated mothers or fathers are more likely to get into an academic high school, instead of a specialized/technical one. In terms of magnitude, the benefits are larger for the relatively disadvantaged children, suggesting that the TCZ policy may have helped reduce educational disparities between the two groups of children. LPM results, presented in Table 3.C.7, support our previous estimates: exposure of boys and of children of low educated fathers to the TCZ policy, during their first year of life, is associated with a higher probability of attending a higher quality high school. [Notes to Table 3.6.5: Each entry reports the difference between the marginal effects (in probability metric) in the TCZ and non-TCZ counties (following the CRE estimation for the considered subsample). High (low) father's and mother's education is defined as having a number of years of schooling above or equal to (below) 9. See notes to Table 3.6.2.]

Projected higher education and wage effects

Given the available data, it is still too early to directly assess the long-term higher education and labor market effects of the TCZ environmental regulation. However, educational performance may be an important driver of these outcomes. A broad literature in economics shows that educational achievements are strong determinants of employment and earnings later in life ([START_REF] Chetty | How does your kindergarten classroom affect your earnings? evidence from project star[END_REF]; [START_REF] Currie | Early test scores, school quality and ses: Longrun effects on wage and employment outcomes[END_REF]).
It is plausible, therefore, to think that improvements in education outcomes translate into a better overall economic status later in life. Hence, we evaluate the relationship between the measures of educational performance employed in our analysis and later outcomes, using older cohorts from the same survey. To do so, we consider those born in 1990–94 (aged 19–23 when outcomes are observed) to estimate the effects of educational performance on higher education outcomes, while we study those born in 1979–85 (aged 28–34 when outcomes are observed) to get the effects on earnings. We focus on the following outcome variables:

National College Entrance Exam score. Every year, in order to join a higher education institution in China, high school 12th grade students must take the National College Entrance Exam (NCEE, or gaokao). It is considered a high-stakes exam as it is the one and only determinant of post-secondary education admission. Students choose one of two important tracks in their first year of high school, the arts track or the science track. The NCEE exam papers differ depending on the chosen track; however, mathematics, English and Chinese are compulsory exams for both tracks. The full score for the NCEE is set at 750 points. Respondents of CHIP13 were asked to report their NCEE scores.

Higher education institutions participation. Post-secondary education institutions select students depending on their NCEE score being higher than a threshold set by these institutions.

Earnings. This is the logarithm of the annual total income (total wage income, which may include various monetary subsidies, or net business income) in 2013, in Yuan. When the income is earned by more than one person, survey participants divide the total sum by person.

Table 3.6.6 shows that attending a high quality high school is associated with an increase of about 50 points in the NCEE score, an increase in the probability of having an NCEE score above the 75th percentile of the score distribution, and an increase in the probability of attending college/university. Controlling for individual and parental characteristics, the estimated coefficients on the NCEE scores become slightly lower but are still statistically significant at the 1% level. Given our results on the positive association between the TCZ policy and the probability of attending a higher quality high school, TCZ policy exposure in early childhood may have also positively affected higher education outcomes. Table 3.6.7 also reveals positive and significant relationships between both being in an academic high school and being in a high quality high school, and future earnings. Additional controls are added in each column. We also control for the years of education in columns (3)–(5), as the high quality high school and the academic high school variables may be affecting earnings through their effect on education level. However, the coefficients on these two variables remain positive and significant, suggesting a direct effect as well.
Attending a high quality high school increases future earnings by about 29.43% (column 5), while attending an academic high school, rather than a technical/specialized one, increases future earnings by about 18.77% (column 5). These results suggest that TCZ policy exposure in early childhood may have also positively affected earnings.

Robustness checks

In this section, we further explore the robustness of our results.

Selective migration: First, we examine whether the perceived improvements in air quality and changes in the labor market opportunities due to the TCZ implementation changed the characteristics of the population in TCZ counties, resulting also in changes in the characteristics of the children born there. If that were the case, part of the positive effects on the long-run cognitive outcomes we obtain may be driven by these changes in characteristics, instead of a causal impact from the early-life exposure to the TCZ policy. We show that this may not be the case. First, unlike in most other countries, permanent internal migration is a particularly complicated process in China. The hukou system relates each Chinese citizen to their place of birth and restricts the change of registered residence only to those meeting certain requirements, e.g., having a good education, getting a stable job or owning a permanent house in the destination. Therefore, there are two types of internal migrants in China. Those migrating without changing their hukou constitute the majority of internal migration in China and are known as the floating population, migrating usually repeatedly over short periods of time. The second type is those who migrate permanently, succeeding in changing their hukou. [Footnote: In our CHIP13 urban survey, only 3.78% of the surveyed individuals (and only 0.11% of those born between 1995–1999) had their current hukou out of the district/county in which they were being surveyed, suggesting a low proportion of migrants living in a county/district different from that mentioned in their hukou. On the other hand, only 10.96% of the respondents (and only 0.68% of those born between 1995–1999) reported that they had changed their hukou from rural/urban hukou to resident hukou; 26.66% (and only 0.83% of those born between 1995–1999) reported changing their hukou from rural to urban. In the CHIP13 rural survey, these proportions are even lower. Only 1.01% of the surveyed individuals (and only 0.02% of those born between 1995–1999) had their current hukou out of the district/county in which they were being surveyed. Only 9.86% of the respondents (and only 0.54% of those born between 1995–1999) reported that they had changed their hukou from rural/urban hukou to resident hukou, while 23.99% (and only 1.07% of those born between 1995–1999) reported changing their hukou from rural to urban.] Those migrating temporarily are more likely to leave their family behind in their place of origin. This is even more likely when health care is needed, such as in the case of pregnancy, since access to health services is related to one's hukou place of residence. The smaller part of migrants who are able to move their hukou to their place of destination may plausibly move to counties that are expected to have cleaner air due to the TCZ policy. If this is really happening, and as this type of migrant is also expected to have a higher socioeconomic status, our estimates may be biased. [Footnote: [START_REF] Chen | The effect of air pollution on migration: evidence from china[END_REF] showed that migration at the county level may react to changes in air pollution, and that well educated people at the beginning of their professional careers are driving this migration. However, this is not evidence of permanent migration nor of the family migrating together to the destination.]
However, in Table 3.C.4, we show that the TCZ implementation did not induce differential changes in either parental education or GDP per capita, in the years after TCZ designation, in the newly regulated counties. These results suggest little evidence of a bias in our estimates due to differential sorting based on observable characteristics.

County of residence versus county of birth: Another concern in our analysis is that we only observe the county in which the child is living at the time of the survey. If the current county of residence is different from the county in which they were born, this may constitute a potential source of measurement error for the treatment variable. However, as mentioned above, permanent migration as a household is less likely to happen in China. Moreover, children's migration before age 16 is even less likely, since their access to education is also related to their hukou of birth.

Differences in trends: In Table 3.C.4, we found that the TCZ policy implementation is associated with a lower number of siblings for children living in counties designated as TCZ. TCZ counties already exhibited a lower number of siblings in the pre-TCZ period (Table 3.C.1). However, if parents in TCZ counties opted for an even lower number of children following the TCZ policy, this finding may bias our results if the number of siblings significantly affects the educational performance of each child. Moreover, in the specification using only the 1996–1999 cohorts, we see that the TCZ policy implementation is associated with a higher number of males. If parents opted for a lower number of children and, because of the son-preference culture, this resulted in selective pregnancies with only males, then our results may be biased if gender significantly affects the educational outcomes. For this reason, we replicate our main results without introducing the number of siblings and/or the indicator for gender as covariates. Results (for fixed effects models, CRE models and the LPM) remain the same in terms of sign and significance. [Footnote: Tables are omitted for simplicity but are available upon request.]

Alternative clustering: Our main results are clustered at the county level, to account for the fact that the variation we use to estimate the effects of the TCZ policy is at the county level. In this robustness check, we cluster the standard errors at a higher administrative level, the province level, to account for both autocorrelation and spatial correlation across counties within each province. [Footnote: Note that each province can include counties with different TCZ designations: TCZ or non-TCZ.] We find that accounting for dependence within the provincial level increases our standard errors a bit more, but the significance levels of the results are the same, using the fixed effects and CRE models (the DID estimate for the academic high school variable is no longer significant using the LPM, due to the much higher standard errors).
Urban versus rural. In our baseline results, we use data from both the urban and rural CHIP13 household surveys. If urban areas are more likely to be designated as TCZ than rural areas, a concern is that our results are driven by simple comparisons between rural and urban administrative divisions. To ensure that this is not the case, we restrict our analysis to the rural sample. Estimates, in Table 3.C.8, show slightly lower significance levels compared to the main results, which can be explained by the higher standard errors, themselves likely due to the smaller sample sizes. However, the DID effects on the probability of attending a higher quality high school (Panel B) or an academic high school (Panel A) remain significant at the 10% level, suggesting that the effect we found is not produced by comparisons between urban and rural children.

Alternative research design. The results presented so far are based on a difference-in-differences design that uses the overall available sample while controlling for different characteristics. However, a concern about bias in our estimates may still arise if omitted heterogeneity related to the TCZ designation, rather than the real effect of the treatment, is driving the apparent difference in outcomes between those born in TCZ counties and those born in non-TCZ counties. To further test the robustness of our results, we use the propensity score matching method to restrict our sample to TCZ and non-TCZ counties with similar characteristics. We use both mother's and father's years of education, weather variables and county characteristics as covariates to compute propensity scores of being designated as TCZ (the matching step is sketched below).47 In a second step, we investigate the long-term effects of TCZ implementation on children's educational performance using only the matched sample. Table 3.C.9 reports results from our DID estimation of the TCZ effects using the matched sample. Estimates remain the same across specifications, in terms of sign and significance, increasing the confidence in our empirical strategy. The DID estimates of the TCZ implementation effect, measured in the probability of outcomes, are presented for the CRE model. They vary little compared to the main results. The average difference-in-differences in the probability of attending a high quality high school is 14.3 percentage points, while it reaches 16.5 percentage points for the probability of attending an academic high school.48

Overall, all estimates remain similar across the different robustness checks, suggesting that the main results are not driven by inappropriate identification assumptions.
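The matching step referenced above can be illustrated as follows. This is a minimal sketch of one-to-one propensity score matching on county-level covariates, assuming a hypothetical county-level file and hypothetical column names; implementation details (caliper, matching with or without replacement) would need to mirror the choices actually made for Table 3.C.9.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

counties = pd.read_csv("county_covariates.csv")  # hypothetical file and names
X = counties[["father_educ", "mother_educ", "precip", "temp", "gdp_pc", "pop"]]
y = counties["tcz"]

# First step: propensity score of being designated as TCZ
counties["pscore"] = (
    LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
)

treated = counties[counties["tcz"] == 1]
control = counties[counties["tcz"] == 0]

# One-to-one nearest-neighbor matching on the propensity score
# (here with replacement, for simplicity)
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_counties = pd.concat([treated, control.iloc[idx.ravel()]])

# Second step: re-estimate the DID using only children born in matched counties
```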
Conclusion

In this paper, we provide the first empirical evidence, in the context of a developing country, of the additional benefits of exposure to an environmental policy between conception and age 1, in terms of educational performance. Using the difference-in-differences approach, we compare cohorts born in TCZ counties, subjected to particularly stringent directives, just before and after the TCZ implementation, to the same difference among cohorts born in non-TCZ counties. Our estimated reduced form effects capture the full environmental policy impact, including any changes in air pollution levels and changes in labor market outcomes and opportunities for parents. Considering the cohorts in our sample counties, the results are consistent with an important relationship between exposure to environmental regulation, in utero and before age 1, and educational performance 15 years later.

In particular, we found that the exposed children have higher predicted probabilities of attending a better quality high school, which can be explained by a higher ability to obtain better scores in a standardized high school entrance exam. They were also found to be more likely to join an academic, rather than a specialized/technical, high school, the latter focusing on manual labor training. Projecting forward, we showed that our educational performance measures may affect later educational attainment and earnings, suggesting even larger benefits of the environmental policy. In a further step, we presented suggestive evidence that the TCZ policy effects act through improvements in SO2 concentrations, even though other channels are possible. Moreover, we found that the beneficial effects of exposure to the environmental policy, from conception to age 1, are reinforced among cohorts born to fathers with low levels of education and among girls. These results suggest that environmental policies enhance educational performance while playing a role in reducing educational gaps. We also found results suggesting no differential benefits, in terms of higher predicted probabilities of attending a higher quality high school or an academic high school, from exposure to the TCZ implementation starting at age 1, 2, 3, 4 or 5. However, that does not imply the absence of positive effects at those ages: if beneficial effects exist at these slightly older ages, the benefits of exposure to the TCZ policy would be larger than the ones examined in this paper, covering a wider early-life age group. Even though our analysis is limited by multiple data restrictions, our attempt to examine the importance of environmental policies for human capital formation suggests positive and significant associations. Future research is, however, needed to further examine the long-term effects of the TCZ policy on educational and economic outcomes, using larger and more recent databases.

3.B Data construction

3.B.1 Counties' TCZ designation

We respect the following rules in identifying TCZ treatment and control counties:
• We consider the name, area and divisional status (either district, county-level city, or county) as of 1998 for each location in CHIP13.
• Third-level administrative locations explicitly listed as TCZ in the official document, or belonging to prefectures that are listed as TCZ, are assigned the TCZ status.
• All other third-level administrative locations are assigned the non-TCZ status.

3.B.2 Pollution data

Pollution data are extracted using a polygon shapefile with the third-level administrative divisions of China as of 2015 (Hijmans, R. and University of California, Berkeley, Museum of Vertebrate Zoology, 2015); a minimal sketch of this extraction step is given after the rules below. We consider the following rules in treating the pollution data:
• Changes in names and divisional status are taken into account, in order to match the right data to the right years.
• When a location is merged into another one, we use data on the newly merged area. One exception is the data used for Qijiang district, which is the merged area of Qijiang county and Wansheng district in 2011: we use pollution data for Qijiang county, as it is the only available data.
• When a location is divided into two different locations, we consider the new location. As we are interested in the concentration of air pollutants, smaller areas allow for a better measure of these concentrations. However, due to data availability problems, the data for the area before division is used in some cases.
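The extraction step referenced above can be illustrated with zonal statistics over the county polygons. The sketch below uses the geopandas and rasterstats packages and assumes hypothetical file names, including a hypothetical gridded annual SO2 raster; it illustrates the approach rather than the exact pipeline used here.

```python
import geopandas as gpd
from rasterstats import zonal_stats

shp = "china_counties_2015.shp"  # hypothetical path to the 2015 boundary shapefile
counties = gpd.read_file(shp)    # county attributes (names, codes)

# Mean SO2 concentration over each county polygon from a hypothetical
# gridded annual raster; repeated for each year of interest
stats = zonal_stats(shp, "so2_annual_1998.tif", stats=["mean"])
counties["so2_1998"] = [s["mean"] for s in stats]
```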
3.B.3 Income and demographic data

For the county-level socioeconomic data, we consider the following rules:
• Changes in names and divisional status are taken into account, in order to match the right data to the right years.
• When a location is merged into another one, the sum of the two locations is used, even in years of birth when the two locations were still independent. This is because we cannot, from the CHIP13 data, distinguish who is from which location. In cases where we are unable to get data for one of the two locations, we use data for only one location, considering individuals born in the second location as originally born in the first one.
• When a location is divided into two different locations, we use data from the original location before division if the division happened after the year of birth of the individuals (which is the case for the whole sample: all divisions in our sample happened after 1999).
• Some counties have incomplete data on GDP per capita and total population, requiring us to interpolate the missing data.
• Because of data availability, Gross Regional Product per capita was used for counties and districts of Jiangsu province and for some districts of Yunnan province.
• We deflate GDP data by province-year CPI, taking Beijing-1998 as the base province-year. For the few missing values of provincial CPI, we use China's national CPI (with 1998 as the base year).
• Socioeconomic data for Beijing municipality at the third-level administrative division is not available. Therefore, we allocate to each district of Beijing the same municipality-level socioeconomic data.

Notes to Table 3.C.4: The estimates are those of the coefficients on the interaction term of the TCZ indicator and a Post indicator (=1 for 1998 and 1999). Separate regressions are estimated using each of the respective variables as the dependent variable, based on Eq. (3.3). Note that outliers created by the linear interpolation of the GDP per capita and Population (10,000 persons) variables are dropped when running the regressions for these variables. Column (1) reports results using all those born between 1992 and 1999, while Column (2) reports results for those born between 1996 and 1999. See notes to Table 3.6.1.

Notes to Table 3.C.7: Each entry reports the difference-in-differences estimates using LPM regressions. High/low father's and mother's education is defined as having a number of years of schooling that is above or equal to / below 9. See notes to Table 3.6.1.

Notes to Figure 3.C.3: The dependent variable used is SO2ct, the annual SO2 concentration by county. The sample covers all counties used to estimate Eq. (3.5) that have available pollution data at the county level. We plot the coefficients on the interactions between birth year indicators and the indicator for TCZ status, and their associated 90% (Panel A) or 99% (Panel B) confidence intervals. The coefficient on the interaction between TCZ and birth year 1997 is normalized to zero. Standard errors are clustered by county.

[...] strategy. Chapter 3 focuses on government policy-making that aims to deal with human capital risk factors.

3.C Data analysis

Using a theoretical model of parental migration with left-behind children, the dissertation shows how risk-averse migrant parents do not necessarily react to changes in the riskiness of incomes, children's human capital and accumulated savings.
Only the migration duration of parents with particular features of their risk preferences, in terms of higher-order risk attitudes, is affected by changes in risks. The sign of these effects is also shown to be determined by the nature of the considered risk preferences. Using household income and migration data coupled with rainfall data, the dissertation also highlights that rural Chinese households with negative expected income differentials are less likely to use migration as a way to diversify their income in the face of an aggregate income risk. This effect is, however, decreasing with the level of the income differential. In particular, it starts dropping when the expected income of the household with a migrant parent is around 1.32 to 1.38 times its expected income when it does not have a migrant parent. For these rural households, who care not only about their income but also about their children's human capital, each additional monetary unit is more desirable when children have lower school test scores. If the rural-urban income differential is positive for these households, those with children performing poorly at school may be more likely to send a parent into migration, compared to households with better performing children. Last, focusing on rural and urban Chinese children exposed to the implementation of an environmental policy in their year of birth reveals long-term positive effects on their educational performance 15 years later. In particular, the probability of getting better test scores and joining a higher quality high school increases by about 12 percentage points, while the probability of choosing an academic high school, instead of a specialized/vocational high school, increases by about 18 percentage points.

The main takeaway of the dissertation relates to the importance of understanding the determinants and consequences of parents', households' and policy makers' decisions with respect to the human capital accumulation of children, in the presence of riskiness. Along this line, a lot remains to be done despite the substantial work in the human capital investment literature since the pioneering works of Becker, and the dissertation offers some insights for future research. First, it introduces two models of migration decision-making by parents and households in Chapters 1 and 2. A drawback of these frameworks is that they consider migration in isolation from other decisions. A fruitful area for future research would be to integrate other economic decisions simultaneously with the migration decision, such as savings or monetary investments in children. Indeed, the expected temporariness of the parent's migration may imply changes in other economic choices, particularly in the presence of risk, while the riskiness of agricultural incomes may stimulate the use of different risk-coping strategies in addition to migration. Second, Chapter 1 stresses the key role, in determining migration duration choices, of the way parents perceive each additional monetary unit when their children's human capital decreases. Future research may be concerned with embedding this feature of the parents' preferences within models of migration behavior. Even though Chapter 2 provided an empirical approach to measure this preference, more sophisticated experimental methods may also be used.
Last, a natural extension to Chapter 3 would be to enlarge the analysis to the longer-term labor market outcomes of exposure to the implementation of environmental policies in developing countries, in order to delve deeper into the persistence of these effects over larger periods of time.
General Introduction

Gary Becker's work on "human capital" marked a turning point in economic research.

15 σc is the aggregate county-level measure of income risk for each household living in county c. Following previous literature on the determinants of income for rural Chinese households (e.g., [START_REF] Taylor | Migration and incomes in source communities: A new economics of migration perspective from China[END_REF], Du et al. (

We use the 2009 Rural Household Survey (RHS) from the Rural Urban Migration in China (RUMiC) project (henceforth RUMiC-RHS), conducted between March and June of 2009 to collect information on 2008.19 The RUMiC-RHS inspected the situation of 8,000 rural households from 800 villages in 82 counties and nine provinces: Hebei, Jiangsu, Zhejiang, and Guangdong from eastern China; Anhui, Henan, and Hubei from central China; Chongqing and Sichuan from western China. A wide range of individual- and household-level variables is covered by this survey, including not only demographic, social and economic information, but also records of migration history, household income and child human capital variables, which are particularly important for our empirical testing. We exclude households with missing information about their test scores and whose children dropped out of school in 2008, which reduces the sample to 2,884 households. Regarding the number of migrants within the household, keeping households where one or both parents migrate further restricts the sample to 2,507 households, while keeping households where only one parent migrates reduces the final sample to 2,268 households.

Our data on educational performance are retrieved from the fifth wave of the China Household Income Project survey for the year 2013 (hereafter CHIP13).
It was conducted in July and August of 2014, by the Annual Household Survey Office of Integration of Urban and Rural in the National Bureau of Statistics, to collect data on the year 2013. It surveys rural and urban households from 15 provinces, 126 cities and 234 counties in the east, center and west of China. It covers a variety of individual- and household-level social and economic information, including income, assets, expenditure, employment, education and other demographic information. In this paper, we make use of both the urban and the rural surveys of CHIP13. Below we describe the measures of educational performance used in our analysis. These data are matched to other datasets based on the child's county of residence at the time of the survey. We rely on the "Official Reply to the State Council Concerning Acid Rain Control Areas and Sulfur Dioxide Pollution Control Areas" document.25

3.5.3 Weather

Weather data are obtained from the Global Summary of the Year (GSOY) dataset, collected from the National Centers for Environmental Information (NCEI) under the National Oceanic and Atmospheric Administration (NOAA) of the United States. It includes temperature and precipitation measures for different weather stations across China, computed from the summary-of-the-day observations of the Global Historical Climatology Network (GHCN).

Figure 3.C.3 suggests that the two groups of counties may have had similar trends in SO2 concentrations before 1998, in the years closer to 1998 (Panel A).40 The first set of coefficients, in Panel A of Table 3.6.4, [...]

[Figure 3.C.1: Trends in SO2 concentrations by TCZ status. (a) Panel A.]
[Figure 3.C.2: Effect of TCZ implementation on long-run education outcomes by year of birth (LPM). (a) High quality high school.]
[Figure 3.C.3: Effect of TCZ implementation on county SO2 concentration by year. (a) Panel A.]

[...] individual characteristics, parents' educational attainment, and other household characteristics.16 σc is the aggregate measure of income risk for each household living in county c. Following Meng and Yamauchi (2017), we also control for village-level characteristics (Sv) and province fixed effects (κp). The definition of all the variables used in the estimations is provided in Table 2.C.1 in the Appendix.

Table 2.C.2 in the Appendix checks whether this final sample of 2,268 households, where only one parent can be a migrant, is random. To do so, we focus on the 3,711 households that could potentially be covered in our analysis and compare those included with those excluded from our final sample; a minimal sketch of this check follows below. Results of estimating a dummy variable indicating whether the household is covered in our preferred sample on a vector of household-level variables, controlling for provincial fixed effects, suggest few significant differences. (See Appendix 2.B for details about the choice of the period of March to October.) Table 2.C.3 highlights a number of significant differences between households with a migrant parent and those without a migrant parent in terms of household composition, education and wealth. Table 2.C.4 also suggests that households with a migrant parent have significantly lower net total incomes, while Table 2.C.5 documents child characteristics and indicates that children with both parents at home score higher, on average, in both Chinese and math tests than their counterparts with one migrating parent.
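As a minimal sketch of the inclusion check referenced above (Table 2.C.2), the inclusion dummy can be regressed on household-level variables with province fixed effects. The file and variable names below are hypothetical placeholders, and the covariate list is abbreviated relative to the table.

```python
import pandas as pd
import statsmodels.formula.api as smf

hh = pd.read_csv("rumic_rhs_2009.csv")  # hypothetical file and column names

# Probit of an inclusion dummy on household-level variables with
# province fixed effects, in the spirit of Table 2.C.2
res = smf.probit(
    "included ~ n_migrants + hh_income + mean_schooling + hh_size"
    " + labor_ratio + land_size + C(province)",
    data=hh,
).fit()
print(res.summary())
```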
Table 2.5.1: Estimates of the structural model of migration

                                    (1)                (2)                (3)                (4)                (5)                (6)
A. Only one parent is a migrant
Expected changes
Income differential                 1.878*** (0.574)   1.442** (0.638)    1.767*** (0.576)   1.318** (0.648)    1.998*** (0.562)   1.553** (0.608)
Income differential²               -3.158*** (1.204)  -3.479** (1.687)   -3.215*** (1.204)  -3.573** (1.693)   -3.071** (1.200)   -3.315** (1.652)
Child human capital differential    0.122*** (0.033)   0.345*** (0.036)   0.121*** (0.032)   0.338*** (0.037)   0.123*** (0.033)   0.348*** (0.034)
Child human capital differential²  -0.008*** (0.002)  -0.014*** (0.004)  -0.007*** (0.002)  -0.013*** (0.004)  -0.008*** (0.003)  -0.014*** (0.004)
Interaction                        -0.090 (0.071)     -0.194* (0.109)    -0.103 (0.070)     -0.220** (0.107)   -0.074 (0.071)     -0.171 (0.109)
Observations                        1767               1767               1767               1767               1767               1767

B. At least one parent is a migrant
Expected changes
Income differential                 2.257*** (0.518)   1.515*** (0.447)   2.208*** (0.519)   1.415*** (0.452)   2.368*** (0.502)   1.662*** (0.432)
Income differential²               -2.729*** (0.936)  -3.294*** (0.930)  -2.695*** (0.928)  -3.345*** (0.918)  -2.830*** (0.941)  -3.200*** (0.940)
Child human capital differential   -0.325*** (0.048)   0.067* (0.035)    -0.290*** (0.046)   0.072** (0.035)   -0.348*** (0.047)   0.062* (0.034)
Child human capital differential²  -0.011*** (0.003)  -0.021*** (0.004)  -0.009*** (0.003)  -0.019*** (0.004)  -0.013*** (0.003)  -0.021*** (0.004)
Interaction                        -0.060 (0.076)     -0.144* (0.081)    -0.062 (0.073)     -0.152* (0.079)    -0.044 (0.076)     -0.124 (0.080)
Observations                        1951               1951               1951               1951               1951               1951
C. At least one household member is a migrant
Expected changes
Income differential                -0.123 (0.805)      0.377 (0.705)      0.105 (0.760)      0.245 (0.698)     -0.072 (0.805)      0.510 (0.702)
Income differential²               -2.183*** (0.823)  -1.897*** (0.625)  -1.922** (0.777)   -1.904*** (0.626)  -2.603*** (0.850)  -1.908*** (0.626)
Child human capital differential   -0.503*** (0.068)  -0.156** (0.066)   -0.429*** (0.061)  -0.130** (0.065)   -0.558*** (0.066)  -0.180*** (0.064)
Child human capital differential²  -0.001 (0.004)     -0.009** (0.004)    0.001 (0.004)     -0.008** (0.004)   -0.005 (0.004)     -0.011*** (0.004)
Interaction                        -0.145 (0.098)     -0.006 (0.084)     -0.111 (0.088)     -0.022 (0.081)     -0.132 (0.101)      0.010 (0.085)
Observations                        2226               2226               2226               2226               2226               2226
Household Characteristics           Yes                Yes                Yes                Yes                Yes                Yes

Notes: The dependent variable is, in Panel A, a dummy variable that equals 1 if only one parent is a migrant and 0 otherwise; in Panel B, a dummy variable that equals 1 if one or both parents are migrants; and in Panel C, a dummy variable that equals 1 if any household member is a migrant. Columns (1), (

Table 2.5.2: Estimates of the household income corrected for sample selection bias

                                              Household income
                                              (1)               (2)
Household characteristics
Mean schooling                                0.057*** (0.010)  0.037** (0.015)
Male headed                                   0.134 (0.101)     -0.157 (0.101)
Household size                                0.071*** (0.016)  0.105*** (0.025)
Labor ratio                                   0.005*** (0.002)  0.004* (0.002)
Physical capital
Land size                                     0.006 (0.008)     -0.008 (0.009)
Irrigated land                                0.000 (0.010)     0.019 (0.013)
House value                                   0.091*** (0.019)  0.049** (0.025)
Institutional assets
Access to formal credit                       0.137*** (0.046)  -0.093 (0.067)
Access to informal credit                     0.086 (0.060)     0.049 (0.121)
Risk variables
Rainfall coefficient of variation (Mar-Oct)   -0.086** (0.035)  -0.058* (0.033)
Selectivity variables
Inverse Mills ratio                           -0.032 (0.181)    0.027 (0.165)
Village characteristics
Unified irrigation system                     0.130** (0.064)   0.041 (0.091)
Furrow machine                                0.160 (0.105)     0.001 (0.169)
Plant disease prevention and treatment        -0.005 (0.087)    -0.052 (0.134)
United purchasing service                     -0.037 (0.120)    0.333** (0.162)
Regional characteristics
East                                          0.363*** (0.091)  0.280** (0.134)
Center                                        0.212** (0.091)   0.168 (0.148)
Observations                                  1979              402

Notes: The sample where only one parent can be a migrant is used here. The dependent variable is the logarithm of the household total net income, for the sub-sample of households without a migrant parent in Column (1) and for the sub-sample of households with a migrant parent in Column (2). Bootstrapped standard errors, in parentheses, are clustered at the county level. * p < 0.10, ** p < 0.05, *** p < 0.01.
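The selection correction behind Table 2.5.2 (and the companion child human capital table that follows) can be sketched as a textbook two-step procedure: a first-stage probit for migration status, construction of the inverse Mills ratio, and an outcome regression on each sub-sample including the ratio as a regressor. statsmodels has no built-in Heckman estimator, so the sketch below computes the ratio manually; the file and variable names are hypothetical, the sketch assumes no missing values, and proper inference would bootstrap both steps with county-level clustering, as in the table notes.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import norm

df = pd.read_csv("households_2008.csv")  # hypothetical file and column names

# Step 1: probit for the migration decision (selection equation)
sel = smf.probit(
    "migrant ~ mean_schooling + hh_size + land_size + village_migrant_share",
    data=df,
).fit()
xb = sel.fittedvalues  # linear index X'beta of the probit

# Inverse Mills ratios, one formula per regime
df["imr"] = np.where(
    df["migrant"] == 1,
    norm.pdf(xb) / norm.cdf(xb),           # households with a migrant parent
    -norm.pdf(xb) / (1.0 - norm.cdf(xb)),  # households without a migrant parent
)

# Step 2: outcome equation on the non-migrant sub-sample, adding the IMR;
# the migrant sub-sample is handled symmetrically
stay = df[df["migrant"] == 0]
out = smf.ols(
    "log_income ~ mean_schooling + hh_size + labor_ratio + house_value + imr",
    data=stay,
).fit(cov_type="cluster", cov_kwds={"groups": stay["county"]})
print(out.params["imr"])  # selectivity term, cf. Table 2.5.2
```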
Included in the final sample Number of migrants -1.345*** (0.047) Household income 0.228*** (0.044) Mean Age 0.006 (0.005) Mean schooling -0.010 (0.015) Male headed 0.036 (0.131) Household size -0.221*** (0.044) Preschool children 0.163 (0.107) School children (age<16) 0.310*** (0.093) School children (age>=16) -0.135 (0.102) Gender ratio 0.000 (0.002) Labor ratio -0.012*** (0.004) Land size 0.006 (0.006) Irrigated land 0.000 (0.001) House value -0.027 (0.023) Eldery (>60) 0.080 (0.063) Disabled 0.015 (0.131) Province fixed effects Yes Observations 3652 Notes: "Included in the final sample"=1 if the household is included in our final sample. Regressors are defined in Table 2 .C.1. Standard errors in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01. * p < 0.10, ** p < 0.05, *** p < 0.01. Table 2 . 2 C.4: Summary statistics by migration status -Income variables, 2008 This table reports means of income components of all households (Column (1)), of households with no migrants (column (2)) and of households with one migrating parent (Column (3)). This table uses the sample where only one parent may be a migrant. Standard deviations are in parentheses. Column (4) tests for differences in means between the two types of households. (1) (2) (3) (4) * p < 0.10, ** p < 0.05, *** p < 0.01. Table 2 2 .C.5: Summary statistics by migration status -Child characteristics, 2008 (1) (2) (3) (4) Math score 81.17 81.42 79.93 1.49** (12.32) (12.09) (13.35) Chinese score 79.47 79.69 78.42 1.27** (12.33) (12.29) (12.48) Age 13.24 13.32 12.83 0.50*** (3.90) (3.93) (3.69) Male 0.53 0.53 0.53 0.01 (0.50) (0.50) (0.50) Number of siblings 0.89 0.90 0.82 0.08* (0.85) (0.88) (0.69) Mother's years of schooling 7.28 7.33 7.03 0.30*** (2.23) (2.23) (2.20) Father's years of schooling 8.23 8.27 8.06 0.21** (2.15) (2.20) (1.84) number hours studying 9.09 9.22 8.49 0.73* (6.71) (6.84) (6.09) Observations 3061 2544 517 3061 Notes: This table reports means of child characteristics of all house- holds (Column (1)), of households with no migrants (column (2)) and of households with one migrating parent (Column (3)). This table uses all children from each household in the sample where only one parent may be a migrant. Standard deviations are in parentheses. Column (4) tests for differences in means between the two types of households. * p < 0.10, ** p < 0.05, *** p < 0.01. Table 2 . 2 C.6: Estimates of the structural model of migration (alternative migration duration) This table reports regression coefficients from 12 separate regressions. The outcome in panel A is a dummy variable that equals 1 if only one parent is a migrant for at least 3 months and 0 otherwise, while the outcome in panel B is a dummy variable that equals 1 if only one parent is a migrant for at least 9 months and 0 otherwise. See notes to Table2.5.1. This table reports regression coefficients from 6 separate regressions, using the sample where only one parent is a migrant. See notes to Table2.5.1. (1) (2) (3) (4) (5) (6) A. 
Migrant parent for at least 3 months Expected changes Income differential 1.236** 1.198** 1.189** 1.148** 1.306** 1.261** (0.559) (0.528) (0.559) (0.536) (0.555) (0.518) Income differential 2 -3.413*** -2.418 -3.404*** -2.443 -3.415*** -2.390 (1.164) (1.650) (1.166) (1.653) (1.160) (1.638) Child human capital differential 0.077* 0.311*** 0.082* 0.301*** 0.073* 0.319*** (0.042) (0.033) (0.042) (0.033) (0.042) (0.033) Child human capital differential 2 -0.012*** -0.021*** -0.011*** -0.020*** -0.013*** -0.021*** (0.003) (0.005) (0.003) (0.005) (0.004) (0.005) Interaction -0.100 -0.239* -0.103 -0.238** -0.094 -0.236* (0.080) (0.125) (0.079) (0.120) (0.081) (0.128) Observations 1717 1717 1717 1717 1717 1717 B. Migrant parent for at least 9 months Expected changes Income differential 1.272 -0.390 1.130 -0.405 1.356 -0.308 (1.213) (0.797) (1.207) (0.827) (1.198) (0.737) Income differential 2 -3.302*** -1.389 -3.208*** -1.492 -3.367*** -1.334 (1.215) (1.040) (1.200) (1.039) (1.222) (1.030) Child human capital differential -0.652*** 0.227*** -0.632*** 0.215*** -0.651*** 0.215*** (0.085) (0.053) (0.080) (0.053) (0.086) (0.047) Child human capital differential 2 -0.017*** -0.000 -0.016*** -0.000 -0.017*** 0.000 (0.003) (0.003) (0.002) (0.002) (0.003) (0.002) Interaction 0.028 0.009 0.018 0.006 0.035 0.010 (0.080) (0.069) (0.078) (0.070) (0.080) (0.064) Observations 1837 1837 1837 1837 1837 1837 Household Characteristics Yes Yes Yes Yes Yes Yes Notes: Table 3 . 3 6.1: Effect of TCZ Implementation in the year of birth on long-term educational outcomes This table reports regression coefficients from 12 separate pooled Probit regressions. Individual-level micro data is used for these regressions. The sample includes all observations for the years 1995-1999 and remains the same across all columns for each outcome. The outcome in panel A is a dummy variable that equals 1 if attending a high quality high school and 0 otherwise, the outcome in panel B is a dummy variable that equals 1 if attending an academic high school and 0 otherwise, and the outcome in panel C is a dummy variable that equals 1 if attending a high school and 0 otherwise. Estimates reported are that of the TCZxPost variable, which is an indicator that equals one for counties designated as TCZ interacted with an indicator that equals one for birth years after 1997. All regressions are unweighted and include controls for county characteristics (population, GDP per capita), weather variables (second-degree polynomials in annual precipitation, 10-degree bins based on daily maximum temperatures), province by year of birth őxed effects and county őxed effects. Additional controls are listed in the table and cover parents characteristics1 (con- (1) (2) (3) (4) A. High quality high school TCZxPost 0.855** 0.998*** 0.998*** 1.012*** (0.349) (0.344) (0.356) (0.361) Observations 939 939 939 939 Pseudo R 2 0.174 0.226 0.254 0.260 B. Academic high school TCZxPost 0.892** 1.117** 1.289** 1.299** (0.435) (0.488) (0.523) (0.531) Observations 709 709 709 709 Pseudo R 2 0.184 0.229 0.282 0.282 C. 
Qualify for high school TCZxPost -0.021 0.143 0.231 0.206 (0.371) (0.411) (0.414) (0.417) Observations 1126 1126 1126 1126 Pseudo R 2 0.126 0.204 0.229 0.257 Child characteristics No No No Yes Parents characteristics1 No Yes Yes Yes Parents characteristics2 No No Yes Yes Weather controls Yes Yes Yes Yes County characteristics Yes Yes Yes Yes County őxed effects Yes Yes Yes Yes ProvinceXyears őxed effects Yes Yes Yes Yes Notes: tinuous measures of both mother's and father's education, mother's age at birth), parents characteristics2 (father's age at birth, indicators for levels of education of both the mother and father) and individual characteristics (gender, ethnicity, number of siblings, urban hukou). Standard errors in parentheses are clustered at the county level. * p < 0.10, ** p < 0.05, *** p < 0.01. See Table 3.C.2 for details about these estimations. Table 3 . 3 6.2: Effect of TCZ Implementation in the year of birth on long-term cognitive outcomes, CRE This table reports regression coefficients from 3 separate CRE Probit regressions. All regressions include controls for county characteristics (population, GDP per capita) and their year means for each county, weather variables (second-degree polynomials in annual precipitation, 10-degree bins based on daily maximum temperatures) and their year means for each county. We also include the number of yearly data entries of each county. Additional controls are listed in the table and cover parents characteristics1 (continuous measures of both mother's and father's education, mother's age at birth), parents characteristics2 (father's age at birth, indicators for levels of education of both the mother and father) individual characteristics (gender, ethnicity, number of siblings, urban hukou), and province by year of birth őxed effects. Standard errors in parentheses are clustered at the county level. * p < 0.10, ** p < 0.05, *** p < 0.01. (1) (2) (3) High quality high school Academic high school Qualify for high school TCZxPost 0.473 1.002** 0.192 (0.298) (0.392) (0.314) Child characteristics Yes Yes Yes Parents characteristics1 Yes Yes Yes Parents characteristics2 Yes Yes Yes Weather controls Yes Yes Yes County characteristics Yes Yes Yes ProvinceXyears őxed effects Yes Yes Yes Observations 997 959 1425 Pseudo R 2 0.183 0.234 0.237 Notes: Table 3 . 3 6.3: Impact of TCZ implementation in the year of birth on long-term educational outcomes: differences in probability (CRE) This table reports the marginal effects (in probability metric) of Post in the TCZ (TCZ=1) and non-TCZ (TCZ=0) counties, following CRE Probit estimations of each outcome variable. Estimates for TCZxPost correspond to the pairwise comparisons of the average marginal effects. See notes to Table3.6.2. High quality high school Academic high school Qualify for high school TCZ=0 0.425*** 0.054 -0.370 (0.055) (0.152) (0.248) Conődence Interval (95%) [0.318, 0.532] [-0.243, 0.352] [-0.856, 0.116] TCZ=1 0.551*** 0.241** -0.302 (0.055) (0.117) (0.280) Conődence Interval (95%) [0.444, 0.658] [0.012, 0.471] [-0.851, 0.248] TCZxPost 0.125*** 0.187*** 0.069 (0.041) (0.064) (0.085) Conődence Interval (95%) [0.045, 0.206] [0.061, 0.313] [-0.098, 0.236] Observations 939 709 1126 Notes: Table 3 . 3 6.4 presents the results estimated by the CMP, using individual micro-data.39 Table 3 3 This table reports coefficients from 6 separate regressions. Each regression estimates, in Second step, the effects of exposure, in the year of birth, to SO 2 concentrations, predicted using First step. 
SO 2 concentration is measured by the annual means. Regressions are performed using the CMP strategy where two different sample sizes are used for the estimations of Eq. (3.5) (First step) and Eq. (3.6) (Second step). Panel A reports results of Eq. (3.6) based on the őxed effects Probit while Panel B. reports results of Eq. (3.6) from the LPM. See notes to Table3.6.1. .6.4: Effect of TCZ Implementation in the year of birth on long-term cognitive outcomes: effects through changes in air quality (1) (2) (3) High quality high school Academic high school Qualify for high school Panel A. FE Probit Second step SO 2 concentration -2.765** -2.955*** -0.765 (1.159) (0.373) (1.764) Observations 648 499 844 First step TCZxPost -0.265* -0.256** -0.256** (0.135) (0.129) (0.128) Observations 1760 1760 1760 Panel B. LPM Second step SO 2 concentration -1.434 -0.385 -0.148 (0.874) (0.503) (0.317) Observations 685 685 1109 First step TCZxPost -0.256** -0.256** -0.256** (0.128) (0.128) (0.128) Observations 1760 1760 1760 Child characteristics Yes Yes Yes Parents characteristics1 Yes Yes Yes Parents characteristics2 Yes Yes Yes Weather controls Yes Yes Yes County characteristics Yes Yes Yes County őxed effects Yes Yes Yes ProvinceXyears őxed effects Yes Yes Yes Notes: Table 3 3 .6.5: Heterogeneity in the TCZ effects across Subpopulations: differences in probability (CRE) (1) (2) (3) High quality high school Academic high school Qualify for high school Panel A. Boys and girls Boys 0.195*** 0.118*** 0.190 (0.048) (0.045) (0.153) Observations 502 459 733 Girls -0.057 0.430** -0.096* (0.173) (0.218) (0.058) Observations 452 416 478 Panel B. High and low education of father High father's education 0.075 0.233*** -0.044 (0.064) (0.034) (0.069) Observations 536 486 581 Low father's education 0.437** 0.050 0.041 (0.195) (0.196) (0.109) Observations 429 386 727 Panel C. High and low education of mother High mother's education 0.096 0.206** -0.190 (0.063) (0.088) (0.154) Observations 488 420 405 Low mother's education 0.144 -0.086 0.079 (0.203) (0.212) (0.126) Observations 480 459 819 Child characteristics Yes Yes Yes Parents characteristics1 Yes Yes Yes Parents characteristics2 Yes Yes Yes Weather controls Yes Yes Yes County characteristics Yes Yes Yes ProvinceXyears őxed effects Yes Yes Yes Table 3 3 Notes: Each entry in this Table is the coefficient on the TCZ variable from a separate regression of each of the respective variables on the TCZ indicator (=1 if county is designated as TCZ) and year of birth őxed effects. Regressions are estimated using individual micro-data for the years before the TCZ implementation(1992)(1993)(1994)(1995)(1996)(1997). Note that outliers created due to the linear interpolation of GDP per capita and Population (10000 persons) variables are dropped when running the regressions for these variables. Standard errors in parentheses are clustered by county. * p < 0.10, ** p < 0.05, *** p < 0.01. .C.1: Difference in characteristics by TCZ status before 1998 (1) 1992-1997 Male -0.0387 (0.0358) Han 0.236 (0.232) Number of siblings -0.218* (0.118) Urban hukou 0.395*** (0.101) Father's years of education 1.026*** (0.216) Mother's years of education 1.474*** (0.287) Mother's age at birth 0.0265 (0.209) Father's age at birth 0.354 (0.258) Total precipitation 311.6*** (82.80) Average temperature 2.775*** (0.613) Population (10000 persons) 0.0176 (0.123) GDP per capita 0.636*** (0.156) Table 3 . 
3 C.2: Effect of TCZ implementation in the year of birth on long-term educational outcomes See notes to Table3.6.1. (1) (2) (3) High quality high school Academic high school Qualify for high school TCZxPost 1.012*** 1.299** 0.206 (0.361) (0.531) (0.417) Male -0.114 -0.063 -0.457*** (0.126) (0.159) (0.118) Han 0.486 0.050 -0.263 (0.333) (0.367) (0.239) Number of siblings 0.082 -0.011 -0.185*** (0.099) (0.100) (0.070) Urban hukou 0.281 0.014 0.607** (0.203) (0.218) (0.266) Father's years of education 0.111* -0.001 0.063 (0.065) (0.090) (0.062) Mother's years of education -0.105 -0.022 0.028 (0.072) (0.106) (0.067) Mother's age at birth 0.038 0.058* 0.037 (0.026) (0.034) (0.023) Father's age at birth -0.021 -0.059* -0.011 (0.028) (0.032) (0.020) Never schooled (Father) 0.444 -1.649 -9.174*** (0.994) (1.487) (1.156) Elementary school (Father) 0.296 -1.605 -8.741*** (0.605) (0.978) (0.791) Middle school (Father) 0.065 -1.564** -8.742*** (0.415) (0.741) (0.631) Secondary school (Father) -0.059 -0.751 -8.580*** (0.270) (0.592) (0.555) University (Father) ref ref ref Never schooled (Mother) -2.535** 0.002 -1.022 (1.038) (1.538) (0.980) Elementary school (Mother) -2.233*** -0.868 -1.216* (0.714) (1.020) (0.723) Middle school (Mother) -1.811*** -0.809 -0.720 (0.501) (0.786) (0.572) Secondary school (Mother) -0.765** -0.166 -0.422 (0.319) (0.582) (0.459) University (Mother) ref ref ref Total precipitation 0.000 0.003 -0.001* (0.001) (0.002) (0.001) Total precipitation squared -0.000 -0.000* 0.000* (0.000) (0.000) (0.000) Nbr days with 33<max temp<39 0.018 0.023 0.030 (0.044) (0.069) (0.036) Nbr days with 40<max temp<49 0.008 -0.170*** -0.006 (0.036) (0.052) (0.042) Nbr days with 50<max temp<59 -0.002 0.062 0.069* (0.037) (0.063) (0.035) Nbr days with 60<max temp<69 0.069** 0.116** -0.018 (0.031) (0.049) (0.028) Nbr days with 70<max temp<79 0.042 0.140** 0.010 (0.034) (0.059) (0.045) Nbr days with 80<max temp<89 -0.023 -0.015 -0.019 (0.032) (0.044) (0.038) Nbr days with 90<max temp<99 0.023 0.087* 0.032 (0.036) (0.044) (0.041) Nbr days with max temp>100 -0.001 0.030 -0.009 (0.015) (0.033) (0.024) Population (10000 persons) 1.460* -1.937 0.454 (0.839) (1.340) (0.881) GDP per capita 0.053 -0.359 -0.115 (0.344) (0.809) (0.364) County őxed effects Yes Yes Yes ProvinceXyears őxed effects Yes Yes Yes Observations 939 709 1126 Pseudo R 2 0.260 0.282 0.257 Notes: Table 3 . 3 C.3: Effect of TCZ implementation on long-run education outcomes by year of birth Notes: We compute, for each regression of Eq. (3.4) for the different dependent variables, the average marginal effects (AMEs) on the interactions between birth year indicators and the indicator for TCZ status. Each panel of thisTable presents regression pairwise comparisons of the AMEs, in Column (1). 1992 vs 1997 corresponds to (AME 1992 -AME 1997 ), 1993 vs 1997 corresponds to (AME 1993 -AME 1997 ), 1994 vs 1997 corresponds to (AME 1994 -AME 1997 ), 1995 vs 1997 corresponds to (AME 1995 -AME 1997 ), 1996 vs 1997 corresponds to (AME 1996 -AME 1997 ), 1998 vs 1997 corresponds to (AME 1998 -AME 1997 ), 1999 vs 1997 corresponds to (AME 1999 -AME 1997 ). Standard errors, clustered by county, are presented in Column (2) and their associated 95% conődence intervals in Column (3). (1) (2) (3) * p < 0.10, ** p < 0.05, *** p < 0.01. Table 3 . 3 C.4: Balancing test by TCZ status Trend difference 1992-1999 1996-1999 Table 3 . 3 C.5: Effect of TCZ implementation on long-run education outcomes by year of birth (CRE)Notes: See notes to Table3.C.3. (1) (2) (3) Table 3 . 
3 C.6: Effect of TCZ Implementation in the year of birth on long-term cognitive outcomes, LPM This table reports regression coefficients from 3 separate regressions. Cluster robust standard errors are shown in parentheses. See notes to Table 3.6.1. (1) (2) (3) High quality high school Academic high school Qualify for high school TCZxPost 0.242** 0.154* 0.039 (0.106) (0.089) (0.066) Child characteristics Yes Yes Yes Parents characteristics1 Yes Yes Yes Parents characteristics2 Yes Yes Yes Weather controls Yes Yes Yes County characteristics Yes Yes Yes County őxed effects Yes Yes Yes ProvinceXyears őxed effects Yes Yes Yes Observations 1005 1005 1539 R 2 0.338 0.335 0.298 Notes: Table 3 . 3 C.7: Heterogeneity in the TCZ effects across Subpopulations (LPM) (1) (2) (3) High quality high school Academic high school Qualify for high school Panel A. Boys and girls Boys 0.293** 0.026 0.135 (0.128) (0.173) (0.103) Observations 522 522 806 Girls -0.202 0.120 0.001 (0.214) (0.158) (0.097) Observations 483 483 733 Panel B. High and low education of father High father's education 0.064 0.080 -0.069 (0.129) (0.108) (0.069) Observations 559 559 783 Low father's education 0.875*** 0.283 0.060 (0.280) (0.270) (0.116) Observations 446 446 756 Panel C. High and low education of mother High mother's education 0.093 0.010 -0.104 (0.141) (0.117) (0.069) Observations 498 498 657 Low mother's education 0.367 0.226 0.126 (0.318) (0.199) (0.130) Observations 507 507 882 Child characteristics Yes Yes Yes Parents characteristics1 Yes Yes Yes Parents characteristics2 Yes Yes Yes Weather controls Yes Yes Yes County characteristics Yes Yes Yes County őxed effects Yes Yes Yes ProvinceXyears őxed effects Yes Yes Yes Table 3 . 3 C.8: TCZ effects in rural areas Each entry reports the DID estimate, using CRE regressions (Panel A) and LPM regressions (Panel B). County őxed effects are included in the LPM estimations. See notes to Table3.6.2 and Table 3.6.1. Table 3.C.9: Effect of exposure to air pollution in the year of birth on long-term cognitive outcomes: PSM-DID method This table reports regression coefficients from 9 separate regressions. County őxed effects are used in estimations of Panel A and Panel C. TCZxPost reports the DID coefficients from the estimations, and TCZxPost (differences in probability) reports the average DID in probability of outcomes (pairwise comparisons of the average marginal effects in TCZ and non-TCZ counties). See notes to Table3.6.2 and Table3.6.1. (1) (2) (3) High quality high school Academic high school Qualify for high school Panel A. CRE TCZxPost 0.408 0.896* 0.285 (0.612) (0.506) (0.379) Observations 615 563 1004 Pseudo R 2 0.173 0.243 0.186 Panel B. LPM TCZxPost 0.390* 0.193 0.102 (0.206) (0.172) (0.120) Observations 638 638 1084 R 2 0.376 0.419 0.311 Child characteristics Yes Yes Yes Parents characteristics1 Yes Yes Yes Parents characteristics2 Yes Yes Yes Weather controls Yes Yes Yes County characteristics Yes Yes Yes ProvinceXyears őxed effects Yes Yes Yes Notes: Adam Smith argued, in the Wealth of Nations, that economic progress is related to the division of labor, but did not make a clear link between them[START_REF] Smith | The wealth of nations (new york: Modern library[END_REF][START_REF] Becker | Human capital, fertility, and economic growth[END_REF]. 
In addition to its important direct effect, investment in human capital may also have a long-term indirect effect on economic growth, and that is through its effects on investment in physical capital and technological changes[START_REF] Acemoglu | Why do new technologies complement skills? directed technical change and wage inequality[END_REF][START_REF] Hanushek | The knowledge capital of nations: Education and the economics of growth[END_REF]. See Belli and Appaix (2003) for a review of the earlier studies on the effects of government investments in child health. The absence or the ineffectiveness of formal insurance and credit markets may be one the factors for the persistent levels of poverty in developing countries[START_REF] Morduch | Microinsurance: The next revolution[END_REF] The empirical literature typically distinguishes idiosyncratic risks and aggregate risks, where the former affects only one decision-making unit (the individual or the household) while the latter affects simultaneously a large number of decision-making units in a particular region. essential public services, in only their local administrative unit of residence[START_REF] Song | What should economists know about the current chinese hukou system?[END_REF][START_REF] Zhou | Intra-national citizenship and dual-hukou strategies among migrant families in china[END_REF]. With the growth of China's economy and the parallel increase in demand for rural labor in cities, the hukou system has undergone various relaxations during the 1990's and 2000's. The 10th Five-Year Plan, which covers the period from 2001 to 2005, briefly suggests canceling unreasonable restrictions for rural residents to migrate and work in cities, while the 11th Five-Year Plan, concerning the period from 2006 to 2010, gives more details on how local hukou should be granted to migrants with stable jobs and houses in the destination area, and explicitly reports how rural migration should be encouraged towards small and medium sized cities and small towns[START_REF] Hsu | Urbanization policy and economic development: A quantitative analysis of china's differential hukou reforms[END_REF]. However, despite the various reforms, the majority of rural migrants in cities have not had the possibility to change their original status[START_REF] Song | What should economists know about the current chinese hukou system?[END_REF].12 The most consistent measure of migration and the more readily available in China is the "floating population" size. It includes those who have an urban or rural hukou, but who have lived and worked in a place different from where they have their household registration status.13 Different definitions of temporary migrants have been used. A temporary migrant is anyone who has stayed in their place of destination for at least one year for the years 1982 and 1990, and for at least 6 months for the year 1996[START_REF] Liang | The age of migration in china[END_REF]. 
According to NBS (2019), a migrant is anyone who has resided in a place other than their area of registration for more than six months, excluding those whose resident and registered streets or towns are different but still within the same municipality or prefecture.14 Among the other risk-coping strategies for Chinese rural households figures the precautionary saving.Giles and Yoo (2007), for example, found that, facing a median level of consumption risk, almost 10% of household savings can be attributed to a precautionary motive, and this increases to 15% for households whose per capita consumption is below the poverty line. There exists another type of models in the migration literature where the migration duration is not a choice variable in the model and where the optimal length of stay is determined in the postmigration stage (see e.g.,Dustmann and Görlach, 2016). In this case, the sign of the parent's correlation attitude is given by the direction of the variation in the marginal utility of child human capital, following an increase in the income abroad. Prudence and temperance are related to a positive third and a negative fourth derivatives of the utility function under expected utility, respectively(Kimball et al., 1990;[START_REF] Kimball | Precautionary motives for holding assets[END_REF]. We assume an increasing and a concave relationship between earnings and migration duration (see e.g.,[START_REF] Dustmann | Earnings adjustment of temporary migrants[END_REF][START_REF] Beenstock | Testing the immigrant assimilation hypothesis with longitudinal data[END_REF]. "Tertiary prevention activities involve treating an established disease or chronic illness.[START_REF] Eeckhoudt | A good sign for multivariate risk taking[END_REF]. In such a model, the decision maker "invests" in tertiary prevention care, which reduces his health by a level equivalent to the type of care chosen, but, in return, increases the status of his future health. A background variable is defined as the second argument of the decision-maker's utility function, different from the monetary argument. While Courbage and Rey (2007) andMenegatti (2009a) report their results under specific assumptions on the joint distribution of an income risk and a background risk,[START_REF] Menegatti | 20090930optimal saving in the presence of two risks[END_REF] andDenuit et al. Even though, our model is different from that of the saving, we find the same condition for a precautionary behavior as the saving model (sign of V(3,0) ). This is because, similarly to the saving model, both the impact of the decision (here the increase of the home country's wealth) and the changes in risk are experienced at the same argument in the same period. 21 Additional wealth at the home country becomes more valuable in utility terms under risk if V (3,0) ≥ 0, i.e., staying one extra unit of time at the destination country brings more benefits to the migrant parent under the income risk compared to the situation with no risks:E[V (1,0) (y 2 + ε2w + g(m * ), Z 2 )] ≥ V (1,0) (y 2 + g(m * ), Z 2 ), if V (3,0) ≥ 0.22 The case where migrant parents are faced with income risks simultaneously both in the home-and the destination countries is also possible. Migrant parents' behavior in terms of the migration duration, with respect to a situation with no risk, is independent of the correlation between the two risks. 
It depends only on the cross-prudence in child human capital and the prudence in wealth.[START_REF] Dustmann | Return migration, uncertainty and precautionary savings[END_REF], considering the migration of any individual that is faced with simultaneous income risks in the two countries, found that, in addition to the sign of the third order derivative of the utility function with respect to wealth (prudence), wage differentials and a comparison between the levels of risks between the two countries is needed to determine the variation of the migration duration with respect to a situation without risk. Note that our results are sensitive to the migrant's utility functional form. The later allows to derive more particular conditions for the effects, on the migration duration, of income, child human capital and the different risks considered in this paper. See appendix 1.E for more details about the nth-order stochastic dominance and[START_REF] Ekern | Increasing nth degree risk[END_REF] concept of increase in nth-degree risk. Note that if the income at the country of origin was not assumed zero, the results would involve a "partial n-th degree risk aversion" measure instead of a "relative n-th degree risk aversion" measure (see,[START_REF] Chiu | On relative and partial risk attitudes: theory and implications[END_REF]. This is true for any symmetric distribution of the income. pandemic has also been shown to increase the probability of unemployment in a number of countries[START_REF] Dang | Gender inequality during the covid-19 pandemic: Income, expenditure, savings, and job loss[END_REF]. Increases in unemployment would result in changes in the migration Even though, the immediate effects are observed as a redistribution of wealth within the population, the results can be transferred in terms of earning probabilities. For a given worker, the possible incomes under the flat tax schedule are higher, but with higher probabilities for the lower possible incomes and lower probabilities for the higher possible incomes, than under the progressive tax schedule. u 2 (Z) can be written as δυ(Z) where δ is a parameter that refers to the parent's degree of altruism toward the child. Skewness is a measure of the asymmetry of the probability distribution. Note that the increase in the downside risk implies (but is not implied by) a decrease in skewness, or equivalently, an increase in left skewness, as measured by the third central moment. Using a macro-level framework,[START_REF] Marchiori | The impact of weather anomalies on migration in sub-saharan Africa[END_REF] suggest that country-level rainfall deviations from the mean could affect international migration in sub-Saharan Africa through an economic geography channel. This channel consists of two main elements: urbanization and the ratio of GDP per capita between the immigration and the emigration countries. They test in a three-equation model the effect of rainfall deviations on the ratio of GDPs per capita, whose effect on the international migration rate is then given. These households may also diversify their income into livestock and nonfarm activities. However, local labor opportunities may be limited, while livestock is costly in terms of capital inputs and entails other forms of risks. We call it a psychological cost because it is measured in expected utility terms. See details in Appendix 2.A.2 Variables of physical capital characteristics reflect the household's ability to generate income and to face monetary risk. 
We do not include the household per capita income or the remittance variables here. We cannot construct a counterfactual of remittances for stayers, had they migrated, and we worry that the household income and the children's human capital are jointly determined by some household characteristics that we do not observe. To overcome such problems and capture the income effect, we introduce the log of the household house value. Note that in what follows, we may refer to the income and child human capital equations as the outcome equations. Unfortunately, we do not have information about the village population in 2005, so we use that of 2007 to compute the ratio of migrant individuals in each village (assuming that the village population did not vary much in 2 years). The RUMiC project, which started in 2008, includes yearly surveys on rural, urban, and migrant households. We use the second wave of the RUMiC-RHS since some variables that are important for our analysis were only collected from this wave. These variables include the children's school test scores and land size. Furthermore, in order to avoid a simultaneity bias and to account for the fact that the migration decision was made in the preceding year, we include the lags of most of our variables from the previous year's survey. As the Chinese urban labor market was affected by the financial crisis at the end of 2008, the migration decision and the return rate of migrants might have also been affected. [START_REF] Kong | The global financial crisis and rural–urban migration[END_REF] investigated this issue and concluded that there was a decrease in the migration rate following the economic downturn of the 2008 crisis. However, Dutronc-Postel (2019) argued that their findings could be due to the fact that their survey data include very short-distance migrations. Moreover, the actual onset of the negative consequences of this crisis started at the end of 2008. Therefore, any negative effect should be on those who decided to migrate at the end of 2008 or on the migration duration of those who migrated before the crisis hit the Chinese labor markets, resulting in their being counted as non-migrants in our framework. On the one hand, we are not interested in the potential migrants deciding their migration by the end of 2008 since we define as migrant any person with a migration duration of at least 6 months in that year. On the other hand, Dutronc-Postel (2019) provided evidence that the economic slowdown had neither a substantial nor a significant effect on the duration of migration in 2008. Therefore, we do not expect the economic crisis to have strong effects on the migration decisions in our survey. In China, schools usually send report cards to parents and require that they are returned with the parents' signatures ([START_REF] Zhao | Can money 'buy' schooling achievement? Evidence from 19 Chinese cities[END_REF]; Meng and Yamauchi, 2017). This suggests that parents or guardians who report this information are very likely to be aware of the children's school results. Three different specifications are used for the child human capital variables: we compute, for each household, either the mean of all children's test scores, or, in two alternative specifications, the test score of the oldest or the youngest child in each household. Exams from schools of different counties within a province may still present some differences, which cannot be controlled for.
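The three household-level specifications just described are simple to construct; a minimal sketch follows, where the column names (hh_id, child_age, test_score) are hypothetical and not the survey's actual variable names:

```python
# Sketch of the three child-human-capital specifications: household mean,
# oldest child's score, youngest child's score. Data are invented.
import pandas as pd

children = pd.DataFrame({
    "hh_id":      [1, 1, 2, 3, 3, 3],
    "child_age":  [8, 14, 10, 7, 12, 16],
    "test_score": [72.0, 85.0, 90.0, 60.0, 75.0, 88.0],
})

# Specification 1: mean of all children's test scores per household.
spec_mean = children.groupby("hh_id")["test_score"].mean()

# Specifications 2 and 3: test score of the oldest / youngest child.
oldest = children.loc[children.groupby("hh_id")["child_age"].idxmax()]
youngest = children.loc[children.groupby("hh_id")["child_age"].idxmin()]
spec_oldest = oldest.set_index("hh_id")["test_score"]
spec_youngest = youngest.set_index("hh_id")["test_score"]

print(spec_mean, spec_oldest, spec_youngest, sep="\n")
```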
Variability of rainfall distribution has been captured by either the variance (Giles and Yoo, 2007), the standard deviation [START_REF] Paxson | Using weather variability to estimate the response of savings to transitory income in Thailand[END_REF] or the rainfall coefficient of variation [START_REF] Rose | Ex ante and ex post labor supply response to risk in a low-income area[END_REF]. It is better to rely on the coefficient of variation as a measure of the riskiness of the environment since, unlike the variance and the standard deviation, this measure is not sensitive to scaling [START_REF] Rose | Ex ante and ex post labor supply response to risk in a low-income area[END_REF]. See Appendix. This dataset is more complete and up-to-date than the GHCN-Monthly version 2 precipitation dataset. The data are available at https://gis.ncdc.noaa.gov/maps/ncei/cdo/monthly. The number of weather stations changed over time, so in some cases, we linked more than one station to the same county. We define members in the labor force as individuals aged between 16 and 65 (not retired), or more than 65 but still working, who do not have any physical disability that affects their work capabilities and are not currently at school. In our preferred sample where only one parent may be a migrant, around 70% of households have only one child aged between 6 and 22 (at school), 25% have two children and almost 5% have three or four children. For simplicity, in the different specifications where we use the mean of the test scores of all children in the same household or the test score of the oldest or the youngest child in the household, we estimate equations (2.7) using all children in each household. We then compute the predicted values of test scores and the mean of each household's children's test scores, or we keep the test score of the oldest or the youngest child in each household, depending on the specification we are interested in. Note that math test scores are our preferred measure of child human capital, as they are generally considered more informative of learning and are more frequently used in the education literature. Given our definition of the income differential variable, we apply the exponential function to identify the location of the turning point in the curve of the income differential variable. The reported turning points are computed using math test scores as a measure of the child human capital levels. Dustmann et al. (2020), using the 2009 RUMiC-RHS, found evidence that the average risk aversion level of households is negatively associated with the migration probability. $u(y, z) = \mu(y + f(z))$ with $\mu' > 0$, $\mu'' < 0$, $f' > 0$ and $f'' < 0$, where $f(z)$ is the monetary equivalent of the child's human capital level $z$. All household characteristics are those of 2007, the year preceding the migration year, except for the land size variable. Land size information was not reported in 2007; hence the use of the 2008 data for this variable. Considering the sample used in Table 2.C.7, only 9% of the households reported that their land had been adjusted in the last 5 years. Unfortunately, we cannot have more precise information about whether the changes happened between 2007 and 2008 or before 2007. This is important as the household land may decrease following the migration of one of its members. As a robustness check, we re-estimate our models while dropping households that had their land adjusted in the last 5 years; results remain the same (estimations are not reported for simplicity).
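The scale-invariance point made at the start of this footnote block is easy to check numerically; a small sketch with made-up rainfall figures (not the weather-station series used in the paper):

```python
# The coefficient of variation is invariant to rescaling (e.g., changing
# rainfall units from mm to cm), while the variance and the standard
# deviation are not. Data are invented for illustration.
import numpy as np

rainfall_mm = np.array([820.0, 640.0, 990.0, 710.0, 880.0])
rainfall_cm = rainfall_mm / 10.0  # same series in different units

for name, r in [("mm", rainfall_mm), ("cm", rainfall_cm)]:
    var, sd = r.var(ddof=1), r.std(ddof=1)
    cv = sd / r.mean()  # coefficient of variation
    print(f"{name}: variance={var:.1f} sd={sd:.2f} cv={cv:.4f}")
# The variance and sd shrink by factors of 100 and 10; the cv is identical.
```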
The specification of the structural migration equation provides some important identifying restrictions, in that the migration cost is assumed independent of variables including the coefficient of variation of rainfall, the irrigated land, access to credit and some child individual characteristics. These variables are assumed to affect the household decision to send a parent for migration only through their effect on the expected household income and/or the expected child human capital, hence their exclusion from the structural probit equation. Using the samples where one or both parents are migrants and where any household member is a migrant, we find similar results for the risk variable. Let $h(x)$ denote a one-attribute differentiable function. $h'(x)$ and $h''(x)$ denote, respectively, the first and second derivatives of $h$. $\nu(z)$ can be written as $\delta\upsilon(Z)$ where $\delta$ is a parameter that refers to the household's degree of altruism toward the child. "If investment inputs are not perfect substitutes but are instead complements, government investment in the early years for disadvantaged children promotes investment in the later years." [START_REF] Cunha | Formulating, identifying and estimating the technology of cognitive and noncognitive skill formation[END_REF]. One important reason for the high levels of air pollution in China is the heavy reliance on coal as its primary source of energy. From 1990 until 2015, coal accounted for more than 72% of the energy produced in China (NBS of China, 2019). [START_REF] Molina | Pollution, ability, and gender-specific investment responses to shocks[END_REF] showed that in utero exposure to thermal inversions leads to lower adult cognitive abilities in Mexico, while Bharadwaj et al. (2017) reported that fetal exposure to carbon monoxide (CO) and particulate matter (PM10) has a negative effect on fourth-grade test scores in Chile. However, [START_REF] Rosales-Rueda | The persistent effects of early-life exposure to air pollution evidence from the indonesian forest fires[END_REF] found no apparent significant effect of in utero exposure to air pollution on children's cognitive function in Indonesia, while [START_REF] Shrestha | Early life exposure to air pollution, cognitive development, and labor market outcome[END_REF] suggested that air pollution lowered cognitive test scores in the same country. 6 Examples include [START_REF] Zivin | The unintended impacts of agricultural fires: Human capital in china[END_REF] who investigated the contemporaneous impacts of fires on the National College Entrance Examination (NCEE) scores of exposed individuals. [START_REF] Chen | Smog in our brains: Gender differences in the impact of exposure to air pollution on cognitive performance[END_REF] and Zhang et al. (2018) also studied the short-term effects of exposure to air pollution on cognitive performance using cognitive test scores. For example, in 1995, the total suspended particulates level in China was four times that of the United States in 1970, the year when the CAA was amended [START_REF] Tanaka | Environmental regulations on air pollution in china and their impact on infant mortality[END_REF]. In the 1990s, China's poor population was estimated at around 260 million [START_REF] Angang | China's economic growth and poverty reduction (1978–2002). In India's and China's Recent Experience with Reform and Growth[END_REF].
By comparison, in 1993 the number of poor Americans stood at 39.3 million, before falling throughout the decade to 31.6 million in 2000 [START_REF] Chaudry | Poverty in the united states: 50-year trends and safety net impacts[END_REF]. These costs include a decrease in sectoral exports in targeted locations [START_REF] Hering | Environmental policy and exports: Evidence from chinese cities[END_REF], hindrance of firm productivity [START_REF] Tang | The impact of command-and-control environmental regulation on enterprise total factor productivity: A quasi-natural experiment based on china's "two control zone" policy[END_REF] and a fall in foreign direct investment [START_REF] Cai | Does environmental regulation drive away inbound foreign direct investment? evidence from a quasi-natural experiment in china[END_REF]. The exact years are not specified in the original document. China has four formal administrative levels, which, classified from the highest to the lowest, are: provinces/autonomous regions/municipalities, prefectures/prefecture-level cities, counties/county-level cities/districts and townships/towns/sub-districts. As the brain development of the fetus may start as early as week four and continue even after birth, the longer the exposure of the mother to air pollution during this period, the bigger the potential damage to the brain may be. In animal models, prenatal exposure to air pollution modifies brain development, generally causing long-term problems in functions associated with memory systems [START_REF] Brockmeyer | How air pollution alters brain development: the role of neuroinflammation[END_REF]. Note that the non-TCZ designation does not mean that the locations are not affected by the environmental policy. It is more accurate to regard TCZ localities as those where the policy is more stringent, compared to the non-TCZ localities. However, as we do not have a measure of the intensity of exposure to the TCZ policy, we use the discrete choice of being a TCZ locality or not. If the TCZ policy were also responsible for reductions in air pollution in non-TCZ-designated localities, especially in those located near TCZ localities, that would only understate our results. A negative age of exposure refers to exposure since conception. Values of the SO2 concentrations are to be taken with caution. Panel B of Figure 3.C.1 shows a new trend of rising SO2 concentrations since 2003, and that may be due to China's accession to the World Trade Organization during that year. For the High School Entrance Exam Scores and Academic High School variables, we drop those reporting high school quality while at the same time reporting getting 9 or fewer years of education. We cannot use ground-based pollution data as they are not available for years before 2000 at the county level. [START_REF] Chen | The effect of air pollution on migration: evidence from china[END_REF] found no statistical difference between the AOD data and the ground-based pollution data in China, conditional on geographic and year fixed effects. 23 https://goldsmr4.gesdisc.eosdis.nasa.gov/opendap/MERRA2_MONTHLY/M2TMNXAER.5.12.4/contents.html 24 Details on how pollution data are manipulated are described in Appendix 3.B.2. 25 More details on how TCZ status is assigned to counties in CHIP13 are described in Appendix 3.B.1. 26 The number of weather stations changed over time, so in some cases, we linked more than one station to the same county.
Each model is considered as independent of and incomparable to the other models in the different columns (see e.g., [START_REF] Buis | Logistic regression: When can we do what we think we can do[END_REF]. The 1999 cohort is represented by a smaller number of observations. Consequently, AMEs on that cohort may end up with large standard errors and wide confidence intervals. However, this loss of precision does not undermine the usefulness of the results. [START_REF] Wooldridge | Correlated random effects models with unbalanced panels[END_REF] developed the correlated random effects approach to be used for nonlinear models under unbalanced panels. Tables are omitted for simplicity but are available upon request. The logistic regression with the nearest-neighbor matching within the caliper (1:1 pairing) is used to construct the propensity scores. Note that Stata does not account for the fact that the propensity scores used for the matching procedure are estimated, not the true ones. Appendix 1.A Proof of Proposition 1. The FOC is given by $F(y_1, y_2, Z_1, Z_2, m^*) = 0$, where $m^*$ can be written as a function of $y_1$, $y_2$, $Z_1$ and $Z_2$: $m^* = m(y_1, y_2, Z_1, Z_2)$. The FOC then becomes $F(y_1, y_2, Z_1, Z_2, m(y_1, y_2, Z_1, Z_2)) = 0$. main results of the empirical analysis with some robustness checks. Section 6 concludes. Theoretical model. We build a simple theoretical framework where a rural household, with at least one child aged less than 16, has to make a decision about sending a parent for migration. Drawing on the NELM theory, we assume that the migration decision is made collectively by the household members. Under the assumption of absent, incomplete or inaccessible credit and insurance markets, migration can be used as a self-insurance mechanism, as it reduces the consequences of the occurrence of the income risk. 8 We assume that the household's main activity is farming its own land and that the household cares about both its income and the child's human capital. Preferences are modeled by a bivariate utility function, $u(y, z)$, where $y$ is the household's income from farm production and other off-farm activities, and $z$ is the household's child human capital. We consider the following standard assumptions: • $u$ is twice differentiable; 9 • the marginal utilities with respect to each argument are strictly positive ($u^{(1,0)} > 0$ and $u^{(0,1)} > 0$); • the household is strictly risk averse, i.e. the utility function is strictly concave: $u^{(2,0)} < 0$, $u^{(0,2)} < 0$ and $u^{(2,0)} u^{(0,2)} - (u^{(1,1)})^2 > 0$. We do not impose any restriction on the sign of the cross-second derivative of $u$, and we thus consider three possible cases: $u^{(1,1)} < 0$, $u^{(1,1)} = 0$ and $u^{(1,1)} > 0$. In the terminology of Epstein and Tanny (1980), the household is said to be correlation averse (neutral, loving) if $u^{(1,1)} < 0$ ($= 0$, $> 0$). For such a household, the marginal utility of income is lower (unchanged, higher) when the child has higher levels of human capital. Moreover, depending on whether preferences are correlation loving or averse, the child human capital can be a complement or a substitute for income. 10 The household is exposed to an agricultural risk in the form of a crop failure with probability $p$ ($0 < p < 1$). This risk induces two types of loss, a monetary one ($D_y$, 8 Ehrlich and Becker (1972) showed that market insurance and self-insurance are substitutes.
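The FOC stated in Appendix 1.A above defines $m^*$ implicitly, and the comparative statics the text relies on follow from implicit differentiation. A sketch, under the standard assumption (not stated verbatim in the source) that $F$ is continuously differentiable with $\partial F/\partial m \neq 0$ at the optimum:

```latex
\frac{\partial m^*}{\partial y_1}
  \;=\; -\,\frac{\partial F / \partial y_1}{\partial F / \partial m}
  \qquad \text{evaluated at } m^* = m(y_1, y_2, Z_1, Z_2),
```

and analogously for $y_2$, $Z_1$ and $Z_2$, so the sign of each effect is read off the signs of the partial derivatives of $F$.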
9 The partial and the cross-derivatives $u^{(k_1,k_2)}$ of a utility function $u$ with two arguments $x_1$ and $x_2$ are given by the following expression: $u^{(k_1,k_2)}(x_1, x_2) = \partial^{k_1+k_2} u(x_1, x_2) / (\partial x_1^{k_1} \partial x_2^{k_2})$. 10 See Appendix 2.A.1 for details about the relationship between the correlation attitude of the household and the functional form of its utility function. Appendix 2.A Theoretical model. 2.A.1 Functional forms of the utility function. The household's correlation aversion, i.e. how the marginal utility of income varies with respect to the level of child human capital, given by $u^{(1,1)}$, allows one to determine which functional form of this utility function is to be used. If the household is assumed to be correlation neutral ($u^{(1,1)} = 0$), then additive separability between the household's earnings and the child's human capital should be assumed ($u(y, z) = \mu(y) + \nu(z)$ with $\mu' > 0$, $\mu'' < 0$, $\nu' > 0$ and $\nu'' < 0$). 40 41 If the household is correlation averse ($u^{(1,1)} < 0$), we assume the non-separable form of the utility function ($u(y, z) = \mu(y + f(z))$ with $\mu' > 0$, $\mu'' < 0$, $f' > 0$ and $f'' < 0$, where $f(z)$ is the monetary equivalent of the child's human capital level $z$). Finally, if the household is correlation loving ($u^{(1,1)} > 0$), we can then consider multiplicative separability between the household's earnings and the child's human capital ($u(y, Z) = \mu(y)\nu(Z)$ with $\mu' > 0$, $\mu'' < 0$, $\nu' > 0$ and $\nu'' < 0$). 2.A.2 Probability of migration. Using a Taylor expansion of order 2 around $(Y_0, Z_0)$, we obtain: Sichuan). These provinces account for about 40% of China's production of rice. Winter wheat, which accounts for more than 90% of the total wheat production in China, is mainly grown in Northeastern China, including 5 of the provinces in this study (Hebei, Jiangsu, Anhui, Henan and Hubei). These provinces account for about 53% of the wheat production of the country. Finally, corn is mainly planted in North, Central and hilly South-West China. It is a major crop in one of our provinces, Hebei, which produces about 10% of the country's total corn production. Since Hebei is a Northeastern region, the main corn production is spring corn. 42 According to the Food and Agriculture Organization (FAO), water is needed more for grown crops than for crops that were just planted. 43 The water need at the planting stage is evaluated at 50% of the crop water need during the mid-season stage. It starts to increase during the crop development stage and reaches its maximum at the beginning of the mid-season stage. For the dry harvested crops that we consider in this study, the water need is minimal during the late-season stage, when crops mature and are harvested. Relying on this information, we determine for each major crop in our study the most important months for crop production in the surveyed areas.
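For reference, the generic second-order bivariate Taylor expansion around $(Y_0, Z_0)$, which is the form the derivation announced in Appendix 2.A.2 relies on; the source's own expression (likely taken inside an expectation) may differ:

```latex
u(Y, Z) \approx u(Y_0, Z_0)
 + u^{(1,0)}(Y - Y_0) + u^{(0,1)}(Z - Z_0)
 + \tfrac{1}{2}\, u^{(2,0)} (Y - Y_0)^2
 + u^{(1,1)} (Y - Y_0)(Z - Z_0)
 + \tfrac{1}{2}\, u^{(0,2)} (Z - Z_0)^2 ,
```

with all derivatives evaluated at $(Y_0, Z_0)$.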
The average years of education of the child's parents. Village characteristics:
Primary school in the village | 1 = village has a standard six-grade primary school, another kind of primary school or a teaching spot
Unified irrigation system | 1 = the village provides a unified drainage and irrigation system
Furrow machine | 1 = the village provides a furrow machine
Plant disease prevention and treatment | 1 = the village implements united plant disease and insect prevention and treatment
United purchasing service | 1 = the village provides a united production material purchasing service
Share of village migrants (2005)
Evidence from China. Introduction. Human capital is shown to have an important role in determining labor market outcomes (Heckman et al., 2006) and as a driver of long-term economic growth (Schultz, 1961; Romer, 1986). Its importance may, in part, explain why governments allocate an average of 4.5% of their GDP to education (The World Bank, 2017), and why Chinese families spent, in 2017 alone, an average of 21.6% of their total disposable income on private tutoring [START_REF] Guo | Does private tutoring improve student learning in china? evidence from the china education panel survey[END_REF]. However, human capital formation depends on many factors other than monetary investments. These factors include, according to the "fetal origins" hypothesis and its extension into the early childhood environment, in utero and early-life health conditions and shocks [START_REF] Almond | Childhood circumstances and adult outcomes: Act ii[END_REF]. In this paper, we investigate how a health shock, induced by an environmental regulation over which authorities have full control, can affect long-term development of human capital. 1 Two channels may be in place: one through improvements of air quality induced by the environmental policy, while the other may be through changes in parental earnings if the latter interfere with early-childhood parental investments. 2 Moreover, in addition to 1 Most early-life health shocks considered in the literature are difficult to predict or to avoid. For example, Almond et al. (2010) examined, in China, the effect of prenatal exposure to the 1959-1961 famine on the likelihood of being illiterate as an adult. 2 Parental investments are shown to determine the formation of both noncognitive and cognitive skills [START_REF] Cunha | Formulating, identifying and estimating the technology of cognitive and noncognitive skill formation[END_REF].
00411247
en
[ "info.info-cl" ]
2024/03/04 16:41:24
2009
https://hal.science/hal-00411247/file/saakm09-camanager-final.pdf
Florence Amardeilh Mondeca email: [email protected] Danica Damljanovic email: [email protected] Kalina Bontcheva CA Manager: a Framework for Creating Customised Workflows for Ontology Population and Semantic Annotation Keywords: semantic annotation, semantic repositories, ontologies, content augmentation, knowledge acquisition, ontology population, software artefacts, key concept identification The aim of semantic annotation is to bridge the large gap between structured knowledge and the large volumes of unstructured text data that companies and people need to deal with daily. Alas, the process is very laborious and error-prone, even when performed semi- or fully automatically. Although widely researched, the two key steps in this process - semantic annotation and ontology population - still hold outstanding challenges. While there are numerous tools in existence, many lack compliance with recent standards, but more importantly, lack the flexibility to customise the annotation workflow. In this paper, we present the Content Augmentation Manager Framework, which bridges the gap between information extraction tools and semantic repositories by allowing easy plug-in of various types of components. It is also capable of controlling the process of semantic annotation and ontology population by means of consolidation algorithms. INTRODUCTION Gartner predicted in 2002 1 that for the next decade more than 95% of human-to-computer information input will involve textual language. They also predicted that by 2012 taxonomic and hierarchical knowledge mapping and indexing will be prevalent in almost all information-rich applications. There is a tension here: between the increasingly rich ontology-based semantic models on the one hand, and the continuing prevalence of human language materials on the other. This process may be characterised as the dynamic creation of interrelationships between ontologies and unstructured and semi-structured textual content in a bidirectional manner. However, transforming huge amounts of unstructured text into a semantically interlinked knowledge space is a big challenge. Two key parts of this process are: 1) semantic annotation, and 2) ontology population. We define semantic annotation as a formal representation of content, expressed using concepts, relations and instances as described in an ontology, and linked to the original resource. Ontology instances are usually stored in a knowledge base independently from the annotated resource, as in the case of KIM [START_REF] Popov | KIM -Semantic Annotation Platform[END_REF] or ITM [START_REF] Amardeilh | A semantic web portal with hlt capabilities[END_REF]. The process of adding new class instances or property values to a knowledge base is called knowledge acquisition or ontology population. As such, ontology population views semantic annotation as a means of data-driven enrichment of an existing knowledge base. Although widely researched, both semantic annotation and ontology population tasks still remain a big challenge, as the role of human annotators remains paramount. Consequently, high-quality automation is one of the most important requirements in order to ease the knowledge acquisition bottleneck, particularly for annotating large collections of legacy documents [START_REF] Uren | Semantic annotation for knowledge management: Requirements and a survey of the state of the art[END_REF].
While there are numerous tools in existence, many lack compliance with recent standards, but more importantly, lack the flexibility to customise the annotation workflow. Moreover, the workflow needs to also accommodate human as well as automatic annotators. In this paper, we present the Content Augmentation Manager Framework (CA Manager), which is capable of performing and controlling the process of semantic annotation and ontology population by means of consolidation algorithms. This framework supports ontology population from text (semi)automatically, by allowing easy plug-in of various types of components including information extraction tools, customised domain ontologies, and diverse semantic repositories. In other words, this framework helps to bridge the gap between information extraction tools and the semantic repositories which are used to store the collected knowledge. This paper is structured as follows. In Section 2 we present the CA Manager. Evaluation results are presented in Section 3. Similar approaches are discussed in Section 4. Finally, we draw our conclusions and plan for future work in Section 5. THE CONTENT AUGMENTATION MANAGER FRAMEWORK The core philosophy of the CA Manager is to bridge the gap between the content augmentation tools and the semantic repository tools. It is conceived as a middleware, meaning that it is responsible neither for the information extraction task itself nor for the knowledge storage, but it is flexible enough to adapt to any domain ontology, various content augmentation tools, semantic repositories and workflow requirements. It is, amongst other things, capable of controlling the quality and the validity of information extraction results against an ontology, matching them against existing resources (the application's knowledge base or repositories from the Linked Open Data initiative, for instance), and enriching them. To achieve that goal, the CA Manager relies on the recommendations formulated by the Semantic Web community (RDF/OWL languages, Service Oriented Architecture) combined with a UIMA-based infrastructure which has been enriched and customized to address the specific needs of semantic annotation and ontology population tasks. The UIMA (Unstructured Information Management Architecture) framework aims at providing a development platform for systems that analyze large volumes of unstructured information in order to discover knowledge that is relevant to an end user 2 . We adopted UIMA as the foundation of the internal architecture of the CA Manager, due to its ease of integration and composition of internal or external modules, and more importantly its wide acceptance by the Information Extraction (IE) community. However, although UIMA provides the building blocks to develop knowledge acquisition applications based on text-mining components, it does not give any guidelines as to which steps should be arranged in which order. Moreover, none of the existing components in the pipeline addresses the issue of controlling the quality and validity of the generated annotations/instances. In addition, its Common Analysis Structure (CAS) defines a high-level annotation schema, but it needs to be redefined for each new application need. Lastly, it uses a proprietary way to expose web services (the Vinci IBM protocol), which makes it even more complex, as it does not reuse open Semantic Web standards.
Therefore, we implemented the CA Manager in order to develop a flexible architecture based on a combination of several UIMA Analysis Engines. UIMA provides an Eclipse plugin facilitating the definition and customisation of each Analysis Engine (stored in an XML file) needed in the target application and their ordering as a workflow. Therefore, each step of the annotation workflow is a component that can be plugged in or out according to the objectives of the final application. At the same time, we aimed at improving the UIMA infrastructure with the systematic use of Semantic Web standards. We defined an RDF-based annotation schema dedicated to ontology population and semantic annotation tasks, composed of entities, properties, annotations and offsets. This annotation schema is produced after applying the first Analysis Engine and then enriched and controlled by the following ones during the whole duration of the workflow process. We also provide a distributed service-oriented architecture relying on languages and protocols defined for the Semantic Web, especially for easing the integration with external components through the use of open web services. Furthermore, the CA Manager proposes a default workflow composed of a list of logical steps dedicated to semantic annotation and ontology population, see Figure 1: • extracting the valuable knowledge; • consolidating knowledge; • storing. These three phases allow the connection with information extraction tools (defined as UIMA analysis engine plugins or provided as web services), semantic repositories and domain ontologies or corpora. Extracting knowledge from text. The information extraction component is made of two steps, split (optional) and extract. Split divides the input document. KCIT. KCIT [START_REF] Damljanovic | A text-based query interface to owl ontologies[END_REF] automatically retrieves key concepts from legacy documents with regard to the domain ontology. These annotations are created based on the pre-condition that a specific part of a document is referring to a particular ontology resource if the lemmas of the two match. By matching lemmas, we ensure that all morphological inflections of the relevant terms will be matched. The KCIT process can be broken down into several steps: • Building a list of relevant terms. Given an ontology, lexicalisations of all ontological resources (classes, instances, properties, property values) are lemmatised and added to a gazetteer list. • Annotating the legacy content. The legacy content is first lemmatised with a morphological analyser. It is then matched against the gazetteer list created in the previous step. • Resolving conflicts. This step includes filtering annotations and solving ambiguity problems such as removing redundant annotations. To build a list of relevant terms dynamically, we first extract a list of the ontology resource names (i.e., fragment identifiers) and their assigned property values (e.g., label and datatype property values). Each item from the list is further processed, based on some heuristic rules derived from different ontology designs. Although there is no need to customise KCIT when using it with different ontologies, customisation might yield better results. By default, KCIT applies several rules such as replacing dashes and underline characters with spaces and also splitting camelCase words. In addition, it applies some heuristic rules which are optional, as their usage depends on the lexicalisations available in the ontology.
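A minimal sketch of the lemma-matching idea just described; this is not KCIT's actual code: the tiny dictionary stands in for the morphological analyser, and all labels and URIs are invented for illustration:

```python
# Toy illustration of ontology-driven gazetteer matching by lemma.
# A real system would use a proper morphological analyser; the small
# dictionary below is a stand-in, and labels/URIs are invented.
LEMMAS = {"resources": "resource", "transducers": "transducer",
          "pipelines": "pipeline"}

def lemma(token: str) -> str:
    t = token.lower()
    return LEMMAS.get(t, t)

# Step 1: build a gazetteer of lemmatised lexicalisations -> ontology URI.
ontology_labels = {
    "Language Resource": "gate:LanguageResource",
    "JAPE Transducer": "gate:JAPETransducer",
    "Pipeline": "gate:Pipeline",
}
gazetteer = {tuple(lemma(w) for w in label.split()): uri
             for label, uri in ontology_labels.items()}

# Step 2: annotate text by matching lemmatised token n-grams.
def annotate(text: str):
    tokens = text.split()
    lemmas = [lemma(t) for t in tokens]
    hits = []
    for i in range(len(tokens)):
        for j in range(i + 1, len(tokens) + 1):
            uri = gazetteer.get(tuple(lemmas[i:j]))
            if uri:
                hits.append((" ".join(tokens[i:j]), uri))
    return hits

print(annotate("The pipelines load language resources and JAPE transducers"))
```

Matching on lemmas rather than surface forms is what lets "pipelines" or "transducers" hit the singular ontology labels, at the cost of the disambiguation problems discussed in the evaluation below.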
KCIT remains domain-independent, as all domain-specific issues are implemented through parameters. Typically, the output of a content augmentation tool is in some kind of XML format and is not connected directly to the ontology. In such cases, it needs to be translated into the generic Annotation Schema (AS) of the CA Manager through the use of a mapping. However, as KCIT produces ontology-based annotations, such a mapping is not necessary in this case, because the annotations are directly grounded in the ontology model. Consolidating annotations and knowledge with an ontology repository. We studied in [START_REF] Amardeilh | Semantic annotation and ontology population[END_REF] the various possible cases of instance and annotation creation and identified two axes of consolidation: 1. the first axis defines the ontological element concerned, i.e. an instance of a class, a property value or a semantic annotation; 2. the second axis defines the constraints to be checked, i.e. non-redundancy, the domain and range restrictions and the element's cardinality. Each of the CA Manager consolidation algorithms takes into account these two axes. In the Information Consolidation component, they are performed through three steps: merge, control and, optionally, infer. The first step, merge, eliminates the duplicates within the CAS (the same entity or annotation occurring more than once in the CAS) and queries the semantic repository in order to retrieve the corresponding URI of the concerned entity or annotation if not present. These queries can be simple (class + string label) or multicriteria (class + set of required properties that identify an entity unambiguously in the repository). For instance, a person can be queried by name. However, in cases of homonymy, looking at the person's name is clearly not enough, and one might want to query on particular properties, such as the date of birth, that can better discriminate several instances of persons sharing the same name. This multicriteria search is built from the restricted properties whose minimum cardinality is 1; we call these properties identifier properties as they are required to identify and define an instance of a concept. In the case of law case reports, the identifier properties are the court, the location of the court, the date of the decision and the decision itself. If it is not possible to disambiguate between two instances of the semantic repository, because for example no identifier properties have been extracted and annotated as such, then the new entity or annotation is tagged with the metadata "invalid". The control step verifies that the extracted entity or annotation is valid against the ontology model. This implies controlling domains and ranges, cardinalities, date formats and temporal information, number formats and metric systems, etc. For instance, if in the preceding step the extracted entity was merged with an existing instance, the algorithms look at the properties of the extracted entity: are these property types authorized for the entity's class? Do these properties already exist on the merged instance? Do they have the same values or different values? If the values differ, how do we know which value is the right one, especially when dealing with thesaurus values such as geographical locations, or with time values such as dates? The algorithms try to resolve these issues automatically and, when this is not possible, they also mark the new entity or annotation with the "invalid" metadata.
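A sketch of the merge step described above: look an extracted entity up in the repository by class plus its identifier properties, reuse the URI on a unique match, and tag the entity "invalid" when the match is ambiguous. The in-memory repository, field names and URIs are invented for illustration, not taken from the CA Manager code:

```python
# Identity resolution by class + identifier properties (minimum cardinality 1).
repository = [
    {"uri": "kb:Person_1", "class": "Person", "name": "J. Smith",
     "birth_date": "1970-01-01"},
    {"uri": "kb:Person_2", "class": "Person", "name": "J. Smith",
     "birth_date": "1985-06-15"},
]

def merge(entity: dict, identifier_props: list) -> dict:
    candidates = [r for r in repository
                  if r["class"] == entity["class"]
                  and all(r.get(p) == entity.get(p)
                          for p in identifier_props if p in entity)]
    if len(candidates) == 1:
        entity["uri"] = candidates[0]["uri"]   # reuse the existing URI
    elif len(candidates) > 1:
        entity["invalid"] = True               # ambiguous: manual validation
    else:
        entity["uri"] = "kb:new"               # no match: create a new instance
    return entity

# Name alone is ambiguous; name + birth date discriminates.
print(merge({"class": "Person", "name": "J. Smith"}, ["name"]))
print(merge({"class": "Person", "name": "J. Smith",
             "birth_date": "1985-06-15"}, ["name", "birth_date"]))
```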
All invalid statements are stored in the semantic repository on the server so that they can be retrieved and presented to the end-user for manual validation, if required by the target application. The last step of this component, infer, is optional. It applies inference rules through a reasoning engine in order to discover new entities or new relations between them and also to control the overall coherence and quality of the semantic repository. Storing annotations and knowledge in repositories. The Information Storage component has two steps, serialise and store. The serialisation step parses the enriched and consolidated annotation schema in order to generate an output in the requested application format (XML, RDF, OWL, NewsML, CityGML, etc.). The second step, storage, is optional as it depends on whether the application directly digests the serialised format or stores the results in a knowledge store (such as ITM) and/or in a dedicated annotation server (such as Sesame). The CA Manager framework is open-source and available on SourceForge.net 3 . The release includes documentation describing how to install the CA Manager, how to configure workflows, how to develop new plugins (for connecting a new information extraction engine, for example), and so on. Moreover, the above pipeline is also exposed as a web service, and a testing web client application is available from http://client2.mondeca.com/scan/. EVALUATION Within the TAO project 4 we implemented different workflows (a combination of different ontologies, semantic annotation tools, semantic repositories and corpora) for evaluating the flexibility and the scalability of the CA Manager framework. One of these consisted of loading the upper-level PROTON Ontology 5 into the Sesame RDF repository and applying simple regular expressions to the corpus. The other two were based on the GATE ontology developed within the TAO project, both using KCIT as the content augmentation system, with one semantic repository being ITM 6 and the other being the Sesame RDF repository. Moreover, we have challenged the flexibility of the CA Manager beyond the TAO project by trialing it in different environments with various information extraction tools as well. For more details, see [START_REF] Damljanovic | CA Manager Framework: Creating Customised Workflows for Ontology Population and Semantic Annotation[END_REF]. The GATE domain ontology 7 contains knowledge about GATE components, modules, plugins and language processing resources and was used by KCIT to annotate software artefacts about the GATE software 8 . In the rest of this section, we present the evaluation of the two main steps described above, information extraction and consolidation, based on the described workflow configuration, using the GATE ontology, the KCIT annotation tool and the ITM semantic repository. We first evaluated the KCIT information extraction tool, as consolidation is entirely dependent on the quality of the KCIT output. As KCIT is primarily a semantic annotation tool, we measured its performance using standard information extraction measures, namely precision and recall. We collected 64 GATE software artefacts of various types, namely source code, source documentation, the GATE user manual, publications and forum posts from the GATE mailing list. Then, we organised an annotation exercise with 12 human subjects, to whom we gave a manual on how to proceed in validating annotations and creating a gold standard.
Each document has been assigned to two participants so that we can calculate inter-annotator agreement. Inter-annotator agreement. Inter-annotator agreement is used to establish the upper bounds on performance of information extraction tools such as KCIT. Table 1 shows precision, recall and F-measure values, based on the results of this experiment. For computing these measures between two annotation sets, one can use one annotation set as gold standard and the other set as system output. One can also switch the roles of the two annotation sets. The precision and recall in the former case become recall and precision in the latter, respectively; but the F1 remains the same in both cases. Table 1: Inter-annotator agreement. Disagreements between annotators occurred for all document types excluding forum posts, with the source code being most problematic. In some cases, annotators did not agree on the annotation type. For instance, resources was annotated as a mention of gate:GATE-Resource 9 by one annotator, whereas another annotator labeled it as gate:ResourceParameter. In some cases, annotators decided to delete an annotation, such as in the sentence Couldn't get resource data for gate.creole.morph.Morph, although the context indicates that resource refers to the GATE-Resource concept. Another interesting example is annotating Annotation-Schema() with brackets included, or protected FeatureMap features annotated as a whole, not only the features part, as done by the other annotators. We used the human-annotated corpus mentioned above to optimise the KCIT tool. In the first iteration we compared the automatic annotations of KCIT against the gold standard. Then, the mistakes were examined manually and rules for improving KCIT performance were derived and implemented, mostly concerning the filtering and disambiguation phase. After improving KCIT, 4956 annotations (out of 6175 in total) were correct, 49 partially correct 10 , 347 annotations were missing (manually added by participants who created the gold standard), and 823 annotations were wrong (manually deleted by the participants who created the gold standard). These were used in the calculation of the precision and recall figures reported in Table 2. We conclude that with the proper configuration and tailoring of the filtering phase, very good results can be achieved. The quality of the KCIT annotations approaches that of human annotators, while offering significant gains in terms of the annotation effort required. 9 gate: is used instead of the full namespace, which is http://gate.ac.uk/ns/gate-ontology 10 Manually modified by the participants who created the gold standard, where modifications include, for example, extending the annotation to refer to the longer string; for instance, if ian roberts is not annotated as a whole, but only ian is, then this is considered partially correct. Evaluation of the information extraction and consolidation phase. We selected another 20 documents to serve as a representative corpus of GATE software artefacts, on which to evaluate KCIT's performance. This selection was made in order to cover not only the knowledge about GATE, but also different types of documents (structured, semi-structured, and unstructured). As before, these were annotated manually and then used to calculate precision and recall values as presented next.
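The symmetry noted above (swapping which set plays the gold standard swaps precision and recall but leaves F1 unchanged) can be illustrated in a few lines; the spans below are invented, not taken from the GATE corpus:

```python
# Annotations modelled as (start, end, type) spans; data are invented.
set_a = {(0, 9, "GATEResource"), (12, 20, "Pipeline"), (25, 31, "Plugin")}
set_b = {(0, 9, "GATEResource"), (12, 20, "Pipeline"),
         (40, 45, "Corpus"), (50, 55, "Document")}

def prf(gold: set, system: set):
    correct = len(gold & system)
    precision = correct / len(system)
    recall = correct / len(gold)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(prf(set_a, set_b))  # P=0.50, R=0.67, F1=0.57
print(prf(set_b, set_a))  # P=0.67, R=0.50, F1=0.57 (P and R exchange)
```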
Information extraction. To evaluate the information extraction phase, the automatic annotations produced by KCIT were compared against the gold standard, using the GATE Benchmarking tool [START_REF] Cunningham | GATE: A Framework and Graphical Development Environment for Robust NLP Tools and Applications[END_REF]. The results are shown in Table 3. For the 20 selected documents, 4523 created annotations were correct, 41 annotations were partially correct, 126 annotations were missing, and 255 were spurious. Further inspection of the annotated documents revealed that the majority of the spurious/missing annotations were due to errors by the GATE morphological analyser. For example, it could not correctly extract the roots of acronyms and camelCased words: the root of LanguageResources remained LanguageResources. In addition, there were many wrong annotations of the word learn. The problem arises from KCIT not being able to disambiguate correctly based on the local context, resulting in each appearance of the word learn, e.g., learn GATE using movie tutorials, being annotated as referring to the GATE machine learning plugin. Similarly, many annotations were created for each mention of the word resources, even though there was no reference to GATE resources. For example, when reporting problems on the mailing list, a user said I cannot waste too much resources on the server..., meaning computational time, not GATE components. Nevertheless, as demonstrated by the overall results, such problematic cases were quite infrequent. Table 2: Average precision and recall values for all documents. The third class of mistakes arose from overlapping annotations not being filtered out properly by KCIT. For example, ANNIE NE Transducer was annotated as the whole string referring to the eponymous processing resource, but also Transducer was annotated as a mention of JAPE Transducer, which again should have been filtered out as redundant. Overall, we can conclude that the performance of KCIT is of good quality, especially on domain-specific documents, and can be a good basis for evaluating the consolidation algorithms. For example, forum posts and java classes were annotated with very high precision and recall. As documents get more abstract and generalised, more ambiguities creep in (e.g., peer-reviewed publications) and KCIT's performance degrades. Consolidation phase. To evaluate the consolidation algorithms, we applied recall and precision measures on the same corpus using the GATE ontology, KCIT exposed as a service, and the ITM repository for storage. Here the recall and precision measures are applied to the semantic annotations produced by KCIT and exploited for evaluating the following two tasks: • ontology population (knowledge instances newly created from annotations) and • semantic annotation (semantic annotations controlled with regard to the instantiated concepts in the repository). Hence, we obtain the following two adapted measures: • Precision measures the number of annotations/instances correctly acquired divided by the total number of annotations/instances acquired. • Recall measures the number of annotations/instances correctly acquired divided by the number of annotations/instances returned by KCIT. Table 4 shows the results according to the two different tasks. We want to emphasize the fact that we are not evaluating the KCIT results (done in the previous section) or the quality of the ontology model, but the performance of the consolidation algorithms themselves.
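Table 3's exact figures are not reproduced here, but the stated counts for the 20-document corpus imply approximate values. As a hedged back-of-the-envelope check that ignores the 41 partially correct annotations (the paper's own convention for them, often a half-credit, may shift these slightly):

```latex
P = \frac{4523}{4523 + 255} \approx 0.947, \qquad
R = \frac{4523}{4523 + 126} \approx 0.973, \qquad
F_1 = \frac{2PR}{P + R} \approx 0.960 .
```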
First, it is important to notice that the consolidation algorithms reduce the final number of knowledge instances and annotations derived from the KCIT annotations. That means that the merging step is successful in resolving different annotations to the same instance in the knowledge base. For example, in the movies.xml file, out of 129 annotations generated by KCIT, the CA Manager created only 46 knowledge instances and 27 semantic annotations. For instance, KCIT annotated the two terms ontology tool and Ontology Tool, which in fact correspond to the same instance of the class GATEPlugin, with both labels as names and the following URI: http://gate.ac.uk/ns/gate-ontology#Ontology_Tools. This URI is then used as the reference link to create the semantic annotation between the document and this instance. So instead of having two annotations, only one is produced in this case, and the labels have been consolidated on the knowledge instance. The same occurs, for example, with the term ANNIE, which is annotated 9 times by KCIT, whereas at the end of the CA Manager pipeline there is only one semantic annotation referring to the instance ANNIE of class "GATEPlugin". It appears that sometimes the consolidation of the semantic annotation on one label has not been correctly achieved by the CA Manager, producing irrelevant annotations or instance references, as some duplicates remained. For example, the terms "pipeline" and "Pipeline" produced four knowledge instances: two with the URI gate:Pipeline and two with the URI gate:GATEController. At most, we would like to have one of each URI if the CA Manager could not disambiguate which instance the annotation refers to, but it should aggregate the two labels "pipeline" and "Pipeline" on each URI value. This leads to a precision of only 76.5% for creating knowledge instances; as the semantic annotations refer to these instances by their URI, the precision result is slightly better, reaching 93.3%. Indeed, if in the knowledge instance results we obtain four instances with the same URI, each having its own label, in the semantic annotation we point to the URI, not the individual labels. As such, the different knowledge instances possessing the same URI are merged into one with now four labels, and the semantic annotation can refer to this federated instance, improving the results. In fact, the more the CA Manager is used, the better it enriches the knowledge base and therefore the semantic annotation results. The CA Manager obtains 100% recall on both tasks; hence there is no loss of information after processing the KCIT annotations to produce the final semantic annotations and knowledge instances. The CA Manager consolidation algorithms that deal with merging need to be improved in order to eliminate duplicated terms with different orthographic labels such as "datastore", "data store", "DATASTORE", and "Data Store", which must be merged together in the same knowledge instance and thus produce only one semantic annotation referring to that particular instance. On the other hand, the consolidation algorithms that control the ontology model perform well, which is not very difficult in the GATE case study as we mostly refer to class instances. There are no annotations which refer to relations between knowledge instances, for example. To do this, we need to improve the linguistic analysis to support that feature and thus properly evaluate this functionality.
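A sketch of the label normalisation the merge step still needs, according to the paragraph above: collapsing orthographic variants onto one knowledge instance by comparing labels after lowercasing and stripping non-alphanumeric characters. This is an assumed design choice for illustration, not the CA Manager's implementation:

```python
import re
from collections import defaultdict

def norm(label: str) -> str:
    # Normalisation key: case-insensitive, ignoring spaces and punctuation.
    return re.sub(r"[^a-z0-9]", "", label.lower())

labels = ["datastore", "data store", "DATASTORE", "Data Store",
          "Pipeline", "pipeline"]
instances = defaultdict(list)   # normalised key -> all observed labels
for lab in labels:
    instances[norm(lab)].append(lab)

for key, variants in instances.items():
    # One instance per key; the variants are kept as alternative labels.
    print(key, "->", variants)
```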
This is the case in the research and industrial projects in which the CA Manager is now used. RELATED WORK KIM [START_REF] Popov | KIM -Semantic Annotation Platform[END_REF] performs semantic annotation and ontology population automatically with respect to its ontology, by identifying key phrases and Named Entities (NE). As NE they consider people, organizations, locations, and others referred to by name. They use GATE [START_REF] Cunningham | GATE: A Framework and Graphical Development Environment for Robust NLP Tools and Applications[END_REF] for NE recognition. Our approach is more flexible than KIM, in the sense that the CA Manager is a generic framework and can thus support semantic annotation based on any information extraction tool, and also any ontology or semantic repository. Other similar work includes frameworks such as S-Cream [START_REF] Handschuh | S-CREAM -Semi-automatic CREAtion of Metadata[END_REF], MnM [START_REF] Motta | MnM: Ontology Driven Semi-Automatic and Automatic Support for Semantic Markup[END_REF], Artequakt [START_REF] Alani | Web-based Knowledge Extraction and Consolidation for Automatic Ontology Instantiation[END_REF] and OntoSophie [START_REF] Valarakos | Enhancing ontological knowledge through ontology population and enrichment[END_REF]. We can notice important differences between the CA Manager framework and similar approaches: some of them use machine learning techniques, some are based on high-level generic ontologies such as PROTON 11 rather than domain-oriented ontologies, or they perform either ontology population or semantic annotation. The main difference between those platforms and the CA Manager is the fact that the CA Manager preserves the independence between the content augmentation tool, such as KCIT, and the semantic repositories (Sesame 12 , ITM 13 ). It acts as a mediator, providing the greater flexibility and adaptability required by real-world applications. For more details about the existing semantic annotation platforms we refer the reader to the survey in [START_REF] Uren | Semantic annotation for knowledge management: Requirements and a survey of the state of the art[END_REF]. Moreover, as the consolidation phase is a very important part of the CA Manager, we would like to emphasize that tools for ontology population or semantic annotation which describe, or even mention, the consolidation phase in their workflows [START_REF] Alani | Web-based Knowledge Extraction and Consolidation for Automatic Ontology Instantiation[END_REF], are rare. However, this phase is extremely important to maintain the integrity and the quality of the application's referential. In fact, most of them rely on manual validation only to check the generated annotations or instances. In the knowledge acquisition field, the ArtEquAkt project [START_REF] Alani | Web-based Knowledge Extraction and Consolidation for Automatic Ontology Instantiation[END_REF] was concerned with the consolidation phase, where Alani et al. defined four problems related to the integration of new instances in a knowledge base: duplicated information, geographical consolidation, temporal consolidation and inconsistent information. Their approach consists of instantiating the knowledge base based on the information extracted from the documents. They apply a consolidation algorithm driven by a set of heuristics and methods of terminological expansion based on the WordNet [START_REF] Fellbaum | WordNet -An Electronic Lexical Database[END_REF] lexical base.
Contrary to their approach, we are convinced that in order to preserve the integrity of the knowledge base, this consolidation phase must be carried out before the creation of the instances in the repository. Thus, only new and consistent information is created, preserving the integrity of the referential and thus improving the quality of the target application. CONCLUSIONS AND FUTURE WORK We presented the CA Manager framework, which serves as a mediator between semantic annotation and ontology population, and is capable of consolidating and controlling this process while allowing human annotators to be involved, if required. We created various workflows to evaluate the flexibility and scalability of this framework, which offers adapted workflows for ontology population and semantic annotation based on Semantic Web standards and UIMA concepts. The main contribution of the CA Manager in comparison to other similar tools is that it allows easy plug-in of information extraction tools, semantic repositories and ontologies. We have created a gold standard corpus in the domain of software engineering, based on which we could calculate inter-annotator agreement, and also precision and recall values of automatically processed results. First, we used this corpus to calculate the performance of KCIT, a GATE-based information extraction tool which has been exposed as a Web service and used by the CA Manager; then we calculated the performance of the consolidation algorithms based on the same corpus. The automatically produced annotations reach the level of human annotators and are therefore suitable for practical applications.
Figure 1: Specialized UIMA processing pipeline
Table 3: Precision and recall measures for the 20 GATE software artefacts
Table 4: Performance results on the representative corpus of the GATE case study
Element type in the ontology | Number of correct elements (A) | Number of missing elements (B) | Number of spurious elements (C) | Recall (A/(A+B)) | Precision (A/(A+C)) | F1-measure ((R*P)/(0.5*(R+P)))
Kb instances | 208 | 0 | 64 | 1 | 0.765 | 0.867
Annotations | 168 | 0 | 12 | 1 | 0.933 | 0.965
1 http://www3.gartner.com/DisplayDocument?id=379859 2 UIMA website: http://www.alphaworks.ibm.com/tech/uima 3 CA Manager release: http://sourceforge.net/projects/scan-ca-manager 4 TAO website: http://www.tao-project.eu 5 PROTON website: http://proton.semanticweb.org 6 ITM website: http://mondeca.com/index.php/en/intelligent_topic_manager 7 http://gate.ac.uk/ns/gate-kb 8 GATE website: http://gate.ac.uk ACKNOWLEDGMENTS This research was partially supported by the EU Sixth Framework Program project TAO (FP6-026460).
00411249
en
[ "info.info-cl" ]
2024/03/04 16:41:24
2009
https://hal.science/hal-00411249/file/kcap09-tao-poster.pdf
Danica Damljanovic email: [email protected] Kalina Bontcheva CA Manager Framework: Creating Customised Workflows for Ontology Population and Semantic Annotation Keywords: I.2.4 [Knowledge Representation Formalisms and Methods]: representation languages, miscellaneous We present the Content Augmentation Manager Framework for creating various adapted workflows for ontology population and semantic annotation based on Semantic Web recommendations and UIMA precepts. This framework supports ontology population from text (semi)automatically, by allowing easy plug-in of various types of components including information extraction tools, customised domain ontologies, and diverse semantic repositories. Our evaluation reveals that the framework offers flexibility, without compromising on the precision and recall of the constituting components. INTRODUCTION Gartner reported in 2002 1 that for at least the next decade more than 95% of human-to-computer information input will involve textual language. They also report that by 2012 taxonomic and hierarchical knowledge mapping and indexing will be prevalent in almost all information-rich applications. There is a tension here: between the increasingly rich ontology-based semantic models on the one hand, and the continuing prevalence of human language materials on the other. This process may be characterised as the dynamic creation of interrelationships between ontologies and unstructured textual content in a bidirectional manner. However, transforming huge volumes of unstructured text into a semantically interlinked knowledge space is a big challenge. Two key parts of this process are semantic annotation and ontology population. While there are numerous tools to support these processes in isolation, many lack compliance with recent standards, but more importantly, lack the flexibility to customise and link them together. In this paper, we present the Content Augmentation (CA) Manager Framework which is capable of performing and controlling the process of semantic annotation and ontology population by means of consolidation algorithms. CA Manager allows easy plug-in of various types of components including Information Extraction (IE) tools, customised domain ontologies, and diverse semantic repositories. THE CA MANAGER FRAMEWORK The core philosophy of the CA Manager is to bridge the gap between the content augmentation tools and the semantic repository tools. It is conceived as a middleware, capable of controlling the quality and the validity of IE results against an ontology, matching them with existing resources (e.g. the application's knowledge base or repositories from the Linked Open Data initiative, linkeddata.org), and enriching them. To achieve that goal, the CA Manager relies on the recommendations formulated by the Semantic Web community (RDF/OWL languages, Service Oriented Architecture) combined with the UIMA-based infrastructure, which has been enriched and customised to address the specific needs of semantic annotation and ontology population tasks. The CA Manager proposes a list of logical steps arranged in a workflow, see Figure 1: a) extracting the valuable knowledge and annotating the content; b) consolidating knowledge with regard to the ontology model and the semantic repository; c) serialising the typesystem output in various formats and storing it in the semantic repository. EVALUATION CA Manager was developed in the course of the TAO project (www.tao-project.eu), but is now used in several others. We implemented different workflows (a combination of different ontologies, semantic annotation tools, semantic repositories and corpora) for evaluating the flexibility and the scalability of the CA Manager framework, see Table 1.
EVALUATION

CA Manager was developed in the course of the TAO project (www.tao-project.eu), but is now used in several others. We implemented different workflows (combinations of different ontologies, semantic annotation tools, semantic repositories and corpora) for evaluating the flexibility and the scalability of the CA Manager framework, see Table 1.

Table 1: CA Manager workflows
| ontology | corpus | CA tool | Repository |
| Architectural ontology (3D objects) | 3D objects web services | DBPedia and Geonames | ITM |
| Adverse Drug Effect ontology | PubMed abstracts | Luxid (Temis) | ITM |
| Tourism ontology | Touristic web sites | TimeFrame (Modyco, Univ Paris X) | ITM |
| FunGen ontology | PubMed articles | MiRNA Discovery (INSERM) | Sesame |

The IE tool can either call a natural language processing tool or any kind of knowledge provider, such as the semantic databases available within the Linked Open Data community. This particularly shows how flexible the CA Manager is in adapting its generic framework to all kinds of application needs. One of the workflows trialled during the TAO project was used to evaluate the consolidation algorithms [Amardeilh, Semantic annotation and ontology population] of CA Manager. Namely, we used the ontology-based IE tool KCIT [Damljanovic, A text-based query interface to OWL ontologies], with one semantic repository being ITM (see http://mondeca.com/) and the other one being the Sesame RDF repository. Using KCIT, we annotated 20 documents about GATE [Cunningham, GATE: A Framework and Graphical Development Environment for Robust NLP Tools and Applications] with regard to the domain ontology (gate.ac.uk/ns/gate-kb). As consolidation is entirely dependent on the quality of the IE output, we first measured precision and recall values for KCIT in isolation, by comparing the automatically processed corpus with the human-annotated corpus. Then we calculated the performance of the consolidation algorithms based on the same settings. The CA Manager obtains 100% recall; hence there is no loss of information after processing the KCIT annotations to produce the final semantic annotations and knowledge instances. The CA Manager consolidation algorithms that deal with merging need to be improved in order to eliminate duplicated terms with different orthographic labels such as "datastore", "data store" and "DATASTORE". On the other hand, the consolidation algorithms that control the ontology model perform well, which is not very difficult in the GATE case study, as we mostly refer to class instances. There are, for example, no annotations which refer to the relations between knowledge instances. In future work, we plan to improve the linguistic analysis to support that feature.

CONCLUSIONS

We presented the CA Manager framework, which serves as a mediator between semantic annotation and ontology population, and is capable of consolidating and controlling this process while allowing human annotators to be involved, if required. We created various workflows to evaluate the flexibility and scalability of this framework, which is based on Semantic Web standards and UIMA concepts. The main contribution of the CA Manager in comparison to other similar tools is that it allows easy plug-in of any IE tool, semantic repository, ontology or corpus, while also applying its own consolidation algorithm in order to link the IE phase with ontology population.

Figure 1: Specialised UIMA processing pipeline

ACKNOWLEDGMENTS

This research was supported by the EU-funded TAO (FP6-026460), MUSING (FP6-027097), and ServiceFinder (FP7-215876) projects.
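The precision/recall evaluation described above can be reproduced with simple set arithmetic once system and gold annotations are represented comparably. A minimal sketch follows, assuming annotations can be compared as (surface form, ontology class) pairs; the two sets below are invented placeholders, not data from the GATE case study.

```python
# Minimal precision/recall sketch; the annotation sets are placeholders.
def precision_recall(system, gold):
    tp = len(system & gold)                        # true positives
    precision = tp / len(system) if system else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

gold = {("datastore", "gate:DataStore"), ("corpus", "gate:Corpus")}
system = {("datastore", "gate:DataStore"), ("pipeline", "gate:Controller")}
p, r = precision_recall(system, gold)
print(f"precision={p:.2f} recall={r:.2f}")         # precision=0.50 recall=0.50
```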
04112643
en
[ "shs" ]
2024/03/04 16:41:24
2022
https://hal.science/hal-04112643/file/posterIMC2022v2.pdf
Sabine Marie Moulin (email: [email protected])
The Implementation of Climate Change Provisions regarding Land Take in French Mountain Territories
Keywords: land use planning

In 2019, the French government created a "Citizens' Convention for Climate", with the task of making proposals to reduce greenhouse gas emissions by 40% by 2030 from their 1990 level, in accordance with social justice. These proposals were mainly converted into the Climate and Resilience Act of 22 August 2021. For land use planning, the solutions mainly consist in protecting soils as carbon sinks, with the purpose of dividing land take by two by 2031, in a move towards the aim of "zero net land take" by 2050. In this context, I analyse here whether this first objective can be reached in French ski resorts with regard to the measures and timing enacted by Parliament.

As a follow-up to IMC 2019, I chose the same three case studies in the Northern Alps I used two years ago with Anouk Bonnemains, a geographer, to analyse the implementation of climate change provisions in French mountain territories. These are urban plans at different spatial scales (municipalities' plans "PLU", intermunicipalities' plans "PLUi" and territorial coherence programmes "SCOT"), concerning territories with different contexts (high and medium mountain areas) and development models. It is unlikely that the target of dividing land take by two by 2031 will be reached. However, much will depend on how "land take" is defined.

[Chart 1: Land take from 2011 to 2020 in Tarentaise-Vanoise, Coeur de Chartreuse and Chamrousse]
[Charts 2, 3 and 4: Comparison of cumulated land take planned in three urban plans for the period 2021-2030 according to the plans, their possible modification and the Climate and Resilience Act's target (point of reference since 2011)]

Main results

Although land take has globally decreased from 2011 to 2020 (Chart 1), the purpose of halving land take by 2031 will not be achieved unless the reality of land take is far below the figures planned (Charts 2 and 3). There are two reasons for this:
- the Climate and Resilience Act has to cascade down from national level to local level through a 6-year timeline (see the timeline below);
- derogations based on unclear rules can be applied. In particular, urban plans that already aim at reducing land take by 33% do not have to change (art. 194). The methods for counting land take have nevertheless not been determined yet. Therefore, we do not know whether the studied plans meet this condition or not.

Therefore, to simplify the Act's rules, there are two possibilities: either the plan benefits from the derogation, or it will have to implement a different land take ratio (50% or less) no later than 22 August 2026 (Tarentaise-Vanoise and Coeur de Chartreuse) or 22 August 2027 (Chamrousse). The gap between the national and the local target will have grown by then.

Furthermore, "land take" has not been clearly defined yet. Chamrousse, which appears to be the most virtuous territory, plans to build an artificial pond, a waste reception centre and various recreation facilities on top of its planned ratio, considering them neutral for soils within the ski area.

The French government has been sued by some NGOs. It acknowledged that the measures provided for by the Climate and Resilience bill, which weakened the citizens' proposals, were not sufficient to achieve its purpose of reducing greenhouse gas emissions by 40% (Conseil d'État, Commune de Grande-Synthe v. France, 1 July 2021).
This will probably also prove to be true for land take, especially if ski resorts persist in transforming mountains into amusement parks... unless citizens take legal action, once more.

[Timeline: Implementation of provisions regarding land take in the Climate and Resilience Act of 22 August 2021]
[Chart legend: cumulated land take area (ha) as planned; cumulated land take area (ha) modified according to the Climate and Resilience Act; cumulated land take reduced by 50% compared to 2011-2020; axes: year / area (ha)]
00411265
en
[ "info.info-ti" ]
2024/03/04 16:41:24
2009
https://hal.science/hal-00411265/file/CDL_CP_fullversion_GirardBaudrierOgier.pdf
N. Girard, E. Baudrier (email: [email protected]), J.-M. Ogier
A perceptual image quality evaluation based on local spatial information
Keywords: image comparison, Hausdorff distance, distance transform, local dissimilarity measure, quality measure

This paper presents a new comparative objective method for image quality evaluation. This method relies on two key points: a local objective evaluation and a perceptual gathering. The local evaluation concerns the dissimilarities between the degraded image and the reference image; it is based on a gray-level local Hausdorff distance. This new Hausdorff distance uses a generalised distance transform, which is studied here. The evaluation result is a local dissimilarity map (LDMap). In order to include perceptual information, a perceptual map based on the image properties is then proposed. The coefficients of this map are used to weight and gather the LDMap measures into a single quality measure. The perceptual map is tunable, but it gives encouraging quality measures even with naive parameters.

Introduction

Image quality evaluation is a key point in several domains, including image compression algorithm assessment and graphical image quality evaluation. Even if the best method is the subjective method MOS (mean opinion score), which is based on observers' evaluations, it is not always possible to use it: it is subject to variations and it involves many people and a lot of time. An alternative is to use an automatic quality evaluation. In this frame, the measure can be estimated on the transformed image itself or in comparison with a reference image. We focus on the latter kind of methods, so-called comparative objective methods. There exist many well-known comparative objective methods, such as the mean square error (MSE) or the peak signal-to-noise ratio (PSNR). But none of the current methods takes a perceptual evaluation into account, because they are often based on a pixel-to-pixel difference, whereas perceptual information includes both local and global aspects of the image. In order to move closer to the evaluation of the final user, it is important to integrate a perceptual evaluation. We propose a new method based on two key points: 1) a local objective evaluation and 2) a perceptual gathering.

1) Unlike human vision, the pixel-to-pixel difference is very sensitive to small translations. A generalisation of it to a less sensitive local measure has been developed for binary images in [Baudrier, Binary-image comparison with local-dissimilarity quantification]: the so-called local dissimilarity map (LDMap). It is based on the distance transform, and several generalisations of the distance transform to gray-level images are available. A comprehensive study of the generalisation of the LDMap to gray-level images is presented here with quality measurement in mind.

2) Our method then exploits the spatial distribution of the dissimilarity measures gathered in the LDMap generalised to gray-level images, thanks to a perceptual weighting that emphasises image areas that are important for human vision. These weights, gathered in a so-called perceptual map, are based on brightness, shape and texture; they can be tuned as a function of the final application. Our method results in a single measure. The proposed measure has been compared to subjective evaluations on the IVC test image database and gives encouraging results. The two key points are detailed in the following sections.
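For reference, the two baseline comparative objective methods mentioned above can be computed as follows; these are the standard definitions for 8-bit gray-level images, applied here to placeholder random images.

```python
# Standard MSE and PSNR baselines for 8-bit gray-level images.
import numpy as np

def mse(reference, degraded):
    ref, deg = reference.astype(np.float64), degraded.astype(np.float64)
    return np.mean((ref - deg) ** 2)

def psnr(reference, degraded, peak=255.0):
    err = mse(reference, degraded)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # placeholder image
deg = np.clip(ref.astype(int) + rng.integers(-5, 6, (64, 64)), 0, 255).astype(np.uint8)
print(f"MSE = {mse(ref, deg):.2f}, PSNR = {psnr(ref, deg):.2f} dB")
```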
2 The Local Dissimilarity Map

2.1 Definition

The LDMap is based on a local and adaptive evaluation of dissimilarities between images. It was first defined on binary images via a local measure of dissimilarities, and is based on a distance between a pixel and a set of pixels. When the Hausdorff distance is used for the local measure, the formula of the LDMap is simple (see Definition 1) and allows the LDMap to be generalised easily to gray-level images. Fig. 1 illustrates the LDMap between the image of the letters CO and the image of the letters ET.

Definition 1 (LDMap). Let $I_1, I_2$ be two images and $x \in \mathbb{R}^2$ a pixel; the LDMap between $I_1$ and $I_2$ is defined from $\mathbb{R}^2$ to $\mathbb{R}$ by

$$\mathrm{LDMap}(x) = \frac{|I_2(x) - I_1(x)|}{\max_y |I_2(y) - I_1(y)|}\,\max\big(d(x, I_1),\, d(x, I_2)\big) \quad (1)$$

where $d$ is a distance transform (DT), measuring the distance between a point and a set of points.

2.2 The distance transform and its continuous generalisations

The DTs are based on an underlying distance between pixels; the different DTs come from distinct underlying distances. We focus here on the computation of the underlying distance, so as to choose the DT most adapted to image processing. For binary images, pixels can be seen as points of the plane. The distance transforms are then based on 2D mathematical spatial distances (e.g. the Euclidean distance, the Mahalanobis distance and so on). For gray-level images, the gray-level dimension is added to the two plane dimensions. There are different ways of including this dimension in the distance computation, and they influence the produced dissimilarity measure. Considering a gray-level image as a set of points of $\mathbb{R}^3$, the image is a surface in $\mathbb{R}^3$. The distance between two points of this surface is then the shortest path between these two points according to a path-length measure. There are two possibilities in the literature for this measure: the length of the path on the image surface, or the area of the surface under the path on the image surface. A formal definition is given below (Definition 2).

Definition 2. Let $I$ be a continuous function defined on $X$, the image support: $I : X \to \mathbb{R}$, $(x, y) \mapsto I(x, y)$, and let $\pi : t \in [0, 1] \mapsto \pi(t) \in \mathbb{R}^2$ be a continuous path between the pixels $p$ and $q$ of $I$. The length $L_\pi$ of the path $\pi$ is given by

1. the length of the path on the image surface: $L^1_\pi = \int_0^1 |I'(\pi(t))|\,dt \quad (2)$

2. the area of the surface under the path on the image surface: $L^2_\pi = \int_0^1 |I(\pi(t))\,\pi'(t)|\,dt \quad (3)$

Both of these possibilities have been implemented in the discrete space. The discretisation step also implies choices, so the different versions of the DT underlying distances are briefly introduced in the next section.

2.3 Discrete implementations of the distance transforms

The presentation of the different DTs is too long to be detailed here, so we only give a brief insight into them.

1. first-type methods (introduced by P. Toivanen):
- the Distance Transform On Curved Space (DTOCS) [Toivanen, New geodesic distance transforms for gray scale images] and the Weighted DTOCS (WDTOCS) [Toivanen, Sequential local transform algorithms for gray-level distance transforms]
- the method improvements: 3-4-DTOCS and Optimal WDTOCS (Opt-WDTOCS)

2. second-type methods (illustrated by the sketch after this list):
- the Gray-Weighted Medial Axis Transform (GRAYMAT) [Levi, A grey-weighted skeleton] introduced by G. Levi and U. Montanari. Its underlying distance is based on the pixel difference value and, as a consequence, it promotes low gray-level pixel paths. It is aimed at skeletonisation.
- the Gray-Weighted Distance Transform (GWDT) [Rutovitz, Data structures for operations on digital images] introduced by D. Rutovitz.
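To illustrate the second-family underlying distances, the following sketch computes a gray-weighted distance transform as a shortest-path problem on the 4-connected pixel grid, where each step accumulates the gray level of the pixel entered, so that low gray-level paths are favoured, as in GRAYMAT. This is only one possible discretisation, written for clarity rather than speed; the published GRAYMAT/GWDT variants differ in the exact step cost.

```python
# Sketch of a "second family" gray-weighted distance transform:
# Dijkstra shortest paths on the 4-connected grid, where entering a
# pixel costs its gray level (one possible discretisation).
import heapq
import numpy as np

def gray_weighted_dt(image, seeds):
    """image: 2D array of gray levels; seeds: boolean mask of source pixels."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for y, x in zip(*np.nonzero(seeds)):
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, int(y), int(x)))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + float(image[ny, nx])   # cost of entering (ny, nx)
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, ny, nx))
    return dist

img = np.array([[0, 1, 9], [1, 1, 9], [9, 1, 0]])
seeds = np.zeros_like(img, dtype=bool)
seeds[0, 0] = True
print(gray_weighted_dt(img, seeds))   # low-gray paths are favoured
```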
Comments and choice

It is quite difficult to anticipate the qualities of the different DTs from their definitions; nevertheless, we have chosen the second-family methods because their underlying distance is closer to the notion of distance in the gray-level dimension. The second family indeed takes into account both the differences of gray levels along a path and the spatial distance. It seems to us more appropriate than the first family, which overwrites some spatial information for the benefit of gray-level information (see Fig. 3). Within the second family, the differences between the versions are minor. Nevertheless, the 3-4-DTOCS seems the most interesting to us because, unlike the DTOCS, it does not underestimate the diagonal distance. Our study on this point should be furthered to detail the choice.

We want to take advantage of the spatial information contained in the LDMap to include perceptual information. The dissimilarities between two images are already quantified in the LDMap. Given an original image (reference image) and a transformed image (compressed image, degraded image, ...), the sum of the coefficients in the LDMap (computed between these two images) can represent the evaluation of the transformation between the compared images. Nevertheless, it is known that visual perception does not take the entire image into account [Lorenzetto, Image comparison metrics: A review]. So we construct a perceptual map (PMap) in order to combine the local information of the LDMap and an observer's attention model, and we associate the two maps to obtain a single measure. The PMap combines the brightness, shape and texture attributes, which are factors the eye is sensitive to. Each attribute is calculated on the reference image, and for each one we define a map: B for the brightness, S for the shape and T for the texture. The PMap is defined as a weighted sum (Definition 3) of B, S and T. The PMap is then normalised in order to be a weighting map. Figure 4 presents an example of a PMap.

Definition 3 (PMap). Let $A$ be an image; the PMap of $A$ is defined by

$$\mathrm{PMap}_A = p_1 \cdot B + p_2 \cdot S + p_3 \cdot T \quad (4)$$

where $p_1$, $p_2$ and $p_3$ are the weights of the brightness, the shape and the texture respectively, and where $B$, $S$ and $T$ are normalised.

By weighting the LDMap with the PMap, as illustrated in Fig. 5, we obtain the perceptual local dissimilarity map (PLDMap), containing local information on dissimilarities and visual perception; we can then measure the quality of the transformed image by extracting the maximum value or the mean value of the PLDMap. We have studied the two values, and the maximum appears to be the better quality measure. The proposed measure has been compared to subjective evaluations on a test image base and gives encouraging results. Fig. 6 shows our first results for one image compressed with different rates of compression. The smaller our measure, the better the quality; conversely, for the mean opinion score (MOS), the higher the score, the better the quality. Test results on a bigger database and comparisons with other quality measures are forthcoming. In the future, the LDMap could be extended to colour images and the PMap could take other attributes into account. We can also refine the weights of the maps constituting the PMap.
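The end-to-end pipeline can be sketched as follows under simplifying assumptions: the binary Euclidean distance transform (after an arbitrary thresholding) stands in for the gray-level DTs discussed above, and the brightness, shape and texture maps are naive proxies of our own choosing, since their exact definitions are not spelled out here.

```python
# End-to-end sketch of the quality measure; the threshold, the Euclidean DT
# and the three attribute proxies are simplifying assumptions of this sketch.
import numpy as np
from scipy.ndimage import distance_transform_edt, sobel, uniform_filter

def ldmap(i1, i2, threshold=128):
    diff = np.abs(i1.astype(float) - i2.astype(float))
    if diff.max() > 0:
        diff /= diff.max()                        # normalised difference, Eq. (1)
    # Distance from each pixel to the nearest bright ("foreground") pixel.
    d1 = distance_transform_edt(i1 < threshold)
    d2 = distance_transform_edt(i2 < threshold)
    return diff * np.maximum(d1, d2)

def pmap(ref, p1=2.0, p2=1.0, p3=-2.0):
    ref = ref.astype(float)
    norm = lambda m: m / m.max() if m.max() > 0 else m
    b = norm(ref)                                              # brightness proxy
    s = norm(np.hypot(sobel(ref, 0), sobel(ref, 1)))           # shape proxy (edges)
    t = norm(uniform_filter(ref ** 2, 5) - uniform_filter(ref, 5) ** 2)  # texture proxy
    m = p1 * b + p2 * s + p3 * t                               # Eq. (4)
    m -= m.min()
    return m / (m.sum() + 1e-12)                               # weighting map

def quality(reference, degraded):
    pldmap = pmap(reference) * ldmap(reference, degraded)      # PLDMap
    return pldmap.max()     # the maximum was found to be the better measure

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (32, 32)).astype(np.uint8)          # placeholder images
deg = np.clip(ref.astype(int) + rng.integers(-20, 21, (32, 32)), 0, 255).astype(np.uint8)
print(quality(ref, deg))    # smaller means better quality
```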
Figure 1: Letters CO and ET and their LDMap containing their local dissimilarities.
Figure 2: Distance transform of a binary image composed of a white-pixel form and a black-pixel background.
Figure 3: An example of some DTs.
Figure 4: PMap with p1 = 2, p2 = 1 and p3 = -2.
Figure 5: Weighting of the LDMap by the PMap.
Figure 6: First results on two images compressed with different rates (r1 < r2 < r3 < r4 < r5) and with p1 = 4, p2 = 1, p3 = -2.
04112673
en
[ "shs.droit" ]
2024/03/04 16:41:24
2014
https://hal.science/hal-04112673/file/islandora_163614.pdf
Terms of Reference

1. The Study MARKT/2012/013/Dl/ST/SC on the territoriality of the making available right assessed how to localise the act of making available to the public and its consequences on selected issues such as the rules of enforcement, the provisions on authorship, ownership and transfer of rights, and the impact of that relation with the reproduction right. The Study identified the existence of different acts of reproduction, upstream and downstream in the process of the act of making available, that may complicate the licensing of works for online use and have an impact in terms of conflicts of laws in case of infringement. However, the Study observed that the InfoSoc Directive does not regulate the relation between the making available right and the reproduction right. Both rights are autonomous and can apply cumulatively, so that a single technical act may simultaneously fall under the making available right and the right of reproduction, even if those rights are managed separately. In a second part, the Study analysed two criteria that could help localise the making available right (country of origin principle and place of the exploitation of the work) and assessed which consequences they would have on the selected topics. It showed that these localisation criteria would not solve the bottlenecks found under the current state of the law with regard to the reproduction right, as both the uploader and the end user having access to a work made available online would still face issues regarding the copies made in the course of the technological process used.

The present Study deepens the analysis of the relation between the making available right and the reproduction right. Considering acts of streaming and downloading, it examines whether the copies of works made through the technical process of making them available on the internet fall under the scope of art. 2 of the InfoSoc Directive, and whether these may be exempted under the exception for temporary copies or the exception for private copying provided by art. 5.

The Terms of Reference of the Study are the following:

Output 1

1.1 Complement study MARKT/2012/013/Dl/ST/SC by providing an assessment of, and a set of recommendations for, possible accompanying legislative measures (in particular with regard to further harmonisation at EU level with regard to authorship and ownership, transfer of rights, enforcement and transitional measures to allow for the adaptation of contracts) that are needed if one of the policy options in study MARKT/2012/013/Dl/ST/SC were to be followed. With regard to further harmonisation of the rules on authorship and ownership, the primary purpose is to examine which specific legislative measures - if any - would be required to mitigate negative outcomes for rightholders if a country of origin or targeting approach were to be followed. With regard to enforcement, the primary purpose is to examine which specific legislative measures - if any - would be required to mitigate negative outcomes for rightholders with regard to injunctions based on Article 8(3) of Directive 2001/29/EC if a country of origin or targeting approach were to be followed.
On transitional measures for the adaptation of contracts, the purpose is to examine whether a solution like the one which has been adopted for satellite transmissions in the SatCab Directive would be desirable, or whether - and if so which - different solutions would be required.

1.2 Complement study MARKT/2012/013/Dl/ST/SC by an analysis of the legal situation prevailing in the US and in Canada with regard to the right of making available and the right of reproduction, in particular with regard to the territorial scope of these rights (the localisation of the copyright-relevant act in international cross-border situations) and their relationship to each other.

1.3 Examine the implementation and application of the exclusive right of reproduction in a selection of Member States (Germany, France, UK, Italy, Spain, Poland, Denmark, Hungary and the Benelux) to complement study MARKT/2012/013/Dl/ST/SC. The study should focus in particular on the implementation and application of Articles 2, 5(1) and 5(2)(b) of Directive 2001/29/EC (the "InfoSoc Directive"). With regard to Article 5(2)(b) of the InfoSoc Directive, it is not within the Contractor's mission to give a description of its implementation with regard to levy systems and other systems of fair compensation. This analysis should complement the analysis of the reproduction right in study MARKT/2012/013/Dl/ST/SC (identification of reproductions in relation to an act of making available, localised according to the "country of origin" or "targeting" approach) and assess in particular the legal status of acts of reproduction located outside the Member State(s) where the act of making available takes place. This analysis involves an assessment of where the act of reproduction takes place (territoriality), whether it is exempted under the national provisions implementing Articles 5(1) or 5(2)(b) of the InfoSoc Directive, or whether the right holder's consent is required.

1.4 Complement study MARKT/2012/013/Dl/ST/SC by an analysis of the interplay between the right of reproduction and the right of making available to the public, in the Member States mentioned under point 1.3, in the US and in Canada. The specificities of different types of services should be duly taken into account, and the following two scenarios should be considered: (i) services that are based on streaming and do not involve other end-user reproductions than those falling under Article 5(1) of Directive 2001/29/EC; and (ii) services where the end user makes a reproduction that is not covered by Article 5(1), for example downloads (stored permanently or for a limited period of time, e.g. 24 hours) or print-outs (it being however understood that the reprography exception does not fall within the scope of this study).

Output 2

The study should assess whether and which legislative changes are required or desirable with regard to the right of reproduction if one of the policy options in Study MARKT/2012/013/Dl/ST/SC (in particular the "country of origin" option and the "targeting" option) were to be followed. The assessment of possible legislative changes should not be limited to Directive 2001/29/EC (the "InfoSoc Directive") but also extend to the right of reproduction as provided for in Directive 96/9/EC (the "Database Directive") and Directive 2009/24/EC (the "Software Directive"), insofar as justified by differences in the legal framework (e.g.
the exception provided for in Article 5(1)). The study should also assess whether and to which extent legislative changes are required or desirable with regard to the exceptions and limitations provided for in Articles 5(1) and 5(2)(b) of Directive 2001/29/EC if one of the policy options in Study MARKT/2012/013/Dl/ST/SC were to be followed. Such possible legislative changes will be examined only with regard to the issues related to territoriality and the possible solutions thereto. Other possibly desirable changes to these exceptions (but for other reasons) do not fall within the scope of this study. The assessment should focus on reproductions made by the end user (the downloading of digital files in the country of destination) and should clearly distinguish these from reproductions made by the service provider (e.g. the uploading of digital files to a central server). The two different scenarios described at the end of Output 1 should be duly taken into account.

The study should provide a range of possible policy options. The examined policy options should include but, if the Contractor considers that other options are also conceivable, not be limited to: (i) the possible "bundling" of the right of making available and the right of reproduction, in the sense that one is incidental to the other and does not require separate authorisation by the rightholder; (ii) the possible extension of a "country of origin" or "targeting" approach to certain reproductions made in the context of licensed digital services (in particular where rightholders licensed the right of making available); and (iii) the possible modification of existing limitations and exceptions, the possible introduction of new limitations and exceptions and/or the application of compulsory licensing to reproductions made in the context of licensed digital services (without however dealing with issues related to levies or equitable remuneration). In addition to the relevant legislation, relevant administrative and judicial decisions and scientific literature on these topics in the EU Member States, the US and Canada should be analysed.

With regard to policy options (i) to (iii), the Contractor should propose and assess possible options to mitigate negative outcomes for rightholders related to (a) the possibility that, for a given work, the holder of the exclusive right of communication to the public (including the right of making available) and the holder of the right of reproduction may not be identical; and (b) injunctions based on Article 8(3) of Directive 2001/29/EC (e.g. options to mitigate negative consequences in terms of enforcement if the act of reproduction would no longer be considered to occur in the country where the download takes place).

2. This report contains the outputs required in the terms of reference, adapted to the Commission's comments. The newly adopted directive of the European Parliament and of the Council on collective management of copyright and related rights and multi-territorial licensing of rights in musical works for online uses in the internal market is beyond the scope of this study, and consequently this report does not examine its relation to the making available right and the reproduction right. This report represents the state of the law as of April 2014.

Part 1 - Complement to the analysis of the making available right
3. The subject-matter of the present Study is to complement the Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society by providing an assessment of, and a set of recommendations for, possible accompanying legislative measures (in particular with regard to further harmonisation at EU level with regard to authorship and ownership, transfer of rights, enforcement and transitional measures to allow for the adaptation of contracts) that are needed if one of the policy options in the Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society were to be followed. With regard to further harmonisation of the rules on authorship and ownership (sub A), the primary purpose is to examine which specific legislative measures - if any - would be required to mitigate negative outcomes for rightholders if a country of origin or targeting approach were to be followed. With regard to enforcement (sub B), the primary purpose is to examine which specific legislative measures - if any - would be required to mitigate negative outcomes for rightholders with regard to injunctions based on Article 8(3) of Directive 2001/29/EC if a country of origin or targeting approach were to be followed. On transitional measures for the adaptation of contracts (sub C), the purpose is to examine whether a solution like the one which has been adopted for satellite transmissions in the SatCab Directive would be desirable, or whether - and if so which - different solutions would be required.

I. Authorship & ownership

4. In this part of the report, we will describe (sub A) the current situation (divergent national rules on authorship and initial ownership) and develop the example of authorship of audiovisual and cinematographic works. Then we will describe the situation when the act of making available is localised in a country of origin (sub B) and in the countries of exploitation (sub C). Finally, we will examine (sub D) which legislative measures could be considered to remedy the negative consequences of the divergence of authorship and ownership rules.

A. Current situation: impact of the existing disparities

5. Introduction. In order to assess the impact of the existing disparities regarding authorship, ownership and transfer of rights, we will first summarise our findings described in the previous Study (MARKT/2012/013/Dl/ST/SC). It has been found that, absent a complete harmonisation of authorship and initial ownership, there are some disparities among the Member States on who is considered the "author" or the "initial copyright owner" of a work. While all Member States apply the creator doctrine to some extent, some vest the copyright in another person straightaway, e.g. the employer for works created by an employee in the course of her employment, or the person taking the initiative for a collective work. Moreover, different mechanisms exist to operate a transfer of rights from the author to the derived right holder. A more detailed description of the current situation can be found in the previous Study. It has been concluded that, in theory, the exercise of the making available right in cross-border situations is complicated by the disparity of rules on authorship and initial ownership.
Where a work can be accessed on demand by a member of the public in several Member States and where protected acts of making available take place in several Member States, the consent of the owner of the making available right for each of those Member States should be acquired. Because the national rules on authorship and initial ownership vary, each Member State could consider a different person as the author of the work whose consent would be required for that Member State (assuming for now that the rights have not been concentrated in the hands of one person by virtue of contracts). This situation has been illustrated by an example of a journalist working as an employee for a newspaper publisher and a news aggregator active on the Web. If protected acts are found in several Member States and if the lex loci protectionis (transmission and/or reception) is applied to determine initial ownership, then the candidate licensee may have to acquire a licence from different persons (journalist or publisher) for the same exploitation of the same work. This problem would not arise if the lex loci originis of the work alone designated the initial right holder. The locus originis of the work is the country where the work has its origin and is invariable: if authorship is determined according to this law, then the authorship is invariable as well, regardless of whom the lex loci protectionis considers as an author of the work. The divergence of the national rules on authorship and initial ownership does however not seem to pose insurmountable problems in practice, since very few conflicts have been reported on this point. This conclusion could however be contradicted in an economic or other empirical study.

6. Cinematographic or audiovisual works. The Commission has requested to illustrate the current situation by the example of authorship, ownership and transfer of rights with regard to cinematographic or audiovisual works. No empirical data have been collected regarding the various professional practices in the audiovisual sectors (film, television and others using various forms of exploitation) of the Member States. As described in the Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society, the notion of authorship has been partially harmonised for cinematographic or audiovisual works. The principal director is thus considered an author or co-author of the audiovisual or cinematographic work. Many other people contribute however to the creation of an audiovisual work, and the Member States have different rules to indicate who is or may be a co-author. Some countries have a precise and exhaustive list of whom they treat as authors, some consider the producer of the film an author, and others have an illustrative list and a general rule that considers anyone who has made a creative contribution to the work as a co-author (e.g. France). This means that in some Member States the cameraman, the editor, the sound designer and others may be considered authors, but not in other Member States. Generally, the exploitation rights to the cinematographic or audiovisual work are concentrated in the hands of the film producer (see e.g. Spain, Supreme Court). A rebuttable presumption of a transfer of rental rights to the film producer is provided for performers and (optionally) for authors in the Rental and Lending Rights Directive. The Member States also have different mechanisms to achieve this result.
Some Member States operated, by law, the transfer of rights in favour of the producer, with no possibility for the authors to oppose such a transfer. The CJEU has decided that such a practice is contrary to several EU directives, which consider at least the principal director as the initial owner of the exploitation rights (Netherlands, IER). Many Member States provide a rebuttable presumption of transfer of exploitation rights in favour of the producer; others rely more on complete agreements between the interested parties (for other rights than rental rights). In the Member States where such regulation exists, the exploitation rights of the creator/employee may remain with the creator/employee or may be transferred to the producer/employer for audiovisual works created in the course of the employment. The scope of the rights transferred to the producer may vary as well. Despite these divergences regarding the initial ownership and the transfer of rights, it seems that these differences do not cause major difficulties in practice, because they are overcome by contractual arrangements (Strowel, "Peut-on tenir compte des copies faites à partir de sources illicites pour déterminer le montant des redevances?"). Audiovisual exploitation contracts are not harmonised at the European level. So far, it has been concluded that there is no immediate need to pursue such harmonisation.

7. Example. National copyright laws recognise different people as authors of the audiovisual work; hence there may be consequences for the cross-border on-demand exploitation of the audiovisual work, e.g. on the Internet. By way of example, we will consider the case of a film recorded in the UK and made available on a platform for European films, serving members of the public residing in any Member State of the European Union. Under British law, the principal director and the producer are considered authors of a film. Other persons who have made other creative contributions are not considered co-authors of the film. The British film is made available to the European public and is accessible in all Member States; hence it is assumed that acts of making available can be found in all Member States and that the authors' authorisation should be acquired before the online exploitation can be engaged in.

Authors. The issue is then who should be considered the authors of the audiovisual work: the authors recognised as such under UK law or, for each Member State where the work is made available to the public, the persons considered co-authors of the audiovisual work under that law (regardless of a transfer of rights to the producer)? Applying the principle of lex loci protectionis (art. 14bis(2)(a) BC), the authors should be determined in accordance with the law of the country for which protection is claimed, i.e. the countries where an act of making available to the public can be established. This could be interpreted as the country where the transmission starts and/or where the reception of the work takes place (this will ultimately be decided by the court in function of the elements of any particular case). This means that, in theory, the operator of the film platform should verify who is considered a co-author of the British film under the national law of each Member State where a relevant act, protected under the making available right, is performed (i.e. transmission and/or reception).
Those persons' prior consent should be acquired for the exploitation in that Member State. If the lex loci protectionis is understood as a reference to the countries where the work is received, then the platform operator would thus be required to acquire a licence from e.g. the cameramen if it makes the film available in Germany, France, Belgium, Denmark (or any other country where the copyright law does not include an exhaustive list of co-authors and may consider a cameraman an author), but not in the UK.

Transfer of rights. In practice such a cumbersome exercise is avoided. Film producers mostly make sure that they acquire the economic rights of all authors and performers, including the making available rights, for all countries where they intend to pursue the exploitation of the work. This can be accomplished by various mechanisms, depending on the national law. Several Member States provide a rebuttable presumption of transfer in favour of the film producers; others rely on contract law (e.g. the UK). Realistically, the film producer concludes contracts with all persons participating in the film production, and often such contracts contain a specific copyright clause. Absent such a specific clause, the existence of such a contract may suffice to rely on the presumed transfer of rights. In addition, the film producer may, under some national laws, rely on a presumed transfer of the economic rights to the audiovisual work (or protected performance) of the authors or performers who have contributed in the course of an employment contract. It can be supposed that the film producer acquires the economic rights of all people involved and that it is thus free to grant licences for making the film available in several Member States. As explained in the Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society, the transfer of rights in accordance with the law applicable to the contract cannot be questioned under the law of the country where a protected act of making available is performed: the operator has no option but to respect the transfer of rights to the film producer (even if the transfer has not been operated in accordance with the lex loci protectionis of the act of making available).

Bottleneck. The identity of the author or initial right owner is of lesser importance when the relevant economic rights have been transferred (to the producer) before the content provider seeks a licence for the cross-border on-demand exploitation. Suppose however that the film producer has not acquired the rights of the cameraman (who is not considered a co-author under UK copyright law) and that for some reason the exploitation rights are not presumed to be transferred to the producer. It is not excluded then that the cameraman claims infringement of her rights where the audiovisual work (including her protected contribution) is made available in Germany, France, Denmark and all other Member States where she would be a co-author of the work. A similar case was recently brought before the French Supreme Court: a French cameraman who had been working for an American broadcaster claimed, when his contract was ended, that he had never transferred his copyright to his employer. One issue was whether this cameraman should be considered an "author".
The Court of Appeal of Paris ruled that this issue was to be solved by reference to the French rules of private international law (which would have led to the applicability of the US Copyright Code, as lex loci originis, under which all rights would have been transferred to the producer). The French Supreme Court quashed this decision and decided that the quality of "author" should be determined according to the lex loci protectionis, since this rule in art. 5(2) BC was applicable to all issues of protection, including the question of authorship and initial ownership. Accordingly, the question whether the cameraman must be considered an author was, insofar as protection is sought in France, to be answered under French substantive law. While it can be assumed that, in most cases, the film producer acquires the economic rights of all people involved for all Member States of the EU, the above example shows that complications due to differences in the national rules on authorship and ownership cannot be ruled out completely.

B. Country of origin: Authorship, ownership and transfer of rights

8. One scenario considered was the localisation of the act of making available in one country, the country of "origin" of the restricted act (not to be confused with the country of origin of the work). It has been examined in the Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society how the disparity of national rules on authorship and initial ownership has an impact on the position of the right holders when applied to a making available right that is localised in one single Member State (Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society, p. 149-156).

Author or initial owner. Where the act of making available is placed in a unique Member State, the latter is the locus protectionis, even if this act has effects in other Member States (where the work will be accessible). In consequence, per act of making available, the authors or initial right owners should be determined in accordance with the law of the place where the making available is localised, i.e. its Member State of origin. Since the rules on authorship and initial ownership are not completely harmonised, the outcome may vary as a function of where the work is made available to the public (creator, employer, initiator of a collective work, others). It matters therefore where the uploader/service provider chooses to make the work available to the public (country of origin), both for the purpose of licensing and for the enforcement of the making available right in case of infringement. The content provider (or any other person who makes works available online) thus has an interest in making the work available from a Member State where fewer persons are considered authors.

Transfer of rights. Where the authors have transferred their rights to a third party (for the exploitation of the work), the licence should be obtained from this derived right holder. It matters not that the transfer of rights between the author or initial right holder and the derived right holder does not comply with the rules of the country of origin where the work is made available to the public.
By contrast, if the country of origin designates as a co-author a person who was not considered an author when the derived right holder acquired the making available right (and whose rights it has consequently not acquired), this person may invoke her right of making available in the country of origin against the derived right holder and/or the person who makes the work available in that Member State of origin (content provider). The content provider who makes the work available in one Member State de facto determines the law applicable to the contract operating the transfer of rights for this act of exploitation. The national rules on licence contracts should then be respected. If it appears under the law of the country of origin that a person is considered an author, while the derived right holder has not acquired her rights (because she is not considered an author where the work was produced or first used), then this author can invoke infringement of her making available right in the country of origin. It is in the interest of the service provider (or any other person who makes works available online) to make the work available from a Member State where fewer persons are considered authors.

9. Cinematographic or audiovisual works. The application of the country of origin principle to the making available right is not neutral, given the lack of complete harmonisation of the (co-)authorship of cinematographic or audiovisual works. It has been described that some Member States have an exhaustive list of co-authors (e.g. the UK), while others have a general rule that, besides the principal director, everyone who has made a creative contribution may be a co-author of the audiovisual work. Applying the country of origin principle, the online exploitation in all Member States will be analysed in copyright terms as implying an act of making available to the public that is localised in one Member State of origin. According to the lex loci protectionis, the law of this Member State of origin determines who is considered an author of the film. When a film is made available in the UK, the service provider will obtain a licence for this exploitation from the authors of the film under British law, i.e. the principal director and the producer. Others will not be considered co-authors of the film. However, when the film is made available in France, the service provider must obtain the consent of many more co-authors of the film in accordance with French law, i.e. the principal director, the authors of the script, the adaptation, the dialogue, the music specially composed for the film, and others. The service provider has a (theoretical) interest in arranging for the film to be made available to the public in the UK. In practice, the service provider does not need to contact all co-authors individually: they will mostly have transferred their rights to the film producer. The film producer can then grant a licence for the act of making available, localised in the Member State of origin and under the law of that Member State. To the extent that the film producer has acquired the making available rights for all Member States, it matters little where the film is made available. However, if several film producers have divided the making available rights on a territorial basis, they may undermine each other's exploitation and territorial exclusivity.
The making available is localised in the Member State of origin, but the film may be accessible in all Member States of the EU and may thus undermine the position of the holder of the making available rights for those other Member States. The owners of making available rights in a certain territory will only be able to control the territorial scope of the exploitation if this is arranged on a contractual basis. In theory, an infringement could be established when a film is made available in a Member State with higher protection and without the consent of persons who are co-authors in that Member State of origin but who are not recognised as such where the film was made. For example, the principal director and the producer are the authors of a British film. When this film is made available in France, it should be verified under French law as the lex loci protectionis (i.e. the law of the country where the work is made available according to the country of origin principle, here France) who is considered an author. The writer of the screenplay, among others, is an author of the audiovisual work whose consent is required for making the film available in France. If the film producer has not acquired this person's consent (at the production stage in the UK) and the presumption of a transfer of rights has not been triggered, the screenplay writer can exercise her right of making available against the service provider and negotiate a licence fee or claim infringement (in France).

10. Conclusion. The divergent rules on authorship still affect the exercise of the making available right. The application of the country of origin principle does not solve the issues resulting from the disparity of authorship rules. The work is made available in one Member State (with effects in other Member States) and it is verified, in accordance with the lex loci protectionis (i.e. the law of the country where the act of making available takes place), who is considered an author of the work. Authorship is thus determined per act of making available to the public, and the consent of all authors (co-authors) of the audiovisual work should be obtained. Generally, the film producer holds the exploitation rights but, as under the status quo, a tension may arise (in theory) between the authorship rules of the Member State where the work was produced (law of the film production contract) and the law applicable to the making available.

C. Country of exploitation

11. When the work is made accessible to the public across national borders, relevant acts of making available may occur in several Member States. This situation is essentially similar to the current situation, except that the restricted acts may in some cases be localised more precisely and in fewer countries, i.e. where the exploitation takes place or where the public is targeted. One or several acts of making available can thus be identified and localised in one or several Member States. Given the disparities in the rules on authorship or initial ownership, different persons may be considered authors in different Member States, depending on where a relevant act is found and which national law applies. When the lex loci originis is applied to determine authorship and initial ownership, the person thus designated is invariable and independent of where the exploitation takes place. If the lex loci protectionis is applied, the author and initial right owner are determined according to the law of the country of transmission or reception, which could lead to different natural or legal persons.
Where the author has transferred her rights to a derived right holder, the service provider who intends to make the work available must acquire the making available rights for all Member States where the exploitation of the work will take place from the person that has contractually acquired the rights. The transfer of rights cannot be questioned on the basis of the laws of the Member States where the work will be made available. In order to operate the transfer from the derived right holder to the service provider (candidate licensee), the laws applicable to the contract must be respected.

12. Cinematographic or audiovisual works. The localisation of the act of making available in the Member States of exploitation leads to a situation similar to the current situation (supra sub A). It has been described supra that the Member States have divergent rules on authorship and initial ownership and on the transfer of rights to the film producer (contract law, presumption, transfer by law). The online exploitation of a film may entail an act of making available in one Member State (if the exploitation is restricted to one Member State), in several Member States (e.g. exploitation per language group sharing subtitles) or in all Member States of the EU. In order to determine who the authors of the work are, it is verified in which Member States a protected act can be found and, applying the lex loci protectionis, it is determined who is considered an author of the audiovisual work (this could be the law of the country of transmission and/or reception, depending on the court). The service provider that pursues the online exploitation of the film should thus, per country of exploitation, acquire the authorisation of the persons whom the applicable laws consider authors of the audiovisual work. By way of example, a film made available on a platform targeting a German-speaking public is made available to the public in the Member States where the public is targeted, e.g. in Germany, Austria, Luxembourg and Belgium. The service provider should acquire the consent of the persons who are considered authors in those Member States, according to the lex loci protectionis, which may be the unknown Member State of transmission or the Member States of reception (Germany, Austria, Luxembourg and Belgium). Since Belgian law has an open authorship rule (recognising the author of any creative contribution to the film as such), it is possible that more people are considered authors of the audiovisual work than under, say, Luxembourgish law. This could complicate the service provider's licensing process. The film producer may have acquired the rights of all co-authors (and performers) when the film was made, by contract or by virtue of a legal presumption of transfer. To the extent that the film producer has acquired the making available rights for all the Member States where the exploitation will take place, the content provider only has to deal with the film producer. Where the exploitation rights are split among film producers, the content provider must be careful to conclude agreements with the person holding the rights for the territory where the exploitation is localised. Returning to our example, the (theoretical) difficulty remains present where the film producer has not acquired the rights of the cameraman (who is not considered a co-author under British copyright law) and where for some reason the exploitation rights are not presumed to be transferred to the producer.
It is not excluded then that the cameraman claims infringement of her rights where the audiovisual work (including her protected contribution) is made available in Germany, Belgium, Austria or Luxembourg, where she would be considered an author.

13. Conclusion. The divergent rules on authorship may (in theory) complicate the clearing of licences when a work is made available in several Member States (several national publics are targeted). The authors' consent should be acquired for every national territory where a protected act takes place. Authorship is determined according to the lex loci protectionis (transmission or reception) and may point to different persons, since the national laws recognise different persons as authors of an audiovisual work, which makes the licensing process more difficult. In practice, however, the exploitation rights are transferred to the producer, which can generally grant licences, including multi-territorial licences. The disparity of authorship rules may create a problem when not all authors (under the law of a Member State where the work is made available) have transferred their rights to the producer, which consequently cannot grant a licence covering those rights.

D. Possible legislative measures

14. A priori there are few indications that the divergence of national authorship and initial ownership rules (including for audiovisual or cinematographic works) requires a legislative intervention. It does not seem to be a heavily debated issue in scholarly literature, and only one court decision has been found on the disparity of authorship and initial ownership rules (in this French case the issue was not the divergent rules on authorship among EU Member States but the difference between the American and the French regulations: Cass. fr., 10 April 2013, retrieved via http://www.courdecassation.fr/; see also the Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society). It seems that the making available rights are concentrated in the hands of the producers, who ensure the (possibly pan-European) exploitation of the film. Leaving aside the questions of the remuneration rights (the level of the remuneration is not harmonised at the European level and differs from one country to the other) and of authorship, it seems that, as far as the exercise of the exclusive right of making available to the public is concerned, there is no pressing need to harmonise the rules on authorship and initial ownership for audiovisual and cinematographic works. There is one scenario where the lack of harmonisation could lead to legal uncertainty, i.e. when the Member State where the work is made available recognises as an author a person who is not considered a co-author in the Member State whose law governs the contract between the film producer and the contributors to the film, and who has not transferred her economic rights to the film producer (either by contract, by legal presumption or by operation of law). It seems that this rarely occurs, although empirical data could contradict this conclusion. We have however no empirical data on the contractual practices and cannot assess to what extent producers are exposed to the risk that persons whom they do not consider authors claim authorship under the law of more protective Member States. Should this be considered an important risk, legislative measures could be considered.
Several options could be considered to solve this lack of coherence: (1) apply the lex loci originis of the work to establish an invariable authorship; (2) harmonise the notion of "authorship"; (3) harmonise the notion of "ownership"; and (4) extend the rebuttable presumption of transfer of rights. These options will be discussed hereafter. Some of them might be applied cumulatively.

Lex loci originis of the work

15. If authorship and initial ownership are determined in accordance with the lex loci originis of the work, the authorship or initial ownership of the work is invariable. This would remove the legal uncertainty that results from different national rules20. The Berne Convention provides which country is the country of origin of the work (art. 5(4) BC)21. This option is however controversial and seems excluded, at least for audiovisual or cinematographic works. The Berne Convention provides for the application of the lex loci protectionis in art. 5(2) BC, even if the scope of this rule is disputed (its application to questions of authorship). Anyhow, the French Supreme Court has decided in a decision of 10 April 2013 that it applies art. 5(2) BC also to the initial ownership of the work (rather than national rules of private international law). Moreover, the Berne Convention explicitly provides that "ownership of copyright in a cinematographic work shall be a matter for legislation in the country where protection is claimed" (art. 14bis(2)(a) BC), i.e. in accordance with the lex loci protectionis (art. 14bis BC). Even if the criterion of the lex loci originis would designate an invariable applicable law and thus determine the authorship of the work regardless of where the exploitation takes place, it presents other problems. The locus originis refers to the country of first publication of the work. GINSBURG has pointed out that this notion of "publication" may be problematic for works that are only commercialised online, without the distribution of a tangible counterpart22. The expression "published works" refers to the availability of (tangible) "copies" of the work and excludes all types of performances and communications to the public (art. 3(3) BC). Furthermore, even if an offer for download can be interpreted as the availability of "copies", the work would be published simultaneously in several Berne Member States, which does not solve the issue here. An alternative criterion should then be sought to make this country of origin of the work a meaningful criterion23. The Berne Convention defines another criterion for cinematographic works that are unpublished or that are first published outside the Berne Union without simultaneous publication in a Berne country. In that case the country of origin is the country where the maker has its headquarters or its habitual residence in a country of the Berne Union (art. 5(4)(c) BC). This criterion provides more legal certainty. However, as has been pointed out, the Berne Convention requires that authorship of cinematographic works be determined in accordance with the lex loci protectionis principle (art. 14bis(2) BC).

20 Several authors are in favour of this option: see Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society, p. 82-83.
21 Art. 5(4) BC: (a) in the case of works first published in a country of the Union, that country; in the case of works published simultaneously in several countries of the Union which grant different terms of protection, the country whose legislation grants the shortest term of protection; (b) in the case of works published simultaneously in a country outside the Union and in a country of the Union, the latter country; (c) in the case of unpublished works or of works first published in a country outside the Union, without simultaneous publication in a country of the Union, the country of the Union of which the author is a national, provided that: (i) when these are cinematographic works the maker of which has his headquarters or his habitual residence in a country of the Union, the country of origin shall be that country, and (ii) when these are works of architecture erected in a country of the Union or other artistic works incorporated in a building or other structure located in a country of the Union, the country of origin shall be that country.
22 J.C. GINSBURG, "Berne without borders: geographic indiscretion and digital communications" (Stephen Stewart Memorial Lecture, Intellectual Property Institute London U.K., Oct. 29, 2001) (November 2001), Columbia Law School, Pub Law Research Paper No. 01-30, available at http://papers.ssrn.com/paper.taf?abstract_id=292010, 16p.
23 GINSBURG proposes some solutions: GINSBURG, "Berne without borders", 11.

Harmonisation of "authorship"

16. Another option is to remove the divergence between national conceptions and to harmonise the notion of "authorship". However, the stakeholders who responded to the consultation on the online distribution of audiovisual works in the European Union seemed divided on the question whether a harmonisation of the notion of authorship in the field of audiovisual works is needed24. This could be done by providing a clear criterion that allows identifying the authors of a work, for example by reference to the "originality" of a work (cf. Infopaq I, Painer, Premier League). A work is protected if it is original in the sense that it is the author's own intellectual creation, reflecting the author's personality (Painer, par. 87-88) and expressing his free and creative choices (Painer, par. 94). The relation between the harmonised notion of "originality" and "authorship" of the work has not been clarified so far. It could however be argued that anyone who has thus stamped the work with her personal touch (Painer, par. 92) and has thus expressed her free and creative choices in it should be considered an author who has the right to authorise or prohibit the making available of the work to the public. Such an abstract criterion suggests that the number of co-authors of a work (such as an audiovisual work) is unlimited25. Alternatively, legal certainty could be achieved by drafting a closed list of people contributing to the creation of an audiovisual work or cinematographic work who are considered co-authors. It is a matter of policy who should be considered an "author".
It is however unrealistic (and undesirable) to draft a closed list of authors for all types of works imaginable (and unimaginable!). Considering the open question regarding the relation between the two notions, one could wonder whether such a closed list is in conformity with the notion of originality and the status of authorship that may be derived from it: unless a complete list of authors can be drawn up, an exhaustive list risks denying the status of "author" to persons who have made an original contribution to the expression of the works (in the sense of the CJEU decisions on originality).

Harmonisation of "initial ownership"

17. Member States designate in some cases another person than the creator/physical person as the initial holder of the copyright. Specific rules may exist for works created in the course of employment and a few Member States vest the ownership of the copyright initially in the employer26. Also, a legal arrangement may exist to identify the initial owner of the copyright in a collective work. It may be attempted to harmonise these issues at the European level. It seems however difficult to identify the cases for which such an initial owner should be designated (given the different national cultures and traditions) and to decide in which natural or legal person (or persons) the initial ownership should be vested27. This is a policy consideration. It could be imagined that the producer of an audiovisual or cinematographic work would be considered the initial owner of the copyright in the audiovisual work. Following the decision in Luksan, it is unlikely that the designation of any person other than the author as the initial owner of the exclusive rights is compatible with the existing European copyright framework28. Such a rule would therefore require a modification of the directives in which the principal director is identified as an author of the audiovisual or cinematographic work (Rental and Lending Rights Directive, Satellite and Cable Directive, Term Directive). It seems that there is little empirical evidence that such a measure is justified. Also, the idea of immediately vesting a copyright in a legal entity (which may also hold neighbouring rights of the film producer) is contrary to several national copyright traditions and is likely to be met with opposition.

Rebuttable presumption of transfer of exploitation rights

18. Several Member States provide in their national copyright law for a rebuttable presumption of transfer of exploitation rights with regard to the audiovisual or cinematographic work in favour of the producer. Currently, the Rental and Lending Rights Directive provides for a limited presumption of transfer of rights in the context of film productions. Performers are presumed to have transferred the rental right (unless agreed otherwise) and a similar rebuttable presumption may be provided for authors. In any case they keep a right to equitable remuneration for the rental right (art. 5 Rental and Lending Directive). This approach could be extended to the making available right.

24 Green Paper on the online distribution of audiovisual works in the European Union: opportunities and challenges towards a digital single market.
28 See already in this sense: 2002 Commission report on authorship of cinematographic or audiovisual works, p. 10.
It should then be defined which event triggers the application of the presumption (an existing agreement regarding the film production, collaboration on or implication in the film production), to which works the presumption applies and what the scope of the presumption is (all economic rights, the audiovisual exploitation rights or only the making available right). It should also be decided whether the authors receive remuneration in return and whether that remuneration is waivable (cf. the unwaivable remuneration right in the Rental and Lending Rights Directive). Consequently the question arises again who is considered an author and thus entitled to remuneration resulting from the presumed transfer of rights. It should be empirically verified whether such a legislative intervention is required and which impact the extension of the rebuttable presumption would have.

II. Enforcement

19. The present chapter focuses on the application of art. 8.3 of the InfoSoc Directive, which allows right holders to apply for an injunction against intermediaries whose services are used to carry a third party's infringement of a copyright or related right. In many cases such intermediaries are best placed to bring such infringing activities to an end (rec. 59). The purpose of this complement to the Study is not to examine in detail how effective this provision is for the enforcement of copyright, hence we will not verify whether art. 8(3) InfoSoc Dir has been implemented in the legislation of the Member States to a satisfactory degree or whether harmonisation has been achieved29. The objective of this complement is to conduct a brief reflection on the effects of the proposed localisation of the making available right (according to a country of origin criterion or following the exploitation of the work) on the possibility for the right holder to rely on the national provisions on injunctions against intermediaries. In other words, it will be examined how the localisation of the infringement and the territorial effect of the court measures affect the possibility to take legal action in case of infringement of the making available right (in those two scenarios). We will first briefly describe how art. 8.3 is interpreted by the European Court of Justice, how right holders have applied that provision so far and which rules of private international law govern the delivery of injunctions related to online infringements of copyright (sub 1). We will then evaluate whether the criteria proposed in the Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society to localise the making available right would have an effect on the application of art. 8.3 (sub 2). We will conclude the present chapter with a summary of our findings and with an assessment of the need to adopt legislative measures to mitigate the outcomes of the country of origin principle or the exploitation scenario (sub 3).

A. Current situation: injunctions against intermediaries

1. Presentation of art. 8.3 InfoSoc Directive

20. The Information Society Directive requires the Member States to "ensure that rightholders are in a position to apply for an injunction against intermediaries whose services are used by a third party to infringe a copyright or related right" (art. 8(3) InfoSoc Dir30).
This measure is part of the Member States' obligation to provide "for effective sanctions and remedies for infringements of rights and obligations", which are at the same time "effective, proportionate and dissuasive" (rec. 58). It is explained more explicitly that infringers in the digital environment may use the services of intermediaries and that those intermediaries are in many cases best positioned to bring the infringements to an end. It is concluded that "without prejudice to any other sanctions and remedies available, rightholders should have the possibility of applying for an injunction against an intermediary who carries a third party's infringement of a protected work or other subject-matter in a network. This possibility should be available even where the acts carried out by the intermediary are exempted under Article 5". The jurisdiction conferred on national courts must allow them to order intermediaries to take measures aimed not only at bringing to an end infringements already committed against intellectual property rights using their information-society services, but also at preventing further infringements31.

21. Modalities. Art. 8.3 of the InfoSoc Directive allows imposing both interim and permanent injunctions, as also provided in art. 9 and 11 of the Enforcement Directive32. These measures may consist, for instance, in filtering systems, blocking mechanisms or removal initiatives. The Member States have the obligation to provide such measures under art. 8.3 but they are free to determine the conditions and modalities for the operation of the injunctions and the procedure to be followed33. T. SHAPIRO notes that the implementation of art. 8.3 is a mixed bag34: some Member States, such as Germany35 or Denmark36, have not specifically implemented it as they considered that their legal system already met the requirements of that provision. Nevertheless, in accordance with art. 3 of the Enforcement Directive, any injunction has to be fair and equitable, not unnecessarily complicated or costly, and it may not entail unreasonable time-limits or unwarranted delays. Furthermore, the Court of Justice ensures that the measures based on art. 8.3 strike a fair balance between the protection of the intellectual property right enjoyed and the respect of the provisions of the Charter of Fundamental Rights of the European Union37.

22. Third party infringement. Injunctions based on art. 8.3 of the InfoSoc Directive may be granted against intermediaries whose services are used by third parties to infringe a copyright or related right. The text of the provision indicates that it should be established that (1) a copyright or related right is infringed and (2) the intermediary's services are used for this infringement. The Directive does not specify the "territorial" relation between the infringement and the use of the intermediary's services. We will discuss each of these elements in the next paragraphs. In order to obtain an injunction against an intermediary, it should be demonstrated that there is an infringement of a copyright or related right38.
This requirement highlights the importance of the harmonisation of the national copyright laws. Any disparity in the implementation of the exclusive rights or the exceptions in national law may cause uncertainty as to whether certain behaviour constitutes an "infringement". This is not a merely theoretical possibility: given the disparities in the copyright legislation of the Member States regarding the application of the reproduction right or the implementation of the exceptions on private copy (see infra sub Part 2.I.B), similar acts of exploitation of a protected work could be found illegal in one Member State and not in others. An illustration can be found in online personal video recorders (PVR), which are assessed differently in various Member States, in particular on account of the copy of television programmes that the PVR user makes using the services of an online PVR provider. In Germany it has been held in the cases Save.tv and Shift.tv that it was the user of the online PVRs who made the recording of the television broadcast, rather than the service provider (which has a purely technical role). Such a copy, made at the instruction of the end-user, should be imputed to the end-user and therefore it is possible that it is exempted under the exception for private use in the German Copyright Act39. Moreover, it has been held that the protected television programme is not made available to the "public", where the copy is only available to the end-user who has made the recording. Where there is no infringement of the reproduction right (due to the exception for private copy) nor of the right of making available (absent a "public"), the right holders would not be able to obtain an injunction against intermediaries, such as hosting providers, access providers or search engine operators. Not all Member States' copyright acts lead to the conclusion that online personal video recorders infringe no exclusive rights: in some countries there is no exception to the reproduction right for private purposes, in other countries a copy made by a service provider on behalf of an end-user is imputed to that service provider, which cannot invoke such an exception for private use40. In such cases, the court may find an infringement of the reproduction right and/or the making available right. In this case, the right holders could request an injunction against intermediaries, such as providers of hosting, access or search engine services. This different appreciation of similar services (infringement/no infringement) may lead to confusion and legal uncertainty. The right holders would have to demonstrate an infringement before obtaining an injunction against intermediaries, such as an access provider or a search engine operator. Where the online PVR service is not considered to infringe any copyright, the right holders are not likely to obtain an injunction against intermediaries providing their services in that Member State.

37 CJEU 24 November 2011, C-70/10, Scarlet, par. 46.
38 An example can be found in the decision of the UK High Court of Justice [2013] EWHC 2058 (Ch), FAPL v British Sky Broadcasting and others. In this case, football matches (including elements protected under copyright) were streamed online without the FAPL's consent. FAPL brought action against the Internet access providers on account of infringements via the website FirstRow (an aggregator of such streaming sites) and via publicans who displayed the football games without a licence.
Arguably, the right holder could try to demonstrate that the same service would be infringing copyright under the law of another Member State, but then she would have to establish that the services of the intermediary are indeed "used" for the infringement taking place in this other Member State.

23. Use of the intermediary's service. The next question is whether the services of the intermediary are "used" by that third party "to" infringe a copyright or a related right. The Directive does not detail how this "use" should be understood, but only in that case can the right holder obtain an injunction against the intermediary on the basis of art. 8(3) InfoSoc Dir. The intermediary's services are clearly being used for the infringement of copyright where a website offering content for download or streaming without consent is hosted on the servers of that intermediary. The infringer can only offer her website to the public because it is hosted on the service provider's servers: the infringer has a contract with the service provider and may pay a fee for the hosting services. The hosting provider is then the intermediary whose services are used by the infringer to infringe the rights of communication to the public and/or reproduction. A more complicated question is whether the infringer "uses" the services of the Internet service provider of the visitors of the infringing website. A question on this point was submitted to the CJEU in Kino.to41. In this case, a website dubbed Kino.to offered a large number of films, protected under copyright, to the public in the form of streaming or downloads without the right holders' consent. The right holders seized the courts in Austria (the provider of Kino.to was established in Germany) and requested an injunction against UPC Telekabel Wien (UPC), an access provider in Austria, based on the reasoning that its services were used to access the illegally provided content. UPC refused to comply because it had no direct business relation with the providers of Kino.to and because its customers, as end-users of the service, did not commit a copyright infringement. UPC was ordered to block access to the website at first instance and on appeal42. The case was then referred to the Austrian Supreme Court, which submitted preliminary questions to the Court of Justice. In relation to the orders against intermediaries, it was asked whether the person who makes the protected subject matter available to the public via the Internet (i.e. the uploader) is "using" the services of the access provider of the person seeking access to the content (i.e. the end-user) in the sense of art. 8(3) InfoSoc Dir. If it should be considered that this is the case, then the right holder can obtain a cessation order against this access provider, even though it has no direct relationship with the direct infringer and its customers do not infringe any copyright or related rights. The Advocate General expressed the opinion that UPC should indeed be considered an intermediary whose services are used by a third party to infringe copyright43. He came to this conclusion based on the letter, the context, the meaning and the objective of art. 8(3) InfoSoc Dir. Firstly, it was considered that the infringer uses the services of the Internet service provider that allows her to access the internet (i.e. her own internet access provider). In addition, she also uses the services of the access provider of the visitors of the site.
Moreover, the Advocate General found that the works are made available to the public "mainly" by means of the service providers of the public (considered collectively). From a technical point of view, the work may be made available by uploading it to a hosting server somewhere, but it is only accessible to a "public" if that public has access to the Internet via its Internet access provider. The Advocate General concluded then that, according to the letter of the Directive, even the services of the Internet access provider of the end-user are used by the infringer for realising the infringement44 . Secondly, this conclusion was endorsed by an interpretation in the context of the provision45 . The preamble of the Directive (cons. 59 InfoSoc Dir) clarifies that the intermediaries are most suitably placed to put an end to infringements due to their roles in the transmission of contentnot only the first transmission but later transmissions (to the public) as well. Thirdly, this interpretation is in conformity with the sense and the purpose of the provision46 . The Directive pursues a high level of protection in the Information Society. This emphasises the important role of the service providers (including access providers) in putting an end to online infringements, especially where the owners of the website are commonly out of reach. These arguments led the Advocate General to conclude that the Internet access provider of the member of the public was also an intermediary, whose services are used by the infringer of the copyright and related rights for her infringement. Consequently an injunction could be obtained against such access provider of the end-user. The CJEU followed the Advocate General and came to the conclusion that an access provider, such as UPC, should be considered as an intermediary whose services are used for the copyright infringement47 . The Court started by stating that there was an infringement of copyright, since protected subject matter was made available via internet services to users without the right holders' consent. It then went on to discuss the role of the intermediaries under art. 8(3) InfoSoc Dir. The notion "intermediary" covers "any person who carries a third party's infringement of a protected work or other subject-matter in a network"48 . The internet service provider was considered as an "inevitable actor in any transmission of an infringement over the internet between one of its customers and a third party, since, in granting access to the network, it makes that transmission possible" and, consequently, an internet service provider that "allows its customers to access protected subject-matter made available to the public on the internet by a third party is an intermediary whose services are used to infringe a copyright or related right within the meaning of Article 8(3) of Directive 2001/29"49 . The Court discussed two other factors that could affect the granting of a blocking order against an intermediary. It mattered not whether there was a contractual relation between the service provider and the person who infringed copyright or a related right: the wording of the directive did not require such relation, nor could it be inferred from its objectives. Furthermore, the Court decided that an injunction against an intermediary does not depend on the proof that end-users have actually accessed the works made available without the right holder's consent. 
The reasoning was that, firstly, the Directive requires not only measures to put an end to an infringement, but also to prevent infringements of copyright or related rights and, secondly, the making available right protects the availability of the work; the actual access is not decisive. The CJEU thus concluded that the person who "makes protected subject-matter available to the public on a website without the agreement of the rightholder (...) is using the services of the internet service provider of the persons accessing that subject-matter, which must be regarded as an intermediary within the meaning of Article 8(3) of Directive 2001/29".

24. From a technical point of view, the access provider's role is indeed indispensable to transmit the file (containing protected subject matter) to the end-user and therefore to reach a "public". The owner of the infringing website makes the content available for transmission but the services of the access providers allow the end-users to request access to the content and to transmit the content to the end-users as a response to this request. Furthermore, the possibility for a right holder to obtain an injunction against an intermediary addresses a concern to act efficiently upon infringements and force the actor de facto best placed to put an end to the infringements (or their effect) to intervene. This element was put forward in a decision of a first instance court in Paris. This court ordered a number of intermediaries (including access providers and search engine operators) to block access to a list of domain names where films and television series were streamed without the right holders' consent50. The Court first decided that the streaming websites infringed the right of communication to the public. It then held that the defendants/internet access providers were intermediaries in the sense of art. 8(3) InfoSoc Dir and that they could "contribute to remedying the infringements" (as specifically provided in the French provision) in the sense that they could prevent their subscribers from accessing the available content. The Court stated that the requested measures, in that they were intended to block access by the intermediaries' customers to the domain names under consideration, could indeed contribute to impeding or reducing the infringement of the exclusive rights. The Court also held that the operators of search engines were intermediaries that could contribute to the cessation or at least the reduction of the infringement. Interestingly, the "use" by the infringing party here consisted in accepting that its website was indexed and making sure it ranked high in lists of search results. The Court found that the infringers used the services of the search engine operators and that they would not be able to attract as large a public without these search services that referred internet users to the streaming websites. The search engines contribute to giving access to infringing content offered as a download or as a stream. This first instance court did not explicitly examine whether the infringement took place in France before it granted these injunctions against these intermediaries (including the access providers whose subscribers were not held liable for infringements). Given the specific implementation of art. 8.3 InfoSoc Dir in French law, the ramifications for other Member States are uncertain (French law explicitly mentioned the possibility to initiate proceedings against any person capable of contributing to preventing or stopping the infringement).
The inverse interpretation would seriously hamper the effectiveness of the provision. The purpose of the provision is to rely on the intermediaries, which are "best placed" to bring infringing activities to an end, independently of the question of their liability (cons. 59 InfoSoc Dir). Right holders should be able to obtain an injunction against any intermediary that "carries a third party's infringement (...) in a network" (cons. 59 InfoSoc Dir). Arguably, this is the case for the internet access provider of the end-user, even when she does not make a distinct infringement of the exclusive rights: the infringement is carried to the end-user by means of her access provider's services.

25. Location of infringement and intermediary service. If the services of the hosting provider (upstream) and the access provider of both the infringing third party and the visitor of the website/member of the public (who herself does not necessarily infringe any copyrights) are "used" by the third party for the infringement, the next question is whether the localisation of the infringement has an impact on the granting of the cessation order. The question is whether an injunction against an intermediary can only be granted if an exclusive right is infringed in the Member State where the intermediary offers its services. In a cross-border situation, the infringement may take place in several countries, the end-users of the infringing service may reside in several countries and the services of intermediaries may be used in several countries. It could be argued that a judge can only impose an injunction upon an intermediary if she finds an infringement within her jurisdiction (e.g. in the country where the intermediary is established or offers commercial services). Right holders sometimes seem to assume that an infringement needs to be demonstrated in the country where they bring legal action against an intermediary. In FAPL v Sky, a UK court verified whether FirstRow, an aggregator of unauthorised streams of football games, made an infringement of the right of communication to the public in the UK51. The Court found that the public in the UK had indeed been "targeted"52 and therefore it was established that FirstRow (the third party infringer) had infringed copyright in the UK. Yet the provision in the Information Society Directive does not impose this requirement: it is merely stated that the services should be used by a third party to infringe a copyright. It seems that the location of the infringement did not play a role in the opinion of the Advocate General in Kino.to, nor in the decision of the Court of Justice. It was not established that the member of the public who used her internet access to consult the infringing website herself infringed any exclusive right in the Member State where the intermediary offered its service. Arguably, it could be considered that the operators of Kino.to infringed copyright in Austria by the mere accessibility of the infringing website in Austria. The Advocate General did however not develop such an argument, nor did the Court. It was not verified whether the owner of the website performed an infringement in the country of access that was different from the infringement in the country of transmission (i.e. two distinct infringements in two Member States). Instead, it seemed sufficient to establish that the access provider was an inevitable actor in the transmission of the infringement and that its services allow customers to access infringing content (i.e.
transmission to the end-user at her request). Arguably, along the lines of the opinion of the Advocate General and the CJEU decision in Kino.to and the decision of the first instance court of Paris, it matters not whether the infringement occurs in the Member State of the recipient: the services of her access provider are in any case technically necessary to complete the transmission and therefore for the communication to the public (making available to the public). Even if the end-user does not make an additional and distinct infringement by accessing the content (streamed or offered for download without consent), the transmission to her through the intermediary's services could be seen as a necessary step to complete the infringing communication to the public originating in another Member State (i.e. the transmission to the end-user after her individual request). It seems however that in practice some national courts do require an infringement within their jurisdiction (and the territory where the intermediary offers its services) before granting an injunction. A clarification on this point may consequently be welcomed.

26. Intermediary. A number of decisions in several European jurisdictions have been issued on peer-to-peer file sharing services. In the most recent cases, the right holders focus on the intermediaries (the platform providers or the internet access providers) rather than on the users who actually share the protected materials. The intermediaries are indeed best placed in many cases to bring infringing activities to an end53. When it is difficult to identify or localise the platform providers, the right holders generally act against the internet access providers. The Court of Justice has already declared that art. 8.3 applies to operators of online social networking platforms, as their services may be exploited by users of those platforms to infringe copyright54. The Court of Justice also confirmed that access providers that merely provide users with Internet access must be regarded as "intermediaries" within the meaning of art. 8.3: "Access providers who merely enable clients to access the Internet, even without offering other services or exercising any control, whether de iure or de facto, over the services which users make use of, provide a service capable of being used by a third party to infringe a copyright or related right, inasmuch as those access providers supply the user with the connection enabling him to infringe such rights. Moreover, according to Recital 59 in the preamble to Directive 2001/29, rightholders should have the possibility of applying for an injunction against an intermediary who 'carries a third party's infringement of a protected work or other subject-matter in a network'. It is common ground that access providers, in granting access to the Internet, make it possible for such unauthorised material to be transmitted between a subscriber to that service and a third party."55 In Kino.to the CJEU was asked whether the "intermediary" is only the service provider of the operator of the website infringing copyright or also the access provider of the end-users, in the context of the use of an infringing streaming website. The Court decided that an "intermediary" covers "any person who carries a third party's infringement of a protected work or other subject matter in a network"56; the access provider whose customers use its services to access infringing content is thus an intermediary in this sense.

53 M. WALTER and S. VON LEWINSKI, European Copyright Law, Oxford, 2010, 1086.
54 CJEU 16 February 2012, Case C-360/10, SABAM v. Netlog, par. 28.
55 CJEU Order of 19 February 2009, Case C-557/07, LSG v. TELE2, 43-44.
56 CJEU, Kino.to, par. 30.
Other types of intermediaries (payment processors, …) might face an injunction based on art. 8.3 of the InfoSoc Directive, which applies to all intermediaries in the chain between the primary infringer and the end-user. A French court has issued an order against several operators of Web search engines to the effect that they prevent any link referring to certain streaming websites from appearing as a result of a search query57.

27. Cross-border copyright injunctions. Cross-border injunctions could be defined as measures imposed by a national court with effect in other Member States. A cross-border injunction should allow right holders to sue several defendants located in different Member States before the courts of one single Member State for the delivery of an injunction that would have to be enforced in the other Member States. The question here is whether a national court can impose an injunction based on copyright infringement against an intermediary, with effect in other Member States than its own. Absent empirical data, it can be assumed that the national nature of the copyright title prevents right holders from obtaining cross-border or pan-European injunctions against intermediaries. It is described in a 2013 report of the CEPS Digital Forum that "given the territorial character of copyright and of the related enforcement measures, website blockages can actually be obtained on a country-by-country basis under conditions and criteria that vary from one jurisdiction to another"58. The Commission found, albeit in the context of the Enforcement Directive59, that there is no established jurisprudence with regard to cross-border injunctions in cases of copyright infringements and that it seems useful to examine under which conditions such injunctions can be granted60. It is suggested there that a distinction between the rules of attributing jurisdiction could be made: "while the jurisdiction of a court, when based on the place where the infringement causes harm, is limited to measures concerning its own territory, no such limitation exists when jurisdiction is based on the domicile of the defendant. In this latter case, cross-border injunctions are not excluded"61. Moreover, some rightholders seem to call for a European initiative to facilitate cross-border measures against intermediaries and/or automatic enforcement of specific injunctions throughout the European Union62.

Conflicts of law

28. We will briefly recall the rules applicable at the European level to treat the issues of jurisdiction and applicable law63.

a) Jurisdiction

29. The rules determining jurisdiction will differ depending on the sort of injunction sought. An injunction against an intermediary, whether interim or permanent, can always be sought in the country where that intermediary is established. Nothing precludes the measure from having a cross-border impact: an action might thus be brought against the operator of a peer-to-peer platform in the Member State where it is established to obtain an injunction prohibiting that operator from offering its infringing service in other Member States.
Similarly, the hosting provider of a streaming website infringing copyright could be sued for an order prohibiting it from allowing its services to be used to make the website accessible all over the European Union. Nevertheless, generally the court's jurisdiction will not be an issue in such proceedings, as the service provider will have to apply the measures in the country where it is established (measures that will then have effect beyond the borders of this country). In practice, claims against intermediaries based on art. 8.3 are indeed introduced in the country where the intermediary is established.

Provisional measures. Advocate General CRUZ VILLALON defines a provisional measure as an injunction adopted for a limited period65. Art. 31 Brussels I Regulation was designed to apply independently of the jurisdiction as to the substance of a case. Consequently, the courts in one Member State may have competence to rule on a claim for a provisional measure even if the courts of another Member State have jurisdiction as to the substance of the matter. Nevertheless, the Court of Justice holds that the granting of provisional or protective measures is conditional upon, inter alia, the existence of a real connecting link between the subject-matter of the measures sought and the territorial jurisdiction of the court before which those measures are sought66. That condition means that the court of a Member State that is hypothetically not competent to deal with the substance of the case can declare itself competent to authorise a provisional measure only in so far as that measure has an effect in the territory of the Member State concerned and can be enforced there. Conversely, a national court should decline competence for provisional measures having no effect on its territory, a matter which it is incumbent on that court to decide67. On that condition, provisional measures may be sought in a Member State to be executed in another Member State. One could therefore imagine bringing a legal action in one Member State seeking interim injunctions based on art. 8.3 against an intermediary located in another Member State. However, in practice, all the proceedings based on art. 8.3 seem to be introduced in the country where the intermediary is situated.

Permanent injunctions. Regarding proceedings as to the substance, art. 5.3 Brussels I Regulation contains an additional rule to art. 2, granting jurisdiction to the courts of the Member State "where the harmful event occurred or may occur". This is a factual localisation criterion that applies independently from the debate on the substance, even if both issues are related: the localisation of the material act on the substance may influence the application of the rules of private international law, which nevertheless apply autonomously. In case of complex infringements, i.e. situations where the infringing act is located in different countries, it provides jurisdiction to the Member State of the event giving rise to the infringement and to the Member States where the damage occurs68. However, art. 5.3 Brussels I Regulation is not a strong legal ground for establishing territorial jurisdiction when seeking a measure against intermediaries. Art. 5.3 indeed applies in matters related to "tort, delict and quasi-delict". As a rule of special jurisdiction, it derogates from the principle that jurisdiction is vested in the courts of the State where the defendant is domiciled and must consequently be interpreted restrictively. Therefore, the scope of art.
5.3 is limited to actions which seek to establish the liability of a defendant and which are not related to a contract69. As legal actions based on art. 8.3 of the InfoSoc Directive do not seek to establish the liability of the intermediary, the jurisdiction of the courts of the Member States where the harmful event occurs may be contested.

Prorogation of jurisdiction. According to art. 6.1 of the Brussels I Regulation, "a person domiciled in a Member State may also be sued, where he is one of a number of defendants, in the courts for the place where any one of them is domiciled, provided the claims are so closely connected that it is expedient to hear and determine them together to avoid the risk of irreconcilable judgments resulting from separate proceedings". That provision intends to minimise the possibility of concurrent proceedings and to ensure that irreconcilable judgments will not be given if cases were decided separately (Rec. 15). For Article 6.1 to apply, it must be ascertained whether, between various claims brought by the same plaintiff against different defendants, there is a connection of such a kind that it is expedient to decide the cases together in order to avoid the risk of contradictory judgments70. It is for the referring court to assess, in the light of all the elements of the case, whether there is a connection between the different claims brought before it. According to the Court of Justice, decisions might be regarded as irreconcilable if there is a divergence in the outcome of the dispute arising in the context of the same situation of law and fact71. In that context, the fact that defendants against whom a copyright holder alleges substantially identical infringements of his copyright did or did not act independently may be relevant72. Furthermore, the Court held in Solvay v. Honeywell that the fact that an intellectual property right is governed by the national law of different Member States may generate divergences in the outcome of the proceedings in the same situation of fact and law, so that it is possible that they will culminate in irreconcilable judgments resulting from separate proceedings. Indeed, two courts could have to examine an alleged infringement in the light of the same national legislations. In order to assess, in such a situation, whether there is a connection between the different claims brought before it and thus whether there is a risk of irreconcilable judgments if those claims were decided separately, it is for the national court to take into account, inter alia, the dual fact that, first, the defendants in the main proceeding are each separately accused of committing the same infringements and, secondly, such infringements were committed in the same Member States, so that the same national laws apply to those infringements73. It could be imagined that intermediaries established in different Member States are brought before one court, whose jurisdiction may be established by reference to the domicile of one of the defendants (art. 2 Brussels I). It could be argued e.g. that one infringement for which the services of all intermediaries involved are used constitutes a close connection between the cases the claimant intends to bring against these intermediaries (art. 6.1 Brussels I)74.

74 TORREMANS, "Art. 6, 1. Brussel I: onveranderde tekst maar geen duidelijke weg voorwaarts".
By contrast, such a judge could be faced with different national laws (that are not entirely harmonised) as to the substantive law she has to apply to decide the cases against each intermediary.

b) Applicable law

30. Lex loci protectionis. The principle lex loci protectionis determines the law applicable to copyright infringements (see Study on the application of directive 2001/29/EC on copyright and related rights in the information society, 81). According to that principle, the judge has to apply the law of the country for which protection is claimed. In case of infringements of the making available right, it may be difficult to find the location of the infringing act (initiating in one country and culminating in another) and, consequently, to determine for which country the protection is sought. The lex loci protectionis may indeed refer to the application of several laws when the act is interpreted as taking place in different countries, i.e. the country of the event giving rise to the infringement (the Member State from which the work is made available, the country of transmission) or the countries where the damage occurs (the Member States to which the work is made available, the countries of reception). The law of the country of the event giving rise to the harm (country of transmission) should regulate the full territorial impact of the act of making available, while the law of one of the countries where the damage occurs (country of reception) has effect in that country exclusively and not in the other countries also addressed by the infringer. Since plaintiffs often seek protection in the country where the harm is occurring, localising the place of the harm generally identifies both a forum and the applicable law in that forum: "The court of the country whose market the person making a work available has allegedly targeted will determine whether the defendant has in fact sought customers from that jurisdiction. If so, that country will be a place in which the making available occurs, and the forum will be competent not only to hear the case, but to apply its own law to the making available of the work to users within the jurisdiction."75 Since no specific rule is provided, this principle also determines which law to apply to decide whether an injunction can be granted against an intermediary. The law applicable in legal proceedings against intermediaries will generally be the law of the country of the intermediary. This means that both the infringing nature of the third party use and the measure against the intermediary will be assessed by that law. Other outcomes are however possible. The locus protectionis has been interpreted in some cases as the country of transmission and in others as the country of reception. Applying the lex loci protectionis as the law of the country of reception, it could (theoretically) result in a court deciding an action brought against the hosting provider whose service is used for a website where works are available without the right holders' consent by applying cumulatively several national laws of the countries where the website can be accessed.

Application to peer-to-peer networks and streaming platforms

a) Peer-to-peer networks

31. Functioning. A peer-to-peer network is a system in which individual users act simultaneously as suppliers and consumers of resources. Each user can download works made available on the network. These downloads are instantly used as a new source to make the downloaded work available to the other users of the network.
Provided that they have not been authorised by the rightholder, these acts infringe the making available right. Furthermore, the download of the work in a peer-to-peer network constitutes an act of reproduction and in some Member States such a copy was exempted under the exception for private copy. However, in several Member States it is required that the source from which the copy is made be accessed in a "legitimate" way. In these legal systems, the exception for private copy cannot apply if the copy originates from an illegal use of the work. This was also the position of the Advocate General in the case ACI Adam76: the exception for private copy in art. 5 InfoSoc Directive only applies to reproductions made from legitimate sources. If the CJEU follows the Advocate General's opinion, it seems that the copies generated by the peer-to-peer networks should be found unlawful if the right holder has not authorised the distribution via such a network, and this irrespective of the applicable law. The Advocate General indeed held the opinion that Member States can only collect a levy for the private copying exception based on copies of protected subject matter made on the basis of legitimate sources: the Member States have no margin to widen the scope of the levy to copies from illegitimate sources77. As each user of the peer-to-peer network will potentially be liable for infringements of the making available right and, possibly, of the reproduction right, measures based on art. 8.3 of the InfoSoc Directive could be taken against the intermediaries whose services are used for the functioning of the peer-to-peer network (the peer-to-peer operator78, the access providers of the users).

32. Conflicts of law. Injunctions against the peer-to-peer operator could be sought in the country where it is established (Brussels I, art. 2). The injunction might then have effects in all the Member States where the network is accessible. The judge should (in theory) cumulatively apply the law of each country where the peer-to-peer network is used and for which protection is claimed (these countries are simultaneously used for the transmission and the reception of the protected works). An action brought against an internet access provider allowing users to have access to the peer-to-peer network should be introduced in the Member State where that access provider is established (Brussels I, art. 2). The injunction granted should have no cross-border effect, as the activities of the access provider are in principle limited to one single country (e.g. website blocking). The judge should apply his national law to determine the infringement by the third party and the measure against the intermediary, as the protection claimed in the context of the action against the access provider will be limited to his Member State. The possibility to sue several access providers originating from different countries in one single Member State to avoid the delivery of irreconcilable judgments (Brussels I, art. 6.1) is contested (see supra) but should not be excluded in the absence of a decision by the Court of Justice on that issue.

b) Streaming and downloading platforms

33. Functioning. In streaming technologies, a work stored on a hosting server is streamed to end users for immediate consultation (see Study to review options for developing the relationship between the reproduction right and the making available right in the context of the cross-border transmission of digital content, 31).
One single act of making available is performed by the operator of the streaming platform. If it has not been authorised, that act infringes copyright. Moreover, the storage of the work on the hosting server constitutes an act of reproduction which should also be authorised, except if it can be exempted under a national exception. The reception of the work at each final user's end also generates an act of reproduction. These reproductions may be exempted under the exception for temporary copy, if the right holder has authorised the making available of the work streamed and the subsequent reception, if the intended use benefits from another exception (such as the private copy) or if it is "not restricted" (see Study to review options for developing the relationship between the reproduction right and the making available right in the context of the cross-border transmission of digital content, 34-35). If these conditions do not apply, the act of reproduction at the final user's end will constitute an infringement of copyright. Similarly, operators of websites using a download technology infringe the making available right if they have not acquired a licence for the making available of the works. Furthermore, the download of the works by the end users generates an act of reproduction which should also be licensed or should benefit from an exception, such as the exception for private copy (see Study to review options for developing the relationship between the reproduction right and the making available right in the context of the cross-border transmission of digital content, 35-36). Injunctions based on art. 8.3 of the InfoSoc Directive may be directed against the intermediaries whose services are used for the functioning of the streaming or downloading platform (e.g. the hosting provider, the internet access provider of the final user, but also search engine operators or other imaginable intermediaries79).

34. Conflicts of law. Injunctions against the hosting provider of the website could be sought in the country where it is established (Brussels I, art. 2). The injunction could then have a cross-border effect. The judge should apply either his national law (country of transmission) or the law of each country where the website is accessible (countries of reception). It is unlikely that courts of the countries where the website is merely accessible have jurisdiction to decide on an injunction against an intermediary established on another territory. Art. 5.3 of the Brussels I Regulation refers to the territory where the harmful event occurred and is not applied directly to intermediaries. An action brought against an internet access provider allowing users to have access to the website should be introduced in the country where that access provider is established (Brussels I, art. 2). The injunction should have no cross-border effect, as the activities of the access provider are in principle limited to one single country. The law applicable might be the law applicable in the country of reception, which may coincide with the judge's national law, or the law of the country from which the website originates (country of transmission). The possibility to sue several access providers originating from different countries in one single Member State to avoid the delivery of irreconcilable judgments (Brussels I, art. 6.1) is contested (see supra) but should not be excluded in the absence of a decision by the Court of Justice on that issue.

B. Effect of the localisation of the making available right
35. The present section will exclusively focus on the rules applicable to the enforcement of injunctions against intermediaries whose services are used to infringe the making available right, as we were asked to complement the Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society, which was focused on the making available right. Nevertheless, the online transmission of protected works does not only affect the making available right. It also generates permanent or transient copies, which fall under the reproduction right. Contrary to the making available right, these acts of reproduction do not have any cross-border nature 80. The result is that attempts to localise the making available right in one (country of origin) or several (exploitation scenario) identified Member States and to apply those laws to the entire operation of online transactions will be undermined if the reproduction resulting from the act of making available takes place in other Member States.

Country of origin

a) Jurisdiction

36. Principles. The application of a country of origin principle would have no impact on the issues of jurisdiction, as the rules used to determine jurisdiction are based on an assessment of the factual elements of the dispute, independent from the analysis on the substance (see Study to review options for developing the relationship between the reproduction right and the making available right in the context of the cross-border transmission of digital content, 148). Applying the rules provided by the Brussels I Regulation, an intermediary might be sued in the Member State where it is established. It is conceivable that it could be brought to court in another Member State on the basis of art. 6 of the Brussels I Regulation, together with another defendant established in that Member State.

37. Peer-to-peer networks. Based on art. 8(3) InfoSoc Dir., the operator of a peer-to-peer network should be sued in the country where it is "established" for the delivery of an injunction that might have cross-border effects. An action against an internet access provider might take place in the Member State where it is established. The injunction should have no cross-border effect, as the activities of the access provider are limited to one single Member State. Whether the making available right is infringed in that Member State does not affect the establishment of territorial competence and does not affect the action for an injunction (at this stage). The country of origin principle does not affect the application of the rules on jurisdiction to peer-to-peer networks.

38. Streaming or downloading websites. A claim against the hosting provider of a website infringing the making available right should be introduced in the country where it is established (defendant's domicile). The injunction delivered in that Member State could have an effect in the other Member States where users get access to the infringing website, but the injunction will remain territorial as it will apply to the hosting provider exclusively. Whether the making available right is infringed in that Member State does not affect the establishment of territorial competence and does not affect the action for an injunction (at this stage). An action against an internet access provider might take place in the country where it is established, which will correspond to the country where the works are received by the end-users.
The injunction should have no cross-border effect, as the activities of the access provider are limited to one single country. The country of origin principle does not affect the application of the rules on jurisdiction to streaming or downloading platforms.

b) Applicable law

39. Principles. The country of origin principle designates the application of the law of one Member State, instead of several legislations, depending on the criteria used to define the country of origin. The restricted act is analysed as taking place in one single Member State. In accordance with the principle lex loci protectionis, the law applicable will be the law of the country of origin, as it is the place for which the protection is sought (see Study on the application of directive 2001/29/EC on copyright and related rights in the information society, 148-149).

40. Peer-to-peer networks. Each user of the peer-to-peer network who "shares" content with other peers is likely to commit a new infringement of the making available right, taking place in her own country of origin (where the "peer" is domiciled, where the act of upload has taken place, or any other criterion). Should the right holder attempt to obtain an injunction against the operator of the peer-to-peer network (acting as an intermediary whose services are used by its users for infringements), the existence of an infringement will be assessed by the law of each country in which people use the network 81. In order to get a cross-border injunction based on national art. 8.3 provisions against the operator of the peer-to-peer network, the right holder will have to rely cumulatively on the laws of each Member State where the network is used. An action against an internet access provider should be ruled under the law applicable in the Member State of the service provider, defined as the country of origin of the act of making available taking place in that country. De facto, the country of origin principle would not have an impact on the rules determining the law applicable to peer-to-peer networks.

41. Streaming or downloading websites. The territoriality issues may cause more complications for right holders acting against intermediaries with regard to streaming websites, at least in those Member States where the end-users of a streaming website are not held to infringe copyright by the mere reception of a communication. In those Member States, only the provider of the streaming website may be held to infringe the right of communication to the public/making available. It is important to know, then, whether an infringement needs to be established in the Member State where the intermediary provides its services: can an injunction be obtained against an intermediary providing access to an end-user who does not infringe any copyright by consulting content streamed without the right holders' consent? A claim against an intermediary to put an end to an infringement must satisfy two conditions: (1) an infringement of copyright and (2) the use of its services for this infringement (supra, p. 18-21). Whether an infringement occurs should be assessed by the law of the country of origin, as the lex loci protectionis. Whether the intermediary's service is "used" for the infringement should then be assessed according to the same law (which is however harmonised under the InfoSoc Directive).
Following the principles explained supra, it could be argued that an infringement need not be demonstrated in the territory where the intermediary is established, although this is a point that still needs to be confirmed by the CJEU. Under such an assumption, an injunction against a hosting provider of the website or against an access provider giving access to the website to an end-user would be assessed under the law of the Member State defined as the country of origin. This means that actions could be undertaken against several intermediaries in several Member States (jurisdiction) and that different national courts would assess whether an act constitutes an infringement under the copyright law of the country of origin. The competent courts would grant injunctions according to the laws of the country of origin, with effect in the (other) Member States where the intermediaries offer their services. It is however still possible that some aspects remain subject to the law of the forum, such as procedural questions (which may determine as well whether the injunction can be granted). This is however a hypothetical situation for now.

In practice, though, the application of a country of origin principle may cause a judge in another Member State to refuse an injunction against an intermediary when she finds no infringement in her jurisdiction. A national court could be tempted to look for an infringement by the end-user, who uses the access provider's services to access infringing content outside the country of origin. As mentioned, some courts verify whether an infringement can be found in the Member State where the intermediary offers its services before granting an injunction (such as a blocking order). The localisation of an infringing act of making available to the public in a country of origin could lead to the conclusion that no distinct infringement exists in other Member States and that right holders cannot obtain injunctions in other countries than the country of origin. For example, the operator of a streaming website makes works available, without the right holders' consent, in Member State A. The right holder would like to block access to this streaming service in Member State B. The judge in Member State B will not find an infringement in Member State B but only in Member State A, as the country of origin. The judge could then be inclined to refuse the blocking order for this reason. The decision of the Court of Justice in Kino.to could serve the argument that the person making the work available without authorisation in Member State A still uses the services of an intermediary (such as an access provider) in Member State B, since the intermediary makes possible the transmission of unauthorised content to its subscriber, at her demand. Uncertainty will however subsist until the CJEU has taken a position on this specific issue of territoriality. Should the Commission consider adopting a localisation criterion that places an infringement in one Member State only, it may want to clarify this point to avoid legal uncertainty in practice, i.e. the localisation of the infringement and the use of the intermediary's services for an infringement in another Member State.

The country of origin principle might simplify the choice of the applicable law in the context of actions against intermediaries whose services are used by streaming or downloading websites.

Country of exploitation

a) Jurisdiction
42. Principles. In the exploitation scenario, the act of making available is localised in the Member States where the work is exploited, i.e. where the national public is targeted. In case of a purely national exploitation, the act of making available will be localised in that Member State. When the work is addressed to several national publics, to a Europe-wide public or to no national public in particular (e.g. the Pirate Bay), the exploitation takes place in several Member States and the act of making available is localised in several Member States (see Study on the application of directive 2001/29/EC on copyright and related rights in the information society, 166-167). As with the country of origin principle, the application of the exploitation scenario would have no impact on the issues of jurisdiction.

43. Peer-to-peer networks. The operator of a peer-to-peer network should be sued in the country where it is established. The injunction delivered against the network could have a cross-border scope. An action against an internet access provider might take place in the country where it is established. The injunction should have no cross-border effect, as the activities of the access provider are limited to one single country.

44. Streaming or downloading websites. A claim against the hosting provider of a website infringing the making available right should be introduced in the Member State where it is established. The injunction delivered in the Member State of establishment could have a cross-border effect. An action against an internet access provider might take place in the country where it is established. The injunction should have no cross-border effect, as the activities of the access provider are limited to one single country.

b) Applicable law

45. Principles. If the exploitation scenario is applied as a material rule defining the place where the restricted act happens, the making available right will be localised in each Member State targeted by a cross-border exploitation of works on the internet. In accordance with the principle lex loci protectionis, the law applicable to such exploitation might be the law of the country of transmission of the work or the laws of each country targeted. The choice of the latter interpretation would be a logical consequence of the application of the exploitation scenario (see Study on the application of directive 2001/29/EC on copyright and related rights in the information society, 167).

46. Peer-to-peer networks. As each user of the peer-to-peer network simultaneously receives and transmits works on the network, the law applicable to the operator of the peer-to-peer network would be the law of each country in which people use the network 82. In order to get a cross-border injunction against the operator of the peer-to-peer network, the right holder will have to rely cumulatively on the laws of each Member State where the network is used. An action against an internet access provider, as an intermediary whose services are used for the infringement, would then be assessed under the law applicable to the infringement in the Member State of the service provider, which is simultaneously the place of transmission and of reception of the works. In practice the place of the infringement and the area where the intermediary offers its services will often coincide. An infringement can be found where the national public is "targeted" and right holders will give priority to bringing legal actions in those Member States where an important public exists for the infringing services.
The need to clarify the (geographic) relation between the infringement and the use of the intermediary's service is then felt to a lesser extent.

47. Streaming or downloading websites. A claim against the hosting provider of the website or against a service provider giving access to the website will be decided according to the law of the Member State where the work is made available, i.e. the law(s) of the Member State(s) targeted by the exploitation of the work (the country of reception, i.e. the Member State(s) where the access provider(s) are situated). Both the infringement and the measures against the intermediary will be assessed by this law. In practice an infringement will commonly be established in the Member State where the service provider offers its service (for which the right holder may want an injunction), hence the need to clarify this point (the analysis of the "use" of the services and finding an infringement in the Member State where the service is offered) is less urgent. It is not excluded that the law of the country of transmission be applied as the lex loci protectionis, but this is perhaps less straightforward in practice. The exploitation scenario would simplify the choice of the law applicable in the context of proceedings against intermediaries whose services are used by streaming or downloading platforms, as it leads in practice to the application of the laws of the countries of reception of the work.

C. Conclusion on the issue of injunctions against intermediaries

48. When right holders act against an intermediary whose services are used by a third party for an infringement, they mostly try to obtain injunctions on a country-by-country basis. In theory, it is however not excluded in the current legal framework that a cross-border injunction be obtained against several intermediaries situated in different countries. The rules of private international law do not exclude such a measure (several defendants may thus be brought before one court, which is expected to apply different national laws on the merits, which makes such an initiative less appealing).

49. We have applied the existing provisions on injunctions against intermediaries, on jurisdiction and on applicable law to the two scenarios under consideration (making available localised in the Member State of origin or in the Member States of exploitation). It seems that some national courts verify whether an infringement exists in the Member State where the intermediary (against which the injunction is requested) offers its services. The InfoSoc Directive is however silent on this point. Based on the CJEU decision in Kino.to, it could be argued that this is not required to obtain an injunction against an intermediary. This matters in those cases where an injunction is sought against the access provider of the end-user, who consults an infringing website (e.g. a streaming website) and does not commit a distinct infringement by doing so. Arguably, the transmission to the end-user (at her request) is an essential part of the (composite) act of making a work available to the public (the infringement) and the intermediary's services are used to complete this act. This understanding entails that no distinct infringement is required in the Member State of reception in order to obtain an injunction in that (or another) Member State.
Should this interpretation prevail, it seems that the same principles would apply to infringements localised according to a country of origin or an exploitation criterion. It is then not strictly necessary to adopt legislative measures and thus explicitly provide for the possibility of obtaining an injunction against intermediaries where no distinct infringement can be established in the territory where they offer their services. However, to the extent that there is some uncertainty on this point among national courts, the relation between the infringement, the use of the intermediary's services for the infringement and the territorial localisation of both should be clarified.

In peer-to-peer networks, each final user performs a new act of making available when she uses the network. When the right holder acts against the intermediary, she can seize the courts of the domicile of the defendant or possibly, in the context of proceedings against several access providers, the courts of one of the Member States where these intermediaries are located. These are factual criteria, hence the outcome should not be affected by a change of the substantive law (the definition of the making available right and its localisation). As far as the applicable law is concerned, the courts continue to apply the lex loci protectionis, i.e. the law of the country where the work is made available to the public. Each user of a peer-to-peer network both downloads and uploads the work and thus reproduces the work and makes it available to the public. Both acts are localised in one Member State if the country of origin criterion is applied, i.e. where the "peer" user resides or where she uploads the works to the network. Where the acts are localised in the countries of exploitation, infringements are likely to be found in every country where the network is used (the whole public of "peers" is thus targeted). In either approach, infringements of the exclusive rights can be found in several Member States.

As far as streaming websites are concerned, the proceedings against intermediaries should be brought in the Member State where they are established (jurisdiction in the place of establishment/domicile) or, in the context of an action against several access providers situated in different countries, in the Member State where one of them is located. As these are factual criteria, a change of the substantive law would not affect these principles. As to the applicable law, the application of the country of origin principle or the exploitation scenario might help determine the lex loci protectionis. As the Member State(s) where the restricted act happens would be identified, the law applicable would be either the law of the Member State of transmission (country of origin) or the laws of the Member States of reception (exploitation scenario). It could be argued that it is not necessary to establish an infringement in the Member State where the intermediary's services are used, as long as an injunction can be granted for those cases where an infringement exists in another Member State and the intermediary's services are nonetheless used for this infringement. To the extent that there seems to be confusion among the national courts (and the Commission considers that an intervention is called for), this relation between the infringement, the use of the intermediary's services and the localisation of both should be clarified.
50. Finally, it is important to recall that those localisation scenarios will have a very limited impact if they only apply to the making available right and if they do not have an effect on the acts of reproduction performed in the context of the online exploitation of protected works. The online transmission of works indeed generates several acts of reproduction, which may not always be exempted by the exceptions provided by art. 5 of the InfoSoc Directive. Consequently, even if particular criteria apply to the localisation of the making available right, the applicable law in the context of proceedings against intermediaries might still be determined depending on the localisation of the acts of reproduction (Member State for which a protection is sought).

III. Transitional measures

A. Impact of the country of origin principle

51. Country of origin principle. In the previous Study (Study MARKT/2012/013/Dl/ST/SC on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society), it was examined whether a "country of origin" principle could be applied to localise a protected act of making available to the public in one Member State 83. This conception, similar to the "country of origin" principle in the SatCab Directive, would entail that the work is made available in one Member State only, although it would be accessible in all other Member States. Although this act would have effects in other Member States, no protected act of making available would take place there. One of the main points of attention of this approach was the risk that the act of making available in one Member State undermines the exploitation of the work in other Member States 84. When several content providers obtain a licence to make the work available in different Member States, their arrangement would only refer to the Member State of origin (i.e. where the work will be uploaded or where the content provider has its establishment) but not to the Member States where the work would be accessible. A content provider established in a Member State A could thus target the public in Member States B and C and thus direct the exploitation of the work to a specific public outside the Member State of origin A, without actually performing an act of making available to the public in those targeted Member States B and C. This could undermine the value of the licence that a competing content provider would have obtained for Member States B or C. This situation can be averted if the right holder and the content providers have agreed by contract that access to the work should (technically) be restricted to certain Member States of exploitation 85. Even then, the parties must pay attention to the ruling of the CJEU in Premier League 86.

52. Impact. The application of a country of origin principle to localise the act of making available in one Member State, if adopted, would thus have a serious impact on the economic model of the online exploitation of works. The economic consequences of such a switch should be considered carefully in an economic study, but for now it can be assumed that it would thoroughly affect the exploitation agreed upon in existing contracts. In principle, a modification of a law does not have an immediate effect on the contracts concluded under the previous legislation: rights that were lawfully granted and acquired under the applicable legislation remain granted and acquired under the modified law. Only rarely are laws adopted with retroactive effect.
However, the law does have immediate effect on the acts performed after its entry into force. When a new act of making available is performed after the country of origin principle has entered into force, it will be localised in one Member State. This new localisation criterion will thus have a bearing on the situation of the right holders and content providers having agreed on a licence under the previous legislation.

Example. By way of example, the online exploitation rights on a film have been divided between two film producers, each enjoying a territorial exclusivity for the indicated Member States. Film producer A holds the making available rights for Austria and film producer B holds the making available rights for Germany. When the parties agreed on this geographical division, this meant that producer A could control the online exploitation of the film in Austria, i.e. any distributor that wanted to make the film accessible on an online platform in Austria would have to acquire a licence from producer A. Producer B would then control the online exploitation in Germany and any content provider that wanted to make the film available to the German public would have to acquire a licence from producer B. This division would also affect the capacity to act in case of infringement. Suppose that the country of origin criterion is applied to localise any act of making available, e.g. in the Member State where the content provider has its establishment. A content provider wishes to acquire a licence to make the film available, after this new rule has entered into force. A content provider having its establishment in Austria will make the film available in Austria, but the film may well be accessible in Germany. Moreover, this content provider may even specifically target the German public (specific publicity, promotional actions in cities in Germany, ...) and still the act of making available will occur in Austria (where it has its establishment). Producer B, which has acquired the making available right for Germany, cannot weigh in on the licensing process and cannot claim infringement of its rights since no protected act takes place in Germany. Yet producer B has invested in the production of the film and has negotiated its rights accordingly. The economy of the contract between producers A and B is thus thoroughly affected by this modification of the law.

This simple example illustrates that a change of the definition of the making available right and the localisation of any act in one Member State would have far-reaching consequences for existing contracts. Moreover, in a European context, there is a risk that all content providers choose an establishment in one Member State (e.g. for tax reasons or other reasons not related to copyright or, conversely, precisely for the weaker copyright protection in that Member State) and that consequently all works are made available in that one Member State, while the public of all other Member States is served from that one Member State of origin. This change of the law could thus entail an important shift in the economy of the contract. Depending on the national laws of contract, the parties may experience difficulties in putting an end to existing contracts due to a change of the law, or they may simply not be entitled to end their arrangements for this reason.
We have no empirical data on the nature of copyright contracts (territorial exclusivity, duration of the contracts, mode of remuneration and expectation of profit, ...); the impact of such a modification should therefore be examined in an economic study. If the adoption of the "country of origin" principle to localise acts of making available is likely to disrupt existing relations, then transitory measures may be welcome, including the possibility for parties to renegotiate their contracts without being held liable for any breach of contract.

B. Transitional measures in the SatCab Directive

53. Transitional provisions in the SatCab Directive. The SatCab Directive localises satellite broadcasts according to a "country of origin" principle (art. 1(2)(b) SatCab Dir 87), thus clarifying that there is no relevant act in the Member States where the broadcast is received (footprint) 88. This directive provides for transitional provisions (art. 7 SatCab Dir). It was anticipated in the travaux préparatoires that the immediate application of the provisions on satellite broadcasting could be a source of difficulty when an existing agreement was in place 89, especially where broadcasting rights were divided between right holders per geographical area. It was felt that in most cases the difficulty could be resolved by reinterpreting or renegotiating the agreement. A grace period was given to allow the interested parties to find a solution. The SatCab Directive provides that "agreements concerning the exploitation of works and other protected subject matter which are in force on the date mentioned in Article 14 (1) [the entry into force of the implementation laws] shall be subject to the provisions of Articles 1 (2), 2 and 3 [definition of satellite broadcasting and localisation; the broadcasting right and acquisition of broadcasting rights] as from 1 January 2000 if they expire after that date" (art. 7(2) SatCab Dir). The SatCab Directive thus provides a transitional regime for existing agreements. It explicitly stated that the newly defined satellite broadcasting right would only apply to existing agreements, concluded before the entry into force of the national laws implementing the SatCab Directive (1 January 1995), after five years had passed (i.e. not before 1 January 2000). Many of those existing contracts would expire in that period or would be subject to renegotiation, so the new regime could be taken into account in those negotiations if the parties still had an interest (rec. 18 preamble SatCab Dir) 90.
54. The SatCab Directive also provides an arrangement for international co-production agreements: "when an international co-production agreement concluded before the date mentioned in Article 14 (1) between a co-producer from a Member State and one or more co-producers from other Member States or third countries expressly provides for a system of division of exploitation rights between the co-producers by geographical areas for all means of communication to the public, without distinguishing the arrangement applicable to communication to the public by satellite from the provisions applicable to the other means of communication, and where communication to the public by satellite of the co-production would prejudice the exclusivity, in particular the language exclusivity, of one of the co-producers or his assignees in a given territory, the authorization by one of the co-producers or his assignees for a communication to the public by satellite shall require the prior consent of the holder of that exclusivity, whether co-producer or assignee" (art. 7(3) SatCab Dir). This provision addresses the situation where the exploitation rights are divided among the co-producers of an audiovisual work along territorial or linguistic lines 91. The adoption of the "country of transmission" principle to localise the satellite broadcast would allow a co-producer to transmit the film into the territory for which another co-producer expected exclusivity (when the international co-production contract was concluded). To protect the acquired rights and the legitimate expectations of the co-producers at the time of the contract, the new satellite broadcasting right would have effect starting from 1 January 2000 (art. 7(2) SatCab Dir). Even after this date a certain protection was granted, to the extent that the consent of a co-producer could be required when its exclusivity would be prejudiced 92. The idea was to adopt an interpretation rule and to reconstruct what the parties would have agreed to in their contract had they been aware of the exploitations by satellite broadcasting and the country of origin principle. It was assumed that they would have provided a "principle of mutual consent" for any form of exploitation that could prejudice the "territorial rights" of one of the parties 93. This additional protection was available under the following conditions: (1) an international co-production agreement concluded before 1 January 1995 between co-producers of different Member States (or with a producer from third countries); (2) express division by geographical areas of exploitation rights applicable to all communication to the public rights and without distinction of satellite broadcasting; (3) prejudice to this (language) exclusivity in the territory by a communication to the public by satellite.

89 Proposal for a Council Directive on the coordination of certain rules concerning copyright and neighbouring rights applicable to satellite broadcasting and cable retransmission, presented by the Commission, COM(91) 276 final - SYN 358, available at http://eurlex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:1991:0276:FIN:EN:PDF, p. 41.
90 Th. DREIER, "Satellite and Cable Directive" in M.M. WALTER & S. VON LEWINSKI, European Copyright Law. A commentary, Oxford University Press, 2010, 1555 p., 443.
91 See on this point: M. MARTIN-PRAT & K. JORNA, "New rules for the game in the European copyright field and their impact on existing situations", E.I.P.R. 1994, (145) 151-153.
92 DREIER, "Satellite and Cable Directive", 444.
93 MARTIN-PRAT & JORNA, "New rules for the game in the European copyright field and their impact on existing situations", 152.
This rule formulated a precise answer to the prejudice that some co-producers enjoying territorial exclusivity could suffer due to the change in the law. The existing professional customs with regard to international co-production agreements were explained in recital 19 SatCab Dir and this transitory arrangement was tailored to this precise situation 94.

C. Transitional measures for the making available right

55. Transitional measures. Making available right localised in a country of origin. It can be anticipated that similar issues will occur when the making available right is localised in the European Economic Area according to a country of origin principle. As in the SatCab Directive, a transitional period could be determined in addition to the period given to Member States to transpose the directive into national law. The right holders and their licensees (or several right holders who have divided the rights among themselves) should indeed be given the time to interpret the new legal regime and to make projections of the consequences which the country of origin principle will entail in economic terms. This should give the parties the time to put an end to their contracts or to renegotiate the contract taking account of the country of origin principle. It should be taken into account that end-users may be induced to put an end to their existing contracts and look for a better or cheaper service, newly available from outside their Member State. The aggregate effect of the consumers' behaviour may also affect the position of the interested parties. In the simplest case of an author (or a unique right holder) who has granted a simple licence to one of many content providers, who pays a royalty per use (sale, download, period of time, ...), the transitional period may allow the parties to renegotiate some terms (e.g. geographical scope of the service, remuneration). This is more complicated when territorial arrangements have been made, on a non-exclusive or an exclusive basis.

Territorial exclusivity. Where certain parties enjoy territorial exclusivity under the existing agreement, this exclusivity should be maintained during the transitional period. This may be the case in sectors where the exploitation of the work is organised according to a certain "media chronology" (e.g. film distributors). These distributors should be able to continue to enjoy their territorial exclusivity without interference from competing distributors during this transitional period, as expected at the signature of the contract. Similarly, right holders that have split the rights on a territorial basis should treat new acts of making available according to the existing contract. This means that any act of making available on the territory for which e.g. one co-producer has the territorial exclusivity is only allowed to the extent that it does not affect the exploitation of the work on the territories to which other co-producers have exclusive rights. If it does affect others, these co-producers with territorial exclusivity prerogatives should be involved in the contract.

Limited (non-exclusive) accessibility.
During this transitional period, content providers that have obtained a licence for a defined territory are not allowed to make the work accessible beyond those territories, even if, according to the country of origin principle, it would be legitimate to make the work available from one of those countries and accessible anywhere else. The content provider can only apply the country of origin principle to the existing contract with the consent of the right holder, who can assess the effects on its contracts with other content providers (with rights for other territories) and on its income (considering the remuneration model in the existing agreement).

94 (19) Whereas existing international co-production agreements must be interpreted in the light of the economic purpose and scope envisaged by the parties upon signature; whereas in the past international co-production agreements have often not expressly and specifically addressed communication to the public by satellite within the meaning of this Directive as a particular form of exploitation; whereas the underlying philosophy of many existing international co-production agreements is that the rights in the co-production are exercised separately and independently by each co-producer, by dividing the exploitation rights between them along territorial lines; whereas, as a general rule, in the situation where a communication to the public by satellite authorized by one co-producer would prejudice the value of the exploitation rights of another co-producer, the interpretation of such an existing agreement would normally suggest that the latter co-producer would have to give his consent to the authorization, by the former co-producer, of the communication to the public by satellite; whereas the language exclusivity of the latter co-producer will be prejudiced where the language version or versions of the communication to the public, including where the version is dubbed or subtitled, coincide(s) with the language or the languages widely understood in the territory allotted by the agreement to the latter co-producer; whereas the notion of exclusivity should be understood in a wider sense where the communication to the public by satellite concerns a work which consists merely of images and contains no dialogue or subtitles; whereas a clear rule is necessary in cases where the international co-production agreement does not expressly regulate the division of rights in the specific case of communication to the public by satellite within the meaning of this Directive;

Interested parties. Depending on the usages and the economic models per sector of exploitation, the new regime will be more favourable to the right holder or to the content provider. The country of origin principle may have extreme consequences in certain cases. Firstly, a company that has developed the exploitation of online stores under one brand but on a national basis, in each Member State of the EU, will find that, under the country of origin principle, it makes all those works available from one Member State, i.e. where it has its establishment. It will be able to concentrate all its activities in its Member State of establishment (or any other criterion of localisation). Secondly, this has far-reaching consequences for its contracting partners in all other Member States, which can but observe that no relevant act takes place in their Member State and that their licence agreement is deprived of an object.
This may for example be the case for collective management organisations, which are organised at a national level. Should the country of origin principle be adopted, these parties should be given the time to take all necessary precautions and adapt their economic model to the new situation 95.

Sector-specific transition measures. Furthermore, it could be verified whether specific transition measures are required per sector of exploitation. As in the SatCab Directive, it could be established whether certain customs exist with regard to e.g. territorial exclusivities or the duration of the contract (indefinite terms with a possibility to give notice; determined duration; possibility to renegotiate). An empirical study should establish whether transitional measures are required per sector (music, film, broadcasting, publishing, ...) and which specific measures can ease the transition, taking into account the existing customs.

New exploitations. While the existing agreements can be enforced during the transitional period and the situation between right holders, content providers and existing competitors may be "frozen" during this period, this will not prevent new content providers from offering online services under the regime of the country of origin principle. These could be new companies or existing companies deploying activities in Member States where they were not active before. It can be imagined that such newcomers would like the benefit of obtaining a licence from the right holder in the country of origin only and of offering a service in several Member States. During the transitional period, however, the existing regime is still applicable and the national territories still define the geographical scope of the rights. Consequently, these "new" content providers may have to accept a temporary arrangement, which may be shorter than the usual term of a licence in that sector. Such new contracts would then be fitted into the existing regime (which is coming to an end) for the duration of the transitional period, to allow the right holder to keep the benefit of the existing agreements with other content providers or between right holders. Such temporary arrangements should not last longer than the transitional period.

Interpretation of the existing contracts. During a transitional period it may thus be justified that the existing contracts be shielded from the application of the country of origin principle (as was provided in the SatCab Directive). However, if the existing contracts have not been modified after this transitory period, the making available right will be interpreted according to the new definition and it will be localised according to the country of origin principle. Where the contract provides an end date beyond the transitional period, the Directive could require the Member States to propose a solution under their national (contract) laws to allow the parties, or one of the parties, to put an end to the contract before its end date without being liable for breach of contract.

Part 2 - The relation between the rights of making available and reproduction

I. The reproduction right in Europe

56. The reproduction right has been harmonised in the InfoSoc Directive. A broad scope, based on a technical definition, was provided to the reproduction right (sub A.1), so that it covers copies made for the exploitation of works through digital systems (sub A.2).
Nevertheless, the InfoSoc Directive also provides exceptions and limitations to the reproduction right that may be used to legitimate those copies. The present Study is limited to the analysis of the exception for temporary copy (sub B.1) and for private use (sub B.2).

A. Reproduction right

57. We will first discuss the scope of the reproduction right as harmonised in the Information Society Directive and how it has been implemented in the selected Member States. Then we will examine how this right relates to the harmonised right of making available to the public.

Scope of the reproduction right

58. The reproduction right is the very essence of copyright 96. It is common to all copyright systems 97 and, in some countries, it is even the oldest right given to copyright owners 98. It is protected under art. 9 of the Berne Convention 99. According to the WIPO Copyright Treaty, the reproduction right, as set out in the Berne Convention, fully applies in the digital environment, in particular to the use of works in digital form. Similarly to the making available right, the reproduction right is governed by the principle of territoriality. Therefore, each time a copy of a work is made, it falls under the copyright protection of the country where the act of reproduction is localised. This means that the Member State where the reproduction is performed is competent to regulate the behaviour of the person making the copy. The copyright law of this Member State also determines the transfer of rights or which (facultative) exceptions apply. It may however not be so straightforward to localise the reproduction. At first glance, the restricted act takes place where the copy is made. But, in the current technological framework, the person who makes the reproduction may be in a different Member State than the one where the copy is stored (on a server; cf. third-party hosting or cloud services).

59. The reproduction right was harmonised at the European level in art. 2 of the InfoSoc Directive in order to make European copyright fit for the challenges of the digital age 100. As a consequence, the reproduction right is now a concept of European law and it must be given an autonomous and uniform interpretation throughout the European Union 101. The definition of the reproduction right provided by the InfoSoc Directive is based on a technical approach and is qualified as technology-neutral, in the sense that it applies regardless of the technique used and includes all the acts of exploitation of works in the digital environment. It covers "direct or indirect, temporary or permanent reproduction by any means and in any form, in whole or in part". Recital 21 of the Directive explains that a broad definition of the reproduction right "is needed to ensure legal certainty within the internal market". Furthermore, the European Commission declared that the role of the reproduction right "will increase even more in the new information society environment. Once protected material is converted into electronic form and transmitted digitally, it is much more vulnerable to exploitation by copying than in the past" 102. Because of that technical definition, the reproduction right covers any material fixation of a work. It suffices to find a reproduction in the material sense in order to find a reproduction in the sense of the InfoSoc Directive, regardless of the function or the economic value of the copy. Advocate General Trstenjak proposed to define the reproduction as "a fixation of a work in a given information medium" 103.
From the Information Society Directive, its preparatory works and the cases decided by the CJEU, it can be stated, generally, that the reproduction right protects the material act of copying, including transient copies in cache memories, satellite decoders or television screens 104. Such an extension of the reproduction right has been criticised in the scholarly literature: "The solution, favourable to right holders, is compatible with the letter of the law, which does not require that the work be engraved in marble for eternity, nor even that it be fixed permanently. It is nonetheless open to criticism. Apart from forcing the meaning of the word 'fixation', it proceeds from a purely technical approach which does not fit within the logic of copyright. What triggers the application of the exclusive right is an act of exploitation, and such an act must be considered globally, as a whole. Since the communication of the work to the public is in itself caught by copyright, which is of course the case for dissemination on digital networks, it is pointless and dangerous to segment the process artificially in order to identify distinct acts of reproduction, so as to subject to the control of the author or of her successor in title all the temporary fixations that allow the information to be conveyed and, downstream, every display of a work on the user's screen" 105.

60. In Infopaq I, the Court of Justice confirmed the broad interpretation given to art. 2 of the InfoSoc Directive as regards reproductions of part of a work. The Court declared that "the reproduction of an extract of a protected work (…) is such as to constitute reproduction in part within the meaning of Article 2 of Directive 2001/29, if that extract contains an element of the work which, as such, expresses the author's own intellectual creation" 106. In that judgment, the Court found that the reproduction of an extract of 11 words of a press article might fall under the reproduction right. In Premier League, the Court added that the reproduction right "extends to transient fragments of the works within the memory of a satellite decoder and on a television screen, provided that those fragments contain elements which are the expression of the authors' own intellectual creation, and the unit composed of the fragments reproduced simultaneously must be examined in order to determine whether it contains such elements" 107. Those two judgments show that the Court of Justice grants an extensive interpretation to the reproduction right, contributing to provide this right with a very broad scope.

61. The definition provided in the InfoSoc Directive leaves very little room for manoeuvre to the national legislators and judges 108. After the implementation of the InfoSoc Directive in the copyright legislation of the Member States, only some minor variations remain as regards the scope of the reproduction right 109. Most of the Member States (Belgium, Germany, Italy, Spain) already understood the reproduction right or a similar concept in a broad way, so that the adoption of the Directive was generally seen as a confirmation of principles that were already applied. In other Member States (France, Luxembourg, Hungary), the Copyright Act defines the reproduction right under the traditional notion of a material fixation of a work 110. However, this does not prevent a broad interpretation of the reproduction right covering transient copies in digital form. Denmark modified its copyright law to provide a broader definition of the reproduction right, including temporary fixations 111.
In the Netherlands, the reproduction right does not cover certain temporary reproductions, which do not fall under the exclusive right 112. The carve-out corresponds to article 5(1) of the InfoSoc Directive, and acts that satisfy these conditions are not considered reproductions in the first place. This might be seen as a breach in the implementation of the Directive 113, as the Court of Justice states that the reproduction right extends to transient copies, which may have an "ephemeral existence because they are immediately effaced in the course of a technical process" 114. After the decision in Infopaq I, it was noticed that the concept of "reproduction in part", as interpreted by the Court of Justice by reference to the originality of the work, was different from the standard applied in the United Kingdom, where a copy of a substantial part of a work is found following a qualitative assessment based on a "skill, labour and judgment" test of originality 115. In the United Kingdom, the copyright law protects the work and substantial parts of the work. It seems that the concept of reproduction "in part" used in art. 2 of the InfoSoc Directive does not fully correspond to the notion of substantial part used in the UK Copyright, Designs and Patents Act.

Relation with the making available right

62. As a result of the broad definition provided to the reproduction right, the act of making available involves a number of copies, at distinct stages of the technical process, that fall under the scope of the reproduction right (except the temporary copies, according to the legislation in the Netherlands). A work may be uploaded from a user device to a server, where it is stored, and members of the public have the possibility to access the work from their own devices. The level of control that the user can exert depends on the technology chosen by the uploader 116 and on the (commercial) customs:

- Download (e.g. music, films, newspapers). The end user receives the greatest control and independence from the uploader who initially made the work available: the user stores her own copy of the work and can see, hear or read it whenever she likes;

- Streaming (e.g. audio or audiovisual content, or, similarly, browsing to consult a text or images online). A work that is streamed to the end user will remain in the latter's possession only for the purpose of seeing or hearing it. No permanent copy is saved at the user's end. For a later use, the final user depends on the availability of the work at the uploader's end.

It must be verified for each of the copies generated by those technologies whether they are reproductions (art. 2 InfoSoc Dir) and, if they are, whether they may be exempted under an exception.

63. We found disparities in the case law of the European Member States in the way the national courts qualify the copies made in the context of the exercise of the making available right. These disparities appear, for example, in the analysis of the decisions regarding web search engines. In some countries the "upload" copy is not considered a reproduction but rather part of the act of making available. In other countries such an "upload" copy is qualified cumulatively as a reproduction and as making the work available to the public. A UK High Court found that the users who upload content on a BitTorrent file-sharing site make the content available to the public. The users "intervene, in full knowledge of the consequences of their actions, to give others access to the Claimants' copyright works.
The recordings are made available to all other users of the Websites, a large and indeterminate class of people, without having to purchase them from authorised sources" 117. In that decision, the upload copies of protected works on BitTorrent indexing websites were treated under the right of communication to the public but not separately under the reproduction right. The download copies, on the other hand, were qualified as reproductions. In Belgium, in the litigation opposing Copiepresse and Google, the Belgian courts found that the same material facts attributable to Google infringed both the right of communication to the public (including the making available right) and the reproduction right of the right holders. Conversely, in the German case Vorschaubilder II, the Court of First Instance in Hamburg found that the act of including a thumbnail picture in a list of search results was an act of making available and it did not find an act of reproduction distinct from that act of making available. Contrary to what the Belgian judges decided, this reflects an absence of overlap between the right of making available and the right of reproduction 118.

64. Another issue dividing the European national courts is who should be regarded as the person who reproduces the work. The answer will determine who has the obligation to obtain the author's consent for the reproduction or who can benefit from an exemption in the national copyright legislation. It seems that the Member States answer this question in different ways. In simple cases where only one actor is involved, the answer is straightforward: the person who makes the reproduction should acquire the author's consent or rely on an exception. For example, the person who copies the music from a CD to the hard drive of her computer is the person who reproduces the work. The answer is less obvious when several actors play different roles, in particular when a technical intermediary is involved (such as a copy centre or an Internet service provider). Member States focus on different elements to decide who is responsible for a reproduction. A few examples will illustrate the differences. In France, it is generally held that the person who makes the material copy of a work, or even who intervenes in the reproduction by providing the technical means to do so, is responsible for the reproduction, even if she has acted upon the request of a final user (e.g. the user of a virtual personal video recording system) 119. It seems that the same solution is adopted under Spanish law 120.

116 In this report the "uploader" refers to the person performing the act of upload that leads to the work being accessible to a public. We prefer to use the term "uploader", since it refers to a material act of upload rather than the legal qualification "making available". Also, it contains no bias with regard to the liability for the act of making a work available to the public. The uploader could be a natural person, sharing a work via social media, via peer-to-peer networks or via a blog, or a legal entity with or without commercial intent, such as the provider of online music services or a film platform.
117 UK High Court, 28 February 2013, EMI v BSkyB, [2013] EWHC 379 (Ch), par. 40.
118 LG Hamburg, 26 September 2008, https://openjur.de/u/30461.html.
In Germany, it was held in the cases Save.tv and Shift.tv that the user of a "virtual personal video recorder" makes the reproductions of protected television programmes, not the company offering the online video recording service 121. In both cases, a broadcasting organisation (RTL and Sat1) claimed that the provider of an online video recording service ("Save.tv" and "Shift.tv") infringed inter alia its making available right. These service providers made it possible for their customers to record free-to-air programmes via an electronic programme guide and to watch these programmes at a later time and at any place (provided there was an Internet connection). Technically, the programmes that the customers selected were stored on the provider's hard disk, but at a space exclusively attributed per customer. From a legal point of view, it was crucial to decide who made the copy, both for the exception for private use and for the making available to the public. If the user indeed "produced" the recording, then it could be covered under the exception for private use (even if one copy served for the recordings of several clients). Furthermore, the German courts refused to qualify as an act of making available the individual reproduction of a work at the user's demand on a service provider's hard disk. The Court rejected the argument that the customers, who have ordered the recording of the broadcast, can be collectively considered as a public, and consequently found that no acts of making available to the public were made 122. In respect of the "cyber locker service" Rapidshare, it was held that not the service provider but the user of the service made the reproduction of the works 123. According to the Belgian Supreme Court, the person who materially makes the reproduction or who orders it should acquire the author's consent or qualify for an exception, even if the reproduction is made in a copy centre 124. In the Meltwater cases, the question was whether the customer of a press monitoring company made protected reproductions when she receives the press overviews per e-mail or when she views these overviews on the website of the company. The copies made while browsing or caching gave rise to a debate on the application of the exception for transient copies. On the e-mail copy, it was considered common ground that Meltwater's customers need a licence for this reproduction, which is not a temporary one 125. These few illustrations suffice to observe that diametrically opposed principles may be followed in different Member States. Member States may see either the ultimate beneficiary of the copy (the final user) or the technical copier (the service provider) as the person who reproduces the work. That person either has to obtain the authorisation or qualify for an exception. It follows that it has to be verified, for each reproduction, which national principles apply (taking into consideration the conflict of law rules) and to which effect. One possible outcome is that the reproduction requires the author's consent, to be acquired either by the service provider or by the final user (who ordered the reproduction and/or benefits from it). Another possible outcome is that the national copyright act contains an exemption in favour of the final user that is not ruled out because of the intervention of the service provider 126.
65. These examples show that issues remain as to the scope of the reproduction right, in particular with regard to the copies made when works are made available on digital networks where an intermediary (service provider) is involved. This may cause disparities between the Member States of the European Union, despite the harmonization provided by art. 2 of the InfoSoc Directive. These difficulties could be progressively solved by the Court of Justice, which is competent to propose a uniform interpretation of the InfoSoc Directive and, in doing so, to define which technical acts fall under the exclusive right. Nevertheless, in the actual context, the impact of the existing disparities in the interpretation of the reproduction right should not be neglected. As the reproduction right is governed by the principle of territoriality, these disparities might undermine any attempt to mitigate the effects of the territoriality of the making available right.

B. Exceptions and limitations to the reproduction right

66. The Directive provided a list of exceptions and limitations to the reproduction right and to the right of communication to the public aiming to ensure a functioning internal market (art. 5 InfoSoc Dir). Amongst these limitations, the present study focuses on the exception for temporary copies 127 (sub 1) and on the exception for private use (sub 2).

Exception for temporary copies

67. The exception allowing acts of temporary reproduction provided by art. 5.1 has specifically been drafted to allow and ensure the development of new technologies and to safeguard a fair balance between the rights and interests of right holders, on the one hand, and of users of protected works who wish to avail themselves of those new technologies, on the other 128. It states that: "Temporary acts of reproduction referred to in Article 2, which are transient or incidental [and] an integral and essential part of a technological process and whose sole purpose is to enable: (a) a transmission in a network between third parties by an intermediary, or (b) a lawful use of a work or other subject-matter to be made, and which have no independent economic significance, shall be exempted from the reproduction right provided for in Article 2." Based on a technology-neutral approach similar to the one used to define the reproduction right, this exception mitigates the consequences of the broad definition provided by art. 2 of the InfoSoc Directive that includes "temporary reproduction by any means and in any form". Indeed, it was found necessary to allow certain copies forming a part of a technological process, such as copies needed to enable browsing or caching (rec. 33).

68. The exception for temporary acts of reproduction is the only mandatory exception provided by the InfoSoc Directive. Consequently, it has been literally implemented in all the Member States of the European Union, except in the Netherlands, where it is used to define the limits of the exclusive right as it does not cover temporary reproductions 129. In Belgium, the text was implemented with minor deviations in terminology 130.

126 Other solutions have been developed at the national level. In Germany, inconsistencies in the licensing of the making available right and associated reproductions have been tackled via the conditions for valid licences (centred on the exploitation of a work, not the economic rights as such). See LG München 25 June 2009 and OLG München 24 April 2010. An appeal before the Bundesgerichtshof is pending. In Canada, the Supreme Court has approached this issue differently, by giving guidelines on the qualification of resp. on demand downloads and streaming as either reproductions or communications to the public via telecommunication.
127 The Study on the Application of directive 2001/29/EC on Copyright and Related Rights in the Information Society contains a first analysis of this exception (see p. 113-118).
128 CJEU 4 October 2011, joined Cases C-403/08 and C-429/08, Premier League, par. 164.

69. The Court of Justice examined the conditions provided by art. 5(1) in Infopaq I and II, as well as in Premier League 131. The Court first declared that the exception has to be interpreted strictly, being a restriction to the exclusive right of the author 132, and cumulative conditions have to be met, in the sense that non-compliance with any one of them will lead to the act of reproduction not being exempted 133:
 the temporary copy must be transient or incidental;
 it has to be an integral and essential part of a technological process;
 its sole purpose should be:
o to enable a transmission in a network between third parties by an intermediary;
o or a lawful use of a work or protected subject-matter;
 the act must have no independent economic significance.
We will successively analyze these conditions and how these have been interpreted by the CJEU and by the national courts.

70. Transient or incidental copy. The InfoSoc Directive distinguishes two kinds of temporary acts of reproduction: the transient and the incidental ones. The transient copy is an ephemeral act of reproduction. According to the decision Infopaq I, an act can be held to be 'transient' "only if its duration is limited to what is necessary for the proper completion of the technological process of which it forms an integral and essential part, being understood that that process must be automated so that it deletes that act automatically, without human intervention, once its function of enabling the completion of such a process has come to an end" 134. This means that the transient character of the reproduction should be assessed by reference to the "proper completion" of the technological process (a relative rather than an absolute assessment). The Court does not provide an explicit rule to evaluate how long the copy could last to be qualified as transient. In the litigation opposing Google and Copiepresse, the Court of Appeal of Brussels ruled that Google could not benefit from the exception for temporary reproductions. According to Google, the reproduction of the press articles on its servers was an act of caching authorized under that exception. Google expressly relied on recital 33 of the InfoSoc Directive, according to which copies which enable browsing as well as caching to take place should be allowed by the exception for temporary copies to the extent that they meet the conditions listed in art. 5.1. However, the Court of Appeal considered that the acts of caching proposed by Google exceeded the concept of caching targeted by the InfoSoc Directive, as Google did not only use the cached copies to enable its system to function efficiently, but also made them available to the public. Furthermore, the Court of Appeal decided that the copies made by Google could not be qualified as "transient", because Google stored them as long as the press article was available on the website of the newspaper, possibly during several weeks, months or years.
For that reason, the Court found that the duration of the reproduction was not "limited to what is necessary for the proper completion of the technological process in question" 135. On 22 December 2011, the Court of Appeal of Barcelona decided a litigation between AGEDI and AIE (two Spanish collecting societies for phonogram producers and performers) and a radio station that offered songs through the internet by means of simulcasting (streaming) and webcasting (downloads available on demand). The court concluded that the limitation for temporary copies could not exempt the technical copies of the broadcasts made to enable the simulcast and webcast transmissions, since these copies were available for longer than necessary to complete the broadcast. The Court relied on the requirement expressed by the Court of Justice in Infopaq I, according to which the copies must be automatically deleted. The court made no distinction between simulcasting/streaming and webcasting/downloading 136. In France, the company Wizzgo launched an online video recorder in 2006 allowing users to download the TV programs of 18 broadcasters for free. These broadcasters brought claims against Wizzgo as they considered that it was infringing the authors' rights and related rights they held. Wizzgo considered that its online offer was based on two successive reproductions of the audiovisual works, both legitimated by an exception to copyright. According to Wizzgo, it made a first encrypted reproduction of the TV programs on the end-user's demand. That copy was stored on Wizzgo's server until it was downloaded by the end-user. Consequently, Wizzgo argued, it should benefit from the exception for temporary reproductions. A second decrypted copy was then made by the end-user on her computer. In Wizzgo's opinion, this copy did not require the author's consent either, since it was covered under the exception for private copy, allowing the end-user to watch the recorded TV program. The French Courts rejected Wizzgo's argumentation. The Court of Appeal in Paris considered that the system proposed by Wizzgo was generating only one act of reproduction, made by Wizzgo for the benefit of the end-user. The fact that the copy was first encrypted and then decrypted did not mean that two successive reproductions were made. The Court of Appeal ruled that the duration of the only copy made was not limited and, consequently, did not analyze it as a temporary act of reproduction: "Qu'il s'en infère que le service ne génère qu'une seule et unique copie, créée par la société Wizzgo et destinée à l'utilisateur final lequel aura le loisir de la conserver, ce qui n'est pas démenti, sans limitation de durée" (free translation: it follows that the service generates one single copy, created by Wizzgo and intended for the final user, who will be free to keep it, which is not disputed, without any limitation in time). That interpretation is consistent with the ruling in Infopaq I, according to which the removal of the transient copy must happen automatically and may not depend on a human intervention.

71. Contrary to the notion of transient copy, we found few judgments relying on the concept of "incidental copy". The use of the word "incidental" means that the reproduction lasts longer than a "transient" copy. The relation between the transient and incidental copies was exposed by the Supreme Court in the case Meltwater. The English judge described the role of an incidental copy as follows: "If, as I consider, the copies made in the internet cache or on screen are "transient", it is strictly speaking unnecessary to consider whether they are also "incidental". But I think it clear that they are.
The software puts a web-page on screen and into the cache for the purpose of enabling a lawful use of the copyright material, i.e. viewing it. The creation of the copies is wholly incidental to the technological process involved" 137. An incidental act of reproduction can be found when the copy is incidental with regard to the main act of exploitation of the work 138, provided that the reproduction is nevertheless not permanent, as it has to remain "temporary". That concept of "incidental copy" might prove very useful to legitimate non-"transient" copies required by the use of a technological process to make works available.

72. Integral and essential part of a technological process. Either transient or incidental, the copy has to be made because it constitutes a step in a technical process of communication 139. In other words, the copy must enable another use of the work that is executed by means of this technological process. In Infopaq II, the Court of Justice held that the fact that the temporary copy initiates or terminates a specific process and the fact that such process involves a human intervention do not alter the conclusion that it may be an integral and essential part of a technological process. The Court interpreted the condition as follows: "The concept of the 'integral and essential part of a technological process' requires the temporary acts of reproduction to be carried out entirely in the context of the implementation of the technological process and, therefore, not to be carried out, fully or partially, outside of such a process. This concept also assumes that the completion of the temporary act of reproduction is necessary, in that the technological process concerned could not function correctly and efficiently without that act. Furthermore, given that Article 5(1) of Directive 2001/29 does not specify at which stage of the technological process the acts of temporary reproduction must be carried out, it cannot be excluded that such an act can initiate or terminate that process. Similarly, there is nothing in that provision to indicate that the technological process must not involve any human intervention and that, in particular, manual activation of that process be precluded, in order to achieve a first temporary reproduction." 140 In Germany, a service provider ("Ausschnittdienst") offering its customers the delivery of excerpts from magazine articles in the form of one document, transferred by fax and by e-mail (in the form of a pdf) with the text, was sued by the publisher of two magazines because the service provider had sent an entire article from both magazines by e-mail. As to the storage of the copies in the working memory of the service provider, the Court of First Instance in Berlin found that only the intermediary of a transmission in a network between third parties is exempted by the exception for temporary copies. According to the Court, this does not include the copies in the working memories of the sender and the recipient 141.

73. Transmission in a network between third parties by an intermediary. The temporary copies should "enable transmission systems to function efficiently, provided that the intermediary does not modify the information and does not interfere with the lawful use of technology, widely recognised and used by industry, to obtain data on the use of the information" (rec. 33 InfoSoc Dir). This element does not seem to raise many difficulties.

74. Lawful use.
According to recital 33 of the InfoSoc Directive, a use should be considered lawful if it is authorized by the right holder or if it is not restricted by law. It results that a lawful use may consist of an intended use that is authorized, exempted under a legal exception, or one that is not restricted by the applicable legislation. According to the Court of Justice, the acts of reproduction covered by the exception must not exceed what is necessary for the proper completion of the technological process. Both in Premier League and in Infopaq II, the Court of Justice identified the intended purpose of the copy 142 and it assessed whether this was its sole purpose. Then, the Court verified whether the intended use was restricted under European or national law, which was not the case. The Court ruled in Premier League that the picking up of the broadcasts and their visual display in private circles does not reveal an act restricted by European Union legislation or by that of the United Kingdom, and concluded that these acts of reproduction have the sole purpose of enabling a 'lawful use' of the works 143. In Infopaq II, the Court of Justice noticed that the technological process used to enable a more efficient drafting of summaries of newspaper articles included several acts of temporary reproduction. It ruled that these acts were not unlawful, as the drafting of a summary of newspaper articles is restricted neither by European Union legislation nor by Danish legislation 144. The summarising of articles is a use that is not integrated in the technical process (of which the temporary copies are part), but this was not an obstacle to the application of the exception: the copy should enable a lawful use of a work and it was sufficient that there were no indications that the technical process was used for another purpose. In Meltwater, the UK Supreme Court decided that acts of browsing or caching (including the mere viewing, the access and the consultation of a webpage) constituted a lawful use justifying the making of transient copies generated by an end-user's use of the internet 145.

139 A. LUCAS e.a., op. cit., 351.
140 CJEU 17 January 2012, Case C-302/10, Infopaq II, par. 30-32.
141 Kammergericht Berlin 30 April 2004 - 5 U 98/02.
142 In Infopaq I the Court listed the conditions of this exception and as the fourth condition it cited "the sole purpose of the process is to enable a transmission in a network between third parties by an intermediary of a lawful use of a work or protected subject-matter" (Infopaq I, par. 54).
143 Premier League, par. 172. The Court did not consider the circumstance that in that case the copies not only made the reception of the programmes possible, but also the communication to the public in the pub. Study on the Application of directive 2001/29/EC on Copyright and Related Rights in the Information Society, p. 115.

75. No independent economic significance. The Court of Justice reminded in Infopaq II that the acts of temporary reproduction must facilitate the use of a work or make that use more efficient. The Court admitted that these acts enable the achievement of efficiency gains and, consequently, lead to increased profits or a reduction in production costs.
Nevertheless, the economic advantage resulting from these acts of temporary reproduction must not be either distinct or separable from the economic advantage derived from the lawful use of the work concerned, and it must not generate an additional economic advantage going beyond that derived from the use of the protected work in the technological process concerned 146. In assessing whether temporary acts of reproduction have independent economic significance within the meaning of Article 5(1) of Directive 2001/29, it is necessary to establish whether an economic advantage stems directly from the temporary acts of reproduction 147. Temporary acts of reproduction have an independent economic significance if they generate an additional economic advantage going beyond the advantage derived from the use of the protected work 148. According to the Court of Justice, there is an independent economic significance if the author of the reproduction is likely to make a profit due to the economic exploitation of the temporary reproduction itself or if the act of temporary reproduction leads to a change in the subject matter reproduced. Such an act no longer aims to facilitate the use of the work, but the use of a different subject matter 149. Reproductions that make access to a work possible have an economic significance (e.g. the display on a television screen) 150: since the works have an economic significance, access to the works has an economic significance and therefore the reproductions that enable this access have an economic significance. This fact by itself does not preclude the application of the exception, as long as the reproduction does not have an independent economic significance. The Court found that the reproductions on a satellite decoder and a television screen are not capable of generating an additional economic advantage, beyond the advantage derived from the intended use (i.e. the mere reception of the broadcast), and that they do not have a separate economic significance. The Court derives this from the fact that these copies are an "inseparable and non-autonomous part of the process of reception" 151.

In the United Kingdom, several commercial broadcasters took legal action against TVCatchup 152, a company that offered a live web stream of free-to-air television broadcasts. TVCatchup's service was limited to free-to-air channels and was only available to end-users residing in the UK and possessing a valid TV licence allowing them to watch television in the United Kingdom. The TVCatchup service was funded by advertising, by showing an advertisement before the live stream was viewed and by "in-skin advertising" (the viewer sees the live stream surrounded by advertising). TVCatchup captured the broadcasters' signals via a single domestic TV aerial and a single satellite dish and stored them in servers in a data centre. The signals were then encoded and finally streamed over the internet. At no stage during the process was the whole or any part of the video stream stored on any disk or other permanent storage medium. All processing took place in volatile memory. The commercial broadcasters alleged that TVCatchup's services implied a communication to the public and a reproduction of their programs. The High Court noticed that copies were created transiently in the buffers in TVCatchup's servers and on the screen of users. It stated that these copies were both temporary and transient, and that they were an integral and essential part of the technological process undertaken by TVCatchup. According to the Court, the real debate between the parties was centred on the criterion of the independent economic significance 153. It explained that this criterion had to be appreciated in relation to the technological process concerned. The High Court found that the reproduction in the buffers had no independent economic significance: what is required is that the transient copies have economic significance independently of the advantage to be derived from the (lawful) technological process concerned. To accept that the transient copies are a sine qua non of the advertising revenues is to accept that those revenues are derived from and dependent on the technological process. The revenues do not arise independently of the lawful processes, and therefore do not have independent economic significance 154. However, as the Court noticed, the judges still have to appreciate whether the transient copy enables a lawful use of the work, which depends on whether TVCatchup's activities amount to a communication to the public. In the judgment it delivered on that issue, the Court of Justice found that TVCatchup was communicating the TV programs to the public 155. After that decision, the High Court found that TVCatchup was infringing the broadcasters' rights and it declared that the relay service proposed by TVCatchup was unlawful under UK copyright law 156. As a consequence, the reproductions enabling this use are not exempted under the exception for temporary acts of reproduction and are therefore considered an infringement of the reproduction right.

152 This case was referred to the CJEU for a preliminary ruling but no questions were asked regarding the reproduction right and the exception for temporary acts of reproduction.

The Court of Appeal in Paris also dealt with the condition of the economic significance in the case Wizzgo. The Court found that the copy made by Wizzgo had an independent economic value, as it was generating advertising incomes: "Que force est de relever encore que la copie opérée par le service est dotée d'une valeur économique propre dès lors qu'à chaque copie est attaché un utilisateur et que le montant des recettes publicitaires générées par le service sera directement lié au nombre des utilisateurs du service et au volume des copies réalisées pour le compte de ces utilisateurs" (free translation: it must further be noted that the copy made by the service has an economic value of its own, since each copy is attached to a user and the amount of the advertising revenues generated by the service is directly linked to the number of users of the service and to the volume of the copies made on behalf of these users). The Court of Appeal therefore excluded the application of the exception for temporary reproduction, together with the exception for private copy.

In Germany, several litigations called the Vorschaubilder cases opposed Google and rights holders with regard to Google's image search function. In the first case, a visual artist owning a website where her images were published sued Google for an infringement of her rights, as it made available and reproduced thumbnail versions of her works. The Court of Appeal and the Bundesgerichtshof examined whether Google could rely on the exception for temporary copies in order to justify the acts of reproduction. They both considered that the exception was not applicable, as the copies made by Google were displayed on a permanent basis and created a number of possibilities for Google to attract an income, in particular by means of advertisement 157. However, Google succeeded in its defence as it showed that the way the artist had used the technical instructions for search engines allowed Google to reproduce the protected works and make these available.

In the German dispute between newspaper publishers and the service provider mentioned above ("Ausschnittdienst"), the Court of First Instance in Berlin found that the copies of articles made by the service provider in its working memory have an independent economic significance. They indeed allow the e-mail transmission by the service provider, by making a digital copy of one single original of the magazine article, at the same time serving many customers through it and enabling simultaneous access for many workers of the customers 158.

158 Kammergericht Berlin 30/04/2004 - 5 U 98/02.

In Spain, a dispute occurred between the owner of the website www.megakini.com and Google Spain, for the unauthorized reproduction and making available of its contents, by means of the Google search engine and the Google Cache service. The courts had to decide whether several unauthorized uses qualified as infringements:
• the unauthorized reproduction of the web pages' html code (and contents) in order for the search engine to operate;
• the reproduction and display of some fragments of the web page contents ("snippets") under the links resulting from the operation of the search engine by the users; and
• the reproduction and making available of the whole web page contents under the "Google Cache service".
As to the first one, both parties agreed that this was exempted under the temporary copy exception. Moreover, the Supreme Court considered that the temporary copying limitation, interpreted in accordance with the three-step test, would not allow for the cache copy service offered by Google but could certainly exempt the reproduction of fragments of the linked websites because of its insignificance and information purposes. The Court also mentioned that the non-economic significance requirement must apply to the acts of reproduction per se (that is, the reproduction of fragments and cache copying), not to any other activities that Google may entertain on its website, namely advertising 159.

Finally, a prejudicial issue coming from the United Kingdom has been referred to the Court of Justice in the context of a litigation introduced by the Newspaper Licensing Agency (NLA) against Meltwater (a company providing an online media monitoring service to business customers) and the Public Relations Consultants Association Limited (PRCA) (a professional association representing public relations providers). Meltwater sends its clients reports of articles with the headline of the selected articles, the opening words, a brief extract and a hyperlink to the article. The clients also have access to a web-page without downloading, printing or otherwise setting out to make a copy of it. However, the English judges at the court of first instance and the court of appeal found that the making of two types of copies was technically indispensable to allow the functioning of Meltwater's services: on the end-user's screen and in the internet "cache" on the end-user's hard disk. The screen copy remains on screen until the end-user moves away from the relevant web-page. The cached copy remains in the cache until it is overwritten by other material as the end-user views further web-pages. The NLA sued Meltwater and the PRCA for copyright infringement because it considered that Meltwater's end users need to take a copyright licence (Web End User Licence) from the NLA covering the right to receive and use copies of the newspapers' content.
It is interesting that both the High Court of Justice and the Court of Appeal found that Meltwater and the PRCA could not rely on the exception for temporary copies because the acts of reproduction they were doing had an independent economic significance: "A person making a copy of a webpage on his computer screen will not have a defence under s. 28A CDPA simply because he has been browsing. He must first show that it was lawful for him to have made the copy. The copy is not part of the technological process; it is generated by his own volition. The whole point of the receipt and copying of Meltwater News is to enable the End User to receive and read it. Making the copy is not an essential and integral part of a technological process but the end which the process is designed to achieve. Storage of the copy and the duration of that storage are matters within the End User's control. It begs the question for decision whether making the copy is to enable a lawful use of the work. Moreover, making the copy does have an independent economic significance as the copy is the very product for which the End Users are paying Meltwater." The Supreme Court however reversed this position and considered that the reproductions made in the context of browsing should be exempted under the exception for temporary copies: "The fifth condition, that the copying should have no independent economic significance, is satisfied for the same reason as it was satisfied in the Premier League case, namely that it has no independent economic value to Meltwater's customers. This is because unless they download or print out the material (in which case it is not disputed that they require a licence), the sole economic value which they derive from accessing information on Meltwater's website is derived from the mere fact of reading it on screen" 160. It nevertheless referred the case to the CJEU for a preliminary ruling (case nr. C-360/13).

76. Conclusion. Based on the several CJEU decisions on the exception for temporary acts of reproduction, it can be supposed that acts of reproduction that merely enable the reception of a transmission (at the end-user's end) can be exempted on this ground. Where a work is made available to the public in a temporary form, such as an offer for transmission by streaming, the copies on the recipient's devices are likely to be exempted under this exception. While this conclusion seemed rather uncontroversial, it is called into question by the preliminary questions to the CJEU regarding browsing and caching copies in Meltwater. These questions suggest that in certain Member States some uncertainty remains regarding the scope of that exception. Obviously, any disparity in the interpretation of the exception between the Member States will weigh on the application of the making available right. Moreover, uncertainties remain as to the application of the exception both in relation to internet technologies and other technologies. One of the main uncertainties is how to understand the "technological process" and its relation to the "sole" purpose it should be enabling (cf. Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society, p. 99). Furthermore, it is difficult to derive general guidelines from the fact-specific cases mentioned above. The different approaches make it difficult to assess other technologies and the reproductions they generate in the light of the exception for temporary acts of reproduction.
This is especially true for the act of "browsing", the example given in recital 33 of the InfoSoc Directive. The uncertainty on the status of such browsing copies then has an impact on the issues under consideration in this Study. If the "access" to the protected works by the final user should be understood entirely and only in the framework of the exception for temporary acts of reproduction (possibly by some "twist" on the sole purpose of a "lawful use"), then such use can be considered fully harmonised under the InfoSoc Directive and declared "lawful" in all Member States. Hopefully the application of this exception with regard to the acts of browsing will be further clarified by the CJEU in Meltwater. Despite the teachings of the CJEU in Infopaq I & II and Premier League, it cannot be ruled out that the CJEU sides with the UK High Court and the Court of Appeal in this case, which have considered that this exception does not extend to copies made on the end-users' devices while browsing. In that case, each reproduction that enables the end-user to access a work should be considered a restricted act that has to be authorised by the right holder or exempted by a national exception (e.g. for private use). In this event, the issue of the reproduction at the final user's end arises not only in case a work is offered to the public for download, but also when it is offered for "mere access", including the often cited cases of browsing or streaming. It should then be verified in practice whether the right holder has given her consent for this copy at the end-user's end or, alternatively, whether the national legislation contains an exception covering such use (considering the non-compulsory list of exceptions in art. 5(2) and 5(3) InfoSoc Dir 161). As will be demonstrated in the following sections, such an interpretation of the reproduction right may undermine the understanding (and localisation) of the making available right.

Exception for private copy

77. Art. 5.2.b of the InfoSoc Directive provides for an exception "in respect of reproductions on any medium made by a natural person for private use and for ends that are neither directly nor indirectly commercial, on condition that the rightholders receive fair compensation which takes account of the application or non-application of technological measures referred to in Article 6 to the work or subject matter concerned". Contrary to art. 5.1, art. 5.2.b is an optional exception provided by the InfoSoc Directive. Nevertheless, except for the United Kingdom, all the Member States selected for the accomplishment of the present Study have implemented the exception for private copy, but both the regulatory framework and the details of the scope to which private copying is permitted differ 162. Member States such as France or Germany have kept their previous regime for private copying, considering that it was already matching the requirements of the InfoSoc Directive, while other countries have slightly modified their copyright legislation. The United Kingdom chose to retain its existing exceptions and limitations and to amend them where necessary 163. The Gowers Review of intellectual property, an independent report published in 2006 focusing on copyright law in the United Kingdom, proposed the introduction of a private use exception. It was followed by a consultation launched by the UK Intellectual Property Office, which defined the private use exception in a very narrow way and denied the requirement of a levy.
The Government finally side-stepped the issue and did not implement the private copy exception. In the Hargreaves report of 2011 it was recommended once more that the British government introduce an exception to allow individuals to make copies for their own and immediate family's use on different media, thus adapting the law to the actual use and expectations of consumers and technology providers 164. The scholarly literature considers that the UK's long-standing opposition to levies appears to be entrenched and it thus seems unlikely that a private use exception that is broad enough to involve harm to the rightholder, and thus require fair compensation, will ever be introduced into UK copyright law 165. Despite the harmonization realized in the InfoSoc Directive, the national exceptions seem to vary considerably on topics such as the works being copied, the type of equipment covered, the levels of the levies and the extent to which the use of technological measures is taken into account 166.

78. Subject matter. The report made by IVIR in 2007 pointed out a lack of uniformity as regards the subject matter to which the exception applies. For example, Italy limits the exception to copies of sound recordings or audiovisual works 167, while other countries consider that the exception applies without restriction as to the subject matter.

79. Copies made by third parties. There is a debate whether copies made by a third party for the private use of a natural person might be exempted. The study made by IVIR noticed that art. 5(2)(b) does not expressly indicate whether the making of digital copies by third parties can be authorized, considering that a legal entity might rely on the private copying exception if its service (provided it is not remunerated) constitutes some form of agency 168. Intermediaries could consequently argue that the exception for private copy should allow a copy to be made for and on behalf of a natural person for private use. On the contrary, T. Shapiro considers that, in principle, reproductions made by, or for, a third party should not benefit from that exception 169. The Advocate General Trstenjak raised that issue but did not solve it in her opinion in the case Padawan: "Indiscriminately burdening an undertaking by means of a levy as compensation for private copying could not be justified, since first of all the private copies must have been made 'by a natural person', so that a reproduction 'by an undertaking' is not covered, at least on the basis of the wording. However, even looking at the reality of the situation, whereby the act of reproduction must necessarily be carried out by a natural person, for instance an employee of the undertaking, the attribution of an act of reproduction to the undertaking would raise legal questions upon which a conclusive opinion cannot be given. On the other hand, it follows indirectly from the spirit and purpose of the provision in Article 5(2)(b) of Directive 2001/29 that the copy in question must in any case be intended 'for the private use of a particular person'" 170. Hungary expressly excludes the possibility for third parties to make private copies on behalf of an individual. The French, the Dutch and the Spanish Copyright Acts apply the same solution. On the contrary, the German legislation allows third parties to make copies for the private use of another beneficiary.
Finally, the Copyright Acts of countries such as Belgium do not specify whether the use of the private copy is limited to the person who made it or if copies made on behalf of other persons are covered as well (as long as the beneficiary meets the conditions of the exception). Furthermore, the identification of the person making the copy in the context of the internet may differ from one country to the other, depending on the solutions provided by the case law. For example, in France, the service provider of an online video recorder is seen as the person making the copy. Indeed, in the case Wizzgo, the Court of Appeal in Paris denied the application of the exception for private copy because it found that the copy made, which could not benefit from the exception for temporary copies, was realized by the service provider and not by the final user: "Considérant qu'il suit de ces éléments que la copie réalisée par la société Wizzgo ne répond pas à la définition ci-avant énoncée de la copie transitoire, qu'au surplus, la copie réalisée n'est pas destinée à l'usage du copiste mais à l'usage de l'utilisateur final ; Que, par voie de conséquence, la société Wizzgo est mal fondée à se prévaloir tant de l'exception de copie transitoire que de l'exception de copie privée et ne saurait éluder les droits de propriété intellectuelle attachés aux programmes reproduits sans autorisation ;" (free translation: it follows from these elements that the copy made by Wizzgo does not meet the definition of the transient copy set out above and that, moreover, the copy made is not intended for the use of the copier but for the use of the final user; consequently, Wizzgo is not entitled to rely on either the exception for transient copies or the exception for private copy and cannot evade the intellectual property rights attached to the programmes reproduced without authorisation). On the contrary, in Germany, the individual downloading the copy might be seen as the copy maker 171. Belgium applies a similar solution, as the Supreme Court ruled in the context of copy centres that the copy is made by the user who realizes the copy in the copy centre or by the person who gives the order to realize the copy, and not by the copy centre 172.

80. Private use. According to IVIR, the exception provided by art. 5(2)(b) excludes any use going beyond domestic uses 173. In Belgium, the exception applies either to copies made in the "family circle" (art. 22, §1, 5°) or "for private use exclusively" (art. 22, §1, 4°). In France, the copyright act stipulates that the private copies are reserved strictly for the private use of the copier. However, the scholarly literature considers that the exception should cover the family circle, so that that part of the exception is to be interpreted broadly 174. In Spain, the copyright legislation states that the copy may not be used for collective or lucrative purposes 175. The exception requires that the copy is meant for private use, mostly of a natural person 176. The issue of the intended use was indirectly treated by the Court of Justice in Padawan. That case opposed SGAE, one of the bodies responsible for the collective management of intellectual property rights in Spain, to Padawan, a company selling CD-Rs, CD-RWs, DVD-Rs and MP3 players. SGAE claimed the payment of private copy levies for these devices, but Padawan refused to pay the part of the levies corresponding to the electronic devices sold to professionals, as Padawan considered that the private copy exception only benefits individuals using the devices for a private purpose. The Court of Justice ruled that there is a necessary link between the application of the private copying levy to the digital reproduction equipment, devices and media and their use for private copying.
According to the European judges, the indiscriminate application of the private copying levy to all types of digital reproduction equipment, devices and media, including in the case in which they are acquired by persons other than natural persons for purposes clearly unrelated to private copying, does not comply with Article 5(2) of the InfoSoc Directive 177. In France, the Conseil d'Etat ruled that the storage devices acquired for professional use should be exempted from the remuneration for private copy 178. These decisions have been interpreted as meaning that the private copy exception does not apply to legal or natural persons acting in a professional context 179. In several European countries (Finland, Sweden, Austria), the private copy remuneration does not apply in the context of the acquisition of devices by professional users for professional purposes, but this principle is not followed by all the Member States 180.

81. Absence of (direct or indirect) commercial advantage. In the travaux préparatoires to the Directive, the Commission explained that this condition clarifies the scope of the private use by providing that copying should be for "ends that are neither directly or indirectly commercial" 181. This condition is relevant for many (online) services where an individual makes a copy using such services or a copy is made on her behalf by the service provider. These service providers are often commercial companies offering a service with commercial intent. From the point of view of the commercial service provider, any copy made for a customer has a commercial purpose, while the same copy can be intended for the mere private use of the customer. The question is then whether the intervention of such a commercial intermediary excludes the application of the private use exceptions. The user of an online "personal video recorder" (PVR) may decide which programmes she wants to record for later viewing, but it is the provider of the online PVR that actually records (copies) the broadcast and makes it available for later viewing by this customer. In an offline context such a copy would in many cases be considered a copy for private use and exempted under the corresponding exception. In an online context, the service provider (online PVR provider but also some IPTV providers) that makes the copy mostly has a commercial objective. In some Member States the exception can only apply when the person who technically makes the copy meets the conditions of the exception. Consequently, a commercial service provider that makes a copy on behalf of its customers (who may meet the exception's conditions) will not benefit from the exception because of the commercial advantage it may derive from it (e.g. the customer's subscription fee or the advertisement revenues). In other Member States, the intervention of a commercial entity does not impede the application of the exception as long as the end-user meets the conditions of the exception. Another example is the copies kept in online storage services ("cyber lockers" or "cloud" services). Private individuals may use these services to store their private documents but also for professional purposes (as a virtual storage space comparable to a hard drive). Some Member States may exclude such use from the exception and consider that all reproductions made for professional purposes require the author's consent.

82. Fair compensation.
The rationale behind private copy levies is to provide an indirect compensation to the right holders for the loss of their reproduction right 182. According to the Court of Justice, the notion and level of fair compensation are linked to the harm resulting for the author from the reproduction for private use of his protected work without his authorization 183. Where the right holder has already received a payment (e.g. a licence fee), it may be that no additional payment is due (rec. 35 InfoSoc Dir). Many litigations at the national level, as well as before the Court of Justice 184, are centred on the levies to be paid to the right holders. Many issues are still pending regarding this topic. This aspect of the exception for private copy will nevertheless not be examined in this Study.

83. Origin of the copy. Some Member States require that the source of the private copy be legally accessed. In these legal systems, the exception may not apply if the copy originates from an illegal use of the work. In France for instance, the Supreme Court declared in 2006 that the exception for private copy requires that the source be lawful and that the copy does not harm the prerogatives of the right holders 185. This was confirmed by the Conseil d'Etat in a decision refusing a remuneration for copies made from unlawful sources: « (…) la rémunération pour copie privée a pour unique objet de compenser, pour les auteurs, artistes-interprètes et producteurs, la perte de revenus engendrée par l'usage qui est fait licitement et sans leur autorisation de copies d'oeuvres fixées sur des phonogrammes ou des vidéogrammes à des fins strictement privées ; que par suite, contrairement à ce que soutient le ministre de la culture et de la communication, la détermination de la rémunération pour copie privée ne peut prendre en considération que les copies licites réalisées dans les conditions prévues par les articles L. 122-5 et L. 311-1 du code de la propriété intellectuelle précités, et notamment les copies réalisées à partir d'une source acquise licitement » 186 (free translation: the remuneration for private copy has as its sole object to compensate authors, performers and producers for the loss of revenue caused by the use that is lawfully made, without their authorisation, of copies of works fixed on phonograms or videograms for strictly private purposes; consequently, contrary to what the Minister of Culture and Communication maintains, the determination of the remuneration for private copy may only take into consideration the lawful copies made under the conditions provided by articles L. 122-5 and L. 311-1 of the intellectual property code cited above, and in particular the copies made from a lawfully acquired source). In these countries, the permanent copies obtained from an illegal peer-to-peer website or the temporary copies made through the use of an illegal streaming website will fall under the exclusive right. In Germany, the exception is not applicable if the source of the copy is obviously illegal. In the UK, a copy unlawfully created constitutes an infringing copy 187. In Belgium, art. 22 of the Copyright Act stipulates that the exception for private copy may only apply "once a work has been lawfully published". There are debates as to the scope of that requirement. The scholarly literature is divided on whether the aim of that provision is to protect the moral right of disclosure of the author or whether it extends to the legality of the source of the copy itself 188. According to A. Strowel, that notion covers all the acts that enable a work to be made accessible to the public. Consequently, it should not allow the making of private copies through the use of unlawful sources via peer-to-peer networks 189. Other Member States (Hungary, the Netherlands) do not require such a condition. Their copyright legislation generally does not provide any guidance on the issue whether and how copies created from illegal sources can benefit from the exception. In these countries, the private copy of a work originating from an unlawful source, such as a peer-to-peer website, might be allowed 190.
In the Netherlands, the Court of First Instance in Den Hague declared in 2008 that the making of a private copy from an unlawful source is illegal and may not benefit from that exception 191. However, the Court of Appeal in Den Hague revised that decision in 2010 and decided that downloads originating from an unlawful source should be allowed 192. The interpretation provided by the Court is based on an analysis of the position adopted by the Dutch Government during the legislative process implementing art. 5.2.b of the InfoSoc Directive. The case then went to the Dutch Supreme Court, which referred a prejudicial question to the Court of Justice 193. The Court has to answer whether the exception for private copy applies regardless of whether the copies of the works from which the reproductions were taken became available to the natural person concerned lawfully (that is to say, without infringing the copyright of the right holders) or whether that limitation applies only to reproductions taken from works which have become available to the person concerned without infringement of copyright. Meanwhile, the Advocate General has delivered his opinion and concluded that article 5.2 should be interpreted as meaning that the private copying exception only applies to reproductions made from legitimate sources 194. The Advocate General also argued that the Member States have no margin to decide on this point differently and are not allowed to impose a levy on other copies than those from legitimate sources. The issue of the origin of the copy has also been referred to the Court of Justice by an Austrian Court 195. The dispute pending in Austria before the referring judges concerns the availability of protected films via the internet. That prejudicial issue results from a litigation introduced by a German film production and film distribution company holding the rights in various films, seeking an interim injunction against an Austrian internet access provider providing access to the website kino.to, where the films owned by the German company are made illicitly available. The second question referred is whether a reproduction for private use (Article 5(2)(b) of the Information Directive) and transient and incidental reproduction (Article 5(1) of the Information Directive) are permissible only if the original of the reproduction was lawfully reproduced, distributed or made available to the public. The CJEU declined to answer this question, considering that this was not necessary after its answer to the first question 196. In Hungary, there is a debate whether the nemo plus iuris principle applies to these cases (under which "one cannot transfer to another a right which he has not", that is, if the source material is infringing, the reproduction shall not become lawful, even if the user was acting in good faith) or another maxim, under which only those acts are infringing that are directly prohibited by the statute. The problem of the origin of the copy is an important issue, as it has an impact on the application of the exception for private copy to the reproductions made through the use of peer-to-peer networks or to copies downloaded from a well-known illegal website.

84. Conclusion. Due to these uncertainties, several scholars call for a review of the exception on private copying as laid down in the InfoSoc Directive 197.
The present Study will indeed demonstrate that the disparities between the European copyright laws regarding the exception for private copy may have an impact on the application of the making available right and, consequently, on the development of cross-border online services.

II. The exclusive rights in US and Canadian law

85. The purpose of this section is to describe in broad lines how the American and the Canadian copyright legislations treat online transmissions of protected works, as was asked in the Terms of reference. Our ambition is not to give a complete comparative study but to find inspiration in these two legal systems for some of the issues encountered in the EU copyright order. Consequently, we will focus on the issues with direct relevance for our purposes, rather than digging into controversies that are not pressing in the EU legal framework.

A. US copyright law

86. The American copyright law provides a list of exclusive rights in section 106 198: the rights to reproduce the copyrighted work in copies or phonorecords, to prepare derivative works based upon the copyrighted work, to distribute copies or phonorecords of the copyrighted work to the public by sale or other transfer of ownership, or by rental, lease, or lending, in some cases 199 to perform the copyrighted work publicly, in some cases to display the copyrighted work publicly 200 and, in the case of sound recordings, to perform the copyrighted work publicly by means of a digital audio transmission. In this chapter we will examine how the right of making available to the public (art. 8 WCT) has been implemented in US copyright law, how the reproduction right is understood and how the relation between both is construed in the light of online forms of exploitation (making available, streaming or for download).

Making available: distribution right, performance right

87. The right of making available to the public, as provided in the WCT and WPPT, is not included verbatim in the US Copyright Law. During the negotiations it was clear for the US administration that interactive forms of exploitation (e.g. via the Internet) were already protected under American law. After the adoption of the WIPO Treaties the substance of the copyright rights and exceptions was not changed to accommodate the making available right 201. The relevant rights that could be relied upon to cover acts of making a work available to the public, in the sense of the WCT and WPPT, are the rights of distribution (§ 106(3)) and public performance (§ 106(4)).

a) Distribution right

88. The distribution right is not defined in the US copyright law but it is clear that it covers the distribution of tangible copies or phonorecords, e.g. by sale or other transfer of ownership. It was decided that the distribution right does not only apply to copies incorporated in a tangible support or physical copies but also to digital files. In a case on peer-to-peer file sharing, it was argued that the distribution right can only apply to material copies, based on the circumstance that the person who transfers the file does not lose possession of her copy 202. The court did not agree with this interpretation: it is decisive that the recipient acquires a material object, not whether the file changes hands 203. GINSBURG points out that this also appears from the provisions on compulsory licences for making and distributing phonorecords, in which it is considered that distribution may occur by means of "digital phonorecord delivery" 204 (s. 115 USCA) 205.
It was known at the time this provision was adopted that new copies are created when works are transmitted digitally, and still it was considered that such digital phonorecord delivery was equated to distribution. It can therefore be derived that it is not necessary for the distributor to be dispossessed of the copy that is transferred to the recipient.

89. It seems that under American copyright law a controversy remains on whether the distribution right extends its protection to the availability of a work for transmission, i.e. making it available to the public 206. There are reasons to argue that the distribution right only applies when a digital file is actually transmitted and received in a destination computer. In this reasoning, it is consequently not sufficient that a work is merely offered for transmission to establish a protected act. So far the courts have not brought any certainty. According to GINSBURG, there are decisions of appellate courts in favour of including the mere making available for download in the distribution right, but these decisions do not furnish strong authority 207. The first instance courts that have ruled on the issue have issued diverging decisions and have not come to a consistent approach 208. Some authors advocate that the distribution right is sufficiently broad to protect acts of making available to the public, without having to prove that an actual transfer has taken place 209. However, the controversy seems to continue among copyright scholars, who do not agree on the scope of the distribution right 210. The current Copyright Register lists the scope of the distribution right among the major issues to be considered in a copyright reform in the US 211. Since this controversy has limited relevance for our purposes, we will not analyse this issue any further.

90. Downloads. The download of a (more or less permanent) copy of a work is considered a digital distribution of a copy. Such a download can result from peer-to-peer file sharing 212 or from a commercial service. Where a service provider offers users copies of works (including recordings) through "download transmittals", the question arises whether such a download is a distribution or a public performance of the work. It seems an accepted practice to qualify downloads as a distribution of copies. The Second Circuit Court decided in RealNetworks and Yahoo! that a download of such a work is not a public performance (infra) 213. A download was described as "the transmission of an electronic file containing a digital copy of a musical work that is sent from an on-line server to a local hard drive (...) With a download the song is not audible to the user during the transfer (...). Only after the file has been saved on the user's hard drive can he listen to the song by playing it using a software program on his local computer". The first instance court had stated the principle that a download of a work constitutes a reproduction of the work, the downloading and uploading of works via a peer-to-peer network entailing acts of reproduction and distribution of the copyrighted material 214. The Court of Appeal noted that the parties did not dispute that downloads create copies of the musical works and that copyright owners must be compensated for this.

91. First sale. The distribution right in US copyright law is limited by the so-called "first sale" doctrine, comparable to the exhaustion principle in the EU (cf. Study on territoriality and the making available right).
The US Copyright Act states that "the owner of a particular copy or phonorecord lawfully made under this title, or any person authorized by such owner, is entitled, without the authority of the copyright owner, to sell or otherwise dispose of the possession of that copy or phonorecord" (s. 109). This means that a person who has purchased a music record, such as a CD, is free to give her copy to a friend or to resell it to any other person. The friend or the second buyer will then be the owner of the CD. This resale is an act of distribution, but on the basis of the first sale doctrine the consent of the copyright owner is not required, provided that the copy thus transferred has been "lawfully made". While this doctrine is commonly applied to the distribution and re-distribution of copyright works in a tangible form, the question arises whether such is also the case for the distribution and redistribution of digital files. If it is accepted that the offer for download and the subsequent download of a copyright work should be qualified as an act of distribution, then it could be argued that the downloader (acquirer of the copy) can resell the copy without the copyright owner's prior consent.

A first instance court in New York did not follow this point of view in the ReDigi case 215. ReDigi was an "online market place for digital used music": its users could sell to other users the digital music files they had legally purchased but no longer wanted, and buy music for a lower price than "new" files sold on iTunes. ReDigi's users installed software that verified the source of the music on the user's computer (iTunes or other ReDigi users) and uploaded the eligible files to a ReDigi server for sale to other users. In principle, the reseller no longer has access to her file once it is sold to another user. The issue before the Court was whether the user could resell her digital music under the first sale doctrine. The Court answered in the negative. In this case the court found a sale on ReDigi's website, hence there was an act of distribution of the protected works. However, this sale could not be saved by the first sale defence. It was established that, in the operation of ReDigi's service, a new copy of the work was made when the music file was transferred from the user's computer to ReDigi's servers (even if the user did not keep a copy), a new copy that was considered a "reproduction" for which the author's consent was required. Since the first sale doctrine can only apply with regard to "lawful copies" and the "unlawful reproductions" sold on ReDigi's website were not "lawfully made", the copyright owner's consent could not be dispensed with. Moreover, the court held that the first sale doctrine could only restrict the right holder's control over the sale of one particular copy by the owner of that copy. In ReDigi's service, the user held one copy (since she downloaded it from iTunes to her hard disk) but she made another copy when she uploaded it to ReDigi's servers: this is not the same particular copy as the one downloaded. The conclusion is that the first sale defence is "limited to material items, like records, that the copyright owner put into the stream of commerce" 216. According to the current Copyright Register, Maria PALLANTE, the doctrine of first sale should be reviewed by Congress 217.
On the one hand, it may consider that the copyright owner should control all copies in the digital market place, taking into account that "second hand" copies are perfect copies and that the transactions with regard to such digital copies are now qualified rather as licences than as sales. On the other hand, it may find that the first sale doctrine still has a role to play in a digital world, that technology can assist in verifying whether the first owner/seller does not keep her copy, and that attention should be paid to the outright ownership of digital files (not everything should be licensed).

b) Public performance and display rights

92. The copyright owner has the exclusive right of public performance with regard to specific copyright works and sound recordings. The public performances or displays can either occur in a public place or by transmission. Section 101 of the US Copyright Act defines the latter act of performing or displaying a work publicly as "(...) (2) to transmit or otherwise communicate a performance or display of the work (...) or to the public, by means of any device or process, whether the members of the public capable of receiving the performance or display receive it in the same place or in separate places and at the same time or at different times". The public performance right could cover forms of "interactive streaming", qualified as "making available" under the WIPO Treaties 218. The US Copyright Act also contains a digital public performance right of sound recordings by means of a digital audio transmission (s. 106(6) USCA), subject to a mandatory licence for webcasting and other non-interactive services (infra) 219.

93. Private or public performance. Similarly to the European right of communication to the public, a performance is protected under American copyright when it is addressed to a public. While the existence of a "public" was not so difficult to establish for terrestrial or satellite broadcasting technologies, this is different in an interactive world where transmissions are technically based on individual requests and users of media services can choose when and where to access protected content (e.g. television programmes). Where a work is thus transmitted to one individual, it could be argued that there is no public and therefore no public performance (but a private performance instead). The Second Circuit Court ruled on this issue in the "Cablevision" case on remote personal video recorders (over the cable network) 220. Cablevision allowed its customers to record cable programmes, which were stored in individualised spaces, and to watch these at a later moment. The issue was whether such a service constituted a public performance. The Court decided that this was not the case where the transmission of a programme is technically based on one copy made by the customer (not the service provider) and occurs, on the demand of the customer, only to that person or to people within her personal circle 221. The particular transmission of a performance could only be received by the customer and her close circle, not a "public". The audience of this particular transmission mattered, not the potential audience of the work. A first instance court applied the teaching of Cablevision to "cyber lockers" 222. MP3tunes allowed users to upload their music to a personal space and to discover new music via a connected service, sideload.com.
With regard to the public performance right, the court found that there had been no infringement, since the service retained unique copies of the music files. Even though it used a deduplication technique (which allows storage economies), the court found there was no master copy at the basis of the transmissions. The Cablevision ruling was followed in the Aereo case 223. Aereo offered a web service allowing its users to watch TV broadcasts in real time or to record these programs and watch them later. The service was organised in such a way that there was an individually assigned antenna per user and storage of individual copies of the programs on a remote hard drive. The Second Circuit Court refused the broadcasters' motion for a preliminary injunction, on the grounds that it was unlikely that there was an infringement of the public performance right following Cablevision. It was considered that the transmission from an individual copy per user to that user was not a transmission to the "public" 224. The broadcasters have lodged an appeal before the Supreme Court 225. Meanwhile, a similar service named FilmOnX was brought before a different court. These proceedings led to a different outcome: the District Court of Columbia imposed an injunction against FilmOnX's services across the entire US except for the Second Circuit, where the ruling of the Second Circuit Court prevails. FilmOnX has appealed the case before the Court of Appeal of the 9th Circuit.

The distinction between transmissions based on a master copy and transmissions based on a unique copy decides whether a transmission is qualified as a public or a private one. This interpretation opens the door, according to GINSBURG, for copyright-avoiding businesses, which may design the technical architecture of their services so as to have customers make individual copies for later individual transmission, rather than providing a central copy as the basis for transmissions to each member of the public 226. A US government branch has also warned that "congressional action" may be needed, should the judicial decisions undermine a "meaningful" public performance right 227.

94. Downloads. If the public performance right protects the transmission of a performance of the work to the public by any means (device or process), then arguably the transmission for download could also be a public performance. In one case a federal court decided that this is not so 228: where a download does not allow the performances to be perceived contemporaneously, there is no performance to the public. The performance of a work requires "simultaneous" or "contemporaneous" perceptibility. Instead, where a work is downloaded and the downloader listens to it in her private circle, there is no public performance but a private performance instead 229. It was not excluded that works are perceptible during the download process and therefore qualify as public performances, but this was not the case in ASCAP's RealNetworks and Yahoo! case 230. In this case, the internet companies Yahoo! and RealNetworks offered services to their customers that involved the performance of recorded music works, but also downloads of such music. A download was defined as a "transmission of an electronic file containing a digital copy of a musical work that is sent from an online server to a local hard drive".
It was noted that with a download, the song is not audible to the user during the transfer and that she can listen to it only after the file has been saved on her hard drive, by playing it using software on her local computer. The Court considered that the "performance" of a musical work entails contemporaneous perceptibility (i.e. the performance and the perception of the performance take place at the same time), based on the language of the statute (s. 101). The Court found that downloads of songs are not "musical performances that are contemporaneously perceived by the listener. They are simply transfers of electronic files containing digital copies from an on-line server to a local hard drive. The downloaded songs are not performed in any perceptible manner during the transfers; the user must take some further action to play the songs after they are downloaded. Because the electronic download itself involves no recitation, rendering, or playing of the musical work encoded in the digital transmission, we hold that such a download is not a performance of that work, as defined by § 101" (RealNetworks and Yahoo!, p. 16). A stream is a performance: it is an "electronic transmission that renders the musical work audible as it is received by the client-computer's temporary memory. This transmission, like a television or radio broadcast, is a performance because there is a playing of the song that is perceived simultaneously with the transmission" 231.

Reproduction right

95. The owner of copyright has the exclusive right to "reproduce the copyrighted work in copies or phonorecords" (§ 106(1)). Some of the terms are defined in section 101. "Copies" are understood as "material objects, other than phonorecords, in which a work is fixed by any method now known or later developed, and from which the work can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device. The term "copies" includes the material object, other than a phonorecord, in which the work is first fixed". It is further specified that a work is "fixed" in a tangible medium of expression "when its embodiment in a copy or phonorecord, by or under the authority of the author, is sufficiently permanent or stable to permit it to be perceived, reproduced, or otherwise communicated for a period of more than transitory duration. A work consisting of sounds, images, or both, that are being transmitted, is "fixed" for purposes of this title if a fixation of the work is being made simultaneously with its transmission". Phonorecords, on the other hand, are "material objects in which sounds, other than those accompanying a motion picture or other audiovisual work, are fixed by any method now known or later developed, and from which the sounds can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device. The term "phonorecords" includes the material object in which the sounds are first fixed".

96. Download. A copy can exist in the form of a tangible, material object or as a digital file, as long as there is a fixation in the sense that the work is embodied in a copy or a phonorecord that permits the work to be perceived (either directly or by means of a device), to be reproduced or to be further communicated for a period of more than transitory duration 232. The download of a protected work, e.g. via peer-to-peer networks, entails a distribution of copies (reproduction) of the work 233.
When a user of a peer-to-peer network downloads a song, the digital file (a digital sequence representing the sound recording) is encoded on her hard disk and it can be used to reproduce the sound recording; consequently, the electronic file (or the segment of the hard disk where it is stored) is a "phonorecord" 234. It can be derived that such stable copies, e.g. on a hard drive, are considered "copies" under US copyright law.

97. Transient copies. As in Europe, the position of temporary technical copies was not clear under US law. While this issue was addressed in the Information Society Directive by an exception for temporary acts of reproduction (art. 5(1)), there is still discussion in the US whether such copies (such as RAM copies) are in fact "copies" in which a work is "fixed" and thus protected under the reproduction right. There are exceptions for ephemeral copies in the context of broadcasts (section 112), the use of computer programs (s. 117), transitory digital network communication and system caching (s. 512). Some temporary copies may qualify as fair use 235. The uncertainty on temporary copies of works has however persisted in some respects 236. While permanent downloads of works are considered "copies", there is less certainty about less stable copies, such as RAM copies. Some copies may be kept for a fairly long time, others are sure to last for seconds only. The question is then whether such copies are "sufficiently permanent" or "stable" to permit the work to be perceived, reproduced or communicated for a period of "more than transitory duration". It was held that reproductions in a computer's buffer, lasting for 1.2 seconds, were not "copies", since they were not sufficiently "fixed" 237. GINSBURG reports however that this decision may be in some tension with decisions from other Circuits and a study of the US Copyright Office 238. There is no guidance on how long a copy should last before it is an embodiment lasting for a more than transitory duration. Some uncertainty remains on the copyright status of transient copies. The current Copyright Register acknowledges that the reproduction right has been useful in infringement proceedings, where other exclusive rights showed lacunae in the protection against "illegal" peer-to-peer file sharing and "illegal" streaming 239. A new statutory exception could provide legal certainty for certain temporary copies 240.

98. Maker of the copy. Volition. In principle, the person who makes a copy is responsible for acquiring the right holder's consent and, if such consent is lacking, she infringes the reproduction right. In some circumstances there may be uncertainty on who bears responsibility for a copy, in particular when one person orders a copy and another person executes the order (e.g. a copy centre, online file hosting, distant personal video recorders). In Cartoon Network, it was held that the reproduction was not made by Cablevision, a cable operator that offered its subscribers the possibility of storing television programmes on its infrastructure (in a reserved section per subscriber) for later viewing, but by the subscriber who used this option. It was decided that, although Cablevision carried out the copying and the storage of the programmes requested by the subscriber, these actions were automated and therefore did not have the "volitional character" typical of the person who makes a copy 241.
GINSBURG criticises the 2nd Circuit's decision in this respect, recalling that the notion of "volitional conduct" has its origin in a case on the simple transmission of works by a "mere conduit" online service provider 242 and drawing the analogy to an (analogue) document delivery service, which would be held liable for the copies thus delivered 243.

B. Canadian copyright law

99. The Canadian Copyright Act 244 determines the author's rights in section 3(1) CCA. It is provided that "copyright" in works means "the sole right to produce or reproduce the work or any substantial part thereof in any material form whatever, to perform the work or any substantial part thereof in public or, if the work is unpublished, to publish the work or any substantial part thereof, and includes the sole right (...) (f) in the case of any literary, dramatic, musical or artistic work, to communicate the work to the public by telecommunication (...) and to authorize any such acts". The notion of "telecommunication" covers "any transmission of signs, signals, writing, images or sounds or intelligence of any nature by wire, radio, visual, optical or other electromagnetic system" (s. 2 CCA). The Copyright Modernisation Act 245 introduced a new provision into the Copyright Act (s. 2.4(1.1) CCA) 246: "for the purposes of this Act, communication of a work or other subject-matter to the public by telecommunication includes making it available to the public by telecommunication in a way that allows a member of the public to have access to it from a place and at a time individually chosen by that member of the public". By this section, the making available right in the WIPO Treaties is implemented in Canadian law. In order to enforce the "making available" right, a collecting society must file a tariff (a licence proposal) with the Copyright Board of Canada for certification 247.

Performance and communication to the public

100. Prior to the adoption of the Copyright Modernisation Act, the Canadian Supreme Court decided in July 2012 a number of copyright cases (dubbed the "copyright pentalogy" 248) on key issues and set copyright protection in a certain direction, "away from an owner maximalist orientation and in favour of "users"" 249. It established a classification of the exclusive rights that proved quite controversial (at least with regard to the qualification of downloads). The streaming of a work on a user's demand is qualified as a communication to the public. The Supreme Court of Canada ruled in this sense in Rogers v SOCAN 250. The Society of Composers, Authors and Music Publishers of Canada (SOCAN) is a collecting society that proposes tariffs for various kinds of uses of copyright works (protected under the performing rights), certified by the Copyright Board of Canada. It was decided that SOCAN is entitled to require a licence for the streaming of a song, since the streaming of a work on demand by a user is considered a communication to the public. It does not matter that it is operated by a point-to-point communication (one sender, one recipient), when the works are made available on demand to anyone with internet access. Such "pull" technologies can also be considered communications to the public, similarly to "push" technologies (e.g. broadcasting).
The same position was taken in ESA v SOCAN 251 (infra), where the Supreme Court stated that the communication rights were connected to the "performance" of the work and that communication cannot be extended to those transmissions where the end-users receive a permanent copy of the work (without its performance to the public). The communication right is seen as a subset of the performance right and the majority of the judges held that there is a performance when it is possible to perceive the work (the game and the music) during the transmission. Only streaming (not download) is akin to a broadcast or a performance and consequently a communication 252. A performance is not permanent, while a reproduction exists where a durable copy of the work is made. The Supreme Court thus reversed earlier decisions that qualified the download of ringtones as an act of communication to the public 253. Furthermore, a Federal Court decided that the person who posts a picture on her publicly accessible website authorises the telecommunication of the work and a third-party hyperlink does not constitute an infringement 254. This suggests that hyperlinking is considered an act of communication to the public under Canadian law 255.

Reproduction

The Supreme Court ruled that a download of a musical work included in a video game (for which the end-user pays) is subject to the reproduction right and does not amount to a communication to the public. Consequently, no licence fee is due on the basis of the communication right where the reproduction right has been cleared 256. Entertainment Software Association (ESA) challenged the obligation to pay a licence fee for the download over the Internet of games containing copyright protected musical works. It had cleared the reproduction rights and paid the royalties for the production of the games in physical formats and sales in traditional retail outlets or by mail order. The issue was whether an additional fee was due for the sales and downloads of games via the Internet. SOCAN argued that such online distribution constituted a communication to the public by telecommunication of the music in the games, for which a fee was due in addition to the fee for the reproduction of the games in a physical form. The Supreme Court was divided on the matter and ruled by a 5 to 4 majority that no licence fee had to be paid based on a communication to the public. It was considered that the author has certain rights under copyright (reproduction, performance, publication), illustrated in the copyright act by certain subsets (rental, communication to the public, ...) 257. Here the download of a work was qualified as a reproduction, not as a communication/performance. Moreover, it was held that there was one single activity (a download), which is qualified as a reproduction and cannot violate two separate rights at the same time 258. The principle of technology neutrality was an important factor in this decision. The Court intended to treat the purchase of a video game in a shop in the same way as the purchase of such a game online, over the Internet. A download over the Internet was considered an identical copy delivered by a "technological taxi" to the end-user. The download was considered an act of reproduction that results in an exact, durable copy of the digital file on the end-user's computer. Adding a layer of protection (in the form of the communication right) would infringe the principle of technological neutrality.
It was concluded that the download of a work does not amount to a communication to the public. The four dissenting judges held that both the rights of reproduction and communication to the public were at stake and gave rise to compensation. Although the principle of technology neutrality was acknowledged, they gave priority to the language of the law and the content of the rights, which are independent economic rights. Following the decision in ESA v SOCAN, a group of mobile phone service providers has sought to be rid of a SOCAN tariff applicable to the download of ringtones, arguing that such a download should no longer give rise to a licence fee for the communication of the work (contained in the ringtone) 259.

New: the making available right

101. The Copyright Modernisation Act 260 introduces a new provision into the Copyright Act (s. 2.4(1.1) CCA) 261: "for the purposes of this Act, communication of a work or other subject-matter to the public by telecommunication includes making it available to the public by telecommunication in a way that allows a member of the public to have access to it from a place and at a time individually chosen by that member of the public". The decisions of the Supreme Court of Canada pre-date this amendment of the Copyright Act and it is uncertain how the decisions and the Copyright Modernisation Act interact 262. The question is whether the making available right should be seen as an illustration of the performance right or as a separate right 263. If the making available right is merely an instance of the performance right, then the precedent in ESA v SOCAN continues to apply and no additional licence fee can be charged for on demand downloads of protected works. By contrast, if the making available right is an independent right, then it could be argued that a download of a work results in both an act of making available and a reproduction of the work. Alternatively, it could be argued that the making available right is a species of the communication right but that its recognition in the Copyright Modernisation Act overrides the ruling in ESA v SOCAN 264. It was suggested that the making available right would give rise to remuneration, regardless of whether a reproduction follows (i.e. for streaming and for download) 265. The question has not been settled yet. Also, it remains to be seen how the activities are identified and qualified. Traditionally the economic rights were considered independent and could apply to one set of circumstances 266. In ESA v SOCAN there was only "one activity", hence only one economic right could apply. It has been argued that this ruling continues to apply to the making available right. This making available then does not infringe both the reproduction and the communication right but implicates either the reproduction or the communication right, depending on the "fundamental character of the intended interactive use, such as streaming or downloading", the former implicating a communication. The Supreme Court's ruling in ESA v SOCAN would continue to apply: the making available for download would implicate the reproduction and distribution of the work, while the making available by streaming would entail a communication to the public 267.

III. Interplay between the reproduction right and the making available right

102. The rights of reproduction and communication to the public (including the making available right) are autonomous rights.
These rights can be exercised independently and they may apply cumulatively to the same act: an act of upload (resulting in the accessibility of a work to a "public") may be qualified as an act of making available to the public and may result, in addition, in acts of reproduction 268. Consequently, the circumstance that the author has authorised making her work available to the public does not entail that other reproductions are permitted under the same authorisation, in particular the copies following directly from this availability and transferred on the individual demand of the final user. In a similar manner, it is not sufficient for a service provider to acquire the authorisation for the reproduction on a server; it should also obtain the author's consent for making it available to the public 269. This autonomy of the exclusive rights has consequences for the licensing practices and for the enforcement of the rights. The fragmentation of the economic rights may complicate this picture. The rights of reproduction and making available to the public may be held by different people and, moreover, the same rights may be exercised by different people in different territories. A service provider that offers an online service should assess which exclusive rights are at stake and negotiate licences with the holders of the rights of reproduction and/or communication to the public (both for the works and the subject matter protected under related rights). Where a work is shared via the internet without the right holders' consent (e.g. via a streaming site or a peer-to-peer platform), the different right holders may claim infringement of the rights they own.

103. It has been described how the reproduction right was implemented in the Member States under consideration, along with the exception for temporary acts of reproduction and for private copy (sub II). In the third part, it has been studied how the system of exclusive rights is developed in the USA and in Canada (sub III). In this section the interplay between the reproduction right and the right of making available is under consideration. The application of these rights will be summarised based on the following scheme 270:

268 See on this subject: Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society, 109-120.

269 In Meltwater, the issue was whether the recipients of a news service require a licence to receive news reports (by e-mail) or to view and read these on their computer screens. The defendants explained that sending the news reports and receiving them were two sides of the same coin. Meltwater acquired a licence (Web Database Licence) covering the sending of the news reports and it was argued that the end users did not require an extra licence for the reception of these news reports by e-mail and to access them on Meltwater's website, because the reproductions that enable this reception are the inevitable consequence of the sending. On the contrary, the Newspaper Licensing Agency claimed that the act of reception by the end users required a second licence (Web End User Licence) to be acquired. Both the High Court of Justice and the Court of Appeal considered that the copies made on the end users' screen and hard disk were acts of reproduction falling under the exclusive right.
The Court of Appeal rejected the argument that the right holders were attempting to license the same acts twice, on the ground that "the copies created on the end-user's computer are the consequence of the end-user opening the email containing Meltwater News, searching the Meltwater website or accessing the Publisher's website by clicking on the link provided by Meltwater. They are not the same copies as those sent by Meltwater" (UK Court of Appeal, 27 July 2011, [2011] EWCA Civ 890). However, the Supreme Court reversed that judgment, as far as browsing is concerned (i.e. reading the news reports on Meltwater's website by means of temporary copies), considering that the acts of reproduction performed by the end users for the purpose of browsing could be exempted under art. 5.1 of the InfoSoc Directive.

104. We will distinguish between the transmission of works by streaming and by downloading and we will examine whether acts of reproduction occur and whether these can be exempted under the exception for temporary acts of reproduction or the exception for private copy. For the sake of this exercise, streaming will cover those forms of transmission that result in a temporary copy lasting for the duration of the consultation of the work (listening, viewing) and downloading will indicate those forms of transmission that result in a permanent copy at the end-user's disposal. In practice there are ambiguous forms: reproductions that are not permanent but not quite ephemeral either. The user has more control over her use of a work than with an ephemeral streaming copy but not as much as with a downloaded permanent copy. For example, many music streaming services allow the users to make playlists and to listen to the songs on the playlist when they are offline. A copy of these songs is thus stored on the user's device: she does not need to request the transmission by streaming every time she wants to listen to a song. In practice, however, the user needs to refresh the download from time to time. In any case she does not keep a copy beyond her subscription: the copies are no longer available once the subscription has ended. Similarly, a video rental service may make a film available at the end-user's device for a limited period of time (e.g. one month or only 48 hours). The protected work may be downloaded or accessed through streaming during that period but the user loses access and control over the work after this point. For now, it is sufficient to be aware that such mixed forms exist, but this does not raise other problems than an issue of qualification, i.e. a matter of fact which ultimately only the courts will decide upon. We will limit our analysis to the interplay between the making available right and the reproduction right (and its exceptions) for the stereotypical forms of use (by streaming and by download). The reasoning can be applied mutatis mutandis to the less stereotypical services. Based on the preceding distinction, it seems that acts of reproduction can be found in both cases, when works are streamed or downloaded. We will hereunder describe and qualify the acts performed during the upstream and the downstream processes following the scheme described supra. We will distinguish between the "upstream reproduction" that serves to make the transmissions to members of the public at their request and the "downstream reproduction" that occurs after the transmission has taken place at the member of the public's device.

Upstream reproduction (hosting server copy)
105. We have described the upstream process as the making of a copy of a work on a hosting server (or another support having an equivalent function, e.g. in the context of peer-to-peer transmissions). That copy is made to enable the transmission of the work to the users, at their demand, either by stream or by download, depending on the technology used. This copy can thus be protected under two exclusive rights: the making available right and the reproduction right.

Making available to the public. The upstream reproduction will affect the making available right if it enables the transmission of the work to a public, i.e. an indeterminate number of potential users who do not belong to a private group 271. Commercial service providers whose focus is to offer a catalogue of works to a public (e.g. an online music store or a newspaper website) keep works in databases that are accessible to their customers, a practice that is considered a making available to the public. There is generally no insurmountable obstacle to establishing the presence of a "public" where a work is distributed among users via peer-to-peer platforms: the copy on one peer's computer is available to other peers, who can be considered a public 272. This is different for technologies that allow a more individualised approach, such as online personal video recorders or cyber lockers. In those cases, a member of the public may use the service to make an individual copy that is only available to her and not to the "public" 273. In those cases where the courts have found no "public", there was no infringement of the right of making available to the public. The existence of a public to which the work is accessible is therefore decisive to evaluate whether the upstream reproduction infringes the making available right and requires the right holder's consent.

106. Reproduction. The upstream reproduction also implies the making of at least one copy of the work. Given the broad definition provided by art. 2 of the InfoSoc Directive, that copy may fall under the scope of the reproduction right in all the Member States of the European Union. It seems however that not all Member States qualify this copy as a distinct reproduction, in addition to the making available to the public from this copy 274. Whether the right holder's authorisation will be required for the making of that copy depends on the possibility to rely on exceptions to the reproduction right.

Private use. Reproductions serving the purpose of making a work available to the public generally do not qualify for the exception for private use, precisely because their function is to make the work available to the public (e.g. commercial providers of protected content, peer-to-peer platforms). Where the technology allows a modulated access and a work is only available for transmission to an individual user, this user may in some Member States rely on the exception for private use as well. This exception for private copy might only apply if the service provider offers an individual storage space to the final users, allowing them to make their own copy (at a distance) of the work. This implies that the national legislation recognises that a private copy can be made and stored by a third party on behalf of a natural person (cf. the court decisions in Germany on online personal video recording services 275 and cyber lockers 276).
By contrast, where a Member State considers the platform provider responsible for the reproductions made on behalf of its customer, it is unlikely that the provider can rely on the exception for private use (cf. the court decision in France in the Wizzgo case). This example shows that uncertainties remain on (i) the relation between the making available right and the exception for private use and (ii) the impact of the design of the technical platform (allowing certain forms of use to escape the author's control by means of her making available right and her reproduction right).

Temporary copy. The question may be asked whether the upstream reproduction may fall within the scope of the exception for temporary copies. Usually this exception is invoked to justify downstream copies, but one should verify whether it could exempt upstream copies as well. It should then be verified whether such an upstream reproduction (which makes it possible to transmit copies to a user's device) respects the conditions set out by art. 5.1 of the InfoSoc Directive. Depending on the circumstances, the upstream reproduction should present the following cumulative features:

- Temporary, transient or incidental. While many upstream copies may be excluded for having a permanent character, some services may be technically based on a reproduction that lasts for a limited period. It is unlikely that a temporary copy used to make a work available to a public is transient (the work mostly being available for a certain period of time to be transmitted at the user's individual request) but it is not excluded that it is incidental to the main act of exploitation of the work, i.e. the making available of the work to the public (streaming services such as catch-up TV or download services). It is verified below whether this intended act meets the purpose requirement (infra).

- Technological process. It can be expected that the copy that serves as the basis of the making available is part of a technological process. In the context of streaming or download processes, it forms the starting point of this process of transmission to any member of the public and it may be qualified as an integral and essential part of those technological processes.

- Purpose. If the upstream copy is incidental to an ulterior act of exploitation (i.e. the availability for transmission to the public), it should be verified whether this purpose constitutes a lawful use (it is not likely that the upstream copy enables a transmission in a network between third parties by an intermediary). The intended use may be authorised by the right holder and hence be considered "lawful".

- Independent economic significance. An important question is whether the upstream reproduction has an independent economic significance. It can be argued that this upstream reproduction has an economic significance but that this economic significance is dependent on the value of making the work available to the public. Although the interpretation of this condition is not entirely settled, it is closely connected to the incidental character of the upstream copy and to the fact that it constitutes an integral and essential part of a technological process.

Depending on the interpretation of what constitutes an "incidental" copy, of the integration in a technological process and of the independence of its economic value, it is not excluded that the upstream act of reproduction is exempted under the exception for temporary copies, provided that the act of making available is lawful (i.e. authorised).
When the making available right and the reproduction right of a work are held by separate right holders, this means that the upstream reproduction would only require a licence delivered by the holder of the making available right. The exception for temporary copy can only apply when the upstream copy is a temporary one. Often the upstream reproductions in the context of streaming or download processes are not temporary. If it is assumed that the reproduction right and the right of making available apply cumulatively to the same copy, the content provider should clear both rights for those copies that are not of a temporary nature (this issue is avoided in those Member States where the copy is only qualified as an act of making available and is not explicitly considered an act of reproduction as well). Where both rights apply, the content provider should clear both rights (possibly from the same right holder and in the same licence).

Downstream reproduction (end-user copy)

107. We described the downstream process as the transmission and the realisation of copies of the work at the final user's end. Being fixations of a work, these copies will necessarily fall under the right of reproduction (except certain temporary copies in the Netherlands). These acts of reproduction may nevertheless be exempted under the exception for private copy or the exception for temporary copy, depending on the technological process that is used. We will first discuss the status of temporary copies following a transmission by streaming, then those following a transmission by download.

a) Streaming

108. When a work is streamed, the copy made at the final user's end will typically last for a very short period, just long enough to enable the user to see or hear the work. By the design and the use of the technology, no permanent copy is saved at the user's end.

Reproduction. That copy falls under the scope of the reproduction right (except in the Netherlands). Therefore, the person responsible for that copy (the "copy maker") should in principle acquire the prior authorisation of the holder of the reproduction right.

Temporary copies. The copy that merely serves to receive the transmitted work and to consult it (view, hear, read, ...) may be exempted under the exception for temporary copy, provided that its conditions are met.

- Temporary. The downstream copy of a streaming process is most likely a temporary copy that is kept for the duration of receiving and consulting a work (e.g. watching a streamed video clip on a news site). Moreover, it is often a transient copy, i.e. an ephemeral copy that is generated by the activation of the stream and that is even deleted automatically, without human intervention, once its function of enabling the completion of the streaming process has come to an end (especially in the case of musical or audiovisual content).

- Technological process. The end copy is the final stage of a transmission and is likely to be an integral and essential part of the streaming process.

- Purpose. Its sole purpose should be to enable a lawful use of the work streamed, which consists of an intended use that is authorised, exempted under a legal exception or one that is "not restricted by the applicable legislation".
The temporary copy at the final user's end, following a streamed transmission, is lawful if the right holder has authorised the intended use (the making available of the work streamed and the subsequent reception), if the intended use benefits from another exception (such as the private copy) or if it is "not restricted" (cf. Premier League, where the CJEU decided that the mere reception of a work in a private circle is not restricted under copyright law; cf. also the UK Supreme Court in Meltwater). The legal basis of this justification matters: insofar as some national copyright laws require a legitimate source as a condition for the exception for private use, the reception of a work from an unauthorised streaming website may result in an infringement of the reproduction right (as well as the making available right).

- Independent economic significance. The temporary copy must have no independent economic significance, i.e. the economic advantage resulting from the copy at the final user's end must not be distinct or separable from the economic advantage derived from the lawful use of the work concerned; that copy must not generate an additional economic advantage going beyond that derived from the use of the protected work. It is not likely that a copy that merely allows a user to consult (see, hear) a transmitted work has an economic value independent from this act of consulting.

Private use. The downstream copy could possibly be exempted by the exception for private copy in the Member States where art. 5.2.b of the InfoSoc Directive has been implemented. It should then be verified that:

- the beneficiary is a natural person;

- the downstream copy is intended for a private use exclusively (taking into account that the concept of "private use" may differ from one Member State to the other); this could cover reproductions that allow e.g. the display of a work on a screen in the user's private sphere but not the display in a public space (which would entail an infringement of both the right of communication to the public (cf. Premier League) and the reproduction right);

- it has no direct or indirect commercial purpose for the final user; this may jeopardise the copies that an individual makes to access a work for professional purposes (e.g. on-screen reading);

- a system of fair compensation is organised at the national level for the benefit of the right holder; this does not necessarily entail that a fee should be paid (cf. rec. 35 InfoSoc Dir.);

- in several Member States, the copy may not originate from an unlawful source: depending on the interpretation of that condition, this could mean that the copies enabling the reception of a work from an "illegal" streaming website cannot be exempted under an exception for private use; the work streamed should have been lawfully made available to the public (with the consent of the right holder) or in the context of the application of another exception.

Due to the disparities in the way the Member States have implemented the exception for private copy, the downstream copy might be exempted in some Member States and not in other Member States.

109. Summary. The downstream copy of a work resulting from the use of a streaming technology will fall under the reproduction right (except perhaps in the Netherlands). Consequently, these copies should in principle be authorised by the right holder. However, the downstream copy may be exempted by the exception for temporary copy or even by the exception for private use.
For these exceptions to apply, the downstream copy should enable a lawful use of the protected work (temporary copy), or at least originate from a lawful source (private copy, depending on the Member States). When the use of the streaming technology is lawful (for instance when the making available is authorised by the right holder), the exception for temporary copies should legitimate the reception of the works, provided that the temporary copy does not generate an additional economic advantage.

b) Download (permanent copy)

110. Reproduction. The download process allows the final user to receive a permanent copy of the work, over which she has the most complete control (subject however to the possible use of TPMs). The final user decides autonomously when she wants to hear, watch or read the copy of the work, and when she will erase it. That copy affects the reproduction right. Consequently, the person responsible for that copy (the service provider or the final user, depending on who is qualified as the copy maker) should in principle have a licence authorising the reproduction of the work.

Temporary copy. The exception for temporary copies will not apply to copies resulting from a download. Downloads are in principle permanent reproductions. Indeed, according to the Court of Justice, the temporary copy must result from an automated process that deletes it automatically, without human intervention, once its function of enabling the completion of such a process has come to an end 277. That condition is not satisfied in the case of a download copy, which remains under the complete control of the final user: a human intervention will be necessary to use or erase the copy.

Private use. Such authorisation is not required when the downstream copy resulting from the download process is exempted by the exception for private copy. For that exception to apply, several requirements will have to be fulfilled:

- the copy should be made by a natural person; some Member States allow a third party to make the copy on behalf of the beneficiary, others require that the copy be made by the final user herself;

- the download copy should be intended for a "private use" exclusively (taking into account that the concept of "private use" may differ from one Member State to the other); this could be the copy resulting directly from the download or a copy thereof (e.g. format shifting or use on multiple devices);

- the download copy has no direct or indirect commercial purpose for the final user;

- a system of fair compensation must be organised at the national level for the benefit of the right holder; the licence fee paid for the initial download can be taken into account (cf. rec. 35 InfoSoc Dir.);

- in several Member States, the copy must originate from a lawful source: this means that the work downloaded must have been lawfully made available to the public; a download resulting from a work made available without the author's consent will constitute an infringement of the reproduction right (in addition to the infringement of the making available right). According to the Advocate General in ACI Adam, the exception for private copies under the Directive applies only to copies made from a legitimate source 278.
To the extent that a service provider would have relied on an exception to develop its business model (as, for instance, the offer of online video recording services), the disparities in the way the Member States have implemented the exception for private copy might undermine the development of a multi-territorial or pan-European service using the download technology. It would indeed be difficult for such a service to satisfy in each European country the conditions set out for the application of the national exception for private copy.

111. Summary. The downstream copy resulting from the use of a download technology will necessarily fall under the reproduction right and it should consequently be authorised by the right holder, unless the download may benefit from the exception for private use. However, as the conditions set out for the application of that exception differ from one Member State to the other, the use of a similar service requiring download copies could be found lawful in some European countries and not in others. This affects legal certainty and may slow down the development of pan-European services.

B. USA

112. We were asked in the Terms of Reference of the present Study to examine which exclusive rights apply to the upstream and downstream copies under American copyright law.

Upstream copy. It can be assumed that the copy made on a server in order to transmit the work at the individual request of end-users (either for streaming or for download) is considered a protected reproduction under US copyright law. The copy is made by the provider of the on demand service or by its customer, depending on the architecture of the service (cf. Cablevision, MP3tunes, Aereo). By contrast, it is uncertain whether this copy that makes the copyrighted work available for download or streaming is protected under the distribution right or the public performance right (given the existing controversy on whether an actual transmission must be established or the mere availability for transmission suffices).

113. Downstream copy. The type of downstream copy affects the qualification of the transmission (public performance or distribution of a copy).

Download. The download of a permanent copy is qualified as the distribution of a copy of a protected subject matter and not as a public performance of the work (RealNetworks and Yahoo!). It seems clear that the first sale defence is not available for works distributed by means of downloads (ReDigi).

Streaming. A work can be made accessible to end-users by means of streaming. Temporary copies on the end-user's device allow her to consult the work. These copies may or may not qualify as protected reproductions, depending on whether they are considered "fixations" of sufficient duration or stability (Cablevision). Where the end-user is able to perceive the works while they are being transmitted ("simultaneous" or "contemporaneous" perceptibility), there may be a public performance (RealNetworks and Yahoo!). However, the definition of the public performance right and the interplay with the reproduction right can take an awkward turn where the architecture of a technical system is such that the customer makes an individual copy (on the service provider's technical installations) that is subsequently used to transmit the work to the customer's device (for her viewing, reading or listening). In some cases it has been decided that the copy is made by the customer and that it serves only for a transmission to this customer.
Consequently, the work is performed in private and, since no public performance can be found, the service provider does not perform any act restricted under copyright (absent its volitional conduct with regard to the copies and the transmission of the work).
C. Canada
114. We were also asked in the Terms of reference to assess which exclusive rights rule the upstream and downstream copies in the Canadian copyright legislation. In Canada, the situation is not very clear given the recent decisions of the Supreme Court of Canada and the more recent adoption of the making available right in the Copyright Act.
Upstream copy. In principle the upstream copy that enables the streaming or the downloading of a work is regarded as a reproduction. Given the SCC's position in ESA v SOCAN that one activity should not be qualified under several exclusive rights, it is uncertain whether such an upstream copy should be regarded as a reproduction, an act of making available (provided that the public has access) or as both. The making available right protects the phase preceding the transmission, but at the same time the making available right is considered a part of the right of public performance (communication to the public).
115. Downstream copy. Under the 2012 decisions of the Canadian Supreme Court, the type of downstream copy affected the qualification of the transmission (public performance or distribution of a copy). It is uncertain how these rulings will interact with the making available right that was adopted later that year.
Download. Where a work is transmitted to end-users as a download, the end-user receives a permanent copy that is qualified as a reproduction under ESA v SOCAN. The collecting society SOCAN was not allowed to require a licence for public performance where a licence under the reproduction right had been obtained. It is uncertain how the newly introduced making available right, protected under the public performance right, will interact with this ruling. It depends on whether priority will be given to the availability for download (making available right) or the resulting copy of a permanent nature (reproduction).
Streaming. Where the work is streamed to a public, without a permanent copy at the end-user's end, and the work is visible or audible during the transmission, a public performance can be found. Again, it is uncertain how this qualification will be affected by the making available right.
IV. Outline of the issues
116. The findings regarding the reproduction right, as it currently stands, can be summarised as follows.
Reproduction right. The harmonisation of the reproduction right has led to a broad, fairly technical notion that is shared by all Member States. Where protected subject matter is used online, several acts of reproduction may occur. Regardless of which technology process is used (streaming or download), the online transmission of works made available on the internet necessitates several acts of reproduction, which we identified as "upstream" and "downstream" copies. Because of the broad definition provided to the reproduction right in the InfoSoc Directive, all these copies constitute a protected act of reproduction under the laws of the Member States (with an exception for certain temporary acts under Dutch law).
Localisation. Similarly to the making available right, the principle of territoriality applies to the reproduction right. This means that any Member State has the prerogative to regulate the reproductions occurring on its territory.
With regard to analogue copies, it is fairly straightforward to localise the reproduction (the copy and the person making the copy are present at the same location). In a networked environment, reproductions may be localised according to various criteria that have not been determined (the location where the copy is made or stored, the location of the person making the copy, the location of the service provider storing the copy, ...). Moreover, when a work is made available in several Member States, acts of reproduction may occur in several countries, in addition to the Member States where the making available takes place. Several copyright legislations might therefore govern different acts of reproduction that are part of one single technology process. Consequently, attempts to mitigate the consequences of the territoriality principle on the making available right might be ineffective if no initiative is simultaneously taken to localise the reproduction right in a compatible way.
Responsibility for the copy. The InfoSoc Directive does not regulate the liability for a reproduction and does not identify the person responsible for the reproduction, hence there are disparities between the Member States on that issue. Indeed, some European countries admit that the copy may be made by a third party on behalf of a (private) user, who will be considered as the person making the copy, while other Member States (such as France) hold responsible the person who physically realises the copy. Depending on the place where the act of reproduction occurs, the person bearing the risk for the use of the work may differ. Furthermore, in cross-border situations, the identity of the copy maker may vary depending on the place where the act of reproduction takes place. This affects legal certainty and may constitute an impediment to the development of European cross-border services.
117. Art. 5 of the InfoSoc Directive provides a list of exceptions to the reproduction right, including the exception for temporary copy and the exception for private copy.
Temporary copies. The exception for temporary copy is a mandatory exception for all Member States. It exempts transient and incidental copies, provided that all the conditions fixed in art. 5.1 of the InfoSoc Directive are satisfied. That provision was implemented in a similar way in all the Member States (except in the Netherlands) and, due to the decisions delivered by the Court of Justice on preliminary questions, the conditions it fixes should receive a uniform interpretation. It has been examined how this exception applies to the reproductions made in the context of content made available to the public. Regarding the "upstream" copy, it is not excluded that this reproduction is exempted under this exception for temporary acts of reproduction. It is required then that the copy be temporary and that it serve a lawful use; it is also uncertain whether such a copy is incidental and whether it lacks independent economic significance. Furthermore, the exception for temporary copy will most likely justify the "downstream" copy in the context of a streaming process to the extent that the copies merely allow the consultation (viewing, listening) of the protected subject matter. However, it does not apply to download copies to the extent that these are permanent copies over which the final user exerts complete control (including over the duration of the copy).
The same conclusion should be reached as regards copies clearly going beyond temporary copies even if they are not "permanent" in the sense of being under the full control of the end user (e.g. playlists lasting beyond the time of consultation).
Private copying. The exception for private copy has been implemented with many disparities in the different Member States. Depending on the national implementation, "downstream" copies can be exempted on this ground. Furthermore, depending on the national implementation and on the national rules on responsibility for the copy, even the "upstream" copy may be considered a copy for private use. Moreover, these rules may affect the application of the making available right. For instance, a service providing a personal online video-recorder making copies on behalf of its users did not infringe any exclusive rights under German law: due to the technical design of the architecture, it was considered that the copies were made by the final users, who could rely on the exception for private use, and since the copies were available only to that final user there was no act of making available to the public. In France, the opposite conclusion was reached on a similar service: the exception for private copy could not be relied upon and the service was qualified as an unlawful act of making available to the public. Consequently, the disparities between the Member States in the implementation of the private copy exception may also have an impact on the application of the making available right. This might affect the development of European cross-border services.
118. The present Study also assessed how the upstream and downstream copies generated by the online transmission of copyright protected works are treated in the USA and in Canada. These countries provide a significantly different system for the exclusive rights from the ones applied in Europe. The American copyright system imposes a stricter separation between the rights of reproduction and distribution and the right of public performance, in order to avoid the simultaneous application of both economic rights. The same was true for the Canadian system, but it is uncertain how the application of the making available right under the general performance right will unfold. While concerns of overlap may be lesser, these legal systems have their own uncertainties (for instance as to the scope of each legal notion).
119. The next question is whether the reproduction right should somehow be modified to be applied with the making available right and to limit its geographical impact to the territoriality of the making available right, should this be interpreted and localised along the policy options studied in the Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society. We will first briefly summarise the issues identified in the previous studies279. In the next part a first attempt is made to find some legal mechanisms and deal with some of the issues identified (sub V). The purpose is not to come up with a complete solution to all issues but to explore some of the options and identify the major obstacles to these constructions.
120. Localisation of the making available to the public. In the Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society, the territoriality of the making available right has been examined.
It was found that the exclusive right of making available is not clearly defined and that the localisation of the protected act could cover several Member States. The starting hypothesis was the localisation in every Member State of accessibility. Two working hypotheses have been developed, in order to streamline the localisation of the making available to the public. The act of making available could be localised in one single Member State (e.g. place of upload, establishment of the uploader), a so-called country of origin. The work may then be accessible and available in several Member States, but the restricted act takes place in one Member State only. Alternatively, the act of making available could be localised in the Member States where an act of exploitation can be found, in particular where the national public is "targeted". This hypothesis is developed along the lines of some recent decisions of the CJEU. In this case the act of making available takes place in one or several Member States, depending on the intentions of the uploader and the efforts she makes to reach a public.
121. Additional reproductions. Online exploitations of a work entail reproductions, both at the uploader's end (upstream) and at the end-user's end (downstream). Due to the technical interpretation of the reproduction right (confirmed by the existence of the exception for temporary acts of reproduction), no digital use is technically possible without an act of copying and therefore of reproduction (see Study on the territoriality of the making available right and current Study, Part 1). The rights of reproduction and communication to the public (including the making available right) are independent rights. In principle, both apply autonomously whenever a factual situation meets their legal description. Consequently it is possible that both rights apply to the same set of facts. Moreover, when the legal definitions cover the same material acts, there may be a structural overlap of both rights. This may be the case for the making available right and the reproduction right. Upstream, a copy (reproduction) is the technical basis for making the works available for transmission on demand (at least in the current technical framework). Downstream, the end-user makes a copy of the work (at least in the current technical framework), be it a temporary one (limited to the time of consultation) or a permanent one (the basis for repeated consultation). Because the rights of reproduction and making available are autonomous rights, both rights can apply cumulatively to the same acts of exploitation. The reproduction right is defined in a technical way and the act of reproduction is a precise material act that is localised accordingly280. The reproduction can be localised where the copy was first made, where it is kept or where the copier was when she produced the copy. The overall picture may be more complicated even when the copies are made by means of a third party service. When the end-user makes a copy by means of software and/or hardware provided by a third party, the responsibility for the reproduction (and therefore its localisation) may be assessed differently. This is the case of "cyberlockers" or "online personal video recorders". In some Member States, the end-user alone bears the responsibility for the copy, while in other Member States the provider of the copy infrastructure (software and hardware) that enables the end-user to make the copy is (at least partially) responsible for it.
122. Relation between the making available right and the reproduction right. This all leads to the finding that the reproduction right applies in addition to the making available right and that it can be localised in various Member States, i.e. in each single Member State where a relevant act under either right can be found. Depending on the practices in each sector, this situation may complicate the process of clearing the relevant copyrights for any cross-border or even pan-European service in the EU and it may bring legal uncertainty to the service providers trying to launch multi-territorial services. In case of infringement, there may be a lack of legal certainty (predictability) regarding which rights apply, who has to clear the rights and which damage can be claimed per infringed right. In the Study on the territoriality of the making available right, an attempt has been made to streamline the making available right and to localise it according to a country of origin criterion or an exploitation/targeting criterion. A legislative intervention on the making available right alone will not solve all issues if the reproduction right continues to be applied as it currently is, as illustrated by the following:
- Country of origin for the making available right. The act of making available is localised in one single Member State. However, reproductions are found in the Member State of making available and in other Member States. This is the case if the upstream (server) copy is localised in another Member State (e.g. where the copy is stored on a server or where the copy was made) than the act of making available to the public (place of upload or centre of activities). Similarly, the protected content may be downloaded in other Member States than where the content was made available, if it is offered for download beyond the country of origin. There may also be copies at the end-user's end in case of streaming, but such reproductions are mostly exempted under the exception for temporary acts of reproduction (art. 5(1) InfoSoc Dir). Consequently, the work may be made available to the public from one Member State, while there may be reproductions in other Member States (upstream and downstream) for which in principle the right holders' consent is due. Acts falling under the exclusive right still happen in other Member States than the country of origin.
- Country of exploitation for the making available right. Works or other subject matter may be made available to the public in one or more Member States, depending on where the exploitation of the subject matter can be found. Reproductions could be found in yet other Member States than the ones where the subject matter is made available to the public. It is conceivable that the upstream copy is localised in another country (e.g. where the copy is stored on the server or where the copy was made) than where the exploitation takes place. Also, if the subject matter can be accessed outside the countries of exploitation, a downstream reproduction (temporary or permanent) can be made when an end-user accesses and consults the subject matter (where the end-user stores the subject matter or where she was when she made the copy). In such a case there would not be an additional act of making available to the public (spill-over) but there would be downstream reproductions outside the territory of exploitation/making available. Consequently, acts of reproduction of the work will be performed in other Member States than the ones defined as country of exploitation.
The conclusion is that the efforts of localising the act of making available in certain Member States (one or several) risk being undermined by the existence and localisation of reproductions directly associated with the act of making available to the public. Especially the "downstream" reproductions may take place in any Member State where the work is accessed (temporary copy or permanent download).
123. The existence of reproductions and their relation to the making available right entail complications in terms of licensing and infringements of rights.
Licensing. Where a reproduction is made that is not exempted under any exception, a licence should be obtained from the person holding the reproduction right in the particular Member State where the reproduction is made. Different sectors have different licensing practices. This means that the definition and localisation of rights has different consequences for each sector and that the economic impact on each sector should be taken into account when considering legislative interventions.
- Where the rights are not fragmented (neither by subject matter nor by territory) and one person holds the rights of making available and reproduction for the entire EU, she can grant licences for the online exploitation according to the modalities requested by the candidate licensee.
- Where the rights are fragmented either by subject matter or by territory, it is likely that the consent of several right holders should be acquired for a cross-border exploitation.
- Substantive fragmentation. The rights of reproduction and making available (or generally communication to the public) are held by different parties (e.g. rights in musical works from the Anglo-American repertoire). Consequently, each online exploitation requires the consent of the owner of the making available right and the owner of the reproduction right (except if the reproduction is exempted, cf. streaming). These rights may at the same time be fragmented per national territory (e.g. rights for different territories held by different collecting societies). This is true for both scenarios (country of origin and exploitation).
- Territorial fragmentation. The rights of reproduction and making available are held by the same person (coherent bundles of rights) but are fragmented per territory, meaning that there are different persons or entities holding the bundles of rights in different territories (e.g. some audiovisual productions, leaving aside the music). The online exploitation of the work or other subject matter in one Member State only requires the consent of one right holder (owning both the right of reproduction and making available to the public). The online exploitation in several Member States will in most cases require the involvement of several persons.
Country of origin. In principle, the protected subject matter can be made available in one Member State with effects in other Member States (therefore on the territory of other right holders). However, to the extent that the end-user makes reproductions in other Member States and these are not exempted under the copyright law of that Member State, the holder of the reproduction right in that Member State (other than the Member State of making available) needs to agree with this exploitation in the territory for which she has the rights.
Country of exploitation.
Where the subject matter is exploited in several Member States, the holder of the rights of reproduction and making available per Member State should authorise the use for that national territory. Where the work can be accessed in other countries than where the exploitation is targeted (overspill), there may be reproductions for which the consent of the holder of the reproduction right (not the making available right) for that territory is required.
It can be concluded that the localisation of the making available according to the country of origin or the targeting criteria cannot entirely solve the complications in the licensing process for cross-border online exploitations, because it has no effect on the localisation of the acts of reproduction which take place in every process of online exploitation. The Member States have different rules to determine who should acquire the licence (the end-user, the service provider or the service provider on behalf of the end-user). The service provider does not experience any particular practical problems where the reproduction rights are held by one person in all the Member States. By contrast, where the reproduction right is territorially fragmented, several licences must be cleared for the various Member States where reproductions occur or may occur.
124. Infringement. If the reproduction resulting from the making available of works has not been licensed and is not exempted under an exception, then the online exploitation of the work will constitute an infringement of the reproduction right. The holder of the reproduction right will consequently be entitled to start judicial proceedings against the person liable for the unauthorised reproduction. She might also act against intermediaries whose services are used to infringe her right (art. 8.3 of the InfoSoc Directive). In an online environment, reproductions may be localised according to various criteria (the location where the copy is made or stored, the location of the person making the copy, the location of the service provider storing the copy, ...). When the reproduction right is territorially fragmented, the claimant will differ depending on the Member State where the act of reproduction takes place. The same legal principles apply to the issues of conflicts of laws but, when they are applied to situations where both the rights of making available and reproduction are at stake, there may be different outcomes in terms of jurisdiction and applicable law. The reason for this difference is that reproductions (upstream or downstream) will often be made in different Member States, in addition to the Member State where the work is made available to the public:
- Upstream reproduction. If the upstream reproduction has not been licensed, the right holder may act against the person liable for that reproduction either in the Member State where she is established (Brussels I, art. 2) or in the Member State where the copy is stored (Brussels I, art. 5.3). The law applicable is the law of the country for which protection is claimed, i.e. the law of the Member State where the copy is stored.
- Downstream reproduction. A downstream reproduction takes place in each country where the work is accessed. If it has not been licensed, the end-user will be liable for infringements of the reproduction right. Judicial initiatives against end-users may be taken in the Member States where they are established (Brussels I, art. 2) or in the Member States where the harmful event occurred, e.g.
where the downstream copies are made (Brussels I, art. 5.3). These localisation criteria may lead to different countries: for instance, if the end-user uses cloud services, she may have her domicile in one Member State, make the reproduction in another Member State and the reproduction may be stored "in the cloud" in a third Member State. This gives the claimant the choice of where to bring her action. The law applicable will be the law of the country for which protection is claimed, which could be the law of the Member State where the copy is made or where it is stored. This is not necessarily the same country, e.g. when the individual uses "cloud services" to store protected material she may initiate (make) a copy where she resides but the copy may be stored on servers elsewhere. In that case the judge will have to determine the country for which the protection is claimed. In practice, she will verify to which country the conflict has the most ties and apply that law. From the perspective of the right holders, the multiple reproductions in several Member States present rather an advantage, since they give them more options to take judicial action against infringements. Each judicial action is however limited to the infringements committed in the Member State in which the proceedings are brought. From the perspective of a service provider, the difference in rules regarding the liability for the reproduction between the Member States may lead to uncertainty. This is especially the case where the service provider assumes that the copy is made by the end-user (who may invoke the exceptions for transient copies or for private copy) and that there is only an act of making available to a private person in her private circle (which is not subject to the right holders' consent). The same set of facts may not be assessed in the same way in other Member States (e.g. France and Germany). From the perspective of the end-user, this may entail some legal uncertainty in the sense that different countries may have different ideas about who is the person liable for the act of reproduction and whether an exception justifies it (legal uncertainty occurs especially where the end-user expected the service provider to clear the licence).
- Injunctions against intermediaries. Art. 8.3 of the InfoSoc Directive allows right holders to act against intermediaries whose services are used by a third party to infringe copyright. An injunction based on that provision may be sought against an intermediary whose services are used by end-users to make unlawful downstream copies (access provider, provider of a cloud service, search engine, ...) in the Member State where it is established (Brussels I, art. 2). The law applicable will be the law of the Member State where the act of reproduction is made.
The localisation of the making available right in a country of origin or in countries of exploitation will have no effect on the issues of enforcement related to the reproduction right. Acts of reproduction might still be performed outside the Member State(s) where the act of making available is localised. The complications essentially result from the downstream downloads (reproductions that are not exempted under art. 5(1) InfoSoc Dir) that take place in other Member States than where the making available takes place (either localised according to the country of origin principle or the countries of exploitation).
There may indeed be restricted acts in the country where the subject matter is made available to the public and where it is downloaded, which could be within the same territory where the work is available or beyond (if the subject matter is accessible outside the territory where the work is made available to the public). The act of making available would be assessed according to the law of the country of origin/exploitation, and the reproduction taking place outside that country will be treated under the law of yet another country.
V. Outline of constructions addressing territoriality issues
125. In the hypothesis that it is decided that the making available right should be defined and localised according to a legal criterion (country of origin or exploitation) and provided that empirical data show that such an intervention is required (or at least desirable), then it is also desirable to integrate the reproduction right with the making available right in a coherent way. It should be avoided that the reproduction right be able to undermine the policy options chosen for the making available right. In the following chapter we will discuss some options to achieve a smooth application and coherent exercise of both rights. We have outlined some mechanisms that could contribute to such an effect as a first impulse for reflection and without aspiring to solve all issues involved.
Firstly, it is examined whether a more accurate definition of the exclusive rights of making available and reproduction can solve some of the issues (sub A). Acts of online exploitation would then rather be qualified as either an act of making available (communication to the public) or an act of reproduction (distribution), or both if there were specific circumstances justifying such a double qualification. This possibility would put an end to the structural overlap between the making available right and the reproduction right. Furthermore, a localisation criterion for the exclusive rights could restrict the territorial impact of both rights.
Secondly, another option is to impose an obligation upon the author/initial right owner to transfer all rights required for the exploitation modes the candidate licensee intends to develop (sub B). The author/initial right owner would thus have to transfer coherent bundles of rights, per exploitation mode, so the licensee has all rights to use the work or other subject matter autonomously and without being dependent on a third party (with other exclusive rights to the same work) for the intended exploitation. Such an approach should avoid the creation of "copyright thickets".
Thirdly, it is examined whether regulating copyright contracts and licensing modalities can alleviate the complex issues of territoriality (sub C). In this section it will be briefly described which licensing mechanisms exist in the USA (music) and it will be verified whether any useful lessons can be learned for the EU legal framework. Attention will be paid to the options of mandatory licences and mandatory collective management.
Fourthly, the exceptions for temporary acts of reproduction and for private copy could be remodelled to limit the impact of end-user reproductions on the licensing process and hence its territorial scope. Provided that these exceptions are available in all Member States and that the conditions are more harmonised, this could solve some issues, but only to the extent that the three-step test is complied with.
A. Exclusive rights of making available and reproduction (distribution)
1. Definition and qualification of the protected acts
126. One way of dealing with the complex territoriality issues is by taking a step back and putting the definition of the exclusive rights under examination. It has been described that there is a structural overlap between the making available right and the reproduction right in the Information Society Directive (supra, II and previous Studies). This structural overlap could be avoided by providing a more precise definition of each right281. The distinctive criterion between the right of making available to the public (or rather the right of communication to the public) and the reproduction right would be the ephemeral character of the use and the corresponding control that the end-user has over the work (which in turn has ramifications for the exploitation modes)282.
The right of making available to the public, as a species of the right of communication to the public, would then protect all uses that are ephemeral, transient or temporary. Technically these could be based on copies that last for the duration of the use (consultation or "consumptive" use), but these copies would not be considered "reproductions" as such. Similarly, the copy that makes it technically possible to transmit the subject matter (upstream copy) would not be considered an autonomous reproduction but would be "absorbed" by the making available right. The end-user who receives a work through a transmission resulting directly from the work being available on demand to members of the public does not obtain control over the work (or the file incorporating the work); she can only consult the work to the extent that the content provider allows her to consult it. The end-user remains dependent on the content provider for the repeated consultation of the work.
The reproduction right then protects sustainable or permanent copies that allow the end-user control over the use (storage, access, consultation). A definition of the reproduction right could be proposed that mirrors the conditions of the existing exception for temporary acts of reproduction (art. 5(1) InfoSoc Dir). Consequently, only those copies of a more than transient duration, of a substantive nature and with independent (autonomous) economic significance would be considered "reproductions". Such an understanding of the reproduction right alone would add little to the analysis of cross-border transmissions and their qualification as communications to the public or acts of making the work available to the public. To avoid repeating such a structural overlap for downloads, it could be imagined qualifying the offer for download and the transmission for download as a "distribution of the reproduction", much like the distribution right now protects the distribution of reproductions on a material carrier. When a work is made available for download and the end-user receives a permanent copy (i.e. a copy that only disappears when the end-user removes or deletes it283), the work is not made available to the public (communicated to the public) but a reproduction is distributed to the public. If such a permanent download is qualified as a "reproduction", then the offer of such a reproduction could be protected under the distribution right284. In many Member States the reproduction right and the distribution right are traditionally closely connected285.
281 See in this sense: G. MAZZIOTTI, Copyright in the EU Digital Single Market, 7.
282 See also the qualification of the exclusive rights in the US and in Canada: Part 2.II.
283 The qualification as a "distribution of a reproduction" may be complicated by the behaviour of the content providers and right holders. It has been reported that certain content providers have retracted works from users' control (accounts), although these users had initially acquired a right to use the work on a permanent basis. In some cases the content provider has taken this initiative to remedy its own distribution without the right holders' consent, in other cases the right holder has reserved the right to impose such changes upon the content provider. See e.g. "Amazon accidentally removes Disney Christmas special from owners' accounts", The Guardian 16 December 2013, available at http://www.theguardian.com/technology/2013/dec/16/amazon-disneychristmas-tv-special-prep-and-landing?CMP=twt_gu.
284 This classification would be similar to the copyright systems in the US and Canada, described supra sub Output 1, III. The idea is to qualify an online exploitation of a copyright work as either a distribution of a reproduction or a communication to the public/making available to the public. To avoid the uncertainty existing in the US with regard to the scope of protection of the distribution right (due to the fact that the making available right has not been implemented as such), it should be clarified that such a distribution right covers the availability for transmission as a download and the transmission.
This may not be the case in all Member States, which may operate different classifications of their exclusive economic rights. A more detailed survey on this point may be welcome to assess the precise implications of the suggested requalification.
In summary, the idea would be to have the offer for download of subject matter protected under the distribution right, which would protect the availability to the public for download (permanent copy), the transmission and the first permanent copy by the end-user/downloader (see infra sub par. 127). This stricter interpretation of the exclusive right does not affect the principle of independence of the exclusive rights. It is not excluded that in some cases both rights apply, provided that distinct acts of exploitation are performed. For example, an online music service that offers the online streaming of music (as a part of its subscription) may also offer songs for download (for an additional fee). This distinction between both rights corresponds to different online exploitation modes and to the way end-users use works in digital format.
127. These definitions and qualifications correspond to the practices in the USA and Canada286 (see Part 2.II of this Study). Simply put, where a permanent copy is made, there is a reproduction. Where a work is transmitted and results in a permanent copy, this act is qualified as a distribution of the reproduction. By contrast, where no permanent copy is made, there is a public performance (the ephemeral copies not counting as reproductions).
Legislative intervention. This re-arrangement of the exclusive rights and the qualification of online use would not require a major change of the text of the Directive. The rights of communication to the public, including making available to the public, and reproduction are expressed in abstract and general wording that leaves a margin for interpretation. It would by contrast require a shift in the interpretation and application of the exclusive rights.
The legislature should indeed clearly express its legislative intentions and exclude all doubt on this point. The wording of the making available right ("the making available to the public of their works in such a way that members of the public may access them from a place and at a time individually chosen by them") could indeed be read in the sense that both the availability for temporary use and the availability for sustainable use (download) must be protected under this right. It could be understood that the fact of the availability on demand imposes the qualification as an act of making available to the public, regardless of whether this transmission results in a permanent or a temporary copy. The legislature could however emphasise that the making available right is a subset of the right of communication to the public that covers transient forms of exploitation (broadcasting by terrestrial means or by satellite, retransmission by wire or other means, communication of broadcasts; cf. Premier League). The consequence of seeing the right of making available as a species of this general right of communication to the public is that it only protects the making available of works in a transient manner, as ephemeral as the other forms of communication to the public287.
This interpretation creates a lacuna in the protection of right holders, i.e. the offer for transmission of non-ephemeral, non-temporary reproductions (i.e. the offer for download). If the making available right covers only temporary uses and the reproduction right is called to protect only the sustainable or permanent copies made of a work, then the availability for reproduction as such is left unprotected. Yet right holders would need the legal means to act against the infringing diffusion of their works before the transmission to the public actually takes place (cf. discussion in the USA, Part 2.II.A of this Study). Without the possibility to act during such a "preparatory stage" preceding the transmission and the permanent copy of the work, the efficiency of copyright protection would be compromised: the right holder would not have the means to act against the person offering works for download but only against the person making the reproduction, with all the attendant problems of evidence and enforcement. This could be solved by defining the distribution right as an act that starts from the availability of the work for distribution and extends to the transmission and even the first (permanent) copy on the end-user's device. Such a protected act of distribution would in an online environment cover the copy of the work on the server from which the work is transmitted to end-users, i.e. the availability for download on demand. The legislature should however clarify that the protection stretches to the phase of the availability for distribution/reproduction, in order to avoid legal uncertainty on this point (cf. the controversy in the USA, Part 2.II of this Study). Also, it remains to be seen how this issue will be dealt with in Canada, where the Supreme Court established a distinction between the rights of communication to the public and reproduction/distribution and where the making available right is now implemented under the right of communication to the public.
Another clarification could be brought to the scope of the distribution right in the Information Society Directive. In its recital 28 it is stated that "copyright protection under this Directive includes the exclusive right to control distribution of the work incorporated in a tangible article".
The reference to the distribution of material objects could be interpreted a contrario in the sense that works in immaterial form (digital files) are not protected under the distribution right. It could be emphasised that the distribution right protects the distribution of reproductions in all forms (both tangible, material objects and intangible, immaterial files).
The next question is whether the exhaustion principle restricts the digital distribution right in the Information Society Directive (cf. the CJEU's UsedSoft decision). This question is not decided by the suggested rearrangement of exclusive rights. Any transfer of the "distributed" work or other subject matter will entail another reproduction for which in principle the right holder's consent is required (the reproduction right not being exhausted by the first sale). There could be economic arguments to apply the exhaustion rule in a digital context, but these should be substantiated by empirical data that allow the legislature to take a decision on this important point.
128. Consequences for territoriality. The rearrangement of the exclusive rights may solve the structural overlap of the exploitation rights, but the main question here is whether it will simplify the localisation of protected acts and reduce the territorial impact of online exploitations. In other words: does this construction allow restricting the protected acts to the territory of the Member State(s) along the localisation criteria proposed for the making available right?
In a country of origin approach, the (transient) making available of protected subject matter would take place in the country of origin only (the country where the work is uploaded or where the uploader has her centre of activities). The copy resulting from the upload would not be considered a reproduction and neither would be the transient copies on the end-user's device. These copies would be "absorbed" by the making available right. Consequently, there is only one restricted act taking place in one single Member State. The offer for download would not be seen as an act of making available to the public but instead as a distribution of a (permanent) reproduction. The distribution right would comprise the preparatory phase of availability for distribution, the transmission and the first reproduction that follows directly from the technical process as designed by the content provider. The (online) distribution right would be localised according to the same principle, so it would be possible to localise this complex and composite act of distribution to the public in the country of origin (i.e. the country of upload or centre of interests). This approach marks however a shift vis-à-vis the approach of the CJEU (cf. Donner)288. The subsequent copies that the end-user makes, following the first copy, would be distinct acts of reproduction. The localisation of the distribution in one Member State requires that the overall level of harmonisation among the Member States is sufficiently high so that the risk of location shopping is limited (the localisation of a distribution indeed entails the same risks as the localisation of the making available in one Member State289). As in the SatCab Directive, the remuneration should reflect all relevant factors, including the whole (actual or potential) audience of end-users (downloaders).
In the exploitation approach, the making available takes place in the Member State or the several Member States where the exploitation of the work is localised.
Making a work available for (ephemeral) use would require a licence in the Member State(s) where the public is targeted. When the work is accessible outside these Member States (no "targeting" of that public) and an end-user accesses the work, there are no additional acts that require the right holders' consent. There may be copies, but these should not be considered "reproductions" (the copy following the first copy resulting from the download is by contrast considered a reproduction that is localised in the Member State where it is made, independently from the making available/distribution). Similarly, making subject matter available for distribution only requires the right holders' consent in the Member States where the public is targeted. The "overspill" to other Member States will not give rise to a distinct act of distribution (subject however to an evolving notion of "exploitation" and "targeting" depending on the changing attitude of the content provider).
129. Other consequences. Such an approach could have consequences in terms of responsibility for the protected act and imputation of the acts to the actors involved. Especially in the case of offers for download, the qualification as a distribution of a reproduction could imply that the content provider, rather than the end-user, takes the responsibility for the first copy on the end-user's device. At most both are held co-responsible in case of infringement. This may not sit well with the general liability principles of all Member States, especially with those where the person factually making a copy is seen as the person making the reproduction and thus bearing responsibility for it. Also, this approach could substantially affect the financial interests of the established right holders and their organisations. In some sectors the rights are divided between right holders along the lines of the rights of communication to the public ("public performance") and reproduction. A change in the classification of acts would entail that those exercising the right of communication to the public lose control over the offers for download, while those exercising the rights of reproduction and distribution are then the ones whose consent is required for this form of exploitation. The effects of such an alternative approach should therefore be examined in an economic study, the results of which would allow the policy makers to take a stand on this possible remedy.
2. Localisation criterion for the first downstream reproduction (licences)
130. The vast territorial impact of online exploitation can be addressed, if not by rearranging the exclusive rights, then by defining a specific localisation criterion that applies in specific circumstances only. As explained before, the issue here is that reproductions may occur in other Member States than where the work was made available, which means that the right holders' consent should be cleared (by the content provider or by the end-user) for all those territories. Two major factors contribute to complicating the licensing process: firstly, localising the reproduction is not as straightforward as it used to be and, secondly, several people could be held accountable for the reproduction.
Various criteria can indeed be applied to localise a digital copy in one or several Member States. The reproduction right, applied in a digital online context, is not as simple and straightforward as in the material world.
A reproduction of protected subject matter on a material medium is made by a physical person at a certain point of time and at a certain place: the copy and the copier are together at the same place at the same time. This one-on-one relation is questioned in the online world: a copy can be initiated by one (physical) person at a precise moment but it can be stored by another person (an intermediary), on faraway servers in another country. The time and space dimensions of the copy are variable and do not only depend on the person who decided to make the copy. This is for example the case for all copies stored on servers when using "cloud" services. Member States can thus choose different criteria to localise such digital copies, which means that candidate licensees for online services should take several criteria into account.
Furthermore, several people could be held accountable for the reproduction following the on demand availability of the work. The notion of "reproduction" has become more complex and so has the responsibility for reproductions that follow directly from the public availability of works to members of the public. In an analogue environment, the first person in line would be the person who makes the copy. But already different interpretations arise when one person is making a copy on behalf of another person (the responsibility for the copy could be with either person). The same is true in an online environment. Leaving aside those cases where intermediaries have a merely technical role, it could be argued that some intermediaries (more precisely content providers) make an active contribution to the reproductions made using their services and therefore bear at least some responsibility for these reproductions. This is the case, for example, where a content provider offers a catalogue of works (e.g. music, e-books, films) to end-users, who can download any of the works in the catalogue. The content provider has an active role in the reproduction: the end-user (who chooses to download the work) may trigger the reproduction, but this is only possible because of the intervention of the content provider. Yet Member States may consider either the end-user or the content provider as the person who is expected to obtain the right holders' consent for the reproduction. Divergences on this point may, at least in theory, complicate the licensing process.
In order to mitigate the territorial impact of the reproduction right, it could be imagined that some reproductions are localised according to a specific criterion for the purpose of facilitating the licensing process. The starting point is then that the content provider clears both the right of making available and the reproductions following directly from this on demand availability. Such a mechanism would have the effect that the first reproduction is localised in the Member State(s) where the act of making available is localised. The localisation of the reproduction thus follows the localisation of the making available to the public (from which the reproduction follows directly)290. It could even be considered that, although the reproduction right and the right of making available to the public are independent rights and both have to be cleared for such an online exploitation, the reproduction right is incidental to the making available right for the sake of the localisation of the protected acts (the localisation of the reproduction follows the localisation of the making available, not the other way around).
The next reproductions (after the download resulting directly from the availability) would then be localised according to the normal criteria and possibly in other Member States (e.g. where the copy is made or where it is stored). These copies are made at the initiative or under the control of the end-user, without involvement of the content provider. Also, infringing reproductions would be localised according to the common criteria. This fiction of localising the first reproduction in the Member State of making available for licensing purposes would limit the territorial scope of the licence and would not oblige the licensee to clear the rights for all the territories where reproductions of the work could be found (according to various criteria, such as the first making of the reproduction, the storage, each separate use or the continued use of the work).
131. A localisation criterion for the first reproduction by the end-user could consequently be proposed, based on these specific circumstances. In practice this matters most for the reproductions that are not covered under an exception and for which a licence should be obtained. The first reproduction following the public availability of the work or other subject matter is localised in the Member State(s) where the work or other subject matter is made available to the public, regardless of where the end-user is present and regardless of where the reproduction is materially stored, if the following conditions are met:
(1) the content provider acquires a licence for making the work or other protected subject matter available to the public and for the first reproduction following from this availability;
(2) the reproduction is made by a decision of the end-user but follows directly from this public availability, under the control of the content provider;
(3) the localisation criterion is only applicable for the purpose of the licence acquired according to (1), covering this exploitation and negotiations pertaining to such a licence. It does not apply in cases of infringement.
132. Consequences for territoriality. Depending on which localisation criterion is applied, different consequences of "territoriality" follow.
Country of origin. When a content provider offers works for download within the EU, the act of making available is localised in one Member State. The end-users who download the work in other Member States make reproductions of the work. To the extent that the content provider clears the reproduction right for such reproductions made by using its online content service, these reproductions are localised in the same Member State of origin. For the purpose of the licence, the content provider should acquire a licence from the persons holding the making available right and the reproduction right in that Member State of origin (these may be different persons)291. This legal fiction facilitates the task of the content provider that is supposed to clear all relevant rights, especially when the rights are territorially fragmented. However, when the rights are indeed territorially fragmented, the risk that a right holder in one Member State undermines the position of those holding rights in other Member States is extended to the reproduction right: it has been explained that localising the making available right in one Member State, while it has effects in other Member States, leaves the owners of the making available right for those other Member States with empty hands292.
The same would be true for the reproduction right, should this be territorially fragmented: the first reproduction is localised in the (single) Member State of making available by a fiction, which undermines the position of the person owning the reproduction right in the Member State where the reproduction would otherwise be localised (actual download, storage, making, ...). Content providers and right holders could avoid such an effect by agreeing contractually upon the territorial reach of each online service.
Country of exploitation. When a content provider offers works for download within the EU, the act of making available takes place in all Member States of exploitation. End-users are likely to download the works in the Member States of exploitation, but there may be downloads outside these Member States (overspill). To the extent that the content provider clears the reproduction right for the first reproductions following the public availability, made by using its online content service, these reproductions are localised in the Member State of exploitation, even those that are made by end-users residing outside this Member State. For the purpose of the licence, the content provider should acquire a licence from the persons holding the making available right and the reproduction right in that Member State of exploitation (these may be different persons)293. An express localisation criterion would limit the risk that the reproduction right is exercised in a Member State where a download is made but the public is not actively targeted. Localising both acts of making available and reproduction in the Member States of exploitation allows all persons holding exclusive rights for a particular territory A to exercise their rights without competing with other right holders on their territory (territorial fragmentation): a person holding rights for another territory B cannot exercise her right in such a way that the exploitation in territory A is affected or undermined. Should the right holder for territory A engage in exploitation in territory B, then the right holder for territory B can claim infringement (regardless of the "country of origin" of the availability and reproductions). Considering that the "exploitation" or "targeting" of a public is a factual criterion, it also allows the right holders to adapt to evolving practices of the content provider and exercise their right whenever a marginal use (overspill) becomes an exploitation (with more active involvement of the content provider). This means that the right holder for territory A may have to tolerate some "overspill" from the activities of the right holder in territory B, but when this overspill develops into a more intentional targeting of the public in territory A, then the right holder for territory A may claim infringement.
A practical difficulty may arise where the work is exploited indifferently in several Member States (such as a website with content that can attract a general public and with an interface in a language that a general public understands, e.g. a music streaming service with an English interface). It may not always be straightforward to assign the reproduction outside the territory of exploitation to one Member State rather than another. For example, a website offering subject matter (such as e-books) to a German-speaking public could be "targeting" the public in several countries (Germany, Austria, Luxembourg, Belgium). When a German speaker outside these territories, e.g.
in the Netherlands, downloads a copy of the e-book, then this first reproduction would be localised in the country where the work is made available to the public. In this case, however, the work is made available in several countries, so there may be a need to assign the reproduction to one of the territories of exploitation. It should be verified in practice to what extent this presents an actual problem: there may be no particular difficulty, e.g. if the reproduction right for all Member States is held by one and the same person.

B. Author/initial right owner: transfer of coherent bundles of rights

133. Numerus clausus. Another way to mitigate the impact of the reproduction right on the exercise of the making available right is to control the way the initial right holder exercises copyright. When she licenses her rights to secondary right holders, the initial right holder is in principle free not only to assign the whole bundle of rights, but also to select any combination of her economic faculties and to transfer it through licence agreements. She may tailor the licence according to three basic dimensions: content, time and space. This creates an indefinite number of potential fragmentations of copyright. That fragmentation does not necessarily correspond to a coherent exploitation of copyright and candidate licensees consequently have to acquire several authorisations from different secondary right holders to start the exploitation of the work. The need to acquire several licences could be avoided if the rights transferred by the initial owner of copyright to secondary right holders took into consideration the purpose of the exploitation for which the licence is given. Each licence would then provide a coherent bundle of exclusive rights to secondary right holders allowing specific exploitations of the work. The contracting party would receive all the rights she needs to exploit the work independently of others. This obligation could then be extended to subsequent transfers of rights by derived right holders. In the field of tangible property rights, legislation limits the ways the right of ownership can be exercised. According to the numerus clausus principle that applies to tangible property rights 294, only a closed, exhaustive set of secondary property rights can be contractually derived from the initial ownership. As a consequence, the number of secondary property rights is limited by law, their content is restricted and it is laid down in mandatory rules how these rights can be exercised. This limits the contractual autonomy in the definition of the relevant and admissible classes of exploitation schemes. The application of the numerus clausus option as it regulates the sphere of tangible property rights is not realistic for copyright. Indeed, new forms of exploitation continue to be developed as new technologies create new opportunities, and works can consequently be exploited in countless ways. Therefore, it seems impossible to achieve a legal numerus clausus of exploitation forms of copyright. Copyright licences have to adapt to new technologies that allow new forms of exploitation of the works. The idea could however inspire a more flexible criterion that may pursue the same result, i.e. avoid a (substantive) fragmentation of rights that makes it unduly difficult to acquire all rights applicable to one form of exploitation.
The author or initial right holder could be under the obligation to transfer rights in coherent bundles so as to allow the licensee to exploit the work or other subject matter in an autonomous way, independently from third parties holding exclusive rights. She would not have the possibility to split the rights in a way that would conflict with exploitation modes. Such obligation could be backed by different sanctions (invalidity of the transfer, non-opposability of the transfer of rights relating to non-autonomous forms of use) 295. Consequently, it would be difficult to transfer categories of rights without regard to the particular forms of exploitation the work triggers 296.

134. Difficulties. This option presents many difficulties that make it unfit for solving territoriality issues 297. The impact of such a measure would de facto be limited to works for which the initial right holder would fall under the scope of the InfoSoc Directive (EU). In practice, it could not apply to works for which the rules regarding the initial ownership are submitted to the copyright legislation of a third country and for which, acting upon those rules, contractual arrangements (including licensing leading to fragmentation) have been made between the initial owner (according to that applicable law) and several licensees. The rights could indeed be fragmented before the work enters the realm of the EU Member States. Consequently, that option would have a limited impact on foreign repertoires of works for which contractual arrangements have previously been made under a non-European applicable law. Furthermore, the impact of that measure will also be limited if it only applies to define the substance of the rights licensed by the initial right holder and if it does not regulate the territoriality of the agreements. Even if coherent bundles of rights are transferred to the secondary right holders, the exploitation that will be authorised will generally remain limited to a specific territory. In the audiovisual sector for instance, even if the producer holds coherent bundles of rights, these may be territorially fragmented per Member State, so that broadcasters or distributors get a national exclusivity.

C. Licences

135. A third option to control the territoriality effects of cross-border exploitations is to regulate licensing mechanisms. We will first examine briefly the various licensing mechanisms in the United States (mostly applied in the music sector) before considering some options for the European legal framework. The purpose of this excursion is to get an idea of the diversity of licensing mechanisms that can be applied and how this does not always simplify the licensing question. The objective is not to provide a comprehensive description of the licensing practices in the USA.

Scope. The US Copyright Act provides a compulsory licence for the making and distribution of phonorecords of nondramatic musical works (s. 115). It is not a blanket licence for all musical compositions belonging to the repertoire of one licensing entity. Furthermore, it is required that the phonorecord of a nondramatic musical work was distributed with the right holder's consent, the sound recording was fixed lawfully and the making of the phonorecords was authorised by the owner of copyright in the sound recording.

Beneficiary. The beneficiary of the compulsory licence is any person whose primary purpose in making phonorecords is to distribute them to the public for private use, including by digital phonorecord delivery.

Object. The compulsory licence covers the "mechanical rights", i.e. the rights of reproduction and distribution
(s. 106(1) and (3)) but not the performing rights 302. The compulsory licence also covers "digital phonorecord delivery" (s. 115(3)(A)), at least in some respects. "Digital phonorecord delivery" is defined as "each individual delivery of a phonorecord by digital transmission of a sound recording which results in a specifically identifiable reproduction by or for any transmission recipient of a phonorecord of that sound recording, regardless of whether the digital transmission is also a public performance of the sound recording or any nondramatic musical work embodied therein". By contrast, "a digital phonorecord delivery does not result from a real-time, non-interactive subscription transmission of a sound recording where no reproduction of the sound recording or the musical work embodied therein is made from the inception of the transmission through to its receipt by the transmission recipient in order to make the sound recording audible" (s. 115(d) US Copyright Act; our emphasis). This definition refers to elements of reproduction, distribution and public performance, but the compulsory licence is only available for the making of the phonorecord and only with respect to the musical work 303. The mechanical licence for the sound recording must be negotiated with the record company, as must the licence of the right of public performance in the sound recording (record company) and in the musical work (publisher or PRO).

Formal conditions. The candidate for a compulsory licence must give notice of her intention to obtain a compulsory licence. This notification must be served per title on every right holder whose composition the candidate wishes to use.

Copyright owner. The mechanical rights (reproduction right, distribution right) are commonly administered by the publisher of the work. There is a centralised administrator of the mechanical rights for a number of publishers, but not all publishers' works are available there 304.

Royalties. The licensee has the obligation to pay royalties, which are fixed by law. The right holder must be identified in the records of the Copyright Office for the payment of the royalties (s. 115(c)). The royalties for digital phonorecord delivery are determined as well. The royalty payments should be reasonable and in this respect a distinction is made between the digital phonorecord deliveries where the reproduction or distribution is incidental to the transmission and the digital phonorecord deliveries "in general". It is however possible for the right holder and the licensee to negotiate the terms and rates of royalty payments. The author/recording artist may agree to reduce the mechanical royalty rate for the record company that makes and distributes phonorecords including the author's work ("controlled composition clauses") 305.

Combined compulsory licence and negotiated licence. Overall, in order to offer an online music service, the content provider can at most rely on a combination of the compulsory "mechanical" licence (for the reproduction and transmission by means of a digital phonorecord delivery of a musical composition embodied in a sound recording) and a negotiated licence with the copyright holder to the sound recording for the delivery of digital phonograms. The licences then cover the reproduction and distribution of the recorded performance, i.e. more than the reproduction and distribution of one's own version of a performance of a musical composition.
This means that a service provider can become a "virtual record store" if it is able to clear the rights to the sound recordings 306. Moreover, if the copyright owner of the sound recording has acquired the necessary rights for digital phonorecord deliveries from the copyright owner of the musical work, the former can license the rights to both the sound recording and the musical work to third parties 307.

Difficulties. The compulsory licence in the US Copyright Act shows some difficulties in practice. It is reported that, generally, the provision (s. 115) has rarely been used for compulsory licences. It serves rather as an upper limit for the royalty rate in privately negotiated licences between music publishers and record companies that wished to make and distribute sound recordings 308. The obligation to serve a work-specific notification for each musical composition (rather than per repertoire) is cumbersome. The hesitations on the qualification of online transmissions are reflected in the compulsory licence 309. One transmission may entail several protected acts (reproduction, distribution, public performance) and this entails legal uncertainty as to the appropriate licence (compulsory licence and/or negotiated licence). At stake are the obligation to clear certain rights (but not others), the type of licence (compulsory/negotiated) and the copyright owner involved. For the time being, this is solved by the decision in Realnetworks and Yahoo! 310 (see Part 2.II.A of this Study) but the same issues will arise as digital services continue to evolve. The use of the compulsory licence for "incidental" reproductions is not clear, i.e. reproductions of the work needed to make a digital transmission. Some technical copies (cache copies, intermediate server copies in the course of downloads or streaming of performances) may not be copies for which a compulsory licence must be cleared. Some right holders claim payment for such technical copies 311 but Copyright Register Peters observed that it should first be established whether they fit the definition of "digital phonorecord delivery" (DPD) 312, before it can be decided whether such copies are "incidental" 313. The Copyright Register called for more guidance on the definition of "incidental DPD", in order to avoid that private licences are proposed where not even a compulsory licence is required by law. The question is whether on demand streams do result in a reproduction that fits the definition of a DPD. It is clearer that this is the case for the delivery of a digital download (limited or otherwise). Overall, licensing is complicated by the circumstance that the rights required for such music services remain fragmented; there is no one-stop-shop 314. There are indeed different licensing agencies for each right. Music publishers license the reproduction, distribution and even public performance rights directly, rather than through an intermediary or an aggregator. Varying business models (with a mix of interactive and non-interactive features, permanent and temporary reproductions) have resulted in a complicated tariff structure 315. US Copyright Register PETERS, in her 2004 and 2005 statements 316, was in favour of abolishing the compulsory licence. According to her there was no reason to limit the exclusive right of the author (which can only be done when it conflicts with a public interest) and there is a viable alternative in the form of voluntary collective management (perhaps even extended collective licences) 317.
139. Sound recordings. Public performance. Compulsory licence. A different regime applies to sound recordings. There is also a compulsory licence for most types of public performances 318, including digital webcasting (and other linear or non-interactive transmissions to the public) of sound recordings 319. The royalties from the compulsory licence are divided between producers and performers (both of which have contributed to the sound recording).

140. Overall complexity. A music service that offers both interactive and non-interactive features must take into account the following rights and licensing mechanisms 320.

Non-interactive/linear streaming. This feature involves the public performance right.
- For musical compositions, this right should be individually licensed either via PROs or via individual publishers.
- For the sound recordings, there is a compulsory licence for the public performance right (s. 114).

Interactive features. This feature involves the mechanical rights (downloads) and/or the public performance rights (streaming).
- For musical compositions, the mechanical rights can be subject to a compulsory licence or, rather, be cleared from individual publishers, and the performance rights are licensed by the PROs or individual publishers.
- For the sound recordings, an individual licence can be negotiated with the record companies.

One option proposed by the Copyright Register and the Internet Policy Task Force of the Department of Commerce for reducing the overall complexity of music licensing is the further expansion of collective licensing 321, even extended collective licensing 322. In the next sections we will examine whether the licensing mechanisms used in the USA can be a source of inspiration for the EU 323.

Licensing modalities

141. Building on the experience in the USA, it could be verified whether in a European context the territoriality effects can be controlled by regulating the licensing modalities. One option could be to impose an obligation upon the author/initial right owner (see supra sub B) but also upon the subsequent right holders to grant licences in coherent bundles. The purpose of such obligation is to make sure the licensee acquires a licence with autonomous value and is not dependent on other parties to use its rights. This would oblige the right holder (the author) to transfer homogeneous bundles of copyright prerogatives, rather than categories of rights. A situation of copyright thickets should be avoided from the start 324. In litigation opposing MyVideo (a German platform offering the possibility to watch videos by streaming) and CELAS (a joint venture of collecting societies exercising the mechanical rights, i.e. the reproduction and distribution rights, of the Anglo-American repertoire of EMI Music Publishing), the courts in Munich decided that according to German law, only a transfer of rights corresponding to autonomous (economically and technically independent) exploitation forms has an effect towards third parties (in rem). In the context of the online use of music works, in particular by means of an online on demand service (streaming), the reproduction has no independent economic meaning in comparison to the making available to the public. That decision suggests that the making available right and the reproduction right should be maintained together, in the sense that the reproduction right is incidental to the making available right and should consequently not require a separate authorisation by the right holder 325.
This solves the fragmentation of the reproduction right and the making available right, even at a later stage of exploitation (by derived right holders). However, it does not take away the possibility for the author (or derived right holder) to fragment both rights territorially (see supra sub B).

142. More invasive options are to impose compulsory licences or, alternatively, the mandatory collective management of the reproduction right in online exploitations. We will briefly outline these options and the main objections (in the framework of this Study we cannot provide an exhaustive in-depth analysis of these options). It should also be noted that the Proposal for a Directive on collective rights management and multi-territorial licensing of rights in musical works for online uses (COM(2012) 372 final) is expected to facilitate the licensing process for cross-border online exploitations of musical works. The effect of this proposal is however beyond the scope of this Study.

a) Compulsory licence for first reproductions

143. A compulsory licence could be imposed covering the first reproductions following directly from on demand availability to the public. When a work (or other subject matter) is made available to the public and a member of the public requests the transmission of the work, a reproduction (in the current understanding of the reproduction right) is made on her device. This could be a temporary reproduction or a permanent one. Instead of requiring that either the content provider or the end-user negotiate a licence for this reproduction, it could be decided to grant a compulsory licence for this first reproduction only. This use should be subject to remuneration. Similarly, the reproductions of other protected subject matter should be taken into account. These should either be authorised by the right holder or covered under a compulsory licence 326.

144. Music and words pertaining thereto. Such a compulsory licence is allowed under the Berne Convention, at least for the right of recording of musical works and the words to the musical work (art. 13 BC) 327. Arguably this possibility to impose compulsory licences was not meant to cover the end-user's downloads at the time this provision was adopted in the Berne Convention. The Berne Convention imposes two restrictions.

(1) The compulsory licence should be restricted to the countries that have imposed it (territoriality of copyright). This means that the Member States are free to impose compulsory licences and, if they do, the effect of their compulsory licences should not exceed the territory of their country. There is therefore a risk that more disparities among the Member States are created. Should this option be considered, it should be imposed at the European level so all Member States have a compulsory licence on similar terms.
(2) The authors should receive an equitable remuneration that is negotiated between the parties involved or fixed by a competent authority. From the American experience, it appears that in practice this entails formalities (e.g. notification per song), which creates a significant burden. It also shows that the remuneration set by the competent authorities in practice functions as an upper limit for the negotiated fee for such reproductions. The debtor of such equitable remuneration should be the person making the work available to the public (content provider). Such obligation upon the end-user is not likely to be effective, even if she makes the material reproduction. As a matter of policy, it may be asked if a compulsory licence is called for when it appears from current practice that the right holders are capable of exercising their exclusive rights (albeit in sometimes burdensome negotiations and licensing processes, which may entail significant transaction costs). This should at least be assessed on the basis of empirical economic data. Moreover, such a compulsory licence would require a competent authority to set the level of the equitable remuneration. This could be done at the national level but this entails the risk of disparities between the Member States. Alternatively, a competent authority at the European level could be created to ensure that the compulsory licence is applied evenly. Such solution obviously requires further examination that exceeds the scope of this Study.

145. Other works. The Berne Convention allows compulsory licences (in certain circumstances) for musical works and the words associated with this music, but it explicitly excludes this for cinematographic rights (cinematographic adaptation and reproduction; distribution; public performance and public communication by wire of works thus adapted or reproduced; art. 14(3) BC). Unless a modification of the Berne Convention is an option, it should be empirically verified whether it makes sense to provide a compulsory licence for music only. More generally, it should be assessed whether a uniform system of compulsory licences is appropriate: in some sectors there may be no need for such a system. Arguably such a compulsory licence constitutes an exception or a limitation of exclusive rights, which entails that the three-step test should be met (art. 10 WCT). If this is indeed the case, the limitation should be applicable for special cases (it is uncertain that any first reproduction following a making available to the public qualifies as a "special case"), it must not conflict with a normal exploitation of the work and it must not unreasonably prejudice the legitimate interests of the author. It is premature to assess whether the three-step test could be met in case of a compulsory licence for the first reproduction following a download. It is however observed that many types of works are made available for download without any particular problem of licensing. This is the case e.g. for literary works made available with the consent of the publisher. Similarly, some authors make their works directly available to their public (e.g. via academic platforms or open access databases).
Such authors mostly own the rights to offer their works for download by any interested member of the public. It is unlikely that a compulsory licence is necessary for such reproductions (it could even have the effect of restricting freely available works if the platform provider were subject to an obligation to pay remuneration; such obligation could negatively affect the right to freedom of expression).

146. Summary. A compulsory licence for the first reproduction following directly from the public availability of the work is not a straightforward solution to the issue of territoriality. The American experience shows a number of drawbacks (the compulsory licence results in a de facto upper limit for negotiated royalties, administrative burden, uncertainty on the definition of the rights and their classification and consequently on the scope of the compulsory licence, lack of a one-stop-shop for all types of rights and subject matter) 328, the Berne Convention allows it for musical works (and words to the music) but excludes it for cinematographic rights, and it is uncertain whether a uniform system applicable to all types of works (regardless of their usual exploitation) meets the three-step test. Furthermore, the reproductions of other protected subject matter should be authorised by the right holder or under the same or a similar system of compulsory licences.

b) Mandatory collective management

147. Another mechanism for facilitating certain licensing processes is the mandatory collective management of rights. Such management has been imposed in the SatCab Directive with regard to the cable retransmission rights. In summary, the idea is that the right keeps its exclusive nature (right to authorise or prohibit a use) but it can in principle only be exercised by a collective management organisation (CMO). In the SatCab Directive, the cable retransmission right is subject to mandatory collective management (art. 9 SatCab Dir) 329. As explained in the Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society, the cable retransmission right is defined as a secondary act of exploitation, i.e. the retransmission by cable for reception by the public of an initial transmission of programmes intended for reception by the public.
There are two acts of communication to the public and the cable retransmission right is defined as the secondary act by reference to the primary or initial communication to the public. The rationale of this special regime was to facilitate this mode of exploitation and to avoid that one right holder could exercise her individual right and thus compromise the exploitation of all other works. Cable operators were deemed vulnerable to such risk because they depended technically on the primary broadcast ("initial transmission") of the work and did not have the opportunity to secure the rights to all works and other subject matter in the broadcast programmes. The solution was to have the cable operator conclude licences with CMOs (in some circumstances the broadcasting organisation) for the works in their repertoire (or works belonging to the category of works they manage; see art. 9(2) SatCab Dir).

148. Could a similar system be developed for the first reproduction resulting directly from the public availability of the work? The right holder (author, derived right holder) would have the right to individually authorise or prohibit the making available of the work or other subject matter to the public but not the first reproduction following from this availability. Instead of an individually negotiated licence, a CMO would exercise this right for her and conclude agreements for the first reproduction of all works in its repertoire (with a possible extension to non-affiliated authors or right holders). Several objections immediately raise questions as to the feasibility (and desirability) of such a system. The retransmission by cable of already broadcast material is not comparable to the download of subject matter following its public availability. The cable distributors perform a secondary exploitation and are technically (and economically) dependent on a primary distributor (the broadcasting organisation), which alone determines the broadcasting programme and clears the rights accordingly. Such relation of a primary and a secondary exploitation is absent between making the work available to the public and the reproduction that follows from it. On the contrary, both acts are part of the same exploitation and the content provider making the work available also controls the resulting reproduction through the same technical process. Unlike the cable distributor, this content provider determines which works or other subject matter are made available for download, hence it has the possibility and opportunity to clear all necessary rights (even if this is a burdensome process). Yet the content provider would have to clear the making available right with the individual right holders and obtain a licence from the relevant CMOs for the first reproduction. Such a system could only be useful if (1) the reproduction right is highly fragmented and clearing the rights for all territories where a reproduction could possibly be made is considered too burdensome and (2) one CMO is able to grant a licence for all such territories. Alternatively, the downloader/end-user would have the obligation to clear the licence with the CMO in her territory. This scenario is not realistic (if the end-user should bear the burden of the remuneration, the equitable remuneration for private copies is a more practical tool). In any case, the rights of making available and reproduction are not fragmented to the same extent in all sectors. Moreover, it may not be the reproduction right that is territorially fragmented.
It should therefore be verified per sector whether a mandatory collective management of the reproduction right for the first reproduction can provide a solution at all. It does not seem justified to design a unique system for all sectors of exploitation and for all types of works or other subject matter. The mandatory collective management of the cable retransmission right was a solution tailored to a specific sector, i.e. the retransmission of radio and television broadcasts. This sector is well delineated, hence it was possible to discern (and codify) sector practices. Also, these sectors traditionally coincided with the borders of one country, one national market, and the CMOs are organised at the national level. In summary it can be concluded that the mandatory collective licensing of the reproduction right covering the first reproduction following directly from its public availability is not an overall practical solution to facilitate the licensing of cross-border on demand services.

D. Exceptions

149. The exceptions for temporary acts of reproduction and "private copy" could play a role in aligning the territoriality of the reproduction right to the localisation of acts of making available to the public. Insofar as the Commission sees a necessity to restrict the impact of the exclusive rights to the territories of determined Member States, according to the criteria of its choice, it could consider applying these exceptions for this purpose and preventing the reproduction right from extending the geographical reach of the (online) exploitation of a work.

Exception for incidental reproductions (art. 5(1) InfoSoc Dir)

150. The exception for temporary acts of reproduction applies in all Member States and its downstream impact is being clarified in the CJEU's decisions. It could however be examined whether this exception could be shaped to address other territoriality effects of upstream or downstream reproductions as well. Broadly speaking, the main idea is to emphasise that all reproductions that are dependent on or incidental to the principal act (in this case, the assumption is that the making available to the public is the principal act) are exempted under this exception and therefore no legal consequences can follow from them in territories outside the Member States where the copyright relevant acts (such as the making available to the public) take place. It has been demonstrated that the current exception for temporary acts of reproduction is difficult to apply and uncertainty exists regarding its scope and its application (notwithstanding continuous clarification by the CJEU). The relation between the reproduction right and the right of communication to the public could be clarified by simplifying the exception for temporary acts of reproduction. This could have an impact on the overall territoriality effects of cross-border (online) exploitations. This could be achieved by emphasising the core of this exception, i.e. the absence of independent economic significance of the reproduction in relation to the use it enables. It could be argued that the "transient", "temporary" or "incidental" nature of a reproduction under this exception is actually an expression of the condition that the exempted copies should not have independent economic significance 330. The incidental nature of the reproduction could be stressed, since this condition is complementary to the condition that the reproduction should not have any value on its own.
Conversely, the conditions that the reproduction be temporary or transient could be softened (and treated as indications of the incidental nature of a reproduction in relation to the use it enables). The scope of the exception would thus be clarified and seemingly extended in comparison to the current exception. In practice such a redrafted exception would mostly cover cases that are currently exempted. It would be beyond doubt that the temporary downstream copies made at the end-user's device (screen, cache memory) that allow the end-user to consult (read, view, hear) the protected material are thus exempted. Reproductions that last longer than a transient moment could be covered as well. However, they should functionally be incidental to the transmission and should not allow the end-user to have independent control over her access to the work. The emphasis on the incidental nature and the economic value of the reproduction in relation to the availability of the work for on demand transmission could perhaps even exempt the "upstream" copy of the protected work under this exception. It should be verified whether this reproduction does not in itself constitute a form of exploitation of the protected subject matter stored, i.e. whether it has independent economic significance. Permanent downloads that allow the end-user to acquire the file (and the protected subject matter) on a permanent basis are not incidental to the act of making available to the public. On the contrary, the person who downloads a file mostly intends to acquire control over this file and consult it independently of the content provider that first transmitted it to her. Such permanent downloads would not be exempted under this exception for incidental reproductions and the reproduction would be localised in the Member State where it occurs. In some cases there may be room for doubt. For instance, content providers (especially online music providers) offer different possibilities to "consume" music, by direct streaming (which requires Internet access) or from a temporary download on the subscriber's device. Such a download makes the work available to the subscriber only and is stored locally on her device so she can listen to it even when she is offline. It could be argued that the reproduction is not incidental to the making available to the public, since the subscriber gains a certain independence from the service provider to enjoy her music. Consequently these reproductions, albeit temporary, should not be exempted and require the right holders' prior consent. On the other hand, the subscriber is entitled to these temporary downloads for the duration of the subscription only and in this sense she does continue to depend on the music provider, which transmits the music to the subscriber. Following this view, the reproductions do not allow the user to listen to the music independently from the provider which makes it available to her; consequently the reproductions would be incidental and therefore possibly exempted under the exception. It should then be a matter of technical and economic analysis to determine whether the end-user gains sufficient independence from the service provider, i.e. whether such temporary downloads are incidental to the making available of the protected subject matter by the music provider.

151. Three-step test. It should be verified whether the exception for incidental reproductions, after this clarification and possible extension, still meets all steps of the three-step test
(art. 10 WCT; art. 16 WPPT; art. 13 TRIPS) 331.

(1) Special case. The exception should be formulated in a sufficiently precise way. The formulation of the exception will presumably not present a major problem.

(2) Normal exploitation. There should be no conflict with the normal exploitation of the work or other subject matter.
- To the extent that the exception only exempts reproductions of an incidental nature that do not have autonomous economic value, the normal exploitation of the work (by means of on demand forms of exploitation) is safeguarded. The right holder is able to monetise the making available right and continues to have this possibility with the exception. By contrast, the reproduction that thus comes along with the online availability and the resulting transmissions cannot be monetised independently from the making available right. The exemption of the reproduction would not undermine the exploitation of the work, which is protected by the making available right. There should therefore be no conflict with the normal exploitation of the work.

(3) Legitimate interests of the right holder. The exception should not unreasonably prejudice the legitimate interests of the right holder. It should first be examined whether the exception is justified. The law grants a reproduction right, hence it can be assumed that the right holder has a legitimate interest in exercising her right. The next question is how this interest is prejudiced by the exception and whether this limitation is unreasonable. It is recommended to assess the impact of any new or modified exception on the basis of empirical (economic) data.

Reproductions for private purposes

152. Private copy (art. 5(2)(b) InfoSoc Dir). It has been found in the first part of this Study that the private copy exception is unevenly implemented in the national copyright laws and that therefore there is no actual harmonisation between the Member States. Some Member States do not provide an exception for reproductions for private use at all. This divergence is illustrated by the example of online personal video recorders (PVR). Some German courts have come to the conclusion that the use of an online PVR service to record television broadcasts can result in a copy that is made by the end-user and that is exempted under the exception for private use. A French court has assessed a similar situation differently: the PVR provider was held liable for the reproduction made at the instruction of its users and the exception for private use could therefore not apply. The application of this exception is indeed intrinsically linked to the general liability rules (see supra sub Part 1.II, par. 22). The divergence of liability rules and of the exception for private use excludes the possibility to offer a pan-European service under one regime of exceptions. A second issue is that reproductions resulting directly from the public availability may be localised in other Member States than the one where the work or other subject matter is made available to the public. This is not so much a problem for those copies that are exempted under the exception for temporary acts of reproduction, which all Member States have had to implement, and that would mostly occur in relation to streaming services. For copies that are not exempted on this ground, mostly download copies, the task of the service provider may be complicated, at least when the reproduction right is fragmented.
When the service provider cannot precisely predict where the reproduction will take place and for which territories it should clear the reproduction rights, it cannot conclude an all-encompassing licence with the holder(s) of the (fragmented) reproduction right. This leads to legal uncertainty in those cases. It should however be observed that, in some sectors, one right holder owns the rights of communication to the public (making available right) and reproduction (e.g. audiovisual works; book publishing), which allows the service provider to negotiate licences that cover all types of use at once. Absent empirical data, it cannot be assessed how important this issue is and how much it raises transaction costs. In order to facilitate the offer of an online service in several Member States or even in all Member States, one option is to push the harmonisation of this exception. The idea would be to limit the territorial impact of the online exploitation by systematically exempting the reproductions following directly from the (legitimate) on demand availability, provided that the conditions are met (in particular, the copy is made by a natural person for her private use and without commercial purpose). In return, the holder of the reproduction right could in some cases be entitled to fair compensation 332. The exception for private copying can serve as an instrument to facilitate multi-territorial services if it is applied in all Member States in the same way and if it is broad enough to cover a variety of online services.
- In order to allow providers to offer a pan-European service, this exception should be mandatory. All Member States should provide an exception to the reproduction right for private use. This has proven difficult, considering the continued efforts to discuss this exception and its levies at the European level (the latest outcome being the recommendations of Mr. Vitorino of 31 January 2013 333).
- It should be clarified whether the exception can apply when an individual uses third party services to make the reproduction for her private use (online service providers). If the exception is restricted to the reproductions made by the individual and cannot apply to reproductions made using online services, then this exception will not alleviate the task of the service provider. It would suffice that the content provider assists the individual in making the reproduction for the exception not to apply. By contrast, if the exception could apply to all reproductions for the private use of the individual, made by her or made by third parties following her instructions and on her behalf, then it could apply to reproductions made using online services and it could alleviate the licensing process. There are however important legal (and economic) arguments against extending the private copying exception to the effect that it would allow commercial services to be created on the pretext that the end-user makes a private copy and that therefore no authorisation is required. These arguments will be discussed hereafter.

153. Three-step test. There are several objections to this extension of the private copying exception. Firstly, the fair compensation obligation would be an important element for the right holder to see her damage compensated. Given the discussions and controversies that levy systems provoke, it does not seem politically desirable to extend the reach of this exception and its compensation mechanism.
Moreover, it is highly questionable whether it is economically desirable to replace a negotiated licence by a legal compensation where this is not strictly necessary. Such levies may also raise other internal market concerns; it is however not within the scope of this Study to describe them and it is for the Commission to assess whether such an approach would fit its policy priorities. From a legal point of view, the question is whether it is permissible under the three-step test to replace a system of negotiated licensing with an exception (with fair compensation). In the cases under consideration the individual for whom the copy is made may not have a direct or indirect commercial purpose and may use it only for her private use (or in her private sphere), but the third party service providers in many cases do have a direct commercial purpose. In some cases the end-users can use the service under a paid subscription. Alternatively, the service provider may attract advertisement and thus monetise the public it can gather through its service 334.

(1) Special case. The exception should be formulated in a sufficiently precise way. The formulation of the exception will presumably not present a major problem.

(2) Normal exploitation. The exception should not conflict with the normal exploitation of the work or other subject matter 335.
- It should be verified whether the systematic use of the exception could divest right holders of major sources of revenues that are significant within the overall commercialisation of works 336. The exploitation of works is structured differently across sectors. These differences should be taken into account when assessing the impact of an exception on the exploitation of the work. If such an exception were also to exempt reproductions made via a third party service provider (with a commercial purpose), several circumstances should be taken into account to assess whether it affects the normal exploitation of the work.
- In some cases, a work can be freely downloaded via an intermediary (platform) and, notably in some scientific or academic spheres, very little attention is paid to the legal basis of the download. It is not explicitly stated whether the download is authorised by the right holder or covered under an exception, such as the exceptions for research and education. This is for example the case for some academic platforms. In these cases, an exception will have a limited economic impact on the exploitation, at least where the authors have motives other than economic ones for making their works available for download.
- In other cases, the service provider may pay a licence fee to the right holders in return for the commercial online exploitation of the work, without specifying whether the amount covers the public availability of the work and/or the reproduction by the downloader. The effect of an exception for downloads meant for private purposes should be assessed, but possibly it does not have an impact on the amount of the licence fee (which will then be paid for making the work available to the public). In that case the exception is unlikely to undermine the exploitation entirely (mere losses should be assessed under the third step of the test).
- When the service provider deals with the holders of the reproduction right and the making available right separately, the effect of the exception is more important for the former than for the latter. This makes it complicated to assess whether there is a conflict with the overall "normal exploitation" of the work.
- The impact on the exploitation of the work is even more important when the exception not only exempts the copy made by the end-user but also affects the qualification of the online accessibility as an act of making available to the public. It has been described in a previous Study 337 that in some jurisdictions online personal video recorder (PVR) providers can operate without acquiring the right holder's consent, either for making the work available to the public or for the reproductions made 338.

It should be verified whether an approach per exploitation sector is more appropriate than a general and unique regime applicable to all types of works and all exploitation modes.

Firstly, we have examined whether the definition of the exclusive rights and the qualification of online exploitations could provide a solution. This would require a sharper distinction between the making available right (as a subset of the right of communication to the public) and the reproduction right (and the distribution of the reproduction). Consequently, each act of exploitation would be qualified as either an act of making available or a distribution of a reproduction. Broadly speaking, whenever the work or other subject matter is made available to the end-user on a temporary basis (without final control over the use of the work), it would be made available to the public (communication to the public). When the end-user does gain control over the work or other subject matter (and therefore independence from the content provider), a distribution of a reproduction has taken place. This construction is similar to the division of rights in the USA and Canada (cf. decisions of the Canadian Supreme Court). These acts of distribution/reproduction can be localised in the EU according to the criteria discussed for the making available right (country of origin; country of exploitation).
- Considering that the copy from which the files are technically made available for transmission to the public and the first reproduction made by the end-user are part of one composite act of distribution to the public, and considering that the content provider performs this protected act, the act of distribution could be localised in a country of origin. Only the subsequent reproductions, made by the end-user after she has taken possession of the first reproduction, would then be localised according to other criteria. This construction would localise all relevant acts of making a work available for download (including the download) in one Member State of the EU.
- Drawbacks are similar to the ones described for the making available right, in particular the risk that the (exclusive) exploitation in other Member States is undermined. This may affect the overall exploitation of the work in the EU and thus cause adverse economic effects.
- The making available for download, qualified as a distribution of a reproduction, could be localised in the Member States of exploitation, depending on where the national public is targeted. Downloads (i.e. reproductions) outside the territory of exploitation would not be qualified as a distinct distribution in the Member States where the public is not targeted (overspill). This solution allows a degree of flexibility to the extent that an overspill can develop, over time and through the actions of the content provider, into an exploitation for which the right holders' consent is required.

An alternative option, based on the qualification of the on demand exploitation and the relation between the exclusive rights, is to admit a fictional localisation criterion for the reproduction following directly from the public availability. In this hypothesis, the definitions of the reproduction right and the making available right are not altered but it is explicitly stated by law that the first reproduction (download) resulting directly from the public availability is localised in the Member State where the act of making available takes place (i.e. country of origin or exploitation). This approach raises the same objections as the previous one.

Secondly, there could be an obligation upon the author or initial right holder to transfer only coherent bundles of rights, to the effect that the licensee acquires all rights it needs for a particular form of exploitation of a work or other subject matter. This obligation could be extended to subsequent transfers of rights by derived right holders. Such obligation could remedy the complications caused by the (substantive) fragmentation of exclusive rights. It does not, however, solve the difficulties arising from the territorial fragmentation of rights. To the extent that the exclusive rights of making available to the public and reproduction are in different hands (albeit in coherent bundles) per national territory, a cross-border exploitation would still require the consent of various right holders (especially in relation to the reproductions outside the Member State where the work or other subject matter is made available to the public).
Member States shall also ensure that rightholders are in a position to apply for an injunction against intermediaries whose services are used by a third party to infringe an intellectual property right, without prejudice to Article 8(3) of Directive 2001/29/EC."
In practice, injunctions based on art. 8(3) of the InfoSoc Directive are introduced on a country-by-country basis, in the Member State where the intermediary is established and where it is active, and the territorial scope of the injunction requested (and granted) is limited to that Member State.
Interim injunctions. Complementary to art. 2, art. 31 of the Brussels I Regulation regulates the issues of jurisdiction regarding interim injunctions. According to that provision, "application may be made to the courts of a Member State for such provisional, including protective, measures as may be available under the law of that State, even if, under this Regulation, the courts of another Member State have jurisdiction as to the substance of the matter".
Country of origin: the work (and other subject matter) is made available in one Member State, but the consent of the holder of the reproduction right is required for each Member State where a protected reproduction can be made.
Country of exploitation: the work or other subject matter may be made available in several Member States for which consent is required, possibly from different right holders or persons managing the rights. There may also be reproductions in various Member States, for which the authorisation is required from several right holders or persons managing the reproduction rights per territory.
- Such territorial fragmentation will cause difficulties if the making available right and the reproduction right are localised in different Member States:
- Country of origin: A producer A holds the exclusive rights for Member State A. Producer A can grant the right of making available from its Member State A but with effects in other Member States B and C. Producer A only holds the right of reproduction for her Member State A. Consequently, the acts of reproduction performed in the Member States of destination B or C fall under the exclusive right of the partners/producers holding the exploitation rights for these Member States. Producer A could thus make the work available for transmission in Member State A and beyond (in Member States B and C), but if the end-user makes a reproduction in Member State B or C, then the separate authorisation is required of the producers who hold the reproduction rights in Member States B and C. A licence should thus be acquired from these partners/producers, unless an exception applies in the Member State of destination. Each producer can organise the exploitation of the work in the Member States for which she has acquired coherent bundles of rights, i.e. making available and reproduction rights for streaming and for download in the Member States for which she has the exploitation rights. If a service provider intends to undertake exploitation in several Member States, she has to acquire the consent of various right holders. Several complementary options should consequently be assessed, such as the localisation of the act of reproduction in the country of origin (see supra sub Part 2.V.A) or as to the territoriality of the licence agreements.
- Country of exploitation.
Furthermore, the CJEU decided in Luksan (CJEU 9 February 2012, Case C-277/10, Martin Luksan v Petrus van der Let) that the principal director should indeed be considered the initial owner of the copyright; hence an automatic transfer by operation of law was regarded as contrary to several directives. It seems the identity of the author or initial owner cannot be determined in accordance with the lex loci originis of the work, since it is explicitly provided for audiovisual works that the lex loci protectionis principle applies.
Following the lex loci originis of the work, the author would be invariably the same person. The authors of (pre-existing) musical works are not included; those rights are commonly administered by collecting societies, hence these rights must be cleared separately, following the system the collecting societies have set up for the international administration of rights. It was mentioned at the CEPS Digital Forum that a country of origin principle, applied to audiovisual works, should leave a sufficient degree of "contractual freedom in order to be autonomous and flexible enough in designing, launching and promoting new and economically profitable content offerings and in determining their territorial reach in the context of culturally and linguistically diverse countries" (MAZZIOTTI, Copyright in the EU digital single market, 61 and 64).
Article 8 of the InfoSoc Directive (Sanctions and remedies) provides: "1. Member States shall provide appropriate sanctions and remedies in respect of infringements of the rights and obligations set out in this Directive and shall take all the measures necessary to ensure that those sanctions and remedies are applied. The sanctions thus provided for shall be effective, proportionate and dissuasive. 2. Each Member State shall take the measures necessary to ensure that rightholders whose interests are affected by an infringing activity carried out on its territory can bring an action for damages and/or apply for an injunction (...)".
In France, it was decided that the provider of an online PVR was liable for the reproduction made at the instruction of its customer and on her behalf. The service provider could not rely on the exception for private copy, hence the court found an unlawful act of making available to the public (Cour d'appel de Paris, Pôle 5, chambre 1, 14 December 2011, Wizzgo c. Metropole Television et autres).
On the targeting of a national public, see UK High Court of Justice 16 July 2013, FAPL v British Sky Broadcasting and others [2013] EWHC 2058 (Ch), par. 45: the website of FirstRow was in English, it had advertisements for companies located in the UK and products consumed in the UK, the matches streamed were "extremely popular" with the British public, etc.
GINSBURG, "News From the EU: Where Does the Act of 'Making Available' Occur?" The Media Institute, 29 October 2012, available at http://www.mediainstitute.org/IPI/2012/102912.php. Opinion of Advocate General CRUZ VILLALÓN of 9 January 2014 in Case C-435/12, ACI Adam BV, Alpha International BV, AVC Nederland BV, BAS Computers & Componenten BV, Despec BV, Dexxon Data Media and Storage BV, Fuji Magnetics Nederland, Imation Europe BV, Maxell Benelux BV, Philips Consumer Electronics BV, Sony Benelux BV, Verbatim GmbH v. Stichting de Thuiskopie, Stichting Onderhandelingen Thuiskopie vergoeding. Opinion of the Advocate General in ACI Adam, par. 84. The peer-to-peer platform operator may be directly liable for copyright infringements but this aspect is not relevant for the purposes of this study. Opinion of the Advocate General CRUZ VILLALÓN delivered on 26 November 2013, Case C-314/12, UPC v. Constantin Film, par. 33-59. Opinion of the Advocate General JAASKINEN delivered on 13 June 2013, Case C-170/12, Pinckney, 53. In practice, legal actions against peer-to-peer operators will hold them directly liable for the infringements, rather than address the as intermediaries for third party infringements. In practice, legal actions against peer-to-peer operators will hold them directly liable for the infringements, rather than address the as intermediaries for third party infringements. « The act of communication to the public by satellite occurs solely in the Member State where, under the control and responsibility of the broadcasting organization, the programme-carrying signals are introduced into an uninterrupted chain of communication leading to the satellite and down towards the earth ». See Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society, p. 59. A Directive on collective rights management has been adopted by the European Parliament and the Council in February 2014: Directive of the European Parliament and of the Council on collective management of copyright and related rights and multiterritorial licensing of rights in musical works for online uses in the internal market. The consequences of the adoption of a country of origin principle should be examined but is out of the scope of this study. T. SHAPIRO, "Directive 2001/29/EC on copyright in the information society", in B. LINDNER & T. SHAPIRO, Copyright in the information society, Edward Elgar, 2011, 31. M. WALTER and S.VON LEWINSKI, European copyright law, Oxford University Press, 2010, 964. L. BENTLY and B. SHERMAN, Intellectual Property Law, Oxford University Press, 3 rd ed., 2009, 138. Art. 9 of the Bern Convention :(1) Authors of literary and artistic works protected by this Convention shall have the exclusive right of authorizing the reproduction of these works, in any manner or form.(2) It shall be a matter for legislation in the countries of the Union to permit the reproduction of such works in certain special cases, provided that such reproduction does not conflict with a normal exploitation of the work and does not unreasonably prejudice the legitimate interests of the author.(3) Any sound or visual recording shall be considered as a reproduction for the purposes of this Convention. 
Furthermore, in Premier League, the Court of Justice took the view that "the reproduction right extends to transient fragments of the works within the memory of a satellite decoder and on a television screen, provided that those fragments contain elements which are the expression of the authors' own intellectual creation" (CJEU 4 October 2011, joined Cases C-403/08 and C-429/08, Football Association Premier League Ltd and Others v QC Leisure and Others and Karen Murphy v Media Protection Services Ltd).
According to art. 13a of the Dutch Copyright Act (unofficial translation provided by the Ministry of Justice): "The reproduction of a literary, scientific or artistic work will not include temporary reproduction of a passing or incidental nature and forming an essential part of a technical procedure whose sole purpose is to enable a) the passing on by an intermediary through a network between third parties, or b) a lawful use of a work, and which has no independent economic significance."
The role of the exception for private use is likely to increase, even if it is subject to strict conditions and requires fair compensation to be paid.
The notion of a "digital phonorecord delivery" is defined as "each individual delivery of a phonorecord by digital transmission of a sound recording which results in a specifically identifiable reproduction by or for any transmission recipient of a phonorecord of that sound recording, regardless of whether the digital transmission is also a public performance of the sound recording or any nondramatic musical work embodied therein.
A digital phonorecord delivery does not result from a real-time, non-interactive subscription transmission of a sound recording where no reproduction of the sound recording or the musical work embodied therein is made from the inception of the transmission through to its receipt by the transmission recipient in order to make the sound recording audible" (s. 115(d) USCA).
The purpose is not to repeat the analysis of the issues; we refer to the Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society (p. 120 et s.) and Output 1.3 of this Study (see also the analysis sub Output 1.IV).
The reproduction right in the offline world is (or at least used to be) a defined act that is accomplished when the copy of the work exists (time) and at the location where the copier and the copy are present (place). After the reproduction, the copy has its own destination, independently of the person who made the copy. In the digital networked (online) world, a copy has a different existence in terms of time and space. A copy comes into existence in an instant, but it has a prolonged, continuous existence under the control of the copier (or a third party if it is transmitted) (time). Moreover, the copy can be produced at a different place from the place where the copier is when she makes the copy (e.g. cloud computing). During its life, the place where the reproduction is kept can even change.
A detailed study of the relation between the reproduction right and the distribution right is beyond the scope of this Study (cf. ToRs supra); see also the Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society, p. 28-32. A similar interpretation was suggested by J. DE BEER with regard to the making available right recently introduced in Canadian copyright law (J. DE BEER, "Copyright royalty stacking" in M. GEIST (ed.), The Copyright Pentalogy, University of Ottawa Press, 2013, p. 364). The CJEU has localised the act of distribution in the Member State where the public is targeted, not in a country of origin (see Study on the Application of Directive 2001/29/EC, p. 55, 162 et s.).
The circumstance that subject matter is accessible in several countries, perhaps even worldwide, entails that downstream reproductions can be made in as many countries, and it is difficult for the content provider to predict where a work will be downloaded or consulted online.
This is less the case for the upstream copy: the content provider makes the upload and can determine where the copy is stored. This may or may not be the country of origin. The localisation of the upstream copy does not seem to cause major difficulties in practice and could, if necessary, be solved following the same principles as discussed in this section.
Under the American compulsory licence, musical arrangements of the work are allowed to the extent necessary to conform it to the style or manner of interpretation of the performance involved (but the arrangement should not amount to a derivative work without the right holder's consent). New recordings of musical works are thus authorised under the compulsory licence, but not the reproduction and distribution of existing sound recordings. The royalty cannot be inferior to the royalty rate established for the compulsory licence (s. 115(c)(3)(E)), except if the artist/author acts as her own music publisher; in that case she may accept an inferior royalty rate, provided that the contract is entered into after the sound recording has been fixed in a tangible medium of expression in a form intended for commercial release.
It appears from the travaux préparatoires that the "purpose of article 5(1) is to exclude from the scope of the reproduction right certain acts of reproduction which are dictated by technology but which have no separate economic significance of their own. It applies notably to the online environment, but also to acts of reproduction taking place in the context of the use of a protected subject matter in off-line formats. In such cases, it is appropriate to limit the scope of the reproduction right and only protect those acts of reproduction which are of a separate economic relevance (...)" (Proposal for a European Parliament and Council Directive on the harmonization of certain aspects of copyright and related rights in the Information Society, COM(97) 628 final, p. 29).
The scope of the making available right has been examined in the Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society, together with the reproduction right. The reproduction right intervenes in several respects when a work is made available to the public: a reproduction can be found whenever a copy in the material sense (fixation of the work on any medium) is made. A distinction can be made between the "upstream" and the "downstream" reproductions:
- The "upstream" reproduction is the copy kept on the hosting server from which the work is available for transmission at the individual demand of the users.
- The temporary reproductions that enable access to a work, depending on the technological process used (e.g. cache copies; copies on transit servers).
- The "downstream" reproductions are the copies at the final user's end, at whose individual request the work is transferred.
We will examine how the different exclusive rights apply to two types of technology commonly applied for all kinds of media use: streaming and downloading.
A. European Union
104. Both the use of streaming and download technologies affect the making available right as long as a "public" has access to the works exploited (cf. Study on the Application of Directive 2001/29/EC on Copyright and Related Rights in the Information Society). In the following section we will distinguish the situations where works are made available to the public by means of streaming and by means of downloading.
It should be pointed out that an obligation to transfer coherent bundles of rights may have consequences for the right holders' affiliation with collecting societies. The effects of any such rule on the functioning of CMOs should be duly taken into account but exceed the scope of the current Study.
Licensing mechanisms in the USA
136. Whenever the exploitation of a protected work involves acts protected under s. 106 US Copyright Act, the copyright owner's prior consent is in principle required. This can be a complicated matter, especially when several rights are involved in one exploitation and these rights are in different hands. From the consulted literature it appears that rights licensing is most burdensome in the music sector. Many online music providers offer services with mixed features (both interactive and non-interactive elements), allowing online streaming and downloads (for a limited time or permanent copies). For such complex services, the service provider is most likely to have to clear the rights of reproduction, distribution and public performance for both the musical works and the sound recordings (which correspond to subject matter protected under neighbouring rights). Any recorded performance of a musical work may contain both a sound recording and a musical composition. The rights of reproduction, distribution and public performance are subject to different regimes, depending on whether the subject matter is the sound recording or the musical composition. The rights to musical works are managed both individually (mechanical licences) and collectively (public performance rights, by performance rights organisations). In addition, a compulsory licence is available for the reproduction and distribution rights for musical works; the licence for the sound recordings should be negotiated directly with the copyright owner. We will discuss some of these mechanisms hereafter.
Public performance of musical works. Blanket licence.
Copyright owners may entrust the exercise of their public performance rights to a performance rights organisation (PRO). Such PROs may issue blanket licences for their repertoire. Such a licence gives the licensee the right to perform all works in the repertoire of the PRO for a single stated fee that does not vary depending on how much music from the repertoire the licensee actually uses (US Court of Appeals for the Second Circuit, 28 September 2010, USA v ASCAP, No 09-0539-cv (L) re RealNetworks, Yahoo!).
138. Compulsory "mechanical" licence for musical works. The US Copyright Act provides a compulsory licence for the making and distributing of phonorecords of non-dramatic musical works (s. 115). This compulsory licence is subject to precise conditions/characteristics, as follows. Subject matter: a compulsory licence can be obtained with regard to non-dramatic musical works. The mechanical licence covers specific works (title per title); it does not function as a blanket licence.
As to the three-step test, it should be examined whether such online video recordings constitute an autonomous form of exploitation and whether the application of the exception affects the normal exploitation of the work, taking into account that this new type of use may have developed into an exploitation form of its own, from which the right holders are deprived on the basis of the exception. Inversely, it may be proven that there is a market failure that justifies such an exception, if it is established that the transaction costs are so important that licences cannot be concluded. Further economic analysis should be able to gather empirical data on this subject.
- Provided that the normal exploitation of the work is not undermined, the impact of the prejudice should be assessed. The author should only accept "reasonable prejudice" (from an economic point of view). An unreasonable loss of income may be rendered "reasonable" by granting a remuneration right. This calls for an economic assessment (taking into account that the holder of the reproduction right may not receive any remuneration from the licence fee paid to the holder of the right of making available).
- Considering other norms than economic ones, it should first be examined whether the exception is justified, and this implies a weighing of the author's interests against other public interests expressed in the exception. The author has a legitimate interest in controlling the exploitation of her work and being paid for this. Her rights could however be restricted in favour of other public interests, which may apply to the end-user or even the intermediary. An end-user could indeed claim protection of the freedom of information, her privacy or her property rights. Such public interest grounds could justify an exception. The situation is somewhat different when service providers function as an intermediary between the right holder and the end-user.
Apart from their right to conduct their business, there are fewer public interest grounds that commercial intermediaries can rely on to justify a restriction. It is unlikely that avoiding licence fees and transaction costs can justify restricting the authors' exclusive rights. In summary, an extension of the exception for private copies is not a satisfactory solution to territoriality problems. Several objections exist to applying such an exception indistinctly to all downstream reproductions, while it is actually possible to negotiate a licence for these acts (which is actually the case). The contractual burden that such an obligation causes for content providers does not seem to justify a restriction (with a wide scope) of the author's rights.
E. Conclusions
154. In this part, we have examined various legal constructions to deal with the different territorial impact of the making available right and the reproduction right and, more in particular, to localise the reproductions following directly from the public availability of a work or other subject matter according to the localisation criteria proposed in the previous Study. We have not found one construction that solves all issues. Moreover, we hold the opinion that more empirical (economic) data are required before a legislative initiative is taken in this domain.
In the reasoning underlying some online PVR decisions, the recording is transmitted only to the individual who requested it, therefore there is no act of making it available to the public. At the same time the individual will only use the work for private purposes, hence the reproduction that enables this private transmission is exempted as well. Based on these arguments the service provider would be free to offer its service to all individuals, without a possibility for the right holders to prohibit this use or negotiate a remuneration, even when the online PVR provider makes a profit. It should be observed that the CJEU has decided that the three-step test is met when the conditions of the exception are met; within the scope of this study we cannot elaborate on this point.
Thirdly, the right holders' ability to license their rights could otherwise be limited in order to facilitate the exploitation of works and other subject matter in several Member States. One option is to impose compulsory licences for the first download following the public availability of the work or other subject matter. In this scenario the right holder loses its exclusive right, in return for remuneration. Such a restriction should not be imposed without distinguishing between the sectors. Even if there are no legal objections to such a compulsory licence for some types of works, such a system is fairly complicated and perhaps not very useful (cf. the American experience). In order to address the territoriality issues in a useful way, the same compulsory licence should be imposed in all Member States. An equitable remuneration is due and should be determined by a competent authority (in case no agreement can be reached), either at the national level (with the risk of divergences) or at the European level (with the possibility that this authority acts as a price regulator). Similarly, an administrative system should be set up to keep track of the downloads of particular works and subject matter (to allow right holders to collect their remuneration), at the national or European level.
Empirical economic data are required to assess the feasibility and desirability of such a system. Another option, restricting the ability of right holders to exercise their rights, is to impose the collective management of the reproduction right with regard to the first reproduction (download) following directly from the public availability of a work. This system has been applied to the exercise of the cable retransmission right. It has however been found that there are insufficient similarities between the cable retransmission right (concerning a specific sector with specific actors and a specific form of secondary exploitation relating to a primary exploitation) and the download of works (all sectors, various types of actors and one exploitation). Finally, the territorial effects of the reproduction right can be attenuated to a certain extent by the modification of the exceptions. On the one hand, the exception for temporary acts of reproduction could be extended to emphasise the incidental character of the reproduction and its lack of independent economic significance vis-à-vis the making available to the public of the work or other subject matter. Such an exception would cover the upstream reproduction that technically allows the public availability for transmission and the downstream reproductions that allow the end-user to consult the work (read, view, hear, ...) without gaining control over the use of the work or other subject matter. Reproductions that do allow the end-user to control her use of the work (whenever and wherever she wants to consult it) and that grant her independence from the content provider would not be exempted under this exception. On the other hand, the exception for "private copies" could exempt certain downloads of works made available to the public. In order to maximise the effect of the exception on the territoriality of the reproductions, this exception should be further harmonised: it should be made mandatory in all Member States and the conditions of the exception should be harmonised in more detail (in particular, the role of the (commercial) intermediary whose services are used to make the private copy should be clarified). It seems however that the three-step test may raise a legal obstacle to extending this exception to exempt all reproductions in those cases where the normal exploitation of the work would be threatened in favour of content providers or intermediaries that would otherwise be subject to licence fees and transaction costs to obtain such licences.
04112699
en
[ "phys", "phys.grqc" ]
2024/03/04 16:41:24
2023
https://hal.science/hal-04112699/file/CosmolgicalConstantDerivedV1.1.pdf
Espen Gaarder Haug (email: [email protected], ORCID)
A New Way to Write The Newtonian Gravitational Equation Resolves What The Cosmological Constant Truly Is
Keywords: Cosmological constant, New metric, Newton gravity, relativistic Newton, Bagge-Phipps model
We demonstrate that there is a way to represent Newtonian gravity in a form that strongly resembles Einstein's field equation, although it remains a fundamentally different type of equation. In the non-relativistic regime, a cosmological constant must be inserted ad hoc in order to align it with observations, similar to Einstein's field equation. Interestingly, in 1917 Einstein also inserted a cosmological constant ad hoc into the Newtonian equation during his discussion on incorporating it into his own field equation. At that time, the cosmological constant was added to maintain consensus, which favoured a steady-state universe. With the discovery of cosmological redshift and the shift in consensus towards an expanding universe, Einstein abandoned the cosmological constant. Then, around 1999, the cosmological constant was reintroduced to explain observations of distant supernovae, and today it is once again a topic of great interest and significance. Nevertheless, we will demonstrate that the cosmological constant is likely an ad hoc adjustment resulting from a failure to properly account for relativistic effects in strong gravitational fields. We are able to derive the cosmological constant and show that it is linked to corrections for relativistic effects in strong gravitational fields. In our model, this quantity holds for any strong field but naturally assumes different values, indicating that it is not truly a constant. Its value is constant only for the mass under consideration; for example, for the Hubble sphere it always has the same value. Additionally, we will demonstrate how relativistic modified Newtonian theory also seems to resolve the black hole information paradox by simply removing it. This theory also leads to the conservation of spacetime. In general relativity theory, there are several significant challenges. One of them is how spacetime can change over time, transitioning from infinite curvature at the assumed Big Bang to essentially flat spacetime when the universe ends in cold death, while still maintaining conservation of energy all the way from the Big Bang to the assumed cold death of the universe. Can one really get something from nothing?
Introduction
In this paper, we explore how one of the fundamental Newtonian equations can be rewritten to closely resemble Einstein's field equation, while remaining a very different equation. By doing so, we also arrive at the Friedmann (Friedmann, "Über die Krümmung des Raumes") equation of the universe, which requires the inclusion of a cosmological constant through an ad hoc insertion when relativistic effects are not taken into account, that is, when working with standard Newtonian theory. Einstein himself introduced the cosmological constant ad hoc in both Newton's theory and his field equation, a point we will return to shortly.
Remarkably, when we apply a similar approach to a relativistic modified Newtonian model dating back to Bagge (Bagge, "Relativistic effects in the solar system") and Phipps (Phipps, "Mercury's precession according to special relativity"), we obtain a cosmological constant that is identical to Einstein's, but without the need for an ad hoc insertion. Rather, this constant arises naturally from the derivation. This demonstrates that the cosmological constant is a necessary relativistic adjustment. However, this adjustment serves a greater purpose: it represents the relativistic correction required to accurately describe strong gravitational fields, and it is relevant not only for the cosmos and the Hubble sphere but for any gravitational object; when generalised it is not a constant but a relativistic adjustment variable. Based on this finding, we argue that general relativity, or at least the metrics applied so far, may have misunderstood the nature of the universe. The cosmological constant, considered an inherent component, is in fact an ad hoc adjustment necessary for accurately describing relativistic effects in strong gravitational fields. Our approach to rewriting the Newtonian formula and the relativistic Newtonian formula yields two novel spacetime metrics. One is applicable solely to weak fields, with strong similarities to the Schwarzschild metric, which recent research indicates is only valid in weak fields, while the other metric, derived from relativistic modified Newtonian theory, is valid in both weak and strong fields. Although gravity causes curvature in space and time, the spacetime interval itself remains flat in our model. Our model predicts the conservation of spacetime curvature, and intriguingly, this appears to address Hawking's black hole information paradox. This paper presents a fresh interpretation of both general relativity and Newtonian gravity theory. It suggests that relativistic modified Newtonian theory possesses greater power and potential than previously understood. A recent exact solution and metric for Einstein's field equation suggested by Haug and Spavieri (Haug, "Mass-charge metric") brings the two methods quite close, but they still seem to differ, in that general relativity does not predict conservation of spacetime curvature, while relativistic modified Newtonian theory does.
Background on the Cosmological constant
Einstein (Einstein, "Näherungsweise Integration der Feldgleichungen der Gravitation") introduced his general relativity theory in 1915-1916 through a new field equation given by (in today's modern notation)
$$G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2}Rg_{\mu\nu} = \frac{8\pi G}{c^4}T_{\mu\nu} \tag{1}$$
where $R_{\mu\nu}$ represents the Ricci tensor, $g_{\mu\nu}$ denotes the metric tensor, $T_{\mu\nu}$ represents the stress-energy tensor, $G$ is Newton's gravitational constant and $c$ is the speed of light. In 1917, Einstein (Einstein, "Cosmological considerations in the general theory of relativity") inserted a constant ad hoc to counterbalance the effect of gravity and achieve a static universe, obtaining
$$G_{\mu\nu} - \lambda g_{\mu\nu} = R_{\mu\nu} - \frac{1}{2}Rg_{\mu\nu} - \lambda g_{\mu\nu} = \frac{8\pi G}{c^4}T_{\mu\nu} \tag{2}$$
where $\lambda$ is now known as Einstein's cosmological constant. He called this his extended field equation, as it was clearly an extension of his previous field equation.
The consensus back then was that the universe had always been and would always remain similar to what is observed now; there had been no Big Bang, and there was no expansion of the universe. Einstein inserted the constant ad hoc, but based on solid reasoning (philosophising) from Newton's gravity, cosmology and the general theory of relativity. In 1917 he defined this constant as $\lambda = \frac{1}{R^2}$, where $R$ was related to the radius of the universe. The cosmological redshift and the Hubble constant were discovered years after Einstein introduced the cosmological constant. However, it is interesting to note that if Einstein had inserted the Hubble radius into his cosmological constant equation in 1917, he would have obtained $R = R_H = \frac{c}{H_0}$. In that case, his constant would become
$$\lambda = \frac{1}{R^2} = \frac{1}{R_H^2} = \left(\frac{H_0}{c}\right)^2 \tag{3}$$
This is very similar to what is considered the cosmological constant today:
$$\Lambda = \frac{3}{R^2}\Omega_\Lambda = \frac{3}{R_H^2}\Omega_\Lambda = 3\left(\frac{H_0}{c}\right)^2\Omega_\Lambda \tag{4}$$
where $\Omega_\Lambda$ is the ratio of the energy density due to the cosmological constant to the critical density of the universe. For the critical Friedmann (Friedmann, "Über die Krümmung des Raumes") universe $\Omega_\Lambda = 1$, so Einstein's original cosmological constant corresponded to this. The Friedmann equations were first published in 1922, so we are speaking relative to what we know today, not to what Einstein could have known then. Einstein's cosmological constant thus corresponds to what today could be called a steady-state Hubble sphere universe. In Einstein's 1917 view it was not that the universe was not infinite; rather, there was a limit on the distance over which gravitational effects could act, in addition to his belief in a steady-state universe. An important point to note is that in his 1917 paper, Einstein built up his arguments significantly on the basis of Newtonian theory. Please keep this in mind as we proceed with the paper. In the same paper, Einstein inserted the cosmological constant ad hoc into the Poisson equation, which is related to Newton's equation. The Poisson equation is typically given as:
$$\nabla^2\phi = 4\pi G\rho \tag{5}$$
However, Einstein speculatively suggested including a constant related to the cosmos:
$$\nabla^2\phi - \lambda\phi = 4\pi G\rho \tag{6}$$
He then proposed the uniform solution $\phi = -\frac{4\pi G}{\lambda}\rho_0$, where $\rho_0$ represents the mean density of matter in the universe. Since the cosmological constant was inserted ad hoc, based on sound reasoning but not derived from any existing framework, it had to be calibrated to observations. Essentially, this means it is a free parameter that can be adjusted to make Einstein's field equation align with observations of the universe. When Hubble (Hubble, "Extragalactic nebulae") discovered that most astronomical objects were redshifted, the consensus gradually shifted. While there were multiple possible explanations for the observed cosmological redshift, such as tired light suggested by Zwicky (Zwicky, "On the redshift of spectral lines through interstellar space"), a consensus slowly emerged that the cosmological redshift indicated that most objects were moving away from us. This was interpreted as the universe expanding and likely having originated from a Big Bang. As the cosmological constant had been introduced to maintain a steady-state universe consistent with the general theory of relativity, Einstein accepted the expanding, dynamic universe model around 1931. The Einstein-de Sitter spacetime was consistent with a continuously expanding universe with zero cosmological constant.
Supposedly, Einstein later referred to the inclusion of the cosmological constant in his field equation as his biggest blunder. However, there is some uncertainty regarding whether Einstein actually made this statement, as it is only indirectly attributed to him through Gamow (Gamow, My World Line: An Informal Autobiography). Nevertheless, at this point the cosmological constant was either discarded or, alternatively, set to zero. In 1998, two teams of astrophysicists, one led by Saul Perlmutter (Perlmutter, "Measurements of Omega and Lambda from 42 high-redshift supernovae") and another led by Brian Schmidt and Adam Riess (Riess, "Observational evidence from supernovae for an accelerating universe and a cosmological constant"), conducted measurements and analysed distant supernovae. The results indicated that the expansion of the universe was accelerating. To reconcile these findings with general relativity, the cosmological constant had to be reintroduced and somewhat reformulated. Gravity, as we understand it locally, such as on Earth and in the solar system, is attractive and, in general relativity, is caused by the curvature of spacetime resulting from mass (energy) curving spacetime. It therefore seems to follow that a universe with mass should eventually contract, or at least cease expanding and reach a state of balance, even if there was a Big Bang. This was the prevailing view discussed in the 1980s and early 1990s. However, the interpretation of supernova data as indicating accelerated expansion challenged this perspective. To explain the data from Type Ia supernovae, a new hypothesis had to be introduced, now known as dark energy and accelerating expansion. Dark energy is currently the consensus among most experts in gravity and astrophysics, although direct evidence of its existence is still lacking, with only indirect indications. In this paper, we will demonstrate for the first time that the cosmological constant arises directly from Newton's theory when relativistic energy is considered. It is then a more general variable directly related to accounting for relativistic effects. In the special case of the Hubble sphere it takes a value almost identical to the one suggested by Einstein. The relativistic model also seems to remove the need for the dark energy hypothesis.
A New Way to Write the Newtonian Non-Relativistic Formula that Resembles Einstein's Field Equation without the Cosmological Constant
The gravitational force formula in modern times is expressed as follows:
$$F = \frac{GMm}{r^2} \tag{7}$$
This equation is closely related to the well-known equation:
$$\frac{1}{2}mv^2 - \frac{GMm}{r} = 0 \tag{8}$$
The above equation is valid for a spherical gravitational object $M$ acting on a small mass $m$, where $m \ll M$. It is applicable in the radial direction. This formula is often used to solve for the escape velocity, denoted $v_e$, and it yields the well-known escape velocity formula:
$$v_e = v = \sqrt{\frac{2GM}{r}} \tag{9}$$
We can also solve this formula for $r$ when setting $v_e = c$. This results in $r = \frac{2GM}{c^2}$, which is identical to the radius obtained for black holes from the Schwarzschild metric.
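As a quick sanity check on Eq. (9) and the radius where $v_e = c$, the following minimal Python sketch (our own illustration, not part of the original paper; the constants and the solar mass are standard reference values that we supply) computes both for the Sun:

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s

def v_escape(M, r):
    """Newtonian escape velocity, Eq. (9): v_e = sqrt(2GM/r)."""
    return math.sqrt(2 * G * M / r)

M_sun = 1.989e30                    # kg (reference value)
r_s = 2 * G * M_sun / c**2          # radius where v_e = c
print(f"r_s for the Sun: {r_s:.0f} m")                        # ~2954 m
print(f"v_e at r_s     : {v_escape(M_sun, r_s) / c:.3f} c")   # 1.000 c
```

The printed radius of roughly 3 km is the familiar Schwarzschild radius of the Sun, recovered here from purely Newtonian input.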
As early as 1784, Michell (Michell, "On the means of discovering the distance, magnitude, &c. of the fixed stars ...") calculated such a radius based on Newton's gravity (albeit describing it only verbally) and predicted the existence of dark stars from which even light could not escape. At that time, in 1784, the theory of relativity had not yet been developed, so this equation was correct based on the knowledge then available (though it naturally had its limitations, as relativity theory had not yet been discovered). Furthermore, the escape velocity in the Schwarzschild metric is identical to the one obtained in Newtonian gravity (see Augousti, "An observation on the congruence of the escape velocity in classical mechanics and general relativity in a Schwarzschild metric", for more details). Even this fact raises questions, because the derivation of the escape velocity in Newtonian gravity is certainly not valid in a strong gravitational field. The derivation begins from the kinetic energy formula $E_k \approx \frac{1}{2}mv^2$ (seen as part of Eq. (8)), which is well known to be valid only when $v \ll c$. This condition holds only when $r \gg r_s$. The formula $E_k \approx \frac{1}{2}mv^2$ is equivalent to the first term of a Taylor series expansion of Einstein's special relativistic kinetic energy, $E_k = \gamma mc^2 - mc^2$. The first term of the Taylor series expansion is indeed a good approximation only when $v \ll c$. How is it possible that the Schwarzschild metric, widely accepted by the general relativity community as fully valid in both weak and strong gravitational fields, yields exactly the same escape velocity and escape radius (where $v_e = c$) as standard non-relativistic Newtonian gravity? This is perplexing, considering that Newton's theory is known to be invalid in strong gravitational fields. No one can truly explain this. Some consider it a mere coincidence; for instance, Loinger (Loinger, "On Michell-Laplace dark body") claims, "The dark body of Michell-Laplace has nothing to do with the relativistic black hole." On the other hand, Hawking (Hawking, The Theory of Everything: The Origin and Fate of the Universe) seems to think there might be more to it than coincidence. After all, Newton's theory is recovered from the Schwarzschild metric in the weak field. However, this does not explain why the Schwarzschild metric predicts exactly the same results as Newton's theory for quantities such as the escape velocity and the event horizon, which are related to strong gravitational fields. While it is understandable that the Schwarzschild metric predicts mostly the same results as Newton's theory in the weak field when $r \gg r_s$, the fact that it also predicts similar outcomes in very strong gravitational fields raises questions. This is one of the multiple aspects we will investigate in this paper. In certain aspects of the strong field, the Schwarzschild metric predicts results that are very different from those of Newton's theory, something we will return to. The escape velocity derived from the Newtonian formula in Eq. (8) is naturally the only one valid for that equation.
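To make the weak-field condition above concrete, the following minimal Python sketch (our own illustration; the test speeds and unit mass are values we supply) compares the first Taylor term $\frac{1}{2}mv^2$ with the exact relativistic kinetic energy $\gamma mc^2 - mc^2$:

```python
import math

c = 2.998e8   # speed of light, m/s

def gamma(v):
    """Lorentz factor."""
    return 1.0 / math.sqrt(1.0 - v**2 / c**2)

m = 1.0   # kg (any test mass; the ratio below is mass-independent)
for frac in (0.01, 0.1, 0.5, 0.9):
    v = frac * c
    exact = (gamma(v) - 1.0) * m * c**2   # relativistic kinetic energy
    approx = 0.5 * m * v**2               # first Taylor term
    print(f"v = {frac:>4}c  approx/exact = {approx / exact:.4f}")
```

At 1% of the speed of light the two agree to better than 0.01%, while at 0.9c the Newtonian term captures only about a third of the kinetic energy, which is exactly why the Newtonian derivation cannot be trusted near an event horizon.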
Now we can substitute this escape velocity back into Eq. (8) and obtain:
$$\begin{aligned}
\frac{1}{2}mv_e^2 - \frac{GMm}{r} &= 0\\
\frac{2GM}{r} &= \frac{2GM}{r}\\
1 - \frac{2GM}{c^2 r} &= 1 - \frac{2GM}{c^2 r}\\
1 - \left(1 - \frac{2GM}{c^2 r}\right) &= \frac{2GE}{c^4 r}\\
1 - \left(1 - \frac{2GM}{c^2 r}\right) &= \frac{8\pi G}{c^4}\,\frac{E}{\frac{4}{3}\pi r^3}\,\frac{r^2}{3}\\
\frac{3}{r^2} - \frac{3}{r^2}\left(1 - \frac{2GM}{c^2 r}\right) &= \frac{8\pi G}{c^4}\,\frac{E}{\frac{4}{3}\pi r^3}
\end{aligned} \tag{10}$$
The term $\frac{E}{\frac{4}{3}\pi r^3}$ is the energy density, which we will denote $\rho_E$, so that we can write:
$$\frac{3}{r^2} - \frac{3}{r^2}\left(1 - \frac{2GM}{c^2 r}\right) = \frac{8\pi G}{c^4}\rho_E \tag{11}$$
The Gauss curvature of a normal sphere (2-sphere) is $\frac{1}{r^2}$, and the Ricci scalar curvature of a 2-sphere is $\frac{2}{r^2}$. Therefore, we propose calling $\frac{3}{r^2}$ the curvature, or one can naturally refer to it as three times the Gauss curvature or $\frac{3}{2}$ times the Ricci scalar curvature. The specific name we assign to it is not crucial. We will denote it $R_i = \frac{3}{r^2}$, which simplifies the notation without changing the equation. Additionally, we use the symbol $g_m$ to represent what we will call the metric component, $1 - \frac{2GM}{c^2 r}$. Although this metric component is not the complete spacetime metric associated with the Newton metric (which we will discuss later), it plays a central role in the metric. It is worth noting that this metric component is identical to the term multiplied by $c^2 dt^2$ in the Schwarzschild metric: $ds^2 = \left(1 - \frac{2GM}{c^2 r}\right)c^2 dt^2 - \left(1 - \frac{2GM}{c^2 r}\right)^{-1}dr^2 - r^2 g_\Omega$. In other words, the Newton metric component is equivalent to the $g_{00}$ tensor component in the Schwarzschild metric. We believe this goes beyond mere coincidence, which we will explore further later on. By replacing the metric component $1 - \frac{2GM}{c^2 r}$ with the symbol $g_m$ and $\frac{3}{r^2}$ with $R_i$ in Equation (11), we obtain:
$$R_i - R_i g_m = \frac{8\pi G}{c^4}\rho_E \tag{12}$$
This expression bears a striking resemblance to Einstein's field equation, despite being mathematically different. It essentially represents a reformulated version of the linear Newtonian radial equation we started with. The original Einstein field equation without the cosmological constant can be written as $R_{\mu\nu} - \frac{1}{2}Rg_{\mu\nu} = \frac{8\pi G}{c^4}T_{\mu\nu}$. The Einstein field equation consists of tensors and in reality represents 16 equations. The Newtonian equation presented here, by contrast, is a simple equation that requires only high-school mathematics to understand and does not involve tensors. However, we find it intriguing that the Newtonian gravity equation can be expressed in this manner. The concept of energy density per unit volume that we introduced shares similarities with the stress-energy tensor in Einstein's field equation. Both involve energy per unit volume, although the stress-energy tensor in Einstein's field equation can also contain other quantities, such as pressure and shear stress. The metric component in the Newtonian equation, $g_m = 1 - \frac{2GM}{c^2 r}$, exhibits strong resemblances to the Schwarzschild metric derived from Einstein's field equation, as discussed earlier and explored further below.
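Because Eq. (12) is an algebraic rewriting rather than new physics, both sides should agree identically for any mass and radius; the following short Python sketch (our own numerical check, with standard constant values and arbitrary sample masses supplied by us) verifies this:

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s

def lhs(M, r):
    """Left-hand side of Eq. (12): R_i - R_i * g_m."""
    R_i = 3 / r**2
    g_m = 1 - 2 * G * M / (c**2 * r)
    return R_i * (1 - g_m)

def rhs(M, r):
    """Right-hand side of Eq. (12): (8*pi*G/c^4) * rho_E, with E = M c^2."""
    rho_E = M * c**2 / ((4 / 3) * math.pi * r**3)
    return 8 * math.pi * G / c**4 * rho_E

# Both sides reduce to 6GM/(c^2 r^3), so they agree for any M and r:
for M, r in [(1.989e30, 1.0e9), (5.97e24, 6.371e6)]:
    print(f"{lhs(M, r):.6e}  {rhs(M, r):.6e}")
```

Both columns print the same value, confirming that Eq. (12) simply repackages the Newtonian radial equation.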
Next, let us consider the relativistic modification of Newton's theory. Independently, Bagge [START_REF] Bagge | Relativistic e↵ects in the solar system[END_REF] and Phipps [START_REF] Phipps | Mercury's precession according to special relativity[END_REF] have suggested incorporating relativity theory into Newton's gravity by simply rewriting the Newtonian gravitational force formula as:

$$F = \frac{GM\gamma m}{r^2} \qquad (13)$$

Here, $\gamma$ represents the Lorentz factor, given by $\gamma = 1/\sqrt{1 - v^2/c^2}$. Consequently, the Newtonian radial equation related to escape velocity must be:

$$\gamma mc^2 - mc^2 - \frac{GM\gamma m}{r} = 0 \qquad (14)$$

Solving for $v$, we obtain the escape velocity as (see [START_REF] Haug | A new full relativistic escape velocity and a new Hubble related equation for the universe[END_REF]):

$$v_e = \sqrt{\frac{2GM}{r} - \frac{G^2M^2}{c^2 r^2}} \qquad (15)$$

To find the radius where the escape velocity is equal to the speed of light ($c$), we solve the following equation for $r$:

$$v_e = c = \sqrt{\frac{2GM}{r} - \frac{G^2M^2}{c^2 r^2}} \qquad (16)$$

This gives us:

$$r = \frac{GM}{c^2} \qquad (17)$$

This radius is half of the Schwarzschild radius, which will be significant later on. To distinguish it from the Schwarzschild radius $r_s = 2GM/c^2$, we will denote it $r_c$. We can rewrite Equation 14 by substituting the escape velocity and rearranging it as follows:

$$1 - \sqrt{1 - \frac{v_e^2}{c^2}} - \frac{GM}{c^2 r} = 0$$
$$v_e^2 = \frac{2GM}{r} - \frac{G^2M^2}{c^2 r^2}$$
$$1 - \frac{2GM}{c^2 r} + \frac{G^2M^2}{c^4 r^2} = 1 - \frac{v_e^2}{c^2}$$
$$\frac{G^2M^2}{c^4 r^2} + 1 - \left(1 - \frac{2GM}{c^2 r} + \frac{G^2M^2}{c^4 r^2}\right) = \frac{2GM}{c^2 r}$$
$$\frac{3G^2M^2}{c^4 r^4} + \frac{3}{r^2} - \frac{3}{r^2}\left(1 - \frac{2GM}{c^2 r} + \frac{G^2M^2}{c^4 r^2}\right) = \frac{8\pi G}{c^4}\,\frac{E}{\frac{4}{3}\pi r^3}$$
$$\frac{3G^2M^2}{c^4 r^4} + R_i - R_i g_m = \frac{8\pi G}{c^4}\rho_E \qquad (18)$$

where $g_m$ now denotes the relativistic metric component $1 - \frac{2GM}{c^2 r} + \frac{G^2M^2}{c^4 r^2}$. We will now introduce the term $\Lambda_N = \frac{3G^2M^2}{c^4 r^4}$. At this stage, the factor $\Lambda_N$ does not resemble the cosmological constant, and it is not even a constant itself, except for the symbol we use. However, we will soon discover something very interesting about how it is directly linked to the cosmological constant. Notably, the term $\Lambda_N = \frac{3G^2M^2}{c^4 r^4}$ was not present when we derived a similar equation from non-relativistic Newtonian principles. We can now express the relativistic Newtonian radial equation as follows:

$$\Lambda_N + R_i - R_i g_m = \frac{8\pi G}{c^4}\rho_E \qquad (19)$$

Since $\Lambda_N = \frac{3G^2M^2}{c^4 r^4}$ appears as an additional term when we rewrite the relativistic Newton formula in this form compared to the non-relativistic case, it is correct to call $\Lambda_N$ a relativistic correction factor, as it adjusts the formula for relativistic effects. On the surface this strongly resembles Einstein's field equation with the cosmological constant inserted ad hoc. In the special case when we apply Eq. 19 to the cosmos, specifically the Hubble sphere, we consider $r = r_H$ as the Hubble radius, particularly when dealing with observations (photons) coming from close to that radius. Therefore, in this specific case, the Hubble radius in this model is given by $r = r_H = \frac{GM_u}{c^2}$, where $M_u = \frac{c^3}{GH_0}$ is the mass equivalent of all mass and energy in the Hubble sphere. We obtain this mass by solving the formula $r_c = \frac{GM}{c^2}$ for $M$, which gives $M = \frac{c^2 r_c}{G}$. When $r_c = r_H = \frac{c}{H_0}$, we have $M = \frac{c^2 r_H}{G} = \frac{c^3}{GH_0}$. This means that in the special case of the Hubble sphere, the relativistic variable $\Lambda_N$ becomes:

$$\Lambda_N = \frac{3G^2M_u^2}{c^4 r_H^4} = \frac{3G^2M_u^2}{c^4\left(\frac{GM_u}{c^2}\right)^4} = \frac{3}{r_c^2} = \frac{3}{r_H^2} = 3\left(\frac{H_0}{c}\right)^2 \approx 1.72 \times 10^{-52}\ \mathrm{m}^{-2} \qquad (20)$$

Remember that in his 1917 paper, when Einstein first introduced the cosmological constant, he defined it as $\lambda = 1/r^2$, where $r$ represented the radius of the cosmos. Therefore, what we have derived here bears a great similarity to that concept. The value $\Lambda_N = 3\left(\frac{H_0}{c}\right)^2 \approx 1.72 \times 10^{-52}\ \mathrm{m}^{-2}$ is a special case of the Newton relativistic variable $\Lambda_N$ that remains constant when applied to the Hubble sphere. Interestingly, it is identical to the modern interpretation of Einstein's cosmological constant: $\Lambda = 3\left(\frac{H_0}{c}\right)^2\Omega_\Lambda$, where $\Omega_\Lambda$ is set to 1.
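Both claims are easy to verify numerically. The sketch below is my own check with standard reference constants and an assumed Hubble value of roughly 70 km/s/Mpc (neither the constants nor $H_0$ are taken from this paper):

    # v_e = sqrt(2GM/r - G^2 M^2/(c^2 r^2)) equals c at r_c = GM/c^2,
    # and Lambda_N = 3 (H0/c)^2 is of the order 1e-52 per square meter.
    import math

    G, c = 6.674e-11, 2.998e8
    M = 1.989e30                 # one solar mass, kg (any mass works)
    H0 = 2.27e-18                # assumed Hubble constant, s^-1 (~70 km/s/Mpc)

    r_c = G * M / c**2
    v_e = math.sqrt(2*G*M/r_c - (G*M)**2 / (c**2 * r_c**2))
    print(v_e / c)               # -> 1.0

    print(3 * (H0 / c)**2)       # -> ~1.7e-52 m^-2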
So, for the Hubble sphere, we have:

$$\Lambda_N + R_i - R_i g_m = \frac{8\pi G}{c^4}\rho_E$$
$$\frac{3}{r_H^2} + R_i - R_i g_m = \frac{8\pi G}{c^4}\rho_E$$
$$3\left(\frac{H_0}{c}\right)^2 + R_i - R_i g_m = \frac{8\pi G}{c^4}\rho_E \qquad (21)$$

Furthermore, in the special case where $r = r_c = \frac{GM}{c^2}$, our metric component becomes:

$$g_m = 1 - \frac{2GM}{c^2 r} + \frac{G^2M^2}{c^4 r^2}$$
$$g_m = 1 - \frac{2GM}{c^2\frac{GM}{c^2}} + \frac{G^2M^2}{c^4\left(\frac{GM}{c^2}\right)^2} = 1 - 2 + 1 = 0 \qquad (22)$$

Thus $R_i g_m = 0$, and we end up with:

$$3\left(\frac{H_0}{c}\right)^2 + R_i = \frac{8\pi G}{c^4}\rho_E$$
$$\Lambda_N + \frac{3}{r_H^2} = \frac{8\pi G}{c^4}\rho_E$$
$$\frac{H_0^2}{c^2} = \frac{8\pi G}{3c^4}\rho_E - \frac{\Lambda_N}{3}$$
$$H_0^2 = \frac{8\pi G}{3c^2}\rho_E - \frac{\Lambda_N c^2}{3}$$
$$H_0^2 = \frac{8\pi G\rho}{3} - \frac{\Lambda_N c^2}{3} \qquad (23)$$

Here, $\rho$ represents the mass density (equivalent mass, $\rho = \rho_E/c^2$). Remarkably, this equation is identical to the Friedmann equation derived from Einstein's general relativity theory. However, there is a significant difference in how it was derived. In Einstein's case, the cosmological constant is inserted ad hoc into the field equation. On the other hand, when using relativistic Newtonian theory, we did not need to insert any constant ad hoc. We have demonstrated that the so-called cosmological constant is simply a relativistic correction that arises directly from relativistic Newtonian theory. Be aware that the general relativity community long ago stopped considering relativistic Newton theory, before properly investigating what predictions it could lead to; to do that requires some luck together with some skill, and a lot of hard work. However, as the cosmological constant has not previously been discovered as a special case of a relativistic variable, it suggests that in general relativity theory one may be neglecting an important relativistic component when not dealing with the cosmos, naturally unknowingly so. This implies that fully relativistic metrics may not be obtained when solving the field equation without such an adjustment. Whether a similar adjustment can be directly incorporated into general relativity theory is unclear, and it raises the question of whether it is even necessary. Assume that we have mistakenly calibrated our equation to Newtonian gravity without considering relativistic effects, as we showed earlier:

$$\frac{3}{r^2} - \frac{3}{r^2}\left(1 - \frac{2GM}{c^2 r}\right) = \frac{8\pi G}{c^4}\,\frac{E}{\frac{4}{3}\pi r^3}$$
$$R_i - R_i g_m = \frac{8\pi G}{c^4}\,\frac{E}{\frac{4}{3}\pi r^3} \qquad (24)$$

If we are unaware of the relativistic solution, similar to Einstein's approach when working with the cosmos, we can attempt to solve this equation by ad hoc inserting the well-known cosmological constant $\Lambda = 3\left(\frac{H_0}{c}\right)^2$ into our non-relativistic Newtonian equation. This gives:

$$\Lambda + R_i - R_i g_m = \frac{8\pi G}{c^4}\,\frac{E}{\frac{4}{3}\pi r^3}$$
$$3\left(\frac{H_0}{c}\right)^2 + \frac{3}{r^2} - \frac{3}{r^2}\left(1 - \frac{2GM}{c^2 r}\right) = \frac{8\pi G}{c^4}\,\frac{E}{\frac{4}{3}\pi r^3} \qquad (25)$$

The metric component in this case remains non-relativistic since we are working within the framework of non-relativistic Newtonian physics and have not adjusted the metric for the cosmological constant. Instead, we have inserted the cosmological constant ad hoc, which is not an ideal approach. However, since we assume that we are not familiar with or working within relativistic Newtonian theory (which, surprisingly, applies within the general relativity community), the Hubble radius corresponds to $r_H = r_s = \frac{c}{H_0} = \frac{2GM_c}{c^2}$, where the mass $M_c$ represents the critical mass in the Friedmann universe: $M_c = \frac{c^3}{2GH_0}$. This differs from the mass in relativistic Newtonian theory by a factor of $\frac{1}{2}$. At $r = r_H$ the non-relativistic metric component $g_m = 1 - \frac{2GM_c}{c^2 r_H}$ vanishes, and therefore we obtain:

$$3\left(\frac{H_0}{c}\right)^2 + \frac{3}{r_H^2} = \frac{8\pi G}{c^4}\rho_E$$
$$\frac{3}{r_H^2} = \frac{8\pi G}{c^4}\rho_E - \Lambda$$
$$\frac{H_0^2}{c^2} = \frac{8\pi G}{3c^4}\rho_E - \frac{\Lambda}{3}$$
$$H_0^2 = \frac{8\pi G\rho}{3} - \frac{\Lambda c^2}{3} \qquad (26)$$

So, we were once again able to derive the Friedmann equation with a cosmological constant.
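The chain of substitutions above can be spot-checked numerically. This sketch (my own check, with an assumed $H_0$) builds $M_u$, $r_H$ and $\rho$ from the paper's definitions and confirms that the Friedmann-type form collapses to $H_0^2 = 4\pi G\rho/3$:

    # With M_u = c^3/(G H0), r_H = G M_u/c^2 and rho = M_u/((4/3) pi r_H^3),
    # H0^2 = 8 pi G rho / 3 - Lambda_N c^2 / 3 reduces to H0^2 = 4 pi G rho / 3.
    import math

    G, c = 6.674e-11, 2.998e8
    H0 = 2.27e-18                               # assumed value, s^-1

    M_u = c**3 / (G * H0)
    r_H = G * M_u / c**2                        # equals c/H0
    rho = M_u / ((4/3) * math.pi * r_H**3)
    Lambda_N = 3 * (H0 / c)**2

    print(H0**2)
    print(8*math.pi*G*rho/3 - Lambda_N*c**2/3)  # same number
    print(4*math.pi*G*rho/3)                    # same number again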
However, we achieved this by starting from an incorrect equation that failed to fully and properly consider relativistic effects. Nevertheless, we resolved this issue by inserting ad hoc what we referred to as a cosmological constant. This situation seems to resemble what may have actually happened in Einstein's field equation. It is well known that the Friedmann equation can be obtained from Newtonian physics, but only by inserting a cosmological constant ad hoc. The significant breakthrough here is that such an insertion is not necessary in relativistic Newtonian theory. This finding is crucial as it suggests that, when working with the cosmos and beyond, there is likely something missing in the general theory of relativity, specifically the relativistic adjustment to Newton's theory. This implies that, for instance, the Schwarzschild metric may not be a fully relativistic metric suitable for strong gravitational fields. Schwarzschild derived this metric from Einstein's field equation even before Einstein inserted the cosmological constant ad hoc. Furthermore, to this day it remains unclear what exactly the cosmological constant represents and why it needs to be inserted ad hoc. In relativistic Newtonian theory, we observe that it is simply a relativistic variable that should always be incorporated when dealing with strong gravitational fields, not only for the cosmos, but in particular for predicting and interpreting such things as black holes, which we will soon come back to.

Consistency with previous relativistic Newtonian model of cosmos

In a recent previous paper ([START_REF] Haug | A new full relativistic escape velocity and a new Hubble related equation for the universe[END_REF]), we derived a cosmological model from the same modified relativistic Newton theory, which was given by:

$$H_0^2 = \frac{4\pi G\rho_M}{3} \qquad (27)$$

In that paper, we did not demonstrate how the cosmological constant could arise from relativistic Newton theory. However, in the derivations above we obtained $H_0^2 = \frac{8\pi G\rho_M}{3} - \frac{\Lambda c^2}{3}$. Although they may appear different, they are actually fully consistent and equivalent, as we will now demonstrate. The Newtonian relativistic variable $\Lambda_N$ for the Hubble sphere is given by (as we demonstrated in the previous section):

$$\Lambda_N = \frac{3H_0^2}{c^2} \qquad (28)$$

Since the mass of the universe in the relativistic Newton model is expressed as $M_u = \frac{c^3}{GH_0}$, we have $H_0 = \frac{c^3}{GM_u}$. Substituting this expression for $H_0$ back into the equation above, we obtain:

$$\Lambda_N = 3\,\frac{1}{G^2M_u^2}\,\frac{c^6}{c^2} = \frac{3c^4}{G^2M_u^2}$$
$$\Lambda_N = \frac{4\pi G}{c^2}\,\frac{M_u}{\frac{4}{3}\pi\frac{G^3M_u^3}{c^6}}$$
$$\Lambda_N = \frac{4\pi G}{c^2}\,\frac{M_u}{\frac{4}{3}\pi R_H^3} \qquad (29)$$

Now substituting this expression back into $H_0^2 = \frac{8\pi G\rho}{3} - \frac{\Lambda c^2}{3}$, we obtain:

$$H_0^2 = \frac{4\pi G\rho}{3} \qquad (30)$$

This result is identical to the finding in our previous paper, even though in this paper we have gone much further and demonstrated for the first time the true nature of the cosmological constant. It is a relativistic variable that, in the special case of the Hubble sphere, is indeed equivalent to the cosmological constant. Additionally, our cosmological equation 30 is considerably simpler than the Friedmann equation.
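The equivalence of the two forms of $\Lambda_N$ used in Eq. 29 can also be confirmed symbolically (a minimal sympy sketch of mine, not code from the paper):

    # 3 c^4 / (G^2 M_u^2) equals (4 pi G / c^2) * M_u / ((4/3) pi R_H^3)
    # when R_H = G M_u / c^2.
    import sympy as sp

    G, c, M = sp.symbols('G c M', positive=True)
    R_H = G * M / c**2
    lhs = 3 * c**4 / (G**2 * M**2)
    rhs = 4*sp.pi*G/c**2 * M / (sp.Rational(4, 3)*sp.pi*R_H**3)
    print(sp.simplify(lhs - rhs))   # -> 0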
Finding the metric component

In the framework of relativistic Newton theory, we are presented with the equation:

$$\Lambda_N + R_i - R_i g_m = \frac{8\pi G}{c^4}\rho_E \qquad (31)$$

To determine the metric component (assuming it is unknown), we can simply solve this equation for $g_m$, yielding:

$$g_m = \frac{\Lambda_N + R_i - \frac{8\pi G}{c^4}\rho_E}{R_i}$$
$$g_m = \frac{\frac{3G^2M^2}{c^4 r^4} + \frac{3}{r^2} - \frac{8\pi G}{c^4}\,\frac{Mc^2}{\frac{4}{3}\pi r^3}}{\frac{3}{r^2}}$$
$$g_m = \frac{\frac{3G^2M^2}{c^4 r^4} + \frac{3}{r^2} - \frac{3}{r^2}\,\frac{2GM}{c^2 r}}{\frac{3}{r^2}}$$
$$g_m = 1 - \frac{2GM}{c^2 r} + \frac{G^2M^2}{c^4 r^2} \qquad (32)$$

If we mistakenly failed to recognize that $\Lambda_N$ is a relativistic adjustment factor and instead assumed it to be solely a cosmological constant relevant to cosmological observations, we would likely neglect its inclusion when deriving the metric. As a result, we would obtain the following metric component:

$$g_m = \frac{R_i - \frac{8\pi G}{c^4}\rho_E}{R_i} = \frac{\frac{3}{r^2} - \frac{3}{r^2}\,\frac{2GM}{c^2 r}}{\frac{3}{r^2}} = 1 - \frac{2GM}{c^2 r} \qquad (33)$$

This is identical to the $g_{00}$ component in the Schwarzschild metric:

$$g_{\mu\nu} = \begin{pmatrix} 1 - \frac{2GM}{rc^2} & 0 & 0 & 0 \\ 0 & -\left(1 - \frac{2GM}{rc^2}\right)^{-1} & 0 & 0 \\ 0 & 0 & -r^2 & 0 \\ 0 & 0 & 0 & -r^2\sin^2\theta \end{pmatrix} \qquad (34)$$

This could potentially imply that when deriving metrics from the Einstein field equation, if one omits the cosmological constant and is unaware that it should possibly be replaced with a relativistic variable, the resulting metrics may not be fully relativistic. Consequently, one might erroneously believe that the Schwarzschild metric is relativistic and applicable even in the case of strong gravitational fields, when in reality it likely is not. The Schwarzschild metric remains a highly accurate approximation for predictions in weak gravitational fields. This also clarifies why the Schwarzschild metric yields exactly the same escape velocity and horizon as plain Newtonian theory: the Schwarzschild metric is not genuinely relativistic; it is a weak field approximation. What is often misinterpreted as relativistic effects in general relativity theory is the assumption of curved spacetime, which will be discussed in the following section. For example, Haug and Spavieri [START_REF] Haug | Micro black hole candidates and the Planck scale: Schwarzschild micro black holes can only match a few properties of the planck scale, while a reissner-nordström and kerr micro black hole matches all the properties of the planck scale[END_REF] have recently demonstrated that no Schwarzschild mass candidate for micro black holes can satisfy more than a few properties of the Planck scale, whereas relativistic Newtonian theory can satisfy all of them. Interestingly, the extremal solution to the Reissner-Nordström [START_REF] Reissner | Über die eigengravitation des elektrischen feldes nach der einsteinschen theorie[END_REF][START_REF] Nordström | On the energy of the gravitation field in Einstein's theory[END_REF] metric and the extremal solution to the Kerr [START_REF] Kerr | Gravitational field of a spinning mass as an example of algebraically special metrics[END_REF] metric can also achieve the same, and the $g_{00}$ component in the extremal case is mathematically equivalent to the relativistic Newtonian metric component we have just derived.
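Solving Eq. 31 for the metric component, as done above, can be reproduced with a short symbolic sketch (sympy, my own verification):

    # g_m = (Lambda_N + R_i - (8 pi G/c^4) rho_E) / R_i gives the relativistic
    # component (1 - GM/(c^2 r))^2; dropping Lambda_N gives the Schwarzschild
    # g_00 component 1 - 2GM/(c^2 r).
    import sympy as sp

    G, c, M, r = sp.symbols('G c M r', positive=True)
    R_i = 3 / r**2
    rho_E = M*c**2 / (sp.Rational(4, 3)*sp.pi*r**3)
    rhs = 8*sp.pi*G/c**4 * rho_E
    Lambda_N = 3*G**2*M**2 / (c**4 * r**4)

    print(sp.expand((Lambda_N + R_i - rhs) / R_i))  # 1 - 2GM/(c^2 r) + G^2M^2/(c^4 r^2)
    print(sp.expand((R_i - rhs) / R_i))             # 1 - 2GM/(c^2 r)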
5 Curved or flat spacetime? But always curved space and time!

In the relativistic Newton case we can also derive the spacetime metric by rewriting the relativistic radial equation:

$$\gamma mc^2 - mc^2 - \frac{GM\gamma m}{r} = 0$$
$$1 - \sqrt{1 - \frac{v_e^2}{c^2}} - \frac{GM}{c^2 r} = 0$$
$$1 - \frac{GM}{c^2 r} = \sqrt{1 - \frac{v_e^2}{c^2}}$$
$$\left(1 - \frac{GM}{c^2 r}\right)^2 - \left(\sqrt{1 - \frac{1}{c^2}\left(\frac{2GM}{r} - \frac{G^2M^2}{c^2 r^2}\right)}\right)^2 = 0$$
$$\left(1 - \frac{2GM}{c^2 r} + \frac{G^2M^2}{c^4 r^2}\right) - \left(1 - \frac{2GM}{c^2 r} + \frac{G^2M^2}{c^4 r^2}\right) = 0 \qquad (35)$$

Next we multiply by $dr^2$ on both sides, and get:

$$\left(1 - \frac{2GM}{c^2 r} + \frac{G^2M^2}{c^4 r^2}\right)dr^2 - \left(1 - \frac{2GM}{c^2 r} + \frac{G^2M^2}{c^4 r^2}\right)dr^2 = 0 \qquad (36)$$

where $dr$ is the small distance as measured far away from the gravitational field. The time it takes to cross this small distance is $dt = dr/c$, so $c^2 dt^2 = dr^2$, and we can rewrite Equation 36 as:

$$\left(1 - \frac{2GM}{c^2 r} + \frac{G^2M^2}{c^4 r^2}\right)c^2 dt^2 - \left(1 - \frac{2GM}{c^2 r} + \frac{G^2M^2}{c^4 r^2}\right)dr^2 = 0 \qquad (37)$$

This is the Newtonian spacetime metric (in the radial direction only) when taking relativistic effects into account. The horizon in this metric is given by $r_c = \frac{GM}{c^2}$. There is no singularity at the horizon, in strong contrast to, for example, the Schwarzschild metric, where there is a singularity at $r_s = \frac{2GM}{c^2}$. In our metric, time and space are curved, but the spacetime interval is always flat and zero. This means that there is conservation of spacetime, unlike in the metrics known from general relativity theory, where spacetime varies depending on our location relative to the gravitational field. In general relativity theory there is also something mystical and, in our opinion, not very logical. According to general relativity, the predicted Big Bang marked the beginning of the universe with infinite spacetime curvature and a singularity with infinite energy density. Then the Big Bang occurred and, due to the assumed expansion of the universe, and even acceleration of the expansion, there will be a predicted cold death of the universe where all energy and mass are spread out over enormously large distances, resulting in basically flat spacetime. Nevertheless, in our theory, like the standard model, we assume conservation of energy. How can it be that the spacetime curvature changes dramatically during the lifetime of the universe without a change in the total energy of the universe? In our theory there is conservation of spacetime curvature, which seems more consistent with the conservation of energy. This should naturally be discussed and investigated further.
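The claim that the interval in Eq. 37 vanishes identically reduces to a one-line identity, which this sketch (sympy, my own check) confirms:

    # 1 - v_e^2/c^2 = (1 - GM/(c^2 r))^2 for the relativistic escape velocity,
    # so the two coefficients in Eq. (37) are equal and the radial spacetime
    # interval is identically zero.
    import sympy as sp

    G, c, M, r = sp.symbols('G c M r', positive=True)
    v_e2 = 2*G*M/r - G**2*M**2/(c**2*r**2)
    print(sp.simplify(1 - v_e2/c**2 - (1 - G*M/(c**2*r))**2))  # -> 0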
To see the closer connection between the Schwarzschild metric and Newton theory, let us look at Newton without relativistic effects. When not taking relativistic effects into account we have:

$$\frac{1}{2}mv_e^2 - \frac{GMm}{r} = 0, \quad \text{i.e., with } v_e^2 = \frac{2GM}{r}: \quad \frac{2GM}{r} - \frac{2GM}{r} = 0 \qquad (38)$$

Next we divide by $c^2$ on both sides, add and subtract one, and get:

$$1 - 1 + \frac{2GM}{c^2 r} - \frac{2GM}{c^2 r} = 0$$
$$\left(1 - \frac{2GM}{c^2 r}\right) - \left(1 - \frac{2GM}{c^2 r}\right) = 0 \qquad (39)$$

Some will possibly protest here and claim that we cannot just do this and bring another constant into the formula that was not there before. We claim the speed of light is already there, not by assumption, but by calibration in $G$ and also in $M$; this is discussed in Haug [START_REF] Haug | Demonstration that Newtonian gravity moves at the speed of light and not instantaneously (infinite speed) as thought![END_REF][START_REF] Haug | Progress in the composite view of the Newton gravitational constant and its link to the Planck scale[END_REF]. Next we multiply by $dr^2$ on both sides; this gives:

$$\left(1 - \frac{2GM}{c^2 r}\right)dr^2 - \left(1 - \frac{2GM}{c^2 r}\right)dr^2 = 0 \qquad (40)$$

where $dr$ represents a small change in the length $r$ as measured far away from the gravitational field. Furthermore, it will take the time $dt = \frac{dr}{c}$ for a light signal to travel the short distance $dr$ (think of a photon light clock). This implies that $c^2 dt^2 = dr^2$. Therefore, we can rewrite the equation above as:

$$\left(1 - \frac{2GM}{c^2 r}\right)c^2 dt^2 - \left(1 - \frac{2GM}{c^2 r}\right)dr^2 = 0 \qquad (41)$$

Since the escape velocity is $v_e = \sqrt{\frac{2GM}{r}}$, this can also be written as:

$$\left(1 - \frac{v_e^2}{c^2}\right)c^2 dt^2 - \left(1 - \frac{v_e^2}{c^2}\right)dr^2 = 0 \qquad (42)$$

This equation is identical to what we obtain in Minkowski spacetime when the causal signal moves at the speed of light (please refer to our derivation related to Minkowski spacetime further down). However, the equation derived from Newtonian theory is not relativistic, so it is only a very good approximation when $v_e \ll c$, which corresponds to $r \gg r_s = \frac{2GM}{c^2}$. We will refer to this as the Newtonian radial spacetime metric, and we claim that it represents a geometrical way to express Eq. 8. This metric closely resembles the Schwarzschild metric from Einstein's general relativity theory, which is given as:

$$ds^2 = \left(1 - \frac{2GM}{c^2 r}\right)c^2 dt^2 - \left(1 - \frac{2GM}{c^2 r}\right)^{-1}dr^2 - r^2 g_\Omega \qquad (43)$$

which, when working only in the radial direction, can be simplified to:

$$ds^2 = \left(1 - \frac{2GM}{c^2 r}\right)c^2 dt^2 - \left(1 - \frac{2GM}{c^2 r}\right)^{-1}dr^2 \qquad (44)$$

It is important to understand that the Schwarzschild solution was published in 1916 by Schwarzschild [START_REF] Schwarzschild | über das gravitationsfeld einer kugel aus inkompressibler flussigkeit nach der einsteinschen theorie[END_REF], even one year before Einstein suggested adding a cosmological constant to his field equation. As we saw in the last section, this is likely a reason why the Schwarzschild solution may not truly be relativistic. It would not be enough to add the cosmological constant and then derive a Schwarzschild-type metric; one would also have to understand that the cosmological constant is a special case of a relativistic variable $\Lambda_N$. The Schwarzschild spacetime metric for the radial direction closely resembles what we called the Newtonian radial spacetime metric. However, there is a major difference: the term in front of $dr^2$ is raised to the power of $-1$ in the Schwarzschild metric and to the power of $+1$ in the Newtonian radial metric. This means that time is affected identically by gravitational time dilation in the two metrics, and they will also predict the same gravitational redshift, but distances (in the length transformation) are expanding in our metric, while they are contracting in the Schwarzschild metric, closer to the gravitational field. One must be careful here not to mistake distance expansion for length contraction or expansion. Even in the Minkowski metric, there is distance expansion (the Lorentz length transformation) and length contraction derived from the Lorentz transformation, just like in our Newtonian metric. However, in the Schwarzschild metric there is distance contraction in addition to length contraction. This means that our new Newtonian metric always predicts flat spacetime, while the Schwarzschild metric predicts curved spacetime. Bear in mind that spacetime is defined via the difference between the time and space intervals. In the Newton metric, time and space curve in the same direction, and since the spacetime interval is defined as the time interval minus the space interval, the spacetime interval will be flat in this Newton metric. Furthermore, our metric will not have any singularity before $r = 0$, while the Schwarzschild metric has singularities at $r = r_s = \frac{2GM}{c^2}$ and at $r = 0$. We will discuss this further later on.
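The sign of the exponent is the whole difference between the two radial metrics, which a few numbers make concrete (a small sketch of mine; $x = r/r_s$ is my notation):

    # dr^2 coefficients: the Newtonian radial metric uses (1 - r_s/r), the
    # Schwarzschild metric its inverse; they agree when r >> r_s and part
    # ways near the horizon.
    for x in [1000, 10, 2, 1.1]:          # x = r / r_s
        f = 1 - 1/x
        print(x, f, 1/f)                   # Newtonian vs Schwarzschild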
Figure 1 shows the time interval as we move from outside the black hole in to the horizon of the black hole. Time is clearly curved by the gravitational field, as space will also be. Figure 2 shows the spacetime interval from our metric. It is always flat and zero.

Figure 1: The time interval in the new metric as we move towards the center of the gravitational object. In this plot, we start at a distance of $R = 30\frac{GM}{c^2}$ and move inwards to $R = \frac{GM}{c^2}$.

Figure 2: The spacetime interval in the new metric as we move towards the center of the gravitational object. In this plot, we start at a distance of $R = 30\frac{GM}{c^2}$ and move inwards to $R = \frac{GM}{c^2}$. It is always flat and zero.

6 Fully consistent with Minkowski spacetime?

Minkowski spacetime is invariant and flat, unlike spacetime in general relativity theory, where spacetime itself is curved. The spacetime interval in the Minkowski metric, denoted $ds^2$, is typically different from zero. However, in our Newtonian metric, it is flat and always zero. The reason the spacetime interval is zero in the Newtonian spacetime metric is that events (effects) caused by gravity are always related to the speed of gravity, which is identical to the speed of light. This holds true even in Newtonian physics (see [START_REF] Haug | Demonstration that Newtonian gravity moves at the speed of light and not instantaneously (infinite speed) as thought![END_REF]). This is fully consistent with Minkowski spacetime when dealing with causal events caused by signals moving at the speed of light. In such cases, the $ds^2$ term in the Minkowski metric is always zero. Rindler [START_REF] Rindler | Special Relativity[END_REF] (and perhaps others who showed the same long before him) demonstrated in 1960 that Minkowski spacetime can be simplified from:

$$ds^2 = c^2 dt^2 - dx^2 - dy^2 - dz^2 \qquad (45)$$

to

$$0 = c^2 dt^2 - dx^2 - dy^2 - dz^2 \qquad (46)$$

when dealing with light signals, and also in the moving system $c^2 dt'^2 - dx'^2 - dy'^2 - dz'^2 = 0$. This implies that $ds^2$ in the Minkowski metric is always zero (in addition to being flat) when causal events are caused by something moving at the speed of light. Minkowski spacetime represents a four-dimensional spacetime, with three spatial dimensions and one temporal dimension. In the simplified scenario of one dimension in space and one in time, we have the well-known relation:

$$ds^2 = c^2 dt^2 - dx^2 = c^2 dt'^2 - dx'^2 \qquad (47)$$

This relation is directly related to the Lorentz transformation, as shown by the following equations:

$$s^2 = c^2 t'^2 - x'^2 = c^2\left(\gamma\left(t - \frac{v}{c^2}x\right)\right)^2 - \left(\gamma(x - vt)\right)^2$$
$$s^2 = c^2\left(\frac{t - \frac{vx}{c^2}}{\sqrt{1 - \frac{v^2}{c^2}}}\right)^2 - \left(\frac{x - vt}{\sqrt{1 - \frac{v^2}{c^2}}}\right)^2 \qquad (48)$$

Next, let us consider a scenario where we have two events that are separated by a distance $x$ and are causally connected. For these two events to be causally linked, information needs to be transmitted between them. This could take the form of, for example, a bullet traveling from event one and hitting event two, or a sound signal. The signal moves at a speed $v_2$ as observed from the reference frame in which $x$ is at rest. Consequently, the time between the cause and effect of the two events is $t = \frac{x}{v_2}$. Additionally, we have a velocity $v$ which represents the velocity of the frame in which $x$ is at rest with respect to another reference frame. Based on this, we can derive the following equations:

$$s^2 = c^2 t'^2 - x'^2 = c^2\left(\gamma\left(t - \frac{v}{c^2}x\right)\right)^2 - \left(\gamma(x - vt)\right)^2$$
$$s^2 = c^2\left(\frac{\frac{x}{v_2} - \frac{vx}{c^2}}{\sqrt{1 - \frac{v^2}{c^2}}}\right)^2 - \left(\frac{x - \frac{x}{v_2}v}{\sqrt{1 - \frac{v^2}{c^2}}}\right)^2 \qquad (49)$$

The spacetime interval in Minkowski spacetime is invariant because $s^2 = c^2 t'^2 - x'^2$ remains the same regardless of the reference frame from which it is observed. This property implies that the spacetime interval is observed to be constant.
One way to comprehend why this is the case is by examining the following derivation:

$$s^2 = c^2 t'^2 - x'^2 = c^2\left(\frac{\frac{x}{v_2} - \frac{vx}{c^2}}{\sqrt{1 - \frac{v^2}{c^2}}}\right)^2 - \left(\frac{x - \frac{x}{v_2}v}{\sqrt{1 - \frac{v^2}{c^2}}}\right)^2$$
$$= \frac{\frac{x^2c^2}{v_2^2} - \frac{2x^2v}{v_2} + \frac{x^2v^2}{c^2}}{1 - \frac{v^2}{c^2}} - \frac{x^2 - \frac{2x^2v}{v_2} + \frac{x^2v^2}{v_2^2}}{1 - \frac{v^2}{c^2}}$$
$$= \frac{x^2\left(\frac{c^2}{v_2^2} + \frac{v^2}{c^2} - 1 - \frac{v^2}{v_2^2}\right)}{1 - \frac{v^2}{c^2}} = \frac{x^2\left(1 - \frac{v^2}{c^2}\right)\left(\frac{c^2}{v_2^2} - 1\right)}{1 - \frac{v^2}{c^2}} = x^2\left(\frac{c^2}{v_2^2} - 1\right) \qquad (50)$$

This is independent of the velocity of the frame $v$, making it an invariant spacetime interval, as expected. In the special case where the causal signal moves at the speed of light, $v_2 = c$, we observe that the spacetime interval becomes $s^2 = 0$. To further illustrate this point, we can examine the following derivation:

$$s^2 = c^2 t'^2 - x'^2 = c^2\left(\frac{t - \frac{vx}{c^2}}{\sqrt{1 - \frac{v^2}{c^2}}}\right)^2 - \left(\frac{x - vt}{\sqrt{1 - \frac{v^2}{c^2}}}\right)^2$$
$$= \frac{t^2c^2 - 2txv + \frac{x^2v^2}{c^2} - x^2 + 2xtv - t^2v^2}{1 - \frac{v^2}{c^2}}$$
$$= \frac{c^2t^2\left(1 - \frac{v^2}{c^2}\right) - x^2\left(1 - \frac{v^2}{c^2}\right)}{1 - \frac{v^2}{c^2}} = c^2t^2 - x^2 \qquad (51)$$

This clearly demonstrates that the spacetime interval is invariant and the same in any frame in the Minkowski metric. The time interval $t$ represents the duration between the two events as observed in the rest frame. Let us assume that the causal signal between the two events moves at a velocity $v_2$; then we have:

$$ds^2 = c^2t^2 - x^2 = \frac{c^2x^2}{v_2^2} - x^2 = x^2\left(\frac{c^2}{v_2^2} - 1\right) \qquad (52)$$

In the special case when the causal signal moves at the speed of light, $v_2 = c$, this means $ds^2 = x^2\left(\frac{c^2}{v_2^2} - 1\right) = x^2\left(\frac{c^2}{c^2} - 1\right) = 0$. Therefore, the spacetime interval is zero for causal signals moving at the speed of light. This implies that $v$ becomes irrelevant in the equation, and it is proven that the Minkowski spacetime interval is invariant: regardless of the signal speed $v_2$ between two events, the spacetime interval remains the same from every reference frame. Let us again look at the special case where the signal between the two causal events always moves at $v_2 = c$; the simplification is significant. In this special case, we have:

$$s^2 = c^2\left(\frac{\frac{x}{c} - \frac{vx}{c^2}}{\sqrt{1 - \frac{v^2}{c^2}}}\right)^2 - \left(\frac{x - \frac{x}{c}v}{\sqrt{1 - \frac{v^2}{c^2}}}\right)^2$$
$$s^2 = 0 = x^2\,\frac{\left(1 - \frac{v}{c}\right)^2}{1 - \frac{v^2}{c^2}} - x^2\,\frac{\left(1 - \frac{v}{c}\right)^2}{1 - \frac{v^2}{c^2}} = x^2\,\frac{1 - \frac{v}{c}}{1 + \frac{v}{c}} - x^2\,\frac{1 - \frac{v}{c}}{1 + \frac{v}{c}}$$

Since both terms are identical, the common factor can equally well be written in another form; choosing $1 - \frac{v^2}{c^2}$ and using $x^2 = c^2t^2$ gives:

$$s^2 = 0 = c^2t^2\left(1 - \frac{v^2}{c^2}\right) - x^2\left(1 - \frac{v^2}{c^2}\right) \qquad (53)$$

That we can set $x^2 = c^2t^2$ is only because we assume the causal events of interest are caused by signals moving at the speed of light. Namely, in this case, the spacetime interval is always zero and naturally always invariant. This is not surprising. However, this is identical to the Newtonian metric, with the only exception that $v$ is swapped with the escape velocity. Therefore, gravity can be seen as Minkowski spacetime where the frame velocity is swapped with the escape velocity. This is not in line with general relativity theory, where, for some unknown reason, it has been decided that the spacetime interval itself $ds$ should also change with velocity (gravitational field).
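Both invariance results above can be verified mechanically. The following sketch (sympy, my own verification of Eqs. 50-52, not code from the paper) carries out the Lorentz substitution:

    # With t = x/v2, s^2 = c^2 t'^2 - x'^2 simplifies to a form equivalent to
    # x^2 (c^2/v2^2 - 1), independent of the frame velocity v; for v2 = c it
    # is identically zero.
    import sympy as sp

    c, v, v2, x = sp.symbols('c v v2 x', positive=True)
    gamma = 1 / sp.sqrt(1 - v**2/c**2)
    t = x / v2
    t_p = gamma * (t - v*x/c**2)
    x_p = gamma * (x - v*t)

    s2 = sp.simplify(c**2 * t_p**2 - x_p**2)
    print(s2)                              # x**2*(c**2 - v2**2)/v2**2, no v left
    print(sp.simplify(s2.subs(v2, c)))     # -> 0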
The Schwarzschild metric is given by (for the radial direction only):

$$s^2 = c^2 t^2\left(1 - \frac{2GM}{rc^2}\right) - x^2\left(1 - \frac{2GM}{rc^2}\right)^{-1} \qquad (54)$$

Since the escape velocity consistent with, and also derived from, the Schwarzschild metric is the well-known $v_e = \sqrt{\frac{2GM}{r}}$, we can replace $\frac{2GM}{r}$ in the Schwarzschild metric with $v_e^2$, and we get:

$$ds^2 = c^2 dt^2\left(1 - \frac{v_e^2}{c^2}\right) - dr^2\left(1 - \frac{v_e^2}{c^2}\right)^{-1} \qquad (55)$$

It looks almost identical to the Minkowski metric, but there is a major difference. In relation to $dr^2$, we have $\left(1 - \frac{v_e^2}{c^2}\right)^{-1}$ instead of $\left(1 - \frac{v^2}{c^2}\right)$. This is what makes the Schwarzschild metric have curved spacetime and not just curved space and time. Therefore, the Schwarzschild metric is never fully consistent with the Minkowski metric. It is only approximately consistent in the sense that $ds^2$ approaches zero and gives almost flat spacetime when $v_e \ll c$, which corresponds to $r \gg r_s$. In the appendix we show an incorrect way to apply the Minkowski metric to obtain the Schwarzschild metric.

Wormholes are forbidden

As early as 1916, Flamm [START_REF] Flamm | Beiträge zur einsteinschen gravitationstheorie[END_REF] hinted at the possibility of wormholes. Later, in 1935, Einstein and Rosen [START_REF] Einstein | The particle problem in the general theory of relativity[END_REF] mathematically formulated the concept of wormholes using the Schwarzschild metric. For the sake of simplicity, we will use geometric units where $G = c = 1$ in this discussion. Consequently, the Schwarzschild radius becomes $r_s = \frac{2GM}{c^2} = 2m$. Next, Einstein and Rosen introduced a new variable, $u^2 = r - 2m$, and substituted $r$ with $u^2 + 2m$ in the Schwarzschild metric, resulting in:

$$ds^2 = \left(1 - \frac{r_s}{r}\right)c^2 dt^2 - \frac{dr^2}{1 - \frac{r_s}{r}} - r^2(d\theta^2 + \sin^2\theta\,d\phi^2)$$
$$ds^2 = -4(u^2 + 2m)\,du^2 - (u^2 + 2m)^2(d\theta^2 + \sin^2\theta\,d\phi^2) + \frac{u^2}{u^2 + 2m}\,dt^2 \qquad (56)$$

Next, they examined the special case when $u = 0$. In this scenario, the term $\left(1 - \frac{r_s}{r}\right)c^2 dt^2 = \frac{u^2}{u^2 + 2m}\,dt^2$ vanishes, while the other terms in the Schwarzschild metric remain well defined. This has been interpreted as the ability to move in space without experiencing the passage of time. In our relativistic Newton metric, we have $r_c = \frac{GM}{c^2}$ when $v_e = c$. By utilizing the geometric unit system where $G = c = 1$, we can set $m = \frac{GM}{c^2}$. Similar to Einstein and Rosen, we set $u^2 = r - m$. The selection of $u$ is consistent with the Einstein-Rosen solution, leading to the cancellation of the $dt^2$ term. Consequently, we can replace $r$ with $r = u^2 + m$. This yields the following expression:

$$ds^2 = \left(1 - \frac{2GM}{rc^2} + \frac{G^2M^2}{r^2c^4}\right)c^2 dt^2 - \left(1 - \frac{2GM}{rc^2} + \frac{G^2M^2}{r^2c^4}\right)dr^2 - g_\Omega$$
$$ds^2 = \left(\frac{u^2}{u^2 + m}\right)^2 dt^2 - \left(\frac{u^2}{u^2 + m}\right)^2 dr^2 - (u^2 + m)^2(d\theta^2 + \sin^2\theta\,d\phi^2) \qquad (57)$$

Now, if we set $u = 0$, not only does the $dt^2$ term vanish, but the $dr^2$ term does as well. Consequently, it is not possible to move solely in space without also moving in time. Wormholes are forbidden within the framework of relativistic Newtonian theory. They seem to be a prediction due to relativistic effects not having been properly taken into account.
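The $u$-substitution argument can be checked directly (a minimal sympy sketch of mine):

    # With r = u^2 + m, the common coefficient 1 - 2m/r + m^2/r^2 of Eq. (57)
    # becomes u^4/(u^2 + m)^2, so both the dt^2 and dr^2 terms vanish at u = 0.
    import sympy as sp

    u, m = sp.symbols('u m', positive=True)
    r = u**2 + m
    coeff = sp.factor(sp.together(1 - 2*m/r + m**2/r**2))
    print(coeff)               # u**4/(m + u**2)**2
    print(coeff.subs(u, 0))    # -> 0: no purely spatial motion at u = 0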
Black hole information paradox likely removed

In the relativistically modified Newtonian model, the escape velocity is given by $v_e = \sqrt{\frac{2GM}{r} - \frac{G^2M^2}{c^2 r^2}}$. The radius at which the escape velocity is equal to the speed of light is $r_c = \frac{GM}{c^2}$, in contrast to the Schwarzschild metric, where it is $r_s = \frac{2GM}{c^2}$. It could be of interest to note that the relativistic Newtonian horizon corresponds to the horizon obtained in the extremal solution of the Reissner-Nordström metric [START_REF] Reissner | Über die eigengravitation des elektrischen feldes nach der einsteinschen theorie[END_REF][START_REF] Nordström | On the energy of the gravitation field in Einstein's theory[END_REF] and the extremal solution of the Kerr metric, which likely results in the same escape velocity. However, the Reissner-Nordström and Kerr metrics are based on the general theory of relativity and therefore predict curved spacetime. In our Newtonian metric, the spacetime is flat and there is no singularity at the horizon $r_c = \frac{GM}{c^2}$. That we have no singularity at the horizon is a key to getting rid of the black hole information paradox. In the Schwarzschild metric, the escape velocity is equal to the speed of light at the event horizon and exceeds the speed of light inside the event horizon; there is also a singularity with infinite spacetime curvature at the horizon. In the relativistic Newtonian model, the escape velocity is equal to the speed of light at the horizon and always remains equal to or below the speed of light inside the black hole object. Figure 3 illustrates the relativistic Newton escape velocity outside this type of black hole. As expected, the escape velocity is below the speed of light. Therefore, objects outside the black hole can naturally escape the gravitational forces of the black hole as long as they travel away from it at a speed equal to or greater than the escape velocity. This means that light can potentially escape this type of black hole, but anything with rest mass cannot, so matter will fall into the black hole while light will be able to move out. However, this does not exclude that black holes are also dark, because, remarkably, the orbital velocity is identical to the escape velocity at the radius where the escape velocity is $c$. That is, we have $v_e = c = \sqrt{\frac{2GM}{r} - \frac{G^2M^2}{c^2 r^2}} = \sqrt{\frac{GM}{r}} = v_o$ when $r = r_c = \frac{GM}{c^2}$. This means a large amount of pure energy (photons) could potentially also end up orbiting the black hole, so the relativistic Newtonian metric is consistent with a rotating black hole. Here one even has a mechanism explaining what is meant by a rotating black hole object: it simply means that a large amount of pure energy (photons) is orbiting at the horizon of the object. The closest we come to this in general relativity theory is likely the extremal case of the Kerr metric, when $a^2 = \frac{G^2M^2}{c^4}$. However, in the Kerr metric the spacetime interval is infinite at the horizon, so no light can escape a black hole in the Kerr metric, while in the relativistic Newton metric there is no singularity at the horizon and light can likely also escape. Even though a considerable amount of pure energy (photons) could end up orbiting the black hole at $r = r_c = \frac{GM}{c^2}$, it is highly likely that light would still manage to escape from these objects, since the escape velocity never goes above $c$, rendering the term "black hole" somewhat misleading. Consequently, one could anticipate at least some black holes being extraordinarily bright. Quasars appear to align with this expectation. Interestingly, actual photographs of quasars taken, for example, by the Hubble Telescope exhibit a remarkably bright center, while only artistic illustrations depict them as dark.

Figure 3: The escape velocity on the outside of a black hole, as analyzed through the new Newtonian relativistic metric. It starts at $r = 8\frac{GM}{c^2}$ from the center of the black hole and moves radially inward to $r = \frac{GM}{c^2}$, where the escape velocity is $c$. The vertical axis represents $v_e/c$.
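The coincidence $v_e = v_o = c$ at $r_c$ stated above is quickly confirmed (a sympy sketch, my own check):

    # At r_c = GM/c^2 the relativistic escape velocity and the orbital
    # velocity sqrt(GM/r) are both exactly c.
    import sympy as sp

    G, c, M = sp.symbols('G c M', positive=True)
    r_c = G*M/c**2
    v_e = sp.sqrt(2*G*M/r_c - G**2*M**2/(c**2*r_c**2))
    v_o = sp.sqrt(G*M/r_c)
    print(sp.simplify(v_e), sp.simplify(v_o))   # -> c c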
Nonetheless, our model does not preclude them from being dark; at the very least, the photons orbiting the black hole at the event horizon would undoubtedly create significant disturbances in the immediate vicinity and could potentially impact the accretion disk, but the accretion disk is not needed to explain their brightness, as light likely also comes out from the black hole object itself. According to conventional theory, the brightness of quasars (as well as of other galaxy centers with black holes) is explained through accretion disk theory. However, based on this new metric, it is also possible that light emanates from within the black hole itself. The accretion disk theory and the idea of light originating from the inside of the black hole are not mutually exclusive; instead, they could complement one another. As predicted by the new metric, matter and energy would naturally continue to be influenced outside the black hole by such things as the gravitational acceleration. Let us try to "enforce" a faster-than-light escape velocity in the relativistic Newton metric. We can set up the following equation and solve for the radius that would correspond to $x > 1$:

$$xc = \sqrt{\frac{2GM}{r} - \frac{G^2M^2}{c^2 r^2}} \qquad (58)$$

Solved with respect to $r$, this gives:

$$r = \frac{GM}{c^2 x^2}\left(1 - \sqrt{1 - x^2}\right) \qquad (59)$$

When $x > 1$, the resulting radius becomes imaginary (as we obtain the square root of a negative number). We believe that this should be interpreted as the impossibility of forcing the escape velocity to exceed the speed of light ($c$) when fully considering relativistic effects, that is, when properly taking into account strong gravitational fields. In the standard Schwarzschild metric, there is nothing that prevents the mass from collapsing into the central singularity (except perhaps in the Reissner-Nordström extremal solution, where charge can potentially counteract the gravitational pull). Therefore, the interpretation in this metric is that all mass and energy that enters the black hole ends up in the central singularity. That is, all of the mass and energy of a Schwarzschild black hole is confined to a point with no spatial dimensions: the central singularity. No matter how absurd this may sound, that is what the standard Schwarzschild metric tells us. However, we believe that the Schwarzschild metric should never have been interpreted in the context of strong gravitational fields. We can even find the maximum density inside the black hole if we interpret imaginary escape radii anywhere inside the black hole as not valid. We already know our escape velocity will not go above $c$. We can set up the following equation:

$$c = \sqrt{\frac{2GM_i}{x r_c} - \frac{G^2M_i^2}{c^2 x^2 r_c^2}} \qquad (60)$$

where $M_i$ is the mass inside the radius $x r_c = x\frac{GM_{BH}}{c^2}$ with $x \le 1$, and $M_{BH}$ is the total mass of the whole black hole. This gives:

$$M_i = x M_{BH}, \quad x \le 1 \qquad (61)$$
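Both boundary results, Eq. 59 turning imaginary for $x > 1$ and Eq. 61 giving $M_i = xM_{BH}$, can be verified symbolically (a sketch of mine using sympy):

    # (1) Solving x^2 c^2 = 2GM/r - G^2 M^2/(c^2 r^2) for r with x > 1 yields
    #     complex roots: no radius gives a superluminal escape velocity.
    # (2) With M_i = x*M and r = x*GM/c^2, the escape velocity is exactly c.
    import sympy as sp

    G, c, M, r, x = sp.symbols('G c M r x', positive=True)

    roots = sp.solve(sp.Eq(sp.Rational(36, 25)*c**2,          # x = 6/5 > 1
                           2*G*M/r - G**2*M**2/(c**2*r**2)), r)
    print(roots)                    # both roots contain sqrt(-11): imaginary

    r_i = x*G*M/c**2
    M_i = x*M
    v_e = sp.sqrt(2*G*M_i/r_i - G**2*M_i**2/(c**2*r_i**2))
    print(sp.simplify(v_e))         # -> c, independent of x (0 < x <= 1)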
We can now plot the escape velocity inside the black hole in 3D by incorporating the additional boundary of the maximum mass density within the black hole. This is illustrated in Figure 5, which depicts the escape velocity from a radial distance of $r = 3\frac{GM}{c^2}$ down to $r = l_p$. Outside the event horizon $r = r_c = \frac{GM}{c^2}$, the escape velocity remains lower than $c$, but from the event horizon all the way to the center of the black hole it is equal to $c$. Also, before imposing this condition on the mass density, we had a maximum escape velocity of $c$; see Figure 4. The difference is that our escape velocity now gives values all the way in to the very center $r = l_p$, while before looking into the maximum mass density implied by the model the escape velocity would give imaginary values inside $\frac{GM}{2c^2}$. Figure 4 shows that the escape velocity from the relativistic Newton metric inside a black hole is always at or below $c$, and not above $c$ as is predicted in the standard Schwarzschild metric. Again, keep in mind that the orbital velocity at the event horizon is also $c$, which does not exclude black holes from appearing dark. Still, it is highly likely that light will be able to escape simply as electromagnetic radiation; this finding has multiple important implications for unsolved problems at the forefront of black hole research.

Figure 4: The escape velocity from our new metric on the inside of a black hole, starting at $r = r_h = \frac{GM}{c^2}$ from the center of the black hole and going down to and including $r = \frac{1}{2}\frac{GM}{c^2}$. The vertical axis is $v_e/c$.

Figure 5: The escape velocity from our new metric on the inside as well as on the outside of a black hole. The graph starts at $r = 3\frac{GM}{c^2} = 3r_c$ from the center of the black hole and goes down to and including $r = l_p$. As we can see, the escape velocity is never above $c$ inside the black hole and always below $c$ outside the hole.

Our new interpretation of black holes stems from our understanding that their escape velocity differs from the standard (weak field) Schwarzschild metric. This interpretation is expected to have significant implications for the black hole information paradox put forward by Hawking [START_REF] Hawking | Breakdown of predictability in gravitational collapse[END_REF]. While we do not attempt to solve the information paradox within the framework of general relativity theory, we assert that the paradox can be resolved for the black holes predicted by our metric. In this theory, the entire information paradox disappears, as there is no singularity at the horizon, no infinitely curved spacetime (spacetime is actually flat), and the escape velocity never goes above $c$. It appears that photons can escape the black hole or, at the very least, become trapped on the surface of the black hole. Additionally, there is no singularity at the event horizon in the relativistic Newton metric. Whether the black hole information paradox is truly resolved in the new metric should be thoroughly investigated, but it certainly appears that this may be the case.

Conclusion

We have demonstrated that standard Newtonian theory can be rewritten in a form that strongly reminds us of Einstein's field equation. However, it is still a very different equation, without tensors. We have shown how the cosmological constant simply emerges when taking relativistic effects into account in Newtonian theory. Non-relativistic Newtonian theory yields a metric similar to the Schwarzschild metric, except that the spacetime interval is always flat. This metric is only valid for weak gravitational fields. In strong gravitational fields, one needs to consider relativistic effects in Newtonian theory, leading to a strong field metric (holding in weak and strong fields). The strong field metric produces predictions indistinguishable from general relativity when working in the weak field, such as gravitational time dilation, gravitational redshift, and gravitational bending of light. However, in strong gravitational fields, particularly for black holes, our theory provides significantly different predictions. The black hole information paradox proposed by Hawking seems to be resolved in this model. Furthermore, our theory predicts the conservation of spacetime, which is in stark contrast to general relativity. General relativity is supposedly consistent with the conservation of energy (as our theory is), but it predicts that the universe started with infinite spacetime curvature and will ultimately end up in basically flat spacetime. How is it possible to obtain something for nothing?
Appendix: Mistaken analogy between the Minkowski metric and the Schwarzschild metric

We will also demonstrate an incorrect "heuristic" approach to obtaining the Schwarzschild metric from Minkowski spacetime. Minkowski spacetime is given by:

$$ds^2 = c^2 dt'^2 - dx'^2 - dy'^2 - dz'^2 \qquad (62)$$

One could mistakenly claim:

$$dt' = dt\sqrt{1 - \frac{v^2}{c^2}} \qquad (63)$$

and

$$dx' = \frac{dx}{\sqrt{1 - \frac{v^2}{c^2}}} \qquad (64)$$

These are the formulas for time dilation and length contraction, which are not identical to the time transformation and length transformation that are needed. If one mistakenly incorporates time dilation and length contraction into Minkowski spacetime, the resulting expression is:

$$ds^2 = \left(1 - \frac{v^2}{c^2}\right)c^2 dt^2 - \left(1 - \frac{v^2}{c^2}\right)^{-1}dx^2 - dy'^2 - dz'^2 \qquad (65)$$

Then, by replacing $v$ with the escape velocity $v_e = \sqrt{\frac{2GM}{r}}$, we obtain:

$$ds^2 = \left(1 - \frac{2GM}{rc^2}\right)c^2 dt^2 - \left(1 - \frac{2GM}{rc^2}\right)^{-1}dx^2 - dy'^2 - dz'^2 \qquad (66)$$

And next, we rewrite it in spherical polar coordinates and obtain:

$$ds^2 = \left(1 - \frac{2GM}{rc^2}\right)c^2 dt^2 - \left(1 - \frac{2GM}{rc^2}\right)^{-1}dx^2 - r^2 g_\Omega \qquad (67)$$

This expression appears to be identical to the Schwarzschild metric, which may lead one to mistakenly believe that it has been derived heuristically from simple logic. However, a major mistake is made in claiming that $dx' = \frac{dx}{\sqrt{1 - v^2/c^2}}$ without utilizing the full Lorentz transformation to determine it. When done correctly, we would have:

$$ds^2 = \left(1 - \frac{v^2}{c^2}\right)c^2 dt^2 - \left(1 - \frac{v^2}{c^2}\right)dx^2 - dy'^2 - dz'^2 \qquad (68)$$

By substituting the escape velocity $v_e = \sqrt{\frac{2GM}{r}}$ for $v$ and rewriting in spherical polar coordinates, we obtain:

$$ds^2 = 0 = \left(1 - \frac{2GM}{rc^2}\right)c^2 dt^2 - \left(1 - \frac{2GM}{rc^2}\right)dx^2 - r^2 g_\Omega \qquad (69)$$

This is not the Schwarzschild metric, but rather our Newtonian spacetime metric when assuming 4-dimensional spacetime.
Footnotes:
(1) Be aware that the radius where the escape velocity is $c$ is at $\frac{GM}{c^2}$ and not at $\frac{2GM}{c^2}$.
(2) This was not Newton's original formula; see [START_REF] Haug | Different mass definitions and their pluses and minuses related to gravity[END_REF][START_REF] Haug | Newton did not invent or use the so-called newton's gravitational constant; g, it has mainly caused confusion[END_REF].
Gilles Dowek (email: [email protected])

Axioms vs. rewrite rules: from completeness to cut elimination

When we search for a proof of the proposition 2 + 2 = 4, we can use the axioms of addition and equality to transform this proposition into 4 = 4 and conclude with the reflexivity axiom. We could also use these axioms to transform this proposition into 2 + 2 = 2 + 2 and conclude, but this proof is redundant with the first, which is in many ways better. Indeed, the axioms of addition are better used in one direction only: to compute values. In automated proof search systems, we often suppress axioms, such as the axioms of addition, and replace them by rewrite rules. This permits cutting off search space while keeping completeness. The rules of addition apply to terms, but rules applying to propositions may also be considered. For instance, the axiom $\forall x y\,((x \times y) = 0 \Leftrightarrow (x = 0 \lor y = 0))$ can be replaced by the rule $x \times y = 0 \to x = 0 \lor y = 0$ that rewrites an atomic proposition into a disjunction. Such rules are of special interest in set theory, e.g. $x \in \{y, z\} \to x = y \lor x = z$, and in type theory [?, ?], e.g. $\varepsilon(x \mathbin{\dot\Rightarrow} y) \to \varepsilon(x) \Rightarrow \varepsilon(y)$. When we orient an axiom, we want to cut off search space, but we do not want to lose completeness. In this note, we discuss the properties that the rewrite system must fulfill so that orientation does not jeopardize completeness.

Orientation

From replacement to rewriting

We consider a set A and a binary relation → defined on A. We write →* for its reflexive-transitive closure and ≡ for its reflexive-symmetric-transitive closure. If t and t′ are two elements of A, we have t ≡ t′ if and only if there is a sequence of terms t1, ..., tn such that t = t1, t′ = tn and for each i, either ti → ti+1 or ti ← ti+1. For instance, let A be a set of terms built with a binary operator + and a finite number of constants, and let → be the relation such that t → u if and only if u is obtained from t by replacing a subterm of the form (x + y) + z by x + (y + z). We can establish that

(a + (b + c)) + (d + e) ≡ (a + b) + ((c + d) + e)

with the sequence

(a + (b + c)) + (d + e) ← ((a + b) + c) + (d + e) → (a + b) + (c + (d + e)) ← (a + b) + ((c + d) + e)

To establish that t ≡ t′, we search for such a sequence. We can apply the following replacement method: we start with the equality t = t′ and we derive more equalities with two rules. The first permits deriving v = v′ from u = u′ if u = u′ → v = v′, and the second permits deriving v = v′ from u = u′ if u = u′ ← v = v′. When we reach an equality of the form u = u, we are done. Search is more efficient if we restrict to rewriting sequences, i.e. sequences of the form

t = t1 → ... → tn = u1 ← ... ← up = t′

Indeed, to establish that t ≡ t′, we can then restrict the replacement method to the first rule and rewrite the equality t = t′ to an equality of the form u = u. For instance:

(a + (b + c)) + (d + e) → a + ((b + c) + (d + e)) → a + (b + (c + (d + e))) ← (a + b) + (c + (d + e)) ← (a + b) + ((c + d) + e)

Such a restriction may be incomplete. Consider, for instance, the relation defined by p →′ q and p →′ r. We have q ←′ p →′ r, but there is no rewriting sequence relating q to r, and the equality q = r cannot be rewritten. A relation is said to be confluent if whenever t →* t1 and t →* t2 there is an object u such that t1 →* u and t2 →* u. For instance, the relation → above is confluent, but the relation →′ is not. It is easy to prove that when a relation is confluent, two objects are related if and only if they are related by a rewriting sequence, and thus that rewriting is a complete search method.
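This example is small enough to machine-check. The sketch below is my own encoding (terms as nested pairs; not code from this note) of the single rule (x + y) + z → x + (y + z); since this system is confluent and terminating, equivalent terms reach a common normal form by rewriting alone:

    def rewrite(t):
        """One step of ((x,y),z) -> (x,(y,z)), outermost first; None if normal."""
        if isinstance(t, tuple):
            l, r = t
            if isinstance(l, tuple):
                return (l[0], (l[1], r))
            for i, s in enumerate(t):
                s2 = rewrite(s)
                if s2 is not None:
                    return t[:i] + (s2,) + t[i+1:]
        return None

    def normalize(t):
        while (t2 := rewrite(t)) is not None:
            t = t2
        return t

    t1 = (('a', ('b', 'c')), ('d', 'e'))      # (a+(b+c))+(d+e)
    t2 = (('a', 'b'), (('c', 'd'), 'e'))      # (a+b)+((c+d)+e)
    print(normalize(t1) == normalize(t2))     # True: both reach a+(b+(c+(d+e)))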
From paramodulation to narrowing

Let us now turn to proof search in predicate calculus with equality. We assume that we have an equality predicate and the associated axioms. Paramodulation [?] is an extension of resolution that allows replacing equals by equals in clauses. For instance, if we want to refute the clauses

(X + Y) + Z = X + (Y + Z)
P((a + (b + c)) + (d + e))
¬P((a + b) + ((c + d) + e))

we use the associativity to rearrange brackets in the other clauses until we can resolve them. Exactly as above, we can restrict the method and use some equalities in one direction only. We suppress these equalities from the set of clauses to refute, we orient them as rewrite rules, and we use them with a narrowing (or oriented paramodulation) rule that permits unifying a subterm of a clause with the left-hand side of a rewrite rule and replacing the instantiated subterm with the right-hand side of this rule (see, for instance, [?, ?]). Notice that since clauses may contain variables, we need to use unification (and not matching as in rewriting). The completeness of this method mostly rests on confluence of the rewrite system(1): instead of using the rewrite rules in both directions to unfold and fold terms, we can use them in one direction only and meet on a common reduct.

(1) Actually, although confluence plays the major rôle in completeness proofs, termination also is used. Whether or not completeness fails for non-terminating systems is unknown to the author.

Equational resolution

Equational resolution [?, ?] is another proof search method for predicate calculus with equality. In this method, like in narrowing, we suppress some equalities from the set of clauses to refute and we orient them as rewrite rules. We write t → t′ if t′ is obtained by rewriting a subterm in t. We write →* for the reflexive-transitive closure of this relation and ≡ for its reflexive-symmetric-transitive closure. Morally, we identify equivalent propositions and we work on equivalence classes. Hence, we use an extended resolution rule where unification is replaced by equational unification [?, ?]: a unifier of two terms t and u is a substitution σ such that σt ≡ σu (and not σt = σu). For instance, the terms (a + (b + c)) + (d + e) and (a + b) + ((c + d) + e) are trivially unifiable because they are equivalent, and hence with the clauses

P((a + (b + c)) + (d + e))
¬P((a + b) + ((c + d) + e))

we can apply the resolution rule and get the empty clause. To solve equational unification problems, we can use narrowing. But, in contrast with the previous method, the narrowing rule is applied to the unification problems and not to the clauses.

Completeness

We focus now on the completeness of equational resolution. We first recall a completeness proof of resolution and then discuss how it can be adapted.

Completeness of resolution

Cut elimination. A way to prove completeness of resolution is to prove that whenever a sequent A1, ..., Ap ⊢ B1, ..., Bq has a proof in sequent calculus (see, for instance, [?, ?]), the empty clause can be derived from the clausal form of the propositions A1, ..., Ap, ¬B1, ..., ¬Bq. Usually this proof proceeds in two steps: we first prove that if a sequent Γ ⊢ Δ has a proof, then it also has a proof that does not use the cut rule of sequent calculus (cut elimination theorem). Then we prove, by induction over proof structure, that if Γ ⊢ Δ has a cut free proof, then the empty clause can be derived from the clausal form cl(Γ, ¬Δ) of the propositions Γ, ¬Δ.

A variant. A variant of this proof isolates the clausification step. A first lemma proves that if the sequent Γ ⊢ Δ has a proof and cl(Γ, ¬Δ) = {P1, ..., Pn}, then the sequent ∀P1, ..., ∀Pn ⊢ also has a proof, where ∀P is the universal closure of the proposition P.
Then, by the cut elimination theorem, if the sequent ∀P1, ..., ∀Pn ⊢ has a proof, it also has a cut free proof. At last, we prove by induction over proof structure that if ∀P1, ..., ∀Pn ⊢ has a cut free proof, then the empty clause can be derived from {P1, ..., Pn} = cl(Γ, ¬Δ).

Herbrand theorem. Instead of using the cut elimination theorem, some authors prefer to use Herbrand's theorem, which is a variant of it. According to Herbrand's theorem, if P1, ..., Pn are quantifier free propositions, then the sequent ∀P1, ..., ∀Pn ⊢ has a proof if and only if there are instances $\sigma^1_1 P_1, \ldots, \sigma^1_{k_1} P_1, \ldots, \sigma^n_1 P_n, \ldots, \sigma^n_{k_n} P_n$ of the propositions P1, ..., Pn such that the quantifier free proposition $\neg(\sigma^1_1 P_1 \land \ldots \land \sigma^1_{k_1} P_1 \land \ldots \land \sigma^n_1 P_n \land \ldots \land \sigma^n_{k_n} P_n)$ is tautologous. As above, a first lemma proves that if the sequent Γ ⊢ Δ has a proof and cl(Γ, ¬Δ) = {P1, ..., Pn}, then the sequent ∀P1, ..., ∀Pn ⊢ also has a proof. By Herbrand's theorem, if the sequent ∀P1, ..., ∀Pn ⊢ has a proof, then there are instances $\sigma^1_1 P_1, \ldots, \sigma^1_{k_1} P_1, \ldots, \sigma^n_1 P_n, \ldots, \sigma^n_{k_n} P_n$ of the propositions P1, ..., Pn such that the quantifier free proposition $\neg(\sigma^1_1 P_1 \land \ldots \land \sigma^n_{k_n} P_n)$ is tautologous. At last, we prove that if this proposition is tautologous, then the empty clause can be derived from {P1, ..., Pn} = cl(Γ, ¬Δ).

Completeness of equational resolution

This proof can be adapted to equational resolution. First, the identification of equivalent propositions used in proof search can be used in sequent calculus as well. This leads to the sequent calculus modulo [?]. For instance, the axiom rule

A ⊢ A (axiom)

is transformed into the rule

A ⊢≡ B (axiom, if A ≡ B)

and the left rule of disjunction, which derives Γ, A ∨ B ⊢ Δ from Γ, A ⊢ Δ and Γ, B ⊢ Δ (∨-left), is transformed into the rule deriving Γ, C ⊢≡ Δ from Γ, A ⊢≡ Δ and Γ, B ⊢≡ Δ, provided C ≡ A ∨ B. In sequent calculus modulo the rules of addition and multiplication, we have a very short proof that the number 4 is even:

4 = 4 ⊢≡ 4 = 2 × 2 (axiom, since 4 = 2 × 2 ≡ 4 = 4)
∀x (x = x) ⊢≡ 4 = 2 × 2 (∀-left)
∀x (x = x) ⊢≡ ∃y (4 = 2 × y) (∃-right)

while proving this proposition would be much more cumbersome in sequent calculus with the axioms of addition and multiplication.
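The benefit of treating the rules of addition and multiplication as computation rather than as axioms can be illustrated with a toy evaluator (my own sketch; the term encoding is hypothetical and not from this note):

    # Proving "4 is even" modulo the rules of arithmetic amounts to checking
    # that 4 and 2 x 2 have the same normal form, plus one axiom step.
    def normal_form(term):
        # terms are ints or ('+', t1, t2) / ('*', t1, t2)
        if isinstance(term, tuple):
            op, a, b = term
            a, b = normal_form(a), normal_form(b)
            return a + b if op == '+' else a * b
        return term

    print(normal_form(4) == normal_form(('*', 2, 2)))   # True: axiom rule applies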
Using sequent calculus modulo, we can prove the completeness of equational resolution. First, we prove the equivalence lemma: if the rewrite system encodes a theory T, then the sequent Γ, T ⊢ Δ has a proof in sequent calculus if and only if the sequent Γ ⊢≡ Δ has a proof in sequent calculus modulo. Then, the cut elimination theorem extends trivially to sequent calculus modulo when the equivalence is generated by rewrite rules applying to terms. At last, we prove by induction over proof structure that if Γ ⊢≡ Δ has a cut free proof in sequent calculus modulo, then the empty clause can be derived from cl(Γ, ¬Δ) with the rules of equational resolution. The confluence of the rewrite system is only needed to prove that narrowing is a complete equational unification method.

Equational resolution as narrowing

Equational resolution can be seen as a formulation of narrowing. This suggests another completeness proof, reducing the completeness of equational resolution to that of narrowing. Indeed, instead of resolving two clauses and narrowing the unification problem, as we do in equational resolution, we could first narrow the clauses and apply the usual resolution rule to get the same result. For instance, instead of resolving

P((a + (b + c)) + (d + e))
¬P((a + b) + ((c + d) + e))

and narrowing the unification problem (a + (b + c)) + (d + e) = (a + b) + ((c + d) + e), we could take an option on these two clauses and narrow them until they can be resolved. So, narrowing a unification problem is just a way to narrow the clauses it comes from. Hence, equational resolution can be seen as a formulation of narrowing and thus as an implementation of paramodulation with the restriction that some equations should be used in one direction only. Hence, the completeness of equational resolution rests on confluence.

Rewriting propositions

So far, we have considered methods where axioms are replaced by rewrite rules applying to terms. We have seen that the proof search methods obtained this way can be seen as restrictions of paramodulation and that their completeness rests on confluence. We consider now more powerful rewrite rules applying to propositions. For instance, the axiom ∀xy (x × y = 0 ⇔ (x = 0 ∨ y = 0)) can be replaced by the rule x × y = 0 → x = 0 ∨ y = 0 that applies directly to propositions. For technical reasons, we restrict to rules, such as this one, whose left-hand side is an atomic proposition. For example, with this rule, we can prove the proposition ∃z (a × a = z ⇒ a = z) in sequent calculus modulo:

a = 0 ⊢≡ a = 0 (axiom)    a = 0 ⊢≡ a = 0 (axiom)
a × a = 0 ⊢≡ a = 0 (∨-left, since a × a = 0 ≡ a = 0 ∨ a = 0)
⊢≡ a × a = 0 ⇒ a = 0 (⇒-right)
⊢≡ ∃z (a × a = z ⇒ a = z) (∃-right)

Resolution modulo

In this case, equational resolution can be extended to resolution modulo [?]. In resolution modulo, like in equational resolution, the rewrite rules applying to terms are used to narrow unification problems, but, like in narrowing, the rules applying to propositions are used to narrow the clauses directly. For instance, if we take the clausal form of the negation of the proposition ∃z (a × a = z ⇒ a = z), i.e. the clauses

a × a = Z
¬a = Z

we can narrow the first clause with the rule x × y = 0 → x = 0 ∨ y = 0, yielding the clause a = 0. Then, we resolve this clause with ¬a = Z and get the empty clause.

Resolution with axioms

An alternative is to keep the axiom ∀xy (x × y = 0 ⇔ (x = 0 ∨ y = 0)) and to use resolution. In this case, we have to refute the clauses

¬X × Y = 0, X = 0, Y = 0
X × Y = 0, ¬X = 0
X × Y = 0, ¬Y = 0
a × a = Z
¬a = Z

We resolve the clause a × a = Z with the clause ¬X × Y = 0, X = 0, Y = 0, yielding the clause a = 0. Then, we resolve this clause with ¬a = Z and get the empty clause. As above, resolution modulo can be seen as a restriction of resolution, where some clauses can be used in one direction only. As above, we could think that resolution modulo is complete as soon as the rewrite system is confluent. Unfortunately, this is not the case.

A counter example to completeness

The axiom A ⇔ (B ∧ ¬A) can be transformed into a rewrite rule (Crabbé's rule):

A → B ∧ ¬A

Resolution modulo cannot prove the proposition ¬B, because from the clause B, neither the resolution rule nor the narrowing rule can be applied. But, surprisingly, with the axiom A ⇔ (B ∧ ¬A), we can prove the proposition ¬B. Indeed, the clausal form of the proposition A ⇒ (B ∧ ¬A) yields the clauses

¬A, B
¬A

the clausal form of the proposition (B ∧ ¬A) ⇒ A yields

A, ¬B

and the clausal form of the negation of the proposition ¬B yields

B

From B and A, ¬B, we derive A, and then from this clause and ¬A we get the empty clause. Hence, a resolution proof with axioms cannot always be transformed into a resolution modulo proof. Resolution modulo cannot be seen as a restriction of resolution where some clauses can be used in one direction only, and completeness may be lost even if the rewrite system is confluent. Notice that, in the resolution proof, clauses coming from both propositions A ⇒ (B ∧ ¬A) and (B ∧ ¬A) ⇒ A are used.
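As a side check (mine, not part of this note), a four-line truth table confirms that ¬B is a semantic consequence of the axiom A ⇔ (B ∧ ¬A), so the resolution refutation above is sound:

    # The only model of A <=> (B and not A) has A = False and B = False,
    # hence ¬B holds in every model of the axiom.
    from itertools import product

    for A, B in product([False, True], repeat=2):
        if A == (B and not A):
            print(A, B)        # prints only: False False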
Hence, although the rewrite system is confluent, we need both to unfold A to B ∧ ¬A and to fold B ∧ ¬A to A. Moreover, when we resolve B with ¬B, A we fold B ∧ ¬A to A although we have proved the proposition B, but not the proposition ¬A yet. This partial folding, which resolution allows but resolution modulo disallows, is the reason why some resolution proofs cannot be transformed into resolution modulo proofs. Hence, orientation cuts off search space more dramatically when we have rules rewriting propositions than when we have only rules rewriting terms. It can cut off search space so dramatically that completeness may be lost.

Completeness of resolution modulo

Since resolution can prove the proposition ¬B with the axiom A ⇔ (B ∧ ¬A), the sequent A ⇔ (B ∧ ¬A) ⊢ ¬B can be proved in sequent calculus and hence the sequent ⊢ ¬B can be proved in sequent calculus modulo. For instance, it has the following proof, written top-down with the premises of each rule indented below its conclusion:

  ⊢≡ ¬B                            (¬-right)
    B ⊢≡                           (cut on A)
      B ⊢≡ A                       (∧-right, since A ≡ B ∧ ¬A)
        B ⊢≡ B                     (axiom)
        B ⊢≡ ¬A                    (¬-right)
          A, B ⊢≡                  (contraction-left)
            A, A, B ⊢≡             (∧-left, since A ≡ B ∧ ¬A)
              B, ¬A, A, B ⊢≡       (¬-left)
                B, A, B ⊢≡ A       (weakening + axiom)
      A, B ⊢≡                      (contraction-left)
        A, A, B ⊢≡                 (∧-left)
          B, ¬A, A, B ⊢≡           (¬-left)
            B, A, B ⊢≡ A           (weakening + axiom)

The proposition ¬B has a proof in sequent calculus modulo but not in resolution modulo, and thus resolution modulo is incomplete. The completeness theorem of resolution modulo does not prove that each time a proposition has a proof in sequent calculus modulo, it has a proof in resolution modulo (because this is false), but it proves, using techniques similar to those developed in section ?? and ??, that if a proposition has a cut free proof in sequent calculus modulo, it has a proof in resolution modulo. The proposition ¬B has a proof in sequent calculus modulo, but no cut free proof, and sequent calculus modulo the rule A → B ∧ ¬A does not have the cut elimination property. Modulo some rewrite systems, such as x × y = 0 → x = 0 ∨ y = 0 or A → B ∧ A, sequent calculus modulo has the cut elimination property and thus resolution modulo is complete. Hence, modulo these rewrite systems there is no occurrence of the phenomenon we have observed above, where a resolution proof could not be transformed into a resolution modulo proof because it used partial folding. Still, the translation of resolution proofs to resolution modulo proofs is non trivial because it involves cut elimination. In [?] we present cut elimination proofs for large classes of rewrite systems (including various presentations of type theory, all confluent and terminating quantifier free rewrite systems, ...) and we conjecture that cut elimination holds for all terminating and confluent rewrite systems, although we know that such a conjecture cannot be proved in type theory or in large fragments of set theory because it implies their consistency. Cut elimination also holds for sequent calculus modulo some non terminating rewrite systems such as A → B ∧ A. Notice that when resolution modulo is complete (for instance for type theory), this completeness result cannot be proved in the theory itself (because it implies its consistency), while confluence usually can. So, it is not surprising that tools more powerful than confluence, such as cut elimination, are required.

Towards a new kind of completion

Completion [?] transforms a rewrite system into a confluent one. We can imagine a similar process that transforms a rewrite system into one modulo which sequent calculus has the cut elimination property.
For instance, the rule A → B ∧ ¬A is equivalent to the axiom A ⇔ (B ∧ ¬A), whose clausal form

  A, ¬B
  B, ¬A
  ¬A

is the same as that of the axioms A ⇔ ⊥ and B ⇔ A, which are in turn equivalent to the rewrite system A → ⊥, B → A, modulo which sequent calculus has the cut elimination property and resolution modulo is complete. So we could imagine to transform the rewrite system A → B ∧ ¬A into

  A → ⊥
  B → A

or even into

  A → ⊥
  B → ⊥

in order to recover completeness. Actually, although confluence plays the major rôle in completeness proofs, termination also plays a part.

Acknowledgements. Thanks to Th. Hardin, C. Kirchner, Ch. Lynch and B. Werner for many helpful discussions on this subject.
04112765
en
[ "info.info-lo" ]
2024/03/04 16:41:24
2000
https://inria.hal.science/hal-04112765/file/ftp.pdf
Gilles Dowek email: [email protected]

Automated theorem proving in first-order logic modulo: on the difference between type theory and set theory

Resolution modulo is a first-order theorem proving method that can be applied both to first-order presentations of simple type theory (also called higher-order logic) and to set theory. When it is applied to some first-order presentations of type theory, it simulates exactly higher-order resolution. In this note, we compare how it behaves on type theory and on set theory.

Higher-order theorem proving (e.g. higher-order resolution [1,[START_REF] Huet | Constrained resolution: a complete method for higher order logic[END_REF][START_REF] Huet | A mechanization of type theory[END_REF]) is different from first-order theorem proving in several respects. First, the first-order unification algorithm has to be replaced by the higher-order one [START_REF] Huet | A unification algorithm for typed lambda calculus[END_REF][START_REF] Huet | Résolution d'équations dans les Langages d[END_REF]. Even then, the resolution rule alone is not complete, but another rule called the splitting rule has to be added. At last, the skolemization rule is more complicated [START_REF] Miller | Proofs in higher order logic[END_REF][START_REF] Miller | A compact representation of proofs[END_REF]. On the other hand, higher-order logic, also called simple type theory, can be expressed as a first-order theory [7], and first-order theorem proving methods, such as first-order resolution, can be used for this theory. Of course, first-order resolution with the axioms of this theory is much less efficient than higher-order resolution. However, we can try to understand higher-order resolution as a special automated theorem proving method designed for this theory. A motivation for this project is that it is very unlikely that such a method applies only to this theory; it should also apply to similar theories such as extensions of type theory with primitive recursion, or set theory. In [START_REF] Dowek | Theorem proving modulo[END_REF], together with Th. Hardin and C. Kirchner, we have proposed a theorem proving method for first-order logic, called resolution modulo, that when applied to a first-order presentation of type theory simulates exactly higher-order resolution. Proving the completeness of this method has required to introduce a new presentation of first-order logic, called deduction modulo, that separates clearly computation steps and deduction steps. Resolution modulo can be applied both to type theory and to set theory. The goal of this note is to compare how resolution modulo works for one theory and the other. In order to remain self contained, we will first present shortly the ideas of deduction modulo and resolution modulo.

1 Resolution modulo

Deduction modulo

In deduction modulo, the notions of language, term and proposition are those of (many sorted) first-order logic. But a theory is formed with a set of axioms Γ and a congruence ≡ defined on propositions. In this paper, all congruences will be defined by confluent rewrite systems (as these rewrite systems are defined on propositions and propositions contain binders, these rewrite systems are in fact combinatory reduction systems [START_REF] Klop | Combinatory reduction systems: introduction and survey[END_REF]). Propositions are supposed to be identified modulo the congruence ≡. Hence, the deduction rules must take into account this equivalence.
For instance, the modus ponens cannot be stated as usual

  A ⇒ B    A
  ────────────
       B

but, as the two occurrences of A need not be identical but only congruent, it must be stated

  A′ ⇒ B    A
  ───────────── if A ≡ A′
       B

In fact, as the congruence may identify implications with other propositions, a slightly more general formulation is needed:

  C    A
  ──────── if C ≡ A ⇒ B
     B

All the rules of natural deduction or sequent calculus may be stated in a similar way; see [START_REF] Dowek | Theorem proving modulo[END_REF][START_REF] Dowek | Proof normalization modulo[END_REF] for more details. As an example, in arithmetic, in natural deduction modulo, we can prove that 4 is an even number:

  ∀x (x = x)        (axiom)
  2 × 2 = 4         (∀-elim on x, x = x, with the term 4)
  ∃x (2 × x = 4)    (∃-intro on x, 2 × x = 4, with the term 2)

Substituting the variable x by the term 2 in the proposition 2 × x = 4 yields the proposition 2 × 2 = 4, which is congruent to 4 = 4. The transformation of one proposition into the other, which requires several proof steps in natural deduction, is dropped from the proof in deduction modulo. It is just a computation that need not be written, because everybody can re-do it by him/herself. In this case, the congruence can be defined by a rewriting system defined on terms:

  0 + y → y
  S(x) + y → S(x + y)
  0 × y → 0
  S(x) × y → x × y + y

Notice that, in the proof above, we do not need the axioms of addition and multiplication. Indeed, these axioms are now redundant: since the terms 0 + y and y are congruent, the axiom ∀y (0 + y = y) is congruent to the equality axiom ∀y (y = y). Hence, it can be dropped. In other words, this axiom has been built-in the congruence [START_REF] Plotkin | Building-in equational theories[END_REF]1,[START_REF] Stickel | Automated deduction by theory resolution[END_REF]. The originality of deduction modulo is that we have introduced the possibility to define the congruence directly on propositions, with rules rewriting atomic propositions to arbitrary ones. For instance, in the theory of integral rings, we can take the rule

  x × y = 0 → x = 0 ∨ y = 0

that rewrites an atomic proposition to a disjunction. Notice, at last, that deduction modulo is not a true extension of first-order logic. Indeed, it is proved in [START_REF] Dowek | Theorem proving modulo[END_REF] that for every congruence ≡, we can find a theory T such that Γ ⊢ P is provable modulo ≡ if and only if T, Γ ⊢ P is provable in ordinary first-order logic. Of course, the provable propositions are the same, but the proofs are very different.

Resolution modulo

When the congruence on propositions is induced by a congruence on terms, automated theorem proving can be performed like in first-order logic, for instance with the resolution method, provided the unification algorithm is replaced by an equational unification algorithm modulo this congruence. Equational unification problems can be solved by the narrowing method [START_REF] Fay | First-order unification in an equational theory[END_REF][START_REF] Hullot | Canonical forms and unification[END_REF][START_REF] Jouannaud | Solving equations in abstract algebras: a rulebased survey of unification[END_REF]. The method obtained this way, called equational resolution [START_REF] Plotkin | Building-in equational theories[END_REF][START_REF] Stickel | Automated deduction by theory resolution[END_REF], is complete.
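To make the computational reading concrete, here is a small sketch (our own toy encoding, not machinery from the papers cited above) that normalizes terms under the four rules just given; the final assertion checks that 2 × 2 computes to 4, the step that the proof above leaves implicit.

```python
# A minimal sketch (ours; naive leftmost rewriting) of the rewrite system
# above.  Terms: ("0",), ("S", t), ("+", a, b), ("*", a, b).

def step(t):
    """Return a reduct of t, or None if t is in normal form."""
    if t[0] == "+":
        a, b = t[1], t[2]
        if a == ("0",):
            return b                              # 0 + y -> y
        if a[0] == "S":
            return ("S", ("+", a[1], b))          # S(x) + y -> S(x + y)
    if t[0] == "*":
        a, b = t[1], t[2]
        if a == ("0",):
            return ("0",)                         # 0 * y -> 0
        if a[0] == "S":
            return ("+", ("*", a[1], b), b)       # S(x) * y -> x * y + y
    for i, sub in enumerate(t[1:], 1):            # otherwise rewrite a subterm
        r = step(sub) if isinstance(sub, tuple) else None
        if r is not None:
            return t[:i] + (r,) + t[i + 1:]
    return None

def normalize(t):
    while (r := step(t)) is not None:
        t = r
    return t

def num(n):                                       # Peano numeral for n
    return ("0",) if n == 0 else ("S", num(n - 1))

assert normalize(("*", num(2), num(2))) == num(4) # 2 x 2 computes to 4
```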
The situation is different when the congruence identifies atomic propositions with non atomic ones. For instance, in the theory of integral rings, the proposition a × a = 0 ⇒ a = 0 is provable because it reduces to

  (a = 0 ∨ a = 0) ⇒ a = 0

Hence the proposition ∃y (a × a = y ⇒ a = y) is also provable. But, with the clausal form of its negation

  a × a = Y
  ¬a = Z

we cannot apply the resolution rule successfully, because the terms a × a and a do not unify. Hence, we need to introduce a new rule that detects that the literal a × a = Y has an instance that is reducible by the rewrite rule x × y = 0 → x = 0 ∨ y = 0, instantiates it, reduces it and puts it in clausal form again. We get this way the clause a = 0, which can be resolved with the clause ¬a = Z. Hence, the rewrite rules have to be divided into two sets: the set E of rules rewriting terms to terms, used by the equational unification algorithm, and the set R of rules rewriting atomic propositions to arbitrary ones, used by this new rule, called extended narrowing. The system obtained this way is called extended narrowing and resolution, or simply resolution modulo. Figure 1 gives a formulation of this method where unification problems are postponed as constraints; a proposition is provable with this method when a constrained empty clause can be derived from the clausal form of its negation.

Extended resolution:

  {A1, ..., An, B1, ..., Bm}/E1    {¬C1, ..., ¬Cp, D1, ..., Dq}/E2
  ─────────────────────────────────────────────────────────────────
  {B1, ..., Bm, D1, ..., Dq}/(E1 ∪ E2 ∪ {A1 =E ... =E An =E C1 =E ... =E Cp})

Extended narrowing:

  C/E
  ─────────────────────────── if l → r ∈ R and p is an occurrence in C
  cl(C[r]p)/(E ∪ {C|p =E l})

Figure 1: Resolution modulo.

Transforming axioms into rewrite rules enhances the efficiency of automated theorem proving, as shown by this very simple example.

Example. To refute the theory

  P1 ⇔ (Q2 ∨ P2), ..., Pi ⇔ (Qi+1 ∨ Pi+1), ..., Pn ⇔ (Qn+1 ∨ Pn+1), Q2 ⇔ ⊥, ..., Qn+1 ⇔ ⊥, Pn+1 ⇔ ⊥, P1

resolution must search, non-deterministically, among the clauses of all these propositions, while, in resolution modulo, the propositions Pi ⇔ (Qi+1 ∨ Pi+1), Qi ⇔ ⊥ and Pn+1 ⇔ ⊥ can be transformed into the rewrite rules

  Pi → Qi+1 ∨ Pi+1
  Qi → ⊥
  Pn+1 → ⊥

The only proposition left is P1. It reduces to ⊥ ∨ ... ∨ ⊥ and its clausal form is hence the empty clause. Of course, reducing the proposition P1 has a cost, but this cost is much lower than that of the non deterministic search for a resolution refutation with the clauses above. Indeed, the reduction process is deterministic because the rewrite system is confluent.

Cut elimination and completeness

Resolution modulo is not complete for all congruences. For instance, take the congruence induced by the rewrite rule

  A → A ⇒ B

The proposition B has a proof in sequent calculus modulo, for instance the following one, written top-down with the premises of each rule indented below its conclusion:

  ⊢≡ B                       (cut on A)
    ⊢≡ A                     (⇒-right, since A ≡ A ⇒ B)
      A ⊢≡ B                 (contraction-left)
        A, A ⊢≡ B            (⇒-left, since A ≡ A ⇒ B)
          A ⊢≡ A             (axiom)
          A, B ⊢≡ B          (weakening-left + axiom)
    A ⊢≡ B                   (contraction-left)
      A, A ⊢≡ B              (⇒-left)
        A ⊢≡ A               (axiom)
        A, B ⊢≡ B            (weakening-left + axiom)

but it is not provable by resolution modulo. Indeed, the clausal form of the negation of the proposition B is the clause ¬B, and neither the extended resolution rule nor the extended narrowing rule can be applied successfully. However, it may be noticed that the proposition B has no cut free proof in sequent calculus modulo. Hence sequent calculus modulo this congruence does not have the cut elimination property.
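The determinism claimed in the refutation example above is easy to demonstrate in a few lines of code. The encoding below is a toy of our own (with n = 3): each atom maps to the list of disjuncts of its right-hand side, ⊥ being the empty disjunction, and normalization needs no search at all.

```python
# A toy encoding (ours) of the example, with n = 3: each atom maps to the
# list of disjuncts of its right-hand side; an empty list stands for ⊥.

n = 3
rules = {f"P{i}": [f"Q{i+1}", f"P{i+1}"] for i in range(1, n + 1)}
rules.update({f"Q{i}": [] for i in range(2, n + 2)})    # Q_i -> ⊥
rules[f"P{n+1}"] = []                                   # P_{n+1} -> ⊥

def disjuncts(atom):
    """Atoms of the normal form of `atom`, seen as a flat disjunction."""
    if atom not in rules:
        return [atom]
    out = []
    for a in rules[atom]:
        out.extend(disjuncts(a))
    return out

print(disjuncts("P1"))   # []  -- P1 normalizes to a disjunction of ⊥'s,
                         # so its clausal form is the empty clause
```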
We have proved in [START_REF] Dowek | Theorem proving modulo[END_REF] that resolution modulo is complete for all congruences ≡ such that the sequent calculus modulo ≡ has the cut elimination property. Together with B. Werner, we have proved in [START_REF] Dowek | Proof normalization modulo[END_REF] that cut elimination holds modulo a large class of congruences, and conjectured that it holds modulo all congruences that can be defined by a confluent and terminating rewrite system. When cut elimination does not hold, only propositions that have a cut free proof are proved by resolution modulo.

2 Simple type theory and set theory

2.1 Simple type theory

Simple type theory is a many-sorted first-order theory. The sorts of simple type theory, called simple types, are defined inductively as follows: ι and o are simple types, and if T and U are simple types then T → U is a simple type. As usual, we write T1 → ... → Tn → U for T1 → (... (Tn → U)). The language of simple type theory contains the individual symbols

- S_{T,U,V} of sort (T → U → V) → (T → U) → T → V,
- K_{T,U} of sort T → U → T,
- ∨ of sort o → o → o,
- ¬ of sort o → o,
- ∀_T of sort (T → o) → o,

the function symbols

- α_{T,U} of rank (T → U, T, U),

and the predicate symbol

- ε of rank (o).

As usual, we write (t u) for the term α(t, u) and (t u1 ... un) for (...(t u1) ... un). Usual presentations of simple type theory [START_REF] Church | A formulation of the simple theory of types[END_REF][START_REF] Andrews | An introduction to mathematical logic and type theory: to truth through proof[END_REF] define propositions as terms of type o. But, as we want type theory to be a first-order theory, we introduce a predicate symbol ε that transforms a term of type o into a genuine proposition. Then, we need an axiom relating the proposition ε(α(α(∨, x), y)) and the proposition ε(x) ∨ ε(y), for instance the axiom

  ∀x ∀y (ε(∨ x y) ⇔ (ε(x) ∨ ε(y)))

This axiom can be built in the congruence, if we take the rewrite rule

  ε(∨ x y) → ε(x) ∨ ε(y)

This leads to the rewrite system of figure 2:

  (S x y z) → (x z (y z))
  (K x y) → x
  ε(¬ x) → ¬ε(x)
  ε(∨ x y) → ε(x) ∨ ε(y)
  ε(∀ x) → ∀y ε(x y)

Figure 2: Rewriting rules for simple type theory.

This rewrite system is confluent because it is orthogonal, and we prove in [START_REF] Dowek | Proof normalization for a first-order formulation of higher-order logic[END_REF] that it is strongly normalizing. Hence, the congruence is decidable. It is proved in [START_REF] Dowek | Proof normalization modulo[END_REF] that deduction modulo this congruence has the cut elimination property, i.e. every proposition provable in sequent calculus modulo this congruence has a cut free proof.

2.2 Set theory

The language of Zermelo's set theory is formed with the binary predicate symbols ∈ and =. This theory contains the axioms of equality and the following axioms:

  pair: ∀x ∀y ∃z ∀w (w ∈ z ⇔ (w = x ∨ w = y))
  union: ∀x ∃y ∀w (w ∈ y ⇔ ∃z (w ∈ z ∧ z ∈ x))
  power set: ∀x ∃y ∀w (w ∈ y ⇔ ∀z (z ∈ w ⇒ z ∈ x))
  subset scheme: ∀x1 ... ∀xn ∀y ∃z ∀w (w ∈ z ⇔ (w ∈ y ∧ P))

where x1, ..., xn are the free variables of P minus w. To these axioms, we may add the extensionality axiom, the foundation axiom, the axiom of infinity, the replacement scheme and the axiom of choice. To have a language for the objects of the theory, we may skolemize these axioms, introducing the function symbols {}, ⋃, P and f_{x1,...,xn,w,P}. We then get the axioms

  ∀x ∀y ∀w (w ∈ {}(x, y) ⇔ (w = x ∨ w = y))
  ∀x ∀w (w ∈ ⋃(x) ⇔ ∃z (w ∈ z ∧ z ∈ x))
  ∀x ∀w (w ∈ P(x) ⇔ ∀z (z ∈ w ⇒ z ∈ x))
  ∀x1 ... ∀xn ∀y ∀w (w ∈ f_{x1,...,xn,w,P}(x1, ..., xn, y) ⇔ (w ∈ y ∧ P))

Then, these axioms may be built in the congruence with the rewrite system of figure 3:

  w ∈ {}(x, y) → w = x ∨ w = y
  w ∈ ⋃(x) → ∃z (w ∈ z ∧ z ∈ x)
  w ∈ P(x) → ∀z (z ∈ w ⇒ z ∈ x)
  w ∈ f_{x1,...,xn,w,P}(x1, ..., xn, y) → w ∈ y ∧ P

Figure 3: Rewriting rules for set theory.

This rewrite system is confluent because it is orthogonal, but it does not terminate.
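The following sketch (our own encoding; fresh-variable handling is omitted for brevity) illustrates how the first two rules of figure 3 rewrite membership atoms into compound propositions, so that in set theory rewriting acts on propositions rather than on terms.

```python
# A small sketch (ours) of the first two rules of figure 3.  Propositions
# and terms are nested tuples; "in"/"eq"/"or"/"and"/"exists" tag
# propositions, "pair"/"union" tag terms.

def rw(p):
    """One rewriting pass, applied recursively."""
    if not isinstance(p, tuple):
        return p
    if p[0] == "in":
        w, t = p[1], p[2]
        if isinstance(t, tuple) and t[0] == "pair":    # w ∈ {}(x,y) -> w=x ∨ w=y
            return ("or", ("eq", w, t[1]), ("eq", w, t[2]))
        if isinstance(t, tuple) and t[0] == "union":   # w ∈ ⋃(x) -> ∃z (w∈z ∧ z∈x)
            return ("exists", "z", ("and", ("in", w, "z"), ("in", "z", t[1])))
    return tuple(rw(q) for q in p)

def normalize(p):
    q = rw(p)
    while q != p:
        p, q = q, rw(q)
    return p

print(normalize(("in", "d", ("union", ("pair", "a", "b")))))
# ('exists', 'z', ('and', ('in', 'd', 'z'),
#                  ('or', ('eq', 'z', 'a'), ('eq', 'z', 'b'))))
```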
Deduction modulo this congruence does not have the cut elimination property. A counter example is again Crabbé's proposition (see [START_REF] Hallnäs | On normalization of proofs in set theory[END_REF][START_REF] Ekman | Normal proofs in set theory[END_REF] for a discussion). Taking c = f_{w,¬w∈w}(a), the proposition c ∈ c rewrites, by the last rule of figure 3, to c ∈ a ∧ ¬c ∈ c; thus, writing A for c ∈ c and B for c ∈ a, the proposition A rewrites to a proposition of the form B ∧ ¬A. Hence, the proposition ¬B has the following proof, written top-down with the premises of each rule indented below its conclusion:

  ⊢ ¬B                           (¬-right)
    B ⊢                          (cut on A)
      B ⊢ A                      (∧-right, since A ≡ B ∧ ¬A)
        B ⊢ B                    (axiom)
        B ⊢ ¬A                   (weakening-left)
          ⊢ ¬A                   (¬-right)
            A ⊢                  (contraction-left)
              A, A ⊢             (∧-left)
                A, B, ¬A ⊢       (¬-left)
                  A, B ⊢ A       (weakening-left + axiom)
      A ⊢                        (contraction-left)
        A, A ⊢                   (∧-left)
          A, B, ¬A ⊢             (¬-left)
            A, B ⊢ A             (weakening-left + axiom)

but it is easy to check that the proposition ¬B, i.e. ¬f_{w,¬w∈w}(a) ∈ a, has no cut free proof.

3 Resolution modulo in type theory and in set theory

3.1 Resolution modulo in type theory

In the rewrite system of figure 2, the first two rules

  (S x y z) → (x z (y z))
  (K x y) → x

rewrite terms to terms and are used by the unification algorithm. The three others

  ε(¬ x) → ¬ε(x)
  ε(∨ x y) → ε(x) ∨ ε(y)
  ε(∀ x) → ∀y ε(x y)

rewrite propositions to propositions and are used by the extended narrowing rule. Equational unification modulo the rules S and K is related to higher-order unification. Actually, since the reduction of combinators is slightly weaker than the reduction of λ-calculus, unification modulo this reduction is slightly weaker than higher-order unification [START_REF] Dougherty | Higher-order unification via combinators[END_REF]. To have genuine higher-order unification, we have to take another formulation of type theory using explicit substitutions instead of combinators (see section 5). The extended narrowing modulo the rules ¬, ∨ and ∀ is exactly the splitting rule of higher-order resolution. A normal literal unifies with the left member of such a rule if and only if its head symbol is a variable. The skolemization rule in this language is related to the skolemization rule of type theory. When we skolemize a proposition of the form ∀x ∃y P, we introduce a function symbol f of rank (T, U), where T is the type of x and U the type of y (not an individual symbol of type T → U), and the axiom ∀x [f(x)/y]P. Hence, the Skolem symbol f alone is not a term, but it permits to build a term of type U when we apply it to a term of type T. This is, in essence, the higher-order skolemization rule, but formulated for the language of combinators and not for λ-calculus. Again, we have the genuine higher-order skolemization rule if we use the formulation of type theory using explicit substitutions instead of combinators (see section 5).

3.2 Resolution modulo in set theory

In set theory, there is no rule rewriting terms to terms. Hence, unification in set theory is simply first-order unification. Conversely, all the rules of figure 3 rewrite propositions to propositions, and thus the extended narrowing is performed modulo all these rules. In set theory, resolution modulo is incomplete. We have seen that the proposition ¬f_{w,¬w∈w}(a) ∈ a has a proof in set theory, but it cannot be proved by the resolution modulo method. Indeed, from the clausal form of its negation

  f_{w,¬w∈w}(a) ∈ a

we can apply neither the resolution rule nor the extended narrowing rule successfully.

4 On the differences between set theory and type theory

Termination

The first difference between resolution modulo in type theory and in set theory is that the rewrite system is terminating in type theory, and hence all propositions have a normal form, while some propositions, e.g. Crabbé's proposition, have no normal form in set theory.
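A two-line experiment (our own encoding of Crabbé's rule, not code from the paper) makes the non-termination concrete: each rewrite of the atom A reintroduces A under one more negation, so naive normalization never stops.

```python
# A toy encoding (ours) of Crabbe's rule A -> B /\ ~A: propositions are
# plain strings and one rewrite step replaces the first occurrence of A.

def step(p):
    return p.replace("A", "(B /\\ ~A)", 1)

p = "A"
for i in range(4):
    p = step(p)
    print(i + 1, p)
# step 4 prints (B /\ ~(B /\ ~(B /\ ~(B /\ ~A)))): the atom A is never
# eliminated, so the proposition has no normal form.
```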
Hence, during proof search, we can normalize all the clauses in type theory, while this is impossible in set theory. Formally, the method modified this way requires a completeness proof.

Completeness

Another difference is that, as type theory verifies the cut elimination property, resolution modulo this congruence is complete, while it is incomplete modulo the congruence of set theory. A solution to recover completeness may be to use an automated theorem proving method that searches for proofs containing cuts. For instance, if we add a rule allowing to refute the set of clauses S by refuting both the set S ∪ {¬P} and the set {P}, then we can refute the proposition B above. Another direction is to search for another presentation of set theory, or for a restriction of this theory, that enjoys termination and cut elimination. We conjecture that if we restrict the subset scheme to stratifiable propositions in the sense of W.V.O. Quine [START_REF] Quine | Set theory and its logic[END_REF], we get a restriction of set theory that is sufficient to express most mathematics, that terminates and that verifies the cut elimination property. The cut elimination and completeness results obtained by S.C. Bailin [START_REF] Bailin | A normalization theorem for set theory[END_REF][START_REF] Bailin | A λ-unifiability test for set theory[END_REF] for his formulation of set theory let this conjecture be plausible.

Typing literals

A minor difference is that when we try to prove a theorem of the form "for all natural numbers x, P(x)", we have to formalize this theorem by the proposition ∀x (x ∈ N ⇒ P(x)) in set theory. In contrast, in type theory, we can choose to take ι for the type of natural numbers and state the theorem ∀x P(x). During the search, in set theory, extra literals of the form x ∈ N appear and have to be resolved.

The role of unification and extended narrowing

In resolution modulo, like in most other methods, the main difficulty is to construct the terms that have to be substituted for the variables. In resolution modulo, these terms are constructed by two processes: the unification algorithm and the extended narrowing rule. The main difference between resolution modulo in type theory and in set theory is the division of work between unification and extended narrowing. In type theory, unification is quite powerful and the extended narrowing is rarely used. In contrast, in set theory, unification is simply first-order unification and all the work is done by the extended narrowing rule. This difference reflects a deep difference in how mathematics is formalized in one theory and the other. Indeed, unification in type theory is rich because there are rules that rewrite terms to terms, and these rules are there because the notion of function is primitive in type theory. When we have a function f and an object a, we can form the term (f a) and start rewriting this term to a normal form. In set theory, there is no such term, and a term alone can never be reduced. Instead of forming the term (f a), we can form a proposition expressing that b is the image of a by the function f, ⟨a, b⟩ ∈ f, which then can be rewritten. For example, in the proof of Cantor's theorem we have a function f from a set B to its power set, and we want to form Cantor's set of objects that do not belong to their image.
If x is an element of B, in type theory we can express its image (f x), then the term of type o reflecting the proposition expressing that x belongs to its image (f x x), the term of type o reflecting its negation ¬(f x x), and then Cantor's set λx ¬(f x x), which, with combinators, is expressed by the term

  C = (S (K ¬) (S (S (K f) (S K K)) (S K K)))

In contrast, in set theory, we cannot form a term expressing the image of x by the function f. Instead of saying that x does not belong to its image, we have to say that it does not belong to any object that happens to be its image:

  C = {x ∈ B | ∀y (⟨x, y⟩ ∈ f ⇒ ¬x ∈ y)}

This requires introducing two more logical symbols, ⇒ and ∀. These symbols cannot be generated by the unification algorithm and are generated by the extended narrowing rule. It is not completely clear what the best division of work between unification and extended narrowing is. Experience with type theory shows that the unification algorithm is usually well controlled while the splitting rule is very productive. Loading the unification and unloading the extended narrowing seems to improve efficiency. However, two remarks moderate this point of view. First, in type theory, the functions that can be expressed by a term are very few. For instance, if we take the type ι for the natural numbers and introduce two symbols O and Succ for zero and the successor function, we can only express by a term the constant functions and the functions adding a constant to their argument. The other functions are usually expressed with the description operator (or the choice operator) and hence as relations. We may enrich the language of combinators and the rewrite system, for instance with primitive recursion, but then it is not obvious that unification is still so well controlled. Another remark is that having a decidable and unitary unification (such as first-order unification) permits solving unification problems on the fly instead of keeping them as constraints. This permits to restrict the use of the extended narrowing rule. For instance, in type theory, when we have a literal ε(P x) and we apply the extended narrowing rule, yielding two literals ε(A) and ε(B) and a constraint ε(P x) = ε(A ∨ B), we keep this constraint frozen, and we may need to apply the extended narrowing rule to other literals starting with the variable P. In contrast, in set theory, if we have a literal x ∈ P and we apply the extended narrowing rule, we get two literals y = a and y = b and a constraint (x ∈ P) = (y ∈ {a, b}). The substitution {a, b}/P can be immediately propagated to all the occurrences of P, initiating reductions that make further extended narrowing steps useless. As an illustration of this discussion, we want to compare resolution modulo proofs of Cantor's theorem in type theory and in set theory. However, the presentations of type theory and set theory above are a little too rough to be really practicable. In both cases, we shall use a more sophisticated presentation where the language contains a full binding operator. Indeed, in type theory, we want to express Cantor's set by the term

  C = λx ¬(f x x)

and not by the term

  C = (S (K ¬) (S (S (K f) (S K K)) (S K K)))

Similarly, in set theory we want to express this set as

  C = {x ∈ B | ∀y (⟨x, y⟩ ∈ R ⇒ ¬x ∈ y)}

where ⟨x, y⟩ is a notation for the set {{x, y}, {x}}, i.e.
{}({}(x, y), {}(x, x)), and not by the term

  C = {x ∈ B | ∀y (∀u ((∀v (v ∈ u ⇔ (∀w (w ∈ v ⇔ (w = x ∨ w = y)) ∨ ∀w (w ∈ v ⇔ w = x)))) ⇒ u ∈ R) ⇒ ¬x ∈ y)}

(In the presentation of set theory above, there is no instance of the subset scheme for the proposition ∀y (⟨x, y⟩ ∈ R ⇒ ¬x ∈ y) because it contains Skolem symbols. Hence, we replace the proposition ⟨x, y⟩ ∈ R by the equivalent one ∀u ((∀v (v ∈ u ⇔ (∀w (w ∈ v ⇔ (w = x ∨ w = y)) ∨ ∀w (w ∈ v ⇔ w = x)))) ⇒ u ∈ R). Then we can build the set C with the function symbol introduced by the skolemization of this instance of the scheme. The proposition x ∈ C is then provably equivalent to x ∈ B ∧ ∀y (⟨x, y⟩ ∈ R ⇒ ¬x ∈ y), but it does not reduce to it.)

5 Type theory with explicit substitutions

For type theory, such a first-order presentation with a general binding operator has been proposed in [START_REF] Dowek | HOL-λσ: an intentional first-order expression of higher-order logic[END_REF]. It uses an expression of λ-calculus as a first-order language based on de Bruijn indices and explicit substitutions. In this presentation, the sorts are of the form Γ ⊢ T or Γ ⊢ ∆, where T is a simple type and Γ and ∆ are finite sequences of simple types. The language contains the following symbols:

- 1_{Γ,A} of sort AΓ ⊢ A,
- α_{Γ,A,B} of rank (Γ ⊢ A → B, Γ ⊢ A, Γ ⊢ B),
- λ_{Γ,A,B} of rank (AΓ ⊢ B, Γ ⊢ A → B),
- []_{Γ,Γ′,A} of rank (Γ′ ⊢ A, Γ ⊢ Γ′, Γ ⊢ A),
- id_Γ of sort Γ ⊢ Γ,
- ↑_{Γ,A} of sort AΓ ⊢ Γ,
- ._{Γ,Γ′,A} of rank (Γ ⊢ A, Γ ⊢ Γ′, Γ ⊢ AΓ′),
- ∘_{Γ,Γ′,Γ′′} of rank (Γ ⊢ Γ′′, Γ′′ ⊢ Γ′, Γ ⊢ Γ′),
- ∨ of sort ⊢ o → o → o,
- ¬ of sort ⊢ o → o,
- ∀_T of sort ⊢ (T → o) → o,
- ε of rank (⊢ o).

The rewrite system is that of figure 4:

β-reduction and η-reduction:
  (λa)b → a[b.id]
  λ(a 1) → b if a =σ b[↑]

σ-reduction:
  (a b)[s] → (a[s] b[s])
  1[a.s] → a
  a[id] → a
  (λa)[s] → λ(a[1.(s ∘ ↑)])
  (a[s])[t] → a[s ∘ t]
  id ∘ s → s
  ↑ ∘ (a.s) → s
  (s1 ∘ s2) ∘ s3 → s1 ∘ (s2 ∘ s3)
  (a.s) ∘ t → a[t].(s ∘ t)
  s ∘ id → s
  1.↑ → id
  1[s].(↑ ∘ s) → s

reduction of propositions:
  ε(∨ x y) → ε(x) ∨ ε(y)
  ε(¬ x) → ¬ε(x)
  ε(∀_T x) → ∀y ε(x y)

Figure 4: The rewrite rules of type theory with explicit substitutions.

A formulation of set theory with a general binder has been given in [START_REF] Dowek | Lambda-calculus, combinators and the comprehension scheme[END_REF], but it is not expressed in a first-order setting yet. Waiting for such a theory, for the example of Cantor's theorem, we add a constant C and an ad hoc rewrite rule

  x ∈ C → x ∈ B ∧ ∀y (⟨x, y⟩ ∈ R ⇒ ¬x ∈ y)

6 Three proofs of Cantor's theorem

We now give three resolution modulo proofs of Cantor's theorem that there is no surjection from a set to its power set. The first is in type theory, with a function expressing the potential surjection from a set to its power set. The second is also in type theory, but this potential surjection is expressed by a relation. The last one is in set theory, and the surjection is, of course, expressed by a relation. Automated theorem proving for Cantor's theorem in type theory is discussed in [START_REF] Huet | Constrained resolution: a complete method for higher order logic[END_REF][START_REF] Huet | A mechanization of type theory[END_REF][START_REF] Andrews | Automating higher-order logic[END_REF].

6.1 In type theory with a function

In type theory, a set is expressed by a term of type T → o. Here, we choose to consider only the set of all objects of type ι. Its power set is the set of all objects of type ι → o. Hence we want to prove that there is no surjection from the type ι to ι → o.
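Before turning to the formal searches, the diagonal argument behind the theorem can be checked exhaustively on a small finite set. The script below (our own finite illustration, not part of the paper's proofs) verifies that for every map f from B = {0, 1, 2} to its power set, the diagonal set {x ∈ B : x ∉ f(x)} is outside the image of f.

```python
# A finite illustration (ours, not part of the paper's searches): for every
# map f from B = {0, 1, 2} to its power set, the diagonal set
# C = {x in B : x not in f(x)} is missed by f, so no f is surjective.

from itertools import combinations, product

B = [0, 1, 2]
powerset = [frozenset(c) for k in range(len(B) + 1) for c in combinations(B, k)]

for values in product(powerset, repeat=len(B)):   # enumerate every f : B -> P(B)
    f = dict(zip(B, values))
    C = frozenset(x for x in B if x not in f[x])  # the diagonal set
    assert C not in f.values()
print("checked all", len(powerset) ** len(B), "maps: none is surjective")
```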
The first solution is to represent this potential surjection by a function f of type ι → ι → o. The surjectivity of this function can be expressed by the existence of a right-inverse g to this function, i.e. a function of type (ι → o) → ι such that for all x, (f (g x)) = x. Using Leibniz' definition of equality, this proposition is written

  ∀x ∀p (ε(p (f (g x))) ⇔ ε(p x))

Putting this proposition in clausal form yields the clauses

  ¬ε(P (f (g X))), ε(P X)
  ε(Q (f (g Y))), ¬ε(Q Y)

The search is described in figure 5:

  1. ¬ε(P (f (g X))), ε(P X)
  2. ε(Q (f (g Y))), ¬ε(Q Y)
  3. narr. (1): ¬ε(P (f (g X))), ¬ε(R) / c1
  4. narr. (2): ε(Q (f (g Y))), ε(S) / c2
  5. res. (3, 4): □ / c1, c2, c3, c4, c5

with

  c1: (P X) = ¬R
  c2: (Q Y) = ¬S
  c3: (P (f (g X))) = R
  c4: (Q (f (g Y))) = S
  c5: (P (f (g X))) = (Q (f (g Y)))

Figure 5: Cantor's theorem in type theory with a function.

It returns the empty clause constrained by c1, ..., c5, which have the solution

  X = Y = λ ¬[↑](f[↑] 1 1)
  P = Q = λ(1 (g[↑] λ ¬[↑²](f[↑²] 1 1)))
  R = S = (f (g λ ¬[↑](f[↑] 1 1)) (g λ ¬[↑](f[↑] 1 1)))

6.2 In type theory with a relation

Instead of using the primitive notion of function, we can code the functions as functional relations R of type ι → (ι → o) → o. The surjectivity and functionality of this relation are expressed by the propositions

  E: ∀y ∃x ε(R x y)
  F: ∀x ∀y ∀z (ε(R x y) ⇒ ε(R x z) ⇒ ∀p (ε(p y) ⇔ ε(p z)))

Putting these propositions in clausal form yields the clauses

  1. ε(R g(U) U)
  2. ¬ε(R X Y), ¬ε(R X Z), ¬ε(P Y), ε(P Z)
  3. ¬ε(R X Y), ¬ε(R X Z), ¬ε(P Z), ε(P Y)

The search is then described in figure 6, where we simplify the constraints and substitute the solved constraints at each step: resolving clause 1 with clause 2 yields ¬ε(R g(Y) Z), ¬ε(P Y), ε(P Z), and resolving clause 1 with this clause yields ¬ε(P Y), ε(P Y); five narrowing steps on the latter yield ¬ε(A1), ¬ε(B1), ε(P Y) / c1, c2, and further resolution and narrowing steps return the empty clause, constrained by the equations

  (P Y) = ¬ ∀G1
  (G1 W1) = ¬(R g(U1) U1) ∨ ¬ ∀G2
  (G2 y1) = ¬A2 ∨ ¬B2
  (G1 W′1) = ¬A2 ∨ ¬B2
  (P Y) = ¬ ∀G3
  (G3 y2) = ¬(R g(Y′) Z′) ∨ ¬B3
  (P Z′) = ¬B3
  (P Y′) = ¬ ∀G4
  (G4 W2) = ¬(R g(U2) U2) ∨ ¬ ∀G5
  (G5 y3) = ¬A5 ∨ ¬B5
  (G4 W′2) = ¬A5 ∨ ¬B5

6.3 In set theory

We consider a set B and a potential surjection from this set to its power set. We express this potential surjection by a set R. The surjectivity and functionality of this set are expressed by propositions E and F; we use also the axiom of equality

  L: ∀z ∀x ∀y (x = y ⇒ ¬z ∈ x ⇒ ¬z ∈ y)

The proposition E reduces, and putting the reduced proposition in clausal form yields a first group of clauses; the two other propositions yield further clauses. The search is described in figure 7, where we simplify the constraints and substitute the solved constraints at each step. Propagating the solved constraints may lead to new reductions that require putting the proposition in clausal form again; this explains why some resolution steps yield several clauses. It returns the empty clause.

Remarks

The termination and completeness issues are not addressed by these examples because, even in set theory, Cantor's theorem has a cut free proof and the search involves only terminating propositions. The proof in set theory is longer because several steps are dedicated to the treatment of typing literals that are repeatedly resolved [START_REF] Dowek | HOL-λσ: an intentional first-order expression of higher-order logic[END_REF]. In type theory with a function, only two extended narrowing steps are needed to generate the symbol ¬ in the term λ ¬[↑](f[↑] 1 1) (i.e. λx ¬(f x x)) that expresses Cantor's set. In type theory with a relation, four extended narrowing steps are needed to generate the term λ ∀[↑]λ(¬[↑²](R[↑²] 2 1) ∨[↑²] ¬[↑²](1 2)) (i.e. λx ∀λy (¬(R x y) ∨ ¬(y x))) that expresses Cantor's set. The term expressing Cantor's set is thus mostly constructed by the unification algorithm in the first case and mostly constructed by the extended narrowing rule in the second. In set theory, like in type theory with a relation, the term expressing Cantor's set is mostly constructed by the extended narrowing rule. In this case, a single step is needed because we have taken the ad hoc rule

  x ∈ C → x ∈ B ∧ ∀y (⟨x, y⟩ ∈ R ⇒ ¬x ∈ y)

but in a reasonable formulation of set theory several steps would be needed. Notice, at last, that in the proof in type theory with a relation, the term expressing Cantor's set is constructed several times, because the constraints are frozen, while in set theory, because the constraints are solved on the fly, this term is constructed only twice and propagated. To avoid this redundancy in type theory with a relation, it would be a good idea to solve as soon as possible the constraints c1 and c2.
Conclusion

Using a single automated theorem proving method for type theory and for set theory permits a comparison. Although the use of a typed (many-sorted) language can be criticized, type theory has several advantages for automated theorem proving: typing permits avoiding typing literals, the theory enjoys termination and cut elimination, and the possibility to form a term (f a) expressing the image of an object by a function avoids indirect definitions. This motivates the search for a type-free formalization of mathematics that also enjoys termination and cut elimination, and where functions are primitive.
00411277
en
[ "sdv.bibs", "info.info-bi" ]
2024/03/04 16:41:24
2009
https://inria.hal.science/inria-00411277/file/paper.pdf
Peter Clote email: [email protected] Evangelos Kranakis email: [email protected] Danny Krizanc email: [email protected] Bruno Salvy email: [email protected]

Asymptotics of Canonical and Saturated RNA Secondary Structures

It is a classical result of Stein and Waterman that the asymptotic number of RNA secondary structures is 1.104366 · n^{-3/2} · 2.618034^n. In this paper, we study combinatorial asymptotics for two special subclasses of RNA secondary structures: canonical and saturated structures. Canonical secondary structures are defined to have no lonely (isolated) base pairs. This class of secondary structures was introduced by Bompfünewerer et al., who noted that the run time of the Vienna RNA Package is substantially reduced when restricting computations to canonical structures. Here we provide an explanation for the speed-up, by proving that the asymptotic number of canonical RNA secondary structures is 2.1614 · n^{-3/2} · 1.96798^n and that the expected number of base pairs in a canonical secondary structure is 0.31724 · n. The asymptotic number of canonical secondary structures was obtained much earlier by Hofacker, Schuster and Stadler using a different method. We apply a theorem of Drmota to show that the density of states for [all resp. canonical resp. saturated] secondary structures is asymptotically Gaussian. We introduce a stochastic greedy method to sample random saturated structures, called quasi-random saturated structures, and show that the expected number of base pairs of quasi-random saturated structures is 0.340633 · n.

Introduction

Imagine an undirected graph, described by placing graph vertices 1, . . . , n along the periphery of a circle in a counter-clockwise manner, and placing graph edges as chords within the circle. An outerplanar graph is a graph whose circular representation is planar; i.e. there are no crossings. An RNA secondary structure, formally defined in Section 2, is an outerplanar graph (no pseudoknots) with the property that no vertex is incident to more than one edge (no base triples) and that for every chord between vertices i, j, there exist at least θ = 1 vertices between them that are not incident to any edge (hairpin requirement). An RNA secondary structure is equivalently defined to be a well-balanced parenthesis expression s1, . . . , sn with dots, where if nucleotide i is unpaired then si = •, while if there is a base pair between nucleotides i < j then si = ( and sj = ). This latter representation is known as the Vienna representation or dot bracket notation (dbn). Formally, a well-balanced parenthesis expression w1 · · · wn can be defined as follows. If Σ denotes a finite alphabet, α ∈ Σ, and w = w1 · · · wn ∈ Σ* is an arbitrary word, or sequence of characters drawn from Σ, then |w|_α designates the number of occurrences of α in w. Letting Σ = { ( , ) }, a word w = w1 · · · wn ∈ Σ* is well-balanced if for all 1 ≤ i < n, |w1 · · · wi|_( ≥ |w1 · · · wi|_) and |w1 · · · wn|_( = |w1 · · · wn|_). Finally, when considering RNA secondary structures, we consider instead the alphabet Σ = { ( , ) , • }, but otherwise the definition of well-balanced expression remains unchanged. The number of well-balanced parenthesis expressions of length n over the alphabet Σ = { ( , ) } is known as the Catalan number Cn, while that over the alphabet Σ = { ( , ) , • } is known as the Motzkin number Mn [START_REF] Donaghey | Motzkin numbers[END_REF].
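As a quick sanity check (our own script, not part of the paper), one can enumerate all words over { ( , ) , • } and verify that the well-balanced ones are counted by the Motzkin numbers, computed here from the standard recurrence M_{n+1} = M_n + Σ_{k=0}^{n−1} M_k M_{n−1−k}.

```python
# A brute-force check (ours): the number of well-balanced words over the
# alphabet { (, ), . } of length n is the Motzkin number M_n ("." plays
# the role of the dot symbol).

from itertools import product

def balanced(w):
    depth = 0
    for c in w:
        depth += {"(": 1, ")": -1, ".": 0}[c]
        if depth < 0:
            return False
    return depth == 0

def motzkin(n):        # M_0 = 1, M_{n+1} = M_n + sum_{k<n} M_k M_{n-1-k}
    m = [1, 1]
    for i in range(1, n):
        m.append(m[i] + sum(m[k] * m[i - 1 - k] for k in range(i)))
    return m[n]

for n in range(1, 9):
    assert sum(balanced(w) for w in product("().", repeat=n)) == motzkin(n)
print([motzkin(n) for n in range(1, 9)])   # 1, 2, 4, 9, 21, 51, 127, 323
```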
Stein and Waterman [START_REF] Stein | On some new sequences generalizing the Catalan and Motzkin numbers[END_REF] computed the number Sn of well-balanced parenthesis expressions in the alphabet Σ = { ( , ) , • }, where there exist at least θ = 1 occurrences of • between corresponding left and right parentheses ( respectively ). It follows that Sn is exactly the number of RNA secondary structures on [1, n], where there exist at least θ = 1 unpaired bases in every hairpin loop. In this paper, we are interested in specific classes of secondary structure: canonical and saturated structures. A secondary structure is canonical [START_REF] Bompfunewerer | Variations on RNA folding and alignment: lessons from Benasque[END_REF] if it has no lonely (isolated) base pairs. A secondary structure is saturated [START_REF] Zuker | RNA folding prediction: The continued need for interaction between biologists and mathematicians[END_REF] if no base pairs can be added without violating the notion of secondary structure, formally defined in Section 2. In order to compute parameters like the asymptotic number of structures, the expected number of base pairs, etc., throughout this paper we adopt the model of Stein and Waterman [START_REF] Stein | On some new sequences generalizing the Catalan and Motzkin numbers[END_REF]. In this model, any position (nucleotide, also known as base) can pair with any other position, and every hairpin loop must contain at least θ = 1 unpaired bases; i.e. if i, j are paired, then j − i > θ. This latter condition is due to steric constraints for RNA. At the risk of additional effort, the combinatorial methods of this paper could be applied to handle the situation of most secondary structure software, which set θ = 3.

Examples of secondary structure representations

Equivalent representations for the same secondary structure may be produced by the software jViz [START_REF] Wiese | Rna-a Java tool for RNA secondary structure visualization[END_REF], as depicted in Figure 1. The left panel of this figure depicts the circular Feynman diagram (i.e. outerplanar graph representation), the middle panel depicts the linear Feynman diagram, and the right panel depicts the classical representation. This latter representation, most familiar to biologists, may also be obtained by RNAplot from the Vienna RNA Package [START_REF] Hofacker | Vienna RNA secondary structure server[END_REF]. (Figure 1: the depicted structure is taken from the 5S ribosomal RNA database [START_REF] Szymanski | 5S ribosomal RNA database Y2K[END_REF]; the graph was created using jViz [START_REF] Wiese | Rna-a Java tool for RNA secondary structure visualization[END_REF].)

Outline and results of the paper

In Section 2, we review a combinatorial method, known as the DSV methodology, and the important Flajolet-Odlyzko Theorem, which allows one to obtain asymptotic values of the Taylor coefficients of analytic generating functions f(z) = Σ_{i≥1} a_i z^i by determining the dominant singularity of f. The description of the DSV methodology and Flajolet-Odlyzko theorem is not meant to be self-contained, although we very briefly describe the broad outline. For a very clear review of this method, with a number of example applications, please see [START_REF] Lorenz | Asymptotics of rna shapes[END_REF] or the recent monograph of Flajolet and Sedgewick [START_REF] Flajolet | Analytic Combinatorics[END_REF].
In Section 2.1, we compute the asymptotic number 2.1614 · n^{-3/2} · 1.96798^n of canonical secondary structures, obtaining the same value obtained by Hofacker, Schuster and Stadler [START_REF] Hofacker | Combinatorics of RNA secondary structures[END_REF] by a different method, known as the Bender-Meir-Moon method. In Section 2.2 we compute the expected number 0.31724 · n of base pairs in canonical secondary structures. In Section 2.3, we apply the DSV methodology to compute the asymptotic number 1.07427 · n^{-3/2} · 2.35467^n of saturated structures, while in Section 2.4, we compute the expected number 0.337361 · n of base pairs of saturated structures. In Section 2.5, we compute the asymptotic number 0.323954 · 1.69562^n of saturated stem-loop structures, which is substantially smaller than the number 2^{n-2} − 1 of (all) stem-loop structures, as computed by Stein and Waterman [START_REF] Stein | On some new sequences generalizing the Catalan and Motzkin numbers[END_REF]. In Section 3, we consider a natural stochastic process to generate random saturated structures, called in the sequel quasi-random saturated structures. The stochastic process adds base pairs, one at a time, according to the uniform distribution, without violating any of the constraints of a structure. The main result of this section is that, asymptotically, the expected number of base pairs in quasi-random saturated structures is 0.340633 · n, rather close to the expected number 0.337361 · n of base pairs of saturated structures. The numerical proximity of these two values suggests that stochastic greedy methods might find application in other areas of random graph theory. In Section 4 we provide some concluding remarks. At the web site http://bioinformatics.bc.edu/clotelab/SUPPLEMENTS/JBCBasymptotics/, we have placed Python programs and Mathematica code used in computing and checking the asymptotic number of canonical and saturated secondary structures, as well as the Maple code for checking Drmota's [START_REF] Drmota | Systems of functional equations[END_REF] conditions to deduce the asymptotic normality of the density of states of RNA structures.

DSV methodology

In this section, we describe a combinatorial method sometimes called the DSV methodology, after Delest, Schützenberger and Viennot, which is a special case of what is called the symbolic method in combinatorics, described at length in [START_REF] Flajolet | Analytic Combinatorics[END_REF]. See also the Appendix of [START_REF] Lorenz | Asymptotics of rna shapes[END_REF] for a detailed presentation of this method. This method enables one to obtain information on the number of combinatorial configurations defined by finite rules, for any size. This is done by translating those rules into equations satisfied by various generating functions. A second step is to extract asymptotic expansions from these equations; this is done by studying the singularities of these generating functions viewed as analytic functions. Since our goal is to derive asymptotic numbers of structures, following standard convention we define an RNA secondary structure on a length n sequence to be a set S of ordered pairs (i, j), such that 1 ≤ i < j ≤ n and the following are satisfied.

1. Nonexistence of pseudoknots: If (i, j) and (k, ℓ) belong to S, then it is not the case that i < k < j < ℓ.

2. No base triples: If (i, j) and (i, k) belong to S, then j = k; if (i, j) and (k, j) belong to S, then i = k.
3. Threshold requirement: If (i, j) belongs to S, then j − i > θ, where θ, generally taken to be equal to 3, is the minimum number of unpaired bases in a hairpin loop; i.e. there must be at least θ unpaired bases in a hairpin loop.

Note that the definition of secondary structure does not mention nucleotide identity; i.e. we do not require base-paired positions (i, j) to be occupied by Watson-Crick or wobble pairs. For this reason, at times we may say that S is a secondary structure on [1, n], rather than saying that S is a structure for an RNA sequence of length n. In particular, an expression such as "the asymptotic number of structures is f(n)" means that the asymptotic number of structures on [1, n] is f(n).

Grammars

We now proceed with basic definitions related to context-free grammars. If A is a finite alphabet, then A* denotes the set of all finite sequences (called words) of characters drawn from A. Let Σ be the set consisting of the symbols for left parenthesis ( , right parenthesis ) , and dot •, used to represent a secondary structure in Vienna notation. A context-free grammar (see, e.g., [START_REF] Lewis | Elements of the Theory of Computation[END_REF]) for RNA secondary structures is given by G = (V, Σ, R, S0), where V is a finite set of nonterminal symbols (also called variables), Σ = { •, ( , ) }, S0 ∈ V is the start nonterminal, and R ⊆ V × (V ∪ Σ)* is a finite set of production rules. Elements of R are usually denoted by A → w, rather than (A, w). If rules A → α1, . . . , A → αm all have the same left-hand side, then this is usually abbreviated by A → α1 | · · · | αm. If x, y ∈ (V ∪ Σ)*, then y is derivable from x in one step, written x ⇒_G y, if y is obtained from x by replacing an occurrence of some nonterminal A in x by the right-hand side w of a rule A → w in R; writing ⇒*_G for the reflexive, transitive closure of ⇒_G, the language generated by G is L(G) = {w ∈ Σ* : S0 ⇒*_G w}. For any nonterminal S ∈ V, we also write L(S) to denote the language generated by rules from G when using start symbol S. A derivation of word w from start symbol S0 using grammar G is a leftmost derivation if each successive rule application is applied to replace the leftmost nonterminal occurring in the intermediate expression. A context-free grammar G is non-ambiguous if there is no word w ∈ L(G) which admits two distinct leftmost derivations. This notion is important since it is only when applied to non-ambiguous grammars that the DSV methodology leads to exact counts. For the sake of readers unfamiliar with context-free grammars, we present some examples to illustrate the previous concepts. Consider the following grammar G, which generates the collection of well-balanced parenthesis strings, including the empty string. Define G = (V, Σ, R, S), where the set V of variables (also known as nonterminals) is {S}, the set Σ of terminals is { ( , ) }, S is the start symbol, and the set R of rules is given by

  S → ε | ( S ) | SS

Here ε denotes the empty string. We claim that G is an ambiguous grammar. Indeed, consider the following two leftmost derivations, where we indicate the order of rule applications r1 := S → ε, r2 := S → SS, r3 := S → ( S ), by placing the rule designator under the arrow. Clearly the leftmost derivation

  S →r2 SS →r2 SSS →r3,r1 ( ) SS →r3,r1 ( ) ( ) S →r3,r1 ( ) ( ) ( )

is distinct from the leftmost derivation

  S →r2 SS →r3,r1 ( ) S →r2 ( ) ( S ) S →r3,r1 ( ) ( ) S →r2 ( ) ( ) ( S ) →r1 ( ) ( ) ( )

yet both generate the same well-balanced parenthesis string. For the same reason, the grammar with rules

  S → • | • S | ( S ) | SS
generates precisely the collection of non-empty RNA secondary structures, yet this grammar is ambiguous, and we would obtain an overcount by applying the DSV methodology. In contrast, the grammar whose rules are

  S → • | • S | ( S ) | ( S ) S

is easily seen to be non-ambiguous and to generate all non-empty RNA secondary structures.

Table 1: Translation between context-free grammars and generating functions. Here, G = (V, Σ, R, S0) is a given context-free grammar, S, T and U are any nonterminal symbols in V, and t is a terminal symbol in Σ. The generating functions for the languages L(S), L(T), L(U) are respectively denoted by S(z), T(z), U(z).

  Type of nonterminal    Equation for the g.f.
  S → T | U              S(z) = T(z) + U(z)
  S → T U                S(z) = T(z)U(z)
  S → t                  S(z) = z
  S → ε                  S(z) = 1

Generating Functions

Suppose that G = (V, Σ, R, S) is a non-ambiguous context-free grammar which generates a collection L(S) of objects (e.g. canonical secondary structures). To this grammar is associated a generating function S(z) = Σ_{n≥0} s_n z^n, such that the nth Taylor coefficient [z^n]S(z) = s_n represents the number of objects we wish to count. In the sequel, s_n will represent the number of canonical secondary structures for RNA sequences of length n. The DSV method uses Table 1 in order to translate the grammar rules of R into a system of equations for the generating functions.

Asymptotics

In the sequel, we often compute the asymptotic value of the Taylor coefficients of generating functions by first applying the DSV methodology, then using a simple corollary of a result of Flajolet and Odlyzko [START_REF] Flajolet | Singularity analysis of generating functions[END_REF]. That corollary is restated here as the following theorem.

Theorem 1 (Flajolet and Odlyzko) Assume that S(z) has a singularity at z = ρ > 0, is analytic in the rest of the region △ \ {ρ} depicted in Figure 2, and that, as z → ρ in △,

  S(z) ∼ K(1 − z/ρ)^α.   (1)

Then, as n → ∞, if α ∉ {0, 1, 2, ...},

  s_n ∼ (K / Γ(−α)) · n^{−α−1} · ρ^{−n}.

It is a consequence of Table 1 that the generating series of context-free grammars are algebraic (this is the celebrated theorem of Chomsky and Schützenberger [START_REF] Chomsky | The algebraic theory of context-free languages[END_REF]). In particular, this implies that they have positive radius of convergence, a finite number of singularities, and their behaviour in the neighborhood of their singularities is of the type (1). (See [START_REF] Flajolet | Analytic Combinatorics[END_REF] for an extensive treatment.) A singularity of minimal modulus as in Theorem 1 is called a dominant singularity. The location of the dominant singularity may be a source of difficulty. The simple case is when an explicit expression is obtained for the generating functions; this happens for canonical secondary structures. The situation when only the system of polynomial equations is available is more involved; we show how to deal with it in the case of saturated structures.

Asymptotic number of canonical secondary structures

In Bompfünewerer et al. [START_REF] Bompfunewerer | Variations on RNA folding and alignment: lessons from Benasque[END_REF], a canonical secondary structure S is defined as a secondary structure having no lonely (isolated) base pairs; i.e. formally, there are no base pairs (i, j) ∈ S for which both (i − 1, j + 1) ∉ S and (i + 1, j − 1) ∉ S. In this section, we compute the asymptotic number of canonical secondary structures.
Throughout this section, secondary structure is interpreted to mean a secondary structure on an RNA sequence of length n, for which each base can pair with any other base (not simply Watson-Crick and wobble pairs), and with the minimum number θ of unpaired bases in every hairpin loop set to be 1. At the cost of working with more complex expressions, by the same method one could analyze the case where θ = 3, which is assumed for the software mfold [START_REF] Zuker | Mfold web server for nucleic acid folding and hybridization prediction[END_REF] and RNAfold [START_REF] Hofacker | Vienna RNA secondary structure server[END_REF].

Grammar

Consider the context-free grammar G = (V, Σ, R, S), where V consists of the nonterminals S, R, Σ consists of the terminals •, ( , ) , S is the start symbol and R consists of the following rules:

  S → • | S • | ( R ) | S ( R )   (2)
  R → ( • ) | ( R ) | ( S ( R ) ) | ( S • )

The nonterminal S is intended to generate all nonempty canonical secondary structures. In contrast, the nonterminal R is intended to generate all secondary structures which become canonical when surrounded by a closing set of parentheses. We prove by induction on expression length that the grammar G is non-ambiguous and generates all nonempty canonical secondary structures. Define the context-free grammar G_R to consist of the collection R of rules from G, defined above, with starting nonterminal R; formally, G_R = (V, Σ, R, R). Let L(G), L(G_R) denote the languages generated respectively by the grammars G, G_R. Now define the languages L1, L2 of nonempty secondary structures with θ = 1 by

  L1 = {S : S is canonical}
  L2 = {S : ( S ) is canonical}.

Note that structures like • • ( • ) and ( • ) ( • ) belong to L1, but not to L2, while structures like ( ( • ) ) belong to both L1 and L2. Note that any structure S belonging to L2 must be of the form ( S0 ); indeed, if S were not of this form, but rather of the form either • S0 or ( S0 ) S1, then ( S ) would have an outermost lonely pair of parentheses.

Claim. L1 = L(G), L2 = L(G_R).

Proof of Claim. Clearly L1 ⊇ L(G) and L2 ⊇ L(G_R), so we show the reverse inclusions by induction; i.e. by induction on n, we prove that L1 ∩ Σ^n ⊆ L(G) ∩ Σ^n and L2 ∩ Σ^n ⊆ L(G_R) ∩ Σ^n.

Base case: n = 1. Clearly L(G) ∩ Σ = { • } = L1 ∩ Σ and L(G_R) ∩ Σ = ∅ = L2 ∩ Σ.

Induction case: Assume that the claim holds for all n < k.

Subcase 1. Let S be a canonical secondary structure with length |S| = k > 1. Then either (1) S = • S0, where S0 ∈ L1, or (2) S = ( S0 ), where S0 ∈ L2, or (3) S = ( S0 ) S1, where S0 ∈ L2 and S1 ∈ L1. Each of these cases corresponds to a different rule having left side S; hence, by the induction hypothesis, it follows that S ∈ L(G).

Subcase 2. Let S ∈ L2 be a secondary structure with length |S| = k > 1, for which ( S ) is canonical. If S were of the form • S0 or ( S0 ) S1, then ( S ) would not be canonical, since its outermost parenthesis pair would be a lonely pair. Thus S is of the form ( S0 ), where either (1) S0 begins with •, or (2) S0 is of the form ( S1 ), where S1 is not canonical, but ( S1 ) becomes canonical, or (3) S0 is of the form ( S1 ), where S1 is canonical and ( S1 ) is canonical as well. In case (1), S0 is either • or • S1, where S1 is canonical. In case (2), S0 is of the form ( S1 ), where S1 must have the property that ( S1 ) is canonical. In case (3), S0 is of the form ( S1 ) S2, where it must be that ( S1 ) is canonical and S2 is canonical.
By applying corresponding rules and the induction hypothesis, it follows that $S \in L(G_R)$. It now follows by induction that $L_1 = L(G)$ and $L_2 = L(G_R)$. A similar proof by induction shows that the grammar $G$ is non-ambiguous.

Generating Functions

Now, let $s_n$ denote the number of canonical secondary structures on a length $n$ RNA sequence. Then $s_n$ is the $n$th Taylor coefficient of the generating function $S(z) = \sum_{n \ge 0} s_n z^n$, denoted by $s_n = [z^n]S(z)$. Similarly, let $R(z) = \sum_{n \ge 0} R_n z^n$ be the generating function for the number of secondary structures on $[1, n]$ with $\theta = 1$, which become canonical when surrounded by a closing set of parentheses. By Table 1, the non-ambiguous grammar (2) gives the following equations
$$S(z) = z + S(z)\,z + R(z)\,z^2 + S(z)R(z)\,z^2 \qquad (3)$$
$$R(z) = z^3 + R(z)\,z^2 + S(z)R(z)\,z^4 + S(z)\,z^3 \qquad (4)$$
which can be solved explicitly (solve the second equation for $R$ and inject this in the first equation):
$$S(z) = \frac{1 - z - z^2 + z^3 - z^5 - \sqrt{F(z)}}{2z^4} \qquad (5)$$
and
$$S(z) = \frac{1 - z - z^2 + z^3 - z^5 + \sqrt{F(z)}}{2z^4} \qquad (6)$$
where
$$F(z) = 4z^5\left(-1 + z^2 - z^4\right) + \left(-1 + z + z^2 - z^3 + z^5\right)^2. \qquad (7)$$
When evaluated at $z = 0$, Equation (6) gives $\lim_{z \to 0} S(z) = \infty$. Since $S(z)$ is known to be analytic at 0, we conclude that $S(z)$ is given by (5).

Location of the dominant singularity

The square root function $\sqrt{z}$ has a singularity at $z = 0$, so we are led to investigate the roots of $F(z)$. A numerical computation with Mathematica gives the 10 roots 0.508136, 4.11674, -0.868214 - 0.619448i, -0.868214 + 0.619448i, -0.799805 - 0.367046i, -0.799805 + 0.367046i, 0.410134 - 0.564104i, 0.410134 + 0.564104i, 0.945448 - 0.470929i, 0.945448 + 0.470929i. It follows that $\rho = 0.508136$ is the root of $F(z)$ having smallest (complex) modulus.

Asymptotics

Let $T(z) = \frac{1 - z - z^2 + z^3 - z^5}{2z^4}$ and factor $1 - z/\rho$ out of $F(z)$ to obtain $Q(z)(1 - z/\rho) = F(z)$. It follows that
$$S(z) - T(\rho) = -\frac{\sqrt{Q(\rho)}}{2\rho^4}\,(1 - z/\rho)^{\alpha} + O(1 - z/\rho), \quad z \to \rho,$$
where $\alpha = 1/2$. This shows that $\rho$ is indeed a dominant singularity for $S$. Note that for each $n \ge 1$, $S(z)$ and $S(z) - T(\rho)$ have the same Taylor coefficient of index $n$, namely $s_n$. Now, it is a direct consequence of Theorem 1 that
$$s_n \sim \frac{K(\rho)}{\Gamma(-\alpha)}\; n^{-\alpha - 1}\,(1/\rho)^n, \quad n \to \infty \qquad (8)$$
where $\alpha = 1/2$ and $K(z) = -\frac{\sqrt{Q(z)}}{2z^4}$. Plugging $\rho = 0.508136$ into equation (8), we derive the following theorem, first obtained by Hofacker, Schuster and Stadler [START_REF] Hofacker | Combinatorics of RNA secondary structures[END_REF] by a different method.

Theorem 2 The asymptotic number of canonical secondary structures on $[1, n]$ is
$$2.1614 \cdot n^{-3/2} \cdot 1.96798^n. \qquad (9)$$

Asymptotic expected number of base pairs in canonical structures

In this section, we derive the expected number of base pairs in canonical secondary structures on $[1, n]$.

Generating Functions

The DSV methodology is actually able to produce multivariate generating series. Modifying the equations (3)-(4) by adding a new variable $u$, intended to count the number of base pairs, we get
$$S(z, u) = z + S(z, u)\,z + R(z, u)\,u z^2 + S(z, u) R(z, u)\,u z^2 \qquad (10)$$
$$R(z, u) = u z^3 + R(z, u)\,u z^2 + S(z, u) R(z, u)\,u^2 z^4 + S(z, u)\,u z^3. \qquad (11)$$
This can be solved as before to yield the solution‡
$$S(z, u) = \sum_{n \ge 0} \sum_{k \ge 0} s_{n,k}\, z^n u^k = \frac{1 - z - u z^2 + u z^3 - u^2 z^5 - \sqrt{4 u^2 z^5\,(-1 + u z^2 - u^2 z^4) + (-1 + z + u z^2 - u z^3 + u^2 z^5)^2}}{2 u^2 z^4}.$$
Here, the coefficient $s_{n,k}$ is the number of canonical secondary structures of size $n$ with $k$ base pairs.
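As a quick numerical sanity check (our own sketch, with sympy as an assumed dependency, not part of the paper), one can expand the closed form (5) with a computer algebra system and compare the exact coefficients with the asymptotic estimate of Theorem 2:

```python
# Expand the closed form (5) and compare exact counts s_n with the
# asymptotic estimate 2.1614 * n^(-3/2) * 1.96798^n.
import sympy as sp

z = sp.symbols('z')
F = 4*z**5*(-1 + z**2 - z**4) + (-1 + z + z**2 - z**3 + z**5)**2
S = (1 - z - z**2 + z**3 - z**5 - sp.sqrt(F)) / (2*z**4)

N = 20
expansion = sp.series(S, z, 0, N + 1).removeO()
for n in (5, 10, 20):
    exact = expansion.coeff(z, n)
    estimate = 2.1614 * n**(-1.5) * 1.96798**n
    print(n, exact, round(float(estimate), 2))
```

Agreement is rough for small $n$ and improves slowly, since the correction to (9) is of relative order $1/n$.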
Using a classical observation on multivariate generating functions, we recover the expected number of base pairs in a canonical secondary structure on $[1, n]$ using the partial derivative of $S(z, u)$; indeed,
$$\frac{[z^n]\,\frac{\partial S(z,u)}{\partial u}(z, 1)}{[z^n]\,S(z, 1)} = \frac{[z^n]\sum_{i \ge 0}\sum_{k \ge 0} s_{i,k}\, z^i\, k\, u^{k-1}\big|_{u=1}}{s_n} = \frac{\sum_{k \ge 0} s_{n,k}\, k}{s_n} = \sum_{k \ge 0} k\,\frac{s_{n,k}}{s_n},$$
and $s_{n,k}/s_n$ is the (uniform) probability that a canonical secondary structure on $[1, n]$ has exactly $k$ base pairs. We compute that $G(z) = \frac{\partial S(z,u)}{\partial u}(z, 1)$ satisfies
$$G(z) = -\frac{(z^2 - 2)\left(T(z) - \sqrt{F(z)} + z\sqrt{F(z)}\right)}{2z^4\,\sqrt{F(z)}}$$
where $T(z) = 1 - 2z + 2z^3 - z^4 - 3z^5 + z^6$ and $F(z)$ is as in (7). Simplification yields
$$G(z) = -\frac{(z^2 - 2)(z - 1)}{2z^4} - \frac{T(z)(z^2 - 2)}{2z^4} \cdot \frac{1}{\sqrt{F(z)}}.$$

Asymptotics

From this expression, it is clear that the dominant singularity is again located at the same $\rho = 0.508136$. A local expansion there gives
$$G(z) \sim K(\rho)(1 - z/\rho)^{-1/2}, \quad z \to \rho$$
with $K(z) = -Q(z)^{-1/2}\,\frac{T(z)(z^2 - 2)}{2z^4}$. By Theorem 1, we obtain the asymptotic value
$$\frac{K(\rho)}{\Gamma(-\alpha)}\; n^{-\alpha - 1}\,(1/\rho)^n \qquad (12)$$
with $\alpha = -1/2$. Plugging $\rho = 0.508136$ into equation (12), we find that the asymptotic value of $[z^n]\frac{\partial S(z,u)}{\partial u}(z, 1)$ is
$$0.68568 \cdot n^{-1/2} \cdot 1.96798^n. \qquad (13)$$
Dividing (13) by the asymptotic number $[z^n]S(z)$ of canonical secondary structures, given in (9), we have the following theorem.

Theorem 3 The asymptotic expected number of base pairs in canonical secondary structures is $0.31724 \cdot n$.

Asymptotic number of saturated structures

An RNA secondary structure is saturated if it is not possible to add any base pair without violating the definition of secondary structures. If one models the folding of an RNA secondary structure as a random walk on a Markov chain (i.e. by the Metropolis-Hastings algorithm), then saturated structures correspond to kinetic traps with respect to the Nussinov energy model [START_REF] Nussinov | Fast algorithm for predicting the secondary structure of single stranded RNA[END_REF]. The asymptotic number of saturated structures was determined in [START_REF] Clote | Combinatorics of saturated secondary structures of RNA[END_REF] by using a method known as Bender's Theorem, as rectified by Meir and Moon [START_REF] Meir | On an asymptotic method in enumeration[END_REF]. In this section, we apply the DSV methodology to obtain the same asymptotic limit, and in the next section we obtain the expected number of base pairs of saturated structures.

Grammar

Consider the context-free grammar with nonterminal symbols $S$, $R$, terminal symbols $\bullet$, $($, $)$, start symbol $S$ and production rules
$$S \to \bullet \mid \bullet\,\bullet \mid R\,\bullet \mid R\,\bullet\,\bullet \mid (\,S\,) \mid S\,(\,S\,) \qquad (14)$$
$$R \to (\,S\,) \mid R\,(\,S\,) \qquad (15)$$
It can be shown by induction on expression length that $L(S)$ is the set of saturated structures, and $L(R)$ is the set of saturated structures with no visible position, i.e. external to every base pair [START_REF] Clote | Combinatorics of saturated secondary structures of RNA[END_REF]. Here, position $i$ is visible in a secondary structure $T$ if it is external to every base pair of $T$; i.e. for all $(x, y) \in T$, $i < x$ or $i > y$.

Generating Functions

Let
$$S(z) = \sum_{i=0}^{\infty} s_i\, z^i, \qquad R(z) = \sum_{i=0}^{\infty} r_i\, z^i \qquad (16)$$
denote the generating functions for $S$ resp. $R$, corresponding to the problems of counting the number of saturated secondary structures resp. the number of saturated structures having no visible positions. Applying Table 1, we are led to the equations
$$S = z + z^2 + zR + z^2 R + z^2 S + z^2 S^2 \qquad (17)$$
$$R = z^2 S + z^2 R S. \qquad (18)$$
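Before analyzing the singularities, the system (17)-(18) can be iterated numerically on truncated power series; this is a convenient cross-check of the coefficients (our own sketch, again assuming sympy):

```python
# Fixed-point iteration of the system (17)-(18) on truncated power series.
# Starting from S = R = 0, each round fixes at least one further Taylor
# coefficient, so N + 2 rounds suffice for coefficients up to order N.
import sympy as sp

z = sp.symbols('z')
N = 12
S, R = sp.Integer(0), sp.Integer(0)
for _ in range(N + 2):
    S, R = (z + z**2 + z*R + z**2*R + z**2*S + z**2*S**2,
            z**2*S + z**2*R*S)
    S = (sp.expand(S) + sp.O(z**(N + 1))).removeO()   # truncate at order N
    R = (sp.expand(R) + sp.O(z**(N + 1))).removeO()

print([S.coeff(z, n) for n in range(1, N + 1)])
```

The resulting counts should reproduce the values $s_3 = 1$, $s_5 = 5$ and $s_{10} = 145$ quoted later in the footnote of Section 3.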
Location of the dominant singularity

By first solving (18) for $R$ and injecting in (17), we get
$$S = z + z^2 + z^2 S + z^2 S^2 + (z + z^2)\,\frac{z^2 S}{1 - z^2 S}, \qquad (19)$$
which upon normalizing gives a polynomial equation of the third degree
$$P(z, S) = -S^3 z^4 + z(1 + z) - S^2 z^2\,(-2 + z^2) + S\,(-1 + z^2) = 0. \qquad (20)$$
Unlike earlier work in this paper, direct solution of this equation by Cardano's formulas gives expressions that are difficult to handle. Instead, we locate the singularity by appealing to general techniques for implicit generating functions [START_REF] Flajolet | Analytic Combinatorics[END_REF], VII.4]. By the implicit function theorem, singularities of $P(z, S)$ only occur when both $P$ and its partial derivative
$$\frac{\partial P}{\partial S}(z, S) = -1 + (1 + 4S)z^2 - S(2 + 3S)z^4 \qquad (21)$$
vanish simultaneously. The common roots of $P$ and $\partial P/\partial S$ can be located by eliminating $S$ between those two equations, for instance using the classical theory of resultants (see, e.g., [START_REF] Lang | Algebra[END_REF]). This gives a polynomial
$$Q(z) = z^{11}(1 + z)\left(4 + z - 7z^2 - 28z^3 - 32z^4 + 4z^6\right), \qquad (22)$$
that vanishes at all $z$ such that $(z, S)$ is a common root of $P$ and $\partial P/\partial S$. Numerical computation of the roots of $Q$ yields 0, -1, -2.29493, -0.854537, -0.244657 - 0.5601i, -0.244657 + 0.5601i, 0.424687, 3.2141.

A subtle difficulty now lies in selecting among those points the dominant singularity of the analytic continuation of the solution $S$ of (19) corresponding to the combinatorial problem. Indeed, it is possible that one solution of (19) is singular at a given $r$ without the solution of interest being singular there. Considering such a singularity would result in an asymptotic expansion that is wrong by an exponential factor. One way to select the correct singularity is to apply a result by Meir and Moon [START_REF] Meir | On an asymptotic method in enumeration[END_REF] to Equation (19). This results in a variant of the computation in [START_REF] Clote | Combinatorics of saturated secondary structures of RNA[END_REF]. Instead, we use Pringsheim's theorem (see, e.g., [START_REF] Flajolet | Analytic Combinatorics[END_REF]).

Theorem 4 (Pringsheim) If $S(z)$ has a series expansion at 0 that has nonnegative coefficients and a radius of convergence $R$, then the point $z = R$ is a singularity of $S(z)$.

In our example, there are only two possible real positive singularities, 0.424687 and 3.2141. The latter cannot be dominant, since it would lead to asymptotics of the form $3.2141^{-n}$, i.e., an exponentially decreasing number of structures. Thus the dominant singularity is at $\rho = 0.424687$. Since the moduli of the non-real roots of $Q$ are $0.611203 > \rho$, the conditions of Theorem 1 hold, provided the function behaves as required as $z \to \rho$.

Asymptotics

We now compute the local expansion of $S(z)$ at $\rho$. From equation (20), we have that
$$P(\rho, S) = 0.605047 - 0.819641\,S + 0.328189\,S^2 - 0.0325295\,S^3 \qquad (23)$$
whose (numerical approximations of) roots are the double root $S = 1.6569$ and the single root $S = 6.77518$. It is easily checked that 1.6569 is the only root of equation (23) at which $\partial P/\partial S(\rho, S)$ also vanishes; thus we let $T = 1.6569$. Recall Taylor's theorem in two variables
$$f(x, y) = \sum_{n=0}^{\infty} \sum_{k=0}^{\infty} \frac{\partial^{n+k} f(x_0, y_0)}{\partial x^n \partial y^k} \cdot \frac{(x - x_0)^n}{n!} \cdot \frac{(y - y_0)^k}{k!}.$$
We now expand $P(z, S)$ at $z = \rho$ and $S = T$ and invert this expansion.
This yields
$$P(z, S) = P(\rho, T) + \frac{\partial P}{\partial S}(\rho, T)(S - T) + \frac{\partial P}{\partial z}(\rho, T)(z - \rho) + \frac{1}{2}\frac{\partial^2 P}{\partial S^2}(\rho, T)(S - T)^2 + \cdots \qquad (24)$$
where the dots indicate terms of higher order. The first two terms are 0, so by denoting $P_z = \frac{\partial P}{\partial z}(\rho, T)$ and $P_{SS} = \frac{\partial^2 P}{\partial S^2}(\rho, T)$, we have
$$0 = P = P_z (z - \rho) + \frac{1}{2} P_{SS} (S - T)^2 + O((S - T)^3) + O((z - \rho)(S - T)^2) + O((z - \rho)^2). \qquad (25)$$
Isolating $(S - T)^2$ we get
$$(S - T)^2 = \frac{-2 P_z (z - \rho)}{P_{SS}} + O((z - \rho)^2) + O((S - T)^3)$$
$$S - T = \pm\sqrt{\frac{2\rho P_z}{P_{SS}}}\cdot\sqrt{1 - z/\rho} + O(z - \rho).$$
Since $[z^n]S(z)$ is the number of saturated secondary structures on $[1, n]$ and the Taylor coefficients in the expansion of $\sqrt{1 - z/\rho}$ are negative, we discard the positive root and thus obtain
$$S - T = -\sqrt{\frac{2\rho P_z}{P_{SS}}}\cdot\sqrt{1 - z/\rho} + O(z - \rho). \qquad (26)$$
We now make use of Theorem 1 as before and recover the following result, proved earlier in [START_REF] Clote | Combinatorics of saturated secondary structures of RNA[END_REF] by the Bender-Meir-Moon method.

Theorem 5 The asymptotic number of saturated structures is $1.07427 \cdot n^{-3/2} \cdot 2.35468^n$.

Expected number of base pairs of saturated structures

In this section, we compute the expected number of base pairs of saturated structures, proceeding as in Section 2.2 by first modifying the equations to obtain bivariate generating functions and then differentiating with respect to the new variable and evaluating at 1 to obtain the asymptotic expectation.

Generating Functions

We first modify equations (17)-(18) by introducing the auxiliary variable $u$, responsible for counting the number of base pairs:
$$S = z + z^2 + zR + z^2 R + u z^2 S + u z^2 S^2 \qquad (27)$$
$$R = u z^2 S + u z^2 R S. \qquad (28)$$
Solving the second equation for $R$ and injecting into the first one gives
$$P(z, u, S) = S u z^2 (z + z^2) - (-1 + S u z^2)\left(-S + z + z^2 + S u z^2 + S^2 u z^2\right). \qquad (29)$$

Asymptotics

We are interested in the coefficients of $\partial S/\partial u$ at $u = 1$. Differentiating (29) with respect to $u$ gives
$$\frac{\partial P}{\partial u} + \frac{\partial P}{\partial S}\,\frac{\partial S}{\partial u} = 0.$$
Using equation (26), we replace $S(z, 1)$ by $T + K\sqrt{1 - z/\rho} + O(1 - z/\rho)$ in this equation to obtain
$$\left[\rho^2 T\left(1 + 2(1 - \rho^2)T - 2\rho^2 T^2\right) + O\!\left(\sqrt{1 - z/\rho}\right)\right] + \left[\left(4K\rho^2 - 2K\rho^4 - 6K\rho^4 T\right)\sqrt{1 - z/\rho} + O(1 - z/\rho)\right]\frac{\partial S}{\partial u}\bigg|_{u=1} = 0$$
and finally
$$\frac{\partial S}{\partial u}(z, 1) \sim \frac{0.642305}{\sqrt{1 - z/\rho}}. \qquad (30)$$
Applying Theorem 1 to equation (30) gives
$$[z^n]\frac{\partial S}{\partial u}(z, 1) \sim \frac{0.642305}{\Gamma(1/2)}\; n^{-1/2}\,\rho^{-n} = 0.362417 \cdot n^{-1/2}\,\rho^{-n}.$$
It follows that the asymptotic expected number of base pairs in saturated structures on $[1, n]$ is
$$\frac{[z^n]\frac{\partial S(z,u)}{\partial u}(z, 1)}{[z^n]S(z, 1)} \sim \frac{0.362417 \cdot n^{-1/2} \cdot \rho^{-n}}{1.07427 \cdot n^{-3/2} \cdot \rho^{-n}} = 0.337361 \cdot n.$$
We have just proved the following.

Theorem 6 The asymptotic expected number of base pairs for saturated structures is $0.337361 \cdot n$.

Since the Taylor coefficient $s_{n,k}$ of the generating function $S(z, u) = \sum_{n,k} s_{n,k}\, z^n u^k$ is equal to the number of saturated structures having $k$ base pairs, it is possible that the methods of this section will suffice to solve the following open problem.

Open Problem 1 Clearly, the maximum number of base pairs in a saturated structure on $[1, n]$ where $\theta = 1$ is $\lfloor\frac{n-1}{2}\rfloor$. For fixed values of $k$, what is the asymptotic number $s_{n,\lfloor(n-1)/2\rfloor - k}$ of saturated secondary structures having exactly $k$ base pairs fewer than the maximum? Note that in [START_REF] Clote | Combinatorics of saturated secondary structures of RNA[END_REF], we solved this problem for $k = 0, 1$. A related interesting question concerns whether the number of secondary structures $s_{n,k}$ having $k$ base pairs is approximately Gaussian.
As first suggested by Y. Ponty (personal communication), this is indeed the case. More formally, consider for fixed $n$ the finite distribution $P_n = p_1, \ldots, p_n$, where $p_k = s_{n,k}/s_n$ and $s_n = \sum_k s_{n,k}$. In the Nussinov energy model, the energy of a secondary structure with $k$ base pairs is $-k$, so the distribution $P_n$ is what is usually called the density of states in physical chemistry. It follows from Theorem 1 of Drmota [START_REF] Drmota | Systems of functional equations[END_REF] (see also [START_REF] Drmota | Asymptotic distributions and a multivariate Darboux method in enumeration problems[END_REF]) that $P_n$ is Gaussian. Similarly, it follows from Theorem 1 of Drmota that the asymptotic distribution of density of states of both canonical and saturated structures is Gaussian. Details of a Maple session applying Drmota's theorem to saturated structures appear in the web supplement http://bioinformatics.bc.edu/clotelab/SUPPLEMENTS/JBCBasymptotics/.

Asymptotic number of saturated stem-loops

Define a stem-loop to be a secondary structure $S$ having a unique base pair $(i_0, j_0) \in S$, for which all other base pairs $(i, j) \in S$ satisfy the relation $i < i_0 < j_0 < j$. In this case, $(i_0, j_0)$ defines a hairpin, and the remaining base pairs, as well as possible internal loops and bulges, constitute the stem. We have the following simple result due to Stein and Waterman [START_REF] Stein | On some new sequences generalizing the Catalan and Motzkin numbers[END_REF].

Proposition 1 There are $2^{n-2} - 1$ stem-loop structures§ on $[1, n]$.

Proof. Let $L(n)$ denote the number of secondary structures with at most one loop on $(1, \ldots, n)$. Then $L(1) = 1 = L(2)$. There are two cases to consider for $L(n+1)$. Case 1. If $n+1$ does not form a base pair, then we have a contribution of $L(n)$. Case 2. $n+1$ forms a base pair with some $1 \le j \le n-1$. In this case, since only one hairpin loop is allowed, there is no base-pairing for the subsequence $s_1, \ldots, s_{j-1}$, and hence if $n+1$ base-pairs with $j$, then we have a contribution of $L(n - (j+1) + 1) = L(n-j)$. Hence
$$L(n+1) = L(n) + \sum_{j=1}^{n-1} L(n-j) = L(n) + L(n-1) + \cdots + L(1)$$
and hence $L(1) = 1$, $L(2) = 1$, $L(3) = 2$, and from there $L(n) = 2^{n-2}$ by induction.

§ In [START_REF] Stein | On some new sequences generalizing the Catalan and Motzkin numbers[END_REF], stem-loop structures are called hairpins. Since the appearance of [START_REF] Stein | On some new sequences generalizing the Catalan and Motzkin numbers[END_REF], common convention is that a hairpin is a structure consisting of a single base pair enclosing a loop region; i.e. $(\,\bullet\,\bullet\,\bullet\,\bullet\,\bullet\,)$. Here we use the more proper term stem-loop.

We now compute the asymptotic number of saturated stem-loop structures. Let $h(n)$ be the number of saturated stem-loops on $[1, n]$, defined by $h(n) = 1$ for $n = 0, 1, 2, 3$, $h(4) = 3$, and
$$h(n) = h(n-1) + 2\,h(n-3)$$
for $n \ge 5$. Note that we have defined $h(1) = 1 = h(2)$ for notational ease in the sequel, although there are in fact no stem-loops of size 1 or 2. Indeed in this case, the only structures of size 1 respectively 2 are $\bullet$ and $\bullet\,\bullet$. The first few terms in the sequence $h(1), h(2), h(3), \ldots$ are $1, 1, 1, 3, 5, 7, 13, 23, 37, 63, \ldots$

Grammar

A non-ambiguous context-free grammar generating all non-empty saturated stem-loops can be given; it defines in fact a special kind of context-free language, called regular, whose generating function is rational.

Generating Function

By the DSV methodology, we obtain a functional relation whose solution is the rational function
$$H(z) = \sum_{n \ge 0} h(n)\, z^n = \frac{N(z)}{Q(z)} = \frac{1 - 2z^3}{1 - z - 2z^3}.$$

Asymptotics

For rational functions, an easy way to compute the asymptotic behaviour of the Taylor coefficients is to compute a partial fraction decomposition and isolate the dominant part. This is equivalent to solving the corresponding linear recurrence. See also [17, p. 325] or [16, Thm. 9.2]. Partial fraction decomposition yields
$$h(n) = \sum_{i=1}^{3} \frac{a_i^{-n}}{1 + 6 a_i^2},$$
where the $a_i$s are the roots of $Q$. (Note that this is an actual equality valid for all $n \ge 0$ and not an asymptotic result.) Now, the roots of $Q$ are approximately $a_1 = 0.5897545$, $a_2 = -0.294877 - 0.872272i$, $a_3 = -0.294877 + 0.872272i$.

Since $|a_2| = |a_3| = .9207 > |a_1|$, it follows that the asymptotic behaviour is given by the term in $a_1$. We have proved the following theorem.

Theorem 7 The number $h(n)$ of saturated stem-loops on $[1, n]$ satisfies
$$h(n) \sim 0.323954 \cdot 1.69562^n. \qquad (32)$$
Convergence of the asymptotic limit in equation (32) is exponentially fast, so that when $n = 20$, $0.323954 \cdot 1.69562^n = 12504.2$, while the exact number of saturated stem-loops on $[1, 20]$ is $h(20) = 12503$.

3 Quasi-random saturated structures

In this section, we define a stochastic greedy process to generate random saturated structures, technically denoted quasi-random saturated structures.
Our main result is that the expected number of base pairs in quasi-random saturated structures is $0.340633 \cdot n$, just slightly more than the expected number $0.337361 \cdot n$ over all saturated structures. This suggests that the introduction of stochastic greedy algorithms and their asymptotic analysis may prove useful in other areas of random graph theory.

Consider the following stochastic process to generate a saturated structure. Suppose that $n$ bases are arranged in sequential order on a line. Select the base pair $(1, u)$ by choosing $u$, where $\theta + 2 \le u \le n$, at random with probability $1/(n - \theta - 1)$. The base pair joining 1 and $u$ partitions the line into two parts. The left region has $k$ bases strictly between 1 and $u$, where $k \ge \theta$, and the right region contains the remaining $n - k - 2$ bases lying to the right of $u = k + 2$ (see Figure 3). Proceed recursively on each of the two parts. Observe that the secondary structures produced by our stochastic process always base-pair the leftmost available base, and that the resulting structure is always saturated.

Before proceeding further, we note that the probability $p_{i,j}$ that $(i, j)$ is a base pair in a saturated structure is not the same as the probability $q_{i,j}$ that $(i, j)$ is a base pair in a quasi-random saturated structure. Indeed, if we consider saturated and quasi-random saturated structures on an RNA sequence of length $n = 10$, then clearly $p_{1,5} = 1/29$ while clearly $q_{1,5} = 1/8$.¶ Despite the very different base pairing probabilities when comparing saturated with quasi-random saturated structures, it is remarkable that the expected number of base pairs over saturated and quasi-random saturated structures is numerically so close.

¶ The web supplement contains a Python program to compute the number of saturated structures on $n$. Clearly $p_{1,5} = \frac{s_3 \cdot s_5}{s_{10}}$, where $s_k$ denotes the number of saturated structures on an RNA sequence of length $k$. A computation from a Python program (see web supplement) shows that $s_3 = 1$, $s_5 = 5$ and $s_{10} = 145$, hence $p_{1,5} = 5/145 = 1/29$.
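The greedy process just described is straightforward to simulate. The following Python sketch is our own illustration (distinct from the web supplement's program): it samples a quasi-random saturated structure and estimates the normalized mean number of base pairs.

```python
import random

def quasi_random_saturated(n, theta=1):
    """Sample a quasi-random saturated structure on positions 1..n (greedy process)."""
    pairs = []
    stack = [(1, n)]                      # intervals still to be processed
    while stack:
        lo, hi = stack.pop()
        if hi - lo + 1 < theta + 2:       # too small to hold a base pair
            continue
        u = random.randint(lo + theta + 1, hi)   # pair (lo, u), uniform choice
        pairs.append((lo, u))
        stack.append((lo + 1, u - 1))     # left region, strictly between lo and u
        stack.append((u + 1, hi))         # right region
    return pairs

n, trials = 1000, 200
mean = sum(len(quasi_random_saturated(n)) for _ in range(trials)) / trials
print(mean / n)   # should be close to the constant 0.340633 stated above
```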
Let $U^\theta_n$ be the expected number of base pairs of the saturated secondary structure generated by this recursive procedure. In general, we have the following recursive equation
$$U^\theta_n = 1 + \frac{1}{n - \theta - 1}\sum_{u=\theta+2}^{n}\left(U^\theta_{u-2} + U^\theta_{n-u}\right), \quad n \ge \theta + 2, \qquad (33)$$
with initial conditions
$$U^\theta_0 = U^\theta_1 = \cdots = U^\theta_{\theta+1} = 0. \qquad (34)$$
If we write equation (33) for $U^\theta_{n+1}$ and substitute in it the value for $U^\theta_n$ we derive
$$(n - \theta)\left(U^\theta_{n+1} - 1\right) - (n - \theta - 1)\left(U^\theta_n - 1\right) = U^\theta_{n-1} + U^\theta_{n-\theta-1}.$$
If we multiply out by $n - \theta$ and simplify we obtain
$$(n - \theta)\,U^\theta_{n+1} = (n - \theta - 1)\,U^\theta_n + U^\theta_{n-1} + U^\theta_{n-\theta-1} + 1, \qquad (35)$$
which is valid for $n \ge \theta + 1$.

Asymptotic behavior

We now look at asymptotics. In particular we prove the following result.

Theorem 8 Let $U^\theta_n$ denote the expected number of base pairs for quasi-random saturated structures of an RNA sequence of length $n$. Then for fixed $\theta$, and as $n \to \infty$,
$$U^\theta_n \sim K_\theta \cdot n,$$
where $K_\theta$ is an explicit constant whose closed form involves the $(\theta + 1)$th harmonic number. The first few values can easily be obtained numerically and we have $K_1 = 0.340633$, $K_2 = 0.285497$, $K_3 = 0.247908$, $K_4 = 0.220308$, $K_5 = 0.199018$.

Proof. For fixed integer $\theta$, the recurrence (35) is linear with polynomial coefficients. It is a classical result that the generating functions of solutions of such recurrences satisfy linear differential equations. This is obtained by applying the standard rules: if $y(z) = \sum_n U_n z^n$, then a term $n U_n$ translates into $z y'(z)$, and shifts of the index translate into multiplications by powers of $z$. Starting from (35), we first shift the index by $\theta + 1$ and apply these rules together with the initial conditions (34); the resulting equation simplifies to a first order non-homogeneous linear differential equation. The homogeneous part is solved by integrating a partial fraction decomposition. From there, variation of the constant gives an explicit expression for the generating function. Because the exponential is an entire function, we readily find that the only singularity is at $z = 1$, where $y \sim K_\theta/(1 - z)^2$ with $K_\theta$ as in the statement of the theorem. The proof is completed by the use of Theorem 1.

Note that the asymptotic expected number of base pairs in quasi-random saturated structures with $\theta = 1$ is $0.340633 \cdot n$, while by Theorem 6 the asymptotic expected number of base pairs in saturated structures is $0.337361 \cdot n$, just very slightly less. This result points out that the stochastic greedy method performs reasonably well in sampling saturated structures, although the stochastic process tends not to sample certain (rare) saturated structures having a less than average number of base pairs.

The stochastic process used to construct quasi-random saturated structures iteratively base-pairs the leftmost position in each subinterval. One can imagine a more general stochastic method of constructing saturated structures, described as follows. Generate an initial list $L$ of all allowable base pairs $(i, j)$ with $1 \le i < j \le n$ and $j \ge i + \theta + 1$. Create a saturated structure by repeatedly picking a base pair from $L$, adding it to an initially empty structure $S$, then removing from $L$ all base pairs that form a crossing (pseudoknot) with, or share a base with, the base pair just selected. This ensures that the next time a base pair is picked from $L$, it can be added to $S$ without violating the definition of secondary structure. Iterate this procedure until $L$ is empty to form the stochastic saturated structure $S$.

Taking an average over 100 repetitions, we have computed the average number of base pairs and the standard deviation for $n = 10, 100, 1000$. Results are $\mu = 0.323$, $\sigma = 0.0604$ for $n = 10$; $\mu = 0.3526$, $\sigma = 0.0386$ for $n = 100$; and $\mu = 0.35618$, $\sigma = 0.0361$ for $n = 1000$. This clearly is a different stochastic process than that used for quasi-random saturated structures.
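The general process just described is equally easy to simulate; the following sketch (again our own illustration) draws pairs uniformly from the remaining allowable pairs and can be compared with the values of $\mu$ reported above.

```python
import random

def random_saturated(n, theta=1):
    """General method: repeatedly draw a uniformly random allowable pair and
    discard every remaining pair that conflicts with it."""
    L = [(i, j) for i in range(1, n + 1) for j in range(i + theta + 1, n + 1)]
    S = []
    while L:
        i, j = random.choice(L)
        S.append((i, j))
        L = [(x, y) for (x, y) in L
             if not ({x, y} & {i, j})                    # no shared base
             and not (x < i < y < j or i < x < j < y)]   # no crossing
    return S

for n in (10, 100):   # larger n needs a faster implementation
    trials = 100
    mean = sum(len(random_saturated(n)) for _ in range(trials)) / trials
    print(n, mean / n)
```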
Conclusion

In this paper we applied the DSV methodology and the Flajolet-Odlyzko theorem to asymptotic enumeration problems concerning canonical and saturated secondary structures. For instance, we showed that the expected number of base pairs in canonical RNA secondary structures is equal to $0.31724 \cdot n$, which is far less than the expected number $0.495917 \cdot n$ of base pairs over all secondary structures, the latter of which follows from Theorem 4.19 of [START_REF] Hofacker | Combinatorics of RNA secondary structures[END_REF]. This may provide a theoretical explanation for the speed-up observed for the Vienna RNA Package when restricted to canonical structures [START_REF] Bompfunewerer | Variations on RNA folding and alignment: lessons from Benasque[END_REF]. Additionally, we computed the asymptotic number $1.07427 \cdot n^{-3/2} \cdot 2.35468^n$ of saturated structures, the expected number $0.337361 \cdot n$ of base pairs of saturated structures and the asymptotic number $0.323954 \cdot 1.69562^n$ of saturated stem-loop structures. We then considered a natural stochastic greedy process to generate quasi-random saturated structures, and showed surprisingly that the expected number of base pairs is $0.340633 \cdot n$, a value very close to the expected number $0.337361 \cdot n$ of base pairs of all saturated structures. Finally, we apply a theorem of Drmota [START_REF] Drmota | Systems of functional equations[END_REF] to show that the density of states for [all resp. canonical resp. saturated] secondary structures is asymptotically Gaussian.
Figure 1 gives equivalent views of the secondary structure of 5S ribosomal RNA with GenBank accession number NC 000909 of the methane-generating archaebacterium Methanocaldococcus jannaschii, as determined by comparative sequence analysis and taken from the 5S Ribosomal RNA Database [START_REF] Szymanski | 5S ribosomal RNA database Y2K[END_REF] located at http://rose.man.poznan.pl/5SData/. The sequence and its secondary structure in (Vienna) dot bracket notation are as follows:

Figure 1: Depiction of 5S ribosomal RNA from M. jannaschii with GenBank accession number NC 000909. Equivalent representations as (Left) outerplanar graph (also called Feynman circular diagram), (Middle) Feynman linear diagram, (Right) classical diagram (most familiar to biologists). The sequence and secondary structure were taken from the 5S Ribosomal RNA Database [START_REF] Szymanski | 5S ribosomal RNA database Y2K[END_REF], and the graph was created using jViz [START_REF] Wiese | Rna-a Java tool for RNA secondary structure visualization[END_REF].

and $A \to w$ is a rule, then by replacing the occurrence of $A$ in $xAy$ we obtain $xwy$. Such a derivation in one step is denoted by $xAy \Rightarrow_G xwy$, while the reflexive, transitive closure of $\Rightarrow_G$ is denoted $\Rightarrow^*_G$. The language generated by context-free grammar $G$ is denoted by $L(G)$, and defined by $L(G) = \{w \in \Sigma^* : S_0 \Rightarrow^*_G w\}$.

Figure 2: The shaded region $\triangle$ where, except at $z = \rho$, the generating function $S(z)$ must be analytic.

Figure 3: Base 1 is base-paired by selecting a random base $u$ such that there are at least $\theta$ unpaired bases enclosed between 1 and $u$.

† A well-balanced parenthesis string is a word over $\Sigma = \{(, )\}$ with as many closing parentheses as opening ones and such that when reading the word from left to right, the number of opening parentheses read is always at least as large as the number of closing parentheses. RNA secondary structures can be considered to be well-balanced parenthesis strings that also contain possible occurrences of $\bullet$, and for which there exist at least $\theta$ occurrences of $\bullet$ between corresponding left and right parentheses $($ respectively $)$.

‡ Since $S(z, u)$ is known to be analytic at 0, we have discarded one of the two solutions as before.

Acknowledgements

We would like to thank Yann Ponty, for suggesting that Drmota's work can be used to prove that the density of states for secondary structures is Gaussian. Thanks as well to two anonymous referees, whose comments led to important improvements in this paper. Figure 2 is due to W.A. Lorenz, and first appeared in the joint article Lorenz et al. [12]. Funding for the research of P. Clote was generously provided by the Foundation Digiteo - Triangle de la Physique and the National Science Foundation DBI-0543506 and DMS-0817971. Additional support is gratefully acknowledged to the Deutscher Akademischer Austauschdienst for a visit to Martin Vingron's group in the Max Planck Institute of Molecular Genetics. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. Funding for the research of E. Kranakis was generously provided by the Natural Sciences and Engineering Research Council of Canada (NSERC) and Mathematics of Information Technology and Complex Systems (MITACS). Funding for the research of B. Salvy was provided by the Microsoft Research-Inria Joint Centre. Research partially supported by National Science Foundation Grants DBI-0543506, DMS-0817971, and the RNA Ontology Consortium. Additional support is gratefully acknowledged to the Foundation Digiteo-Triangle de la Physique and to the Deutscher Akademischer Austauschdienst.
00380417
en
[ "shs.eco" ]
2024/03/04 16:41:24
2009
https://hal.science/hal-00380417v2/file/english_version_280809.pdf
Jean-Baptiste Gossé email: [email protected]

The Real and Financial Implications of the Global Saving Glut: A Three-Country Model

Keywords: F21, F32, F41, F47, International Macroeconomics, Global Imbalances, Balance of Payments, International Finance, Simulation and Forecast

The model presented in this paper has two objectives. First, it models global imbalances in a simple way while conserving real and financial approaches. This double approach is necessary because Global Imbalances are due to the conjunction of financial and real phenomena: the increase in the price of commodities, the accumulation of foreign reserves by the Asian central banks, the limited absorption capacity of the OPEC countries, the insufficient development of the Asian financial system and the perception of better returns in the US. The second objective is to model the global saving glut hypothesis and to show its implications. We start with a model which consists of three identical countries and then we replicate the current pattern of global imbalances by introducing three asymmetries: a fixed exchange rate between Asia and the United States, a limited absorption capacity in Asia and an endogenous propensity to spend in the United States. In order to avoid the recession linked to the increase of their propensity to import, the United States increase their propensity to spend. This adjustment has a cost: (i) the Global Imbalances grow quickly with an increase of current account imbalances and net foreign assets in both the US and Asia; (ii) the euro area supports an appreciation of its exchange rate which puts it in a long depression.

Introduction

During the last years, the Global Imbalances have been increasing. The weight of both current account imbalances and net foreign assets in the 2006 world economy is twice what it was in 1996 (figure 1). If we choose an approach by the real sector, the two big reasons for this growth of Global Imbalances are the increase of oil prices (figure 2) and the under-evaluation of the Renminbi (figure 3). The three major players of the GI are the US, for deficits, and the OPEC and Asian countries, for surpluses. The US suffered the brunt of the rising price of oil and the under-evaluation of the Renminbi and, since the beginning of the millennium, their propensity to spend has been increasing. This paper presents the alternatives of the US to respond to this shock and the consequences of their reactions, first, on the countries with flexible exchange rates and, second, on the countries with fixed exchange rates.

The model presented in this paper has two objectives. First, it models global imbalances in a simple way while conserving real and financial approaches (the Three-Country model presented here is comprised of 50 equations). This double approach is necessary because Global Imbalances are due to the conjunction of financial and real phenomena: the increase in the price of oil, the accumulation of foreign reserves by the Asian central banks (figure 5), the limited absorption capacity of the OPEC countries, the insufficient development of the Asian financial system and the perception of better returns in the US. The second objective is to model the global saving glut hypothesis and to show its implications. We want to demonstrate that in the present context, an increase in the US propensity to spend would offset the negative impact of the rise in their propensity to import, but with an increase in their net foreign debt.
The United States can temporarily avoid a recession by accumulating debts on the rest of the world as long as fixed exchange rate countries accept to accumulate dollar assets. In contrast, the exchange rate of the euro area is the only one to adjust and the impact on European GNP is negative. The second part briefly describes the evolution of the economic literature on global imbalances. The third part presents the model with three identical flexible exchange rate countries. We introduce the Global Saving Glut Hypothesis in the fourth part.

A brief review of the literature on Global Imbalances

The article by Dooley, Folkerts-Landau and Garber (2003) marks the return to the forefront of the issue of current imbalances in economic literature. They argue that the Bretton Woods system has returned, so their work has been named Bretton Woods II. The authors consider that the global imbalances of the 2000s result from an export-led growth strategy of Asian countries that is similar to the one adopted by Europe and Japan after the Second World War. In both cases, the surplus countries accumulate reserves so that their exchange rate remains undervalued relative to the dollar. They can maintain their competitiveness and benefit from growth by exports. However, as [START_REF] Eichengreen | Global Imbalances and the Lessons of Bretton Woods[END_REF] pointed out, the resemblance to Bretton Woods is limited. The current international monetary system is a floating exchange rates one and the consistency between the Asian countries in the 2000s is lower than that of Europe during the 1960s. In addition, the thesis of Bretton Woods II considers that the global imbalances result from the real sector, i.e. the top of the balance of payments, and that the financial sector, i.e. the bottom of the balance of payments, adjusts to it. This vision of global imbalances from the top of the balance of payments is not adopted in the following papers.

The Global Saving Glut Hypothesis, supported by [START_REF] Bernanke | The Global Saving Glut and the U.S. Current Account Decit[END_REF], assumes that the global imbalances result from both the financial and real sectors. The current account surpluses of the emerging countries have two origins. On the one hand, following the financial crisis of 1997, the Asian emerging countries are accumulating reserves to cover against a possible sudden exit of foreign capital. At the same time, they maintain the under-evaluation of their exchange rates, and exports lead their growth. On the other hand, the emergence of new great industrial countries weighs on oil prices. OPEC countries also emit a surplus of savings that they want to invest abroad, since their absorption capacity is limited due to their small population. The saving surplus of the emerging countries leads to the United States, the economy that presents the best features. Bernanke lists the special features that make the U.S. financial sector the most attractive: the growth of productivity linked to the development of new technologies, the low political risk, the strong property rights protection and the favorable institutional environment. These inflows of investments lead to an appreciation of the dollar, an increase in asset prices and, after 2000, lower interest rates and an increase in household wealth. The combination of these factors motivates households to reduce their savings and increase consumption. Thus, the global savings glut is absorbed by the increase in spending of U.S.
households, which avoids a recession due to an excess of supply.

The equilibrium model of Caballero, Fahri and Gourinchas (2006) explains the growth of GI by the inability of the financial systems of certain areas to achieve sufficient investment to use all the savings available. The current pattern of GI and the low global interest rates result from the differences in financial institutions' development around the world and from the greater potential growth of the United States compared to other financially developed areas. This thesis provides a new element to explain the accumulation of foreign assets by Asian countries beyond the level of reserves needed to provide assurance against a flight of foreign capital. Furthermore, since Caballero et al. (2006), all kinds of models have been developed to represent the evolution of global imbalances. In Obstfeld and Rogoff (2005) the adjustment occurs by changing the preferences between tradable and non-tradable goods and between domestic and foreign goods. The model of Blanchard, Giavazzi and Sa (2005) stressed the role of the exchange rate in the allocation of international portfolios. These three models have a major disadvantage that should be overcome: they assume that the GDPs are not affected by the adjustments of current imbalances. In the continuous-time two-country models of Asada, Chiarella, Flaschel and Franke (2003) and Proaño (2008), the GDP reacts to the adjustment of current imbalances, but they only model the real sector. Finally, the models of [START_REF] Lavoie | A Three-Country Stock-Flow Consistent model: A Study of the Diversication of China's Foreign Reserve[END_REF], [START_REF] Godley | A simple model of three economies with two currencies: Euroland and the USA[END_REF] and [START_REF] Zhao | A Three-Country Model with Fixed Exchange Rates under a Stock-Flow Coherent Approach[END_REF] have the advantage of being stock-flow consistent and of representing both the top and the bottom of the balance of payments. However, they should be simplified ([START_REF] Lavoie | Les apports des approches stocks-ux à l'analyse postkeynésienne[END_REF]): these three-country models comprise 91, 79 and 89 equations, respectively. In his review of the state of macroeconomics, Blanchard (2008) also argues for smaller models built to capture a specific mechanism. He cites Solow: "My general preference is for small, transparent, tailored models, often partial equilibrium, usually aimed at understanding some little piece of the (macro-)economic mechanism." [START_REF] Solow R | The State of Macroeconomics[END_REF]

In order to replicate the global imbalances of the 2000s, we use an absorption model to describe the real sector and a portfolio model to describe the financial sector. The interest of such a model is to take into consideration simultaneously factors linked to several interpretations of global imbalances:
• The trade balance approach: the increase of the propensity to import Asian products in the United States and in Europe.
• The absorption approach: the limited absorption capacity in Asian and OPEC countries.
• The saving-investment approach: the consumption of the global saving glut by the United States.
• The portfolio balance approach: the accumulation of reserves in Asian countries to cover against a risk of sudden exit of capitals and the inability of Asian financial systems to absorb the whole domestic saving.
We start with a model which consists of three identical countries and then we replicate the current pattern of global imbalances by introducing three asymmetries: a fixed exchange rate between Asia and the United States, a limited absorption capacity in Asia and an endogenous propensity to spend in the United States.

A model of three identical countries with flexible exchange rates

The model is composed of three identical countries. These economies exchange goods and services and hold foreign assets that provide incomes. We assume the three countries are at equilibrium before shocks: trade equilibrium, current account equilibrium, zero net foreign debt and stable exchange rates. The model is divided into two sectors. The real sector (3.1) describes the equations of national income, domestic demand, imports, exports and income balance. The financial sector (3.2) presents the evolution of supply and demand for foreign assets and allows determining the net external position and the exchange rate. Simulations (3.3) illustrate the effect of an increase in the propensity to spend and the impact of a competitiveness shock.

The real sector

The real sector is represented synthetically in order to focus only on the adjustment mechanisms vis-à-vis the rest of the world. We use an absorption model à la [START_REF] Gossé | Effects of a Devaluation on a Trade Balance[END_REF]. The interest of this approach is to focus on macroeconomics, based around expenditure and production in the economy as a whole, from the domestic and international perspectives. When absorption is lower than national income, the country has a current account surplus and vice versa. The propensity to spend (c) does not distinguish between the propensity to spend of households (1 - s), firms (i) and government (g): c = 1 - s + i + g.

The subject of the paper is not to explain the government's way of intervening but to define the level of domestic spending that it should attempt to achieve through fiscal, monetary and exchange rate policies. The fiscal policy acts on the budget deficit (g). The monetary policy changes saving and investment levels, in particular by playing on interest rates. Finally, changes in central bank reserves allow compensating for current account imbalances without modifying the exchange rate. In the model with flexible exchange rates, the level of domestic expenditure (D) is determined, as in [START_REF] Samuelson | Interactions between the Multiplier Analysis and the Principe of Acceleration[END_REF] and [START_REF] Hicks | A contribution to the Theory of the Trade cycle[END_REF], by the propensity to spend of the country (c) and its GNP in the previous period ($Y_{t-1}$). We assume that before the shock the propensity to spend is equal to one in the three countries 1, 2 and 3: $c^1 = c^2 = c^3 = 1$.
$$D^i_t = c^i \times Y^i_{t-1}, \quad i = 1, 2, 3 \qquad (1)$$
Imports are defined as standard by levels of expenditure (D) and relative prices (e), approximated by the nominal exchange rate (which implies $P^1 = P^2 = P^3 = 1$). Country $i$'s currency is $CU_i$ and the exchange rate between country $i$ and country $j$ is: $1\,CU_i = e^{ij}\,CU_j$. For instance, we use the exchange rate $e^{12}$ to convert $CU_2$ into $CU_1$.
$$IM^{ij}_t = m0^{ij}\,(D^i_{t-1})^{m1^{ij}}\,(e^{ij}_{t-1})^{m2^{ij}}, \quad i, j = 1, 2, 3,\; i \ne j \qquad (2)$$
We determine country $i$'s propensity to import from country $j$ ($\mu$) by the ratio of imports to expenditures.
$$\mu^{ij}_t = \frac{IM^{ij}_t}{D^i_t} \qquad (3)$$
The balance of investment incomes between country $i$ and country $j$ is calculated as the difference between incomes received and incomes paid, taking into account exchange rate variations:
$$INC^{ij}_t = \omega^{ij}_{t-1}\, W^i_{t-1}\, r^j_{t-1}\, \frac{e^{ij}_{t-1}}{e^{ij}_t} \;-\; \omega^{ji}_{t-1}\, W^j_{t-1}\, e^{ij}_{t-1}\, r^i_{t-1} \qquad (4)$$
Incomes received from country $j$ are defined by the share ($\omega^{ij}$) of country $i$'s wealth ($W^i$) invested in country $j$ at the previous period, times the rate of return in country $j$ ($r^j$), times the exchange rate variation ($e^{ij}_{t-1}/e^{ij}_t$). Incomes paid by country $i$ to country $j$ are defined by the share ($\omega^{ji}$) of country $j$'s wealth ($W^j$) invested in country $i$ at the previous period, times the rate of return in country $i$ ($r^i$).

Then, we determine the GNP as the sum of the absorption (overall residents' expenditure) A, the trade balance (the difference between exports and imports) X - IM and the income balance INC:
$$Y = A + (X - IM) + INC \quad \text{with} \quad D = A + IM$$
The GNP corresponds to domestic demand (the first member), plus exports (the second and third members), plus net investment income (the fourth and fifth members).
$$Y^i_t = (1 - \mu^{ij}_t - \mu^{ik}_t)\, D^i_t + \mu^{ji}_t\, D^j_t\, e^{ij}_t + \mu^{ki}_t\, D^k_t\, e^{ki}_t + INC^{ij}_t + INC^{ik}_t \qquad (5)$$
with $i, j, k = 1, 2, 3$ and $i \ne j \ne k$.
The real sector allows for determining the trade balance. The trade deficit (TD) is the difference between imports and exports expressed in domestic currency:
$$TD^{ij}_t = IM^{ij}_t - e^{ji}_t \times IM^{ji}_t \qquad (6)$$

The financial sector

A portfolio model à la [START_REF] Kouri | Balance of Payments and the Foreign Exchange Market: A Dynamic Partial Equilibrium Model[END_REF] which incorporates the mechanics of the Blanchard et al. model (2005) represents the financial sector. It is used to determine exchange rates. The propensity to hold foreign assets is determined in accordance with the horizontal constraint of [START_REF] Godley | Money, Finance and National Income Determination: An Integrated Appoach[END_REF], i.e., for each equation, the sum of all rates-of-return coefficients ($\lambda$) is equal to zero. For the sake of simplicity, we assume that the rates of return are the same in the three countries ($r^1 = r^2 = r^3$).
$$\omega^{ij}_t = \lambda0^{ij} - \lambda1^{ij}\,r^i_t + \lambda2^{ij}\,r^j_t - \lambda3^{ij}\,r^k_t \qquad (7)$$
Net foreign debt is equal to the value of assets held by foreign investors in the country, minus the value of assets held by domestic investors abroad, plus the trade deficit vis-à-vis the foreign country:
$$NFD^{ij}_t = \omega^{ji}_{t-1}\, W^j_{t-1}\, e^{ij}_{t-1}\,(1 + r^i_{t-1}) - \omega^{ij}_{t-1}\, W^i_{t-1}\,(1 + r^j_{t-1})\, \frac{e^{ij}_{t-1}}{e^{ij}_t} + TD^{ij}_t \qquad (8)$$
The quantity of assets held by investors from country $j$ in country $i$ is equal to the share ($\omega^{ji}$) of country $j$'s wealth ($W^j_{t-1}$) held in country $i$ during the previous period, times the exchange rate $e^{ij}_{t-1}$ converting $CU_j$ into $CU_i$, times the rate of return on country $i$'s assets in $CU_i$ ($1 + r^i_{t-1}$). The value of assets held by investors from country $i$ in country $j$ is equal to the share of country $i$'s wealth of the previous period ($W^i_{t-1}$), times the propensity to hold assets of country $j$ ($\omega^{ij}_{t-1}$), times the rate of return on assets of country $j$ in $CU_i$ ($(1 + r^j_{t-1}) \times e^{ij}_{t-1}/e^{ij}_t$).

The supply of domestic assets BS is given by a ratio $\kappa$ of GNP Y:
$$BS^i_t = \kappa^i\, Y^i_t \qquad (9)$$
For the sake of clarity, we set $\kappa^1 = \kappa^2 = \kappa^3 = 1$ (assuming that the supply of assets is the same size as GNP).
The domestic wealth (W) is equal to the supply of domestic assets (BS) minus the net foreign debt NFD expressed in home currency:
$$W^i_t = BS^i_t - NFD^{ij}_t - NFD^{ik}_t \qquad (10)$$
The exchange rate $e^{ij}$ (to convert country $j$'s currency into country $i$'s currency) is defined so as to equalize the liabilities of country $i$ and the assets of country $j$:
$$e^{ij}_t\left[NFD^{ij}_t + \omega^{ij}_t\, W^i_t\right] = \omega^{ji}_t\, W^j_t$$
We replace $W^i$ and $W^j$ by their expressions:
$$W^i_t = BS^i_t - NFD^{ij}_t - NFD^{ik}_t, \qquad W^j_t = BS^j_t + NFD^{ij}_t\, e^{ij}_t - NFD^{jk}_t$$
We get the following expression:
$$e^{ij}_t\left[NFD^{ij}_t + \omega^{ij}_t\,(BS^i_t - NFD^{ij}_t - NFD^{ik}_t)\right] = \omega^{ji}_t\left[BS^j_t + NFD^{ij}_t\, e^{ij}_t - NFD^{jk}_t\right]$$
This equation determines the exchange rate between country $i$ and country $j$:
$$e^{ij}_t = \frac{\omega^{ji}_t\left(BS^j_t - NFD^{jk}_t\right)}{\omega^{ij}_t\left(BS^i_t - NFD^{ik}_t\right) + \left(1 - \omega^{ji}_t - \omega^{ij}_t\right) NFD^{ij}_t} \qquad (11)$$
We remark that, as in the Blanchard et al. model, the higher the assets supply, the lower the exchange rate variation resulting from current account imbalances. Furthermore, as country $j$'s net foreign debt vis-à-vis country $k$ increases, country $i$'s currency is weakened compared to country $j$'s currency, because country $j$'s assets supply available in country $i$ decreases. Similarly, when country $i$'s net foreign debt vis-à-vis country $k$ increases, its exchange rate appreciates because country $i$'s assets supply available in country $j$ decreases.

3.3 The simulations in a world without asymmetries

3.3.1 Scenario 1: a shock on country 1's propensity to spend

Country 1's propensity to spend increases from 1 to 1.005 (figure 3). This rise provokes an increase of country 1's GNP and a growth of its assets supply. As a result, country 1's currency depreciates compared to the two other countries' currencies. The competitiveness of country 1 increases so it releases a trade surplus, a current account surplus, and its net foreign position improves. Its trade surplus shrinks gradually with the growth of its GNP but its current account surplus continues to increase because the trade surplus reduction is offset by a rise of the net income related to the depreciation of its currency.

3.3.2 Scenario 2: a shock on the propensity to import of countries 1 and 2

In countries 1 and 2, the propensity to import goods made in country 3 passes from 0.05 to 0.055 (figure 4). The GNPs of countries 1 and 2 decrease while that of country 3 increases. The currencies of countries 1 and 2 depreciate relative to country 3 to return to current account equilibrium. As a first step, country 3's trade balance surplus allows it to accumulate assets in the rest of the world. As a second step, country 3 records a trade deficit which is offset by the receipt of net income. A new equilibrium is established in which country 3 consumes more goods than it produces because the balance of investment incomes procures it a rent. Thus, in a "perfect world" without asymmetries, productivity shocks are adjusted by exchange rate changes and do not generate global imbalances. Results of simulations under flexible exchange rates are presented in table 2.
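As an illustration (our own sketch, not part of the paper's simulation code), the exchange-rate equation (11) is a simple closed-form function of the stocks; at the symmetric pre-shock equilibrium it returns $e^{ij} = 1$. The portfolio share $\omega = 0.2$ used below is an assumption for illustration.

```python
def exchange_rate(omega_ji, omega_ij, BS_i, BS_j, NFD_ij, NFD_ik, NFD_jk):
    """Equation (11): exchange rate e_ij clearing the market for assets."""
    numerator = omega_ji * (BS_j - NFD_jk)
    denominator = (omega_ij * (BS_i - NFD_ik)
                   + (1 - omega_ji - omega_ij) * NFD_ij)
    return numerator / denominator

# Symmetric pre-shock equilibrium: equal asset supplies, zero net foreign debts.
print(exchange_rate(0.2, 0.2, 100.0, 100.0, 0.0, 0.0, 0.0))  # -> 1.0
```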
Modeling the global saving glut hypothesis

We start from the previous three-country model to describe the relationships between the three major areas that currently interact. Country 1 is the United States, which holds the currency on which some countries are pegged. Country 2 is named the euro area and it comprises flexible exchange rate countries. Country 3 is named Asia and it includes fixed exchange rate countries. However, this model does not describe the current global imbalances since it ignores several key features of the global economy. We introduce three asymmetries in the previous model in order to replicate the global imbalances of the 2000s and to show their implications on growth. First, some countries have fixed exchange rates, so the Asian propensity to hold foreign securities must be determined so as to leave its exchange rate vis-à-vis the dollar unchanged (4.1). Second, the limited absorption capacity of OPEC and Asia should be taken into account (4.2). Third, Bernanke's global saving glut hypothesis is described by endogenizing the American propensity to spend to maintain the income of the United States unchanged (4.3). Simulations show the effects of expansionary policies, of the under-evaluation of the Renminbi and of the absorption by the United States of the global saving glut (4.4).

4.1 First asymmetry: Asia pegs its currency on the dollar

In this case, the adjustment is made by modifying the Asian home bias as long as Asia agrees to acquire the securities issued to offset the current account imbalance. The home bias compatible with a fixed exchange rate allows equalizing supply and demand for U.S. assets without modifying exchange rates:
$$\left[NFD^{13}_t + \omega^{13}_t\, W^1_t\right] e^{13}_t = \omega^{31}_t\, W^3_t$$
We replace $W^1$ and $W^3$ by their expressions and we get the following expression:
$$\left[NFD^{13}_t + \omega^{13}_t\,(BS^1_t - NFD^{12}_t - NFD^{13}_t)\right] e^{13}_t = \omega^{31}_t\left[BS^3_t + NFD^{13}_t\, e^{13}_t - NFD^{32}_t\right]$$
This equation gives the level of home bias that adjusts current account imbalances while maintaining the exchange rate unchanged:
$$\omega^{31}_t = \frac{\omega^{13}_t\,(BS^1_t - NFD^{12}_t) + (1 - \omega^{13}_t)\, NFD^{13}_t}{\frac{BS^3_t - NFD^{32}_t}{e^{13}_t} + NFD^{13}_t} \qquad (12)$$
Since Asia is in fixed exchange rates vis-à-vis the United States, this equation replaces the equation of the exchange rate between the United States and Asia in the previous model.

4.2 Second asymmetry: the limited absorption capacity of fixed exchange rates countries

The absorption capacity of country 3 is limited for two reasons. On the one hand, we assume that Asia has reached its maximum absorption capacity because its financial system is not able to use domestic saving. On the other hand, the absorption capacity of OPEC countries is limited due to their small population. We model this dual limit by assuming that the level of expenditure of country 3 is fixed: $\Delta D^3 = 0$. Then the propensity to spend adjusts to changes in income:
$$c^3_t = \frac{\bar{D}^3}{Y^3_{t-1}} \qquad (13)$$
This equation replaces equation (3) of the previous model.

4.3 Third asymmetry: the U.S. propensity to spend is endogenized to maintain constant U.S. GNP

We determine the level of the U.S. propensity to spend that can avoid a recession in the United States resulting from country 3's current account surplus. The level of propensity to spend which can absorb the shock, $c^{1*}$, is determined as in [START_REF] Brender | Les déséquilibres financiers internationaux, La découverte[END_REF]. We highlight the equilibrium (pre-shock) values with bars. As a first step, we equalize post-shock income $Y^1$ with pre-shock income $\bar{Y}^1$: $Y^1 = \bar{Y}^1$. With $D^3 = \bar{D}^3$, $D^1 = c^1\, Y^1_{t-1}$ and $\bar{c}^1 = 1$, it comes:
$$(1 - \mu^{12}_t - \mu^{13}_t)\, c^{1*}\, Y^1_{t-1} + \mu^{21}_t\, D^2_t\, e^{21}_t + \mu^{31}_t\, \bar{D}^3\, e^{31}_t + INC^{12}_t + INC^{13}_t = (1 - \bar{\mu}^{12} - \bar{\mu}^{13})\,\bar{Y}^1 + \bar{\mu}^{21}\, \bar{D}^2\, \bar{e}^{21} + \bar{\mu}^{31}\, \bar{D}^3\, \bar{e}^{31}$$
Then we determine the level of $c^{1*}$ which maintains the American GNP constant after a competitiveness shock:
$$c^{1*} = \frac{\bar{\mu}^{21}\,\bar{D}^2\,\bar{e}^{21} - \mu^{21}_t\, D^2_t\, e^{21}_t + \bar{D}^3\left[\bar{\mu}^{31}\,\bar{e}^{31} - \mu^{31}_t\, e^{31}_t\right] - INC^{12}_t - INC^{13}_t}{(1 - \mu^{12}_t - \mu^{13}_t)\, Y^1_{t-1}} + \frac{1 - \bar{\mu}^{12} - \bar{\mu}^{13}}{1 - \mu^{12}_t - \mu^{13}_t} \qquad (14)$$
To avoid a global recession, the U.S. must increase their propensity to spend $c^1$ in order to compensate for the reduction in Asia $c^3$.
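The three asymmetries are simple closed-form updates and can be written directly as functions. The following Python sketch is our own illustration of equations (12)-(14); the variable names are assumptions, and bars are rendered with the suffix "_b". At pre-shock values ($\mu = \bar{\mu}$, $e = \bar{e}$, $INC = 0$ and $D^2 = \bar{D}^2$), the last function returns $c^{1*} = 1$.

```python
# Sketches of the three asymmetries (illustrative only).

def home_bias_peg(omega13, BS1, BS3, NFD12, NFD13, NFD32, e13):
    """Equation (12): Asian share of U.S. assets keeping e13 unchanged."""
    num = omega13 * (BS1 - NFD12) + (1 - omega13) * NFD13
    den = (BS3 - NFD32) / e13 + NFD13
    return num / den

def asian_propensity_to_spend(D3_b, Y3_prev):
    """Equation (13): c3 adjusts so that Asian expenditure stays at D3_b."""
    return D3_b / Y3_prev

def us_propensity_to_spend(mu12, mu13, mu21, mu31, D2, e21, e31, INC12, INC13,
                           Y1_prev, D3_b, mu12_b, mu13_b, mu21_b, mu31_b,
                           D2_b, e21_b, e31_b):
    """Equation (14): c1* holding U.S. GNP at its pre-shock level."""
    num = (mu21_b * D2_b * e21_b - mu21 * D2 * e21
           + D3_b * (mu31_b * e31_b - mu31 * e31)
           - INC12 - INC13)
    return (num / ((1 - mu12 - mu13) * Y1_prev)
            + (1 - mu12_b - mu13_b) / (1 - mu12 - mu13))

# Pre-shock check: with mu = mu_b, e = e_b, INC = 0 and D2 = D2_b, c1* = 1.
print(us_propensity_to_spend(0.05, 0.05, 0.05, 0.05, 100, 1, 1, 0, 0,
                             100, 100, 0.05, 0.05, 0.05, 0.05, 100, 1, 1))
```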
4.4 The simulations

The first two simulations are under asymmetries 1 and 2. In this case, the productivity shock implies a growth of global imbalances and a recession in the United States and in Europe. When we introduce the third asymmetry, the US GNP is stabilized, but the global imbalances are bigger and the negative effect on the European GNP is stronger. The results of these simulations are summarized in table 3 (columns: United States, Euro area, Asia).

4.4.1 Scenario 1: a shock on the U.S. propensity to spend without global saving glut hypothesis

The U.S. propensity to spend rises from 1 to 1.005. The supply of U.S. securities and the American GNP increase (figure 5). The result is a depreciation of the dollar vis-à-vis the euro and an increase in Asian reserves to avoid an appreciation of the Asian currency. The increase in the GNP generates a trade deficit vis-à-vis Asia that is partly offset by a surplus vis-à-vis Europe. The euro area has a trade deficit with the United States and Asia as the euro appreciates against the currencies of both countries. The net external debts of the United States and Europe increase and their net incomes are negative. Finally, Asia has a current account surplus and the United States and Europe a deficit. The European GNP decreases and that of Asia expands.

4.4.2 Scenario 2: a shock on the propensity to import of countries 1 and 2 without global saving glut hypothesis

The propensity to import Asian products in the United States and Europe increases from 0.05 to 0.055 (figure 6). The trade balances of both economies decline while Asia has a surplus. The trade balances tend to return to equilibrium with the depreciation of the euro vis-à-vis both the Asian and American currencies and with the decline of European and U.S. GNPs. Asia accumulates U.S. assets in order to avoid an appreciation of its currency. This accumulation leads to an increase of Asian net incomes. The U.S. GNP diminishes gradually to adjust to the shock. The European GNP is reduced and then stabilized after the euro has depreciated. Finally, the increase of the U.S. and European propensities to import causes a global recession and growing global imbalances.

4.4.3 Scenario 3: a shock on the propensity to import of countries 1 and 2 with global saving glut hypothesis

In this case, the propensity to spend of the United States adjusts itself in order to maintain their GNP unchanged following the increase in their propensity to import Asian products (figure 7). The dollar depreciates against the euro since the effect of the propensity to spend is stronger than that of the propensity to import. Asia accumulates U.S. assets to maintain the level of its exchange rate with the dollar. The trade and current account imbalances persist. Thus, global imbalances grow rapidly: the current account deficit and the net foreign debt of the United States continue to rise, as do the current account surplus and the net foreign assets of Asia. External imbalances in the euro area are less important. The adjustment takes place through the gradual reduction of European GNP that is linked to the deteriorating competitiveness. The results of the simulations under the global saving glut hypothesis are very close to the evolution of the pattern of global imbalances in the 2000s. The trends in the current account imbalances are similar, with a surplus in Asia and a deficit in the United States (figure 11). However, according to the simulation, the euro area should be in deficit, but the observations show a different trend around the equilibrium.
The trends in net foreign debts are very close to the observations: a growing net debt in the United States, a soaring net stock of assets in Asia and a smoothly increasing net debt in the euro area (figure 12).

Conclusion

The model includes a real and financial approach of global imbalances. The three-country model has only 50 equations, or about half of the models of [START_REF] Zhao | A Three-Country Model with Fixed Exchange Rates under a Stock-Flow Coherent Approach[END_REF], Godley and Lavoie (2007) and Lavoie and Zhao (2008). Unlike the models of Obstfeld and Rogoff (2005), Blanchard et al. (2005) and Caballero et al. (2006), it does not involve a constant GNP; thus, we can observe both the real and financial implications of the shocks.

A first series of simulations is conducted in a model with three identical countries under flexible exchange rates. The increase in the propensity to import country 3's products in countries 1 and 2 causes a reduction of the GNP of both countries and provokes small external imbalances that stabilize after the depreciation of exchange rates. Then, we introduce three asymmetries in order to model the global saving glut hypothesis. First, the Asia-OPEC area is pegged on the dollar. Second, we assume a limited absorption capacity in fixed-exchange-rate countries, tied to the small population of the OPEC countries and to the limited financial development in Asia. Third, under Bernanke's global saving glut hypothesis, it is assumed that the United States raise their propensity to spend in order to maintain their GNP stable following an increase in their propensity to import. Under the first two constraints, the productivity shock implies the stagnation of Asian GNP and a strong negative effect on the U.S. GNP and that of the euro area. By adding the third asymmetry, it appears that the recession can be avoided in the United States if they increase their propensity to spend. However, the consequences for the global economy would be disastrous: global imbalances grow very fast and the euro area experiences a deep recession. Finally, we find that the trends of the simulations are very close to the observed trends of the pattern of global imbalances, both in stocks and in flows.

A count of the equations confirms that the three versions of the model are closed:
• in the flexible exchange rates model, there are 50 equations for 50 endogenous variables,
• the model with a fixed exchange rate between the U.S. and Asia has 49 equations for 49 endogenous variables,
• when we introduce the global saving glut hypothesis, there are 50 equations for 50 unknowns.
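The simulation protocol of section 4.4 can be summarized by a short, model-agnostic skeleton: start from the baseline steady state, change one parameter at the shock date, and iterate the model period by period. The 50 equations are not reproduced in this text, so the period solver is a placeholder and every name below is hypothetical.

```python
# A generic shock-simulation skeleton, a sketch only: solve_period stands in
# for the model's 50 equations, which are not listed in this paper's text.
def simulate(solve_period, state0, params, shocks, horizon=100):
    """shocks maps a date to parameter updates, e.g. {5: {"mu13": 0.055}}."""
    state, path = dict(state0), []
    for t in range(horizon):
        if t in shocks:
            params.update(shocks[t])          # apply the shock at its date
        state = solve_period(state, params)   # solve the period-t system
        path.append((t, state["Y1"], state["Y2"], state["Y3"]))  # record GNPs
    return path

# Scenario 2, for instance, would be run as:
# simulate(solve_period, baseline_state, baseline_params,
#          shocks={5: {"mu13": 0.055, "mu23": 0.055}})
```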
Table 1: Values of model parameters

Symbol | Description | Status* | Initial value
ex/e | exchange rate (1 € = ex/e ¥) | implicit | 1
1/ex | exchange rate (1 ¥ = 1/ex $) | implicit | 1
e/ex | exchange rate (1 ¥ = e/ex €) | implicit | 1
m120 | constant in the equation of imports of European products | exogenous | 0.05
m210 | constant in the equation of imports of American products | exogenous | 0.05
m310 | constant in the equation of imports of American products | exogenous | 0.05
m121 | income elasticity in the equation of imports of European products | exogenous | 1
m211 | income elasticity in the equation of imports of American products | exogenous | 1
m311 | income elasticity in the equation of imports of American products | exogenous | 1
m122 | price elasticity in the equation of imports of European products | exogenous | 1
m212 | price elasticity in the equation of imports of American products | exogenous | 1
m312 | price elasticity in the equation of imports of American products | exogenous | 1
m130 | constant in the equation of imports of Asian products | exogenous | 0.05
m230 | constant in the equation of imports of Asian products | exogenous | 0.05
m320 | constant in the equation of imports of European products | exogenous | 0.05
m131 | income elasticity in the equation of imports of Asian products | exogenous | 1
m231 | income elasticity in the equation of imports of Asian products | exogenous | 1
m321 | income elasticity in the equation of imports of European products | exogenous | 1
m132 | price elasticity in the equation of imports of Asian products | exogenous | 1
m232 | price elasticity in the equation of imports of Asian products | exogenous | 1
m322 | price elasticity in the equation of imports of European products | exogenous | 1
λ120 | constant in the propensity to hold European assets equation | exogenous | 0.1
λ210 | constant in the propensity to hold American assets equation | exogenous | 0.1
λ310 | constant in the propensity to hold American assets equation | exogenous | 0.1
λ121 | coefficient on r1 in the propensity to hold European assets equation | exogenous | 0.2
λ211 | coefficient on r1 in the propensity to hold American assets equation | exogenous | 0.4
λ311 | coefficient on r1 in the propensity to hold American assets equation | exogenous | 0.4
λ122 | coefficient on r2 in the propensity to hold European assets equation | exogenous | 0.4
λ212 | coefficient on r2 in the propensity to hold American assets equation | exogenous | 0.2
λ312 | coefficient on r2 in the propensity to hold American assets equation | exogenous | 0.2
λ123 | coefficient on r3 in the propensity to hold European assets equation | exogenous | 0.2
λ213 | coefficient on r3 in the propensity to hold American assets equation | exogenous | 0.2
λ313 | coefficient on r3 in the propensity to hold American assets equation | exogenous | 0.2
IM12 | imports of European products | endogenous | 5
IM21 | imports of American products | endogenous | 5
IM31 | imports of American products | endogenous | 5
IM13 | imports of Asian products | endogenous | 5
IM23 | imports of Asian products | endogenous | 5
IM32 | imports of European products | endogenous | 5
µ12 | propensity to import European products | endogenous | 0.05
µ21 | propensity to import American products | endogenous | 0.05
µ31 | propensity to import American products | endogenous | 0.05
ω12 | propensity to hold European assets | endogenous | 0.2
ω21 | propensity to hold American assets | endogenous | 0.2
ω31 | propensity to hold American assets | endogenous* | 0.2
ω13 | propensity to hold Asian assets | endogenous | 0.2
ω23 | propensity to hold Asian assets | endogenous | 0.2
ω33 | propensity to hold Asian assets | endogenous | 0.2
DC21 | commercial deficit vis-à-vis the United States | endogenous | 0
DC31 | commercial deficit vis-à-vis the United States | endogenous | 0
DC13 | commercial deficit vis-à-vis Asia | endogenous | 0
DC23 | commercial deficit vis-à-vis Asia | endogenous | 0
DC32 | commercial deficit vis-à-vis the euro area | endogenous | 0
NFD12 | net foreign debt vis-à-vis the euro area | endogenous | 0
NFD21 | net foreign debt vis-à-vis the United States | endogenous | 0
NFD31 | net foreign debt vis-à-vis the United States | endogenous | 0
NFD13 | net foreign debt vis-à-vis Asia | endogenous | 0
NFD23 | net foreign debt vis-à-vis Asia | endogenous | 0
NFD32 | net foreign debt vis-à-vis the euro area | endogenous | 0

* The first term indicates the status of the variable in the first model, in flexible exchange rates; the second term indicates its status when country 3 is in fixed exchange rates and the global saving glut assumption is made.

Figure 2: Evolution of the crude oil price (dollars a barrel). Source: IMF, IFS
Figure 4: Evolution of current account imbalances (in billions of dollars). Source: CEPII-Chelem
Figure 5: Evolution of reserves minus gold (in billions of dollars). Source: Lane and Milesi-Ferretti (2007)
Figure 9: The propensities to import Asian goods in the United States and in the euro area pass from 0.05 to 0.055, with fixed exchange rates and limited absorption capacity
Figure 11: Evolution of current account imbalances (% of GDP). Source: CEPII-Chelem
Figure 12: Evolution of Net Foreign Assets (% of GDP). Source: Lane and Milesi-Ferretti (2007)
04112902
en
[ "sde", "sdu", "sdu.envi", "sdu.stu.gp" ]
2024/03/04 16:41:24
2002
https://hal.science/hal-04112902/file/AA2002-9-Ogawa-IGARSS.pdf
Kenta Ogawa, Thomas Schmugge, Frederic Jacob, Andrew French

Estimation of Broadband Emissivity from Satellite Multi-Channel Thermal Infrared Data Using Spectral Libraries

I. INTRODUCTION

Surface broadband emissivity is an important parameter for estimating the longwave surface energy. The broadband emissivity may vary significantly: in the 8-12 µm range, the spectral emissivity varies from 0.7 to 1.0 for bare soils and rocks (see Fig. 1), while the variation is smaller in the 5-8 µm and 12-14 µm ranges. A constant emissivity is often used for the land surface in energy balance studies and general circulation models (GCMs), but the spatial variation of broadband emissivity may cause a feedback and affect surface temperature in GCMs. Recent spaceborne thermal infrared multispectral sensors, such as ASTER, allow the estimation of spectral emissivities from local to global scales. We are helped in this effort by the fact that the peak in the thermal emission is at ~10 µm, i.e., the region where we have measurements. In this study, we propose to express the broadband emissivity as a linear combination of the channel emissivities of ASTER/TIR. First, we produced a dataset of the broadband emissivity (3-14 µm) and of the channel emissivities for ASTER/TIR, using spectral libraries. Then we calibrated and validated a regression using this dataset. Finally, we computed broadband emissivities over the Jornada Experimental Range [START_REF] Schmugge | Temperature and emissivity separation from multispectral thermal infrared observations[END_REF], New Mexico, with ASTER/TIR using the calibrated regression.

II. METHOD & DATA

Broadband emissivity is defined in (1):

$$\varepsilon = \frac{\int_{\lambda_1}^{\lambda_2}\varepsilon_\lambda B(\lambda,T)\,d\lambda}{\int_{\lambda_1}^{\lambda_2}B(\lambda,T)\,d\lambda} \quad (1)$$

where $\varepsilon_\lambda$ is the surface thermal infrared spectral emissivity, $\lambda$ is the wavelength, $B$ is the Planck function, and $T$ is the surface temperature. In this study, we selected the wavelength range $\lambda_1 = 3.3$ µm to $\lambda_2 = 14.0$ µm, since the spectral libraries cover this range. We assume T = 300 K in this analysis. We computed the difference of broadband emissivity caused by variation of the target temperature: it was smaller than 0.005 over the range from 270 K to 330 K. We note that the emission at wavelengths longer than 14 µm is about 40 % to 50 % of the total blackbody radiation and that we have no spectral emissivity data at these longer wavelengths. The ASTER channel emissivity $\varepsilon_{ch}$ is defined in (2) as:

$$\varepsilon_{ch} = \frac{\int_{\lambda_1}^{\lambda_2} f_{ch}(\lambda)\,\varepsilon_\lambda B(\lambda,T)\,d\lambda}{\int_{\lambda_1}^{\lambda_2} f_{ch}(\lambda)\,B(\lambda,T)\,d\lambda} \quad (2)$$

where $f_{ch}$ is the spectral response function of the ASTER/TIR channel. We assume that the broadband emissivity can be expressed as in (3):

$$\varepsilon_{3.3-14.0} = \sum_{ch=10}^{14} a_{ch}\,\varepsilon_{ch} + c \quad (3)$$

The center wavelengths of ASTER channels 10 to 14 are 8.3, 8.65, 9.1, 10.6 and 11.3 µm, respectively. We then calibrate and validate the coefficients of (3) using two spectral libraries: one collected by Jack Salisbury of Johns Hopkins University (JHU Library) [START_REF] Salisbury | Emissivity of terrestrial materials in the 8-14 µm atmospheric window[END_REF] and the other collected by the University of California Santa Barbara (UCSB Library) [START_REF] Snyder | Thermal infrared (3-14 µm) bi-directional reflectance measurement of sands and soils[END_REF].
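As an illustration of equations (1) and (2), the sketch below computes Planck-weighted broadband and channel emissivities at T = 300 K for an arbitrary spectral emissivity curve. It is only a sketch: the Gaussian channel response stands in for the true ASTER/TIR spectral response functions, and the emissivity curve is a placeholder.

```python
# Planck-weighted broadband (eq. 1) and channel (eq. 2) emissivities at 300 K.
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

def planck(lam_um, T=300.0):
    """Spectral radiance B(lambda, T), wavelength given in micrometers."""
    lam = lam_um * 1e-6
    return 2.0 * H * C**2 / lam**5 / (np.exp(H * C / (lam * KB * T)) - 1.0)

lam = np.linspace(3.3, 14.0, 2000)          # integration grid (micrometers)
eps = 0.95 + 0.03 * np.sin(lam)             # placeholder spectral emissivity

B = planck(lam)
eps_broadband = np.trapz(eps * B, lam) / np.trapz(B, lam)        # equation (1)

def channel_emissivity(center_um, width_um=0.35):
    f = np.exp(-0.5 * ((lam - center_um) / width_um) ** 2)       # stand-in response
    return np.trapz(f * eps * B, lam) / np.trapz(f * B, lam)     # equation (2)

eps_ch = [channel_emissivity(c) for c in (8.3, 8.65, 9.1, 10.6, 11.3)]
print(eps_broadband, eps_ch)
```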
We selected 150 sample spectra from the JHU Library, including soils, vegetation, rocks (except fine powders), water, and ice emissivity spectra, and 107 sample spectra from the UCSB Library, including soils, vegetation, and water. The JHU Library contains directional hemispherical spectral reflectances $\rho_\lambda$; we therefore converted them to spectral emissivity $\varepsilon_\lambda$ using Kirchhoff's law, $\varepsilon_\lambda = 1 - \rho_\lambda$. The spectra of the fine powdered samples (particle size < 50 µm) were not used in this study, because they may not follow this relation [START_REF] Salisbury | Thermal-infrared remote sensing and Kirchhoff's law, 1, Laboratory measurements[END_REF]. ASTER data were acquired on 12 May 2001 over the Jornada Experimental Range in New Mexico, a desert grassland where the main vegetation components are grass and shrubs. The data were atmospherically corrected using MODTRAN with NCEP atmospheric profiles. Then the MMD method [START_REF] Gillespie | A temperature and emissivity separation algorithm for Advanced Spaceborne Thermal Emission and Reflection radiometer (ASTER) images[END_REF] was applied to separate the surface temperature and spectral emissivity from the surface spectral radiance.

III. RESULTS

A. Spectral libraries and broadband emissivity

Fig. 1 displays the average and range of spectral emissivity for data from both libraries. The range is larger over the 3-5 µm and 8-10 µm regions than in other wavelength regions. Because the emitted energy is low over 3 to 5 µm at around 300 K, it is expected that the emissivities from 8 to 10 µm (ASTER channels 10, 11, 12) will most strongly affect the broadband emissivity. The calculated broadband emissivities range from 0.885 to 0.994. The mean emissivities of rock, soil, vegetation, and water are 0.938, 0.955, 0.967, and 0.982, respectively.

B. Calibration & Validation

Table I displays the calibrated coefficients computed from the step-wise regression. The coefficient of 0.0 for channel 13 ($a_{13}$) means that the variable was dropped, because it was not statistically significant. Table II shows the range of emissivity, the RMSE and the maximum error. The RMSE is smaller than 0.01. The error was large for some kinds of igneous rocks, especially ultramafic and mafic rock samples such as picrite. The maximum absolute error is 0.023. Table III shows the RMSE, maximum error, and range of emissivity for the validation with the UCSB Library. The estimated broadband emissivities agree well with the calculated ones: the difference was smaller than 0.01 for 100 of the 107 samples (93%). Figure 2 shows the comparison of the estimated broadband emissivity against the value calculated using the spectral libraries. We noticed an underestimation for the samples with higher emissivity (around 0.99), such as vegetation and water. This may result from the small number of spectra of these types in the calibration. We also tried other empirical functions, such as 1) a linear regression without a constant, and 2) a polynomial regression, but we did not observe significant improvements in either instance.
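The calibration of equation (3) amounts to an ordinary least-squares fit of the library broadband emissivities on the five channel emissivities plus a constant, with non-significant channels dropped. A minimal sketch, with random placeholders standing in for the library-derived values (the coefficient values of Table I are not reproduced in this text):

```python
# Least-squares calibration of equation (3) on placeholder data.  eps_ch and
# eps_bb would in practice be the channel and broadband emissivities computed
# from the library spectra as in the previous snippet.
import numpy as np

rng = np.random.default_rng(0)
eps_ch = 0.90 + 0.08 * rng.random((150, 5))        # placeholder: 150 spectra x 5 channels
true_a = np.array([0.2, 0.1, 0.2, 0.0, 0.3])       # placeholder coefficients
eps_bb = eps_ch @ true_a + 0.15 + 0.002 * rng.standard_normal(150)

X = np.column_stack([eps_ch, np.ones(len(eps_bb))])  # five channels + constant
coef, *_ = np.linalg.lstsq(X, eps_bb, rcond=None)
rmse = np.sqrt(np.mean((eps_bb - X @ coef) ** 2))
print(coef, rmse)   # in the paper, a_13 is dropped as not statistically significant
```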
C. Application to ASTER data

The calibrated regression was finally applied to the emissivities computed from the ASTER/TIR data acquired over the Jornada Experimental Range in New Mexico. The result is shown in Fig. 3, and the histogram of the extracted emissivities is shown in Fig. 4. Emissivities range from 0.93 to 0.97 with an average of ~0.95. In Fig. 3, the bright area running from the upper center to the lower center of the image corresponds to a ridge with vegetation. Another bright area in the lower left corresponds to agricultural fields along the Rio Grande River. These areas have higher emissivities, ranging from 0.96 to 0.97. The darker areas at the upper left and at the right edge are bare soil or sparsely vegetated sites; the upper right area corresponds to the gypsum at White Sands. These areas have lower emissivities, ranging from 0.93 to 0.95.

IV. SUMMARY AND CONCLUSIONS

The results of the spectral library analysis suggest that the estimation of the 3-14 µm broadband emissivity by linear regression from the emissivities of the ASTER/TIR channels is potentially accurate, with an expected RMSE lower than 0.01 for the emissivity range from 0.90 to 0.99. The broadband emissivity estimated with the regression at the Jornada site showed a variation of 0.04, from 0.93 to 0.97.

Fig. 1. The mean and range of emissivity in the spectral library samples.
Fig. 2. Comparison of the estimated broadband emissivity against the value calculated from the spectral libraries.
Fig. 3. Broadband emissivity map of the Jornada Experimental Range.
Fig. 4. Histogram of the broadband emissivities extracted from the ASTER scene.

ACKNOWLEDGMENTS

This study was supported by the ASTER Project of NASA's EOS-Terra Program.
03735873
en
[ "math.math-pr" ]
2024/03/04 16:41:24
2023
https://hal.science/hal-03735873v2/file/martingale_vrjp_preprint2.pdf
V. Rapenne (Institut Camille Jordan)

About the asymptotic behaviour of the martingale associated with the Vertex Reinforced Jump Process on trees and Z^d

We study the asymptotic behaviour of the martingale $(\psi_n(o))_{n\in\mathbb{N}}$ associated with the Vertex Reinforced Jump Process (VRJP). We show that it is bounded in $L^p$ for every $p>1$ on trees and uniformly integrable on $\mathbb{Z}^d$ in the whole transient phase of the VRJP. Moreover, when the VRJP is recurrent on trees, we have good estimates on the moments of $\psi_n(o)$ and we can compute the exact decreasing rate $\tau$ such that $n^{-1}\ln(\psi_n(o))\sim-\tau$ almost surely, where $\tau$ is related to standard quantities for branching random walks. Besides, on trees, at the critical point, we show that $n^{-1/3}\ln(\psi_n(o))\sim-\rho_c$ almost surely, where $\rho_c$ can be computed explicitly. Furthermore, at the critical point, we prove that the discrete process associated with the VRJP is a mixture of positive recurrent Markov chains. Our proofs use properties of the $\beta$-potential associated with the VRJP and techniques coming from the domain of branching random walks.

1 Introduction and first definitions

Let $(V,E)$ be a locally finite graph. Let $W>0$. In [START_REF] Davis | Vertex-reinforced jump processes on trees and finite graphs[END_REF], Davis and Volkov introduced a continuous self-reinforced random walk $(Y_s)_{s\ge0}$ known as the Vertex Reinforced Jump Process (VRJP), which is defined as follows: the VRJP starts from some vertex $i_0\in V$ and, conditionally on the past before time $s$, it jumps from a vertex $i$ to one of its neighbours $j$ at rate $WL_j(s)$, where $L_j(s) = 1 + \int_0^s\mathbf{1}\{Y_u=j\}\,du$. In [START_REF] Sabot | Edge-reinforced random walk, vertex-reinforced jump process and the supersymmetric hyperbolic sigma model[END_REF], Sabot and Tarrès defined the time change $D$ such that, for every $s\ge0$, $D(s) = \sum_{i\in V}\left(L_i(s)^2-1\right)$. Then, they introduced the time-changed process $(Z_t)_{t\ge0} = (Y_{D^{-1}(t)})_{t\ge0}$. If $V$ is finite, this process is easier to analyse than $Y$ because it is a mixture of Markov processes whose mixing field has a density which is known explicitly. The density of the mixing field of $Z$ was already known from a hyperbolic supersymmetric sigma model. This supersymmetric model was first studied in [START_REF] Disertori | Quasi-diffusion in a 3D Supersymmetric Hyperbolic Sigma Model[END_REF] and [START_REF] Disertori | Anderson localization for a supersymmetric sigma model[END_REF], and Sabot and Tarrès combined these previous works with their own results in order to make important progress in the understanding of the VRJP. However, their formula for the density of the environment of the VRJP holds only on finite graphs. This difficulty was solved in [START_REF] Sabot | The vertex reinforced jump process and a random schrödinger operator on finite graphs[END_REF] and [START_REF] Sabot | A random Schrödinger operator associated with the vertex reinforced jump process on infinite graphs[END_REF], where Sabot, Tarrès and Zeng introduced a $\beta$-potential with distribution $\nu_V^W$ which provides a representation of the environment of the VRJP on infinite graphs. Thanks to this $\beta$-potential, Sabot and Zeng introduced a positive martingale $(\psi_n(o))_{n\in\mathbb{N}}$ which converges toward some random variable $\psi(o)$. A remarkable fact is that $\psi(o)=0$ if and only if the VRJP is recurrent. Moreover, they proved a 0-1 law for transitive graphs: on these graphs, the VRJP is either almost surely recurrent or almost surely transient. We can study the VRJP on any locally finite graph $V$.
However, in this paper, we will focus only on the two most important cases:

• First, we can consider the case where $V=\mathbb{Z}^d$. In this case, when $d\in\{1,2\}$, the VRJP is always recurrent (see [START_REF] Sabot | A random Schrödinger operator associated with the vertex reinforced jump process on infinite graphs[END_REF], [START_REF] Sabot | Polynomial localization of the 2d-vertex reinforced jump process[END_REF] and [START_REF] Kozma | Power-law decay of weights and recurrence of the two-dimensional VRJP[END_REF]). On the contrary, when $d\ge3$, Sabot and Tarrès proved in [START_REF] Sabot | Edge-reinforced random walk, vertex-reinforced jump process and the supersymmetric hyperbolic sigma model[END_REF] that the time-changed VRJP is recurrent for small $W$ and transient for large $W$. Further, in [START_REF] Poudevigne | Monotonicity and phase transition for the vrjp and the errw[END_REF], thanks to a clever coupling of $\psi_n(o)$ for different weights, Poudevigne proved that there is a unique transition point $W_c(d)$ between recurrence and transience on $\mathbb{Z}^d$ if $d\ge3$.

• Another interesting case for the VRJP is when $V$ is a tree. In this case, the environment of the VRJP is easy to describe thanks to independent Inverse Gaussian random variables. Using this representation of the environment, in [START_REF] Chen | Speed of vertex-reinforced jump process on Galton-Watson trees[END_REF], Chen and Zeng proved that there is a unique phase transition between recurrence and transience on supercritical Galton-Watson trees for the time-changed VRJP. (This result was already proved in [START_REF] Basdevant | Continuous-time vertex reinforced jump processes on Galton-Watson trees[END_REF], but the proof of [START_REF] Basdevant | Continuous-time vertex reinforced jump processes on Galton-Watson trees[END_REF] was very different and did not use the representation of the VRJP as a mixture of Markov processes.) Furthermore, the transition point $W_c(\mu)$ can be computed explicitly and depends only on the mean of the offspring law $\mu$ of the Galton-Watson tree.

Therefore, if $V$ is a Galton-Watson tree or $\mathbb{Z}^d$ with $d\ge3$, the following dichotomy is known: there exists $W_c\in\mathbb{R}_+^*$ (depending on $V$) such that

• if $W<W_c$, then a.s. $\psi(o)=0$, i.e. the VRJP is recurrent;
• if $W>W_c$, then a.s. $\psi(o)>0$, i.e. the VRJP is transient.

The recurrence of the VRJP can be regarded as a form of "strong disorder". Indeed, if $W$ is small, the reinforcement, i.e. the disorder of the system compared to a simple random walk, is very strong. Therefore, the martingale $(\psi_n(o))_{n\in\mathbb{N}}$ associated with the system vanishes only when there is strong disorder. This situation is reminiscent of directed polymers in random environment; one can refer to [START_REF] Comets | Directed Polymers in Random Environments[END_REF] for more information on this topic. In the case of directed polymers, there is a positive martingale $(M_n)_{n\in\mathbb{N}}$ which converges toward a random variable $M_\infty$. $(M_n)_{n\in\mathbb{N}}$ and $(\psi_n(o))_{n\in\mathbb{N}}$ play analogous roles in different contexts. Indeed, $M_\infty>0$ a.s. if and only if the system exhibits "weak disorder", exactly as for $\psi(o)$. However, on $\mathbb{Z}^d$ or on trees, it is possible that $M_\infty>0$ a.s. while $(M_n)_{n\in\mathbb{N}}$ is not bounded in $L^2$ (see [START_REF] Camanes | The critical temperature of a directed polymer in a random environment[END_REF] and [START_REF] Buffet | Directed polymers on trees: a martingale approach[END_REF]). Therefore, a natural question regarding $(\psi_n(o))_{n\in\mathbb{N}}$ is to know when it is bounded in $L^p$ for a fixed value of $p>1$.
Moreover, as shown in the proof of Theorem 3 in [START_REF] Sabot | A random Schrödinger operator associated with the vertex reinforced jump process on infinite graphs[END_REF], $L^p$ boundedness of the martingale $(\psi_n(o))_{n\in\mathbb{N}^*}$ on $\mathbb{Z}^d$ for sufficiently large $p$ implies the existence of a diffusive regime for the VRJP, i.e. the VRJP satisfies a central limit theorem. We would like to know whether this diffusive regime coincides with the transient regime or not. This gives another good reason to study the moments of $(\psi_n(o))_{n\in\mathbb{N}}$. Using [START_REF] Disertori | Quasi-diffusion in a 3D Supersymmetric Hyperbolic Sigma Model[END_REF], [START_REF] Sabot | A random Schrödinger operator associated with the vertex reinforced jump process on infinite graphs[END_REF] and [START_REF] Poudevigne | Monotonicity and phase transition for the vrjp and the errw[END_REF], one can prove that, on $\mathbb{Z}^d$ with $d\ge3$, for any $p>1$, there exists a threshold $W^{(p)}(d)$ such that $(\psi_n(o))_{n\in\mathbb{N}}$ is bounded in $L^p$ for every $W>W^{(p)}(d)$. However, we do not know whether $W^{(p)}(d)=W_c(d)$ for every $p>1$ or not. In this paper, we will prove that $(\psi_n(o))_{n\in\mathbb{N}}$ is uniformly integrable on $\mathbb{Z}^d$ as soon as the VRJP is transient. Moreover, we will prove that, on trees, $(\psi_n(o))_{n\in\mathbb{N}}$ is bounded in $L^p$ for any $p>1$ as soon as $W>W_c(\mu)$. Furthermore, we will also look at the rate of convergence toward 0 of $(\psi_n(o))_{n\in\mathbb{N}}$ on trees when $W<W_c(\mu)$, under mild assumptions. We have an $L^p$ version and an almost sure version of the estimate of the decay of $(\psi_n(o))_{n\in\mathbb{N}}$ toward 0. Finally, a natural question consists in finding the behaviour of the VRJP at the critical point $W_c$. On Galton-Watson trees, it was proved in [START_REF] Chen | Speed of vertex-reinforced jump process on Galton-Watson trees[END_REF] and [START_REF] Basdevant | Continuous-time vertex reinforced jump processes on Galton-Watson trees[END_REF] that the time-changed VRJP is a mixture of recurrent Markov processes at the critical point. In this paper, we prove that it is even a mixture of positive recurrent Markov processes. However, the asymptotic behaviour of the VRJP at the critical point on $\mathbb{Z}^d$ remains unknown. We will also compute the rate of convergence of $(\psi_n(o))_{n\in\mathbb{N}}$ on trees when $W=W_c(\mu)$.

2 Context and statement of the results

2.1 General notation

Let $(V,E)$ be a locally finite countable graph with non-oriented edges. We assume that $V$ has a root $o$. We write $i\sim j$ when $\{i,j\}\in E$. For every $n\in\mathbb{N}$, we define $V_n := \{x\in V,\ d(o,x)\le n\}$, where $d$ is the graph distance on $(V,E)$. For every $n\in\mathbb{N}^*$, we denote the boundary of $V_n$, that is $\{i\in V_n,\ \exists j\in V_n^c\ \text{such that}\ \{i,j\}\in E\}$, by $\partial V_n$. Let us denote by $E_n$ the set of edges of $V_n$. If $M$ is a matrix (or possibly an operator) with indices in a set $A\times B$, then for every $A'\subset A$ and $B'\subset B$, the restriction of $M$ to $A'\times B'$ is denoted by $M_{A',B'} = (M(i,j))_{(i,j)\in A'\times B'}$. If $M$ is a symmetric matrix, we write $M>0$ when $M$ is positive definite. In this article, we make extensive use of the Inverse Gaussian distribution. Recall that an Inverse Gaussian random variable with parameters $(a,\lambda)\in(\mathbb{R}_+^*)^2$ has density

$$\mathbf{1}\{x>0\}\left(\frac{\lambda}{2\pi x^3}\right)^{1/2}\exp\left(-\frac{\lambda(x-a)^2}{2a^2x}\right)dx. \quad (2.1)$$

The law of the Inverse Gaussian distribution with parameters $(a,\lambda)\in(\mathbb{R}_+^*)^2$ is denoted by $IG(a,\lambda)$. For $W>0$ and $t\in\mathbb{R}$, if $A\sim IG(1,W)$, we write $Q(W,t)=\mathbb{E}[A^t]$. A well-known property of the Inverse Gaussian distribution states that $Q(W,t)=Q(W,1-t)$.
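A small numerical sanity check of $Q(W,t)$ and of the symmetry $Q(W,t)=Q(W,1-t)$: from the density (2.1) with $a=1$, $\lambda=W$ and the integral representation $\int_0^{+\infty}x^{\nu-1}e^{-\frac{W}{2}(x+1/x)}dx = 2K_\nu(W)$, one gets $Q(W,t) = \sqrt{2W/\pi}\,e^W K_{t-1/2}(W)$, and the symmetry then reflects $K_\nu=K_{-\nu}$. The sketch below (our own code) compares this formula with a Monte Carlo estimate.

```python
# Sanity check of Q(W, t) = E[A^t], A ~ IG(1, W), and of Q(W, t) = Q(W, 1 - t).
# The Bessel form below is derived from density (2.1) with a = 1, lambda = W.
import numpy as np
from scipy.stats import invgauss
from scipy.special import kv

def Q(W, t):
    """Q(W, t) = sqrt(2W/pi) * exp(W) * K_{t-1/2}(W)."""
    return np.sqrt(2.0 * W / np.pi) * np.exp(W) * kv(t - 0.5, W)

W, t = 1.5, 0.3
# scipy's invgauss(mu, scale=s) has mean mu*s and shape parameter s, so
# IG(1, W) corresponds to invgauss(mu=1/W, scale=W).
A = invgauss.rvs(1.0 / W, scale=W, size=10**6,
                 random_state=np.random.default_rng(0))

print(Q(W, t), Q(W, 1 - t))                 # equal, since K_nu = K_{-nu}
print(A.mean(), (A ** t).mean(), Q(W, t))   # mean 1, Monte Carlo vs formula
```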
2.2 The β-potential and the martingale $(\psi_n)_{n\in\mathbb{N}}$

Let $(V,E)$ be an infinite countable graph with non-oriented edges. In this paper, the graph $(V,E)$ will always have a special vertex $o$ called the root. Actually, in our results, $V$ is a rooted tree or $\mathbb{Z}^d$ with root 0. Let $W>0$. In [START_REF] Sabot | A random Schrödinger operator associated with the vertex reinforced jump process on infinite graphs[END_REF], the authors introduced a random potential $(\beta_i)_{i\in V}$ on $V$ with distribution $\nu_V^W$ such that, for every finite subset $U\subset V$ and every $(\lambda_i)_{i\in U}\in\mathbb{R}_+^U$,

$$\int\exp\Big(-\sum_{i\in U}\lambda_i\beta_i\Big)\,\nu_V^W(d\beta) = \exp\Bigg(-\frac{1}{2}\sum_{\substack{i\sim j\\ i,j\in U}}W\left(\sqrt{(1+\lambda_i)(1+\lambda_j)}-1\right) - \sum_{\substack{i\sim j\\ i\in U,\ j\notin U}}W\left(\sqrt{1+\lambda_i}-1\right)\Bigg)\prod_{i\in U}\frac{1}{\sqrt{1+\lambda_i}}. \quad (2.2)$$

Looking at the Laplace transform in (2.2), we see that $(\beta_i)_{i\in V}$ is 1-dependent, that is, if $U_1$ and $U_2$ are finite subsets of $V$ which are not connected by an edge, then $(\beta_i)_{i\in U_1}$ and $(\beta_i)_{i\in U_2}$ are independent under $\nu_V^W$. Moreover, the restriction of this potential to finite subsets has a density which is known explicitly; we give the expression of this density in subsection 3.1. Furthermore, for every $(\beta_i)_{i\in V}$, let us introduce the operator $H_\beta$ on $V$ which satisfies

$$\forall(i,j)\in V^2,\quad H_\beta(i,j) = 2\beta_i\mathbf{1}\{i=j\} - W\mathbf{1}\{i\sim j\}.$$

By Proposition 1 in [SZ19], the support of $\nu_V^W$ is $\mathcal{D}_V^W = \{\beta\in\mathbb{R}^V,\ (H_\beta)_{U,U}\ \text{is positive definite for all finite subsets}\ U\subset V\}$. Therefore, under $\nu_V^W$, for every $n\in\mathbb{N}$, $(H_\beta)_{V_n,V_n}$ is positive definite; in particular, it is invertible. We denote by $\hat G_n$ the inverse of $(H_\beta)_{V_n,V_n}$. Moreover, for $n\in\mathbb{N}$ and $\beta\in\mathcal{D}_V^W$, let us define $(\psi_n(i))_{i\in V}$ as the unique solution of the equation

$$\begin{cases}(H_\beta\psi_n)(i)=0 & \forall i\in V_n,\\ \psi_n(i)=1 & \forall i\in V_n^c.\end{cases} \quad (2.3)$$

The idea behind the definition of $(\psi_n)_{n\in\mathbb{N}}$ is to create an eigenstate of $H_\beta$ when $n$ goes to infinity. We can make $n$ go to infinity thanks to the following proposition:

Proposition A (Theorem 1 in [SZ19]). For any $i,j\in V$, $(\hat G_n(i,j))_{n\in\mathbb{N}^*}$ is increasing $\nu_V^W$-a.s. In particular, there exists a random variable $\hat G(i,j)$ such that $\hat G_n(i,j)\to\hat G(i,j)$ as $n\to+\infty$, $\nu_V^W$-a.s. Further, for any $i,j\in V$, $\hat G(i,j)<+\infty$, $\nu_V^W$-a.s.

Moreover, $(\psi_n)_{n\in\mathbb{N}}$ is a vectorial martingale with positive components. In particular, for every $i\in V$, the martingale $(\psi_n(i))_{n\in\mathbb{N}}$ has an almost sure limit, which is denoted by $\psi(i)$. Besides, $(\hat G_n)_{n\in\mathbb{N}}$ is the bracket of $(\psi_n)_{n\in\mathbb{N}}$ in the sense that, for every $i,j\in V$, $(\psi_n(i)\psi_n(j)-\hat G_n(i,j))_{n\in\mathbb{N}}$ is a martingale. This martingale $(\psi_n)_{n\in\mathbb{N}}$ is crucial in order to study the asymptotic behaviour of the VRJP. One reason for this is that a representation of the environment of the discrete random walk associated with the VRJP starting from $i_0$ is given by $(WG(i_0,j)G(i_0,i))_{\{i,j\}\in E}$, where for every $(i,j)\in V^2$,

$$G(i,j) = \hat G(i,j) + \frac{1}{2\gamma}\psi(i)\psi(j),$$

where $\gamma$ is a random variable with distribution $\Gamma(1/2,1)$ which is independent of the random potential $\beta$. We will say more about the link between the VRJP and $(\psi_n)_{n\in\mathbb{N}}$ in Proposition B. Before this, let us give some notation.

2.3 Notation associated with the VRJP

2.3.1 General notation for the VRJP

In the previous section, for every deterministic graph $(V,E)$, we introduced the measure $\nu_V^W$ associated with the β-potential. We write $\mathbb{E}_{\nu_V^W}$ when we integrate with respect to this measure $\nu_V^W$. Moreover, we defined a martingale $(\psi_n(o))_{n\in\mathbb{N}}$. For a fixed graph $V$, we say that $(\psi_n(o))_{n\in\mathbb{N}}$ is bounded in $L^p$ if $\sup_{n\in\mathbb{N}}\mathbb{E}_{\nu_V^W}[\psi_n(o)^p]<+\infty$. We say that $(\psi_n(o))_{n\in\mathbb{N}}$ is uniformly integrable if $\lim_{K\to+\infty}\sup_{n\in\mathbb{N}}\mathbb{E}_{\nu_V^W}[\psi_n(o)\mathbf{1}\{\psi_n(o)\ge K\}]=0$.
We denote by $(\tilde Z_n)_{n\in\mathbb{N}}$ the discrete-time process associated with the VRJP, that is, the VRJP taken at jump times. We will see that it is a mixture of discrete random walks. Let us introduce the probability measure $P^{VRJP}_{V,W}$ under which $(\tilde Z_n)_{n\in\mathbb{N}}$ is the discrete-time process associated with the VRJP on a graph $V$ with constant weights $W$ starting from $o$.

2.3.2 Notation for the VRJP on trees

If $V$ is a rooted tree, there is a natural genealogical order $\le$ on $V$. For $u\in V$, the parent of $u$ is denoted by $\bar u$ and the generation of $u$ is denoted by $|u|$. If $(x,u)\in V^2$ is such that $x\le u$, then $|u|_x=|u|-|x|$. If $V$ is a Galton-Watson tree with offspring law $\mu$, let us denote by $GW^\mu$ the law of $V$. Then, let us define the probability measure $\mathbb{P}_{\mu,W}$ under which we first choose randomly the graph $V$ with distribution $GW^\mu$ and then choose randomly the potential $(\beta_i)_{i\in V}$ with distribution $\nu_V^W$. Moreover, we define $P^{VRJP}_{\mu,W}$ under which we first choose randomly the graph $V$ with distribution $GW^\mu$ and then choose randomly a trajectory on $V$ with distribution $P^{VRJP}_{V,W}$. We write $\mathbb{E}_{\mu,W}(\cdot)$ and $E^{VRJP}_{\mu,W}(\cdot)$ when we integrate with respect to $\mathbb{P}_{\mu,W}$ and $P^{VRJP}_{\mu,W}$ respectively.

2.4 The phase transition

The martingale $\psi$ is very important in order to understand the recurrence or transience of the VRJP, as explained by the following proposition:

Proposition B ([ST15], [SZ19], [START_REF] Poudevigne | Monotonicity and phase transition for the vrjp and the errw[END_REF] and [START_REF] Chen | Speed of vertex-reinforced jump process on Galton-Watson trees[END_REF]). Let us assume that $(V,E)$ is $\mathbb{Z}^d$. Then there exists $W_c(d)>0$ depending only on $d$ such that:
• If $W<W_c(d)$, $\nu_d^W$-a.s., for every $i\in\mathbb{Z}^d$, $\psi(i)=0$ and the VRJP is recurrent.
• If $W>W_c(d)$, $\nu_d^W$-a.s., for every $i\in\mathbb{Z}^d$, $\psi(i)>0$ and the VRJP is transient.
Moreover, $W_c(d)<+\infty$ if and only if $d\ge3$.
Now let us assume that $(V,E)$ is a supercritical Galton-Watson tree with offspring law $\mu$ such that $\mu(0)=0$. Then there exists $W_c(\mu)\in\mathbb{R}_+^*$ depending only on the mean of $\mu$ such that:
• If $W\le W_c(\mu)$, $\mathbb{P}_{\mu,W}$-a.s., for every $i\in V$, $\psi(i)=0$ and the VRJP is recurrent.
• If $W>W_c(\mu)$, $\mathbb{P}_{\mu,W}$-a.s., for every $i\in V$, $\psi(i)>0$ and the VRJP is transient.

2.5 Statement of the results

2.5.1 Results on Z^d

For now, we are not able to estimate the moments of the martingale $(\psi_n(o))_{n\in\mathbb{N}}$ in the transient phase. However, when $d\ge3$, we can prove uniform integrability of this martingale in the transient phase.

Theorem 1. We assume that $V=\mathbb{Z}^d$ with $d\ge3$ and that $W>W_c(d)$. Then the martingale $(\psi_n(o))_{n\in\mathbb{N}}$ is uniformly integrable.

2.5.2 Results on Galton-Watson trees

Let $\mu$ be a probability measure on $\mathbb{N}$. In this paper, we use the following hypotheses for Galton-Watson trees:
• Hypothesis $A_1$: $\mu(0)=0$ and $m:=\sum_{k=1}^{+\infty}k\mu(k)>1$.
• Hypothesis $A_2$: $\mu(1)=0$.
• Hypothesis $A_3$: there exists $\delta>0$ such that $\sum_{k=1}^{+\infty}k^{1+\delta}\mu(k)<+\infty$.
Our first theorem on trees states that, if $V$ is a Galton-Watson tree, $(\psi_n(o))_{n\in\mathbb{N}}$ is bounded in $L^p$ as soon as the VRJP is transient.

Theorem 2. Let $V$ be a Galton-Watson tree with offspring law $\mu$ satisfying hypothesis $A_1$. Let $W>W_c(\mu)$. Then, for every $p\in\,]1,+\infty[$, the martingale $(\psi_n(o))_{n\in\mathbb{N}}$ is bounded in $L^p$, $GW^\mu$-a.s.

In the recurrent phase, we already know that $\psi_n(o)\to0$ a.s. on any graph as $n$ goes to infinity. Thanks to the theory of branching random walks and the representation of the VRJP with the β-potential, we are able to be much more accurate on trees.
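Incidentally, the critical weight $W_c(\mu)$ appearing in these statements can be evaluated numerically from the characterization $mQ(W,1/2)=1$ recalled in Proposition H (subsection 3.3 below). A sketch, reusing the Bessel-function expression for $Q$ from the previous snippet:

```python
# Numerical computation of W_c(mu) on a Galton-Watson tree with mean offspring
# m, by solving m * Q(W, 1/2) = 1 (Proposition H below); the paper states that
# this solution is unique.
import numpy as np
from scipy.special import kv
from scipy.optimize import brentq

def Q_half(W):
    # Q(W, 1/2) = sqrt(2W/pi) * exp(W) * K_0(W), from density (2.1)
    return np.sqrt(2.0 * W / np.pi) * np.exp(W) * kv(0.0, W)

def W_c(m):
    return brentq(lambda W: m * Q_half(W) - 1.0, 1e-8, 100.0)

print(W_c(2.0))   # e.g. the critical weight for mean offspring m = 2
```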
Let us introduce some notation related to branching random walks in order to give the precise asymptotics of $(\psi_n(o))_{n\in\mathbb{N}}$. For every $m>1$ and $W>0$, we define

$$f_{m,W}:\ \mathbb{R}\to\mathbb{R},\qquad t\mapsto\ln\left(mQ(W,t)\right).$$

Moreover, we will prove in step 1 of the proof of Theorem 3 that there exists a unique $t^*(m,W)>0$ such that

$$f'_{m,W}(t^*(m,W)) = \frac{f_{m,W}(t^*(m,W))}{t^*(m,W)}. \quad (2.4)$$

Then, we define $\tau(m,W) = -f'_{m,W}(t^*(m,W))$. Thanks to these quantities, we are able to describe the asymptotics of $(\psi_n(o))_{n\in\mathbb{N}}$ in the two following results. First, we can estimate the moments of $(\psi_n(o))_{n\in\mathbb{N}}$.

Theorem 3. Let $V$ be a Galton-Watson tree with offspring law $\mu$ satisfying hypotheses $A_1$, $A_2$ and $A_3$. Let $W<W_c(\mu)$. Then we have the following moment estimates:
(i) $\forall p>0$, $\mathbb{E}_{\mu,W}[\psi_n(o)^{-p}] = \mathbb{E}_{\mu,W}[\psi_n(o)^{1+p}] = e^{np\tau(m,W)+o(n)}$;
(ii) $\forall p\in\,]1-t^*(m,W),1[$, $\mathbb{E}_{\mu,W}[\psi_n(o)^p] = \mathbb{E}_{\mu,W}[\psi_n(o)^{1-p}] \le e^{-n(1-p)\tau(m,W)+o(n)}$,
with $\tau(m,W)>0$ and $0<t^*(m,W)<1/2$.

Remark 2.1. In Theorem 3, remark that we cannot estimate all the moments of $(\psi_n(o))_{n\in\mathbb{N}}$. This is due to the non-integrability of high moments of some quantities related to branching random walks. We will be more precise in Proposition K.

The previous theorem gives good estimates of the moments of $(\psi_n(o))_{n\in\mathbb{N}}$. Moreover, it is also possible to give the exact almost sure decreasing rate of $(\psi_n(o))_{n\in\mathbb{N}}$ if $W<W_c(\mu)$.

Theorem 4. Let $V$ be a Galton-Watson tree with offspring law $\mu$ satisfying hypotheses $A_1$ and $A_3$. Let $W<W_c(\mu)$. Then, it holds that, $\mathbb{P}_{\mu,W}$-a.s.,

$$\lim_{n\to+\infty}\frac{\ln(\psi_n(o))}{n} = -\tau(m,W)$$

with $\tau(m,W)>0$.

The following proposition gives an estimate of the behaviour of the decreasing rate $\tau(m,W)$ near the critical point $W_c(\mu)$.

Proposition 2.1. Let $V$ be a Galton-Watson tree with offspring law $\mu$ satisfying hypothesis $A_1$. In the neighborhood of the critical point $W_c(\mu)$,

$$\tau(m,W) \underset{W\to W_c(\mu)}{\sim} \alpha(m)\left(W_c(\mu)-W\right)\qquad\text{where}\qquad \alpha(m) = 2+\frac{1}{W_c(\mu)}-2m\frac{K_1(W_c(\mu))}{K_{1/2}(W_c(\mu))}>0,$$

where $K_\alpha$ is the modified Bessel function of the second kind with index $\alpha$.

Following basically the same lines as in the proofs of the previous estimates on $(\psi_n(o))_{n\in\mathbb{N}}$, we deduce information on the asymptotic behaviour of the VRJP when $W<W_c(\mu)$. More precisely, we can estimate the probability for the VRJP to reach generation $n$ before coming back to the root $o$ when $W<W_c(\mu)$. Recall that $(\tilde Z_k)_{k\in\mathbb{N}}$ is the discrete-time process associated with the VRJP on the rooted tree $V$ starting from $o$. We define $\tau_o^+ = \inf\{k\in\mathbb{N}^*,\ \tilde Z_k=o\}$ and, for every $n\in\mathbb{N}^*$, $\tau_n = \inf\{k\in\mathbb{N}^*,\ |\tilde Z_k|=n\}$. Recall that the probability measure $P^{VRJP}_{\mu,W}$ is defined in paragraph 2.3.2.

Proposition 2.2. Let $V$ be a Galton-Watson tree with offspring law $\mu$ satisfying hypotheses $A_1$, $A_2$ and $A_3$. Let $W<W_c(\mu)$. Then we have the following estimate:

$$-2\tau(m,W) \le \liminf_{n\to+\infty}\frac{\ln P^{VRJP}_{\mu,W}(\tau_o^+>\tau_n)}{n}\qquad\text{and}\qquad \limsup_{n\to+\infty}\frac{\ln P^{VRJP}_{\mu,W}(\tau_o^+>\tau_n)}{n} \le -\tau(m,W)\times t^*(m,W),$$

where $0<t^*(m,W)<1/2$.

Remark 2.2. We suspect that the real decreasing rate in the proposition above is $-2\tau(m,W)$. Indeed, we only have a problem of integrability of some functionals related to branching random walks. Up to this technical detail, the upper bound in Proposition 2.2 would be $-2\tau(m,W)$ too.

Now, let us look at the behaviour of the martingale $(\psi_n(o))_{n\in\mathbb{N}}$ at the critical point $W_c(\mu)$.

Theorem 5. Let $V$ be a Galton-Watson tree with offspring law $\mu$ satisfying hypotheses $A_1$ and $A_3$. We assume that $W=W_c(\mu)$.
Then, under $\mathbb{P}_{\mu,W}$,

$$\frac{\ln(\psi_n(o))}{n^{1/3}} \xrightarrow[n\to+\infty]{a.s.} -\rho_c\qquad\text{where}\qquad \rho_c = \frac{1}{2}\left(\frac{3\pi^2\sigma^2}{2}\right)^{1/3}\quad\text{with}\quad \sigma^2 = 16m\int_0^{+\infty}\sqrt{\frac{W_c(\mu)}{2\pi x}}\,\ln(x)^2\,e^{-\frac{W_c(\mu)}{2}\left(x+\frac{1}{x}-2\right)}dx.$$

Remark 2.3. At the critical point, we are not able to obtain precise $L^p$ bounds for $\psi_n(o)$. Indeed, in the subcritical phase, we have subexponential bounds for some functionals associated with branching random walks; at the critical point, we would need to be more accurate.

The recurrence of the VRJP on trees at the critical point $W_c(\mu)$ was already known. The following theorem states that the VRJP on trees is even positive recurrent at the critical point. This result is of a different kind than the previous ones; however, the proof requires the same tools as before.

Theorem 6. Let $V$ be a Galton-Watson tree with offspring law $\mu$ satisfying hypotheses $A_1$ and $A_3$. We assume that $W=W_c(\mu)$. Then, the discrete-time VRJP $(\tilde Z_n)_{n\in\mathbb{N}}$ associated with $(Z_t)_{t\ge0}$ is a mixture of positive recurrent Markov chains.

3 Background

3.1 Marginals and conditional laws of the β-potential

The law $\nu_V^W$ introduced in section 1 was originally defined on finite graphs in [START_REF] Sabot | The vertex reinforced jump process and a random schrödinger operator on finite graphs[END_REF] with general weights. More precisely, on a finite set $S$, we can define a β-potential with law $\check\nu_S^{P,\eta}$ for every $(\eta_i)_{i\in S}\in\mathbb{R}_+^S$ and every $P=(W_{i,j})_{(i,j)\in S^2}\in\mathbb{R}_+^{S^2}$. One can remark that the weights in the matrix $P$ are not assumed to be constant anymore; moreover, we allow loops, that is, $W_{i,i}$ can be non-zero for every $i\in S$. The term $\eta$ is a boundary term which represents the weights of some edges relating $S$ to some virtual vertices which are out of $S$. The probability measure $\check\nu_S^{P,\eta}$ is defined in the following way: by Lemma 4 in [START_REF] Sabot | A random Schrödinger operator associated with the vertex reinforced jump process on infinite graphs[END_REF], the function

$$\beta\mapsto\mathbf{1}\{H^{(S)}_\beta>0\}\left(\frac{2}{\pi}\right)^{|S|/2}e^{-\frac{1}{2}\langle 1,H^{(S)}_\beta 1\rangle-\frac{1}{2}\langle\eta,(H^{(S)}_\beta)^{-1}\eta\rangle+\langle\eta,1\rangle}\frac{1}{\sqrt{\det H^{(S)}_\beta}} \quad (3.1)$$

is a density, where $H^{(S)}_\beta$ is the matrix on $S\times S$ defined by $H^{(S)}_\beta(i,j) = 2\beta_i\mathbf{1}\{i=j\}-W_{i,j}\mathbf{1}\{i\sim j\}$, and $1$ stands for the vector $(1,\cdots,1)\in\mathbb{R}^S$ in expression (3.1). Then, we can define a probability measure with the density (3.1), which we denote by $\check\nu_S^{P,\eta}(d\beta)$. Besides, the Laplace transform of $\check\nu_S^{P,\eta}$ can be computed, and it is very similar to the Laplace transform of $\nu_V^W$: indeed, for any $\lambda\in\mathbb{R}_+^S$,

$$\int e^{-\langle\lambda,\beta\rangle}\,\check\nu_S^{P,\eta}(d\beta) = e^{-\langle\eta,\sqrt{\lambda+1}-1\rangle-\frac{1}{2}\sum_{i\sim j}W_{i,j}\left(\sqrt{(1+\lambda_i)(1+\lambda_j)}-1\right)}\prod_{i\in S}(1+\lambda_i)^{-1/2},$$

where $\sqrt{1+\lambda}$ is the vector $(\sqrt{1+\lambda_i})_{i\in S}$. Further, the family of distributions of the form $\check\nu_S^{P,\eta}$ has a very useful behaviour regarding its marginals and conditional laws: marginals and conditional laws are still of the form $\check\nu_S^{P,\eta}$. The following lemma gives a formula for the law of the marginals and the conditional laws:

Lemma C (Lemma 5 in [START_REF] Sabot | A random Schrödinger operator associated with the vertex reinforced jump process on infinite graphs[END_REF]). Let $S$ be a finite set. Let $U\subset S$ be a subset of $S$. Let $(\eta_i)_{i\in S}\in\mathbb{R}_+^S$ and $P=(W_{i,j})_{(i,j)\in S^2}\in\mathbb{R}_+^{S^2}$. Under $\check\nu_S^{P,\eta}$:
(i) $\beta_U$ has law $\check\nu_U^{P_{U,U},\tilde\eta}$, where for every $i\in U$, $\tilde\eta_i = \eta_i+\sum_{j\in U^c}W_{i,j}$.
(ii) Conditionally on $\beta_U$, $\beta_{U^c}$ has distribution $\check\nu_{U^c}^{\check P,\check\eta}$, where $\check P$ and $\check\eta$ are defined in the following way:
for every $(i,j)\in U^c\times U^c$, $\check P(i,j) = \check W_{i,j} = W_{i,j}+\sum_{k\sim i,\ k\in U}\sum_{l\sim j,\ l\in U}W_{i,k}W_{j,l}(H_\beta)^{-1}_{U,U}(k,l)$;
for every $i\in U^c$, $\check\eta_i = \eta_i+\sum_{k\sim i,\ k\in U}\sum_{l\in U}W_{i,k}(H_\beta)^{-1}_{U,U}(k,l)\,\eta_l$.
In [START_REF] Sabot | A random Schrödinger operator associated with the vertex reinforced jump process on infinite graphs[END_REF], the infinite potential ν W V is defined thanks to a sequence of potentials of the form νP,η Vn on the exhausting sequence (V n ) n∈N which is shown to be compatible. More, precisely, the restrictions of ν W V are given by the following lemma: Lemma D. Let n ∈ N * . Let (β i ) i∈V be a random potential following ν W V . Then (β i ) i∈Vn is distributed as ν P (n) ,η (n) Vn where • For every i, j ∈ V n , P (n) (i, j) = W 1{i ∼ j}. • For every i ∈ V n , η(n) i = j∼i,j / ∈Vn W . Warm-up about the VRJP Recall that (Z t ) t≥0 := (Y D -1 (t) ) t≥0 is a time-changed version of the VRJP with constant weights W on the graph V . As explained before, (Z t ) t≥0 is easier to analyse than (Y t ) t≥0 because it is a mixture of Markov processes. In the particular case of finite graphs, Sabot and Tarrès gave an explicit description of the density of a random field associated with the environment. Proposition E (Theorem 2 in [START_REF] Sabot | Edge-reinforced random walk, vertex-reinforced jump process and the supersymmetric hyperbolic sigma model[END_REF]). Let (V, E) be a finite graph. Let W > 0. Then, the timechanged VRJP (Z t ) t≥0 on V with constant weights W > 0 starting from i 0 ∈ V is a mixture of Markov processes. Moreover, it jumps from i to j at rate W e U j -U i where the field (U i ) i∈V has the following density on the set {(u i ) i∈V ∈ R V , u i 0 = 0}: 1 √ 2π |V |-1 exp   - i∈V u i -W {i,j}∈E ((cosh(u i -u j ) -1)   D(W, u) i∈V \{i 0 } du i with D(W, u) = T ∈T {i, j}∈T W e u i +u j where T is the set of spanning trees of (V, E). This density was originally studied in [START_REF] Disertori | Quasi-diffusion in a 3D Supersymmetric Hyperbolic Sigma Model[END_REF] in order to study random band matrices. Remark that the distribution of U does not have any obvious property of compatibility. Therefore, this was not possible to extend the field U on a general infinite graph. However, in [START_REF] Sabot | The vertex reinforced jump process and a random schrödinger operator on finite graphs[END_REF], Sabot, Zeng and Tarrès introduced a smart change of variable which relates the field U and the β-potential. More precisely, if (V, E) is a finite graph, then the field U of Proposition E rooted at i 0 is distributed as (G (V ) (i 0 , i)/G (V ) (i 0 , i 0 )) i∈V where G (V ) is the inverse of H (V ) β which is the operator associated with the potential β with distribution νP,0 V where P (i, j) = W 1{i ∼ j}. In order to have a representation of the environment of the VRJP on infinite graph, Sabot and Zeng extended the β-potential on infinite graphs thanks to the measure ν W V and they proved the following result: Proposition F (Theorem 1 in [SZ19]). If V is Z d with d ≥ 1 or an infinite tree, then the time-changed VRJP (Z t ) t≥0 on V with constant weights W > 0 is a mixture of Markov processes. Moreover, the associated random environment can be described in the following way: if the VRJP started from i 0 , it jumps from i to j at rate (1/2)W G(i 0 , j)/G(i 0 , i) where for every i, j ∈ V , G(i, j) = Ĝ(i, j) + 1 2γ ψ(i)ψ(j) where γ is a random variable with law Γ(1/2, 1) which is independent from the the β-potential with distribution ν W V . 
In [START_REF] Gerard | Representations of the vertex reinforced jump process as a mixture of Markov processes on Z d and infinite trees[END_REF], Gerard proved that, in the case of trees, in the transient phase, there are infinitely many different representations of the environment of the VRJP. In this paper, we will often use a representation which is not the same as the one which is given in Proposition F. Now, let us describe this other representation. Specificities of the tree In the density given in Proposition E, if the graph is a tree, one can observe that the random variables U i -U i are i.i.d and distributed as the logarithm of an Inverse Gaussian random variable. It comes from the fact that the determinant term in the density becomes a product. Therefore, when the graph (V, E) is an infinite tree with a root o, this is natural to define an infinite version of the field U in the following way: for every i ∈ V , e U i := o<u≤i A u where (A i ) i∈V \{o} is a family of independent Inverse Gaussian random variables with parameters (1, W ). This representation implies directly the following result: Proposition G (Theorem 3 in [CZ18] ). If V is a tree with root o, the discrete-time VRJP ( Zn ) n∈N which is associated with (Z t ) t≥0 is a random walk in random environment whose random conductances are given by c(i, i) = W e U i +U i = W A i o<u≤ i A 2 u for every i ∈ V \{o}. This representation of the environment of the VRJP on trees is particularly useful because the conductances are almost products of i.i.d random variables along a branch of the tree. This situation is very close from branching random walks. This observation is crucial for the proofs in this paper. In particular, thanks to this representation and its link with branching random walks, this is much easier to compute the critical point on Galton-Watson trees. Proposition H (Theorem 1 in [START_REF] Chen | Speed of vertex-reinforced jump process on Galton-Watson trees[END_REF] or Theorem 1 in [START_REF] Basdevant | Continuous-time vertex reinforced jump processes on Galton-Watson trees[END_REF]). Let V be a Galton-Watson tree with offspring law µ satisfying hypothesis A 1 . Then the VRJP on V with constant weights W is recurrent if and only if mQ(W, 1/2) ≤ 1 where m is the mean of µ. In particular, the critical point W c (µ) is the only solution of the equation mQ(W, 1/2) = 1. Now, remind that our goal is to study the martingale (ψ n (o)) n∈N . This martingale is defined through the potential β. If V is an infinite tree with a special vertex o called the root, we can couple the field U and the potential β in the following way: for every i ∈ V , we define βi := W 2 i∼j e U j -U i = W 2 u=i A u + 1{i = o} 1 A i . (3.2) For every i ∈ V , βi can be interpreted as the total jump rate of the VRJP at i. The potential β is very important for our purposes. One reason for that is Lemma 4.4 which makes a link between the effective resistance associated with the VRJP and some quantity defined through ( βi ) i∈V . Now, let γ be a Gamma distribution with parameter (1/2, 1) which is independent of (A i ) i∈V \{o} . Then, let us define β = β + 1{• = o}γ. (3.3) Lemma 3.1. Let us assume that V is a tree. Let W > 0. Then, the potential (β i ) i∈V defined by (3.3) has law ν W V . Proof of Lemma 3.1. 
This is a direct consequence of Theorem 3 in [START_REF] Chen | Speed of vertex-reinforced jump process on Galton-Watson trees[END_REF] and Corollary 2 in [START_REF] Sabot | The vertex reinforced jump process and a random schrödinger operator on finite graphs[END_REF]. From now on, when we work on a tree V , we always assume that, under ν W V , the potential (β i ) i∈V is defined by (3.2) and (3.3). This coupling between the field U and the potential (β i ) i∈V is very important in order to relate our questions regarding the martingale (ψ n (o)) n∈N to tractable questions about branching random walks. This allows us to apply techniques coming from the area of branching random walks in order to study (ψ n (o)) n∈N . β-potential and path expansions In this subsection, we explain how Ĝ can be interpreted as a sum over a set of paths. This representation of Ĝ will be very useful in the sequel of this paper. A path from i to j in the graph (V, E) is a finite sequence σ = (σ 0 , • • • , σ m ) in V such that σ 0 = i and σ m = j and σ k ∼ σ k+1 for every k ∈ {0, • • • m-1}. Let us denote by P V i,j the set of paths from i to j in V . Let us also introduce P V i,j the set of paths from i to j which never hit j before the end of the path. More precisely, it is the set of paths σ = (σ 0 , • • • , σ m ) such that σ 0 = i, σ m = j and σ k = j for every k ∈ {0, • • • , m -1}. For any path σ = (σ 0 , • • • , σ m ), we denote its length by |σ| = m. For any path σ in V and for any β ∈ D W V , let us write, (2β) σ = |σ| k=0 (2β σ k ), ( 2β ) - σ = |σ|-1 k=0 (2β σ k ). Then, the following lemma stems directly from Proposition 6 in [START_REF] Sabot | A random Schrödinger operator associated with the vertex reinforced jump process on infinite graphs[END_REF]: Lemma I (Proposition 6 in [SZ19]). Let (V, E) be any locally finite graph. Let W > 0. Let i, j ∈ V . For any β ∈ D W V , Ĝ(i, j) = σ∈P V i,j W |σ| (2β) σ , Ĝ(i, j) Ĝ(i, i) = σ∈ P V j,i W |σ| (2β) - σ . In the special case of trees, we can mix this property with the construction given in subsection 3.3 in order to obtain the following lemma. Lemma 3.2. Let V be a Galton-Watson tree with a root o and an offspring law µ satisfying hypothesis A 1 . Let us assume that W ≤ W c (µ). Then, P µ,W -a.s, for every i ∈ V , Ĝ(o, i) Ĝ(o, o) = e U i . Proof of Lemma 3.2. Let us assume that the β-potential is constructed as in subsection 3.3. Let us consider the Markov chain ( Zk ) k∈N * on V with conductances given by c(i, i) = W A -1 i o<u≤i A 2 u = W e U i +U i for every i ∈ V . Actually, by Proposition G, Z is the discrete-time process associated with the VRJP. Let us remark that for every i ∈ V , π i := j∼i c(i, j) = e 2U i 2 βi . We denote by P c,i the probability measure associated with this Markov chain Z starting from i with random conductances c. Let us introduce the stopping time τ o = inf {n ∈ N, Zn = o}. If σ is a path, we write { Z ∼ σ} to mean that Z0 = σ 0 , Z1 = σ 1 , etc. Then, it holds that P µ,W -a.s, for every i ∈ V , P c,i (τ o < +∞) = σ∈ P V i,o P c,i ( Z ∼ σ) = σ∈ P V i,o |σ|-1 k=0 W e Uσ k +Uσ k+1 π σ k = σ∈ P V i,o |σ|-1 k=0 W e Uσ k+1 -Uσ k 2 βσ k . (3.4) There is a telescoping product in (3.4). Consequently, we deduce that P µ,W -a.s, for every i ∈ V , P c,i (τ o < +∞) = e -U i σ∈ P V i,o |σ|-1 k=0 W 2 βσ k . (3.5) In identity (3.5), remark that σ k is always different from o. Therefore, β can be replaced by β and we obtain that P µ,W -a.s, for every i ∈ V , P c,i (τ o < +∞) = e -U i σ∈ P V i,o |σ|-1 k=0 W 2β σ k . 
(3.6) In (3.6), one can observe the same quantity as in Lemma I. Therefore, P µ,W -a.s, for every i ∈ V , P c,i (τ o < +∞) = e -U i Ĝ(o, i) Ĝ(o, o) . (3.7) However, we assumed W ≤ W c (µ). Thus, by Propositions G and B, we know that P c,i (τ o < +∞) = 1, P µ,W -a.s. Together with (3.7), this concludes the proof. Warm-up about branching random walks In this subsection, we recall the most important facts about one-dimensionnal branching random walks. Indeed, it is a very important tool in this article. One can refer to [START_REF] Shi | Branching random walks[END_REF] for more information on this topic. We consider a point process L := {ρ i , 1 ≤ i ≤ N } such that N takes values in N and each point ρ i is in R. At time 0, there is a unique ancestor called the root o. We define S(o) = 0. At time n, each individual u generates independently a point process L u := {ρ u i , 1 ≤ i ≤ N u } with the same law as L. Each point in L u stands for a child of u. The positions of the children of u are given by the point process {ρ u i + S(u), 1 ≤ i ≤ N u }. The children of individuals of the n-th generation form the n + 1-th generation. In this way, we get an underlying genealogical Galton-Watson tree V with o as a root. For every u ∈ V , we denote the position of u by S(u). The set {(u, S(u)), u ∈ V } is called a branching random walk. Recall that |u| stands for the generation of u ∈ V . Throughout this subsection, we assume there exists δ > 0 such that E      |u|=1 1   1+δ    < +∞. (3.8) Moreover, we assume that for every t ∈ R, E   |u|=1 e tS(u)   < +∞. (3.9) Let us introduce the Laplace transform of L which is defined as f : R → R t → ln E |u|=1 e -tS(u) . Let us also assume that f (0) > 0, f (1) = f (1) = 0. (3.10) For every n ∈ N and for every β > 1, let us define, W n := |u|=n e -S(u) , W n,β = |u|=n e -βS(u) . In [START_REF] Hu | Minimal position and critical martingale convergence in branching random walks, and directed polymers on disordered trees[END_REF], Hu and Shi proved the following results: Proposition J (Theorem 1.4 of [START_REF] Hu | Minimal position and critical martingale convergence in branching random walks, and directed polymers on disordered trees[END_REF]). Assume hypotheses (3.8), (3.9) and (3.10) and let β > 1. Conditionally on the system's survival, we have lim sup n→+∞ ln (W n,β ) ln(n) = - β 2 a.s, (3.11) lim inf n→+∞ ln (W n,β ) ln(n) = - 3β 2 a.s. (3.12) Proposition K (Theorem 1.6 in [START_REF] Hu | Minimal position and critical martingale convergence in branching random walks, and directed polymers on disordered trees[END_REF]). Assume hypotheses (3.8), (3.9) and (3.10) and let β > 1. For any r ∈]0, 1/β[, E W r n,β = n -3rβ/2+o(1) . In many situations, hypothesis (3.10) is not satisfied. However, in most cases, we can transform the branching random walk in order to be reduced to hypothesis (3.10). Indeed, if there exists t * > 0 such that t * f (t * ) = f (t * ), then ( S(u)) u∈V := (t * S(u) + f (t * )|u|) u∈V is a branching random walk satisfying (3.10). However, one still has to check that such a t * > 0 does exist. Proposition L (Proposition 7.2, Chapter 3 in [START_REF] Jaffuel | Marches aléatoires avec branchement et absorption[END_REF]). Let us assume that for every M ∈ R, P(L(] -∞, -M ]) = ∅) > 0. Then, there exists t * > 0 such that t * f (t * ) = f (t * ). Remark 3.1. Be careful when you look at reference [START_REF] Jaffuel | Marches aléatoires avec branchement et absorption[END_REF]. 
The result is wrongly stated, but the proof (of the corrected statement) is correct. Moreover, it is possible to know the sign of $f'(t^*)$ and whether $t^*$ is unique or not.

Proposition 3.3. Let us assume that $f(0)>0$ and that there exists $t^*>0$ such that $t^*f'(t^*)=f(t^*)$. We assume also that $f$ is strictly convex and that there exists a point $t_{min}$ such that $f$ is strictly decreasing on $[0,t_{min}]$ and strictly increasing on $[t_{min},+\infty[$. Then $t^*$ is the unique solution in $\mathbb{R}_+^*$ of the equation $tf'(t)=f(t)$ and $\mathrm{sgn}(f'(t^*))=\mathrm{sgn}(f(t_{min}))$. Moreover, $t^*<t_{min}$ if $f(t_{min})<0$ and $t^*>t_{min}$ if $f(t_{min})>0$.

Proof of Proposition 3.3. Let us introduce the function $\Phi:\ t\mapsto tf'(t)-f(t)$. As $f$ is strictly convex, for every $t\in\mathbb{R}_+^*$, $\Phi'(t)=tf''(t)>0$. Therefore, $\Phi$ is strictly increasing on $\mathbb{R}_+$; thus, $t^*$ must be unique. Moreover, $\Phi(t_{min}) = t_{min}f'(t_{min})-f(t_{min}) = -f(t_{min})$. Thus, if $f(t_{min})<0$, then $\Phi(t_{min})>0$. Furthermore, $\Phi(0)=-f(0)<0$. Therefore, as $t^*$ is the unique zero of $\Phi$, $t^*$ must be in $]0,t_{min}[$. In particular, $f(t^*)=t^*f'(t^*)<0$ because $f$ is strictly decreasing on $[0,t_{min}]$. The case $f(t_{min})>0$ is symmetric.

In this subsection, $V$ is a deterministic countable graph with constant weights $W>0$. For every $n\in\mathbb{N}$, we introduce the sigma-field

$$\mathcal{G}_n := \sigma\left((\beta_i)_{i\in V_n\setminus\{o\}}\right).$$

(Recall that $V_n=\{x\in V,\ d(o,x)\le n\}$.) Moreover, for every $n\in\mathbb{N}$, let us introduce

$$D_n := \frac{1}{2}\sum_{j\sim o}W\frac{\hat G_n(o,j)}{\hat G_n(o,o)}.$$

Then, it is remarkable that $\psi_n(o)$ has an Inverse Gaussian distribution conditionally on $\mathcal{G}_n$.

Lemma 4.1. For every $n\in\mathbb{N}$, under $\nu_V^W$:
(i) $\mathcal{L}(\beta_o\,|\,\mathcal{G}_n) = D_n + \frac{1}{2}\times IG\left(\frac{\hat G_n(o,o)}{\psi_n(o)},1\right)^{-1}$;
(ii) $\mathcal{L}(\psi_n(o)\,|\,\mathcal{G}_n) = IG\left(1,\frac{\psi_n(o)}{\hat G_n(o,o)}\right)$,
where we recall that $IG(a,\lambda)$ stands for an Inverse Gaussian distribution with parameters $a$ and $\lambda$.

The computation achieved in the following proof is basically the same as Proposition 3.4 in [START_REF] Collevecchio | A note on recurrence of the Vertex reinforced jump process and fractional moments localization[END_REF], but we use it in a different way.

Proof of Lemma 4.1. By Lemma D, $(\beta_i)_{i\in V_n}$ has law $\check\nu_{V_n}^{P^{(n)},\eta^{(n)}}$, where $\eta^{(n)}_i = \sum_{j\in V_n^c,\ j\sim i}W$ for every $i\in V_n$ and $P^{(n)}(i,j)=W\mathbf{1}\{i\sim j\}$ for every $i,j\in V_n$. Further, by Lemma C, the law of $\beta_o$ conditionally on $\mathcal{G}_n$ is $\check\nu_{\{o\}}^{\check W_{o,o},\check\eta}$ with:
• $\check W_{o,o} = \sum_{j\sim o}\sum_{k\sim o}W^2\hat G_{V_n\setminus\{o\}}(j,k)$, where $\hat G_{V_n\setminus\{o\}}$ is the inverse of $(H_\beta)_{V_n\setminus\{o\},V_n\setminus\{o\}}$;
• $\check\eta = \sum_{j\sim o}\sum_{k\in V_n\setminus\{o\}}W\hat G_{V_n\setminus\{o\}}(j,k)\,\eta^{(n)}_k$.
Nevertheless, reasoning on path expansions (see Lemma I), one remarks that, for every $k\in V_n\setminus\{o\}$,

$$\sum_{j\sim o}W\hat G_{V_n\setminus\{o\}}(j,k) = \frac{\hat G_n(o,k)}{\hat G_n(o,o)}. \quad (4.1)$$

Consequently, by definition of $D_n$ and $\psi_n(o)$, it holds that:
• $\check W_{o,o} = \sum_{k\sim o}W\frac{\hat G_n(o,k)}{\hat G_n(o,o)} = 2D_n$;
• $\check\eta = \sum_{k\in V_n\setminus\{o\}}\frac{\hat G_n(o,k)}{\hat G_n(o,o)}\,\eta^{(n)}_k = \frac{1}{\hat G_n(o,o)}\sum_{k\in\partial V_n}\hat G_n(o,k)\,\eta^{(n)}_k = \frac{\psi_n(o)}{\hat G_n(o,o)}$.
Moreover, $D_n$ and $\frac{\psi_n(o)}{\hat G_n(o,o)}$ are $\mathcal{G}_n$-measurable. Indeed,

$$D_n = \frac{1}{2}\sum_{k\sim o}W\frac{\hat G_n(o,k)}{\hat G_n(o,o)}\qquad\text{and}\qquad \frac{\psi_n(o)}{\hat G_n(o,o)} = \frac{\sum_{k\in\partial V_n}\hat G_n(o,k)\,\eta^{(n)}_k}{\hat G_n(o,o)},$$

and, for every $k\in V_n$, $\frac{\hat G_n(o,k)}{\hat G_n(o,o)}$ does not depend on $\beta_o$ by (4.1); thus, it is $\mathcal{G}_n$-measurable. Therefore, by (3.1), conditionally on $\mathcal{G}_n$, the law of $\beta_o$ is given by the density

$$\mathbf{1}\{\beta>D_n\}\frac{1}{\sqrt{\pi(\beta-D_n)}}\,e^{-(\beta-D_n)}\,e^{-\frac{1}{4(\beta-D_n)}\frac{\psi_n(o)^2}{\hat G_n(o,o)^2}}\,e^{\frac{\psi_n(o)}{\hat G_n(o,o)}}.$$

We can recognise the reciprocal of an Inverse Gaussian distribution. More precisely,

$$\mathcal{L}(\beta_o\,|\,\mathcal{G}_n) = D_n + \frac{1}{2}\times IG\left(\frac{\hat G_n(o,o)}{\psi_n(o)},1\right)^{-1}.$$

Besides, as $\hat G_n$ is the inverse of $(H_\beta)_{V_n,V_n}$, $\beta_o-D_n = \frac{1}{2\hat G_n(o,o)}$. Consequently, as $D_n$ is $\mathcal{G}_n$-measurable, this yields

$$\mathcal{L}\left(\hat G_n(o,o)\,|\,\mathcal{G}_n\right) = IG\left(\frac{\hat G_n(o,o)}{\psi_n(o)},1\right).$$
Moreover for every positive numbers (t, a, b), one can check that tIG(a, b) law = IG(ta, tb). Further- more Ĝn(o,o) ψn(o) is G n measurable. Thus, it holds that L (ψ n (o)|G n ) = IG 1, ψ n (o) Ĝn (o, o) . Moreover, we can pass to the limit in Lemma 4.1. Let us define G ∞ := σ (β i ) i∈Z d \{o} . Let us recall that ( Ĝn (i, j)) n∈N converges toward some finite limit Ĝ(i, j) for every (i, j) ∈ V 2 . Then, we introduce D = 1 2 o∼j W Ĝ(o,j) Ĝ(o,o) . Lemma 4.2. We assume that ψ(o) > 0, ν W V -a.s. Then, under ν W V , (i) L (β o |G ∞ ) = D + 1 2 × IG Ĝ(o,o) ψ(o) , 1 . (ii) L (ψ(o)|G ∞ ) = IG 1, ψ(o) Ĝ(o, o) . Proof of Lemma 4.2. Let Λ be a finite subset of V including o. Let us define Λ = Λ\{o}. Let A be a borelian set of R Λ. Let F be a bounded continuous function of R d . Then, by Lemma 4.1, for every n large enough, E ν W V F (β o )1{(β i ) i∈ Λ ∈ A} = E ν W V   +∞ 0 F (β + D n ) 1 √ πβ e -1 4β ψn(o) Ĝn(o,o) -2β 2 dβ1{(β i ) i∈ Λ ∈ A}   . (4.2) Moreover, the function (x, y) → +∞ 0 F (β + x) 1 √ πβ e -1 4β (y-2β) 2 dβ is clearly continuous and uniformly bounded on (R * + ) 2 . Therefore, as D n , ψ n (o) Ĝn (o, o) a.s -----→ n→+∞ D, ψ(o) Ĝ(o, o) , by means of the dominated convergence theorem, we can take the limit in (4.2) which implies the first point of our lemma. Then, the second point of Lemma 4.2 stems from the first point, exactly in the same way as in the proof of Lemma 4.1. Now we are able to prove Theorem 1. Proof of Theorem 1. By Lemma 4.2, we know that L (ψ(o)|G ∞ ) = IG 1, ψ(o) Ĝ(o, o) . In particular, E ν W V [ψ(o)] = E ν W V IG 1, ψ(o) Ĝ(o, o) = 1 (4.3) Thus for every n ∈ N * , E ν W V [ψ n (o)] = E ν W V [ψ(o)] = 1. Moreover, ψ n (o) a.s -----→ n→+∞ ψ(o) . Thus, by Scheffé's lemma, ψ n (o) L 1 -----→ n→+∞ ψ(o). Therefore (ψ n (o)) n∈N is uniformly integrable. Besides, Lemma 4.1 implies the following useful result: Lemma 4.3. Let p ∈ R. For every n ∈ N, E ν W V [ψ n (o) p ] = E ν W V ψ n (o) 1-p . Proof of Lemma 4.3. Let us define Y n = ψn(o) Ĝn(o,o) . Then, by Lemma 4.1, E ν W V [ψ n (o) p ] = E ν W V Y 1/2 n (2π) -1/2 x p-3/2 exp -Y n (x -1) 2 /(2x) dx = E ν W V Y 1/2 n (2π) -1/2 x -p+3/2 x -2 exp -Y n x(1/x -1) 2 /2 dx = E ν W V Y 1/2 n (2π) -1/2 x (-p+1)-3/2 exp -Y n (x -1) 2 /(2x) dx = E ν W V ψ n (o) 1-p . Resistance formula on a tree In this subsection we assume that V is a tree. Let n ∈ N. Let us define the matrix Hn on V n × V n such that for every (i, j) ∈ V n × V n , Hn (i, j) = 2 βi 1{i = j} -W 1{i ∼ j}. We assume that the potentials β and β are constructed as in (3.2) and (3.3). We also introduce D (n) U which is the diagonal matrix on V n × V n with diagonal entries D (n) U (i, i) = e U i for every i ∈ V n . We can observe that D (n) U Hn D (n) U = M n where for every (i, j) ∈ V n × V n , M n (i, j) = k∼i W e U i +U k 1{i = j} -W e U i +U j 1{i ∼ j}. M n is almost a conductance matrix with conductances W e U i +U j between two neighbouring vertices i and j. However, if i ∈ ∂V n , M n (i, i) = k∼i W e U i +U k > k∼i,k∈Vn W e U i +U k . Therefore, M n is strictly larger than a conductance matrix (for the order between symmetric matrices). Moreover conductance matrices are non-negative. Thus, M n and Hn are symmetric positive definite matrices. Then, we are allowed to define the inverse Gn of Hn . Moreover, for every n ∈ N, we construct a wired version ( Ṽn , Ẽn ) of (V n , E n ) in the following way: Ṽn = V n ∪ {δ n } Ẽn = E n ∪ {(δ n , i), i ∈ ∂V n } where δ n is a new vertex. 
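Remark (numerical illustration). The wired construction above lends itself to a small experiment. The following Python sketch is not part of any proof: the offspring law (binary), the value of W and the depth are arbitrary choices made for the illustration only. It attaches the conductances c(i, j) = W e U i +U j to the tree edges and the wired boundary weights c n (δ n , i) introduced just below, and computes the effective resistance R(o ←→ δ n ) by grounding δ n and solving the weighted graph Laplacian; by Lemma 4.4 below, this quantity coincides with Gn (o, o) when V is a tree. The identification IG(1, W ) = invgauss(mu=1/W, scale=W) is the intended scipy parameterization (mean mu * scale = 1, shape W).

```python
import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(0)
W, depth = 1.5, 8                        # arbitrary illustrative choices

def sample_A(size):
    # IG(1, W): mean 1, shape W, i.e. scipy's invgauss(mu=1/W, scale=W).
    return invgauss.rvs(mu=1.0 / W, scale=W, size=size, random_state=rng)

n_nodes = 2 ** (depth + 1) - 1           # binary tree; node k has children 2k+1, 2k+2
A = sample_A(n_nodes)
A[0] = 1.0                               # the root o carries no edge variable
expU = np.ones(n_nodes)                  # e^{U_x} = product of A_u along the path o -> x
for k in range(1, n_nodes):
    expU[k] = expU[(k - 1) // 2] * A[k]

delta = n_nodes                          # the wired vertex delta_n
L = np.zeros((n_nodes + 1, n_nodes + 1))
def add_edge(i, j, c):                   # accumulate the weighted graph Laplacian
    L[i, i] += c; L[j, j] += c; L[i, j] -= c; L[j, i] -= c

for k in range(1, n_nodes):              # c(i, j) = W e^{U_i + U_j} on tree edges
    add_edge((k - 1) // 2, k, W * expU[(k - 1) // 2] * expU[k])
for i in range(2 ** depth - 1, n_nodes): # leaves of V_n: wired boundary weight
    virt = sample_A(2)                   # the two children sitting outside V_n
    add_edge(i, delta, W * expU[i] ** 2 * virt.sum())

b = np.zeros(n_nodes); b[0] = 1.0
R = np.linalg.solve(L[:n_nodes, :n_nodes], b)[0]   # delta_n grounded
print(f"sampled R(o <-> delta_n) = {R:.4f}")
```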
For every (i, j) ∈ E, recall from the notation of Proposition G that c(i, j) = W e U i +U j . The conductances c are the environment of the VRJP. Now, let us introduce a family of conductances c n on Ẽn .    ∀(i, j) ∈ E n , c n (i, j) = c(i, j) ∀i ∈ ∂V n , c n (δ n , i) = j∼i,j∈V c n c(i, j) We denote by R(o ←→ δ n ) the effective resistance between o and δ n in ( Ṽn , Ẽn , c n ). Then, we have the following key identity: Lemma 4.4. If V is a tree, then, for every n ∈ N * , Gn (o, o) = R(o ←→ δ n ). Proof of Lemma 4.4. For every i ∈ V n , one defines h(i) = Gn(o,i)e -U i Gn(o,o) and h(δ n ) = 0. We are going to prove that h is harmonic everywhere excepted at o and δ n where h(o) = 1 and h(δ n ) = 0. Let i ∈ V n \{o}. Then, it holds that, i∼j c n (i, j)h(j) = i∼j,j∈Vn W e U i +U j × Gn (o, j)e -U j Gn (o, o) = e U i Gn (o, o) i∼j,j∈Vn W Gn (o, j). (4.4) By definition Gn = H-1 n . Together with (4.4), this yields i∼j c n (i, j)h(j) = e U i Gn (o, o) × 2 βi Gn (o, i). (4.5) Then, by definition of U i and βi , we infer that i∼j c n (i, j)h(j) = Gn (o, i) Gn (o, o) × i∼j W e U j = Gn (o, i)e -U i Gn (o, o) ×   c n (i, δ n ) + i∼j,j∈Vn c n (i, j)   = h(i) × i∼j c n (i, j). Consequently, h is harmonic. Therefore, by identity (2.3) in [START_REF] Lyons | Probability on Trees and Networks[END_REF], R(o ←→ δ n ) = 1 o∼j c n (o, j)(1 -h(j)) . (4.6) Besides, it holds that, o∼j c n (o, j)(1 -h(j)) = o∼j W e U j × 1 - Gn (o, j)e -U j Gn (o, o) = Gn (o, o) -1 o∼j W e U j Gn (o, o) -Gn (o, j) (4.7) However Gn is the inverse of Hn . Therefore, o∼j W Gn (o, j) = -1 + 2 β0 Gn (o, o). Moreover, o∼j W e U j = 2 β0 . Together with (4.7), this yields o∼j c n (o, j)(1 -h(j)) = Gn (o, o) -1 2 β0 Gn (o, o) --1 + 2 β0 Gn (0, 0) = Gn (o, o) -1 . ( 4 Burkholder-Davis-Gundy inequality As (ψ n (o)) n∈N is a martingale, there is a relation between its moments and the moments of its bracket ( Ĝn (o, o)) n∈N under mild assumptions. This relation is known as the BDG inequality. This inequality is not always true for discrete martingales. (See [START_REF] Burkholder | Extrapolation and interpolation of quasi-linear operators on martingales[END_REF].) However, this is always true for continuous martingales. Fortunately, by [START_REF] Sabot | Hitting times of interacting drifted Brownian motions and the vertex reinforced jump process[END_REF], for every n ∈ N, ψ n (o) can be obtained as the limit of some continuous martingale. That is why we can prove the following lemma: Lemma 4.6. Let V be a locally finite graph. Let W > 0. Let p > 1. Then, there exist positive constants C 1,p and C 2,p which do not depend on V and W such that for every n ∈ N, C 1,p E ν W V Ĝn (o, o) p/2 ≤ E ν W V [|ψ n (o) -1| p ] ≤ C 2,p E ν W V Ĝn (o, o) p/2 . Proof of Lemma 4.6. By [START_REF] Sabot | Hitting times of interacting drifted Brownian motions and the vertex reinforced jump process[END_REF], for every n ∈ N, there exists a continuous non-negative martingale (ψ n (o, t)) t≥0 such that, ), there exist positive constants κ 1,p and κ 2,p such that for every n ∈ N, for every t ≥ 0, ψ n (o, t) a.s ----→ t→+∞ ψ n (o) κ 1,p E ν W V ψ n (o, t), ψ n (o, t) p/2 ≤ E ν W V [ψ * n (o, t) p ] ≤ κ 2,p E ν W V ψ n (o, t), ψ n (o, t) p/2 . (4.10) As p > 1, by Doob's martingale inequality, there exist C 1,p > 0 and C 2,p > 0 such that for every n ∈ N, for every t ≥ 0, C 1,p E ν W V ψ n (o, t), ψ n (o, t) p/2 ≤ E ν W V [|ψ n (o, t) -1| p ] ≤ C 2,p E ν W V ψ n (o, t), ψ n (o, t) p/2 . 
(4.11) Let us define ψ * n (o) as the increasing limit of ψ * n (o, t) when t goes toward infinity. By monotone convergence theorem in (4.10), for every n ∈ N, E ν W V [ψ * n (o) p ] ≤ κ 2,p E ν W V Ĝn (o, o) p < +∞. (4.12) Moreover, for any fixed value of n, (|ψ n (o, t) -1| p ) t≥0 is dominated by ψ * n (o) p which is integrable by (4.12). Therefore, by dominated convergence theorem, we can make t go to infinity in (4.11) which concludes the proof. Link between Ĝn and Gn Let us recall that ( Ĝn (o, o)) n∈N is the bracket of the martingale (ψ n (o)) n∈N whose moments we are seeking an upper bound for. Therefore, it would be very interesting for our purpose to be able to control the moments of Ĝn (o, o) for n ∈ N. The following lemma shows there is a relation between the moments of Ĝn (o, o) and the moments of Gn (o, o) for n ∈ N. Remind that Gn (o, o) has been defined in subsection 4.2. For every x > 0, let us define F p (x) = +∞ 0 x p (1 + 2yx) p e -y √ πy dy. Lemma 4.7. We assume that V is a deterministic graph. Then, for every n ∈ N * and for every p > 1/2, E ν W V Ĝn (o, o) p = E ν W V F p ( Gn (o, o)) . Moreover, Remind that γ is a Gamma random variable with parameters (1/2,1) which is independent of β. F p (x) ∼ x→+∞ a p x p-1/ Together with (4.13), this implies directly the link between the moments of Ĝn (o, o) and Gn (o, o). We only have to look at the asymptotic behaviour of F p . By a change of variable, for every x > 0, F p (x) = x p-1/2 +∞ 0 e -y/x (1 + 2y) p (πy) 1/2 dy. (4.14) Then, by dominated convergence theorem, if p > 1/2, +∞ 0 e -y/x (1 + 2y) p (πy) 1/2 dy ----→ x→+∞ a p . (4.15) The transient phase We are now ready to prove Theorem 2. Let us explain quickly the strategy of the proof. Strategy of the proof: The idea is to find an upper bound for the moments of Ĝn (o, o). Indeed, it is enough for us because ( Ĝn (o, o)) n∈N is the bracket of (ψ n (o)) n∈N . Consequently, by Lemma 4.7, this is enough to find an upper bound for Gn (o, o) which is also the effective resistance until level n associated with the environment of the VRJP according to Lemma 4.4. Thus, we only need to show that the global effective resistance R(o ←→ ∞) has moments of order p for every p > 0. By standard computations, the effective resistance of the VRJP on a tree satisfies the equation in law R(x) = 1 i=x A 2 i W A i +W R(i) where the random variables R(i) for i = x are i.i.d copies of R(x). We will analyse this equation in law in order to bound the moments of the effective resistance. Proof of Theorem 2. Step 1: The potential (β i ) i∈V on V is constructed as in (3.2). For every x ∈ V , recall that e Ux = o<u≤x A u . For every x ∈ V , let us define the subtree V x := {u ∈ V, x ≤ u}. Moreover, for any neighbouring i, j ∈ V x , let us define c x (i, j) = W e U i +U j -2Ux . Then, for every x ∈ V , let R(x) be the electrical resistance between 0 and ∞ in the tree V x with conductances c x . Remark that, under P µ,W , (R(x)) x∈V is a family of identically distributed random variables. Furthermore, by Proposition G, as W > W c (µ), R(x) is finite for every x ∈ V , P µ,W -a.s. The figure 1 bellow explains the situation from an electrical point of view. Figure 1: Electrical network on a subtree. In this situation, the vertex x has three children, u 1 , u 2 , u 3 . On each edge the resistance in V x is written. By standard computations on electrical networks we infer that for every x ∈ V , R(x) = 1 i=x A 2 i W A i +W R(i) . For sake of convenience, we define R(x) = W R(x) for every x ∈ V . 
Therefore, it holds that for every x ∈ V , R(x) = 1 i=x A 2 i A i + R(i) . (5.1) Step 2: The following lines are inspired by the proof of Lemma 2.2 in [START_REF] Aidékon | Large deviations for transient random walks in random environment on a Galton-Watson tree[END_REF]. For every n ∈ N, the leftest vertex in generation n of V is denoted by v n . We denote by B(v n ) the set of "brothers" of v n . Remark that this set is possibly empty if µ(1) = 0. Let C > 0. Let α > 0. We define c α = 1 if α ≤ 1 and c α = 2 α-1 otherwise. For every n ∈ N * , let us introduce the event E n = {∀k ∈ {1, • • • , n}, ∀u ∈ B(v k ), cα A α u + cα R(u) α A 2α u > C}. By convention we write 1{E 0 } := 1. Now, let us prove the following key-inequality: for every n ∈ N * , P µ,W -a.s, R(o) α ≤ C n-1 k=0 1{E k } k i=1 c α A 2α v i + n k=1 1{E k }A α v k k i=1 c α A 2α v i + 1{E n } n i=1 c α A 2α v i R(v n ) α . (5.2) Let us prove it for n = 1. By (5.1), we can observe that for every child u of o, R(o) α ≤ 1 A u + R(u) A 2 u α ≤ c α A α u + c α A 2α u R(u) α . (5.3) If E 1 is satisfied, then we can apply (5.3) with u = v 1 which implies R(o) α ≤ 1{E 1 } c α A α v 1 + c α A 2α v 1 R(v 1 ) α . (5.4) If E 1 is not satisfied, then we can apply (5.3) with a brother of v 1 which implies R(o) α ≤ C. (5.5) Therefore, combining (5.4) and (5.5), we infer R(o) α ≤ C + 1{E 1 } c α A α v 1 + c α A 2α v 1 R(v 1 ) α (5.6) which is inequality (5.2) with n = 1. Remark, that the inequality (5.6) is true even if v 1 is the only child of o. The proof of (5.2) for any n is obtained by induction by iterating the inequality (5.6). Moreover, by construction, the events ∀u ∈ B(v k ), c α A α u + c α R(u) α A 2α u > C k∈N * are P µ,W -independent. In addition, the probability of each of these events is the same and it is strictly less than 1 because R(u) < +∞ for every u ∈ V as W > W c (µ). Therefore, P µ,W -a.s, there exists N ∈ N * such that 1{E n } = 0 for every n ≥ N . That is why we can make n go to infinity in (5.2) which implies, P µ,W -a.s, R(o) α ≤ C +∞ k=0 1{E k } k i=1 c α A 2α v i + ∞ k=1 1{E k }A α v k k i=1 c α A 2α v i . (5.7) Now, let us introduce the random set A = {i ∈ N * , B(v i ) = ∅} and for every k ∈ N * the random variable Γ k = |A∩{1, • • • k}|. Under GW µ , the sequence (Γ k ) k∈N is a random walk whose increments are independent Bernoulli random variables with parameter 1 -µ(1). Further, A can be written as {J 1 ≤ J 2 ≤ J 3 ≤ • • • }. For every i ∈ N * , there exists a brother L i of v J i . The situation is summarized by the figure 2 bellow. By construction, conditionally on the underlying Galton-Watson tree, the random variables 1{∀u ∈ B(v k ), cα A α u + cα R(u) α A 2α u > C} k∈N * and (A v k ) k∈N * are mutually independent. Therefore, together with (5.7), this implies that, GW µ -a.s, E ν W V R(o) α ≤ C + C + Q(W, -α) Q(W, -2α) +∞ k=1 (c α Q(W, -2α)) k Γ k i=1 ν W V c α A α L i + c α R(L i ) α A 2α L i > C (5.8) where we recall that Q(W, t) is the moment of order t of an Inverse Gaussian random variable with parameters (1, W ). Remark that, under GW µ , conditionally on (Γ k ) k∈N * , (P k ) k∈N * := ν W V c α A α L k + c α R(L k ) α A 2α L k > C k∈N * is an i.i.d sequence. Therefore, by the strong law of large numbers, GW µ -a.s, Γ k i=1 P i = exp (Γ k + o(Γ k ))E GW µ [ln (P 1 )] . Moreover, by the strong law of large numbers applied with (Γ k ) k∈N * , GW µ -a.s, Γ k i=1 P i = exp (1 -µ(1))(k + o(k))E GW µ [ln (P 1 )] . (5.9) Besides, as W > W c (µ), we know that R(u) < +∞ for every u ∈ V , P µ,W a.s. 
Consequently, by monotone convergence theorem, -E GW µ [ln(P 1 )] = -E GW µ ln ν W V c α A α L 1 + c α R(L 1 ) α A 2α L 1 > C can be made as large as we want by making C go toward infinity. Therefore, there exists C(α) > 0 such that ln (c α Q(W, -2α)) + (1 -µ( 1))E GW µ [ln(P 1 )] < 0. (5.10) Hence, for every α > 0, using (5.10) and (5.9) in (5.8) with C = C(α) implies that, GW µ -a.s, I α := E ν W V R(o) α < +∞. (5.11) Step 3: By (5.11), we can control any moment of R(o). Together with Lemma 4.4, this implies that for every α > 0, for every n ∈ N * , GW µ -a.s, E ν W V Gn (o, o) α = E ν W V [R(0 ←→ δ n ) α ] ≤ W α E ν W V R(o) α = W α I α < +∞. (5.12) Let p > 1. By Lemma 4.7, for every n ∈ N * , GW µ -a.s, E ν W V Ĝn (o, o) p/2 = E ν W V F p/2 ( Gn (o, o)) where F p/2 (x) ∼ a p/2 x p/2-1/2 . Therefore, together with (5.12), this shows there exists positive constants K 1 and K 2 such that for every n ∈ N * , GW µ -a.s, E ν W V Ĝn (o, o) p/2 ≤ K 1 + K 2 E ν W V Gn (o, o) (p-1)/2 ≤ K 1 + K 2 W I (p-1)/2 . (5.13) By Lemma 4.6, it implies that, GW µ -a.s, sup n∈N * E ν W V [ψ n (o) p ] < +∞. Remark 5.1. In the proof of Theorem 2, identity (5.1) shows that the distribution of Ĝ(o, o) is directly linked to the solution of the equation in law R(o) = 1 i=o A 2 i A i + R(i) . A non-trivial solution to this equation must exist in the transient phase. However, we do not know how to express this solution with standard distributions and if it is even possible. 6 The subcritical phase Proof of Theorem 3 In the study of the transient phase, we used the fact that the asymptotic behaviour of (ψ n (o)) n∈N is related to the effective resistance associated with the environment of the VRJP. We will also use this crucial property in the recurrent phase. In order to study the effective resistance of the VRJP between o and the level n, we will use techniques coming from the area of branching random walks. Indeed the fact that the environment of the VRJP on trees can be expressed as products of independent Inverse Gaussian random variables along branches of the tree makes our situation very similar to branching random walks. Proof of Theorem 3. Step 1: For every vertex x in the Galton-Watson tree V , let us define S(x) = - o<u≤x ln(A u ). We recall that f m,W (t) = ln (mQ(W, t)) for every t ∈ R. f m,W is the Laplace transform associated with the branching random walk {(x, S(x)), x ∈ V }. In particular, remark that {(x, S(x)), x ∈ V } satisfies (3.9). By assumption A 3 , it satisfies also (3.8). Remark that f m,W (0) = ln(m) > 0 because m > 1 by assumption A 1 . Moreover, this is easy to check that f m,W is stricly convex, strictly decreasing on [0, 1/2] and strictly increasing on [1/2, +∞[. In addition, the support of the point process L which is associated with {(x, S(x)), x ∈ V } is R because the support of an Inverse Gaussian distribution is R * + . Therefore, by Lemma L and Lemma 3.3, there exists a unique t * (m, W ) > 0 such that -τ (m, W ) := f m,W (t * (m, W )) = f m,W (t * (m, W )) t * (m, W ) . For every x ∈ V , we define S(x) := t * (m, W )S(x) + f m,W (t * (m, W ))|x| = t * (m, W ) S(x) -τ (m, W )|x| . By definition of t * (m, W ), the branching random walk {(x, S(x)), x ∈ V } satisfies (3.10). Consequently, with the branching random walk S, we are allowed to use the results of Hu and Shi, that is, Propositions J and K. Moreover W < W c (µ). By Proposition H, this is equivalent to say that Q(W, 1/2) < 1/m. Therefore, f m,W (1/2) < 0. Thus, by Proposition 3.3, t * (m, W ) < 1/2 and τ (m, W ) > 0. 
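Remark (numerical illustration). The quantities t * (m, W ) and τ (m, W ) constructed in Step 1 can be located numerically. The sketch below is not used in any proof. It relies on the classical Inverse Gaussian moment formula Q(W, t) = (2W/π) 1/2 e W K t-1/2 (W ), which is consistent with the expressions for Q(W, 3/2) and Q(W, 1/2) obtained in the proof of Proposition 2.1 below; the values of m and W are arbitrary subcritical choices, picked so that m Q(W, 1/2) < 1.

```python
import numpy as np
from scipy.special import kv

m, W = 2.0, 0.02            # illustrative values with m * Q(W, 1/2) < 1

def Q(t):
    # Moments of IG(1, W): Q(W, t) = sqrt(2W/pi) * e^W * K_{t-1/2}(W).
    return np.sqrt(2 * W / np.pi) * np.exp(W) * kv(t - 0.5, W)

f = lambda t: np.log(m * Q(t))
assert f(0.5) < 0, "need W < W_c(mu), i.e. m * Q(W, 1/2) < 1"

def df(t, h=1e-6):          # numerical derivative of the Laplace transform
    return (f(t + h) - f(t - h)) / (2 * h)

phi = lambda t: t * df(t) - f(t)   # increasing, < 0 near 0, > 0 at t = 1/2
lo, hi = 1e-6, 0.5
for _ in range(60):                # bisection for the unique root t*
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if phi(mid) < 0 else (lo, mid)
t_star = 0.5 * (lo + hi)
tau = -f(t_star) / t_star
print(f"t*(m,W) ~ {t_star:.4f} (< 1/2), tau(m,W) ~ {tau:.4f} (> 0)")
```

Bisection is preferred here to a generic solver because Φ(t) = tf'(t) - f(t) is increasing (see the proof of Proposition 3.3), so a sign change brackets the unique root t * (m, W ).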
Now, we are ready to estimate the moments of (ψ n (o)) n∈N . By Lemma 4.3, we only have to control E µ,W [ψ n (o) p ] when p > 1 or p ∈]0, τ (m, W )[. Step 2: lower bound in (i). By Lemma 4.4, we know that for every n ∈ N, Gn (o, o) = R(o ←→ δ n ) where R(o ←→ δ n ) is the effective resistance between o and δ n with conductances c. Recall that if i ∈ V \{o}, then c(i, i) = W A -1 i o<u≤i A 2 u . By the Nash-Williams inequality (see 2.15 in [START_REF] Lyons | Probability on Trees and Networks[END_REF]), for every n ∈ N * , P µ,W -a.s, Gn (o, o) ≥ 1 W |x|=n A -1 x o<y≤x A 2 y . (6.1) Let p > 0. It holds that, for every n ∈ N * E µ,W Gn (o, o) p/2 ≥ 1 W p/2 E µ,W      |x|=n A -1 x o<y≤x A 2 y   -p/2    ≥ 1 W p/2 E µ,W    min |x|=n A p/2 x ×   |x|=n o<y≤x A 2 y   -p/2    = 1 W p/2 E µ,W    min |x|=n A p/2 x ×   |x|=n e -2S(x)   -p/2    = 1 W p/2 e pτ (m,W )n E µ,W min |x|=n A p/2 x × W -p/2 n,2/t * (m,W ) (6.2) where for every β > 1, (x) . W n,β = |x|=n e -β S By (3.12) in Lemma J, as 2/t * (m, W ) > 4 > 1, we know that, P µ,W -a.s, lim sup n→+∞ ln W n,2/t * (m,W ) ln(n) = -1/t * (m, W ). Therefore, P µ,W -a.s, W -p/2 n,2/t * (m,W ) ≥ n p/(2t * (m,W ))+o(1) . (6.3) Moreover, for every n ∈ N * , P µ,W min |x|=n A x < n -2 = P µ,W   |x|=n {A x < n -2 }   ≤ E GW µ Z n ν W V A < n -2 where A has an Inverse Gaussian distribution with parameter (1, W ) and Z n = |x|=n 1. In addition, the cumulative distribution function of an Inverse Gaussian random variable decreases exponentially fast at 0. Therefore there exists λ > 0 such that for every n ∈ N * , P µ,W min |x|=n A x < n -2 ≤ e -λn 2 E GW µ [Z n ] ≤ m n e -λn 2 (6.4) which is summable. Therefore, by Borel-Cantelli lemma, P µ,W -a.s, min |x|=n A p/2 x ≥ n -p+o(1) . (6.5) Consequently, using (6.5) and (6.3) and Fatou's lemma, we infer that E µ,W min |x|=n A p/2 x × W -p/2 n,2/t * (m,W ) ≥ n p/(2t * (m,W ))-p+o(1) . (6.6) Then (6.6) and (6.2) imply that, E µ,W Gn (o, o) p/2 ≥ e pτ (m,W )n+o(n) . (6.7) Together with Lemma 4.6 and Lemma 4.7, this yields E µ,W ψ n (o) 1+p ≥ e pτ (m,W )n+o(n) . (6.8) Step 3: upper bound in (i). This part of the proof is partially inspired from [START_REF] Faraud | Almost sure convergence for stochastically biased random walks on trees[END_REF]. C(o ←→ δ n ) = Gn (o, o) -1 . (6.9) Now, we introduce ( Zk ) k∈N * a Markov chain on V with conductances c starting from o (which is actually the discrete-time process associated with the VRJP). When we want to integrate only with respect to this Markov chain, we use the notations P c,o and E c,o . By definition of the effective conductance, we know that C(o ←→ δ n ) = W i=o A i × P c,o (τ n < τ + o ) ≥ W i=o A i × max |x|=n P c,o (τ x < τ + o ) (6.10) where τ n = inf{k ∈ N, | Zk | = n}, τ x = inf{k ∈ N, Zk = x} and τ + o = inf{k ∈ N * , Zk = o}. For every x ∈ V \{o}, we define x 1 the unique child of o which is an ancestor of x. By standard computations, for every n ∈ N * , for every x such that |x| = n, W i=o A i × P c,o τ x < τ + 0 = i=o A i A -1 x 1 o<u≤x c(u, u) -1 ≥ 1 o<u≤x c(u, u) -1 . (6.11) By (6.11) and the expression of c, we infer that W i=o A i × P c,o τ x < τ + 0 ) ≥ W o<u≤x A u o<v≤u A -2 v ≥ W o<u≤x A u e 2S(u) ≥ W e -2Sm(x) n × min |z|≤n A -1 z (6.12) where S m (x) = max o<u≤x S(u). Therefore, combining identities (6.12), (6.10) and (6.9), we get for every n ∈ N * , P µ,W -a.s, Gn (o, o) ≤ n W × max |z|≤n A z × e 2 min |x|=n Sm(x) . 
(6.13) Moreover, as τ (m, W ) > 0, it holds that for every x ∈ V , S m (x) = max o<u≤x S(u) = max o<u≤x S(u)/t * (m, W ) + τ (m, W )|u| ≤ τ (m, W )|x| + (1/t * (m, W )) max o<u≤x S(u) = τ (m, W )|x| + (1/t * (m, W )) Sm (x) (6.14) where Sm (x) = max o<u≤x S(u). Combining (6.13) and (6.14), it holds that for every n ∈ N * , P µ,W -a.s, Gn (o, o) ≤ n W × max |z|≤n A z × e 2τ (m,W )n × e 2/t * (m,W ) min |x|=n Sm(x) . (6.15) Let p > 0. By (6.15) and Cauchy-Schwarz inequality, for every n ∈ N * , E µ,W Gn (o, o) p/2 ≤ n p/2 W p/2 e pτ (m,W )n E µ,W max |z|≤n A p/2 z × e p/t * (m,W ) min |x|=n Sm(x) ≤ n p/2 W p/2 e pτ (m,W )n E µ,W max |z|≤n A p z 1/2 (a) E µ,W e 2p/t * (m,W ) min |x|=n Sm(x) 1/2 (b) . (6.16) If we show that (a) and (b) have a subexponential growth, it gives the good upper bound for E µ,W Gn (o, o) p/2 . In order to majorize (a), let us introduce a function h p on R + which is increasing, convex, bijective and such that there exists γ p > 0 such that h p (x) = e (W/4)x 1/p for every x > γ p . Such a function does clearly exist. By Jensen's inequality, for every n ∈ N * , it holds that h p E µ,W max |z|≤n A p z ≤ E µ,W max |z|≤n h p (A p z ) ≤ h p (γ p ) + E µ,W max |z|≤n e (W/4)Az ≤ h p (γ p ) + E µ,W   |z|≤n e (W/4)Az   ≤ h p (γ p ) + (m -1) -1 m n+1 E µ,W e (W/4)A where A is an Inverse Gaussian distribution with parameters (1, W ). Remark that E µ,W e (W/4)A < +∞. Thus, there exist positive constants C 1 and C 2 such that for every n big enough, where Sz (u) = S(u) -S(z). Therefore, for every n ∈ N As W < W c (µ), by Lemma 3.2, for every n ∈ N * , P µ,W -a.s, it holds that E µ,W max |z|≤n A p z ≤ h -1 p (C 1 + C 2 m n ) ≤ 4 W ln (C 1 + C 2 m n ) p . ( 6 ψ n (o) ≤ W Ĝ(o, o) |x|=n e Ux ν x = W Ĝ(o, o) |x|=n o<u≤x A u ν x . (6.25) Together with the notation introduced in step 1 of this proof, we get that for every n ∈ N * , P µ,W -a.s, ψ n (o) ≤ W Ĝ(o, o)e -τ (m,W )n |x|=n e -S(x)/t * (m,W ) ν x (6.26) By identity (4.13) and Lemma 4.5, as W < W c (µ), it holds that Ĝ(o, o) = 1 2γ . Together with (6.26) this implies that for every n ∈ N * , P µ,W -a.s, ψ n (o) ≤ W 1 2γ e -τ (m,W )n |x|=n e -S(x)/t * (m,W ) ν x . (6.27) Nevertheless, by the construction of the β-potential introduced in subsection 3.1, we know that γ, ( S(x)) |x|=n and (ν x ) |x|=n are independent and γ has a Gamma distribution with parameters (1/2, 1). Consequently, for every p ∈]0, t * (m, W )[, for every n ∈ N * , it holds that E µ,W [ψ n (o) p ] ≤ W p e -pτ (m,W )n +∞ 0 x -p-1/2 √ 4 p π e -x dx × E µ,W     |x|=n e -S(x)/t * (m,W ) ν x   p   . (6.28) For every p ∈]0, 1/2[, we denote κ p = W p +∞ 0 x -p-1/2 √ 4 p π e -x dx < +∞. As t * (m, W ) < 1/2 < 1, we are allowed to use concavity in (6.28) which implies that for every p ∈]0, t * (m, W )[, for every n ∈ N * , E µ,W [ψ n (o) p ] ≤ κ p e -pτ (m,W )n × E µ,W      |x|=n e -S(x)/t * (m,W ) ν x   t * (m,W )    p/t * (m,W ) ≤ κ p e -pτ (m,W )n × E µ,W   |x|=n e -S(x) ν t * (m,W ) x   p/t * (m,W ) . (6.29) However ( S(x)) |x|=n and (ν x ) |x|=n are independent. Therefore, for every n ∈ N * and for every p ∈]0, t * (m, W )[, E µ,W [ψ n (o) p ] ≤ κ p e -pτ (m,W )n × E µ,W [W n ] p/t * (m,W ) × E µ,W ν t * (m,W ) p/t * (m,W ) (6.30) where ν has distribution µ and W n = |x|=n e -S(x) . 
Therefore, as W n is a martingale with mean 1, we get that for every n ∈ N * and for every p ∈]0, t * (m, W )[, E µ,W [ψ n (o) p ] ≤ κ p × E µ,W ν t * (m,W ) p/t * (m,W ) × e -pτ (m,W )n In order to conclude the proof, we need the same estimate for p ∈]1 -t * (m, W ), 1[. This stems from Lemma 4.3. Proof of Theorem 4 First, we need the following lemma which establishes a link "in law" between ψ n (o) and the effective resistance associated with the VRJP. Lemma 6.1. Let V be a rooted tree with root o. Let W > 0. Then, under ν W V , it holds that for every n ∈ N * , ψ n (o) 2 × 2γ × (1 + 2γR(o ←→ δ n )) law = 2Γ(1/2, 1) where γ is the Γ(1/2, 1) random variable which was used to define the potential β on a tree (see identity (3.3)) and R(o ←→ δ n ) is the effective resistance from o to δ n associated with the conductances c defined in Proposition G. Proof of Lemma 6.1. Let n ∈ N. The proof is based on a coupling with a potential on the wired graph Ṽn . (See subsection 4.2 for the definition of the wired graph.) Recall that, under ν W V , thanks to (3.3), the potential β can be decomposed as β = β + 1{• = o}γ where γ and β are independent. For every i ∈ V n , we write η(n) i = j∼i,j / ∈Vn W . Then, recall that ψ n (o) = Ĝn η(n) . In particular, there exists a deterministic function F n from R |Vn|+1 into R 3 such that (ψ n (o), Gn (o), 2γ) = F n ( βVn , γ). ( 6 1 2γ = G (o, o) = Ĝ n (o, o) + G (δ n , δ n )ψ n (o) 2 . (6.32) The equality (6.32) can be proved by means of the results about path expansions given by Lemma I. By (6.32), we get ψ n (o) 2 1/(2γ ) -Ĝ n (o, o) = 1 G (δ n , δ n ) . (6.33) Besides, by Cramer's formula, 1 2γ -Ĝ n (o, o) = 1 2γ - G n (o, o) 1 + 2γ G n (o, o) = 1 2γ (1 + 2γ G n (o, o)) . Together with (6.33), this yields ψ n (o) 2 × 2γ × (1 + 2γ G n (o, o)) = 1 G (δ n , δ n ) . (6.34) Further, with the same function F n as in (6.31), it holds that (ψ n (o), G n (o), 2γ ) = F n ( β Vn , γ ). (6.35) Moreover, the joint law of ( β Vn , γ ) is the same as the joint law of ( βVn , γ). It stems from the restriction properties in Lemma C and Lemma D. Therefore, combining this with (6.31), (6.35) and (6.34), we obtain that Proof of Theorem 4. For every n ∈ N, it holds that ψ n (o) 2 × 2γ × (1 + 2γ Gn (o, o)) law = ψ n (o) 2 × 2γ × (1 + 2γ G n (o, o)) = 1 G (δ n , δ n ) . By Theorem 3 in [STZ17], 1/G (δ n , δ n ) law = 2Γ ψ n (o) 2 = 1 2γ(1 + 2γR(0 ←→ δ n )) × Φ n (6.36) where Φ n = ψ n (o) 2 × 2γ(1 + 2γR(o ←→ δ n )) . By Lemma 6.1, we know that for every n ∈ N, That is why, in order to conclude, we only have to prove that, P µ,W -a.s, Φ n law = 2Γ(1/2, R(o ←→ δ n ) = e 2τ (m,W )n+o(n) . Remark that the identity (6.2) is also true without the expectation and remember from Lemma 4.4 that R(o ←→ δ n ) = Gn (0, 0). Therefore, for every n ∈ N. R(o ←→ δ n ) ≥ 1 W e 2τ (m,W )n × min |x|=n A x × W -1 n,2/t * (m,W ) . (6.38) First, min |x|=n A x has at most polynomial decay P µ,W -a.s. This can be shown exactly as in (6.5). Furthermore, by Proposition J, W -1 n,2/t * (m,W ) has also polynomial asymptotics. Consequently, this proves the lower bound of R(o ←→ δ n ). More precisely, P µ,W almost surely, R(o ←→ δ n ) ≥ e 2τ (m,W )n+o(n) . Now, let us prove the upper bound. By (6.15), it holds that R(0 ←→ δ n ) ≤ n W × max |z|≤n A z × e 2τ (m,W )n × e 2/t * (m,W ) min |x|=n Sm(x) . (6.39) In the same way as in (6.5), max {A z : |z| ≤ n} has at most polynomial growth P µ,W -a.s. 
Moreover, by Theorem 1.4 in [START_REF] Faraud | Almost sure convergence for stochastically biased random walks on trees[END_REF], there exists some constant c > 0 such that min { Sm (x) : |x| = n} ∼ cn 1/3 P µ,W -a.s. This concludes the proof. Proof of Proposition 2.1 Proof of Proposition 2.1. Let m > 1. For every W > 0 and for every t > 0, let us define F (W, t) = ln(mQ(W, t)). Obviously, F ∈ C ∞ R * + × R * + . We introduce another function G defined by G(W, t) = F (W, t) -t ∂F ∂t (W, t) for every (t, W ) ∈ R * + × R * + . Moreover, by step 1 in the proof of Theorem 3, we know that for every W > 0, there exists a unique t * (m, W ) > 0 such that G(W, t * (m, W )) = 0. Further, for every (t, W ) ∈ R * + × R * + , ∂G ∂t (W, t) = -t ∂ 2 F ∂t 2 (W, t) = -t E µ,W A t E µ,W ln(A) 2 A t -E µ,W ln(A)A t 2 E µ,W [A t ] 2 ( G(W c (µ), 1/2) = F (W c (µ), 1/2) -(1/2) ∂F ∂t (W c (µ), 1/2) = ln (mQ(W c (µ), 1/2)) = 0. Therefore, t * (m, W c (µ)) = 1/2. (6.43) Thus, by Taylor expansion in a neighborhood of W c (µ), it holds that, F (W, t * (m, W )) = F (W c (µ), 1/2) + (W -W c (µ)) ∂F ∂W (W c (µ), 1/2) + (t * (m, W ) -1/2) ∂F ∂t (W c (µ), 1/2) + o W c (µ) -W, t * (m, W ) -1/2 = (W -W c (µ)) ∂F ∂W (W c (µ), 1/2) + o(W c (µ) -W ) (6.44) where in the last equality, we used the fact that F (W c (µ), 1/2) = 0 and (6.42 ). Moreover o(W c (µ) - W, t * (m, W ) -1/2) becomes o(W c (µ) -W ) in the last equality because t * (m, W ) -1/2 = t * (m, W ) -t * (m, W c (µ)) = O(W c (µ) -W ) as t * (m, •) is a smooth function. Besides, τ (m, W ) = -F (W, t * (m, W ))/t * (m, W ) ∼ -2F (W, t * (m, W )) in the neighborhood of W c (µ) because t * (m, W c (µ)) = 1/2. Together with (6.44), it yields τ (m, W ) ∼ W →Wc(µ) 2 ∂F ∂W (W c (µ), 1/2) (W c (µ) -W ) (6.45) Therefore, we only have to compute ∂F ∂W (W c (µ), 1/2) in order to conclude the proof. Let us recall that for every W > 0, F (W, 1/2) = ln(m) + 1 2 ln(W ) + ln +∞ 0 e -(W/2)(x+1/x-2) √ 2πx dx . (6.46) Differentiating (6.46), we get ∂F ∂W (W, 1/2) = 1 2W - 1 2 +∞ 0 (x + 1/x -2)(2π) -1/2 x -1 e -(W/2)(x+1/x-2) dx +∞ 0 (2π) -1/2 x -1 e -(W/2)(x+1/x-2) dx = 1 2W - 1 2 Q(W, 3/2) + Q(W, -1/2) -2Q(W, 1/2) Q(W, 1/2) = 1 + 1 2W - Q(W, 3/2) Q(W, 1/2) . (6.47) In the last equality, we used the fact that Q(W, 3/2) = Q(W, -1/2). Moreover, remark that for every W > 0, Q(W, 3/2) = +∞ 1 W 2π (x + 1/x) x e -(W/2)(x+1/x-2) dx = 2W π +∞ 0 cosh(u)e -W (cosh(u)-1) du = 2W π e W K 1 (W ) = K 1 (W ) K 1/2 (W ) (6.48) where K α is the modified Bessel function of the second kind with index α. Besides, recall that mQ(W c (µ), 1/2) = 1. Now, let us evaluate (6.47) at W = W c (µ). Together with (6.48), this implies ∂F ∂W (W c (µ), 1/2) = 1 + 1 2W c (µ) -m K 1 (W c (µ)) K 1/2 (W c (µ)) . (6.49) Moreover, we still have to prove that ∂F ∂W (W c (µ), 1/2) > 0. Actually, it is enough to prove that for every W > 0, 1 + 1 2W - Q(W, 3/2) Q(W, 1/2) > 0. Exactly as in (6.48), one can prove that Q(W, 1/2) = K 0 (W ) K 1/2 (W ) . Therefore, we have to prove that for every W > 0, 1 + 1 2W > K 1 (W ) K 0 (W ) . Nevertheless, it is exactly Corollary 3.3 in [START_REF] Chu | On approximating the modified Bessel function of the second kind[END_REF]. Proof of Proposition 2.2 Proof of Proposition 2.2. Recall from Proposition G that the measure P V RJP µ,W is defined as follows: • First, under measure P µ,W , we choose randomly a Galton-Watson tree V and the random conductances c on V which are given by Proposition G. 
• Secondly, we choose randomly a trajectory on V for the discrete-time process ( Zn ) n∈N with distribution P c,o where P c,o is the law of a random walk on the tree (V, E) starting from o with conductances c. Step 1: proof of the lower bound. Let n ∈ N * . By Jensen's inequality, it holds that 1 P V RJP µ,W (τ + o > τ n ) = 1 E µ,W P c,o (τ + o > τ n ) ≤ E µ,W 1 P c,o (τ + o > τ n ) . (6.50) However, by definition of the effective resistance, we know that 1 P c,o (τ + o > τ n ) = W   i=o A i   × R(o ←→ δ n ). Therefore, by Proposition 4.4 1 P c,o (τ + o > τ n ) = W   i=o A i   × Gn (o, o). Combining this with (6.50) and Cauchy-Schwarz inequality, there exists a positive constant C such that One can prove that the first term in (6.56) has at most polynomial growth by following exactly the same lines as for the proof of (6.17). Moreover, the second term in (6.56) decreases with a polynomial decay by Proposition K because α(1 + 2ε) < t * (m, W )/2. Together with (6.55), as α can be taken as close from t * (m, W )/2 as we want, this concludes the proof. 1 P V RJP µ,W (τ + o > τ n ) ≤ C E µ, 7 The critical point 7.1 Proof of Theorem 5 Now, we are going to prove Theorem 5 which describes the asymptotic behaviour of (ψ n (o)) n∈N at the critical point. Proof of Theorem 5. For simplicity of notation, we write W = W c (µ) in the entirety of this proof. Exactly as in the proof of Theorem 4, by using Lemma 6.1, we only need to find the almost sure behaviour of C(o ←→ δ n ), the effective conductance associated with the VRJP, in order to get the asymptotics of ψ n (o) 2 . Remember that the local conductance from any vertex x ∈ V \{o} to x is W A -1 x   o<u≤x A 2 u   which is not exactly the effective conductance associated with a branching random walk. Remark that for every n ∈ N, W min |z|≤n A -1 z n ≤ C(o ←→ δ n ) ≤ W max |z|≤n A -1 z n (7.1) where n is the effective conductance from o to level n when the local conductance from any vertex x ∈ V \{o} to x is given by   o<u≤x A 2 u   . As usual, min = ln E µ,W |x|=1 A 2t x . As we are at the critical point and thanks to Proposition H, ψ strictly decreases on [0, 1/4] and increases strictly on [1/4, 1], ψ(1/4) = 0 and ψ (1/4) = 0. Our n is exactly the same as the one defined in [START_REF] Faraud | Almost sure convergence for stochastically biased random walks on trees[END_REF] with the branching random walk Ŝ. By the proof of Theorem 1.2 in [START_REF] Faraud | Almost sure convergence for stochastically biased random walks on trees[END_REF], we get that, P µ,W -a.s, lim n→+∞ ln( n ) n 1/3 = - 3π 2 2 × 4 × ψ (1/4) 1/3 = -   24π 2 E µ,W   |x|=1 A 1/2 x ln(A x ) 2     1/3 . This concludes the proof. Positive recurrence at the critical point Now, let us prove Theorem 6. Proof of Theorem 6. We want to prove the positive recurrence of the discrete process ( Zn ) n∈N associated with (Z t ) t≥0 . By Proposition G, ( Zn ) n∈N is a Markov chain in random conductances with conductances given by c(x, x) = W e Ux+U x = W A x o<u≤ x A 2 u for every x ∈ V \{o}. For every x ∈ V , let us define S(x) = -1 2 o<y≤x ln(A u ). We assumed that W = W c (µ), that is, mQ(W, 1/2) = 1 by Proposition H. Therefore, {(x, S(x)), x ∈ V } is a branching random walk which satisfies hypothesis (3.10). This is easily checked that it satisfies also (3.9). Moreover it satisfies hypothesis (3.8) by hypothesis A 3 . Therefore, we are allowed to use the results of Hu and Shi (Propositions K and J.) with this branching random walk. 
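Remark (numerical illustration). The criticality condition m Q(W, 1/2) = 1 of Proposition H pins W c (µ) down as the root of an explicit scalar equation, since Q(W, 1/2) = K 0 (W )/K 1/2 (W ) as in the proof of Proposition 2.1. The following bisection sketch is purely illustrative and plays no role in the proofs; the values of m are arbitrary (for instance m = 2 corresponds to a binary tree).

```python
import numpy as np
from scipy.special import kv

def Q_half(W):
    # Q(W, 1/2) = K_0(W) / K_{1/2}(W), increasing from 0 to 1 on (0, +inf).
    return kv(0.0, W) / kv(0.5, W)

def W_c(m, lo=1e-8, hi=50.0, iters=200):
    # For any m > 1 the map W -> m * Q(W, 1/2) crosses 1 exactly once.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if m * Q_half(mid) < 1 else (lo, mid)
    return 0.5 * (lo + hi)

for m in (1.5, 2.0, 5.0):
    print(f"m = {m}: W_c ~ {W_c(m):.6f}")
```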
Following the notations of Hu and Shi, we define In order to prove Theorem 6, this is enough to prove that for some r ∈]0, 1[, where ν has the same distribution as ν y for any y ∈ V . The last equality comes from the fact that (W n ) n∈N is a martingale because the branching random walk S satisfies hypothesis (3.10). Combining identities (7.3), (7.4) and (7.5), in order to make E µ,W [Λ r n ] summable, we need n 3r/2 E µ,W W r n,4 and E µ,W ν 1/4 1 ν≥n 3/2 4r to be summable. Moreover, recall we assumed that r < 1/4. By Proposition K, we know that n 3r/2 E µ,W W r n,4 = n 3r/2 × n -6r+o(1) = n -9r/2+o(1) . The case where f (t min ) > 0 can be treated in the same way. 4 Preliminary lemmas 4.1 ψ n (o) as a mixture of Inverse Gaussian distributions and proof of Theorem 1 For every n ∈ N * , let us denote by C(o ←→ δ n ) the effective conductance between o and δ n with respect to conductances c n . (See subsection 4.2 for the definition of the conductances c and c n .) By Lemma 4.4, for every n ∈ N * , k .17)Consequently, (a) in (6.16) has a subexponential growth. Now, let us look at (b) in (6.16). Let us define a * := 2p/t * (m, W ). Let ε > 0. Then, remark that for everyn ∈ N * , (b) ≤ e na * ε + E µ,W e ≥ εn≤ e na * ε + P µ,W min grows exponentially fast when n goes toward infinity. Therefore we only have to prove that (c) decreases faster than any exponential function. Let δ > 0. The crucial point is to remark that for every n ∈ N * , P µ,W min |x|=n max o<u≤x S(u) ≥ εn ≤ P µ,W max |z|= δn max o<u≤z S(u) ≥ εn/2 + P µ,W ∀z, |z| = δn , min |x|z= (1-δ)n max z<u≤x Sz (u) + S(z) ≥ εn ∩ S(z) ≤ εn/2 (1/2, 1) and by Proposition 4.4, Gn (o, o) = R(o ←→ δ n ). This concludes the proof. Now, we are ready to prove Theorem 4. z have polynomial asymptotics almost surely. Thus, we only need to focus on the behaviour of ( n ) n∈N . For every x ∈ V , let us denoteŜ(x) = -2 o<u≤x ln(A u ).We write ψ(t) = ln E µ,W |x|=1 e -t Ŝ(x) every n ∈ N * , let us define Λ n := |x|=n c(x, x). +∞ n=1 E≤E n=1 µ,W [Λ r n ] < +∞.(7.2)Let n ∈ N * and r ∈]0, 1[. r shall be made precise later in the proof. First, let us remark that,E µ,W [Λ r n ] ≤ E µ,W every y ∈ V , let us define the random variable ν y = x=y 1which is the number of children of y. Then, it holds that,(a) = E µ,W n 3r/2 E µ,W [W r n-1,4 ] + E µ,W Jensen's inequality, if r < 1/4, we get, (b) ≤ E µ,W µ,W ν 1/4 1 ν≥n 3/2 4r = E µ,W [W n-1 ] 4r E µ,W ν 1/4 1 ν≥n 3/2 4r = E µ,W ν 1/4 1 ν≥n 3 Lemma 4.5. Let V be a Galton-Watson tree whose offspring law satisfies hypothesis A 1 . (i) ∀W ∈]0, W c (µ)], lim Proof of Lemma 4.5. By Propositions G and H, W ≤ W c (µ) if and only if the random walk with conductances (c i,j ) (i,j)∈E is recurrent almost surely. By Theorem 2.3 in [LP16], this is equivalent to say that lim n→+∞ R(o ←→ δ n ) = +∞. Therefore, Lemma 4.4 concludes the proof. .8) Combining (4.6) and (4.8) concludes the proof. By means of Lemma 4.4, one can prove the following lemma which shall be useful later in this paper. n→+∞ Gn (o, o) = +∞, P µ,W -a.s. (ii) ∀W ∈]W c (µ), +∞[, lim n→+∞ Gn (o, o) := G(o, o) < +∞, P µ,W -a.s. and ψ n (o, t), ψ n (o, t) a.s ----→ t→+∞ Ĝn (o, o) (4.9) where • • • , • • • is the bracket for semimartingales. For t ≥ 0, let us introduce ψ * n (o, t) = sup s≤t |ψ n (o, s)-1|. Then, if p > 1, by BDG inequality for continuous martingales (see Theorem 4.1 in [RY98] 2 with a p = Proof of Lemma 4.7. Let n ∈ N. 
Recall that (H β ) Vn,Vn = Hn + 2γE o,o where E o,o is the matrix which has only null coefficients, excepted at (o, o) where it has coefficient 1. Then, by Cramer's formula, we have the following key-equality: 0 +∞ dy (πy) 1/2 (1 + 2y) p . Ĝn (o, o) = Gn (o, o) 1 + 2γ Gn (o, o) . (4.13) For every x ∈ V , let us denote by ν x the number of children of x. For every n ∈ N * , by definition of ψ n (o) we know that Moreover, for every x ∈ V , for every n ∈ N * , Ĝn (o, x) ≤ Ĝ(o, x). This can be proved thanks to path expansions. (See Lemma I.) Consequently, for every n ∈ N * , subexponential growth. Moreover, we also proved that (a) has subexponential growth. By (6.16), this yields E Gn (o, o) p/2 ≤ e pτ (m,W )n+o(n) . (6.22) Together with Lemma 4.6 and Lemma 4.7, this yields E µ,W ψ n (o) 1+p ≤ e pτ (m,W )n+o(n) . (6.23) P µ,W min |x|=n Step 4: upper bound in (ii). ψ n (o) = W max o<u≤x S(u) ≥ εn ≤ P µ,W max |z|= δn max o<u≤z S(u) ≥ εn/2 Ĝn (o, x)ν x . |x|=n + P µ,W ∀z, |z| = δn , min |x|z= (1-δ)n max z<u≤x Sz (u) ≥ εn/2 . (6.19) By the branching property, for every n ∈ N * and hypothesis A 2 , ψ n (o) ≤ W Ĝ(o, x)ν x . (6.24) P µ,W ∀z, |z| = δn , min |x|z= (1-δ)n |x|=n max z<u≤x Sz (u) ≥ εn/2 2 δn ≤ P µ,W min |x|= (1-δ)n max o<u≤x S(u) ≥ εn/2 . Therefore, using inequality (2.12) in [FHS12], there exists η > 0 such that for every integer n which is large enough, P µ,W ∀z, |z| = δn , min |x|z= (1-δ)n max z<u≤x Sz (u) ≥ εn/2 ≤ 1 -e -ηn 1/3 2 δn (6.20) which decreases faster than any exponential function. Now, let t > 0. By Markov inequality, for every n ∈ N * , P µ,W max |z|= δn max o<u≤z S(u) ≥ εn/2 ≤ e -nεt/2 k=1 δn E µ,W |x|=k   e t S(x)   δn = e -nεt/2 r(t) k k=1 where r(t) = E µ,W e t S(x) . Consequently, there exists a constant C > 0 such that for every |x|=1 n ∈ N * , P µ,W max |z|= δn max o<u≤z S(u) ≥ εn/2 ≤ C exp (n (δ ln(r(t)) -tε/2)) . (6.21) If we take t large enough and δ small enough, we get an exponential decay with a decreasing rate which is as large as we want. Therefore, combining (6.21), (6.20) and (6.19), we know that (c) in (6.18) decreases faster than any exponential function. Consequently, by (6.18), (b) has a * , the weighted graph Ṽn . We can associate a matrix H β with the potential β in the usual way and the inverse ofH β is denoted by G . We define γ = 1/(2G (o, o)) and β = β -1{• = o}γ . By Theorem 3 in [STZ17], γ is distributed as Γ(1/2,1) and is independent of β . Let us define the matrix Hβ in the same way as H β but we replace 2β o by 2 β o . Moreover, we define Ĝ n and G n as the inverse of (H β ) Vn,Vn and ( Hβ ) Vn,Vn respectively. Further, let us write ψ n = Ĝ Now, let us define a potential β on the wired graph Ṽn with distribution adjacency matrix of n η(n) . Then, by Proposition 8 in [SZ19], it holds that ν Pn,0 Ṽn .31) where Pn is the 6.40)where A is an Inverse Gaussian distribution with parameters (1, W ). From (6.40) and Cauchy-Schwarz inequality, we deduce that for every (t, W ) ∈ R * + × R * + , Therefore, we can apply the implicit function theorem which implies that W → t * (m, W ) is smooth. ∂G ∂t (W, t) < 0. (6.41) By Proposition H, W c (µ) is the unique W > 0 such that mQ(W, 1/2) = 1. Moreover, for every W ∈ R * + , ∂F ∂t (W, 1/2) = 0 (6.42) because the minimum of t → Q(W, t) is achieved for t = 1/2. Consequently, W Gn (o, o) 2 . V RJP µ,W (τ + o > τ n ) = E µ,W P c,o (τ + o > τ n ) ≤ E µ,W P c,o (τ + o > τ n ) α . 
(6.52) Here α ∈]0, t * (m, W )/2[ and n ∈ N * ; remark that t * (m, W )/2 < 1/4 because W < W c (µ). This display opens Step 2, the proof of the upper bound. Furthermore, by definition of the effective conductance C(o ←→ δ n ) between o and level n of the tree, we know that P c,o (τ + o > τ n ) = C(o ←→ δ n ) / ( W i=o A i ). (6.53) Let ε > 0 such that (1 + 2ε)α < t * (m, W )/2. Combining Hölder inequality, (6.52) and (6.53), there exists C > 0 such that P V RJP µ,W (τ + o > τ n ) ≤ C E µ,W [ C(o ←→ δ n ) (1+ε)α ] 1/(1+ε) . (6.54) However, Gn (o, o) -1 = C(o ←→ δ n ). Consequently, following exactly the same lines as in (6.2), we get C(o ←→ δ n ) ≤ W e -2τ (m,W )n × max |x|=n A -1 x × W n,2/t * (m,W ) . Combining this with (6.54), it yields P V RJP µ,W (τ + o > τ n ) ≤ C e -2ατ (m,W )n E µ,W [ max |x|=n A -(1+ε)α x × W (1+ε)α n,2/t * (m,W ) ] 1/(1+ε) . (6.55) Moreover, by Hölder inequality, we get E µ,W [ max |x|=n A -(1+ε)α x × W (1+ε)α n,2/t * (m,W ) ] ≤ E µ,W [ max |x|=n A -α(1+ε)(1+2ε)/ε x ] ε/(1+2ε) × E µ,W [ W n,2/t * (m,W ) (1+2ε)α ] 1/(1+2ε) . (6.56) As for the lower bound of Step 1, combining (6.22) and (6.51), we obtain 1 / P V RJP µ,W (τ + o > τ n ) ≤ e 2τ (m,W )n+o(n) , which is exactly the lower bound in Proposition 2.2. Finally, concerning the proof of Theorem 6: by Hölder's inequality with p = 4, the term E µ,W [ ν 1/4 1 ν≥n 3/2 ] 4r is summable as well, so in order to conclude, we only need to choose r between 2/9 and 1/4, which is possible because 2/9 < 1/4. Acknowledgments. I would like to thank my Ph.D. supervisors Christophe Sabot and Xinxin Chen for suggesting working on this topic and for their very useful pieces of advice.
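Remark (numerical illustration of Remark 5.1). The fixed-point equation R(o) = 1 / Σ i=o A 2 i / (A i + R(i)) has no known expression in terms of standard distributions, but its law can be approximated by a standard population-dynamics iteration. The sketch below is purely illustrative and makes several arbitrary choices: the offspring law (binary, so m = 2), the value of W (taken well above W c (µ) ≈ 0.027 for m = 2 so that R is finite), the pool size, and the number of sweeps; convergence of the iteration to the fixed point is only heuristic here.

```python
import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(1)
W, pool_size, sweeps = 0.5, 100_000, 50   # arbitrary; W > W_c(mu) for m = 2

def sample_A(size):
    # IG(1, W): scipy's invgauss(mu=1/W, scale=W) has mean 1 and shape W.
    return invgauss.rvs(mu=1.0 / W, scale=W, size=size, random_state=rng)

R = np.full(pool_size, 1.0)               # arbitrary initial pool of R~ samples
for _ in range(sweeps):                   # population-dynamics iteration of (5.1)
    kids = rng.integers(0, pool_size, size=(pool_size, 2))
    A = sample_A((pool_size, 2))
    R = 1.0 / (A ** 2 / (A + R[kids])).sum(axis=1)

q = np.quantile(R, [0.1, 0.5, 0.9])
print(f"R~(o) quantiles (10/50/90%): {q[0]:.3f} {q[1]:.3f} {q[2]:.3f}")
```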
03866679
en
[ "math.math-ap" ]
2024/03/04 16:41:24
2023
https://hal.science/hal-03866679v2/file/BDLR-sans-bord%20%281%29.pdf
Nicolas Burq (email: [email protected]), Belhassen Dehman (email: [email protected]) and Jérôme Le Rousseau. MEASURE PROPAGATION ALONG C 0 -VECTOR FIELD AND WAVE CONTROLLABILITY ON A ROUGH COMPACT MANIFOLD. Keywords: Rauch-Taylor / Bardos-Lebeau. Mathematics Subject Classification: 35L05, 35Q49, 35Q93, 58Jxx, 93B05, 93B07.
Introduction
The observability property for the wave equation has been intensively studied during the last decades, mainly because of its deep connection with the problem of exact controllability. Until the end of the 80's, most of the positive results of observability were established under a (global) geometric assumption, the so-called Γ-condition introduced by J.-L. Lions, essentially based on and well adapted to a multiplier method [START_REF] Lions | Contrôlabilité exacte, Stabilisation et Perturbations de Systèmes Distribués. Tome 1. Contrôlabilité exacte[END_REF]. Later, following [START_REF] Rauch | Exponential decay of solutions to hyperbolic equations in bounded domains[END_REF], Bardos, Lebeau and Rauch established in [START_REF] Bardos | Sharp sufficient conditions for the observation, control and stabilization of waves from the boundary[END_REF] boundary observability inequalities under a geometric control condition (GCC in short), linking the set on which the control acts and the generalized geodesic flow. Proofs of this result are based on microlocal tools, such as the propagation in phase space of wavefront sets in [START_REF] Bardos | Sharp sufficient conditions for the observation, control and stabilization of waves from the boundary[END_REF] or the propagation of microlocal defect measures in more modern proofs [START_REF] Burq | Condition nécessaire et suffisante pour la contrôlabilité exacte des ondes[END_REF]. For the latter approach, microlocal defect measures originate from the concentration phenomena for sequences of waves if one assumes that observability does not hold. Away from boundaries one obtains t H p µ = 0, (1.1) yielding the transport of the measure µ along the bicharacteristic flow in phase space. This flow is generated by the Hamiltonian vector field H p associated with the symbol of the wave operator p. However, note that despite their high efficiency and robustness, these methods present the great disadvantage of requiring too much regularity in the coefficients of the wave operator and the geometry. To define the generalized bicharacteristic flow and prove the propagation properties mentioned above, a minimal smoothness of the metric and the boundary domain is needed. To our knowledge, the best result, for a C 2 metric, was proven in [START_REF] Burq | Contrôle de l'équation des ondes dans des ouverts peu réguliers[END_REF], and barely misses the natural minimal smoothness required to define the geodesic flow (W 2,∞ ) and thus the geometric control condition. In this context, in the present article, we address the following natural question: how can one derive observability estimates for the wave equation from optimal observation regions in the case of a nonsmooth metric? This problem has already received some attention and answers by E.
Zuazua and his collaborators, in [START_REF] Castro | Concentration and lack of observability of waves in highly heterogeneous media[END_REF][START_REF] Castro | Addendum to: "Concentration and lack of observability of waves in highly heterogeneous media[END_REF], and more recently in [START_REF] Fanelli | Weak observability estimates for 1-D wave equations with rough coefficients[END_REF] (see also the result of [START_REF] Dehman | Observability estimates for the wave equation with rough coefficients[END_REF]). More precisely, in [START_REF] Castro | Concentration and lack of observability of waves in highly heterogeneous media[END_REF][START_REF] Castro | Addendum to: "Concentration and lack of observability of waves in highly heterogeneous media[END_REF], the authors prove a lack of observability of waves in highly heterogeneous media, that is, if the density is of low regularity. In [START_REF] Fanelli | Weak observability estimates for 1-D wave equations with rough coefficients[END_REF], the authors establish observability with coefficients in the Zygmund class and also observability with loss when the coefficients are Log-Zygmund or Log-Lipschitz. Furthermore, this result is proven sharp since one observes a infinite loss of derivatives for a regularity lower than Log-Lipschitz. Note that these analyses are carried out in one space dimension. This calls for the following comments. First, in this simplified framework, for smooth coefficients all the geodesics reach the observability region in uniform time: captive geodesics are not an issue. Second, proofs are based on sidewise energy estimate, a technique that is specific to the one-dimensional setting; the underlying idea consists in exchanging the rôles of the time and space variables and, in fine, in proving hyperbolic energy estimates for waves with rough coefficients. Unfortunately, such method does not extend to higher space dimension. Furthermore, for the low regularity considered in these articles, the geodesic flow is not well defined. Proving propagation results for wavefront sets or microlocal defect measure appears quite out of reach in such cases. The present work is the first in a series of three articles devoted to the question of observability (and equivalently exact controllability) of wave equation with nonsmooth coefficients. Here, we initiate this study on a compact Riemannian manifold with a rough metric, yet without boundary, while the two forthcoming articles will present the counterpart analysis on manifolds with boundary (or bounded domains of R d ) [START_REF] Burq | Measure and continuous vector field at a boundary I: propagation equation and wave observability[END_REF][START_REF] Burq | Measure and continuous vector field at a boundary II: geodesics and support propagation[END_REF]. The presence of a boundary yields a much more involved analysis and in [START_REF] Burq | Measure and continuous vector field at a boundary I: propagation equation and wave observability[END_REF][START_REF] Burq | Measure and continuous vector field at a boundary II: geodesics and support propagation[END_REF] we develop Melrose-Sjöstrand generalized propagation theory in a low regularity framework. In the present article, our main result is the observability of the wave equation with a C 1 -metric, completed with the stability of the observability property for small Lipschitz (W 1,∞ ) perturbations of the metric. 
More precisely, we first show that if the geometric control condition in time T holds for geodesics associated with a C 1 -metric g, then the observability property holds for the wave equation, and equivalently exact controllability. In this low regularity case one has to carefully consider the meaning of the geometric condition (or more generally the meaning of a geodesic) since the metric does not define a natural geodesic flow: geodesics are not uniquely defined. Only their existence is guaranteed. Second, we consider a reference C 1 -metric g 0 as above and we prove that observability also holds for any Lipschitz metric g chosen sufficiently close to g 0 (in the Lipschitz topology). It has to be noticed that Lipschitz metrics are too rough to permit the use of microlocal tools and a direct proof of the observability property. Even worse, for such a metric, the geometric control condition itself does not seem to make sense (as the generating vector field is only L ∞ ), and we have to use a perturbation argument near the (not so) smooth C 1 reference metric. Following the strategy of [START_REF] Burq | Contrôle de l'équation des ondes dans des ouverts peu réguliers[END_REF], we argue by contradiction and we prove a propagation result for microlocal defect measures in a low regularity setting. We prove that these measures are solutions to the ODE (1.1), with H p now having C 0 -coefficients. Then, we deduce some general properties about their support. Namely, we show that their support is a union of integral curves of the vector field. This latter step also follows from Ambrosio-Crippa's superposition principle [START_REF] Ambrosio | Continuity equations and ODE flows with non-smooth velocity[END_REF]. Yet, we give a completely different proof which is of interest since it can be extended to the case of a domain with a boundary [START_REF] Burq | Measure and continuous vector field at a boundary I: propagation equation and wave observability[END_REF][START_REF] Burq | Measure and continuous vector field at a boundary II: geodesics and support propagation[END_REF]. We have not been able to extend the approach of [START_REF] Ambrosio | Continuity equations and ODE flows with non-smooth velocity[END_REF] to that case. To derive the ODE fulfilled by the microlocal defect measure we rely heavily on some harmonic analysis results due to R. Coifman and Y. Meyer [START_REF] Coifman | Au delà des opérateurs pseudo-différentiels[END_REF], Proposition IV.7, which states that the commutator of a pseudo-differential operator of order one and a Lipschitz function is a bounded operator on L 2 . Finally, going further in the analysis, we investigate another stability property with respect to perturbations of the metric. We prove that the HUM optimal control associated with fixed initial data is not stable with respect to perturbations of the metric. In Section 2 we recall some geometric facts and the notions of pseudo-differential calculus and microlocal defect (density) measures on a manifold. In addition, using bicharacteristics we state the geometric control condition of [START_REF] Bardos | Sharp sufficient conditions for the observation, control and stabilization of waves from the boundary[END_REF] in its classical form (C 2 -metric) and generalized form (C 1 -metric).
In Section 3 we recall what microlocal defect measures are and we show how, if associated with sequences of solutions of PDEs, their support can be estimated and how a transport ODE can be derived, in the particular context of low regularity of coefficients. Section 4 is devoted to our proof of the support propagation for measures solutions of a ODE with C 0 -coefficients, Theorem 1.10. In Section 5 we use the results of Sections 3 and the propagation result of Theorem 1.10 to prove the observability and controllability results for the wave equation, Theorems 1.11 and 1.12. Finally, in Section 6 we prove the results related to stability properties of the HUM control process. 1.2. Setting and well-posedness. Throughout the article, we consider M a d-dimensional C ∞ -compact manifold, that is, a manifold without boundary with a topology that makes it compact equipped with a C ∞ -atlas. We assume that the topology is also given by a Riemannian metric g, to be chosen either Lipschitz or of class C k for some value of k to be made precise below 1 . We denote by µ g the canonical positive Riemannian density on M, that is, the density measure associated with the density function (det g) 1/2 . We also consider a positive Lipschitz or of class C k -function κ and we define the density κµ g . The L 2 -inner product and norm are considered with respect to this density κµ g , that is, (u, v) L 2 (M) = M uv κµ g , u 2 L 2 (M) = M |u| 2 κµ g . (1.2) We denote by L 2 V (M) the space of L 2 -vector fields on M, equipped with the norm v 2 L 2 V (M) = M g(v, v) κµ g , v ∈ L 2 V (M). We recall that the Riemannian gradient and divergence are given by g(∇ g f, v) = v(f ) and M f div g vµ g = - M v(f ) µ g , for f a function and v a vector field, yielding in local coordinates (∇ g f ) i = 1≤j≤d g ij ∂ x j f, div g v = (det g) -1/2 1≤i≤d ∂ x i (det g) 1/2 v i , with (g ij x ) = (g x,ij ) -1 . We introduce the elliptic operator A = A κ,g = κ -1 div g (κ∇ g ), that is, in local coordinates Af = κ -1 (det g) -1/2 1≤i,j≤d ∂ x i κ(det g) 1/2 g ij (x)∂ x j f . 1 Note that despite considering C k metrics with k < ∞ we still impose the underlying manifold to be smooth. This is due to our use of pseudo-differential techniques that are simple to introduce on a smooth manifold. See Section 2.3 Its principal symbol is simply a(x, ξ) = -1≤i,j≤d g ij x ξ i ξ j . Note that for κ = 1, one has A = ∆ g , the Laplace-Beltrami operator associated with g on M. Similarly to ∆ g , the operator A is unbounded on L 2 (M). With the domain D(A) = H 2 (M) one finds that A is self-adjoint, with respect to the L 2 -inner product given in (1.2), and negative. Moreover, one has (Au, v) L 2 (M) = - M g(∇ g u, ∇ g v) κµ g , u ∈ H 2 (M), v ∈ H 1 (M). Together with A we consider the wave operator P κ,g = ∂ 2 t -A κ,g + m, with m > 0 a constant and the following equation P κ,g y = f in (0, +∞) × M, y |t=0 = y 0 , ∂ t y |t=0 = y 1 in M. (1.3) It is well-posed in the energy space H 1 (M) ⊕ L 2 (M). Proposition 1.1. Consider κ and g both of Lipschitz class. Let (y 0 , y 1 ) ∈ H 1 (M) × L 2 (M) and let f ∈ L 2 0, T ; L 2 (M) , for any T > 0. There exists a unique y ∈ C 0 [0, +∞); H 1 (M) ∩ C 1 [0, +∞); L 2 (M) that is a weak solution of (1.3), that is, y |t=0 = y 0 and ∂ t y |t=0 = y 1 and P κ,g y = f in D (0, +∞) × M . Remark 1.2. At this level of regularity of κ and g, the well-posedness of the wave equation is classical. 
For less regular coefficients we refer to [START_REF] Colombini | A note on hyperbolic operators with log-Zygmund coefficients[END_REF] and [START_REF] Colombini | Time-dependent loss of derivatives for hyperbolic operators with non regular coefficients[END_REF]. In what follows, for simplicity we shall consider the case m = 1, that is, P κ,g = ∂ 2 t -A κ,g + 1. In this case, we denote by E κ,g (y)(t) = 1 2 y(t) 2 H 1 (M) + ∂ t y(t) 2 L 2 (M) = 1 2 y(t) 2 L 2 (M) + ∇ g y(t) 2 L 2 V (M) + ∂ t y(t) 2 L 2 (M) , the energy of this solution at time t. For a weak solution y of (1.3), if f = 0 this energy is independent of time t, that is, E κ,g (y)(t) = E κ,g (y)(0) = 1 2 y 0 2 H 1 (M) + y 1 2 L 2 (M) . Remark 1.3. The equation we consider, with the constant m > 0, is often referred to as the Klein-Gordon equation. Here, we keep the name wave equation. We choose this equation instead of the classical wave equation that corresponds to the case m = 0. In fact, on a compact manifold without boundary, constants are eigenfunctions of the elliptic operator A κ,g with 0 as an eigenvalue. Hence, constant functions are solutions to the wave equation and are so-called invisible solutions, as far as the observability property we are interested in is concerned. If one considers a manifold with boundary and, say, homogeneous Dirichlet conditions, this issue becomes irrelevant. We could have dealt with the case m = 0 (the usual wave equation) at the price of additional technical complications. 1.3. Exact controllability and observability. Let ω be a nonempty open subset of M and T > 0. The notion of exact controllability for the wave equation from ω at time T is stated as follows. Definition 1.4 (exact controllability in H 1 (M) ⊕ L 2 (M)). One says that the wave equation is exactly controllable from ω at time T > 0 if for any (y 0 , y 1 ) ∈ H 1 (M) × L 2 (M), there exists f ∈ L 2 ((0, T ) × M) such that the weak solution y to (1.4) P κ,g y = 1 (0,T )×ω f, (y |t=0 , ∂ t y |t=0 ) = (y 0 , y 1 ), as given by Proposition 1.1 satisfies (y, ∂ t y) |t=T = (0, 0). The function f is called the control function or simply the control. Observability of the wave equation from the open set ω in time T is the following notion. Definition 1.5 (observability). One says that the wave equation is observable from ω at time T if there exists C obs > 0 such that for any (u 0 , u 1 ) ∈ H 1 (M) × L 2 (M) one has (1.5) E κ,g (u)(0) ≤ C obs 1 (0,T )×ω ∂ t u 2 L 2 (L) , for u ∈ C 0 [0, T ]; H 1 (M) ∩ C 1 [0, T ]; L 2 (M) the weak solution to P κ,g u = 0 with u |t=0 = u 0 and ∂ t u |t=0 = u 1 as given by Proposition 1.1 (see [START_REF] Lions | Contrôlabilité exacte, Stabilisation et Perturbations de Systèmes Distribués. Tome 1. Contrôlabilité exacte[END_REF]). Proposition 1.6. Let ω be an open subset of M and T > 0. The wave equation is exactly controllable from ω at time T if and only if it is observable from ω at time T . Remark 1.7. In the case m = 0 the energy function is given by E κ,g (u)(t) = 1 2 ∂ t u(t) 2 L 2 (M) + ∇ g u(t) 2 L 2 V (M) . It follows that a constant function u, solution to the wave equation (∂ 2 t -A)u = 0, has zero energy. Since 1 (0,T )×ω ∂ t u 2 L 2 (L) also vanishes, one sees that such solutions are invisible for an observability inequality of the form (1.5). Possibilities to overcome this difficulty are to work in a quotient space or to change the wave operator into the Klein-Gordon operator. Here, we chose for simplicity the latter option. 1.4. Main results. 
We introduce the following spaces for the coefficients (κ, g) to distinguish various levels of regularity: X 2 (M) = {(κ, g); κ ∈ C 2 (M) and g is a C 2 -metric on M}, X 1 (M) = {(κ, g); κ ∈ C 1 (M) and g is a C 1 -metric on M}, Y(M) = {(κ, g); κ ∈ W 1,∞ (M) and g is a W 1,∞ -metric on M}. We start by recalling the controllability result known for regularity higher than or equal to C 2 , under the Rauch-Taylor geometric control condition. Definition 1.8 (Rauch-Taylor, geometric control condition). Let g be a C k metric, k = 1 or 2, and let ω be an open set of M and T > 0. One says that (ω, T ) fulfills the geometric control condition if all maximal geodesics associated with g, travelled at speed one, encounter ω for some time t ∈ (0, T ). A second formulation of this geometric condition based on the dual notion of bicharacteristics is given in Section 2.2 below. Theorem 1.9 (Exact controllability -C 2 -regularity). Consider (κ, g) ∈ X 2 (M), ω an open subset of M and T > 0 such that (ω, T ) fulfills the geometric control condition of Definition 1.8. Then, the wave equation is exactly controllable from ω at time T . This result was first proven by Rauch and Taylor [START_REF] Rauch | Exponential decay of solutions to hyperbolic equations in bounded domains[END_REF] for a smooth metric. The case (κ, g) ∈ X 2 (M) was proven by the first author in [START_REF] Burq | Contrôle de l'équation des ondes dans des ouverts peu réguliers[END_REF]. On smooth open sets of R d , or equivalently on manifolds with boundary equipped with smooth (κ, g), for instance in the case of homogeneous Dirichlet boundary conditions, this result is given in the celebrated articles of Bardos, Lebeau and Rauch [START_REF] Bardos | Un exemple d'utilisation des notions de propagation pour le contrôle et la stabilisation de problèmes hyperboliques[END_REF][START_REF] Bardos | Sharp sufficient conditions for the observation, control and stabilization of waves from the boundary[END_REF]. In the present article, we extend the result of Theorem 1.9 to cases of rougher coefficients. Our extension is twofold: (1) we treat the case (κ, g) ∈ X 1 (M) and, (2) we treat small perturbations in Y(M) of some (κ, g) ∈ X 1 (M). Most importantly, these two results rely on the understanding of the structure of the support of a nonegative measure subject to a homogeneous transport equation with continuous coefficients. Consider a continuous vector field X on O and let µ be a nonnegative measure density on O. Assume that µ is such that t Xµ = 0 in the sense of distributions, that is, (1.6) t Xµ, a 1 D (O),C ∞ c (O) = µ, Xa 1 D ,0 (O),C 0 c (O) = 0, a ∈ C ∞ c (O). If X is moreover Lipschitz, one concludes that µ is invariant along the flow that X generates. However, if X is not Lipschitz, there is no such flow in general. Yet, integral curves do exist by the Cauchy-Peano theorem. The following theorem provides a structure of the support of µ. Theorem 1.10. Let X be a continuous vector field on O and µ be a nonnegative density measure on O that is solution to t Xµ = 0 in the sense of distributions. Then, the support of µ is a union of maximally extended integral curves of the vector field X. In other words, if m 0 ∈ O is in supp(µ), then there exist an interval I in R with 0 ∈ I and a C 1 curve γ : I → O that cannot be extended such that γ(0) = m 0 and d ds γ(s) = X(γ(s)), s ∈ I, and γ(I) ⊂ supp(µ). Theorem 1.10 can actually be obtained as a consequence of the superposition principle of L. Ambrosio and G. 
Crippa [START_REF] Ambrosio | Continuity equations and ODE flows with non-smooth velocity[END_REF]Theorem 3.4]. Here, we provide an alternative proof that is of interest as it allows one to extend this measure support structure result to the case of an open set or a manifold with boundary [START_REF] Burq | Measure and continuous vector field at a boundary II: geodesics and support propagation[END_REF] as needed for our application to observability and controllability. Ambrosio and Crippa's proof is based on a smoothing-by-convolution argument. Extending this approach does not seem to be straightforward in the context of a boundary. Theorem 1.10 is proven in Section 4 and its proof is independent of the other sections of the article. A reader only interested in our proof of Theorem 1.10 may thus head to Section 4 directly. 1.4.2. Exact controllability results. If (κ, g) ∈ X 2 (M), x ∈ M and v ∈ T x M there is a unique geodesic originating from x in direction v. In the case (κ, g) ∈ X 1 (M) uniqueness is lost. Existence holds however and maximal (here global, see below) geodesics can still be defined by the Cauchy-Peano theorem. In particular, the geometric control condition of Definition 1.8 still makes sense. As announced above, our first result is the following theorem. Theorem 1.11 (Exact controllability -C 1 -regularity). Consider (κ, g) ∈ X 1 (M), ω an open subset of M and T > 0 such that (ω, T ) fulfills the geometric control condition of Definition 1.8. Then, the wave equation is exactly controllable from ω at time T . A second result is the following perturbation result. Theorem 1.12 (Exact controllability -Lipschitz perturbation). Let (κ 0 , g 0 ) ∈ X 1 (M), ω an open subset of M and T > 0 be such that (ω, T ) fulfills the geometric control condition of Definition 1.8 with respect to the metric g 0 . There exists ε > 0 such that for any (κ, g) ∈ Y(M) satisfying (κ, g) -(κ 0 , g 0 ) Y(M) ≤ ε, the wave equation associated with (κ, g) is exactly controllable by ω in time T . Observe that Theorem 1.11 is a direct consequence of Theorem 1.12. We shall thus concentrate on this second more general result. Its proof relies on the measure support structure result of Theorem 1.10. The sequence of Theorems 1.9, 1.11, and 1.12 calls for the following important comment. Under the assumption of Theorem 1.9, that is, (κ, g) ∈ X 2 (M), there is a geodesic flow and the geometric condition of Definition 1.8 is actually a condition on the flow. Under the assumption of Theorem 1.11, that is, (κ, g) ∈ X 1 (M), as pointed out above there is no geodesic flow in general. Yet, maximal geodesics are still well defined and, the geometric condition of Definition 1.8 makes sense because it does not refer to a flow. However, under the assumption of Theorem 1.12, that is, (κ, g) ∈ Y(M), geodesics cannot be defined in general. No geometric condition can be formulated. Yet, Theorem 1.12 is a perturbation result and a geometric condition is expressed for a reference pair (κ 0 , g 0 ) ∈ X 1 (M) around which a (small) neighborhood in Y(M) is considered. The following remark further emphasizes that the perturbation is to be considered around a pair (κ 0 , g 0 ) ∈ X 1 (M) for which the geometric control condition holds and not around a pair (κ 0 , g 0 ) ∈ X 1 (M) for which exact controllability (or equivalently observability) holds. Remark 1.13 (On the perturbation result). 
Having both our results, geometric control for C 1 metrics and Lipschitz stability of exact controllability around a reference metric satisfying the geometric control condition, a natural question is whether the exact controllability property is itself stable by perturbation. On the one hand, it is classical that the exact controllability property is stable under lower-order perturbations of the elliptic operator A κ,g but on the other hand, it is possible to show that it is not stable under (smooth) perturbations of the geometry or the metric. Let us illustrate this instability property with a quite simple example. Consider the wave equation on the sphere S d = {x ∈ R d+1 ; i x 2 i = 1}, endowed with its standard metric and with control domain the open hemisphere ω = {x ∈ S d ; x 1 > 0}. Even though ω does not fulfill the geometric control condition of Definition 1.8 exact controllability holds for this geometry, an unpublished result by G. Lebeau (see [25, Section VI.B] and [START_REF] Zhu | Stabilization of damped waves on spheres and Zoll surfaces of revolution[END_REF] for extensions). Consider now the sphere endowed with the above standard metric, with the smaller control domain ω ε = {x ∈ S d ; x 1 > ε}, for some ε > 0. This second geometry is ε-close to the Lebeau example in the C ∞ -topology. Yet, for all ε > 0, exact controllability does not hold, because there exists a geodesic (the equator, {x ∈ S d ; x 1 = 0}) that does not encounter ω ε ). This shows that in Theorem 1.12, the assumption that the reference geometry should satisfy the geometric control condition cannot be replaced by the weaker assumption that it should satisfy the exact controllability property. This also shows that our perturbation argument will have to be performed on the actual proof that geometric control implies exact controllability and not on the final property itself. 1.4.3. Further results on the control operator. We finish this section with results analyzing the influence of some metric perturbations on the control process. We introduce further levels of regularity for the coefficients by setting for k ∈ N ∪ {+∞}, X k (M) = {(κ, g); κ ∈ C k (M) and g is a C k -metric on M}. First, we consider k ≥ 2. We recall the notation P κ,g = ∂ 2 t -A κ,g +1 with A κ,g = κ -1 div g (κ∇ g ), and we assume that (κ, g) ∈ X k (M), and that (ω, T ) satisfies the geometric control condition of Definition 1.8 for geodesics given by the metric g. Then, by Theorem 1.9, given (y 0 , y 1 ) ∈ H 1 (M) × L 2 (M), there exists f ∈ L 2 ((0, T ) × ω) such that the solution to (1.4) satisfies y(T ) = 0 and ∂ t y(T ) = 0. One can prove that among all possible control functions there is one of minimal L 2 -norm. We denote by f y 0 ,y 1 κ,g this control function usualy named HUM control function (cf. for instance [START_REF] Lions | Contrôlabilité exacte, Stabilisation et Perturbations de Systèmes Distribués. Tome 1. Contrôlabilité exacte[END_REF]). Moreover, the map H κ,g : H 1 (M) ⊕ L 2 (M) → L 2 ((0, T ) × M) (1.7) (y 0 , y 1 ) → f y 0 ,y 1 κ,g , is continuous. Note that f y 0 ,y 1 κ,g is actually a weak solution of the wave equation with initial data in L 2 (M) × H -1 (M), meaning that one moreover has f y 0 ,y 1 κ,g ∈ C 0 ([0, T ], L 2 (M)). Theorem 1.14 (Lack of continuity of the HUM-operator -Case k ≥ 2). Let k ≥ 2 and (κ, g) as above. 
For any neighborhood U of (κ, g) in X k (M), there exist (κ̃, g̃) ∈ U and an initial data (y 0 , y 1 ) ∈ H 1 (M) × L 2 (M), with y 0 2 H 1 + y 1 2 L 2 = 1, such that the respective solutions y and ỹ of (1.8) P κ,g y = 1 (0,T )×ω f y 0 ,y 1 κ,g in (0, T ) × M, (y, ∂ t y) |t=0 = (y 0 , y 1 ) in M, P κ̃,g̃ ỹ = 1 (0,T )×ω f y 0 ,y 1 κ,g in (0, T ) × M, (ỹ, ∂ t ỹ) |t=0 = (y 0 , y 1 ) in M, are such that (1.9) E κ,g (ỹ -y)(T ) = E κ,g (ỹ)(T ) ≥ 1/2. Moreover, there exists C T > 0 such that (1.10) (H κ,g -H κ̃,g̃ )(y 0 , y 1 ) L 2 ((0,T )×ω) = f y 0 ,y 1 κ,g -f y 0 ,y 1 κ̃,g̃ L 2 ((0,T )×ω) ≥ C T , for (y 0 , y 1 ) as given above. Remark 1.15. The result of Theorem 1.14 states that starting from the same initial data and solving the two wave equations with the same control vector f κ,g associated with P κ,g , a small perturbation of the metric can induce a large error for the final state (ỹ(T ), ∂ t ỹ(T )). In other words, the two dynamics are no longer close. In particular, the map X k (M) ∋ (κ, g) −→ H κ,g ∈ L(H 1 (M) ⊕ L 2 (M), L 2 ((0, T ) × M)) is not continuous. Remark 1.16. The result of Theorem 1.14 can also be stated on open bounded smooth domains of R n in the case of homogeneous Dirichlet conditions. In fact, as can be checked in what follows, its proof only relies on basic properties of microlocal defect measures (support localization and propagation) that are known to be valid in this framework (see [START_REF] Burq | Contrôle de l'équation des ondes dans des ouverts peu réguliers[END_REF]). Remark 1.17. In the statement of Theorem 1.14, if the neighborhood U of (κ, g) in X k is small enough, the pair (ω, T ) also satisfies the geometric control condition of Definition 1.8 for (κ̃, g̃) and therefore f y 0 ,y 1 κ̃,g̃ is well defined. In particular, this is clear as in the case k ≥ 2 there is a well defined and unique geodesic flow. The case k = 1 is quite different as there is no geodesic flow, as already mentioned above. However, given (κ, g) ∈ X 1 and (ω, T ), if the Rauch-Taylor geometric control condition of Definition 1.8 holds for (ω, T ) for the geodesics associated with g, given any neighborhood U of (κ, g) in X 1 one can still find (κ̃, g̃) ∈ U such that (1) the geometric control condition still holds for the geodesics associated with g̃, (2) the result of Theorem 1.14 also holds. Theorem 1.14′ (Lack of continuity of the HUM-operator - Case k = 1). Let k = 1 and (κ, g) ∈ X 1 as above. For any neighborhood U of (κ, g) in X 1 (M), there exist (κ̃, g̃) ∈ U and an initial data (y 0 , y 1 ) ∈ H 1 (M) × L 2 (M), with y 0 2 H 1 + y 1 2 L 2 = 1, such that the geometric control condition of Definition 1.8 for geodesics given by the metric g̃ holds and moreover the results listed in Theorem 1.14 hold. The proofs of Theorems 1.14 and 1.14′ are given in Section 6.1. We finish this section with some remarks and some questions. Remark 1.18. In all results above we have used 1 (0,T )×ω as a control operator, that is, the characteristic function of an open set. We could have also considered a control operator given by 1 (0,T ) (t)χ(x), with χ a smooth function on M. The controlled wave equation then has the form (1.11) P κ,g y = 1 (0,T ) χ f, (y |t=0 , ∂ t y |t=0 ) = (y 0 , y 1 ). In such a case, the open set to be used in the geometric control condition is ω = {χ ≠ 0}. This is often done this way, in particular since the smoothness of the function χ allows one to use some microlocal techniques that require regularity in the operator coefficients. 
The results and proofs of the present article can be written mutatis mutandis for this type of control operator. 1.4.4. Comparison with the smooth case and some open questions. Following on the previous remark, with a smooth in space control operator, one can wonder about the smoothness of the HUM operator. This question is addressed in the work of the second author jointly with G. Lebeau [START_REF] Dehman | Analysis of the HUM control operator and exact controllability for semilinear waves in uniform time[END_REF]. In fact, a gain of regularity in the initial data (y 0 , y 1 ) yields an equivalent gain of regularity in the HUM control function f y 0 ,y 1 κ,g . For instance, for (y 0 , y 1 ) ∈ H 2 (M)×H 1 (M) one finds f y 0 ,y 1 κ,g ∈ C 0 ([0, T ], H 1 (M)). Note that the result of [START_REF] Dehman | Analysis of the HUM control operator and exact controllability for semilinear waves in uniform time[END_REF] is proven in the case of smooth coefficients, that is, (κ, g) ∈ X ∞ . We thus consider this smooth case in the discussion that ends this introductory section. Open questions around the results of Theorems 1.14 and 1.14′ are then raised. As we shall see in their proofs, the results of Theorems 1.14 and 1.14′ rely on the high frequency behavior of the solutions to (1.8). In the case of smooth coefficients and a smooth control operator, if we assume smoother data (y 0 , y 1 ) in the HUM control process, the result of Theorem 1.14 does not hold any more. The HUM control process becomes regular with respect to (κ, g) as expressed in the following proposition. Proposition 1.19 (HUM control process for smooth data). Consider (κ, g) ∈ X ∞ (M) and let χ ∈ C ∞ (M). Set ω = {χ ≠ 0} and assume that (ω, T ) fulfills the geometric control condition of Definition 1.8 for the geodesics associated with (κ, g). Let α ∈ (0, 1]. There exists C α > 0 such that for any (κ̃, g̃) ∈ X ∞ (M) and any (y 0 , y 1 ) ∈ H 1+α (M) × H α (M), the respective solutions y and ỹ to P κ,g y = 1 (0,T ) χ f y 0 ,y 1 κ,g in (0, T ) × M, (y, ∂ t y) |t=0 = (y 0 , y 1 ) in M, P κ̃,g̃ ỹ = 1 (0,T ) χ f y 0 ,y 1 κ,g in (0, T ) × M, (ỹ, ∂ t ỹ) |t=0 = (y 0 , y 1 ) in M, satisfy E κ,g (y -ỹ)(T ) 1/2 ≤ C α (κ, g) -(κ̃, g̃) α X 1 (M) (y 0 , y 1 ) H 1+α (M)⊕H α (M) . The proof of Proposition 1.19 is given in Section 6.2. In the above proposition coefficients are chosen smooth, quite in contrast with the rest of this article. As explained above, and as the reader can check in the proof, this lies in the use of the regularity of the HUM operator with respect to the data (y 0 , y 1 ), a result proven for smooth coefficients in [START_REF] Dehman | Analysis of the HUM control operator and exact controllability for semilinear waves in uniform time[END_REF]. The result of Proposition 1.19 raises the following natural questions: (1) Does the HUM operator exhibit regularity with respect to the data (y 0 , y 1 ) similar to what is proven in [START_REF] Dehman | Analysis of the HUM control operator and exact controllability for semilinear waves in uniform time[END_REF] in the case of not so smooth coefficients? (2) If so, if one increases the smoothness of the data (y 0 , y 1 ) as in Proposition 1.19, does the HUM control process also become regular with respect to the metric? 2. Geometric aspects and operators. We define the smooth manifold L = R × M and T * L its cotangent bundle. We denote by π : T * L → L the natural projection. Elements in T * L are denoted by (t, x, τ, ξ). One has π(t, x, τ, ξ) = (t, x). 
Setting |ξ| 2 x = g x (ξ, ξ) the Riemannian norm in the cotangent space of M at x, we define S * L = {(t, x, τ, ξ) ∈ T * L, τ 2 + |ξ| 2 x = 1}, the cosphere bundle of L. We shall also use the associated cosphere bundle in the spatial variables only, S * M = {(x, ξ) ∈ T * M, |ξ| 2 x = 1/2}. For a C k -metric both S * M and S * L are C k -manifolds. Consider a C ∞ -atlas A M = (C M j ) j∈J of M, #J < ∞, with C M j = (O j , θ j ) where O j is an open set of M and θ j : O j → Õj is a bijection for Õj an open set of R d . For j ∈ J, we define C j = (O j , ϑ j ) with O j = R × O j and ϑ j : O j → Õj (t, x) → t, θ j (x) , with Õj = R × Õj . Then A = (C j ) j∈J is a C ∞ -atlas for L. In what follows for simplicity we shall use the same notation for an element of T * L and its local representative if no confusion arises. 2.1. Hamiltonian vector field and bicharacteristics. Let (κ, g) ∈ X k , k = 1 or 2. The principal symbol of the wave operator P κ,g is given by (2.1) p(t, x, τ, ξ) = p κ,g (t, x, τ, ξ) = -τ 2 + |ξ| 2 x , (t, x, τ, ξ) ∈ T * L. In local charts, one has p(t, x, τ, ξ) = -τ 2 + 1≤i,j≤d g ij (x)ξ i ξ j . Note that (g ij (x)) i,j is the inverse of (g ij (x)) i,j , the latter being the local representative of the metric. We denote by H p the Hamiltonian vector field associated with p, that is, the unique vector field such that {p, f } = H p f for any smooth function f . Here, {., .} denotes the Poisson bracket, that is, in local chart {p, f } = ∂ τ p ∂ t f -∂ t p ∂ τ f + 1≤j≤d (∂ ξ j p ∂ x j f -∂ x j p ∂ ξ j f ), (2.2) yielding H p = -2τ ∂ t + ∇ ξ p • ∇ x -∇ x p • ∇ ξ , as p is in fact independent of the time variable t. The Hamiltonian vector field H p is of class C k-1 . Observe that, for a function f of the variables (t, x, τ, ξ), one has t H p f = 2τ ∂ t f -div x (f ∇ ξ p) + div ξ (f ∇ x p), with which one deduces t H p = -H p , (2.3) even in the case (κ, g) ∈ X 1 . First, consider the case k = 2. Thus, H p is a C 1 -vector field. For ∈ T * L one denotes by s → φ s ( ) the unique maximal solution to (2.4) d ds φ s ( ) = H p φ s ( ), s ∈ R, and φ s=0 ( ) = , as given by the Cauchy-Lipschitz theorem. One calls (s, ) → φ s ( ) the Hamiltonian flow map. Let s → γ(s) be an integral curve of H p , that is, γ(s) = φ s ( ) for some ∈ T * L. For any smooth function f on T * L one has d ds f • γ(s) = H p f γ(s) , Note that H p τ = 0, meaning that the variable τ is constant along γ. Note also that the value of p remains constant along γ since H p p = {p, p} = 0. Hence, |ξ| 2 x = g x (ξ, ξ) is also constant. Thus, if γ(0) ∈ S * L then γ(s) remains in S * L, and for ∈ S * L, the vector field H p at is tangent to S * L. consequently, we may consider H p as a tangent vector field on the C 2 -manifold S * L. In particular H p a makes sense if a ∈ C 1 c (S * L). If moreover a ∈ C 2+ c (S * L), ≥ 0, one has H p a ∈ C 1 c (S * L) . Since H p p = 0, the flow φ s preserves Char(p) = p -1 ({0}), the characteristic set of p. As is done classically, we call bicharacteristic an integral curve for which p = 0. Observe then that (2.4) defines a flow on the C 2 -manifold Char(p) ∩ S * L = {(t, x, τ, ξ); τ 2 = 1/2 and |ξ| 2 x = 1/2}. Second, consider the case k = 1. Then H p is only a continuous vector field. Thus, for any ∈ Char(p) there exists a maximal bicharacteristic s → γ(s) defined on R such that γ(0) = , that is, d ds γ(s) = H p γ(s) , s ∈ R, by the Cauchy-Peano theorem. Uniqueness is however not guaranteed and the notion of flow cannot be used in the case k = 1. 
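Spelling things out in a local chart (a direct unfolding of the expression of H p given above, added here for convenience), the integral-curve system reads

\[ \dot t(s) = -2\tau, \qquad \dot x_j(s) = \partial_{\xi_j} p = 2 \sum_{1 \le i \le d} g^{ji}(x)\, \xi_i, \qquad \dot \tau(s) = 0, \qquad \dot \xi_j(s) = -\partial_{x_j} p = - \sum_{1 \le i,l \le d} \partial_{x_j} g^{il}(x)\, \xi_i \xi_l . \]

For a C 1 -metric the right-hand side is only continuous in (x, ξ), through the term $\partial_{x_j} g^{il}$; this is precisely where the Cauchy-Peano theorem yields existence without uniqueness. In the model case of a constant metric the last equation gives $\dot\xi = 0$ and the projected curves are straight lines.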
Since the value of |ξ| x remains constant and the manifold M is compact, maximal bicharacteristics are actually defined globally. As above, if γ(0) ∈ S * L (resp. Char(p) ∩ S * L) one has γ(s) ∈ S * L (resp. Char(p) ∩ S * L) for all s ∈ R. The hamiltonian vector field H p can be viewed a C 0 -vector field on the C 1manifold S * L (resp. on the C 1 -manifold Char(p) ∩ S * L). For a ∈ C 1+ c (S * L), ≥ 0, one finds H p a ∈ C 0 c (S * L). Finally, connection between bicharacteristic and geodesics can be made. For this we recall that if ξ ∈ T * x M for some x ∈ M one can define v ∈ T x M by v = ξ , that reads in local coordinates v i = j g ij (x)ξ j . In particular |v| 2 x = g x (v, v) = |ξ| x . If now 0 = (t 0 , x 0 , τ 0 , ξ 0 ) ∈ Char(p)∩S * L and let s → (s) = t(s), x(s), τ, ξ(s) be a bicharacteristic such that (0) = 0 . One has τ = τ 0 and t(s) = t 0 -2τ 0 s. The map X : t → x (t 0 -t)/(2τ 0 ) , can be proven to be the geodesic originating from x 0 in the direction given by v 0 = (ξ 0 ) and parameterized by t. We now compute the speed at which the geodesic is travelled. We have dX dt (t) = -1 2τ 0 dx(s) ds , which yields dX dt (t) = -1 2τ 0 ∇ ξ p x(s), ξ(s) = - ξ(s) τ 0 . It follows that | dX dt (t)| x = |ξ(s) | x /|τ 0 | = |ξ(s)| x /|τ 0 | = |ξ 0 | x /|τ 0 | = 1, since 0 ∈ Char(p). Hence, the projection of the bicharacteristic s → γ(s) yields a geodesic travelled at speed one. 2.2. Geometric control condition. As the projections of bicharacteristics onto L yield geodesics, in the case k ≥ 2, we can state the Rauch-Taylor geometric control condition [START_REF] Rauch | Exponential decay of solutions to hyperbolic equations in bounded domains[END_REF] formulated in Definition 1.8 with the notion of Hamiltonian flow introduced above. Definition 1.8 (geometric control condition, k ≥ 2). Let g be a C 2 metric and let ω be an open set of M and T > 0. One says that (ω, T ) fulfills the geometric control condition if for all ∈ Char(p) one has π φ s ( ) ∈ (0, T ) × ω for some s ∈ R. In the case k = 1, since g is only C 1 there is no flow in general, one rather writes the geometric control condition by means of maximal bicharacteristics. Definition 1.8 (generalized geometric control condition, k = 1). Let g be a C 1 metric and let ω be an open set of M and T > 0. One says that (ω, T ) fulfills the geometric control condition if for any maximal bicharacteristic s → γ(s) in Char(p) one has π γ(s) ∈ (0, T ) × ω for some s ∈ R. In other words, for all ∈ Char(p), all bicharacteristics that go through meet the cotangent bundle above (0, T ) × ω. Naturally, the Definitions 1.8 and 1.8 coincide in the case k = 2 because of the uniqueness of a bicharacteristic going through a point of Char(p). Symbols and pseudo-differential operators. Here, we follow [5, Section 1.1] for the notation. We denote by H k (X) or H k loc (X), with X = M or L, the usual Sobolev space for complex valued functions, endowed with its natural inner product and norm. In particular, the L 2 (X)-inner product is denoted by (., .) L 2 (X) . Classical polyhomogeous symbol classes on T * R n R n × R n are denoted by S m ph (R n × R n ) and the classes of associated operators by Ψ m ph (R n ). We recall that symbols in the class S m ph (R n ×R n ) behave well with respect to changes of variables, up to symbols in S m-1 ph (R n ×R n ) (see [START_REF] Hörmander | The analysis of linear partial differential operators[END_REF]Theorem 18.1.17 and Lemma 18.1.18]). 
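As a reminder, and with no claim of originality, in local coordinates the operator classes above are built on the standard quantization

\[ \mathrm{Op}(a)\, u(x) = (2\pi)^{-n} \int_{\mathbf{R}^n} e^{i x \cdot \xi}\, a(x,\xi)\, \hat u(\xi)\, d\xi, \qquad u \in C^\infty_c(\mathbf{R}^n), \quad \hat u(\xi) = \int_{\mathbf{R}^n} e^{-i x \cdot \xi}\, u(x)\, dx, \]

patched between charts by means of the atlas A. For instance, with $a(x,\xi) = \sum_{i,j} g^{ij}(x)\, \xi_i \xi_j$ one recovers $-\sum_{i,j} g^{ij}(x)\, \partial_{x_i} \partial_{x_j}$, that is, the principal part of $-A_{\kappa,g}$ in local coordinates, up to lower-order terms.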
We define S m c,ph (T * L) as the set of polyhomogeneous symbols of order m on T * L with compact support in the variables (t, x) ∈ L (note that compactness with respect to x ∈ M is obvious). Having the manifold M smooth is important for symbols and following pseudodifferential operators to be simply defined. For any m, the restriction to the sphere (2.5) S m c,ph (T * L) → C ∞ c (S * L), a → a |S * L , is onto. This allows one to identify a homogeneous symbol with a smooth function on S * L with compact support. We denote by Ψ m c,ph (L) the space of polyhomogeneous pseudo-differential operators of order m on L: one says that Q ∈ Ψ m c,ph (L) if Q maps C ∞ c (L) into D (L) and (1) its kernel K(x, y) ∈ D (L × L) is such that supp(K) is compact in L × L; (2) K(x, y) is smooth away from the diagonal ∆ L = {(t, x; t, x); (t, x) ∈ L}; (3) for any local chart C j = (O j , ϑ j ) and all φ [START_REF] Hörmander | The analysis of linear partial differential operators[END_REF]Chapter 18.1]). Note that the principal symbol is uniquely defined in S m c,ph (T * L) because of the polyhomogeneous structure (see the remark following Definition 18.1.20 in [START_REF] Hörmander | The analysis of linear partial differential operators[END_REF]). The application σ m enjoys the following properties. 0 , φ 1 ∈ C ∞ c ( Õj ) one has φ 1 • ϑ -1 j * • Q • ϑ * j • φ 0 ∈ Op S m c,ph (R d+1 × R d+1 ) . For Q ∈ Ψ m c,ph (L), we denote by σ m (Q) ∈ S m c,ph (T * L) the principal symbol of Q (see (1) The map σ m : Ψ m c,ph (L) → S m c,ph (T * L) is onto. (2) For all Q ∈ Ψ m c,ph (L), σ m (Q) = 0 if and only if Q ∈ Ψ m-1 c,ph (L). (3) For all Q ∈ Ψ m c,ph (L), σ m (Q * ) = σ m (Q). (4) For all Q 1 ∈ Ψ m 1 c,ph (L) and Q 2 ∈ Ψ m 2 c,ph (L), one has Q 1 Q 2 ∈ Ψ m 1 +m 2 c,ph (L) with σ m 1 +m 2 (Q 1 Q 2 ) = σ m 1 (Q 1 )σ m 2 (Q 2 ). ( ) For all Q 1 ∈ Ψ m 1 c,ph (L) and Q 2 ∈ Ψ m 2 c,ph (L), one has [Q 1 , Q 2 ] = Q 1 Q 2 -Q 2 Q 1 ∈ Ψ m 1 +m 2 -1 c,ph (L) with σ m 1 +m 2 -1 ([Q 1 , Q 2 ]) = 1 i {σ m 1 (Q 1 ), σ m 2 (Q 2 )}. (6) If Q ∈ Ψ m c,ph (L), then Q maps continuously H k loc (L) into H k-m comp (L). In particular, for m < 0, Q is compact on L 2 loc (L). Given an operator Q ∈ Ψ m c,ph (L), one sets Char(Q) = Char σ m (Q) = { ∈ T * L, σ m (Q)( ) = 0}. 5 Microlocal defect measure and propagation properties A defect measure is used to characterize locally the failure of a sequence to strongly converge, meaning some concentration phenomenum. This characterization can be made finer by further considering microlocal concentration phenomena. for the duality bracket. This notation will also be used for a ∈ S 0 c,ph (T * L) according to the identification map (2.5). Consider a sequence (u k ) k∈N ⊂ L 2 loc (L) that converges weakly to 0. Here, to define the L 2 -norm and inner product on L we use a fixed (κ 0 , g 0 ) chosen in X 1 (M); see (1.2). As a consequence of [18, Theorem 1], there exists a subsequence of (u k ) k∈N (still denoted by (u k ) k∈N in what follows) and a density measure µ ∈ M + (S * L), such that (3.1) lim k→∞ Qu k , u k L 2 comp (L),L 2 loc (L) = µ, σ 0 (Q) S * L , for any Q ∈ Ψ 0 c,ph (L). Recall that symbols in S 0 c,ph (T * L) are compactly supported in time t here. We also refer to [START_REF] Tartar | H-measures, a new approach for studying homogenisation, oscillations and concentration effects in partial differential equations[END_REF] and [START_REF] Burq | Mesures semi-classiques et mesures de défaut Séminaire Bourbaki[END_REF]. 
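A classical example may help fix ideas (stated on R n rather than on L for simplicity; it is not used in the sequel): for $\varphi \in C^\infty_c(\mathbf{R}^n)$ and $\xi_0 \ne 0$, the oscillating sequence

\[ u_k(x) = \varphi(x)\, e^{i k\, x \cdot \xi_0} \]

converges weakly to $0$ in $L^2(\mathbf{R}^n)$ and, for $Q \in \Psi^0_{ph}(\mathbf{R}^n)$ with principal symbol $q$ homogeneous of degree $0$,

\[ (Q u_k, u_k)_{L^2(\mathbf{R}^n)} \ \longrightarrow\ \int_{\mathbf{R}^n} q\big(x, \xi_0/|\xi_0|\big)\, |\varphi(x)|^2\, dx , \]

so that the associated microlocal defect measure is $\mu = |\varphi(x)|^2\, dx \otimes \delta_{\xi_0/|\xi_0|}$: the sequence concentrates on the single frequency direction $\xi_0/|\xi_0|$.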
One calls µ a microlocal defect (density) measure associated with (u k ) k∈N . Similarly, one can use the notion of H 1 -microlocal defect density measure. Consider (u k ) k∈N ⊂ H 1 loc (L) that converges weakly to 0. Then, there exists a subsequence of (u k ) k∈N (still denoted by (u k ) k∈N ) and a density measure µ ∈ M + (S * L) such that for any Q ∈ Ψ 2 c,ph (L) (3.2) lim k→∞ Qu k , u k H -1 comp (L),H 1 loc (L) = µ, σ 2 (Q) S * L . Naturally, in either case, the density measure µ depends on the choice made of (κ 0 , g 0 ) ∈ X 1 (M). In what follows we shall make clear what choice is made. 3.2. Local representatives. Consider a finite atlas A = (C j ) j∈J on L, as introduced in Section 2, with C j = (O j , ϑ j ). Consider a smooth partition of unity (χ j ) j∈J subordinated to the covering by the open sets (O j ) j . We consider also χj , χj ∈ C ∞ (L) supported in O j such that χj ≡ 1 on a neighborhood of supp(χ j ) and χj ≡ 1 on a neighborhood of supp( χj ). Set also χ C j j = (ϑ -1 j ) * χ j , χC j j = (ϑ -1 j ) * χj , and χC j j = (ϑ -1 j ) * χj . One has χ C j j , χC j j , χC j j ∈ C ∞ c ( Õj ), with Õj = ϑ j (O j ). Let (u k ) k ⊂ H 1 loc (L) be a sequence that converges weakly to 0, Q ∈ Ψ 2 c,ph (L), and j ∈ J. One can write χ j Q = χ j Q χj + χ j Q(1 -χj ). Since χ j Q(1 -χj ) is a regularizing operator one finds µ, χ j σ 2 (Q) S * L ∼ χ j Qu k , u k H -1 comp (L),H 1 loc (L) ∼ χ j χj Q χj v k j , v k j H -1 comp (L),H 1 loc (L) , as k → +∞, for v k j = χj u k . The operator Q j = (ϑ -1 j ) * χj Q χj (ϑ j ) * is a pseudo-differential operator of order 2 on R d+1 with principal symbol q j = χ2 j q C j , where q C j is the local representative of σ 2 (Q). Set also v k,C j j = (ϑ -1 j ) * v k j . It converges weakly to 0 in H 1 (R d+1 ). Associated with this sequence is a microlocal defect measure µ j . If one writes χ j χj Q χj v k j , v k j H -1 comp (L),H 1 loc (L) = χ C j j Q j v k,C j j , v k,C j j H -1 comp (R d+1 ),H 1 loc (R d+1 ) , one obtains µ, χ j σ 2 (Q) S * L = µ j , χ C j j q j S * Õj = µ j , χ C j j q C j S * Õj . Note that here, the L 2 -and H s -norms on R d+1 are based on the local representative of the density measure κ 0 µ g 0 dt. One thus sees that the local representative of χ j µ is precisely χ C j j µ j , that is, χ j µ = ϑ * j χ C j j µ j = χ j ϑ * j µ j . Summing up, we thus have µ = j∈J χ j µ = j∈J χ j ϑ * j µ j and µ, σ 2 (Q) S * L = j∈J µ, χ j σ 2 (Q) S * L = j∈J µ j , χ C j j q C j S * Õj . We now recall a theorem due to R. Coifman and Y. Meyer ([START_REF] Coifman | Au delà des opérateurs pseudo-différentiels[END_REF]Proposition IV.7]) and some of its consequences that we list below. Theorem 3.2 (Coifman-Meyer). Let Q ∈ Ψ 1 ph (R n × R n ). If m ∈ W 1,∞ (R n ) the commutator [Q, m] maps L 2 (R n ) into itself continuously. Moreover there exists C > 0 such that [Q, m] L 2 →L 2 ≤ C m W 1,∞ , m ∈ W 1,∞ (R n ). We deduce the following corollary. Corollary 3.3. Let Q ∈ Ψ 1 ph (R n ×R n ) be such that its kernel has compact support in R n ×R n and let q ∈ S 1 ph (R n × R n ) be its principal symbol. Let m ∈ C 1 (R n ). There exist K 1 and K 2 , compact operators on L 2 (R n ), with compactly supported kernels, such that [Q, m] = 1 i ∇ x m • Op(∇ ξ q) + K 1 = 1 i Op(∇ ξ q) • ∇ x m + K 2 . (3.3) Proof. Consider a sequence (m k ) k∈N ⊂ C ∞ (R n ) such that |α|≤1 ∂ α x (m k -m) L ∞ → 0 as k → +∞. Classical symbolic calculus gives [Q, m k ] = 1 i ∇ x m k • Op(∇ ξ q) + K k 1 , (3.4) with K k 1 = Op(r k 1 ) for some r k 1 ∈ S -1 ph . Thus, K k 1 is bounded from L 2 (R n ) into H 1 (R n ). In addition, since K k 1 has a kernel with compact support in R n × R n , it is compact on L 2 (R n ). Note that the support of the kernel of K k 1 lies in a compact K of R n × R n that is uniform with respect to k. On the other hand, observe that ∇ x m k • Op(∇ ξ q) → ∇ x m • Op(∇ ξ q) in L (L 2 (R n )). Moreover, from Theorem 3.2 applied to m k -m, one also has [Q, m k ] → [Q, m] in L (L 2 (R n )). Using then (3.4) we deduce that (K k 1 ) k∈N converges to some K 1 in L (L 2 (R n )), and from the closedness of the set of compact operators in L (L 2 (R n )) we find that K 1 is compact. Moreover, K 1 has a kernel supported in K. The limits above give the first equality in (3.3). The second equality follows similarly. Let Ω be a bounded open set of R n and (κ 0 , g 0 ) ∈ X 1 (Ω), with definition adapted from that of X 1 (M). The L 2 -inner product and norm are given by the density κ 0 µ g 0 . The following result is also a consequence of Theorem 3.2. Proposition 3.4. Let (u k ) k∈N ⊂ H 1 loc (Ω) be a sequence that converges weakly to 0 and let µ be a H 1 -microlocal defect density measure on S * Ω associated with the sequence (u k ) k . Let b 1 ∈ W 1,∞ (R n ) and b 2 ∈ C 0 (R n ). Consider also Q 1 , Q 2 ∈ Ψ 1 ph (R n ), both with kernels compactly supported in Ω × Ω, with q 1 , q 2 ∈ S 1 ph (R n × R n ) as respective principal symbols. Then, one has b 1 Q 1 b 2 Q 2 u k , u k H -1 comp (Ω),H 1 loc (Ω) -→ k→+∞ µ, b 1 b 2 q 1 q 2 S * Ω . (3.5) More generally, assume that (b k 1 ) k∈N ⊂ W 1,∞ (R n ) and (b k 2 ) k∈N ⊂ L ∞ (R n ), and (κ k , g k ) k∈N ⊂ Y(Ω) with b k 1 -b 1 W 1,∞ (R n ) + b k 2 -b 2 L ∞ (R n ) + (κ k , g k ) -(κ 0 , g 0 ) Y(Ω) → 0, as k → +∞. Then b k 1 Q 1 b k 2 Q 2 u k , u k H -1 comp (Ω,κ k µg k ),H 1 loc (Ω,κ k µg k ) -→ k→+∞ µ, b 1 b 2 q 1 q 2 S * Ω . (3.6) Remark 3.5. Note that b 1 is chosen in W 1,∞ (R n ) because one cannot multiply an element in H -1 by a bounded function. One derivative is needed. Proof of Proposition 3.4. With Lemma 3.6 below we may replace the density κ k µ g k in the L 2 -inner product by κ 0 µ g 0 and thus in the H -1 comp -H 1 loc duality. We write b k 1 Q 1 b k 2 Q 2 = b 1 Q 1 b 2 Q 2 + R k , R k = b 1 Q 1 (b k 2 -b 2 ) Q 2 + (b k 1 -b 1 )Q 1 b k 2 Q 2 . Note that R k maps H 1 loc (Ω) into H -1 comp (Ω) continuously. Moreover, because of the convergences of b k 1 and b k 2 , and the boundedness of (u k ) k∈N in H 1 loc (Ω), one finds that R k u k → 0 strongly in H -1 comp (Ω). Thus we can write b k 1 Q 1 b k 2 Q 2 u k , u k H -1 comp (Ω),H 1 loc (Ω) = b 1 Q 1 b 2 Q 2 u k , u k H -1 comp (Ω),H 1 loc (Ω) + o(1) k→+∞ . According to Theorem 3.2 the commutator [b 1 , Q 1 ] is bounded on L 2 (Ω) implying that [b 1 , Q 1 ] b 2 Q 2 u k is bounded in L 2 (Ω) yielding [b 1 , Q 1 ] b 2 Q 2 u k , u k H -1 comp (Ω),H 1 loc (Ω) = ([b 1 , Q 1 ] b 2 Q 2 u k , u k ) L 2 (Ω) -→ k→+∞ 0, since u k → 0 strongly in L 2 (Ω). We may thus assume that b 1 = 1 without any loss of generality. Let ε > 0 and let b ε 2 ∈ C ∞ (Ω) be such that b 2 -b ε 2 L ∞ ≤ ε. Write Q 1 b 2 Q 2 = Q 1 b ε 2 Q 2 + R ε , R ε = Q 1 (b 2 -b ε 2 ) Q 2 . One has | R ε u k , u k H -1 comp (Ω),H 1 loc (Ω) | ≤ Cε, and this leads to Q 1 b 2 Q 2 u k , u k H -1 comp (Ω),H 1 loc (Ω) = Q 1 b ε 2 Q 2 u k , u k H -1 comp (Ω),H 1 loc (Ω) + o(1) ε→0 + o(1) k→+∞ . (3.7) Since b ε 2 is smooth, by symbolic calculus one has Q 1 b ε 2 Q 2 u k , u k H -1 comp (Ω),H 1 loc (Ω) -→ k→+∞ µ, b ε 2 q 1 q 2 S * Ω . (3.8) Finally, since µ, b ε 2 q 1 q 2 S * Ω → µ, b 2 q 1 q 2 S * Ω as ε → 0, with (3.7) and (3.8) one concludes that (3.5) holds. 
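To illustrate Remark 3.5, here is the duality computation, sketched with no claim of optimality, showing why one derivative on $b_1$ is needed: for $b \in W^{1,\infty}(\mathbf{R}^n)$, $u \in H^{-1}(\mathbf{R}^n)$ and $\phi \in H^1(\mathbf{R}^n)$ one sets

\[ \langle b\, u, \phi \rangle_{H^{-1}, H^1} := \langle u, b\, \phi \rangle_{H^{-1}, H^1}, \]

which makes sense since the Leibniz rule gives $\nabla(b\phi) = \phi\, \nabla b + b\, \nabla \phi$ and hence

\[ \| b\, \phi \|_{H^1} \lesssim \| b \|_{W^{1,\infty}}\, \| \phi \|_{H^1} . \]

For $b$ merely in $L^\infty$ the product $b\phi$ need not belong to $H^1$ and the bracket above is not defined in general.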
Lemma 3.6. Assume that (κ k , g k ) -(κ 0 , g 0 ) Y(Ω) → 0 and consider a sequence (f k , h k ) k∈N bounded in L 2 comp (Ω) ⊕ L 2 loc (Ω). Then (f k , h k ) L 2 (Ω,κ k µg k ) = (f k , h k ) L 2 (Ω) + o(1) k→+∞ . If (f k , h k ) k∈N is bounded in H -1 comp (Ω) ⊕ H 1 loc (Ω) then f k , h k H -1 comp (Ω,κ k µg k ),H 1 loc (Ω,κ k µg k ) = f k , h k H -1 comp (Ω),H 1 loc (Ω) + o(1) k→+∞ . Here, Lemma 3.6 is written in the case of a bounded open set of the the Euclidean space but the same result holds in the case of a compact manifold. Proof. One has µ g 0 = (det g 0 ) 1/2 dx and µ g k = (det g k ) 1/2 dx. Therefore κ k µ g k = α k κ 0 µ g 0 with α k = κ k κ 0 det g k det g 0 1/2 and α k → 1 in the Lipschitz norm. Measures and partial differential equations. Microlocal defect measures associated with sequences of solutions of partial differential equations with smooth coefficients can have properties such as support localization in the characteristic set and invariance along the Hamiltonian flow. With the material developed above, we extend these results to the case of C 1 -coefficients. We focus on the case of wave operators. Proposition 3.7. Let (κ 0 , g 0 ) ∈ X 1 (M) and set p 0 (x, τ, ξ) = -τ 2 + g 0 x (ξ, ξ), that is, the principal symbol of P 0 = P κ 0 ,g 0 . Let (κ k , g k ) k∈N ⊂ Y(M) be such that (κ k , g k ) -(κ 0 , g 0 ) Y(M) → 0 as k → +∞ and set P k = P κ k ,g k . Consider a sequence (u k ) k∈N ⊂ H 1 loc (L) that converges to 0 weakly and µ a H 1 -microlocal defect density measure associated with (u k ) k∈N . Let T 1 < T 2 . The following properties hold. ( ) If P k u k → 0 strongly in H -1 loc (T 1 , T 2 ) × M then (3.9) supp(µ) ∩ S * ((T 1 , T 2 ) × M) ⊂ Char(p 0 ). 1 ( ) If moreover P k u k → 0 strongly in L 2 loc (T 1 , T 2 ) × M then one has (3.10) t H p 0 µ = 0 in the sense of distributions on S * (T 1 , T 2 ) × M , that is, µ, H p 0 q S * L = 0 for all q ∈ C ∞ c S * (T 1 , T 2 ) × M . 2 Since H p 0 is a tangent vector field on S * L where µ lives (see Section 2.1) note that t H p 0 µ makes sense in the second item of the proposition. Moreover note that H p 0 is a tangent vector field on S * L ∩ Char(p 0 ) and one has supp(µ) ∩ S * ((T 1 , T 2 ) × M) ⊂ Char(p 0 ) by the first item of the proposition. Finally, notice that for a Hamiltonian vector field, H p 0 = -t H p 0 as recalled in Section 2.1 even in the case (κ 0 , g 0 ) ∈ X 1 (M). Naturally, Proposition 3.7 and its proof can be adapted to the other energy levels. We shall also need the following result. Let T 1 < T 2 . The following properties hold. ( ) If P k u k → 0 strongly in H -2 loc (T 1 , T 2 ) × M then supp(µ) ∩ S * ((T 1 , T 2 ) × M) ⊂ Char(p 0 ). (2) If moreover P k u k → 0 strongly in H -1 loc (T 1 , T 2 ) × M then one has t H p 0 µ = 0 in the sense of distributions on S * (T 1 , T 2 ) × M . Proof of Proposition 3.7. Consider B ∈ Ψ 0 c,ph (L) with kernel supported in (T 1 , T 2 )×M 2 and 1 b ∈ S 0 c,ph (L) its principal symbol. For the definition of the L 2 -inner product we use (κ 0 , g 0 ). We also use the partition of unity 1 = j∈J χ j with χ j ∈ C ∞ c (O j ) associated with the atlas A and the additional cutoff functions χj , χj ∈ C ∞ c (O j ) that are introduced in Section 3.2 and, as obtained in that section, we write BP k u k , u k H -1 comp (L),H 1 loc (L) = j∈J χ j BP k u k , u k H -1 comp (L),H 1 loc (L) (3.11) = j∈J χ j χj BP k χj v k j , v k j H -1 comp (L),H 1 loc (L) + o(1) k→+∞ , with v k j = χj u k . 
Associated with (ϑ -1 j ) * v k j , the local representative of v k j , is a microlocal defect measure µ j in ϑ j (O j ) = Õj = R × Õj and χ C j j µ j is the local representative of χ j µ in this chart. See Section 3.2. Note that we use local representatives of the operators, functions, measures without introducing any new symbols. Yet to keep clear that the analysis is carried out in a local chart we use the notation L 2 ( Õj ), H s ( Õj ) and not L 2 (L), H s (L). To further lighten notation we set κk = (det g k ) 1/2 κ k . One has P k = ∂ 2 t -(κ k ) -1 p,q ∂ p κk g pq k ∂ q + 1 = Pk - p,q R p,q k , with Pk = ∂ 2 t -p,q ∂ p g pq k ∂ q + 1 and R p,q k = (κ k ) -1 [∂ p , κk ]g pq k ∂ q . Note that χj BR p,q k χj defines a sequence of bounded operators from H 1 (L) into L 2 (L), uniformly with respect to k. Consequently, one has χ j χj BR p,q k χj v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) = χ j χj BR p,q k χj v k j , v k j L 2 ( Õj ) → k→+∞ 0 since v k j converges strongly to 0 in L 2 ( Õj ). This leads to 1) k→+∞ , by Proposition 3.4. Since χ j µ j = χ j µ locally, lifting back the analysis to the manifold level, with (3.11), one finds χ j χj BP k χj v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) = χ j χj B Pk χj v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) + o(1) k→+∞ = µ j , χ j bp 0 S * ( Õj ) + o( BP k u k , u k H -1 comp (L),H 1 loc (L) = j∈J µ, χ j bp 0 S * (L) = µ, bp 0 S * (L) + o(1) k→+∞ . Now, one has BP k u k , u k H -1 comp (L),H 1 loc (L) = P k u k , t Bu k H -1 loc (L),H 1 comp (L) + o(1) k→+∞ , with the transpose operator t B bounded from H 1 loc (L) into H 1 comp (L) since B is itself bounded from H -1 loc (L) into H -1 comp (L). If one assumes that P k u k → 0 strongly in H -1 loc (T 1 , T 2 ) × M one obtains BP k u k , u k H -1 comp (L),H 1 loc (L) → k→+∞ 0, and thus µ, bp 0 S * (L) = 0, ∀b ∈ S 0 c,ph (L) with supp(b) ⊂ T * (T 1 , T 2 ) × M , and one obtains the support estimation (3.9). We now prove the second item of the proposition. We assume that P k u k lies in L 2 loc (T 1 , T 2 )× M) and converges strongly to 0 in this space. Consider B ∈ Ψ 1 c,ph (L) with kernel supported in (T 1 , T 2 ) × M 2 and b ∈ S 1 c,ph (L) its principal symbol. We are interested in the limit of [P k , B]u k , u k H -1 comp (L),H 1 loc (L) , which makes sense since [P k , B] is of order 2. We have [P k , B]u k = P k Bu k -BP k u k ∈ H -1 (T 1 , T 2 ) × M). Since P k u k lies in L 2 (T 1 , T 2 ) × M) by assumption then BP k u k lies in H -1 (T 1 , T 2 ) × M ) and the same holds for P k Bu k . We may thus write [P k , B]u k , u k H -1 comp (L),H 1 loc (L) = P k Bu k , u k H -1 comp (L),H 1 loc (L) -P k u k , B * u k L 2 loc (L),L 2 comp (L) , where the adjoint is computed with respect to the L 2 -inner product associated with (k 0 , g 0 ) here. As B maps continuously L 2 loc (T 1 , T 2 ) × M) into H -1 comp (T 1 , T 2 ) × M) then B * maps continuously H 1 loc (L) into L 2 comp (L). Thus, one has P k u k , B * u k L 2 (L) → k→+∞ 0. By Lemma 3.6 it is asymptotically equivalent to use (κ 0 , g 0 ) or (κ k , g k ) for the definition of the L 2 -inner product and H -1 comp -H 1 loc duality, that is, P k Bu k , u k H -1 comp (L),H 1 loc (L) = P k Bu k , u k H -1 comp (L,κ k µg k dt),H 1 loc (L,κ k µg k dt) + o(1) k→+∞ . Since P k is selfadjoint for this latter L 2 -inner product, one obtains P k Bu k , u k H -1 comp (L),H 1 loc (L) = Bu k , P k u k L 2 comp (L,κ k µg k dt),L 2 loc (L,κ k µg k dt) + o(1) k→+∞ = Bu k , P k u k L 2 comp (L),L 2 loc (L) + o(1) k→+∞ . 
Using again that P k u k → 0 strongly to 0 in L 2 loc (T 1 , T 2 ) × M) we obtain P k Bu k , u k H -1 comp (L),H 1 loc (L) → k→+∞ 0, and finally [P k , B]u k , u k H -1 comp (L),H 1 loc (L) → k→+∞ 0. (3.12) As above, with the partition of unity 1 = j∈J χ j we write [P k , B]u k , u k H -1 comp (L),H 1 loc (L) = j∈J χ j [P k , B]u k , u k H -1 comp (L),H 1 loc (L) . (3.13) For each term in the sum one has χ j [P k , B]u k , u k H -1 comp (L),H 1 loc (L) = χ j [P k , Bj ]v k j , v k j H -1 comp (L),H 1 loc (L) + o(1) k→+∞ . with Bj = χj B χj . This allows one to work in a local chart and write [P k , B]u k , u k H -1 comp (L),H 1 loc (L) = j∈J χ j [P k , Bj ]v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) , (3.14) with the (manifold-local chart) identifications described above. With A k = A κ k ,g k , in the local chart C j one writes χ j [P k , Bj ] = χ j [∂ 2 t , Bj ] -χ j [A k , Bj ] = χ j [∂ 2 t , Bj ] - 1≤p,q≤d Q pq 1 + Q pq 2 + Q pq 3 + Q pq 4 , with Q pq 1 = χ j κ-1 k ∂ xp κk g pq k [∂ xq , Bj ], Q pq 2 = χ j κ-1 k ∂ xp [κ k g pq k , Bj ]∂ xq , Q pq 3 = χ j κ-1 k [∂ xp , Bj ]κ k g pq k ∂ xq , Q pq 4 = χ j [κ -1 k , Bj ]∂ xp κk g pq k ∂ xq . We now compute the limit of each term associated with this decomposition of [P k , Bj ] on the right-hand side of (3.14). The principal symbol of χ j [∂ 2 t , Bj ] is iχ j {τ 2 , b} and thus χ j [∂ 2 t , Bj ]v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) = µ j , iχ j {τ 2 , b} S * ( Õj ) + o(1) k→+∞ . Proposition 3. applies and yields Q pq 1 v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) = µ j , iχ j g 0,pq ξ p ∂ xq b S * ( Õj ) + o(1) k→+∞ , and Q pq 3 v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) = µ j , iχ j g 0,pq (∂ xp b)ξ q S * ( Õj ) + o(1) k→+∞ . With Theorem 3.2 one has [κ k g pq k , Bj ] → [κ 0 g 0,pq , Bj ] in L L 2 ( Õj ) as k → +∞. It follows that Q pq 2 v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) = Q pq 2,a v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) + o(1) k→+∞ , with Q pq 2,a = χ j κ-1 k ∂ xp [κ 0 g 0,pq , Bj ]∂ xq . With Corollary 3.3 one writes [κ 0 g 0,pq , Bj ] = - 1 i ∇ x (κ 0 g 0,pq ) • Op ∇ ξ ( χ2 j b) + K 1 , with K 1 a compact operator on L 2 (R d+1 ), with compactly supported kernel. One thus obtains Q pq 2 v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) = Q pq 2,b v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) + o(1) k→+∞ , with Q pq 2,b = - 1 i χ j κ-1 k ∂ xp ∇ x (κ 0 g 0,pq ) • Op ∇ ξ ( χ2 j b) ∂ xq . Proposition 3. applies and yields Q pq 2 v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) = µ j , -iχ j ξ p ξ q (κ 0 ) -1 ∇ x (κ 0 g 0,pq ) • ∇ ξ b S * ( Õj ) + o(1) k→+∞ . We now treat the term associated with Q pq 4 . Note that one has p,q Q pq 4 = χ j [κ -1 k , Bj ]κ k A k . We write, lifting temporarily the analysis back to the manifold, p,q Q pq 4 v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) = χ j [κ -1 k , B]κ k A k v k j , v k j H -1 comp (L),H 1 loc (L) = χ j [κ -1 k , B]κ k A k u k , u k H -1 comp (L),H 1 loc (L) + o(1) k→+∞ . Setting f k = (∂ 2 t -A k )u k with f k → 0 strongly in L 2 loc (T 1 , T 2 ) × M , we thus find p,q Q pq 4 v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) = χ j [κ -1 k , B]κ k ∂ 2 t u k , u k H -1 comp (L),H 1 loc (L) -χ j [κ -1 k , B]κ k f k , u k H -1 comp (L),H 1 loc (L) + o(1) k→+∞ = χ j [κ -1 k , Bj ]κ k ∂ 2 t v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) + o(1) k→+∞ , bringing again the analysis at the level of the local chart. 
Using that κk is independent of t we may write χ j [κ -1 k , Bj ]κ k ∂ t = χ j ∂ t [κ -1 k , Bj ]κ k + χ j [κ -1 k , E j ]κ k , where E j = [∂ t , Bj ] ∈ Ψ 1 c,ph ( Õj ), with ∂ t b ∈ S 1 c,ph ( Õj ) for principal symbol. With Theo- rem 3.2 we see that [κ -1 k , E j ] maps L 2 ( Õj ) into itself continuously and moreover [κ -1 k , E j ] → [(κ 0 ) -1 , E j ] in L L 2 ( Õj ) . Thus we obtain χ j [κ -1 k , E j ]κ k ∂ t v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) = χ j [(κ 0 ) -1 , E j ]κ k ∂ t v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) + o(1) k→+∞ → k→+∞ 0, arguing as above. Similarly we write χ j ∂ t [κ -1 k , Bj ]κ k ∂ t v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) ∼ k→+∞ χ j ∂ t [(κ 0 ) -1 , Bj ]κ 0 k ∂ t v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) Arguing as we did for the term associated with Q p,q 2 we thus find χ j ∂ t [κ -1 k , Bj ]κ k ∂ t v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) = µ j , -iχ j τ 2 κ0 (∇ x (κ 0 ) -1 ) • ∇ ξ b S * ( Õj ) + o(1) k→+∞ . Collecting the various estimates we found we obtain χ j [P k , Bj ]v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) = µ j , χ j σ S * ( Õj ) + o(1) k→+∞ . (3.15) with σ = i{τ 2 , b} -i p,q g 0,pq ξ p ∂ xq b + g 0,pq (∂ xp b)ξ q -ξ p ξ q (κ 0 ) -1 ∇ x (κ 0 g 0,pq ) • ∇ ξ b + iτ 2 κ0 (∇ x (κ 0 ) -1 ) • ∇ ξ b. Recalling that p 0 = -τ 2 + p,q g 0,pq ξ p ξ q one finds σ = -i{p 0 , b} + ip 0 (κ 0 ) -1 ∇ x (κ 0 ) • ∇ ξ b. Since µ, and thus µ j , is supported in Char(p 0 ) by the first part of the proposition, one concludes that The strategy we follow is very much inspired by the Melrose and Sjöstrand approach to the propagation of singularities [START_REF] Melrose | Singularities of boundary value problems I[END_REF] and relies on careful choices of test functions allowing one to construct sequences of points in the support of the measure relying on nonnegativity 2 . Then, a limiting procedure leads to the conclusion, in the spirit of the classical proof of the Cauchy-Peano theorem. χ j [P k , Bj ]v k j , v k j H -1 comp ( Õj ),H 1 loc ( Õj ) = -i µ j , χ j {p 0 , b} S * ( Õj ) + o(1) k→+∞ . Since χ j µ = χ j µ j (see Section 3.2), with (3.13)-(3.14) one obtains [P k , B]u k , u k H -1 comp (L),H 1 loc (L) = -i µ, {p 0 , b} S * (L) + o(1) k→+∞ . With (3. The proof of Theorem 1.10 is made of two steps that are stated in the following propositions. (1) The set F is a union of maximally extended integral curves of the vector field X. (2) For any compact K ⊂ Ω where the vector field X does not vanish, ∀ε > 0, ∃δ 0 > 0, ∀x ∈ K ∩ F, ∀δ ∈ [-δ 0 , δ 0 ], B x + δX(x), δε ∩ F = ∅. Proposition 4.2. Let X be a C 0 -vector field on Ω an open set of R d . Consider a nonnegative measure µ on Ω solution to t Xµ = 0 in the sense of distributions, that is, (4.1) t Xµ, a D (Ω),C ∞ c (Ω) = µ, Xa D ,0 (Ω),C 0 c (Ω) = 0, a ∈ C ∞ c (Ω). Then, the closed set F = supp(µ) satisfies the second property in Proposition 4.1. Proof of Proposition 4.1. First, we prove that Property (1) implies Property (2) and consider a compact set K of R d such that K ⊂ Ω and K ∩ F = ∅. There exists η > 0 such that K ⊂ K η ⊂ Ω with K η = {x ∈ Ω; dist(x, K) ≤ η}. One has X ≤ C 0 on K η for some C 0 > 0. Let x ∈ K and let γ(s) be a maximal integral curve defined on an interval ]a, b[, a, b ∈ R and such that 0 ∈]a, b[ and γ(0 ) = x. If b < ∞ then there exists s 1 ∈ ]0, b[ such that γ(s 1 ) / ∈ K η . Since γ(s) ∈ K η if s < η/C 0 , one finds that b ≥ η/C 0 . Similarly, one has |a| ≥ η/C 0 . 
Consequently, there exists S > 0 such that any maximal integral curve γ(s) of the vector field X with γ(0) ∈ K is defined for s ∈ I = (-S, S). 2 Of the measure in our case and of some operators for Melrose and Sjöstrand, via the Gårding inequality. By uniform continuity of the vector field X in a compact neighborhood of K we have γ(s) = γ(0) + s 0 γ̇(σ)dσ = γ(0) + s 0 X(γ(σ))dσ = x + sX(x) + r(s), s ∈ (-S, S), where lim s→0 ‖r(s)‖/|s| = 0, uniformly with respect to x. We deduce that for any ε > 0 there exists 0 < δ 0 < S such that ‖r(s)‖ < |s|ε for any s ∈ (-δ 0 , δ 0 ), which implies F ∋ γ(s) ∈ B(x + sX(x), |s|ε). Second, we prove that Property (2) implies Property (1). It suffices to prove that for any x ∈ F there exist an interval I ∋ 0 and an integral curve γ : I → F such that γ̇(s) = X(γ(s)) and γ(0) = x. Then, the standard continuation argument shows that this local integral curve included in F can be extended to a maximal integral curve also included in F . If X(x) = 0, then the trivial integral curve γ(s) = x, s ∈ R, is included in F . As a consequence, we assume X(x) ≠ 0 and we pick a compact neighborhood K of x containing B(x, η) with η > 0 and where, for some 0 < c K < C K , c K ≤ ‖X(y)‖ ≤ C K , y ∈ K. Let n ∈ N * . Set x n,0 = x and ε = 1/n and apply Property (2). One deduces that there exist 0 < δ n ≤ 1/n and a point x n,1 ∈ F ∩ B(x n,0 + δ n X(x n,0 ), δ n /n). If x n,1 ∈ K one can perform this construction again, yet starting from x n,1 instead of x n,0 . If a sequence of points x n,0 , x n,1 , . . . , x n,L + is obtained in this manner one has x n,ℓ+1 ∈ F ∩ B(x n,ℓ + δ n X(x n,ℓ ), δ n /n), ℓ = 0, . . . , L + -1. (4.2) One can carry on the construction as long as x n,L + ∈ K. We can perform the same construction for ℓ ≤ 0, with the property x n,ℓ-1 ∈ F ∩ B(x n,ℓ -δ n X(x n,ℓ ), δ n /n), ℓ = 0, -1, . . . , -(L - -1). (4.3) Having ‖X‖ ≤ C K on K and B(x, η) ⊂ K ensures that we can construct the sequence at least for L + = L - = L n = ⌊η/(δ n (C K + 1))⌋ + 1 ≤ ⌊η/(δ n (C K + 1/n))⌋ + 1, where ⌊·⌋ denotes the floor function. With the points x n,ℓ , |ℓ| ≤ L n , that we have constructed, we define the following continuous curve γ n (s) for |s| ≤ L n δ n : γ n (s) = x n,ℓ + (s -ℓδ n )(x n,ℓ+1 -x n,ℓ )/δ n for s ∈ [ℓδ n , (ℓ + 1)δ n ) and |ℓ| ≤ L n -1. This curve and its construction are illustrated in Figure 1(a). Note that γ n (s) remains in a compact set, uniformly with respect to n. In this compact set X is uniformly continuous. We set S = η/(C K + 1). Since S ≤ L n δ n we shall in fact only consider the function γ n (s) for |s| ≤ S in what follows. Note that since x n,ℓ ∈ F for |ℓ| ≤ L n then one has dist(γ n (s), F ) ≤ δ n (C K + 1/n), |s| ≤ S. (4.4) From (4.2), for ℓ ≥ 0 and s ∈ (ℓδ n , (ℓ + 1)δ n ), we have γ̇ n (s) = (x n,ℓ+1 -x n,ℓ )/δ n = X(x n,ℓ ) + O(1/n). Similarly, from (4.3), for ℓ ≤ 0 and s ∈ ((ℓ -1)δ n , ℓδ n ), we have γ̇ n (s) = (x n,ℓ -x n,ℓ-1 )/δ n = X(x n,ℓ ) + O(1/n). In any case, using the uniform continuity of the vector field X, we find γ̇ n (s) = X(γ n (s)) + e n (s), where the error ‖e n ‖ goes to zero uniformly with respect to |s| ≤ S as n → +∞. Since the curve γ n is absolutely continuous (and differentiable except at isolated points), we find γ n (s) = x + s 0 (X(γ n (σ)) + e n (σ))dσ, |s| ≤ S. (4.5) We now let n grow to infinity. With (4.5), the family of curves (s → γ n (s), |s| ≤ S) n∈N * is equicontinuous and pointwise bounded; by the Arzelà-Ascoli theorem we can extract a subsequence (s → γ n p ) p∈N that converges uniformly to a curve γ(s), |s| ≤ S. Convergence is illustrated in Figure 1(b). 
Passing to the limit n p → +∞ in (4.5) we find that γ(s) is a solution to γ(s) = x + s 0 X(γ(σ))dσ. From estimate (4.4), for any |s| ≤ S, there exists (y p ) p ⊂ F such that lim p→+∞ y p = γ(s). Since F is closed we conclude that γ(s) ∈ F . Positivity argument and proof of Proposition 4.2. We consider a compact set K where the vector field X does not vanish. By continuity of the vector field there exist 0 < c K ≤ C K such that 0 < c K ≤ ‖X(x)‖ ≤ C K , for all x ∈ K. Let us consider x 0 ∈ K ∩ supp(µ). By performing a rotation and a dilation of coefficient ‖X(x 0 )‖ ∈ [c K , C K ], we can assume that X(x 0 ) = (1, 0, . . . , 0) ∈ R d . We shall write x = (x 1 , x′) with x′ ∈ R d-1 . Let χ ∈ C ∞ (R) be given by (4.6) χ(s) = 1 s<1 exp(1/(s -1)), and β ∈ C ∞ (R) be such that (4.7) β ≡ 0 on ] -∞, -1], β′ > 0 on ] -1, -1/2[, β ≡ 1 on [-1/2, +∞[. We then set q ε,δ,x 0 = (χ • v)(β • w), g ε,δ,x 0 = (χ′ • v)(β • w)Xv, h ε,δ,x 0 = (χ • v)(β′ • w)Xw, (4.8) with v(x) = 1/2 -δ -1 (x 1 -x 0 1 ) + 8(εδ) -2 ‖x′ -x 0 ′‖ 2 and w(x) = 2ε -1 (1 -δ -1 (x 1 -x 0 1 )), for ε > 0 and δ > 0 both meant to be chosen small in what follows. We have Xq ε,δ,x 0 = g ε,δ,x 0 + h ε,δ,x 0 . The function q ε,δ,x 0 is compactly supported. Indeed, in the support of β • w one has w ≥ -1, implying x 1 -x 0 1 ≤ δ(1 + ε/2), while on the support of χ • v one has v ≤ 1, which gives -1/2 + 8(εδ) -2 ‖x′ -x 0 ′‖ 2 ≤ δ -1 (x 1 -x 0 1 ). On the supports of q ε,δ,x 0 and (χ′ • v)(β • w) one thus finds (4.9) -δ/2 ≤ x 1 -x 0 1 ≤ δ(1 + ε/2) and 8(εδ) -2 ‖x′ -x 0 ′‖ 2 ≤ 3/2 + ε/2. Similarly, on the support of β′ • w one has -1 ≤ w ≤ -1/2, that is, δ(1 + ε/4) ≤ x 1 -x 0 1 ≤ δ(1 + ε/2), which implies that on the support of h ε,δ,x 0 one has (4.10) δ(1 + ε/4) ≤ x 1 -x 0 1 ≤ δ(1 + ε/2) and 8(εδ) -2 ‖x′ -x 0 ′‖ 2 ≤ 3/2 + ε/2. In particular, in the case ε ≤ 1, one finds (4.11) supp(h ε,δ,x 0 ) ⊂ B(x 0 + δX(x 0 ), εδ). These estimates of the supports of q ε,δ,x 0 and h ε,δ,x 0 are illustrated in Figure 2. Figure 2. (a) Support of q ε,δ,x 0 ; (b) support of h ε,δ,x 0 . Lemma 4.3. For any 0 < ε ≤ 1 there exists δ 0 > 0 such that for any x 0 ∈ K and 0 < δ ≤ δ 0 , the function g ε,δ,x 0 is nonnegative. Moreover, g ε,δ,x 0 is positive in a neighborhood of x 0 . Proof. Let 0 < ε ≤ 1. We have g ε,δ,x 0 = (χ′ • v)(β • w)Xv. Since β ≥ 0 and χ′ < 0 it suffices to prove that Xv(x) ≤ 0 for x in the support of (χ′ • v)(β • w) for δ > 0 chosen sufficiently small, uniformly with respect to x 0 ∈ K. We write X(x) -X(x 0 ) = α 1 (x, x 0 )∂ x 1 + α′(x, x 0 ) • ∇ x′ , with α 1 (x, x 0 ) ∈ R and α′(x, x 0 ) ∈ R d-1 . By (4.9), for x ∈ supp((χ′ • v)(β • w)) we have ‖x -x 0 ‖ ≲ δ. From the uniform continuity of X in any compact set we conclude that |α 1 (x, x 0 )| + ‖α′(x, x 0 )‖ = o(1) as δ → 0 + , (4.12) uniformly with respect to x 0 ∈ K and x ∈ supp((χ′ • v)(β • w)). Using that X(x 0 ) = ∂ x 1 and the form of v given above, we write Xv(x) = (∂ x 1 + (X(x) -X(x 0 )))v(x) = -δ -1 (1 + α 1 (x, x 0 ) -16ε -1 (εδ) -1 α′(x, x 0 ) • (x′ -x 0 ′)). Using again (4.9), we thus find, for x ∈ supp((χ′ • v)(β • w)), |α 1 (x, x 0 ) -16ε -1 (εδ) -1 α′(x, x 0 ) • (x′ -x 0 ′)| ≲ |α 1 (x, x 0 )| + ε -1 ‖α′(x, x 0 )‖. With ε fixed above and with (4.12) we find that Xv(x) ∼ -δ -1 as δ → 0 + uniformly with respect to x 0 ∈ K and x ∈ supp((χ′ • v)(β • w)). Finally, we have g ε,δ,x 0 (x 0 ) = -δ -1 χ′(1/2)β(2ε -1 ) > 0 and thus g ε,δ,x 0 is positive in a neighborhood of x 0 . We are now in a position to conclude the proof of Proposition 4.2. 
Note that it suffices to prove the result for 0 < ε ≤ 1. We choose δ₀ > 0 as given by Lemma 4.3. Let then x⁰ ∈ K ∩ supp(µ). We apply (4.1) to the family q_{ε,δ,x⁰} of test functions with 0 < δ ≤ δ₀:

(4.13) 0 = ⟨µ, X(q_{ε,δ,x⁰})⟩ = ⟨µ, g_{ε,δ,x⁰}⟩ + ⟨µ, h_{ε,δ,x⁰}⟩.

By Lemma 4.3, g_{ε,δ,x⁰} ≥ 0 and g_{ε,δ,x⁰} is positive in a neighborhood of x⁰. As x⁰ ∈ supp(µ) we find ⟨µ, g_{ε,δ,x⁰}⟩ > 0. Consequently, ⟨µ, h_{ε,δ,x⁰}⟩ ≠ 0. By the support estimate for h_{ε,δ,x⁰} given in (4.11) the conclusion follows:

supp(µ) ∩ B(x⁰ + δX(x⁰), εδ) ≠ ∅.

5. Exact controllability: proof of Theorem 1.12

Let (κ₀, g₀) ∈ X¹(M) and assume that (ω, T) fulfills the geometric control condition of Definition 1.8′. Let also (κ, g) ∈ Y(M). With Proposition 1.6, the result of Theorem 1.12 follows if we prove that there exist ε > 0 and C_obs > 0 such that

E_{κ,g}(u)(0) ≤ C_obs ‖1_{(0,T)×ω} ∂_t u‖²_{L²(L,κµ_g dt)},

for any weak solution u of the wave equation associated with (κ, g) chosen such that ‖(κ, g) − (κ₀, g₀)‖_{Y(M)} ≤ ε. The L²-norm on the r.h.s. is associated with (κ, g), that is,

‖1_{(0,T)×ω} ∂_t u‖²_{L²(L,κµ_g dt)} = ∫₀ᵀ ∫_ω |∂_t u|² κµ_g dt.

Yet, for ε > 0 chosen sufficiently small one has ‖·‖_{L²(L,κ₀µ_{g₀})} ≍ ‖·‖_{L²(L,κµ_g)}, where A ≍ B means c₁ ≤ A/B ≤ c₂ for some c₁, c₂ > 0. In other words, we have equivalence with constants uniform with respect to (κ, g). In what follows, L²- and more generally H^s-norms on M are chosen with respect to κ₀µ_{g₀} unless explicitly written. Our goal is thus to prove the following observability inequality

(5.1) E_{κ₀,g₀}(u)(0) ≤ C_obs ‖1_{(0,T)×ω} ∂_t u‖²_{L²(L)}.

The Bardos–Lebeau–Rauch uniqueness-compactness argument reduces the proof of (5.1) to the proof of the weaker estimate

(5.2) E_{κ₀,g₀}(u)(0) ≤ C ‖1_{(0,T)×ω} ∂_t u‖²_{L²(L)} + C ‖(u(0), ∂_t u(0))‖²_{L²(M)⊕H⁻¹(M)},

that exhibits an additional compact term, and expresses observability for high frequencies. Low frequencies are dealt with by means of a unique continuation argument. To prove (5.2) we argue by contradiction and we assume that there exists a sequence (κ_k, g_k)_{k∈ℕ} ⊂ Y(M) such that

(5.3) lim_{k→+∞} ‖(κ_k, g_k) − (κ₀, g₀)‖_{Y(M)} = 0,

and yet for each k ∈ ℕ the associated observability inequality does not hold. Thus, for each k ∈ ℕ, there exists a sequence of initial data (v_{k,p,0}, v_{k,p,1})_{p∈ℕ} ⊂ H¹(M) × L²(M) with associated solutions (v_{k,p})_{p∈ℕ}, that is,

P_k v_{k,p} = 0 in (0, +∞) × M, v_{k,p}|_{t=0} = v_{k,p,0}, ∂_t v_{k,p}|_{t=0} = v_{k,p,1} in M,

with P_k = P_{κ_k,g_k}, that moreover have the properties

E_{κ₀,g₀}(v_{k,p})(0) = 1 and ‖1_{(0,T)×ω} ∂_t v_{k,p}‖_{L²(L)} + ‖(v_{k,p,0}, v_{k,p,1})‖_{L²(M)⊕H⁻¹(M)} ≤ 1/(p + 1).

We take p = k and, setting (u_{k,0}, u_{k,1}) = (v_{k,k,0}, v_{k,k,1}) and u_k = v_{k,k}, one obtains P_k u_k = 0 in L and

(5.4) E_{κ₀,g₀}(u_k)(0) = 1 and ‖1_{(0,T)×ω} ∂_t u_k‖_{L²(L)} + ‖(u_{k,0}, u_{k,1})‖_{L²(M)⊕H⁻¹(M)} ≤ 1/(k + 1).

From (5.4) one has u_k ⇀ 0 weakly in H¹_loc(L). With (3.1)–(3.2), we can associate with (a subsequence of) (u_k)_k an H¹-microlocal defect measure µ on S*(L). Here, the measure is understood with respect to L²(L, κ₀µ_{g₀}dt). From the second part of (5.4) one has

µ = 0 in S*((0, T) × ω). (5.5)

In fact, for any ψ ∈ C^∞_c((0, T) × ω) one has ‖ψ∂_t u_k‖_{L²(L)} → 0 and thus ⟨µ, τ²ψ²⟩ = 0. Hence, supp(µ) ∩ S*((0, T) × ω) ⊂ {τ = 0}. Since {τ = 0} ∩ Char(p₀) ∩ S*(L) = ∅, with (3.9) one obtains (5.5). With the first part of (5.4) one has the following lemma.

Lemma 5.1. The measure µ does not vanish on S*L.

A proof is given below.
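To make the last exclusion explicit (a small verification, assuming p₀ has the standard wave-operator form consistent with the symbol q used in the proof of Lemma 5.1 below):

\[
p_0(t,x,\tau,\xi) = -\tau^2 + \sum_{p,q} g_0^{pq}(x)\,\xi_p\xi_q
\quad\Longrightarrow\quad
\big(\tau = 0 \ \text{and}\ p_0 = 0\big) \ \Rightarrow\ |\xi|_{g_0}^2 = 0 \ \Rightarrow\ (\tau,\xi) = 0,
\]

which is impossible on the cosphere bundle S*(L).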
We now use Proposition 3.7 to obtain a precise description of the measure µ. First, one has supp(µ) ∩ S*((0, T) × M) ⊂ Char(p₀). Furthermore, one has ᵗH_{p₀}µ = 0 in the sense of distributions on S*((0, T) × M). Since H_{p₀} is a C⁰-vector field on the manifold S*L, Theorem 1.10 implies that supp(µ) is a union of maximally extended bicharacteristics in S*((0, T) × M). Under the geometric control condition of Definition 1.8′, any maximal bicharacteristic meets S*((0, T) × ω), where µ vanishes by (5.5). Thus supp(µ) = ∅, yielding a contradiction with the result of Lemma 5.1. We thus obtain that (5.1) holds. This concludes the proof of Theorem 1.12.

Proof of Lemma 5.1. Let T₁ < T₂ and φ ∈ C^∞_c(ℝ) nonnegative and equal to 1 on a neighborhood of [T₁, T₂]. On L, consider the elliptic operator Q = −∂²_t − A_{κ₀,g₀} + 1 with symbol q = τ² + Σ_{p,q} g₀^{pq}(x) ξ_p ξ_q. Taking (3.2) and Lemma 3.6 into account one can write

⟨φ²Qu_k, u_k⟩_{H⁻¹_comp(L),H¹_loc(L)} ∼_{k→+∞} ⟨µ, φ²q⟩_{S*L}. (5.6)

Integrating by parts one obtains

⟨φ²Qu_k, u_k⟩_{H⁻¹_comp(L),H¹_loc(L)} = ∫_L φ(t)² (|∂_t u_k|² + g₀(∇_{g₀}u_k, ∇_{g₀}u_k) + |u_k|²) κ₀µ_{g₀} dt + 2(φφ′ ∂_t u_k, u_k)_{L²(L)} = ∫_ℝ φ(t)² E_{κ₀,g₀}(u_k)(t) dt + 2(φφ′ ∂_t u_k, u_k)_{L²(L)}.

Since the energy built on (κ_k, g_k) is preserved by the evolution given by P_k, we have by (5.4)

(5.7) E_{κ₀,g₀}(u_k)(t) = E_{κ_k,g_k}(u_k)(t) + o(1) = E_{κ_k,g_k}(u_k)(0) + o(1) = E_{κ₀,g₀}(u_k)(0) + o(1) = 1 + o(1),

and since (φφ′ ∂_t u_k, u_k)_{L²(L)} → 0 as u_k → 0 strongly in L²_loc(L), one obtains

⟨φ²Qu_k, u_k⟩_{H⁻¹_comp(L),H¹_loc(L)} ∼_{k→+∞} ‖φ‖²_{L²(ℝ)}.

With (5.6) this proves that µ ≠ 0.

6. Lack of continuity of the control operator with respect to coefficients

6.1. Proof of Theorems 1.14 and 1.14′. We prove the result of both theorems, that is, in the case k ≥ 1. In the case k = 1 we are simply required to prove additionally that the geometric control condition of Definition 1.8′ is fulfilled for geodesics given by the chosen metric g̃; see Remark 1.17. Let ε > 0. We set g̃ = (1 + ε)g. Given any neighborhood U of (κ, g) in X^k(M), for ε > 0 chosen sufficiently small one has (κ, g̃) ∈ U. Moreover, observe that for ε > 0 chosen sufficiently small geodesics associated with g̃ can be made arbitrarily close to those associated with g uniformly in t ∈ [0, T]. Hence, for such ε > 0 the geometric control condition is fulfilled for geodesics associated with g̃. Observe that one has

Char(p_{κ,g}) ∩ Char(p_{κ,g̃}) ∩ S*L = ∅. (6.1)

We consider a sequence (y_{k,0}, y_{k,1}) ⇀ (0, 0) weakly in H¹(M) ⊕ L²(M) such that

½(‖y_{k,0}‖²_{H¹(M)} + ‖y_{k,1}‖²_{L²(M)}) = 1.

L²- and H¹-norms are based on the κµ_g measure. Setting f^k_{κ,g} = H_{κ,g}(y_{k,0}, y_{k,1}) ∈ L²((0, T) × M) with H_{κ,g} defined in (1.7), one obtains a sequence of control functions. According to the HUM method [START_REF] Lions | Contrôlabilité exacte, Stabilisation et Perturbations de Systèmes Distribués. Tome 1. Contrôlabilité exacte[END_REF], f^k_{κ,g} is itself a (weak) solution to the following free wave equation

(6.2) P_{κ,g} f^k_{κ,g} = 0,

in the energy space L²(M) ⊕ H⁻¹(M), that is, (f^k_{κ,g}(0), ∂_t f^k_{κ,g}(0)) ∈ L²(M) × H⁻¹(M). Moreover, (f^k_{κ,g}(0), ∂_t f^k_{κ,g}(0)) depends continuously on (y_{k,0}, y_{k,1}). The function f^k_{κ,g} is thus bounded in C⁰((T₁, T₂), L²(M)) uniformly with respect to k for any T₁ < T₂. Since the map H_{κ,g} is continuous, f^k_{κ,g} ⇀ 0 weakly in L²_loc(L).
Up to extraction of a subsequence, it is associated with an L²-microlocal defect measure µ_f. With Proposition 3.7 one has

(6.3) supp(µ_f) ⊂ Char(p_{κ,g}).

We consider the sequences of solutions (y_k)_k and (ỹ_k)_k to

P_{κ,g} y_k = 1_{(0,T)×ω} f^k_{κ,g} in L, (y_k, ∂_t y_k)|_{t=0} = (y_{k,0}, y_{k,1}) in M,

P_{κ,g̃} ỹ_k = 1_{(0,T)×ω} f^k_{κ,g} in L, (ỹ_k, ∂_t ỹ_k)|_{t=0} = (y_{k,0}, y_{k,1}) in M.

Both are bounded in H¹_loc(L) and converge weakly to 0 there. Up to extraction of subsequences, both are associated with H¹-microlocal defect density measures µ and µ̃ respectively. Since 1_{(0,T)×ω} f^k_{κ,g} ⇀ 0 weakly in L²_loc(L), then 1_{(0,T)×ω} f^k_{κ,g} → 0 strongly in H⁻¹_loc(L) and, with Proposition 3.7, one finds supp(µ̃) ⊂ Char(p_{κ,g̃}). Thus one has

supp(µ̃) ∩ supp(µ_f) = ∅. (6.4)

The sequence (∂_t ỹ_k)_k converges to 0 weakly in L²_loc(L) and can be associated with an L²-microlocal defect density measure whose support is given by supp(µ̃).

Lemma 6.1. One has (1_{(0,T)×ω} f^k_{κ,g}, ∂_t ỹ_k)_{L²(L,κµ_{g̃}dt)} → 0 as k → +∞.

A proof is given below. Using the density of strong solutions of the wave equation, with integration by parts, one finds the following classical energy estimate

E_{κ,g̃}(ỹ_k)(T) − E_{κ,g̃}(ỹ_k)(0) = (1_{(0,T)×ω} f^k_{κ,g}, ∂_t ỹ_k)_{L²(κµ_{g̃}dt)}.

With Lemma 6.1 one obtains E_{κ,g̃}(ỹ_k)(T) ∼_{k→+∞} E_{κ,g̃}(ỹ_k)(0). With the form of g̃ chosen above one has E_{κ,g̃}(ỹ_k)(t) = (1 + O(ε)) E_{κ,g}(ỹ_k)(t), uniformly with respect to t ∈ [0, T]. Choosing ε > 0 sufficiently small and k sufficiently large, the first part of Theorem 1.14 follows since E_{κ,g}(ỹ_k)(0) = 1.

We use the values of ε and k chosen above. To prove (1.10), we write ỹ_k in the form ỹ_k = v₁ + v₂ where v₁ and v₂ are solutions to

(6.5) P_{κ,g̃} v₁ = 1_{(0,T)×ω} f^k_{κ,g̃} in L, (v₁, ∂_t v₁)|_{t=0} = (y_{k,0}, y_{k,1}) in M,
P_{κ,g̃} v₂ = 1_{(0,T)×ω} (f^k_{κ,g} − f^k_{κ,g̃}) in L, (v₂, ∂_t v₂)|_{t=0} = (0, 0) in M,

with f^k_{κ,g̃} = H_{κ,g̃}(y_{k,0}, y_{k,1}). A hyperbolic energy estimate for v₂, solution to the second equation in (6.5), gives

E_{κ,g̃}(v₂)(T) ≤ C_T ‖1_{(0,T)×ω} (f^k_{κ,g} − f^k_{κ,g̃})‖²_{L²(L)}.

Since one has (v₁(T), ∂_t v₁(T)) = (0, 0) because of the definition of f^k_{κ,g̃}, one finds E_{κ,g̃}(v₂)(T) = E_{κ,g̃}(ỹ_k)(T) ≥ 1/2, which gives the second result of Theorem 1.14.

Proof of Lemma 6.1. The key point in the proof is the following lemma.

Lemma 6.2 ([18, Proposition 3.1]). Assume that u_k and v_k are two sequences bounded in L²_loc that converge weakly to zero and are associated with defect measures µ and ν respectively. Assume that µ ⊥ ν, that is, µ and ν are supported on disjoint sets. Then, for any ψ ∈ C⁰_c,

lim_{k→+∞} (ψu_k, v_k)_{L²} = 0.

To apply this result, we just need to exchange the rough cutoff 1_{(0,T)×ω} for a smooth cutoff ψ(t, x). First, note that one has

(1_{(0,T)×ω} f^k_{κ,g}, ∂_t ỹ_k)_{L²(L,κµ_{g̃}dt)} ∼_{k→+∞} (1_{(0,T)×ω} f^k_{κ,g}, ∂_t ỹ_k)_{L²(L,κµ_g dt)}.

We may thus simply consider the L²-norm and inner product associated with κµ_g dt. Second, let δ > 0. Since (f^k_{κ,g})_k and (ỹ_k)_k are both bounded in C⁰((0, T), L²(M)) uniformly with respect to k, there exist 0 < T₁ < T₂ < T and O ⋐ ω such that

∫_K |f^k_{κ,g}| |∂_t ỹ_k| κµ_g dt ≤ δ, with K = ((0, T) × ω) \ ((T₁, T₂) × O).

Let ψ ∈ C^∞_c((0, T) × ω) be such that 0 ≤ ψ ≤ 1 and equal to 1 in a neighborhood of [T₁, T₂] × O̅. One thus has

|(1_{(0,T)×ω} f^k_{κ,g}, ∂_t ỹ_k)_{L²(L)}| ≤ |(ψ f^k_{κ,g}, ∂_t ỹ_k)_{L²(L)}| + |((1_{(0,T)×ω} − ψ) f^k_{κ,g}, ∂_t ỹ_k)_{L²(L)}| ≤ |(ψ f^k_{κ,g}, ∂_t ỹ_k)_{L²(L)}| + δ.

With (6.4) and Lemma 6.2, one finds

(ψ f^k_{κ,g}, ∂_t ỹ_k)_{L²(L)} →_{k→+∞} 0, (6.6)

and the conclusion of the lemma follows.

6.2. Proof of Theorem 1.19. We consider first the case α = 1.
As proven in [START_REF] Dehman | Analysis of the HUM control operator and exact controllability for semilinear waves in uniform time[END_REF] one has f^{y₀,y₁}_{κ,g} ∈ C⁰([0, T], H¹(M)) and the estimate

‖f^{y₀,y₁}_{κ,g}‖_{L^∞(0,T;H¹(M))} ≲ ‖(y₀, y₁)‖_{H²(M)⊕H¹(M)}.

With this regularity of the source term in the right-hand side of the wave equations in (1.8), one finds y, ỹ ∈ C⁰([0, T], H²(M)). Computing the difference in (1.8) one writes

(6.7) P_{κ,g}(y − ỹ) = (A_{κ,g} − A_{κ̃,g̃})ỹ.

A hyperbolic energy estimate yields

E_{κ,g}(y − ỹ)(T)^{1/2} ≲ ‖(A_{κ,g} − A_{κ̃,g̃})ỹ‖_{L^∞(0,T;L²(M))} ≲ ‖(κ, g) − (κ̃, g̃)‖_{X¹} ‖ỹ‖_{L^∞(0,T;H²(M))} ≲ ‖(κ, g) − (κ̃, g̃)‖_{X¹} ‖f^{y₀,y₁}_{κ̃,g̃}‖_{L^∞(0,T;H¹(M))} ≲ ‖(κ, g) − (κ̃, g̃)‖_{X¹} ‖(y₀, y₁)‖_{H²(M)⊕H¹(M)}.

In the case α = 0, one writes

E_{κ,g}(y − ỹ)(T)^{1/2} ≤ E_{κ,g}(y)(T)^{1/2} + E_{κ,g}(ỹ)(T)^{1/2} ≲ E_{κ,g}(y)(T)^{1/2} + E_{κ̃,g̃}(ỹ)(T)^{1/2} ≲ ‖(y₀, y₁)‖_{H¹(M)⊕L²(M)}.

Finally, the result follows from interpolation between the two cases α = 0 and α = 1: for fixed coefficients, the map (y₀, y₁) ↦ (y − ỹ, ∂_t(y − ỹ))(T) is linear, so interpolating the two estimates above yields a bound by ‖(κ, g) − (κ̃, g̃)‖^α_{X¹} ‖(y₀, y₁)‖_{H^{1+α}(M)⊕H^α(M)}.

Acknowledgements.
This research was partially supported by Agence Nationale de la Recherche through projects ANAÉ ANR-13-BS01-0010-03 and ISDEEC ANR-16-CE40-0013, by Institut universitaire de France (NB), and by the Tunisian Ministry for Higher Education and Scientific Research within the LR-99-ES20 program (BD). Finally, the authors wish to thank the anonymous reviewer for pointing out arguments that needed clarifications.
00342597
en
[ "phys.hthe", "math.math-mp", "phys.mphy" ]
2024/03/04 16:41:24
2009
https://hal.science/hal-00342597v2/file/sov04.pdf
Sylvain Ribault (email: [email protected])

On sℓ₃ Knizhnik–Zamolodchikov equations and W₃ null-vector equations

Starting from Sklyanin's separation of variables for the sℓ₃ Yangian model, we derive the separation of variables for the quantum sℓ₃ Gaudin model. We use the resulting new variables for rewriting the sℓ₃ Knizhnik–Zamolodchikov equations, and comparing them with certain null-vector equations in conformal field theories with W₃-algebra symmetry. The two sets of equations are remarkably similar, but become identical only in the critical level limit. This is in contrast to the sℓ₂ Knizhnik–Zamolodchikov equations, which are known to be equivalent to Belavin–Polyakov–Zamolodchikov equations for all values of the level.

1. Introduction and conjecture

Many interesting models of two-dimensional conformal field theories are based on affine Lie algebras ŝℓ_N and their cosets, starting with Wess–Zumino–Witten models. To solve such theories is an interesting challenge, whose difficulty depends more on the choice of the underlying Lie algebra sℓ_N than on the particular coset or real form chosen. For example, the sℓ₂ family includes string theory in AdS₃ and in the SL(2,ℝ)/U(1) 2d black hole, as well as the H₃⁺ model; the simplest non-rational nontrivial model of the family is however Liouville theory, also known as conformal sℓ₂ Toda theory. In several of the other theories in the sℓ₂ family, it turns out that arbitrary correlation functions have a simple relation to certain Liouville theory correlation functions [START_REF] Ribault | H + 3 correlators from Liouville theory[END_REF][START_REF] Hikida | H + 3 WZNW model from Liouville field theory[END_REF]. This relation entails a relation between the Knizhnik–Zamolodchikov equations which follow from ŝℓ₂ symmetry, and the Belavin–Polyakov–Zamolodchikov equations which follow from the conformal symmetry of Liouville theory [START_REF] Stoyanovsky | A relation between the Knizhnik-Zamolodchikov and Belavin--Polyakov-Zamolodchikov systems of partial differential equations[END_REF]. The relation to Liouville theory is helpful in solving certain models in the sℓ₂ family, by disentangling the particular details of a model from its general sℓ₂-based properties. For example, the H₃⁺–Liouville relation was very helpful in solving the H₃⁺ model on a disc [START_REF] Hosomichi | Solution of the H + 3 model on a disc[END_REF]. Moreover, playing with the Liouville side of the relation leads to the discovery of new conformal field theories which generalize the H₃⁺ model [START_REF] Ribault | A family of solvable non-rational conformal field theories[END_REF], and which can be considered as members of an extended sℓ₂ family.

The intuitive reason why such a relation exists is that sℓ₂ representations are parametrized by just one number, their spin. So it is not very surprising that the dynamics of say the H₃⁺ model, a theory of three interacting bosons, are in some sense effectively one-dimensional. Applied to a theory with an sℓ_{N>2} symmetry algebra, which may involve as many as N² − 1 bosons, this reasoning suggests that it could be related to a theory of only N − 1 bosons. Such a theory is present in the sℓ_N family: namely, conformal sℓ_N Toda theory, which can be described by the Lagrangian

L = (∂φ, ∂̄φ) + Σ_{i=1}^{N−1} e^{b(e_i, φ)},

where the field φ(z, z̄) and the simple roots e_i live in the (N − 1)-dimensional root space of sℓ_N.
(See for example [START_REF] Fateev | Correlation functions in conformal Toda field theory I[END_REF] for details.) It is therefore natural to investigate whether correlation functions of that theory have a simple relation to correlation functions of other models in the family. Such a relation would be a welcome simplification: for instance, in the sℓ₃ family, we would trade the 8 bosons of the SL(3,ℝ) WZW model for the 2 bosons of sℓ₃ conformal Toda theory.

The investigation of the sℓ_{N>2} families is motivated both by the appearance of groups of rank higher than one in many interesting string theory backgrounds, and by the observation that theories in the sℓ_{N>2} families are qualitatively more difficult, and more generic, than theories in the sℓ₂ family. This is due to features like: infinite fusion multiplicities, correlation functions involving degenerate fields without obeying nontrivial differential equations, and structure constants which can probably not be written in terms of known special functions [START_REF] Fateev | Correlation functions in conformal Toda field theory I[END_REF]. These are serious obstacles in the way of solving such theories. Nevertheless, we do know a strong explicit constraint on the correlation functions of all models which have the full ŝℓ_N symmetry: they obey KZ equations. The aim of the present article is therefore to determine whether the sℓ₃ KZ equations are related to some null-vector equations in conformal sℓ₃ Toda theory, which follow from its symmetry algebra W₃.

In analogy with the sℓ₂ case, we will look for a relation based on Sklyanin's separation of variables [START_REF] Sklyanin | Quantum inverse scattering method[END_REF]. As the KZ equations are closely related to the Gaudin Hamiltonians, we will use Sklyanin's separation of variables for the quantum sℓ₃ Gaudin model. Before using it, we will actually have to work it out, as this has apparently not been fully done in the existing literature. A rather close starting point is available though: the separation of variables for the sℓ₃ Yangian model [START_REF] Sklyanin | Separation of variables in the quantum integrable models related to the Yangian Y[sl(3)[END_REF].

Let us now sketch the correlation functions we are interested in and the relation we are aiming at. Consider a theory with an ŝℓ₃ symmetry algebra at level k. We are interested in correlation functions of generic ŝℓ₃ affine primary fields Φ^j(x|z), where the spin j labels sℓ₃ representations, the variable x is a generic isospin coordinate (a triplet of complex numbers), and z is a coordinate on the complex plane where the field lives. We denote an n-point function of such fields as

Ω_n ≡ ⟨∏_{i=1}^n Φ^{j_i}(x_i|z_i)⟩. (1.1)

We will seek to relate such correlation functions to fairly particular correlation functions in a theory with a W₃ symmetry algebra at parameter b = (k − 3)^{−1/2}, which involve not only n generic W₃-primary fields V_{α_i}(z_i) corresponding to Φ^{j_i}(x_i|z_i), but also 3n − 6 degenerate fields V_{−b⁻¹ω₁}(y_a) with the special value −b⁻¹ω₁ for their W₃ momentum:

Ω̃_n ≡ ⟨∏_{a=1}^{3n−6} V_{−b⁻¹ω₁}(y_a) ∏_{i=1}^n V_{α_i}(z_i)⟩. (1.2)

The number of degenerate fields is of the order of 3n, which allows their worldsheet positions y_a to (approximately) correspond to the 3n components of the isospin variables x₁ ⋯ x_n. This will also allow Ω̃_n to obey some differential equations which may be related to the KZ equations for Ω_n.
Moreover, the tentative relation between Ω_n and Ω̃_n will involve a simple twist function

Θ_n = ∏_{a<b}(y_a − y_b)^λ ∏_{i,a}(y_a − z_i)^µ ∏_{i<j}(z_i − z_j)^ν, (1.3)

for some constants λ, µ, ν to be determined in terms of the level k of our ŝℓ₃ algebra; and the integral transformation K with integration kernel K({x_i}|{y_a}, U|{z_i}) which implements Sklyanin's separation of variables, and may therefore depend on the spins j_i but not on the level k. We will then investigate the validity of the conjecture

Ω_n ?∼ K • Θ_nΩ̃_n ≡ ∫ dU ∏_a dy_a K • Θ_nΩ̃_n,

or more explicitly

Ω_n({x_i}|{z_i}) ?∼ ∫ dU ∏_a dy_a K({x_i}|{y_a}, U|{z_i}) Θ_n({y_a}|{z_i}) Ω̃_n({y_a}|{z_i}). (1.4)

The meaning of the equivalence ∼ here is that both sides obey the same differential equations. If true, this equivalence may then be promoted to a relation between physical correlation functions of specific models, like the relation between the H₃⁺ model and Liouville theory [START_REF] Ribault | H + 3 correlators from Liouville theory[END_REF], but this is not the focus of the present article. This is why we do not worry about such details as the dependence of the correlation functions on antiholomorphic variables.

The article will start with a brief review of the KZ equations and other Ward identities in conformal field theories with ŝℓ_N symmetries, where we will explain how the Gaudin Hamiltonians appear in such equations. We will then review the KZ–BPZ relation in the sℓ₂ case; the reader is not advised to skip that section, as the KZ–BPZ relation is presented in a form suitable for generalization to sℓ₃. In the sℓ₃ case, we will then find that the conjecture (1.4) holds only in the critical level limit k → 3.

2. Gaudin Hamiltonians in conformal field theory

We will review how the Gaudin Hamiltonians appear in Ward identities obeyed by correlation functions in conformal field theories with an ŝℓ_N symmetry algebra. The Ward identities associated with the stress-energy tensor T^J(z) lead to the KZ equations, which involve the ordinary Gaudin Hamiltonians. The Ward identities associated with the cubic field W^J(z) involve higher Gaudin Hamiltonians.

2.1. Knizhnik–Zamolodchikov equations

The affine Lie algebra ŝℓ_N is an infinite-dimensional extension of the simple Lie algebra sℓ_N. The generators t^a of sℓ_N, its structure constants f^{ab}_c, and its metric κ^{ab} are defined by the relations

[t^a, t^b] = f^{ab}_c t^c, κ^{ab} ≡ Tr(t^a t^b), f^{ab}_c f^{cd}_b = 2N κ^{ad}, (2.1)

where here and in the following the trace Tr is taken in the fundamental representation, so that our metric κ^{ab} coincides with the renormalized Killing form of [START_REF] Di Francesco | Conformal field theory New[END_REF](13.13). The affine Lie algebra ŝℓ_N can be formulated as the algebra of currents J^a(z) with the operator product expansion

J^a(z)J^b(w) = −kκ^{ab}/(z − w)² + f^{ab}_c J^c(w)/(z − w) + (J^aJ^b)(w) + O(z − w), (2.2)

where the parameter k is called the level, and the normal-ordered product (J^aJ^b)(w) is defined by the present formula. Conformal symmetry follows from the existence of a Virasoro algebra with central charge c = k(N² − 1)/(k − N), generated by the Sugawara stress-energy tensor

T^J(z) ≡ −1/(2(k − N)) (J^aJ^a)(z), (2.3)

where J^aJ^a is a shorthand for κ_{ab}J^aJ^b. The identification of T^J(z) with the generator of conformal transformations will be at the origin of the KZ equations.
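As a quick sanity check of the conventions in (2.3): setting N = 2 in the Sugawara central charge gives

\[
c = \frac{k(N^2-1)}{k-N}\bigg|_{N=2} = \frac{3k}{k-2},
\]

the value familiar from the H₃⁺ and AdS₃ models of the sℓ₂ family.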
These equations are satisfied by any correlation function (1.1) of n affine primary fields Φ^{j_i}(x_i|z_i) on the complex z-plane, where the spins j_i label representations of sℓ_N, the isospin variables x_i label the states in a given representation, and the complex numbers z_i are positions on the Euclidean two-dimensional spacetime. The affine primary fields are defined by their operator product expansions with the currents J^a(z),

J^a(z)Φ^j(x|w) = D^a Φ^j(x|w)/(z − w) + O(1), (2.4)

where D^a provides a realization of the representation of spin j in terms of differential operators acting on the isospin variables x, so that [D^a, D^b] = f^{ab}_c D^c. We will keep this realization arbitrary, without committing to any particular choice of isospin variables. Let us however give an example of such a choice in the sℓ₂ case:

D⁻ = ∂/∂x, D³ = x ∂/∂x − j, D⁺ = x² ∂/∂x − 2jx. (2.5)

The KZ equations are now obtained by inserting T^J(z) into the correlation function Ω_n, and using the conformal Ward identity for T^J(z) on the one hand, and the affine Ward identities for (J^aJ^a)(z) on the other hand:

⟨T^J(z) ∏_{i=1}^n Φ^{j_i}(x_i|z_i)⟩ = Σ_{i=1}^n [L^J_{0,(i)}/(z − z_i)² + L^J_{−1,(i)}/(z − z_i)] Ω_n = −1/(2(k − N)) Σ_{i=1}^n D^a_{(i)}/(z − z_i) Σ_{ℓ=1}^n D^a_{(ℓ)}/(z − z_ℓ) Ω_n, (2.6)

where the subscript (i) in D^a_{(i)} indicates that it acts on the isospin variables x_i, and by definition L^J_{p,(i)} is the p-th mode of T^J(z) acting on Φ^{j_i}(x_i|z_i), according to

L^J_p Φ^j(x|z) ≡ (1/2πi) ∮_z dw (w − z)^{p+1} T^J(w)Φ^j(x|z). (2.7)

Calling ∆^J the eigenvalues of L^J_0, such that L^J_{0,(i)}Ω_n = ∆^J_{j_i}Ω_n, we first deduce from eq. (2.6) the expression for ∆^J_j in terms of the quadratic Casimir C₂(j) ≡ D^aD^a of the sℓ_N representation with spin j,

∆^J_j ≡ −C₂(j)/(2(k − N)). (2.8)

Now T^J(z) is assumed to generate conformal transformations, and in particular L^J_{−1,(i)}Ω_n = (δ/δz_i)Ω_n. (We define δ/δz_i ≡ ∂/∂z_i|_{x_i} as a derivative at fixed isospin variables.) Together with eq. (2.6), this implies the KZ equations [START_REF] Knizhnik | Current algebra and Wess-Zumino model in two dimensions[END_REF]

(k − N) (δ/δz_i) Ω_n = −H_i Ω_n, H_i ≡ Σ_{ℓ≠i} D^a_{(i)}D^a_{(ℓ)}/(z_i − z_ℓ). (2.9)

The n commuting differential operators H_i are called the Gaudin Hamiltonians. Through its dependence on D^a_{(i)} and D^a_{(ℓ)}, each one of the n Hamiltonians involves all of the n isospin variables x_i, which makes the problem of their simultaneous diagonalization difficult. This difficulty will be solved by Sklyanin's separation of variables, which replaces the isospins x_i with new variables y_i, and combines the Gaudin eigenvalue equations into an essentially equivalent set of equations, each of which involves only one of the new variables.

2.2. Ward identities for the cubic field

In addition to the quadratic invariant tensor κ^{ab} = Tr(t^at^b), it is possible to define the fully symmetric cubic invariant tensor

d^{abc} ≡ Tr(t^at^bt^c + t^at^ct^b). (2.10)

This tensor vanishes in the case of sℓ₂, but not in the cases of sℓ_{N≥3}. It can then be used for constructing the invariant cubic field

W^J(z) ≡ (1/6) ρ d_{abc} (J^a(J^bJ^c))(z), ρ ≡ i(k − N)^{−3/2}. (2.11)

This generalizes the Sugawara construction (2.3), with however two substantial differences. First, while the field T^J(z) is interpreted as the generator of conformal transformations, there is no such geometrical interpretation for W^J(z).
Second, while the field T^J(z) obeys a Virasoro algebra, the field W^J(z) does not obey the higher W₃ algebra [START_REF] Bouwknegt | W symmetry in conformal field theory[END_REF]. In other words, while the Virasoro algebra can be realized as either a coset of ŝℓ₂ or a subalgebra of the enveloping algebra of ŝℓ_{N≥2} (albeit with differing central charges), the W₃ algebra is a coset of ŝℓ₃ but not a subalgebra of the enveloping algebra of ŝℓ_{N≥3}. In analogy with eq. (2.6) we now have

⟨W^J(z) ∏_{i=1}^n Φ^{j_i}(x_i|z_i)⟩ = Σ_{i=1}^n [W^J_{0,(i)}/(z − z_i)³ + W^J_{−1,(i)}/(z − z_i)² + W^J_{−2,(i)}/(z − z_i)] Ω_n = (1/6) ρ d_{abc} Σ_{i=1}^n D^a_{(i)}/(z − z_i) Σ_{ℓ=1}^n D^b_{(ℓ)}/(z − z_ℓ) Σ_{m=1}^n D^c_{(m)}/(z − z_m) Ω_n, (2.12)

where by definition W^J_{p,(i)} is the p-th mode of W^J(z) acting on Φ^{j_i}(x_i|z_i), according to

W^J_p Φ^j(x|z) ≡ (1/2πi) ∮_z dw (w − z)^{p+2} W^J(w)Φ^j(x|z). (2.13)

Calling q^J the eigenvalues of W^J_0, such that W^J_{0,(i)}Ω_n = q^J_{j_i}Ω_n, we first deduce from eq. (2.12) the expression for q^J_j in terms of the cubic Casimir C₃(j) ≡ d_{abc}(D^aD^bD^c + D^aD^cD^b) of the sℓ_N representation with spin j,

q^J_j = (1/6) ρ C₃(j). (2.14)

We further deduce

W^J_{−1,(i)}Ω_n = (1/2) ρ H′_i Ω_n, (2.15)
W^J_{−2,(i)}Ω_n = (1/2) ρ H″_i Ω_n, (2.16)

where the differential operators H′_i and H″_i are higher Gaudin Hamiltonians, whose explicit expressions in terms of D^a_{(i)} can easily be derived from eq. (2.12). But, in contrast to L^J_{−1}, the operators W^J_{−1} and W^J_{−2} are not interpreted as differential operators with respect to z. The equations (2.15) and (2.16), which generalize the KZ equations, are therefore not differential equations, and they will therefore not help us test our conjecture. Nevertheless, they will naturally appear in certain formulas.

3. Review of the sℓ₂ case

In this section we will review the relation between the sℓ₂ KZ equations and BPZ equations. This was originally found by Feigin, Frenkel and Stoyanovsky [START_REF] Stoyanovsky | A relation between the Knizhnik-Zamolodchikov and Belavin--Polyakov-Zamolodchikov systems of partial differential equations[END_REF], using Sklyanin's separation of variables for the sℓ₂ Gaudin model [START_REF] Sklyanin | Quantum inverse scattering method[END_REF]. However, the original derivation relied on a particular choice of the isospin variables. This choice of isospin variables makes the result remarkably simple, but has no analog in the sℓ₃ case, as we will show. We will therefore reanalyze the sℓ₂ case, using whenever possible objects which do have analogs in the sℓ₃ or even sℓ_N cases. We will present systematic derivations of their relevant properties, which will help clarify whether and how they can be generalized to the sℓ₃ case.

3.1. Separation of variables for the sℓ₂ Gaudin model

Let us consider a system of n representations of sℓ₂ with spins j₁ ⋯ j_n. Consider the associated quantum variables D^a_{(i)} such that [D^a_{(i)}, D^b_{(j)}] = δ_{ij} f^{ab}_c D^c_{(i)} with D^a_{(i)}D^a_{(i)} = C₂(j_i). The system comes with parameters z₁ ⋯ z_n. Sklyanin's separation of variables for this system involves three ingredients:

1. A function B(u) of an arbitrary variable u (the spectral parameter), whose zeroes are the separated variables y_i, so that B(y_i) = 0;
2. Another function A(u) such that p_i = A(y_i) is the conjugate momentum to y_i;
3. A kinematical identity, called the characteristic equation, which for any given i relates y_i and p_i.

We now briefly review the construction of these three objects in the sℓ₂ case.
They are built from the sℓ₂ Lax matrix

I(u) ≡ −Σ_{i=1}^n t^a D^a_{(i)}/(u − z_i), (3.1)

whose matrix elements I^β_α(u) obey the identity

(u − v)[I^γ_α(u), I^ε_β(v)] = δ^ε_α I^γ_β(u) − δ^γ_β I^ε_α(u) − δ^ε_α I^γ_β(v) + δ^γ_β I^ε_α(v). (3.2)

With the particular choice eq. (2.5) for the sℓ₂ isospin variable x, the sℓ₂ Lax matrix is explicitly

I(u) = − ( ½Σ_i (x_i ∂/∂x_i − j_i)/(u − z_i)      Σ_i (∂/∂x_i)/(u − z_i)
           Σ_i (x_i² ∂/∂x_i − 2j_i x_i)/(u − z_i)      −½Σ_i (x_i ∂/∂x_i − j_i)/(u − z_i) ). (3.3)

Now choosing

B(u) ≡ I²₁(u), A(u) ≡ I¹₁(u), (3.4)

it is easy to check that

[B(u), B(v)] = 0, [A(u), A(v)] = 0, (3.5)
(u − v)[A(u), B(v)] = B(v) − B(u). (3.6)

These relations ensure that the operators y_i defined as the zeroes of B(u), and p_i = A(y_i), do satisfy

[y_i, y_j] = 0, [p_i, y_j] = δ_{ij}, [p_i, p_j] = 0. (3.7)

In particular, [p_i, B(v)] = B(v)/(y_i − v) agrees with B(v) ∝ ∏_i(v − y_i)/∏_j(v − z_j). There is however a problem of operator ordering in the expressions A(y_i) and B(y_i), because the separated variables y_i are operators. This problem is dealt with in reference [START_REF] Sklyanin | Quantum inverse scattering method[END_REF]. We will ignore it in the forthcoming heuristic derivation of the characteristic equation. Let us start with det(A(y_i) id − I(y_i)) = 0, where id is the identity matrix. (The determinant of a matrix whose first line vanishes is zero.) This implies p_i² − ½(I^β_α I^α_β)(y_i) = 0. This characteristic equation can easily be rewritten as

p_i² − ½ Σ_ℓ C₂(j_ℓ)/(y_i − z_ℓ)² − Σ_ℓ H_ℓ/(y_i − z_ℓ) = 0, (3.8)

where H_ℓ is of course a Gaudin Hamiltonian (2.9), and C₂(j) is the quadratic Casimir of a spin-j representation.

Functional space interpretation. We now wish to consider the quantum variables D^a_{(i)} as differential operators acting on functions Ψ({x_i}) of isospin variables x_i. (An example of such a realization was given in eq. (2.5).) Similarly, the separated variables y_ℓ and their associated momenta p_ℓ may act on functions Ψ̃({y_ℓ}), in particular p_ℓΨ̃ = (∂/∂y_ℓ)Ψ̃. The separation of variables {x_i} → ({y_ℓ}, U) (where the extra variable U will be defined shortly) is then interpreted as an integral transformation K such that

Ψ({x_i}) = K Ψ̃({y_ℓ}, U) = ∫ dU ∏_ℓ dy_ℓ K({x_i}|{y_ℓ}, U) Ψ̃({y_ℓ}, U), (3.9)

where the kernel K is characterized as a common eigenvector of the commuting operators B(u):

(B(u) − U ∏_ℓ(u − y_ℓ)/∏_i(u − z_i)) K({x_i}|{y_ℓ}, U) = 0. (3.10)

The simultaneous diagonalization of the Gaudin Hamiltonians H_ℓ, namely the set of equations (H_ℓ − E_ℓ)Ψ = 0, can now be reformulated using the characteristic equation (3.8), which implies

(∂²/∂y_i² − ½ Σ_ℓ C₂(j_ℓ)/(y_i − z_ℓ)² − Σ_ℓ E_ℓ/(y_i − z_ℓ)) Ψ̃ = 0. (3.11)

The solutions of this equation can be found in factorized form Ψ̃ = ∏_i ψ(y_i). This justifies the name "separation of variables" attributed to the change of variables x_i → y_i.

Some remarks. Finding the kernel K by the simultaneous diagonalization of the operators B(u) is easy in the sℓ₂ case because B(u) = I²₁(u) is a sum of n commuting operators, so that we have K({x_i}|{y_ℓ}, U) = ∏_{i=1}^n k_i(x_i|{y_ℓ}, U), where the equation on k_i is obtained from eq. (3.10) in the limit u → z_i:

((t^a)²₁ D^a_{(i)} + µ_i) k_i(x_i|{y_ℓ}, U) = 0, µ_i ≡ U ∏_ℓ(z_i − y_ℓ)/∏_{j≠i}(z_i − z_j). (3.12)

For example, if the isospin variables are chosen as in eq. (2.5), then we find k_i = e^{−µ_i x_i}.
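As a small check of (3.12) in this realization (with the conventions of (3.3)–(3.4), under the assumption that the entry (t^a)²₁D^a_{(i)} acts as ∂/∂x_i — an index-placement reading which the result k_i = e^{−µ_i x_i} itself supports):

\[
\Big(\frac{\partial}{\partial x_i} + \mu_i\Big)\, e^{-\mu_i x_i} \;=\; (-\mu_i + \mu_i)\, e^{-\mu_i x_i} \;=\; 0 .
\]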
This suggests that we could use other isospin variables µ̃_i such that D^a_{(i)}(t^a)²₁ = −µ̃_i; then we would find k_i ∝ δ(µ̃_i − µ_i), so that we could explicitly perform the integrals in eq. (3.9). This would lead to Ψ({µ̃_i}) ∝ Ψ̃({y_ℓ}, U) with simple proportionality factors, as the change of variables {µ̃_i} → ({y_ℓ}, U) would now be local and described by the functions µ_i({y_ℓ}, U). More generally, for any choice of isospin variables, the kernel K will be of the type

K({x_i}|{y_ℓ}, U|{z_j}) = ∏_{i=1}^n k_i(x_i|{µ_j}), (3.13)

where µ_j({y_ℓ}, U|{z_j}) is defined in eq. (3.12), and we made the z_j-dependence explicit. Thus, in the sℓ₂ case, the kernel K can be determined explicitly, and this is because the operator B(u) is a linear function of the Lax matrix I(u).

Let us finally be more precise about the number of variables y_ℓ. They are defined as the zeroes of a rational function B(u) which, barring extra constraints, has n poles and degree −1. Therefore we must have n − 1 such variables, and the n-th variable U is the eigenvalue of −(t^a)²₁ Σ_{i=1}^n D^a_{(i)}. In conformal field theory applications, we however impose the extra constraint Σ_{i=1}^n D^a_{(i)} = 0, so that B(u) has degree −2. This yields n − 2 variables {y_ℓ}_{ℓ=1⋯n−2}, and U is the eigenvalue of −(t^a)²₁ Σ_{i=1}^n z_i D^a_{(i)}.

3.2. The sℓ₂ Knizhnik–Zamolodchikov equations in Sklyanin variables

We just saw that Sklyanin's separation of variables is a useful tool for simultaneously diagonalizing the sℓ₂ Gaudin Hamiltonians. This problem is closely related to the problem of solving the KZ equations (2.9), which are obtained by replacing the eigenvalues of the Gaudin Hamiltonians H_i with −(k − 2) δ/δz_i. This suggests that it may be interesting to rewrite the KZ equations in terms of Sklyanin's variables. To do this, we will use the characteristic equation (3.8) which such variables obey, and apply it to K⁻¹Ω_n, which is a function of {y_i}, so that p_i K⁻¹Ω_n = (∂/∂y_i) K⁻¹Ω_n. While itself just a kinematical identity, the characteristic equation then allows us to reorganize the KZ equations as

(1/(k − 2) ∂²/∂y² + Σ_{ℓ=1}^n 1/(y − z_ℓ) K⁻¹ (δ/δz_ℓ) K + Σ_{ℓ=1}^n ∆^J_{j_ℓ}/(y − z_ℓ)²) K⁻¹Ω_n = 0, (3.14)

where we drop the index from y_i, and we use ∆^J_j = −C₂(j)/(2(k − 2)) from eq. (2.8). We still have to perform the change of variables on the z_ℓ-derivatives at fixed isospins, i.e. to rewrite K⁻¹(δ/δz_ℓ)K in terms of ∂/∂z_ℓ ≡ ∂/∂z_ℓ|_{y_a}. This is rather easy because of the particular form of the kernel (3.13), where the dependences on {y_a}, U and {z_ℓ} are channeled through the particular functions {µ_i}. This implies that the integral transformation (3.9) just adds first-order differential operators ∂/∂y_a, ∂/∂U to δ/δz_ℓ, so that

K⁻¹ (δ/δz_ℓ) K = ∂/∂z_ℓ + Σ_a (∂y_a/∂z_ℓ)|_{µ_i} ∂/∂y_a + (∂U/∂z_ℓ)|_{µ_i} ∂/∂U. (3.15)

Denoting {y_a} = {y, {y_b}}, we obtain the KZ equations in Sklyanin variables,

(1/(k − 2) ∂²/∂y² + Σ_{ℓ=1}^n 1/(y − z_ℓ) (∂/∂z_ℓ + ∂/∂y) + Σ_b 1/(y − y_b) (∂/∂y_b − ∂/∂y) + Σ_{ℓ=1}^n ∆^J_{j_ℓ}/(y − z_ℓ)²) K⁻¹Ω_n = 0. (3.16)

In this equation the variables are no longer separated, as the variables y_b appear in addition to y.

3.3. Comparison with Virasoro null-vector equations

In the previous subsection, we have studied the KZ equations in a CFT with an ŝℓ₂ symmetry algebra at level k. We will now compare them with null-vector equations in a CFT with a Virasoro symmetry algebra at central charge c = 1 + 6(b + b⁻¹)² where b² ≡ 1/(k − 2).
This is the Virasoro algebra which would be obtained from our ŝℓ₂ algebra by quantum Hamiltonian reduction (see for instance [START_REF] Bouwknegt | W symmetry in conformal field theory[END_REF]), although that reduction does not explain the relation between differential equations which we are about to review. The Virasoro algebra can be formulated in terms of the stress-energy tensor T(z), which obeys

T(z)T(w) = (c/2)/(z − w)⁴ + 2T(w)/(z − w)² + ∂T(w)/(z − w) + O(1). (3.17)

Primary fields V_α(w) of momentum α and conformal dimension ∆_α = α(b + b⁻¹ − α) are defined by

T(z)V_α(w) = ∆_α V_α(w)/(z − w)² + ∂V_α(w)/(z − w) + O(1). (3.18)

This definition does not distinguish the primary fields V_α and V_{b+b⁻¹−α}, which have the same conformal dimension. These fields are therefore assumed to be proportional, with a proportionality constant called the reflection coefficient. This ℤ₂ symmetry can be understood as the action of the Weyl group of sℓ₂ on the space of the momenta α.

The Virasoro representation generated by the degenerate field V_{−1/(2b)} is known to have a null vector at level two. Namely, (L₋₂ + b²L₋₁²)V_{−1/(2b)} = 0, where the modes L_p are defined as in eq. (2.7). This implies that correlation functions involving such a degenerate field obey the Belavin–Polyakov–Zamolodchikov equation [12]

(b² ∂²/∂y² + Σ_{i=1}^n 1/(y − z_i) ∂/∂z_i + Σ_{i=1}^n ∆_{α_i}/(y − z_i)²) ⟨V_{−1/(2b)}(y) ∏_{i=1}^n V_{α_i}(z_i)⟩ = 0. (3.19)

Curiously, this equation is formally identical to the variable-separated KZ equation (3.14). The meaning of this formal similarity is not clear to us. The KZ equations in Sklyanin variables (3.16) actually involve n − 2 variables y₁ ⋯ y_{n−2}, therefore we should rather consider correlation functions of the type

Ω̃_n ≡ ⟨∏_{a=1}^{n−2} V_{−1/(2b)}(y_a) ∏_{i=1}^n V_{α_i}(z_i)⟩. (3.20)

We then expect such correlation functions to be related to Ω_n (1.1) as in equation (1.4). That equation means that the twisted BPZ equations satisfied by Θ_nΩ̃_n are identical to the KZ equations in Sklyanin variables (3.16). This can indeed be checked by explicit calculation, provided we correctly specify the function Θ_n as well as the relation between sℓ₂ spins j_i and Virasoro momenta α_i. Requiring that the α–j relation is compatible with the respective Weyl symmetries j → −j − 1 and α → b + b⁻¹ − α, and that the conformal dimensions ∆^J_j = −j(j + 1)/(k − 2) (eq. (2.8)) and ∆_α are related by a constant shift, determines the relation

α = b(j + 1) + 1/(2b), ∆_α = ∆^J_j + 1/2 + 1/(4b²). (3.21)

We still have to specify the values of the parameters λ, µ, ν in the ansatz (1.3) for the function Θ_n. We could determine these values by requiring the twisted BPZ equations to agree with eq. (3.16), and we would find

λ = 1/(2b²), µ = −1/(2b²), ν = 1/(2b²). (3.22)

There are simple concurring arguments for the values of λ and ν. First, the value of λ is determined by the requirement of continuity of Θ_nΩ̃_n at y_a = y_b. This requirement plays an important role in the boundary H₃⁺ model [START_REF] Hosomichi | Solution of the H + 3 model on a disc[END_REF]. Second, the value of ν follows from checking equation (1.4) in the simplest case n = 2, when there are no y_a variables and no BPZ equations.

Let us now comment on this twist function Θ_n and its relation to free field correlation functions. In this paragraph we will consider full correlation functions with dependences on both holomorphic and antiholomorphic variables, and the full twist factor which is thus |Θ_n|².
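As a quick check of these two determinations, one can compute directly from ∆_α = α(b + b⁻¹ − α):

\[
\Delta_{b(j+1)+\frac{1}{2b}} = \Big(b(j+1)+\tfrac{1}{2b}\Big)\Big(-bj+\tfrac{1}{2b}\Big) = -b^2 j(j+1) + \tfrac12 + \tfrac{1}{4b^2} = \Delta^J_j + \tfrac12 + \tfrac{1}{4b^2},
\]

in agreement with (3.21), and, for the continuity requirement at y_a = y_b,

\[
\Delta_{-\frac{1}{2b}} = -\tfrac12 - \tfrac{3}{4b^2}, \qquad \Delta_{-\frac1b} = -1 - \tfrac{2}{b^2}, \qquad 2\Delta_{-\frac{1}{2b}} - \Delta_{-\frac1b} = \tfrac{1}{2b^2} = \lambda,
\]

so that the leading fusion channel V_{−1/(2b)}V_{−1/(2b)} → V_{−1/b} makes Θ_nΩ̃_n regular at coinciding points.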
With the above values (3.22) for λ, µ, ν, we observe that the inverse twist factor |Θ_n|⁻² coincides with the free field correlation function formally obtained from Ω̃_n by taking the fields V_{α_i}(z_i) to have momenta α_i = 1/(2b) instead of α_i = b(j_i + 1) + 1/(2b). This means

|Θ_n|⁻² = ⟨∏_{a=1}^{n−2} V_{−1/(2b)}(y_a) ∏_{i=1}^n V_{1/(2b)}(z_i)⟩_free. (3.23)

This interpretation of Θ_n plays a role in a recent proof of the FZZ conjecture [START_REF] Hikida | The FZZ-Duality Conjecture -A Proof[END_REF], see also [START_REF] Giribet | A twisted FZZ-like dual for the 2D black hole[END_REF]. For now, let us explain the origin of this observed relation by studying the b → 0 limit of the H₃⁺–Liouville relation. This relation can be written as Ω_n ∼ K|Θ_n|²Ω̃_n, whose factors we now analyze:

• The Liouville correlation function Ω̃_n reduces to ⟨∏_{a=1}^{n−2} V_{−1/(2b)}(y_a) ∏_{i=1}^n V_{1/(2b)}(z_i)⟩ as b → 0. And it turns out that this coincides with a free field correlation function, because the momentum conservation condition is obeyed. Namely, the sum of the momenta is (n − 2) × (−1/(2b)) + n × (1/(2b)) = 1/b, which coincides with the dominant term in the Liouville background charge 1/b + b. Therefore, according to standard path-integral reasoning in Liouville theory [START_REF] Zamolodchikov | Structure constants and conformal bootstrap in Liouville field theory[END_REF], we have

Ω̃_n ∼_{b→0} R_n ⟨∏_{a=1}^{n−2} V_{−1/(2b)}(y_a) ∏_{i=1}^n V_{1/(2b)}(z_i)⟩_free = R_n (∏(y_a − z_i)/(∏(y_a − y_b) ∏(z_i − z_j)))^{1/b²},

where R_n is b-independent.

• The H₃⁺ correlation function Ω_n is expected to have a finite "minisuperspace" limit [START_REF] Teschner | The mini-superspace limit of the SL(2,C)/SU(2) WZNW model[END_REF] as b → 0, which is equivalent to k → ∞ where k is the level.

• The separation of variables K is b-independent by definition.

This concludes our reminder of the KZ–BPZ relation in the sℓ₂ case. In the next section we will analyze the sℓ₃ KZ equations along the same lines.

4. The sℓ₃ case

4.1. Separation of variables for the sℓ₃ Gaudin model

To the best of our knowledge, the full quantum separation of variables for the sℓ₃ Gaudin model has not been derived yet. By the full separation of variables we mean the determination of A(u), B(u) and a characteristic equation, like in the sℓ₂ case.¹ Sklyanin did however derive the full separation of variables for the classical sℓ₃ Gaudin model [START_REF] Sklyanin | Separation of variables in the classical integrable SL(3) magnetic chain[END_REF]. In order to derive the quantum version, we will use Sklyanin's separation of variables for models with an sℓ₃ Yangian symmetry [START_REF] Sklyanin | Separation of variables in the quantum integrable models related to the Yangian Y[sl(3)[END_REF], see also [START_REF] Smirnov | Separation of variables for quantum integrable models related to U q ( sℓ N )[END_REF] for a generalization to sℓ_N. This Yangian symmetry is present in the Gaudin model, which will allow us to derive its quantum characteristic equation from the Yangian's. The Yangian generators are encoded in the matrix

Y(u) ≡ (id − η t^aD^a_{(1)}/(u − z₁))(id − η t^aD^a_{(2)}/(u − z₂)) ⋯ (id − η t^aD^a_{(n)}/(u − z_n)) (4.1)
= id + ηI(u) + ½η² :I²:(u) + (1/6)η³ :I³:(u) + ⋯, (4.2)

where the definition of the normal ordering in :I²:(u) and :I³:(u) follows from the chosen ordering of the factors of Y(u). This object can be shown to obey the Yangian algebra

(u − v)Y^γ_α(u)Y^ε_β(v) + ηY^ε_α(u)Y^γ_β(v) = (u − v)Y^ε_β(v)Y^γ_α(u) + ηY^ε_α(v)Y^γ_β(u). (4.3)
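As a consistency check (expanding (4.2) in η and collecting the order-η² terms of (4.3); the :I²: contributions cancel between the two sides), the Yangian relation reproduces the exchange relation for the sℓ₃ Lax matrix:

\[
(u-v)\,[I^\gamma_\alpha(u),\, I^\epsilon_\beta(v)] \;=\; \delta^\epsilon_\alpha\big(I^\gamma_\beta(u) - I^\gamma_\beta(v)\big) \;-\; \delta^\gamma_\beta\big(I^\epsilon_\alpha(u) - I^\epsilon_\alpha(v)\big),
\]

which is precisely the sℓ₂-type identity (3.2).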
Sklyanin's separated variables y_ℓ for the Yangian [START_REF] Sklyanin | Separation of variables in the quantum integrable models related to the Yangian Y[sl(3)[END_REF] are defined as the zeroes of a function

B^Y(u) = Y²₃(u)Y¹₂(u)Y²₃(u − η) − Y²₃(u)Y¹₃(u)Y²₂(u − η) + Y¹₃(u)Y²₃(u)Y¹₁(u − η) − Y¹₃(u)Y²₁(u)Y¹₃(u − η), (4.4)

while the conjugate variables are given by X_i = A^Y(y_i) where

A^Y(u) = Y¹₁(u) − Y²₃(u − η)⁻¹ Y¹₃(u − η) Y²₁(u). (4.5)

Let us point out that interesting structural insight into these formulas for A^Y(u) and B^Y(u) was obtained in [START_REF] Chervov | Manin matrices and Talalaev's formula[END_REF], based on general properties of matrices with non-commuting elements. The functions A^Y(u) and B^Y(u) obey the commutation relations

[A^Y(u), A^Y(v)] = 0, [B^Y(u), B^Y(v)] = 0,
((u − v)/η) [A^Y(u), B^Y(v)] = B^Y(u)A^Y(v) Y²₃(u − η)⁻¹ Y²₃(u)⁻¹ Y²₃(v − η)Y²₃(v) − B^Y(v)A^Y(u), (4.6)

so that

[y_i, y_j] = 0, [X_i, y_j] = −ηδ_{ij}X_i, [X_i, X_j] = 0. (4.7)

The quantum characteristic equation is then

X³_i − X²_i t₁(y_i) + X_i t₂(y_i − η) − d(y_i − 2η) = 0, (4.8)

where we omitted the spectral parameter u in I^β_α(u), and used the sℓ₃-defining relation I¹₁ + I²₂ + I³₃ = 0. We then obtain the following quantum characteristic equation of the sℓ₃ Gaudin model:

p³_i − p_i • (½(I^β_αI^α_β)(y_i) + ¼(I^β_αI^α_β)′(y_i)) + (1/6)(I^β_αI^γ_βI^α_γ + I^α_βI^β_γI^γ_α)(y_i) = 0. (4.16)

Notice that the particular cubic invariant which appears in this formula is related to the fully symmetric invariant tensor d^{abc} of eq. (2.10). Using the definition (3.1) of I(u), we indeed have

(I^β_αI^γ_βI^α_γ + I^α_βI^β_γI^γ_α)(u) = −d_{abc} Σ_{i=1}^n D^a_{(i)}/(u − z_i) Σ_{ℓ=1}^n D^b_{(ℓ)}/(u − z_ℓ) Σ_{m=1}^n D^c_{(m)}/(u − z_m). (4.17)

This could further be expressed in terms of the higher Gaudin Hamiltonians of Section 2.2, so that the characteristic equation could help simultaneously diagonalize these Hamiltonians.

Some remarks. Like in the sℓ₂ case, Sklyanin's change of variables can be interpreted as an integral transformation K (3.9) acting on a functional space. The kernel K of K now obeys

(B(u) − U ∏_ℓ(u − y_ℓ)/∏_i(u − z_i)³) K({x_i}|{y_ℓ}, U) = 0. (4.18)

However, the simultaneous diagonalization of the commuting operators B(u) is now a difficult problem, as B(u) is now cubic and not linear in I(u), and thus no longer a sum of n commuting operators. Therefore, the kernel K is no longer of the form (3.13). Certainly, no choice of isospin variables exists such that the kernel K has a simple expression. Another difference with the sℓ₂ case is the counting of variables: generic functions of the sℓ₃ isospin coordinates x_i should correspond to functions of not only y_i and U, but also of two extra variables. These extra variables are necessary for the transformation K to be invertible. We will neglect this issue², as well as the issue of precisely defining the relevant functional spaces, and we will assume K to be invertible. Let us finally determine the number of separated variables y_i — that is, the number of zeroes of B(u). Barring extra constraints, this is of course 3n − 3. In conformal field theory applications, we however impose the extra constraints Σ_{i=1}^n D^a_{(i)} = 0, so that I(u) has degree −2.
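(For the count used below, recall the standard fact about rational functions: a rational function with poles of total order P at finite points, and vanishing to order m at infinity, has exactly P − m finite zeroes. Here B(u) has poles of order at most 3 at each of the n points z_i,

\[
B(u) = \frac{N(u)}{\prod_{i=1}^n (u - z_i)^3}, \qquad \deg N = 3n - m,
\]

so a behaviour B(u) = O(u^{−6}) at infinity, i.e. m = 6, would give 3n − 6 zeroes, consistent with the kernel equation (4.18).)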
The constraint Σ_{i=1}^n D^a_{(i)} = 0 does not immediately imply that B(u) (eq. (4.11)), which is cubic in I(u), has degree −6, because the constraint only holds when directly applied to a physical correlation function, and the matrix elements of I(u) generically do not commute with each other. Rather, the degree of B(u) depends on its precise form and should be evaluated by explicit calculation. We find that each one of the four terms of B(u) has degree −5, while B(u) itself has degree −6. This means that there are 3n − 6 separated variables. Therefore, as in the sℓ₂ case, the number of separated variables vanishes for n = 2.

4.2. The sℓ₃ Knizhnik–Zamolodchikov equations in Sklyanin variables

Let us consider a conformal field theory with an ŝℓ₃ symmetry algebra. The Ward identities consist in the n KZ differential equations (2.9), plus 2n extra non-differential relations (2.15) and (2.16), which express W^J_{−1,(i)} and W^J_{−2,(i)} in terms of differential operators acting on isospin variables. Let us reorganize all these relations by injecting them into the characteristic equation of the quantum sℓ₃ Gaudin model (4.16). The result is schematically of the form

(∂³/∂y³ + (k − 3) ∂/∂y • T^J(y) − ½(k − 3)∂T^J(y) − (1/ρ)W^J(y)) K⁻¹Ω_n = 0, (4.19)

where the constant ρ was defined in eq. (2.11). Explicitly,

(∂³/∂y³ + (k − 3) ∂/∂y • Σ_{i=1}^n [1/(y − z_i) K⁻¹(δ/δz_i)K + ∆^J_{j_i}/(y − z_i)²] + ½(k − 3) Σ_{i=1}^n [1/(y − z_i)² K⁻¹(δ/δz_i)K + 2∆^J_{j_i}/(y − z_i)³] − (1/ρ) Σ_{i=1}^n [K⁻¹W^J_{−2,(i)}K/(y − z_i) + K⁻¹W^J_{−1,(i)}K/(y − z_i)² + q^J_{j_i}/(y − z_i)³]) K⁻¹Ω_n = 0, (4.20)

where Ω_n is still an n-point function of the type (1.1). In this equation, the terms involving W^J_{−1,(i)} and W^J_{−2,(i)} refer to correlation functions involving descendants of the primary fields Φ^j(µ|z). We have little control over such non-differential terms, and we would like to ignore them in the following. This could be done by considering appropriate linear combinations of our 3n − 6 equations. (Remember that the variable y spans the 3n − 6 separated variables {y_a}.) We will for simplicity adopt the alternative approach of working modulo the unwanted terms. Let us make this precise by defining the space D_S of differential operators in y_a, z_i (including functions of y_a, z_i) which are symmetric under permutations of {y₁, y₂ ⋯ y_{3n−6}}. For any choice {y_a} = {y, y_b} of a distinguished variable y we further define

F₂(y) ≡ Σ_{i=1}^n 1/(y − z_i) D_S + Σ_{i=1}^n 1/(y − z_i)² D_S. (4.21)

By a simple counting of variables it can be realized that any differential operator which is symmetric under permutations of {y_b} does belong to

F₃(y) ≡ Σ_{i=1}^n 1/(y − z_i) D_S + Σ_{i=1}^n 1/(y − z_i)² D_S + Σ_{i=1}^n 1/(y − z_i)³ D_S.

But it does not always belong to F₂(y), so we can define a nontrivial equivalence ∼ as the equality modulo F₂(y). Thus, equation (4.20) simplifies to

(∂³/∂y³ + ∂/∂y • Σ_{i=1}^n (k − 3)/(y − z_i) K⁻¹(δ/δz_i)K + Σ_{i=1}^n (k − 3)∆^J_{j_i}/(y − z_i)² ∂/∂y − Σ_{i=1}^n [(1/ρ)q^J_{j_i} + (k − 3)∆^J_{j_i}]/(y − z_i)³) K⁻¹Ω_n ∼ 0. (4.22)

Having thus eliminated W^J_{−1,(i)} and W^J_{−2,(i)}, we are left with operators δ/δz_i, which we recall are z_i-derivatives at fixed isospin variables. We expect K⁻¹(δ/δz_i)K to be a combination of the operators ∂/∂z_i, ∂/∂y_a and ∂/∂U, although we do not know how to compute it. And it is not clear whether K⁻¹(δ/δz_i)K is a first-order differential operator, as happened in the sℓ₂ case (see eq. (3.15)).
Nevertheless, we do know that K⁻¹(δ/δz_i)K is independent from the level k, which is a parameter of our conformal field theory but neither of the Gaudin model nor of its separation of variables. Therefore, we will still be able to extract useful information from eq. (4.22), a sum of terms with various power-like dependences on (k − 3), by considering all terms which are not linear in (k − 3).

4.3. W₃ null-vector equations

Let us first briefly explain why we try to relate conformal field theories with an ŝℓ₃ symmetry at level k to theories with a W₃ symmetry at central charge

c = 2 + 24(b + b⁻¹)² where b² = 1/(k − 3). (4.23)

A theory with an ŝℓ₃ symmetry like the SL(3,ℝ) WZW model can be written in terms of eight quantum fields, as sℓ₃ is eight-dimensional. However, affine ŝℓ₃ highest-weight representations are parametrized by just two numbers, namely the two components of the sℓ₃ spin j. This suggests that the non-trivial dynamics of the theory really take place in a two-dimensional space, where j would play the role of the momentum. There exists such an sℓ₃-based theory which involves just two interacting quantum fields: the conformal sℓ₃ Toda theory, which has a W₃ symmetry algebra. The correct parameter b for this algebra is suggested by the Drinfeld–Sokolov reduction, which realizes W₃ as a kind of coset of the ŝℓ₃ algebra.

W₃ algebra and primary fields. Referring to the review article [START_REF] Bouwknegt | W symmetry in conformal field theory[END_REF] for more details, we recall that the W₃ algebra is spanned by the modes of the fields T(z) = Σ_{n∈ℤ} L_n z^{−n−2} and W(z) = Σ_{n∈ℤ} W_n z^{−n−3}. Let us write the defining relations of the W₃ algebra in the form of commutation relations for the modes L_n, W_n rather than operator product expansions for the fields T(z), W(z), as this form is more convenient for finding null vectors in representations:

[L_m, L_n] = (m − n)L_{m+n} + (c/12)m(m² − 1)δ_{m+n,0}, (4.24)
[L_m, W_n] = (2m − n)W_{m+n}, (4.25)
[W_m, W_n] = (c/360)m(m² − 1)(m² − 4)δ_{m+n,0} + (m − n)((1/15)(m + n + 2)(m + n + 3) − (1/6)(m + 2)(n + 2))L_{m+n} + (16/(22 + 5c))(m − n)Λ_{m+n}, (4.26)

where we introduce, using the normal ordering :L_mL_n: = L_mL_n if m ≤ n,

Λ_m = Σ_{n∈ℤ} :L_nL_{m−n}: + (1/5)x_mL_m with x_{2ℓ} = (1 + ℓ)(1 − ℓ), x_{2ℓ+1} = (ℓ + 2)(1 − ℓ). (4.27)

A primary field V_α of the W₃ algebra, of momentum α, conformal dimension ∆_α and charge q_α, is defined by its operator product expansions with T(z), eq. (3.18), and with W(z):

W(z)V_α(w) = q_αV_α(w)/(z − w)³ + W_{−1}V_α(w)/(z − w)² + W_{−2}V_α(w)/(z − w) + O(1). (4.28)

The momenta α now belong to the two-dimensional root space of the Lie algebra sℓ₃. A basis of this space is provided by the simple roots e₁, e₂ whose scalar products appear in the Cartan matrix (2 −1; −1 2). We may also use the dual basis ω₁ = (2/3)e₁ + (1/3)e₂, ω₂ = (1/3)e₁ + (2/3)e₂ such that (e_i, ω_j) = δ_{ij}. We decompose the momenta along this dual basis: α = α₁ω₁ + α₂ω₂, and we introduce the vector Q = (b + b⁻¹)(e₁ + e₂). The conformal dimension and charge are parametrized in terms of the momentum as

∆_α = ½(α, 2Q − α), (4.29)
q_α = (i/27)(α₁ − α₂)(2α₁ + α₂ − 3(b + b⁻¹))(α₁ + 2α₂ − 3(b + b⁻¹)). (4.30)

Degenerate fields. Let us now determine the degenerate fields to be used in the correlation function Ω̃_n (1.2) which appears in our conjecture. We wish Ω̃_n to obey third-order differential equations, which would correspond to the sℓ₃ KZ equations in Sklyanin variables. This suggests that we use the simplest non-trivial degenerate fields, which have null vectors at levels 1, 2 and 3. But there are actually four such degenerate fields, with α ∈ {−bω₁, −bω₂, −b⁻¹ω₁, −b⁻¹ω₂}, whereas we want only one of them to appear in Ω̃_n, because the original isospin variables are invariant under permutations of the Sklyanin variables.
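A quick evaluation of (4.29) at the degenerate momentum α = −b⁻¹ω₁, using (ω₁, ω₁) = 2/3 and (ω₁, e₁ + e₂) = 1:

\[
\Delta_{-b^{-1}\omega_1} = \tfrac12\Big(-\tfrac{2}{b}(b+b^{-1})(\omega_1, e_1+e_2) - \tfrac{1}{b^2}(\omega_1,\omega_1)\Big) = -1 - \tfrac{1}{b^2} - \tfrac{1}{3b^2} = -1 - \tfrac{4}{3b^2},
\]

in agreement with (4.36) below.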
By analogy with the sℓ₂ case, we focus on the fields V_{−b⁻¹ω₁} and V_{−b⁻¹ω₂}, whose momenta go to zero in the critical level limit k → 3. They are related to the other two fields by the W₃ algebra self-duality b → b⁻¹, which is however not an invariance of the ŝℓ₃ algebra. And they are related to each other by the Dynkin diagram automorphism ω₁ ↔ ω₂ of sℓ₃, which acts on general primary fields V_α as (∆_α, q_α) → (∆_α, −q_α). This symmetry does have a counterpart in the separation of variables for the sℓ₃ Gaudin model. The construction of the separated variables was indeed based on the introduction of an sℓ₃ Lax matrix I(u) (3.1), so that sℓ₃ generators act in the fundamental representation. But we could alternatively have used the antifundamental representation, which is related to the fundamental by the Dynkin diagram automorphism. With our conventions, our choice of the fundamental representation will turn out to correspond to the choice of the degenerate field V_{−b⁻¹ω₁} of the W₃ algebra. The three corresponding null-vector equations are [START_REF] Watts | Fusion in the W(3) algebra[END_REF]

(iW₋₁ + (b/2 + 5/(6b)) L₋₁) V_{−b⁻¹ω₁} = 0, (4.31)
(iW₋₂ − (2/(3b)) L₋₂ − b L₋₁²) V_{−b⁻¹ω₁} = 0, (4.32)
(iW₋₃ − (b/2 + 1/(6b)) L₋₃ + b L₋₁L₋₂ + b³ L₋₁³) V_{−b⁻¹ω₁} = 0. (4.33)

The last null-vector equation implies that any correlation function with one degenerate field obeys E₁⟨V_{−b⁻¹ω₁}(y) ∏_{i=1}^n V_{α_i}(z_i)⟩ = 0, where

E₁ ≡ ∂³/∂y³ + (1/b²) ∂/∂y • Σ_{i=1}^n [1/(y − z_i) ∂/∂z_i + ∆_{α_i}/(y − z_i)²] + (1/(2b²) + 1/(6b⁴)) Σ_{i=1}^n [1/(y − z_i)² ∂/∂z_i + 2∆_{α_i}/(y − z_i)³] + (i/b³) Σ_{i=1}^n [W_{−2,(i)}/(y − z_i) + W_{−1,(i)}/(y − z_i)² + q_{α_i}/(y − z_i)³]. (4.34)

This may be compared with eq. (4.20), which is formally similar, or even identical if the term with coefficient 1/(6b⁴) is absorbed into the other terms by redefining W_{−1,(i)} and q_{α_i}. Like in the sℓ₂ case, the meaning of this formal similarity is not clear. Now the equations obeyed by correlation functions with several degenerate fields like Ω̃_n, eq. (1.2), are significantly more complicated than E₁, because eliminating W₋₁, W₋₂ descendants of the degenerate fields requires the use of the first two null-vector equations (4.31), (4.32). Still denoting {y_a} = {y, y_b}, we obtain the equation E₂Ω̃_n = 0 with

E₂ ≡ E₁ + (1/b²) Σ_b 1/(y − y_b) ∂²/∂y_b² + (1/b²) ∂/∂y • Σ_b [1/(y − y_b) ∂/∂y_b + ∆_{−b⁻¹ω₁}/(y − y_b)²] + (2/(3b⁴)) Σ_{b,i} 1/((y − y_b)(y_b − z_i)) [∂/∂z_i + ∆_{α_i}/(y_b − z_i)] + (2/(3b⁴)) Σ_{b≠c} 1/((y − y_b)(y_b − y_c)) [∂/∂y_c + ∆_{−b⁻¹ω₁}/(y_b − y_c)] − (2/(3b⁴)) Σ_b 1/(y − y_b)² [∂/∂y_b + ∂/∂y] + ((1/b² + 1/b⁴)∆_{−b⁻¹ω₁} + (i/b³)q_{−b⁻¹ω₁}) Σ_b 1/(y − y_b)³, (4.35)

where

∆_{−b⁻¹ω₁} = −1 − 4/(3b²), q_{−b⁻¹ω₁} = −(i/(27b³))(4 + 3b²)(5 + 3b²). (4.36)

Relating W₃ momenta to sℓ₃ spins. In order to compare the equation E₂Ω̃_n = 0 with the KZ equations in Sklyanin variables (4.22), we should specify how we relate ŝℓ₃ primary fields Φ^j(µ|z) to W₃ primary fields V_α(z). We are looking for a relation between α and j which translates into a simple relation between (∆_α, q_α) and (∆^J_j, q^J_j). We propose

α = −bj + b⁻¹(e₁ + e₂) ⟹ ∆_α = ∆^J_j + 2 + b⁻², q_α = q^J_j, (4.37)

where we use the following expressions for (∆^J_j, q^J_j) defined in eqs.
(2.8) and (2.14) ∆ J j = - 1 k -3 1 2 (j, j + 2e 1 + 2e 2 ) , (4.38) q J j = 1 (k -3) 3 2 i 27 [j 1 -j 2 ] [2(j 1 + 1) + (j 2 + 1)] [(j 1 + 1) + 2(j 2 + 1)] , (4.39) where the components (j 1 , j 2 ) of the spin j are defined as j = j 1 ω 1 + j 2 ω 2 . Notice that our relation between α and j maps the principal unitary series of sℓ 3 representations j ∈ -e 1 -e 2 + iR 2 to the W 3 representations which appear in the physical spectrum of conformal sℓ 3 Toda theory [START_REF] Fateev | Correlation functions in conformal Toda field theory I[END_REF] α ∈ Q + iR 2 . Such choices of α or j lead to real values of (∆, q) if k > 3. However, there does not need to be any relation between the sℓ 3 creation operators W J -1 , W J -2 and their W 3 counterparts W -1 , W -2 . While relating L J -1 = δ δz to L -1 = ∂ ∂z , though difficult in practice, is in principle a simple matter of performing the change of variables, there is apparently no principle which would determine how W J -1 , W J -2 would behave through the change of variables. This is why we work modulo F 2 (y), ignoring the non-differential terms which involve such operators, and being left with differential equations. Now the presence of degenerate fields in correlation functions of W 3 fields does not necessarily lead to differential equations, a fact which makes conformal sℓ 3 Toda theory much more complicated than Liouville theory [START_REF] Fateev | Correlation functions in conformal Toda field theory I[END_REF]. Differential equations actually appear provided the number of degenerate fields is large enough. We are inserting 3n -6 degenerate fields V -b -1 ω 1 together with the n generic fields V α i , which is enough for eliminating the 2n terms W -1,(i) , W -2,(i) and being left with n -6 differential equations. Twisting W 3 null-vector equations. Finally, we should determine the twist factor Θ n which appears in the conjecture (1.4), so as to be able to compute E 3 ≡ Θ n E 2 Θ -1 n such that E 3 • Θ n Ωn = 0 . (4.40) The values of the parameters λ, ν can be derived as in the sℓ 2 case. Requiring continuity of Θ n Ωn at y a = y b implies λ = 2∆ -b -1 ω 1 -∆ -2b -1 ω 1 = 2 3b 2 , and requiring that the conjecture (1.4) holds in the case n = 2 implies ν = 2∆ α -2∆ J j = 2 b 2 + 4, see eq. (4.37). Notice however that this only determines ν up to b-independent terms, as the unknown b-independent kernel K may also contribute. These constraints leave the parameter µ arbitrary. We will obtain an ansatz for µ, and confirm the values of λ and ν, by generalizing the relation (3.23) between Θ n and free field correlation functions which was observed in the sℓ 2 case. In the sℓ 3 case the analogous relation is |Θ n | -2 = 3n-6 a=1 V -b -1 ω 1 (y a ) n i=1 V b -1 (e 1 +e 2 ) (z i ) f ree . (4.41) This ansatz leads to the values λ = 2 3b 2 , µ = - 1 b 2 , ν = 2 b 2 . (4.42) These values will turn out to be the only ones such that, modulo F 2 (y), the only non-differential terms in E 3 are of the type c i (y-z i ) 3 . This is a rather non-trivial requirement as many non-differential terms can potentially appear (cf Appendix A.1). Working modulo F 2 (y) eq. 
(4.21), and using the relation (4.37) between sℓ 3 and W 3 representation data, we indeed compute E 3 ∼ ∂ 3 ∂y 3 + 1 b 2 D 2 + 1 b 4 D 1 + 1 b 2 n i=1 ∆ J j i (y -z i ) 2 ∂ ∂y + n i=1 i b 3 q J j i -1 b 2 ∆ J j i (y -z i ) 3 , (4.43) where we introduced two differential operators D 1 and D 2 of respective orders 1 and 2, which depend neither on the field momenta α i nor on the model parameter b, The comparison for general b. To start with, the non-differential terms agree. This is actually a very non-trivial statement, as we started with complicated non-differential terms in eq. ( 4.35) an then generated more terms by twisting with Θ n . The freedoms to choose the three parameters λ, µ, ν of Θ n and to ignore terms belonging to F 2 (y) is a priori not sufficient to ensure the dozens of required cancellations, which nevertheless occur as can be seen in explicit calculations. These calculations use some helpful identities which are gathered in Appendix A.1. The existence of a simple twist which simplifies the differential equations obeyed by correlation functions involving many identical degenerate fields might well be a general phenomenon in conformal field theory, as we now see that it happens for the simplest degenerate field in theories with W 3 symmetry, in addition to the already known cases of the two simplest degenerate fields in theories with Virasoro symmetry [START_REF] Stoyanovsky | A relation between the Knizhnik-Zamolodchikov and Belavin--Polyakov-Zamolodchikov systems of partial differential equations[END_REF][START_REF] Ribault | A family of solvable non-rational conformal field theories[END_REF]. D 1 ≡ - i 1 (y -z i ) 2 ∂ ∂y + 2 i 1 y -z i 2 ∂ ∂y +3 i 1 y -z i b 1 y -y b ∂ ∂y b - ∂ ∂y -2 b =c 1 y -y b 1 y b -y c ∂ ∂y b - ∂ ∂y , (4.44) D 2 ≡ i 1 y -z i ∂ ∂y ∂ ∂z i + 3 ∂ ∂y + b 1 y -y b ∂ ∂y b - ∂ ∂y ∂ ∂y b + 2 ∂ ∂y + b 1 (y -y b ) 2 ∂ ∂y . ( 4 Let us then examine the term 1 b 2 D 2 in eq. ( 4.43). Agreement with the corresponding term in eq. (4.22) would occur provided ∂ ∂y • i 1 y -z i K -1 δ δz i K ? ∼ D 2 . (4.46) It seems technically challenging to check this identity. But remember that our inability to explicitly perform Sklyanin's change of variables for δ δz i does not contaminate the other terms in our equations, as we do know that the change of variables must be independent from the parameter b = (k -3) -1 2 . Let us now examine the term 1 b 4 D 1 . We would like this term to vanish modulo F 2 (y), as no such term is present in eq. (4.22). However, it is rather obvious that D 1 does not belong to F 2 (y), although it has quite a few remarkable properties. This is explained in detail in the Appendix A.3. As a result, the conjecture cannot hold for general values of b. The critical level limit b → ∞. We notice that the term 1 b 4 D 1 , which is responsible for the failure of our conjecture, vanishes in the b → ∞ limit. Therefore, the conjecture has better chances to hold in that limit. To completely prove that it does, we still need to clear one subtlety with the term 1 b 2 D 2 . This term seems to vanish in the b → ∞ limit but actually it does not. This is because near b → ∞ our correlation functions do not have finit limits. Rather, the Toda correlation function Ωn = a V -b -1 ω 1 (y a ) i V -bj i +b -1 (e 1 +e 2 ) (z i ) involves "heavy" fields V -bj i +b -1 (e 1 +e 2 ) (z i ) whose momenta grow as b. 
On general grounds (see for instance [START_REF] Fateev | Correlation functions in conformal Toda field theory I[END_REF]), it is therefore expected that Ωn ∼ b→∞ e b 2 S({z i }) T n where S and T n are b-independent functions, and S depends only on {z i } and not on {y a }. The differential operator 1 b 2 D 2 , which contains derivatives with respect to z i , may yield a finite contribution when such derivatives act on e b 2 S({z i }) . We should therefore check whether eq. (4.46) holds to the leading order in b 2 when acting on functions of the type e b 2 S({z i }) T n . This is actually the case, because the only term in D 2 with z i -derivatives is ∂ ∂y • i 1 y-z i ∂ ∂z i , and K -1 δ δz i K S({z i }) = ∂ ∂z i S({z i }). This completes the proof of the conjecture (1.4) in the critical level limit b → ∞ ⇔ k → 3. Notice that this b → ∞ limit is not sensitive to the twist function Θ n . This is because the exponents λ, µ, ν (4.42) vanish in this limit so that Θ n → b→∞ 1. The minisuperspace limit b → 0. In this limit, the discrepant term 1 b 4 D 1 , which is responsible for the failure of the conjecture (1.4) for general b, grows larger. We may therefore obtain some insights on the reasons for this failure. As in the sℓ 2 case, we will consider full correlation functions (with both holomorphic and antiholomorphic dependences) and use path-integral reasonings in sℓ 3 Toda theory. For full correlation functions, the conjecture reads Ω n ∼ K K|Θ n | 2 Ωn . As in the sℓ 2 case, the transformation K is b-independent, Ω n is expected to have a finite limit, and the Toda correlation function Ωn behaves as Ωn ∼ b→0 R n 3n-6 a=1 V -b -1 ω 1 (y a ) n i=1 V b -1 (e 1 +e 2 ) (z i ) where R n is b-independent. Therefore Ωn simplifies in the b → 0 limit but, in contrast to the sℓ 2 case, its leading behaviour does not reduce to a free field correlation function. This is because the simplified correlation function 3n-6 a=1 V -b -1 ω 1 (y a ) n i=1 V b -1 (e 1 +e 2 ) (z i ) does not obey momentum conservation, given the value 2Q = 2(b + b -1 )(e 1 + e 2 ) of the background charge in sℓ 3 Toda theory. However, momentum conservation can be restored by inserting n -2 screening operators V b -1 e 1 . (See [START_REF] Fateev | Correlation functions in conformal Toda field theory I[END_REF] for similar reasonings and calculations in sℓ 3 Toda theory.) Thus, Ωn ∼ b→0 R n 3n-6 a=1 V -b -1 ω 1 (y a ) n i=1 V b -1 (e 1 +e 2 ) (z i ) n-2 ℓ=1 d 2 x ℓ V b -1 e 1 (x ℓ ) f ree . (4.47) This free correlation function is the product of the free correlation function (4.41), which we took as our ansatz for |Θ n | 2 , and an integral over x ℓ , leading to |Θ n | 2 Ωn ∼ b→0 R n ℓ d 2 x ℓ a,ℓ |x ℓ -y a | 2 b 2 i,ℓ |x ℓ -z i | -2 b 2 ℓ =ℓ ′ |x ℓ -x ℓ ′ | -4 b 2 . (4.48) The integral in this formula is expected to be dominated by a saddle point, where the x ℓ s are solutions of Ωn as b → 0, which follows from the conjecture. 2 ℓ ′ =ℓ 1 x ℓ -x ℓ ′ + i 1 x ℓ -z i - a 1 x ℓ -y a = 0 . ( 4 One may be tempted to modify the conjecture by adding a factor θ to the twist function Θ n . This would not only correct the leading behaviour in the b → 0 limit, but also make the conjecture compatible with global conformal symmetry. We have not mentioned global conformal symmetry until now because this subject is independent from the differential equations in terms of which the conjecture was formulated. 
It is however easy to see that the conjecture is incompatible with the behaviour of correlation functions under scaling transformations (z i , y a ) → (λz i , λy a ), except in the b → ∞ limit. However, adding the factor θ -1 b 2 n would spoil the agreement between most terms of the KZ equations (4.22) and the twisted W 3 null-vector equations (4.43), in particular the terms depending on the spins j i . The modified conjecture would only hold at the level of the j i -independent dominant factors in the b → 0 limit, which would not be interesting. Conclusion The comparison of sℓ 3 KZ equations in Sklyanin variables (4.22) with W 3 null-vector equations (4.43) does not support the conjecture (1.4) in its general form. Nevertheless, the KZ equations are very similar to the null-vector equations: many terms agree nontrivially, and the disagreement is confined to a term which does not depend on the spins j i of the fields. This remarkable quasiagreement makes it unlikely that a full agreement can be obtained by modifying the conjecture. In the critical level limit k → 3 ⇔ b → ∞, the disagreement disappears and the conjecture (1.4) is true. This limit plays an important role in the Langlands correspondence [START_REF] Frenkel | Lectures on the Langlands program and conformal field theory[END_REF], which might possibly explain why the conjecture (1.4) holds for sℓ 2 and not for sℓ 3 , and why in the sℓ 3 case it holds only in the critical level limit. Another hopeful source of insights is the recent work on conformal Toda theories [START_REF] Fateev | Correlation functions in conformal Toda field theory I[END_REF], where the sℓ N ≥3 cases are understood to be qualitatively different from the sℓ 2 case. Of course, we already pointed out a significant qualitative difference, namely the failure of the sℓ 3 cubic field W J (z) (2.11) to obey the W 3 algebra. It is not clear how this is related to our problem. Our results in the sℓ 3 case lead to natural conjectures in sℓ N >3 cases, where we expect the KZ equations in Sklyanin variables to agree with W N null-vector equations only in the critical level limit k → N . Let us tentatively perform a counting of equations. There are 1 2 N (N -1) isospin variables on the lhs of eq. (1.4), and on the rhs we expect 1 2 N (N -1)(n -2) Sklyanin variables y a plus N (N -1) extra variables, which may be collectively included in the symbol U . Differential equations for the sℓ N Toda correlation function which generalizes Ωn are obtained by eliminating 1 2 (N -2)(N + 1)n non-differential terms from the 1 2 N (N -1)(n -2) null-vector equations. Thus, we have n -N (N -1) differential equations. When it comes to KΘ n Ωn , we should presumably add an equation for each one of the extra variables, reaching n differential equations. This precisely the number of KZ equations for the lhs Ω n of eq. (1.4). In addition, we have the same number of global Ward identities on both sides of eq. (1.4), namely N 2 -1. A.2 A characterization of F 2 (y) Here we will justify the characterisation (A.22) of the space F 2 (y) defined in eq. (4.21). For pedagogical reasons we will begin with the simpler problem of characterizing the space of permutation-symmetric functions of m variables {y a }. More precisely, given a function f (t, {y a }) which is permutation-symmetric in {y a }, depends on an additional variable t, and is regular at t = y a , we want to determine whether f (y, {y a }) is actually permutation-symmetric although it apparently depends on y. 
This amounts to determining whether f (y a ′ , {y a }) actually depends on the choice of a ′ . If it does not, then for any polynomial P (t) of degree m -2 we have Q a (t-ya) = 0, which can then be evaluated by moving the integration contours, if the analytic properties of f (t) permit. Let us apply a similar reasoning to the characterization of F 2 (y). If f (y) ∈ F 2 (y), for instance f (y) = 1 (y-z i 0 ) 2 f (y) where f (y) is actually permutation-symmetric, then given any polynomial P (t) of degree n -7 we have Thus, to know whether f (y) ∈ F 2 (y), we only need to evaluate the left hand-side of this equality. To do this we can use the assumed analytic properties of f (t): namely, that it is meromorphic with singularities only at t = z i , and goes to zero as t → ∞. This implies which proves f (y) ∈ F 2 (y) ⇒ P, f = 0 as in eq. (A.22). The reverse implication follows from a simple counting of variables: the space of polynomials of degree n -7 has dimension n -6, which is precisely the number of constraints which we expect for characterizing the space F 2 (y). A.3 Study of the differential operator D 1 As explained in Section 4. ∈ F 2 (y), where D 1 is the first-order differential operator written explicitly in eq. (4.44). Here we provide a rigorous argument that this relation is not true, which implies that the conjecture cannot hold for general values of the parameter b. To start with, let us reduce the study of the first-order differential operator D 1 to the study of mere functions. The operator D 1 , like all our differential equations, is assumed to act on functions which are symmetric under permutations of the 3n -6 variables {y a }. The space of such functions is algebraically generated by the 3n functions .17) ρ i ≡ a log(y a -z i ) , σ i ≡ a 1 y a -z i , τ i ≡ a 1 (y a -z i ) 2 , i = 1 • • • n. (A • So the twist factor |Θ n | 2 must absorb the b → 0 divergence of the Liouville correlation function Ωn , which implies the relation (3.23) and the values (3.22) for the parameters λ, µ, ν. (This reasoning does not exclude the presence of extra terms in λ, µ, ν which would be finite in the b → 0 limit.) sℓ 3 3 Yangian symmetry. As in the sℓ 2 case, the variables of the sℓ N Gaudin model can be combined into an sℓ N Lax matrix I(u) (3.1) obeying the relation (3.2). It is however possible to combine the variables into another sℓ N matrix, which depends on an extra parameter η, n)Λ m+n , (4.26) (e 1 1 ,e 1 ) (e 1 ,e 2 ) (e 2 ,e 1 ) (e 2 ,e 2 ) .30) W 3 3 degenerate fields. Let us now justify the choice of the field V -b -1 ω 1 in the correlator Ωn (1. .45) 4 . 4 44 Comparing sℓ 3 Knizhnik-Zamolodchikov equations with W 3 null-vector equationsWe are now in a position to test the conjecture (1.4) by comparing the KZ equations in Sklyanin variables(4.22), which apply to K -1 Ω n , with the twisted W 3 null-vector equations (4.43), which apply to Θ n Ωn . We will first do the comparison for general values of b, and then explain in more detail what happens in the particular limits b → ∞ and b → 0. 2 b 2 2 b 2 2222 .49) (Curiously, these are the Bethe equations for the sℓ 2 Jaynes-Cummings-Gaudin model at infinite coupling and with spins ± 1 2[START_REF] Babelon | On the Bethe ansatz for the Jaynes-Cummings-Gaudin model[END_REF].) The dominant behaviour of the integral is expected to be of the form |θ n | as b → 0, with θ n a b-independent quantity. 
This |θ n | factor contradicts the existence of a finite limit for |Θ n | 2 14 ) 14 So we have transformed them -1 conditions f (y 1 ) = f (y 2 ) = • • • = f (y m ) into the condition a ′ y a ′ dt P (t)f (t) -z i ) 2 3n- 6 a=1 2 a 262 (t -y a ) f (t) = f (y) ∞ dt P (t) i =i 0 (t -z i ) (t -y a ) = 0 . (A.15) 4 , 1 ?∼ 0 or equivalently D 1 ? 411 our conjecture (1.4) implies the relation D A different approach was proposed in[START_REF] Mukhin | On the new form of Bethe ansatz equations and separation of variables in the sl 3 Gaudin model[END_REF], which consists in trying to use the sℓ2 separation of variables in the sℓ3 case. This approach requires a particular choice of isospin variables. The results are complicated. A construction of the extra variables seems to be available in the article[START_REF] Adams | Darboux coordinates on coadjoint orbits of Lie algebras[END_REF]. Acknowledgments I wish to thank Alexander Chervov and Alexey Litvinov for interesting discussions and very helpful comments on this manuscript. I am also grateful to Nicolas Crampé, Vladimir Fateev, Philippe with the invariant operators t 1 (u), t 2 (u) and d(u) defined as [START_REF] Sklyanin | Separation of variables in the quantum integrable models related to the Yangian Y[sl(3)[END_REF] t 1 (u) = Tr Y (u) , t 2 (u) = Tr Ỹ (u) , d(u)δ γ α = Y β α (u) Ỹ γ β (u + η) , (4.9) where the matrix Ỹ is constructed by transposing the quantum comatrix of Y . For instance, , where the η-shifts are the manifestation of the quantum character of the comatrix whose 3 2 matrix element we just wrote. Operator ordering issues in expressions like t 2 (y i -η) are resolved by inserting the operator y i from the left. From the Yangian to the Gaudin model. We will now construct objects A(u), B(u) and a quantum characteristic equation for the sℓ 3 Gaudin model. Such η-independent functions of the matrix I(u) will be obtained by expanding the corresponding objects for the sℓ 3 Yangian algebra in powers of η. We find where we omitted the spectral parameter u in I β α (u), and we point out that our formula for A(u) is free of ordering ambiguities because I 2 3 (u) commutes with both I 2 1 (u) and I 1 3 (u). The commutation relations (4.6) for A Y (u) and B Y (u) imply the analogous relations which may be compared to the corresponding relations in the sℓ 2 case eq. (3.6). Let us rewrite the characteristic equation (4.8) as: The leading behaviour of this equation as η → 0 will turn out to be O(η 3 ). To compute this behaviour, we of course need to compute the behaviours of X i and y i as η → 0. It turns out that we only need the O(η) behaviour of X i . We therefore define the variable p i by As for y i we only need need the leading O(1) behaviour. To this leading order, the zeroes of B Y (u) coincide with those of B(u), so that we do not need distinct notations and call them all y i . The most complicated part of the calculation however does not involve such subtleties, but rather deals with the last term in eq. (4.14), A. A few technical results A.1 Helpful identities The following identities are used in computing the non-differential terms of the operator E 3 ≡ Θ n E 2 Θ -1 n eq. (4.43). Some identites are written modulo terms in F 2 (y) (4.21), as indicated by the relation sign ∼. All identities are proved by elementary manipulations, using observations of the type So D 1 σ i and D 1 τ i do not manifestly vanish modulo F 2 (y). Let us however study them further. 
They may be considered as values at t = y of functions f (t) = f (t, {y a }, {z i }) which are invariant under permutations of {y a } but depend on the additional variable t. Let us consider the space of such functions, which we in addition assume to be meromorphic in t with no singularities besides t = z i , and to go to zero as t → ∞. Let us moreover introduce the space P n-7 of polynomials P (t) of degree n -7. As we show in Appendix A.2, Then, explicit calculations yields This explicitly demonstrates that D 1 / ∈ F 2 (y). However, D 1 still has remarkable properties with respect to the constant polynomial P = 1, namely 1, D 1 σ i = 1, D 1 τ i = 0. These non-trivial identities sensitively depend on the general structure of D 1 and on the particular values of λ, µ, ν which determine its coefficients. This implies that, whereas arbitrary differential operators belong to F 2 (y) for n ≤ 6, D 1 ∈ F 2 (y) for n ≤ 7. The significance of these properties of D 1 is not clear. When combined with D 1 ρ i ∼ 0, they suggest that D 1 ∼ 0 when applied to a special class of permutation-symmetric function of y a (and z i ), and one might wonder whether Θ n Ωn actually belongs to this class. Given the freedom to choose y ∈ {y a }, this would imply that Ωn satisfies n -6 further differential equations. But Ωn is not expected to satisfy any further differential equations besides the global Ward identities, whose number is n-independent. So the supposition D 1 • Θ n Ωn ? ∼ 0 certainly fails for n > 7, and so does our conjecture (1.4).
04110714
en
[ "math.math-oc" ]
2024/03/04 16:41:24
2021
https://hal.science/hal-04110714/file/2109.13773.pdf
Johann Dreo Manuel López-Ibáñez Extensible Logging and Empirical Attainment Function for IOHexperimenter In order to allow for large-scale, landscape-aware, per-instance algorithm selection, a benchmarking platform software is key. IOHexperimenter provides a large set of synthetic problems, a logging system and a fast implementation. In this work, we refactor IOHexperimenter's logging system, in order to make it more extensible and modular. Using this new system, we implement a new logger, which aim at computing performance metrics of an algorithm across a benchmark. The logger computes the most generic view on an anytime stochastic heuristic performances, in the form of the Empirical Attainment Function (EAF). We also provide some common statistics on the EAF and its discrete counterpart, the Empirical Attainment Histogram. Introduction IOHexperimenter [START_REF] Doerr | IOHprofiler: A Benchmarking and Profiling Tool for Iterative Optimization Heuristics[END_REF] is a framework for the benchmarking of anytime stochastic heuristics for optimization problems that holds together a set of benchmarks and an advanced logging system. It provides Pseudo-Boolean Optimization problem generators (PBO) and the Black-Box Optimization Benchmark (BBOB). Its core is implemented in C ++ , which allow for fast computations, a crucial feature for benchmarking on synthetic problems. One of the most notable feature of IOHexperimenter is its logging system. A so-called Logger target an automated export of the algorithms' behaviour in a standardized format, which can be seamlessly imported in the IOHanalyzer for further study. It most notably allows for tracking the states of the algorithms' parameters, a unique feature on the market of benchmarking platforms. The legacy logging system was implemented as a single class, exposing a set of hard-coded parameters which controlled when logging event occurred, what parameters were to be watched and how they would be stored in a set of CSV files. The limitations of the legacy system were: • The logging event list was not extensible. • Only plain floating-point parameters could be logged. • Parameters had to be explicitly sent by the calling solver and should always exists. • It was only possible to log into the IOHanalyzer format, in a set of CSV text files. We show in section 2 how we remove those limitations. The new logging system is also more extensible and allows for easiest implementation of new loggers. This feature allowed us to add a new logger which targets immediate performance estimation, so as to help for automated algorithm selection. By binding the optimized problems and the performance estimation of a solver in a single binary, we allow for very fast benchmarking runs. Having fast benchmarking is a crucial feature for allowing per-instance algorithm configuration. As shown in [START_REF] Aziz-Alaoui | Towards Large Scale Automated Algorithm Design by Integrating Modular Benchmarking Frameworks[END_REF], having the performance estimation directly bound to the solver's binary allows for configuration budget that are an order of magnitude larger than other approaches. In a previous work, we implemented an IOHexperimenter logger which efficiently computes the discrete version of the empirical attainment function on a linear scale. In this work, we refactor this logger, adding log scales, and add a logger computing the empirical attainment function itself, as explained in section 3. 
We also refactored the debugging tools within IOHexperimenter, by using the Clutchlog1 project, which allow for fine-grained management of debug messages. Logging System Architecture Instead of a monolithic Logger, we design a set of modular classes, as shown on Figure 1. A Logger is thus most notably composed as two objects: Trigger: given the context at call, returns true if the call event is to be logged (cf. Section 2.1), Property: gives access to what value should be logged (cf. Section 2.2), Subclassing from the Logger interface allow to design a fully configurable object, with userdefined Triggers and Properties. For instance, the Watcher class allows the user to specify which Properties to watch. The standardized logging system for IOHanalyzer is itself defined as a sub-class of Watcher. We add a Logger object that allow for combining several loggers into a single one: Combine. For instance, this wrapper is useful if one want to computes a global performance and save the whole run. This system allow to assemble Loggers at run time. Triggers The Triggers are functors2 which can be called on the LogInfo context and returns a boolean, stating if a log event should be triggered. The sets of available Triggers is straightforward and matches the previous features, they allow to log an event: Always: triggers an event at every objective function call, OnImprovement: if the given solution has a better objective function value than the previous best one, At: at the given set of objective function call indices, Each: at regular intervals, During: during the given sets of objective function call intervals, Instead of managing those Triggers all at once, we allow to aggregate them with triggers that behaves like logical operators: Any: trigger an event if any of the managed Trigger instances is triggered, All: trigger an event if all of the managed Trigger instances is triggered. 1 problem::MetaData pb(0,0,"fake",2); 2 logger::Info i; 3 i.transformed_y = 9999; Listing 1: Excerpt of the code used to test the behaviour of the OnImprovement Trigger. Properties A Property is a way to access a value of interest, which can be either a state of the IOHexperimenter context or an external variable of interest. The set of available Properties accessing the context matches the previous features: Evaluations: the number of calls to the objective function so far, YBest: the best objective function value found so far, TransformedY: the current objective function value, with invariant-testing transformation(s) applied, TransformedYBest: the best transformed value found so far. We add a set of classes to help accessing several kind of external variables, which targets any algorithm parameter: Reference: capture a reference to a variable, Pointer: capture a pointer to a variable, PointerReference: capture a pointer to a reference to a variable. The two last classes allow for indicating if the Property is existing (or not) in the current context. This is useful for dynamic algorithm configuration, for which a given parameter may exists only for some variant. For instance, if one is logging the state of an algorithm that switch its mutation operator, some of those operators may have various parameters. Those parameters would be available only if this specific operator is actually instantiated. Listing 2: Excerpt of code used to test the behaviour of some context Properties. Debugging Messages We refactor the legacy debug messaging system, replacing it with a set of macros using the Clutchlog library. 
This allows to (de)clutch messages for a given: log level, source code location or call stack depth. Additionally, the unit tests interface use this features, which can be used from the command line interface. Empirical Attainment Loggers In order to allow for large-scale algorithm configuration, we aim at having a performance evaluation within the solver executable, so as to ease its interfacing with automated configurators. IOHexperimenter, providing both the benchmark problems and a logging system, is a good candidate to host a performance logger. However, there is many ways of looking at the performance of stochastic heuristics. In this work, we wanted to implement the most generic one, so as to allow the user to use various performance metrics. Figure 2 shows why we think that the Empirical Attainment Function (EAF) is the most generic choice in the case of synthetic benchmarking. q 0=q |f(x)-f(x*)|=q* ^qq* 0 q* r … q t q t P(q*=q) P q t P P(q*<q) q t P Expected Quality q t P Expected RunTime q t q t P Expected Quality-Time Function Level sets (g) (h) (i) (a) (b) (c) (d) (e) (f) Figure 2: A classical way of looking at the performance is to look at the objective function value as the error ("quality" q * ) to a known bound q, after a given "time" budget (a). For randomized algorithm, a statistic over the qualities output by several runs should be considered (b). But stochastic searches are anytime, hence showing monotonic "attainment trajectories" of best-so-far values along the time axis (c). A classical setting is to compute a performance statistic over the end distribution (d). From those two quality and time axis, one can consider several distribution along the time one (e), most generally in the form of cumulative distribution (f) showing the expected quality for given budgets. Conversely, one can consider the expected run time for given qualities (g), which is one of the most popular setting. Additionally, one can consider 2D first-order moments of the 2D quality/time distribution of trajectories (h). This forms a 2D distribution, called the Empirical Attainment Function (i). Section 3.1 introduces how the EAF can be viewed as the Empirical Cumulative Distribution Function (ECDF) of 2D objective function values' attainment trajectories, and how it can be discretized into an Empirical Attainment Histogram (EAH). Empirical Cumulative Distribution Functions This section introduces the ECDF of points and then generalize this notion to attainment trajectories, up to the EAF and EAH. Table 1 introduces the notation used in this section. Let (X 1 , . . . , X n ) be independent, identically distributed real random variables with the common cumulative distribution function F (x). Then, the Empirical Cumulative Distribution Function (ECDF) is classicaly [ Van der Vaart, 1998, p. 265] 3 defined as: F n (x) = 1 n n i=1 1(x i ≤ x) (1) where 1(•) is the indicator function that returns 0 or 1 and x i ∼ X i . For a pair of random variables Q, T , the joint ECDF is given by: F n (q, t) = 1 n n i=1 1(q i ≤ q ∧ t i ≤ t) (2) where q i and t i are samples from random variables Q and T , respectively. Let a b ⇐⇒ a d ≤ b d ∀d ∈ N + denote the weak dominance of a d-dimensional point a over point b. For a set of points Y = {y 0 , . . . , y n } (e.g. with y = {q, t}), the joint ECDF becomes: F n (y) = 1 n n i=1 1(Y i y) (3) Of trajectories -EAF Let Z = {Y 0 , . . . , Y m } be a set of non-dominated set of y points (e.g. y = {q, t}). 
Note that a set Y ∈ Z can weakly dominate another set {y} of cardinality one. The "Empirical Attainment Function" [Grunert da Fonseca et al., 2001] is defined as the ECDF of the closed set Z, over y [Grunert da Fonseca and Fonseca, 2002]: G m (y) = 1 m m i=1 1(Z i y) (4) Discretization as Histograms One can without loss of generality use a function h : R d → R d which may discretize the input space, and consider a generic set R of size r on which element the domination operator can be applied to compare it to any point y: G r h (y) = 1 r r i=1 1(R i h(y)) (5) 3.2 2D Quality/Time Distributions The Quality/Time EAF can be defined as eq. 5 with h(y) = y and R = Z = {Y 0 , . . . , Y m }, a set of "attainment trajectories". In our case, a trajectory is a set of weakly non-dominated {q, t} points, being the best quality/time targets encountered by the optimization algorithm. The Empirical Attainment Histogram (EAH) can be defined as eq. 5, using R = Z and a discretization function: h(y) = ∆ -1 ( ∆(y) ) (6) A linear discretization of the histograms can be defined by mapping on N + , as an indexed space, useful for implementation: ∆ β (y) = (y -v) β l (7) leading to: h β (y) = (y -v) β l l β + v ( Algorithms To compute the EAF, we use the algorithm of [START_REF] Fonseca | On the Computation of the Empirical Attainment Function[END_REF], and follow the implementation provided by [START_REF] López-Ibáñez | Exploratory Analysis of Stochastic Local Search Algorithms in Biobjective Optimization[END_REF], which is available in the eaf4 package fo R. In short, the algorithm computes the level sets of attainment points {q, t}, as figured by the red lines in Figure 2 (h) and (i). Note that the algorithm to compute the log-EAH is adapted from [Knowles, 2005], using the log discretization function 10. Implementation The implementation takes the form of the EAF class, inheriting from the Logger while fixing the Properties that are watched and the Triggers to the one being necessary to the EAF computation (YBest and OnImprovement, respectively). As we want the user to be able to compute various statistics over the EAF, we store every level set in a separated vector, storing the metadata about the run with which they have been produced. Each EAF is thus attached to the run, the problem instance, the dimension and the suite for which it has been produced. This allow the user to operate any aggregation across metadata and compute any statistics. Figure 1 : 1 Figure 1: UML class diagram of the architecture that we implemented for the logging system. Examples of debugging messages tuning, upper part shows messages up to the Note level for eaf.hpp file, lower part shows messages up to the Debug level for any file containing the eaf word. 8)where β is the number of buckets of the histogram, v = min(Y ), the minimum corner of Y andl = | max(Y ) -v|.Similarly, a log-discretization can be defined with:∆ β (y) = β • log (1 + (y - Figure 3 : 3 Figure 3: Screenshot of the automatically generated documentation. The low-level interface uses C ++ 's standard library's std::optional feature to indicate if this Property is available in this context. Pointer-based capturing classes are provided as a generic way to interact with any code, as it is sufficient to set the pointer to nullptr to indicate that a Property is disabled in the current context. 
using namespace ioh; suite::BBOB suite({1, 2}, {1, 2}, {3, 10}); // problems , instances , dimensions // Properties that watch my attribute : double my_attribute = 0; watch::Reference attr("Att_reference", my_attribute); watch::Pointer attp("Att_pointer" ,&my_attribute); double * p_transient_att = nullptr; watch::PointerReference attpr("Att_PtrRef", p_transient_att); // Instantiate some properties : trigger::Always always; watch::Evaluations evaluations; watch::RawYBest raw_y_best; watch::TransformedY transformed_y; watch::TransformedYBest transformed_y_best; logger::Store logger({always}, // Attach those properties to the logger : {evaluations, raw_y_best, transformed_y, transformed_y_best, attr, attp, attpr}); suite.attach_logger(logger); / * [ Call the problem function . . . ] * / // One can select which metadata layer to consider : logger::Store::Cursor first_eval(suite.name(), / * pb * /1, / * dim * /3, / * ins * /1, / * run * /0, / * eval * /0); // And recover a property value at this event : auto evals = logger.at(first_eval, evaluations); Table 1 : 1 Notation. i, n ∈ N integers x, y ∈ R scalar real values x, z ∈ R 2 points (vectors) in 2D space X , Y scalar real random variables Z, L ⊂ R 2 sets of points A = {Z 1 , . . . , Z n } collection, i.e.,a multiset of sets 1(•) Indicator function that returns {0, 1} z 1 z 2 point z 1 weakly dominates point z 2 Z z at least one point z ∈ Z weakly dominates point z 3.1.1 Of points -ECDF https://nojhan.github.io/clutchlog/ Objects that are callable as functions. https://archive.org/details/asymptoticstatis00vaar_017/page/n281/mode/2up https://mlopez-ibanez.github.io/eaf/ See the pull request for further details: https://github.com/IOHprofiler/IOHexperimenter/pull/92 We provide several generic statistics in the form of computation of the Surface of a given attainment level, and the Volume of a given set of levels. The user can use low-level classes to indicate which Nadir point they want to use, if the default worst point is not desired. Note that the volume gives access to a first-order moment-like statistic that behave like the average, while the surface of the median level of the EAF gives access to a robust, median-like statistics. Second-order moment statistics may be computed, as explained in [START_REF] Fonseca | Exploring the Performance of Stochastic Multiobjective Optimisers with the Second-Order Attainment Function[END_REF]. Additionally, it would be possible to take (or aggregate) slice(s) of the EAF in order to recover expected runtime or expected quality distributions. 1 size_t sample_size = 10; 2 size_t nb_runs = 10; 3 suite::BBOB suite({1, 2}, {1, 2}, {10, 30}); 4 // Instantiate the EAF logger . 5 logger::EAF logger; 6 suite.attach_logger(logger); The discretized EAHs (for linear scales, as defined in Section 3.2) have been implemented in a previous work and have been ported to the new architecture. We add support for log scales, which are usually more suited to the convex shape of the EAH. All implemented classes are accompanied by several corresponding unit tests and an extensive documentation, as shown on Figure 3. Conclusion In this work, we target the use of IOHexperimenter as a key framework for landscape-aware algorithm selection. In order to be able to perform large-scale per-instance automated configuration, we need to embed a powerful logger within a solver executable. Additionally, the logger need to be able to computes various performance statistics across large benchmarks. 
So as to allow for a generic performance estimation, we implement the computation of the Quality/Time Empirical Attainment Function, a generalization of Empirical Cumulative Distribution Function for attainment trajectories of anytime stochastic algorithms. To meet this objective, we refactor the logging system of IOHexperimenter, so as to obtain a highly modular and extensible architecture, with fine-grained debugging message management. Using this new system, we implement the EAF logger and some statistics on this distribution. The new setup have been successfully tested on an extension of our previous work [START_REF] Dreo | Using irace, paradiseo and iohprofiler for large-scale algorithm configuration[END_REF]. Our implementation provides an extensive documentation and several examples in the form of unit tests. It has eventually been merged within the IOHexperimenter code base 5 .
04113288
en
[ "phys", "sde" ]
2024/03/04 16:41:24
2019
https://hal.inrae.fr/hal-04113288/file/PosterISMARBerlin.pdf
(2) R. Lu et al. CJME. 26 (2013) The open geometry of the single-sided NMR-MOUSE® sensor results in a powerful spectrometer to characterize arbitrarily sized samples. This inhomogeneous magnet is designed in such a way that it generates a highly flat sensitive slice, i.e. the measurement volume, at a given distance (25 mm for the PM25 system) parallel to the scanner surface [1]. It is well known that low field magnets have a strong dependence between the magnetic field and the magnet temperature. For the NMR-MOUSE® it leads to a dependence Materials between the magnet temperature and the position of the sensitive volume [2]. As our aim is to use this portable device under unstabilized temperature conditions, we anticipate variations in the measurement position. This study aimed at characterizing the relationship between changes in the magnet temperature and the position of the measurement volume. Measurements were performed at two sites with two different PM25 NMR-MOUSE® spectrometers. Sample profiles were recorded during the room temperature change and until it stabilizes. We followed-up the evolution of the signal intensity at different depths for a specific slice thickness (profile). The relationship between the temperature of the NMR-MOUSE® magnet and the position of the sensitive volume was characterized in the range of 10 to 35°C and that in two different laboratories with two spectrometers. For both measurements, a shift of the measurement slice of 45 mm/°C was observed. Discussion and Conclusions Two solutions can be implemented to take into account the possible slice measurement depth shift during the experiment: 1. The magnet can be insulated to limit its temperature change during the experiment 2. An automatic correction method, based either on the NMR signal or on the magnet temperature measurement, can be developed Figure 3 : 3 Figure 3: Shift of the measurement slice determined from the interface between the petri dish and the water as a function of the magnet temperature. Figure 1 : 1 Figure 1: Sample profiles recorded at a magnet temperature of 10 (black) and 15°C (blue). A shift of 200 mm is observed for this 5°C difference. Figure 2 : 2 Figure 2: Sample profiles recorded 2 hours after the temperature stabilization of the magnet at 15 (blue) and 20°C (red). A shift of 241 µm is observed. Figures1 and 2show the profile acquired at different temperatures by using the experimental set-up of team 1 and 2, respectively. Both figures highlight the fact that a relatively limited temperature change induces a significant variation in the position of the measurement slice. For a 5°C difference, the slice shifts by approximatively 200-250 µm. 1 Figure 4 : 14 Figure 4: Shift in the measurement depth as a function of the magnet temperature. Sample = 1-cm height dopped-water in a Petri dish Results Principle How ?
00411358
en
[ "shs.eco" ]
2024/03/04 16:41:24
2006
https://shs.hal.science/halshs-00411358/file/2006-22s.pdf
Francis Bloch Axel Gautier email: [email protected]. Abstract Postal markets are open to competitor for a long time. But, with a few exceptions, the competitors of the incumbent postal operator are currently active on the upstream segments of the market -preparation, collection, outward sorting and transport of mail products. With the further steps planed in the liberalization process, there are new opportunities to extend competition to the downstream segments of the market -the delivery of mails. In the future, two business model will be possible for the new postal operators: (1) access: where the firm perform the upstream operations and uses the incumbent's delivery network and (2) bypass where the competing firm controls the entire supply chain and delivers mails with its own delivery network. These two options have a different impact on both the welfare and the profit of the historical operator. In particular, bypass raises severe concerns for the financing of the universal service obligations. The choice between access and bypass depends on the entrant's delivery cost relative to the cost of buying access to the incumbent operator (the access price). In this paper, we derive optimal -welfare maximizing-stamp and access prices for the incumbent operator when these prices have an impact on the delivery method chosen by the entrant. We show how prices should be re-balanced when the entry method is considered as endogenous i.e. affected by the incumbent's prices. GREQAM Groupement de Recherche en Economie Quantitative d'Aix-Marseille -UMR-CNRS 6579 Ecole des Hautes Etudes en Sciences Sociales Universités d'Aix-Marseille II et III Document de Travail n°2006-22 ACCESS PRICING AND ENTRY IN THE POSTAL SECTOR Francis Bloch Axel Gautier Access Pricing and Entry in the Postal Sector * Francis Bloch † Axel Gautier ‡ May 12, 2006 Mai 2006 Belgian Post.
00364732
en
[ "phys.qphy", "phys.cond.cm-gen", "phys.phys.phys-atom-ph" ]
2024/03/04 16:41:24
2009
https://hal.science/hal-00364732v4/file/EfficientAtomizationSubmit.pdf
Mathieu Melich Jacques Dupont-Roc Philippe Jacquier Efficient atomization of cesium metal in solid helium by low energy (10 µJ) femtosecond pulses Keywords: 67, 80, B-solid 4 He -61, 72, S-impurities in crystals -06, 60, Jn femtosecond techniques Metal atoms in solid and liquid helium-4 have attracted some interest either as a way to keep the atoms in a weakly perturbing matrix, or using them as a probe for the helium host medium. Laser sputtering with nanosecond pulsed lasers is the most often used method for atom production, resulting however in a substantial perturbation of the matrix. We show that a much weaker perturbation can be obtained by using femtosecond laser pulses with energy as low as 10 µJ. As an unexpected benefit, the atomic density produced is much higher. Introduction Over the past twenty years, atomic and molecular impurities in superfluid and solid helium-4 have been extensively studied by spectroscopic methods either in bulk (for a review (1; 2)) or in clusters (3; 4). Among the motivations of those works, one may cite the investigation of superfluidity at a microscopic level (5; 6), the reactions of atoms, molecules, radicals in the helium matrix (7; 8), investigation of matrix excitations by optical methods (5; 9; 10), measurement of electron EDM (11; 2) or parity violating nuclear anapole moment (12). Several methods (recombination (13), laser sputtering (14; 11), jet (15)) have been developped to introduce atomic impurities into bulk helium. Laser sputtering has been quickly recognized as the most efficient method to introduce various atomic species into liquid (14; 11; 16; 9) and solid helium (17). Atomic density about 10 14 -10 15 m -3 has been reported (14; 18). In this method, atomization proceeds in two steps. In the first step, a metal target situated in condensed helium is sputtered that produces metal grains and clusters embedded in the medium. These grains are then atomized by a second laser and detected by a third one. In superfluid, convection flow induced by the sputtering process carries away the atoms from the detection region in a few milliseconds (14). An improvement was later achieved leading to the detection of atomic Cs in superfluid helium over about 500 ms (19). In solid helium, atoms are more efficiently trapped and can be observed during several hours (18) or even a week (20). Laser sputtering has some drawbacks however. When YAG second harmonic (532 nm) pulses are used, pulse energies about 1-10 mJ are necessary. This creates transient a Present address : Institut Néel-MCBT, CNRS, Grenoble bubbles in the liquid or melting of the solid matrix over a macroscopic volume. These energetic events are likely to strongly perturb the helium matrix. Indeed evidence for the conservation of the global solid orientation is still lacking (21; 20). A first step may consist in reducing the energy brought by the laser. In this paper a new method is described making use of amplified femtosecond pulses to atomize cesium grains or clusters. Due to much shorter pulse duration, much lower energies (10 µJ) are used to obtain similar or even greater atomic densities than previous methods. To our knowledge, metal atomization with femtosecond pulses has been reported only once (19), but without details about the atomic densities obtained. We describe first the experimental set-up used and the atomization process of cesium impurities in an initially single helium crystal. Atomic densities obtained are then discussed. 
Experimental set-up A sketch of the experimental arrangement is given in figure 1. A 2 cm wide cell is attached to a helium-4 refrigerator. Its temperature can be regulated between 1 and 1.8 K. To grow a hcp helium crystal, liquid helium is pressurized at constant temperature, typically 1.2 K, up to the solidification pressure (25.4 bar). An electrostatic device (22) nucleates the solid phase as a small single crystal which falls on the bottom of the cell. It is then grown by continuously feeding the cell with helium until the liquid/solid interface reaches the middle of the cell windows. In order not to damage the crystal by the first step of the atomization process, we put the cesium target in the liquid above the solid. Sputtering is produced by second harmonic YAG laser pulses (energy 5 mJ, repetition rate 2 Hz). Sub-millimetric cesium grains, and possibly clusters, fall onto the crystal surface. When their number is sufficient, they are embedded into the crystal by a further growth. They lie then 1 or 2 mm below the solid/liquid interface. The next section explains how they are later atomized by femtosecond laser pulses. In order to measure the local density of cesium atoms, laser induced fluorescence is used. A Ti:Sa laser ('d' in figure 1) can be used in cw or mode-locked regime. In the cw regime, it excites the fluorescence on the D1 line at 850 nm. The absorption line is shifted with respect to the free atom line due to the interaction of cesium atoms with the helium matrix (23). The fluorescence light is collected at right angle, selected by an interference filter centered at 880 nm and its intensity is measured by a cooled APD (Hamamatsu C4777-01). The use of femtosecond laser pulse for metal sputtering or machining is now well documented (24) and various kinds of amplified sources have been described. In the present case, low average power is desirable in order to keep the low temperature sample with a minimal perturbation. Hence we require a low repetition rate and a pulse energy sufficient to atomize metal grains a focal distance of 0.2 m, comparable with the cryostat radius. Pulse energy on the order of a few microjoules is sufficient. Much shorter focal length would allow to reduce this energy, but would require the focusing lens to be located inside the cryostat, which we wanted to avoid for practical reasons. Microjoule pulse energies are intermediate between the nanojoule energies delivered by Ti:Sa oscillators and millijoules currently reached by the chirped pulse amplifiers. Simple lasers are not available for such pulse energy. Hence we built a simple home made multipass unsaturated amplifier which does not require the 100 femtosecond pulse to be chirped. Non-linear effects in the Ti:Sa crystal are kept below the critical value for self-focusing if pulse energy does not exceed 40 µJ. The Ti:Sa oscillator produces 5 nJ, 200 fs pulses at the rate of 70 MHz. A Pockels cell selects one pulse every 0.1 s. It enters a 4f-cavity where it is amplified 6 times in a 1.5 cm Ti:Sa crystal pumped by two 7 mJ second harmonic Nd-YAG pulses. The pulse energy is then 6±2 µJ. Another opportunity is to let all oscillator pulses go through the amplifier. Then trains of about 12 successive pulses are amplified synchronously with the YAG pump pulses. Other Ti:Sa pulses go through the amplifier unaffected. In the time between successive pulses, atoms and clusters have already been expelled few microns away but have had no time to recombine. 
Hence clusters produced by one pulse may be further atomized by the next ones before being thermalized and possibly recombine. One expects a more efficient atomization in this multi-pulse regime. Atomization process inside solid helium Once cesium grains are embedded 2 mm below the crystal surface, atomization with the Ti:Sa laser begins. The wavelength is tuned to 850 nm so that fluorescence of the atoms will be excited. Monitoring the atomization is done by replacing the APD by a fiber/CCD spectrograph (Ocean Optics 4000). Two detection bands are defined. One from 845 to 855 nm measures the incident light scattered by cesium grains and clusters. The other one measures atomic fluorescence from 870 to 890 nm. This LIF detection is specific to cesium monomers. Dimers and trimers do not fluoresce in this band when excited at 850 nm (25). Larger cesium clusters are not known to fluoresce. Evolution of both intensities during an atomization process are shown in figure 2. The Ti:Sa laser is modelocked with an averaged intensity 50 mW in the cell, with amplified bursts at a rate of 2 Hz. The burst contribution to the average intensity is negligible. During the sequence shown, the focal point of the laser is moved from time to time to explore the region in which clusters are located. This is indicated by gray areas in figure 2. Intensity of elastically scattered light varies strongly according to the focal point location, probably in relation with the presence of grains. Keeping the laser focus at such place and continuing laser irradiation, one observes a slow decrease of the scattered light intensity, while the atomic fluorescence intensity increases strongly. Note the difference in scale (factor 10) for the two intensities : the atomic fluorescence is much stronger. After 20 minutes at this place, the scattered light has nearly disappeared and the atomic fluorescence does not increase anymore. A straightforward explanation is that successive laser pulses break metal grains into smaller and smaller pieces, and finally into atoms. In some cases, no light is scattered, indicating there are no grains while fluorescence does appear after some time. In these situations, source of atoms are very likely clusters. As we will see now, this atomization process is indeed efficient. Estimate of the atomic density Once atomic fluorescence intensity is strong enough, a measurement of the fluorescence intensity is made to provide an estimate of the atomic density. The Ti:Sa laser is switched to its cw mode, while keeping its wavelength at 850 nm. The beam is chopped at 173 Hz and the APD output is measured through a lock-in amplifier. The detected fluorescence intensity I det is proportional to the mean atomic density ρ in the region where the two beams intersect, to the laser intensity I exc and to the absorption cross-section σ for the D1 line. More precisely, ρ is given by ρ = 1 η σ l det κ 4 π Ω I det I exc ( 1 ) where Ω is the solid angle of the collected light (0.12 sr), κ is the transmission efficiency of the detection beam estimated to be 0.38 at 880 nm, η is the fluorescence efficiency for the D1 line measured to be 0.9 at 25 bar (26). This formula takes into account the fact that the waist of Ti:Sa beam in the cell (0. natural linewidth (FWHM) (Γ/2π = 4.5 MHz) to the linewidth in the helium matrix (∆λ ≃ 10 nm). This gives σ ≃ 1.3 × 10 -19 m 2 . For a 50 µW excitation intensity, we have measured atomic fluorescence intensities up to 8 × 10 -11 W. 
According to the formula above, this corresponds to an atomic density about 3.4 × 10 18 m -3 . This is about 4 orders of magnitude higher than what have been reported in solid helium (18). We have obtained repeatedly densities over 10 17 m -3 . As already mentionned the atomic density depends strongly on the number and sizes of the metal grains in the sputtered region. One can get densities varying by several order of magnitude for apparently similar conditions. Note that the atomic density does not vary significantly over the time required for the density measurement. As already reported, at 1.0 K, atomic density decays only over several days (20; 29). Conclusion We have shown that femtosecond laser pulses with energy as low as 10 µJ are an efficient tool to atomize metals in solid helium. Such low energy pulses with a repetition rate of a few hertz are suitable for experiments at low temperatures below 1 K where the cooling power of helium refrigerators drops down. These pulses can be produced by simple amplifiers without a stretching/compression device. Substantial atomic densities have been obtained, larger than 10 17 atom/m 3 . However we have no evidence that atomization with such low energy pulses still does not damage the crystal order. In a further step we plan to attempt atomization directly above crystal surface, in the liquid, and to grow the crystal from this 'cesium solution'. This should allow for obtaining good quality doped single crystal. Fig. 1 . 1 Fig. 1. Sketch of the experimental arrangement. (a) Cell inside the cryostat containing solid He at the bottom, (b) Solid cesium target immersed into HeII, (c) Sputtering YAG laser, (d) Amplified Ti:Sa laser for atomization (mode-locked) or in cw mode for detection, (e) fluorescence detection, (g) cesium metal grains and clusters. 3 mm) is smaller than the diameter of the area seen by the APD detector (l det = 1.1 mm). The absorption cross-section σ is the resonant cross-section for the D1 line λ 2 /2π (27; 28), scaled by the ratio of the Fig. 2 . 2 Fig. 2. Atomic fluorescence (lower sub-figure) and incident light scattered by cesium grains and clusters (upper sub-figure) during an atomization process at 850 nm. The growth of the atomic fluorescence is correlated with the disappearence of the grains scattering the incident light. Gray areas indicate when the laser focus is moved from one place to a new one, free of pre-existing cesium monomers. We are indebeted to Béatrice Chatel (LCAR, Toulouse) for the loan of the Ti:Sa laser. This work was supported by the ANR contract META (ANR-05-BLAN-0084-02).
04113860
en
[ "spi" ]
2024/03/04 16:41:24
2021
https://hal.science/hal-04113860/file/Aravanis_6G_wave.pdf
Marco Di Renzo Alexis I Aravanis Catching the 6G wave by Using Metamaterials: A Reconfigurable Intelligent Surface Paradigm Smart radio environments empowered by Reconfigurable Intelligent Surfaces The ever-increasing demand for broadband access has pushed wireless networks away from the archetypal wireless network paradigm, where (mainly) outdoor users were served by centralized network entities. As opposed to this approach, emerging B5G network architectures are expected to employ atomised and softwarized network infrastructures in a dispersed, device-centric manner. Moreover, the unabated increase of indoor traffic [START_REF] Wong | A Vision to Smart Radio Environment: Surface Wave Communication Superhighways[END_REF] is pushing for even more disruptive network designs able to provide high quality service even to indoor users, surrounded by urban blockages. Furthermore, the emphasis of the envisaged 6G networks on high frequency bands (e.g. D-band and THz) makes the adverse effect of blockages more acute [2]. In this setup, a fundamentally distinct wireless ecosystem needs to arise, able not only to overcome the challenges posed by wireless blockages but, more importantly, to satisfy the need for increased spatial bandwidth, i.e. the need of B5G networks to deliver bits per second per m^3 rather than simply bits per second [2], by exploiting the spatial domain of the wireless environment. In this course, future networks must challenge the entrenched status where the wireless environment is perceived as an invariable "unintentional adversary", to which the system needs to adapt, and where only the end-points of the communication network can be optimized. This is an extremely inefficient paradigm, where base stations (BSs) transmit radio waves of the order of magnitude of Watts while user equipment detects signals of the order of magnitude of μWatts or even nWatts, with the rest of the energy being dissipated over the wireless environment (i.e. the channel), while creating interference to other network elements. In order to challenge this status, an antipodal approach is required, where the wireless environment (i.e. the channel) needs to dynamically adapt to the wireless network operation, giving rise to what is referred to as a smart radio environment. A smart radio environment is a controllable wireless environment, where software-defined materials or software-controlled metasurfaces can be overlaid on top of environmental objects or blockages to transform them into controllable network entities, conducive to wireless propagation [START_REF] Renzo | Smart radio environments empowered by reconfigurable AI meta-surfaces: an idea whose time has come[END_REF]. This transformation of the wireless networks into a smart radio environment is made possible by the advancement in the area of electromagnetic (EM) meta-materials [START_REF] Yu | Light propagation with phase discontinuities: generalized laws of reflection and refraction[END_REF], which has given rise to a new technology allowing the control and manipulation of the radio waves traversing through the wireless channels [START_REF] Renzo | Smart radio environments empowered by reconfigurable AI meta-surfaces: an idea whose time has come[END_REF].
This technology has allowed for the design and fabrication of large software-controlled meta-surfaces [START_REF] Liu | Intelligent metasurfaces with continuously tunable local surface impedance for multiple reconfigurable functions[END_REF], [START_REF] Kaina | Shaping complex microwave fields in reverberating media with binary tunable metasurfaces[END_REF], [START_REF] Tretyakov | Metasurfaces for general control of reflection and transmission[END_REF], known as Reconfigurable Intelligent Surfaces (RISs) [START_REF] Renzo | Smart radio environments empowered by reconfigurable AI meta-surfaces: an idea whose time has come[END_REF], which are able to recycle radio waves without the need for generating new signals, but rather by controlling the radio waves from within the wireless channel. Reconfigurable Intelligent Surfaces An RIS is an intelligent surface able to manipulate the impinging radio waves at will. As opposed to a regular surface, which reflects impinging waves by adhering to Snell's law of specular reflections (where the angle of the incident wave is equal to the angle of the reflected wave), an RIS is an "intelligent" surface in the sense that it is able to reflect and transmit electromagnetic waves in a desired direction. This anomalous reflection, also referred to as geometric scattering, is not bound by the rules of reflection or refraction, but guides reflected waves toward directions that can be considered anomalous with respect to Snell's law; this reflection is governed by the so-called generalized Snell's law [START_REF] Tretyakov | Metasurfaces for general control of reflection and transmission[END_REF]. The capability of RISs to shape and direct the electromagnetic waves in an intelligent way gives rise to a number of interesting applications that are expected to play a crucial role in 6G [START_REF] Tariq | A speculative study on 6G[END_REF]. These applications, depicted in Figure 1 [10], include but are not limited to (a) anomalous reflection, (b) beamforming to focal points [START_REF] Qian | Beamforming Through Reconfigurable Intelligent Surfaces in Single-User MIMO Systems: SNR Distribution and Scaling Laws in the Presence of Channel Fading and Phase Noise[END_REF], [START_REF] Di Renzo | Analytical Modeling of the Path-Loss for Reconfigurable Intelligent Surfaces -Anomalous Mirror or Scatterer ?[END_REF], and (c) joint encoding on the RIS reflection pattern [START_REF] Karasik | Beyond Max-SNR: Joint Encoding for Reconfigurable Intelligent Surfaces[END_REF]. RISs are capable of applying the aforementioned customized transformations to the reflected radio waves by changing the phases of the scattering particles that comprise the RIS [START_REF] Yu | Light propagation with phase discontinuities: generalized laws of reflection and refraction[END_REF], in a way that allows for the constructive interference of all the particle-engendered multipath components at the desired location. These constituent subwavelength scattering particles (a.k.a. unit cells) are dielectric or metallic [START_REF] Liaskos | A new wireless commun. paradigm through software-controlled metasurfaces[END_REF], forming a sub-wavelength array of planar or curved conformation.
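To make the generalized Snell's law concrete, the following Python sketch (our illustration, not taken from the chapter; the frequency, angles, cell count and element spacing are assumed values) computes the linear phase gradient, and the per-element phase profile, that an RIS would apply to steer a normally incident wave toward a desired anomalous direction, using sin(theta_r) - sin(theta_i) = (lambda / (2*pi)) * dPhi/dx:

import numpy as np

c = 3e8
f = 28e9                    # assumed carrier frequency, Hz
lam = c / f                 # wavelength, m
theta_i = np.deg2rad(0.0)   # assumed incidence angle
theta_r = np.deg2rad(40.0)  # assumed desired (anomalous) reflection angle

# Generalized Snell's law rearranged for the required phase gradient
dphi_dx = 2 * np.pi * (np.sin(theta_r) - np.sin(theta_i)) / lam  # rad/m

# Per-element phases for a line of unit cells spaced dx apart (here lambda/5)
dx = lam / 5
n = np.arange(64)                        # 64 unit cells, assumed
phases = np.mod(dphi_dx * n * dx, 2 * np.pi)
print(f"required phase gradient: {dphi_dx:.1f} rad/m")
print(f"phase step between adjacent cells: {np.rad2deg(dphi_dx * dx):.1f} deg")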
This malleable conformation of RISs allows for the coating of environmental objects with software-controlled, artificial, thin films of RISs, facilitating the dynamic manipulation of the incident radio waves in a fully reconfigurable manner [START_REF]VISORSURF project, A hardware platform for software-driven functional metasurfaces[END_REF]. This allows the efficient management of the transmitted radio waves, toward (a) improving network coverage, by circumventing physical obstacles [START_REF] Renzo | Smart radio environments empowered by reconfigurable AI meta-surfaces: an idea whose time has come[END_REF], (b) enhancing the signal power at the receiver, exploiting the scaling law of the power reflected from the RIS scattering particles [START_REF] Basar | Wireless Communications Through Reconfigurable Intelligent Surfaces[END_REF], [START_REF] Qian | Beamforming Through Reconfigurable Intelligent Surfaces in Single-User MIMO Systems: SNR Distribution and Scaling Laws in the Presence of Channel Fading and Phase Noise[END_REF], (c) reducing BS power consumption by substituting power-hungry MIMO RF chains with RIS-empowered multi-stream transmitters on a single RF chain (media-based modulation) [START_REF] Basar | Wireless Communications Through Reconfigurable Intelligent Surfaces[END_REF], or (d) cloaking antennas from one another in particular frequency bands, allowing for their ultra-tight packing [START_REF] Monti | Mantle cloaking for co-site radio-frequency antennas[END_REF]. Numerous companies are already productizing RIS-based solutions, engendering the next generation of RIS-empowered wireless networks. In the present chapter we will elaborate on many of those industrial activities, while outlining the state of the art. In this course, and in order to visualize the RIS-empowered network operation, Figure 2 [18] demonstrates a solution implemented by Metawave (one of the companies actively involved in the engineering of RIS-based solutions [18]) toward improving network coverage by coating building facades and street furniture with RISs, steering radio waves toward locations of poor coverage. The subsequent interconnection and control of all network RISs (that are used for coating the network environment) through radio access intelligent controllers can indeed transform wireless networks into customizable smart radio environments. In this setup, the spatial domain arises as a fully malleable pillar of flexibility, playing an active role in transferring and processing information [START_REF] Renzo | Smart radio environments empowered by reconfigurable AI meta-surfaces: an idea whose time has come[END_REF]. Types of RISs, advantages and limitations RISs can be manufactured in practice either as large arrays of inexpensive antennas, usually spaced half a wavelength apart (e.g. Figure 6, Figure 7 of Section 3), or as large metamaterial-based surfaces (i.e. meta-surface based RISs), whose scattering elements have sizes and inter-distances much smaller than the wavelength (e.g. Figure 8, Figure 9 of Section 3). These two approaches can be effectively viewed as "discrete" and "continuous" implementations of RISs, respectively. In the discrete approach the phases of the antenna elements are optimized individually, whereas in the continuous approach the phases of all scattering particles are optimized collectively.
This collective optimization of all scattering particles, in the latter case, is imposed by the very nature of meta-surface based RISs, as will be detailed in the following paragraphs. To elaborate on meta-surface based RISs, meta-surfaces are electrically thin and electrically large structures, which means that their thickness is considerably smaller and their transverse size is considerably larger than the wavelength. The mode of operation and the reflection properties of these meta-surfaces are defined by the inter-distance and the size of their constituent scattering particles (a.k.a. unit cells). Meta-surface based RISs constitute the third generation of meta-surfaces, coming as the evolution of the first generation of uniform meta-surfaces (i.e. surfaces whose unit cells are uniformly distributed), and of the second generation of non-uniform meta-surfaces. This second generation of non-uniform metasurfaces extended the range of applicability of the first generation by introducing a variability in the spatial domain (over the uniform spatial pattern) to support ultrawideband, beamforming, multibeam and multi-frequency operation. The current third generation of meta-surface based RISs introduced an additional variability in the time domain, through the employment of switches, varactors and diodes, to allow for the dynamic reconfiguration of the reflected or refracted waves. Having outlined the concept of meta-surface based RISs, and since the fundamentals of antenna arrays are already known to the community, we can proceed with the axiomatic formulation of the aforementioned classification of RISs (described as "discrete" and "continuous"), based on the concept of homogeneity. In particular, RISs can be axiomatically categorised, based on the size of their unit cells, into homogenizable and inhomogenizable. Homogenizable RISs are surfaces that can be globally or locally described by effective material parameters, like surface impedance, susceptibility and polarizability [START_REF] Wang | Surface-impedance engineering for advanced wave transformations[END_REF]. This means that RISs characterized by homogeneity can be practically considered continuous, and so can be the ensuing analysis on the reflected waves, as opposed to the discrete RISs of inexpensive antennas mentioned above. Figure 3 [START_REF] Wang | Surface-impedance engineering for advanced wave transformations[END_REF] demonstrates the threshold between homogenizable and inhomogenizable surfaces, with surfaces of unit size (denoted by α) smaller than half a wavelength, like uniform metasurfaces, high impedance surfaces (HIS), or some frequency selective surfaces (FSS), being homogenizable, whereas surfaces of unit size greater than half a wavelength are inhomogenizable. This homogeneity-based taxonomy of RISs is further refined in Figure 3 by taking into account not just the unit size (characterizing the homogeneity of the structure), but also the period of the unit cells, i.e. their inter-distances. In particular, surfaces of unit size smaller than half a wavelength, and of inter-distance (denoted by D) also smaller than half a wavelength, are globally homogenizable, whereas surfaces of unit size smaller than half a wavelength but of inter-distance higher than half a wavelength maintain the homogeneity, but only at a local level.
This means that structures such as metasurface-based gratings, which cluster unit cells in super-cells of varying inter-distances, maintain the homogeneity not at a global, but at a local super-cell level [START_REF] Wang | Surface-impedance engineering for advanced wave transformations[END_REF]. The aforementioned homogeneity of RISs is extremely important to provide properties such as larger angles of reflection, and perfect anomalous reflection or refraction [START_REF] Asadchy | Perfect control of reflection and refraction using spatially dispersive metasurfaces[END_REF], [START_REF] Díaz-Rubio | From the generalized reflection law to the realization of perfect anomalous reflectors[END_REF], [START_REF] Lavigne | Susceptibility derivation and experimental demonstration of refracting meta-surfaces without spurious diffraction[END_REF], as opposed to the imperfect anomalous reflection and refraction of discrete, inhomogenizable structures. In particular, as already mentioned, the unit cells of discrete RISs (that are inhomogenizable due to the half-wavelength spacing of the reflect/transmit arrays, as shown in Figure 3) are optimized independently of each other, whereas the unit cells of continuous RISs follow a coupled optimization. These two optimization approaches, described in the literature as local and non-local design respectively, are imposed by the very nature of the two implementations and endow different properties to discrete and continuous RISs. In the case of the non-local design of continuous RISs, the coupled optimization of unit cells allows the transfer of energy from one unit-cell to another. In particular, energy from an incident wave can be transferred by exciting evanescent superficial (not propagating) waves, travelling on the surface of the RIS [START_REF] Díaz-Rubio | From the generalized reflection law to the realization of perfect anomalous reflectors[END_REF]. Hence, the power received from one unit-cell can be transferred to a different unit-cell, giving rise to a more flexible power budget. To elaborate, the local design of the discrete RISs imposes a standalone power constraint for each unit cell, where the power reflected by each unit cell must be less than or equal to the power impinging on that same unit cell, since RISs are in general passive structures. However, the non-local design allows for the transfer of energy between unit cells, giving rise to a globally passive structure that can however be active at a local level. That is, the power reflected by a unit cell can be higher than the power that impinged on the unit cell, due to the amplification achieved by the aggregation of power from the neighbouring unit cells. This gives rise to a flexible power budget of higher power efficiency, which allows continuous RISs to provide larger angles of reflection and perfect anomalous reflection, exploiting the fact that the structures are passive on a global but not on a local level. At this point it should be noted that, even though second generation meta-surfaces are completely passive structures, the aforementioned characterization of meta-surface based RISs as globally passive structures is a slight abuse of terminology, since in reality an RIS is only a nearly-passive structure.
That is because minimal power and digital signal processing capabilities are needed in order for the RIS to interact with the environment and in order for the surface to be re-configured dynamically over time. In this course, RISs are equipped with processing units, micro-controllers and radio frequency chains that allow them either to report changes in their environment, if they are equipped with appropriate sensing elements, or to receive commands during the control and configuration phase in response to the changing wireless channel. These commands are subsequently implemented by the RIS tuneable elements (i.e. switches, varactors and diodes), resulting in the consumption of power. However, once the control and configuration phase of the RISs is over, no power amplification is used during the normal operation phase. Due to the passive behavior in the normal operation phase and the minimal power requirements in the control phase, RISs can more accurately be characterized as nearly-passive structures. Advantages and limitations Having outlined the types of RISs and their modes of operation, we can proceed with the comparison of RISs with other existing technologies to understand the advantages, limitations and trade-offs of this emerging technology. In this direction, an experimental case study is examined as performed by the startup company Pivotal [23], [24], which develops meta-surface based RIS solutions. In this case study, an RIS architecture (developed by the company under the framework of their "holographic beamforming" technology and described in detail in a number of white papers [23],[24]) is compared with the MIMO and Phased Array architectures in terms of cost, size and implementation complexity, and is presented in Table 1 [23]. The block diagrams of Table 1 demonstrate that the complexity of the MIMO array is very high, since a high number of antennas is required along with an equal number of RF chains and a complex signal processing unit, while the complexity increases significantly with the number of antenna elements. In the case of the phased array the complexity decreases, since instead of an RF chain per antenna element a phase shifter is employed, and the signal processing unit is much less complex. As opposed to these two systems, Table 1 demonstrates that the RIS system requires a much higher number of unit cell elements; however, the circuits supporting each element are much less complex and much less expensive. Hence, this gives rise to an implementation-related trade-off, where the successful application of RISs requires a large number of RIS elements, supported however by circuits characterized by extremely low complexity compared to legacy technologies. This conclusion also becomes evident in Table 2 [24], where an RIS system and a phased array achieving the same quality of service (QoS) are compared. Table 2 demonstrates that the number of RIS elements is much higher than the corresponding elements of the phased array (640 vs 256), however their power consumption is much lower than that of the phased array elements (12.9 W vs 39.6 W). Evidently, compared with other transmission technologies, e.g., phased arrays, multi-antenna transmitters, and relays, RISs require the highest number of scattering elements, but each of them needs to be backed by the fewest and least costly components, while no dedicated power amplifier per element is needed.
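The bottom-line figures of Table 2 can be reproduced from its per-chain entries; the short Python sketch below (ours, for illustration only) recomputes the total DC power of both architectures from the transmit power per chain, the power-added efficiency (PAE) and the controller overhead:

# Values taken from Table 2 (Pivotal white paper figures)
def total_dc_power(n_chains, p_per_chain_mw, pae, controller_w=0.0):
    rf_w = n_chains * p_per_chain_mw / 1000.0  # total RF transmit power, W
    dc_for_rf = rf_w / pae                     # DC draw needed to generate it
    return rf_w, dc_for_rf + controller_w

rf_pa, dc_pa = total_dc_power(256, 6.2, 0.04)          # phased array
rf_ris, dc_ris = total_dc_power(1, 2512.0, 0.25, 2.9)  # RIS, single RF chain
print(f"phased array: {rf_pa:.2f} W RF, {dc_pa:.1f} W DC")   # ~1.59 W, ~39.7 W
print(f"RIS:          {rf_ris:.2f} W RF, {dc_ris:.1f} W DC") # ~2.51 W, ~12.9 W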
This trade-off between the number of elements and the power, cost and complexity of implementation per element needs to be investigated in detail in future works. However, the aforementioned case study has demonstrated that RISs arise as prime candidates for realizing what is called a C-SWaP design, i.e. a transmitter design at reduced cost, size, weight, and power [23], through a software-defined RIS-enabled architecture. Moreover, such architectures can achieve a sustainable wireless design of low electromagnetic field exposure, by recycling transmitted waves, while the transmitter can be manufactured from recyclable materials that (as will be demonstrated in the following section) can be seamlessly integrated into urban network environments by being physically and aesthetically unobtrusive. The previous case study has already demonstrated that, in order to achieve performance comparable or even superior to that of other technologies, a high number of RIS elements is required. Two more key factors determining the performance of RISs that need to be taken into account prior to their productization and their incorporation in wireless networks are the size and transmit distance of the RISs. In particular, it has been demonstrated that for an indicative transmission frequency of f = 28 GHz and an RIS length of 1.5 m = 140λ (i.e. an electrically large RIS), an RIS can provide a similar performance or even outperform a full duplex (FD) relay without even employing a power amplifier. The data rates achieved by the RIS, the FD relay and a half duplex (HD) relay for different transmission distances are demonstrated in Figure 4 [START_REF] Renzo | Reconfigurable Intelligent Surfaces vs. Relaying: Differences, Similarities, and Performance Comparison[END_REF]. The results in Figure 4 are obtained for inter-distances in the range of λ/5 and a number of unit cells equal to 700 (or λ/2 and 280, respectively). An interesting finding is that for distances up to 25-50 m the RIS behaves as an anomalous mirror (i.e. the transmitter is close enough to the RIS to perceive it as electrically large), whereas for distances greater than 75-100 m the RIS behaves as a local diffuse scatterer (i.e. the transmitter resides so far from the RIS that in practice it sees it as electrically small), while for distances greater than 150 m the RIS exhibits a performance that is inferior to that of the FD relay. This finding demonstrates the importance of the transmit distances and of the size of the RIS, since at higher distances RISs are perceived as electrically small and therefore larger RISs may be required in order to outperform legacy technologies. However, the fact that (as already mentioned, and as will be demonstrated in the following section) modern RISs can be aesthetically unobtrusive and seamlessly integrable on glass building facades, billboards etc. allows for the employment of electrically large structures (depending of course on the operating frequency), giving rise to a performance comparable to that of Figure 4. Moreover, due to the aforementioned C-SWaP design, the use of RISs is expected to be pervasive, with ultra-dense deployments of RISs that are expected to minimize transmit distances, allowing network operators to effectively manage this trade-off between RIS sizes and transmit distances.
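The geometry quoted above is easy to verify; the following Python lines (our sketch) recover the electrical length of the RIS and the unit-cell counts stated for Figure 4:

c, f = 3e8, 28e9
lam = c / f  # ~10.7 mm at 28 GHz
L = 1.5      # RIS length, m
print(f"electrical length: {L / lam:.0f} lambda")        # ~140 lambda
print(f"cells at lambda/5 spacing: {L / (lam / 5):.0f}")  # ~700 unit cells
print(f"cells at lambda/2 spacing: {L / (lam / 2):.0f}")  # ~280 unit cells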
This trade-off is also visualized in Figure 5, where the bitrate achieved by the employment of an RIS is compared with the bitrate achieved by an FD and an HD relay for the same setup described above, at 2 different transmit distances of 10 m and 100 m, for different RIS lengths. Once again it is demonstrated that for an electrically large RIS the achieved bit rate is higher than that of the FD and HD relay, which motivates the pervasive deployment of RISs in the network, particularly for small transmit distances. Experimental Activities Having outlined the key technologies employed for the implementation of RISs, the main factors influencing their performance, and their advantages and limitations, the present section will focus on the practical implementations of RISs and the experimental activities available in the literature. In particular, the potential of the RIS technology toward realizing smart and controllable radio environments for the first time in history has already spurred the practical implementation of RIS prototypes, both discrete and continuous, corroborating the theoretical results presented above with respect to the enhancement of received power and capacity, and a brief survey of those prototypes and of their characteristics is performed hereafter. Large Arrays of inexpensive antennas RFocus One of the first prototypes realized based on discrete arrays of inexpensive antennas is the RFocus prototype implemented at the CSAIL lab of MIT [START_REF] Arun | RFocus: Beamforming using thousands of passive antennas[END_REF]. The RFocus prototype, in Figure 6, employs 3,200 inexpensive antennas distributed over a surface of 6 square-meters and constitutes (to the best of the authors' knowledge, among all published configurations) the configuration comprising the largest number of antennas ever employed for a single communication link. The prototype constitutes, in reality, a software-controlled surface with thousands of plain switching elements. Each of the switching elements is a two-way RF switch, with two distinct states: any impinging wave is either reflected or traverses through the element. Moreover, a controller is employed to configure all constituent elements in a way that ensures that any signal impinging upon the RFocus surface from the transmitter will be directed toward the receiver. The controller is agnostic to the location of the transmitter or receiver and just switches between different optimized states to maximize the signal power between the different endpoints of the transmitter and receiver. The RFocus redirected signals are not amplified, and the controller can choose between a "mirror" or a "lens" operation, where the endpoints can reside either on the same or on different sides of the surface. The measurements at the premises of the MIT CSAIL laboratory report that the employment of RFocus improved the median signal strength by a factor of 9.5× and the median channel capacity by a factor of 2.0× [START_REF] Arun | RFocus: Beamforming using thousands of passive antennas[END_REF]. At this point, however, it should be noted that these measurements and gains correspond only to an indoor environment, whereas similar outdoor measurements are required in order to demonstrate the efficiency of the prototype in less controllable environments. In practice RFocus serves as a beamformer, but the beamforming function is now shifted from the radio endpoints to the radio environment itself.
This shift of the beamforming functions to the environment allows for the deployment of huge beamforming antenna arrays (of a 6 square-meter area or more) that could not be deployed before, due to the space limitations typically imposed on infrastructure base stations or access points (which are hard to deploy), thus allowing for the deployment of massive and potentially multiple beamforming antenna arrays. Moreover, such a beamforming antenna array does not need to connect each antenna element to full-fledged radio transmit/receive circuitry of increased power consumption and cost. However, the deployment of such massive antenna arrays of an excessively high number of antenna elements incurs an optimization overhead cost. In particular, the optimization of all 3,200 antenna elements by the controller incurs a significant latency. After collecting TCP throughput data, the latency of the optimization algorithm is equal to 4000 packets. Hence, an interesting trade-off arises between the number of antenna elements (and implicitly the level of reflected power) and the incurred latency of the optimization algorithm at the controller. This trade-off needs to be extensively studied, and in this direction smaller discrete arrays of reduced latency have also been studied and implemented in practice, like the one mentioned below. The ScatterMIMO prototype The ScatterMIMO prototype is another discrete array implementation of an RIS similar to that of RFocus, implemented by the University of California [START_REF] Dunna | ScatterMIMO: Enabling virtual MIMO with smart surfaces[END_REF]. The ScatterMIMO prototype encompasses 48 antenna elements in the form of 3 tiles of 4×4 MIMO antenna arrays forming a virtual access point. This virtual access point complements the active access point by reflecting its signal in a controlled and optimized manner, either vertically or horizontally, toward creating additional MIMO streams to double the throughput. Moreover, the location of this small RIS can itself be appropriately optimized in order to ensure either that an LOS link between the transmitter and the receiver is established or that the RIS will reflect the maximum amount of power toward the receiver. In the latter case the RIS needs to be positioned close to the receiver or the transmitter, and since the position of the receiver is not fixed, the RIS needs to be positioned in the vicinity of the fixed transmitter. Thus, the distance to the receiver or transmitter arises as an additional degree of freedom. By exploiting this degree of freedom and strategically placing the RIS at the optimal location, the authors of [START_REF] Dunna | ScatterMIMO: Enabling virtual MIMO with smart surfaces[END_REF] have demonstrated that the same QoS can be achieved as that provided by an RIS encompassing more antenna elements but at a suboptimal location. Moreover, [START_REF] Dunna | ScatterMIMO: Enabling virtual MIMO with smart surfaces[END_REF] demonstrates that the RIS of 48 antenna elements incurs only a minimal optimization-related latency cost, which is equal to 3 packets, as opposed to the 4000 packets of RFocus. Hence, the ScatterMIMO approach demonstrates an interplay not only between the antenna elements and the latency (already demonstrated by the RFocus approach), but also with respect to the distance between the RIS and the receiver or the transmitter, respectively. Once again, all reported measurements of [START_REF] Dunna | ScatterMIMO: Enabling virtual MIMO with smart surfaces[END_REF] pertain to an indoor environment.
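To give a feel for how such controllers can search the configuration space, the following Python sketch implements a simple greedy per-element search over binary element states on a simulated cascaded channel. This is our own toy illustration of the general idea, not the actual RFocus or ScatterMIMO algorithm, and the channel model is assumed:

import numpy as np

rng = np.random.default_rng(0)
N = 128                                            # number of two-state elements (toy size)
h = rng.normal(size=N) + 1j * rng.normal(size=N)   # TX -> element channels (assumed)
g = rng.normal(size=N) + 1j * rng.normal(size=N)   # element -> RX channels (assumed)
d = 0.1 * (rng.normal() + 1j * rng.normal())       # weak direct TX -> RX path

def rx_power(states):
    # Each element either contributes its cascaded path (state 1) or not (state 0)
    return abs(d + np.sum(states * h * g)) ** 2

states = rng.integers(0, 2, size=N)  # random initial configuration
for _ in range(3):                   # a few greedy sweeps over all elements
    for n in range(N):
        flipped = states.copy()
        flipped[n] ^= 1
        if rx_power(flipped) > rx_power(states):  # keep flips that help
            states = flipped
print(f"received power gain: {rx_power(states) / rx_power(np.ones(N)):.1f}x vs all-on")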
Meta-surface approaches Further to the discrete array implementations of RISs mentioned above, numerous meta-material based implementations of RISs have also been fabricated, with the majority of those approaches also being tested primarily in indoor and controllable environments. As already mentioned, the non-local design of such meta-surfaces (i.e. the coupled optimization of their constituent elements) endows an interesting set of properties, such as perfect anomalous reflection at large angles of reflection. One of the first approaches achieving a perfect design of an anomalously reflective surface employing a strongly nonlocal design is that of Aalto University [START_REF] Díaz-Rubio | From the generalized reflection law to the realization of perfect anomalous reflectors[END_REF], with the fabricated metasurface being depicted in Figure 8. The principles of nonlocal design have already been developed in detail in Section 2 and are therefore omitted at this point. However, for the sake of completeness it should be noted that the fabricated meta-surface employs 10 unit cells clustered at a super-cell level, with each unit-cell having a width of 3.5 mm (in the x axis) and each super-cell having a width of 40 mm (in the x axis) and a height of 18.75 mm (in the y axis). The overall dimensions of the metasurface are 440 mm in the x axis and 262.5 mm in the y axis, which for an operating frequency of f = 8 GHz corresponds to 11.7λ and 7λ respectively. The subwavelength dimensions of the unit cells give rise to a homogenizable metasurface and, more importantly, pave the way for the advent of homogenizable structures of even smaller granularity (i.e. of even smaller unit cells) that can give rise to aesthetically unobtrusive structures like the one presented below. Such an aesthetically unobtrusive metasurface-based RIS is that fabricated by DOCOMO and depicted in Figure 9 [27]. The DOCOMO prototype employs unit cells of extremely small size, smaller than the 3.5 mm of the Aalto-fabricated metasurface or even the 2 mm size of other contemporary state-of-the-art metasurfaces. These minuscule unit cell sizes and the implicit continuous granularity give rise to a transparent RIS that can be deployed on glass building facades, vehicles and billboards, as previously visualized in Figure 2. Such transparent RISs do not interfere aesthetically or physically with the surrounding environment, facilitating the proliferation of RISs in wireless network environments in an unobtrusive manner, thus allowing for the transformation of wireless networks into RIS-empowered smart radio environments through the employment of large RISs deployed at small transmit distances, residing in the vicinity of the receiver or transmitter. In this course, relevant measurements not just in indoor and controllable environments but also in outdoor environments need to be reported, and this constitutes one of the key challenges of future research work in the field. RIS Research Areas and Challenges in the 6G Ecosystem The previous sections have epigrammatically outlined the potential of RISs, by investigating their potential benefits and limitations, while also comparing them with similar and well established technologies such as conventional relays [START_REF] Renzo | Reconfigurable Intelligent Surfaces vs. Relaying: Differences, Similarities, and Performance Comparison[END_REF], MIMO and phased arrays [23,24].
Moreover, since RISs are a novel paradigm in the context of wireless communications, the realization of proof-of-concept platforms and hardware testbeds is outlined in the previous section [START_REF] Arun | RFocus: Beamforming using thousands of passive antennas[END_REF][START_REF] Dunna | ScatterMIMO: Enabling virtual MIMO with smart surfaces[END_REF][27] in order to corroborate the theoretical findings and demonstrate the potential benefits arising from the utilization of RISs by 6G systems. In this course, a multitude of additional experimental activities can also be found in the literature, focusing, for instance, on RIS elements of finer (2-bit) phase resolution [START_REF] Dai | Reconfigurable Intelligent Surface-based Wireless Communication: Antenna Design, Prototyping and Experimental Results[END_REF] or RIS testbeds employed for ambient backscatter communications [START_REF] Fara | Polarization-Based Reconfigurable Tags for Robust Ambient Backscatter Communications[END_REF]. The employment of RISs and their amalgamation with legacy transmission technologies (such as that of [START_REF] Fara | Polarization-Based Reconfigurable Tags for Robust Ambient Backscatter Communications[END_REF]) is also attracting the interest of the wireless community, with RISs being seamlessly combined with other wireless research areas such as: physical layer security [START_REF] Yang | Secrecy Performance Analysis of RIS-Aided Wireless Communication Systems[END_REF][START_REF] Yu | Enabling Secure Wireless Communications via Intelligent Reflecting Surfaces[END_REF][START_REF] Guan | Intelligent Reflecting Surface Assisted Secrecy Communication: Is Artificial Noise Helpful or Not?[END_REF][START_REF] Qiao | Secure Transmission for Intelligent Reflecting Surface-assisted mmWave and Terahertz Systems[END_REF], non-orthogonal multiple access [START_REF] Mu | Exploiting Intelligent Reflecting Surfaces in NOMA Networks: Joint Beamforming Optimization[END_REF][START_REF] Ding | A Simple Design of IRS-NOMA Transmission[END_REF][START_REF] Liu | RIS Enhanced Massive Non-orthogonal Multiple Access Networks: Deployment and Passive Beamforming Design[END_REF], Internet of Things and backscattering communications [START_REF] Zhang | Large Intelligent Surface/antennas (LISA) Assisted Symbiotic Radio for IoT Communications[END_REF], aerial communications [START_REF] Li | Reconfigurable Intelligent Surface Assisted UAV Communication: Joint Trajectory Design and Passive Beamforming[END_REF][START_REF] Yang | On the Performance of RIS-Assisted Dual-Hop UAV Communication Systems[END_REF], wireless power transfer [START_REF] Pan | Intelligent Reflecting Surface Aided MIMO Broadcasting for Simultaneous Wireless Information and Power Transfer[END_REF], multiple-access edge computing [START_REF] Bai | Latency Minimization for Intelligent Reflecting Surface Aided Mobile Edge Computing[END_REF], millimeter-wave, terahertz, and optical wireless communications [START_REF] Zuo | Intelligent Reflecting Surface Enhanced Millimeter-Wave NOMA Systems[END_REF][START_REF] Perovic | Channel Capacity Optimization Using Reconfigurable Intelligent Surfaces in Indoor mmwave Environments[END_REF], as well as software-defined networking protocols for the control and programmability of the RISs [START_REF] Liaskos | On the Network-layer Modeling and Configuration of Programmable Wireless Environments[END_REF].
For a thorough literature review on the application of RISs across the holistic wireless ecosystem, the reader can also refer to a number of available overviews and tutorials on RISs and the references therein [START_REF] Di Renzo | Smart Radio Environments Empowered by Reconfigurable Intelligent Surfaces: How it Works, State of Research, and Road Ahead[END_REF][START_REF] Renzo | Smart radio environments empowered by reconfigurable AI meta-surfaces: an idea whose time has come[END_REF][START_REF] Basar | Wireless Communications Through Reconfigurable Intelligent Surfaces[END_REF][START_REF] Huang | Holographic MIMO Surfaces for 6G Wireless Networks: Opportunities, Challenges, and Trends[END_REF][START_REF] Liu | Reconfigurable Intelligent Surfaces: Principles and Opportunities[END_REF]. Further to the aforementioned cross-sectorial implementation of RISs, a number of research challenges have also arisen from the standalone implementation of RISs. In particular, one of the major research challenges pertains to the development of simple but accurate models to account for the power at the receiver, after a transmitter emits radio waves that are reflected from an RIS. Some early research works have already provided models and measurements for this RIS operation [START_REF] Tang | Wireless Communications with Reconfigurable Intelligent Surface: Path-loss Modeling and Experimental Measurement[END_REF][START_REF] Di Renzo | Analytical Modeling of the Path-Loss for Reconfigurable Intelligent Surfaces -Anomalous Mirror or Scatterer ?[END_REF][START_REF] Danufane | On the Path-Loss of Reconfigurable Intelligent Surfaces: An Approach Based on Green's Theorem Applied to Vector Fields[END_REF][START_REF] Gradoni | End-to-End Mutual-Coupling-Aware Communication Model for Reconfigurable Intelligent Surfaces: An Electromagnetic-Compliant Approach Based on Mutual Impedances[END_REF], but additional work is required in order to analyze the performance limits of RISs and optimize their operation. In this direction, some early approximate and asymptotic analytical frameworks have already been developed, toward quantifying the gains and limitations of RISs in different network scenarios, for example in the presence of phase noise, characterizing the distribution and scaling laws of the SNR [START_REF] Qian | Beamforming through Reconfigurable Intelligent Surfaces in Single User MIMO Systems: SNR Distribution and Scaling Laws in the Presence of Channel Fading and Phase Noise[END_REF], or by employing point processes, such as stochastic geometry and random spatial processes, to compute the probability that an RIS-coated object in the network acts as a reflector [START_REF] Di Renzo | Reflection Probability in Wireless Networks with Metasurface-coated Environmental Objects: An Approach based on Random Spatial Processes[END_REF]. As already stressed, the next step after the network analysis and performance evaluation in the presence of RISs is the optimization of the RIS operation. In fact, the resource optimization problem in the presence of RISs (both model-based and data-driven) is one of the most investigated research topics in the context of RIS-empowered networks. The majority of the research activities focus on the problem of active and passive beamforming in MISO systems, i.e., the optimization of transmit (active) beamforming and (passive) RIS phase shifts, with respect to energy and spectral efficiency as well as system rate and optimization-introduced overhead.
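As a minimal illustration of what "passive beamforming" optimizes, the Python sketch below (our toy example with an assumed narrowband single-antenna model and no direct path) aligns the RIS phase shifts to the cascaded channel, theta_n = -arg(h_n g_n), and shows the resulting gain over a random configuration; with unit-modulus reflection coefficients, this co-phasing is the classical optimum for this simple setting:

import numpy as np

rng = np.random.default_rng(1)
N = 256  # number of RIS elements (assumed)
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # TX -> RIS
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # RIS -> RX

def channel_gain(theta):
    # Effective end-to-end channel power: |sum_n h_n * exp(j*theta_n) * g_n|^2
    return abs(np.sum(h * np.exp(1j * theta) * g)) ** 2

theta_opt = -np.angle(h * g)               # co-phase all cascaded paths
theta_rand = rng.uniform(0, 2 * np.pi, N)  # uncontrolled surface
print(f"gain vs random phases: {channel_gain(theta_opt) / channel_gain(theta_rand):.0f}x")
# The coherent gain grows like N^2, consistent with the SNR scaling laws cited above.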
In general, both analytical techniques [START_REF] Zhou | Spectral and Energy Efficiency of IRS-Assisted MISO Communication with Hardware Impairments[END_REF][START_REF] Han | Intelligent Reflecting Surface Aided Network: Power Control for Physical-layer Broadcasting[END_REF][START_REF] Zhou | Robust Beamforming Design for Intelligent Reflecting Surface Aided MISO Communication Systems[END_REF][START_REF] Zappone | Overhead-aware Design of Reconfigurable Intelligent Surfaces in Smart Radio Environments[END_REF][START_REF] Perović | Achievable Rate Optimization for MIMO Systems with Reconfigurable Intelligent Surfaces[END_REF], such as the alternating maximization (for the non-convex problem), and data-driven, machine learning and artificial intelligence techniques [START_REF] Zappone | Wireless Networks Design in the Era of Deep Learning: Model-based, AI-based, or both?[END_REF][START_REF] Gacanin | Wireless 2.0: Towards an Intelligent Radio Environment Empowered by Reconfigurable Metasurfaces and Artificial Intelligence[END_REF] have been employed. However, RIS optimization remains an underexplored research challenge, since the problems arising in RIS-enabled MIMO systems, or in systems comprising multiple RISs, remain yet to be explored, with the complexity increasing with the number of antennas and RISs. Another open research challenge related to RISs is channel estimation, which accounts for the effect of the RIS on the channel between the transmitter and the receiver. The RIS's transformation of the channel must either be reported by a feedback mechanism to the transmitter, or the channel state information (CSI) of the combined channel (transmitter-RIS-receiver) must be estimated by the transmitter as a whole (for all possible RIS states). In general, the nearly-passive operation of RISs and their minimal processing capabilities impose the investigation of new algorithms and protocols for channel estimation, while avoiding signal processing on-board the RIS to the extent possible. Some research works in this direction, employing neural networks, already exist [START_REF] Liu | Deep Denoising Neural Network Assisted Compressive Channel Estimation for mmWave Intelligent Reflecting Surfaces[END_REF]; however, channel estimation for RIS-enabled networks remains a little explored research avenue. Last but not least, one of the most promising applications of RISs is the modulation and encoding of data directly on the RIS scattering elements and on the reflected radiation pattern. This is, in fact, an RIS-enabled variation of the spatial modulation and index modulation concepts [START_REF] Karasik | Beyond Max-SNR: Joint Encoding for Reconfigurable Intelligent Surfaces[END_REF][START_REF] Basar | Wireless Communications Through Reconfigurable Intelligent Surfaces[END_REF][START_REF] Li | Single-RF MIMO: From Spatial Modulation to Metasurface-Based Modulation[END_REF], which, as already explained in detail in Section 2.1, provides a low complexity, power-conscious alternative to massive MIMO chains.
Figure 1. Potential applications of "intelligent" reflectors. Source: Di Renzo et al. [10].
Figure 2. RIS enhanced network coverage. Source: Metawave Corporation.
Figure 3. Planar structures classified by homogenization property. Here, a is the unit size and D the structure period (inter-distance). Source: Wang [START_REF] Wang | Surface-impedance engineering for advanced wave transformations[END_REF].
Figure 4. Data rate of RISs and relays versus the transmission distance. Source: Di Renzo et al. [10].
Figure 5. Data rate of RISs and relays versus the size of the RIS. Source: Di Renzo et al. [10].
Figure 6. The RFocus prototype surface. Source: Arun [START_REF] Arun | RFocus: Beamforming using thousands of passive antennas[END_REF].
Figure 7. The ScatterMIMO hardware prototype. Source: [START_REF] Dunna | ScatterMIMO: Enabling virtual MIMO with smart surfaces[END_REF].
Figure 8. Aalto-fabricated metasurface. Source: [21].
Figure 9. DOCOMO prototype of transparent dynamic metasurface. Source: [27].
Table 1. Summary of key differences among RIS, phased array and MIMO beamformers (columns: Architecture, Block Diagram, Cost, Size, Challenges). Only the RIS row survives extraction: cost, a super-sampled COTS design allowing for a low price; size, thin and conformable; challenges, a single beam per polarization per sub-aperture.
Source: Eric J. Black, PhD, CTO, Pivotal Commware, Holographic Beam Forming and MIMO, 2021.
Table 2. Difference between RIS and Active Phased Array power consumption
                          Phased Array   RIS     Unit
Number of Unit Cells      256            640     #
Antenna Gain              28             26      dB
Number of RF chains       256            1       #
Transmit Power per chain  6.2            2512    mW
Total RF Transmit Power   1.58           2.51    W
Power Added Efficiency    4.0%           25.0%   %
DC Draw for RF            39.6           10.0    W
RIS Controller            0              2.9     W
Total DC Power            39.6           12.9    W
Source: Pivotal Staff, Pivotal Commware, Holographic Beam Forming and Phased Arrays, 2021.
04114087
en
[ "phys.meca.mefl" ]
2024/03/04 16:41:24
2021
https://hal.inrae.fr/hal-04114087/file/239paperISPIV2021_Khojasteh.A.R_July082021.pdf
Ali Rahimi Khojasteh email: [email protected] Dominique Heitz Yin Yang Adjustable interrogation window for 2D PIV estimation based on local Lagrangian coherency Introduction In Particle Image Velocimetry (PIV) algorithms, both correlation-based and local optical flow techniques rely on interrogation windows. The importance of the interrogation window has been studied widely for obtaining effective methods of adapting the window size and shape, which directly impacts the spatial accuracy of velocity estimation [START_REF] Theunissen | Spatially adaptive PIV interrogation based on data ensemble[END_REF][START_REF] Wieneke | Adaptive PIV with variable interrogation window size and shape[END_REF][START_REF] Theunissen | An adaptive sampling and windowing interrogation method in PIV[END_REF]. Since the flow behaviour inside the interrogation window has clusters of small and large scale coherent motions, PIV techniques involve window size reduction to avoid those non-coherent areas and increase the maximum achievable spatial resolution. Generally, the interrogation window size is gradually reduced based on empirical precalculations and tunings, while this empirical approach can be adjusted by temporal and local spatial information. This means flow behaviour at different times and places would result in different interrogation window shapes, which is the main objective of this paper. To demonstrate the performance of the proposed method, we integrated the adjustable interrogation window with the local optical flow PIV algorithm. In the local optical flow approach, all pixels inside the window are considered for calculating a single-pixel velocity at the centre of the window. However, to estimate more accurate motions, it is crucial to ignore areas that are non-coherent with the centre pixel. This study seeks to adjust the interrogation window shape in motion estimation by calculating locally coherent and non-coherent areas. We propose using Lagrangian Coherent Structures (LCS), looking for the local separatrix ridges that divide the flow field into clusters of coherent regions [START_REF] Haller | Lagrangian coherent structures[END_REF]. The idea of applying LCS in Particle Image Velocimetry (PIV) / Particle Tracking Velocimetry (PTV) algorithms was demonstrated by Khojasteh et al. [START_REF] Rahimi Khojasteh | Lagrangian Coherent Track Initialisation (LCTI)[END_REF]. To this end, all neighbouring pixels inside the interrogation window must be classified as coherent or non-coherent with the centre pixel. A similar interrogation window adjustment can be implemented in cross correlation-based PIV techniques. Local optical flow Classic optical flow works under the intensity consistency assumption of the acquired images, inspired by the Horn and Schunck [START_REF] Berthold | Determining optical flow[END_REF] formulation. It can also be written in terms of the Optical Flow Constraint Equation (OFCE) as follows,

df/dt = ∂f/∂t + v·∇f = 0, (1)

where v is the desired velocity over one time step and f is the image intensity. The operator ∇ denotes the gradient over the 2D image domain. We then try to minimise the energy function under the intensity consistency assumption. However, real PIV images feature temporal changes of intensity between consecutive images due to illumination and trigger setup, so this assumption can be violated in PIV applications. Schuster et al.
[START_REF] Schuster | Motion Estimation Under Location Uncertainty, Application To Large-Scale Characterization Of A Mixing Layer[END_REF] addressed the intensity inconsistency problem by introducing a stochastic optical flow formulation. In the stochastic approach, the Eulerian flow velocity field is decomposed into a large-scale smooth component and a small-scale turbulent component. In the present study, we use the same stochastic approach implemented into the Lucas-Kanade optical flow estimator [START_REF] Schuster | Motion Estimation Under Location Uncertainty, Application To Large-Scale Characterization Of A Mixing Layer[END_REF]. In theory, optical flow provides one velocity vector for each pixel of two consecutive images, based on spatial and temporal variations of the image intensities, while cross correlation based PIV techniques result in a coarse resolution estimation. Therefore, optical flow PIV techniques might provide more details of the flow behaviour in turbulent flows. Coherent interrogation window Lagrangian Coherent Structures (LCS) divide the local flow field into regions of coherent motions [START_REF] Haller | Lagrangian coherent structures[END_REF]. LCS is also known as the skeleton of the flow, which can be utilised as a deterministic criterion to shape the interrogation window. We computed the LCS separatrix ridges using the Finite-Time Lyapunov Exponent (FTLE), employing modified versions of two open-access codes named LCS kit [START_REF] Shadden | Lagrangian analysis of fluid transport in empirical vortex ring flows[END_REF] and LCS tool [START_REF] Onu | LCS Tool: A computational platform for Lagrangian coherent structures[END_REF]. FTLE is a scalar value that measures the amount of spatial stretching over a finite time. In this study, the spatial region is determined by the interrogation window to compute the FTLE value locally. Khojasteh et al. [START_REF] Rahimi Khojasteh | Lagrangian Coherent Track Initialisation[END_REF] showed that using FTLE in local spatial regions over sparse neighbour particles can reveal signs of local ridges that can be employed in velocimetry algorithms. FTLE analysis provides spatial and temporal flow field behaviour. We propose to adjust the interrogation window based on separatrix ridges, resulting in different window shapes in space and time. The shape of the interrogation window does not change if all pixels inside it are coherent with the target pixel. The major problem happens when the interrogation window consists of multi-scale dynamics. The separatrix ridges act as local transport barriers: particles do not cross these lines. Flow motions in between these boundaries have coherent motions. Depending on the centre location, coherent flow motions (in the blue vector field) and regions (in yellow pixels) are on one side of these lines (see Figure 1). Any pixel inside the coherent region is considered for the spatial and temporal gradient of intensity computation. We need prior knowledge about the velocity field to compute the FTLE map. The minimisation process in the optical flow is an iterative approach. We found that it is unnecessary to compute the whole iterative process with additional LCS computations, since it is costly in time. Therefore, we introduce the LCS computation in the last three iterations of the minimisation process. In this way, we provide a near-final solution vector field for the first Lagrangian separatrices computation.
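For readers unfamiliar with the FTLE computation underlying the separatrix detection, the following Python sketch (our simplified illustration, not the LCS kit/LCS tool implementation used in the paper) computes an FTLE field on a 2D grid from a steady velocity field by advecting a grid of tracers and differentiating the flow map:

import numpy as np

# Toy steady velocity field (assumed): a Taylor-Green-like cellular vortex pattern
def vel(p):
    x, y = p[..., 0], p[..., 1]
    return np.stack([-np.sin(np.pi * x) * np.cos(np.pi * y),
                      np.cos(np.pi * x) * np.sin(np.pi * y)], axis=-1)

# Advect a grid of tracers over a finite time T (forward Euler for brevity)
nx = ny = 64
x, y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 2, ny), indexing="ij")
pos = np.stack([x, y], axis=-1)
T, nsteps = 2.0, 200
dt = T / nsteps
for _ in range(nsteps):
    pos = pos + dt * vel(pos)

# Flow-map gradient, Cauchy-Green tensor, and FTLE = ln(sqrt(lambda_max)) / T
dFx_dx, dFx_dy = np.gradient(pos[..., 0], x[:, 0], y[0, :])
dFy_dx, dFy_dy = np.gradient(pos[..., 1], x[:, 0], y[0, :])
ftle = np.zeros((nx, ny))
for i in range(nx):
    for j in range(ny):
        J = np.array([[dFx_dx[i, j], dFx_dy[i, j]],
                      [dFy_dx[i, j], dFy_dy[i, j]]])
        lmax = np.linalg.eigvalsh(J.T @ J)[-1]  # largest Cauchy-Green eigenvalue
        ftle[i, j] = np.log(np.sqrt(max(lmax, 1e-12))) / T
# Ridges (high FTLE values) approximate the separatrices used to trim the window.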
Results and evaluation 4.1 Synthetic evaluation We performed synthetic analyses to examine the performance of the proposed technique. The synthetic PIV images were generated from a Direct Numerical Simulation (DNS) of a 2D homogeneous isotropic turbulent flow. The boundary conditions on each side of the domain were set to periodic. The DNS resolution was 256 × 256 mesh cells. We created the synthetic particle trajectories using a linear Euler transport function in time and linear velocity interpolation in space. More details of the DNS simulation can be found in [START_REF] Heitz | Variational fluid flow measurements from image sequences: synopsis and perspectives[END_REF]. We assessed the improvements of using the coherent interrogation window on the velocity estimation results of the local optical flow technique. Three metrics, namely the RMS error (representing the magnitude of velocity estimation), the vorticity error, and the angular error, were defined to quantify the local and overall performances of the proposed technique. In an overall view, as shown in Figure 2, we gained around a 5% global increase in the accuracy of velocity estimation compared with the classic local optical flow. However, it should be noted that the main objective here was to increase the resolution and accuracy around separation and non-coherent areas. Without using adjustable windows, cross-correlation and optical flow techniques would result in up to 50% false estimation in those separation and non-coherent areas. In a detailed view, Figure 3 shows improvements in three specific flow behaviours, vortex, shear and hyperbolic flows, selected locally from the 2D synthetic data. These three regions are intentionally picked to illustrate differences in detailed motions. We found that local optical flow with a square interrogation window suffers from inaccurate angular estimation compared with the DNS reference in the core vortex regions when the interrogation scale is larger than the vortex scale. The disagreement reaches over 7 degrees of angular vector field misestimation, with over 50% angular error. Figure 3 shows significant local improvements in such a region if an adjustment is performed. Similar motion refinements were also observed when high shear or hyperbolic behaviour occurs. We found that coherent optical flow gives better velocity estimation in complex local regions. These local improvements impact the overall assessment of the technique in a global view as well. Experiment case study We performed a 2D2C PIV experiment of the wake behind a cylinder in a wind tunnel to study the capability of the proposed technique on real experiment images. The Reynolds number corresponding to the cylinder of 12 mm diameter was set at 3900. An sCMOS camera with 2560 × 2160 pixels was employed to acquire images at a 49.2 Hz frequency. The measurement plane was illuminated using a 200 mJ laser (EverGreen from Quantel). The disparity of the velocity estimation between optical flow with and without the adjustable interrogation window for the current experiment is shown in Figure 4. As mentioned in Section 3, the window shape stays unchanged if the flow motion is coherent inside the interrogation window. This means that the disparity should be almost zero in the majority of freestream regions. In agreement with the synthetic analysis, the coherent adjustable window only refined velocity estimations of complex motions such as shear, wake, and mixing regions (see Figure 4). We therefore compared our proposed technique with the cross-correlation results obtained from Davis software (version 10.1.2).
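The RMS and angular error metrics used in this evaluation are standard; as a reference, a minimal Python sketch of how such metrics can be computed against a DNS ground truth is given below (our formulation; the paper does not spell out its exact definitions, so the angular error here is simply the angle between estimated and reference 2D vectors):

import numpy as np

def rms_error(u_est, v_est, u_ref, v_ref):
    # Root-mean-square magnitude error over the whole field
    return np.sqrt(np.mean((u_est - u_ref) ** 2 + (v_est - v_ref) ** 2))

def angular_error_deg(u_est, v_est, u_ref, v_ref):
    # Angle between estimated and reference velocity vectors, per pixel
    dot = u_est * u_ref + v_est * v_ref
    norm = np.hypot(u_est, v_est) * np.hypot(u_ref, v_ref) + 1e-12
    return np.degrees(np.arccos(np.clip(dot / norm, -1.0, 1.0)))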
A snapshot of the instantaneous vorticity and vector fields is shown in Figure 5, illustrating the existence of complex mixing and vortex generation downstream of the cylinder. The vorticity field shows signs of strong shear on the two sides of the wake immediately downstream of the cylinder (x/D < 4). These regions feature high velocity and acceleration gradients. We compared the cross-correlation results with coherent optical flow, knowing that the synthetic analysis showed significant misestimation in such regions (see Figure 5.a). A 2D sliding average filter was used for both techniques for the image pretreatment. The cross-correlation final spatial resolution was 16 pixels, with multi-pass vector calculations starting from 64 × 64 down to 16 × 16 and 75% overlap. As mentioned in Section 2, the resolved resolution of coherent optical flow is the same as the camera resolution. Therefore, the comparison was performed between high resolution coherent optical flow and coarse resolution cross-correlation results. The vector field estimations of both the coherent optical flow and cross-correlation PIV techniques captured the shear with a high-gradient vector change. In contrast, coherent optical flow estimated a detailed vector change in the direction normal to the shear, with a smooth change of vectors representing more of the physics of the flow behaviour. Figure 5.b shows complex vortex and mixing inside the wake region. We found that the centre of the vortex is not aligned in the two techniques. There is roughly a 3-pixel shift between the two estimations. Coherent optical flow maintained a smooth rotation with a stretch in the diagonal directions. Moreover, the vector field decreases gradually toward the vortex centre. However, the cross-correlation technique only captured the large scale motion with a weak signature of stretching in the diagonal direction. The third local comparison is in the formation region with a strong vortex (see Figure 5.c). Similarly, we observed disagreement in the vortex centre estimation between the two techniques, while the large scale motions are almost equal. The upper right corner of the vortex lies near the large velocity motions (see Figure 5.c). By contrast, the vortex centre is located inside the wake, with drastically lower velocity values. Such a gradient associated with the flow rotation creates a complex local region for PIV estimation. Comparison of the two techniques shows that using a coherent adjustable interrogation window resolves more details of the flow field than the classic cross-correlation techniques. Conclusion A novel approach to adjust the PIV interrogation windows based on local spatial and temporal coherent motions is proposed. We quantify the coherent and non-coherent regions using Lagrangian Coherent Structures (LCS) as skeletons of the flow. The synthetic analysis showed that coherent optical flow locally improves the velocity estimation accuracy by up to 50%. The main advantage of the proposed technique was the improvement in angular estimation in regions with high velocity and acceleration gradients. We also demonstrated our coherent optical flow performance in a real PIV experiment of the wake behind a cylinder at a Reynolds number equal to 3900. The experiment case study revealed well-resolved velocity estimations in complex motions such as high shear, wake, and mixing regions.
Figure 2: Global comparison of the velocity estimation accuracy between local optical flow with and without the coherent adjustable window for 2D homogeneous isotropic turbulence.

Figure 3: Angular error comparison between local optical flow with and without the coherent adjustable window in three scenarios locally selected from 2D homogeneous isotropic turbulence: a) vortex flow; b) shear flow; c) hyperbolic flow. Only high values of angular error are shown.

Figure 4: Disparity of the velocity estimation between optical flow with and without the adjustable interrogation window for the cylinder-wake experiment.

Figure 5: Instantaneous snapshot of the vorticity and vector fields obtained from the PIV experiment at a Reynolds number of 3900: a) local view of the vector field estimation, comparing coherent optical flow and DaVis cross-correlation in the high-shear region; b) comparison of the vector estimation inside the wake region; c) comparison of the vortex estimation.
04114120
en
[ "shs" ]
2024/03/04 16:41:24
2023
https://hal.science/hal-04114120/file/Institutions%20of%20expert%20judgment_BLaurent.pdf
Brice Laurent

Institutions of Expert Judgment: The Production and Use of Objectivity in Public Expertise

Keywords: public expertise, institutions, objectivity, expert judgment, Europe

This chapter discusses the relationships between objectivity and expert judgment in public bodies. Building on science and technology studies (STS), it looks at how the manufacturing of objectivity and the definition of appropriate expert judgment have been jointly undertaken in public institutions of expertise. The analyses of objectivity as a historical and social construct invite us to consider that public expertise always relies on operations that actively shape human subjects and social organizations, yet in ways that differ across various institutional settings. The chapter discusses the case of the European institutions, which have struggled to stabilize a unique expert voice while also being accused of being overly technocratic. Instead of considering this case as a failure of expertise, the chapter shows that it offers a magnifying lens into the current difficulties of expertise, and provides elements to explore potential ways forward.

Introduction

How to define the appropriate expertise for policymaking? This question traditionally receives an answer in the terms of objectivity. Objective facts are described as the main ingredients of the sound scientific advice required for making decisions about complex policy matters. A central issue, then, is how to integrate the contribution of individual experts. The judgment of experts is a component of the production of objective knowledge that is both necessary and potentially problematic, as it is tied to the personal experience of the expert as a human being who is bound to be subject to various limitations and potential biases. How experts are expected to behave in order to produce objective facts for policymaking has thus proven controversial. In recent years, the trustworthiness of public expertise and its ability to convincingly ground objectivity in the judgment of public experts have been questioned. Events such as Brexit and the 2016 election of Donald Trump as US president have been interpreted as outcomes of a pervasive mistrust of the ability of public experts to provide convincing advice. These events can be (and have been) read as signs of a re-imagination of expert judgment, as the question of whether it should be reserved to certain authorized people appears more problematic than ever. The expression "alternative facts," used by Trump adviser Kellyanne Conway, was a clear attack on the uniqueness of the voice of objectivity. It seemed to indicate an opening of the ownership of the production of facts, at the risk of suggesting that any judgment could be considered "expert." In parallel with a growing mistrust of experts, other actors claim that new ways of producing objective knowledge could insulate the production of claims from subjective interventions and individual bias. "Evidence-based policy" has been used as an umbrella term to point to a range of methods, from cost-benefit analysis to randomized controlled trials, meant to insulate policymaking from the tribulations of politics. The pervasive reference to machine learning can be situated in that context as well, as algorithms are said to be able to finally provide an automated channel toward the objective description of reality.
Thus, Facebook's recent claim that artificial intelligence could be used as a tool to identify "fake news" ties together the broadening of the definition of expert judgment with the calls for new mechanical ways of ensuring objectivity. Kellyanne Conway's and Facebook's interventions are two opposite reactions to the fact that the ability of expert judgment to provide objective knowledge is being questioned. The former points toward the limitless extension of who has the ability to be trusted as an expert, and the latter supposes that the automation of expert judgment could eliminate persistent, and necessarily biased, subjective elements. The former is not very satisfactory: if anyone can be an expert, then no one in particular can be trusted as one. But neither is the recourse to an even more technologized version of expertise, because that can only exacerbate the democratic issues arising from the restriction of expert advice to a well-defined group of people. The first reaction gets rid of the problem of objectivity by turning to a whole mass of individual subjects. The second one hopes to make the human subject disappear behind automatized tools that are expected to ensure objectivity. For all their situatedness in the era of the former Trump presidency, Brexit, and the alleged influence of social media in the growing mistrust of expertise, these reactions are not entirely foreign to a long-term debate in science-policy circles about the potential widening of the sources of public expertise. In 1979, a report by the Organisation for Economic Co-operation and Development (OECD) discussed public participation in science and technology in the wake of what was already construed as a delegitimation of public expertise, and explored the ways in which such participation could be articulated with the production of objective expertise (OECD 1979). Since then, the issue of finding the appropriate balance between opening up the circles of expertise and maintaining control over what counts as objective knowledge has been widely discussed in theoretical and practical terms. Both these discussions and the current difficult situation that expertise faces are invitations to theorize the relationships between objectivity and expert judgment. This chapter builds on the important body of work in science and technology studies (STS) to discuss some analytical perspectives that can be useful in theorizing these relationships and, eventually, in tackling the current challenges that public expertise faces. Central to the argument here is that objectivity for the sake of expertise is manufactured in public institutions in ways that also determine the type of expert judgment considered acceptable. In that sense, objectivity is not gained despite the subjective human component, but relies on operations that actively shape human subjects and social organizations. The chapter is organized in two sections. The first one reviews the STS works that have analyzed objectivity as a historical and social construct. These works invite us to consider that public expertise always articulates objectivity and expert judgment, yet in ways that differ in various institutional settings. The second section discusses the specific case of European expertise. The European institutions have struggled to stabilize a unique expert voice, while at the same time being accused of being overly technocratic.
But instead of considering the European case as an illustration of failed attempts at manufacturing public expertise, I show that it proposes an original, if unstable, articulation of objectivity and expert judgment. As such, the European example offers a magnifying lens on the current difficulties of expertise, and may provide elements for exploring the potential ways forward.

Manufacturing Objectivity, Shaping Scientific Subjects

Objectivity in Historical Perspective

A first step in reflecting on the relationships between objectivity and expert judgment consists in problematizing objectivity itself. History is a powerful resource in this regard because it helps us to situate a version of objectivity that we might consider straightforward. Lorraine Daston's [START_REF] Daston | The Moral Economy of Science[END_REF] and Peter Galison's (1992, 2006) works on the history of scientific images have demonstrated that objectivity has a history. They analyze the historical evolution of scientific atlases in Western countries, covering various scientific fields, including botany, biology, paleontology, and astronomy. They show that the quality of the scientific image as a convincing representation of reality has been diversely evaluated over time. Early scientific images were the product of individual craftsmanship, and the outcome of the ability of an individual to correct direct observations, complement them with additional elements, or combine several of them to produce a fictitious "type." Daston and Galison then document the gradual emergence, in the nineteenth century, of what they call "mechanical objectivity." Whereas the earlier understandings of objectivity associated the production of the scientific image with the personal intervention of the scientist, mechanical objectivity supposes that the individuality of the scientist can be erased, so that scientific representation is only obtained by mechanical means. The emergence of mechanical objectivity, in Daston and Galison's account, is directly linked to the growing importance of technical instruments in scientific practice. It means that scientific images are expected to be unmitigated reflections of a natural reality on which the individuality of the observer is not expected to act. Although mechanical objectivity has been dominant since the nineteenth century, it can be contrasted with contemporary scientific disciplines that require the active intervention of the individual scientist in the production of representations of nature. Nanotechnology, for instance, is a domain where the scientist's manipulation of atoms is a way of both learning about physical laws and making new properties emerge. In this case, objectivity is not only mechanical but also relies on the personal intervention of a scientist who seeks to obtain original physical features for future practical applications, if not economic gain. The history of objectivity is a crucial element in our reflection on objectivity and expert judgment. First, it shows that defining good practices for objectivity implies a set of expectations about scientific selves. Mechanical objectivity is based on a series of hypotheses about how the scientist is expected to behave. It cannot exist without an understanding of the subjectivity of the individual scientist, defined precisely by his or her ability to disappear behind a neutral instrument that will provide a faithful representation of nature uncorrupted by human intervention.
The "moral economy of science" [START_REF] Daston | The Moral Economy of Science[END_REF] that goes with mechanical objectivity is a kind of asceticism, requiring the scientist to make an abstraction of the mundane contingency that might corrupt the work of the instrument. In doing so, it also introduces expectations about the audience for the scientific image, who will then be required to interpret the image based on professional knowledge. Along with a scientific self in charge of producing images go other imaginations of individual scientists, tasked with mustering their own professional abilities to read information that is inaccessible to lay people. What this shows is that objectivity is not produced in spite of expert judgment but requires particular forms of expert judgment. A second significant contribution of the historical works on objectivity is that they situate an understanding of objectivity that has become dominant in contemporary liberal democracies. Philosopher Thomas Nagel [START_REF] Nagel | The View from Nowhere[END_REF] spoke of the "view from nowhere" that would characterize objectivity. He wrote: "A view or form of thought is more objective than another if it relies less on the specifics of the individual's makeup and position in the world, or on the character of the particular type of creature he is" [START_REF]See, for instance, among others in a prolific literature[END_REF]. From there, Nagel could then consider that "the standpoint of morality is more objective than that of private life, but less objective than the standpoint of physics" [START_REF]See, for instance, among others in a prolific literature[END_REF]. The "view from nowhere" can then be considered as a condition for a particular kind of objectivity, namely, mechanical objectivity. The historical situatedness of mechanical objectivity also suggests exploring the material conditions under which it is possible to craft it. Daston and Galison's works on scientific instruments can be related to a rich landscape of STS studies of scientific practices that have examined how the circulation and the standardization of instruments result in the production of the view from nowhere. Thus, historian of science Ted Porter (1993) spoke of "a 'kind of objectivity' that is more nearly identical to impersonality, or standardization" (89; see also [START_REF] Latour | Visualisation and Cognition: Drawing Things Together[END_REF]), one produced by the construction of standardized instruments. The dominant understanding of objectivity has a history, and requires active work to be produced. How it translates into the world of expert advice is the question we will now examine, by extending these reflections to institutional settings.

Objectivity in Scientific Institutions

The historical and sociological works about objectivity have illuminated the tight connection between the making of objective knowledge and the construction of the scientific self. Thinking about expertise requires adding another dimension, though. The history of science has shown that the production of facts relies not only on material and literary technologies, but also on social technologies.
Shapin and Schaffer's seminal study of the birth of experimental practice in seventeenth-century England [START_REF] Shapin | Leviathan and the Air Pump : Hobbes, Boyle, and the experimental life[END_REF] has shown that when Robert Boyle invented a set of material practices around such instruments as the air pump and a type of experimental discourse, he also defined a social organization whereby only certain individuals were able to act as witnesses in charge of attesting experimental results. This historical work has an important consequence for our reflection, namely, that expertise necessarily ties together the problem of scientific objectivity with the social organization of the institutions in charge of delivering knowledge. We can now develop our considerations about mechanical objectivity and the view from nowhere by examining the institutional work they require. What historical studies such as Shapin and Schaffer's suggest is that boundary work is one such technique. Ever since sociologist Thomas Gieryn [START_REF] Gieryn | Boundary-Work and the Demarcation of Science from Non-science: Strains and Interests in Professional Ideologies of Scientists[END_REF] directed analytical attention toward the work needed to differentiate "science" from "non-science," empirical studies have illuminated the work of the institutions that are expected to ensure that this boundary is well maintained. Among these institutions are the scientific bodies in charge of regulating scientific publication. Thus, one can consider peer reviewing as a social technology in charge of delimiting what counts as knowledge. This social technology, like Boyle's process of selecting who can be a witness in charge of evaluating scientific experiments, relies on a definition of who is authorized to say what counts as knowledge. The recent history of the practice of anonymity in scientific publications is a fascinating lens through which to not only examine the empirical practice of peer reviewing, but also, and more importantly for our concern here, to discuss how the institutions of scientific publishing articulate the production of objectivity with the practices of expert judgment. David Pontille and Didier Torny [START_REF] Pontille | The Blind Shall See! The Question of Anonymity in Journal Peer Review[END_REF] (2015) have shown that anonymity, particularly under its "double blind" guise, is a relatively recent invention, marked by pervasive issues about who should be "blind," and under what conditions. Pontille and Torny's works discuss the various approaches used in different scientific journals, as well as recent episodes that mark a reconfiguration of the sources of scientific objectivity and the practice of expert judgment. One of these episodes is the case of Donna Haraway, who chose to reveal her identity as a reviewer for a paper in Social Studies of Science, and was then quoted by name in the acknowledgments. Pontille and Torny note that Haraway, the author of "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective" [START_REF] Haraway | Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective[END_REF] and a critic of objectivity imagined in the terms of a universal category, was, indeed, a perfect advocate for a situated expert judgment, fully embodied in the individual person of the known reviewer.
Other telling episodes are provided by academic journals in economics, which publish papers that have already circulated widely as working papers or conference papers, and which use metrics of circulation, readership, and popularity as a basis for granting publication. While the Haraway case is one of the resingularization of the universal voice of the blind expert, the latter is one of the extension of the community of peers to a wide and not pre-circumscribed audience.

From Scientific Institutions to Expert Institutions

The scholarly analysis of peer reviewing extends the analysis of objectivity and expert judgment to the institutional organizations expected to manufacture objectivity. It has the interest of explicitly thematizing the role of expert judgment. Here, the "expert" is the reviewer in charge of evaluating the scientific value of the paper. He or she might be anonymous or a known contributor, a member of a delimited discipline or of an extended community of interested people. His or her work is tied to an institution expected to maintain boundaries between what is scientific and what is not, and between who can exercise scientific judgment and who cannot. How these operations are conducted in some cases directly echoes the view from nowhere, and one can situate the conditions of anonymity in this framework. In other cases, the view of the expert is expected to be situated, either in a known individual (as in the Haraway example) or in a broader collective (as in that of the economics journals). In all cases, institutional rules tie together the practices of objectivity and the definition of the appropriate expert judgment. The operations of boundary-making are remarkably similar in the institutions of expertise that are the main focus of this chapter, namely, the public bodies in charge of providing expert advice for decision-making purposes, or "public expertise." These institutions, like those discussed earlier, tie together the production of objectivity with the practices of expert judgment. But they add another crucial element to this already complex mix, namely, the expected legitimacy of the public institutions in charge of providing advice for decision-making purposes. When we shift our analytical focus from scientific institutions (such as those related to peer reviewing in scientific journals) to policy ones, the issue of political legitimacy becomes crucial. One of the main results of STS in the analysis of public expertise has been to theorize the joint production of scientific objectivity and political legitimacy. Sheila Jasanoff [START_REF] Jasanoff | Judgment under Siege: The Three-Body Problem of Expert Legitimacy[END_REF] has written extensively about expert advice in policy settings, and has reflected on what she calls the "three-body problem" of expert legitimacy. Jasanoff explains that the legitimacy of expertise, in the eyes of decision-makers and the wider public expected to trust it, relies on three different kinds of "bodies." It needs a consistent "body of knowledge," to be used by the "body of the expert" as a human being. One can understand the view from nowhere in the context of this dual requirement: here, the body of the expert has to disappear for the body of knowledge to be used in acceptable ways. She also underlines the importance of the third dimension, namely, the "public body" of the institutions in charge of providing expert advice.
Thus, if the view from nowhere is seen as a desirable basis for public expertise, then it requires corresponding public institutions. The American institutions of public expertise are good illustrations of this position, and of its associated tensions (Jasanoff 1990; [START_REF] Jasanoff | The practices of objectivity in regulatory science[END_REF]). They rely on operations of boundary-making between what is expected to be the domain of expert advice (supposedly purely scientific) and what is expected to be the domain of policymaking [START_REF] Jasanoff | Contested Boundaries in Policy-Relevant Science[END_REF]. A telling illustration of this process is provided by the case of the public presentations of reports written by the US National Academy of Sciences. STS scholar Steve Hilgartner [START_REF] Hilgartner | Science on Stage: Expert Advice as Public Drama[END_REF] has shown that maintaining a boundary between what scientists do behind closed doors and what is presented to the public is a crucial operation for the academy, seen as a condition for producing objective work and legitimate advice. The diagnosis of the pervasiveness of the view from nowhere in the American regulatory system can be nuanced when considering the practice of expertise in the court system. American courts require each party to summon its experts; these experts are then tied to the interests of the party that brings them in. The confrontation of expertise here is about who can produce facts before the court, which is expected to side with science, rules (see, e.g., [START_REF] Jasanoff | The Eye of Everyman: Witnessing DNA in the Simpson Trial[END_REF]). Legal scholars have noted the specificity of American courts, where the adversarial system of expert witnessing is accompanied by "a special penchant for fact finding," as opposed to other legal systems in which "judges are more willing to recognize the limits of factfinding, using presumptions when necessary to bridge the gaps in the evidence."1

A Variety of Institutional Constructs

The American situation provides a telling illustration of how the view from nowhere is institutionalized. It is one particular solution to the three-body problem of expertise, and not necessarily the only one. One can, indeed, compare it to the case of other public institutions of expertise in national and international contexts. Sheila Jasanoff's [START_REF] Jasanoff | Judgment under Siege: The Three-Body Problem of Expert Legitimacy[END_REF] comparative study of biotechnology policy has analyzed the British and German cases. In the United Kingdom, a public demonstration conducted by a known professional appears to be an essential condition for claims to credibility. In Germany, collecting representative viewpoints from various social actors proved to be crucial in the production of expert advice. Instead of the desirable view from nowhere, the British and German cases suggest that a view from "somewhere" or a view from "anywhere" might be a basis for public expertise. These examples are useful for our reflections here because they force us to theorize objectivity in other terms than those of the view from nowhere. The British and German public institutions of expertise show that public facts can be grounded in the judgment of a known individual or in the representations of social groups. In both cases, objectivity is manufactured by known and active human subjects. One can then contrast the view from nowhere with other approaches to objectivity in public institutions.
The British "view from somewhere" and the German "view from anywhere" are two examples of these other approaches, but there is no reason to limit the landscape of possible articulations between objectivity and expert judgment. One can extend this analytical thread by examining international organizations. Some of them adopt the discursive and institutional practices of the view from nowhere. The World Trade Organization (WTO) and the OECD, for example, strategically problematize the conditions of legitimacy of the expertise they produce by drawing rigorous boundaries between international scientific expertise and the national regulatory choices of sovereign member countries ([START_REF] Bonneuil | How Does the World Trade Organization Know? The Mobilization and Staging of Scientific Expertise in the GMO Trade Dispute[END_REF] on the WTO; Laurent 2016a on the OECD). By contrast, the Intergovernmental Panel on Climate Change (IPCC) is a hybrid institution; it is expected to provide scientific knowledge while serving as an arena for international negotiations. Doing so relies on a complex organization whereby scientific and diplomatic operations are carefully distributed [START_REF] Beck | Moving beyond the Linear Model of Expertise? IPCC and the Test of Adaptation[END_REF][START_REF] Miller | Hybrid Management: Boundary Organizations, Science Policy, and Environmental Governance in the Climate Regime[END_REF]. The example of the IPCC shows that international organizations may favor procedural approaches to define the conditions under which objective knowledge can be produced and under which experts are expected to behave. Alberto Cambrosio and Peter Keating speak of "regulatory objectivity" to refer to situations in which public and private institutions need to agree on the procedures according to which various regulatory entities can be crafted. Regulatory objectivity "consistently results in the production of conventions, sometimes tacit and unintentional but most often arrived at through concerted programs of collective action" (Cambrosio et al. 2006, 190). Describing various standardization and regulatory interventions related to biomedicine, Cambrosio and Keating analyze the ways in which public and private actors coordinate with each other to produce procedural instruments ("conventions" or "protocols") that allow them to stabilize the use of technological tools that might otherwise vary across the local sites where they are applied. The notion of "regulatory objectivity" points to an institutional configuration whereby objectivity and expert judgment are articulated through a set of agreed principles that provide experts with common references on which to base their actions. The diversity of the institutions in charge of providing expert advice is not only about organizational choices. It also points to the plurality of approaches used to define what counts as credible knowledge and legitimate policy. These approaches can be characterized as "institutionalized practices by which members of a given society test and deploy knowledge claims used as a basis for making collective choices," or, in Sheila Jasanoff's (2005) terms, "civic epistemology" (255). The term "civic epistemology" can be read as a proposition for theorizing the articulation between objectivity and expert judgment in public institutions.
Examining various civic epistemologies in national or international contexts, then, shows that the role of the public institutions of expertise is less to tame subjective expert judgment for the sake of objectivity (as if the two were opposed) than to solidify practices of defining who the experts should be and how they should behave.

Cracks in the Public Institutions of Expertise

The contrasts I just sketched among several civic epistemologies might point to an overall landscape of geographical zones, neatly distinguished according to how they define the sources of the objectivity and legitimacy of expert advice. The situation, however, is less stable, and the challenges for the institutional production of expert advice are numerous. Some of these challenges can be situated in the institutional constructs described above. Thus, the American public bodies have often struggled to maintain the boundary between science and policy. As soon as the boundary between risk assessment (i.e., the scientific phase) and risk management (i.e., the policy phase) was affirmed as a necessary basis for producing credible expert advice, particularly in the document that became known as the Red Book (National Research Council 1983), it was also nuanced as necessarily porous in practice (Jasanoff 1990). Accordingly, controversies in the American institutional context revolve around the possibilities of producing expert advice seen as detached from political bias. A telling illustration of the dynamics of these controversies, and of their institutional consequences, is that of the Office of Technology Assessment (OTA), as described by political scientist Bruce Bimber [START_REF] Bimber | The Politics of Expertise in Congress: The Rise and Fall of the Office of Technology Assessment[END_REF]. Created in 1972 and closed in 1995, the OTA had a short history marked by pervasive controversies about its alleged political bias. Successive institutional reforms, intended to ensure that the office would be seen as a neutral provider of expert advice, established a firm boundary between the OTA's contributions and policy decisions. Eventually, in 1995, as the newly elected Republican majority in Congress was looking for ways to cut the budget, it could argue that no policy use could be identified for the OTA. In other institutional contexts, controversies about public expertise might take a different form. In the United Kingdom, for instance, episodes when known professionals fail to convince the public of the value of their knowledge claims can be read as failures to stabilize institutions of public expertise that give so much weight to the intervention of the individual and known public expert [START_REF] Jasanoff | Judgment under Siege: The Three-Body Problem of Expert Legitimacy[END_REF]. Other difficulties arise in sites where different civic epistemologies might clash. This is especially the case in international organizations, where the oppositions between member countries are arbitrated in ways that might favor one civic epistemology over others. That the WTO tends to reason in the terms of the view from nowhere makes it more difficult for European countries to make their position appear to be objective [START_REF] Jasanoff | The practices of objectivity in regulatory science[END_REF][START_REF] Winickoff | Adjudicating the GM Food Wars: Science, Risk, and Democracy in World Trade Law[END_REF].
The framing of the OECD reports about science policy in terms of international expert advice that is neatly distinguished from national regulatory choices makes it impossible to envision new risk-governance instruments, such as public engagement, in terms that would significantly transform the relationships between science and society (Laurent 2016a). Less described in the STS literature are current situations where the very terms under which public expertise is expected to be produced are questioned. The French bodies of public expertise provide an illustration of one such situation. Historically marked by the crucial role of the public expert able to manipulate technical tools and trained in state-controlled grandes écoles [START_REF] Porter | Objectivity and Authority: How French Engineers Reduced Public Utility to Numbers[END_REF], French public expertise now faces challenges about its ability to include unruly people and objects ([START_REF] Joly | Beyond the French 'Technocratic Regime'? Transformations of the Use of Scientific Expertise for Public Decisions[END_REF]; Laurent 2016b; [START_REF] Laurent | Democratic Experiments: Problematizing Nanotechnology and Democracy in Europe and the United States[END_REF]). Recent debates about technological programs, such as biotechnology, nanotechnology, and synthetic biology, have seen attempts by the French public bodies of expertise to rethink the terms under which public expertise is crafted and deemed legitimate (or, in Jasanoff's terms, its civic epistemology). Public debates have been organized to make dialogue between public experts and various concerned groups possible, and regulatory decisions have been made to allow the public administration to characterize technical and legal uncertainties about objects such as nanomaterials or synthetic organisms. These initiatives are not consensual, and the new missions undertaken by the French public experts are far from clear. Political scientists and practitioners have identified institutional weaknesses in the ability of the French public institutions to meet their stated objectives of governing uncertain risks and ensuring public participation [START_REF] Benamouzig | Administrer un monde incertain: Les nouvelles bureaucraties techniques. Le cas des agences sanitaires en France[END_REF][START_REF] Dab | Agir face aux risques sanitaires[END_REF]. This shows that the integration of new publics and objects into the perimeter of French public expertise is still very much in transition. This situation of transition in France illustrates the new instabilities that institutions of public expertise face, and which have accelerated with the help of digital technologies. These instabilities show a current lack of institutions able to stabilize the conditions under which expert knowledge can be considered acceptable. The emergence of individual skepticism channeled by social media is often read as a threat to the existing expertise institutions. In this case, the current unease about the uncontrolled circulation of information on social media shows the consequences when institutions meant to stabilize the criteria for granting credibility are lacking. Symmetrically, digital technologies are often claimed to be resources for crafting new technical tools for ensuring public objectivity.
A good illustration is how Facebook refers to artificial intelligence as the solution to eliminate fake news.2 Here again, a crucial issue, though one not often made explicit, is the absence of institutions that would ensure that what Facebook does is appropriately kept in check. At this stage in our reflection, we cannot pretend that a call for objectivity alone could solve the current problems that public expertise faces. It is not that objectivity is not worth pursuing, or not useful as a common reference point for public discourse. But a simple call for objectivity has little chance of producing the subtle constructs that are necessary to stabilize the public institutions of expertise. Because the terms under which objectivity should be produced are situated in institutional contexts, there is institutional work to undertake if the production of objective knowledge is to be reimagined. There is a real-world laboratory in which to explore both the challenges of manufacturing institutions for expert advice and the complexity of the allure of the unproblematized reference to objectivity. This real-world laboratory is that of the European institutions, where the question of the appropriate institutional format for public expertise has been debated for years, and where it is still far from solved.

A Laboratory for Objectivity and Expert Judgment: The European Institutions of Public Expertise

European Expertise: Objectivity and the Representation of Interests

The European project was initially, and still is, an economic one, so much so that legal scholars speak of the unwritten "economic constitution" of the European institutions, whereby the source of political and legal legitimacy is the construction of the common market, and the imagined beneficiary of the European project is an economic agent, either a consumer being offered a variety of choices at reasonable prices, or a producer free to engage in business activities across the member states [START_REF] Streit | The Economic Constitution of the European Community: From 'Rome' to 'Maastricht[END_REF]. The economic constitution of the European Union acquired a new layer of meaning with the addition of the Monetary Union. It should not, then, be a surprise that the European economic expertise produced by the European Central Bank has adopted the view from nowhere [START_REF] Hall | Mixed Signals: Central Bank Independence, Coordinated Wage Bargaining, and European Monetary Union[END_REF][START_REF] Mcnamara | Rational Fictions: Central Bank Independence and the Social Logic of Delegation[END_REF][START_REF] Vauchez | Democratizing Europe[END_REF]. Scientific expertise is an entirely different story, though. Scientific expertise started to become a European concern when the construction of the single market in the 1980s made the harmonization of consumer goods a central European objective. After the 1986 Single European Act, health and safety matters became part of the scope of the European competences. The 1997 Amsterdam Treaty then asked the European Commission to "take as a base a high level of protection, taking account in particular of any new development based on scientific facts" in domains related to "health, safety, environmental protection and consumer protection."3 In many respects, scientific expertise is now everywhere in Europe, as the rich scholarly literature on the topic shows.
The conduct of European regulation has been characterized by a growing mobilization of scientific advice via committees that are expected to provide technical information and expertise, particularly in the health and safety sectors [START_REF] Demortain | Standards of Scientific Advice: Risk Analysis and the Formation of the European Food Safety Authority[END_REF][START_REF] Vos | The Rise of Committees[END_REF]; and networks of experts based in national institutions routinely exchange information and thereby take part in shaping European regulations [START_REF] Dehousse | Regulation by Networks in the European Community: The Role of European Agencies[END_REF]. Political scientists have produced detailed analyses of the composition of the European expert groups and the way they operate. They have talked about "technicization" or "depoliticization" to point to the mechanisms whereby large-scale policy issues are turned into matters of expert examination by groups that are, if not entirely secluded from public view, then at least extremely difficult for members of nongovernmental organizations or other civil society groups to access [START_REF] Robert | Les groupes d'experts dans le gouvernement de l'Union européenne[END_REF][START_REF] Radaelli | The Public Policy of the European Union: Whither Politics of Expertise?[END_REF]. As these expert groups strengthen the executive power of the European Commission at the expense, so the analyses show, of political discussions taking place in institutions such as the European Parliament, national parliaments, or in publicly held negotiation arenas, they may well contribute to the Union's democratic deficit and the prevalence of technocracy. The pervasiveness of scientific expertise in the European institutions can hardly be described in the terms of the view from nowhere, though. In the practice of European expertise, expert judgment is directly tied to the political representation of the interests of the actors involved. A prime reason for this is that the production of European expertise is tightly and explicitly articulated with lobbying activities in Brussels. Many expert groups are also supposed to be platforms for negotiating with stakeholders [START_REF] Saurugger | L'expertise: Un mode de participation des groupes d'intérêt au processus décisionnel communautaire[END_REF]. If expertise is everywhere in Europe, it does not usually result in a single authoritative voice of the kind that would originate from a well-defined expertise body subsuming the contributions of individual experts under a common reference to objective science. Rather, the production of expertise is distributed in many places, which also serve as sites for collective bargaining. This articulation between objectivity and the representation of interests has not been fundamentally transformed by the growing importance of the European technical agencies. The independence of the European Central Bank is very peculiar, in fact, when compared with other EU agencies and authorities that have been created since the 1990s to provide independent scientific advice to European institutions, above all, the European Commission. Consider, for instance, the case of pharmaceutical products. This has traditionally been a difficult domain for market harmonization, as already recognized in the 1985 White Paper on Completing the Internal Market (European Commission, 1985). 
Since then, many attempts have been made to harmonize the pharmaceuticals market. The "multi-state approach," whereby each Member State recognizes the decisions taken elsewhere by virtue of a principle of "mutual recognition," was deemed unsatisfactory for market harmonization [START_REF] Orzack | Pharmaceutical Regulation in the European Community: Barriers to Single Market Integration[END_REF]. As part of this ongoing attempt at harmonization, a European expertise agency for pharmaceuticals, the European Agency for the Evaluation of Medicinal Products, was created in 1995 and renamed the European Medicines Agency (EMA) in 2004. The EMA has not approached harmonization as requiring the sudden replacement of national expert bodies with a centralized European epistemic authority. Instead, the agency introduced a centralized authorization procedure focused on innovative medicine products that would not replace the whole range of activities undertaken by national expert bodies,4 and the European approach is primarily based on coordination between member states and the European level for deciding on the authorization of medicines [START_REF] Groenleer | The Actual Practice of Agency Autonomy: Tracing the Developmental Trajectories of the European Medicines Agency and the European Food Safety Authority[END_REF][START_REF] Orzack | Pharmaceutical Regulation in the European Community: Barriers to Single Market Integration[END_REF][START_REF] Permanand | Constitutional Asymmetry and Pharmaceutical Policy-Making in the European Union[END_REF]. The EMA illustrates the European approach to public expertise, characterized by a distribution of action among experts tied to their national origins and by institutional coordination between European and national expert bodies. One sees this approach in other European agencies, such as the European Chemicals Agency (ECHA). The work of the ECHA has been described, as the EMA's could be, as an illustration of epistemic subsidiarity ([START_REF] Jasanoff | Epistemic Subsidiarity: Coexistence, Cosmopolitanism, Constitutionalism[END_REF]; Boullier 2016), that is, an institutional arrangement whereby the production of expertise is the outcome of carefully orchestrated exchanges between European and national sources of expertise. Epistemic subsidiarity is a useful way to characterize the articulation between objectivity and expert judgment that is seen in the European institutions of expertise. Here, objectivity is the outcome of coordinated operations, related to both science and politics. Many experts are involved. Some come from national public bodies; others, from private companies or civil society organizations that participate in the Commission's technical working groups. Eventually, the role of the European expert working in agencies such as the EMA or the ECHA is to orchestrate the distribution of roles and the circulation of knowledge. The European expert uses procedural or technical tools to assess knowledge claims (such as those presented by companies wishing to register chemicals at ECHA) but also needs to coordinate with evaluations undertaken at national levels or in private organizations. In that context, attempts to mechanize expert judgment, for instance by using models, require that European experts reopen technical black boxes and use their personal experience [START_REF] Laurent | A Situated Expert Judgment: QSAR Models and Transparency in the European Regulation of Chemicals[END_REF].
These attempts do not signal an institutionalization of mechanical objectivity but, rather, an extension of the coordinating role of the European expert.

The Instability of Epistemic Subsidiarity

The landscape of European expertise that appears through the numerous expert committees of the European Commission and technical agencies such as EMA and ECHA ties the production of objective knowledge to the negotiation between national and European, public and private interests. In this context, expert judgment is not expected to ensure a view from nowhere but, rather, a distributed gaze, itself a product of epistemic and political practices. This European approach to expertise faces pervasive issues, including its problematic legitimacy and a persistent uncertainty about its institutional format. First, that European expertise relies on the articulation between knowledge production and political negotiation does not imply that anyone can participate in the production of European expertise. Rather than publicly visible deliberative bodies, European expert groups are sites marked by unequal powers of influence, as shown by the numerous studies that have examined lobbying practices in Europe.5 As such, European expertise is characterized by a pervasive legitimacy issue. This issue relates to the management of the relationships between European regulatory decisions and the interests of private economic actors or individual member states. For instance, a regular source of controversy about EMA has been the close relations between the agency and the pharmaceutical industry.6 Second, the institutional organization of European expertise is far from stable. A sign of this instability is the profusion of scholarly works about the institutional nature of European expertise, particularly as it is produced by European agencies. Since the 1990s, scholars of European integration have discussed the form of regulation "by information" that European agencies propose (e.g., [START_REF] Majone | The New European Agencies: Regulation by Information[END_REF]), how these agencies are controlled (e.g., Dehousse 2008), the way they emerge out of networks of European experts and function in conjunction with them (e.g., Borras et al. 2007; [START_REF] Chiti | Emergence of a Community Administration: The Case of European Agencies[END_REF][START_REF] Levi-Faur | Regulatory Networks and Regulatory Agencification: Towards a Single European Regulatory Space[END_REF]), and how certain modes of organization circulate from one agency to the next (e.g., [START_REF] Demortain | Institutional Polymorphism: The Designing of the European Food Safety Authority with Regard to the European Medicines Agency[END_REF]). The problematic institutional nature of European expertise is not merely an academic issue. It also manifests itself in numerous public controversies about seemingly arcane bureaucratic evolutions inside the European Commission. For instance, the relevance of the "science adviser" to the president of the European Commission, a position created in 2012 by José Manuel Barroso, was vigorously debated. NGOs argued that the position added a layer of opacity to an already complex decision-making process, which, though allegedly aimed at ensuring that European policy was "evidence-based," gave industrial interests privileged access to the president of the Commission (Parr 2015).
Others saw the NGOs' position as merely a reaction against the alleged pro-GMO position of Barroso's science adviser, Anne Glover.7 Eventually, Jean-Claude Juncker scrapped the position, to the dismay of science-policy scholars, who had hoped to turn it into a vehicle for renewed dialogue about the relationships between science and policy in Europe.8 This episode is revealing. It shows that if European expertise proposes an original articulation between objectivity and expert judgment, this proposition is not clearly stabilized in institutional terms. This instability also manifests itself in international settings. A good illustration here is the case of GMOs. The ban of certain GMOs in Europe was contested at the WTO by Argentina, Canada, and the United States [START_REF] Winickoff | Adjudicating the GM Food Wars: Science, Risk, and Democracy in World Trade Law[END_REF]. The opponents of the European regulation believed that the evaluation of the risks should be the product of a universal science expected to serve as a judge of international trade conflicts. The ban, for them, was nothing but a political move meant solely to protect the interests of European farmers at the expense of international trade. As STS scholars have shown, the challengers of the European ban imagined objectivity in the terms of the view from nowhere, as the outcome of mechanistic processes able to eliminate uncertainty and stabilize a technical assessment of risks, free of political considerations [START_REF] Winickoff | Adjudicating the GM Food Wars: Science, Risk, and Democracy in World Trade Law[END_REF]. By contrast, one could have framed the European ban as an attempt to deal with pervasive uncertainties about both the scientific evaluation of GMOs and the social expectations about them. That Argentina, Canada, and the United States won their case against Europe is a sign that this framing failed to be articulated in convincing ways.9

A European View from Nowhere?

The proposition for a European expertise based on an original articulation between objectivity and expert judgment is barely stable. In that context, the reference to a form of public expertise based on the uniqueness of the voice of objectivity, expected to be free of any subjective influence (or, in other words, a variation on the view from nowhere), has often appealed to European actors. Consider, for instance, the case of the European Food Safety Authority (EFSA). EFSA was created in 2002 as an institutional response to the BSE (bovine spongiform encephalopathy) "mad cow" crisis.10 The crisis had prompted far-ranging reflections about how the European Commission had based its action on scientific facts, and how it had used scientific expertise. The 2000 White Paper on Food Safety (European Commission, 2000) called for the creation of a European expert authority on food safety to prevent another crisis like the mad cow scandal by ensuring that food products were properly assessed before being circulated on the European market. The reorganization of European expertise about food safety eventually led to the creation of EFSA, a centralized European expert body that would identify the food products that were safe for consumption across Europe [START_REF] Vos | EU Food Safety Regulation in the Aftermath of the BSE Crisis[END_REF].
[START_REF]through mutual recognition and comitology systems, and no centralized body existed[END_REF] The new agency would isolate European decision-making from the economic interests of particular member states or private actors. Because member states were said to have influenced the delayed reaction to the BSE crisis,12 the EFSA would be composed of individual experts, and not based on national representation.13 EFSA, contrary to propositions that saw a need for the agency to be granted regulatory power, was conceived as a public body whose power would be restricted to "risk assessment" [START_REF] Demortain | Standards of Scientific Advice: Risk Analysis and the Formation of the European Food Safety Authority[END_REF]. EFSA, in short, would be the locus of a renewed European objectivity on food safety, based on the ability to independently assess food products. The new agency was to "restore trust" in the European institutions' ability to deal with technical risks. Yet the Authority has been the object of much criticism, pertaining to the quality of the scientific advice it provides, the transparency of its functioning, and its independence from special interests. Criticisms have been voiced by NGOs14 about EFSA's proximity to industrial interests. The value of EFSA's advice on GMOs has been heavily contested, as the standardized tests it used have themselves been controversial [START_REF] Demortain | Regulatory Toxicology in Controversy[END_REF]. If EFSA's objective was to "restore trust," it fell well short of that goal. EFSA introduced several changes in response to the criticism. It asked its experts to disclose their financial and institutional ties and, in 2012, launched a "glass house" policy of opening scientific meetings to the public. It introduced a "stakeholder consultative platform" in 2005, tasked to "assist the Authority in developing its overall relations and policy with regard to 'civil society stakeholders,'" and launched several "public consultations" (Dreyer and Renn 2013, 332). This evolution is consistent with a growing discourse of the "democratization of expertise" adopted by the European Commission in the 2000s [START_REF] Moodie | For the Sake of Democracy? The European Commission's Justifications for Democratising Expertise[END_REF]. But it did not free EFSA from public controversies. Endocrine disruptors are one recent instance of a controversial domain in which EFSA's contributions have been severely criticized by environmental organizations ([START_REF] Bozzini | Open Controversies: Bees' Health, Glyphosate and Endocrine Disruption[END_REF]; Horel 2016). Construed as an entity that could adjudicate controversies thanks to expert knowledge based on the view from nowhere, EFSA has itself become a topic of controversy. The difficult construction of European expertise through agencies such as EFSA is telling. It can be read as yet another example of contested science and policy boundary-making in public institutions [START_REF] Jasanoff | Contested Boundaries in Policy-Relevant Science[END_REF], rendered even more difficult by the dual objective of ensuring that science is both purified from political discussion and open to public participation.15 The creation of EFSA might be read as a reaction to the instability of epistemic subsidiarity, an attempt to centralize European expertise.
But instead of providing a single authoritative voice able to ensure the legitimacy of European decisions, EFSA has become perhaps the most visible illustration of the impossibility of basing European expertise on the view from nowhere.

A Path Forward for Public Expertise?

The European situation provides an original and unstable institutional configuration meant to produce public expertise. This configuration is based on epistemic subsidiarity. It does not separate the production of scientific advice from policymaking but ties them together. It has consequences for the definition of objectivity and expert judgment. Here, objectivity is inherently tied to regulatory objectives, on the one hand, and to the concerns and needs of the actors involved in its production, on the other. As such, it can be labeled an "interested objectivity." The expert judgment that participates in manufacturing interested objectivity is explicitly political, in that it serves both to produce technical advice and to represent interested parties, be they member states or concerned stakeholders. Getting back to the current difficulties that expertise faces, one might want to turn to the European situation to provide theoretical and practical elements for identifying what a path forward would be. The debates about European expertise, indeed, resonate with the current and more general crisis of expertise. The questions raised today are about who has the ability to be an expert, and what the bases for ensuring objectivity are. These questions underscore the political character of expertise by suggesting that it is either hopelessly biased or in need of being "freed" from politics. In Europe, what I have described here as interested objectivity can be seen as an attempt to define public expertise in explicitly political terms, for the sake of both robust technical advice and legitimate decision-making. Because the European context makes expertise a matter of both epistemic production and political negotiation, the question of how to organize public expertise is bound to receive sophisticated answers. Thus, configurations that are characterized by epistemic subsidiarity are based neither on an unlimited opening of the possibility for expertise production nor on a tight delimitation of expertise to technical means. Perhaps because of its originality, this approach faces pervasive instability, and is regularly confronted with the persistent allure of the view from nowhere, as the example of EFSA shows. There are two potential readings of this situation. The first one diagnoses a persistent failure to ensure that a true European expertise can convince member states, and possibly the European public at large, of its value. It sees a need to make yet other attempts to stabilize a centralized body of European expertise, which, at last, would be able to provide a unified voice of science. The second reading also identifies a failure, although not in the same terms (see, e.g., [START_REF] Carr | GM Food on Trial: Testing European Democracy[END_REF][START_REF] Jasanoff | Epistemic Subsidiarity: Coexistence, Cosmopolitanism, Constitutionalism[END_REF][START_REF] Wickson | The anglerfish deception: The light of proposed reform in the regulation of GM crops hides underlying problems in EU science and governance[END_REF]).
Often inspired by STS, this second reading sees epistemic subsidiarity as a way of recognizing that the production of expert advice is both a scientific process and a political process, which should more explicitly associate the exploration of scientific uncertainties with that of social concerns. In this reading, the specificities of European expertise are not to be erased but further cultivated. If we adopt this second reading, we are to consider that if there is a failure, it is related to the inability to publicly account for European expertise in ways that would convince international audiences (for instance, at the WTO) and European ones that it can be scientifically robust and politically legitimate. While the mechanism of European expertise suggests that original institutional constructs might produce objective expert advice and sound expert judgment, it also illustrates the amount of work needed to ensure that new propositions such as interested objectivity are both scientifically robust and politically legitimate. In Europe, this work implies correcting the asymmetries of access that make participating in regulatory circles far easier for skilled lobbyists representing corporate interests than for concerned environmental protection groups. But it also implies a more fundamental theoretical and institutional task, which pertains to the mode of scientific and political representation. There are resources in the science studies literature at this point, particularly Bruno Latour's (2004a, 2004b) discussions of "matters of concern" as potential entry points for rethinking the sources of scientific objectivity and democratic legitimacy. As their main activities all relate to technical entities, such as energy, chemicals or data, the European expertise institutions are already connected to the main public concerns of contemporary societies. As such, they might provide institutional paths for making interested objectivity a vehicle, if not for renewing the European project, at least for ensuring the scientific quality and the political legitimacy of expert advice. At this point, the failure of EFSA to provide a European view from nowhere is a forceful reminder of the limited value of calling for an unproblematized "objective expertise" to solve the issues faced by European expertise. By contrast, what the instability of European expertise and its contestations in international settings make visible is the dual necessity of an analytical repertoire and institutional support to ensure the scientific and political robustness of epistemic subsidiarity. Although this situation is specific to the European context, it can also help us understand the current difficulties of public expertise. As public expertise is contested on scientific and political grounds, the call for "objectivity" is tempting. What the European example suggests is that a reimagination of the institutional organization of expertise might be, if theoretically and practically more challenging, also more relevant to ensure the public credibility of expertise. Conclusion How to define the appropriate expert judgment in institutions that are in charge of producing objective facts for policymaking? This question seems to be particularly problematic as current challenges to the voices of official expertise often prompt public and private actors to call for "objective knowledge" and "trustful experts" without clarifying those terms.
The contemporary issues about objectivity and expert judgment are not qualitatively different from the problem of how public institutions of expertise ought to function, about which STS works offer crucial resources. These works have shown that the production of expert advice necessarily brings together knowledge production and legitimacy building. They have commented on the institutionalized practices whereby particular expert claims are considered trustworthy, or "civic epistemologies." They have illuminated the variety of civic epistemologies, and analyzed various sources of instability in the public institutions of expertise. Europe is a particularly interesting laboratory in which to reflect on these instabilities. How to organize European public expertise has been a topic of concern for years. On technical committees and in agencies such as EMA or ECHA, it originates from distributed processes whereby the production and use of knowledge are undertaken by member states and at the European level. This "epistemic subsidiarity" also means that negotiations with the involved stakeholders occur in processes that are expected to provide the European institutions with expert advice. In that context, experts come from national institutions, private organizations, and European bodies, and their judgment is tied to their positions. The difficulties in stabilizing the institutions of European expertise reveal both that sophisticated institutional constructs are possible and that their stabilization requires significant scientific and political investments. They signal a crucial need for inventing institutional formats, as well as analytical repertoires, that are able to account for practices of expertise that attempt to redefine the relationships between science and policy.
04114133
en
[ "phys.meca.mefl" ]
2024/03/04 16:41:24
2021
https://hal.inrae.fr/hal-04114133/file/236paperISPIV2021_Khojasteh.A.R_July082021.pdf
Ali Rahimi Khojasteh email: [email protected] Dominique Heitz Yin Yang Lionel Fiabane Particle position prediction based on Lagrangian coherency for flow over a cylinder in 4D-PTV Introduction This paper discusses a novel approach in time-resolved Particle Tracking Velocimetry (4D-PTV) to predict particle positions over space and time based on Lagrangian Coherent Structures (LCS). In the classic 4D-PTV algorithm, predicted particle positions are given to the optimisation process for further corrections. The optimisation can deal with slight deviations between the predicted position and the true position. However, the optimisation fails to find the true position if the deviation is large enough to yield multiple candidates for a single particle at t n+1. This shows the importance of having an appropriate prediction in dense and complex motions. The proposed idea starts from the argument that predictions in PTV techniques focus on a single particle individually, while this single particle is not acting alone [START_REF] Rahimi Khojasteh | Lagrangian Coherent Track Initialisation (LCTI)[END_REF]. We propose this approach to locally differentiate coherent and non-coherent motions of neighbour particles around a single particle position to improve prediction accuracy, as shown in Figure 1. On the other hand, the information of coherent particles should be shared with each neighbour particle to predict their behaviour accurately and avoid misprediction. The present study was designed as a complementary function for 4D-PTV algorithms such as STB [START_REF] Schanz | Shake-The-Box: Lagrangian particle tracking at high particle image densities[END_REF] and KLPT [START_REF] Yang | Kernelized Lagrangian Particle Tracking[END_REF]. Briefly, it is assumed that particle positions are known for n time steps (four to five). Afterwards, a mathematical prediction function is implemented to estimate particle positions for time step n + 1, followed by "shaking" and position refinement. It should be noted that the "shaking" process looks for a candidate true position very close to the predicted position. If misprediction happens, no matter how many times we perform "shaking", the true position is not achievable. This implies the importance of producing accurate predictions. This paper seeks to investigate the possibilities of improvements in motion estimation by adding meaningful physics into the prediction function. A simple prediction approach is a polynomial function, suggested by Schanz et al. [START_REF] Schanz | Shake The Box': A highly efficient and accurate Tomographic Particle Tracking Velocimetry (TOMO-PTV) method using prediction of particle positions[END_REF], resulting in reasonable predictions and 3D particle position reconstruction in simple flows [START_REF] Schanz | Shake The Box': A highly efficient and accurate Tomographic Particle Tracking Velocimetry (TOMO-PTV) method using prediction of particle positions[END_REF][START_REF] Schröder | Advances of PIV and 4D-PTV "shake-The-Box" for Turbulent Flow Analysis -the Flow over Periodic Hills[END_REF][START_REF] Schanz | Shake The Box' -a 4D PTV algorithm: Accurate and ghostless reconstruction of Lagrangian tracks in densely seeded flows[END_REF].
However, significant misprediction occurs for flows associated with complexities such as a high turbulence level, a high Reynolds number, and mixing [START_REF] Tan | An open-source Shakethe-Box method and its performance evaluation[END_REF]. In such conditions, even when the order of the polynomial predictor function is increased from 3 to 10 [START_REF] Tan | An open-source Shakethe-Box method and its performance evaluation[END_REF], mispredictions remain. The solution to this challenge is to implement optimal temporal filtering such as the Wiener filter, first examined in 4D-PTV experiments by Schröder et al. [START_REF] Schröder | Advances of PIV and 4D-PTV "shake-The-Box" for Turbulent Flow Analysis -the Flow over Periodic Hills[END_REF]. Since then, this concept has become standard in STB studies due to its high robustness and accurate motion estimations [START_REF] Schröder | Nearwall turbulence characterization using 4D-PTV Shake-The-Box[END_REF][START_REF] Tan | An open-source Shakethe-Box method and its performance evaluation[END_REF]. Figure 1: Particle prediction scenario from t n to t n+1: a) known particle positions from history, starting from t n-4 up to t n (particle size increases gradually to show time-step differences); b) the trajectory (golden line) obtained from filtered curve fitting of the known particle positions; c) prediction based on extrapolation of the fitted trajectory (red dashed line) from t n to t n+1; d) modified prediction (grey dashed line) using velocity and acceleration information of coherent particles in the neighbourhood of the target particle at t n. As mentioned, the Wiener filter showed robust behaviour in prediction for complex flows but still suffers under high motion gradients. This implies that the prediction function suffers from a lack of information to find true positions. It is worth mentioning that these prediction-based techniques rely on one particle individually, excluding it from its surroundings. All the information we know about an individual particle is its history. Even if we implement filtering and smoothing schemes, as STB does with the Wiener filter [START_REF] Schröder | Nearwall turbulence characterization using 4D-PTV Shake-The-Box[END_REF], our information is limited to the history of the target particle, ignoring that every particle is spatially and temporally coherent with a specific group of other particles following the same behaviour. This motivated us to take into account a group of coherent motions when predicting a single particle. We propose to locally determine the coherent and non-coherent particles during the trajectory procedure by using the Finite-Time Lyapunov Exponent (FTLE). More details of coherent motion detection are discussed in Section 2. After that, we address the prediction function with the minimisation approach in Section 3. In the following Sections 4 and 5, we study and evaluate our proposed technique using synthetic and experimental case studies of the wake over and behind a smooth cylinder at a Reynolds number equal to 3900. Coherent and non-coherent neighbours Every single particle is spatially and temporally coherent with a specific cluster of other particles following the same behaviour [START_REF] Rahimi Khojasteh | Lagrangian Coherent Track Initialisation (LCTI)[END_REF]. Figure 2 shows a schematic of coherent and non-coherent groups of neighbour particles in different colours evolving in time (t 1 to t 5).
A particle can spatially meet a group of other particles with which there is no coherency link. Many concepts are available to identify Lagrangian Coherent Structures (LCS), based on looking for separatrix lines or surfaces that divide the flow into different coherent regions [START_REF] Haller | Lagrangian coherent structures[END_REF]. We introduced such a concept for track initialisation in 4D-PTV studies, named Lagrangian Coherent Track Initialisation (LCTI), in synthetic flow configurations of the recent LPT Challenge [START_REF] Rahimi Khojasteh | Lagrangian Coherent Track Initialisation[END_REF] and for a real jet impingement experiment [START_REF] Rahimi Khojasteh | Lagrangian Coherent Track Initialisation (LCTI)[END_REF]. LCTI showed that the FTLE is locally applicable to surrounding sparse trajectories to determine coherent neighbours. To this end, we defined a local Eulerian frame around each particle, in which all neighbourhood particles must be classified as coherent or non-coherent with the target particle. This frame is fixed during a series of time steps, which provides an Eulerian view of the neighbourhood behaviour. Velocity values of the target particle are used to quantify the Eulerian frame size in each direction. If the 2D / 3D velocity values are equal in each direction, then the shape would be a circle / sphere around the target particle; directional gradients, however, adjust the shape (see Figure 1.d). All particles inside the Eulerian frame, in the same phase or with a phase delay, are considered neighbours. Figure 2: Schematic of particle trajectories in 2D pair vortices starting from t 1 to t 5: t 1) the target particle with coherent neighbour particles located in a clockwise vortex (golden cluster); non-coherent particles belong to different clusters; t 2) the target particle trajectory (golden line) approaching particles in the red cluster; t 3) the target particle separating from non-coherent particles in the red cluster and approaching particles in the grey cluster; t 4) separation of non-coherent particles in the grey cluster from the target particle; t 1 - t 5) full trajectory view of the target particle alongside the coherent particles in the golden cluster. By tracking each Lagrangian particle in the flow over a finite time (see Figure 2), we can compute the FTLE value based on neighbour trajectories as,

σ_{t0}^{t} = (1/|T|) λ_max(Δ) = (1/|T|) log( δx(t) / δx(t0) )   (1)

where σ_{t0}^{t} is the scalar FTLE value showing the amount of stretching over the interval time T = t - t0, and δx is the displacement of neighbour particles. λ_max is the maximum eigenvalue of the right Cauchy-Green deformation tensor [START_REF] Shadden | Definition and properties of Lagrangian coherent structures from finite-time Lyapunov exponents in two-dimensional aperiodic flows[END_REF] obtained from the displacement map. A lower FTLE value means a neighbouring particle is coherent and behaves in the same way as the target particle, spatially and over a specific temporal scale. As a result of this classification, the mean values, including velocity and acceleration, of the coherent neighbour particles can be superimposed (weighted averaged) on the target particle (see Figure 1). This physics-based information is added to the resulting prediction function. The prediction function of this study is a 3rd-order polynomial predictor that must satisfy the particle history and the additional coherent dynamics.
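As a concrete illustration of Eq. (1), the following minimal Python sketch classifies neighbours by their FTLE relative to a target trajectory. The array shapes and the threshold sigma_max are illustrative assumptions, not values from the study.

```python
import numpy as np

def ftle(neighbour_traj, target_traj, dt):
    """Scalar FTLE of Eq. (1): stretching of the separation delta_x between
    a neighbour and the target over the finite time window |T|."""
    d0 = np.linalg.norm(neighbour_traj[0] - target_traj[0])    # delta_x(t0)
    d1 = np.linalg.norm(neighbour_traj[-1] - target_traj[-1])  # delta_x(t)
    T = dt * (len(target_traj) - 1)                            # window length |T|
    return np.log(d1 / d0) / T

def coherent_mask(target_traj, neighbour_trajs, dt, sigma_max=0.5):
    """Low FTLE (little stretching) marks a neighbour as coherent."""
    sigmas = np.array([ftle(n, target_traj, dt) for n in neighbour_trajs])
    return sigmas < sigma_max, sigmas
```

The weighted averaging of the coherent neighbours' velocities and accelerations, used below in the prediction function, would then be restricted to the particles selected by this mask.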
Details of the prediction function are addressed in Section 3. Prediction Function A polynomial function is the simplest predictor that can be used in time-resolved particle tracking techniques. The polynomial coefficients must be determined optimally, by minimising the mean square error, such that the corresponding polynomial curve of order n best fits the given positions. This can be formalised as,

Σ_{j=n-m}^{n} Σ_{i=1}^{n+1} a_i t_j^{i-1} = y_{n-m,n},   (2)

where a_i are the unknown coefficients of the predictor function and y_{n-m,n} are the known positions of the last finite frames (in this study m = 4). Therefore, the least-squares cost function is,

J = (1/n) Σ_{i=n-m}^{n} (X_i - y_i)^2.   (3)

The coherent prediction function adds information obtained from temporally / locally spatially Lagrangian-coherent particles to form additional constraints in the polynomial cost function (Equation 3). In the worst-case scenario, where there is no coherent neighbour information, the prediction function is just a simple polynomial function without additional constraints. Each particle carries sets of information, including position and first and higher-order derivative values, assuming the positions of at least three time steps m are known. In this function, we impose the first and second-order derivatives of each coherent particle into the prediction function as below,

Σ_{i=2}^{n+1} a_{i,n+1} (i-1) t_n^{i-2} = ẏ_n^{LCS},
Σ_{i=3}^{n+1} a_{i,n+1} (i-1)(i-2) t_n^{i-3} = ÿ_n^{LCS}.   (4)

Therefore, each particle will end up with the weighted averages of the local coherent velocity and coherent acceleration values (ẏ^{LCS}, ÿ^{LCS}). In the present study, we take four time steps of particle history to minimise the cost function and then predict the next step. The first and second derivatives of all coherent particles are weighted averaged based on their FTLE level and their distance to the target particle. Two weighted-averaged values, velocity and acceleration, of each target particle can thus be obtained for the estimation. Therefore, the modified cost function (i.e. the coherent predictor) can be written as,

J = (1/n) Σ_{i=1}^{n} (X_i - y_i)^2 + (Ẋ_n - ẏ_n)^2 + (Ẍ_n - ÿ_n)^2.   (5)

The velocity constraint controls the direction of the prediction function, while in the case of high acceleration gradients, a second-order constraint is required to control the acceleration of the prediction. The solution of the cost function in Equation 5 is not only smooth with respect to the history of the target particle but also satisfies the local coherent first and second-order derivatives. We compared the performance of the coherent predictor with three other prediction functions, as listed in Table 1.

Table 1: Prediction function formulation
a) DNS predictor: no fit parameters or cost function; prediction: X_{n+1} = Ẋ_DNS · t_{n+1}
b) Polynomial predictor: fit: Σ_{j=n-m}^{n} Σ_{i=1}^{n+1} a_{i,j} t_j^{i-1} = y_{n-m,n}; cost: J = (1/n) Σ_{i=n-m}^{n} (X_i - y_i)^2; prediction: y_{n+1} = Σ_{i=1}^{n+1} a_{i,n+1} t_{n+1}^{i-1}
c) Wiener filter: fit: Σ_i w_i u_n = y_n; cost: J = (X_n - u_n^T w)^2; prediction: y_{n+1} = Σ_i w_i u_n
d) Coherent predictor: fit: the polynomial fit of b) with the derivative constraints of Eq. (4), Σ_{i=2}^{n+1} a_{i,n+1} (i-1) t_n^{i-2} = ẏ_n^{LCS} and Σ_{i=3}^{n+1} a_{i,n+1} (i-1)(i-2) t_n^{i-3} = ÿ_n^{LCS}; cost: J = (1/n) Σ_{i=1}^{n} (X_i - y_i)^2 + (Ẋ_n - ẏ_n)^2 + (Ẍ_n - ÿ_n)^2; prediction: y_{n+1} = Σ_{i=1}^{n+1} a_{i,n+1} t_{n+1}^{i-1}
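A minimal sketch of the coherent predictor of Eqs. (2)-(5) in one spatial coordinate is given below. Treating the velocity and acceleration constraints as extra least-squares rows reproduces the penalty form of Eq. (5); the FTLE- and distance-based weighting of the neighbours is omitted for brevity, so the inputs v_lcs and a_lcs are assumed to be precomputed weighted averages.

```python
import numpy as np

def coherent_predict(times, positions, v_lcs, a_lcs, t_next, order=3):
    """3rd-order polynomial prediction of the next position, softly
    constrained by the coherent-neighbour velocity and acceleration at t_n."""
    times = np.asarray(times, dtype=float)
    A = np.vander(times, order + 1, increasing=True)   # position rows, Eq. (2)
    tn = times[-1]
    # First-derivative row at t_n: coefficient of a_k is k * t_n^(k-1)
    dA = np.array([k * tn**(k - 1) if k > 0 else 0.0 for k in range(order + 1)])
    # Second-derivative row at t_n: coefficient of a_k is k*(k-1) * t_n^(k-2)
    ddA = np.array([k * (k - 1) * tn**(k - 2) if k > 1 else 0.0
                    for k in range(order + 1)])
    M = np.vstack([A, dA, ddA])                        # history + Eq. (4) rows
    rhs = np.concatenate([np.asarray(positions, float), [v_lcs, a_lcs]])
    coeffs, *_ = np.linalg.lstsq(M, rhs, rcond=None)   # minimises Eq. (5)
    return np.polyval(coeffs[::-1], t_next)            # position at t_{n+1}
```

Without the two derivative rows, the function reduces to the plain polynomial predictor of Eq. (2), which is the fallback when no coherent neighbour is found.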
This phase delay means that new tracked particles at time-step t n+1 are Method Fit parameters Cost function Prediction a) DNS predictor -- X n+1 = ẊDNS • t n+1 b) Polynomial predictor ∑ n j=n-∑ n+1 i=1 a i, j • t i-1 j = y n-,n J = 1 n ∑ n i=n-m (X i -y i ) 2 y n+1 = ∑ n+1 i=1 a i,n+1 • t i-1 n+1 c) Wiener filter ∑ i=1 w i .u n = y n J = (X n -u T n w) 2 y n+1 = ∑ +1 i=2 w i .u n d) Coherent predictor ∑ n j=n-∑ n+1 i=1 a i, j .t i-1 j = y n-,n J = 1 n ∑ n i=1 (X i -y i ) 2 + ( Ẋn -ẏn ) 2 + ( Ẍn -ÿn ) 2 y n+1 = ∑ n+1 i=1 a i,n+1 • t i-1 n+1 ∑ n+1 i=2 a i,n+1 .(i -1) * t i-2 n = ẏ LCS n ∑ n+1 i=3 a i,n+1 .(i -1) * (i -2) * t i-3 n = ÿ LCS n Table 1: Prediction function formulation locally coherent with one specific untracked particle at time-step t n . Another technique to reduce the number of untracked particles is to use backward prediction, which is well established in classic schemes such as nearest neighbour trajectory. Similarly, we implemented the backward predictor to search for additional information from the coherent particles to estimate in reverse pace followed by backward shaking. On the one hand, surrounding information of an untracked particle can provide the least information to predict in backward pace. In addition, this treatment can also connect spilt tracklets for reconstructing longer particle trajectories. This process is iterative, meaning that every forward step is embedded with a series of backward estimations from the current time-step up to the first step. In the present study, we evaluated and compared the performance of coherent predictor only in forward prediction. Synthetic Evaluation To evaluate our novel particle position prediction scheme, we used a DNS simulation of the wake behind a smooth cylinder at Reynolds number equal to 3900 computed by an open-access code named Incompact3d [START_REF] Laizet | Incompact3d: A powerful tool to tackle turbulence problems with up to O(105) computational cores[END_REF]. Particle trajectories around the cylinder are shown in Figure 3. Particles are transported by every 10 DNS time step using the fourth order Runge Kutta temporal and trilinear spatial schemes. The synthetic dataset is available to the public for interested readers [START_REF] Rahimi | Lagrangian and Eulerian dataset of flow over a circular cylinder at Reynolds number 3900[END_REF][START_REF] Rahimi | Lagrangian and Eulerian dataset of the wake over a smooth cylinder at a Reynolds number equal to 3900[END_REF]. In the synthetic case, particle trajectories are smooth and predictable when the synthetic temporal scale is with the same order of the DNS time step due to the small travelling distance between two time steps (less than the Kolmogorov timescale). However, the travelling distance is comparably large for a real PTV experiment. To mimic the real experiment, we created around 150, 000 ground truth trajectories associated with noise for every 20 DNS time step. By increasing the temporal scale, less particle position temporal information is available, and then the prediction would be more challenging. Therefore, even a weak signal of coherent motion would lead to a better prediction. We compared position prediction of four schemes with the ground truth particle trajectories (see Table 1). First, we predicted particle positions based on known ground truth DNS velocity with a linear Euler transport function. In such a scenario, we can estimate the minimum uncertainty level that can be achieved in this sparse temporal scale. 
Both the Wiener filter and the polynomial predictor were also selected for comparison with the LCS-based predictor (i.e. the coherent predictor). Figure 4 shows the normal pdf of the predicted position error in the x direction for the four schemes. The position error in the x direction shows that deviations for the coherent predictor stay virtually below 0.05 ε/D, where D is the cylinder diameter. On the contrary, a significant number of particles are mispredicted by both the polynomial and Wiener filter techniques. Similarly significant improvements from the coherent predictor are observed in the y and z directions. Figure 5 shows the projected distribution of the position error in the xy plane for each predictor function. Interestingly, the prediction error is highly correlated with the flow behaviour. Although the DNS predictor (see Figure 5.a) uses known ground-truth velocity information, the travelling distance is large enough to introduce prediction errors, particularly inside the wake region. As shown in Figure 5.b, the third-order polynomial has the worst prediction error, which can be up to 0.2 ε/D around the cylinder leading edge and inside the wake region. The polynomial prediction error distribution is fully shaped by the flow motion, meaning that any gradient inside the flow creates a large estimation error. The overall and local performance of the Wiener filter is better than that of the polynomial predictor: the Wiener filter succeeded in reducing the prediction error in most of the peak regions (see Figure 5.b,c). The error distribution reveals that the coherent predictor has the best performance, locally and globally, compared to the Wiener and polynomial predictors. It is worth mentioning that a small prediction error reduces the probability of picking a wrong particle from the surroundings in the optimisation process of 4D-PTV. Experimental Evaluation An experimental study of the cylinder wake flow at a Reynolds number equal to 3900 (the same value as the synthetic data) was performed. We designed an experimental setup with four cameras, as shown in Figure 6.c. Four CMOS SpeedSense DANTEC cameras with a resolution of 1280 × 800 pixels and a maximum frequency of 3 kHz were employed. The cameras are equipped with Nikon 105 mm lenses. The first two cameras are positioned in backward light scattering, while the other two cameras receive the maximum intensity signal in forward scattering. The calibration error was lower than 0.06 pixel and was subsequently reduced to 0.04 pixel. We used an LED system to illuminate this large volume. The seeding particles were Helium-Filled Soap Bubbles (HFSB) [START_REF] Scarano | On the use of helium-filled soap bubbles for large-scale tomographic PIV in wind tunnel experiments[END_REF], yielding the desired intensity signal with an appropriate particle size. However, bubbles are limited by three main factors in wind tunnel experiments: generation rate, lifetime, and image glare points. We placed 50 bubble generator nozzles with airfoil-shaped structures inside the wind tunnel chamber. The nozzles were far upstream of the measurement section to ensure that a sufficient number of bubbles were created and that the main flow field was not disturbed by the presence of the nozzles. The bubble lifetime is very short (less than 2-3 minutes) inside the wind tunnel, mainly because the bubbles burst when passing through the honeycomb layers. To overcome this issue, we injected bubbles into the chamber for up to 5 minutes with the wind tunnel off before starting the acquisition. We found that particles larger than three pixels create two glare points on the two sides of the bubble.
This requires more image treatments before running the 4D-PTV algorithm to avoid false particle reconstructions. However, the intensities of the two glare points can diffuse and merge if the particle size is around two pixels. Therefore, we adjusted the camera magnification to reach an average particle image size of two pixels, circumventing the glare-point issue. One snapshot of the experiment is shown in Figure 6.a. Trajectory results of the current experiment, with superimposed vorticity iso-surfaces, are shown in Figure 6.b. To quantify the results of the different schemes, we compared predictions with the optimised positions obtained from STB Davis. As a result of the experiment, STB managed to successfully build nearly 12000 particles, as shown in Figure 6.b. Noisy particle reconstructions of four time steps were used as input to the prediction functions. We compared three techniques, the polynomial, Wiener filter, and coherent predictors, with the final optimised positions. The deviation of the position estimates of each technique is shown in Figure 7. The distribution shows that the coherent predictor has more accurate estimations, within 1 pixel deviation from the optimised positions. Position estimations of the Wiener filter and coherent predictors stay below 2.5 pixels deviation for nearly all particles. On the contrary, the polynomial predictor has the maximum deviation from STB Davis. Conclusion We proposed a robust technique to predict particle positions based on their local temporal and spatial coherent motions. LCS can classify and divide the coherent neighbour motions. We imposed the first and second-order derivatives of the neighbour coherent motions into the predictor function, in addition to the particle history. To assess the proposed method, named the coherent predictor, we performed a synthetic analysis of the wake behind a smooth cylinder at a Reynolds number equal to 3900. We compared three predictor functions: the polynomial predictor showed the maximum deviation from the ground-truth data, whereas the coherent predictor provided the most accurate position estimation. We found that the flow region strongly impacts the estimation error. The wake region, particularly the vortex formation zone and the two sideward shear layers, poses greater challenges for prediction; these regions feature high accelerations and 3D directional motions. We also performed a 4D-PTV experiment of the wake flow behind a cylinder at the same Reynolds number. It was found that the coherent predictor reliably estimates particle positions very close to the optimised positions. Figure 3: Lagrangian particle trajectories obtained from the transport of synthetic particles through DNS Eulerian fields. Figure 4: Normal pdf of the particle position error in the x direction for the four predictor functions. Figure 5: Position estimation error averaged in the z direction: a) DNS predictor; b) polynomial predictor; c) Wiener filter; d) coherent predictor. Figure 6: The cylinder wake flow at Reynolds 3900 with four cameras: a) snapshot of the 4D-PTV experiment; b) side view of particle trajectories superimposed with vorticity iso-surfaces; c) schematic of the experimental setup. Figure 7: Experimental normal pdf of the particle position error in the x direction for the three predictors. Each predictor is compared with the final optimised positions of STB Davis.
04114168
en
[ "phys", "spi" ]
2024/03/04 16:41:24
2023
https://hal.science/hal-04114168/file/DAAA23063.1685632906.pdf
M Khodsiani F Beyrau B Fond Flame-particle interaction inside a packed bed of cooled particles Mohammadhassan Khodsiani, Frank Beyrau, Benoît Fond Introduction Many chemical processes are designed to transform material in the form of large particles (larger than a few millimeters in size) that travel at a slow pace through a reactor. These reactors are known as packed bed reactors and they are used for particle transformation processes such as heating, drying, pyrolysis, calcination, and gasification. These reactors contribute significantly to the large CO2 emissions of the process industries. The temperature distribution within the bulk solid, as well as the gas composition surrounding it, are directly determined by the extent and location of the flame within the void space between the particles [START_REF] Krause | Coupled three dimensional DEM-CFD simulation of a lime shaft kiln-Calcination, particle movement and gas phase flow field[END_REF]. Both parameters play a crucial role in the heat and mass transfer between the bulk and the flame, which ultimately impacts the quality of the final product and the energy efficiency of the process. Flame-wall interactions (FWI) in combustion engines and porous inert media (PIM) combustion are typically two separately treated areas of combustion research, but combustion in packed bed reactors relates directly to both of them. In nearly all configurations with a flame, walls play a major role in flame propagation by means of radical exchange, heat exchange, and by inducing a boundary layer. This is a two-way coupling: the wall affects the propagation of the flame by inducing a boundary layer, exchanging radicals and dissipating heat, and the flame in return heats the particle up. The specific wall area per unit volume dictates the strength of this coupling. In the case of a low specific area per volume, as in internal combustion engines and gas turbines, thorough experimental and numerical investigations have addressed various aspects of the flame-wall interaction (FWI) [START_REF] Poinsot | Direct simulation and modeling of flame-wall interaction for premixed turbulent combustion[END_REF][START_REF] Jainski | Sidewall quenching of atmospheric laminar premixed flames studied by laserbased diagnostics[END_REF]. On the other hand, combustion in porous inert media (PIM), which has gained attention as a new low-NOx combustion technology [START_REF] Trimis | Combustion in a Porous Medium-Advances and Applications[END_REF][START_REF] Wood | Porous burners for leanburn applications[END_REF], lies on the "high coupling" side of the interaction, as there is a high specific area of particle surface per volume. There, the fuel-air mixture combusts in the interstices and pores of a porous inert medium such as ceramic foams, pellets or packed beds. In the context of the interaction of a flame and solid surfaces, a Peclet number is defined as the ratio of the distance from the wall at which the flame quenches (δ_Q) to the Zeldovich premixed laminar flame thickness (δ_F), that is, Pe = δ_Q/δ_F. For porous media burners, it is the ratio of the pore size to the laminar flame thickness. The minimum Pe for a flame to propagate inside a porous medium is reported to be 65 [START_REF] Trimis | Combustion in a Porous Medium-Advances and Applications[END_REF].
Considering a regular spherical packed bed of particles and a methane-air mixture, this translates into a minimum sphere diameter of 10 mm with a pore size of 2.5 mm for the flame to propagate inside the packing. A few experimental studies describe the propagation of the bulk flame zone in the interstices of a packed bed placed in a quartz glass tube [START_REF] Okuyama | Turbulent combustion characteristics of premixed gases in a packed pebble bed at high pressure[END_REF][START_REF] Wu | Experimental investigation on low velocity filtration combustion in porous packed bed using gaseous and liquid fuels[END_REF]. Nevertheless, the strong scattering of light inside the packed bed and the 3D flame front were among the obstacles to an accurate and highly resolved detection of the flame front. In short, there is a need for an accurate description of systems in which a reactive gas flows through a packing of particles. Also, CFD simulations must be validated, as the high volume-specific surface ratio means that thermodiffusivity and wall chemistry play an important role. Obtaining unambiguous data in a setup with well-defined boundary conditions is central to the validation of numerical simulations. Existing sets of experimental data do not allow such validation, as a) their flame image recordings incorporate complex 3D effects, b) the temperature of the particles is not known, and c) the geometry of the bed cannot be fully reconstructed in simulations. In this work we elaborate on the design of a cylindrical packed bed assembly in which a 2D flame can be stabilized between the cylinders. The cylindrical geometry also allows for line-of-sight optical access. The first version of this assembly includes cooled cylinders, to accurately compare with simulations and to replicate cold start-up conditions. Wall chemistry is expected to have less impact on flame propagation under these conditions than with uncooled particles, which will be implemented in the next phase of the work. Prior to the flame studies, the uniformity of the temperature along the cylinder and of the inflow velocity is verified to ensure two-dimensionality of the flow field and of the thermal boundary conditions. The impact of flame position and extent on flame-to-particle heat transfer is studied using telecentric imaging of the reaction zone and a double calorimetric technique. The experiments are performed under different equivalence ratio and inflow velocity conditions. Two dimensional flame in a packed bed setup In the design of the two-dimensional packed bed burner, three criteria are taken into account: a) transferability and reproducibility of the experimental results, b) relevance of both the geometry and the boundary conditions to packed beds in industry and their flow specifications, and c) allowing for measurements of quantities such as wall temperature, gas velocity, temperature and species concentration. The geometry of the experimental setup comprises a slit burner and a packed bed of cylindrical particles. A cross-sectional view of the cylinder arrangement in the central region is given in Fig. 1. As shown, rows of cylinders are offset by half the center-to-center distance c, imposing turns on the flow similar to spherical packed beds. The slit burner is designed to supply a uniform two-dimensional flow all along the cylinder length axis. It is composed of an inner duct delivering the fuel/air mixture and an outer duct shielding the mixture with a coflow of air. The mixture of fuel and air flows between the two cylinders in the center of the first row. To set the porosity, taken as the void fraction here, the distance c can be adjusted. Eq. (1) is used to calculate the porosity within a unit cell, defined as the dashed triangle which connects the particle centers in Fig. 1:

1 - (π / (2√3)) (D/c)^2   (1)

As described in the introduction, flame-particle interaction is largely affected by the porosity. To have a porosity of 40%, relevant to typical spherical packed beds, the distance c was chosen to be 1.23 D. Thus, the reactive flow inside the cylindrical packed bed is exposed to a similar level of porosity and tortuosity. The specific area of the solid/gas boundary is only 33% lower than for a spherical packed bed. The cylinder diameter D is 10 mm, leading to a minimum gap of 2.3 mm between the particles, corresponding to the limit Peclet number of porous media burners, and therefore to a case of intense interaction between the flame and the particles.
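The choice c = 1.23 D can be checked numerically against Eq. (1); the short sketch below reproduces the quoted 40% porosity and 2.3 mm gap (the function name and units are ours, for illustration only).

```python
import math

def porosity(D, c):
    """Void fraction of the triangular unit cell, Eq. (1)."""
    return 1.0 - (math.pi / (2.0 * math.sqrt(3.0))) * (D / c) ** 2

D = 10.0               # cylinder diameter in mm
c = 1.23 * D           # center-to-center distance chosen in the design
print(porosity(D, c))  # ~0.40, the targeted void fraction
print(c - D)           # ~2.3 mm, the minimum gap between cylinders
```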
The main advantage of this geometry is its through optical access, which enables the implementation of line-of-sight optical diagnostics. The two-dimensionality of the flow and of the gas thermochemical state remains a significant consideration for the following reasons: 1) it simplifies the description of the flame behaviour in both experiments and numerical simulations; 2) it avoids the signal-averaging error induced when implementing diagnostics such as chemiluminescence imaging and schlieren, which rely on the integration of light all along the line of sight. A three-dimensional view of the packed bed and burner assembly is shown in Fig. 2a. To keep the flow two-dimensional, end walls are placed in the direction normal to the cylinder axes to confine the flow between the cylinders and avoid the apparition of a third flow velocity component, which would otherwise be caused by the escape of the flow through the open ends of the cylinders. For this reason, the uncooled cylinders are also extended past the holding frames to maintain a similar pressure drop in the end region as through the cylinder array. A pair of optical windows on the front and back sides of the packed bed is used to keep the optical access. Side walls are also used to position the aluminum holding plates and to confine the co-flow and central jet, facilitating comparison with numerical simulations. The cylinder holder plates were machined so as to provide an extended optical access to a large connected region, shown in Fig. 2b. This optical access covers the inflow region between the duct exit region below the first row, the region of the reactive flow between the three cooled cylinders, and a large region of burnt gases. The two-dimensionality of the inflow conditions was verified by particle image velocimetry measurements, described in [START_REF] Khodsiani | Spatially resolved experimental investigation of particle-flame interaction in a twodimensional model packed bed[END_REF]. The inflow velocity variation along the cylinder axis was found to be less than 5%. Flame front imaging A telecentric imaging system was employed both to detect the position of the two-dimensional flame reaction zone and to measure the real geometry of the rig compared to the ideally conceived geometry. Chemiluminescence emission from the intermediate radicals is used to mark the position and shape of the flame front through the line-of-sight optical access in between the cooled cylindrical particles.
CH* chemiluminescence at 431 nm and C2* at 471 nm are recorded to mark the reaction zone. The error in the flame front marked using visible emission should be limited to around a few percent of the laminar flame thickness [START_REF] Bellenoue | Direct measuremrment of laminar flame quenching distance in a closed vessel[END_REF]. It has also been experimentally demonstrated that the maximum-intensity positions of C2* and CH* emission nearly coincide [START_REF] Kojima | Spatially resolved measurement of OH, CH, and C2 chemiluminescence in the reaction zone of laminar methane/air premixed flames[END_REF]; otherwise, the integration of the light emitted by the two radicals would lead to the detection of an artificially thicker flame front. However, imaging with conventional lenses would cause perspective distortion along the line of sight, so that the flame zone closer to the lens would be imaged with a larger magnification compared to the regions further from the lens. This ultimately leads to a loss of resolution when collecting all emitted light along the line of sight. To remedy this problem, a telecentric lens (Edmund Optics #62-912) was employed to maintain the same magnification all along the line of sight over which the chemiluminescence is emitted, as illustrated in Fig. 3(a). The telecentric lens axis is aligned with the cylinder axes, and the lens is mounted on an sCMOS camera (PCO Edge 5.5) to record the chemiluminescence emanating from the reaction front over an exposure of 100 ms. The telecentric lens was equipped with a 500 nm short-pass filter (Edmund Optics #84-719) to distinguish the blue chemiluminescence from stray light and thermal radiation. Image processing An image processing algorithm was developed in MATLAB to accurately detect the cylinder boundaries as well as the location of the two-dimensional flame front. This serves both to assess the real geometry of the bed and to reveal the location of the reaction front with respect to this geometry. A gradient-based algorithm detects the points lying on the edges of the top, right and left cylinders. By fitting a circle to those points, the center coordinates of the cylinders are found. The blue, green and red circles of Fig. 3(c) are the fitted circles and delineate the detected edges of the top, right and left cylinders, respectively. Images of the flame luminosity are then recorded, as shown in Fig. 3(b). Nevertheless, the accurate detection of the reaction front requires the extraction of the high-intensity core of the flame in Fig. 3(b). To do so, an algorithm was developed to access the centreline of the reaction zone by a combination of a Canny edge detection algorithm [START_REF] Canny | A Computational Approach to Edge Detection[END_REF], to isolate the reaction zone, and a subsequent skeletonization to extract the high-intensity core of the reaction zone. The extracted centreline, a one-pixel-thick line, is superimposed on the blue reaction front in Fig. 3(b). In order to mark the location of the methane-air flame relative to the cylinders, the detected cylinder walls and the extracted flame centreline are put together on a grid in Fig. 3(c). This forms the premise for the discussion of the flame front location under different operating conditions in the Results section.
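The original processing was implemented in MATLAB; the following Python sketch only illustrates the same Canny-plus-skeletonization idea with scikit-image, and its smoothing and size parameters are illustrative assumptions rather than the values used in the study.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage import feature, filters, morphology

def flame_skeleton(img, blur_sigma=2.0):
    """One-pixel-wide reaction-front centreline from a chemiluminescence image:
    Canny edges isolate the reaction zone, which is then skeletonized."""
    smooth = filters.gaussian(img, sigma=blur_sigma)   # suppress pixel noise
    edges = feature.canny(smooth)                      # reaction-zone boundary
    zone = binary_fill_holes(morphology.binary_dilation(edges))
    zone = morphology.remove_small_objects(zone, min_size=64)
    return morphology.skeletonize(zone)                # high-intensity core
```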
Internal cylinder cooling and heat transfer measurements Besides the velocity uniformity, the temperature of the aluminum cylinders must be kept uniform along the cylinder length to make sure that the flame stabilizes at the same height and lateral position all along the cylinder length, forming a two-dimensional flame surface. Therefore, a sufficiently high flow of coolant must circulate inside the hollow aluminum cylinders to prevent an oil temperature increase along the cylinder direction. Two cooling circuits were designed to feed a high flow of silicone oil into the cylinders' cooling channels. Based on the flame power, oil coolers were selected to provide a high enough oil flow rate to reduce the cylinder temperature down to about 100 °C while keeping the surface temperature variations along the cylinder axis below 1 °C. Surface temperature measurements are described in [START_REF] Khodsiani | Spatially resolved experimental investigation of particle-flame interaction in a twodimensional model packed bed[END_REF]. Heat transfer rates from the flame to each individual cylinder are important quantities in the investigation of the interaction between the flame and the particles. To measure the heat transfer to each cylinder, we resort to a double calorimetric technique. First, through Eq. (2), the heat transfer rate is related to the oil temperature increase and the oil mass flow rate as the oil flows through the cylinder. T_i and T_o are the oil temperatures measured at the inlet and outlet of the top cylinder; the same equation holds for the side cylinders.

Q̇_Top cylinder = ṁ_oil c_p,oil (T_o - T_i)   (2)

To measure the oil temperature rise in the presence of the flame, Resistance Temperature Detector (RTD) sensors with a diameter of 1 mm were inserted at the inlet and outlet of the cylinder. The probes are four-wire platinum resistance thermometers with class A accuracy, corresponding to a ±0.15 °C absolute temperature accuracy and an excellent capability to measure relative temperature variations thanks to their very low drift. Such temperature resolution was required because a high silicone oil flow rate is used to keep the oil temperature rise below one degree, ensuring temperature uniformity all along the cylinder. A second calorimetric technique was used to measure the oil mass flow rate. This technique relates the oil mass flow rate to another temperature difference; the corresponding in-house calorimetric flowmeters were implemented in the cooling circuit. More details on the method can be found in Ref. [START_REF] Khodsiani | Spatially resolved experimental investigation of particle-flame interaction in a twodimensional model packed bed[END_REF]. Having measured the oil mass flow rate and the inlet-to-outlet oil temperature rise, Eq. (2) yields the heat transfer to the cylinder, which is discussed in conjunction with the flame chemiluminescence imaging measurements in the Results section.
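A minimal numerical sketch of the double calorimetric evaluation is given below. Eq. (2) is taken directly from the text; the flowmeter relation ṁ = P/(c_p ΔT), the oil heat capacity, and all the numbers are our illustrative assumptions, not the authors' calibration.

```python
CP_OIL = 1.6e3  # J/(kg K), indicative heat capacity of a silicone oil (assumed)

def oil_mass_flow(heater_power_w, dT_flowmeter_k, cp=CP_OIL):
    """Calorimetric flowmeter principle (assumed): m_dot = P / (cp * dT)."""
    return heater_power_w / (cp * dT_flowmeter_k)

def cylinder_heat_rate(m_dot, t_in, t_out, cp=CP_OIL):
    """Eq. (2): Q = m_dot * cp * (T_o - T_i) for one cooled cylinder."""
    return m_dot * cp * (t_out - t_in)

# Illustrative numbers only: 50 W heater with a 0.05 K rise across the
# flowmeter, and a 0.8 K inlet-to-outlet rise in the cylinder.
m_dot = oil_mass_flow(50.0, 0.05)               # kg/s
print(cylinder_heat_rate(m_dot, 100.0, 100.8))  # W
```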
Results In this section, two parameters, namely the inflow velocity (u) and the equivalence ratio (φ), are varied, and the flame position and geometry are discussed together with the heat transfer to the cylinders. Chemiluminescence images recorded between the cylinders under three different operating conditions are shown in Figure 4(a,b,c). The blue region between them delineates the methane-air reaction front, while the black areas mark the cylinders. The extracted flame skeletons are shown in Fig. 4(d) to better discern the flame position and geometry under the three different conditions. A stable flame is recorded, as shown in Fig. 4, at the reference condition: an inflow velocity of 0.3 m/s under stoichiometric conditions. A high chemiluminescence-intensity region, with a thickness of about 600 µm, marks the location of the reaction zone in between the cylinders. The extracted flame skeleton of Fig. 4(d) marks the exact geometry and extent of the flame front. The uppermost flame skeleton point is located at a distance of around 3 mm from the top cylinder, and the flame skeleton extends horizontally over a length of about 3.5 mm. A high-pressure region, due to the stagnation of the fast burnt gases downstream of the flame front, is induced near the top cylinder. This stagnation region pushes the flame front further upstream. The flame front has a negative curvature, flattening on the side, which may be caused by the presence of a low-velocity recirculation zone combined with the slower flame speed due to flame-to-wall heat losses. Maintaining φ at 1 and slowing u down to 0.25 m/s, the flame front contracts and moves considerably upstream to the location observed in Fig. 4(b), as also seen in the extracted skeleton in Fig. 4(d). With the equivalence ratio and the laminar flame speed remaining untouched, the flame front descends by one centimeter upstream to a region where the gas speed under the current inflow velocity (u = 0.25 m/s) is as fast as it was for an inflow velocity u of 0.3 m/s. In addition, compared to the reference condition, the flame front is considerably flatter and the curvature is even slightly positive. This positive curvature is probably due to the thinner boundary layer and the smaller size of the recirculation region, which lead to a top-hat reactant velocity profile. In this case, the positive flame curvature near the wall might have been induced by the slowed flame speed due to heat losses when approaching the walls. Moving to the last operating condition, u is maintained at 0.25 m/s while φ is decreased to 0.95. A lean mixture slows the laminar flame speed down, and therefore the reaction front ascends half a centimeter downstream to settle in a region of lower gas velocity. As shown in Fig. 4c, and also in the flame skeleton of Fig. 4d, the reaction front expands and its curvature becomes more pronounced compared to the stoichiometric condition, but still remains lower than the curvature of the flame at the reference condition. The rate of flame-to-particle heat transfer was measured for the top and bottom cylinders under the three operating conditions. In order to take the variation of the total flame power into account when comparing the heat transfer rates under different operating conditions, a parameter named the heat transfer efficiency (η) is introduced. η is defined as the ratio of the heat transferred to the cylinder to the theoretically calculated flame power, the latter being the product of the fuel flow rate and its lower heating value. Having detected the flame skeletons and measured the flame-to-particle heat transfer rate, the influence of the flame stabilization position on the heat transfer to the cylindrical particles can be quantified. To do so, the distance from the tip of the flame skeleton to the cylinder surface is evaluated for both the top and bottom cylinders and plotted against the heat transfer efficiency (η) in Fig. 5. The heat transfer efficiency of both the top and bottom cylinders diminishes as the flame front moves away from the cylinder.
For the top cylinder, this might be attributed to lower temperature gradients over the cylinder surface as the flame front moves further upstream, away from the cylinder. In the case of the bottom cylinders, the flame front moves downstream away from the cylinders, exposing a smaller portion of the cylinder surface to the hot and fast burnt gas. This in turn leads to weaker heat transfer. Conclusion Spatially resolved experimental data on the flame-particle interaction in a packed bed model reactor are essential to develop and validate numerical models. Therefore, a two-dimensional packed bed of cooled cylinders was conceived and operated. Telecentric imaging of the CH* chemiluminescence emanating from the reaction zone, in conjunction with flame-to-particle heat transfer measurements, revealed the effect of the flame extent and position on the individual flame-to-particle heat transfer. A flame was stabilised above the reactant inflow in the void space between the first and second rows of cylinders at a Reynolds number of 250. Due to the closeness of the flame to the cylinders, a significant flame-to-wall heat transfer was measured, amounting to 25% ± 2.5% of the total flame thermal power. By varying the inflow velocity and the equivalence ratio, the reaction front was stabilized at different locations downstream of the narrow gap between the cylinders in the bottom row. These changes also led to different heat transfer distributions between the upstream and downstream cylinders. Also, a correlation was found between the stand-off distance of the flame skeleton tip from a cylinder and the heat transfer efficiency to that cylinder. These data provide the heat transfer rate, the thermal boundary condition and the reaction front stabilization position over a fully characterised geometry, precluding any spatial ambiguity. They can therefore be directly exploited to validate the implementation of wall heat transfer and chemistry models in direct numerical simulations. In future work, flow velocity measurements by laser Doppler anemometry will allow the flame position and shape to be interpreted with respect to the complex flow geometry, and the local flame consumption speed to be determined. Moreover, we will also investigate the case of hot cylinders, replacing the three cooled cylinders by ceramic cylinders. Figure 1: A cross-section of the packed bed assembly in the central region. Here the cylinder-to-cylinder distance (c) is 1.23 D, resulting in a packed bed porosity of 40%. Figure 2: a) Final manufactured assembly of the cylindrical packed bed and burner; b) front view of the assembly; the white area delineates the region with optical access. Figure 3: a) Telecentric lens mounted on the sCMOS camera and aligned with the through optical access; b) recorded methane-air flame image with the superimposed extracted skeleton. Figure 4: a to c) CH* chemiluminescence recorded under different operating conditions; d) detected flame skeletons. The black lines on the top, left and right are the detected edges of the particles. Figure 5: Variation of the heat transfer efficiency (η) with the stand-off distance of the flame skeleton tip from the top and bottom cylinders. The corresponding operating condition of each data point is annotated. Acknowledgment The authors gratefully acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 422037413 - TRR 287.
04114229
en
[ "spi.auto" ]
2024/03/04 16:41:24
2022
https://hal.science/hal-04114229/file/DBP_J17.pdf
Sijia Kong Delphine Bresch-Pietri (Tel: +33140519330; Emails: [email protected], [email protected]) Probabilistic Sufficient Conditions for Prediction-based Stabilization of Linear Systems with Random Input Delay Keywords: Delay systems, prediction-based controller, distributed parameter systems This paper focuses on the prediction-based stabilization of a linear system subject to a random input delay. Modeling the delay as a finite-state Markov process, it proves that a constant time-horizon prediction enables robust compensation of the delay, provided the prediction horizon is sufficiently close to the delay values on average. Simulation results emphasize the practical relevance of this condition. I. INTRODUCTION Delays are among the most common phenomena in engineering practice [START_REF] Palmor | Time-delay compensation-Smith predictor and its modifications[END_REF], as they range from control and sensing processing times to transport delays. Lately, with the rise of communication and information technologies, communication delays have become a major concern for multiple research areas, such as multi-agent system coordination or traffic estimation and control based on Vehicle-to-Vehicle transmissions. In such networked control systems, transmitted information often suffers from lag, data reordering, packet dropouts or quantization [START_REF] Simon | Real-time control systems: feedback, scheduling and robustness[END_REF]. These phenomena can be accounted for by a random delay model. This paper considers the case where such a random delay affects the input of a dynamical system and investigates the design of a prediction-based controller to compensate for this delay. This control technique is well known for constant delays [START_REF] Kwon | Feedback stabilization of linear systems with delayed control[END_REF], [START_REF] Manitius | Finite spectrum assignment problem for systems with delays[END_REF], [START_REF] Smith | A controller to overcome dead time[END_REF], and has since been extended to various contexts, including time-varying delays [START_REF] Artstein | Linear systems with delayed controls: a reduction[END_REF], uncertain input delays or disturbances [START_REF] Mondié | Delay robustness of closed loop finite assignment for input delay systems[END_REF], and nonlinear systems [START_REF] Bekiaris-Liberis | Nonlinear control under nonconstant delays[END_REF], [START_REF] Karafyllis | Predictor feedback for delay systems: Implementations and approximations[END_REF]. It consists in computing a state prediction over a time window of the length of the (current or future) delay, and using this prediction in the feedback loop to eliminate or mitigate the effect of the delay in the closed-loop dynamics. Yet, while this technique has recently been applied to linear random Delay Differential Equations in [START_REF] Cacace | Predictor based outputfeedback control of linear stochastic systems with large i/o delays[END_REF] and to a deterministic delay term multiplied by a random variable in [START_REF] Li | Predictor-based h∞ leader-following consensus of stochastic multi-agent systems with random input delay[END_REF], its extension to the case where the delay itself is a random variable remained to be carried out. Recently, in our preliminary work [START_REF] Kong | Prediction-based controller for linear systems with stochastic input delay[END_REF], we studied for the first time the problem of prediction-based control of dynamical systems subject to random input delays and proposed to use a constant time-horizon prediction.
The present paper pursues this study and significantly extends its scope of application. Modeling the delay as a Markov process with a finite number of states as in [START_REF] Kolmanovsky | Mean-square stability of nonlinear systems with time-varying, random delay[END_REF], we reformulate the dynamics as a Partial Differential Equation-Ordinary Differential Equation (PDE-ODE) system, following the now standard methodology for stability analysis of input-delay systems proposed by Krstic and coworkers [START_REF] Bekiaris-Liberis | Nonlinear control under nonconstant delays[END_REF], [START_REF] Krstic | Boundary control of PDEs: A course on backstepping designs[END_REF], but applied to our random context. Applying the so-called technique of probabilistic delay averaging [START_REF] Kolmanovsky | Mean-square stability of nonlinear systems with time-varying, random delay[END_REF] to a new Lyapunov functional, we prove that mean-square exponential stabilization of the closed-loop system is obtained provided the prediction horizon is sufficiently close to the delay values in average, in the sense of the expected value. This considerably generalizes the condition of [START_REF] Kong | Prediction-based controller for linear systems with stochastic input delay[END_REF], which was deterministic and thus quite restrictive, and constitutes the main contribution of the paper.

The paper is organized as follows. In Section II, we formulate the problem under consideration, design a constant-horizon prediction-based controller and state our stabilization result. In Section III, we propose a backstepping reformulation of the system, which then enables us to analyze the stability of the closed-loop system in Section IV and prove the paper's main result. Simulation results are then provided in Section V to illustrate the practical relevance of the proposed sufficient condition for stabilization.

Notations. In the following sections, for a signal $v : (x,t) \in [0,1]\times\mathbb{R} \mapsto v(x,t) \in \mathbb{R}$, we denote $\|v(t)\|$ its spatial $L^2$-norm with respect to $x$. $\lambda(A)$ denotes the spectrum of a square matrix $A$, while $\min(\lambda(A))$ and $\max(\lambda(A))$ are its minimum and maximum eigenvalues, respectively. Additionally, $|A|$ denotes its Euclidean norm $|A| = \sqrt{\max(\lambda(A^\top A))}$, in which $A^\top$ denotes the transpose of $A$. $E(x)$ denotes the expectation of a random variable $x$. For a random signal $x(t)$ ($t \in T \subset \mathbb{R}$), the conditional expectation of $x(t)$ at the instant $t$ knowing that $x(s) = x_0$ at the instant $s \le t$ is denoted $E_{[s,x_0]}(x(t))$. Finally, $e_i$ denotes the $i$th standard basis vector of $\mathbb{R}^r$ ($r \in \mathbb{N}^+$ and $i \in \{1,\dots,r\}$).

II. PROBLEM STATEMENT AND MAIN RESULT

We consider a controllable linear system with random input delay of the form

$$\dot{X}(t) = AX(t) + BU(t - D(t)), \qquad (1)$$

in which $X \in \mathbb{R}^n$ and $U \in \mathbb{R}$ are the state and control input, respectively. The random delay $D$ is a Markov process with the following properties:

(P1) $D(t) \in \{D_i,\ i \in \{1,\dots,r\}\}$, $r \in \mathbb{N}$, with $0 < \underline{D} \le D_1 < D_2 < \dots < D_r \le \overline{D}$.

(P2) The transition probabilities $P_{ij}(t_1,t_2)$, which quantify the probability to switch from $D_i$ at time $t_1$ to $D_j$ at time $t_2$ ($(i,j) \in \{1,\dots,r\}^2$, $t_2 \ge t_1 \ge 0$), satisfy: a) $P_{ij} : \mathbb{R}^2 \to [0,1]$ with $\sum_{j=1}^{r} P_{ij}(t_1,t_2) = 1$; b) $P_{ij}$ is a differentiable function which, for $s < t$, follows the Kolmogorov equation

$$\frac{\partial P_{ij}(s,t)}{\partial t} = -c_j(t)P_{ij}(s,t) + \sum_{k=1}^{r} P_{ik}(s,t)\,\tau_{kj}(t), \qquad (2)$$

with $P_{ii}(s,s) = 1$ and $P_{ij}(s,s) = 0$ for $i \ne j$, in which $\tau_{ij}$ and $c_j = \sum_{k=1}^{r} \tau_{jk}$ are positive-valued functions such that $\tau_{ii}(t) = 0$. In addition, we assume that the functions $\tau_{ij}$ are bounded by a constant $\tau^\star > 0$.

(P3) The realizations of $D$ are right-continuous.
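As a concrete illustration of (P1)-(P3), the following minimal sketch simulates such a finite-state Markov delay process by the standard first-order discretization of the transition probabilities, in which a jump from state $j$ to state $l$ over $[t, t+\Delta t)$ occurs with probability approximately $\tau_{jl}\Delta t$. The delay values and the constant rates used here are illustrative assumptions, not values prescribed at this point of the paper; the sketch is in Python, whereas the authors' simulations (Section V) use Matlab-Simulink.

```python
# Minimal sketch: simulating the Markov delay D(t) of (P1)-(P3).
# Assumptions (illustrative): 3 delay states and constant rates tau_jl = 0.2.
import numpy as np

rng = np.random.default_rng(0)

D_values = np.array([0.1, 2.0, 2.1])        # states D_1 < ... < D_r
r = len(D_values)
tau = 0.2 * (np.ones((r, r)) - np.eye(r))   # rates tau_jl, with tau_jj = 0

dt, T = 0.01, 20.0
n_steps = int(T / dt)
state = 1                                    # start in D_2
D_path = np.empty(n_steps)

for k in range(n_steps):
    D_path[k] = D_values[state]
    # P(jump j -> l over [t, t+dt)) ~ tau_jl * dt; stay with ~ 1 - c_j * dt
    probs = tau[state] * dt
    probs[state] = 1.0 - probs.sum()
    state = rng.choice(r, p=probs)
```

By construction, the realizations produced this way are piecewise constant and right-continuous, matching (P3).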
Considering a finite number of delay values in (P1) is a common assumption, considered, e.g., in [START_REF] Kolmanovsky | Mean-square stability of nonlinear systems with time-varying, random delay[END_REF], [START_REF] Sadeghpour | Stability of linear continuoustime systems with stochastically switching delays[END_REF]. In the case of network systems, these discrete values can be seen as a measure of the congestion state of the network. Likewise, Property (P3) is standard for the modeling of continuous-time Markov chains and for assessing the system's well-posedness. Furthermore, it is important to emphasize that the two properties (P1) and (P3), along with the Markov property, guarantee that $P_{ij}$ satisfies the Kolmogorov equation (2) for certain positive-valued functions $\tau_{ij}$, $c_j$ (see [START_REF] Rausand | System reliability theory: models, statistical methods, and applications[END_REF], [START_REF] Ross | Introduction to probability models[END_REF]). In that sense, Property (P2) only requires the functions $\tau_{ij}$ to be bounded, which constitutes a mild modeling assumption. Finally, it is worth observing that the quantity $\tau_{ij}\Delta t$ is approximately the probability of transition from $D_i$ to $D_j$ on the interval $[t, t+\Delta t)$. Similarly, $1 - c_j(t)\Delta t$ represents the probability of staying at $D_j$ during the time interval $[t, t+\Delta t)$.

We aim at controlling the random-delay system (1) with a prediction-based controller. As the delay may not be known¹ and, in any case, varies abruptly, using the current delay value as prediction horizon would in all likelihood result in a chattering control law and an inaccurate prediction². Thus, we propose to use the following prediction-based controller with constant time horizon $D_0$ ($D_0 \in [\underline{D}, \overline{D}]$)

$$U(t) = K\left[e^{AD_0}X(t) + \int_{t-D_0}^{t} e^{A(t-s)}BU(s)\,ds\right], \qquad (3)$$

in which $K$ is a feedback gain such that $A + BK$ is Hurwitz. Obviously, contrary to the deterministic case [START_REF] Artstein | Linear systems with delayed controls: a reduction[END_REF], such a predictor can only robustly compensate for the random input delay. We now provide the main result of the paper: a sufficient condition for such a robust compensation.

Theorem 1: Consider the closed-loop system consisting of the system (1) and the control law (3). There exists a positive constant $\epsilon^\star(K)$ such that, if, for all time $t \ge 0$,

$$E_{[0,D(0)]}\big(|D_0 - D(t)|\big) \le \epsilon^\star(K), \qquad (4)$$

then the closed-loop system is mean-square exponentially stable, that is,

$$E_{[0,(\Upsilon(0),D(0))]}\big(\Upsilon(t)\big) \le R\,\Upsilon(0)\,e^{-\gamma t}, \qquad (5)$$

for certain positive constants $R$ and $\gamma$ and with

$$\Upsilon(t) = |X(t)|^2 + \int_{t-3\overline{D}}^{t} U(s)^2\,ds. \qquad (6)$$

Theorem 1 requires the prediction horizon to be sufficiently close in average to the delay values. This is in accordance with the constant-horizon feature of the prediction used in (3) and generalizes the restrictive deterministic condition obtained in our previous work [START_REF] Kong | Prediction-based controller for linear systems with stochastic input delay[END_REF], which bears on $|D_0 - D_l|$ for all $l \in \{1,\dots,r\}$. Indeed, requiring $|D_0 - D(t)|$ to be small enough for all time implies that the delay values are themselves close enough, otherwise such a choice of $D_0$ cannot be achieved. In that sense, Condition (4) represents a considerable relaxation, by distinguishing among the delay distributions. Besides, it is worth mentioning that an expression for the positive constant $\epsilon^\star$ is provided³ in the proof of Theorem 1 detailed in the sequel.
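As a complement to the statement of (3), here is a minimal sketch of how the constant-horizon predictor can be evaluated in discrete time, using a zero-order-hold approximation of the integral (the same kind of discretization used in the simulations of Section V). The function name and the buffer convention are assumptions made for illustration; only the formula (3) comes from the paper.

```python
# Minimal sketch: zero-order-hold evaluation of the predictor (3).
# U_hist stores the past inputs U(t - D0), ..., U(t - dt), oldest first.
import numpy as np
from scipy.linalg import expm

def predictor_control(K, A, B, X, U_hist, D0, dt):
    P = expm(A * D0) @ X                   # e^{A D0} X(t)
    for k, u in enumerate(U_hist):
        lag = D0 - k * dt                  # t - s for the k-th stored sample
        P = P + expm(A * lag) @ B * (u * dt)
    return float(K @ P)                    # U(t) = K [ e^{A D0} X + integral ]
```

In practice, the matrix exponentials would be precomputed once, since $D_0$ and the sampling time are fixed.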
The value of $\epsilon^\star$ obtained there, however, is likely to be of very little practical use, due to the conservativeness of the Lyapunov analysis carried out. Nevertheless, thanks to this expression, one can observe that the positive constant $\epsilon^\star$ depends on the feedback gain, and that this dependence is likely to be considerable. Providing a quantitatively meaningful bound and studying its relation with $K$ in view of increasing the closed-loop robustness is out of the scope of the present paper, but should be the focus of future works. Finally, note that the interval of definition of the integral in (6) is $[t-3\overline{D}, t]$ for a technical reason, namely, for Lemma 3 used in the stability analysis to hold. We now provide the proof of this theorem in the following sections.

III. BACKSTEPPING TRANSFORMATION

In order to prove Theorem 1, we follow the now standard stability analysis methodology for input-delay systems of [START_REF] Bekiaris-Liberis | Nonlinear control under nonconstant delays[END_REF], [START_REF] Krstic | Boundary control of PDEs: A course on backstepping designs[END_REF]. Consequently, we reformulate (1) as a PDE-ODE cascade and introduce additional distributed variables to better account for the effect of the prediction-based control law [START_REF] Cacace | Predictor based outputfeedback control of linear stochastic systems with large i/o delays[END_REF]. We denote $v(x,t) = \big(v_1(x,t)\ \dots\ v_k(x,t)\ \dots\ v_r(x,t)\big)^\top$ the vector of distributed actuators $v_k(x,t) = U(t + D_k(x-1))$. Then, one can rewrite the linear system (1) as the following PDE-ODE cascade with random parameter $\delta$

$$\begin{cases} \dot{X}(t) = AX(t) + B\,\delta(t)^\top v(0,t)\\ \Lambda_D\, v_t(x,t) = v_x(x,t)\\ v(1,t) = \mathbf{1}\,U(t), \end{cases} \qquad (7)$$

in which $\Lambda_D = \mathrm{diag}(D_1,\dots,D_r)$, $\mathbf{1}$ is an $r$-by-$1$ all-ones vector and $\delta(t) \in \mathbb{R}^r$ is such that, if $D(t) = D_j$, $\delta(t) = e_j$, the $j$th vector of the standard basis of $\mathbb{R}^r$. Hence, $\delta(t)$ is a Markov process with the same transition probabilities as the process $D(t)$, but with the finite number of states $(e_i)$ instead of $(D_i)$. We introduce several distributed variables

$$\bar v(x,t) = U(t + D_0(x-1)), \qquad (8)$$
$$\tilde v(x,t) = v(x,t) - \mathbf{1}\,\bar v(x,t), \qquad (9)$$
$$\mu(x,t) = U(t - D_0 + (3\overline{D} - D_0)(x-1)). \qquad (10)$$

In detail, $\bar v$ represents the control input $U(t)$ within the interval $[t-D_0, t]$, $\tilde v$ the corresponding input estimation error, while $\mu$ represents the controller within the interval $[t-3\overline{D}, t-D_0]$. The extended state $(X(t), \bar v(x,t), \tilde v(x,t), \mu(x,t))$ then satisfies

$$\begin{cases} \dot{X}(t) = AX(t) + B\bar v(0,t) + B\,\delta(t)^\top \tilde v(0,t)\\ D_0\,\bar v_t(x,t) = \bar v_x(x,t), \quad \bar v(1,t) = U(t)\\ \Lambda_D\,\tilde v_t(x,t) = \tilde v_x(x,t) - \Sigma_D\,\bar v_x(x,t), \quad \tilde v(1,t) = \mathbf{0}\\ (3\overline{D} - D_0)\,\mu_t(x,t) = \mu_x(x,t), \quad \mu(1,t) = \bar v(0,t), \end{cases} \qquad (11)$$

in which $\Sigma_D = \big(\tfrac{D_1 - D_0}{D_0}, \dots, \tfrac{D_r - D_0}{D_0}\big)^\top$ and $\mathbf{0}$ is an $r$-by-$1$ all-zeros vector. Finally, in view of stability analysis, we introduce the backstepping transformation (see [START_REF] Bekiaris-Liberis | Nonlinear control under nonconstant delays[END_REF], [START_REF] Krstic | Boundary control of PDEs: A course on backstepping designs[END_REF])

$$w(x,t) = \bar v(x,t) - K\left[e^{AD_0 x}X(t) + D_0\int_0^x e^{AD_0(x-y)}B\,\bar v(y,t)\,dy\right]. \qquad (12)$$
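Before stating the effect of this transformation, it is worth recording the elementary computation behind the transport representation used throughout (7)-(11): applying the chain rule to $v_k(x,t) = U(t + D_k(x-1))$ gives

$$\partial_t v_k(x,t) = \dot U\big(t + D_k(x-1)\big), \qquad \partial_x v_k(x,t) = D_k\,\dot U\big(t + D_k(x-1)\big),$$

so that $D_k\,\partial_t v_k = \partial_x v_k$, with boundary values $v_k(1,t) = U(t)$ and $v_k(0,t) = U(t - D_k)$, the latter being exactly the delayed input entering (1) when $D(t) = D_k$.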
Lemma 1: The backstepping transformation (12), jointly with the control law (3), transforms the plant (11) into the target system $(X, w, \tilde v, \mu)$

$$\begin{cases} \dot{X}(t) = (A+BK)X(t) + B\big[\delta(t)^\top \tilde v(0,t) + w(0,t)\big]\\ D_0\,w_t(x,t) = w_x(x,t) - D_0 K e^{AD_0 x}B\,\delta(t)^\top \tilde v(0,t), \quad w(1,t) = 0\\ \Lambda_D\,\tilde v_t(x,t) = \tilde v_x(x,t) - \Sigma_D\,h(t + D_0(x-1)), \quad \tilde v(1,t) = \mathbf{0}\\ (3\overline{D} - D_0)\,\mu_t(x,t) = \mu_x(x,t), \quad \mu(1,t) = KX(t) + w(0,t), \end{cases} \qquad (13)$$

in which $h$ is defined for $t \ge 0$ as

$$h(t) = D_0 K\bigg[(A+BK)e^{AD_0}X(t) + e^{AD_0}B\,\delta(t)^\top \tilde v(0,t) + D_0(A+BK)\int_0^1 e^{AD_0(1-x)}B\Big(w(x,t) + Ke^{(A+BK)D_0 x}X(t) + \int_0^x K D_0\, e^{(A+BK)D_0(x-y)}Bw(y,t)\,dy\Big)dx\bigg]. \qquad (14)$$

Proof: The proof is similar to the one of [8, Lemma 2]. With this new set of coordinates, we are now ready to analyze the exponential stabilization of the closed-loop system.

IV. STABILITY ANALYSIS

A. Definition of the infinitesimal generator

Let us define the state of the target system (13) as $\Psi = (X, w, \tilde v, \mu) \in \mathcal{D}_\Psi \triangleq \mathbb{R}^n \times L^2([0,1],\mathbb{R}) \times L^2([0,1],\mathbb{R}^r) \times L^2([0,1],\mathbb{R})$. As the solution to (13) is unique (see [START_REF] Kong | Prediction-based controller for linear systems with stochastic input delay[END_REF] for further details), $(\Psi, \delta)$ defines a continuous-time Markov process and we can therefore introduce the following elements for stability analysis. Define the infinitesimal generator $\mathcal{L}$ (see [START_REF] Kolmanovskii | Introduction to the theory and applications of functional differential equations[END_REF] and [START_REF] Kolmanovsky | Mean-square stability of nonlinear systems with time-varying, random delay[END_REF]) acting on a functional $V : \mathcal{D}_\Psi \times \{e_1,\dots,e_r\} \to \mathbb{R}$ as

$$\mathcal{L}V(\Psi,\delta) = \limsup_{\Delta t \to 0^+} \frac{1}{\Delta t}\Big(E_{[t,(\Psi,\delta)]}\big(V(\Psi(t+\Delta t), \delta(t+\Delta t))\big) - V(\Psi,\delta)\Big). \qquad (15)$$

We also define $\mathcal{L}_j$, the infinitesimal generator of the Markov process $(\Psi,\delta)$ obtained from the target system (13) by fixing $\delta(t) = e_j$, as

$$\mathcal{L}_j V(\Psi) = \frac{dV}{d\Psi}(\Psi, e_j)\,f_j(\Psi) + \sum_{l=1}^{r}\big(V_l(\Psi) - V_j(\Psi)\big)\tau_{jl}(t), \qquad (16)$$

in which $V_l(\Psi) = V(\Psi, e_l)$ and $f_j$ denotes the operator corresponding to the dynamics of the target system (13) with the fixed value $\delta(t) = e_j$, that is, for $\Psi = (X, w, \tilde v, \mu)$,

$$f_j(\Psi)(x) = \begin{pmatrix} (A+BK)X + Be_j^\top \tilde v(0) + Bw(0)\\ \frac{1}{D_0}\big[w_x(x) - D_0 K e^{AD_0 x}Be_j^\top \tilde v(0)\big]\\ \Lambda_D^{-1}\big[\tilde v_x(x) - \Sigma_D\,h(\cdot + D_0(x-1))\big]\\ \mu_x(x)/(3\overline{D} - D_0) \end{pmatrix}. \qquad (17)$$

For the sake of conciseness, in the sequel, we denote $V(t)$, $\mathcal{L}V(t)$, $V_j(t)$ and $\mathcal{L}_j V(t)$, for short, instead of $V(\Psi(t),\delta(t))$, $\mathcal{L}V(\Psi(t),\delta(t))$, $V(\Psi(t), e_j)$ and $\mathcal{L}_j V(\Psi(t))$, respectively. Due to the dynamics (2) of the transition probabilities, the infinitesimal generators (15) and (16) are related as follows

$$\sum_{j=1}^{r} P_{ij}(0,t)\frac{dV_j}{d\Psi}(\Psi(t))f_j(\Psi(t)) + \sum_{j=1}^{r}\frac{\partial P_{ij}}{\partial t}(0,t)V_j(t) = \sum_{j=1}^{r} P_{ij}(0,t)\,\mathcal{L}_j V(t) = \mathcal{L}V(t). \qquad (18)$$

Therefore, for stability analysis, one can first focus on the fixed-delay functional $\mathcal{L}_j V$. This is the probabilistic delay averaging approach [START_REF] Kolmanovsky | Mean-square stability of nonlinear systems with time-varying, random delay[END_REF], which we follow in the sequel.

B. Lyapunov analysis

Consider the following Lyapunov functional candidate

$$V(\Psi,\delta) = X^\top P X + c\int_0^1 (1+x)\,(\delta \circ D)^\top \tilde v(x)^2\,dx + bD_0\int_0^1 (1+x)\,w(x)^2\,dx + d(3\overline{D} - D_0)\int_0^1 (1+x)\,\mu(x)^2\,dx, \qquad (19)$$

with $b, c, d > 0$, $P$ the symmetric positive definite solution of the equation $P(A+BK) + (A+BK)^\top P = -Q$ for a given symmetric positive definite matrix $Q$, and $D = (D_1 \dots D_r)^\top$, where $\circ$ denotes the Hadamard multiplication and the square in $\tilde v(x)^2$ should be understood component-wise. Note that, contrary to [START_REF] Kong | Prediction-based controller for linear systems with stochastic input delay[END_REF], the functional (19) explicitly depends on $\delta$. We can then obtain the following result.

Lemma 2: There exist $(b,c,d) \in (\mathbb{R}^*_+)^3$ such that the Lyapunov functional $V$ defined in (19) satisfies, for $t \ge 2\overline{D}$,

$$\mathcal{L}V(t) \le -\Big(\eta - M\,E_{[0,D(0)]}(|D_0 - D(t)|) - N g(t)\Big)V(t), \qquad (20)$$

with $\eta, M, N > 0$ positive constants and the function $g$ defined as $g(t) \triangleq \sum_{j=1}^{r}|D_j - D_0|^2\big(\tfrac{\partial P_{ij}}{\partial t}(0,t) + c_j(t)P_{ij}(0,t)\big)$.

Proof: According to (18), we first consider $\mathcal{L}_j V$ defined in (16). For the first term in (16), from (17), applying integration by parts and Young's inequality, one obtains

$$\frac{dV_j}{d\Psi}(\Psi)f_j(\Psi) \le -\Big(\frac{\min(\lambda(Q))}{2} - 4d|K|^2\Big)|X(t)|^2 - b\big(1 - 2D_0|K||B|e^{|A|D_0}\gamma_1\big)\|w(t)\|^2 - c\Big(1 - \frac{2}{D_0}|D_0 - D_j|\gamma_2\Big)\|\tilde v_j(t)\|^2 - d\|\mu(t)\|^2 - \Big(b - 4d - \frac{4|PB|^2}{\min(\lambda(Q))}\Big)w(0,t)^2 - \Big(c - \frac{4|PB|^2}{\min(\lambda(Q))} - 2bD_0|K||B|e^{|A|D_0}\frac{1}{\gamma_1}\Big)\tilde v_j(0,t)^2 - d\,\mu(0,t)^2 + \frac{2c}{D_0\gamma_2}|D_0 - D_j|\,\|h(t + D_0(\cdot - 1))\|^2, \qquad (21)$$

for any $\gamma_1, \gamma_2 > 0$. Observing that $D_0 \in [\underline{D}, \overline{D}]$, let us choose $(b,c,d,\gamma_1,\gamma_2) \in (\mathbb{R}^*_+)^5$ as follows: (a) $d < \frac{\min(\lambda(Q))}{8|K|^2}$; (b) $b \ge 4d + \frac{4|PB|^2}{\min(\lambda(Q))}$; (c) $\gamma_1 < \frac{1}{2\overline{D}|K|e^{|A|\overline{D}}|B|}$; (d) $\gamma_2 < \frac{1}{4}\min\big\{\frac{\underline{D}}{\overline{D} - D_1}, \frac{\underline{D}}{D_r - \underline{D}}\big\}$; (e) $c \ge \frac{4|PB|^2}{\min(\lambda(Q))} + 2b\overline{D}|K||B|e^{|A|\overline{D}}\frac{1}{\gamma_1}$; and define $\eta_0 = \min\{\min(\lambda(Q))/2 - 4d|K|^2,\ b(1 - 2D_0|K||B|e^{|A|D_0}\gamma_1),\ d\}$, which implies

$$\frac{dV_j}{d\Psi}(\Psi)f_j(\Psi) \le -\eta_0\big(|X(t)|^2 + \|w(t)\|^2 + \|\mu(t)\|^2\big) + \frac{2c}{\gamma_2 D_0}|D_0 - D_j|\,\|h(t + D_0(\cdot - 1))\|^2. \qquad (22)$$

Using Lemmas 3 and 4 for the index $j_0 \in \{1,\dots,r\}$ such that $e_{j_0} = \delta(t)$, this finally gives

$$\frac{dV_j}{d\Psi}(\Psi)f_j(\Psi) \le -\eta V(t) + \frac{2cM_1}{\gamma_2 D_0}|D_0 - D_j|\,V(t), \qquad (23)$$

with $\eta = \frac{\eta_0}{2\max\{\max(\lambda(P)),\ 2b\overline{D},\ 2cD_r\max\{N_X,N_w,N_\mu\},\ 2d(3\overline{D} - \underline{D})\}}$, in which $N_X$, $N_w$, $N_\mu$ are defined in Lemma 4. In addition, for the second term in (16), by definition of the Lyapunov functional (19), one obtains

$$\sum_{l=1}^{r}\big(V_l(\Psi) - V_j(\Psi)\big)\tau_{jl}(t) = \sum_{l=1}^{r} c\int_0^1 (1+x)\,\tau_{jl}(t)\big(D_l\tilde v_l(x,t)^2 - D_j\tilde v_j(x,t)^2\big)dx, \qquad (24)$$

in which, from the definition (9) of the input estimation error, $\tilde v_k(x,t) = \int_{t+D_0(x-1)}^{t+D_k(x-1)}\dot U(s)\,ds$ and

$$D_l\tilde v_l(x,t)^2 - D_j\tilde v_j(x,t)^2 = \Big(\sqrt{D_l}\,\tilde v_l(x,t) - \sqrt{D_j}\,\tilde v_j(x,t)\Big)\Big(\sqrt{D_l}\,\tilde v_l(x,t) + \sqrt{D_j}\,\tilde v_j(x,t)\Big) \le (1-x)^2\big(|D_l - D_j||D_j - D_0| + \overline{D}|D_l - D_j|\big)\,\overline{D}\big(|D_l - D_0| + |D_j - D_0|\big)\max_{s\in[-\overline{D},0]}\dot U(t+s)^2 \le M_2 M_3\big(|D_j - D_0| + |D_l - D_j||D_l - D_0|\big)V(t), \qquad (25)$$

in which we used Lemma 3 in the last inequality and with $M_3 = \max\big\{\big(3\overline{D} - 2(\underline{D}\,\overline{D})^{1/2}\big)|\overline{D} - \underline{D}|,\ \overline{D}\big\}$. Therefore, gathering (23), (24) and (25), one gets

$$\mathcal{L}_j V(t) \le -\eta V(t) + M_4|D_0 - D_j|\,V(t) + N\sum_{l=1}^{r}\tau_{jl}(t)|D_l - D_j||D_l - D_0|\,V(t), \qquad (26)$$

with $M_4 = \frac{2cM_1}{\gamma_2 D_0} + M_2 M_3 r\tau^\star$ and $N = 2cM_2M_3$.
Then, from (18) and as $\sum_{j=1}^{r} P_{ij}(0,t)|D_0 - D_j| = E_{[0,D(0)]}(|D_0 - D(t)|)$, the following inequality holds

$$\mathcal{L}V(t) \le -\big(\eta - M_4\,E_{[0,D(0)]}(|D_0 - D(t)|)\big)V(t) + N\sum_{j=1}^{r} P_{ij}(0,t)\sum_{l=1}^{r}\tau_{jl}(t)|D_l - D_j||D_l - D_0|\,V(t).$$

Hence, applying the triangle inequality and using (2), one finally gets

$$\mathcal{L}V(t) \le -\big(\eta - M_4\,E_{[0,D(0)]}(|D(t) - D_0|)\big)V(t) + N\sum_{l=1}^{r}\sum_{j=1}^{r} P_{ij}(0,t)\tau_{jl}(t)|D_l - D_0|^2\,V(t) + N\tau^\star\,E_{[0,D(0)]}(|D(t) - D_0|)\sum_{l=1}^{r}|D_l - D_0|\,V(t) \le -\Big(\eta - \big(M_4 + N\tau^\star r|\overline{D} - \underline{D}|\big)E_{[0,D(0)]}(|D(t) - D_0|)\Big)V(t) + N\sum_{j=1}^{r}|D_j - D_0|^2\Big(\frac{\partial P_{ij}}{\partial t}(0,t) + c_j(t)P_{ij}(0,t)\Big)V(t). \qquad (27)$$

Lemma 2 is then proved with $M = M_4 + N\tau^\star r|\overline{D} - \underline{D}|$.

C. Conclusion of the stability analysis

With Lemma 2, we are now ready to conclude the proof of Theorem 1. Let us denote $\gamma_0(t) = \eta - M\,E_{[0,D(0)]}(|D(t) - D_0|) - Ng(t)$, in which $\eta$, $M$, $N$ and $g$ are defined in Lemma 2, and introduce the functional $Z$ as $Z(t) = \exp\big(\int_0^t \gamma_0(s)ds\big)V(t)$. Applying Lemma 2, we obtain

$$\mathcal{L}Z(t) = \gamma_0(t)Z(t) + \exp\Big(\int_0^t \gamma_0(s)ds\Big)\mathcal{L}V(t) \le 0.$$

Therefore, for $t \ge 3\overline{D}$, according to Dynkin's formula [4, Theorem 5.1, p. 133],

$$E_{[3\overline{D},(\Psi,D)(3\overline{D})]}(Z(t)) - Z(3\overline{D}) = E_{[3\overline{D},(\Psi,D)(3\overline{D})]}\Big(\int_{3\overline{D}}^{t}\mathcal{L}Z(s)ds\Big) \le 0, \qquad (28)$$

from which, using standard conditional expectation properties, one deduces $E_{[0,(\Psi,D)(0)]}(Z(t)) \le E_{[0,(\Psi,D)(0)]}(Z(3\overline{D}))$. In addition, as $c_j = \sum_{k=1}^{r}\tau_{jk} \le r\tau^\star \triangleq c^\star$, the function $g$ defined in Lemma 2 satisfies

$$\int_0^t g(s)\,ds \le (\overline{D} - \underline{D})\Big(E_{[0,D(0)]}(|D_0 - D(t)|) + c^\star\int_0^t E_{[0,D(0)]}(|D_0 - D(s)|)\,ds\Big), \qquad (29)$$

so that one can observe that

$$E_{[0,(\Psi,D)(0)]}(Z(t)) \ge E_{[0,(\Psi,D)(0)]}\Big(V(t)\exp\Big(-N(\overline{D} - \underline{D})E_{[0,D(0)]}(|D_0 - D(t)|) + \int_0^t\big(\eta - (M + Nc^\star(\overline{D} - \underline{D}))E_{[0,D(0)]}(|D_0 - D(s)|)\big)ds\Big)\Big). \qquad (30)$$

Thus, if (4) holds with

$$\epsilon^\star \triangleq \frac{\eta}{2\big(M + Nc^\star(\overline{D} - \underline{D})\big)}, \qquad (31)$$

one obtains from (28) and (30)

$$E_{[0,(\Psi,D)(0)]}\Big(e^{-N(\overline{D}-\underline{D})\epsilon^\star + \frac{\eta}{2}t}\,V(t)\Big) \le E_{[0,(\Psi,D)(0)]}(Z(t)) \le E_{[0,(\Psi,D)(0)]}(Z(3\overline{D})) \le e^{3\overline{D}\eta}\,E_{[0,(\Psi,D)(0)]}(V(3\overline{D})), \qquad (32)$$

which implies, with $\gamma = \frac{\eta}{2}$,

$$E_{[0,(\Psi,D)(0)]}(V(t)) \le E_{[0,(\Psi,D)(0)]}(V(3\overline{D}))\,e^{3\overline{D}\eta + N(\overline{D}-\underline{D})\epsilon^\star - \gamma t}. \qquad (33)$$

Notice that $V$ and $\Upsilon$ are equivalent, that is, there exist positive constants $q_1$ and $q_2$ such that, for all $t \ge 0$, $q_1 V(t) \le \Upsilon(t) \le q_2 V(t)$ (see [START_REF] Kong | Constant time horizon prediction-based control for linear systems with time-varying input delay[END_REF], Lemma 4, for a proof of this fact in a similar case). It thus follows that $E_{[0,(\Upsilon(0),D(0))]}(\Upsilon(t)) \le \frac{q_2}{q_1}\,e^{3\overline{D}\eta + N(\overline{D}-\underline{D})\epsilon^\star}\,E_{[0,(\Upsilon(0),D(0))]}(\Upsilon(3\overline{D}))\,e^{-\gamma t}$. In addition, as the dynamics (1) is linear, there exists a constant $R_0 > 0$ (see [START_REF] Kong | Prediction-based controller for linear systems with stochastic input delay[END_REF], Lemma 5) such that $\Upsilon(t) \le R_0\Upsilon(0)$, $t \in [0, 3\overline{D}]$. Consequently, (5) follows with $R = R_0\,e^{3\overline{D}\eta + N(\overline{D}-\underline{D})\epsilon^\star}\,q_2/q_1$.

V. SIMULATIONS

To illustrate Theorem 1, and in particular the role played by the condition (4), we consider the following toy example

$$\dot{X}(t) = \begin{bmatrix}0 & 1\\ -1 & 1\end{bmatrix}X(t) + \begin{bmatrix}0\\ 1\end{bmatrix}U(t - D(t)). \qquad (34)$$

The control law (3) is applied with the feedback gain $K = \begin{bmatrix}-1 & -2\end{bmatrix}$, resulting in conjugate closed-loop eigenvalues $\lambda(A+BK) = \{-0.5000 + 1.3229i, -0.5000 - 1.3229i\}$. The initial conditions are chosen as $X(0) = [1\ 0]^\top$ and $U(t) = 0$ for $t \le 0$. Simulations are carried out with a fixed-step solver in Matlab-Simulink and a sampling time $\Delta t = 0.01$ s. Finally, the integral in (3) is discretized using its zero-order-hold approximation, in line with a suggestion in [START_REF] Manitius | Finite spectrum assignment problem for systems with delays[END_REF]. We consider 3 different delay values $(D_1, D_2, D_3) = (0.1, 2.0, 2.1)$.
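A minimal sketch of this set-up is given below, in Python rather than the Matlab-Simulink implementation used by the authors. It combines an explicit Euler step for (34), the zero-order-hold discretization of the predictor (3), and the initial delay distribution and time-varying rates $\tau(t) = \tau^\star e^{-kt}$ specified just below and in the caption of Fig. 2; the random seed and the simulation horizon $T$ are illustrative assumptions.

```python
# Minimal sketch: closed-loop simulation of (34) with the predictor (3).
# Parameters follow the text (dt = 0.01, D0 = 2.0, K = [-1 -2]); seed and
# horizon T are illustrative assumptions.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = np.array([[0.0, 1.0], [-1.0, 1.0]])
B = np.array([0.0, 1.0])
K = np.array([-1.0, -2.0])
D_vals = np.array([0.1, 2.0, 2.1])
D0, dt, T = 2.0, 0.01, 40.0

N0 = int(round(D0 / dt))                  # samples over one horizon D0
eAD0 = expm(A * D0)
# ZOH weights e^{A (D0 - k dt)} B dt for the integral term of (3)
w = [expm(A * (D0 - k * dt)) @ B * dt for k in range(N0)]

n_steps = int(T / dt)
Nmax = int(round(D_vals.max() / dt))
U_buf = np.zeros(Nmax + n_steps)          # leading zeros: U(t) = 0 for t <= 0
X = np.array([1.0, 0.0])
state = rng.choice(3, p=[0.02, 0.69, 0.29])   # initial distribution P_j(0, 0+)

for i in range(n_steps):
    t = i * dt
    # Markov jump of the delay, rates tau_jl(t) = 0.2 exp(-0.1 t) for l != j
    probs = 0.2 * np.exp(-0.1 * t) * dt * np.ones(3)
    probs[state] = 0.0
    probs[state] = 1.0 - probs.sum()
    state = rng.choice(3, p=probs)
    # Predictor-based control (3) from the last N0 stored inputs
    hist = U_buf[Nmax + i - N0 : Nmax + i]
    U = float(K @ (eAD0 @ X + sum(wk * u for wk, u in zip(w, hist))))
    U_buf[Nmax + i] = U
    # Plant (34) with the randomly delayed input, explicit Euler step
    lag = int(round(D_vals[state] / dt))
    X = X + dt * (A @ X + B * U_buf[Nmax + i - lag])
```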
The initial transition probabilities are taken as⁴ $P_1(0,0^+) = 0.02$, $P_2(0,0^+) = 0.69$ and $P_3(0,0^+) = 0.29$, which means that the delay values are initially concentrated in $D_2$ and $D_3$. We pick the prediction horizon as $D_0 = 2$. Notice that the delay margin of the closed-loop system (34) and (3) with constant delay $D_0$ is $\Delta D = 0.096$ (see [START_REF] Kong | Constant time horizon prediction-based control for linear systems with time-varying input delay[END_REF] for details on the computation of this quantity). Thus, the realizations of both $D_1$ and $D_3$ lead to a delay difference which is beyond the robustness margin of the closed-loop system, resulting in a challenging set-up, as prediction-based controllers are well-known to be sensitive to delay mismatch [START_REF] Mondié | Delay robustness of closed loop finite assignment for input delay systems[END_REF]. Simulations performed for constant transition probabilities equal to the above initial conditions (i.e., $\tau_{ij} = 0$) resulted …

[Fig. 2: Simulation results of the closed-loop system (34) and (3) for $D = (0.1, 2.0, 2.1)^\top$, $X(0) = [1\ 0]^\top$ and $U(t) = 0$ for $t \le 0$. The prediction horizon is $D_0 = 2.0$. The transition probabilities follow (2) with $\tau(t) = \tau^\star e^{-kt}(1_{3\times 3} - I_3)$ ($\tau^\star = 0.2$ and $k = 0.1$). Panels include the dynamics of the transition probabilities $P_1$, $P_2$ and $P_3$, and (c) a Monte Carlo simulation of $\log\|X\|$ and the closed-loop input $U$ (100 trials), in which the means and the standard deviations are highlighted by the colored lines.]

VI. CONCLUSION

In this paper, we proposed a constant-horizon prediction-based controller to compensate for a random input delay modeled as a Markov process with a finite number of values. We proved the exponential mean-square stability of the closed-loop control system provided that the chosen prediction horizon is in average close enough to the delay values. Simulations illustrated the relevance of this condition and the interest of this prediction-based control law. Future works will focus on the adaptation of the prediction horizon to the current delay distribution, as it is likely to increase the closed-loop delay robustness, and thus represents an interesting design feature to explore.

APPENDIX

Lemma 3: Consider the control law defined in (3) and the function $h$ defined in (14). There exist $M_1, M_2 > 0$ such that

$$\|h(t + D_0(\cdot - 1))\|^2 \le M_1 V(t), \quad t \ge D_0, \qquad (36)$$
$$\max_{s\in[-\overline{D},0]}\dot U(t+s)^2 \le M_2 V(t), \quad t \ge 2\overline{D}. \qquad (37)$$

Proof: (36) is proved in [START_REF] Kong | Prediction-based controller for linear systems with stochastic input delay[END_REF]. Observing that $h(t) = D_0\dot U(t)$, (37) is obtained with similar arguments.

Lemma 4: There exist $N_X, N_w, N_\mu > 0$ such that, for all $j \in \{1,\dots,r\}$ and $t \ge \overline{D}$,

$$\|\tilde v_j(t)\|^2 \le N_X|X(t)|^2 + N_w\|w(t)\|^2 + N_\mu\|\mu(t)\|^2. \qquad (38)$$

Proof: From the definition (9) of the input estimation error, it follows that

$$\|\tilde v_j(t)\|^2 = \int_0^1\big(U(t + D_j(x-1)) - U(t + D_0(x-1))\big)^2 dx \le \frac{4(\overline{D} + D_0)}{\underline{D}}\big(\|\bar v(t)\|^2 + \|\mu(t)\|^2\big).$$

Besides, from the inverse of the backstepping transformation (12), which is

$$\bar v(x,t) = w(x,t) + Ke^{(A+BK)D_0 x}X(t) + \int_0^x K D_0\,e^{(A+BK)D_0(x-y)}Bw(y,t)\,dy, \qquad (40)$$

it follows, using Young's and Cauchy-Schwarz inequalities, that $\|\bar v(t)\|^2 \le N_1|X(t)|^2 + N_2\|w(t)\|^2$ with the positive constants $N_1 = 3|K|^2 e^{2|A+BK|D_0}$ and $N_2 = 3(1 + |K|^2 D_0^2 e^{2|A+BK|D_0}|B|^2)$. Hence, (38) follows with $N_X = 4N_1(\overline{D} + D_0)/\underline{D}$, $N_w = 4N_2(\overline{D} + D_0)/\underline{D}$ and $N_\mu = 4(\overline{D} + D_0)/\underline{D}$.

Footnotes:
¹ Time stamping of the exchanged data can be used, but requires the controller internal clock and the system one to be synchronized, which could be difficult to guarantee in practice.
² For instance, if the current delay value is much larger than its average, leading to an over-estimation of the prediction horizon.
³ Namely, in Equation (31), involving itself various other parameters, such as the intermediate constants chosen in (a)-(e) or introduced in Lemmas 3 and 4.
⁴ The subscript $i$ is omitted in this section, as the probability transitions do not depend on the initial delay value. This is consistent with the fact that the expectation in (5) is conditioned by the initial delay value. Besides, to avoid a conflict between the initial conditions in (2) and their discretized version used in simulation, we denote these initial conditions as $P_j(0, 0^+)$.
04114237
en
[ "shs.anthro-se", "shs.art", "shs.psy" ]
2024/03/04 16:41:24
2023
https://hal.science/hal-04114237v1/file/Affective%20Guidance%20in%20the%20Creative%20Process%3A%20Ritual%20Anthropology%20as%20a%20model%20-%20The%20Case%20of%20Jackson%20Pollock.pdf
Affective Guidance in the Creative Process: Ritual Anthropology as a Model - The Case of Jackson Pollock

Keywords: Affects, Creative Process, Ritual Anthropology, Jackson Pollock, Methodology

INTRODUCTION

"If we were to ask [an artist] why he did this or changed that, he might not be able to tell us. He does not follow any fixed rules. He just feels his way" (Gombrich 1951: 15). Ernst Gombrich's remark points to the affective dimension of the creative process, which this article examines through a framework dealing with an anthropological reading of art, Alfred Gell's Art and Agency (Gell 1998). In Gell's account, artists can be described as having an "active" or "passive" position relative to their artworks, but the transition between the two states is left unclear. Finally, the model is compared to existing ones in the psychology of creativity to assess the place given to affects in relation to agency.

FROM PASSIVITY TO AGENCY

Gell's model aims to encompass the sociology of people and artworks acting upon each other. In this article, my primary focus is the creative process leading to the artwork's completion rather than its reception by viewers. From that standpoint, we will examine his distinction between active and passive positions. Gell states, in a paragraph actually referencing Pollock's art, that the artist is active when the passive artwork is but "a congealed 'trace' of the artist's creative performance" (Gell 1998, 33). At this point, a problem of terminology arises. For Gell, the artist is active at that moment because his artwork can act upon the viewers. However, in our case, the first viewer of the completed artwork is Pollock himself. If viewers attribute agency to the artist via the artwork, whose agency is Pollock perceiving when looking at his own works? The agency perceived is that of Pollock in the recent past, during his actual creation process. Thus, the artist is passive as a viewer of his own completed works, and active while producing them. Here is how Pollock phrases it:

When I am in my painting, I'm not aware of what I'm doing. It is only after a sort of "get acquainted" period that I see what I have been about. I have no fears about making changes, destroying the image, etc., because the painting has a life of its own. I try to let it come through. It is only when I lose contact with the painting that the result is a mess. Otherwise there is pure harmony, an easy give and take, and the painting comes out well. (Karmel 1999, 18; original emphasis)

For the artist, the artwork appears to be more than just a material "index", as suggested by Gell's Peircean terminology: there is a way of engaging with the painting during the process that makes Pollock describe it as a form of life. This engagement is discussed extensively in Tim Ingold's Making, where the author advocates for a never-ending process (Ingold 2013, 96), an idea I partially reject, as we will see later. For now, let us examine how, from Pollock's point of view, his actions during the creative process lead to the feeling of agency: it appears less as mere object-making and more like an exchange taking place between two agents. Critically though, this "life-giving process" is guided by a specific type of affect Pollock identifies as "harmony". The painting is felt as complete when this harmony is achieved during the act of painting, provided that the artist maintains contact with it. Feeling harmony, in other words, is a sign that the process is going in the right direction, and amplifying this harmony to a certain level becomes the goal of the process.
When the right level of harmony is felt by Pollock, as we will see, this marks the switch from an active to a passive position: he can then cease making changes to the painting's surface. He then becomes a passive observer, affected by the achieved level of harmony in the painting, an affect that eliminates the need for further work on the artwork. This aligns with one of Gell's definitions of an artist's passivity in relation to an artwork: when the artwork "possesses the characteristics which motivate its selection by the artist" (Gell 1998, 30). In fact, there are two moments in the creative process where this type of passivity occurs. In the first moment, as we've just seen, the artwork is "selected" as complete when it displays qualities that make Pollock feel the right level of harmony. The second moment takes place at the very beginning of the creative process, before any actual work has begun. This is illustrated by the long-running competition Pollock felt he was in with Picasso, as reported by artist Lee Krasner, Pollock's wife:

...there's no question that he admired Picasso and at the same time competed with him, wanted to go past him… I remember one time I heard something fall and then Jackson yelling, 'God damn it, that guy missed nothing!' I went to see what had happened. Jackson was sitting, staring; and on the floor, where he had thrown it, was a book of Picasso's work... (Karmel 1999, 36)

Here, Pollock appears deeply frustrated by Picasso's art, given that Picasso has presumably materialized many ideas into finished artworks, ideas Pollock would have liked to realize himself. Frustration, by definition, implies a strong limitation of agency. If Pollock is frustrated due to his inability to produce artworks comparable to Picasso's, this moment aligns with Gell's definition above. It is the particular characteristics of Picasso's artworks that lead Pollock to "select" them, and it is Picasso's artworks in particular that trigger the most frustration in Pollock among the many works by numerous artists he saw. Pollock experiences the full affective cycle as follows: 1) passivity, sparked by seeing someone else's art; 2) agency, during the hands-on work of making his own artwork and aiming for a balance of harmony; 3) a return to passivity, once that harmony is achieved, signaling that the artwork is complete. Gell's analysis doesn't account for this chronological transition from one position to another. Although the switch from agency (artist as creator) to passivity (artist as first viewer of the completed artwork) can be understood from these two vignettes, the switch from initial passivity to creative agency is less obvious. This is where Favret-Saada's ethnographic study of the dynamic of affects provides a useful model.

RITUAL DYNAMIC AS A MODEL

Gell makes several references to sorcery in his book, specifically volt sorcery, to illustrate two points. First, a "prototype" is distinct from the artwork that it shapes: as an example, the drawing of an apple, the artwork, is different from the apple that forms its prototype. Second, sorcery provides insight into how a prototype "acts" upon the other agents within Gell's system. We will come back later to what serves as Pollock's prototype. In the meantime, I will continue to examine the sorcery analogy to uncover its inherent affective dynamics. I'll then adapt these dynamics to our understanding of the creative process, which could give us a fresh perspective on the entire artistic procedure.
Favret-Saada's 1970s ethnography of sorcery in rural France identifies three positions that exist, like Gell's, within the active-passive dichotomy. But unlike the latter, her ethnography provides a description of the shift from passivity to agency. According to Favret-Saada, rural sorcery is a farmer's last recourse to end a series of adverse events (unexpected livestock deaths or failed crops). These events are said to be the result of a sorcerer's curse. The farmer is the victim, thus in a passive position. To regain agency, the victim undergoes a ritual. A ritual specialist, or "unsorcerer", performs a visually striking ritual action (e.g., inserting rusty nails in a boiled ox's heart) in front of the victim. This violent ritual action is perceived as the means to fight the sorcerer's own violence inflicted upon the victim: the ritual action has to be as violent as the sorcerer's action. Favret-Saada shows that the sorcerer position is actually always a fictive one: the victim identifies someone as their sorcerer without the latter being aware. It is the victim who, in a paranoid reflex, interprets events as being aggressively directed against them from a common origin. Thus, Favret-Saada's understanding is that the specialist's ritual action is a pragmatic example of the affect the victim has to harness to escape their paranoid tendencies, a "shifter" in Favret-Saada's parlance (2009, 88). In other words, the specialist provides an affective guidance to the victim. Violence is the specific affect that the farmer has to summon in order to regain agency against their identified source of the curse. After the specialist's ritual, smaller daily rituals, prescribed by the specialist, are performed by the farmer, relying on the violent affect experienced during the main ritual. The mobilization of this violent affect by the farmer positions them as an agent. These smaller rituals are prescribed as a way to avoid any relapse into paranoia, the farmer becoming hereby their own private ritual specialist. We can now interpret Pollock's reaction to Picasso's art in the same fashion. The farmer's adverse events reduce or halt his agency, like Picasso's artworks do for Pollock. The farmer then regains agency through the violence of the visual ritual performed by the specialist. Here is the main difference between rural sorcery and the creative process: Pollock is his own specialist, performing the necessary actions to regain agency. There are exceptions to this difference though. Pollock, as we will see, has been helped to refine his creative process by two types of specialists: art teachers and therapists. In both the sorcery and the creative cases, through different means, specialists aim to foster further autonomy of their patients, another way to regain agency.

Under that model, what would Pollock's visually striking pseudo-ritual look like? Following Krasner's indication of Pollock's perceived rivalry with Picasso, we will examine a pre-abstract painting of the former where the link between the two artists is easily understandable. Stenographic Figure (fig. 5, top right) is a mostly figurative painting from 1942. The art historian Jeremy Lewison proposes to see it as an inverted reworking of Picasso's cubist Interior with a girl drawing (fig. 5, top left), created seven years earlier (Lewison 1999, 22), since the artworks share four elements of composition (fig. 5, bottom). In Gell's terminology, Picasso's painting is the prototype for Pollock's. We can now imagine Pollock flipping through the book on Picasso's art and "selecting" this painting, for it exhibits specific characteristics. Pollock might not be able to express what those characteristics are, yet he might have felt affected by this painting more than by others. Just as the farmer does, Pollock then makes Picasso his sorcerer of sorts, since the painting that affects Pollock has been made by Picasso. But instead of adverse events for the farmer, Pollock perceives in Picasso's painting the very specific level of harmony seen earlier, and his frustration comes from not being the source of that felt harmony. How to regain agency then? What are the pseudo-ritual actions to take? In this example, the first step seems to be imitating the general composition from which, for Pollock, most of the harmonious feeling seems to come. This is a way to tap into that something felt in Picasso's art.
But then Pollock introduces differences seemingly aimed at affirming his own agency (for simple imitation would leave Picasso's unchallenged): Picasso's composition is inverted on the vertical axis, and the palette is shifted to a colder hue. The eponymous girl in Picasso's painting is peacefully drawing; his execution exhibits mastery. In contrast, Pollock's rework of the main character appears more nervous. However, the main difference between the two artworks doesn't lie in the way similar elements are drawn: Pollock's painting displays many black lines, the "stenographic figures" of its title. Since those added lines are absent from Picasso's version, their origin has to be clarified. A note Pollock wrote in 1950 may point us down a path:

Technic is the result of a need-/ new needs demand new technics-/ total control- denial of / the accident-/ States of order-/ organic intensity-/ energy and motion / made visible-/ memories arrested in space, / human needs and motive-/ acceptance- (Karmel 1999, 24)

Mentions of "control and the denial of the accident" are probably a response to an art critic condemning Pollock's later abstract paintings for their apparently simplistic technique. Pollock's defense rests on the notion of "need", a term he uses here three times. It is not just the artist's will that dictates the creative process: there is a deeper need to fulfill with the help of a particular technique. The technique then allows the capture of "memories in space", a formulation echoing Gell's "congealed trace" seen earlier. Yet, contrary to the latter, Pollock's formulation seems spoken from the perspective of the artist during the creative process. The memories in question could be those of the "give and take" process, the dialogue taking place with the painting in the making. For the right harmony to come about, this dialogue needs to happen, motivated by Pollock's affects, translated into movements of applying paint on the canvas. This idea is condensed in the expression "make energy and motion visible". The painting could thus be seen as the evidence of a fight rooted in his need to recover agency over, in our example, Picasso. In this light, the additional lines we see in Stenographic Figure are ways to render the fight against Picasso. To be more precise, they are ways to enhance Pollock's first rework of Picasso's composition. On the far right of his composition, for instance, one element painted in white, blue, and ochre is overpainted with black lines that highlight and reinforce the impression of movement (fig. 6). We can deduce that these lines are added after Pollock completes the first version of the composition. Conventionally, an artist signs his work after completing it. In this case, Pollock's signature is partially covered with the same black lines as the element above. These added lines can thus be viewed as a way to "make energy and motion visible". They enable Pollock to witness in his painting the same kind of harmony he witnessed in Picasso's, despite the poor execution of his first rework. Picasso's painting is thus the "prototype" of Pollock's in the two ways described by Gell. First, in terms of "appearance", they share compositional elements. But above all, Picasso's artwork is a prototype in terms of "presence" (Gell 1998, 26). Pollock's first rework was perhaps a good start (appearance), but to achieve the proper level of harmony experienced in front of Picasso's painting, extra movements had to be suggested.
Although these lines partly mask the initial composition, they are the price Pollock pays to infuse in his artwork the same "energy" as Picasso's original (presence). This outlines what the visually striking pseudo-ritual Pollock performs would look like. Pollock's initial passivity in front of Picasso's art is actually his frustration with the absence of his own agency, all the while feeling the right level of harmony triggered by the Spaniard's artwork. The frustration of lost agency creates a need to regain that capacity. Pollock starts by imitating the elements causing this harmonious feeling for him. During the creation of the painting, the successive traces that he paints change his overall impression of the whole canvas, an impression that guides him as to how close he is to achieving the right level of harmony. As that right level begins to affect him, the initial need to fight Picasso disappears. He can now witness a novel artwork, the result of his own agency, which allows him to experience the specific harmony he perceived while discovering Picasso's cubist painting. In this view, the creation of artworks is for artists foremost a fight over existing artifacts from their surrounding material culture that affect them in peculiar ways, but without the experience of their own agency.

AFFECTIVE GUIDANCE OVER A LIFETIME

This model reflects three phases in the creative process of producing a novel artifact. It starts with the initial loss of agency caused by an artwork displaying the right level of harmony. Pollock then engages in the pseudo-ritual as a sequence of actions aimed at regaining agency, guided by the increasing level of the right harmony displayed by the artwork in the making. Finally, a feeling of completion, once that right level is reached, pushes the artist into another kind of passivity: witnessing his now completed work. What this model doesn't do is clarify why artists engage in the process multiple times. As we have seen, the primary difference with Favret-Saada's ethnography is that the artist assumes the position of the specialist after being affected by someone else's artifact. One reason for this cyclic creative engagement, unlike the farmer's "unsorcering" ritual, might be that the initial trigger for the artist and the result of the artist's efforts to regain agency are of the same kind: artifacts. Indeed, following the regaining of agency through the creative process, the artist relapses into a kind of passivity: witnessing an artwork displaying the right level of harmony. There are then limited possibilities to get that feeling of agency outside of engaging in artwork-making again. Let us now examine some elements of Pollock's career to see that dynamic unfold throughout his lifetime. This dynamic is not the mere repetition of the model above for a single artwork: through the iterations, Pollock learns to make "energy and motion visible" in a unique way, fine-tuning his creations to more accurately grasp the right harmony he seeks. The genesis of Pollock's famous "dripping" technique, where paint is poured onto a canvas laid on the floor, has been tentatively traced to many places: a house painter splashing clean his brushes on a board; Pollock's possible ocular migraine producing visual hallucinations; a memory of his father drawing patterns on the ground by urinating (Naifeh and White Smith 2007, 488).
More probable prototypes are the works of Janet Sobel (1894-1968), "a self-taught, fifty-two-year-old grandmother from Brooklyn" (ibid., 475), which Pollock sees prior to his abstract period. Sobel produces small paintings made of curvy splashes covering the whole surface of the support (fig. 1). According to the model, Sobel's artworks affect Pollock to such a degree that he feels the need to surpass their "energy" by creating similar but much larger compositions. Pollock's well-known abstract paintings occupy roughly 10,000 times more surface than Sobel's. The way they affect him thus clearly matches or surpasses Sobel's. A common entry point in Pollock's work is his renowned abstract paintings (fig. XXX). Their lack of figurative elements helps us focus on the presence felt by the artist rather than their mere appearance (as is partly the case for Stenographic Figure). It can thus be said that Pollock is searching for an inner prototype guiding his actions. But if we only consider the two types of passivities, the affecting prototypes (right harmony, without agency) on one side, and completed artworks (right harmony, with past agency) on the other side, we return to where Gell was, not taking into account the transition from one to the other. So we need not only to consider prototypes of composition (Picasso, Sobel) but prototypes of action. It is during the process of making the artwork, the pseudo-ritual, that energy and motion are made visible, and that presence, the "easy give and take" exchange with the artwork, can be experienced by Pollock. Let us examine some of these possible action prototypes. Charles and Sanford Pollock, two of his four older brothers, have an artistic practice. Compared to theirs, Jackson's technique might be referred to as "pictorial dyslexia" (Karmel and Varnedoe 1999, 55): a frustrating incapacity to master figurative painting techniques, against which Pollock has to fight. An action prototype that might have helped him overcome this struggle can be found in an anecdote reporting how Pollock's teacher asks his teenage students to mix color paints in a liquid (Naifeh and White Smith 2007, 117). After moving to New York in the 1930s, Pollock enrolls in Thomas Hart Benton's courses at an art school. Benton is one of the rare artists that Pollock names publicly: "my work with Benton was important as something against which to react very strongly, later on; in this, it was better to have worked with him than with a less resistant personality who would have provided a much less strong opposition. At the same time, Benton introduced me to Renaissance art" (Karmel 1999, 15). Benton's work is fairly figurative. As a "regionalist" artist, he developed his style depicting farm work and American landscapes with exaggerated curves (fig. XXX, top and middle). He masters traditional techniques of figuration, a source of frustration of agency for a "dyslexic" Pollock. Nevertheless, Pollock tries, according to one classmate, "to replicate Benton's technique and subject matter 'down to the last brush stroke'", but only achieves "jagged imitations of Benton's sure, undulating lines" (Naifeh and White Smith 2007, 173, 202). Even if jagged, Pollock does imitate the gestures, as action prototypes, in order to understand the movements in the paintings: in contact with technically perfect artworks (appearances), Pollock needs to appropriate their "energy" (presence) to regain some proper agency in his own paintings (fig. XXX, bottom). In 1936, Pollock participates in the Experimental Workshop led by Mexican muralist artist David Alfaro Siqueiros, who aims at making explicitly political figurative artworks fighting American imperialism. In his workshop, Siqueiros asks the participants to prepare painted backgrounds on panels for his personal artworks.
With panels placed on the ground, paint is splashed and poured directly from the can. Pollock displays little interest in the figurative aspect of Siqueiros's works, yet is quite attracted by that preparatory phase, for which the muralist artist developed a theory of "controlled accidents" (Karmel and Varnedoe 1999, 46), a wording similar to Pollock's note above. Pollock thus begins experimenting with the fluidity of paint to cover surfaces well before his abstract period. He is in the active position of the specialist looking to solve his pictorial dyslexia through the action prototypes of others. Several more years will pass before he puts this solution into practice for himself, as a self-sufficient way of making art, independent of traditional drawing techniques. Pollock pursues therapy for his chronic drinking problem. In 1939, he begins working with Doctor Joseph L. Henderson, a direct disciple of Jung. Henderson is also very interested in the work of Picasso. As is usual for a Jungian therapist, Henderson engages with Pollock's drawings, still figurative at the time, blending Picassoian motifs with Jungian imagery. Henderson confesses that he did not seek to treat Pollock's alcoholism, but rather "offered occasional criticisms which [he] thought might be helpful in order to free him from his influences (mainly Picasso) that seemed to inhibit his own native ability" (Leja 1993, 139). Henderson does not assist Pollock with his alcoholism but contributes to the artist's fight against Picasso. Henderson occupies the typical position of the ritual specialist in Favret-Saada's ethnography: helping the victim escape the sorcerer. Yet, Henderson does not produce the visually striking ritual to free Pollock; it is Pollock who performs the necessary actions in order to free himself. If Pollock, through therapy, cannot entirely control his alcoholism, he does, however, further develop his practice based on Picasso's works, and regains some agency. In a drawing resulting from the interaction with Henderson, for instance, the horse's head in Picasso's Guernica is reworked. He adds a floating mane, with Bentonian-like curves as action prototypes, to his version (absent in Picasso's), thereby increasing the head's "energy and motion", which he would have felt was missing from his drawing (fig. XXX).

[Figure: Horse's head by Picasso and Pollock. Left: Pablo Picasso, Guernica, 1937 (detail). Right: Jackson Pollock, plate 40 in Psychoanalytical drawings (Wysuph 1970, 72).]

Stenographic Figure is made in 1942, and Mural, probably the gateway to the abstract Pollock we know, is produced in 1944, the same year Pollock reportedly sees Sobel's work (Zalman 2015, 20). Mural, a 6-meter-long painting, is a commission from the gallerist and collector Peggy Guggenheim, which pushes Pollock to change his process. Accustomed to the easel, he now has to walk along the canvas, a new action prototype for him, to apply the layers of paint. While often considered abstract, the painting is actually built on a base of "stick figures" (Karmel and Varnedoe 1999, 90), partly covered with other colors, following their rhythm. Phosphorescence (1947), entirely abstract to the spectator's eyes, is revealed on X-rays to contain an early sketch made of these figures, which Pollock "veils" (ibid., 104). Krasner relates: "many of the most abstract [artworks] began with more or less recognizable imagery … Once I asked Jackson why he didn't stop the painting when a given image was exposed.
He said 'I choose to veil the imagery'" (Karmel 1999, 36). This shows the evolution of Pollock's practice. At first, there is the need to replicate the "energy" of Picasso's art. Then, work after work, as he acquires confidence, the addition of lines, the veiling, becomes his specific way of making art, his action prototype. He fully becomes a specialist able to regain agency during creation, and to witness afterward the "life of its own" (presence) the painting has gained. In August 1949, Life magazine publishes an article on Pollock, illustrated with his abstract artworks, titled "Jackson Pollock: Is he the greatest living painter in the United States?" This is a consecration of sorts, for it gives his work visibility in mainstream media. The tone of the article, though, might seem mocking at times. The apparent ease of making, which his paintings exhibit, suggests that they are "simple" (Friedman 1995, 140). Pollock has to prove to others and to himself that this is not the case, that his paintings are not, as a critic writes at the 1950 Venice Biennale, "chaotic" or "lacking any technique" (Karmel 1999, 68). Pollock has to find a way out. In 1951, he notes in a letter that he is running tests on Japanese papers (Friedman 1995, 171) given to him by his artist friend Tony Smith. Pollock pours lines of black ink on their surface (Delahunty 2015, 17). The paper absorbs and diffuses the ink in peculiar ways, another type of "controlled accident" serving as an action prototype. This reaction of the paper might have echoed Pollock's occasional use of unprimed canvases since 1947 (the canvas fibers spread the paint in similar ways) for the 1951 series of black paintings that directly succeeds his abstract period. He explains: "I've had a period of drawing on canvas in black-with some of my early images coming thru-[I] think the nonobjectivists [abstract expressionist artists] will find them disturbing-and the kids who think it simple to splash a Pollock out" (Friedman 1995, 174). The paint he uses is more fluid, and he employs at times a turkey baster for greater precision while drawing. He can now show that he controls what he is doing while still achieving results as harmonious to his eyes as the abstract works (fig. 7). This line no longer "veils" a weak copy of a Picasso, or covers the initial rhythmic stick figures, but is a drawing technique, an action prototype, that belongs to him. This is his line. It "makes the energy and motion visible" even with simpler designs. Whether figuration reappears or not, it is his mastery of the line which is at stake. With it comes a certain maturity of the specialist position. This maturity is the confidence to be able to create visually striking artworks to fight the energy of the works of Picasso, Sobel, and others along the way. However, in 1951, he also fights his own abstract paintings from the previous years. Indeed, the abstract Pollock became internationally famous (Zimmer 2016, 221), but was criticized for the apparent ease of his production. For the 1951 Pollock, the abstract Pollock took the sorcerer's position. Even though the two Pollocks are the same person, in terms of his experience of creation, the abstract Pollock is someone else: the abstract paintings are material proof for the 1951 Pollock that there was then a Pollock able to produce artworks which now affect him in a very specific way. After showing these new works, one critic writes that Pollock "found his own way of dealing with human experience". Yet, the global audience is now used to the abstract works and finds it difficult to see the black paintings as continuous with the earlier works (Delahunty 2015, 19). In 1952 and 1953, Pollock uses colors again, contrasting with the black of the preceding year.
Those years, however, are dominated by depression and the return of alcoholism (Zimmer 2016, 239). Despite these detours and the criticism, Pollock seems to have never given up his path of "making energy and motion visible" through his line, building confidence, painting after painting. However, this confident position is fragile: it can be occupied at times during the creative process but never secured. Indeed, when a painting approaches completion, Pollock reverts to the passive position we've seen earlier, being a witness of his own art, seeing but "memories arrested in space" of a time when he was about to feel the specific harmony he is seeking.

CONCLUSION

With the model we have discussed, each painting is the result of affective guidance aimed at overcoming a felt passivity, defined as the feeling of the right harmony displayed in an artifact, but without the accompanying feeling of agency. This process occurs for artifacts made by others, but along the way also for the ones made by oneself. The critical point here is that agency, by definition, can only be felt by the artist through his creative actions. The artist's double transition from passive to active, at the start of the process, and from active to passive, at the end, forms the basis of the dynamic by which a body of works is slowly constituted. Artworks, in this view, are but means for the artist to experience, during the act of creation, a specific type of affect, a specific harmony in the case of Pollock. Competition, initially against the work of others and then against one's own, sustains the activity throughout a lifetime. Since each successive work has to match or surpass the previous one's "energy" or presence, artists are incrementally guided throughout their careers to perfect the means, the action prototypes, to feel their specific affect. By focusing on affects as the main motivation of the artist's action, the gap between the passive and active positions in Gell's argument seems bridged. Affects are specific to each person: what affects me might not affect you, and the way I can produce an artifact that can affect me in the way I need will not be the same as yours. In other words, this is a subjective matter. Yet, the expression of this subjectivity is related to Howard Morphy's critique, in the pages of this very journal, of Gell's lack of consideration for the artifacts' "meaning" beyond their semiotic content. If art is a "mode of action", it must integrate how "people who…interact with works of art…do so with some background of knowledge and experience…of the artwork itself and of innumerable previous interactions involving it, or similar objects" (Morphy 2009, 14). Here, I propose that affects participate in the meaning-making process one goes through, since background knowledge and experience can be expressed in the harmony Pollock finds during creation: his paintings feel harmonious to him because they fit the previous experience of harmonious art Pollock had, with his own works or those of others. To Pollock's eyes, his works are meaningful because they make him feel that specific affect he has been after his whole life. Or, as Pollock states: "painting is self-discovery. Every good artist paints what he is." (Leja 1993, 186). Through art-making, as a mode of action, one experiences the self as an agent capable of producing that specific affect, rooted in subjectivity. The model I propose builds on Gell's focus on agency, enriched by the dynamic of affects found in Favret-Saada's ethnography of sorcery.
It is built on historical rather than ethnographic data, for this article's aim is to provide a perspective on a long-term practice, challenging to study through ethnography alone. Applications of this model are currently being tested in close collaboration with artists, in particular regarding its limitations, such as the somewhat linear description of the creative process. For instance, many compositional and action prototypes are involved in the creation of new artworks. The development of an apt model for the creative process may require an interdisciplinary effort. In conclusion, I will therefore compare my model to one elaborated by a succession of psychologists. In 1926, Graham Wallas, following an 1891 speech by Hermann von Helmholtz, identifies four stages in the "art of thought", which we can assimilate to the creative process at large: preparation ("hard, conscious, systematic and fruitless analysis"), incubation ("unconscious mental exploration"), illumination ("appearance of a happy idea"), and verification (testing and shaping of the idea) (Wallas 1926, 80-1). In 1961, Mel Rhodes proposes the "four P's of creativity". For our purpose, the Person refers to the artist, the Process encapsulates Wallas' four stages above, the Product means the artwork, and the Press is the global environment of creation. A person creates a product by going through a process surrounded by a pressing environment, made of "sensations and perceptions from both internal and external sources [reflecting] uniquely upon the originator's self" (Rhodes 1961, 308). Yet Rhodes advocates for the study of each "P" in relative isolation from the others to dissipate the "fog" surrounding what creativity actually is. This then results in a lack of dynamism in the model. Vlad Glăveanu [START_REF] Glăveanu | Rewriting the Language of Creativity: The Five A's Framework[END_REF] thus expands these individual-focused definitions and proposes the "five A's model", a more dynamic way of linking Actor (a person performing social roles), Action (psychological and behavioral dimension of the process), Artifacts (as cultural objects, broader than bare material products), Audiences (the social aspect of Rhodes' Press), and Affordances (physical aspects of the Press). This change of terminology widens our understanding of each component. For instance, "actor" implies the existence of an audience and an action to be performed for it, whereas "person" tends to be understood in narrower terms: a person does not necessarily need an audience and an action to be defined. Thus, creativity, in this view, takes place in the interrelation between the five A's. Glăveanu's model is for now proposed as a framework for future research and not as the result of a study. The five A's could also be mapped onto Gell's terminology: actor corresponds to the artist; artifact to the index or artwork; audience to the recipients; and affordances to a form of prototypes. Having noted the similarities, and without dwelling here on the differences between these psychological and anthropological models, it seems worth underlining that in both cases affects can be identified as the vector of the interrelations between the terms. From what we have seen with the Pollock case, I would argue that affect is the vector. Pollock, as actor, is affected by the affordances of artifacts made by others or by himself.
He performs his art -the aptly named "action painting" -to affect an audience, in the middle of which he stands as the first viewer of his finished paintings. His actions are guided by the quest for an affect, toward completion. Positioning affect at the center of these models opens interesting research possibilities across various academic fields. As I have tried here, examples drawn from art history can be examined for the affective connections that artworks establish with each other, following Aby Warburg's notion of "Pathosformeln". This enables anthropology to link these longitudinal lines of creative processes to other forms of affect-driven, meaning-making actions such as rituals in non-Western contexts, thus touching on processes that might be universally human. Furthermore, methodologies from experimental psychology could be adapted to conduct fieldwork with artists in their environment, to examine precisely how affects drive meaningful actions. Such an interdisciplinary effort would broaden our understanding of art and creativity.
patients, another way to regain agency. Under that model, what would Pollock's visually striking pseudo-ritual look like? Following Krasner's indication of Pollock's perceived rivalry with Picasso, we will examine a pre-abstract painting of the former where the link between the two artists is easily understandable. Stenographic Figure (fig. 5, top right) is a mostly figurative painting from 1942. The art historian Jeremy Lewison proposes to see it as an inverted reworking of Picasso's cubist Interior with a girl drawing (fig. 5, top left), created seven years earlier. Pollock might not be able to express what those characteristics are, yet he might have felt affected by this painting more than by others. Just as the farmer does, Pollock then makes Picasso his sorcerer of sorts, since the painting that affects Pollock has been made by Picasso. But instead of adverse events for the farmer, Pollock perceives in Picasso's painting the very specific level of harmony seen earlier, and his frustration comes from not being the source of that felt harmony. How to
Fig. 5. Stenographic Figure as an inverted Picasso. Top left: Pablo Picasso, Interior with a girl drawing, 1935, oil on canvas, 130x195 cm, MoMA, New York. Top middle: Jackson Pollock, Stenographic Figure, 1942, oil on canvas, 102x142 cm, MoMA, New York. Bottom middle: Pollock's painting inverted on the horizontal axis and compared to similar elements in Picasso's (author's modifications). Right: detail of Stenographic Figure.
A common entry point in Pollock's work is his renowned abstract paintings (fig. XXX). Their lack of figurative elements helps us focus on the presence felt by the artist rather than their mere appearance (as is partly the case for Stenographic Figure). It can thus be said that Pollock is searching for an inner prototype guiding his actions.
Fig. 1. Janet Sobel, untitled, 1946, oil and enamel on cardboard, 45 x 35 cm, MoMA, New York.
Top: Benton, Chilmark landscape, 1922. Middle: with underlined curves. Bottom: Pollock, ca. 1935, oil on fiberboard, Smithsonian American Art Museum, Washington DC.
Fig. 7. Jackson Pollock, Number 7, 1951, 143x167 cm, enamel on canvas, National Gallery of Art, Washington DC.
Even though the two Pollocks are the same person, in terms of his experience of creation, the abstract Pollock is someone else. The abstract paintings are material proof for the 1951 Pollock that there was then a Pollock able to produce artworks which now affect him in a very specific way.
Furthermore, his abstract paintings probably have a more precise effect on the 1951 Pollock than Picasso's artworks had on the abstract Pollock. The abstract works were made by the abstract Pollock to surpass the presence of the Spaniard's works. In turn, the 1951 Pollock feels the need to escape the abstract Pollock's paintings to regain his agency. This is the moment in Pollock's career that best encapsulates the self-competition artists entertain. It is probably present earlier in the development of the artist, but oftentimes obscured by the competition against others.
00572430
en
[ "spi.meca.msmeca", "phys.meca.msmeca" ]
2024/03/04 16:41:24
2007
https://hal.science/hal-00572430/file/Huon2007.pdf
Vincent Huon email: [email protected] Bertrand Wattrisse email: [email protected] Moulay Saïd El Youssoufi André Chrysochoos Mechanical Behavior of Terra Cotta Ceramics Characterized by Kinematic Full-Field Measurements Keywords: Mechanical properties, Clays, Measurement, Construction materials Introduction Terra cotta ceramics are often used in residential house building. They are nearly always associated with other civil engineering materials, e.g., terra cotta beams with a core of prestressed concrete are widely used for building floors, terraces, or flat roofs. In this kind of composite element, terra cotta serves as a sacrificial formwork, i.e., concrete is considered to be the only material that supports prestressing forces. However, from a mechanical standpoint, terra cotta has remarkable strengths (i.e., often better than those of standard concrete) that are often not effectively utilized. The work presented in this paper is a part of an overall study carried out to characterize the anisotropic thermohygromechanical behavior of terra cottas used as building material in civil engineering projects. From an industrial standpoint, the goal is to develop numerical computer-aided design tools that can be used to optimize structural elements. This optimization must simultaneously take into account parameters related to thermal comfort and those associated with the mechanical strength of the designed structure. In the recent past, the classical use of terra cotta bricks did not require in-depth knowledge of their mechanical properties, since they were mainly loaded in compression. Now, the use of terra cotta as a composite structural component calls for a more rigorous analysis of its contribution to the overall behavior of the structure. Thus, we studied the heterogeneous and anisotropic properties of this material using strain field measurements. These measurements were obtained by digital image correlation (DIC) techniques with the aim of checking the degree of heterogeneity of the ceramic structures and characterizing the anisotropy of the terra cotta. This paper is structured as follows: First, we review the material properties that led us to propose a transverse isotropic model to describe the elasticity of ceramics. Second, we show how the elastic parameters were locally derived from tests conducted on elementary structures considered as representative volumes of the material. Finally, to check the influence of the discrepancy noted in the elastic constants, we compare strain patterns predicted by a three-dimensional (3D) finite element (FE) computation with those obtained by the DIC. Terra Cotta Argillaceous soils are the primary products for manufacturing terra cotta. In most cases, they are used with additives (sand, limestone, etc.) to enhance the characteristics of structural elements, or to modify the functional characteristics or the aspect of the finished products. Clays are hydrated aluminosilicates whose lamellar structure can fix a certain quantity of water between folia. Four stages are required to obtain the end product: 1. Preparation: To obtain an argillaceous mixture after proportioning and crushing the components; 2. Forming: Generally by extrusion; 3. Drying: To eliminate almost all of the water used during the forming stage; and 4.
Firing: The duration depends on the size of the terra cotta components, generally within the 800-1,200°C temperature range. These different stages induce an anisotropic and heterogeneous thermomechanical behavior of the terra cotta structural elements. The extrusion process involves a privileged direction and leads to transverse isotropy of the material. It also induces gradients of elastic properties throughout the structure, especially between the boundary and the core of the extruded parts. These gradients are often amplified and disturbed by the drying and firing processes, because of mass and heat transfers, by the material heterogeneity due to the irregular distribution of the pores, and by the varied grain sizes of the mixture despite the preparation stage. We used sets of specimens randomly extracted from an extruded structure to estimate discrepancies in the elastic parameters. To define the transverse isotropy axes, we introduced L as the direction of extrusion, T as the direction across the layers of terra cotta, and R as the direction in the plane of the layers, i.e., orthogonal to L and T (Fig. 1). The five elastic constants to be determined are then: Young's moduli E_L = E_R and E_T; Poisson's ratios ν_LR and ν_LT = ν_RT; shear moduli G_LT = G_RT. Due to the isotropy of the plane of the layers, the shear modulus G_LR can be written as G_LR = E_L / [2(1 + ν_LR)]. Using engineering notations, the strain-stress relationship may be written as

$$\begin{pmatrix} \varepsilon_{LL}\\ \varepsilon_{RR}\\ \varepsilon_{TT}\\ \varepsilon_{LR}\\ \varepsilon_{RT}\\ \varepsilon_{LT} \end{pmatrix} = \begin{pmatrix} \frac{1}{E_L} & -\frac{\nu_{LR}}{E_L} & -\frac{\nu_{LT}}{E_L} & 0 & 0 & 0\\ -\frac{\nu_{LR}}{E_L} & \frac{1}{E_L} & -\frac{\nu_{LT}}{E_L} & 0 & 0 & 0\\ -\frac{\nu_{LT}}{E_L} & -\frac{\nu_{LT}}{E_L} & \frac{1}{E_T} & 0 & 0 & 0\\ 0 & 0 & 0 & \frac{1+\nu_{LR}}{E_L} & 0 & 0\\ 0 & 0 & 0 & 0 & \frac{1}{2G_{LT}} & 0\\ 0 & 0 & 0 & 0 & 0 & \frac{1}{2G_{LT}} \end{pmatrix} \begin{pmatrix} \sigma_{LL}\\ \sigma_{RR}\\ \sigma_{TT}\\ \sigma_{LR}\\ \sigma_{RT}\\ \sigma_{LT} \end{pmatrix} \quad (1)$$

where ε_IJ and σ_IJ are the components of the strain and stress tensors with respect to the LRT frame. Experimental Protocol Material and Specimens An electron probe microanalysis (EPM) was used to determine the grain size and nature. The maximum grain size was 1 mm, including: • Quartz grains (the most common); • Calcite grains of up to 200 μm in size (the least common); and • A low percentage of small feldspar grains. The chemical composition of the terra cotta studied in this paper is given in Table 1. Building engineering standards do not give any dimensional specifications for terra cotta specimens. Moreover, in the scientific literature, we found no examples of terra cotta specimens adapted for classical mechanical tests such as tension-compression tests, shearing, and bending tests. Thus, we based our study on guidelines for the characterization of cements or mortars with granular sizes similar to that of terra cotta. Although compression tests are generally performed on specimens with an aspect ratio of 2, we decided to use cubic samples of reduced size (sides: 15 mm) in order to ensure an adequate homogeneity of these elementary structures, at the risk of getting a triaxiality effect due to the compression plates. The specimens were machined using a water jet cutting process to avoid geometrical defects. The compression tests, performed on cubic samples aligned with the transverse isotropy axes, gave access to the three elastic moduli and the three Poisson ratios. The shear modulus G_LR was determined in the shearing zone of the Y-shaped specimen whose cross section is given in Fig. 2. For this type of geometry, it is easy to obtain a layer plane parallel to the shearing plane. Experimental Setup The experimental setup involved a 100 kN tension-compression servomechanical testing machine. Digital images were recorded during the test by a charge coupled device (CCD) camera set in front of the sample (Fig. 3).
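Before describing the optical setup further, the compliance relation of Eq. (1) above can be made concrete with a minimal NumPy sketch that assembles the transverse isotropic compliance matrix and recovers the strains produced by a uniaxial compression along L. This is an illustration, not the authors' code: E_L and E_T are the mean values reported later in Table 2, while the Poisson ratios and the shear modulus G_LT are placeholder values, since their identified values are not all quoted in this excerpt.

```python
import numpy as np

# Elastic constants (GPa). E_L and E_T are the paper's mean values (Table 2);
# nu_LR, nu_LT and G_LT are assumed placeholder values, not from the paper.
E_L, E_T = 20.2, 5.0
nu_LR, nu_LT = 0.15, 0.20
G_LT = 4.0

# Compliance matrix of Eq. (1), engineering notation, (LL, RR, TT, LR, RT, LT) order.
S = np.zeros((6, 6))
S[0, 0] = S[1, 1] = 1.0 / E_L
S[2, 2] = 1.0 / E_T
S[0, 1] = S[1, 0] = -nu_LR / E_L
S[0, 2] = S[2, 0] = S[1, 2] = S[2, 1] = -nu_LT / E_L
S[3, 3] = (1.0 + nu_LR) / E_L          # = 1/(2 G_LR) since G_LR = E_L / [2 (1 + nu_LR)]
S[4, 4] = S[5, 5] = 1.0 / (2.0 * G_LT)

# Uniaxial compression of 43.5 MPa along L (stresses expressed in GPa).
sigma = np.array([-43.5e-3, 0.0, 0.0, 0.0, 0.0, 0.0])
eps = S @ sigma
print("eps_LL = %.2e, eps_TT = %.2e" % (eps[0], eps[2]))
print("recovered nu_LT = %.3f" % (-eps[2] / eps[0]))
```

With these inputs, eps_LL comes out near -2.2e-3, consistent with the strain scale of the compression tests reported below.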
The lens axis of the camera was fixed with respect to the frame of the testing machine and remained perpendicular to the surface of the sample. The CCD sensor had eight megapixels (square pixels), distributed in a 3,500 line × 2,300 column grid. The camera provided a maximum spatial resolution of 0.01-0.02 mm/pixel with the "macro" lens used during the tests. For images of maximum size, the camera can capture up to five images per second, digitized in eight bits. Displacement and Strain Measurements The in-plane displacement components were computed by a direct DIC algorithm at each point M_0(i_0, j_0) of a virtual grid defined in the reference configuration. The position of the discrete maximum of the discrete intercorrelation function φ gives the displacement of point M_0 with a one-pixel resolution. Between two images I_1 and I_2 separated by a small strain increment, φ is written, for (k, l) ∈ [-RZ/2, RZ/2]^2, as

$$\phi(k,l) = \frac{\sum\limits_{i-i_0=-CZ/2}^{CZ/2}\ \sum\limits_{j-j_0=-CZ/2}^{CZ/2} I_1(i,j)\, I_2(i+k,\, j+l)}{\sqrt{\sum\limits_{i-i_0=-CZ/2}^{CZ/2}\ \sum\limits_{j-j_0=-CZ/2}^{CZ/2} I_1^2(i,j)}\ \sqrt{\sum\limits_{i-i_0=-CZ/2}^{CZ/2}\ \sum\limits_{j-j_0=-CZ/2}^{CZ/2} I_2^2(i+k,\, j+l)}} \quad (2)$$

In Eq. (2), CZ stands for the correlation zone [i.e., the M_0 neighborhood defining the optical signature of "point" M_0(i_0, j_0)], and RZ for the research zone (i.e., the M_0 neighborhood where the optical signature is tracked). These zones correspond to the domains of variation of (i, j) and (k, l), respectively. To obtain subpixel measurements, we used a polynomial interpolation of φ in the vicinity of its discrete maximum (Oulamara et al. 1988). Heterogeneous strain field analysis considers small "gauge lengths" that induce a poor signal-to-noise ratio. Consequently, a local least-squares fitting of the displacement data is performed before any differentiation: the displacement field is locally approximated in the neighborhood of each point M_0 by a given function. Both the shape of the approximation function and the size of the approximation zone (AZ) may affect the accuracy of the strain measurement. Here, bilinear functions were used, as they are associated with a locally constant deformation. For more information, the reader may refer to (Wattrisse et al. 2001). The image processing performance was tested in experimental and analytic cases corresponding, respectively, to rigid body motion and to homogeneous or heterogeneous strain (Wattrisse et al. 2002). Indeed, analytical checks were necessary, because it is impossible to impose a given strain field on a real material specimen. Experimental Results Global Response of the Samples The compression tests, performed at constant velocity (0.1 mm s⁻¹), were initially conducted on cubic specimens in the three directions L, R, and T. Fig. 4 illustrates the results obtained: the responses in the L and R directions are nearly identical. This feature is consistent with the transverse isotropy hypothesis. At the beginning of the loading process, the cubic structure had an elastic response until the maximum load was reached. Then, the softening part of the curve corresponds to the propagation of microcracks throughout the sample. Compression in the direction T was performed perpendicular to the plane of the layers. This configuration gave the structure greater compliance and seemed to be less favorable for the propagation and coalescence of microscopic cracks. Material Response Based on the previous results, the tests were carried out while limiting the maximum compressive loading in order to ensure an elastic response of the structure.
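As an aside on the image processing just described, the displacement computation of Eq. (2) can be sketched in a few lines. The toy Python function below tracks one grid point between two grayscale images (NumPy arrays) using the normalized intercorrelation of Eq. (2), then refines the discrete maximum with a parabolic fit, a simpler stand-in for the polynomial interpolation used in the paper; the local least-squares smoothing of the displacement field is not reproduced here, and the point is assumed to lie far from the image borders.

```python
import numpy as np

def track_point(I1, I2, i0, j0, CZ=16, RZ=10):
    """Displacement of point M0(i0, j0) between images I1 and I2, per Eq. (2).

    CZ and RZ are assumed even; the point must be far enough from the borders.
    """
    h = CZ // 2
    zone1 = I1[i0 - h:i0 + h, j0 - h:j0 + h].astype(float)
    n1 = np.sqrt((zone1 ** 2).sum())
    phi = np.full((RZ + 1, RZ + 1), -np.inf)   # correlation over the research zone
    for k in range(-RZ // 2, RZ // 2 + 1):
        for l in range(-RZ // 2, RZ // 2 + 1):
            zone2 = I2[i0 + k - h:i0 + k + h, j0 + l - h:j0 + l + h].astype(float)
            phi[k + RZ // 2, l + RZ // 2] = ((zone1 * zone2).sum()
                                             / (n1 * np.sqrt((zone2 ** 2).sum())))
    k, l = np.unravel_index(np.argmax(phi), phi.shape)

    def subpix(c_m, c_0, c_p):                 # vertex of the parabola through 3 samples
        d = c_m - 2 * c_0 + c_p
        return 0.5 * (c_m - c_p) / d if d != 0 else 0.0

    dk = subpix(phi[k - 1, l], phi[k, l], phi[k + 1, l]) if 0 < k < RZ else 0.0
    dl = subpix(phi[k, l - 1], phi[k, l], phi[k, l + 1]) if 0 < l < RZ else 0.0
    return k - RZ // 2 + dk, l - RZ // 2 + dl  # (du_i, du_j) in pixels
```

A typical call would be `du, dv = track_point(img_ref, img_def, 120, 200)`, repeated over every node of the virtual grid to build the displacement field.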
These tests were performed at constant velocity (0.1 mm s⁻¹ during loading and 0.5 kN s⁻¹ during unloading). The strain field measurements were first used to check the boundary conditions during loading. Some examples of strain patterns obtained by DIC are shown in Fig. 5 and correspond to components ε_LL and ε_TT of the strain tensor. Both fields were extracted from a compression test in direction L at maximum loading (≈ 11 kN). We observed slightly heterogeneous deformation fields, which were not in full agreement with the kinematics of a simple compression test. The strain level curve distributions highlighted that this nonuniformity was partly due to the boundary conditions at the specimen surfaces in contact with the compression plates. The irregular patterns noted for level curves could also have been associated with the material heterogeneities and, of course, with the noise on strain measurements. The fields could be interrelated with the FE calculation using displacement boundary conditions corresponding to those observed experimentally. The stress-strain curve corresponding to measurements obtained in the central area of the specimen is plotted in Fig. 6. The compression strain ε_LL is directly derived from speckle image processing, while the compression stress σ_LL is determined by assuming a uniform stress distribution over the cross section. Then, a simple regression is used to estimate the elasticity modulus (E_L in Fig. 6). All results for the estimation of Young's modulus are given in Table 2. Note that the standard deviations obtained on moduli using a random set of about ten samples were about 4 GPa for mean values of about 20 GPa. This discrepancy is very substantial and illustrates the heterogeneity between specimens. Fig. 7 presents estimates of the Poisson ratio ν_LT = -ε_TT / ε_LL for different ε_LL. As previously underlined, this computation assumes a uniform compression state in the central part of the sample. The increasing discrepancy of measurements observed when strain components were around zero was associated with degradation of the signal-to-noise ratio. Therefore, only estimates of the Poisson's ratios for |ε_LL| > 5 × 10⁻⁴ were considered. Tests were performed on Y-shaped specimens to determine the shear modulus G_LR (see Fig. 2). These tests were performed at constant velocity (0.01 mm s⁻¹ during loading and 0.5 kN s⁻¹ during unloading). We noted two phases in the structural response: the first corresponded to a quasielastic behavior, while the second started with the inception of the first macroscopic crack. Fig. 8 illustrates the potential of correlation methods regarding microcrack detection. The displacement field features (here, component u_L) enabled us to visualize crack onset and propagation at the sample surface as soon as its opening was not parallel to the direction of the chosen displacement component. The amplitude of the displacement discontinuity may be related to the crack opening. Fig. 10 presents the ε_RL distribution and shows the significant sliding ("shearing") zones. In these zones, we estimated a mean value of the shear modulus. Validity Checks Cubic Specimens The elastic modulus estimates were checked by comparing the local stress values obtained in the central area by 3D FE computations with the compression stress deduced from the loading data. The numerical model takes transverse isotropy elasticity into account and supposes that the material is homogeneous. The elasticity tensor takes the predetermined mean values into account.
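Stepping back to the constant identification described above: a rough Python sketch of that step is a least-squares slope on central-zone stress-strain pairs for the Young's modulus, and a Poisson-ratio average restricted to points with |ε_LL| > 5 × 10⁻⁴, mirroring the noise threshold mentioned earlier. The arrays here are synthetic stand-ins for the DIC measurements, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic central-zone data (stress in MPa): "true" E_L = 20.2 GPa, nu_LT = 0.20.
eps_LL = np.linspace(-2.2e-3, 0.0, 50)
sigma_LL = 20.2e3 * eps_LL + rng.normal(0.0, 0.5, 50)
eps_TT = -0.20 * eps_LL + rng.normal(0.0, 2e-5, 50)

# Young's modulus: least-squares slope of the stress-strain curve through the origin.
E_L = (sigma_LL @ eps_LL) / (eps_LL @ eps_LL)

# Poisson ratio: keep only points where the strain signal dominates the noise.
mask = np.abs(eps_LL) > 5e-4
nu_LT = np.mean(-eps_TT[mask] / eps_LL[mask])
print("E_L = %.1f GPa, nu_LT = %.3f" % (E_L / 1e3, nu_LT))
```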
It also supposes that the axis of compression is one of the transverse isotropy axes. Loading was modeled by displacement fields imposed at the surfaces in contact with the compression plates. These boundary conditions were directly extracted from the displacement data obtained by DIC. Fig. 9 presents the results obtained for a simulated compressive test in the L direction. Considering a cross section placed in the middle part of the specimen, computations enabled us to estimate, at maximum elastic loading, a mean value for the compression stress pattern (σ̄_LL)_comp of about 44.5 MPa. The standard deviation associated with this distribution was about 0.6 MPa. Hence, the stress state imposed by the experimental boundary conditions was uniform to a good first approximation. The compression stress (σ_LL)_exp can also be estimated assuming, as usual, a uniform stress state over the cross section and using the load measurements. We found that (σ_LL)_exp was about 43.5 MPa. The difference between (σ_LL)_exp and (σ̄_LL)_comp remained small, thus partly confirming the previous approximations. Y-Shaped Specimens We checked the overall consistency of the characterization procedure by comparing strain fields obtained in Y-shaped samples with those resulting from FE computations. Loading was modeled by displacements obtained by DIC imposed at surfaces in contact with the compression plates. The computed and experimental strain patterns are compared in Fig. 10. We noted a satisfactory global correlation between both distributions, regardless of the strain tensor component. Moreover, we noted a close correlation between the measurement and the calculation when comparing the overall loading applied to the structure (Table 3). Conclusion The results presented in this paper highlight the benefits of digital correlation techniques for characterizing and identifying the mechanical behavior of a traditional civil engineering material. Full-field measurements were initially useful for checking the quality of the tests by analyzing the characteristics of the displacement fields near surfaces where loading boundary conditions were imposed. The kinematic data were also useful for the evaluation of elasticity constants of a transverse isotropic behavior model. A comparison between experimental and numerical results obtained on a Y-shaped terra cotta brick element showed that the set of identified elastic constants gave satisfactory results. To estimate the different elastic parameters and to perform 3D FE computations, we assumed that the material was and remained homogeneous during mechanical transformation. Other very promising approaches will be available in the near future (Geymonat et al. 2002; [START_REF] Claire | Identification de conductivités thermiques et de propriétés élastiques locales par analyse de champs[END_REF]; [START_REF] Bonnet | Inverse problems in elasticity[END_REF]) to consider the local elastic properties of the material and to account for the structural heterogeneities leading to property gradients in civil engineering structures. The capabilities of these inverse approaches are presently being tested on academic (numerical) examples (Latourte et al. 2005). In the near future, they will be applied to noisy and discrete data fields obtained by DIC. Fig. 1. Directions of transverse isotropy. Fig. 2. Y-shaped specimen. Fig. 4. Compression tests according to the directions R, L, and T: sample responses. Fig. 5. Strain patterns obtained by DIC (components ε_LL and ε_TT). Fig. 6. Stress-strain curve measured in the central area of the specimen. Fig. 7.
Evolution of the Poisson's ratio ν_LT associated with the loading in Fig. 6. Fig. 8. Displacement pattern of u_L: (a) fissuring; (b) propagation. Fig. 9. Simulated compressive test in the L direction. Fig. 10. Fields of strain: (1) derived from digital image correlation; (2) 3D FE calculation.

Table 1. Chemical Composition of Terra Cotta
Elements                  | Atomic %
Silicon dioxide (SiO2)    | 57.9
Alumina (Al2O3)           | 15.4
Lime (CaO)                | 14.4
Iron oxide (Fe2O3)        | 4.9
Titania (TiO2)            | 0.4
Magnesia (MgO)            | 1.4
Potash (K2O)              | 4.3
Soda (Na2O)               | 1.3

Table 2. Average Value and Standard Deviation of Experimental Results Obtained for E_L, E_R, and E_T
     | Number of tests | Average value (GPa) | Standard deviation (GPa)
E_L  | 10              | 20.2                | 2.8
E_R  | 6               | 19.8                | 3.1
E_T  | 8               | 5                   | 0.3

Table 3. Compression Load Supported by the Y-shaped Sample: Measurement and Computation
         | F_exp (kN) | F_comp (kN)
t = 6 s  | 0.32       | 0.37
t = 12 s | 2.17       | 2.36

Acknowledgments The writers would like to thank Saverdun Terre Cuite (STC), a French terra cotta building component manufacturer, for partly supporting this research work.
04114330
en
[ "scco", "sde", "shs", "stat", "qfin" ]
2024/03/04 16:41:24
2023
https://hal.science/hal-04114330/file/01-05-2023%20LFPR.pdf
Angelo Leogrande Alberto Costantiello The Labor Force Participation Rate in the Context of ESG Models at World Level Keywords: Analysis of Collective Decision-Making, General, Political Processes: Rent-Seeking, Lobbying, Elections, Legislatures, and Voting Behaviour, Bureaucracy, Administrative Processes in Public Organizations, Corruption, Positive Analysis of Policy Formulation, Implementation. JEL Classification: D7, D70, D72, D73, D78 In this article we analyze the impact of the Labor Force Participation Rate (LFPR) in the context of the Environmental, Social and Governance (ESG) model at world level. We use data from the ESG dataset of the World Bank for the period 2011-2020. We use Panel Data with Fixed Effects, Panel Data with Random Effects, Pooled OLS, and Dynamic Panel. We find that the level of LFPR is positively associated, among others, with "Ratio of Female to Male Labor Force Participation Rate" and "Life Expectancy at Birth", and negatively associated, among others, with "Unemployment" and "Agricultural Land". Furthermore, we applied clustering with the k-Means algorithm optimized with the Silhouette coefficient, and we found the presence of three clusters. Finally, we compare eight different machine learning algorithms to predict the value of LFPR. We find that the best predictor is Linear Regression. Linear Regression predicts an increase in LFPR equal to 0.42% on average for the analyzed countries. 1) Introduction-Research Question In the following article, we analyze the role of LFPR in the context of ESG models worldwide using World Bank data. The role of work in current economic systems is subject to many pressures and criticisms. On the one hand, in fact, technology and capital tend to reduce the role of labor by favoring IT systems and the work of machines. On the other hand, labor seems to be increasingly a privilege rather than a right in developed and underdeveloped economies due to gender and racial discrimination. However, work remains an essential component of value added both in countries with low per capita incomes and in countries with high per capita incomes. The value of labor, and in particular the value of LFPR, tends to reflect all the various forms of discrimination present at the social level. Added to these are the forms of discrimination produced by educational qualifications, with the contrast between high-skilled workers and low-skilled workers. Labor is therefore the most relevant context for analyzing social and economic contradictions in terms of income inequality and opportunities. In our case, we analyzed the LFPR variable in the context of ESG models. This analysis adds an element of originality to the study. In fact, while the various determinants of the labor market have been extensively analyzed in the scientific literature, there are few studies that have taken into consideration the relationship between the LFPR variable and ESG models. The article continues as follows: the second section refers to the analysis of the literature, the third section presents the econometric model, the fourth section shows the results of the clustering, the fifth section presents the results of the analysis with the machine learning algorithms, and the sixth section concludes. 2) Literature Review A brief review of the literature related to LFPR is presented below. The articles discussed are not exhaustive of the scientific debate in terms of LFPR.
The citations reported have the sole purpose of introducing the topic by highlighting the salient points of recent research. LFPR, Covid-19 and health issues. Low levels of LFPR have been positively associated with high levels of child abuse and neglect during the Covid-19 pandemic in Los Angeles [START_REF] Barboza | A spatiotemporal analysis of the impact of COVID-19 on child abuse and neglect in the city of Los Angeles, California[END_REF]. The increase in drug abuse during the Covid-19 pandemic has reduced the level of LFPR in the post-pandemic period [START_REF] Greenwood | Did Substance Abuse During the Pandemic Reduce Labor Force Participation?[END_REF]. There is a negative relationship between depression and anxiety disorders on the one hand and LFPR on the other [START_REF] Banerjee | Effects of psychiatric disorders on labor market outcomes: a latent variable approach using multiple clinical indicators[END_REF]. The presence of health and psychological problems reduces the population's ability to actively participate in the labor market. The distinction between people who enjoy good health and people who have physical or mental health problems creates a further discrimination in the workplace. Furthermore, the fact that people actively participate in the labor market also reduces the likelihood that they are involved in criminal activity or violent behavior. High levels of LFPR are therefore necessary both to ensure a better health status of the population and to reduce the incidence of domestic and social violence. LFPR and developing countries. Remittances reduce female LFPR but do not affect male LFPR in a set of 122 developing countries in the period 1990-2015 [START_REF] Azizi | The impacts of workers' remittances on human capital and labor supply in developing countries[END_REF]. There is a negative relationship between the level of female education and female LFPR in rural India [START_REF] Afridi | Why are fewer married women joining the work force in rural India? A decomposition analysis over two decades[END_REF]. A study on individuals aged 55-64 in Turkey shows that the main determinants of LFPR are education and marital status [START_REF] Şentürk | Aging and Employment: A Study on the Labor Force Participation Decision of the Elderly in Turkey[END_REF]. The increase of LFPR has a positive effect on income distribution in India [START_REF] Angeles | The Effect of Gender Inequality in Education, Labor Force Participation and Economic Opportunity on the Income Distribution of India[END_REF]. There is a positive relationship between female LFPR and female salaries in Pakistan [START_REF] Afridi | Female Labor Force Participation in Pakistan and Central Asian States: A Comparative Analysis[END_REF]. There is a negative relationship between female LFPR and natality in Brazil [START_REF] De Souza | The Changing Effects of Fertility on Women's Education and Labor Supply: Natural Experiments at Different Parities in Brazil[END_REF]. Female LFPR and occupational segregation grew together in 66 developing countries in the period 1980-2011 [START_REF] Borrowman | Drivers of gendered sectoral and occupational segregation in developing countries[END_REF]. Female LFPR in the Middle East is low despite the increase in the level of female education; the presence of a male's veto power over a woman's decision to work reduces female LFPR [START_REF] Majbouri | Preferences and the Puzzle of Female Labor Force Participation in the Middle East[END_REF].
The analysis shows some counterfactual elements. The fact that there is an inverse relationship between female education and female LFPR levels in India may be due to the low incomes in rural markets, especially for high-skilled female workers. But, in general, except for this case, the dynamics of LFPR, and especially female LFPR, in developing and low-income countries are similar to those of high-income countries. LFPR, democracy and discrimination. There is a positive relationship between female LFPR and participation in US democratic elections, i.e. the number of voters increases with female LFPR [START_REF] Cebula | Female Labor Force Participation and Voter Turnout: Evidence from the American Presidential Elections[END_REF]. The black-white racial gap in LFPR is greater than the Hispanic-white LFPR gap in the USA [START_REF] Cajner | Racial gaps in labor market outcomes in the last four decades and over the business cycle[END_REF]. The presence of low-skilled immigrants has a positive effect on LFPR for low-skilled coloured native-born workers in South Africa [START_REF] Broussard | Immigration and the labor market outcomes of natives in developing countries: A case study of South Africa[END_REF]. Labor participation rates have a positive impact in terms of democratic activism. A person who works might also have a greater interest in voting in political elections. There is therefore a positive relationship between the value of participation in the labor market and the value of participation in democratic life at the national level. However, for the same reasons, racial discrimination could have a negative feedback effect on the solidity of democratic institutions. In fact, the case of the USA shows that both the Afro-American community and the Hispanic community, despite showing a proclivity for activism in the labor market, still have reduced levels of employability due to racial discrimination. LFPR policies. The offering of afterschool care increases the labor force participation rate by 7% in Chile [START_REF] Martínez | Childcare effects on maternal employment: Evidence from Chile[END_REF]. The promotion of childcare services has increased mothers' LFPR by 0.2% in Germany, especially for mothers with medium-high education [START_REF] Müller | Does subsidized care for toddlers increase maternal labor supply. Evidence from a large-scale expansion of early childcare[END_REF]. Investment in labor-intensive industries can improve female LFPR in Botswana and Namibia [START_REF] Matandare | Botswana unemployment rate trends by gender: Relative analysis with upper middle income Southern African countries (2000-2016)[END_REF]. There are policies that can have a positive effect on the growth of the value of LFPR, especially in the case of female LFPR. Specifically, it is possible to develop childcare services to ensure that women can actively participate in the labor market. However, in some cases, such as for people with low levels of education, the offer of childcare services should also be accompanied by a set of educational and financial interventions to support mothers in their pathways into the labor market. Economic policies that invest in labor-intensive sectors can help LFPR growth. LFPR miscellaneous. There is a positive relationship between the increase in female entrepreneurship and the reduction of the male vs. female LFPR gap [START_REF] Ribes-Giner | Domestic economic and social conditions empowering female entrepreneurship[END_REF].
The Internet has increased female LFPR by 4.1% for married women thanks to teleworking [START_REF] Dettling | Broadband in the labor market: The impact of residential high-speed internet on married women's labor force participation[END_REF]. A positive relationship was found between labor supply and LFPR in Germany during the period 2003-2010 [START_REF] Burda | Reevaluating the German labor market miracle[END_REF]. The promotion of female entrepreneurship can reduce the gap between men and women in terms of LFPR. However, it is necessary to invest in promoting women's economic rights, freedom, and capabilities to increase the value of LFPR through female entrepreneurship. One of the variables that increases women's participation in work is information technology. In fact, if IT makes it possible to improve work-life balance, then it is possible that the number of women participating in the labor market will increase. In addition, there are demographic conditions that can lead to an increase in the value of LFPR. That said, an increase in the value of LFPR is not always accompanied by an improvement in workers' conditions in terms of income and workplace quality. 3) The Econometric Model for the Estimation of the Value of LFPR Below we present a regression analysis aimed at identifying the LFPR determinants within the context of the ESG dataset of the World Bank. The data refer to 193 countries in the period between 2011 and 2020. The data were analyzed using the following econometric models: Panel Data with Fixed Effects, Panel Data with Random Effects, Pooled OLS, and Dynamic Panel. In summary, we estimated the following equation:

$$LFPR_{it} = \alpha + \beta_1 (CO2)_{it} + \beta_2 (LEAB)_{it} + \beta_3 (PM2.5)_{it} + \beta_4 (RFTM)_{it} + \beta_5 (RQ)_{it} + \beta_6 (AL)_{it} + \beta_7 (UT)_{it}$$

where i indexes the 193 countries and t = [2011; 2020]. We found that LFPR is positively associated with: • CO2: a variable that considers the value of carbon dioxide emissions deriving from the combustion of fossil fuels and from the production of cement. They include carbon dioxide produced during the burning of solid, liquid, and gaseous fuels and gas flaring. There is a positive relationship between the value of LFPR and CO2. This positive relationship is due to the fact that the countries with the greatest production of CO2 are also the countries that have the highest levels of LFPR, i.e. the Western countries. Countries that have high levels of LFPR are also countries that have an active industrial and manufacturing system, and which therefore tend to have significant CO2 production. However, this positive relationship is very likely to change in the future due to the emphasis placed by European governments especially on climate change. The incentives offered to the European industrial system try to transform production through the application of sustainable methodologies that could allow an increase in the LFPR together with a reduction of the CO2 value in the future [START_REF] Costantiello | The Determinants of CO2 Emissions in the Context of ESG Models at World Level[END_REF]. • LEAB: a variable that considers life expectancy at birth as the number of years a newborn would live if the prevailing patterns of mortality at the time of birth remained the same throughout life. There is a positive relationship between the LEAB value and the LFPR value. Such a relationship tends to be paradoxical. Countries that have high levels of LFPR are either low-income countries from Central and Southern Africa or high-income countries, i.e. North America, Northern Europe, and Oceania.
By contrast, there is a set of countries that show a low level of LFPR, i.e. Mediterranean and South Asian countries. However, the fact that there is a positive relationship between LFPR and LEAB suggests that the positive effect on life expectancy in high-income countries that have high levels of LFPR tends to largely offset the negative effect on life expectancy in lower-middle-income countries with a high level of LFPR. • PM2.5: a variable that considers the population-weighted exposure to ambient PM2.5 pollution, defined as the average level of exposure of a nation's population to concentrations of suspended particles measuring less than 2.5 microns in aerodynamic diameter, which are able to penetrate deep into the respiratory tract and cause great harm to health. Exposure is calculated by weighting the average annual concentrations of PM2.5 by population in urban and rural areas. There is a positive relationship between the PM2.5 value and the LFPR value. In fact, it must be considered that many of the countries that have high levels of LFPR are low-middle-income countries that also have high levels of pollution from PM2.5 emissions. Indeed, countries in Central and Southern Africa that have high levels of LFPR also have high levels of PM2.5. • RFTM: a variable based on the percentage of the population aged 15 or over that is economically active, i.e. all people who supply labor for the production of goods and services during a given period. The ratio of female to male labor force participation rates is calculated by dividing the female labor force participation rate by the male labor force participation rate and multiplying by 100. Looking at the map of countries by RFTM value, many countries that have high LFPR levels also have high RFTM values. This is the case, for example, of the countries of Central-Southern Africa, of North America, of Northern Europe, of South America, and of some parts of Oceania. Hence the positive relationship between the LFPR value and the RFTM value. However, it must also be considered that there are some countries which have a low LFPR value and nonetheless high RFTM values, such as Italy, Spain, and China. • RQ: captures perceptions of the government's ability to formulate and implement robust policies and regulations that enable and promote private sector development. The estimate gives the country's score on the aggregate indicator, in units of a standard normal distribution, i.e. ranging from approximately -2.5 to 2.5. There is a positive relationship between the RQ value and the LFPR value. This relationship arises because many of the countries that have high levels of LFPR, such as the countries of North America and Northern Europe, also have high levels of RQ. However, there are also countries that have high LFPR levels and low RQ levels, such as the countries of Central-Southern Africa. It therefore follows that the positive impact of high-middle-income countries with high LFPR levels on the RQ value tends to more than offset the negative impact of low-middle-income countries with high LFPR levels on the RQ value [START_REF] Costantiello | The Regulatory Quality and ESG Model at World Level[END_REF]. LFPR is negatively associated with: • AL: refers to the share of land that is arable, under permanent crops, or under permanent pastures.
Arable land includes land defined by FAO as land under temporary crops (double-harvested areas are counted once), temporary grassland for mowing or grazing, land under market or kitchen gardens, and land temporarily fallow. There is a negative relationship between the AL value and the LFPR value. In fact, many countries that have high levels of AL have low levels of LFPR, such as India and many countries in the Mediterranean area. On the contrary, many Anglo-Saxon countries that have a high level of LFPR, such as the USA, Canada, and Australia, have low levels of AL. • UT: refers to the share of the labor force that is out of work but available and looking for work. There is a negative relationship between the UT value and the LFPR value. That is, unemployment tends to rise as the value of LFPR decreases. This is because many countries that have high LFPR values are also low-unemployment countries, such as North America, Northern Europe, and Australia. 4) Clusterization with k-Means Algorithm Optimized with the Silhouette Coefficient In the following analysis we apply the k-Means clustering algorithm to check for clusters within the data. Since k-Means is an unsupervised algorithm, we apply the Silhouette coefficient to choose the optimal number of clusters (a minimal code sketch of this procedure is given after the cluster descriptions below). The result shows the presence of three clusters. Considering the value of the median of the clusters, the following ordering of the clusters results: #3 = 69.978 > #1 = 59.79 > #2 = 43.677. • Cluster 1: Albania, Argentina, Armenia, Australia, Austria, Azerbaijan, Belgium, Burkina Faso, Bangladesh, Bulgaria, Belarus, Belize, Brazil, Barbados, Brunei Darussalam, Bhutan, Botswana, Canada, Channel Islands, Chile, Cote d'Ivoire, Congo Dem. Rep., Cabo Verde, Costa Rica, Cuba, Cyprus, Czechia, Germany, Denmark, Dominican Republic, Ecuador, Spain, Estonia, Finland, Fiji, France, United Kingdom, Georgia, Guinea, The Gambia, Guinea Bissau, Equatorial Guinea, Guatemala, Guam, Guyana, Hong Kong SAR, Honduras, Haiti, Hungary, Ireland, Israel, Jamaica, Japan, Kyrgyz Republic, South Korea, Lao PDR, Sri Lanka, Lesotho, Lithuania, Luxembourg, Latvia, Maldives, Mexico, North Macedonia, Malta, Myanmar, Montenegro, Mongolia, Mauritius, Malaysia, Namibia, New Caledonia, Nigeria, Nicaragua, Netherlands, Norway, Panama, Philippines, Poland, Portugal, French Polynesia, Romania, Russian Federation, Rwanda, Saudi Arabia, Sierra Leone, El Salvador, Sao Tome and Principe, Suriname, Slovak Republic, Slovenia, Sweden, Chad, Togo, Tonga, Trinidad and Tobago, Ukraine, Uruguay, United States, Uzbekistan, St. Vincent and the Grenadines, Venezuela, Virgin Islands, Samoa, South Africa, Zambia, Zimbabwe. It is the second cluster by median value of LFPR. It is a very large cluster made up of various countries that are either upper-middle-income, such as the USA, Australia, Austria, and Hong Kong, or lower-middle-income, such as Mongolia, Namibia, and Zimbabwe. We can therefore note that although these countries have the same capacity in terms of LFPR value, they nevertheless have important differences in terms of GDP value, both in absolute value and in per capita value. This analysis highlights the low value of labor compared to capital in creating the conditions for economic growth. In fact, even if the population actively participates in the labor market, this does not necessarily lead to growth in GDP. Rather, it is the capital endowment that allows, LFPR ceteris paribus, the creation of the conditions for economic growth, either in the sense of growth in the value of per capita gross domestic product or as growth in GDP in absolute terms. Furthermore, we must also underline the enormous heterogeneity of the conditions of workers in the various countries, even if they are in the same cluster for the median level of LFPR. In fact, working-class conditions in Mongolia are certainly not comparable to those in Sweden, despite Mongolia and Sweden participating in the same cluster in the sense of LFPR. And in fact, perhaps it would be necessary to create a new indicator capable of measuring LFPR adjusted for the level of workers' conditions. • Cluster 2: Afghanistan, Bosnia and Herzegovina, Comoros, Djibouti, Algeria, Egypt Arab Rep., Gabon, Greece, Croatia, India, Iran, Iraq, Italy, Jordan, Lebanon, Libya, Morocco, Moldova, Mauritania, Nepal, Pakistan, Papua New Guinea, Puerto Rico, West Bank and Gaza, Sudan, Senegal, Somalia, Serbia, Eswatini, Syrian Arab Republic, Tajikistan, Turkmenistan, Tunisia, Turkey, Yemen. It is the last cluster by value of LFPR. C2 is a very heterogeneous cluster, as it is made up of countries from various continents which have different levels of per capita income. All these countries have low participation in the labor market. However, it must be considered that the LFPR variable could underestimate the presence of undeclared, irregular, and informal workers in the economy.
In Italy, for example, there is a significant percentage of the population working in the irregular and informal economy. These are workers who nevertheless appear as inactive in official labor statistics. It is probable that similar conditions also occur in other countries of C2. However, net of this effect, it is highly probable that there is a problem of inefficient incentives in these countries. Workers in the C2 countries do not find in the labor market a feasible opportunity to improve their lives and their social condition through active participation in the job market. One of the reasons for the low participation of workers in the labor market could be the low level of labor income. Other reasons could be connected to the lack of development of industrial and labor systems capable of offering jobs considered financially and technically adequate. These countries suffer from a double problem in the labor market: on the one hand, low wages, and on the other, insufficient working conditions. C2 is the only cluster that does not include high-income countries. This condition indicates that low levels of LFPR could in the long run compromise per capita income and productivity levels at the country level. • Cluster 3: Angola, United Arab Emirates, Burundi, Benin, Bahrain, The Bahamas, Bolivia, Central African Republic, Switzerland, China, Cameroon, Congo Rep., Colombia, Eritrea, Ethiopia, Ghana, Indonesia, Iceland, Kazakhstan, Kenya, Cambodia, Kuwait, Liberia, St. Lucia, Macao SAR, Madagascar, Mali, Mozambique, Malawi, Niger, New Zealand, Oman, Peru, North Korea, Paraguay, Qatar, Singapore, Solomon Islands, South Sudan, Thailand, Timor-Leste, Tanzania, Uganda, Vietnam, Vanuatu. This is the first cluster by value of LFPR. The cluster is made up of countries with low-to-medium per capita income, except for the following countries: Switzerland, Iceland, New Zealand, Singapore, Qatar, Macao, and the United Arab Emirates. The other countries in the cluster all have low per capita incomes. It must be considered that many of these countries are very significant nations from the point of view of absolute GDP value, as in the case of China and Indonesia. Other countries have a reduced GDP value from both a per capita and an absolute perspective. Many of the C3 countries are African and Latin American ones. It must therefore be considered that there is a real dichotomy in the relationship between the LFPR and the value of GDP, both in absolute and in per capita terms. Indeed, the LFPR tends to be high both in countries with high levels of per capita income and in countries with low levels of per capita income. C3 therefore presents a very significant level of polarization between rich and poor countries. Both are characterized by high labor participation of the population. This condition shows how, in some respects, labor participation by itself is not decisive. In fact, the real distinction between rich and poor countries, as indicated in the case of C3, does not consist in the difference in terms of LFPR, but in the capital endowment. This fact is so evident that even with the same values of LFPR there are still significant differences in terms of per capita income, which are entirely attributable to differences in capital endowments. 5) Machine Learning and Prediction for the Estimation of the Future Value of LFPR Below we present an analysis for predicting the future value of LFPR. Specifically, we compare eight different machine learning algorithms.
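Before turning to the predictions, here is a minimal sketch of the Section 4 clustering step, written in Python with scikit-learn rather than the authors' Orange workflow: k-Means is fitted for several candidate k and the Silhouette coefficient selects the number of clusters. The array `lfpr` is a hypothetical stand-in for the 193-country World Bank LFPR series, not the actual data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Hypothetical stand-in for the 193-country LFPR data (one feature per country).
lfpr = np.concatenate([rng.normal(70, 3, 60), rng.normal(60, 3, 80),
                       rng.normal(44, 3, 53)]).reshape(-1, 1)

# Silhouette-optimized choice of k, as in Section 4.
scores = {k: silhouette_score(lfpr, KMeans(n_clusters=k, n_init=10,
                                           random_state=0).fit_predict(lfpr))
          for k in range(2, 8)}
best_k = max(scores, key=scores.get)          # three clusters expected here
labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(lfpr)
medians = sorted((np.median(lfpr[labels == c]) for c in range(best_k)), reverse=True)
print(best_k, [round(m, 2) for m in medians])
```

On such synthetic data the Silhouette criterion recovers three clusters, whose ordered medians play the role of the #3 > #1 > #2 ordering reported above.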
The algorithms are compared according to their ability to maximize the R-squared value and to minimize the MAE (Mean Absolute Error), MSE (Mean Squared Error), and RMSE (Root Mean Squared Error) values. The algorithms were trained with 70% of the available data, while the remaining 30% was used for the actual prediction. To identify the best performing algorithm, four rankings were created, one for each of the four statistical indicators presented. The score of each algorithm in the individual rankings was added up. The algorithm with the lowest aggregate score was chosen, i.e. the algorithm that totaled the highest places in the single rankings presented. In this way, the following ordering of the algorithms was obtained: • Linear Regression with a payoff equal to 5; • Random Forest Regression with a payoff value of 7; • Tree Ensemble Regression with a payoff value of 13; • Gradient Boosted Tree Regression with a payoff value of 15; • PNN-Probabilistic Neural Network with a payoff value of 20; • ANN-Artificial Neural Network with a payoff value of 24; • Simple Regression Tree with a payoff value of 29; • Polynomial Regression with a payoff value of 31. Therefore, by applying the best predictor algorithm, i.e. Linear Regression, it is possible to verify that there are some countries for which an increase in the LFPR value is expected, while there are other countries for which a reduction is predicted. The countries for which a growth in the value of LFPR is predicted are: Philippines with a value of +9.1%; Venezuela with +7.88%; Ecuador with +6.61%; Lebanon with +5.9%; Cuba with +5.67%; Honduras with +5.58%; El Salvador with +5.24%; Uruguay with +3.95%; The Bahamas with +3.48%; Bolivia with +3.26%; Mali with +3.21%; Trinidad and Tobago with +2.53%; Gabon with +2.33%; Iceland with +2.24%; Kuwait with +2.05%; Bhutan with +1.96%; Vietnam with +1.91%; Canada with +0.86%; Belize with +0.82%; Equatorial Guinea with +0.81%; Algeria with +0.5%; Germany with +0.34%; United States with +0.11%; North Korea with +0.08%; Ukraine with +0.06%. By applying the best predictor algorithm, i.e. Linear Regression, it is also possible to obtain the following values for the losing countries: Nicaragua with -0.07%; Guyana with -0.14%; Sri Lanka with -0.22%; Madagascar with -0.24%; Kazakhstan with -0.28%; Mozambique with -0.32%; Greece with -0.36%; Congo Dem. Rep. with -0.4%; Ghana with -0.47%; Lesotho with -0.59%; Myanmar with -0.61%; Vanuatu with -0.78%; Guam with -0.95%; Guinea-Bissau with -1.11%; French Polynesia with -1.18%; United Kingdom with -1.36%; Denmark with -1.37%; Norway with -1.39%; Mauritania with -1.45%; Virgin Islands with -1.5%; Georgia with -1.53%; Belgium with -1.71%; Poland with -1.78%; Malaysia with -1.96%; Croatia with -1.97%; Netherlands with -2.13%; Slovenia with -2.14%; Hungary with -2.77%; Zambia with -4.16%; Malta with -5.09%; Azerbaijan with -5.26%; Jordan with -6.13%. 6) Conclusions In this article, we analyzed the relationship between LFPR and ESG using a dataset from the World Bank over the period 2011-2020. We found that the value of LFPR is negatively connected to the E-Environment component, and positively connected to the S-Social and G-Governance components within the ESG model. Thus, the econometric analysis shows that LFPR growth tends to be costly for the Environment even if it is compatible with Social and Governance issues within the ESG model.
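The rank-sum "payoff" scheme described in Section 5 can be reproduced as follows, in Python/scikit-learn rather than the authors' KNIME workflow and with a subset of the eight algorithms: each regressor is trained on 70% of the data, its R², MAE, MSE, and RMSE are computed on the remaining 30%, each metric yields a ranking, and the four ranks are summed; the lowest payoff wins. `X, y` are synthetic stand-ins for the ESG regressors and the LFPR target.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

# Synthetic stand-in: 1930 observations, 7 ESG-style regressors.
X, y = make_regression(n_samples=1930, n_features=7, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)

models = {
    "Linear Regression": LinearRegression(),
    "Random Forest Regression": RandomForestRegressor(random_state=0),
    "Gradient Boosted Tree Regression": GradientBoostingRegressor(random_state=0),
    "Simple Regression Tree": DecisionTreeRegressor(random_state=0),
}

# For each model: (-R2, MAE, MSE, RMSE), all expressed as "lower is better".
scores = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    scores[name] = (-r2_score(y_te, pred), mean_absolute_error(y_te, pred),
                    mse, np.sqrt(mse))

# Payoff = sum of the four per-metric ranks; the lowest payoff wins.
payoff = dict.fromkeys(models, 0)
for j in range(4):
    for rank, name in enumerate(sorted(models, key=lambda n: scores[n][j]), start=1):
        payoff[name] += rank
print(sorted(payoff.items(), key=lambda kv: kv[1]))
```

Note that MSE and RMSE always rank models identically, so in this scheme squared-error performance effectively counts twice, which matches the paper's four-indicator design.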
The cluster analysis performed with the k-Means algorithm optimized with the Silhouette Coefficient shows the existence of three clusters in the sense of LFPR. The k-Means algorithm shows that within the same cluster it is possible to find either countries with low per capita income or countries with high per capita income. LFPR growth by itself does not increase GDP. The positive relationship between LFPR and GDP per capita is mediated through the investment in capital structure. Therefore, to ensure that the increase in the LFPR in low-income countries has a significant positive impact in terms of GDP, it is necessary to invest in capital endowments in developing countries. Finally, the predictive analysis based on machine learning algorithms shows a positive trend in the value of the LFPR in the future, with an average growth of 0.42% for the countries analyzed. 7) Declarations Data Availability Statement. The data presented in this study are available on request from the corresponding author. Funding. The authors received no financial support for the research, authorship, and/or publication of this article. Declaration of Competing Interest. The authors declare that there is no conflict of interest regarding the publication of this manuscript. In addition, ethical issues, including plagiarism, informed consent, misconduct, data fabrication and/or falsification, and double publication, have been observed by the authors. Software. The authors have used the following software: Gretl for the econometric models, Orange for clusterization and network analysis, and KNIME for machine learning and predictions. All are free versions that do not require licenses. 9) Appendix Figure 1. Average value of regressions and ESG decomposition with average value. Figure 2. Structure of the clusters with the k-Means algorithm optimized with the Silhouette Coefficient. Figure 3. Ranking of algorithms based on the maximization of R-squared and minimization of MAE, MSE and RMSE. Figure 4. Winners: countries for which an increase in LFPR is predicted. Figure 15. Losers: countries for which a reduction in LFPR is predicted. Estimation output (excerpt): Test for AR(1) errors: z = -1.63743 [0.1015]. Test for AR(2) errors: z = 0.0789429 [0.9371]. Sargan over-identification test: Chi-square(35) = 1363.95 [0.0000]. Wald (joint) test: Chi-square(8) = 17245.1 [0.0000]. Random-effects (GLS), using 1930 observations. Using Nerlove's transformation. Included 193 cross-sectional units. Time-series length = 10. Dependent variable: A33.
Rep., Cabo Verde, Costa Rica, Cuba, Cyprus, Czechia, Germany, Denmark, Dominican Republic, Ecuador, Spain, Estonia, Finland, Fiji, France, United Kingdom, Georgia, Guinea, The Gambia, Guinea-Bissau, Equatorial Guinea, Guatemala, Guam, Guyana, Hong Kong SAR, Honduras, Haiti, Hungary, Ireland, Israel, Jamaica, Japan, Kyrgyz Republic, South Korea, Lao PDR, Sri Lanka, Lesotho, Lithuania, Luxembourg, Latvia, Maldives, Mexico, North Macedonia, Malta, Myanmar, Montenegro, Mongolia, Mauritius, Malaysia, Namibia, New Caledonia, Nigeria, Nicaragua, Netherlands, Norway, Panama, Philippines, Poland, Portugal, French Polynesia, Romania, Russian Federation, Rwanda, Saudi Arabia, Sierra Leone, El Salvador, Sao Tome and Principe, Suriname, Slovak Republic, Slovenia, Sweden, Chad, Togo, Tonga, Trinidad and Tobago, Ukraine, Uruguay, United States, Uzbekistan, St. Vincent and the Grenadines, Venezuela, Virgin Islands, Samoa, South Africa, Zambia, Zimbabwe. It is the second cluster by median value of LFPR. It is a very large cluster made up of various countries that are either upper-middle income, such as the USA, Australia, Austria and Hong Kong, or lower-middle income, such as Mongolia, Namibia and Zimbabwe.
Professor of Economics at LUM University Giuseppe Degennaro and Researcher at LUM Enterprise s.r.l. Email: [email protected], Strada Statale 100 km 18, Casamassima, Bari, Puglia, Italia.
Professor of Economics at LUM University Giuseppe Degennaro. Email: [email protected]. Strada Statale 100 km 18, Casamassima, Bari, Puglia, Italia.
Acknowledgements. We are grateful to the teaching staff of the LUM University "Giuseppe Degennaro" and to the management of the LUM Enterprise s.r.l. for the constant inspiration to continue our scientific research work undeterred.
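To make the clustering step discussed in the conclusions concrete, here is a short illustrative Python sketch (ours, not the authors' Orange workflow) of choosing the number of k-Means clusters by maximizing the Silhouette Coefficient. The data, and the 193 x 10 shape suggesting 193 countries with 10 yearly values, are placeholders.

# k-Means with silhouette-based selection of k (synthetic data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(193, 10))

best_k, best_score = None, -1.0
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score

print(best_k, round(best_score, 3))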
04114343
en
[ "math.math-nt", "math.math-ag", "math.math-kt" ]
2024/03/04 16:41:24
2023
https://hal.science/hal-04114343/file/LE3.pdf
ON THE MAHLER MEASURE OF (1 + x)(1 + y) + z
FRANÇOIS BRUNAULT
2020 Mathematics Subject Classification: Primary 19F27; Secondary 11F67, 11G16, 11R06, 19E15.
Keywords: Mahler measure, motivic cohomology, modular curve, regulator, L-function.

1. Introduction
The (logarithmic) Mahler measure of a complex Laurent polynomial P(x_1, ..., x_n) is defined as the average of log|P| over the torus T^n: |x_1| = ... = |x_n| = 1,
\[ m(P) = \frac{1}{(2\pi i)^n} \int_{T^n} \log|P| \, \frac{dx_1}{x_1} \cdots \frac{dx_n}{x_n}. \]
For a monic one-variable polynomial P(x) = \prod_{i=1}^d (x - \alpha_i), Jensen's formula gives
\[ m(P) = \sum_{\substack{1 \le i \le d \\ |\alpha_i| \ge 1}} \log|\alpha_i|. \]
Originally, the multivariate Mahler measure was introduced as a height function for polynomials, in relation with transcendental number theory. It was later realised that the Mahler measure appears naturally in other contexts. For example, Smyth discovered in 1981 the following formulas:
\[ (1)\qquad m(1 + x + y) = \frac{3\sqrt{3}}{4\pi} L(\chi_{-3}, 2), \qquad m(1 + x + y + z) = \frac{7}{2\pi^2} \zeta(3), \]
where \chi_{-3}(n) = \left(\frac{-3}{n}\right) is the Dirichlet character modulo 3. The Mahler measure of integer polynomials turns out to have deep links with special values of L-functions. We mention here some aspects of this connection, referring to [Boyd, 'The many aspects of Mahler's measure'; Boyd, 'Explicit formulas for Mahler measure'; Bertin et al., 'Women in numbers 2'; Brunault and Zudilin, 'Many Variations of Mahler Measures'] for more complete surveys. A combination of experiments and theoretical insights led Boyd and Deninger to conjecture the identity
\[ (2)\qquad m\Big(x + \frac{1}{x} + y + \frac{1}{y} + 1\Big) = L'(E, 0), \]
where E: x + 1/x + y + 1/y + 1 = 0 is an elliptic curve of conductor 15. This was proved some 15 years later by Rogers and Zudilin [Rogers and Zudilin]. The identity (2) can be conceptually explained using Beilinson's theory of regulators, and Deninger gave in [Deninger] a general framework to relate Mahler measures and cohomology. More precisely, let P(x_1, ..., x_n) be a complex Laurent polynomial, which we assume to be monic in x_n. Applying Jensen's formula with respect to x_n, we may write m(P) as an integral
\[ (3)\qquad \int_{\Gamma_P} \eta(x_1, \ldots, x_n), \]
where \eta is a differential (n-1)-form on the zero locus V_P of P in (C^x)^n, and \Gamma_P is the (n-1)-dimensional Deninger chain,
\[ \Gamma_P = \{(x_1, \ldots, x_n) \in V_P : |x_1| = \cdots = |x_{n-1}| = 1,\ |x_n| \ge 1\}. \]
We make here all necessary assumptions for this integral to make sense [Deninger, Assumptions 3.2], in particular \Gamma_P must avoid the singular points of V_P. Assume now that \Gamma_P is closed. Then (3) can be given a cohomological interpretation, since the class of \eta in de Rham cohomology is the image under the Beilinson regulator map of the cup-product {x_1, ..., x_n} in the motivic cohomology group H^n_M(V_P, Q(n)).
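As a quick numerical sanity check of Smyth's first formula in (1), one may compare a direct average over the torus with the series for L(\chi_{-3}, 2). This sketch is ours and not part of the paper; it only approximates both sides, and the grid size is an arbitrary choice.

# Compare m(1+x+y) with (3*sqrt(3)/(4*pi)) * L(chi_{-3}, 2).
import numpy as np

n = 2000
theta = 2 * np.pi * (np.arange(n) + 0.5) / n     # midpoint grid avoids the zeros of 1+x+y
T1, T2 = np.meshgrid(theta, theta)
vals = np.log(np.abs(1 + np.exp(1j * T1) + np.exp(1j * T2)))
m_num = vals.mean()                               # average of log|1+x+y| over the torus

# L(chi_{-3}, 2) with chi_{-3}(k) = +1, -1, 0 for k = 1, 2, 0 mod 3
ks = np.arange(1, 200000)
chi = np.where(ks % 3 == 1, 1.0, np.where(ks % 3 == 2, -1.0, 0.0))
L2 = np.sum(chi / ks**2)

# Both values should agree to roughly three decimal places (~0.3231).
print(m_num, 3 * np.sqrt(3) / (4 * np.pi) * L2)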
This situation is favourable and under certain conditions, the Beilinson conjectures predict a link between m(P) and some L-value associated to V_P. The identity (2) is an example of this phenomenon (in reality, in this case \Gamma_P is not a closed path, but symmetries can be used to "close \Gamma_P"). A more mysterious situation is when the form \eta is exact, in which case we say P is exact. Stokes's formula reduces the Mahler measure m(P) to an (n-2)-dimensional integral over the boundary \partial\Gamma_P, but Deninger's theory does not provide an intrinsic cohomological interpretation of this integral. Maillot suggested in 2003 that, in the exact case, m(P) should be related to the cohomology of the variety
\[ W_P : P(x_1, \ldots, x_n) = P\Big(\frac{1}{x_1}, \ldots, \frac{1}{x_n}\Big) = 0. \]
What makes it plausible is that \partial\Gamma_P is contained in W_P, because V_P \cap T^n = W_P \cap T^n. The relevant motivic cohomology group is now H^{n-1}_M(W_P, Q(n)), which is harder to deal with, as we cannot use cup-products. The identities (1) are of this type. For example, the motivic cohomology group for the polynomial 1 + x + y is isomorphic to the algebraic K-group K_3(Q(\sqrt{-3})) \otimes Q, which is not a Milnor K-group. For general exact polynomials, this non-Milnor character makes it difficult to handle the Mahler measure. Following Maillot's insight, Boyd and Rodriguez Villegas discovered in 2003 several identities involving 3-variable exact polynomials [Boyd, 'The many aspects of Mahler's measure'; Boyd, 'Explicit formulas for Mahler measure'; Boyd, 'Mahler's measure and L-functions of elliptic curves evaluated at s = 3']. One example is:

Conjecture 1 (Boyd and Rodriguez Villegas). We have the equality
\[ (4)\qquad m((1 + x)(1 + y) + z) \overset{?}{=} -2L'(E, -1), \]
where E: (1 + x)(1 + y)(1 + 1/x)(1 + 1/y) = 1 is an elliptic curve of conductor 15.

Here E arises as the Maillot variety W_P for P = (1 + x)(1 + y) + z, and L'(E, -1) is related in a simple way to L(E, 3) by the functional equation of the L-function [13, 7.3.6]. The first result towards Conjecture 1 was obtained by Lalín [Lalín], who expressed m(P) as the regulator of a cocycle in the Goncharov complex \Gamma(E, 3) (see Section 3 for the definition of this complex). We write \gamma_E = \partial\Gamma_P for the boundary of Deninger's chain \Gamma_P; this is a closed path in E.

Theorem 2 (Lalín). We have
\[ m(P) = \frac{1}{4\pi^2} \int_{\gamma_E} r_3(2)(\xi_E), \]
where \xi_E is the class of the cocycle \{-x\}_2 \otimes y - \{-y\}_2 \otimes x in \Gamma(E, 3), and r_3(2) is the Goncharov regulator map.

In essence, Lalín's theorem reduces Conjecture 1 to the Beilinson conjecture for L'(E, -1). Beilinson proved a weak form of his conjecture for L-values of modular forms by considering regulators associated to Eisenstein symbols [Beilinson], but the Goncharov regulator here is of different nature. In this article, we compute the latter regulator, leading to the following theorem.

Theorem 3. The Boyd and Rodriguez Villegas conjecture (4) is true.
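As a consistency check (this verification is ours, not Lalín's argument; it uses only the equation defining E and the vanishing of torsion elements such as -1 in F^\times \otimes \mathbb{Q}), one can see directly that the cocycle in Theorem 2 is closed:
\[ d\big(\{-x\}_2 \otimes y - \{-y\}_2 \otimes x\big) = (1+x)\wedge(-x)\wedge y - (1+y)\wedge(-y)\wedge x = \big((1+x)(1+y)\big)\wedge x\wedge y, \]
and on E the relation (1+x)^2(1+y)^2 = xy gives
\[ 2\,\big((1+x)(1+y)\big)\wedge x\wedge y = (xy)\wedge x\wedge y = x\wedge x\wedge y + y\wedge x\wedge y = 0 \quad\text{in } \Lambda^3 F^\times \otimes \mathbb{Q}, \]
so the differential vanishes.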
Another fascinating conjecture by Rodriguez Villegas concerns the Mahler measure of the polynomials 1 + x_1 + ... + x_n for n = 4 and n = 5. These polynomials are also exact and their Mahler measures are expected to involve L-values of cusp forms of weight 3 and 4, respectively [Brunault and Zudilin, Section 6.2]. Partial results have been obtained by Shinder and Vlasenko [Shinder and Vlasenko]. We found recently a (conjectural) identity which naturally generalises (4):
\[ m((1 + x)(1 + y)(1 + z) + t) \overset{?}{=} -6L'(f_7, -1) - \frac{48}{7}\zeta'(-2), \]
where f_7(\tau) = \eta(\tau)^3 \eta(7\tau)^3 is the unique CM newform of weight 3 and level 7.
The main ingredient in the proof of Theorem 3 is the computation by Zudilin and the author [Brunault and Zudilin, 'Modular regulators and multiple Eisenstein values'] of the Goncharov regulator of explicit classes \xi_1(a, b) in the motivic cohomology of the modular curve Y_1(N), which were introduced in [Brunault, 'On the K_4 group of modular curves']. A key fact here is that E is isomorphic to the modular curve X_1(15), something we make precise in Section 2. In Section 3, we recall Goncharov's theory of polylogarithmic complexes in weight 2 and 3 and, for modular curves, we define subcomplexes built out of modular units. These complexes are amenable to computation, and we partly implemented the weight 3 complex in PARI/GP [PARI/GP version 2.15]; the scripts are available at [Brunault, 'K_4 of modular curves']. In Sections 4 and 5, we express Lalín's class \xi_E and the path \gamma_E in purely modular terms. The final computation is performed in Section 6, using the results of [Brunault and Zudilin, 'Modular regulators and multiple Eisenstein values']. In the appendix, we give tables of (conjectural) identities relating 3-variable Mahler measures and L(E, 3) for a number of elliptic curves E over Q.
Acknowledgements. I am grateful to Matilde Lalín, Riccardo Pengo, Wadim Zudilin and the International Groupe de travail on differential equations in Paris for exchanges which have been helpful in several parts of this paper. I would also like to thank Berend Ringeling for checking numerically several Mahler measure identities from the appendix.

2. The modular parametrisation
Consider the polynomial P(x, y, z) = (1 + x)(1 + y) + z. We keep the same notations as in the introduction, so that the Maillot variety W_P in (C^x)^3 is defined as
\[ W_P : \begin{cases} (1 + x)(1 + y) + z = 0, \\ \big(1 + \frac{1}{x}\big)\big(1 + \frac{1}{y}\big) + \frac{1}{z} = 0. \end{cases} \]
Eliminating z, we see that W_P is isomorphic to the smooth curve in (C^x)^2 given by
\[ (5)\qquad C : (1 + x)^2 (1 + y)^2 = xy. \]
Let E denote the closure of C in P^1(C) x P^1(C). We view E as a smooth projective curve defined over Q. It turns out that E is isomorphic to an elliptic curve of conductor 15 [22, (4.2)]. The PARI/GP commands
E = ellfromeqn((1+x)^2*(1+y)^2-x*y)
ellidentify(ellinit(E))
confirm that E is isomorphic to the elliptic curve with Cremona label 15a8. On the other hand, we know that the modular curve X_1(15) is isomorphic to 15a8, since they are both elliptic curves of conductor 15, and the period lattice of X_1(15) can be computed using modular symbols, agreeing with that of 15a8.
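For completeness, the elimination of z leading to (5) can be spelled out in one line: multiplying the two defining equations of W_P gives
\[ (1+x)(1+y)\cdot\Big(1+\frac{1}{x}\Big)\Big(1+\frac{1}{y}\Big) = (-z)\cdot\Big(-\frac{1}{z}\Big) = 1, \]
and since \big(1+\frac{1}{x}\big)\big(1+\frac{1}{y}\big) = \frac{(1+x)(1+y)}{xy}, this is exactly (1+x)^2(1+y)^2 = xy.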
Note that Stevens's conjecture [29, Conjecture II] is known in this case by [Stevens, Section 7]. In Proposition 4 below we give an explicit isomorphism X_1(15) ≅ E (note that the proof does not rely on floating point computations). An important feature of this parametrisation is that the functions -x and -y correspond to modular units on X_1(15). This is crucially used in Section 4 to relate Lalín's class \xi_E and the modular classes \xi_1(a, b) from [9, Section 6]. Even more, we need the functions -x and -y to be of the form u_1(a, b, c, d), a class of modular units introduced in [Brunault, 'On the K_4 group of modular curves'] and whose definition we now recall. Let N >= 1 be an integer. For any a = (a_1, a_2) in (Z/NZ)^2 / ±1, a ≠ (0, 0), we define
\[ \wp_a(\tau) = \wp\Big(\tau;\ \frac{a_1 \tau + a_2}{N}\Big) \qquad (\tau \in \mathbb{C},\ \mathrm{Im}(\tau) > 0), \]
where \wp(\tau; z) is the Weierstraß function. The function \wp_a is a modular form of weight 2 on the principal congruence group \Gamma(N). For any distinct elements a, b, c, d of (Z/NZ)^2 / ±1, we then define u(a, b, c, d) as the cross-ratio [\wp_a, \wp_b, \wp_c, \wp_d], with the convention \wp_0 = ∞. This is a modular unit on \Gamma(N). For distinct elements a, b, c, d of (Z/NZ) / ±1, we use the shortcut u_1(a, b, c, d) = u((0, a), (0, b), (0, c), (0, d)), which is a modular unit on Y_1(N) defined over Q. The properties of u(a, b, c, d) needed in this article can be found in [9, Section 3]. In the following proposition, we take N = 15.

Proposition 4. The curve E is parametrised by the following modular units on \Gamma_1(15):
\[ (6)\qquad x(\tau) = -u_1(1, 2, 3, 7)(\tau), \qquad y(\tau) = -u_1(2, 4, 6, 1)(\tau). \]
Moreover, the map \tau \mapsto (x(\tau), y(\tau)) induces an isomorphism \varphi: X_1(15) \xrightarrow{\cong} E defined over Q.

Proof. Let us show that u = -u_1(1, 2, 3, 7) and v = -u_1(2, 4, 6, 1) satisfy (1+u)^2(1+v)^2 = uv. For this we may replace u and v by their transforms under the Atkin-Lehner involution W_15: \tau \mapsto -1/(15\tau) on X_1(15), as this does not affect the equation. The units \tilde{u} = u ∘ W_15 and \tilde{v} = v ∘ W_15 can be expressed in terms of Siegel units of level 15 using [9, eq. (6)]:
\[ (7)\qquad \tilde{u} = -\frac{g_2 g_4}{g_1 g_7} = -1 - q + q^4 + q^5 - q^7 + O(q^8), \qquad \tilde{v} = -\frac{g_4 g_7}{g_1 g_2} = -q^{-2} - q^{-1} - 2 - 2q - 2q^2 - 2q^3 - 2q^4 + O(q^5), \]
where, for a in Z/NZ, a ≠ 0,
\[ g_a(\tau) = q^{N B_2(\hat{a}/N)/2} \prod_{\substack{n \ge 1 \\ n \equiv a \bmod N}} (1 - q^n) \prod_{\substack{n \ge 1 \\ n \equiv -a \bmod N}} (1 - q^n) \qquad (q = e^{2\pi i \tau}). \]
Here B_2(t) = t^2 - t + 1/6 is the Bernoulli polynomial and \hat{a} is the lift of a in {1, ..., N-1}. We are now going to compute the divisors of \tilde{u} and \tilde{v}. To this end, we recall the description of the cusps of the modular curve X_1(N). There is a bijection [15, Example 9.1.3]
\[ \{\text{cusps of } X_1(N)(\mathbb{C})\} \xrightarrow{\ \cong\ } \{(c, d) : c \in \mathbb{Z}/N\mathbb{Z},\ d \in (\mathbb{Z}/(c, N)\mathbb{Z})^\times\}/\pm 1, \]
which associates to a cusp \gamma∞ with \gamma in SL_2(Z) the class of the bottom row (c, d) of \gamma. Moreover, by [15, Section 9.3, p. 79], the Galois action on the cusps is described as follows: for \sigma in Aut(C), we have \sigma·(c, d) = (c, \chi(\sigma)d), where \chi(\sigma) in (Z/NZ)^x is characterised by \sigma(e^{2\pi i/N}) = e^{2\pi i\chi(\sigma)/N}. As a consequence, a complete set of representatives of the Galois orbits is provided by the cusps 1/k = \left(\begin{smallmatrix}1&0\\k&1\end{smallmatrix}\right)∞ with 0 <= k <= ⌊N/2⌋. Now we can compute the divisor of u_1(a, b, c, d) for distinct a, b, c, d in (Z/NZ)/±1 as follows.
Since this unit is defined over Q, it suffices to determine its order of vanishing at the cusps 1/k just described. By [9, Proposition 3.6], we have
\[ u_1(a, b, c, d)\big|_{\left(\begin{smallmatrix}1&0\\k&1\end{smallmatrix}\right)} = u((ka, a), (kb, b), (kc, c), (kd, d)). \]
The order of vanishing of this unit at ∞ is deduced from the expression of u(a, b, c, d) in terms of Siegel units [9, Proposition 3.7], taking into account that it should be computed with respect to the uniformising parameter q^{(k,N)/N}. Applying this in our situation, we obtain
\[ \mathrm{div}(u) = -2[1/2] + 2[1/7], \qquad \mathrm{div}(v) = -2[0] + 2[1/4]. \]
These cusps are rational. Now set F = (1+u)^2(1+v)^2 - uv. We see that F has poles of order at most 4 at 0 and 1/2, and is regular elsewhere. Moreover, we compute from (7) that F(-1/(15\tau)) = O(q^5) when Im(\tau) → +∞. Therefore F vanishes at order >= 5 at 0, and consequently F = 0. It remains to show that \varphi: X_1(15) → E is an isomorphism. Since X_1(15) and E are smooth, it suffices to check that \varphi is a birational map. We know that u has degree 2 as a function on X_1(15), while x has degree 2 as a function on E. It follows that \varphi is birational. □

3. The weight 3 complex of the modular curve Y_1(N)
Goncharov has defined in [17] polylogarithmic complexes which are expected to compute the motivic cohomology of arbitrary fields. We define in this section a modular complex C_N(3), which is a subcomplex of the weight 3 polylogarithmic complex attached to the modular curve Y_1(N). It is generated (in a suitable sense) by the Siegel units and the modular units u_1(a, b, c, d) from Section 2. Our construction can be seen as a weight 3 analogue of the weight 2 Euler complex E•_N introduced by Goncharov in [Goncharov, 'Euler complexes and geometry of modular varieties']. We also explain how to manipulate C_N(3) using PARI/GP. The constructions below work with no more effort for the modular curve Y(N) with full level N structure, using parameters in (Z/NZ)^2 instead of Z/NZ. However we have not implemented it, as the case of Y_1(N) suffices for our application.
We briefly recall Goncharov's polylogarithmic complexes in weight 2 and 3. Let F be any field. Define B_2(F) to be the quotient of Q[F^x \ {1}] by the subspace generated by the 5-term relations [17, Section 1.8]. The group B_3(F) is defined similarly as an explicit quotient of Q[F^x \ {1}] [17, Section 1.8], whose definition will not be needed here. For x in F^x \ {1} and n in {2, 3}, we denote by {x}_n the image of the generator [x] in B_n(F). Then the complex \Gamma(F, 2), in degrees 1 and 2, is defined as
\[ \Gamma(F, 2) : B_2(F) \longrightarrow \Lambda^2 F^\times \otimes \mathbb{Q}, \qquad \{x\}_2 \longmapsto (1 - x) \wedge x, \]
and the complex \Gamma(F, 3), in degrees 1 to 3, is defined as
\[ \Gamma(F, 3) : B_3(F) \longrightarrow B_2(F) \otimes F^\times \otimes \mathbb{Q} \longrightarrow \Lambda^3 F^\times \otimes \mathbb{Q}, \qquad \{x\}_3 \longmapsto \{x\}_2 \otimes x, \qquad \{x\}_2 \otimes y \longmapsto (1 - x) \wedge x \wedge y. \]
Goncharov conjectures that H^i(\Gamma(F, n)) is isomorphic to H^i_M(F, Q(n)). In the case F is the function field of a smooth curve Y over a field k, these complexes are endowed with residue maps \Gamma(F, n) → \Gamma(k(x), n-1)[-1] for every closed point x in Y. Goncharov then defines the complex \Gamma(Y, n) as the simple of the morphism of complexes \Gamma(F, n) → ⊕_{x∈Y} \Gamma(k(x), n-1)[-1], and he conjectures that H^i(\Gamma(Y, n)) is isomorphic to H^i_M(Y, Q(n)) [17, Section 1.15(b)].
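Note (an immediate remark of ours, not from the paper) that the two displayed maps in \Gamma(F, 3) compose to zero on generators, as they must in a complex:
\[ \{x\}_3 \longmapsto \{x\}_2 \otimes x \longmapsto (1 - x) \wedge x \wedge x = 0. \]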
We will consider these complexes in the case Y is the modular curve Y_1(N), and F is its function field. We will see, in particular, that they have natural subcomplexes built out of modular units.

Definition 5. Fix an integer N >= 1. We introduce the following sets of modular units on Y_1(N):
• U_1 consists of the Siegel units g_{0,a}, a in (Z/NZ) \ {0}, in O(Y_1(N))^x ⊗ Q;
• U_2 consists of the modular units u_1(a, b, c, d) in O(Y_1(N))^x, where a, b, c, d are distinct elements of (Z/NZ)/±1.
Moreover, we associate to them the following spaces:
• ⟨U_1⟩ is the Q-span of U_1 in F^x ⊗ Q;
• ⟨U_2⟩ is the Q-span of {u}_2, u in U_2, in B_2(F);
• ⟨U_2⟩_3 is the Q-span of {u}_3, u in U_2, in B_3(F).

With these definitions, the weight 2 modular complex can be defined as
\[ C_N(2) : \langle U_2 \rangle \longrightarrow \Lambda^2 \langle U_1 \rangle, \qquad \{u\}_2 \longmapsto (1 - u) \wedge u. \]
This complex is well-defined because U_2 is contained in ⟨U_1⟩ by [9, Proposition 3.8], and U_2 is stable under u ↦ 1-u from the definition of u_1(a, b, c, d) as a cross-ratio. It would be interesting to compare C_N(2) with the Euler complex E•_N defined by Goncharov in [19, Section 2.5]. We are now ready to introduce a version of the weight 3 modular complex.

Definition 6. The complex C_N(3) is the following subcomplex of \Gamma(F, 3) in degrees 1 to 3:
\[ C_N(3) : \langle U_2 \rangle_3 \longrightarrow \langle U_2 \rangle \otimes \langle U_1 \rangle \longrightarrow \Lambda^3 \langle U_1 \rangle. \]
We warn the reader that the group ⟨U_2⟩_3 in degree 1 may not be the right one. Indeed, the unit u_1(a, b, c, d) is by definition a cross-ratio, hence is a natural argument for the dilogarithm, but a priori not for the trilogarithm. However, the complex C_N(3) will suffice for our needs. Since the construction of C_N(3) involves only modular units, the elements of ⟨U_2⟩_3, ⟨U_2⟩ ⊗ ⟨U_1⟩ and \Lambda^3⟨U_1⟩ have trivial residues at every point of Y_1(N). In particular, C_N(3) embeds as a subcomplex of \Gamma(Y_1(N), 3), and we have natural maps in cohomology H^i(C_N(3)) → H^i(\Gamma(Y_1(N), 3)) in degree i in {1, 2, 3}. The case of interest to us is i = 2.
We have implemented part of the complex C_N(3) in PARI/GP, with the specific aim of comparing cocycles in degree 2. Firstly, the following lemma gives a natural way to represent modular units in ⟨U_1⟩.

Lemma 7. A basis of ⟨U_1⟩ is given by the Siegel units g_{0,a} with 1 <= a <= ⌊N/2⌋.
Proof. We have g_{0,-a} = g_{0,a} in F^x ⊗ Q, and by [Streng], the units g_{0,a} with 1 <= a <= ⌊N/2⌋ form a basis of (O(Y_1(N))^x / Q^x) ⊗ Q. □

Each unit in U_2 can be written in the basis of Lemma 7 using [9, Proposition 3.8]. Note that no computation of divisor is needed here, thanks to this choice of basis. We actually need to determine U_2 as a set, and so to check whether two given units u_1(a, b, c, d) and u_1(a', b', c', d') are equal. We remark that the leading coefficient of u_1(a, b, c, d) at the cusp 0 is equal to 1 by the discussion after [9, Proposition 3.8]. Combining this with Lemma 7, we see that two such units are equal if and only if their coordinates in the basis of ⟨U_1⟩ are equal.
We now consider the free vector space Q[U_2], and we quotient it by the following subspaces, encoding the relations between the symbols {u_1(a, b, c, d)}_2. From the definition of u_1(a, b, c, d) as a cross-ratio, the symmetric group S_4 acts on U_2 by permuting the indices, and this action factors through S_3. Moreover, because of the relations {1/u}_2 = {1-u}_2 = -{u}_2 in B_2(F) [31, VI, Lemma 5.4], we have the antisymmetry property:
\[ (8)\qquad \{u_1(a_{\sigma(1)}, a_{\sigma(2)}, a_{\sigma(3)}, a_{\sigma(4)})\}_2 = \varepsilon(\sigma)\{u_1(a_1, a_2, a_3, a_4)\}_2 \qquad (\sigma \in S_4), \]
for all distinct parameters a_i in (Z/NZ)/±1, where \varepsilon(\sigma) = ±1 is the signature. It thus suffices to consider those parameters satisfying 0 <= a < b < c < d <= ⌊N/2⌋. The elements {u_1(a, b, c, d)}_2 are also subject to the 5-term relations [9, Lemma 4.7]:
\[ (9)\qquad \sum_{j \in \mathbb{Z}/5\mathbb{Z}} \{u_1(a_j, a_{j+1}, a_{j+2}, a_{j+3})\}_2 = 0 \quad \text{in } B_2(F), \]
for any family (a_j)_{j in Z/5Z} of distinct elements of (Z/NZ)/±1.
We denote by R_2 the subspace of Q[U_2] generated by the antisymmetry relations (8) and the 5-term relations (9). Finally, we denote by Q the subspace of Q[U_2] ⊗ ⟨U_1⟩ generated by the symbols [u] ⊗ u with u in U_2, which correspond to the degree 2 coboundaries in C_N(3). In practice, in order to reduce the size of the objects, we only compute:
• a set U'_2 of representatives of the quotient U_2/S_3;
• the subspace R'_2 of Q[U'_2] generated by the 5-term relations;
• the subspace Q' of Q[U'_2] ⊗ ⟨U_1⟩ of degree 2 coboundaries.
The corresponding scripts are contained in the file K4-modular-complex.gp from [Brunault, 'K_4 of modular curves']. They can be applied in the following way. Say we have two degree 2 cocycles \xi and \xi' in \Gamma(Y_1(N), 3). Assume that they are both linear combinations of symbols {u_1(a, b, c, d)}_2 ⊗ g_{0,x}. We may then represent \xi - \xi' by an element of Q[U_2] ⊗ ⟨U_1⟩, and we check whether this element belongs to the subspace R_2 ⊗ ⟨U_1⟩ + Q. If so, then we can deduce that \xi and \xi' are cohomologous, and thus have the same image in K_4^{(3)}(Y_1(N)) under De Jeu's map [9, Theorems 5.3 and 5.4]. If \xi - \xi' does not belong to the subspace, we cannot conclude anything, as R_2 and Q may not contain all the relations in the respective groups.
The linear system involved in the above computation has size O(N^5) x O(N^6). Experimentally, we have found that the cardinality of U_2 for N = p prime is (p^2 - 1)(p^2 - 25)/192, which is smaller by a factor of about 3 than what we could expect, namely 6 \binom{(p+1)/2}{4}. Furthermore, it seems that the dimension of Q[U_2]/R_2 is equal to (p-1)(p-5)/12, which is also the number of triples (a, b, c) with 0 < a < b < c < p and a + b + c ≡ 0 mod p, where 2b < p; see [23, Sequence A242090]. If true, there should be a way to bypass the step of quotienting by R_2. This would result in a much smaller linear system for the comparison of cocycles.
It would be extremely interesting to extend the construction of the modular complex in this section to higher weight complexes C_N(n) with n > 3, as well as in higher dimension, replacing the modular curve Y_1(N) by Kuga-Sato modular varieties in order to deal with higher weight modular forms.

4. The Lalín class
Recall that Lalín's theorem (Theorem 2) expresses the Mahler measure of (1 + x)(1 + y) + z as the regulator of the class \xi_E of the cocycle \tilde\xi_E in \Gamma(E, 3) defined by
\[ \tilde\xi_E := \{-x\}_2 \otimes y - \{-y\}_2 \otimes x. \]
Our aim in this section is to relate \xi_E to the classes \xi_1(a, b) on X_1(15), which were introduced in [9, Section 6]. This is a purely algebraic computation making use of our implementation of the weight 3 complex of X_1(15) explained in Section 3. We first pull back \xi_E to the modular curve X_1(15) using the modular parametrisation \varphi. Using Proposition 4 and its proof, we have in the degree 2 cohomology of \Gamma(Y_1(15), 3)
\[ (10)\qquad \varphi^* \tilde\xi_E = \{u_1(1, 2, 3, 7)\}_2 \otimes \Big(\frac{g_4 g_7}{g_1 g_2}\Big) - \{u_1(2, 4, 6, 1)\}_2 \otimes \Big(\frac{g_2 g_4}{g_1 g_7}\Big), \]
with the shortcut g_k = g_{0,k} for k in Z/15Z. Let us denote by \tilde\xi_15 the right-hand side of (10). Lalín has shown that the cocycle \tilde\xi_E has trivial residues [22, Section 4.1, p. 213], so that \tilde\xi_15 has trivial residues at the cusps of X_1(15). The next task is to express \tilde\xi_15 in terms of the cocycles \tilde\xi_1(a, b) with a, b in Z/15Z.
We do this using the modular complex C_15(3) from Section 3. Using the function find_xi1ab from K4-modular-complex.gp [Brunault, 'K_4 of modular curves'], we detect the following simple expression for \tilde\xi_15.

Proposition 8. We have the equality of cocycles \tilde\xi_15 = -20 \tilde\xi_1(1, 4) modulo coboundaries {u}_2 ⊗ u with u in U_2. In particular, we have \varphi_*(\xi_E) = -20 \xi_1(1, 4).

The linear system involved in the proof of Proposition 8 has size 273 x 288.

5. The integration path
In Theorem 2, the integration path \gamma_E = \partial\Gamma_P is a closed path in E, and we would like to express it in terms of modular symbols on X_1(15), via the modular parametrisation of Section 2. This is a crucial ingredient in the computation of the regulator integral on E. We will do this carefully in order to certify the relation \gamma_E = \varphi_*(\gamma_15) (Proposition 9). Lalín [22, Section 4.1] has shown that \gamma_E is a generator of H_1(E, Z)^+, where (·)^+ denotes the subgroup of invariants under complex conjugation. So we first search for a generator \gamma_15 of H_1(X_1(15), Z)^+. We do this with the help of SageMath [SageMath 9.4]; see the notebook ModularSymbolGamma15.ipynb in [Brunault, 'K_4 of modular curves']. For any g in SL_2(Z), denote by [g] = {g0, g∞} the associated Manin symbol, viewed in the relative homology group H_1(X_1(15), {cusps}, Z). We obtain
\[ (11)\qquad \gamma_{15} = 2\left[\left(\begin{smallmatrix}1&9\\2&19\end{smallmatrix}\right)\right] - \left[\left(\begin{smallmatrix}0&-1\\1&11\end{smallmatrix}\right)\right] - \left[\left(\begin{smallmatrix}0&-1\\1&4\end{smallmatrix}\right)\right] + 2\left[\left(\begin{smallmatrix}0&-1\\1&2\end{smallmatrix}\right)\right]. \]
We therefore have \gamma_E = ±\varphi_*(\gamma_15). The precise sign is not strictly needed in what follows, as the Mahler measure is a positive real number and the final identity fixes the sign for us. However, we want to sketch a method to determine the sign rigorously, as it could be useful in more general situations, where the integration path \gamma need not be a generator of the homology group. In such a scenario, one wishes to ascertain an identity of the form \gamma = c·\varphi_*(\gamma_0), where \varphi is the modular parametrisation, \gamma_0 is a modular symbol, and c in Z is to be determined. The idea is to integrate an invariant differential form over the cycles to be compared. By [22, Section 4.1], an invariant differential form on E is given by
\[ \omega_E := \frac{-dx}{2(x + 1)^2(y + 1) - x}. \]
Using (6), we can compute the Fourier expansion of the pull-back of \omega_E to X_1(15):
\[ W_{15}^*(\varphi^* \omega_E) = -(q - q^2 - q^3 + O(q^4))\,\frac{dq}{q}. \]
A basis of \Omega^1(X_1(15)) is given by \omega_15 := 2\pi i f_15(\tau) d\tau, where f_15 = q - q^2 - q^3 + O(q^4) is the newform of weight 2 on \Gamma_1(15). Therefore W_15^*(\varphi^*\omega_E) = -\omega_15. Moreover, the involution W_15 has a fixed point \tau = i/\sqrt{15} in the upper half-plane, so it must act on the complex torus underlying X_1(15) as z ↦ z_0 - z for some z_0 (it cannot be a translation). It follows that W_15 acts as -1 on \Omega^1(X_1(15)), and we conclude that \varphi^*\omega_E = \omega_15.
Now let us integrate the forms \omega_E and \omega_15, and compare the signs of the integrals. Following [22, Section 4.1], the path \gamma_E is described using polar coordinates x = e^{i\theta}, y = e^{i\psi} with \theta, \psi in [-\pi, \pi], and is given by the equation cos(\theta/2) cos(\psi/2) = 1/4. Since the orientation of the Deninger chain \Gamma_P is induced by the product orientation of [-\pi, \pi]^2, its boundary \gamma_E is oriented counterclockwise in this square (see Figure 1). We can use the symmetries of \gamma_E to reduce the integration path. For any automorphism \sigma of E defined over R, we have \sigma^*\omega_E = \varepsilon(\sigma)\omega_E, where \varepsilon(\sigma) = 1 if \sigma preserves the orientation of E(R), and \varepsilon(\sigma) = -1 otherwise. Equivalently, \varepsilon(\sigma) = 1 if and only if \sigma = id or \sigma has no fixed point. Applying this with the symmetries (x, y) ↦ (1/x, y) and (x, y) ↦ (x, 1/y), which reverse the orientation of E(R) as well as that of \gamma_E, we obtain that \int_{\gamma_E} \omega_E is 4 times the integral over the path \gamma'_E pictured in Figure 1.
Figure 1. The Deninger chain \Gamma_P, its boundary \gamma_E and the path \gamma'_E.
After some computation, we get
\[ (12)\qquad \int_{\gamma_E} \omega_E = 4 \int_{\gamma'_E} \omega_E = 4 \int_0^{2\arccos(1/4)} \frac{d\theta}{\sqrt{16\cos^2(\theta/2) - 1}} > 0. \]
Now with the modular curve X_1(15), we wish to determine the sign of \int_{\gamma_15} \omega_15. For this, consider the linear map H_1(X_1(15), {cusps}, Z) → H_1(X_1(15), Q) provided by the Manin-Drinfeld theorem [Drinfeld]. Again with SageMath, we compute that the image of {0, ∞} is equal to -\frac{1}{16}\gamma_15 (see ModularSymbolGamma15.ipynb [Brunault, 'K_4 of modular curves']). It follows that
\[ (13)\qquad \int_{\gamma_{15}} \omega_{15} = -16 \int_0^{\infty} \omega_{15} = 16\, L(f_{15}, 1) > 0. \]
That L(f_15, 1) is positive can be ascertained without much effort using the rapidly convergent series L(f_15, 1) = 2\sum_{n=1}^{\infty} a_n e^{-2\pi n/\sqrt{15}}/n [13, Proposition 7.5.8], where the a_n are the Fourier coefficients of f_15. Namely, one may use the bound |a_n| <= n for n >= 1, which follows from the Hasse bound on E and the inspection of a_n for small n. Combining (12) and (13) with
\[ \int_{\varphi_*(\gamma_{15})} \omega_E = \int_{\gamma_{15}} \varphi^*\omega_E = \int_{\gamma_{15}} \omega_{15} > 0, \]
we come to the following conclusion.

Proposition 9. We have \gamma_E = \varphi_*(\gamma_15).

To be fully accurate (and in order to handle more general situations), ascertaining this equality requires computing numerically the integrals (12) and (13) with rigorous error bounds. This suffices since the ratio of these integrals is known to be an integer. The integral (12) is a complete elliptic integral which can be dealt with using the Arb library [Johansson, 'Arb'; Johansson, 'Numerical integration in arbitrary-precision ball arithmetic']. On the other hand, (13) involves integrating a modular form over a modular symbol. We can do it in the present situation thanks to the rapidly convergent series. In general, although PARI/GP [PARI/GP version 2.15] and Magma [Bosma et al.] can evaluate such integrals efficiently, we are not aware of implementations that prove error bounds for them.

6. Final computation
We denote by r_3(2) the Goncharov regulator map in degree 2 for the weight 3 complex of a smooth complex curve [Goncharov, 'Explicit regulator maps on polylogarithmic motivic complexes']. It sends a degree 2 cocycle to an explicit closed 1-form on this curve.
By (10) and Proposition 9, we have
\[ (14)\qquad \int_{\gamma_E} r_3(2)(\xi_E) = \int_{\varphi_*(\gamma_{15})} r_3(2)(\tilde\xi_E) = \int_{\gamma_{15}} r_3(2)(\varphi^* \tilde\xi_E) = \int_{\gamma_{15}} r_3(2)(\tilde\xi_{15}). \]
Note that the differential form r_3(2)(\tilde\xi_15) is defined only on the open modular curve Y_1(15). However, it has trivial residues at the cusps since the same is true for \tilde\xi_15, see Section 4. We may therefore compute the integral by choosing the representative of \gamma_15 given by (11). Note that this integral involves cusps but it is absolutely convergent by [9, Corollary 7.3]. The technical details of this procedure are explained at the end of [9, Section 8].

Lemma 10. Let u be a modular unit on X_1(N) such that 1 - u is also a modular unit. For any two cusps \alpha ≠ \beta in P^1(Q), we have
\[ \int_\alpha^\beta r_3(2)(\{u\}_2 \otimes u) = \mathcal{L}_3(u(\beta)) - \mathcal{L}_3(u(\alpha)), \]
where \mathcal{L}_3: P^1(C) → R is the single-valued trilogarithm defined in [18, Section 2.1].
Proof. By [18, Theorem 2.2], we have r_3(2)(\{u\}_2 ⊗ u) = r_3(2)(\delta(\{u\}_3)) = d r_3(1)(\{u\}_3) = d\mathcal{L}_3(u). □

Since the path \gamma_15 is closed, Lemma 10 implies that \int_{\gamma_15} r_3(2)(\{u\}_2 ⊗ u) = 0 for any u in U_2. Using Proposition 8, the computation (14) continues as
\[ (15)\qquad \int_{\gamma_E} r_3(2)(\xi_E) = -20 \int_{\gamma_{15}} r_3(2)(\tilde\xi_1(1, 4)). \]
We are now in position to apply the main result of [Brunault and Zudilin, 'Modular regulators and multiple Eisenstein values'], which computes
\[ G(a, b) := \int_0^\infty r_3(2)(\tilde\xi(a, b)) \qquad (a, b \in (\mathbb{Z}/N\mathbb{Z})^2), \]
under the assumption that the coordinates of a, b and a + b are non-zero. We may integrate along Manin symbols [g] = {g0, g∞} as well, noting that
\[ \int_{g0}^{g\infty} r_3(2)(\tilde\xi(a, b)) = \int_0^\infty r_3(2)(\tilde\xi(ag, bg)) = G(ag, bg) \qquad (g \in SL_2(\mathbb{Z})). \]
Recall also that \tilde\xi_1(a, b) = \tilde\xi((0, a), (0, b)). Expanding (15), we get
\[ \int_{\gamma_E} r_3(2)(\xi_E) = -20\big(2G((2, 4), (8, 1)) - G((1, 11), (4, 14)) - G((1, 4), (4, 1)) + 2G((1, 2), (4, 8))\big). \]
The assumption on the coordinates of the parameters is satisfied, and [12, Theorem 1] gives
\[ (16)\qquad \int_{\gamma_E} r_3(2)(\xi_E) = \pi^2 L'(F, -1) \]
with
\[ (17)\qquad F = -8(G_{2,1}G_{8,-4} + G_{2,-1}G_{8,4}) + 4(G_{1,14}G_{4,-11} + G_{1,-14}G_{4,11}) + 4(G_{1,1}G_{4,-4} + G_{1,-1}G_{4,4}) - 8(G_{1,8}G_{4,-2} + G_{1,-8}G_{4,2}). \]
Here G_{a,b} is a shortcut for the Eisenstein series G^{(1);N}_{a,b} defined in [Brunault and Zudilin, Introduction] for arbitrary level N by
\[ G^{(1);N}_{a,b}(\tau) = a_0\big(G^{(1);N}_{a,b}\big) + \sum_{\substack{m, n \ge 1 \\ (m,n) \equiv (a,b) \bmod N}} q^{mn/N} - \sum_{\substack{m, n \ge 1 \\ (m,n) \equiv -(a,b) \bmod N}} q^{mn/N} \qquad (a, b \in \mathbb{Z}/N\mathbb{Z}). \]
In our situation the indices a, b are non-zero modulo 15, so that the constant terms a_0(G_{a,b}) all vanish. The functions G_{a,b} are Eisenstein series of weight 1 on \Gamma(15). Note that the products G_x G_y appearing in (17) are actually power series in q, because x_1 x_2 + y_1 y_2 is divisible by 15 for each such product. It follows that F belongs to M_2(\Gamma_1(15)). We have written a script K4-reg-Lvalue.gp [Brunault, 'K_4 of modular curves'] to automate the application of [12, Theorem 1] and compute the q-expansion of the resulting modular form to arbitrary precision. We find that F = -8f_15 + O(q^21), where f_15 is the newform associated to E.
Moreover, the Sturm bound for the space M_2(\Gamma_1(15)) is equal to 16 (apply [28, Sturm's theorem, 9.4.1.2] with the group ±\Gamma_1(15), which has index 96 in SL_2(Z)). This means that if two modular forms F_1 and F_2 in this space satisfy F_1 = F_2 + O(q^17), then F_1 = F_2. In our situation, this allows us to certify that F = -8f_15. Using Theorem 2 and (16), the Mahler measure finally equals
\[ m(P) = \frac{1}{4\pi^2} \int_{\gamma_E} r_3(2)(\xi_E) = \frac{1}{4\pi^2} \cdot \pi^2 L'(-8f_{15}, -1) = -2L'(E, -1). \]
This concludes the proof of Theorem 3.

Appendix. Tables of 3-variable Mahler measures
We would like to give here a list of conjectural identities for 3-variable Mahler measures involving L(E, 3) for several elliptic curves E over Q. It is possible that our methods can be applied to prove at least some of these identities. The success of the approach will depend very much on the modular parametrisation of the elliptic curve; in our case, Proposition 4 was crucial. This is similar to what happens for the 2-variable Mahler measures, where the proofs using the Rogers-Zudilin method require the curve to be parametrised by modular units [Brunault and Zudilin, Section 8.4 and Chapter 9].
Boyd and Rodriguez Villegas [Boyd, 'Explicit formulas for Mahler measure'] discovered several identities of type m(P(x, y, z)) = r·L'(E, -1) with r in Q^x by looking at polynomials of the form P = A(x) + B(x)y + C(x)z where A, B, C are products of cyclotomic polynomials. Boyd found further examples in [Boyd, 'Mahler's measure and L-functions of elliptic curves evaluated at s = 3'; Boyd, 'Conjectural explicit formulas for the Mahler measure of some three variable polynomials']. We extended Boyd's search with A, B, C of degree up to 5 and found a few other examples, see Table 1 below (we do not claim to have spotted all identities for this range of A, B, C). Table 2 displays two Mahler measures which involve a combination of L(E, 3) and \zeta(3). Note that a \zeta(3) term also appears in the main result of [Brunault and Zudilin, 'Modular regulators and multiple Eisenstein values'] (see Theorem 1 there). In the tables below, the curve E is given by its Cremona label, and the integer g is the genus of the Maillot variety W_P (or a component of it) whose Jacobian has E as an isogeny factor.
We also looked at polynomials P(x, y, z) which have degree 1 in each variable x, y, z, and all of whose coefficients are ±1 (or zero). It seems to be the case that every such polynomial is exact. The identities found are collected in Table 3. The first entry in this table is not of this shape but we include it for completeness; it already appears in [Brunault and Zudilin, 'Many Variations of Mahler Measures']. Ringeling computed numerically the Mahler measures in Table 3, and the identities seem to hold to at least 100 digits. A particular feature of Table 3 is the appearance of the elliptic curve 36a1, which has complex multiplication. The elliptic curve 450c1 is also the first example with a curve of rank 1.

Table 1. Conjectural identities m(P) ?= r·L'(E, -1).
P | E | r | g | Source
(x - 1)^3 + (x + 1)(y + z) | 14a4 | -6 | 2 | [5, p. 21], [8]
(1 + x)^2 + (1 - x)(y + z) | 20a1 | -2 | 1 | [6], [11, p. 81]
x + 1 + (x - 1)y + (x^2 - 1)z | 20a1 | -3/2 | 3 | [8]
(1 + x)^2(1 + y) + z | 21a4 | -3/2 | 1 | [6], [11, p. 81]
1 + x + y - xy + z | 21a1 | -5/4 | 1 | [6], [11, Section 6.3]
(1 + x)^2 + y + z | 24a4 | -1 | 1 | [7, Section 8], [6]
(x + 1)^2 + (x^2 - 1)y + (x - 1)^2 z | 48a1 | -2/5 | 3 | [8]
(x + 1)^2 + (x - 1)^2 y + z | 225c2 | -1/48 | 1 | [6, 8]

Table 2. Conjectural identities m(P) ?= r·L'(E, -1) + s·\zeta'(-2).
P | E | r | s | g | Source
x^2 - x + 1 + (x + 1)(y + z) | 30a1 | -2/9 | -476/27 | 1 | [Brunault, 'Unpublished list of conjectural identities for 3-variable Mahler measures']
(x - 1)^3 + (x + 1)^3(y + z) | 108a1 | -2/15 | 112/15 | 2 | [5, p. 21], [Brunault, 'Unpublished list of conjectural identities for 3-variable Mahler measures']

Table 3. Conjectural identities m(P) ?= r·L'(E, -1).
P | E | r | g | Source
1 + x + y + z + xy + xz + yz | 14a4 | -5/2 | 1 | [8]
1 + x + y + z + xy + xz + yz - xyz | 36a1 | -1 | |

ÉNS Lyon, Unité de mathématiques pures et appliquées, 46 allée d'Italie, 69007 Lyon, France
Email address: [email protected]
URL: https://perso.ens-lyon.fr/francois.brunault
The author was supported by the research project "Motivic homotopy, quadratic invariants and diagonal classes" (ANR-21-CE40-0015) operated by the French National Research Agency (ANR).
04114464
en
[ "shs.gestion" ]
2024/03/04 16:41:24
2019
https://theses.hal.science/tel-04114464/file/ZYLSTRA.pdf
Andrew Zylstra
MARKETING EXPENDITURES, EVEN WHEN CEO COMPENSATION AND BLOCKHOLDERS ARE TAKEN INTO ACCOUNT
Keywords: institutional investors, investment horizons, top executives' compensation, advertising spending, myopic management
This paper studies whether investor horizons influence marketing expenditures. We find that high investor turnover, our proxy for investor horizon, is associated with a higher probability to reduce marketing expenditures. We verify our results using three alternative measures of investor turnover and three alternative definitions of marketing expenditure. We test the direction of causality using a panel vector autoregressive model to ensure that our results are not driven by firms with myopic management of marketing resources attracting short-term investors. We find that blockholders mitigate the effect of high investor turnover and that CEOs with higher long-vs-short term compensation do not mitigate the effect of short-term investors. These results suggest that the existence of myopic management of marketing expenditures is more than just an issue of agency conflicts. Rewards designed to ensure long-term shareholder loyalty may help reduce the myopic management of marketing expenditures.
Remerciements / Acknowledgements
"It's been a long time coming." The French translation of this oft-cited English expression seems even more to the point: « C'est un moment que nous attendions depuis longtemps. » When I started writing my acknowledgements, this expression came to mind. It seems appropriate given how long it has taken to reach the thesis defense. However, I would hasten to add, "but what a time it's been." A thesis is a personal adventure. I have met a lot of great people who have guided me, pushed me and laughed with me through these years. It's been a stimulating adventure that has broadened my horizons in many ways. Many people have contributed to making the years I have spent at ESCP fulfilling years. I hope my thanks reflect their contributions faithfully.
My first thanks go to Professor Sandrine Macé, who was kind enough to take a look at a lost PhD student's work and ended up having me, the lost one, as her PhD student. You have taught me a lot about marketing and academic research and many other things. Your constant encouragement pushed me to do better. The thesis I am about to defend owes a great deal to you.
I would like to thank Professor Paul Valentin Ngobo and Professor Pascal Alphonse for having accepted to act as referees for this thesis. I am grateful that you have taken the time and contributed to improving the quality of my work. Your comments during the pre-defense helped me improve the quality and structure of this thesis considerably. I am also grateful that Professor Philippe Aurier and Professor Sophie Changeur accepted to be members of the thesis jury. I am honored that you have evaluated and contributed to improving the quality of my work.
I am deeply appreciative that Professor Christophe Moussu has accepted to be a member of my jury. Not just for being a jury member but more generally for helping me when I doubted myself and this thesis. Our conversations over the years have made a major impact on me and this thesis, and I hope you enjoy reading it. I would also like to thank the professors of the finance department of ESCP Europe who have supported me during my thesis years.
Christophe Thibièrge, Cécile Kharoubi, Pramuan Bunkanwanicha, Alberta Di Giuli, Anne Gazengel, Fahmi Ben Abdelkader, Houdou Basee Mama, thank you for your comments and help. I would like to acknowledge in particular Professor Michael Troege, whose support, advice and comments have been a great help to me during my thesis. I would like to thank the Ecole Doctorale de Management Panthéon-Sorbonne and the PhD program of ESCP Europe for having put up with me and supporting me. I would like in particular to thank Hervé Laroche and Christine Rocque. Hervé, you trusted me to finish and I think this time it will end well. Christine, thanks for doing the stuff that needs doing in a PhD program with a smile and being patient when answering all questions. And thank you Claire Dambrin for getting me here. I would also like to thank the Labex ReFi for its support, both materially and in terms of research. I would obviously like to thank my unfortunate roommate Arthur Petit-Romec, with whom I spent many hours conversing, sometimes even about the thesis. I will not repeat your jokes either, but thank you for listening to my problems and making sure I did not lose sight of the greater goal, completing my thesis. You made it possible for me to finish it. Thank you! For the past 8 years, I have had the pleasure of sharing my office with the PhD gang of "les Bluets." You are a unique group of people from many different horizons united in a common cause: a PhD. You are more than just co-workers. You have become friends in the true sense of the term. You welcomed me into your midst and for that I am grateful: Emma, Isabelle, Anissa, Xavier, Annalisa, Nora, Caro, Alex (my sensei in Stata and many, many other things), Jean-Christophe, Alban, Olivier, Penelope, Jean-Christophe, Marianne, José, Pilar, Stéphane, Cylien, Francois-René, Miona, Antoine, Domitille and Guillaume. And I haven't mentioned yet the 'oldies', in relative terms of course, i.e., those who started before me and who showed me the many possible exits after a thesis: Violette, Marie, Mathilde, Véronique, Sébastien, Xavier, Anna, Aurélie, Renata, Christelle and Arnaud. And of course, my thanks go to my parents, without whom I wouldn't be here. I am grateful to you for giving me the curiosity to go off and explore subjects and try to understand what causes things to happen. Finally, I would like to thank my family, without whom this thesis would not have been possible. First, thank you Sophie for putting up with my mad wish to get a PhD and putting up with me and my 'obscure' subject all these years. And thanks to my children Zoé and Lucas, who have known me for more years as a PhD student than as not. Without you guys, your effort and your support, this thesis would not have been possible.
RÉSUMÉ
Afin d'approfondir les connaissances sur la relation entre le marketing et le marché des actions, cette thèse questionne l'éventualité selon laquelle la perspective des « effets réels des marchés financiers » (Bond, The Real Effects of Financial Markets) est adaptée à la fusion des deux courants de l'interface marketing-finance. Les quatre études de cette thèse font la démonstration suivante : les flux d'informations transmis par les cours des actions sont bidirectionnels entre les investissements marketing et les marchés secondaires.
Les deux premières études (chapitres 3 et 4) montrent de façon empirique l'impact des flux d'informations provenant des investissements marketing sur les marchés secondaires, tandis que les troisième et quatrième études (chapitres 5 et 6) montrent de la même manière l'impact des flux d'informations provenant du marché des actions sur les investissements marketing. Réunies, ces quatre études attestent que l'information circule de façon bidirectionnelle entre les investissements marketing et les marchés secondaires, ce qui met en relief les débats autour de la perspective « des effets réels des marchés financiers ». Nous présentons deux conclusions relatives à ce résultat. Dans un premier temps, nous soutenons l'idée selon laquelle la perspective des « effets réels des marchés financiers » devrait être superposée à l'interface marketing-finance car elle améliore notre compréhension de ces deux axes. Elle nous apporte également un cadre théorique adéquat pour examiner la manière par laquelle les investissements marketing reflètent et impactent les informations du marché des actions. Dans un second temps, la superposition de la perspective des « effets réels des marchés financiers » à l'interface marketing-finance ouvre la voie à de nombreuses possibilités de recherche permettant d'en savoir plus sur les interactions bidirectionnelles entre les investissements marketing et le marché des actions.
Abstract
To obtain deeper insights into the relationship between marketing and equity markets, this thesis investigates whether the 'real effects of financial markets' perspective (Bond, The Real Effects of Financial Markets) is suitable for integrating the two streams of the marketing-finance-accounting interface research area. The four studies in this thesis highlight the bidirectional flows of information in stock prices between marketing investments and equity markets. The first two studies (Chapters 3 and 4) show empirically the impact of information flows from marketing investments to equity markets, while the third and fourth studies (Chapters 5 and 6) show empirically the inverse flow of information from equity markets to marketing. Together, the four studies suggest that information flows bidirectionally between marketing investments and equity markets, reflecting the contentions of the 'real effects of financial markets' perspective. We make two arguments based on this finding. First, we contend that the 'real effects of financial markets' perspective should be transposed onto the marketing-finance interface because it enhances our understanding of the two research streams of the marketing-finance interface and provides a suitable theoretical framework to account for how marketing investments both affect and reflect information in equity markets. Second, transposing the 'real effects of financial markets' perspective onto the marketing-finance interface opens up many research possibilities to generate new insights into the two-way interactions between marketing investments and equity markets.
Uncertainties about the effectiveness of marketing, increasing pressure from capital markets on top management teams and the perception of marketing expenditures as merely costs expensed on the income statement prompted marketers to prove the value relevance of marketing investments (see for example the research priorities of the Marketing Science Institute from 2014 to 2016), leading to the emergence of the marketing-finance interface about twenty years ago.
This research stream explores whether and how marketing investments create shareholder value and how equity markets reflect information about marketing expenditures such as advertising and R&D, and about the marketing assets such as brands and customer satisfaction that these expenditures create (see Srinivasan & Hanssens, 2009, for a review). The second research stream in the marketing-finance interface, which emerged more recently, considers how financial market participants such as financial analysts, investors and bondholders may hinder the ability of marketing investments to generate value for shareholders (see Chakravarty, 'Putting the cart before the horse: short-term performance concerns as drivers of marketing-related investments', for a review).
Literature Review
• Marketing investments and investors
• Investors and marketing investments
• Information in academic research
• The real effects of financial markets
• Brokerage brands and investors
• Brokerage brands and competitors
• Stock mispricing and marketing investments
• Investor horizons and marketing investments
• Conclusion
We use the term perspective to describe the 'real effects of financial markets' because the authors argue for a broadening of the efficient market theory and not a new conceptual framework. This thesis argues this perspective would confer on the marketing-finance interface a more solid theoretical grounding and open up new areas of research that reflect the two-way information flows. Recently, finance has investigated an approach from economics that considers that information flows may be bidirectional and studies their effects on real management decisions. Security analysts traditionally looked upon the rise and fall of a firm's stock price as an indicator of investor expectations about the firm's future cash flows. The stock price, however, can also reflect investor agreement or disagreement with the management's decisions. It can consequently affect what decisions are taken as managers glean information from stock prices. The 'real effects of financial markets' perspective studies how the information flows are bidirectional between companies and financial markets, in particular secondary markets. Under this perspective, stock prices both reflect information about investor expectations and convey information to managers. We mobilise the 'real effects of financial markets' perspective to better understand the relationship between marketing investments and financial markets. To explore this perspective, we adopt a multidisciplinary approach, bringing together research from marketing and finance.
Research questions
The starting point of this thesis is the information conveyed by stock prices, which has been studied by academic research for 50 years (Akerlof, 1970; Fama, 1970; Rappaport, 1987). The information conveyance led to the efficient market hypothesis, under which stock prices aggregate all available information (Fama, 1970). This thesis builds on the role of stock price as an aggregator of information to investigate the influence of information in stock prices on the relationship between marketing investments and equity markets.
We use the term marketing investments in this thesis rather than marketing action (e.g., advertising expenditure) or marketing asset (e.g., brand) because the term investment englobes 1/ the action or process of investing money for profit, that is to say a marketing action, and 2/ the outcome created by the action of investing that will generate wealth in the future, or, in other words, the marketing assets that marketing expenditures create. This thesis argues that the relationship between marketing investments and financial markets is a bidirectional relationship, with information conveyed by share prices playing a key role in both directions. The bidirectional flows reflect the emerging body of research into the real effects of financial markets (Bond, The Real Effects of Financial Markets). The authors argue that the theory of informationally-efficient markets should be broadened to include the bidirectional flows of information between all market players in order to have a better understanding of the real effect of equity markets, and in particular the effect of secondary markets on firms. We argue that the marketing-finance interface research area should incorporate the bidirectional information flows that current research considers separately, based on the 'real effects of financial markets' perspective. In so doing, the marketing-finance interface acquires a stronger conceptual basis and opens up new research possibilities. The general research question reflects our argument about applying the 'real effects of financial markets' perspective to the two research streams of the marketing-finance interface by considering whether information flows are bidirectional between marketing and equity markets: Does the information in stock prices flow bidirectionally between marketing investments and equity markets? In the four studies in this thesis described briefly below, we investigate the following four research sub-questions. The first two research sub-questions assess whether information flows about marketing investments influence equity markets. The last two research sub-questions investigate whether information from investors influences marketing investments. Together, the four research sub-questions are designed to study whether the information flows between marketing investments and equity markets are bidirectional.
Research sub-question 1 (Study 1)
Research sub-question 1 focuses on the impact of brokerage house brands, a type of marketing investment, on equity investors. More specifically, we seek to ascertain whether the information contained in the brokerage house brand influences investor response to recommendation changes. Does the information in brokerage house brand signals matter for equity investors? To address this question, we use the brand signal model (Erdem, 'Brand equity as a signaling phenomenon') based on information economics in the context of an event study to investigate the impact of the information conveyed by brokerage house brands on investor response to recommendation changes.
Findings and contribution
Overall, this thesis determines that the information flows between marketing investments and investors are bidirectional. This aspect of information flows has thus far been ignored by researchers in the marketing-finance interface. We argue that the 'real effects of financial markets' perspective provides a suitable framework for assessing the impact of the two-way information flows and opens up new research possibilities to better comprehend the two-way effects.
Studies 1 and 2 investigate the effect of information flows from marketing investments to equity markets. Study 1 shows that information in brokerage house brands, a type of marketing investment, influences equity investors. We further show four characteristics of brokerage houses that influence investor response to the brokerage house brand. In so doing, we show that brokerage house brands influence firm pricing and develop a methodology to estimate a brokerage house's brand equity. Study 2 shows that brokerage house brands impact competitors in addition to investors. We further show how brokerage house characteristics influence competitors. Brands contain information for competitors about the leadership of a brokerage house on a particular stock, and the brokerage house's characteristics contribute to this leadership. Combined, the findings of Studies 1 and 2 suggest that information flows from marketing investments to equity markets. They also suggest that brokerage houses perceive competing brands differently from investors and that what matters to investors in a brand differs from what matters to competitors. Studies 3 and 4 examine the flow of information from investors to marketing investments. Study 3 determines that how investors price a stock affects marketing investments. We show empirically that a stock's mispricing has strong, negative impacts on advertising and R&D expenditures. In addition, a firm's dependence on equity financing moderates the relationship. We show that stock mispricing may drive cuts to marketing expenditures and that the irrationality of stock prices may affect marketing in firms. Study 4 determines that information about investor horizon influences marketing investments. We show that investor turnover is associated with a higher probability of cuts to marketing expenditures. We also show that CEO compensation does not mitigate this effect, suggesting that the existence of myopic marketing management is more than just a matter of agency conflicts. Taken together, Studies 3 and 4 demonstrate that information flows from equity markets to marketing investments. In other words, information about investors reflected in stock prices affects how firms invest in marketing. The four studies combined show the bidirectional nature of information flows between marketing investments and equity markets. The implications of this finding, limitations and suggestions for future research are discussed in the conclusion.

Research design

The general research design of the thesis, shown in Figure 5, includes arrows that represent the relevant direction of information flows between marketing investments and investors. The top arrow represents flows from marketing investments to equity markets. The bottom arrow indicates flows from equity markets to marketing investments. The relevant players we study are indicated next to the chapter numbers.

Figure 5 - General research design

Logically, the sender of information in Chapters 3 and 4 would be the receiver of the information in Chapters 5 and 6. In other words, corporations would be the sender of information in Chapters 3 and 4 and the receiver of information in Chapters 5 and 6. However, the sender of information in Chapters 3 and 4 is brokerage houses and the receiver in Chapters 5 and 6 is corporations. We do not follow this logical approach because the impact of corporate brands on investors has been studied extensively elsewhere, so our thesis would not make a contribution.
We opt instead to use brokerage houses as senders of information in Chapters 3 and 4 because investors are their direct clients. Although prior marketing research establishes that equity investors take into account the brand equity of the corporations they invest in (S. Srinivasan & Hanssens, 2009), it does not indicate whether the marketing investments of brokerage houses directly affect investors. The outcome for this thesis is that the sender of information to investors via marketing investments is brokerage houses (top arrow) and that the receiver of information from equity investors via stock prices is corporations (bottom arrow).

Epistemology

When considering a research subject, the obvious issues are: 1/ what is the best way to apprehend the subject; and 2/ what is the best method to clearly show the interest of the subject, present the theoretical choices, explain the methodological choices, justify results, and, above all, make the thesis as coherent as possible? Our answer to these questions leads us to our choice for the format of this thesis, namely, a literature review and four studies that respond to the four research sub-questions. Each study adopts the format of hypotheses, data collection, data analysis and results. Our thesis therefore adopts a hypothetico-deductive approach. All research gives rise to epistemological questions. This thesis takes a positivist paradigm for two reasons:
i. Our thesis is framed in terms of "why and for what reasons?" In other words, whether and how stock prices can convey information between marketing investments and equity markets, rather than the interpretivist paradigm that asks questions in terms of "for what reasons do actors …?" or the constructivist paradigm that asks "for what purpose?"
ii. The validity criteria of this thesis correspond to the positivist paradigm: verifiability, confirmability and refutability.

Thesis organisation

This thesis starts with this introductory chapter, then the literature review (Chapter 2), four empirical studies (Chapters 3, 4, 5 and 6) and finally the conclusion. Chapter 2 reviews the relevant literature for the thesis. It starts with an unnumbered introduction, then the two streams of the marketing-finance interface that reflect the two directions of information flows studied in marketing (2.1 and 2.2), how information has been studied in the academic literature (2.3) and finally a look at the accounting and finance research into information in financial markets and the real effects of financial markets (2.4). Chapters 3, 5 and 6 (Studies 1, 3 and 4 respectively) are structured in the format of an academic study, with an introduction, theoretical framework, empirical analysis, results and discussion. Chapter 4 (Study 2) contains an introduction, results and discussion without a theoretical framework or empirical analysis section. We adopt this approach because Chapter 4 applies the brand signal model developed in Chapter 3 to competing brokerage houses. Chapter 3 investigates how brands, a type of marketing investment, contribute to information in equity market prices using an event study. Chapter 4 investigates how brands contribute to stock market information efficiency using a brokerage house's leadership status. Chapter 5 studies the information in stock prices that leads to stock mispricing and whether and how it affects marketing investments.
Chapter 6 studies the impact of investor horizon information in stock prices on marketing investments. Chapter 7 sets out the results, theoretical and managerial contributions, limitations and future research. Chapter 8 contains the French résumé of the thesis and Chapter 9 the references.

Introduction

This section describes the main theoretical fields mobilized in this thesis. Then, in sections 1 through 4 of this chapter, we explore the relevant literature. We start by looking at the first stream of the marketing-finance interface, i.e., how financial markets respond to marketing investments (section 2.1), then the more recent second stream of the marketing-finance interface, i.e., how financial markets impact marketing investments (section 2.2), the study of information in marketing (section 2.3), the 'real effects of financial markets' perspective (section 2.4) and finally we conclude (section 2.5).
iii. Section 2.1: The first stream of the marketing-finance interface studies whether and under what conditions stock prices reflect information about marketing investments. We see how questions about marketing accountability and marketing's ability to generate value for shareholders drove the emergence of the marketing-finance interface as a separate research area in marketing. We then study how marketing investments contribute to shareholder returns, and end with a description of the main empirical methods used in this stream, particularly event studies, since Chapter 3 (Study 1) employs this methodology.
iv. Section 2.2: The second stream of the marketing-finance interface considers how information from financial markets affects marketing investments. We see how the discretionary nature of marketing expenditures makes them vulnerable to manipulation by managers. Then we consider the conditions that may prompt managers to behave myopically towards marketing investments, grouped into three themes: 1/ REAM (real activity manipulation), 2/ firm financing and 3/ stock price information and marketing investments.
v. Section 2.3 considers how information has been studied in marketing research. We start by considering how imperfect and asymmetric information creates uncertainty about the quality of brokerage house research. We consider how consumers look for information and determine its role in consumer purchases. We then consider how signals help resolve consumer uncertainty about quality, then look at research into brands as information, first in general and then in services, which is the sector we focus on in our studies in Chapters 3 and 4 (Studies 1 and 2).
vi. Section 2.4 studies how the 'real effects of financial markets' perspective impacts the theory of informationally-efficient markets. Next, we consider how managers and financial markets contribute to the real effects. We consider managerial motivations for reacting to the information in stock prices via personal motivations, catering and learning. Then we consider the information in stock prices arising from limited arbitrage, investor expectations, monitoring and corporate funding needs. Finally, we consider the literature surrounding mispricing and investor horizons in finance, which we study in the context of marketing investments in Chapters 5 and 6 (Studies 3 and 4).
vii. Section 2.5 concludes and explains the choice of subjects and levels of analyses included within this thesis.
2.1 How marketing investments affect financial markets

The marketing-finance interface emerged as a major area of study in marketing research as marketing practitioners and academics realized, starting mainly in the 1990s, that criticism over marketing accountability needed to be addressed. We study this criticism, the reasons behind it and the research results in section 2.1. We see how information about firm marketing investments is reflected in stock prices. Over time, a second stream emerged, which considers how information from investors influences marketing investments, which we consider in section 2.2. The two streams taken together reflect the bidirectional information flows that, we argue in this thesis, can be consolidated using the 'real effects of financial markets' perspective (section 2.4).

Impetus for creating the marketing-finance interface

The increasing pressure on marketing departments and researchers alike to show how marketing creates value for shareholders prompted the emergence of the marketing-finance interface. Calls for marketing to prove its contribution to the firm are not new [START_REF] Rust | Measuring Marketing Productivity: Current Knowledge and Future Directions[END_REF][START_REF] Stewart | Marketing Accountability: Linking Marketing Actions To Financial Results[END_REF], giving rise to the notion of marketing accountability for firm investments in marketing. [START_REF] Ambler | Marketing: the trouble with finance[END_REF] describes how senior management and CFOs focus on shareholders' returns and ignore metrics favoured by marketing departments such as customer loyalty, customer satisfaction and brand awareness because they either do not understand them or do not understand how these metrics affect shareholder returns. A Fortune 100 CFO quoted in [START_REF] Stewart | Marketing Accountability: Linking Marketing Actions To Financial Results[END_REF] sums up the perception of marketing's unaccountability in this way: "Marketing is not strategic. It's just tactics and we just control the cost." A second factor that hinders marketing accountability is that marketing assets (e.g., brands, customer relationships) are intangible and are surrounded by asymmetric information, which creates uncertainty about their value [START_REF] Barth | Analyst Coverage and Intangible Assets[END_REF]. Tangible assets, however, generate earnings in the short term, whereas intangible assets take longer to be priced into share prices [START_REF] Srivastava | Market-Based Assets and Shareholder Value: A Framework for Analysis[END_REF]. (Mouncey 2009) reports that almost 80% of US firms' assets are intangible, and these intangible assets may be better indicators of future stock returns than the tangible assets that firms are required to disclose. This intangibility and the difficulties in measuring the value of marketing investments make holding marketing departments accountable for expenditures difficult. Note, however, that the debate about marketing assets does not suggest that all marketing investments should be capitalized on the balance sheet. Creating unlimited accounting assets from marketing investments would create new problems for marketing and accounting that go beyond the scope of this thesis, but this limit needs to be mentioned.

The issue of marketing accountability

This thesis follows the American Marketing Association definition of marketing accountability as "responsibility for the systematic management of marketing resources and processes to achieve measurable gains" (R. K. S.
[START_REF] Rao | Marketing Initiatives, Expected Cash Flows, and Shareholders' Wealth[END_REF]. Accountability for marketing resources is a struggle because the outcomes of marketing actions such as pricing and advertising are often measured by marketing via their sales impact [START_REF] Leone | Generalizing What is Known about Temporal Aggregation and Advertising Carryover[END_REF], and not via stock returns and other financial measures, which CEOs and shareholders use. Marketing's traditional focus on ensuring that products are successful was not enough to ensure accountability. The underlying assumption in marketing departments was that as long as product-market results, i.e., revenues, were positive, good financial results would follow [START_REF] Srivastava | Market-Based Assets and Shareholder Value: A Framework for Analysis[END_REF]. Furthermore, [START_REF] Lehmann | Journal Evolution and the Development of Marketing[END_REF] observes that marketing is often insular in nature, which has led to a push for metrics that express the contribution of marketing in terms that senior management understands (e.g., stock returns, profits). To ensure marketing remains a vital part of modern firms, (P. F. [START_REF] Anderson | Marketing, Strategic Planning and the Theory of the Firm[END_REF] argues that marketing research needs to take into account the firm as a whole. Furthermore, looking at marketing in terms of firm performance measures may enhance the status of marketing in firms [START_REF] O'sullivan | Marketing Performance Measurement Ability and Firm Performance[END_REF]. This focus on assessing marketing's performance in terms of shareholder-relevant metrics is one of the specific characteristics of the marketing-finance interface stream of literature.

Conceptualizing how marketing investments create value for investors

Our definition of marketing investments follows that of [START_REF] Rust | Measuring Marketing Productivity: Current Knowledge and Future Directions[END_REF]S. Srinivasan & Hanssens, 2009;[START_REF] Srivastava | Market-Based Assets and Shareholder Value: A Framework for Analysis[END_REF], which provide theoretical frameworks about how marketing outlays create marketing assets. These outlays, which we call marketing actions in this thesis, are the expenditures that firms make to enhance customer loyalty or brand equity through advertising or customer relationship management. In this section, we focus on how marketing actions (e.g., promotions, marketing communication, etc.) influence firm performance in the stock market. The starting point of marketing creating value for investors is a firm's marketing strategy, e.g., product innovation, price promotions or customer service. The strategy gives rise to tactical actions such as advertising, brand initiatives and loyalty programs that require marketing expenditures, which in turn affect customer satisfaction, brand attitudes and other customer-centred items. These items are combined at the firm level into what we call marketing assets in this thesis, defined as assets that arise from the commingling between the firm and outside entities. These assets may be brand equity or customer satisfaction. The marketing assets lead to long-term returns, such as satisfied customers making repeat purchases. Marketing actions therefore create and leverage market-based assets [START_REF] Srivastava | Market-Based Assets and Shareholder Value: A Framework for Analysis[END_REF]. (R. K. S.
[START_REF] Rao | Marketing Initiatives, Expected Cash Flows, and Shareholders' Wealth[END_REF] then built on the theory of firm valuation [START_REF] Modigliani | The Cost of Capital, Corporation Finance and the Theory of Investment[END_REF] to formalize the chain of effects from a firm's marketing actions to shareholders' wealth via marketing's impact on a firm's net present value.

How marketing actions create marketing assets

Finance textbooks suggest that the role of the firm is to create value for shareholders by maximizing stock returns. Investors determine the stock price based on the net present value of the future cash flows they expect the firm to generate. Shareholder value is created when the present value of future cash flows rises. A key step in conceptualizing the shareholder value created by marketing came with the publication of a conceptual framework for the marketing-finance interface. [START_REF] Srivastava | Market-Based Assets and Shareholder Value: A Framework for Analysis[END_REF] propose that marketing is concerned with developing market-based assets (e.g., customer and channel relationships, brands) that emerge from the commingling of the firm with entities in its external environment, rather than solely with product markets and positive product-market outcomes. They argue that when the value of assets is hard to assess, which is the case of marketing assets, they are less likely to be allocated resources. Marketing's adoption of shareholder metrics such as net present value (NPV) ensures that shareholders are better able to grasp marketing's contribution to shareholder value, making marketing's contribution to the firm more difficult to ignore. To show how marketing can contribute to shareholder value, [START_REF] Srivastava | Market-Based Assets and Shareholder Value: A Framework for Analysis[END_REF] break market-based assets into two types, customer relationships and partner relationships (see figure below). Together, customer and partner relationships deliver favorable outcomes for the firm via price premiums, higher market share, etc. In turn, the favorable outcomes lead to positive effects on shareholder value such as accelerated cash flows and higher residual values of cash flows. These relationships are highlighted in the figure below.
i. Advertising has the most research into its effects, perhaps due to the relatively easy availability of advertising data compared with the other types of marketing investments. The first interesting research result is that advertising directly affects stock returns beyond its effects on sales revenues and profits. [START_REF] Joshi | The direct and indirect effects of advertising spending on firm value[END_REF] suggest that the direct effect of advertising on stock returns passes through spillover and signalling. Spillover refers to advertising for products spilling over into investment behaviour and affecting demand for the stock. The authors suggest that demand for the stock driven by advertising may increase the number of investors, which increases the stock's liquidity and may lower the cost of capital [START_REF] Grullon | Advertising, Breadth of Ownership, and Liquidity[END_REF]. Signalling refers to advertising conveying positive signals to investors about the firm's financial situation [START_REF] Joshi | The direct and indirect effects of advertising spending on firm value[END_REF].
In addition to advertising's effect on stock returns, it may also lower a firm's systematic market risk [START_REF] Mcalister | Advertising, Research and Development, and Systematic Risk of the Firm[END_REF].
ii. To study the effect of promotions, [START_REF] Pauwels | New Products, Sales Promotions, And Firm Value: The Case Of The Automobile industry[END_REF] look at the impact of sales promotions on firm value in the context of new product introductions in the automobile industry. The authors find that whereas product promotions enhance sales, they do not increase long-term financial performance. This reinforces the idea that the temporal impact of marketing actions is an important factor in research and in practice.
iii. To study whether investors understand the importance of adding new distribution channels, [START_REF] Geyskens | The market valuation of internet channel additions[END_REF] look at the impact of adding internet distribution channels on firm valuation. Using an event study, the authors show that on average internet channel investments are positive-NPV investments and that strong firms with few direct channels show greater valuation gains than smaller firms with broader distribution channels.
iv. Finally, new products are studied in the context of preannouncements and product introductions. (A. [START_REF] Sorescu | New Product Preannouncements and Shareholder Value: Don't Make Promises You Can't Keep[END_REF] study the effect of new product preannouncements on shareholder value. They assess the benefits and risks of preannouncements and find that the financial returns are significantly positive over the long term. The benefits only accrue, however, if firms keep markets updated on the progress of the new product and if the reliability of preannouncements is high.

Findings on investor response to marketing assets

In this section, we look at research into how marketing assets, the second type of marketing investment, create value for shareholders. To organize the empirical research, we use the categories set out in the review article of (S. Srinivasan & Hanssens, 2009), shown in full in Appendix 1. (S. Srinivasan & Hanssens, 2009) break the findings on how investors respond to marketing assets into four types of marketing assets, which we look at below.

Equity market response to marketing assets

i. For brands, [START_REF] Madden | Brands matter: an empirical demonstration of the creation of shareholder value through branding[END_REF] show that strong brands generate higher risk-adjusted stock returns. [START_REF] Mizik | Myopic Marketing Management: Evidence of the Phenomenon and Its Long-Term Performance Consequences in the SEO Context[END_REF] go one step further, showing that changes in firm brand assets are related to changes in firm valuations. (S. G. [START_REF] Bharadwaj | The impact of brand quality on shareholder wealth[END_REF] show that unanticipated changes in brand quality are positively linked to stock returns and negatively linked to changes in idiosyncratic risk.
ii. Customer satisfaction has been studied considerably in how it enhances shareholder wealth. (E. W. [START_REF] Anderson | Customer satisfaction and shareholder value[END_REF] show that a 1% increase in customer satisfaction as measured by the ACSI (American Customer Satisfaction Index) increased Tobin's Q (the ratio of a firm's market value to the replacement cost of its assets) by 1.016%. Other research finds that financial analysts respond to changes in customer satisfaction.
More recently, [START_REF] Fornell | Stock Returns on Customer Satisfaction Do Beat the Market: Gauging the Effect of a Marketing Intangible[END_REF] show that firm selection based on customer satisfaction significantly outperforms the S&P 500, highlighting the value of customer satisfaction for investors.
iii. [START_REF] Gupta | Valuing Customers[END_REF] look at customer value, arguing that valuing customers makes it possible to value firms. They show that improvements in retention, margin or acquisition costs improve firm value.
iv. For product quality, (S. [START_REF] Srinivasan | Product Innovations, Advertising, and Stock Returns[END_REF] suggest that new products with positive quality perceptions and product appeal systematically generate higher returns.
All told, marketing-finance interface research suggests that the higher the marketing investments, the greater the share price impact [START_REF] Saboo | Organizational Debut on the Public Stage: Marketing Myopia and Initial Public Offerings[END_REF].

Marketing actions create value over the long term

An important finding of the marketing-finance interface is the temporality of value creation for marketing assets and investments. Research in the marketing-finance interface suggests that marketing investments bear fruit over the long term (e.g. [START_REF] Rust | Measuring Marketing Productivity: Current Knowledge and Future Directions[END_REF]. For example, investments in brands (advertising, service quality) affect cash flows over the long term [START_REF] Keller | Brands and Branding: Research Findings and Future Priorities[END_REF]. Likewise, [START_REF] Pauwels | New Products, Sales Promotions, And Firm Value: The Case Of The Automobile industry[END_REF] study the role of promotions and new product launches in the automobile industry and show that product innovation positively affects long-term financial performance and firm value. [START_REF] Reilly | Advertising Decisions and Stockholders Wealth[END_REF] study the impact of advertising on stockholder wealth and find that the effects of advertising expenditures increase and decay over time depending on whether they are sustained. [START_REF] Joshi | The direct and indirect effects of advertising spending on firm value[END_REF] indicate that finance has studied the decay effect for many years but marketing much less so, perhaps because measuring long-term effects is harder to do [START_REF] Dekimpe | Sustained Spending and Persistent Response: A New Look at Long-Term Marketing Profitability[END_REF], although some studies do exist [START_REF] Dekimpe | Sustained Spending and Persistent Response: A New Look at Long-Term Marketing Profitability[END_REF][START_REF] Hirschey | Intangible Capital Aspects of Advertising and R & D Expenditures[END_REF][START_REF] Joshi | The direct and indirect effects of advertising spending on firm value[END_REF]. [START_REF] Hirschey | Intangible Capital Aspects of Advertising and R & D Expenditures[END_REF] finds that marketing spending and R&D produce long-term benefits when considered jointly. [START_REF] Dekimpe | Sustained Spending and Persistent Response: A New Look at Long-Term Marketing Profitability[END_REF] found mixed effects of marketing spending on stock returns. [START_REF] Joshi | The direct and indirect effects of advertising spending on firm value[END_REF] show that the effect of advertising on sales and profits was significantly positive but the link between R&D and sales was less clear.
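For reference, the discounting logic underlying these long-horizon effects can be written as the textbook net present value identity (a standard formulation, not a model specific to any study cited here):

NPV = \sum_{t=1}^{T} \frac{E[CF_t]}{(1 + r)^t}

where E[CF_t] is the expected cash flow in period t and r is the discount rate. In this formulation, marketing investments can create shareholder value by enhancing expected cash flows (raising the numerators), accelerating them (shifting them to earlier periods, where discounting is lighter) or reducing their volatility and vulnerability (lowering r), which is why effects that materialize only over the long term still matter for valuation.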
The long-term horizon necessary for marketing to create value for shareholders is important for the thesis because marketing investments such as R&D for new products and advertising to build brand assets only bear fruit in the long term. However, if managers adopt myopic behaviour such as cutting marketing expenditures, then the capacity of marketing to generate value for shareholders is hindered. We consider two factors that may prompt myopic behaviour in this thesis: stock mispricing (Chapter 5) and investor horizon (Chapter 6).

Research approaches for studying investor response to marketing

(S. Srinivasan & Hanssens, 2009) describe five methods for evaluating financial market reaction to marketing investments: the four-factor approach, calendar portfolios, stock return response models, persistence modelling and event studies. We discuss below in more detail the event study methodology and what makes it suitable for our research.

2.1.9 The use of event studies in marketing research

In this section, we delve further into the use of event studies in marketing research because our first study (Chapter 3) uses this approach. Our goal is to show the rich and long history of event studies in marketing research and their relevance to the field. [START_REF] Horsky | Does It Pay to Change Your Company's Name? A Stock Market Perspective[END_REF] were among the first to use event studies in marketing research, revealing that firm name changes are associated with improved performance and that the name change is an information signal that other measures to improve performance will be successfully undertaken. [START_REF] Agrawal | The economic worth of celebrity endorsers: An event study analysis[END_REF] study the impact of celebrity endorsements and determine that the average impact of celebrity endorsement contracts on stock returns is positive, making celebrity endorsement contracts a worthwhile investment. [START_REF] Gielens | Dancing with a giant: The effect of Wal-Mart's entry into the United Kingdom on the performance of European retailers[END_REF] study the performance implications for incumbents of the strategic entry of Wal-Mart into the United Kingdom in 1999 and show that proactive actions can mitigate the negative performance consequences of the giant's entry. [START_REF] Homburg | Firm Value Creation Through Major Channel Expansions: Evidence from an Event Study in the United States, Germany, and China[END_REF] use event studies to show that new channel creation is highly positive for firm value whereas the effect of expansions of existing distribution channels varies. In this thesis, event studies are used to assess the response of investors to brand signals. For this reason, we focus on the research into brands using event studies. [START_REF] Lane | Stock Market Reactions to Brand Extension Announcements: The Effects of Brand Attitude and Familiarity[END_REF] study the impact of brand extensions. They show that brand attitude and brand name familiarity influence both the positive benefits of brand extensions and the adverse consequences. Stock market participant response to brand extension announcements depends interactively and monotonically on brand attitude and familiarity. To assess whether changes in brand attitude help predict future earnings and thus stock returns, [START_REF] Aaker | The Value Relevance of Brand Attitude in High-Technology Markets[END_REF] found that changes in brand attitudes are contemporaneously associated with stock returns. They also found that increases in brand awareness that were not reflected in brand attitude had little impact on future earnings.
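Since Chapter 3 relies on this methodology, we sketch for reference the standard market-model formulation of an event study (a generic formulation, not the specification of any one study above). For stock i on day t, the abnormal return and the cumulative abnormal return over an event window [t_1, t_2] are:

AR_{it} = R_{it} - (\hat{\alpha}_i + \hat{\beta}_i R_{mt}), \qquad CAR_i(t_1, t_2) = \sum_{t=t_1}^{t_2} AR_{it}

where R_{it} is the stock's return, R_{mt} is the market return, and \hat{\alpha}_i and \hat{\beta}_i are estimated over a pre-event window. An event, such as a recommendation change or a brand announcement, is deemed informative if the average abnormal returns around the event date differ significantly from zero.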
[START_REF] Changeur | Strategies de marque et richesse des actionnaires: une approche financiere du capital-marque[END_REF] use announcement dates of brand strategies to assess whether share prices react. The overall results show that investors react positively to announcements and that investor reactions vary depending on differences in brand strategies.

Summary and implications

In section 2.1, we see how the need to improve marketing accountability fostered the creation of the marketing-finance interface, and we review the results of efforts by marketing researchers to empirically link marketing investments to financial market outcomes such as stock prices and financial ratios. A key finding for the purposes of this thesis is that financial markets incorporate information about firm marketing investments into stock prices over the long term. This idea is particularly important for our third and fourth studies, which look at two factors that may hinder the ability of marketing expenditures to generate long-term value for shareholders. We also look in this section at the event study methodology that we mobilize in Chapter 3, show that it has been used extensively in marketing research and set out its advantages for our research.

2.2 How equity markets affect marketing investments

In this second, emerging research stream of the marketing-finance interface studied below, we examine how information flows from financial markets to marketing investments via stock prices. We organize the research into three main themes: real activity manipulation (REAM), corporate financing (equity and debt) and stock price information. Before we do so, however, we consider how the discretionary and low-profile nature of marketing investments makes them susceptible to manipulation by managers.

Marketing expenditures are discretionary and have low visibility

The accounting treatment of marketing expenditures and their low visibility make them vulnerable to manipulation by managers. Current accounting standards require marketing expenditures, i.e., R&D and advertising, to be expensed immediately rather than being considered as investments that create value over the long term [START_REF] Cañibano | Accounting for intangibles: a literature review[END_REF]. Under these standards, marketing expenditures are treated as expenses that must be booked in the current period. For example, concerning R&D expenditures, SFAS No. 2 (October 1974) explains that this is due to the 'uncertainty' surrounding the future benefits of R&D. This immediate booking means that firms consider marketing expenditures from a short-term standpoint and ignore the long-term value creation that we highlighted in section 2.1. The immediate expensing and low visibility make marketing expenditures suitable targets for manipulation. Some research suggests that marketing expenditures are the first to be cut, for example during recessions [START_REF] Lamey | How Business Cycles Contribute to Private-Label Success: Evidence from the United States and Europe[END_REF].
To complicate the issue further, the assets created by marketing [START_REF] Srivastava | Market-Based Assets and Shareholder Value: A Framework for Analysis[END_REF] are not visible to shareholders because Generally Accepted Accounting Principles (US GAAP) do not allow intangible assets arising from marketing expenditures such as advertising and SG&A, i.e., brands and customer satisfaction, to be recognized on the balance sheet as marketing assets. The absence of marketing assets on financial statements means that investors cannot see in financial terms how marketing expenditures contribute to marketing assets, which in turn generate firm profits and cash flows, creating a context where marketing's role may be underestimated over the short and long term. To the logical follow-up question of whether marketing expenditures are always the first to be cut, [START_REF] Paul | Managerial myopia and the observability of future cash flows[END_REF] responds that managers do not systematically favour long or short-term projects, but prefer projects that the stock market can best evaluate in the short run. In the next few sub-sections, we look at the three ways financial markets affect marketing investments. We end with research on how the information in stock prices affects marketing investments.

REAM (real activity manipulation) and marketing investments

In this section, we look at how brokerage houses affect marketing expenditures. Brokerage houses act as information intermediaries between firms and investors. Their opinions play a key role in informationally-efficient markets. Several financial analysts may cover one stock, each from a different brokerage house. Each of these brokerage houses issues earnings forecasts. Managers will expend considerable effort to meet the earnings forecasts because, as the literature in finance and accounting documents, investors react negatively to corporate earnings that do not meet earnings forecasts (e.g. [START_REF] Bhojraj | Making Sense of Cents: An Examination of Firms That Marginally Miss or Beat Analyst Forecasts[END_REF]. To achieve the earnings goal, managers may engage in real activity manipulation (REAM). REAM involves cutting expenditures that affect cash flows or altering a firm's underlying operations in order to alter current-period earnings [START_REF] Gunny | The Relation Between Earnings Management Using Real Activities Manipulation and Future Performance: Evidence from Meeting Earnings Benchmarks*[END_REF]. In the context of marketing, REAM denotes managers altering planned marketing investments to ensure current-period earnings prop up or increase stock prices [START_REF] Ganesan | Handbook of Marketing and Finance[END_REF]. Managers may cut marketing investments when they fear that quarterly earnings will not meet analyst forecasts. [START_REF] Mizik | The Theory And Practice Of Myopic Management[END_REF] shows that firms cut R&D investments to manage earnings. [START_REF] Bhojraj | Making Sense of Cents: An Examination of Firms That Marginally Miss or Beat Analyst Forecasts[END_REF] find that firms cut advertising and R&D expenditures to meet or exceed analyst earnings forecasts, which generates higher stock returns in the short run. [START_REF] Cohen | The use of advertising activities to meet earnings benchmarks: Evidence from monthly data[END_REF] test whether managers manage expenditures to meet or exceed analyst earnings forecasts and find that managers do cut advertising investments to increase earnings.
(Chakravarty & Grewal, 2016) study the boundary conditions and find that investor monitoring and managerial compensation moderate the relationship between earnings forecasts and advertising and R&D outlays. Furthermore, the authors demonstrate that unexpected changes in R&D and advertising outlays hurt long-term firm returns and risk, highlighting both the value of marketing investments for investors and the factors that hinder the ability of marketing investments to create value for investors. Finally, [START_REF] Currim | Effect Of Analysts' Earnings Pressure On Marketing Spending And Stock Market Performance[END_REF] show that firms with greater past commitment to marketing investments during periods of high analyst pressure generate higher stock market performances.

Firm financing and marketing investments

Firm financing via equity or debt issuance is the second area of marketing research on how investors can affect marketing investments. [START_REF] Mizik | Myopic Marketing Management: Evidence of the Phenomenon and Its Long-Term Performance Consequences in the SEO Context[END_REF] show how enterprises can profit from the financial market's focus on earnings by cutting advertising and R&D expenditures to boost earnings prior to seasoned equity offerings (SEOs), i.e., sales of stock by mature, already-listed companies. The effect on equity financing may be important for marketing because increases in marketing outlays are usually funded by equity [START_REF] Garmaise | Marketing Issues in Corporate Finance[END_REF]. [START_REF] Cohen | Accrual-based and real earnings management activities around seasoned equity offerings[END_REF] find corroborating evidence of REAM and, furthermore, show that the decline in the subsequent earnings of SEO firms is linked to decisions to cut marketing expenditures in the year of the SEO. [START_REF] Malshe | From Finance to Marketing: The Impact of Financial Leverage on Customer Satisfaction[END_REF] consider the effect of the second type of firm financing, debt, on marketing investments. They find that higher debt leverage reduces customer satisfaction and moderates the relationship between satisfaction and firm value. The authors theorize that the requirement of making regular cash payments to debt holders pressures managers to undertake myopic actions such as cutting advertising and R&D outlays, which in turn reduces customer satisfaction.

Past stock performance and marketing investments

As far as we know, [START_REF] Markovitch | Using Capital Markets as Market Intelligence: Evidence from the Pharmaceutical Industry[END_REF] were the first to publish on this theme by showing the effect of equity market expectations of future earnings incorporated into stock prices. Using empirical data from the pharmaceutical industry, they find that firms with underperforming stocks tend to implement more changes to product portfolios and distribution, while firms with outperforming stocks make fewer changes to their current portfolio and distribution and focus instead on long-term R&D and marketing of existing products. In the context of resource accumulation, [START_REF] Shin | Marketing and R&D Investment of Leader vs[END_REF] show that unexpected drops in the stock prices of leaders prompt increased investments in marketing whereas unexpected drops in the stock prices of followers prompt increased investments in R&D, indicating that managers take into account market position when deciding how to react to information in stock prices.
Finally, [START_REF] Chakravarty | The Stock Market in the Driver's Seat! Implications for R&D and Marketing[END_REF] look at how information in stock prices about past performance and volatility affects R&D and marketing expenditures, finding that volatility and past stock performance affect managerial actions concerning marketing and R&D outlays.

Myopic marketing management hinders long-term value creation

The question that arises is: what is the effect on stock returns of myopically managing marketing investments to meet analyst forecasts, boost IPO returns and respond to information in stock prices? After all, if firm value were not affected, then cutting marketing expenditure would be inconsequential for shareholder value creation. Research indicates that myopic management of marketing resources does generate the desired short-term performances but yields a negative long-term effect. In her seminal paper, [START_REF] Mizik | The Theory And Practice Of Myopic Management[END_REF] studies the total financial impact of cutting marketing and R&D spending to meet analyst earnings forecasts. The author shows that firms that behave myopically do initially outperform over a one-year horizon. However, over a three-year horizon, the firms that behave myopically underperform. Furthermore, the author demonstrates that REAM has a greater negative effect on future financial performance than accruals-based earnings inflation. Similarly, [START_REF] Bhojraj | Making Sense of Cents: An Examination of Firms That Marginally Miss or Beat Analyst Forecasts[END_REF] show that using accruals or discretionary expenditures such as marketing to meet or exceed earnings forecasts yields a higher short-term stock return but longer-term underperformance compared to companies that do not manage earnings to meet forecasts. In their study of managers using marketing actions to manage earnings, [START_REF] Chapman | An investigation of earnings management through marketing actions[END_REF] find that marketing promotions can be used to boost quarterly net income by up to 5% but the cost is up to 7.5% of the next quarter's net income. In the context of IPOs, [START_REF] Saboo | Organizational Debut on the Public Stage: Marketing Myopia and Initial Public Offerings[END_REF] show likewise that investors are effectively misled but correct their beliefs in the three years following the IPOs and penalize these firms.

Summary and implications

In section 2.2, we review the second stream of the marketing-finance interface. The section starts by discussing how the discretionary nature of marketing investments and their low visibility to shareholders make them vulnerable to changes by top managers. We then look at how financial markets can exert pressure on managers, grouped into three themes. Section 2.2 ends by considering how myopic considerations can hinder long-term value creation. We build on this second stream of the marketing-finance interface to investigate in Studies 3 and 4 the impact of stock mispricing and investor horizon. In doing so, we can highlight the reverse flow of information from equity markets to marketing investments.

2.3 The study of information in marketing research

After looking at the two streams of the marketing-finance interface (2.1 and 2.2), we now look at academic research into information. We start by considering brokerage houses and the services they provide.
We then use investor uncertainty about the quality of brokerage house services to motivate the review of how information is studied in information economics and marketing research. We examine why and how investors (consumers, in marketing research terms) look for signals to resolve issues of imperfect and asymmetric information, and end by focusing on research into the information contained in brand signals.

Assessing uncertainty about brokerage house quality

To avoid any confusion about terminology, when we speak in this thesis about brokerage houses, we refer to the traditional brokerage houses that serve institutional investors. We focus on them because they are by far the larger players in financial markets compared to the second type of brokerage house, which provides services to retail investors. The larger size of traditional brokerage houses makes our research more relevant to a broader audience than studying the latter type. Furthermore, retail investors hold a small portion of the stock market. Finance research shows that the services provided by brokerage houses contribute valuable information to financial markets and have a significant effect on prices, with investors responding to the information conveyed by brokerage house forecasts [START_REF] Griffin | Competitive Information in the Stock Market: An Empirical Study of Earnings, Dividends and Analysts' Forecasts[END_REF]. Other research analyses the predictive content of analyst recommendations and shows that analyst recommendation changes are followed by abnormal returns [START_REF] Stickel | The Anatomy Of The Performance Of Buy And Sell Recommendations[END_REF][START_REF] Womack | Do Brokerage Analysts' Recommendations Have Investment Value[END_REF], indicating that analysts add value to markets by producing valuable information. The brokerage houses and the analysts they employ have different levels of quality, driven by, for example, better information processing, a higher frequency of earnings and recommendation revisions, or greater experience and effort (e.g. [START_REF] Clement | Do investors respond to analysts' forecast revisions as if forecast accuracy is all that matters?[END_REF][START_REF] Sorescu | The Cross Section Of Analyst Recommendations[END_REF]. These factors indicate that brokerage house quality may vary, creating uncertainty for investors about quality. In the next section, we look at economic and marketing research into how investors might use marketplace information to resolve this uncertainty about service quality.

How information economics looks at information

The mathematical definition of information is that which reduces uncertainty or changes an individual's degree of belief about the world [START_REF] Shannon | The Mathematical Theory of Communication[END_REF]. Glazer reports that this definition is impractical as the construct depends on the context and is multidimensional [START_REF] Glazer | Marketing in an information-intensive environment: strategic implications of knowledge as an asset[END_REF]. This has limited its practical use because it is hard to measure what the information means to different agents. For example, Glazer points out that several sources have shown that two signals can reduce uncertainty by the same amount (and are thus quantitatively identical) yet carry vastly different meanings for their receivers. This mathematical shortcoming did not stop the subsequent growth of information economics.
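For reference, a minimal formal sketch of the definition paraphrased above: the uncertainty (entropy) of a random variable X with outcomes x occurring with probabilities p(x) is

H(X) = -\sum_{x} p(x) \log_2 p(x)

and a signal S is informative to the extent that it reduces this uncertainty, i.e., I(X; S) = H(X) - H(X|S) > 0. Glazer's critique can be restated in these terms: two signals with identical I(X; S) may nonetheless carry very different meanings for different receivers, which is why this quantitative definition travels poorly into marketing contexts.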
The classic context for information economics adapted to marketing is that consumers generally have less information about product quality than producers. Producers know the effort and inputs that were used to make the product, but consumers do not, creating information asymmetry. To resolve this uncertainty, consumers may look for a signal that contains information that helps resolve the information asymmetry. In the case of product markets, a signal can be a brand or a certification. This signal indicates higher quality and a higher price. Even when consumers do seek out information about quality, they cannot be sure of their information, leaving them with imperfect or limited information. This is where the notion of uncertainty emerges: making decisions in response to a situation without knowing for certain what the outcome will be. This uncertainty prompts decision makers to look for information that suggests how things will turn out. The notion of credibility arises in this context because investments in advertising to build brands or certification are sunk costs. In other words, the firm is not able to retrieve its investments in building brands or gaining a certification. This indicates to consumers that the firm is committed to its product. Thus, the cost of advertising acts as a signal because the perceived high cost confers credibility on the certification or brand. The question arises, however, as to whether the consumer finds the product brand or certification useful. This depends in turn on how easy it is to fake a brand or certification. The situation where faking is easy is called a pooling equilibrium: all the firms in the market can do it, so there is no differentiation between them and they are therefore pooled together. This arises when signalling costs are low. If faking is hard, then the brand or certification differentiates between the firms (a separating equilibrium), making the signal useful for consumers.

Information in a marketing context

Economic research started looking at consumers and the role of information in the 1970s, when Nelson published a string of articles [START_REF] Nelson | Information and Consumer Behavior[END_REF][START_REF] Nelson | Advertising as Information[END_REF][START_REF] Nelson | The Economic Consequences of Advertising[END_REF] that suggest that the primary goal of advertising campaigns is not to inform consumers about products. The goal instead is to tell consumers that a large amount of money has been spent on the ads themselves, and thus inform consumers about product quality. Nelson argues that the mere fact that advertising is taking place may represent a signal to consumers of high product quality. If high-quality brands advertise more and if consumers can gain an approximate idea of the amount of a seller's advertising spending, then consumers will react positively to advertising, even if the advertising itself does not convey a lot of information. [START_REF] Klein | The Role of Market Forces in Assuring Contractual Performance[END_REF][START_REF] Shapiro | Premiums for High Quality Products as Returns to Reputations[END_REF] put forward the first models that analyze the role of branding for consumers. They suggest the notion of costly signalling and use it to justify how markets can police quality levels, and they describe a reputation equilibrium. Research in economics on signalling theory assumes that consumers are rational. The research usually focuses on identifying equilibrium conditions.
Empirical work in economics tests whether the overall behaviour of firms in the marketplace fits the predicted equilibrium. Marketing research, however, mainly uses experimentation to analyze the implied behaviour of consumers [START_REF] Boulding | A consumer-side experimental examination of signaling theory: do consumers perceive warranties as signals of quality[END_REF][START_REF] Kirmani | The Effect of Perceived Advertising Costs on Brand Perceptions[END_REF]. Marketing research aims to assess whether consumer behaviour reflects the assumptions set out in signalling theory rather than focusing on equilibria as in economics, which constitutes a different approach to studying consumers' use of information.

The study of brands in marketing

Marketing research into signals [START_REF] Kirmani | No Pain, No Gain: A Critical Review Of The Literature On Signaling Unobservable Product Quality[END_REF][START_REF] Wernerfelt | Umbrella Branding As A Signal Of New Product Quality: An Example Of Signalling By Posting A Bond[END_REF] suggests formally that in the presence of asymmetric and imperfect information, firms can inform consumers of unobservable quality in a transaction by emitting an observable signal. There are four circumstances under which signals are useful: 1/ the product or service is not familiar [START_REF] Kirmani | No Pain, No Gain: A Critical Review Of The Literature On Signaling Unobservable Product Quality[END_REF]; 2/ perceived risk is too high and needs to be lowered [START_REF] Shimp | Warranty and Other Extrinsic Cue Effects on Consumers' Risk Perceptions[END_REF]; 3/ consumers do not have the required level of knowledge to assess quality (A. R. [START_REF] Rao | The effect of price, brand name, and store name on buyers' perceptions of product quality: an integrative review[END_REF]; 4/ an information search preference exists and additional information is required [START_REF] Nelson | Information and Consumer Behavior[END_REF][START_REF] Nelson | Advertising as Information[END_REF]. Marketing research has studied signals of quality via several parts of the marketing mix: price [START_REF] Tellis | Competitive Price And Quality Under Asymmetric Information[END_REF], warranty [START_REF] Boulding | A consumer-side experimental examination of signaling theory: do consumers perceive warranties as signals of quality[END_REF], advertising [START_REF] Kirmani | The Effect of Perceived Advertising Costs on Brand Perceptions[END_REF] and country of origin [START_REF] Verlegh | A Review And Meta-Analysis Of Country-Of-Origin Research[END_REF]. Marketing research has even studied the combined impact of signals on consumers, and the mix of signals in different cultures [START_REF] Dawar | Marketing universals: consumers' use of brand name, price, physical appearance, and retailer reputation as signals of product quality[END_REF]. Marketing research indicates that these marketing mix elements convey both direct and indirect information [START_REF] Erdem | Brand equity as a signaling phenomenon[END_REF]; marketing mix elements thus become conveyors of information. The study of brand equity in marketing research focuses on comprehending the impact of the brand name and symbol on the consumer decision-making process.
[START_REF] Farquhar | Managing brand equity[END_REF] defines brand equity as the 'added value' a brand gives a product relative to an unbranded one. Brands have been studied from a cognitive psychology approach (e.g. [START_REF] Mcalexander | Building Brand Community[END_REF], from a socio-cultural standpoint [START_REF] Holt | How Brands Become Icons[END_REF] and finally as signals [START_REF] Erdem | Brand equity as a signaling phenomenon[END_REF][START_REF] Spence | Job Market Signaling[END_REF]. In this thesis, we focus on brands first from a cognitive psychology standpoint and then on brands as signals [START_REF] Erdem | Brand equity as a signaling phenomenon[END_REF][START_REF] Changeur | Strategies de marque et richesse des actionnaires: une approche financiere du capital-marque[END_REF][START_REF] Swait | Shocks to Brand Equity: An Information Economics Perspective on the US Auto Industry 2006-2011[END_REF]. Two works laid the groundwork for the cognitive psychology approach. The first is (Aaker 1991), which argues that a brand name and symbol have assets and liabilities attached to them. These assets and liabilities are brand loyalty, brand awareness, perceived quality, brand associations and other proprietary brand assets. The second is [START_REF] Keller | Conceptualizing, Measuring, And Managing Customer-Based Brand Equity[END_REF], which argues that brands should be seen only from a consumer's standpoint. The advantage of [START_REF] Keller | Conceptualizing, Measuring, And Managing Customer-Based Brand Equity[END_REF] is that he builds his brand equity model on a solid conceptual framework. Unfortunately, it has not proven effective for building testable models of brand equity and choice behaviour [START_REF] Christodoulides | Consumer-based brand equity conceptualisation and measurement: a literature review[END_REF]. It should be noted that the cognitive psychology approach is not completely separate from the brand signal approach we describe below. [START_REF] Sweeney | Brand Equity: An Integrated Framework[END_REF] reconcile the cognitive psychology approach, which has a sound conceptual basis, with brand signal models. [START_REF] Erdem | The Information-Economics Perspective on Brand Equity[END_REF] further develop the theoretical complementarity of the two streams of brand research. This reconciliation is important for this thesis because it resolves our concern that focusing on only one brand approach may influence our results. Later work tests the brand signal model across product categories that differ in terms of uncertainty about the product's attributes, the cost of acquiring product-related information and the perceived risks. The results indicate that brand signals impact brand consideration more than brand choice, even for products with moderate uncertainty. [START_REF] Erdem | Brands as signals: A cross-country validation study[END_REF] investigate the applicability of the brand signal model to various countries with different cultural dimensions. Among the cultural differences between countries that the authors study, brand signals show a higher impact in countries where consumers attach more importance to collectivism or uncertainty avoidance. [START_REF] Swait | Shocks to Brand Equity: An Information Economics Perspective on the US Auto Industry 2006-2011[END_REF] apply the information economics perspective to shocks to brand equity in the US auto industry.

Signalling costs

Brands have significant monetary value (Aaker 1991).
[START_REF] Erdem | Brand equity as a signaling phenomenon[END_REF][START_REF] Rao | Signaling Unobservable Product Quality Through A Brand Ally[END_REF][START_REF] Wernerfelt | Umbrella Branding As A Signal Of New Product Quality: An Example Of Signalling By Posting A Bond[END_REF] argue that signalling theory in information economics defines the monetary underpinning of a brand. A quick look at Interbrand shows the high monetary value of brands: Interbrand estimates Apple's brand value at $178bn in 2016 and Google's brand value at $133bn. Consumer punishment may include negative word of mouth and no longer using the brand (A. R. [START_REF] Rao | Signaling Unobservable Product Quality Through A Brand Ally[END_REF]), which would reduce the brand value. This potential punishment by the consumer underlies signalling theory; it is what [START_REF] Ippolito | Bonding and Nonbonding Signals of Product Quality[END_REF] calls posting a bond, or the signal cost [START_REF] Connelly | Signaling theory: A review and assessment[END_REF]. This idea is so important to signalling theory that it is sometimes called the "theory of costly signalling". The high value of brands means that "faking it" will lead to punishment by consumers, as we describe in 2.3.2.

Utilities of brand signals for consumers

The brand signal conveys information that has three utilities (benefits) for consumers (see diagram above). These utilities are the "added value" [START_REF] Farquhar | Managing brand equity[END_REF] that brands contribute to products [START_REF] Erdem | Brand equity as a signaling phenomenon[END_REF]. The first utility is perceived quality [START_REF] Erdem | Brand equity as a signaling phenomenon[END_REF], defined as the consumer's judgment about how good the product or service is (Zeithaml, 1988, p. 5). The second utility of brand signals is reduced perceived risk, defined by (Schiffman and Kanuk 2000) as "the uncertainty consumers face when they cannot foresee the consequences of their purchase decisions." The third utility is lower information gathering costs, i.e., how much consumers must expend in terms of time, money and psychological costs to gather information about product quality. The brand signal thus increases perceived quality, reduces perceived risk and saves on information gathering costs.

The importance of brands in services industries

Brokerage houses are service providers and as such have specific characteristics that make brands more important relative to products. All offerings, be they products or services, have search, experience and credence attributes [START_REF] Darby | Free Competition and the Optimal Amount of Fraud[END_REF][START_REF] Nelson | Information and Consumer Behavior[END_REF]. Search attributes, such as brands and prices, reflect product characteristics that consumers can assess before buying. Experience attributes, such as emotion and entertainment value, are characteristics that can only be assessed during consumption or after buying. Credence attributes reflect any product attribute that consumers cannot assess even after purchase or consumption [START_REF] Darby | Free Competition and the Optimal Amount of Fraud[END_REF]. Car repairs are an example of credence products because it is hard to assess the quality of the repairs even after they have been carried out, leading to uncertainty about the service received.
[START_REF] Berry | Cultivating service brand equity[END_REF] suggests that the difference between a product and a service company is that for products, the product itself is the primary brand, whereas for a service the company is the primary brand. So in services, a strong brand acts as a safe haven for consumers [START_REF] Richards | Brand Knowledge Management: Growing Brand Equity[END_REF]. The intangibility of services makes buying them from a strong brand appealing. Research confirms the perceived higher risk of services relative to goods.

Brands as information to investors

Academic research highlights how brand signals communicate information about quality to investors and stock prices. [START_REF] Shiller | From Efficient Markets Theory to Behavioral Finance[END_REF] and [START_REF] Frieder | Brand perceptions and the market for common stock[END_REF] show how investors prefer familiar brands because they cannot know everything about all the companies they have in their portfolios. [START_REF] Singh | Modeling Preferences for Common Attributes in Multicategory Brand Choice[END_REF] suggest that advertising can support stock prices because it increases the familiarity of the brand to investors, who therefore base their investing practices in part on brand familiarity. [START_REF] Merton | A Simple Model Of Capital Market Equilibrium With Incomplete Information[END_REF] suggests that investors prefer firms with stronger information flows. (Grullon 2003) indicates that companies that have built their brands using advertising have a broader ownership of both individual and institutional investors and that their stocks are more liquid. This result suggests that brand familiarity influences a company's cost of capital and thus firm value. [START_REF] Mcalister | Advertising, Research and Development, and Systematic Risk of the Firm[END_REF] take the impact of brands one step further, showing that strong brands reduce firm risk because of their effect on investors.

Summary and implications

In section 2.3, we look into academic research into information. We start by looking at how the wide variety of brokerage houses gives rise to uncertainty about the quality of their research. This uncertainty motivates our discussion about how consumer uncertainty about product quality in general motivates consumers to look for information. We look at academic research on consumers in general and briefly at information from a mathematical and economic standpoint because they shape marketing research's study of information. We then shift to marketing's study of information and in particular the research into signals. We see that credibility plays a key role in signals and that bonding costs underlie credibility. Next, we consider specific characteristics of services that heighten consumer uncertainty due to their nonphysical nature and then focus on how investors use information about brands. We see in this section how consumers can use marketplace information to reduce asymmetric and imperfect information about product quality. We use the brand signal model in studies 1 & 2 to investigate the impact of the brokerage house brand signal on investors and competing brokerage houses. We argue that the brand signal model is relevant and gives a richer understanding of how marketing investments can help incorporate information into stock prices. Furthermore, no research has been undertaken into how investors and competing brokerage houses respond to the brands of information intermediaries.
The real effects of financial markets on corporate decisions

In the previous section, we looked at how information is mobilized in marketing to address quality uncertainty issues. In this section, we survey the literature from research in finance and accounting on the theory of the real effects of financial markets (P. [START_REF] Bond | The Real Effects of Financial Markets[END_REF]). Similar to the information flows described separately in the marketing-finance interface in sections 2.1 and 2.2, the 'real effects of financial markets' perspective suggests that information flows both ways, i.e., from firms to financial markets and vice versa. The focus in this section is on describing the full process through which financial market information flows can affect firm decisions. Secondary financial markets do not lead to any direct transmission of capital to firms. Information in prices will therefore only have an impact if it affects the decision makers in firms. We start by describing the efficient market theory and the extension that the 'real effects of financial markets' perspective proposes. Then we look at the role played by the two parties involved in the real effects of financial markets, managers and investors. We end by looking at the literature surrounding two effects of financial markets on corporate investment, i.e., stock mispricing and institutional investors, which we consider in more detail in chapters 5 and 6 respectively.

Efficient markets and the role of stock price information

Information plays a key role in efficient markets. Fama gave this definition of the efficient market hypothesis: 'I take the market efficiency hypothesis to be the simple statement that security prices fully reflect all available information' [START_REF] Fama | Efficient capital markets: II[END_REF]. In other words, security prices reflect all information about the fundamental value of the security, i.e. the present value of future cash flows. The fundamental value in asset prices is beneficial for both long and short-term investors because both benefit from the discounted value of cash flows generated over the long term. In efficient markets, managers adopt a long-term outlook and make strategic decisions to select the highest-NPV projects. Managers use their information to maximize the sum of the discounted value of future cash flows, and corporate investment decisions are based on choosing the best projects and the discount rate. The discount rates reflect the cost of capital, and managerial decisions reflect the best interests of all investors. The stock price set by markets is fair for all shareholders because it reflects the long-term value of the stock. The focus here is on information flows from firms to stock markets.

Incorporating the 'real effects of financial markets' into the efficient markets hypothesis

Corporate finance research traditionally reflects the information flow defined by Fama above, i.e., understanding the effects of financing on the firm and the firm's cost of capital. Corporate finance research focuses less on the effect of information from secondary markets on stock prices. (P. [START_REF] Bond | The Real Effects of Financial Markets[END_REF]) argue that financial markets may not be as neutral for firms as presented in textbooks and the efficient market theory. They argue that the efficient markets theory described in 2.4.1 should be broadened to reflect the 'feedback effect'.
They suggest that the large amounts of capital traded on securities markets every day and the substantial resources invested in secondary markets prompt managers to keep a close eye on the information in the share price. This is the definition of the 'real effects of financial markets' perspective we adopt in this thesis. The authors argue that the importance of stock prices stems from the information they convey (P. [START_REF] Bond | The Real Effects of Financial Markets[END_REF]), an idea first suggested by [START_REF] Hayek | The Use of Knowledge in Society[END_REF] in his article about the role played by knowledge in society. Hayek describes how information known only to some market players can spread to the whole market, with prices conveying the information because of their role as information aggregators. In turn, decision makers such as managers, investors and customers use information from prices when making decisions, which in turn affects corporate expenditures, cash flows and stock performance [START_REF] Baumol | The stock market and economic efficiency[END_REF]. In the next section of the literature review, we study the channels through which stock prices may influence corporate decisions via the information they convey, focusing first on managerial channels and then on financial market channels.

Managerial incentives to respond to stock price information

2.4.3.1 Compensation incentives

The first way that financial markets may have real effects on corporate decisions is through a manager's incentives to take real decisions. Under agency theory, managers can take advantage of their information edge and discretionary power to maximize their personal interests instead of maximizing value for shareholders. To mitigate the divergence of goals, firms may structure managerial compensation plans to link them to quarterly or annual stock returns, which is supposed to better align management interests with shareholder concerns [START_REF] Jensen | Performance pay and top-management incentives[END_REF]. Managers looking to maximize private benefits may seek to boost near-term stock prices to increase the compensation arising from the equity-linked portion of their pay, at the expense of long-term investors. A second driver of agency conflicts is the length of a manager's tenure at a company. [START_REF] Narayanan | Managerial Incentives for Short-Term Results[END_REF] suggests that a shorter employment contract means a manager is unlikely to benefit from a firm's future cash flows. Contracts have become shorter as CEO turnover has increased over time [START_REF] Kaplan | How Has CEO Turnover Changed?[END_REF]. This may prompt managers with shorter contracts to behave myopically because they will not benefit from investments that generate returns over the longer term. They may thus adopt projects with a lower NPV but higher returns in the earlier part of their investment lives [START_REF] Palley | Managerial Turnover And The Theory Of Short-Termism[END_REF], which suggests that shorter CEO horizons favour faster returns at the expense of value creation over the long term. Managerial reputation in the employment market is a third factor that can prompt managers to focus on myopic concerns such as earnings, because investors punish firms that do not meet earnings forecasts [START_REF] Bartov | The rewards to meeting or beating earnings expectations[END_REF].
[START_REF] Narayanan | Managerial Incentives for Short-Term Results[END_REF] suggests that managers who focus on their labour market reputation may undertake actions that boost the stock's short-term return at the expense of long-term performance. If their labour market reputations are sufficiently tarnished, managers may find it hard to find another job, making them focus on stock price information [START_REF] Jensen | Agency Costs of Overvalued Equity[END_REF]. Investor pressure has intensified since the mid-1990s, with managers reporting that they prioritize meeting or exceeding analysts' earnings forecasts over other benchmarks. Research shows that meeting or exceeding analyst earnings forecasts increases manager credibility with capital markets, props up or increases share prices, improves manager reputations and conveys information about future growth prospects. To sum up, agency theory, CEO tenure and reputational concerns may motivate managers to use the information in stock prices when making corporate decisions.

Catering to investors

A second channel through which markets may have real effects on corporate investment decisions is managerial catering to investors. Catering theory argues that investment decisions are influenced by investor misperceptions [START_REF] Stein | Rational Capital Budgeting In An Irrational World[END_REF]. These investor misperceptions are reflected in stock prices. Managers care about how stockholders perceive the firm, so the misperception reflected in stock price information in turn affects investment decisions. Firms that rely on the stock market respond by catering their investment decisions to the opportunities created by these misperceptions. Recent research shows empirically that managers may cater to investor information incorporated in share prices. (Q. Chen, Goldstein, & Jiang, 2007a) provide evidence that the amount of private information in the stock price has a strong positive effect on the sensitivity of corporate investment to stock price information. In a similar vein, [START_REF] Duchin | Costly external finance, corporate investment, and the subprime mortgage credit crisis[END_REF] show that corporate investment declined significantly following the onset of the financial crisis. Consistent with the causal effect of a supply shock, the decline is greatest for firms that have low cash reserves or high net short-term debt, are financially constrained, or operate in industries dependent on external finance. [START_REF] Baker | Capital Market-Driven Corporate Finance[END_REF][START_REF] Baker | A Catering Theory of Dividends[END_REF] show that when the market grants an irrational premium to dividend-paying firms or to low-price firms, managers respond to this stock price information by paying more dividends or by supplying shares at lower prices. All of this research shows empirically that managers may cater to investor information incorporated in share prices.

Managerial learning

The third channel through which financial markets may motivate managers to use stock price information is the managerial learning hypothesis. The hypothesis is that markets produce new information and managers learn from this information.
[START_REF] Hayek | The Use of Knowledge in Society[END_REF] suggests that markets may be better at generating some kinds of information because markets aggregate many small pieces of information from players who have no direct way of communicating with managers but can inform managers through their trading activity [START_REF] Zuo | The Informational Feedback Effect Of Stock Prices On Management Forecasts[END_REF]. This does not mean that managers know less than investors. Rather, it is not necessary for managers to have perfect information for every decision in order for the information in share prices to influence investment decisions (P. [START_REF] Bond | The Real Effects of Financial Markets[END_REF]). Possible information includes subjects such as the external environment, competition and customer demand. Several studies have empirically shown that managers can learn from markets. [START_REF] Edmans | The Source of Information in Prices and Investment-Price Sensitivity[END_REF] show that price information affects firm investment, using the staggered enforcement of insider trading laws as an exogenous shock. They conclude that although the enforcement shock lowered private information, it increased outside information, leaving constant the total amount of information that contributes to the stock price. [START_REF] Zuo | The Informational Feedback Effect Of Stock Prices On Management Forecasts[END_REF] shows that market information feeds back into management forecasts and that investor private information helps managers improve their forecast accuracy. [START_REF] Foucault | Learning from peers' stock prices and corporate investment[END_REF] confirm that managers learn not only from their own share prices but also from the stock prices of peers, defined as firms that sell related products, and that this information affects corporate investment decisions.

Financial market influence on stock price information

In the previous section, we considered factors that may prompt managers to react to investor information in stock prices. In this section, we look at factors that may influence the incorporation of information into stock prices by financial markets. We look at the impact of limited arbitrage, investor expectations and monitoring on the process of integrating fundamental information into share prices. We see that limited arbitrage may impede the incorporation of fundamental information into stock prices, investor expectations may send confounding information to managers, and the effectiveness of monitoring may decline, which means that the market's discipline of managers to focus on fundamentals may diminish.

Limited arbitrage

The first factor that may affect the quality of information in share prices is market limitations that prevent arbitrage from incorporating investor information into stock prices. Under the Modigliani-Miller theorem, stock prices mirror fundamentals because informed investors (arbitrageurs) compete to eliminate mispricing. Mispricing between two firms with the same cash flows but different capital structures in a frictionless market creates risk-free arbitrage opportunities. Arbitrageurs would quickly take advantage of this information, incorporate it into their trading and restore stock prices to their fundamental values. However, finance research highlights that arbitrage may be less efficient than postulated by theory. (E. M.
[START_REF] Miller | Risk, Uncertainty, and Divergence of Opinion[END_REF]) examines the impact of short-selling constraints, which can prevent information from being aggregated into stock prices and thus significantly hinder the effectiveness of arbitrage. [START_REF] De Long | Noise Trader Risk in Financial Markets[END_REF] present a model indicating that noise trader risk can create price risk that deters rational arbitrageurs, who have short-term horizons and are risk averse, from betting aggressively against noise traders. [START_REF] Alphonse | Mispricing Persistence and the Effectiveness of Arbitrage Trading[END_REF] suggests that market depth, which influences the ability of arbitrageurs to unwind their equity positions, may further weaken the effectiveness of arbitrage. Arbitrage is often carried out using relative-value strategies. However, the necessary security with similar cash flows may not be available to carry out such a strategy [START_REF] Pontiff | Costly Arbitrage: Evidence from Closed-End Funds[END_REF]. All of these factors may hinder the ability of arbitrageurs to maintain information-efficient prices and may enable stock prices to include non-fundamental information, sometimes for extended periods of time. Stock prices may therefore contain information that is irrelevant and perhaps misleading for managers.

Investor expectations

Financial market expectations may also deter managers from focusing on information that generates long-term value for all shareholders. One expectation in particular stands out in the literature: earnings expectations. Managers of firms that release results to the market more often may be prompted to take myopic decisions in order to meet earnings guidance or improve the firm's share performance. [START_REF] Bartov | The rewards to meeting or beating earnings expectations[END_REF] show that the frequency of meeting analyst expectations has increased in recent years, and that the reward for meeting earnings expectations is higher quarterly average returns. The short-term reward is thus clear, but what is the long-term impact on firm growth rates? [START_REF] Cheng | Earnings Guidance and Managerial Myopia[END_REF] study firms that frequently issue quarterly earnings guidance to see the effect on investment decisions. They find that frequent guiders invest less in R&D and beat the analyst consensus more often than occasional guiders. However, the frequent guiders show significantly lower long-term earnings growth rates. [START_REF] Rappaport | The Economics of Short-Term Performance Obsession[END_REF] suggests that a second factor hindering the incorporation of fundamental information arises from investor reliance on technical analysis and comparables to value stocks. Stock prices contain information that investors use in their valuation processes. The information about prospects and fundamentals used in technical analysis and comparables is short-term rather than long-term information. These methods obviously shape investor expectations. The share price will therefore contain more short-term-related information and less information related to fundamentals, which may mislead managers.

Monitoring managers

In the literature, investors are supposed to monitor manager decisions, ensuring that managers focus on fundamental information that is relevant to all investors. Not all investors, however, are equal when it comes to monitoring.
Institutional investors generally have greater resources than individual shareholders. Their superior resources give them better information arising from research into stocks and industries, whereas individual shareholders have limited time to gather information. The benefits of gathering information are more likely to exceed the costs for institutional investors compared to individual investors [START_REF] Shleifer | Large Shareholders and Corporate Control[END_REF]. In addition, institutional shareholders, which include pension funds, investment trusts, university endowments and insurance companies, invest much larger sums of money than individual investors, giving them more votes and power [START_REF] Parrino | Voting With Their Feet: Institutional Ownership Changes Around Forced Ceo Turnover[END_REF]. The large sums involved and their fiduciary duties motivate institutional investors to monitor their stakes in firms. If firms fare poorly, investors can dialogue with managers or, in extreme cases, sell their stakes. So, institutional investors can exert market power for corporate control. In the finance literature, monitoring is often associated with a long-term investment horizon. If a firm has a considerable number of short-term investors, managers may be monitored less because short-term investors have less incentive to monitor. This decrease in monitoring may prompt managers to focus on their personal benefits at the expense of shareholders. [START_REF] Gaspar | Shareholder investment horizons and the market for corporate control[END_REF] find that firms with short-term shareholders are more likely to receive a takeover bid but get lower premiums. They argue that firms owned by short-term investors have a weaker bargaining position in acquisitions, which arises from lower monitoring. In a similar vein, (X. [START_REF] Chen | Monitoring: Which institutions matter[END_REF]) show that concentrated holdings by independent long-term institutions are linked to post-merger performance (bid announcement returns and three-year buy-and-hold abnormal returns) and make the withdrawal of bad bids more probable. Some information limits, however, may hinder the effectiveness of monitoring. For example, [START_REF] Zeckhauser | Are Large Shareholders Effective Monitors? An Investigation Of Share Ownership And Corporate Performance[END_REF] argue that firms in sectors with highly complex R&D investments show greater information asymmetry, making monitoring more difficult due to the complexity of the R&D involved. The authors further show that blockholders owning over 15% of a firm monitor effectively in low-R&D sectors but not in high-R&D sectors.

Acquiring capital market funding

A company's dependence on financial markets for funding may affect the influence of investors on firm policy [START_REF] Stein | Efficient Capital Markets, Inefficient Firms: A Model of Myopic Corporate Behavior[END_REF]. Managers who need equity financing for new projects may be incentivized to boost short-term stock prices in order to obtain financing on the best terms [START_REF] Bar-Gill | Misreporting Corporate Performance[END_REF]. So, around the dates of seasoned and initial offerings, firms strive to ensure the share price is inflated, which is later followed by abnormal long-term negative returns.
In line with this argument, issuers tend to report higher net earnings before the offering and post lower long-term abnormal returns [START_REF] Teoh | Earnings Management and the Long-Run Market Performance of Initial Public Offerings[END_REF].

Why passive investors may not be passive monitors

As shown above, research in finance has highlighted the effect of institutional investors on governance and corporate policies (e.g. [START_REF] Aghion | Innovation and Institutional Ownership[END_REF][START_REF] Hartzell | Institutional Investors and Executive Compensation[END_REF]). However, one concern is that a large portion of investors are classified as passive, so-called because they hold portfolios of stocks with low turnover. Their investment goal is to replicate a given index or an investment style (e.g., small-cap growth) at a lower cost. Their passive nature raises the question of how effective they are in monitoring managers. Passive investors may be perceived as large shareholders that do not expend resources to monitor managers. Furthermore, they have little incentive to monitor as their goal is simply to replicate the index and, given the large number of shares in their portfolios, they may not have enough resources to carry out monitoring effectively. This passivity would in turn weaken corporate governance and hurt shareholder performance. The issue can be framed as follows: are passive investors as passive as the term implies in terms of influencing managerial decisions? [START_REF] Venkiteshwaran | Is Carl Icahn Good for Long-Term Shareholders? A Case Study in Shareholder Activism[END_REF] investigate what effects passive investors have on firms. Studying this question is difficult because dialogues with management teams are usually 'not shared with outsiders'. They find, contrary to expectations, that the dialogue between indexers (a type of passive manager) and management teams is indeed fruitful and affects corporate decisions. Passive investors have two incentives to monitor. First, monitoring may increase the value of assets under management, which increases the fees they earn from the assets they manage. Second, institutional investors have a fiduciary duty to manage and vote their proxies in the best interest of shareholders. In this context, activist investors may seek to gain the votes of passive investors, given their size and concentration [START_REF] Brav | Hedge Fund Activism, Corporate Governance, and Firm Performance[END_REF], to help pass their proposals. Taken together, passive investors may be more active in influencing managerial decisions than the term passive indicates.

Mispricing and corporate decisions

Many facets of financial markets may affect corporate decisions, such as institutional investors, information asymmetry and analyst coverage. In the next two sections, we describe two financial market features studied in Chapters 5 and 6, stock mispricing and investor horizon, which as far as we know have not been studied in the marketing-finance interface. The efficient market hypothesis assumes that prices should follow a random walk. In other words, all information is reflected in stock prices, so tomorrow's stock prices will be independent of today's price changes. Some research, however, suggests that prices include past price information.
[START_REF] Haugen | The January Effect: Still There after All These Years[END_REF] show that the so-called January effect, in which certain types of stocks, notably small market capitalization stocks, tend to produce higher abnormal returns, persists despite having been documented for several decades. [START_REF] Fama | The Cross-Section of Expected Stock Returns[END_REF] detect unusually high average returns from stocks with high book-to-market ratios. The rising number of documented anomalies gave rise to the Fama-French three-factor model, then the Carhart four-factor model and more recently the Fama-French five-factor model [START_REF] Fama | A five-factor asset pricing model[END_REF]. And of course, the TMT bubble and the 2007-2008 financial crisis were put forward as arguments that markets may not always be efficient, even for extended periods. Research in behavioral finance suggests inefficient markets may be driven by investors making systematic mistakes in how they form their beliefs and expectations about the stock price (Shleifer & Summers, 1990).

How stock mispricing affects corporate decisions

Considerable evidence has emerged in the financial literature suggesting that stock mispricing has a real effect on a firm's investment and financing choices. For example, [START_REF] Shleifer | Stock Market Driven Acquisitions[END_REF] present a model that looks at the role of stock mispricing and how it affects mergers and acquisitions. Their model shows who buys whom, how the acquisition is paid for, the valuation consequences of mergers and how a wave of mergers can take place. [START_REF] Dong | Does Investor Misvaluation Drive the Takeover Market[END_REF] empirically test the links between market valuations of firms and takeover characteristics. They show that a firm's market mispricing can drive takeovers and firm strategy. Some finance studies suggest that mispricing also affects corporate investment decisions. [START_REF] Chirinko | Business Fixed Investment and "Bubbles": The Japanese Case[END_REF] study whether bubbles affect fixed investment using the Japanese bubble between the late 1980s and the early 1990s. They theorize that high equity values represent cheap financing that can be used to fund investments, and find that mispricing did lead to higher stock and bond issues, with the proceeds used to finance a much larger than normal portion of fixed investments, as well as much lower investment after the bubble burst. They show that high stock prices relax financing constraints, which affects corporate policies (capital investment, stock issuance, cash savings). [START_REF] Hau | Real effects of stock underpricing[END_REF] show that stock underpricing can likewise have real effects on firms.

The second financial market feature we look at is the investment horizon of institutional investors. We first look at institutional investor ownership in general and then at one particular aspect, their investment horizon. Institutional ownership of U.S. firms has increased dramatically in the last 50 years; today, institutional investors collectively hold the majority of U.S. shares (Gompers and Metrick 2001), and their share continues to rise. The high ownership of firms by institutional owners partly justifies our studies in chapter 6 on their impact on marketing investments. Shareholders exercise their power through proxy votes, shareholder proposals or the threat of exiting by selling large amounts of shares, which pushes down the share price.
Their effect is reflected in the higher votes for their shareholder proposals and a stronger stock price reaction [START_REF] Gillan | Corporate governance proposals and shareholder activism: the role of institutional investors[END_REF]. [START_REF] Shleifer | Large Shareholders and Corporate Control[END_REF], among others, theorize that major shareholders monitor managers. Indeed, institutional owners influence R&D by monitoring management and through CEO compensation. (X. [START_REF] Chen | Monitoring: Which institutions matter[END_REF]) study whether monitoring works and show, using acquisition decisions to reveal monitoring, that large shareholders that monitor enhance the results of firms participating in mergers. [START_REF] Cronqvist | Large Shareholders and Corporate Policies[END_REF] investigate whether large shareholders impact corporate policy and show that their presence affects executive compensation and corporate investments. Highlighting their importance, research suggests that CFOs view institutional investors as the most important marginal investors. CFOs say that institutional investors are important because they can leave the stock through herding if the company's earnings disappoint and, conversely, they can grant easier funding access that lowers the future cost of capital if they are pleased with firm management. [START_REF] Gillan | Corporate governance proposals and shareholder activism: the role of institutional investors[END_REF] provide some empirical evidence of the influence institutional investors exert by threatening to exit.

Institutional investor horizon

Underlying the notion of monitoring is the idea that investors stay in the firm long enough to reap the benefits of their monitoring and dialogue with management. However, institutional investors with a shorter investment horizon may have little to gain from monitoring because they will not remain shareholders long enough to reap benefits that outweigh the costs incurred. Furthermore, they have less time to acquire knowledge about the firm and are thus less able to dialogue with management. In the context of earnings management, [START_REF] Bushee | The Influence of Institutional Investors on Myopic R&D Investment Behavior[END_REF] shows empirically that institutional ownership with higher portfolio turnover significantly increases the probability that managers reduce R&D to manage earnings. [START_REF] Gaspar | Shareholder investment horizons and the market for corporate control[END_REF] investigate the impact of short-term investors in the context of acquisitions. They show that target firms owned by short-term investors are more likely to receive an acquisition bid but the bid premiums are lower. Furthermore, bidding firms with short-term investors generate significantly lower abnormal returns around merger announcements. (X. [START_REF] Chen | Monitoring: Which institutions matter[END_REF]) look at the impact of large holdings by long-term investors on mergers. Their findings indicate that their presence leads to higher post-merger abnormal returns and post-merger changes in industry-adjusted return on assets. Blockholders, i.e. long-term investors with large stakes, have also anecdotally been shown to have a positive impact on firms.
In a case study of the impact of Carl Icahn, a leading blockholder present in many companies over the past 30 years, based on the Schedule 13D filings of his investment vehicles, [START_REF] Venkiteshwaran | Is Carl Icahn Good for Long-Term Shareholders? A Case Study in Shareholder Activism[END_REF] find that meeting corporate governance targets is one of his biggest successes. Taken together, these studies indicate that investor horizon may affect corporate policies.

Dissenting opinions about whether secondary markets influence managers

Some researchers have contested the real effects of financial markets on corporate policies. They advance two main arguments that we develop briefly below. The first argument is that managerial cuts to long-term-oriented investments such as brand building and customer satisfaction are visible to the markets, but that dropped positive net present value projects are invisible to markets. In other words, the impact of cutting some expenditures may be compensated for by other decisions that create value for shareholders, but those decisions do not show up to the markets either (Stein 1989c). Second, it is impossible to know management's true intentions, implying that any earnings management behaviour may be the result of an omitted variable or may be capturing behaviour other than intentional manipulation [START_REF] Gunny | The Relation Between Earnings Management Using Real Activities Manipulation and Future Performance: Evidence from Meeting Earnings Benchmarks*[END_REF].

Summary and implications

In section 2.4, we discuss the 'real effects of financial markets' perspective (P. [START_REF] Bond | The Real Effects of Financial Markets[END_REF]). We start by describing how it argues that information flows should be considered bidirectional to better comprehend the effect of financial markets on firms. Next, we consider reasons why managers may respond to information in stock prices. We consider personal incentives, catering to investors and managerial learning as possible explanations for their willingness to respond to information in stock prices. We subsequently focus on various facets of financial markets that could lead to stock prices being inefficient, including limited arbitrage, investor expectations, reduced investor monitoring of managers and the role of financing, and suggest that passive investors may be more active in terms of monitoring and pressure on managers than the term passive indicates. We then briefly mention two arguments against secondary markets actually affecting firm decision-making before considering the literature surrounding mispricing and investor horizon.

Conclusion

We strive to show in this literature review that marketing and finance are linked and that marketing is an active player in a firm's relationship with financial markets, both as a transmitter and a receiver of information. This thesis strives to extend this work by focusing on what has not yet been done, i.e., theorizing the bidirectional relationship of information flows, which we hope this review makes clear. Chapter 2 starts by reviewing the two streams of the marketing-finance interface. Section 2.1 studies how marketing information flows from marketing investments in firms to financial markets, while Section 2.2 studies the literature concerning the opposite information flow, i.e. from financial markets to marketing investments. These two directions of information flow have traditionally been studied separately.
We propose to combine the streams using the 'real effects of financial markets' perspective studied in section 2.4. In financial research, this perspective proposes to extend the efficient market hypothesis to reflect bidirectional information flows. We believe it provides a suitable conceptual framework for combining the two streams of the marketing-finance interface. Putting the two streams of the marketing-finance interface side by side in sections 2.1 and 2.2 shows the complementarity of the research streams and suggests that the natural evolution is to combine the two approaches using the real effects of financial markets as a theoretical framework. In section 2.3 we study what the term information means and how it has been mobilized in the academic literature. This is important for our thesis because we mobilize the concept of information in Chapters 3 and 4, where we study how information can influence the response of investors (customers in this thesis) and competitors. In the following studies, we explore empirically the bidirectional nature of information flows between marketing investments and equity markets. Our first two studies ascertain whether marketing investments directly impact equity markets, with the equity market being defined as made up of investors and competing brokerage houses. The last two studies then investigate information flows from equity markets to marketing investments.

BROKERAGE HOUSE BRANDS AND INVESTORS

This research studies how one aspect of brokerage houses, their brands, influences the behavior of equity investors. In so doing, this article highlights a new bias that influences investor perception of information flows in equity markets. Applying the brand-signaling framework developed in the marketing field to this context suggests that brokerage house brands act as signals that convey information and influence investor response. We study three possible determinants of the brokerage house brand signal: awareness, performance and reputation, and propose a measure of the brokerage house brand score. Empirically, we perform an event study on recommendation changes and find a strong, positive impact of the brand of the brokerage house that issues the recommendation change on investor response. We further validate the impact of the three proposed determinants of the brokerage house brand.

Key words: institutional investor, brokerage house, brand signal, investors, marketing-finance interface, event study

This research has been presented at two peer-reviewed conferences. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Introduction

"When EF Hutton talks, people listen." This well-known tagline figured in a series of commercials for EF Hutton, a famous brokerage house, in the 1980s. It suggests anecdotally that EF Hutton's brand is so strong that just mentioning its name is enough to draw investor attention, without even reading its research reports. Marketing research supports the idea that brokerage house brands may be important [START_REF] Davis | Branding a B2B service: Does a brand differentiate a logistics service provider?[END_REF][START_REF] Homburg | Brand awareness in business markets: When is it related to firm performance[END_REF].
However, the hundreds of articles on investor response to equity research over the past 30 years [START_REF] Brown | Earnings forecasting research: its implications for capital markets research[END_REF][START_REF] Brown | Inside the "Black Box" of Sell-Side Financial Analysts[END_REF][START_REF] Ramnath | The Financial Analyst Forecasting Literature: A Taxonomy With Suggestions For Further[END_REF][START_REF] Schipper | Analysts' forecasts[END_REF] suggest that brokerage houses do not matter to equity investors. Instead, brokerage houses are limited in this research to a support role for security analysts, with brokerage house affiliation serving as a proxy for the performance and career outcomes of security analysts [START_REF] Bonner | Investor reaction to celebrity analysts: The case of earnings forecast revisions[END_REF][START_REF] Hilary | Analyst Forecast Consistency[END_REF][START_REF] Hong | Analyzing The Analysts: Career Concerns And Biased Earnings Forecasts[END_REF]. This article investigates this divergence about the influence of brokerage houses on investors by studying one aspect of brokerage houses, their brands. Several reasons underlie the interest of exploring the impact of brokerage house brands and their determinants on investors. First, brokerage houses have developed marketing policies (e.g., road shows, ads in the specialized press, investor meetings, websites, etc.) that position their brands at the heart of all of their activities. Their brand names figure prominently on the research notes targeting their clients and journalists. Salespeople always present their firm when calling, and analysts' names are always given with the brokerage house name. Second, brokerage house services are rarely provided through direct sale [START_REF] Brennan | Brokerage Commission Schedules[END_REF]. Instead, payment for brokerage house research results from a broker vote system by institutional investors that allocates trading commissions proportionally [START_REF] Maber | The Use of Broker Votes to Reward Brokerage Firms" and Their Analysts[END_REF]. Brokerage houses that are perceived as providing the best value are rewarded with a larger portion of an institutional investor's trading commissions. This system drives brokerage houses to build a strong marketing policy in order to enhance the perceived quality of their service, in addition to delivering quality investment research. Finally, previous research from marketing, finance and accounting indicates that equity markets are sensitive to marketing assets in general and brands in particular. For instance, customer satisfaction, a type of marketing asset, affects both investors [START_REF] Fornell | Stock Returns on Customer Satisfaction Do Beat the Market: Gauging the Effect of a Marketing Intangible[END_REF] and financial analysts [START_REF] Ngobo | Is customer satisfaction a relevant metric for financial analysts[END_REF]. Also, [START_REF] Madden | Brands matter: an empirical demonstration of the creation of shareholder value through branding[END_REF][START_REF] Mizik | The Financial Value Impact Of Perceptual Brand Attributes[END_REF] show that brand equity positively affects firm financial valuations, meaning that investors pay attention to the brands of companies they invest in. Furthermore, brand name changes (M. J. Cooper, Dimitrov, & Rau, 2001a) and investor recognition of firm names have been shown to significantly impact investors (Green & Jame, 2013).
Since equity markets seem sensitive to firm marketing activities, we reason that investors will also be sensitive to the marketing of other equity market players, in particular brokerage house brands. In this article, we use brokerage houses as the main unit of analysis. The scarcity of research into brokerage house brands is surprising given the key role brokerage houses play in the USD30 trillion US stock market (source: barrons.com). In their role as information intermediaries, brokerage houses expend significant resources in collecting and analyzing information to provide investment research in the form of research reports delivering earnings forecasts and stock recommendations (i.e., strong buy, buy, hold, sell, strong sell) to their clients. Prior research in accounting and finance has studied extensively how investors respond to investment research in general [START_REF] Ramnath | The Financial Analyst Forecasting Literature: A Taxonomy With Suggestions For Further[END_REF], but it has not studied brokerage houses separately from security analysts. In order to explore investor response to brokerage house brands, we apply a well-known conceptual framework from the branding literature - the brand signal framework [START_REF] Erdem | Brand equity as a signaling phenomenon[END_REF] - to equity markets. This framework posits that brands can be seen as signals and allows us to explain how brokerage house brand signals and their determinants - awareness, performance and reputation - might influence investor response to research notes. These research notes, where the brokerage house logo and the analyst's name figure prominently on the front page, usually discuss a stock or news item and often contain an actionable recommendation or forecast. Empirically, we generate abnormal returns using an event study on recommendation changes to test whether the amplitude of abnormal returns is associated with brokerage house brands and their determinants. We consider recommendation changes because they are more informative than recommendation levels [START_REF] Boni | Analysts, Industries, and Price Momentum[END_REF][START_REF] Jegadeesh | Do Analysts Herd? An Analysis of Recommendations and Market Reactions[END_REF]. We distinguish between recommendation upgrades and downgrades, as previous research shows that investors may respond differently to the two [START_REF] Asquith | Information content of equity analyst reports[END_REF][START_REF] Chang | Financial analysts' stock recommendation revisions and stock price changes[END_REF][START_REF] Loh | Is Sell-Side Research More Valuable in Bad Times[END_REF]. This research is the first, as far as we know, to investigate whether a key marketing investment, i.e., the brands of brokerage houses, influences investor behavior. To do so, we measure a brand score for each brokerage house and further develop proxies for the three main brand determinants - awareness, performance and reputation. Our results show that brokerage house brands and their determinants influence investors, and suggest more generally that information intermediary brands influence investors. This article makes several major contributions. First, whereas accounting and finance research has focused mainly on security analysts as the primary unit of analysis, this research aims to show that behind every security analyst lies the brokerage house employing them, and that the perception of a brokerage house plays a crucial role in the impact of information flows in equity markets.
Second, we identify a new bias that may influence investors' perception of information flows. Investors should be aware that information intermediary brands may positively or negatively influence their perception of information, leading potentially to suboptimal decisions. Brokerage houses can take advantage of the influence of brands on information by developing a strong marketing policy. Finally, by applying a conceptual framework developed in marketing to investors, and by providing measures of brokerage house brand equity, this article contributes to the marketing-finance interface, a recent and growing stream of research [START_REF] Ganesan | Handbook of Marketing and Finance[END_REF]. The remainder of this study is organized as follows: we first develop the conceptual framework and hypotheses. Then, we describe the data and methodology, present the results and finish by discussing the study's theoretical and managerial implications, limitations and suggestions for future research.

3.2 Conceptual framework and hypotheses

The brand signal theory applied to equity markets

To investigate brokerage house brands, we use the brand signal theory [START_REF] Erdem | Brand equity as a signaling phenomenon[END_REF][START_REF] Zuo | The Informational Feedback Effect Of Stock Prices On Management Forecasts[END_REF], derived from information economics, which states that in situations of asymmetric information about product quality, brands convey information in the form of signals that reduce uncertainty about quality in the decision-making process [START_REF] Kirmani | No Pain, No Gain: A Critical Review Of The Literature On Signaling Unobservable Product Quality[END_REF]. In this theory, credibility, which can be defined as the perception of the trustworthiness of the brand's claims as well as its technical capability to deliver on its promises, is central. Highly credible brands are more likely to be considered and chosen because they reduce uncertainty [START_REF] Erdem | Brand equity as a signaling phenomenon[END_REF][START_REF] Zuo | The Informational Feedback Effect Of Stock Prices On Management Forecasts[END_REF]. The brand signal theory is appropriate for credence products such as research notes, for which quality uncertainty is inherent: equity investors cannot assess the quality until well after the recommendation is issued. Moreover, the brokerage house knows more about the quality of the recommendation than investors because it knows the information that has been collected and analyzed, and the effort put into drawing up the recommendation.

Hypotheses

Applying the brand signal framework to equity markets, we argue that, when deciding to respond to a research note, investors do not rely exclusively on their perception of the analyst who signs the note but also on their perception of the brokerage house brand, which acts as a signal that conveys information about the quality of the research note (see Figure 1). A highly credible brokerage house brand signal (believable and trustworthy) reassures investors that they can trust and follow the recommendation; the signal reduces decision-making costs by decreasing their need to seek more information about the issuing brokerage house, and reduces the perceived risks of following the recommendation. In other words, the same research note (same analyst, same content, same recommendation level, etc.)
bearing the brand of brokerage house A will be perceived differently from the same research note bearing the brand of brokerage house B, resulting in a different decision by investors (i.e. buying, selling or avoiding, depending on the recommendation change). We summarize the core conceptual idea in Hypothesis 1.

Hypothesis 1: Brokerage house brand signals influence investor perception of research notes.

The notion of credibility underlies our three hypotheses concerning brokerage house determinants. Credibility of the brand signal in equity markets seems to be crucial as brokerage houses communicate in a similar territory: they all claim to offer high quality services, deep knowledge of the stock market, excellent recommendations and accurate forecasts. This lack of differentiation makes brand credibility all the more important for investors. The consistency of the brokerage house's claims with reality lends further credibility to the brand as a signal; in contrast, inconsistent claims erode brand signal credibility [START_REF] Erdem | Brand equity as a signaling phenomenon[END_REF][START_REF] Zuo | The Informational Feedback Effect Of Stock Prices On Management Forecasts[END_REF]. In this section, we consider three brokerage house brand determinants that might contribute to the credibility of the brokerage house brand signal and lead to higher investor response: awareness, performance and reputation.

Brokerage house awareness - Awareness facilitates brand recognition and reassures investors about their decisions. For a brokerage house with low (high) awareness, we expect the brand signal to be perceived as less (more) credible, leading to a lower (higher) response of investors.

Hypothesis 2: The higher the awareness, the greater the investor response to research notes bearing the brokerage house brand.

Brokerage house performance - The finance and accounting literature highlights the importance of performance as an indicator of ability for investors [START_REF] Bradley | Before an Analyst Becomes an Analyst: Does Industry Experience Matter[END_REF][START_REF] Hilary | Analyst Forecast Consistency[END_REF]. We reason that the better the performance of a brokerage house, the more the brokerage house brand will be perceived as a credible signal and the more investors will respond to research notes bearing the brokerage house brand.

Hypothesis 3: The better the performance of a brokerage house, the greater the investor response to research notes bearing the brokerage house brand.

We consider two characteristics that might contribute to the perception of a brokerage house's performance: perceived brokerage error and information access.

Perceived Brokerage Error - Over time, the size of the gap between a brokerage house's earnings forecasts and actual earnings leads investors to build an ongoing perception of the overall error of a brokerage house, as well as to assess the trustworthiness and believability of a brokerage house. Other things equal, we expect a brokerage house brand with low (high) perceived error to generate a more (less) credible signal, leading to a higher (lower) response of investors.

Information Access: Investment Bank - Brokerage houses that are (not) part of an investment bank should be perceived as performing better (worse) because of more (less) privileged access to information arising from the underwriting relationships of their investment banking group. These brokerage house brands are considered more (less) credible, leading to a higher (lower) response of investors.
Brokerage house reputation - The concept of reputation has been studied in many disciplines, including marketing [START_REF] Walsh | Customer-Based Corporate Reputation Of A Service Firm: Scale Development And Validation[END_REF] and finance and accounting [START_REF] Hong | Security Analysts' Career Concerns and Herding of Earnings Forecasts[END_REF][START_REF] Stickel | Reputation and Performance Among Security Analysts[END_REF]. Reputation can be seen as an overall evaluation of a firm or organization, the evaluation serving as a "quality promise" that positively impacts attitudes and behaviors toward that entity [START_REF] Fombrun | Reputation[END_REF][START_REF] Fombrun | Who's tops and who decides? The social construction of corporate reputations[END_REF][START_REF] Walsh | Customer-Based Corporate Reputation Of A Service Firm: Scale Development And Validation[END_REF]. We reason that the greater the brokerage house reputation, the more investors may consider the brokerage house brand as a credible signal.

Hypothesis 4: The stronger the brokerage house reputation, the greater the investor response to research notes bearing the brokerage house brand.

Industry Recognition reflects the credibility the stock investment industry accords to brokerage houses. We expect brokerage house brands with more (less) industry recognition, as shown by the total number of industry awards won, to be perceived as more (less) credible, leading to a higher (lower) response of investors.

Control variables

We add to the conceptual framework control variables from the literature that have been shown to impact investor response (see the table in the Appendix).

Analyst characteristics - Analyst experience, boldness, number of recommendations issued, number of stocks covered and error difference.

Recommendation characteristics - Size of the recommendation change, distance of the recommendation to the consensus.

Firm characteristics - Percentage of institutional ownership of the stock, book-to-market ratio and firm size.

Data and measures

We now look at the data and methodology used to test the hypotheses. We explain how we calculate the recommendation changes, the dependent variable, the brokerage house brand determinants and the control variables concerning analysts, recommendation changes and firms. Table 1 provides a short description of the variables and their sources. We then specify the statistical models.

Recommendation data and description of recommendation change measure

We start with all recommendations issued by US brokerage houses on US-listed firms in all industries between 2000 and 2014 from the I/B/E/S recommendations detail file. For recommendation changes issued on non-trading days, or for recommendations issued between 4:30 PM and 11:59 PM, Day 0 (the event day) is the next trading day. The selection criteria for recommendations for inclusion in the dataset are as follows:

1/ All listed firms with fewer than three analysts covering them are removed from the sample to ensure that the firms in the sample are of interest to investors [START_REF] Loh | When Are Analyst Recommendation Changes Influential?[END_REF].

2/ Stock price data from CRSP and accounting information from Compustat must be available for the firms concerned by the recommendation changes.

3/ The recommendation must be confirmed by the analyst (in the I/B/E/S review date field) within 365 calendar days [START_REF] Ljungqvist | Rewriting History[END_REF].

The 365-day criterion ensures that recommendations are not stale and remain relevant to investors.
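These selection steps translate into a handful of dataframe filters. The following is a minimal sketch, assuming hypothetical column names (ticker, analyst_id, announce_date, review_date and precomputed CRSP/Compustat availability flags) rather than the exact I/B/E/S field names:

```python
import pandas as pd

# Hypothetical I/B/E/S-style extract; column names are illustrative only.
recs = pd.read_csv("ibes_recommendations.csv",
                   parse_dates=["announce_date", "review_date"])
recs["year"] = recs["announce_date"].dt.year

# 1/ Keep firms covered by at least three analysts in a given year.
coverage = (recs.groupby(["ticker", "year"])["analyst_id"]
                .nunique().rename("n_analysts").reset_index())
recs = recs.merge(coverage, on=["ticker", "year"])
recs = recs[recs["n_analysts"] >= 3]

# 2/ Require CRSP price data and Compustat accounting data
#    (boolean flags assumed to come from an earlier merge).
recs = recs[recs["has_crsp"] & recs["has_compustat"]]

# 3/ Keep recommendations confirmed within 365 calendar days,
#    so that they are not stale.
age_days = (recs["review_date"] - recs["announce_date"]).dt.days
recs = recs[age_days <= 365]
```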
Measuring the brokerage house brand score

We calculate a unique brand score for each of the 66 brokerage houses per year using the 755 brokerage-year observations of brokerage house characteristics. First, we carry out a principal component analysis on the brokerage house characteristics (see 3.3.1, retaining only the quantitative variables) that define the three determinants of the brokerage house brand. We keep the three principal components and weight them by their respective eigenvalues to calculate an average brand score, Brand_Score_{k,y}, for each brokerage house per year (a computational sketch appears at the end of this section). A positive value indicates a brand stronger than the average (which equals 0), whereas a negative value indicates a weaker brand.

Control variables

We rely on prior research to operationalize the control variables.

Analyst characteristics

For the analyst characteristics, forecasts, actual earnings and recommendation data were taken from I/B/E/S. In line with previous research, all analyst variables except experience are lagged to alleviate endogeneity concerns.

Analyst experience (Analyst_Experience_{j,y}) - For a given year y, this variable indicates the total number of years analyst j has issued recommendations in the sample at the time the recommendation change is issued.

Analyst boldness (Analyst_Boldness_{j,y}) - For a given year y, we calculate the average absolute distance of the earnings forecasts of analyst j to the consensus. We winsorize the average at the 5% level.

Stock coverage (Stock_Coverage_{j,y}) - We count the number of stocks covered by analyst j in a given year y in the recommendation sample.

Recommendation frequency (Reco_Frequency_{j,y}) - This variable represents the number of recommendation changes analyst j issues in a given year y.

Error difference (Error_Difference_{j,k,y}) - We calculate the average of the analyst's absolute forecast errors in a given year y. The absolute forecast error is defined as the percentage difference between the analyst's EPS forecast and the actual EPS, in absolute value. We winsorize the error at the 5% level. Finally, we subtract the average brokerage house error (BH_Error_{k,y}).

Consensus distance (Consensus_Distance_{l}) - For each recommendation change l, the consensus distance is calculated by subtracting the recommendation level from the consensus recommendation level.

Firm characteristics

Book-to-market ratio (Book_to_Market_{i,y}) - The book-to-market ratio is calculated as the book value of common equity (total assets from Compustat) divided by market capitalization (source: CRSP).

Institutional ownership (Inst_Owner_{i,y}) - This figure represents the percentage of institutional investors who hold stock in the firm (source: Thomson-Reuters).

Firm size (Firm_size_{i,y}) - We take the log of the firm's market capitalization. Market capitalization is calculated as shares outstanding at year-end times the share price at year-end and represents the value of the firm's stock on the stock market (source: CRSP).

Model specification

In this section, we discuss how we specify the model to test investor response to brokerage house brands and their determinants. To test hypotheses 1-4 about investor response, we proceed in the following way. First, like [START_REF] Loh | Is Sell-Side Research More Valuable in Bad Times[END_REF], we separate the upgrades from the downgrades and keep only positive (negative) SCARs for upgrades (downgrades). BH_Error_{k,y} and Industry_Recognition_{k,y} have been treated to remove the effect of brokerage house size as described in 3.3.2 and show no significant correlations (see Table 4).
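As flagged above, the brand score computation reduces, in essence, to an eigenvalue-weighted average of the first three principal components. A minimal sketch follows, assuming the brokerage-year characteristics are already assembled in a dataframe; the column names are illustrative, and standardizing the characteristics before the PCA is our assumption rather than a step stated in the text:

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def brand_scores(bh_year: pd.DataFrame, cols: list) -> pd.Series:
    """Eigenvalue-weighted average of the first three principal
    components of the brokerage house characteristics."""
    X = StandardScaler().fit_transform(bh_year[cols])  # assumed standardization
    pca = PCA(n_components=3).fit(X)
    components = pca.transform(X)           # component scores per observation
    eigenvalues = pca.explained_variance_   # one eigenvalue per component
    weighted = (components * eigenvalues).sum(axis=1) / eigenvalues.sum()
    return pd.Series(weighted, index=bh_year.index, name="brand_score")

# bh_year: one row per brokerage house and year (755 observations).
# scores = brand_scores(bh_year, ["awareness", "bh_error", "industry_recognition"])
```

By construction the score is centered near zero, consistent with interpreting positive values as stronger-than-average brands.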
Results

Description of recommendation change magnitude

Summary statistics

Table 3 reports the summary statistics for the main sample. The statistics are consistent with previous research. In our sample, the average CAR, 4.5% for upgrades and -4.25% for downgrades, is in line with the values reported by [START_REF] Raassens | The Market Valuation Of Outsourcing New Product Development[END_REF] (3.2% for positive events and -2.8% for negative events). Similar to [START_REF] Loh | When Are Analyst Recommendation Changes Influential?[END_REF], we find that the distributions of the SCARs of recommendation changes (histograms available upon request) show a significant number of values with unexpected signs, i.e., negative SCARs for upgrades and positive SCARs for downgrades. These distributions justify using separate models and considering only the positive (negative) values of SCARs for upgrades (downgrades), like [START_REF] Loh | When Are Analyst Recommendation Changes Influential?[END_REF]. Brokerage houses have an average awareness of 12.2 years. The average absolute brokerage error is 3.79% (SD = 2.91). Regarding the reputation characteristics, brokerage houses win on average 3.17 award points (SD = 10.4). The yearly error difference between analyst and brokerage house error is -0.01 (SD = 0.09). The data for analysts, firms and recommendation changes are in line with [START_REF] Baum | Mutual Forbearance and Competition Among Security Analysts[END_REF][START_REF] Clement | Analyst forecast accuracy: Do ability, resources, and portfolio complexity matter[END_REF][START_REF] Clement | Do investors respond to analysts' forecast revisions as if forecast accuracy is all that matters?[END_REF][START_REF] Jegadeesh | Do Analysts Herd? An Analysis of Recommendations and Market Reactions[END_REF][START_REF] Loh | When Are Analyst Recommendation Changes Influential?[END_REF]. Analysts show an average of 7.1 years of experience in our sample. The average analyst boldness is 0.19; analysts cover 3.8 stocks per year and issue on average 16.7 recommendations per year. Firms in the sample have an average market cap of $6.5bn, 73% of their owners are institutional investors and they show a mean book-to-market of 0.5. Table 4 shows the correlations between variables. There are no significant strong correlations between the explanatory variables, as they have been treated to remove the effect of brokerage house size (see 3.3.2).

Brokerage house brand outcomes

This section looks at the estimated coefficients from the regressions on the sample for investor response (see Table 5 and Table 6).

Do brokerage house brands affect investor response to recommendation changes?

To examine whether the brand score influences investor response (see Table 5), we estimate Equation 2 for upgrades and downgrades, as explained in the model specification sub-section, with Brand_Score_{k,y} taking the place of BH_{k,y} in the equation. The impact of the brokerage house brand is non-significant for downgrades but significant and positive when an upgrade is recommended (p < 0.01).

3.4.4 Do brokerage house brand determinants affect investor response to recommendation changes?

We now present the hypothesis results concerning the impact of brokerage house awareness (H2), performance (H3) and reputation (H4) on the response of investors (see Table 6). These results come from Equation 2, where BH_{k,y} is replaced by the brokerage house brand characteristics: BH_Awareness_{k,y}, BH_Error_{k,y}, IB_{k,y} and Industry_Recognition_{k,y}.
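For concreteness, the estimation of Equation 2 on the upgrade subsample can be sketched as follows. This is a minimal sketch assuming an OLS specification with year and industry fixed effects; the variable names are placeholders for those defined above, and the robust-standard-error and VIF checks anticipate the validation analysis reported below:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("reco_changes_upgrades.csv")  # positive SCARs only

# Equation 2 with the brand score in place of BH_{k,y}; year and
# industry enter as fixed effects via categorical dummies.
formula = ("scar ~ brand_score + analyst_experience + analyst_boldness"
           " + stock_coverage + reco_frequency + error_difference"
           " + reco_change_size + consensus_distance"
           " + book_to_market + inst_owner + firm_size"
           " + C(year) + C(industry)")
model = smf.ols(formula, data=df).fit(cov_type="HC1")  # robust standard errors
print(model.summary())

# Validation: variance inflation factors for the regressors.
X = model.model.exog
vif = pd.Series([variance_inflation_factor(X, i) for i in range(X.shape[1])],
                index=model.model.exog_names)
print(vif)
```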
Table 7 summarizes the major findings for our four hypotheses.

Brokerage house awareness - The results indicate that brokerage house awareness significantly and positively influences investors for upgrades (p < 0.05) and downgrades (p < 0.01). The results support Hypothesis 2.

Brokerage house performance - Overall, the results indicate that brokerage house performance influences investor response to brokerage house brands, supporting Hypothesis 3.

Perceived Brokerage House Error - We find that investors respond negatively to brokerage house error for downgrades (p < 0.01). The impact of this variable, however, is non-significant for upgrades.

Information Access: Investment Bank - We find that investors are positively and significantly affected by the additional information access for both upgrades (p < 0.01) and downgrades (p < 0.01).

Brokerage house reputation - Hypothesis 4, that brokerage house reputation impacts investor response, is partially supported. The results indicate that industry recognition is positively significant for upgrades (p < 0.001). However, brokerage house reputation has no significant impact in the case of downgrades.

Control variables

Analyst experience, recommendation change size, institutional ownership and firm size are significant and show the signs expected from investor research (see Appendix 1). Analyst boldness and recommendation frequency are unexpectedly negative, and stock coverage and consensus distance show mixed signs. Error difference, for which we had no expectations, is negative and significant for downgrades. Book-to-market is positive and significant for upgrades.

Validation Analysis

In the investor model, we test for multicollinearity and heteroskedasticity. The VIF factors in Equation 2 are below 3 for both the brand score and the brokerage house determinants. The results are similar when regressing with robust standard errors.

Discussion

This article examines the impact of brokerage house brand signals on a key player in equity markets, investors. Based on a sample of 47,345 recommendation upgrades and downgrades from 66 brokerage houses concerning 2,759 firms over 15 years, we find that brokerage house brand signals are a significant determinant of investor buying or selling behavior in response to recommendation changes, in addition to the effect of security analysts. We define three determinants of a brokerage house brand - awareness, performance and reputation - and find overall that these three determinants influence investor response to recommendation changes. We structure the discussion below around the key results that arise from the research. We first discuss whether brokerage house brands impact investors, then the impact of the determinants of brokerage house brands and finally the general contributions.

Do brokerage house brands influence equity markets?

Under the efficient market hypothesis, the price of a stock reflects all available information and provides an unbiased estimate of a firm's value [START_REF] Fama | Efficient Capital Markets: A Review of Theory and Empirical Work[END_REF]. Under this framework, brokerage house brand signals should not influence investor response to recommendation changes. By showing that some brokerage house brands on recommendation changes generate more investor response than others, we however highlight a new variable that matters to investors, suggesting a stock market inefficiency.
To better understand the perception of their brands, we propose a methodology based on market data that managers can use to calculate their brokerage house brand premium and to assess where their brand stands relative to competitors. Using lagged market variables allows brokerage houses to identify the perception of their brands and measure the effectiveness of their brand. For example, according to our methodology, the Deutsche Bank Securities brand scores 0.63 on average, while FBR Capital Markets takes the value of -0.5. Applying the estimates obtained from Equation 2 (see Table 5) to these scores translates such differences into an abnormal return premium for the stronger brand. Brokerage houses with lower brand premiums should be aware that the lower effectiveness of their brands may indirectly affect their revenues (as discussed in the introduction, institutional investors may allocate them lower trading commissions due to the lower perceived quality of the brokerage house brand). Accordingly, managers should adopt more focused marketing policies to strengthen their brands, for example by practicing a strong corporate communication policy using supports such as social media, websites and the specialized press, mobilizing more salespeople when the brokerage house issues recommendation changes, and setting up more meetings between firm management and investors and more investor relations events. These findings lead us to recommend that future research into the impact of investment research on equity markets systematically take brokerage house brands into account, at least as an indicator variable, instead of using brokerage house size to proxy for brokerage house effects.

Which brokerage house brand determinants influence equity markets?

Our conceptual framework considers three brand determinants - awareness, performance and reputation - that partially explain the perception and credibility of the brokerage house brand signal. The results (see Table 6) show that each of the three determinants significantly impacts investor response to upgrades, downgrades or both, suggesting nomological validity for the brand determinants of the model. Overall, these results support the argument that the three brand determinants contribute to the effectiveness of the brokerage house brand signal. The greater the past performance of the brokerage house, the more investors perceive the brand signal as credible and respond to the recommendation change. In a similar way, the greater the brokerage house awareness and the stronger the reputation, the greater the impact of the recommendation change on investor decisions. These findings suggest that brokerage houses must have sufficiently large marketing budgets to build and maintain these three brand determinants: creating and maintaining brand awareness over the long term, and communicating about a brokerage house's performance and awards to create a better perception of its reputation. Public relations, road shows, investor conferences and specialized communications to investors and journalists can all contribute to making the brand signal more effective.

Contributions

In this article, we address how a key marketing factor, branding, impacts investors. Below, we discuss the contributions of the research to marketing and to finance and accounting, practical implications and finally suggestions for future research.

Contributions to marketing

This paper contributes to the marketing literature in three ways.
First, we apply the brand signal framework [START_REF] Erdem | Brand equity as a signaling phenomenon[END_REF][START_REF] Zuo | The Informational Feedback Effect Of Stock Prices On Management Forecasts[END_REF] to an understudied sector in marketing, i.e., capital markets [START_REF] Lehmann | Journal Evolution and the Development of Marketing[END_REF]. The results highlight the robustness of this model. Second, in contrast to previous studies that use declarative data in the brand signal framework, this research is the first to use actual data (stock market data), yielding the real benefit of brand equity - defined as the value of the brand signal [START_REF] Erdem | Brand equity as a signaling phenomenon[END_REF] - on the brand's performance in the marketplace. By quantifying the incremental benefit due to the brokerage house name according to the stakeholder (investors), we provide a new measure of brand equity, namely the abnormal return premium. Third, this research is anchored in the marketing-finance interface stream of research. In the introduction to a special edition on this subject, [START_REF] Hanssens | Marketing strategy and Wall Street: nailing down marketing's impact[END_REF] call for research into information intermediaries such as security analysts. We focus on a previously unstudied information intermediary, the brokerage houses that employ security analysts. Furthermore, we show that brokerage house brands act as signals that convey information and influence the response of equity investors. We also respond to a call for research into understanding the potential biases introduced by persuasive communication aimed at investors (S. Srinivasan & Hanssens, 2009).

Contributions to finance and accounting

This article explores the impact of a key information intermediary, brokerage houses, and leads to contributions to research into investors, which we discuss successively. This research contributes to investor research and to research on signals. We introduce a new type of signal, brokerage house brand signals, using the brand signal framework [START_REF] Erdem | Brand equity as a signaling phenomenon[END_REF][START_REF] Zuo | The Informational Feedback Effect Of Stock Prices On Management Forecasts[END_REF] derived from information economics. Previous research has shown that investors take into account the marketing assets of firms, such as brands [START_REF] Madden | Brands matter: an empirical demonstration of the creation of shareholder value through branding[END_REF] and customer satisfaction [START_REF] Fornell | Stock Returns on Customer Satisfaction Do Beat the Market: Gauging the Effect of a Marketing Intangible[END_REF], in their decision-making process (see (S. Srinivasan & Hanssens, 2009) for a review). We contribute to a better understanding of how another marketing asset, brokerage house brands, impacts investors. Accounting and finance research has focused mainly on security analysts as the main unit of analysis. However, behind every security analyst lies the brokerage house that employs them. We identify a new unit of analysis, the brokerage house brand, and show that it influences investor response beyond the effect of the analyst. Finally, our research adds to previous research that differentiates between upgrades and downgrades. Investors seem to rely more on the brokerage house brand and its determinants for upgrades relative to downgrades, perhaps due to investors' greater information needs for downgrades relative to upgrades [START_REF] Asquith | Information content of equity analyst reports[END_REF].
Implications for brokerage houses, analysts, investors and CEOs/CFOs

Brokerage houses should be aware that their brands are important marketing assets [START_REF] Srivastava | Market-Based Assets and Shareholder Value: A Framework for Analysis[END_REF] that help generate revenues. We recommend that brokerage houses reinforce their marketing policies to increase awareness and build a strong reputation. Marketing tools can help convey the perception of a brokerage house's performance to investors. Good performance is crucial in this sector but is not enough if investors do not perceive it. To benchmark their brand effectiveness relative to competitors, brokerage houses can use the methodology developed in this article. Our results may also have strong implications for measures of analyst performance and compensation. So far, measures of analyst performance have aggregated the contributions of both brokerage house brands and analysts. This means that the performance and compensation of analysts in brokerage houses with strong brands have unknowingly benefited from this advantage. In contrast, analysts employed by brokerage houses with less effective brands have been penalized in the measurement of their equity market response. Our results suggest that, when calculating the real contribution of analysts and their compensation, an approach such as our methodology, which distinguishes between the real contribution of the analyst and the impact of the brokerage house brand, is recommended. We identify a new bias that may influence investor perception of the information flows that underlie capital markets. Investors should be aware that information intermediary brands and brand determinants may influence positively or negatively their perception of the information provided, potentially leading to suboptimal investment decisions. CEOs and CFOs communicate with investors via security analysts. This research suggests that, when selecting the security analysts used to convey messages to equity markets, they should prefer security analysts employed by brokerage houses with more effective brands.

Limits and future research

We calculate the brokerage house brand score between 2000 and 2014. Future research could study whether and how brokerage house brand scores evolve over time, and the influence of events such as recessions [START_REF] Loh | Is Sell-Side Research More Valuable in Bad Times[END_REF], Regulation Fair Disclosure [START_REF] Baum | Mutual Forbearance and Competition Among Security Analysts[END_REF] and big bangs (the separation of research and trading fees). While we focus on assessing the impact of brokerage house brands on investors and define some brokerage house determinants that might explain this impact, future research might consider other brand characteristics from the marketing literature, such as loyalty and brand associations [START_REF] Aaker | Measuring Brand Equity Across Products And Markets[END_REF]. We operationalize the brand determinants using equity market data. We recommend that other measures (e.g., declarative) of brokerage house brands and their determinants be developed to build on our results. Nevertheless, declarative measures will be complex to operationalize over a long time span. The article highlights the importance of one particular information intermediary, i.e., the brokerage house.
Further research could replicate the approach used in this article to investigate the role of other information intermediary brands in other financial markets, such as credit rating agencies in bond markets. Finally, we successfully apply the brand signal framework to capital markets. We hope that this research will inspire the application of other marketing research frameworks to financial markets and stimulate marketing-finance interface research.

[Table notes: (a) statistics based on the sample of 755 brokerage-year observations for 66 brokerage houses; (b) calculated based on the sample of 47,345 recommendation changes.]

Competition is a central theme in marketing research [START_REF] Weitz | Introduction to special issue on competition in marketing[END_REF]. While Chapter 3 focuses on investor response to brokerage house brands, this paper investigates competitor response to brands. We do so in an understudied setting for marketing research, i.e., equity markets [START_REF] Lehmann | Journal Evolution and the Development of Marketing[END_REF]. We ask two questions in this article. First, do brokerage house brands influence competitors? Second, do the brand determinants - awareness, performance and reputation - contribute to this influence? How brokerage houses compete is an understudied question in marketing research. Innovation in the brokerage industry is high, and innovations are quickly copied or imitated. A leader is characterized as a pioneer or innovator that meets customers' needs better than anyone else in the industry. A market follower strategy is a strategy of product imitation. This strategy reflects Porter's [START_REF] Porter | Competitive strategy: Techniques for analyzing industries and competitors[END_REF] generic competitive strategy of differentiation, which has been extensively studied in industrial organization. Under this strategy, the leader bears the expense and risk of developing a new product, bringing it to market and educating the market. Another firm can then come along and copy or improve on the product. Although it will not overtake the leader, the follower can achieve high profits because it did not bear any of the innovation expenses. Under Porter's framework, once a firm acquires a reputation for quality, the brand acts as a repository for its reputation for quality. In this study, we look more closely at the role of the brand as a repository for a brokerage house's reputation and at the determinants of the brand. Little research has studied how brokerage houses compete, despite the importance of brokerage houses in equity markets. Research has instead focused on competition between analysts and herding (R. A. Cooper, Day, & Lewis, 2001b; [START_REF] Jegadeesh | Do Analysts Herd? An Analysis of Recommendations and Market Reactions[END_REF]). As far as we know, this study is the first to investigate the role played by brands in the competition between brokerage houses. In the stock market, when a brokerage house issues a research note on a stock, competitors (we use the terms competing brokerage houses and competitors interchangeably) may choose to follow this research note quickly by releasing their own research note. This choice depends on whether or not the issuing brokerage house is perceived as having the status of information leader on the stock. The status of information leadership on a stock may arise from specific industry knowledge, intellectual capital or business relationships [START_REF] Baum | Mutual Forbearance and Competition Among Security Analysts[END_REF] (R. A.
Cooper et al., 2001b) that enable some brokerage houses over time to become dominant or leader brokerage houses thanks to the high quality of their forecasts and recommendations. Information leader status is associated with higher first-mover trading volume and indirectly generates higher revenues for brokerage houses (R. A. Cooper et al., 2001b). However, because brokerage house revenues and profits are related in part to perceived performance, brokerage houses with less industry-specific knowledge or intellectual know-how may choose to follow the leader brokerage houses by delaying the release of their forecasts or recommendation changes [START_REF] Moorthy | Using Game Theory to Model Competition[END_REF]. The delay enables the follower brokerage houses to use the information produced by leader brokerage houses to improve the accuracy of their earnings forecasts and the quality of their recommendations, while at the same time reducing spending on the internal skills, sector experts and information sources needed to be a leader brokerage house. This strategy allows follower brokerage houses to achieve profits without incurring the higher costs of a leader brokerage house. To sum up, depending on their strategic posture, brokerage houses may choose to pursue leader status and benefit from additional revenues, or they may choose to follow leaders and save on the spending required to reach leader status. Extending the theoretical model from Chapter 3, we reason that the brokerage house brand signal on a research note conveys a credible signal to competitors about the issuing house's information leadership, in addition to its signal to investors. Empirically, we measure competitor response using the leader status defined by [START_REF] Baum | Mutual Forbearance and Competition Among Security Analysts[END_REF] (R. A. Cooper et al., 2001b; [START_REF] Jegadeesh | Do Analysts Herd? An Analysis of Recommendations and Market Reactions[END_REF]). We then use a brokerage house's leader status on a stock recommendation change to study the impact of brokerage house brands and their determinants on competing brokerage houses, using the methodology from Chapter 3. The rest of the empirical framework is the same as in Chapter 3, i.e., we use the same brand score, brand determinants and control variables. This extension of Chapter 3 makes three contributions. First, accounting and finance research has focused mainly on security analysts as the primary unit of analysis. This research aims to show that behind every security analyst lies the brokerage house employing them and that the perception of a brokerage house plays a crucial role in the impact of information flows in equity markets. Second, we identify a new bias that may influence equity market players' perception of information flows. The characteristics of brokerage house brands may influence positively or negatively competitor perception of their information leadership. Third, this article is, as far as we know, the first to study the influence of brokerage house brands on their competitors. In the following sections, we describe our hypotheses, discuss our dependent variable, present the results and end with the conclusion. Because this article extends Chapter 3, we do not present the conceptual framework, independent variables or control variables, which are presented in Chapter 3.
Conceptual framework and hypotheses

Applying the brand signal framework set out in Chapter 3 to competition between brokerage houses, we argue that, when deciding to respond to a research note, competing brokerage houses rely not only on their perception of the analyst who signs the note, but also on the brokerage house brand present on the research note. This brand acts as a signal that conveys information about the quality of the research note that competitors respond to (see Figure 1). In other words, the same research note (same analyst, same content, same recommendation level, etc.) bearing the brand of brokerage house A will be perceived differently from the research note bearing the brand of brokerage house B, resulting in different decisions by competing brokerage houses (i.e. buying or selling depending on the recommendation change). We summarize the core conceptual idea in Hypothesis 1.

Hypothesis 1: Brokerage house brand signals influence competing brokerage house perception of research notes.

We further consider the three brokerage house brand determinants presented in Chapter 3: awareness, performance and reputation. We reason that these three determinants may enhance the credibility of the brokerage house brand signal and lead to a higher competitor response. When a recommendation change is issued, awareness, perceived performance and reputation increase the credibility of the brokerage house brand signal, increasing the likelihood of competing brokerage houses following the issuing brokerage house's recommendation and increasing the likelihood of the issuing brokerage house being considered a leader. Consequently, we formulate hypotheses 2, 3 and 4 as follows.

Hypothesis 2: The higher the awareness, the higher the competing brokerage house response to research notes bearing the brokerage house brand.

Hypothesis 3: The better the performance of a brokerage house, the higher the competing brokerage house response to research notes bearing the brokerage house brand.

Hypothesis 4: The stronger the brokerage house reputation, the higher the competing brokerage house response to research notes bearing the brokerage house brand.

Model specification

We estimate Equation 2, in which the dependent variable is the information leader status of recommendation change l issued by brokerage house k on firm i in year y:

Leader_{i,k,l} = g0 + g1·BH_{k,y} + g2·Analyst_Char_{j,y} + g3·Firm_Char_{i,y} + g4·Reco_Char_{i,k,l} + g5·Year_y + g6·Industry_i + e_{i,k,l} (Equation 2)

where BH_{k,y} is the brokerage house brand variable; Analyst_Char_{j,y} is a matrix of analyst variables; Firm_Char_{i,y} is a matrix of firm variables; Reco_Char_{i,k,l} is a matrix of recommendation variables; Year_y is the year fixed effects; Industry_i is the fixed effect of the industry (SIC 1 level) that firm i belongs to; g0 is the intercept; g1 to g6 are the vectors of estimated coefficients; and e_{i,k,l} is a vector of error terms. All brokerage house, firm, analyst and recommendation characteristics are described above. To test Hypothesis 1 about brokerage house brands affecting competing brokerage houses, BH_{k,y} in Equation 2 takes the form of Brand_Score_{k,y} as described in 3.3.2. To test hypotheses 2, 3 and 4, we introduce simultaneously the brokerage house characteristics BH_Awareness_{k,y}, BH_Error_{k,y}, IB_{k,y} and Industry_Recognition_{k,y}. BH_Error_{k,y} and Industry_Recognition_{k,y} have been treated to remove the effect of brokerage house size, as described in Chapter 3.3.2, and show no significant correlations.

Results

Description of recommendation change magnitude

As in Chapter 3, we show the transition probabilities and magnitude of recommendation changes for our sample. The sample size is smaller than in Chapter 3 because the methodology deletes recommendation changes that are not surrounded by two recommendation changes by competitors before and two after.

Summary statistics

Table 3 reports the summary statistics for the main sample. The statistics are consistent with previous research and with Chapter 3. In line with [START_REF] Jegadeesh | Do Analysts Herd? An Analysis of Recommendations and Market Reactions[END_REF], we find that about 11% of recommendations can be qualified as information leaders and generate responses from competitors. Brokerage houses have an average awareness of 11.9 years (SD = 3.9).
The average absolute brokerage error is 3.8% (SD = 7.4%). Brokerage houses win on average 4.1 award points for industry recognition (SD = 11.9). The annual error difference between analyst and brokerage house error is 0.01 (SD = 0.08). The data for analysts, firms and recommendation changes are in line with [START_REF] Baum | Mutual Forbearance and Competition Among Security Analysts[END_REF][START_REF] Clement | Analyst forecast accuracy: Do ability, resources, and portfolio complexity matter[END_REF][START_REF] Clement | Do investors respond to analysts' forecast revisions as if forecast accuracy is all that matters?[END_REF][START_REF] Jegadeesh | Do Analysts Herd? An Analysis of Recommendations and Market Reactions[END_REF][START_REF] Loh | When Are Analyst Recommendation Changes Influential?[END_REF]. Analysts show an average of 7.1 years of experience in our sample. The average analyst boldness is 0.19; analysts cover 3.9 stocks per year and issue on average 16.7 recommendations per year. Firms in the sample have an average market capitalization of $9.2bn, 75% of firm shares are owned by institutional investors and the mean book-to-market is 0.53. Table 4 shows the correlations between the variables.

Do brokerage house brands affect competing brokerage house response to recommendation changes?

The results (see Table 5) indicate that brokerage house brand scores are not significant for upgrades. Furthermore, contrary to expectations, the brand score negatively affects competitor response to recommendation changes for downgrades, which disconfirms Hypothesis 1.

Do brokerage house brand determinants affect competitor response to recommendation changes?

We now present the hypothesis results concerning the impact of brokerage house awareness (H2), performance (H3) and reputation (H4) on the response of competing brokerage houses (see Table 6). These results come from Equation 2, where BH_{k,y} is replaced by the brokerage house brand characteristics: BH_Awareness_{k,y}, BH_Error_{k,y}, IB_{k,y} and Industry_Recognition_{k,y}. Table 7 summarizes the major findings.

Brokerage house awareness - The results indicate that brokerage house awareness has a negative influence on competing brokerage houses for upgrades and downgrades. The results disconfirm Hypothesis 2.

Brokerage house performance - Overall, the results indicate that brokerage house performance influences competitor response to brokerage house brands. The prediction concerning brokerage house error is fully supported, as we find that competing brokerage houses respond negatively and significantly to brokerage house error.

Information access: Investment Bank - As expected, we find that competitors are positively affected by the additional information access.

Brokerage house reputation - The results indicate that brokerage house reputation impacts competing brokerage house response significantly and positively for upgrades but is not significant for downgrades.

Control variables

The findings show relationships in line with the expectations for investors for the firm characteristics (book-to-market, institutional ownership and firm size), analyst boldness and analyst experience. For the other analyst and recommendation characteristics, relationships are either mixed (error difference, consensus distance) or inverse (stock coverage, recommendation frequency, recommendation change size).

Discussion

This article examines the role played by brands in the competition between brokerage houses.
Based on a sample of 30,619 stock upgrades and downgrades from 66 brokerage houses concerning 1,769 firms over 15 years, we find that competitor decisions to follow a recommendation change are significantly and negatively influenced by the brand score of the brokerage house that issues the recommendation change, in the case of downgrades. We further find that reputation and performance behave as expected but that awareness unexpectedly negatively influences competitor response.

Do brokerage house brands influence competitors?

Our results are partially as expected. Our brand score results suggest that brokerage houses are negatively influenced by competitor brands for downgrades. In other words, strong brokerage house brands diminish the perception of information leadership. Furthermore, brokerage houses that perform better, that is, brokerage houses associated with an investment bank and brokerage houses with lower past error, are more likely to be followed by their competitors. Brokerage houses with better reputations are also more likely to be followed for upgrades. Awareness, however, has an unexpectedly negative impact on leadership status. Reassuringly, competitors are positively influenced by a brokerage house's reputation and performance. However, and unlike for investors, brokerage house awareness negatively influences competitor response. We surmise that other sector-specific factors may influence competitor response, such as innovative recommendation changes that contain new information. These innovative ideas may come from low-awareness brokerage houses, which would explain the negative relationship between information leadership and awareness. Furthermore, the negative influence of awareness may explain why the brokerage house brand score, which is highly correlated with awareness (0.65), is perceived as a negative signal by competitors in our results for downgrades.

Contributions

In this article, we address how a key marketing factor, branding, impacts competing brokerage houses. In this section, we discuss the contributions of the research to marketing and to finance and accounting, practical implications and finally suggestions for future research.

Contributions to marketing

Our contributions parallel those of Chapter 3: applying the brand signal model to financial markets, using financial data instead of declarative data to measure brand equity and studying a sector understudied in marketing. Furthermore, we apply the brand signal framework [START_REF] Erdem | Brand equity as a signaling phenomenon[END_REF][START_REF] Zuo | The Informational Feedback Effect Of Stock Prices On Management Forecasts[END_REF] to assess its impact on competitors. We show that brokerage house brands act as signals that convey information and influence the response of competing brokerage houses. This responds to the call by [START_REF] Baum | Mutual Forbearance and Competition Among Security Analysts[END_REF] for research into competition between brokerage houses.

Contributions to finance and accounting

This article explores the impact of a key information intermediary, brokerage houses, and contributes to herding research. Previous research in finance and accounting shows that analysts tend to herd [START_REF] Graham | Herding among Investment Newsletters: Theory and Evidence[END_REF]. This led to the emergence of the concept of information leadership (R. A. Cooper et al., 2001b).
We complement this literature by showing that a previously understudied equity market player, the brokerage house, can also act as an information leader and that brand characteristics may influence competitors.

Implications for brokerage houses, analysts, investors and CEOs/CFOs

Our results show that brokerage house reputation and performance have a positive impact on a brokerage house's leadership status. We recommend that brokerage houses reinforce their marketing policies to build a strong reputation and consider how their policies convey the perception of a brokerage house's performance to competitors.

Limitations and future research

The same limits described in Chapter 3 apply to this extension: data limited to 2000-2014, scope limited to equity markets, and a limited number of brand characteristics taken into consideration. In this extension of Chapter 3, we calculate the brokerage house brand score from the perspective of investors, as described in Chapter 3, and then test competitor response to this score. Our results suggest that competitors weigh brand characteristics differently to investors: they place greater emphasis on performance and reputation and underweight awareness. We recommend developing a brand score measure specifically for competitors, based on the characteristics they consider important. To assess which brand characteristics may matter to competitors, we recommend undertaking qualitative research to identify criteria beyond the two this extension identifies. Finally, we hope that this research will inspire the application of other marketing research frameworks to financial markets.

Abstract

The issue of whether stock prices simply reflect expectations about future cash flows or whether stock prices convey information that influences corporate investment decisions has received a lot of attention in recent years. This paper studies whether the way the market prices a firm affects its advertising and R&D expenditure decisions. We find empirically that a stock's mispricing has a strong and negative impact on advertising and R&D expenditures. We verify the robustness of our results using three measures of mispricing. We test the effect of a moderator that should prompt managers to care more about the stock price, i.e., a firm's equity dependence. We find that equity dependence moderates the relationship between mispricing and advertising and R&D expenditures. Our results contribute to the literature on the determinants of advertising and R&D expenditures by showing that cuts in expenditures may be due to stock mispricing. We also show that stock prices may convey irrational information that affects marketing expenditures.

Key words: institutional investors, investment horizons, marketing resource allocation, top executives' compensation, advertising spending, myopic management

Introduction

Do stock prices merely reflect expectations about future cash flows? Or do stock prices have a real effect on corporate decisions, i.e., do they affect the cash flows they are supposed to reflect? These questions have received a lot of attention in financial economics (e.g., P. [START_REF] Bond | The Real Effects of Financial Markets[END_REF]).
Purely passive market prices having no effect on real decisions would be hard to reconcile with the level of attention devoted to stock market prices, the importance lavished on stock prices by managers and the press, and the role the stock market plays in many areas of the modern economy. One key reason why stock market prices may have a real effect on managerial decisions is the transmission of information [START_REF] Baumol | The stock market and economic efficiency[END_REF]. Stock prices aggregate diverse pieces of information from the various players in the stock market, and managers may use this information to guide their real decisions. Whether stock prices influence real decisions becomes important when stock prices are not informationally efficient and do not fully reflect a firm's fundamentals. We believe it is important to examine whether the market's pricing of a firm's stock influences advertising and R&D expenditures in order to better understand the factors that determine marketing investment decisions. Investments in R&D and advertising play a key role in marketing activities such as product innovation, brand building and customer satisfaction. The marketing literature has shown that R&D and advertising contribute to firm growth and value creation; however, research into the financial drivers of advertising and R&D expenditure decisions is relatively scarce. [START_REF] Joseph | Free Cash Flow, Agency Costs, and the Affordability Method of Advertising Budgeting[END_REF] consider the influence of free cash flow and agency costs on advertising expenditure. [START_REF] Mizik | Myopic Marketing Management: Evidence of the Phenomenon and Its Long-Term Performance Consequences in the SEO Context[END_REF] show that a firm may adopt myopic marketing management when undertaking a seasoned equity offering. [START_REF] Currim | You Get What You Pay For: The Effect Of Top Executives' Compensation On Advertising And R&D Spending Decisions And Stock Market Return[END_REF] show that the structure of executive compensation has a strong impact on advertising and R&D expenditures. [START_REF] Malshe | From Finance to Marketing: The Impact of Financial Leverage on Customer Satisfaction[END_REF] show that financial leverage is negatively related to customer satisfaction. Our main contribution is to study the real effect of stock prices on advertising and R&D. Specifically, we examine whether the way a firm is valued by financial markets affects its advertising and R&D expenditure decisions. In this respect, our paper is related to [START_REF] Markovitch | Using Capital Markets as Market Intelligence: Evidence from the Pharmaceutical Industry[END_REF]'s study showing that the stock underperformance and outperformance of firms in the pharmaceutical industry may affect product portfolios and distribution, and to [START_REF] Chakravarty | The Stock Market in the Driver's Seat! Implications for R&D and Marketing[END_REF]'s paper suggesting that past stock returns and volatility of firms in the high-technology industry may affect R&D and marketing budgets. Compared to these two papers, the main novelty is that we focus on the influence of a firm's mispricing, i.e., the component of a firm's valuation that is not related to fundamentals.
Growing evidence in the finance literature indicates that market prices can deviate from their fundamental values for prolonged periods of time [START_REF] Shiller | Measuring Bubble Expectations And Investor Confidence[END_REF][START_REF] Shleifer | Inefficient Markets: An Introduction to Behavioral Finance[END_REF]. This deviation of a stock's price from its fundamental value is the definition of mispricing. A vast empirical literature shows that market prices deviate from fundamentals due to elements that are unrelated to future cash flows. For instance, market prices respond to investor demand for securities [START_REF] Greenwood | Aggregate Corporate Liquidity and Stock Returns[END_REF][START_REF] Mitchell | Price Pressure around Mergers[END_REF][START_REF] Wurgler | Does Arbitrage Flatten Demand Curves For Stocks[END_REF], securities with the same fundamentals may not trade at the same price [START_REF] Froot | How are stock prices affected by the location of trade[END_REF][START_REF] Lamont | Anomalies: The Law of One Price in Financial Markets[END_REF][START_REF] Mitchell | Limited Arbitrage in Equity Markets[END_REF], and security returns are predictable in ways that are unrelated to risk [START_REF] Barberis | A survey of behavioral finance[END_REF][START_REF] Fama | Market efficiency, long-term returns, and behavioral finance[END_REF][START_REF] Fama | Disagreement, tastes, and asset prices[END_REF]. Moreover, the market cannot arbitrage away all mispricing. While, theoretically, rational investors should buy undervalued firms and sell overvalued firms so that a firm's valuation corrects toward its fundamental value, institutions and intermediaries do not always have the required capital reserves and incentives to ensure that prices reflect fundamentals [START_REF] Brunnermeier | Predatory Trading[END_REF][START_REF] Shleifer | The Limits Of Arbitrage[END_REF]. As a result, mispricing may last long enough to influence real decisions. If stock prices are not fully efficient and do not accurately reflect the fundamental value of the firm, the relative inefficiency of market prices potentially leads to an inefficiency of real decisions. Several studies have examined the effect of firm mispricing on different corporate decisions such as financing, M&A activity and investment [START_REF] Baker | When Does the Market Matter? Stock Prices and the Investment of Equity-Dependent Firms[END_REF][START_REF] Campello | Do stock prices influence corporate decisions? Evidence from the technology bubble[END_REF][START_REF] Hau | Real effects of stock underpricing[END_REF]. However, relatively little is known about whether stock mispricing has an effect on advertising and R&D expenditure decisions. Our first research question is whether the mispricing of a firm's stock affects advertising and R&D expenditure decisions.

[Insert Figure 1 about here]

Second, we are interested in whether the relation between mispricing and advertising and R&D expenditure is moderated by a firm's need for external equity to finance its investments (see Figure 1 - Conceptual Framework). The moderating effect of this variable deserves some explanation. The effect of stock mispricing on marketing investments should be stronger in firms where management is forced to focus on the firm's current stock price. The moderator may influence the extent to which managers care about the firm's current stock price enough for it to affect their investment decisions.
Mispricing is likely to matter more to firms that need external equity to finance new expenses or investments [START_REF] Baker | When Does the Market Matter? Stock Prices and the Investment of Equity-Dependent Firms[END_REF][START_REF] Stein | Rational Capital Budgeting In An Irrational World[END_REF]. To measure stock mispricing, we follow the literature on the real effects of mispricing and use three residual book-to-market variables as proxies for mispricing [START_REF] Hoberg | Product Market Synergies and Competition in Mergers and Acquisitions: A Text-Based Analysis[END_REF][START_REF] Rhodes Kropf | Valuation Waves And Merger Activity: The Empirical Evidence[END_REF]. These variables capture the difference between observed book-to-market and fundamental book-to-market, indicating when mispricing may be present. We find that all our mispricing proxies have a strong and negative impact on advertising and R&D expenditures. Our results are both statistically and economically highly significant. Furthermore, we show that the relation between mispricing and advertising and R&D expenditure is moderated by the level of a firm's equity dependence (measured using the K-Z index from [START_REF] Kaplan | Do Investment-Cash Flow Sensitivities Provide Useful Measures of Financing Constraints[END_REF]). The contributions of our paper are two-fold. First, we contribute to the literature on the determinants of advertising and R&D expenditure decisions by showing that reductions in marketing expenditures may be driven by a temporary mispricing. We add to the non-marketing drivers of R&D and advertising decisions such as CEO compensation [START_REF] Currim | You Get What You Pay For: The Effect Of Top Executives' Compensation On Advertising And R&D Spending Decisions And Stock Market Return[END_REF], managerial myopia [START_REF] Mizik | The Theory And Practice Of Myopic Management[END_REF][START_REF] Mizik | Myopic Marketing Management: Evidence of the Phenomenon and Its Long-Term Performance Consequences in the SEO Context[END_REF], analyst coverage (Chakravarty & Grewal, 2016), stock performance [START_REF] Markovitch | Using Capital Markets as Market Intelligence: Evidence from the Pharmaceutical Industry[END_REF] and previous stock returns [START_REF] Chakravarty | The Stock Market in the Driver's Seat! Implications for R&D and Marketing[END_REF]. Second, while recent papers document that stock price changes carry rational information used by managers to take marketing decisions [START_REF] Chakravarty | The Stock Market in the Driver's Seat! Implications for R&D and Marketing[END_REF][START_REF] Markovitch | Using Capital Markets as Market Intelligence: Evidence from the Pharmaceutical Industry[END_REF], this paper shows that the irrational portion of a firm's market pricing also influences marketing decisions.

Conceptual Framework and Expectations

Real effect of mispricing

Corporate investment and the stock market are positively correlated, both over time and across firms. The traditional explanation for this relationship is that stock prices reflect the marginal product of capital [START_REF] Furstenberg | Corporate Investment: Does Market Valuation Matter in the Aggregate[END_REF][START_REF] Tobin | A General Equilibrium Approach To Monetary Theory[END_REF]. [START_REF] Keynes | The General Theory Of Interest, Employment And Money[END_REF] suggests a different explanation.
He argues that stock prices contain an important element of irrationality and that the causality could be reversed, i.e., mispricing generates changes in corporate investment. Building on Keynes' insight, the finance literature postulates that the market might price firms away from their fundamental value, that the market is not able to arbitrage away the mispricing, and that managers opportunistically exploit this mispricing when taking decisions by catering to the market [START_REF] Baker | Capital Market-Driven Corporate Finance[END_REF][START_REF] Jensen | Agency Costs of Overvalued Equity[END_REF][START_REF] Stein | Rational Capital Budgeting In An Irrational World[END_REF]. So, if the market grants an irrational premium to dividend-paying firms or to low-price firms, managers respond by paying more dividends or by supplying shares at a lower price [START_REF] Baker | Capital Market-Driven Corporate Finance[END_REF][START_REF] Baker | A Catering Theory of Dividends[END_REF]. The same reasoning applies to stock over- or undervaluation. Some finance studies document a link between investment and mispricing. For example, [START_REF] Chirinko | Business Fixed Investment and "Bubbles": The Japanese Case[END_REF] used market bubbles in Japan to show that stock overpricing adversely affects business fixed investment. [START_REF] Gilchrist | Do stock price bubbles influence corporate investment[END_REF] identify the bubble component in Tobin's Q in the 1990s using the variance of analysts' earnings forecasts and find that orthogonalized shocks to dispersion have positive and statistically significant effects on Tobin's Q, net equity issuance and real investment. [START_REF] Campello | Do stock prices influence corporate decisions? Evidence from the technology bubble[END_REF] further document that high stock prices affect corporate policies (capital investment, stock issuance and cash savings) by relaxing financing constraints. They show that during the 1990s technology bubble, constrained non-tech firms' investment responded strongly to "high stock prices" by issuing more stock to finance current or future investments. Using mutual fund redemptions as an instrument for price changes, [START_REF] Edmans | The real effects of financial markets: The impact of prices on takeovers[END_REF] document that an interquartile decrease in valuation leads to a seven-percentage-point increase in acquisition likelihood, relative to a 6% unconditional takeover probability. [START_REF] Hau | Real effects of stock underpricing[END_REF] provide evidence of a causal effect of equity prices on corporate investment and employment. They use fire sales by distressed equity funds during the 2007-2009 financial crisis to identify substantial exogenous underpricing and show that firms whose stock is most underpriced have considerably lower investment and employment than industry peers not subject to any fire sale discount. Taken together, these results indicate that mispricing affects management decisions in general.

Hypothesis 1: A higher undervaluation will be associated with a decrease in the allocation to advertising (R&D) as a percentage of sales.

Equity financing dependence

We identify in the literature a relevant channel that might moderate the effect of mispricing on advertising and R&D expenditure, i.e., capital market financing needs. When a firm needs access to capital markets to finance new advertising and R&D expenses, weaker markets play a limiting role [START_REF] Baker | When Does the Market Matter?
Stock Prices and the Investment of Equity-Dependent Firms[END_REF][START_REF] Stein | Rational Capital Budgeting In An Irrational World[END_REF]. Because seasoned equity offerings are rarely used to finance investment, (Polk and Sapienza 2009) believe it is important to assess whether firms change their investment policies according to the valuation of their stock, even if they are not issuing equity to finance investments. They argue that the stronger the focus of the manager on short-term stock price appreciation, the more she will cater to the market mispricing, i.e. act according to what the market values better, even if it is not related to firm fundamentals [START_REF] Baker | Capital Market-Driven Corporate Finance[END_REF][START_REF] Baker | Limited arbitrage in mergers and acquisitions[END_REF][START_REF] Baker | A Catering Theory of Dividends[END_REF]. As argued by [START_REF] Keynes | The General Theory Of Interest, Employment And Money[END_REF], because of mispricing, the effective cost of external equity sometimes diverges from the cost of other forms of capital. This affects the pattern of equity issues and in turn corporate investment. This "equity financing channel" has been further studied by [START_REF] Blanchard | The Stock Market, Profit, and Investment[END_REF][START_REF] Bosworth | The Stock Market and the Economy[END_REF][START_REF] Fischer | Macroeconomics and finance: The role of the stock market[END_REF][START_REF] Morck | The Stock Market and Investment: Is the Market a Sideshow?[END_REF][START_REF] Stein | Rational Capital Budgeting In An Irrational World[END_REF]. [START_REF] Stein | Rational Capital Budgeting In An Irrational World[END_REF] argues that firms that are in need of external equity finance will have investment that is more sensitive to the non-fundamental component of stock prices. Intuitively, a firm with no debt and high cash can insulate its investment decisions from irrational gyrations in its stock price. But an "equity-dependent" firm needs equity to fund its marginal investments. [START_REF] Baker | When Does the Market Matter? Stock Prices and the Investment of Equity-Dependent Firms[END_REF] empirically test several implications of this financing channel. They rank firms according to a proxy for equity dependence and find that stock prices have a stronger impact on the investment of "equity-dependent" firms. The literature suggests that equity funding dependence will influence corporate investments overall.

Hypothesis 2: Equity financing dependence will moderate the effect of mispricing on advertising and R&D expenditure as a percentage of sales.

Sample and Data

Sample construction and data source

We construct our sample as follows. We begin with all publicly traded U.S. firms in CRSP and Compustat between 1980 and 2014. We keep U.S. operating firms, defined as firms with CRSP share codes of 10 or 11. We drop firms that are financials or utilities. We then restrict our sample to firms for which we have available or extrapolated data for advertising and R&D. This leaves our sample of 40,966 firm-years comprising 5,785 unique firms between 1980 and 2014. Stock trading data are from CRSP, accounting data are from Compustat, and investor portfolio data are from Thomson's 13f filings. We winsorize all continuous variables at the 1st and 99th percentiles.
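As an illustration, the sample filters and winsorization described above could be implemented along the following lines (a minimal pandas sketch, not the exact code used; the column names, the SIC ranges used to identify financials and utilities, and the input file are illustrative assumptions):

```python
import pandas as pd

def winsorize(s: pd.Series, lower=0.01, upper=0.99) -> pd.Series:
    """Clip a series at its 1st and 99th percentiles."""
    return s.clip(s.quantile(lower), s.quantile(upper))

# merged CRSP/Compustat panel, 1980-2014 (illustrative file name)
df = pd.read_parquet("crsp_compustat_1980_2014.parquet")

# keep U.S. operating firms: CRSP share codes 10 or 11
df = df[df["shrcd"].isin([10, 11])]

# drop financials (SIC 6000-6999) and utilities (SIC 4900-4949)
df = df[~df["sic"].between(6000, 6999) & ~df["sic"].between(4900, 4949)]

# winsorize all continuous variables at the 1st and 99th percentiles
continuous = ["adv_sales", "rd_sales", "leverage", "profitability",
              "sales_growth", "cash_flow", "risk", "inst_own"]
df[continuous] = df[continuous].apply(winsorize)
```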
Table 1 presents summary statistics for all of our variables.

[Insert Table 1 about here]

Advertising and R&D expenditures

Our main dependent variable is advertising expenditure as a share of sales. As in [START_REF] Malshe | From Finance to Marketing: The Impact of Financial Leverage on Customer Satisfaction[END_REF], to impute missing values of advertising in Compustat, we use a combination of the estimates used in prior literature. For each firm reporting advertising, we compute the ratio of advertising to sales, general, and administrative (SG&A) expenses each fiscal year. Next, we obtain the yearly average advertising-to-SG&A ratio for every industry and impute missing advertising as this industry ratio multiplied by the firm's SG&A. The values of our main dependent variable range between 0 and 1, giving a limited dependent variable. Following [START_REF] Currim | You Get What You Pay For: The Effect Of Top Executives' Compensation On Advertising And R&D Spending Decisions And Stock Market Return[END_REF], we perform a logit transformation on this limited dependent variable to avoid the problems associated with using it directly in a regression. We apply the same two steps to R&D expenditure as a share of sales.

Measuring Mispricing

For our purposes, we define mispricing simply as the deviation of observed stock prices from their fundamental values. Though there is no consensus on what constitutes a stock's fundamental value, the literature indicates five factors that drive it: a firm's age, whether it pays dividends, the amount of debt, the volatility of total returns and a firm's profitability as measured by return on equity. To estimate a firm's mispricing, our models first capture the effect of these factors on a stock's price. Once these factors are accounted for, what is left is a firm's mispricing, i.e., the difference between observed book-to-market and fundamental book-to-market. The three proxies we use in this paper differ in their specification of fundamental book-to-market. For our first proxy (Mispricing_PV), we follow (Pástor and Pietro Veronesi 2003a): each year, we regress book-to-market on age, dividend payer status, leverage, total return volatility, and return on equity. We use the residuals from these regressions as our first mispricing proxy. For our second proxy (Mispricing_RK), we follow [START_REF] Rhodes Kropf | Valuation Waves And Merger Activity: The Empirical Evidence[END_REF]: each year and for each industry, we regress book-to-market on size, return on equity, return on equity if negative, and leverage. We use the residuals from these regressions as our second mispricing proxy. For our third mispricing proxy (Mispricing_HP), we follow [START_REF] Hoberg | Product Market Synergies and Competition in Mergers and Acquisitions: A Text-Based Analysis[END_REF], in that we use the same specification as (Pástor and Pietro Veronesi 2003a) but run the regressions by year and industry like [START_REF] Rhodes Kropf | Valuation Waves And Merger Activity: The Empirical Evidence[END_REF]. We use the residuals from these regressions as our third mispricing proxy. To ensure our results are robust, we add as a control a fourth proxy for mispricing, the raw book-to-market ratio.
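As an illustration, the residual book-to-market proxies can be obtained from yearly (or year-industry) cross-sectional regressions, keeping the residuals as the mispricing estimate. A minimal sketch with statsmodels, assuming a firm-year panel df containing the listed characteristics (column names are illustrative assumptions):

```python
import statsmodels.formula.api as smf

def residual_btm(group, formula):
    """Cross-sectional regression of book-to-market on fundamentals;
    the residuals serve as the mispricing estimate for each firm-year."""
    return smf.ols(formula, data=group).fit().resid

pv_formula = "btm ~ age + div_payer + leverage + volatility + roe"

# Mispricing_PV: one cross-sectional regression per year
df["mispricing_pv"] = (
    df.groupby("year", group_keys=False).apply(residual_btm, pv_formula)
)

# Mispricing_HP: same specification, estimated by year and industry
df["mispricing_hp"] = (
    df.groupby(["year", "industry"], group_keys=False)
      .apply(residual_btm, pv_formula)
)
```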
Equity financing dependence

A good measure of equity financing dependence should capture a combination of frictions that makes certain firms more reliant on outside equity financing at the margin. Standard corporate-finance considerations suggest that equity-dependent firms will tend to be young, and to have high leverage, low cash balances and cash flows, high cash flow volatility (and hence low incremental debt capacity), and strong investment opportunities [START_REF] Baker | When Does the Market Matter? Stock Prices and the Investment of Equity-Dependent Firms[END_REF]. The measure that satisfies most of these criteria is an index based on the work of [START_REF] Kaplan | Do Investment-Cash Flow Sensitivities Provide Useful Measures of Financing Constraints[END_REF], who carried out an in-depth study of the financial constraints faced by a sample of 49 low-dividend manufacturing firms. Using both subjective and objective criteria, they rank these firms on an ordinal scale, from least to most obviously constrained. Most useful for our purposes, they then estimate an ordered logit regression that relates their qualitative ranking to five Compustat variables. This regression attaches positive weight to Q and leverage, and negative weight to operating cash flow, cash balances, and dividends. The parameters of this regression allow the creation of a synthetic "KZ index" of financial constraints for a broader sample of firms. One disadvantage of this index is that the model's concept of equity dependence requires a proxy for investment opportunities that is distinct from mispricing. Of the five variables in the index, both low dividends and high values of Q can be thought of as proxies for strong investment prospects. However, Q will also contain information about mispricing. This dual role for Q is problematic. In light of this ambiguity, like in [START_REF] Baker | When Does the Market Matter? Stock Prices and the Investment of Equity-Dependent Firms[END_REF] we use the four-variable version of the KZ index that excludes Q:

KZ_it = -1.002 CF_it/A_it-1 - 39.368 DIV_it/A_it-1 - 1.315 C_it/A_it-1 + 3.139 LEV_it

where CF_it/A_it-1 is cash flow over lagged assets; DIV_it/A_it-1 is cash dividends over lagged assets; C_it/A_it-1 is cash balances over lagged assets; and LEV_it is leverage.
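A one-line sketch of the resulting index (the coefficients are those of the standard four-variable specification above; the column names are illustrative assumptions):

```python
# Four-variable KZ index (higher values = more equity-dependent);
# inputs are scaled by lagged total assets as in the definition above.
df["kz_index"] = (
    -1.002 * df["cash_flow"] / df["assets_lag"]
    - 39.368 * df["dividends"] / df["assets_lag"]
    - 1.315 * df["cash"] / df["assets_lag"]
    + 3.139 * df["leverage"]
)
```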
Control variables

We control for debt because the amount of debt on the balance sheet may affect a firm's expenditures on advertising [START_REF] Malshe | From Finance to Marketing: The Impact of Financial Leverage on Customer Satisfaction[END_REF]. [START_REF] Chakravarty | The Stock Market in the Driver's Seat! Implications for R&D and Marketing[END_REF] show that idiosyncratic risk may prompt management to alter marketing budgets. Market share is relevant as a control variable because higher market share makes managers less concerned about meeting short-term targets, letting them focus instead on long-term performance drivers such as marketing expenditures [START_REF] Eberhart | An Examination of Long-Term Abnormal Stock Returns and Operating Performance Following R&D Increases[END_REF]. Companies may opt for higher advertising expenditures to push up profit margins, so we control for profit margins [START_REF] Andras | Advertising intensity and R&D intensity: Differences across industries and their impact on firm's performance[END_REF]. [START_REF] Dekimpe | The Persistence of Marketing Effects on Sales[END_REF] suggest that a portion of a firm's sales growth may be driven by advertising expenditures, so we control for this variable. We control for institutional ownership because institutional owners may have a short-term focus on firms with current earnings [START_REF] Bushee | The Influence of Institutional Investors on Myopic R&D Investment Behavior[END_REF]. We control for firm size because expenditures on advertising and R&D may differ for firms of different sizes. We control for cash flow because the amount of cash a company generates may influence manager decisions about advertising and R&D expenditures.

Empirical methodology

The following model regresses the advertising share of sales for firm i at time t (ADVSALE_it):

ADVSALE_it = β0 + β1 Mispricing_it-1 + β2 Size_it-1 + β3 Leverage_it-1 + β4 Market_share_it-1 + β5 Profitability_it-1 + β6 Sales_growth_it-1 + β7 Inst_own_it-1 + β8 Risk_it-1 + β9 Cash_flow_it-1 + ε_it    (1)

where Mispricing_it-1 is our indicator of mispricing and the control variables are Size_it-1 (log of total assets), Leverage_it-1 (total debt over total assets), Market_share_it-1 (percentage of sales of the industry), Profitability_it-1 (EBITDA over total assets), Sales_growth_it-1 (increase in sales relative to the preceding year), Inst_own_it-1 (institutional ownership), Cash_flow_it-1 (cash flow over lagged assets) and Risk_it-1 (stock average daily volatility). We control for industry and year unobservable heterogeneity with industry and year fixed effects. We further control for missing R&D and advertising variables using dummies. All of our independent variables are lagged by one year to alleviate endogeneity concerns. We use the same model to regress R&D as a share of sales. We estimate the model parameters by means of OLS regressions. In all specifications, standard errors are robust to heteroscedasticity and clustered by firm.
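A minimal sketch of this estimation with statsmodels (industry and year fixed effects entered as dummies, standard errors clustered by firm; variable and column names are illustrative assumptions, not the exact code used):

```python
import statsmodels.formula.api as smf

cols = ["advsale", "mispricing_lag", "size_lag", "leverage_lag",
        "market_share_lag", "profitability_lag", "sales_growth_lag",
        "inst_own_lag", "risk_lag", "cash_flow_lag",
        "industry", "year", "firm_id"]
reg = df[cols].dropna()

formula = ("advsale ~ mispricing_lag + size_lag + leverage_lag"
           " + market_share_lag + profitability_lag + sales_growth_lag"
           " + inst_own_lag + risk_lag + cash_flow_lag"
           " + C(industry) + C(year)")  # industry and year fixed effects

# OLS with standard errors robust to heteroskedasticity, clustered by firm
fit = smf.ols(formula, data=reg).fit(
    cov_type="cluster", cov_kwds={"groups": reg["firm_id"]})
print(fit.summary())
```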
The moderation effect of equity financing dependence on the relationship between stock mispricing and marketing expenditures

To test the moderation effect of equity financing dependence on the relationship between stock mispricing and advertising expenditures, we introduce the KZ score variable into our regressions.

[Insert Table 2 about here]

Results

The results of the main model estimations are presented in Table 2. We begin with the advertising share of sales model (Column 1). Our first main result is that the effect of mispricing on advertising expenditure as a share of sales is negative. This effect is statistically very significant (p<.01). We comment on the control variables for the first specification only. The coefficient of leverage is negative and highly significant, in line with [START_REF] Malshe | From Finance to Marketing: The Impact of Financial Leverage on Customer Satisfaction[END_REF]. Furthermore, company profitability has a significant impact on advertising, reflecting the link between advertising and company profitability. In addition, institutional ownership is positively related to advertising, perhaps reflecting the maturity of firms held by institutional owners. The dummy for replaced advertising values is significant whereas it is not significant for replaced R&D values. Cash flow is negatively related to advertising and not significant for R&D. The other control variable coefficients are not significant. We now turn to the results of the R&D as a share of sales model (Column 2). Our second main result is that the effect of mispricing on R&D expenditure as a share of sales is also negative (p<.01). Consequently, Hypothesis 1 is supported. As reported in Table 3, we perform a series of robustness tests using three proxies of mispricing. In all specifications, we find similar results.

[Insert Table 3 about here]

We now look at whether equity financing dependence moderates the effect of mispricing on advertising and R&D expenditures as a share of sales. We find that the coefficient is statistically significant, indicating that equity financing dependence moderates the relationship between mispricing and advertising and R&D expenditures as a percentage of sales. This result is consistent with undervaluation negatively affecting advertising and R&D expenditures because the firm is dependent on capital markets to finance new projects and maintain its marketing budgets.

Discussion

This paper examines the effect of stock mispricing on marketing expenditures. Our main result suggests that stock mispricing affects advertising and R&D expenditures (H1). If a firm aims to maximize the long-term returns of shareholders by investing in marketing to help build brands, customer loyalty and customer satisfaction, then a firm's investors should be aware that stock mispricing may negatively affect marketing expenditures. The negative effect is driven by management opportunistically catering to stock mispricing. Investors should monitor managers to ensure that they are not behaving myopically. Our second result is that managers care more about stock mispricing in the presence of equity financing dependence. This result has implications both for marketing and for marketing's relations with investors. The effect on marketing is that the focus on stock prices by the press and top management is of concern to marketers because of its negative effects on corporate marketing expenditures. To reduce the impact, marketing needs to communicate better with investors to help reduce non-fundamental information in the stock price. As far as we know, the findings on the impact of stock mispricing on marketing expenditures are new in the marketing literature. Information in the stock price has become an important topic for marketing in recent years [START_REF] Chakravarty | The Stock Market in the Driver's Seat! Implications for R&D and Marketing[END_REF][START_REF] Markovitch | Using Capital Markets as Market Intelligence: Evidence from the Pharmaceutical Industry[END_REF], as research has sought to understand how information in share prices affects marketing's ability to create value. Our results shed light on an important determinant of marketing expenditures and enhance our understanding of the impact of stock price information, which should not be considered irrelevant to marketers. The study has some limitations. There are different reasons for stocks to be mispriced. It could be interesting to see if some reasons affect marketing expenditures more than others, such as dividend payers versus supplying shares to investors at lower prices.

Notes: This table reports the results of our robustness checks. We check the robustness of our mispricing measure shown in column 1 of Table 2 by using two other mispricing measures from the literature and book-to-market. The dependent variable in all four regressions is advertising as a percentage of sales, as in column 1 of Table 2. We control for size, leverage, profitability, market share, sales growth, risk, institutional ownership, replaced advertising values and cash flow. All regressions include year dummy variables. We control for industry unobservable heterogeneity by adding industry fixed effects. Standard errors are robust to heteroskedasticity and clustered by industry. Constants are not reported. Robust standard errors are shown in parentheses. Column 1 shows the results of using the mispricing measure of [START_REF] Pástor | Stock Valuation and Learning about Profitability[END_REF].
Column 2 shows the results of using the mispricing measure of [START_REF] Rhodes Kropf | Valuation Waves And Merger Activity: The Empirical Evidence[END_REF]. Column 3 shows the same results as in column 1 of Table 2 to facilitate comparisons. Column 4 shows the results of using the raw book-to-market as a proxy measure. ***, **, * indicate significance at 1, 5, and 10% respectively. For more detailed information on variables see the Appendix.

Notes: This table presents panel-data regressions of the moderating effect of equity financing dependence on the relation between marketing expenditures and our mispricing measure. We also include all control variables. The main dependent variable in column 1 is advertising as a percentage of sales. The main dependent variable in column 2 is R&D as a percentage of sales. The main independent variable in column 1 and column 2, our measure of mispricing, is based on the mispricing measure of [START_REF] Hoberg | Product Market Synergies and Competition in Mergers and Acquisitions: A Text-Based Analysis[END_REF]. The moderating effect is tested using the KZ Score [START_REF] Kaplan | Do Investment-Cash Flow Sensitivities Provide Useful Measures of Financing Constraints[END_REF].

Firms that do not report R&D tend to be smaller than other firms, and there may be other differences. To ensure our results are robust, we rerun the Table 2 regressions with an indicator variable set to one for firms with non-missing advertising or R&D. The overall results for regression 1 are very similar. The indicator variable for the advertising regression (regression 1) is negative and significant (p < 0.001) whereas the indicator variable is not significant (p = 0.21) for the R&D regression (regression 2).

6 - INVESTOR HORIZONS AND MARKETING INVESTMENTS

6.1 Introduction

Myopic management, the practice of overemphasizing short-term goals at the expense of long-term strategy and performance [START_REF] Stein | Efficient Capital Markets, Inefficient Firms: A Model of Myopic Corporate Behavior[END_REF], has become a growing concern among companies and marketing researchers. It poses an important challenge to marketers because underinvesting in marketing expenditures is one of the main effects of myopic management, which means lower long-term firm performance (S. Srinivasan & Hanssens, 2009). [START_REF] Graham | The Economic Implications Of Corporate Financial Reporting[END_REF] document that decreasing discretionary marketing expenditures, such as advertising and R&D, is the preferred option for 80% of CFOs at firms that seem unlikely to meet their short-term earnings target. Top executives often consider marketing expenditure as discretionary, an adjustment tool that can be used both to boost short-term performance [START_REF] Deleersnyder | The Role of National Culture in Advertising's Sensitivity to Business Cycles: An Investigation across Continents[END_REF] and to ensure, for example, that they reach their earnings guidance or meet analyst earnings forecasts. A key problem with these cuts in marketing expenditures is that they generally imply serious distortions in investment decisions away from the maximization of net present value and value creation (S. Srinivasan & Hanssens, 2009).
This may be because the returns on marketing expenditures are not entirely predictable, since they represent investments in intangible marketing assets [START_REF] Graham | The Economic Implications Of Corporate Financial Reporting[END_REF] whose effects on firm value have not been studied enough [START_REF] Rust | Measuring Marketing Productivity: Current Knowledge and Future Directions[END_REF]. Furthermore, accounting regulations do not allow the capitalization of these intangible values on the balance sheet. Cutting marketing expenditures hampers new product development, future product sales and brand building, which are all strong determinants of a firm's comparative advantage and long-term business performance [START_REF] Krasnikov | The Relative Impact Of Marketing, Research-And-Development, And Operations Capabilities On Firm Performance[END_REF] (S. Srinivasan & Hanssens, 2009). In an insightful paper, [START_REF] Mizik | The Theory And Practice Of Myopic Management[END_REF] documents the long-term consequences of myopic management. The paper shows that cutting marketing expenditures boosts short-term performance indicators but has a long-term net negative impact on firm value. Recent research in the marketing field has highlighted the existence of myopic management in marketing strategy and its short-term and long-term effects on firm performance [START_REF] Chapman | An investigation of earnings management through marketing actions[END_REF][START_REF] Mizik | The Theory And Practice Of Myopic Management[END_REF][START_REF] Mizik | Myopic Marketing Management: Evidence of the Phenomenon and Its Long-Term Performance Consequences in the SEO Context[END_REF]. This stream of research has focused on understanding what prompts firms to adopt myopic management behavior. [START_REF] Mizik | Myopic Marketing Management: Evidence of the Phenomenon and Its Long-Term Performance Consequences in the SEO Context[END_REF] show that seasoned equity offerings can affect marketing spending. [START_REF] Malshe | From Finance to Marketing: The Impact of Financial Leverage on Customer Satisfaction[END_REF] show that higher debt impacts marketing outcomes and firm value. (Chakravarty & Grewal, 2016) look at the role analyst earnings forecasts play in management's decisions concerning marketing expenditures. Our paper focuses on another potential determinant of myopic management behavior: the characteristics of a firm's ownership and their link to myopic management of marketing expenditures. We have two research questions. First, does the investment horizon of a firm's shareholders influence marketing expenditures? Second, does the influence of investor horizon persist when we take into account executive compensation and blockholders? We find that the investment horizon of shareholders is closely related to the practice of cutting marketing expenditures. More precisely, firms with a higher (lower) turnover percentage of institutional investors are more (less) likely to cut marketing expenditures. To ensure our results are not driven by our measure of investor horizon, we use three measures of investor turnover identified in the finance and accounting literature. We find that an increase in the percentage of investor turnover has a strong positive impact on the presence of myopic management of marketing expenditures. Our investor turnover proxies are lagged to alleviate endogeneity concerns.
We further use three definitions of marketing expenditure to ensure that the definition of the dependent variable is not driving our results. One concern is the direction of the causality, in that investors may choose firms with myopic management instead of myopic management being the product of investor turnover. We therefore test the direction of the causality between marketing expenditures and investor turnover using a panel vector autoregressive model (e.g. [START_REF] Holtz-Eakin | Estimating Vector Autoregressions with Panel Data[END_REF]). The results indicate that investor turnover drives marketing expenditures and not vice versa. We then use sample splits to test the impact of two other factors that the literature has identified as alleviating the impact of a high proportion of short-term investors: managerial compensation and blockholders. We study managerial incentives by assessing the link between the structure of executive compensation and myopic management. [START_REF] Currim | You Get What You Pay For: The Effect Of Top Executives' Compensation On Advertising And R&D Spending Decisions And Stock Market Return[END_REF] show that the horizon of CEO compensation measured by the equity/bonus ratio is an important determinant of advertising and R&D spending, and finance research has shown that well-designed top-management compensation reduces agency costs. [START_REF] Bergstresser | CEO incentives and earnings management[END_REF] use the incentive ratio to test whether the use of discretionary accruals to manage earnings is higher at firms where CEO compensation is more closely linked to the value of stock and option holdings. We use both measures to examine the effect of investor horizons on marketing expenditure. We find that the negative effect of short-term investors on marketing expenditures is concentrated in firms where managers are not already incentivized to adopt myopic behavior. This finding indicates that even if managerial incentives are properly designed, myopic management may still arise as a consequence of the short-term horizon and preferences of some shareholders. We study the impact of blockholders because financial research has shown that blockholders can mitigate myopic management behavior and are a possible solution to agency problems [START_REF] Shleifer | The Limits Of Arbitrage[END_REF]. We find that the presence of blockholders mitigates the effect of short-term investors on marketing expenditures. This finding indicates that the presence of long-term shareholders may be a desirable outcome for the long-term performance of firms. Our findings have the potential to benefit marketing managers, firms, shareholders and consumers. Our results show that the existence of myopic management is not necessarily a symptom of weak governance or agency conflicts between managers and shareholders but may reflect the short-term orientation of some shareholders. Our results have important implications for executives and marketers concerning the solutions to curb myopic management of resources.
Indeed, in line with theories of short-termist behavior by managers in financial economics [START_REF] Stein | Efficient Capital Markets, Inefficient Firms: A Model of Myopic Corporate Behavior[END_REF], myopic marketing management has been viewed as an opportunistic behavior that arises against the wishes and interests of shareholders [START_REF] Currim | You Get What You Pay For: The Effect Of Top Executives' Compensation On Advertising And R&D Spending Decisions And Stock Market Return[END_REF][START_REF] Mizik | The Theory And Practice Of Myopic Management[END_REF]. This view of myopic management has led researchers to consider its mitigation principally through the lens of agency theory and to propose solutions such as providing long-term incentives to managers [START_REF] Currim | You Get What You Pay For: The Effect Of Top Executives' Compensation On Advertising And R&D Spending Decisions And Stock Market Return[END_REF]. Furthermore, our research identifies a new boundary condition to help shareholders identify determinants of myopic management of marketing expenditures (Chakravarty & Grewal, 2016). Our results support the idea that changing the structure of executive compensation is certainly desirable. However, our results concerning the effect of blockholders also indicate that myopic management may very well persist even if manager incentives are better aligned with shareholder interests, but could be mitigated by favoring the creation of a committed base of long-term shareholders. This long-term shareholder base should ensure that the presence of short-term shareholders does not lead to unproductive cuts in marketing expenditures, which in turn reduce the contribution of marketing expenditures to the firm's long-term performance (S. Srinivasan & Hanssens, 2009). Better long-term firm performance would ensure that managers generate better long-term stock returns, that they face less pressure to use marketing expenditures counterproductively, and that consumers benefit from an economy built on strong long-term firm performance. From this perspective, initiatives to reward long-term investors, such as the loyalty-shares proposed by [START_REF] Bolton | Loyalty-Shares: Rewarding Long-term Investors[END_REF], or encouraging other types of ownership such as blockholders [START_REF] Edmans | Blockholder trading, market efficiency, and managerial myopia[END_REF], may be of interest to offset myopic management of marketing expenditures. The rest of this chapter is organized as follows. In the next section, we review the literature on investor horizons and their potential impact on marketing expenditures, develop our theoretical framework and set out our hypotheses. We then present the empirical methodology and describe our data, present our empirical results, and conclude with a discussion of their implications.

6.2 Theoretical Framework and Hypothesis Development

6.2.1 Institutional Investor Investment Horizons

The theory and the practical consequences of myopic management have received a lot of attention in recent marketing literature [START_REF] Currim | You Get What You Pay For: The Effect Of Top Executives' Compensation On Advertising And R&D Spending Decisions And Stock Market Return[END_REF][START_REF] Mizik | The Theory And Practice Of Myopic Management[END_REF].
[START_REF] Mizik | The Theory And Practice Of Myopic Management[END_REF] establishes that abnormally cutting marketing and R&D expenses has a negative impact on firm value over the long term. Given the well-documented existence and severe consequences of myopic management, we believe it is worthwhile to go one step further and assess factors likely to influence the existence and emergence of myopic management. We focus our literature review on how the nature of a firm's ownership has been shown to affect management decisions. We then look at the role of one potential determinant, the investment horizon of institutional investors. Institutional investors are by far the largest owners of US firms, with average total institutional holdings at 75% as of 2009 [START_REF] Bena | Are foreign investors locusts? The long-term effects of foreign institutional ownership[END_REF], reflecting strong growth over the past 50 years from just 16% in 1965 (Useem, 1996). Their large holdings give them a key role in the governance of firms where they are present. This strong growth is accompanied by considerable heterogeneity, with institutional investors differing in terms of trading frequency, competitive pressures, fiduciary responsibility and investment style. Historically, researchers classified institutions by their legal type. This classification by legal type, however, masks considerable differences in terms of investment horizons and sensitivity to short-term earnings news (Bushee 2004). Institutional investors have different investment horizons, some of them being more focused on the short-term stock performance of firms in their portfolios. This focus stems from a variety of factors. The first key determinant of investment horizon is the nature of an institution's liabilities and funding, which can drive short-term investing. For example, in an open-ended fund, redemptions are carried out upon request, which means that a fund manager might be required to liquidate a position at short notice. The fund manager is therefore reluctant to take long-term positions because he might be forced to sell at any point in time, even when the share price is very low. Second, the investment horizon of an institution's clients affects its own investment horizon. [START_REF] Cella | Investors' Horizons and the Amplification of Market Shocks[END_REF] underline that the more fund flows are sensitive to fund performance, the more the fund manager turns over the portfolio. Third, even if clients have a long-term horizon, institutional investors' performance evaluation and remuneration practices might shorten their money managers' investment horizons (short evaluation period, short maturity of remuneration). [START_REF] Cella | Investors' Horizons and the Amplification of Market Shocks[END_REF] document that funds with a greater share of long-term remuneration present significantly lower turnover ratios. Fourth, if the fund manager expects to stay in post for just a few years, he might also overly concentrate on the short-term performance of the fund. Fifth, different investment horizons might also simply result from different trading strategies. Whereas investing in value firms is considered to be more long-term oriented, momentum investing is short-term oriented [START_REF] Warren | Long-Term Investing: What Determines Investment Horizon?[END_REF]. Other factors may also play a role in prompting investors to favor a short-term or long-term investment horizon.
They include broker incentives to change recommendations regularly, the trend towards portfolio management based on hedging and diversification, fiduciary responsibilities based on quarterly returns, the fair-value valuation of investor assets, and new technologies that lower transaction costs and accelerate the response of investors to news, all of which have contributed to shortening institutional investors' investment horizons [START_REF] Porter | Capital Choices: The Causes And Cures Of Business Myopia[END_REF].

6.2.2 The impact of differing investor horizons

Differences in investor horizons may matter for markets. Short-term investors have been associated with some market inefficiencies. Short-term investors might drive a firm's stock price away from its fundamental value either because they herd on the same irrelevant information [START_REF] Froot | Herd on the street: Informational inefficiencies in a market with short-term speculation[END_REF] or because they focus on earnings, near-term cash flows, relative value and technical analysis rather than long-term discounted cash flow analysis [START_REF] Bushee | The Influence of Institutional Investors on Myopic R&D Investment Behavior[END_REF][START_REF] Rappaport | The Economics of Short-Term Performance Obsession[END_REF]. Further, stocks held by short-term investors suffer larger declines during financial crises [START_REF] Cella | Investors' Horizons and the Amplification of Market Shocks[END_REF]. Long-term institutional ownership, however, has a stabilizing role (e.g. contrarian strategies). Differences in investor investment horizons matter for corporate behavior too. [START_REF] Bushee | The Influence of Institutional Investors on Myopic R&D Investment Behavior[END_REF] finds that firms with transient institutional investors (defined as having high portfolio turnover and following momentum trading strategies) reduce R&D expenditures to increase short-term earnings. [START_REF] Koh | Institutional Investor Type, Earnings Management And Benchmark Beaters[END_REF] documents that transient institutional ownership is associated with aggressive earnings management. The level of transient institutional ownership is also positively related to the likelihood and magnitude of financial restatements such as misreporting [START_REF] Burns | Institutional ownership and monitoring: Evidence from financial misreporting[END_REF]. (Brochet, Loumioti, and Serafeim 2013) report a positive association between high portfolio turnover of institutional investors and a proxy for short-termism present in managerial discourse.

6.2.3 The effect of institutional investors on corporate policies

[START_REF] Markovitch | Using Capital Markets as Market Intelligence: Evidence from the Pharmaceutical Industry[END_REF] highlight how changes in firm stock prices in the pharmaceutical industry may influence management decisions, for instance when firms are undervalued. A related stream of research highlights the impact of executive compensation on marketing expenditures and stock market returns. Agency theory [START_REF] Jensen | Theory of the firm: managerial behavior, agency costs, and ownership structure[END_REF] posits that management interests are not always aligned with shareholder interests. Executive compensation can be structured to better align management interests with those of shareholders and thus generate long-term value for shareholders.
Executive compensation should therefore be designed so that its long-term components (stock and options) outweigh its short-term components (bonus).

Hypotheses

When the investment horizons of a firm's shareholders differ, managers face the dilemma of whether they should strive to please shareholders with long-term or short-term horizons [START_REF] Froot | Herd on the street: Informational inefficiencies in a market with short-term speculation[END_REF]. To illustrate this point, let us consider a simplified example wherein a firm manager wants to maximize the firm's market value. If the firm's shareholders are exclusively composed of long-term oriented investors, the firm's stock price reflects the fundamental value of the firm. In this situation, launching a new project that requires large marketing expenditures now but that will generate large cash flows (additional sales, new products) at some point in the future will positively impact the firm's stock price by an amount equal to the net present value of the expected cash flows generated by the project. Under these circumstances, behaving myopically (in our case, abnormally cutting long-term marketing expenditures) is counterproductive for a manager because it would negatively impact the firm's market value and thus the share price. However, if the firm's ownership is exclusively composed of short-term investors, a manager might increase firm market value by behaving myopically. Short-term shareholders might cause firm stock price changes unrelated to fundamental news (unrelated to changes in expected project cash flows). For instance, if reported earnings are above analyst forecasts, short-term oriented investors might buy additional shares, which will push up the firm's market value even though the fundamental value of the firm has not changed. Under these circumstances, to maximize the share price in the short term, a manager might engage in myopic management, pushing up current earnings and the stock price (at the expense of marketing expenditures and long-term performance). Given this reasoning, we expect firms with a mainly short-term oriented ownership to encourage myopic management. Although short-term institutional investors know that the overall impact of myopic management on the firm's value might be negative over the long run, they benefit from the short-term stock price increase that myopic behavior causes and they will not face the ensuing long-term loss in firm value. An alternative explanation is that short-term oriented investors do not actively encourage myopic management but are less able to prevent it than long-term oriented investors because they have less information and knowledge, and a shorter investment timeframe with which to monitor managers. In this paper, we hypothesize that shorter investor horizons generate myopic management of marketing resources. Marketing expenditures both reduce reported earnings and take time to materialize into higher expected cash flows, and thus are not immediately fully priced by the market. As such, marketing expenses are likely to suffer from myopic managerial decisions aiming to inflate current earnings and the firm's stock price over the short run.
Hypothesis 1: Investor turnover encourages myopic management of marketing resources.

Yet the pressure from short-term investors to boost the firm's short-term stock price and to overemphasize short-term earnings performance might be counterbalanced by the lengthier investment horizons of other key stakeholders of the firm. Our second hypothesis is that the presence of institutional blockholders mitigates the effect of investor turnover. Blockholders are more likely than other shareholders to oppose myopic management decisions. They might use their informational edge to infer the detrimental consequences for long-term firm value of a myopic management of marketing resources today. Furthermore, they have the means to make management change its behavior, either through voice [START_REF] Baker | When Does the Market Matter? Stock Prices and the Investment of Equity-Dependent Firms[END_REF] or the threat of exit [START_REF] Barber | The behavior of mutual fund investors[END_REF][START_REF] Derrien | Investor Horizons and Corporate Policies[END_REF][START_REF] Gaspar | Payout Policy Choices and Shareholder Investment Horizons[END_REF][START_REF] Hotchkiss | Does Shareholder Composition Matter? Evidence from the Market Reaction to Corporate Earnings Announcements[END_REF]. Formally, we predict that the presence of blockholders mitigates the negative effect of investor turnover on marketing expenditures.

Hypothesis 2: The presence of blockholders mitigates the effect of investor turnover on marketing expenditures.

We expect that investor turnover does not encourage myopic management of marketing resources in firms where managerial compensation already overemphasizes short-term results in the sense of [START_REF] Currim | You Get What You Pay For: The Effect Of Top Executives' Compensation On Advertising And R&D Spending Decisions And Stock Market Return[END_REF], i.e., a higher share of cash bonus compensation relative to equity compensation. In this case, the manager already has an important incentive to behave myopically because he directly profits from higher short-term results through higher cash bonuses, and he is less reluctant to take myopic decisions because a lower share of his compensation depends on the long-run firm market value. For robustness, we also test the effect using the incentive ratio measure of [START_REF] Bergstresser | CEO incentives and earnings management[END_REF].

Hypothesis 3: The negative effect of investor turnover on marketing expenditures should be concentrated in firms where manager short-term compensation relative to long-term compensation is relatively low.

6.3 Empirical methodology

6.3.1 Measuring marketing expenditures

Our main dependent variable is advertising spending as a share of total assets. As in [START_REF] Currim | You Get What You Pay For: The Effect Of Top Executives' Compensation On Advertising And R&D Spending Decisions And Stock Market Return[END_REF], we perform a logit transformation on this limited dependent variable to avoid the problems associated with using it directly in a regression (see Appendix for more details). For robustness, we use two alternative measures to proxy for marketing expenditures. The first is advertising spending as a share of total sales, to ensure that our scaling factor is not influencing our results. The second is marketing expenditures, defined as (SG&A - R&D) as a share of total assets, as used by [START_REF] Mizik | The Theory And Practice Of Myopic Management[END_REF].
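For reference, the logit transformation applied to a ratio y bounded between 0 and 1 takes the standard form below (a sketch; the exact handling of boundary values is described in the Appendix):

logit(y_it) = ln( y_it / (1 - y_it) ),   y_it in (0, 1)

The transformation maps the bounded ratio onto the real line, so that OLS can be applied without the fitted values violating the variable's natural bounds.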
6.3.2 Measuring an Institutional Investor's Investment Horizon

We measure an investor's investment horizon using its portfolio turnover [START_REF] Derrien | Investor Horizons and Corporate Policies[END_REF]. For each investor in our sample, we measure its average turnover, the fraction of its portfolio that has been sold on average over the last 12 quarters. To compute the average turnover, we calculate, for each investor j, each quarter t, and each firm i, the fraction of the shares of i held by j at date t-12 (three years previously) that has been sold by date t. We then weight this reduction in the position in firm i by the weight of i's stock in j's portfolio taken at t-12, and sum it over all the firms held in j's portfolio as of t-12:

Turnover_j,t = Σ_{i=1}^{N} [ (SharesHeld_i,t-12 - SharesHeld_i,t) / SharesHeld_i,t-12 ] × w_i,t-12

where Turnover_j,t is the turnover ratio of investor j at quarter t; SharesHeld_i,t is the number of shares of firm i held in investor j's portfolio at quarter t; and w_i,t-12 is the weight of the shares of firm i in investor j's portfolio at quarter t-12. Each term is set to zero if the change in shares held does not correspond to a reduction in the position. N stands for the number of firms in the portfolio of investor j. To reduce the influence of one quarter with a high turnover, we compute for investor j its mean portfolio turnover over the previous four quarters (from t to t-3). This measure lies between 0 and 1:

ATurnover_j,t = (1/4) Σ_{k=0}^{3} Turnover_j,t-k

At the firm level, we aggregate the horizons of institutional investors, weighted by the institutional investors' shares, and obtain the firm's investor turnover. For robustness, we also use two other measures of investor horizon. The first alternative measure is the investor churn ratio, which is similar to the turnover measure we use as our main proxy for investor horizon but also includes buys as well as sales of stocks in the calculation of investor turnover [START_REF] Gaspar | Shareholder investment horizons and the market for corporate control[END_REF]. Our second alternative measure is the classification of [START_REF] Bushee | The Influence of Institutional Investors on Myopic R&D Investment Behavior[END_REF], which computes the percentage of a firm's institutional investors who are transient investors.
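A minimal pandas sketch of this turnover measure, assuming a long-format 13f holdings table with one row per (investor, quarter, firm), consecutive quarters per position, and a row kept (with zero shares) when a position is fully exited; all column names are illustrative assumptions:

```python
import pandas as pd

def investor_turnover(holdings: pd.DataFrame) -> pd.DataFrame:
    """Sell-side portfolio turnover per investor-quarter, following the
    definition above (positions sold over a 12-quarter window, weighted
    by the position's portfolio weight 12 quarters earlier)."""
    h = holdings.sort_values(["investor", "firm", "quarter"]).copy()
    grp = h.groupby(["investor", "firm"])
    h["shares_lag12"] = grp["shares"].shift(12)
    h["weight_lag12"] = grp["weight"].shift(12)

    # fraction of the t-12 position sold by t; zero if not a reduction
    sold = (h["shares_lag12"] - h["shares"]) / h["shares_lag12"]
    h["term"] = sold.clip(lower=0) * h["weight_lag12"]

    out = (h.groupby(["investor", "quarter"])["term"]
             .sum().rename("turnover").reset_index())
    # average over the previous four quarters to damp one-off spikes
    out["aturnover"] = (out.groupby("investor")["turnover"]
                           .transform(lambda s: s.rolling(4).mean()))
    return out
```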
6.3.3 Control variables

We control for debt because the amount of debt a firm carries on the balance sheet may influence a firm's investments in advertising and marketing [START_REF] Malshe | From Finance to Marketing: The Impact of Financial Leverage on Customer Satisfaction[END_REF]. Research shows that idiosyncratic risk may prompt management to modify marketing budgets [START_REF] Chakravarty | The Stock Market in the Driver's Seat! Implications for R&D and Marketing[END_REF]. We control for market share because research shows that higher market share makes managers less concerned about meeting short-term targets, so that they continue to invest in long-term performance drivers such as marketing expenditures (e.g. [START_REF] Eberhart | An Examination of Long-Term Abnormal Stock Returns and Operating Performance Following R&D Increases[END_REF]). Companies may choose to spend more on advertising to generate high profit margins, so we control for profit margins [START_REF] Andras | Advertising intensity and R&D intensity: Differences across industries and their impact on firm's performance[END_REF]. A portion of a firm's sales growth may be linked to advertising expenditures, so we control for this variable [START_REF] Dekimpe | The Persistence of Marketing Effects on Sales[END_REF]. We control for size because firms' expenditures on advertising may differ according to their size.

Data

To test the three hypotheses, we construct our sample as follows. We begin with all publicly traded U.S. firms from CRSP (Center for Research in Securities Prices) and Compustat between 1980 and 2014. The advantage of using these databases is their large size and long historical coverage. We keep U.S. operating firms, defined as firms with CRSP share codes of 10 or 11. We drop firms that are financials or utilities. We then restrict our sample to firms for which we have available data for advertising expenses in Compustat. We take CEO compensation data from Execucomp. The Execucomp database includes historic and total compensation data on CEOs for US firms in the S&P 500, the S&P 400 MidCap and the S&P 600 Small Cap. We use yearly data to ensure there is coherence between the Compustat and Execucomp data described below. This leaves a sample of 40,962 firm-years comprising 5,784 unique firms between 1980 and 2014. All investor data used to measure investor turnover and blockholders are taken from Thomson's 13f filings. All data for the control variables come from Compustat, with the exception of Risk, which comes from CRSP. We winsorize all continuous variables at the 1st and 99th percentiles. The winsorization reduces the likelihood that extreme values in the sample influence the results.

("Insert Table 1 about here")

Results

("Insert Table 2 about here")

The results from our first empirical model are reported in Table 2. Using a panel regression of the logit-transformed dependent variable, with robust standard errors and firm and year fixed effects, we test whether advertising as a share of assets is linked to investor turnover. We use year and firm fixed effects in all specifications to control for unobserved heterogeneity that is constant at the firm and year levels and thus ensure our results are robust. Our first main result is that advertising expenditure in year t is negatively associated with investor turnover in year t-1 in all estimated versions of the model. Hypothesis 1 is therefore supported, consistent with the idea that considering shareholder turnover is important to understand the causes of myopic management of marketing resources. The negative coefficient suggests that an increase in investor turnover results in a decrease in advertising spending as a percentage of assets. The impact is statistically very significant (p<0.01). An investor with a short-term horizon will not bear the long-run consequences of myopic management of marketing expenditures and may have incentives to prompt, or at least not deter, this behavior, all the more since myopic management is associated with higher stock performance over the short run [START_REF] Rhodes Kropf | Valuation Waves And Merger Activity: The Empirical Evidence[END_REF]. The results remain significant when taking into account firm size, leverage, risk, market share, profit margin and sales growth.

("Insert Table 3 about here")

Table 3 shows the results of our robustness tests for Hypothesis 1.
All specifications include all control variables. The first and second specifications show results using two alternative measures of marketing expenditures: advertising divided by sales, and marketing expenditures (SG&A - R&D) divided by total assets. The results remain highly significant (p<0.01). The change in dependent variable does, however, affect the coefficients of some control variables. For instance, the effect of risk disappears when we take advertising divided by sales as the dependent variable, which may be due to the use of sales as the scaling factor. To ensure that our measure of investor turnover is not driving our results, we use two other lagged measures of investor turnover defined in the finance and accounting literature in the robustness tests. The third and fourth specifications contain the results of using these two other investor turnover measures. The results remain significant for the two alternative measures of investor horizon. The results are weakest for the transient-investor measure of [START_REF] Bushee | The Influence of Institutional Investors on Myopic R&D Investment Behavior[END_REF] but nonetheless remain significant.

Causality

One concern regarding our results is that causality might run from marketing expenditures to shareholder investment horizon if firms that manage marketing resources myopically attract short-term investors. To address this issue, we run a test of causality between the choice of marketing expenditures and investor turnover. We estimate the following panel vector autoregressive model [START_REF] Holtz-Eakin | Estimating Vector Autoregressions with Panel Data[END_REF]:

ADVAT_i,t = φ ADVAT_i,t-1 + β ATURNOVER_i,t-1 + γ'X_i,t + v_i + u_i,t    (1)

where ADVAT_i,t denotes advertising expenses scaled by total assets, ADVAT_i,t-1 its lagged value, ATURNOVER_i,t-1 the lagged investor horizon, X_i,t a matrix of control variables, v_i firm-specific effects and u_i,t serially uncorrelated idiosyncratic errors. Indices i and t represent firms and years, respectively. The specification assumes that the dynamics of the endogenous variables are such that it takes no more than one year for the past values of endogenous variables to affect their future values. We use first differences to eliminate the firm-specific effect (whose correlation with the lagged dependent variable renders least-squares estimation inconsistent), obtaining:

ΔADVAT_i,t = φ ΔADVAT_i,t-1 + β ΔATURNOVER_i,t-1 + γ'ΔX_i,t + Δu_i,t    (2)
ΔATURNOVER_i,t = φ' ΔATURNOVER_i,t-1 + β' ΔADVAT_i,t-1 + δ'ΔX_i,t + Δe_i,t    (3)

Each equation is estimated individually using a generalized-method-of-moments (GMM) dynamic panel data estimator to accommodate the correlation between the first-differenced errors and the lagged differences of the endogenous variable implicit in (2)-(3). The first two observations for each firm in the panel are lost to lags and differencing. [START_REF] Arellano | Some Tests of Specification for Panel Data: Monte Carlo Evidence and an Application to Employment Equations[END_REF] use the lagged levels of the endogenous variables to obtain the moment conditions. This approach suffers from a weak instrument problem if the autoregressive parameter φ is close to one, i.e. if the dependent variable exhibits severe persistence [START_REF] Blundell | Initial conditions and moment restrictions in dynamic panel data models[END_REF][START_REF] Blundell | Estimation in dynamic panel data models: improving on the performance of the standard GMM estimator[END_REF].
The Blundell and Bond estimator therefore adds to the instrument matrix moment conditions that utilize the lagged differences of the endogenous variables in the equation in levels. These moment conditions are enough to identify the parameters of interest φ and β.
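To fix ideas, a stripped-down first-difference IV estimate of equation (2) can be sketched as follows. This is an Anderson-Hsiao-style illustration (instrumenting the lagged differenced dependent variable with its second lag in levels), not the full Blundell-Bond system GMM used for the reported results; the column names and the treatment of lagged turnover as predetermined are simplifying assumptions:

```python
import numpy as np
import pandas as pd

def two_sls(y, X, Z):
    """Just-identified 2SLS: project X on the instruments Z, then
    regress y on the fitted values."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]

p = df.sort_values(["firm", "year"]).copy()      # firm-year panel
g = p.groupby("firm")
p["d_advat"] = g["advat"].diff()                 # ΔADVAT_t
p["d_turn"] = g["aturnover"].diff()              # ΔATURNOVER_t
p["d_advat_lag"] = p.groupby("firm")["d_advat"].shift(1)
p["d_turn_lag"] = p.groupby("firm")["d_turn"].shift(1)
p["advat_lag2"] = g["advat"].shift(2)            # level instrument

r = p.dropna(subset=["d_advat", "d_advat_lag", "d_turn_lag", "advat_lag2"])
y = r["d_advat"].to_numpy()
X = r[["d_advat_lag", "d_turn_lag"]].to_numpy()  # endogenous + predetermined
Z = r[["advat_lag2", "d_turn_lag"]].to_numpy()   # instruments
phi, beta = two_sls(y, X, Z)
```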
To test the Equity/Bonus measure from [START_REF] Currim | You Get What You Pay For: The Effect Of Top Executives' Compensation On Advertising And R&D Spending Decisions And Stock Market Return[END_REF], we split the sample according to whether the long-term versus short-term structure of CEO compensation incentivizes CEOs to make myopic decisions. Specification 3 shows that for firms where CEOs are not incentivized to make myopic decisions concerning marketing expenditures, investor turnover influences marketing expenditures. Specification 4 shows that for firms where CEOs are incentivized to make myopic decisions due to the higher proportion of bonus relative to equity in their compensation, investor turnover does not affect marketing expenditures. To test the Incentive Ratio measure from [START_REF] Bergstresser | CEO incentives and earnings management[END_REF], we split the sample into firms where the Incentive Ratio is below the median (Specification 5) and above the median (Specification 6). Specification 5 shows that for firms where CEOs are less incentivized to make myopic decisions concerning marketing expenditures, investor turnover influences marketing expenditures. Specification 6 shows that for firms where CEOs are more incentivized to make myopic decisions, investor turnover does not affect marketing expenditures. To summarize the results for Specifications 3, 4, 5 and 6: the negative effect of a shorter shareholder horizon on marketing expenditures is concentrated in firms where CEOs are not incentivized to make myopic decisions. All told, the results validate Hypotheses 2 and 3. ("Insert Table 5 about here")

Additional Analysis

We measure the impact of investor turnover on the short-term management behavior of marketing expenditures. A partial adjustment model enables us to measure the inertia in marketing expenditures and the annual speed of adjustment (a worked form of the model appears below). We use OLS regressions with standard errors adjusted for heteroskedasticity and firm-level clustering to calculate the adjustment speed because firm fixed effects models with lagged dependent variables on the right-hand side lead to biased coefficient estimates [START_REF] Hovakimian | Is the Partial Adjustment Model a Useful Tool for Capital Structure Research?*[END_REF]. Our regression results indicate an annual speed of adjustment of 0.35 (p < 0.01).

Conclusion and Discussion

Our investor horizon results have implications for a firm's management and investor relations departments concerning marketing expenditures and the firm's long-term performance. Our first result (H1) shows that investor horizons impact marketing expenditures. If a firm has the goal of generating strong long-term performance for investors by investing continuously in marketing to build brands and innovate, then the firm's management and investor relations department should be aware that a short-term investor horizon may restrain marketing expenditures. The restraint is driven by short-term investors influencing management through the implicit threat of rapidly selling their shares if short-term performance declines. Marketing expenditures are further restrained by the long-term nature of their performance, as they generate value over several years. Short-horizon investors will not see the full benefits of the investments, so they are less interested in marketing expenditures. One way to address the drivers restraining marketing expenditures is to create a committed base of long-term shareholders.
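For reference, the partial adjustment model invoked in the Additional Analysis can be written in its generic form (a sketch stated as an assumption; the estimated specification also includes control variables):

Adv_{i,t} - Adv_{i,t-1} = λ (Adv*_{i,t} - Adv_{i,t-1}),  or equivalently  Adv_{i,t} = (1 - λ) Adv_{i,t-1} + λ Adv*_{i,t},

where Adv*_{i,t} is the (unobserved) target level of marketing expenditures and λ is the annual speed of adjustment. An estimate of λ ≈ 0.35 means that firms close roughly 35% of the gap between actual and target expenditures each year; the implied half-life of a deviation is ln(0.5)/ln(1 - 0.35) ≈ 1.6 years.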
The beneficial effect of long-term shareholders is supported by our second main result that the presence of blockholders mitigates the effect of high shareholder turnover on marketing expenditures (H2). The creation of a large long-term shareholder base could be achieved through the development of loyalty schemes that consist of firm shareholders receiving a reward after having held their shares for a specified period of time (loyalty period). The reward might be extra dividends or voting rights. The increase in investor horizon prompts firms to adopt a longer-term outlook and increase marketing expenditures despite the returns from such expenditures taking several years to be fully realized. If a long-term investor base is not created, then managers who are responsible for determining marketing expenditures may choose to restrain marketing expenditures because any expenditures they propose must be approved by top management, and they therefore present plans that converge with top management's interests [START_REF] Fama | Agency problems and residual claims[END_REF][START_REF] Joseph | Free Cash Flow, Agency Costs, and the Affordability Method of Advertising Budgeting[END_REF]. The pressure on managers to focus on marketing expenditures that generate short-term returns hinders the firm's long-term performance arising from marketing actions such as product innovation and building customer satisfaction. Promising alternative ways of rewarding long-term shareholders are still waiting to be more broadly implemented. Initiatives to reward long-term investors such as the loyalty-shares proposed by [START_REF] Bolton | Loyalty-Shares: Rewarding Long-term Investors[END_REF] or encouraging other types of ownership such as blockholders [START_REF] Edmans | Blockholder trading, market efficiency, and managerial myopia[END_REF] may be of interest to offset myopic management of marketing expenditures. Our third result (H3) shows that structuring CEO compensation so that the CEO's interests are in line with shareholders' interests may not be enough. The structure of CEO compensation, and notably equity-based compensation, has long been considered an efficient tool to reduce agency costs and to orient managerial decisions toward the long-term value of the firm [START_REF] Fama | Agency problems and residual claims[END_REF][START_REF] Jensen | Theory of the firm: managerial behavior, agency costs, and ownership structure[END_REF], and in particular away from myopic marketing management [START_REF] Currim | You Get What You Pay For: The Effect Of Top Executives' Compensation On Advertising And R&D Spending Decisions And Stock Market Return[END_REF]. We show that investor horizon can influence marketing expenditures even when CEO compensation is well structured. Indeed, our results show that when CEOs have an incentive to focus on the short term due to their compensation prompting such behavior, investor horizon does not affect marketing expenditures. CEOs with a higher proportion of long-term compensation relative to short-term compensation, however, may behave myopically due to investor turnover. To the best of our knowledge, the finding that shareholder turnover affects marketing expenditures even in the presence of CEO compensation weighted toward the long term is new to the marketing literature.
Myopic management represents an important issue for academic research in marketing, which has sought to assess the consequences of myopic management, understand its determinants and find mechanisms that could remedy it. Our results shed light on an important determinant of myopic management, namely shareholder turnover, and deepen our understanding of this behavior, which should not be considered as a mere agency conflict or opportunistic behavior arising against the wishes of shareholders. This study has some limitations. Changes in the makeup of a firm's investors can lead to changes in a firm's strategy. For instance, the arrival of an activist shareholder could prompt the firm to change its strategic orientation. There are also other forces at work that influence marketing expenditures. Management's need to meet an earnings guidance target could prompt the firm to cut marketing expenditures. A temporary increase in competitive intensity in a sector could also lead to increased spending on promotions at the expense of advertising and R&D. Further research could strive to define the effects of different levels of horizons and what constitutes a suitable level of long-term ownership. The antecedents of changes in a firm's ownership, and how they affect marketing expenditures, could also prove interesting subjects for research.

[START_REF] Mizik | The Theory And Practice Of Myopic Management[END_REF]. We also verify our measure of investor horizon by adding two alternative measures of investor horizon identified in the literature. Specification 3 uses the churn ratio to measure investor horizon [START_REF] Gaspar | Shareholder investment horizons and the market for corporate control[END_REF]. Specification 4 reports the results using the transient investor measure of [START_REF] Bushee | The Influence of Institutional Investors on Myopic R&D Investment Behavior[END_REF]. ***, **, * indicate significance at 1, 5, and 10% respectively. For more detailed information on variables see the Appendix.

This table presents dynamic panel estimates of the causal relation between marketing expenditures, in the form of advertising expenses scaled by total assets, and investor turnover. We use the generalized-method-of-moments dynamic panel data estimator of [START_REF] Blundell | Initial conditions and moment restrictions in dynamic panel data models[END_REF]. This method assumes that there is no autocorrelation in the idiosyncratic errors and requires the initial condition that the panel-level effects be uncorrelated with the first difference of the first observation of the dependent variable. Please refer to the Appendix for details on variable construction. In column 1, the dependent variable, advertising expenses scaled by total assets, is regressed on its lag and on lagged Investor Turnover. In column 2, the dependent variable is Investor Turnover, which is regressed on its lag and on lagged advertising expenses scaled by total assets. All control variables of our basic specification (cf.

Notes: This table reports the results of our tests of Hypotheses 2 and 3. To test the effect of blockholders (Hypothesis 2), we split our sample of firms into two parts, those without blockholders (=0) and those with blockholders (>0). Specification 1 shows the results without blockholders and Specification 2 the same regression for firms with blockholders. Specifications 3, 4, 5 and 6 show our results for tests of Hypothesis 3. We use two measures of executive compensation and test whether compensation structure mitigates the effect of investor horizon.
Specifications 3 and 4 split the sample into two parts using the measure of [START_REF] Currim | You Get What You Pay For: The Effect Of Top Executives' Compensation On Advertising And R&D Spending Decisions And Stock Market Return[END_REF]. Specification 3 shows results for CEOs whose compensation contains more long-term than short-term compensation (=0) and Specification 4 for those with a greater portion of short-term compensation relative to long-term compensation (>0). Specifications 5 and 6 split the sample using an alternative measure of CEO compensation [START_REF] Bergstresser | CEO incentives and earnings management[END_REF] called the incentive ratio. Specification 5 contains the results for CEOs below the median incentive ratio (<m) and Specification 6 shows the results for CEOs with incentive ratios above the median (>m). We use year and firm fixed effects in all specifications.

Where α_(i,t-12) is the weight of shares of firm i in investor j's portfolio at quarter t-12. We set it to zero if the change in shares held does not correspond to a reduction in position. We then average it over the last four quarters and aggregate the turnover of a firm's institutional investors at the firm level. (A computational sketch of a measure of this kind appears at the end of this passage.)

ACHURNRATIO: Percentage of portfolio stocks that have been sold or bought over the last 12 quarters, averaged over all the firm's institutional shareholders and weighted by the number of shares held. See [START_REF] Gaspar | Shareholder investment horizons and the market for corporate control[END_REF] for a detailed explanation of the variable computation.

BUSHEETRA: [START_REF] Bushee | The Influence of Institutional Investors on Myopic R&D Investment Behavior[END_REF].

See [START_REF] Bergstresser | CEO incentives and earnings management[END_REF]. This measure (the incentive ratio) shows the share of a CEO's compensation stemming from a one percentage point increase in the equity value of the firm.

Chapters 5 and 6 study the information flow from equity markets to firms. Chapter 5 seeks to answer the third research sub-question of whether a stock's mispricing affects marketing investments. We find that stock mispricing (i.e. stock prices that reflect information not related to firm fundamentals) affects marketing investments. We show that mispricing has a strong and negative impact on advertising and R&D expenditures. We find further that a firm's equity dependence moderates the relationship between mispricing and marketing investments. Our results indicate that stock prices may convey irrational information that affects marketing investments. Chapter 6 seeks to answer the fourth research sub-question about whether information about investor horizons affects marketing investments. We find that shorter investor horizons are associated with a higher probability of reducing marketing investments. We further confirm our results using a causality test. We find further that two factors, blockholders and CEO compensation, moderate the relationship between investor horizon and marketing investments. Our results indicate that information about investor horizons influences marketing investments. To sum up, Chapters 3 and 4 show that information flows from marketing investments influence equity markets, giving an affirmative response to research sub-questions one and two. Chapters 5 and 6 show that information flows from equity markets influence marketing investments, giving an affirmative response to research sub-questions three and four.
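As referenced in the variable definitions above, the following is a minimal sketch, stated as an assumption rather than the exact variable construction, of how a sell-side investor turnover measure of this kind can be computed from quarterly 13F-style holdings. Column names ('investor', 'firm', 'quarter', 'shares', 'weight') are hypothetical, and the sketch uses a one-quarter lag of portfolio weights where the definition above uses weights at t-12.

import pandas as pd

def firm_level_turnover(h: pd.DataFrame) -> pd.Series:
    # h: one row per investor-firm-quarter with hypothetical columns
    # 'investor', 'firm', 'quarter', 'shares' and 'weight' (the portfolio
    # weight of the firm in the investor's portfolio).
    h = h.sort_values(["investor", "firm", "quarter"]).copy()
    grp = h.groupby(["investor", "firm"])
    lag_shares = grp["shares"].shift(1)
    # Keep only reductions in position, per the definition above.
    sold = (lag_shares - h["shares"]).clip(lower=0)
    h["turn"] = (sold / lag_shares.where(lag_shares > 0)).fillna(0.0) \
        * grp["weight"].shift(1).fillna(0.0)
    # Average over the last four quarters within each investor-firm pair.
    h["turn4"] = (h.groupby(["investor", "firm"])["turn"]
                    .transform(lambda s: s.rolling(4, min_periods=1).mean()))
    # Aggregate across a firm's institutional investors, per quarter.
    return h.groupby(["firm", "quarter"])["turn4"].mean()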
Our general research question is whether the information in stock prices flows bidirectionally between marketing investments and equity markets. Overall, the results of the research sub-questions confirm that information in stock prices plays a bidirectional role between marketing investments and equity markets. This thesis argues that the information flows in both directions, in line with the 'real effects of financial markets' perspective. In light of the affirmative responses to our four research sub-questions and the general research question, we conclude that the 'real effects of financial markets' perspective is relevant for the marketing-finance interface research area.

7.2 Contributions

This thesis contributes to the literature on the effects of stock price information in the relationship between marketing investments and financial markets and, more generally, to the literature on the marketing-finance interface. In this thesis, we posit that combining the two directions of information flows makes for a better theoretical framework of the relationship between marketing investments and equity markets. We make two points. First, we argue that the 'real effects of financial markets' perspective should be integrated into the marketing-finance interface. This integration would reflect how stock prices both convey and reflect information that is used in corporate decisions concerning marketing investments. Second, incorporating the feedback effect into the marketing-finance interface opens up new possibilities for research, which we discuss in 7.4.

7.2.1 Theoretical contributions

i. By studying the bidirectional nature of information flows, this thesis shows that marketing investments and equity markets are closely linked in a two-way relationship, as suggested by the 'real effects of financial markets' perspective. Within this perspective, marketing plays a key role in a firm's relationship with financial markets, both as a sender of information and a receiver of financial market information.
ii. Our research complements the literature on the financial drivers of myopic management of marketing resources. We identify two new financial market drivers in the marketing literature, investor horizon and stock mispricing, and study boundary conditions that may alter the impact of myopic behaviour.
iii. This thesis responds to a call by [START_REF] Malshe | From Finance to Marketing: The Impact of Financial Leverage on Customer Satisfaction[END_REF] for research into the impact of institutional investors on a firm's marketing strategy. The studies identify two previously unstudied determinants of marketing investments, stock mispricing and investor horizon, extending previous research into the effect of investors on firms' marketing investments [START_REF] Chakravarty | The Stock Market in the Driver's Seat! Implications for R&D and Marketing[END_REF][START_REF] Markovitch | Using Capital Markets as Market Intelligence: Evidence from the Pharmaceutical Industry[END_REF].
iv. We respond to a call for further study into investor biases and how they affect stock returns by showing how brokerage house brands affect investor decisions. We further show that investor biases, as reflected through investor horizon and mispricing, may affect marketing investments.
v.
We extend research into security analysts by showing the importance of brokerage house brands [START_REF] Hong | Analyzing The Analysts: Career Concerns And Biased Earnings Forecasts[END_REF][START_REF] Womack | Do Brokerage Analysts' Recommendations Have Investment Value[END_REF].
vi. The thesis adopts a multidisciplinary approach to study the role played by marketing investments in financial markets, a neglected area of research in marketing.

7.2.2 Contributions to marketing practice

i. This thesis helps marketing practitioners better grasp why senior managers pay so much attention to stock prices. We show marketers the importance of the information managers glean from share prices and its influence on marketing practice.
ii. Understanding the impact of stock mispricing and investor horizons facilitates the dialogue between the finance group and the marketing group.
iii. Shareholders should be aware that their biases affect the decisions of the firms they invest in.

7.3 Limits

Research is a difficult undertaking despite our best efforts. This thesis suffers from several weaknesses. The first and biggest limit, in our opinion, is that we do not test the feedback effect set out by the 'real effects of financial markets' perspective, focusing instead on the bidirectional nature of information flows. This is in large part because identifying and operationalizing these real effects is difficult. A second limit is that the studies in Chapters 5 and 6 are based on yearly data. Higher-frequency data such as quarterly expenditures would perhaps improve the granularity of the results, but at the expense of lower availability, as many US firms do not report quarterly data for some variables such as R&D because it is non-compulsory. Furthermore, we use the marketing data that is available in databases. We lack more detailed information about other types of marketing investments, such as the cost of promotions and the amounts spent on the marketing mix, that could give a richer understanding of investor impact on firms. More detailed advertising and marketing expenditure data is available from research firms, but at a high cost.

7.4 Future research

The 'real effects of financial markets' perspective applied to the marketing-finance interface opens up many opportunities for research, providing potentially deeper insights into the implications of the feedback effect between marketing investments and equity markets. ii. Brands both affect and reflect information in equity markets. What impact does brand equity have on investors in addition to customers? Do investments in brand equity help attract investors through a ricochet effect, which thus boosts a firm's stock performance separately from the brand's contribution to increasing a firm's cash flows? Future research could study whether analysts moderate the effect of stock mispricing and investor horizon on marketing investments, and the channels for doing so. Finally, the research subjects could be broadened to other countries to see if the effects of brokerage house brands are similar and whether cultural factors influence the impact of brokerage house brands.

Questions de recherche

Le point de départ de notre thèse est l'information communiquée par le cours de la bourse, prise en considération par la recherche académique depuis 50 ans [START_REF] Akerlof | The Market for "Lemons": Quality Uncertainty and the Market Mechanism[END_REF][START_REF] Fama | Efficient Capital Markets: A Review of Theory and Empirical Work[END_REF](Rappaport, 1987).
La communication de l'information a mené à la théorie de l'efficience des marchés selon laquelle les cours de la bourse agrègent toute l'information disponible [START_REF] Fama | Efficient Capital Markets: A Review of Theory and Empirical Work[END_REF]. Notre thèse se base sur le rôle du cours de la bourse en tant qu'agrégateur de l'information pour enquêter sur l'influence de ce phénomène au niveau de la relation entre les investissements marketing et les marchés d'actions.

Le chapitre 4, une extension du chapitre 3, analyse la manière par laquelle les marques contribuent à l'efficacité de l'information du marché d'actions en utilisant le statut d'une maison de courtage. Le chapitre 5 étudie l'impact de l'information sur le prix des actions qui mène à la mauvaise évaluation des prix et les conditions selon lesquelles cela affecte les investissements marketing. Le chapitre 6 étudie l'impact de l'horizon des investisseurs dans le prix des actions sur les investissements marketing. Le chapitre 7, enfin, présente les résultats, les contributions théoriques et managériales ainsi que les limites et la recherche future.

8.9 Chapitre 2 : Revue de littérature

Ce chapitre décrit les principaux cadres théoriques présentés dans cette thèse où nous explorons la littérature spécifique. Nous commençons par étudier les deux axes de recherche de l'interface marketing-finance, c'est-à-dire la manière dont les marchés financiers réagissent aux investissements marketing (section 2.1) et comment les marchés financiers impactent les investissements marketing (section 2.2). Viennent ensuite l'étude de l'information en marketing (section 2.3), la perspective des « effets réels des marchés financiers » (section 2.4) et, enfin, la conclusion (section 2.5).

En lien avec les réponses affirmatives à nos quatre sous-questions de recherche et à la question de recherche générale, nous concluons que la perspective des « effets réels des marchés financiers » est importante pour la zone de recherche de l'interface marketing-finance.

ii. Notre recherche complète la littérature sur les raisons financières de la gestion myope des ressources marketing. Nous distinguons deux nouveaux moteurs venant des marchés financiers dans la littérature marketing : l'horizon des investisseurs et les erreurs de valorisation des cours de la bourse, ainsi que des conditions qui peuvent modérer l'impact d'un comportement myope.
iii. Cette thèse répond à la demande de [START_REF] Malshe | From Finance to Marketing: The Impact of Financial Leverage on Customer Satisfaction[END_REF] pour plus de recherches sur l'impact des investisseurs institutionnels sur la stratégie marketing d'une société. L'étude identifie deux déterminants jusque-là non étudiés des investissements marketing : les erreurs de valorisation des cours de la bourse et l'horizon des investisseurs. Elle contribue également à la recherche sur les effets des investisseurs sur les investissements marketing d'une société [START_REF] Chakravarty | The Stock Market in the Driver's Seat! Implications for R&D and Marketing[END_REF][START_REF] Markovitch | Using Capital Markets as Market Intelligence: Evidence from the Pharmaceutical Industry[END_REF].
v. Nous élargissons notre recherche aux analystes financiers en démontrant l'importance de la marque d'une maison de courtage [START_REF] Hong | Analyzing The Analysts: Career Concerns And Biased Earnings Forecasts[END_REF][START_REF] Womack | Do Brokerage Analysts' Recommendations Have Investment Value[END_REF].
Nous leur montrons l'importance de l'information que les dirigeants peuvent obtenir à partir des cours des actions et l'influence que cela peut avoir sur leur département.
ii. La compréhension de l'impact des erreurs de valorisation et de l'horizon des investisseurs facilite le dialogue entre le département marketing et le département finance des grandes sociétés.

Traditional asset managers (with large assets under management) could afford to pay full commission for the full services of brokerage houses, which include execution and equity research. However, the traditional asset manager has been losing ground to low-cost passive asset managers. The passive asset managers charge lower asset-management fees, meaning they cannot afford to pay as much for traditional brokerage services. So they opt for low-cost execution and do not use traditional research. The result is increased pressure on the buy-side. This competition has prompted brokerage houses to specialize. Brokerage firms may pursue large or small capitalization stocks. Or they may focus on certain sectors such as technology or utilities. And they may pursue certain types of buy-side clients, such as quants or traditional active investors. Advertising expenditures of brokerage houses are modest, mainly for ads in the specialized press of the type shown on the next page. Some advertising is carried out via scholarships and sponsoring of charity events, etc. The marketing focus is more on marketing assets such as brands and customer relationships.

French press article describing the key marketing points of a brokerage house

This excerpt highlights the various arguments that brokerage houses put forward in their advertising. The press release stresses the broad geographical coverage of stocks, the importance of size, the brand, the services offered and awards won, and employee expertise, all important factors to attract and retain institutional investors.

Excerpts of article concerning

1.1 Research framework

Since Akerlof's seminal article in 1970 on lemon cars and the role played by information in transactions, academic research has carried out considerable investigation into the role played by information in decision-making. Marketing research has sought to understand the motivations that drive consumers to search for information, sources of information, how information is processed and the role information plays at different stages of the consumer decision-making process. This research has been extended in the marketing-finance interface to understand how information about marketing investments is assessed by equity markets and, in turn, how information about financial market players affects marketing investments. Finance research, on the other hand, looks at the effect of information in financial markets, focusing on market-level outcomes such as informational efficiency and what happens when informational efficiency declines.

In Figure 5, Chapters 3 and 4 study the impact of information about brokerage houses (i.e., brand signals) on investors and competitors, and Chapters 5 and 6 study the influence of investor mispricing and investor horizon, respectively, on corporate marketing investments. The arrows in the General Research Design represent the relevant direction of the information flow. Studies 1, 3 and 4 are formatted as articles. Study 1 has been submitted to IJRM. The other two articles will be submitted soon.
Study 2 (Chapter 4) is not formatted as a research article because it is an extension of the conceptual framework and empirical methodology used in Study 1 (Chapter 3).

4/ We remove recommendations for which there is no previous outstanding (i.e. valid) recommendation from the same brokerage house because we need the previous recommendation to determine the size and direction of the recommendation change. This means that all coverage initiations and re-initiations are excluded.
5/ Selection of 66 brokerage houses: Brokerage houses employing fewer than 20 analysts on average over the entire life of the sample are removed, as well as brokerage houses with fewer than 100 recommendations over the whole sample period, to avoid results being influenced by small brokerage houses.

3.3.3.1 Measuring brokerage house characteristics

Brokerage House Awareness (BH_Awarenessk,y) - There is no measure of a brokerage house's awareness over time, so we use as a proxy, for a given year y, the number of years the brokerage house k has been present in the sample of recommendations from I/B/E/S.

We consider two characteristics that might contribute to the perception of a brokerage house's performance: the perceived brokerage error and the information access.

Perceived Brokerage House Error (BH_Errork,y) - We take the one-year lagged average absolute forecast error. Using forecast and actual earnings data from the I/B/E/S actual earnings file and detailed forecasts file, we calculate the average difference (in absolute value) between the last year-end EPS forecast issued at least 30 days before the fiscal year-end and the actual EPS value, and divide this difference by the actual value, for all firms covered by the brokerage house k. Brokerage house errors are winsorized at the 5% level. We observe a correlation between brokerage house error and brokerage house size. To ensure we capture the forecast error, we regress brokerage house error on brokerage size and introduce the residuals in the regressions.

Information Access - Investment bank (IBk,y) - The information access dummy is set to 1 if the investment bank associated with the brokerage house figures in the Carter-Manaster investment bank rankings (source: site.warrington.ufl.edu/ritter/ipo-data), and 0 otherwise.

Industry Recognition (Industry_Recognitionk,y) - The number of awards conferred by the industry reflects the industry recognition of a brokerage house. For a given year y, we sum the number of awards attributed by the Institutional Investor survey to the brokerage house k. Awards are weighted by level, with four points for first place, three points for second place, two points for third place and one point for fourth place. Institutional Investor magazine's web site provides the final yearly score at the brokerage house level starting in 2001. To ensure we capture industry recognition, we first regress industry recognition on brokerage size and use the residuals in the final model.

3.3.4.2 Recommendation characteristics

Recommendation change size (Reco_Chg_Sizel) - We describe in section 3.1 how we calculate the size of recommendation change l and take the absolute value. Possible recommendation change values range from 1 to 4.

Following the procedure described in 3.3.2, we obtain a yearly brand score for each of the 66 brokerage houses in the sample. The histogram of the brand scores is shown in Appendix 2. The average brand score is -0.15 with a standard deviation of 0.45.
A positive value indicates that the brokerage house has a stronger brand than the average, whereas a negative value indicates a weaker brand. We can identify several patterns over the period: some brokerage houses (e.g. Oppenheimer, Jefferies, Lazard Frères) underperform over the whole period, and their brand score remains negative. In contrast, some brokerage houses, such as Deutsche Bank Securities, CIBC World Markets or Credit Suisse Securities, outperform and show an ongoing positive brand score. Others (e.g. Roth Capital Partners) have a neutral brand as they score around zero. Finally, some brokerage houses (Dresdner Kleinwort, Calyon Securities) gradually improved their brand score, moving from negative to positive values.

3.4.3.2 Impact of the brokerage house brand score on investors

Our results establish the brokerage house brand as a new determinant of investor response to investment research by showing that brokerage house brand signals add to the impact of analyst name signals. In other words, an analyst employed by brokerage house A would not have the same impact on equity markets if the same analyst were employed by brokerage house B. A brokerage house brand signal is a new variable to consider when studying security analysts. Consequently, brokerage houses should be present systematically as a separate factor in research into security analysts. We identify, based on the marketing and finance literature, three determinants of brokerage house brands - awareness, performance and reputation - to understand what makes a brokerage house brand signal more effective. Our results indicate that a brokerage house brand's awareness, performance and reputation influence investor perception of a brokerage house's brand signal.

Bushee uses portfolio characteristics to classify investors. He divides investors across three factor variables: portfolio turnover, portfolio concentration and trading sensitivity to current earnings. He identifies three clusters of data along those three factors. "Transient investors" have the highest turnover and the highest use of momentum strategy.

7.2.3 Contributions to finance practice

i. Investors can learn how brokerage house brands influence their own response and the response of competing brokerage houses to recommendation changes.

8.5 Résultats et contributions

Notre thèse entreprend de déterminer si les flux d'informations entre les investissements marketing et les investisseurs sont bidirectionnels. Cet aspect des flux d'informations a jusqu'à présent été négligé par les chercheurs en interface marketing-finance. Nous soutenons que la perspective des « effets réels des marchés financiers » apporte un cadre conceptuel capable à la fois d'expliquer l'influence des flux d'informations bidirectionnels et d'ouvrir de nouvelles possibilités de recherche pour délimiter les effets des informations allant dans les deux sens. Les études 1 et 2 se penchent sur les effets qu'ont les flux d'informations provenant des investissements marketing sur les marchés d'actions. L'étude 1 démontre que l'information des marques de maisons de courtage, un type de dépenses marketing, a une influence sur les investisseurs en actions. De plus, nous exposons quatre caractéristiques de maisons de courtage qui influencent la réaction des investisseurs par rapport à la marque de ces dernières. Ainsi, nous faisons la démonstration suivante : la marque des maisons de courtage influence le prix des sociétés.
Nous développons par conséquent une méthodologie pour estimer le score de la marque d'une maison de courtage.

Les études 3 et 4 analysent les flux d'informations provenant des investisseurs et allant vers les dépenses marketing. L'étude 3 démontre que la manière dont les investisseurs influencent le prix d'une action affecte les investissements marketing. Nous explicitons empiriquement que l'erreur de valorisation des cours de la bourse impacte négativement les dépenses publicitaires et R&D. De plus, la dépendance d'une société au financement de ses capitaux propres a un impact sur cette relation. Nous développons et démontrons que les erreurs de valorisation des actions peuvent mener à la réduction des dépenses marketing, et que l'irrationalité des prix des actions peut avoir un effet sur les investissements marketing. L'étude 4 détermine si l'information sur les horizons des investisseurs dans les cours de la bourse influence les investissements marketing. Nous montrons que le roulement des investisseurs est associé à une plus forte probabilité de réduction de dépenses marketing. Nous montrons aussi que la rémunération du PDG n'atténue pas les effets, ce qui suggère que l'existence de la myopie managériale est plus qu'un simple conflit d'agence. Prises dans leur ensemble, les études 3 et 4 démontrent que les informations circulent à partir des marchés d'actions vers les investissements marketing. Ces quatre études combinées démontrent la nature bidirectionnelle des flux d'informations entre les investissements marketing et le marché d'actions. Les implications de ce résultat, les limites et les suggestions pour la recherche future sont présentées dans la conclusion.

8.6 Structure de la recherche

La structure générale de la recherche représentée dans la Figure 5 contient des flèches qui montrent les directions pertinentes des flux d'informations entre les investissements marketing et les investisseurs. La flèche du haut représente les flux d'informations allant des investissements marketing aux marchés d'actions. La flèche du bas indique les flux d'informations allant des marchés d'actions vers les investissements marketing ; les destinataires des informations contenues dans les cours de la bourse (flèche du bas) sont bien les sociétés et non les maisons de courtage. Dans la structure générale de la recherche ci-dessus, le chapitre 3 étudie la relation entre les maisons de courtage et les investisseurs. Le chapitre 4 étudie la relation entre une maison de courtage et ses concurrentes. Les chapitres 5 et 6 étudient les effets de la relation entre les investisseurs et les investissements marketing des sociétés. Les études 1, 3 et 4 sont présentées en tant qu'articles. L'étude 1 a été soumise au journal International Journal of Research in Marketing. Les deux autres études le seront bientôt. L'étude 2 (chapitre 4) n'est pas présentée sous forme d'article car elle représente une extension du cadre conceptuel et de la méthodologie empirique utilisés dans l'étude 1.

8.7 Épistémologie

Lorsque nous étudions un objet d'étude, les difficultés les plus manifestes sont : 1) la meilleure manière d'aborder le sujet et 2) le format sous lequel présenter la thèse au mieux : clairement démontrer l'intérêt du sujet, présenter les choix théoriques, expliquer le choix de méthodologie, justifier les résultats et, principalement, la manière adéquate de rendre notre thèse aussi cohérente que possible. Afin de prendre en compte toutes ces considérations, notre choix s'est donc porté sur un format de thèse par études.
Chaque étude (mis à part le chapitre 4 qui est dépourvu de cadre conceptuel et de cadre empirique propres) adopte la forme de l'hypothèse, de la collecte des données, de l'analyse des données et des résultats. Notre thèse adopte alors une approche hypothético-déductive. Toute recherche donne lieu à des questions épistémologiques. Notre thèse adopte un paradigme positiviste pour deux raisons : elle est mue par les questions « pourquoi ? » et « pour quelle raison ? ». En d'autres termes, si et comment le cours de la bourse transmet des informations entre les investissements marketing et le marché d'actions, plutôt que le paradigme interprétativiste qui pose des questions dans les termes de « pour quelle raison est-ce que les acteurs… ? » ou le paradigme constructiviste qui demande « pour quel motif ? ». Les critères de validité de cette thèse correspondent au paradigme positiviste : vérifiabilité, confirmabilité et réfutabilité.

8.8 Organisation de la thèse

Notre thèse commence par un chapitre d'introduction, suivi d'une revue de la littérature (chapitre 2), puis de quatre études empiriques (chapitres 3, 4, 5 et 6) et se termine par une conclusion (chapitre 7). Le résumé en français de la thèse est organisé en chapitres, suivi par les références et les annexes. Le chapitre 2 est une revue de la littérature relative à cette thèse. Il commence par une introduction non numérotée, puis présente les deux axes de l'interface marketing-finance qui reflètent les deux directions des flux d'informations étudiées en marketing (2.1 et 2.2), la façon dont cette information a été étudiée dans la littérature académique (2.3) et, pour finir, la manière par laquelle la recherche en comptabilité et en finance analyse le rôle de l'information dans les marchés financiers et la perspective des « effets réels des marchés financiers » (2.4). Les chapitres 3, 4, 5 et 6 sont présentés dans le format d'une étude académique, avec une introduction, un cadre théorique, une analyse empirique, des résultats et une discussion. Le chapitre 3 porte sur la façon dont les marques, un type d'investissement marketing, contribuent à l'information dans le prix du marché d'actions en se servant d'un objet d'étude.

i. Section 2.1 : Le premier cadre de l'interface marketing-finance étudie les conditions selon lesquelles le prix des actions reflète l'information des marchés financiers qui impactent les investissements marketing. Nous analysons la manière à travers laquelle les questions posées sur la capacité du marketing à générer de la valeur pour les actionnaires ont mené à l'émergence de l'interface marketing-finance en tant qu'axe de recherche indépendant en marketing. Nous étudions ensuite comment les investissements marketing contribuent au rendement des actionnaires, et terminons avec une description des principales méthodes empiriques utilisées dans ce cadre. Nous nous concentrons particulièrement sur la méthode des études d'événements puisque nous utilisons cette approche dans le chapitre 3.
ii. Section 2.2 : Le deuxième axe de l'interface marketing-finance analyse la manière par laquelle l'information provenant des marchés financiers affecte les investissements marketing. Nous étudions la façon dont la nature discrétionnaire des dépenses marketing les rend vulnérables à la manipulation par les dirigeants.
Enfin, nous réfléchissons à trois conditions, regroupées en trois thèmes, qui pourraient pousser les dirigeants à des comportements myopes concernant les investissements marketing : 1/ le REAM, real earnings management (gestion réelle des résultats), 2/ le financement des entreprises et 3/ le lien entre l'information contenue dans les cours d'actions et les investissements marketing.
iii. Section 2.3 : Nous examinons comment l'information a été étudiée dans la recherche marketing. Nous considérons d'abord la manière par laquelle l'information imparfaite et asymétrique crée de l'incertitude quant à la qualité de la recherche des maisons de courtage, ensuite nous étudions la façon dont les consommateurs se procurent l'information et l'utilisent pour prendre leur décision d'achat. Enfin, nous examinons la façon dont les signaux aident les consommateurs à résoudre leur incertitude sur la qualité d'un produit, puis nous nous penchons sur la recherche concernant les marques en tant qu'information, d'abord en général puis dans les services ; ces derniers étant le secteur sur lequel nous nous focalisons dans les chapitres 3 et 4.
iv. Section 2.4 : Nous étudions la manière à travers laquelle la perspective des « effets réels des marchés financiers » impacte la théorie sur l'efficience informationnelle des marchés. Ensuite, nous nous concentrons sur les stratégies adoptées par les dirigeants et les marchés financiers pour contribuer aux effets réels. Nous étudions aussi trois théories qui expliquent pourquoi les dirigeants réagissent aux informations contenues dans les cours de la bourse, à savoir leur motivation personnelle, le désir de se conformer aux exigences des investisseurs et l'apprentissage des erreurs passées (learning). Nous nous attachons par la suite à l'information contenue dans les cours de la bourse provenant des limitations de l'arbitrage, des attentes des investisseurs, de la veille que font les investisseurs sur les actions managériales et des besoins de financements externes de l'entreprise. Enfin, nous nous concentrons sur la littérature concernant les erreurs de valorisation et les horizons des investisseurs que nous étudions dans le contexte des investissements marketing dans les chapitres 5 et 6.
v. Section 2.5 : Nous concluons et expliquons le choix des sujets et le niveau des analyses incluses dans cette thèse.

8.11 Contribution du chapitre 4 à cette thèse

Le chapitre 4 élargit le périmètre du modèle du signal de la marque utilisé dans le chapitre 3. Nous regardons plus précisément si l'information contenue dans les signaux de la marque d'une maison de courtage influence la réponse des maisons de courtage concurrentes. Élargir le périmètre du modèle du chapitre 3 nous permet de mieux saisir l'impact des marques sur le marché des actions en incorporant un deuxième acteur-clé des marchés financiers, les maisons de courtage concurrentes. Seul le régulateur, le troisième et dernier acteur-clé, n'a pas été pris en compte. La flèche en gras dans la figure ci-dessous représente les flux d'informations de notre schéma de recherche pertinents pour le chapitre 4. Nous utilisons le même cadre conceptuel, les mêmes variables indépendantes et les mêmes variables de contrôle que pour le chapitre 3. Nous calculons une mesure de la réponse des maisons de courtage concurrentes aux changements de recommandation d'une maison de courtage leader. Cette mesure s'appelle « statut de leader » et devient alors notre variable dépendante.
Nous testons empiriquement l'effet de l'évaluation de la marque de la maison de courtage et de ses déterminants sur la réponse des concurrents. Nous aboutissons à l'assertion suivante : les maisons de courtage concurrentes réagissent différemment des investisseurs à la marque d'une maison de courtage. De plus, la performance et la réputation, deux des déterminants d'une marque, ont une influence positive sur les concurrents alors que la notoriété a une influence négative. Nous pensons que des facteurs spécifiques au secteur peuvent expliquer que les déterminants de la marque d'une maison de courtage n'influencent pas ses concurrents de la même manière que les investisseurs. Le chapitre 4, qui est une application pratique du modèle développé dans le chapitre 3, visait à répondre à la deuxième sous-question de recherche, à savoir si l'information dans les marques de maisons de courtage est importante pour les concurrents, un deuxième acteur des marchés boursiers en plus des investisseurs étudiés dans le chapitre 3. Ceci est fait en examinant l'influence des marques de maisons de courtage sur leurs concurrents sur le marché américain. Les résultats suggèrent que les caractéristiques d'une marque de maison de courtage sont en effet prises en compte par ses concurrents dans leur prise de décision de changement de recommandation. Cependant, les concurrents et les investisseurs sont impactés par la marque d'une maison de courtage de manière différente. Ainsi, les investisseurs et les concurrents sont bien tous les deux influencés positivement par la réputation et la performance, mais ils réagissent de manière contraire à l'expérience, de manière positive pour les investisseurs et négative pour les concurrents.

Les chapitres 5 et 6 étudient les flux d'information allant du marché d'actions vers les sociétés. Le chapitre 5 tente de répondre à la troisième sous-question de recherche, à savoir si la mauvaise évaluation des prix affecte les investissements marketing. Nous découvrons que les erreurs de valorisation des cours de la bourse (c'est-à-dire le prix des actions qui reflète l'information qui n'est pas fondamentalement liée aux sociétés) affectent les investissements marketing. Nous constatons que les erreurs de valorisation des cours de la bourse ont un fort impact négatif sur les dépenses publicitaires et R&D. Nous avons aussi déduit que la dépendance des sociétés au marché pour leur financement modère la relation entre les erreurs de valorisation et les investissements marketing. Nos résultats indiquent que le prix des actions peut transmettre de l'information irrationnelle qui affecte les investissements marketing. Le chapitre 6 cherche à répondre à la quatrième sous-question de recherche, à savoir si l'information à propos de l'horizon des investisseurs a un impact sur les investissements marketing. Nous découvrons que les horizons les plus courts des investisseurs sont associés à une plus grande probabilité de réduction d'investissements marketing. De plus, nous remarquons que deux autres facteurs, les détenteurs de blocs d'actions et la rémunération du PDG, modèrent la relation entre les horizons des investisseurs et les investissements marketing. Nos résultats indiquent que l'information spécifique aux horizons des investisseurs influence les investissements marketing. Les chapitres 3 et 4 montrent que les flux d'informations provenant des investissements marketing influencent le marché d'actions, donnant une réponse positive aux sous-questions de recherche une et deux.
Les chapitres 5 et 6 démontrent que les flux d'informations venant des marchés d'actions ont un impact sur les investissements marketing, ce qui donne une réponse positive aux sous-questions de recherche trois et quatre. Ces réponses aux quatre sous-questions confirment que l'information dans le prix des actions joue un rôle bidirectionnel entre les investissements marketing et les marchés d'actions et permettent donc de répondre de manière positive à notre question générale, à savoir si l'information du prix des actions circule de façon bidirectionnelle entre les investissements marketing et le marché d'actions. Cette thèse soutient que l'information circule dans les deux sens, ce qui est en accord avec la perspective des « effets réels des marchés financiers ».

Cette thèse contribue à la littérature sur le rôle joué par l'information dans la relation entre le marché d'actions et les investissements marketing, et plus généralement à la littérature sur l'interface marketing-finance. Dans notre travail, nous soutenons que la combinaison des deux directions des flux d'informations crée un meilleur cadre théorique sur la relation entre les investissements marketing et le marché d'actions. Nous faisons deux remarques. Premièrement, nous avançons que la perspective des « effets réels des marchés financiers » devrait être intégrée à l'interface marketing-finance. Cette intégration permettrait une meilleure prise en compte des effets des flux d'informations sur la relation entre les cours de la bourse et les prises de décisions des sociétés concernant les investissements marketing. Deuxièmement, nous pensons que le fait d'incorporer l'effet feedback dans l'interface marketing-finance ouvrirait la voie à de nouvelles possibilités de recherches, dont nous discuterons dans la section 8.

Contributions théoriques

i. En étudiant la nature bidirectionnelle des flux d'informations, cette thèse montre que les investissements marketing et le marché d'actions sont étroitement liés dans une relation bidirectionnelle, telle qu'elle est suggérée par la perspective des « effets réels des marchés financiers ». Dans cette perspective, le marketing joue un rôle-clé dans la relation d'une société avec les marchés financiers, à la fois en tant qu'émetteur et récepteur de l'information des marchés financiers.
iv. Nous répondons à la demande pour plus de recherches sur les préjugés dont font preuve les investisseurs et sur la manière dont ils affectent les rendements des actions en montrant la manière avec laquelle les marques des maisons de courtage affectent les décisions des investisseurs. De plus, nous montrons que ces préjugés des investisseurs concernant leurs horizons d'investissement et leurs erreurs de valorisation des actions peuvent avoir un effet sur les investissements marketing.
vi. Cette thèse adopte une approche multidisciplinaire afin d'étudier le rôle des investissements marketing dans les marchés financiers, un sujet délaissé par les recherches en marketing.

Contributions managériales pour le marketing

i. Cette thèse aide les professionnels du marketing à mieux comprendre la raison pour laquelle les comités de direction attachent autant d'importance au prix des actions.

Les recherches futures pourraient examiner le fait que les dépenses marketing affectent et reflètent simultanément l'information du prix des actions. Les dépenses marketing utilisées pour commercialiser les produits d'une entreprise permettent-elles aussi d'attirer les investisseurs ?
À quel point les dépenses publicitaires contribuent-elles directement à la performance du marché d'actions d'une société, par rapport à l'effet indirect produit par les investisseurs ? Les dirigeants des sociétés peuvent-ils optimiser le double effet des dépenses publicitaires sur les consommateurs et les investisseurs ? Les marques reflètent et influencent en même temps l'information du marché d'actions. Quel impact l'image de la marque a-t-elle sur les investisseurs ? Les investissements dans le capital de la marque aident-ils, par ricochet, à attirer les investisseurs, ce qui renforcerait ainsi la performance des actions d'une entreprise, indépendamment de la contribution de la marque à l'augmentation des flux de trésorerie de l'entreprise ? Des recherches ultérieures pourraient déterminer si les analystes modèrent l'effet des erreurs de valorisation et des horizons d'investisseurs sur les investissements marketing et les canaux employés pour ce faire. Enfin, les objets d'étude pourraient être élargis à d'autres pays pour voir si les effets des marques de maisons de courtage seraient similaires et si des facteurs culturels influenceraient l'impact des marques de maisons de courtage.

Excerpt of an article concerning the French brokerage house Exane BNP Paribas that highlights the marketing policies of brokerage houses:

"Exane BNP Paribas est classée dans le Top 10 de l'intermédiation actions européennes"

Sous la marque Exane BNP Paribas, créée en 2004 à l'issue de l'accord de partenariat avec BNP Paribas, le groupe Exane propose aux institutionnels des services incluant la recherche, la vente et l'exécution sur les actions européennes, peut-on lire par ailleurs sur son site Internet : "Exane BNP Paribas est classée dans le Top 10 de l'intermédiation actions européennes, notamment grâce au développement d'une importante plateforme à Londres qui a été le moteur de sa stratégie européenne." Avec plus de 500 valeurs européennes suivies (dont 70 % non françaises), 80 analystes et 80 vendeurs et sales-traders, Exane BNP Paribas "bénéficie d'une taille critique et d'une expertise reconnue dans ce métier où il mène une politique d'investissement ambitieuse, régulière et soutenue", lit-on encore sur son site. Exane BNP Paribas sert plus de 1.200 institutionnels à travers le monde en s'appuyant sur huit implantations (Paris, Londres, Genève, Francfort, Milan, Zürich, New York, et Singapour).

Source : http://trends.levif.be/economie/banque-et-finance/actions-exane-bnp-paribasouvrira-une-succursale-a-bruxelles/article-normal-188685.html

Table 3 - Empirical Overview of Thesis Studies
Study 1 (Ch. 3). Title: Do Brokerage House Brands Matter for Equity Markets? Dependent variable(s): cumulative abnormal returns. Independent variables: brokerage house indicator variables; brokerage house awareness, performance, reputation. Data sources: IBES, CRSP, Compustat, Carter-Manaster IB Prestige, Thomson-Reuters, Institutional Investor Survey. Control variables: analyst, firm and recommendation characteristics. Primary analysis: ordinary least squares. Sample size: 47,345 recommendation changes. Time frame: 2000-2014.

Study 2 (Ch. 4). Title: Do Brokerage House Brands Matter for Competitors? Dependent variable: Leader Status. Independent variables: brokerage house indicator variables; brokerage house awareness, performance, reputation. Data sources: IBES, CRSP, Compustat, Carter-Manaster IB Prestige, Thomson-Reuters, Institutional Investor Survey. Control variables: analyst, firm and recommendation characteristics. Primary analysis: logit regressions. Sample size: 30,619 recommendation changes. Time frame: 2000-2014.

Study 3 (Ch. 5). Title: Stock Mispricing and Marketing Investments. Dependent variables: advertising expenditures; R&D expenditures; marketing expenditures. Independent variables: Mispricing_PV; Mispricing_RH; Mispricing_HP; KZ Index. Data sources: CRSP, Compustat, Thomson-Reuters 13F Filings. Control variables: size, leverage, risk, market share, profitability, sales growth, institutional ownership. Primary analysis: panel regressions. Sample size: 40,966 firm-year observations. Time frame: 1980-2014.

Study 4 (Ch. 6). Title: Investor Horizon and Marketing Investments. Dependent variable: advertising expenditures. Independent variables: investor horizon; blockholders; CEO Equity-Bonus Ratio; Incentive Ratio. Data sources: CRSP, Compustat, Execucomp, Thomson-Reuters 13f Filings, Bushee web site. Control variables: size, leverage, risk, market share, profit margin, sales growth, investor churn, Bushee investor classification. Primary analysis: panel logit regression. Sample size: 40,962 firm-year observations. Time frame: 1980-2014.

2.4.6.1 Theoretical determinants of mispricing

Mispricing emerged gradually as a subject of interest for corporate finance. Prior to about the year 2000, stock markets were still thought to be efficient in that prices reflect all available information at a given moment. The TMT bubble prompted considerable and abrupt stock market movements that seemed unwarranted given the information available at the time. This prompted considerable research into what information can drive prices away from fundamentals, be they undervalued or overvalued (mispricing). The financial crisis of 2007-2008 increased the interest in the area of stock mispricing.

DO BROKERAGE HOUSE BRANDS MATTER FOR INVESTORS?

Andrew Zylstra, ESCP Europe, Panthéon-Sorbonne University, [email protected]
Sandrine Macé, ESCP Europe, [email protected]

Prior research in accounting and finance has relegated brokerage houses to a support role for security analysts.

1: 14th Interdisciplinary Workshop on Intangibles and Intellectual Capital held at Ludwig-Maximilians-University in Munich in September 2018
2: Marketing Strategy Meets Wall Street held at Insead in Fontainebleau in June 2019

Table 2 displays the transition probabilities (Panel A) and the distribution of the magnitude of the recommendation changes (Panel B). The results are consistent with Loh & Stulz (2011). First, most recommendation changes involve the Hold, Buy and Strong Buy recommendation levels. Second, 54% of recommendation changes concern 1-notch upgrades or downgrades, a further 45% of recommendation changes concern 2-notch upgrades or downgrades and few recommendation changes exceed 2 notches.
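Since cumulative abnormal returns are the dependent variable in Study 1, a minimal sketch of how a market-adjusted CAR around a recommendation change can be computed is shown below. The column names ('event_id', 'day', 'ret', 'mkt') and the event window are hypothetical, and the study's exact abnormal-return model may differ.

import pandas as pd

def compute_car(df: pd.DataFrame, pre: int = 0, post: int = 1) -> pd.Series:
    # df: one row per stock-day with hypothetical columns 'event_id',
    # 'day' (trading-day index relative to the recommendation change),
    # 'ret' (stock return) and 'mkt' (market return).
    window = df[(df["day"] >= pre) & (df["day"] <= post)].copy()
    window["ar"] = window["ret"] - window["mkt"]   # market-adjusted abnormal return
    return window.groupby("event_id")["ar"].sum()  # CAR per recommendation change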
Table 1 - Variables, Descriptions and Data Sources.

Table 2, Panel A - Transition probabilities of recommendation changes (counts; row percentages on the second line of each row)

Prior \ Current     1 (Sell)   2 (Underperf.)   3 (Hold)   4 (Buy)   5 (Strong Buy)   Total
1 (Sell)                   0         69            1,941         6         104          2,120
  %                        0          3.25          91.56        0.28        4.91         100
2 (Underperform)          62          0             2,486      242          12           2,802
  %                     2.21          0             88.72        8.64        0.43         100
3 (Hold)               1,418      2,084                 0     8,485       9,443         21,430
  %                     6.62       9.72                 0       39.59      44.06          100
4 (Buy)                   18        230             8,873         0       1,817         10,938
  %                     0.16       2.1              81.12         0        16.61          100
5 (Strong Buy)            63         16             8,450     1,526           0         10,055
  %                     0.63       0.16             84.04       15.18         0           100
Total                  1,561      2,399            21,750    10,259      11,376         47,345
  %                     3.3        5.07             45.94       21.67      24.03          100

Panel B - Distribution of recommendations by magnitude of change

Rec change   Frequency   Percentage
-4                  63        0.13
-3                  34        0.07
-2              10,098       21.33
-1              12,545       26.5
+1              12,857       27.16
+2              11,626       24.56
+3                  18        0.04
+4                 104        0.22
Total           47,345      100

Table 3 - Descriptive statistics of sample
Sample: 66 brokerage houses; 24,611 upgrades; 22,734 downgrades.
Dependent variables (mean, standard deviation): CAR upgrades 4.47% (4.02); CAR downgrades -4.25% (3.86); ASCAR upgrades 204.7 (193.5); ASCAR downgrades 202.5 (214.1).
Brokerage house characteristics: Brand_Score -0.15 (0.45); BH_Awareness 12.2 years (4.0); BH_Error 3.79% (2.91); IB 0.91 (0.27); Industry_Recognition 3.17 (10.4).
Analyst characteristics: Error_Difference 0.01 (0.09); Analyst_Experience 7.14 (4.16); Analyst_Boldness 0.19 (0.48); Stock_Coverage 3.78 (2.81); Reco_Frequency 16.7 (7.09).
Recommendation characteristics: Reco_Chg_Size 1.46 (0.52); Consensus_Distance 0.04 (0.10).
Firm characteristics: Book_to_Market 0.56 (0.43); Inst_Ownership 0.73 (0.22); Firm_Size ('000).

Table 4 - Correlation table. Pairwise correlations between ASCAR and the analyst, recommendation, firm and brokerage variables are uniformly small (all |r| ≤ .40; the largest are Stock Coverage with Reco Frequency, .40, and Analyst Experience with Brokerage Awareness, .29). Asterisks denote significance at the 5% level. Industry Recognition and Brokerage Error have been treated to remove the effect of brokerage house size (see 3.3.2).

Table 5 - Investor response to brokerage house brand scores on ASCAR (specifications (1) and (2)).
Table 6 - Investor response to brokerage house determinants (specifications (1) and (2)).

EXTENDING THE BROKERAGE HOUSE BRAND SIGNAL MODEL (DEVELOPED IN CHAPTER 3) TO STUDYING COMPETITION BETWEEN BROKERAGE HOUSES
We extend the brokerage house brand signaling framework developed in Chapter 3 to study competition between brokerage houses. More specifically, we apply the brand framework to see if competing brokerage houses respond to information in a brokerage house's brand signal around a recommendation change. Empirically, we develop a measure of a brokerage house's information leadership status on a stock as our dependent variable and use the brand score, brokerage house brand determinants and control variables from Chapter 3 to investigate the impact of brokerage house brands on competitors. We find that competing brokerage houses respond differently than investors to a brokerage house brand. Furthermore, the brand determinants performance and reputation are influential, but awareness is negatively related to brokerage house information leadership. Some results are in line with expectations and others are unexpected, leading us to reason that there are sector-specific factors that affect competitor response to brokerage house brands.
Key words: competitive strategy, brand signal, equity market, marketing-finance interface.

4.1 Introduction
Deregulation and emerging technologies have transformed equity markets considerably over the past 20 years. Brokerage houses must compete fiercely to protect their profits. Stiff competition in this mature market means that brokerage houses are constantly confronting competitors when marketing their services. The effectiveness of brokerage house marketing programs depends on the response of customers and competitors. Marketing research has typically focused on consumer response and paid less notice to competitor response.

Table 2 displays the transition probabilities (Panel A) and the distribution of the magnitude of the recommendation changes (Panel B). The results are consistent with (Loh & Stulz, 2011) and Chapter 3. First, most recommendation changes involve the Hold, Buy and Strong Buy recommendation levels. Second, 54% of recommendation changes concern one-notch upgrades or one-notch downgrades, a further 45% concern 2-notch upgrades or downgrades, and few recommendation changes exceed 2 notches.

Table 1, Panel A - Transition probabilities of recommendation changes (counts; row percentages on the second line of each row)

Prior \ Current     1 (Sell)   2 (Underperf.)   3 (Hold)   4 (Buy)   5 (Strong Buy)   Total
1 (Sell)                   0         52            1,414         4          74          1,544
  %                        0          3.37          91.58        0.26        4.79         100
2 (Underperform)          27          0             1,483       158           4          1,672
  %                     1.61          0             88.7         9.45        0.24         100
3 (Hold)               1,044      1,409                 0     5,527       6,027         14,007
  %                     7.45      10.06                 0       39.46      43.03          100
4 (Buy)                    4        135             5,597         0       1,351          7,087
  %                     0.06       1.9              78.98         0        19.06          100
5 (Strong Buy)            50         14             5,190     1,055           0          6,309
  %                     0.79       0.22             82.26       16.72         0           100
Total                  1,125      1,610            13,684     6,744       7,456         30,619

Panel B - Distribution of recommendations by magnitude of change

Rec change   Frequency   Percentage
-4                  50        0.16
-3                  18        0.06
-2               6,369       20.8
-1               8,088       26.41
+1               8,413       27.48
+2               7,599       24.82
+3                   8        0.03
+4                  74        0.24
Total           30,619      100

Table 2 - Descriptive Statistics
Sample: 66 brokerage houses; 16,099 upgrades; 14,520 downgrades.
Dependent variable (mean, standard deviation): Leader_Status upgrades .100 (.301); Leader_Status downgrades .124 (.329).
Brokerage house characteristics: BH_Awareness 11.91 years (3.88); BH_Error 3.81% (7.47); Industry_Recognition 4.11 (11.95).
Analyst characteristics: Error_Difference 0.01 (0.08); Analyst_Experience 7.08 (4.04); Analyst_Boldness 0.19 (0.51); Stock_Coverage 3.90 (2.93); Reco_Frequency 16.7 (6.94).
Recommendation characteristics: Reco_Chg_Size 1.47 (0.53); Consensus_Distance 0.04 (0.11).
Firm characteristics: Book_to_Market 0.53 (0.40); Inst_Ownership 0.75 (0.20); Firm_Size ('000) 9,200,000 (25,000,000).

Table 3 - Correlation table. Pairwise correlations among the analyst, brokerage, recommendation and firm variables are uniformly small (all |r| ≤ .35; the largest are Stock Coverage with Reco Frequency, .35, and Brokerage Awareness with Analyst Experience, .27). Asterisks denote significance at the 5% level.

Table 4 - Competitor response to brokerage house brand score (logit; DV = Leader Status; standard errors in parentheses)

                         (1) Upgrades         (2) Downgrades
Analyst Experience       -0.007 (0.01)         0.025*** (0.01)
Analyst Boldness          0.171*** (0.05)      0.118*** (0.05)
Stock Coverage            0.000 (0.01)         0.031*** (0.01)
Reco Frequency           -0.017*** (0.00)     -0.024*** (0.00)
Error_Difference          0.633** (0.30)      -0.723** (0.30)
Reco Chg Size            -0.082 (0.05)        -0.318*** (0.05)
Consensus Distance       -0.677** (0.30)       0.657*** (0.25)
Book-to-market           -0.305*** (0.08)     -0.558*** (0.08)
Inst. Ownership           0.027 (0.15)         0.533*** (0.14)
Firm Size                -0.000*** (0.00)     -0.000*** (0.00)
Brand Score               0.048 (0.06)        -0.156*** (0.06)
Constant                 -1.190*** (0.22)     -1.645*** (0.23)
Observations             16,099               14,520
Industry / Year effects  Yes / Yes            Yes / Yes
Pseudo R-squared         0.0298               0.0414
LR chi-square            314.3                450.7
*** p<0.01, ** p<0.05, * p<0.1

Table 5 - Competitor response to brokerage house characteristics (logit; DV = Leader Status; standard errors in parentheses)

                         (1) Upgrades          (2) Downgrades
Analyst Experience       -0.006 (0.007)         0.029*** (0.007)
Analyst Boldness          0.144*** (0.050)      0.097** (0.046)
Stock Coverage            0.003 (0.011)         0.038*** (0.010)
Reco Frequency           -0.015*** (0.005)     -0.025*** (0.005)
Error_Difference          0.502* (0.297)       -0.867*** (0.304)
Reco Chg Size            -0.059 (0.053)        -0.318*** (0.053)
Consensus Distance       -0.632** (0.318)       0.776*** (0.257)
Book-to-market           -0.292*** (0.080)     -0.523*** (0.078)
Institutional ownership   0.086 (0.150)         0.505*** (0.142)
Firm size                -0.000** (0.000)      -0.000*** (0.000)
Brokerage Awareness      -0.043*** (0.010)     -0.081*** (0.009)
Investment Bank           0.264*** (0.101)      0.564*** (0.114)
Brokerage Error          -2.632*** (1.012)     -1.912* (0.987)
Industry Recognition      0.012*** (0.003)     -0.004 (0.003)
Constant                 -1.055*** (0.242)     -1.566*** (0.259)
Observations             16,099                14,520
Industry / Year effects  Yes / Yes             Yes / Yes
Pseudo R-squared         0.0336                0.0491
LR chi-square            354.4                 535.2
*** p<0.01, ** p<0.05, * p<0.1
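Tables 4 and 5 above are logit regressions of Leader Status on brand and control variables with industry and year effects, estimated separately on upgrades and downgrades. A minimal sketch of this kind of estimation in Python with statsmodels follows; the DataFrame, the file name and all column names are illustrative assumptions, not the authors' actual code or data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per recommendation change; column names are assumed for illustration.
df = pd.read_csv("recommendation_changes.csv")

# Logit of information-leadership status on the brand score plus controls,
# with industry and year fixed effects entered as dummy variables.
formula = (
    "leader_status ~ brand_score + analyst_experience + analyst_boldness"
    " + stock_coverage + reco_frequency + error_difference + reco_chg_size"
    " + consensus_distance + book_to_market + inst_ownership + firm_size"
    " + C(industry) + C(year)"
)
res = smf.logit(formula, data=df[df["upgrade"] == 1]).fit(disp=False)
print(res.summary())    # coefficients and standard errors
print(res.prsquared)    # McFadden pseudo R-squared, as reported in the tables
```

Downgrades would be estimated the same way on the complementary subsample.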
Our baseline specifications use a modified four-variable version of the KZ index that omits Q. We denote the four-variable version of the KZ index simply by KZ_it:

$$KZ_{it} = -1.002 \times \frac{CF_{it}}{A_{it-1}} - 39.368 \times \frac{DIV_{it}}{A_{it-1}} - 1.315 \times \frac{C_{it}}{A_{it-1}} + 3.139 \times LEV_{it}$$

where $CF_{it}$ denotes cash flow, $DIV_{it}$ dividends, $C_{it}$ cash holdings, $LEV_{it}$ leverage, and $A_{it-1}$ one-year-lagged total assets.
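The four-variable index above maps directly to a few lines of code. A minimal sketch in Python, assuming a firm-year pandas DataFrame; the column names (cf, div, c, lev, at_lag) and file name are illustrative placeholders, not a documented schema:

```python
import pandas as pd

def kz_index(df: pd.DataFrame) -> pd.Series:
    """Four-variable KZ index (Q omitted), as defined in the text.

    Cash flow (cf), dividends (div) and cash holdings (c) are scaled by
    one-year-lagged total assets (at_lag); leverage (lev) enters unscaled.
    """
    return (-1.002 * df["cf"] / df["at_lag"]
            - 39.368 * df["div"] / df["at_lag"]
            - 1.315 * df["c"] / df["at_lag"]
            + 3.139 * df["lev"])

df = pd.read_csv("firm_years.csv")   # hypothetical firm-year panel
df["kz"] = kz_index(df)
```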
Table 1 - Summary statistics

Variable                  Mean    Std. Dev.   Min     Max
Advsale                   0.03    0.04        0.00    0.38
R&Dsale                   0.07    0.15        0.00    1.48
Size                      5.37    1.97        0.61    10.30
Leverage                  0.20    0.20        0.00    0.91
Sales growth              0.04    0.35       -0.89    5.14
Profitability             0.10    0.17       -0.95    0.40
Market share              0.11    0.20        0.00    1.00
Institutional ownership   0.44    0.28        0.00    0.97
Risk                      0.04    0.02        0.01    0.13
Mispricing_PV             0.00    0.64       -2.87    3.54
Mispricing_RH            -0.00    0.52       -2.30    3.38
Mispricing_HP             0.00    0.57       -3.62    3.81
Book-to-market            0.63    0.70       -1.18    3.91
Cash flow                 0.07    0.15       -0.94    0.32
KZ score                 -0.01    1.50       -7.06    4.12
Notes: Details of variable definitions are shown in the Appendix. To reduce the influence of outliers, all accounting variables were winsorised at the 1st and 99th percentiles.

Table 2 - Stock mispricing and advertising and R&D expenditures
This table presents panel-data regressions of the causal relation between marketing expenditures and our mispricing measure, with all control variables included. The dependent variable in column 1 is advertising as a percentage of sales; in column 2 it is R&D as a percentage of sales. The main independent variable in both columns, our measure of mispricing, is based on the mispricing measure of (Hoberg & Phillips, 2010). We control for size, leverage, profitability, market share, sales growth, risk, institutional ownership, cash flow and the replaced R&D and advertising variables. All regressions include year dummy variables. We control for industry unobservable heterogeneity by adding industry fixed effects. Standard errors are robust to heteroscedasticity and clustered by industry. Constants are not reported. Robust standard errors are shown in parentheses. ***, **, * indicate significance at 1, 5, and 10% respectively. For more detailed information on variables see the Appendix.

                          (1) Advertising/Sales   (2) R&D/Sales
Size                      -0.0792*** (0.0236)      0.0450 (0.0421)
Leverage                  -0.486** (0.188)        -1.169*** (0.226)
Profitability              0.576** (0.244)        -2.801*** (0.547)
Market share               0.436*** (0.143)       -0.667*** (0.212)
Sales growth               0.0492* (0.0271)        0.0477 (0.0762)
Cash flow                 -0.940*** (0.204)        0.161 (0.210)
Risk                      -0.453 (1.569)           7.054*** (2.260)
Institutional ownership   -0.215 (0.128)           0.354** (0.169)
Advertising dummy          0.358*** (0.0641)
R&D dummy                                          0.00830 (0.151)
Mispricing_HP             -0.241*** (0.0379)      -0.365*** (0.0573)
Observations              19,195                  17,287
Adj. R-squared            0.459                   0.519
Year FE / Industry FE     Yes / Yes               Yes / Yes

Table 3 - Robustness check of the mispricing proxy (DV = Advertising/Sales)

                          (1)                  (2)                  (3)                  (4)
Size                      -0.101*** (0.0237)   -0.0660** (0.0246)   -0.0792*** (0.0236)  -0.0991*** (0.0233)
Leverage                  -0.481*** (0.173)    -0.694*** (0.217)    -0.486** (0.188)     -0.537*** (0.175)
Profitability              0.720*** (0.244)     0.658** (0.272)      0.576** (0.244)      0.632*** (0.226)
Market share               0.351*** (0.124)     0.448*** (0.137)     0.436*** (0.143)     0.356*** (0.126)
Sales growth               0.0375 (0.0278)      0.0305 (0.0254)      0.0492* (0.0271)     0.0338** (0.0151)
Cash flow                 -0.973*** (0.198)    -0.981*** (0.204)    -0.940*** (0.204)    -0.922*** (0.195)
Risk                       0.263 (1.475)        0.565 (1.516)       -0.453 (1.569)        1.157 (1.417)
Institutional ownership   -0.0338 (0.127)      -0.255** (0.120)     -0.215 (0.128)       -0.0214 (0.119)
Advertising dummy          0.303*** (0.0644)    0.354*** (0.0627)    0.358*** (0.0641)    0.309*** (0.0650)
Mispricing_PV             -0.233*** (0.0351)
Mispricing_RH                                  -0.218*** (0.0397)
Mispricing_HP                                                       -0.241*** (0.0379)
Book-to-market                                                                           -0.233*** (0.0357)
Observations              23,719               18,683               19,195               24,975
R-squared                  0.454                0.461                0.459                0.454
Year FE / Industry FE     Yes / Yes in all specifications

Table 4 - Testing the moderating effect of equity financing dependence

                          (1) Advert./Sales     (2) R&D/Sales
Size                       0.00333 (0.0358)      0.0475 (0.0421)
Leverage                   0.0320 (0.207)       -0.672*** (0.229)
Profitability             -1.499*** (0.158)     -2.905*** (0.565)
Market share               0.0788 (0.174)       -0.681*** (0.209)
Sales growth               0.0148 (0.0428)       0.0130 (0.0584)
Cash flow                 -0.145 (0.179)         0.0870 (0.212)
Risk                       0.853 (1.341)         8.184*** (2.166)
Institutional ownership   -0.0549 (0.128)        0.380** (0.170)
Advertising dummy          0.363*** (0.0543)
R&D dummy                                        0.0104 (0.151)
Mispricing_HP             -0.209*** (0.0280)    -0.349*** (0.0511)
Equity_financing          -0.0922*** (0.0278)   -0.0962*** (0.0205)
Observations              19,080                17,209
R-squared                  0.371                 0.522
Year FE / Industry FE     Yes / Yes

Notes: We control for size, leverage, profitability, market share, sales growth, risk, institutional ownership, cash flow and the replaced R&D and advertising variables. All regressions include year dummy variables. We control for industry unobservable heterogeneity effects by adding industry fixed effects. Standard errors are robust to heteroscedasticity and clustered by industry. Constants are not reported. Robust standard errors are shown in parentheses. ***, **, * indicate significance at 1, 5, and 10% respectively. For more detailed information on variables see the Appendix.

Table 1 presents summary statistics for all of the variables used in this paper. The smaller number of firms present in the Execucomp database reduces the number of observations available to test for the effect of executive compensation.

Table 1 - Summary Statistics

Variable           N        Mean    SD      Min      Max
ADVAT              19,816   -4.09   1.56    -12.59   -0.52
ADVSALE            19,810   -4.19   1.43    -13.66   -0.78
MARKTINGAT         13,140   -0.80   1.07     -3.26    2.56
SIZE               19,816    5.65   2.04      0.51   10.61
LEVERAGE           19,816    0.21   0.21      0.00    0.94
PROFIT MARGIN      19,816    0.10   0.16     -0.87    0.42
MARKET SHARE       19,816    0.12   0.22      0.00    1.00
RISK               19,816    0.04   0.02      0.01    0.12
SALES GROWTH       19,816    0.05   0.34     -0.82    5.16
ATURNOVER          18,897    0.45   0.10      0.15    0.77
ACHURNRATIO        18,910    0.25   0.06      0.13    0.47
BUSHEETRA          18,917    0.18   0.18      0.00    0.89
NB BLOCKHOLDERS    18,917    1.80   1.62      0.00    6.00
EQUITY INCENTIVE    4,148    0.24   0.61      0.00    4.03
INCENTIVE RATIO     3,498    0.30   0.26      0.01    0.98
Notes: Details of variable definitions are in the Appendix. The study sample includes all available firms over the 1980-2014 period. To reduce the influence of outliers, 1% of extreme values were set to missing for each accounting variable in the analysis.
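The two outlier treatments used in the notes above (winsorising accounting variables at the 1st and 99th percentiles in Chapter 5, and setting the 1% most extreme values to missing in Chapter 6) are short pandas helpers. A sketch; how the 1% is split between the two tails is an assumption:

```python
import pandas as pd

def winsorise(s: pd.Series, lo: float = 0.01, hi: float = 0.99) -> pd.Series:
    """Clip a series at its 1st and 99th percentiles (Chapter 5 treatment)."""
    return s.clip(lower=s.quantile(lo), upper=s.quantile(hi))

def trim_to_missing(s: pd.Series, lo: float = 0.005, hi: float = 0.995) -> pd.Series:
    """Set extreme values to missing instead of clipping (Chapter 6 treatment)."""
    return s.mask((s < s.quantile(lo)) | (s > s.quantile(hi)))
```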
Table 2 - Marketing Expenditures and Investor Horizon
This table reports stepwise panel-data regressions of marketing expenditures on investor horizon variables and control variables. The main dependent variable is ADVAT, advertising expenses scaled by total assets. The main independent variable is ATURNOVER, the one-year-lagged weighted average of a firm's institutional investor portfolio turnover. We control for firm size, leverage, book-to-market, institutional ownership, risk, market share, profitability and sales growth. All regressions include year dummy variables. We control for firm unobservable heterogeneity by adding firm fixed effects. Standard errors are robust to heteroskedasticity and clustered by firms. Constants are not reported. Robust standard errors are shown in parentheses. ***, **, * indicate significance at 1, 5, and 10% respectively. For more detailed information on variables see the Appendix.

Controls are added stepwise over seven specifications. ATURNOVER is negative and significant throughout, moving from -0.272*** (0.069) in the baseline through -0.275***, -0.280***, -0.267***, -0.269*** and -0.251*** to -0.222*** (0.068) in the full specification. In the full specification (7): SIZE -0.213*** (0.016), LEVERAGE -0.239*** (0.056), RISK 1.168** (0.463), MARKET SHARE 0.137* (0.080), PROFIT MARGIN -0.532*** (0.077), SALES GROWTH 0.233*** (0.026); Observations 19,816 (19,845 in the baseline); R-squared 0.890 (0.883 in the baseline); year and firm fixed effects are included in all specifications.

Table 3 - Marketing expenditures and Investor Horizon: Robustness Checks

                     (1) ADVSALE          (2) MKTAT            (3) ADVAT           (4) ADVAT
SIZE                  0.0433*** (0.0153)  -0.496*** (0.0134)   -0.216*** (0.0162)  -0.215*** (0.0161)
LEVERAGE             -0.127** (0.0524)    -0.189*** (0.0458)   -0.226*** (0.0553)  -0.234*** (0.0556)
RISK                  0.691 (0.442)        1.149*** (0.398)     1.212*** (0.465)    1.162** (0.461)
MARKET SHARE         -0.102 (0.0759)       0.134* (0.0772)      0.166** (0.0819)    0.153* (0.0811)
PROFIT MARGIN        -1.402*** (0.0742)   -0.754*** (0.0649)   -0.537*** (0.0766)  -0.533*** (0.0762)
SALES GROWTH          0.00103 (0.0235)     0.280*** (0.0224)    0.231*** (0.0258)   0.234*** (0.0255)
ATURNOVER            -0.185*** (0.0661)   -0.212*** (0.0526)
ACHURNRATIO                                                    -0.240** (0.120)
BUSHEETRA                                                                          -0.0693* (0.0373)
Observations         19,810               13,216               19,740              19,901
R-squared             0.878                0.912                0.890               0.890
Year FE / Firm FE    Yes / Yes in all specifications

Notes: This table reports the results of our robustness checks. Specifications 1-2 show our tests with two alternative measures of marketing expenditure. Specification 1 shows our regression with advertising divided by sales. Specification 2 contains another measure of marketing expenditure from the literature (MARKTINGAT).
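The Table 2 specification above (lagged investor turnover, firm and year fixed effects, heteroskedasticity-robust standard errors clustered by firm) can be sketched with statsmodels. All column names, including the gvkey firm identifier and the file name, are illustrative assumptions:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_year_panel.csv").sort_values(["gvkey", "year"])
df["l_aturnover"] = df.groupby("gvkey")["aturnover"].shift(1)   # one-year lag

cols = ["advat", "l_aturnover", "size", "leverage", "risk",
        "market_share", "profit_margin", "sales_growth", "year", "gvkey"]
sub = df.dropna(subset=cols)

# Firm and year fixed effects as dummy variables; SEs clustered by firm.
res = smf.ols(
    "advat ~ l_aturnover + size + leverage + risk + market_share"
    " + profit_margin + sales_growth + C(year) + C(gvkey)",
    data=sub,
).fit(cov_type="cluster", cov_kwds={"groups": sub["gvkey"]})
print(res.params["l_aturnover"])   # compare with the ATURNOVER row of Table 2
```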
Table 4 presents our results, focusing only on the parameter estimates of the endogenous variables. The table also reports two diagnostic tests: first, a test of whether the first-differenced residuals are autocorrelated of order 2; second, the Hansen test of the over-identifying restrictions. Control variables (from Table 2) are used (parameter estimates not shown). The table shows the p-value of the hypothesis test that the first-differenced residuals are autocorrelated of order 2, and the p-value of the Hansen test of the null hypothesis of validity of the over-identifying moment conditions. T-statistics are reported in parentheses and the symbols ***, **, * denote significance at 1%, 5% and 10%.

Table 4 - Marketing Expenditures and Investor Horizon: Causality Analysis

Dependent variable:        ADVAT                ATURNOVER
L.ADVAT                    0.748*** (28.40)     0.002 (0.73)
L.ATURNOVER               -0.458*** (-2.87)     0.486*** (23.35)
N                          19,796               19,796
P-value of AR(2) test      0.35                 0.19
P-value of Hansen test     0.23                 0.14

Table 5 - Marketing expenditures and Investor Horizon: Splits. The sample is split on the number of blockholders (= 0 versus > 0), the CEO equity-bonus ratio (= 0 versus > 0) and the incentive ratio (below versus above the median). L.ATURNOVER is negative throughout but significant only where monitoring or long-term incentives are weak: -0.261** (0.127) with no blockholders versus -0.140 (0.090) with blockholders; -0.594** (0.268) with a zero equity-bonus ratio versus -0.380 (0.315) otherwise; -0.616*** (0.236) below the median incentive ratio versus -0.501 (0.371) above. All splits include the Table 2 controls plus year and firm fixed effects; sample sizes range from 1,749 to 13,963 firm-year observations, and R-squared ranges from 0.898 to 0.963.

Bushee uses portfolio characteristics to classify investors. He divides investors across three factor variables: portfolio turnover, portfolio concentration and trading sensitivity to current earnings. He identifies three clusters of data along those three factors. "Transient investors" have the highest turnover and the highest use of momentum strategies. We use Bushee's classification to compute the percentage of a firm's institutional investors who are transient investors.

NB BLOCKHOLDERS: Number of institutional blockholders (with 1% or more ownership) in a firm's ownership.
EQUITY/BONUS RATIO: See (Currim, Lim, & Kim, 2012). Short-term compensation (cash bonus) / long-term compensation (restricted stock and stock options).
INCENTIVE RATIO.

Conclusion
This chapter compares the response of competitors to that of the investors studied in Chapter 3. To do so, we use the same model and empirical methodology as in Chapter 3, looking at the influence of brokerage house brands on competitors in the US market. The findings suggest that brokerage house brands are used by competitors as signals when deciding how to respond to the recommendation changes of brokerage houses.

The feedback effect has consequences for marketing managers, top management teams, and the other stakeholders in firms, because the actions of one player may come back to affect that player itself. The literature review table in Section 1.1 illustrates the rich areas of research that the 'real effects of financial markets' perspective can be used to study. We propose two examples below, from two different areas of marketing, to illustrate our call for further research into the role of the real effects of financial markets on marketing investments.
i. Research shows that advertising expenditures affect and reflect information in stock prices.
Do advertising expenditures used for the marketing of a company's products feed back to attract investors in the firm? How much do advertising expenditures contribute directly to a firm's stock market performance versus the indirect effect via investors? Can firm managers optimize the double effect of advertising expenditures on consumers and investors?

Research context
Uncertainties about marketing effectiveness, increasing pressure from financial markets on top management teams, and the impression that marketing expenditures merely inflate the overall bill have prompted marketers to prove the value of marketing investments (see, for example, the Marketing Science Institute research priorities for 2014-2016). This led to the emergence of the marketing-finance interface about twenty years ago. This research stream examines whether and how marketing investments create shareholder value and how equity markets mirror information about marketing expenditures. These expenditures concern advertising, R&D, and marketing assets such as the brand and customer satisfaction (see (S. Srinivasan & Hanssens, 2009) for a review). The second research stream of the marketing-finance interface, which appeared more recently, studies how financial market participants, such as analysts, investors and shareholders, can prevent marketing investments from generating shareholder value (see (Chakravarty & Grewal) for a review).

8.1 Introduction
8.1.1 The two research streams study marketing investments separately. The first stream studies whether or not investors take into account information about marketing actions, such as advertising expenditures, in their decision-making. The second stream examines whether or not a firm's advertising expenditures influence investors' intentions and financial markets. This split also concerns assets such as customer satisfaction: for instance, (Fornell, Mithas, & Morgeson, 2006) study the influence of information flows about customer satisfaction on firms, while other studies take the opposite direction and examine the impact of bondholders on customer satisfaction. Table 1 below illustrates in more detail how the information flows of the different marketing investments have been considered separately in the marketing and finance literature.

Marketing research has sought to understand what leads consumers to inform themselves, their sources of information, how this information is processed, and its impact at the different stages of their decision-making. The marketing-finance interface has extended this research to understand, first, how information related to marketing investments is valued by equity markets and, second, how information about financial market participants affects marketing investments. By contrast, research in finance examines the effect of information within financial markets, focusing on market-level outcomes such as informational efficiency and the effects caused by its reduction.

Recently, finance has adopted an approach derived from information economics which considers that information flows could be bidirectional. It studies the effects of these flows on real strategic decisions taken by top management teams. Financial analysts traditionally view the rise and fall of the stock price as an indicator of investors' expectations about the firm's future cash flows. Whatever the stock price, it can reflect investors' agreement or disagreement with managerial decisions. This investor-specific information reflected in the stock price can then, in turn, affect the decisions taken by top management teams. The 'real effects of financial markets' perspective studies how information flows are bidirectional between firms and financial markets, in particular secondary markets. Under this perspective, stock prices both reflect information about investors' expectations and transmit information to top management teams. We apply the 'real effects of financial markets' perspective in order to better understand the relationship between marketing investments and financial markets. To explore this perspective, we adopt a multidisciplinary approach combining marketing research and finance research.

(P. Bond, Edmans, & Goldstein, 2012) argue that the theory of informational market efficiency should be extended to include the directional flows of information between all market participants, which would help to give a better idea of the real effects of equity markets, in particular the effect of secondary markets on firms. We argue that research on the marketing-finance interface should include the bidirectional information flows that current researchers examine separately, and that it should be grounded in the 'real effects of financial markets' perspective. The marketing-finance interface thereby obtains a broader conceptual basis, which opens up new research possibilities. The term 'perspective' is used for the 'real effects of financial markets' because there is a debate around extending market efficiency theory; no new conceptual framework is therefore created. Our thesis also argues that this extension would give the marketing-finance interface a sufficiently solid theoretical foundation and would open up new research avenues.

8.2 Conceptual framework
Since Akerlof's famous 1970 article on the used-car market ("The Market for Lemons") and the role played by information in transactions, academic research has investigated extensively the role information plays in decision-making. Our thesis argues that the relationship between marketing investments and financial markets is a bidirectional one, in which information is conveyed by stock prices. The bidirectional flows reflect the emergence of the body of research on the real effects of financial markets (P. Bond, Edmans, & Goldstein, 2012). We prefer the term "marketing investments" in our thesis to marketing action (e.g. an advertising expenditure) or marketing asset (e.g. the brand), because the term "investment" encompasses (1) the action or process of investing money that will generate wealth at a future time and (2) the result created by this investing action, in other words the marketing assets that marketing expenditures will create.

The research question reflects our argument about applying the real effects of financial markets to the two research streams of the marketing-finance interface, by considering whether or not information flows are bidirectional between marketing and the equity market: under this view, does stock price information flow bidirectionally between marketing investments and the equity market? In the four studies of our thesis, briefly described below, we investigate four research sub-questions. The sub-questions of the first two studies assess whether information flows about marketing investments have an impact on the equity market. The sub-questions of the last two studies assess whether or not information coming from investors influences marketing investments. Together, the four sub-questions aim to determine whether or not the information flows between marketing investments and the equity market are bidirectional. To study our research sub-questions, we use theoretical frameworks, financial data and methodologies from the marketing and finance literatures.

Empirical overview of the thesis studies: the résumé reproduces, in French, the overview given in Table 3 above, with the titles, dependent and independent variables, data sources, control variables, methods (ordinary least squares, logit and panel regressions), sample sizes (47,345 and 30,619 recommendation changes; 40,966 and 40,962 firm-year observations) and time frames (2000-2014 and 1980-2014) of the four studies of Chapters 3 to 6.

8.3.1 Research sub-question 1 (Study 1, Chapter 3)
The first research sub-question focuses on the impact of brokerage house brands, a type of marketing investment, on equity investors.

Study 2, an extension of Study 1, shows that in addition to having an impact on investors, brokerage house brands have an impact on competing brokerage houses. Moreover, we show how the characteristics of a brokerage house influence its competitors. A brokerage house brand carries information for its competitors about its leader status on a given stock, and a brokerage house's characteristics also contribute to this status. Combined, the results of Studies 1 and 2 suggest that marketing investments influence equity markets. They also suggest that competing brokerage houses perceive brands differently from investors, and that the brand affects investors and competing houses differently as well.

Table 3 - Managerial contributions for finance
i. Investors can realise the extent to which a brokerage house's brand affects their response, and the response of competing brokerage houses, to recommendation changes.
ii. Shareholders should be aware that their biases influence the decisions of the firms in which they invest.

Doing research is not easy, and this thesis has several limitations. The first, and in our view the most important, is that we did not test the feedback effect put forward by the 'real effects of financial markets' perspective; we focus instead on the bidirectional nature of the information flows. This is largely because identifying these real effects and operationalising them is difficult. A second limitation is that Chapters 5 and 6 are based on annual data. Higher-frequency data, such as quarterly expenditures, might help improve the overall results. Unfortunately, most American companies do not disclose quarterly data for certain variables, such as R&D, because it is not mandatory. Moreover, we use the marketing data available in databases; we have no more detailed information about other types of marketing investments, such as the cost of promotions or the amount spent on a marketing mix, which could give us a finer understanding of the impact of investors on firms. More detailed advertising and marketing expenditure data are available from marketing research firms, but at a very high price.

8.14.3 Research perspectives
The 'real effects of financial markets' perspective applied to the marketing-finance interface opens the way to many research opportunities, giving access to a broader view of the implications of the feedback effect between marketing investments and the equity market. The feedback effect directly affects marketing departments, executives and the other stakeholders in firms, since their actions can potentially come back against them. The literature review in Table 1 of Section 1.1 shows the richness of the research areas. We propose two examples below, drawn from two areas of marketing, to illustrate our call for more research into the role played by the real effects of financial markets on marketing investments.

+ indicates a positive and significant coefficient (p < 0.05), - indicates a negative and significant coefficient (p < 0.05) and ns indicates not significant.

Appendix 1: Variable Definitions
Advsale: Advertising expenses (xad) / sales (sale). As in (Currim, Lim, & Kim, 2012), we perform a logit transformation on the limited dependent variable, so we use ln(x/(1-x)).
R&Dsale: R&D expenditures (xrd) / sales (sale). As in (Currim, Lim, & Kim, 2012), we perform a logit transformation on the limited dependent variable, so we use ln(x/(1-x)).
Mispricing_PV: Residuals from the regression of book-to-market against age, dividend payer status, leverage, total return volatility and return on equity. See (Pástor & Veronesi, 2003a) for more information.
Mispricing_RH: Residuals from the regression of book-to-market on size, return on equity if negative, and leverage, for each year and each industry. See (Rhodes-Kropf, Robinson, & Viswanathan, 2005) for more information.
Mispricing_HP: Residuals from the regression of book-to-market against age, dividend payer status, leverage, total return volatility and return on equity, for each year and each industry. See (Hoberg & Phillips, 2010) for more information.

Appendix 2: Estimation of Advertising and R&D Expense Values
Advertising and R&D are required to be reported if they are not immaterial as per US GAAP rules (FASB 1974, 1993). However, many firms still choose not to disclose these data, leading to many cases of missing R&D and advertising in Compustat. I extract the data concerning R&D and advertising expenditures for all firms in Compustat between 1980 and 2014. I also look at missing SG&A, which is required to be reported and can be used as a benchmark for this analysis. The table below assesses the size of firms that report advertising, R&D and SG&A relative to the average size of firms as a whole in our sample, to assess whether size influences the tendency to report R&D and advertising. The firms that report R&D are much smaller on average (57.8%) relative to firms that report advertising (157.5%).
The tendency for firms that report R&D to be much smaller than firms that report advertising probably reflects the relative maturity of firms, with smaller firms perhaps more likely to focus on R&D to create value, while the larger firms that report advertising are more likely to focus on value appropriation. Because a large number of firms with no R&D do not report it, we follow the treatment for missing values established in the literature (Hovakimian & Li).
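A sketch of this missing-value treatment in Python follows; xad and xrd are the Compustat advertising and R&D items mentioned in Appendix 1, while the file name and helper function are hypothetical:

```python
import pandas as pd

def fill_missing_marketing(df: pd.DataFrame, col: str) -> pd.DataFrame:
    """Assume non-reporting firms spend nothing; keep a reporting dummy.

    The dummy lets the regressions control for the disclosure decision,
    as with the 'Advertising dummy' and 'R&D dummy' regressors above.
    """
    df[f"{col}_dummy"] = df[col].notna().astype(int)  # 1 if the item is reported
    df[col] = df[col].fillna(0.0)                     # unreported treated as zero
    return df

df = pd.read_csv("compustat_annual.csv")              # hypothetical extract
for item in ("xad", "xrd"):
    df = fill_missing_marketing(df, item)
```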
Firms should cater more to short-term investors than to long-term investors (Polk & Sapienza, 2009). Indeed, whereas long-term investors might wait until the mispricing disappears, investors with a short-term investment horizon are more interested in a swift correction. As a result, when short-term investors are dominant in the firm's ownership, managers are under strong pressure to increase the stock price in the short run. In line with this theoretical insight, (Derrien, Kecskés, & Thesmar, 2013) find that when a firm is undervalued, predominantly short-term investor ownership is associated with lower investment and higher payouts to shareholders.

How blockholders may mitigate the impact of myopic marketing management
Academics usually identify two kinds of long-term investors: blockholders and the remaining long-term investor ownership (Derrien, Kecskés, & Thesmar, 2013). Blockholders take sizeable stakes in firms. Research in finance indicates that blockholders may be a potential solution to agency problems (Shleifer & Vishny, 1986) and have a sizeable impact on corporate policies (Cronqvist & Fahlenbrach, 2009). Blockholders have more incentive to monitor managers (Gaspar, Massa, & Matos, 2005) because they are more likely to be affected by managerial misbehavior that affects stock prices over the long term. They may also have more ways to acquire the information used to monitor managers, because the costs of acquiring information can be absorbed over time. Furthermore, they benefit from monitoring if it builds value over the long term (Shleifer & Vishny, 1986). If they are unhappy with management, they can voice their concerns to management (Hirschman, 1970). So blockholders may mitigate the effect of short-term investors on myopic marketing management.

How CEO compensation may influence marketing expenditures
Research in finance and accounting has highlighted how executive compensation influences firm expenditures. (Bergstresser & Philippon, 2006) studied the impact of executive compensation on the use of discretionary accruals to manage earnings. (Currim, Lim, & Kim, 2012) studied the effect of top executives' compensation on advertising and R&D spending decisions.

7.1 Summary of results
This thesis seeks to answer the general research question of whether the information in stock prices flows bidirectionally between marketing investments and equity markets.

Transposing the 'real effects of financial markets' perspective onto the marketing-finance interface

Abstract
To obtain deeper insights into the relationship between marketing and equity markets, this thesis investigates whether the 'real effects of financial markets' perspective (P. Bond, Edmans, & Goldstein, 2012) is suitable for integrating the two streams of the marketing-finance-accounting interface research area. The four studies in this thesis highlight the bidirectional flows of information in stock prices between marketing investments and equity markets. The first two studies (Chapters 3 and 4) show empirically the impact of information flows from marketing investments to equity markets, while the third and fourth studies (Chapters 5 and 6) show empirically the inverse flow of information from equity markets to marketing. Together, the four studies suggest that information flows bidirectionally between marketing investments and equity markets, reflecting the contentions of the 'real effects of financial markets' perspective. We make two arguments based on this finding. First, we contend that the 'real effects of financial markets' perspective should be transposed onto the marketing-finance interface because it enhances our understanding of the two research streams of the marketing-finance interface and provides a suitable theoretical framework to account for how marketing investments both affect and reflect information in equity markets. Second, transposing the 'real effects of financial markets' perspective onto the marketing-finance interface opens up many research possibilities to generate new insights into the two-way interactions between marketing investments and equity markets.
04114474
en
[ "phys.meca.mema", "phys.meca.stru" ]
2024/03/04 16:41:24
2022
https://theses.hal.science/tel-04114474/file/114637_LEGOURRIEREC-BRAUN_2022_archivage.pdf
From my point of view, a thesis is not solitary work. I wish to thank the many people with whom I was able to collaborate and live during these years of study. To Saint-Gobain Recherche, which entrusted me with a very varied subject offering many experimental and scientific avenues to explore. To the fruitful exchanges with the supervising team, Xavier, René, and the colleagues Benoît, Keyvan, Jérémie... To Richard, Grégorie, Maxime, Quentin, David... the members of the Chantereine team who welcomed me warmly during the tests on the cannon bench. To the LMPS and the scientific fulfilment I experienced there. To François, for my first chance. To Stéphane and Bastien, my supervisors, who oversaw the entire thesis project; they knew how to direct me towards the essential points, with complementarity and regularity. To the members of the test and administrative centres, facilitators in the shadows. To Rémi and Xavier, who helped greatly through their experience and expertise and gave of themselves for the realisation of the blowpipe device. To Xuyang, for the time devoted to stereo-vision and image correlation. To Mahmoud and Sofiane, the "Trouaeducs". To Marcello, Maxime, Cédric, for our shared passion. To Roxane, Myriam, Maxence, Takwa, Ahmed, Matthew...

Definition and application
Many interpretations of glass can be distinguished through the ages. The first scientific consensus about its definition was proposed by Jerzy Zarzycki: "A glass is a non-crystalline solid (i.e. amorphous) presenting the phenomenon of glass transition." However, the common-sense definition often obscures it: glass is a hard, brittle material or alloy, transparent to visible light, made of silicon oxide and fluxes [Zanotto and Mauro 2017]. More precisely, the glass of glazing is obtained by mixing silica with other atoms: sodium (e.g. soda-lime glass) and boron (e.g. borosilicate glass) for the glazing grades of Table 1.1, but also inoculants such as metal oxides for glass-ceramics. This insertion modifies some covalent bonds of the structure into ionic bonds [Seward and Vascott 2005].

Glass appeared quite early in man's civilization. Over the different periods, it has never ceased evolving and finding an ever-increasing number of applications. Today, it is a material with well-developed technology, as evidenced by the evolution of its technical and mechanical characteristics: structural parts, resistance to impact, temperature and UV, integration of technologies... Its amorphous state, coupled with zero porosity, makes it an ideal material in many cases. Today, glass is a part of our daily lives as well as of larger projects. For example, in the building sector, one can mention the Louvre pyramid, the glass floors of the Eiffel Tower, and the Grand Canyon Skywalk (a glass floor built at 1200 m height, 24 m from the edge of the cliff, supporting 74 tons).

Glass grades and manufacturing processes
Glass is a material that has made many appearances throughout the ages. Its utilization started in the 2nd millennium B.C., in mosaics found in Mesopotamia and then in ancient Egypt. The glazes were formed in vertical furnaces, with changes in the raw materials (sands, natron) to vary their tints. Glass vessels, which appeared in the 1st millennium B.C., were formed through the deposition of hot, melted, viscous glass on a clay core.
Later, in the 1st millennium A.D., the natron additives were replaced with plant ashes, and the use of antimony could make the glass colorless [Freestone 2021]. The blowpipe technique, the first notable glass-making technique whose concept is still utilized in modern industries, was invented in the 1st century B.C. in the Eastern Mediterranean region. Murano glass-makers modeled glass with this method for several medieval centuries: the craftsman takes a ball of molten glass from a silica furnace and blows into a tube to shape it. The construction of sleeves made it possible to create one-meter-long glass plates. At the end of the 17th century, the table-casting technique appeared in France: the molten glass is deposited on a metal plate and then distributed over its entire surface. The rolling process was invented in 1920 in Germany. Max Bicheroux's idea was to compress the glass between two rollers; the process was quickly installed in factories because it allowed the molten glass to be poured and pulled continuously. The finished product strip is then cut at regular intervals to form large glass plates. This process was followed by casting in molds for hollow parts, such as glass bottles and jars, and by other weaving and centrifugation techniques for glass wool manufacturing.

FIG. 1.1 - Scheme of float glass manufacturing.

In the case of glass panels for building and automotive applications, the float process has been the most widely used by flat glass manufacturers since the 1960s. The principle, sketched in Fig 1.1, consists in casting the molten glass on a liquid tin bath. Rollers then draw the glass ribbon, which slowly cools to relax the stresses inside the plate. The thickness of the glass is controlled by the feed rate of the glass, i.e. the rotation speed of the rollers (the RADS in Fig 1.1).

Mechanical properties
Conversely to silica glass, the ionic and covalent bonds in soda-lime glass allow little room for plasticity and densification. As a result, it exhibits elastic and brittle behavior at room temperature, without permanent deformation. Even if the intrinsic tensile strength of glass is good, it is always weakened by surface defects (scratches, indentations, notches) and behaves in a brittle manner. Cracks often start in regions with a higher stress concentration. Table 1.2 shows some mechanical properties of soda-lime and borosilicate glasses.

The first crack is an indelible impression left by the indentation of the projectile near the glass surface. The glass is subjected to tensile stress and shear in different directions in this small area. When the kinetic energy of the impact is higher, a crushed zone with tiny glass fragments is observed: a plastic zone is created in the amorphous material [Baret et al. 2002]. Then come the concentric circular cracks, orthoradial or ring cracks, initiated by radial stresses. The Hertz cone crack starts with a shallow circular crack that remains stable for a while, until a second instability gives way to the cone. For plates, bending may prevent this second stage and favor the initiation of another set of circular cracks instead. Then, radial cracks extending straight from the impact location can be observed; they are formed by the presence of orthoradial stresses. A star-like appearance on the indented plane is observed when these cracks are formed. The median cracks are supposedly circular and tangent to the free surface.
Their geometry goes from a circle to a semi-circle of the same depth, with the semi-circle centred on the surface. The median crack may turn into a radial crack when the indenter is released. Finally, lateral cracks are created beneath the glass surface, forming circles parallel to the indented plane. Under dynamic loading conditions, the greater the kinetic energy or speed of the projectile, the more radial cracks are formed. These cracks are numerous during high-velocity bullet impacts on glass armor [Bless and Chen 2010]. New shapes appear, such as fan cracks or bow-tie cracks. In-depth cracks also evolve vertically through the thickness of the glass. A comminution zone may appear during an impact; this area is located under the tip of an indenter as it continues its way into the already cracked glass, producing local secondary cracks and, therefore, small fragments.

Fracture mechanics
Glass is representative of a brittle material: it can be deformed in the elastic domain with small strains and breaks when a critical stress is reached. The consideration of critical stress allowed the engineers of the 19th century to dimension glass structures. The cracking phenomenon only began to be analysed by Griffith in the 1920s. He developed a criterion for crack extension in a 2D plate under known loading, based on the theorems of mechanics and thermodynamics [Lawn 1993]. After the first law of thermodynamics, the total energy $U$ of the considered system is

$$U = U_M + U_S + U_k$$

with a surface-creation contribution $U_S$ and a mechanical energy $U_M = U_E + U_A$ split into the elastic energy stored in the medium, $U_E$, and the potential energy of the applied forces, $U_A$. The inertia term $U_k$, added to the initially static problem, is non-negligible as soon as dynamic loading of the plate is considered. At equilibrium, the Griffith energy-balance concept gives

$$\frac{dU}{dc} = 0$$

In a linear elastic fracture mechanics problem, considering plane stress, Irwin introduced the planar stress intensity factors $K_I$ and $K_{II}$ to express the stresses near the crack tip,

$$\sigma_{rr} = \frac{K_I}{\sqrt{2\pi r}} f_{I,rr}(\theta) + \frac{K_{II}}{\sqrt{2\pi r}} f_{II,rr}(\theta), \quad \sigma_{\theta\theta} = \frac{K_I}{\sqrt{2\pi r}} f_{I,\theta\theta}(\theta) + \frac{K_{II}}{\sqrt{2\pi r}} f_{II,\theta\theta}(\theta), \quad \sigma_{r\theta} = \frac{K_I}{\sqrt{2\pi r}} f_{I,r\theta}(\theta) + \frac{K_{II}}{\sqrt{2\pi r}} f_{II,r\theta}(\theta)$$

and an anti-planar stress intensity factor $K_{III}$,

$$\sigma_{rz} = \frac{K_{III}}{\sqrt{2\pi r}} f_{III,rz}(\theta), \qquad \sigma_{\theta z} = \frac{K_{III}}{\sqrt{2\pi r}} f_{III,\theta z}(\theta)$$

where the $f_i(\theta)$ are functions that depend neither on the geometry nor on the mechanical state. $K_I$, $K_{II}$ and $K_{III}$ vary with the applied loading, the crack-tip size and geometrical parameters. The three modes I, II and III correspond to the fracture modes: an opening mode, a planar shear mode, and an anti-planar shear mode, respectively. Irwin used a new material parameter to define a crack propagation criterion: the fracture toughness corresponds to a critical value of the stress intensity factor. As a remark, in plane strain, the stress component $\sigma_{zz}$ is not null. In linear elastic fracture mechanics, one can write Irwin's equation and link the energy release rate $G$ with the stress intensity factors $K_i$ for a simple or mixed-mode opening crack in its initial direction,

$$G = \frac{1-\nu^2}{E}\left(K_I^2 + K_{II}^2\right) + \frac{1+\nu}{E} K_{III}^2$$

with $\nu$ the Poisson ratio and $E$ the Young's modulus. The critical energy release rate, or Griffith energy, is also known for soda-lime glass: $G_c \sim 10$ J/m² [Freund 1990].
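As a quick numerical check of Irwin's relation, the sketch below evaluates the mode-I energy release rate at fracture. The soda-lime values E ≈ 70 GPa, ν ≈ 0.22 and K_Ic ≈ 0.75 MPa·m^1/2 are typical assumed figures, not taken from Table 1.2; the result is of the same order as the G_c ~ 10 J/m² quoted above.

```python
# Assumed typical soda-lime glass properties (not the manuscript's table values).
E = 70e9        # Young's modulus, Pa
nu = 0.22       # Poisson ratio
K_Ic = 0.75e6   # mode-I fracture toughness, Pa*m**0.5

# Irwin's relation restricted to pure mode I (K_II = K_III = 0).
G_c = (1 - nu**2) * K_Ic**2 / E
print(f"G_c = {G_c:.1f} J/m^2")   # about 7.6 J/m^2, same order as ~10 J/m^2
```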
Concerning the fast crack propagation in mode I opening, one may express the crack propa- υ = c R 1 - K Ic K I n f or K I ≥ K Ic , υ ≤ υ max with the Rayleigh waves celerity c R = 3300 m/s, the maximum crack opening celerity υ max = 1550 m/s and n = 2 [START_REF] Freund | Dynamic Fracture Mechanics[END_REF]]. Dealing with glass reliability is a significant issue for glass industries. The intrinsic strength of glass material is linked to its structure, its chemical composition, and the energy required for atomic bond cleavage [START_REF] Hühn | Ab Initio energetics of Si-O bond cleavage[END_REF][START_REF] Wondraczek | Advancing the Mechanical Performance of Glasses: Perspectives and Challenges[END_REF]]. Experimentally, the strength of a glass sample highly depends on extrinsic factors [Kurkjian and Gupta 2020]: the environment, the presence of surface flaws, and defects. All surface defects affect the material variability in lifetime and strength and must be determined and controlled at best. The typical probabilistic distribution to model the presence of the flaws is obtained from Weibull theory [START_REF] Weibull | A statistical distribution function of wide applicability[END_REF]], which links the probability of failure, P, and the failure stress, σ f P = 1e -(σ f /σ 0 ) m with a Weibull modulus m and σ 0 the size dependent scaling stress. Laminated glass 1.2.1 Origin and history The brittleness of glass is a problem in many applications, particularly in ensuring the safety of its users. For example, in the case of a windshield, if the glass is an optimal choice for structural aspects and provides good visibility, it breaks in the form of large chips in a crash. Therefore, combining the glass with a soft, adhesive material that can deform in large strain regimes to retain the deadly fragments ejection during an impact is made in most applications. Laminated glass is a composite composed of two or more glass plies separated by polymeric interlayers that ensure user safety and is nowadays widely used as glazing. Sometimes clumsiness may lead to great things. Edouard Benedictus illustrated this concept by dropping a glass jar containing a celluloid solution on the floor of his laboratory. Instead of splintering, the pot cracked, but the fragments remained together. This unexpected discovery allowed the french chemist to file the patent for Triplex glass in 1909. Safety glass quickly met with success in the automotive and aeronautical industries, ensuring the safety of passengers in a vehicle, as shown in Fig 1 .4. The significant role of the interlayer is to maintain the cohesion between glass shards and to dissipate a higher amount of energy during a choc. Today, laminated glass is becoming more and more widespread in the building industry for several reasons. The first is the possibility of constructing large glass windows, such as a high-rise building, while ensuring the safety of the users by preventing a quick and easy perforation of the laminate. In addition to excellent mechanical properties, nanocoatings on the different surfaces of the composite allow the material adaptation to various applications. One example is the low-emissivity for thermal management. Another advantage is the possibility of integrating technology in the interlayer, thus using the glass as a screen displaying text and images. Finally, according to the objective of reaching carbon neutrality in 2050, glass seems to be an appreciated choice: the use of cullets, i.e. 
Laminated glass

1.2.1 Origin and history

The brittleness of glass is a problem in many applications, particularly for ensuring the safety of its users. For example, in the case of a windshield, if glass is an optimal choice for structural aspects and provides good visibility, it breaks in the form of large shards in a crash. Therefore, in most applications, the glass is combined with a soft, adhesive material that can deform in large strain regimes to retain the deadly ejection of fragments during an impact. Laminated glass is a composite composed of two or more glass plies separated by polymeric interlayers that ensure user safety, and it is nowadays widely used as glazing.

Sometimes clumsiness may lead to great things. Edouard Benedictus illustrated this concept by dropping a glass jar containing a celluloid solution on the floor of his laboratory. Instead of splintering, the jar cracked, but the fragments remained together. This unexpected discovery allowed the French chemist to file the patent for Triplex glass in 1909. Safety glass quickly met with success in the automotive and aeronautical industries, ensuring the safety of the passengers of a vehicle, as shown in Fig 1.4. The significant role of the interlayer is to maintain the cohesion between glass shards and to dissipate a higher amount of energy during a shock.

Today, laminated glass is becoming more and more widespread in the building industry for several reasons. The first is the possibility of constructing large glass windows, such as in a high-rise building, while ensuring the safety of the users by preventing a quick and easy perforation of the laminate. In addition to excellent mechanical properties, nanocoatings on the different surfaces of the composite allow the material to be adapted to various applications; one example is low-emissivity coating for thermal management. Another advantage is the possibility of integrating technology in the interlayer, thus using the glass as a screen displaying text and images. Finally, according to the objective of reaching carbon neutrality in 2050, glass seems to be an appreciated choice: the use of cullets, i.e. recycled glass debris from the float, in the manufacture of glass is increasingly widespread. Also, hydrogen or biogas can replace natural gas or coal, for example (e.g. [Saint-Gobain 2022]).

The manufacturing process involves laminating a sandwich composed of glass plies or panels connected to thermoplastic interlayers. The composite is produced in an autoclave with specific temperature and pressure conditions for a defined period, varying with the desired adhesion strength. There, chemical bonds are formed between the polymer hydroxyl groups and the glass silanol groups, a reaction facilitated at moderate temperatures (∼100 °C).

The polymeric interlayer has an enormous influence on the global performance of a laminate during dynamic loading. The polymeric interlayer is sensitive to temperature variations near its glass transition temperature, T_g. A slight change in room temperature during an experiment with a laminate may lead to a significant variation in the mechanical properties of the polymer. Zhang et al. highlighted this characteristic by performing impact tests with low-velocity loading, changing the interlayer material, and varying the room temperature between −30 °C and 50 °C. To identify the differences between the responses of the polymers under a blunt impact loading with an incident energy of 15 J, the authors highlighted the brittle behavior of the SGP (the ionoplast SentryGlas® Plus) layers, forming large fragments at −30 °C [START_REF] Zhang | Temperature effects on the low velocity impact response of laminated glass with different types of interlayer materials[END_REF].

The choice of a thermoplastic interlayer depends on the desired characteristics of the laminated glass. Some are well known and used in different applications. For instance, thermoplastic polyurethane (TPU) and polycarbonate (PC) are used in aeronautics. Ethylene-vinyl acetate (EVA) and polyvinyl butyral (PVB) are cheaper and mainly used in the building and automotive industries. The latter will be further considered in this section. PVB presents good dissipating properties and is the most common interlayer in laminated glass because of its transparency, large dissipating capacities, large elastic strain regimes, good adhesion to the glass, and reasonable price for a 0.76 mm foil [START_REF] Cleary | Characterization and quantication of in-service windshield fracture mechanisms and evaluation of laminate sharp impact resistance as a function of construction[END_REF][START_REF] Martín | Polymeric interlayer materials for laminated glass: A review[END_REF].

Polymer structure

It seems appropriate to look at the properties of the polymeric interlayer before visiting the whole composite behavior. Polyvinyl butyral (PVB) is a thermoplastic polymer whose macromolecules are essentially composed of three different monomers: vinyl butyral, vinyl alcohol, and vinyl acetate, whose chemical formulas are presented in Fig 1.5.

Polymer mechanical behavior

The PVB non-linear hyperelastic behavior in tension may be assessed from Fig 1.6. Still, when the stress is released, it is indeed a visco-elastic behavior, which is highlighted by the unloading path. It can also be seen that, with faster loading, the material stiffens and dissipates more energy. Its ductility also reduces with the increase of the strain rate.
These phenomena are highlighted by authors like Zhang, who have conducted tensile experiments on PVB, some of them at very high strain rates, up to 1360 s⁻¹ [START_REF] Zhang | The mechanical properties of Polyvinyl Butyral (PVB) at high strain rates[END_REF]. Some mechanical properties of PVB at ambient temperature are reported in table 1.3 [S-LEC ® 2020; Saflex ® 2020; Trosifol ® 2020]. Because of the different possible formulations of this polymer, the PVB characteristics may vary significantly. However, the table provides values for some of the PVB characteristics that give a good overview. The large strains and the ultimate tensile engineering strength then correspond to the elongation obtained by Fourton in Fig 1.6.

When used close to its glass transition temperature, PVB presents a non-linear, visco-elastic behavior. Visco-plasticity can also be observed via tensile tests [Elzière 2016]. The visco-elastic behavior of PVB in a dynamic regime can be described using a generalized Maxwell rheological model (assuming small strains and a constant temperature). It gives the polymer shear modulus G(t) as a relaxation function expressed via its Prony series. Considering the PVB as isotropic, G(t) = E(t)/2(1 + ν) ∼ E(t)/3. The model comprises N branches in parallel, each presenting a spring of modulus G_i in series with a damper of relaxation time τ_i:

G(t) = G_∞ + Σ_{i=1}^{N} G_i e^(−t/τ_i),    G_0 = G_∞ + Σ_{i=1}^{N} G_i     (1.1)

One defines the instantaneous modulus G_0 as the immediate response of the material and G_∞ as the long-term elasticity [Elzière 2016]. Writing g_i = G_i/G_0, the storage and loss moduli, directly sensitive to the pulsation ω in the complex domain, are

G′(ω) = G_∞ + G_0 Σ_{i=1}^{N} g_i τ_i² ω² / (1 + τ_i² ω²)
G″(ω) = G_0 Σ_{i=1}^{N} g_i τ_i ω / (1 + τ_i² ω²)     (1.2)

The parameters of the PVB Prony series are reported in table 1.4.
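A minimal numerical sketch of eqs. (1.1) and (1.2) is given below. The Prony parameters used here are hypothetical placeholders, as table 1.4 is not reproduced in this excerpt; only the structure of the computation is illustrated. Note that the loss modulus G″ has no equilibrium (G_∞) contribution.

```python
import numpy as np

# Hypothetical Prony parameters for illustration only: pairs (g_i, tau_i [s]).
G0 = 1.0e9                                       # assumed instantaneous shear modulus [Pa]
prony = [(0.60, 1e-4), (0.25, 1e-2), (0.10, 1.0)]
G_inf = G0 * (1.0 - sum(g for g, _ in prony))    # enforces eq. (1.1) at t = 0

def G_t(t):
    """Relaxation modulus G(t) from eq. (1.1)."""
    return G_inf + G0 * sum(g * np.exp(-t / tau) for g, tau in prony)

def G_storage_loss(omega):
    """Storage and loss moduli G'(w), G''(w) from eq. (1.2)."""
    Gp = G_inf + G0 * sum(g * (tau * omega) ** 2 / (1 + (tau * omega) ** 2)
                          for g, tau in prony)
    Gpp = G0 * sum(g * tau * omega / (1 + (tau * omega) ** 2)
                   for g, tau in prony)
    return Gp, Gpp

print(G_t(1e-3), G_storage_loss(2 * np.pi * 100.0))
```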
Glass-PVB interface adhesion

The adhesion between the PVB interlayer and the glass plies is a key parameter for the impact response of laminates. Too poor or too strong an adhesion may lead to a loss of performance of the composites [Elzière 2016]. Some easy-to-set-up tests are commonly used in industry, for instance the pummel test (i.e. a cohesive failure test), which is very dependent on the operator and his hammer, or the peeling test (i.e. mode I delamination of the polymer from the glass). However, none provides the mode II failure activation occurring during dynamic loading on laminates. A simple test, first called "tension adhesion test" by Sha et al. [START_REF] Sha | Analysis of adhesion and interface debonding in laminated safety glass[END_REF] and then named TCT test, consists of a tensile test on a pre-cracked laminated glass specimen. One can measure the adhesion force versus the large strain of the polymer. Most of the energy is dissipated at the polymeric bridges (i.e. locations where glass shards are interconnected via polymer ligaments). Two main phenomena are involved: the visco-elastic dissipation in the bulk of the PVB, and the delamination occurring near the cracks [START_REF] Galuppi | A homogenized analysis à la Hashin for cracked laminates under equi-biaxial stress. Applications to laminated glass[END_REF]. Hence, the TCT test has become the most-used experiment to measure the residual and post-fragmentation behavior of cracked laminated glass [START_REF] Muralidhar | Mechanical behaviour in tension of cracked glass bridged by an elastomeric ligament[END_REF].

Concerning the modeling of the interface, one common description of the glass-PVB interface is the cohesive zone model. The technique correlates the adhesion energy with the energy required to delaminate the interface. The Orowan theory is adapted and demonstrated by Needleman from the principle of virtual work [START_REF] Needleman | An analysis of tensile decohesion along an interface[END_REF]. The author represents the normal tensile stress (or force) versus the strain (or normal displacement) and assesses the adhesive energy of the interface. The cohesive zone developed by Needleman is a continuous model based on the description of void initiation and growth, in which the unbonded zone expands with delamination. This model fits well with the PVB, whose yield strength at ambient temperature is in practice much higher than the interface strength between the matrix and the glass [START_REF] Needleman | A continuum model for void nucleation by inclusion debonding[END_REF]. This cohesive zone model is adopted, for instance, by Gao [START_REF] Gao | Intrinsic cohesive modeling of impact fracture behavior of laminated glass[END_REF] or Fourton [Fourton 2019] to study and simulate the mixed fracture mode I+II delamination. Fourton made the adhesion between the layers vary, grafting different silanes on the glass surface before lamination with PVB, then peeling the samples. The combined measurement of the macroscopic work of fracture and the development of TCT tests on such samples informed the author that adhesion modifies the interface dissipation and increases the crack contribution of the polymer [START_REF] Fourton | Adhesion rupture in laminated glass: influence of adhesion on the energy dissipation mechanisms[END_REF] (i.e. a strong adhesion causes a rapid rupture of the interlayer). The author also splits the macroscopic work of fracture in the cohesive zone, G_m, into a bulk dissipation proportional to the interlayer thickness h and an interface work of separation,

G_m = h · Π_bulk + 2 Γ_crack

The bulk dissipation, Π_bulk, changes only slightly with the interface level of adhesion; its mean value is Π_bulk = 5.2 MJ/m³. The crack contribution, Γ_crack, is proportional to the separation energy term of the total fracture energy, Γ_0. Fourton summarizes

Γ_crack ∼ (Γ_0)¹,    Π_bulk ∼ (Γ_0)⁰
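The sketch below illustrates this energy split numerically. Π_bulk is the mean value quoted above; the crack contribution Γ_crack passed to the example call is an assumed illustrative value, not an identified one.

```python
# Minimal sketch of Fourton's energy split G_m = h * Pi_bulk + 2 * Gamma_crack.
PI_BULK = 5.2e6          # mean bulk dissipation [J/m^3], quoted above

def macroscopic_work_of_fracture(h, gamma_crack):
    """G_m [J/m^2] for an interlayer thickness h [m]."""
    return h * PI_BULK + 2.0 * gamma_crack

# Example: 0.76 mm PVB foil, hypothetical Gamma_crack = 300 J/m^2
print(macroscopic_work_of_fracture(0.76e-3, 300.0))  # ~4.6 kJ/m^2
```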
Composite assembly process

There are several ways of manufacturing laminated glass. The first step is choosing the geometry of the final composite: the shape does not necessarily have to be flat (e.g. windscreens, curved panels), and the laminate can be composed of several polymer layers (e.g. armored glass). The thickness of the plies varies according to the glazing application: armored glass is much thicker than a standard windshield, for instance. Next comes the choice of material. Different types of glass and interlayers are available. A typical stack is chosen for the tests: two soda-lime glass plies joined with a PVB interlayer. The sharp glass edges are polished with an abrasive band so that the plates can be easily manipulated during the making process. The float glass used for the impact loadings presented in the manuscript is the Saint-Gobain Glass Planilux® and its clearer successor, the Planiclear®. The PVB plies used in the composites are RB41 and RB11 from the supplier Eastman Saflex®. After sizing the two glass plies and the PVB foils, the assembly is carried out in a clean room to avoid the insertion of dust or external particles between the layers and thus to control the adhesion between the plies. The glass plies are cleaned with distilled water and soap on both sides. After rinsing, the sandwiched plies are set into a vacuum bag under pressure and temperature for a specific duration.

Numerical models

Modeling glass

Glass subjected to dynamic loading can be modeled as an elastic brittle isotropic material, characterized by its Young's modulus, Poisson ratio, energy release rate, and tensile failure stress. The challenge remains in choosing a correct failure criterion, the crack propagation description, and their typology. Easy to interpret, the first criterion corresponds to a critical stress criterion. For instance, Peng et al. defined a glass strength limit, implementing shell element deletion after cracking [START_REF] Peng | Finite element modeling of crash test behavior for windshield laminated glass[END_REF]. Pyttel et al. used a Rankine criterion that also deletes the concerned elements when the stress overpasses the failure criterion [START_REF] Pyttel | Failure criterion for laminated glass under impact loading and its application in Finite Element simulation[END_REF]. The two methods are implemented in a FEM solver, with an erosion of the fully damaged elements. One drawback of this technique is the loss of mass in eroded zones, which means a loss of inertia and a violation of mass conservation in the case of dynamic loads. Still, other methods exist. For instance, X-FEM represents the crack by enriching the elements it crosses as it evolves in the medium (e.g. [START_REF] Xu | Investigation of dynamic multi-cracking behavior in PVB laminated glass plates[END_REF]). The peridynamic method also represents the glass discontinuity, as the materials are represented via discrete elements of a specific volume and density. Element interactions are directly described with the relative positions and displacements of the neighboring nodes [START_REF] Lai | A non-ordinary state-based peridynamics modeling of fractures in quasi-brittle materials[END_REF]. The difference between the X-FEM and the peridynamic methods is shown in Fig 1.7: for the same experiment, the authors numerically reproduce the cracking pattern obtained after a low-velocity impact on a windshield.

The adhesion energy, G_a (e.g. the energy dissipated per unit area of debonding), at 20 °C, is roughly constant for tensile rates between 10 s⁻¹ and 100 s⁻¹, G_a ∼ 2500 J/m², but decreases for quasi-static TCT tests, G_a ∼ 300 J/m² at 0.1 s⁻¹.

Other plate models have been used to describe the behavior of laminated glass. Vallabhan's model considers the bending as well as the tension-compression of the glass plies coupled with the shear of the interlayer, and is compared to quasi-static free bending tests of a laminated plate [START_REF] Vallabhan | Analysis of laminated glass units[END_REF]. The shear of the interlayer is also considered quasi-statically in the Foraboschi model [START_REF] Foraboschi | Analytical model for laminated-glass plate[END_REF]. The visco-elasticity of the interlayer is visited by Galuppi and Royer-Carfagni [START_REF] Galuppi | Enhanced effective thickness of multi-layered laminated glass[END_REF]. Steady-state delamination and stretching are explored by Fourton [Fourton 2019].
Finally, non-empirical models have also been proposed for laminated glass. The high-contrast laminate model developed by Viverge uses asymptotic methods, i.e. without assumptions on the shape of the stress and strain fields [START_REF] Viverge | Model of highly contrasted stratified plates : application to laminated glass[END_REF]. All these models are very sophisticated for a limited payback, as they are not adapted to the description of the impacts presented in the following sections.

Dynamic behavior

After introducing the materials' origin, their behavior, and the composite properties, the focus is now placed on the behavior of laminated glass subjected to dynamic loading. The behavior of laminated glass is complex: it does not react in the same way when subjected to quasi-static or dynamic loads. The study of the different damage steps represents a challenge: describing the crack initiation, propagation, and arrest mechanisms in laminates. The PVB interlayer plays a central role in the energy dissipating process, ensuring the cohesion of the composite and preventing large fragments from splintering. Its visco-elasticity is a determinant factor for the global efficiency of a laminated glass submitted to an impact. Describing the response of brittle materials, such as glass, to low-velocity impacts by solid bodies is an important issue because of its significance in terms of structural degradation and integrity [Nassr et al. 2021]. The influence of the boundary conditions is also at stake: the reflections on the plate edges and the positioning of the plate can change the cracking and the overall response of the laminated glass during an impact. The authors note that the impactor head influences the transient response, i.e. the first period of the impact, but does not significantly affect the stretching and tearing phase that follows, nor does it change the perforation threshold.

To understand the laminated glass behavior under low-velocity impacts, one must qualify the existing impact phenomena and quantify their contribution to the total dissipated energy. It is essential to establish a correlation between the experimental parameters of the dynamic loading and the measured dissipating phenomena. Experimental parameters are choices the experimenter has made before a test, e.g. the projectile mass, its blunt or sharp geometry, the ply thickness, material aging, adhesion, glass chemical composition, and PVB formulation. The different stored or dissipated energies that have to be measured are roughly represented in Fig 1.9: the crack formation in the glass plies, the interface decohesion, the PVB visco-elastic dissipation, the inertia of the sample and the projectile, and the energy lost in the test device or stored as elastic energy. Knowing the proportion of energy dissipated by each phenomenon and the influence of the parameters mentioned above provides keys for enhancing laminated glass performance and safety.

A glass company must submit its new products to a certain number of normalized tests that they must pass to be approved for sale. Manufacturers aim to evaluate the objective and representative performance of laminates submitted to an application case, like a stone impact on a windshield or a building break-in. Such tests may vary according to the requirements; two European norms for the building industry can be mentioned in this respect.

The data accumulated from many repetitions of the cannon and blowpipe-like experiments has already shed some light on several laminated glass parameters.
Examples include the size of the glass plies and interlayer, their adhesion, and the relationship between the number of chips and the dissipated impact energy.

The whole study of the dynamic behavior of laminated glass is divided into three problem areas. The first study concerns the propagation and arrest of cracking in the glass from the earliest moments of an impact. The second part concerns glass fragmentation, the evolution of the crack density or number of shards created with the dissipated impact energy. The third section focuses on the post-impact residual resistance and residual strength of laminated glass. The objective is to depict the composite behavior of pre-fragmented glass, i.e. to head towards a model of cracked laminated glass.

Propagation and arrest during dynamic indentation

Observing crack initiation, propagation, and arrest helps to anticipate the final crack typologies on laminates. One generally seeks to predict the evolution of a crack created by an impact. This phase firstly differs according to the indenter geometry (Vickers, Berkovich, Rockwell). If the latter is blunt (Fig 1.10a), a cone crack followed by median cracks appears in the glass. The stored energy is initially more significant in the thin glass ply, and many radial cracks rapidly appear at failure. If the indenter is sharp (Fig 1.10b), a median crack forms under the tip and may extend into radial cracks.

This section mainly concerns the sharp indenter impact, in the spirit of gravel pitting. Well described in the literature [START_REF] Dörkop | Investigation of the mechanism of stone impact on laminated glass windscreens[END_REF][START_REF] Lawn | Fracture of Brittle Solids[END_REF], the formation of the star crack under these conditions can be divided into three stages. A first initiation phase occurs through dynamic indentation of the glass having a localized defect on the impacted surface. A second phase is then initiated and corresponds to a sequenced in-depth progression of the crack up to the plate median plane. The third phase is unstable because the star-shaped crack forms and spreads very quickly, at around 1100 m/s according to ultra-fast camera observations. Indeed, the crack reaches a zone of tensile stress due to plate bending, and the propagation to the bottom of the first ply and the lateral extension proceed very fast. The propagation and arrest steps of a star crack formation are more complicated phenomena because of the existing competition between dissipating phenomena: crack opening energy, bending of the plates, and local stiffness decrease.

The study will focus on the design of a blowpipe test device to represent such a dynamic indentation, despite the conventional aspect of the test compared to real graveling. The impact energy of this kind of loading goes from 30 to 100 mJ. The device is highly instrumented to measure the phenomena at stake (gauges, high-speed camera). Observing through the thickness or in 3/4 view at 1 to 5 Mfps is possible with the high-speed camera Shimadzu® HPV-X and allows one to follow the damage evolution in the laminate. Concerning the parameters, the use of a chemically tempered glass or borosilicate glass will have to be reconsidered (e.g. [START_REF] Kang | Crack nucleation and growth during dynamic indentation of chemically-strengthened glass[END_REF]). Indeed, the strong superficial compression stresses prevent the nucleation of the cracking at a very shallow depth but, on the contrary, favor it at a sufficient depth (where it is then highly unstable), which makes the formation of the star crack much more challenging to observe.
Fragmentation and energy dissipation during an impact

The laminated glass fragmentation study is in line with the reflection initiated by Nourry [Nourry 2005]. The device he developed allows one to link the impactor speed and the kinetic energy dissipated during the impact with the glass damage. Some properties have a noticeable influence: impacting the atmosphere or tin side of the float glass, the PVB foil thickness, the sample temperature during the tests, the initial presence of scratches at the glass surface, the glass ply adhesion with the interlayer, the thickness difference between the two glass plies...

New cannon tests will be performed with well-controlled experimental conditions (e.g. a temperature controlled via a thermostat set at 23 °C) and richer instrumentation: high-speed cameras and stereo-digital image correlation (S-DIC) to monitor the laminated glass deformation, a telemeter to measure the impactor movement, and optical sensors to capture the projectile velocity. The impact energy of this kind of loading goes from 30 to 400 J. The experimental campaign will focus on the crack appearance on the laminated glass. At first, radial cracks are formed, quickly followed by orthoradial cracks throughout the test. The fragmentation pattern, which varies with the properties and the projectile velocity, is a critical factor in energy dissipation and will be extensively reviewed via instrumented cannon experiments. The stretching of the PVB and the glass-PVB delamination will be further analyzed regarding the live fragmentation evolution. This final part will be based on the described TCT model [Elzière 2016].

Residual resistance

Repeated impacts are historically linked to anti-infraction tests in buildings. To experimentally understand the behavior of a fragmented laminate, many tests can be set up. For example, a first impact can be performed, and the glass can then be loaded, dynamically or statically, on different areas to observe the link between cracking and dissipated energy. One can also repeat the impact on the same point of the laminated glass. This second choice is made, for instance, by Seshadri et al., who were among the pioneers describing the interconnection between two fragments connected by an interlayer polymeric bridge [Seshadri et al. 2002]. The authors looked at the delamination and the particular behavior of the areas between two glass fragments, which are considered here as two substrates connected by the very deformable polymer interlayer. In Fig 1.12, the decohesion zone near these interfaces is schematized. The tensile loading of such a sample introduces rate-dependent large deformations, stretching of the interlayer, and decohesion starting from the initial notch, followed by tearing. Conditions like the ambient temperature or local heating during impact also play a role in the global response.

The large strain rheology of PVB and its adhesion with glass are extensively studied in the theses of Elzière [Elzière 2016] and Fourton [Fourton 2019]. In addition to describing the stretching stage of PVB during an impact with a simple test such as the TCT test, they also model the cohesive zone between glass and PVB in different ways. The importance of adhesion is also highlighted, and it is shown that the optimum performance of the laminate during impact is highly dependent on this property. The cannon device will host multi-impact tests to compare the performances of uncracked and already damaged laminated glass.
A model will be proposed to measure the delamination at the crack edges of the fragmented laminated glass plate.

Experimental data processing: methods

After introducing the three problem areas, some experimental methods are described in the current section. The instrumented tests are based on experimental techniques whose concepts have already been introduced in the literature. Only the well-established theory, i.e. the state of the art, is introduced in these two sections. Any further developments will be explained in the bulk of the manuscript. First, the strain gauge measurements and the study of the uniaxial elastic wave propagation, performed to determine the force and velocity in the indenter during the blowpipe experiments, are explained. The theory is well established in the community of dynamicists, especially in the analysis of Hopkinson bar experiments [START_REF] Kolsky | An investigation of the mechanical properties of materials at very high rates of loading[END_REF]. Then, the stereo-digital image correlation technique (S-DIC) and the computer vision needed to analyze the laminated glass movement during cannon experiments are introduced.

Hopkinson-Kolsky bar theory

A Kolsky-Hopkinson bar test uses long entrance and exit bars as a mechanical waveguide. The uniaxial description of the motion relies on the small diameter of the bar compared to its total length. The set-up is also instrumented with strain gauges that deform with the passage of the elastic wave. Wheatstone bridges are used to detect the change in resistance of the gauges, which is converted into a measured strain.

Elastic wave propagation

In this section, the small strain assumption is made: |∂_x u_x| ≪ 1. The uniaxial wave propagation hypothesis in the cylindrical body of the indenter gives the following momentum equation on the cylinder axis, i.e. the x-axis,

∂σ/∂x = ρ ∂²u_x/∂t²     (1.3)

where σ = σ_xx is the axial stress, u_x the displacement along x, t the time, and ρ the density. To perform a Hopkinson bar experiment analysis, the indenter is considered to remain elastic throughout the whole test, so that the elastic behavior σ = E ∂_x u_x, with E the Young's modulus, is introduced in eq. (1.3):

∂²u_x/∂t² = c² ∂²u_x/∂x²     (1.4)

with c = √(E/ρ) the celerity of the elastic tension-compression waves in the indenter. The general solution of equation (1.4) is

u_x(x, t) = f(x − ct) + g(x + ct)     (1.5)

where f represents a wave propagating from left to right and g from right to left [START_REF] Meyers | Dynamic behavior of materials[END_REF]. The instantaneous material velocity in the cross section is

v_x(x, t) = ∂u_x(x, t)/∂t = c (−f′(x − ct) + g′(x + ct))     (1.6)

Lagrange diagram

The Lagrange diagram allows the determination of the 1D velocity and stress in a bar from the gauge strain measurements, inspired by the Kolsky-Hopkinson bar set-up [START_REF] Hopkinson | A method of measuring the pressure produced in the detonation of high explosives or by the impact of bullets[END_REF][START_REF] Kolsky | An investigation of the mechanical properties of materials at very high rates of loading[END_REF]. In the diagram, constant values are represented by straight lines of absolute slope c. The impedance z can be introduced in the equation

d(σ + z v_x)/dt = ∂(σ + z v_x)/∂t + ∂(σ + z v_x)/∂x · dx/dt = 0     (1.7)

leading to z = ∓√(Eρ) along the two families of characteristics dx/dt = ±c.
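As a minimal numerical check of eq. (1.7), the sketch below builds a purely right-going pulse u = f(x − ct) and verifies that σ − z v_x is indeed transported unchanged along a characteristic dx/dt = +c. The bar properties are assumed steel-like values, not data from this manuscript.

```python
import numpy as np

# Assumed steel-like bar properties (illustrative only).
E, rho = 210e9, 7800.0
c = np.sqrt(E / rho)              # tension-compression wave celerity, eq. (1.4)
z = np.sqrt(E * rho)              # acoustic impedance

def f_prime(s, w=0.01):
    """Derivative of a Gaussian pulse f(x - ct) of width w."""
    return -2.0 * s / w**2 * np.exp(-(s / w) ** 2)

def sigma(x, t):                  # axial stress sigma = E du/dx = E f'(x - ct)
    return E * f_prime(x - c * t)

def v(x, t):                      # material velocity v = du/dt = -c f'(x - ct)
    return -c * f_prime(x - c * t)

# Two points on the same right-going characteristic x - ct = const:
x0, t0 = 0.005, 0.0
x1, t1 = 0.305, 0.3 / c
inv0 = sigma(x0, t0) - z * v(x0, t0)
inv1 = sigma(x1, t1) - z * v(x1, t1)
print(np.isclose(inv0, inv1))     # True: sigma - z*v is transported at +c
```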
Therefore, the stress and velocity are first computed at the gauge locations, considering the system initially at rest: σ(x, t = 0) = 0 and v_x(x, t = 0) = 0. To measure the stress and velocity at the indenter tip, at a distance ℓ from the closest gauges, the wave information is transported to the bottom of the cone, where the 1D propagation hypothesis is still valid. The stress σ(x = ℓ, t) and velocity v_x(x = ℓ, t) can be expressed for all t > (L + ℓ)/c:

σ(0, t + ℓ/c) + z v_x(0, t + ℓ/c) = σ(ℓ, t) + z v_x(ℓ, t)
σ(0, t − ℓ/c) − z v_x(0, t − ℓ/c) = σ(ℓ, t) − z v_x(ℓ, t)     (1.8)

L being the x-distance between the two sets of strain gauges. The Lagrange diagram used to analyze the blowpipe experiments is represented in Fig A.2.
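A sketch of the transport of eq. (1.8) on sampled signals is given below: the stress and velocity at the cone bottom (x = ℓ) are recovered from their values at the gauges (x = 0) by combining the two characteristic invariants. The input series and the material values are assumed placeholders.

```python
import numpy as np

# Assumed steel-like values (illustrative only).
E, rho = 210e9, 7800.0
c, z = np.sqrt(E / rho), np.sqrt(E * rho)

def transport_to_tip(sig0, v0, l, dt):
    """Eq. (1.8): sig0, v0 are time series at the gauges, sampled at dt."""
    n = int(round(l / c / dt))          # number of samples in the delay l/c
    A = sig0[:-2*n] - z * v0[:-2*n]     # sigma - z*v taken at (0, t - l/c)
    B = sig0[2*n:] + z * v0[2*n:]       # sigma + z*v taken at (0, t + l/c)
    sig_l = 0.5 * (A + B)               # sigma(l, t)
    v_l = 0.5 * (B - A) / z             # v(l, t)
    return sig_l, v_l                   # output index j corresponds to t = (j + n)*dt
```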
Digital Image Correlation

Principle and stereo-digital image correlation (S-DIC) formulation

Digital Image Correlation (DIC) most often relies on the assumption of gray level conservation between a reference image f and a deformed image g, so that for each pixel x,

f(x, t = 0) = g(x + u(x), t)     (1.9)

where t is the time and u the pixel displacement. However, this equation is never strictly obeyed due to the presence of acquisition noise, which depends on the camera performance; hence the displacement field is computed from the minimization of the quadratic difference between the two members of Eq. (1.9),

W(u) = ∥f(·) − g(· + u)∥²     (1.10)

calculated over all pixels of the region of interest (ROI). The philosophy is similar for global S-DIC with a multiple view system, but the minimization is performed for the 3D displacements and considering all the N_c cameras at once [START_REF] Beaubier | CAD-based Calibration of a 3D DIC System[END_REF]; Dufour et al. 2015b; [START_REF] Berny | High-temperature tests for ceramic matrix composites : from full-field regularised measurements to thermomechanical parameter identification[END_REF]. Using the parametric model with world coordinates as the reference, any point X moves at time t to a new position in 3D, X′ = X + U(X). Based on the calibration step, this relation is expressed for all camera images, labeled by c, as x′ᶜ = xᶜ + uᶜ = [Π]ᶜ X′ where xᶜ = [Π]ᶜ X, with [Π] a projection matrix explained in the next section. Thus a residual field is defined as

ϱᶜ(X, U(X)) = gᶜ(x′ᶜ) − fᶜ(xᶜ)     (1.11)

and hence the 3D displacement field is computed from the minimization of

W(U) = Σ_{c=1}^{N_c} Σ_{X∈ROI} ϱᶜ(X, U(X))²     (1.12)

The global displacement is decomposed over the mesh using the finite element shape functions θ_i(X), as U = Σ_i U_i θ_i(X), and the Hessian and right-hand side of the problem read

Hᶜ_ij = Σ_{X∈ROI} (∂uᶜ/∂U_i · ∇fᶜ(xᶜ)) (∂uᶜ/∂U_j · ∇fᶜ(xᶜ))
bᶜ_i = Σ_{X∈ROI} (∂uᶜ/∂U_i · ∇fᶜ(xᶜ)) (fᶜ(xᶜ) − gᶜ(x′ᶜ))     (1.13)

The minimization problem finally consists of successive corrections of the displacement field, δU = U − Ũ, from a system assembled from the sum over all cameras of their contribution to the global residual,

Σ_{c=1}^{N_c} [H]ᶜ · δU = Σ_{c=1}^{N_c} bᶜ     (1.14)

It is worth emphasizing that the measured displacement is directly the 3D nodal displacement field, minimizing the contributions of all cameras (i.e. the residuals for each camera) simultaneously. As S-DIC matrices are ill-conditioned, some tricks such as the Tikhonov regularization can be used to solve the minimization problem [START_REF] Tikhonov | Solutions of ill-posed problems[END_REF]. The minimization problem becomes

(Σ_{c=1}^{N_c} [H]ᶜ + λ[I]) · δU = Σ_{c=1}^{N_c} bᶜ + λ (U_0 − U)     (1.15)

with [I] the identity matrix, U the current corrected displacement, and U_0 the estimated displacement for this iteration. The weight λ associated with the Tikhonov regularization is chosen as

λ = 10⁻² · λ_max     (1.16)

with λ_max the largest eigenvalue of the S-DIC Hessian matrix Σ_{c=1}^{N_c} [H]ᶜ. Even if using a trick such as the Tikhonov regularization improves the computation and the conditioning of the problem, the solution is not fully satisfying, as the enforced constraint is incorrect.

Stereo vision

Stereo vision consists in registering multiple images from different cameras. To correctly measure the physical shape and kinematics of objects, the calibration of the stereo-camera system (i.e. the identification of the projection matrix [Π]) is crucial to bridge the real 3D world coordinates and the 2D image coordinates. The projection matrix [Π] can be decomposed into an intrinsic matrix [K], which characterizes the intrinsic parameters of the camera (optical center, focal distance, and skewness), and an extrinsic projection matrix [RT], which gives the relative position of the camera and the 3D object (translation and rotation), so that s·x = [K][RT] X = [Π] X, where s is a scale factor. The projection matrix [Π] of each camera is commonly measured via a calibration object [START_REF] Besnard | Caractérisation et quantification de surfaces par stéréocorrélation pour des essais mécaniques du quasi statique à la dynamique ultra-rapide[END_REF]; Dufour et al. 2015b; [START_REF] Berny | High-temperature tests for ceramic matrix composites : from full-field regularised measurements to thermomechanical parameter identification[END_REF]. The technique consists in monitoring this well-known object, whose geometry and size have been independently measured and stored as a 3D-mesh file, expressed in the world coordinates of the problem. A pre-calibration phase provides a first determination of the projection matrices. Then, comparing the gray levels monitored by each camera at predefined integration point locations, the second phase of the calibration provides a refined determination of the projection matrices [START_REF] Berny | High-temperature tests for ceramic matrix composites : from full-field regularised measurements to thermomechanical parameter identification[END_REF][START_REF] Beaubier | CAD-based Calibration of a 3D DIC System[END_REF].
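The sketch below illustrates the pinhole projection s·x = [K][RT] X on a single made-up camera; all numerical parameters are assumptions chosen for illustration, not calibrated values from this work.

```python
import numpy as np

# Minimal pinhole-camera sketch of s*x = [K][RT] X, with made-up parameters.
K = np.array([[1200.0,    0.0, 320.0],    # focal length [px] and optical center
              [   0.0, 1200.0, 256.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                             # camera aligned with the world axes
T = np.array([0.0, 0.0, 0.5])             # world origin 0.5 m in front of camera
RT = np.hstack([R, T[:, None]])           # extrinsic 3x4 matrix
Pi = K @ RT                               # full projection matrix [Pi]

X = np.array([0.01, -0.02, 0.0, 1.0])     # homogeneous 3D world point [m]
sx = Pi @ X                               # homogeneous image coordinates
x = sx[:2] / sx[2]                        # divide out the scale factor s
print(x)                                  # pixel position of the 3D point
```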
Chapter 2

Star-crack on laminated glass

Gravel pitting context

Resisting small impacts like graveling is at stake for laminated glass glazing. A sharp glass indentation is rarely avoided during an impact, and sometimes a crack appears. Many parameters may drive this breakage, including the projectile mass and speed, its geometry, or the laminated glass characteristics (adhesion of the plies, geometries, thicknesses) [Grant et al. 1998]. Therefore, minimizing the defect size during low-velocity impacts is a priority for windshield manufacturers, since locally damaged parts on the external glass layer can be repaired if the crack extension is not too large. The existing norms in the domain concern the repair conditions of a windshield (e.g. norm NF R19-601 [Afnor, French Norm 2011]), which depend on the critical crack length and its location on the windshield. Although there are no defined construction standards, glass manufacturers have created their own resistance criteria to highlight the performance of their products, with, for instance, a series of repetitive indentation tests characterized by empirical damage criteria.

Various cracks may be observed after graveling, such as Hertzian cone cracks or star cracks. The latter are critical defects, as they may grow under quasi-static sub-critical loads or vibrations: e.g. glass residual stresses, aerodynamic loads, car body deformations, and thermal stresses due to temperature variations. This damage geometry first consists of a local sub-millimetric indentation caused by the sharp indenter on the glass surface, then extended by a few selected radial branch cracks propagated by the orthoradial stress in the glass ply. The poly-vinyl butyral (PVB) layer and the inner glass ply, which remains undamaged at the end of a typical gravel impact, play a role in the final crack geometry.

The behavior of thin glass or laminated glass plates under the low-velocity impact of small projectiles has been reported in several publications [Behr et al. 1993; Cleary et al. 2020]. The authors analyzed the live crack formation, performed post-mortem analyses, and designed specific devices to reproduce such impacts in a controlled way. It can already be observed that the projectile impact angle and velocity are important parameters for the appearance of a star crack on the laminated glass sample. For an identical loading, a star crack is formed on thin plies, whereas a conical mark appears on the surface of thick plies. Examining the angle of incidence during the test reveals that only the normal component of the loading is involved in the damage [Grant et al. 1998; Cleary et al. 2020]. Such methods can be extended to many different glass compositions, such as ion-exchanged glass or borosilicate glass. The median crack evolution in strengthened glass can also be at stake, because the ply breakage through the whole thickness cannot be avoided once the crack reaches a critical depth. This phenomenon has been observed via X-ray observations [START_REF] Kang | Crack nucleation and growth during dynamic indentation of chemically-strengthened glass[END_REF]. However, few measurements are acquired directly on the indenter itself, except for its speed.

In the present study, a new device named blowpipe is introduced to replicate the conditions met for stone impacts, leading to similar crack patterns. This device is designed to measure both force and velocity, with a high temporal resolution, during the impact on the laminated glass sample. The measurement principle is inspired by the Kolsky-Hopkinson method [START_REF] Hopkinson | A method of measuring the pressure produced in the detonation of high explosives or by the impact of bullets[END_REF]; [START_REF] Kolsky | An investigation of the mechanical properties of materials at very high rates of loading[END_REF], with strain gauges positioned on an input bar. This input bar, inserted between the tested glass and the impactor, transfers the dynamic loading and is therefore used as the indenter. As it remains elastic under a graveling impact, the gauge strain measurements can be used to estimate the contact force. Additionally, a synchronized high-speed camera is used to monitor the star crack formation on the laminated glass: the Shimadzu® HPV-X2 camera is used at an acquisition rate between 1 and 5 Mfps, with a fixed definition of 400×250 pixels. A movie of 256 frames is recorded during a test. The resulting spatial resolution is approximately 20 µm/px.
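A quick sanity check of these camera settings can be computed from the quoted figures (256 frames, 1 to 5 Mfps, 400×250 pixels, ∼20 µm/px):

```python
# Quick check of the camera settings quoted above.
frames, definition, resolution = 256, (400, 250), 20e-6   # [-], px, m/px

for fps in (1e6, 5e6):
    duration = frames / fps                                # recording window
    print(f"{fps/1e6:.0f} Mfps -> {duration*1e6:.1f} us of footage")
# 1 Mfps -> 256.0 us; 5 Mfps -> 51.2 us, consistent with the fixed 51.2 us
# duration of the 5 MHz films mentioned in section 2.2.

fov = (definition[0] * resolution * 1e3, definition[1] * resolution * 1e3)
print(f"field of view ~ {fov[0]:.0f} x {fov[1]:.0f} mm")   # ~8 x 5 mm
```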
A long blowpipe

A first gravel resistance test is carried out with a projectile striking a flat laminated plate of about 300×300 mm² [Kishta 2018a,b, 2019]. The 3.2 g projectile is dropped from a specified height, from 50 cm to 2 m, guided through a vertically suspended pipe. The impact energy varies with the initial height of the projectile, giving the theoretical impact velocity at the end of its stroke. Many of these experiments were carried out to assess the influence of different parameters: projectile height, glass initial damage, and glass ply thickness. With such a device, it is possible to analyze and classify the influence of the laminate properties, considering a significant standard deviation in the results.

The different tests are not highly instrumented and thus not quantitative. Indeed, the projectile velocity, overestimated due to the friction in the pipe, is not measured. Moreover, the impact angle of the indenter is often not normal to the glass surface and varies with the pipe wall, making the observations difficult. It is therefore impossible to detect differences in cracking scenarios between a free bending sample and a sample laid on a rigid support.

Some high-rate films have been recorded during an impact. The 5 Mfps movie is triggered with an accelerometer placed on the glass surface. Two viewpoints have been chosen to identify the initiation and the crack propagation: an edge vision to follow the in-depth propagation of the damage and a 3/4-view that enables live monitoring of the star-crack formation. Firstly, cracks begin to propagate through the thickness of the impacted ply. Secondly, an unstable propagation happens once the crack tip reaches the middle of the first glass ply, and a centimeter-long star crack pops up at 1100 m/s, around 15 µs after the impact. This crack propagation velocity is measured with the high-speed camera with a low uncertainty thanks to the fine temporal resolution of the movie. The crack initiation is well documented via those videos and is consistent with other observations, for instance the in-depth crack evolution observed by Chai and Ravichandran [Chai and Ravichandran 2009].

However, the impact conditions are not quantitatively well described. The conditions of the arrest of the star crack are not fully understood and measured. The role of the PVB interlayer and its adhesion to the glass are not known; neither is the bending of the plate, nor the potential local PVB-glass delamination. These primary questions are still pending after the long blowpipe experiments. So, the main goal of the present study is to develop an instrumented test device, including indenter motion and force measurements. Following the crack evolution through the thickness and in 3/4-view is also a plus that will validate the first conclusions of the former tests. Moreover, a shorter device design is chosen to facilitate the impact tests and their repeatability.

A shorter (enhanced) blowpipe

The device aims to reproduce the star crack and not to make a census of all possible graveling tests. The short blowpipe prototype is represented in Fig 2.1a. It has been designed to fire a 3 g projectile at 5 m/s. The structure is cut from a 40×40 mm² section aluminum rail. A support is made from profile bars, making the adjustment of the vertical or horizontal position of the blowpipe possible. The barrel is clamped in a bore. When adjusted to the correct height, the tube allows the extremity of the indenter to be held over 2 to 3 mm. A pin, the projectile, and the spring are then placed in the tube. The loading is applied in the form of elastic energy in the spring, varying the stroke of the screw.
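As an order-of-magnitude sketch, the elastic energy stored by compressing the spring over a stroke x is converted into the kinetic energy of the projectile. The spring stiffness below is an assumed value, not a measured one; the 3 g mass and ∼5 m/s target are the figures quoted above.

```python
import math

m = 3e-3                       # projectile mass [kg]
k = 500.0                      # assumed spring stiffness [N/m]

def muzzle_velocity(x):
    """v from (1/2) k x^2 = (1/2) m v^2, neglecting friction losses."""
    return x * math.sqrt(k / m)

for x in (0.005, 0.010, 0.015):                    # screw strokes [m]
    print(f"x = {x*1e3:.0f} mm -> v = {muzzle_velocity(x):.1f} m/s")
# ~2 to 6 m/s, bracketing the 5 m/s design target of the prototype.
```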
As a remark, spring-aided systems have already been used by other authors for impact tests with balls [Sun et al. 2005b]. A shot is made when the pin is pulled out. Thanks to the large mass of the bench, the spring and projectile inertia are well absorbed during the test. In Fig 2.1b, one can see the instrumented indenter, inserted at the pipe exit. The red gauge cables are long enough (∼100 mm) to adjust the position of the indenter to the desired impact location on the glass ply.

Impact simulations are performed on a laminated glass (Fig 2.3) and on void (i.e. no sample at the tip of the indenter, Fig 2.4). The purple curves correspond to the measurements made via the two virtual gauges and the Lagrange diagram, using the wave equation (1.4). High-frequency components are present in the Lagrange diagram; however, the low-frequency components (green curves) of the Lagrange-measured force and speed (purple curves) match the reference (nodal fields computed at the cone bottom, black curves) quite well. A direct low-pass filtering of the gauge signals is also possible (i.e. before the Lagrange procedure), leading to smoother force and velocity signals. In a last model, the dispersion of the dynamic waves in the bar is considered; the equations introducing the Poisson effect are described in appendix A. Using this method, a physics-based filter penalizes the higher frequencies in the Lagrange diagram, and the obtained signal is closer to the reference. This model gives the best results and will be used on noisy experimental data.

The preliminary simulations of the blowpipe test justify using strain gauges to measure the speed and the force at the indenter-glass contact. Gravel impacts are then performed, synchronizing these measurements with an ultra-fast camera monitoring the crack formation.

Blowpipe prototype

2.2.1 First experiments

First tests are carried out with a Rockwell indenter. The camera is triggered with an accelerometer detecting the surface waves on the glass, i.e. the surface wave acceleration on the glass surface. The lack of accuracy and the impossibility of synchronizing the camera images relative to the gauge curves led us to choose another means to trigger the camera. In the following, a gauge signal threshold is used to start the camera shooting and to ensure a perfect synchronization of the devices.

Indenter body bending

In Fig 2.8a are localized the four strain gauges (principle further explained in section A.4) glued to the indenter at two longitudinal positions L and ℓ. Each gauge is connected to a quarter Wheatstone bridge, so that

ε_i = Δe_i/(U_i · f · G) · (R_1 + R_4)²/(R_1 R_4) = 4 Δe_i/(U_i · f · G)   (with R_1 = R_4)

with U_i the generator voltage, f the gauge factor, and G the gain. The curves in Fig 2.9d are similar and validate that the average of the two axisymmetric gauges makes it possible to suppress the measured bending induced by an off-centered impact between the projectile and the fixed indenter, expressing the mean signals:

ε_upp. = (ε_1 + ε_2)/2   &   ε_low. = (ε_3 + ε_4)/2

Due to dispersion, the velocity of the bending waves is much lower than that of the traction/compression waves. On the right of Fig 2.9, the bending signal is detected well after the first passage of the compression wave on the lower gauges. This difference is not significant at first, but it can affect the mean signal analysis, and thus the force and velocity measurements, at the end of the impact (from ∼35 µs).
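The quarter-bridge conversion above can be sketched as follows; the generator voltage, gauge factor, and amplifier gain are illustrative assumptions, not the values used in this work.

```python
U = 5.0        # assumed generator voltage [V]
f = 2.1        # assumed gauge factor [-]
G = 100.0      # assumed amplifier gain [-]

def bridge_voltage_to_strain(delta_e):
    """Strain from the measured bridge imbalance: eps = 4*de/(U*f*G)."""
    return 4.0 * delta_e / (U * f * G)

def mean_strain(eps_a, eps_b):
    """Average of two axisymmetric gauges, cancelling the bending component."""
    return 0.5 * (eps_a + eps_b)

e1, e2 = bridge_voltage_to_strain(1e-3), bridge_voltage_to_strain(1.2e-3)
print(mean_strain(e1, e2))     # mean axial strain, bending suppressed
```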
Synchronization of gauges and camera

As mentioned earlier, a good synchronization between the strain gauges and the camera images is essential. Indeed, the objective is to observe the opening and arrest mechanisms of the star cracks and, in parallel, to measure quantities at the base of the indenter cone via the gauge signals. Fig 2.10 shows the different periods at stake. Here the formation of the star-shaped crack takes place over 50 µs. The camera triggers when the first gauge measures a descending signal at a threshold of −150 mV, corresponding to a compression strain measurement of about 10⁻⁵. The level was chosen low enough to ensure the trigger function and high enough to be insensitive to the permanent signal noise (i.e. oscillations around 0 V with a small amplitude: under 0.1 V). The time interval between the strain gauge data points is a fraction of a microsecond. A 2 MHz movie is not an issue for the capture of the cracking, thanks to the ability of the camera (Shimadzu® HPV-X) to record the previous frames when receiving the trigger. However, a 5 MHz film has a fixed duration of 51.2 µs and misses the first moments of the impact test: elastic deformation and dynamic indentation. Still, it is possible to capture the crack opening and arrest. In the experiments presented in section 2.3.2, performed with the final device, this issue is corrected by using an oscilloscope as an external trigger for the gauge signal recording device and the camera frames.

Impacts with Rockwell indenter

This section deals with impact tests performed with a Rockwell (i.e. spherical) indenter. The curvature radius of the tip is close to 200 µm. The indenter body diameter at the gauge locations is 5 mm, and the indenter cone angle is 120°. The elastic properties of the indenter, used in the Lagrange diagram, are given in table 2.2.

3/4 view

The excellent synchronization between the camera and the strain gauges makes it possible to follow the star cracking and to discern different stages during the impact. In Fig 2.11, the first observation to be emphasized is a peak in force and velocity at the beginning of the impact. If the velocity and displacement signals show a double load, the latter is more difficult to observe on the force curve, which presents multiple peaks. The initial positive force (in Fig 2.11) is caused by small defects in the gauge positions on the bar. It has been corrected in the final device results in section 2.3.2 by making small adjustments of the gauge locations on the indenter body. At the same time, the force measured here seems slightly too high compared to the test dimensions: greater than anticipated in the elastic simulations. However, the energy (i.e. the force times the velocity, integrated over time) transmitted to the laminated glass during the impact is of the same order of magnitude as the theoretical energy initially loaded into the spring: approximately 80 mJ. The velocity peak magnitude is also as expected. The difference with the ∼50 mJ measured on the curve is partly due to the energy loss caused by friction in the pipe and to the kinetic energy remaining elsewhere (e.g. projectile, bench parts).
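The transmitted energy mentioned above, i.e. the force times the velocity integrated over time, can be evaluated numerically as sketched below on synthetic gauge-like series; the sampling step and pulse shapes are assumptions for illustration only.

```python
import numpy as np

dt = 0.2e-6                              # assumed sampling step [s]
t = np.arange(0, 60e-6, dt)
force = 150.0 * np.exp(-((t - 20e-6) / 8e-6) ** 2)     # synthetic force pulse [N]
velocity = 5.0 * np.exp(-((t - 20e-6) / 10e-6) ** 2)   # synthetic velocity [m/s]

energy = np.trapz(force * velocity, dx=dt)             # E = int F*v dt [J]
print(f"transmitted energy ~ {energy*1e3:.1f} mJ")     # tens of mJ, as in the tests
```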
It is possible to follow the evolution of the length of the star crack branches during the impact with the help of the camera. The use of monovision makes it challenging to measure the crack evolution outside the focal plane. Therefore, only the normalized lengths (i.e. current lengths divided by their final value) of the four visible cracks are shown in Fig 2.13. The fast crack opening during phase 2 is highlighted by the observed crack length evolution occurring at 20 µs. In phase 3, two plateaus are observable at 73% and 60% of the normalized length.

Side view

The 3/4-view film makes it difficult to observe the cracking in the thickness of the external glass ply and, more generally, any cracking that does not appear on the impacted side of the glass. The camera installation shown in Fig 2.14 allows the steps of crack formation through the glass ply thickness to be followed. The idea behind this through-thickness tracking is to reflect the light into the lens as the crack forms. The two 400 W spotlights are initially directed towards the indenter but perpendicular to the camera. This gives a dark image at the beginning of the test, except for a reflection on the metal part of the indenter, as observed in Fig 2.15a.

A dashed red line represents the plane between glass and air that behaves as a mirror. A dashed green line separates glass and PVB. Orange lines represent the reflections of the same crack, and blue lines highlight the indenter and its reflection. One may notice three distinct crack visualization zones: the upper zone is a reflection on the external surface of the external glass ply, in the middle is the direct observation, and at the bottom occurs a reflection on the interface with the PVB layer.

The triboluminescence phenomenon appears at time t = 22.2 µs and disappears after 1.4 µs. This brief flash of light and the luminous dot at the indenter tip are captured by the camera. A two-step crack opening is interpreted from the two global increases in gray levels. A peak at 22.8 µs is due to an ephemeral light emission during the fracture of the glass sample. The literature reports similar emissions of photons, electrons, and ions upon ceramic fracture [START_REF] Dickinson | The emission of electrons and positive ions from fracture of materials[END_REF][START_REF] Lawn | Fracture of Brittle Solids[END_REF][START_REF] Kawaguchi | Fractoluminescence in minerals[END_REF]. The second peak at 26 µs is the light reflection due to a permanent crack in the glass, which remains in the following images. Finally, this side view validates the rebound phase happening at around 40 µs and the secondary crack growth phase between 40 and 60 µs. The indenter stops sinking into the glass at the end of phase 4 (∼55 µs), which is consistent with the measured decreasing light intensity. It signals the end of the crack formation.
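The dating of the crack events from the gray-level evolution can be sketched as follows: the mean gray level of each frame of the side-view movie rises when light is reflected (or emitted) by a new crack. The movie is stood in for by a synthetic array; the threshold and magnitudes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(10.0, 1.0, size=(256, 250, 400))   # dark background movie
frames[111:118] += 4.0       # ephemeral flash (triboluminescence-like event)
frames[130:] += 6.0          # permanent reflection on a newly formed crack

mean_gray = frames.mean(axis=(1, 2))                   # one value per frame
jumps = np.where(np.diff(mean_gray) > 2.0)[0] + 1      # crude event detector
print("frames with a sudden gray-level increase:", jumps)   # [111, 130]
```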
Reproducibility

Excellent reproducibility of the impact tests performed with the blowpipe prototype is essential to understand the differences between the crack patterns. Velocity and force curves are plotted for six successful tests in Fig 2.18. The overall trends are relatively similar but not identical. Note that the same laminated glass sample, the same spring, and the same indenter were used, so that all experimental conditions were chosen to be identical. The observed slight variability is attributed to the friction and to the chosen impact location on the laminated glass plate. The imposed velocity of the projectile is also not perfectly tuned with the prototype, but a better control is achieved, via a cell force, with the final device. Those repeatable experiments allow us to draw parallels between tests and to create links between the different observations made with the camera. The prototype allows reliable velocity measurements of the indenter movement. However, concerning the force, positive peaks can be measured at the beginning of the test (Fig 2.18), corresponding to noise and a non-physical signal, which renders the force measurement less reliable than the velocity one, even if its averaged signal remains consistent (Fig 2.3). The noise level on the force signal will be further discussed at the end of the next section.

Final blowpipe device

Much of the information introduced in this section is taken from [Le Gourriérec et al. 2022].

Experimental set-up

In this section, several features are common to the previous prototype. They are still worth presenting because of the different details and enhancements provided by the final blowpipe device. A test using the final blowpipe device is fully detailed in Appendix B.

Experimental blowpipe device

The experimental device is inspired by a car manufacturer test for gravel impact resistance on windshields. Many impacts are performed on a large laminated glass plate, and the critical size of the defects and their occurrence over a large number of impacts determine whether the laminates pass an empirical criterion. These tests use cylindrical steel-made projectiles with a cone end. The cone tip has a 200 µm curvature radius and is made of synthetic diamond. The projectile velocity range is 3 to 8 m/s at the sample surface.

An aluminum frame supports the 15 cm long pipe. Inside the tube, the impactor is initially retained by two balls. A spring is then compressed between the impactor and a steel spacer, ending with a cell force. The cell force gives the initial static force in the spring. A compressed air intake pushes the balls aside during the release phase and clears the way out for the impactor. After a 5 cm stroke, the impactor hits the indenter, which is guided over only 3 mm at the pipe extremity to avoid undue friction from the tube. The steel-made projectile mimics the gravel mass and velocity. The stored elastic energy in the spring is converted into kinetic energy, and the cell force measurement then enables an estimation of the impactor velocity.

A PMMA frame, whose dimensions are identical to the laminated glass plate, supports the sample. The width of its sides is 5 mm, which enables the plate to bend under the indenter during an impact. The choice of PMMA as the frame material does not interfere with the global compliance of the bending laminate during the first tens of microseconds of the experiment. The key property is that the surface of the laminate opposite to the indenter must be a free surface to help the star crack to form. The crack evolution is fully monitored via a Shimadzu® HPV-X2 camera at a 1 Mfps minimum frame rate. Its focus on the known impact location can be finely tuned before the experiment. Moreover, the fixed indenter is instrumented with two pairs of strain gauges. The gauges of the same pair are glued on both sides of the cylinder. Two sites can be distinguished: the upper site (labeled 'A') and the lower site.

Synchronization between the high-speed camera and the gauges

The trigger adjustment requires high accuracy in dynamic experiments, as high-speed phenomena are occurring and need to be apprehended.
On the final device, the Shimadzu® camera is synchronized with a 0.1 µs uncertainty with the descending signal of an upper site gauge at a level of −0.2 V. Once this level is reached, a 5 V step is sent to trigger the camera. This step is also plotted over the gauge signals and allows for the synchronization of the camera and the strain gauges through the definition of the same time origin. The excellent synchronization between the measurement devices is shown in Fig 2.20. The local standard deviation of the gray level under the diamond tip of the indenter, corresponding to the median crack evolution in the thickness of the glass ply, is correlated to the tip displacement measured via the strain gauges. This gray-level evolution can be measured thanks to side lighting that is reflected into the camera lens upon the occurrence of the crack, as further explained in section 2.3.2.

Impact velocity range

Tests on 1.6 mm thick soda-lime glass plies are carried out to determine a link between the first peak velocity of the indenter extremity and the initial compression force in the spring measured by the piezoelectric cell force. The results are plotted in Fig 2.21. A linear interpolation is proposed and validated against the experimental data, allowing one to predict the peak velocity from the spring loading. The results are strongly dependent on the response of the impacted sample to the dynamic loading and may also vary with the friction in the tube, the fast crack formation on the glass right under the indenter tip, or the initial position of the indenter on the glass surface. However, the bench is deemed reliable, with results matching the linear regression.
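A minimal sketch of this linear interpolation is given below. The (F_cell, v_peak) pairs are made-up placeholders standing in for the data of Fig 2.21, not measured values.

```python
import numpy as np

F_cell = np.array([10.0, 20.0, 30.0, 40.0, 50.0])     # spring loadings [N]
v_peak = np.array([2.1, 3.9, 5.8, 7.7, 9.6])           # first peak velocities [m/s]

a, b = np.polyfit(F_cell, v_peak, 1)                   # least-squares line
print(f"v_peak ~ {a:.3f} * F_cell + {b:.2f}")

# Prediction of the peak velocity for a new spring loading:
print(f"F_cell = 35 N -> v_peak ~ {a*35 + b:.1f} m/s")
```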
Then, a fast crack growth occurs: the crack propagates through the whole glass thickness, and its radial growth rate approaches 1000 m/s (cf. the orange curve in Fig 2.25b). This fast propagation phase may only take place once the crack tip has reached half the glass ply thickness, beyond which the bending of the laminate induces tensile stress. This is in accordance with the already known effect of the impacted glass ply thickness on the gravel impact resistance [Grant et al. 1998]. During the decreasing phase of the second peak, a slower secondary crack growth is also observed (Fig 2.26(d-f)). A previous study [Kishta 2018a] reported the observation of crack growth through the thickness of the stack with the same camera.

Force measurement uncertainties

The excellent reproducibility of the force and velocity signals is illustrated in Fig 2.22. The force changes slightly from one impact to another in the same testing configuration. Even if the mean force imposed on the structure looks repeatable, the accuracy of the peak still needs to be discussed. Therefore, impacts on polystyrene foams are performed using the blowpipe device. Such impacts allow an uncertainty measurement, as the stiffness and impedance contrasts are very high between the steel body of the indenter and the foam: the transmitted force should be null or close to zero during the test. The force and velocity of three tests on polystyrene, performed with an initial static load in the spring F_cell of 30 N, are reported in Fig 2.28. Velocities are represented on the bottom row. They show the very accurate and reproducible measurement of the velocity peaks that may be found in a dynamic indentation test on laminated glass with the blowpipe device. However, the raw force signal is noisier. The mean force is obtained with a moving average filter applied over 24 µs. It oscillates slightly around 0 N. The peak amplitude may rise to at most 200 N in the duration of interest of these particular experimental cases. The oscillations and the high amplitude of the reflected waves during an impact on the glass surface are partly due to the impedance ratio between glass and steel: Z_steel/Z_glass = 3. The low level of transmitted force is also explained by the conical shape of the Rockwell indenter, which allows Hertzian contact during the first microseconds of an experiment: the designed angle of 130° close to the diamond tip provokes spurious elastic wave reflections at the cone surface. The mean force over 24 µs is plotted (red line, --), showing more minor oscillations around 0 N. The stress imposed on the glass surface is locally high, allowing crack initiation. Still, the force signal reported at the bar end remains challenging to interpret and illustrates the large amplitude of the reflected waves during a test on laminated glass. Considering these results, it was chosen in the model to simulate the whole indenter body and to apply the measured force signal at the contact section with the projectile (i.e. at the top of the indenter). As the amplitude of the first impulse reaches at least 2.5 kN (Fig 2.30), the relative error associated with the determined force uncertainty decreases to 8%, versus 90 to 100% at the cone bottom.

Dynamic indentation model

The imposed force and velocity on the glass sample are determined from strain history measurements by using a Lagrange diagram. These measurements are representative of the indentation behavior during the experiment.
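To make the wave-transport step behind this Lagrange-diagram processing concrete, the sketch below transports descending and ascending strain waves measured at a gauge site to the cone bottom along the elastic characteristics of the steel bar, then superposes them into a force and a velocity. This is a minimal illustration under simplifying assumptions (1D elastic bar, separated wave trains, generic material constants): the function names shift and tip_force_velocity are hypothetical, the sign conventions are assumptions, and the periodic wrap-around of np.roll is ignored for short records.

```python
import numpy as np

# Assumed bar properties (generic steel values, not the calibrated ones)
E = 210e9                      # Young's modulus [Pa]
rho = 7800.0                   # density [kg/m^3]
A = np.pi * (2.5e-3) ** 2      # cross-section of the 2.5 mm radius bar [m^2]
c = np.sqrt(E / rho)           # 1D elastic wave speed, about 5190 m/s

def shift(signal, delay, dt):
    """Delay (delay > 0) or advance (delay < 0) a sampled signal in time."""
    return np.roll(signal, int(round(delay / dt)))

def tip_force_velocity(eps_down, eps_up, d_gauge_tip, dt):
    """Transport the descending (eps_down) and ascending (eps_up) strain
    waves from the gauge site to the cone bottom along the characteristics,
    then superpose them into force and particle velocity at the bar end."""
    tau = d_gauge_tip / c
    eps_d = shift(eps_down, +tau, dt)   # the descending wave reaches the tip later
    eps_u = shift(eps_up, -tau, dt)     # the ascending wave left the tip earlier
    force = E * A * (eps_d + eps_u)     # superposition of both wave trains
    velocity = c * (eps_u - eps_d)      # particle velocity at the cone bottom
    return force, velocity
```

With the two gauge sites of the instrumented indenter, the descending and ascending contributions can be separated by time-shifting the two records, which is precisely what the Lagrange diagram visualizes.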
Hence, it is not yet possible to conclude about the damage zone behavior, as the measured signal does not only depend on the response of the impacted sample but is also sensitive to the boundary conditions, the sample geometry, the material, and the damage evolution. In order to identify the behavior of a locally damaged zone at the glass surface, the use of dynamic simulations is proposed to interpret the measurements as a response of the local damage created on the glass ply. The idea is to concentrate the non-linear response in a small element (2) at the glass surface. This element is tiny compared to the whole sample (3) and the indenter (1), so it has virtually no inertia, which implies that the measured force at the end of the input bar (1) is also locally applied to the element (2) and to the plate (3). U_e, an extrapolation of the laminated glass displacement at the initial contact point between the indenter and the glass surface, is calculated via a linear visco-elastic simulation reproducing the diamond tip and the glass plate with the force boundary conditions of the experiment. By comparing the numerical result to the experimental displacement at the cone bottom, U_x, one can evaluate the indentation depth δ = U_x - U_e, i.e. the effective displacement discontinuity across the damage region. The behavior of the non-linear zone (2) is then finally captured by the evolution of the force at the indenter, F, as a function of the indentation of glass, δ. So, the objective is to identify an effective behavior of the local damaged zone by comparing the experimental response of a sample to the simulated response of a visco-elastic laminated glass. The same philosophy was pioneered by Lawn [Lawn, Fracture of Brittle Solids], where the author considered the overall quasi-static response of the indented glass sample as a superposition of the reversible elastic behavior of the set-up and of the non-linear local plastic zone created in a radial/median crack system. The first step is a visco-elastic dynamic simulation to obtain the damage-free response under the same imposed loading. To do so, the set-up geometry is meshed and studied in the Abaqus® CAE solver via a dynamic explicit scheme, introducing the steel-made cylindrical body down to the diamond indenter. The tip is in contact with the glass surface, modeled as an elastic material, respecting the normal incidence between its revolution axis and the plate surface. The different parts and their characteristics are reported in Table B.1, which contains the data used in the simulations for the steel-made indenter and its diamond tip, the glass, and the PVB material. The part geometries replicate the experimental device. The steel-made body cylinder is therefore 60 mm long and has a 2.5 mm radius. The indenter ends with a 200 µm curvature radius diamond set initially on the glass surface. The layer thicknesses of the 100×100 mm² plates are respectively 1.6, 0.76, and 1.1 mm from top to bottom. The free bending is ensured by a PMMA frame. It is then possible to identify a homogenized behavior of the locally non-linear indented glass by plotting the experimental force, which should be identical in the simulation and the experiment (neglecting the size and inertia of the small volume responsible for the non-linear behavior), versus the indentation δ (see Fig 2.31b). This curve is an average obtained by simulating eight impact tests on laminated glass with initial speeds ranging from 5 to 8 m/s. The geometry of the stack is unchanged.
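As a minimal numerical sketch of this identification step, assume the measured displacement U_x, the simulated visco-elastic displacement U_e, and the force F are available as arrays on a common time base (all names below are hypothetical):

```python
import numpy as np

def indentation_law(U_x, U_e, F):
    """Indentation depth delta = U_x - U_e and the associated
    force-indentation pairs, sorted to obtain a monotonic curve."""
    delta = U_x - U_e
    order = np.argsort(delta)
    return delta[order], F[order]

def average_law(curves, n=200):
    """Average several (delta, F) curves, e.g. from the eight impact
    tests, on a common indentation grid (each delta must be increasing)."""
    d_max = min(d.max() for d, _ in curves)
    grid = np.linspace(0.0, d_max, n)
    F_avg = np.mean([np.interp(grid, d, F) for d, F in curves], axis=0)
    return grid, F_avg
```

The resulting averaged F(δ) curve is the raw material for the local plastic law identified next.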
Only the mean measured force is used, because the higher frequencies of the force signal are induced by the dynamic effects coupled with the device geometry. Fig 2.31a shows the result obtained after considering a local confined damage zone under the indenter tip. The damage behavior is described during the loading, up to the maximum force value, with a plastic law, introducing a characteristic length which is approximately the width of the crater created on the glass surface: 600 µm. Indeed, the plasticity of the damage zone is described using the direct F vs δ law (Fig 2.31b) and a characteristic volume length υ, which is for a large part arbitrary but needed to express the measurements in the language of a constitutive law for a volume element. Therefore, the stress σ = F/υ² and the plastic strain ε_p = δ/υ are defined to comply with the plastic law used in the 3D simulation. Of course, this model only describes the loading phase and cannot be used to characterize the unloading. The displacement obtained at the bottom of the cone with the plastic model (U_ep) matches the experimental results (U_x), meaning the model replicates the correct experiment kinematics.

Comments on the indentation zone

The indented zone is challenging to analyze. The multiple fissurations at the indenter tip after an impact can be seen on a SEM image (Fig 2.32). The topology shows a complex comminution zone with many cracks in the spherical, newly created crater at the glass surface. Two star cracks extend outside the main damage, up to a few millimeters long. A circular flake around the indentation is present at the sample surface, caused by crack propagation just beneath the glass surface. These are not major defects. Observing these two images shows how elementary the 1D approach developed to characterize the damage volume behavior under dynamic indentation is. It undoubtedly can be improved and made more complex, but describing laminated glass under gravel pitting does not necessarily require a very fine description of the complex phenomenology occurring at the end of an indenter. Therefore, the non-linear local behavior described in this section is one of the best compromises that can be made to represent this kind of impact.

Conclusion

This chapter has introduced a newly designed device to analyze the star crack formation via dynamic indentation on laminated glass. The input force and velocity are measured close to the damaged surface of the sample and synchronized with ultra-high-speed camera imaging, with one tenth of a microsecond uncertainty. Depending on the camera location, one may measure the in-depth propagation of the median crack or the appearance of the radial branches. The difference between the experiment and the linear visco-elasto-dynamic simulation leads to an adequate unidimensional elastic-plastic description of the confined damage for such dynamic indentation. Even if the device geometry is quite different from a true gravel impact configuration, the indenter large-scale geometry and stiffness, and the projectile initial velocity, mass, and size can be considered close to real gravel dimensions and impact velocities. Moreover, the uncertainty in the force measurement could be reduced by lowering the projectile and bar impedance to a value comparable to the glass impedance. But this requires a change in material or cross-section, which would denature the test.
Thanks to these quantitative measurements of the star crack formation, it is possible to determine the behavior of the locally damaged laminated glass and potentially to see the influence of various parameters, such as the ply geometry or the material, on the damage size and dissipated energy. From an experimental perspective, it is also possible to add a local measurement of the deflection on the opposite side of the sample to directly assess the experimental indentation δ and compare it with the visco-elastic simulation results. A device improvement towards a higher range of initial indenter velocities (more than 10 m/s) can be planned by increasing the dimensions of the load spring or by bringing the compressed air intake directly onto the projectile section. The device can readily be adapted to other studies: a comparative study of varying glass shades, PVB formulations, and so on. Alternatively, one could change the indenter shape or material, for instance to test carbide tool indentation via a controlled and instrumented experiment. The final objective of this study is to predict the star-crack length during a gravel impact. The next step using the blowpipe device should be an elasto-plastic simulation, plotting the stress state in the elastic glass plate and comparing it with the glass stress at break under dynamic loading. The results could be compared with an enhanced crack propagation monitoring to ensure that the crack propagation and arrest are well described in the simulations. Alternatively, it is also possible to initiate a radial crack in the simulation and study its propagation during an impact while still modeling the non-linear zone at the indented location.

Chapter 3

Fragmentation under impact

Context

Characterizing and quantifying the mechanical response of laminated glass during a low-velocity impact, especially structural degradation and integrity, is a critical issue for laminated glass manufacturers. Such impacts can be caused by people hit by a car, people falling from their own height, or hand tools during a burglary, for instance. Each loading case involves velocities on the order of 10 m/s. Many experimental studies have been carried out to fully understand the mechanics of laminated glass under impact, e.g. the semi-rigid or head-form impact [Yu et al.]. Describing the response of the whole fragmented laminated glass plate is rarely performed in the case of rigid-body impact. These experimental works have brought out significant characteristics of the dynamic behavior of laminated glass through a few key quantities. However, these measurements do not provide a global quantitative picture of the whole plate. In particular, because of the very heterogeneous nature of fragmentation and deformation, it is tough to validate numerical modeling from such observations. The present study aims to introduce an instrumented test that provides a more detailed description, global and well-resolved both spatially and temporally.

Ball-drop experiment

The present study focuses on the low-velocity rigid-body impact on laminated glass in the context of anti-break-in applications, as described in standard EN 356 [Norme Européenne NF EN 356 2000], corresponding to a series of three successive 4 kg steel ball drops from a height in the range of 1.5 m to 9 m, on a 1 m² laminate. This standard focuses on the prevention of perforation, which is also of utmost importance for protecting people against falls. Several steps are observed after an impact between the ball and the laminated glass sample [Nourry 2005].
First comes the formation of the radial cracks, expanding from the impact point to the edges, together with the first glass fragments. Secondly, the crack density of the laminated glass plate increases with bending. The number of radial cracks increases slightly, but orthoradial cracks are created at each loading step, up to perforation. Sometimes, in the central region where the impactor hits the surface, a dense fragmentation takes place, leading to the formation of a cohesionless powder (comminution zone). Thirdly, the PVB is stretched, and delamination of the glass fragments occurs. The two latter phenomena are coupled and account for the majority of the energy dissipated during an impact on the laminated glass.

Adhesion in steady-state regime

During an impact on laminated glass, the glass fragmentation dissipates a small amount of energy, directly linked to the total crack area created on the glass plates. This dissipation reaches 1 to 2 percent of the initial kinetic energy of the steel ball [Nourry 2005]. Thus, this phenomenon will not help to stop the projectile. However, at each created crack location, delamination and stretching of the interlayer occur. The adhesion strength between the layers becomes an essential parameter for the impact resistance: the higher the adhesion, the more energy is needed to separate the PVB from the glass. But in practice, too much adhesion prevents the PVB from being stressed over a larger area. The localized stress initiates a tearing of the polymer layer, which finally dissipates only a small portion of the initial energy of the projectile. Conversely, a lower adhesive strength allows the stretching of the PVB bulk over a more significant distance, activating visco-elastic dissipation in the interlayer [Fourton 2019]. In the latter reference, the author studied the coupling between the dissipation at the glass-PVB interface and in the polymer bulk via TCT tests, changing the imposed tensile velocity and the interlayer thickness. In the literature, models such as a dissipation via a cohesive interface simulation expressed with a power law [Landis, Crack velocity dependent toughness in rate dependent materials] can be found. The steady-state model associated with Fourton's work uses a visco-plastic description of the energy dissipation at the crack tip. It links the crack opening velocity with the plastic strain rate. This plastic strain rate ε̇p_ij is given in J2-flow theory by

ε̇p_ij = (3/2) ε̇p σ'_ij / σ_eq,   with   ẇp = σ_ij ε̇p_ij

where σ'_ij is the stress deviator and σ_eq the von Mises equivalent stress. Understanding the author's model is of interest to describe a ball drop: at the crack tip, the dissipation mechanisms are similar to those described by the author. Using tracking with a fast camera during the test may lead to a quantitative estimation of the total decohesion area. In that case, one may deduce the amount of incident kinetic energy dissipated in the breakage of the glass, then in decohesion, by linking the advance of the breakage in the cohesive zone to the cracking area correlated to the energy dissipated in the bulk of the material. This leads to a good performance evaluation of the material under impact.
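As a small numerical transcription of the two J2-flow relations above (a generic sketch with the von Mises equivalent stress written explicitly; the function names are illustrative and a non-zero stress state is assumed):

```python
import numpy as np

def plastic_strain_rate(sigma, eps_p_dot):
    """J2-flow rule: the plastic strain rate tensor is aligned with the
    stress deviator and scaled by the equivalent plastic strain rate."""
    dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)    # deviatoric stress
    sigma_eq = np.sqrt(1.5 * np.tensordot(dev, dev))   # von Mises stress
    return 1.5 * eps_p_dot * dev / sigma_eq

def plastic_dissipation_rate(sigma, eps_dot_p):
    """w_dot^p = sigma_ij * eps_dot^p_ij (double contraction)."""
    return np.tensordot(sigma, eps_dot_p)
```

For a uniaxial stress state, this reduces to ẇp = σ ε̇p, the familiar 1D dissipation rate.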
Cannon device

Due to the large size of the laminated glass sample and the use of a spherical steel ball as proposed in the EN 356 norm, it is quite complicated to perform quantitative measurements of all the phenomena that occur during a ball-drop test, whether or not the sample has been perforated being the only required output for the standard specifications. Initially developed by Nourry [Nourry 2005], the cannon test device aims at reproducing hard impacts (with blunt and rigid objects) on laminated glass in a controlled way, representative of serviceability or ultimate limit states via a low-velocity/large-mass impact. The device allows testing different specimen geometries and materials to compare their impact performances quantitatively, with good reproducibility, exploring a large range of impact velocities from 2 to 15 m/s. The impacted samples are square plates of 300×300 mm²: this surface is about ten times smaller than that of ball-drop tests and corresponds to the size of the damaged surface during standard EN 356 impacts. Here, the fastening of the plate is well controlled and reproducible: two steel rings, polished with a curvature radius of 1 mm in contact with the glass to avoid any indentation, are tightened by four screws at 10 N·m. Additionally, Decourcelle highlighted that the mechanical wave propagation on the glass sample, provoking the first orthoradial cracks, does not significantly depend on the selected sample size (from 300 mm to 1 m side length) [Decourcelle 2011], further legitimating the use of smaller samples than those used for standard EN 356 impacts. A telemeter measures the hemisphere displacement at 30,000 Hz, with a 3 µm uncertainty. These high-rate measurements allow the evaluation of axial accelerations and velocities through finite differences. The kinetic energy can be estimated assuming the impactor to be perfectly rigid during the test. During a typical fragmentation test with the cannon, the half-sphere apex impacts the outer glass ply with an initial velocity v_0 (from 2 to 15 m/s) at normal incidence. In the first microseconds after impact, the laminate behaves elastically for the most part, while a small zone of the first ply right under the impactor head is comminuted under intense multiaxial loading. Depending on the impact velocity and the sample strength, the first radial cracks may appear from the plate center within a few microseconds to a few hundred microseconds. As the plate bends, the radial cracks extend and sometimes multiply, while orthoradial cracks are also formed. More and more cracks are created during the whole impact duration. When the fragmentation is well advanced, the stretching (and possibly tearing) of the PVB interlayer dissipates a large amount of the impact energy as the impactor stroke increases. This dissipation occurs at the PVB bridge locations that connect all the glass fragments [Seshadri et al. 2002]. These steps are quite similar to the ball-drop experiments. From an industrial point of view, repetitive tests were carried out on sample series to detect the influence of different parameters: the ply thickness and the adhesion. Many conclusions were reported, but the studies focused neither on the fragmentation that occurs during such an impact nor on the response of the fragmented composite submitted to stretching, tearing, and delamination. The principal goal of the new experiments performed in the present study is to replicate such experiments while measuring glass fragmentation (total crack length, fragment density and geometry) and finally estimating the total dissipated impact energy.

Interest in full-field measurements

Measurements of the impactor motion allow a better understanding of the behavior of laminated glass under a blunt impact.
However, adding direct measurements of the plate motion during an impact sheds new light on the kinematics of the glass fragments. In the S-DIC technique, a finite-element mesh of the observed surface can be flexibly deformed to fit the real deformed 3D shape through the minimization of the quadratic norm of the correlation residuals (image difference after correcting for perspective and shape), allowing for shape characterization and kinematic field measurement of complex 3D objects, even in large-deformation configurations. Finding the correspondence between the 3D coordinates of known points on the specimen and their projected 2D coordinates in the retinal images captured by the stereo cameras (two cameras or more) is crucial for accurate 3D shape reconstruction. This is performed in a so-called "calibration stage", which involves identifying the projection matrix that bridges the 3D coordinates of points in the lab frame to their 2D projections in the camera image plane.

Guideline

To analyze the cannon test in the spirit of the TCT, it is proposed to measure the 3D displacement field as well as the velocity of the fragments (giving the driving velocity that locally dictates the PVB delamination and stretching) under the projectile shock using the S-DIC technique. A stereo camera system is set up with two ultra-high-speed cameras. The back surface of the glass is monitored, measuring the formation and kinematics of the glass shards. Direct measurements on the sample are compared to the impactor kinematics. A model is then proposed to evaluate the delamination of the PVB-glass interfaces and determine the location that is the most likely to tear during a secondary impact. The residual resistance of an impacted and damaged laminated glass is discussed in the final section, where repeated successive shots on damaged laminated glass samples are presented and discussed. First, centered impact tests are performed on pre-cracked samples to compare the influence of the cracking pattern on the dissipation of the projectile kinetic energy. Then, non-centered impacts on larger laminated glass plates are studied to analyze the behavior of fragmented and partially delaminated samples under secondary dynamic loads.

First cannon experiments

Laminated glass samples

The laminated glass sample used in this experiment is composed of two 4 mm thick Planiclear® soda-lime glass plies and a 0.76 mm PVB interlayer Saflex® RB41. The laminates are assembled in a clean room, degassed using an oven, and placed in an autoclave, where the temperature is set to about 130°C and the pressure to 13 bars for several hours. During the process, the temperature in the chamber rises up to 80°C before the cooling phase, which extends over 3 hours under atmospheric pressure. The incident angle between the camera axes and the cannon principal axis varies on each side from 30° to 45°, according to the remaining space around the large test bench.

Set-up & equipment

Observing the back surface of the laminated glass at high temporal resolution has two motivations: first, the measurement of the surface deformation, and second, the characterization of the glass ply fragmentation. Unfortunately, these two goals are mutually exclusive. Indeed, the kinematics of the back surface can be obtained from S-DIC by first depositing a surface marking with a speckle pattern. In turn, the formation of cracks is mostly hidden by the presence of the opaque paint.
To circumvent this difficulty, a black and white speckle pattern is applied on the back surface of the laminated glass sample, covering the bottom-right part of the glass and providing sufficient Gray Level (GL) contrast for S-DIC measurements (Fig 3.5). The top-left part is left transparent, but a thin tissue paper can be interleaved between the projectile and the sample to observe the formation of crack patterns on a more uniform background without adding spurious interference with the background scene. The interest of painting only one half of the plate is to compare the evolving cracking pattern of the laminated glass and the deformed surface measured via S-DIC. Access to the crack evolution gives information on the duration of the elastic deformation phase of the sample, the formation of the radial cracks, then of the orthoradial ones, the beginning of the tearing phase, and the perforation step, for instance. The S-DIC measurements can also be interpolated on the non-speckled area of the plate; measuring the evolution of the cracks via the residuals is therefore possible on the deformed surface. The trigger solution uses the same photoelectric sensor that detects the projectile presence a few milliseconds before its contact with the impactor, with a 40 kHz sampling rate. This triggering method is also used in the tests presented in section 3.3.

Impactor movement

Telemeter measurements are performed to access the displacement of the impactor half-sphere. The telemeter is placed at 90° with respect to the axis of the test. The laser beam created by the source is deflected by a mirror, whose angular deflection, close to 45°, has been checked to be as accurate as possible. The beam is reflected on the face of the half-sphere and returns to the receiver using the same corner mirror. The impactor stroke is deduced from the difference in height of the incident beam on the sensor. The speed of the half-sphere is then deduced by centered finite differences of the obtained displacement, averaged over five consecutive points to filter out high frequencies. From this speed measurement, and assuming a rigid-body movement, the impactor kinetic energy can be deduced from its velocity V as E_c = mV²/2, with a constant impactor mass m = 4.34 kg, whose value is close to that of the ball used in norm EN 356. The impactor acceleration is also estimated via finite differences. With these 1D curves, it is possible to comment on the telemeter accuracy (Fig 3.6).

FIG. 3.6 - Superimposed telemeter measurements for an initial impactor velocity close to 5 m/s. Just after the contact between the glass and the half-sphere, a divergence of the curves is observed. The kinetic energy is plotted conserving the sign of the velocity curves.

The impact energy is dissipated at the location of the cracks, combined with the interlayer stretching and tearing between the glass fragments [Elzière 2016]. The tests last from 16 ms to 20 ms, with a maximum stroke after the contact with the plate varying from 32 mm to 42 mm. Sometimes the second plateau is not observed, e.g. in tests 13 and 14 in Fig 3.6; they correspond to the samples with the higher crack density formed after the elastic deformation phase of the composite plate. Even with the numerical noise induced by the two successive differentiations of the telemeter acquisition, the acceleration curves highlight two events. A first negative peak, before 5 ms, is associated with the contact between the sphere and the glass surface. A second, broader peak of lower amplitude starts around 6.5 ms and persists throughout the experiment, beyond 10 ms; it is synchronized with the finite-strain and delamination regime of the plate.
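The finite-difference processing just described can be sketched in a few lines. This is an illustration, not the acquisition code actually used; the smoothing choices mirror the text (centered differences, 5-point averaging), and with the telemeter sampled at 30,000 Hz the time step is dt = 1/30000 s.

```python
import numpy as np

def impactor_kinematics(u, dt, m=4.34):
    """Velocity, acceleration and kinetic energy of the impactor from the
    telemeter displacement u [m], sampled with a time step dt [s]."""
    v = np.gradient(u, dt)                           # centered finite differences
    v = np.convolve(v, np.ones(5) / 5, mode="same")  # 5-point moving average
    a = np.gradient(v, dt)                           # second differentiation
    E_c = 0.5 * m * v ** 2                           # E_c = m V^2 / 2, m = 4.34 kg
    return v, a, E_c
```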
Field measurements

The main objective of this section is the kinematic field measurement via S-DIC, whose principle is further detailed in section 1. The mean displacement field is interpolated on 5 mm long radial elements to finally get radial and axial displacement fields. The radial range of the interpolation is between 20 mm and 120 mm. An extrapolation method is proposed using the radial displacement evolution at the known locations, i.e. from 20 mm to 120 mm. Boundary conditions are enforced: the axial symmetry implies a null displacement derivative at R = 0 mm, and a null displacement along the edge at R = 140 mm. In addition, the amplitude of the instantaneous deformed surface is corrected thanks to the telemeter curve, which measures the displacement at R = 0 mm. A fourth-order polynomial is used for the extrapolation between 0 mm and 150 mm (the plate maximum radius). The displacement and velocity extrapolations are represented in Fig 3.8, where the displacement curves correspond to the deformed back surface at each time step. From the estimation of the velocity norm in the laminated glass plate, it is possible to quantify the contribution of the plate kinetic energy to the impact energy. The velocity is considered homogeneous through the plate thickness, so the sample kinetic energy can be formulated as an integral over the glass surface

E_k^lami = (h_g ρ_g + h_pvb ρ_pvb)/2 ∫_x ‖V(x)‖² dx    (3.1)

with ρ the density of the glass (g) or the PVB, h the thickness of each material, and ‖V(x)‖ the velocity norm at point x. This kinetic energy is displayed in Fig 3.9a. The telemeter measurement of the impactor kinetic energy (i.e. E_telemeter) can be subtracted from its initial 55.7 J value (i.e. E_plateau, the mean plateau value). This difference corresponds to the energy stored and dissipated during the test: E_lami = E_plateau - E_telemeter. This energy is displayed with two abscissae: time (Fig 3.9b) and the impactor penetration (Fig 3.9c). The first gap between the two energy plateaus is associated with the kinetic energy transfer from the impactor to the plate. Therefore, subtracting the kinetic energy from the signal gives a smoother energy evolution during the test (plain blue curve).

Axisymmetric description of the experiment

Calibration of the Stereo Vision Camera System

Before introducing an enhanced version of the prior axisymmetric measurement, new calibrations have been performed to localize the projection in the world coordinate system using a well-known calibration target. The general principle of computer vision is introduced in section 1.4.2. In this study, the calibration target is a dihedral ("open-book") with two square faces. The projection matrix [Π] is identified for each camera by minimizing the reprojection error

T = Σ_{n=1}^N (x_n - [Π]X_n)²    (3.2)

with respect to Π, where the X_n are the 3D coordinates of the N target points and the x_n their 2D image coordinates. Initialization is validated by projecting the 3D mesh over the 2D images for each camera (Fig 3.10d-e).
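A standard way to obtain such a projection matrix is the Direct Linear Transform (DLT), which replaces the non-linear minimization of (3.2) by a linear least-squares problem solved by SVD. The sketch below is a generic textbook version under that assumption, not necessarily the exact procedure used for these calibrations:

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Least-squares estimate of the 3x4 projection matrix [Pi] from N >= 6
    correspondences between 3D target points X (N, 3) and their 2D image
    projections x (N, 2), in homogeneous coordinates."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        P = np.array([Xw, Yw, Zw, 1.0])
        rows.append(np.concatenate([P, np.zeros(4), -u * P]))
        rows.append(np.concatenate([np.zeros(4), P, -v * P]))
    A = np.vstack(rows)
    _, _, Vt = np.linalg.svd(A)      # minimize ||A p|| subject to ||p|| = 1
    return Vt[-1].reshape(3, 4)      # singular vector of the smallest value
```

Reprojecting the target points as x̂_n = [Π]X_n (after division by the homogeneous coordinate) then gives the residual T of (3.2) used to validate the initialization.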
New axisymmetric S-DIC method

As introduced in section 1.4.2, the S-DIC problem is ill-conditioned. The ideal solution would be to support the problem formulation with a mechanical model of the laminated glass response. However, the fragmentation of the glass plies, the delamination of the PVB, and its extension, involving significant geometric non-linearities (large displacements and rotations), make this route quite challenging. Alternatively, the axisymmetry of the set-up is a very strong constraint, which may be used for regularization without any prior assumption about the mechanical model of the monitored sample. For global S-DIC, the T3-mesh has 4 mm long elements, superimposed on the reference frames in Fig 3.11. There is a total of 2366 elements, each of which possesses 32 integration points, so that all the gray-level information in the images is considered during the S-DIC calculation. Due to the geometry of the whole set-up, the problem is axisymmetric, even if this symmetry is expected to be broken locally in the long run as the laminated glass is fragmented and finally torn apart. A global axisymmetric description is assumed next for the minimization problem. It is a non-invasive technique that constrains the unknown displacement field (a 3D vector field over a 2D surface) to a much simpler one (a 2D displacement field over a 1D radial profile). Such a regularization can be formulated with B-splines, for instance [Chapelier, Free-Form Deformation Digital Image Correlation (FFD-DIC): A non-invasive spline regularization for arbitrary finite element measurements], but a piece-wise linear description is adopted in the following. It can also be seen as a reduction of the degrees of freedom of the initial mesh nodal displacements. Expressed in cylindrical coordinates, these new shape functions constrain the displacement field between two images, U(X) = [U_X(X, Y, Z), U_Y(X, Y, Z), U_Z(X, Y, Z)], to a purely radial and out-of-plane displacement U_ax(r) = [U_r(r), 0, U_Z(r)]. The orthoradial displacement and the dependence on the angular position are assumed to be globally null, which remains a reasonable assumption as long as no tear opens in the interlayer (a phenomenon, local at first, that propagates and breaks the axisymmetry of the deformation even at a global scale). The radial and axial displacements depend on the radial distance r between the impact location and the nodal position. The FE mesh center is localized at the initial impact position, i.e. the center of the laminated glass plate, with a 200 µm uncertainty. The radius is directly r_i = √(X_i² + Y_i²) and the angle α_i = arctan(Y_i/X_i). The transformation from cartesian to cylindrical coordinates is written as a matrix [P], combining the smaller rotation matrices P_i for each node i

P_i = [ cos α_i   -sin α_i   0
        sin α_i    cos α_i   0
        0          0         1 ]    (3.3)

such that

U(X_i) = P_i U_ax(r_i)    (3.4)

The axisymmetric description is enforced via the introduction of 1D linear shape functions (lines in Fig 3.12), limiting the displacement measurement to the r and Z directions. The current nodal positions of the 2D mesh (circles in Fig 3.12) are represented on the new shape functions.

FIG. 3.12 - 1D shape functions q_i(r) ensuring displacement continuity, in straight lines. Interpolated T3-nodes (Fig 3.11) shown as circles on the new shape functions. The plot illustrates 15 mm long elements for better readability, while 4 mm elements were actually used in the axisymmetric S-DIC measurements.
The 1D shape functions are noted q_i(r) and assembled in a second transfer matrix [Q], so that the new Hessian and right-hand side become

[H]_ax = ([P][Q])^T ( Σ_{c=1}^{N_c} [H]_c ) [P][Q],   b_ax = ([P][Q])^T ( Σ_{c=1}^{N_c} b_c )    (3.5)

and the minimization problem reads

[H]_ax δU_ax = b_ax    (3.6)

The global 3D displacement increment δU is updated via the two transfer matrices and the nodal axisymmetric displacement δU_ax:

δU = [P][Q] δU_ax    (3.7)

This reduced problem is far better conditioned than the original S-DIC one (the number of degrees of freedom decreases from 7098 to 84). Therefore, no additional regularization (e.g. Tikhonov) is needed to perform axisymmetric S-DIC.
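The assembly of the two transfer matrices and the reduced solve of (3.5)-(3.7) can be sketched as follows. This is a simplified illustration (dense matrices, hat functions built by hand, hypothetical function names), not the actual S-DIC implementation:

```python
import numpy as np

def rotation_matrix_P(alpha):
    """Block-diagonal assembly of the per-node rotations P_i of eq. (3.3)."""
    n = len(alpha)
    P = np.zeros((3 * n, 3 * n))
    for i, a in enumerate(alpha):
        P[3*i:3*i+3, 3*i:3*i+3] = [[np.cos(a), -np.sin(a), 0.0],
                                   [np.sin(a),  np.cos(a), 0.0],
                                   [0.0,        0.0,       1.0]]
    return P

def interpolation_matrix_Q(r_nodes, r_1d):
    """[Q] maps the axisymmetric dofs (U_r, U_Z at each 1D radial node) to
    the (u_r, u_theta, u_z) dofs of every 2D mesh node; u_theta stays null."""
    n2d, n1d = len(r_nodes), len(r_1d)
    Q = np.zeros((3 * n2d, 2 * n1d))
    for i, r in enumerate(r_nodes):
        j = int(np.clip(np.searchsorted(r_1d, r) - 1, 0, n1d - 2))
        w = (r - r_1d[j]) / (r_1d[j + 1] - r_1d[j])     # linear hat functions
        for comp, row in ((0, 3 * i), (1, 3 * i + 2)):  # U_r -> u_r, U_Z -> u_z
            Q[row, 2 * j + comp] = 1.0 - w
            Q[row, 2 * (j + 1) + comp] = w
    return Q

def solve_reduced(H, b, P, Q):
    """Reduced system (3.5)-(3.6) and back-projection (3.7)."""
    T = P @ Q
    dU_ax = np.linalg.solve(T.T @ H @ T, T.T @ b)
    return T @ dU_ax
```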
Results and discussion

Global S-DIC and axisymmetric S-DIC have been performed on the painted deformed surface of the laminated glass during the first three milliseconds following impact, i.e. a total of 90 frames per camera. A comminuted zone, centered at the impact point, appears as a bright white zone in Fig 3.13. The formation of radial and orthoradial cracks could be observed during the experiment. First, radial cracks propagate from the impact point to the edge, so that fragment sizes increase with the distance from the center. Then orthoradial cracks appear, as can already be seen in Fig 3.13. As time passes, the density of both radial and orthoradial cracks increases, leading to smaller and smaller fragments. This growing crack number also increases the deformability of the fragmented laminated glass plate, whose glass shards are still connected by interlayer polymeric bridges that sustain large strains. The geometry of the deformed plate surface can at first be approximated by a rounded cone of increasing height. This information can also be read on the displacement fields (e-h) of figures 3.14, 3.16 and 3.18. The sample shape evolution and the fragment orientations induce light reflections, with severe gray-level changes in the captured images. Specular reflections are even observed in some images (see, for instance, the white spot at the top of Fig 3.17a), making the analysis more and more difficult for the S-DIC software. A brightness change, not accounted for by the S-DIC procedure, finally shows up in the residual fields (a-d). In particular, a contrast in the residual field across a radial boundary may be interpreted as a localized bending along a radial crack, making some of these cracks apparent even though they could hardly be perceived in the raw images. Accessing the final crack pattern by examining the post-mortem sample of a fragmentation experiment is not easy, as many non-reversible phenomena occur: large stretching then tearing of the interlayer, delamination between the PVB and the glass shards, comminution at the center of the plate, which loses its smaller fragments, sometimes crushed into glass powder. The glass damage is therefore hard to qualify, as it evolves throughout the test. Three pictures have been taken at the end of this experiment. As can be seen, the PVB layer has been torn apart, and glass fragments are missing close to the impact location (Fig 3.19). Outside the torn zone, the laminated glass retains a crater-like or cone-like shape at the end of the experiment, which emphasizes a residual, irreversible deformation of the plate. Thanks to a good spatial resolution, many cracks can be observed in these three images, but the temporal evolution of the damage cannot be assessed via post-mortem imaging. Therefore, live monitoring of the crack formation is proposed. Due to the global axisymmetry of the measured displacement, it is possible to reproduce the deformation of the non-speckled glass surface on a semi-circular mesh, then to analyze the residuals between a reference image and the experiment images; some residuals of the non-speckled zone are reported. Moreover, an axisymmetric interpretation of the displacement fields is represented in Fig 3.22, projecting the fields obtained by global S-DIC on radial shape functions (legend 'gl'). The out-of-plane displacement is thereby filtered, and a good radial approximation of the displacement can be obtained, reducing the influence of the ill-behaved measurement points. The extrapolated displacement also matches the telemeter curve (black straight line), which can be interpreted as the displacement along the axis (R=0). It fits the axisymmetric S-DIC results (legend 'ax'). For large radii and times, the global and axisymmetric results diverge from each other, showing the difference between a reliable axisymmetric code and the presently limited global S-DIC procedure. It is also validated by the presence of orthoradial stress at the beginning of the test, creating the radial cracks. The radial displacement is higher for smaller radii, up to a characteristic size approximately equal to the size of the half-sphere (i.e. a 50 mm radius). Thanks to the improved procedure, the calibration of the stereo-vision system is much better than during the first wave of cannon experiments and does not need any further rescaling. Energy dissipation models for pre-cracked samples submitted to tensile loads have already been established for tensile speeds varying from quasi-static up to 10 m/s [Del Linz et al. 2017] (up to ∼70 s⁻¹), which covers the range of strain rates experienced by the PVB in the cannon tests (locally up to ∼30 s⁻¹). The link between the interlayer stretching and its delamination from the glass shards is well documented. The gap between a fragmentation experiment on the cannon and a Through Crack Tensile (TCT) test is narrowing thanks to the full-field and cracking pattern measurements presented here. It becomes a question of integrating the energy dissipated in TCT, at the proper strain rate, over all the area created between the glass fragments. Those data are of interest to the experimenter to determine the behavior of the different areas of the laminate, rather than relying only on a local measurement via strain gauges or on the impactor measurement, which do not give the whole plate kinematics. Each laminated glass fragment has its specific movement during an impact, which can be interpreted as a local TCT test. In a nutshell, these measurements, coupled with a corresponding model of the fragmented laminated glass plate, will allow the evaluation of the local impact energy dissipation at the glass fragment edges and, after summation over all cracks, the assessment of the total energy absorbed by the broken laminated glass plate.

Delamination model

Developing a model of laminated glass under impact is challenging due to the numerous radial and orthoradial cracks on the glass plies. Setting up a model requires appropriate hypotheses. An axial symmetry of the cracking pattern and of the kinematic fields is assumed, and the rich knowledge acquired from TCT tests in the literature is used to simplify the model design.
Problem definition

Through Crack Tensile (TCT) tests: a starting point

The concept of the TCT test, already described in the introduction, is based on a tensile test on a pre-notched sample of laminated glass. Due to the presence of the crack, the elastic deformation concentrates in the polymer layer. In addition to the interlayer stretching, symmetrical delamination may occur between the glass and the PVB, depending on the imposed tensile speed δ̇ of the clamps (see Fig 3.24b). The delamination occurs in a certain velocity range [Elzière 2016]. An essential characteristic of the TCT can be seen on the force versus time curve: as the imposed speed δ̇ of the test is constant, a limit force is reached. This state of the test is called the steady-state regime, as the force plateau is reached; the steady-state stretch λ_a, which connects the glass shard displacement δ and the delamination front advance a, can be expressed as λ_a = 1 + δ/2a. Locally, as shown in Fig 3.24b, the same situation is encountered all along the experiment at different positions of the crack front a. The load needed to advance the crack is thus a characteristic of the decohesion that depends on the delamination velocity ȧ, and on nothing else. During a time increment dt, the crack front advances by da = ȧ·dt. This added length of PVB ligament was unstretched prior to delamination, but afterwards it is stretched to λ_a·da, so that the two ends of the sample move apart by the extra length (λ_a - 1)·da on each side of the mid-section, and hence dδ = 2(λ_a - 1)·da, or 2(λ_a - 1)·ȧ = δ̇. The central part of the PVB is subjected to a tensile stress, depending on ȧ, needed for the decohesion of the PVB. Hence, the stretch λ_a is the quasi-static response of the PVB to a macroscopic tensile stress σ. Finally, a time integration may be trivially performed to arrive at δ = 2(λ_a - 1)·a. The dissipation mechanisms are all localized at the delamination front; therefore, in steady state, the delaminated PVB 'far' from the crack front is uniformly stretched up to λ_a. The steady-state force plateau versus the tensile test speed is reported in Fig 3.26a. The stress is therefore limited in the cracked laminated glass, as delamination should occur at the steady-state stress, which satisfies 2(λ(σ(ȧ)) - 1)·ȧ = δ̇. The potential tearing that may occur during a test should be characterized by a specific criterion.
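Numerically, once the quasi-static stretch response λ(σ) of the PVB and the adhesion law σ_adh = φ(ȧ) are known (both are presented just below with Fig 3.26), the steady-state relation above can be solved for the delamination velocity. The following sketch uses a plain bisection and toy material curves; the bracket, the monotonicity assumptions, and the curves themselves are illustrative, not the measured data:

```python
def steady_state_adot(delta_dot, lam, phi, lo=1e-6, hi=10.0):
    """Solve 2*(lam(phi(a_dot)) - 1)*a_dot = delta_dot for the steady-state
    delamination velocity a_dot [m/s] by bisection on [lo, hi]."""
    f = lambda adot: 2.0 * (lam(phi(adot)) - 1.0) * adot - delta_dot
    for _ in range(80):                 # plain bisection, no dependencies
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy, purely illustrative material curves (NOT the measured ones):
lam_toy = lambda sigma: 1.0 + sigma / 20.0   # stretch vs stress
phi_toy = lambda adot: 10.0 + 5.0 * adot     # adhesion stress vs velocity
adot = steady_state_adot(delta_dot=0.1, lam=lam_toy, phi=phi_toy)
```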
[Reproduced figure: influence of traction velocity and temperature on the delamination force F_ss/b_0 (kJ/m²) measured with the TCT test (PVB thickness = 0.76 mm), combining data from Elzière et al., Samieian et al. and Del Linz et al. Solid line: empirical limit between unstable and steady delamination behaviors. Dashed line: empirical limit from steady behavior to rupture of the interlayer.]

In Fig 3.26d, the adhesion limit stress σ_adh is plotted versus the delamination velocity ȧ. A crack velocity limit appears for high stress values. Finally, the hyperelastic behavior of the PVB, adapted from the visco-elastic behavior measured by Fourton [Fourton 2019], is represented in Fig 3.26c for two strain rates: 1 s⁻¹ and 0.1 s⁻¹. The true stress is obtained assuming volume conservation (i.e. a Poisson's ratio of 0.5). Hyperelasticity is used here because the unloading is not considered yet. The effective PVB behavior implies a non-reversible behavior of the interlayer, but this is not at stake here.

Connection between TCT and cannon tests

The idea of the model is to consider the cannon experiment, with its multiple cracks appearing during the test, as a system of many simultaneous TCT experiments. Writing L for the distance between two cracks, the relation between the TCT displacement δ and the macroscopic apparent stretch λ is

λ = (δ + L)/L    (3.8)

The stress plateau, noted σ_adh, corresponds to the adhesion limit strength between glass and PVB. An effective constitutive law λ(σ) can therefore be proposed, introducing the crack density, contrary to more standard cases. Due to the limited resolution of the live crack measurement, a choice is made to separate the different zones of the fragmented plate. The different zones are represented in Fig 3.27. Larger shards are located far from the impact location (Z3). A circular zone with a larger orthoradial crack density is also visible at median radii (ZO2). The last zone is composed of finer glass fragments (Z2). In the cannon test, the comminuted zone (Z1), located under the impactor head, is assumed to bring only a minor contribution to the globally dissipated energy, but it cannot be measured with the device. The macroscopic strain in this zone is imposed by the impactor head and hence soon saturates in the experiment. The model does not consider the deformation of the laminated plate occurring at radii smaller than 35 mm. S-DIC indicates that U_r continuously increases at r ∼20 mm; thus some dissipation is expected there, but the crack evolution can hardly be monitored in this area. Therefore this area, small with regard to the plate size, is not studied. After the crack formation, the laminated glass structure is weakened and becomes heterogeneous [Bermbach, Experimental investigation of energy dissipation mechanisms in laminated safety glass for combined blast-temperature loading scenarios]. There are three main distinct zone categories on the plate. The first one corresponds to zones of perfect adhesion between the glass and the PVB, representing the stiffer zones of the plate. The second category includes the PVB ligaments (i.e. without glass shards) that are stretched to a specific value. Finally, delamination and visco-elastic dissipation in the interlayer are at stake at the delamination fronts.
Defining these different areas leads to the consideration of crack densities in the radial, ρ_r(r), and orthoradial, ρ_θ(r), directions, which play the role of the 1/L needed to describe the previously defined apparent stretch λ. For the radial direction, the number of circular cracks N_cc between the radii r_1 and r_2 delimiting the selected zone has to be considered:

ρ_r(r) = N_cc / (r_2 - r_1)    (3.9)

For the orthoradial direction, the number of radial cracks N_cr is considered, depending on the radial position r:

ρ_θ(r) = N_cr / (2πr)    (3.10)

A simplified membrane kinematics of the fragmented plate

The deformation of the fragmented laminated glass plate can be assimilated to a large-strain hyperelastic membrane composed of stretchable PVB bridges between the stiffer glass shards. Dynamic effects (e.g. plate inertia, kinetic energy) are neglected. In the current model, the inter-crack distance is postulated from the observations. The apparent macroscopic stretch is deduced from the measured displacements U = (U_r(r), 0, U_z(r)). It can be expressed, along the radius of the membrane, as

λ_r = √((1 + U_r,r)² + U_z,r²)    (3.11)

and, along the orthoradial direction, as

λ_θ = 1 + U_r/r    (3.12)

On each side of a crack, the relative displacement of the two glass fragments (i.e. the effective displacement δ_eff, see (3.8)) is

δ_r^eff = (λ_r - 1)/ρ_r    (3.13)

in the radial direction, and

δ_θ^eff = (λ_θ - 1)/ρ_θ    (3.14)

in the orthoradial direction. These effective values relate the cannon (locally) and the TCT mechanics. That leads to two local delamination lengths a:

a_r = (λ_r - 1)/(λ_a^r - 1) · 1/(2ρ_r)    (3.15)

in the radial direction, with λ_a^r = 1 + δ_r^eff/2a_r, and

a_θ = (λ_θ - 1)/(λ_a^θ - 1) · 1/(2ρ_θ)    (3.16)

in the orthoradial direction, with λ_a^θ = 1 + δ_θ^eff/2a_θ. Under the axial symmetry hypothesis, the force balance simplifies (see e.g. [Chevaugeon, Contribution à l'étude des membranes hyperélastiques en grandes déformations]) to

σ_θ = r·σ_r,r + σ_r    (3.17)

with σ_θ the orthoradial stress and σ_r the radial stress, in the cylindrical coordinate system (r, θ, z).

Macroscopic behavior law

The behavior law Ψ of the PVB is introduced as

λ_a = 1 + Ψ(σ)    (3.18)

with λ_a the steady-state stretch and σ the stress. This law describes the uniaxial visco-elastic and large-strain behavior of the polymer. Neglecting the viscosity as a first assumption turns Ψ into a scalar function (i.e. local in time). That means a unique hyperelastic law can describe the behavior of the polymer in the large-strain regime during the imposed monotonous load. The function Ψ is represented in Fig 3.26c for stretch rates of 0.1 and 1 s⁻¹ [Fourton 2019]. The second phenomenon to consider is the rupture of adhesion (i.e. delamination) between the glass and the PVB. The law φ that links the delamination front velocity ȧ and the adhesive strength σ_adh is

σ_adh = φ(ȧ) for ȧ ≥ 0    (3.19)

The function φ is represented in Fig 3.26d [Elzière 2016]. Using the membrane kinematics, with σ = σ_adh the stress imposed by the adhesion phenomenon, the behavior reads

λ = 1 + 2ρ a Ψ(φ(ȧ)),   σ = φ(ȧ)    (3.20)

Considering the steady state with infinite delamination, and a hyperelastic behavior of the polymer, the law becomes

ȧ = φ⁻¹(σ),   λ = 1 + 2ρ Ψ(σ) ∫₀ᵗ φ⁻¹(σ(τ)) dτ    (3.21)
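Numerically, the chain from the measured radial profiles to the per-crack quantities of eqs. (3.9)-(3.14) is straightforward. The sketch below illustrates it, with hypothetical function names and np.gradient as a simple derivative estimate:

```python
import numpy as np

def crack_densities(r, N_cc, r1, r2, N_cr):
    """Eqs. (3.9)-(3.10): radial and orthoradial crack densities of a zone
    between radii r1 and r2, with N_cc circular and N_cr radial cracks."""
    rho_r = np.full_like(r, N_cc / (r2 - r1))
    rho_theta = N_cr / (2.0 * np.pi * r)
    return rho_r, rho_theta

def apparent_stretches(r, U_r, U_z):
    """Membrane stretches of eqs. (3.11)-(3.12) from the radial profiles."""
    lam_r = np.sqrt((1.0 + np.gradient(U_r, r)) ** 2 + np.gradient(U_z, r) ** 2)
    lam_theta = 1.0 + U_r / r
    return lam_r, lam_theta

def effective_openings(lam_r, lam_theta, rho_r, rho_theta):
    """Eqs. (3.13)-(3.14): the macroscopic stretch redistributed over the
    crack density gives the local, TCT-like opening of each crack."""
    return (lam_r - 1.0) / rho_r, (lam_theta - 1.0) / rho_theta
```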
Partial conclusion on the delamination model

The present section links the cannon and the TCT tests. The well-documented theory developed for the tensile tests can be used to simulate a plate bending experiment, adding information on the evolution of the crack pattern. The model is supposed to describe the delamination evolution versus time, which is quite interesting for multi-impact tests. But it still needs some improvements in the variety of the TCT data used (e.g. delamination rate range, temperatures). The choice made in the next section is to look at the behavior of a triangular laminated glass fragment with no prior assumption on the crack density.

Triangular fragment model

The overall behavior of a cracked laminated glass plate loaded with a rigid sphere is challenging to describe. Apart from the materials, elastic-brittle soda-lime glass and visco-elastic PVB, the adhesion between the layers plays a significant role in the kinetic energy dissipation and in the global out-of-plane movement of the plate. In this section, an elastoplastic model of a laminated glass fragment under impact is proposed. The loading conditions are imposed from the experimental telemeter measurement: the displacement of the sphere is imposed in the simulation. The goal is to describe the plate kinematics and to model the energy dissipation phenomena.

Half-fragment model

The studied geometry is simplified in the model, assuming angular symmetries. First, the glass cracking does not dissipate much impact energy during the test. Nourry observed that, for a large part of the cannon tests, the radial cracks are the first to pop out on the glass plies [Nourry 2005], before the circular cracks. After the experiments, the number of radial cracks is measured between 100 and 250, for an impact energy of 50 J. Also, the radial cracks are formed in the early stage of the test, when the impactor kinetic energy has not yet dropped significantly. From these observations, the model geometry is based on a triangular laminated glass fragment, represented in Fig 3.28. Thus, at the beginning of the simulation, the radial cracks are already formed. Due to the plane symmetry, each half of a triangular fragment behaves in the same way. Therefore, instead of representing whole fragments, only their halves, separated by the radial crack, can be simulated with proper symmetry conditions. Elzière has shown that, for a constant delamination velocity, the measured steady-state stress between two glass fragments is constant. Thus, the model can again be simplified by considering that the stretching phase of the experiment occurs at a constant stress (i.e. in the steady-state regime) at the radial crack location, by imposing such a condition: this steady-state stress is estimated between 10 and 15 N/mm at 25°C [Elzière 2016]. It is now possible to simulate one half of a triangular fragment, applying the chosen steady-state force at the radial crack location. This simplification is illustrated in Fig 3.29.

Homogenization

An orthoradial crack formation is considered instantaneous through the glass ply thickness, with regard to the total duration of an experiment. Furthermore, once a crack has appeared on one of the two glass plies, the second ply is more likely to crack similarly because of the global bending of the plate. Therefore, as the behavior is uniform through the thickness, it is convenient to first model the plate with shell elements to simplify the problem. Besides the homogenization, the assumptions are: the orthoradial cracks are perfectly circular, and the glass plies have the same thickness.

Elasticity

To model the three layers of the laminated glass plate, a transverse isotropic model is defined in Table 3.1. The transverse direction corresponds to the stacking direction, i.e. through the plate thickness; it corresponds to index 3 in the table. The stacking is symmetrical, and the proportions of material i = {Glass (g), PVB (pvb)} are defined as υ_i = h_i/h_total, where h_i is the i-ply thickness and h_total = Σ_i h_i the total composite plate thickness.

Table 3.1 - Transverse isotropic model of the homogenized laminated glass fragment. Direction 3 is the stacking direction (i.e. through the thickness).

Laminated glass fragment               | Homogenized fragment
Glass, isotropic: E_g, ν_g, ρ_g        | E_11 = E_22 = υ_g E_g + υ_pvb E_pvb
PVB, isotropic: E_pvb, ν_pvb, ρ_pvb    | E_33 = E_g E_pvb / (υ_g E_pvb + υ_pvb E_g)
PVB, visco-elastic: g_i, τ_i           | G_12 = υ_g G_g + υ_pvb G_pvb
                                       | G_13 = G_23 = G_g G_pvb / (υ_g G_pvb + υ_pvb G_g)

In Table 3.1, E is the Young's modulus, G the shear modulus, ν the Poisson's ratio, and ρ the density. The visco-elastic coefficients of the Prony series, g_i and τ_i, are detailed in Table 1.4. The homogenization of the bending behavior of the plate is not considered yet, as the PVB layer is at first supposed to be very thin.
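The mixture rules of Table 3.1 translate directly into a few lines. This is a sketch: the Reuss-type denominators follow the series/parallel logic of the table, and the numerical values in the example are generic order-of-magnitude moduli, not the data of Table 1.2:

```python
def homogenized_constants(h_g, h_pvb, E_g, E_pvb, G_g, G_pvb):
    """Voigt (in-plane) and Reuss (through-thickness) mixtures of Table 3.1.
    h_g is the total glass thickness, h_pvb the interlayer thickness."""
    v_g = h_g / (h_g + h_pvb)
    v_p = 1.0 - v_g
    E11 = v_g * E_g + v_p * E_pvb                   # = E22 (in-plane)
    E33 = E_g * E_pvb / (v_g * E_pvb + v_p * E_g)   # stacking direction
    G12 = v_g * G_g + v_p * G_pvb
    G13 = G_g * G_pvb / (v_g * G_pvb + v_p * G_g)   # = G23
    return E11, E33, G12, G13

# Example with the nominal stack (two 4 mm glass plies, one 0.76 mm PVB foil):
E11, E33, G12, G13 = homogenized_constants(
    h_g=8.0e-3, h_pvb=0.76e-3, E_g=70e9, E_pvb=1e8, G_g=29e9, G_pvb=3e7)
```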
Plasticity

With the transverse isotropic model, it is possible to reproduce the behavior of a non-fragmented laminated glass. However, at the orthoradial crack locations, the creation of new degrees of freedom has to be considered. Therefore, a plastic behavior is implemented when the strain at break of the soda-lime glass ply, near 2.8 × 10⁻³, is reached. The polymer bridge created at a crack location is simulated via an isotropic plastic zone, whose local stiffness is E_crack = υ_pvb E_pvb. This behavior is schematized in figure 3.30. Using plasticity allows the behavior during the loading phase to be described, but cannot predict the unloading behavior of the plate.

Boundary conditions

The first model is driven by displacement. The displacement measured by the telemeter is imposed on the half-sphere, which is supposed to be perfectly rigid (Fig 3.31b). All planar movements are prevented at the apex of a triangular fragment (i.e. at the lower radius). All rotations and displacements are null at the clamped location, at a radius of 140 mm from the plate center. The symmetry of the half fragment in the (r, z)-plane is enforced, retaining movements inside the symmetry plane. Finally, an orthoradial 2D stress of 15 N/mm is imposed throughout the test at the radial crack location, assuming effective PVB delamination. The boundary conditions are illustrated in Fig 3.31. The glass parameters are taken from the soda-lime glass column in Table 1.2. The steel-made half-sphere parameters are reported in Table 3.2.

Results

To compare with the experimental measurements, the simulated out-of-plane displacement and velocity are shown in Fig 3.32. The displacement shows a good trend. For the velocity field, synchronized in Fig 3.33b, the blue and red curves show a similar trend, but the energy of the simulation is 20 J higher than expected, corresponding to the simulated elastic energy stored in the plate (purple curve). Considering the clamped edges and the hard contact defined between the glass and the more rigid steel-made half-sphere, which provokes high frequencies in the velocity signal in Fig 3.32, the first model gives correct results. It reproduces the right kinematics of the impacted plate and successfully predicts the first circular crack radius of 72 mm. Additionally, it is manageable to implement, imposing the measured displacement of the sphere or its initial velocity, and runs in a few minutes. The drawbacks of the simulation are multiple, though.
First among these drawbacks, an energy gap exists between the experiment and the simulation, as the stored elastic energy in the laminate converges but does not dissipate. The plastic model does not reproduce well the release of the glass elastic energy after the crack appearance. This leads to two main consequences: an overestimation of the impact energy and a circular crack growth due to the lack of free-surface creation. However, the experiments highlight that the stored elastic energy in the fragmented composite can be neglected and that the glass fragment movement can be seen as purely rigid compared to the interlayer stretching. Experimentally, the contact with the sphere is softened by the local comminution of the glass. A softer contact between the plate and the sphere could avoid the abrupt back-and-forth movement that adds spurious vibrations to the velocity curve.

Several new ideas can help enhance the simulation. First, the plate characteristics should be better set, paying attention to bending, using quadratic elements in the model or skipping the homogenization phase. The plasticity model at the orthoradial cracks should also be revisited: the uniform dissipation occurring over the plate is not representative of the actual discrete dissipation sites. In this regard, a subroutine can be used to stop the simulation when the strain at break is reached, to create 'manually' a discontinuity in the glass parts (i.e. a crack), then continue the simulation by imposing null stress on the newly created free surface.

Multi-impact & residual resistance

After an impact on a laminated glass plate, the broken sample remains resistant to new loads. The plate resists quasi-static in-plane compression because of the glass fragments remaining on the adhesive layer, whereas the response in tension is mainly governed by the interlayer stiffness. A fragmented and already deformed laminate can absorb impact energy during secondary impacts, as long as delamination between glass and PVB can occur and no tearing of the interlayer takes place. This behavior is already considered in the standards, which include three successive ball drops on the laminated glass plate [Norme Européenne NF EN 356 2000]. The influence of the distance between two successive impact locations and the damage history of the fragmented and partially delaminated sample are critical parameters for the residual resistance characterization of broken glazing in the context of multi-impact (e.g. anti-break-in standard tests).

Second impact on pre-cracked laminated glass

In this section, the behavior of pre-cracked samples is investigated. Tests are performed using the cannon, and the initial impactor kinetic energy is roughly 130 J. This value corresponds to a ball drop from a height of 3 m, a typical value used in the anti-break-in normative tests. The samples are PVB-laminated glass plates, with two glass sheets of 4 mm thickness interleaved by a 0.76 mm PVB foil.

Fig 3.34 shows five laminated glass samples pre-impacted at various energies. The stroke of the impactor has been limited to 1 cm (with a stop) after its contact with the glass surface, avoiding global deformation of the plates and delamination at the newly formed crack locations. The impact energies of the first shots are reported in Table 3.3; the crack numbers and the comminution diameter are also counted and reported.

Table 3.3 - Initial number of cracks and comminution zone size after a first impact.

Various patterns are observed. For example,
the radial cracks show successive branching along the radius, giving a 'pine needle' aspect, observed after the lowest-velocity impacts. The orthoradial cracks are not entirely circular, so the number in the table corresponds to their maximum number along a radial direction on the plate. Radial cracking patterns are not equivalent for the two glass plies: for instance, in Fig 3.34e, the radial crack number is estimated only on the second glass sheet. For all tests, the impactor kinetic energy is entirely dissipated in less than 15 ms after impact. No tears are observed. The deformed surface can be approximated at first with cones of 140 mm bottom radius and heights equivalent to the impactor stroke.

For the five tests, the number of cracks did not increase much during the second impact: in Fig 3.34, the initial damage represents ∼80% of the final crack patterns. In Fig 3.35b, a steeper slope appears after 4 ms, at a radius of 40 mm. The same phenomenon is observed in Fig 3.35c at a 40 mm radius. It seems that the deformation is larger in the comminution zone, whose displacement gets closer to that of the impactor head after some time. It is worth noting that the initial glass fragments are mostly detached, broken or fully delaminated; locally, the plate stiffness becomes similar to that of the interlayer. In these two cases, the deformed surface can be approximated with two nested cones joined at a 40 mm radius. The orthoradial cracks also play a role in the deformed shape of the plate. This is illustrated in Fig 3.35e, where the surface has a higher curvature radius around the 50 mm radial position (i.e. the half-sphere radius). A localized bending concentrates the deformation, lowering the overall dissipation of the plate. This leads to a premature high stretch of the polymer and tearing, as the stiffness is greatly reduced. The initial cracks have influenced the shape of the deformed surface. However, they do not interfere much with the impact performance of the laminated glass, as no delamination had occurred during the first impact.

Multi-non-centered impacts

The previous section is quite informative on the influence of the pre-existing crack pattern before a second impact, but the influence of the distance between successive impact locations has yet to be explored. Using the cannon device instrumentation to analyze multi-impact experiments can shed new light on the behavior of fragmented laminated glass plates subjected to second and third impacts.

Fig 3.36 - Scheme of the three sample sizes (dashed green). From smaller to larger areas: 300×300 mm² cannon samples, 560×560 mm² new cannon samples, 900×1100 mm² ball-drop samples. The device axis and positioning of the plates are represented.

The frame support is reworked: its size is enlarged to 500×500 mm² so that 560×560 mm² plates can be impacted two or three times. A bigger calibration target of the same dihedral shape is used to deal with the larger amplitude of out-of-plane movements and the wider sensor size. The tests are all performed at a 130 J kinetic energy, i.e. a 7.8 m/s initial velocity. The laminated glass sample still comprises two 4 mm thick glass plies interleaved with a 0.76 mm thick PVB foil.

Square sample: First impact

Because of the manufacturing process of float glass, the bending strength of a sheet of glass is dissymmetric.
Indeed, the "tin" face, i.e. the surface of glass that was in contact with the tin bath, has a higher density of surface defects (or micro-defects), and hence the glass ply breaks easily when bent with the tin surface in tension. In contrast, the "air" face is much more resistant to rupture under flexure when it is in tension. Hence, with the tin side in tension and the air side in compression, crack initiation within the early tens of microseconds of the first impact is facilitated.

Regarding the full-field measurements from 1 to 2 ms after the contact between the glass and the impactor, the plate edge does not deform significantly compared to the indentation area; the latter can be delimited by a distance to the impactor of about 100 mm. Beyond this distance, and after a certain duration, the contribution of the fragmented laminated glass to the dissipation of the impactor kinetic energy becomes minor. The laminated glass stops the impactor, and no tear is observed at the end of the test duration (i.e. ∼25 ms, including the unloading phase).

S-DIC measurements are performed using the lateral camera images (Fig 3.38). On the non-speckled part of the surface, the crack pattern in the low-density fragmented area is observed at 4.5 ms (Fig 3.38c-d). Referring to the energy curve of the impactor at 4.5 ms, 40% of the impact energy is transferred to the laminate (see 'shot 1' in Fig 3.41a). The degrees of freedom are reduced using the test axisymmetry (up to a 70 mm radius). A cone shape is observed up to a 100 mm radius. The constraints are relaxed from the 70 mm radius, giving different shape functions for each angular zone. The sectors, defined every π/24, have independent axisymmetric kinematics; the orthoradial displacement is still null. These new shape functions allow the elliptical shape to appear: extended circles for positive Y-values are observed in Fig 3.38e-h. This behavior is confirmed by the difference in motion between the bottom-left and upper-left corners, which do not show identical out-of-plane displacements. At first, circular and elliptical zones describe the displacement field. Still, by the end of the experiment, the ellipses have formed, and an implementation of the orthoradial displacement appears unavoidable to describe the surface kinematics adequately and avoid the appearance of radial marks on the displacement fields in Fig 3.38e-h. However, regarding the evolution of the residual in Fig 3.39, the displacement of the ROI is globally well described. The curves in Fig 3.39a show a rapid increase of the residuals from 0 to 5 ms, caused by the cracks and the large-strain regime. The different dynamics beyond 5 ms are mainly caused by brightness and contrast changes in the different zones of the mesh. The residual field in Fig 3.39b shows the appearance of large residual areas (in black and red), interpreted as a change in lighting conditions. The smaller dots that have appeared are explained by the presence of the cracks in the speckled areas. Radial cracks can be observed, even for higher radius values. The non-converged local regions are mostly contained within the first 80 mm radius, which is the limit of the fully axisymmetric area, where the kinematics are very restrictive.

Square sample: Second & third impacts

During successive ball-drop experiments, the second steel ball drop is performed at 13 cm from the initial impact location [Decourcelle 2011], and the third drop is made at 13 cm from the two previous locations.
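In the present campaign, described next, the distances between shots (14.14 cm ≈ 10√2 cm, and 20 cm) are consistent with impact points laid out on a right-angle pattern. A minimal sketch, using hypothetical coordinates chosen only to illustrate this geometry (they are not stated in the text):

```python
# Hypothetical impact coordinates (cm) reproducing the quoted spacings:
# P2 at 14.14 cm from P1; P3 at 20 cm from P1 and 14.14 cm from P2.
from math import dist

P1, P2, P3 = (0.0, 0.0), (10.0, 10.0), (20.0, 0.0)
print(f"|P1P2| = {dist(P1, P2):.2f} cm")   # 14.14 cm = 10*sqrt(2)
print(f"|P1P3| = {dist(P1, P3):.2f} cm")   # 20.00 cm
print(f"|P2P3| = {dist(P2, P3):.2f} cm")   # 14.14 cm
```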
Thanks to the design of the experiments, successive impacts are performed: a second shot at 14.14 cm from the first impact, and a third shot at 20 cm and 14.14 cm from the first and second impacts, respectively. By comparing the first and final images, it is confirmed that no cracks are formed far from the secondary impact locations (e.g. in the non-speckled area). Some fragments still appear under the impactor head and close to the frame support, but none in the finely fragmented areas. During the tearing of the interlayer, no cracks are formed in the glass plies. The tear locations are shown in orange for previous radial damage and in green for previous orthoradial damage (see Fig 3.40b-c). These locations are the major structural defects of the fragmented and delaminated glass plate; they all originate from the first-impact cracks. During the third impact, the ball goes through the laminated glass plate, and large glass fragments remain on the sample far from the three impact locations. The second and third shots show smoother contact between the impactor and the fragmented glass, as no rapid energy drop is measured in those cases. The energy versus time (or displacement) curves are similar for the last two tests. The plate stops the impactor during the second shot; however, a tear occurs at 10 ms (or 67 mm of impactor penetration) and causes a slope change during the third shot.

Resistance to three impacts

During this second experimental campaign, the glass surfaces in contact with the tin bath are put in compression during the bending of the laminated glass plate. The cracks are more difficult to initiate, i.e. the elastic energy stored before crack initiation is much higher in this configuration. Small glass fragments are created during the late breakage under the impactor head, while no new cracks are formed on the plate edges. Due to the small size of the glass fragments, the plate has locally conformed to the shape of the sphere. The impact energy is fully dissipated in the three cases, and no tearing of the interlayer is observed at the end of each shot. The strokes measured before the appearance of a tear during the second and third impacts are also larger than those of the previous experiment (see Fig 3.43). Comparing the two series of three impacts, the finer fragments conform best to the impactor shape, and the plate as a whole can dissipate much more impact energy.

Discussion on multi-impacts

Many conclusions can be drawn from the study of the many impacts performed on fragmented laminated glass plates. The first one concerns the influence of the crack density on the residual behavior of the plate during secondary impacts. With a fine crack density, the damage is concentrated close to the impact location, which becomes a structural weak point of the fragmented composite. So, in the case of a second shot on a very finely fragmented laminated glass plate, if its impact location is close to the weak point, the steel ball should go easily through the sample. On the other hand, far from this weak point, the projectile is more prone to be arrested, as the polymer bridges created during the previous impact(s) can conform to the projectile shape and dissipate a much larger amount of its kinetic energy. When the fragments are bigger, delamination may occur far from the impact zone and weaken the structure over a wider area. Moreover, as many large fragments remain, the weak zone is concentrated at the few polymer bridges (i.e. the radial and circular crack locations), where tearing is easily initiated and further propagated during secondary impacts.
The apparition of a circular zone (also visible in Nourry's ball-drop experiments [Nourry 2005]) is a consequence of the ability of the fragmented laminated glass to conform to the geometry of the indenter head. Hence, the influence of the boundary conditions on the global impact performance of the broken laminated glass plate is reduced. Here, a circular, finely fragmented zone (of 100 mm radius) is formed even if the contact point is not centered on the plate. This newly formed zone becomes the weak point of a structure that can still resist multiple impacts. This result is significant and needs to be studied further, because it seems universal for all low-velocity impacts on laminated glass (e.g. anti-break-in with a steel ball, head impact on windshields, sphere-ball pitting, automotive crash). For instance, the circular pattern is also observed in head-impact studies on automotive windshields (e.g. [Shahriari et al.]). The link between the cracking pattern and the characteristics of a projectile remains to be established. Also, the influence of the large fragments outside this circular, densely cracked area, and how they participate in the dissipation of the projectile kinetic energy, needs to be discussed.

Conclusion

Instrumented and quantitative experiments are performed with the cannon device to analyze the behavior of laminated glass plates under hard-body blunt impact. S-DIC analyses are carried out from high-speed camera images to extract the 3D shape variation and the corresponding kinematic information of the laminated glass during the fragmentation test, in relation to the measured projectile movement. A newly developed axisymmetric description of the displacement field in S-DIC, which can be seen as a relevant geometric regularization devoid of material-based mechanical assumptions, provides very good convergence properties and remarkable robustness to the analysis. The performances obtained are much greater with the help of this regularization, which avoids the divergence of some mesh nodes and allows the formation of the cracks to be measured on the deformed, non-speckled part of the laminated glass. Also, the use of a calibration object provides a reliable quantitative estimate of the displacements, and no rescaling of the results is therefore needed to match the telemeter measurements.

The data treatment presented here gives access to the shape evolution of the laminated glass during the impact, yielding the geometrical link between the energy dissipation close to the cracks (energy per delaminated area, which can be measured in TCT experiments) and the global energy dissipated (the kinetic energy lost by the impactor). This link enables the construction of a model that computes the energy the laminated glass can absorb as a function of material parameters only, such as the glass-interlayer adherence or its large-deformation stress-strain behavior. It could be interesting to perform new TCT experiments to refine the model parameters, especially the relationship between the steady-state stretch λ_a and the imposed velocity δ̇ in higher strain-rate regimes. Adapting the simulation with new boundary conditions is at stake, considering the large-strain regime and the out-of-plane movement.
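The link described above can be summarized by a simple energy balance. The following is only a sketch of its general form, with hedged notation; the exact terms retained in the thesis model may differ. Here Γ denotes the energy dissipated per unit delaminated area (TCT measurement), A_delam the delaminated area, and w(λ) the stretching energy density of the interlayer:

```latex
% Simplified energy balance during an impact (sketch only; the precise
% decomposition used in the thesis model may differ).
\Delta E_{kin}^{imp}(t) \;\simeq\;
  \underbrace{\Gamma \, A_{delam}(t)}_{\text{delamination}}
+ \underbrace{\int_{V_{PVB}} w\big(\lambda(t)\big)\,\mathrm{d}V}_{\text{interlayer stretching}}
+ \underbrace{E_{kin}^{plate}(t)}_{\text{plate motion}}
+ \underbrace{E_{el}(t)}_{\text{stored elastic energy}}
```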
An enhanced model can also integrate the influence of the PVB layer thickness (here fixed at 0.76 mm) and the shearing of the polymer, which is fully neglected with the membrane approximation. Knowing the mean crack delamination per zone is quite useful for evaluating the resistance to multiple impacts. The macroscopic influence of the cracking pattern was explored in the previous section. Still, it is worth combining the measurement of the crack evolution and of the local delamination to estimate the structural damage evolution on the laminated glass plate, and therefore to know its residual resistance before the tearing of the interlayer. The next step is to extend the membrane model (without the axial symmetry hypothesis) to analyze multiple, non-centered impacts. A comparison between the simulation and the S-DIC full-field measurements on the same sample can help adjust the model to real test cases. Also, the unloading phase needs to be described, as the elastic and visco-elastic energy stored in the impacted plate is quite significant. Finally, the adhesion recovery needs to be explored by introducing a time-dependent damage variable that accounts for the duration between two impacts on the same laminated glass plate.

Chapter 4

Conclusion & perspectives

In this work, new highly instrumented experiments have helped to understand the behavior of laminated glass under various dynamic loads. Gravel pitting, i.e. dynamic indentation, has been revisited and the star-crack formation documented. The ball-drop test, i.e. fragmentation during a hard-body low-velocity impact representative of an anti-break-in experiment, has been adapted to analyze the influence of glass fragmentation on the dissipated energy and to test the residual resistance to multiple impacts. Based on this work, new leads are proposed for future studies.

Dynamic indentation tests

The blowpipe experiments have revisited the dynamic indentation tests by providing reliable velocity and force measurements. The addition of a high-speed camera has allowed the synchronized monitoring of the star-crack formation, and quantitative measurements have been performed. The reliability of the device has been presented, and the uncertainties of the force measurements at the indenter have been discussed: the signal being very sensitive to noise, the use of low frequencies and of the force measured far from the glass is a choice that allows exploiting numerical simulations representative of the stress and velocity at stake during a blowpipe test.

The focus was on developing an original testing device for crack formation experiments. A more extensive series of tests can be performed to achieve a better definition of the mechanical behavior of the indented zone. In particular, performing tests with a Vickers indenter may be interesting, as this geometry is very well documented in the literature. During the blowpipe tests, the crack evolution proceeds by sudden jerks caused by the vibration of the impacted plate and the indenter. Therefore, comparing the indentation law (i.e. force F versus indentation δ) obtained during the dynamic indentation tests with a quasi-static indentation using the same indenter can give more information on the dynamic effects on the laminated glass behavior. The main objective of those experiments was to predict the star-crack formation on the laminated glass with respect to the impact parameters: gravel shape and mass, its velocity normal to the glass, and the sample material and geometry.
Adding parameters such as surface condition or temperature during testing would also provide a confidence interval for the model based on the fitted parameters. Furthermore, changing the glass composition, using borosilicate glass, can help compare the behavior of the indented zone.

Dynamic indentation model

A model has been designed by reproducing the test boundary conditions. Comparing the measured response with that of a purely visco-elastic plate allowed characterizing, in a residual way, the behavior under the indenter during the formation of the star-crack network. This one-dimensional point of view can represent the loss of stiffness under the indenter during a test and accounts for the very complex, and otherwise hardly describable, indentation zone. The model can be enhanced by changing the most influential parameters: for instance, the indenter geometry should not change the local damage behavior, the glass ply thickness may have a minor influence on this behavior, and, finally, a change of glass formulation should lead to a new law. Even if the unloading phase has not been considered in the model, the amount of elastic energy stored in the local area is small compared with the global elastic energy stored in the bending laminated glass plate. However, the stiffness decrease may affect the behavior of the indentation zone during the second and third force peaks and should be explored. With a finely modeled behavior under the tip, it becomes possible to use X-FEM in the elastic part of the glass to make the crack branches evolve during an impact and, therefore, predict the final crack length. A criterion may be defined to describe the number of branches of the star crack as a function of the impact velocity (in the spirit of [Vandenberghe et al.]) and to propagate them from the boundary between the damaged zone and the elastic zone.

Glass fragmentation

Many conclusions have been reached thanks to the instrumentation of the cannon device, especially the new S-DIC measurements of the deformed surface of a fragmented laminated glass plate during an impact. The influence of glass fragmentation on energy dissipation has been explored. Even if the cracking pattern does not influence the direct impact on an unbroken laminated glass plate, it becomes significant for future dynamic loads, as it dictates which parts of the structure are more damaged than others. The triangular fragment model can be improved to describe the circular crack initiation during the test and to predict the location of the orthoradial cracks that delimit the finely fragmented zone and the outer plate area beyond the first crack.

S-DIC enhancement

Some new explorations concerning the S-DIC analyses may be conducted. To study the whole test duration, releasing some degrees of freedom of the regularized code can help describe the slight orthoradial movement. Mesh evolution can be implemented during the test, depending on the observed movement in certain zones. Using elements smaller than the smallest fragments can help better describe the appearance of joints at the crack locations. Indeed, with more refined elements (at most half the fragment size), the slope breaks of the deformed surface would only correspond to a new crack, considering rigid glass fragments. A multiscale approach can also be considered [Hild et al. 2021].
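On the brightness and contrast issue raised next, a common remedy in DIC practice is a per-zone affine gray-level correction fitted by least squares. The following Python sketch illustrates the generic idea only; it is not the code used in this work, and the synthetic gray levels are placeholders:

```python
import numpy as np

def affine_graylevel_correction(f_ref, g_def):
    """Fit g ~ a*f + b over a zone, then return the corrected residual.

    f_ref, g_def: 1D arrays of gray levels sampled at the same physical
    points in the reference and (back-projected) deformed images.
    Generic DIC practice, not the code used in this thesis.
    """
    A = np.column_stack([f_ref, np.ones_like(f_ref)])
    (a, b), *_ = np.linalg.lstsq(A, g_def, rcond=None)
    residual = g_def - (a * f_ref + b)   # what remains once the lighting
                                         # (brightness/contrast) change is removed
    return a, b, residual

rng = np.random.default_rng(0)
f = rng.uniform(50, 200, size=500)              # synthetic reference gray levels
g = 1.15 * f - 12 + rng.normal(0, 2, size=500)  # deformed view under new lighting
a, b, res = affine_graylevel_correction(f, g)
print(f"a={a:.2f}, b={b:.1f}, residual RMS={res.std():.2f} gray levels")
```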
Dealing with the brightness and contrast changes is also at stake, even if this issue can be partly resolved using iterative S-DIC (i.e. using the previous image as the reference frame). A change of instrumentation (e.g. higher magnification, higher image definition) may be needed in further studies to increase the number of integration points in the finer fragmentation areas.

Models: circular crack initiation and membrane deformation

The membrane model, based on the experimental measurement of the cracking pattern evolution, allowed assessing the instantaneous delamination length over the surface. This parameter was easily measured during quasi-static TCT tests in the literature, but is very difficult to apprehend during ball-drop experiments. Therefore, the model introduced in this work provides a way to use the extensive tensile test data in a simulation of cannon tests. Considering finite strain, especially the dominant out-of-plane movement during the test, should complete the model. This evolution is already proposed in Fourton's perspectives [Fourton 2019] and can be addressed using membrane kinematics [Chevaugeon]. The generalization of the model to non-axisymmetric impacts would be a plus in determining the evolution of the delamination far from the impact areas of the plate; indeed, it should help characterize the area of a plate where the impact energy is dissipated during a ball-drop experiment. One perspective for the model is predicting the crack length instead of imposing the number of radial and orthoradial cracks. It may be possible to avoid using the crack densities and to use the deformed surface, which is more easily accessible, with a suitable criterion for crack formation. This can be performed using the measured strain and strain-rate regimes, which would be linked to a theoretical crack density. The delamination and the dissipated energy could then be directly deduced from the deformed state for each image.

Towards a damage description of the structure

The instrumented multi-impact tests have scratched the surface of a much larger topic: the residual resistance of broken laminated glass. The new geometry introduced on the cannon allows repeated tests to be performed and then compared. During successive impacts, there is competition between the fragmentation density, the evolution of the glass-PVB delamination, and the residual strength of the fragmented plate. If the glass fragments are small, the whole structure will be less resistant to a second impact, regardless of the impact location on the plate. If the fragments are large, there is larger delamination near the previous impact, making that area more susceptible to tearing than the rest of the structure during another loading. As tearing of the interlayer can happen outside the comminution zone, it is possible to formulate the crack field as an anisotropic "damage" model. Knowing the damage field on the already-impacted laminate will help determine the limit damage value of a laminated glass subjected to several impacts and anticipate the residual behavior of the fragmented laminated glass. One should work out the impact-energy dissipation capacity of any damaged region. This involves carrying out in parallel:
1. a theoretical analysis of the appropriate damage propagation and associated dissipation, based on the microscopic picture of the crack density and delamination;
2. a new impact test campaign, exploring the influence of the distance between the first and second shots and comparing the final delamination (or damage) fields. The impactor velocity (i.e. kinetic energy) may also be varied to access a wide range of experimental conditions for validation.

Presently, the influence of a tear implies considering the laminated glass plate as fully broken. It would be interesting to look at the effect of a pre-tear at a certain distance from the second impact and to see whether its length evolves and whether the laminate can still stop a projectile.

Assuming that u does not depend on r, and that u_r = 0 at r = 0, the radial displacement due to inertia is

\[ u_r(r,x,t) = -\nu r \frac{\partial u(x,t)}{\partial x} \tag{A.11} \]

The velocity v is then

\[ v = -\nu r \frac{\partial^2 u(x,t)}{\partial t \partial x}\, e_r + \frac{\partial u(x,t)}{\partial t}\, e_x \tag{A.12} \]

Since the bar is free to move in the radial direction, the force balance on the volume Ω of the bar of section S, defined between x₀ and x₁, together with the principle of virtual powers [Dhatt], is expressed as

\[ \int_\Omega \rho \frac{\partial^2 u_i}{\partial t^2}\, v_i^* \,\mathrm{d}V + \int_\Omega \sigma_{ij}\, \varepsilon_{ij}(v_i^*) \,\mathrm{d}V = 0 \tag{A.13} \]

with v* a trial field, i.e. a kinematically admissible velocity similar to v. Requiring this equation to hold for every v*(x, t), and writing I the second moment of area of the cylinder,

\[ I = \int_S r^2 \,\mathrm{d}S \tag{A.14} \]

one gets

\[ S\rho \frac{\partial^2 u(x,t)}{\partial t^2} - \rho \nu^2 I \frac{\partial^4 u(x,t)}{\partial x^2 \partial t^2} - S \frac{\partial \sigma(x,t)}{\partial x} = 0 \tag{A.15} \]

which leads to the Love equation [Meyers]

\[ \frac{\partial^2 u(x,t)}{\partial t^2} = c^2 \frac{\partial^2 u(x,t)}{\partial x^2} + \nu^2 \frac{I}{S} \frac{\partial^4 u(x,t)}{\partial x^2 \partial t^2} \tag{A.16} \]

The differential equation is still one-dimensional but now includes the effects of the radial displacement on the longitudinal displacement along the bar. The general solution of this equation is

\[ u(x,t) = \int_0^{\sqrt{ES/(\rho\nu^2 I)}} \mathrm{Re}\!\left[ F(\omega)\, e^{i(\xi(\omega)x - \omega t)} + G(\omega)\, e^{i(\xi(\omega)x + \omega t)} \right] \mathrm{d}\omega \tag{A.17} \]

where F(ω) and G(ω) are two complex functions determined from the boundary and initial values of the experiment, and the wave number ξ(ω) is

\[ \xi(\omega) = \pm \sqrt{ \frac{\rho \omega^2}{E - \rho \nu^2 \frac{I}{S}\, \omega^2} } \tag{A.18} \]

A.2.2 Correction of the harmonic wave dispersion

During a Split Hopkinson Pressure Bar test, strains are measured pointwise on the bar via strain gauges. The wave shifting consists of inserting the dispersion between two measuring points, for instance between gauges A and B separated by the distance L = x_B − x_A (with x_A < x_B, cf. Fig A.3), at a fixed pulsation ω:

\[ \varepsilon_B^*(\omega) = \varepsilon_A^*(\omega)\, e^{i\xi(\omega)L} \]

Zhao and Gary also solved the dispersion in the time domain using a shift function f_shift evaluated via the Fast Fourier Transform (FFT) [Zhao and Gary]:

\[ f_{shift}\!\left( \varepsilon^j_{asc,des}(t) \right) = \mathrm{FFT}^{-1}\!\left( e^{i\xi(\omega)L}\, \mathrm{FFT}\!\left( \varepsilon^j_{asc,des}(t) \right) \right) \]

with ε^j_{asc,des} the ascending and descending terms of the strain component j at A or B on the bar. The wave velocity is now frequency-dependent, i.e. the wave propagation time between the gauges depends on the frequency. The celerity C(ω) of the wave is therefore

\[ C(\omega) = \frac{\omega}{\xi(\omega)} = \pm \sqrt{ \frac{E \left( 1 - \frac{\rho \nu^2 I}{E S}\, \omega^2 \right)}{\rho} } \tag{A.19} \]

The weakness of this method lies in the fact that a multi-frequency signal is projected within a very limited time interval in the Fourier domain. This leads to a lack of accuracy in the contributions of each frequency, which are highly averaged.
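The FFT-based shift just described fits in a few lines of code. The following Python sketch assumes a circular steel bar whose properties, radius and gauge spacing are placeholder values (the 35 mm spacing mirrors the blowpipe set-up described later); note that with numpy's FFT sign convention, a positive propagation delay corresponds to multiplying the spectrum by exp(-i ξ L):

```python
import numpy as np

E, rho, nu = 210e9, 7800.0, 0.3       # placeholder steel bar properties
R = 5e-3                              # assumed bar radius (m); I/S = R^2/4 for a circle
I_over_S = R**2 / 4

n, dt = 4096, 1e-7                    # samples and time step (10 MHz sampling)
t = np.arange(n) * dt
eps_A = np.exp(-((t - 50e-6) / 5e-6) ** 2)   # synthetic strain pulse at gauge A

L = 35e-3                             # assumed gauge spacing (m)
omega = 2 * np.pi * np.fft.rfftfreq(n, dt)
denom = E - rho * nu**2 * I_over_S * omega**2          # Love dispersion, Eq. (A.18)
xi = np.where(denom > 0, omega * np.sqrt(rho / np.maximum(denom, 1e-30)), 0.0)

# Forward propagation over L (delay L/C(omega)) with numpy's convention:
eps_B = np.fft.irfft(np.fft.rfft(eps_A) * np.exp(-1j * xi * L), n)

print(f"dispersive delay   ~ {(np.argmax(eps_B) - np.argmax(eps_A)) * dt * 1e6:.2f} us")
print(f"non-dispersive L/c0 = {L / np.sqrt(E / rho) * 1e6:.2f} us")
```

For this low-frequency pulse the dispersive delay stays close to the non-dispersive estimate L/c₀, which is consistent with the averaging weakness mentioned above only becoming critical for broadband signals.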
A.3 Elastic waves: bending and shear waves propagation

In this section, the 1D propagation of bending and shear waves, without any other axial displacement, is assumed. The considered bending occurs in the (x, y) plane. First, one may express the equilibrium of a one-dimensional element dx, represented in scheme A.1, under the hypothesis of small strains and small strain velocities. In the circular beam, this directly leads to

\[ \frac{\partial M(x,t)}{\partial x} + Q(x,t) = I\rho \frac{\partial^2 \theta(x,t)}{\partial t^2}, \qquad \frac{\partial Q(x,t)}{\partial x} = \rho S \frac{\partial^2 u_y(x,t)}{\partial t^2} \tag{A.20} \]

The elastic equations in the case of the Timoshenko beam theory [Timoshenko] also give

\[ M(x,t) = \int y\, \sigma^b_{xx} \,\mathrm{d}S = EI \frac{\partial \theta(x,t)}{\partial x}, \qquad Q(x,t) = \int \sigma^b_{xy} \,\mathrm{d}S = GS \left( \frac{\partial u_y(x,t)}{\partial x} - \theta(x,t) \right) \tag{A.21} \]

with G the shear modulus. Finally, the complete bending equation can be expressed as

\[ \rho S \frac{\partial^2 u_y(x,t)}{\partial t^2} = I\rho \frac{\partial^3 \theta(x,t)}{\partial x \partial t^2} - EI \frac{\partial^3 \theta(x,t)}{\partial x^3} \tag{A.22} \]

A.3.1 Neglecting lateral inertia

In this subsection, the lateral inertia term is ignored: ∂²u_y(x,t)/∂t² = 0. Due to the initial conditions

\[ Q(x, t=0) = 0, \qquad M(x, t=0) = 0 \]

and the free edges of the indenter, Q(x=R, t) = 0, the transverse force may be neglected during the experiment. The bending equation is then expressed with rotational terms only:

\[ I\rho \frac{\partial^2 \theta(x,t)}{\partial t^2} - EI \frac{\partial^2 \theta(x,t)}{\partial x^2} = 0 \tag{A.23} \]

This equation is very similar to equation (A.2), with c = √(E/ρ) the traction-compression wave celerity. This wave creates a moment along the bar, which is a good approximation of the beam bending in the first microseconds of an impact but is rapidly underestimated due to radial inertia.

A.3.2 Including lateral inertia

Here, one considers equation (A.22), dealing with both rotational and lateral inertia. Substituting the deflection terms leads to a fourth-order equation:

\[ \frac{\partial^2 \theta(x,t)}{\partial t^2} + \frac{I\rho}{GS} \frac{\partial^4 \theta(x,t)}{\partial t^4} + \frac{EI}{\rho S} \frac{\partial^4 \theta(x,t)}{\partial x^4} - \frac{I}{S}\left(1 + \frac{E}{G}\right) \frac{\partial^4 \theta(x,t)}{\partial t^2 \partial x^2} = 0 \tag{A.24} \]

whose complex form reduces to

\[ -\omega^2 + \frac{I\rho}{GS}\, \omega^4 + \frac{EI}{\rho S}\, \xi(\omega)^4 - \frac{I}{S}\left(1 + \frac{E}{G}\right) \omega^2 \xi(\omega)^2 = 0 \tag{A.25} \]

with a wave number ξ(ω) depending on the pulsation ω:

\[ \xi(\omega) = \pm \frac{\omega}{c} \sqrt{ 1 + \frac{1}{2} + \nu \pm \sqrt{ \left(\frac{1}{2} + \nu\right)^2 + \left(\frac{2c}{R\omega}\right)^2 } } \tag{A.26} \]

For t > (L + ℓ)/C(ω),

\[ \sigma_\omega(0, t + \ell/C(\omega)) + Z(\omega)\, v_\omega(0, t + \ell/C(\omega)) = \sigma_\omega(\ell, t) + Z(\omega)\, v_\omega(\ell, t) \]
\[ \sigma_\omega(0, t - \ell/C(\omega)) - Z(\omega)\, v_\omega(0, t - \ell/C(\omega)) = \sigma_\omega(\ell, t) - Z(\omega)\, v_\omega(\ell, t) \tag{A.31} \]

Combining these equations, one gets

\[ v_\omega(\ell,t) = \frac{1}{2Z(\omega)} \left[ \sigma_\omega(0, t + \ell/C(\omega)) - \sigma_\omega(0, t - \ell/C(\omega)) \right] + \frac{1}{2} \left[ v_\omega(0, t + \ell/C(\omega)) + v_\omega(0, t - \ell/C(\omega)) \right] \]
\[ \sigma_\omega(\ell,t) = \frac{1}{2} \left[ \sigma_\omega(0, t + \ell/C(\omega)) + \sigma_\omega(0, t - \ell/C(\omega)) \right] + \frac{Z(\omega)}{2} \left[ v_\omega(0, t + \ell/C(\omega)) - v_\omega(0, t - \ell/C(\omega)) \right] \tag{A.32} \]

A.4.3 Bending and shear waves Lagrange diagram

In the same way, Lagrange diagrams can be designed for the pure shear and bending problems. One may find the impedance Z(ω) of any harmonic of pulsation ω such that

\[ \frac{\mathrm{d}(M_\omega + Z(\omega)\dot{\theta}_\omega)}{\mathrm{d}t} = \frac{\partial (M_\omega + Z(\omega)\dot{\theta}_\omega)}{\partial t} + \frac{\partial (M_\omega + Z(\omega)\dot{\theta}_\omega)}{\partial x} \frac{\mathrm{d}x}{\mathrm{d}t} = 0 \tag{A.33} \]

with the rotation velocity θ̇_ω = ∂θ_ω/∂t. Considering the Timoshenko beam theory for a purely elastic problem, the bending moment M_ω and rotation speed θ̇_ω in complex form are

\[ M_\omega = -j\,\xi(\omega)\, E I\, \theta, \qquad \dot{\theta}_\omega = -j\omega\theta \]

So, the two characteristic lines depend on the impedance Z(ω):

\[ \frac{\mathrm{d}x}{\mathrm{d}t} = C(\omega) \quad \text{with} \quad Z(\omega) = -\frac{EI\,\xi(\omega)}{\omega}, \qquad \frac{\mathrm{d}x}{\mathrm{d}t} = -C(\omega) \quad \text{with} \quad Z(\omega) = \frac{EI\,\xi(\omega)}{\omega} \]

The bending moment and rotation speed created during an off-centered impact are measurable from the difference of the two diametrically opposed gauges. This measured moment does not depend on the traction-compression movement, as it is treated separately in the equations. The measured forces and moments may be improved by coupling the traction-compression and the bending phenomena. In the blowpipe experiments, the bending moment was measured to ensure that the bending did not interfere with the purely axial loading during the whole test duration.
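In practice, with two diametrically opposed surface gauges ε₁ and ε₂ at the same cross-section of a bar of radius R, the axial and bending contributions separate as follows (a standard result for surface gauges, recalled here only as a sketch):

```latex
% Separation of axial and bending strains from two diametrically opposed
% surface gauges at the same cross-section (standard result, sketch only).
\varepsilon_{axial} = \frac{\varepsilon_1 + \varepsilon_2}{2}, \qquad
\varepsilon_{bend}  = \frac{\varepsilon_1 - \varepsilon_2}{2}, \qquad
M = \frac{EI}{R}\,\varepsilon_{bend}
```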
In the end, it was verified that the bending contribution was not a problem for the force and velocity measurements, as calculations including and neglecting bending gave the same results for the force and velocity.

B.2 Test bench

The blowpipe test bench allows quantitative force and velocity measurements during the formation of a star crack on laminated glass. It is instrumented with:
• Strain gauges glued on the cylindrical bar, which detect the passage of the elastic waves in its cylindrical body. From these signals, and within a uniaxial description of such a test, the strain and stress at the gauge positions are deduced. Using a Lagrange diagram, this information is transported to recover the velocity and force at the bar end where the indenter is located.
• The Shimadzu® 'HPV-X2' ultra-high-speed camera, recording 256 images at a frequency of 1, 2 or 5 MHz. It is used to monitor the crack formation visually; the length of a crack can thus be quantified during the test.
• A force measurement in the spring via a piezoelectric sensor, which allows setting the force in the compression spring before the shot and thus approximately predicting the impact velocity of the projectile.

By transporting this information, the stress and velocity states at every point of the indenter are recovered, in particular at the base of the cone (close to the glass) and at the impacted surface (at the top). The procedure can be summarized as follows:
1. The strain measurement directly gives the stress at the gauge positions (Hooke's law for the bar).
2. Knowing the initial state of the system, here at rest and without preload, the amplitudes of the forward and backward waves are measured at each gauge.

B.3 Complete processing of a test

An impact test performed with the device is presented. The bar is initially at rest and without preload, and shows no cross-section variation. A 5 mm wide frame holds the 100 mm × 100 mm laminated glass plate at its edge (cf. the support shown in the figure).

The force, F = πr²σ (with a 1 µs moving average), and the velocity are measured first. One can already note the very noisy force signal compared with that of the velocity. The crack formation occurs around 0 µs, after the first two force peaks and the first velocity peak have been measured; a slower loading phase then follows, during which the crack grows. The time-integrated measurements of the energy transmitted to the glass, E = ∫ F·V dt, of the impulse, I = ∫ F dt, and of the displacement, U = ∫ V dt, display the trends of the two previous signals without much noise. The step-like, jerky advance of the indenter can be noticed. It is also possible to recover the impact force, measured at the contact cross-section between the projectile and the bar. This signal is given by the black curve in Figure B.6. The first three curves correspond to the signal at the top of the indenter: the raw signal is plotted in black, a 1 µs moving average in orange, and the signal of the first force peak in yellow. The latter is used as the input force signal in the simulation presented in the next chapter. The impact force peak reaches 1.7 kN. The oscillations that follow allow defining an uncertainty of 100 to 150 N on the force measurement.
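The integrated quantities defined above (E = ∫ F·V dt, I = ∫ F dt, U = ∫ V dt) follow directly from the transported stress and velocity histories. A minimal Python sketch, using synthetic placeholder signals and an assumed bar radius (neither are the actual blowpipe data):

```python
import numpy as np

# Integrated quantities of section B.3 from transported stress/velocity
# histories at the indenter. Placeholder signals, for illustration only.
r = 1.5e-3                                            # assumed bar radius (m)
dt = 1e-7
t = np.arange(0, 100e-6, dt)
sigma = 200e6 * np.exp(-((t - 20e-6) / 8e-6) ** 2)    # placeholder stress (Pa)
v = 2.0 * np.exp(-((t - 22e-6) / 10e-6) ** 2)         # placeholder velocity (m/s)

F = np.pi * r**2 * sigma                   # force at the indenter, F = pi r^2 sigma
E = np.cumsum(F * v) * dt                  # energy transmitted to the glass (J)
I = np.cumsum(F) * dt                      # impulse (N.s)
U = np.cumsum(v) * dt                      # displacement (m)

print(f"F_peak = {F.max():.0f} N, E_end = {E[-1]*1e3:.2f} mJ, "
      f"I_end = {I[-1]*1e3:.3f} mN.s, U_end = {U[-1]*1e6:.1f} um")
```

With these placeholder amplitudes, the force peak comes out on the order of 1.4 kN, i.e. the same order of magnitude as the 1.7 kN measured peak quoted above.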
The signals are compared with the force measured at the base of the cone (purple curve), close to the glass surface. The difference between the force measurements shows that most of the wave is reflected at the indenter, because of the very different impedances of glass and of steel or diamond. The final crack obtained is visible in Figure B.7. Large petal-shaped cracks are located just below the outer glass surface. The spherical contact of the indenter creates a circular comminuted zone (white area) whose diameter measures 600 µm. A radial crack start is noticed at the 9 o'clock position. (i.e. perfect adhesion), since the damage is localized in the first half of the outer glass ply. A 5 mm wide frame is drawn on the inner face of the non-impacted glass ply, to simulate the simple-support conditions of the composite on the PMMA frame.

B.4 Simulation of this test

The material parameters used are reported in Table B.1. The mesh is presented in Figure B.9 and consists of hexahedra. The finest mesh is located at the spherical indenter/glass contact, where the element size does not exceed 20 µm (i.e. 10% of the tip radius of curvature). Along the bar, the mesh size is 1 mm. In the elastic zone of the laminated glass, more than 2 cm away from the impact, a coarse 5 mm mesh is used. The first instants of the indentation are of interest here, since this is the time interval during which the crack forms; the secondary propagation phase is not studied here. The simulation duration therefore extends to the end of the first velocity peak (i.e. at 10 µs).

The results of the elastic simulation of the test are compared with the experimental ones of section B.3. The strains are compared in Figure B.10. An excellent agreement is observed between the solid-line signals of the test and the dashed ones of the simulation. The most notable difference concerns the downstream gauge, mainly after the contact phase with the glass, which differs strongly between the test and the elastic simulation. The maximum measured deviation is 10⁻⁵ in strain. The force, velocity, impulse, displacement and transmitted energy can also be measured with a Lagrange diagram.

The force measurement is clearly the most sensitive one for the blowpipe test: the slightest change on the test bench causes significant differences in the force. However, the trends of the force curves show comparable amplitudes and evolutions up to the end of the first velocity peak. It can also be noted from the signal that the inertia of the indenter cone is negligible and does not appear on the curves. Hence, the measurements made at the base of the cone are very close to the actual loading of the indenter on the glass, which validates the choice of the indenter tip geometry. In view of the curves, the velocity and displacement measurements are well reproduced by the elastic simulation, whereas the force, impulse and energy appear less reasonable. The linear elastic modeling is therefore problematic. This gap must be corrected by describing the glass-indenter contact as well as possible, which is proposed in the following.
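As an order-of-magnitude check on the explicit simulation set-up described above, the stable time increment is bounded by the smallest element size over the wave speed. A quick estimate, assuming typical soda-lime glass properties (these values are assumptions, not the thesis data):

```python
# Explicit-dynamics stable time increment estimate, dt <= L_e / c with
# c = sqrt(E/rho). Glass properties are assumed typical values.
E, rho = 70e9, 2500.0                  # assumed soda-lime glass
c = (E / rho) ** 0.5                   # ~5290 m/s
for L_e in (20e-6, 1e-3, 5e-3):        # finest contact zone, bar, coarse zone
    print(f"L_e = {L_e*1e3:6.3f} mm  ->  dt_stable ~ {L_e / c * 1e9:8.2f} ns")
```

The 20 µm contact elements impose increments of a few nanoseconds, which explains why the simulation is restricted to the first 10 µs of the test.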
B.4.3 Elasto-plastic simulation

Following several elastic simulations of different tests, a model is proposed to describe the non-linear behavior of the small volume of the indented zone (cf. section 2.3.3 of the manuscript). In this part, the idea is to consider the test as three elements in series:
1. The indenter and its cylindrical body, which remain elastic.
2. The laminated glass plate, which remains visco-elastic as a whole.
3. The small non-linear volume of the indented zone, linking the two.

B.5 Conclusion on the set-up and simulations

The blowpipe bench allows cracks to be formed on laminated glass while quantifying the loads imposed on the indented sample: repeatability of the tests, and modeling of the non-linear mechanics of dynamic indentation by an equivalent plastic behavior that reproduces the measurements quite faithfully. This is achieved even though the dynamic indentation of glass is complicated to describe, owing to the random character of the initiation, which influences the crack propagation and thus its final geometry.

A compromise is made in the design of the test bench, in the sense that the impedance of the bar carrying the indenter cannot be adjusted, since an experimental condition representative of gravel pitting must be replicated. Consequently, most of the momentum transmitted to the bar by the projectile is reflected at the end close to the glass, and very little is transmitted to the glazing. This results in complex back-and-forth elastic waves in the bar, and the force estimated at the indenter is very small and therefore subject to a very large relative noise. Conversely, the velocity and displacement estimated from the same gauge measurements show a much lower noise and are thus more reliable; the other measurements appear much more satisfactory as well. Moreover, the experimentally measured data are validated numerically. The model of the indented zone behavior accounts reasonably for the measurements. It still shows some shortcomings, however:
• on the one hand, in describing the unloading phases during the jerks imposed by the indenter on the glass; in particular, no visco-elastic or visco-plastic component is attached to this dynamic loading;
• on the other hand, though this may be a second model slaved to the first, in describing a crack propagation phase in the glass and focusing on an arrest criterion, hence providing a prediction of the final defect size.

…macroscopic deformation of the plate. The latter is due to the stretching and delamination of the interlayer, the phenomena responsible for most of the energy dissipation under such loading [16; Elzière 2016]. Such impacts, representative of everyday situations (car accident, falling body, …), can be reproduced in the laboratory with instrumented set-ups (e.g. [Del Linz et al. 2017; Mohagheghian et al.; 14]). With the one developed in Nourry's thesis [16], highly instrumented tests representative of a ball drop can be performed. The study focuses on the onset of cracking and the deformation of the composite.
A measurement of the deformed shape of the back face of the glass and of the evolution of the impactor kinetic energy during the test are combined to relate the performance, the deformation of the composite and the cracking of the glass plies. As described in the EN 356 standard [15], laminated glass is able to stop a projectile under successive loadings, even after cracking. After performing several impacts on samples of different sizes, the influence of the cracks created at the first impact will be discussed with respect to the resistance of the fragmented laminate, measured during the following shots. A modification of the bench geometry will also allow the impact point to be varied between two successive shots. The main objective is to create the cracking and then qualify it before loading the glass again. Indeed, zones that are more or less damaged will not behave in the same way during successive tests. A large number of tests have been performed, but a large part of the exploitation of the results remains to be done.

The synchronization of this measurement with a high-speed camera made it possible to relate the progress of the crack formation to the loads applied to the material. The empirical model associated with the tests allows measuring the penetration of the indenter into the glass and determining an effective constitutive law of the damaged zone during the impact. The point of view adopted here is to consider three elements in series: the indenter, whose material parameters are known; the non-linear part encompassing the diamond tip and the newly created crack, which is unknown; and the rest of the laminated plate in bending, which is known. The boundary conditions of the composite supports and the relative positions of the elements are controlled. The impact force is applied at the top of the impactor. The fields are measured…

Concerning the fragmentation tests, implementing brightness and contrast corrections would be a plus to make the analysis of the deformed plate and the measurement of the cracking more robust. The cracking and dissipation models must evolve: avoid imposing axial symmetry, move to large deformations, predict the crack density… The loading under multiple impacts has only been touched upon and can be largely deepened.
Fig 2.5, are closer to the reference. The explicit code used for the numerical tests is sensitive to noise and adds high-frequency artifacts to the signals measured via Lagrange diagram. 5 FIG. 2 . 4 - 3 FIG. 2 . 5 - 3 FIG. 2 . 6 - 524325326 FIG.2.4 -Blowpipe simulation on void. A null low-frequency force is globally measured, except at the end where it is slightly increasing due to the short duration of the strain signals. FIG. 2.7 -Star-cracking on laminated glass sample. (a-c) High-speed imaging (at 5 Mfps) and (d) crack size measurement. Time origin is given by the increase of the detected acceleration (i.e. trigger) signal during the contact. FIG. 2.8 -(a) Gauges glued on the indenter and (b) their electric diagram. R 1 is the gauge nominal resistance of 120 Ω. The Wheatstone bridge and the 120Ω resistances, R i are shown in Fig 2.8b. Both locations have two diametrically opposed strain gauges to correct bending measured during an off-centered impact. In Fig 2.9a strain measurements from the two upper gauges, ε 1 and ε 2 close to the impact zone are shown, together with the two lower ones, ε 3 and ε 4 close to the base of the cone. In contrast, in Fig 2.9b similar measurements for an off-centered impact with one signal in tension and a second in compression on the two upper gauges are shown. However, the average signals of both experiments displayed in Fig 2.9c and Fig 2. FIG. 2.9 -(a)-(c) Centered and (b)-(d) Off-centered strain measurements. (c) and (d) are the mean signal of upper or lower site. FIG. 2 . 10 - 210 FIG. 2.10 -Trigger setting. The movie duration is varying with the acquisition rate as the image number is limited to 256 with this camera. 12c. Finally, secondary crack propagation occurs in phase 4: slow growth of the crack branches is observable for 20 µs. The final star shape is obtained in Fig 2.12d. The end of the impact happens around 60 µs. A discharge marks it: the displacement changes trend after a short plateau (Fig 2.11), and some of the remaining elastic energy stored in the glass is restituted to the indenter (decrease of the energy curve in Fig 2.11). FIG. 2 . 11 - 211 FIG. 2.11 -Velocity, displacement, force, impulse and energy measurements of the indenter during blowpipe experiment. Four steps can be distinguished with a camera set in 3/4-view: 1-Dynamic indentation & elastic deformation. In-depth propagation. 2-Star-crack opening. 3-Bounce. 4-Secondary crack growth. The red dashed line are set at 21.2 µs, 24.6 µs, 39.8 µs and 60.2 µs. FIG. 2.12 -Images of the blowpipe experiment corresponding to the red vertical lines in Fig 2.11. 4 FIG. 2 . 13 - 4213 FIG. 2.13 -The four crack branches length evolution, percent of their maximum length. Corresponding labels on the right-hand image (cf Fig 2.12d). FIG. 2 . 14 - 214 FIG.2.14 -Through-thickness camera setting. The lateral lightning is performed through the external glass ply in the z-direction. FIG. 2.15 -First and last images of the side-view 5 Mfps film. Initial view, (a) without mask, (b) with mask. Final image, (c) without mask, (d) with mask. 14 FIG. 2 . 17 - 14217 FIG. 2.16 -Symmetries observable during the side-view captured blowpipe experiment. 4 FIG. 2 . 18 - 4218 FIG.2.18 -Superposition of 6 force and velocity measurements at the indenter during experiments with the blowpipe prototype. FIG. 2 . 19 - 219 FIG. 2.19 -Blowpipe set-up: a steel-made indenter with diamond Rockwell tip is instrumented with strain gauges as a mini-Hopkinson bar. Fig 2 . 
2 19b), closer to the contact with the impactor, and the lower site ('B' in Fig 2.19b), close to the cone and the glass surface. The gauges position are shown in Fig 2.19b. The reference position (x=0) is the lower site (labeled 'B'). The upper site (labeled'A') is placed at a distance L ∼ 35 mm from the reference, and the cone bottom is distant from ℓ ∼ 10 mm from 'B'. The strain uncertainty is estimated at 1.6×10 -6 (i.e. 0.7% of the maximum deformation, i.e. a tension of 2.5 mV). FIG. 2 . 2 FIG. 2.20 -Standard deviation of the gray level (dashed line) under the indenter tip, together with the measured displacement of the indenter cone (solid line) at the same instant. The gray level change under the indenter is shown in Fig 2.26. FIG. 2 . 2 FIG. 2.21 -Peak velocity of the indenter tip (m/s) versus the spring initial load (N). 5 FIG. 2 . 22 - 5222 FIG. 2.22 -Superimposed force (N) and velocity (m/s) for four impacts on flat soda-lime glass samples. No bending of the sample is authorized, and the initial load in the spring is close to 25 N. 7 FIG. 2 . 72 FIG.2.23 -Force (□) and Velocity (⃝) measurements at the indenter tip. FIG. 2.25 -Through thickness crack evolution. (a) Force (□) and Velocity (⃝) measurements at the indenter tip. (b) Crack depth (□) and in-plane width (⃝). 6 FIG. 2 62 FIG.2.28 -Force (top) and velocity (bottom) of the indenter measurement during three dynamic indentation experiments on polystyrene foam. The raw signal is displayed in blue (-). The mean force on 24 µs is plotted (red line, --), showing more minor oscillations around 0 N. FIG.2.29 -Indenter and laminated glass scheme. The whole blowpipe is composed of three elements in series:(1) The indenter, which is instrumented via strain gauges to calculate stress and velocity in the upper cross-section of the cone. (2) Small local non-linear volume between the indenter and the sample. (3) The laminated glass plate, without the damaged volume (2), which remains visco-elastic during the experiment. FIG. 2.30 -Blowpipe simulation in Abaqus ® CAE software. The imposed impact load is represented on the curve and applied on the top surface of the indenter. A sectional view of the bottom of the cone and its contact with the glass surface is represented in the top right-hand corner. FIG. 2 . 2 FIG. 2.31 -(a) Displacement and indentation evaluated at the bottom of the cone. (b) Average effective behavior of the local zone under the indenter. The experimental force is plotted versus indentation depth. The identified behavior (b) is implemented as a plastic law (defined from experimental stress vs plastic strain curve) of the local damage zone, allowing the measurement of a new displacement of the cone bottom U ep . FIG. 2.32 -SEM Low vacuum observation of the indented zone after a blowpipe experiment on the laminated glass. Two radial branches of the star-crack are particularly visible (at 3:30 and 9:00 on a clock). The indentation zone diameter measures 540 µm. The surface flakes around the indentation are formed during the unloading phase. FIG. 2 . 33 - 233 FIG. 2.33 -Conical indentation median plane observation through the glass thickness via an optical microscope. The polarized light underlines the residual stress in the glass ply. The non-elastic zone is 500 µm deep and 1 mm large. the deviatoric part of the Cauchy stress tensor, σ the stress criterion (e.g. Von Mises) and εp the equivalent plastic strain rate described by Fourton and Landis et al. 
by a power law of exponent m εp ε0 + 1 = σ σ y m considering the material yield stress σ y and the initial plastic strain rate ε0 . The plastic strain rate εp i j is directly linked to the plastic energy dissipated (i.e. the bulk contribution of the dissipated energy) during the crack advance FIG. 3 . 1 - 31 FIG.3.1 -Scheme of a cannon experiment. The impactor and projectile are guided in the barrel. The impactor head finally hits normally the laminated glass. If the laminated glass is completely pierced, the impactor will be stopped by an arrest cone in the barrel. Compared to other techniques, Stereo-Digital Image Correlation [Sutton et al. 2009] (S-DIC) is a non-contact, non-destructive, robust, and precise optical method to access 3D shape and kinematic information (motion and deformation) of large structures [Dufour et al. 2015b; Genovese and Sorgente 2018]. Moreover, by expressing S-DIC into a global Finite Element based formulation, the virtual model (e.g. CAD [Dufour et al. 2015a] or Finite Element mesh [Pierré et al. 2017; Berny 2020; Colantonio et al. 2020] A classical Point-to-Point initialization using natural features (e.g. edges and corners) provides an initial guess of the projection matrix. Then, the projection matrices are further refined in the Global CAD/FEbased framework as proposed by [Dufour et al. 2015a; Shao et al. 2016; Berny 2020].S-DIC also enables to fill in the gap between impact tests and more classical material tests. The energy dissipation that occurs during an impact through delamination and stretching of a polymer layer can be estimated using the through-crack tensile (TCT ) test results of the litterature[Seshadri et al. 2002; Elzière 2016; Del Linz et al. 2017; Samieian et al. 2018; Fourton et al. 2020; Ma et al. 2022]. Fig 3 . 3 .FIG. 3 . 3 - 3333 FIG. 3.3 -Vacuum cycle for 44-2 laminated glass samples. FIG. 3 . 4 - 34 FIG. 3.4 -The stereo camera setup: 1 The 400W spotlights. 2 Shimadzu ® HPV-X2. 3 Right-view Photron ® SA-5 (24 mm lens). 4 Left-view Photron ® SA-5 (24 mm lens). 5 Telemeter. 6 Barrel box. 7 Laminated glass sample clamped in a circular window. 8 Impactor half-sphere. Cameras, shown in Fig 3.4, are used to record the details of the dynamic fracture. The first camera (2) is a Shimadzu ® 'HPV-X2' that captures the crack initiation at 100,000 Hz with a 400×250 pixel definition. The camera axis is coincident with the cannon z-axis. Two synchronized (at ±20 µs) Photron ® 'SA-5' cameras(3 & 4) are positioned on each side of the device, monitored at 30,000 Hz. The CMOS sensors ensure a good quality of the images captured with the two lateral cameras, but their definition is limited and decreases as the acquisition rate increases. The total definition of each camera is then 640×376 pixels. Moreover, due to the very short exposure time of the fast cameras, two 400 W spotlights are oriented towards the sample to provide enough light with a negligible PVB heating. FIG. 3 . 5 - 35 FIG. 3.5 -Initial view of the two lateral cameras. World coordinate system projected on the images. (d) Acceleration (10 4 m/s 2 ) 2 4 . 2 . 42 FIG. 3.7 -Example of surface mesh used for S-DIC measurement, projected on the two camera views. FIG. 3 . 8 - 38 FIG. 3.8 -Extrapolated out-of-plane (a) displacement and (b) velocity measured at different times. Kinetic energy (J) of the laminated glass plate. FIG. 3 . 9 -FIG. 3 . 10 - 39310 FIG. 3.9 -(a) Kinetic energy of the laminated glass. 
(b)-(c) Energy stored or dissipated in the laminated glass, with (black dashed line) or without (plain blue line) the kinetic energy of the plate. The signal appears smoother when the contribution of the kinetic energy is removed from the telemeter measurement.
FIG. 3.11 - Initial 2D mesh projected on the two reference frames. T3 element average edge length is 4 mm.
FIG. 3.12 - 1D shape functions q_i(r) ensuring displacement continuity, in straight lines. Interpolated T3-nodes (Fig 3.11) as circles on the new shape functions. The plot illustrates 15 mm long elements for better readability, while 4 mm elements were actually used in the axisymmetric S-DIC measurements.
In Fig 3.14, 3.16 and 3.18, which correspond to the 20th, 50th and 80th frames, are shown the residual fields for global (a)-(c) and axisymmetric (b)-(d) S-DIC, expressed in gray levels, and the out-of-plane displacement field for global (e) and axisymmetric (f) S-DIC, in millimeters. The raw images, (a) left and (b) right, of the corresponding instants are represented in Fig 3.13, 3.15 and 3.17. The residuals and displacements are shown on the undeformed geometry (Z=0 plane). The out-of-plane displacement is also plotted on the current deformed 3D surface for global (g) and axisymmetric (h) S-DIC. Images are 8-bit encoded, meaning that the gray level of each pixel may vary from 0 to 255.
FIG. 3.13 - Stereo images at t=0.63 ms (i.e. iteration 20).
FIG. 3.14 - Comparison between global S-DIC (left column) and axi-S-DIC (right column) at t=0.63 ms (i.e. iteration 20). The different rows show respectively: (a-b) residual field for the left camera images; (c-d) residual field for the right camera images; (e-f) out-of-plane displacement plotted onto the initial mesh; (g-h) out-of-plane displacement plotted onto the deformed mesh.
FIG. 3.16 - Comparison between global S-DIC (left column) and axi-S-DIC (right column) at t=1.63 ms (i.e. iteration 50). The different rows show respectively: (a-b) residual field for the left camera images; (c-d) residual field for the right camera images; (e-f) out-of-plane displacement plotted onto the initial mesh; (g-h) out-of-plane displacement plotted onto the deformed mesh.
FIG. 3.18 - Comparison between global S-DIC (left column) and axi-S-DIC (right column) at the 80th frame.
FIG. 3.19 - Post-breakage sample observations performed just after the experiment: (a) sample still set in the device. Flat positions: (b) back side or monitored side view, (c) front side or impacted side view.
FIG. 3.22 - Out-of-plane displacement (mm) for several radii ('R', in mm) versus time (ms). Axisymmetric ('ax') and global ('gl') S-DIC are compared with the telemeter measurement.
FIG. 3.23 - Full-field measurement with the axisymmetric S-DIC: Z-displacement (mm) and r-displacement (mm). Fields are represented as a function of the radius, at different instants. The results for R=0 mm and R=140 mm are extrapolated with respect to the boundary conditions.
FIG. 3.24 - Schemes: (a) TCT test [Fourton 2019], (b) delamination crack front. During a test, the energy dissipation occurs at the crack front (orange area). Symmetry planes are represented in the figure (dashed lines) as the delamination is supposed to be symmetric in the model.
FIG. 3.25 - Simplified laminated glass behavior based on observations with TCT tests. Inspired from [Elzière 2016; Fourton 2019].
This argument is schematized in Fig 3.25. Fig 3.25a corresponds to the hyperelastic behavior of a PVB film submitted to tensile stress.
On this curve, the delamination occurs at one point that depends on the delamination velocity. This curve depends on the strain rate applied during the tensile test, as the PVB behaves viscoelastically. In the linear left-hand part in Fig 3.25b, λ_a is constant (steady-state regime), up to the maximum reachable delamination length a = L/2. Finally, in Fig 3.25c, the macroscopic behavior, as measured during the TCT tests, is schematized. The adhesion limit is reached very quickly with increasing extension. At the end of the steady state, no delamination occurs anymore, and the stress increases quickly before the tearing of the interlayer.
FIG. 3.26 - Schemes: (a) TCT data in steady-state regime. Solid line: limit between stable and unstable steady delamination. Dashed line: limit between steady delamination and interlayer tearing [Fourton 2019]. (b) Steady-state stretch, λ_a, vs tensile velocity, δ̇ [Elzière 2016]. (c) Hyperelastic behavior at 25°C for two cannon strain rates [Fourton 2019]. (d) Limit of adhesion σ_adh versus the delamination velocity ȧ; the delamination velocity is limited to an asymptotic value of 33 mm/s [Elzière 2016]. These curves are available for a PVB thickness of 0.76 mm.
FIG. 3.27 - (a) Monitored crack number evolution versus time during a cannon test, (b) defined zones on the fragmented plate. The cracks are numbered by watching the high-speed camera images and the residuals projected on the deformed surface to know their precise localization.
FIG. 3.28 - (a) Image of the test after the radial crack formation. (b) Scheme of the triangle model, with a radial crack as a symmetry plane. The shape of the triangle used in the model is visible in red in the image.
FIG. 3.29 - Half-fragment with applied linear force on the PVB bridge at the radial crack location. Symmetry planes (dashed orange lines) are represented at the median radius of the triangular fragment, and at the middle of the stretched PVB ligament.
FIG. 3.30 - Stress versus strain curve illustrating the softening at a newly formed polymer bridge, i.e. the orthoradial crack location.
FIG. 3.31 - (a) Boundary conditions on the half-fragment in Abaqus ® CAE 6.14. (b) Measured and imposed displacement of the half-sphere.
FIG. 3.32 - Nodal (a) displacement and (b) velocity evolution for different radii. The imposed displacement of the sphere in the simulation is also represented.
FIG. 3.33 - Energies (J) extracted from the simulation results. These data are superimposed on the previous experimental curves (blue dashed line).
FIG. 3.34 - Image of the pre-cracked plates before a second impact test at 130 J.
FIG. 3.35 - Measurements obtained during the second impact on pre-fragmented laminated glass samples. (a-e) Sample shapes are measured via S-DIC and plotted every 1 ms. 5 mm elements are used for these axisymmetric S-DIC measurements. (f) Impactor kinetic energy estimated from the telemeter versus time.
FIG. 3.36 - Scheme of the three sample sizes (dashed green). From smaller to larger areas: 300×300 mm² cannon samples, 560×560 mm² new cannon samples, 900×1100 mm² ball-drop samples. The device axis and positioning of the plates are represented.
Fig 3.36 represents the new dimensions of the window frame. The axis of the device, i.e. the contact location between the half-sphere and the glass surface, is indicated.
It is horizontally centered but positioned vertically at 15 cm from the lower edge and 35 cm from the upper edge. The cameras are installed as shown in Fig 3.4. However, the frame acquisition of the two Photron ® SA-5 cameras is reduced to 20 kHz, and the sensor resolution is increased to 704×520 pixels to monitor a larger area of the new plate. Camera lateral views are shown in Fig 3.38. A Memrecam ® HX-3 replaces the Shimadzu ® in the axis of the barrel. Its acquisition rate is also set at 20 kHz for a definition of 640×640 pixels and synchronized with the lateral cameras, using the same trigger signal. The front view is represented in Fig 3.37.
In Fig 3.37 is shown the crack pattern evolution during the test. At first, radial and orthoradial cracks slowly grow in number on the two glass plies. Cracks are formed throughout the test duration, up to 15 ms. The final damage pattern is observed in Fig 3.37h. A 40 mm diameter comminuted zone is visible under the impactor. A 200 mm external diameter disk zone shows an area with finer fragments. Beyond this limit, larger glass fragments are created. For instance, 10 cm long glass shards can be measured at the top. New elliptical cracks are formed at the bottom right to comply with the boundary conditions. This kind of elliptical crack is observed in ball-drop tests as well [Nourry 2005; Decourcelle 2011; Elzière 2016].
FIG. 3.37 - Front view of a cannon test at 130 J. (a) Moment of the impact and time reference. Radial cracks are formed in about ten milliseconds after the impact. (b-g) Time evolution of the crack pattern. (h) A 200 mm diameter circular crack zone is formed around the impact location.
FIG. 3.38 - S-DIC measurements: four out-of-plane displacement fields (mm) are plotted on the deformed mesh, projected on the current images. Right view on the right column, left view on the left.
FIG. 3.40 - (a)-(b) Second and (c)-(d) third impacts. First and last frames are shown. The largest defects from the first impact are plotted in orange (radial crack) and green (orthoradial crack). During the third impact, tearing of the interlayer appears at the highlighted critical crack location.
FIG. 3.41 - Impactor kinetic energy evolution, (a) versus time, (b) versus displacement, for three successive tests on a laminated glass sample. Tin side of the glass layers in tension. The initial energy drop in the first test is important because the cracks progressively appear. The fragmented plate can accommodate the sphere geometry in the two other cases.
FIG. 3.42 - (a) Pattern obtained during the first ply breakage (at t=2.4 ms). Patterns obtained at the end of the tests: (b) first shot, (c) second shot, and (d) third shot. The impact is made at 6 o'clock. A π/2 plate rotation is made between (b)-(c) and (c)-(d). No tear has appeared.
FIG. 3.43 - Impactor kinetic energy evolution, (a) versus time, (b) versus displacement, for three successive tests on a laminated glass sample. Tin side of the glass layers in compression. The initial energy drop in the first test is large because the cracks appeared late. The final cracking pattern is obtained quasi-instantaneously (in 2 or 3 frames). Moreover, only a few cracks are created under the impactor during the last two shots.
FIG. A.1 - Schemes of a beam submitted to bending, in the (x, y) plane.
FIG. A.2 - Lagrange diagram with no dispersion, used to analyze blowpipe experiments.
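The Lagrange (x-t) diagram of the caption above rests on the characteristic structure of the 1D wave equation. The following is a compact restatement, consistent with the 1D bar relations recalled in Appendix A below (u = f(x-ct) + g(x+ct), σ = E(f'+g'), v = c(g'-f')); it is a sketch of the underlying principle, not a quote from the thesis:

```latex
% Riemann invariants of the 1D elastic bar (E = \rho c^2):
\[
  J_+ = \sigma - \rho c\, v = 2E\, f'(x - ct), \qquad
  J_- = \sigma + \rho c\, v = 2E\, g'(x + ct),
\]
% J_+ is conserved along right-going characteristics, J_- along left-going ones:
\[
  \frac{\mathrm{d}J_+}{\mathrm{d}t} = 0 \ \text{ along } \ \frac{\mathrm{d}x}{\mathrm{d}t} = +c,
  \qquad
  \frac{\mathrm{d}J_-}{\mathrm{d}t} = 0 \ \text{ along } \ \frac{\mathrm{d}x}{\mathrm{d}t} = -c .
\]
```

On the diagram, the stress and velocity at any section of the bar (e.g. the cone bottom) follow by transporting J± from the two gauge records along these lines.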
FIG. B.2 - Strain at the upstream ('top') gauge in red and at the downstream ('bot') gauge in blue. The order of magnitude is 10⁻⁴.
FIG. B.3 - (a) Stresses and (b) velocities at the gauge positions. In red the upstream gauge, in blue the downstream gauge.
FIG. B.4 - Propagation of the waves used in the Lagrange diagram.
FIG. B.5 - Measurement of the quantities at the cone base, as close as possible to the indented surface: (a) force and (b) velocity; by time integration, (c) impulse, (d) displacement and (e) energy.
FIG. B.9 - Sectional view of the mesh at the contact location between the indenter and the glass. The dark green part under the indenter is a 600 µm side cube.
FIG. B.10 - Strains measured in the test (-) and in the simulation (--) at the gauge positions.
FIG. B.11 - Measurements of the quantities at the cone base: (a) force and (b) velocity; by time integration, (c) impulse, (d) displacement and (e) energy. Data from the test (blue) and from the elastic simulation (orange) are represented.
FIG. B.12 - Strain measurements at the gauges (a) and measurements at the cone base: (b) force, (c) velocity, (d) impulse, (e) displacement and (f) energy. The plotted data come from the test (blue) and from the elastic (orange) and elasto-plastic (yellow) simulations.
FIG. B.14 - Plot of the force F versus the indentation δ.
Fig. 1 - Blowpipe test bench: the steel body of the indenter is instrumented with strain gauges like a miniature Hopkinson bar. Its Rockwell diamond tip is initially in contact with the glass surface.
... kinematics and the stress in the simulations. The relation between the force transmitted to the indenter tip and its penetration into the glass defines an effective constitutive law gathering all the non-linearities of the problem. The force is measured experimentally and imposed in the simulation, while the indentation depth is evaluated as the difference between the displacement measured in the experiment and that of the fully reversible visco-elastic simulation. This allows the non-linearity to be qualified and a 1D behavior of the created non-linear damage to be identified. This study opens the way towards a characterization of laminated glass performance. By varying the geometries, materials and crack topologies, it will be possible to optimize the choice of materials to better resist gravel impacts.
3 Fragmentation upon impact and residual resistance
Qualifying and quantifying the mechanical response of laminated glass under dynamic loading is important for the safety of passengers in a vehicle or of passers-by under a glass roof. Such impacts are easily encountered during ordinary use of the composite, during accidents (and possibly break-ins). Numerous regulations are thus in force, and normative tests, such as the ball-drop test, must be performed before a new product is placed on the market. Rather than directly replicating the EN 356 standard with a ball drop, a dedicated cannon bench is used.
Fig. 2 - Stereo-vision installation: 1 400 W spotlights. 2 Shimadzu ® HPV-X2. 3 Right Photron ® SA-5 (24 mm lens). 4 Left Photron ® SA-5. 5 Telemeter. 6 Cannon. 7 Clamped laminated glass. 8 Impactor half-sphere.
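Item 5 of the caption above is the telemeter used to track the impactor; the kinetic-energy curves quoted elsewhere (e.g. Fig 3.9 and Fig 3.41) derive from differentiating its displacement record. The sketch below illustrates the idea only: the sampling, smoothing window, impactor mass and toy trajectory are all assumptions, not values from the thesis.

```python
import numpy as np

def impactor_kinetics(z, dt, mass, win=11):
    """Velocity and kinetic energy from a sampled telemeter displacement z (m).
    Central finite differences, then a moving average to tame telemeter noise
    (the first/last `win` samples are distorted by the smoothing window)."""
    v = np.gradient(z, dt)
    v = np.convolve(v, np.ones(win) / win, mode="same")
    return v, 0.5 * mass * v ** 2

# Toy decelerating trajectory: v0 = 9 m/s and an assumed 3.2 kg impactor give
# ~130 J, the shot energy quoted in chapter 3 (illustrative values only).
dt = 1e-5
t = np.arange(0.0, 15e-3, dt)
z = -9.0 * t + 0.5 * 800.0 * t ** 2
v, Ek = impactor_kinetics(z, dt, mass=3.2)
print(f"E_k just after impact ~ {Ek[20]:.0f} J")  # close to the 130 J shot energy
```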
The device (Fig 2) consists of a projectile and an impactor with a cylindrical body ending in a 50 mm radius steel half-sphere, reproducing the EN 356 impact geometry, which uses steel spheres of the same radius. The impactor and the projectile are guided in the cannon, whose axis is at normal incidence with the sample surface. At the end of its travel, the impactor is stopped by a shoulder. The motion of the median plane of the half-sphere is measured with a telemeter sampled at high frequency. The space around the test bench also allows the installation of several synchronized high-speed cameras offering different views of the 3D deformed shape of the sample. Stereo digital image correlation (S-DIC) is revisited to measure an axisymmetric deformed shape of the back face of the laminated glass. The code is regularized upstream of the displacement iterations and locally filters out erroneous displacements caused by artifacts in the gray level variations: brightness and contrast changes, strongly lit zones, crack apparition, blur. The axisymmetric behavior hypothesis makes it possible to measure the non-speckled part of the laminate, and thus to follow the evolution of the number of radial and orthoradial cracks as a function of the radius. During a test, the out-of-plane displacement (Fig 3a), the radial displacement (Fig 3b) and the velocity (Fig 3c) can be measured in an axisymmetric basis. With this method ...
Fig. 3 - Fields measured with the axisymmetric S-DIC, represented as a function of the radius, at different instants. The results at R=0 mm and R=140 mm are extrapolated from the boundary conditions.
4 Perspectives
Table 1.1 - Composition of soda-lime and borosilicate glass.
Table 1.3 - PVB properties at temperature T∼20°C [Martín et al. 2020].
Table 1.4 - PVB Prony series at 25°C [Elzière 2016].
[...; Seshadri et al. 2002; Galuppi and Royer-Carfagni 2016; Dam 2017; D'Ambrosio et al. 2019; Samieian et al. 2019].
The four different sandwiches used for the tests are reported in table 1.5.
Table 1.5 - Samples used for blowpipe experiments. Plate surface is 100×100 mm².
Impacted ply (mm) | Internal ply (mm) | PVB plies (mm) | Total thickness (mm)
1.6 | 1.1 | 0.38 | 3.08
1.6 | 1.1 | 0.76 | 3.46
1.6 | 1.1 | 3×0.38=1.14 | 3.84
1.6 | 1.1 | 0.76+0.38=1.14 | 3.84
Many researchers have examined quasi-static indentation of glass samples [Roesler; Frank; Evans], by moving a spherical indent on a surface or describing a Hertz cone crack formation in the glass thickness. Concerning dynamic loadings on semi-infinite monolithic glass, Chaudhri and Kurkjian observe the formation of in-depth, radial, and orthoradial cracks on glass dynamically indented by steel balls [Chaudri]. Rapid imaging is performed by Knight et al. to follow the cracking steps of the glass block and thus characterize the crack geometries, including the conical crack typical of Hertz contact [Knight]. Other experiments, such as the hard-body impact on windshields, are widely carried out. These experiments are mainly designed to replicate head-form impact experiments. In the literature, Pyttel et al.
model failure in laminated windshields, without considering the visco-elastic behavior of the PVB interlayer, using a local Rankine maximum stress criterion to describe the crack initiation and growth [Pyttel]. Peng et al. seek the optimum fracture stress in their models that best matches the different simulation patterns of the laminates [Peng]. They found that a 50 MPa failure stress of glass was the most appropriate value to describe their head-form impact experiments. Comparing experiments and models, Yu et al. proposed a homogenized single-layered model for laminated windshields that also works for a triple-layered windshield application [Yu]. Finally, the tempered laminated windshield is reported to be safer and lighter than classic flat soda-lime glass glazing [Li].
Several phenomena are reported during hard-body impacts. For instance, Nassr et al. insisted on the predominance of the flexural waves versus the in-plane tension-compression ones in the glass breakage. In addition, for thin glass plates, particularly laminated glass, it is advisable to take into account the crack opening mechanisms resulting from the bending of the glass, which is dominant during blunt impact [Behr et al. 1993]. The different cracking mechanisms of thin float glass (between 3 and 12 mm) and impact speeds of small spherical projectiles around 10 m/s are listed by [Ball].
... with nodal amplitudes U_i [Aleyaasin and Harrigan 2010; Hild and Roux 2012]. The Hessian matrix [H]_c relative to each camera, c, and right-hand vector {b}_c are defined as in the standard Gauss-Newton DIC formulation (see the sketch after the caption of Fig 2.2 below).
Verifying the gauge measurements with the simulations
Table 2.1 - Indenter and projectile properties.
                | Steel | Diamond | Units
Density         | 7800  | 3500    | kg/m³
Young's modulus | 210   | 1050    | GPa
Poisson ratio   | 0.27  | 0.1     | -
In Fig 2.2a the indenter body diameter is 5 mm, then narrows down to 4.6 mm. The cone angle is 130°. The external ply (i.e. impacted ply), PVB foil, and internal ply thicknesses are respectively 1.6 mm, 0.76 mm, and 1.1 mm. The virtual test starts with an initial velocity of the projectile of -5 m/s. It impacts the fixed indenter, initially in frictionless contact with the glass. The laminate is free of movement: because of its sufficient inertia with respect to the initial velocity and the 100 µs experiment duration, it does not need any boundary conditions. This was verified by a null displacement of the plate boundary. The force and velocity are measured at the base of the cone, accessible in practice via the Lagrange diagram (see section 1.4.1). Velocity and force signals are reported in Fig 2.3, 2.4, 2.5 and 2.6.
FIG. 2.2 - (a) and (b) Dynamic explicit analysis with axisymmetrical geometry, on Abaqus ® 6.14, for the blowpipe experiment feasibility. (c) Gauge signals: in red the simulated upper gauge signal, in blue the simulated lower gauge signal.
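The explicit expressions of [H]_c and {b}_c referenced above were lost in extraction. The following is the standard Gauss-Newton form used in global (FE-based) S-DIC, written here as an assumed reconstruction rather than a quote of the thesis; f_c is the reference image of camera c, g̃_c the deformed image corrected by the current displacement estimate, and ∂x_c/∂U_i the sensitivity of the projected pixel position to the nodal amplitude U_i:

```latex
% Gauss-Newton normal equations of global S-DIC (assumed standard form):
\[
  [H]_c^{ij} = \sum_{\mathrm{ROI}}
    \Big( \nabla f_c \cdot \frac{\partial x_c}{\partial U_i} \Big)
    \Big( \nabla f_c \cdot \frac{\partial x_c}{\partial U_j} \Big),
  \qquad
  \{b\}_c^{i} = \sum_{\mathrm{ROI}}
    \big( f_c - \tilde{g}_c \big)
    \Big( \nabla f_c \cdot \frac{\partial x_c}{\partial U_i} \Big),
\]
% and the displacement update gathers all cameras:
\[
  \Big( \sum_c [H]_c \Big) \{\delta U\} = \sum_c \{b\}_c .
\]
```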
Preliminary simulations are achieved to verify the reliability of the gauge measurements with the device. Gauge locations are shown in Fig 2.2a. The forces and velocities determined from the simulated gauge measurements can be compared with those directly computed by the finite element software at the bottom of the cone. In addition, it will be possible to look at the gauge signals (Fig 2.2c) during an impact test on the void (i.e. without the laminate) to evaluate the measurement dispersion and its uncertainty. Finally, two filters are compared: a low-pass filter and a dispersive model-based filter. The simulation reproduces the new experimental set-up. The axisymmetry along the axis of the tube is assumed. The glass is supposed to be elastic (cf soda-lime glass in table 1.2) and to have perfect adhesion with the visco-elastic PVB foil (see tables 1.3 and 1.4). The indenter and the projectile are made of steel, and the Rockwell tip, in white in Fig 2.2b, is made of diamond (cf table 2.1 for material characteristics).
Table 2.2 - Rockwell indenter material properties.
Density | 7648 | kg/m³
Young's modulus | 205 | GPa
Poisson ratio | 0.27 | -
Given the experimental data, 4 phases are noteworthy during the crack formation (Fig 2.11). At the beginning of the impact (zone 1 in Fig 2.11), the glass plate globally behaves elastically, but a local indentation (of a few microns in length) is made under the indenter tip. The crack propagates through the thickness of the glass ply until it reaches half of its thickness. This propagation occurs in bursts since the zone is in slight compression, which does not facilitate the formation of cracks; these have no choice but to evolve vertically through the thickness. This is also observed in a previous reference [Kishta 2018b]. This step is hard to capture with the chosen 3/4-view, but it corresponds to the first images (Fig 2.12a). Then, at time t=21.2 µs, a brutal crack opening occurs in less than 4 µs (between Fig 2.12a and Fig 2.12b). At the same time, the speed of the cone decreases, and the maximum impact energy is reached. The star crack is formed during this quick step. During phase 3, the speed becomes negative with a slight decrease of the displacement, which means the indenter bounces off the glass for a few microseconds. The crack length does not evolve much between Fig 2.12b and Fig 2.12c.
Table 2.3 - Constitutive parameters used in the simulations. PVB Prony series (at 25°C) in the time domain, with the amplitudes g_i, the characteristic times τ_i (in s), and G(t) = G_0(1 + Σ_i g_i exp(-t/τ_i)) the shear modulus.
The sample support is modeled as a rigid plane contact of 5 mm from the edges. However, it plays a minor role as the impacts are performed far enough from the sides of the laminated glass plate to avoid overlaying reflected elastic waves during dynamic indentation. The force measured at the top of the input bar, at the contact section hit by the impactor (see force and contact in Fig 2.30), is imposed. The displacement U_e of the bottom of the cone is obtained via the simulation (Fig 2.31a). The difference at the cone bottom between the experimentally measured displacement U_x and the simulated one U_e gives the indentation δ = U_x - U_e.
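As a quick numerical check of the shear relaxation law quoted with Table 2.3 above, the sketch below evaluates G(t)/G_0 from the (g_i, τ_i) pairs given in the document; the convention G(t) = G_0(1 + Σ g_i exp(-t/τ_i)) follows the text, and only the printed instants are arbitrary.

```python
from math import exp

# (g_i, tau_i) pairs from Table 2.3 (tau_i in seconds)
PRONY = [(0.10690, 1e-4), (0.09720, 1e-3), (0.08748, 1e-2),
         (0.01940, 1e-1), (0.00583, 1.0),  (0.00107, 1e1),
         (0.00078, 1e2),  (0.00058, 1e3),  (0.00032, 1e4)]

def shear_relaxation(t: float) -> float:
    """Normalized PVB shear modulus G(t)/G0 = 1 + sum_i g_i * exp(-t/tau_i)."""
    return 1.0 + sum(g * exp(-t / tau) for g, tau in PRONY)

# The modulus relaxes quickly: most of the drop happens below one second,
# which matters for the ~100 us blowpipe experiments.
for t in (1e-5, 1e-3, 1e-1, 10.0):
    print(f"t = {t:8.0e} s  ->  G/G0 = {shear_relaxation(t):.4f}")
```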
                | Glass | PVB  | Steel | Diamond | Damage | Units
Density         | 2500  | 1000 | 7850  | 3500    | 2500   | kg/m³
Young's modulus | 70    | 1    | 200   | 1050    | 70     | GPa
Poisson ratio   | 0.23  | 0.49 | 0.3   | 0.1     | 0.23   | -
(g_i, τ_i): (0.10690, 1e-04), (0.09720, 1e-03), (0.08748, 1e-02), (0.01940, 1e-01), (0.00583, 1.0), (0.00107, 1e01), (0.00078, 1e02), (0.00058, 1e03), (0.00032, 1e04)
However, the relative movement of the fragments during the tests is seldom studied, the random cracking pattern of the glass layers being quite tricky to apprehend. In a general context, the experimental tools to characterize the impact response of either the impactor or the laminated glass can be classified into three major categories:
1. Direct measurements on the projectile/impactor via an accelerometer or a force cell [Yu et al. 2017; Li et al. 2019; Wang et al. 2020].
2. Detection of wave propagation on the glass surface via multiple strain gauges glued at different radial distances from the impact location [Wang et al. 2020; Nassr et al. 2021].
3. Kinematic field measurements in 2D [Del Linz et al. 2017] or 3D [Mohagheghian et al. 2017].
[... et al. 2017; Li et al. 2019; Nassr et al. 2021] on windshields. These studies have been coupled to various numerical models [Pyttel et al. 2011; Chen et al. 2016; Lai et al. 2018; Samieian et al. 2019; Shahriari and Saeidi Googarchin 2020; Fernandes et al. 2021; Ma et al. 2022; Kim et al. 2022]. Wang et al. impacted very thick samples with a rigid spherical projectile [Wang 2017; Wang 2020]. Measurement techniques used by the authors are direct measurements of the projectile acceleration, combined with high-speed cameras capturing the in-plane cracking of the glass plies. Quantitative measurements are performed, mainly locally, using strain gauges or accelerometers, for instance.
Table 3.2 - Half-sphere elastic material properties.
Density | 7800 | kg/m³
Young's modulus | 210 | GPa
Poisson ratio | 0.3 | -
... the state of the windshield structure, multiple projectile geometries, multiple impact velocities, prestress states linked to forming, structural stiffness, ... In the following, the focus is therefore placed on the crack apparition phenomenon and on the processing of the performed measurements, rather than on the comparison and influence of the many other parameters of the system. The impacted material is a laminated glass composed of one poly(vinyl butyral) (PVB) ply sandwiched between two (flat) soda-lime glass plies. The layer thicknesses are given in Figure B.1: 1.6 mm for the external glass, 1.1 mm for the internal glass and 0.76 mm for the interlayer.
Le verre feuilleté repose sur un cadre en PMMA, permettant à la plaque de fléchir sous l'axe de l'indenteur au cours de l'essai. Schéma de principe du montage blowpipe pour l'essai traité. On retrouve la géométrie de l'indenteur et de la plaque de verre feuilleté de 100×100 mm 2 . l'indenteur est en diamant sur une hauteur d'environ 400 µm. le comportement visco-élastique du PVB est décrit par une série de Prony. PVB Prony series (at 25°C) in the time domain, with the amplitudes g i , characteristic times τ i (in s), such that the shear modulus reads G(t) = G 0 (1 + g i exp(-t/τ i ): 1 -Paramètres matériaux des éléments de l'essai aussi utilisés dans les simulations.Le traitement du signal pour la mesure de contrainte (σ) et vitesse (v) est présenté dans les annexes précédentes du manuscrit. En supposant une propagation 1D (suivant x positif et x négatif) des ondes élastiques dans la barre, et un état initial nul en contrainte et vitesse (i.e.état statique initial et pas de précharge), on peut remonter au fur et à mesure de l'avancement de l'essai, aux vitesses et contraintes aux jauges. Indenter Laminated glass PMMA Frame support Glass 1.6 mm 0.76 mm Glass 1.1 mm (for free bending experiments) Glass Density 2500 Young's modulus 70 Poisson ratio 0.23 (g i ,τ i ) (0.10690, 1e-04) 60 mm PVB 1000 1 0.49 (0.01940, 1e-01) FIG. B.1 -Parameters PVB (0.00078, 1e02) 20.7 mm 31.2 mm 9.5 mm Projectile (m=3.2g) Gauges: strain measurement processing from Hopkinson method Rockwell diamond tip (200µm curvature radius) 5 mm Diamond Units 3500 kg/m 3 1050 GPa 0.1 -(0.09720, 1e-03) ∅ 5 mm Steel 7850 200 0.3 (0.08748, 1e-02) (0.00583, 1.0) (0.00107, 1e01) (0.00058, 1e03) (0.00032, 1e04) Les propriétés matériaux des éléments, également utilisés dans les simulations, sont reportés dans la Table B.1. Le projectile et l'indenteur sont en acier et leurs dimensions sont : 5 mm de diamètre, un projectile de longueur 20.7 mm, un indenteur de longueur 60 mm. La pointe de Table B . B 3. La contrainte et la vitesse peuvent être transportées en tout point de la barre, et en particulier au niveau de l'indenteur.Les grandeurs utiles pour le dépouillement par diagramme de Lagrange sont présentés dans la TableB.2. Parameters Units Density ρ 7850 kg/m 3 Young's modulus E 200 GPa Cylinder radius r 2.5 mm Inter-gauges distance L 31.2 mm From bottom gauge to cone bottom distance ℓ 9.5 mm From impact location to upper gauge distance d 17.2 mm 2 -Grandeurs utilisées pour le dépouillement des mesures de jauge de déformation. B.1). La charge statique dans le ressort, mesurée à l'aide du capteur piézoélectrique, est initialement de 25.69 N, impliquant un premier pic de vitesse attendu autour de 4.2 m/s (cf Fig. 2.21 du manuscrit). La mesure des jauges est présentée en Figure B.2. L'essai est très rapide et la formation de fissure prend environ 30 µs entre le moment où la jauge amont détecte la première onde incidente et le moment où la fissure en étoile est formée sur le verre.L'origine des temps est choisie à l'aide de la réception du signal de trigger par les boîtiers de commande. C'est pourquoi on mesure du signal utile avant 0 µs. On peut noter l'écart d'arrivée du signal de τ = L ρ/E = 6.2 µs de la jauge amont à la jauge aval. Les deux signaux sont ensuite utilisés pour les mesures de contraintes aux lieux des B.4.1 Concept et design De nombreux modèles apparaissent dans la littérature concernant l'impact des vitrages feuil- letés. 
Différentes approches sont examinées et les résultats concernant l'évolution de la fissure et la réponse du matériau fragile sont comparés [Wang et al. 2017] : Modélisation par éléments finis (FEM) [Sun et al. 2005b,a; Peng et al. 2013; Chen et al. 2016] utilisant des modèles de zones cohésives [Chen et al. 2015; Gao et al. 2017; Prasongngen et al. 2019], modélisation par éléments finis étendus (XFEM) [Xu et al. 2017], les éléments mixtes finis-discrets [Wang et al. 2018; Chen et al. 2022], et la péridynamique [Lai et al. 2018; Wu et al. 2020]. Cepen- dant, la plupart de ces études se concentrent sur les impacts de tête ou de corps dur et sur la fragmentation des pare-brises stratifiés. Les énergies cinétiques des projectiles sont donc bien plus importantes dans ces essais : autour de 100 J, alors que celles des impacts gravillon se situent plutôt autour de 100 mJ. La simulation présentée ci-après est très liée aux essais et concerne plusieurs points : Remerciements Appendix A Hopkinson bars theory Performing measurements of a dynamic experiment is a real issue by itself. Indeed, due to the high-frequency phenomena to measure, the experimenter must adapt its devices -for instance, high-speed rate cameras, laser trackers, or strain gauges. The scientist will also deal with the increase in the uncertainty measurements and then has to filter the appropriate frequencies when needed. This appendix is an echo to the section 1.4.1. It completes the theory of the propagation of the elastic wave in the bar, introducing the dispersion problem due to the lateral inertia of the bar and the bending and shear waves propagation. A.1 Long bars 1D equations The Hopkinson bar device, created in 1913 by Hopkinson [START_REF] Hopkinson | A method of measuring the pressure produced in the detonation of high explosives or by the impact of bullets[END_REF]] and revisited in 1949 by Kolsky [START_REF] Kolsky | An investigation of the mechanical properties of materials at very high rates of loading[END_REF]], allows the dynamic loading in compression, traction, or shear of a sample, i.e. at deformation rates of 10 s -1 to 1500 s -1 . At the beginning of the test, a first incident bar is impacted by the projectile with specified energy. The shock waves propagate through the bar, the material, and finally into the outlet bar. At the interfaces, part of the wave is transmitted, and the other part is reflected. For a Hopkinson bar experiment analysis, we consider the above-mentioned elastic bars throughout the stress. Adding the hypothesis of uniaxial propagation, and neglecting Poisson effects (i.e. the lateral inertia of the bars), because the length of a bar is much greater than its diameter, one can write the equilibrium equation with E the Young's modulus, u the movement of a material point on the bar, σ = Eε the stress and ε = ∂u/∂x the longitudinal strain x, ρ the density and c = E/ρ the elastic tension/compression waves velocity in the bar. The general form of u is with f , a wave, propagating through x > 0 and g, propagating through x < 0. The speed definition is When the output bar is long enough to neglect the propagation in x < 0 one can consider g = 0, so σ(x, t) = -ρcv(x, t) (A.5) A reflection at the interface between the bar and the sample is considered. The strain of the incident wave ε i and the reflected wave ε r is then measured by a gauge glued to the input bar. The speed of the bar extremity x 1 is computed as idem for the forces F(x 1 , t) considering the bar section S . 
A.2 Elastic waves: axial propagation and dispersion Writing a subsection dedicated to dispersion is justified here because the configuration described in chapter 2 is a Hopkinson mini-bar system. Indedd, the small length of the bar makes it difficult to justify the usual 1D hypothesis based on Kolsky-Hopkinson theory [START_REF] Hopkinson | A method of measuring the pressure produced in the detonation of high explosives or by the impact of bullets[END_REF][START_REF] Kolsky | An investigation of the mechanical properties of materials at very high rates of loading[END_REF]], the indenter diameter being only ten times shorter than its length. A.2.1 Considering the lateral inertia To deal with a 1D model, the lateral inertia of our mechanical waves in the mini-bar should be considered, also described by Aleyaasin for visco-elastic polymeric bars [START_REF] Aleyaasin | Wave dispersion and attenuation in viscoelastic polymeric bars: Analysing the effect of lateral inertia[END_REF]. A simple hypothesis is to take into account a diameter variation due to Poisson effect [START_REF] Meyers | Dynamic behavior of materials[END_REF]], i.e. a displacement u like u = u r (r, x, t)e r + u(x, t)e x (A.9) Adding Hooke's law The wave celerity C(ω) of an harmonic is with R the bar radius. From here, the condition over the wave celerity with an asymptotic celerity c s = G/ρ, meaning the bending waves celerity due to shear stress are slower than the traction and compression ones. A.4 Strain gauge measurements A.4.1 Principle A strain gauge is a sensor that converts mechanical displacement into the change of resistance. The device can be attached directly to the surface of a deformed material to measure strains on a sample. It exists in various sizes and shapes, but the working principle of a foil-type gauge is constant: the resistance of the foil changes as it is subjected to mechanical deformation on the tested sample. This change is proportional to the applied strain. Gauges are connected to a Wheatstone bridge to detect the slight changes in the gauge resistance. The electric potential variation measured with a voltmeter informs on the micro-strain using the gauge factor. The advantages of this measuring device are its simplicity and low-costs. It can also adapt to many surfaces with a suited coating. However, a simple strain gauge provides only a 1D local measurement of the sample deformation. One must make a hypothesis on the 1D fields to match an object global behavior. A.4.2 Traction-compression Lagrange diagram The construction of a Lagrange diagram is justified, with a 1D hypothesis, as it will allow us to measure stresses and velocities at the gauges in the presence of incident and reflected waves in the short indenter. Indeed, waves reflection does not allow for the exploitation of the equation (A.5), only applicable during the first wave passage. So, the principle is based on measuring the stress at two places of our indenter, i.e. at two separate gauges of a length L, in x direction. One names "upper strain gauge" the upstream gauge of abscissa x = -L and "lower strain gauge" the downstream gauge of abscissa x = 0. With a 1D purely elastic and axial behavior of the indenter, To solve the dispersion problem, Zhao and Gary expressed the sum of the ascending and descending waves of the gauge measurement [START_REF] Zhao | A new method for the separation of waves. Application to the SHPB technique for an unlimited duration of measurement[END_REF]]. 
However, it is decided here to draw a Lagrange diagram for each considered frequency and to follow the characteristic lines. We suppose a harmonic decomposition of the considered quantities as a discrete sum of frequency components, so that σ(x,t) = Σ_ω σ_ω(x,t) and v(x,t) = Σ_ω v_ω(x,t). From the solution of Love's motion equation (A.17), given by Pochhammer, one should define the impedance Z(ω), giving σ_ω = -Z(ω) v_ω for a harmonic travelling towards x > 0, which leads to the opposite sign for the ascending counterpart. Presently, the equation of each frequency component is solved separately. The initial stress and speed are null in the indenter, C(ω) is the elastic wave celerity of the frequency in the material, defined in equation (A.19), and Z(ω) = E/C(ω) is the impedance. The relations between stresses σ_ω(x, t) and speeds v_ω(x, t) can be expressed along the characteristic lines of celerity ±C(ω). For t ≤ L/C(ω), only the first (descending) passage is present, and σ_ω = -Z(ω) v_ω holds at both gauges. To measure the stress and speed at the indenter tip, for instance at length ℓ, one may consider the data computed at the downstream gauge. The stress σ_ω(x = ℓ, t) and speed v_ω(x = ℓ, t) can be expressed by transporting the descending and ascending components over the distance ℓ, i.e. delaying the descending part by ℓ/C(ω) and advancing the ascending part by the same amount.
Appendix B Processing of a blowpipe test
In this appendix, elements concerning the blowpipe test bench are added. One test is fully processed. The simulation results are given. The behavior model of the indented zone is implemented, and the simulation results are compared with the experimental data.
B.1 Gravel impact on glass
A crack can propagate during the impact between a piece of gravel and a windshield; this category includes, for instance, Hertz cone cracks or star cracks. Star cracks represent the most critical damage because their branches can grow over time under new loadings (static, vibratory, dynamic, thermal...). Tests on thin glass or laminated plates can be found in the literature to study different aspects of graveling tests (i.e. gravel impact). Even if gravel is not always used as the projectile because of its non-standard dimensions, the authors [Behr et al. 1993] analyzed the crack formation under impact, performed post-mortem analyses and designed specific devices to reproduce such impacts in a controlled manner. It can already be observed that the impact angle and velocity of the projectile are important parameters for the apparition of the star crack on laminated glass: for an identical load, a star crack forms on thin plies, but a conical mark appears on the surface of thick plies. Examining the angle of incidence during the test revealed that only the normal components are involved in the damage [Grant et al. 1998; Cleary et al. 2020]. These analyses can be extended to many different glass compositions, such as borosilicate glass. The evolution of the median crack in the glass can also be at stake because, when it reaches a critical depth, the through-thickness failure of the ply can no longer be avoided. This has been observed with X-rays [Kang]. The study of the formation conditions of such a crack represents a real challenge: sub-millimetric indentation zone, cracking in less than 30 µs, dependence on the state of the structure...
Abstract: Laminated glass, a composite made of two or more different glass sheets, interleaved with thin polymeric layers, is a widely used material for building or automotive applications.
Used as glazing, laminated glass is designed to ensure the user's safety by preventing large glass chip ejection during an impact and by dissipating the kinetic energy. The composite response to dynamic loads is therefore a crucial issue to validate the safety of its use through many standards implying a wide variety of tests. Resisting small impacts like graveling is also at stake for such glazing. It is proposed to study the star crack formation by designing a new device. The set-up is composed of a spring-loaded projectile that mimics a gravel momentum and a mini-Hopkinson input bar equipped with strain gauges, whose end is an indenter initially in contact with the sample. A synchronized high-speed camera, used to measure the crack propagation, is combined with the indenter force and velocity measured at the indenter tip. A simulation analyzes the non-linear local behavior accounting for the sample dynamic bending. This work is a prelude to the crack length prediction on windshields submitted to gravel pitting. Hard-body blunt impact is representative of anti-burglary testing. During a ball-drop test, the glass layers are cracked and a global large-strain regime, driven by the newly formed polymeric bridges, is activated. A large amount of impact energy is dissipated thanks to the stretching of the interlayer and delamination. To estimate the composite behavior after glass breakage, stereo digital image correlation analyses, introducing axial regularization, are carried out from high-speed camera images to extract the kinematic information of the sample motion. Leads are proposed to design a predictive model with regard to the observations. Repetitive impacts are also studied, as the crack pattern and the delamination history are crucial to evaluate the laminated glass residual resistance.
04114477
en
[ "sdv.bbm" ]
2024/03/04 16:41:24
2020
https://theses.hal.science/tel-04114477/file/TH2020LOUCHEARTHUR.pdf
Claude Bernard - Lyon; M. Frédéric Fleury; M. Hamda Ben Hadid; M. Didier; M. Pierre; Julia María Coronas-Serna; Arthur Louche; María Rodríguez-Escudero; Morgane Roussin; Paul R. C. Imbert; Isabel Rodríguez-Escudero; Laurent Terradot; María Molina; Jean-Pierre Gorvel; Víctor J. Cid; Suzana P. Salcedo; email: [email protected]; Sarah Bigot
The TIR-domain containing effectors BtpA and BtpB from Brucella abortus impact NAD metabolism
Keywords: Pseudomonas, TIR domain, TLR adaptors, UBAP1, virulence, pull-down, protein-protein interactions, tagged protein, affinity purification
Brucella species are facultative intracellular Gram-negative bacteria relevant to animal and human health. Their ability to establish an intracellular niche and subvert host cell pathways
Acknowledgments: I would first like to sincerely thank my thesis supervisor, Dr Suzana Salcedo. Suzana, I have now been working in your research team for quite some time. Indeed, I started working alongside her in November 2013, when she hired me on a one-year fixed-term contract, which was renewed for two additional years as an assistant engineer/research engineer. At the end of these contracts, Suzana offered me the opportunity to do a PhD in her research team; after some hesitation, I finally accepted. Today I absolutely do not regret this choice. Suzana has been a wonderful supervisor, always attentive and full of good advice. Beyond the scientific side and the journal clubs that allowed me to develop a critical mind, I also thank her for the moments shared as a team, notably the Christmas dinners, the summer barbecues and the activities we did outside the lab. All of this meant that, more often than not, I was happy to come to work at the lab in the morning. I thank the Agence Nationale de la Recherche for their funding. I thank Laurent Terradot, who welcomed me in his laboratory for my M2 internship. Laurent taught me all the steps, starting from DNA up to obtaining protein
Abstracts
Résumé en langue française: Intracellular pathogens have the ability to modulate host cell responses in order to survive and proliferate. This is achieved by producing and injecting effector proteins into host cells. These effectors can notably modulate the immune response and intracellular trafficking. Brucella, the causative agent of brucellosis, an animal disease transmissible to humans, delivers effectors into the host cell through its type IV secretion system. To date, only a few effectors have been identified in Brucella and their molecular functions remain unclear. During my thesis, we focused on two TIR domain-containing effectors of Brucella abortus, BtpA and BtpB, known to modulate the host innate immune response. However, it has recently been shown that purified TIR domains have NADase activity in vitro. We have thus shown that BtpA and BtpB retain this activity and are able to reduce the amount of NAD present in the host cell during infection. These results suggest that intracellular NAD plays a role in B. abortus infections and could constitute a new pathogenicity mechanism. In parallel to this project, I also had the opportunity to work on the characterization of TIR domain-containing effectors in other pathogens.
I made a major contribution to the characterization of the PumA effector in Pseudomonas aeruginosa and participated in the characterization of the TirS effector in Staphylococcus aureus. The main project of my thesis was to initiate a structural and functional characterization of two new B. abortus effectors, which we named NyxA and NyxB. After establishing that they are translocated into the host cell during infection, we observed these effectors in nuclear and cytosolic compartments. Through co-precipitation experiments, we showed that these two effectors are able to interact with each other, and we identified SENP3 as their eukaryotic target. SENP3 is a deSUMOylase essentially located in a nuclear sub-compartment, the nucleolus. This protein is involved in the regulation of many cellular functions and plays a major role in ribosome biogenesis. We showed that SENP3 is necessary for the replication of B. abortus in the host cell. The structural analysis of NyxB allowed us to identify an acidic pocket involved in the interaction with SENP3. Surprisingly, in cells infected with B. abortus, SENP3 is delocalized and no longer accumulates in nucleoli. In cells transfected with NyxA and NyxB, we observed that these two effectors recruit SENP3 to another nuclear compartment: PML nuclear bodies. Furthermore, these effectors also modulate the distribution of NVL and RPL5, key components of the ribosome biogenesis machinery in the nucleoli, which in infected cells form punctate structures in the cytosol in a SENP3-dependent manner. We observed that these cytosolic structures correspond to the cytosolic localization of the Nyx effectors and are also enriched in NUFIP1, a ribophagy receptor. In summary, this second project identifies two new B. abortus effectors, which impact the nucleolar localization of SENP3. They are involved in the formation of cytosolic structures enriched in NUFIP1, NVL and RPL5, suggesting a process similar to ribophagy.
Abstract in English: Intracellular pathogens have the ability to modulate the host's cellular responses to survive and proliferate. This is achieved by producing and injecting effector proteins into host cells. These effectors can modulate various cellular functions, including the immune response and intracellular trafficking. Brucella, the causative agent of brucellosis, an animal disease transmissible to humans, delivers effector proteins into the host cell through its type IV secretion system. To date, only a few effectors have been identified in Brucella and their molecular functions remain unclear. During my thesis, we were particularly interested in two Brucella effector proteins containing a TIR domain, BtpA and BtpB, known to modulate the host innate immune response. More recently, purified TIR domains were also shown to have NADase activity in vitro. We have thus shown that BtpA and BtpB retain this activity and are able to reduce the amount of NAD present in the host cell during infection. These results suggest that intracellular NAD plays a role in Brucella infections and may constitute a new mechanism for Brucella pathogenicity. In parallel to this project, I also had the opportunity to work on the characterization of TIR-domain-containing effectors in other pathogens.
I have made a major contribution to the characterization of the PumA effector in Pseudomonas aeruginosa and participated in the characterization of the TirS effector in Staphylococcus aureus. The major project of my thesis was to initiate a structural and functional characterization of two new B. abortus effectors, which we named NyxA and NyxB. After establishing they were translocated into the host cell during infection, we imaged these effectors and found them in nuclear and cytosolic compartments. By pull-down experiments, we showed that these two effectors were able to interact with each other, and we identified SENP3 as their eukaryotic target. SENP3 is a deSUMOylase essentially located in a nuclear sub-compartment, the nucleolus. This protein is involved in the regulation of many cellular functions and plays a major role in ribosome biogenesis. We have shown that SENP3 is necessary for the replication of B. abortus within the host cell. The structural analysis of NyxB allowed us to identify an acidic pocket involved in the interaction with SENP3. Surprisingly, in cells infected with B. abortus, SENP3 is delocalized and no longer accumulates in nucleoli. In cells transfected with NyxA and NyxB, we could observe that these two effectors recruited SENP3 to another nuclear compartment: the PML nuclear bodies. Furthermore, these effectors were also able to modulate the distribution of NVL and RPL5, key components of the ribosomal biogenesis machinery in the nucleoli, which in infected cells form punctate structures in the cytosol in a manner dependent on SENP3. We have observed that these cytosolic structures correspond to the cytosolic location of the Nyx effectors and are also enriched in NUFIP1, a ribophagy receptor. In summary, this second project identifies two new Brucella effectors, which have an impact on the nucleolar localization of SENP3. They are involved in the formation of cytosolic structures enriched in NUFIP1, NVL and RPL5, suggesting a process similar to that of ribophagy.
Brucellosis is a worldwide infectious disease caused by intracellular bacteria belonging to the genus Brucella [1]. Brucellosis seems to be an old disease, present among humans for a long time. Indeed, macroscopic and microscopic analyses of the lumbar vertebrae of ancient human remains from the Pliocene period (2.8 million years ago) have revealed lesions suggested to be signs of brucellosis [2]. Archaeologists have also found a jar containing fossilized cheese in an Egyptian tomb dating back to the 19th dynasty (1293-1185 BC), for which proteomic analysis revealed the presence of Brucella [3]. Moreover, symptoms reminiscent of brucellosis were described by Hippocrates in his publication Epidemics (around 410 BC). It wasn't until the late 19th century that the causative agent of brucellosis was isolated by a British physician, Dr David Bruce, from the spleen of a soldier who had died on the island of Malta, and for this reason the organism bears his name. Over time, brucellosis has had different names, including undulant fever, Maltese fever, Gibraltar fever, Crimean fever, goat fever and Bang disease.
Epidemiology
Brucellosis in the world: Brucellosis is one of the most prevalent zoonoses. Zoonoses, from the Greek zoo (animal) and nosos (disease), are defined as diseases that are naturally transmitted from vertebrate animals to humans. Every year more than 500,000 cases of human brucellosis are recorded [4], which is why the WHO has classified this disease as an important neglected zoonosis re-emerging in the world.
However, the incidence of the disease is believed to be under-diagnosed, and the true incidence is evaluated to be 10 times higher (5,000,000 cases) [5]. Brucellosis is still endemic in many regions, namely the countries of the Mediterranean basin, Latin America and the Middle East. Throughout the world, the geographical distribution of the disease varies greatly (figure 1). While the incidence of the disease is clearly declining in the developed world, this is not the case in developing countries, where it often reaches worrying levels. The prevalence of the disease ranges from 0.3 cases per million in developed countries to more than 1000 cases per million in endemic regions [6].
Figure 1: The most affected countries are Mongolia and the countries of the Middle East and the Balkans. Some Central and South American countries and the countries of the Mediterranean basin also have numerous cases of human brucellosis. [6]
Brucellosis in France: As a result of strong prophylactic campaigns in the 1960s on ruminant farms, cases of human brucellosis in France decreased from 800 cases in 1976 to 44 cases in 2000 [6]. France has been declared officially free of bovine brucellosis since 2005, and no outbreaks in animals were identified on the national territory from 2003 to 2012. Every year, cattle, sheep and goat farms are regularly tested for brucellosis. Now, any confirmation of brucellosis on a farm leads to slaughtering of the entire herd and the destruction of resulting products, as a precautionary measure. However, two outbreaks of bovine brucellosis were confirmed in 2012 on the French territory, thus calling for vigilance regarding this zoonosis [7] [8]. A first outbreak was due to the importation of an infected bovine animal and was rapidly eradicated. The second outbreak was related to a large wild reservoir discovered in mountain ibexes [8], which resulted in a massive eradication program of the older ibex members of the herd (infected and non-infected) in the Bargy mountain range to prevent further dissemination. These measures were highly controversial and in fact proved inefficient in the control of brucellosis in the ibex population. Therefore, new surveillance measures have recently been set in place for monitoring all wildlife in the region (ibex, chamois, deer) as well as local pastures (sheep, goats and cows) [9]. Selective slaughtering programs are in place, and a vaccination program is being studied.
Brucella spp
Brucella are small coccobacilli Gram-negative bacteria, 0.6-1.5 μm long by 0.5-0.7 μm in width. They are non-motile, non-encapsulated, non-spore-forming, and present neither pili nor toxins. Brucella are strictly aerobic, but some strains grow better in an atmosphere containing 5 to 10% CO2. Brucella does not present classical virulence factors known in other pathogenic bacteria, such as lysogenic phages, virulence plasmids, invasive proteases, exotoxins and toxic lipopolysaccharide (LPS). Nevertheless, the LPS plays an important role in the virulence of Brucella. The LPS, which is found in the outer membrane of all Gram-negative bacteria, is composed of a hydrophobic lipid moiety, called lipid A, normally associated with its toxic properties, linked to a polysaccharide with a hydrophilic core and a repeating O-antigenic oligosaccharide side chain. There are 2 types of LPS depending on the surface morphology: smooth or rough. Smooth corresponds to the presence of the O-polysaccharide side chain in the LPS moiety, rough to its absence. Most Brucella species are naturally smooth, except for B. ovis and B. canis, but can transition to form rough colonies in the laboratory, usually associated with reduced virulence. Based on 16S rRNA sequence homology, Brucella spp. belongs to the α2-subdivision of the proteobacteria, together with animal pathogens such as Bartonella spp. and Rickettsia spp., plant pathogens such as Agrobacterium tumefaciens, but also plant symbionts such as Sinorhizobium meliloti [10]. Brucella spp. are facultative intracellular bacteria that can infect many species of animals and can also be associated with disease in humans. Recently, the identification of several new Brucella species has considerably expanded the genus. To date, 12 species form the genus Brucella. The classification of these species was based on differences in host preference (Table 1) and phenotypic characteristics. In 1968, the genus was composed of 6 "classical" or "core" Brucella species affecting terrestrial mammals: Brucella abortus, Brucella canis, Brucella melitensis, Brucella neotomae, Brucella ovis and Brucella suis [4].
Most Brucella species are naturally smooth, except for B. ovis and B. canis, but they can transition to form rough colonies in the laboratory, a change usually associated with reduced virulence. Based on 16S rRNA sequence homology, Brucella spp. belong to the α2-subdivision of the proteobacteria, together with animal pathogens such as Bartonella spp. and Rickettsia spp., plant pathogens such as Agrobacterium tumefaciens, but also plant symbionts such as Sinorhizobium meliloti [10]. Brucella spp. are facultative intracellular bacteria that can infect many animal species and can also be associated with disease in humans.

Recently, the identification of several new Brucella species has considerably expanded the genus, which to date comprises 12 species. The classification of these species was based on differences in host preference (Table 1) and phenotypic characteristics. In 1968, the genus was composed of six "classical" or "core" Brucella species affecting terrestrial mammals: Brucella abortus, Brucella canis, Brucella melitensis, Brucella neotomae, Brucella ovis and Brucella suis [4]. These species are genetically very close, sharing on average more than 94% identity at the nucleotide level [11]. Biovars were recognized for some of these species. Since the 1990s, six new and "atypical" species have joined the genus Brucella: Brucella microti, Brucella inopinata, Brucella ceti, Brucella pinnipedialis, Brucella papionis and Brucella vulpis. Each species of Brucella has one or more preferential animal reservoirs that maintain its transmission cycle (Table 1). However, they are not totally specific to their reservoir. Recently, bacteria of the genus not belonging to any of the known species have been isolated from amphibians as natural hosts [12], the first non-mammalian hosts described for Brucella. In vitro and in vivo experiments suggest that this strain is able to invade and proliferate in macrophages but also to survive in the murine host model. The mechanisms explaining the apparent host-specificity of the different Brucella species remain to this day poorly understood. The majority of brucellosis infections in humans are due to Brucella melitensis, Brucella abortus, Brucella canis and Brucella suis (Table 1). In addition, the new species Brucella ceti and Brucella pinnipedialis, associated with cases of brucellosis in marine mammals, also appear to be capable of transmitting the disease to humans [13].

Table 1: Host preference and zoonotic potential of the different Brucella species. Modified from [14]. (Species | Host preference | Zoonotic potential | LPS phenotype)
Brucella neotomae | Desert wood rat | No reported infection | Smooth [17]
Brucella pinnipedialis | Pinnipeds (seals, sea lions and walruses) | Low | Smooth [18]
Brucella ceti | Cetaceans (dolphins, whales, porpoises) | Low | Smooth
Brucella microti | Wild voles, red foxes | No reported infection | Smooth [19]
Brucella inopinata | Unknown (isolated from a human breast implant infection; amphibians) | High | Smooth [20]
Brucella papionis | Baboons | No reported infection | (?) [21]
Brucella vulpis | Red foxes | No reported infection | (?) [22]

Transmission

In animals, transmission generally occurs at the time of abortion or parturition. High concentrations of Brucella are found in the fetal fluids of infected animals [23]. This pathogen is able to survive outside its host for up to several months depending on the environment, temperature and humidity. Human brucellosis has always been associated with an animal reservoir of Brucella spp. [24].
Different transmission routes can be distinguished:

- The most common route is the ingestion of food of animal origin, unpasteurized milk or raw dairy products. Indeed, when animals are infected with Brucella, their milk becomes contaminated. Consumers of unpasteurized dairy products are at risk of contracting brucellosis. Nevertheless, veterinary controls and pasteurization have made contamination by dairy products exceptional in France.

- Inhalation of aerosolized infectious material is one of the most common routes of human infection. Persons at risk for this mode of transmission are meat slaughterhouse workers and laboratory personnel working with the bacteria [25]. Brucellosis is the most common laboratory-acquired infection, and in the majority of cases aerosols are involved [26] [27]. For example, more than 100 students and staff became infected in two different institutes in China (the Lanzhou and Harbin veterinary research institutes) (Nature News, 17 December 2019). This is why, in French laboratories, experiments on Brucella must be carried out under biosafety level 3 (BSL-3) containment. Its high infectivity (10 to 100 bacteria are sufficient to infect humans), coupled with its possible spread by aerosols, makes Brucella an organism with the potential to be used as a weapon of bioterrorism [28]. For this reason, Brucella has been included in the list of possible bioterrorism agents by the Centers for Disease Control and Prevention (CDC) [6].

- Close contact with infected animals. Brucella can penetrate the body through skin wounds or mucous membranes. The people most at risk for this type of transmission are veterinarians, hunters and slaughterhouse workers.

- Human-to-human transmission is very rare. The disease can be transmitted to the newborn through the transplacental route or through breastfeeding by an infected mother. Organ transplantation, sexual contact and blood transfusion have also been reported as possible routes of human-to-human transmission [29] [30] [31].

Symptoms

Symptoms differ between animals and humans. In animals, Brucella has a tropism for the reproductive system. In pregnant females, the bacteria colonize the placenta, generally leading to abortion of the foetus in the second half of gestation, owing to the extensive colonization of trophoblasts by Brucella. Brucellosis in cows and goats also leads to a reduction in milk production, due to bacterial colonization of the mammary glands [32] [33]. In males, brucellosis is characterized by orchitis, which leads to sterility. In both male and female animals, symptoms can include arthritis [34]. In humans, brucellosis is a serious debilitating disease that can have a fatal outcome if untreated. The incubation period of the pathogen is 1 to 4 weeks, but it can sometimes last several months without any symptoms. The symptoms of brucellosis do not depend on the mode of transmission by which the patient became infected. The clinical presentation of brucellosis can be subdivided into three phases: acute, sub-acute and chronic. In the acute phase, clinical manifestations are variable and non-specific. The infected individual may present with flu-like symptoms, fever, chills, sweating, malaise, weight loss or osteoarticular pain, hepatosplenomegaly, lymphadenopathy and hearing loss [35]. The bacterium then spreads and can reach different organs. In the absence of treatment, the disease evolves into a subacute or chronic phase, which is much more complicated to treat [36].
The sites of infection are the bones and joints, the liver, and sometimes the heart and the nervous system. The latter two infections cause endocarditis and neuro-brucellosis, which can lead to death in 2% of cases [37].

Diagnosis

Diagnosis from clinical observation alone is difficult due to the variability and non-specificity of symptoms in the acute phase. Nevertheless, it is important to start treatment as early as possible in order to avoid the establishment of the chronic phase. For this reason, diagnosis is initially based on the patient's history of possible exposures (travel and/or consumption of contaminated milk products imported from endemic areas, etc.). Subsequently, an adequate laboratory diagnostic method is essential to confirm or rule out brucellosis. Diagnostic methods can be direct or indirect. Direct diagnosis includes culture and isolation of the strain from biological samples (blood culture, lymph node, bone marrow, cerebrospinal fluid, etc.), which is the gold standard. However, in the chronic phase of the disease, blood cultures are only very rarely positive. Moreover, this technique is time-consuming and expensive. Other direct detection techniques are the polymerase chain reaction (PCR) and real-time PCR. These techniques are sensitive and specific. They also have the advantage of being relatively fast and of reducing the risk of acquisition of brucellosis by the personnel performing the analysis, because the bacteria are inactivated. On the other hand, false negatives may be observed as a result of the presence of DNA polymerase inhibitors in clinical samples. False positives may also be observed in the case of sample contamination or cross-amplification reactions. Indirect diagnosis includes serological tests, which are the most commonly used diagnostics for brucellosis. They have the advantage of being quick, less expensive and safer. Among these techniques are the Rose Bengal test, Wright agglutination (the WHO reference reaction), the anti-Brucella Coombs test, immunocapture techniques and serology to detect specific IgG and IgM antibodies. False positives may occur due to antigenic relationships with other pathogens (Escherichia coli O157, Francisella tularensis, Yersinia enterocolitica, Vibrio cholerae and Salmonella species) [38].

Treatments, prevention & vaccines

Unfortunately, there is no safe and effective vaccine against brucellosis in humans. During the acute phase, the treatment of brucellosis is based on the combination of two antibiotics, doxycycline and rifampicin (the treatment recommended by the WHO), for a period of 6 weeks, or on streptomycin and doxycycline [39]. However, in some cases relapses are reported despite treatment [40]. In the absence of treatment, or when treatment is insufficient, the disease progresses and enters a subacute, focused phase, leading to the constitution of isolated or multiple infectious foci at the osteoarticular, neurological, genital, hepatic or cardiac level. In this case a treatment based on three antibiotics (doxycycline, rifampicin and gentamicin) is used for at least 3 months. In the chronic phase, treatment of the disease is made impossible by the fact that the bacteria have become inaccessible. Only the symptoms are treated; sometimes surgical operations are performed to remove the site of infection in cases of endocarditis or osteoarticular localization.
Prevention of infection relies on reducing people's exposure to infected animals, on food hygiene measures (milk sterilization) and on personal hygiene measures (wearing gloves and masks) for exposed occupations. In order to control the disease in the human population, it is important to control and eradicate the disease in domestic and wild animals. Indeed, a decrease in the incidence of the disease in animals is correlated with a decrease in the number of infected individuals. Control and eradication of the disease in animals must be achieved through surveillance tests and the slaughter of infected animals, but also through vaccination. Indeed, vaccination of cattle, sheep and goat herds is one of the best means of prevention to limit the transmission of the disease as much as possible. There are two types of brucellosis vaccines: live attenuated vaccines and inactivated vaccines. Currently, the vaccines licensed for use in livestock are live attenuated vaccines. The most commonly used vaccines to protect farm animals are S19 and RB51 for Brucella abortus, and Rev1 for Brucella melitensis.

Vaccine strain S19 is a live, spontaneously attenuated B. abortus mutant discovered in 1923 (Graves, 1943). This vaccine has long been the reference vaccine and confers a high level of protection. It is used for the vaccination of calves, and of cows at a reduced dose. This vaccine provides a good rate of immunization and greatly reduces the number of abortions. However, S19 vaccination has some disadvantages. S19 has a smooth phenotype due to its O-antigen LPS, which induces a specific antibody response and leads to interference with serodiagnosis [41]. This serological interference impedes the distinction between immunized and naturally infected animals. S19 also has a residual virulence that can lead to abortion if administered to pregnant cows, and to orchitis in males. For this reason, vaccination with S19 is currently restricted to female calves between 3 and 8 months of age. In addition, S19 is fully pathogenic to humans, so veterinary personnel should pay particular attention at the time of vaccination to avoid accidental contamination.

Vaccine strain RB51 is a live attenuated B. abortus developed in 1982 [42]. Unlike S19, RB51 has a rough phenotype: it does not possess the O-antigen LPS chain and therefore does not present serological interference problems. Like S19, RB51 provides good protection against infection and abortion [43]. For this reason, it is used as the reference vaccine in many countries. However, this vaccine does have some drawbacks: RB51 is still pathogenic for humans, and it is resistant to rifampicin, which is used in the treatment of human brucellosis.

Vaccine strain Rev1 is a live attenuated B. melitensis developed in the 1960s. To date, Rev1 is the most effective strain against goat and sheep brucellosis [44]. Just like S19, Rev1 has a smooth phenotype, and it therefore causes serological interference problems and is still virulent for humans [45]. In pregnant animals, Rev1 can lead to abortion even at reduced vaccine doses. Efforts are currently being made to develop new and better livestock vaccines, and to generate human vaccines. Live attenuated vaccines for farm animals have been in use for several decades. They are mostly effective in preventing abortion and disease transmission.
However, they still have disadvantages, as described above, and immunization protocols must be chosen carefully (single or multiple dose vaccination, reduced dose, route of administration, age of the animal to be vaccinated, etc.). Besides, not all countries have the same vaccination regulations. In France, for example, vaccination of cattle, sheep and goats is prohibited because vaccination is not compatible with serological screening. However, departments that are still infected may obtain derogations for the vaccination of sheep and goats.

Consequences

Nowadays, this disease seems to be re-emerging in the world, with the identification of new strains (in marine mammals and amphibians) and the appearance of wild foci (bison, deer, hare, caribou, wild boar, chamois, ibex, etc.). Indeed, brucellosis in wild animals constitutes a Brucella reservoir with the possibility of accidental transmission to farm animals and therefore also to humans [46] [47]. This is the case in the Greater Yellowstone area in the USA, where bison and elk remain a major reservoir for the disease [48], or in France with ibex and wild boar [49]. Brucellosis is still a highly infectious disease with a worldwide distribution. It is a threat to public health and also has important socio-economic repercussions (along with emotional suffering for farmers). The deficit in animal production due to reduced milk production, abortion and the slaughtering of infected animals [50] leads to important economic losses in developing countries, estimated at hundreds of millions of dollars [51]. Brucellosis control measures have proven effective, since many developed countries are now considered free of brucellosis. Nevertheless, efforts must be focused on educating the population, in order to instil protective habits, and on the scientific community, in order to better understand the virulence mechanisms of the bacterium with the aim of developing more effective vaccines.

Brucella intracellular life

Like Legionella, Listeria and Salmonella, Brucella are classified as facultative intracellular pathogens, in contrast to obligate intracellular pathogens such as Chlamydia, Rickettsia or Coxiella. However, the term facultative extracellular pathogen might be more appropriate, as to date no environmental reservoirs have been identified and the intracellular environment is well suited to Brucella replication [52]. Indeed, Brucella is extremely well adapted to an intracellular lifestyle: it is able to manipulate intracellular traffic and the immune response, and to replicate within host cells without inducing cell death. Furthermore, hiding within eukaryotic host cells limits exposure to immune responses and competition with other microbes, and protects Brucella against the effect of antibiotics, making treatment more complicated. This pathogen is able to enter, survive and replicate in a wide range of mammalian cell types, including professional phagocytes such as macrophages and dendritic cells and non-professional phagocytes such as epithelial cells, fibroblasts and trophoblasts [53] [54].

Entry into the cells

Successful bacterial invasion depends on two consecutive steps: binding and internalization. The mechanisms of attachment and entry into host cells by Brucella are not fully understood and remain controversial. In non-professional phagocytic cells, Brucella is taken up within minutes after inoculation, with one or two bacteria per cell [55]. Lipid rafts are involved in this process.
Brucella's entry into the host cell is abolished by inhibiting clathrin, suggesting that clathrin plays a fundamental role in entry [56]. Brucella uptake by HeLa cells leads to a slight reorganization of the membrane, with enrichment of the F-actin cytoskeleton at the entry site. It had previously been shown that infection performed in the presence of cytochalasin D or colchicine, two specific inhibitors of microfilaments and microtubules respectively, also hampers internalization [57]. Rho small GTPases (Rho, Rac and Cdc42), known to regulate the organization of the actin cytoskeleton, are required for actin polymerization and Brucella penetration into cells. Cdc42 is directly recruited and activated by virulent Brucella [57]. The regulator BvrR and the sensor protein BvrS, encoded by the two-component regulatory system BvrR/BvrS, are involved in the invasion by Brucella of both professional and non-professional phagocytic cells. Brucella BvrR/BvrS mutants fail to recruit small GTPases of the Rho subfamily [58] [59].

In professional phagocytic cells, two cases can be distinguished depending on the opsonic state of the bacteria. In the case of opsonized Brucella, internalization occurs via complement and Fc receptors. After recognition by the receptor, Brucella is internalized by phagocytosis, which proceeds through a zipper-like mechanism. In this case, Brucella is mostly phagocytized by activated macrophages, which can efficiently destroy the bacteria [52]. In the case of non-opsonized Brucella, internalization into host macrophages is mediated by components of lipid rafts, such as cholesterol and the ganglioside GM1, on the host cell plasma membrane [60] [61] [62]. In this case, internalization is dependent on Toll-like receptor 4 (TLR4) and PI3 kinases. Indeed, when TLR4 is disrupted, internalization of Brucella abortus is suppressed [63] [64,65]. The LPS also plays an important role in bacterial adhesion and internalization. In smooth strains, the LPS O side chain interacts directly with class A scavenger receptors (SR-A) present in lipid rafts of the host cell plasma membrane [66]. Macrophages from SR-A-deficient mice show impaired Brucella internalization and replication, showing that internalization is SR-A dependent. Nevertheless, the signal transduction mediated by SR-A remains unknown. In contrast, rough strains, whose LPS lacks the O side chain, do not enter through lipid rafts and are quickly degraded by lysosomes [62]. Another receptor that has been shown to be involved in the internalization of Brucella abortus in macrophages is the cellular prion protein (PrPc), present on the surface of macrophages or M cells, which is engaged through Hsp60, a chaperone of the GroEL family [67] [68].
However, these studies remain controversial [69]. To enable adhesion to cells, Brucella has developed surface molecules that target different cellular receptors. In HeLa cells as well as in macrophages, Brucella adhesion is mediated by SP41, a surface protein of the brucellae, which interacts with eukaryotic receptors containing sialic acid [70] [71]. More recently, additional adhesins have been described, including BigA [72] and the trimeric autotransporter BtaE [73].

Intracellular traffic

Once inside phagocytic or non-phagocytic cells, Brucella is found in a membrane-bound compartment, the so-called Brucella-containing vacuole (BCV). These BCVs traffic along the endocytic and secretory pathways and mature over time (Figure 2). The initial stage of trafficking takes place between 0 and 8 hours post-infection, during which the BCV is called the endosomal BCV (eBCV). During this stage, the eBCV fuses with early endosomes and acquires some of their markers, such as the small GTPase Rab5 and Early Endosome Antigen 1 (EEA1). Thereupon, the eBCV partially fuses with late endosomes and lysosomes, where it acquires late endosomal markers such as Lysosomal-Associated Membrane Protein 1 (LAMP-1) as well as the small GTPase Rab7 [74]. This transient fusion with the lysosome leads to an acidification of the BCV, which reaches a pH of 4.5 [75]. eBCV acidification is essential for correct trafficking and replication in macrophages and HeLa cells during the early stage of infection [75]. This acidification triggers the expression of the VirB type IV secretion system (T4SS) [76]. The T4SS allows the translocation of several bacterial effector proteins into the host cell to modulate various cellular processes, including BCV trafficking and host immune responses, which we will discuss in the corresponding T4SS section. Several studies have shown that the replication of Brucella in both macrophages and epithelial cells, as well as in in vivo models of infection, requires the expression of the T4SS [52]. Additional virulence factors have also been implicated, such as the cyclic β-1,2-glucan (CβG) synthesized by Brucella, which is important for preventing complete fusion with lysosomes and thus degradation [77]. It is important to note that at this early trafficking step, more than 90% of the bacteria are killed inside the BCV in macrophages, meaning that only a few of the internalized bacteria are able to reach their replicative niche.
The endoplasmic reticulum: the replicative niche

Between 8 and 12 hours post-infection, eBCVs are redirected towards the endoplasmic reticulum (ER) and interact with the ER exit sites (ERES). eBCVs lose the endosomal markers and acquire specific markers of the ER such as calnexin, calreticulin and Sec61β [78] [54], giving rise to replicative BCVs (rBCVs). This interaction is regulated by the small GTPase Sar1, which is involved in the formation of the COPII complex; this complex mediates shuttling from the membranes of the ER towards the Golgi apparatus. Inhibition of Sar1 activity blocks replication, indicating that this interaction is essential for the establishment of the Brucella replicative niche [79]. There is also other evidence that rBCVs interact with the vesicular traffic between the ER and the Golgi apparatus: Rab2 and GAPDH, two proteins involved in vesicular transport between the ER and the Golgi apparatus, are recruited to the BCV [80]. Brucella effectors that are secreted by the VirB T4SS and involved in this process have been identified [81] [78] [82]. Studies have identified several effector proteins targeting host secretory functions, among them BspA, BspB and BspF, which contribute to bacterial replication by impairing host secretory trafficking [83], and RicA [84]. For example, BspB interacts with the Golgi-associated conserved oligomeric Golgi (COG) complex, which is a key regulator of vesicular traffic at the Golgi apparatus [85]. This conversion into rBCVs provides an environment suited to Brucella replication. Interestingly, the initiation of bacterial division occurs at the eBCV stage, suggesting sensing of changing conditions within the vacuole [86]. Indeed, it has been clearly shown that the Brucella cell cycle is intimately connected to intracellular trafficking [87]. During the first stage of BCV maturation, the bacterial cell cycle is arrested. However, before the conversion of eBCVs to rBCVs, bacterial division is initiated, which then likely assists the action of the T4SS and enables the further maturation of BCVs into a compartment suited for proliferation. Once the intracellular replicative niche is established, the bacteria aim to remain in the cell for as long as possible by inhibiting apoptosis [88] [89]. Thereupon, between 12 and 48 hours post-infection, Brucella proliferates extensively in rBCVs, eventually occupying almost the entire cytoplasmic volume of the host cell.
The exit of Brucella: the autophagic BCV

It has been proposed that Brucella completes its intracellular cycle through the formation of autophagosome-like structures, the autophagic BCVs (aBCVs) [90]. Autophagy is a ubiquitous degradation mechanism, orchestrated by more than 30 specific proteins called ATG (for autophagy-related gene) in a multi-step process involving at least three main phases: initiation, nucleation and elongation. It is a dynamic membrane process that begins with the de novo formation of vacuoles called autophagosomes, which enclose cytoplasmic fractions in a double-membrane compartment. This process allows the capture of aggregated/misfolded proteins, damaged organelles or pathogens. Subsequently, these autophagosomes fuse with lysosomes, where their contents are degraded, providing nutrients or eliminating the pathogen. Although it has a role against microbial invasion, all or part of the autophagic process can be blocked, or hijacked, by microorganisms for their own benefit [91]. As a result of rBCV proliferation, starting at 48 hours after infection, BCVs lose ER markers and gain features consistent with autophagosomes. The formation of these vacuoles, named aBCVs for autophagic BCVs, is dependent on nucleation proteins such as Beclin1, ULK1 and Atg14L, but independent of the elongation proteins Atg5, Atg7, Atg4 and Atg16L [90]. Thus, Brucella is able to subvert part of the autophagy machinery by specifically modulating the autophagy initiation/nucleation complexes to allow its exit from the host cell and the infection of new adjacent cells.

Figure 2: Brucella intracellular lifecycle. Once internalized by host cells, Brucella is contained in a vacuole named the Brucella-containing vacuole (BCV). Brucella follows the endosomal pathway and matures over time. The BCV interacts with the host cell to become first the endosomal BCV (eBCV), then the replicative BCV (rBCV) and finally the autophagic BCV (aBCV). [92]

Brucella T4SS and its effectors

Pathogenic bacteria are often able to secrete macromolecules necessary for virulence into the extracellular medium, but also to export them into host cells. These macromolecules must be able to cross the inner and outer membranes of the bacteria, which are naturally hydrophobic and therefore impermeable to hydrophilic macromolecules. Dedicated secretion systems make it possible to transport these macromolecules through the bacterial membranes, and they play an essential role in bacterial pathogenesis. Nine secretion systems are currently known in Gram-negative bacteria: types I to IX.

Brucella T4SS

In 1999, thanks to the sequencing of the Brucella suis genome, O'Callaghan et al. identified an operon coding for a type IV secretion system homologous to the VirB system of Agrobacterium tumefaciens [93]. The virB operon is conserved in all the sequenced Brucella species.
T4SS functions

Type IV secretion systems are protein complexes that carry out the transfer of macromolecules (proteins, DNA) across the two bacterial membranes. They are found in several pathogenic bacterial species such as Agrobacterium, Legionella, Bartonella, Bordetella, Helicobacter and Brucella. T4SSs have several functions, namely: i) the transfer of effector molecules from the pathogen to the host cell; ii) the release of DNA, allowing the exchange of DNA with the extracellular medium; iii) bacterial conjugation, which ensures the transfer of DNA to a recipient cell, allowing the acquisition of a selective advantage [94] [95].

T4SS architecture

The best characterized T4SS to date is the VirB/D4 system from A. tumefaciens. VirB/D4 is composed of 12 proteins called VirB1-11 and VirD4. This system serves as a reference for the study of effector translocation by T4SSs in Gram-negative bacteria. The VirB/D4 system forms a dynamic protein complex consisting of three functional groups of proteins: i) the pilus, present on the bacterial surface, is formed by the VirB2 and VirB5 proteins and allows contact with the host cell; ii) the transmembrane or "core" channel (VirB7, VirB9 and VirB10) passes through the inner membrane, the periplasm and the outer membrane, and allows the transport of the effector; iii) three NTPases (VirB4, VirB11 and VirD4) provide the energy input necessary for the assembly of the complex as well as for the transport of substrates through the pore (Figure 3) [96,97]. T4SSs are broadly classified as type IVA, when their structural components resemble the VirB/D4 complex of Agrobacterium tumefaciens, or as type IVB, when they resemble the conjugal transfer system of the self-transmissible IncI plasmid, as is the case for Legionella and Coxiella [98]. Nevertheless, it is important to note that there are important differences between systems even within the same class. This is nicely illustrated by the type IVA T4SS of Helicobacter pylori. A pathogenicity island called cagPAI encodes the T4SS known as the cagT4SS, responsible for the translocation of the CagA oncoprotein [99,100]. This island encodes 28 proteins, 12 of which have similarities to the VirB proteins of A. tumefaciens and are essential for CagA translocation. This reveals that the T4SS of H. pylori is more complex than its counterpart in A. tumefaciens. Studies are currently being undertaken to characterize Brucella's T4SS, which has some intriguing features, such as lacking the gene encoding VirD4, an important ATPase in other systems, while encoding an additional protein (VirB12) of unknown function.

Brucella effectors

In Brucella, the VirB T4SS delivers into host cells protein effectors that modify cellular functions in order for the bacterium to survive and proliferate [83].
Mutants lacking essential virB genes cannot survive or replicate in host cells and are attenuated in a mouse model of infection, showing that the virB-encoded T4SS plays a major role in Brucella pathogenesis [101].

Brucella effector identification:
Since its discovery 20 years ago, only about twenty effectors have been identified, and not all of them have yet been characterized. This contrasts with Legionella, whose T4SS was identified in 1998 [102] and for which more than 300 effectors have been characterized [103]. This shows how difficult it has been to identify effectors in Brucella. Table 2 briefly recapitulates what is known about Brucella VirB-secreted effectors. The majority of effectors in Brucella have been identified using a bioinformatic approach based on the sequenced Brucella genome. This approach aims to identify genes encoding proteins with eukaryotic-like domains, protein-protein interaction domains, or domains present in other bacterial effectors known to be involved in virulence [104]; potential horizontally transmitted regions encoding transposases or recombinases adjacent to transfer tRNAs [105]; or the presence of features similar to known VirB T4SS effectors (e.g. potential secretion sequences) or of a distinct GC content [83].

Brucella effector confirmation:
Once identified, these putative effectors must be confirmed as translocated into the host cell in a T4SS-dependent manner during infection. Several techniques have been used successfully: in particular, reporter systems such as the CyaA adenylate cyclase or TEM1 β-lactamase enzymatic translocation assays [104] [106] and, more rarely, direct observation of the effector fused to a tag [107]. The position (N-terminal or C-terminal) of the reporter tag on the putative effector can disturb the translocation of the protein. For example, BspB, BspC and BspE showed translocation only with a C-terminal TEM1 fusion, whereas the translocation of BspA and BspF is independent of the position of the tag. In addition, several proteins have been shown to be translocated into the host cell but in a manner totally independent of the T4SS (BspG, BspH, BspI, BspK) [83]. This indicates the need for a better characterization of the molecular mechanisms involved in effector translocation via the T4SS in Brucella, and highlights the caveats associated with the use of reporter translocation systems.

Effector proteins secreted by the Brucella T4SS

The first reported effector proteins were VceA and VceC, identified using a TEM1 β-lactamase translocation assay. Additionally, BPE005 was shown to induce collagen deposition and matrix metalloproteinase 9 down-modulation via transforming growth factor β1 in hepatic stellate cells [113]. De Barsy et al. used a high-throughput yeast two-hybrid screen to identify interactions between Brucella proteins and human proteins predicted to be enriched in phagosomes, such as Rab GTPases. This screen identified RicA (Rab2 Interacting Conserved protein A) as a putative effector, confirmed by a TEM1 β-lactamase reporter assay. RicA preferentially interacts with the GDP-bound, inactive form of Rab2. Rab2 is involved in membrane trafficking from the Golgi apparatus to the ER [114].
It has been shown that Rab2 is recruited onto BCVs during infection and is involved in the control of the intracellular trafficking of BCVs [80]. A ΔricA mutant strain loses LAMP1 earlier than the wild-type strain. By interacting with GDP-bound Rab2, RicA delays the conversion from eBCV to rBCV [84]. Later on, five more effector proteins were described: BspA, BspB, BspC, BspE and BspF [83]. Some of them were shown to target the host cell secretory pathway, including BspA, BspB and BspF. These three effectors were predicted by a bioinformatic approach and confirmed by TEM-1 and CyaA assays. When overexpressed in HeLa or HEK293T cells, these effectors inhibit protein secretion. During infection, Brucella also interferes with host protein secretion, and this inhibition takes place 8 hours after infection, just before the BCV reaches the ER, its replicative niche. ΔbspB and ΔbspF mutant strains failed to inhibit protein secretion during infection, whereas a ΔbspA mutant strain inhibited protein secretion similarly to the wild-type strain, suggesting that BspA is not involved in the inhibition of secretion in the context of infection. It seems that the inhibition of host secretion by these effectors promotes the formation of the ER-derived Brucella replicative niche. Indeed, the triple ΔbspABF mutant shows a replication defect compared to the wild-type strain [83]. The molecular mechanism by which BspB inhibits protein secretion, and its involvement in bacterial replication, have recently been studied. BspB interacts with a complex important for the coordination of vesicular trafficking in the host cell secretory pathway: the conserved oligomeric Golgi (COG) complex, which regulates protein secretion and vesicular traffic at the Golgi apparatus. The interaction of BspB with the COG complex leads to the diversion of COG-dependent vesicles towards the BCV to promote rBCV biogenesis. BspB and the COG complex are required for optimal bacterial replication. Surprisingly, the replication defect of ΔbspB mutant strains is restored by the depletion of Rab2, suggesting that BspB may affect retrograde secretory traffic to redirect COG-dependent Golgi vesicular traffic to the BCV [115]. Although the individual roles of these effectors are becoming well established, the effect of their combined actions is still unknown. Recently, Smith et al. established a link between RicA and BspB during infection, showing that these two effectors display an epistatic interaction in the replication of Brucella. Although there are some discrepancies with previous reports showing a role of Rab2 and RicA in rBCV biogenesis [80] [84], both effectors seem to be involved in the modulation of Rab2-dependent vesicular transport associated with the Golgi apparatus. The currently proposed model is that RicA, by interacting with Rab2, inhibits the conversion of eBCVs to rBCVs, impacting rBCV biogenesis and replication, while BspB counters this effect to promote biogenesis and bacterial replication [116].
It will be important to establish the kinetics of secretion of these effectors. BtpA and BtpB were identified together as T4SS effector proteins [117]. BtpA had previously been studied by several authors, who showed that it contains a TIR domain and is able to interfere with dendritic cell (DC) maturation [118] [119]. It was also reported that its target is TIRAP [120] [121] and that the protein can inhibit killing by CD8+ T cells [122]. Through their TIR domains, BtpA and BtpB are able to modulate host inflammatory responses during infection, specifically inhibiting TLR pathways [117]; we will discuss BtpA and BtpB in more detail in the chapter on TIR proteins. Finally, the most recently identified Brucella T4SS effector is SepA [123], a protein that inhibits the fusion of BCVs with the lysosome.

The second part of this thesis manuscript corresponds to my principal PhD project. This work was funded by a young researcher ANR project, "NucPath", obtained by Dr Suzana Salcedo. This project aims at characterizing the cellular role of two new Brucella effectors: NyxA and NyxB. To do this, we are using a multidisciplinary approach combining cell biology, cell imaging, structural biology and biochemistry. We had multiple objectives: to identify the target of these two effectors and to understand their role and mode of action during infection; and to carry out a structural and functional study in order to define the functional domains involved in the modulation of host cell functions. For this reason, I will first present the TIR proteins, related to the first research project, and then introduce the nucleus and SUMOylation, which correspond to the compartment and the post-translational modification targeted by the two new effectors identified during my main thesis project.

5. TIR proteins and the innate immune system

Immune system

The immune system is the body's biological defense system. It allows the identification of non-self (proteins, viruses, bacteria, parasitic fungi and other pathogens) and self-defense by controlling the invasion and proliferation of infectious agents. It is composed of a multitude of proteins, cells, tissues and organs forming a dynamic network capable of specifically recognizing a wide variety of foreign microorganisms and of adapting a response to eliminate them. The immune system is able to recognize molecular patterns that characterize groups of pathogens with known characteristics, and to establish a rapid immune response directed against these pathogens. It is divided into two parts that differ in the speed and specificity of the immune response: the innate immune system (non-specific) and the adaptive immune system (specific). The innate immune system is the body's first line of defense against infectious agents, while the adaptive immune system acts as a second line of defense. Nevertheless, there is a strong interconnection between these two systems: the innate immune response stimulates the adaptive response and influences it [124] [125]. Despite these efficient barriers, some pathogens have developed strategies to circumvent immune defense mechanisms. In most cases, innate immunity is very effective and immediately brought into play, preventing most infections from spreading and thus allowing the infectious agent to be eliminated within a few hours of its encounter with the organism [126]. Innate immunity includes several non-specific protective mechanisms such as physical or mechanical barriers.
A good example is epithelial cells (urogenital, bronchial or digestive epithelium), which form a barrier between the external environment, where microorganisms reside, and the internal environment, which is supposed to be pathogen-free. The pulmonary epithelium, for instance, ensures a protective function against external aggressions through the secretion of mucus, in which dust, particles and microbes are trapped. Multiciliated cells, on which several hundred cilia beat in a coordinated manner, allow this mucus to be transported to the digestive tract, where it is broken down. Once infectious agents have penetrated the body, the innate immune system is able to induce an inflammatory reaction using specific receptors. This mechanism involves different cell types, such as phagocytes (macrophages, neutrophils, monocytes and dendritic cells), natural killer (NK) cells and innate lymphoid cells. The innate immune system corresponds to a defense mechanism common to all multicellular animals and is thought to have evolved long before the adaptive immune system [127].

Recognition of microbes and pathogens by the innate immune system

The immune system is able to make the distinction between "self" and "non-self" by recognizing repeated molecular structural patterns present on the surface of invading microorganisms, whether pathogenic or not.

PAMPs:
In mammals, the immune response relies essentially on effector cells such as macrophages, which are able to detect particular patterns decorating pathogens. These particular patterns were named PAMPs, for Pathogen-Associated Molecular Patterns, by Charles Janeway [124]. Nevertheless, it is important to emphasize that PAMP is not entirely the right term to describe these structures, since they are also present on non-pathogenic microorganisms, especially the commensal flora. These molecular signatures are conserved during evolution, are exclusively present on microorganisms and are totally absent from host cells. These structures are also essential for the survival and proliferation of these microorganisms. The recognition of non-self leads to the production of cytokines such as interleukin (IL)-1, TNFα (Tumor Necrosis Factor α) and IL-6, triggering the inflammatory response and stimulating the adaptive immune response via lymphocytes. Each group of microorganisms carries one or more PAMPs. These unique molecular motifs present on the surface of microbes include: lipopolysaccharide (LPS), present on the outer membrane of Gram-negative bacteria; lipoteichoic acid and peptidoglycan, present in the cell wall of Gram-positive bacteria; flagellin, in bacterial flagella; dsRNA and ssRNA, in viruses; and mannans, in the fungal cell wall. These PAMPs are detected by our body to alert the immune system. How can the immune system recognize these PAMPs? PAMPs are recognized by specific receptors present on the surface of or inside cells of the innate immune system, called PRRs (Pattern Recognition Receptors) [128].

PRRs:
PRRs are receptors expressed in innate immune cells such as macrophages and dendritic cells. PRRs are also present in non-professional cells, such as epithelial cells, endothelial cells and fibroblasts, which contribute to the innate immune response. They are present at the plasma membrane or within cells, in the cytosol but also in different cell compartments such as endosomes. This diversity of cellular localization of the PRRs corresponds to the different cellular compartments in which pathogens can be found.
Thus, PRRs are able to recognize pathogens both extracellularly and intracellularly. Each type of PRR can recognize a multitude of pathogenic species that share a particular molecular motif. There are four main types of PRRs [129] (Figure 4):

- TLRs: Toll-Like Receptors. TLRs were the first family of PRRs to be discovered and are also the best characterized. TLRs are involved in defense against viral, bacterial and fungal infections. We will describe them in more detail later.

- CLRs: C-type Lectin Receptors. CLRs are a large family of transmembrane receptors present mainly at the plasma membrane of macrophages and dendritic cells. They are composed of one or more CTLDs (C-type lectin-like domains). They allow the detection of carbohydrate (polysaccharide) motifs contained mainly in fungal cell walls and lead to an inflammatory response.

- RLRs: RIG-I-like Receptors. RLRs essentially recognize viral components, mainly viral nucleic acids, and activate the NF-κB, MAP kinase and interferon signaling pathways.

- NLRs: NOD-Like Receptors. NLRs are a large family of cytosolic receptors. They are present in cells, where they act as sensors of bacterial invasion. They are subdivided into three subfamilies: NOD, NALP and NAIP. All three subfamilies recruit caspases that cleave a number of cytokines, in particular inflammatory cytokines such as interleukin 1, which is found in an inactive form in the cytoplasm and is thus activated.

Scavenger receptors can also be added to the PRR family because of their involvement in the recognition of PAMPs [130]. Activation of PRRs induces an intracellular signaling cascade leading to the activation and/or modulation of the immune response. This signaling cascade results in the production of antimicrobial peptides, the secretion of pro-inflammatory cytokines and the recruitment of neutrophils and macrophages, which together allow the elimination of the invading pathogen. Hereafter we will focus more particularly on the TLRs [134-136]. From an evolutionary point of view, it seems that this TLR recognition system has been highly conserved over time [134]. In mammals, TLRs are mainly present on the surface of immune cells such as dendritic cells, macrophages, polynuclear cells, and B and T lymphocytes. They can also be found in a large number of cells in contact with the external environment, such as skin cells, epithelial cells or intestinal cells.

TLR structure:
TLRs are type I membrane receptors in which three domains can be distinguished (Figure 5):
- an extracellular N-terminal domain, which allows ligand recognition and corresponds to leucine-rich repeats (LRRs);
- a transmembrane domain;
- an intracellular C-terminal domain that allows signal transduction, thanks to a specialized domain called TIR, for Toll/interleukin-1 receptor homology.

Extracellular domain:
The extracellular domain of TLRs is composed of approximately 800 amino acids organized as Leucine-Rich Repeats (LRRs). This extracellular domain is involved in pathogen recognition and plays a key role in the initiation of TLR signaling [135]. The distribution and number of leucine-rich repeat units are specific to each TLR, which gives each receptor its specificity in pathogen recognition [136,137]. The domain is composed of beta-sheets and alpha-helices forming a horseshoe-shaped structure, with the beta-sheets on the concave side (Figure 6).
Intracellular domain:
The intracellular TIR domain of TLRs, consisting of 150 to 200 amino acids, is similar to that of the IL-1 receptor and is also found in cytosolic adaptor proteins. These proteins allow, once the TLRs are activated, the transduction of the signal. Proteins containing TIR domains share 20-30% sequence identity. From a structural point of view, TIR domains adopt approximately the same three-dimensional fold, with five parallel beta-strands surrounded on both sides by five alpha-helices. Protein structure is more stable, but also more conserved, than sequence during evolution. When comparing the structural alignment of the TIR domain of TLR2 (Figure 7C) with that of the MyD88 TIR domain (Figure 7B) and calculating the Root-Mean-Square Deviation (RMSD) over the alpha-carbons, an RMSD of 2.431 Å is obtained, which allows us to consider that the two structures are relatively close to each other (Figure 7D). This is why TIR domains can be considered structurally conserved during evolution.
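For reference (this is the standard definition of the metric, not specific to this thesis), the RMSD quoted above is computed, after optimal superposition of the two domains, over the N aligned alpha-carbon pairs:

$$\mathrm{RMSD} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left\lVert \mathbf{x}_i - \mathbf{y}_i \right\rVert^2}$$

where $\mathbf{x}_i$ and $\mathbf{y}_i$ are the coordinates of the i-th aligned alpha-carbon in each structure. Values of a few ångströms or less over an entire domain, as is the case here (2.431 Å), are generally taken to indicate a shared fold.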
TLRs and their ligands:
To date, 10 different TLRs are known in humans (TLR1 to TLR10) and 12 TLRs have been identified in mice (TLR1 to TLR13); TLR10 is not present in mice [144]. Two groups of TLRs can be distinguished according to their location in the cell: the TLRs present at the cell surface (TLR1, TLR2, TLR4, TLR5, TLR6 and TLR10), and the TLRs present inside the cell, in endosomes (TLR3, TLR7, TLR8 and TLR9) [145] (Figure 8). Each TLR is specific for a given microbial component (Table 2). For example, TLR2 recognizes bacterial lipoglycans and peptidoglycans; TLR3, TLR7 and TLR8 recognize viral nucleic acids; TLR4 recognizes LPS; TLR5 recognizes flagellin; and TLR9 recognizes unmethylated oligonucleotide motifs. Once a PAMP has been detected, TLRs dimerize into either homodimers or heterodimers [146]. TLRs then activate an intracellular signaling cascade, initiated by the recruitment of adaptor proteins through TIR-TIR interactions between the TIR domain of the TLRs and the TIR domains of the adaptors. This interaction allows the recruitment of the IL-1R-associated kinases (IRAK) 1, 2, 4 and M, TAB2 and TNF receptor-associated factor 6 (TRAF6). This signaling cascade leads to the translocation into the cell nucleus of transcription factors such as NF-κB [147] and members of the IRF family (IRF3 and IRF7). These transcription factors activate gene expression and thus the production of inflammatory cytokines such as TNFα, IL-1β and IL-6, but also the production of chemokines (CXCL8 and CXCL10) and type I interferon (IFN) [150]. TLRs induce a response specific to their ligand. This specificity is mediated by the adaptor proteins involved in signal transduction.

MyD88-dependent signaling pathway
In the MyD88-dependent pathway, engagement of the TLR leads to the recruitment of MyD88 and of the IRAK kinases, which in turn engage the E3 ubiquitin ligase TRAF6. TRAF6 is thus able to ubiquitinate itself. Ubiquitinated TRAF6 is recognized by TAB2 (TAK1-binding protein 2) and TAB3, leading to the activation of the TAK1 (transforming growth factor [TGF]-β-activated kinase 1) complex. The activated TAK1 complex triggers a phosphorylation cascade involving the different members of the MAPK (mitogen-activated protein kinase) pathway, but also induces the degradation of IκB (inhibitor of NF-κB) via activation of the IKK (IκB kinase) complex, leading to the release of NF-κB and its nuclear translocation [154]. Activation of the MAPK pathways and of the NF-κB transcription factor allows the expression of inflammatory cytokines.

TRIF-dependent signaling pathway
The second TLR pathway induced following pathogen recognition is independent of MyD88 and is associated with TLR3 and TLR4 [155]. This alternative pathway is dependent on TRIF, which is recruited directly to the TIR domain of TLR3, or indirectly, via TRAM, for TLR4. Subsequently, TRIF interacts with TRAF6 and RIP1 (receptor-interacting protein 1) to activate the MAPK and NF-κB pathways in the same way as the MyD88-dependent signaling pathway. However, TRIF also collaborates with TRAF3, which activates IKKε (inhibitor of NF-κB kinase ε) and TBK1 (TRAF family member-associated NF-κB activator-binding kinase 1), both of which are responsible for the phosphorylation and nuclear translocation of IRF3 and IRF7. This pathway thus allows the expression of type I IFNs, mainly IFN-β [156].

Microbial targeting of the TLR signaling pathway

Over the course of evolution, bacteria have developed different strategies to subvert host immune responses, including hijacking signaling pathways, modifying their PAMPs, and activating other receptors that dampen the effect of TLR activation (Figure 9) [157]. Some pathogenic bacteria have evolved by modifying their PAMPs so that they are less readily recognized. Brucella is also known to modify its PAMPs to escape the immune system or limit the host's immune response. Firstly, Brucella does not express the pili, fimbriae or capsules that are recognized by the immune system. Furthermore, Brucella melitensis seems to produce a non-functional flagellum, which limits its recognition by the immune system.

Therefore, in the introduction I have decided to briefly present the different structures of the nucleus and their functions, then the process of SUMOylation, and to finish with a more detailed description of the functions of SENP3.

The nucleus:

The nucleus can be considered the heart of the cell. It was the first cell compartment to be discovered, in 1833 by Robert Brown. It contains the vast majority of the cell's genetic material, present as compacted DNA in the 23 pairs of chromosomes, the rest of the genetic material being found in the mitochondria. The chromosomes occupy defined regions of the nucleus in interphase [191]. The nucleus is highly organized and dynamic. It protects the genetic material and controls the physiology of the cell, particularly during cell division. It also performs major functions: DNA repair, transcription, RNA splicing, DNA replication, ribosome assembly, chromatin modification, and gene regulation and expression. Only the cells of higher organisms (eukaryotes) have a nucleus, with the exception of red blood cells. The nucleus is the largest organelle of the cell, separated from the cytosol by a nuclear envelope consisting of a double lipid membrane. The nuclear envelope contains nuclear pores allowing the transport of biomolecules in both directions between the cytoplasm and the nucleus. The nucleus represents about 10% of the total volume of the cell, with a diameter of about 10 µm; however, this value varies greatly depending on the cell type and the stage of the cell cycle. In HeLa cells, the nucleus occupies 8-21% of the total cell volume. The Protein Atlas website (https://www.proteinatlas.org/humanproteome/cell) provides information on the expression and spatio-temporal distribution of proteins in human cells. It has been established experimentally that 33% of all human proteins (i.e. 6523 proteins) are located in the nucleus. The presence of such a large number of proteins in a relatively small space underlines the need for structural organization in the nucleus.
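As a quick consistency check of these figures (assuming, which the text does not state, that the Protein Atlas covers essentially all human protein-coding genes):

$$\frac{6523}{0.33} \approx 1.98 \times 10^{4}$$

i.e. a total of roughly 19,800 mapped proteins, in line with the approximately 20,000 protein-coding genes annotated in the human genome.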
Architecture of the nucleus : Much is known regarding nuclear molecular mechanisms and processes; however, it is only recently, through molecular biology, biochemistry and cell biology techniques, and especially through technological advances in imaging with increasingly powerful microscopes, that the architecture of the nucleus and its three-dimensional organization have begun to be elucidated. In the 1980s, the use of fluorescent probes in microscopy made it possible to revive the study of nuclear architecture by observing nuclear proteins in chemically fixed cells [192]. This highlighted the presence within the nucleus of different structures with distinct and highly dynamic morphologies. To date, several nuclear sub-structures, also called nuclear bodies, have been identified.

The different nuclear compartments: nuclear bodies : In a similar way to the cytosol, where the different organelles are compartmentalized according to different metabolic processes (i.e. mitochondria, Golgi apparatus, endoplasmic reticulum...), the cell nucleus presents different sub-structures [192] [193]. However, unlike in the cytosol, the different nuclear sub-structures are not separated from their surroundings by a membrane. This compartmentalization facilitates metabolic processes. Their integrity is ensured by protein-protein and protein-RNA interactions [198], yet the mechanisms of formation of these structures are still poorly understood [194]. Numerous nuclear bodies have been described and characterized: nucleoli, nuclear speckles, clastosomes, PML bodies, Cajal bodies, Gems, paraspeckles, histone locus bodies and stress bodies [195] (Figure 10). Hereafter we will discuss only the nucleoli and the PML nuclear bodies.

The nucleolus: an assembly platform : The nucleoli form the largest and densest structures of the nucleus, first observed by microscopy at the end of the 18th century (Fontana, 1781). Fontana characterized the nucleolus as an "oviform body with a spot in the middle". Most mammalian cells contain 1 to 5 nucleoli with a diameter of 0.5 to 5 µm [198]. They are mainly known for their role in the production of ribosomal RNA (rRNA) and in the assembly of ribosomes. In addition, the nucleoli reflect the cell's metabolic activity: rRNA synthesis is linked to cell growth and activity [196]. Dividing cells generally have large nucleoli to ensure a high level of ribosome biogenesis for sustained protein synthesis. There is therefore a positive correlation between rRNA synthesis and nucleolar size. Conversely, cell cycle arrest leads to a reduction in nucleolar size [197]. The nucleolus adopts a structure divided into three zones: a fibrillar centre, a dense fibrillar component and a granular component (Figure 11) [198].

The fibrillar centre (FC) : A rounded, electron-poor area of the nucleolus, located centrally in quiescent cells, and more peripheral and less regular in the most active cells. This zone contains the promoter regions of the rDNA genes and is the site of initiation of rRNA transcription.

The dense fibrillar component (DFC) : A highly electron-dense region adjacent to the fibrillar centre. Limited to a small proportion of the nucleolus in quiescent cells, it becomes more prominent in proliferating cells and forms filaments called nucleolonemas. The FC and DFC participate in the transcription of rRNA. The rRNA thus produced is called precursor rRNA, or pre-rRNA.
The pre-rRNA then undergoes several stages of maturation within the DFC.

The granular component (GC) : The GC is an area larger than the FC and DFC, located at the periphery of the nucleolus. Its granular aspect is due to the large number of pre-ribosomal particles being assembled or stored there. Although it is clear that nucleoli play a role in cell growth via ribosome biogenesis, the notion of nucleolar multifunctionality is emerging. Indeed, proteomic analysis reveals that only about 30% of nucleolar proteins are involved in ribosome biogenesis, suggesting that nucleoli are likely to be involved in various other cellular processes [199]. Nucleoli are believed to be involved in the maturation and export of certain mRNAs and tRNAs, DNA replication and repair, cell cycle progression, proliferation and apoptosis [200] [201]. Indeed, nucleoli are sensitive to a variety of stresses and trigger a rapid response by regulating the induction of p53, a key transcription factor regulating multiple cellular processes including proliferation and death [196].

The production of ribosomes by nucleoli :

Ribosomes : Ribosomes are ribonucleoprotein complexes made up of proteins and ribosomal RNA (rRNA). They are found in both eukaryotic and prokaryotic cells. These structures are relatively well conserved during evolution, mediating protein synthesis by decoding the information contained in messenger RNA (mRNA). Ribosomes are the main effectors of translation and are therefore intimately linked to the capacity of cells to grow and divide [202]. Ribosomes are formed of two subunits: a small subunit that allows binding to the messenger RNA, and a large subunit whose function is to carry out the polymerization of the amino acid residues to form the corresponding protein. These subunits are designated according to their sedimentation coefficient, expressed in Svedberg (S) units. In eukaryotes, the large subunit has a sedimentation coefficient of 60S and a molar mass of 2.9 MDa. It consists of three ribosomal RNAs (the 28S, 5.8S and 5S rRNAs) together with 49 ribosomal proteins. The small subunit has a sedimentation coefficient of 40S and a molecular weight of 1.4 MDa. It is composed of a single ribosomal RNA, the 18S rRNA, associated with 33 ribosomal proteins. The rRNA is the major constituent of ribosomes, representing 60% of their total mass.

Ribosome biogenesis : Ribosome biogenesis in eukaryotes is a complex process of transcription, maturation, folding and post-translational modification of rRNAs. This process takes place in several stages at different locations in the cell: it starts in the nucleoli, then during their maturation the pre-ribosomal particles reach the nucleoplasm and cross the nuclear pore to the cytosol, where they are finally assembled [203]. It involves more than 200 partners (proteins and small nucleolar RNAs (snoRNAs)) which coordinate the different stages of maturation, pre-rRNA cleavage, post-translational modification of ribosomal proteins and the export of subunits into the cytosol [206][207][208][209]. Although a large number of proteins have been identified as ribosome-associated by co-immunoprecipitation or mass spectrometry, the molecular functions of many of the proteins involved in ribosome biogenesis are still unknown. This process is extremely energy-intensive for the cell: it is estimated that up to 80% of the cell's energy reserves are devoted to ribosome biogenesis [204]. Ribosome biogenesis is globally well conserved between eukaryotic species.
The model is best established in yeast (Saccharomyces cerevisiae) and is often extrapolated to mammals and humans. Nevertheless, many differences can be observed in the structures, the factors involved and the maturation stages of the pre-rRNAs [211][212][213]. In mammals, biogenesis involves the three RNA polymerases (I, II and III). RNA polymerase I allows the transcription of the 28S, 18S and 5.8S rRNAs; RNA polymerase II synthesizes the mRNAs of the ribosomal proteins; RNA polymerase III synthesizes the 5S rRNA. Biogenesis starts in the fibrillar centre of the nucleoli, where RNA polymerase I transcribes the 47S rRNA precursor. The latter contains the sequences of the 5.8S, 18S and 28S rRNAs, flanked by external transcribed spacers at the 5' and 3' ends (5'-ETS and 3'-ETS) and separated from each other by internal transcribed spacers (ITS1 and ITS2). This 47S transcript undergoes a succession of endonucleolytic cleavages and exonucleolytic digestions; these processing and cleavage steps are performed by several ribosomal proteins that participate in the ribosome structure, by different assembly factors, as well as by non-coding RNA species [205]. The assembly factors assist the maturation of the subunits but are not present in the final architecture. Thus, the 47S rRNA precursor first loses the 3'-ETS and then the 5'-ETS, to form the 45S and then the 41S rRNA precursors. At this stage, pre-rRNA processing splits into two pathways. The first leads to the 18S pre-rRNA, which is incorporated, together with ribosomal proteins, into the pre-ribosomal 40S subunit [206]. The second pathway undergoes additional cleavage steps to yield the 5.8S and 28S rRNAs. At the same time, the 5S rRNA, which follows a different assembly pathway, is transcribed by RNA polymerase III at the periphery of the nucleoli. Subsequently, it is recognized by two ribosomal proteins, uL18 (Rpl5) and uL5 (Rpl11). It then joins the nucleoli, where it associates with the 28S and 5.8S rRNAs and with ribosomal proteins to form the pre-ribosomal 60S subunit. These steps of cleavage and maturation of the spacer regions are performed in the nucleoli. The pre-ribosomal particles (40S and 60S) undergo several further steps of cleavage and maturation in the nucleoplasm and are exported to the cytoplasm to form the mature 40S and 60S subunits, which subsequently assemble into the 80S ribosomal particle, capable of mRNA translation (Figure 12) [207].

The PML nuclear bodies : PML nuclear bodies are structures composed of multi-protein complexes defined by the presence of the PML (promyelocytic leukaemia) protein itself. These bodies have a punctate appearance in the cell nucleus, vary in number from 5 to 20 per interphase nucleus, and have a diameter of about 0.5 µm. The main organizing protein of these nuclear bodies is the PML protein, whose modification by SUMO is necessary for its stability and for PML-NB formation [216,217]. Indeed, only the SUMO-conjugated form of PML is able to aggregate properly and induce the formation of nuclear bodies. PMLs are regulated by the deSUMOylases of the SENP family (SENP1, SENP3, SENP5 and SENP6) [218][219][220]. Nevertheless, the mechanisms of action of SENPs on PML are still poorly understood. In the presence of arsenic, the SUMOylation of PML is exacerbated, and an increase in the size of PML nuclear bodies is observed [208]. PML nuclear bodies recruit different proteins whose only common characteristic is their ability to be SUMOylated [209].
These include the proteins PML, SP100, BLM, Daxx, LEF1, p53, CBP... [210]. PML nuclear bodies are proposed to be reservoirs of regulators involved in the control of many cellular processes such as the stress response, apoptosis or the response to DNA damage. They constitute hubs of further post-translational modifications, including ubiquitination. For the cell, PML nuclear bodies are likely to be both a repository of nuclear factors and a specific site for the modification and assembly of transcription factors [211] [212]. It is also interesting to note that the depletion of PML in cells leads to an acceleration of cell proliferation [213]. PML-NBs have been studied in particular because of their alteration in pathological conditions such as promyelocytic leukemia, viral infections or various cellular stresses [214] [215] [216]. Shigella infection was also shown to impact PML nuclear bodies by increasing their number in infected cells [217], following a decrease in the levels of host SUMOylation. Recently, PML was identified as a target of the pore-forming toxin listeriolysin O (LLO) of Listeria monocytogenes. LLO triggers a deSUMOylation of PML, which leads to a disruption of the PML nuclear bodies and triggers an antibacterial response via the activation of immune response genes and cytokine secretion [218]. Further studies with other intracellular bacterial pathogens need to be undertaken to determine whether PML-NBs constitute a general antimicrobial control mechanism.

SUMOylation : The cell can mediate the dynamic regulation and biochemical functioning of its proteins through post-translational modifications (PTMs), which correspond to the attachment of a group to a target protein, modifying its properties and consequently its functions. These include glycosylation, acylation, phosphorylation, acetylation, methylation, ubiquitination... SUMOylation is also a post-translational modification of target proteins; it is an extremely dynamic and reversible process. SUMO was initially identified in 1995 in Saccharomyces cerevisiae with the smt3 gene, a homologue of SUMO1, following a genetic screen to identify suppressors of a mutation in MIF2, a protein associated with the centromere [219]. In 1996, three teams simultaneously discovered SUMO1 through yeast two-hybrid experiments. In these studies, SUMO1 was shown to interact with PML [220], with Rad51 and Rad52, involved in DNA repair [221], and with the Fas protein, which plays a role in apoptosis [222], already suggesting a high diversity of mechanisms in which this PTM is involved. In recent years many more SUMO substrates have been identified, underlining the importance of this modification within the eukaryotic cell [223] [224]. SUMOylation consists of the addition of a polypeptide called SUMO (Small Ubiquitin-like MOdifier) of about 10 kDa, which shares 20% sequence identity with ubiquitin. Nevertheless, the three-dimensional structures of the two proteins are relatively close (Figure 13). The SUMO polypeptide is attached through an isopeptide bond with the ε-amino group of the acceptor lysine residues of the target protein. SUMOylation is involved in a multitude of cellular functions such as transport between the nucleus and the cytosol, transcriptional regulation, apoptosis, protein stability, the immune response, DNA repair, the stress response, cell cycle progression, cell proliferation and differentiation [235,237,238]. For example, mice in which the expression of Ubc9, a key enzyme of the SUMOylation pathway, is turned off die at the embryonic stage [239,240].
This underlines the importance of SUMOylation in cell proliferation and differentiation. Initially, the scientific community thought that SUMOylation occurred only in nuclear or perinuclear compartments [225], but it has become clear that it also regulates cytoplasmic and plasma membrane proteins [226]. SUMO proteins are present in all eukaryotes. Some organisms have only one SUMO gene, such as yeast (SMT3), Caenorhabditis elegans (SMO-1) and Drosophila melanogaster (smt3) [242][243][244], whereas plants and vertebrates have several SUMO genes. In humans, 5 paralogues of SUMO have been identified: SUMO1, SUMO2, SUMO3, SUMO4 and, more recently, SUMO5 [227] [246,247]. SUMO proteins consist of about 100 amino acid residues. SUMO2 and SUMO3 are highly similar, sharing 97% identity; only three amino acids differ. As a result, it is very difficult to differentiate between them, and they are often referred to as SUMO2/3. SUMO2/3 shares 47% identity with SUMO1, 83% identity with SUMO4 [228] and 46% identity with SUMO5 (Figure 14). Proteomic analyses have shown that 2% of the mammalian proteome is SUMOylated, underlining the importance of this PTM [229]. SUMO1 and SUMO2/3 perform very distinct functions in the cell, as they are conjugated to different target proteins in vivo [250][251][252]. The role of SUMO4 is more obscure; it is currently unclear whether it is matured in vivo into the form needed for conjugation [245,253]. Results suggest that SUMO4 is involved in the pathogenesis of type I diabetes [228]. The youngest member of the family, SUMO5, seems to play a major role in the regulation of the PML nuclear bodies [230]. It should be noted that SUMO1 and SUMO2/3 have been studied in more detail than the SUMO4 and SUMO5 paralogues. To date, the mechanisms and consequences of the conjugation specificity of the SUMO paralogues for their substrates are not clearly established, and many questions remain unanswered. As with ubiquitination, proteins can be modified by the addition of a single SUMO molecule (monoSUMOylation) or by chains of SUMO molecules (polySUMOylation) [231]. Nevertheless, only SUMO2/3 and SUMO4 are capable of forming polySUMO chains, owing to the presence of a lysine 11 within a ψKxD/E consensus sequence in SUMO2/3 and SUMO4, to which SUMO is added; this sequence is absent in SUMO1 and SUMO5 (Figure 14) [255,256]. A lysine 18 can, however, be observed in SUMO5, which could in principle be SUMOylated, but to date no study has reported polySUMOylation of SUMO5.

The enzymatic mechanism of SUMOylation : The different paralogues of SUMO are conjugated by covalent binding to the target protein at a lysine generally present in a ψKxD/E consensus motif (in which ψ is a branched aliphatic amino acid, K corresponds to the acceptor lysine, x to any amino acid, and D and E to aspartic and glutamic acid, respectively) [257,258]. It should be noted, however, that the literature reports an increasing number of proteins SUMOylated on lysines that do not belong to a consensus SUMOylation site; in addition, many proteins carrying this motif are not SUMOylated.

Maturation of the precursor : The various forms of SUMO are expressed as inactive precursors that require cleavage of the C-terminal part of the peptide to expose a di-glycine motif essential for conjugation. It is at this carboxy-terminal glycine doublet (GG) that the bond with the lysine of the target protein is formed.
This cleavage is achieved by the hydrolase activity of specific enzymes of the SENP (SUMO/Sentrin-specific protease) family. Similar to ubiquitination, SUMOylation consists of a cascade of enzymatic steps involving three enzymes: a SUMO-activating enzyme (E1), a SUMO-conjugating enzyme (E2) and a ligase (E3) (Figure 15).

E1, the activating enzyme : Unlike in ubiquitination, the E1 enzyme is not monomeric but heterodimeric. It consists of two subunits: a small SAE1 subunit and a large SAE2 subunit (SUMO Activating Enzyme 1 and 2). Activation is ATP-dependent. Initially, E1 uses ATP to form a high-energy bond between the C-terminal end of SUMO and SAE2. Then, by releasing AMP, the adenylated SUMO intermediate uses this binding energy to form a covalent thioester bond between the carboxyl group at the C-terminal end of SUMO and the thiol group of the C173 residue of the SAE2 subunit [232].

E2, the conjugating enzyme : In contrast to ubiquitination, SUMOylation uses a single conjugating enzyme E2: UBC9 (SUMO ubiquitin-conjugating enzyme 9). UBC9 is highly conserved in eukaryotes, with 56% amino acid sequence identity between the human UBC9 sequence and its orthologue in S. cerevisiae [233]. By a transesterification reaction, "activated" SUMO is transferred from SAE2 to the cysteine C93 residue of the conjugating enzyme Ubc9, again forming a covalent thioester bond. Ubc9 serves as the SUMO donor: it then creates an isopeptide bond between the glycine doublet of SUMO and the lysine of the substrate.

E3, the SUMO ligase : It is important to note that in vitro assays show that the activating enzyme E1 and the conjugating enzyme E2 are sufficient for SUMOylation of the substrate [261][262][263]. Nevertheless, the conjugation process can be assisted by E3 ligases, which promote the interaction of the E2 enzyme with the substrate or act by positioning SUMO in a conformation that facilitates its transfer to the target lysine residue [234]. Thus, the conjugation of SUMO is generally increased in the presence of E3 ligases in vivo and in vitro [265,266]. The three main types of E3 SUMO ligases, the PIAS family (Protein Inhibitor of Activated STAT), the nucleoporin RanBP2 (Ran Binding Protein 2) and the Polycomb Pc2 complex, interact with Ubc9 and the substrate [236,267].

SENPs play a dual role in SUMOylation: firstly, they allow the maturation of newly translated SUMO into its active form thanks to their hydrolase activity, which is necessary for conjugation, and secondly, they allow the removal of SUMO from their substrates through an isopeptidase activity. DeSUMOylation appears to be exercised essentially by SENPs; indeed, cells lacking members of the SENP family present a large accumulation of SUMOylated proteins. Conversely, cells lacking DESI1, DESI2 or USPL1 do not present an alteration in the total SUMOylated protein profile. This suggests a major role of the SENP family in the overall SUMOylation status of the cell. However, the study of this post-translational modification is complicated by the fact that it is highly dynamic: only a small fraction of a given substrate is SUMOylated at any given time. There therefore seems to be a SUMOylation/deSUMOylation equilibrium for each specific substrate, and this equilibrium is finely regulated. To date, many questions remain unanswered about the cellular signals that trigger deSUMOylation and about how SENPs are regulated under physiological as well as pathological conditions.
The SUMO Interacting Motif (SIM) : Interestingly, a SUMO-interacting motif (SIM) has been identified in some proteins, which interacts non-covalently with SUMOylated proteins [238]. The SIM motif corresponds to a short sequence not exceeding 10 amino acid residues, may contain phosphorylated amino acids, and interacts with a specific groove present on the surface of SUMO [239]. To date, two important characteristics of SIM motifs have emerged: the presence of a core formed by 3-4 hydrophobic residues (generally valine or isoleucine), and the presence of a nearby acidic region, such as the side chains of glutamic or aspartic acids or phosphorylated residues such as serine or threonine [272,274]. A study of the PML protein showed that it carries this non-covalent binding motif, which is involved in the formation of PML nuclear bodies [240]. As with the ψKxD/E SUMOylation consensus site, not all SUMO-binding motifs are functional, either because of poor exposure of the SIM motif or because these motifs are masked by interaction with other proteins.

Consequences of SUMOylation : Modification by SUMO can have several consequences for the target protein: it can change its localization, activity or stability. SUMO can compete with other PTMs that would modify the function of the protein, or SUMO can directly modify the conformation of the target protein. SUMO can also modulate interactions with other proteins, by preventing protein-protein interactions or, conversely, by promoting new interactions, either by exposing a new binding domain or by directly binding to another protein carrying a SIM (Figure 16) [241].

Bacterial effectors can mimic host deSUMOylases : This is the case for Yersinia outer protein J (YopJ), the first bacterial protein identified as modulating SUMOylation. YopJ is a cysteine protease that mimics the isopeptidase activity of SENPs. It leads to the deconjugation of SUMO from its substrates and thus to a decrease in the overall level of SUMOylated proteins [283,284]. Similarly, XopD is an effector secreted by the T3SS of a plant pathogen, Xanthomonas euvesicatoria. XopD has an isopeptidase activity that allows it to deconjugate SUMO from the transcription factor SlERF4. The bacterium thereby suppresses the ethylene response, allowing it to escape the plant immune system and promote its proliferation [285,286]. Klebsiella pneumoniae is a major multidrug-resistant pathogen causing nosocomial infections worldwide. Recently, it has been shown that K. pneumoniae decreases the level of SUMO-conjugated proteins in epithelial cells but also in macrophages. In epithelial cells, this decrease does not depend on an alteration of the E1 or E2 enzymes. The authors showed that the SENP2 deSUMOylase is involved in the decrease of SUMO-conjugated proteins: K. pneumoniae triggers the delocalization of SENP2 from the nucleus to the cytosol. The bacterium causes the deNEDDylation of an E3 ubiquitin ligase, which results in defective ubiquitination of SENP2 by this ligase and thus prevents the degradation of SENP2 by the proteasome. In macrophages, on the other hand, the decrease in SUMOylation is mediated by type I interferon in a TLR4-dependent manner: type I interferon stimulates the transcription of the miRNA let-7, which dampens SUMOylation. This decrease in the SUMOylation status of epithelial cells and macrophages allows intracellular survival of Klebsiella pneumoniae and limits the inflammatory response [247].
Bacteria using the host's machinery to SUMOylate their own effectors : This has been observed in Ehrlichia chaffeensis and Anaplasma phagocytophilum [288,289]. The TRP120 and AmpA effectors, belonging respectively to Ehrlichia chaffeensis and Anaplasma phagocytophilum, are secreted into the host cell cytosol and localize to the membrane of the pathogen-containing vacuole. TRP120 and AmpA are SUMOylated during infection. SUMOylation of TRP120 allows it to interact with host proteins and thus promote infection [248]. Inhibition of the SUMO pathway significantly decreases the interaction of TRP120 with its protein targets, resulting in decreased intracellular survival of Ehrlichia chaffeensis. However, the molecular mechanisms and repercussions of this modification are not yet established. Shigella flexneri also uses the host machinery to SUMOylate OspF, an effector secreted during infection. The SUMOylation of OspF allows it to be translocated into the nucleus of the infected cell, where it can modulate proinflammatory cytokine expression [249]. Thus, by interfering with the SUMOylation of the host cell, bacterial pathogens can modulate the activity of proteins in order to promote the replication and dissemination of the bacterium in its host. It seems obvious today that understanding these host-pathogen interactions that modulate SUMOylation is more than necessary, and such understanding could be exploited in the future for the development of new therapeutic strategies.

The SENP family : SENPs are cysteine proteases discovered in the early 2000s [237]. They have a conserved C-terminal catalytic domain of about 200 amino acid residues and exhibit a characteristic papain-like fold. The cysteine residue of the active site is part of a catalytic triad (histidine, aspartic acid and cysteine). The N-terminal domain differs among members of the SENP family (Figure 17). This domain is variable in length and plays a major role in their regulation and localization. It is often subject to post-translational modifications such as phosphorylation or ubiquitination, allowing regulation of partner recruitment or of protein stability. SIM motifs are also found there, allowing SENPs to target their substrates. In humans, there are 6 isoforms of SENPs: SENP1, SENP2, SENP3, SENP5, SENP6 and SENP7. The 6 members of this family can be divided into three subfamilies according to their sequence identity within the catalytic domain, their substrate specificity and their cellular localization [223]; indeed, they present 20 to 60% identity in their catalytic domains [250]. The first subfamily is composed of SENP1 and SENP2, which show no specificity for a given SUMO paralogue, acting on SUMO1 as well as on SUMO2/3. The second subfamily is made up of SENP3 and SENP5, and the last of SENP6 and SENP7. With the exception of SENP1 and SENP2, all SENP isoforms act preferentially on SUMO2/3. Each member of the SENP family has a characteristic distribution, allowing it to target its substrates more specifically. SENPs are located almost exclusively in different nuclear substructures (Figure 18). For example, SENP3 and SENP5 are mainly present in the nucleoli, thanks to a Nucleolar Localization Sequence (NoLS), where they are involved in steps of ribosome biogenesis [218,[292][293][294]. They can also be found in the nucleoplasm and the cytoplasm. Surprisingly, SENP5 relocates to mitochondria during mitosis [251]. SENPs are key regulators of the macromolecular assemblies occurring in the nucleus.
They are thus indispensable for the formation of PML nuclear bodies and the assembly of pre-ribosomes. They also play a role in chromatin remodeling and in the control of gene expression. SENPs are also involved in cell cycle progression, inflammatory signalling and the adaptive immune response. In the framework of this thesis project we will focus on the role of SENP3.

SENP3 : SENP3 is a cysteine protease. This protein is composed of 574 amino acid residues, with a molecular weight of 65 kDa. Its catalytic domain is located at the C-terminus of the protein and presents a catalytic triad of cysteine, histidine and aspartic acid. Cysteine 532 is the key residue that enables it to attack the peptide bond between the terminal glycine residue of SUMO and the lysine residue of the substrate. Mutation of Cys532 to alanine leads to a total loss of its isopeptidase activity. SENP3 acts preferentially on substrates SUMOylated by SUMO2/3 and has a very limited activity on SUMO1. Depletion of SENP3 by siRNA leads to a significant increase in substrates SUMOylated by SUMO2/3 in cells [293,296,297]. Under oxidative stress SENP3 is stabilized; the result of this stabilization is a redistribution of SENP3 from the nucleoli to the nucleoplasm, where it accumulates and participates in the deSUMOylation of a large number of proteins. Notably, nucleoplasmic SENP3 deconjugates SUMO2/3 from p300, a co-activator of HIF-1α, a redox-sensitive transcription factor. This deSUMOylation by SENP3 leads to a transactivation of HIF-1α [257]. A few years later, the same team showed that SENP3 is induced under mild oxidative stress, but that under high oxidative stress its catalytic activity is inhibited owing to the oxidation of the catalytic cysteine 532, preventing the transcriptional activity of HIF-1α [259]. Under oxidative stress, the delocalization of SENP3 from the nucleoli also leads to its accumulation within PML-NBs and thus to the deconjugation of SUMO2/3 from PML. As a consequence, a decrease in the number of PML nuclear bodies, whose formation requires the SUMOylation of PML, is observed under oxidative stress.

SENP3 as a modulator of gene expression : Studies have shown that deSUMOylation mediated by the SENP family plays a role in bone metabolism. SENP3 is found associated with the MLL1/MLL2 histone methyltransferase complex and catalyzes the deconjugation of SUMO from RbBP5, which is required for the activation of Hox genes. SENP3 thereby promotes osteogenesis: by deSUMOylating RbBP5, it activates the expression of HOX genes necessary for cell specialization and for lineage determination pathways such as osteogenesis [250]. Recently, a study showed that in bone marrow-derived monocytes SENP3 activity is decreased during osteoclastogenesis, promoting osteoclast differentiation. The authors showed that SENP3 negatively regulates osteoclast differentiation by deconjugating SUMO2/3 from lysine K310 of IRF8. This suggests a potential role for SENP3 as a therapeutic target in diseases related to bone loss [261]. SENP3 is also phosphorylated by the CDK1 protein kinase before entry into mitosis and is dephosphorylated by protein phosphatase-1 (PP1) at mitotic exit. The phosphorylation of SENP3 negatively regulates its activity, so that it can no longer deSUMOylate chromosome-associated proteins such as Topoisomerase IIα (TopoIIα). The expression of a mutant of SENP3 that cannot be phosphorylated decreases the SUMOylation of TopoIIα, leading to abnormal chromosome segregation, an abnormal mitotic cell cycle and tumorigenesis [262].
These data show that SENP3 phosphorylation plays a crucial role in the regulation of chromosome stability in mitosis. Subsequently, the same team showed that in response to DNA damage, p53 is activated and suppresses the phosphorylation of SENP3.

SENP3 and the immune response : It is well established that protein SUMOylation plays a role in the innate immune response. ROS are produced in abundance during macrophage activation and are necessary for inflammatory signaling by TLR4. As we have seen above, SENP3 is sensitive to ROS. SENP3-deficient cells show markedly deregulated activation of TLR4 inflammatory signaling and production of pro-inflammatory cytokines in LPS-stimulated macrophages. SENP3 potentiates LPS-induced TLR4 signaling via deSUMOylation of MKK7, resulting in increased JNK phosphorylation and downstream events. This suggests that SENP3 may be the link between redox regulation and the innate immune response [263]. Finally, NLRP3 (NOD-like receptor family, pyrin domain containing 3), an inflammasome-activating protein, undergoes SUMOylation, which is essential for the regulation of its inflammatory activity [264]. Subsequently, it was shown that SENP3-mediated deSUMOylation of NLRP3 orchestrates the activation of the inflammasome [265].

SENP3 as a regulator of macromolecular assemblies in the nucleus: ribosome biogenesis : As mentioned above, SENP3 is mainly localized in nucleoli, a major site of ribosome biogenesis, owing to the presence of a nucleolar localization sequence (NoLS). However, authors have also shown that this nucleolar localization depends on the serine/threonine kinase activity of mTOR. Indeed, when HeLa cells were treated with mTOR inhibitors (Ku-0063794, rapamycin), SENP3 was no longer able to localize to the nucleoli, revealing that phosphorylation of several N-terminal SENP3 serines and threonines (S25, S139, S141, T142, S143, T145) by mTOR is required for its localization. This delocalization of SENP3 is also observed upon amino acid starvation, consistent with the fact that during starvation mTOR is no longer activated [266]. When SENP3 is phosphorylated, it interacts with its major partner nucleophosmin (NPM1/B23), which allows SENP3 to be retained in the nucleolus. In addition, a large protein complex associated with SENP3 has been identified, composed of PELP1, TEX10 and WDR18. PELP1 is found in the GC region of the nucleolus, associated with the precursor forms of the 28S rRNA, and it interacts with LAS1L [270], a 28S rRNA maturation factor. When PELP1 is SUMOylated it localizes to the nucleoplasm, whereas when it is not conjugated to SUMO it is found in the nucleolus. Thus, SENP3 regulates the nuclear distribution of this complex, and the authors suggested that, via its activity on PELP1, SENP3 contributes to the maturation of the pre-60S ribosomal particle.

We studied the Brucella TIR-domain effectors BtpA and BtpB using Saccharomyces cerevisiae as a eukaryotic cell model. We found that both effectors were cytotoxic and that their respective TIR domains were necessary and sufficient for yeast growth inhibition. Growth arrest was concomitant with actin depolymerization, an endocytic block and a general decrease in kinase activity in the cell, suggesting a failure of energy metabolism. Indeed, levels of ATP and NAD+ were low in yeast cells expressing the BtpA and BtpB TIR domains, consistent with the recently described enzymatic activity of some TIR domains as NAD+ hydrolases. In human epithelial cells, both BtpA and BtpB expression reduced intracellular total NAD levels.
In infected cells, both BtpA and BtpB contributed to the reduction of total NAD, indicating that their NAD+ hydrolase functions are active intracellularly during infection. Overall, combining the yeast model with mammalian cells and infection studies, our results show that BtpA and BtpB modulate energy metabolism in host cells through NAD+ hydrolysis, assigning a novel role to these TIR domain-containing effectors in Brucella pathogenesis.

Introduction

Several bacterial pathogens can circumvent host innate immune responses during infection, often by injecting effector proteins into host cells that target components of innate immune pathways. In many cases, these effectors contain eukaryotic-like domains capable of modulating receptor-proximal events. This is the case of the Toll/interleukin-1 receptor (TIR) domains present on the cytosolic faces of all Toll-like receptors (TLRs) and of the corresponding adaptor proteins, enabling the formation of a scaffold for the assembly of intricate protein signaling complexes [1]. The formation of these supramolecular organizing complexes (SMOCs) involves both self-interactions and interactions with other TIR domains [2]. TIR domains are also present in plants, where they mediate disease resistance, in amoebas, with a role in the ingestion of bacteria and immune-like functions, as well as in many bacterial genera [3]. Several Gram-negative and Gram-positive bacterial pathogens are known to rely on TIR domain-containing protein effectors for the down-regulation of TLR signaling during infection [4]. One of the best characterized is the TIR-containing protein of uropathogenic E. coli (TcpC), prevalent in clinical isolates associated with acute pyelonephritis in children. TcpC was shown to contribute to kidney pathology by hijacking the MyD88 TLR adaptor, resulting in inhibition of TLR4 and TLR2 signaling [5]. TcpC inhibition of TRIF- and IL-6/IL-1-dependent pathways has also been described [6]. Interestingly, the observation that expression of TirS from Staphylococcus aureus, present in a multi-drug resistant (MDR) island of numerous clinical isolates, is induced by specific antibiotic treatment [7] raises the possibility that these bacterial proteins may be tightly regulated, enhancing virulence, persistence or dissemination in particular clinical contexts such as exposure to selective pressure. For some pathogens, additional functions have been assigned to bacterial TIR domains other than the downregulation of TLR pathways, as in the case of PumA from Pseudomonas aeruginosa, which interferes with TNF receptor signaling by targeting UBAP1 [8], a component of the endosomal-sorting complex required for transport I (ESCRT-I). Also, E. coli TcpC directly interacts with the NACHT leucine-rich repeat PYD protein 3 (NLRP3) inflammasome and caspase-1, resulting in inflammasome perturbation [9].

Author summary

Brucella is a genus of zoonotic bacteria that cause severe disease in a variety of mammals, ranging from farm animals (such as bovines, swine and ovines) to marine mammals. Transmission to humans, often by ingestion of non-treated dairy products, leads to serious systemic infection. Brucella abortus invades host cells and replicates intracellularly. Such behavior relies on the injection of bacterial proteins into the host cytoplasm via specialized secretion systems.
Our work focuses on the study of two of these factors, BtpA and BtpB, previously described to contain Toll/interleukin-1 receptor (TIR) domains that modulate innate immunity. We use here two biological models: the yeast Saccharomyces cerevisiae and human cell lines. We found that the TIR domains of both Brucella proteins were necessary and sufficient to collapse energy metabolism in yeast by depleting ATP and NAD+. This result was translatable to higher cells and consistent with the recently described NADase activity of some TIR domains in both mammalian and bacterial proteins. Importantly, we demonstrate that Brucella down-regulates total NAD levels in host cells by using both the BtpA and BtpB effectors. Our results show that NAD+ is targeted by Brucella during infection, which may constitute a novel mechanism for its pathogenicity.

Recent work on the mammalian TLR adaptor SARM1 and on plant nucleotide-binding leucine-rich repeat (NLR) immune receptors, such as RUN1, unveiled that their TIR domains possess enzymatic activity [10,11]. The authors went on to demonstrate that not only eukaryotic but also prokaryotic TIR domains, in general, constitute a new family of nicotinamide adenine dinucleotide (NAD+) hydrolase enzymes [12]. Although this NADase activity is efficiently neutralized in the bacteria by an unknown mechanism, when heterologously expressed in laboratory E. coli strains or assayed in vitro these prokaryotic TIR domains were able to cleave NAD+. Loss of NAD+ was also detected when full-length S. aureus TirS was ectopically expressed in mammalian cultured cells [12]. One of the bacterial TIR domains shown to have NAD+-consuming activity when expressed in E. coli was that of BtpA (also known as TcpB) from Brucella spp. [12]. In Brucella abortus, a clear role in virulence has been established not only for BtpA but also for BtpB, the second TIR domain-containing protein of Brucella. Together, these effectors have been shown to down-modulate dendritic cell activation, contributing to the stealthy characteristics of this pathogen in the context of chronic brucellosis [13,14]. The precise target of the Brucella TIR-containing effector proteins remains unclear. BtpA has been proposed to act as a mimic of the TLR adaptor TIRAP, by binding specific phosphoinositides of the plasma membrane [15] and by increasing TIRAP ubiquitination and degradation during infection [16]. However, preferential binding to MyD88 was also demonstrated [17]. It is likely that these Brucella TIR-containing proteins have additional targets or functions, as they modulate microtubule dynamics when ectopically expressed [18,19], and BtpA was shown to induce the unfolded protein response [20]. Given all the possible roles proposed for these Brucella TIR effectors and their potential NADase activity, we set out to investigate their functions in greater detail. By combining ectopic expression in the model eukaryotic organism Saccharomyces cerevisiae and in human cells, as well as in vitro infection studies, we have found that BtpA and BtpB reduce total NAD levels during infection, suggesting that their NADase activities are an integral part of their role in Brucella pathogenesis. Our results point towards a novel function of these effectors in the modulation of host metabolism through the modulation of intracellular NAD levels during infection.

Results

Expression of Brucella abortus TIR domain-containing BtpA and BtpB proteins in S. cerevisiae induces toxicity
To gain insight into the roles of BtpA and BtpB in the modulation of cellular functions, the btpA and btpB genes were cloned in a yeast expression vector under the control of the inducible GAL1 promoter, to produce the corresponding GFP fusion proteins. Thus, expression was repressed in glucose-based media, but incubation of yeast transformants in galactose-based media led to the expression of GFP-BtpA and GFP-BtpB, as verified by Western blotting (S1A and S1B Fig).

Both GFP-BtpA-TIR and GFP-BtpB-TIR formed conspicuous filaments resembling cytoplasmic microtubule bundles. However, immunofluorescence with anti-tubulin antibodies revealed that they did not co-localize with tubulin (S2 Fig). Although we cannot rule out that the Brucella TIR domains interact with filamentous structures other than tubulin in yeast, our results suggest that they are prone to forming highly ordered structures by self-interaction, and that their N-terminal extensions negatively influence this behaviour.

BtpB depolarizes actin patches, blocks endocytosis and down-regulates signaling in S. cerevisiae

To understand the mechanisms underlying growth inhibition in yeast expressing Brucella TIR proteins, we chose to analyze the effects of full-length BtpB. The actin cytoskeleton supports polarized growth in yeast during budding, so growth arrest could be caused by actin dysfunction. Indeed, as shown in Fig 2A, staining of actin cortical patches with rhodamine-conjugated phalloidin revealed a dramatic loss of polarization of actin structures towards the growing bud and septum region. Moreover, the BtpB TIR domain was fully responsible for this phenotype (Fig 2A). Besides supporting growth along the budding cycle, actin function is important for endocytosis. We used the FM4-64 fluorochrome to monitor endocytic traffic. Internalization of this non-permeable molecule via the endocytic pathway leads to staining of the vacuolar membrane after 1 hour of incubation [21]. We observed that cells that efficiently expressed GFP-BtpB, as judged by the presence of intense green fluorescent cytoplasmic spots, were unable to internalize this marker, as compared to cells lacking green fluorescence or control cells expressing GFP alone (Fig 2B), indicating that BtpB severely blocks endocytosis. This phenotype also relies on the BtpB TIR domain, as shown in S1C Fig. Often, cellular stresses that lead to actin depolarization in yeast trigger the activation of signaling cascades involving mitogen-activated protein kinase (MAPK) modules, such as the cell wall integrity (CWI) pathway, engaging the Slt2 MAPK [22]. Also, we have previously described that some bacterial effectors, such as Salmonella SteC and SopB [23,24], depolarize actin by down-regulating small GTPases when expressed in yeast, leading to concomitant dephosphorylation of the downstream Fus3 and Kss1 MAPKs of the mating pathway. Thus, we tested MAPK activation levels in BtpB-expressing cells by immunoblotting with anti-phospho-MAPK antibodies. Peculiarly, the basal phosphorylation levels of all of the Slt2, Fus3 and Kss1 MAPKs were downregulated in BtpB-expressing cells, but not upon BtpA expression (S3A Fig). We then investigated whether BtpB would be able to downregulate MAPK activation upon stimulation of these pathways, by incubation at 39 ºC or in the presence of the cell wall-stressing compound Congo red to stimulate the CWI pathway, and by using the mating pheromone α-factor to activate Fus3 and Kss1.
Although BtpB still allowed activation of these pathways by the stimuli, MAPK phosphorylation was always less efficient (S3B Fig). A fourth MAPK, Hog1, a p38 homolog, operates in budding yeast in response to high-osmolarity challenges [25]. As observed for the other MAPKs, phosphorylation of Hog1 was less efficient in the presence of BtpB when this pathway was stimulated by osmotic stress (S3C Fig). Since these MAPK pathways do not share upstream components, it is striking that they were all simultaneously downregulated by BtpB expression. Such a general effect on MAPK phosphorylation might reflect an inability of the cell to properly undergo phosphorylation events. In support of this view, when heterologous mammalian Akt1 (which undergoes phosphorylation at its activation site by conserved yeast PDK-like kinases [26]) was co-expressed with BtpB, reduced phosphorylation was also observed (S3D Fig). In sum, BtpB and BtpB-TIR expression in yeast results in severe actin disorganization, an endocytic block and a general defect in the phosphorylation of all signaling kinases tested.

Genetic screen for yeast genes that suppress BtpB-induced lethality

We pooled three non-overlapping libraries obtained from the whole-genome yeast ORF collection, consisting of all S. cerevisiae predicted ORFs cloned in an expression vector under the control of the inducible GAL1 promoter, transformed in E. coli. This pooled whole-genome expression library was co-transformed with a GFP-BtpB GAL1-based expression plasmid, and positive selection allowed the recovery of genes suppressing BtpB toxicity in galactose-based medium. The suppressor genes listed in S4A Fig and S3 Table were selected when growth rescue (i) was confirmed after individual re-transformation, (ii) was specific for BtpB-induced growth inhibition, but not that of another toxic heterologous protein (PI3Kα-CAAX) [26], and (iii) was not due to a lower production of GFP-BtpB, as verified by immunoblot (S4B Fig). As shown in S4A Fig, suppression was partial in all cases. Co-transformation of these suppressors with BtpB-TIR led to the same rescue levels, although no growth recovery was detected when they were co-expressed with BtpA-TIR (S4C and S4D Fig). Thus, either these suppressors are specific for BtpB-TIR-derived toxicity in yeast, or the effect of BtpA-TIR is too strong to allow partial suppression. Although most of these genes have not yet been assigned a bona fide function in yeast, a subset of them, INM2, RBK1 and DOG2, are sugar- or inositol-phosphorylating/dephosphorylating enzymes related to metabolic pathways (S3 Table). DOG2 encodes a 2-deoxyglucose-6-phosphate phosphatase and its overexpression overcomes the toxicity of this glycolytic inhibitor [27], and RBK1 encodes a putative ribokinase, which has recently been shown to be catalytically active [28]. These results suggest that metabolic shifts related to carbon source usage partially counteract BtpB toxicity.

BtpA and BtpB deplete ATP and NAD+ in the yeast cell

Our results showing that BtpB expression in yeast affects dynamic cellular events such as cytoskeletal function and vesicle traffic, as well as general kinase function, would be consistent with limiting intracellular ATP levels. Furthermore, the fact that sugar kinases/phosphatases were isolated as btpB overexpression suppressors suggests that energy metabolism is compromised in BtpB-expressing yeast cells. Recently, Essuman et al.
[12] reported that the TIR domains of proteins from phylogenetically diverse bacteria, including Brucella BtpA, display enzymatic activity as NAD+ hydrolases. Thus, we were prompted to study ATP and NAD+ levels in yeast cells expressing BtpA and BtpB. Yeast cells expressing BtpB, or the TIR domains of either BtpB or BtpA, showed significant losses of both ATP and NAD+ intracellular levels, as determined by luciferase assay and quantitative mass spectrometry, respectively (Fig 3). This effect was especially dramatic for NAD+ levels, which were lowered by about one order of magnitude upon BtpB overexpression. We also observed a slight but significant reduction of NAD+ in the case of full-length BtpA (Fig 3B). The decrease in NAD+ and ATP correlated very well with the differential toxicity of each protein version for yeast cells (Fig 1A), as full-length BtpB had the strongest effect on intracellular NAD+ and ATP levels while, in the case of BtpA, the TIR domain alone had a more dramatic effect than the full-length protein. This raises the idea that the N-terminal extension of BtpA, but not that of BtpB, has a negative regulatory effect on the C-terminal TIR/NAD+ hydrolase domain. As a control, we generated a catalytically inactive BtpB E234A mutant, by changing to Ala the Glu residue equivalent to that described by Essuman et al. [12] to be essential for catalysis in other TIR domains. BtpB E234A was no longer toxic for yeast (Fig 4B). In agreement with its lower toxicity, this BtpB mutant was expressed at higher levels than the toxic wild-type version (S1B Fig). Expression of the BtpB E234A mutant had no effect on ATP or NAD+ intracellular levels (Fig 3), strongly suggesting that, as described for other TIR domains [12], this residue is essential for the catalytic activity of BtpB.

Fig 3 legend (excerpt): ... 0.0046 for BtpB-TIR. (B) Cellular NAD+ levels measured by mass spectrometry, standardized as a NAD+/extract protein ratio, from YPH499 cells transformed with the same plasmids as in A. The graph shows the NAD+/protein ratio as a percentage of that of the empty-vector control cells. Data correspond to means ± standard deviation of four different transformants, and statistical comparison was done with one-way ANOVA, with p-values referred to the vector of <0.0001 for BtpA-TIR, BtpB and BtpB-TIR, and 0.0134 for BtpA. https://doi.org/10.1371/journal.ppat.1007979.g003

Mapping of residues essential for NAD+ hydrolase function in the TIR domain of BtpB

Taking advantage of the severe toxicity of BtpB in yeast, we devised a screen for the isolation of loss-of-function mutations by random mutagenesis. This was performed by plasmid gap-repair [29], forcing in vivo homologous recombination between an open gapped plasmid and a partially overlapping insert encoding BtpB, which had been generated by error-prone PCR. Such a strategy allows direct selection in galactose-based medium for recombinant clones bearing mutations in btpB that render the protein non-toxic. Ten single and two double mutants were recovered and sequenced (S4 Table). As shown in S5A Fig, most mutations corresponded to non-conservative amino acid changes in regions highly conserved between the BtpA and BtpB TIR domains. Some of these residues are also conserved in the TIR domains of human SARM1 and plant RUN1, in which NAD+ hydrolytic activity was recently described [10,11].
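The kind of residue mapping described in the next paragraph (projecting the BtpB mutations onto the BtpA-TIR crystal structure, PDB 4LZP) can be scripted. The following sketch uses Biopython to read a local copy of 4LZP and print the short pairwise Cα distances between the BtpA-equivalent residues listed in the text; the file name and chain identifier are assumptions, and the residue numbering is taken from the text rather than checked against the deposited structure.

from itertools import combinations
import numpy as np
from Bio.PDB import PDBParser

# BtpA equivalents (numbering from the text) of the BtpB loss-of-function
# mutations; chain "A" and the local file name "4lzp.pdb" are assumptions.
RESIDUES = [145, 149, 150, 174, 178, 185, 208, 209, 275]

parser = PDBParser(QUIET=True)
structure = parser.get_structure("BtpA_TIR", "4lzp.pdb")
chain = structure[0]["A"]

coords = {}
for num in RESIDUES:
    try:
        coords[num] = chain[num]["CA"].coord   # Calpha coordinates
    except KeyError:
        print(f"residue {num} not resolved in this chain")

# Short pairwise Calpha distances hint at the surface patches and
# core contacts discussed in the text.
for a, b in combinations(sorted(coords), 2):
    d = float(np.linalg.norm(coords[a] - coords[b]))
    if d < 12.0:
        print(f"res {a} -- res {b}: {d:.1f} A")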
To decipher the effects of the mutations on BtpB properties, we mapped the corresponding residues onto the BtpA-TIR domain structure (PDB: 4LZP) [30], as the BtpB structure has not yet been solved. As seen in Fig 4A, none of the mutated residues belonged to the TIR-TIR interface. S162 (S149 in BtpA) and F163 (H150 in BtpA) belong to the βA strand. Y225 (F208 in BtpA) and Q226 (F209 in BtpA) are part of the small helix αC. Mutations of these residues are likely to disrupt the inner core and thus destabilize the whole structure. Mutation of BtpB S201 (S185 in BtpA) is likely to perturb the NAD+ catalytic site. In the recent crystal structure of the NADP+-bound RUN1 TIR domain [11] (PDB: 6O0W), the substrate lies in a pocket formed by the BB-loop and the loop containing the conserved catalytic WxxxE motif [19] (S5B Fig). In the BtpA structure, S185 lies in the BB loop and interacts with W213 (W231 in BtpB) of the WxxxE motif, which contains the essential catalytic E217 residue (E234 in BtpB). Finally, the D158 (D145 in BtpA), F188 (F174 in BtpA), Y193 (Y178 in BtpA) and I291 (I275 in BtpA) residues clustered in two patches at the protein surface (Fig 4A). In order to determine whether the NAD+ hydrolase and filament-formation features of TIR domains were separable, we transferred the mutations D158G, S162P and Y225C, as well as the mutation of the catalytic residue, E234A, to GFP-BtpB-TIR, to study whether loss of toxicity correlated with the ability of the TIR domain alone to produce filaments. Interestingly, only the E234A, S162P and, partially, Y225C mutations eliminated BtpB-TIR toxicity in yeast cells (Fig 4B), despite the fact that all four mutations fully prevented toxicity and endocytosis defects in full-length BtpB (S5C-S5E Fig). Moreover, only the GFP-BtpB-TIR S162P mutant significantly lost the ability to form protein filaments (Fig 4C and 4D), probably because this mutation damages the inner core (Fig 4A). These results indicate a segregation between the filament-formation and growth-inhibitory functions of BtpB-TIR in yeast and highlight the importance of the Glu234 residue specifically for NAD+ hydrolase activity, while Ser162 is key for both features. We also changed the catalytic E217 residue to Ala by site-directed mutagenesis, in both full-length BtpA and its TIR domain alone. Both mutants lost their toxicity in yeast (Fig 4E). Moreover, BtpA-TIR E217A did not reduce MAPK phosphorylation, and yeast cells sustained higher levels of its expression as compared to wild-type BtpA-TIR (S1A Fig). Although a statistically significant reduction in the percentage of cells showing BtpA-TIR filaments was found (45.1% ± 7.7 for BtpA-TIR vs. 26% ± 12 for BtpA-TIR E217A), these structures were larger and more intense for the mutant than for the wild-type version (Fig 4F).

Fig 4 legend (excerpt): ... from pYES2 plasmid derivatives, under control (Glucose) and induction (Galactose) conditions. (C) Nomarski and fluorescence microscopy of YPH499 cells expressing GFP-BtpB-TIR and the indicated mutants, after 4 h induction. Scale bars correspond to 5 µm. (D) Graph displaying the percentage of cells showing filamentous fluorescent structures. Data correspond to means ± standard deviation of three independent transformants, and statistical comparison was done with one-way ANOVA, with a p-value < 0.0001 between BtpB-TIR WT and S162P. (E) Ten-fold serial dilution growth assay of the YPH499 yeast strain bearing the pYES2 empty vector and pYES2 plasmid derivatives expressing BtpA, BtpA-TIR and their corresponding catalytically inactive E217A mutants, under control (Glucose) and induction (Galactose) conditions.
(F) Nomarski and fluorescence microscopy of yeast cells expressing pYES2-GFP-BtpA-TIR and its E217A mutant version. Scale bars correspond to 5 µm. https://doi.org/10.1371/journal.ppat.1007979.g004

Importantly, these results indicate that, as observed for BtpB, the catalytic E217 residue is essential for toxicity in yeast but still allows assembly of the BtpA TIR domain into ordered structures.

Inhibition of endocytosis occurs upon ectopic expression of BtpB in human cells but not during infection

To investigate whether the results obtained in yeast were translatable to mammalian cells, we began by overexpressing BtpB in human epithelial cells (HeLa). As previously described [19], and consistent with the yeast model, a punctate accumulation of BtpB was observed in the cytosol of HeLa cells. These results were obtained for GFP-BtpB (Fig 5) as well as for Myc-tagged BtpB (S6A Fig), indicating that this localization is independent of the tag. To gain insight into the type of structures BtpB was forming, we labelled different endocytic markers. Some of these structures were enriched in mono- and poly-ubiquitinated proteins, as recognized by the FK2 antibody (Fig 5A), which could correspond either to aggregates of misfolded protein or to sites with densely ubiquitinated proteins, as previously described [19]. However, some of the BtpB compartments did not show labelling with the FK2 antibody and were also negative for the lysosomal-associated membrane protein (LAMP), reminiscent of what has been described for the Staphylococcus aureus TirS protein [7]. Most likely, as also inferred from the yeast data above, these structures correspond to self-assembled ordered filaments consisting of the TIR domain, which are absent when it is stabilized by the presence of the N-terminal domain. These results confirm that the N-terminal portion of BtpB plays an important role in its subcellular localization. Interestingly, the BtpB E234A mutant retained the dot-like distribution observed for BtpB in HeLa cells (S6E Fig).

We next determined whether BtpB expression resulted in a perturbation of endocytosis in human cells, as observed in yeast. Fluorescently labeled transferrin or Epidermal Growth Factor (EGF) was incubated with HeLa cells expressing GFP-BtpB or GFP alone, and the percentage of cells with uptake of these endocytosis markers was quantified by microscopy. The expression of GFP-BtpB significantly decreased endocytosis of both markers in comparison to GFP alone (Fig 7A-7C), consistent with the yeast model. Although the overexpression of individual effectors provides a powerful tool to investigate the direct functions of these proteins, to thoroughly investigate the capacity of BtpA and BtpB to inhibit endocytosis we analyzed this phenotype during infection. Although we could observe a slight decrease of endocytosis after 24 h of infection of HeLa cells, this phenotype was neither statistically significant nor abrogated by deletion of btpA or btpB (S7A Fig). Furthermore, no significant differences were observed at 48 h post-infection for HeLa cells infected with WT Brucella in comparison to cells infected with mutants lacking either btpA, btpB or both genes (Fig 7D).
Finally, no impact on endocytosis was observed in immortalized bone marrow-derived macrophages (iBMDM) at either 24 or 48 h post-infection (S7B Fig and Fig 7E, respectively), suggesting that Brucella TIR proteins do not interfere with endocytosis during infection.

BtpA and BtpB deplete cellular NAD when ectopically expressed in human cells and during infection

As previous studies attributed a NAD+-consuming activity to the BtpA TIR domain when expressed in E. coli [12], and our experiments using the eukaryotic yeast model showed that both BtpB and BtpB-TIR strongly reduce intracellular NAD+ content, we next assessed total NAD levels in HeLa cells expressing either Myc-BtpB or Myc-BtpA, in comparison to Myc alone, using a colorimetric assay. Both Myc-tagged proteins were well expressed in epithelial cells, although BtpA always migrated as a double band, potentially indicative of post-translational modifications occurring in the cell.

To determine whether Brucella could impact intracellular NAD levels during infection, we first established that all bacterial strains had equivalent levels of total NAD in the inocula (S7C Fig), which correspond to a 16 h culture, the time required to reach the early stationary phase used for our infection studies. We next infected HeLa cells with wild-type or mutant strains lacking either btpA or btpB and quantified the levels of total NAD. Although we did not observe any differences at 24 h post-infection, we could observe that B. abortus infection for 48 h resulted in a reduction of total NAD levels in a manner dependent on BtpA and BtpB (Fig 9A). As it is well established that btp mutants replicate to the same levels as wild-type Brucella [14], and we have found they have equivalent bacterial total NAD levels (S7C Fig), we can conclude that BtpA and BtpB NAD-consuming activities are likely to impact host intracellular NAD levels.

To confirm these phenotypes were specifically due to the absence of BtpA and BtpB, we attempted to complement the mutant strains. The phenotype of the btpA mutant strain could be restored by expressing btpA from a plasmid (Fig 9B). Although the same tendency could be observed for the complementation of the btpB mutant, due to a lower effect on the NAD concentration in HeLa cells infected with the btpB mutant we could not obtain statistical significance with the number of experiments performed (Fig 9C). We therefore infected iBMDM, as a much higher rate of infection can be attained with phagocytic cells. In this cellular model, wild-type B. abortus infection also resulted in a reduction of intracellular NAD levels, in a manner dependent on BtpA and BtpB. The expression of each gene from a plasmid restored this phenotype.

[Fig 7 legend, fragment] … 48 h with either wild-type B. abortus or a mutant strain lacking btpA, btpB or both genes. Cells were then incubated with EGF conjugated with Alexa Fluor 555 for 10 minutes and the percentage of infected cells showing uptake of this fluorescent marker was quantified by microscopy. Counts correspond to individual microscopy fields, with a total of at least 200 cells counted for each, from three independent experiments. Mock-infected cells are included as a control. Data correspond to means ± standard deviation and statistical comparison was done with a one-way ANOVA test, with no statistical significance observed.
Finally, to determine if the observed reduction of NAD was due to the catalytic activity of the TIR domain, we complemented the btpA mutant with a plasmid carrying an E217A mutation in btpA (∆btpA pbtpA E217A) and the btpB mutant with a plasmid expressing an E234A mutation in btpB (∆btpB pbtpB E234A). We first controlled that these catalytic mutant versions of BtpA and BtpB could be efficiently translocated into host cells. We constructed TEM1 fusions as previously reported [14] and determined the percentage of cells emitting coumarin fluorescence at 24 h post-infection (S7D Fig). We had to use RAW macrophages for these experiments, as previously described [14], because CCF2 was toxic for iBMDM. In the case of BtpA, we observed a level of translocation of TEM-BtpA E217A consistent with what was previously reported for wild-type TEM-BtpA [14]. In the case of BtpB, a lower percentage of infected cells showed translocation of TEM-BtpB E234A, consistent with what has been observed for wild-type TEM-BtpB (less than 2% of infected cells) [14].

[Fig 9 legend, fragment] … ∆btpA pbtpA 0.0124, ∆btpA pbtpA versus ∆btpA pbtpA E217A 0.0177 and wild-type versus ∆btpA pbtpA E217A 0.0072. (C) HeLa cells were infected for 48 h with either wild-type B. abortus, a ∆btpB mutant, the complemented strain ∆btpB pbtpB or a ∆btpB mutant complemented with a catalytic mutant (∆btpB pbtpB E234A). Results are normalized to mock-infected cell values and correspond to means ± standard deviation from three independent experiments; statistical comparison was done with a one-way ANOVA test, with a slight statistical significance only observed between wild-type and ∆btpB (p = 0.0491). https://doi.org/10.1371/journal.ppat.1007979.g009

We next measured the concentration of total NAD in both HeLa and iBMDM infected cells. In HeLa cells, catalytically inactive BtpA failed to complement the mutant strain (Fig 9B). Consistently, this was also the case in iBMDM for both BtpA and BtpB (Fig 10A and 10B). Together, these results indicate that BtpA and BtpB contribute to the depletion of intracellular total NAD levels via direct enzymatic cleavage of this metabolic co-factor during infection, assigning a novel function to these two effectors during Brucella infection.

Discussion

Bacterial TIR domain-containing proteins have been shown to be major contributors to the evasion of innate immunity for a variety of bacterial pathogens, mainly by interfering with the assembly of innate immune signaling complexes involving TIR domains [4]. However, certain TIR domains have recently been demonstrated to possess a NAD+ hydrolase activity which may contribute to their function, as is the case for mammalian SARM1 [12] or plant NLR immune receptors [11]. We have found that both Brucella TIR domain-containing proteins, BtpA and BtpB, retain this NAD+ hydrolase activity inside cells when ectopically expressed in yeast or human cells, as well as during infection, resulting in a reduction of intracellular total NAD levels at late stages of the infection. Furthermore, we have highlighted that the N-terminal non-TIR domains of these proteins are necessary for intracellular targeting of the effectors. Remarkably, BtpA-TIR and BtpB-TIR resulted in the formation of long filament-like structures when ectopically expressed in yeast and human cells.
Since this phenomenon can only be observed in the absence of their N-terminal regions, such N-terminal extensions likely function to modulate intrinsic TIR self-assembly. Furthermore, genetic analyses in yeast revealed that the highly ordered structures formed by expression of the TIR domains alone are still achieved when expressing point mutants that lose their NADase activity, suggesting that distinct features of TIR domains rely on different structural determinants.

NAD+ is an important coenzyme participating in hundreds of enzymatic reactions, notably glycolysis, the TCA cycle and mitochondrial oxidative phosphorylation. NAD+ homeostasis is essential for metabolic balance and cell survival, with NAD+ either being used as an electron carrier in redox reactions or being consumed as a substrate for numerous reactions. Beyond its well-known role in bioenergetics, NAD+ has been found to have a prominent function in cell signaling, with sirtuins, poly ADP-ribose polymerases (PARPs) and CD38 using NAD+ as a substrate [31,32]. NAD+ has also been shown to be a key modulator of immune metabolism, acting as an important metabolic switch. In macrophages, it has been shown that increased NAD+ levels are associated with activation and control of inflammatory responses, particularly involving regulation of TNFα transcription in classically activated pro-inflammatory (M1) macrophages [33,34]. Interestingly, NAD+ limitation also prompts important cellular changes such as the Warburg effect, a cellular state in which consumption of glucose is increased and aerobic glycolysis is favoured instead of the more energy-efficient mitochondrial oxidative phosphorylation [35]. A switch to Warburg metabolism has also been observed upon immune activation of many cell types, for example macrophages, following pattern recognition receptor activation [36]. In addition, low NAD+ levels are a trigger for cell death via necroptosis in macrophages [37].

Our work highlights that Brucella decreases total NAD levels in the host cell, likely contributing to modulation of cellular metabolism and signaling. This is dependent on two translocated effectors, BtpA and BtpB, containing a TIR domain, which had previously been shown to down-modulate innate immune signaling in in vitro differentiated mouse bone marrow-derived dendritic cells [13,14]. It is possible that the two phenotypes, NAD reduction and blocking of TIR-TIR interactions along the TLR signaling pathways, are intimately connected. Indeed, targeting of these Brucella effectors to the vacuolar membrane or to innate immune signaling platforms might locally impact NAD+ levels, inhibiting specific enzymatic reactions. Interestingly, the first enzyme to use NAD+ in glycolysis, glyceraldehyde 3-phosphate dehydrogenase (GAPDH), has been shown to be recruited to the membrane of Brucella-containing vacuoles, playing an essential role in intracellular replication [38]. Previous studies have also reported the role of the specific T4SS effector BPE123 in targeting the host enolase, another enzyme of the glycolysis pathway, which was also shown to be essential for Brucella intracellular multiplication in human cultured epithelial cells [39]. Host metabolism during Brucella infection has only recently started to be unravelled. In classically activated macrophages, Brucella infection was shown to induce a Warburg-like effect, with high consumption of glucose and generation of lactate efficiently used as a carbon source by intracellularly replicating bacteria [40].
Interestingly, in alternatively activated macrophages, abundant during chronic brucellosis, a shift from oxidative metabolism of glucose to oxidation of fatty acids occurs, enhancing the availability of glucose to promote intracellular bacterial replication [41]. In this study, we now highlight the role of the innate immune regulator effectors BtpA and BtpB in direct control of host energy metabolism. Both BtpA and BtpB TIR domains were sufficiently robust NADases in the heterologous yeast model to drop NAD+ levels by over one order of magnitude, causing a severe decrease of ATP availability in the cell and strong toxicity when overexpressed. The fact that full-length BtpA is not as toxic for the yeast cell suggests that the N-terminal domains of BtpA negatively regulate BtpA catalytic activity. BtpB, on the contrary, is intrinsically active both in the absence and in the presence of its N-terminal extension. We show here that catalytically dead mutants in the BtpA and BtpB TIR domains still have the ability to self-aggregate and form filaments, proving that both features can be segregated. BtpA has been related to tubulin structures in host cells, specifically by protecting microtubules from depolymerization [19,42]. However, we did not see coincidence of TIR cytoplasmic filaments with tubulin. ATP is necessary to achieve depolymerization of microtubules by nocodazole [43], so NAD+ depletion and low ATP levels could contribute to the microtubule-stabilizing properties assigned to BtpA. Similar ATP-dependent phenomena could account for the inhibition of endocytosis in yeast and human cells. It is important to note, however, that no impact of BtpA and BtpB on endocytosis was observed in infected cells. Therefore, ectopic over-expression of these effectors and its strong effect on ATP and NAD could be responsible for this phenotype, which is absent when a much smaller amount of protein is translocated into host cells during infection. Alternatively, Brucella infection may induce compensatory effects that would mask this phenotype, for example by translocating other effectors that enhance endocytosis.

Why might Brucella limit NAD+ levels and energy metabolism in particular subcellular compartments and at specific stages of the establishment of its intracellular niche? This is a challenging question and at this stage we can only speculate. NADase activity may contribute to evading the innate immune response. Importantly, NAD+ levels are sensed by sirtuin proteins, like SIRT1, leading to the activation of several signaling pathways, some related to immunomodulation [44]. NAD+-dependent SIRT1 has recently been shown to be an important hub for cellular defence against M. tuberculosis intracellular survival. M. tuberculosis infection reduces intracellular NAD+ and down-regulates SIRT1, which can be reversed by the addition of SIRT1-activating compounds that represent a potential therapeutic option [45]. The Tuberculosis Necrotizing Toxin (TNT) of M. tuberculosis bears NAD+ glycohydrolase activity, which was shown to be important for mycobacterial replication in macrophages, and is involved in triggering necroptosis as a consequence of NAD+ depletion [37,46].
In addition, it has been proposed that reduced NAD+ and ATP levels in Salmonella-infected macrophages, a process dependent on the Salmonella Pathogenicity Island 2 (SPI-2) type 3 secretion system [47], trigger the modulation of TORC1 to protect intracellular bacteria from xenophagy via lysosomal degradation of its upstream SIRT1/LKB1/AMPK regulators [47]. Thus, NAD+ depletion could eventually be understood as a strategy to evade cellular innate immunity responses and promote bacterial intracellular survival.

To our knowledge, this is the first report of bacterial TIR domain-containing proteins showing NAD+ hydrolase activity on the host cell during infection. Our comparison of the divergent behaviour of the full-length proteins and the TIR domains alone in the case of BtpA and BtpB, together with the observation that the N-terminal non-TIR extensions determine subcellular localization and prevent filament formation, suggests that these extensions may finely tune NADase activity in the context of infection, likely by negatively modulating TIR-TIR assembly and by directing NADase activity to specific intracellular compartments. Further work is now necessary to better understand the control of NAD+ homeostasis during Brucella infection. Knowledge of how metabolic switches occur during infection at a molecular level could provide clues for the development of therapeutic strategies and vaccines.

Materials and methods

Saccharomyces cerevisiae strains and growth conditions

YPH499 (MATa ade2-101 trp1-Δ63 leu2-Δ1 ura3-52 his3-Δ200 lys2-801) [48] was the S. cerevisiae strain for general use in these studies, unless otherwise stated. W303-1A (MATa leu2-3,112 trp1-1 can1-100 ura3-1 ade2-1 his3-11,15) [49] was used for the yeast ORF overexpression library screening. The E. coli strain DH5α F′ (K-12 Δ(lacZYA-argF)U169 deoR supE44 thi-1 recA1 endA1 hsdR17 gyrA96 relA1 (φ80lacZΔM15)F′) was used for general molecular biology. As a general medium, YPD (1% yeast extract, 2% peptone and 2% glucose) broth or agar was used for growing yeast cells. For plasmid selection, synthetic complete medium (SC) contained 0.17% yeast nitrogen base without amino acids, 0.5% ammonium sulphate, 2% glucose and the appropriate amino acid and nucleic acid base supplements. SG and SR were SC with 2% galactose or 1.5% raffinose, respectively, instead of glucose. For galactose induction experiments in liquid media, cells were grown in SR medium to log phase and then galactose was added to 2% for 4-6 h. Effects of the expression of Brucella genes on yeast growth were tested by a ten-fold serial dilution assay: spotting cells onto SC or SG plates lacking the appropriate auxotrophic markers to maintain the corresponding plasmids, and incubating them at 30 °C for 72 h [26].

Cell culture and transfections

HeLa cells (from ATCC) were grown in DMEM supplemented with 10% fetal calf serum (FCS) and were transiently transfected using Fugene (Roche) for 24 h, according to the manufacturer's instructions. Immortalized bone marrow-derived macrophages from C57BL/6J mice were grown in DMEM supplemented with 10% FCS and 10% spent supernatant of L929 cells (which provides M-CSF). RAW 264.7 macrophages (ATCC) were grown in DMEM supplemented with 10% FCS. The Brucella abortus 2308 strain was used for infections and genetic manipulation.
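As an illustration of the ten-fold serial dilution (spot) assay described above, the following minimal Python sketch computes the number of cells expected in each spot of the series. The conversion factor, spot volume and starting OD600 are assumptions chosen for illustration, not values from this work.

# Sketch: expected cells per spot in a ten-fold serial dilution assay.
# Assumptions (not from the source): 1 OD600 unit ~ 3e7 yeast cells/ml,
# 5 ul spotted per dilution, series normalized to a starting OD600 of 0.5.

OD_TO_CELLS_PER_ML = 3e7   # assumed conversion factor for S. cerevisiae
SPOT_VOLUME_ML = 0.005     # 5 ul spotted per dilution
START_OD600 = 0.5
N_DILUTIONS = 5            # number of ten-fold steps

def dilution_series(start_od=START_OD600, steps=N_DILUTIONS):
    """Return (step, expected cells per spot) for each ten-fold dilution."""
    start_density = start_od * OD_TO_CELLS_PER_ML  # cells/ml
    series = []
    for i in range(steps):
        density = start_density / (10 ** i)
        series.append((i, density * SPOT_VOLUME_ML))
    return series

for step, cells in dilution_series():
    print(f"10^-{step} dilution: ~{cells:,.0f} cells per 5 ul spot")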
Construction of yeast expression plasmids

General molecular biology techniques, as well as transformation of yeast by the lithium acetate method, were performed according to standard procedures. All plasmids and oligonucleotides used in this work are listed in S1 and S2 Tables, respectively. pYES2-GFP, pYES3-GFP-Akt1 and YCpLG-PI3Kα-CAAX were previously described [26,29,50]. To clone B. abortus btpA into the URA3-based pYES2-GFP plasmid, this gene was amplified from the pGEM-T-easy-BtpA plasmid [13]. The primers used for amplification of btpA (named BtpA-UP and BtpA-LO) carried BamHI and EcoRI sites, respectively. PCR products were cleaved with these restriction enzymes and inserted into the same sites of pYES2-GFP, generating the pYES2-GFP-BtpA plasmid. To obtain the pYES2-GFP-BtpB construct, btpB was amplified from the pGEM-T-Easy-BtpB plasmid [13] using primers BtpB-UP and BtpB-LO, cleaved with the BamHI and XbaI restriction enzymes, and the insert obtained was cloned into the pYES2-GFP plasmid. pYES3-GFP-BtpB was generated in a similar way but using primers BtpB-UP-pYES3 and BtpB-LO-pYES3 and BamHI sites to clone into the TRP1-based pYES3-GFP plasmid. The latter plasmid had been constructed by subcloning the GFP sequence of pYES2-GFP with HindIII/BamHI sites into pYES3 (Invitrogen).

In order to obtain the truncated versions of both BtpA and BtpB, we generated pYES2-GFP-BtpA-N (1-126), pYES2-GFP-BtpA-TIR (127-275), pYES2-GFP-BtpB-N (1-139) and pYES2-GFP-BtpB-TIR (140-292) by amplifying the corresponding DNA fragments from pYES2-GFP-BtpA and pYES2-GFP-BtpB, using for the N-terminal regions the BtpA-UP + BtpA-126stop-LO and BtpB-UP + BtpB-140stop-LO primers, respectively. In the case of the C-terminal regions, the BtpA-BamHI127-UP + BtpA-LO and BtpB-BamHI140-UP + BtpB-LO primers were used. All upper primers carried a BamHI restriction site and lower primers an EcoRI site, except BtpB-LO, which carried an XbaI sequence. The PCR products were cleaved with their corresponding restriction enzymes and inserted into the same sites of the pYES2-GFP plasmid. Additionally, both C-terminal regions were also cloned into the pYES3-GFP vector: the BtpA-TIR (127-275) fragment was directly subcloned from pYES2-GFP-BtpA, whereas BtpB-TIR (140-292) was PCR amplified with the BtpB-BamHI140-UP and BtpB-EcoRI-LO primers and then inserted into the BamHI-EcoRI sites of pYES3-GFP. DpnI-based site-directed mutagenesis was performed using the QuikChange kit (Agilent). To generate the catalytically inactive mutants in both the full-length and TIR domain constructs of BtpA and BtpB in the pYES2-GFP and pYES3-GFP backbones, the mutations were introduced with primers Mut-BtpAE217A-UP and Mut-BtpAE217A-LO, and Mut-BtpBE234A-UP and Mut-BtpBE234A-LO, respectively. To introduce the three selected loss-of-function BtpB mutations into the TIR domain construct, primers Mut-BtpBD158G-UP and Mut-BtpBD158G-LO, Mut-BtpBS162P-UP and Mut-BtpBS162P-LO, or Mut-BtpBY225C-UP and Mut-BtpBY225C-LO were used.

Construction of mammalian expression vectors

The DNA fragments encoding amino acid residues 1-139 of BtpB (BtpB-N), 140-292 of BtpB (BtpB-TIR) and full-length BtpB from Brucella abortus were cloned into the Gateway pDONR (Life Technologies, ThermoFisher Scientific) and then cloned into the pENTRY (Life Technologies, ThermoFisher Scientific) GFP vectors according to the manufacturer's instructions. The following primers were used: BtpB Fw, BtpB Rv, BtpB-TIR Fw and BtpB-N Rv.
BtpB E234A was constructed as above from pYES3-GFP-BtpB E234A with the primers used to amplify BtpB.

Construction of Brucella complementing vectors

The DNA fragments encoding btpA and btpB were cloned into the plasmid pBBR-1-MCS4. The primers used for amplification of btpA carried a SpeI restriction site (forward primer) and an EcoRI restriction site (reverse primer). The primers used for amplification of btpB carried a SacI restriction site (forward primer) and a SpeI restriction site (reverse primer). The following primers were used: BtpA Fw-pBBR-1-MCS4, BtpA Rv-pBBR-1-MCS4, BtpB Fw-pBBR-1-MCS4, BtpB Rv-pBBR-1-MCS4. The PCR products were cleaved with their corresponding restriction enzymes and inserted into the same sites of the digested pBBR-1-MCS4 plasmid. The BtpA E217A and BtpB E234A complementation vectors were obtained from pBBR-1-MCS4-BtpA and pBBR-1-MCS4-BtpB, respectively, using QuikChange site-directed mutagenesis. The mutations were introduced with primers BtpAE217A Fw, BtpAE217A Rv, BtpBE234A Fw and BtpBE234A Rv. To clone BtpA E217A and BtpB E234A into pFlag-TEM, these genes were amplified from pBBR-1-MCS4-BtpA E217A and pBBR-1-MCS4-BtpB E234A, respectively. The primers used for amplification of btpA E217A and btpB E234A carried an XbaI restriction site (forward primer) and a PstI restriction site (reverse primer). The PCR products were cleaved with their corresponding restriction enzymes and inserted into the same sites of the pFlag-TEM plasmid. The primers used were: TEM-BtpAE217A Fw-pFlagTEM, TEM-BtpAE217A Rv-pFlagTEM, TEM-BtpBE234A Fw-pFlagTEM, TEM-BtpBE234A Rv-pFlagTEM.

Yeast cell microscopy and immunofluorescence

For fluorescence microscopy of live yeast to visualize GFP, cells were cultured in SR medium for 18 h at 30 °C; then the appropriate amount of these cultures was suspended in fresh SG to reach an OD600 of 0.3, and they were incubated for an additional 4-6 h for GAL1 promoter induction. Cells were harvested by centrifugation and observed directly. To monitor vacuolar morphology and endocytosis, staining with FM4-64 was performed as described [21]. Nuclear labelling was performed by adding DAPI at 1:1000 directly to the harvested cells in vivo, followed by one wash with PBS. To observe actin, yeast cells were fixed and treated with rhodamine-conjugated phalloidin (Sigma) as described [23]. Indirect immunofluorescence on yeast cells was performed as previously described [51]. Antibodies were used as follows: as primary antibody, monoclonal rat anti-alpha-tubulin (Serotec, YOL1/34) at 1:500 dilution; as secondary antibody, Alexa Fluor 594 anti-rat dye (Life Technologies) at 1:1000 dilution. DAPI was added at 1:1000 for nuclear labelling. Cells were examined in an Eclipse TE2000U microscope (Nikon, Tokyo, Japan) and digital images were acquired with an Orca C4742-95-12ER charge-coupled-device camera (Hamamatsu Photonics, Hamamatsu City, Japan) and processed with HCImage and ImageJ software.

HeLa cell immunofluorescence, antibodies and microscopy

Cells were fixed in Antigenfix (DiaPath) at room temperature for 10 min, or in methanol at -20 °C for 3 min for tubulin staining. Cells were then labelled at RT with the primary antibody mix diluted in 0.1% saponin in PBS with 1% BSA and 10% horse serum for blocking. Primary antibody was incubated for 1 h followed by two washes in 0.1% saponin in PBS.
Secondary antibodies were then mixed and incubated for a further 30 min, followed by two washes in 0.1% saponin in PBS, one wash in PBS and one wash in distilled water before mounting with Prolong Gold. Samples were examined on a Zeiss LSM800 laser scanning confocal microscope for image acquisition. Images of 1024×1024 pixels were then assembled using ImageJ. Primary antibodies used were mouse anti-beta-tubulin clone E7 (Developmental Studies Hybridoma Bank) at 1:250 or mouse anti-vimentin (V9) at 1:100 (Sigma). Secondary antibodies used were anti-rabbit or anti-mouse conjugated with Alexa Fluor 488, 555 or 647, all from Jackson ImmunoResearch. When necessary, phalloidin-568 (1:1000) was used to label the actin cytoskeleton and DAPI nuclear dye (1:1000) for the host cell nucleus. When indicated, cytochalasin D was added for 2 h at a final concentration of 1 µg/ml.

HeLa cell endocytosis assay

Transferrin conjugated with Alexa Fluor 568 (Invitrogen) and EGF conjugated with Alexa Fluor 555 (Invitrogen) were added at 10 µg/ml and 50 µg/ml, respectively, for 10 min at 37 °C. Cells were then placed on ice, washed twice with ice-cold PBS and fixed in 3% paraformaldehyde for 15 min, followed by three washes with PBS.

Immunodetection by western blotting

Standard procedures were used for yeast cell growth, collection, breakage, protein separation by SDS-PAGE, and transfer to nitrocellulose membranes. Anti-P-MAPK antibody (anti-phospho-p44/p42 MAPK (Thr-202/Tyr-204), New England Biolabs), diluted 1:1000, was used to detect dually phosphorylated Slt2, Kss1 and Fus3 MAPKs. Slt2 protein was detected using a polyclonal anti-Slt2 antibody [52], diluted 1:1000. To detect the phosphorylated Hog1 high-osmolarity pathway MAPK, anti-P-p38 antibody (Sigma) at 1:1000 was used. Heterologously expressed Akt1 was detected with anti-Akt1 (Cell Signaling) for total protein and with anti-P-Akt1(Thr308) (Cell Signaling) for the phosphorylated forms. GFP fusion proteins were detected using monoclonal anti-GFP antibodies (Living Colors, JL-8) diluted 1:2000. As loading control, either a monoclonal anti-actin antibody (MP Biomedicals) diluted 1:2000 or a yeast-specific polyclonal anti-glucose-6-phosphate dehydrogenase antibody (Sigma) diluted 1:50000 was used. In all cases, primary antibodies were detected using IRDye-680 or -800 anti-rabbit or anti-mouse antibodies (Li-Cor Biosciences), or Alexa-680 anti-mouse (Invitrogen), with an Odyssey Infrared Imaging System (Li-Cor Biosciences).

Yeast whole-genome ORF overexpression library screening

A pooled S. cerevisiae whole-genome ORF library (Yeast ORF collection, GE Healthcare; 4500 URA3-based plasmids for overexpression under the GAL1 promoter, protein A-tagged) was split into three groups. The W303-1A wild-type yeast strain was co-transformed with pYES3-GFP-BtpB and one of the three library pools (S3A Fig). Suppression of BtpB toxicity by overexpression of a specific cDNA was tested by the ability of cells to grow on SG agar plates. Twenty different candidates were selected and tested for specificity by co-transformation with YCpLG-PI3Kα-CAAX [26], another construct toxic for yeast cells that acts by a different mechanism. Eventually, 7 positive ORFs, listed in S3 Table, were found to specifically rescue BtpB toxicity.
Random mutagenesis of btpB and isolation of mutants

The region of pYES2-GFP-BtpB encoding amino acids 118 to 292, delimited by the mutazarUP and mutazarLO primers, was amplified by PCR using low-fidelity Taq DNA polymerase (Biotools) under standard conditions. The PCR products were purified with a QIAquick Gel Extraction kit (Qiagen), and 5 µg of DNA were co-transformed into YPH499 yeast cells with 1 µg of the largest fragment of the pYES2-GFP-BtpB plasmid resulting from digestion with BsiWI/XbaI. Such BsiWI/XbaI digestion of pYES2-GFP-BtpB produces a gap, so the btpB allele can only be reconstructed upon recombination with the amplicon by in vivo gap repair. Recombinants were recovered by positive selection, plating the transformation mixture onto galactose-based agar medium. The pYES2-GFP-BtpB-derived plasmids were isolated from growing clones, amplified in E. coli, verified by restriction analysis and transformed again into yeast cells to verify that they had lost the ability to inhibit yeast cell growth. Mutations were identified in the positive clones by DNA sequencing.

Yeast cellular ATP measurement by luciferase assay

ATP levels were measured using the ENLITEN ATP Assay System (Promega) following the manufacturer's instructions. Yeast cells were cultured in SR for 18 h, then fresh SG was added for GAL1-driven gene expression to a final OD600 of 0.3, and cells were cultured for 3 h at 30 °C. Approximately 1.8×10⁷ cells were harvested from 3 mL of culture and concentrated by centrifugation for 3 min at 2500 rpm at 4 °C. Pellets were washed with 1 mL PBS at 4 °C and stored at -80 °C for further analysis. For ATP extraction, pellets were resuspended in trichloroacetic acid (TCA, 5%, 10 µL) and immediately neutralized using 500 µL of Tris-acetate-EDTA buffer (TAE 1×; 40 mM Tris base, 20 mM acetic acid, and 1 mM EDTA, pH 7.75). The samples were centrifuged for 15 sec at 13000 rpm and then diluted 1:100 in more TAE 1×. Ten µL of this solution were mixed with 100 µL of the rL/L reagent provided by the kit, and luminescence was measured using an OPTOCOMP1 luminometer (MGM Instruments). A standard curve for quantification was prepared using the kit's reagents.

Yeast cellular NAD+ measurement by mass spectrometry

Yeast cells were cultured as stated for the ATP luciferase assay. Approximately 6×10⁷ cells were harvested from 10 mL of culture and concentrated by centrifugation for 3 min at 2500 rpm at 4 °C. Pellets were washed with 1 mL PBS at 4 °C and stored at -80 °C for further analysis. Our yeast NAD+ extraction protocol is a simplified version of the one described by Sporty et al. [53]. Pellets were resuspended in ammonium acetate (600 µL of 50 mM in MS-grade water) and approximately 300 µL of 0.5-0.75 mm diameter glass beads (Retsch) were added to the tube. Cells were bead-blasted at 5.5 m/s using a FastPrep-24 (MP Biomedicals) for 30 sec twice, with a 5 min incubation on ice in between. Supernatant was recovered by perforation of the bead-blasting tube's base with a red-hot 0.9 × 40 mm needle. The pierced tube was placed inside a capless 1.5 mL microfuge tube and both tubes were centrifuged together for 3 min at 2000 rpm at 4 °C. This first cell lysate was stored in a new 1.5 mL microfuge tube on ice. The glass beads in the bead-blasting tube were then washed one more time with 600 µL of a 3:1 v/v mixture of acetonitrile (MS grade) and ammonium acetate (50 mM in MS-grade water). The rinsate was then mixed with the first lysate. The mixture was clarified by centrifugation for 3 min at 13000 rpm at 4 °C and the supernatant was transferred to an ice-cold 1.5 mL microfuge tube. To standardize results, 150 µL of this lysate were kept for protein concentration measurement by the Bradford method. Samples were filtered with a 0.22 µm PTFE filter (JASCO) and analyzed by liquid chromatography (LC) coupled to a QQQ mass spectrometer equipped with a turbo ion spray source operating in positive ion mode (LCMS 8030, Shimadzu). Chromatographic separation was performed on a Gemini C18 analytical column (50 mm × 2.1 mm I.D., 2.7 µm particle size; Poroshell 120 PhenylHexyl). Injection volume was 10 µL. Samples were delivered over 11 min at a flow rate of 0.3 mL/min through the analytical column at 45 °C. The mobile phase was composed of A (3% methanol, 10 mM tributylamine, 3 mM acetic acid, 0.1% formic acid in LC-grade water) and B (methanol). The mobile phase composition began with 0% B and was increased to 45% B in 2 min, to 50% in 5 more minutes and up to 95% in one minute. The mobile phase was then maintained at 95% B for 2 min, followed by re-equilibration with 0% B over the next 2 min before injection of the next sample. Quantification of NAD+ was performed in multiple reaction monitoring (MRM) mode to monitor the parent ion-product ion (m/z) transitions of the analyte. Mass transitions of m/z 662.10 to 540.00 (CE = +16 V) were used for quantification and m/z 662.10 to 407.90 (CE = +30 V) for identification, with a dwell time of 100 ms. The calibration curve was determined by plotting the peak area of the analyte (Y) versus the nominal concentration (X) with least-squares linear regression. All analyses were made under ISO 9001:2008 quality management system certification.
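Both the luciferase ATP assay and the LC-MS NAD+ quantification above rest on the same step: a least-squares standard curve from which unknowns are back-calculated (the colorimetric total-NAD assay described below works the same way). The following minimal Python sketch illustrates that calculation; the standard concentrations, signals and example readings are made-up values, not data from this work.

import numpy as np

# Sketch of standard-curve quantification: fit a least-squares line to the
# standards, back-calculate the unknown, then apply the normalizations
# described in the text (sample volume, dilution factor).

standard_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])   # pmol (assumed)
standard_signal = np.array([0.02, 0.11, 0.20, 0.39, 0.78])  # signal (assumed)

slope, intercept = np.polyfit(standard_conc, standard_signal, 1)

def quantify(signal, sample_volume_ul, dilution_factor=1.0):
    """Analyte (pmol/ul) = standard-curve pmol / sample volume x dilution."""
    pmol = (signal - intercept) / slope
    return pmol / sample_volume_ul * dilution_factor

# Example: a lysate reading of 0.30, 50 ul loaded, diluted 1:2 beforehand
print(f"{quantify(0.30, sample_volume_ul=50, dilution_factor=2):.3f} pmol/ul")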
Brucella infection of HeLa and iBMDM cells

Immortalized bone marrow-derived macrophages (iBMDM) were obtained as previously described [54]. Cells were seeded overnight at 1×10⁵ cells/well for HeLa or 0.8×10⁵ cells/well for iBMDM (6-well plates), and tryptic soy broth Brucella cultures were inoculated and incubated for 16 h with agitation at 37 °C. Cells were inoculated at an MOI of 500, centrifuged at 400 g for 10 min and incubated for a further 1 h at 37 °C for HeLa cells, and 30 min for iBMDM and RAW cells. Cells were then washed 3 times with media and incubated for 1 h with media containing 50 µg/ml gentamycin and 100 µg/ml streptomycin. After this time, the media was replaced to reduce the concentrations to 10 µg/ml gentamycin and 20 µg/ml streptomycin.

Total NAD colorimetric assay

Total NAD (NAD+ and NADH) was extracted and quantified from cell lysates (from 2 wells of a 6-well plate for each sample) using the NAD+/NADH Colorimetric Assay Kit (Abcam, ab65348) following the manufacturer's instructions. Briefly, the amount of total NAD was calculated from a standard curve (pmol), divided by the sample volume added to the reaction well (µl) and multiplied by the dilution factor.

TEM assay

RAW cells were seeded in a 96-well plate at 1×10⁴ cells/well overnight. Cells were then infected at an MOI of 500 by centrifugation at 4 °C, 400 g for 5 min, followed by 1 h at 37 °C, 5% CO2. Cells were washed with HBSS containing 2.5 mM probenecid. Then CCF2 mix (as described in the Life Technologies protocol) and probenecid were added to each well and incubated for 1.5 h at room temperature in the dark. Cells were finally washed with PBS, fixed using 3% PFA and analysed immediately by confocal microscopy (Zeiss LSM800).

Statistical analysis and software

All data sets were tested for normality using the Shapiro-Wilk test. When a normal distribution was confirmed, a one-way ANOVA test with Tukey's correction was used for statistical comparison of multiple data sets, and Student's t-test for two-sample comparisons. For data sets that did not show normality, a Kruskal-Wallis test was applied with Dunn's correction, or a Mann-Whitney U-test for two-sample comparisons. 3D protein images were generated using PyMOL (Schrödinger), taking advantage of previously published structural data of BtpA (PDB: 4LZP) and RUN1-TIR + NADP+ (PDB: 6O0W).
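The statistical decision tree described above maps directly onto a few scipy calls. Below is a minimal sketch with made-up illustration values; the post-hoc corrections (Tukey, Dunn) applied in the actual analyses are omitted for brevity.

from scipy import stats

# Sketch: Shapiro-Wilk normality test on each group, then one-way ANOVA
# (normal data) or Kruskal-Wallis (non-normal data), as described above.
# The data below are illustrative values, not study results.

groups = {
    "wild-type": [12.1, 11.8, 12.5, 11.9],
    "mutant_A":  [9.0, 8.7, 9.4, 9.1],
    "mutant_B":  [10.2, 10.5, 9.8, 10.1],
}

def compare_groups(groups, alpha=0.05):
    samples = list(groups.values())
    normal = all(stats.shapiro(s).pvalue > alpha for s in samples)
    if normal:
        test_name, result = "one-way ANOVA", stats.f_oneway(*samples)
    else:
        test_name, result = "Kruskal-Wallis", stats.kruskal(*samples)
    return test_name, result.pvalue

name, p = compare_groups(groups)
print(f"{name}: p = {p:.4g}")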
The time-point of 4 h was selected to enable a comparison between the wild-type and the ∆virB9 mutant strain, which lacks a functional type 4 secretion system (T4SS). These experiments were carried out in RAW macrophages, allowing infection of most cells with B. abortus. BAB1_0296 (BAB_RS18145) was efficiently translocated into host cells in a manner significantly dependent on the T4SS, in contrast to BAB1_0466 (Figure 20A and B). We should note that a small number of cells infected with the ∆virB9 strain expressing BAB1_0296 were positive for CCF2 compared to BAB1_0466 (Figure 20B). These first TEM experiments were carried out by a former postdoc in the lab, Thais Lacerda, prior to my arrival. At 24 h post-infection, we could still not detect translocation of BAB1_0466 with this reporter system (Figure 20C). Hence, we can conclude that BAB1_0296 is likely to be a B. abortus effector, and we have named it NyxA, inspired by Greek mythology: Nyx is the Greek personification of the night, daughter of Chaos, which seemed appropriate to us after remaining in the dark for so long regarding its function.

A Brucella genome analysis then identified another gene, BAB1_1101 (BAB_RS21200), encoding a protein with 82% identity to NyxA, but without a potential CAAX motif (Figure 21). For consistency, we have named this second protein NyxB; we found that it was also translocated by B. abortus into host cells, albeit at lower levels than NyxA at both 4 and 24 h post-infection (Figure 20A and C). Curiously, the translocation of NyxB was independent of the T4SS (Figure 20A).

To confirm that NyxA was indeed translocated across the vacuolar membrane during infection, we created new strains with NyxA fused at its N-terminus with either HA or 4HA epitope tags. We then infected HeLa cells, a well-characterized model of B. abortus infection nicely suited for microscopy studies. We could not detect any HA-NyxA at any of the time-points analysed. However, we could observe translocated 4HA-NyxA at 48 h post-infection, accumulating in cytosolic structures in the vicinity of multiplying bacteria (Figure 22A). This was also the case for 4HA-NyxB (Figure 22A, bottom panel). Analysis of fluorescence intensity profiles along a defined straight line across the 4HA-positive structures confirmed that the majority of the 4HA signal detected does not correspond to intra-vacuolar NyxA or NyxB. In a proportion of infected cells, 4HA-NyxA and NyxB positive structures could be detected in the nucleus (Figure 22B), suggesting these effectors may also be targeting the host nuclei.

To determine if these two effectors could interact, we purified them both and carried out pull-down experiments in vitro. We found that His-NyxA could pull down purified NyxB (Figure 25A). Furthermore, microscale thermophoresis confirmed this interaction and determined a Kd of 482 ± 126 nM (Figure 25B). Similar results were obtained using His-NyxB against NyxA (Figure 26A and B). Together, these results show that NyxA and NyxB interact and can accumulate in cytosolic structures as well as in the nucleus, suggesting they are likely cycling between these two cellular compartments.
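A Kd such as the 482 ± 126 nM reported above can be extracted from a microscale thermophoresis titration by fitting a binding isotherm to the normalized signal. The sketch below assumes a simple 1:1 binding model and simulates the signal around the reported value purely for illustration; the sixteen-point 1:2 dilution series mirrors the one listed in the Methods (Thermophoresis).

import numpy as np
from scipy.optimize import curve_fit

# Sketch: recover a Kd from an MST titration by fitting a 1:1 isotherm.
# The titration series is ~1:2 dilutions from 217.6 uM (nominal values in
# the text are rounded); the "signal" is simulated, not measured data.

ligand_uM = 217.6 / (2 ** np.arange(16))[::-1]   # ~0.0066 ... 217.6 uM

def isotherm(L, kd, amplitude):
    """Fraction bound for 1:1 binding (titrant-in-excess approximation)."""
    return amplitude * L / (kd + L)

rng = np.random.default_rng(2)
true_kd_uM = 0.482                                # value reported in the text
signal = isotherm(ligand_uM, true_kd_uM, 1.0) + rng.normal(0, 0.02, 16)

(kd_fit, amp_fit), _ = curve_fit(isotherm, ligand_uM, signal, p0=(1.0, 1.0))
print(f"fitted Kd = {kd_fit*1000:.0f} nM (simulated truth: 482 nM)")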
NyxA and NyxB enriched nuclear structures are in close association with PML nuclear bodies

To determine the nature of the nuclear aggregates formed upon ectopic expression of NyxA and NyxB, we labelled cells with antibodies recognizing different nuclear bodies (Figure 27A and B). We found the highest level of colocalization (measured by the Pearson correlation coefficient, PCC) between NyxA and NyxB. Both effectors also extensively colocalized with PML, unlike nuclear speckles, suggesting a closer association with PML nuclear bodies. As many proteins that accumulate in or transit through these nuclear compartments often interact directly with PML itself, we next carried out a pull-down experiment with either purified His-NyxA or NyxB incubated with a cellular extract (Figure 27C). No interaction with PML was detected, suggesting these Brucella effectors do not interact with PML in the conditions tested. Interactions with endogenous PML were visualized by western blotting (upper blot), and column binding with anti-His (lower blot). Non-bound fractions (F1 and F2), last wash (W) and elution (E) are shown for each sample and the molecular weights indicated (kDa). The cell extract is also shown.

NyxA and NyxB do not interact with SUMO in vitro

As all proteins that associate with PML nuclear bodies are SUMOylated, we wondered if this was the case for NyxA and NyxB. We therefore investigated whether NyxA and NyxB could be covalently conjugated with SUMO in vitro. These experiments were done by Mariam Taktek, a Canadian student whom I supervised. We purified both NyxA and NyxB, as well as SUMO2 and UBC9, the SUMO ligase, and carried out an in vitro SUMOylation assay. As a positive control, GST-RanGAP1 was used. A single band was detected in the absence of ATP, indicated with a red asterisk at 46.5 kDa, whereas in the presence of ATP a second band was visible (red square) corresponding to the SUMOylated protein (Figure 28). In contrast, no SUMOylated NyxA or NyxB was detected, which would have an approximate molecular weight of 25 kDa (Figure 28). Only the non-SUMOylated forms are observed in the presence of ATP (blue asterisk).

The Nyx effectors interact with SENP3, which is necessary for efficient B. abortus intracellular multiplication

To identify potential host-interacting partners of the Brucella Nyx effectors, our collaborator Jean-Paul Borg (CRCM, Marseille) performed a yeast two-hybrid screen. One of the main proteins identified was SENP3, and taking into account the nuclear localization of the Nyx effectors, we focused on this potential target. However, we cannot exclude that the other candidates identified are also relevant (Table 4). To confirm the interaction between SENP3 and the Nyx effectors, we attempted to purify full-length SENP3. We were not successful and instead focused on the purification of the N-terminal region of SENP3 that comprised all the yeast two-hybrid hits (SENP3 7-159). His-V5-tagged NyxA or NyxB was able to pull down SENP3 7-159, confirming a direct interaction of these effectors with the N-terminal domain of SENP3 (Figure 29B). No unspecific binding to the column was detected (right panel, Figure 29B). We cannot exclude the involvement of other regions of SENP3, but our data show that SENP3 7-159 is sufficient for this interaction.
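The PCC used above to score colocalization is a pixel-wise Pearson correlation computed inside a mask. The following minimal sketch illustrates the underlying computation on synthetic numpy arrays; the actual quantification used an ImageJ macro and CellProfiler, as detailed in the Methods.

import numpy as np

# Sketch: Pearson correlation coefficient (PCC) between two fluorescence
# channels restricted to a boolean mask (e.g., the nucleus). Toy data only.

def pearson_colocalization(ch1, ch2, mask):
    """PCC between two channel images over the pixels selected by mask."""
    a = ch1[mask].astype(float)
    b = ch2[mask].astype(float)
    a -= a.mean()
    b -= b.mean()
    return (a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum())

rng = np.random.default_rng(0)
ch1 = rng.random((64, 64))
ch2 = 0.7 * ch1 + 0.3 * rng.random((64, 64))   # partially correlated channel
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True                       # central square "nucleus"
print(f"PCC = {pearson_colocalization(ch1, ch2, mask):.2f}")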
As SENP3 seems to be the host target of NyxA and NyxB, we next sought to determine its relevance during infection. Amandine Blanco in the lab carried out these experiments. Depletion of SENP3 was efficiently achieved only after 72 h of siRNA treatment (Figure 30A and B), after which cells were infected with wild-type B. abortus. We observed a decrease in CFU counts at 48 h post-infection following the depletion of SENP3 (Figure 30C). These results were confirmed by microscopy counts, which showed a reduction in the percentage of cells with more than ten bacteria at 48 h post-infection when SENP3 was depleted (Figure 30D).

[Figure 30 legend, fragment] … but siRNA for 72 h were infected with wild-type B. abortus expressing DSRed and the percentage of cells with either 1 to 5, 6 to 10 or more than 10 bacteria per cell was quantified by microscopy at 48 h post-infection. Data correspond to means ± 95% confidence intervals from 3 independent experiments, with more than 500 cells counted for each siRNA treatment. A two-way ANOVA with Bonferroni correction was used to compare the bacterial counts obtained in siControl-treated cells with siSENP3-depleted cells, for each subgroup (1-5, 6-10 or >10 bacteria/cell).

The NyxB structure defines a novel family of effectors

To gain further insight into the function of NyxA and NyxB and their interaction with SENP3, Laurent Terradot and Virginie Gueguen-Chaignon (PSF) solved the crystal structure of NyxB at 2.5 Å (Figure 31). The asymmetric unit of the crystal contains twelve monomers of NyxB, but no significant differences were found between them and thus only the structure of subunit A is described hereafter. The NyxB model encompasses residues 17 to the C-terminal residue 134. The lack of density for residues 1 to 16 suggests that this part is flexible. NyxB has a mixed α/β fold with five β strands and six α-helices, with a core made of helices α2 to α4 and a small curved β-sheet formed by β3, β4 and β5. The longest helix, α4, interacts with α2 and α6 and is connected to the core via a loop containing two short 3₁₀ helices designated α3a and α3b (Figure 31A). Helix α1 is loosely associated with the rest of the protein core and is positioned by the preceding and following loops. In particular, a β-hairpin formed by β1 and β2 packs against β5 and anchors α1 to the protein core. A search for structural homologues did not reveal any significant homology, making the NyxB structure a prototype for this protein family.

Size exclusion chromatography coupled to multi-angle light scattering (MALS) experiments indicate that both NyxA and NyxB form dimers (Figure 32). Two putative dimers (dimer 1 and dimer 2) were identified in the asymmetric unit (Figure 32). The association of dimer 1 (chains A and H) buries a total of 530 Å² (Figure 32B), while dimer 2 (chains A and J) relies on fewer interactions, burying a total of 400 Å² (Figure 32B). To determine which dimer(s) existed in solution, we used size exclusion chromatography coupled to small-angle X-ray scattering (SEC-SAXS), carried out by Célia Bergé in Laurent Terradot's lab on the NyxB and NyxA proteins. The data indicate that the Rg of NyxB and NyxA are 2.9 and 2.7 nm, with Dmax values of 10.2 nm and 9.6 nm, respectively. These data are in agreement with a dimeric form of the proteins, given that the theoretical Rgs for dimers 1 and 2 are 2.4 nm and 2 nm, respectively, while the Rg of a NyxB monomer is 1.4 nm. The higher Rg observed in solution might be due to the disordered region present at the N-terminal portion of each NyxB monomer (residues 1-16) and of NyxA (residues 1-12). Comparison of the NyxB experimental SAXS data with theoretical curves obtained for NyxB dimer 1 or dimer 2 indicates that the best fit is obtained with dimer 1 (χ² = 1.3) compared to dimer 2 (χ² = 3.3) (Figure 32C and D). Ab initio modelling using the SAXS data confirmed that the envelope obtained fits NyxB dimer 1 better. Similar results were obtained using NyxA (Figure 32C). Collectively, the X-ray, MALS and SEC-SAXS data clearly establish that dimer 1 (Figure 31A) is the conformation of NyxB and NyxA in solution. This NyxB dimer relies on reciprocal …
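Theoretical Rg values like those quoted above can be computed directly from a structural model's coordinates. The sketch below computes the plain, mass-unweighted radius of gyration on a synthetic point cloud; dedicated SAXS tools additionally account for scattering contrast and the hydration shell, so this is only an illustration of the underlying formula.

import numpy as np

# Sketch: radius of gyration of a coordinate set,
# Rg = sqrt(mean(|r_i - r_center|^2)). Toy coordinates, for illustration.

def radius_of_gyration(coords):
    """Rg (same units as input) of an (N, 3) array of point coordinates."""
    coords = np.asarray(coords, dtype=float)
    center = coords.mean(axis=0)
    return np.sqrt(((coords - center) ** 2).sum(axis=1).mean())

rng = np.random.default_rng(1)
cloud = rng.normal(scale=8.0, size=(1000, 3))   # fake "atom" positions, in A
print(f"Rg = {radius_of_gyration(cloud):.1f} A")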
Identification of the Nyx-SENP3 interacting groove

Taking advantage of the structural information on NyxB, we looked for potential interacting sites. Analysis of the NyxB surface revealed an acidic pocket delineated by residues Y66, D80 and E82 within an acidic patch consisting of amino acids D72, E73, Y70, Y86 and Y63, residues that are strictly conserved in NyxA (Figure 33A). In the context of the dimer, these surfaces are juxtaposed to form an extended concave negatively charged area of around 2000 Å² (Figure 33A).

[Figure 33 legend, fragment] The inset shows a close-up view of the area with residues' side chains displayed as ball-and-sticks and mutated residues (E82, Y66 and D80) coloured in cyan. (B) Pull-down assay with His-NyxA, His-NyxB or the specific acidic-groove mutants (His-NyxA MAG or His-NyxB MAG) immobilized on Ni-NTA resins and incubated with a HeLa cell extract. An empty column was used as a control for non-specific binding. Interactions with endogenous SENP3, NPM1 or Histone 3 (H3) were visualized by western blotting using the corresponding antibody, and column binding with anti-His (lower blot). Non-bound fractions (F1 and F2), last wash (W) and elution (E) are shown for each sample and the molecular weights indicated (kDa). The cell extract and the different purified Nyx inputs are also shown.

We mutated Y66, D80 and E82 to obtain His-NyxB MAG, where "MAG" stands for mutated acidic groove, and Y62, D76 and E78 to obtain His-NyxA MAG. Purified NyxA or NyxB was indeed able to pull down endogenous SENP3 from a HeLa cell extract, confirming their interactions (Figure 33B). A decreased ability of both His-NyxA MAG and His-NyxB MAG to interact with SENP3 was observed (Figure 33B). However, we could only detect a small amount of endogenous SENP3 in the cell extract with our antibody. As it is well established that when SENP3 is pulled down from a cell extract its major cellular partner NPM1 can be easily detected by western blotting, we next probed the same membrane with an antibody against NPM1. NPM1 was not pulled down by His-NyxA MAG or His-NyxB MAG, confirming that these mutations strongly impaired their ability to bind the SENP3-NPM1 complex (Figure 33B). As a negative control, the membrane was also probed for Histone 3, an abundant nuclear protein that did not bind to NyxA or NyxB (Figure 33B). Together, these results identify SENP3 as a target of the Brucella Nyx effectors and pinpoint the acidic groove responsible for this interaction in vitro.

The Brucella Nyx effectors induce delocalisation of SENP3

After showing that NyxA and NyxB interact directly with SENP3, a eukaryotic protein mainly found in the nucleoli, we were intrigued by what impact these effectors could have on SENP3 (Figure 5A, top panel).
Ectopic expression of HA-tagged NyxA resulted in a marked reduction of endogenous nucleolar SENP3, which instead formed aggregates in the nucleoplasm (Figure 34A and B). As SENP3 redistribution from the nucleoli to the nucleoplasm could be due to starvation or mild oxidative stress [268,257], we ectopically expressed the mutant HA-NyxA MAG, which we know is less able to interact with SENP3. The mutation of the acidic interaction groove impaired delocalisation of SENP3 by NyxA, confirming this effect was due to NyxA's direct interaction with SENP3. Consistently, analysis of the same images for colocalization of SENP3 with HA-NyxA revealed important recruitment, dependent on the acidic groove (Figure 34C). All the cells quantified are shown, with each colour corresponding to an independent experiment and its corresponding mean (N = 4). NyxB was also able to delocalise SENP3 from the nucleoli and recruit SENP3 via direct interaction, implicating both effectors in this phenotype (Figure 34B, C and D).

Next, we investigated the prevalence of these phenotypes during infection. We infected HeLa cells with either the wild-type B. abortus strain, the mutant lacking nyxA, or a complemented strain expressing nyxA from the chromosome under the control of its own promoter. Analysis of the level of SENP3 retained in the nucleoli during infection showed a significant lack of nucleolar SENP3 in cells infected with the wild-type B. abortus strain, in contrast with the ∆nyxA strain (Figure 35A and B). The wild-type phenotype was partially restored with the complemented strain but not with a ∆nyxA strain expressing nyxA MAG. This mutation impairs interaction with SENP3 in vitro but does not affect effector translocation during infection (Figure 36). This result shows that NyxA's interaction with SENP3 during infection prevents its accumulation in the nucleoli.

[Figure legend, fragment] … nyxA, its complemented strain nyxA::Tn7-nyxA or a complementing strain expressing the mutated acidic groove responsible for interaction with SENP3 (nyxA::Tn7-nyxA MAG). Data are represented and were analysed as in (B). Not all comparisons are shown. All microscopy images displayed have scale bars corresponding to 5 µm.

In the case of NyxB, we could also observe a statistically significant increase of SENP3 in the nucleoli in cells infected with the ∆nyxB strain compared to cells infected with wild-type B. abortus, although to a lesser extent than what we observed for ∆nyxA. However, we could not fully complement this phenotype, possibly due to the low sensitivity of this microscopy approach combined with a weaker phenotype (Figure 37A). As expected, a strain lacking both genes encoding NyxA and NyxB could not mislocalise SENP3 like the wild-type strain. Representative images of all strains are shown in Figure 37B. These phenotypes could not be evaluated in macrophages, as we could not detect SENP3 in bone marrow-derived macrophages with any of the commercial antibodies available.

We next analysed Pescadillo (PES1) (Figure 38A), which is involved in the maturation of the 28S and 5.8S rRNAs and, similarly to SENP3, requires interaction with NPM1 for nucleolar targeting [268]. A slight effect on NPM1 was observed in some B. abortus infected cells (Figure 20B), but to a lower extent than SENP3, suggesting that the bulk of nucleolar NPM1 remains unaffected. We next analysed the nucleolar accumulation of the VCP-like AAA-ATPase (NVL), also part of the complex of proteins involved in the maturation of the 28S and 5.8S rRNAs [282].
Unexpectedly, we observed a striking cytoplasmic accumulation of NVL in punctate structures in all B. abortus infected cells, rarely visible in non-infected cells (Figure 39A). It is important to note that significant NVL staining was still observed in the nucleoli of infected cells. To determine if the Brucella Nyx effectors induced the formation of these NVL-positive cytoplasmic structures, Amandine Blanco quantified their number in HeLa cells infected with either the wild-type strain or a mutant lacking both NyxA and NyxB. She found that cells infected with ∆nyxAnyxB had fewer NVL-positive structures (Figure 39B), suggesting the Nyx effectors contribute to their induction. This difference was not due to different intracellular bacterial numbers, as nyx mutant strains replicate as efficiently as the wild-type in HeLa cells (Figure 40). Strong induction of NVL cytoplasmic punctate accumulation was also observed in immortalized bone marrow-derived macrophages infected with wild-type B. abortus (Figure 39C), in a Nyx-dependent manner (Figure 39D). These results indicate that induction of NVL accumulation in cytosolic punctate structures during Brucella infection is dependent on SENP3. We must now complement the mutant phenotypes to confirm these results. It is important to note that depletion of SENP3 alone was not sufficient to induce NVL cytosolic accumulation, suggesting this phenotype is triggered by the infection. Therefore, we wondered whether it could constitute a generalized cellular response to intracellular bacteria. The same was observed with RPL5 labelling (Figure 45) and also when using 3Flag-tagged NyxA (Figure 46). These data strongly support the idea that these cytosolic Brucella-induced structures contain NVL, RPL5 and both Nyx effectors, at least transiently.

We then focused on other cellular processes resulting in cytoplasmic accumulation of nucleolar proteins. Recent reports have described NUFIP1 (nuclear fragile X mental retardation-interacting protein 1) as a nucleo-cytoplasmic shuttling protein that accumulates in the cytoplasm upon starvation [287]. In this context, NUFIP1 acts as a receptor for ribophagy, a specialized autophagy process dedicated to the degradation of ribosomes to generate nutrients and enhance cell survival [288]. We therefore wondered if NUFIP1 accumulation in the cytosol, a hallmark of ribophagy, could be enhanced in infected cells. Indeed, infection with wild-type B. abortus resulted in an increase in cytosolic NUFIP1 compared to non-infected cells (Figure 47A), with the appearance of vesicular structures. These were also enriched in NyxA (Figure 47B). However, these structures were negative for LAMP1, suggesting a block in fusion with lysosomes or that they constitute a different NUFIP1-positive compartment (Figure 48).

Materials and Methods

Cells and culture conditions

HeLa and RAW cell lines were obtained from ATCC and were grown in DMEM supplemented with 10% fetal calf serum. Immortalized bone marrow-derived macrophages [289] from C57BL/6J mice were maintained in DMEM supplemented with 10% FCS and 10% spent medium from L929 cells, which supplies M-CSF. All cells were routinely tested and were mycoplasma-free. When indicated, cells were starved with HBSS for 8 h.

Brucella strains, cultures and infections

B. abortus 2308 was used in this study. Wild-type and derived strains were routinely cultured in liquid tryptic soy broth and on agar.
Kanamycin (50 µg/ml) was added for cultures of DSRed-expressing strains, and ampicillin (50 µg/ml) for strains carrying pBBR1MCS-2 (4HA and 3Flag constructs) or complemented strains (with pUCTminiTn7T_Km). The list of all Brucella strains constructed is included in Supplementary Table 2. For infections, eukaryotic cells were plated on glass coverslips 18 h before infection and seeded at 2×10⁴ cells/well and 1×10⁵ cells/well for 24- and 6-well plates, respectively. Bacterial cultures were inoculated from isolated colonies in TSB and incubated for 16 h shaking overnight at 37 °C. Culture optical density was controlled at 600 nm. Bacterial cultures were diluted to obtain a multiplicity of infection (MOI) of 1:1000 in the appropriate cell culture medium. Inoculated cells were centrifuged at 400 × g for 10 min to initiate bacterial-cell contact, followed by incubation for 1 h at 37 °C and 5% CO2 for HeLa cells, and 30 min for iBMDMs. Cells were then washed 3 times with DMEM and incubated a further hour with complete media supplemented with gentamycin (50 µg/mL) to kill extracellular bacteria. The gentamycin concentration was then reduced to 10 µg/mL by replacing the media. At the different time-points, coverslips were fixed for immunostaining. For enumeration of bacterial colony-forming units (CFU), cells were lysed in 0.1% Triton for 5 min and serial dilutions were plated on tryptic soy agar.

… in an EcoRV-digested suicide vector (pNPTS138) for NyxB. These vectors were then electroporated into B. abortus and transformants were selected using the kanamycin resistance cassette of the pNPTS138 vector. The loss of the plasmid, concomitant with either deletion or a return to the wild-type phenotype, was then selected on sucrose, using the sacB counter-selection marker also present on the vector. Deletion (∆) strains were confirmed by PCR and sequencing. The complementing strain was constructed by amplifying either nyxA or nyxB and their corresponding promoter region (500 bp upstream) with the PrimeStar DNA polymerase (Takara). Insert and pUCTminiTn7T_Km were digested with BamHI and SpeI and ligated overnight. Transformants were selected on kanamycin 50 µg/mL and verified by PCR and sequencing. To obtain the complementing strains, the ∆nyxA or ∆nyxB mutants were electroporated with pmini-Tn7-nyxA or -nyxB, respectively, together with the helper plasmid pTNS2. Electroporants were selected with kanamycin 50 µg/mL and verified by PCR.

Eukaryotic expression vectors

The NyxA and NyxB constructs were obtained by cloning into the Gateway pDONR (Life Technologies) and then cloned into the pENTRY Myc or HA vectors. The NyxA MAG and NyxB MAG constructs were obtained by site-directed mutagenesis. The 4HA-NyxA and -NyxB constructs were cloned into pBBR-MCS4. All the primers used are listed in Supplementary Table 2.

Transfections and siRNA

All cells were transiently transfected using Fugene (Promega) overnight, according to the manufacturer's instructions. HeLa cells were seeded 18 h prior to experiments at 2×10⁴ cells/well for 24-well plates. siRNA experiments were done with Lipofectamine RNAiMAX Reagent (Invitrogen) according to the manufacturer's protocol. Cells were seeded at 1.4×10⁴ cells/well. Importantly, siRNA depletion of SENP3 was done by treatment with 0.1 µM siRNA every day for 72 h. The following sequence was used: AAACUCCGUACCAAGGGUUAU. A scrambled siControl was also used. Some cell detachment was observed following the 3-day siRNA treatment.
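The inoculum calculation behind "dilute to a given MOI" in the infection protocol above is simple arithmetic, sketched below. The OD600-to-CFU conversion factor is an assumption for illustration; in practice it is calibrated by plating serial dilutions, as described above.

# Sketch: from an overnight culture OD600, estimate CFU/ml and derive the
# volume of culture needed to infect one well at a target MOI.
# Assumption (not from the source): 1 OD600 ~ 3e9 CFU/ml for Brucella.

OD_TO_CFU_PER_ML = 3e9   # assumed conversion factor

def inoculum(od600, cells_per_well, moi, well_volume_ml):
    """Return (CFU needed per well, culture volume in ml, fold dilution)."""
    culture_density = od600 * OD_TO_CFU_PER_ML          # CFU/ml
    cfu_needed = cells_per_well * moi                   # total CFU per well
    target_density = cfu_needed / well_volume_ml        # CFU/ml in the well
    return cfu_needed, cfu_needed / culture_density, culture_density / target_density

cfu, vol_ml, fold = inoculum(od600=1.2, cells_per_well=1e5, moi=1000, well_volume_ml=0.5)
print(f"{cfu:.1e} CFU per well -> {vol_ml*1000:.1f} ul of culture (1:{fold:.0f} dilution)")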
Legionella infection
Legionella pneumophila strain Paris was cultured in N-(2-acetamido)-2-aminoethanesulfonic acid (ACES)-buffered yeast extract broth […].

Eluted proteins were separated by SDS-PAGE, transferred to a PVDF membrane, incubated with specific primary antibodies for 1 h and detected with horseradish peroxidase (HRP)-conjugated secondary antibodies using Clarity™ Western ECL Blotting Substrate (Bio-Rad). Pulldown assays between cell extracts and purified Nyx and MAG mutants are described in annexe 3.

A custom macro was used to quantify the colocalization between the signals of SENP3 and nucleolin, as well as between the signals of SENP3 and HA-NyxA/NyxB. For each nucleus, the ratio between the mean intensity of the SENP3 signal in the nucleoli and the mean intensity of the SENP3 signal outside the nucleoli was also calculated. The details can be found in the source code and the comments of the macro. In the same way, this software was used to quantify the colocalization between PML and HA-NyxA/NyxB. For the Brucella-infected HeLa cells, a pipeline in the software CellProfiler was used to measure the Pearson correlation coefficient between the signals of SENP3 and nucleolin in the nuclei of the cells. The cells were also classified in two classes according to the intensity of the Brucella DSRed signal in the perinuclear area (a schematic stand-in for this quantification is sketched at the end of this section).

TEM1 translocation assay
RAW cells were seeded in 96-well plates at 1x10⁴ cells/well overnight. Cells were then infected at an MOI of 500 by centrifugation at 4°C, 400 x g for 5 min, followed by 1 h at 37°C, 5% CO2. Cells were washed with HBSS containing 2.5 mM probenecid. Then CCF2 mix (as described in the Life Technologies protocol) and probenecid were added to each well and incubated for 1.5 h at room temperature in the dark. Cells were finally washed with PBS, fixed using 3.2% PFA and analysed immediately by confocal microscopy (Zeiss LSM800).

Thermophoresis
The method is based on the movement of molecules in a microscale temperature gradient, an effect called "thermophoresis", which varies depending on whether the molecule is free or in an interaction (Figure 21). In the initial state, the proteins do not move. The protein solution is then heated by an infrared laser, which creates a temperature gradient (2 to 3°C). The molecules in solution then move from hot to cold. Their movement depends on their charge, their size and their hydration layer. The interaction between a molecule and its partner will modify at least one of these parameters. Once the laser is turned off, the proteins return to their initial state. The phenomenon of thermophoresis is followed by fluorescence […]. NyxB was used at the following increasing concentrations: 6.67 nM, 13.34 nM, 26.68 nM, 53.3 nM, 106.72 nM, 213.44 nM, 426.88 nM, 853.76 nM, 1.7 µM, 3.4 µM, 6.8 µM, 13.6 µM, 27.2 µM, 54.4 µM, 108.8 µM, 217.6 µM.

SUMOylation assay
In 50 µl of reaction solution (20 mM HEPES pH 9, 20 mM NaCl, 0.5 mM DTT) were added 1.5 µg of purified recombinant SUMO2 protein, 1 µg of recombinant NyxA/NyxB or RanGAP1 (Biomol international #UW9755), 0.2 µg of the SAE1/UBA2 heterodimer (Boston Biochem #E-315), and 1 µg of purified recombinant hUbC9, in the presence or absence of 5 mM Mg-ATP solution (Boston Biochem #B-20). The reaction was incubated for one hour at 37°C and then stopped with 10 mM EDTA. 15 µL of the reaction solution was used for analysis by Coomassie blue staining.

Yeast two-hybrid
NyxA was cloned into the pDBa vector using the Gateway technology, transformed into MaV203 and used as a bait to screen a human embryonic brain cDNA library (Invitrogen). Media, transactivation test, screening assay and gap repair test were performed as described [299–302].
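As a rough illustration of the image quantification described above (Pearson correlation between two channels, and the per-nucleus ratio of SENP3 intensity inside versus outside nucleoli), here is a minimal NumPy sketch. The real analyses used the ImageJ macro and CellProfiler pipeline mentioned in the text; this stand-in, with invented toy data, only shows the underlying arithmetic.

# Schematic stand-in for the macro/pipeline quantification described above.
import numpy as np

def pearson(channel_a, channel_b, mask):
    """Pearson correlation between two channels within a boolean mask."""
    a = channel_a[mask].astype(float)
    b = channel_b[mask].astype(float)
    return np.corrcoef(a, b)[0, 1]

def nucleolar_ratio(senp3, nucleus_mask, nucleoli_mask):
    """Mean SENP3 intensity in nucleoli over mean intensity elsewhere in the nucleus."""
    inside = senp3[nucleoli_mask].mean()
    outside = senp3[nucleus_mask & ~nucleoli_mask].mean()
    return inside / outside

# Toy data: random images and rectangular masks, just to show the call pattern.
rng = np.random.default_rng(0)
img_senp3 = rng.integers(0, 255, (64, 64))
img_nucleolin = rng.integers(0, 255, (64, 64))
nucleus = np.zeros((64, 64), bool); nucleus[8:56, 8:56] = True
nucleoli = np.zeros((64, 64), bool); nucleoli[24:40, 24:40] = True

print(pearson(img_senp3, img_nucleolin, nucleus))
print(nucleolar_ratio(img_senp3, nucleus, nucleoli))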
Stats
All data sets were tested for normality using the Shapiro-Wilk test. When a normal distribution was confirmed, we used a one-way ANOVA with Tukey's correction for multiple comparisons. For two independent variables, a two-way ANOVA was used. For data sets that did not show normality, a Kruskal-Wallis test was applied, with Dunn's correction. All analyses were done using GraphPad Prism 6.

We next sought to directly observe the translocated effectors in the cell by immunofluorescence microscopy. We succeeded with a strain expressing 4HA-NyxA/B, an approach used many times to increase the fluorescence signal for the observation of numerous effectors, in particular in Legionella and Coxiella [303,304]. We observed a cytosolic and nuclear distribution of the effectors in transfection. Nevertheless, 4HA-Nyx, as opposed to HA-Nyx, is much less frequently found to accumulate in the nucleus, suggesting that the 4HA tag disturbs this localization. This is probably not a direct consequence of the 4HA tag itself, since 4HA-NopA, an effector of Coxiella burnetii, localizes to the nuclei of infected U2OS cells without difficulty [304]. Consistently, we observed our 4HA-tagged Nyx effectors only sporadically localizing in the nucleus during infection. Nonetheless, it is possible that, due to its size, a 4HA tag masks Nyx domains involved in the translocation from the cytosol to the nucleus. NLSs have not been identified in Nyx, but Victor Cid's team has shown in yeast that the import of these effectors is active and dependent on the alpha-importin encoded by the SRP1 gene (Figure 51), suggesting either that Nyx present NLSs that have not been identified to date or that they interact with proteins carrying functional NLSs that allow them to reach the nucleus. We believe that both observed localizations (cytosolic and nuclear) are relevant, suggesting that Nyx either shuttle between cytosol and nucleus or follow a sequential mechanism with a first step in the cytosol before being imported into the nucleus. Further work will be needed to understand the mechanisms and kinetics of the nuclear import of these effectors. We have identified SENP3 as the eukaryotic target of our two effectors. Moreover, the analysis of the crystallographic structure of NyxB allowed us to identify an interaction pocket. We confirmed this interaction by co-precipitation experiments with purified NyxA and NyxB against the N-terminal domain of SENP3. We were not able to purify the entire protein or the C-terminal domain, so we cannot exclude that the Nyx effectors also interact with the C-terminal domain of SENP3. It would have been interesting to measure the affinity constants between Nyx and SENP3 as well as between Nyx MAG and SENP3, in order to show a decrease or an abolition of the formation of this complex. The interaction pocket observed in the crystallographic structure corresponds to a large acidic domain present on Nyx that allows the interaction with an arginine-rich motif present on SENP3 at the level of the NoLS domain. It is at the level of this domain that mTOR phosphorylates SENP3, which allows it to translocate into the nucleoli and to interact with NPM1. It is interesting to note that this interaction between Nyx and SENP3 seems to be similar to that between SENP3 and NPM1.
Indeed, NPM1 presents an acidic domain involved in the interaction with the arginine-rich motif of SENP3 [268]. Thus, the interaction between SENP3 and Nyx could mask the domain of interaction with NPM1, but this seems unlikely as we are able to pull down NPM1 with purified Nyx effectors. It is more likely that the Nyx interaction masks the mTOR phosphorylation residues of SENP3. Indeed, phosphorylation of SENP3 does not seem to be necessary for the interaction with Nyx, since the interaction is observed between the purified recombinant Nyx proteins and the N-terminal domain of SENP3. Thus NyxA/B would interact with SENP3 upstream of mTOR phosphorylation, thereby preventing its nucleolar localization. In order to understand the role of these effectors in the cell, we ectopically expressed in HeLa cells the Nyx effectors as well as Nyx MAG, in which the interaction pocket with SENP3 was mutated. We observed that Nyx are able to recruit SENP3, displacing it from its normal nucleolar localization, and that this recruitment is dependent on the acidic domain identified on the Nyx. The same observations could be made in infection, especially for NyxA. To date we do not really understand why Brucella produces two highly similar proteins. We have observed that NyxA always shows a stronger impact on the SENP3 delocalization phenotypes than NyxB. Furthermore, the effects of NyxA and NyxB are not cumulative, consistent with their observed interaction in vitro. One possibility is that NyxA, by interacting with NyxB, could increase its interaction with SENP3, which would give NyxB a chaperone-like role for NyxA. To add to the complexity of these phenotypes, we observed that BspJ ectopically expressed in HeLa cells was also able to delocalize SENP3, although less than Nyx; however, unlike both Nyx effectors, BspJ does not recruit SENP3 (results not shown). In addition, we observed that cells infected with a mutant strain lacking NyxA still show significant SENP3 displacement from nucleoli. These observations suggest that Nyx may act in concert with other Brucella effectors such as BspJ and SerB. Consistent with these results, deletion of NyxA and NyxB alone does not impact Brucella intracellular replication; however, in cells in which SENP3 expression was silenced, Brucella had a replication defect, highlighting the key role of SENP3 during infection. It would be interesting to observe the impact on the localization of SENP3 and on Brucella's ability to replicate when HeLa cells are infected with strains in which the genes encoding all four effectors, NyxA, NyxB, BspJ, and SerB, are mutated. In addition to the delocalization of SENP3 from nucleoli, our study also established that the cytosolic punctiform structures containing translocated Nyx effectors are enriched in NVL, RPL5 and NUFIP1. NVL is an AAA-ATPase that localizes in nucleoli following its interaction with RPL5 and participates in ribosome synthesis [54]. During infection of HeLa cells or BMDMs by Brucella, we observed a relocation of NVL from the nucleus into punctiform structures of the cytosol, partially dependent on NyxA and NyxB. NyxA/B do not appear to be solely responsible for this delocalization; other infection-dependent factors likely contribute, and it would be interesting to test whether SerB and BspJ are also involved in this phenotype. These quantifications of the cytosolic structures of NVL were made for the wild-type strain and the mutant strain lacking the genes encoding nyxA and nyxB.
Nevertheless, it would be very important to quantify the cytosolic structures of NVL when cells are infected with nyxA or nyxB mutant Brucella strains complemented with either wild-type nyxA/B or nyxA/B MAG. This would allow us to confirm that this phenotype is directly dependent on the Nyx interaction with SENP3. During my thesis, we initially thought that this was the first time that NVL had been observed in cytosolic structures […]. However, cytosolic NVL was only observed when AMN was over-expressed, and the authors did not analyze endogenous NVL in these conditions. It would be interesting to observe whether this role of NVL in endocytic regulation is involved in Brucella infections. We have also hypothesized that these cytosolic structures could correspond to autophagy. More generally, autophagy is a ubiquitous degradation mechanism orchestrated by more than 30 specific proteins called ATGs (autophagy-related genes) (Figure 54) [306]. It is a dynamic membrane process that begins with the de novo formation of vacuoles called autophagosomes encompassing cytoplasmic fractions. The degradation of the sequestered material, also called the cargo, occurs after fusion of the autophagosomes with lysosomes, in vacuoles called autolysosomes. Autophagy has multiple functions within the cell. It removes damaged or non-functional components and recycles nutrients to maintain homeostasis. In particular, it is a fundamental process in the host's response to infection by pathogens, involved in both innate and adaptive immune responses. This defense mechanism can be countered or exploited by intracellular microorganisms for their own multiplication. First of all, membrane supply via autophagy is very often beneficial for both bacteria and viruses. Because of their inability to replicate in an acidic environment, these microorganisms block, at least partially, the maturation stage of autophagosomes. Coxiella burnetii, for example, which can survive at acidic pH, uses autophagy for optimal development of its replicative vacuole. Several studies have implicated autophagy at different stages of Brucella trafficking. The YipA protein, which localizes at the site where formation of rBCVs is initiated, is necessary for rBCV formation. This process is dependent on Atg9 and WIP1 [307]. In addition, multi-membrane BCVs that incorporate monodansylcadaverine, a marker of autophagosomes, have been described in epithelial cells [54,55]. At the late stages of trafficking, autophagic vacuoles are formed that mediate bacterial exit from infected cells. Current data converge towards a model in which there is a very selective hijacking of autophagy components at different stages of BCV maturation, with dependency on Atg9 and WIP1 for rBCVs [307] and ULK and Beclin-1 for aBCVs (Starr et al.). Consistently, inhibition of a large panel of components involved in autophagosome nucleation and elongation did not prevent rBCV formation (Starr et al.). Thus, our study revealed that Brucella-infected cells present NUFIP1 in cytosolic structures corresponding to NyxA/B, NVL and RPL5. NUFIP1 is a receptor for the selective autophagy of ribosomes; it is a nuclear protein that is delocalized to the cytosol when ribophagy is induced. In humans, ribophagy leads to the degradation of the small and large ribosomal subunits.
It is triggered by various stresses: starvation, inhibition of mTOR by Torin1, sodium arsenite, oxidative stress such as hydrogen peroxide, or reversine, an inhibitor of MPS1 that leads to protein imbalance due to poor chromosomal segregation [309]. In human cells, the loss of NUFIP1 leads to an inability of the cell to degrade ribosomes following starvation. During the response to starvation, NUFIP1 changes location. In the cytoplasm, NUFIP1 binds to the ribosome and delivers its cargo to the autophagic vesicle by binding to LC3B through its LC3 Interaction Region (LIR) [287]. The presence of RPL5, a ribosomal protein of the large 60S subunit, in these cytosolic structures reinforces the idea that we are dealing with ribophagy. We could also label other ribosomal proteins, such as RPS3, associated with the 40S subunit, or RPL28, associated with the 60S subunit, and observe by confocal microscopy whether they colocalize with NUFIP1, NyxA/B or NVL. On the other hand, the presence of NVL and NyxA/B is rather surprising since they are not associated with ribosomes. It is interesting to note that NVL has several potential LIR sequences and NyxA/B also has one. Furthermore, it would be interesting to select infected cells expressing 4HA-NyxA/B using a cell sorter and to perform co-immunoprecipitation using an anti-HA resin to purify these NyxA/B/RPL5/NVL/NUFIP1 structures and perform mass spectrometry in order to identify other associated proteins. Quantification of cytosolic NUFIP1 in cells infected with wild-type Brucella or the NyxA/B mutant strain will strengthen our hypothesis that NyxA/B may be ribophagy regulators. However, we do know that NyxA/B are not sufficient to activate ribophagy, since HeLa cells expressing NyxA/B do not present cytosolic NUFIP1. It is possible that Brucella infection causes a kind of starvation that activates ribophagy. It would be interesting to measure the nutrient status of cells infected with either wild-type or nyxAnyxB mutant strains. Starvation-induced ribophagy provokes the delocalization of SENP3 from nucleoli. Recently, a study showed that SENP3 regulates autophagy by deSUMOylating Beclin 1 (BECN1). Cells not expressing SENP3 show an accumulation of LC3B due to increased autophagosome formation. BECN1 is required for the initiation of autophagy. BECN1 forms a complex with PIK3C3 that is important for autophagosome formation. SUMOylation of BECN1 on K380 promotes the interaction with PIK3C3, accelerating autophagosome formation. Conversely, SENP3 deSUMOylates BECN1, which weakens the interaction with PIK3C3, leading to a decrease in the formation of autophagosomes. According to the authors, when starvation occurs, SENP3 accumulates in the cytosol, allowing it to interact with BECN1. As a result, SENP3 can negatively regulate autophagy [310]. SENP3 thus appears to act as a guard preventing the cell from digesting too much of itself. So far, I have never been able to observe SENP3 in the cytosol. Perhaps by using their labeling protocol we will be able to observe whether SENP3 is also present in the cytosol in the vicinity of NUFIP1, in a NyxA/B-dependent manner. Nyx potentially sequester SENP3 in the nucleoplasm and thus prevent the negative regulation of ribophagy by SENP3. Thus, SENP3 could be a specific regulator of ribophagy.
The fact that the observed structures are LAMP1-negative suggests either that fusion with lysosomes does not occur, or that 48 h post-infection is too early for these structures to be LAMP1-positive; it would be interesting to perform further kinetics. Brucella may also divert ribophagosomes to obtain nutrients and membrane. How can the BCV recruit these structures? How can Brucella metabolize ribosomes? Currently, we cannot formally conclude that these NUFIP1-positive structures correspond to ribophagy; experiments are in progress to confirm this. During my thesis, I also spent time setting up an iBMDM infection protocol with Brucella, in order to obtain a maximum of infected cells. The objective was then to perform a mass spectrometry SUMOscan to obtain the SUMOylome of cells infected with Brucella WT versus Brucella ∆nyxAnyxB. Unfortunately, I did not have time to perform this experiment; one can imagine that BECN1 would have emerged from this screening, as well as many other proteins. In conclusion, we have identified two novel Brucella effectors that target SENP3 and induce the relocation of this nucleolar protein during infection. We propose that the SENP3 interaction leads to the induction of ribophagy, visible with the appearance of cytosolic structures enriched in NUFIP1, NVL, RPL5, and the two Nyx Brucella effectors, which may allow replicative Brucella-containing vacuoles to obtain nutrients or membranes while promoting cell survival. NyxA and NyxB can act as positive regulators of ribophagy by diverting SENP3, which normally acts as a negative regulator of ribophagy. In addition, our work suggests that NVL is potentially a novel ribophagy marker.

In this manuscript we highlighted a role of the S. aureus TIR protein TirS, present on a chromosomal cassette containing multiple antibiotic resistance genes, in the control of virulence. In addition, this was the first study to show upregulation of a TIR protein upon antibiotic treatment and an epidemiological link with enhanced success of bacterial spread. My contribution was the evaluation of the intracellular localization of TirS in transfected cells and attempts to express and purify TirS in E. coli for in vitro assays.

TirS is a Staphylococcus aureus TIR mimic that is part of a novel bacterial invasion mechanism. Its ectopic expression in eukaryotic cells inhibited TLR signaling, downregulating the NF-κB pathway through inhibition of TLR2, TLR4, TLR5, and TLR9. Skin lesions induced by the S. aureus tirS knockout mutant were larger in a mouse model compared with the wild-type and restored strains, even though the tirS-mutant and wild-type strains did not differ in bacterial load. TirS was also associated with lower neutrophil and macrophage activity, confirming a central role in virulence attenuation through local inflammatory responses. TirS invariably localizes within staphylococcal chromosomal cassettes (SCC) containing the fusC gene for fusidic acid resistance but not always carrying the mecA gene. Of note, sub-inhibitory concentrations of fusidic acid increased tirS expression. Epidemiological studies identified no link between this effector and clinical presentation but showed a selective advantage conferred by SCCmec elements carrying SCC fusC/tirS. Thus, two key traits determining the success and spread of bacterial infections are linked.

Introduction
The innate immune system constitutes the first line of host defense against invading microbial pathogens in multicellular organisms.
Key components of the innate immune response are pattern recognition receptors, which recognize a wide range of conserved bacterial structures, collectively called pathogen-associated molecular patterns, and initiate an intracellular immune signaling cascade [1]. The Toll-like receptor/interleukin (IL)-1 receptor (TLR/IL-1R) superfamily, which comprises Toll-like receptors (TLRs) and interleukin-1 receptors (IL-1Rs), is required for many host innate immune responses and is characterized by the presence of Toll/interleukin-1 receptor (TIR) domains located cytoplasmically on each TLR [2]. The TIR domain is critical for protein-protein interactions between TLRs and the corresponding TIR-containing adaptors. These interactions activate specific transcription factors such as nuclear factor-κB (NF-κB), which regulates the expression of various inflammatory mediators [3,4]. The TIR domain therefore plays a pivotal role in signaling from these receptors, and their importance in immune regulation has made them the subject of intense study. The TLR signaling pathway is a key target of pathogen mechanisms of host immune system evasion [4]. Indeed, microbes can target various levels of the TLR signaling pathway, from modification of pathogen-associated molecular patterns to modifications in the immune signaling cascade. A potential host evasion mechanism involving TLRs came to light with the identification of bacterial TIR homologues. The majority of studies on bacterial TIR proteins have focused on their potential role as virulence factors that directly subvert host TLR signaling. For example, TIR-like protein A (TlpA) from Salmonella enterica serovar Enteriditis reduces NF-κB activation by a TLR4-, IL-1R-, and MyD88-dependent pathway and modulates IL-1β secretion during infection [5]. TcpC in the uropathogenic Escherichia coli CFT073 and Btp1/BtpA/TcpB in Brucella species suppress TLR2- and TLR4-mediated activation of NF-κB by targeting MyD88 [6,7]. A second TIR-containing protein in Brucella ssp. (BtpB) was reported to be a potent inhibitor of TLR signaling, probably via MyD88 as well [8]. The presence of a putative TIR-domain-containing protein in Staphylococcus aureus was suggested through a data search analysis [5] before being recently confirmed [9]. S. aureus is an important human pathogen that causes a wide variety of community- and healthcare-associated infections [10]. This bacterium has a proven ability to adapt to the selective pressure of antibiotics. S. aureus was initially methicillin-sensitive (MSSA) but isolates resistant to this antibiotic were identified soon after its introduction (MRSA, or methicillin-resistant S. aureus) [11]. S. aureus becomes resistant to methicillin mainly by the acquisition of the methicillin-resistance gene mecA, which occurred first in hospital settings and now takes place in the community and in livestock [12,13]. The mecA gene is carried on a particular class of mobile genetic elements prevalent in staphylococci, the staphylococcal chromosomal cassette (SCC), designated as SCCmec [14].

[Author summary] TLR signaling elicits a proinflammatory response that controls immune cell recruitment to infected tissues. Here, we show that Staphylococcus aureus, an opportunistic human pathogen, expresses a host defense-like protein, TirS, that actively perturbs the initial TLR activation stage. Results with isolated human cells and mouse models show that TirS is a broad innate immune inhibitor of TLR-dependent signaling and modulates bacterial virulence, attenuating local inflammation.
Moreover, the tirS gene lies near antimicrobial resistance genes for an antibiotic that enhances TirS production, shifting the balance to favor the pathogen and promote disease. Understanding the mechanisms by which S. aureus modulates the immune response may lead to novel approaches for preventing and treating infection.

Askarian et al. [9] characterized the TIR domain protein TirS in the SCC476 element of the methicillin-susceptible S. aureus strain MSSA476. SCC476 is integrated at the same site on the chromosome as SCCmec elements in MRSA [15]. TirS interferes with the TLR2-induced MAPK and NF-κB signaling pathways and enhances bacterial survival within the host [9]. In the present work, we report that TirS is spread among 12% of MRSA and MSSA strains. In an attempt to describe the genetic context of tirS (for staphylococcal TIR gene) in S. aureus, we fully sequenced the SCC element of representative bacterial strains. In all MRSA and MSSA lineages, the tirS gene was invariably located within this mobile genetic element and co-located with the fusC and, for the MRSA strains, mecA antibiotic resistance genes. Interestingly, our results show that a sub-inhibitory concentration of fusidic acid induced overexpression of tirS. We also confirm previous findings that tirS expression induces a negative regulation of the TLR signaling pathway. Our results with a mouse model of skin infection support that TirS modulates bacterial virulence through attenuation of host inflammatory responses during infection. This work is the first description of a TIR homolog protein carried by a mobile genetic element conferring resistance to antibiotics, suggesting a potential selective advantage. Indeed, these features may contribute to the ability of S. aureus to survive and establish a critical population size.

Results
tirS distribution and molecular epidemiology in S. aureus lineages
To assess the prevalence of tirS in various MRSA and MSSA lineages, a series of 226 well-characterized clinical isolates from more than 27 clonal complexes (CCs) or sequence types were subjected to tirS-specific PCR. Among the 226 strains examined, 28 (12.4%) yielded positive tirS amplification (Table 1). tirS was detected in MRSA and MSSA strains belonging to only 3 CCs: CC1, CC5, and CC8. In detail, the tirS gene was detected in 18/18 MRSA strains, CC5 Geraldine clone; 1/6 MRSA strains, CC5 pediatric clone; 1/2 MRSA strains, CC1; 6/9 MSSA strains, CC1; and 2/10 MSSA strains, CC8. Of further interest was our finding of a perfect association between tirS gene amplification and the MRSA Geraldine clone. We next examined the molecular epidemiology of tirS in human staphylococcal infections. S. aureus strains were isolated from clinical specimens of individuals presenting skin and soft tissue infections, community-acquired pneumonia, bacteremia, infective endocarditis, or nasal colonization (asymptomatic bacterial carriage). No significant association was detected between specific disease and the presence of tirS. Among the 28 tirS-positive clinical samples, 10 were isolated from healthy carriers (asymptomatic carriage; 36%), 8 from patients with cutaneous infection (29%), 5 from patients with pneumonia (18%), 3 from patients with osteomyelitis (11%), 1 from a patient with bacteremia (4%), and 1 from a patient with infective endocarditis (4%).
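For transparency on how the percentages quoted above are derived, the short sketch below recomputes them from the counts given in the text. The counts come from the paragraph above; the script itself is only an illustration and not part of the original analysis.

# Re-derivation of the prevalence figures quoted above, using only counts
# stated in the text. Purely illustrative; not the original analysis code.
positive, total = 28, 226
print(f"tirS prevalence: {100 * positive / total:.1f}%")  # 12.4%

# Clinical origin of the 28 tirS-positive isolates (counts from the text)
origins = {"asymptomatic carriage": 10, "cutaneous infection": 8,
           "pneumonia": 5, "osteomyelitis": 3,
           "bacteremia": 1, "infective endocarditis": 1}
for origin, n in origins.items():
    print(f"{origin}: {100 * n / positive:.0f}%")  # 36, 29, 18, 11, 4, 4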
Genomic context of the tirS gene in MRSA and MSSA
Because the tirS gene was previously described on a staphylococcal chromosomal cassette, SCC476 [9], we performed whole-genome shotgun sequencing of six representative MRSA and MSSA strains positive for the presence of the tirS gene. These included four MRSA lineages: the prototype Geraldine clone strain HT20030749 (CC5; SCCmec I), strain ST20120331 (CC5, SCCmec IV), and strains ST20121850 and ST20130096 (CC1; SCCmec V fusC+) (ENA database project study accession number PRJEB12840; sample accessions ERS1070204 to ERS1070207). Two tirS-positive MSSA strains were also added to the study: strain ST20110167 (CC1) and strain ST20121341 (CC8) (sample accessions ERS1434451 and ERS1434452). As a reference for the comparison with our sequenced strains, we used the genome of strain MSSA476, in which tirS was recently described [9]. Whole-genome alignment and a search for the site-specific insertion sequences (ISS) typical of SCC-like cassette insertions showed that tirS is invariably present within the SCC element of all six MRSA and MSSA genomes analyzed (Fig 1). Moreover, this gene is found in a highly conserved region within the J1 region (between the ccr complex and the 5' ISS(L)), consisting of five open reading frames (ORFs) that include the tirS gene but also the fusC gene, responsible for resistance to fusidic acid. This region is present in both the MRSA and MSSA strains analyzed (Fig 1). Furthermore, BLASTn searches using tirS and its surrounding four ORFs (the "tirS region"), as annotated in the strain MSSA476 genome (GenBank: BX571857.1), against the GenBank nucleotide collection (nr/nt) revealed the presence of the tirS region in 14 MRSAs, 2 other MSSAs, and a methicillin-sensitive Staphylococcus hominis strain. Unexpectedly, in all 17 staphylococci, the tirS region was conserved and located in SCC elements in the vicinity of the site-specific recombinase of type ccrAB. For 15 of the S. aureus strains in which the tirS region was detected, a whole-genome shotgun sequence was available, and we observed that they belonged to four distinct clonal complexes: CC1 (2 MRSA and 1 MSSA), CC5 (7 MRSA), CC8 (1 MSSA), and CC22 (4 MRSA).

[Table 1 (continued): prevalence of tirS across the remaining lineages tested. All listed clones were tirS-negative (0/N), including USA300, MRSA-44, USA700 and MRSA-91; CC6 (WA MRSA 51); CC9; CC10; CC12; CC15; CC20; CC22 (including EMRSA-15/Barnim/Middle Eastern); CC30 (including Southwest Pacific); ST34; ST36 (EMRSA-16); CC45 (including Berlin); CC59 (including USA1000 and Taiwan); CC80 (European); CC88 (WA MRSA-2); ST93 (Queensland clone); and CC97.]

TirS production
To investigate TirS production by S. aureus, tirS expression was examined by quantitative real-time reverse transcription PCR (RT-qPCR) during bacterial growth in one MRSA (clone Geraldine, CC5) and one MSSA (CC1) tirS-positive strain. The analysis of the tirS expression kinetics showed stable expression of tirS during growth with a marginal increase at the end of the exponential phase (Fig 2A). The difference in Ct values between tirS and the normalization gene hu was generally around seven cycles (see the worked qPCR arithmetic sketched below).
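For readers less familiar with qPCR arithmetic, a ΔCt of ~7 cycles between tirS and the reference gene hu corresponds to roughly a 2^7 ≈ 128-fold lower transcript abundance, assuming ~100% amplification efficiency. The sketch below illustrates this, together with the fold-change calculation described in the Methods (relative quantity versus the no-antibiotic reference); it is a simplified, assumption-laden illustration, not the exact pipeline used in the study.

# Illustrative ΔCt / ΔΔCt arithmetic for the RT-qPCR results above.
# Assumes ~100% amplification efficiency (doubling per cycle); the study's
# actual analysis may differ in detail. Example Ct values are invented.

def relative_abundance(ct_target, ct_reference):
    """Target abundance relative to the reference gene: 2^-(ΔCt)."""
    return 2 ** -(ct_target - ct_reference)

def fold_change(dct_stress, dct_control):
    """Fold change of the stress condition versus control: 2^-(ΔΔCt)."""
    return 2 ** -(dct_stress - dct_control)

# A ΔCt of ~7 cycles between tirS and hu:
print(relative_abundance(ct_target=30, ct_reference=23))  # ~0.0078 (~1/128)

# Hypothetical example: fusidic acid lowers ΔCt(tirS - hu) from 7.0 to 4.4
print(fold_change(4.4, 7.0))  # ~6.1-fold upregulation, in line with "up to six-fold"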
Because tirS is located in the SCC element, we investigated the effect on tirS expression of adding a sub-minimum inhibitory concentration (sub-MIC) of oxacillin (1/2 MIC) or fusidic acid (1/4 MIC) during the exponential phase of S. aureus growth in 9 strains. Addition of fusidic acid significantly modulated tirS expression in both MSSA and MRSA strains (Fig 2B). tirS was upregulated up to six-fold with fusidic acid compared to its expression without antibiotic stress, and the level of induction by fusidic acid was strain dependent. In contrast, no difference in the level of tirS expression was observed upon exposure to a sub-MIC of oxacillin.

[Fig 1 legend] For all 6 strains, the tirS gene was found within the SCC element. Schematic diagrams represent the genomic comparison of the 6 SCC regions of both MRSA and MSSA strains used in our study and the SCC476 element of strain MSSA476, for which the tirS gene was first described [9]. Predicted ORFs are marked in the direction of transcription as arrows. The highly conserved tirS region, consisting of the tirS gene and 4 surrounding genes, among which we find the fusC gene (fusidic acid resistance), is represented by green arrows. Light rose arrows represent genes coding for antibiotic resistance, orange arrows represent the mecA gene conferring resistance to beta-lactams, and blue-gray arrows represent the site-specific recombinases (ccrAB or ccrC) of SCC elements. The SCC element ISS(L) and ISS(R) sites are also represented as black vertical bars. Abbreviations: ccrAB = cassette chromosome recombinase A and B; fusC = fusidic acid resistance gene; mecA = methicillin resistance gene; ccrC = cassette chromosome recombinase C; ble = bleomycin resistance gene; aaD/knt = kanamycin nucleotidyltransferase (resistance gene). doi:10.1371/journal.ppat.1006092.g001

[Fig 2 legend, partial] …of antibiotic treatment. Bacterial strains without stress were used as reference (relative quantity = 1) to estimate the relative quantity of tirS mRNA. White bars correspond to the negative control (no antibiotic), grey bars to bacteria exposed to oxacillin, and black bars to bacteria exposed to fusidic acid. Data represent mean ± SEM of three to six independent assays. * p < 0.05; ** p < 0.01; *** p < 0.001. doi:10.1371/journal.ppat.1006092.g002

TirS interferes with TLR signaling
TirS belongs to the family of bacterial TIR proteins [9]. We therefore investigated the ability of TirS to specifically interfere with TLR signaling using an in vitro NF-κB-dependent luciferase reporter system. Although a variety of TLRs have been described, in humans the most relevant for recognition of bacterial molecules are TLR2, TLR4, TLR5, and TLR9. Ectopic expression of tirS in HEK293T cells transfected with the luciferase reporter vector and various TLRs resulted in reduced TLR2 activation and, to a lesser extent, reduced activation of TLR9, TLR4, and TLR5 (Fig 3A). As a control, we confirmed that this inhibitory effect of TirS on murine TLR2 was dose-dependent by carrying out the transfections with increasing amounts of the expression vector encoding TirS (Fig 3B). Inhibition of the human TLR2 and TLR4 pathways following addition of the appropriate ligands was also observed (Fig 3C). These results suggest that TirS may, at least partly, target a molecule common to these pathways, such as the adaptor molecule MyD88. Consistently, reduction of IL-1R signaling following IL-1β stimulation was also observed in the presence of TirS (Fig 3D).
In contrast, although much lower levels of activation can be obtained for TLR3, which is independent of MyD88, we did not observe any significant effect of TirS on this pathway (Fig 3E). This was also the case for the endogenous TNF receptor (TNFR) (Fig 3D). Overall, these results are in agreement with data from Askarian et al. [9], who previously described TirS inhibition of TLR2-, MyD88- and TIRAP-dependent pathways in vitro. In addition to its ability to interfere with TLR signaling, the bacterial TIR effector protein from Brucella, BtpA, targets and modulates microtubules [16] through a WxxxE motif [17], as does the recently identified TIR protein from Bacillus anthracis referred to as BaTdp [18]. Since the WxxxE motif is also present in TirS, we investigated the intracellular localization of ectopically expressed TirS by confocal microscopy on HeLa cells transfected with myc-TirS and indirect immunofluorescence staining with anti-myc antibodies. We found that myc-tagged TirS accumulated in filament-like structures of irregular shapes within the host cytosol (Fig 3F). Similar results were observed for GFP-TirS, suggesting that this localization was not dependent on the tag (S1 Fig). In addition, TirS showed no co-localization with cytoskeleton components, as observed after labeling with phalloidin for actin, anti-tubulin for microtubules, or anti-vimentin for intermediate filaments (Fig 3F), suggesting a different targeting than previously described for BtpA and BaTdp.

TirS modulates S. aureus virulence
To further investigate the role of TirS during infection, we carried out in vivo studies in mice. Because S. aureus is the leading cause of skin infection, we developed a subcutaneous model of infection in mice with inoculation of the S. aureus clone Geraldine wild-type (WT) strain, a strain deleted for the tirS gene (ΔtirS), or a ΔtirS strain restored for the tirS gene in a chromosomal position (ΔtirS + tirS). First, we confirmed that bacterial growth was not affected by the genetic manipulation of tirS by assessing growth in two different media (brain-heart infusion (BHI) and tryptic soy broth (TSB)) (S2 Fig). Then, wild-type (WT) C57BL/6 mice were inoculated with the three strains. Cutaneous infection resulted in the development of visible lesions by day 1 that healed by day 14 regardless of the strain (Fig 4A and 4B). As a control, injection of sterile phosphate buffered saline (PBS) did not induce any skin lesions in mice. Infection with the ΔtirS strain resulted in larger lesions (~2.5-fold) compared with the WT strain (p < 0.01). These differences appeared around day 6 and persisted for about 4 days before being resolved at the same time for both strains. Similar lesion sizes were observed in mice infected with the S. aureus restored strain compared with the WT strain (p = 0.9), confirming the role of TirS in the observed results. To better understand the factors that might be important in determining the size of skin lesions in the mouse model, animals were sacrificed between 5 and 9 days after infection with the MRSA clone Geraldine WT and ΔtirS strains, to sample the lesions. First, we enumerated the bacterial load in the skin lesion. No significant differences in colony-forming unit (CFU) counts between the strains were observed (Fig 5A). Moreover, lesion size was not correlated with bacterial burden for either the WT or ΔtirS strain (S. aureus WT: r² = 0.39, p = 0.2; S. aureus ΔtirS: r² = 0.35, p = 0.2) (Fig 5B).
This finding raised the possibility that the inflammatory response was more important in determining lesion severity, as assessed by lesion size, than bacterial burden in the lesions. To follow up on these observations, we sought to measure neutrophil and macrophage activity by quantifying the levels of myeloperoxidase (MPO) and a panel of inflammatory cytokines in the skin lesions of mice infected with the MRSA clone Geraldine WT and ΔtirS strains. Interestingly, the levels of MPO and IL-1β, IL-6, IL-17, CXCL1, and MCP1 were significantly lower in the murine skin lesions that developed after WT strain inoculation compared with the ΔtirS strain (Fig 5B). The correlation between skin lesion size and the levels of MPO, IL-1β, IL-6, IL-17, CXCL1, and MCP1 was statistically confirmed for each of these inflammatory markers (MPO: r² = 0.71, p < 0.001; CXCL1: r² = 0.55, p < 0.001; IL-1β: r² = 0.61, p < 0.001; IL-17A: r² = 0.45, p < 0.01; MCP1: r² = 0.74, p < 0.001; IL-6: r² = 0.64, p < 0.001) (Fig 5C). Therefore, the smaller lesions observed after experimental inoculation with the bacterial WT strain were associated with lower levels of chemokines and less inflammation but not with decreased bacterial burden. These data support the notion that the local inflammatory response is a key determinant of lesion size. These results are consistent with a role for TirS in the control of the inflammatory response during S. aureus infection in vivo. Finally, to evaluate the contribution of the TIR-containing adaptor protein MyD88, a key component of the TLR signaling pathway implicated in proinflammatory mechanisms [19], MyD88-deficient mice on the C57BL/6 genetic background were subcutaneously inoculated with the MRSA clone Geraldine WT and ΔtirS strains. An identical infection protocol and bacterial concentration were used as in WT mice. As a control, injection of sterile PBS did not induce any skin lesions in mice. Interestingly, there was no significant difference in lesion size between MyD88-deficient mice infected with the WT or ΔtirS S. aureus strain (Fig 6), suggesting that the TirS-dependent modulation of virulence of the MRSA clone Geraldine requires MyD88. These results confirm previously reported in vitro data showing that TirS inhibits signaling in a MyD88-dependent manner [9].

Discussion
Bacterial strategies for innate immune evasion involve manipulation of TLR signaling by TIR homologues such as TirS in S. aureus [20]. In this work we report the localization of tirS in different SCC elements and its role in the control of the inflammatory response during S. aureus infection. Using a mouse model of S. aureus skin infection, we evaluated the role of TirS in S. aureus virulence. We show that S. aureus Geraldine deleted for the tirS gene exhibited greater virulence compared to the WT strain, as attested by the size of the skin lesion. Of note, bacterial counts in the skin lesions did not differ between the mutant and the WT strains and did not correlate with clinical severity (i.e., lesion size). This finding suggests that bacterial burden may not be the primary driver of lesion severity and argues that lesion severity may be due, at least in part, to the associated inflammatory response. In support of this hypothesis, we found a correlation between lesion size and levels of the proinflammatory cytokines, as well as neutrophil activity (as assessed by MPO levels).
These findings are consistent with previous studies, which underscore that the severity of skin infection is often driven by the inflammatory response to the invading pathogen as much as or more than by the direct effects of the pathogen itself [21][22][23]. This inference suggests that the attenuation of skin inflammation observed with the S. aureus WT strain, compared to its tirS mutant, was driven by modulation of the inflammation resulting from TirS action. Such a conclusion is concordant with the fact that the ectopic expression of TirS in eukaryotic cells appeared to temper stimuli-induced TLR2-, TLR4-, TLR5-, and TLR9-mediated NF-κB activation. Accordingly, a previous and independent work reported a negative interference of TirS with the TLR2 signaling pathway [9]. These results can be directly linked to our in vivo observations in mice, explaining the modulation of virulence during S. aureus infection by tirS [4,20]. Here the bacterial TIR effector has been shown to induce attenuation of virulence during infection through downregulation of the innate immune pathway. In addition to the control of inflammatory responses, some TIR domain-containing proteins such as BtpA or BaTdp have been shown to target and modulate microtubules [18,24] through a WxxxE motif that is also present in TirS and involved in cross-talk between the TLR and GTPase signaling pathways [16]. We found that ectopically expressed TirS accumulated in filament-like structures of irregular shapes within the host cytosol that do not co-localize with typical cytoskeleton components (microtubules, actin or intermediate filaments). This observation differs from those previously described for other bacterial TIR proteins from Brucella [8,16] and BaTdp from B. anthracis [18]. The role of TirS accumulation in filament-like structures in the modulation of S. aureus pathogenicity remains to be explored. The host interacting partner of TirS has yet to be identified. MyD88 is a general adaptor protein that plays an important role in Toll/IL-1 receptor family signaling. In vitro results of Askarian et al. [9] showed that TirS interferes with TLR2 and both the MyD88 and TIRAP pathways. Here we report that, in vitro, TirS is a potent inhibitor of not only TLR2 but also TLR4, TLR5, and TLR9. All of these receptors are dependent on MyD88, which argues for an interaction of TirS with MyD88. Consistently, we found that S. aureus carrying tirS induced no change in virulence in a MyD88-knockout mouse model. Taken together, these results suggest that TirS modulates the inflammatory response at the site of infection through the MyD88 adaptor protein. Preliminary work from our lab was unable to purify TirS, which is highly insoluble, and failed to detect an interaction by co-immunoprecipitation assays. Further work is now required to understand the molecular mechanism by which TirS interacts and/or competes with MyD88. As yet, there is no clear consensus on the mechanism by which TIR proteins enter host cells and localize to the host cell cytoplasm. From deduced amino acid sequences, no recognizable signal sequence for secretion was detected in TIR proteins, including TirS.
In the case of TlpA from Salmonella enterica, for which no direct evidence of secretion has been reported, the suggested mechanism is a role for type III or type IV secretion systems (T3SS or T4SS) that can directly inject TIR effectors into host cells [5]. TcpC of E. coli has been reported to be secreted into the medium of cultured bacterial cells and subsequently taken up into host macrophages to interfere with TLR-mediated tumor necrosis factor induction [17]. In the case of Brucella, BtpA and BtpB are translocated into host cells in a manner partially dependent on the T4SS [8]. As observed with other TIR protein genes, the tirS gene is not co-located with genes encoding a secretion system in S. aureus. However, TirS has been reported to be secreted into the medium through an unknown mechanism [9]. Further studies are needed to clarify this issue, perhaps by performing real-time imaging and co-localization studies. A comparison of different SCC elements in various S. aureus genetic backgrounds highlights the invariable presence of tirS within the SCC fusC/tirS mobile genetic element, sometimes included in the SCCmec elements in the J1 region. As proposed previously [9,25], the finding that the tirS region was invariably present within an SCC element suggests that, similar to mecA and fusC transmission, SCC-mediated horizontal transfer is the major mechanism of tirS dissemination. Moreover, horizontal transfer of the tirS region is also suggested by the nearby presence of site-specific recombinases of the ccrAB type in all SCC elements carrying the tirS gene [25]. Of further interest, all published sequences of SCC fusC/tirS have identified four conserved additional genes. Future work should attempt to assign functional roles to these putative proteins and evaluate their involvement in the regulation or modulation of tirS and fusC transcription. The co-location of the mecA and fusC genes in this genetic element suggested that, in addition to antibiotic resistance genes, the presence of tirS may also confer a selective advantage for S. aureus. However, our observations indicated that a sub-inhibitory concentration of fusidic acid, but not oxacillin, induced overexpression of tirS in all strains tested. These results were unexpected because we observed the opposite outcome for Panton-Valentine leukocidin and alpha-toxin expression with the same antibiotics: overexpression by oxacillin and inhibition by fusidic acid [26]. The acquisition by S. aureus of a gene encoding factors that modulate virulence in the SCCmec element is not exceptional. The pls gene, encoding a large surface protein with an LPXTG peptidoglycan-anchoring sequence, is part of the type I SCCmec element [27]. This protein reduces S. aureus adherence and invasiveness [28]. Pls was also described in SCCmec III [29] but never in MSSA. Conversely, genes encoding phenol-soluble modulins alpha 1 to 3 (PSM-α1-3; small peptides with an amphipathic α-helical structure and strong surfactant-like properties that induce the production of proinflammatory cytokines and recruit, activate and lyse neutrophils) were first described in MSSA [30] before being found to be part of SCCmecII, SCCmecIII, and SCCmecVIII (PSM-α-mec) [31,32]. To our knowledge, it is not yet known whether the antibiotics whose resistance is encoded by these SCC elements modulate the expression of Pls and/or PSM-mec.
A number of bacteria expressing TIR-containing proteins have been described but, as far as we know, only one study references their prevalence throughout a bacterial species [24]. In this work, we report the presence of the tirS gene in 12.4% of a clinical MSSA and MRSA S. aureus strain collection. Molecular epidemiology studies did not associate the TirS effector with a specific clinical presentation in humans. For the Geraldine clone, which always carries the tirS gene, previous observations did not identify a particular association with disease severity [33,34]. By contrast, the first nosocomial outbreak due to the Geraldine clone was recently reported, emphasizing its efficiency in being transmitted and easily spread within health care settings [35]. Indeed, this clone has been reported to be both a community-acquired and a hospital-acquired MRSA. Thus, the clone Geraldine SCCmec element may provide a selective advantage in both settings: in the hospital because of a higher antimicrobial resistance compared to drug-susceptible WT S. aureus strains, and in the community because of its enhanced inhibition of the innate immune response. Similarly, different groups recently described the emergence in the community and in the hospital of ST1, ST45, and ST149 MRSA fusC-positive strains in England and of fusC-positive ST5 MRSA in New Zealand that are also tirS positive [36,37]. These epidemiological observations highlight the selective advantage for S. aureus of carrying the SCCmec element with SCC fusC/tirS. In summary, we identify for the first time a bacterial TIR homolog protein genetically linked to an antimicrobial resistance determinant in the mobile genetic element SCC, thus providing a molecular connection between two key traits determining the successful outcome and spread of bacterial infections. Moreover, TIR homolog protein production was modulated by one antibiotic, fusidic acid, for which the resistance is encoded in a conserved region that includes the tirS gene and is located within these SCCmec and non-mec SCC elements. This result expands knowledge about bacterial TIR homologs, which constitute an ingenious strategy of pathogenic bacteria to evade the host immune system. The current state of knowledge strongly suggests that TIR effectors should be considered as potential key modulators of host defense, which emphasizes that further research is required to elucidate the precise mechanism of action of these interesting molecules. From the clinical point of view, the identification of the critical role of TirS signaling in modulating the immune response at a site of infection raises the possibility that this pathway could be locally targeted to engage the host's own immune responses in the treatment of a microbial infection.

Materials and Methods
Ethical statements
Isolates were obtained as part of routine diagnostic testing and were analyzed anonymously. All data were collected in accordance with the European Parliament and Council decision for the epidemiological surveillance and control of communicable disease in the European community [38,39]. Ethical approval and informed consent were not required.

Bacterial strain characterization
A subset of 226 strains from the collection of the Centre National de Référence des Staphylocoques (Lyon, France), composed of 103 strains of the main community- and hospital-acquired MRSA clones and 123 strains of MSSA, was used in this study.
They were sent to the laboratory for detection of toxin production in the context of nasal colonization, skin and soft tissue infection, pneumonia, bacteremia, or endocarditis. The S. aureus HT20030749 strain belonging to the clone Geraldine was isolated from the blood culture of a patient with bone-joint infection. All strains were genotyped as previously described. Briefly, bacterial DNA was extracted according to the manufacturer's recommended protocol using commercial extraction kits (Qiagen). The diagnostic DNA microarrays, identibac S. aureus Genotyping (Alere), used for this study, as well as related procedures and protocols, have been previously described in detail [40]. This microarray covers 332 different target sequences corresponding to approximately 185 distinct genes and their allelic variants. Isolates were assigned to CCs by comparison of hybridization profiles with those previously characterized using multilocus sequence typing reference strains [40].

Illumina sequencing. Genomic DNA was extracted from each isolate using a QIAcube extraction kit (Qiagen). The Nextera XT DNA preparation kit (Illumina) was used to generate sequencing libraries from 1 ng of DNA. Whole-genome sequencing was finally done with an Illumina HiSeq (Illumina, San Diego, CA, USA) to generate 150-bp paired-end reads.

De novo assembly. For each isolate, the raw paired-end reads were assembled using a modified version of the A5-miseq open-source pipeline [41]. This pipeline implements a complete sequencing data processing workflow from raw read cleaning to de novo assembly. The first task of the read cleaning involves removing the regions of the raw reads that are contaminated by adapters during the Nextera XT protocol, using the Trimmomatic program [42]. Then, the reads are filtered and trimmed according to quality and length criteria using Trimmomatic and the preprocess function of the String Graph Assembler (SGA) [43]. Finally, the correct function of SGA is used to correct errors in the reads by a k-mer frequency-based method. After being quality filtered and error corrected, the reads were assembled with the IDBA-UD program, which implements an iterative De Bruijn graph built with several values of k-mer lengths, from low to high, instead of a single value as in most de novo assemblers [44]. The reads are then mapped against the assembly using BWA-MEM [45] to polish the contigs at every position where basecalls differ between the mapping and the assembly. The scaffolding implemented at the end of the original A5-miseq pipeline was not performed because it provides no gain in the subsequent analysis of marker detection.

Characterization of SCCmec V and tirS regions. First, the three MRSA genomes were aligned against the strain MSSA476 genome using progressiveMauve with default parameters [46]. Then, identification of SCC elements was done by searching for the ISS of these elements [47,48], both at the end of the orfX/rlmH gene and further downstream. The newly detected SCCmec elements were annotated using the RAST Server (http://rast.nmpdr.org/), and blastn searches were performed (http://blast.ncbi.nlm.nih.gov/) against the nucleotide collection (nr/nt) using the megablast algorithm and the organism filter for S. aureus (taxid:1280).

tirS amplification
Oligonucleotide primers tirS-For-CTTCAAAAAGAGCAGTCTAGG and tirS-Rev-CTTCAACACTCACTTTATGCC were designed according to the tirS sequence.
After amplification for 30 cycles (30 s of denaturation at 94˚C, 30 s of annealing at 53˚C, and 30 s of extension at 72˚C), the PCR products were resolved by electrophoresis through 1.5% agarose gels (Sigma, Saint Quentin Fallavier, France). This step was followed by SYBR Safe DNA (Life Technologies) staining and analysis. To assess the specificity of tirS amplification, PCR products were subjected to DNA sequencing (Biofidal, Lyon, France). S. aureus HT20030749 and RN4220 strains were used as positive and negative amplification controls, respectively.

Transcription of tirS
tirS expression was analyzed using RT-qPCR. MRSA ST20121850 (CC1), MSSA ST20130407, ST20110167, ST20080979 (CC1), MSSA ST20121341 (CC8), MRSA ST20120331 (CC5, type IV), and MRSA ST20111318, ST20110610, HT20030749 (CC5, type V) were grown in fresh MHB at 37˚C after a 1:100 dilution of overnight cultures. For kinetics analysis, total RNA of two representative strains (MRSA HT20030749, CC5 and MSSA ST20130407, CC1) was purified at 2, 4, 6, and 8 h of growth as previously described [49]. Total RNA was extracted with the RNeasy Plus kit (Qiagen), including a gDNA eliminator column and an additional DNase treatment (Qiagen). RNA quality and quantity were determined by Bioanalyzer (Agilent) using RNA Nano chips and quantified using the ND-8000 (NanoDrop Technologies). Absence of DNA contamination was checked by using tirS-specific primers and probe at optimal concentrations (assessed as previously described [50], without the reverse-transcription step). The final concentrations were 0.2 μM for primers and probe (tirS-F-CTATTTGGCATAAAGTGAGTGTTGAAG, tirS-R-AAATCACTTGTATTCAATGCATACTTATCT, and tirS-P-CGTGCATACAACCCATAT, labeled with NED at the 5' end and a minor groove binder at the 3' end), and reactions were performed in a one-step RT-PCR enzymatic mixture (Agilent Technologies, Brilliant II QRT-PCR Master Mix Kit) in a final volume of 20 μl in the CFX96 system (Bio-Rad), following the manufacturer's instructions. Differences in Ct values between the tested transcripts and the hu signal were used for normalization, with the MHB medium condition at 2 h as a reference. The fold change was expressed as the inverse exponential of the difference between the reference (MHB) Ct and the stress condition Ct. This assay was also used on 2-h cultures to assess tirS expression under various stress conditions, such as the presence of 1/2 MIC of oxacillin for MSSA (5 μg/ml for MRSA) or 1/4 MIC of fusidic acid for MSSA and MRSA [49], on total RNA purified after 30 min of exposure. MICs were determined using CLSI recommendations [50], and the stress experiments were performed with drug concentrations showing minor impact on growth rate in control experiments (S3 Fig).

Construction of the tirS-deleted mutant
S. aureus RN4220 (lab strain collection) was used for plasmid amplification and genetic manipulations because it is a nitrosoguanidine-induced mutant capable of accepting E. coli DNA [51]. To delete the tirS gene, we performed allelic replacement using double-crossover recombination as previously described [52]. Using the primers and restriction enzymes listed in S1 Table, we generated two fragments of 930 bp 5' and 1029 bp 3' of tirS. These fragments were ligated 5' and 3' of the chloramphenicol acetyltransferase gene [53] and inserted into pMAD [54]. Plasmid and inserts were checked by PCR and sequencing using the primers listed in S1 Table.
Plasmid was introduced by electroporation into RN4220 and then into HT20030749 (Geraldine strain). We performed double-crossover recombination, yielding deletion of tirS in HT20030749 (Geraldine strain). The resulting mutant strain was called ΔtirS. Gene deletion was checked by PCR and sequencing using specific primers hybridizing with internal and external positions of the deleted region (S1 Table). Insert presence was checked with primers IngDNA_F and vG_CDS_1_R or IngDNA_F and vG_cat_1_R, yielding a negative PCR and a 2017-bp amplicon for the correct construct.

Construction of the tirS-restored strain. Total DNA and plasmid DNA were prepared with Qiagen kits (QIAamp DNA Mini Kit and QIAprep Spin Miniprep Kit) after lysostaphin lysis for S. aureus. When necessary, transformation of E. coli DH5α (Promega, Madison, USA) was performed by treatment with CaCl2, and S. aureus strains were transformed by electroporation (Bio-Rad Gene Pulser). The tirS-deleted strain was complemented by inserting the tirS gene sequence downstream of the leukocidin promoter P-lukS in the bacterial chromosome, using sequences homologous to NWMN_0029 and NWMN_0030 of the Newman strain (GenBank: AP009351.1) for chromosomal recombination. The NWMN_0029-NWMN_0030 region was amplified using primers New29-523 and New30-2371, digested with EcoRI and SalI, and cloned into a pBluescript vector (Stratagene) to obtain plasmid pLUG37. The tirS gene sequence was amplified and cloned between the P-lukS promoter region and the lukF-PV transcriptional terminator of the Panton-Valentine leukocidin genes (amplified using primers phi259/phi748 and phi2648/phi2819, respectively) on pBluescript. The whole DNA fragment obtained by SmaI digestion was cloned into pLUG37 at the natural EcoRV restriction site between the NWMN_0029 and NWMN_0030 sequences. From the resulting plasmid, the DNA fragment corresponding to NWMN_0030-PlukS-tirS-term lukF-NWMN_0029, obtained by EcoRI-SalI restriction, was cloned into the pMAD vector (pLUG1158). The resulting chromosomally restored strain was called ΔtirS + tirS. Expression of tirS in the restored strain was confirmed by RT-PCR. Oligonucleotide primers for PCR and cloned DNA subfragments are detailed in S2 Table.

Luciferase activity assay. HEK293T cells (American Type Culture Collection (ATCC), USA) were transiently transfected using Fugene (Roche) for 24 h, according to the manufacturer's instructions, with a total of 0.4 μg of DNA consisting of 50 ng of TLR plasmids, 200 ng of pBIIXLuc, a reporter plasmid containing luciferase under the control of two Igκ-κB sites [55], 5 ng of control Renilla luciferase (pRL-null, Promega), and 50 ng of myc-TirS expression vector, unless stated otherwise. In the case of TLR4, MD2 was co-transfected for efficient detection of LPS. When indicated, increasing amounts of vector (ng) were used for the transfections to obtain different levels of expression of TirS. In all cases, the total amount of DNA was kept constant by adding empty vector. The negative control corresponds to empty vector alone (pcDNA3.1). Where indicated, cells were treated with E. coli LPS (1 μg/ml), Pam2CSK4 (100 ng/ml), CpG ODN1826 (1 μM), and flagellin FLA-ST (1 μg/ml), all obtained from InvivoGen, for 6 h; cells were then lysed and luciferase activity measured using the Dual-Glo Luciferase Assay System (Promega). In the case of IL-1R and TNFR, endogenous detection was monitored following 6 h of stimulation with IL-1β (100 ng/ml) or TNF-α (100 ng/ml).
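For readers reproducing the dual-luciferase readout, the analysis reduces to a firefly/Renilla ratio expressed relative to the unstimulated empty-vector control; a minimal sketch with made-up numbers (not study data):

```python
# Dual-luciferase analysis sketch: firefly (NF-κB reporter) is normalized to
# Renilla (transfection control), then expressed relative to the empty-vector,
# unstimulated condition. All numbers are illustrative.
def nfkb_activity(firefly, renilla):
    return firefly / renilla

control    = nfkb_activity(firefly=1.2e4, renilla=3.0e4)  # empty vector, mock
stimulated = nfkb_activity(firefly=2.6e5, renilla=2.9e4)  # empty vector + LPS
with_tirS  = nfkb_activity(firefly=6.1e4, renilla=3.1e4)  # myc-TirS + LPS

print(f"LPS induction:  {stimulated / control:.1f}-fold")
print(f"... with TirS:  {with_tirS / control:.1f}-fold (inhibition)")
```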
The tirS construct was obtained with the following primers: tirS-fw-GGGGACAAGTTTGTACAAAAAAGCAGGCTTCTCAGTATTAGAAACTAAATTAAAAAG and tirS-rev-GGGGACCACTTTGTACAAGAAAGCTGGGTCCTAATTCTTAGAATTAACGATTACTTG; the product was cloned into the Gateway (Life Technologies) entry vector and subsequently into pDEST-Myc (Life Technologies) to create an N-terminal myc tag fusion with TirS.

Immunofluorescence microscopy. HeLa cells (ATCC, USA) were transfected with myc-tirS using Fugene (following the manufacturer's instructions) for 10 h. Cells were either fixed in 3% paraformaldehyde, pH 7.4, at room temperature for 15 min or placed in ice-cold methanol for 5 min. Cells were then permeabilized for 10 min with 0.1% saponin in PBS, followed by blocking for 1 h with 2% bovine serum albumin and 10% horse serum in PBS with 0.1% saponin. Primary antibodies were incubated for 1 h, followed by three washes in PBS, a 1-h incubation with secondary antibodies, two washes in PBS, and one wash in water before mounting with Prolong Gold. Primary antibodies used were rabbit anti-myc (Abcam) at 1/1000, with either mouse anti-beta tubulin (TUB 2.1) at 1/250 or mouse anti-vimentin (V9) at 1/100 (both Sigma). Secondary antibodies used were donkey anti-rabbit Alexa 488 and donkey anti-mouse Alexa 555 (Life Technologies) at 1/1000. The actin cytoskeleton was labeled with phalloidin 568 (Life Technologies). Samples were examined on a Zeiss 710 laser scanning confocal microscope for image acquisition. Images of 1024 × 1024 pixels were then assembled using ImageJ and Adobe Photoshop 7.0.

Growth curves of S. aureus strains. S. aureus Geraldine WT and ΔtirS strains were grown in fresh BHI or TSB medium at 35°C after dilution of overnight cultures to OD600 = 0.03. A thermostated microplate reader (TECAN M200 Infinite Pro) was used to follow bacterial growth by measuring OD600 every 15 min for 24 h. As controls, specific wells were inoculated with medium only. Experiments were done in triplicate.

Murine model of S. aureus subcutaneous infection. Bacterial isolates and growth. S. aureus Geraldine WT, ΔtirS, and ΔtirS + tirS strains were used in a murine model of skin infection. To prepare the inoculum for subcutaneous inoculation, bacteria were grown in BHI medium at 37°C for 8 h with constant shaking (200 rpm). They were washed twice and resuspended in sterile PBS, aliquoted to a final concentration of 1 × 10⁷ bacteria/ml, and stored at -80°C until use. For determination of bacterial titers, samples were serially diluted, plated on agar, and incubated overnight at 37°C. Viability of the inocula was confirmed by colony counts in each experiment. Moreover, to ensure that results were consistent across experiments, the hemolysis phenotype was checked on blood agar plates (Trypcase soy agar + 5% sheep blood, bioMérieux, France). Mouse strains. Mice on a C57BL/6 genetic background were used in all experiments. Six-week-old WT female mice were purchased from Charles River, France. Male and female MyD88-deficient mice were kindly provided by L. Genestier (from S. Akira's lab; [56]). All mice were maintained in pathogen-free conditions in a biosafety level 2 facility at the Plateau de Biologie Expérimentale de la Souris (PBES, Ecole Normale Supérieure de Lyon, Lyon, France). Skin lesion model. One day prior to infection, mice were prepared for inoculation. Animals were first anesthetized with 2% isoflurane, and a flank was shaved with an electric clipper and hair-remover cream (Nair, Church & Dwight Co. Inc., Princeton, NJ) applied to the shaved flank.
On the day of infection, mice were infected subcutaneously with 100 μl of the bacterial suspension (1 × 10⁶ CFUs) in the shaved area. This inoculum was determined in preliminary studies to produce consistent skin lesions. Mice were returned to their cages and observed until awake. All mice had free access to food and water throughout the experiments. Animals were weighed at 24-h intervals for 14 days. The lesion area was measured daily using an electronic caliper and calculated with the formula A = π × (L/2) × (W/2) (mm²). PBS injection was used as a control. Bacterial recovery and cytokine quantification in skin lesions. To determine the number of CFUs at the site of infection, a second set of mice was inoculated as described above. On days 5 to 9 after infection, mice were euthanized by cervical dislocation; the lesion and surrounding tissues were removed and transferred to sterile tubes. Tissue samples were weighed, homogenized in 1 ml of PBS (gentleMACS Dissociator, Miltenyi Biotec, Germany), diluted in sterile PBS, and plated on selective agar (ChromID S. aureus, bioMérieux, France). Enumeration of CFUs was performed 24 h later. For determination of cytokine and MPO levels, lesion homogenates were centrifuged, and the supernatant was removed and immediately stored at -80°C. Cytokine levels were determined using Luminex assays (Bio-Techne) according to the manufacturer's instructions. The amount of MPO in the skin lesions was quantified using a commercially available ELISA kit (Bio-Techne).

Statistical analysis. The data were analyzed using R software (http://www.r-project.org). Student's t test, the Wilcoxon test, or one-way ANOVA followed by multiple-comparison tests (Tukey) were used to compare tirS expression, luciferase activity, lesion size, cytokine levels, and bacterial CFUs between groups. Correlations were assessed by Spearman's correlation. In all experiments, values of p < 0.05 were considered statistically significant, with significance reported at the p < 0.05, p < 0.01, and p < 0.001 levels.

Abstract

Bacterial pathogens often subvert the innate immune system to establish a successful infection. The direct inhibition of downstream components of innate immune pathways is particularly well documented, but how bacteria interfere with receptor-proximal events is far less well understood. Here, we describe a Toll/interleukin 1 receptor (TIR) domain-containing protein (PumA) of the multi-drug resistant Pseudomonas aeruginosa PA7 strain. We found that PumA is essential for virulence and inhibits NF-κB, a property transferable to the non-PumA strain PA14, suggesting no additional factors are needed for PumA function. The TIR domain is able to interact with the Toll-like receptor (TLR) adaptors TIRAP and MyD88, as well as the ubiquitin-associated protein 1 (UBAP1), a component of the endosomal-sorting complex required for transport I (ESCRT-I). These interactions are not spatially exclusive, as we show UBAP1 can associate with MyD88, enhancing its plasma membrane localization. Combined targeting of UBAP1 and TLR adaptors by PumA impedes both cytokine and TLR receptor signalling, highlighting a novel strategy for innate immune evasion.

Introduction

Microbial pathogen recognition by innate immune receptors initiates a progression of molecular interactions and signalling events assuring host defence.
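The lesion-area formula and the group comparisons are straightforward to reproduce; a minimal sketch in Python (the study used R), with synthetic measurements and SciPy/statsmodels standing in for the R tests:

```python
# Lesion area A = π·(L/2)·(W/2) plus a one-way ANOVA with Tukey post-hoc test,
# mirroring the R analysis described above. Measurements are synthetic.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def lesion_area(length_mm, width_mm):
    return np.pi * (length_mm / 2) * (width_mm / 2)   # ellipse area, mm²

rng = np.random.default_rng(0)
wt    = lesion_area(rng.normal(9, 1, 8), rng.normal(6, 1, 8))
dtirs = lesion_area(rng.normal(7, 1, 8), rng.normal(5, 1, 8))
comp  = lesion_area(rng.normal(9, 1, 8), rng.normal(6, 1, 8))

print(stats.f_oneway(wt, dtirs, comp))                # one-way ANOVA
areas  = np.concatenate([wt, dtirs, comp])
groups = ["WT"] * 8 + ["dtirS"] * 8 + ["dtirS+tirS"] * 8
print(pairwise_tukeyhsd(areas, groups))               # Tukey multiple comparisons
```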
In bacterial infections, detection of surface components, such as peptidoglycan, lipopolysaccharides and flagellin, by Toll-like receptors (TLR) 2, 4 and 5, respectively, is essential for induction of pro-inflammatory cytokines and type I interferon (IFN) responses. Specific sorting and signalling adaptor proteins bridge activated receptors with downstream kinases to initiate signalling cascades via Toll/interleukin 1 receptor (TIR) domains present on both the adaptors and the cytosolic face of TLRs [START_REF] Brubaker | Innate immune pattern recognition: a cell biological perspective[END_REF]. Upon TLR2 or TLR4 activation, the TIR-containing adaptor protein (TIRAP) recruits myeloid differentiation primary response 88 (MyD88), which interacts with the TLR via its TIR domain [START_REF] Fitzgerald | Mal (MyD88-adapter-like) is required for toll-like receptor-4 signal transduction[END_REF][START_REF] Horng | TIRAP: an adapter molecule in the Toll signaling pathway[END_REF][START_REF] Kagan | Phosphoinositide-mediated adaptor recruitment controls Toll-like receptor signaling[END_REF]. MyD88 oligomerization and recruitment of specific kinases lead to the formation of myddosomes, signalling platforms that induce NF-κB translocation into the nucleus and subsequent transcription of pro-inflammatory associated genes [START_REF] Nagpal | Natural loss-of-function mutation of myeloid differentiation protein 88 disrupts its ability to form myddosomes[END_REF][START_REF] Bonham | A promiscuous lipid-binding protein diversifies the subcellular sites of toll-like receptor signal transduction[END_REF]. TLR4 activation also results in induction of a type I IFN response via another set of adaptors, TRAM and TRIF ([START_REF] Fitzgerald | Mal (MyD88-adapter-like) is required for toll-like receptor-4 signal transduction[END_REF]; Yamamoto et al, 2002; [START_REF] Kagan | TRAM couples endocytosis of Toll-like receptor 4 to the induction of interferon-beta[END_REF]). In the case of the MyD88-dependent TLR5, the identity of a sorting adaptor remains undefined and the role of TIRAP unclear, although TIRAP has been implicated in proper TLR5 signalling in epithelial cells [START_REF] Choi | PTEN regulates TLR5-induced intestinal inflammation by controlling Mal/TIRAP recruitment[END_REF]. Microbial pathogens have been shown to counter these host defence pathways. Most bacterial immune-modulatory proteins described to date rely on inhibition of downstream signalling components, such as MAP kinases and transcription factors (reviewed in Rosadini & Kagan, 2015). In contrast, few examples of direct blocking at the level of initial receptor-adaptor complexes are known. Some bacterial pathogens rely on TIR domain-containing proteins to perturb TIR-dependent interactions (Newman et al, 2006; Cirl et al, 2008; Salcedo et al, 2008, 2013), which are essential in innate immune signalling. The growing number of bacterial TIR proteins recently identified in both Gram-negative and Gram-positive human pathogens (Spear et al, 2012; Askarian et al, 2014; [START_REF] Zou | A TIR domain protein from E. faecalis attenuates MyD88-Mediated signaling and NF-κB activation[END_REF]) highlights the importance of this immune evasion strategy in disease. However, the molecular mechanisms underlying most TIR-dependent virulence strategies remain to be defined. We focused on a previously uncharacterized TIR domain-containing protein of the multi-drug resistant pathogen Pseudomonas aeruginosa PA7, which we called PumA. P.
aeruginosa PA7 lacks genes encoding the type III secretion system (T3SS) and its cognate effector proteins, which are normally associated with strong induction of cell death, a hallmark of acute P. aeruginosa infections (reviewed by [START_REF] Filloux | Protein secretion systems in Pseudomonas aeruginosa: an essay on diversity, evolution, and function[END_REF]). In addition, PA7 does not show high lytic capacity towards epithelial cells due to exolysin A (ExlA), as described for the haemorrhagic pneumonia-causing strain of the same family [START_REF] Elsen | A type III secretion negative clinical strain of Pseudomonas aeruginosa employs a two-partner secreted exolysin to induce hemorrhagic pneumonia[END_REF][START_REF] Reboud | Phenotype and toxicity of the recently discovered exlA-positive Pseudomonas aeruginosa strains collected worldwide[END_REF]. We thus took advantage of the absence of traditional virulence factors in this P. aeruginosa strain to study the molecular interactions involved in TIR-mediated bacterial targeting of events proximal to receptor-adaptor signalling complexes and to dissect PumA function. We found that the PumA Pseudomonas TIR domain-containing protein is essential for PA7 virulence, conferring on Pseudomonas a previously unrecognized ability to down-modulate innate immune responses during infection. We show that PumA directly interacts with both TIRAP and MyD88 to control TLR signalling. Uniquely, it also targets the ubiquitin-associated protein 1 (UBAP1), a recently discovered component of the endosomal-sorting complex required for transport I (ESCRT-I; [START_REF] Stefani | UBAP1 is a component of an endosome-specific ESCRT-I complex that is essential for MVB sorting[END_REF]). UBAP1 is known to play a key role in selective sorting of ubiquitinated endosomal cargo on multi-vesicular bodies (MVB), via its interaction with VPS37A and other components of ESCRT-I, namely TSG101 [START_REF] Wunderley | The molecular basis for selective assembly of the UBAP1-containing endosome-specific ESCRT-I complex[END_REF], as well as with the ESCRT regulator His domain protein tyrosine phosphatase (HDPTP; [START_REF] Stefani | UBAP1 is a component of an endosome-specific ESCRT-I complex that is essential for MVB sorting[END_REF]). UBAP1 has been shown to control endosomal sorting of ubiquitinated EGFR [START_REF] Stefani | UBAP1 is a component of an endosome-specific ESCRT-I complex that is essential for MVB sorting[END_REF] as well as ubiquitin-dependent degradation of antiviral surface proteins [START_REF] Agromayor | The UBAP1 subunit of ESCRT-I interacts with ubiquitin via a SOUBA domain[END_REF] and integrins [START_REF] Kharitidi | Interplay of endosomal pH and ligand occupancy in integrin α5β1 ubiquitination, endocytic sorting, and cell migration[END_REF]. More recently, UBAP1 was shown to modulate steady-state trafficking of cytokine receptors in non-stimulated cells [START_REF] Mamińska | ESCRT proteins restrict constitutive NF-κB signaling by trafficking cytokine receptors[END_REF]. UBAP1 is expressed in a wide range of tissues, but its deletion in mice is embryonically lethal [START_REF] Agromayor | The UBAP1 subunit of ESCRT-I interacts with ubiquitin via a SOUBA domain[END_REF].
We propose that this novel Pseudomonas effector modulates UBAP1 function, hence the name PumA (Pseudomonas UBAP1 modulator A), and that this targeting confers on the TIR domain-containing protein the distinctive ability to also interfere with cytokine receptor signalling. Targeting of both TLR adaptors and UBAP1 by PumA is not spatially restricted, as we found UBAP1 can associate with MyD88 in host cells. Our results thus highlight a novel role of bacterial TIR domains and place UBAP1 sorting in the context of TLR signalling.

Results

PumA is required for Pseudomonas aeruginosa PA7 virulence in vivo

In Pseudomonas, TIR domain-containing proteins were first identified in an in silico study of P. aeruginosa and the plant pathogen P. syringae (Zhang et al, 2011). Analysis of currently available genomes shows that several plant strains encode such proteins, as do additional human pathogenic strains of P. stutzeri and P. aeruginosa. The closest orthologue is found in the plant pathogen P. viridiflava. The TIR domain of PumA spans its first 136 amino acids (Appendix Fig S1A and B), with no significant sequence/structure homologies detected for the C-terminal domain (amino acids 137-303) and no signal peptide. Analysis of the PA7 genome shows that pumA (PSPA7_2375) lies within the genomic island RGP56, which displays a G+C content of 58.5% in contrast to the 66.5% average of the remaining genome. Interestingly, using Geneious [START_REF] Kearse | Geneious Basic: an integrated and extendable desktop software platform for the organization and analysis of sequence data[END_REF], we found the pumA gene itself has an even larger reduction in G+C content (46.6%) (Appendix Fig S1C), suggesting that it is not a conserved gene within its immediate genetic context. We assessed the potential role of PumA in virulence by infecting the nematode Caenorhabditis elegans, a well-established model for P. aeruginosa allowing rapid assessment of virulence [START_REF] Garvis | Caenorhabditis elegans semi-automated liquid screen reveals a specialized role for the chemotaxis Gene cheB2 in Pseudomonas aeruginosa virulence[END_REF]. Infection with the highly virulent strain P. aeruginosa PA14, which carries virulence factors such as the T3SS but no TIR protein, resulted in 50% lethality at day 5. The PA7 wild-type strain caused 50% lethality 7 days after inoculation. In contrast, the PA7 ΔpumA mutant showed a slight but significant attenuation of virulence in C. elegans (Fig 1A). These differences were not due to an in vitro growth defect of the mutant (Appendix Fig S2A) nor to a problem in expression of PumA in the wild-type P. aeruginosa PA7 strain (Appendix Fig S2B). We then used an acute in vivo infection model to evaluate the involvement of pumA in P. aeruginosa-induced lung injury. Mice infected with ΔpumA showed clearly increased survival compared to the wild-type strain (Fig 1B). A dose of 4 × 10⁷ CFU of PA7 induced 100% lethality after 52 h, against 62.5% survival after 96 h for the ΔpumA mutant. Bacterial clearance and cellular recruitment were then analysed with a lower inoculum of 3 × 10⁷ CFU. PA7 ΔpumA-infected mice showed decreased cell recruitment (Fig 1C) and enhanced lung bacterial clearance in bronchoalveolar lavages (BAL) compared to the wild-type strain (Fig 1D). Bacterial dissemination, measured by the spleen bacterial load, was equivalent between the two groups (Fig 1E). Together these results show that PumA is required for P. aeruginosa PA7 infection.
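The G+C contrast reported above (46.6% for pumA versus 66.5% genome-wide) is a simple per-sequence computation; a minimal sketch, with placeholder sequences rather than the PA7 genome:

```python
# G+C content comparison of a candidate horizontally acquired gene versus its
# host genome, as used above to flag pumA. Sequences are placeholders only.
def gc_content(seq):
    seq = seq.upper()
    return 100 * (seq.count("G") + seq.count("C")) / len(seq)

genome = "GCGGCCGCATGCCGGC" * 1000 + "ATATTATA" * 100   # fake high-GC genome
gene   = "ATGGCATTAAATACAGATTGA" * 40                   # fake low-GC gene

print(f"genome G+C: {gc_content(genome):.1f}%")
print(f"gene   G+C: {gc_content(gene):.1f}%  (a large deficit suggests recent acquisition)")
```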
PumA inhibits NF-κB translocation into the nucleus during infection in vitro

As bacterial TIR proteins down-modulate NF-κB activation (Newman et al, 2006; Cirl et al, 2008; Salcedo et al, 2008, 2013; Spear et al, 2012; Askarian et al, 2014; [START_REF] Zou | A TIR domain protein from E. faecalis attenuates MyD88-Mediated signaling and NF-κB activation[END_REF]), we infected the lung carcinoma epithelial cell line A549, a well-established cellular model for Pseudomonas infection, and analysed NF-κB translocation into the nucleus after one hour of infection by confocal microscopy. We developed an automated analysis of p65/RelA fluorescence in relation to DAPI labelling, using a specific ImageJ plugin on images obtained by confocal microscopy (Fig EV1A), which allowed us to clearly differentiate between TNF-α-treated and mock-infected cells (Figs 2A and EV1B). Infection with the three heat-killed P. aeruginosa strains (wild-type PA7, the isogenic ΔpumA mutant, or wild-type PA14) resulted in significant induction of NF-κB translocation into the nucleus, although to a lower level than in TNF-α-treated cells (Fig 2A). When cells were infected with live PA7, there was no significant induction of NF-κB nuclear translocation. We then investigated whether expression of PumA alone could confer the ability to block NF-κB translocation to a different Pseudomonas strain. We chose PA14, which does not contain pumA and is known to be more virulent due to several virulence factors, namely those secreted by the T3SS. As expected, cells infected with wild-type PA14 showed high levels of NF-κB translocation into the nucleus (Fig 2B). Induction of pumA from the PA14 chromosome, which did not impact membrane permeability (Fig EV2A), was sufficient to enable this highly virulent strain to block NF-κB accumulation in the nucleus of infected cells (Fig 2B). These data indicate that PumA expression in P. aeruginosa is necessary and sufficient for NF-κB inhibition, highlighting its central role in immune evasion.

PumA translocation into host cells during infection in vitro

We next sought to determine whether PumA could be secreted by Pseudomonas. Fractionation of bacterial cells grown in liquid culture indicated that PumA is mostly cytoplasmic and, to a lesser extent, associated with the inner membrane (Appendix Fig S3A). The protein was detected neither in the outer membrane fractions nor in the supernatant, indicating an absence of secretion into the extracellular milieu in vitro. To determine whether PumA could be found inside host cells during infection, we fused chromosomal PumA with the TEM1 β-lactamase. Although the presence of other β-lactamases in PA7 and/or potential bacterial lysis resulted in non-specific cleavage of the CCF2 substrate within host cells infected with the wild-type strain, significant levels of coumarin-fluorescing cells following infection with a strain containing PumA-TEM1 (Appendix Fig S3B) suggest that PumA is translocated into host cells during infection.

PumA is associated with both TIRAP and MyD88 at the plasma membrane and intracellular compartments

To determine the mechanism by which PumA interferes with NF-κB activity, PumA was expressed in mammalian cells. We found PumA localized mostly at the plasma membrane, with some intracellular distribution, independently of the tag, in both immortalized HeLa cells (Fig EV3A, top panel) and primary mouse embryonic fibroblasts (MEFs, Fig EV3B).
As this localization was reminiscent of that of the TLR adaptor TIRAP (Fig EV3A and B), we co-transfected cells with PumA and TIRAP. We found extensive co-localization, in particular at the plasma membrane, in both HeLa cells and MEFs (Fig 3A and B). We observed these results with any combination of tags (HA, Myc or GFP) for both proteins. In contrast with TIRAP, MyD88 is mostly localized in intracellular structures and does not label the plasma membrane. We therefore co-expressed MyD88 with PumA. Surprisingly, we found enrichment of PumA in a proportion of MyD88-positive structures in both cell types, although to a lesser extent than observed with TIRAP (Fig 3A and B). These results were confirmed by structured illumination microscopy (SIM) to enable imaging at higher resolution (Fig 3C and D).

The TIR domain of PumA is responsible for interaction with both TIRAP and MyD88

We then investigated whether the TIR domain present in the first 136 amino acids of PumA was responsible for membrane targeting. PumA1-136 was also efficiently targeted to the plasma membrane (Fig 4A). However, unlike TIRAP and another bacterial TIR protein, BtpA/TcpB, which are known to interact with specific phospholipids of the plasma membrane ([START_REF] Kagan | Phosphoinositide-mediated adaptor recruitment controls Toll-like receptor signaling[END_REF]; Radhakrishnan et al, 2009), PumA and PumA1-136 did not show any lipid-binding properties when incubated with phosphoinositide phosphate strips (Fig 4B). We then tested whether PumA could interact with TIRAP, which could explain its membrane localization. We found that TIRAP-GFP and Myc-PumA co-immunoprecipitated (co-IP), suggesting TIRAP and PumA could be part of the same complex (Fig 4C). This association was confirmed using purified His-tagged PumA or PumA1-136 immobilized on Ni-NTA resin, both of which retained HA-TIRAP (Fig 4D). As we had also observed enrichment of PumA in MyD88-positive compartments, we investigated whether PumA could interact with this adaptor protein. Although we did not observe an interaction between GFP-MyD88 and Myc-PumA (…), these experiments point to the TIR domain of PumA, which mediates interaction with these adaptor proteins.

PumA interacts with the ESCRT-I component UBAP1

As PumA was able to interact with both TIRAP and MyD88, two key adaptors in TLR signalling, we hypothesized that PumA's function was to block all immune pathways dependent on these adaptors. Using an in vitro luciferase assay, we tested key immune receptors implicated in Pseudomonas infection. Surprisingly, we found PumA could block not only TLR4, TLR5 and IL-1β signalling but also the TNF receptor pathway. While we were conducting this work, another study reported that UBAP1 participates in the control of TNFR1 and other cytokine receptor trafficking [START_REF] Mamińska | ESCRT proteins restrict constitutive NF-κB signaling by trafficking cytokine receptors[END_REF]. Our data, along with this recently published study, thus suggest that PumA interaction with UBAP1 results in inhibition of the TNF receptor-mediated pathway. To determine whether PumA was targeting two different types of cellular compartments, one with UBAP1 controlling the TNFR pathway and another containing TLR adaptors, we analysed whether UBAP1 was excluded from TIRAP- and MyD88-containing compartments. We first analysed their intracellular localization following transfection, as we were not able to detect endogenous UBAP1 with currently available antibodies.
Extensive co-localization was observed at the plasma membrane and in intracellular structures when co-expressing UBAP1 and TIRAP (Fig 7A), with no visible impact on the normal distribution of TIRAP. However, in the case of MyD88, co-expression with UBAP1 resulted in accumulation of this adaptor at the plasma membrane, not seen in cells expressing MyD88 alone (Fig 7A, bottom panel, and B). Quantification of membrane enrichment of MyD88 in cells expressing UBAP1 showed that MyD88 membrane association was even more striking in the presence of UBAP1 (Fig 7C) than that observed when co-expressing TIRAP with MyD88 [START_REF] Kagan | Phosphoinositide-mediated adaptor recruitment controls Toll-like receptor signaling[END_REF], suggesting UBAP1 could participate in MyD88 intracellular sorting. To determine whether UBAP1 could interact with these TLR adaptors, we carried out biochemical analysis of cells co-expressing UBAP1 and either TIRAP or MyD88. We could efficiently detect an interaction between UBAP1 and MyD88 by co-IP (Appendix Fig S5D and E) but not between UBAP1 and membrin (Appendix Fig S5E), used as a control eukaryotic protein with the same tag. In the case of TIRAP, the co-IP was much less efficient (Appendix Fig S5D). These results suggest that UBAP1 may be associated with MyD88-containing compartments and, to a lesser extent, TIRAP, consistent with our microscopy observations. To confirm these results and ensure these interactions were taking place with UBAP1 in the context of ESCRT-I, we determined whether MyD88 and TIRAP could interact with endogenous UBAP1 and TSG101. We found that HA-MyD88 co-immunoprecipitated both components of ESCRT-I as well as endogenous TIRAP (Fig 7D), as expected. However, we did not observe an interaction between HA-TIRAP and endogenous UBAP1 or TSG101 (Fig 7E), suggesting that only MyD88 can be found associated with ESCRT-I. Overall, these data suggest that PumA mediates interactions with UBAP1 in the context of ESCRT-I, which can itself associate with the TLR adaptor MyD88, also targeted by this P. aeruginosa effector protein.

Pseudomonas aeruginosa PA7 induces a decrease of TNFR1 in a PumA-dependent manner during infection in vitro

It is well described in the literature that inhibition of UBAP1 induces intracellular accumulation of EGFR, LTβR and TNFR1 [START_REF] Stefani | UBAP1 is a component of an endosome-specific ESCRT-I complex that is essential for MVB sorting[END_REF][START_REF] Mamińska | ESCRT proteins restrict constitutive NF-κB signaling by trafficking cytokine receptors[END_REF]. To establish a link between PumA interaction with UBAP1 and the ability of PumA to reduce TNF-α-dependent signalling (Fig EV5A), we analysed the levels of TNFR1 during infection. In wild-type PA7-infected A549 cells, we observed a decrease of TNFR1 compared to the negative control (Fig 7F). In contrast, the mutant lacking PumA was not able to reduce the levels of TNFR1 in infected cells, and this phenotype could be fully restored by expression of PumA in the complemented strain. This is consistent with PumA targeting UBAP1 and enhancing its activity during infection in vitro. Interestingly, we did not see any impact on the overall levels of TIRAP during infection (Fig 7F), suggesting that PumA does not induce TIRAP degradation, as was previously reported for BtpA (Sengupta et al, 2010).

Discussion

Many pathogens have developed sophisticated strategies to evade or modify host immune responses to their advantage.
We have found that the TIR domain-containing protein PumA plays a major role in the virulence of the multi-drug resistant P. aeruginosa PA7 strain. PumA ensures efficient inhibition of innate immune responses by interacting with MyD88 and TIRAP, key adaptor proteins for IL-1R and the main TLRs relevant in Pseudomonas infection (TLR4 and TLR5), as well as with UBAP1, which regulates cytokine receptor pathways. These results identify UBAP1 as a novel cellular target for bacterial pathogens. Pseudomonas aeruginosa is an important human pathogen associated with a high level of mortality in nosocomial infections and cystic fibrosis patients. Most P. aeruginosa strains rely on a multitude of virulence factors to control host cellular pathways, including effectors delivered by the T3SS. However, in a cystic fibrosis context, colonizing strains modulate the expression levels of some of these virulence factors [START_REF] Hauser | Clinical significance of microbial infection and adaptation in cystic fibrosis[END_REF], namely downregulation of the T3SS [START_REF] Jain | Type III secretion phenotypes of Pseudomonas aeruginosa strains change during infection of individuals with cystic fibrosis[END_REF], and undergo a remarkable accumulation of pathoadaptive mutations [START_REF] Marvig | Convergent evolution and adaptation of Pseudomonas aeruginosa within patients with cystic fibrosis[END_REF]. PA7-related P. aeruginosa strains lack the 20-kb genomic region encoding the T3SS core components and all genes encoding secreted effectors but contain several additional genomic islands and potential novel virulence factors [START_REF] Pirnay | Pseudomonas aeruginosa population structure revisited[END_REF][START_REF] Roy | Complete genome sequence of the multiresistant taxonomic outlier Pseudomonas aeruginosa PA7[END_REF][START_REF] Cadoret | Txc, a new type II secretion system of Pseudomonas aeruginosa strain PA7, is regulated by the TtsS/TtsR two-component system and directs specific secretion of the CbpE chitin-binding protein[END_REF][START_REF] Freschi | Clinical utilization of genomics data produced by the international Pseudomonas aeruginosa consortium[END_REF]. In some of these strains, such as CLJ1, an exolysin secreted by a two-partner secretion system is responsible for hypervirulence [START_REF] Elsen | A type III secretion negative clinical strain of Pseudomonas aeruginosa employs a two-partner secreted exolysin to induce hemorrhagic pneumonia[END_REF]. However, in PA7, this exolysin is detected at only low levels in the secretome and is not responsible for cytotoxicity [START_REF] Reboud | Phenotype and toxicity of the recently discovered exlA-positive Pseudomonas aeruginosa strains collected worldwide[END_REF], suggesting an alternative pathogenicity mechanism. In this context, we hypothesize that PumA might underlie an alternative pathogenicity mechanism allowing PA7 persistence within a host. Consistently, we observed a clear attenuation of virulence for a PA7 strain lacking pumA in both C. elegans and a mouse lung infection model. Interestingly, no impact on the ability of the pumA mutant to disseminate systemically was observed, suggesting a role in the control of local pathology. This type of P. aeruginosa infection, based on persistence and colonization rather than rapid cytotoxicity, could be relevant in specific clinical contexts such as infection of wound and burn patients, aggravated by the high level of multi-drug resistance.
It is interesting to note that other Pseudomonas strains contain a TIR domain protein, namely several strains pathogenic in plants. In this context, it will be interesting to analyse the role of the orthologous TIR protein of the plant pathogens P. syringae or P. viridiflava, with over 90% amino acid identity to the PumA TIR domain, in the control of plant responses, as these functions may be relevant across taxonomic kingdoms. Pseudomonas is not the only bacterial pathogen to take advantage of the TIR domain to engage the TIR-TIR interactions that are essential components of innate immune signalling. Bacterial targeting of TLRs has been best described for uropathogenic E. coli TcpC (Cirl et al, 2008) and Brucella BtpA, also known as TcpB (Cirl et al, 2008; Salcedo et al, 2008), even though their molecular mode of action remains elusive. Brucella relies on an additional TIR protein, BtpB, to down-modulate inflammation during infection (Salcedo et al, 2013). TcpC was shown to interfere with MyD88-dependent and -independent pathways to down-modulate TLR signalling and contribute to kidney pathology (Cirl et al, 2008; Yadav et al, 2010). In the case of Brucella, BtpA/TcpB has been described as a mimic of TIRAP, since it can directly bind specific phosphoinositides of the plasma membrane (Radhakrishnan et al, 2009). This is clearly distinct from PumA, which shows no significant lipid-binding properties. In addition, BtpA/TcpB was also shown to bind TIRAP, resulting in its increased ubiquitination and degradation during infection (Sengupta et al, 2010); this also differs from PumA, which binds TIRAP without inducing its degradation. Several studies have since disputed the precise target of BtpA/TcpB, with some proposing preferential binding to MyD88 (Chaudhary et al, 2011). One key question that remains unanswered is how these bacterial TIR proteins enter host cells and where they localize during infection. No direct imaging of bacterial TIR proteins has been described. In the case of TcpC, internalization into host cells was observed, but the export mechanism was not identified (Cirl et al, 2008), whereas no data are available for Salmonella, Yersinia, Staphylococcus and Enterococcus. In the case of Brucella, depending on the fusion tags, translocation into host cells of BtpA/TcpB or BtpB was dependent or independent of the T4SS (Salcedo et al, 2013), whereas a separate group has proposed that BtpA/TcpB is cell permeable and may enter host cells in a passive manner [START_REF] Radhakrishnan | Biochemical and functional analysis of TIR domain containing protein from Brucella melitensis[END_REF]. Unfortunately, PumA fusion with CyaA resulted in its cleavage, preventing us from using this system. Using different fluorescent tags and the specific anti-PumA antibody, we were not able to confidently visualize PumA inside host cells during infection. We were, however, able to detect intracellular PumA using a TEM1 fusion. Further work is needed to confirm translocation of PumA into host cells and to define its intracellular location during infection. PumA was also not found in the bacterial culture supernatant in vitro, suggesting that contact-dependent delivery is involved. How PumA enters host cells will have to be further investigated, but since the T3SS is not present in PumA-encoding strains, host cell delivery would require an alternative secretion pathway.
It is important to note that TIR domains are widespread in multicellular organisms, such as plants (role in disease resistance) and amoebae (dual role in ingestion of bacteria and immune-like functions), as well as in numerous bacterial genera that include cyanobacteria and other non-pathogenic bacteria (Zhang et al, 2011). This suggests these domains have evolved as an essential protein-protein interaction platform that could have additional functions. Indeed, the TIR domain of TcpC has recently been shown to directly interact with the NACHT, leucine-rich repeat and PYD-containing protein 3 (NLRP3) inflammasome and caspase-1, besides MyD88, to perturb inflammasome activation (Waldhuber et al, 2016). There are also additional potential targets yet to be identified for BtpA/TcpB, since it interferes with microtubule dynamics (Radhakrishnan et al, 2011; Felix et al, 2014) and induces the unfolded protein response (Smith et al, 2013).

[Figure legend: Western blot of TNFR1 in A549 cells infected for 1 h with either P. aeruginosa PA7 wt, ΔpumA or ΔpumA:pumA (Ara) induced with 1% arabinose; a mock-infected sample was included as a negative control; the same blot was also probed for TIRAP and actin to control loading. Source data are available online for this figure.]

This notion that bacterial TIR domains provide a broad interaction platform is supported by our observations. We found that, in addition to directly interacting with TIRAP and MyD88, PumA also targets the ESCRT-I machinery by binding UBAP1, as PumA could co-immunoprecipitate endogenous UBAP1 and TSG101. All these interactions seem to be mediated by the TIR domain of PumA, but endogenous co-IP experiments showed that full-length PumA is required for efficient interactions to occur. It is likely that TIR-TIR interactions take place with TIRAP and MyD88. In the case of UBAP1, the PumA-interacting domain remains to be identified. All yeast two-hybrid preys identified in our screen encoded a region containing amino acids 45-164, located between two key functional domains: the N-terminal UBAP1-MVB12-associated (UMA) domain (residues 17-63), which binds the central stalk of ESCRT-I Vps37, and the central domain (residues 159-308), containing the recently identified key binding site for HDPTP, which can act as a cargo adaptor [START_REF] Gahloth | Structural basis for selective interaction between the ESCRT regulator HD-PTP and UBAP1[END_REF]. The C-terminal portion of UBAP1 includes a SOUBA domain (residues 381-502) known to bind ubiquitin [START_REF] Agromayor | The UBAP1 subunit of ESCRT-I interacts with ubiquitin via a SOUBA domain[END_REF]. UBAP1 is a key component of ESCRT-I that enables sorting of ubiquitinated cargo on MVBs. PumA may bind an intermediate region of UBAP1 that could partially overlap with the HDPTP-interacting region. Further work is now necessary to confirm this hypothesis. In view of the recent work implicating UBAP1 in restriction of constitutive NF-κB signalling (Mamińska et al, 2016), PumA could impact the activation of the TNFR pathway through UBAP1.
Depletion of UBAP1 has been shown to induce intracellular accumulation of cytokine receptors in endosomal compartments [START_REF] Mamińska | ESCRT proteins restrict constitutive NF-κB signaling by trafficking cytokine receptors[END_REF], which leads to an increase in constitutive NF-κB levels, since UBAP1 can no longer ensure proper steady-state sorting and subsequent degradation of cytokine receptors (such as LTβR and TNFR1). Since in vitro experiments suggest PumA blocks the TNF receptor-mediated pathway, PumA could be enhancing the activity of UBAP1. This phenotype is specific to PumA, since we observed no effect of another bacterial TIR domain-containing protein, BtpA/TcpB, which does not interact with UBAP1 and whose ectopic expression does not inhibit the TNF-induced pathway. Consistent with our hypothesis, wild-type PA7 decreases the levels of TNFR1 in A549 cells in a PumA-dependent manner, suggesting that targeting of UBAP1 occurs during infection and could enhance its activity. In an attempt to determine whether distinct intracellular locations were targeted by PumA to enable interaction with TLR adaptors and the ESCRT-I component UBAP1, we analysed whether UBAP1 was excluded from TIRAP- or MyD88-enriched compartments. Surprisingly, co-IP experiments revealed that endogenous UBAP1 itself and TSG101 could be found associated with MyD88 but not TIRAP, suggesting that the ESCRT-I machinery may interact with specific TLR adaptors. We therefore propose that additional crosstalk between these pathways may exist. MyD88 has been shown to interact with TLRs and with TIRAP via its TIR domain or its death domain. It remains to be demonstrated whether UBAP1 interacts directly with MyD88, but our data strongly suggest they can be found in the same complex, namely at the plasma membrane. Interestingly, co-expression of MyD88 and UBAP1 resulted in enhanced MyD88 plasma membrane targeting, to higher levels than previously described for TIRAP [START_REF] Kagan | Phosphoinositide-mediated adaptor recruitment controls Toll-like receptor signaling[END_REF]. Further work is required to determine whether UBAP1 interaction with MyD88 promotes activation of TLR signalling and whether PumA could disrupt this interaction. A few studies have suggested the implication of ESCRT-I or MVBs in the control of TLR pathways. In Drosophila, ESCRT-0 components modulate endosomal sorting of Toll [START_REF] Husebye | Endocytic pathways regulate Toll-like receptor 4 signaling and link innate and adaptive immunity[END_REF][START_REF] Huang | Endocytic pathway is required for Drosophila Toll innate immune signaling[END_REF]. ESCRTs have also been shown to negatively regulate TLR7 and TLR9 to enable recycling of these receptors following ubiquitination [START_REF] Chiang | Cofactors required for TLR7-and TLR9-dependent innate immune responses[END_REF]. More interestingly, inhibition of endosomal sorting via ESCRT-I increases LPS-induced signalling [START_REF] Husebye | Endocytic pathways regulate Toll-like receptor 4 signaling and link innate and adaptive immunity[END_REF], suggesting a role in the sorting and degradation of activated receptor complexes. In conclusion, our study describes a P. aeruginosa effector, PumA, that targets UBAP1 in the context of ESCRT-I and plays a major role in virulence. In addition, our data associate UBAP1 with MyD88, highlighting a potentially larger role of endosomal sorting by ESCRT-I in the regulation of TLR signalling.
Materials and Methods

Strains

Pseudomonas aeruginosa strains used in this study were wild-type PA7, PA14 or derived strains and were routinely cultured in liquid Luria-Bertani (LB) medium. Antibiotics were added to P. aeruginosa cultures, when appropriate, at the following concentrations: 150 μg/ml tetracycline and 750 μg/ml carbenicillin. When indicated, arabinose at 1% or glucose at 0.5% was added to cultures. For Escherichia coli cultures, antibiotics were added when necessary at the following concentrations: 50 μg/ml kanamycin and 50 μg/ml ampicillin.

Construction of Pseudomonas ΔpumA mutant and complemented strains

The 500 base pairs upstream and 500 base pairs downstream of the pumA gene (PSPA7_2375; NC_009656.1) were amplified from P. aeruginosa PA7 genomic DNA for overlapping PCR, using primers 5′-TTTGGGCCCAAGACGATCAGCGGCACC-3′, 5′-ATCGGCTCTGCCCTATGCCATCTTTTTAACTCCATCCTTGTAATTCC-3′, 5′-GGATGGAGTTAAAAAGATGGCATAGGGCAGAGCCGAT-3′ and 5′-TTTTGATCACAACTACCCCGATGCGTT-3′, respectively. The PCR product was then sub-cloned into pGEM®-T Easy Vector (Promega) and ligated into pKNG208 [START_REF] Cadoret | Txc, a new type II secretion system of Pseudomonas aeruginosa strain PA7, is regulated by the TtsS/TtsR two-component system and directs specific secretion of the CbpE chitin-binding protein[END_REF] following digestion with SpeI and ApaI to generate pKNG208-ΔpumA. This plasmid was introduced into P. aeruginosa PA7 by conjugation, where it is incapable of autonomous replication. Homologous recombination events were first selected using tetracycline resistance (150 μg/ml) on Pseudomonas isolation agar (PIA) plates and then counter-selected using 6% sucrose sensitivity on LB agar plates for 2-3 days at room temperature. PCR and sequencing analyses confirmed that the wild-type pumA gene was deleted, and Western blotting showed absence of PumA production in the PA7 ΔpumA strain (Appendix Fig S2B). The mini-CTX-PBAD plasmid was constructed by cloning the SalI-AraC-PBAD-SacI fragment from the pJN105 vector [START_REF] Newman | Broad-host-range expression vectors that carry the L-arabinose-inducible Escherichia coli araBAD promoter and the araC regulator[END_REF] into the 6711-bp SalI/SacI DNA fragment from the miniCTX-lacZ vector [START_REF] Hoang | A broad-host-range Flp-FRT recombination system for site-specific excision of chromosomally-located DNA sequences: application for isolation of unmarked Pseudomonas aeruginosa mutants[END_REF]. The PSPA7_2375 gene was amplified with an artificial Shine-Dalgarno sequence (AAGAAG) and cloned into mini-CTX-PBAD digested with SpeI/SacI using the SLIC method [START_REF] Jeong | One-step sequence- and ligation-independent cloning as a rapid and versatile cloning method for functional genomics studies[END_REF]. Primers used were 5′-AGCCCGGGGGATCCACTAGTAGGAGGTGAGATATACAATGGCGGTCTTCATTAGTTA-3′ and 5′-ACCATCCAGTGCAGGAGCTCCTATGCGCGCGGCCACGGG-3′.

Construction of the PA7 pumA::bla1 strain

The 500 base pairs upstream and downstream of the pumA stop codon from P. aeruginosa PA7 genomic DNA and the blaM gene from the pJC121 plasmid (Myeni et al, 2013) were PCR amplified using primers 5′-ATTACGCGTTAACCCGGGCCCAGGATGTTGACGGCTATC-3′, 5′-CAGCGTTTCTGGTGCGCGCGGCCACGG-3′, 5′-CTGATTAAGTAGGGCAGAGCCGATCAGCTC-3′, 5′-ACACTGGCGGCCGTTACTAGTGCTGGACTGGCGCAACTA-3′, 5′-TGGCCGCGCGCACCAGAAACGCTGGTGAAA-3′ and 5′-ATCGGCTCTGCCCTACTTAATCAGTGAGGCACCT-3′ and used in overlapping PCR.
The DNA product was then cloned by the SLIC method [START_REF] Jeong | One-step sequence- and ligation-independent cloning as a rapid and versatile cloning method for functional genomics studies[END_REF] into pKNG208 [START_REF] Cadoret | Txc, a new type II secretion system of Pseudomonas aeruginosa strain PA7, is regulated by the TtsS/TtsR two-component system and directs specific secretion of the CbpE chitin-binding protein[END_REF] digested with ApaI/SpeI to generate the pKNG208-pumA::bla1 vector.

Construction of eukaryotic expression vectors

The PumA constructs were obtained by cloning into the Gateway pDONR™ vector (Life Technologies) and then into the pENTRY Myc, HA or GFP vectors. The following primers were used: 5′-GGGGACAAGTTTGTACAAAAAAGCAGGCTTCGCGGTCTTCATTAGTTATTCCCACG-3′ and 5′-GGGGACCACTTTGTACAAGAAAGCTGGGTCCTATGCGCGCGGCCACGGGGTAGC-3′. PumA1-136 was constructed with the following primers: 5′-GGGGACAAGTTTGTACAAAAAAGCAGGCTTCATGGCGGTCTTCATTAGTTATTCC-3′ and 5′-GGGGACCACTTTGTACAAGAAAGCTGGGTCCTAACGGGACTGATCAGGATTAGAG-3′; PumA137-303 with 5′-GGGGACAAGTTTGTACAAAAAAGCAGGCTTCATTGAGGATGTTGACGGCTA-3′ and 5′-GGGGACCACTTTGTACAAGAAAGCTGGGTCCTATGCGCGCGGCCACGGGGTAGC-3′.

Construction of prokaryotic expression vectors

Full-length P. aeruginosa PA7 pumA and its TIR domain (residues 1-136) were cloned into pET151/D-Topo (Invitrogen), which carries the T7 promoter, N-terminal 6xHis and V5 tags, a tobacco etch virus (TEV) protease recognition site, and an ampicillin resistance gene. The following primers were used: 5′-CACCATGGCGGTCTTCATTAGTTATTCC-3′ and 5′-TGATCGGCTCTGCCCTATGC-3′ for pumA; the same forward primer and 5′-CTAACGGGACTGATCAGGATTAGAG-3′ for the pumA TIR domain. BtpA was cloned into this same vector. The HA-TIRAP and HA-MyD88 vectors were used as templates to clone TIRAP and MyD88, respectively, into the pRSF-MBP vector. This vector corresponds to pRSFDuet-1 (Novagen) modified to insert 6xHis-MBP from the pETM-41 vector (EMBL) behind the multiple cloning site.

Cell culture and transfections

HeLa, HEK 293T and A549 cells (all obtained from ATCC) were grown in DMEM supplemented with 10% foetal calf serum (FCS). Mouse embryonic fibroblasts were prepared as described previously [START_REF] Conner | Mouse embryo fibroblast (MEF) feeder cell preparation[END_REF] and maintained in DMEM supplemented with 10% FCS. All cells were transiently transfected using Fugene (Roche) for 24 h, according to the manufacturer's instructions.

Pseudomonas infection of A549 cells

For adhesion assays and microscopy analysis of NF-κB, cells were first seeded into 24-well tissue culture plates at 2 × 10⁵ cells/well (to obtain a monolayer) or 5 × 10⁴ cells/well, respectively. Cells were infected with overnight cultures at an MOI of 10 or 100 of P. aeruginosa in 500 μl of complete medium per well. Plates were centrifuged at 400 × g for 5 min and then incubated for 1 h at 37°C in a 5% CO2 atmosphere. Cells were then washed five times with DMEM and either lysed or fixed. In the case of the cytotoxicity assays, cells were incubated for longer periods in complete medium. When indicated, arabinose at 1% or glucose at 0.5% was added. For NF-κB experiments, exponential-phase cultures were also used, but no differences were detected. After 1 h, the medium was removed and cells were washed twice with ice-cold PBS. Control samples were always performed by incubating cells with mock inocula, following exactly the same procedure as for infection.
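The Gateway primers listed in the eukaryotic-vector section above all share the attB1/attB2 recombination adapters; a quick sanity check one can run on such primer sets (the adapter prefixes below are the standard Gateway attB1/attB2 sequences, and the primer strings are copied from the text):

```python
# Check that Gateway cloning primers carry the expected attB adapters.
ATTB1 = "GGGGACAAGTTTGTACAAAAAAGCAGGCT"   # standard attB1 prefix
ATTB2 = "GGGGACCACTTTGTACAAGAAAGCTGGGT"   # standard attB2 prefix

fwd = "GGGGACAAGTTTGTACAAAAAAGCAGGCTTCGCGGTCTTCATTAGTTATTCCCACG"
rev = "GGGGACCACTTTGTACAAGAAAGCTGGGTCCTATGCGCGCGGCCACGGGGTAGC"

for name, primer, adapter in [("forward", fwd, ATTB1), ("reverse", rev, ATTB2)]:
    assert primer.startswith(adapter), f"{name} primer lacks its attB adapter"
    print(f"{name}: attB OK, gene-specific part = {primer[len(adapter):]}")
```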
For adhesion assays, cells were lysed with 500 μl of 0.1% Triton solution and pipetted vigorously several times. Lysed samples were harvested, and serial 10-fold dilutions in PBS were plated on LB agar to enumerate CFUs. For Western blot analysis of TNFR1, cells were seeded in six-well plates at 2 × 10⁵ cells/well and infected as described above. At 1 h post-infection, cells were washed twice with ice-cold PBS, collected and lysed directly in loading buffer. For each sample, six wells were pooled. Cell cytotoxicity exerted by bacteria was quantified with the Cytotoxicity Detection Kit-LDH (Roche), which measures the activity of cellular lactate dehydrogenase released into the supernatant. The assays were performed according to the manufacturer's instructions. For propidium iodide staining, A549 cells were maintained in DMEM supplemented with 10% FCS. Cells were seeded at 1 × 10⁵ cells/ml in 96-well plates to achieve confluent monolayers. Cells were then infected with overnight cultures of P. aeruginosa or mutants, supplemented with arabinose to a final concentration of 2% (as indicated), at an MOI of 100. The plates were centrifuged at 400 × g for 5 min and incubated for 1 h at 37°C. After 1 h of infection, cells were washed three times with PBS and then incubated with complete medium (without phenol red) containing propidium iodide, and labelling was measured every 15 min for 6 h with a Tecan Infinite M1000.

Immunofluorescence labelling and microscopy

Cells were fixed in Antigenfix (DiaPath) at room temperature for 10 min. Cells were then labelled at RT with a primary antibody mix diluted in 0.1% saponin in PBS, with 1% BSA and 10% horse serum for blocking. Primary antibodies were incubated for 1 h, followed by two washes in 0.1% saponin in PBS. Secondary antibodies were then mixed and incubated for a further 30 min, followed by two washes in 0.1% saponin in PBS, one wash in PBS and one wash in distilled water before mounting with Prolong Gold. Samples were examined on Zeiss LSM710 or Zeiss LSM800 laser scanning confocal microscopes for image acquisition. Images of 1,024 × 1,024 pixels were then assembled using the FigureJ plugin of ImageJ. For immunofluorescence microscopy analysis of NF-κB, cells were permeabilized for 6 min with 0.1% Triton in PBS, followed by blocking for 1 h with 2% BSA in PBS. Primary antibodies were incubated for 1 h, followed by two washes in 2% BSA in PBS, a 30-min incubation with secondary antibodies, two washes in 2% BSA in PBS, one wash in PBS and one wash in water before mounting with Prolong Gold (Life Technologies). Samples were examined on a Zeiss LSM710 laser scanning confocal microscope for image acquisition. Images of 2,648 × 2,648 pixels were then processed with a specific ImageJ plugin developed by L. Plantevin, based on a previous study [START_REF] Noursadeghi | Quantitative imaging assay for NF-kappaB nuclear translocation in primary human macrophages[END_REF]: raw images were treated with a median filter and thresholded (moments method); the DAPI channel was then subtracted from total NF-κB to obtain cytoplasmic NF-κB, and cytoplasmic NF-κB was subtracted from total NF-κB to obtain nuclear NF-κB. Quantification was always done by counting at least 200 cells per condition in a minimum of three independent experiments, for a total of at least 600 host cells analysed per condition.
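The same nuclear-translocation measurement can be sketched outside ImageJ; the snippet below uses scikit-image, with Otsu thresholding standing in for ImageJ's "moments" method (an assumption of this sketch), and treats the channels as 2-D grayscale arrays:

```python
# Sketch of the NF-κB nuclear-translocation measurement described above,
# re-expressed with scikit-image instead of the ImageJ plugin.
import numpy as np
from skimage.filters import median, threshold_otsu
from skimage.morphology import disk

def nuclear_fraction(p65, dapi):
    """Fraction of total p65/RelA signal that falls inside DAPI-positive nuclei."""
    p65 = median(p65, disk(2))            # denoise, as the plugin's median filter
    dapi = median(dapi, disk(2))
    nuclei = dapi > threshold_otsu(dapi)  # nuclear mask from the DAPI channel
    cells = p65 > threshold_otsu(p65)     # total NF-κB mask
    total = p65[cells].sum()
    nuclear = p65[cells & nuclei].sum()   # p65 signal inside nuclei
    return nuclear / total if total else 0.0

# A TNF-α-treated field should give a higher fraction than a mock-infected one.
```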
Antibodies and reagents

Primary antibodies used were rabbit anti-p65 (Santa Cruz, clone C-20, ref. sc-372), anti-TNFR1 (Santa Cruz) and rabbit anti-TSG101 (Atlas Antibodies, ref. HPA006161), both at 1/200. Rabbit polyclonal anti-PumA serum was obtained by repeated immunization of rabbits with purified PumA (Eurogentec) and was used at 1/1,000 for Western blotting and for immunofluorescence microscopy. Purified BtpA was used to obtain chicken anti-BtpA (Eurogentec). Anti-EF-Tu antibody (kind gift from R. Voulhoux) was used at 1/10,000. Secondary antibodies used were anti-rabbit, -mouse, -chicken or -rat conjugated with Alexa-488, -555 or -647, all from Jackson ImmunoResearch. When necessary, phalloidin-568 (1/1,000) was used to label the actin cytoskeleton and DAPI nuclear dye (1/1,000) for the host cell nucleus. For Western blots, anti-mouse or anti-rabbit HRP antibodies were used at 1/5,000.

TEM translocation assay

HeLa cells were seeded in 96-well plates at 1 × 10⁴ cells/well overnight. Cells were then infected at an MOI of 100 by centrifugation at 4°C, 400 × g for 5 min, followed by 1 h at 37°C, 5% CO2. Cells were washed with HBSS containing 2.5 mM probenecid. Then, 6 μl of CCF2 mix (as described in the Life Technologies protocol) and 2.5 mM probenecid were added to each well and incubated for 1.5 h at room temperature in the dark. Cells were finally washed with PBS, fixed using Antigenfix and analysed immediately by confocal microscopy (Zeiss LSM800) or flow cytometry (MACSQuant10 analyser).

Luciferase activity assay

HEK 293T cells were seeded in 96-well plates at 2 × 10⁴ cells/well overnight, and cells were transiently transfected with FuGENE® 6 (Promega) for 24 h with a total of 0.4 μg of DNA consisting of 50 ng of TLR plasmids, 200 ng of the pBIIXLuc reporter plasmid, 5 ng of control Renilla luciferase (pRL-null, Promega) and 50 ng of myc-PumA expression vector. The total amount of DNA was kept constant by adding empty vector. Where indicated, cells were treated for 6 h with E. coli LPS (1 μg/ml) or flagellin FLA-ST (1 μg/ml), both obtained from InvivoGen. In the case of IL-1β and TNFR, endogenous receptors were stimulated with IL-1β (100 ng/ml) and TNF-α (100 ng/ml), respectively. Cells were then lysed and luciferase activity measured using the Dual-Glo Luciferase Assay System (Promega).

Yeast two-hybrid screen

Full-length pumA cloned in pB27 (N-LexA-bait-C fusion) was used in a ULTImate screen against a human normal lung-RP1 library (Hybrigenics).

Protein expression and purification

Escherichia coli BL21 Star (DE3) cells carrying pET151D topo-pumA, pET151D topo-pumA1-136, or pRSFDuet-TIRAP or -MyD88 plasmids were grown in 1 L of Luria-Bertani (LB) medium containing ampicillin or kanamycin, according to the plasmid, at 37°C until an OD600 of 0.5-0.8 was reached. Isopropyl-β-D-thiogalactopyranoside (IPTG) was added to a final concentration of 1 mM, and the culture was grown overnight at 20°C. Cells were harvested by centrifugation at 6,000 × g for 20 min at 4°C. Bacterial pellets were lysed by sonication in cold lysis buffer (40 mM Tris-HCl pH 8, 250 mM NaCl, 10% (v/v) glycerol, 1% (v/v) Triton X-100) supplemented with DNase I, lysozyme and protease inhibitor tablets (Roche). Extracts were cleared at 16,000 × g for 20 min at 4°C and loaded onto a 5-ml His-Trap column or a 5-ml MBP-Trap column (GE Healthcare) pre-equilibrated with buffer A (40 mM Tris [pH 8], 250 mM NaCl, 5% glycerol). The column was washed successively with buffer A, 10% (v/v) buffer B (buffer A with 500 mM imidazole) and 1 M NaCl, and eluted in a gradient of buffer B (His-Trap), or washed in buffer A and eluted in buffer A containing 20 mM maltose (MBP-Trap).
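The CCF2 readout above reduces to a per-cell blue/green emission ratio: β-lactamase delivery cleaves the substrate and shifts emission from green (~520 nm) toward the blue/coumarin channel (~447 nm). A minimal sketch for gating translocation-positive cells from flow-cytometry intensities, with synthetic numbers and an illustrative gating rule (normally set on the uninfected control):

```python
# Per-cell CCF2 blue/green ratio for calling TEM1 translocation-positive cells.
# Intensities are synthetic; the 99th-percentile gate on the control sample is
# an illustrative choice, not the study's gating strategy.
import numpy as np
rng = np.random.default_rng(1)

def ratios(n, blue_mu):
    green = rng.lognormal(7.0, 0.3, n)      # ~520 nm (uncleaved FRET) channel
    blue = rng.lognormal(blue_mu, 0.5, n)   # ~447 nm (cleaved, coumarin) channel
    return blue / green

control = ratios(5000, blue_mu=5.5)         # uninfected wells: little cleavage
infected = ratios(5000, blue_mu=6.5)        # PumA-TEM1 strain: more cleavage

threshold = np.percentile(control, 99)      # gate set on the control sample
print(f"translocation-positive cells: {(infected > threshold).mean()*100:.1f}%")
```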
Proteins used for lipid binding assays were incubated with TEV protease, 1 mM DTT and 0.5 mM EDTA, and dialysed against buffer A at 4°C overnight. The untagged recombinant protein was purified through a second His-Trap column. Pure fractions were pooled, concentrated and applied to size exclusion chromatography (Superdex 75 10/300; GE Healthcare). Fractions were analysed by SDS-PAGE.

Pull-downs from cell extracts
Human embryonic kidney (HEK) 293T cells were seeded at 5 × 10⁵ in 10-cm cell culture dishes in Dulbecco's modified Eagle's medium (Life Technologies) supplemented with 10% foetal bovine serum. Cells were incubated overnight in a 37°C humidified atmosphere of 5% CO2. Cells were transiently transfected with different plasmids (8 µg) using FuGENE 6 (Promega). 22 h after transfection, cells were washed in ice-cold PBS, harvested and resuspended in 200 µl of RIPA buffer (Sigma) supplemented with phenylmethylsulfonyl fluoride (Sigma) and protease inhibitor cocktail (Roche). Extracts were then centrifuged at 17,000 × g at 4°C for 20 min. The supernatant was incubated with 50 µg of His-tagged recombinant protein for 2 h at 4°C, then incubated for 1 h in a gravity flow column (Agilent) containing 80 µl Ni-NTA agarose beads (Macherey-Nagel) that had beforehand been washed in water and pre-equilibrated in equilibrium buffer (20 mM Tris-HCl pH 7.5, 250 mM NaCl). The column was washed successively three times in equilibrium buffer supplemented with 25 mM imidazole and three times in equilibrium buffer, and eluted in equilibrium buffer supplemented with 500 mM imidazole. Eluted proteins were separated by SDS-PAGE, transferred to a PVDF membrane, incubated with specific primary antibodies for 1 h and detected with horseradish peroxidase (HRP)-conjugated secondary antibodies using Clarity™ Western ECL Blotting Substrate (Bio-Rad).

Co-immunoprecipitations
HEK 293T cells were cultured in 100 mm × 20 mm cell culture dishes at 4 × 10⁵ cells/dish overnight. Cells were transiently transfected with 14.7 µl of Torpedo DNA (Ibidi) for 24 h with a total of 5 µg of DNA/plate. On ice, after two washes with cold PBS, cells were collected with a cell scraper and centrifuged at 80 × g at 4°C for 5 min. Cell lysis and processing for co-immunoprecipitation were done as described for either the GFP-Trap®_A kit (Chromotek) or the Pierce™ HA Epitope Antibody Agarose conjugate (Thermo Scientific). For endogenous co-IP, HeLa cells were cultured in 100 mm × 20 mm cell culture dishes at 1 × 10⁶ cells/dish overnight. Cells were transiently transfected and collected as described above. Cell lysis and processing for co-immunoprecipitation were done following the manufacturer's instructions (Pierce™ HA Epitope Antibody Agarose conjugate, Thermo Scientific) but using 100 µl of beads and increasing the number of washes to 5.

Co-expression analysis
Escherichia coli BL21 star (DE3) cells harbouring both pET151D topo-pumA1-136 and pRSF-Duet vector-TIRAP (or MyD88 or empty vector) plasmids were grown in LB media containing ampicillin and kanamycin at 37°C until an OD600 value of 0.5-0.8 and induced with 2 mM isopropyl-β-D-thiogalactopyranoside overnight at 20°C. Cells were lysed and loaded onto a 5 ml MBP column as described in the protein expression and purification section. Fractions were analysed by SDS-PAGE.
Lipid binding assays
Lipid binding assays were performed as described previously [START_REF] Marek | Phosphoinositide binding by the Toll adaptor dMyD88 controls antibacterial responses in Drosophila[END_REF]. Briefly, phosphoinositide phosphate (PIP) strips (Echelon Biosciences) were saturated in blocking buffer (10 mM Tris [pH 8], 150 mM NaCl, 0.1% Tween 20, 0.1% ovalbumin) for 1 h at room temperature with shaking. Strips were probed for 2 h at room temperature with each recombinant protein (2.5 µg) in the presence of the specific anti-protein antibody. PIP strips were then washed in blocking buffer three times for 10 min each and probed with an HRP-conjugated anti-rabbit IgG or anti-hen IgY for 30 min in blocking buffer. Bound protein was detected using Clarity™ Western ECL Blotting Substrate.

Caenorhabditis elegans infection
The slow killing assay was performed as described previously [START_REF] Garvis | Caenorhabditis elegans semi-automated liquid screen reveals a specialized role for the chemotaxis Gene cheB2 in Pseudomonas aeruginosa virulence[END_REF]. Each independent assay consisted of three replicates. Briefly, five 60 mm NGM plates were inoculated with 60 µl of overnight culture of each bacterial strain and incubated at 37°C overnight. Plates were seeded with L4 stage hermaphrodite fer-15 worms (10 per plate). Plates were then incubated at 25°C and scored each day for live worms. A worm was considered dead when it no longer responded to touch. Escherichia coli was used as a control. Animal survival was plotted using GraphPad Prism version 6.0 for Mac (GraphPad Software, La Jolla, California, USA). Survival curves are considered significantly different from the control when P-values are < 0.05. Prism calculates survival fractions using the product limit (Kaplan-Meier) method. Prism compares survival curves by two methods: the log-rank test (also called the Mantel-Cox test) and the Gehan-Breslow-Wilcoxon test.

Mouse model of Pseudomonas acute infection
Wild-type C57BL6/J male mice, 8-10 weeks old, were purchased from Janvier laboratories. Mice were randomized before the experiments and infections were performed blindly. Following light anaesthesia with isoflurane (Baxter), a pulmonary infection model was induced by intranasal instillation of 3 × 10⁷ CFU of P. aeruginosa PA7 or PA7ΔpumA strains (except for survival studies, conducted with lethal inocula of 4 × 10⁷ CFU/mouse). All mice were sacrificed at 24 h or survival was monitored for 96 h. To establish bacterial burden, mouse lungs and spleens were homogenized in sterile tubes with PBS. Lung and spleen homogenates were sequentially diluted and cultured on Lysogeny Broth agar plates for 24 h at 37°C to assess bacterial load. Bronchoalveolar lavage (BAL) was done as follows: lungs from each experimental group were washed with a total of 1.5 ml sterile phosphate-buffered saline (PBS). The recovered lavage fluid was centrifuged (200 × g for 10 min), and red blood cells in the cellular pellet were lysed with 300 µl of ACK Lysis Buffer (Gibco). Cell counts were performed directly by optical microscopy.
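The back-calculation behind such serial-dilution burden counts is simple arithmetic; a minimal sketch, in which the colony count, dilution and volumes are hypothetical examples rather than data from these experiments:

```python
# Hedged sketch of the CFU arithmetic behind "sequentially diluted and
# cultured" organ burden counts; all numbers below are invented.
def cfu_per_organ(colonies: int, dilution_factor: float,
                  plated_ml: float, homogenate_ml: float) -> float:
    """Back-calculate total CFU in an organ homogenate."""
    cfu_per_ml = colonies * dilution_factor / plated_ml
    return cfu_per_ml * homogenate_ml

# e.g. 42 colonies on the 10^-4 dilution plate, 0.1 ml plated,
# 1 ml of homogenate in total:
print(f"{cfu_per_organ(42, 1e4, 0.1, 1.0):.2e} CFU")  # -> 4.20e+06 CFU
```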
Ethics statement
All experiments involving animals were carried out in compliance with French and European regulations on the care and protection of laboratory animals (European Commission Directive 86/609 and the French Act #2001-486, issued on June 6, 2001) and performed by certified personnel. The study and all associated experimental protocols were registered and approved by the French authorities (Ministère de l'Enseignement Supérieur et de la Recherche-Direction Générale pour la Recherche et l'Innovation-Secrétariat Autorisation de projet, registration number 00481.01). Animals were housed at the Lille University Animal Research Facility (Département Hospitalo-Universitaire de Recherche Expérimentale de Lille, France), accredited by the French Ministry of Agriculture for animal care and use in research (#B59-350009).

Fractionation of Pseudomonas aeruginosa
Pseudomonas aeruginosa strains were grown in LB for 4 h and adjusted to OD600 20 in 1 ml cold 50 mM Tris-HCl pH 8.0 with 1 mM EDTA and protease inhibitors (Roche). All subsequent steps were conducted at 4°C. The cell samples were sonicated three times at 30-s intervals, and the resulting cellular debris was pelleted by centrifugation three times at 4,000 × g for 5 min, taking the uppermost supernatant for each spin. The total membrane fraction was separated from the soluble fraction by ultracentrifugation at 100,000 × g for 1 h. After washing the membrane pellet thoroughly in sonication buffer, the inner membrane fraction was solubilized in 200 µl 50 mM Tris-HCl pH 7.6 with 2% (v/v) sodium lauroyl sarcosinate for 1 h with gentle agitation. The outer membrane fraction was pelleted by ultracentrifugation at 100,000 × g for 1 h, washed and resuspended in sonication buffer. The preparation of supernatant samples, separation by sodium dodecyl sulphate-polyacrylamide gel electrophoresis and subsequent immunoblotting have been described previously [START_REF] Hachani | Type VI secretion system in Pseudomonas aeruginosa: secretion and multimerization of VgrG proteins[END_REF]. Immunodetection was conducted using monoclonal antibodies against RNA polymerase (NeoClone) and polyclonal antibodies against PilQ, XcpY [START_REF] Michel | Mutual stabilization of the XcpZ and XcpY components of the secretory apparatus in Pseudomonas aeruginosa[END_REF] and LasB. Expanded View for this article is available online.

Introduction
Pathogenic bacteria produce virulence factors that usually help the pathogen to survive in an environmental niche, to promote colonization and invasion of host tissues, or to modulate the immune system. Virulence factors are toxins or effector proteins that can be transported by diverse secretion machineries in bacteria [1,2]. Once secreted, these proteins can be assembled on the bacterial cell surface, released in the extracellular space, or secreted directly into a host cell or a neighboring bacterium. Once in host cells, effectors often target key proteins to hijack the host cellular machinery and remodel signaling cascades. The yeast two-hybrid system is often used to screen a large number of host proteins that potentially interact with bacterial effectors [3]. Regarding the mechanism of the secretion systems, a bacterial two-hybrid system is frequently employed to identify interaction networks between components of the secretory apparatus, as well as interactions between effectors and proteins of the machinery [4]. However, protein-protein interactions that have been determined by two-hybrid assay must be confirmed by other methods [5].
Pull-down is an in vitro method widely used to detect or confirm interactions among multiple proteins. This assay is similar in methodology to co-immunoprecipitation in its use of an affinity ligand to capture interacting proteins. The difference between the two methods is that while co-immunoprecipitation uses immobilized antibodies to capture protein complexes, the pull-down approach uses a purified and tagged protein as a "bait" to bind any interacting proteins. The method consists of first immobilizing the tagged protein (bait) on an affinity ligand specific to the tag, creating an affinity support to capture and purify other proteins (prey) that interact with the bait. The bait and prey proteins can be obtained from multiple sources, such as cell lysates, purified proteins, expression systems, and in vitro transcription/translation systems. Once the prey proteins have been incubated with an immobilized bait protein, interacting complexes are eluted using an eluting buffer appropriate to the affinity ligand. Each experiment needs proper controls to demonstrate that the characterized interactions are not an artifact. For example, a positive control consisting of the immobilized bait protein alone is necessary to verify proper attachment of the tagged bait protein to the affinity support. To identify and eliminate false positives caused by nonspecific binding of prey proteins to the affinity support, cell lysates or purified proteins can be analyzed after being passed through a minus-bait support. Following a pull-down experiment, protein fractions are resolved by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and then visualized by gel staining or western blotting. In this chapter, we describe detailed pull-down assay procedures that allow the identification of interacting proteins. First, we focus on how to perform a pull-down experiment to identify an interaction between a bacterial bait protein and eukaryotic prey proteins expressed in host cells (Subheadings 3.1 and 3.2). Next, we present how the interaction between two purified proteins can be visualized by a pull-down assay (Subheading 3.3). In these procedures, pull-down experiments have been performed using specific bait proteins fused to a 6× histidine tag. As a consequence, we selected Ni-NTA agarose beads as the affinity support used to immobilize these recombinant proteins.

Materials
Preparation of Cell Lysate
Prepare all solutions with distilled water at room temperature and keep them at the indicated temperatures.
1. Eukaryotic cells.
2. Cell culture dish, treated for optimal cell attachment, with growth surface area around 55 cm², sterile.

Methods
Preparation of Cell Lysate
Pull-Down Assay Using Cell Lysate as Prey (See Notes 10 and 11)
1. Seed eukaryotic cells at 5 × 10⁵ in a 10 cm cell culture dish (see Note 7) and incubate overnight at 37 °C in CO2.
2. Transfect cells with the plasmid containing the gene of interest fused to a specific tag, with an appropriate transfection reagent, for the time necessary for optimal expression of the protein (16-24 h is usually a good range).
3. Cool cells by placing plates on ice, wash cells with 1× PBS. Add 2 mL cold PBS and harvest cells using a cell scraper.

SDS-PAGE and Analysis of Protein Fractions
1. To 15 µL protein fraction add 5 µL Laemmli lysis buffer, 4× concentrate. Heat for 3 min at 100 °C and centrifuge 30 s using a microcentrifuge to bring down condensate.
2. Load 10 µL protein fraction and 5 µL protein ladder on an SDS-polyacrylamide gel.
3. Electrophorese proteins in running buffer at 100 V for 15 min, then at 180 V until the dye front has reached the bottom of the gel.
4. Identify interacting proteins by immunodetection or Coomassie blue staining (see Note 25).

Notes
1. We prefer not to use the solutions after 6 months of storage.
2. A different buffer, such as HEPES (4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid), MES (2-(N-morpholino)ethanesulfonic acid), or phosphate buffers, may be required for your specific protein-protein interaction. Additionally, different pH values may be tested, as these are specific to and dependent on the interaction between proteins.
3. We found that pull-down experiments work better with fresh equilibrium and elution buffers.
4. The bait proteins used in this protocol are tagged with 6× His, which binds the nickel agarose affinity support. The choice of the matrix-associated antibody depends on the fusion tag. The His tag is composed of a peptide motif of six histidine residues with high affinity towards metals like the nickel of the Ni-NTA agarose used here, but also of Ni-NDA, Ni-TED, or Ni-TALON resins. The 6× His tag is very small (~1 kDa), which renders it less immunogenic than other, larger tags; it has been shown not to affect the native conformation of bait proteins, and it maintains their partner-binding activity. Few naturally occurring proteins bind to Ni-NTA matrices, making this tag the most commonly used affinity tag. What follows are some examples of tags with their advantages and disadvantages. The FLAG tag is an octapeptide that is likely located on the surface of the fusion protein due to the hydrophilic nature of its amino acid residues and has affinity for anti-FLAG resin. Like the His tag, the FLAG tag is small, but a disadvantage is that the monoclonal antibody matrix is not as stable as Ni-NTA. The glutathione S-transferase (GST) tag binds to glutathione-associated supports with high affinity and specificity. This tag has the advantage that GST isoforms are not normally found in bacteria, so purified bacterial prey proteins normally have no affinity for glutathione resin. However, the GST tag is large (26 kDa), exists as a dimer, is prone to nonspecific interactions, is expensive, and its affinity for its support depends on certain reagents. The maltose-binding protein (MBP) tag, from an Escherichia coli periplasmic protein, has affinity for matrices consisting of sugars or anti-MBP. This tag is used to overcome problems associated with the expression and purification of recombinant proteins [7]. However, the disadvantages of the MBP tag are its large size, its immunogenicity, and the mild elution of MBP-tagged proteins, which complicate pull-down experiments.
5. Make an aliquot of 1 mL before -20 °C storage. This will prevent the degradation caused by repeated thawing.
6. Make an aliquot of 500 µL before -20 °C storage. The Laemmli lysis buffer in use can be kept at 4 °C for 1 month.
7. As a negative control, prepare a cell lysate without expressed bait protein (negative cell lysate). This will eliminate false positives resulting from nonspecific interactions of cell lysate proteins with the Ni-NTA agarose beads. Additional negative controls can include an irrelevant protein with the same tag or expression of the tag alone, as in the case of GFP.
8. Before stocking the cells, remove an aliquot and check the production of the prey protein by western blot.
9. Whole-cell lysate instead of the supernatant fraction can also be used, to test whether the prey protein of interest localizes in the pellet fraction.
10. Pull-down experiments using cell lysates will not demonstrate that the interaction between the bait and prey proteins is direct, but only determine that they are part of the same complex. To prove a direct interaction, the prey protein must be purified and used in pull-down experiments as described in Subheading 3.3.
11. Try to work mostly on ice or at 4 °C to prevent degradation or denaturation of the proteins.
12. Break the end cap of the gravity flow column and place it on a 1.5 mL Eppendorf tube. Thoroughly resuspend the Ni-NTA resin by inverting the bottle several times to obtain a uniform suspension. Pipette tips must be cut to allow the Ni-NTA agarose beads to pass into them.
13. This step eliminates the residual 30% ethanol present in the Ni-NTA resin.
14. Before loading the bait protein, plug the gravity flow column using a piece of parafilm before replacing it on a 2 mL Eppendorf tube.
15. Prepare a supplementary column by adding 50 µg of a known noninteracting bait fused to a 6× His tag, in 400 µL equilibrium buffer, to an empty column. Additionally, prepare a column by adding 400 µL equilibrium buffer to an empty column. These negative bait columns will be used in combination with cell lysates to eliminate false positives resulting from nonspecific interactions.
16. The incubation time can be increased from several hours to overnight at 4 °C under agitation, depending on the strength of the interaction between bait and prey proteins.
17. Rotate on a roller or rotating platform.
18. The column should stand straight on the ice. This step allows the resin to flow by gravity before centrifugation.
19. We found that loading the flow-through twice increases the binding capacity.
22. We found that loading the eluted fraction twice increased its quantity.
23. As negative controls, incubate 50 µg bait protein alone (minus prey) or prey protein alone (minus bait) in 400 µL equilibrium buffer. The minus-prey control will ensure that the Ni-NTA agarose resin correctly captures the His-tagged bait protein alone. The minus-bait control will eliminate false positives resulting from an interaction between the affinity support and the prey protein.
24. A different incubation temperature and time may be required for your specific protein-protein interaction.
25. A prey protein that interacts with the bait protein will be found in the eluted fraction. In contrast, a noninteracting protein will not be retained by the bait protein, will pass through the column, and will be found in the flow-through protein fraction.

I thank Dr. Agathe Subtil and Dr. Matteo Bonazzi for agreeing to act as rapporteurs for my thesis. I thank Dr. Xavier De Bolle for agreeing to assess my work as examiner. I also thank Pr. Patrice Gouet for agreeing to chair this thesis jury. I also thank the members of my thesis advisory committees, Dr. Anne Vianney and Dr. Lionel Ballut, for all the advice they provided during the three CST meetings. During my thesis, I used several pieces of equipment of the Protein Science Facility (PSF) platform (SFR Biosciences, UMS 3444).
I would therefore like to thank, for their warm welcome and availability: Frédéric Delolme, Aline Page, Roland Montserret, Eric Diesis and, in particular, Virginie Gueguen-Chaignon, who contributed to obtaining the crystallographic structure. I also thank the PLATIM imaging and microscopy platform (SFR Biosciences, UMS 3444/US8) and in particular Claire Lionnet.

Figure 1. Global distribution of brucellosis cases in the year 2000. The most affected countries are Mongolia and the countries of the Middle East and the Balkans. Some Central and South American countries and the countries of the Mediterranean basin also have numerous cases of human brucellosis. [6]

Figure 3. Schematic representation of the T4SS VirB/D. The NTPases are coloured blue, the core proteins green and the pilus yellow. Modified from [100].

protein translocation reporter assay [106]. VceC interacts with BiP/Grp78, activating the Unfolded Protein Response (UPR) [108]. More recently, contradictory results have been reported concerning the role of VceC. Some authors found that VceC induces CHOP expression, favoring ER stress in placental trophoblasts and thus inducing cell death and placental inflammation, promoting abortion [109]. Other authors found that VceC inhibits the expression of the CHOP protein, thereby inhibiting apoptosis mediated by ER stress, protecting the cell from death by apoptosis and favoring intracellular persistence [110]. Also, very recently, it has been shown that a VceA mutant promotes autophagy and inhibits apoptosis in trophoblasts during Brucella infection [111]. BPE005, BPE043, BPE275 and BPE123 were identified using the CyaA adenylate cyclase assay [104]. BPE123 was shown to interact with alpha-enolase (ENO-1), a host cell factor involved in B. abortus intracellular replication [112].

Figure 4: Location of the different classes of PRRs.

Figure 5. Schematic representation of the structure of the TLR family. TLRs can be present at the plasma membrane of the cell but also in the endosomes present in the cell. The TIR domain of TLRs is always present in the cytosol.

Figure 6. Cartoon representation of the horseshoe-shaped structure of the human TLR3 ectodomain (2A0Z), coloured according to the secondary structural elements: β-sheets in yellow, α-helices in red and loops in green. Structural representations were performed with PyMOL.
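As an aside, the PyMOL colouring and superposition conventions used for these structure figures can be reproduced with a short script; a minimal sketch (the exact commands used for the figures are not published, so this only illustrates the general approach with the PDB entries quoted in the legends):

```python
# Hedged sketch of PyMOL commands in the style of the structure figures:
# secondary-structure colouring plus a superposition reporting the RMSD.
# Run inside PyMOL (or with the pymol Python module installed).
from pymol import cmd

cmd.fetch("4dom")            # human MyD88 TIR domain
cmd.fetch("1fyw")            # human TLR2 TIR domain
cmd.color("green", "all")    # loops (default)
cmd.color("red", "ss H")     # alpha helices
cmd.color("yellow", "ss S")  # beta sheets
rmsd = cmd.align("4dom", "1fyw")[0]  # align returns RMSD as first element
print(f"RMSD = {rmsd:.3f} A")
```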
receptor [138]. Three highly conserved sequences, called box 1, box 2 and box 3, have been characterized in this TIR domain after alignment of sequences from different members of the IL-1R/TLR family [139] (Figure 7A). These sequences are involved in signal transduction downstream of the receptor. The box 1 sequence is involved in the coupling of the TLR with another receptor [140], and the box 2 sequence is involved in the interaction with the adaptor molecules involved in TLR signaling [141]. The box 3 sequence contains amino acids essential for signaling and is involved in the localization of the receptor through interactions with elements of the cytoskeleton [142], and also in the interaction with the other TIR domain-containing proteins: the adaptor proteins.

There are 5 adaptor proteins: MyD88 (Myeloid differentiation primary response protein), TRIF (Toll/IL-1 receptor domain-containing adaptor inducing IFN-β, or TICAM1 for Toll-like receptor adaptor molecule 1), TIRAP (TIR domain-containing adaptor protein, or MAL for MyD88 adaptor-like), TRAM (TRIF-related adaptor molecule) and SARM (Sterile alpha- and armadillo-motif-containing protein) [126] [143].

Figure 7. Sequence and structure of TIR domains. A) Multiple sequence alignment of TIR domain proteins from human and bacterial proteins: human TLR2 (residues 639-782), human TLR4 (residues 672-815), human TLR6 (residues 640-781), human MyD88 (residues 159-293), human TIRAP (residues 84-213), human TRIF (residues 393-553), human TRAM (residues 73-229), Brucella abortus BtpA (residues 141-275), Brucella abortus BtpB (residues 156-292), Escherichia coli TcpC (residues 169-303). Structural features above the sequence are based on MyD88. Blue arrows denote areas of β-sheet, red boxes denote α-helices and black lines denote connecting loops. The alignment and structure prediction were performed with PROMALS3D. B) and C) Cartoon representation of the structure of the human MyD88 TIR domain (4DOM) and the human TLR2 TIR domain (1FYW), respectively, coloured according to the secondary structural elements: β-sheets in yellow, α-helices in red and loops in green. D) Ribbon representation of the superposition of the MyD88 TIR domain (4DOM), in cyan, with the TLR2 TIR domain (1FYW), with a Root Mean Square Deviation (RMSD) of 2.431 Å. Structural representations were performed with PyMOL.

Figure 8. Schematic representation of TLR pathways. Each receptor recognizes a specific ligand. TLR1, TLR2, TLR6, TLR5 and TLR11 are at the cytoplasmic membrane, while TLR3, TLR7, TLR8 and TLR9 are endosomal.

Figure 9: The different strategies used to manipulate TLRs. (A) Microbes may stimulate other receptors that lead to a weakened immune response. (B) Microbes may modify their PAMPs in order to avoid TLR recognition. (C) Microbes may interfere with TLR signaling. Adapted from [Underhill, 2004].

effective in stimulating TLRs. Yersinia pestis, for example, is capable of modifying the composition of its membrane lipid A according to the growth temperature. Thus at 37°C, the host temperature, the LPS of Yersinia pestis does not stimulate the TLR4 receptor [158] [159]. In addition, Campylobacter jejuni, H. pylori and Bartonella bacilliformis produce flagellins that do not activate TLR5 [160].

regulates the concentration of nuclear factors or serves as a hub for gene regulation. These nuclear bodies are extremely dynamic, reacting to different physiological processes and different forms of stress. They are formed at the exit of mitosis and maintained until the start of the next mitosis. Nuclear bodies are made up of proteins and RNAs whose structural

Figure 10. Schematic representation of the main nuclear subcompartments. According to [198].

Figure 11: Schematic representation of the structure of the nucleolus, consisting of a fibrillar centre, a dense fibrillar component and a granular component. According to [198].

Figure 12. Schematic view of ribosome biogenesis in mammals: From the 47S precursor ribosomal RNA, a succession of processing steps leads to the formation of the 40S and 60S subunits, which are assembled in the cytosol to form the mature 80S ribosome [316].

Figure 13. Three-dimensional structure of ubiquitin, SUMO1 and SUMO2.
Cartoon representation of the structures of human ubiquitin (1UBI), human SUMO1 (2N1V) and human SUMO2 (4NPN), coloured according to the secondary structural elements: β-sheets in yellow, α-helices in red and loops in green.

Figure 14. Alignment of the five SUMO paralogues in humans. Conserved amino acids are in red and poorly conserved amino acids in black. Amino acids highlighted in red are conserved in all sequences. The sequence alignment was carried out with ClustalW and ESPript3.

Figure 15. Enzyme cascades involved in SUMOylation. SUMO-specific proteases (SENPs) catalyse the maturation of SUMO proteins by cleaving the amino acids that follow the double glycine residue at the C-terminal end of SUMO. Mature SUMO can then be conjugated to its substrate following a cascade of enzymatic reactions: E1, the activating enzyme (SAE1/SAE2); E2, the conjugating enzyme (Ubc9); and E3, the ligase. The substrate can be deSUMOylated by the action of proteases of the SENP family.

Figure 16. Molecular consequences of modification by SUMO. SUMO and ubiquitin are indicated by S and U, respectively. Adapted from [241]. Created with BioRender.com.

Figure 17. Schematic representation of SENP family proteins. The catalytic domain (CD) is coloured magenta, putative SUMO-interacting motifs (SIMs) are indicated by asterisks, and regions shown to be important for intracellular localization are coloured in cyan [231].

Figure 18: Schematic representation of the cellular distribution of the SENP family members. The preferential distribution of SENPs is shown in green. SENP1 and SENP2 are found at the nuclear envelope (NE), in the nucleoplasm and around PML nuclear bodies. SENP3 and SENP5 are found in the nucleoli (No), the nucleoplasm and also in the mitochondria (Mit). SENP6 and SENP7 are found only in the nucleoplasm [250].

Furthermore, SENP3 does not possess hydrolase activity and therefore does not participate in the maturation process of SUMO. The N-terminal part contains a domain responsible for its localization in the cell, a NoLS (Nucleolar Localization Site) domain, corresponding to a very acidic region of the protein. The absence of this domain leads to a redistribution of SENP3 from the nucleolus to the nucleoplasm. Nevertheless, it seems that the localization of SENP3 is not exclusively nucleolar, since it has been established that SENP3 may reside in the cytosol and is required for mitochondrial fission [297,298]. The biological processes in which SENP3 is involved are now well established, and include inflammation [252], cell differentiation [253], the cell stress response [297,301,302] and ribosome biogenesis [254]. Nevertheless, the identification of its substrates and their physiological roles are still poorly characterized. This lack of knowledge stems in part from the difficulty of experimentally obtaining recombinant SENP3 proteins, especially a stable and active catalytic domain [255]. I will now present some of the most relevant discoveries relating to SENP3, to illustrate the multiple roles of this protease in various cellular processes.

1. SENP3 as a redox sensor under stress
It has been shown that SENPs can be redox sensors [256]. Reactive oxygen species (ROS) have been shown to modulate SENP3 and have an impact on its stability and localization. Indeed, the redox state of the cell regulates the level of SENP3 expression.
Under mild oxidative stress, such as treatment with hydrogen peroxide (H2O2), HeLa cells present an increased level of SENP3 due to inhibition of its ubiquitination, revealing that under normal conditions SENP3 is ubiquitinated by the ubiquitin ligase CHIP (C-terminus of Hsc70-interacting protein) and sent to the proteasome for degradation [257] [258]. Under moderate oxidative stress, two SENP3 cysteines (Cys243 and Cys274) are oxidized [259], allowing recruitment of Hsp90 and preventing ubiquitination by CHIP (Figure 19) [260].

Figure 19. Redox regulation of SENP3: SENP3 interacts with CHIP under standard and oxidative stress conditions. Under oxidative conditions SENP3 undergoes oxidative modifications, which are a signal for the recruitment of Hsp90. Created with BioRender.com.

[267] [268]. NPM1 is a 37 kDa phosphoprotein that shuttles between the granular component of the nucleoli and the cytoplasm. The SENP3/NPM1 interaction has been shown by co-precipitation experiments in HeLa cells and by yeast two-hybrid. In addition, the authors showed that NPM1 interacts specifically with SENP3, since other members of the SENP family are not able to interact with NPM1 [269]. SENP3 catalyzes the deSUMOylation of NPM1 by removing SUMO2/3. Together, SENP3 and NPM1 play a key role in ribosome biogenesis. Indeed, depletion of SENP3 or NPM1 by siRNA negatively impacts ribosome biogenesis by inhibiting the maturation of the 32S rRNA into 28S rRNA [269].

a quality control mechanism by preventing the loading of the PELP1-WDR18-TEX10 complex on the maturing pre-ribosome [271] [272]. Thus, SENP3-mediated deSUMOylation makes it possible to coordinate the rate of ribosome formation with the physiological state of the cell. In addition, the absence of SENP3 leads to mislocalization of the ribosome maturation factor NVL (Nuclear VCP-like protein) [273]. NVL belongs to the AAA ATPase family [274]. The human genome codes for two NVL isoforms, NVL1 and NVL2, which differ in the length of their N-terminal sequence. Thus, NVL1 starts from the residue corresponding to the second methionine, at position 107 of NVL2. NVL1 and NVL2 have distinct locations in the cell, and more particularly in the nucleus. NVL2 is the more abundant of the two; it is mostly found in the nucleoli, while NVL1 is nucleoplasmic. Nagahama et al. were able to establish that NVL interacts with RPL5 in a manner dependent on ATP and on their NoLS. This interaction allows NVL to adopt a nucleolar localization [275]. Depletion of NVL2 by siRNA inhibits ribosome biosynthesis, highlighting that, like SENP3, NVL2 plays a key role in this cellular process [275] [276].

1. The TIR domain-containing effectors BtpA and BtpB from Brucella abortus impact NAD metabolism
When I first joined the team, the topic of my research was centered on bacterial TIR domain proteins. During this time I participated in several projects that gave rise to 4 publications. My main contribution led to the publication included in this chapter, in which I am joint first author, on the characterization of the Brucella TIR domain-containing proteins BtpA and BtpB. In this work we show that the recently described NADase activity of TIR domains is retained in BtpA and BtpB and leads to intracellular NAD depletion during infection. This was a joint project with our long-term collaborators at the University of Madrid, the lab of Victor Cid and Maria Molina.
I constructed several expression and complementation vectors as well as Brucella strains, participated with Paul Imbert in the assessment of the localization of BtpA/BtpB in transfected cells, and carried out all the infection work, namely for the NAD measurements, which were done with the help of Morgane Roussin. I also worked side-by-side on this project with Julia Coronas-Serna when she came for a 3-month stay at our lab.

to their advantage depends on the delivery of bacterial effector proteins through a type IV secretion system. Brucella Toll/Interleukin-1 Receptor (TIR)-domain-containing proteins BtpA (also known as TcpB) and BtpB are among such effectors. Although divergent in primary sequence, they interfere with Toll-like receptor (TLR) signaling to inhibit innate immune responses. However, the molecular mechanisms implicated still remain unclear. To gain insight into the functions of BtpA and BtpB, we expressed them in the budding yeast

Fig 1. Expression and localization of B. abortus BtpA and BtpB in S. cerevisiae. (A) BtpA and BtpB induce different levels of toxicity when expressed in yeast. Ten-fold serial dilution assay to monitor growth of the YPH499 yeast strain expressing the pYES2 empty vector or the indicated BtpA or BtpB versions (full-length, N and TIR) from pYES2-GFP plasmid derivatives, under control (Glucose) and induction (Galactose) conditions. Nomarski and fluorescence microscopy of the YPH499 yeast strain expressing from pYES2 plasmid derivatives the following fusion proteins: (B) full-length GFP-BtpB after 4h induction; (C) full-length GFP-BtpA (green) and stained with DAPI (red), after 6h induction; (D) the GFP-fused N-terminal regions of BtpA and BtpB after 5h induction; (E) the GFP-fused C-terminal regions containing the TIR domains of BtpA and BtpB after 4h induction. Scale bars correspond to 5 µm. https://doi.org/10.1371/journal.ppat.1007979.g001

Fig 2. BtpB expression causes severe defects in actin cytoskeleton and endocytosis in S. cerevisiae. (A) Nomarski and fluorescence microscopy images (upper panel) and graph showing the percentage of small- to medium-budded cells with depolarized actin (lower panel) after rhodamine-phalloidin staining of YPH499 cells expressing the pYES2-GFP empty vector, BtpB, BtpB-N or BtpB-TIR from pYES2-GFP plasmid derivatives after 4h induction. Data correspond to means ± standard deviation of three independent transformants (n ≥ 100) and statistical comparison was done with Kruskal-Wallis ANOVA, with p-values referring to BtpB of 0.024 ( ) for vector and BtpB-N. Scale bars indicate 5 µm. (B) Nomarski and fluorescence microscopy images (left panel) and graph representing the percentage of cells showing both GFP and FM4-64 vacuolar signal (right panel) of YPH499 cells expressing pYES2-GFP or pYES2-GFP-BtpB, after 4h induction, stained with the endocytic marker FM4-64 for 1h. Data correspond to means ± standard deviation of three independent transformants (n ≥ 100) and statistical comparison was done with Student's t-test, p < 0.0001 ( ). Scale bars indicate 5 µm. https://doi.org/10.1371/journal.ppat.1007979.g002

Fig). The TIR domain of BtpB alone was fully responsible for signaling down-regulation (S1B Fig). Interestingly, such an effect was also observed when expressing the TIR domain of BtpA (S1A Fig), and it did not lead to reduced MAPK phosphorylation (S1B Fig) or endocytosis defects (S5D-S5E Fig).

Fig 3. BtpB, BtpA-TIR and BtpB-TIR reduce NAD+ and ATP levels when expressed in yeast.
(A) Cellular ATP measurement by luciferase assay in YPH499 cells transformed with the pYES2 empty vector and pYES2 plasmid derivatives bearing both the full-length and TIR domain versions of BtpA and BtpB and the catalytically inactive BtpB E234A mutant. The graph shows ATP levels as a percentage relative to the ATP levels measured in empty vector control cells. Results correspond to means ± standard deviation of three different transformants and statistical comparison was done with one-way ANOVA, with p-values referred to the vector of 0.0045 ( ) for BtpA-TIR, 0.0017 ( ) for BtpB and

Fig 4. Functional analysis of BtpB and BtpA mutations in yeast. (A) Structure of the BtpA-TIR domain showing the positions equivalent to those identified as loss-of-function in BtpB by yeast random mutagenesis screening. Left panels: two views of the BtpA-TIR dimer structure (PDB: 4LZP) with chain A colored in wheat and chain B in grey. Residues identified are colored according to their assigned properties. Positions of the mutations putatively affecting protein folding are colored in blue (pA strand) and yellow (αC helix). Mutations at the active site are colored in magenta. Mutations at the surface outside the active site are colored in green. Right panels: views in the same orientation of the BtpA dimer depicted as cartoon with the side chains of mutated residues displayed as ball-and-sticks. Residue numbers are indicated for BtpA and the corresponding residues in BtpB are in parentheses. (B) Ten-fold serial dilution growth assay of YPH499 cells expressing pYES2 empty vector, BtpB full-length, BtpB-TIR and the indicated mutants

lysosomal-associated membrane protein 1 (LAMP1) (Fig 5A), suggesting BtpB associates with multiple intracellular structures. Unlike the yeast model, expression of BtpB did not result in significant perturbation of either the actin cytoskeleton (Fig 5B) or the microtubule network morphology (Fig 5C). Consistent with results from Felix and colleagues [19], we also observed localization of BtpB between cells, at sites of intercellular bridges that form during cell division (S6B Fig). As the yeast model revealed a potential role for the N-terminal domain of BtpB in intracellular localization, whereas the TIR domain accounted for its toxicity, we next analyzed the fate of truncated BtpB versions in HeLa cells. As in yeast, expression of BtpB-N resembled that of full-length BtpB, with cytosolic aggregates being formed (Fig 6). Expression of the TIR domain alone (BtpB-TIR) resulted in the formation of long filament-like structures that showed no colocalization with tubulin (Fig 6), consistent with the results obtained in the yeast model. In some cells, BtpB-TIR induced disorganization of the microtubule network (S6C Fig). These filamentous structures did not co-localize with vimentin either, a marker of intermediate filaments (S6D Fig), strongly reminiscent of what has been previously described for the
As shown in Fig 8B, both BtpA and BtpB strongly reduced total NAD levels in HeLa cells validating the results obtained with the yeast model. Fig 7 . 7 Fig 7. Inhibition of endocytosis occurs upon ectopic expression of BtpB but not during infection. HeLa cells expressing GFP-BtpB (green) were incubated with either EGF conjugated with Alexa Fluor 555 (red) or transferrin conjugated with Alexa Fluor 568 (red) for 10 minutes. (A) Cells were then analyzed by confocal microscopy and (B) and (C) the percentage of cells showing uptake of either fluorescent marker quantified. A cell was considered positive when clear labelling of endocytic vesicles was observed throughout the cell. Counts correspond to individual microscopy fields, obtained from three independent experiments. Data correspond to means ± standard deviation and statistical comparison was done with Mann-Whitney test, p = 0.0001 ( ) for EGF (left) and p < 0.0001 ( ) for transferrin (right). (D) HeLa cells or (E) immortalized bone marrow-derived macrophages (iBMDM) were infected for https://doi.org/10.1371/journal.ppat.1007979.g007 fully restored the wild-type phenotype in the case of BtpA (Fig 10A) and partially in the case of BtpB (Fig 10B). Fig 8 . 8 Fig 8. Ectopic expression of BtpB results in reduction of total NAD + in HeLa cells. (A) Representative Western blot showing levels of Myc-BtpA and Myc-BtpB expression revealed with an anti-Myc antibody and anti-actin antibodies, as a loading control. Myc-BtpA has a predicted molecular weight of 33 kDa whereas BtpB 38 kDa. (B) Measurement of total NAD levels using a colorimetric assay from HeLa cells expressing either Myc-tagged BtpA or BtpB. Non-transfected cells (negative) and cells transfected with Myc vector alone are also included as controls. Data correspond to means ± standard deviation from three independent experiments and statistical comparison was done with one-way ANOVA, with a p-value for the negative control versus Myc-BtpA of <0.0001 ( ) and versus Myc-BtpB of 0.0006 ( ). https://doi.org/10.1371/journal.ppat.1007979.g008 Fig 10 . 10 Fig 10. B. abortus TIR domain-containing proteins control intracellular total NAD levels during macrophage infection. (A) and (B) Immortalized bone marrow-derived macrophages (iBMDM) were infected for 48 h with either wild-type B. abortus or strains lacking btpA, btpB or both genes or complemented strains expressing btpA and btpB or the corresponding catalytic mutants. Mock infected cells are also included as a control. Total NAD levels were measured using a colorimetric assay and data correspond to means ± standard deviation from five (A) or three (B) independent experiments and statistical comparison was done with a one-way ANOVA test, with statistical significance indicated in the graph. Higher NAD levels in the infected cells than in the mock experiment are likely attributable to the fact that intracellular NAD levels from bacterial cells are added up to those of the cell line. In (A) for wild-type versus ΔbtpA p = 0.0071, wild-type versus ΔbtpApbtpA E217A p = 0.0476, ∆btpA versus ΔbtpApbtpA p = 0.0058 and ∆btpApbtpA versus ΔbtpApbtpA E217A p = 0.0398. In (B) for wild-type versus ΔbtpAbtpB p = 0.0011, wild-type versus ΔbtpB p = 0.0003, wild-type versus ΔbtpBpbtpB E217A p = 0.0001, ∆btpB versus ΔbtpBpbtpB p = 0.0156 and ∆btpBpbtpB versus ΔbtpBpbtpB E217A p = 0.0058. Statistical significances in relation to the negative control are not shown. https://doi.org/10.1371/journal.ppat.1007979.g010 Fig. 
Expression of BtpB and BtpA versions in yeast and in inhibition of yeast endocytosis by the TIR domain of BtpB. Western blotting of YPH499 cells extracts bearing the indicated BtpA (A) and BtpB (B) versions from pYES2-GFP plasmid derivatives, using antibodies anti-GFP (upper panels), anti-P-MAPK to show dual phosphorylation of Slt2 yeast MAPK and anti-G6PDH as loading control (lower panels). (C) Normarski and fluorescence microscopy of YPH499 cells expressing GFP-BtpB, GFP-BtpB-N and GFP-BtpB-TIR after 4h induction, stained with the endocytic marker FM4-64 for 1h. Scale bars indicate 5 µm. (PDF) S2 Fig. BtpA and BtpB TIR filaments are not coincident with yeast tubulin. Indirect immunofluorescence of YPH499 yeast cells expressing GFP-BtpA-TIR, GFP-BtpB-TIR, and their corresponding E234A mutant versions from pYES2 plasmid derivatives (green). Microtubules are stained using anti-tubulin antibody (red). Nuclei are labelled with DAPI (blue). Scale bars correspond to 5 µm. (PDF) S3 Fig. Inhibitory effect of BtpB on phosphorylation of yeast signaling proteins.(A) Western blotting from cells bearing the empty vector pYES2 (control), BtpA or BtpB from pYES2-GFP plasmid derivatives, developed with anti-P-MAPK antibody to detect duallyphosphorylated Slt2, Kss1 and Fus3 (upper panel) and anti-actin to detect actin as loading control. (B) Upper part: representative immunoblot from yeast cell lysates bearing pYES2-GFP-BtpB (+) or pYES2 (-) and upon different conditions: 30ºC (control), high temperature (39ºC), pheromone (α-factor) or Congo red, using anti-P-MAPK (upper panel), anti-PLOS PATHOGENSBrucella TIR proteins deplete host cell NAD + Slt2 (medium panel) and anti-actin (lower panel). Lower part: densitometric measurement of WB bands corresponding to phosphorylated MAPKs Slt2, Kss1 and Fus3. The graph displays densitometric data of phosphorylated MAPKs normalized against actin and error bars show the standard deviation from three independent experiments on different transformant clones. (C) Western blotting of cells containing the pYES2 empty vector (control) or pYES2-GFP-BtpB, developed with anti-P-p38 antibody to detect MAPK Hog1 under high osmolarity. conditions (0.6M KCl). (D) Western blotting of cells expressing heterologous Akt1 (pYES3-GFP-Akt1) with either pYES2 empty vector (control) or pYES2-GFP-BtpB, using anti-P-Akt1(Thr)308 (upper panel) and anti-Akt1 antibodies. All immunoblots were performed on protein extracts from transformants of the YPH499 yeast strain after 4 h of galactose induction. (PDF) S4 Fig. Partial suppression of BtpB toxicity by overexpression of yeast genes. (A) Ten-fold serial dilution assay of yeast cells co-expressing BtpB and each of the seven suppressor ORFs isolated from a yeast genetic screen. pYES3 and pYES2 are the corresponding empty vectors for BtpB and for the overexpressed genes, respectively. (B) Western blotting of W303-1A yeast strain co-expressing GFP-BtpB and each of the proteins encoded by the suppressor genes. Antibodies anti-GFP to detect GFP-BtpB (upper panel) and Anti-G6PDH as loading control (lower panel) were used. Anti-GFP antibody allows the detection of the indicated protein Atagged proteins due to affinity of the tag with the Fc region of IgG-type antibodies. (C) and (D) Ten-fold serial dilution assays of yeast cells co-expressing BtpB-TIR (C) or BtpA-TIR (D) and the suppressor genes. pYES3 and pYES2 are the corresponding empty vectors for BtpB-or BtpA-TIR and for the overexpressed genes, respectively. (PDF) S5 Fig. 
Functional analyses in yeast of loss-of-function mutations in conserved residues of BtpB. (A) Alignment of the protein sequences of the TIR domains of BtpB, BtpA, human SARM1 and plant RUN1. Conserved residues relevant for this study are marked with the same color code as in Fig 4, except for the catalytic site residues W213 and E217, which are colored in pink. (B) Structure of BtpA-TIR (left; PDB: 4LZP) and the RUN1-NADP+ complex (right; PDB: 6O0W), showing the equivalent positions of residues mutated in BtpB isolated in the yeast screen. Both structure cartoons are displayed in the same orientation. Side chains of mutated residues of BtpA relevant for this study are colored as in (A). The side chains of residues of the catalytic site of RUN1 are shown as ball-and-sticks and colored in pink, and the NADP+ ligand is colored in cyan. Specific atoms are colored as follows: nitrogen in blue, oxygen in red and phosphorus in orange. (C) Phenotype of selected loss-of-function BtpB mutants. Ten-fold serial dilution growth assay of YPH499 cells transformed with the pYES2 empty vector and pYES2 plasmid derivatives expressing full-length BtpB wild-type and mutants D158G, S162P and Y225C, under control (Glucose) and induction (Galactose) conditions. (D) Nomarski and fluorescence microscopy images of YPH499 cells expressing the indicated GFP-BtpB mutants, after 4h induction, stained with the endocytic marker FM4-64 for 1h. Scale bars indicate 5 µm. (E) Graph from the same experiment as in C representing the percentage of cells showing both GFP and FM4-64 vacuolar signal. Results correspond to means ± standard deviation of three independent transformants (n ≥ 100) and statistical comparison was done with one-way ANOVA with a p-value < 0.0001 ( ) for all four mutants versus wild-type. (PDF)

S6 Fig. Localization and effects of GFP-BtpB versions in HeLa cells. (A) Representative micrograph of HeLa cells expressing Myc-BtpB revealed with an anti-Myc antibody (red) and

Figure 20. The Brucella NyxA and NyxB proteins are translocated into host cells during infection. (A) RAW macrophage-like cells were infected for 4h with either B. abortus wild-type or ΔvirB9 expressing TEM1 (encoded by the bla gene) fused with NyxA, NyxB or BAB1_0466. The percentage of cells with coumarin emission, which is indicative of translocation, was quantified after incubation with the CCF2-AM substrate. Data represent means ± 95% confidence intervals from 5 independent experiments, with more than 500 cells counted for each condition. Kruskal-Wallis with Dunn's multiple comparisons test was used. Not all statistical comparisons are shown. (B) Representative images for B. abortus wild-type or ΔvirB9 carrying pbla:nyxA or pbla:nyxB to exemplify the presence of effector translocation, visible in coumarin-positive cells (violet), or its absence. (C) NyxA and NyxB translocation at 24h post-infection, as in A.

Figure 21. Amino acid sequence alignment of NyxA and NyxB obtained with ClustalW, matrix EBLOSUM62, showing 82.1% identity, 86.6% similarity and 3.7% gaps.
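Pairwise comparisons of this kind can be reproduced programmatically; a minimal sketch using Biopython with a BLOSUM62 matrix (the short sequences below are hypothetical fragments, not the real NyxA/NyxB sequences, and the gap penalties are common defaults rather than the exact parameters used for Figure 21):

```python
# Hedged sketch: global alignment with BLOSUM62, in the spirit of the
# ClustalW/EBLOSUM62 comparison quoted above. Sequences are placeholders.
from Bio import Align
from Bio.Align import substitution_matrices

aligner = Align.PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10
aligner.extend_gap_score = -0.5

nyxA_fragment = "MSEQKLAVLDAGHE"  # hypothetical
nyxB_fragment = "MSEQRLAVIDSGHE"  # hypothetical
alignment = aligner.align(nyxA_fragment, nyxB_fragment)[0]
print(alignment)
print("score:", alignment.score)
```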
Figure 22. NyxA and NyxB accumulate in cytosolic and nuclear structures. (A) Accumulation of 4HA-tagged NyxA (top) or 4HA-tagged NyxB (bottom) in cytosolic punctate or filament-like structures or (B) in the nucleus, in HeLa cells infected for 48h with ΔnyxA or ΔnyxB strains expressing DSRed and the corresponding 4HA-tagged effector. The cell nucleus is visible with DAPI. A fluorescence intensity profile along a defined straight line across the 4HA-positive structures is included for each image, with the HA signal represented in green and the bacterial signal in red.

Figure 25. NyxA and NyxB directly interact. (A) Pulldown experiment using purified NyxB against His-NyxA immobilized on a Ni-NTA resin. An empty column was used as a control for non-specific binding. Interactions were visualized on Coomassie blue-stained gels. The flowthrough (FT), wash (W) and elution (E) fractions are shown for each sample and the molecular weights indicated (kDa). Eluted His-NyxA and NyxB are indicated with black and red arrows, respectively. (B) Microscale thermophoresis measuring the fraction of 20 nM of purified NyxA, labelled with the RED-NHS protein labelling kit, binding to increasing concentrations of NyxB (6.67 nM-219 µM). Data correspond to means ± standard deviations of 3 independent experiments. The obtained Kd is indicated.

Figure 26. NyxA and NyxB directly interact. (A) Pulldown experiment using purified NyxA against His-NyxB immobilized on a Ni-NTA resin. An empty column was used as a control for non-specific binding. Interactions were visualized on Coomassie blue-stained gels. The flowthrough (FT), wash (W) and elution (E) fractions are shown for each sample and the molecular weights indicated (kDa). Eluted NyxA and His-NyxB are indicated with black and red arrows, respectively. (B) Confirmation of the identity of the two major eluted bands by mass spectrometry. The identified peptides are highlighted in green.

Figure 27. Nuclear Nyx-positive structures are closely associated with PML nuclear bodies. (A) HeLa cells were transfected for 12h with either HA-NyxA or NyxB (red) and labelled with an anti-PML antibody (green). (B) The level of co-localization measured with the Pearson's coefficient (PCC) between each effector and either PML or speckles. Co-transfection of the two effectors was used as a positive control. (C) Pull-down assay with His-NyxA or His-NyxB immobilized on Ni-NTA resins that were incubated with a HeLa cell extract. An empty column was used as a control for non-specific binding.

Figure 28. NyxA and NyxB are not SUMOylated in vitro. Purified GST-RanGAP1, NyxA or NyxB were incubated with SUMO2, Ubc9 and SAE1/UBA2 in the presence or absence of ATP. GST-RanGAP1(418-587) is used as a positive control: in the presence of ATP a band appears in the red frame corresponding
29 Figure 29. The Nyx effectors interact with host protease SENP3. (A) Schematic representation of SENP3, highlighting its catalytic domain and its N-terminal nucleolar localization sequences (NoLS). (B) Pull-down assay with the N-terminal region of SENP3 from amino acid 7 until 159 (SENP37-159) against His-V5-NyxA or His-V5-NyxB immobilized on Ni NTA resins. An empty column was used as a control for non-specific binding and purified His-NyxA and His-Nyx-B inputs are shown. Interactions were visualized by western blotting using anti-SENP3 antibody, and column binding with anti-V5 (lower blot). Non-bound fractions (F1 and F2), last wash (W) and elution (E) are shown for each sample and the molecular weights indicated (kDa). Figure 30 . 30 Figure 30. SENP3 is important for efficient B. abortus intracellular multiplication. (A) Western blot of HeLa cell lysate treated with siRNA control (siControl) or siRNA SENP3 (siSENP3) for 72h. Membrane was probed with an anti-SENP3 antibody followed by anti-actin for loading control. (B) Depletion was also verified by microscopy, showing a predominant nucleolar localization of SENP3 in control cells which is strongly reduced in siSENP3 treated cells. Scale bar is 5 m. (C) Bacterial colony forming units (CFU) counts of wild-type B. abortus following 2, 24 or 48h of infection of HeLa cells pre-treated with either siControl or siSENP3 for 72h. Data correspond to means  95% confidence intervals from 3 independent experiments. A two-way ANOVA with Bonferroni correction was used to compare siControl and siSENP3 at each time-point. (D) HeLa cells depleted for SENP3 or treated with the control Figure 31 . 31 Figure 31. The NyxB structure defines a novel family of effectors and allowed identification of the SENP3 interacting groove. (A) Two views of the NyxB monomer depicted in ribbon with helices coloured in wheat, strands in blue and loops in pink. (B) Two views of the NyxB dimer. (C) Structurebased sequence alignment of NyxB and NyxA. Secondary structure elements are indicated above the sequences. Identical residues are not shaded, residues shaded in black and grey are non-conserved Figure 32 . 32 Figure 32. NyxA and NyxB dimer formation. (A) Chromatograms (A280, plain) and mass measurements (dots) by Multi Angle Light Scattering of NyxB (left, green) and NyxA (right, blue). The average molecular weight determined is indicated. (B) Surface representation of dimer A (chains A and H) and Figure 33 . 33 Figure 33. Identification of the SENP3 interacting groove. (A) Surface representation of NyxB dimer coloured according to electrostatic potential (red negative, blue positive) showing the extended acidic patch.The inset shows a close-up view of the area with residues' side chains displayed as ball-andsticks and mutated residues (E82,Y66 and D80) coloured in cyan. (B) Pull-down assay with His-NyxA, His-NyxB or the specific catalytic mutants (His-NyxA MAG or His-NyxB MAG ) immobilized on Ni NTA resins that were incubated with a HeLa cell extract. Empty column was used as a control for non-specific binding. Interactions with endogenous SENP3, NPM1 or Histone 3 (H3) were visualized by western blotting using the corresponding antibody, and column binding with anti-His (lower blot). Non-bound fractions (F1 and F2), last wash (W) and elution (E) are shown for each sample and the molecular weights indicated (kDa). The cell extract and the different purified Nyx inputs are also shown. Figure 34 . 34 Figure 34. 
Figure 34. The Brucella Nyx effectors directly reduce the SENP3 nucleolar localization in host cells. (A) Representative confocal microscopy images of HeLa cells expressing the HA empty vector (top),

Figure 35. The Brucella NyxA effector directly reduces the SENP3 nucleolar localization in host cells. (A) Representative confocal microscopy images of cells expressing HA-NyxB (top) and HA-NyxB MAG (bottom), with nucleolin (red), SENP3 (white) and HA (green). (B) Quantification of the Pearson's coefficient of SENP3 versus nucleolin in HeLa cells infected for 48h with either B. abortus wild-type or

Figure 36. The Brucella Nyx effectors with mutated acidic grooves are still translocated during infection. RAW macrophage-like cells were infected for 24h with B. abortus wild-type expressing TEM1 (encoded by the bla gene) fused with NyxA, NyxA MAG, NyxB or NyxB MAG. The percentage of cells with coumarin emission, which is indicative of translocation, was quantified after incubation with the CCF2-AM substrate. Data represent means ± 95% confidence intervals from 5 independent experiments.

Figure 37. The Brucella NyxB effector also potentially contributes to SENP3 mislocalization during infection. (A) Quantification of the Pearson's coefficient of SENP3 versus nucleolin in HeLa cells infected for 48h with either B. abortus wild-type or ΔnyxB, its complemented strain ΔnyxB::Tn7-nyxB or the double deletion mutant ΔnyxAΔnyxB. Data are represented as means ± 95% confidence intervals from 4 independent experiments. (B) Representative confocal microscopy images of HeLa cells infected with the different strains expressing DSRed and labelled for SENP3 (green), in comparison to mock-infected control cells (non-infected, first panel).

Figure 38. B. abortus does not induce significant delocalization of PES1 and NPM1 from the host nucleoli. (A) Representative confocal images of HeLa cells infected for 48h with wild-type B. abortus and labelled for DNA to visualize bacteria and cell nuclei (cyan) and PES1 (yellow). Infected cells are indicated with an asterisk. (B) HeLa cells infected for 48h with DSRed-expressing wild-type B. abortus and labelled for NPM1 (white). All scale bars correspond to 5 µm.

Figure 39. B. abortus induces cytoplasmic punctate accumulation of NVL in a manner partially dependent on the Nyx effectors. (A) Confocal microscopy image of wild-type B. abortus (expressing DSRed) infected HeLa cells labelled with an anti-NVL antibody (white). (B) Quantification of the number of NVL-positive cytosolic structures in mock-infected control cells in comparison to wild-type or a mutant strain lacking both nyxA and nyxB. Data are represented as means ± 95% confidence intervals

Figure 40. Deletion of nyxA and nyxB does not impact intracellular multiplication of B. abortus. Enumeration of bacterial colony forming units (CFU) of wild-type B. abortus, ΔvirB9, ΔnyxA, ΔnyxB or ΔnyxAΔnyxB following 2, 24 or 48h of infection of HeLa cells. Data correspond to means ± 95% confidence intervals from 3 independent experiments.

We selected another intracellular pathogen, Legionella pneumophila, which multiplies in an ER-derived vacuole inducing ER stress, as observed for B. abortus [283]. Experiments done by Monica Rolando in the team of Carmen Buchrieser (Institut Pasteur) showed that infection with L. pneumophila did not result in NVL cytosolic accumulation (Figure 42), suggesting this phenomenon is specifically induced during B. abortus infection.
Figure 42. Legionella infection does not induce NVL cytosolic accumulation. Immunofluorescence analysis of THP-1 cells infected for 4 and 24 hours with the wild-type L. pneumophila strain Paris, carrying a DSRed-expressing plasmid. Cells were stained with an anti-NVL antibody (green) and analyzed by confocal microscopy. DAPI, light blue; phalloidin, grey. Scale bars correspond to 10 µm.

Figure 43. B. abortus induces cytosolic accumulation of RPL5. Representative confocal microscopy images of (A) HeLa cells or (B) iBMDM infected with wild-type DSRed-expressing B. abortus for 48h and labelled for RPL5 (white) and DAPI (blue). Zoomed cells are included on the right.

Figure 44. B. abortus NVL-punctate cytosolic structures are enriched in Nyx effectors. Representative confocal microscopy images of (A) HeLa cells or (B) iBMDM infected with B. abortus strains expressing either 4HA-NyxA or 4HA-NyxB (green) for 48h and labelled for NVL (white) and DAPI (blue). Scale bars correspond to 5 µm.

Figure 47. NVL punctate cytosolic structures are enriched in Nyx effectors and the ribophagy receptor NUFIP1. Representative confocal images of HeLa cells infected for 48h with (A) wild-type

Figure 48. NVL, Nyx and NUFIP1-positive cytosolic punctate structures are not lysosomal compartments. Representative confocal microscopy images of HeLa cells infected for 48h with DSRed-expressing (A and C) wild-type B. abortus or (B) ΔnyxA encoding 4HA-NyxA. Cells were labelled for DAPI (cyan), the lysosomal marker LAMP1 (green) and either (A) NVL, (B) HA or (C) NUFIP1 (white). In (C)

Figure 49. NVL-positive cytosolic punctate structures are observed under starvation conditions. Representative confocal microscopy images of HeLa cells untreated or treated with HBSS (starved cells) for 4h and labelled for NVL (white). All scale bars correspond to 5 µm.

3. Brucella expression vectors
DNA fragments coding for NyxA and NyxB were obtained by PCR amplification from the B. abortus 2308 genome, digested with XbaI and cloned into pFlagTEM1 [290] using InFusion (Takara Bio). After verification by sequencing, plasmids were introduced into B. abortus 2308 or ΔvirB9 by electroporation. B. abortus 2308 knockout mutants ΔnyxA and ΔnyxB were generated by allelic replacement. Briefly, upstream and downstream regions of about 750 bp flanking each gene were amplified by PCR (Q5, NEB) from the B. abortus 2308 genomic DNA. An overlapping PCR was used to associate the two PCR products, and the ΔnyxA or ΔnyxB fragments were digested and cloned

L. pneumophila was grown in N-(2-acetamido)-2-aminoethanesulfonic acid (ACES)-buffered yeast extract broth or on ACES-buffered charcoal-yeast extract (BCYE) agar. The human monocyte cell line THP-1 was cultured and infected as previously described [291]. For immunofluorescence analyses, cells were fixed in 4% paraformaldehyde, permeabilized with PBS-Triton 0.5% and stained with 4',6-diamidino-2-phenylindole (DAPI), phalloidin and a primary anti-NVL antibody (16970-1-AP). Immunosignals were analyzed with a Leica SP8 microscope at 63X. Images were processed using ImageJ software.

9. Pulldown assays
50 µg of His-tagged recombinant protein was incubated with recombinant protein for 2 h at 4°C, then incubated for 1 h in a gravity flow column (Agilent) containing 80 µl Ni-NTA agarose beads (Macherey-Nagel) beforehand washed in water and pre-equilibrated in equilibrium buffer (20 mM Tris-HCl pH 7.5, 250 mM NaCl).
The column was washed successively three times in equilibrium buffer supplemented with 25 mM imidazole and three times in equilibrium buffer, and eluted in equilibrium buffer supplemented with 500 mM imidazole.

10. Quantification of SENP3 localization
Colocalization analysis for the transfected HeLa cells was performed with a custom ImageJ/Fiji-based macro that segmented the nuclei and the nucleoli of the cells in each image, classified the cells in two classes according to the intensity of HA-NyxA/NyxB, and then measured, in the areas of each nucleus, the Pearson correlation coefficients (by calling the plugin Coloc2 - https://github.com/fiji/Colocalisation_Analysis) of the SENP3 fluorescence signal (a minimal re-expression of this per-nucleus computation is sketched at the end of this passage).

Microscale thermophoresis
In practice, one of the partners is labelled with a fluorescent molecule or is capable of fluorescing natively. It is put in contact with a ligand at 10-16 different concentrations and then placed in capillaries. This system makes it possible to measure the dissociation constant of the interaction between two molecules without immobilization of a partner. It is a technique requiring little biological material and allows the study of interactions from the nanomolar to the millimolar range. The experiments were performed at 20°C on a Monolith NT.115 (Nanotemper) with Premium Coated (Nanotemper) capillaries. The tested interactions were performed in triplicate. The labelling of NyxA was performed by covalent coupling of the lysines of the protein to a fluorophore using the RED-NHS (Nanotemper) Protein Labeling Kit. Unlabeled

14. Protein expression and purification
E. coli BL21-DE3-pLysS bacteria were transformed with the expression vectors, grown in LB medium (Sigma-Aldrich) to OD280 = 0.6, and expression was induced with 1 mM IPTG at 37°C for 3 hours. Cells were harvested by 15 min of centrifugation at 5,000 g and resuspended in lysis buffer (20 mM Tris pH 8, 150 mM NaCl, 5% glycerol, 0.1% Triton). Cell disruption was achieved by sonication after addition of an EDTA-free antiprotease cocktail (Roche) and 30 U/ml benzonase (Sigma-Aldrich). Cell debris were removed by centrifugation for 30 min at 20,000 g at 4°C. Recombinant protein was purified by chromatography using a nickel-loaded HiTrap Chelating HP column (GE Healthcare). Unbound material was extensively washed using 20 mM Tris pH 8, 300 mM NaCl, 25 mM imidazole, 5 mM β-mercaptoethanol, 10% glycerol. An additional washing step with 2 column volumes of 1 M NaCl was done before elution of NyxB over a 25 to 500 mM imidazole gradient over 8 column volumes. Peak fractions were pooled and the His tag was cleaved with TEV protease (500 µg per 20 mg of eluted protein) in the presence of 1 mM DTT and 0.5 mM EDTA during overnight dialysis in 20 mM Tris pH 8, 150 mM NaCl. NyxB was further purified by size exclusion chromatography (Superdex 200 HiLoad 16/600, GE Healthcare) equilibrated in 20 mM Tris pH 8, 150 mM NaCl, 5% glycerol. Purity of the sample was assessed by SDS-PAGE. Freshly purified NyxB was concentrated to 21 mg/ml on 3 kDa Amicon Ultra concentrators (Millipore). SeMet-NyxB was produced in M9 minimal medium and purified as above, to a final concentration of pure SeMet-NyxB of 24 mg/ml.

15. Crystallization and data collection
Screening was conducted using a Mosquito workstation (TTP Labtech) on commercial crystallization solutions with the sitting-drop vapour diffusion technique, against a protein solution. All crystallization trials were performed at 19°C and visualized on a RockImager 182 (Formulatrix).
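For the per-nucleus colocalization quantification described in section 10 above, here is a minimal numpy re-expression of the quantity the Coloc2 call returns (a Pearson coefficient restricted to a segmented region). The channel arrays, the label mask and the pairing of the SENP3 signal with nucleolin are assumptions for illustration; the real analysis used the Fiji macro described in the text:

```python
import numpy as np

def pearson_in_mask(ch1, ch2, mask):
    """Pearson correlation of two channels restricted to a boolean mask."""
    a = ch1[mask].astype(float)
    b = ch2[mask].astype(float)
    a -= a.mean()
    b -= b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def per_nucleus_pcc(senp3, nucleolin, nucleus_labels):
    """One PCC per segmented nucleus; label 0 is taken to be background."""
    return {lab: pearson_in_mask(senp3, nucleolin, nucleus_labels == lab)
            for lab in np.unique(nucleus_labels) if lab != 0}
```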
Crystals of native NyxB were obtained with 21 mg/ml NyxB in 25% PEG 4000, 0.2 M CaCl2, 100 mM Tris pH 8. Crystals were frozen in reservoir solution supplemented with 15% glycerol. Diffraction data were collected at the European Synchrotron Radiation Facility (ESRF) beamline ID23EH1; crystals diffracted up to 2.5 Å resolution in space group P6122, with 12 molecules per asymmetric unit. Crystals of SeMet-NyxB were obtained using a reservoir solution containing 2.6 M Na formate and were cryoprotected in 2.8 M Na formate supplemented with 10% glycerol. Data were collected at SOLEIL on beamline Proxima-2 from a single crystal that diffracted up to 3.7 Å resolution and belonged to space group P6222, with 2 molecules per asymmetric unit. Diffraction data were processed using XDS [292] and SCALA from the CCP4 program suite [293]. The structure was solved using the single anomalous dispersion method on SeMet crystals using AutoSol [294] from the Phenix program suite [295]. An excellent experimental electron density map enabled us to manually build an initial model. The resulting model was then used for molecular replacement with data from the native NyxB crystal using Phaser [295]. Twelve monomers were positioned and the resulting electron density map was then subjected to the AutoBuild program, part of the Phenix program suite [295]. Model building was completed with sessions of manual model building using Coot combined with model refinement using Phenix [Adams et al.]. The final model was refined to a final Rwork/Rfree of 0.20/0.24 with excellent geometry. The coordinates and structure factors of NyxB have been deposited in the Protein Data Bank under accession code 7AD4. Figures were generated with PyMOL.

16. Size exclusion chromatography coupled to multi-angle light scattering
Size exclusion chromatography (SEC) experiments coupled to multi-angle laser light scattering (MALS) and refractometry (RI) were performed on a Superdex S200 10/300 GL Increase column (GE Healthcare) for NyxA and a Superdex S75 10/300 GL (GE Healthcare) for NyxB. Experiments were performed in 20 mM Tris pH 8, 150 mM NaCl, 5% glycerol. 100 µl of protein was injected at a concentration of 10 mg/ml. On-line MALS detection was performed with a miniDAWN-TREOS detector (Wyatt Technology Corp., Santa Barbara, CA) using a laser emitting at 690 nm, and by refractive index measurement using an Optilab T-rex system (Wyatt Technology Corp., Santa Barbara, CA). Weight-averaged molar masses (Mw) were calculated using the ASTRA software (Wyatt Technology Corp., Santa Barbara, CA).

17. Small-angle X-ray scattering
SAXS data were collected for NyxA and NyxB on the BioSAXS beamline BM29 (ESRF) using an online size-exclusion chromatography setup. 50 µl of protein (10 mg/ml) was injected into a size-exclusion column (Agilent BioSec-3) equilibrated in 50 mM Tris pH 8.0, 200 mM NaCl. Images were acquired every second for the duration of the size-exclusion run. Buffer subtraction was performed by averaging 20 frames on either side of the peak. Data reduction and analysis were performed using the BsxCuBE data collection software and the ATSAS package [296]. The program AutoGNOM was used to generate the pair distribution function P(r) and to determine Dmax and Rg from the scattering curves (I(q) versus q) in an automatic, unbiased manner. Theoretical curves from the models were generated by FoXS [297]. Ab initio modelling was performed with GASBOR [298].
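The Rg and Dmax values above come from the ATSAS tools (AutoGNOM). As background only, Rg can also be estimated directly from the Guinier region of the scattering curve, I(q) ≈ I(0)·exp(-q²Rg²/3), valid roughly for q·Rg < 1.3; the window-selection heuristic and the synthetic curve below are assumptions, not the authors' procedure:

```python
import numpy as np

def guinier_rg(q, intensity, rg_guess=25.0, qrg_max=1.3):
    """Estimate (Rg, I0) by a linear fit of ln I versus q^2 in the Guinier region."""
    window = q * rg_guess < qrg_max                  # crude window from a prior guess
    slope, intercept = np.polyfit(q[window] ** 2, np.log(intensity[window]), 1)
    return np.sqrt(-3.0 * slope), np.exp(intercept)

# Synthetic check: a curve generated with Rg = 20 A is recovered.
q = np.linspace(0.005, 0.3, 200)                     # 1/Angstrom
i_q = 1e3 * np.exp(-(q * 20.0) ** 2 / 3.0)
rg, i0 = guinier_rg(q, i_q, rg_guess=20.0)
print(f"Rg ~ {rg:.1f} A, I(0) ~ {i0:.0f}")
```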
Figure 51: NyxA and NyxB are targeted to the yeast nucleus in an alpha-importin-dependent manner. (A) Transformants of the YPH499 strain with either pYES2-GFP-NyxA or pYES2-GFP-NyxB, as indicated, were incubated in synthetic galactose medium for 6 h, stained with DAPI and observed. Cells expressing GFP-NyxA/B fusion proteins had a signal coincident with DAPI-stained nuclei. (B) Srp1 function is required for nuclear localization of NyxA. Transformants of the W303 (SRP1) or thermosensitive isogenic JCY1410 (srp1-31) strains with pYES2-GFP-NyxA were incubated for 6 h at 24˚C or shifted for 1 h to 37˚C after 5 h of incubation, as indicated. Cells with nuclear fluorescence in the population were counted (n > 100) and expressed as a percentage. Data are the average of three individual transformant clones. Bars indicate the standard deviation. A Student's t-test was performed for statistical significance. Data from Victor Cid's lab.

Figure 52: Illustration of the potential assembly between NyxA and NyxB.

Figure 53: Ectopically expressed SerB and BspJ are associated with PML nuclear bodies, like the Nyx effectors. HeLa cells were transfected for 18h with either HA-BspJ, HA-SerB or co-transfected with HA-SerB and Myc-NyxB and labelled with an anti-PML antibody (green). All microscopy images displayed have scale bars corresponding to 5 µm. Images from Emilie Mills, Erasmus student in the lab.

A study published in the summer of 2020 presented for the first time NVL outside the nucleus, in the cytosol. The authors established that NVL was involved in the absorption of albumin, which involves a multi-ligand receptor complex called CUBAM containing cubilin (CUBN) and amnionless (AMN). Although the mechanisms of albumin endocytosis are not fully characterized, the authors observed that NVL interacts with AMN. They observed that the co-expression of NVL and AMN in cells led to the delocalization of NVL from nucleoli to a cytosolic compartment, and that the abolition of NVL expression greatly altered albumin internalization, revealing a new function of NVL in endocytic regulation [305].

Figure 54: General schema of autophagy [306].

1. Annexe 1: The TIR Homologue Lies near Resistance Genes in Staphylococcus aureus, Coupling Modulation of Virulence and Antimicrobial Susceptibility (Patot et al., PLoS Pathogens, 2016)

Table 1. Distribution of the tirS gene among MSSA and MRSA clinical strains. Columns: CC/ST (a,b); SCCmec (b,c); clone name (b); no. of strains tested; no. of positive PCR tests for tirS (d) / no. of strains tested. Footnotes: (a) CC: clonal complex; ST: sequence type. (b) CC/ST, SCCmec, and MRSA clone name were identified using the Identibac S. aureus Genotyping Kit. (c) -: absence of SCCmec. (d) tirS gene detected by specific PCR as described in the Materials and Methods. * SCCmec element sequencing. doi:10.1371/journal.ppat.1006092.t001

Fig 1. Characterization of the genomic context of the tirS region in S. aureus MRSA and MSSA strains. Localization of the tirS gene and surrounding context in 4 tirS-positive MRSA strains and 2 tirS-positive MSSA strains. For all 6 strains, the tirS gene was found within the SCC element.
Schematic diagrams represent the genomic comparison of the 6 SCC regions of both MRSA and MSSA strains used in our study and the SCC476 element of strain MSSA476, for which the tirS gene was first described [9]. Predicted ORFs are marked in the direction of transcription as arrows. The highly conserved tirS region, consisting of the tirS gene and 4 surrounding genes, among which we find the fusC gene (fusidic acid resistance), is represented by green arrows. Light rose arrows represent genes coding for antibiotic resistance, orange arrows represent the mecA gene conferring resistance to beta-lactams, and blue-grey arrows represent the site-specific recombinases (ccrAB or ccrC) of SCC elements. The SCC elements ISS(L) and ISS(R) are also represented as black vertical bars. Abbreviations: ccrAB = cassette chromosome recombinase A and B; fusC = fusidic acid resistance gene; mecA = methicillin resistance gene; ccrC = cassette chromosome recombinase C; ble = bleomycin resistance gene; aaD/knt = kanamycin nucleotidyltransferase (resistance gene).

Fig 3. TirS interferes with TLR and IL-1R signaling. (A) HEK293T cells were transiently transfected for 24 h with the luciferase reporter vector and murine TLR2, TLR4, TLR5, or TLR9 in the presence or absence of TirS (50 ng). Cells were then stimulated with the appropriate ligand (PAM, LPS, Fl-ST, and CpG) for 6 h before measurement of luciferase activity. White bars correspond to the negative control, black bars to cells stimulated with the appropriate ligand, and gray bars to cells transfected with TirS and stimulated with the ligand. Data represent the means ± SEM of the relative luciferase activity and were obtained from duplicates of 3 independent experiments. (B) Luciferase activity of murine TLR2 following transfection with

Fig 4. S. aureus deleted for tirS induced larger skin lesions in WT C57BL/6 mice. (A) Data are presented as mean total lesion size (mm²) ± SEM and are representative of 3 independent experiments with at least 4 mice/group. * p < 0.05; ** p < 0.01. (B) Photographs of representative lesions at day 7 after staphylococcal infection. Black bars indicate 10 mm. Abbreviations: WT = wild-type; ΔtirS = deleted for the tirS gene; ΔtirS + tirS = ΔtirS strain chromosomally restored for the tirS gene. doi:10.1371/journal.ppat.1006092.g004

Fig 6. S. aureus deleted for tirS induced similar skin lesion sizes in MyD88-deficient mice. Data are presented as mean total lesion size (mm²) ± SEM and are representative of 2 independent experiments with at least 4 mice/group. Abbreviations: WT = wild-type; ΔtirS = deleted for the tirS gene. doi:10.1371/journal.ppat.1006092.g006

All mouse protocols were carried out in strict accordance with Directive 2010/63/EU revising Directive 86/609/EEC on the protection of animals used for scientific purposes. This directive was translated into French regulation as Décret n° 2013-118 of February 2013 under the jurisdiction of the Ministry of Education, Research and Technology. The initial research project had been approved by the local Animal Ethics Evaluation Committee CECCAPP (Comité d'Evaluation Commun au Centre Léon Bérard, à l'Animalerie de transit de l'ENS, au PBES et au laboratoire P4) under references ENS_2014_025 and ENS_2014_052, and subsequently authorized by the French Ministry of Education, Research and Technology.

2. Annexe 2: A Pseudomonas aeruginosa TIR effector mediates immune evasion by targeting UBAP1 and TLR adaptors (Imbert et al.,
The EMBO Journal, 2017)

This work identified PumA, a TIR protein of an atypical P. aeruginosa strain, as a key virulence determinant capable of interfering with both TLR and TNF receptor signalling. This work also implicated for the first time a role for the endosomal sorting complex required for transport I (ESCRT-I) in the modulation of intracellular trafficking of TLR adaptors. I carried out all the in vitro interaction assays with purified PumA and PumA 1-136, either using eukaryotic cell extracts expressing TIRAP and MyD88 or by co-expression in E. coli, and realized the lipid binding experiments.

Pseudomonas aeruginosa TIR effector mediates immune evasion by targeting UBAP1 and TLR adaptors. Paul RC Imbert, Arthur Louche, Jean-Baptiste Luizet, Teddy Grandjean, Sarah Bigot, Thomas E Wood, Stéphanie Gagné, Amandine Blanco, Lydia Wunderley, Laurent Terradot, Philip Woodman, Steve Garvis, Alain Filloux, Benoit Guery & Suzana P Salcedo.

Figure 1. PumA is required for Pseudomonas aeruginosa PA7 virulence in vivo. (A) Caenorhabditis elegans survival curve. Fifty C. elegans were infected with E. coli OP50 and with the highly virulent strain P. aeruginosa PA14. One hundred C. elegans were infected with P. aeruginosa PA7 and PA7 ∆pumA. A Mantel-Cox test was used, with ***P = 0.0002. (B) To establish an in vivo model of acute infection, mice were intranasally infected with 4 × 10⁷ CFU of P. aeruginosa PA7 or PA7∆pumA strains (n = 7/group). Lethality was monitored for 96 h, and a Mantel-Cox test was used, with **P = 0.0035. (C-E) Mice were intranasally infected with 3 × 10⁷ CFU of P. aeruginosa PA7 or PA7∆pumA strains (n = 7/group). Cells from bronchoalveolar lavage (BAL) were counted (C). Bacterial load in the lungs (D) and dissemination (E) were assessed through cultured lung or spleen homogenate. A non-parametric two-tailed Mann-Whitney test was carried out, with (C) *P = 0.0173, (D) *P = 0.0364 and (E) P = 0.3629. All data correspond to mean ± standard error.

PumA enrichment was observed with PumA tagged with Myc or HA and MyD88 fused to either HA, FLAG or Myc. Curiously, this phenotype was exacerbated when GFP-PumA, normally at the cell surface (Fig EV3), was co-expressed with MyD88, resulting in the majority of GFP-PumA being recruited to MyD88-positive compartments (Fig EV3C).

Figure 3. PumA co-localizes with TIRAP at the plasma membrane and, to a lesser extent, with intracellular MyD88 when ectopically expressed in host cells. (A, B) Confocal microscopy of HeLa cells (A) and mouse embryonic fibroblasts (MEFs) (B) co-expressing Myc-PumA and the adaptor proteins HA-TIRAP (top panel) and HA-MyD88 (bottom panel). Cells were fixed after 10 h of transfection. Scale bars correspond to 10 µm. (C, D) Representative micrographs obtained by super-resolution structured illumination microscopy (SIM) of MEFs co-expressing (C) Myc-PumA and HA-TIRAP and (D) Myc-PumA and HA-MyD88. Wide field (WF) is shown in the top panels and structured illumination (SIM) in the bottom panels. Scale bars correspond to 1 µm.

We therefore carried out co-IP experiments with GFP-PumA137-303 expressed in host cells. We could not detect any interaction between PumA137-303 and TIRAP, nor between PumA137-303 and MyD88. However, it is important to note that expression of PumA137-303 results in loss of plasma membrane localization.
Instead, we observed formation of cellular aggregates that are positive for FK2 labelling (Fig EV4C), which recognizes mono- and poly-ubiquitinated proteins, and could correspond to misfolded protein. For this reason, we cannot completely exclude a role of the C-terminus of PumA in these interactions. Nonetheless, our data identify TIRAP and MyD88 as host cell targets.

Figure 5. PumA is also capable of interacting with MyD88. (A) Pull-down assay using extracts from cells expressing HA-MyD88 against His-PumA or His-PumA1-136 immobilized on a Ni-NTA resin. An empty column was used as a control for non-specific binding. Interactions were visualized by western blotting using an anti-HA antibody, and column binding with anti-His (lower blot). Non-bound fraction (FT), last wash (W) and elution (E) are shown for each sample and the molecular weights indicated (kDa). (B) Co-immunoprecipitation (co-IP) assay from cells expressing GFP-PumA and either Myc-TIRAP or Myc-MyD88. GFP was used as a control for non-specific binding. The co-IP was revealed using an anti-Myc antibody, the fraction bound to GFP-trapping beads using an anti-GFP antibody, and the inputs (shown on the bottom two images) using both anti-Myc and anti-GFP antibodies. (C-E) Co-purification of His-PumA1-136 co-expressed in E. coli BL21 with either (C) His-MBP (control), (D) His-MBP-TIRAP or (E) His-MBP-MyD88. Interactions were visualized with Coomassie blue stained gels. Soluble fraction (SF) and selected elutions (E) are shown for each sample and the molecular weights indicated (kDa). Source data are available online for this figure.

...which is not dependent on TIR-TIR interactions (Fig EV5A). TNFR1 inhibition was specific to PumA, as expression of the Brucella TIR domain-containing protein BtpA/TcpB did not have any significant effect (Fig EV5B). We therefore carried out a yeast two-hybrid screen to identify an alternative target of PumA and found UBAP1, a key component of the ESCRT-I complex mediating trafficking and sorting of ubiquitinated cargo proteins on MVBs [Stefani et al.; Agromayor et al.]. This interaction was confirmed by co-IP from cells co-expressing PumA and UBAP1 (Fig 6A and B), as well as by pull-down using purified PumA or PumA1-136 and cell extracts with either streptavidin-tagged UBAP1 (Fig 6C) or Myc-UBAP1 (Appendix Fig S5A). These results were specific to PumA, as the Brucella TIR protein BtpA/TcpB did not show any interaction (Fig 6C). Furthermore, PumA137-303 could not co-IP UBAP1 (Fig EV4D), supporting a role of the TIR domain in targeting UBAP1. Not surprisingly, microscopy analysis of cells expressing both PumA and UBAP1 showed significant co-localization at the plasma membrane and intracellular compartments (Appendix Fig S5C). We next investigated whether PumA interacts with UBAP1 in the context of the ESCRT-I machinery. As over-expression of UBAP1 could result in its mislocalization, we carried out endogenous co-IPs from cells expressing HA-PumA. Full-length PumA not only interacted very efficiently with endogenous UBAP1 but, more importantly, also co-immunoprecipitated TSG101 (Fig 6D), confirming PumA is targeting the ESCRT-I machinery.
As expected, PumA also interacted with endogenous TIRAP (Fig 6D). The TIR domain of PumA only weakly interacted with endogenous UBAP1 and TIRAP (Fig 6E), whereas the C-terminus of PumA showed no interactions (Fig 6F).

Figure 6. Identification of UBAP1 as a novel host protein targeted by the bacterial TIR domain of PumA. (A, B) Co-immunoprecipitation (co-IP) assay from cells expressing Myc-UBAP1 (A) or Strep-UBAP1 (B) with either GFP, GFP-PumA or TIRAP-GFP. The co-IPs were revealed using an anti-Myc (A) or anti-UBAP1 (B) antibody, the fractions bound to GFP-trapping beads using an anti-GFP antibody, and the inputs using anti-Myc, anti-GFP or anti-UBAP1 antibodies as indicated. (C) Pull-down assay using extracts from cells expressing Strep-UBAP1 against His-PumA or His-PumA1-136 immobilized on a Ni-NTA resin. An empty column was used as a control for non-specific binding. Interactions were visualized by western blotting using an anti-UBAP1 antibody, and column binding with anti-His (middle blot), followed by anti-V5 (lower blot), necessary for detection of BtpA, which for reasons we do not understand cannot be easily detected with the anti-His antibody (Appendix Fig S5B). (D-F) Endogenous co-IP from cells expressing (D) HA-PumA, (E) HA-PumA1-136 and (F) HA-PumA137-303. The fractions bound to HA-trapping beads were probed with anti-HA, anti-UBAP1, anti-TIRAP and anti-TSG101 antibodies. Non-bound fraction (FT), last wash (W) and elution (E) are shown for each sample and the molecular weights indicated (kDa). Source data are available online for this figure.

Figure 7. Analysis of the impact of UBAP1 on TIRAP and MyD88. (A) Representative micrographs obtained by confocal microscopy of HeLa cells co-expressing Myc-UBAP1 (red) and the adaptor proteins HA-TIRAP (green, top panel) or HA-MyD88 (green, bottom panel). Cells were fixed after 10 h of transfection. Scale bars correspond to 10 µm. (B) Different zoomed images showing HA-MyD88 (green) recruitment to the plasma membrane in the presence of Myc-UBAP1 (red). Scale bars correspond to 10 µm. (C) Quantification of plasma membrane localization of MyD88 in cells expressing MyD88 alone or with either UBAP1, TIRAP or PumA. At least 200 cells were enumerated in three independent experiments, and membrane localization was defined under the strict criterion of a clear line at the plasma membrane. Cells with MyD88-positive vesicles in close proximity to the plasma membrane were not counted as positive. A non-parametric one-way ANOVA Kruskal-Wallis test was performed, with Dunn's multiple comparisons test. **P < 0.01. (D, E) Endogenous co-IP from cells expressing (D) HA-MyD88 and (E) HA-TIRAP. The fractions bound to HA-trapping beads were probed with anti-HA, anti-UBAP1, anti-TIRAP and anti-TSG101 antibodies. Non-bound fraction (FT), last wash (W) and elution (E) are shown for each sample and the molecular weights indicated (kDa). (F) Western blot of TNFR1 in A549 cells infected for 1 h with either P. aeruginosa PA7 wt, ∆pumA or ∆pumA:pumA (Ara) induced with 1% arabinose. A mock-infected sample was included as a negative control. The same blot was also probed for TIRAP and actin to control loading.

...at 1/250, mouse anti-Myc 9E10 (developed by Bishop, J.M., obtained from the Developmental Studies Hybridoma Bank, created by the NICHD of the NIH and maintained at the University of Iowa), mouse anti-HA (Eurogentec, clone 16B12, ref. MMS-101R), rabbit anti-HA (Sigma, ref.
H6908), rabbit anti-GFP (Amsbio, ref. TP401), rabbit anti-UBAP1 (Proteintech, ref. 12385-1-AP), mouse anti-His (Sigma, clone HIS-1, ref. H1029) and mouse anti-FLAG (Sigma, clone M2, ref. F1804), all at 1/1000, and mouse anti-

20. The volume is dependent on the protein concentration of the cell extract. As a guide, 125-150 µg of protein of a cell extract is usually incubated per microgram of bait protein. Alternatively, cell extract samples can be normalized by visualization of transfected proteins to ensure equivalent expression of the prey and the relevant controls (see Note 7).

21. Several controls should be added at this step. Load 400 µL equilibrium buffer without prey protein to analyze the efficiency of the immobilization of the bait protein. As negative controls, load onto the negative column (see Note 12) 200 µL cell lysate containing the prey protein or the negative cell lysate (see Note 7) mixed with 200 µL equilibrium buffer. Additionally, load 200 µL negative cell lysate mixed with 200 µL equilibrium buffer onto the column associated with the bait protein.

Table 2: Brucella T4SS effector proteins.
Name | Gene | Target | Function | Translocation method | References
VceA | bab1_1652 | Unknown | Inhibits autophagy and induces apoptosis | TEM1 | [106]
VceC | bab1_1058 | BiP | Activates the UPR | TEM1 | [106]
BPE005 | bab1_2005 | Unknown | Inhibition of matrix metalloproteinase 9 | CyaA | [104]
BPE043 | bab1_1043 | Unknown | Unknown | CyaA | [104]
BPE275 | bab1_1275 | Unknown | Unknown | CyaA | [104]
BPE123 | bab1_0123 | ENO-1 | Contributes to the intracellular lifestyle; ENO-1 recruitment to the BCV | CyaA | [104]
RicA | bab1_1279 | Rab2 | Regulation of vesicular trafficking; Rab2 recruitment at the BCV | TEM1 | [84]
BspA | bab1_0678 | Unknown | Inhibits the secretory pathway | TEM1 and CyaA | [83]
BspB | bab1_0712 | COG complex | Inhibits the secretory pathway; biogenesis of the rBCV and bacterial proliferation | TEM1 and CyaA | [83]
BspC | bab1_0847 | Unknown | Unknown | TEM1 and CyaA | [83]
BspE | bab1_1671 | Unknown | Unknown | TEM1 and CyaA | [83]
BspF | bab1_1948 | Unknown | Inhibits the secretory pathway | TEM1 and CyaA | [83]
BtpA/TcpB/Btp1 | bab1_0756 | TIRAP, MyD88 | Inhibits TLR pathways; modulation of microtubule dynamics; UPR induction | TEM1 and CyaA | [119]
BtpB/Btp2 | bab1_0279 | MyD88 | Inhibits TLR pathways | TEM1 and CyaA | [117]
SepA | bab1_1492 | Unknown | Inhibits BCV fusion with the lysosome | 3xFLAG | [105]

4. The different research projects of my thesis work
The objective of this thesis is to characterize new bacterial effectors identified in Brucella. My work focused on four Brucella effectors: BtpA and BtpB, and two new effectors, NyxA and NyxB. BtpA and BtpB are two TIR domain-containing proteins known to interfere with TLR-mediated signalling and thereby inhibit the host's innate immune response. Recently, proteins containing TIR domains have been described as having NAD+ hydrolase enzymatic activity in vitro. In collaboration with the team led by Dr. Victor Cid, we aimed to determine if this NAD+ hydrolase activity is retained for BtpA and BtpB. Victor Cid's lab, using over-expression of these TIR effectors in yeast, was able to show that both proteins resulted in NAD+ depletion in yeast cells. Our team, by measuring the intracellular level of NAD+, confirmed these results in epithelial cells expressing BtpA and BtpB but also during Brucella infection. This collaborative work is included in the results and was published in PLoS Pathogens this year, in which I am joint first author.
Table 3: Bacterial TIR domain-containing proteins and immune subversion (adapted from [169]).
Protein | Bacterium | Function | Interaction | References
BtpA (Btp1/TcpB) | Brucella spp. | Inhibits TLR2, TLR4 and TLR5-mediated signaling; stabilization of microtubules; inhibition of dendritic cell maturation; induces the unfolded protein response (UPR); facilitates bacterial colonization | TIRAP, MyD88, TLR4, microtubules, lipid rafts | [169], [118], [174], [175], [120], [176], [119], [117], [121], [177], [178]
BtpB | Brucella spp. | Inhibits TLR2, TLR4, TLR5 and TLR9-mediated signaling; promotes virulence | MyD88 | [117]
TlpA | Salmonella enterica | Inhibits TLR signaling responsible for activation of NF-κB; promotes bacterial survival | Not known | [167]
TirS | Staphylococcus aureus | Inhibits TLR2, TLR4, TLR5 and TLR9-mediated signaling; promotes virulence | Not known | [179], [180]
YpTdp | Yersinia pestis | Suppresses NF-κB activation; no role in virulence | MyD88 | [181]
PdTlp | Paracoccus denitrificans | Not known | MyD88, TLR4 | [182]
TcpC | Escherichia coli | Inhibits TLR2 and TLR4-mediated signaling; facilitates bacterial colonization | MyD88, TLR4 | [118], [168], [183], [184]
PumA | Pseudomonas aeruginosa PA7 | Inhibits TLR4 and TLR5-mediated signaling; inhibits NF-κB activation; promotes virulence | MyD88, TIRAP, UBAP1, lipid rafts | [185]

2. Brucella effectors target SENP3, inducing subcellular mislocalisation of nucleolar proteins and induction of ribophagy

1. Two newly identified B. abortus effectors, NyxA and NyxB, accumulate in cytosolic and nuclear structures

Bacterial effectors often contain eukaryotic-like domains to enable efficient modulation of cellular pathways. Previous work highlighted a subgroup of Brucella candidate effectors containing a carboxyl-terminal CAAX tetrapeptide motif (C corresponds to cysteine, A to aliphatic amino acids and X to any amino acid) [277]. Several bacterial effectors rely on this kind of motif as a lipidation site to facilitate membrane attachment, such as SifA from Salmonella enterica [278,279] and AnkB from Legionella pneumophila [277,280].
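Since the CAAX box is defined above purely by sequence (a cysteine, two aliphatic residues, then any residue at the very C-terminus), a toy scanner makes the pattern concrete. The aliphatic alphabet chosen here and the example sequences are illustrative assumptions; published CAAX definitions vary:

```python
import re

ALIPHATIC = "AVLIGM"  # one common choice of aliphatic residues; definitions vary
CAAX_RE = re.compile(rf"C[{ALIPHATIC}][{ALIPHATIC}].$")

def has_caax_box(protein_seq: str) -> bool:
    """True if the last four residues match C-A-A-X."""
    return bool(CAAX_RE.search(protein_seq.upper()))

print(has_caax_box("MKTSVIM"))  # False: no cysteine at the -4 position
print(has_caax_box("MKTCVIM"))  # True: ends in C-V-I-M
```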
We have recently confirmed that one of these Brucella candidate effectors, BspL, is translocated into host cells during infection, although no function has yet been assigned to its CAAX sequence [281]. Therefore, we set out to determine if other B. abortus-encoded proteins with a potential CAAX box could be translocated into host cells during infection. We relied on the TEM1 ß-lactamase reporter, widely used to assess Brucella effector translocation, to test two CAAX-containing proteins encoded by the genes BAB1_0296 (BAB_RS17335) and BAB1_0466.

(B) Samples 18_0106-0107, gel from 16/02/2018; LC-MS/MS analysis on a Q Exactive HF (Proteome Discoverer 2.1, Sequest HT; database: uniprot-brucella-abortus_1803 plus the HisBnpB and BnpA sequences; enzyme: chymotrypsin; raw/result files 180302-0106-01 and 180302-0107-01). Master proteins identified: NyxA (ALOUCHE_1802; Sum PEP score 86.6, 79.3% coverage, 14 peptides, 99 PSMs, 7 unique peptides, 1 protein group, 15.2 kDa, Sequest HT score 150.4) and HisNyxB (ALOUCHE_1802; Sum PEP score 26.2, 38.9% coverage, 9 peptides, 60 PSMs, 2 unique peptides, 1 protein group, 18.8 kDa, Sequest HT score 87.0).

Table 4. Eukaryotic proteins interacting with NyxA identified with the yeast two-hybrid (Y2H) screen.

Table 1. Antibodies used in this study.
Antibody | Species | Source | Reference | Dilution
SENP3 | Rabbit | Cell Signalling | 5591 | 1/400
NVL | Rabbit | Proteintech | 16970-1-AP | 1/25
RPL5 | Rabbit | X | | 1/100
PES1 | Rabbit | Atlas Antibodies | HPA040210 | 1/100
NPM1 | Rabbit | Abcam | Ab37659 | 1/400
Nucleolin | Mouse | Invitrogen | 39-6400 | 1/100
HA | Rat | | | 1/100
Flag | Mouse | Sigma | F1804 (clone M2) | 1/2000
Myc | Mouse | DSHB | clone E10 | 1/1000
NUFIP1 | | Proteintech | 12515-1-AP | 1/50
LAMP1 | Mouse | DSHB | clone H4A3 | 1/200
FK2 | Mouse | Enzo | BML-PW8810 | 1/1000
PML | Rabbit | Abcam | Ab53773 | 1/500
Western blot:
Actin | Mouse | Sigma | A4700 | 1/1000
His | Mouse | Sigma | H1029 | 1/3000
V5 | Mouse | Invitrogen | R960-25 | 1/1000
SENP3 | Rabbit | Cell Signalling | 5591 | 1/1000
NPM1 | Rabbit | Abcam | Ab37659 | 1/400
Histone 3 | Rabbit | Abcam | Ab8895 | 1/500
PML | Rabbit | Abcam | Ab53773 | 1/500
Secondary antibodies:
Donkey anti-mouse AlexaFluor 488/555/647 | Invitrogen | A-21202 / A-31570 / A-31571 | 1/1000
Donkey anti-rabbit AlexaFluor 488/555/647 | Invitrogen | A-21206 / A-31572 / A-31573 | 1/1000
Donkey anti-rat AlexaFluor 488/555/647 | Invitrogen | A-21208 / A-21434 / A-21472 | 1/1000
Table 2. Primers used in this study.
Purpose | Primer | Sequence (5'-3')
pNPTS138-NyxA | Fw1 | GATAATGACGCTAGCTTTCA
pNPTS138-NyxA | Rev1 | GGCCACCCGACAAGCAGTTTGGAGGTTTACCTTTTGTTGA
pNPTS138-NyxA | Fw2 | TCAACAAAAGGTAAACCTCCAAACTGCTTGTCGGGTGGCC
pNPTS138-NyxA | Rev2 | GGGTGTCCTTGAAACATCCA
pNPTS138-NyxB | Fw1 | TTGGATCAATCCGGCGTGTG
pNPTS138-NyxB | Rev1 | GGGCTTCAACTTCTTTAACCGGCTATTCTTCCTGTCAATT
pNPTS138-NyxB | Fw2 | AATTGACAGGAAGAATAGCCGGTTAAAGAAGTTGAAGCCC
pNPTS138-NyxB | Rev2 | CGTAAACGCTTCGGCAGGGA
pFlagTEM1-NyxA | fw | TAAGCATTGGTCTAGAATGAACGCTCACACAAACATAA
pFlagTEM1-NyxA | rev | ACTGCAGTTATCTAGATCAAAGCTCCAAGCATCTAATT
pFlagTEM1-NyxB | fw | GAGATAGGTGCCTCACTGATTAAGCATTGGTCTAGAATGAACACGCAAGCAACAATA
pFlagTEM1-NyxB | rev | GTGTGCTGGAATTCGCCCTTACTGCAGTTATCTAGATCAAGGCATCTCGATAAG
pFlagTEM1-BAB1_0466 | fw | TAAGCATTGGTCTAGAATGAAAATGTGGACCCTTGC
pFlagTEM1-BAB1_0466 | rev | ACTGCAGTTATCTAGATCACTGTTCTACGCAGCTTA
pTn7-nyxA | fw | GTGAAATCAATCAACAAAAGGTAAACCTCCatgaacgctcacacaaacataag
pTn7-nyxA | rev | GGAATTCCTGCAGCCCGGGGGATCCACTAGtcaaagctccaagcatctaatttc
pTn7-nyxB | fw | ATTTAACCGAAATTGACAGGAAGAATAGCCatgaacacgcaagcaacaatag
pTn7-nyxB | rev | GGAATTCCTGCAGCCCGGGGGATCCACTAGtcaaggcatctcgataaggc
pBBRMCS2 4HA-nyxA | fw | aaaaaaGAGCTCaaggagatatacatatgTACC
pBBRMCS2 4HA-nyxA | rev | aaaaaaACTAGTtcaaagctccaagcatctaatttc
pBBRMCS2 3Flag-nyxA | fw | TATTCCCGGGGGATCCATGGGTAAGCCTATCCCTAACCCTCTCCTCGGTCTCGATTCTACGAACGCTCACACAAACATAAG
pBBRMCS2 3Flag-nyxA | rev | TATAGGGCGAATTGGAGCTCAAGCTCCAAGCATCTAATTT
pBBRMCS2 4HA-nyxB | fw | aaaaaaGAGCTCaaggagatatacatatgTACC
pBBRMCS2 4HA-nyxB | rev | aaaaaaACTAGTtcaaggcatctcgataaggc
pDONOR-NyxA | fw | GGGGACAAGTTTGTACAAAAAAGCAGGCTTCAACGCTCACACAAACATAAG
pDONOR-NyxA | rev | GGGGACCACTTTGTACAAGAAAGCTGGGTCCTAAAGCTCCAAGCATCTAATTTC
pDONOR-NyxB | fw | GGGGACAAGTTTGTACAAAAAAGCAGGCTTCAACACGCAAGCAACAATAGATA
pDONOR-NyxB | rev | GGGGACCACTTTGTACAAGAAAGCTGGGTCCTAAGGCATCTCGATAAGGCGGATT
Pet151His-NyxA | fw | CACCATGAACGCTCACACAAAC
Pet151His-NyxA | rev | TCAAAGCTCCAAGCATCT
Pet151His-NyxB | fw | CACCATGAACACGCAAGCAAC
Pet151His-NyxB | rev | CATTATGCTCCCCTGTTGT
NyxA MAG | fw | aaaaaaGAGCTCTAAGTGTCTGCCATAGCCGACG
NyxA MAG | rev | aaaaaGGATCCtcaaagctccaagcatctaatttc
NyxA Y62R | fw | CGATTGGcgaCCTGCCGCCTATGATG
NyxA Y62R | rev | CGGCAGGtcgCCAATCGTAGCAGTCGAAG
NyxA D76R | fw | CCATGAAAcgaCGGGAACTGATCCAATACG
NyxA D76R | rev | TTCCCGtcgTTTCATGGCGTTGCCTTC
NyxA E78R | fw | CGACGGagaCTGATCCAATACGAAGAGTGGTG
NyxA E78R | rev | GGATCAGtctCCGtcgTTTCATGGCG
NyxB MAG | fw | aaaaaaGAGCTCTTGGATCAATCCGGCGTGTGC
NyxB MAG | rev | aaaaaGGATCCtcaaggcatctcgataaggc
SENP3 7-159 | fw | CACCATGGCCGGCACCGGTAGCTGGGGTCCGGAACC
SENP3 7-159 | rev | TTTGGATCCTTATTTGCTATACAGCAGCATACGAAATGC

3. Plasmid containing the gene of interest fused to a specific tag (obtained from an EndoFree maxi-preparation).
4. Transfection reagent.
5. Phosphate buffered saline (PBS): prepare a 10× solution with bidistilled water (18.2 MΩ cm) containing 10.6 mM KH2PO4, 30 mM Na2HPO4·2H2O, and 1.54 M NaCl, and sterilize with a 0.2 µm filter. The 1× solution obtained following dilution with bidistilled water will have a pH of around 7.4.
6. Radioimmunoprecipitation assay (RIPA) buffer: ready-to-use solution containing 150 mM NaCl, 1.0% IGEPAL CA-630, 0.5% sodium deoxycholate, 0.1% SDS, 50 mM Tris, pH 8.0.
7. Antiprotease cocktail: mix 1% (v/v) of protease inhibitor cocktail (Sigma-Aldrich), phosphatase inhibitor cocktail 2 (Sigma-Aldrich), phosphatase inhibitor cocktail 3 (Sigma-Aldrich), and phenylmethylsulfonyl fluoride (PMSF).

2.2 Pull-down assays
1. 1 M Tris-HCl, pH 7.5 stock solution: weigh 121.1 g Tris base and transfer to a 1 L graduated cylinder. Add water to 800 mL, mix, adjust the pH with HCl, and make up to 1 L with water. Store at room temperature (see Note 1).
2. 5 M NaCl stock solution: weigh 292.2 g NaCl and transfer to a 1 L graduated cylinder. Add water to 800 mL, stir, and adjust the volume to 1 L with water (see Note 1).
3. Equilibrium buffer (see Note 2): 20 mM Tris-HCl, pH 7.5, 250 mM NaCl.
Mix 1 mL of the 1 M Tris-HCl, pH 7.5 stock solution with 2.5 mL of the 5 M NaCl stock solution in a 50 mL centrifuge tube, and add water to a volume of 50 mL. Keep at 4 °C (see Note 3).
4. Elution buffer (see Note 2): 20 mM Tris-HCl, pH 7.5, 250 mM NaCl, 500 mM imidazole. Weigh 1.7 g imidazole into 50 mL of equilibrium buffer. Keep at 4 °C (see Note 3).
5. Purified His-tagged protein (bait).
6. Ni-NTA agarose beads: 6% beaded agarose (cross-linked), precharged with Ni2+ (Protino Ni-NTA Agarose, Macherey-Nagel, or equivalent). Store at 4 °C (see Note 4).
7. 0.8 mL empty columns for gravity flow (Pierce Centrifuge Columns, Thermo Fisher Scientific, or equivalent).
8. Refrigerated microcentrifuge.

2.3 Sodium dodecyl sulfate (SDS) polyacrylamide gel components
1. Resolving gel buffer: 1.5 M Tris-HCl, pH 8.8. Weigh 90.8 g, transfer to a 500 mL graduated cylinder, and add 300 mL water. Adjust the pH with HCl and fill with water to 500 mL. Store at room temperature.
2. Stacking gel buffer: 0.5 M Tris-HCl, pH 6.8. Weigh 30.275 g, transfer to a 500 mL graduated cylinder, and add 300 mL water. Adjust the pH with HCl and fill with water to 500 mL. Store at room temperature.
3. 30% acrylamide/Bis solution (37.5:1 acrylamide:Bis). Store at 4 °C.
4. Ammonium persulfate (APS): 20% solution in water. Store at -20 °C (see Note 5).
5. N,N,N',N'-tetramethylethylenediamine (TEMED). Store at room temperature.
6. SDS-PAGE running buffer: 25 mM Tris-HCl, 192 mM glycine, 0.1% SDS. Prepare a 10× running buffer solution: weigh 30 g Tris base, 144 g glycine, and 10 g SDS and add distilled water to 1 L. Store at room temperature. Prepare a fresh 1× solution before gel electrophoresis.
7. Laemmli lysis buffer [6], 4× concentrate: 62.5 mM Tris-HCl pH 6.8, 2% SDS, 10% glycerol, 0.01% bromophenol blue, 5% β-mercaptoethanol. Store at -20 °C (see Note 6).
8. Protein ladder.

Cell extract preparation:
4. Centrifuge 5 min at 80 × g at 4 °C.
5. Resuspend the cells in 200 µL RIPA buffer supplemented with antiprotease cocktail.
6. Incubate on ice for 20 min and mix gently every 5 min with a P200 micropipette.
7. Store the prepared cells at -80 °C (see Note 8).
8. Right before the pull-down experiment, thaw the prepared cell extract. Centrifuge at 17,000 × g at 4 °C for 20 min. Use the supernatant as prey by following step 9 in Subheading 3.2 (see Note 9).

Column preparation and pull-down:
1. Transfer 120 µL Ni-NTA agarose beads to a gravity flow column (see Note 12).
2. Centrifuge the column for 1 min at 1000 × g at 4 °C. Discard the flow-through.
3. Equilibrate the column by adding 400 µL equilibrium buffer supplemented with 20 mM imidazole.
4. Centrifuge the column for 1 min at 1000 × g at 4 °C. Discard the flow-through.
5. Load 400 µL of the incubated bait and prey proteins onto the column. Incubate 10 min on ice without agitation (see Note 18).
6. Centrifuge the column for 1 min at 1000 × g at 4 °C. Keep the flow-through at 4 °C for analysis.
7. Wash by adding 400 µL equilibrium buffer supplemented with 20 mM imidazole to the column.
8. Centrifuge the column for 1 min at 1000 × g at 4 °C. Save the first wash at 4 °C for analysis.
9. Repeat washing steps 7 and 8 four times and keep the last wash fraction at 4 °C for analysis.
10. Add 200 µL elution buffer to the column and incubate on ice for 10 min.
11. Centrifuge the column for 1 min at 1000 × g at 4 °C. Keep the eluted fraction at 4 °C for analysis.
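All the stock recipes above reduce to two pieces of arithmetic: mass = molarity × volume × molecular weight for a stock, and C1·V1 = C2·V2 for a dilution. A small sketch (the helper names are mine; the molecular weights are standard values) that reproduces numbers from the recipes:

```python
TRIS_MW = 121.14  # g/mol, Tris base
NACL_MW = 58.44   # g/mol

def grams_needed(molarity_M, volume_L, mw_g_per_mol):
    return molarity_M * volume_L * mw_g_per_mol

def stock_volume_for_dilution(c_stock, c_final, v_final):
    return c_final * v_final / c_stock  # same concentration units for both

print(f"{grams_needed(1.0, 1.0, TRIS_MW):.1f} g")  # 121.1 g Tris for 1 L of 1 M stock
print(f"{grams_needed(5.0, 1.0, NACL_MW):.1f} g")  # 292.2 g NaCl for 1 L of 5 M stock
# 1.0 mL of 1 M Tris stock for 50 mL of 20 mM equilibrium buffer, as in the recipe.
print(f"{stock_volume_for_dilution(1.0, 0.020, 50.0):.1f} mL")
```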
Acknowledgments
We wish to acknowledge the Genomics and Mass Spectrometry Units at Universidad Complutense de Madrid (UCM) for sequencing and NAD+ analyses, respectively. We would like to thank Thomas Henry (CIRI, Lyon) for providing us with iBMDM and Diego Comerci (CONICET, Argentina) for helping us with the complementation constructs. We also thank Lucia Sastre for technical assistance in molecular cloning and yeast growth assays and the members of the U3 Research Lab for help and discussion.

Acknowledgments
We gratefully acknowledge Valérie Vogel and David Hernandez for help concerning the bioinformatics analysis, and Myriam Girard, Christophe Ginevra and Hélène Meunier for technical assistance. We thank Christopher Montgomery and Lloyd Miller for mouse model consultation. We also thank Jacqueline Marvel for helpful discussions. We are grateful to Laurent Genestier for helpful discussions and for providing MyD88-deficient mice, and to the staff members at PBES, with particular thanks to Nadine Aguilera, for their support in the animal facility.

Acknowledgements
This work was funded by the FINOVI foundation under a Young Researcher Starting Grant and the French Cystic Fibrosis Foundation Vaincre la Mucoviscidose (VLM), grant RF20130500897. SS and SB are supported by INSERM and CNRS staff scientist contracts, respectively. SG and JBL are funded by Région Rhône-Alpes ARC1 Santé fellowships. PI and AL are funded by the VLM and FINOVI grants. TW is supported by a Wellcome Trust PhD fellowship. We thank the following people: L. Plantevin for programming of the ImageJ plugin; V. Gueguen-Chaignon and the Protein Science Facility (SFR Biosciences, France) for protein purification and plasmon resonance experiments; R. Voulhoux (CNRS UMR7255, Aix-Marseille University, France) for anti-EF-Tu and LasB antibodies, the PA7 strain and the pKNG208 vectors; G. Ball (CNRS UMR7255, Aix-Marseille University, France) for advice on the genetics of PA7; S. Lory (Harvard Medical School, USA) for the anti-PilQ antibody. pCMV-HA-MyD88 was a gift from B. Beutler (UT Southwestern Medical Center, USA; Addgene plasmid #12287), FLAG-TLR5 from R. Medzhitov (Yale University, USA; Addgene plasmid #13088), TIRAP-GFP and GFP-MyD88 from J. Kagan (Harvard Medical School, USA), HA-TIRAP from L. O'Neill (Trinity College Dublin, Ireland), Myc-TIRAP from A. Weber, and FLAG-TLR2 and FLAG-TLR4 from L. Alexopoulou (CIML, France). We also thank the PLATIM of the SFR Biosciences for help with microscopy, T. Henry (CIRI, Lyon) for discussion and J. Kagan (Harvard Medical School, USA) for discussion and critical reading of the manuscript. This work was funded by the FINOVI foundation under a Young Researcher Starting Grant, the Cystic Fibrosis French Foundation Vaincre la Mucoviscidose grant RF20130500897 and the ANR (grant no. ANR-15-CE15-0011) to SS, and by grants BIO2016-75030-P from the Ministerio de Economía y Competitividad (Spain) and S2017/BMD-3691 (InGEMICS-CM) from the Comunidad de Madrid and European Structural and Investment Funds to VJC and MM. SS is supported by an INSERM staff scientist contract. J.M.C-S is supported by a predoctoral contract from UCM. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Funding: This work was supported by the LABEX ECOFECT (ANR-11-LABX-0048) of Université de Lyon, within the program "Investissements d'Avenir" (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR), by a FINOVI Young Researcher Grant and by Swiss National Science Foundation grant 31003A_153474. SP was supported by the Innovative Medicines Initiative Joint Undertaking under the Combatting Bacterial Resistance in Europe (COMBACTE) grant agreement no. 115523. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. ENA database project study accession number PRJEB12840; sample accessions ERS1070204 to ERS1070207 and ERS1434451, ERS1434452.

Chapter II: Results

Brucella TIR proteins deplete host cell NAD+

...phalloidin for labelling actin (cyan). (B) GFP-BtpB (green) can be detected at intercellular contacts (arrow and zoomed image). Cells were labelled with an anti-tubulin antibody (red). (C) HeLa cells were transfected with GFP-BtpB-TIR (green) and then labelled for tubulin (red) or (D) vimentin (red). (E) Representative image of cells labelled with an anti-tubulin antibody (red) expressing aggregates of GFP-BtpB E234A (green). Scale bars correspond to 5 µm.

For both effectors, punctate and filament-like structures were observed (Figure 23A). Equivalent NyxA structures were observed with an N-terminal 3Flag tag (Figure 23B), suggesting these are not an artefact due to the 4HA tag.

NyxA and NyxB interact with each other and target the same cellular compartments

Imaging of translocated NyxA and NyxB suggested that these effectors may have an identical subcellular localisation. To investigate this possibility, we ectopically expressed NyxA and NyxB with different tags. We found that HA-tagged NyxA and NyxB predominantly accumulated in

Bacterial expression vectors
NyxA and NyxB were amplified by PCR and inserted into the pET151D-TOPO vector following the manufacturer's procedure (Invitrogen) to obtain His-NyxA and His-NyxB. An additional V5 tag is also present. His-NyxA MAG and His-NyxB MAG were obtained by site-directed mutagenesis. For expression and purification of SENP37-159, a vector with E. coli codon-optimized SENP3 was obtained from Thermo Fisher and used as a template. All the primers used are listed in Supplementary Table 2.

Immunofluorescence microscopy
At the indicated time points, coverslips were washed twice with PBS, fixed with 3.2% PFA (Electron Microscopy Sciences) for 20 minutes and then washed again 4 times with PBS. The best NVL staining was achieved with AntigenFix (Microm Microtech France). For SENP3, NVL, RPL5, PES1 and NUFIP1 immunostaining, permeabilization was carried out with a solution of PBS containing 0.3% Triton for 30 minutes, followed by blocking, also for 30 minutes, in a solution of PBS containing 2% bovine serum albumin (BSA), 10% horse serum and 0.3% Triton. For HA and Flag staining to detect translocated effectors, the blocking and antibody solutions did not contain Triton (only the permeabilization step). For NPM1 staining, permeabilization was done with 0.05% Tween, followed by antibody diluted in 2% BSA, 10% horse serum and 0.3 M glycine. Coverslips were then incubated at 4 °C overnight with primary antibody diluted in the blocking solution. Subsequently, the coverslips were washed twice in PBS and incubated for 2h with secondary antibodies.
When indicated, DAPI was also included in the secondary antibody mix (1/1000).

...suggesting that NyxA exhibits a non-functional CAAX motif. Indeed, preliminary data from the lab, in which HeLa cells were transfected with a plasmid encoding myc-NyxA or myc-NyxAΔCLEL, showed no difference in localization between these proteins, suggesting that this CAAX motif may not be functional (Figure 50). Subsequently, nyxB was identified in the Brucella genome, encoding a protein showing 82% identity with the sequence of NyxA. It is somewhat surprising that Brucella produces two highly similar effectors, yet this must be advantageous for the bacterium; otherwise, during evolution Brucella would have lost one of the two genes.

The first step of our study was to confirm that these two potential effectors were translocated into the host cell during infection. For this, a reporter system, the TEM1 ß-lactamase reporter, was used. We found that the translocation of NyxA is dependent on the T4SS present in Brucella, but this does not seem to be the case for NyxB. This observation is rather surprising and raises many questions. The position of the reporter tag on the N-terminus of the effectors may interfere with the translocation signal and perturb its association with the T4SS. The VirB operon could be deleted. Consistent with our results, several Brucella effectors appear to be translocated independently of the T4SS (for example, BspD, G, H, I, J and K) [Myeni et al.]. It therefore seems necessary to better characterize the molecular mechanisms involved in the translocation of effector proteins into the host cell by the Brucella T4SS VirB, to determine if an alternative secretion system is contributing. Perhaps NyxA and NyxB, highly similar but with clearly distinct translocation mechanisms, could provide an interesting tool to study these questions.

ANNEXES

Author Summary
Pathogenic microbes have evolved elaborate strategies to manipulate host defenses to establish and spread in the host population. One such mechanism involves disruption of the immune signaling cascade orchestrated by the Toll-like receptors (TLRs), which sense

Conflict of interest
The authors declare that they have no conflict of interest.
04114479
en
[ "info.info-ni" ]
2024/03/04 16:41:24
2022
https://theses.hal.science/tel-04114479/file/2022UCFAC065_HADHBI.pdf
I would like to thank Monsieur Ali Ridha Mahjoub, Madame Professeur Hande Yaman, Monsieur Eduardo Uchoa and Madame Nancy Perrot.

This thesis allowed me to consolidate my knowledge and to confirm my passion for the field of Operations Research in general and Combinatorial Optimization in particular. This experience was very enriching and will be an important asset for my future professional career. I was fortunate to be accompanied, advised and helped by several people, from near or far, who were truly committed to supporting me during four years of work. I am very happy to have the opportunity here to express all my gratitude to them. I would first like to give particular thanks

I also thank all my teachers, from kindergarten to high school, who, together with my parents, took part in my education and made me who I am today. I will always keep indelible memories of all these professors and teachers. I would like to express my gratitude to my friends and colleagues who gave me their moral and intellectual support throughout these years. Finally, I could not close these acknowledgements without dedicating this work to the whole HADHBI family, who gave me a dignified upbringing and whose unconditional love made me who I am today. It is thanks to them that I tell myself that nothing is impossible, because I will never be alone. Special thanks to my very dear parents, for their efforts, their sacrifices, their hardships, their blessings, and for having taught me to overcome my fears and to be there when necessary. They are an inexhaustible source of tenderness. The love they have given me, their dignity, their upbringing and their sense of honour serve as my model. They always told me to prioritize my studies even when they had next to nothing themselves. This work is the fruit of the sacrifices they made for my education and my training throughout these years. From the bottom of my heart, a big thank you to all my brothers and to my sister, for their moral and material support, which allowed me to reach this result. They covered all the costs of my studies (registration, housing, daily life, ...) during my entire university career. It is therefore impossible to express how grateful I am for all of this. May this work be a testimony of my sincere and faithful love for them. This thesis is also dedicated to their spouses and their children, for the love they have always given me. A big thank you also to my parents-in-law and brothers-in-law, for the unconditional love they have for me. No words could express the gratitude, the unfailing love, the respect and the appreciation I owe to my dear wife, who supported me, backed me, put up with me and encouraged me throughout these years, and more particularly during the difficult and anxious moments, which were not always the most pleasant. Words are a small thing to thank her for all of this. Thank you, then, for making our life as a couple everything I dreamed of. I could not end these acknowledgements without dedicating this work to my future children, who will give me the most beautiful role of my life: that of being a dad. I promise them my unfailing love, my limitless attention and my support in all circumstances. Above all, I promise to do my best, even if I sometimes fail.

...facet-defining for the associated polytope under some necessary and sufficient conditions.
In addition, we develop separation algorithms for these inequalities. Using these results, we devise a Branch-and-Cut (B&C) algorithm for the problem and discuss experimental results. A second part of the thesis is devoted to an extended formulation for the C-RSA. A column generation algorithm is developed to solve its linear relaxation. We prove that the related pricing problem is equivalent to the so-called resource-constrained shortest path problem, which is well known to be NP-hard. To solve it, we propose a pseudo-polynomial-time dynamic programming algorithm. Using this, we devise Branch-and-Price (B&P) and Branch-and-Cut-and-Price (B&C&P) algorithms to solve the problem. An extensive experimental study with comparisons between the different B&C, B&P, and B&C&P algorithms is also presented. Finally, we turn our attention to the Spectrum Assignment (SA) sub-problem. This problem has been shown to be equivalent to the wavelength assignment, interval coloring, and dynamic storage allocation problems, which are well known to be NP-hard. To the best of our knowledge, a polyhedral approach to the SA problem has not been considered before, not even for its equivalent problems. We first propose a compact integer linear programming formulation and investigate the facial structure of the associated polytope. Moreover, we identify several classes of valid inequalities for the polytope and prove that these inequalities are facet-defining. We further discuss the related separation problems. Using this, we devise a Branch-and-Cut (B&C) algorithm for the SA problem and present some computational results.

Keywords: optical network, network design, integer programming, polyhedron, facet, separation, branch-and-cut, branch-and-price, branch-and-cut-and-price, dynamic programming.

Long Résumé

Pour faire face à une croissance continue de la demande de trafic liée à l'augmentation de la bande passante, les opérateurs de réseaux ont dû faire évoluer l'architecture de leurs réseaux. En conséquence, une nouvelle génération de réseaux de transport optique flexibles, appelée "Spectrally Flexible Optical Networks" (SFONs), a été introduite en 2008 comme une technologie prometteuse en raison de sa flexibilité et de son efficacité par rapport à l'ancienne technologie connue sous le nom d'"Optical Wavelength Division Multiplexing" (WDM). Les SFONs ont suscité un intérêt intense de la part des laboratoires de recherche, ainsi que dans l'industrie. Nous étudions dans cette thèse l'un des problèmes clés lors du dimensionnement et de la planification des SFONs : le problème du routage contraint et de l'assignation spectrale, connu sous le nom "Constrained-Routing and Spectrum Assignment" (C-RSA) selon la terminologie anglaise.
Il se compose de deux parties : le routage contraint, qui consiste à sélectionner pour chaque demande de trafic un chemin optique physique reliant sa source à sa destination à travers le réseau, sans dépasser une longueur maximale de chemin (en km) fixée pour chaque demande ; et l'assignation spectrale, qui consiste à assigner à chaque demande un seul intervalle de slots consécutifs (contrainte de contiguïté) le long de son chemin de routage, de sorte que le même intervalle de slots consécutifs soit utilisé sur tous les liens appartenant à son chemin optique physique (contrainte de continuité), et que les intervalles de slots consécutifs alloués à des demandes dont les chemins ne sont pas disjoints dans le réseau ne partagent aucun slot sur les liens communs (contrainte de non-chevauchement), tout en optimisant une ou plusieurs fonctions objectives linéaires. Le problème C-RSA est NP-difficile et très difficile à résoudre en pratique, si bien que de nombreuses études de recherche ont été menées dans ce contexte depuis sa première apparition en 2010. Certains des algorithmes de résolution proposés dans la littérature sont basés sur des formulations mathématiques utilisant la programmation linéaire (mixte) en nombres entiers, qui n'ont pas pu résoudre des instances de grande taille, ainsi que sur des heuristiques et métaheuristiques qui ne peuvent pas garantir l'optimalité des solutions. Il a donc été jugé approprié de proposer de nouveaux modèles mathématiques plus souples et efficaces basés sur la programmation linéaire en nombres entiers, et de concevoir et développer des algorithmes exacts qui pourraient offrir des améliorations prometteuses par rapport aux méthodes existantes. À notre connaissance, l'étude polyédrale n'a pas encore fait l'objet de recherches récentes pour ce problème.

Mots clés : réseaux optiques flexibles, polytope, inégalité valide, facette, séparation, algorithme de coupes et branchements, algorithme de génération de colonnes et branchements.

Introduction

The global Internet Protocol (IP) traffic is expected to reach 396 exabytes per month by 2022, up from 194 exabytes per month in 2020 [START_REF]The Network Cisco's Technology News Site: Cisco Predicts More IP Traffic in the Next Five Years Than in the History of the Internet[END_REF]. Optical transport networks are thus facing a serious challenge related to the continuous growth in bandwidth capacity induced by the growth of global communication services and networking: mobile internet networks (e.g., the 5th generation mobile network), cloud computing (e.g., data centers), Full High-Definition (HD) interactive video (e.g., TV channels, social networks) [START_REF] Cheng | Routing and Spectrum Assignment Algorithm based on Spectrum Fragment Assessment of Arriving Services[END_REF], etc., as shown in Figure 1.
To help network operators sustain this trend of increase in bandwidth, a new generation of optical transport network architecture called Spectrally Flexible Optical Networks (SFONs) (also called FlexGrid optical networks) has been introduced as a promising technology because of its flexibility, scalability, efficiency, reliability, and survivability [START_REF] Chatterjee | Fragmentation Problems and Management Approaches in Elastic Optical Networks: A Survey[END_REF][19] compared with the traditional FixedGrid Optical Wavelength Division Multiplexing (WDM) [START_REF] Ramaswami | Optical Networks: A Practical Perspective[END_REF] [START_REF] Ramaswami | Multiwavelength lightwave networks for computer communication[END_REF]. In SFONs the optical spectrum is divided into small spectral units, called frequency slots [START_REF] Santos | Heuristics for Routing and Spectrum Allocation in Elastic Optical Path Networks[END_REF]. All slots have the same width of 12.5 GHz, whereas WDM uses a 50 GHz grid [START_REF] Stern | Multiwavelength Optical Networks: Architectures, Design and Control[END_REF], as recommended by the ITU-T [START_REF] Amar | Performance assessment and modeling of flexible optical networks[END_REF]. This can be seen as an improvement in resource utilization. The concept of slots was proposed initially by Masahiko Jinno et al. in 2008 [57], and later explored by the same authors in 2010 [START_REF] Walkowiak | Elastic optical networks -a new approach for effective provisioning of cloud computing and content-oriented services[END_REF]. We refer the reader to [START_REF] Lopez | Elastic Optical Networks: Architectures, Technologies, and Control[END_REF] for more information about the architectures, technologies, and control of SFONs. The Routing and Spectrum Assignment (RSA) problem plays a primary role in the dimensioning and design of SFONs, which is the main task for the development of this next generation of optical networks. It consists of assigning to each traffic demand a physical optical path and an interval of contiguous slots (also called channels), while optimizing some linear objective(s) and satisfying the following constraints [START_REF] Hadhbi | A novel integer linear programming model for routing and spectrum assignment in optical networks[END_REF], illustrated by the sketch after this paragraph:

a) spectrum contiguity: an interval of contiguous slots should be allocated to each demand k, with a width equal to the number of slots requested by demand k;

b) spectrum continuity: the interval of contiguous slots allocated to a demand stays the same over all the links of its routing path;

c) non-overlapping spectrum: the intervals of contiguous slots of demands whose paths are not edge-disjoint in the network cannot share any slot on the shared edges.

Numerous research studies have been conducted on the RSA problem since its first appearance. The RSA is known to be NP-hard [START_REF] Shirazipourazad | On routing and spectrum allocation in spectrum-sliced optical networks[END_REF][109], and more complex than the historical Routing and Wavelength Assignment (RWA) problem [START_REF] Hai | Combining heuristic and exact approaches for solving the routing and spectrum assignment problem[END_REF]. Various (mixed) integer linear programming (ILP) formulations and algorithms have been proposed to solve it.
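To make the three constraints above concrete, here is a minimal feasibility-checking sketch. It is an illustration only, not an algorithm from this thesis; the data layout (paths as edge lists, per-demand slot sets, and the requested widths w) is an assumption made for the example.

```python
# Minimal sketch: checking the three RSA constraints for a candidate solution.
# Assumptions (illustrative only): each demand k maps to a path (list of edges)
# and a set of slot indices; w[k] is the number of slots requested by k.

def is_feasible_rsa(paths, slots, w):
    # Spectrum contiguity: each demand gets exactly w[k] consecutive slots.
    for k, sl in slots.items():
        s = sorted(sl)
        if len(s) != w[k] or s[-1] - s[0] != w[k] - 1:
            return False
    # Spectrum continuity is implicit in this encoding: the same interval
    # slots[k] is used on every edge of paths[k].
    # Non-overlapping spectrum: demands sharing an edge get disjoint intervals.
    demands = list(paths)
    for i, k in enumerate(demands):
        for kp in demands[i + 1:]:
            if set(paths[k]) & set(paths[kp]) and set(slots[k]) & set(slots[kp]):
                return False
    return True

# Example: two demands sharing edge ('a', 'b') must use disjoint intervals.
paths = {1: [('a', 'b'), ('b', 'c')], 2: [('a', 'b')]}
slots = {1: {1, 2}, 2: {3}}
w = {1: 2, 2: 1}
assert is_feasible_rsa(paths, slots, w)
```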
A detailed survey of spectrum management techniques for SFONs is presented in [START_REF] Talebi | Spectrum management techniques for elastic optical networks: A survey[END_REF], in which the authors classify the variants of the RSA problem into offline RSA, initiated in [START_REF] Klinkowski | An overview of routing methods in optical burst switching networks[END_REF], and online (dynamic) RSA, initiated in [START_REF] Wan | Dynamic Routing and Spectrum Assignment in Spectrum-Flexible Transparent Optical Networks[END_REF] and recently developed in [START_REF] Patel | On Efficient Candidate Path Selection for Dynamic Routing in Elastic Optical Networks[END_REF] [START_REF] Zhou | Link State Aware Dynamic Routing and Spectrum Allocation Strategy in Elastic Optical Networks[END_REF]. Numerous aspects are investigated in the tutorial [START_REF] Chatterjee | Routing and Spectrum Allocation in Elastic Optical Networks: A Tutorial[END_REF]. This work focuses on the offline RSA problem. There exist two classes of ILP formulations used to solve the RSA problem, called edge-path and edge-node formulations. The edge-path formulation is the most widely used in the literature; its variables are associated with all possible physical optical paths, which induces numbers of variables and constraints that grow exponentially with the instance size: the number of demands, the total number of slots, and the topology size (numbers of links and nodes) [START_REF] Hadhbi | A novel integer linear programming model for routing and spectrum assignment in optical networks[END_REF]. We observe that several papers that use the edge-path formulation to solve the RSA problem rely on a set of precomputed paths, without any guarantee of optimality; see, e.g., [START_REF] Christodoulopoulos | Elastic Bandwidth Allocation in Flexible OFDM-Based Optical Networks[END_REF], [START_REF] Klinkowski | An overview of routing methods in optical burst switching networks[END_REF], [START_REF] Klinkowski | Routing and Spectrum Assignment in Spectrum Sliced Elastic Optical Path Network[END_REF], [START_REF] Klinkowski | A routing and spectrum assignment problem in optical OFDM networks[END_REF], [START_REF] Velasco | Modeling the routing and spectrum allocation problem for flexgrid optical networks[END_REF], [START_REF] Zotkiewicz | Optimization models for flexgrid elastic optical networks[END_REF], [START_REF] Salameh | Routing With Intelligent Spectrum Assignment in Full-Duplex Cognitive Networks Under Varying Channel Conditions[END_REF]. On the other hand, column generation techniques have been used by Klinkowski et al. [START_REF] Ruiz | Column generation algorithm for RSA problems in flexgrid optical networks[END_REF], Jaumard et al. [START_REF] Jaumard | Scalable elastic optical path networking models[END_REF], and recently by Enoch [START_REF] Enoch | Nested Column Generation decomposition for solving the Routing and Spectrum Allocation problem in Elastic Optical Networks[END_REF] to solve the linear relaxation of the RSA while implicitly taking into account all possible paths for each traffic demand. To improve the LP bounds of the RSA relaxation, Klinkowski et al. proposed a class of valid inequalities induced by cliques, separable using a branch-and-bound algorithm [START_REF] Klinkowski | Valid inequalities for the routing and spectrum allocation problem in elastic optical networks[END_REF]. On the other hand, Klinkowski et al.
[START_REF] Klinkowski | A Simulated Annealing Heuristic for a Branch and Price-Based Routing and Spectrum Allocation Algorithm in Elastic Optical Networks[END_REF] proposed a branch-and-cut-and-price method based on an edge-path formulation for the RSA problem. Recently, Fayez et al. [START_REF] Fayez | Recursive algorithm for selecting optimum routing tables to solve offline routing and spectrum assignment problem[END_REF] and Xuan et al. [START_REF] Xuan | New bi-level programming model for routing and spectrum assignment in elastic optical network[END_REF] proposed decomposition approaches that solve the routing and the spectrum assignment separately (i.e., R+SA), based on a recursive algorithm and an ILP edge-path formulation. To overcome the drawbacks of the edge-path formulation, a compact edge-node formulation has been introduced as an alternative. It involves numbers of variables and constraints that grow only polynomially with the size of the instance. Only a few works in the literature use the edge-node formulation to solve the RSA problem, e.g., [START_REF] Cai | Novel Node-Arc Model and Multiiteration Heuristics for Static Routing and Spectrum Assignment in Elastic Optical Networks[END_REF], [START_REF] Velasco | Modeling the routing and spectrum allocation problem for flexgrid optical networks[END_REF], [START_REF] Zotkiewicz | Optimization models for flexgrid elastic optical networks[END_REF]. Bertero et al. [START_REF] Bertero | Integer programming models for the routing and spectrum allocation problem[END_REF] present a comparative study between several edge-node formulations and introduce new ILP ones. On the other hand, due to the NP-hardness of the RSA problem, several heuristics [START_REF] Ding | Hybrid routing and spectrum assignment algorithms based on distance-adaptation combined coevolution and heuristics in elastic optical networks[END_REF], [START_REF] Mesquita | A Routing and Spectrum Assignment Heuristic for Elastic Optical Networks under Incremental Traffic[END_REF], [START_REF] Santos | Heuristics for Routing and Spectrum Allocation in Elastic Optical Path Networks[END_REF], and recently [START_REF] He | Invalid-Resource-Aware Spectrum Assignment for Advanced-Reservation Traffic in Elastic Optical Network[END_REF], greedy algorithms [START_REF] Mahala | Spectrum assignment technique with firstrandom fit in elastic optical networks[END_REF], metaheuristics such as tabu search [START_REF] Goscien | Tabu search algorithm, Routing, Modulation and spectrum allocation, Anycast traffic, Elastic optical networks[END_REF], simulated annealing [START_REF] Klinkowski | A Simulated Annealing Heuristic for a Branch and Price-Based Routing and Spectrum Allocation Algorithm in Elastic Optical Networks[END_REF], genetic algorithms [START_REF] Gong | A Two-Population Based Evolutionary Approach for Optimizing Routing, Modulation and Spectrum Assignments (RMSA) in O-OFDM Networks[END_REF], [START_REF] Hai | An efficient genetic algorithm approach for solving routing and spectrum assignment problem[END_REF], [START_REF] Hai | Combining heuristic and exact approaches for solving the routing and spectrum assignment problem[END_REF], [START_REF] Dinarte | Routing and spectrum assignment: A metaheuristic for hybrid ordering selection in elastic optical networks[END_REF], ant colony algorithms [START_REF] Lezama | Solving routing and spectrum allocation problems in flexgrid optical networks using pre-computing strategies[END_REF], and a hybrid metaheuristic approach
[START_REF] Ruiz | A hybrid meta-heuristic approach for optimization of routing and spectrum assignment in Elastic Optical Network (EON)[END_REF] have been used to approach large-scale instances of the RSA problem. Furthermore, recent works have started using artificial intelligence [START_REF] Reihani | Artificial neural network-based adaptive modulation for elastic optical networks[END_REF] (see, for example, [START_REF] Liu | A Monte Carlo Based Routing and Spectrum Assignment Agent for Elastic Optical Networks[END_REF][62]), deep learning [START_REF] Chen | Deep-RMSA: A Deep-Reinforcement-Learning Routing, Modulation and Spectrum Assignment Agent for Elastic Optical Networks[END_REF], and machine learning [START_REF] Salani | Routing and Spectrum Assignment Integrating Machine-Learning-Based QoT Estimation in Elastic Optical Networks[END_REF][120] [117][48] to improve performance. Selvakumar et al. give a survey [START_REF] Selvakumar | The Recent Contributions of Routing and Spectrum Assignment Algorithms in Elastic Optical Network (EON)[END_REF] in which they summarize the main contributions to the RSA problem before 2019. In this thesis, we are interested in the resolution of a complex variant of the RSA problem, called the Constrained-Routing and Spectrum Assignment (C-RSA) problem. Here we suppose that the network should also satisfy the transmission-reach constraint for each traffic demand, according to the actual service requirements. To the best of our knowledge, only a few related works on the RSA take this additional constraint into account, requiring that the length of the chosen path for each traffic demand not exceed a certain threshold (in km). Recently, Hadhbi et al. [START_REF] Hadhbi | A novel integer linear programming model for routing and spectrum assignment in optical networks[END_REF][51] introduced a novel tractable ILP based on the cut formulation for the C-RSA problem, with a polynomial number of variables and an exponential number of constraints that are separable in polynomial time using network flow algorithms. Computational results show that their cut formulation solves larger instances compared with those of Velasco et al. [START_REF] Velasco | Modeling the routing and spectrum allocation problem for flexgrid optical networks[END_REF] and Cai et al. [START_REF] Cai | Novel Node-Arc Model and Multiiteration Heuristics for Static Routing and Spectrum Assignment in Elastic Optical Networks[END_REF]. It has also been used as a basic formulation in the study of Colares et al. [START_REF] Colares | An extended formulation for the Constraint Routing and Spectrum Assignment Problem in Elastic Optical Networks[END_REF], as well as by Chouman et al. [START_REF] Chouman | Impact of RSA Optimization Objectives on Optical Network State[END_REF] [START_REF] Chouman | Assessing the Health of Flexgrid Optical Networks[END_REF] to show the impact of several objective functions on the optical network state. Note that Velasco et al. [START_REF] Velasco | Modeling the routing and spectrum allocation problem for flexgrid optical networks[END_REF], Cai et al. [START_REF] Cai | Novel Node-Arc Model and Multiiteration Heuristics for Static Routing and Spectrum Assignment in Elastic Optical Networks[END_REF], and Bertero et al. [START_REF] Bertero | Integer programming models for the routing and spectrum allocation problem[END_REF] did not take into account the transmission-reach constraint.
However, so far the exact algorithms proposed in the literature could not solve large-scale instances. We believe that a cutting-plane-based approach could be powerful for the problem. To the best of our knowledge, such an approach has not yet been considered, except for the work of Bianchetti et al. [START_REF] Bianchetti | Valid inequalities and a branch-and-cut algorithm for the routing and spectrum allocation problem[END_REF] on the RSA problem. The main aim of this work is thus to thoroughly investigate the theoretical properties of the C-RSA problem. To this end, we provide a deep polyhedral analysis of the C-RSA problem and, based on it, devise branch-and-cut and branch-and-cut-and-price algorithms for solving large-scale instances. We introduce a new ILP formulation for the C-RSA problem, called the cut formulation, which can be seen as an improvement of the one introduced by Hadhbi et al. [START_REF] Hadhbi | A novel integer linear programming model for routing and spectrum assignment in optical networks[END_REF] [START_REF] Hadhbi | Routage et Affectation Spectrale Optimaux dans des Réseaux Optiques Élastiques FlexGrid[END_REF]. We investigate the facial structure of the associated polytope. We further identify several classes of valid inequalities to obtain tighter LP bounds; some of these inequalities are obtained by using conflict graphs related to the problem. We then devise separation procedures and give sufficient conditions under which these inequalities are facet-defining. Using this, we develop a Branch-and-Cut (B&C) algorithm and present computational results on large-scale instances. On the other hand, we introduce an extended ILP formulation, called the path formulation. A column generation algorithm is proposed to solve its linear relaxation. We further adapt the valid inequalities proposed for the cut formulation to obtain tighter bounds for the path formulation as well. Based on this, we develop a Branch-and-Cut-and-Price (B&C&P) algorithm to solve the problem, and we present computational results obtained with it. We finally provide a comparative study between the B&C and B&C&P algorithms using two types of instances: random and realistic ones. The results show that the B&C&P algorithm is more efficient. Furthermore, we have studied the influence of the valid inequalities; the results show that some of them, in particular the clique and cover inequalities, are quite effective. Several enhancements are further investigated and used to speed up our approaches and increase their efficiency. They are based on a primal heuristic used to produce feasible solutions from the fractional solutions obtained at each node of the branching tree; it allows us to obtain good primal bounds and to prune uninteresting nodes of the branching tree. We have also introduced some symmetry-breaking inequalities to manage equivalent subproblems in the branching tree. Several concepts are exploited throughout this dissertation. We start by presenting the basic notions of combinatorial optimization, complexity theory, and graph theory, and we introduce some notation used throughout the manuscript. In Chapter 2, we present the C-RSA problem. We introduce an integer linear programming formulation, namely the cut formulation. We then investigate the related polytope, the convex hull of its solutions. Moreover, we describe several classes of valid inequalities and study their facial structure.
In particular, we introduce symmetry-breaking inequalities. In Chapter 3, we discuss the separation procedures for the valid inequalities and describe a Branch-and-Cut algorithm. A comparative study is presented in this chapter; it shows the impact of the additional valid inequalities using several mixed-integer linear programming solvers. In Chapter 4, we give the extended ILP formulation. We present the column generation algorithm used to solve its linear relaxation and the Branch-and-Cut-and-Price (B&C&P) algorithm, and we report some computational results. In this chapter, we also provide a comparative analysis of the performance of the different algorithms. Chapter 5 is devoted to the Spectrum Assignment (SA) sub-problem. First, we propose a compact integer linear programming formulation and investigate the facial structure of the associated polytope. Furthermore, we describe several valid inequalities, some of which are derived from those already proposed for the C-RSA. We also give sufficient conditions under which these inequalities are facet-defining. Based on these results, we develop a B&C algorithm to solve the problem. Furthermore, we describe symmetry-breaking inequalities for the SA and provide some lower bounds. Finally, we present an extensive experimental study showing the impact of the valid inequalities and symmetry-breaking inequalities on the effectiveness of the B&C algorithm.

Chapter 1

Preliminary Notions

In this chapter, we present some basic notions of combinatorial optimization and polyhedral approaches.

Combinatorial Optimization

Operations research is a discipline related to computer science and applied mathematics. In this dissertation, we are interested in one of its branches, called combinatorial optimization. The optimization problems related to combinatorial optimization can be formulated as follows. Let $E = \{e_1, \dots, e_n\}$ be a finite set, called the ground set. Suppose that each element $e_i$ is associated with a weight $c(e_i) \in \mathbb{R}$, $i \in \{1, \dots, n\}$. Let $\mathcal{F}$ denote a family of subsets of $E$. The problem aims at identifying a subset $F \in \mathcal{F}$ whose weight, given by the sum $\sum_{e_i \in F} c(e_i)$, is smallest (or largest). Such a problem is called a combinatorial optimization problem, where the set $\mathcal{F}$ represents the set of all feasible solutions of the problem. In general, the set $\mathcal{F}$ contains an exponential number of solutions; it is thus generally hopeless to solve combinatorial optimization problems by enumerating all their feasible solutions (the brute-force sketch below makes this concrete). To deal with this, various approaches have been developed to tackle combinatorial optimization problems. They use different tools: complexity theory, combinatorial optimization, graph theory, linear and non-linear programming, integer programming, and mixed-integer programming. In the next section, we discuss some concepts from complexity theory.

Complexity Theory

Several researchers in computer science and mathematics are interested in the classification of problems into easy and hard problems, and further in algorithmic complexity, whose objective is to find the most efficient algorithm for a given problem. This line of work was initiated by Cook [25], Edmonds [START_REF] Edmonds | Covers and packings in a family of sets[END_REF] and Karp [START_REF] Karp | Reducibility among Combinatorial Problems[END_REF].
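The exponential blow-up mentioned above can be illustrated by a minimal brute-force sketch for the generic problem $\min\{\sum_{e \in F} c(e) : F \in \mathcal{F}\}$. This is an illustration only, under the assumption that $\mathcal{F}$ is given by a membership test; the names (brute_force, feasible) are ours.

```python
# Minimal sketch: brute-force enumeration for a generic combinatorial
# optimization problem min { sum_{e in F} c(e) : F in calF }. The family calF
# is assumed to be given by a membership test `feasible`. The 2^|E| loop is
# exactly what makes such problems intractable in general.

from itertools import combinations

def brute_force(E, c, feasible):
    best, best_cost = None, float('inf')
    for r in range(len(E) + 1):
        for F in combinations(E, r):
            cost = sum(c[e] for e in F)
            if feasible(F) and cost < best_cost:
                best, best_cost = F, cost
    return best, best_cost

# Toy instance: pick the cheapest subset containing at least two elements.
E = ['e1', 'e2', 'e3']
c = {'e1': 4, 'e2': 1, 'e3': 2}
print(brute_force(E, c, lambda F: len(F) >= 2))  # (('e2', 'e3'), 3)
```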
Complexity theory [START_REF] Garey | Computers and Intractability: A Guide to the ory of Np-completeness[END_REF] classifies problems into two essential classes: the class P (polynomial time) and the class NP (nondeterministic polynomial time). In addition, one distinguishes the class of NP-complete problems and the class of NP-hard problems. Before defining each class, we first give a general definition of a problem. In general, a problem is a question whose parameters are given as input and for which an answer, called a solution, is sought. A problem is described by giving a general description of all its parameters and certain constraints. An instance of a problem is obtained by specifying the value of each input parameter of the problem. One can then propose an algorithm to solve the problem. An algorithm for solving a given problem is a procedure decomposable into a finite sequence of elementary operations, which produces a solution for each instance of the problem. In general, the complexity of an algorithm depends on the size of the problem, which reflects the number of parameters needed to describe an instance. An algorithm is said to be polynomial if the maximum number of operations necessary to solve an instance of size $n$ is bounded by a polynomial function $f(n)$; this means that there exists a constant $c$ such that the number of operations is at most $c \cdot f(n)$. The big-O notation is used to express the complexity of an algorithm. There exist two types of problems: optimization problems and decision problems. In an optimization problem, we want to minimize (or maximize) a function while satisfying a set of constraints. In a decision problem, the answer is binary (yes/no, or 0/1). A problem that can be solved by an algorithm polynomial in its size is said to belong to the class P. A problem is in NP if one can verify in polynomial time that a given solution is feasible. A problem is called NP-complete if it belongs to NP and every other problem in NP can be reduced to it in polynomial time [START_REF] Garey | Computers and Intractability: A Guide to the ory of Np-completeness[END_REF]. The Satisfiability Problem (SAT) is the first problem that was shown to be NP-complete; this was proved in 1971 by Stephen Cook [START_REF] Cook | The complexity of theorem-proving procedures[END_REF] [START_REF] Gherboudj | Méthodes de résolution de problémes difficiles académiques[END_REF]. NP-hard problems are at least as difficult as NP-complete ones: if the decision problem associated with an optimization problem P is NP-complete, then P is said to be NP-hard [START_REF] Gherboudj | Méthodes de résolution de problémes difficiles académiques[END_REF]. Furthermore, note that every problem of the class P is in NP ($P \subseteq NP$). Whether the converse holds is still open; it constitutes a well-known mathematical problem, one of the seven Millennium Prize problems. The question $P = NP$? is one of the most important questions that has not yet been solved. Answering it affirmatively would amount to proving that every problem of the class NP is in the class P.
Cook proved in [START_REF] Cook | The complexity of theorem-proving procedures[END_REF] that all the problems of the NP class are reducible to the SAT problem, which means that if someone finds a polynomial algorithm for this problem, the question $P = NP$? is solved [START_REF] Gherboudj | Méthodes de résolution de problémes difficiles académiques[END_REF], i.e., we would be able to solve all NP-complete problems in polynomial time.

Polyhedral Approach and Branch-and-Cut Algorithm

1.3.1 Elements of Polyhedral Theory

In this section, we introduce some definitions and properties of polyhedral theory. Schrijver [START_REF] Schrijver | Theory of Linear and Integer Programming[END_REF], Nemhauser and Wolsey [START_REF] Nemhauser | Integer and Combinatorial Optimization[END_REF], Wolsey [START_REF] Wolsey | Integer programming[END_REF] and Schrijver [START_REF] Schrijver | Combinatorial Optimization -Polyhedra and Efficiency[END_REF] are the most useful references [START_REF] Zhao | Maximum Bounded Rooted-Tree Problem : Algorithms and Polyhedra[END_REF]. Let $x$ be a vector in $\mathbb{R}^n$, with $n$ a positive integer. $x$ is said to be a linear combination of the vectors $x_1, x_2, \dots, x_k \in \mathbb{R}^n$ if there exist $k$ scalars $\lambda_1, \lambda_2, \dots, \lambda_k$ such that $x = \sum_{i=1}^{k} \lambda_i x_i$. If, furthermore, $\sum_{i=1}^{k} \lambda_i = 1$, then $x$ is said to be an affine combination of $x_1, x_2, \dots, x_k$. We say that $x$ is a convex combination of $x_1, x_2, \dots, x_k$ if $x$ is an affine combination of $x_1, x_2, \dots, x_k$ and $\lambda_i \geq 0$ for all $i$. The vectors $x_1, x_2, \dots, x_k$ are affinely independent if the unique solution of the system $\sum_{i=1}^{k} \lambda_i x_i = 0$, $\sum_{i=1}^{k} \lambda_i = 0$ is $\lambda_i = 0$ for all $i \in \{1, \dots, k\}$. Given a set $S = \{x_1, \dots, x_k\}$, the convex hull of $S$, denoted by $conv(S)$, is the set of all convex combinations of points of $S$, that is, $conv(S) = \{x \in \mathbb{R}^n \mid x = \sum_{i=1}^{k} \lambda_i x_i, \ \lambda_i \geq 0 \text{ for all } i, \ \sum_{i=1}^{k} \lambda_i = 1\}$. This definition ensures that $S \subseteq conv(S)$. A polyhedron $P$ is the set of solutions of a linear system $Ax \le b$, that is, $P = \{x \in \mathbb{R}^n \mid Ax \le b\}$. A bounded polyhedron is called a polytope. The dimension of a polyhedron $P$ is one less than the maximum number of affinely independent points of $P$. An inequality $ax \le \alpha$ is valid for a polyhedron $P$ if and only if $ax \le \alpha$ for every point $x \in P$; it is said to be violated by a point $x$ if $ax > \alpha$. A set $F \subseteq P$ is called a face if there exists a valid inequality $ax \le \alpha$ for $P$ such that $F = \{x \in P \mid ax = \alpha\}$; we then say that the valid inequality $ax \le \alpha$ supports the face $F$ if $F \neq \emptyset$. A face $F$ is said to be proper if $F \neq \emptyset$ and $F \neq P$. If $F$ is a proper face of $P$ and $\dim(F) = \dim(P) - 1$, then $F$ is called a facet; equivalently, a face $F$ of $P$ is a facet if there is no proper face $F'$ of $P$ strictly containing $F$. If $P$ is full-dimensional, then $ax \le \alpha$ defines a facet of $P$ if and only if $F$ is a proper face and, for every facet-defining inequality $bx \le \beta$ with $F \subseteq \{x \in P \mid bx = \beta\}$, there exists a scalar $\rho \neq 0$ such that $b = \rho a$. If $P$ is not full-dimensional, then $ax \le \alpha$ defines a facet of $P$ if and only if $F$ is a proper face and, for every facet-defining inequality $bx \le \beta$ with $F \subseteq \{x \in P \mid bx = \beta\}$, there exist a scalar $\rho \neq 0$ and a vector $\lambda$ such that $b = \rho a + \lambda A^=$, where $A^= x = b^=$ denotes the subsystem of $Ax \le b$ satisfied with equality by all points of $P$. A point $x \in P$ is an extreme point of $P$ if $\{x\}$ is a face of $P$ of dimension 0; equivalently, $x$ cannot be written as a convex combination of other points of $P$.

Cutting Plane Method

Let P be a combinatorial optimization problem and $S$ the set of its feasible solutions.
The problem P can be written as $\min\{cx \mid x \in S\}$, where $c$ denotes the weight vector associated with the variables $x$ of the problem. Consider the convex hull $conv(S)$ of the feasible solutions of P. The problem P is then equivalent to the linear program $\min\{cx \mid x \in conv(S)\}$. The polyhedral approach, introduced by Edmonds [START_REF] Edmonds | Covers and packings in a family of sets[END_REF], consists in describing the polyhedron $conv(S)$ by a set of linear inequalities; this reduces the problem P to solving a linear program. However, a complete description of the polyhedron may contain an exponential number of linear inequalities. The optimization problem over $conv(S)$ can be solved once all its linear inequalities are known, but a partial characterization of $conv(S)$ may already be sufficient to solve the problem in polynomial time using the so-called cutting-plane method. This method is based on the so-called separation problem, defined as follows. Let $\mathcal{C}$ be a class of valid inequalities for the polyhedron $conv(S)$. The separation problem associated with $\mathcal{C}$ consists in deciding whether a given point $\bar{x}$ satisfies all inequalities of $\mathcal{C}$ and, if not, in finding an inequality of $\mathcal{C}$ violated by $\bar{x}$. Grötschel, Lovász, and Schrijver [START_REF] Grötschel | Geometric Algorithms and Combinatorial Optimization[END_REF] have shown that an optimization problem over $\mathcal{C}$ can be solved in polynomial time if and only if the separation problem associated with $\mathcal{C}$ can be solved in polynomial time. This may permit solving the optimization problem in polynomial time as a sequence of linear programs, each obtained by adding new valid inequalities found by solving the related separation problem. Concretely, we start by solving a linear program containing a small set of valid inequalities. Let $\bar{x}$ denote the optimal solution obtained. We solve the separation problem for $\mathcal{C}$: if $\bar{x}$ satisfies all the constraints of $\mathcal{C}$, then $\bar{x}$ is optimal; otherwise, at least one constraint violated by $\bar{x}$ is identified and added to the current linear program. This process is repeated until an optimal solution is found.

Branch-and-Cut Algorithm

The cutting-plane method provides only an optimal solution for the linear relaxation of the problem. This solution may not be integral, which means that it is not feasible for the original problem. In this case, we pass to the branching step, which consists in dividing the problem into two subproblems by choosing a fractional variable $x_i$ and setting $x_i = 1$ in one subproblem and $x_i = 0$ in the other. We then apply the cutting-plane method to each subproblem and continue this process until an optimal solution is obtained. This method is known as the Branch-and-Cut method, since it combines a branching method with a cutting-plane method at each node of the tree.

1.4 Column Generation and Branch-and-Cut-and-Price Algorithms

Sometimes mathematical formulations of a problem contain a huge, possibly exponential, number of variables; these are known as extended formulations. To solve such problems, we use a column generation based algorithm. We begin the optimization with a restricted linear program containing a feasible basis. At each iteration, the column generation algorithm checks whether there exists a missing variable with negative reduced cost and adds it to the current restricted linear program. This is the pricing problem: it consists in determining variables with negative reduced cost.
This procedure is repeated until no new variable with negative reduced cost is found. The final solution is optimal for the linear relaxation of the underlying problem. Furthermore, if it is integral, then it is optimal for the problem. If not, we create two subproblems, called children, by branching on some fractional variables (variable branching rule) or on some constraints using the Ryan & Foster branching rule [START_REF] Ryan | An integer programming approach to scheduling[END_REF] (constraint branching rule). Such an algorithm is called a Branch-and-Price algorithm. A Branch-and-Cut-and-Price algorithm combines a Branch-and-Price algorithm with a cutting-plane procedure.

Graph Theory

In this section, we introduce some elementary definitions of graph theory that are used throughout the dissertation. Diestel [START_REF] Diestel | Graph Theory (Graduate Texts in Mathematics)[END_REF] and Golumbic [START_REF] Golumbic | Algorithmic Graph Theory and Perfect Graphs[END_REF] are the most useful references on graph theory [START_REF] Zhao | Maximum Bounded Rooted-Tree Problem : Algorithms and Polyhedra[END_REF]. A graph is a pair $G = (V, E)$, where $V$ is a finite set of nodes (also called vertices) linked by a set of edges (also called links) $E$, which can be oriented or not. A path $p$ in a graph $G = (V, E)$ from node $a$ to node $b$ is a sequence of nodes such that for each pair of successive nodes $v_i, v_{i+1}$ there exists an edge $(v_i, v_{i+1}) \in E$. For any non-empty subset of nodes $X \subseteq V$, let $\delta(X)$ denote the set of edges having one endpoint in $X$ and the other in $\bar{X} = V \setminus X$; this set is called a cut. When $X$ is a singleton (i.e., $X = \{v\}$), we use $\delta(v)$ instead of $\delta(\{v\})$ to denote the set of edges incident to a node $v \in V$. The cardinality of a set $K$ is denoted by $|K|$. A vertex coloring of $G$ is an assignment of colors to the vertices of $G$ such that two adjacent vertices $v$ and $v'$ cannot get the same color. Similarly, an edge coloring of $G$ is an assignment of colors to the edges of $G$ such that two adjacent edges $e$ and $e'$ cannot get the same color. We say that a graph $G$ is $t$-colorable if it admits a coloring using no more than $t$ different colors. $G'$ is called a weighted graph if each node of $G'$ is associated with a weight. An interval $t$-coloring of a weighted graph $G' = (V, E, w)$ is a function $c : V \to \{1, 2, \dots, t\}$ such that $c(v) + w(v) - 1 \le t$ for every $v \in V$: each vertex $v$ receives the interval $\{c(v), \dots, c(v) + w(v) - 1\}$ of $w(v)$ consecutive integers, and the intervals of colors assigned to two adjacent vertices must not overlap. If an interval $t$-coloring is feasible for a graph $G'$, then $G'$ is said to be interval $t$-colorable [START_REF] Shirazipourazad | On routing and spectrum allocation in spectrum-sliced optical networks[END_REF]. The interval chromatic number of $G'$, denoted by $\chi$, is the least integer $t$ such that $G'$ has an interval $t$-coloring [START_REF] Shirazipourazad | On routing and spectrum allocation in spectrum-sliced optical networks[END_REF].

Flexible Optical Networks

The last two decades have seen a major development of telecommunication networks, with a continuous growth in demands. To face this trend of increase in bandwidth, network operators have had to make their network architectures and management evolve. Two significant changes appeared recently in the optical network architecture.
First, the bandwidth-greedy FixedGrid architecture for Optical Wavelength Division Multiplexing (WDM) (also called wavelength-routed networks) [START_REF] Ramaswami | Optical Networks: A Practical Perspective[END_REF][93], based on a fixed spectrum grid, is being replaced by the FlexGrid architecture, which is capable of supporting variable data rates (in Gb/s) through a flexible spectrum. In this concept, the optical spectrum is divided into slots having the same width of 12.5 GHz. The concept of slot was proposed initially by Masahiko Jinno et al. [START_REF] Jinno | Demonstration of novel spectrum-efficient elastic optical path network with perchannel variable capacity of 40 Gb/s to over 400 Gb/s[END_REF], and later explored and further developed by the same authors in 2010 [START_REF] Walkowiak | Elastic optical networks -a new approach for effective provisioning of cloud computing and content-oriented services[END_REF]. In SFONs any optical path can elastically span as many contiguous slots as needed. This technology provides a more efficient use of the spectral domain than the traditional FixedGrid WDM. Furthermore, a new generation of transponders is becoming available, namely bandwidth-variable transponders (BV-Ts) and bandwidth-variable wavelength cross-connects (BV-WXCs) [START_REF] Walkowiak | Elastic optical networks -a new approach for effective provisioning of cloud computing and content-oriented services[END_REF]. These can manage data rates of up to 400 Gb/s, which cannot be accommodated by a 50 GHz wavelength. They can also regenerate the signal, i.e., re-amplify, re-shape and re-time the optical signal (the so-called 3R signal regeneration), which is necessary because the transmission reach of a signal is limited; this reach represents the maximum length (in km) allowed for the routing of each traffic demand. Network operators are confronted with several optimization problems, in particular some variants of routing and resource allocation problems that arise when designing or planning optical networks. The classical Routing and Wavelength Assignment (RWA) problem is the key issue in the design of FixedGrid WDM networks. In this problem, we are given an optical network and a set of demands, each demand having an origin and a destination. The task is to find a path for each demand and to assign a single 50 GHz wavelength to each demand. It was studied by Chlamtac et al. [START_REF] Chlamtac | Lightpath communications: an approach to high bandwidth optical WAN's[END_REF] and is known to be NP-hard [START_REF] Chlamtac | Lightpath communications: an approach to high bandwidth optical WAN's[END_REF]. When the paths are already established, it is equivalent to the $n$-graph-coloring problem, where the number $n$ of colors corresponds to the number of wavelengths: finding the minimum number of wavelengths needed to route all the traffic demands amounts to finding the chromatic number of the conflict graph, in which the demands are represented by nodes and two nodes are linked by an edge if the paths of the associated demands share an edge (see the sketch below). The RWA problem has also been considered as a special case of the integer multicommodity flow (MCF) problem, in which some specific constraints [START_REF] Brun | Routing and Wavelength Assignment in Optical Networks[END_REF] are added and should be respected. Several models and algorithms have been proposed to solve the RWA problem. However, in SFONs, the RWA model cannot handle the change from single wavelengths to intervals of contiguous slots.
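The conflict-graph view above can be illustrated with a minimal greedy coloring sketch. This is an illustration of the equivalence only, not an algorithm from this thesis, and greedy coloring is of course heuristic (it does not compute the chromatic number); the data layout and names are ours.

```python
# Minimal sketch: once the paths are fixed, wavelength assignment reduces to
# coloring the conflict graph, where demands are nodes and two demands are
# adjacent if their paths share an edge. Greedy first-fit coloring gives a
# feasible (not necessarily optimal) wavelength assignment.

def conflict_graph(paths):
    demands = list(paths)
    adj = {k: set() for k in demands}
    for i, k in enumerate(demands):
        for kp in demands[i + 1:]:
            if set(paths[k]) & set(paths[kp]):
                adj[k].add(kp)
                adj[kp].add(k)
    return adj

def greedy_wavelengths(paths):
    adj = conflict_graph(paths)
    color = {}
    for k in paths:  # any demand order; ordering heuristics change the result
        used = {color[kp] for kp in adj[k] if kp in color}
        color[k] = min(w for w in range(len(paths)) if w not in used)
    return color

paths = {1: [('a', 'b')], 2: [('a', 'b'), ('b', 'c')], 3: [('b', 'c')]}
print(greedy_wavelengths(paths))  # {1: 0, 2: 1, 3: 0}
```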
As a result, a new problem has appeared to deal with this, called the Routing and Spectrum Assignment (RSA) problem. It can be stated as follows. Given an optical network $G$ and a multiset of traffic demands $K$, it aims at determining for each traffic demand $k \in K$ a path and an interval of contiguous slots, while optimizing some linear objective(s) and satisfying the following constraints [START_REF] Hadhbi | A novel integer linear programming model for routing and spectrum assignment in optical networks[END_REF]:

1. spectrum contiguity: an interval of contiguous slots should be allocated to each demand $k$, with width equal to the number of slots requested by demand $k$;

2. spectrum continuity: the interval of contiguous slots allocated to each traffic demand stays the same along the chosen path;

3. non-overlapping spectrum: the intervals of contiguous slots of demands whose paths are not edge-disjoint in the network cannot share any slot over the shared edges.

The RSA problem is harder than the RWA problem because of the continuity constraint, which is not present in the RWA problem. In a feasible solution, each demand $k$ is thus assigned a path $p_k$ and an interval of slots $S_k$ such that $S_k \cap S_{k'} = \emptyset$ for each pair of demands $k, k' \in K$ ($k \neq k'$) with $E(p_k) \cap E(p_{k'}) \neq \emptyset$ (non-overlapping constraint).

Cut Formulation

Here we introduce an integer linear programming formulation for the C-RSA problem, called the cut formulation. It can be seen as a reformulation of the one introduced by Hadhbi et al. [START_REF] Hadhbi | A novel integer linear programming model for routing and spectrum assignment in optical networks[END_REF]. For $k \in K$ and $e \in E$, let $x^k_e$ be a binary variable taking value 1 if demand $k$ goes through edge $e$ and 0 otherwise; for $k \in K$ and $s \in S$, let $z^k_s$ be a binary variable taking value 1 if slot $s$ is the last slot allocated for the routing of demand $k$ and 0 otherwise. The contiguous slots $s' \in \{s - w_k + 1, \dots, s\}$ are assigned to demand $k$ whenever $z^k_s = 1$. Here $S = \{1, \dots, \bar{s}\}$, where $\bar{s}$ denotes the total number of available slots. The C-RSA problem is equivalent to the following integer linear program:

\[
\min \ \sum_{k \in K} \sum_{e \in E} c_e x^k_e, \tag{2.1}
\]
subject to
\[
\sum_{e \in \delta(X)} x^k_e \ge 1, \quad \forall k \in K, \ \forall X \subseteq V, \ |X \cap \{o_k, d_k\}| = 1, \tag{2.2}
\]
\[
\sum_{e \in E} \ell_e x^k_e \le \bar{\ell}_k, \quad \forall k \in K, \tag{2.3}
\]
\[
z^k_s = 0, \quad \forall k \in K, \ \forall s \in \{1, \dots, w_k - 1\}, \tag{2.4}
\]
\[
\sum_{s = w_k}^{\bar{s}} z^k_s \ge 1, \quad \forall k \in K, \tag{2.5}
\]
\[
x^k_e + x^{k'}_e + \sum_{s' = s}^{\min(s + w_k - 1, \bar{s})} z^k_{s'} + \sum_{s' = s}^{\min(s + w_{k'} - 1, \bar{s})} z^{k'}_{s'} \le 3, \quad \forall (e, k, k', s) \in Q_{e,s}, \tag{2.6}
\]
\[
0 \le x^k_e \le 1, \quad \forall k \in K, \ \forall e \in E, \tag{2.7}
\]
\[
0 \le z^k_s \le 1, \quad \forall k \in K, \ \forall s \in S, \tag{2.8}
\]
\[
x^k_e \in \{0, 1\}, \quad \forall k \in K, \ \forall e \in E, \tag{2.9}
\]
\[
z^k_s \in \{0, 1\}, \quad \forall k \in K, \ \forall s \in S, \tag{2.10}
\]
where $Q_{e,s}$ denotes the set of all quadruples $(e, k, k', s)$ with $e \in E$, $k, k' \in K$, $k \neq k'$, and $s \in S$. Note that the linear relaxation of the C-RSA can be solved in polynomial time, given that inequalities (2.2) can be separated in polynomial time using network flows.

Associated Polytope

Let $P(G, K, S)$ be the polytope given by the convex hull of the solutions of (2.1)-(2.10). In this section, we discuss the facial structure of the polytope $P(G, K, S)$. First, we describe some structural properties; these will be used for determining the dimension of $P(G, K, S)$. Consider a demand $k \in K$ and an edge $e = ij \in E$. If the length of a shortest path between $o_k$ and $i$, plus $\ell_{ij}$, plus the length of a shortest path between $j$ and $d_k$ (resp. between $o_k$ and $j$, and between $i$ and $d_k$) is greater than $\bar{\ell}_k$, then edge $ij$ cannot be in a path routing demand $k$, and we then say that $ij$ is a forbidden edge for demand $k$ due to the transmission-reach constraint. Let $E^k_t$ denote the set of forbidden edges due to the transmission-reach constraint for demand $k \in K$. Note that, using Dijkstra's algorithm, one can identify the forbidden edges $E^k_t$ of each demand $k \in K$ in polynomial time (a small sketch is given later in this section). Table 2.1 shows the set of forbidden edges $E^k_t$ and forbidden nodes $V^k_0$ for each demand $k$ in $K$ of Fig. 2.1(b).
k | $o_k \Rightarrow d_k$ | $w_k$ | $\bar{\ell}_k$ | $V^k_0$ | $E^k_t$
1 | $a \Rightarrow c$ | 2 | 4 | $\{e, d, g\}$ | $\{cg, dg, de, df, cd, ef\}$
2 | $a \Rightarrow d$ | 1 | 4 | $\{g\}$ | $\{cg, dg, df\}$
3 | $b \Rightarrow f$ | 2 | 4 | $\{e, d, g\}$ | $\{cg, dg, de, df, cd, ef\}$
4 | $b \Rightarrow e$ | 1 | 4 | $\{g\}$ | $\{cg, dg, df\}$

Table 2.1: Sets of $V^k_0$ and $E^k_t$ for the example of Fig. 2.1(b).

Consider a subset of nodes $W \subseteq V \setminus V^k_0$ with $o_k \in W$ and $d_k \in V \setminus W$. Let $f$ be an edge of the cut $\delta(W)$ such that all the edges $e \in \delta(W) \setminus \{f\}$ are forbidden for demand $k$. As a consequence, edge $f$ is an essential edge for demand $k$. As with the forbidden edges, the essential edges can be determined in polynomial time using network flows. Let $E^k_1$ denote the set of essential edges of demand $k$, and let $K_e$ denote the subset of demands in $K$ having $e$ as an essential edge. Therefore,
\[
x^k_e = 1, \quad \text{for all } k \in K \text{ and } e \in E^k_1. \tag{2.11}
\]
In addition to the forbidden edges obtained from the transmission-reach constraints, there may exist edges that are forbidden because of a lack of resources for demand $k$. This is the case when, for instance, the residual capacity of the edge in question does not allow demand $k$ to use this edge, i.e., $w_k > \bar{s} - \sum_{k' \in K_e} w_{k'}$. Let $E^k_c$ denote the set of forbidden edges for demand $k \in K$ due to the resource constraints, and let $E^k_0 = E^k_t \cup E^k_c$ for $k \in K$. Hence,
\[
x^k_e = 0, \quad \text{for all } k \in K \text{ and } e \in E^k_0. \tag{2.12}
\]
As a result of the pre-processing stage, a non-compatibility between demands may appear due to a lack of resources. For an edge $e$, two demands $k$ and $k'$ with $e \notin E^k_0 \cup E^k_1 \cup E^{k'}_0 \cup E^{k'}_1$ are said to be non-compatible if the residual capacity of edge $e$ does not allow routing the two demands $k, k'$ together through $e$, i.e., $w_k + w_{k'} > \bar{s} - \sum_{k'' \in K_e} w_{k''}$. Let $K^e_c$ denote the set of pairs of demands $(k, k')$ in $K$ that are non-compatible for edge $e$. On the other hand, a non-compatibility between edges may appear for a demand due to the transmission-reach constraint. Consider a demand $k$: two edges $e = ij \notin E^k_0 \cup E^k_1$ and $e' = lm \notin E^k_0 \cup E^k_1$ are said to be non-compatible if the lengths of all the $(o_k, d_k)$-paths using $e = ij$ and $e' = lm$ together are greater than $\bar{\ell}_k$. Note that we are able to determine the non-compatible edges for each demand $k$ in polynomial time using shortest-path algorithms.

Dimension

We first describe some properties that are useful to determine the dimension of $P(G, K, S)$. The following is easily seen to be true.

Proposition 2.3.1. The following equation system (2.13) has full rank:
\[
\begin{cases}
x^k_e = 0, & \text{for all } k \in K \text{ and } e \in E^k_0,\\
x^k_e = 1, & \text{for all } k \in K \text{ and } e \in E^k_1,\\
z^k_s = 0, & \text{for all } k \in K \text{ and } s \in \{1, \dots, w_k - 1\}.
\end{cases} \tag{2.13}
\]
The rank of system (2.13) is given by $r = \sum_{k \in K} (|E^k_0| + |E^k_1| + (w_k - 1))$. Let $Q$ denote a matrix associated with system (2.13), which contains $r$ linearly independent rows. A solution of the C-RSA problem is given by two sets $E_k$ and $S_k$ for each demand $k \in K$, where $E_k$ is the set of edges used for the routing of demand $k$ and $S_k$ is the set of slots assigned to demand $k$. For the sake of presentation, we will denote by $E_k$ a feasible path and by $S_k$ the set of last slots assigned to demand $k$. Throughout, we consider solutions of the following form: let $S_i = (E^i, S^i)$ be a solution of the C-RSA problem, where $E^i = (E^i_1, E^i_2, \dots, E^i_{|K|})$ and $S^i = (S^i_1, S^i_2, \dots, S^i_{|K|})$, and let $(x^{S_i}, z^{S_i})$ be its incidence vector.
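Before continuing with the dimension results, we briefly return to the pre-processing announced above: detecting the forbidden edges $E^k_t$ is a direct application of shortest paths. The following is a minimal sketch under stated assumptions (an undirected graph encoded as nested dictionaries of edge lengths; the names dijkstra and forbidden_edges are ours, not from the thesis).

```python
# Minimal sketch (illustrative only) of the forbidden-edge pre-processing: an
# edge (i, j) is forbidden for demand k (added to E^k_t) when every
# (o_k, d_k)-path through it exceeds the transmission reach, i.e. both
# dist(o_k, i) + l_ij + dist(j, d_k) and the symmetric version exceed lbar_k.

import heapq

def dijkstra(graph, src):
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        for v, l in graph[u].items():
            if d + l < dist.get(v, float('inf')):
                dist[v] = d + l
                heapq.heappush(heap, (d + l, v))
    return dist

def forbidden_edges(graph, o_k, d_k, lbar_k):
    d_o, d_d = dijkstra(graph, o_k), dijkstra(graph, d_k)
    inf = float('inf')
    forbidden = set()
    for i in graph:
        for j, l in graph[i].items():
            if i < j:  # consider each undirected edge once
                via_ij = d_o.get(i, inf) + l + d_d.get(j, inf)
                via_ji = d_o.get(j, inf) + l + d_d.get(i, inf)
                if min(via_ij, via_ji) > lbar_k:
                    forbidden.add((i, j))
    return forbidden

# Toy example: the direct edge (a, c) of length 5 exceeds a reach of 4.
graph = {'a': {'b': 2, 'c': 5}, 'b': {'a': 2, 'c': 2}, 'c': {'a': 5, 'b': 2}}
print(forbidden_edges(graph, 'a', 'c', lbar_k=4))  # {('a', 'c')}
```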
Note that when the routing of the demands is trivial or already established, one can find a feasible spectrum assignment $S_i$ in polynomial time using heuristics and greedy algorithms such as the well-known First-Fit algorithm [START_REF] Amar | Performance assessment and modeling of flexible optical networks[END_REF] (a sketch of First-Fit is given after Theorem 2.3.1 below). This will be used throughout the proofs of the polytope dimension and of the facial structure of the valid inequalities, where the set of demands $K$ is considered as an ordered set, i.e., $K = \{k_1, k_2, \dots, k_{|K|}\}$.

Proposition 2.3.2. System (2.13) defines a minimal equation system for $P(G, K, S)$.

Proof. Consider an equation $\mu x + \sigma z = \lambda$ of $P(G, K, S)$. To prove that $\mu x + \sigma z$ is a linear combination of the equations of system (2.13), it is sufficient to prove that there exists $\gamma = (\gamma^1, \gamma^2, \gamma^3)$, with $\gamma^{k',e}_1 \in \mathbb{R}$ for all $k' \in K$ and $e \in E^{k'}_0$, $\gamma^{k',e}_2 \in \mathbb{R}$ for all $k' \in K$ and $e \in E^{k'}_1$, and $\gamma^{k',s'}_3 \in \mathbb{R}$ for all $k' \in K$ and $s' \in \{1, \dots, w_{k'} - 1\}$, such that $(\mu, \sigma) = \gamma Q$.

We first show that $\sigma^k_s = 0$ for all $k \in K$ and $s \in \{w_k, \dots, \bar{s}\}$. Consider a demand $k$ in $K$ and a slot $s$ in $\{w_k, \dots, \bar{s}\}$. Consider the solution $S_0 = (E^0, S^0)$ given by:

a) for each demand $k_i \in K$, $i \in \{1, \dots, |K|\}$, we let $E^0_{k_i}$ be the set of edges involved in a shortest path between $o_{k_i}$ and $d_{k_i}$;

b) for each demand $k_i \in K$, $i \in \{1, \dots, |K|\}$, we select the smallest slot index $s_{k_i}$ in the set of slots $I^0_i$ given by
\[
I^0_i = \Big[ \bigcap_{k_j \in D^0_i} \big( \{w_{k_i}, \dots, s_{k_j} - w_{k_j}\} \cup \{s_{k_j} + w_{k_i}, \dots, \bar{s}\} \big) \Big] \cap \big[ \{w_{k_i}, \dots, s - w_k\} \cup \{s + w_{k_i}, \dots, \bar{s}\} \big]
\]
if $E^0_{k_i} \cap E^0_k \neq \emptyset$, and $I^0_i = \bigcap_{k_j \in D^0_i} \big( \{w_{k_i}, \dots, s_{k_j} - w_{k_j}\} \cup \{s_{k_j} + w_{k_i}, \dots, \bar{s}\} \big)$ otherwise, where $D^0_i = \{k_j \in \{k_1, \dots, k_{i-1}\} : E^0_{k_i} \cap E^0_{k_j} \neq \emptyset\}$. This guarantees that

- $\{s_{k_i} - w_{k_i} + 1, \dots, s_{k_i}\} \cap \{s_{k_j} - w_{k_j} + 1, \dots, s_{k_j}\} = \emptyset$ for each $k_j \in D^0_i$,
- and $\{s_{k_i} - w_{k_i} + 1, \dots, s_{k_i}\} \cap \{s - w_k + 1, \dots, s\} = \emptyset$ if $E^0_{k_i} \cap E^0_k \neq \emptyset$ (we take into account the possibility of adding slot $s$ to the set of last slots $S^0_k$ assigned to demand $k$ in solution $S_0$).

We let $S^0_{k_i} = \{s_{k_i}\}$ be the set of last slots assigned to each demand $k_i$, $i \in \{1, \dots, |K|\}$. $S_0$ is feasible for the problem, and its incidence vector $(x^{S_0}, z^{S_0})$ belongs to $P(G, K, S)$. Let $S_1 = (E^1, S^1)$ be the solution obtained from $S_0$ by adding slot $s$ as a last slot for demand $k$. Solution $S_1$ is feasible for the problem, and its incidence vector $(x^{S_1}, z^{S_1})$ belongs to $P(G, K, S)$. Hence, solutions $S_0$ and $S_1$ both satisfy the equation $\mu x + \sigma z = \lambda$. We then obtain
\[
\mu x^{S_0} + \sigma z^{S_0} = \mu x^{S_1} + \sigma z^{S_1} = \mu x^{S_0} + \sigma z^{S_0} + \sigma^k_s.
\]
It follows that $\sigma^k_s = 0$. In a similar way, we can show that $\sigma^k_s = 0$ for all $k \in K$ and $s \in \{w_k, \dots, \bar{s}\}$.

Next, we show that $\mu^k_e = 0$ for all $k \in K$ and $e \in E \setminus (E^k_0 \cup E^k_1)$. Consider a demand $k \in K$ and an edge $e \in E \setminus (E^k_0 \cup E^k_1)$. Consider the solution $S'_0 = (E'^0, S'^0)$ such that:

a) for each demand $k_i \in K$, $i \in \{1, \dots, |K|\}$, we let $E'^0_{k_i}$ be the set of edges involved in a shortest path between $o_{k_i}$ and $d_{k_i}$;

b) we select slot $s_k = w_k$ as the last slot for demand $k$ in solution $S'_0$;

c) for each demand $k_i \in K \setminus \{k\}$, we select the smallest slot index $s_{k_i}$ in the set of slots $I'^0_i$ given by
\[
I'^0_i = \Big[ \bigcap_{k_j \in D'^0_i} \big( \{w_{k_i}, \dots, s_{k_j} - w_{k_j}\} \cup \{s_{k_j} + w_{k_i}, \dots, \bar{s}\} \big) \Big] \cap \big[ \{w_{k_i}, \dots, s_k - w_k\} \cup \{s_k + w_{k_i}, \dots, \bar{s}\} \big]
\]
if $E'^0_{k_i} \cap (E'^0_k \cup \{e\}) \neq \emptyset$, and $I'^0_i = \bigcap_{k_j \in D'^0_i} \big( \{w_{k_i}, \dots, s_{k_j} - w_{k_j}\} \cup \{s_{k_j} + w_{k_i}, \dots, \bar{s}\} \big)$ otherwise,
where $D'^0_i = \{k_j \in \{k_1, \dots, k_{i-1}\} \setminus \{k\} : E'^0_{k_i} \cap E'^0_{k_j} \neq \emptyset\}$. As a result,

- $\{s_{k_i} - w_{k_i} + 1, \dots, s_{k_i}\} \cap \{s_{k_j} - w_{k_j} + 1, \dots, s_{k_j}\} = \emptyset$ for each $k_j \in D'^0_i$,
- $\{s_{k_i} - w_{k_i} + 1, \dots, s_{k_i}\} \cap \{s_k - w_k + 1, \dots, s_k\} = \emptyset$ if $E'^0_{k_i} \cap (E'^0_k \cup \{e\}) \neq \emptyset$ (we take into account the possibility of adding edge $e$ to the set of edges $E'^0_k$ routing demand $k$).

We let $S'^0_{k_i} = \{s_{k_i}\}$ be the set of last slots assigned to each demand $k_i$, $i \in \{1, \dots, |K|\}$. $S'_0$ is clearly feasible for the problem, and its incidence vector $(x^{S'_0}, z^{S'_0})$ belongs to $P(G, K, S)$. Let $S_2 = (E^2, S^2)$ be the solution obtained from $S'_0$ by adding edge $e \in E \setminus (E^k_0 \cup E^k_1)$ to the routing of demand $k$, i.e., $E^2_k = E'^0_k \cup \{e\}$, and by removing the slot $s$ already selected as the last slot of demand $k$ in $S'_0$ and replacing it by a new slot $s'$, where $s'$ is the smallest slot index in $\{w_k, \dots, \bar{s}\}$ such that $\{s' - w_k + 1, \dots, s'\} \cap \{s'' - w_{k'} + 1, \dots, s''\} = \emptyset$ for each demand $k'$ and $s'' \in S'^0_{k'}$ with $E^2_k \cap E'^0_{k'} \neq \emptyset$. $S_2$ is clearly feasible for the problem, and its incidence vector $(x^{S_2}, z^{S_2})$ belongs to $P(G, K, S)$. Hence, solutions $S'_0$ and $S_2$ satisfy the equation $\mu x + \sigma z = \lambda$. It follows that
\[
\mu x^{S'_0} + \sigma z^{S'_0} = \mu x^{S_2} + \sigma z^{S_2} = \mu x^{S'_0} + \mu^k_e + \sigma z^{S'_0} - \sigma^k_s + \sigma^k_{s'}.
\]
Since $\sigma^k_s = 0$ for all $k \in K$ and $s \in \{w_k, \dots, \bar{s}\}$, we get $\mu^k_e = 0$ for demand $k$ and edge $e$. In a similar way, we can show that $\mu^k_e = 0$ for all $k \in K$ and $e \in E \setminus (E^k_0 \cup E^k_1)$. Therefore, all the equations of the polytope $P(G, K, S)$ are expressed only in terms of the variables $x^k_e$ with $e \in E^k_0 \cup E^k_1$ and the variables $z^k_s$ with $s \in \{1, \dots, w_k - 1\}$. We distinguish three blocks of rows in the matrix $Q$ associated with system (2.13):

a) block $Q_1$ corresponds to the equations $x^k_e = 0$ for all $k \in K$ and $e \in E^k_0$, with $\operatorname{rank}(Q_1) = \sum_{k \in K} |E^k_0|$;
b) block $Q_2$ corresponds to the equations $x^k_e = 1$ for all $k \in K$ and $e \in E^k_1$, with $\operatorname{rank}(Q_2) = \sum_{k \in K} |E^k_1|$;
c) block $Q_3$ corresponds to the equations $z^k_s = 0$ for all $k \in K$ and $s \in \{1, \dots, w_k - 1\}$, with $\operatorname{rank}(Q_3) = \sum_{k \in K} (w_k - 1)$.

Note that the three blocks of the matrix $Q$ are independent. Let
\[
Q_k = \begin{pmatrix} Q^1_k \\ Q^2_k \\ Q^3_k \end{pmatrix}
\]
be the submatrix of $Q$ associated with the equations (2.12) and (2.11) involving the variables $x^k_e$ with $e \in E^k_0 \cup E^k_1$ and the variables $z^k_s$ with $s \in \{1, \dots, w_k - 1\}$ of demand $k$. Note that a forbidden edge can never be an essential edge at the same time; otherwise, the problem is infeasible. Furthermore, there is no dependency between the essential edges of a demand $k$, nor between those of different demands in $K$; the same holds for the forbidden edges. We want to show that $\mu^k = \gamma^k_1 Q^1_k + \gamma^k_2 Q^2_k$ and $\sigma^k = \gamma^k_3 Q^3_k$. The only solution of these two systems is given by
\[
\mu^k_e = \gamma^{k,e}_1, \quad \text{for all } k \in K \text{ and } e \in E^k_0, \tag{2.14}
\]
\[
\mu^k_e = \gamma^{k,e}_2, \quad \text{for all } k \in K \text{ and } e \in E^k_1, \tag{2.15}
\]
\[
\sigma^k_s = \gamma^{k,s}_3, \quad \text{for all } k \in K \text{ and } s \in \{1, \dots, w_k - 1\}. \tag{2.16}
\]
We conclude that, for each $k \in K$ and $e \in E$,
\[
\mu^k_e =
\begin{cases}
\gamma^{k,e}_1 & \text{if } e \in E^k_0,\\
\gamma^{k,e}_2 & \text{if } e \in E^k_1,\\
0 & \text{otherwise},
\end{cases} \tag{2.17}
\]
yielding $\mu^k = \gamma^k_1 Q^1_k + \gamma^k_2 Q^2_k$ for each $k \in K$. Moreover, for each $k \in K$ and $s \in S$,
\[
\sigma^k_s =
\begin{cases}
\gamma^{k,s}_3 & \text{if } s \in \{1, \dots, w_k - 1\},\\
0 & \text{otherwise},
\end{cases} \tag{2.18}
\]
i.e., $\sigma^k = \gamma^k_3 Q^3_k$. As a consequence, $(\mu, \sigma) = \gamma Q$, as desired.

We then have the following result.

Theorem 2.3.1. The dimension of $P(G, K, S)$ is given by $\dim(P(G, K, S)) = |K| \cdot (|E| + |S|) - r$.
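The solution constructions used in this proof (and in the proofs that follow) repeatedly apply the First-Fit idea mentioned at the beginning of this subsection: with the routing fixed and the demands taken in the order $k_1, \dots, k_{|K|}$, each demand receives the smallest feasible last-slot index. The following minimal sketch only illustrates this idea; the data layout and names (first_fit, sbar for $\bar{s}$) are ours, not from the thesis.

```python
# Minimal sketch of First-Fit spectrum assignment: with the routing fixed,
# each demand (in the given order) receives the smallest last-slot index s
# such that {s - w_k + 1, ..., s} is disjoint from the intervals of the
# previously placed demands sharing an edge with it.

def first_fit(order, paths, w, sbar):
    last_slot = {}
    for k in order:
        blocked = set()
        for kp, s in last_slot.items():
            if set(paths[k]) & set(paths[kp]):
                blocked |= set(range(s - w[kp] + 1, s + 1))
        for s in range(w[k], sbar + 1):
            if not (set(range(s - w[k] + 1, s + 1)) & blocked):
                last_slot[k] = s
                break
        else:
            raise ValueError(f"no feasible interval for demand {k}")
    return last_slot

paths = {1: [('a', 'b')], 2: [('a', 'b'), ('b', 'c')], 3: [('b', 'c')]}
print(first_fit([1, 2, 3], paths, {1: 2, 2: 1, 3: 2}, sbar=5))
# {1: 2, 2: 3, 3: 2}
```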
Facial Investigation In this section, we describe facets defining inequalities for the polytope P(G, K, S) from the cut formulation (2.2)-(2.10), and the ones from the valid inequalities. First, we characterize when the basic inequalities (2.2)-(2.10) define facets. Theorem 2.3.2. Consider a demand k ∈ K, and an edge e ∈ E \ (E k 0 ∪ E k 1 ). Then, inequality x k e ≥ 0 is facet defining for P(G, K, S). Proof. Let F k e be the face induced by inequality x k e ≥ 0, that is F k e = {(x, z) ∈ P(G, K, S) : x k e = 0}. Denote inequality x k e ≥ 0 by αx + βz ≤ λ. Let µx + σz ≤ τ be a facet defining inequality for P(G, K, S) and F = {(x, z) ∈ P(G, K, S) : µx + σz = τ }. Suppose that F k e ⊆ F . To prove that F k e is a facet of P(G, K, S), we need to show that there exist ρ ∈ R and γ = (γ 1 , γ 2 , γ 3 ) ( with γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}) such that (µ, σ) = ρ(α, β) + γQ. First, we will show that σ k ′ s = 0 for all k ′ ∈ K and s ∈ {w k ′ , ..., s}. Consider a slot s in {w k , ..., s}, and solution S 3 = (E 3 , S 3 ) such that a) for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we let E 3 k i be the set of edges involved in a shortest path between o k i and d k i , b) for demand k, we let E 3 k be the set of edges involved in a shortest path between o k and d k which does not use edge e, c) for each demand k i ∈ K with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 3 i given by where I 3 i = [ D 3 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E 3 k i ∩ E 3 k j ̸ = ∅}. This guarantees that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 3 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s -w k + 1, ..., s} = ∅ if E 3 k i ∩ E 3 k ̸ = ∅ ( we take into account the possibility of adding slot s in the set of last slots S 3 k assigned to demand k in solution S 3 ). We let S 3 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S 3 is clearly feasible for the problem, and its incidence vector (x S 3 , z S 3 ) belongs to F k e . Now, let S 4 = (E 4 , S 4 ) a solution obtained from S 3 by adding slot s as last slot to demand k. Solution S 4 is feasible for the problem. The corresponding incidence vector (x S 4 , z S 4 ) belongs to F k e . Hence, solutions S 3 and S 4 satisfy equation µx + σz = τ . As a consequence, we have µx S 3 + σz S 3 = µx S 4 + σz S 4 = µx S 3 + σz S 3 + σ k s . Hence, σ k s = 0. In a similar way, we can show that σ k ′ s ′ = 0, for all k ′ ∈ K and s ′ ∈ {w k ′ , ..., s}. Next, we will show that µ k ′ e ′ = 0 for all demand k ′ ∈ K \ {k} and edge e ′ ∈ E \ (E k ′ 0 ∪ E k ′ 1 ) , and µ k e ′ = 0 for edge e ′ ∈ E \ (E k 0 ∪ E k 1 ∪ {e}). Consider an edge e ′ ∈ E \ (E k 0 ∪ E k 1 ∪ {e}) chosen arbitrarily. 
Let S ′3 = (E ′3 , S ′3 ) be the solution given by a) for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we let E ′3 k i be the set of edges involved in a shortest path between o k i and d k i , b) for demand k, we let E ′3 k be the set of edges involved in a shortest path between o k and d k which does not use edge e ′ , c) we select slot s k = w k as last slot for demand k in solution S ′3 , d) for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I ′3 i given by I ′3 i = [ kj ∈D ′3 i {w ki , ..., s kj -w kj }∪{s kj +w ki , ..., s}]∩[{w ki , ..., s k -w k }∪{s k +w ki , ..., s}] if E ′3 ki ∩ (E ′3 k ∪ {e ′ }) ̸ = ∅ or I ′3 i = kj ∈D ′3 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where D ′3 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E ′3 k i ∩ E ′3 k j ̸ = ∅}. This ensures that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D ′3 i , • {s k i -w k i + 1, ..., s k i } ∩ {s k -w k + 1, ..., s k } = ∅ if E ′3 k i ∩ (E ′3 k ∪ {e ′ }) ̸ = ∅ ( we take into account the possibility of adding edge e in the set of edges E ′3 k to route demand k). We let S ′3 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S ′3 is clearly feasible for the problem, and its incidence vector (x S ′3 , z S ′3 ) belongs to F k e . Let S 5 = (E 5 , S 5 ) be the solution obtained from S ′3 by adding edge e ′ ∈ E\(E k 0 ∪ E k 1 ) for the routing of demand k in solution S ′3 that E 5 k = E ′3 k ∪ {e ′ }, and removing slot s selected for demand k in S ′3 and replaced it by a new slot s ′ ∈ {w k , ..., S} (i.e., 5 is clearly feasible for the problem. S 5 k = (S ′3 k \ {s}) ∪ {s ′ } such that {s ′ -w k + 1, ..., s ′ } ∩ {s" -w k ′ + 1, ..., s"} = ∅ for each k ′ ∈ K and s" ∈ S ′3 k ′ with E 5 k ∩ E ′3 k ′ ̸ = ∅). S The corresponding incidence vector (x S 5 , z S 5 ) belongs to F k e . Hence, solutions S ′3 and S 5 satisfy equation µx + σz = τ . As a consequence, we have µx S ′3 + σz S ′3 = µx S 5 + σz S 5 = µx S ′3 + µ k e ′ + σz S ′3 -σ k s + σ k s ′ . Since σ k s = 0, it follows that µ k e ′ = 0. As e ′ is chosen arbitrarily, we have that µ k e ′ = 0, for all e ′ ∈ E \ (E k 0 ∪ E k 1 ∪ {e}). Using similar argument as above, we can show that µ k ′ e ′ = 0, for all k ′ ∈ K \ {k} and e ′ ∈ E \ (E k ′ 0 ∪ E k ′ 1 ) . By (2.17) and (2.18), we know that          µ k ′ e ′ = γ k ′ ,e ′ 1 for all k ′ ∈ K and e ′ ∈ E k ′ 0 , µ k ′ e ′ = γ k ′ ,e ′ 2 for all k ′ ∈ K and e ′ ∈ E k ′ 1 , σ k ′ s ′ = γ k ′ ,s ′ 3 for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}. Overall, we obtain that µ k ′ e ′ =                  γ k ′ ,e ′ 1 if e ′ ∈ E k ′ 0 , γ k ′ ,e ′ 2 if e ′ ∈ E k ′ 1 , ρ if k ′ = k and e ′ = e, 0 otherwise, for each k ′ ∈ K and e ′ ∈ E, and σ k ′ s ′ =    γ k ′ ,s ′ 3 if s ′ ∈ {1, ..., w k ′ -1}. 0 otherwise, for each k ′ ∈ K and s ′ ∈ S. Clearly, we have (µ, σ) = ρ(α, β) + γQ. Theorem 2.3.3. Consider a demand k ∈ K, and a slot s ∈ {w k , .., s}. Then, inequality z k s ≥ 0 is facet defining for P(G, K, S). Proof. Let F k s denote the face induced by inequality z k s ≥ 0, that is F k s = {(x, z) ∈ P(G, K, S) : z k s = 0}. Denote inequality z k s ≥ 0 by αx + βz ≤ λ. Let µx + σz ≤ τ be a facet defining inequality for P(G, K, S) and F = {(x, z) ∈ P(G, K, S) : µx + σz = τ }. Suppose that F k s ⊆ F . 
To prove that F k s is a facet of P(G, K, S), it suffices to show that there exist ρ ∈ R and γ = (γ 1 , γ 2 , γ 3 ) ( with γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}) such that (µ, σ) = ρ(α, β) + γQ. First, we will show that µ k ′ e ′ = 0 for all demand k ′ ∈ K and edge e ′ ∈ E \ (E k ′ 0 ∪ E k ′ 1 ). Consider an edge e ∈ E \ (E k 0 ∪ E k 1 ). Let S 6 = (E I 6 i = [ kj ∈D 6 i {w ki , ..., s kj -w kj }∪{s kj +w ki , ..., s}]∩[{w ki , ..., s k -w k }∪{s k +w ki , ..., s}] if E 6 ki ∩ (E 6 k ∪ {e}) ̸ = ∅ or I 6 i = kj ∈D 6 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where D 6 i = {k j ∈ {k 1 , ..., k i-1 } : E 6 k i ∩ E 6 k j ̸ = ∅}. This verifies that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 6 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s k -w k + 1, ..., s k } = ∅ if E 6 k i ∩ (E 6 k ∪ {e}) ̸ = ∅ ( we take into account the possibility of adding edge e in the set of edges E 6 k to route demand k). We let S 6 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S 6 is clearly feasible for the problem, and its incidence vector (x S 6 , z S 6 ) belongs to F k s . Based on this, we consider a solution S 7 = (E 7 , S 7 ) obtained from S 6 by adding edge e ∈ E \(E k 0 ∪E k 1 ) for the routing of demand k in solution S 6 that E 7 k = E 6 k ∪{e}. S 7 is clearly feasible for the problem. The corresponding incidence vector (x S 7 , z S 7 ) belongs to F k s . Hence, solutions S 6 and S 7 satisfy equation µx + σz = τ . As a consequence, we have µx S 6 + σz S 6 = µx S 7 + σz S 7 = µx S 6 + µ k e + σz S 6 . As a result, µ k e = 0 for demand k and edge e. In a similar way, we can show that µ k ′ e = 0, for all k ′ ∈ K and e ∈ E \ (E k ′ 0 ∪ E k ′ 1 ). Next, we will show that, σ k ′ s ′ = 0 for all k ′ ∈ K \ {k} and s ′ ∈ {w k ′ , ..., s}, and σ k s ′ = 0 for all slots s ′ ∈ {w k , ..., s} \ {s}. Consider a slot s ′ in {w k , ..., s} \ {s}. Let S ′6 = (E ′6 , S ′6 ) be the solution given by a) for each demand k i ∈ K with i ∈ {1, ..., |K|}, we let E ′6 k i be the set of edges involved in a shortest path between o k i and d k i , b) we select the smallest slot index s k in {w k , ..., s}\{s, s ′ } as last slot for demand k, c) for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I ′6 i given by I ′6 i = [ kj ∈D ′6 i {w ki , ..., s kj -w kj }∪{s kj +w ki , ..., s}]∩[{w ki , ..., s ′ -w k }∪{s ′ +w ki , ..., s}] if E ′6 ki ∩ E ′6 k ̸ = ∅ or I ′6 i = kj ∈D ′6 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where D ′6 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E ′6 k i ∩ E ′6 k j ̸ = ∅}. This guarantees that • {s k i -w k i + 1, ..., s} ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D ′6 i , • and {s k i -w k i + 1, ..., s} ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if E ′6 k i ∩ E ′6 k ̸ = ∅ ( we take into account the possibility of adding slot s ′ in the set of last slots s′6 k assigned to demand k in solution S ′6 ). We let S ′6 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S ′6 is clearly feasible for the problem, its incidence vector (x S ′6 , z S ′6 ) belongs to F k s . Then consider the solution S 8 obtained from S ′6 by adding slot s ′ as last slot to demand k. Solution S 8 is clearly feasible for the problem. The corresponding incidence vector (x S 8 , z S 8 ) belongs to F k s . 
Hence, solutions $S'^6$ and $S^8$ satisfy equation $\mu x + \sigma z = \tau$. We thus have
$$\mu x^{S'^6} + \sigma z^{S'^6} = \mu x^{S^8} + \sigma z^{S^8} = \mu x^{S'^6} + \sigma z^{S'^6} + \sigma^k_{s'}.$$
Hence, $\sigma^k_{s'} = 0$. In a similar way, we can show that $\sigma^{k'}_{s'} = 0$ for all $k' \in K$ and $s' \in \{w_{k'}, \dots, \bar{s}\}$, with $s' \neq s$ if $k' = k$. By (2.17) and (2.18), we know that
$$\begin{cases} \mu^{k'}_{e'} = \gamma_1^{k',e'} & \text{for all } k' \in K \text{ and } e' \in E^{k'}_0, \\ \mu^{k'}_{e'} = \gamma_2^{k',e'} & \text{for all } k' \in K \text{ and } e' \in E^{k'}_1, \\ \sigma^{k'}_{s'} = \gamma_3^{k',s'} & \text{for all } k' \in K \text{ and } s' \in \{1, \dots, w_{k'}-1\}. \end{cases}$$
We conclude that for each $k' \in K$ and $e \in E$,
$$\mu^{k'}_e = \begin{cases} \gamma_1^{k',e} & \text{if } e \in E^{k'}_0, \\ \gamma_2^{k',e} & \text{if } e \in E^{k'}_1, \\ 0 & \text{otherwise}, \end{cases}$$
and for each $k' \in K$ and $s' \in \mathbb{S}$,
$$\sigma^{k'}_{s'} = \begin{cases} \gamma_3^{k',s'} & \text{if } s' \in \{1, \dots, w_{k'}-1\}, \\ 0 & \text{if } s' \in \{w_{k'}, \dots, \bar{s}\} \text{ and } k' \neq k, \\ 0 & \text{if } s' \in \{w_{k'}, \dots, \bar{s}\} \setminus \{s\} \text{ and } k' = k, \\ \rho & \text{if } s' = s \text{ and } k' = k. \end{cases}$$
Clearly, we have $(\mu, \sigma) = \rho(\alpha, \beta) + \gamma Q$.

Proposition 2.3.3. Consider a demand $k \in K$. Let $(e, e')$ be a pair of non-compatible edges for demand $k$. Then, the inequality
$$x^k_e + x^k_{e'} \leq 1, \qquad (2.19)$$
is valid for P(G, K, S).

Proof. It is trivial due to the transmission-reach constraint and the definition of non-compatible edges for demand $k$.

Based on the definition of non-compatible demands for an edge $e$, we introduce the following inequality.

Proposition 2.3.4. Consider an edge $e \in E$, and a pair of demands $k, k' \in K$ that are non-compatible for edge $e$, with $e \notin E^k_0 \cup E^k_1 \cup E^{k'}_0 \cup E^{k'}_1$. Then, the inequality
$$x^k_e + x^{k'}_e \leq 1, \qquad (2.20)$$
is valid for P(G, K, S).

Proof. It is trivial given the definition of non-compatible demands for edge $e$.

Theorem 2.3.4. Consider a demand $k \in K$, and an edge $e \in E \setminus (E^k_0 \cup E^k_1)$. Then, inequality $x^k_e \leq 1$ is facet defining for P(G, K, S) if and only if there does not exist a demand $k' \in K \setminus \{k\}$ with $(k, k') \in K^e_c$.

Proof. Necessity. If there exists a demand $k' \in K \setminus \{k\}$ with $(k, k') \in K^e_c$, then inequality $x^k_e \leq 1$ is dominated by inequality (2.20). As a result, inequality $x^k_e \leq 1$ is not facet defining for P(G, K, S).

Sufficiency. Let $F'^k_e$ denote the face induced by inequality $x^k_e \leq 1$, that is $F'^k_e = \{(x, z) \in P(G, K, \mathbb{S}) : x^k_e = 1\}$. Denote inequality $x^k_e \leq 1$ by $\alpha x + \beta z \leq \lambda$. Let $\mu x + \sigma z \leq \tau$ be a facet defining inequality for P(G, K, S) and $F = \{(x, z) \in P(G, K, \mathbb{S}) : \mu x + \sigma z = \tau\}$. Suppose that $F'^k_e \subseteq F$. To prove that $F'^k_e$ is a facet of P(G, K, S), we need to show that there exist $\rho \in \mathbb{R}$ and $\gamma = (\gamma_1, \gamma_2, \gamma_3)$ (with $\gamma_1^{k',e'} \in \mathbb{R}$ for all $k' \in K$ and $e' \in E^{k'}_0$, $\gamma_2^{k',e'} \in \mathbb{R}$ for all $k' \in K$ and $e' \in E^{k'}_1$, and $\gamma_3^{k',s'} \in \mathbb{R}$ for all $k' \in K$ and $s' \in \{1, \dots, w_{k'}-1\}$) such that $(\mu, \sigma) = \rho(\alpha, \beta) + \gamma Q$.

First, we will show that $\sigma^{k'}_s = 0$ for all $k' \in K$ and $s \in \{w_{k'}, \dots, \bar{s}\}$. Consider a slot $s$ in $\{w_k, \dots, \bar{s}\}$. Let $S^9 = (E^9, S^9)$ be the solution given by

a) for each demand $k_i \in K \setminus \{k\}$ with $i \in \{1, \dots, |K|\}$, we let $E^9_{k_i}$ be the set of edges involved in a shortest path between $o_{k_i}$ and $d_{k_i}$,

b) for demand $k$, we let $E^9_k$ be the set of edges involved in a shortest path between $o_k$ and $d_k$ which uses edge $e$,

c) for each demand $k_i \in K$ with $i \in \{1, \dots, |K|\}$, we select the smallest slot index $s_{k_i}$ in the set of slots $I^9_i$ given by $I^9_i = \big[\bigcap_{k_j \in D^9_i} \{w_{k_i}, \dots, s_{k_j} - w_{k_j}\} \cup \{s_{k_j} + w_{k_i}, \dots, \bar{s}\}\big] \cap \big[\{w_{k_i}, \dots, s - w_k\} \cup \{s + w_{k_i}, \dots, \bar{s}\}\big]$ if $E^9_{k_i} \cap E^9_k \neq \emptyset$, or $I^9_i = \bigcap_{k_j \in D^9_i} \{w_{k_i}, \dots, s_{k_j} - w_{k_j}\} \cup \{s_{k_j} + w_{k_i}, \dots, \bar{s}\}$ if not, where $D^9_i = \{k_j \in \{k_1, \dots, k_{i-1}\} \cup \{k\} : E^9_{k_i} \cap E^9_{k_j} \neq \emptyset\}$.

This ensures that

• $\{s_{k_i} - w_{k_i} + 1, \dots, s_{k_i}\} \cap \{s_{k_j} - w_{k_j} + 1, \dots, s_{k_j}\} = \emptyset$ for each $k_j \in D^9_i$,

• and $\{s_{k_i} - w_{k_i} + 1, \dots, s_{k_i}\} \cap \{s - w_k + 1, \dots, s\} = \emptyset$ (we take into account the possibility of adding slot $s$ to the set of last slots $S^9_k$ assigned to demand $k$ in solution $S^9$).

We let $S^9_{k_i} = \{s_{k_i}\}$ be the set of last slots assigned to each demand $k_i$ with $i \in \{1, \dots, |K|\}$. $S^9$ is clearly feasible for the problem, and its incidence vector $(x^{S^9}, z^{S^9})$ belongs to $F'^k_e$. Then consider the solution $S^{10} = (E^{10}, S^{10})$ obtained from $S^9$ by adding slot $s$ as last slot to demand $k$. Solution $S^{10}$ is feasible for the problem, and the corresponding incidence vector $(x^{S^{10}}, z^{S^{10}})$ belongs to $F'^k_e$. Hence, solutions $S^9$ and $S^{10}$ satisfy equation $\mu x + \sigma z = \tau$. We then obtain that
$$\mu x^{S^9} + \sigma z^{S^9} = \mu x^{S^{10}} + \sigma z^{S^{10}} = \mu x^{S^9} + \sigma z^{S^9} + \sigma^k_s.$$
Hence, $\sigma^k_s = 0$.
In a similar way, we can show that σ k ′ s = 0, for all k ′ ∈ K and s ∈ {w k ′ , ..., s} Next, we will show that µ k ′ e ′ = 0 for all demand k ′ ∈ K \ {k} and e ′ ∈ E \ (E k ′ 0 ∪ E k ′ 1 ), and µ k e ′ = 0 for demand k and e ′ ∈ E \ (E k 0 ∪ E k 1 ∪ {e}). Consider an edge e ′ ∈ E \ (E k 0 ∪ E k 1 ∪ {e}) chosen arbitrarily. Let S ′9 = (E ′9 , S ′9 ) be the solution given by a) for each demand k i ∈ K \ {k} with i ∈ {1, .. where Hence, µ k e ′ = 0. In a similar way, we can show that D ′9 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E ′9 k i ∩ E ′9 k j ̸ = ∅}. As a result, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D ′9 i , • and {s k i -w k i +1, ..., s k i }∩{s k -w k +1, ..., s k } = ∅ if E ′9 k i ∩(E ′9 k ∪{e ′ }) ̸ = ∅ ( µ k e ′ = 0, for all e ′ ∈ E \ (E k 0 ∪ E k 1 ∪ {e}). Moreover, we re-do the same procedure for all k ′ ∈ K \ {k} and e ′ ∈ E \ (E k 0 ∪ E k 1 ). We conclude at the end that µ k ′ e ′ = 0, for all k ′ ∈ K \ {k} and e ′ ∈ E \ (E k ′ 0 ∪ E k ′ 1 ), µ k e ′ = 0, for all e ′ ∈ E \ (E k 0 ∪ E k 1 ∪ {e}). 50 We know from (2.17) and (2.18) that          µ k ′ e ′ = γ k ′ ,e ′ 1 for all k ′ ∈ K and e ′ ∈ E k ′ 0 , µ k ′ e ′ = γ k ′ ,e ′ 2 for all k ′ ∈ K and e ′ ∈ E k ′ 1 , σ k ′ s ′ = γ k ′ ,s ′ 3 for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}. We conclude that for each k ′ ∈ K and e ′ ∈ E µ k ′ e ′ =                  γ k ′ ,e ′ 1 if e ′ ∈ E k ′ 0 , γ k ′ ,e ′ 2 if e ′ ∈ E k ′ 1 , ρ if k ′ = k and e ′ = e, 0 otherwise, and for each k ′ ∈ K and s ∈ S σ k ′ s =    γ k ′ ,s 3 if s ∈ {1, ..., w k ′ -1}, 0 otherwise. Consequently, (µ, σ) = ρ(α, β) + γQ which ends the proof. Theorem 2.3.5. Consider a demand k ∈ K, and a slot s ∈ {w k , .., s}. Then, inequality z k s ≤ 1 is facet defining for P(G, K, S) if and only if there does not exist a demand k ′ ∈ K \ {k} with E k 1 ∩ E k ′ 1 ̸ = ∅. Proof. Neccessity. For a demand k ∈ K and a slot s ∈ {w k , .., s}, if there exists a demand k ′ ∈ K \ {k} with E k 1 ∩ E k ′ 1 ̸ = ∅. Then, the inequality z k s ≤ 1 is domined by the non-overlapping inequality (2.6) for each edge e ∈ E k 1 ∩ E k ′ 1 . As a result, the inequality z k s ≤ 1 is not facet defining for P(G, K, S). Sufficiency. Let F ′k s be the face induced by inequality z k s ≤ 1, that is F ′k s = {(x, z) ∈ P(G, K, S) : z k s = 1}. We denote inequality z k s ≤ 1 by αx + βz ≤ λ. Let µx + σz ≤ τ be a facet defining inequality for P(G, K, S) and F = {(x, z) ∈ P(G, K, S) : µx + σz = τ }. Suppose that F ′k s ⊆ F . To prove that F ′k s is a facet of P(G, K, S), we need to show there exist ρ ∈ R and γ = (γ 1 , γ 2 , γ 3 ) ( with γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}) such that (µ, σ) = ρ(α, β) + γQ. First, we will show that µ k ′ e = 0 for all demand k ′ ∈ K and edge e where ∈ E \ (E k ′ 0 ∪ E k ′ 1 ). Consider an edge e ∈ E \ (E k 0 ∪ E k 1 ). Let S 12 = (E D 12 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E 12 k i ∩ E 12 k j ̸ = ∅}. As a result, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 12 i , • and {s k i -w k i +1, ..., s k i }∩{s k -w k +1, ..., s k } = ∅ if E 12 k i ∩(E 12 k ∪{e}) ̸ = ∅ ( we take into account the possibility of adding edge e in the selected path E 12 k to route demand k in solution S 12 ). We let S 12 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. 
S 12 is clearly feasible for the problem, its incidence vector (x S 12 , z S 12 ) belongs to F ′k s . Then consider the solution S 13 = (E 13 , S 13 ) obtained from S 12 by adding edge e ∈ E \ (E k 0 ∪ E k 1 ) for the routing of demand k in solution S 12 which means that E 13 k = E 12 k ∪ {e}. The last slots assigned to the demands K, and paths assigned the set of demands K \ {k} in S 12 remain the same in solution S 13 , i.e., S 13 k = S 12 k for each k ∈ K, and 13 is clearly feasible for the problem. The corresponding incidence vector (x S 13 , z S 13 ) belongs to F ′k s . Hence, solutions S 12 and S 13 satisfy equation µx + σz = τ . It follows that µx S 12 + σz S 12 = µx S 13 + σz S 13 = µx S 12 + µ k e + σz S 12 . E 13 k ′ = E 12 k ′ for each k ′ ∈ K \ {k}. S As a result, µ k e = 0. In a similar way, we can show that µ k ′ e = 0, for all k ′ ∈ K and e ∈ E \ (E k ′ 0 ∪ E k ′ 1 ). Next, we will show that, σ k ′ s ′ = 0 for all k ′ ∈ K \ {k} and s ′ ∈ {w k ′ , ..., s}, and σ k s ′ = 0 for all slots s ′ ∈ {w k , ..., s} \ {s}. Consider a slot s ′ in {w k , ..., s} \ {s}. Let S ′12 = (E ′12 , S ′12 ) be the solution given by a) for each demand where k i ∈ K with i ∈ {1, ..., D ′12 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E ′12 k i ∩ E ′12 k j ̸ = ∅}. This guarantees that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D ′12 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if E ′12 k i ∩ E ′12 k ̸ = ∅ ( we take into account the possibility of adding slot s ′ in the selected last slots S ′12 k to route demand k in solution S ′12 ). We let S ′12 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S ′12 is clearly feasible for the problem, its incidence vector (x S ′12 , z S ′12 ) belongs to F ′k s . Then we derive solution S 14 from S ′12 by adding slot s ′ as last slot to demand k in S ′12 . Solution S 14 is clearly feasible for the problem. The corresponding incidence vector (x S 14 , z S 14 ) belongs to F ′k s . Hence, solutions S ′12 and S 14 satisfy equation µx + σz = τ . We have so µx S ′12 + σz S ′12 = µx S 14 + σz S 14 = µx S ′12 + σz S ′12 + σ k s ′ . Hence, σ k s ′ = 0. In a similar way, we can show that σ k ′ s ′ = 0, for all k ′ ∈ K and s ′ ∈ {w k ′ , ..., s} with s ̸ = s ′ if k = k ′ .          µ k ′ e ′ = γ k ′ ,e ′ 1 for all k ′ ∈ K and e ′ ∈ E k ′ 0 , µ k ′ e ′ = γ k ′ ,e ′ 2 for all k ′ ∈ K and e ′ ∈ E k ′ 1 , σ k ′ s ′ = γ k ′ ,s ′ 3 for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}. Overall, we obtain that µ k ′ e =          γ k ′ ,e 1 if e ∈ E k ′ 0 , γ k ′ ,e 2 if e ′ ∈ E k ′ 1 , 0 otherwise, for each k ′ ∈ K and e ∈ E, and σ k ′ s ′ =                  γ k ′ ,s ′ 3 if s ′ ∈ {1, ..., w k ′ -1}, 0 if s ′ ∈ {w k ′ , ..., s} and k ′ ̸ = k, 0 if s ′ ∈ {w k ′ , ..., s} \ {s} and k ′ = k, ρ if s ′ = s and k ′ = k, for each k ′ ∈ K and s ′ ∈ S. As a consequence, (µ, σ) = ρ(α, β) + γQ. Theorem 2.3.6. Consider a demand k ∈ K. Then, inequality (2.5), s s=w k z k s ≥ 1, is facet defining for P(G, K, S). Proof. Let F k S be the face induced by inequality s s=w k z k s ≥ 1, that is F k S = {(x, z) ∈ P(G, K, S) : s s=w k z k s = 1}. Denote inequality s s=w k z k s ≥ 1 by αx + βz ≤ λ. Let µx + σz ≤ τ be a valid inequality that defines a facet F of P(G, K, S). Suppose that F k S ⊆ F . 
To prove that F k S is a facet of P(G, K, S), we show that there exist ρ ∈ R and γ = (γ 1 , γ 2 , γ 3 ) ( with γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1 ) such that (µ, σ) = ρ(α, β) + γQ. First, we will show that µ k ′ e = 0 for all demand k ′ ∈ K and edge e where As a result, µ k e = 0. In a similar way, we can show that ∈ E \ (E k ′ 0 ∪ E k ′ 1 ). Consider an edge e ∈ E \ (E k 0 ∪ E k 1 ). Let S 15 = (E D 15 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E 15 k i ∩ E 15 k j ̸ = ∅}. This ensures that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 15 i , • {s k i -w k i + 1, ..., s k i } ∩ {s -w k + 1, ..., s} = ∅ if E 15 k i ∩ (E µ k ′ e = 0, for all k ′ ∈ K and e ∈ E \ (E k ′ 0 ∪ E k ′ 1 ). Next, we will show that, σ k ′ s ′ = 0 for all k ′ ∈ K \ {k} and s ′ ∈ {w k ′ , ..., s}. Consider a demand k ′ in K \ {k} and a slot s ′ in {w k ′ , ..., s} \ {s}. Let S ′15 = (E ′15 , S ′15 ) be the solution given by a) for each demand k i ∈ K with i ∈ {1, ..., |K|}, we let E ′15 k i be the set of edges involved in a shortest path between o k i and d k i , b) we select slot s k = w k as last slot for demand k, and let S ′15 k = {s k }, c) we select the smallest slot index s k ′ from the set of slots I ′15 k ′ given by where I ′15 k ′ = {w ki , ..., s k -w k }∩{s k +w ki , ..., s}\{s ′ } if E ′15 k ′ ∩E ′15 k ̸ = ∅ or I ′15 k ′ = {w k ′ , ..., s}\{s ′ } if not. d) for each demand k i ∈ K \ {k, k ′ } with i ∈ {1, ..., D ′15 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k, k ′ } : E ′15 k i ∩ E ′15 k j ̸ = ∅}. This guarantees that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D ′15 i , • {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k ′ + 1, ..., s} = ∅ if E ′15 k i ∩ E ′15 k ′ ̸ = ∅ ( we take into account the possibility of adding slot s ′ in the set of last slots S ′15 k ′ to route demand k ′ in solution S ′15 ). We let S ′15 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S ′15 is feasible for the problem. The corresponding incidence vector (x S ′15 , z S ′15 ) belongs to F k S . Then we derive a solution S 17 from S ′15 by adding slot s ′ as last slot to demand k ′ . Solution S 17 is clearly feasible for the problem. The corresponding incidence vector (x S 17 , z S 17 ) belongs to F k S . Hence, solutions S ′15 and S 17 satisfy equation µx + σz = τ . We have so µx S ′15 + σz S ′15 = µx S 17 + σz S 17 = µx S ′15 + σz S ′15 + σ k ′ s ′ . Hence, σ k ′ s ′ = 0. In a similar way, we can show that σ k ′ s ′ = 0, i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : Ẽ15 k i ∩ Ẽ15 k j ̸ = ∅}. As a result, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D15 i , • {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if Ẽ15 k i ∩ Ẽ15 k ̸ = ∅ ( we take into account the possibility of adding slot s ′ in the set of last slots S15 k to route demand k in solution S15 ). We let S15 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S15 is clearly feasible for the problem, and its incidence vector (x S15 , z S15 ) belongs to F k S . Then consider the solution S 18 obtained from S15 by adding slot s ′ as last slot to demand k ′ in S 18 k and removing the last slot s assigned to k in S15 k (i.e., S 18 k = ( S15 k \ {s}) ∪ {s ′ } for demand k). Solution S 18 is feasible for the problem. 
The corresponding incidence vector (x S 18 , z S 18 ) belongs to F k S . Hence, solutions S15 and S 18 satisfy equation µx + σz = τ . We have so µx S15 + σz S15 = µx S 18 + σz S 18 = µx S15 + σz S15 + σ k s ′ -σ k s . As a result, σ k s ′ = σ k s . In a similar way, we can show that σ k s ′ = σ k s , for all slots s, s ′ ∈ {w k , ..., s}. Consequently, we obtain that σ k s = ρ for demand k and slot s in {w k , ..., s}. We know from (2.17) and (2.18) that          µ k ′ e ′ = γ k ′ ,e ′ 1 for all k ′ ∈ K and e ′ ∈ E k ′ 0 , µ k ′ e ′ = γ k ′ ,e ′ 2 for all k ′ ∈ K and e ′ ∈ E k ′ 1 , σ k ′ s ′ = γ k ′ ,s ′ We conclude that for each k ′ ∈ K and e ∈ E µ k ′ e =          γ k ′ ,e 1 if e ∈ E k ′ 0 , γ k ′ ,e 2 if e ∈ E k ′ 1 , 0 otherwise, and for each k ′ ∈ K and s ∈ S x k e ≥ 1, that is σ k ′ s =          γ k ′ ,s 3 if s ∈ {1, ..., w k ′ -1}, ρ if k ′ = k F k X = {(x, z) ∈ P(G, K, S) : e∈(δ(X)\E k 0 ) x k e = 1}. Let X = {o k }. Denote inequality e∈(δ(X)\E k 0 ) x k e ≥ 1 by αx + βz ≤ λ. Let µx + σz ≤ τ be a facet defining inequality for P(G, K, S) and F = {(x, z) ∈ P(G, K, S) : µx + σz = τ }. Suppose that F k X ⊆ F . To prove that F k X is a facet of P(G, K, S), we need to show that there exist ρ ∈ R and γ = (γ 1 , γ 2 , γ 3 ) ( with γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}) such that (µ, σ) = ρ(α, β) + γQ. First, we will show that σ k ′ s = 0 for all k ′ ∈ K and s ∈ {w k ′ , ..., s}. Consider a slot s in {w k , ..., s}. Let S 19 = (E 19 , S 19 ) be the solution given by a) for each demand where Hence, σ k s = 0. In a similar way, we can show that k i ∈ K \ {k} with i ∈ {1, ..., D 19 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E 19 k i ∩ E 19 k j ̸ = ∅}. This ensures that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 19 i , • {s k i -w k i + 1, ..., s k i } ∩ {s -w k + 1, ..., s} = ∅ if E 19 k i ∩ E 19 k ̸ = ∅ ( σ k ′ s ′ = 0, for all k ′ ∈ K and s ∈ {w k ′ , ..., s}. Next, we will show that µ k e ′ = 0 for all demand where k ′ ∈ K \ {k} and e ′ ∈ E \ (E k ′ 0 ∪ E k ′ 1 ), and µ k e ′ = 0 for demand k and e ′ ∈ E \ (E k 0 ∪ E k 1 ∪ δ(X)). Consider an edge e ′ ∈ E \ (E k 0 ∪ E k 1 ∪ δ(X)) chosen D ′19 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E ′19 k i ∩ E ′19 k j ̸ = ∅}. As a result, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D ′19 i , • {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if E ′19 k i ∩ (E ′19 k ∪ {e}) ̸ = ∅ ( we take into account the possibility of adding edge e ′ in the selected path E ′19 k to route demand k in solution S ′19 ). We let S ′19 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S ′19 is feasible for the problem. its incidence vector (x S ′19 , z S ′19 ) belongs to F k X . Let S 21 = (E 21 , S 21 ) be the solution obtained from S ′19 by adding edge e ′ ∈ E \(E k 0 ∪E k 1 ) for the routing of demand k in solution S ′19 which means that E 21 k = E ′19 k ∪{e ′ }. S 21 is clearly feasible for the problem. The corresponding incidence vector (x S 21 , z S 21 ) belongs to F k X . Hence, solutions S ′19 and S 21 satisfy equation µx + σz = τ . It follows that µx S ′19 + σz S ′19 = µx S 21 + σz S 21 = µx S ′19 + µ k e ′ + σz S ′19 . Hence, µ k e ′ = 0. 
In a similar way, we can show that µ k ′ e ′ = 0, for all k ′ ∈ K \ {k} and e ′ ∈ E \ (E k ′ 0 ∪ E k ′ 1 ), µ k e ′ = 0, for all e ′ ∈ E \ (E k 0 ∪ E k 1 ∪ δ(X)). Next, we will prove that the µ k e for all edge e ∈ (δ(X) \ E k 0 ) are equivalent. Consider an edge e ′ ∈ (δ(X) \ E k 0 ) such that e ′ / ∈ E 19 k . Let S19 = ( Ẽ19 , S19 ) be the solution given by a) for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we let Ẽ19 k i be the set of edges involved in a shortest path between o k i and d k i , b) for demand k, we let Ẽ19 k be the set of edges involved in a shortest path between o k and d k . This guarantees that one edge e from (δ(X) where \ E k 0 ) is chosen to route demand k, i.e., |(δ(X) \ E k 0 ) ∩ Ẽ19 k | = 1, c) D19 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : Ẽ19 k i ∩ Ẽ19 k j ̸ = ∅}. This guarantees that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D19 i , • {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if Ẽ19 k i ∩ ( Ẽ19 k ∪ {e}) ̸ = ∅ ( we take into account the possibility of using edge e ′ in the selected path Ẽ19 k to route demand k in solution S19 ). We let S19 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S19 is clearly feasible for the problem, and corresponding incidence vector (x S19 , z S19 ) belongs to F k X . Then consider the solution S 22 obtained from S19 by modifying the path assigned to demand k in S19 from Ẽ19 k to a path E 22 k passed through edge e ′ with |(δ(X) \ E k 0 ) ∩ E 22 k | = 1, and modifying the last slots assigned to some demands K ⊂ K from S19 k to S 22 k for each k ∈ K while satisfying non-overlapping constraint. The paths assigned to the demands K \ {k} in S19 remain the same in S 22 (i.e., E 22 k" = Ẽ19 k" for each k" ∈ K \ {k}), and also without modifying the last slots assigned to the demands K\ K in S19 , i.e., S19 k = S 22 k for each demand k ∈ K\ K. Solution S 22 is feasible for the problem. The corresponding incidence vector (x S 22 , z S 22 ) belongs to F k X . Hence, solutions S19 and S 22 satisfy equation µx + σz = τ . We have so µx S19 + σz S19 = µx S 22 + σz S 22 = µx S19 + σz S19 + µ k e ′ -µ k e + k∈ K s ′ ∈S 22 k σ k s ′ - s∈ S19 k σ k s + e"∈E 22 k \{e ′ } µ k e" - e"∈ Ẽ19 k \{e} µ k e" . Since µ k e" = 0 for all e" ∈ E \ (E k 0 ∪ E k 1 ∪ δ(X)), and σ k ′ s = 0 for all k ′ ∈ K and s ∈ {w k ′ , ..., s}, it follows that µ k e ′ = µ k e . In a similar way, we can show that µ k e = µ k e ′ , for all pairs e, e ′ ∈ (δ(X) \ E k 0 ). Consequently, we obtain that µ k e = ρ for all e ∈ (δ(X) \ E k 0 ). By (2.17) and (2.18), we know that          µ k ′ e ′ = γ k ′ ,e ′ 1 for all k ′ ∈ K and e ′ ∈ E k ′ 0 , µ k ′ e ′ = γ k ′ ,e ′ 2 for all k ′ ∈ K and e ′ ∈ E k ′ 1 , σ k ′ s ′ = γ k ′ ,s ′ 3 for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}. We conclude that for each k ′ ∈ K and e ∈ E µ k ′ e =                  γ k ′ ,e 1 if e ∈ E k ′ 0 , γ k ′ ,e 2 if e ∈ E k ′ 1 , ρ if k = k ′ and e ∈ (δ(X) \ E k 0 ), 0 otherwise, and for each k ′ ∈ K and s ∈ S σ k ′ s =    γ k ′ ,s 3 if s ∈ {1, ..., w k ′ -1}, 0 otherwise. As a consequence, (µ, σ) = ρ(α, β) + γQ. In what follows, we present several valid inequalities for P(G, K, S), and study their facial structure. Valid Inequalities and Facets We start this section by introducing some classes of valid inequalities that can be defined using Chvàtal-Gomory procedure. 
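As a quick illustration of the Chvàtal-Gomory procedure invoked below: starting from inequalities valid for the integer points, one takes a nonnegative combination and rounds down both the resulting coefficients and the right-hand side. The sketch below performs exactly this rounding step on explicitly given inequalities; the list-based encoding is an assumption of the example, and validity of the rounded coefficients relies on the variables being nonnegative, which holds here since all variables are binary.

```python
from fractions import Fraction
from math import floor

def chvatal_gomory_cut(inequalities, multipliers):
    """Derive a Chvatal-Gomory cut from inequalities a.x <= b that are
    valid for the nonnegative integer points of interest.

    inequalities: list of (coeffs, rhs) pairs, coeffs being a list of numbers.
    multipliers: one nonnegative rational weight per inequality.
    Returns (cut_coeffs, cut_rhs) encoding floor(u.A) x <= floor(u.b).
    """
    n = len(inequalities[0][0])
    combo = [Fraction(0)] * n
    rhs = Fraction(0)
    for (coeffs, b), u in zip(inequalities, multipliers):
        u = Fraction(u)
        assert u >= 0, "multipliers must be nonnegative"
        for j, a in enumerate(coeffs):
            combo[j] += u * Fraction(a)
        rhs += u * Fraction(b)
    return [floor(c) for c in combo], floor(rhs)

# Example in the spirit of the proof of (2.22) below: adding three
# pairwise inequalities with right-hand side 3, each with weight 1/2,
# yields a right-hand side of 9/2, which rounds down to 4.
```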
Edge-Slot-Assignment Inequalities

Proposition 2.4.1. Consider an edge $e \in E$, and a slot $s \in \mathbb{S}$. Then, the inequality
$$\sum_{k \in K_e} \sum_{s'=s}^{\min(s+w_k-1,\bar{s})} z^k_{s'} \leq 1, \qquad (2.21)$$
is valid for P(G, K, S).

Proof. Inequality (2.21) ensures that the set of demands $K_e$ cannot share slot $s$ over edge $e$, which means that slot $s$ is assigned to at most one demand $k$ from $K_e$ over edge $e$. We know from inequality (2.6) that for each pair of demands $k, k' \in K_e$ with $k \neq k'$,
$$\sum_{s'=s}^{\min(s+w_k-1,\bar{s})} z^k_{s'} + \sum_{s'=s}^{\min(s+w_{k'}-1,\bar{s})} z^{k'}_{s'} \leq 1,$$
given that $x^k_e = x^{k'}_e = 1$. We then use the Chvàtal-Gomory and recurrence procedures to prove that (2.21) is valid for P(G, K, S). For any subset of demands $\tilde{K} \subseteq K_e$, a recurrence procedure yields, for all subsets $K' \subseteq \tilde{K}$ with $|K'| = |\tilde{K}| - 1$,
$$\sum_{k \in K'} \sum_{s'=s}^{\min(s+w_k-1,\bar{s})} z^k_{s'} \leq 1.$$
By adding the previous inequalities over all subsets $K' \subseteq \tilde{K}$ with $|K'| = |\tilde{K}| - 1$, we get
$$\sum_{\substack{K' \subseteq \tilde{K} \\ |K'| = |\tilde{K}|-1}} \sum_{k \in K'} \sum_{s'=s}^{\min(s+w_k-1,\bar{s})} z^k_{s'} \leq \sum_{\substack{K' \subseteq \tilde{K} \\ |K'| = |\tilde{K}|-1}} 1.$$
Note that for each $k \in \tilde{K}$, the sum $\sum_{s'=s}^{\min(s+w_k-1,\bar{s})} z^k_{s'}$ appears $\binom{|\tilde{K}|}{|\tilde{K}|-1} - 1 = |\tilde{K}| - 1$ times in the previous sum. This implies that
$$\sum_{k \in \tilde{K}} \sum_{s'=s}^{\min(s+w_k-1,\bar{s})} (|\tilde{K}| - 1)\, z^k_{s'} \leq |\tilde{K}|.$$
By dividing both sides by $|\tilde{K}| - 1$, we have
$$\sum_{k \in \tilde{K}} \sum_{s'=s}^{\min(s+w_k-1,\bar{s})} z^k_{s'} \leq \frac{|\tilde{K}|}{|\tilde{K}| - 1} \implies \sum_{k \in \tilde{K}} \sum_{s'=s}^{\min(s+w_k-1,\bar{s})} z^k_{s'} \leq \left\lfloor \frac{|\tilde{K}|}{|\tilde{K}| - 1} \right\rfloor.$$
As a result, $\sum_{k \in \tilde{K}} \sum_{s'=s}^{\min(s+w_k-1,\bar{s})} z^k_{s'} \leq 1$ given that $\big\lfloor |\tilde{K}| / (|\tilde{K}| - 1) \big\rfloor = 1$. We conclude that inequality (2.21) is valid for P(G, K, S).

Inspired by inequality (2.21), we introduce the following inequality.

Proposition 2.4.2. Consider an edge $e \in E$, and a slot $s \in \mathbb{S}$. Let $k, k', k''$ be a triple of demands in $K$ with $e \notin E^k_0 \cap E^{k'}_0 \cap E^{k''}_0$, $(k, k') \notin K^e_c$, $(k, k'') \notin K^e_c$, and $(k', k'') \notin K^e_c$. Then, the inequality
$$x^k_e + x^{k'}_e + x^{k''}_e + \sum_{s'=s}^{\min(s+w_k-1,\bar{s})} z^k_{s'} + \sum_{s'=s}^{\min(s+w_{k'}-1,\bar{s})} z^{k'}_{s'} + \sum_{s''=s}^{\min(s+w_{k''}-1,\bar{s})} z^{k''}_{s''} \leq 4, \qquad (2.22)$$
is valid for P(G, K, S).

Proof. Consider an edge $e \in E$, and let $s$ be a slot in $\mathbb{S}$. Inequality (2.22) ensures that if the three demands $k, k', k''$ pass through edge $e$, they cannot share slot $s$. Let us show that inequality (2.22) can be obtained as a Chvàtal-Gomory cut using the Chvàtal-Gomory procedure. We know from (2.24) that
$$x^k_e + x^{k'}_e + \sum_{s'=s}^{\min(s+w_k-1,\bar{s})} z^k_{s'} + \sum_{s'=s}^{\min(s+w_{k'}-1,\bar{s})} z^{k'}_{s'} \leq 3,$$
$$x^k_e + x^{k''}_e + \sum_{s'=s}^{\min(s+w_k-1,\bar{s})} z^k_{s'} + \sum_{s''=s}^{\min(s+w_{k''}-1,\bar{s})} z^{k''}_{s''} \leq 3,$$
$$x^{k'}_e + x^{k''}_e + \sum_{s'=s}^{\min(s+w_{k'}-1,\bar{s})} z^{k'}_{s'} + \sum_{s''=s}^{\min(s+w_{k''}-1,\bar{s})} z^{k''}_{s''} \leq 3.$$
By adding the three previous inequalities, we get
$$2x^k_e + 2x^{k'}_e + 2x^{k''}_e + 2\sum_{s'=s}^{\min(s+w_k-1,\bar{s})} z^k_{s'} + 2\sum_{s'=s}^{\min(s+w_{k'}-1,\bar{s})} z^{k'}_{s'} + 2\sum_{s''=s}^{\min(s+w_{k''}-1,\bar{s})} z^{k''}_{s''} \leq 9.$$
By dividing both sides of the previous inequality by 2 and rounding down the right-hand side $9/2$, we obtain
$$x^k_e + x^{k'}_e + x^{k''}_e + \sum_{s'=s}^{\min(s+w_k-1,\bar{s})} z^k_{s'} + \sum_{s'=s}^{\min(s+w_{k'}-1,\bar{s})} z^{k'}_{s'} + \sum_{s''=s}^{\min(s+w_{k''}-1,\bar{s})} z^{k''}_{s''} \leq 4.$$
We conclude that inequality (2.22) is valid for P(G, K, S).

Inequality (2.22) can then be generalized to any subset of demands $\tilde{K} \subseteq K$ under certain conditions.

Proposition 2.4.3. Consider an edge $e \in E$, and a slot $s \in \mathbb{S}$. Let $\tilde{K}$ be a subset of demands of $K$ with $e \notin E^k_0$ for each demand $k \in \tilde{K}$, $(k, k') \notin K^e_c$ for each pair of demands $(k, k')$ in $\tilde{K}$, and $\sum_{k \in \tilde{K}} w_k \leq \bar{s} - \sum_{k'' \in K_e \setminus \tilde{K}} w_{k''}$. Then, the inequality
$$\sum_{k \in \tilde{K}} x^k_e + \sum_{k' \in \tilde{K}} \sum_{s'=s}^{\min(s+w_{k'}-1,\bar{s})} z^{k'}_{s'} \leq |\tilde{K}| + 1, \qquad (2.23)$$
is valid for P(G, K, S).

Let $\binom{n}{k}$ denote the total number of possibilities to choose $k$ elements in a set of $n$ elements.

Proof.
Inequality (2.23) ensures that if the demands k ∈ K pass through edge e, they cannot share slot s. For this, we use the Chvàtal-Gomory and recurrence procedures to prove that (2.23) is valid for P(G, K, S). For any subset of demands K ⊆ K with e / ∈ E k 0 for each demand k ∈ K, by recurrence procedures we get that for all demands K ′ ⊆ K with |K ′ | = | K| -1 k∈K ′ x k e + k∈K ′ min(s+w k -1,s) s ′ =s z k s ′ ≤ |K ′ | + 1. By adding the previous inequalities for all subset of demands K ′ ⊆ K with |K ′ | = | K| -1 K ′ ⊆ K |K ′ |=| K|-1 k∈K ′ x k e + K ′ ⊆ K |K ′ |=| K|-1 k∈K ′ min(s+w k -1,s) s ′ =s z k s ′ ≤ K ′ ⊆ K |K ′ |=| K|-1 (|K ′ | + 1). Note that for each k ∈ K, the variable x k e and the sum min(s+w k -1,s) s ′ =s z k s ′ appear ( | K| | K|-1 -1) times in the previous sum. This implies that k∈ K( | K| | K| -1 -1)x k e + k∈ K min(s+w k -1,s) s ′ =s ( | K| | K| -1 -1)z k s ′ ≤ | K| | K| -1 (|K ′ |+1) Given that |K ′ | = | K| -1, this is equivalent to say that k∈ K( | K| | K| -1 -1)x k e + k∈ K min(s+w k -1,s) s ′ =s ( | K| | K| -1 -1)z k s ′ ≤ | K| | K| -1 | K| Moreover, and taking into account that ( | K| | K|-1 -1) = | K| -1, we found that k∈ K(| K| -1)x k e + k∈ K min(s+w k -1,s) s ′ =s (| K| -1)z k s ′ ≤ | K| 2 By dividing the two sides of the previous sum by | K| -1, we have k∈ K x k e + k∈ K min(s+w k -1,s) s ′ =s z k s ′ ≤ | K| 2 | K| -1 ⇒ k∈ K x k e + k∈ K min(s+w k -1,s) s ′ =s z k s ′ ≤ | K| | K| | K| -1 . After some simplifications, we obtain that k∈ K x k e + k∈ K min(s+w k -1,s) s ′ =s z k s ′ ≤ | K| + | K| | K| -1 . As a result, k∈ K x k e + k∈ K min(s+w k -1,s) s ′ =s z k s ′ ≤ | K| + 1 given that | K| | K| -1 = 1. We conclude at the end that inequality (2.23) is valid for P(G, K, S). Inequality ∈ K with e / ∈ E k 0 ∩ E k ′ 0 and (k, k ′ ) / ∈ K e c . Then, the inequality x k e + x k ′ e + min(s+w k -1,s) s ′ =s z k s ′ + min(s+w k ′ -1,s) s ′ =s z k ′ s ′ + k"∈Ke\{k,k ′ } min(s+w k" -1,s) s ′ =s z k" s ′ ≤ 3, (2.24) is valid for P(G, K, S). Proof. Consider an edge e ∈ E, and a pair of demands k, k ′ ∈ K. Let s be a slot in S. Inequality (2.24) ensures that if the two demands k, k ′ pass through edge e, they cannot share slot s with the set of demands in K e \ {k, k ′ }. This can be seen as a particular case for inequality (2.21) induced by subset of demands K = {k, k ′ } ∪ K e . Let generalize inequality (2.24) for any subset of demand K ⊆ K under certain conditions. Proposition 2.4.5. Consider an edge e ∈ E, and a slot s in S. Let K be a subset of demands of K with e / ∈ E k 0 for each demand k ∈ K, (k, k ′ ) / ∈ K e c for each pair of demands (k, k ′ ) in K, and k∈ K w k ≤ sk"∈Ke\ K w k" . Then, the inequality k∈ K x k e + k∈ K min(s+w k -1,s) s ′ =s z k s ′ + k ′ ∈Ke\ K min(s+w k ′ -1,s) s"=s z k ′ s" ≤ | K| + 1, (2.25) is valid for P(G, K, S). This can be seen as a strengthened version of inequality (2.24). Proof. Inequality (2.25) ensures that if the demands k ∈ K pass through edge e, they cannot share slot s with the set of demands in K e \ K. This can be seen be a particular case inequality (2.23) induced by K ∪ K e for slot s over edge e. Definition 2.4.1. An interval I = [s i , s j ] represents an ordered set of contiguous slots situated between the two slots s i and s j with j ≥ i + 1 and s j ≤ s (e.g., interval I = [START_REF] Accorsi | Guidelines for the Computational Testing of Machine Learning approaches to Vehicle Routing Problems[END_REF][START_REF] Balcan | Learning to Branch[END_REF] contains all slots situated between the slots s i = 1 and s j = 6). 
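In a cutting-plane context, inequalities such as (2.23) and (2.25) have to be separated: given a fractional point $(x^*, z^*)$, one looks, for each edge $e$ and slot $s$, for a subset $\tilde{K}$ maximizing the left-hand side minus $|\tilde{K}|$. Since each demand contributes its term independently, a natural greedy heuristic keeps exactly the demands whose contribution exceeds 1. The sketch below follows this idea; the input encoding (dictionaries `x_val`, `z_val` and the candidate list `K_e`) is an assumption of the example, and the side conditions of Proposition 2.4.3 (non-compatibility and the capacity requirement) are not checked here and would have to be verified separately.

```python
def separate_edge_slot_assignment(e, s, K_e, w, x_val, z_val, s_bar):
    """Greedy separation heuristic for inequalities of type (2.23).

    For fixed edge e and slot s, a demand k contributes
    x*[k,e] + sum_{s'=s}^{min(s+w_k-1, s_bar)} z*[k,s'] to the left-hand
    side and 1 to the right-hand side, so it is worth selecting exactly
    when its contribution exceeds 1. Returns (K_tilde, violation).
    """
    contrib = {}
    for k in K_e:
        zsum = sum(z_val.get((k, sp), 0.0)
                   for sp in range(s, min(s + w[k] - 1, s_bar) + 1))
        contrib[k] = x_val.get((k, e), 0.0) + zsum
    # keep the demands whose contribution outweighs their rhs cost of 1
    K_tilde = [k for k in K_e if contrib[k] > 1.0]
    lhs = sum(contrib[k] for k in K_tilde)
    violation = lhs - (len(K_tilde) + 1)
    return K_tilde, violation  # the inequality is violated if violation > 0
```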
Theorem k ′ ∈ K w k -1, ..., s j -max k∈ K w k + 1}, c) and w k + w k ′ ≥ |I| + 1 for each k, k ′ ∈ K, d) and 2w k ≥ |I| + 1 for each k ∈ K. Proof. Neccessity. • if K e \ K ̸ = ∅, Sufficiency. Let F e,s K be the face induced by inequality (2.23), that is F e,s K = {(x, z) ∈ P(G, K, S) : k∈ K x k e + min(s+w k -1,s) s ′ =s z k s ′ = | K| + 1}. Let denote by αx + βz ≤ λ inequality k∈ K x k e + min(s+w k -1,s) s ′ =s z k s ′ ≤ | K| + 1. Let µx + σz ≤ τ be a facet defining inequality for P(G, K, S) and F = {(x, z) ∈ P(G, K, S) : µx + σz = τ }. Suppose that F e,s K ⊆ F . To prove that F e,s K is a facet of P(G, K, S), we need to show that there exists ρ ∈ R and γ = (γ 1 , γ 2 , γ 3 ) (such that such that s / ∈ {s γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}) such that (µ, σ) = ρ(α, β) + γQ. Let first show that µ k e ′ = 0 for each edge e ′ ∈ E \ (E k 0 ∪ E k 1 ) for each demand k ∈ K with e ̸ = e ′ if k ∈ K. Consider a demand k ∈ K and an edge e ′ ∈ E \ (E k 0 ∪ E k 1 ) with e ̸ = e ′ if k ∈ K. Let S 38 = (E k i -w k i +1, ..., s k i } if k i ∈ K, where D 38 i = {k j ∈ {k 1 , ..., k i-1 }∪{k ′ } : E 38 k i ∩ E 38 k j ̸ = ∅}. This guarantees that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 38 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s k -w k + 1, ..., s k } = ∅ if E 38 k i ∩ (E 38 k ∪ {e ′ }) ̸ = ∅ ( we take into account the possibility of using edge e ′ in the selected path E ′38 k to route demand k in solution S ′38 ). We let S 38 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S 38 is feasible for the problem. its incidence vector (x S 38 , z S 38 ) belongs to F e,s K . Then consider the solution S 39 = (E 39 , S 39 ) obtained from S 38 by adding edge As a result, µ k e ′ = 0. In a similar way, we can show that e ′ ∈ E \ (E k 0 ∪ E k 1 ) µ k e ′ = 0, for all k ∈ K and e ′ ∈ E \ (E k 0 ∪ E k 1 ) with e ̸ = e ′ if k ∈ K. Let show that σ k s ′ = 0 for all k ∈ K and s ′ ∈ {w k , ..., s} with s ′ / ∈ {s, ..., s + w k -1} if k ∈ K. Consider a demand k in K and a slot s ′ in {w k , ..., s} with s ′ / ∈ {s, ..., s + w k -1} if k ∈ K. Let S ′38 = (E ′38 , S ′38 ) be the solution given by a) for each demand k i ∈ K \ K with i ∈ {1, ..., |K|}, we let E ′38 k i be the set of edges involved in a shortest path between o k i and d k i , b) for demand k, we let E ′38 k be the set of edges involved in a shortest path between o k and d k which uses edge e, c) for each demand k ′ ∈ K \ {k}, we let E ′38 k ′ be the set of edges involved in a shortest path between o k ′ and d k ′ which use edge e, d) for one demand k ∈ K, we select the smallest slot index s k in {w k , ..., s} as last slot such that s ∈ {s k -w k + 1, ..., s k }, e) for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I ′38 i given by I ′38 i = [ kj ∈D ′38 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k } ∪ {s ′ + w ki , ..., s}] if E ′38 ki ∩ E ′38 k ̸ = ∅ or I ′38 i = kj ∈D ′38 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not. where D ′38 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E ′38 k i ∩ E ′38 k j ̸ = ∅}. 
This guarantees that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D ′38 i , • s / ∈ {s k i -w k i + 1, ..., s k i } if k i ∈ K, • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if E ′38 k i ∩ E ′38 k ̸ = ∅ ( we take into account the possibility of adding slot s ′ in the selected set of last slots S ′38 k to route demand k in solution S ′38 ). We let S ′38 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S ′38 is clearly feasible for the problem. The corresponding incidence vector (x S ′38 , z S ′38 ) belongs to F e,s K . Then consider the solution S 40 obtained from S ′38 by adding slot s ′ as last slot to demand k. Solution S 40 is feasible for the problem. The corresponding incidence vector (x S 40 , z S 40 ) belongs to F e,s K . Hence, solutions S ′38 and S 40 satisfy equation µx + σz = τ . We have so µx S ′38 + σz S ′38 = µx S 40 + σz S 40 = µx S ′38 + σz S ′38 + σ k s ′ . It follows that σ k s ′ = 0. In a similar way, we can show that σ k s ′ = 0, for all k ∈ K and s ′ ∈ {w k , ..., s} with s ′ / ∈ {s, ..., s + w k -1} if k ∈ K. Let prove that σ k s ′ for all k ∈ K and s ′ ∈ {s, ..., s+w k -1} are equivalent. Consider a demand k ′ ∈ K and a slot s ′ ∈ {s, ..., s + w k ′ -1} with k ′ ∈ K. Let S 41 = (E 41 , S 41 ) be a solution obtained from S 38 by adding slot s ′ as last slot to demand k ′ with modifying the paths assigned to a subset of demands K ⊂ K in S 38 (i.e., E 41 k = E 38 k for each k ∈ K \ K, and E 41 k ̸ = E 38 k for each k ∈ K) , and also the last slots assigned to the demands K \ {k, k ′ } in S 38 remain the same in S 41 , i.e., S 38 k" = S 41 k" for each demand k" ∈ K \ {k, k ′ }, and S 41 k ′ = S 38 k ′ ∪ {s ′ } for demand k ′ , and modifying the last slots assigned to demand k by adding a new last slot s and removing the last slot s ′ ∈ S 38 k with s ′ ∈ {s i +w k +1, ..., s j } and s / ∈ {s i +w k +1, ..., s j } for demand k with k ∈ K such that S 41 k = (S 38 k \{s})∪{s} such that {s-w k +1, ..., s}∩{s ′ -w k ′ +1, ..., s ′ } = ∅ for each k ′ ∈ K and s ′ ∈ S 41 k ′ with E 41 k ∩ E 41 k ′ ̸ = ∅. Solution S 41 is feasible for the problem. The corresponding incidence vector (x S 41 , z S 41 ) belongs to F e,s K . Hence, solutions S 38 and S 41 satisfy equation µx + σz = τ . We have so µx S 38 + σz S 38 = µx S 41 + σz S 41 = µx S 38 + σz S 38 + σ k ′ s" -σ k s ′ + σ k s - k∈ K e ′ ∈E 38 k µ k e ′ + k∈ K e ′ ∈E 41 k µ k e ′ . Since σ k s = 0 for s / ∈ {s, ..., s + w k -1} with k ∈ K, and µ k e ′ = 0 for all k ∈ K and e ′ ∈ E \ (E k 0 ∪ E k 1 ) with e ′ ̸ = e if k ∈ K, it follows that σ k ′ s" = σ k s ′ . In a similar way, we can show that σ k s ′ = σ k ′ s" , for all pairs (k, k ′ ) ∈ K with s ′ ∈ {s, ..., s+w k -1} and s ′ ∈ {s, ..., s+w k ′ -1}. We re-do the same procedure for each two slots s, s ′ ∈ {s, ..., s + w k -1} for each demand k ∈ K with k ∈ K such that σ k s ′ = σ k s" , for all k ∈ K and s, s ′ ∈ {s, ..., s + w k -1}. We will prove that µ k e for all k ∈ K are equivalent. Let S 42 = (E 42 , S 42 ) be the solution given by a) for each demand where k i ∈ K \ K with i ∈ {1, ..., D 42 i = {k j ∈ {k 1 , ..., k i-1 } ∪ K : E 42 k i ∩ E 42 k j ̸ = ∅}. This ensures that {s k i - w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 42 i . We let S 42 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. Obviously, S 42 is feasible for the problem. 
Moreover, the corresponding incidence vector (x S 42 , z S 42 ) belongs to F e,s K . Consider now a demand k ′ in K such that e / ∈ E 42 k ′ . We derive a feasible solution S 43 = (E 43 , S 43 ) for the problem from S 42 by a) the paths assigned to the demands K \ {k, k ′ } in S 42 remain the same in S 43 (i.e., E 43 k" = E 42 k" for each k" ∈ K \ {k, k ′ }), b) without modifying the last slots assigned to the demands K in S 42 , i.e., S 42 k = S 43 k for each demand k ∈ K, c) modifying the path assigned to demand k ′ in S 42 from E 42 k ′ to a path E 43 k ′ passed through edge e (i.e., e ∈ E 43 k ′ ) with k ′ ∈ K such that {s -w k + 1, ..., s} ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ for each k ∈ K and s ′ ∈ S 42 k ′ and s ′ ∈ S 42 k with E 42 k ∩ E 43 k ′ ̸ = ∅, d) w k + 1, ..., s} ∩ {s ′ -w k" + 1, ..., s ′ } = ∅ for each k" ∈ K \ {k, k ′ } and s ′ ∈ S 42 k and s ′ ∈ S 42 k" with E 42 k" ∩ E 43 k ̸ = ∅, and {s -w k + 1, ..., s} ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ for each s ′ ∈ S 42 k and s ′ ∈ S 42 k ′ with E 43 k" ∩ E 43 k ̸ = ∅. The corresponding incidence vector (x S 43 , z S 43 ) belongs to F e,s K . Hence, solutions S 42 and S 43 satisfy equation µx + σz = τ . We then obtain that µx S 42 + σz S 42 = µx S 43 + σz S 43 = µx S 42 + σz S 42 + µ k ′ e -µ k e + e"∈E 43 k ′ \{e} µ k ′ e" - e"∈E 42 k ′ µ k ′ e" + e"∈E 43 k µ k e" - e"∈E 42 k \{e} µ k e" . Since µ k e" = 0 for all k ∈ K and e" ∈ E \ (E k 0 ∪ E k 1 ) with k ∈ K, it follows that µ k ′ e = µ k e . In a similar way, we can show that µ k e = µ k ′ e , for all pairs (k, k ′ ) ∈ K. Furthermore, let prove that all σ k s ′ and µ k e are equivalent for all k ∈ K and s ′ ∈ {s, ..., s + w k -1}. Now let us consider for each demand k ′ with k ′ ∈ K, a solution S 44 = (E 44 , S 44 ) obtained from S 42 as below a) the paths assigned to the demands K \ {k ′ } in S 42 remain the same in S 44 (i.e., E 44 k" = E 42 k" for each k" ∈ K \ {k ′ }), b) without modifying the last slots assigned to the demands K \ {k} in S 42 , i.e., S 42 k" = S 44 k" for each demand k" ∈ K \ {k}, c) modifying the set of last slots assigned to demand k ′ in S 42 from S 42 k ′ to S 44 k ′ such that S 44 k ′ ∩ {s, ..., s + w k ′ -1} = ∅. Hence, there are | K| -1 demands from K that share slot s over edge e (i.e., all the demands in K \ {k ′ }), and two demands {k, k ′ } from K that use edge e in solution S 44 . Solution S 44 is then feasible for the problem. The corresponding incidence vector (x S 44 , z S 44 ) belongs to F e,s K . Hence, solutions S 42 and S 44 satisfy equation µx + σz = τ . We then obtain that µx S 42 + σz S 42 = µx S 44 + σz S 44 = µx S 42 + σz S 42 + µ k ′ e -σ k ′ s ′ + e"∈E 44 k ′ \{e} µ k ′ e" - e"∈E 42 k ′ µ k ′ e" . Since µ k e" = 0 for all k ∈ K and e" ∈ E \ (E k 0 ∪ E k 1 ) with e ̸ = e" if k ∈ K, it follows that µ k ′ e = σ k ′ s ′ . In a similar way, we can show that µ k e = σ k s ′ , for all k ∈ K and s ′ ∈ {s, ..., s + w k -1}. Based on this, and given that all µ k e are equivalent for all k ∈ K, and that σ k s ′ are equivalent for all k ∈ K and s ′ ∈ {s, ..., s + w k ′ -1}, we obtain that µ k e = σ k ′ s ′ , for all k, k ′ ∈ K and s ′ ∈ {s, ..., s + w k ′ -1}. Consequently, µ k e = σ k ′ s ′ = ρ, for all k, k ′ ∈ K and s ′ ∈ {s, ..., s + w k ′ -1}. We know from (2.17) and (2.18) that          µ k ′ e ′ = γ k ′ ,e ′ 1 for all k ′ ∈ K and e ′ ∈ E k ′ 0 , µ k ′ e ′ = γ k ′ ,e ′ 2 for all k ′ ∈ K and e ′ ∈ E k ′ 1 , σ k ′ s ′ = γ k ′ ,s ′ 3 for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}. 
Overall, we obtain that
$$\mu^{k'}_{e'} = \begin{cases} \gamma_1^{k',e'} & \text{if } e' \in E^{k'}_0, \\ \gamma_2^{k',e'} & \text{if } e' \in E^{k'}_1, \\ \rho & \text{if } k' \in \tilde{K} \text{ and } e' = e, \\ 0 & \text{otherwise}, \end{cases}$$
for each $k' \in K$ and $e' \in E$, and
$$\sigma^k_{s'} = \begin{cases} \gamma_3^{k,s'} & \text{if } s' \in \{1, \dots, w_k - 1\}, \\ \rho & \text{if } k \in \tilde{K} \text{ and } s' \in \{s, \dots, s + w_k - 1\}, \\ 0 & \text{otherwise}, \end{cases}$$
for each $k \in K$ and $s' \in \mathbb{S}$. As a consequence, we have $(\mu, \sigma) = \rho(\alpha, \beta) + \gamma Q$.

Theorem 2.4.2. Consider an edge $e \in E$, and a slot $s \in \mathbb{S}$. Let $\tilde{K}$ be a subset of demands in $K$ with $|\tilde{K}| \geq 3$, and $\sum_{k \in \tilde{K}} w_k \leq \bar{s} - \sum_{k' \in K_e \setminus \tilde{K}} w_{k'}$. Then, inequality (2.25) is facet defining for P(G, K, S) if and only if there does not exist an interval of contiguous slots $I = [s_i, s_j]$ such that

a) $|\{s_i + w_k - 1, \dots, s_j\}| \geq w_k$ for each demand $k \in \tilde{K}$,

b) and $s \in \{s_i + \max_{k \in \tilde{K}} w_k - 1, \dots, s_j - \max_{k \in \tilde{K}} w_k + 1\}$,

c) and $w_k + w_{k'} \geq |I| + 1$ for each $k, k' \in \tilde{K}$,

d) and $w_k + w_{k'} \geq |I| + 1$ for each $k \in \tilde{K}$ and $k' \in K_e \setminus \tilde{K}$,

e) and $2w_k \geq |I| + 1$ for each $k \in \tilde{K}$,

f) and $2w_{k'} \geq |I| + 1$ for each $k' \in K_e \setminus \tilde{K}$.

Proof. Necessity. Suppose that there exists an interval of contiguous slots $I = [s_i, s_j]$ such that all the conditions a)-f) are verified. Then inequality (2.25) is dominated by another valid inequality which will be presented later. As a result, inequality (2.25) is not facet defining for P(G, K, S).

Sufficiency. Let $F'^{e,s}_{\tilde{K}}$ denote the face induced by inequality (2.25), that is
$$F'^{e,s}_{\tilde{K}} = \Big\{(x, z) \in P(G, K, \mathbb{S}) : \sum_{k \in \tilde{K}} x^k_e + \sum_{k \in \tilde{K}} \sum_{s'=s}^{\min(s+w_k-1,\bar{s})} z^k_{s'} + \sum_{k' \in K_e \setminus \tilde{K}} \sum_{s'=s}^{\min(s+w_{k'}-1,\bar{s})} z^{k'}_{s'} = |\tilde{K}| + 1\Big\}.$$
We denote inequality $\sum_{k \in \tilde{K}} x^k_e + \sum_{k \in \tilde{K}} \sum_{s'=s}^{\min(s+w_k-1,\bar{s})} z^k_{s'} + \sum_{k' \in K_e \setminus \tilde{K}} \sum_{s'=s}^{\min(s+w_{k'}-1,\bar{s})} z^{k'}_{s'} \leq |\tilde{K}| + 1$ by $\alpha x + \beta z \leq \lambda$. Let $\mu x + \sigma z \leq \tau$ be a facet defining inequality for P(G, K, S) and $F = \{(x, z) \in P(G, K, \mathbb{S}) : \mu x + \sigma z = \tau\}$. Suppose that $F'^{e,s}_{\tilde{K}} \subseteq F$. We show that there exist $\rho \in \mathbb{R}$ and $\gamma = (\gamma_1, \gamma_2, \gamma_3)$ (with $\gamma_1^{k',e'} \in \mathbb{R}$ for all $k' \in K$ and $e' \in E^{k'}_0$, $\gamma_2^{k',e'} \in \mathbb{R}$ for all $k' \in K$ and $e' \in E^{k'}_1$, and $\gamma_3^{k',s'} \in \mathbb{R}$ for all $k' \in K$ and $s' \in \{1, \dots, w_{k'}-1\}$) such that $(\mu, \sigma) = \rho(\alpha, \beta) + \gamma Q$. We apply the same proof technique already detailed to show that inequality (2.23) is facet defining for P(G, K, S): the solutions $S^{38}$-$S^{44}$ remain feasible here, and their incidence vectors belong to $F'^{e,s}_{\tilde{K}}$ since they satisfy $\sum_{k \in \tilde{K}} x^k_e + \sum_{k \in \tilde{K}} \sum_{s'=s}^{\min(s+w_k-1,\bar{s})} z^k_{s'} + \sum_{k' \in K_e \setminus \tilde{K}} \sum_{s'=s}^{\min(s+w_{k'}-1,\bar{s})} z^{k'}_{s'} = |\tilde{K}| + 1$. We conclude at the end that for each $k' \in K$ and $e' \in E$,
$$\mu^{k'}_{e'} = \begin{cases} \gamma_1^{k',e'} & \text{if } e' \in E^{k'}_0, \\ \gamma_2^{k',e'} & \text{if } e' \in E^{k'}_1, \\ \rho & \text{if } k' \in \tilde{K} \text{ and } e' = e, \\ 0 & \text{otherwise}, \end{cases}$$
and for each $k \in K$ and $s' \in \mathbb{S}$,
$$\sigma^k_{s'} = \begin{cases} \gamma_3^{k,s'} & \text{if } s' \in \{1, \dots, w_k - 1\}, \\ \rho & \text{if } k \in \tilde{K} \cup K_e \text{ and } s' \in \{s, \dots, s + w_k - 1\}, \\ 0 & \text{otherwise}. \end{cases}$$
As a result, we have $(\mu, \sigma) = \rho(\alpha, \beta) + \gamma Q$.

Edge-Interval-Capacity-Cover Inequalities

We now introduce some valid inequalities which can be seen as cover inequalities, using some notions of cover related to the problem.

Definition 2.4.2. For an interval of contiguous slots $I = [s_i, s_j]$, a subset of demands $K' \subseteq K$ is said to be a cover for the interval $I = [s_i, s_j]$ if and only if $\sum_{k \in K'} w_k > |I|$ and $w_k < |I|$ for each $k \in K'$. Moreover, it is said to be a minimal cover if $\sum_{k'' \in K' \setminus \{k\}} w_{k''} \leq |I|$ for each demand $k \in K'$.

Based on these definitions, we introduce the following inequalities.

Proposition 2.4.6. Consider an edge $e \in E$. Let $I = [s_i, s_j]$ be an interval of contiguous slots in $[1, \bar{s}]$, and let $K' \subseteq K_e$ be a minimal cover for the interval $I = [s_i, s_j]$ over edge $e$. Then, the inequality
$$\sum_{k \in K'} \sum_{s = s_i + w_k - 1}^{s_j} z^k_s \leq |K'| - 1, \qquad (2.26)$$
is valid for P(G, K, S).

Proof.
The interval I = [s i , s j ] can cover at most |K ′ | -1 demands given that K ′ is a minimal cover for interval I = [s i , s j ] over edge e. Otherwise, the nonoverlapping constraint is violated given that there exists at least one slot s ∈ I such that k∈K ′ s+w k -1 s ′ =s z k s > 1. Inequality (2.26) can be lifted using a sequential lifting procedure [START_REF] Balas | Facets of the Knapsack Polytope From Minimal Covers[END_REF] to be facet defining and generate lifted facets for a sub-polytope of P(G, K, S). Theorem 2.4.3. Let I = [s i , s j ] be an interval of contiguous slots in [1, s]. Let K ⊆ K e be a minimal cover for interval I = [s i , s j ] over edge e. Let K e \ K = {k 1 , ..., k q } be arbitrarily ordred with q = |K e \ K|. Consider the following sequence of knapsack problems defined as                      z i = max j∈ K a j + i-1 j=1 β j a j , j∈ K w j a j + i-1 j=1 w k j a j ≤ |I| -w k i , a j ∈ {0, 1}, ∀j ∈ K ∪ {1, ..., i -1}, (2.27) for all i ∈ {1, ..., q} with β j = | K| -1 -z j for all j ∈ {1, ..., i -1}. Then, the inequality k∈ K s j s=s i +w k -1 z k s + q j=1 s j s ′ =s i +w k j -1 β j z k j s ′ ≤ | K| -1, (2.28) is valid for P(G, K, S). Moreover, inequality (5.13) defines facet of P(G, K, S, K, e, E) = {(x, z) ∈ P(G, K, S) : k ′ ∈K E k ′ 1 ∩E k 1 ̸ =∅ for all k∈ K s j s ′ =s i +w k ′ -1 z k ′ s ′ = 0}. if there does not exist an interval of contiguous slots I ′ = [s ′ i , s ′ j ] in [1, s] with I ⊂ I ′ such that K defines a minimal cover for the interval I ′ . Proof. It is trivial given that inequality (5.13) can never be dominated in P(G, K, S, K, e, E) if there does not exist an interval of contiguous slots [1, s] with I ⊂ I ′ such that K defines a minimal cover for the interval I ′ . Inequality (2.26) can then be generalized over all edges e ∈ E. Moreover, it should be lifted to be facet definig for the polytope P(G, K, S) as follows. I ′ = [s ′ i , s ′ j ] in Proposition 2.4.7. Let I = [s i , s j ] be an interval of contiguous slots in [1, s]. Let K ′ ⊂ K be a minimal cover for interval I = [s i , s j ] such that E k 1 ∩ E k ′ 1 ̸ = ∅ for each pair (k, k ′ ) ∈ K ′ . Then, the inequality k∈K ′ s j s=s i +w k -1 z k s ≤ |K ′ | -1, (2.29) is valid for P(G, K, S). Proof. The interval I can cover at most |K ′ | -1 demands given that K ′ is a minimal cover for interval I. Inequality (2.29) can then be lifted using a sequential lifting procedure [START_REF] Balas | Facets of the Knapsack Polytope From Minimal Covers[END_REF] to generate several facets for the polytope P(G, K, S). Theorem 2.4.4. Let I = [s i , s j ] be an interval of contiguous slots. Let K ⊆ K be a minimal cover for interval I = [s i , s j ] such that E k 1 ∩ E k ′ 1 ̸ = ∅ for each pair (k, k ′ ) ∈ K. Let K ′ ⊆ K \ K = {k 1 , ..., k q } such that E k 1 ∩ E k ′ 1 ̸ = ∅ for each pair (k, k ′ ) ∈ K ∪ K ′ . Consider the following sequence of knapsack problems defined as                      z i = max j∈ K a j + i-1 j=1 β j a j , j∈ K w j a j + i-1 j=1 w k j a j ≤ |I| -w k i , a j ∈ {0, 1}, ∀j ∈ K ∪ {1, ..., i -1}, (2.30) for all i ∈ {1, ..., q} with β j = | K| -1 -z j for all j ∈ {1, ..., i -1}. Then, the inequality k∈ K s j s=s i +w k -1 z k s + q j=1 s j s ′ =s i +w k j -1 β j z k j s ′ ≤ | K| -1, (2.31) is valid for P(G, K, S). Moreover, inequality (2.31) defines facet of P(G, K, S) if there does not exist an interval of contiguous slots I ′ = [s ′ i , s ′ j ] in [1, s] with I ⊂ I ′ such that K defines a minimal cover for the interval I ′ . Proof. 
It is trivial given that inequality (2.31) can never be dominated in P(G, K, S) if there does not exist an interval of contiguous slots $I' = [s'_i, s'_j]$ in $[1, \bar{s}]$ with $I \subset I'$ such that $\tilde{K}$ defines a minimal cover for the interval $I'$.

Inspired by inequalities (2.26) and (2.29), we define another valid inequality induced by any subset of demands $\tilde{K}$ defining a minimal cover for an interval $I$ as follows.

Definition 2.4.3. Consider an inequality $\alpha x^T \leq \beta$ which is not valid for a polyhedron P(G, K, S). It is said to be an optimality cut for P(G, K, S) if it is valid for a semi-polytope of P(G, K, S) which covers at least one optimal solution of the problem. Let $Q(G, K, \mathbb{S}) = \{(x, z) \in P(G, K, \mathbb{S}) : \sum_{s=w_k}^{\bar{s}} z^k_s = 1, \ \forall k \in K\}$ be a semi-polytope of P(G, K, S). Note that each valid inequality of Q(G, K, S) which is not valid for P(G, K, S) defines an optimality cut for P(G, K, S).

Proposition 2.4.8. Consider an edge $e \in E$. Let $I = [s_i, s_j]$ be an interval of contiguous slots in $[1, \bar{s}]$. Let $\tilde{K}$ be a minimal cover for the interval $I$ such that

a) $\sum_{k \in \tilde{K}} w_k \leq \bar{s} - \sum_{k' \in K_e \setminus \tilde{K}} w_{k'}$,

b) $e \notin E^k_0$ for each demand $k \in \tilde{K}$,

c) $(k, k') \notin K^e_c$ for each pair of demands $(k, k')$ in $\tilde{K}$.

Then, the inequality
$$\sum_{k \in \tilde{K}} x^k_e + \sum_{k \in \tilde{K}} \sum_{s = s_i + w_k - 1}^{s_j} z^k_s \leq 2|\tilde{K}| - 1, \qquad (2.32)$$
is valid for Q(G, K, S).

Proof. The interval $I = [s_i, s_j]$ can cover at most $|\tilde{K}| - 1$ demands given that $\tilde{K}$ is a minimal cover for the interval $I = [s_i, s_j]$ over edge $e$. It follows that if the demands of $\tilde{K}$ pass together through edge $e$ (i.e., $\sum_{k \in \tilde{K}} x^k_e = |\tilde{K}|$), there are at most $|\tilde{K}| - 1$ demands that can share the interval $I$ over edge $e$. Hence inequality (2.32) is verified by any feasible solution having an incidence vector in Q(G, K, S). Otherwise, the non-overlapping constraint would be violated: if there existed a solution $S$ violating inequality (2.32), then there would exist a slot $s \in I$ over edge $e$ such that $\sum_{k \in \tilde{K}} \sum_{s'=s}^{s + w_k - 1} z^k_{s'} > 1$, given that $\sum_{k \in \tilde{K}} x^k_e \leq |\tilde{K}|$ and $\sum_{k \in \tilde{K}} \sum_{s = s_i + w_k - 1}^{s_j} z^k_s \leq |\tilde{K}|$ for any feasible solution $S$ with incidence vector in Q(G, K, S).

Inequality (2.32) can also be lifted using a sequential lifting procedure [START_REF] Balas | Facets of the Knapsack Polytope From Minimal Covers[END_REF] to be facet defining and to generate lifted facets for the polytope Q(G, K, S).

Theorem 2.4.5. Let $I = [s_i, s_j]$ be an interval of contiguous slots in $[1, \bar{s}]$. Let $\tilde{K}$ be a minimal cover for the interval $I$ such that $\tilde{K}$ does not define a minimal cover for an edge $e$, where $e \notin E^k_0$ for each demand $k \in \tilde{K}$. Let $K_e \setminus \tilde{K} = \{k_1, \dots, k_q\}$ be arbitrarily ordered with $q = |K_e \setminus \tilde{K}|$. Consider the following sequence of knapsack problems defined as
$$z_i = \max \Big\{ \sum_{j \in \tilde{K}} a_j + \sum_{j=1}^{i-1} \beta_j a_j \ : \ \sum_{j \in \tilde{K}} w_j a_j + \sum_{j=1}^{i-1} w_{k_j} a_j \leq |I| - w_{k_i}, \ a_j \in \{0, 1\} \ \forall j \in \tilde{K} \cup \{1, \dots, i-1\} \Big\}, \qquad (2.33)$$
for all $i \in \{1, \dots, q\}$, with $\beta_j = |\tilde{K}| - 1 - z_j$ for all $j \in \{1, \dots, i-1\}$. Then, the inequality
$$\sum_{k \in \tilde{K}} x^k_e + \sum_{k \in \tilde{K}} \sum_{s = s_i + w_k - 1}^{s_j} z^k_s + \sum_{j=1}^{q} \sum_{s' = s_i + w_{k_j} - 1}^{s_j} \beta_j z^{k_j}_{s'} \leq 2|\tilde{K}| - 1, \qquad (2.34)$$
is valid for Q(G, K, S). Moreover, inequality (2.34) defines a facet of Q(G, K, S) if there does not exist an interval of contiguous slots $I' = [s'_i, s'_j]$ in $[1, \bar{s}]$ with $I \subset I'$ such that $\tilde{K}$ defines a minimal cover for the interval $I'$.

Proof. It is trivial given that inequality (2.34) can never be dominated in Q(G, K, S) if there does not exist an interval of contiguous slots $I' = [s'_i, s'_j]$ in $[1, \bar{s}]$ with $I \subset I'$ such that $\tilde{K}$ defines a minimal cover for the interval $I'$.
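The lifting coefficients $\beta_j$ in the sequences (2.27), (2.30) and (2.33) are obtained by solving one 0-1 knapsack problem per lifted demand. A dynamic-programming sketch of this sequential lifting is given below; it is only an illustration of how the $z_i$ and $\beta_j$ values can be computed, the function names are illustrative, and demands wider than the residual interval are simply assigned $z_i = 0$ here, a simplification of the example rather than part of the theorems.

```python
def knapsack_max(profits, weights, capacity):
    """Solve a 0-1 knapsack by dynamic programming over capacities."""
    best = [0] * (capacity + 1)
    for p, w in zip(profits, weights):
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + p)
    return best[capacity]

def sequential_lifting(cover_widths, lifted_widths, interval_len):
    """Compute lifting coefficients beta_j as in (2.27)/(2.30)/(2.33).

    cover_widths: widths w_k of the minimal cover (profit 1 each).
    lifted_widths: widths w_{k_j} of the demands lifted in the given order.
    interval_len: |I|, the length of the interval of contiguous slots.
    """
    profits = [1] * len(cover_widths)
    weights = list(cover_widths)
    betas = []
    for w_ki in lifted_widths:
        cap = interval_len - w_ki
        z_i = knapsack_max(profits, weights, cap) if cap >= 0 else 0
        beta_i = len(cover_widths) - 1 - z_i
        betas.append(beta_i)
        # the freshly lifted demand joins the knapsack with profit beta_i
        profits.append(beta_i)
        weights.append(w_ki)
    return betas
```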
I ′ = [s ′ i , s ′ j ] in Edge-Interval-Clique Inequalities Using inequalities (2.32), and based on the set of minimal cover K with cardinality | K| = 2, we introduce the following inequality. Proposition 2.4.9. Consider an edge e ∈ E. Let I = [s i , s j ] be an interval of contiguous slots. Let {k, k ′ } be a minimal cover for the interval I over edge e such that e / ∈ E k 0 ∪ E k ′ 0 . Then, the inequality x k e + x k ′ e + s j s=s i +w k -1 z k s + s j s=s i +w k ′ -1 z k ′ s ≤ 3, (2.35 ) is valid for Q(G, K, S). Moreover, it is valid for P(G, K, S) if 2w k > |I| and 2w k ′ > |I|. Proof. Inequality (2.35) is a particular case of inequality (2.36) for a minimal cover K = {k, k ′ }. Using this, we introduce the following conflict graph. H e I if w k + w k ′ > |I| and (k, k ′ ) / ∈ K e c . This is equivalent to say that two linked nodes v k and v k ′ means that the two demands k, k ′ define a minimal cover for the interval I over edge e. For an edge e ∈ E, the conflict graph H e I is a threshold graph with threshold value equals to t = |I| such that for each node v k with e / ∈ E k 0 ∪ E k 1 , we associate a positive weight wv k = w k such that all two nodes v k and v k ′ are linked by an edge if and only if wv k + wv k ′ > t which is equivalent to the conflict graph H e I . Proposition 2.4.10. Consider an edge e ∈ E. Let I = [s i , s j ] be an interval of contiguous slots. Let C be a clique in the conflict graph H e I with |C| ≥ 3, and v k ∈C w k ≤ s -k ′ ∈Ke\C w k ′ . Then, the inequality v k ∈C x k e + s j s=s i +w k -1 z k s ≤ |C| + 1, (2.36 ) is valid for Q(G, K, S). Moreover, It is valid for P(G, K, S) if 2w k > |I| for each v k ∈ C. Proof. For each edge e ∈ E and interval of contiguous slots I ⊆ S, inequality (2.36) ensures that if the set of demands in clique C pass through edge e, they cannot share the interval I = [s i , s j ] over edge e. This means that there are at most one demand from the demands in C that can be totally covered by the interval I over edge e (i.e., all the slots assigned to the demand are in I). Inequality (2.36) can be shown as Chvàtal-Gomory cuts using Chvàtal-Gomory and recurrence procedures. For all two linked node v k and v k ′ in H e I , we know from inequality (2.35) x k e + x k ′ e + s j s=s i +w k -1 z k s + s j s ′ =s i +w k ′ -1 z k ′ s ′ ≤ 3. By adding the previous inequalities for all two linked node v k and v k ′ in the clique set C, and by recurrence procedure we obtain that for all K ′ ⊆ C with |K ′ | = |C|-1 v k ∈C ′ x k e + v k ∈C ′ s j s=s i +w k -1 z k s ≤ |K ′ | + 1. By adding the previous inequalities for all K ′ ⊆ C with |K ′ | = |C| -1, we get K ′ ⊆C |K ′ |=|C|-1 v k ∈C ′ x k e + K ′ ⊆C |K ′ |=|C|-1 v k ∈C ′ s j s=s i +w k -1 z k s ≤ K ′ ⊆C |K ′ |=|C|-1 (|K ′ | + 1). Note that for each demand k with v k ∈ C, the variable x k e and the sum s j s=s i +w k -1 z k s appear ( |C| |C|-1 -1) times in the previous sum. It follows that v k ∈C ( |C| |C| -1 -1)x k e + v k ∈C s j s=s i +w k -1 .( |C| |C| -1 -1)z k s ≤ |C| |C| -1 |C|. Given that ( |C| |C|-1 -1) = |C| -1, we obtain that v k ∈C (|C| -1)x k e + v k ∈C s j s=s i +w k -1 (|C| -1)z k s ≤ |C| 2 . By dividing the two sides of the previous sum by |C| -1, we have v k ∈C x k e + v k ∈C s j s=s i +w k -1 z k s ≤ |C| 2 |C| -1 ⇒ v k ∈C x k e + v k ∈C s j s=s i +w k -1 z k s ≤ |C| |C| |C| -1 ⇒ v k ∈C x k e + v k ∈C s j s=s i +w k -1 z k s ≤ |C| |C| -1 + 1 |C| -1 . 
By doing the following simplification v k ∈C x k e + v k ∈C s j s=s i +w k -1 z k s ≤ |C| |C| -1 |C| -1 + |C| |C| -1 ⇒ v k ∈C x k e + v k ∈C s j s=s i +w k -1 z k s ≤ |C| + |C| |C| -1 . As a result, v k ∈C x k e + v k ∈C s j s=s i +w k -1 z k s ≤ |C|+ |C| |C| -1 ⇒ v k ∈C x k e + v k ∈C s j s=s i +w k -1 z k s ≤ |C|+1 given that |C| |C| -1 = 1. We conclude at the end that inequality (2.36) is valid for Q(G, K, S). Moreover, it is valid for P(G, K, S) if 2w k > |I| for each v k ∈ C. Moreover, inequality (2.36) can be strengthened as follows. Proposition 2.4.11. Consider an edge e ∈ E. Let I = [s i , s j ] be an interval of contiguous slots. Let C be a clique in the conflict graph H e I with |C| ≥ 3, and v k ∈C w k ≤ s -k ′ ∈Ke\C w k ′ . Let C e ⊆ K e \ C be a clique in the conflict graph H e I such that w k + w k ′ ≥ |I| + 1 for each v k ∈ C and v k ′ ∈ C e . Then, the inequality v k ∈C x k e + v k ∈C s j s=s i +w k -1 z k s + v k ′ ∈Ce s j s ′ =s i +w k ′ -1 z k ′ s ′ ≤ |C| + 1, (2.37) is valid for Q(G, K, S). Moreover, it is valid for P(G, K, S) if 2w k > |I| for each v k ∈ C ∪ C e . Proof. For each edge e ∈ E and interval of contiguous slots I ⊆ S, inequality (2.37) ensures that if the set of demands in clique C pass through edge e, they cannot share the interval I = [s i , s j ] over edge e with a subset of demands in C e . On the other hand, inequality (2.37) can be seen as a particular case of inequality (2.36) induced by a clique C ′ = C ∪ C e given that x k e = 1 for all v k ∈ C e . Theorem 2.4.6. Consider an edge e ∈ E. Let I = [s i , s j ] be an interval of contiguous slots. Let C be a clique in the conflict graph H e I with |C| ≥ 3, k∈C w k ≤ s -k ′ ∈Ke\C w k ′ , and |{s i + w k -1, ..., s j }| ≥ w k for each demand k with v k ∈ C. Then, inequality (2.36) is facet defining for P(G, K, S) if and only if a) there does not exist a demand k ′ ∈ K e \ C with w k + w k ′ > |I| and w k ′ ≤ |I|, b) and there does not exist an interval I ′ of contiguous slots with I ⊂ I ′ such that C defines also a clique in the associated conflict graph H e I ′ . Proof. Neccessity. It is trivial given that • if there does not exist a demand k ′ ∈ K e \ C with w k + w k ′ > |I| and w k ′ ≤ |I|, and • if there exists an interval I ′ of contiguous slots with I ⊂ I ′ such that C defines also a clique in the associated conflict graph H e I ′ . This implies that inequality (2.36) induced by clique C for the interval I is dominated by inequality (2.36) induced by the same clique C for the interval I ′ given that {s i + w k -1, ..., s j } ⊂ I ′ for each k ∈ C. As a result, inequality (2.36) is not facet defining for P(G, K, S). |{s i + w k -1, ..., s j }| ≥ w k for each demand k with v k ∈ C. Sufficiency. Let F H e I C denote the face induced by inequality (2.36), that is F H e I C = {(x, z) ∈ P(G, K, S) : v k ∈C x k e + s j s=s i +w k -1 z k s = |C| + 1}. Let denote inequality v k ∈C x k e + s j s=s i +w k -1 z k s ≤ |C| + 1 by αx + βz ≤ λ. Let µx + σz ≤ τ be a facet defining inequality for P(G, K, S) and F = {(x, z) ∈ P(G, K, S) : µx+σz = τ }. Suppose that F H e I C ⊆ F . In order to prove that inequality v k ∈C x k e + s j s=s i +w k -1 z k s ≤ |C|+1 is facet defining for P(G, K, S), we need to show that there exists ρ ∈ R and γ = (γ 1 , γ 2 , γ 3 ) (such that γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ - 1}) such that (µ, σ) = ρ(α, β) + γQ. 
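Before detailing these steps, we briefly illustrate the computational side of the lifting used in Theorems 2.4.3, 2.4.4 and 2.4.5: the coefficients $\beta_j$ are obtained by solving the knapsack sequence (2.27), (2.30) or (2.33), each member of which is a classical 0-1 knapsack problem. The sketch below computes them by dynamic programming. It is only an illustration under assumed inputs; the names cover, outside, w and interval_len are hypothetical and are not taken from the thesis implementation.

# Sequential lifting of the minimal-cover inequality (2.26): computes the
# coefficients beta_j of (2.28) by solving the knapsack sequence (2.27)
# with a standard 0-1 knapsack dynamic program.

def knapsack_max(profits, weights, capacity):
    """Classical 0-1 knapsack DP: best total profit within capacity."""
    if capacity < 0:
        return 0  # the lifted demand alone exceeds |I|: nothing fits alongside it
    best = [0] * (capacity + 1)
    for p, wt in zip(profits, weights):
        for c in range(capacity, wt - 1, -1):
            best[c] = max(best[c], best[c - wt] + p)
    return best[capacity]

def lifting_coefficients(cover, outside, w, interval_len):
    """cover: demands of the minimal cover (profit 1 each);
    outside: ordered demands k_1, ..., k_q to be lifted;
    w: slot widths; returns the coefficients beta_j of (2.28)."""
    profits = [1] * len(cover)
    weights = [w[k] for k in cover]
    betas = []
    for k_i in outside:
        z_i = knapsack_max(profits, weights, interval_len - w[k_i])
        beta_i = len(cover) - 1 - z_i
        betas.append(beta_i)
        # demands already lifted enter the next knapsack with profit beta_j
        profits.append(beta_i)
        weights.append(w[k_i])
    return betas

# toy usage: |I| = 5, cover {a, b} with w = 3 each, one outside demand c
print(lifting_coefficients(["a", "b"], ["c"], {"a": 3, "b": 3, "c": 4}, 5))

In this toy run, the cover $\{a, b\}$ with $w_a = w_b = 3$ over an interval of length $5$ gives $z_1 = 0$ for the outside demand $c$ (its residual capacity $5 - 4 = 1$ fits no cover demand), hence $\beta_1 = |\tilde{K}| - 1 - z_1 = 1$.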
We first show that µ k e ′ = 0 for each edge 53 ) be the solution given by a) for each demand e ′ ∈ E \ (E k 0 ∪ E k 1 ) for each demand k ∈ K with e ̸ = e ′ if k ∈ C. Consider a demand k ∈ K and an edge e ′ ∈ E \ (E k 0 ∪ E k 1 ) with e ̸ = e ′ if k ∈ C. Let S 53 = (E 53 , S k i ∈ K \ C with i ∈ {1, ..., |K|}, we let E 53 k i be the set of edges involved in a shortest path between o k i and d k i , b) for each demand k ∈ C, we let E 53 k be the set of edges involved in a shortest path between o k and d k which uses edge e, c) for one demand k ′ from C, we select the slot s k ′ = s i + w k ′ -1 as last slot, d) for each demand k i ∈ C \ {k ′ } with i ∈ {1, ..., I 53 i = [ kj ∈D 53 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] \ {s i , ..., s j } if E 53 ki ∩ E 53 k ′ ̸ = ∅, where D 53 i = {k j ∈ {k 1 , ..., k i-1 } ∩ C : E 53 ki ∩ E 53 kj ̸ = ∅}. e) for each demand k i ∈ K \ C with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 53 i given by I 53 i = [ kj ∈R 53 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s k -w k } ∪ {s k + w ki , ..., s}] if E 53 ki ∩ (E 53 k ∪ {e ′ }) ̸ = ∅ or I 53 i = kj ∈R 53 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where R 53 i = {k j ∈ {k 1 , ..., k i-1 } ∪ C such that E 53 k i ∩ E 53 k j ̸ = ∅}. As a result, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ R 53 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s k -w k j + 1, ..., s k } = ∅ if E 53 k i ∩ (E 53 k ∪ {e ′ }) ̸ = ∅ ( µx S 53 + σz S 53 = µx S 54 + σz S 54 = µx S 53 + µ k e ′ + σz S 53 . As a result, µ k e ′ = 0. In a similar way, we can show that µ k e ′ = 0, for all k ∈ K and e ′ ∈ E \ (E k 0 ∪ E k 1 ) with e ̸ = e ′ if k ∈ C. Let show that σ k s = 0 for all k ∈ K and s ∈ {w k , ..., s} with s / ∈ {s i + w k -1, ..., s j } if v k ∈ C. Consider a demand k in K and a slot s ′ in {w k , ..., s} with s ′ / ∈ {s i + w k -1, ..., s j } if v k / ∈ C. Let S ′53 = (E ′53 , S ′53 ) be the solution given by a) for each demand k i ∈ K \ C with i ∈ {1, ..., |K|}, we let E ′53 k i be the set of edges involved in a shortest path between o k i and d k i , b) for each demand k ∈ C, we let E ′53 k be the set of edges involved in a shortest path between o k and d k , c) for one demand k ′ from C, we select the slot s k ′ = s i + w k ′ -1 as last slot, d) for each demand k i ∈ C \ {k ′ } with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I ′53 i given by I ′53 i = [ kj ∈D ′53 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] \ {s i , ..., s j } if E ′53 ki ∩ E ′53 k ′ ̸ = ∅, where D ′53 i = {k j ∈ {k 1 , ..., k i-1 } ∩ C : E ′53 ki ∩ E ′53 kj ̸ = ∅}. e) for each demand k i ∈ K \ C with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I ′53 i given by I ′53 i = [ kj ∈R ′53 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k } ∪ {s ′ + w ki , ..., s}] if E ′53 ki ∩ E ′53 k ̸ = ∅ or I ′53 i = kj ∈R ′53 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where R ′53 i = {k j ∈ {k 1 , ..., k i-1 } ∪ C : E ′53 k i ∩ E ′53 k j ̸ = ∅}. We let S ′53 k i = {s k i } µx S ′53 + σz S ′53 = µx S 55 + σz S 55 = µx S ′53 + σz S ′53 + σ k s ′ . Hence, σ k s ′ = 0. In a similar way, we can show that σ k s = 0, for all k ∈ K and s ∈ {w k , ..., s} with s / ∈ {s i + w k -1, ..., s j } if v k ∈ C. Let prove that σ k s for all v k ∈ C and s ∈ {s i + w k -1, ..., s j } are equivalent. 
Consider a demand k ′ ∈ K and a slot s ′ ∈ {s i + w k ′ -1, ..., s j } with v k ′ ∈ C. Let S53 = ( Ẽ53 , S53 ) be the solution given by a) for each demand k i ∈ K \ C with i ∈ {1, ... , |K|}, we let Ẽ53 k i be the set of edges involved in a shortest path between o k i and d k i , b) for each demand k ∈ C, we let Ẽ53 k be the set of edges involved in a shortest path between o k and d k which uses edge e, c) for one demand k" from C, we select the slot s k" = s i + w k" + 1 as last slot, d) for each demand k i ∈ C \ {k"} with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots Ĩ53 i given by Ĩ53 i = [ kj ∈ D53 i {w ki , ..., s kj -w kj }∪{s kj +w ki , ..., s}]∩[{w ki , ..., s ′ -w k ′ }∪{s ′ +w ki , ..., s}]\{s i , ..., s j } if Ẽ53 ki ∩ Ẽ53 k ′ ̸ = ∅ or Ĩ53 i = [ kj ∈ D53 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] \ {s i , ..., s j } if not, where D53 i = {k j ∈ {k 1 , ..., k i-1 } ∩ C : Ẽ53 k i ∩ Ẽ53 k j ̸ = ∅}, e) for each demand k i ∈ K \ C with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots Ĩ53 i given by Ĩ53 i = [ kj ∈ R53 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k ′ } ∪ {s ′ + w ki , ..., s}] if Ẽ53 ki ∩ Ẽ53 k ′ ̸ = ∅ or Ĩ53 i = kj ∈ R53 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where R53 i = {k j ∈ {k 1 , ..., k i-1 } ∪ C : Ẽ53 k i ∩ Ẽ53 k j ̸ = ∅}. As a result, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ R53 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ if Ẽ53 k i ∩ Ẽ53 k ′ ̸ = ∅ ( we take into account the possibility of adding slot s ′ as a last slot in the selected last slots S53 k ′ to route demand k ′ in solution S53 ). We let S53 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S53 is clearly feasible for the problem given that it satisfies all the constraints of cut formulation (2.2)-(2.10). Hence, the corresponding incidence vector (x S53 , z S53 ) belongs to F H e I C . Let S 56 = (E 56 , S 56 ) be a solution obtained from S53 by adding slot s ′ as last slot to demand k ′ in S53 , and modifying the last slots assigned to demand k by adding a new last slot s in S53 and removing the last slot s ∈ S53 k from S53 with s ∈ {s i + w k + 1, ..., s j } and s / ∈ {s i + w k + 1, ..., s j } for demand k ∈ C such that {s -w k + 1, ..., s} ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ for each k ′ ∈ K and s ′ ∈ S 56 k ′ with E 56 k ∩ E 56 k ′ ̸ = ∅. Solution S 56 is feasible for the problem. The corresponding incidence vector (x S 56 , z S 56 ) belongs to F H e I C . Hence, solutions S 53 and S 56 satisfy equation µx + σz = τ . We have so µx S53 + σz S53 = µx S 56 + σz S 56 = µx S53 + σz S53 + σ k ′ s ′ -σ k s + σ k s . Since σ k s = 0 for s / ∈ {s i + w k -1, ..., s j } with v k ∈ C, it follows that σ k ′ s ′ = σ k s . In a similar way, we can show that σ k s = σ k ′ s ′ , for all pairs (v k , v k ′ ) ∈ C with s ∈ {s i + w k -1, ..., s j } and s ′ ∈ {s i + w k ′ -1, ..., s j }. We re-do the same procedure for each two slots s, s ′ ∈ {s i + w k -1, ..., s j } for each demand k ∈ K with v k ∈ C such that σ k s = σ k s ′ , for all v k ∈ C and s, s ′ ∈ {s i + w k -1, ..., s j }. Let prove now that µ k e for all k ∈ K with v k ∈ C are equivalent. Consider a demand k ′ ∈ K with v k ′ in C such that e / ∈ E 57 k ′ . 
For this, we derive a solution S" 58 = (E" 58 , S" 58 ) from S 53 by we derive a solution S 58 = (E 58 , S 58 ) from S 53 by a) the paths assigned to the demands K \ {k, k ′ } in S 53 remain the same in S 58 (i.e., E 58 k" = E 53 k" for each k" ∈ K \ {k, k ′ }), b) without modifying the last slots assigned to the demands K in S 53 , i.e., S 53 k = S 58 k for each demand k ∈ K, c) modifying the path assigned to demand k ′ in S 53 from E 53 k ′ to a path E 58 k ′ passed through edge e (i.e., e ∈ E 58 k ′ ) with v k ′ ∈ C such that {s -w k + 1, ..., s} ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ for each k ∈ K and s ′ ∈ S 53 k ′ and s ∈ S 53 k with E 53 k ∩ E 58 k ′ ̸ = ∅, d) w k + 1, ..., s} ∩ {s ′ -w k" + 1, ..., s ′ } = ∅ for each k" ∈ K \ {k, k ′ } and s ∈ S 53 k and s ′ ∈ S 53 k" with E 53 k" ∩ E 58 k ̸ = ∅, and {s -w k + 1, ..., s} ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ for each s ∈ S 53 k and s ′ ∈ S 53 k ′ with E 58 k" ∩ E 58 k ̸ = ∅. Solution S" 58 is feasible for the problem. The corresponding incidence vector (x S" 58 , z S" 58 ) belongs to F H e I C . Hence, solutions S 53 and S" 58 satisfy equation µx + σz = τ . We then obtain that µx S 53 + σz S 53 = µx S 58 + σz S 58 = µx S 53 + σz S 53 + µ k ′ e -µ k e + e"∈E" 58 k ′ \{e} µ k ′ e" - e"∈E 53 k ′ µ k ′ e" + e"∈E" 58 k µ k e" - e"∈E 53 k \{e} µ k e" . Since µ k e" = 0 for all k ∈ K and e" ∈ E \ (E k 0 ∪ E k 1 ) with v k / ∈ C, it follows that µ k ′ e = µ k e . In a similar way, we can show that µ k e = µ k ′ e , for all pairs (v k , v k ′ ) ∈ C. Furthermore, let prove that all σ k s and µ k e are equivalent for all k ∈ C and s ∈ {s i + w k -1, ..., s j }. Now let us consider a demand k ′ ∈ K with v k ′ ∈ C, a solution S 59 = (E 59 , S 59 ) obtained from S 53 as below a) the paths assigned to the demands K \ {k ′ } in S 53 remain the same in S 59 (i.e., E 59 k" = E 53 k" for each k" ∈ K \ {k ′ }), b) without modifying the last slots assigned to the demands K \ {k} in S 53 , i.e., S 53 k" = S 59 k" for each demand k" ∈ K \ {k}, c) modifying the set of last slots assigned to demand k ′ in S 53 from S 53 k ′ to S 59 k ′ such that S 59 k ′ ∩ {s i + w k ′ -1, ..., s j } = ∅. Hence, there are |C| -1 demands from C that are covered by the interval I (i.e., all the demands in C \ {k ′ }), and two demands {k, k ′ } from C that use edge e in solution S 59 . Solution S 59 is then feasible for the problem. The corresponding incidence vector (x S 59 , z S 59 ) belongs to F H e I C . Hence, solutions S57 and S 59 satisfy equation µx + σz = τ . We then obtain that µx S57 + σz S57 = µx S 59 + σz S 59 = µx S57 + σz S57 + µ k ′ e -σ k ′ s + e"∈E 59 k ′ \{e} µ k ′ e" - e"∈ Ẽ57 k ′ µ k ′ e" . Since µ k e" = 0 for all k ∈ K and e" ∈ E \ (E k 0 ∪ E k 1 ) with e ̸ = e" if v k ∈ C, it follows that µ k ′ e = σ k ′ s . In a similar way, we can show that µ k e = σ k s , for all v k ∈ C and s ∈ {s i + w k -1, ..., s j }. Based on this, and given that all µ k e are equivalent for all v k ∈ C, and that σ k s are equivalent for all v k ∈ C and s ∈ {s i + w k ′ -1, ..., s j }, we obtain that µ k e = σ k ′ s , for all k, k ′ ∈ C and s ∈ {s i + w k ′ -1, ..., s j }. Consequently, µ k e = σ k ′ s = ρ, for all k, k ′ ∈ C and s ∈ {s i + w k ′ -1, ..., s j }. By (2.17) and (2.18), we know that          µ k ′ e ′ = γ k ′ ,e ′ 1 for all k ′ ∈ K and e ′ ∈ E k ′ 0 , µ k ′ e ′ = γ k ′ ,e ′ 2 for all k ′ ∈ K and e ′ ∈ E k ′ 1 , σ k ′ s ′ = γ k ′ ,s ′ 3 for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}. 
We conclude that for each k ′ ∈ K and e ′ ∈ E µ k ′ e ′ =                  γ k ′ ,e ′ 1 if e ′ ∈ E k ′ 0 , γ k ′ ,e ′ 2 if e ′ ∈ E k ′ 1 , ρ if k ′ ∈ C and e ′ = e, 0 otherwise, and for each k ∈ K and s ∈ S σ k s =          γ k,s 3 if s ∈ {1, ..., w k -1} ρ if v k ∈ C and s ∈ {s i + w k -1, ..., s j }, 0 otherwise. As a consequence, (µ, σ) = ρ(α, β) + γQ. Proof. Neccessity. • if there exists a demand k" ∈ K e \ C e with w k + w k" ≥ |I| + 1 for each v k ∈ C, and w k ′ + w k" ≥ |I| + 1 for each v k ′ ∈ C e . Then, inequality (2.37) is dominated by its lifted with C ′ e = C e ∪ {k"}. Moreover, if |{s i + w k -1, ..., s j }| < w k for each demand k with v k ∈ C ∪ C e , then inequality (2.37) is then dominated by inequality (2.25) for a set of demands K = {k ∈ K such that v k ∈ C} and slot s = s i + min k∈C∪Ce w k + 1 over edge e. As a result, inequality (2.37) is not facet defining for P(G, K, S). • if there exists an interval I ′ of contiguous slots with I ⊂ I ′ such that C ∪ C e defines also a clique in the associated conflict graph H e I ′ . This implies that inequality (2.37) induced by clique C ∪ C e for the interval I is dominated by inequality (2.37) induced by the same clique C ∪ C e for the interval I ′ given that {s i + w k -1, ..., s j } ⊂ I ′ for each k ∈ C ∪ C e . As a result, inequality (2.37) is not facet defining for P(G, K, S). Sufficiency. Let F ′H e I C be the face induced by inequality (2.37), that is F ′H e I C = {(x, z) ∈ P(G, K, S) : v k ∈C x k e + s j s=s i +w k -1 z k s + v k ′ ∈Ce s j s ′ =s i +w k ′ -1 z k ′ s ′ = |C| + 1}. We denote inequality v k ∈C x k e + s j s=s i +w k -1 z k s + v k ′ ∈Ce s j s ′ =s i +w k ′ -1 z k ′ s ′ ≤ |C|+1 by αx + βz ≤ λ. Let µx + σz ≤ τ be a facet defining inequality for P(G, K, S) and F = {(x, z) ∈ P(G, K, S) : µx + σz = τ }. Suppose that F ′H e I C ⊆ F . We use the same proof of the facial structure done for inequality (2.36) in the proof of theorem 2.4.6 to prove that inequality v k ∈C x k e + s j s=s i +w k -1 z k s + v k ′ ∈Ce s j s ′ =s i +w k ′ -1 z k ′ s ′ ≤ |C|+1 is facet = (γ 1 , γ 2 , γ 3 ) (such that γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}) such that (µ, σ) = ρ(α, β) + γQ. For this, we show that a) σ k s = 0 for all demand k ∈ K and slot s ∈ {w k , ..., s} with s / ∈ {s i + w k -1, ..., s j } if v k ∈ C ∪ C e , b) i + w k ′ - 1, ..., s j }. At the end, we obtain that for each k ′ ∈ K and e ′ ∈ E µ k ′ e ′ =                  γ k ′ ,e ′ 1 if e ′ ∈ E k ′ 0 , γ k ′ ,e ′ 2 if e ′ ∈ E k ′ 1 , ρ if k ′ ∈ C and e ′ = e, 0 otherwise, and for each k ∈ K and s ∈ S σ k s =          γ k,s 3 if s ∈ {1, ..., w k -1} ρ if v k ∈ C ∪ C e and s ∈ {s i + w k -1, ..., s j }, 0 otherwise. As a result, we have (µ, σ) = ρ(α, β) + γQ. Interval-Clique Inequalities We have looked at the definition of inequality (2.36), we detected that there may exist some cases that we can face which are not covered by inequality (2.36). For this, we provide the following inequality and its generalization. Proposition 2.4.12. Consider an interval of contiguous slots I = [s i , s j ] in S with s i ≤ s j -1. Let k, k ′ be a pair of demands in K with E k 1 ∩ E k ′ 1 ̸ = ∅, and w k ≤ |I|. Then, the inequality s j s=s i +w k -1 z k s + s j s ′ =s i +w k ′ -1 z k ′ s ′ ≤ 1, (2.38) is valid for Q(G, K, S). Moreover, it is valid for P(G, K, S) if 2w k > |I| and 2w k ′ > |I|. Proof. 
It is trivial given that the interval I = [s i , s j ] cannot cover the two demands k, k ′ shared an essential edge with total sum of number of slots exceeds |I|. Furthermore, inequality (2.38) is a particular case of inequality (2.36 ) for K = {k, k ′ } over each edge e ∈ E k 1 ∩ E k ′ 1 . However, it will be used for a generalized inequality using the following conflict graph. Definition 2.4.5. Let I = [s i , s j ] be an interval of contiguous slots in [1, s] with s i ≤ s j -1. Consider the conflict graph H E I defined as follows. For each demand k ∈ K with w k ≤ |I|, consider a node v k in H E I . Two nodes v k and v k ′ are linked by an edge in H E I if w k + w k ′ > |I| and E k 1 ∩ E k ′ 1 ̸ = ∅. Proposition 2.4.13. Let I = [s i , s j ] be an interval of contiguous slots in [1, s] with s i ≤ s j -1, and C be a clique in the conflict graph H E I with |C| ≥ 3. Then, the inequality v k ∈C s j s=s i +w k -1 z k s ≤ 1, (2.39 ) is valid for Q(G, K, S). Moreover, it is valid for P(G, K, S) if 2w k > |I| for each v k ∈ C. Proof. It is trivial given the definition of clique set in the conflict graph H E I such that for all two linked node v k and v k ′ in H E I , we know from inequality (2.38) s j s=s i +w k -1 z k s + s j s ′ =s i +w k ′ -1 z k ′ s ′ ≤ 1. By adding the previous inequalities for all two linked node v k and v k ′ in the clique set C, and by recurrence procedure we obtain that for all C ′ ⊆ C with |C ′ | = |C| -1 v k ∈C ′ s j s=s i +w k -1 z k s ≤ 1. By adding the previous inequalities for all C ′ ⊆ C with |C ′ | = |C| -1, we get C ′ ⊆C |C ′ |=|C|-1 v k ∈C ′ s j s=s i +w k -1 z k s ≤ C ′ ⊆C |C ′ |=|C|-1 1. 91 Note that for each demand k with v k ∈ C, the sum s j s=s i +w k -1 z k s appears ( |C| |C|-1 - 1) = |C| -1 times in the previous sum. It follows that v k ∈C s j s=s i +w k -1 (|C| -1)z k s ≤ |C|. By dividing the two sides of the previous sum by |C| -1, we have so v k ∈C s j s=s i +w k -1 z k s ≤ |C| |C| -1 ⇒ v k ∈C s j s=s i +w k -1 z k s ≤ 1 given that |C| |C| -1 = 1. We conclude at the end that inequality (2.39) is valid for Q(G, K, S). Moreover, it is valid for P(G, K, S) if 2w k > |I| for each v k ∈ C. I ′ in [1, s] such that I ⊂ I ′ with • w k + w k ′ ≥ |I ′ | for each k, k ′ ∈ C, • w k ≤ |I ′ | for each k ∈ C. Proof. Neccessity. We distinguish two cases a) if there exists a clique C ′ that contains all the demands k ∈ C. Then, inequality (2.39) induced by clique C is dominated by another inequality (2.39) induced by clique C ′ . Hence, inequality (2.39) cannot be facet defining for P(G, K, S). b) if there exists an interval of contiguous slots I ′ in [1, s] such that I ⊂ I ′ with • w k + w k ′ ≥ |I ′ | for each k, k ′ ∈ C, • w k ≤ |I ′ | for each k ∈ C. This means that inequality (2.39) induced by clique C for the interval I is dominated by inequality (2.39) induced by clique C for the interval I ′ . Hence, inequality (2.39) cannot be facet defining for P(G, K, S). Let F H E I C be the face induced by inequality (2.39), that is F H E I C = {(x, z) ∈ P(G, K, S) : v k ∈C s j s=s i +w k -1 z k s = 1}. Denote inequality v k ∈C s j s=s i +w k -1 z k s ≤ 1 by αx + βz ≤ λ. Let µx + σz ≤ τ be a facet defining inequality for P(G, K, S) and F = {(x, z) ∈ P(G, K, S) : µx+σz = τ }. Suppose that F H E I C ⊆ F . 
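As an aside on separation, the conflict graph $H^E_I$ of Definition 2.4.5 is inexpensive to construct, and any clique in it directly yields an inequality (2.39). The following sketch builds $H^E_I$ and grows a clique greedily from the widest demands; the input names demands, w and E1 (the set $E^k_1$ of essential edges of demand $k$) are assumed for illustration and do not refer to any implementation of the thesis.

# Construction of the conflict graph H^E_I of Definition 2.4.5 and a
# greedy clique for the interval-clique inequality (2.39).
from itertools import combinations

def build_conflict_graph(demands, w, E1, interval_len):
    nodes = [k for k in demands if w[k] <= interval_len]
    adj = {k: set() for k in nodes}
    for k, kp in combinations(nodes, 2):
        # v_k and v_k' are linked iff the two demands cannot both fit in I
        # and they share at least one essential edge (E^k_1 cap E^k'_1 != empty)
        if w[k] + w[kp] > interval_len and E1[k] & E1[kp]:
            adj[k].add(kp)
            adj[kp].add(k)
    return adj

def greedy_clique(adj, w):
    """Grow a clique starting from the widest demands; a heuristic only."""
    clique = []
    for k in sorted(adj, key=lambda k: -w[k]):
        if all(k in adj[c] for c in clique):
            clique.append(k)
    return clique

demands = ["a", "b", "c", "d"]
w = {"a": 4, "b": 4, "c": 3, "d": 1}
E1 = {"a": {1, 2}, "b": {2, 3}, "c": {2}, "d": {5}}
adj = build_conflict_graph(demands, w, E1, interval_len=5)
C = greedy_clique(adj, w)  # ['a', 'b', 'c']: pairwise widths exceed |I| = 5
# clique C yields (2.39): sum over k in C of z^k_s, s in [s_i + w_k - 1, s_j], <= 1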
In order to prove that inequality v k ∈C s j s=s i +w k -1 z k s ≤ 1 is facet defining for P(G, K, S), we need to show that there exist ρ ∈ R and γ = (γ 1 , γ 2 , γ 3 ) (such that γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}) such that (µ, σ) = ρ(α, β) + γQ. We first show that µ k e = 0 for each edge e ∈ E \ (E k 0 ∪ E k 1 ) for each demand k ∈ K. Consider a demand k ∈ K and an edge e ∈ E \ (E k 0 ∪ E k 1 ) . Let S 60 = (E 60 , S 60 ) be the solution given by a) for each demand k i ∈ K with i ∈ {1, ..., |K|}, we let E 60 k i be the set of edges involved in a shortest path between o k i and where d k i , b) for one demand k ′ from C, we select the slot s k ′ = s i + w k ′ -1, c) for each demand k i ∈ C \ {k ′ } with i ∈ {1, ..., D 60 i = {k j ∈ {k 1 , ..., k i-1 } ∩ C : E 60 k i ∩ E 60 k j ̸ = ∅}, d) for each demand k i ∈ K \ C with i ∈ {1, ..., I 60 i = [ kj ∈R 60 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s k -w k } ∪ {s k + w ki , ..., s}] if E 60 ki ∩ (E 60 k ∪ {e}) ̸ = ∅ or I 60 i = kj ∈R 60 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where As a result, µ k e = 0. In a similar way, we can show that R 60 i = {k j ∈ {k 1 , ..., k i-1 } ∪ C such that E 60 k i ∩ E 60 k j ̸ = ∅}. We let S 60 k i = {s k i } µ k e = 0, for all k ∈ K and e ∈ E \ (E k 0 ∪ E k 1 ). Let show that σ k s = 0 for all k ∈ K and s ∈ {w k , ..., s} with s / ∈ {s i + w k -1, ..., s j } if v k ∈ C. Consider a demand k ∈ K and a slot s ′ in {w k , ..., s} with s ′ / ∈ {s i + w k -1, ..., s j } if v k ∈ C. Let S ′60 = (E ′60 , S ′60 ) be the solution given by a) for each demand k i ∈ K with i ∈ {1, ..., |K|}, we let E ′60 k i be the set of edges involved in a shortest path between o k i and where d k i , b) for one demand k ′ from C, we select the slot s k ′ = s i + w k -1, c) for each demand k i ∈ C \ {k ′ } with i ∈ {1, ..., D ′60 i = {k j ∈ {k 1 , ..., k i-1 } ∪ C : E ′60 k i ∩ E ′60 k j ̸ = ∅}, d) for each demand k i ∈ K \ C with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I ′60 i given by I ′60 i = [ kj ∈R ′60 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k } ∪ {s ′ + w ki , ..., s}] if E ′60 ki ∩ E ′60 k ̸ = ∅ or I ′60 i = kj ∈R ′60 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where R ′60 i = {k j ∈ {k 1 , ..., k i-1 } ∪ C such that E 60 k i ∩ E 60 k j ̸ = ∅}. As a result, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ R ′60 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if E ′60 k i ∩ E ′60 k ̸ = ∅ ( we take into account the possibility of adding slot s ′ as a last slot in the selected last slots S ′60 k to route demand k in solution S ′60 ). We let S ′60 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S ′60 is feasible for the problem. Hence, the corresponding incidence vector (x S ′60 , z S ′60 ) belongs to F H E I C . Then consider the solution S 62 obtained from S ′60 by adding slot s ′ as last slot to demand k in S ′60 . Solution S 62 is feasible for the problem. The corresponding incidence vector (x S 62 , z S 62 ) belongs to F H E I C . Hence, solutions S ′60 and S 62 satisfy equation µx + σz = τ . We have so µx S ′60 + σz S ′60 = µx S 62 + σz S 62 = µx S ′60 + σz S ′60 + σ k s ′ . Hence, σ k s ′ = 0. 
In a similar way, we can show that σ k s = 0, for all k ∈ K and s ∈ {w k , ..., s} with s / ∈ {s i + w k -1, ..., s j } if v k ∈ C. Let prove that σ k s for all v k ∈ C and s ∈ {s i + w k -1, ..., s j } are equivalent. Consider a demand k ′ ∈ K and a slot s ′ ∈ {s i + w k ′ -1, ..., s j } with v k ′ ∈ C, and a solution S60 = ( Ẽ60 , S60 ) given by a) for each demand k i ∈ K with i ∈ {1, ..., |K|}, we let Ẽ60 k i be the set of edges involved in a shortest path between o k i and d k i , b) for one demand k from C, we select the slot s k = s i + w k -1, c) for each demand k i ∈ C \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots Ĩ60 i given by Ĩ60 i = [ kj ∈ D60 i {w ki , ..., s kj -w kj }∪{s kj +w ki , ..., s}]∩[{w ki , ..., s ′ -w k ′ }∪{s ′ +w ki , ..., s}]\{s i , ..., s j } if Ẽ60 ki ∩ Ẽ60 k ′ ̸ = ∅ or Ĩ60 i = [ kj ∈ D60 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] \ {s i , ..., s j } if not, where D60 i = {k j ∈ {k 1 , ..., k i-1 } ∩ C : D60 ki ∩ D60 kj ̸ = ∅}, d) for each demand k i ∈ K \ C with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots Ĩ60 i given by Ĩ60 i = [ kj ∈ R60 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k ′ } ∪ {s ′ + w ki , ..., s}] if Ẽ60 ki ∩ Ẽ60 k ′ ̸ = ∅ or Ĩ60 i = kj ∈ R60 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where R60 i = {k j ∈ {k 1 , ..., k i-1 } ∪ C such that D60 k i ∩ D60 k j ̸ = ∅}. As a result, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ R60 i , 95 • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ if Ẽ60 k i ∩ Ẽ60 k ′ ̸ = ∅ ( we take into account the possibility of adding slot s ′ as a last slot in the selected last slots S60 k ′ to route demand k ′ in solution S60 ). We let S60 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S60 is feasible for the problem. Hence, the corresponding incidence vector (x S60 , z S60 ) belongs to F H E I C . Then consider the solution S 63 obtained from S60 by adding slot s ′ as last slot to demand k ′ , and modifying the last slots assigned to demand k by adding a new last slot s and removing the last slot s ∈ S60 k with s ∈ {s i + w k + 1, ..., s j } and s / 63 is feasible for the problem. The corresponding incidence vector (x S 63 , z S 63 ) belongs to F H E I C . Hence, solutions S60 and S 63 satisfy equation µx + σz = τ . We have so ∈ {s i + w k + 1, ..., s j } for demand k ∈ K with v k ∈ C such that S 63 k = ( S60 k \{s})∪{s} such that {s-w k +1, ..., s}∩{s ′ -w k ′ +1, ..., s ′ } = ∅ for each k ′ ∈ K and s ′ ∈ S 63 k ′ with E 63 k ∩ E 63 k ′ ̸ = ∅. Solution S µx S60 + σz S60 = µx S 63 + σz S 63 = µx S60 + σz S60 + σ k ′ s ′ -σ k s + σ k s . Since σ k s = 0 for s / ∈ {s i + w k -1, ..., s j } with v k ∈ C, it follows that σ k ′ s ′ = σ k s . In a similar way, we can show that σ k s = σ k ′ s ′ , for all pairs (v k , v k ′ ) ∈ C, with s ∈ {s i + w k -1, ..., s j } and s ′ ∈ {s i + w k ′ -1, ..., s j }. We re-do the same procedure for each two slots s, s ′ ∈ {s i + w k -1, ..., s j } for each demand k ∈ K with v k ∈ C such that σ k s = σ k s ′ , for all v k ∈ C and s, s ′ ∈ {s i + w k -1, ..., s j }. Consequently, we obtain that σ k s = ρ for all v k ∈ C and s ∈ {s i + w k -1, ..., s j }. 
By (2.17) and (2.18), we know that          µ k ′ e ′ = γ k ′ ,e ′ 1 for all k ′ ∈ K and e ′ ∈ E k ′ 0 , µ k ′ e ′ = γ k ′ ,e ′ 2 for all k ′ ∈ K and e ′ ∈ E k ′ 1 , σ k ′ s ′ = γ k ′ ,s ′ 3 for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}. We conclude that for each k ∈ K and e ∈ E µ k e =          γ k,e 1 if e ∈ E k 0 , γ k,e 2 if e ∈ E k 1 , 0 otherwise, and for each k ∈ K and s ∈ S σ k s =          γ k,s 3 if s ∈ {1, ..., w k -1} ρ if v k ∈ C and s ∈ {s i + w k -1, ..., s j }, 0 otherwise. As a consequence, (µ, σ) = ρ(α, β) + γQ. Let N (v) denote the set of neighbors of node v in a given graph. Theorem 2.4.9. Consider an interval of contiguous slots I = [s i , s j ], and a pair of demands k, k ′ ∈ K with (v k , v k ′ ) in G E I . Then, inequality (2.38) is facet defining for P(G, K, S) if and only if a) N (v k ) ∩ N (v k ′ ) = ∅ in the conflict graph H E I , b ) and there does not exist an interval of contiguous slots I ′ in [1, s] such that I ⊂ I ′ with w k + w k ′ ≥ |I ′ |, w k ≤ |I ′ |, and w k ′ ≤ |I ′ |. Proof. Neccessity. We distinguish two cases: a) if N (v k ) ∩ N (v k ′ ) ̸ = ∅ in the conflict graph H E I , Sufficiency. We use the same proof of theorem 2.4.8 for a clique C = {v k , v k ′ } in the conflict graph H E I . Interval-Odd-Hole Inequalities Proposition 2.4.14. Let I = [s i , s j ] be an interval of contiguous slots in [1, s] with s i ≤ s j -1, and H be an odd-hole H in the conflict graph H E I with |H| ≥ 5. Then, the inequality v k ∈H s j s=s i +w k -1 z k s ≤ |H| -1 2 , (2.40) is valid for Q(G, K, S). Moreover, it is valid for P(G, K, S) if 2w k > |I| for each v k ∈ H. Proof. It is trivial given the definition of odd-hole set in the conflict graph H E I . We strengthen the proof as belows. For each pair of nodes (v k , v k ′ ) linked in H by an edge, we know that s j s=s i +w k -1 z k s + s j s ′ =s i +w k ′ -1 z k ′ s ′ ≤ 1. Given that H is an odd-hole which means that we have |H| -1 pair of nodes (v k , v k ′ ) linked in H, and by doing a sum for all pairs of nodes (v k , v k ′ ) linked in H, it follows that (v k ,v k ′ )∈E(H) s j s=s i +w k -1 z k s + s j s ′ =s i +w k ′ -1 z k ′ s ′ ≤ |H| -1. where (v k ,v k ′ )∈E(H) s j s=s i +w k -1 z k s + s j s ′ =s i +w k ′ -1 z k ′ s ′ = v k ∈H 2 s j s=s i +w k -1 z k s ≤ |H| -1. By dividing the two sides of the previous sum by 2, it follows that v k ∈H s j s=s i +w k -1 z k s ≤ |H| -1 2 = |H| - H ∩ C = ∅, c) and the nodes (v k , v k ′ ) are linked in H E I for all v k ∈ H and v k ′ ∈ C. Then, the inequality v k ∈H s j s=s i +w k -1 z k s + |H| -1 2 v k ′ ∈C s j s ′ =s i +w k ′ -1 z k ′ s ′ ≤ |H| -1 2 , ( 2.41 ) is valid for Q(G, K, S). Moreover, it is valid for P(G, K, S) if 2w k > |I| for each v k ∈ C ∪ H. Proof. It is trivial given the definition of odd-hole set and clique set in the conflict graph H E I such that if s j s ′ =s i +w k ′ -1 z k ′ s ′ = 1 for v k ′ ∈ C, it forces the quantity v k ∈H s j s=s i +w k -1 z k s to be equal to 0. Otherwise, we know from inequality (2.40) that the sum v k ∈H s j s=s i +w k -1 z k s is always smaller than |H|-1 2 . We strengthen the proof as belows. For each pair of nodes (v k , v k ′ ) linked in H by an edge, we know that s j s=s i +w k -1 z k s + s j s ′ =s i +w k ′ -1 z k ′ s ′ + v k" ∈C s j s"=s i +w k" -1 z k" s" ≤ 1 given that all the nodes v k" ∈ C are linked with the nodes v k and v k ′ . 
Given that H is an odd-hole which means that we have |H| -1 pair of nodes (v k , v k ′ ) linked in H, and by doing a sum for all pairs of nodes (v k , v k ′ ) linked in H, it follows that (v k ,v k ′ )∈E(H) s j s=s i +w k -1 z k s + s j s ′ =s i +w k ′ -1 z k ′ s ′ + v k" ∈C s j s"=s i +w k" -1 z k" s" ≤ |H| -1. Taking into account that each node v k in H has two neighbors in H, this implies that s j s=s i +w k -1 z k s appears twice in the previous inequality. The sum v k" ∈C s j s"=s i +w k" -1 z k" s" appears |H| -1 times in in the previous inequality. As a result, (v k ,v k ′ )∈E(H) s j s=s i +w k -1 z k s + s j s ′ =s i +w k ′ -1 z k ′ s ′ + (|H| -1) v k" ∈C s j s"=s i +w k" -1 z k" s" ≤ |H| -1 ⇒ v k ∈H 2 s j s=s i +w k -1 z k s + (|H| -1) v k" ∈C s j s"=s i +w k" -1 z k" s" ≤ |H| -1. By dividing the two sides of the previous sum by 2, and since |H| is an odd number, it follows that v k ∈H s j s=s i +w k -1 z k s + |H| -1 2 v k" ∈C s j s"=s i +w k" -1 z k" s" ≤ |H| -1 2 = |H| -1 2 . We conclude at the end that inequality (2.41) is valid for Q(G, K, S). Moreover, it c) and there does not exist an interval I ′ of contiguous slots with I ⊂ I ′ such that H defines also an odd-hole in the associated conflict graph H E I ′ . is valid for P(G, K, S) if 2w k > |I| for each v k ∈ C ∪ H. H E I ((H \ {v k }) ∪ {v k ′ }) does not contain an odd-hole H ′ = (H \ {v k }) ∪ {v k ′ }, b) and there does not exist a node v k ′ / ∈ H in H E I such that v k ′ is linked with all nodes v k ∈ H, Proof. Neccessity. We distinguish the following cases: a) if for a node v k ′ / ∈ H in H E I , there exists a node v k ∈ H such that the induced graph H E I ((H \ {v k }) ∪ {v k ′ }) contains an odd-hole H ′ = (H \ {v k }) ∪ {v k ′ }. This implies that inequality (2.40) can be dominated by doing some lifting procedures using the following valid inequalities v k ∈H s j s ′ =s i +w k -1 z k s ′ ≤ |H| -1 2 , and v k ′ ∈H ′ s j s ′ =s i +w k ′ -1 z k ′ s ′ ≤ |H| -1 2 , as follows s j s ′ =s i +w k -1 z k s ′ + s j s ′ =s i +w k ′ -1 z k ′ s ′ + 2 v k" ∈H\{k,k ′ } s j s"=s i +w k" -1 z k" s" ≤ |H| -1. By adding the sum s j s ′ =s i +w k ′ -1 z k ′ s ′ to the previous inequality, we obtain s j s ′ =s i +w k -1 z k s ′ + 2 s j s ′ =s i +w k ′ -1 z k ′ s ′ + 2 v k" ∈H\{k,k ′ } s j s"=s i +w k" -1 z k" s" ≤ |H| -1 + s j s ′ =s i +w k ′ -1 z k ′ s ′ . Since s j s ′ =s i +w k ′ -1 z k ′ s ′ ≤ 1, it follows that s j s ′ =s i +w k -1 z k s ′ + 2 s j s ′ =s i +w k ′ -1 z k ′ s ′ + 2 v k" ∈H\{k,k ′ } s j s"=s i +w k" -1 z k" s" ≤ |H|. By dividing the last inequality by 2, we obtain that s j s ′ =s i +w k -1 1 2 z k s ′ + s j s ′ =s i +w k ′ -1 z k ′ s ′ + v k" ∈H\{k,k ′ } s j s"=s i +w k" -1 z k" s" ≤ |H| 2 . Given that H ′ = (H \ {k}) ∪ {k ′ } such that |H ′ | = |H|, and |H| is an odd number which implies that |H| 2 = |H|-1 2 . As a result s j s ′ =s i +w k -1 1 2 z k s ′ + v k ′ ∈H ′ s j s"=s i +w k ′ -1 z k ′ s" ≤ |H ′ | -1 2 . That which was to be demonstrated. b) if there exists a node v k ′ ∈ H in H E I such that v k ′ is linked with all nodes v k ∈ H. As a result, inequality (2. 40) is dominated by the following inequality v k ∈H s j s ′ =s i +w k -1 z k s ′ + |H| -1 2 s j s ′ =s i +w k ′ -1 z k ′ s ′ ≤ |H| -1 2 . c) if there exists an interval I ′ of contiguous slots with I ⊂ I ′ such that H defines also an odd-hole in the associated conflict graph H E I ′ . 
This implies that inequality (2.40) induced by odd-hole H for the interval I is dominated by inequality (2.40) induced by the same odd-hole H for the interval I ′ given that {s i + w k -1, ..., s j } ⊂ I ′ for each k ∈ H. As a result, inequality (2.40) is not facet defining for P(G, K, S). If no one of these two cases is verified, inequality (2.40) can never be dominated by another inequality without changing its right-hand side. Sufficiency. Let F H E I H be the face induced by inequality (2.40), that is F H E I H = {(x, z) ∈ P(G, K, S) : v k ∈H s j s=s i +w k -1 z k s = |H| -1 2 }. We denote inequality v k ∈H s j s=s i +w k -1 z k s ≤ |H|-1 2 by αx + βz ≤ λ. Let µx + σz ≤ τ be a facet defining inequality for P(G, K, S) and F = {(x, z) ∈ P(G, K, S) : µx + σz = τ }. Suppose that F H E I H ⊆ F . In order to prove that inequality v k ∈H s j s=s i +w k -1 z k s ≤ |H|-1 2 is facet defining for P(G, K, S), we will show that there exist ρ ∈ R and γ = (γ 1 , γ 2 , γ 3 ) (such that γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}) such that (µ, σ) = ρ(α, β) + γQ. We first show that µ k e = 0 for each edge e ∈ E \ (E k 0 ∪ E k 1 ) for each demand k ∈ K. Consider a demand k ∈ K and an edge e ∈ E \ (E k 0 ∪ E k 1 ). Let S 64 = (E 64 , S 64 ) be the solution given by a) for each demand k i ∈ K with i ∈ {1, ..., |K|}, we let E 64 k i be the set of edges involved in a shortest path between o k i and d k i , b) select a subset of demands H from H with | H| = |H|-1 2 , c) for each demand k i ∈ H with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 64 i given by I 64 i = [ kj ∈L 64 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ {s i + w ki -1, ..., s j }. where where As a result, µ k e = 0. In a similar way, we can show that L 64 i = {k j ∈ {k 1 , ..., k i-1 } ∪ H : E 64 k i ∩ E 64 k j ̸ = ∅}, d) for each demand k i ∈ H \ H with i ∈ {1, ..., R 64 i = {k j ∈ {k 1 , ..., k i-1 } ∪ H such that E 64 k i ∩ E 64 k j ̸ = ∅}. This guarantees that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ R 64 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s k -w k j + 1, ..., s k } = ∅ if E 64 k i ∩ (E 64 k ∪ {e ′ }) ̸ = ∅ ( µ k e = 0, for all k ∈ K and e ∈ E \ (E k 0 ∪ E k 1 ). Let show that σ k s = 0 for all k ∈ K and s ∈ {w k , ..., s} with s / ∈ {s i + w k -1, ..., s j } if v k ∈ H. Consider a demand k in K and a slot s ′ in {w k , ..., s} with s ′ / ∈ {s i + w k -1, ..., s j } if v k ∈ H. 
Let S ′64 = (E ′64 , S ′64 ) be the solution given by a) for each demand k i ∈ K with i ∈ {1, ..., |K|}, we let E ′64 k i be the set of edges involved in a shortest path between o k i and d k i , b) select a subset of demands H from H with | H| = |H|-1 2 , c) for each demand k i from H with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I ′64 i given by I ′64 i = [ kj ∈D ′64 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ {s i + w ki -1, ..., s j }, where D ′64 i = {k j ∈ {k 1 , ..., k i-1 } ∩ H : E ′64 k i ∩ E ′64 k j ̸ = ∅}, d) for each demand k i ∈ H \ H with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I ′64 i given by I ′64 i = kj ∈D" 64 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} \ {s i + w ki -1, ..., s j }, where D" 64 i = {k j ∈ {k 1 , ..., k i-1 } ∩ H : E" 64 k i ∩ E" 64 k j ̸ = ∅}, e) for each demand k i ∈ K \ H with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I ′64 i given by I ′64 i = [ kj ∈R ′64 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k } ∪ {s ′ + w ki , ..., s}] if E ′64 ki ∩ E ′64 k ̸ = ∅ or I ′64 i = kj ∈R ′64 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where R ′64 i = {k j ∈ {k 1 , ..., k i-1 } ∪ H such that E ′64 k i ∩ E ′64 k j ̸ = ∅}. As a result, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ R ′64 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if E ′64 k i ∩ E ′64 k ̸ = ∅ ( we take into account the possibility of adding slot s ′ as a last slot in the selected last slots S ′64 k to route demand k in solution S ′64 ). We let S ′64 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S ′64 is feasible for the problem. Hence, the corresponding incidence vector (x S ′64 , z S ′64 ) belongs to F H E I H . Then consider the solution S 66 obtained from S ′64 by adding slot s ′ as last slot to demand k in S ′64 . Solution S 66 is clearly feasible for the problem. The corresponding incidence vector (x S 66 , z S 66 ) belongs to F H E I H . Hence, solutions S ′64 and S 66 satisfy equation µx + σz = τ . We have so µx S ′64 + σz S ′64 = µx S 66 + σz S 66 = µx S ′64 + σz S ′64 + σ k s ′ . Hence, σ k s ′ = 0. In a similar way, we can show that σ k s = 0, for all k ∈ K and s ∈ {w k , ..., s} with s / where where ∈ {s i + w k -1, ..., s j } if v k ∈ H. Let prove that σ k ′ s ′ for all v k ′ ∈ H and s ′ ∈ {s i + w k ′ -1, ..., s j } are equivalent. Consider a demand k ′ ∈ K with v k ′ ∈ H and a slot s ′ ∈ {s i + w k ′ -1, ..., L 66 i = {k j ∈ {k 1 , ..., k i-1 } ∩ H : E 66 k i ∩ E 66 k j ̸ = ∅}, d) for each demand k i ∈ H \ H with i ∈ {1, ..., R 66 i = {k j ∈ {k 1 , ..., k i-1 } ∪ H such that E 66 k i ∩ E 66 k j ̸ = ∅}. Hence, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ R 66 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ if E 66 k i ∩ E 66 k ′ ̸ = ∅ ( k ′ s ′ + σ k i s′ -σ k i s . Since σ k s = 0 for all demand k ∈ K and slot s ∈ {w k , ..., s} with s / ∈ {s i +w k +1, ..., s j } if v k ∈ H, it follows that σ k i s = σ k ′ s ′ . In a similar way, we can show that σ k s = σ k ′ s ′ , for all pairs (v k , v k ′ ) ∈ H. Consequently, we obtain that σ k s = ρ for all v k ∈ H and s ∈ {s i + w k -1, ..., s j }. 
Overall, and using the results (2.17) and (2.18), we obtain that µ k e =          γ k,e 1 if e ∈ E k 0 , γ k,e 2 if e ∈ E k 1 , 0 otherwise, for each k ∈ K and e ∈ E, and σ k s =          γ k,s 3 if s ∈ {1, ..., w k -1}, ρ if v k ∈ H and s ∈ {s i + w k -1, ..., s j }, 0 otherwise. for each k ∈ K and s ∈ S. As a consequence, (µ, σ) = ρ(α, β) + γQ. Theorem 2.4.11. Let H be an odd-hole, and C be a clique in the conflict graph H E I with a) |H| ≥ 5, b) and H ∩ C = ∅, c) 2w k > |I| for each v k ∈ C ∪ H, d) and the nodes (v k , v k ′ ) are linked in H E I for all v k ∈ H and v k ′ ∈ C. v k ∈H s j s=s i +w k -1 z k s + |H| -1 2 v k ′ ∈C s j s ′ =s i +w k ′ -1 z k ′ s ′ + |H| -1 2 s j s"=s i +w k" -1 z k" s ′ ≤ |H| -1 2 . b) if there exists an interval I ′ of contiguous slots with I ⊂ I ′ such that H and C define also an odd-hole and its connected clique in the associated conflict graph H E I ′ . This implies that inequality (2.41) induced by odd-hole H and clique C for the interval I is dominated by inequality (2.41) induced by the same odd-hole H and clique C for the interval I ′ given that {s i + w k -1, ..., s j } ⊂ I ′ for each k ∈ H. If these cases are not verified, we ensure that inequality (2.41) can never be dominated by another inequality without modifying its right-hand side. Otherwise, inequality (2.41) is not facet defining for P(G, K, S). Sufficiency. Let F H E I H,C be the face induced by inequality (2.41), that is F H E I H,C = {(x, z) ∈ P(G, K, S) : v k ∈H s j s=s i +w k -1 z k s + |H| -1 2 v k ′ ∈C s j s ′ =s i +w k ′ -1 z k ′ s ′ = |H| -1 2 }. Let denote inequality v k ∈H s j s=s i +w k -1 z k s ≤ |H|-1 2 by αx+βz ≤ λ. Let µx+σz ≤ τ be a facet defining inequality for P(G, K, S) and F = {(x, z) ∈ P(G, K, S) : µx + σz = τ }. Suppose that F H E I H,C ⊆ F . To prove that F H E I H,C is a facet of P(G, K, S), we need to show that there exist ρ ∈ R and γ = (γ 1 , γ 2 , γ 3 ) (such that where where γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}) such that (µ, σ) = ρ(α, D 70 i = {k j ∈ {k 1 , ..., k i-1 } ∩ H : E 70 k i ∩ E 70 k j ̸ = ∅}, e) for each demand k i ∈ K \ H with i ∈ {1, ..., R 70 i = {k j ∈ {k 1 , ..., k i-1 } ∪ H such that E 70 k i ∩ E 70 k j ̸ = ∅}. Hence, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ R 70 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k′ + 1, ..., s′ } = ∅ if E 70 k i ∩ E 70 k′ ̸ = ∅ ( k i + 1, ..., s j }) for each k i ∈ H, b) and select a new last slot s′ i / ∈ {s i + w k i + 1, ..., s j } for each k i ∈ H i.e., S 71 k i = (S 70 k i \ {s i }) ∪ {s ′ i } such that {s ′ i -w k i -1, ..., s′ i } ∩ {s -w k + 1, ..., s} = ∅ for each k ∈ K and s ∈ S 70 k with E 71 k ∩ E 71 k i ̸ = ∅ for each k i ∈ H, + k i ∈ H σ k i s′ i - k i ∈ H σ k i si . Since σ k s = 0 for all demand k ∈ K and slot s ∈ {w k , ..., s} with s / ∈ {s i +w k +1, ..., s j } if v k ∈ H ∪ C, it follows that k i ∈ H σ k i si = σ k′ s′ for v k′ ∈ C. In a similar way, we can show that σ k ′ s ′ = ρ |H| -1 2 , for all v k ′ ∈ C and s ′ ∈ {s i + w k ′ + 1, ..., s j }. As a result, σ k s = σ k ′ s ′ , for all (v k , v k ′ ) ∈ C and s ∈ {s i + w k + 1, ..., s j } and s ′ ∈ {s i + w k ′ + 1, ..., s j }. Consequently, we obtain that σ k ′ s ′ = ρ |H|-1 2 for all v k ′ ∈ C and s ′ ∈ {s i + w k ′ - 1, ..., s j }. 
By (2.17) and (2.18), we know that
$$\begin{cases} \mu^{k'}_{e'} = \gamma^{k',e'}_1 & \text{for all } k' \in K \text{ and } e' \in E^{k'}_0, \\ \mu^{k'}_{e'} = \gamma^{k',e'}_2 & \text{for all } k' \in K \text{ and } e' \in E^{k'}_1, \\ \sigma^{k'}_{s'} = \gamma^{k',s'}_3 & \text{for all } k' \in K \text{ and } s' \in \{1, \dots, w_{k'} - 1\}. \end{cases}$$
We conclude that for each $k \in K$ and $e \in E$,
$$\mu^k_e = \begin{cases} \gamma^{k,e}_1 & \text{if } e \in E^k_0, \\ \gamma^{k,e}_2 & \text{if } e \in E^k_1, \\ 0 & \text{otherwise}, \end{cases}$$
and for each $k \in K$ and $s \in S$,
$$\sigma^k_s = \begin{cases} \gamma^{k,s}_3 & \text{if } s \in \{1, \dots, w_k - 1\}, \\ \rho & \text{if } v_k \in H \text{ and } s \in \{s_i + w_k - 1, \dots, s_j\}, \\ \rho \frac{|H|-1}{2} & \text{if } v_k \in C \text{ and } s \in \{s_i + w_k - 1, \dots, s_j\}, \\ 0 & \text{otherwise}. \end{cases}$$
As a result, we have $(\mu, \sigma) = \rho(\alpha, \beta) + \gamma Q$.

Edge-Slot-Assignment-Clique Inequalities

Here, we introduce another conflict graph, quite different from the conflict graphs presented previously.

Definition 2.4.6. Let $H^e_S$ be the conflict graph defined as follows. For each slot $s \in \{w_k, \dots, \bar{s}\}$ and demand $k \in K$ with $e \notin E^k_0$, consider a node $v_{k,s}$ in $H^e_S$. Two nodes $v_{k,s}$ and $v_{k',s'}$ are linked by an edge in $H^e_S$ if

a) $k = k'$,

b) or $\{s - w_k + 1, \dots, s\} \cap \{s' - w_{k'} + 1, \dots, s'\} \neq \emptyset$ if $k \neq k'$ and $(k, k') \notin K^e_c$.

Based on this definition, we introduce the following inequalities.

Proposition 2.4.16. Consider an edge $e \in E$, and a clique $C$ in the conflict graph $H^e_S$ with $|C| \ge 3$. Then, the inequality
$$\sum_{v_{k,s} \in C} (x^k_e + z^k_s) \le |C| + 1, \qquad (2.42)$$
is valid for $Q(G, K, S)$. Moreover, it is valid for $P(G, K, S)$ if $\{s - w_k + 1, \dots, s\} \cap \{s' - w_{k'} + 1, \dots, s'\} \neq \emptyset$ for each $(v_{k,s}, v_{k',s'}) \in C$.

Proof. It is trivial given the definition of a clique set in the conflict graph $H^e_S$: for each two linked nodes $v_{k,s}$ and $v_{k',s'}$ in $H^e_S$, we have
$$x^k_e + x^{k'}_e + z^k_s + z^{k'}_{s'} \le 3.$$
This can be generalized to a triplet of pairwise linked nodes $v_{k,s}$, $v_{k',s'}$ and $v_{k'',s''}$ with $w_k + w_{k'} + w_{k''} \le \bar{s} - \sum_{\tilde{k} \in K_e \setminus \{k, k', k''\}} w_{\tilde{k}}$: for the linked pairs $(v_{k,s}, v_{k',s'})$, $(v_{k,s}, v_{k'',s''})$ and $(v_{k',s'}, v_{k'',s''})$, we have
$$x^k_e + x^{k'}_e + z^k_s + z^{k'}_{s'} \le 3, \quad x^k_e + x^{k''}_e + z^k_s + z^{k''}_{s''} \le 3, \quad x^{k'}_e + x^{k''}_e + z^{k'}_{s'} + z^{k''}_{s''} \le 3.$$
By adding the three previous inequalities, we get the following inequality using the Chvátal–Gomory procedure:
$$2x^k_e + 2x^{k'}_e + 2x^{k''}_e + 2z^k_s + 2z^{k'}_{s'} + 2z^{k''}_{s''} \le 9 \Rightarrow x^k_e + x^{k'}_e + x^{k''}_e + z^k_s + z^{k'}_{s'} + z^{k''}_{s''} \le 4,$$
given that $\lfloor 9/2 \rfloor = 4$. This can be generalized for each clique $C$ with $|C| \ge 4$, showing that inequality (2.42) can be seen as a Chvátal–Gomory cut. Using the Chvátal–Gomory and recurrence procedures, we obtain that
$$\sum_{v_{k,s} \in C'} x^k_e + z^k_s \le |C'| + 1,$$
for all $C' \subset C$ with $|C'| = |C| - 1$ and $|C'| \ge 3$. By adding the previous inequalities for all $C' \subset C$ with $|C'| = |C| - 1$, and then doing some simplification, we get at the end that
$$\sum_{v_{k,s} \in C} x^k_e + z^k_s \le |C| + \left\lfloor \frac{|C|}{|C|-1} \right\rfloor \Rightarrow \sum_{v_{k,s} \in C} x^k_e + z^k_s \le |C| + 1,$$
given that $\lfloor |C| / (|C| - 1) \rfloor = 1$. We conclude at the end that inequality (2.42) is valid for $Q(G, K, S)$. Moreover, it is valid for $P(G, K, S)$ if $\{s - w_k + 1, \dots, s\} \cap \{s' - w_{k'} + 1, \dots, s'\} \neq \emptyset$ for each $(v_{k,s}, v_{k',s'}) \in C$.

Theorem 2.4.12. Consider an edge $e \in E$, and a clique $C$ in the conflict graph $H^e_S$ with $\{s - w_k + 1, \dots, s\} \cap \{s' - w_{k'} + 1, \dots, s'\} \neq \emptyset$ for each $(v_{k,s}, v_{k',s'}) \in C$. Then, inequality (2.42) is facet defining for $P(G, K, S)$ if and only if $C$ is a maximal clique in the conflict graph $H^e_S$, and there does not exist an interval of contiguous slots $I = [s_i, s_j] \subset [1, \bar{s}]$ with

a) $[\min_{v_{k,s} \in C} (s - w_k + 1), \max_{v_{k,s} \in C} s] \subset I$,

b) and $w_k + w_{k'} \ge |I| + 1$ for each $(v_k, v_{k'}) \in C$,

c) and $2w_k \ge |I| + 1$ and $w_k \le |I|$ for each $v_k \in C$.

Proof. Necessity. If $C$ is not a maximal clique in the conflict graph $H^e_S$, this means that inequality (2.42) can be dominated by another inequality associated with a clique $C'$ such that $C \subset C'$, without changing its right-hand side. Moreover, if there exists an interval of contiguous slots $I = [s_i, s_j] \subset [1, \bar{s}]$ satisfying a), b) and c), then inequality (2.42) is dominated by inequality (2.36). As a result, inequality (2.42) cannot be facet defining for $P(G, K, S)$.

Sufficiency. Let $F^{H^e_S}_C$ be the face induced by inequality (2.42), that is
$$F^{H^e_S}_C = \{(x, z) \in P(G, K, S) : \sum_{v_{k,s} \in C} x^k_e + z^k_s = |C| + 1\}.$$
Let us denote inequality $\sum_{v_{k,s} \in C} x^k_e + z^k_s \le |C| + 1$ by $\alpha x + \beta z \le \lambda$. Let $\mu x + \sigma z \le \tau$ be a facet defining inequality for $P(G, K, S)$ and $F = \{(x, z) \in P(G, K, S) : \mu x + \sigma z = \tau\}$. Suppose that $F^{H^e_S}_C \subseteq F$. In order to prove that inequality $\sum_{v_{k,s} \in C} x^k_e + z^k_s \le |C| + 1$ is facet defining for $P(G, K, S)$, we need to show that there exist $\rho \in \mathbb{R}$ and $\gamma = (\gamma_1, \gamma_2, \gamma_3)$ (such that $\gamma^{k',e'}_1 \in \mathbb{R}$ for all $k' \in K$ and $e' \in E^{k'}_0$, $\gamma^{k',e'}_2 \in \mathbb{R}$ for all $k' \in K$ and $e' \in E^{k'}_1$, $\gamma^{k',s'}_3 \in \mathbb{R}$ for all $k' \in K$ and $s' \in \{1, \dots, w_{k'} - 1\}$) such that $(\mu, \sigma) = \rho(\alpha, \beta) + \gamma Q$. In a similar way to the proof of Theorem 2.4.1, we obtain that
$$\mu^{k'}_{e'} = \begin{cases} \gamma^{k',e'}_1 & \text{if } e' \in E^{k'}_0, \\ \gamma^{k',e'}_2 & \text{if } e' \in E^{k'}_1, \\ \rho & \text{if } k' \in K(C) \text{ and } e' = e, \\ 0 & \text{otherwise}, \end{cases}$$
for each $k' \in K$ and $e' \in E$, and
$$\sigma^k_{s'} = \begin{cases} \gamma^{k,s'}_3 & \text{if } s' \in \{1, \dots, w_k - 1\}, \\ \rho & \text{if } v_{k,s'} \in C, \\ 0 & \text{otherwise}, \end{cases}$$
for each $k \in K$ and $s' \in S$, where $K(C) = \{k \in K : \exists s \in \{w_k, \dots, \bar{s}\} \text{ with } v_{k,s} \in C\}$. As a consequence, $(\mu, \sigma) = \rho(\alpha, \beta) + \gamma Q$.

Slot-Assignment-Clique Inequalities

On the other hand, we detected that there may exist some cases which are not covered by inequalities (2.42) and (2.25) previously introduced. For this, we provide the following definition of a conflict graph and its associated inequality.

Definition 2.4.7. Let $H^E_S$ be the conflict graph defined as follows. For each slot $s \in \{w_k, \dots, \bar{s}\}$ and demand $k \in K$, consider a node $v_{k,s}$ in $H^E_S$. Two nodes $v_{k,s}$ and $v_{k',s'}$ are linked by an edge in $H^E_S$ if

• $k = k'$,

• or $E^k_1 \cap E^{k'}_1 \neq \emptyset$ and $\{s - w_k + 1, \dots, s\} \cap \{s' - w_{k'} + 1, \dots, s'\} \neq \emptyset$ if $k \neq k'$.

Proposition 2.4.17. Let $C$ be a clique in the conflict graph $H^E_S$ with $|C| \ge 3$. Then, the inequality
$$\sum_{v_{k,s} \in C} z^k_s \le 1, \qquad (2.43)$$
is valid for $Q(G, K, S)$. Moreover, it is valid for $P(G, K, S)$ if $\{s - w_k + 1, \dots, s\} \cap \{s' - w_{k'} + 1, \dots, s'\} \neq \emptyset$ for each $(v_{k,s}, v_{k',s'}) \in C$.

Proof. It is trivial given the definition of a clique set in the conflict graph $H^E_S$: for each two linked nodes $v_{k,s}$ and $v_{k',s'}$ in $H^E_S$, we know from inequality (2.6) that
$$z^k_s + z^{k'}_{s'} \le 1,$$
given that $x^k_e = x^{k'}_e = 1$ for all $e \in E^k_1 \cap E^{k'}_1$ and $\{s - w_k + 1, \dots, s\} \cap \{s' - w_{k'} + 1, \dots, s'\} \neq \emptyset$. By adding the previous inequalities for all pairs of nodes $v_{k,s}$ and $v_{k',s'}$ in $C$, and by a recurrence procedure, we obtain that for all $C' \subseteq C$ with $|C'| = |C| - 1$,
$$\sum_{v_{k,s} \in C'} z^k_s \le 1.$$
By adding the previous inequalities for all $C' \subseteq C$ with $|C'| = |C| - 1$, we get
$$\sum_{\substack{C' \subseteq C \\ |C'| = |C| - 1}} \sum_{v_{k,s} \in C'} z^k_s \le \sum_{\substack{C' \subseteq C \\ |C'| = |C| - 1}} 1.$$
Note that for each demand $k$ and slot $s$ with $v_{k,s} \in C$, the variable $z^k_s$ appears $\binom{|C|}{|C|-1} - 1 = |C| - 1$ times in the previous sum. It follows that
$$\sum_{v_{k,s} \in C} (|C| - 1) z^k_s \le |C|.$$
By dividing the two sides of the previous sum by $|C| - 1$, we have
$$\sum_{v_{k,s} \in C} z^k_s \le \left\lfloor \frac{|C|}{|C| - 1} \right\rfloor \Rightarrow \sum_{v_{k,s} \in C} z^k_s \le 1,$$
given that $\lfloor |C| / (|C| - 1) \rfloor = 1$. We conclude at the end that inequality (2.43) is valid for $Q(G, K, S)$.
Moreover, it is valid for P(G, K, S) if {s -w k + 1, ..., 1} ∩ {s ′ -w k ′ + 1, ..., s ′ } ̸ = ∅ for each (v k,s , v k ′ ,s ′ ) ∈ C. Theorem 2.4.13. Consider a clique C in the conflict graph H E S with {s - w k + 1, ..., 1} ∩ {s ′ -w k ′ + 1, ..., s ′ } ̸ = ∅ for each (v k,s , v k ′ ,s ′ ) ∈ C. Then, inequality (2.43) is facet defining for P(G, K, S) if and only if C is a maximal clique in the conflict graph H E S , and there does not exist an interval of contiguous slots I = [s i , s j ] ⊂ [1, s] with a) [ min v k,s ∈C (s -w k + 1), max v k,s ∈C s] ⊂ I, b) and w k + w k ′ ≥ |I| + 1 for each (v k , v k ′ ) ∈ C, = [s i , s j ] ⊂ [1, s] with a) [ min v k,s ∈C (s -w k + 1), max v k,s ∈C s] ⊂ I, b) and w k + w k ′ ≥ |I| + 1 for each (v k , v k ′ ) ∈ C, c) and 2w k ≥ |I| + 1 and w k ≤ |I| for each v k ∈ C. Then, inequality (2.43) is dominated by inequality (2.39). As a result, inequality (2.43) cannot be facet defining for P(G, K, S). Sufficiency. Let F H E S C be the face induced by inequality (2.43), that is F H E S C = {(x, z) ∈ P(G, K, S) : v k,s ∈C z k s = 1}. Let denote inequality v k,s ∈C z k s ≤ 1 by αx + βz ≤ λ. Let µx + σz ≤ τ be a facet defining inequality for P(G, K, S) and F = {(x, z) ∈ P(G, K, S) : µx + σz = τ }. Suppose that F H E S C ⊆ F . In order to prove that inequality v k,s ∈C z k s ≤ 1 is facet defining for P(G, K, S), we need to show that there exist ρ ∈ R and γ = (γ 1 , γ 2 , γ 3 ) (such that γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}) such that (µ, σ) = ρ(α, β)+γQ. We first show that µ k e = 0 for each edge e where where ∈ E \ (E k 0 ∪ E k 1 ) for each demand k ∈ K. Consider a demand k ∈ K and an edge e ∈ E \ (E k 0 ∪ E k 1 ). Let S 72 = (E D 72 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k ′ } : E 72 k i ∩ E 72 k j ̸ = ∅}. As a result, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 72 i , • {s k i -w k i + 1, ..., s k i } ∩ {s k -w k j + 1, ..., s k } = ∅ if E 72 k i ∩ E 72 k ̸ = ∅ ( D ′72 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k ′ } : E ′72 k i ∩ E ′72 k j ̸ = ∅}. This satisfies that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D ′72 i , • {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if E ′72 k i ∩ E ′72 k ̸ = ∅ ( we take into account the possibility of adding slot s ′ in the selected set of last slots S ′72 k to route demand k in solution S ′72 ). We let S ′72 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S ′72 is feasible for the problem. Hence, the corresponding incidence vector (x S ′72 , z S ′72 ) belongs to F H E S C . Then consider the solution S 74 obtained from S ′72 by adding slot s ′ as last slot to demand k in S ′72 . Solution S 74 is clearly feasible for the problem. The corresponding incidence vector (x S 74 , z S 74 ) belongs to F H E S C . Hence, solutions S ′72 and S 74 satisfy equation µx + σz = τ . We have so µx S ′72 + σz S ′72 = µx S 74 + σz S 74 = µx S ′72 + σz S ′72 + σ k s ′ . Hence, σ k s ′ = 0. In a similar way, we can show that σ k s = 0, for all k ∈ K and s ∈ {w k , ..., s} with v k,s / ∈ C. Let prove that σ k s for all v k,s ∈ C are equivalent. Consider a node v k ′ ,s ′ in C such that s ′ / ∈ S 72 k ′ . Let S72 = ( Ẽ72 , S72 ) be the solution given by 1. for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we let Ẽ72 k i be the set of edges involved in a shortest path between o k i and d k i , 2. 
select a pair of demand k and slot s from clique C (i.e., v k,s ∈ C) such that slot s k = s will be used as last slot for demand k, 3. for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots Ĩ72 i given by Ĩ72 i = [ kj ∈ D72 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k ′ } ∪ {s ′ + w ki , ..., s}] if Ẽ72 ki ∩ Ẽ72 k ′ ̸ = ∅ or Ĩ72 i = kj ∈ D72 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where D72 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : Ẽ72 k i ∩ Ẽ72 k j ̸ = ∅}. This ensures that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D72 i , • {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ if Ẽ72 k i ∩ Ẽ72 k ′ ̸ = ∅ ( we take into account the possibility of adding slot s ′ in the selected set of last slots S72 k ′ to route demand k ′ in solution S72 ). We let S72 k i = {s k i } be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}. S72 is feasible for the problem. Hence, the corresponding incidence vector (x S72 , z S72 ) belongs to F H E S C . Then consider the solution S 75 obtained from S72 by adding slot s ′ as last slot to demand k ′ in S72 , and modifying the last slots assigned to demand k by adding a new last slot s and removing the last slot 75 is clearly feasible for the problem. The corresponding incidence vector (x S 75 , z S 75 ) belongs to F s ∈ S72 k with v k,s ∈ C and v k,s / ∈ C such that S 75 k = ( S72 k \ {s}) ∪ {s} and {s -w k + 1, ..., s} ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ for each k ′ ∈ K and s ′ ∈ S 75 k ′ with E 75 k ∩ E 75 k ′ ̸ = ∅. Solution S H E S C . Hence, solutions S72 and S 75 satisfy equation µx + σz = τ . We have so µx S72 + σz S72 = µx S 75 + σz S 75 = µx S72 + σz S72 + σ k ′ s ′ -σ k s + σ k s . Since σ k s = 0 for v k,s / ∈ C, and µ k e = 0 for all k ∈ K and e ∈ E \ (E k 0 ∪ E k 1 ), it follows that σ k ′ s ′ = σ k s . In a similar way, we can show that σ k s = σ k ′ s ′ , for all pairs (v k,s , v k ′ ,s ′ ) ∈ C. Consequently, we obtain that σ k s = ρ for all pairs v k,s ∈ C. We know from (2.17) and (2.18) that          µ k ′ e ′ = γ k ′ ,e ′ 1 for all k ′ ∈ K and e ′ ∈ E k ′ 0 , µ k ′ e ′ = γ k ′ ,e ′ 2 for all k ′ ∈ K and e ′ ∈ E k ′ 1 , σ k ′ s ′ = γ k ′ ,s ′ 3 for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}. As a result, we obtain that for each k ∈ K and e ∈ E µ k e =          γ k,e 1 if e ∈ E k 0 γ k,e 2 if e ∈ E k 1 0 otherwise and for each k ∈ K and s ∈ S σ k s =          γ k,s 3 if s ∈ {1, ..., w k -1} ρ if v k,s ∈ C, 0 if v k,s / ∈ C. As a consequence, (µ, σ) = ρ(α, β) + γQ. Slot-Assignment-Odd-Hole Inequalities One can strengthen inequality (2.43) by introducing the following inequalities based on the so-called odd-hole inequalities. Proposition 2.4.18. Let H be an odd-hole in the conflict graph H E S with |H| ≥ 5. Then, the inequality v k,s ∈H z k s ≤ |H| -1 2 , ( 2.44 ) is valid for Q(G, K, S). Moreover, it is valid for P(G, K, S) if {s -w k + 1, ..., 1} ∩ {s ′ -w k ′ + 1, ..., s ′ } ̸ = ∅ for each pair of nodes (v k,s , v k ′ ,s ′ ) that are linked in H. Proof. It is trivial given the definition of the odd-hole in the conflict graph H E S such that for each pair of nodes (v k,s , v k ′ ,s ′ ) linked in H by an edge, we know that z k s + z k ′ s ′ ≤ 1. 
Given that H is an odd-hole which means that we have |H| -1 pair of nodes (v k,s , v k ′ ,s ′ ) linked in H, and by doing a sum over all pairs of nodes (v k,s , v k ′ ,s ′ ) linked in H, it follows that (v k,s ,v k ′ ,s ′ )∈E(H) z k s + z k ′ s ′ ≤ |H| -1. Taking into account that each node v k in H has two neighbors in H, this implies that z k s appears twice in the previous inequality. As a result, (v k,s ,v k ′ ,s ′ )∈E(H) z k s + z k ′ s ′ = v k,s ∈H 2z k s =⇒ v k,s ∈H 2z k s ≤ |H| -1. As a result, v k,s ∈H z k s ≤ |H| -1 2 = |H| -1 2 since |H| is an odd number. We conclude at the end that inequality (2.44) is valid for P(G, K, S). Note that inequality (2.44) can be strengthened without modifying its right-hand side by combining inequality (2.44) and (2.43). Proposition 2.4.19. Let H be an odd-hole, and C be a clique in the conflict graph H E S with a) |H| ≥ 5, b) and H ∩ C = ∅, c) and the nodes (v k,s , v k ′ ,s ′ ) are linked in H E S for all v k,s ∈ H and v k ′ ,s ′ ∈ C. Then, the inequality v k,s ∈H z k s + |H| -1 2 v k ′ ,s ′ ∈C z k ′ s ′ ≤ |H| -1 2 , ( 2.45 ) is valid for Q(G, K, S). Moreover, it is valid for P(G, K, S) if {s -w k + 1, ..., 1} ∩ {s ′ -w k ′ + 1, ..., s ′ } ̸ = ∅ for each (v k,s , v k ′ ,s ′ ) ∈ C and pair of nodes (v k,s , v k ′ ,s ′ ) linked in H. Proof. It is trivial given the definition of the odd-hole and clique in H E S such that if v k ′ ,s ′ ∈C z k ′ s ′ = 1 for a v k ′ ,s ′ ∈ C ∈ C which implies that the quantity v k,s ∈H z k s is forced to be equal to 0. Otherwise, we know from inequality (2.44) that the sum v k,s ∈H z k s is always smaller than |H|-1 2 . We strengthen the proof as belows. For each pair of nodes (v k,s , v k ′ ,s ′ ) linked in H by an edge, we know that z k s + z k ′ s ′ + v k",s" ∈C z k" s" ≤ 1 given that all the nodes v k",s" ∈ C are linked with the nodes v k,s and v k ′ ,s ′ . Given that H is an odd-hole which means that we have |H| -1 pair of nodes (v k,s , v k ′ ,s ′ ) linked in H, and by doing a sum for all pairs of nodes (v k,s , v k ′ ,s ′ ) linked in H, it follows that (v k,s ,v k ′ ,s ′ )∈E(H) z k s + z k ′ s ′ + v k",s" ∈C z k" s" ≤ |H| -1. Taking into account that each node v k,s has two neighbors in H, this implies that z k s appears twice in the previous inequality. The sum v k",s" ∈C z k" s" appears |H| -1 times in in the previous inequality. As a result, (v k,s ,v k ′ ,s ′ )∈E(H) z k s + z k ′ s ′ + (|H| -1) v k",s" ∈C z k" s" ≤ |H| -1 ⇒ v k,s ∈H 2z k s + (|H| -1) v k",s" ∈C z k" s" ≤ |H| -1. By dividing the two sides of the previous sum by 2, and since |H| is an odd number, it follows that v k,s ∈H z k s + |H| -1 2 v k",s" ∈C z k" s" ≤ |H| -1 2 = |H| -1 2 . We conclude at the end that inequality (2.45) is valid for P(G, K, S). Theorem 2.4.14. Let H be an odd-hole in the conflict graph H E S with |H| ≥ 5, and {s - c) and there does not exist an interval of contiguous slots w k + 1, ..., 1} ∩ {s ′ -w k ′ + 1, ..., s ′ } ̸ = ∅ for each pair of nodes (v k,s , v k ′ ,s ′ ) linked in H. v k ′ ,s ′ / ∈ H in H E S such that v k ′ ,s ′ is linked with all nodes v k,s ∈ H, I = [s i , s j ] ⊂ [1, s] with • [ min v k,s ∈H (s -w k + 1), max v k,s ∈H ] ⊂ I, • and w k + w k ′ ≥ |I| + 1 for each (v k , v k ′ ) linked in H, • and 2w k ≥ |I| + 1 and w k ≤ |I| for each v k ∈ H. Proof. Neccessity. We distinguish the following cases: a) if for a node v k ′ ,s ′ / ∈ H in H E S , there exists a node v k,s ∈ H such that the induced graph H E S (H \{v k,s }∪{v k ′ ,s ′ }) contains an odd-hole H ′ = (H \{v k,s })∪{v k ′ ,s ′ }. 
This implies that inequality (2.44) can be dominated using some technics of lifting based on the following two inequalities v k,s ∈H z k s ≤ |H|-1 2 , and v k ′ ,s ′ ∈H ′ z k ′ s ′ ≤ |H ′ |-1 2 . b) if there exists a node v k ′ ,s ′ / ∈ H in H E S such that v k ′ ,s ′ is linked with all nodes v k,s ∈ H. This implies that inequality (2.44) can be dominated by the following valid inequality v k,s ∈H z k s + |H| -1 2 z k ′ s ′ ≤ |H| -1 2 . c) if there exists an interval of contiguous slots I = [s i , s j ] ⊂ [1, s] with • [ min v k,s ∈H (s -w k + 1), max v k,s ∈H ] ⊂ I, • and w k + w k ′ ≥ |I| + 1 for each (v k , v k ′ ) linked in H, • and 2w k ≥ |I| + 1 and w k ≤ |I| for each v k ∈ H. This implies that inequality (2.44) is dominated by inequality (2.40). If no one of these cases is verified, inequality (2.44) can never be dominated by another inequality without changing its right-hand side. Otherwise, inequality (2.44) cannot be facet defining for P(G, K, S). Sufficiency. Let F H E S H be the face induced by inequality (2.44), that is F H E S H = {(x, z) ∈ P(G, K, S) : v k,s ∈H z k s = |H| -1 2 }. Denote inequality v k,s ∈H z k s ≤ |H|-1 2 by αx + βz ≤ λ. Let µx + σz ≤ τ be a facet defining inequality for P(G, K, S) and F = {(x, z) ∈ P(G, K, S) : µx + σz = τ }. Suppose that F H E S H ⊆ F . In order to prove that inequality v k,s ∈H z k s ≤ |H|-1 2 is facet defining for P(G, K, S), we will show that there exist ρ ∈ R and γ = (γ 1 , γ 2 , γ 3 ) (such that γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}) such that (µ, σ) = ρ(α, β)+γQ. We first show that µ k e = 0 for each edge e where given by ∈ E \ (E k 0 ∪ E k 1 ) for each demand k ∈ K. Consider a demand k ∈ K and an edge e ∈ E \ (E k 0 ∪ E k 1 ). Let S 76 = (E D 76 i = {k j ∈ {k 1 , ..., k i-1 } ∪ H76 : E 76 k i ∩ E 76 k j ̸ = ∅}. This guarantees that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 76 i , • {s k i -w k i + 1, ..., s k i } ∩ {s k -w k j + 1, ..., s k } = ∅ if E 76 k i ∩ E 76 k ̸ = ∅ ( I ′76 i = [ kj ∈D ′76 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k } ∪ {s ′ + w ki , ..., s}] if E ′76 ki ∩ E ′76 k ̸ = ∅ or I ′76 i = kj ∈D ′76 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where D ′76 i = {k j ∈ {k 1 , ..., k i-1 } ∪ H′76 : E ′76 k i ∩ E ′76 k j ̸ = ∅}. As a result, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D ′76 i , • {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if E ′76 k i ∩ E ′76 k ̸ = ∅ ( we take into account the possibility of adding slot s ′ in the selected set of last slots S ′76 k to route demand k in solution S ′76 ). We let S ′76 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S ′76 is feasible for the problem. Hence, the corresponding incidence vector (x S ′76 , z S ′76 ) belongs to F H E S H . Then consider the solution S 78 obtained from S ′76 by adding slot s ′ as last slot to demand k in S ′76 . Solution S 78 is feasible for the problem. The corresponding incidence vector (x S 78 , z S 78 ) belongs to F H E S H . Hence, solutions S ′76 and S 78 satisfy equation µx + σz = τ . We have so µx S ′76 + σz S ′76 = µx S 78 + σz S 78 = µx S ′76 + σz S ′76 + σ k s ′ . Hence, σ k s ′ = 0. In a similar way, we can show that where σ k s = 0, D 80 i = {k j ∈ {k 1 , ..., k i-1 } ∪ H80 : E 80 k i ∩ E 80 k j ̸ = ∅}. 
Hence, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 80 i , • {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ if E 80 k i ∩ E 80 k ′ ̸ = ∅ ( k with v k,s ∈ H and v k,s / ∈ H such that S ′80 k = (S 80 k \ {s}) ∪ {s} for demand k such that {s -w k + 1, ..., s} ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ for each k ′ ∈ K and s ′ ∈ S ′80 k ′ with E ′80 k ∩ E ′80 k ′ ̸ = ∅. The corresponding incidence vector (x S ′80 , z S ′80 ) belongs to F H E S H . Hence, solutions S 80 and S ′80 satisfy equation µx + σz = τ . We have so µx S 80 + σz S 80 = µx S ′80 + σz S ′80 = µx S 80 + σz S 80 + σ k ′ s ′ -σ k s + σ k s . It follows that σ k ′ s ′ = σ k s for demand k ′ and a slot s ′ ∈ {w k ′ , ..., s} with v k ′ ,s ′ ∈ H given that σ k s = 0 for v k,s / ∈ H. 123 By (2.17) and (2.18), we know that          µ k ′ e ′ = γ k ′ ,e ′ 1 for all k ′ ∈ K and e ′ ∈ E k ′ 0 , µ k ′ e ′ = γ k ′ ,e ′ 2 for all k ′ ∈ K and e ′ ∈ E k ′ 1 , σ k ′ s ′ = γ k ′ ,s ′ 3 for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}. We conclude that for each k ∈ K and e ∈ E µ k e =          γ k,e 1 if e ∈ E k 0 γ k,e 2 if e ∈ E k 1 0 otherwise, and for each k ∈ K and s ∈ S σ k s =          γ k,s 3 if s ∈ {1, ..., w k -1} ρ if v k,s ∈ H, 0 if v k,s / ∈ H. As a result, we have (µ, σ) = ρ(α, β) + γQ. Theorem 2.4.15. Let H be an odd-hole, and C be a clique in the conflict graph H E S with a) |H| ≥ 5, b) and H ∩ C = ∅, c) and the nodes (v k,s , v k ′ ,s ′ ) are linked in H E S for all v k,s ∈ H and v k ′ ,s ′ ∈ C, d) {s -w k + 1, ..., 1} ∩ {s ′ -w k ′ + 1, ..., s ′ } ̸ = ∅ for each (v k,s , v k ′ ,s ′ ) ∈ C and pair of nodes (v k,s , v k ′ ,s ′ ) linked in H. Then, inequality (2.45) is facet defining for P(G, K, S) if and only if a) for each node v k",s" in H E S with v k",s" / ∈ H ∪ C and C ∪ {v k",s" } is a clique in H E S , there exists a subset of nodes H ⊆ H of size |H|-1 2 such that H ∪ {v k",s" } is stable in H E S , b) and there does not exist an interval of contiguous slots I = [s i , s j ] ⊂ [1, s] with • [ min v k,s ∈H∪C (s -w k + 1), max v k,s ∈H∪C ] ⊂ I, • and w k + w k ′ ≥ |I| + 1 for each (v k , v k ′ ) linked in H, • and w k + w k ′ ≥ |I| + 1 for each (v k , v k ′ ) linked in C, • and w k + w k ′ ≥ |I| + 1 for each v k ∈ H and v k ′ ∈ C, • and 2w k ≥ |I| + 1 and w k ≤ |I| for each v k ∈ H, • and 2w k ′ ≥ |I| + 1 and w k ′ ≤ |I| for each v k ′ ∈ C. Proof. Neccessity. We distinguish the following cases: a) if there exists a node v k",s" / ∈ H ∪ C in H E S such that v k",s" is linked with all nodes v k,s ∈ H and also with all nodes v k ′ ,s ′ ∈ C. This implies that inequality (2.45) can be dominated by the following valid inequality v k,s ∈H z k s + |H| -1 2 v k ′ ,s ′ ∈C z k ′ s ′ + |H| -1 2 z k" s" ≤ |H| -1 2 . b) if F H E S H,C = {(x, z) ∈ P(G, K, S) : v k,s ∈H z k s + |H| -1 2 v k ′ ,s ′ ∈C z k ′ s ′ = |H| -1 2 }. Let denote inequality v k,s ∈H z k s + |H|-1 2 v k ′ ,s ′ ∈C z k ′ s ′ ≤ |H|-1 2 by αx + βz ≤ λ. Let µx + σz ≤ τ be a facet defining inequality for P(G, K, S) and F = {(x, z) ∈ P(G, K, S) : µx + σz = τ }. Suppose that F H E S H,C ⊆ F . To prove that F H E S H,C is a facet of P(G, K, S), it suffices to show that there exist ρ ∈ R and γ = (γ 1 , γ 2 , γ 3 ) (such that where γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}) such that (µ, σ) = ρ(α, β) + γQ. D 82 i = {k j ∈ {k 1 , ..., k i-1 } ∪ H82 : E 82 k i ∩ E 82 k j ̸ = ∅}. 
Hence, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 82 i , • {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ if E 82 k i ∩ E 82 k ′ ̸ = ∅ ( we take into account the possibility of adding slot s ′ in the selected set of last slots S 82 k ′ to route demand k ′ in solution S 82 ). We let S 82 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S 82 is feasible for the problem. Hence, the corresponding incidence vector (x S 82 , z k ∈ S 82 k with v k,s k ∈ H and v k,s k / ∈ H ∪ C such that S 83 k = (S 82 k \ {s k }) ∪ {s k } for each demand k ∈ { k ∈ K with v k,s ∈ H82 } such that {s -w k + 1, ..., s} ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ for each k ′ ∈ K and s ′ ∈ S 83 k ′ with E 83 k ∩ E 83 k ′ ̸ = ∅. Solution S 83 is feasible for the problem. The corresponding incidence vector (x S S 82 + σ k ′ s ′ - (k,s k )∈ H82 σ k s k + k∈K H σ k sk . where K H = { k ∈ K with v k,s ∈ H82 }. Since σ k sk = 0 for v k,s k / ∈ H ∪ C, it follows that σ k ′ s ′ = (k,s k )∈ H82 σ k s k . As a result, σ k ′ s ′ = ρ |H|-1 2 given that σ k s are equivalent for all v k,s ∈ H. Given that the pair v k ′ ,s ′ is chosen arbitrarily in clique C, we re-do the same procedure for all v k ′ ,s ′ ∈ C. Consequently, we obtain that σ k ′ s ′ = ρ |H|-1 2 for all v k ′ ,s ′ ∈ C. Overall, and using the results (2.17) and (2.18), we obtain that µ k e =          γ k,e 1 if e ∈ E k 0 , γ k,e 2 if e ∈ E k 1 , 0 otherwise, for each k ∈ K and e ∈ E, and σ k s =                  γ k,s 3 if s ∈ {1, ..., w k -1}, ρ if v k,s ∈ H, ρ |H|-1 2 if v k,s ∈ C, 0 otherwise, for each k ∈ K and s ∈ S. As a consequence, we obtain that (µ, σ) = ρ(α, β) + γQ. Let us now introduce some valid inequalities that are related to the routing sub-problem issus from the transmission-reach constraint. Incompatibility-Clique Inequalities C ′ ⊆ C with |C ′ | = |C| -1 v k,e ∈C ′ x k e ≤ 1. By adding the previous inequalities for all C ′ ⊆ C with |C ′ | = |C| -1, we get C ′ ⊆C |C ′ |=|C|-1 v k,e ∈C ′ x k e ≤ C ′ ⊆C |C ′ |=|C|-1 1. Note that for each demand k and edge e with v k,e ∈ C, the variable x k e appears ( |C| |C|-1 -1) = |C| -1 times in the previous sum. It follows that v k,e ∈C (|C| -1)x k e ≤ |C|. By dividing the two sides of the previous sum by |C| -1, we have so v k,e ∈C x k e ≤ |C| |C| -1 ⇒ v k,e ∈C x k e ≤ 1 given that |C| |C| -1 = 1. This ends the proof. F H K E C = {(x, z) ∈ P(G, K, S) : v k,e ∈C x k e = 1}. Let denote inequality v k,e ∈C x k e ≤ 1 by αx + βz ≤ λ. Let µx + σz ≤ τ be a facet defining inequality for P(G, K, S) and F = {(x, z) ∈ P(G, K, S) : µx + σz = τ }. Suppose that F H K E C ⊆ F . In order to prove that inequality v k,e ∈C x k e ≤ 1 is facet defining for P(G, K, S), we show that there exist ρ ∈ R and γ = (γ 1 , γ 2 , γ 3 ) (such that where As a result, µ k e = 0. In a similar way, we can show that γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}) such that (µ, σ) = ρ(α, β) + γQ. We first show that µ k e = 0 for each edge e ∈ E \ (E k 0 ∪ E k 1 ) for each demand k ∈ K with v k,e / ∈ C D 84 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E 84 k i ∩ E 84 k j ̸ = ∅}. 
Hence, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 84 i , • {s k i -w k i + 1, ..., s k i } ∩ {s k -w k + 1, ..., s k } = ∅ if E 84 k i ∩ (E 84 k ∪ {e}) ̸ = ∅ ( µ k e = 0, for all k ∈ K and e ∈ E \ (E k 0 ∪ E k 1 ) with v k,e / ∈ C. Let show that σ k s = 0 for all k ∈ K and s ∈ {w k , ..., s}. Consider a demand k in K and a slot s ′ in {w k , ..., s}, and a solution S ′84 = (E ′84 , S ′84 ) such that a) select one pair of demand k ′ and edge e ′ from clique C (i.e., v k ′ ,e ′ ∈ C), we let E ′84 k ′ be the set of edges involved in a shortest path between o k ′ and d k ′ which uses edge e ′ , b) for each pair of demand k" and edge e" with v k",e" ∈ C \{v k,e }, we let E ′84 k" be the set of edges involved in a shortest path between o k" and d k" which uses edge e" which does not use edge e", c) for each demand k i ∈ K \ C with i ∈ {1, ..., |K|} \ {k}, we let E ′84 k i be the set of edges involved in a shortest path between o k i and d k i , d) for each demand k i ∈ K with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I ′84 i given by I ′84 i = [ kj ∈D ′84 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k } ∪ {s ′ + w ki , ..., s}] if E ′84 ki ∩ E ′84 k ̸ = ∅ or I ′84 i = kj ∈D ′84 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not. where D ′84 i = {k j ∈ {k 1 , ..., k i-1 } : E ′84 k i ∩ E ′84 k j ̸ = ∅}. As a result, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D ′84 i , • {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if E ′84 k i ∩ E ′84 k ̸ = ∅ ( we take into account the possibility of adding slot s ′ as a last slot in the set of last slots S ′84 k to route demand k in solution S ′84 ). We let S ′84 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S ′84 is feasible for the problem. Hence, the corresponding incidence vector (x S ′84 , z ) such that v k ′ ,e ′ and v k,e" linked in C, c) modifying the last slots assigned to some demands K ⊂ K from S 84 k to S 87 k for each k ∈ K while satisfying non-overlapping constraint. The paths assigned to the demands K \ {k, k ′ } in S 84 remain the same in S 87 (i.e., E 87 k" = E 84 k" for each k" ∈ K \ {k, k ′ }), and also without modifying the last slots assigned to the demands K \ K in S 84 µ k e" . Since µ k e" = 0 for all k ∈ K and e" ∈ E \ (E k 0 ∪ E k 1 ) with v k,e" / ∈ C, and σ k s = 0 for all k ∈ K and s ∈ {w k , ..., s}, it follows that µ k ′ e ′ = µ k e . Given that the pair (v k,e , v k ′ ,e ′ ) are chosen arbitrarily in clique C, we re-do the same procedure for all pairs (v k,e , v k ′ ,e ′ ) such that we find µ k e = µ k ′ e ′ , for all pairs (v k,e , v k ′ ,e ′ ) ∈ C. Consequently, we obtain that µ k e = ρ for all v k,e ∈ C. By (2.17) and (2.18), we know that          µ k ′ e ′ = γ k ′ ,e ′ 1 for all k ′ ∈ K and e ′ ∈ E k ′ 0 , µ k ′ e ′ = γ k ′ ,e ′ 2 for all k ′ ∈ K and e ′ ∈ E k ′ 1 , σ k ′ s ′ = γ k ′ ,s ′ 3 for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}. We conclude that for each k ∈ K and e ∈ E As a consequence, (µ, σ) = ρ(α, β) + γQ. µ k e =                  γ k,e 1 if e ∈ E k 0 , γ k,e 2 if e ∈ E k 1 , ρ if v k,e ∈ C, Incompatibility-Odd-Hole Inequalities (v k e ,v k ′ e ′ )∈E(H) x k e + x k ′ e ′ ≤ |H| -1. Taking into account that each node v k e in H has two neighbors in H, this implies that x k e appears twice in the previous inequality. 
As a result, (v k e ,v k ′ e ′ )∈E(H) x k e + x k ′ e ′ = v k e ∈H 2x k e =⇒ v k e ∈H 2x k e ≤ |H| -1 =⇒ v k e ∈H x k e ≤ |H| -1 2 = |H| -1 2 since |H| is an odd number. We conclude at the end that inequality (2.47) is valid for P(G, K, S). Then, the inequality Inequality v k e ∈H x k e + |H| -1 2 v k ′ e ′ ∈C x k ′ e ′ ≤ |H| -1 2 , (2.48) is valid for P(G, K, S). Proof. It is trivial given the definition of the odd-hole and clique in H K E such that if v k ′ e ′ ∈C x k ′ e ′ = 1 for a v k ′ e ′ ∈ C, which implies that the quantity v k e ∈H x k e is forced to be equal to 0. Otherwise, we know from inequality (2.47) that the sum v k e ∈H x k e should be smaller than |H|-1 2 . We strengthen the proof as belows. For each pair of nodes (v k,e , v k ′ ,e ′ ) linked in H by an edge, we know that x k e + x k ′ e ′ + v k",e" ∈C x k" e" ≤ 1 given that all the nodes v k",e" ∈ C are linked with the nodes v k,e and v k ′ ,e ′ . Given that H is an odd-hole which means that we have |H| -1 pair of nodes (v k,e , v k ′ ,e ′ ) linked in H, and by doing a sum for all pairs of nodes (v k,e , v k ′ ,e ′ ) linked in H, it follows that (v k,e ,v k ′ ,e ′ )∈E(H) x k e + x k ′ e ′ + v k",e" ∈C x k" e" ≤ |H| -1. Taking into account that each node v k,e has two neighbors in H, this implies that x k e appears twice in the previous inequality. The sum v k",e" ∈C x k" e" appears |H| -1 times in in the previous inequality. As a result, (v k,e ,v k ′ ,e ′ )∈E(H) x k e + x k ′ e ′ + (|H| -1) v k",e" ∈C x k" e" ≤ |H| -1 ⇒ v k,e ∈H 2x k e + (|H| -1) v k",e" ∈C x k" e" ≤ |H| -1. By dividing the two sides of the previous sum by 2, and since |H| is an odd number, it follows that v k,e ∈H x k e + |H| -1 2 v k",e" ∈C x k" e" ≤ |H| -1 2 = |H| -1 2 . We conclude at the end that inequality (2.48) is valid for P(G, K, S). v k ′ ,e ′ ∈H ′ x k ′ e ′ ≤ |H ′ |-1 2 . b) if there exists a node v k ′ ,e ′ / ∈ H in H K E such that v k ′ ,e ′ is linked with all nodes v k,e ∈ H. This implies that inequality (2.47) can be dominated by the following valid inequality v k,e ∈H x k e + |H| -1 2 x k ′ e ′ ≤ |H| -1 2 . If no one of these cases is verified, inequality (2.47) can never be dominated by another inequality without changing its right-hand side. Otherwise, inequality (2.47) is not facet defining for P(G, K, S). Sufficiency. Let F H K E H denote the face induced by inequality (2.47), that is F H K E H = {(x, z) ∈ P(G, K, S) : v k,e ∈H x k e = |H| -1 2 }. Suppose that F Denote inequality v H K E H ⊆ F . In order to prove that inequality v k,e ∈H x k e = |H|-1 2 is facet defining for P(G, K, S), we show that there exist ρ ∈ R and γ = (γ 1 , γ 2 , γ 3 ) (such that γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}) such that (µ, σ) = ρ(α, β)+γQ. Let first show that µ k e = 0 for each edge e where As a result, µ k e = 0. In a similar way, we can show that ∈ E \ (E k 0 ∪ E k 1 ) for each demand k ∈ K with v k,e / ∈ H D 88 i = {k j ∈ {k 1 , ..., k i-1 } : E 88 k i ∩ E 88 k j ̸ = ∅}. As a result, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 88 i , • {s k i -w k i + 1, ..., s k i } ∩ {s k -w k + 1, ..., s k } = ∅ if E 88 k i ∩ (E 88 k ∪ {e}) ̸ = ∅ ( µ k e = 0, for all k ∈ K and e ∈ E \ (E k 0 ∪ E k 1 ) with v k,e / ∈ H. Let show that σ k s = 0 for all k ∈ K and s ∈ {w k , ..., s}. Consider a demand k in K and a slot s ′ in {w k , ..., s}. 
Let S ′88 = (E ′88 , S ′88 ) be the solution given by a) for each demand k i ∈ K with i ∈ {1, ..., |K|}, we let E ′88 k i be the set of edges involved in a shortest path between o k i and where d k i , b) select a subset of nodes H′88 from H with | H′88 | = |H|-1 2 , D ′88 i = {k j ∈ {k 1 , ..., k i-1 } : E ′88 k i ∩ E ′88 k j ̸ = ∅}. This ensures that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D ′88 i , • {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if E ′88 k i ∩ E ′88 k ̸ = ∅ ( we take into account the possibility of adding slot s ′ as a last slot in the set of last slots S ′88 k to route demand k in solution S ′88 ). We let S ′88 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S ′88 is feasible for the problem. Hence, the corresponding incidence vector (x S ′88 , z S ′88 ) belongs to F H K E H . After that, we derive solution S 90 obtained from S ′88 by adding slot s ′ as last slot to demand k in S ′88 . Solution S 90 is clearly feasible for the problem. The corresponding incidence vector (x S 90 , z S 90 ) belongs to F H K E H . Hence, solutions S ′88 and S 90 satisfy equation µx + σz = τ . We have so µx S ′88 + σz S ′88 = µx S 90 + σz S 90 = µx S ′88 + σz S ′88 + σ k s ′ . Hence, σ k s ′ = 0. In a similar way, we can show that µ k e" . σ k s = 0, Since µ k e" = 0 for all k ∈ K and e" ∈ E \ (E k 0 ∪ E k 1 ) with v k,e" / ∈ H, and σ k s = 0 for all k ∈ K and s ∈ {w k , ..., s}, it follows that µ k ′ e ′ = µ k e . Given that the pair (v k,e , v k ′ ,e ′ ) are chosen arbitrarily in odd-hole H, we re-do the same procedure for all pairs (v k,e , v k ′ ,e ′ ) such that we find µ k e = µ k ′ e ′ , for all pairs (v k,e , v k ′ ,e ′ ) ∈ H. Consequently, we obtain that µ k e = ρ for all v k,e ∈ H. We know from (2.17) and (2.18) that          µ k ′ e ′ = γ k ′ ,e ′ 1 for all k ′ ∈ K and e ′ ∈ E k ′ 0 , µ k ′ e ′ = γ k ′ ,e ′ 2 for all k ′ ∈ K and e ′ ∈ E k ′ 1 , σ k ′ s ′ = γ k ′ ,s ′ We then conclude that for each k ∈ K and e ∈ E µ k e =                  γ k,e 1 if e ∈ E k 0 , γ k,e 2 if e ∈ E k 1 , ρ if v k,e ∈ H, 0 otherwise, and for each k ∈ K and s ∈ S σ k s =    γ k,s 3 if s ∈ {1, ..., w k -1}, 0 otherwise. As a consequence, (µ, σ) = ρ(α, β) + γQ. Theorem 2.4.18. Let H be an odd-hole, and C be a clique in the conflict graph Proof. Neccessity. H K E with a) |H| ≥ 5, b) and H ∩ C = ∅, c) If there exists a node v k",e" / ∈ H ∪ C in H K E such that v k",e" is linked with all nodes v k,e ∈ H and also with all nodes v k ′ ,e ′ ∈ C. This implies that inequality (2.48) can be dominated by the following valid inequality v k,e ∈H x k e + |H| -1 2 v k ′ ,e ′ ∈C x k ′ e ′ + |H| -1 2 x k" e" ≤ |H| -1 2 . As a result, inequality (2.48) is not facet defining for P(G, K, S). Sufficiency. Let F H K E H,C be the face induced by inequality (2.48), that is F H K E H,C = {(x, z) ∈ P(G, K, S) : v k,e ∈H x k e + |H| -1 2 v k ′ ,e ′ ∈C x k ′ e ′ = |H| -1 2 }. Let denote inequality v k,e ∈H x k e + |H|-1 2 v k ′ ,e ′ ∈C x k ′ e ′ ≤ |H|-1 2 by αx + βz ≤ λ. Let µx + σz ≤ τ be a facet defining inequality for P(G, K, S) and F = {(x, z) ∈ P(G, K, S) : µx + σz = τ }. Suppose that F H K E H,C ⊆ F . To prove that F H K E H,C is a facet of P(G, K, S), we need to show that there exists ρ ∈ R and γ = (γ 1 , γ 2 , γ 3 ) (such that c) modifying the last slots assigned to some demands K ⊂ K from S 92 k to S ′93 k for each k ∈ K while satisfying non-overlapping constraint. 
γ k,e 1 ∈ R for all k ′ ∈ K and e ∈ E k ′ 0 , γ k,e 2 ∈ R for all k ′ ∈ K and e ∈ E k ′ 1 , γ k ′ ,s ′ 3 ∈ R for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}) such that (µ, σ) = ρ(α, The paths assigned to the demands K \ (K(H 92 ) ∪ {k ′ }) in S 92 remain the same in S ′93 (i.e., E ′93 k" = E 92 k" for each k" ∈ K \ {k, k ′ }), and also without modifying the last slots assigned to the demands K \ K in S 92 , i.e., S 92 k = S ′93 k for each demand k ∈ K \ K. Solution S ′93 is feasible for the problem. The corresponding incidence vector (x S ′93 , z S ′93 ) belongs to F H K E H,C . Hence, solutions S 92 and S ′93 satisfy equation µx + σz = τ . We have so µx S 92 + σz S 92 = µx S ′93 + σz S ′93 = µx S 92 + σz S 92 + µ k ′ e ′ - v k,e ∈H 92 µ k e + k∈ K s ′ ∈S ′93 k σ k s ′ - s∈S 92 k σ k s + e"∈E ′93 k ′ \{e ′ } µ k ′ e" - e"∈E 92 k ′ µ k ′ e" + e"∈E ′93 k µ k e" - k∈K(H 92 ) e"∈E 92 k µ k e" . Since µ k e" = 0 for all k ∈ K and e" ∈ E \ (E k 0 ∪ E k 1 ) with v k,e" / ∈ H ∪ C, and σ k s = 0 for all k ∈ K and s ∈ {w k , ..., s}, it follows that µ k ′ e ′ = v k,e ∈H 92 µ k e . As a result, µ k ′ e ′ = ρ |H|-1 2 . Given that the pair v k ′ ,e ′ is chosen arbitrarily in clique C, we re-do the same procedure for all pairs v k ′ ,e ′ ∈ C such that we find µ k ′ e ′ = ρ |H| -1 2 , for all pairs v k ′ ,e ′ ∈ C. As a result, all µ k ′ e ′ ∈ C are equivalent such that µ k ′ e ′ = µ k" e" = ρ |H| -1 2 , for all pairs v k ′ ,e ′ , v k",e" ∈ C. By (2.17) and (2.18), we know that          µ k ′ e ′ = γ k ′ ,e ′ 1 for all k ′ ∈ K and e ′ ∈ E k ′ 0 , µ k ′ e ′ = γ k ′ ,e ′ 2 for all k ′ ∈ K and e ′ ∈ E k ′ 1 , σ k ′ s ′ = γ k ′ ,s ′ 3 for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}. As a result, we obtain that for each k ∈ K and e ∈ E µ k e =                        γ k,e 1 if e ∈ E k 0 , γ k,e 2 if e ∈ E k 1 , ρ if v k,e ∈ H, ρ |H|-1 2 if v k,e ∈ C, 0 otherwise, and for each k ∈ K and s ∈ S σ k s =    γ k,s 3 if s ∈ {1, ..., w k -1}, 0 otherwise. As a result, we have (µ, σ) = ρ(α, β) + γQ. Tranmission P(G, K, S, C, k) = {(x, z) ∈ P(G, K, S) : e ′ ∈E\(E k 1 ∪E k 0 ) x k e ′ = 0}. Proof. It is trivial given that inequality (2.49) can never be dominated in P(G, K, S, C, k). On the other hand, one can use sequential lifting procedure [START_REF] Balas | Facets of the Knapsack Polytope From Minimal Covers[END_REF] to sequentially lift the inequality (2.49) and generate lifted valid inequalities that are facet defining for the polytope P(G, K, S) as follows. Theorem 2.4.20. Let C be a minimal cover for a demand k ∈ K. Let E \ (E k 1 ∪ C ∪ E k 0 ) = {e 1 , ..., e q } be arbitrarily ordred with q = |E \ (E k 1 ∪ C ∪ E k 0 )|. Consider a sequence of knapsack problems defined as                      z i = max j∈C u j + i-1 j=1 α j u j , j∈C l j u j + i-1 j=1 l j u j ≤ lk - e ′ ∈E k 1 ℓ e ′ -l e i , u j ∈ {0, 1}, ∀j ∈ C ∪ {1, ..., i -1}, (2.50) for all i ∈ {1, ..., q} with α j = |C| -1 -z j for all j ∈ {1, ..., i -1}. Then, the inequality e∈C x k e + q j=1 α j x k e j ≤ |C| -1, (2.51) is valid for P(G, K, S). Moreover, it's facet defining for P(G, K, S). Proof. It's trivial given that inequality (2.51) can never be dominated in P(G, K, S). Definition 2.4.10. Consider a demand k ∈ K. Let p be a sub-path in G such that each pair of edges (v k,e , v k,e ′ ) ∈ E \(E k 0 ∪E k 1 ) are not linked by an edge in the conflict graph HK E . 
We say that the path p is infeasible for the demand k if it does not occur as a sub-path in any feasible routing for the demand k, i.e., there does not exist a feasible path for demand k containing p, due to the transmission-reach constraint. Moreover, it is said to be minimal infeasible if each sub-path p′ of p of cardinality |E(p′)| = |E(p)| - 1 can be used in a feasible routing for the demand k. Note that one can verify in polynomial time, using Dijkstra's algorithm, whether a sub-path p in G is infeasible or not for a demand k ∈ K. Then, for a minimal infeasible sub-path p of a demand k ∈ K, the inequality

∑_{e∈E(p)} x^k_e ≤ |E(p)| - 1, (2.52)

is valid for P(G, K, S).

Proof. It is trivial given that p is a minimal infeasible sub-path for demand k, which means that at most |E(p)| - 1 edges from the set of edges E(p) can be used to route demand k.

Edge-Capacity-Cover Inequalities

Let us now provide some inequalities related to the capacity constraint over the edges.

Proposition 2.4.25. Consider an edge e in E. Then, the inequality

∑_{k∈K\K_e} w_k x^k_e ≤ s̄ - ∑_{k′∈K_e} w_{k′}, (2.53)

is valid for P(G, K, S).

Proof. The number of slots allocated on edge e ∈ E must be less than the residual capacity of edge e, which is equal to s̄ - ∑_{k′∈K_e} w_{k′}.

Based on this, we introduce the following definitions.

Definition 2.4.11. For an edge e ∈ E, a subset of demands C ⊆ K with e ∉ E^k_0 ∪ E^k_1 for each demand k ∈ C is said to be a cover for edge e if ∑_{k∈C} w_k > s̄ - ∑_{k′∈K_e} w_{k′}. Moreover, it is said to be a minimal cover if ∑_{k′∈C\{k}} w_{k′} ≤ s̄ - ∑_{k″∈K_e} w_{k″} for each demand k ∈ C.

For a minimal cover C for an edge e ∈ E, the inequality

∑_{k∈C} x^k_e ≤ |C| - 1, (2.54)

is valid for P(G, K, S). Moreover, it is facet defining for the restricted polytope P(G, K, S, C, e) = {(x, z) ∈ P(G, K, S) : ∑_{k′∈K\(C∪K̄_e)} x^{k′}_e = 0}.

Proof. It is trivial given that inequality (2.54) can never be dominated in P(G, K, S, C, e).

One can use the sequential lifting procedure [START_REF] Balas | Facets of the Knapsack Polytope From Minimal Covers[END_REF] to sequentially lift the inequality (2.54) and generate lifted facets for the polytope P(G, K, S) as follows.

Theorem 2.4.22. Let C be a minimal cover for an edge e ∈ E. Let K \ (K_e ∪ C ∪ K̄_e) = {k_1, ..., k_q} be arbitrarily ordered with q = |K \ (K_e ∪ C ∪ K̄_e)|. Consider a sequence of knapsack problems defined as

z_i = max ∑_{j∈C} u_j + ∑_{j=1}^{i-1} α_j u_j,
subject to ∑_{j∈C} w_j u_j + ∑_{j=1}^{i-1} w_{k_j} u_j ≤ s̄ - ∑_{k′∈K_e} w_{k′} - w_{k_i},
u_j ∈ {0, 1}, ∀j ∈ C ∪ {1, ..., i - 1}, (2.55)

for all i ∈ {1, ..., q}, with α_j = |C| - 1 - z_j for all j ∈ {1, ..., i - 1}. Then, the inequality

∑_{k∈C} x^k_e + ∑_{j=1}^{q} α_j x^{k_j}_e ≤ |C| - 1, (2.56)

is valid for P(G, K, S). Moreover, it is facet defining for P(G, K, S).

Proof. It is trivial given that inequality (2.56) can never be dominated in P(G, K, S).

Symmetry-Breaking Inequalities

We have noticed that several symmetric solutions may appear, given that there exist several feasible solutions with the same solution value (called equivalent solutions); they can be obtained by permuting the slots assigned to some demands, without changing the selected paths (routing), while still satisfying the C-RSA constraints. There exist several methods to break this symmetry. See, for example, the perturbation method proposed by Margot [START_REF] Margot | Symmetry in integer linear programming[END_REF], the isomorphism pruning method by Margot et al. [START_REF] Margot | Pruning by isomorphism in branch-and-cut[END_REF][68], the orbital branching method by Ostrowski et al. [START_REF] Ostrowski | Symmetry in scheduling problems[END_REF][76], the orbital fixing method by Kaibel et al.
[START_REF] Kaibel | Orbitopal fixing[END_REF], and the symmetry-breaking constraints of Kaibel and Pfetsch [START_REF] Kaibel | Packing and partitioning orbitopes[END_REF], which are applied in our study. We aim to introduce symmetry-breaking inequalities that remove from the enumeration tree the subproblems that are equivalent due to the equivalence of their associated solutions. For this, we derive the following inequalities.

Proposition 2.5.1. Consider a demand k and a slot s ∈ {1, ..., s̄ - 1}, and let s′ be a slot in {s + 1, ..., s̄}. Then,

∑_{s″=s′}^{min(s′+w_k-1, s̄)} z^k_{s″} - ∑_{k′∈K} ∑_{s″=s}^{min(s+w_{k′}-1, s̄)} z^{k′}_{s″} ≤ 0. (2.57)

This ensures that slot s′ can be assigned to demand k only if slot s (which precedes slot s′) is already assigned to at least one demand k′ in K. A similar idea was proposed by Mendez-Diaz and Zabala [START_REF] Méndez-Díaz | A Branch-and-Cut algorithm for graph coloring[END_REF] to break the symmetry of the vertex coloring problem. Note that inequalities (2.57) are not valid for the polytope P(G, K, S), given that they cut off some feasible regions of P(G, K, S). In any case, we ensure that at least one optimal solution of the original problem remains feasible and belongs to the convex hull of non-symmetric solutions of the C-RSA problem.

Lower Bounds

In this section, we derive some lower bounds for the C-RSA. Let p*_k denote the minimum-cost path between the origin node o_k and the destination node d_k of demand k with total length at most the transmission reach l̄_k. We know in advance that the total cost of the path chosen for each demand k ∈ K in an optimal solution is at least equal to the total cost of the minimum-cost path p*_k. Based on this, we introduce the following inequalities. For each demand k ∈ K, the inequality

∑_{e∈E} c_e x^k_e ≥ ∑_{e∈p*_k} c_e, (2.58)

is valid for P(G, K, S).

Proof. It is trivial given that, in any feasible solution S in P(G, K, S), the total cost of the path selected for demand k is greater than or equal to the total cost of the minimum-cost path p*_k.

Inequality (2.58) is then used to derive a lower bound for the C-RSA as follows:

∑_{k∈K} ∑_{e∈E} c_e x^k_e ≥ ∑_{k∈K} ∑_{e∈p*_k} c_e. (2.59)

Proof. It is trivial given that the optimal value is at least equal to the sum of the total costs of the minimum-cost paths over all the demands in K.

The separation problem associated with inequality (2.59) is equivalent to solving the Resource Constrained Shortest Path (RCSP) problem for each demand k. The RCSP is well known to be an NP-hard problem [START_REF] Dror | Note on the Complexity of the Shortest Path Models for Column Generation in VRPTW[END_REF]. For this, we propose a pseudo-polynomial time algorithm using dynamic programming [START_REF] Dumitrescu | Algorithms for the weight constrained shortest path problem[END_REF] to compute the minimum-cost path of each demand k while satisfying the transmission-reach constraint. For each demand k ∈ K, we associate with each node v ∈ V \ V^k_0 of the graph G a set of labels T_v, where each label corresponds to a different path from the origin node o_k to node v; each label p is specified by a cost equal to ∑_{e∈E(p)} c_e and a weight equal to ∑_{e∈E(p)} ℓ_e. For each demand k and slot s ∈ {w_k, ..., s̄}, the complexity of the algorithm is bounded by O(|E \ E^k_0| * l̄_k) [START_REF] Dumitrescu | Algorithms for the weight constrained shortest path problem[END_REF]. Algorithm 1 summarizes the different steps of the dynamic programming algorithm; a sketch is given below.
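The following is a minimal sketch of the pseudo-polynomial dynamic program behind Algorithm 1, assuming integer edge lengths ℓ_e ≥ 1 and an adjacency-list graph representation; the names and data layout are illustrative assumptions, not the thesis implementation.

```cpp
#include <vector>
#include <algorithm>
#include <limits>

struct Arc { int to; double cost; int len; };   // (c_e, l_e) of one edge

// Minimum cost of an o -> d path of total length <= reach (the transmission
// reach of the demand), or +infinity if no such path exists.
double rcsp(const std::vector<std::vector<Arc>>& adj, int o, int d, int reach) {
    const double INF = std::numeric_limits<double>::infinity();
    int n = (int)adj.size();
    // dp[t][v]: cheapest o -> v path of total length at most t.
    std::vector<std::vector<double>> dp(reach + 1, std::vector<double>(n, INF));
    dp[0][o] = 0.0;
    for (int t = 0; t <= reach; ++t) {
        if (t > 0)                               // "length at most t" inheritance
            for (int v = 0; v < n; ++v)
                dp[t][v] = std::min(dp[t][v], dp[t - 1][v]);
        for (int v = 0; v < n; ++v) {            // extend labels by one edge
            if (dp[t][v] == INF) continue;
            for (const Arc& a : adj[v])
                if (t + a.len <= reach)          // transmission-reach pruning
                    dp[t + a.len][a.to] =
                        std::min(dp[t + a.len][a.to], dp[t][v] + a.cost);
        }
    }
    return dp[reach][d];                         // O(|E| * reach) overall
}
```

Processing the length budgets t in increasing order makes each dp[t][v] final before it is extended, which matches the stated O(|E \ E^k_0| * l̄_k) bound.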
Chapter 3

Branch-and-Cut Algorithm for the C-RSA Problem

Based on the theoretical results presented in Chapter 2, we devise a Branch-and-Cut algorithm to solve the C-RSA problem. Our aim is to study the effectiveness of the algorithm and to assess the impact of the valid inequalities on its performance. First, we give an overview of the algorithm. Then, we describe the separation procedure used for each valid inequality, based on exact algorithms, greedy algorithms, and heuristics. At the end, we provide a detailed study of the behavior of the Branch-and-Cut algorithm.

Branch-and-Cut Algorithm Description

In what follows, we describe the Branch-and-Cut algorithm. Consider an undirected, loopless, and connected graph G = (V, E), specified by a set of nodes V and a multiset E of links. Each link e = ij ∈ E is associated with a length ℓ_e ∈ R+ (in km) and a cost c_e ∈ R+, and each link e ∈ E is divided into s̄ ∈ N+ slots. Let S = {1, ..., s̄} be an optical spectrum of available frequency slots with s̄ ≤ 320, and let K be a multiset of demands such that each demand k ∈ K is specified by an origin node o_k ∈ V, a destination node d_k ∈ V \ {o_k}, a slot-width w_k ∈ Z+, and a transmission reach l̄_k ∈ R+. The linear relaxation LP_0 at the root node is given by

x^k_e = 0, ∀k ∈ K, ∀e ∈ E^k_0,
x^k_e = 1, ∀k ∈ K, ∀e ∈ E^k_1,
z^k_s = 0, ∀k ∈ K, ∀s ∈ {1, ..., w_k - 1},
∑_{s=w_k}^{s̄} z^k_s = 1, ∀k ∈ K,
0 ≤ x^k_e ≤ 1, ∀k ∈ K, ∀e ∈ E,
0 ≤ z^k_s ≤ 1, ∀k ∈ K, ∀s ∈ S.

Test of Feasibility

Given an optimal solution (x̄, z̄) of the linear relaxation LP_0, it is feasible for the C-RSA problem if and only if it is integral and satisfies the cut inequalities (2.2) and the non-overlapping inequalities (2.6). Usually, (x̄, z̄) does not satisfy inequalities (2.2) and (2.6), and is therefore not feasible for the C-RSA problem. For this, we generate, at each iteration of the Branch-and-Cut algorithm, several valid inequalities violated by the solution (x̄, z̄). This is known as the separation procedure: it consists in identifying, for a given class of valid inequalities, one or more inequalities of this class that are violated by the current solution. We repeat this procedure at each iteration of the algorithm until no violated inequality is identified. The final solution is then optimal for the linear relaxation of the cut formulation; furthermore, if it is integral, it is optimal for the C-RSA problem. Otherwise, we create two subproblems, called children, by branching on a fractional variable (variable branching rule) or on some constraints using the Ryan & Foster branching rule (constraint branching rule). Based on this, we devise a basic Branch-and-Cut algorithm by combining a cutting-plane algorithm, based on the separation of the cut inequalities (2.2) and non-overlapping inequalities (2.6), with a Branch-and-Bound algorithm. On the other hand, to make the Branch-and-Cut algorithm more efficient, we have already introduced several classes of valid inequalities used to obtain tighter LP bounds. At each iteration in each node of the Branch-and-Cut tree, one can then identify one or more inequalities of a given class that are violated by the current fractional solution. Algorithm 2 summarizes the different steps of the Branch-and-Cut algorithm when these additional valid inequalities are taken into account; the sketch below illustrates the overall cutting-plane loop. In what follows, we study the separation problem of each class of valid inequalities introduced in this dissertation. Consider a fractional solution (x̄, z̄).
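Before detailing the individual separation routines, the following skeleton sketches the cutting-plane loop run at every node of the B&C tree. The LP interface and separator signature are simplifying assumptions made for the example; in the thesis this role is played by the Cplex/Gurobi/Scip callback machinery.

```cpp
#include <vector>
#include <functional>

struct Solution { std::vector<double> x, z; bool integral = false; };
struct Cut {};                                 // coefficients and rhs of one inequality

// Placeholder LP interface with dummy bodies, standing in for the LP solver.
struct LpRelaxation {
    Solution solve() { return Solution{}; }    // optimize the current relaxation
    void add(const Cut&) {}                    // append a violated inequality
};

// Each separator returns the violated inequalities of one class
// (cut, non-overlapping, clique, cover, odd-hole, ...).
using Separator = std::function<std::vector<Cut>(const Solution&)>;

// Separate all classes, resolve, and repeat until no violated inequality
// remains; returns whether the final LP solution is integral.
bool cuttingPlaneLoop(LpRelaxation& lp, const std::vector<Separator>& separators) {
    while (true) {
        Solution sol = lp.solve();
        bool found = false;
        for (const Separator& separate : separators)
            for (const Cut& cut : separate(sol)) { lp.add(cut); found = true; }
        if (!found) return sol.integral;       // integral => optimal at this node;
                                               // otherwise the caller branches
    }
}
```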
Separation of Non-Overlapping Inequalities

Let e be an edge in E and s a slot in S. The separation problem associated with inequalities (2.6) consists in identifying a pair of demands k, k′ ∈ K such that

x̄^k_e + x̄^{k′}_e + ∑_{s′=s}^{min(s+w_k-1, s̄)} z̄^k_{s′} + ∑_{s″=s}^{min(s+w_{k′}-1, s̄)} z̄^{k′}_{s″} > 3.

For this, we propose an exact algorithm in O(|E| * s̄ * |K| * log(|K|)) which works as follows. We select each pair of demands k, k′ ∈ K with x̄^k_e > 0, ∑_{s′=s}^{min(s+w_k-1, s̄)} z̄^k_{s′} > 0, x̄^{k′}_e > 0 and ∑_{s″=s}^{min(s+w_{k′}-1, s̄)} z̄^{k′}_{s″} > 0. We then add the inequality induced by each selected pair of demands k, k′ for slot s over edge e to the current LP if it is violated:

x^k_e + x^{k′}_e + ∑_{s′=s}^{min(s+w_k-1, s̄)} z^k_{s′} + ∑_{s″=s}^{min(s+w_{k′}-1, s̄)} z^{k′}_{s″} ≤ 3.

Otherwise, we conclude that no such inequality exists for the current solution (x̄, z̄). On the other hand, inequalities (2.5) are written as equalities when implementing the B&C algorithm (i.e., ∑_{s=w_k}^{s̄} z^k_s = 1 for all k ∈ K). Based on this, and taking into account the non-overlapping inequalities (2.6), we propose a new non-overlapping inequality (3.1) that is more efficient than (2.6).

Proposition 3.1.1. Consider an edge e and a pair of demands k, k′ ∈ K with e ∉ E^k_0 ∪ E^{k′}_0. Let s be a slot in {w_k, ..., s̄}. Then, the inequality

x^k_e + x^{k′}_e + z^k_s + ∑_{s″=s-w_k+1}^{min(s+w_{k′}-1, s̄)} z^{k′}_{s″} ≤ 3, (3.1)

is valid for Q(G, K, S).

The separation problem associated with inequality (3.1) can be solved in the same way: we select each pair of demands k, k′ ∈ K with x̄^k_e > 0, z̄^k_s > 0, x̄^{k′}_e > 0 and ∑_{s″=s-w_k+1}^{min(s+w_{k′}-1, s̄)} z̄^{k′}_{s″} > 0, and add the following inequality to the current LP if it is violated:

x^k_e + x^{k′}_e + z^k_s + ∑_{s″=s-w_k+1}^{min(s+w_{k′}-1, s̄)} z^{k′}_{s″} ≤ 3.

Otherwise, we conclude that no inequality from the non-overlapping family (3.1) is violated by the current solution (x̄, z̄). Note that, from an efficiency point of view, inequalities (3.1) replace inequalities (2.6) in the B&C algorithm; a sketch of this pair enumeration is given below.
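The following is a minimal sketch of the enumeration above, assuming the fractional solution is stored densely as x[k][e] and z[k][s] (slots indexed from 1) and that a small tolerance is used for the violation test; these conventions are assumptions made for the example.

```cpp
#include <vector>
#include <algorithm>

struct Violated31 { int e, k, kp, s; };   // one violated inequality (3.1)

std::vector<Violated31> separateNonOverlapping(
        const std::vector<std::vector<double>>& x,   // x[k][e]
        const std::vector<std::vector<double>>& z,   // z[k][s], s in 1..sbar
        const std::vector<int>& w, int sbar) {
    std::vector<Violated31> cuts;
    int K = (int)x.size();
    for (int k = 0; k < K; ++k)
        for (int kp = 0; kp < K; ++kp) {
            if (kp == k) continue;
            for (std::size_t e = 0; e < x[k].size(); ++e) {
                if (x[k][e] <= 0 || x[kp][e] <= 0) continue;   // skip zero pairs
                for (int s = w[k]; s <= sbar; ++s) {
                    if (z[k][s] <= 0) continue;
                    double sum = 0.0;          // slots of k' overlapping slot s
                    for (int t = std::max(1, s - w[k] + 1);
                         t <= std::min(s + w[kp] - 1, sbar); ++t)
                        sum += z[kp][t];
                    if (x[k][e] + x[kp][e] + z[k][s] + sum > 3.0 + 1e-6)
                        cuts.push_back({(int)e, k, kp, s});    // violated (3.1)
                }
            }
        }
    return cuts;
}
```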
Separation of Cut Inequalities

In this section, we discuss the separation problem of the cut inequalities (2.2). The associated separation problem consists in identifying a cut inequality (2.2) that is violated by (x̄, z̄). For each demand k ∈ K, this can be done in polynomial time [START_REF] Ford | Maximal flow through a network[END_REF], as shown by the theorem of Ford and Fulkerson, by finding a minimum cut separating the origin node o_k and the destination node d_k. As a result, this can be done exactly [START_REF] Ford | Maximal flow through a network[END_REF] and very effectively in O(|V \ V^k_0|^2 * |E \ E^k_0|) for each demand k, using an efficient implementation of a minimum-cut algorithm based on the preflow push-relabel algorithm of Goldberg and Tarjan [START_REF] Goldberg | A New Approach to the Maximum Flow Problem[END_REF].

Separation of Edge-Slot-Assignment Inequalities

Consider an edge e ∈ E and a slot s ∈ S. The separation problem associated with inequality (2.23) consists in identifying a subset of demands K̃* ⊂ K such that

∑_{k∈K̃*} (x̄^k_e + ∑_{s′=s}^{min(s+w_k-1, s̄)} z̄^k_{s′}) > |K̃*| + 1.

For this, we propose an exact algorithm in O(|K| * |E| * s̄) which works as follows. The main idea is to iteratively add each demand k ∈ K to K̃* if and only if x̄^k_e > 0 and ∑_{s′=s}^{min(s+w_k-1, s̄)} z̄^k_{s′} > 0. We then add the inequality induced by K̃* to the current LP if it is violated and satisfies the validity conditions of inequality (2.23):

∑_{k∈K̃*} (x^k_e + ∑_{s′=s}^{min(s+w_k-1, s̄)} z^k_{s′}) ≤ |K̃*| + 1.

Otherwise, we conclude that no such inequality exists for the current solution (x̄, z̄). Moreover, if such a violated inequality is identified, it can easily be lifted by introducing inequality (2.25), induced by K̃* and a subset of demands K_e \ K̃*, as follows:

∑_{k∈K̃*} (x^k_e + ∑_{s′=s}^{min(s+w_k-1, s̄)} z^k_{s′}) + ∑_{k′∈K_e\K̃*} ∑_{s′=s}^{min(s+w_{k′}-1, s̄)} z^{k′}_{s′} ≤ |K̃*| + 1.

Remark 3.1.1. Inequality (3.1) is a particular case of inequality (2.42) for a clique C = {v_{k,s}} ∪ {v_{k′,s′} ∈ H^e_S such that {s′ - w_{k′} + 1, ..., s′} ∩ {s - w_k + 1, ..., s} ≠ ∅}.

A sketch of the accumulation procedure used for this separation is given below.
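The following sketch mirrors the exact O(|K| * |E| * s̄) procedure above for one pair (e, s): every demand with a positive x̄ value and a positive overlapping z̄ sum is accumulated, and the violation is tested at the end. The data layout is an assumption made for the example.

```cpp
#include <vector>
#include <algorithm>

// Returns the demand set K* whose inequality (2.23) for (e, s) is violated,
// or an empty set when no violated inequality exists for this pair.
std::vector<int> separateEdgeSlotAssignment(
        const std::vector<std::vector<double>>& x,   // x[k][e]
        const std::vector<std::vector<double>>& z,   // z[k][s], s in 1..sbar
        const std::vector<int>& w, int e, int s, int sbar) {
    std::vector<int> Kstar;
    double lhs = 0.0;
    for (int k = 0; k < (int)x.size(); ++k) {
        double zsum = 0.0;
        for (int t = s; t <= std::min(s + w[k] - 1, sbar); ++t) zsum += z[k][t];
        if (x[k][e] > 0 && zsum > 0) {        // demand k contributes, add it to K*
            Kstar.push_back(k);
            lhs += x[k][e] + zsum;
        }
    }
    if (lhs > (double)Kstar.size() + 1.0 + 1e-6) return Kstar;  // violated
    return {};
}
```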
Separation of Edge-Slot-Assignment-Clique Inequalities

Consider an edge e ∈ E. The separation algorithm for inequality (2.42) consists in identifying a maximal clique C* in the conflict graph H^e_S. To do this, we use the greedy algorithm introduced by Nemhauser and Sigismondi [START_REF] Nemhauser | A Strong Cutting Plane/Branch-and-Bound Algorithm for Node Packing[END_REF], given that computing a maximal clique in such a graph is an NP-hard problem [START_REF] Karp | Reducibility among Combinatorial Problems[END_REF]. We first assign the positive weight z̄^k_s * x̄^k_e to each node v_{k,s} of the conflict graph H^e_S. We then select a node v_{k,s} of H^e_S having the largest weight compared with the other nodes of H^e_S, and set C* = {v_{k,s}}. After that, we iteratively add each node v_{k′,s′} with z̄^{k′}_{s′} > 0 and x̄^{k′}_e > 0 to the current C* if it is linked with all the nodes v_{k,s} already assigned to C*. At the end, we add inequality (2.42) induced by the clique C* for edge e to the current LP if it is violated and satisfies the validity conditions of inequality (2.42), i.e., we add

∑_{v_{k,s}∈C*} (x^k_e + z^k_s) ≤ |C*| + 1.

Furthermore, it can be lifted by identifying a maximal clique N* such that each v_{k′,s′} ∈ N* is linked with all the nodes v_{k,s} ∈ C* ∪ (N* \ {v_{k′,s′}}) in H^e_S. For this, we also use the greedy algorithm of Nemhauser and Sigismondi [START_REF] Nemhauser | A Strong Cutting Plane/Branch-and-Bound Algorithm for Node Packing[END_REF]: we first set N* = {v_{k′,s′}}, where v_{k′,s′} ∉ C* is a node linked with all the nodes of C*, and we iteratively extend N* in the same way. We then add the lifted inequality induced by C* and N* to the current LP.

Separation of Edge-Interval-Cover Inequalities

Let us now discuss the separation problem of inequality (2.32). Consider an edge e ∈ E. We first construct a set of intervals of contiguous slots I_e, where each interval of contiguous slots I = [s_i, s_j] ∈ I_e is obtained by generating two slots s_i and s_j randomly in S with s_j ≥ s_i + 2 max_{k∈K\K̄_e} w_k. Consider now an interval of contiguous slots I = [s_i, s_j] ∈ I_e over edge e. The separation problem associated with inequality (2.32) is NP-hard [START_REF] Klabjan | The complexity of cover inequality separation[END_REF], given that it consists in identifying a cover K̃* for the interval I = [s_i, s_j] over edge e such that

∑_{k∈K̃*} (x̄^k_e + ∑_{s′=s_i+w_k-1}^{s_j} z̄^k_{s′}) > 2|K̃*| - 1.

For this, we use a greedy algorithm introduced by Nemhauser and Sigismondi [START_REF] Nemhauser | A Strong Cutting Plane/Branch-and-Bound Algorithm for Node Packing[END_REF], as follows. We first select a demand k ∈ K having the largest number of requested slots w_k with x̄^k_e > 0 and ∑_{s′=s_i+w_k-1}^{s_j} z̄^k_{s′} > 0, and set K̃* = {k}. After that, we iteratively add each demand k′ ∈ K \ K̃* with x̄^{k′}_e > 0 and ∑_{s′=s_i+w_{k′}-1}^{s_j} z̄^{k′}_{s′} > 0, until a cover K̃* is obtained for the interval I over edge e with ∑_{k∈K̃*} w_k > |I|. We further derive a minimal cover from the cover K̃* by deleting each demand k ∈ K̃* with ∑_{k′∈K̃*\{k}} w_{k′} ≤ |I|. We then add inequality (2.32) induced by the minimal cover K̃* for the interval I and edge e to the current LP if it is violated and satisfies the validity conditions of inequality (2.32):

∑_{k∈K̃*} (x^k_e + ∑_{s′=s_i+w_k-1}^{s_j} z^k_{s′}) ≤ 2|K̃*| - 1.

A sketch of this greedy cover separation is given below.
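The following is a sketch of the greedy minimal-cover construction for a fixed edge e and interval I = [s_i, s_j], following the procedure above; the candidate record and its fields are assumptions made for the example, and the caller is expected to test the violation of (2.32) afterwards.

```cpp
#include <vector>
#include <algorithm>

struct Candidate { int k; int wk; double xe, zsum; };  // one demand on edge e

// Builds a minimal cover (sum of w_k exceeding |I|) from the candidates with
// positive x and z contributions; returns an empty vector if no cover exists.
std::vector<Candidate> greedyMinimalCover(std::vector<Candidate> cand, int intervalLen) {
    // largest slot-width first, as in the thesis procedure
    std::sort(cand.begin(), cand.end(),
              [](const Candidate& a, const Candidate& b) { return a.wk > b.wk; });
    std::vector<Candidate> cover;
    int width = 0;
    for (const Candidate& c : cand) {
        if (c.xe <= 0 || c.zsum <= 0) continue;  // only contributing demands
        cover.push_back(c);
        width += c.wk;
        if (width > intervalLen) break;          // K* is now a cover
    }
    if (width <= intervalLen) return {};         // no cover among the candidates
    // minimality: drop a demand whenever the remaining widths still exceed |I|
    for (std::size_t i = 0; i < cover.size();) {
        if (width - cover[i].wk > intervalLen) {
            width -= cover[i].wk;
            cover.erase(cover.begin() + i);
        } else ++i;
    }
    return cover;
}
```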
Separation of Edge-Interval-Clique Inequalities

The separation problem related to inequality (2.36) is NP-hard [77][81], given that it consists in identifying a maximal clique C* in the conflict graph H^e_I, for a given edge e and a given interval I = [s_i, s_j], such that

∑_{k∈C*} (x̄^k_e + ∑_{s′=s_i+w_k-1}^{s_j} z̄^k_{s′}) > |C*| + 1.

We start our separation procedure by constructing a set of intervals of contiguous slots I_e for a given edge e ∈ E, as done above. For each interval I = [s_i, s_j] ∈ I_e, we then grow a maximal clique C* in H^e_I greedily and add the following lifted inequality to the current LP if it is violated:

∑_{k∈C*} (x^k_e + ∑_{s′=s_i+w_k-1}^{s_j} z^k_{s′}) + ∑_{k′∈C*_e} ∑_{s′=s_i+w_{k′}-1}^{s_j} z^{k′}_{s′} ≤ |C*| + 1,

such that a) w_k + w_{k′} ≥ |I| + 1 for each k ∈ C* and k′ ∈ C*_e, b) w_{k′} + w_{k″} ≥ |I| + 1 for each k′ ∈ C*_e and k″ ∈ C*_e, c) w_{k′} ≤ |I| for each k′ ∈ C*_e.

Separation of Interval-Clique Inequalities

Given an interval of contiguous slots I = [s_i, s_j], our separation algorithm for inequality (2.39) consists in identifying a maximal clique C* in the conflict graph H^E_I such that ∑_{k∈C*} ∑_{s′=s_i+w_k-1}^{s_j} z̄^k_{s′} > 1. The associated separation problem is NP-hard, given that computing a maximal clique in a graph is known to be an NP-hard problem [START_REF] Karp | Reducibility among Combinatorial Problems[END_REF]. For this, we also use the greedy algorithm introduced by Nemhauser and Sigismondi [START_REF] Nemhauser | A Strong Cutting Plane/Branch-and-Bound Algorithm for Node Packing[END_REF] to identify a maximal clique in the conflict graph H^E_I, as follows. We first generate a set of intervals of contiguous slots, denoted by I_E, such that each interval I = [s_i, s_j] ∈ I_E is given for each slot s_i ∈ S and slot s_j ∈ {s_i + max_{k∈K, |E^k_1|≥1} w_k, ..., min(s̄, s_i + 2 max_{k∈K, |E^k_1|≥1} w_k)}. We then consider an interval I = [s_i, s_j] ∈ I_E and its associated conflict graph H^E_I. We associate the positive weight ∑_{s′=s_i+w_k-1}^{s_j} z̄^k_{s′} with each node v_k of H^E_I. We select a demand k having the largest number of slots w_k and the largest weight, and set C* = {k}. After that, we iteratively add each demand k′ with ∑_{s′=s_i+w_{k′}-1}^{s_j} z̄^{k′}_{s′} > 0 whose node v_{k′} is linked with all the nodes v_k with k ∈ C*. At the end, we add inequality (2.39) induced by the maximal clique C* to the current LP if it is violated, i.e.,

∑_{k∈C*} ∑_{s′=s_i+w_k-1}^{s_j} z^k_{s′} ≤ 1.

Moreover, this inequality can be strengthened as follows:

∑_{k∈C*} ∑_{s′=s_i+w_k-1}^{s_j} z^k_{s′} + ∑_{k′∈C*_E} ∑_{s′=s_i+w_{k′}-1}^{s_j} z^{k′}_{s′} ≤ 1,

where C*_E ⊂ K \ C* is such that a) w_{k′} + w_k ≥ |I| + 1 and E^k_1 ∩ E^{k′}_1 ≠ ∅ for each k ∈ C* and k′ ∈ C*_E, b) w_{k′} + w_{k″} ≥ |I| + 1 and E^{k′}_1 ∩ E^{k″}_1 ≠ ∅ for each k′ ∈ C*_E and k″ ∈ C*_E, c) w_{k′} ≤ |I| for each k′ ∈ C*_E.

Separation of Interval-Odd-Hole Inequalities

For inequality (2.40), we propose a separation algorithm that consists in identifying an odd-hole H* in the conflict graph H^E_I, for a given interval I and a fractional solution (x̄, z̄), such that

∑_{k∈H*} ∑_{s′=s_i+w_k-1}^{s_j} z̄^k_{s′} > (|H*| - 1)/2.

This can be done in polynomial time, as shown by Rebennack et al. [94][95]. Based on this, we use the exact algorithm proposed by the same authors, which consists in finding a minimum-weight odd cycle in a graph. We first generate a set of intervals of contiguous slots I_E, as in Section 3.1.9, and consider the conflict graph H^E_I associated with a given interval I ∈ I_E. We then construct an auxiliary conflict graph H′^E_I, which can be seen as a bipartite graph, by duplicating each node v_k of H^E_I (i.e., v_k and v′_k); two nodes are linked in H′^E_I if their original nodes are linked in H^E_I. We assign to each link (v_a, v_b) of H′^E_I the weight

(1 - ∑_{s′=s_i+w_a-1}^{s_j} z̄^a_{s′} - ∑_{s′=s_i+w_b-1}^{s_j} z̄^b_{s′}) / 2.

We then compute, for each node v_k of H^E_I, the shortest path p_{v_k,v′_k} between v_k and its copy in the auxiliary conflict graph H′^E_I. After that, we check whether the total weight of the edges along this path is smaller than 1/2, i.e.,

∑_{(v_a,v_b)∈E(p_{v_k,v′_k})} (1 - ∑_{s′=s_i+w_a-1}^{s_j} z̄^a_{s′} - ∑_{s′=s_i+w_b-1}^{s_j} z̄^b_{s′})/2 < 1/2.

If so, the odd-hole H* is composed of the original nodes of the nodes along the computed shortest path p_{v_k,v′_k}, i.e., V(p_{v_k,v′_k}) \ {v′_k}. We then add inequality (2.40) induced by the odd-hole H* to the current LP, i.e.,

∑_{k∈H*} ∑_{s′=s_i+w_k-1}^{s_j} z^k_{s′} ≤ (|H*| - 1)/2.

It can be lifted using the greedy algorithm of Nemhauser and Sigismondi [START_REF] Nemhauser | A Strong Cutting Plane/Branch-and-Bound Algorithm for Node Packing[END_REF] to identify a maximal clique C* in the conflict graph H^E_I such that a) w_{k′} + w_k ≥ |I| + 1 and E^k_1 ∩ E^{k′}_1 ≠ ∅ for each k ∈ H* and k′ ∈ C*, b) w_{k′} + w_{k″} ≥ |I| + 1 and E^{k′}_1 ∩ E^{k″}_1 ≠ ∅ for each k′ ∈ C* and k″ ∈ C*, c) w_{k′} ≤ |I| for each k′ ∈ C*. For this, we first assign to each node v_{k′} linked with all the nodes v_k ∈ H* in H^E_I a positive weight equal to the number of requested slots w_{k′} of demand k′. We then select, among these nodes, the node v_{k′} having the largest weight and set C* = {k′}. After that, we iteratively add each demand k″ to the current clique C* if its node v_{k″} is linked with all the nodes v_k ∈ H* and all the nodes v_{k′} ∈ C*. As a result, we add inequality (2.41) induced by the odd-hole H* and the clique C* to the current LP, i.e.,

∑_{k∈H*} ∑_{s′=s_i+w_k-1}^{s_j} z^k_{s′} + ((|H*| - 1)/2) ∑_{k′∈C*} ∑_{s″=s_i+w_{k′}-1}^{s_j} z^{k′}_{s″} ≤ (|H*| - 1)/2.

A sketch of the odd-hole detection via the doubled graph is given below.
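The sketch below illustrates the shortest-path computation in the doubled graph. For simplicity it assumes the pairwise clique inequalities already hold, so that all edge weights are nonnegative and Dijkstra's algorithm applies, and it treats the returned closed walk as a simple odd cycle; the dense adjacency matrix and the node weights nodeW (the z̄-sums above) are assumptions made for the example.

```cpp
#include <vector>
#include <queue>
#include <limits>
#include <functional>

// Returns the node indices of an odd hole whose weight sum exceeds (|H|-1)/2,
// or an empty vector when none is detected.
std::vector<int> findViolatedOddHole(const std::vector<std::vector<bool>>& adj,
                                     const std::vector<double>& nodeW) {
    int n = (int)adj.size();
    const double INF = std::numeric_limits<double>::infinity();
    auto edgeW = [&](int a, int b) { return (1.0 - nodeW[a] - nodeW[b]) / 2.0; };
    for (int start = 0; start < n; ++start) {
        // Dijkstra on the doubled (bipartite) graph: state = (node, side 0/1).
        std::vector<std::vector<double>> dist(n, std::vector<double>(2, INF));
        std::vector<std::vector<int>> par(n, std::vector<int>(2, -1));
        using St = std::pair<double, int>;          // (distance, node*2 + side)
        std::priority_queue<St, std::vector<St>, std::greater<St>> pq;
        dist[start][0] = 0.0; pq.push({0.0, start * 2});
        while (!pq.empty()) {
            auto [d, code] = pq.top(); pq.pop();
            int v = code / 2, side = code % 2;
            if (d > dist[v][side]) continue;
            for (int u = 0; u < n; ++u)             // crossing sides = odd parity
                if (adj[v][u] && d + edgeW(v, u) < dist[u][1 - side]) {
                    dist[u][1 - side] = d + edgeW(v, u);
                    par[u][1 - side] = code;
                    pq.push({dist[u][1 - side], u * 2 + (1 - side)});
                }
        }
        if (dist[start][1] < 0.5) {                 // odd cycle of weight < 1/2
            std::vector<int> hole;                  // walk parents back to start
            for (int code = start * 2 + 1; code != -1; code = par[code / 2][code % 2])
                hole.push_back(code / 2);
            hole.pop_back();                        // drop the duplicated start
            return hole;
        }
    }
    return {};
}
```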
Separation of Slot-Assignment-Clique Inequalities

We now describe the separation algorithm for inequality (2.43). It consists in identifying a maximal clique C* in the conflict graph H^E_S such that ∑_{v_{k,s}∈C*} z̄^k_s > 1, for a given fractional solution (x̄, z̄) of the current LP. For this, we use the greedy algorithm introduced by Nemhauser and Sigismondi [START_REF] Nemhauser | A Strong Cutting Plane/Branch-and-Bound Algorithm for Node Packing[END_REF], given that computing a maximal clique in such a graph is an NP-hard problem [START_REF] Karp | Reducibility among Combinatorial Problems[END_REF]. We first assign the positive weight z̄^k_s to each node v_{k,s} of the conflict graph H^E_S. We then select a node v_{k,s} of H^E_S having the largest weight compared with the other nodes of H^E_S, and set C* = {v_{k,s}}. After that, we iteratively add each node v_{k′,s′} with z̄^{k′}_{s′} > 0 to the current C* if it is linked with all the nodes v_{k,s} already assigned to C*. At the end, we add inequality (2.43) induced by the clique C* to the current LP if it is violated, i.e., we add

∑_{v_{k,s}∈C*} z^k_s ≤ 1.

Furthermore, it can be lifted by identifying a maximal clique N* such that each v_{k′,s′} ∈ N* is linked with all the nodes v_{k,s} ∈ C* ∪ (N* \ {v_{k′,s′}}) in H^E_S. For this, we also use the greedy algorithm of Nemhauser and Sigismondi [START_REF] Nemhauser | A Strong Cutting Plane/Branch-and-Bound Algorithm for Node Packing[END_REF]: we first set N* = {v_{k′,s′}}, where v_{k′,s′} ∉ C* is a node of H^E_S having the largest degree that is linked with all the nodes of C*, and we then iteratively add each node v ∉ C* ∪ N* to N* if it is linked with all the nodes of C* ∪ N*. At the end, we add the lifted inequality induced by C* ∪ N* to the current LP, i.e.,

∑_{v_{k,s}∈C*} z^k_s + ∑_{v_{k′,s′}∈N*} z^{k′}_{s′} ≤ 1.

Separation of Slot-Assignment-Odd-Hole Inequalities

The separation of inequality (2.44) can be performed by identifying an odd-hole H* in the conflict graph H^E_S, for a given fractional solution (x̄, z̄), such that

∑_{v_{k,s}∈H*} z̄^k_s > (|H*| - 1)/2.

This can be done in polynomial time, as shown by Rebennack et al. [START_REF] Rebennack | Stable Set Problem: Branch & Cut Algorithms[END_REF][95], by finding a minimum-weight odd cycle in the conflict graph H^E_S. For this, we first construct an auxiliary conflict graph H′^E_S, which can also be seen as a bipartite graph, by duplicating each node v_{k,s} of H^E_S (i.e., v_{k,s} and v′_{k,s}), such that two nodes are linked in H′^E_S if their original nodes are linked in H^E_S. We assign to each link (ṽ_{k,s}, ṽ_{k′,s′}) of H′^E_S the weight (1 - z̄^k_s - z̄^{k′}_{s′})/2. We then compute, for each node v_{k,s} of H^E_S, the shortest path between v_{k,s} and its copy in H′^E_S; if its total weight is smaller than 1/2, the original nodes of the nodes along this path define an odd-hole H*, and we add inequality (2.44) induced by H* to the current LP. It can further be lifted, as in inequality (2.45), by identifying a maximal clique C* whose nodes are linked with all the nodes of H*, which yields

∑_{v_{k,s}∈H*} z^k_s + ((|H*| - 1)/2) ∑_{v_{k′,s′}∈C*} z^{k′}_{s′} ≤ (|H*| - 1)/2.

Separation of Incompatibility-Clique Inequalities

Consider now inequality (2.46). Its associated separation algorithm consists in identifying a maximal clique C* in the conflict graph H^K_E such that ∑_{v_{k,e}∈C*} x̄^k_e > 1. The separation problem related to this inequality is NP-hard, given that computing a maximal clique in H^K_E is an NP-hard problem [START_REF] Karp | Reducibility among Combinatorial Problems[END_REF]. For this, we also use the greedy algorithm introduced by Nemhauser and Sigismondi [START_REF] Nemhauser | A Strong Cutting Plane/Branch-and-Bound Algorithm for Node Packing[END_REF] to grow a maximal clique C* in H^K_E, and we add inequality (2.46) induced by C* to the current LP if it is violated, i.e.,

∑_{v_{k,e}∈C*} x^k_e ≤ 1.

Furthermore, it can be lifted by identifying a maximal clique N* such that each v_{k′,e′} ∈ N* is linked with all the nodes v_{k,e} ∈ C* ∪ (N* \ {v_{k′,e′}}) in H^K_E. We first set N* = {v_{k′,e′}}, where v_{k′,e′} ∉ C* is a node of H^K_E having the largest degree |δ(v_{k′,e′})| and linked with all the nodes v_{k,e} ∈ C*. We then iteratively add each node v_{k′,e′} ∉ C* ∪ N* to the current N* if it is linked in H^K_E with all the nodes of C* ∪ N*. At the end, we add the lifted inequality induced by C* ∪ N* to the current LP, i.e.,

∑_{v_{k,e}∈C*} x^k_e + ∑_{v_{k′,e′}∈N*} x^{k′}_{e′} ≤ 1.

The greedy clique-growing routine used throughout these sections is sketched below.
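The following is a minimal sketch of the Nemhauser-Sigismondi-style greedy used above to grow a maximal clique in a conflict graph (H^E_S, H^e_S, or H^K_E); the dense adjacency matrix and the fractional node weights are assumptions made for the example.

```cpp
#include <vector>
#include <algorithm>

std::vector<int> greedyMaximalClique(const std::vector<std::vector<bool>>& adj,
                                     const std::vector<double>& weight) {
    int n = (int)adj.size();
    // seed: node of maximum weight
    int seed = (int)(std::max_element(weight.begin(), weight.end()) - weight.begin());
    std::vector<int> clique{seed};
    // repeatedly add the heaviest positive-weight node linked to the whole clique
    bool grown = true;
    while (grown) {
        grown = false;
        int best = -1;
        for (int v = 0; v < n; ++v) {
            if (weight[v] <= 0) continue;
            bool inClique = false, linkedToAll = true;
            for (int c : clique) {
                if (c == v) inClique = true;
                else if (!adj[v][c]) linkedToAll = false;
            }
            if (inClique || !linkedToAll) continue;
            if (best == -1 || weight[v] > weight[best]) best = v;
        }
        if (best != -1) { clique.push_back(best); grown = true; }
    }
    return clique;   // maximal with respect to positive-weight extensions
}
```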
Separation of Incompatibility-Clique Inequalities

Consider now inequality (2.46). Its associated separation algorithm consists in identifying a maximal clique $C^*$ in the conflict graph $H^K_E$ such that $\sum_{v_{k,e}\in C^*} \bar{x}^k_e > 1$. The separation problem related to this inequality is NP-hard, since computing a maximum clique in the conflict graph $H^K_E$ is an NP-hard problem [START_REF] Karp | Reducibility among Combinatorial Problems[END_REF]. For this, we also use the greedy algorithm introduced by Nemhauser and Sigismondi [START_REF] Nemhauser | A Strong Cutting Plane/Branch-and-Bound Algorithm for Node Packing[END_REF]: we first assign the weight $\bar{x}^k_e$ to each node $v_{k,e}$, select a node of largest weight to initialize $C^*$, and iteratively add each node $v_{k',e'}$ with $\bar{x}^{k'}_{e'} > 0$ that is linked with all the nodes already in $C^*$. At the end, we add inequality (2.46) induced by the clique $C^*$ to the current LP if it is violated, i.e., $\sum_{v_{k,e}\in C^*} x^k_e \le 1$. Furthermore, it can be lifted by identifying a maximal clique $N^*$ such that each $v_{k',e'} \in N^*$ is linked with all the nodes $v_{k,e} \in C^* \cup (N^* \setminus \{v_{k',e'}\})$ in $H^K_E$. For this, we also use the same greedy algorithm: we first set $N^* = \{v_{k',e'}\}$, where $v_{k',e'} \notin C^*$ is a node having the largest degree $|\delta(v_{k',e'})|$ in $H^K_E$ among those linked with all the nodes $v_{k,e} \in C^*$; we then iteratively add each node $v_{k'',e''} \notin C^* \cup N^*$ to the current $N^*$ if it is linked in $H^K_E$ with all the nodes of $C^* \cup N^*$, and finally add the lifted inequality $\sum_{v_{k,e}\in C^*} x^k_e + \sum_{v_{k',e'}\in N^*} x^{k'}_{e'} \le 1$ to the current LP if it is violated.

Separation of Incompatibility-Odd-Hole Inequalities

The separation algorithm related to inequality (2.47) can be done in polynomial time by finding a minimum weighted odd-cycle in the conflict graph $H^K_E$, as shown by Rebennack et al. [94][95]. Our aim is to identify an odd-hole $H^*$ in the conflict graph $H^K_E$ such that $\sum_{v_{k,e}\in H^*} \bar{x}^k_e > \frac{|H^*|-1}{2}$, for a given fractional solution $(\bar{x}, \bar{z})$ of the current LP. We start the separation procedure by constructing an auxiliary conflict graph $H'^K_E$, obtained by duplicating each node of $H^K_E$ as before and by assigning to each link $(v_{k,e}, v_{k',e'})$ the weight $\frac{1 - \bar{x}^k_e - \bar{x}^{k'}_{e'}}{2}$. A shortest path between a node and its copy of total weight smaller than $\frac{1}{2}$ yields a violated odd-hole inequality (2.47), which can further be lifted by a maximal clique $C^*$ as

$\sum_{v_{k,e}\in H^*} x^k_e + \frac{|H^*|-1}{2} \sum_{v_{k',e'}\in C^*} x^{k'}_{e'} \le \frac{|H^*|-1}{2}$.

Separation of Transmission-Reach-Cover Inequalities

In this section, we study the separation problem of inequality (2.49). Consider a demand $k \in K$. The separation problem associated with inequality (2.49) is NP-hard [START_REF] Klabjan | The complexity of cover inequality separation[END_REF], given that it consists in identifying a cover $C^*$ related to the transmission-reach constraint of demand $k$ such that $\sum_{e\in C^*} \bar{x}^k_e > |C^*| - 1$. For this, we propose a separation algorithm based on the greedy algorithm introduced by Nemhauser and Sigismondi [START_REF] Nemhauser | A Strong Cutting Plane/Branch-and-Bound Algorithm for Node Packing[END_REF]. We first select an edge $e \in E \setminus (E^k_0 \cup E^k_1)$ having the largest length $\ell_e$ with $\bar{x}^k_e > 0$, and set $C^* = \{e\}$. After that, we iteratively add each edge $e' \in E \setminus (E^k_0 \cup E^k_1 \cup C^*)$ to $C^*$, as long as $\sum_{e\in C^*} \ell_e \le \bar{\ell}_k$ and $e'$ is compatible with the edges already added to $C^*$, until a cover $C^*$ is obtained for demand $k$ with $\sum_{e\in C^*} \ell_e > \bar{\ell}_k$. We further derive a minimal cover from $C^*$ by deleting each edge $e \in C^*$ such that $\sum_{e'\in C^*\setminus\{e\}} \ell_{e'} > \bar{\ell}_k$, i.e., such that $C^* \setminus \{e\}$ is still a cover. We then add inequality (2.49) induced by the minimal cover $C^*$ for demand $k$ to the current LP if it is violated, i.e., $\sum_{e\in C^*} x^k_e \le |C^*| - 1$. Furthermore, inequality (2.49) induced by the minimal cover $C^*$ can be lifted by introducing an extended cover inequality

$\sum_{e\in C^*} x^k_e + \sum_{e'\in E(C^*)} x^k_{e'} \le |C^*| - 1$,

where $\ell_{e'} \ge \ell_e$ for each $e \in C^*$ and $e' \in E(C^*)$, with $e' \notin E^k_0 \cup E^k_1$ and $e'$ compatible with each edge $e \in C^*$.

Separation of Edge-Capacity-Cover Inequalities

Let us now study the separation problem of inequality (2.54). Given an edge $e \in E$, the separation problem associated with inequality (2.54) is NP-hard [START_REF] Klabjan | The complexity of cover inequality separation[END_REF], given that it consists in identifying a cover $\tilde{K}^*$ related to the capacity constraint of edge $e$ such that $\sum_{k\in \tilde{K}^*} \bar{x}^k_e > |\tilde{K}^*| - 1$. For this, we propose a separation algorithm based on the same greedy algorithm [START_REF] Nemhauser | A Strong Cutting Plane/Branch-and-Bound Algorithm for Node Packing[END_REF]. We first select a demand $k \in K \setminus K_e$ having the largest number of requested slots $w_k$ with $\bar{x}^k_e > 0$, and set $\tilde{K}^* = \{k\}$. After that, we iteratively add each demand $k' \in K \setminus (K_e \cup \tilde{K}^*)$ to $\tilde{K}^*$ as long as $\sum_{k\in \tilde{K}^*} w_k \le \bar{s} - \sum_{k''\in K_e} w_{k''}$, i.e., until a cover $\tilde{K}^*$ is obtained for edge $e$ with $\sum_{k\in \tilde{K}^*} w_k > \bar{s} - \sum_{k''\in K_e} w_{k''}$. We further derive a minimal cover from $\tilde{K}^*$ by deleting each demand $k \in \tilde{K}^*$ such that $\tilde{K}^* \setminus \{k\}$ is still a cover. We then add inequality (2.54) induced by the minimal cover $\tilde{K}^*$ for edge $e$ to the current LP if it is violated, i.e., $\sum_{k\in \tilde{K}^*} x^k_e \le |\tilde{K}^*| - 1$.

Primal Heuristic

Here, we propose a primal heuristic to boost the performance of the Branch-and-Cut algorithm. It is based on a hybrid method combining a local search algorithm and a greedy algorithm. Given an optimal fractional solution $(\bar{x}, \bar{z})$ at a certain node of the B&C tree, our primal heuristic consists in constructing an integral "feasible" solution from this fractional solution. For this, we first precompute for each demand $k \in K$ a set $R_k$ of admissible paths, each path $p \in R_k$ satisfying $\sum_{e\in E(p)} \ell_e \le \bar{\ell}_k$; this can be done in polynomial time using network flow algorithms. Afterwards, we use a local search algorithm which consists in generating at each iteration an ordered sequence of demands $L = 1', 2', \ldots, (|K|-1)', |K|'$. Based on this sequence of demands, we use a greedy algorithm to select a path $p \in R_{k'}$ and a slot $s \in \{w_{k'}, \ldots, \bar{s}\}$ for each demand $k' \in L$ with $\bar{z}^{k'}_s \ne 0$ and $\bar{x}^{k'}_e \ne 0$ for each $e \in E(p)$, while respecting the non-overlapping constraint with the set of demands that precede demand $k'$ in the list $L$ (i.e., the demands $1', 2', \ldots, (k-1)'$). If no such pair of path $p$ and slot $s$ exists for demand $k'$, we then select a path $p \in R_{k'}$ and a slot $s \in \{w_{k'}, \ldots, \bar{s}\}$ without this fractional-support restriction, still respecting the non-overlapping constraint with the demands preceding $k'$ in the list $L$. After that, we compute the total cost of the paths selected for the set of demands $K$ in the final solution $S$ given by the greedy algorithm (i.e., $\sum_{k\in K}\sum_{e\in E_k} c_e$). If this value is smaller than the value of the best solution found so far, our local search algorithm records it and generates a new sequence by permuting some demands in the last sequence of demands; otherwise, we stop the algorithm and output the best solution found during the primal heuristic, induced by the best sequence of demands, i.e., the one having the smallest total cost of the selected paths among the generated sequences.
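The following sketch illustrates the greedy part of this heuristic together with a simple permutation-based local search. It is an illustrative rendering, not the thesis implementation: the `conflicts` and `cost` callbacks are assumptions standing for the path-incompatibility test and the routing cost, and the slot scan is a plain linear search.

```cpp
#include <algorithm>
#include <iostream>
#include <limits>
#include <numeric>
#include <random>
#include <vector>

// A demand requests `width` contiguous slots and owns a list of candidate
// paths (indices into some external path pool).
struct Demand { int width; std::vector<int> paths; };
struct Placement { int path; int lastSlot; };

using ConflictFn = bool(*)(int, int);  // do two paths share an edge?
using CostFn = double(*)(int);         // routing cost of a path

// Greedy pass: demands are placed in the given order, each receiving the
// first (path, last-slot) pair that does not overlap with any already
// placed conflicting demand. Returns +inf when the order is infeasible.
double greedyRun(const std::vector<Demand>& demands, const std::vector<int>& order,
                 int sbar, ConflictFn conflicts, CostFn cost,
                 std::vector<Placement>& out) {
    out.assign(demands.size(), {-1, -1});
    double total = 0.0;
    for (int k : order) {
        bool placed = false;
        for (int p : demands[k].paths) {
            for (int s = demands[k].width; s <= sbar && !placed; ++s) {
                bool ok = true;
                for (int j : order) {
                    if (j == k) break;               // only predecessors in L
                    const Placement& q = out[j];
                    if (q.path < 0 || !conflicts(p, q.path)) continue;
                    int lo = q.lastSlot - demands[j].width + 1;
                    if (s >= lo && s - demands[k].width + 1 <= q.lastSlot) {
                        ok = false; break;           // intervals overlap
                    }
                }
                if (ok) { out[k] = {p, s}; total += cost(p); placed = true; }
            }
            if (placed) break;
        }
        if (!placed) return std::numeric_limits<double>::infinity();
    }
    return total;
}

// Local search: permute the order a few times and keep the best value.
double localSearch(const std::vector<Demand>& demands, int sbar,
                   ConflictFn conflicts, CostFn cost, int iters) {
    std::vector<int> order(demands.size());
    std::iota(order.begin(), order.end(), 0);
    std::mt19937 rng(42);
    std::vector<Placement> tmp;
    double best = std::numeric_limits<double>::infinity();
    for (int t = 0; t < iters; ++t) {
        best = std::min(best, greedyRun(demands, order, sbar, conflicts, cost, tmp));
        std::shuffle(order.begin(), order.end(), rng); // permutation move
    }
    return best;
}

static bool conflicts(int, int) { return true; }       // toy: everything conflicts
static double cost(int p) { return 1.0 + p; }

int main() {
    std::vector<Demand> ds = {{2, {0}}, {3, {1}}, {2, {2}}};
    std::cout << localSearch(ds, 8, &conflicts, &cost, 10) << "\n";
}
```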
Computational Study

Implementation Features

We used the C++ programming language to implement the B&C algorithm under Linux with three frameworks: Cplex 12.9 [START_REF] Cplex | V12. 9: User's Manual for Cplex[END_REF], Gurobi 9.0 [START_REF] Gurobi Optimization | Gurobi Optimizer Reference Manual[END_REF], and the "Solving Constraint Integer Programs" framework (Scip 7.0) [START_REF] Gamrath | The Scip Optimization Suite 7.0[END_REF], the latter using Cplex 12.9 as LP solver. The algorithm has been tested on the LIMOS high-performance server with a memory limit of 64 GB, benefiting from parallelism by activating 8 threads when using Gurobi or Scip (which is not possible when using a cutting-plane based method under Cplex). We limit the CPU time to 5 hours (18000 s).

Description of Instances

We present computational results using two types of graphs: real topologies and realistic ones from SND-Lib [START_REF] Orlowski | SNDlib 1.0-Survivable Network Design Library[END_REF], with a total number of nodes $|V|$ up to 161 and a total number of edges $|E|$ up to 166, as shown in Table 3.1.

Computational Results

We consider 4 criteria in our computational study: the average number of nodes in the B&C tree (Nb Nd), the average gap (Gap), which represents the relative error between the lower bound obtained at the end of the resolution and the best upper bound, the average number of violated inequalities added during the algorithm (Nbr Cuts), and the average CPU time (TT). Based on preliminary results, the cover-based inequalities (2.54) and (2.32) are shown to be more efficient than the clique-based inequalities (2.43), (2.42) and (2.36). In fact, the B&C algorithm performs very well when adding the cover-based inequalities (2.54) and (2.32) under Scip and Gurobi. We notice that adding these valid inequalities allows solving to optimality some instances that are not solved to optimality when using Cplex, Gurobi and Scip alone. Furthermore, they allow reducing the average gap, the average number of nodes, and the average CPU time. On the other hand, we observed that the valid inequalities do not work well when using Cplex. This is due to the deactivation of Cplex's own cut generation: Cplex does not perform well without its own cuts, even though the valid inequalities are shown to be efficient when using Gurobi and Scip on the tested instances.
The results also show that several inequalities from the cover-based families (2.54) and (2.32) are generated during the resolution. Using this, we provide a comparative study between Cplex, Gurobi and Scip, in which we aim to evaluate the impact of the valid inequalities used within the B&C algorithm. Our first series of computational results, presented in Table 3.2, concerns a comparison between the results obtained for the B&C algorithm using Cplex and Scip (without or with additional valid inequalities). In the second series of computational results, shown in Table 3.3, we present the results found for the B&C algorithm using Gurobi and Scip (without or with additional valid inequalities). In the third series, shown in Table 3.4, we compare the results found by the B&C algorithm using Cplex (without or with additional valid inequalities) with those found when using Scip (without or with additional valid inequalities). For each instance, we run

• Cplex benefiting from its automatic cut generation and without our additional valid inequalities (denoted by B&C CPX in the different tables),
• Cplex using our valid inequalities with its own cut generation disabled (denoted by Own B&C CPX),
• Gurobi benefiting from its automatic cut generation and without our additional valid inequalities (denoted by B&C GRB),
• Gurobi using our valid inequalities with its own cut generation disabled (denoted by Own B&C GRB),
• Scip benefiting from its automatic cut generation and without our additional valid inequalities (denoted by B&C SCIP),
• Scip using our valid inequalities with its own cut generation disabled (denoted by Own B&C SCIP).

To make the results and the comparison more readable, we present computational results on a subset of instances based on 2 real topologies, German and Nsfnet, and 2 realistic topologies, India35 and Pioro40. We first notice that our valid inequalities allow solving to optimality several instances that are not solved to optimality when using B&C CPX, B&C GRB and B&C SCIP. Furthermore, they enable reducing the average number of nodes in the B&C tree, and also the average CPU time, for several instances. Moreover, when optimality is not proven, adding the valid inequalities decreases the average gap for several instances. However, there exist a few instances for which adding the valid inequalities does not improve the results of the B&C algorithm. We further observe that Own B&C SCIP is very efficient compared with Cplex and Gurobi (see for example Tables 3.2 and 3.3). Looking at the instances that are solved to optimality by both Own B&C GRB and Own B&C SCIP, we notice a smaller number of nodes and a smaller CPU time when using Own B&C SCIP compared with Own B&C GRB (see for example Table 3.3). Furthermore, Own B&C SCIP performs much better than Scip, Cplex and Gurobi, even when they use their own cuts: Own B&C SCIP is able to solve to optimality several instances that are not solved when using B&C CPX, B&C GRB and B&C SCIP. This means that we are able to beat Cplex, Gurobi and Scip using Own B&C SCIP. On the other hand, considering large-scale instances with $|K| \ge 200$, we noticed that adding the valid inequalities does not improve the effectiveness of the B&C algorithm: there exist some instances that are solved to optimality using B&C CPX and B&C GRB but not when using Own B&C CPX, Own B&C GRB and Own B&C SCIP.
Based on these results, we conclude that using the valid inequalities allows obtaining tighter LP bounds. They significantly improve the results yielded by B&C CPX, B&C GRB, and B&C SCIP for several instances with a number of demands up to 150. Given that Own B&C SCIP is able to beat Cplex, Gurobi and Scip, we turn our attention to the numerical results found when using Scip, shown in Table 3.5. We can see from Table 3.5 that our B&C algorithm (Own B&C) is able to solve to optimality more instances than B&C SCIP. Indeed, 137 instances are solved to optimality when our inequalities are used (Own B&C), while 101 instances are solved to optimality in the run B&C SCIP. Also, when our inequalities are used, the number of nodes in the B&C tree is decreased in most cases compared to the case where they are not used. Moreover, the CPU time is, in general, smaller when our inequalities are used. Finally, when comparing the instances which are not solved to optimality, we can see that the optimality gap is smaller, for most of the instances, when our inequalities are used.

Concluding Remarks

In this chapter, we have devised a B&C algorithm and conducted some computational experiments. Our study shows that the valid inequalities are effective for solving real and realistic instances of the problem. It could be interesting to study the impact of the symmetry-breaking inequalities and the precomputed lower bounds on the performance of the Branch-and-Cut algorithm.

Path Formulation

We now introduce a path formulation for the C-RSA. For each demand $k \in K$, path $p \in P_k$ and slot $s \in S$, let $y^k_{p,s}$ be a binary variable which takes value 1 if demand $k$ is routed along path $p$ with $s$ as last allocated slot, and 0 if not, such that $s$ represents the last slot of the interval of contiguous slots of width $w_k$ allocated to demand $k \in K$, with $s \in S$ and $p \in P_k$. Note that all the slots $s' \in \{s - w_k + 1, \ldots, s\}$ should be assigned to demand $k$ along the path $p$ whenever $y^k_{p,s} = 1$. Let $P_k(e)$ denote the set of all admissible $(o_k, d_k)$-paths going through edge $e$ in $G$ for demand $k$. In this case, the C-RSA is also equivalent to the following integer linear program

$\min \sum_{k\in K}\sum_{p\in P_k}\sum_{e\in E(p)}\sum_{s=w_k}^{\bar{s}} c_e\, y^k_{p,s}$, (4.1)

subject to

$\sum_{p\in P_k}\sum_{s=1}^{w_k-1} y^k_{p,s} = 0$, for all $k \in K$, (4.2)

$\sum_{p\in P_k}\sum_{s=w_k}^{\bar{s}} y^k_{p,s} = 1$, for all $k \in K$, (4.3)

$\sum_{k\in K}\sum_{p\in P_k(e)}\sum_{s'=s}^{s+w_k-1} y^k_{p,s'} \le 1$, for all $e \in E$ and $s \in S$, (4.4)

$y^k_{p,s} \ge 0$, for all $k \in K$, $p \in P_k$, $s \in S$, (4.5)

$y^k_{p,s} \in \{0,1\}$, for all $k \in K$, $p \in P_k$, $s \in S$. (4.6)

Equations (4.2) express the fact that a slot $s$ with $s < w_k$ cannot be the last slot allocated to demand $k$. Equations (4.3) express the routing and spectrum constraints at the same time: they ensure that exactly one slot $s \in \{w_k, \ldots, \bar{s}\}$ is assigned as last slot for the routing of demand $k$, and that exactly one path from $P_k$ is used by each demand $k \in K$. Note that a slot $s \in S$ is said to be allocated to demand $k$ if and only if $\sum_{p\in P_k}\sum_{s'=s}^{s+w_k-1} y^k_{p,s'} = 1$, which means that $s$ is covered by the interval of contiguous slots allocated to demand $k$. Inequalities (4.4) ensure that a slot $s$ over an edge $e$ can be allocated to at most one demand $k \in K$. Inequalities (4.5) are the trivial inequalities, and constraints (4.6) are the integrality constraints. To benefit from some theoretical results of chapter 2, we introduce the two variables $x^k_e$ and $z^k_s$ used in the cut formulation already presented in chapter 2.
As a result, all the valid inequalities for the polytope associated with the cut formulation remain valid for the polytope associated with the path formulation after the addition of these two variables and the two following constraints

$x^k_e - \sum_{p\in P_k(e)}\sum_{s=w_k}^{\bar{s}} y^k_{p,s} = 0$, for all $k \in K$ and $e \in E$, (4.7)

$z^k_s - \sum_{p\in P_k} y^k_{p,s} = 0$, for all $k \in K$ and $s \in S$. (4.8)

Therefore, the C-RSA is equivalent to the extended formulation based on the following integer linear program

$\min \sum_{k\in K}\sum_{e\in E} c_e x^k_e$, (4.9)

subject to

$\sum_{p\in P_k}\sum_{s=1}^{w_k-1} y^k_{p,s} = 0$, for all $k \in K$, (4.10)

$\sum_{p\in P_k}\sum_{s=w_k}^{\bar{s}} y^k_{p,s} = 1$, for all $k \in K$, (4.11)

$x^k_e - \sum_{p\in P_k(e)}\sum_{s=w_k}^{\bar{s}} y^k_{p,s} = 0$, for all $k \in K$ and $e \in E$, (4.12)

$z^k_s - \sum_{p\in P_k} y^k_{p,s} = 0$, for all $k \in K$ and $s \in S$, (4.13)

$\sum_{k\in K}\sum_{p\in P_k(e)}\sum_{s'=s}^{s+w_k-1} y^k_{p,s'} \le 1$, for all $e \in E$ and $s \in S$, (4.14)

$y^k_{p,s} \ge 0$, for all $k \in K$, $p \in P_k$, $s \in S$, (4.15)

$x^k_e \ge 0$, for all $k \in K$ and $e \in E$, (4.16)

$z^k_s \ge 0$, for all $k \in K$ and $s \in S$, (4.17)

$y^k_{p,s} \in \{0,1\}$, for all $k \in K$, $p \in P_k$, $s \in S$. (4.18)

Column Generation Algorithm

As mentioned previously, our path formulation contains a huge number of variables, which can be exponential in the worst case due to the number of feasible paths of each traffic demand. To deal with this, we use a column generation algorithm to solve its linear relaxation. We begin the algorithm with a restricted linear program of our path formulation by considering a feasible subset of variables (columns). For this, we first generate a subset of feasible paths for each demand $k \in K$, denoted by $B_k \subset P_k$, such that the variables $y^k_{p,s}$ for each $k \in K$, $p \in B_k$ and $s \in S$ induce a feasible basis for the restricted linear program; this means that there exists at least one feasible solution for the restricted linear program. Based on this, we derive the so-called restricted master problem (RMP) as follows

$\min \sum_{k\in K}\sum_{e\in E} c_e x^k_e$,

subject to

$\sum_{p\in B_k}\sum_{s=1}^{w_k-1} y^k_{p,s} = 0$, for all $k \in K$,

$\sum_{p\in B_k}\sum_{s=w_k}^{\bar{s}} y^k_{p,s} = 1$, for all $k \in K$,

$x^k_e - \sum_{p\in B_k(e)}\sum_{s=w_k}^{\bar{s}} y^k_{p,s} = 0$, for all $k \in K$ and $e \in E$,

$z^k_s - \sum_{p\in B_k} y^k_{p,s} = 0$, for all $k \in K$ and $s \in S$,

$\sum_{k\in K}\sum_{p\in B_k(e)}\sum_{s'=s}^{s+w_k-1} y^k_{p,s'} \le 1$, for all $e \in E$ and $s \in S$,

$y^k_{p,s} \ge 0$, for all $k \in K$, $p \in B_k$, $s \in S$,

$x^k_e \ge 0$, for all $k \in K$ and $e \in E$,

$z^k_s \ge 0$, for all $k \in K$ and $s \in S$.

At each iteration, the column generation algorithm checks, using the solution of the dual problem associated with the constraints of the linear relaxation (4.9)-(4.17), whether there exists a variable $y^k_{p,s}$ with $p \notin B_k$ for some demand $k$ and slot $s$ having a negative reduced cost, and adds it to $B_k$. This can be achieved by solving the so-called pricing problem (PP).

The Pricing Problem

As mentioned above, we consider an initial restricted master problem, denoted by $RMP_0$, based on an initial subset of variables induced by a subset of feasible paths $B_k \subset P_k$ for each demand $k \in K$. The pricing problem consists in finding a feasible path $p$ for a demand $k$ and slot $s$ having a negative reduced cost, using the optimal solution of the dual problem.
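A schematic view of this column generation loop is sketched below. The LP solve and the pricing call are deliberately left as stubs (assumed interfaces, not SCIP or Cplex API calls), so the sketch only fixes the control flow described above: solve the RMP, retrieve the duals, price out new columns, and stop when none has a negative reduced cost.

```cpp
#include <iostream>
#include <optional>
#include <vector>

// Illustrative containers for the dual values of (4.10)-(4.14).
struct Duals { std::vector<double> beta, rho, mu; };

// A column is a (demand, path, last slot) triple.
struct Column { int demand; int lastSlot; std::vector<int> edges; };

// Stubs standing for the LP solver and the RCSP-based pricing routine.
Duals solveRMP(const std::vector<Column>&) { return {}; }
std::optional<Column> pricePath(int demand, const Duals&) { return std::nullopt; }

int main() {
    const int nDemands = 3;              // toy size, illustration only
    std::vector<Column> columns;         // current column pool, the B_k sets
    bool newColumnFound = true;
    while (newColumnFound) {
        newColumnFound = false;
        Duals duals = solveRMP(columns); // optimal duals of the restricted LP
        for (int k = 0; k < nDemands; ++k) {
            // Pricing: search a column of negative reduced cost for demand k.
            if (auto col = pricePath(k, duals)) {
                columns.push_back(*col);
                newColumnFound = true;
            }
        }
    }
    std::cout << "column generation converged with " << columns.size()
              << " columns\n";
}
```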
For this, we consider the following dual variables:

a) $\alpha$ associated with equations (4.10), with $\alpha_k \in \mathbb{R}$ for all $k \in K$,
b) $\beta$ associated with equations (4.11), with $\beta_k \in \mathbb{R}$ for all $k \in K$,
c) $\mu$ associated with inequalities (4.14), with $\mu^e_s \le 0$ for all $e \in E$ and $s \in S$,
d) $\lambda$ associated with equations (4.12), with $\lambda^k_e \in \mathbb{R}$ for all $k \in K$ and $e \in E$,
e) $\rho$ associated with equations (4.13), with $\rho^k_s \in \mathbb{R}$ for all $k \in K$ and $s \in S$.

The dual problem of the linear relaxation (4.9)-(4.17) is equivalent to

$\max \; -\sum_{k\in K}\beta_k + \sum_{e\in E}\sum_{s\in S}\mu^e_s$, (4.19)

subject to

$\beta_k - \sum_{e\in E(p)}\Big(\lambda^k_e + \sum_{s'=s-w_k+1}^{s}\mu^e_{s'}\Big) - \rho^k_s \ge 0$, for all $k \in K$, $p \in P_k$, $s \in \{w_k, \ldots, \bar{s}\}$, (4.20)

$\alpha_k - \sum_{e\in E(p)}\sum_{s'=\max(1,\,s-w_k+1)}^{s}\mu^e_{s'} \ge 0$, for all $k \in K$, $p \in P_k$, $s \in \{1, \ldots, w_k-1\}$, (4.21)

$c_e + \lambda^k_e \ge 0$, for all $k \in K$ and $e \in E$, (4.22)

$\alpha_k + \rho^k_s \ge 0$, for all $k \in K$ and $s \in S$, (4.23)

$\mu^e_s \le 0$, for all $e \in E$ and $s \in S$. (4.24)

As a result, the so-called reduced cost $rc^k_s(p)$ related to a demand $k \in K$, a path $p \in P_k$ and a slot $s \in \{w_k, \ldots, \bar{s}\}$ is given by

$rc^k_s(p) = \beta_k - \rho^k_s + \sum_{e\in E(p)}\Big(-\lambda^k_e - \sum_{s'=s-w_k+1}^{s}\mu^e_{s'}\Big)$. (4.25)

Therefore, for each demand $k \in K$ and slot $s \in \{w_k, \ldots, \bar{s}\}$, the pricing problem aims at finding a path $p^*$ of $P_k$ such that

$rc^k_s(p^*) = \beta_k - \rho^k_s + \min_{p\in P_k}\sum_{e\in E(p)}\Big(-\lambda^k_e - \sum_{s'=s-w_k+1}^{s}\mu^e_{s'}\Big)$. (4.26)

Finding such a path $p^*$ can be seen as a separation procedure for the dual constraint (4.20): it consists in identifying, for each demand $k \in K$ and slot $s \in \{w_k, \ldots, \bar{s}\}$, a path $p^*$ such that $\beta_k - \rho^k_s + \sum_{e\in E(p^*)}(-\lambda^k_e - \sum_{s'=s-w_k+1}^{s}\mu^e_{s'}) < 0$ and $\sum_{e\in E(p^*)} \ell_e \le \bar{\ell}_k$. As a result, the pricing problem amounts to solving a Resource Constrained Shortest Path (RCSP) problem. The RCSP problem is well known to be weakly NP-hard [START_REF] Dror | Note on the Complexity of the Shortest Path Models for Column Generation in VRPTW[END_REF]. Several algorithms have been proposed in the literature to solve this problem, based on dynamic programming, heuristics and techniques related to Lagrangian decomposition. As background references we mention [START_REF] Carlyle | Lagrangian relaxation and enumeration for solving constrained shortest-path problems[END_REF][START_REF] Dumitrescu | Algorithms for the weight constrained shortest path problem[END_REF][START_REF] Eppstein | Finding the k shortest paths[END_REF][START_REF] Joksch | The shortest route problem with constraints[END_REF][START_REF] Lozano | On an exact method for the constrained shortest path problem[END_REF].

Dynamic Programming Algorithm for the Pricing Problem

In this work, we propose a pseudo-polynomial dynamic programming algorithm [START_REF] Dumitrescu | Algorithms for the weight constrained shortest path problem[END_REF] which consists in finding a minimum-cost path for each demand $k$ and slot $s$ while satisfying the transmission-reach constraint. It is based on the dynamic programming algorithm proposed by Dumitrescu et al. [START_REF] Dumitrescu | Algorithms for the weight constrained shortest path problem[END_REF] for the RCSP problem. For each demand $k \in K$ and slot $s$, we associate with each node $v \in V$ of the graph $G$ a set of labels $L_v$, where each label corresponds to a different path from the origin node $o_k$ to node $v$; a label associated with a path $p$ is specified by a cost equal to $\sum_{e\in E(p)}(-\lambda^k_e - \sum_{s'=s-w_k+1}^{s}\mu^e_{s'})$ and a weight equal to $\sum_{e\in E(p)} \ell_e$.
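A possible rendering of this label-based scheme is the following pseudo-polynomial dynamic program over (node, consumed length) states. It is a sketch under two stated assumptions: edge costs stand for the dual-based terms $-\lambda^k_e - \sum_{s'}\mu^e_{s'}$, and edge lengths are integers of at least 1, so that states can be processed by increasing consumed resource.

```cpp
#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>

struct Edge { int to; double cost; int length; };

// RCSP by dynamic programming, in the spirit of Dumitrescu et al.:
// dist[v][r] is the cheapest cost of a source-v path of total length r.
// Every edge consumes length >= 1, so relaxing states in increasing r
// is a valid topological order even with negative (dual-based) costs.
double rcsp(const std::vector<std::vector<Edge>>& graph,
            int source, int target, int lbar) {
    const double inf = std::numeric_limits<double>::infinity();
    int n = (int)graph.size();
    std::vector<std::vector<double>> dist(n, std::vector<double>(lbar + 1, inf));
    dist[source][0] = 0.0;
    for (int r = 0; r <= lbar; ++r)
        for (int v = 0; v < n; ++v) {
            if (dist[v][r] == inf) continue;
            for (const Edge& e : graph[v]) {
                int r2 = r + e.length;                   // resource after the edge
                if (r2 <= lbar && dist[v][r] + e.cost < dist[e.to][r2])
                    dist[e.to][r2] = dist[v][r] + e.cost;
            }
        }
    double best = inf;
    for (int r = 0; r <= lbar; ++r) best = std::min(best, dist[target][r]);
    return best; // adding beta_k - rho_k_s gives the reduced cost rc
}

int main() {
    // Toy instance: 0->1->2 is cheap but long, 0->2 is direct and short.
    std::vector<std::vector<Edge>> g(3);
    g[0] = {{1, -2.0, 3}, {2, -1.0, 1}};
    g[1] = {{2, -2.0, 3}};
    std::cout << rcsp(g, 0, 2, 5) << "\n"; // reach 5 forbids the long route: -1
}
```

The state space has $O(|V| \cdot \bar{\ell}_k)$ entries and each edge is examined once per resource level, which matches the pseudo-polynomial complexity stated below.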
For each demand $k$ and slot $s \in \{w_k, \ldots, \bar{s}\}$, the complexity of the algorithm is bounded by $O(|E \setminus E^k_0| \cdot \bar{\ell}_k)$ [START_REF] Dumitrescu | Algorithms for the weight constrained shortest path problem[END_REF]. Algorithm 3 summarizes the different steps of the dynamic programming algorithm.

Initial Columns

The basic subset of paths used to define the restricted master problem is generated using a brute-force search algorithm which builds a search tree covering the feasible paths $P_k$ of each demand $k$. It is used to pre-compute an initial subset $B_k$ of feasible paths for each demand $k \in K$, the transmission-reach constraint being used to prune nodes in the search tree of this algorithm, as illustrated below.
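A minimal sketch of such a pruned enumeration is the following depth-first search (toy data; the reach test is the only pruning rule shown here):

```cpp
#include <iostream>
#include <vector>

struct Arc { int to; int length; };

// Enumerate all simple o-d paths whose total length respects the
// transmission reach lbar, pruning a branch as soon as the remaining
// reach cannot absorb the next edge.
void dfs(const std::vector<std::vector<Arc>>& g, int v, int d, int lbar,
         std::vector<int>& path, std::vector<bool>& seen,
         std::vector<std::vector<int>>& out) {
    if (v == d) { out.push_back(path); return; }
    for (const Arc& a : g[v]) {
        if (seen[a.to] || a.length > lbar) continue;  // reach pruning
        seen[a.to] = true; path.push_back(a.to);
        dfs(g, a.to, d, lbar - a.length, path, seen, out);
        path.pop_back(); seen[a.to] = false;
    }
}

int main() {
    // Toy graph with three o-d routes: 0-1-3, 0-2-3 and 0-3.
    std::vector<std::vector<Arc>> g(4);
    g[0] = {{1, 2}, {2, 1}, {3, 5}};
    g[1] = {{3, 2}};
    g[2] = {{3, 3}};
    std::vector<int> path{0};
    std::vector<bool> seen(4, false); seen[0] = true;
    std::vector<std::vector<int>> paths;
    dfs(g, 0, 3, 4, path, seen, paths);   // reach 4 keeps only 0-1-3 and 0-2-3
    std::cout << paths.size() << " admissible paths\n";
}
```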
Branch-and-Price and Branch-and-Cut-and-Price Algorithms

Based on these features, we derive a Branch-and-Cut-and-Price algorithm for solving the C-RSA problem.

Description

The main purpose of this algorithm is to solve a sequence of linear programs using the column generation algorithm at each node of a Branch-and-Bound algorithm. At each iteration of the algorithm, we solve our pricing problem by identifying one or several new columns, solving a RCSP problem for each demand $k$ and slot $s \in \{w_k, \ldots, \bar{s}\}$ using the dynamic programming algorithm. We repeat this procedure at each iteration of the column generation until no new column is found (i.e., $rc^k_s \ge 0$ for all $k \in K$ and $s \in \{w_k, \ldots, \bar{s}\}$). As a result, the final solution is optimal for the linear relaxation of the path formulation. Furthermore, if it is integral, then it is optimal for the C-RSA problem. Otherwise, we create two subproblems by branching on fractional variables (variable branching rule) or on some constraints using the Ryan & Foster branching rule [START_REF] Ryan | An integer programming approach to scheduling[END_REF] (constraint branching rule). Algorithm 4 summarizes the different steps of the Branch-and-Price algorithm. By combining the Branch-and-Price algorithm with a cutting-plane based algorithm, we devise a Branch-and-Cut-and-Price algorithm which works as follows. Consider a fractional solution $(\bar{y}, \bar{x}, \bar{z})$. At each iteration of the Branch-and-Price algorithm, and for a given class of valid inequalities, our aim is to identify one or several inequalities of this class that are violated by the current solution. We repeat this procedure at each iteration of the algorithm until no violated inequality is identified. As mentioned before, the Branch-and-Cut-and-Price algorithm also uses the different classes of valid inequalities presented in chapter 2. They are separated in the order (2.54), (2.32), (2.42), (2.36), (2.43). Algorithm 5 summarizes the different steps of the Branch-and-Cut-and-Price algorithm for a given class of valid inequalities.

Primal Heuristic

Here, we propose a primal heuristic based on a hybrid method combining a local search algorithm and a greedy algorithm. Given a feasible fractional solution $(\bar{y}, \bar{x}, \bar{z})$, our primal heuristic consists in constructing an integral "feasible" solution from this fractional solution. For this, we propose a local search algorithm which consists in generating at each iteration a sequence of demands $L = 1', 2', \ldots, (|K|-1)', |K|'$. Based on this sequence of demands, our greedy algorithm selects a path $p$ and a slot $s$ for each demand $k' \in L$ with $\bar{y}^{k'}_{p,s} \ne 0$, while respecting the non-overlapping constraint with the set of demands that precede demand $k'$ in the list $L$ (i.e., the demands $1', 2', \ldots, (k-1)'$). If no such pair of path $p$ and slot $s$ exists for demand $k'$, we then select a path $p$ and a slot $s \in \{w_{k'}, \ldots, \bar{s}\}$ for demand $k'$ with $\bar{y}^{k'}_{p,s} = 0$, while respecting the non-overlapping constraint with the demands that precede $k'$ in the list $L$. After that, we compute the total length of the paths selected for the set of demands $K$ in the final solution $S$ given by the greedy algorithm. If the value of the solution given by the greedy algorithm is smaller than the value of the best solution found so far, our local search algorithm generates a new sequence by permuting some demands in the last sequence of demands; otherwise, we stop the algorithm and output the best solution found during our primal heuristic, induced by the best sequence of demands, i.e., the one having the smallest total length of the selected paths among the generated sequences.

Computational Study

Implementation Features

The B&P and B&C&P algorithms described in the current chapter have been implemented in C++ under Linux using the "Solving Constraint Integer Programs" framework (Scip 6.0.2), with Cplex 12.9 as LP solver. They have been tested on the LIMOS high-performance server with a memory limit of 64 GB, benefiting from parallelism by activating 8 threads, and with a CPU time limit of 5 hours (18000 s).

Computational Results

Throughout this section, we present the performance results of the B&C&P algorithm. Our main goal is to show the effectiveness of the valid inequalities used within the B&C&P algorithm. Table 4.1 reports the experimental results for both the Branch-and-Price (B&P) algorithm (i.e., the B&C&P without our additional valid inequalities) and the B&C&P algorithm. Each line corresponds to the average results of 4 tested instances. Note that we deactivate Scip's own cut generation for both the B&P and B&C&P algorithms, given that such cuts may change the dual problem, and hence the computation of the reduced costs. In order to evaluate the impact of the additional valid inequalities used within the B&C&P algorithm, we consider 5 criteria: the average number of nodes in the branching tree (Nb Nd), the average optimality gap (Gap), which represents the relative error between the lower bound and the best upper bound obtained at the end of the resolution, the average number of generated columns (Nbr Cols), the average number of violated inequalities added (Nbr Cuts), and the average CPU time in seconds (TT). The results show that the B&C&P algorithm is able to solve 187 instances to optimality, while 147 instances are solved to optimality when using the B&P algorithm. Hence, our valid inequalities allow solving several instances to optimality within a reasonable amount of time. Based on the reported results, we notice that the B&C&P algorithm is very efficient compared with the B&C algorithm: it is able to provide optimal solutions for several instances for which this is not the case for the B&C algorithm (without or with additional valid inequalities) within the CPU time limit (5 hours).
Furthermore, several instances that are solved to optimality by the B&C algorithm using Cplex, Gurobi and Scip could also be solved to optimality by the B&C&P algorithm. The average number of explored nodes using the B&C&P algorithm is greatly reduced for several instances compared with the B&C algorithm. Moreover, the average CPU time is significantly reduced using the B&C&P algorithm compared with the B&C algorithm. On the other hand, when using the B&P algorithm, we notice that it is able to provide optimal solutions for several instances that are not solved to optimality by the B&C algorithm using Cplex (see Table 4.2) and Gurobi (see Table 4.3). Furthermore, we noticed that the average number of explored nodes and the average CPU time using the B&P algorithm are greatly reduced for several instances compared with the B&C algorithm using Cplex and Gurobi. However, Own B&C SCIP is able to beat the B&P algorithm: the results in Table 4.4 show that Own B&C SCIP provides optimal solutions for several instances that are not solved to optimality by the B&P algorithm. Still, when optimality is proven by both algorithms, we found that using the B&P algorithm reduces the average number of explored nodes and the average CPU time for several instances compared with Own B&C SCIP.

Concluding Remarks

In this chapter, we first gave an extended formulation for the problem and solved its linear relaxation using a column generation algorithm. We discussed the associated pricing problem. Moreover, we investigated the polytope associated with our formulation and introduced several classes of valid inequalities, together with their separation procedures. Using this, we devised the B&C&P algorithm. Computational experiments have convincingly shown the strength of the valid inequalities. They significantly improve the results yielded by the B&P algorithm. Hence, the B&C&P algorithm performs very well compared with the B&P algorithm. Furthermore, the B&C&P algorithm is shown to be able to beat the B&C algorithm. A computational analysis was conducted to show the effectiveness of our approach for solving large-scale instances.

Chapter 5

Compact Formulation and Polyhedra for the Spectrum Assignment Sub-problem

In this chapter, we focus on the Spectrum Assignment (SA) sub-problem. First, we propose an integer linear programming compact formulation, and further investigate the facial structure of the associated polytope. Moreover, we identify several classes of valid inequalities for the polytope, some of which come from those already proposed for the C-RSA. We further prove that these inequalities are facet-defining, and discuss their separation problems. Based on these results, we devise a Branch-and-Cut (B&C) algorithm for the SA problem.

The Spectrum Assignment Sub-problem

The SA problem can be stated as follows. We consider an optical spectrum of $\bar{s} \in \mathbb{Z}_+$ available contiguous slots, and a set of demands $K$ such that each demand $k \in K$ has a number of requested slots $w_k$ and a fixed routing path $p_k$. The SA problem consists in assigning to each demand $k \in K$ an interval of $w_k$ contiguous slots in $S = \{1, \ldots, \bar{s}\}$ such that the intervals of two demands whose paths share an edge do not overlap, while minimizing the number of slots used. The SA problem is well known to be NP-hard [START_REF] Bermond | On spectrum assignment in elastic optical treenetworks[END_REF]. It is equivalent to the problems of wavelength assignment, interval coloring, and dynamic storage allocation [START_REF] Bermond | On spectrum assignment in elastic optical treenetworks[END_REF], which are well known to be NP-hard.

Compact Formulation

Here we introduce an integer linear programming compact formulation for the SA problem. For $s \in S$, let $u_s$ be a variable which takes value 1 if slot $s$ is used and 0 if not, and for $k \in K$ and $s \in S$, let $z^k_s$ be a variable which takes value 1 if slot $s$ is the last slot allocated for the routing of demand $k$ and 0 if not. The contiguous slots $s' \in \{s - w_k + 1, \ldots, s\}$ should be assigned to demand $k$ whenever $z^k_s = 1$. The SA problem is then equivalent to the following integer linear program

$\min \sum_{s\in S} u_s$, (5.1)

subject to

$z^k_s = 0$, for all $k \in K$ and $s \in \{1, \ldots, w_k - 1\}$, (5.2)

$\sum_{s=w_k}^{\bar{s}} z^k_s \ge 1$, for all $k \in K$, (5.3)

$\sum_{k\in K_e}\sum_{s'=s}^{\min(\bar{s},\, s+w_k-1)} z^k_{s'} - u_s \le 0$, for all $e \in E$ and $s \in S$, (5.4)

$z^k_s \ge 0$, for all $k \in K$ and $s \in S$, (5.5)

$u_s \le 1$, for all $s \in S$, (5.6)

$z^k_s \in \{0,1\}$, for all $k \in K$ and $s \in S$, (5.7)

$u_s \in \{0,1\}$, for all $s \in S$, (5.8)

where $K_e$ denotes the set of demands whose path uses edge $e$.
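Before turning to the polyhedral study, the following small sketch makes the model concrete: it checks constraints (5.2)-(5.4) for a candidate assignment of last slots. The data structures are hypothetical; the sets $K_e$ are encoded implicitly through the per-demand edge lists.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// A demand with its width w_k, the edges of its fixed routing path, and a
// chosen last slot s_k (so its interval is [s_k - w_k + 1, s_k]).
struct SaDemand { int width; std::vector<int> edges; int lastSlot; };

bool sharesEdge(const SaDemand& a, const SaDemand& b) {
    for (int e : a.edges)
        if (std::find(b.edges.begin(), b.edges.end(), e) != b.edges.end())
            return true;
    return false;
}

// Feasibility of a candidate SA solution: every last slot leaves room for
// the width, and demands sharing an edge receive disjoint intervals.
bool feasible(const std::vector<SaDemand>& ds, int sbar) {
    for (const auto& d : ds)
        if (d.lastSlot < d.width || d.lastSlot > sbar) return false; // (5.2)-(5.3)
    for (std::size_t i = 0; i < ds.size(); ++i)
        for (std::size_t j = i + 1; j < ds.size(); ++j) {
            if (!sharesEdge(ds[i], ds[j])) continue;
            int ai = ds[i].lastSlot - ds[i].width + 1, bi = ds[i].lastSlot;
            int aj = ds[j].lastSlot - ds[j].width + 1, bj = ds[j].lastSlot;
            if (std::max(ai, aj) <= std::min(bi, bj)) return false;   // (5.4)
        }
    return true;
}

int main() {
    std::vector<SaDemand> ds = {{2, {0, 1}, 2}, {3, {1, 2}, 5}, {2, {3}, 2}};
    std::cout << (feasible(ds, 8) ? "feasible" : "infeasible") << "\n";
}
```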
Proposition 5.3.1 states that any equation $\mu u + \sigma z = \tau$ satisfied by all the points of $P_{sa}(G, K, S)$ is a linear combination of the equations (5.2), i.e., $(\mu, \sigma) = \gamma M$ for some $\gamma$, where $M$ is the matrix associated with the system (5.2). The proof relies on the construction of feasible solutions. For a given slot $\tilde{s} \in S$, one builds a solution $S^{105} = (U^{105}, S^{105})$ in which each demand $k_i$ receives a last slot $s_{k_i}$ chosen so that $\{s_{k_i} - w_{k_i} + 1, \ldots, s_{k_i}\} \cap \{s_{k_j} - w_{k_j} + 1, \ldots, s_{k_j}\} = \emptyset$ for each $k_j \in D^{105}_i$, where $D^{105}_i = \{k_j \in \{k_1, \ldots, k_{i-1}\} : E(p_{k_i}) \cap E(p_{k_j}) \ne \emptyset\}$, and so that $\tilde{s} \notin \{s_{k_i} - w_{k_i} + 1, \ldots, s_{k_i}\}$ for each demand $k_i \in K$. A second solution $S^{106}$ is then obtained from $S^{105}$ by adding slot $\tilde{s}$ to the set of used slots. $S^{106}$ is feasible for the SA problem, hence the corresponding incidence vector $(u^{S^{106}}, z^{S^{106}})$ belongs to $P_{sa}(G, K, S)$. We then obtain that $\mu u^{S^{105}} + \sigma z^{S^{105}} = \mu u^{S^{106}} + \sigma z^{S^{106}} = \mu u^{S^{105}} + \sigma z^{S^{105}} + \mu_{\tilde{s}}$, and hence $\mu_{\tilde{s}} = 0$. In a similar way, we can show that $\mu_s = 0$ for all slots $s \in S$.

Let us show now that $\sigma^k_s = 0$ for all $k \in K$ and $s \in \{w_k, \ldots, \bar{s}\}$. Consider a demand $k \in K$ and a slot $s \in \{w_k, \ldots, \bar{s}\}$, and let $S^{107} = (U^{107}, S^{107})$ be a feasible solution in which the last slots are chosen such that $\{s_{k_i} - w_{k_i} + 1, \ldots, s_{k_i}\} \cap \{s_{k_j} - w_{k_j} + 1, \ldots, s_{k_j}\} = \emptyset$ for each $k_j \in D^{107}_i$, where $D^{107}_i = \{k_j \in \{k_1, \ldots, k_{i-1}\} \cup \{k\} : E(p_{k_i}) \cap E(p_{k_j}) \ne \emptyset\}$, and such that $\{s - w_k + 1, \ldots, s\} \cap \{s_{k_i} - w_{k_i} + 1, \ldots, s_{k_i}\} = \emptyset$ if $E(p_k) \cap E(p_{k_i}) \ne \emptyset$. Comparing this solution with the one obtained by additionally assigning slot $s$ as a last slot for demand $k$, and since $\mu_s = 0$ for each $s \in S$, it follows that $\sigma^k_s = 0$. In a similar way, we can show that $\sigma^k_s = 0$ for all $k \in K$ and $s \in \{w_k, \ldots, \bar{s}\}$.

Therefore, we obtain that all the equations of the polytope $P_{sa}(G, K, S)$ are expressed only in terms of the variables $z^k_s$ with $s \in \{1, \ldots, w_k - 1\}$ for each demand $k \in K$. We distinguish $|K|$ blocks of rows in the matrix $M$ associated with the system (5.2):

• block $M_k$ corresponds to the equations $z^k_s = 0$ for all $s \in \{1, \ldots, w_k - 1\}$, so that $\mathrm{rank}(M_k) = w_k - 1$.

Note that the $|K|$ blocks of the matrix $M$ are independent. Furthermore, there is no dependency between slots: for each demand $k$, the rows associated with the slots $s \in \{1, \ldots, w_k - 1\}$ are independent, so that $\sum_{s=1}^{w_k-1}\sigma^k_s = \sum_{s=1}^{w_k-1}\gamma_{k,s}$ implies $\sum_{s=1}^{w_k-1}(\sigma^k_s - \gamma_{k,s}) = 0$ for each demand $k \in K$, whose only solution is $\sigma^k_s = \gamma_{k,s}$ for each $s \in \{1, \ldots, w_k - 1\}$. As $k$ is chosen arbitrarily in $K$, we repeat the same procedure for all $k' \in K \setminus \{k\}$. We then get that

$\sigma^k_s = \gamma_{k,s}$, for all $k \in K$ and $s \in \{1, \ldots, w_k - 1\}$. (5.9)

As a result, we have $(\mu, \sigma) = \gamma M$, which ends the proof.

Proof. This follows from the rank of the matrix $M$, which equals $r'$, and the results of Proposition 5.3.1.

Facial Investigation

Here we study the facial structure of the basic constraints of the compact formulation (5.1)-(5.8), which are facet-defining for the polyhedron $P_{sa}(G, K, S)$ under certain conditions.

Theorem 5.3.2. Consider a demand $k \in K$ and a slot $s \in \{w_k, \ldots, \bar{s}\}$. Then, inequality $z^k_s \ge 0$ is facet-defining for $P_{sa}(G, K, S)$.

Proof. Let us denote by $F^k_s$ the face induced by inequality $z^k_s \ge 0$, that is $F^k_s = \{(u, z) \in P_{sa}(G, K, S) : z^k_s = 0\}$. We denote inequality $z^k_s \ge 0$ by $\alpha u + \beta z \le \lambda$.
Let µu + σz ≤ τ be a valid inequality that defines a facet F of P sa (G, K, S). Suppose that F k s ⊂ F = {(u, z) ∈ P sa (G, K, S) : µu + σz = τ }. To prove that F k s is facet defining for P sa (G, K, S), it sufficient to show that there exist ρ ∈ R and γ ∈ R k∈K (w k -1) ) such that (µ, σ) = ρ(α, β) + γM . where D 109 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E(p k i ) ∩ E(p k j ) ̸ = ∅}. This guarantees that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j {k 1 , ..., k i-1 } ∪ {k} with E(p k i ) ∩ E(p k j ) ̸ = ∅, • and s / ∈ {s k i - for each demand k ∈ K. Solution S ′109 is feasible for the SA problem. Hence, the corresponding incidence vector (u S ′109 , z S ′109 ) belongs to F k s . Solutions S 109 and S ′109 satisfy equation µu + σz = τ . We then obtain that µu S 109 + σz S 109 = µu S ′109 + σz S ′109 = µu S 109 + σz S 109 + µ s. Hence, µ s = 0. w k i + 1, ..., s k i } (slot In a similar way, we can show that µ s = 0, for all slots s ∈ S. Next, we will show that, σ k s ′ = 0 for all s ′ ∈ {w k , . where Given that µ s = 0 for all s ∈ S, it follows that σ k s ′ = 0. In a similar way, we can show that σ k s ′ = 0, for all slots s ′ ∈ {w k , ..., s} \ {s}, σ k ′ s ′ = 0, for all k ′ ∈ K \ {k} and s ′ ∈ {w k ′ , ..., s}. D 110 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E(p k i ) ∩ E(p k j ) ̸ = ∅}. This ensures that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 110 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if E(p k ) ∩ E(p k i ) ̸ = ∅ ( It follows that σ k s = ρ for demand k and slot s in {w k , ..., s}. By (5.9), we know that σ k ′ s ′ = γ k ′ ,s ′ , for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}. Overall, we obtain that µ s = 0 for each slot s ∈ S , and for each k ′ ∈ K and s ′ ∈ S σ k ′ s ′ =          γ k ′ ,s ′ if s ′ ∈ {1, ..., w k ′ -1}, ρ if k ′ = k and s ′ = s, 0 otherwise. As a consequence, we have (µ, σ) = ρ(α, β) + γM . Theorem 5.3.3. Consider a slot s ∈ S. Then, inequality u s ≤ 1 is facet defining for P sa (G, K, S). Proof. Let us denote F s the face induced by inequality u s ≤ 1 given by F s = {(u, z) ∈ P sa (G, K, S) : u s = 1}. We denote inequality u s ≤ 1 by αu + βz ≤ λ. Let µu + σz ≤ τ be a valid inequality that defines a facet F of P sa (G, K, S). Suppose that F s ⊂ F = {(u, z) ∈ P sa (G, K, S) : µu + σz = τ }. To prove that F s is facet defining for P sa (G, K, S), it sufficient to show that there exist ρ ∈ R and γ ∈ R k∈K (w k -1) ) such that (µ, σ) = ρ(α, β) + γM . where This implies that µ s = 0. D 113 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k ′ } : E(p k i ) ∩ E(p k j ) ̸ = ∅}. This ensures that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D In a similar way, we can show that µ s = 0, for all slots s ∈ S \ {s}. Next, we will show that, σ k s ′ = 0 for all k ∈ K and s ′ ∈ {w k , ..., s}. Consider a demand k ∈ K and a slot s ′ in {w k , ..., s}. Let S 114 = (U 114 , S 114 ) be the solution given by a) for one demand k ′ ∈ K \ {k}, we select the smallest slot index s k ′ ∈ {w k ′ , ..., s ′ } ∩ {s, ..., s + w k ′ -1} as last slot, b) we select the slot s k in {w k , ..., s} \ {s} \ {s ′ } as last slot for demand k with {s k - where Since µ s = 0 for all s ∈ S \ {s}, it follows that σ k s ′ = 0. In a similar way, we can show that σ k ′ s ′ = 0, for all k ′ ∈ K \ {k} and s ′ ∈ {w k ′ , ..., s}. 
w k + 1, ..., s k } ∩ {s k ′ -w k ′ + 1, ..., s k ′ } = ∅ if E(p k ) ∩ E(p k ′ ) ̸ = ∅, c) for each demand k i ∈ K \ {k, k ′ } with i ∈ {1, ..., D 114 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k, k ′ } : E(p k i ) ∩ E(p k j ) ̸ = ∅}. As a result, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 114 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if E(p k ) ∩ E(p k i ) ̸ = ∅ ( It follows that µ s = ρ for slot s in S. We know from (5.9) that σ k ′ s ′ = γ k ′ ,s ′ , for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}. We conclude that µ s ′ =    ρ if s ′ = s, 0 otherwise, and for each k ′ ∈ K and s ′ ∈ S σ k ′ s ′ =    γ k ′ ,s if s ∈ {1, ..., w k ′ -1}, 0 otherwise. As a consequence, we have (µ, σ) = ρ(α, β) + γM as desired. Theorem 5.3.4. For a demand k ∈ K, inequality s s=w k z k s ≥ 1 is facet defining for P sa (G, K, S). Proof. Let F k S be the face induced by inequality s s=w k z k s ≥ 1, that is F k S = {(x, z) ∈ P sa (G, K, S) : s s=w k z k s = 1}. 200 We denote inequality s s=w k z k s ≥ 1 by αu + βz ≤ λ. Let µu + σz ≤ τ be a valid inequality that defines a facet F of P sa (G, K, S). Suppose that F k S ⊂ F = {(u, z) ∈ P sa (G, K, S) : µu + σz = τ }. To prove that F k S is facet defining for P sa (G, K, S), it sufficient to show that there exist ρ ∈ R and γ ∈ R k∈K (w k -1) ) such that (µ, σ) = ρ(α, β) + γM . where Hence, µ s = 0. D 116 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E(p k i ) ∩ E(p k j ) ̸ = ∅}. This verifies that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D In a similar way, we can show that µ s = 0, for all slots s ∈ S. Next, we will show that, σ k ′ s ′ = 0 for all k ′ ∈ K \ {k} and s ′ ∈ {w k ′ , ..., s}. Consider a demand k ′ in K \ {k} and a slot s ′ in {w k ′ , ..., s}. Let S 117 = (U 117 , S 117 ) be the solution given by a) we select slot s k = w k as last slot for demand k, b) we select the smallest slot index s k ′ from the set of slots I 117 k ′ given by where Since µ s = 0 for all s ∈ S, it follows that σ k ′ s ′ = 0. In a similar way, we can show that I 117 k ′ = {w ki , ..., s k -w k } ∩ {s k + w ki , ..., s} \ {s ′ } if E(p k ′ ) ∩ E(p k ) ̸ = ∅ or I 117 k ′ = {w k ′ , ..., s} \ {s ′ } if not. c) for each demand k i ∈ K \ {k, k ′ } with i ∈ {1, ..., D 117 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k, k ′ } : E(p k i ) ∩ E(p k j ) ̸ = ∅}. This guarantees that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 117 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ if E(p k ′ ) ∩ E(p k i ) ̸ = ∅ ( σ k ′ s ′ = 0, for all k ′ ∈ K \ {k} = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E(p k i ) ∩ E(p k j ) ̸ = ∅}. Hence, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D117 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ if E(p k ′ ) ∩ E(p k i ) ̸ = ∅ ( = ( S118 k \ {s}) ∪ {s} for demand k such that {s ′ -w k + 1, ..., s ′ } ∩ {s" -w k ′ + 1, ..., s"} = ∅ for each k ′ ∈ K and s" ∈ S 119 k ′ with E 119 k ∩ E 119 k ′ ̸ = ∅. The last slots assigned to the demands K \ {k} in S118 remain the same, i.e., S118 k" = S 119 k" for each demand k" ∈ K \ {k}. Solution S 119 is feasible for the SA problem. The corresponding incidence vector (u S 119 , z S 119 ) belongs to F k S . Hence, solutions S 118 and S 119 satisfy equation µu + σz = τ . 
We then obtain that µu S118 + σz S118 = µu S ′119 + σz S ′119 = µu S118 + σz S118 -σ k s + σ k s ′ - s∈U 118 \U 119 µ s + s′ ∈U 119 \U 118 µ s′ . Since µ s = 0 for all s ∈ S, it follows that σ k s ′ = σ k s . In a similar way, we can show that σ k s ′ = σ k s , for all slots s, s ′ ∈ {w k , ..., s}. Consequently, we obtain that σ k s = ρ for demand k and slot s in {w k , ..., s}. By (5.9), we know that σ k ′ s ′ = γ k ′ ,s ′ , for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}. We then conclude that µ s = 0 for each slot s ∈ S , and for each k ′ ∈ K and s ∈ S σ k ′ s =          γ k ′ ,s if s ∈ {1, ..., w k ′ -1}, ρ if k ′ = k and s ∈ {w k ′ , ..., s}, 0 otherwise. As a result, we have (µ, σ) = ρ(α, β) + γM as desired. Valid Inequalities and Facets In what follows, we present several valid inequalities for P sa (G, K, S), and prove that they are facet-defining under certain conditions. Interval-Capacity-Cover Inequalities We start this section by introducing some classes of valid inequalities related to the knapsack constraints. Let us introduce the following conflict graph. i +w k -1 z k s ≤ |K ′ | -1, (5.10) is valid for P sa (G, K, S). Proof. The interval I can cover at most |K ′ | -1 demands given that K ′ is a minimal cover for interval I. Inequality (5.10) can be strengthened by extending each minimal cover K ′ ⊂ K for an interval I as follows. Proposition 5.4.2. Let I = [s i , s j ] be an interval of contiguous slots in [1, s]. Let K ′ ⊆ K be a minimal cover for interval I = [s i , s j ] such that K ′ defines a clique in H sa , and Ξ(K ′ ) be a subset of demands in K \ K ′ such that Ξ(K ′ ) = {k ∈ K \K ′ such that w k ≥ w k ′ and E(p k )∩E(p k ′ ) ̸ = ∅ ∀k ′ ∈ K ′ }. Then, the inequality k∈K ′ s j s=s i +w k -1 z k s + k ′ ∈Ξ(K ′ ) s j s ′ =s i +w k ′ -1 z k ′ s ′ ≤ |K ′ | -1, (5.11) is valid for P sa (G, K, S). Proof. The interval I = [s i , s j ] can cover at most |K ′ |-1 demands from the demands in K ′ ∪ Ξ(K ′ ) given that K ′ is a minimal cover for interval I = [s i , s j ] and the definition of the set Ξ(K ′ ) such that for each pair (k, k ′ ) with k ∈ K ′ and k ′ ∈ Ξ(K ′ ), the set (K ′ \ {k}) ∪ {k ′ } stills defining minimal cover for the interval I over edge e. Furthermore, for each quadruplet (k, k ′ , k, k′ ) with k, k ′ ∈ K ′ and k, k′ ∈ Ξ(K ′ ), the set (K ′ \ {k, k ′ }) ∪ { k, k′ } stills defining minimal cover for the interval I given that w k + w k ′ ≤ w k + w k′ . Theorem P sa (G, K, S, K, I) = {(u, z) ∈ P sa (G, K, S) : k ′ ∈K\ K (v k ,v k ′ )∈H r ∀k∈ K s j s ′ =s i +w k ′ -1 z k ′ s ′ = 0}. Proof. Necessity If there exists an interval of contiguous slots I ′ in [1, s] with I ⊂ I ′ such that K defines a minimal cover for the interval I ′ . This means that {s i + w k -1, ..., s j } ⊂ I ′ . As a result, inequality (5.10) induced by the minimal cover K for the interval I, it is dominated by another inequality (5.10) induced by the same minimal cover K for the interval I ′ . Hence, inequality (5.10) cannot be facet defining for the polytope P sa (G, K, S, I). Sufficiency. Let F I K denote the face induced by inequality (5.10), that is F I K = {(u, z) ∈ P sa (G, K, S, K, I) : k∈ K s j s=s i +w k -1 z k s = | K| -1}. Denote inequality k∈ K s j s=s i +w k -1 z k s ≤ | K| -1 by αu + βz ≤ λ. Let µu + σz ≤ τ be a valid inequality that is facet defining F of P sa (G, K, S, I). Suppose that F I K ⊂ F = {(u, z) ∈ P sa (G, K, S, I) : µu + σz = τ }. 
In order to prove that inequality k∈ K s j s=s i +w k -1 z k s ≤ | K| - 1 is facet defining for P sa (G, K, S, I), we show that there exist ρ ∈ R and γ ∈ R k∈K (w k -1) ) such that (µ, σ) = ρ(α, β) + γM . First, we show that µ s = 0 for all s ∈ S. Consider a slot s ∈ S. Let S 120 = (U 120 , S 120 ) be the solution given by a) for one demand k ′ from K, we select the smallest slot index where As a result, µ s = 0. s k ′ in [{w k ′ , ..., s}\[{s i + w k ′ -1, ..., s j } ∪ {s, ..., s + w k ′ -1}]] \ {s, ..., s + w k i -1} (slot D 120 i = {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E(p k i ) ∩ E(p k j ) ̸ = ∅}. This guarantees that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 120 i , • and s / ∈ {s k i -w k i + 1, ..., s k i } (slot In a similar way, we can show that µ s = 0, for all slots s ∈ S. where Since µ s = 0 for all slots s ∈ S, it follows that σ k s ′ = 0. In a similar way, we can show that σ k s = 0, for all k ∈ K and s ∈ {w k , ..., s} with s / D 122 i = {k j ∈ {k 1 , ..., k i-1 } ∩ K : E(p k i ) ∩ E(p k j ) ̸ = ∅}, c) for each demand k i ∈ K \ K with i ∈ {1, ..., = {k j ∈ {k 1 , ..., k i-1 } ∪ K : E(p k i ) ∩ E(p k j ) ̸ = ∅}. This ensures that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 122 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if E(p k i ) ∩ E(p k ) ̸ = ∅ ( ∈ {s i + w k -1, ..., s j } if k / ∈ K. Let prove that σ k s for all k ∈ K and s ∈ {s i + w k -1, ..., s j } are equivalent. Consider a demand k ′ ∈ K and a slot s ′ ∈ {s i + w k ′ -1, ..., s j } with k ′ ∈ K. Let S 124 = (U 124 , S 124 ) be the solution given by a) for one demand k" from K, we select the smallest slot index s k" in {w k" , ..., s} \ {s i + w k" -1, ..., s j } as last slot, b) for each demand k i ∈ K \ {k"} with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 124 i given by I 124 i = [ kj ∈D 124 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ {s i + w ki -1, ..., s j }, where D 124 i = {k j ∈ {k 1 , ..., k i-1 } ∩ K : E(p k i ) ∩ E(p k j ) ̸ = ∅}, c) for each demand k i ∈ K \ K with i ∈ {1, ..., = {k j ∈ {k 1 , ..., k i-1 } ∪ K : E(p k i ) ∩ E(p k j ) ̸ = ∅}. Hence, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 124 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ if E(p k i ) ∩ E(p k ′ ) ̸ = ∅ ( = (S 125 k \{s})∪{s} such that {s-w k +1, ..., s}∩{s ′ -w k ′ +1, ..., s ′ } = ∅ for each k ′ ∈ K and s ′ ∈ S 125 k ′ with E 125 k ∩ E 125 k ′ ̸ = ∅. Solution S 125 is feasible for the SA problem. The corresponding incidence vector (u S 125 , z S 125 ) belongs to F I K . Hence, solutions S 124 and S 125 satisfy equation µu + σz = τ . We then obtain that µu S 124 + σz S 124 = µu S 125 + σz S 125 = µu S 124 + σz S 124 + σ k ′ s ′ -σ k s + σ k s + s"∈U 125 \U 124 µ s" - s"∈U 124 \U 125 µ s" . Since σ k s = 0 for s / ∈ {s i + w k -1, ..., s j } with k ∈ K, and µ s" = 0 for all s" ∈ S, it follows that σ k ′ s ′ = σ k s . The pair (k, k ′ ) are chosen arbitrarily in the minimal cover K, we then re-do the same procedure for all pairs (k, k ′ ) such that we find σ k s = σ k ′ s ′ , for all pairs (k, k ′ ) ∈ K, with s ∈ {s i + w k -1, ..., s j } and s ′ ∈ {s i + w k ′ -1, ..., s j }. We re-do the same procedure for each two slots s, s ′ ∈ {s i + w k -1, ..., s j } for each demand k ∈ K with k ∈ K such that σ k s = σ k s ′ , for all k ∈ K and s, s ′ ∈ {s i + w k -1, ..., s j }. 
By (5.9), we know that σ k ′ s ′ = γ k ′ ,s ′ , for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}. Overall, we obtain that µ s = 0 for each slot s ∈ S, and σ k ′ s =          γ k ′ ,s if s ∈ {1, ..., w k ′ -1}, ρ if k ′ ∈ K and s ∈ {s i + w k ′ -1, , ..., s j }, 0 otherwise, for each k ′ ∈ K and s ∈ S. As a consequence, we have (µ, σ) = ρ(α, β) + γM as desired. Inequality (5.10) can then be lifted using a sequential lifting procedure [START_REF] Balas | Facets of the Knapsack Polytope From Minimal Covers[END_REF] to be facet defining and generate several facets for the polytope P sa (G, K, S). Theorem 5.4.2. Let I = [s i , s j ] be an interval of contiguous slots. Let K ⊂ K be a minimal cover for interval I = [s i , s j ] such that K defines a clique in H sa . Let K ′ ⊂ K \ K = {k 1 , ..., k q } such that K ∪{k 1 , ..., k q } defines a clique in H sa . Consider the following sequence of knapsack problems defined as                      z i = max j∈ K a j + i-1 j=1 β j a j , j∈ K w j a j + i-1 j=1 w k j a j ≤ |I| -w k i , a j ∈ {0, 1}, ∀j ∈ K ∪ {1, ..., i -1}, (5.12) for all i ∈ {1, ..., q} with β j = | K| -1 -z j for all j ∈ {1, ..., i -1}. Then, the inequality k∈ K s j s=s i +w k -1 z k s + q j=1 s j s ′ =s i +w k j -1 β j z k j s ′ ≤ | K| -1, (5.13) is valid for P sa (G, K, S). Moreover, inequality (5.13) defines facet of P sa (G, K, S) if there does not exist an interval of contiguous slots I ′ = [s ′ i , s ′ j ] in [1, s] with I ⊂ I ′ such that K defines a minimal cover for the interval I ′ . Proof. It is trivial given that inequality (5.13) can never be dominated in P sa (G, K, S) if there does not exist an interval of contiguous slots [1, s] with I ⊂ I ′ such that K defines a minimal cover for the interval I ′ . I ′ = [s ′ i , s ′ j ] in Interval-Clique Inequalities if w k + w k ′ > |I| and E(p k ) ∩ E(p k ′ ) ̸ = ∅. Let Q sa (G, K, S) = {(x, z) ∈ P sa (G, K, S) : k∈K s s=w k z k s = 1} be a semi- polytope of P sa (G, K, S). Proposition 5.4.3. Let I = [s i , s j ] be an interval of contiguous slots in [1, s] with s i ≤ s j -1, and C be a clique in the conflict graph H ′E I with |C| ≥ 3. Then, inequality (2.39) is also valid for Q sa (G, K, S). Moreover, it is valid for P sa (G, K, S) if 2w k > |I| for each v k ∈ C. Proof. We use the same proof of proposition (2.4.13). ′ in [1, s] such that I ⊂ I ′ with • w k + w k ′ ≥ |I ′ | for each k, k ′ ∈ C, • 2w k ≥ |I ′ | + 1 and w k ≤ |I ′ | for each k ∈ C. c) and there does not exist a slot s ∈ I such that s ∈ {s ′ -w k + 1, .., s ′ } for each k ∈ C and s ′ ∈ {s i + w k -1, .., s j }. Proof. Neccessity. We distinguish three cases a) if there exists a clique C ′ that contains all the demands k ∈ C. F H ′E I C = {(u, z) ∈ P sa (G, K, S) : v k ∈C s j s=s i +w k -1 z k s = 1}. We denote inequality v k ∈C s j s=s i +w k -1 z k s ≤ 1 by αu + βz ≤ λ. Let µu + σz ≤ τ be a valid inequality that is facet defining F of P sa (G, K, S). Suppose that F H ′E I C ⊂ F = {(u, z) ∈ P sa (G, K, S) : µu + σz = τ }. In order to prove that inequality v k ∈C s j s=s i +w k -1 z k s ≤ 1 is facet defining for P sa (G, K, S), we need to show that there exist ρ ∈ R and γ ∈ R k∈K (w k -1) ) such that (µ, σ) = ρ(α, β) + γM . Let first show that µ s = 0 for all s ∈ S. Consider a slot s ∈ S. 
Let S 127 = (U 127 , S 127 ) be the solution given by a) for one demand k ′ from C, we select the smallest slot index s k ′ = {s i +w k ′ -1, ..., s j }\ {s, ..., s -w k ′ -1} as last slot (slot assignment constraint taking into account the possibility of adding slot s in the set of used slots U 127 ), b) for each demand k i ∈ C \ {k ′ } with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 127 i given by I 127 i = [ kj ∈D 127 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}][∩{s i + w ki -1, ..., s j }] \ {s, ..., s + w ki -1}, where where Since µ s = 0 for all slots s ∈ S, it follows that σ k s ′ = 0. In a similar way, we can show that σ k s = 0, for all k ∈ K and s ∈ {w k , ..., s} with s / D 127 i = {k j ∈ {k 1 , ..., k i-1 } ∩ C : E(p k i ) ∩ E(p k j ) ̸ = ∅}, c) for each demand k i ∈ K \ C with i ∈ {1, ..., D 129 i = {k j ∈ {k 1 , ..., k i-1 } ∩ C : E(p k i ) ∩ E(p k j ) ̸ = ∅}, c) for each demand k i ∈ K \ C with i ∈ {1, ..., = {k j ∈ {k 1 , ..., k i-1 } ∪ C : E(p k i ) ∩ E(p k j ) ̸ = ∅}. This ensures that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 129 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if E(p k i ) ∩ E(p k ) ̸ = ∅ ( ∈ {s i + w k -1, ..., s j } if v k / ∈ C. Let prove that σ k s for all v k ∈ C and s ∈ {s i + w k -1, ..., s j } are equivalent. Consider a demand k ′ ∈ K and a slot s ′ ∈ {s i + w k ′ -1, ..., s j } with v k ′ ∈ C, and a solution S 131 = (U 131 , S 131 ) given by a) for one demand k from C, we select theslot Since σ k s = 0 for s / ∈ {s i + w k -1, ..., s j } with v k ∈ C, and µ s" = 0 for all s" ∈ S, it follows that σ k ′ s ′ = σ k s . Given that the pair (v k , v k ′ ) are chosen arbitrarily in clique C, we re-do the same procedure for all pairs (v k , v k ′ ) such that we find s k = s i + w k -1 as last slot, b) for each demand k i ∈ C \ {k}, = {k j ∈ {k 1 , ..., k i-1 } ∪ C such that E(p k i ) ∩ E(p k j ) ̸ = ∅}. This ensures that • {s k i -w k i + 1, ..., s k i } ∩ {s -w k j + 1, ..., s} = ∅ for each k j ∈ D 131 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ if E(p k i ) ∩ E(p k ′ ) ̸ = ∅ ( σ k s = σ k ′ s ′ , for all pairs (v k , v k ′ ) ∈ C, with s ∈ {s i + w k -1, ..., s j } and s ′ ∈ {s i + w k ′ -1, ..., s j }. We re-do the same procedure for each two slots s, s ′ ∈ {s i + w k -1, ..., s j } for each demand k ∈ K with v k ∈ C such that σ k s = σ k s ′ , for all v k ∈ C and s, s ′ ∈ {s i + w k -1, ..., s j }, σ k s = σ k ′ s ′ , for all v k , v k ′ ∈ C, s ∈ {s i + w k -1, ..., s j } and s ′ ∈ {s i + w k ′ -1, ..., s j }. Consequently, we obtain that σ k s = ρ for all v k ∈ C and s ∈ {s i + w k -1, ..., s j }. We know from (5.9) that σ k ′ s ′ = γ k ′ ,s ′ , for all k ′ ∈ K and s ′ ∈ {1, ..., w k ′ -1}. As a consequence, we obtain that µ s = 0 for each slot s ∈ S , and σ k ′ s =          γ k ′ ,s if s ∈ {1, ..., w k ′ -1}, ρ if v k ′ ∈ C and s ∈ {s i + w k ′ -1, , ..., s j }, 0 otherwise, for each k ′ ∈ K and s ∈ S. As a result, we have (µ, σ) = ρ(α, β) + γM as desired. Interval-Odd-Hole Inequalities F H ′E I H = {(u, z) ∈ P sa (G, K, S) : v k ∈H s j s=s i +w k -1 z k s = |H| -1 2 }. We denote inequality v k ∈H where where s j s=s i +w k -1 z k s ≤ |H|- D 134 i = {k j ∈ {k 1 , ..., k i-1 } ∩ H : E(p k i ) ∩ E(p k j ) ̸ = ∅}. 
We let S 134 k i = {s k i } be the set of last slots assigned to demand k i , d) for each demand k i ∈ K \ H with i ∈ {1, ..., R 134 i = {k j ∈ {k 1 , ..., k i-1 } ∪ H such that E(p k i ) ∩ E(p k j ) ̸ = ∅}. This guarantees that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ R 134 i , • where where Since µ s = 0 for all slots s ∈ S, it follows that σ k s ′ = 0. In a similar way, we can show that L 136 i = {k j ∈ {k 1 , ..., k i-1 } ∩ H : E(p k i ) ∩ E(p k j ) ̸ = ∅}. c) for each demand k i ∈ H \ H with i ∈ {1, ..., D 136 i = {k j ∈ {k 1 , ..., k i-1 } ∪ H : E(p k i ) ∩ E(p k j ) ̸ = ∅}. This ensures that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 136 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k + 1, ..., s ′ } = ∅ if E(p k i ) ∩ E(p k ) ̸ = ∅ ( σ k s ′ = 0, for demand k and s ′ ∈ {w k , ..., s} with s ′ / ∈ {s i + w k -1, ..., s j } if v k ∈ H. We re-do the same procedure for all demand k ′ in K \ {k} such that where where if v k ∈ H, and µ s" = 0 for all s" ∈ S, it follows that σ k i s = σ k ′ s ′ . Given that the pair (v k , v k ′ ) are chosen arbitrarily in odd-hole H, we re-do the same procedure for all pairs (v k , v k ′ ) such that we find σ k s = σ k ′ s ′ , for all pairs (v k , v k ′ ) ∈ H, s ∈ {s i + w k -1, ..., s j } and {s i + w k ′ -1, ..., s j }. σ k ′ s = 0, for all k ′ ∈ K \ {k} and s ∈ {w k ′ , ..., s} with s / ∈ {s i + w k ′ -1, ..., s j } if v k ′ ∈ H. Let prove that σ k ′ s ′ for all v k ′ ∈ H and s ′ ∈ {s i + w k ′ -1, ..., s j } are equivalent. Consider a demand k ′ ∈ K with v k ′ ∈ H and a slot s ′ ∈ {s i + w k ′ -1, ..., L 138 i = {k j ∈ {k 1 , ..., k i-1 } ∩ H : E(p k i ) ∩ E(p k j ) ̸ = ∅}, c) for each demand k i ∈ H \ H with i ∈ {1, ..., D 138 i = {k j ∈ {k 1 , ..., k i-1 } ∩ H : E(p k i ) ∩ E(p k j ) ̸ = ∅}, d) for each demand k i ∈ K \ H with i ∈ {1, ..., Consequently, we obtain that σ k s = ρ for all v k ∈ H and s ∈ {s i + w k -1, ..., s j }. Overall, and using the result (5.9), we obtain that µ s = 0 for each slot s ∈ S , and for each k ′ ∈ K and s ∈ S. As a result, we have (µ, σ) = ρ(α, β) + γM as desired. σ k ′ s =          γ k ′ , Slot-Assignment-Clique Inequalities On the other hand, we also noticed that there may exist some cases that are not covered by inequality (2.25). For this, we provide an adapted definition of a conflict graph H E S for the SA problem and its associated inequality. Definition 5.4.3. Let H ′E S be a conflict graph defined as follows. For all slot s ∈ {w k , ..., s} and demand k ∈ K, consider a node v k,s in H ′E S . Two nodes v k,s and v k ′ ,s ′ are linked by an edge in H ′E S if and only if • k = k ′ , • or E k 1 ∩ E k ′ 1 ̸ = ∅ and {s -w k + 1, ..., s} ∩ {s ′ -w k ′ + 1, ..., s ′ } ̸ = ∅ when k ̸ = k ′ . Based on the conflict graph H ′E S , we introduced the following inequalities. Proof. We use the same proof of proposition (2.4.17). Theorem On the other hand, if there exists a slot s ′ ∈ S such that s ′ ∈ {s-w k +1, .., s} for each v k,s ∈ C, then inequality (2.43) is dominated by the non-overlapping inequality (5.4). Hence, inequality (2.43) cannot be facet defining for P sa (G, K, S). Sufficiency. We denote inequality v k,s ∈C z k s ≤ 1 by αu + βz ≤ λ. Let µu + σz ≤ τ be a valid inequality that is facet defining F of P sa (G, K, S). Suppose that F H ′E S C ⊂ F = {(u, z) ∈ P sa (G, K, S) : µu + σz = τ }. 
In order to prove that the inequality Σ_{v_{k,s} ∈ C} z^k_s ≤ 1 is facet defining for P_sa(G, K, S), we show that there exist ρ ∈ R and γ ∈ R^{Σ_{k∈K}(w_k − 1)} such that (µ, σ) = ρ(α, β) + γM. Since µ_s = 0 for all slots s ∈ S, it follows that σ^k_s′ = 0. In a similar way, we can show that
σ^k_s = 0, for all k ∈ K and s ∈ {w_k, ..., s̄} with v_{k,s} ∉ C.
Let us prove that the σ^k_s, for all v_{k,s} ∈ C, are equivalent. Consider a demand k′ ∈ K and a slot s′ ∈ {w_k′, ..., s̄} with v_{k′,s′} ∈ C, and a solution S^145 = (U^145, S^145) given by
a) select a pair of demand k and slot s from the clique C (i.e., v_{k,s} ∈ C), and use slot s_k = s as last slot of demand k,
b) for each demand k_i ∈ K \ {k} with i ∈ {1, ..., |K|}, select the smallest slot index s_k_i in the set of slots
I^145_i = ⋂_{k_j ∈ D^145_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄}),
where D^145_i = {k_j ∈ {k_1, ..., k_{i−1}} ∪ {k} : E(p_k_i) ∩ E(p_k_j) ≠ ∅}.
Since σ^k_s = 0 for v_{k,s} ∉ C, and µ_s″ = 0 for all s″ ∈ S, it follows that σ^k′_s′ = σ^k_s. In a similar way, we can show that
σ^k_s = σ^k′_s′, for all pairs (v_{k,s}, v_{k′,s′}) ∈ C.
Consequently, we obtain that σ^k_s = ρ for all v_{k,s} ∈ C. Overall, and using the result (5.9), we obtain that µ_s = 0 for each slot s ∈ S, and
σ^k_s = γ_{k,s} if s ∈ {1, ..., w_k − 1}, ρ if v_{k,s} ∈ C, and 0 otherwise.
The same scheme applies to the odd holes of H′^E_S. Since µ_s = 0 for all slots s ∈ S, it follows that σ^k_s′ = 0, and in a similar way, σ^k_s′ = 0 for demand k and s′ ∈ {w_k, ..., s̄} with v_{k,s′} ∉ H. We re-do the same procedure for all demands k′ in K \ {k}, so that
σ^k′_s = 0, for all k′ ∈ K \ {k} and s ∈ {w_k′, ..., s̄} with v_{k′,s} ∉ H.
Since σ^k_s = 0 for v_{k,s} ∉ H, and µ_s″ = 0 for all s″ ∈ S, it follows that σ^k′_s′ = σ^k_s. Consequently, we obtain that σ^k_s = ρ for all v_{k,s} ∈ H. By (5.9), we know that σ^k′_s′ = γ_{k′,s′} for all k′ ∈ K and s′ ∈ {1, ..., w_k′ − 1}. We then conclude that µ_s = 0 for each slot s ∈ S, and
σ^k′_s = γ_{k′,s} if s ∈ {1, ..., w_k′ − 1}, ρ if v_{k′,s} ∈ H, and 0 otherwise,
so that (µ, σ) = ρ(α, β) + γM, as desired.

In the next section, we derive some symmetry-breaking inequalities for the SA sub-problem, in which symmetrical solutions may appear.

Symmetry-Breaking Inequalities

In this section, we address some symmetry issues that can appear when solving the SA problem.

Proposition 5.5.1. For all slots s ∈ {1, ..., s̄ − 1},
u_s − u_{s+1} ≥ 0, (5.14)
which means that slot s + 1 can be used only if slot s is used. A similar idea was proposed by Mendez-Diaz et al. [69][70] to break the symmetry of the vertex coloring problem. To strengthen inequality (5.14), we propose further inequalities; a similar idea was proposed by Friedman (Fundamental Domains for Integer Programs with Symmetries). However, the coefficient 2^{|K|−k} can cause numerical intractability on a computer (Bendotti et al., Sub-symmetry-breaking inequalities and application to the Unit Commitment Problem). For this, we introduce inequality (5.20). Inequality (5.20) is not valid for P_sa(G, K, S), since there exist feasible solutions of P_sa(G, K, S) that violate it, for example when a slot s ∈ S is used (i.e., u_s = 1) while no demand k ∈ K uses slot s (i.e., Σ_{k∈K} Σ_{s′=s}^{min(s+w_k−1, s̄)} z^k_s′ = 0). On the other hand, all the optimization algorithms developed to solve the MWC problem can be used to compute this bound based on the conflict graph H^r_w.
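To make this bound concrete, the sketch below computes it on a toy SA instance. This is an illustrative Python prototype, not the implementation used in the thesis; the helper name mwc_slot_bound and the dictionary-based instance encoding are ours, and we assume that H^r_w links two demands exactly when their routing paths share an edge, with node weights w_k.

import networkx as nx

def mwc_slot_bound(demands, paths):
    # demands: k -> w_k (integer slot-width); paths: k -> set of edges E(p_k).
    # Build the conflict graph: one weighted node per demand, an edge between
    # two demands whenever their routing paths share at least one fiber link.
    H = nx.Graph()
    for k, w in demands.items():
        H.add_node(k, weight=w)
    ks = list(demands)
    for i in range(len(ks)):
        for j in range(i + 1, len(ks)):
            if paths[ks[i]] & paths[ks[j]]:
                H.add_edge(ks[i], ks[j])
    # Demands of a clique pairwise conflict, so their spectrum intervals must
    # be disjoint: at least "total clique weight" slots are needed overall.
    clique, weight = nx.max_weight_clique(H, weight="weight")
    return clique, weight

# Toy instance: demands 1 and 2 share edge (b, c); demand 3 is edge-disjoint.
demands = {1: 3, 2: 2, 3: 4}
paths = {1: {("a", "b"), ("b", "c")}, 2: {("b", "c")}, 3: {("c", "d")}}
print(mwc_slot_bound(demands, paths))  # e.g. ([2, 1], 5): at least 5 slots

Any exact or heuristic MWC solver can replace nx.max_weight_clique here; only the bound value matters.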
Based on inequalities (5.19) and (5.20), we conclude that the minimum number of slots to be used by the set of demands K, while satisfying the SA constraints, equals the total weight of a maximum weighted clique in the conflict graph H^r_w.

Based on the theoretical results presented in this chapter, we devise Branch-and-Bound (B&B) and Branch-and-Cut (B&C) algorithms to solve the SA problem. Moreover, we study the effectiveness of these algorithms and assess the impact of the valid inequalities on the effectiveness of the Branch-and-Cut algorithm.

Branch-and-Cut Algorithm

Description

Here we describe the Branch-and-Cut algorithm. We consider a linear problem which can be seen as a strengthened formulation of the compact formulation; it is stated explicitly further below. Note that the separation procedures of the valid inequalities presented in this chapter are the same as those presented in Chapter 2 for the C-RSA. However, we still need to present the separation procedure for the interval-capacity-cover inequalities (5.10), as follows.

Given a fractional solution (ū, z̄), we first consider an interval of contiguous slots I = [s_i, s_j], identified by generating two slots s_i and s_j randomly in S with s_j ≥ s_i + 2 max_{k∈K} w_k. The separation problem associated with inequality (5.10) is NP-hard (Klabjan et al., The complexity of cover inequality separation), given that it consists in identifying a cover K̄* for the interval I = [s_i, s_j] such that
Σ_{k ∈ K̄*} Σ_{s′ = s_i + w_k − 1}^{s_j} z̄^k_s′ > |K̄*| − 1.
For this, we use a greedy algorithm introduced by Nemhauser and Sigismondi (A Strong Cutting Plane/Branch-and-Bound Algorithm for Node Packing), as follows. We first select a demand k ∈ K having the largest number of requested slots w_k with Σ_{s′ = s_i + w_k − 1}^{s_j} z̄^k_s′ > 0, and set K̄* = {k}. After that, we iteratively add to K̄* each demand k′ ∈ K \ K̄* with Σ_{s′ = s_i + w_k′ − 1}^{s_j} z̄^k′_s′ > 0 whose path shares an edge with the paths of all the demands already in K̄*, until a cover K̄* for the interval I is obtained, i.e., Σ_{k ∈ K̄*} w_k > |I|. We further derive a minimal cover from K̄* by deleting each demand k ∈ K̄* as long as Σ_{k′ ∈ K̄* \ {k}} w_k′ > |I| still holds. We then add the inequality (5.10) induced by the minimal cover K̄* for the interval I if it is violated, i.e., we add to the current LP the valid inequality
Σ_{k ∈ K̄*} Σ_{s′ = s_i + w_k − 1}^{s_j} z^k_s′ ≤ |K̄*| − 1.

Primal Heuristic

Let us now present a primal heuristic, useful to boost the performance of the Branch-and-Cut algorithm. It is based on a hybrid method combining a local search algorithm and a greedy algorithm. Given an optimal fractional solution (ū, z̄) at a certain node of the B&C tree, it consists in constructing an integral solution, feasible if possible, from this fractional solution.

We also consider the valid inequalities (5.19) introduced previously, which provide precomputed lower bounds for the SA problem. They can be separated as follows. For each demand k ∈ K, we use the greedy algorithm of Nemhauser and Sigismondi to generate a maximal clique C̄_k in H_sa containing demand k. We first set C̄_k = {k}. After that, we iteratively add to C̄_k each demand k′ ∈ K \ C̄_k that shares an edge with all the demands already in C̄_k.
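The greedy cover separation just described can be prototyped as follows. This is a minimal illustrative sketch in Python, not the thesis's implementation; the function name, the eps tolerance, and the dictionary encoding of z̄ are our assumptions. The maximal-clique routine used for inequalities (5.19) follows the very same template, adding demands that share an edge with every member of the current set.

def separate_interval_cover(demands, paths, zbar, s_i, s_j, eps=1e-6):
    # demands: k -> w_k; paths: k -> set of edges E(p_k);
    # zbar: (k, s) -> value of z̄^k_s in the current fractional solution;
    # [s_i, s_j]: the interval I of contiguous slots under consideration.
    size_I = s_j - s_i + 1

    def lp_mass(k):  # sum of z̄^k_s over last slots falling inside I
        return sum(zbar.get((k, s), 0.0)
                   for s in range(s_i + demands[k] - 1, s_j + 1))

    cover = []
    # Demands sorted by decreasing slot-width, restricted to positive LP mass.
    for k in sorted((k for k in demands if lp_mass(k) > 0),
                    key=lambda k: -demands[k]):
        if all(paths[k] & paths[k2] for k2 in cover):  # pairwise edge-sharing
            cover.append(k)
            if sum(demands[k2] for k2 in cover) > size_I:
                break  # cover found: total width exceeds |I|
    else:
        return None  # no cover could be built
    # Make the cover minimal: drop k while the rest still exceeds |I|.
    for k in sorted(cover, key=lambda k: demands[k]):
        if sum(demands[k2] for k2 in cover if k2 != k) > size_I:
            cover.remove(k)
    # Violated iff the LP mass over the cover exceeds |cover| - 1.
    if sum(lp_mass(k) for k in cover) > len(cover) - 1 + eps:
        return cover  # add: sum_{k in cover} sum_s z^k_s <= |cover| - 1
    return None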
We further add to the compact formulation (5.1)-(5.8) the inequality (5.19) induced by the clique C̄_k for demand k:
Σ_{s ∈ S} u_s ≥ Σ_{k′ ∈ C̄_k} w_k′.

Based on this, we provide a comparative study between the B&B algorithm (without additional valid inequalities) and the B&C algorithm (with additional valid inequalities). Our objective in this study is to show the efficiency of the inequalities we have introduced for solving the SA problem. We present computational results for several instances with a number of demands ranging in {10, 20, 30, 40, 50, 100, 150, 200, 250, 300} and s̄ up to 320 slots. We use two types of topologies: real ones, and realistic ones from SND-LIB, already described in Table 3.1. We first run our B&C algorithm with SCIP, in which our valid inequalities are used and all of SCIP's internal cuts are deactivated; we call this run Own B&C SCIP. Then, we run the B&C algorithm with SCIP while activating all the internal cuts we had deactivated in the first run; we call this run B&B SCIP. Tables 5.1 and 5.2 below report the results obtained for the two runs. For each run and each instance, we report the number of nodes in the B&C tree (Nbr Nd), the optimality gap (Gap), the number of violated inequalities added during the algorithm (Nbr Cuts), and the total CPU time (TT) in seconds. Finally, notice that each line of Tables 5.1 and 5.2 corresponds to the average results over 4 instances.

The results show that Own B&C SCIP is able to solve to optimality several instances that are not solved to optimality by B&B SCIP, even though SCIP uses its own cuts. Furthermore, our valid inequalities allow solving more instances to optimality than B&B SCIP. They also reduce the average number of nodes in the B&C tree for several instances; in some cases, instances are even solved at the root of the B&C tree, which never happens with the B&B. On the other hand, looking at the instances that are not solved to optimality (i.e., gap > 0.00), adding the valid inequalities decreases the average gap for several instances, and much more so for the large instances with |K| ≥ 150. There exist, however, a few rare instances, for example the triplet (German, 300, 320), for which adding valid inequalities does not improve the results of the B&B algorithm. Based on these results, we conclude that using the valid inequalities yields tighter LP bounds and improves the effectiveness of the B&B algorithm, so that the B&C algorithm is able to beat the B&B algorithm even though SCIP uses its own cuts, which are known to be very efficient for other optimization problems studied in the literature.

Concluding Remarks

In this chapter, we have studied the Spectrum Assignment sub-problem. We have introduced an integer linear programming compact formulation, and further investigated the facial structure of the associated polyhedron. Moreover, we have derived several valid inequalities, some of which are facet defining under sufficient conditions. Using the polyhedral results and the separation procedures, we have devised a Branch-and-Cut (B&C) algorithm to solve the problem, and presented experimental results. The results show the effectiveness of the valid inequalities, and the B&C algorithm turns out to be very efficient for solving large-scale instances of the problem. It would also be interesting to study the impact of the symmetry-breaking inequalities on the performance of the Branch-and-Cut algorithm.
The valid inequalities allowed improving the effectiveness of the B&C&P algorithm. On the other hand, we have presented a comparative study between the B&C, B&P, and B&C&P algorithms. The results have shown that the B&C&P algorithm is able to provide optimal solutions for several instances for which the B&C algorithm fails within the CPU time limit (5 hours). Moreover, both the B&C and B&P algorithms perform well. However, some instances remain difficult to solve with the B&C, B&P, and B&C&P algorithms alike. For this, several enhancements have been investigated and integrated into our algorithms. They are based on a warm-start algorithm using metaheuristics, and on a primal heuristic using a hybrid method combining a greedy algorithm and a local search algorithm, which has proven very useful to obtain good primal bounds. Moreover, we introduced symmetry-breaking inequalities that allow avoiding equivalent sub-problems in the enumeration trees of the B&C, B&P, and B&C&P algorithms.

Afterward, we have studied the Spectrum Assignment (SA) sub-problem, which arises when the routing is trivial or a routing path is pre-selected for each demand. First, we have presented a compact formulation for the SA problem and carried out an investigation of the associated polytope. Moreover, we have identified several valid inequalities for the polytope, some of which derive from those already proposed for the C-RSA. We have proved that they are facet defining under certain necessary and sufficient conditions. They were further incorporated within a Branch-and-Cut algorithm.

... administrative staff at LIMOS, for her kindness, her invaluable help, and her efficiency in organizing and resolving administrative matters. I extend these thanks to Mrs. Fatiha Bendali, teacher-researcher at the Université de Clermont Auvergne, and Mr. Jean Mailfert, associate professor (maître de conférences) at the Université de Clermont Auvergne, for their kindness, their attentiveness, their advice, and the considerable time they devoted to me. I thank them for all of this. These acknowledgments would be incomplete without a special mention of the POC group (Polyhedra and Combinatorial Optimization). A huge thank you for the interesting discussions we had around combinatorial optimization, and polyhedral approaches in particular. Its members contributed so generously to the success of the various events organized for doctoral students in France (JPOC, seminars, ROADEF, ...), and of the international conference ISCO, which offers the opportunity to exchange with internationally renowned researchers and to benefit from their scientific and professional experience. Every day, I am grateful to all my former professors of the ANDROIDE Master's program at Sorbonne Université (formerly UPMC, Université Pierre et Marie Curie, Paris), my Master's internship supervisors at EDF (Paris), and those of the Bachelor's degree in Operations Research at USTHB (Algiers), who through their words, their writings, their advice, and their comments helped me to excel throughout these years.

We therefore provide an in-depth theoretical analysis and design exact cutting-plane, branching, and column-generation algorithms to solve the C-RSA problem on networks of realistic size.
To do so, our contribution consists in introducing a cut-based integer linear program, in which the number of variables grows only polynomially with the size of the instance at hand. Furthermore, we study the polyhedral structure of the associated polyhedron and derive several classes of valid inequalities. We give some necessary and sufficient conditions for certain valid inequalities to define facets of the associated polyhedron. We then devise separation procedures for these valid inequalities. These inequalities are subsequently used within the linear relaxation in order to obtain tighter dual bounds. Based on this, we develop a Branch-and-Cut algorithm for the C-RSA problem. On the other hand, we have proposed a new path-based extended formulation, in which variables are associated with all the possible paths of each traffic demand, inducing an explosion of the number of variables, which grows exponentially with the size of the instance at hand. We also develop a column-generation algorithm to solve its linear relaxation. The valid inequalities of the cut formulation remain valid for the polyhedron associated with this extended formulation. We then develop an exact algorithm that combines a Branch-and-Cut algorithm with a column-generation algorithm to solve the C-RSA problem.

On the other hand, given the complexity of the problem, the C-RSA can be decomposed into two sub-problems, in such a way that the constrained routing precedes the spectrum assignment (CR+SA). We analyze the polyhedral structure of the spectrum assignment (SA) sub-problem when the routing is already established. First, we propose a compact formulation for the SA problem. We then study the structure of the associated polyhedron. We define some additional classes of valid inequalities and introduce some inequalities to properly handle symmetry, in order to remove the symmetrical solutions arising when solving the problem. We also give some necessary and sufficient conditions for certain valid inequalities to define facets of the polyhedron. Separation procedures are then proposed for some of these valid inequalities, which are subsequently used to obtain tighter bounds within the linear relaxation. We then devise a Branch-and-Cut algorithm for the SA sub-problem. At the end of each step, we examine in greater depth the efficiency and the behavior of our algorithms, and increase their effectiveness through several enhancements based on primal heuristics, as well as some branching techniques that may offer promising improvements over existing methods, considering realistic-size networks from SND-LIB and other real-size ones. We also conduct a comparative efficiency study between the different algorithms proposed in this thesis.

Figure 1: Historical Evolution of Optical Transport Networks.
Figure 1.1: Relation between P, NP, NP-complete and NP-hard problems.
Figure 1.2: conv(S) vs S.
Figure 1.3 illustrates the polyhedron P, a valid inequality, a face, a facet, and an extreme point.
Figure 1.3: Geometric interpretation of the polyhedron P, a valid inequality, a face, a facet, and an extreme point.

... 12.5 GHz (where fixed-grid networks use 50 GHz, the width of a wavelength), as recommended by ITU-T [2]. See for example Figure 1.4, which shows a fixed grid with 4 wavelengths of 50 GHz serving 4 demands: two of 10 Gb/s, one of 40 Gb/s, and one of 100 Gb/s. In the flex-grid, however, just 9 slots of 12.5 GHz suffice to serve the same demands.

Figure 1.4: FixedGrid Vs FlexGrid.

In our work, we focus on a variant of the RSA problem, called the Constrained-Routing and Spectrum Assignment problem (C-RSA).

Chapter 2: Cut Formulation and Polyhedra for the C-RSA Problem

2.1 The Constrained-Routing and Spectrum Assignment Problem

The Constrained-Routing and Spectrum Assignment problem can be stated as follows. We consider an optical spectrum of s̄ ∈ Z_+ available contiguous frequency slots, denoted by S = {1, ..., s̄}. An SFON topology can be represented by an undirected, loopless, and connected graph G = (V, E), where V is the set of vertices representing the optical nodes (data centers, users, stations, ...), and E is the set of links representing optical fibers. A length ℓ_e ∈ R_+ (in km), a cost c_e ∈ R_+, and the set of s̄ contiguous frequency slots are associated with each edge e. Let K be a set of non-splittable traffic demands. Each demand k ∈ K has an origin node o_k ∈ V, a destination node d_k ∈ V \ {o_k}, a slot-width w_k ∈ Z_+, and a transmission reach l̄_k ∈ R_+ (in km). The C-RSA consists in determining, for each demand k ∈ K, an (o_k, d_k)-path p_k in G (non-splittable demands) such that Σ_{e∈E(p_k)} ℓ_e ≤ l̄_k (transmission-reach constraint), and an interval of contiguous frequency slots S_k ⊆ S of width equal to w_k (continuity and contiguity constraints).

Fig. 2.1 provides a feasible solution for an instance of the C-RSA problem containing 4 demands routed in a graph G consisting of 7 nodes and 10 edges. Each edge e is specified by a triplet [ℓ_e, c_e, s̄] with s̄ = 9.

Figure 2.1: Set of established paths and spectra in graph G (Fig. 2.1(a)) for the set of demands {k_1, k_2, k_3, k_4} defined in Table 2.1(b).

For each demand k and node v, one can compute a shortest path between each of the pairs of nodes (o_k, v) and (v, d_k). If the length of the (o_k, d_k)-path formed by the concatenation of the shortest paths (o_k, v) and (v, d_k) is greater than l̄_k, then node v cannot belong to any path routing demand k, and we say that v is a forbidden node for demand k. Let V^k_0 denote the set of forbidden nodes for demand k ∈ K. Note that, using Dijkstra's algorithm, one can identify the forbidden nodes V^k_0 of each demand k ∈ K in polynomial time. Regarding the edges, for each demand k and edge e = ij, one can compute a shortest path between each of the pairs of nodes (o_k, i) and (j, d_k), and (o_k, j) and (i, d_k). If the length of the shortest (o_k, d_k)-path formed by e together with these shortest paths exceeds l̄_k, then e cannot be used to route demand k, and e is said to be a forbidden edge for demand k; we let E^k_0 denote the set of forbidden edges for demand k. A sketch of this preprocessing step is given below.

Throughout, we assume that
a) the graph G contains at least one feasible path between o_k and d_k, for all k ∈ K,
b) the number of slots s̄ is largely sufficient to route all the demands,
c) for each demand k ∈ K and e ∈ E \ (E^k_0 ∪ E^k_1), there exists at least one feasible route Ē_k between o_k and d_k such that Σ_{e′∈Ē_k} ℓ_e′ + ℓ_e ≤ l̄_k, and for each e′ ∈ Ē_k, the edges (e, e′) are compatible edges for demand k.

Proposition 2.4.1. Consider an edge e ∈ E with K_e ≠ ∅, and let s be a slot in S.
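As announced above, the Dijkstra-based preprocessing of forbidden nodes and edges can be sketched as follows. This is an illustrative Python prototype using networkx, not the thesis's code; the function name and the 'length' edge attribute are our assumptions.

import networkx as nx

def forbidden_nodes_and_edges(G, o_k, d_k, reach):
    # G: weighted graph whose edge attribute 'length' stores l_e;
    # reach: the transmission reach of demand k.
    # A node v is forbidden when even the shortest o_k-v and v-d_k paths
    # concatenated exceed the reach; an edge e = ij is forbidden when both
    # detours o_k - i - j - d_k and o_k - j - i - d_k exceed it.
    inf = float("inf")
    d_from_o = nx.single_source_dijkstra_path_length(G, o_k, weight="length")
    d_to_d = nx.single_source_dijkstra_path_length(G, d_k, weight="length")
    V0 = {v for v in G.nodes
          if d_from_o.get(v, inf) + d_to_d.get(v, inf) > reach}
    E0 = set()
    for i, j, data in G.edges(data=True):
        l_e = data["length"]
        via_ij = d_from_o.get(i, inf) + l_e + d_to_d.get(j, inf)
        via_ji = d_from_o.get(j, inf) + l_e + d_to_d.get(i, inf)
        if min(via_ij, via_ji) > reach:
            E0.add((i, j))
    return V0, E0

Two runs of Dijkstra per demand suffice, which is consistent with the polynomial-time claim made above.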
Proposition 2.4.6. Consider an edge e ∈ E, and let I = [s_i, s_j] be an interval of contiguous slots in [1, s̄]. Let K′ ⊆ K_e be a minimal cover for the interval I = [s_i, s_j].

Definition 2.4.4. Consider an edge e ∈ E, and let I = [s_i, s_j] be an interval of contiguous slots in [1, s̄] with s_i ≤ s_j − 1. Consider the conflict graph H^e_I defined as follows. For each demand k ∈ K with w_k ≤ |I| and e ∉ E^k_0, consider a node v_k in H^e_I. Two nodes v_k and v_k′ are linked by an edge in H^e_I if and only if w_k + w_k′ ≥ |I| + 1.

(We take into account the possibility of using edge e′ in the selected path E^53_k to route demand k in solution S^53.) We let S^53_k_i = {s_k_i} be the set of last slots assigned to each demand k_i with i ∈ {1, ..., |K|}. S^53 is feasible for the problem, and the corresponding incidence vector (x^S53, z^S53) belongs to F^{He_I}_C. Then we derive a solution S^54 = (E^54, S^54) obtained from S^53 by adding edge e′ ∈ E \ (E^k_0 ∪ E^k_1) to the routing of demand k, which means that E^54_k = E^53_k ∪ {e′}. The last slots assigned to the demands K, and the paths assigned to the demands K \ {k} in S^53, remain the same in solution S^54, i.e., S^54_k = S^53_k for each k ∈ K, and E^54_k′ = E^53_k′ for each k′ ∈ K \ {k}. S^54 is clearly feasible for the problem, and its incidence vector (x^S54, z^S54) belongs to F^{He_I}_C. Hence, the solutions S^53 and S^54 satisfy the equation µx + σz = τ, and it follows that µ^k_e′ = 0.

Similarly, let S′^53 be a feasible solution with last slots S′^53_k_i = {s_k_i} for each demand k_i with i ∈ {1, ..., |K|}; its incidence vector (x^S′53, z^S′53) belongs to F^{He_I}_C. Then we derive the solution S^55 from S′^53 by adding slot s′ as a last slot of demand k, i.e., S^55_k = S′^53_k ∪ {s′}. The solution S^55 is feasible for the problem, and its incidence vector (x^S55, z^S55) belongs to F^{He_I}_C. Hence, the solutions S′^53 and S^55 satisfy the equation µx + σz = τ, so that σ^k_s′ = 0.

Theorem 2.4.7. Consider an edge e ∈ E, and let I = [s_i, s_j] be an interval of contiguous slots. Let C be a clique in the conflict graph H^e_I with |C| ≥ 3, Σ_{v_k∈C} w_k ≤ s̄ − Σ_{k′∈K_e\C} w_k′, and |{s_i + w_k − 1, ..., s_j}| ≥ w_k for each demand k with v_k ∈ C ∪ C_e. Let C_e ⊆ K_e \ C be a clique in the conflict graph H^e_I such that w_k + w_k′ ≥ |I| + 1 for each v_k ∈ C and v_k′ ∈ C_e. Then, inequality (2.37) is facet defining for P(G, K, S) if and only if
a) there does not exist a demand k″ ∈ K_e \ C_e with w_k + w_k″ ≥ |I| + 1 for each v_k ∈ C, and w_k′ + w_k″ ≥ |I| + 1 for each v_k′ ∈ C_e,
b) and there does not exist an interval I′ of contiguous slots with I ⊂ I′ such that C ∪ C_e also defines a clique in the associated conflict graph H^e_I′.

Otherwise, there would exist a clique C in the conflict graph H^E_I of cardinality |C| ≥ 3 with k, k′ ∈ C; as a result, inequality (2.38) would be dominated by the inequality (2.39) induced by the clique C, and hence inequality (2.38) would not be facet defining for P(G, K, S). Likewise, if there exists an interval of contiguous slots I′ in [1, s̄] with I ⊂ I′, w_k + w_k′ ≥ |I′|, w_k ≤ |I′|, and w_k′ ≤ |I′|, then the inequality (2.38) induced by the two demands k, k′ for the interval I is dominated by the inequality (2.38) induced by the same demands for the interval I′.

Consider now a demand k̃′ with v_k̃′ ∈ C and a slot s̃′ ∈ {s_i + w_k̃′ − 1, ..., s_j}.
Let S^70 = (E^70, S^70) be the solution given by
a) for each demand k_i ∈ K with i ∈ {1, ..., |K|}, we let E^70_k_i be the set of edges involved in a shortest path between o_k_i and d_k_i,
b) we select a subset of demands H̃ from H with |H̃| = (|H| − 1)/2,
c) for each demand k_i ∈ H̃ with i ∈ {1, ..., |K|}, we select the smallest slot index s_k_i in the set of slots
I^70_i = [⋂_{k_j ∈ L^70_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄})] ∩ {s_i + w_k_i − 1, ..., s_j},
where L^70_i = {k_j ∈ {k_1, ..., k_{i−1}} ∩ H̃ : E^70_k_i ∩ E^70_k_j ≠ ∅},
d) for each demand k_i ∈ H \ H̃ with i ∈ {1, ..., |K|}, we select the smallest slot index s_k_i in the set of slots
I^70_i = [⋂_{k_j ∈ D^70_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄})] \ {s_i + w_k_i − 1, ..., s_j}
(we take into account the possibility of adding slot s̃′ as a last slot in the selected last slots S^70_k̃′ of demand k̃′ in solution S^70).
We let S^70_k_i = {s_k_i} be the set of last slots assigned to each demand k_i with i ∈ {1, ..., |K|}. S^70 is feasible for the problem; hence, the corresponding incidence vector (x^S70, z^S70) belongs to F^{HE_I}_{H,C}. Then consider the solution S^71 obtained from S^70 as follows:
a) remove every last slot s̃ totally covered by the interval I that was selected by a demand k_i ∈ H̃ in solution S^70 (i.e., s̃ ∈ S^70_k_i and s̃ ∈ {s_i + w_k_i − 1, ..., s_j}),
b) add slot s̃′ to the set of last slots S^70_k̃′ assigned to demand k̃′ in solution S^70, i.e., S^71_k̃′ = S^70_k̃′ ∪ {s̃′},
c) without changing the set of last slots assigned to the demands K \ H, i.e., S^71_k = S^70_k for each demand k ∈ K \ H.
The solution S^71 is feasible for the problem, and the corresponding incidence vector (x^S71, z^S71) belongs to F^{HE_I}_{H,C}. Hence, the solutions S^70 and S^71 satisfy the equation µx + σz = τ, and we have
µx^S70 + σz^S70 = µx^S71 + σz^S71 = µx^S70 + σz^S70 + σ^k̃′_s̃′ minus the σ-terms of the removed last slots.

Proposition 2.4.16. Consider an edge e ∈ E. Let C be a clique in the conflict graph H^e_S with |C| ≥ 3 and Σ_{k∈C} w_k ≤ s̄ − Σ_{k′∈K_e\C} w_k′. Then, the clique inequality (2.42) associated with C is valid for P(G, K, S). It is facet defining under conditions including
c) 2w_k ≥ |I| + 1 and w_k ≤ |I| for each v_k ∈ C.

Proof. Necessity. If C is not a maximal clique in the conflict graph H^E_S, then inequality (2.43) can be dominated by the inequality associated with a clique C′ such that C ⊂ C′, without changing its right-hand side. Moreover, suppose there exists an interval of contiguous slots I satisfying the conditions above.

(We take into account the possibility of adding slot s′ to the selected set of last slots S^80_k′ of demand k′ in solution S^80.) We let S^80_k_i = {s_k_i} be the set of last slots assigned to each demand k_i with i ∈ {1, ..., |K|}. S^80 is feasible, and its incidence vector (x^S80, z^S80) belongs to F^{HE_S}_H. After that, we derive the solution S′^80 from S^80 by
a) adding slot s′ as a last slot of demand k′, i.e., S′^80_k′ = S^80_k′ ∪ {s′},
b) and modifying the last slots assigned to demand k by adding a new last slot s̃ and removing the last slot s ∈ S^80_k.
Then we derive the solution S^83 obtained from S^82 by
a) adding slot s′ as a last slot of demand k′, i.e., S^83_k′ = S^82_k′ ∪ {s′} with v_{k′,s′} ∈ C,
b) and modifying the last slots assigned to each demand k ∈ {k̃ ∈ K : v_{k̃,s} ∈ H̃^82} by adding a new last slot s̃_k and removing the last slot s.

Denote the inequality Σ_{v_{k,e}∈H} x^k_e ≤ (|H| − 1)/2 by αx + βz ≤ λ. Let µx + σz ≤ τ be a facet-defining inequality for P(G, K, S), and let F = {(x, z) ∈ P(G, K, S) : µx + σz = τ}.

Proposition 2.4.24. Consider a demand k ∈ K.
Let p be a minimal infeasible sub-path for demand k in G. Then, the inequality
Σ_{e∈E(p)} x^k_e ≤ |E(p)| − 1 (2.52)
is valid for P(G, K, S).

Theorem 2.4.21. Consider an edge e in E, and let C be a minimal cover in K for edge e. Then, inequality (2.54) is facet defining for the polytope P(G, K, S, C, e) ⊂ P(G, K, S).

Proposition 2.6.1. Consider a demand k ∈ K. Then, the following inequality is valid for P(G, K, S).

The separation of the cut inequalities (2.2) consists in computing, for each demand k, a maximum flow / minimum cut in G where each edge e carries the positive weight x̄^k_e. For this, we use the LEMON graph C++ library [59], which calls the algorithm of Goldberg and Tarjan for the minimum-cut computation. Based on this, we conclude that the separation of the cut inequalities (2.2) can be done in O(|V|^2 · |E| · |K|) time in the worst case. A sketch of this separation routine is given at the end of this section.

Each added node must be linked with all the nodes already assigned to C* and N*. At the end, we add the inequality (2.46) induced by the clique C* ∪ N* to the current LP.

The primal heuristic then assigns to each demand k′ in the list L a path p with x̄^k_e ≠ 0 for each e ∈ E(p) and a last slot with z̄^k′_s ≠ 0, while respecting the non-overlapping constraint with the demands that precede demand k′ in the list L. The complexity of this algorithm is bounded by O(|K| · |S| · |P| · log(|K|)), where |P| = max_{k∈K} R_k.

The cover-based inequalities (2.54) and (2.32), and the clique-based inequalities (2.43), (2.42), and (2.36), are generated along the B&C algorithm. However, the number of clique-based inequalities (2.43) generated is much smaller than for the other families. Based on these results, we conclude that the valid inequalities are very useful to obtain tighter LP bounds using Gurobi and SCIP. On the other hand, the clique-based inequalities (2.46), the cover-based inequalities (2.49), and the different families of odd-hole inequalities turn out to be inefficient for the instances tested: the number of violated inequalities generated is very small, and even equal to 0 for several instances. They remain, however, very interesting from a theoretical point of view. Based on this, the separation of our valid inequalities is performed along the B&C algorithm (using CPLEX, Gurobi, and SCIP) in the following order:
a) edge-capacity-cover inequalities (2.54),
b) edge-interval-capacity-cover inequalities (2.32),
c) edge-slot-assignment-clique inequalities (2.42),
d) edge-interval-clique inequalities (2.36),
e) slot-assignment-clique inequalities (2.43).

We consider an optical spectrum of s̄ available contiguous frequency slots, denoted by S = {1, ..., s̄}. A spectrally flexible optical network can be represented by an undirected, loopless, and connected graph G = (V, E), where V is the set of vertices representing the optical nodes (data centers, users, stations, ...), and E is the set of links representing the optical fibers. A length ℓ_e ∈ R_+ (in km), a cost c_e ∈ R_+, and the set of s̄ contiguous frequency slots are associated with each edge e. Let K be a multiset of demands such that each demand k is specified by an origin node o_k ∈ V, a destination node d_k ∈ V \ {o_k}, a slot-width w_k ∈ Z_+, and a routing path p_k from its origin o_k to its destination d_k through G. The SA problem consists in determining, for each demand k ∈ K, an interval of contiguous frequency slots S_k ⊂ S of width equal to w_k (continuity and contiguity constraints) such that S_k ∩ S_k′ = ∅ for each pair of demands k, k′ ∈ K (k ≠ k′) whose paths share an edge, i.e., E(p_k) ∩ E(p_k′) ≠ ∅, while optimizing the number of slots allocated in S.
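As announced, the min-cut separation of the cut inequalities (2.2) can be sketched as follows. The thesis relies on the Goldberg-Tarjan implementation of the LEMON C++ library; the sketch below uses networkx instead, and the function name, the (k, edge) keying of x̄, and the eps tolerance are our assumptions.

import networkx as nx

def separate_cut_inequalities(G, demands, xbar, eps=1e-6):
    # For every demand k, weight each edge e with the LP value x̄^k_e and
    # compute a minimum o_k-d_k cut; a cut of capacity < 1 yields a violated
    # inequality: sum_{e in delta(X)} x^k_e >= 1.
    # demands: k -> (o_k, d_k); xbar: (k, (u, v)) -> x̄^k_e, edges normalized.
    violated = []
    for k, (o_k, d_k) in demands.items():
        H = nx.DiGraph()
        for u, v in G.edges:
            c = xbar.get((k, (u, v)), 0.0)
            H.add_edge(u, v, capacity=c)  # undirected edge modeled as
            H.add_edge(v, u, capacity=c)  # two opposite arcs
        cut_value, (X, _) = nx.minimum_cut(H, o_k, d_k)
        if cut_value < 1 - eps:
            cut_edges = [(u, v) for u, v in G.edges if (u in X) != (v in X)]
            violated.append((k, cut_edges))
    return violated

Each returned pair (k, cut_edges) gives a violated cut inequality to add to the current LP.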
(Slot-assignment constraint taking into account the possibility of adding slot s to the set of used slots U^105.) We let S^105_k_i = {s_k_i} be the set of last slots assigned to each demand k_i with i ∈ {1, ..., |K|}, and we let U^105 be the set of slots used in S, such that for each demand k, last slot s_k ∈ S^105_k, and s′ ∈ {s_k − w_k + 1, ..., s_k}, we have s′ ∈ U^105. S^105 is feasible for the SA problem; hence, the corresponding incidence vector (u^S105, z^S105) belongs to P_sa(G, K, S). Then we derive a solution S^106 = (U^106, S^106) obtained from S^105 by adding slot s as a used slot in U^106, without modifying the last slots assigned to the demands K in S^105, which remain the same in solution S^106, i.e., S^105_k = S^106_k.

Theorem 5.3.1. The dimension of P_sa(G, K, S) is given by
dim(P_sa(G, K, S)) = |K| · |S| + |S| − r′ = |K| · |S| + |S| − Σ_{k∈K}(w_k − 1).
For instance, for two demands with w_1 = 2 and w_2 = 3 and s̄ = 5 slots, the dimension equals 2·5 + 5 − (1 + 2) = 12.

First, let us show that µ_s = 0 for all s ∈ S. Consider a slot s ∈ S, and a solution S^109 = (U^109, S^109) given by
a) we select the smallest slot index s_k in {w_k, ..., s̄} \ [{s, ..., s + w_k − 1} ∪ {s}] as last slot for demand k (slot-assignment constraint taking into account the possibility of adding slot s to the set of used slots U^109),
b) for each demand k_i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s_k_i in the set of slots
I^109_i = [⋂_{k_j∈D^109_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄})] \ {s, ..., s + w_k_i − 1}
(slot-assignment constraint taking into account the possibility of adding slot s to the set of used slots U^109). Let S^109_k_i = {s_k_i} be the set of last slots assigned to each demand k_i with i ∈ {1, ..., |K|},
c) we let U^109 be the set of slots used in S, such that for each demand k′ ∈ K, last slot s_k′ ∈ S^109_k′, and s′ ∈ {s_k′ − w_k′ + 1, ..., s_k′}, we have s′ ∈ U^109.
S^109 is clearly feasible for the SA problem; hence, the corresponding incidence vector (u^S109, z^S109) belongs to F^k_s. Then consider the solution S′^109 = (U′^109, S′^109) obtained from S^109 by adding slot s as a used slot in U′^109, without modifying the last slots assigned to the demands K in S^109, which remain the same in solution S′^109, i.e., S^109_k = S′^109_k.

First, let us show that µ_s′ = 0 for all s′ ∈ S \ {s}. Consider a slot s ∈ S \ {s̄}, and a solution S^113 = (U^113, S^113) given by
a) for one demand k′ ∈ K, we select the smallest slot index s_k′ in {w_k′, ..., s̄} \ {s, ..., s + w_k′ − 1} as last slot,
b) for each demand k_i ∈ K \ {k′} with i ∈ {1, ..., |K|}, we select the smallest slot index s_k_i in the set of slots
I^113_i = [⋂_{k_j∈D^113_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄})] \ {s, ..., s + w_k_i − 1}
(we take into account the possibility of adding slot s′ to the set of last slots S^114_k assigned to demand k in solution S^114). Let S^114_k_i = {s_k_i} be the set of last slots assigned to each demand k_i with i ∈ {1, ..., |K|},
d) we let U^114 be the set of slots used in S.
S^114 is clearly feasible for the SA problem, and the corresponding incidence vector (u^S114, z^S114) belongs to F_s. Then consider the solution S^115 = (U^115, S^115) obtained from S^114 by adding slot s′ as a last slot of demand k, without modifying the last slots assigned to the demands K \ {k} in S^114, which remain the same in solution S^115.

First, let us show that µ_s = 0 for all s ∈ S.
Consider a slot s ∈ S, and a solution S^116 = (U^116, S^116) given by
a) we select the smallest slot index s_k in {w_k, ..., s̄} \ {s, ..., s + w_k − 1} as last slot for demand k (slot-assignment constraint taking into account the possibility of adding slot s to the set of used slots U^116),
b) for each demand k_i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s_k_i in the set of slots
I^116_i = [⋂_{k_j∈D^116_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄})] \ {s, ..., s + w_k_i − 1}
(slot-assignment constraint taking into account the possibility of adding slot s to the set of used slots U^116).

Similarly, consider a solution S^120 = (U^120, S^120) given by
a) for one demand k′ ∈ K̄, we select an appropriate last slot (slot-assignment constraint taking into account the possibility of adding slot s to the set of used slots U^120),
b) for each demand k_i ∈ K \ {k′} with i ∈ {1, ..., |K|}, we select the smallest slot index s_k_i in the set of slots
I^120_i = [⋂_{k_j∈D^120_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄}) ∩ {s_i + w_k_i − 1, ..., s_j}] \ {s, ..., s + w_k_i − 1},
where D^120_i = {k_j ∈ {k_1, ..., k_{i−1}} ∩ K̄ : E(p_k_i) ∩ E(p_k_j) ≠ ∅},
c) for each demand k_i ∈ K \ K̄ with i ∈ {1, ..., |K|}, we select the smallest slot index s_k_i in the set of slots
I^120_i = [⋂_{k_j∈D^120_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄})] \ {s, ..., s + w_k_i − 1}
(slot-assignment constraint taking into account the possibility of adding slot s to the set of used slots U^120). We let S^120_k_i = {s_k_i} be the set of last slots assigned to demand k_i with i ∈ {1, ..., |K|},
d) we let U^120 be the set of slots used in S, such that for each demand k ∈ K, last slot s_k ∈ S^120_k, and s′ ∈ {s_k − w_k + 1, ..., s_k}, we have s′ ∈ U^120.
S^120 is clearly feasible for the SA problem; hence, the corresponding incidence vector (u^S120, z^S120) belongs to F^I_K̄. Then consider the solution S^121 = (U^121, S^121) obtained from S^120 by adding slot s as a used slot in U^121, without modifying the last slots assigned to the demands K in S^120, which remain the same in solution S^121, i.e., S^120_k = S^121_k for each demand k ∈ K. S^121 is feasible for the SA problem; hence, the corresponding incidence vector (u^S121, z^S121) belongs to F^I_K̄. The solutions S^120 and S^121 thus satisfy the equation µu + σz = τ, and we obtain
µu^S120 + σz^S120 = µu^S121 + σz^S121 = µu^S120 + σz^S120 + µ_s.

Theorem 5.4.3. Let I = [s_i, s_j] be an interval of contiguous slots in [1, s̄] with s_i ≤ s_j − 1, and let C be a clique in the conflict graph H′^E_I with |C| ≥ 3 and 2w_k > |I| for each v_k ∈ C. Then, inequality (2.39) is facet defining for P_sa(G, K, S) if and only if
a) C is a maximal clique in the conflict graph H′^E_I,
b) and there does not exist an interval of contiguous slots I′ with I ⊂ I′ such that C also defines a clique in the conflict graph H′^E_I′.

Proposition 5.4.5. Let C be a clique in the conflict graph H′^E_S with |C| ≥ 3. Then, inequality (2.43) is valid for Q_sa(G, K, S). Moreover, it is valid for P_sa(G, K, S) if {s − w_k + 1, ..., s} ∩ {s′ − w_k′ + 1, ..., s′} ≠ ∅ for each pair (v_{k,s}, v_{k′,s′}) ∈ C.

The corresponding facet conditions require that
2. there does not exist an interval of contiguous slots I = [s_i, s_j] ⊂ [1, s̄] such that {s − w_k + 1, ..., s} ⊂ I for each v_{k,s} ∈ C, with w_k + w_k′ ≥ |I| + 1 for each pair (v_k, v_k′) ∈ C, and 2w_k ≥ |I| + 1 and w_k ≤ |I| for each v_k ∈ C,
3. and there does not exist a slot s′ ∈ S such that s′ ∈ {s − w_k + 1, ..., s} for each v_{k,s} ∈ C.

Proof. Necessity. If C is not a maximal clique in the conflict graph H′^E_S, then inequality (2.43) can be dominated by another inequality associated with a clique C′ such that C ⊂ C′, without changing its right-hand side. Moreover, if there exists an interval of contiguous slots I = [s_i, s_j] ⊂ [1, s̄] satisfying condition 2 of the theorem, then inequality (2.43) is dominated by inequality (2.39).
As a result, inequality (2.43) cannot be facet defining for P_sa(G, K, S).

Let F^{H′E_S}_C = {(u, z) ∈ P_sa(G, K, S) : Σ_{v_{k,s}∈C} z^k_s = 1}. First, we show that µ_s = 0 for all s ∈ S. Consider a slot s ∈ S, and a solution S^141 = (U^141, S^141) given by
a) select one pair of demand k′ and slot s′ from the clique C (i.e., v_{k′,s′} ∈ C), and use slot s_k′ = s′ as last slot with s ∉ {s′ − w_k′ + 1, ..., s′} (slot-assignment constraint taking into account the possibility of adding slot s to the set of used slots U^141),
b) for each demand k_i ∈ K \ {k′} with i ∈ {1, ..., |K|}, select the smallest slot index s_k_i in the set of slots
I^141_i = [⋂_{k_j∈D^141_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄})] \ {s, ..., s + w_k_i − 1},
where D^141_i = {k_j ∈ {k_1, ..., k_{i−1}} ∪ {k′} : E(p_k_i) ∩ E(p_k_j) ≠ ∅}. As a result,
• {s_k_i − w_k_i + 1, ..., s_k_i} ∩ {s_k_j − w_k_j + 1, ..., s_k_j} = ∅ for each k_j ∈ D^141_i,
• and s ∉ {s_k_i − w_k_i + 1, ..., s_k_i} (slot-assignment constraint taking into account the possibility of adding slot s to the set of used slots U^141).
We let S^141_k_i = {s_k_i} be the set of last slots assigned to demand k_i with i ∈ {1, ..., |K|},
c) we let U^141 be the set of slots used in S, such that for each demand k, last slot s_k ∈ S^141_k, and s′ ∈ {s_k − w_k + 1, ..., s_k}, we have s′ ∈ U^141.
S^141 is clearly feasible for the SA problem; hence, the corresponding incidence vector (u^S141, z^S141) belongs to F^{H′E_S}_C. Then we derive a solution S^142 = (U^142, S^142) from S^141 by adding slot s as a used slot in U^142, without modifying the last slots assigned to the demands K in S^141, which remain the same in solution S^142, i.e., S^141_k = S^142_k for each demand k ∈ K. S^142 is feasible for the SA problem; hence, the corresponding incidence vector (u^S142, z^S142) belongs to F^{H′E_S}_C. The solutions S^141 and S^142 thus satisfy the equation µu + σz = τ, and we obtain
µu^S141 + σz^S141 = µu^S142 + σz^S142 = µu^S141 + σz^S141 + µ_s.
Hence, µ_s = 0, and in a similar way, µ_s = 0 for all slots s ∈ S.

Let us show that σ^k_s = 0 for all k ∈ K and s ∈ {w_k, ..., s̄} with v_{k,s} ∉ C. Consider a demand k in K and a slot s′ in {w_k, ..., s̄} with v_{k,s′} ∉ C. Let S^143 = (U^143, S^143) be the solution given by
a) select one pair of demand k′ and slot s̆ from the clique C (i.e., v_{k′,s̆} ∈ C), and use slot s_k′ = s̆ as last slot, with {s̆ − w_k′ + 1, ..., s̆} ∩ {s′ − w_k + 1, ..., s′} = ∅ if E(p_k) ∩ E(p_k′) ≠ ∅,
b) for each demand k_i ∈ K \ {k′} with i ∈ {1, ..., |K|}, select the smallest slot index s_k_i in the set of slots
I^143_i = [⋂_{k_j∈D^143_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄})] ∩ [{w_k_i, ..., s′ − w_k} ∪ {s′ + w_k_i, ..., s̄}] if E(p_k_i) ∩ E(p_k) ≠ ∅, or
I^143_i = ⋂_{k_j∈D^143_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄}) otherwise,
where D^143_i = {k_j ∈ {k_1, ..., k_{i−1}} ∪ {k′} : E(p_k_i) ∩ E(p_k_j) ≠ ∅}. This ensures that
• {s_k_i − w_k_i + 1, ..., s_k_i} ∩ {s_k_j − w_k_j + 1, ..., s_k_j} = ∅ for each k_j ∈ D^143_i,
• and {s_k_i − w_k_i + 1, ..., s_k_i} ∩ {s′ − w_k + 1, ..., s′} = ∅ if E(p_k_i) ∩ E(p_k) ≠ ∅ (we take into account the possibility of adding slot s′ as a last slot in the selected last slots S^143_k of demand k in solution S^143).
We let S^143_k_i = {s_k_i} be the set of last slots assigned to demand k_i with i ∈ {1, ..., |K|},
c) a set of slots U^143 is then used in S, such that for each demand k′ ∈ K, last slot s″ ∈ S^143_k′, and s″ ∈ {s_k′ − w_k′ + 1, ..., s_k′}, we have s″ ∈ U^143.
S^143 is clearly feasible for the problem; hence, the corresponding incidence vector (u^S143, z^S143) belongs to F^{H′E_S}_C. Then we derive a solution S^144 = (U^144, S^144) from S^143 by adding slot s′ as a last slot of demand k, without modifying the last slots assigned to the demands K \ {k} in S^143, i.e., S^143_k′ = S^144_k′ for each demand k′ ∈ K \ {k}, and S^144_k = S^143_k ∪ {s′} for demand k. The solution S^144 is feasible for the SA problem, and the corresponding incidence vector (u^S144, z^S144) belongs to F^{H′E_S}_C. The solutions S^143 and S^144 thus satisfy the equation µu + σz = τ, and we obtain
µu^S143 + σz^S143 = µu^S144 + σz^S144 = µu^S143 + σz^S143 + σ^k_s′ + Σ_{s̃∈U144\U143} µ_s̃ − Σ_{s̃∈U143\U144} µ_s̃.

Let F^{H′E_S}_H = {(u, z) ∈ P_sa(G, K, S) : Σ_{v_{k,s}∈H} z^k_s = (|H| − 1)/2}. Denote the inequality Σ_{v_{k,s}∈H} z^k_s ≤ (|H| − 1)/2 by αu + βz ≤ λ. Let µu + σz ≤ τ be a valid inequality defining a facet F of P_sa(G, K, S), and suppose that F^{H′E_S}_H ⊆ F = {(u, z) ∈ P_sa(G, K, S) : µu + σz = τ}. To prove that F^{H′E_S}_H is a facet of P_sa(G, K, S), we need to show that there exist ρ ∈ R and γ ∈ R^{Σ_{k∈K}(w_k−1)} such that (µ, σ) = ρ(α, β) + γM.

We first show that µ_s = 0 for all s ∈ S. Consider a slot s ∈ S, and a solution S^148 = (U^148, S^148) such that
a) select a subset of nodes H̃^148 from H with |H̃^148| = (|H| − 1)/2, such that each pair of nodes (v_{k,s̃}, v_{k′,s̃′}) ∈ H̃^148 is not linked in the conflict graph H′^E_S, and s ∉ {s̃ − w_k + 1, ..., s̃} for each v_{k,s̃} ∈ H̃^148,
b) for each pair of demand k and slot s̃ with v_{k,s̃} ∈ H̃^148, we select slot s_k = s̃ as last slot of demand k,
c) for each demand k_i ∈ K \ H̃^148 with i ∈ {1, ..., |K|}, we select the smallest slot index s_k_i in the set of slots
I^148_i = [⋂_{k_j∈D^148_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄})] \ {s, ..., s + w_k_i − 1},
where D^148_i = {k_j ∈ {k_1, ..., k_{i−1}} ∪ H̃^148 : E(p_k_i) ∩ E(p_k_j) ≠ ∅}. This guarantees that
• {s_k_i − w_k_i + 1, ..., s_k_i} ∩ {s_k_j − w_k_j + 1, ..., s_k_j} = ∅ for each k_j ∈ D^148_i,
• and s ∉ {s_k_i − w_k_i + 1, ..., s_k_i} (slot-assignment constraint taking into account the possibility of adding slot s to the set of used slots U^148).
We let S^148_k_i = {s_k_i} be the set of last slots assigned to demand k_i with i ∈ {1, ..., |K|},
d) we let U^148 be the set of slots used in S, such that for each demand k, last slot s_k ∈ S^148_k, and s′ ∈ {s_k − w_k + 1, ..., s_k}, we have s′ ∈ U^148.
S^148 is clearly feasible for the SA problem; hence, the corresponding incidence vector (u^S148, z^S148) belongs to F^{H′E_S}_H. Then consider the solution S^149 = (U^149, S^149) obtained from S^148 by adding slot s as a used slot in U^149, without modifying the last slots assigned to the demands K in S^148, which remain the same in solution S^149, i.e., S^148_k = S^149_k for each demand k ∈ K. S^149 is clearly feasible for the SA problem; hence, the corresponding incidence vector (u^S149, z^S149) belongs to F^{H′E_S}_H. The solutions S^148 and S^149 thus satisfy the equation µu + σz = τ, and we obtain
µu^S148 + σz^S148 = µu^S149 + σz^S149 = µu^S148 + σz^S148 + µ_s.
It follows that µ_s = 0, and in a similar way, µ_s = 0 for all slots s ∈ S.

Let us show that σ^k_s = 0 for all k ∈ K and s ∈ {w_k, ..., s̄} with v_{k,s} ∉ H. Consider a demand k in K and a slot s′ in {w_k, ..., s̄} with v_{k,s′} ∉ H.
Let S^150 = (U^150, S^150) be the solution given by
a) select a subset of nodes H̃^150 from H with |H̃^150| = (|H| − 1)/2, such that each pair of nodes (v_{k̃,s̃}, v_{k̃′,s̃′}) ∈ H̃^150 is not linked in the conflict graph H′^E_S,
b) for each pair of demand k̃ and slot s̃ with v_{k̃,s̃} ∈ H̃^150, we select slot s_k̃ = s̃ as last slot of demand k̃,
c) for each demand k_i ∈ K \ H̃^150 with i ∈ {1, ..., |K|}, we select the smallest slot index s_k_i in the set of slots
I^150_i = [⋂_{k_j∈D^150_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄})] ∩ [{w_k_i, ..., s′ − w_k} ∪ {s′ + w_k_i, ..., s̄}] if E(p_k_i) ∩ E(p_k) ≠ ∅, or
I^150_i = ⋂_{k_j∈D^150_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄}) otherwise,
where D^150_i = {k_j ∈ {k_1, ..., k_{i−1}} ∪ H̃^150 : E(p_k_i) ∩ E(p_k_j) ≠ ∅}. This ensures that
• {s_k_i − w_k_i + 1, ..., s_k_i} ∩ {s_k_j − w_k_j + 1, ..., s_k_j} = ∅ for each k_j ∈ D^150_i,
• and {s_k_i − w_k_i + 1, ..., s_k_i} ∩ {s′ − w_k + 1, ..., s′} = ∅ if E(p_k_i) ∩ E(p_k) ≠ ∅ (we take into account the possibility of adding slot s′ as a last slot in the selected last slots S^150_k of demand k in solution S^150).
We let S^150_k_i = {s_k_i} be the set of last slots assigned to demand k_i with i ∈ {1, ..., |K|},
d) a set of slots U^150 is used in S, such that for each demand k″, last slot s_k″ ∈ S^150_k″, and s″ ∈ {s_k″ − w_k″ + 1, ..., s_k″}, we have s″ ∈ U^150.
S^150 is clearly feasible for the problem; hence, the corresponding incidence vector (u^S150, z^S150) belongs to F^{H′E_S}_H. Then consider the solution S^151 = (U^151, S^151) obtained from S^150 by adding slot s′ as a last slot of demand k, without modifying the last slots assigned to the demands K \ {k} in S^150, i.e., S^150_k′ = S^151_k′ for each demand k′ ∈ K \ {k}, and S^151_k = S^150_k ∪ {s′} for demand k. The solution S^151 is feasible for the SA problem, and the corresponding incidence vector (u^S151, z^S151) belongs to F^{H′E_S}_H. The solutions S^150 and S^151 thus satisfy the equation µu + σz = τ, and we obtain
µu^S150 + σz^S150 = µu^S151 + σz^S151 = µu^S150 + σz^S150 + σ^k_s′ + Σ_{s̃∈U151\U150} µ_s̃ − Σ_{s̃∈U150\U151} µ_s̃.

Then consider the solution S^153 = (U^153, S^153) obtained from S^152 such that
a) the last slots assigned to the demands K \ {k, k′} in S^152 remain the same in S^153, i.e., S^152_k″ = S^153_k″ for each demand k″ ∈ K \ {k, k′}, where k is a demand with v_{k,s} ∈ H̃^152 and s ∈ S^152_k such that v_{k′,s′} is not linked with any node v_{k″,s″} ∈ H̃^152 \ {v_{k,s}},
b) slot s′ is added as a last slot of demand k′, i.e., S^153_k′ = S^152_k′ ∪ {s′} for demand k′,
c) and the last slots assigned to demand k are modified by adding a new last slot s̃ and removing the last slot s ∈ S^152_k with v_{k,s} ∈ H and v_{k,s̃} ∉ H, such that S^153_k = (S^152_k \ {s}) ∪ {s̃} for demand k, where {s̃ − w_k + 1, ..., s̃} ∩ {s′ − w_k′ + 1, ..., s′} = ∅ for each k′ ∈ K and s′ ∈ S^153_k′ with E(p_k) ∩ E(p_k′) ≠ ∅.

Proposition 5.5.2. Consider a slot s ∈ {1, ..., s̄ − 1}.
Then,
u_{s+1} ≤ Σ_{k∈K} Σ_{s′=s}^{min(s+w_k−1, s̄)} z^k_s′.

The strengthened linear problem used in our Branch-and-Cut algorithm is then given by
min Σ_{s∈S} u_s
subject to
z^k_s = 0, for all k ∈ K and s ∈ {1, ..., w_k − 1}, (5.22)
Σ_{s=w_k}^{s̄} z^k_s = 1, for all k ∈ K, (5.23)
Σ_{k∈K_e} Σ_{s′=s}^{min(s̄, s+w_k−1)} z^k_s′ − u_s ≤ 0, for all e ∈ E and s ∈ S, (5.24)
u_s − Σ_{k∈K} Σ_{s′=s}^{min(s+w_k−1, s̄)} z^k_s′ ≤ 0, for all s ∈ S, (5.25)
z^k_s ≥ 0, for all k ∈ K and s ∈ S, (5.26)
0 ≤ u_s ≤ 1, for all s ∈ S, (5.27)
z^k_s ∈ {0, 1}, for all k ∈ K and s ∈ S, (5.28)
u_s ∈ {0, 1}, for all s ∈ S. (5.29)

Inequality (5.25) ensures that if slot s is not used by at least one demand, its associated variable u_s is forced to equal zero. On the other hand, to boost the performance of the B&B algorithm, we introduced earlier several classes of valid inequalities that yield tighter LP bounds. Based on this, at each iteration of a given level of the B&B algorithm, one can identify one or more inequalities of a given class that are violated by the current fractional solution. Algorithm 6 summarizes the steps of the Branch-and-Cut algorithm taking these additional valid inequalities into account.

The primal heuristic constructs an integral solution, feasible if possible, from the current fractional solution. For this, we first use a local search algorithm that generates at each iteration a numbered sequence of demands L. The valid inequalities are separated in the following order:
a) interval-odd-hole inequalities (2.40),
b) slot-assignment-odd-hole inequalities (2.44),
c) interval-clique inequalities (2.39),
d) slot-assignment-clique inequalities (2.43),
e) interval-capacity-cover inequalities (5.10).

The results have shown that the valid inequalities enhance the resolution of the SA problem, so that the Branch-and-Cut algorithm clearly outperforms the Branch-and-Bound algorithm. Finally, it would be interesting to further investigate a combination of the different algorithms with machine learning and reinforcement learning techniques to better manage the B&C, B&P, and B&C&P trees, particularly for
a) node selection [27][36],
b) variable selection and branching rules [6][36],
c) column selection [39][111],
d) cut selection [54][110],
e) and to provide a deeper comparative study between the algorithms (Accorsi et al., Guidelines for the Computational Testing of Machine Learning approaches to Vehicle Routing Problems).
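For concreteness, the compact SA formulation (5.22)-(5.29) stated above can be prototyped with an off-the-shelf MIP modeler. The sketch below uses Python/PuLP and is only illustrative: the function name and the instance encoding are our assumptions, and the thesis's actual implementation (with SCIP/CPLEX/Gurobi and the additional valid inequalities) is considerably richer.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum

def build_sa_model(demands, paths, edges, S):
    # demands: k -> w_k; paths: k -> set of edges E(p_k); S: number of slots.
    # z[(k, s)] = 1 iff slot s is the last slot of demand k; u[s] = 1 iff
    # slot s is used. Non-overlapping is enforced per edge and slot by (5.24).
    slots = range(1, S + 1)
    prob = LpProblem("SA", LpMinimize)
    z = LpVariable.dicts("z", [(k, s) for k in demands for s in slots],
                         cat="Binary")
    u = LpVariable.dicts("u", slots, cat="Binary")
    prob += lpSum(u[s] for s in slots)  # minimize the number of used slots
    for k, w in demands.items():
        for s in range(1, w):                                     # (5.22)
            prob += z[(k, s)] == 0
        prob += lpSum(z[(k, s)] for s in range(w, S + 1)) == 1    # (5.23)
    for e in edges:
        K_e = [k for k in demands if e in paths[k]]
        for s in slots:                                           # (5.24)
            prob += lpSum(z[(k, t)] for k in K_e
                          for t in range(s, min(S, s + demands[k] - 1) + 1)
                          ) <= u[s]
    for s in slots:                                               # (5.25)
        prob += u[s] <= lpSum(z[(k, t)] for k in demands
                              for t in range(s, min(S, s + demands[k] - 1) + 1))
    return prob, z, u

Calling prob.solve() on the returned model yields a spectrum assignment whose used slots can be read off the u variables.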
Let S^6 = (E^6, S^6) be the solution given by
a) for each demand k_i ∈ K with i ∈ {1, ..., |K|}, we let E^6_k_i be the set of edges involved in a shortest path between o_k_i and d_k_i,
b) we select the smallest slot index s_k in {w_k, ..., s̄} \ {s} as last slot for demand k,
c) for each demand k_i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s_k_i in the set of slots I^6_i defined as in the previous constructions.

Theorem 2.3.4. Consider a demand k ∈ K and an edge e ∈ E \ (E^k_0 ∪ E^k_1). Then, the inequality x^k_e ≤ 1 is facet defining for P(G, K, S) if and only if
a) there does not exist a demand k′ ∈ K \ {k} such that the two demands k and k′ are non-compatible demands for edge e,
b) and there does not exist an edge e′ ∈ E \ (E^k_1 ∪ E^k_0 ∪ {e}) such that the two edges e and e′ are non-compatible edges for demand k.

Proof. Necessity. For demand k and edge e ∈ E \ (E^k_0 ∪ E^k_1),
a) if there exists a demand k′ ∈ K \ {k} such that the two demands k and k′ are non-compatible demands for edge e, then the inequality x^k_e ≤ 1 is dominated by inequality (2.20);
b) if there exists an edge e′ ∈ E \ (E^k_1 ∪ E^k_0 ∪ {e}) such that the two edges e and e′ are non-compatible edges for demand k, then the inequality x^k_e ≤ 1 is dominated by inequality (2.19).

Let S^9 = (E^9, S^9) be the solution given by
a) for each demand k_i ∈ K \ {k}, we let E^9_k_i be the set of edges involved in a shortest path between o_k_i and d_k_i,
b) for demand k, we let E^9_k be the set of edges involved in a shortest path between o_k and d_k which uses edge e,
c) we select the smallest slot index s_k in {w_k, ..., s̄} \ {s} as last slot for demand k in solution S^9,
d) for each demand k_i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s_k_i in the set of slots I^9_i given by
I^9_i = [⋂_{k_j∈D^9_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄})] ∩ [{w_k_i, ..., s − w_k} ∪ {s + w_k_i, ..., s̄}] if E^9_k_i ∩ E^9_k ≠ ∅, or
I^9_i = ⋂_{k_j∈D^9_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄}) otherwise.
For each demand k_i ∈ K \ {k} with i ∈ {1, ..., |K|}, we let E′^9_k_i be the set of edges involved in a shortest path between o_k_i and d_k_i;
b) for demand k, we let E′^9_k be the set of edges involved in a shortest path between o_k and d_k which uses edge e, such that edge e′ is a compatible edge with all the selected edges e″ ∈ E′^9_k in solution S′^9, i.e., Σ_{e″∈E′^9_k} ℓ_e″ + ℓ_e′ ≤ l̄_k;
c) we select the slot s_k = w_k as last slot for demand k;
d) for each demand k_i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s_k_i in the set of slots I′^9_i given by
I′^9_i = ⋂_{k_j∈D′^9_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄}), restricted accordingly when E′^9_k_i ∩ (E′^9_k ∪ {e′}) ≠ ∅
(we take into account the possibility of adding edge e′ to the selected path E′^9_k of demand k in solution S′^9).
We let S′^9_k_i = {s_k_i} be the set of last slots assigned to each demand k_i with i ∈ {1, ..., |K|}. S′^9 is clearly feasible for the problem, and its incidence vector (x^S′9, z^S′9) belongs to F′^k_e. Let S^11 = (E^11, S^11) be the solution obtained from S′^9 by adding edge e′ ∈ E \ (E^k_0 ∪ E^k_1) to the routing of demand k, which means that E^11_k = E′^9_k ∪ {e′}. The last slots assigned to the demands K, and the paths assigned to the demands K \ {k} in S′^9, remain the same in solution S^11, i.e., S^11_k = S′^9_k for each k ∈ K, and E^11_k′ = E′^9_k′ for each k′ ∈ K \ {k}. S^11 is clearly feasible for the problem, and the corresponding incidence vector (x^S11, z^S11) belongs to F′^k_e. Hence, the solutions S′^9 and S^11 satisfy the equation µx + σz = τ. It follows that
µx^S′9 + σz^S′9 = µx^S11 + σz^S11 = µx^S′9 + µ^k_e′ + σz^S′9.

Let S^12 = (E^12, S^12) be the solution given by
a) for each demand k_i ∈ K \ {k} with i ∈ {1, ..., |K|}, we let E^12_k_i be the set of edges involved in a shortest path between o_k_i and d_k_i,
b) for demand k, we let E^12_k be the set of edges involved in a shortest path between o_k and d_k which does not use edge e, and such that edge e′ is a compatible edge with all the selected edges e″ ∈ E^12_k, i.e., Σ_{e″∈E^12_k} ℓ_e″ + ℓ_e′ ≤ l̄_k,
c) we select slot s_k = s as last slot for demand k, and we let S^12_k = {s},
d) for each demand k_i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s_k_i in the set of slots I^12_i given by
I^12_i = [⋂_{k_j∈D^12_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄})] ∩ [{w_k_i, ..., s_k − w_k} ∪ {s_k + w_k_i, ..., s̄}] if E^12_k_i ∩ (E^12_k ∪ {e}) ≠ ∅, or
I^12_i = ⋂_{k_j∈D^12_i} ({w_k_i, ..., s_k_j − w_k_j} ∪ {s_k_j + w_k_i, ..., s̄}) otherwise.

Let S′^12 = (E′^12, S′^12) be the solution given by
a) for each demand k_i ∈ K with i ∈ {1, ..., |K|}, we let E′^12_k_i be the set of edges involved in a shortest path between o_k_i and d_k_i,
b) we select slot s_k = s as last slot for demand k, and we let S′^12_k = {s},
c) for each demand k_i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s_k_i in the set of slots I′^12_i
i given by I ′12 i = [ kj ∈D ′12 i if E ′12 ki ∩ E ′12 k ̸ = ∅ or I ′12 i = kj ∈D ′12 i 15 , S15 ) be the solution given by a) for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we let E 15 k i be the set of edges involved in a shortest path between o k i and d k i , b) for demand k, we let E 15 k be the set of edges involved in a shortest path between o k and d k such that edge e is compatible-edge with all the selected {w ki , ..., s kj -w kj }∪{s kj +w ki , ..., s}]∩[{w ki , ..., s k -w k }∪{s k +w ki , ..., s}] {w ki , ..., s kj -w kj }∪{s kj +w ki , ..., s} if not, edges e" ∈ E 15 k , i.e., e"∈E 15 k l e" + l e ≤ lk , c) we select slot s k = w k as last slot for demand k, and let S 15 k = {s k }, d) for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 15 i given by I 15 i = [ kj ∈D 15 i if E 15 ki ∩(E 15 k ∩{e}) ̸ = ∅ or I 15 i = kj ∈D 15 i assigned to the demands K, and paths assigned the set of demands K \ {k} in S 15 remain the same in solution S 16 15 k ∪ {e}) ̸ = ∅ ( we take into account the possibility of adding edge e in the selected pathE 15 k to route demand k in solution S 15 ).We let S 15 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}.S15 is feasible for the problem, its incidence vector (x S 15 , z S 15 ) belongs to F k S . Then we derive a solution S 16 = (E 15 , S 15 ) obtained from S 15 by adding edge e ∈ E \(E k , i.e., S 16 k = S 15 k for each k ∈ K, and E 16 k ′ = E 15 k ′ for each k ′ ∈ K \ {k}. S 16 is clearly feasible for the problem. The corresponding incidence vector (x S 16 , z S 16 ) belongs to F k S . Hence, solutions S 15 and S 16 satisfy equation µx + σz = τ . It follows that µx S 15 + σz S 15 = µx S 16 + σz S 16 = µx S 15 + µ k e + σz S 15 . 0 ∪ E k 1 ) for the routing of demand k in solution S 15 which means that E 16 k = E 15 k ∪ {e}. The last slots |K|}, we select the smallest slot index s k i in the set of slots I ′15 {w ki , ..., s kj -w kj }∪{s kj +w ki , ..., s}]∩[{w ki , ..., s ′ -w k ′ }∪{s ′ +w ki , ..., s}] {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, i given by I ′15 i = [ kj ∈D ′15 i if E ′15 ki ∩ E ′15 k ′ ̸ = ∅ or I ′15 i = kj ∈D ′15 i for all k ′ ∈ K and s ′ ∈ {w k ′ , ..., s}.Let prove now that σ k s for demand k and slot s in {w k , ..., s} are equivalent. Consider a slot s ′ ∈ {w k , ..., s}. Let S15 = ( Ẽ15 , S15 ) be the solution given by a) for each demand k i ∈ K with i ∈ {1, ..., |K|}, we let Ẽ15 k i be the set of edges involved in a shortest path between o k i and d k i , b) we select the smallest slot index s k from {w k , ..., s}\{s ′ } as last slot for demand {w ki , ..., s kj -w kj }∪{s kj +w ki , ..., s}]∩[{w ki , ..., s ′ -w k }∪{s ′ +w ki , ..., s}] {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, k, c) for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots Ĩ15 i given by Ĩ15 i = [ kj ∈ D15 if Ẽ15 ki ∩ Ẽ15 k ̸ = ∅ or Ĩ15 i = kj ∈ D15 where D15 i i and s ∈ {w k ′ , ..., s}, 0 otherwise. As a consequence, (µ, σ) = ρ(α, β) + γQ. Theorem 2.3.7. Consider a demand k in K and a subset of node X ⊂ V, with |X ∩ {o k , d k }| = 1 and δ(X) ∩ E k 1 = ∅. Then, inequality (2.2), e∈δ(X) x k e ≥ 1, is facet defining for P(G, K, S). Proof. 
Let F k X denote the face induced by inequality e∈(δ(X)\E k 0 ) |K|}, we let E 19 k i be the set of edges involved in a shortest path between o k i and d k i , we select the smallest slot index s k in {w k , ..., s} \ {s} as last slot for demand k, d) for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 19 i given by {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s -w k } ∪ {s + w ki , ..., s}] {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, b) for demand k, we let E 19 k be the set of edges involved in a shortest path between o k and d k . This guarantees that one edge e from (δ(X) \ E k 0 ) is chosen to route demand k, i.e., |(δ(X) \ E k 0 ) ∩ E 19 k | = 1, i = [ kj ∈D 19 i if E 19 ki ∩ E 19 k ̸ = ∅ or I 19 i = c) for demand k, I 19 kj ∈D 19 i 19 and S 20 satisfy equation µx + σz = τ . We then obtain that µx S 19 + σz S 19 = µx S 20 + σz S 20 = µx S 19 + σz S 19 + σ k s . we take into account the possibility of adding slot s in the set of last slots S19 k to route demand k in solution S19 ).We let S 19k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}.S19 is clearly feasible for the problem, and its incidence vector (x S 19 , z S 19 ) belongs toF k X .Then consider the solution S 20 = (E 20 , S 20 ) obtained from S 19 by adding slot s as last slot to demand k without modifying the paths assigned to the demands K in S 19 (i.e., E 20 k = E 19 1 for each k ∈ K), and the last slots assigned to the demands K \ {k} in S 19 remain the same in solution S 20 i.e., S 19 k ′ = S 20 k ′ for each demand k ′ ∈ K \ {k}, and S 20 k = S 19 k ∪ {s} for demand k. Solution S 20 is feasible for the problem. The corresponding incidence vector (x S 20 , z S 20 ) belongs to F k X . Hence, solutions S arbitrarily. Let S ′19 = (E ′19 , S ′19 ) be the solution given by a) for each demandk i ∈ K \ {k} with i ∈ {1, ...,|K|}, we let E ′19 k i be the set of edges involved in a shortest path between o k i and d k i , b) for demand k, we let E ′19 k be the set of edges involved in a shortest path between o k and d k such that edge e ′ is compatible-edge with the selected edges e" ∈ E ′19 k , i.e., {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s k -w k } ∪ {s k + w ki , ..., s}] {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, e"∈E ′19 k l e" + ℓ e ′ ≤ lk . c) we select the slot s k = w k as last slot for demand k, and let S ′19 k = {s k }, d) for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I ′19 i given by I ′19 i = [ kj ∈D ′19 i if E ′19 ki ∩ (E ′19 k ∪ {e ′ }) ̸ = ∅ or I ′19 i = kj ∈D ′19 i we select the slot s k = w k as last slot for demand k , and let S19k = {s k }, d) for each demand k i ∈ K \ {k} with i ∈ {1, ...,|K|}, we select the smallest slot index s k i in the set of slots Ĩ19 , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s k -w k } ∪ {s k + w ki , ..., s}] , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not. i given by Ĩ19 i = [ kj ∈ D19 if Ẽ19 ki ∩ ( Ẽ19 k ∪ {e ′ }) ̸ = ∅ or Ĩ19 i = kj ∈ D19 i {w ki i {w ki 2.4.1. Consider an edge e ∈ E, and a slot s ∈ S. Let K be a subset of demands in K, and k∈ K w k ≤ sk ′ ∈Ke\ K w k ′ . 
Then, inequality (2.23) is facet defining for P(G, K, S) if and only if K e \ K = ∅, and there does not exist an interval of contiguous slots I = [s i , s j ] such that a) |{s i + w k -1, ..., s j }| ≥ w k for each demand k ∈ K, b) and s ∈ {s i + max 38 , S38 ) be the solution given by a) for each demand k i ∈ K \ K with i ∈ {1, ..., |K|}, we let E 38 k i be the set of edges involved in a shortest path between o k i and d k i , b) for demand k, we let E 38 k be the set of edges involved in a shortest path between o k and d k which uses edge e such that edge e ′ is compatible with all the selected edges e" ∈ E 38 k of demand k in solution S 38 , i.e., e"∈E 38 k l e" + ℓ e ′ ≤ lk , c) for each demand k ′ ∈ K \ {k}, we let E 38 k ′ be the set of edges involved in a shortest path between o k ′ and d k ′ which does uses edge e, d) for one demand k ′ ∈ K, we select the smallest slot index s k ′ in {w k ′ , ..., s} as last slot such that s ∈ {s k ′ -w k ′ + 1, ..., s k ′ }, e) for each demand k i ∈ K \ {k ′ } with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 38 i given by kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s k -w k } ∪ {s k + w ki , ..., s}] kj -w kj } ∪ {s kj + w ki , ..., s} if not. I 38 i = [ kj ∈D 38 i {w ki , ..., s if E 38 ki ∩ (E 38 k ∪ {e ′ }) ̸ = ∅ or I 38 i = {w ki , ..., s kj ∈D 38 i for the routing of demand k in solution S38 which means thatE 39 k = E 38 k ∪ {e ′ }.The last slots assigned to the demands K, and paths assigned the set of demands K \ {k} in S 38 remain the same in solution S 39 , i.e., S 39 k = S 38 k for each k ∈ K, and E 39 k ′ = E 38 k ′ for each k ′ ∈ K \ {k}. S 39 is clearly feasible for the problem. The corresponding incidence vector (x S 39 , z S 39 ) belongs to F e,s K . Hence, solutions S 38 and S 39 satisfy equation µx + σz = τ . It follows that µx S 38 + σz S 38 = µx S 39 + σz S 39 = µx S 38 + µ k e ′ + σz S 38 . |K|}, we let E 42 k i be the set of edges involved in a shortest path between o k i and d k i , b) for one demand k from K, we let E 42 k i be the set of edges involved in a shortest path between o k and d k which uses edge e, c) for each demand k ′ ∈ K \ {k}, we let E 42 k ′ be the set of edges involved in a shortest path between o k ′ and d k ′ which does not use edge e, d) for each demand k ∈ K, we select the smallest slot index s k in {w k , ..., s} ∩ {s, ..., s + w k -1} as last slot, e) for each demand k i ∈ K \ K with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 42 i given by I 42 i = [ kj ∈D 42 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s -w k } ∪ {s + w ki , ..., s}] if E 42 ki ∩ E 42 k ̸ = ∅, or I 42 i = [ kj ∈D 42 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] if not, modifying the path assigned to demand k in S 42 with e ∈ E 42 k and k ∈ K from E42 k to a path E 43 k without passing through edge e (i.e., e / ∈ E 43 k ) and {s - modifying the path assigned to demand k in S 53 with e ∈ E 53 k and v k ∈ C from E 53 k to a path E 58 k without passing through edge e (i.e., e / ∈ E 58 k ) and {s - 53 defined in the proof of theorem 2.4.6 which stills feasible such that its incidence vector (x S 53 , z S 53 ) belongs to F C . Furthermore, and based on the solutions S 53 to S 59 with corresponding incidence vector (x S 53 , z S 53 ) to (x S 59 , z S 59 ) I solution S ′H e belong to F ′H e I ′H e I C is a proper face based on defining for P(G, K, S). 
We first prove that F C , we show that there exist ρ ∈ R and γ and σ k s are equivalent for all v k ∈ C ∪ C e and s ∈ {s i + w k -1, ..., s j }, c) and µ k e ′ = 0 for all demand k ∈ K and edge e ∈ E \ (E k 0∪ E k 1 ) with e ̸ = e ′ if v k ∈ C, d) and µ k e are equivalent for the set of demands in C, e) and σ k ′ s and µ k e are equivalent for all v k ∈ C and v k ′ ∈ C ∪ C e and s ∈ {s |K|}, we select the smallest slot index s k i in the set of slots I 60 i given by I 60 i = kj ∈D 60 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} \ {s i , ..., s j }, |K|}, we select the smallest slot index s k i in the set of slots I 60 i given by be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S 60 is feasible for the problem, and its incidence vector (x S 60 , z S 60 ) belongs to F The last slots assigned to the demands K, and paths assigned the set of demands K \ {k} in S60 remain the same in solution S 61 , i.e., S 61 k = S 60 k for each k ∈ K, and E 61 k ′ = E 60 k ′ for each k ′ ∈ K \ {k}. S 61 is clearly feasible for the problem. The corresponding incidence vector (x S 61 , z S 61 ) belongs to F 60 and S 61 satisfy equation µx + σz = τ . It follows that µx S 60 + σz S 60 = µx S 61 + σz S 61 = µx S 60 + µ k e + σz S 60 . H E I C . Hence, solutions S H E I C . Then we derive a solution S 61 = (E 61 , S 61 ) obtained from S 60 by adding edge e ∈ E \ (E k 0 ∪ E k 1 ) for the routing of demand k in solution S 60 which means that E 61 k = E 60 k ∪ {e}. |K|}, we select the smallest slot index s k i in the set of slots I ′60 i given by I ′60 i = kj ∈D ′60 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} \ {s i , ..., s j }, E(H) denotes the set of edges in the sub-graph of the conflict graph H E I induced by H. Taking into account that each node v k in H has two neighbors in H, this implies that s j s=s i +w k -1 z k s appears twice in the previous inequality. As a result, Theorem 2.4.10. Let H be an odd-hole in the conflict graph H E I with |H| ≥ 5 and 2w k > |I| for each v k ∈ H. Then, inequality (2.40) is facet defining for P(G, K, S) if and only if a) for each node v k ′ / ∈ H in H E I , there exists a node v k ∈ H such that the induced graph |K|}, we select the smallest slot index s k i in the set of slots I 64 i given by 64 i = {k j ∈ {k 1 , ..., k i-1 } ∩ H : E 64 k i ∩ E 64 k j ̸ = ∅}. We let S 64 k i = {s k i } be the set of last slots assigned to demand k i , e) for each demand k i ∈ K \ H with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 64 i given by {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s k -w k } ∪ {s k + w ki , ..., s}] {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, I 64 i = [ kj ∈R 64 i if E 64 ki ∩ (E 64 k ∪ {e}) ̸ = ∅ or I 64 i = kj ∈R 64 i I 64 i = kj ∈D 64 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} \ {s i + w ki -1, ..., s j }, where D we take into account the possibility of using edge e ′ in the selected path E 64 k to route demand k in solution S 64 ).64 is feasible for the problem. Hence, the corresponding incidence vector (x S 64 , z S 64 ) The last slots assigned to the demands K, and paths assigned the set of demands K \ {k} in S 64 remain the same in solution S 65 , i.e.,65 is clearly feasible for the problem. The corresponding incidence vector (x S 65 , z S 65 ) belongs to H . Hence, solutions S 64 and S 65 satisfy equation µx + σz = τ . It follows that µx S 64 + σz S 64 = µx S 65 + σz S 65 = µx S 64 + µ k e + σz S 64 . 
We let S 64 k belongs to F H E S 65 k = S 64 F H E I i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S I H . Then we derive a solution S 65 = (E 65 , S 65 ) obtained from S 64 by adding edge e ∈ E \ (E k 0 ∪ E k 1 ) for the routing of demand k in solution S 64 which means that E 65 k = E 64 k ∪ {e}. k for each k ∈ K, and E 65 k ′ = E 64 k ′ for each k ′ ∈ K \ {k}. S s j }. Let S 66 = (E 66 , S 66 ) be the solution given by a) for each demand k i ∈ K with i ∈ {1, ..., |K|}, we let E 66 k i be the set of edges involved in a shortest path between o k i andd k i , b) select a subset of demands H from H with | H| = |H|-12 , c) for each demand k i from H with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 66 i given by I 66 i = [ kj ∈L 66 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ {s i + w ki -1, ..., s j }. |K|}, we select the smallest slot index s k i in the set of slots I 66 i given by I 66 i = kj ∈D 66 I 66 i = [ kj ∈R 66 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} \ {s i + w ki -1, ..., s j }, where D 66 i = {k j ∈ {k 1 , ..., k i-1 } ∩ H : E 66 k i ∩ E 66 k j ̸ = ∅}, e) for each demand k i ∈ K \ H with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 66 i given by i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k ′ } ∪ {s ′ + w ki , ..., s}] if E 66 ki ∩ E 66 k ′ ̸ = ∅ or I 66 i = kj ∈R 66 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, we take into account the possibility of adding slot s ′ as a last slot in the selected last slots S 66 k ′ to route demand k ′ in solution S 66 ).66 is feasible for the problem. Hence, the corresponding incidence vector (x S 66 , z S 66 ) H . Based on this, we construct a feasible solution S 67 = (E 67 , S 67 ) obtained from S 66 as belows a) without changing the established paths for the demands K \ K in solution S 66 , i.e.,E 67 k = E 66 k for each demand k ∈ K \ K, b)remove the last slot s totally covered by the interval I and which has been selected by a demand k i ∈ {v k 1 , ..., v kq } in solution S 66 (i.e., s ∈ S 66 k i and s′ ∈ {s i +w k i +1, ..., s j }) such that each pair of nodes (v k ′ , v k j ) are not linked in odd-hole H with j ̸ = i, c) and select a new last slot s′ / ∈ {s i + w k i + 1, ..., s j } for demand k i i.e., S 67 k i = (S 66 k i \ {s}) ∪ {s ′ } such that {s ′ -w k i -1, ..., s′ } ∩ {s -w k + 1, ..., s} = ∅ for each k ∈ K and s ∈ S 66 k with E 67 k ∩ E 67 k i ̸ = ∅, d) and add slot s ′ to the set of last slots S 66 k ′ assigned to demand k ′ in solution S 66 , i.e., S 67 k ′ = S 66 k It follows that µx S 66 + σz S 66 = µx S 67 + σz S 67 = µx S 66 + σz S 66 + σ We let S 66 k belongs to F H E I i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S ′ ∪ {s ′ }. Solution S 67 is feasible for the problem. The corresponding incidence vector (x S 67 , z S 67 ) belongs to F H E I H . Hence, solutions S 66 and S 67 satisfy equation µx + σz = τ . 
Then, inequality (2.41) is facet defining for P(G, K, S) if and only ifa) for each node v k" in H E I with v k" / ∈ H ∪ C and C ∪ {v k" } is a clique in H E I , there exists a subset of nodes H ⊆ H of size |H|-1 2 such that H ∪ {v k" } is stable in H E E I such that v k" is linked with all nodes v k ∈ H and nodes v k ′ ∈ C.This implies that inequality (2.41) is dominated by the following inequality I , b) and there does not exist an interval I ′ of contiguous slots with I ⊂ I ′ such that H and C define also an odd-hole and its connected clique in the associated conflict graph H E I ′ . Proof. Neccessity. a) Note that if there exists a node v k" / ∈ H ∪ C in H β) + γQ. For this, we show that a) σ k s = 0 for all demand k ∈ K and slot s ∈ {w k , ..., s} with s / ∈ {s i + w k -1, ..., s j } if v k ∈ H ∪ C as we did in the proof of theorem 2.4.14, b) and µ k e = 0 for all demand k ∈ K and edge e ∈ E \ (E k 0 ∪ E k 1 ) as we did in the proof of theorem 2.4.14, c) and σ k s are equivalent for all v k ∈ H and s ∈ {s i + w k -1, ..., s j } as we did in the proof of theorem 2.4.14. Solutions S 49 -S 69 still feasible for F . We should prove now that σ k s are equivalent for all v k ∈ C and s ∈ {s i + w k -1, ..., s j }. H E I H,C |K|}, we select the smallest slot index s k i in the set of slots I 70 i given by {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s′ -w k′ } ∪ {s ′ + w ki , ..., s}] I 70 i = [ kj ∈R 70 i if E 70 ki ∩ E 70 k ′ ̸ = ∅ or I 70 i = kj ∈R 70 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, 107 Definition 2.4.7. Let H E S be a conflict graph defined as follows. For all slot s ∈ {w k , ..., s} and demand k ∈ K, consider a node v k,s in H E S . Two nodes v k,s and v k ′ ,s ′ are linked by an edge in H E S if and only if 72 , S72 ) be the solution given by 1. for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we let E 72 k i be the set of edges involved in a shortest path between o k i and d k i , 2. for demand k, we let E 72 k be the set of edges involved in a shortest path between o k and d k such that edge e is compatible with all the selected edges e ∈ E 72 k , i.e., I 72 i = [ kj ∈D 72 e ′ ∈E 72 k ℓ e ′ + ℓ e ≤ lk , [START_REF] Balas | Facets of the knapsack polytope[END_REF] . select one pair of demand k ′ and slot s ′ from clique C (i.e., v k ′ ,s ′ ∈ C), and use slot s k ′ = s ′ as last slot, 4. for each demand k i ∈ K \ {k ′ } with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 72 i given by i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s k -w k } ∪ {s k + w ki , ..., s}] if E 72 ki ∩ (E 72 k ∪ {e}) ̸ = ∅ or I 72 i = kj ∈D 72 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, we take into account the possibility of using edge e in the selected path E72 k to route demand k in solution S72 ). We let S 72 k i = {s k i } be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}.S 72 is feasible for the problem. Hence, the corresponding incidence vector (x S 72 , z S 72 )C . Then we derive a solution S 73 = (E 73 , S 73 ) obtained from S 72 by adding edge e ∈ E \ (E k 0 ∪ E k 1 ) for the routing of demand k in solution S 72 which means that E73 k = E 72 k ∪ {e}. The last slots assigned to the demands K, and paths assigned the set of demands K \ {k} in S 72 remain the same in solution S 73 , i.e.,73 is clearly feasible for the problem. The corresponding incidence vector (x S 73 , z S 73 ) belongs to C . Hence, solutions S 72 and S 73 satisfy equation µx + σz = τ . 
It follows that µx S 72 + σz S 72 = µx S 73 + σz S 73 = µx S 72 + µ k e + σz S 72 . Let show that σ k s = 0 for all k ∈ K and s ∈ {w k , ..., s} with v k,s / ∈ C. Consider a demand k in K and a slot s ′ in {w k , ..., s} with v k,s ′ / ∈ C. Let S ′72 = (E ′72 , S ′72 ) be the solution given by 1. for each demand k i ∈ K with i ∈ {1, ..., |K|}, we let E ′72 k i be the set of edges involved in a shortest path between o k i and d k i , 2. select one pair of demand k ′ and slot s ′ from clique C (i.e., v k ′ ,s ′ ∈ C), and use slot s k ′ = s ′ as last slot, 3. for each demand k i ∈ K \ {k ′ } with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I ′72 F H E S As a result, µ k e = 0. In a similar way, we can show that µ k e = 0, for all k ∈ K and e ∈ E \ (E k 0 ∪ E k 1 ). i given by I ′72 i = [ kj ∈D ′72 belongs to F H E S S 73 k = S 72 k for each k ∈ K, and E 73 k ′ = E 72 k ′ for each k ′ ∈ K \ {k}. S i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k } ∪ {s ′ + w ki , ..., s}] if E ′72 ki ∩ E ′72 k ̸ = ∅ or I ′72 i = kj ∈D ′72 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, 76 , S76 ) be the solution given by 1. for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we let E 76 k i be the set of edges involved in a shortest path between o k i and d k i , 2. for demand k, we let E 76 k be the set of edges involved in a shortest path between o k and d k such that edge e is compatible with all the selected edges e ∈ E 76 k , 3. select a subset of nodes H76 from H with | H76 | = |H|-1 2 , and each pair of nodes (v k,s , v k ′ ,s ′ ) ∈ H76 are not linked in the conflict graph H E S , 4. for each pair of demand k and slot s with v k,s ∈ H76 , we select slot s k = s as last slot for demand k, 5. for each demand k i ∈ K \ H76 with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 76 i given by I 76 i = [ kj ∈D 76 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s k -w k } ∪ {s k + w ki , ..., s}] if E 76 ki ∩ (E 76 k ∪ {e}) ̸ = ∅ or I 76 i = kj ∈D 76 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, we take into account the possibility of using edge e in the selected path E76 k to route demand k in solution S76 ). ′77 , S ′77 ) obtained from S 76 by adding edge e ∈ E \ (E k 0 ∪ E k 1 ) for the routing of demand k in solution S 76 which means that E ′77 k = E 76 k ∪ {e}. The last slots assigned to the demands K, and paths assigned the set of demands K \ {k} in S 76 remain the same in solution S ′77 , i.e.,S ′77 k = S 76 k for each k ∈ K, and E ′77 k ′ = E 76 k ′ for each k ′ ∈ K \ {k}. S ′77is clearly feasible for the problem. The corresponding incidence vector (x S ′77 , z S ′77 ) belongs H . Hence, solutions S 76 and S ′77 satisfy equation µx + σz = τ . It follows that µx S 76 + σz S 76 = µx S ′77 + σz S ′77 = µx S 76 + µ k e + σz S 76 . Let show that σ k s = 0 for all k ∈ K and s ∈ {w k , ..., s} with v k,s / ∈ H. Consider a demand k in K and a slot s ′ in {w k , ..., s} with v k,s ′ / ∈ H. Let S ′76 = (E ′76 , S ′76 ) be the solution given by 1. for each demand k i ∈ K with i ∈ {1, ..., |K|}, we let E ′76 k i be the set of edges involved in a shortest path between o k i and d k i , 2. select a subset of nodes H′76 from H with | H′76 | = |H|-1 2 , and each pair of nodes (v k,s , v k ′ ,s ′ ) ∈ H′76 are not linked in the conflict graph H E S , 3. for each pair of demand k and slot s with v k,s ∈ H′76 , we select slot s k = s as last slot for demand k, 4. 
for each demand k i ∈ K \ H′76 with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I ′76 i We let S 76 k to F H E S As a result, µ k e = 0. In a similar way, we can show that µ k i = {s k i } be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}. S 76 is feasible for the problem. Hence, the corresponding incidence vector (x S 76 , z S 76 ) belongs to F H E S H . Then we derive a solution S ′77 = (E e = 0, for all k ∈ K and e ∈ E \ (E k 0 ∪ E k 1 ). for all k ∈ K and s ∈ {w k , ..., s} with v k,s / ∈ H. we consider the solution S 80 = (E 80 , S 80 ) defined as follows 1. for each demand k i ∈ K with i ∈ {1, ..., |K|}, we let E 80 k i be the set of edges involved in a shortest path between o k i and d k i , 2. select a subset of nodes H80 from H with | H80 | = |H|-1 2 , and each pair of nodes (v k,s , v k",s" ) ∈ H80 are not linked in the conflict graph H E S , and each v k,s ∈ H80 is not linked with node v k ′ ,s ′ in H E S , 3. for each pair of demand k and slot s with v k,s ∈ H80 , we select slot s k = s as last slot for demand k, 4. for each demand k i ∈ K \ H80 with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 80 i given by I 80 i = [ kj ∈D 80 Let prove that σ k s for all v k,s ∈ H are equivalent. Consider a node v k ′ ,s ′ in H. i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k ′ } ∪ {s ′ + w ki , ..., s}] if E 80 ki ∩ E 80 k ̸ = ∅ or I 80 i = kj ∈D 80 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not. . In what follows, we prove that σ k ′ s ′ are equivalent for all v k ′ ,s ′ ∈ C. For this, we consider a node v k ′ s ′ ∈ C, and a solution S 82 = (E 82 , S 82 ) given by a) for each demand k i ∈ K with i ∈ {1, ..., |K|}, we let E 82 k i be the set of edges involved in a shortest path between o k i and d k i , b) select a subset of nodes H82 from H with | H82 | = |H|-1 2 , and each pair of nodes (v k,s , v k",s" ) ∈ H82 are not linked in the conflict graph H E S , c) for each pair of demand k and slot s with v k,s ∈ H82 , we select slot s k = s as last slot for demand k, d) for each demand k i ∈ K \ H82 with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 82 i given by H E S H,C I 82 i = [ kj ∈D 82 For this, we need to show that a) σ k s = 0 for all demand k ∈ K and slot s ∈ {w k , ..., s} with v k,s / ∈ H ∪ C as done in the proof of theorem 2.4.14, b) and µ k e = 0 for all demand k ∈ K and edge e ∈ E \ (E k 0 ∪ E k 1 ) as done in the proof of theorem 2.4.14, c) and σ k s are equivalent for all v k,s ∈ H as done in the proof of theorem 2.4.14, given that the solutions S 65 -S 80 still feasible such that their corresponding incidence vectors belong to F i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k ′ } ∪ {s ′ + w ki , ..., s}] if E 82 ki ∩ E 82 k ̸ = ∅ or I 82 i = kj ∈D 82 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, 83 , z S 83 ) . Hence, solutions S 82 and S 83 satisfy equation µx + σz = τ . We have so µx S 82 + σz S 82 = µx S 83 + σz S 83 = µx S 82 + σz H E S H,C belongs to F ) if k ̸ = k ′ : k and k ′ are non compatible demands for edge e. Proposition 2.4.20. Let C be a clique in H K E . Then, the inequality x k e ≤ 1, (2.46) v k e ∈C is valid for P(G, K, S). Proof. It is trivial given the definition of a clique set in the conflict graph H K E . 
We know from inequalities (2.19) or (2.20) that for all pairs of nodes (v k e , v k ′ e ′ ) in a clique C in H K E x k e + x k ′ e Based on inequalities (2.19) and (2.20), we introduce the following conflict graph. Definition 2.4.8. Let H K E be a conflict graph defined as follows. For each demand k and edge e / ∈ E k 0 ∪ E k 1 , consider a node v k e in H K E . Two nodes v k e and v k ′ e ′ are linked by an edge in H K E a) if k = k ′ : e and e ′ are non compatible edges for demand k. b′ ≤ 1, By adding the previous inequalities for all two nodes v k,e and v k ′ ,e ′ in C, and by recurrence procedure we obtain that for all |K|} \ {k}, we let E 84 k i be the set of edges involved in a shortest path between o k i and d k i , d) for demand k, we select the slot s k = w k as last slot, e) for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 84 i given by I 84 i = [ kj ∈D 84 if E 84 ki ∩ (E 84 k ∪ {e}) ̸ = ∅ or I 84 i = kj ∈D 84 . Consider a demand k ∈ K and an edge e ∈ E \ (E k 0 ∪ E k 1 ) with v k,e / ∈ C. Let S 84 = (E 84 , S 84 ) be the solution given by a) select one pair of demand k ′ and edge e ′ from clique C (i.e., v k ′ ,e ′ ∈ C), we let E 84 k ′ be the set of edges involved in a shortest path between o k ′ and d k ′ which uses edge e ′ , b) for each pair of demand k" and edge e" with v k",e" ∈ C \ {v k,e }, we let E 84 k" be the set of edges involved in a shortest path between o k" and d k" which uses edge e" which does not use edge e", c) for each demand k i ∈ K \ C with i ∈ {1, ..., i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s k -w k } ∪ {s k + w ki , ..., s}] i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, we take into account the possibility of using edge e in the selected path E84 k to route demand k in solution S 84 ). We let S 84 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S 84 is feasible for the problem. Hence, the corresponding incidence vector (x S 84 , z S 84 ) C . Then we derive a solution S 85 = (E 85 , S 85 ) obtained from S 84 by adding edge e ∈ E \ (E k 0 ∪ E k 1 ) for the routing of demand k in solution S 84 which means that E 85 k = E 84 k ∪ {e}. The last slots assigned to the demands K, and paths assigned the set of demands K \ {k} in S 84 remain the same in solution S 85 , i.e., S 85 k = S 84 k for each k ∈ K, and E 85 k ′ = E 84 k ′ for each k ′ ∈ K \ {k}. S 85 is clearly feasible for the problem. The corresponding incidence vector (x S 85 , z S 85 ) belongs to C . Hence, solutions S 84 and S 85 satisfy equation µx + σz = τ . It follows that µx S 84 + σz S 84 = µx S 85 + σz S 85 = µx S 84 + µ k e + σz S 84 . belongs to F H K E F H K E Then consider the solution S 86 obtained from S ′84 by adding slot s ′ as last slot to demand k in S ′84 . Solution S 86 is clearly feasible for the problem.The corresponding incidence vector (x S 86 , z S 86 ) belongs to F ′84 and S 86 satisfy equation µx + σz = τ . We have so µx S ′84 + σz S ′84 = µx S 86 + σz S 86 = µx S ′84 + σz S ′84 + σ k s ′ . S ′84 ) belongs to F C . H K H K E E C . Hence, solutions S Hence, σ k s ′ = 0. In a similar way, we can show that σ k s = 0, for all k ∈ K and s ∈ {w k , ..., s} with v k,s / ∈ C. Let prove that µ k e for all v k,e are equivalent. Consider a node v k ′ ,e ′ in C such that e ′ / ∈ E 84 k ′ . 
For this, we derive solution S 87 from S 84 by a) modifying the path assigned to demand k ′ in S 84 from E 84 k ′ to a path E 87 k ′ passed through edge e ′ with v k ′ ,e ′ ∈ C, b) modifying the path assigned to demand k in S 84 with e ∈ E 84 k and v k,e ∈ C from E 84 k to a path E 87 k without passing through any edge e" ∈ E \ (E k 0 ∪ E k 1 , i.e., S 84 k = S87 k for each demand k ∈ K \ K. Solution S 87 is feasible for the problem. The corresponding incidence vector (x S 87 , z S 87 ) belongs to F C . Hence, solutions S 84 and S 87 satisfy equation µx + σz = τ . We have so µx S 84 + σz S 84 = µx S 87 + σz S 87 = µx S 84 + σz S 84 + µ k ′ H K E e ′ -µ k e + k∈ K s ′ ∈S 87 k σ k s ′ - s∈S 84 k σ k s + µ k ′ e" - µ k ′ e" + µ k e" - e"∈E 87 k ′ \{e ′ } e"∈E 84 k ′ e"∈E 87 k e"∈E 84 k \{e} pair of nodes (v k e , v k ′ e ′ ) linked in H, and by doing a sum for all pairs of nodes (v k e , v k ′ e ′ ) linked in H, it follows that Proposition 2.4.21. Let H be an odd-hole in the conflict graph H K E with |H| ≥ 5. Then, the inequality v k e ∈H x k e ≤ |H| -1 2 , (2.47) is valid for P(G, K, S). Proof. It is trivial given the definition of the odd-hole in the conflict graph H K E . We strengthen the proof as belows. For each pair of nodes (v k e , v k ′ e ′ ) linked in H by an edge, we know that x k e + x k ′ e ′ ≤ 1. Given that H is an odd-hole which means that we have |H| -1 \ {v k,e } ∪ {v k ′ ,e ′ }) does not contain an odd-holeH ′ = H \ {v k,e } ∪ {v k ′ ,e ′ }, b) and there does not exist a node v k ′ ,e ′ / ∈ H in H K E such that v k ′ ,e ′ is linked with all nodes v k,e ∈ H.We distinguish the following cases:a) if for a node v k ′ ,e ′ / ∈ H in H K E , there exists a node v k,e ∈ H such that the induced graph H K E (H \{v k,e }∪{v k ′ ,e ′ }) contains an odd-hole H ′ = (H \{v k,e })∪{v k ′ ,e ′ }.This implies that inequality (2.47) can be dominated using some technics of lifting based on the following two inequalities v k,e ∈H x k e ≤ |H|-1 2 , and Proof. Neccessity. Theorem 2.4.17. Let H be an odd-hole in the conflict graph H K E with |H| ≥ 5. Then, inequality (2.47) is facet defining for P(G, K, S) if and only if a) for each v k ′ ,e ′ / ∈ H, there exists a node v k,e ∈ H such that the induced graph H K E (H |K|}, we let E 88 k i be the set of edges involved in a shortest path between o k i and d k i , b) for demand k, we let E 88 k be the set of edges involved in a shortest path between o k and d k such that edge e is compatible with all the selected edges e ∈ E 88 |H|-1 2 , and each pair of nodes (v k ′ ,e ′ , v k",e" ) ∈ H88 are not linked in the conflict graph H K E , d) for each pair of demand k ′ and edge e ′ with v k ′ ,e ′ ∈ H88 , we consider a new set of edges E 88 k ′ involved in a shortest path between o k ′ and d k ′ if edge e ′ is not compatible with all the selected edges e" ∈ E 88 k ′ , or we add edge e ′ in E 88 k ′ if not, i.e., E 88 k ′ = E 88 k ′ ∪ {e ′ }, e) for each demand k ′ and edge e ′ with v k ′ ,e ′ ∈ H \ H88 , we modify the set of edges E 88 k ′ if E 88 k ′ contains some edges e ′ that are non compatible with the selected edges E 88 k" with v k",e" ∈ H88 . 
This can be done by selecting a new set of edges E 88 k ′ which contains all edges involved in a shortest path between o k ′ and d k ′ such that edge e ′ is compatible with each edge e" and demand k" with v k",e" ∈ H88 , f) for each demand k i ∈ K with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 88 i given by {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s k -w k } ∪ {s k + w ki , ..., s}] I 88 i = [ kj ∈D 88 i if E 88 ki ∩ (E 88 k ∪ {e}) ̸ = ∅ or I 88 i = kj ∈D 88 . Consider a demand k ∈ K and an edge e ∈ E \ (E k 0 ∪ E k 1 ). Let S 88 = (E 88 , S 88 ) be the solution given by a) for each demand k i ∈ K \ {k} with i ∈ {1, ..., k , i.e., e ′ ∈E 88 k ℓ e ′ + ℓ e ≤ lk , c) select a subset of nodes H88 from H with | H88 | = i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, we take into account the possibility of using edge e in the selected path E88 k to route demand k in solution S 88 ). We let S 88 k i = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. S 88 is feasible for the problem. Hence, the corresponding incidence vector (x S 88 , z S 88 )H . Then we derive a solution S 89 = (E 89 , S 89 ) obtained from S 88 by adding edge e ∈ E \ (E k 0 ∪ E k 1 ) for the routing of demand k in solution S 88 which means that E 89 k = E 88 k ∪ {e}. The last slots assigned to the demands K, and paths assigned the set of demands K \ {k} in S 88 remain the same in solution S 89 , i.e.,89 is clearly feasible for the problem. The corresponding incidence vector (x S 89 , z S 89 ) belongs to H . Hence, solutions S 88 and S 89 satisfy equation µx + σz = τ . It follows that µx S 88 + σz S 88 = µx S 89 + σz S 89 = µx S 88 + µ k e + σz S 88 . belongs to F H K E S 89 k = S 88 F H K E k for each k ∈ K, and E 89 k ′ = E 88 k ′ for each k ′ ∈ K \ {k}. S and each pair of nodes (v k ′ ,e ′ , v k",e" ) ∈ H′88 are not linked in the conflict graph H K E , c) for each pair of demand k ′ and edge e ′ with v k ′ ,e ′ ∈ H′88 , we consider a new set of edges E ′88 k ′ involved in a shortest path between o k ′ and d k ′ if edge e ′ is not compatible with all the selected edges e" ∈ E ′88 k ′ , or we add edge e ′ in E ′88 k ′ if not, i.e., E ′88 k ′= E ′88 k ′ ∪ {e ′ }, d)for each demand k ′ and edge e ′ with v k ′ ,e ′ ∈ H \ H′88 , we modify the set of edgesE ′88 k ′ if E ′88k ′ contains some edges e ′ that are non compatible with the selected edges E ′88k" with v k",e" ∈ H′88 . 
This can be done by selecting a new set of edges E ′88 k ′ which contains all edges involved in a shortest path between o k ′ and d k ′ such that edge e ′ is compatible with each edge e" and demand k" with v k",e" ∈ H′88 , e) for each demand k i ∈ K with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I ′88 i given by I ′88 i = [ kj ∈D ′88 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k } ∩ {s ′ + w ki , ..., s}] if E ′88 ki ∩ E ′88 k ̸ = ∅ or I ′88 i = kj ∈D ′88 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, the path assigned to demand k ′ in S 88 from E 88 k ′ to a path E91 k ′ passed through edge e ′ with v k ′ ,e ′ ∈ H, b) and selecting a pair of demand-edge (k, e) from the set of pairs of demand-edge inH 88 such that v k ′ ,e ′ is not linked with any node v k",e" in H 88 \ {v k,e }, c) modifying the path assigned to demand k in S 88 with e ∈ E 88 k and v k,e ∈ H from E 88 k to a path E 91 k without passing through any edge e" ∈ E \ (E k 0 ∪ E k 1 ) such that v k ′ ,e ′ and v k,e" linked in H, d) modifying the last slots assigned to some demands K ⊂ K from S 88 k to S 91 k for each k ∈ K while satisfying non-overlapping constraint. The paths assigned to the demands K \ {k, k ′ } in S 88 remain the same in S 91 (i.e., E 91 k" = E 88 k" for each k" ∈ K \ {k, k ′ }), and also without modifying the last slots assigned to the demands K \ K in S 88 , i.e., S 88 k = S 91 k for each demand k ∈ K \ K. Solution S 91 is feasible for the problem. The corresponding incidence vector (x S 91 , z S 91 ) belongs to F H . Hence, solutions S 88 and S 91 satisfy equation µx + σz = τ . We have so µx S 88 + σz S 88 = µx S 91 + σz S 91 = µx S 88 + σz S 88 + µ k ′ e H K E s∈S 88 k σ k s + µ k ′ e" - µ k ′ e" + µ k e" - e"∈E 91 k ′ \{e ′ } e"∈E 88 k ′ e"∈E 91 k e"∈E 88 k \{e} for all k ∈ K and s ∈ {w k , ..., s} with v k,s / ∈ H. Let prove now that µ k e for all v k,e are equivalent. Consider a node v k ′ ,e ′ in H such that e ′ / ∈ E 88 k ′ . For this, we derive solution S 91 from S 88 by a) modifying ′ -µ k e + k∈ K s ′ ∈S 91 k σ k s ′ - Now let us consider a node v k ′ ,e ′ in C such that e ′ / ∈ E 92 k ′ . For this, we derive solution S ′93 from S 92 by a) modifying the path assigned to demand k ′ in S 92 from E 92 k ′ to a path E ′93 k ′ passed through edge e ′ with v k ′ ,e ′ ∈ C, b) and modifying the path assigned to each demand k with v k,e ∈ H 92 in S 92 with β) + γQ. For this, we show that a) σ k s = 0 for all demand k ∈ K and slot s ∈ {w k , ..., s} as done in the proof of theorem 2.4.17, b) and µ k e = 0 for all demand k ∈ K and edge e ∈ E \ (E k 0 ∪ E k 1 ) with v k,e / ∈ H ∪ C as done in the proof of theorem 2.4.17, c) and µ k e are equivalent for all v k,e ∈ H as done in the proof of theorem 2.4.17, given that the solutions defined in the proof of theorem 2.4.17, their corresponding incidence vector belong to F H K E H,C . Let prove now that µ k ′ e ′ are equivalent for all v k ′ ,e ′ ∈ C. e ∈ E 92 k and v k,e ∈ H from E 92 k to a path E ′93 k without passing through any edge e" ∈ E \ (E k 0 ∪ E k 1 ), ℓ e > lke ′ ∈E k 1 ℓ e ′ , and each pair of edges (e, e ′ ) ∈ C are compatible edges for demand k. Furthermore, it is said minimal cover for demand k if and only if for each e ∈ C we have e ′ ∈C\{e} ℓ e ′ ≤ lke"∈E k 1 l e" . Based on this, we introduce the following inequalities. Proposition 2.4.23. Consider a demand k ∈ K. Let C be a minimal cover related to the tranmission-reach constraint for demand k. 
Then, the inequality x k e ≤ |C| -1, (2.49) e∈C is valid for P(G, K, S). -Reach-Cover Inequalities Inequalities (2.46), (2.47) and (2.48) can be strengthened by defining a minimal cover related to the transmission-reach constraint. Definition 2.4.9. Consider a demand k ∈ K. A cover C for demand k related to the transmission-reach constraint is a subset of edges in E \ (E k 0 ∪ E k 1 ) such that e∈C Proof. It is trivial given that C is minimal cover for demand k this means that there are at most |C| -1 edges from the set of edges in C that can be used to route demand k. Theorem 2.4.19. Consider a demand k in K. Let C be a minimal cover related to the tranmission-reach constraint for demand k. Then, inequality (2.49) is facet defining for the polytope P(G, K, S, C, k) where 2.4.26. Consider an edge e in E. Let C be a minimal cover in K for edge e. Then, the inequality x k e ≤ |C| -1, (2.54) k∈C is valid for P(G, K, S). Proof. If C is minimal cover for edge e ∈ E this means that there are at most |C| -1 demands from the set of demands in C that can use edge e. For this, we propose an exact algorithm in O(|E| * s * |K| * (|K| -1)) which works as follows. For each demand k and slot s ∈ {w k , ..., s} over edge e with x k e > 0, z k s > 0, we select each demand k ′ ∈ K \ {k} with xk ′ e > 0 and min(s+w k ′ -1,s) s"=s-w k +1 1) consists in identifying for each edge e, demand k ∈ K, and slot s ∈ {w k , ..., s}, a demands k ′ ∈ K such that xk e + xk ′ e + zk s + min(s+w k ′ -1,s) s"=s-w k +1 zk ′ s" > 3. a node in H e S having the largest value of node-degree (i.e., |δ(v k ′ ,s ′ )|) in H e S and v k ′ ,s ′ is linked with all the nodes v k,s ∈ C * in H e S and k ′ ∈ K e . Afterwards, we iteratively add each node v k",s" / ∈ C * ∪ N * to the current N * if it is linked in H e S with all the nodes already assigned to C * and N * and k" ∈ K e . At the end, we add inequality (2.42) induced by clique C * ∪ N * to the current LP, i.e., (x k e + z k s v k,s ∈C * identified for each slot s i ∈ S and slot s j with s j ∈ {s i + max k∈K\ Ke w k , ..., min(s, s i + 2 max k∈K\ Ke w k )}. Consider now an interval of contiguous slots I = [s i , s j ] ∈ I e over an edge e, and its associated conflict graph H e I . We then use a greedy algorithm introduced by Nemhauser and Sigismondi[START_REF] Nemhauser | A Strong Cutting Plane/Branch-and-Bound Algorithm for Node Packing[END_REF] to identify a maximal clique in the conflict graph H e I as follows. We first associate a positive weight for each node v k in H e I equals to xk e * s j s ′ =s i +w k -1 zk s ′ . We then set C * = {k} such that k is a demand in K having the largest number of slots w k and weight xk e * s j s ′ =s i +w k -1 zk s ′ . After that, we iteratively add each demand k ′ having xk ′e > 0 ands j s ′ =s i +w k ′ -1zk ′ s ′ > 0 such that its corresponding node v k ′ is linked with all the nodes v k with k already assigned to the current C * . After that, we check if inequality (2.36) induced by the maximal clique C * for the interval I and edge e is violated or not. If so, we add inequality (2.36) induced by the maximal clique C * to the current LP, i.e., s j x k e + z k s ′ ≤ |C k∈C * s ′ =s i +w k -1 * | + 1. One can strengthen this additional inequality by adding inequality (2.37) induced by the maximal clique C * and C * e ⊂ K e \ C * , i.e., k∈C * E S and v k ′ ,s ′ is linked with all the nodes v k,s ∈ C * in H E S . 
Afterwards, we iteratively add each node v k ′ ,s ′ / ∈ C * ∪ N * to the current N * if it is linked in H E S with all the nodes already assigned to C * and N * . At the end, we add inequality (2.43) induced by clique a node in H E S having the largest value of node-degree (i.e., |δ(v k ′ ,s ′ )|) in H the shortest path between v k,s and its copy v ′ k,s in the auxiliary conflict graph H ′E S denoted by p v k,s ,v ′ k,s . After that, we check if the total sum of weight over edges belonging to this path is smaller than 1 2 . If so, odd-hole H * is composed by all the original nodes of nodes belong the computed shortest path p v k,s ,v ′ k,s , i.e., V (p v k,s ,v ′ k,s ) \ {v ′ k,s }. As a result, the following inequality (2.44) induced by odd-hole H * in the conflict graph H E S such that each node v k ′ ,s ′ ∈ C * should have a link with all the nodes v k,s ∈ H * , and the nodes v k",s" ∈ C * \ {v k ′ ,s ′ } in the conflict graph H E S . For this, we first assign a node v k ′ ,s ′ / ∈ H z k s ≤ |H v k,s ∈H * * | -1 2 , should be added to the current LP. Moreover, one can strengthen inequality (2.44) induced by odd-hole H * using the greedy algorithm introduced by Nemhauser and Sigismondi [73] to identify a maximal clique C * * to clique C * (i.e., C * = {v k ′ ,s ′ }) such that v k ′ ,s ′ has the largest value of node-degree (i.e., |δ(v k ′ ,s ′ )|) in H E S and v k ′ ,s ′ is linked with all the nodes v k,s ∈ H * in H E S . After that, we iteratively add each node v k ′ ,s ′ / ∈ H * ∪ C * to the current clique C * if it is linked in H E S with all the nodes already assigned to odd-hole H * and clique C * . We then add inequality (2.45) induced by odd-hole H * and clique to identify a maximal clique in the conflict graph H K E taking into account the fractional solution (x, z) as follows. We first assign a positive weight xk e to each node v k,e in the conflict graph H K E . We then select a node v k,e in the conflict graph H K E having the largest weight xk e , and set C * = {v k,e }. After that, we iteratively add each node v k ′ ,e ′ to the current C * if it is linked with all the nodes v k,e ∈ C * and xk ′ e ′ > 0. At the end, the following inequality (2.46) induced by clique C * x k e ≤ 1, v k,e ∈C * should be added to the current LP if it is violated. Furthermore, one can strengthen the additional inequality (2.46) by identifying a maximal clique N * such that each v k,e ,v ′ k,e . Note that if the total sum of weight over edges belonging to this path is smaller than 1 2 , this means that there exists odd-hole H * composed by all the original nodes of nodes belong the computed shortest path p v k,e ,v ′ k,s , i.e., V (p v k,e ,v ′ Moreover, inequality (2.47) induced by odd-hole H * can be lifted using the greedy algorithm introduced by Nemhauser and Sigismondi [73] by identifying a maximal clique C * in the conflict graph H K E such that each node v k ′ ,e ′ ∈ C * should have a link with all the nodes v k,e ∈ H * , and the nodes v k ′ ",e" ∈ C * \{v k ′ ,e ′ } in the conflict graph H K E . For this, we first assign a node v k ′ ,e ′ / ∈ H * to clique C * (i.e., C * = {v k ′ ,e ′ }) having the largest degree |δ(v k ′ ,e ′ )| in H K E , and v k ′ ,e ′ should be linked with all the nodes v k,e ∈ H . We then add inequality (2.48) induced by odd-hole H * and clique C * k,s ) \ {v ′ k,s }, such that its associated inequality (2.47) is violated by the current fractional solution (x, z) to the current LP. 
As a result, we add following inequality (2.47) induced by odd-hole H * v k,e ∈H * x k e ≤ |H * | -1 2 . x k e + |H v k,e ∈H * H ′K E by duplicating each node v k,e in H K E (i.e., v k,e and v ′ k,e ) such that each two nodes are linked in H ′K E if their original nodes are linked in H K E . We assign to each link (ṽ k,e , ṽk ′ ,e ′ ) in H ′K E a weight 1-x k e -x k ′ e ′ 2 . After that, we compute for each node v k,e in H K E , the shortest path between v k,e and its copy v ′ k,e . We denote this shortest path by p * in H K E . After that, we iteratively add each node v k ′ ,e ′ / ∈ H * ∪ C * to the current clique C * if it is linked in H K E with all the nodes already assigned to H * ∪ C * * | -1 2 solution from (x, z). For this, we first construct several paths R k for each demand k ∈ K based on the fractional values xk e such that for each p ∈ R k xk e ≥ 1, ∀X ⊂ V s.t. |X ∩ {o k , d k }| = 1 and e∈δ(X)∩E(p) e∈E(p) .1. The demands K are randomly generated with |K| ∈ {10, 20, 30, 40, 50, 100, 150}, and s up to 320 slots. Note that we tested 4 instances for each triplet (G, K, s) with |K| ∈ {10, 20, 30, 40, 50, 100, 150, 200, 250, 300}, and s up to 320 slots. Graphs Number Number Max Node Min Node Average Node of Nodes of Links Degree Degree Degree German 17 25 5 2 2.94 Real Topology Nsfnet Spain Conus75 14 30 75 21 56 99 4 6 5 2 2 2 3 3.73 2.64 Coronet100 100 136 5 2 2.72 Europe 28 41 5 2 2.92 France 25 45 10 2 3.6 German50 50 88 5 2 3.52 Realistic Topology Brain161 Giul39 India35 161 39 35 166 86 80 37 8 9 1 3 2 2.06 4.41 4.57 Pioro40 40 89 5 4 4.45 Ta65 65 108 10 1 3.32 Zib54 54 80 10 1 2.96 Table 3 . 3 1: Characteristics of Different Topologies Used for our Experiments. Table 3 . 3 2: Table of Comparison for the B&C Algorithm: Cplex (Without or With Additional Valid Inequalities) Vs Scip (Without or With Additional Valid Inequalities). Own B&C SCIP Nbr Nd Gap Nbr Cuts TT 59 0,00 429,75 0,83 141 0,00 2403,50 3,89 160376,50 1,46 129867,25 8334,95 383058,66 3,70 224642,33 16624,87 251152,50 13,73 305309,75 17074,95 3014,25 0,00 17668,50 617,80 3609 0,00 24782,25 3057,79 1 0,00 95,75 0,15 21586 0,00 24587 192,27 281569,66 3,29 340177 11048,71 119841,66 1,17 163519,33 5673,46 148476,50 5,91 340399,25 17405,09 1 0,00 464,25 40,87 B&C SCIP Nbr Nd Gap TT 1310,25 0,00 14,35 185956 0,27 3895,50 401335,75 1,60 11740,04 315993,66 8,33 16206,36 246146,50 9,62 16675,88 1158,50 0,00 340,10 12759 0,01 7329,06 13462 0,00 113,64 699646 9,51 18000 272065 40,99 18000 225696,67 46,74 18000 247873,25 43,09 18000 56598,50 57,19 18000 Own B&C GRB Nbr Nd Gap Nbr Cuts TT 3460,25 0,00 1866,50 149,13 28093,75 0,27 33413 4962,48 3614,50 0,00 19141,50 882,68 9920 1,82 216792,33 15453,54 6679 17,55 263086,50 18000 1746,50 0,00 200920 15425,53 979,75 73,06 44188,25 18000 4385 0,00 4095,25 216,83 181730 2,58 99411 18000 19702,67 13,20 263905,66 18000 6239,33 24,31 307366 18000 5265,75 47,22 347095,75 18000 2253,25 41,60 326605 18000 B&C GRB Nbr Nd Gap TT 4906,25 0,00 981,48 20752 0,27 4889,90 24321,50 0,39 9279,54 35451,67 5,52 18000 18901,50 6,15 18000 1 0,00 1634,37 64,75 0,00 3184,56 15222,75 0,00 2087,33 51525 12,77 18000 27735 22,41 18000 12631 34,18 18000 8733,50 29,35 18000 7790,50 29,60 18000 Instances Instances |K| |S| 10 15 20 45 30 45 German 40 45 50 55 100 140 150 210 10 15 20 20 30 30 40 35 50 50 100 120 Table 3 . 3 3: Table of Comparison for the B&C Algorithm: Gurobi (Without or With Additional Valid Inequalities) Vs Scip (Without or With Additional Valid Inequalities). Table 3 . 
3 4: Table of Comparison for the B&C Algorithm: Cplex (Without or With Additional Valid Inequalities) Vs Gurobi (Without or With Additional Valid Inequalities). Instances B&C SCIP Own B&C SCIP Graph |K| s Nbr Nd Gap TT Nbr Nd Gap TT 10 15 1310,25 0,00 14,35 59 0,00 0,83 20 45 185956 0,27 3895,5 141 0,00 3,89 30 45 401335,75 1,60 11740,04 160376,50 1,46 8334,95 German 40 50 45 55 315993,66 246146,50 8,33 9,62 16206,36 16675,88 383058,66 251152,50 13,73 17074,95 3,70 16624,87 100 140 1158,50 0,00 340,10 3014,25 0,00 617,80 150 210 12759 0,01 7329,06 3609 0,00 3057,79 200 260 5099,33 0,78 10095,88 3067 0,00 6770,75 10 320 1 0,00 10,37 1 0,00 462,15 20 320 10,50 0,00 19,21 15 0,00 832,22 30 40 66534,75 16,40 18000 11304,25 6,08 5006,31 Coronet100 40 50 40 80 81051 11385,25 3,96 0,01 18000 4496,92 2127 19,75 0,00 0,00 707,54 139,55 100 120 12787,50 13,36 14228,34 8390,25 7,66 10920,70 150 200 4454,50 27,12 13692,63 3165,75 29,13 15527,10 200 280 3579,25 33,35 18000 1 38,97 18000 10 15 13462 0,00 113,64 1 0,00 0,15 20 20 699646 9,51 18000 21586 0,00 192,27 30 30 272065 40,99 18000 281569,66 3,29 11048,71 Nsfnet 40 50 35 50 225696,67 46,74 247873,25 43,09 18000 18000 119841,66 148476,50 1,17 5,91 5673,46 17405,09 100 120 56598,50 57,19 18000 1 0,00 40,87 150 160 12663 58,50 18000 1 0,00 136,02 200 210 7726,50 54,85 18000 710 0,28 9121,79 10 40 1907,25 0,00 87,60 1 0,00 1,80 20 40 9 0,00 4 7 0,00 5,92 30 40 91798 0,00 7821,5 32156,75 0,00 2309,66 India35 40 50 40 80 161514 34 2,42 0,00 17486,08 22,13 191812 69,25 0,18 0,00 17333,53 112,19 100 120 24797 0,32 9137,26 23403,75 0,44 9494,52 150 200 16809 0,21 13739,65 1026 0,00 4101,80 200 280 11197 0,37 13930,35 2027,75 3,69 14516,65 10 40 1 0,00 1,49 1 0,00 1,69 20 40 1,50 0,00 3,44 1 0,00 4,88 30 40 1,50 0,00 5,72 6,25 0,00 10,54 Pioro40 40 50 40 80 83597 14 0,20 0,00 8692,5 15,93 67151 4 0,12 0,00 8711,30 54,39 100 80 21281,75 0,04 9087,52 23785,75 0,04 9916,63 150 160 823,50 0,00 816,89 124,50 0,00 1509,87 200 280 1503,75 0,00 3772,9 423,50 0,00 7424,98 10 40 1 0,00 1,58 1 0,00 1,83 20 40 1,50 0,00 2,92 1 0,00 3,71 30 40 4 0,00 4,50 1 0,00 6,10 Giul39 40 50 40 40 4,50 54420 0,00 0,00 7,17 4376,98 1 52156,75 0,00 0,00 10,15 4361,26 100 40 55472,50 6,88 17781,71 54675,50 8,38 17802,83 150 120 836 0,00 1050,13 11655,50 0,00 9411,30 200 120 10191,25 0,24 13794,32 6518 0,01 9914,02 Table 3 . 3 5: The Impact of Valid Inequalities in the Own B&C SCIP Performance Using Realistic Graphs. Table 4 . 4 1: Influence of the Valid Inequalities: B&P Vs B&C&P. Table 4 . 4 2: Table of Comparison Between B&C, B&P and B&C&P Algorithms: Cplex (Without or With Additional Valid Inequalities) Vs Scip (Without or With Additional Valid Inequalities). Table 4 . 4 3: Table of Comparison Between B&C, B&P and B&C&P Algorithms: Gurobi (Without or With Additional Valid Inequalities) Vs Scip (Without or With Additional Valid Inequalities). Table 4 . 4 4: Table of Comparison Between B&C, B&P and B&C&P Algorithms: Scip (Without or With Additional Valid Inequalities) Vs Scip (Without or With Additional Valid Inequalities). 1}, for all s ∈ S.(5.8) where Ke denotes the set of demands in K passing through edge e (i.e., Ke = {k ∈ K, e ∈ E(p k )}. Equations (5.2) ensure that demand k cannot occupy a slot s as last slot before her slot-width w k . Inequalities(5.3) ensure than more than one interval of contiguous slots can be assigned to each demand k ∈ K. It should normally be an equation form ensuring that exactly one slot s ∈ {w k , . . . 
, s} (one interval of contiguous slots) must be assigned to demand k as last-slot. Here we relax this constraint. Optimizing the spectrum-usage objective function, the equality is a) we select the smallest slot index s k in {w k , ..., s} \ {s, ..., s + w k -1} as last slot for demand k (slot assignment constraint taking into account the possibility of adding slot s in the set of used slots U 105 ), b) for each demand k i ∈ K with i ∈ {1, ..., |K|} \ {k}, we select the smallest slot index s k i in the set of slots I 105 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] \ {s, ..., s + w ki -1} i given by I 105 i = [ kj ∈D 105 i 107 , S 107 ) be the solution given by a) we select the smallest slot index s k in {w k , ..., s} \ {s} as last slot for demand k, b) for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 107 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s -w k } ∪ {s + w ki , ..., s}] if E(p ki ) ∩ E(p k ) ̸ = ∅ or I 107 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, i given by I 107 i = [ kj ∈D 107 i i = kj ∈D 107 i we take into account the possibility of adding slot s in the set of last slots S107 We let S 107 k i = {s k i } be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}.c) we let U 107 be the set of slots used in S such that for each demand k and last slots k ∈ S 107 k and s ′ ∈ {s k -w k + 1, ..., s k }, we have s ′ ∈ U 107 .S 107 is clearly feasible for the problem given that it satisfies all the SA constraints of the compact formulation (5.1)-(5.8). Hence, the corresponding incidence vector (u S 107 , z S 107 ) belongs to P sa (G, K, S). Then consider the solution S 108 = (E 108 , S 108 ) obtained from S 107 by adding slot s as last slot to demand k without modifying the last slots assigned to the demands K \ {k} in S 107 remain the same in solution S 108 i.e., S 107 k ′ = S 108 k ′ for each demand k ′ ∈ K \ {k}, and S 108 Solution S 108 is feasible for the SA problem. The corresponding incidence vector (u S 108 , z S 108 ) belongs to P sa (G, K, S). We then obtain that µu S 107 + σz S 107 = µu S 108 + σz S 108 = µu S 107 + σz S 107 + σ k s + k assigned to demand k in solution S 107 ). k = S 107 k ∪ {s} for demand k. s∈{s,...,s-w k +1}\U 107 µ s. .., s} \ {s}. Consider a slot s ′ in {w k , ..., s} \ {s}. Let S 110 = (U 110 , S 110 ) be the solution given by a) we select the smallest slot index s k in {w k , ..., s} \ {s, s ′ } as last slot for demand k, b) for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 110 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k } ∪ {s ′ + w ki , ..., s}] if E(p ki ) ∩ E(p k ) ̸ = ∅ or I 110 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, i given by I 110 i = [ kj ∈D 110 i i = kj ∈D 110 i we take into account the possibility of adding slot s ′ in the set of last slots S 110 = {s k i } be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}. c) we let U 110 be the set of slots used in S such that for each demand k ′ ∈ K and lastslot s k ′ ∈ S 10 k ′ and s" ∈ {s k ′ -w k ′ + 1, ..., s k ′ }, we have s" ∈ U 110 .S 110 is clearly feasible for the SA problem. Hence, the corresponding incidence vector (u S 110 , z S 110 ) belongs to F k s . 
Then we derive a solution S 112 = (U 112 , S 112 ) obtained from S 110 by adding slot s ′ as last slot to demand k without modifying the last slots assigned to the demands K \ {k} in S 110 Solution S 112 is feasible for the SA problem. The corresponding incidence vector (u S 112 , z S 112 ) belongs to F k s . Hence, solutions S 110 and S 112 satisfy equation µu + σz = τ . We the obtain that µu S 110 + σz S 110 = µu S 112 + σz S 112 = µu S 110 + σz S 110 + σ k s ′ + s∈{s ′ -w k +1,...,s ′ }\U 110 k assigned to demand k in solution S 110 ), Let S 110 k i k remain the same in solution S 112 i.e., S 110 k ′ = S 112 k ′ for each demand k ′ ∈ K \ {k}, and S 112 k = S 110 µ s. ′ ∪ {s ′ } for demand k. 113 i , • and s / ∈ {s k i -w k i + 1, ..., s k i } (slot assignment constraint taking into account the possibility of adding slot s in the set of used slots U 113 ), = {s k i } be the set of last slots assigned to each demand k i with i ∈ {1, ..., |K|}. c) we let U 113 be the of slots used in S such that for each demand k and last slot s ∈ S 113 k and s ′ ∈ {s k -w k + 1, ..., s k }, we have s ′ ∈ U 113 . S 113 is clearly feasible for the SA problem, and its incidence vector (u S 113 , z S 113 ) belongs to F s . After that, we derive a solution S ′113 = (U ′113 , S ′113 ) obtained from S 113 by adding slot s as an used slot in U ′113 without modifying the last slots assigned to the demands K in S 113 which remain the same in solution S ′113 i.e., Solution S ′113 is feasible for the SA problem. The corresponding incidence vector (u S ′113 , z S ′113 ) belongs to F s . Hence, solutions S 113 and S ′113 satisfy equation µu + σz = τ . We then obtain that µu S 113 + σz S 113 = µu S ′113 + σz S ′113 = µu S 113 + σz S 113 + µ s. S 113 k = S ′113 k for each demand k ∈ K. Let S 113 k i |K|}, we select the smallest slot index s k i in the set of slots I 114 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k } ∪ {s ′ + w ki , ..., s}] if E(p ki ) ∩ E(p k ) ̸ = ∅ or I 114 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, i given by I 114 i = [ kj ∈D 114 i i = kj ∈D 114 i } for demand k. Solution S 115 is feasible for the SA problem. The corresponding incidence vector (u S 115 , z S 115 ) belongs to F s . Hence, solutions S 114 and S 115 satisfy equation µu + σz = τ . We the obtain that µu S 114 + σz S 114 = µu S 115 + σz S 115 = µu S 114 + σz S 114 + σ k s ′ + s∈{s ′ -w k +1,...,s ′ }\U 114 114 k ′ = S 115 k ′ for each demand k ′ ∈ K \ {k}, and S 115 k = S 114 µ s. ′ ∪ {s ′ 116 i ,• and s / ∈ {s k i -w k i + 1, ..., s k i } (slot assignment constraint taking into account the possibility of adding slot s in the set of used slots U 116 ).We let S 116 k i = {s k i } be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}. c) a set of slots U 116 is then used in S such that for each demand k ′ ∈ K and last slots ∈ S 116 k ′ and s ′ ∈ {s k ′ -w k ′ + 1, ..., s k ′ }, we have s ′ ∈ U 116 ,S116 is clearly feasible for the SA problem. The corresponding incidence vector (u S 116 , z S 116 ) belongs to F k S . Next, we derive a solution S ′116 = (U ′116 , S ′116 ) obtained from S 116 by adding slot s as an used slot in U ′116 without modifying the last slots assigned to the demands K in S 116 which remain the same in solution ∈ K. Solution S ′116 is feasible for the SA problem, and its incidence vector (u S ′116 , z S ′116 ) belongs to F k S . Hence, solutions S 116 and S ′116 satisfy equation µu + σz = τ . 
We then obtain that µu S 116 + σz S 116 = µu S ′116 + σz S ′116 = µu S 116 + σz S 116 + µ s. S ′116 i.e., S 116 k = S ′116 k for each demand k |K|}, we select the smallest slot index s k i in the set of slots I 117 {w ki , ..., s kj -w kj } ∪{s kj +w ki , ..., s}]∩[{w ki , ..., s ′ -w k ′ } ∪{s ′ +w ki , ..., s}] if E(p ki ) ∩ E(p k ′ ) ̸ = ∅ or I 117 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, i given by I 117 i = [ kj ∈D 117 i i = kj ∈D 117 i we take into account the possibility of adding slot s ′ in the set of last slots S117 k ′ assigned to demand k ′ in solution S 117), We let S 110 k i = {s k i } be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}. d) a set of slots U 117 are then used in S such that for each demand k" ∈ K and last slot s ∈ S 117 k" and s" ∈ {s k" -w k" + 1, ..., s k" }, we have s" ∈ U 117 . S 117 is clearly feasible for the SA problem. The corresponding incidence vector (u S 117 , z S 117 ) belongs to F k S . Then consider the solution S 118 = (U 118 , S 118 ) obtained from S 117 by adding slot s ′ as last slot to demand k ′ without modifying the last slots assigned to the demands K \ {k ′ } in S 117 ∈ K \ {k ′ }, and S 118 k ′ = S 117 k ′ ∪ {s ′ } for demand k ′ . Solution S 118 is feasible for the SA problem. The corresponding incidence vector (u S 118 , z S 118 ) belongs to F k S . Hence, solutions S 117 and S 118 satisfy equation µu + σz = τ . We the obtain that µu S 117 + σz S 117 = µu S 118 + σz S 118 = µu S 117 + σz S 117 + σ k ′ s ′ + s∈{s ′ -w k ′ +1,...,s ′ }\U 117 k remain the same in solution S 118 i.e., S 117 k = S 118 k for each demand k µ s. and s ′ ∈ {w k ′ , ..., s}.Let prove now that σ k s for demand k and slot s in {w k , ..., s} are equivalent. Consider a slot s ′ ∈ {w k , ..., s} such that s ′ / ∈ S 119 k . Let S119 = ( Ũ 119 , S119 ) be the solution given by a) we select the smallest slot index s k from {w k , ..., s} \ {s ′ } as last slot for demand k, b) for each demand k i ∈ K \ {k} with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots Ĩ117 i given by Ĩ117 i = [ kj ∈ D117 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k } ∪ {s ′ + w ki , ..., s}] if E(p ki ) ∩ E(p k ) ̸ = ∅ or Ĩ117 i = kj ∈ D117 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where D117 i we take into account the possibility of adding slot s ′ in the set of last slots S117 k ′ assigned to demand k ′ in solution S117 ). = {s k i } be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}. c) we let Ũ 117 be the set of slots used in S such that for each demand k ′ ∈ K and last slot s k ′ ∈ S117 k ′ and s" ∈ {s k ′ -w k ′ + 1, ..., s k ′ }, we have s" ∈ Ũ 117 . S118 is feasible for the SA problem. Hence, the corresponding incidence vector S . Based on this, we derive a solution S 119 = (U 119 , S 119 ) from S118 by adding slot s ′ as last slot to demand k and removing the last slot Let S117 k i (u S118 , z S118 ) belongs to F k s ∈ S 118 k , i.e., S 119 k This is equivalent to say that two linked nodes v k and v k ′ means that the routing paths of the demands k, k ′ share an edge in G.Based on the conflict graph H sa , we introduce the following inequalities. Proposition 5.4.1. Let I = [s i , s j ] be an interval of contiguous slots in[1, s]. Let K ′ ⊂ K be a minimal cover for interval I = [s i , s j ] such that K ′ defines a clique in H sa . Then, the inequality s j k∈K ′ s=s Definition 5.4.1. Consider the conflict graph H sa defined as follows. 
For each demand k ∈ K, consider a node v k in H sa . Two nodes v k and v k ′ are linked by an edge in H sa if and only if E(p k ) ∩ E(p k ′ ) ̸ = ∅. Let show that σ k s = 0 for all k ∈ K and s ∈ {w k , ..., s} with s / ∈ {s i + w i given by I 122 i = [ kj ∈D 122 k -1, ..., s j } if k ∈ K. Consider a demand k in K and a slot s ′ in {w k , ..., s} with s ′ / ∈ {s i + w k -1, ..., s j } if k ∈ K. Let S 122 = (U 122 , S 122 ) be the solution given by a) for one demand k ′ from K, we select the smallest slot index s k ′ in {w k ′ , ..., s} \ {s i + w k ′ -1, ..., s j } as last slot, b) for each demand k i ∈ K \ {k ′ } with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 122 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ {s i + w ki -1, ..., s j }, |K|}, we select the smallest slot index s k i in the set of slotsI 122 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k } ∪ {s ′ + w ki , ..., s}] if E(p ki ) ∩ E(p k ) ̸ = ∅ or I 122 i given by I 122 i = [ kj ∈D 122 i i = kj ∈D 122 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where D 122 i we take into account the possibility of adding slot s ′ as a last slot in the selected last slots let S 122 k i = {s k i } be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}.d) a set of slots U 122 are then used in S such that for each demand k ′ ∈ K and lastslot s k ′ ∈ S 120 k ′ and s" ∈ {s k ′ -w k ′ + 1, ..., s k ′ }, we have s" ∈ U 120 .S 122 is clearly feasible for the problem. Hence, the corresponding incidence vector (u S 122 , z S 122 ) belongs to F I K . Then we derive a solution S 123 = (U 123 , S 123 ) obtained from S 122 by adding slot s ′ as last slot to demand k without modifying the last slots assigned to the demands K \ {k} in S 122 , i.e., S 122k ′ = S 123 k ′ for each demand k ′ ∈ K \ {k},and S 123 } for demand k. Solution S 123 is feasible for the SA problem. The corresponding incidence vector (u S 123 , z S 123 ) belongs to F I K . Hence, solutions S 122 and S 123 satisfy equation µu + σz = τ . We then obtain that µu S 122 + σz S 122 = µu S 123 + σz S 123 = µu S 122 + σz S 122 + σ k s ′ + We k = S 122 k ∪ {s ′ s∈U 123 \U 122 µ s - µ s. s∈U 122 \U 123 S 122 k to route demand k in solution S 122 ). |K|}, we select the smallest slot index s k i in the set of slots I 124 {w ki , ..., s kj -w kj } ∪{s kj +w ki , ..., s}]∩[{w ki , ..., s ′ -w k ′ } ∪{s ′ +w ki , ..., s}] if E(p ki ) ∩ E(p k ) ̸ = ∅ or I 124 i given by I 124 i = [ kj ∈D 124 i i = kj ∈D 124 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where D 124 i we take into account the possibility of adding slot s ′ as a last slot in the selected last slotsS 124 k ′ to route demand k ′ in solution S 124 ).We let S 124 k i = {s k i } be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}.d) let U 124 be the set of slots used in S such that for each demand k" ∈ K and last slot s k" ∈ S 124 k" and s" ∈ {s k" -w k" + 1, ..., s k" }, we have s" ∈ U 124 .S 124 is clearly feasible for the problem given that it satisfies all the constraints of cut formulation (2.2)-(2.10). Hence, the corresponding incidence vector (u S 124 , z S 124 ) belongs to F I K . 
Then consider the solution S 125 = (E 125 , S 125 ) obtained from S 125 by adding slot s ′ as last slot to demand k such that the last slots assigned to the demands K \ {k, k ′ } in S 125 remain the same in S 125 , i.e., S 125 k" = S 125 k" for each demand k" ∈ K \ {k, k ′ }, and S 125 k ′ = S 125 k ′ ∪ {s ′ } for demand k ′ , and modifying the last slots assigned to demand k by adding a new last slot s and removing the last slot s ∈ S 125 k with s ∈ {s i + w k + 1, ..., s j } and s / ∈ {s i + w k + 1, ..., s j } for demand k with k ∈ K such that S 125 k Based on the definition of the conflict graph H E I , we define a new conflict graph adapted to the SA problem. Definition 5.4.2. Let I = [s i , s j ] be an interval of contiguous slots in [1, s] with s i ≤ s j -1. Consider the conflict graph H ′E I defined as follows. For each demand k ∈ K with w k ≤ |I|, consider a node v k in H ′E I . Two nodes v k and v k ′ are linked by an edge in H ′E I + w k ′ ≥ |I ′ | for each k, k ′ ∈ C, • 2w k ≥ |I ′ | + 1 and w k ≤ |I ′ | for each k ∈ C.This means that inequality (2.39) induced by clique C for the interval I is dominated by inequality (2.39) induced by clique C for the interval I ′ . Hence, inequality (2.39) cannot be facet defining for P sa (G, K, S). c) if there exists a slot s ∈ I such that s ∈ {s ′ -w k + 1, .., s ′ } for each k ∈ C and s ′ ∈ {s i + w k -1, .., s j }, this implies that inequality (2.39) is dominated by the the non-overlapping inequality (5.4). Hence, inequality (2.39) cannot be facet defining for P sa (G, K, S). Then, inequality (2.39) induced by clique C is dominated by another inequality (2.39) induced by clique C Sufficiency. Let F H ′E I C be the face induced by inequality (2.39), that is ′ . Hence, inequality (2.39) cannot be facet defining for P sa (G, K, S). b) if there exists an interval of contiguous slots I ′ in [1, s] such that I ⊂ I ′ with • w k |K|}, we select the smallest slot index s k i in the set of slots I 127 This guarantees that• {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ...s k j } = ∅ for each k j ∈ D 127 i , • and s / ∈ {s k i -w k i + 1, ...,s k i } (slot assignment constraint taking into account the possibility of adding slot s in the set of used slots U 127 ), We let S 127 k i = {s k i } be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}. d) a set of slots U 127 are used in S such that for each demand k ∈ K and last slot s k ∈ S 127 k and s ′ ∈ {s k -w k + 1, ..., s k }, we have s ′ ∈ U 127 . S 127 is feasible for the SA problem. Hence, the corresponding incidence vector (u S 127 , z S 127 ) belongs to F H ′E I C . Then we derive a solution S 128 = (U 128 , S 128 ) obtained from S 127 by adding slot s as an used slot in U 128 without modifying the last slots assigned to the demands K in S 127 which remain the same in solution S 128 i.e., Solution S 128 is feasible for the SA problem. The corresponding incidence vector (u S 128 , z S 128 ) belongs to F H ′E I C . Hence, solutions S 127 and S 128 satisfy equation µu + σz = τ . We then obtain that µu S 127 + σz S 127 = µu S 128 + σz S 128 = µu S 127 + σz S 127 + µ s. |K|}, we select the smallest slot index s k i in the set of slots I 129 S 127 k = S 128 Hence, µ s = 0. In a similar way, we can show that µ s = 0, for all slots s ∈ S. Let show that σ k i given by I 129 i = [ kj ∈D 129 i given by I 127 i = [ kj ∈D 127 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] \ {s, ..., s + w ki -1} where D 127 i = {k j ∈ {k 1 , ..., k i-1 } ∩ C : E(p k i ) ∩ E(p k j ) ̸ = ∅}. 
k for each demand k ∈ K. s = 0 for all k ∈ K and s ∈ {w k , ..., s} with s / ∈ {s i + w k -1, ..., s j } if v k ∈ C. Consider a demand k in K and a slot s ′ in {w k , ..., s} with s ′ / ∈ {s i + w k -1, ..., s j } if k ∈ C. be the solution given by S 129 = (U 129 , S 129 ) be the solution given by a) for one demand k ′ from C, we select the slot s k ′ = s i + w k ′ -1 as last slot, b) for each demand k i ∈ C \ {k ′ } with i ∈ {1, ..., i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ {s i + w ki -1, ..., s j }, |K|}, we select the smallest slot index s k i in the set of slots I 129 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k } ∪ {s ′ + w ki , ..., s}] i given by I 129 i = [ kj ∈D 129 i if E(p ki ) ∩ E(p k ) ̸ = ∅ or I 129 i = kj ∈D 129 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where D 129 i we take into account the possibility of adding slot s ′ as a last slot in the selected last slots We let S 122 k i = {s k i } be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}.d) let U 129 be the slot used in S such that for each demand k ′ ∈ K and last slots k ′ ∈ S 129 k ′ and s" ∈ {s k ′ -w k ′ + 1, ..., s k ′ }, we have s" ∈ U 129 .S 129 is clearly feasible for the problem, and its incidence vector (u S 129 , z S 129 ) belongs to F H ′E I C . Then consider the solution S 130 = (U 130 , S 130 ) obtained from S 129 by adding slot s ′ as last slot to demand k without modifying the last slots assigned to the demands K \ {k} in S 129 , i.e., S 129 k ′ = S 130 k ′ for each demand k ′ ∈ K \ {k}, and S 130 k = S 129 k ∪ {s ′ } for demand k. Solution S 130 is feasible for the SA problem. The corresponding incidence vector (u S 130 , z S 130 ) belongs to F H ′E I C . Hence, solutions S 129 and S 130 satisfy equation µu + σz = τ . We then obtain that µu S 129 + σz S 129 = µu S 130 + σz S 130 = µu S 129 + σz S 129 + σ k s ′ + S 129 k to route demand k in solution S 129 ), µ s - s∈U 130 \U 129 s∈U 129 \U 130 µ s. we select the smallest slot index s k i in the set of slotsI 131 {w ki , ..., s kj -w kj }∪{s kj +w ki , ..., s}]∩[{w ki , ..., s ′ -w k ′ }∪{s ′ +w ki , ..., s}]\{s i , ..., s j } if E(p ki )∩E(p k ′ ) ̸ = ∅ or I 131{w ki , ..., s kj -w kj }∪{s kj +w ki , ..., s}]\{s i , ..., s j } if not, ) for each demand k i ∈ K \ C with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 131 {w ki , ..., s kj -w kj } ∪{s kj +w ki , ..., s}]∩[{w ki , ..., s ′ -w k ′ } ∪{s ′ +w ki , ..., s}] if E(p ki ) ∩ E(p k ′ ) ̸ = ∅ or I 131 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, i given by I 131 i = [ kj ∈D 131 i i = kj ∈D 131 i where D 131 i i given by I 131 i = [ kj ∈D 131 i i = [ kj ∈D 131 i where D 131 i = {k j ∈ {k 1 , ..., k i-1 } ∪ C such that D 131 ki ∩ D 131 kj ̸ = ∅}, c we take into account the possibility of adding slot s ′ as a last slot in the selected last slotsS 131 k ′ to route demand k ′ in solution S 131 ).We let S 131 k i = {s k i } be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}. d) Let U 131 be the set of slots used in S such that for each demand k and last slots k ∈ S 131 k and s" ∈ {s k -w k + 1, ..., s k }, we have s" ∈ U 131 .S 131 is clearly feasible for the problem. Hence, the corresponding incidence vector (u S 131 , z S 131 ) belongs to F H ′E I C . 
Then consider the solution S 132 = (U 132 , S 132 ) from S 131 by adding slot s ′ as last slot to demand k without modifying the last slots assigned to the demands K \ {k, k ′ } in S 131 , i.e., S 131 k" = S 132 k" for each demand k" ∈ K \ {k, k ′ }, and S 132 k ′ = S 131 k ′ ∪ {s ′ } for demand k ′ , and with modifying the last slots assigned to demand k by adding a new last slot s and removing the last slot s ∈ S 131 k with s ∈ {s i +w k +1, ..., s j } and s / ∈ {s i +w k +1, ..., s Solution S 132 is feasible for the SA problem. The corresponding incidence vector (u S 132 , z S 132 ) belongs to F H ′E I C . Hence, solutions S 131 and S 132 satisfy equation µu + σz = τ . We then obtain that µu S 131 + σz S 131 = µu S 132 + σz S 132 = µu S 131 + σz S 131 + σ k ′ s ′ -σ k s + σ k s + µ s" s"∈U 132 \U 131 - µ s" . s"∈U 131 \U 132 j } for demand k ∈ K with v k ∈ C such that S 132 k = (S 131 k \{s})∪{s} such that {s-w k +1, ..., s}∩{s ′ -w k ′ +1, ..., s ′ } = ∅ for each k ′ ∈ K and s ′ ∈ S 132 k ′ with E(p k ) ∩ E(p k ′ ) ̸ = ∅. Then, inequality (2.40) is valid for Q sa (G, K, S). Moreover, it is valid for P(G, K, S) if2w k > |I| for each v k ∈ H.Proof. We use the same proof of proposition(5.4.4). 5.4.4. Let H be an odd-hole in the conflict graph H ′E I with |H| ≥ 5, and 2w k > |I| for each v k ∈ H. Then, inequality (2.40) is facet defining for P sa (G, K, S) if and only if a) for each node v k ′ / ∈ H in H ′E I , there exists a node v k ∈ H such that the induced graph H ′E I((H \ {v k }) ∪ {v k ′ }) does not contain an odd-hole H ′ = (H \ {v k }) ∪ {v k ′ }, b) and there does not exist a node v k ′ / ∈ H in H ′E I such that v k ′ is linked with all nodes v k ∈ H,c) and there does not exist an interval I ′ of contiguous slots with I ⊂ I ′ such that H defines also an odd-hole in the associated conflict graph H E I ′ .We use the same proof presented in the proof of theorem (2.4.10). Sufficiency. Theorem Proof. Neccessity. Let F H ′E I H be the face induced by inequality (2.40), that is Proposition 5.4.4. Let I = [s i , s j ] be an interval of contiguous slots in [1, s] with s i ≤ s j -1, and H be an odd-hole H in the conflict graph H ′E I with |H| ≥ 5. 1 2 1 by αu + βz ≤ λ. Let µu + σz ≤ τ be a valid inequality that is facet definingF of P sa (G, K, S). |H|-12 , b) for each demand k i from H with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 134 Suppose that F H ′E I H ⊂ F = {(u, z) ∈ P sa (G, K, S) : µu + σz = τ }. To prove that F H ′E I H is a facet of P i given by I 134 i = [ kj ∈L 134 i given by I 134 i = kj ∈D 134 sa (G, K, S), we need to show that there exist ρ ∈ R and γ ∈ R k∈K (w k -1) ) such that (µ, σ) = ρ(α, β) + γM . Let first show that µ s = 0 for all s ∈ S. Consider a slot s ∈ S. Let S 134 = (U 134 , S 134 ) be the solution given by a) select a subset of demands H from H with | H| = i {w ki , ..., s kj -w kj }∪{s kj +w ki , ..., s}]∩[{s i +w ki -1, ..., s j }]\{s, ..., s+w ki -1}, where L 134 i = {k j ∈ {k 1 , ..., k i-1 } ∩ H : E(p k i ) ∩ E(p k j ) ̸ = ∅}. 
c) for each demand k i ∈ H \ H with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 134 i {w ki , ..., s kj -w kj }∪{s kj +w ki , ..., s}\[{s i +w ki -1, ..., s j }∪{s, ..., s+w ki -1}], |K|}, we select the smallest slot index s k i in the set of slots I 134 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] \ {s, ..., s + w ki -1} i given by I 134 i = [ kj ∈R 134 i and s / ∈ {s k i -w k i + 1, ..., s k i } (slot assignment constraint taking into account the possibility of adding slot s in the set of used slots U 134 ). We let S 134 k i = {s k i } be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}.e) let U 134 be the set of slots used in S such that for each demand k and last slots k ∈ S 134 k and s ′ ∈ {s k -w k + 1, ...,s k }, we have s ′ ∈ U 134 , and and s / ∈ U 134 (slot assignment constraint taking into account the possibility of adding slot s in the set of used slots U 134 ). S 134 is clearly feasible for the SA problem. Hence, the corresponding incidence vector (u S 134 , z S 134 ) belongs to F H ′E I H . Then we derive a solution S 135 = (U 135 , S 135 ) obtained from S 134 by adding slot s as an used slot in U 135 without modifying the last slots assigned to the demands K in S 134 which remain the same in solution S 135 i.e., S 134 Solution S 135 is feasible for the SA problem. Hence, the corresponding incidence vector (u S 135 , z S 135 ) belongs to F H ′E I H . Hence, solutions S 134 and S 135 satisfy equation µu + σz = τ . We then obtain that µu S 134 + σz S 134 = µu S 135 + σz S 135 = µu S 134 + σz S 134 + µ s. |H|-1 2 , b) for each demand k i from H with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 136 k = S 135 Hence, µ s = 0. In a similar way, we can show that µ s = 0, for all slots s ∈ S. Let show that σ k i given by I 136 i = [ kj ∈L 136 k for each demand k ∈ K. s = 0 for all k ∈ K and s ∈ {w k , ..., s} with s / ∈ {s i + w k -1, ..., s j } if v k ∈ H. Consider a demand k in K and a slot s ′ in {w k , ..., s} with s ′ / ∈ {s i + w k -1, ..., s j } if v k ∈ H. Let S 136 = (U 136 , S 136 ) be the solution given by a) select a subset of demands H from H with | H| = i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{s i + w ki -1, ..., s j }], |K|}, we select the smallest slot index s k i in the set of slotsI 136 ∈ {k 1 , ..., k i-1 } ∩ H : E(p k i ) ∩ E(p k j ) ̸ = ∅}. We let S 136 k i = {s k i } be the set of last slots assigned to demand k i , d) for each demand k i ∈ K \ H with i ∈ {1, ...,|K|}, we select the smallest slot index s k i in the set of slots I 136 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k } ∪ {s ′ + w ki , ..., s}] if E(p ki ) ∩ E(p k ) ̸ = ∅ or I 136 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, i given by I 136 i = [ kj ∈D 136 i i = kj ∈D 136 i i given by I 136 i = kj ∈D 136 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} \ {s i + w ki -1, ..., s j }, where D 136 i = {k j we take into account the possibility of adding slot s ′ as a last slot in the selected last slots S 136 k to route demand k in solution S 136 ). 
We let S 136 k i = {s k i } be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}, e) let U 136 be the set of slots used in S such that for each demand k ′ ∈ K and last slot s" ∈ S 136 k 137 , S 137 ) from S 136 by adding slot s ′ as last slot to demand k without modifying the last slots assigned to the demands K \ {k} in S 136 , i.e., S 136 k ′ = S 137 k ′ for each demand k ′ ∈ K \ {k}, and S 137 } for demand k. Solution S 137 is feasible for the SA problem. The corresponding incidence vector (u S 137 , z S 137 ) belongs to F H ′E I H . Hence, solutions S 136 and S 137 satisfy equation µu + σz = τ . We then obtain that µu S 136 + σz S 136 = µu S 137 + σz S 137 = µu S 136 + σz S 136 + σ k s ′ + k = S 136 k ∪ {s ′ s∈U 137 \U 136 µ s - µ s. s∈U 136 \U 137 ′ and s" ∈ {s k ′ -w k ′ + 1, ..., s k ′ }, we have s" ∈ U 136 . S 136 is clearly feasible for the problem. Hence, the corresponding incidence vector (u S 136 , z S 136 ) belongs to F H ′E I H . After that, we derive a solution S 137 = (U s j }. Let S 138 = (U 138 , S 138 ) be the solution given by a) select a subset of demands H from H with | H| = |H|-1 2 , b) for each demand k i from H with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 138 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ {s i + w ki -1, ..., s j }. i given by I 138 i = [ kj ∈L 138 i |K|}, we select the smallest slot index s k i in the set of slots I 138 i given by I 138 i = kj ∈D 138 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} \ {s i + w ki -1, ..., s j }, |K|}, we select the smallest slot index s k i in the set of slots I 138• {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for eachk j ∈ R 138 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ if E 138 k i ∩ E 138 k ′ ̸ = ∅ (we take into account the possibility of adding slot s ′ as a last slot in the selected last slots S 138 k ′ to route demand k ′ in solution S 138 ). e) a set of slots U 138 are used in S such that for each demand k and last slot s k ∈ S 138 k and s ′ ∈ {s k -w k + 1, ..., s k }, we have s ′ ∈ U 138 . S 138 is clearly feasible for the problem. Hence, the corresponding incidence vector (u S 138 , z S 138 ) belongs to F H ′E I H . Then we derive a solution S 139 from S 138 as belows a) remove the last slot s totally covered by the interval I and which has been selected by a demand k i ∈ {v k 1 , ..., v kq } in solution S 139 (i.e., s ∈ S 139 k i and s′ ∈ {s i + w k i + 1, ..., s j }) such that each pair of nodes (v k ′ , v k j ) are not linked in odd-hole H with j ̸ = i, b) and select a new last slot s′ / ∈ {s i + w k i + 1, ..., s j } for demand k i i.e., S 139 k i = (S 138 k i \ {s}) ∪ {s ′ } such that {s ′ -w k i -1, ..., s′ } ∩ {s -w k + 1, ..., s} = ∅ for each k ∈ K and s ∈ S 139 k with E 139 k ∩ E 139 k i ̸ = ∅, c) and add slot s ′ to the set of last slots S 139 k ′ assigned to demand k ′ in solution S 139 , i.e., S 139 k ′ = S 138 k ′ ∪ {s ′ }. solution S 139 is clearly feasible for the SA problem. The corresponding incidence vector (u S 139 , z S 139 ) belongs to F H ′E I H . Hence, solutions S 138 and S 139 satisfy equation µu + σz = τ . We have so µu S 138 + σz S 138 = µu S 139 + σz S 139 = µu S 138 + σz S 138 + σ k ′ s ′ + σ k i s′ -σ k i s + Since σ k s = 0 for all demand k ∈ K and slot s ∈ {w k , ..., s} with s / ∈ {s i +w k +1, ..., s j } µ s" s"∈U 139 \U 138 - µ s" . 
s"∈U 138 \U 139 i given by I 138 i = [ kj ∈R 138 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k ′ } ∪ {s ′ + w ki , ..., s}] if E(p ki ) ∩ E(p k ′ ) ̸ = ∅ or I 138 i = kj ∈R 138 i {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where R 138 i = {k j ∈ {k 1 , ..., k i-1 } ∪ H : E 138 k i ∩ E 138 k j ̸ = ∅}. Hence, s if s ∈ {1, ..., w k ′ -1}, ρ if v k ′ ∈ H and s ∈ {s i + w k ′ -1, , ..., s j }, 0 otherwise, 5.4.5. Consider a clique C in the conflict graph H ′E S with {s -w k + 1, ..., 1} ∩ {s ′ -w k ′ + 1, ..., s ′ } ̸ = ∅ for each (v k,s , v k ′ ,s ′ ) ∈ C.Then, inequality (2.43) is facet defining for P sa (G, K, S) if and only if 1. C is a maximal clique in the conflict graph H ′E S , 2. and there does not exist an interval of contiguous slots I = [s i , s j ] ⊂ [1, s] with • [ min v k,s ∈C (s -w k + 1), max kj ∈D 145 i {w ki , ..., s kj -w kj } ∪{s kj +w ki , ..., s}]∩[{w ki , ..., s ′ -w k ′ } ∪{s ′ +w ki , ..., s}] if E(p ki ) ∩ E(p k ′ ) ̸ = ∅ or I 145 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not, where D 145 i= {k j ∈ {k 1 , ..., k i-1 } ∪ {k} : E(p k i ) ∩ E(p k j ) ̸ = ∅}. This ensures that • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 145 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ if E(p k i ) ∩ E(p k ′ ) ̸ = ∅ ( we takeinto account the possibility of adding slot s ′ as a last slot in the selected last slotsS 145 k ′ to route demand k ′ in solution S 145 ).We let S 145k i = {s k i }be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}. c) let U 145 be the slots used in S such that for each demand k" ∈ K and last slot s k" ∈ S 145 k" and s" ∈ {s k" -w k" + 1, ..., s k" }, we have s" ∈ U 145 . S 145 is feasible for the SA problem. Hence, the corresponding incidence vector (u S 145 , z S 145 ) belongs to F H ′E S C . After that, we derive a solution S 146 = (E 146 , S 146 ) from S 145 by adding slot s ′ as last slot to demand k ′ without modifying the last slots assigned to the demands K \ {k, k ′ } in S 145 , i.e., S 145 k" = S 146 k" for each demand k" ∈ K \ {k, k ′ }, and S 146 k ′ = S 145 k ′ ∪ {s ′ } for demand k ′ , and with modifying the last slots assigned to demand k by adding a new last slot s and removing the last slot s ∈ S 145 k with s ∈ {s i + w k + 1, ..., s j } and s ∈ {w k , ..., s} for demand k with v k,s / Solution S 146 is feasible for the SA problem. The corresponding incidence vector (u S 146 , z S 146 ) belongs to F H ′E S C . Hence, solutions S 145 and S 146 satisfy equation µu + σz = τ . We then obtain that µu S 145 + σz S 145 = µu S 146 + σz S 146 = µu S 145 + σz S 145 + σ k ′ s ′ -σ k s + σ k s + i = kj ∈D 145 i ∈ C such that S 146 k = (S 145 µ s" s"∈U 146 \U 145 - µ s" . s"∈U 145 \U 146 k \ {s}) ∪ {s}. s if s ∈ {1, ..., w k -1}, ∈ K and s ∈ S. As a result, we have (µ, σ) = ρ(α, β) + γM .5.4.5 Slot-Assignment-Odd-Hole InequalitiesProposition 5.4.6. Let H be an odd-hole in the conflict graph H ′E S with |H| ≥ 5, and {s -w k + 1, ..., 1} ∩ {s ′ -w k ′ + 1, ..., s ′ } ̸ = ∅ for each pair of nodes (v k,s , v k ′ ,s ′ ) linked in H.Then, inequality (2.44) is valid for P sa (G, K, S). Proof. We use the same proof of proposition (2.4.14). Theorem 5.4.6. Let H be an odd-hole in the conflict graph H ′E S with |H| ≥ 5, and {s -w k + 1, ..., 1} ∩ {s ′ -w k ′ + 1, ..., s ′ } ̸ = ∅ for each pair of nodes (v k,s , v k ′ ,s ′ ) linked in H. 
Then, inequality (2.44) is facet defining for P sa (G, K, S) if and only if a) for each node v k ′ ,s ′ / ∈ H in H ′E S , there exists a node v k,s ∈ H such that the induced graph H ′E S ((H \ {v k,s }) ∪ {v k ′ ,s ′ }) does not contain an odd-hole, b) and there does not exist a node v k ′ ,s ′ / ∈ H in H ′E S such that v k ′ ,s ′ is linked with all nodes v k,s ∈ H, c) and there does not exist an interval of contiguous slots I = [s i , s j ] ⊂ [1, s] with • and w k + w k ′ ≥ |I| + 1 for each (v k , v k ′ ) linked in H, • and 2w k ≥ |I| + 1 and w k ≤ |I| for each v k ∈ H. We distinguish the following cases: a) if for a node v k ′ ,s ′ / ∈ H in H ′E S , there exists a node v k,s ∈ H such that the induced graph H ′E S (H \{v k,s }∪{v k ′ ,s ′ }) contains an odd-hole H ′ = (H \{v k,s })∪{v k ′ ,s ′ }. This implies that inequality (2.44) can be dominated using some technics of lifting based on the following two inequalitiesv k,s ∈H z k s ≤ |H|-1 2 , and v k ′ ,s ′ ∈H ′ z k ′ s ′ ≤ |H ′ |-1 2 . b) if there exists a node v k ′ ,s ′ / ∈ H in H ′E S such that v k ′ ,s ′ islinked with all nodes v k,s ∈ H. This implies that inequality (2.44) can be dominated by the following ) if there exists an interval of contiguous slots I = [s i , s j ] ⊂ [1, s] satisfying the conditions of c). Hence, inequality (2.44) is dominated by inequality (2.40).If no one of these cases is verified, inequality (2.44) can never be dominated by another inequality without changing its right-hand side. Otherwise, inequality (2.44) cannot be facet defining for P sa (G, K, S). valid inequality v k,s ∈H ρ z k s + if v k,s ∈ C, |H| -1 2 z k ′ s ′ ≤ |H| -1 2 . 0 otherwise, v k,s ∈H Sufficiency. (s -w k + 1), max v k,s ∈H for each k • [ min Let F H ′E ] ⊂ I, Proof. Neccessity. cS H denote the face induced by inequality (2.40), that is F H ′E for all k ′ ∈ K \ {k} and s ∈ {w k ′ , ..., s} with v k,s ′ / ∈ H.Let prove that σ ks for all v k,s ∈ H are equivalent. Consider a node v k ′ ,s ′ in H. Let S 152 = (U 152 , S 152 ) be the solution given by a) select a subset of nodes H152 fromH with | H152 | = |H|-1 2 , and each pair of nodes (v k,s , v k ′ ,s ′ ) ∈ H152 are not linked in the conflict graph H′E S , b) for each pair of demand k and slot s with v k,s ∈ H152 , we select slot s k = s as last slot for demand k, c) for each demand k i ∈ K \ H152 with i ∈ {1, ..., |K|}, we select the smallest slot index s k i in the set of slots I 152 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s}] ∩ [{w ki , ..., s ′ -w k } ∪ {s ′ + w ki , ..., s}] if E(p ki ) ∩ E(p k ) ̸ = ∅ or I 152 {w ki , ..., s kj -w kj } ∪ {s kj + w ki , ..., s} if not,whereD 152 i = {k j ∈ {k 1 , ..., k i-1 } ∪ H152 : E(p k i ) ∩ E(p k j ) ̸ = ∅}. As a result, • {s k i -w k i + 1, ..., s k i } ∩ {s k j -w k j + 1, ..., s k j } = ∅ for each k j ∈ D 152 i , • and {s k i -w k i + 1, ..., s k i } ∩ {s ′ -w k ′ + 1, ..., s ′ } = ∅ if E(p k i ) ∩ E(p k ′ ) ̸ = ∅ ( we takeinto account the possibility of adding slot s ′ as a last slot in the selected last slotsS 152 k ′ to route demand k ′ in solution S 152 ).We let S 152 k i = {s k i } be the set of last slots assigned to demand k i with i ∈ {1, ..., |K|}.d) let U 152 be the set of slots used in S such that for each demand k" ∈ K and last slot s k" ∈ S 152 k" and s" ∈ {s k" -w k" + 1, ..., s k" }, we have s" ∈ U 152 .S 152 is clearly feasible for the problem. 
Hence, the corresponding incidence vector (u^{S152}, z^{S152}) belongs to F_H^{H'E_S}. Solution S152 is clearly feasible for the problem, and solution S153 is feasible for the SA problem. The corresponding incidence vector (u^{S153}, z^{S153}) belongs to F_H^{H'E_S}. Hence, solutions S152 and S153 satisfy the equation µu + σz = τ. We then obtain that

µu^{S152} + σz^{S152} = µu^{S153} + σz^{S153} = µu^{S152} + σz^{S152} + σ^{k′}_{s′} − σ^{k}_{s} + σ^{k}_{s̃} + Σ_{s″ ∈ U^{153} \ U^{152}} µ_{s″} − Σ_{s″ ∈ U^{152} \ U^{153}} µ_{s″}.

Table 5.1: Table of Comparison Between: B&B SCIP Vs Own B&C SCIP Using Real Graphs.

Topology | |K| | |S| | B&B SCIP: Nbr Nd; Gap; TT | Own B&C SCIP: Nbr Nd; Gap; Nbr Cuts; TT
Pioro40 | 10 | 40 | 1; 0,00; 0,02 | 1; 0,00; 0; 0,03
Pioro40 | 20 | 40 | 1; 0,00; 0,53 | 1; 0,00; 0; 0,1
Pioro40 | 30 | 40 | 1; 0,00; 3,74 | 1; 0,00; 5,5; 0,57
Pioro40 | 40 | 40 | 4; 0,00; 1,32 | 3; 0,00; 12,5; 5,84
Pioro40 | 50 | 80 | 5; 0,00; 2,66 | 1,25; 0,00; 15,75; 13,52
Pioro40 | 100 | 80 | 3; 0,00; 44,31 | 18,5; 0,00; 77,5; 2769,13
Pioro40 | 150 | 160 | 56; 1,95; 9335,82 | 57; 0,00; 48,75; 9169,93
Pioro40 | 200 | 280 | 1; 0,14; 4934,59 | 1,25; 0,00; 28,5; 3023,14
Pioro40 | 250 | 280 | 1; 0,00; 3782,08 | 1; 0,00; 73,5; 2580
Pioro40 | 300 | 320 | 4,25; 0,18; 10548,18 | 3,25; 0,36; 96; 13502,49
India35 | 10 | 80 | 1; 0,00; 0,04 | 1; 0,00; 0; 0,06
India35 | 20 | 40 | 1; 0,00; 0,08 | 1; 0,00; 0; 0,14
India35 | 30 | 40 | 2; 0,00; 3,52 | 1,25; 0,00; 12; 6,11
India35 | 40 | 80 | 4,5; 0,00; 4,43 | 1; 0,00; 0; 3,82
India35 | 50 | 240 | 1; 0,00; 13278,76 | 9,25; 0,00; 7; 67,06
India35 | 100 | 160 | 13,5; 1,55; 7,64 | 10,5; 0,20; 64,50; 10572,62
India35 | 150 | 400 | 8; 4,71; 18000 | 15; 5,18; 89; 18000
India35 | 200 | 280 | 1; 10,58; 13577,39 | 1,25; 4,11; 0,75; 8531,99
India35 | 250 | 280 | 1; 1,45; 18000 | 1; 0,72; 61; 18000
India35 | 300 | 320 | 1; 1,8; 16858,2 | 3; 1,97; 62,25; 18000
Brain161 | 10 | 40 | 1; 0,00; 0,08 | 1; 0,00; 0,50; 0,17
Brain161 | 20 | 40 | 1; 0,00; 0,04 | 1; 0,00; 0; 0,09
Brain161 | 30 | 40 | 1; 0,00; 0,36 | 1; 0,00; 0; 0,47
Brain161 | 40 | 80 | 6,75; 0,00; 11,91 | 5,50; 0,00; 26; 18,12
Brain161 | 50 | 160 | 9; 0,00; 3297,48 | 3; 0,00; 16,25; 25,17
Brain161 | 100 | 120 | 65; 0,00; 25,23 | 6; 0,00; 35; 1009,43
Brain161 | 150 | 320 | 58,5; 0,26; 10284,04 | 43,25; 0,27; 148,25; 12232,16
Brain161 | 200 | 400 | 8; 0,40; 12172,23 | 1,67; 0,36; 45,67; 18000
Brain161 | 250 | 480 | 1; 0,86; 13492,92 | 1,67; 0,33; 52; 18000
Brain161 | 300 | 320 | 1; 1,30; 18000 | 1; 0,32; 11,50; 18000

Table 5.2: Table of Comparison Between: B&B SCIP Vs Own B&C SCIP Using Realistic Graphs.

at Orange Labs, for having kindly examined my thesis work and agreed to take part in the jury. I also thank her for the interesting comments she made and for her insight into the subject. I would like to express all my gratitude to Mr. Mourad Baïou, director of the LIMOS laboratory, and Mr. Farouk Toumani, former director of LIMOS and currently director of ISIMA, for their trust, their availability, their attentiveness and the support from which I benefited throughout the preparation of this thesis. They were a great resource for personal and professional development during this thesis. Of course, reaching these goals would not have been possible without the support of UCA and LIMOS, which, through an ANR research grant and various forms of financial aid during the health crisis, allowed me to devote myself serenely to the preparation of this thesis. It is therefore important to thank all the staff of UCA and LIMOS for their warm welcome. They did everything possible so that my thesis could proceed under the best possible conditions. Thank you for all these years of guidance. I would especially like to thank Mrs. Béatrice Bourdieu, admin-

This work was supported by the French National Research Agency grant ANR-17-CE25-0006, FlexOptim Project.

for all k′ ∈ K and s′ ∈ {1, . . . , w_{k′} − 1}.
return the constrained minimum-cost path p*_k for demand k;

Acknowledgements

and a label p ∈ L_i \ T_i having the smallest value of Σ_{e∈E(p)} c_e; for each e = ij ∈ δ(i) \ E^k_0 such that Σ_{e′∈E(p)} ℓ_{e′} + ℓ_e ≤ l̄_k do: if j ∉ V(p) then

Concluding Remarks

In this chapter, we have focused on a complex variant of the Routing and Spectrum Assignment (RSA) problem, called the Constrained-Routing and Spectrum Assignment (C-RSA). We have first proposed a new integer linear programming formulation, based on the so-called cut formulation, for the C-RSA. We have investigated the facial structure of the associated polytope by showing that some basic inequalities are facet defining under certain conditions. We have further identified several families of valid inequalities to obtain tighter LP bounds. Moreover, we have studied the facial structure of these valid inequalities and have shown that they are facet defining for the polytope under certain necessary and sufficient conditions. We have also introduced some symmetry-breaking inequalities to handle the so-called equivalent sub-problems.

Problem

In this chapter, we first introduce an extended integer linear programming formulation based on the so-called path formulation. All the valid inequalities presented in Chapter 2 remain valid for the path formulation. Using this, we derive a Branch-and-Cut-and-Price algorithm to solve the C-RSA problem. In this section, we describe the framework of this algorithm. First, we give an overview of the column generation algorithm. Then, we discuss the pricing problem (a sketch of the corresponding label-setting routine is given below). We further present a primal heuristic used to boost the performance of the algorithm. We give at the end some computational results and a comparative study between the Branch-and-Cut and Branch-and-Cut-and-Price algorithms. We close the chapter with some concluding remarks.

Path Formulation

Let P_k denote the set of all feasible (o_k, d_k)-paths in G, i.e., such that for each demand k ∈ K we have Σ_{e∈E(p_k)} ℓ_e ≤ l̄_k for all p_k ∈ P_k. For each k ∈ K, p ∈ P_k and s ∈ S, we consider a variable y^k_{p,s} which takes the value 1 if slot s is the last slot allocated along path p for the routing of demand k, and 0 if not.

Result: optimal path p* for demand k and slot s; update the set of labels.

Adding the valid inequalities also pays off in terms of CPU time. Furthermore, they enable reducing the average number of nodes in the B&C&P tree, and also the average CPU time, for several instances. We also notice that several instances have been solved to optimality at the root of the B&C&P tree (i.e., Nb Nd = 1) while requiring a large number of branching nodes when using the B&P algorithm. On the other hand, when optimality is not proven, adding valid inequalities decreases the gap for several instances. However, there exist a few, very rare, instances in which adding valid inequalities has no impact. Moreover, some instances are still difficult to solve with both the B&P and B&C&P algorithms.

Branch-and-Cut-and-Price using SCIP: we denote by B&C CPX the Branch-and-Cut algorithm when using Cplex while benefiting from its automatic cut generation and without our additional valid inequalities, and by Own B&C CPX when using Cplex with our additional valid inequalities and disabling its proper cut generation. On the other hand, the second series of computational results, shown in Table 4.3, presents the results found for the Branch-and-Cut algorithm using Gurobi (without or with additional valid inequalities) compared with those of Branch-and-Price and Branch-and-Cut-and-Price using SCIP. We denote by B&C GRB the Branch-and-Cut algorithm when using Gurobi while benefiting from its automatic cut generation and without our additional valid inequalities, and by Own B&C GRB when using Gurobi with our additional valid inequalities and disabling its proper cut generation.
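To fix ideas, the following is a minimal Python sketch of the label-setting dynamic program behind the pricing fragments above: labels carrying (cost, length) pairs are extended along arcs, pruned by the transmission-reach bound l̄_k and by Pareto dominance, and expanded in nondecreasing cost order, so the first label reaching the destination is optimal. The adjacency-dict representation and all function and argument names are illustrative assumptions, not the thesis's exact implementation; in the pricing problem, the arc costs c_e would be the reduced costs coming from the column-generation duals.

```python
import heapq

def constrained_shortest_path(graph, source, target, max_len):
    """Min-cost (o_k, d_k)-path with total length <= max_len (the bound l_bar_k).

    graph: dict node -> list of (neighbor, cost_c_e, length_l_e) triples.
    Returns (cost, path) or None if no feasible path exists.
    Assumes node identifiers are mutually comparable (for heap tie-breaking).
    """
    # A label is (cost, length, node, path); labels pop in nondecreasing cost.
    heap = [(0.0, 0.0, source, (source,))]
    # best[node]: Pareto-undominated (cost, length) pairs seen at that node.
    best = {source: [(0.0, 0.0)]}
    while heap:
        cost, length, node, path = heapq.heappop(heap)
        if node == target:
            return cost, list(path)          # first feasible label at target is optimal
        for nxt, c_e, l_e in graph.get(node, []):
            if nxt in path:                   # keep paths elementary
                continue
            new_len = length + l_e
            if new_len > max_len:             # resource (length) constraint
                continue
            new_cost = cost + c_e
            labels = best.setdefault(nxt, [])
            # discard the new label if an existing one dominates it...
            if any(c <= new_cost and l <= new_len for c, l in labels):
                continue
            # ...and drop stored labels the new one dominates.
            labels[:] = [(c, l) for c, l in labels
                         if not (new_cost <= c and new_len <= l)]
            labels.append((new_cost, new_len))
            heapq.heappush(heap, (new_cost, new_len, nxt, path + (nxt,)))
    return None
```

With integer edge lengths, the number of undominated labels per node is bounded by l̄_k, which is consistent with the pseudo-polynomial behaviour of the dynamic program discussed in this chapter.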
Associated Polytope

Let P_sa(G, K, S) be the polytope defined as the convex hull of the solutions of formulation (5.1)-(5.8). Here we study the facial structure of P_sa(G, K, S). A solution of the SA problem, expressed in the variables (u, z), is given by two sets, S_k for each demand k ∈ K and U for the spectrum usage of S, where

a) S_k denotes the set of indices of the last slots selected for demand k, and
b) U denotes the set of slots allocated over the spectrum S, such that for each demand k ∈ K and each last slot s ∈ S_k, every slot s′ ∈ {s − w_k + 1, . . . , s} belongs to U.

We suppose that the number of slots s̄ is largely sufficient to route all the demands, so as to avoid the existence of slots s ∈ S with u_s = 1 in every feasible solution S of the SA problem. This means that there does not exist a slot s ∈ S such that u_s = 1 in every feasible solution.

Dimension

Let M denote the matrix associated with the equations (5.2). The matrix M is of full rank, given that the demands are independent and that, for each demand k ∈ K, the slots in S are independent. As a result, rank(M) = Σ_{k∈K}(w_k − 1). Let us denote by r′ the rank of the matrix M.

Proposition 5.3.1. The equation system (5.2) defines a minimal equation system for P_sa(G, K, S).

Proof. To prove that σz + µu = λ is a linear combination of the equations (5.2), it suffices to show that for each demand k ∈ K there exists γ^k ∈ R^{w_k − 1} such that (µ, σ) = γM. Let u^S and z^S denote the incidence vector of a solution S of the SA problem. Let us first show that µ_s = 0 for all s ∈ S. Consider a slot s ∈ S, and a solution S105 = (U105, S105) given by

Proposition 5.5.3. We ensure that, for all slots s ∈ {1, . . . , s̄ − 1},

Σ_{k∈K} Σ_{s′=s+1}^{min(s+w_k, s̄)} z^k_{s′} ≤ Σ_{k∈K} Σ_{s′=s}^{min(s+w_k−1, s̄)} z^k_{s′},   (5.16)

which means that the number of intervals of contiguous slots allocated which cover slot s + 1 (the cardinality of the slot usage) cannot be greater than the number of channels allocated which cover slot s. A similar idea was proposed by Méndez-Díaz et al. [69], [70] to break the symmetry of the vertex coloring problem. Our inequalities and those of Méndez-Díaz et al. [69], [70] differ in their right- and left-hand sides.

Proposition 5.5.4. Due to inequality (5.14), we ensure that, for all k ∈ K, s_0 ∈ {1, . . . , s̄ − 1} and s ∈ {s_0, . . . , s̄},

z^k_s ≤ u_{s_0},   (5.17)

which means that, for a slot s_0 ∈ {1, . . . , s̄ − 1}, a demand k can allocate a slot in the sub-spectrum {s_0, . . . , s̄} only if slot s_0 is used. A similar idea was proposed by Méndez-Díaz et al. [70] for the vertex coloring problem. Inequalities (5.17) and those of Méndez-Díaz et al. [70] differ in their left-hand sides.

Lower Bounds

Here we propose some lower bounds derived from the conflict graph H_sa. They can be seen as valid inequalities for the polytope P_sa(G, K, S).

Proposition 5.6.1. The inequality

Σ_{s∈S} u_s ≥ Σ_{k∈K_e} w_k, for all e ∈ E,   (5.18)

is valid for P_sa(G, K, S).

Proof. Inequality (5.18) ensures that the number of slots used in the spectrum S is at least the flow over each of the edges (the flow of an edge e is equal to the number of slots that should be used over edge e). Inequality (5.18) can be generalized as follows using the conflict graph H_sa.

Proposition 5.6.2. Let C be a clique in H_sa.
Then, the inequality

Σ_{s∈S} u_s ≥ Σ_{k∈C} w_k

is valid for P_sa(G, K, S).

Proof. It is trivial given the definition of the clique C in the conflict graph H_sa: we know in advance that the demands in C pairwise share an edge in E, which means that they cannot share a slot in S. Hence, the number of allocated slots Σ_{s∈S} u_s is at least equal to the number of slots requested by the demands in C.

Upper Bounds

Let us introduce the following weighted conflict graph, in which a positive integer called a weight is assigned to each node.

Definition 5.7.1. Consider the conflict graph H^r_w defined as follows. For each demand k ∈ K, consider a node v_k in H^r_w. Two nodes v_k and v_{k′} are linked by an edge in H^r_w if and only if their routing paths share an edge. Each node v_k is associated with a positive weight equal to the number of slots w_k requested by demand k.

Definition 5.7.2. Let C be a clique in H^r_w. It is said to be a maximum-weight clique in H^r_w if the total weight of the nodes in C (i.e., Σ_{v_k∈C} w_k) attains the maximum total weight over all cliques in H^r_w.

Based on these definitions, we introduce the following inequality and show that computing the upper bound for the SA is equivalent to solving the Maximum Weight Clique problem (MWC), which is well known to be an NP-hard problem [Balas].

Proposition 5.7.1. Let C be the maximum-weight clique in H^r_w. Then, the upper bound is defined as follows:

Σ_{s∈S} u_s ≤ Σ_{v_k∈C} w_k.

Proof. It is trivial given the definition of the maximum-weight clique C in the conflict graph H^r_w: the maximum number of allocated slots Σ_{s∈S} u_s is at most equal to the number of slots requested by the demands in C.

end; return the best optimal solution (u*, z*) for the SA;

Based on this sequence of demands, our greedy algorithm selects a slot s for each demand k′ ∈ L with z̄^{k′}_s ≠ 0, while respecting the non-overlapping constraint with the set of demands that precede demand k′ in the list L (i.e., the demands 1, 2, . . . , k′ − 1). However, if no such slot s exists for demand k′, we then select a slot s ∈ {w_{k′}, . . . , s̄} for demand k′ ∈ L with z̄^{k′}_s = 0, again while respecting the non-overlapping constraint with the set of demands that precede demand k′ in the list L (see the sketch below). The complexity of this algorithm can be bounded by O(|K| · |S| · log(|K|)). Afterwards, we compute the total number of slots of S used by the set of demands K in the final solution S given by the greedy algorithm (i.e., Σ_{s∈S} u_s). Our local search algorithm generates a new sequence by permuting some demands in the last sequence of demands if the value of the solution given by the greedy algorithm is smaller than the value of the best solution found up to the current iteration. Otherwise, we stop the algorithm and output the best solution found during our primal heuristic, induced by the best sequence of demands, i.e., the one with the smallest total number of used slots of S compared with the other generated sequences.

Computational Study

Implementation Features

We use C++ to implement the B&B and B&C algorithms under Linux, using the "Solving Constraint Integer Programs" framework (SCIP 6.0.2) with Cplex 12.9 as the LP solver. They have been tested on the LIMOS high-performance server, with a memory size limited to 64 GB, while benefiting from parallelism by activating 8 threads, and with a CPU time limited to 5 hours (18000 s). We use the same graphs presented in Table 3.1 and the same instances used in Section 3.2.2.
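The first-fit pass of the greedy heuristic described above can be sketched as follows. This is a minimal Python sketch under stated assumptions: the demand ordering L is supplied by the caller, the conflict graph is given as an adjacency dict, and the LP-guided preference for slots with z̄^k_s ≠ 0 and the permutation-based local search are omitted for brevity. The names are illustrative, not taken from the thesis's C++ code.

```python
def greedy_slot_assignment(demands, conflicts, s_bar):
    """First-fit greedy for the SA problem.

    demands: list of (demand_id, w_k) pairs, scanned in the order of the
             current sequence L.
    conflicts: dict demand_id -> iterable of demand_ids whose routing paths
               share an edge (adjacency in the conflict graph H_sa).
    s_bar: number of available slots.
    Returns (last_slot, nb_used) or None if some demand cannot be placed;
    nb_used is the spectrum-usage objective sum_s u_s of this solution.
    """
    placed = {}                                  # demand_id -> (last slot, width)
    for k, w in demands:
        chosen = None
        for s in range(w, s_bar + 1):            # smallest feasible last slot
            interval = set(range(s - w + 1, s + 1))
            ok = True
            for j in conflicts.get(k, ()):       # only conflicting demands matter
                if j in placed:
                    sj, wj = placed[j]
                    if interval & set(range(sj - wj + 1, sj + 1)):
                        ok = False
                        break
            if ok:
                chosen = s
                break
        if chosen is None:
            return None                          # infeasible under this ordering
        placed[k] = (chosen, w)
    used = {t for s, w in placed.values() for t in range(s - w + 1, s + 1)}
    return {k: s for k, (s, w) in placed.items()}, len(used)
```

The nested scans make this sketch quadratic in the worst case; the O(|K| · |S| · log(|K|)) bound stated above relies on sorted interval bookkeeping that is omitted here.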
Computational Results

Preliminary results show that introducing some families of valid inequalities allows solving several instances to optimality. Moreover, they enable reducing the average number of nodes in the B&C tree, and also the average CPU time, for several instances. On the other hand, the results show that the odd-hole inequalities

Conclusion

In this thesis, we have studied the Constrained-Routing and Spectrum Assignment (C-RSA) problem related to the dimensioning and design of Spectrally Flexible Optical Networks (SFONs). It is well known to be NP-hard. The main aim of this thesis was to provide a deep polyhedral investigation and to design a cutting-plane method for the problem able to handle large-scale instances. First, we have proposed an integer linear programming formulation, namely the cut formulation. We have investigated the related polytope defined by the convex hull of all its solutions. Moreover, we have identified several classes of valid inequalities for the polytope and studied their facial structure. We have further discussed their separation problems. We have also proposed a primal heuristic to obtain tighter primal bounds and enhance the resolution of the problem. These results are used to devise a Branch-and-Cut (B&C) algorithm for the C-RSA problem, along with some computational results presented using two types of instances: random and

In the second part of the thesis, we have discussed an extended formulation based on the so-called path formulation. It can be seen as a reformulation of the cut formulation using the so-called path variables. We have developed a column generation algorithm to solve its linear relaxation. We have shown that the pricing problem is equivalent to the resource-constrained shortest path problem, which is well known to be NP-hard. For this, we have developed a dynamic-programming algorithm that enables solving the pricing problem in pseudo-polynomial time. Using this, we have devised Branch-and-Price and Branch-and-Cut-and-Price algorithms. The results show that the Branch-and-Cut-and-Price performs very well compared with the Branch-and-Price. Hence, the significant impact and the power of the introduced
04114547
en
[ "info.info-ni" ]
2024/03/04 16:41:24
2023
https://inria.hal.science/hal-04114547/file/VTC__Mitigating_usrs_identification_errors_in_ressource_optimization_for_GFRA-5.pdf
Alix Jeannerot (email: [email protected]), Malcolm Egan (email: [email protected]), Lélio Chetot (email: [email protected]), Jean-Marie Gorce (email: [email protected])

Mitigating User Identification Errors in Resource Optimization for Grant-Free Random Access

Keywords: random access, resource allocation, user identification, stochastic gradient descent

In grant-free random access, a key question is how devices should utilize resources without coordination. One standard solution to this problem is the class of strategies in which devices randomly select time-slots based on an optimized stochastic allocation rule. However, the optimization of this allocation rule requires accurate knowledge of which devices have been active in previous frames. As user identification algorithms are subject to errors, the expected throughput of the optimized allocation can be highly suboptimal. In this paper, we propose algorithms for the optimization of device time-slot allocations that mitigate the impact of user identification errors. We show that when the activity distribution with and without errors is known, our algorithm converges with probability one to a stationary point. When the activity distributions are not available, we introduce new theoretically-motivated heuristics which significantly improve the expected throughput over existing algorithms and approach the performance obtained when errors are not present.

I. INTRODUCTION

Resource allocation is a central problem for wireless access networks, particularly for the massive access systems critical for 6G [Lee et al.], where a large number of devices aim to transmit data to an access point. In 5G-NR and NB-IoT, frames are decomposed into time-frequency resources. A common problem is therefore to determine which devices should be allocated to a given time-slot within a frame. In order to avoid high levels of control signaling in massive access systems, it is desirable to provide devices with a time-slot selection policy [Paolini et al.], [Kalør et al.], [Zheng et al., "Stochastic resource optimization"], [Zheng et al., "Stochastic resource allocation"]. This means that when a device has data to transmit, limited control signaling is required in order to determine which resources the device should use. The selection of a policy, often a variant of ALOHA (e.g., [Paolini et al.], [Wieselthier et al.]), is typically based on a criterion such as the expected system throughput, outage probability, or expected sum-rate. In event-driven IoT systems, devices (e.g., sensors) only require data transmission in irregular frames. The expected throughput therefore depends on the activity distribution, which governs the probability that a given device will be active in a frame and can vary from device to device. In general, the activity distribution may not be perfectly known, due to imperfect estimation of active devices within each frame.
Indeed, the estimation of active devices, also known as user identification, is currently under active investigation [Ahn et al.], [Ke et al.]. In this paper, we consider the impact of imperfect estimation of active devices on resource allocation based on the expected throughput. Our focus is on the problem of designing strategies for allocating a single time slot within a frame to an active device [Kalør et al.], [Zheng et al., "Stochastic resource optimization"], [Zheng et al., "Stochastic resource allocation"], relevant for NB-IoT. In this setting, a key challenge is to account for uncertainties in user activity probabilities, which are ignored in most existing work [Paolini et al.], [Kalør et al.], [Zheng et al., "Stochastic resource optimization"], [Zheng et al., "Stochastic resource allocation"] on resource allocation and coding for random access. Our main contributions are as follows:

• We demonstrate that the stochastic resource optimization algorithms recently proposed in [Zheng et al., "Stochastic resource optimization"], [Zheng et al., "Stochastic resource allocation"] can be highly susceptible to user identification errors, leading to suboptimal performance. This is significant as, in the absence of user identification errors, these algorithms are optimal, in contrast with other heuristic approaches [Kalør et al.], and do not assume perfect knowledge of activity probabilities as in [Zheng et al., "Stochastic resource optimization"], [Zheng et al., "Stochastic resource allocation"].
• To mitigate the impact of the bias arising from user identification errors, we use importance sampling in a manner analogous to sample selection bias correction [Zadrozny] and bias reduction in private synthetic data [Ghalebikesabi et al.]. We prove that, by weighting the gradient estimates in the algorithms in [Zheng et al., "Stochastic resource optimization"], [Zheng et al., "Stochastic resource allocation"], the resource allocation converges to a stationary point with probability one.
• In practice, the weight for the gradient estimates requires the evaluation of the true activity distribution and the imperfect activity distribution induced by erroneous user identification. As these distributions may be difficult to obtain, we propose two heuristic weights that are readily available in practical systems, leading to new stochastic resource optimization algorithms.

• The proposed algorithms are validated via a numerical study. We show that user identification errors can significantly decrease the expected throughput. Realistic scenarios where a low-complexity message passing algorithm [START_REF] Chetot | Joint identification and channel estimation for fault detection in industrial IoT with correlated sensors[END_REF] is utilized to estimate active devices are also presented. In certain scenarios it is even possible to approach the performance achieved without the presence of user identification errors.

II. SYSTEM MODEL

Consider a time-slotted network consisting of an access point (AP) and N devices, each equipped with a single antenna and utilizing a common frequency band. Transmissions occur in frames of a fixed duration, where each device n is active with probability $p_n$ in each frame. The activity of each device is denoted by $X_n \sim \mathrm{Ber}(p_n)$ and the joint activity probability mass function by $p_X$. We assume that the activities of the devices are mutually independent. As such,

$p_X(x) = \prod_{n=1}^{N} p_n^{x_n} (1 - p_n)^{1 - x_n}, \quad x \in \{0,1\}^N.$ (1)

In frame t, the activity of all devices is denoted by $X^t = [X_1^t, \ldots, X_N^t]^T$. The activity vector $X^t$ is also assumed to be mutually independent of $X^{t'}$, $t' \neq t$.

A. Transmission Protocol

The transmission protocol is detailed in Alg. 1. Each frame consists of a preamble and K data slots. The resource allocation is communicated to devices every f frames to limit downlink usage.

Sync and Preamble: After receiving the sync signal in Step 1, each device active in the current frame transmits a pilot sequence of length L during the preamble phase in Step 2, denoted by $q_n \in \mathbb{C}^L$. At the end of the preamble, the AP attempts to identify active devices, as detailed in Step 3. In particular, the received signal $y^t \in \mathbb{C}^L$ during the preamble phase is given by

$y^t = \sum_{n=1}^{N} X_n^t h_n^t q_n + w_0^t,$ (2)

where $h_n^t \in \mathbb{C}$ is the fading coefficient of the n-th device and $w_0^t \in \mathbb{C}^L$ is circular complex Gaussian noise with variance $\tau_w^2$. An efficient method to estimate the set of active devices in Step 3 is provided by approximate message passing (AMP) algorithms within the hybrid-Gaussian AMP (HGAMP) family [START_REF] Chetot | Joint identification and channel estimation for fault detection in industrial IoT with correlated sensors[END_REF], [START_REF] Zou | Message passing based joint channel and user activity estimation for uplink grant-free massive MIMO systems with low-precision ADCs[END_REF]. Nevertheless, identification of active devices is not perfect. Denoting the estimated activity vector at time t by $\hat{X}^t$, the probability of observing the vector x at the output of the HGAMP algorithm is denoted by $\Pr(\hat{X}^t = x)$.

Data Transmission: Each active device transmits in exactly one of the K data slots. In Step 5, active devices select a slot randomly, governed by an allocation matrix $A^t$ with $A_{ij}^t$, $i \in \{1, \ldots, N\}$, $j \in \{1, \ldots, K\}$ corresponding to the probability of device i selecting slot j, with $\sum_k A_{nk}^t = 1$, $n = 1, \ldots, N$ (as each active user transmits exactly once in a frame).
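To fix ideas, one frame of the model above can be simulated in a few lines. The sketch below draws the activities of Eq. (1), Rayleigh fading coefficients and the received preamble signal of Eq. (2); the pilot matrix normalization, the noise level and all variable names are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, L = 10, 16                       # devices and pilot length (illustrative)
p = rng.uniform(0.1, 0.9, size=N)   # per-device activity probabilities (assumed)
# complex Gaussian pilots q_n stacked as columns; normalization is an assumption
Q = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2 * L)
tau_w = 0.1                         # noise standard deviation (assumed)

def simulate_frame(p, Q, tau_w, rng):
    """One frame: activities X_n ~ Ber(p_n) (Eq. (1)) and preamble signal y (Eq. (2))."""
    N, L = Q.shape[1], Q.shape[0]
    x = (rng.random(N) < p).astype(int)                                     # activities
    h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2) # fading h_n
    w = tau_w * (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
    y = Q @ (x * h) + w              # y = sum_n X_n h_n q_n + w_0
    return x, y

x, y = simulate_frame(p, Q, tau_w, rng)
```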
B. Optimization Objective

The main focus of this paper is the optimization of the slot selection matrix A. In the following sections, we drop the superscript t when the frame index is not relevant. This problem has recently been considered in [START_REF] Kalør | Random access schemes in wireless systems with correlated user activity[END_REF], [START_REF] Zheng | Stochastic resource optimization of random access for transmitters with correlated activation[END_REF], [START_REF] Zheng | Stochastic resource allocation for outage minimization in random access with correlated activation[END_REF] when the activity vector is perfectly known in each frame. We consider optimization of the expected throughput, defined by [START_REF] Zheng | Stochastic resource optimization of random access for transmitters with correlated activation[END_REF]:

$T(A) = \mathbb{E}_X[T_X(A)],$ (3)

where $T_X(A)$ is defined as

$T_n(A; X) = \sum_{k=1}^{K} X_n A_{nk} \prod_{\substack{m=1 \\ m \neq n}}^{N} (1 - X_m A_{mk}), \qquad T_X(A) = \sum_{n=1}^{N} T_n(A; X).$

The objective in (3) can be interpreted as the expected number of slots in which devices can transmit collision-free. The resource allocation problem is then to find an allocation matrix A that maximizes T(A). In particular, we seek a solution $A^*$ such that

$A^* = \arg\max_{A \in \mathbb{R}_+^{N \times K};\ \sum_k A_{n,k} = 1} T(A).$

Note that the objective T(A) is non-convex in A. We also highlight that the methods presented in the remainder of this paper are not limited to the maximization of the expected throughput T(A) and are applicable to other objectives; e.g., proportional-fair, expected sum-rate and outage probability [START_REF] Zheng | Stochastic resource optimization of random access for transmitters with correlated activation[END_REF], [START_REF] Zheng | Stochastic resource allocation for outage minimization in random access with correlated activation[END_REF].

III. THE IMPACT OF IMPERFECT USER IDENTIFICATION

As observed in [START_REF] Zheng | Stochastic resource optimization of random access for transmitters with correlated activation[END_REF], [START_REF] Zheng | Stochastic resource allocation for outage minimization in random access with correlated activation[END_REF], the problem in (3) can be viewed as a stochastic optimization problem. Under the assumption that the activity vectors $X^1, X^2, \ldots$ are perfectly known to the AP, a natural solution is stochastic gradient ascent (SGA), which is detailed in Alg. 2. Here,

$\mathcal{H} = \{A \in \mathbb{R}_+^{N \times K} : \sum_{k=1}^{K} A_{n,k} = 1,\ n = 1, \ldots, N\}$ (4)

and $\Pi_{\mathcal{H}}[\cdot]$ denotes the projection to the closest point in $\mathcal{H}$ with respect to the Euclidean norm. The matrix of the gradient estimate g(A; X) consists of elements $g(A; X)_{ql}$, $q \in \{1, \ldots, N\}$, $l \in \{1, \ldots, K\}$ defined by [START_REF] Zheng | Stochastic resource optimization of random access for transmitters with correlated activation[END_REF]:

$g(A; X)_{ql} = X_q \prod_{\substack{m=1 \\ m \neq q}}^{N} (1 - X_m A_{ml}) - \sum_{\substack{n=1 \\ n \neq q}}^{N} X_q X_n A_{nl} \prod_{\substack{m=1 \\ m \neq n,\, m \neq q}}^{N} (1 - X_m A_{ml}).$ (5)

Note that in this case, g(A; X) is an unbiased estimate of $\nabla T(A) = \mathbb{E}_X[\nabla_A T_X(A)]$ due to the absence of user identification errors.

Algorithm 2: Stochastic optimization algorithm when user identification is error-free.
1 Choose initial allocation matrix $A^1 \in \mathcal{H}$ and step-size sequence $\{\alpha^t\}$ with $\alpha^t > 0$, $t = 1, 2, \ldots$
2 $t \leftarrow 1$.
3 while not converged do
4   Based on $X^t$, compute an unbiased estimate $g(A^t; X^t)$ of $\nabla_{A^t} T(A^t)$
5   $A^{t+1} \leftarrow \Pi_{\mathcal{H}}[A^t + \alpha^t g(A^t; X^t)]$
6   $t \leftarrow t + 1$
7 end
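For readers wishing to experiment with Alg. 2, the following NumPy sketch implements the gradient estimate of Eq. (5) and the projected update. The paper does not specify how $\Pi_{\mathcal{H}}$ is computed; we assume the standard sort-based Euclidean projection onto the probability simplex, applied row by row, and all function names are ours.

```python
import numpy as np

def project_rows_to_simplex(A):
    """Euclidean projection of each row of A onto the probability simplex (Pi_H)."""
    N, K = A.shape
    out = np.empty_like(A)
    for n in range(N):
        u = np.sort(A[n])[::-1]                  # sorted in decreasing order
        css = np.cumsum(u)
        rho = np.nonzero(u + (1.0 - css) / np.arange(1, K + 1) > 0)[0][-1]
        theta = (1.0 - css[rho]) / (rho + 1)
        out[n] = np.maximum(A[n] + theta, 0.0)
    return out

def grad_estimate(A, x):
    """Gradient estimate of Eq. (5); x is a 0/1 activity vector."""
    N, K = A.shape
    g = np.zeros_like(A)
    for l in range(K):
        one_minus = 1.0 - x * A[:, l]            # terms (1 - X_m A_{ml})
        for q in range(N):
            if not x[q]:
                continue                         # X_q = 0 kills both terms
            g[q, l] = np.prod(np.delete(one_minus, q))
            for n in range(N):
                if n == q or not x[n]:
                    continue
                g[q, l] -= A[n, l] * np.prod(np.delete(one_minus, [n, q]))
    return g

def sga(activity_stream, N, K, alpha=0.02, iters=5000, rng=None):
    """Alg. 2: projected stochastic gradient ascent on the expected throughput."""
    rng = rng or np.random.default_rng(0)
    A = project_rows_to_simplex(np.full((N, K), 1.0 / K)
                                + 0.05 * rng.standard_normal((N, K)))
    for _ in range(iters):
        x = next(activity_stream)                # one observed activity vector per frame
        A = project_rows_to_simplex(A + alpha * grad_estimate(A, x))
    return A
```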
When the activity vectors are perfectly known, under weak conditions on the step-size sequence (detailed in Theorem 1), the SGA algorithm converges almost surely to a stationary point [START_REF] Zheng | Stochastic resource optimization of random access for transmitters with correlated activation[END_REF]. In fact, this convergence result holds even when device activity is correlated and the process $X^1, X^2, \ldots$ is an irreducible Markov process [5, Theorem 1]. However, with realistic user identification algorithms, errors will sometimes be introduced in the estimation of the activity $X^t$. As shown in Example III.1, the errors introduced by imperfect user identification can strongly affect the network throughput.

Example III.1. To illustrate the potential impact of user identification errors on resource allocation, consider the simple network consisting of 4 devices sharing two time slots. The true activity probability of the devices is $p = [0.2\ 0.3\ 0.6\ 0.9]$. Suppose that, due to non-orthogonal pilots or a very low signal-to-noise ratio (SNR), the user identification algorithm might not be able to differentiate between the first and last users. This means that, with probability $\epsilon$, the user activity vector is in fact drawn from $\tilde{p} = [0.9\ 0.3\ 0.6\ 0.2]$. Fig. 1 shows the impact of $\epsilon$ on the network throughput T(A) after optimizing A. The case $\epsilon = 0$ corresponds to perfect user identification and $\epsilon = 1$ to the case where an error event always occurs. Observe that the errors in the user identification can strongly affect the throughput, especially if error events are quite likely to occur (i.e., $\epsilon \approx 1$). We also compare to the throughput of a Frame Slotted Aloha [START_REF] Wieselthier | An exact analysis and performance evaluation of framed ALOHA with capture[END_REF] network, i.e., a network that always uses a constant matrix with elements $A_{i,j} = \frac{1}{K}$, $\forall i, j$.

IV. A NEW ALGORITHM TO MITIGATE ERRORS

As illustrated in Fig. 1, user identification errors can have a significant impact on resource optimization. In this section, we propose new algorithms in order to mitigate the impact of user identification errors on the optimization of the allocation matrix A.

A. The Effect of Errors

In order to develop a new algorithm to mitigate the impact of user identification errors, we first examine the impact of the errors on the SGA algorithm in Alg. 2. To this end, consider the update rule of Alg. 2:

$A^{t+1} = \Pi_{\mathcal{H}}[A^t + \alpha^t g(A^t; X^t)].$ (6)

Observe that the update rule can be rewritten as $A^{t+1} = A^t + \alpha^t g(A^t; X^t) + \alpha^t Z^t$, where $\alpha^t Z^t$ is the projection term, which represents the smallest vector (w.r.t. the Euclidean norm) needed to project $A^t + \alpha^t g(A^t; X^t)$ into the constraint set $\mathcal{H}$, $g(A^t; X^t)$ is the gradient estimate, and $\mathbb{E}_X[g(A^t; X^t)] = \nabla_{A^t} T(A^t) + \beta^t$ with $\beta^t$ the bias term. We then have

$A^{t+1} = A^t + \alpha^t \big( \mathbb{E}_X[g(A^t; X^t)] + \underbrace{g(A^t; X^t) - \mathbb{E}_X[g(A^t; X^t)]}_{\partial M^t} \big) + \alpha^t Z^t$ (7)
$\phantom{A^{t+1}} = A^t + \alpha^t \big( \nabla_{A^t} T(A^t) + \beta^t + \partial M^t \big) + \alpha^t Z^t,$ (8)

with $\mathbb{E}[\partial M^t] = 0$. In the case when there is no error in the user identification, $g(A^t; X^t)$ is an unbiased estimate of the gradient $\nabla_A T(A^t)$, and hence $\beta^t = 0$. This unbiased property is critical in order to guarantee convergence of Alg. 2 [START_REF] Kushner | Stochastic approximation and recursive algorithms and applications[END_REF].
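The degradation illustrated in Example III.1 can be reproduced by evaluating T(A) exactly: since N = 4, the expectation in (3) can be enumerated over all $2^N$ activity vectors. The sketch below does exactly that; the helper names are ours, and the optimization of A itself would use the SGA sketch given earlier.

```python
import itertools
import numpy as np

def throughput_given_x(A, x):
    """T_X(A): expected number of collision-free slots for a fixed activity x."""
    N, K = A.shape
    total = 0.0
    for n in range(N):
        if not x[n]:
            continue
        for k in range(K):
            prod = np.prod([1.0 - x[m] * A[m, k] for m in range(N) if m != n])
            total += A[n, k] * prod
    return total

def expected_throughput(A, p):
    """T(A) = E_X[T_X(A)] by exact enumeration, feasible for small N."""
    N = len(p)
    T = 0.0
    for x in itertools.product([0, 1], repeat=N):
        prob = np.prod([p[n] if x[n] else 1.0 - p[n] for n in range(N)])
        T += prob * throughput_given_x(A, np.array(x))
    return T

# Example III.1 setup: samples come from p_tilde with probability eps, from p otherwise
p = np.array([0.2, 0.3, 0.6, 0.9])
p_tilde = np.array([0.9, 0.3, 0.6, 0.2])
A_aloha = np.full((4, 2), 0.5)       # Frame Slotted Aloha baseline
print(expected_throughput(A_aloha, p))
```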
On the other hand, when user identification errors are present, $\beta^t \neq 0$. As a consequence, $g(A^t; \hat{X}^t)$ is a biased estimate of $\nabla_{A^t} T(A^t)$. Hence, there is no guarantee of convergence to a stationary point of (3).

B. A Bias Reduction Method

We have observed that the key cause of the performance reduction in Fig. 1 is the bias in the gradient estimates introduced by imperfect user identification. In order to improve the expected throughput, it is therefore desirable to reduce the bias in $g(A^t; \hat{X}^t)$. We propose the following solution, inspired by importance sampling [START_REF] Ghalebikesabi | Bias mitigated learning from differentially private synthetic data: A cautionary tale[END_REF], [START_REF] Zadrozny | Learning and evaluating classifiers under sample selection bias[END_REF], to reduce the bias. Consider the weight function $w : x \mapsto w(x)$ for $x \in \{0,1\}^N$ defined by

$w(x) = \frac{\Pr(X = x)}{\Pr(\hat{X} = x)},$ (9)

where X is the true activity vector and $\hat{X}$ is an estimate; e.g., obtained from HGAMP. A key property of the importance weight w(x) is that for all $A \in \mathcal{H}$,

$\mathbb{E}_X[g(A; X)] = \sum_x g(A; x) \Pr(X = x) = \sum_x g(A; x) \frac{\Pr(X = x)}{\Pr(\hat{X} = x)} \Pr(\hat{X} = x) = \mathbb{E}_{\hat{X}}[w(\hat{X})\, g(A; \hat{X})].$ (10)

In other words, $w(\hat{X})\, g(A; \hat{X})$ is an unbiased estimate of $\nabla_A T(A)$, as long as $w(x) < \infty$ for all x such that $\Pr(X = x) > 0$. As such, as we will rigorously establish shortly, this choice of weight overcomes the key problem preventing convergence of Alg. 2.

Incorporating the weight defined in (9) into the SGA algorithm yields Alg. 3. This algorithm has the following convergence guarantee.

Algorithm 3: Stochastic optimization algorithm with importance weights.
1 Choose initial allocation matrix $A^1 \in \mathcal{H}$ and step-size sequence $\{\alpha^t\}$ with $\alpha^t > 0$, $t = 1, 2, \ldots$
2 $t \leftarrow 1$.
3 while not converged do
4   Based on $\hat{X}^t$, compute the estimate $g(A^t; \hat{X}^t)$ of $\nabla_{A^t} T(A^t)$
5   $A^{t+1} = \Pi_{\mathcal{H}}[A^t + \alpha^t w(\hat{X}^t)\, g(A^t; \hat{X}^t)]$
6   $t \leftarrow t + 1$
7 end

Theorem 1. The iterates $A^t$ of Algorithm 3 converge almost surely as $t \to \infty$ to a stationary point, provided that the step-size sequence $\{\alpha^t\}$ with $\alpha^t > 0$, $t = 1, 2, \ldots$ satisfies $\sum_{t=1}^{\infty} \alpha^t = \infty$ and $\sum_{t=1}^{\infty} (\alpha^t)^2 < \infty$, and that $w(x) < \infty$ for all x such that $\Pr(X = x) > 0$.

Proof. Similarly to (7), the update rule can be written as

$A^{t+1} = A^t + \alpha^t \big( \mathbb{E}_{\hat{X}^t}[w(\hat{X}^t) g(A^t; \hat{X}^t)] + \underbrace{w(\hat{X}^t) g(A^t; \hat{X}^t) - \mathbb{E}_{\hat{X}^t}[w(\hat{X}^t) g(A^t; \hat{X}^t)]}_{\partial M'^t} \big) + \alpha^t Z^t$
$\phantom{A^{t+1}} = A^t + \alpha^t \mathbb{E}_{X^t}[g(A^t; X^t)] + \alpha^t \partial M'^t + \alpha^t Z^t$ (11)
$\phantom{A^{t+1}} = A^t + \alpha^t \big( \nabla_{A^t} T(A^t) + \beta^t \big) + \alpha^t \partial M'^t + \alpha^t Z^t = A^t + \alpha^t \nabla_{A^t} T(A^t) + \alpha^t \partial M'^t + \alpha^t Z^t,$ (12)

where step (11) is obtained through the property

$\mathbb{E}_X[g(A; X)] = \mathbb{E}_{\hat{X}}[w(\hat{X})\, g(A; \hat{X})].$ (13)

As such, $\beta^t = 0$, as the property in (13) implies that $\mathbb{E}_{\hat{X}}[w(\hat{X})\, g(A^t; \hat{X})]$ is an unbiased estimate of $\nabla_{A^t} T(A^t)$. To establish convergence, we seek to apply Theorem 5.2.1 of [START_REF] Kushner | Stochastic approximation and recursive algorithms and applications[END_REF]. To do so, it is necessary to check the following assumptions:

• A.5.2.1: $\sup_t \mathbb{E}_{\hat{X}^t}\big[\|w(\hat{X}^t)\, g(A^t; \hat{X}^t)\|\big] < \infty$, (14) which holds provided that $\Pr(\hat{X}^t = x^t) \neq 0$ for all $x^t$.

• A.5.2.2: There exists a function $\bar{g}$ such that $\mathbb{E}\big[w(\hat{X}^t)\, g(A^t; \hat{X}^t) \mid A^t,\ w(\hat{X}^i) g(A^i; \hat{X}^i),\ i < t\big] = \bar{g}(A^t) + \beta^t$, which holds since $\beta^t = 0$ and by setting $\bar{g}(A^t) = \nabla_{A^t} T(A^t)$.

• A.5.2.3: $\nabla_A T(A)$ is continuous, which can be immediately verified from the definition of the objective.

• A.5.2.5: $\sum_t \alpha^t |\beta^t| < \infty$, (15) which holds as $\beta^t = 0\ \forall t$.
In addition, a condition on the constraint set $\mathcal{H}$ must be verified. As the same constraint is present in [START_REF] Zheng | Stochastic resource optimization of random access for transmitters with correlated activation[END_REF] for the case without errors, we refer the reader to the statement and proof of the constraint condition in [4, Theorem 1].

Theorem 1 shows that even when there are errors, Alg. 3 converges to stationary points of the problem in (3). This is in contrast to the standard SGA algorithm in Alg. 2, for which these guarantees only hold when there are no errors.

C. Proposed Algorithm

While the weight in (9) yields convergence of Alg. 3 to a stationary solution, it must first be computed. In practice this can be challenging for two reasons: (i) the true activity distribution may not be perfectly known or is difficult to sample from; (ii) the distribution of the estimated activity $\hat{X}$ must be computed. We propose the following solution to address (i) and (ii).

In order to perform user identification, an estimate of $\Pr(X = x)$ is required for the prior within the HGAMP algorithm. This prior is typically estimated via the expectation-maximization algorithm coupled with HGAMP [START_REF] Ke | Compressive sensingbased adaptive active user detection and channel estimation: Massive access meets massive MIMO[END_REF]. In the computation of the weight in (9), the probability $\Pr(X = x)$ can therefore be approximated by the prior used within HGAMP. The HGAMP algorithm also produces estimates of the channels $h_n^t$. As such, samples of $X_n^t$ and $y^t$ in (2) can be simulated using the prior estimate of $\Pr(X = x)$. Let $\epsilon(p)$ be the error probability of HGAMP for a given true activity p and $e \sim \mathrm{Ber}(\epsilon(p))$. As a consequence, an estimate $\widehat{\Pr}(\hat{X} = x)$ of $\Pr(\hat{X} = x \mid e = 1)$ (the posterior of HGAMP given that an error is made) and $\epsilon(p)$ can be obtained via Monte Carlo estimation. This gives the following approximation for the weighting function:

$\hat{w}_1(x) = \frac{\Pr(X = x)}{(1 - \epsilon(p)) \Pr(X = x) + \epsilon(p)\, \widehat{\Pr}(\hat{X} = x)}.$ (16)

Unfortunately, when N is very large, Monte Carlo estimation of $\widehat{\Pr}(\hat{X} = x)$ and $\epsilon(p)$ has a high sample complexity. It is therefore desirable to consider further approximations. A convenient choice is

$\tilde{P}(x) = (1 - \epsilon(p)) \Pr(X = x) + \frac{\epsilon(p)}{2^N - 1}.$ (17)

Here, (17) is obtained by considering every possible vector to be equally likely whenever an error is made by HGAMP. An approximation of the weight in (9) is then given by

$\hat{w}_2(x) = \frac{\Pr(X = x)}{\tilde{P}(x)}.$ (18)

If the number of users is large, $\epsilon(p)$ can also be approximated by a value close to one (as it will be very likely to miss at least one user). Accounting for the challenges in computing the weight in (i) and (ii) then leads to Alg. 4, where $\hat{w}$ is either equal to $\hat{w}_1$ or $\hat{w}_2$.

Algorithm 4: Stochastic optimization algorithm with approximate weight function.
1 Choose initial allocation matrix $A^1 \in \mathcal{H}$, and step-size sequence $\{\alpha^t\}$ with $\alpha^t > 0$, $t = 1, 2, \ldots$
2 $t \leftarrow 1$.
3 while not converged do
4   Based on the estimate $\hat{X}^t$, compute a biased estimate $g(A^t; \hat{X}^t)$ of $\nabla_{A^t} T(A^t)$
5   $A^{t+1} = \Pi_{\mathcal{H}}[A^t + \alpha^t \hat{w}(\hat{X}^t)\, g(A^t; \hat{X}^t)]$
6   $t \leftarrow t + 1$
7 end

Due to the approximations used in the computation of these weights, the bias in the estimated gradients is not zero, as the importance sampling property does not hold anymore, unlike with the exact weight in (9). Nevertheless, as we show in Sec. V, significant improvements in performance are obtained over the standard SGA algorithm in Alg. 2, which does not account for the presence of errors in user identification.
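As an illustration, the heuristic weight $\hat{w}_2$ of Eqs. (17)-(18) and one iteration of Alg. 4 can be sketched as follows. The functions grad_estimate and project are those of the earlier SGA sketch; in practice p would be the EM-HGAMP prior estimate and eps a Monte Carlo estimate of $\epsilon(p)$, both of which are assumptions here.

```python
import numpy as np

def log_prior(x, p):
    """log Pr(X = x) under independent Bernoulli activities (Eq. (1))."""
    return float(np.sum(np.where(x == 1, np.log(p), np.log1p(-p))))

def w2_hat(x_hat, p, eps):
    """Heuristic weight of Eq. (18), with the uniform error model of Eq. (17)."""
    N = len(p)
    prior = np.exp(log_prior(x_hat, p))
    p_tilde = (1.0 - eps) * prior + eps / (2 ** N - 1)
    return prior / p_tilde

def weighted_sga_step(A, x_hat, p, eps, alpha, grad_estimate, project):
    """One iteration of Alg. 4: weighted gradient step followed by projection."""
    w = w2_hat(x_hat, p, eps)
    return project(A + alpha * w * grad_estimate(A, x_hat))
```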
V. NUMERICAL RESULTS

While we have established that Alg. 3 converges to a stationary point with probability one, this is under the assumption that the weight can be computed with no approximation error. In practice, Alg. 4 must be used, with the weight approximated by $\hat{w}_1$ or $\hat{w}_2$. This inevitably leads to bias in the gradient estimates and hence a reduction in the final expected throughput. To investigate the reduction compared with the case of perfect knowledge, we perform a numerical study focusing on two scenarios.

In the first scenario, the network consists of N = 10 devices and an AP, where each active device transmits on one of K = 4 slots. Each device is assumed to transmit independently with probability $p_n$, with

$p = [.19\ .20\ .21\ .23\ .31\ .55\ .71\ .82\ .85\ .94].$ (19)

In the preamble phase, user identification errors are assumed to occur with probability $\epsilon$. In the case an error occurs, the estimated activity vector $\hat{X}$ is assumed to have independent components with $\Pr(\hat{X}_n = 1) = 1 - p_n$. This error distribution corresponds to a worst-case scenario and is therefore of interest to validate that Alg. 4 can effectively mitigate the impact of the errors on slot allocations. The initial matrix $A^1$ is randomly selected by setting the elements to $A_{i,j} = \frac{1}{K} + z_{ij}$, with $z_{ij} \sim \mathcal{N}(0, 0.05)$, and then projecting the resulting matrix into $\mathcal{H}$.

Fig. 2 plots the expected throughput with optimized allocation matrix A for varying error probability $\epsilon$. Five algorithms are considered: perfect knowledge (Alg. 2) and Aloha (uniform slot allocation for each user, as in Example III.1), which serve as references; the unweighted case, corresponding to Alg. 2 but with a biased estimate of $\nabla_A T(A)$ due to the errors; and two curves corresponding to the proposed Alg. 4 with weights $\hat{w}_1$ and $\hat{w}_2$, respectively. The step-size for each algorithm is set to $\frac{1}{50}$ and the algorithms run for f = 5000 iterations. The results are obtained by averaging over 10 runs. Observe that, as expected, the expected throughput with perfect knowledge yields the best performance, while there is significant performance degradation in the unweighted case based on Alg. 2. We highlight that the proposed algorithm in Alg. 4 significantly outperforms the unweighted case for both weights $\hat{w}_1$ and $\hat{w}_2$. For most values of $\epsilon$, Alg. 4 is also seen to yield performance close to the case of perfect knowledge, which suggests that the impact of user identification errors is largely mitigated. Note also that unless $\epsilon = 1$, all the proposed methods outperform Frame Slotted Aloha.

In the second scenario, Alg. 4 is again compared with the unweighted case, the perfect decoding case and an Aloha system in a network with N = 10 devices, K = 2 slots and activity vector governed by

$p = [.11\ .17\ .18\ .23\ .25\ .34\ .57\ .63\ .70\ .94].$

In contrast with the first scenario, the distribution of the estimated activity $\Pr(\hat{X} = x)$ is based on user identification using the HGAMP algorithm, which exploits circular complex Gaussian pilot sequences of length L = 5. Note that L < N, meaning that active users are likely to be mixed up by the decoder. The weights $\hat{w}_1$ and $\hat{w}_2$ are estimated from 3000 prior simulations. The initial matrix is obtained as previously. Each algorithm is run 15 times for f = 15000 iterations with a constant step-size of $\alpha^t = \frac{1}{20}$.
The reason for the poor performance of weight ŵ2 is that the assumption of uniformly distributed activity in the case of an error is a poor approximation, especially if there are many users. It is interesting to note even when the SNR is small, the algorithms are still outperforming Aloha. VI. CONCLUSION Stochastic resource allocation in grant-free random access relies on estimates of device activity via user identification algorithms, such as HGAMP. However, in practice errors are introduced which lead to potentially significant performance reductions. In this paper, we have developed an algorithm that can mitigate the impact of errors via importance sampling. Numerical results show significant performance improvements over existing algorithms, in many cases approaching the performance when user identification errors are not present. Fig. 1 : 1 Fig.1: Expected throughput with user identification errors for a network of N = 4 devices sharing K = 2 slots. Drawing the sample with probability ϵ from p instead of p (in orange) might introduce a 60% reduction of the throughput (if ϵ = 1) and lead to a throughput worse than an Aloha network. The blue line corresponds to a perfect identification (ϵ = 0). Fig. 2 : 2 Fig. 2: Comparison of the different algorithms for varying error probability ϵ and activity governed by (19). Fig. 3 : 3 Fig. 3: Expected throughput of the different algorithms as a function of the SNR. Fig. 3 3 Fig. 3 plots the performance of each algorithm for varying SNR in the preamble phase. For low values of SNR, the Algorithm 4 : 4 Stochastic Optimization Algorithm with Approximate Weight Function 1 Choose initial allocation matrix A 1 ∈ H, and step-size sequence {α t } with α t > 0, t = 1, 2, . . . Based on the estimate Xt , compute a biased estimate g(A t ; Xt ) of ∇ A t T (A t ) 2 t← 1. 3 while not converged do 4 5 A t+1 = Π H [A t + α t ŵ( Xt )g(A t ; Xt )] 6 7 end t ← t + 1
04114567
en
[ "phys", "spi" ]
2024/03/04 16:41:24
2022
https://cnrs.hal.science/hal-04114567/file/JASA_preprint_chambon.pdf
Galerkin Equivalent Sources Method for sound field reconstruction around diffracting bodies

Joannès Chambon, 1, a) Jérôme Antoni, 1 and Simon Bouley. 2
1) Univ Lyon, INSA Lyon, LVA, 25 bis av. Jean Capelle F-69621, Villeurbanne Cedex, France.
2) MicrodB, 28 Chemin du Petit Bois, F-69134, Écully Cedex, France.
a) Also at MicrodB. Author to whom correspondence should be addressed. Electronic mail: [email protected]
(Dated: 3 October 2022)

The rising interest for three-dimensional acoustic imaging requires the improvement of the numerical models describing the propagation between a radiating body and a microphone array. The commonly used free field transfer functions amount to assuming full acoustic transparency of the radiating object, which in some cases may lead to misleading outcomes for its characterization. Among other approaches, Equivalent Sources Methods (ESM) emerged as a convenient and powerful way to simulate scattered sound fields. In the following paper, an acoustic imaging algorithm named Galerkin ESM, where equivalent sources are tailored to concomitantly match microphone pressures and a Neumann boundary condition, is proposed. By means of a projected matrix inversion and a backpropagation of the equivalent sources, Galerkin ESM aims at the direct synthesis of the pressure field around a diffracting body by making the most of an array measurement. This method is compared with two other existing imaging algorithms fueled by both free field and computed transfer functions. The impact of the chosen transfer model is discussed, and the performance of Galerkin ESM is evaluated on both numerical and experimental test cases.

I. INTRODUCTION

Acoustic propagation models that account for rigid body diffraction can be obtained with a wide range of numerical methods. When it comes to problems of infinite extent, the Boundary Element Method (BEM) emerges as the main option for its strong theoretical foundation and its convenience (see for instance [START_REF] Bonnet | Boundary integral equations methods in solids and fluids[END_REF] or [START_REF] Burton | The application of integral equation methods to the numerical solution of some exterior boundary-value problems[END_REF] for an overview). As it got enhanced over the years to tackle its non-uniqueness issues, BEM became a complete but more complex and time-consuming algorithm that requires substantial computational resources. Over and above the implementation of the method itself, BEM also requires fine expertise in mesh management. The use of advanced iterative solvers (see [START_REF] Saad | Iterative Methods for Sparse Linear Systems[END_REF]) or Fast Multipoles (introduced by [START_REF] Rokhlin | Rapid solution of integral equations of scattering theory in two dimensions[END_REF]) led to significant speed-ups, but at the same time turned BEM into a rather hard-to-master option. With a view to providing a simpler and more flexible alternative for less complex test cases, [START_REF] Koopmann | A method for computing acoustic fields based on the principle of wave superposition[END_REF] designed and assessed an Equivalent Sources Method (ESM). Its physical principle can be stated concisely: considering a rigid body exposed to an incident acoustic field, an interior volume distribution of elementary sources can be set to offset the boundary condition induced by the incident field on the skin of the object.
The main interest of having equivalent sources strictly in the interior domain is to avoid the numerical singularity issues encountered with BEM. These sources are thus considered as acoustically equivalent to the presence of the rigid body, and can be propagated toward the domain outside the skin to get the scattered sound field. After describing the theoretical background of ESM, [START_REF] Koopmann | A method for computing acoustic fields based on the principle of wave superposition[END_REF] also established its numerical version, in which the rigid body and the volume of equivalent sources are respectively modeled by a rudimentary mesh and a discrete set of acoustic monopoles. Since then, ESM has been extensively studied and [START_REF] Lee | Review: The use of Equivalent Source Method in computational acoustics[END_REF] proposed a review article where the key parameters of its proper implementation are identified:

• The optimal number of equivalent sources was initially discussed by [START_REF] Koopmann | A numerical solution for the general radiation problem based on the combined methods of superposition and singular-value decomposition[END_REF] on a cylindrical test case. Later on, [START_REF] Dunn | Aeroacoustic scattering via the equivalent source method[END_REF] experimentally showed that a ratio of three times fewer sources than nodes on the mesh provided the best results on practical cases.

• Their spatial distribution proved to be directly linked to the conditioning of ESM matrices. Notably, a slight distinction was pointed out by [START_REF] Leblanc | A wave superposition method based on monopole sources with unique solution for all wave numbers[END_REF] between ESM with regularly distributed equivalent sources (Method of Fundamental Solutions, see [START_REF] Chen | Using the method of fundamental solutions in conjunction with the degenerate kernel in cylindrical acoustic problems[END_REF][START_REF] Kondapalli | Analysis of acoustic scattering in fluids and solids by the method of fundamental solutions[END_REF]) and ESM using randomly located sources (Wave Superposition Method, see again [START_REF] Koopmann | A method for computing acoustic fields based on the principle of wave superposition[END_REF]). They showed that the first is a well-posed problem but suffers from non-uniqueness issues at particular frequencies, while the latter is more robust but prone to ill-conditioning deficiencies. This can be dealt with using singular value regularization (see [START_REF] Lee | Assessment of timedomain equivalent source method for acoustic scattering[END_REF]). [START_REF] Pavić | An engineering technique for the computation of sound radiation by vibrating bodies using substitute sources[END_REF] also designed a time-consuming but accurate algorithm to determine optimised spatial configurations.

• The retreat distance of the sources from the skin of the rigid body is also a key driver for the proper functioning of ESM approaches. The stake is comparable to that of BEM collocation techniques: equivalent sources far from the mesh struggle to precisely match complex incident acoustic fields, while sources close to the skin are likely to be numerically unstable because of singularities. [START_REF] Bai | On optimal retreat distance for the equivalent source method-based nearfield acoustical holography[END_REF] recently tackled this issue and proposed avenues to find the optimal balance.

• Lastly, the nature of the equivalent sources plays a role in the efficiency of the method.
ESM can be described in terms of single and double layer potentials (see [START_REF] Wilton | A clarification of nonexistence problems with the superposition method[END_REF]); [START_REF] Jeans | The wave superposition method as a robust technique for computing acoustic fields[END_REF] first investigated the use of dipolar equivalent sources, while [START_REF] Ochmann | The source simulation technique for acoustic radiation problems[END_REF] later on suggested that any type of sources (like spherical harmonics, see [START_REF] Bouchet | Calculation of acoustic radiation using equivalent-sphere methods[END_REF]) could be used. Featuring all these refinements (see [START_REF] Lee | Review: The use of Equivalent Source Method in computational acoustics[END_REF] for a more in-depth review), ESM is a relevant alternative to BEM for its straightforward implementation and its accuracy on a large scope of test cases.

Bearing in mind the advantages cited above, ESM naturally got introduced into acoustic imaging methods over the last years. The generic set-up common to all acoustic imaging problems,

$p = Hq,$ (1)

where a source grid q has to be identified from microphonic pressures p and a collection of acoustic transfer functions H, has thus been addressed with equivalent source models in several contexts, e.g. for imaging in enclosed space. While they differ in the algorithm used to identify the sources, the common ground to all these references lies in their use of a discrete set of equivalent monopoles (or dipoles, see for example [START_REF] Valdivia | Advanced equivalent source methodologies for near-field acoustic holography[END_REF]) to reconstruct a radiated acoustic field from array measurements. However, another common aspect is that they no longer involve these equivalent sources in the simulation of a particular boundary condition, in contrast with the historical purpose of ESM. A few attempts to align the use of equivalent sources in acoustic imaging with scattering simulation are worth mentioning: Le Magueresse (2016, section 6.2.4) first introduced the idea of using Koopmann's ESM to fill the H matrix in Eq.(1) with coefficients taking into account the diffracting behaviour of the acoustic scene. Later on, Le Magueresse et al. (2020, section 4) also highlighted how the use of such transfer functions can matter to avoid misleading interpretations of source identification on an experimental engine test bench. At the same time, [START_REF] Chambon | 3D Beamforming for wind tunnel applications using ESM based transfer functions[END_REF] used the same approach in a wind tunnel, and showed how ESM could be used for aeroacoustic source identification on a car mirror to improve the resolution of beamforming maps.

Drawing on this, the connection between Koopmann's ESM approach and acoustic imaging remained to be formalized, which is the point addressed in this article. The concept of fitting a set of equivalent sources to conform to a given boundary condition is borrowed from ESM, and included in the acoustic imaging inverse problem. Considering a Neumann boundary condition, this embedding takes the form of an orthogonal projection of the equivalent sources on the kernel of a matrix modeling the impedance of a rigid body. The basic inverse problem in Eq.(1) remains, but finds itself restricted to solutions satisfying an orthogonality constraint similar to those stemming from Galerkin methods.
The aim of such an approach is also a crossover between classical ESM and acoustic imaging: the data provided by the phased microphone array is processed to identify virtual equivalent sources, and the latter are then propagated to synthesize the acoustic fields in areas of interest. The proposed algorithm, hence named Galerkin ESM, is depicted in this paper. Section II is dedicated to its mathematical formulation, from the statement of the inverse problem to the backpropagation step providing the display of the pressure maps synthesized from equivalent sources. Then, in section III, the emphasis is put on an in-depth study of the orthogonal projector involved in Galerkin ESM. An avenue to precondition the kernel matrix with a physical maximization of the radiation efficiency of the sources is exposed. In section IV, Galerkin ESM is put to the test head-to-head against the well-assessed acoustic imaging methods that are Conventional Beamforming (CBF) and iterative Bayesian Focusing (iBF), on a varied set of test cases featuring scattered sound propagation. With full control on the target pressure fields, this benchmark is an opportunity to discuss the importance of supplying imaging algorithms with realistic transfer functions instead of the commonly used free field models. The last section consists of an illustrative application of Galerkin ESM on an academic experimental set of measurements.

Conventions and notations: Throughout this article, vectors are represented by bold lowercase letters and matrices by bold uppercase letters. $M^H$ denotes the transpose conjugate of a complex matrix, and $M^+$ stands for its Moore-Penrose inverse. The orthogonal complement of a given Hilbert space $\mathcal{H}$ is noted $\mathcal{H}^\perp$. The complement of a given set E is $\bar{E}$. The Euclidean norm of a vector v is written $\|v\|$ and its p-norm $\|v\|_p$. The column space of a matrix M is noted $\mathrm{Im}\,M$ and its kernel $\ker M$. Lastly, the convention chosen for the phase sign is $e^{-i\omega t}$.

II. GALERKIN ESM

A. Overall methodology

First is considered a three-dimensional radiating body bounded by a closed surface Γ. The acoustic field is sensed via a set of M microphones located at the positions $(r_i)_{i \le M}$. The purpose of the method is to provide a description of the exterior acoustic field (both phase and amplitude) at a given pulsation ω by means of equivalent sources placed inside Γ. The latter are to be determined with respect to the measured pressures $p(r_i)_{i \le M}$ on the one hand, and to the impedance condition induced by the behaviour of the body on the other hand. A discrete formulation of the problem is proposed now. Γ is modeled by a surfacic mesh featuring N nodes at positions $(r_j)_{j \le N}$, and a set of $N_s$ equivalent monopolar acoustic sources is introduced at the positions $(r_k)_{k \le N_s}$ in the volume Ω. Let $G \in \mathbb{C}^{M \times N_s}$ be the matrix defining the acoustic free-field transfer functions between the equivalent sources and the pressure at the microphone positions, i.e.

$\forall i \le M,\ \forall l \le N_s, \quad G_{il} = -i\omega\rho\, \frac{e^{ik\|r_i - r_l\|}}{4\pi \|r_i - r_l\|}, \quad r_i \in \bar{\Omega},\ r_l \in \Omega.$ (2)

Similarly can be defined the matrix $G_\Gamma \in \mathbb{C}^{N \times N_s}$ describing the source-to-pressure transfer between the equivalent sources and the nodes on Γ, and $T_\Gamma \in \mathbb{C}^{N \times N_s}$ the source-to-normal-velocity transfer

$\forall j \le N,\ \forall l \le N_s, \quad T_{\Gamma,jl} = \frac{e^{ik\|r_j - r_l\|}}{4\pi \|r_j - r_l\|^2}\, (1 - ik\|r_j - r_l\|) \cos\theta_{jl}, \quad r_j \in \Gamma,\ r_l \in \Omega,$ (3)

where $\theta_{jl}$ denotes the angle between $r_j - r_l$ and the direction normal to Γ at $r_j$.
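As an implementation aid, the matrices of Eqs. (2)-(3) can be assembled as below. This is a sketch under the stated $e^{-i\omega t}$ convention; the $1/r^2$ factor in the velocity kernel follows from differentiating the monopole field, and all argument names are ours.

```python
import numpy as np

def freefield_matrices(mics, nodes, normals, sources, k, rho, omega):
    """Free-field transfer matrices of Eqs. (2)-(3).

    mics:    (M, 3) microphone positions
    nodes:   (N, 3) mesh node positions on Gamma, with outward unit `normals` (N, 3)
    sources: (Ns, 3) equivalent source positions inside Gamma
    """
    def pressure(rx, rs):
        # pairwise distances, then monopole pressure kernel (Eq. (2))
        d = np.linalg.norm(rx[:, None, :] - rs[None, :, :], axis=-1)
        return -1j * omega * rho * np.exp(1j * k * d) / (4 * np.pi * d)

    G = pressure(mics, sources)

    diff = nodes[:, None, :] - sources[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    cos_theta = np.einsum('jld,jd->jl', diff, normals) / d   # angle with normal at r_j
    T = np.exp(1j * k * d) / (4 * np.pi * d**2) * (1 - 1j * k * d) * cos_theta  # Eq. (3)
    return G, T
```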
Finally, the impedance imposed by the radiating body is introduced as $\big(z_j = \frac{p(r_j)}{v_n(r_j)},\ r_j \in \Gamma\big)_{j \le N}$. After stacking the M pressures $p(r_i)$, the $N_s$ equivalent source flows $q(r_l)$, and the impedances $z_j$ into the column vectors p, c and the diagonal matrix Z respectively, the inverse problem under scrutiny boils down to

$p = Gc$ (measurements matching)
$(Z T_\Gamma - G_\Gamma)\, c = 0$ (boundary condition). (4)

From a mathematical standpoint, the goal here will be to solve a classical acoustic imaging inverse problem whose solution has to be projected on a null space accounting for the presence of the boundary condition. Assuming Eq.(4) solved, the last step of the approach consists of the repropagation of the obtained sources c on any observation region $\Gamma_{obs}$, thanks to free field transfer matrices defined similarly to Eq.(2) and (3). In this paper, the particular case of a perfectly rigid body is investigated for the sake of simplicity. In other words, the admittance on Γ is set to zero, which implies

$T_\Gamma c = 0$ (5)

to account for a Neumann boundary condition, and the overall problem as stated in Eq.(4) becomes

$p = Gc$ (measurements adequation)
$T_\Gamma c = 0$ (boundary condition). (6)

B. $T_\Gamma$ null space

As the acoustic inverse problem turns into an algebraic projected inversion, setting the dimensions of the operating matrices is a major stake for the practical implementation of the method. M and N are considered as inputs of the problem: the number of microphones is given by the available hardware, and the number of nodes describing the body is fixed in accordance with the minimum wavelength under scrutiny. Concerning the test cases presented in this paper, a 5-nodes-per-wavelength condition is matched to ensure a margin ([START_REF] Lee | Assessment of timedomain equivalent source method for acoustic scattering[END_REF] showed that ESM approaches could be more resilient than FEM/BEM on this point). The number of equivalent sources $N_s$ is a key parameter of the method and requires further clarification. As it is mandatory that Eq.(5) admits non-zero solutions for the proper operation of the method, a first condition on $N_s$ is given by the rank theorem:

$N_s = \mathrm{Rank}(T_\Gamma) + \dim(\ker T_\Gamma).$ (7)

As $T_\Gamma$ is of full rank (i.e. $\mathrm{Rank}(T_\Gamma) = \min(N, N_s)$), a non-empty kernel implies $N_s > N$, and the number of equivalent sources must be greater than the number of nodes supporting the boundary condition on the body. Under this assumption, the resolution of Eq.(4) involves the extraction of the kernel of $T_\Gamma$. The simplest way to do so is based on the classical relationship

$\ker T_\Gamma = \big(\mathrm{Im}\, T_\Gamma^H\big)^\perp.$ (8)

From then on, the QR decomposition (see for example [START_REF] Golub | Matrix Computations[END_REF]) of $T_\Gamma^H$ is sufficient to sample a K-sized matrix $B \in \mathbb{C}^{N_s \times K}$ from the null space of $T_\Gamma$. Let us first introduce

$T_\Gamma^H = QR = [\,\underbrace{Q_1}_{\mathrm{Im}\, T_\Gamma^H}\ \ \underbrace{Q_2}_{\text{null space of } T_\Gamma}\,]\, R,$ (9)

where $Q \in \mathbb{C}^{N_s \times N_s}$ is a unitary matrix and $R \in \mathbb{C}^{N_s \times N}$ is an upper triangular matrix. Equation (9) can be rewritten as

$T_\Gamma^H = [Q_1\ Q_2]\, R,$ (10)

with $Q_1 \in \mathbb{C}^{N_s \times \mathrm{Rank}(T_\Gamma)}$ describing the image of $T_\Gamma^H$ and $Q_2 \in \mathbb{C}^{N_s \times (N_s - \mathrm{Rank}(T_\Gamma))}$ being a set of orthogonal columns in the kernel of $T_\Gamma$. From there, any matrix B of the form

$B = Q_2 \Lambda, \quad \Lambda \in \mathbb{C}^{(N_s - \mathrm{Rank}(T_\Gamma)) \times K}$ (11)

ensures that $T_\Gamma B$ will equal zero.
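A minimal NumPy sketch of this kernel extraction is given below, assuming (as in the text) that $T_\Gamma$ has full row rank N, so that the last $N_s - N$ columns of Q span $\ker T_\Gamma$.

```python
import numpy as np

def kernel_basis(T_gamma):
    """Orthonormal basis Q2 of ker(T_Gamma) via the full QR of T_Gamma^H (Eqs. (9)-(11))."""
    N, Ns = T_gamma.shape
    Q, _ = np.linalg.qr(T_gamma.conj().T, mode='complete')  # Q is Ns x Ns, unitary
    Q2 = Q[:, N:]                # last Ns - N columns, assuming Rank(T_Gamma) = N
    # sanity check: T_Gamma @ Q2 should vanish to machine precision
    assert np.linalg.norm(T_gamma @ Q2) < 1e-8 * np.linalg.norm(T_gamma)
    return Q2
```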
The B matrix is the cornerstone of the proposed Galerkin ESM, as it plays two major roles:

• Because of its similarity with $Q_2$, it forces the equivalent sources to respect the Neumann boundary condition stated in Eq.(5).

• The appropriate construction of Λ allows the reduction of the inverse problem dimension in Eq.(4) from $N_s$ to K unknowns.

Relevant ways to define this matrix are investigated in section III.

C. Practical inversion

The initial system (4) can now be reduced to a single inverse problem

$p = Gc = GBd,$ (12)

where the new unknown vector $d \in \mathbb{C}^K$ may be understood as a set of equivalent source coefficients in a basis accounting for the Neumann boundary condition on the body. Since it is desirable for this new statement of the problem to be correctly conditioned, a parametric condition arises from Eq.(12), as it should not be overdetermined. This means that the size of the kernel basis K has to be larger than the output dimension M of the matrix GB to invert, which necessarily leads to

$M \le K \le N_s - N.$ (13)

This prerequisite on the number of equivalent sources will be supported by numerical considerations in section IV. Now that the inverse problem boils down to the formulation of Eq.(12), there remains the choice of an inversion algorithm to reach the solution d. A large scope of sophisticated algorithms is available in the literature (see for example [START_REF] Leclère | A unified formalism for acoustic imaging based on microphone array measurements[END_REF] or Merino-Martínez et al. (2019) for a review). The shape of the optimal solution d (for example sparse or with minimum energy) is case-dependent and likely to be influenced by the choice of Λ. For that reason, the generic choice is to make use of an $L_2$ regularized inversion. This is the one made in the framework of this article in the validation sections, with the regularization parameter set according to the Bayesian regularization algorithm of [START_REF] Pereira | Empirical Bayesian regularization of the inverse acoustic problem[END_REF]. Once c is recovered from Eq.(12), the last step consists of the backpropagation of the optimal equivalent source vector $\hat{c} = B\hat{d}$ to estimate the radiated pressure in the region of interest within $\bar{\Omega}$. This is simply achieved with the free field propagator following

$p_{obs} = G_{\Gamma_{obs}}\, \hat{c}.$ (14)

For this backpropagation to be meaningful, a noticeable precaution has to be highlighted: since the equivalent sources arising from Galerkin ESM are designed to acoustically model the object defined by Γ, it is mandatory for this region of interest not to intersect Γ. Doing otherwise would indeed boil down to evaluating the pressure radiated by the body inside or on itself, and would most likely lead to incoherent results and interpretations. Indeed, like classical ESM or BEM, the Galerkin inverse problem is dedicated to the simulation of the acoustic field outside the source region, since the 1/r singularity of the monopoles is unmanageable within Ω. Apart from this area, it is supposed to provide a realistic synthesized acoustic field outside the rigid body based on the microphone array measurements. In the end, the overall execution of Galerkin ESM is summarized in Algorithm 1.

Algorithm 1: Galerkin ESM.
Require: $\Gamma_{obs} \cap \Gamma = \emptyset$
$G \in \mathbb{C}^{M \times N_s}$ ← Eq.(2).
$T_\Gamma \in \mathbb{C}^{N \times N_s}$ ← Eq.(3).
$Q_2, R$ ← QR decomposition of $T_\Gamma^H$.
$\Lambda \in \mathbb{C}^{(N_s - \mathrm{Rank}(T_\Gamma)) \times K}$ ← user defined, see section III.
$B \leftarrow Q_2 \Lambda$.
$\hat{d}$ ← regularized inversion of $p = GBd$.
$p_{obs} \leftarrow G_{\Gamma_{obs}} B \hat{d}$.
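The following sketch chains the steps of Algorithm 1. For brevity it substitutes a plain Tikhonov inversion with a hypothetical regularization parameter eta for the Bayesian regularization used in the paper; kernel_basis is the QR sketch given above.

```python
import numpy as np

def galerkin_esm(p, G, T_gamma, G_obs, eta=1e-2, Lambda=None):
    """Sketch of Algorithm 1 (Tikhonov inversion in place of Bayesian regularization)."""
    Q2 = kernel_basis(T_gamma)                 # basis of ker(T_Gamma)
    B = Q2 if Lambda is None else Q2 @ Lambda  # user-defined Lambda, see Sec. III
    GB = G @ B
    # regularized least squares: d = (GB^H GB + eta I)^{-1} GB^H p
    K = GB.shape[1]
    d = np.linalg.solve(GB.conj().T @ GB + eta * np.eye(K), GB.conj().T @ p)
    c = B @ d                                  # equivalent sources with T_Gamma c = 0
    return G_obs @ c                           # backpropagation, Eq. (14)
```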
From the computational cost point of view, the two most significant steps are the QR decomposition ($O(N^3)$ operations using numpy Householder reflectors) and the inversion of GB, which will depend on Λ and the chosen inverse method.

III. KERNEL LAYOUT THROUGH PRINCIPAL SURFACES

When choosing the simplest form for Λ, i.e. a rectangular matrix with unit entries on its diagonal, there is no genuine reason to expect any dimension reduction from $N_s - N$ to K: in that case B is directly sampled from the QR decomposition, which is a pure mathematical operation with no physical meaning involved. The selection of columns from $Q_2$ is thus bound to be arbitrary, and it is shown in the next section that Galerkin ESM is unlikely to provide meaningful results with small values of K in this set-up. As mentioned in the previous section, the construction of Λ thus deserves an in-depth analysis, since it impacts both the physical meaning of the equivalent source model and the numerical cost of the inverse problem. The current section is dedicated to a practical way to define an acoustic operator from which principal radiating surfaces are derived. For the sake of clarity, only one promising option is advocated in this section, but other avenues could be studied as well depending on the application cases.

Sources with maximum radiation efficiency: The proposed option is to tidy up the columns of $Q_2$ in decreasing order of efficiency in terms of acoustic radiated power. For that purpose, the choice was to test Galerkin ESM featuring a Λ matrix filled with the K eigenvectors associated to the largest eigenvalues of the radiation efficiency operator. Given a closed surface completely covering the rigid body ($\Gamma_W \subset \bar{\Omega}$) and discretized in $N_W$ nodes and surface elements, the acoustic power propagated by the equivalent sources c through $\Gamma_W$ is determined by

$W = \frac{1}{2}\, \Re e\{v_n^H A p\} = \frac{1}{2}\, \Re e\{(T_{\Gamma_W} c)^H A\, G_{\Gamma_W} c\},$ (15)

where $T_{\Gamma_W}$, $G_{\Gamma_W}$ are the free field propagation matrices of c on $\Gamma_W$, respectively for normal velocity and pressure, and $A \in \mathbb{R}^{N_W \times N_W}$ is a diagonal matrix containing the areas of the $\Gamma_W$ surface elements. From that perspective, it seems relevant to favour distributions of equivalent sources with the best radiation efficiency when reducing the size of the inverse problem. In practice this can be achieved by setting the columns of Λ as

$\Lambda = [\lambda_1, \ldots, \lambda_K],$ (16)

where

$\lambda_i = \arg\max_{\substack{\|u\| = 1 \\ u^H \lambda_j = 0,\ j \le i-1}} \frac{1}{2}\, u^H\, \Re e\{Q_2^H T_{\Gamma_W}^H A\, G_{\Gamma_W} Q_2\}\, u.$ (17)

All in all, the matrix $B = Q_2 \Lambda$ combines a projection on the kernel of $T_\Gamma$ with a sorting of the equivalent source distributions by decreasing radiation efficiency on $\Gamma_W$. Through this intermediary step, a physical behaviour is coupled to $Q_2$ and allows to significantly reduce the size of the inverse problem without damaging its accuracy. An illustration of how such principal surfaces radiate, and of the quantitative gain brought by the Λ of Eq.(17), is put forth in section IV E. Another subsidiary benefit lies in the orthogonality between the columns of B. Considering further applications of Galerkin ESM, one could think of decomposing the overall radiated acoustic power on the principal surfaces according to

$W = \sum_{i=1}^{K} w_i, \quad \text{where } w_i = |\hat{d}_i|^2\, \Pi_i,$ (18)

with $\Pi_i$ being the i-th eigenvalue of the matrix $\frac{1}{2} \Re e\{Q_2^H T_{\Gamma_W}^H A\, G_{\Gamma_W} Q_2\}$ in Eq.(17) and $\hat{d}_i$ the i-th coefficient of $\hat{d}$. Such a process is likely to provide some insightful hierarchy in the operating radiative patterns (see Figs. 6, 7 and 8 in the next section for an example on an academic case).
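In practice, the sequential maximizations of Eq. (17) reduce to an eigendecomposition. The sketch below uses the Hermitian part of the operator, so that the quadratic form equals $\frac{1}{2}\Re e\{\cdot\}$ for complex vectors; this implementation choice, like the function names, is an assumption on our part.

```python
import numpy as np

def principal_surface_basis(Q2, T_w, G_w, areas, K):
    """Columns of Lambda from Eq. (17): eigenvectors of the radiation-efficiency
    operator, sorted by decreasing eigenvalue (radiated power)."""
    M = Q2.conj().T @ (T_w.conj().T @ (np.diag(areas) @ (G_w @ Q2)))
    M = 0.25 * (M + M.conj().T)       # (1/2) x Hermitian part: u^H M u = (1/2) Re{...}
    eigvals, eigvecs = np.linalg.eigh(M)
    order = np.argsort(eigvals)[::-1]  # most efficient principal surfaces first
    return eigvecs[:, order[:K]], eigvals[order[:K]]
```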
IV. NUMERICAL VALIDATION

A comparison of Galerkin ESM with other popular acoustic imaging algorithms is proposed in this section. The purpose of the method is to reconstruct acoustic fields outside a rigid surface. To evaluate the methods from that angle, a spherical geometry is chosen: with respect to Fig. 1, Γ is modeled here by a spherical mesh of radius a = 0.3 meters featuring 1280 triangular elements and N = 642 nodes. As they are equally spread over the sphere, this set of nodes allows a 5-elements-per-wavelength condition up to ka = 15.

A. Spherical Related Transfer Functions

The acoustic field radiated around a rigid box (see for example [START_REF] Bouchet | Calculation of acoustic radiation using equivalent-sphere methods[END_REF][START_REF] Koopmann | A method for computing acoustic fields based on the principle of wave superposition[END_REF]) is a common test case for ESM approaches. However, the point of this paper is also to put Galerkin ESM to the test in terms of three-dimensional diffracted source identification based on array imaging. From that perspective, the spherical case offers valuable advantages. First of all, the pressure in the vicinity of a rigid sphere given a monopolar source on its skin is analytically known (Williams (1999, section 8.8)): using the notations of Eq.(2) and considering the sphere centered on the origin, the acoustic transfer function between a unitary monopolar source located at $r_l$ on the spherical surface Γ and a microphone at $r_i$ in the outer field is given by

$H_{il}^s = \frac{-i\rho c}{4\pi a^2} \sum_{n \in \mathbb{N}} (2n+1)\, \frac{h_n(k\|r_i\|)}{h_n'(ka)}\, P_n\!\left(\frac{r_i \cdot r_l}{\|r_i\| \|r_l\|}\right),$ (19)

where $h_n$, $h_n'$ and $P_n$ respectively refer to the spherical Hankel function of order n, its derivative, and the Legendre polynomial of order n. [START_REF] Duda | Range dependence of the response of a spherical head model[END_REF] even proposed an accelerated routine to compute high orders of formulation (19), and Pereira (2013, chapter 2) empirically linked the maximum order $n_{max}$ to the frequency of interest through

$n_{max} > \frac{1.2\,(ka)^2 + 8\, r/a + 1}{r/a}$ (20)

to ensure a truncation error smaller than $10^{-9}$ on the sum. Equation (19) provides the acoustic field resulting from any combination of monopoles on the surface of the sphere, and thus allows to assess Galerkin ESM on monopolar source identification. An observation surface $\Gamma_{obs}$ is set up for the display of the acoustic fields propagated by the equivalent sources from Galerkin ESM and the sources identified through classical acoustic imaging methods. It consists of an additional sphere of radius $a_{obs} = \frac{4}{3} a$ merged with a cutting plane of size 5a (see Fig. 2). This set-up allows to check discrepancies between the obtained propagated fields and the analytical one in both near and far field. For the sake of simplicity, the choice was made to use a regular spherical array of radius $\frac{16}{3} a$ with M = 250 microphones.
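For reference, Eq. (19) can be evaluated with SciPy as follows, assuming $h_n$ denotes the spherical Hankel function of the first kind (consistent with the $e^{-i\omega t}$ convention) built from the spherical Bessel functions; the truncation order n_max would be chosen from Eq. (20).

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def sphere_transfer(r_mic, r_src, a, k, rho, c, n_max):
    """Analytical transfer (Eq. (19)) between a monopole at r_src on a rigid
    sphere of radius a and a field point r_mic, truncated at order n_max."""
    def h(n, z):    # spherical Hankel function of the first kind
        return spherical_jn(n, z) + 1j * spherical_yn(n, z)
    def hp(n, z):   # and its derivative
        return (spherical_jn(n, z, derivative=True)
                + 1j * spherical_yn(n, z, derivative=True))

    r = np.linalg.norm(r_mic)
    cos_gamma = np.dot(r_mic, r_src) / (r * np.linalg.norm(r_src))
    total = 0.0 + 0.0j
    for n in range(n_max + 1):
        total += (2 * n + 1) * h(n, k * r) / hp(n, k * a) * eval_legendre(n, cos_gamma)
    return -1j * rho * c / (4 * np.pi * a**2) * total
```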
Even if its relevance may be questioned given the more complex radiation patterns used here, it remains with e j ∈ C N being the j th element of the canonical basis. The second method is iterative Bayesian Focusing (iBF), a current state-of the-art algorithm initially proposed by [START_REF] Antoni | A Bayesian approach to sound source reconstruction : Optimal basis, regularization, and focusing[END_REF]. The full description of this algorithm as used in the scope of this article is to be found in Antoni et al. (2019, Algorithm 1), featuring a generalized multivariate complex Gaussian as prior density function. iBF was assessed as an imaging approach that provides the full cross spectral matrix of the sources, and includes a regularization step to deal with noisy measurements or poorly conditioned FRFs. In the end, the comparison study features five different approaches to reconstruct the acoustic field: 1. CBF between N monopole sources on the mesh Γ and the array, featuring the free-field propagation of Eq.( 2) in the transfer matrix H of Eq.( 21). 2. CBF involving the analytical transfer Eq.( 19) in H. With these first two is tested the usability of CBF for sound field synthesis and its ability to handle non free-field FRFs. 3. iBF between N monopole sources on Γ and the array, with free-field FRF and without any prior hypothesis on the sources in terms of regularization. This one serves at evaluating what a turnkey version of iBF produces. 4. Galerkin ESM as described in section II, with N s = 3N = 1926 and K = N s -N = 1284 in a first stage, to maintain a margin regarding Eq.( 13). No principal surfaces for the validation in comparison with other imaging algorithms to remain general. The equivalent sources are distributed inside the mesh according to classical ESM literature guidelines : half of it are placed on a 85% scaled replica of Γ and the other half randomly located but at least 0.15a m away from the control points (see [START_REF] Leblanc | A wave superposition method based on monopole sources with unique solution for all wave numbers[END_REF]. 5. Lastly iBF again, but with the use of the analytical FRF in H and a strong sparsity constraint (L 1 regularization). It should be noted that this last approach differs from the first four as it is the only one that features a suitable a priori on the sources. It aims at displaying the maximum degree of accuracy reachable on the reconstructed scattered field when the inverse method perfectly fits with the ground truth source distribution. The aim of such a methodology is multiple: Checking the error induced by approximated FRFs in CBF and iBF when the actual propagation is not free field. Assessing the ability of CBF and iBF to benefit from the use of a perfectly accurate transfer function. Evaluating Galerkin ESM comparatively to what is reachable with blind acoustic imaging approaches (i.e. without proper assumptions on the source distribution) and with the perfectly tuned iBF version. C. Uncorrelated monopolar sources The first test case consists in the reconstruction of the acoustic field produced by N src = 10 uncorrelated monopoles randomly placed on the yellow sphere (on Fig. 2). The cross spectral matrix (CSM) resulting from their combined radiation at the microphones is computed through S pp = H s S qq (H s ) H , ( 22 ) where H s is determined according to Eq.( 19) and S qq is set to the identity matrix in order to describe completely uncorrelated sources of unitary strength. 
Each algorithm presented in sections II and IV B is then fed with this CSM, and the obtained sources are finally backpropagated on a circle of radius 1.5a. The directivities resulting from this process are plotted in Fig. 3. This first configuration is supposed to fall within the range of all the methods assessed, but it already offers interesting preliminary conclusions. Method-wise, it seems that the overall directivity pattern is seized by CBF, iBF and Galerkin ESM alike. It then appears that the use of the accurate FRFs leads to an enhancement at high frequencies, curing discrepancies of some 5 dB for both beamforming and Bayesian Focusing. This item is quite intuitive and shows how the diffraction of the spherical body plays a role in the acoustic transfer and should not be underestimated. With respect to the classical methods, Galerkin ESM (green dotted line in Fig. 3) may be ranked second in terms of accuracy. It outperforms CBF in general and deals with the impact of diffraction in a much better way than iBF with free-field FRFs, but does not compete with the latter when used with spherical transfer functions and induced sparsity. It may be noticed that at both high and low frequencies, iBF with correct transfer functions (yellow dashed line) perfectly manages to reconstruct the reference directivity of the ten sources. This basically means that the number and the disposition of the microphones provide enough information to fully describe the acoustic field. That being said, it should be concluded that Galerkin ESM is by far the best performing algorithm among the approaches without a priori information, even if there is still room for improvement.

D. Correlated monopolar sources

The panel of imaging approaches is now put to the test for the recovery of an acoustic field produced by the same ten monopoles, but this time correlated: the CSM and the reference directivity pattern are obtained in the same way as in the previous section using Eq.(22), except that $S_{qq}$ is no longer the identity matrix. Instead, it gets defined as

$S_{qq} = I_{N_{src}} + \Delta_{N_{src}} + \Delta_{N_{src}}^H,$ (23)

where $\Delta_{N_{src}}$ is an upper triangular matrix randomly filled with 0 or 1 entries. The ten sources on the sphere are thus either fully correlated or uncorrelated with each other, causing a much more complex and uneven directivity pattern than in the previous paragraph. Results are plotted in Fig. 4. As expected, CBF loses relevance when dealing with correlated sources and fails at nailing the interference peaks. Regarding iBF, the scattering effect of the sphere becomes of first order at large ka numbers and the quality of the output is damaged as long as a free field transfer function is used. In that case, levels seem to be underestimated with respect to the reference: inherently to their propagation model, the identified sources interfere through the spherical mesh at the backpropagation step of the process, leading to overall lower levels because of the inappropriate $L_2$ regularization with respect to the source model. Here again, Galerkin ESM does well, being in the efficiency range of the optimal set-up offered by iBF with spherical related transfer functions. Every interference lobe is well rendered at both high and low frequencies; only some local discrepancies with the reference remain.
This uneven aspect likely results from a non-optimal placement of the equivalent sources, which constitutes the main lead for further improvement of the method. To exemplify the potential applications of Galerkin ESM, the backpropagation described in Eq. (14) is carried out and displayed in Fig. 5 for correlated monopoles (only 5, for the sake of readability). Comparing (a) and (b), this result highlights to what extent the free-field assumption leads to misleading interpretations on the left side of the sphere. On this test case, it appears that only iBF with accurate FRFs and Galerkin ESM can provide a reliable backpropagated sound field. All in all, the conclusions regarding the numerical test cases are consistent. In terms of method ranking, it can be stated that:
- Beamforming is unusable at low frequencies for radiated field reconstruction. The poor spatial resolution of the identified sources leads to over-smoothed propagated fields, completely blurring the directivity patterns. At large ka values, the use of accurate FRFs as steering vectors for CBF slightly improves it, but the previous statement still holds, especially when dealing with correlated sources.
- At high frequencies, i.e. when the scattering behaviour of the sphere is of first order, the identification of the outer pressure field with iBF should be carried out with great caution. Without any prior knowledge of the nature of the sources, the generic parametrization of the algorithm with free-field transfer functions and L2 regularization leads to significant inaccuracies. In practice, this means that iBF should be used for array-based field synthesis only when refined transfer functions and well-founded assumptions on the sources are available.
- Galerkin ESM stands out as a powerful alternative for a proper simulation of the acoustic field around the rigid sphere without ground-truth FRFs. Discrepancies with the reference are observable, but they are local and acceptable. The peaks induced by the scattering presence of the sphere are well modeled by the kernel projection added to precondition the free-field transfer matrices, as exposed in section II.

E. Principal surfaces

Given its reasonable size, the spherical case used throughout this section is ideal to evaluate the contribution of the principal surfaces introduced in section III. A 5/4 scaled version of the mesh plotted in Fig. 2 was chosen as Γ_W, discretized in N_W = N = 642 nodes. The Λ matrix of size N_s × N_s was filled with columns computed following Eq. (17), and it is insightful to visually check their radiation patterns as basis vectors for the Galerkin ESM inverse problem. For that purpose, the amplitude of the pressure field radiated by the columns of B on Γ_obs, i.e.

  p_obs^{λ_i} = G_{Γ_obs} B_i,   (24)

was computed for various values of i and displayed in Fig. 6 with Λ being the identity matrix, and in Fig. 7 with Λ obtained from Eq. (17). A relevant point arises from the observation of these figures and confirms the reasoning exposed in section IV E. On the one hand, in terms of acoustic radiation, no obvious hierarchy seems to sort the basis vectors inside B when Λ is the identity. In other words, every column of Q_2 shares the same radiation efficiency, and it is unlikely that K < N_s − N of them can be sampled without any loss of accuracy in the Galerkin ESM outcome. On the other hand, from Fig.
7 it appears that the power-optimized version of Λ leads to much more workable basis functions, with a clear ranking between the most radiating vectors at low i values and the weakest ones at larger indices. The quantitative assessment of this interpretation was conducted and is shown in Fig. 8. The latter exposes the global L2 relative error on Γ_obs between the analytical pressure field and the one propagated by the Galerkin ESM sources, with respect to the number K of columns selected from Λ. Simultaneously is plotted the relative acoustic power of the equivalent sources, integrated on Γ_W, for rising truncation values K in the sum of Eq. (18). It can be stated from this graph that the optimized formulation of Λ drastically improves the convergence speed of Galerkin ESM. On this particular test case, for example, the −13 dB error ratio is obtained for K ≈ 0.8(N_s − N) = 1027 when B is only sampled from Q_2, while the same accuracy can be achieved with K ≈ 0.4(N_s − N) = 513 with the power-oriented version of Λ. This gain represents a significant reduction of the dimension of the inverse problem of Eq. (12) for practical applications of the method. The red lines confirm the point made above with Eq. (18): when Eq. (17) is used for the construction of Λ, most of the acoustic energy is concentrated in the first coefficients of d, while at least 85% of them are required to reconstruct the same energy levels when Λ is simply the identity matrix.

F. Computational efficiency

The results above on the spherical test case were computed with Python 3.7 on an Intel Core i7-8750H (2.2 GHz). Averaged over 25 runs of the code, 1.7 seconds per frequency were needed for the QR decomposition, 1.2 seconds for the BΛ calculation in Eq. (17) and 2.3 seconds for the GB regularized inversion. Given the results displayed in the current section, the fair comparison would be with iBF combined with realistic FRFs. Considering a generic test case, iBF itself is of the same complexity as the GB inversion, but the FRF simulation has to be achieved with FEM, BEM or classical ESM, and it is likely that, in the end, the overall performance does not surpass that of Galerkin ESM.

V. EXPERIMENTAL VALIDATION

This last section lays out an experimental validation of Galerkin ESM. The aim of this final point is to gauge the behaviour of the method when dealing with actual microphone array measurements over a large frequency range. The chosen case is a wooden mock-up flush-mounted with omnidirectional sources, as described in Fig. 9. The latter is modeled with a triangular mesh of N = 2377 vertices. Its characteristic length equals L = 0.77 m, and the planar array features M = 36 analog microphones placed 12 cm from the source plane. Similarly to what was proposed in section IV, sources A and C were fully correlated while the last one remained uncorrelated. The measurement is a 9.84-second record sampled at f_s = 25600 Hz. The session took place in a semi-anechoic room with a locally treated ground, and the array signals were post-processed using the Welch periodogram with a Hann window. The block size was set to 2560 samples with a 50% overlap, to reach a 10 Hz frequency resolution up to 12800 Hz. Since no analytical transfer function is available for such a geometry, the relevance of this last section precisely lies in this transfer-function computation step: whatever the choice of the numerical method (ESM, BEM, FEM, etc.), computing transfer functions requires numerical resources and may induce additional errors for complex meshes.
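As a side note, the measured CSM described above can be estimated with standard tools. The sketch below, in which the microphone signals are mere placeholders, reproduces the stated processing parameters (Hann window, 2560-sample blocks, 50% overlap at 25600 Hz, hence a 10 Hz resolution) with scipy.signal.csd.

```python
import numpy as np
from scipy.signal import csd

fs, nperseg = 25600, 2560                    # 10 Hz frequency resolution
noverlap = nperseg // 2                      # 50% overlap
x = np.random.default_rng(2).standard_normal((36, int(9.84 * fs)))  # placeholder signals

M = x.shape[0]
f = np.fft.rfftfreq(nperseg, 1 / fs)
Spp = np.zeros((f.size, M, M), dtype=complex)
for i in range(M):                           # full M x M cross-spectral matrix
    for j in range(M):
        _, Spp[:, i, j] = csd(x[i], x[j], fs=fs, window='hann',
                              nperseg=nperseg, noverlap=noverlap)
```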
The Galerkin ESM proposed in this paper can be seen as an alternative method that includes scattering effects in the model while bypassing this transfer-simulation step. With a view to recovering the sound field propagated by the three sources around the engine, the same methods as in the previous section were applied here: CBF, iBF and Galerkin ESM, with the difference that, this time, only free-field FRFs can be provided to the first two. The numerical modeling of the set-up is exposed in Fig. 10. An illustrative display of the associated synthesized fields is given in Fig. 11, at a frequency at which scattering effects are likely to be significant (kL ratio far greater than one). Preliminary observations of these acoustic maps are enlightening. As expected, CBF is unsuitable to capture the acoustic field produced by the correlated sources A and C. iBF and Galerkin ESM are in accordance on the side of the faces bearing the sources, but differ on the part of Γ_obs where pressure levels are scattered around the mock-up. At first glance, it seems that the acoustic shadow zones caused by the presence of the wooden structure find their most faithful depiction in the Galerkin ESM synthesized pressure field. Another quantitative evaluation on the experimental case is displayed in Fig. 12. It exposes the backpropagated pressure level at a reference microphone placed 12 cm behind the array, in its alignment, as shown in Fig. 9. For the sake of readability, the beamforming curve was post-processed by propagating only the three local maxima corresponding to the A, B and C points of the CBF map. At low ka numbers, all three methods yield good agreement with the measured level, with a small edge for iBF and Galerkin ESM. However, beamforming rapidly tends to be globally inappropriate as frequency rises, and iBF is subject to discrepancies in specific frequency ranges compared to Galerkin ESM. With regard to Fig. 11, this is consistent with the fact that the acoustically masked areas in Ω do differ between the iBF- and Galerkin ESM-issued backpropagations, because of the misleading free-field propagator. Lastly, this plot gives a first empirical clue about the maximum frequency reachable with the chosen number of equivalent sources for our method (here approximately 1800 Hz). As for classical ESM, a sensitivity study will be conducted in further work to define more precise guidelines on that specific issue.

VI. CONCLUSION

Galerkin ESM is introduced as a promising algorithm to reproduce sound fields scattered around diffracting bodies from array measurements. The transcription of a Neumann boundary condition as a matrix kernel inclusion is fully described and integrated into the acoustic imaging inverse problem. A sub-process dedicated to the reordering of the kernel is also proposed, to reduce the dimension of the resulting inverse problem by maximizing the overall radiation efficiency of the equivalent sources. The method is assessed first on numerical data issued from correlated and uncorrelated monopoles flush-mounted on a rigid sphere. Since highly realistic FRFs can be analytically computed for such a geometry, this case first underlines to what extent the free-field assumption becomes hazardous when it comes to reconstructing pressure fields at high ka ratios. From these first benchmarks, Galerkin ESM turns out to outperform CBF and iBF restricted to the free-field model. For the sphere-specific case, it nevertheless remains coarser than the inverse problem supplied with analytical ground-truth FRFs.
Finally, on experimental data, Galerkin ESM backpropagated maps provide insightful refinements of the source directivities, especially compared with what currently used methods are likely to provide. In view of the initial motivation of the paper, i.e. scattered sound field synthesis from microphone array pressures, it appears that the approach pursued here takes on its full relevance for this type of industrial application. Indeed, a complete set of realistic transfer functions is seldom available, or only at the expense of costly upstream computations, and it has been shown that Galerkin ESM provides an efficient framework to grasp strongly directive patterns while bypassing the FRF simulation step.

FIG. 1. (Color online) A set of equivalent monopolar sources with complex amplitudes c (blue dots) is sought to match both the impedance condition z on the skin of the body and the pressure p measured by a microphone array at a given frequency.

Algorithm 1: Step-by-step description of Galerkin ESM. Require: array measurements p ∈ C^M, N node positions on Γ, N_s equivalent source positions in Ω, and a region of backpropagation Γ_obs featuring N_obs points. Ensure: M ≤ N_s − N and Γ

FIG. 2. (Color online) Geometrical set-up for validation. The yellow surface is the control point support Γ, the transparent grey surface is Γ_obs and the cloud of points around it is the microphone array.

FIG. 3. (Color online) Directivity pattern of 10 uncorrelated monopoles on a rigid sphere at ka = 1 (top) and ka = 13 (bottom), computed through various imaging algorithms. At low frequency, both Galerkin ESM and iBF match the reference (all 4 plots are overlapped).

FIG. 4. (Color online) Directivity pattern of 10 correlated monopoles on a rigid sphere at ka = 1 (top) and ka = 13 (bottom), computed through various imaging algorithms. At low frequency, both Galerkin ESM and iBF match the reference (all 4 plots are overlapped).

FIG. 5. (Color online) Acoustic pressure (amplitude) on Γ_obs radiated by 5 randomly correlated monopoles at ka = 13 (with a 20 dB dynamic range). (a) Analytical sound field. (b) Sound field propagated by sources identified with iBF featuring free-field FRFs. (c) Sound field propagated by sources identified with iBF featuring analytical FRFs. (d) Sound field propagated by sources identified with Galerkin ESM.

FIG. 6. (Color online) Pressure amplitudes on Γ_obs radiated by the i-th column of Q_2 Λ (Λ being the identity matrix). (a) i = 1, (b) i = 10, (c) i = 100, (d) i = 250, at ka = 8.24. Plots are displayed within a 30 dB dynamic range, and the red dot indicates the maximum level location.

FIG. 10. (Color online) Numerical set-up for the experimental case. The yellow surface is the control point support Γ, the transparent grey surface is Γ_obs and the 36 black dots represent the microphone array placed 12 cm from the mock-up.

FIG. 11. (Color online) Acoustic pressure (amplitude) backpropagated on Γ_obs at kL = 21 (relative levels with a 30 dB dynamic range). (a) Sound field propagated by sources identified with CBF featuring free-field FRFs. (b) Sound field propagated by sources identified with iBF featuring free-field FRFs. (c) Sound field propagated by sources identified with Galerkin ESM.
ACKNOWLEDGMENTS

The authors would like to thank Thibaut Le Magueresse (from Amiral Technologies, France, and formerly from MicrodB) for making the experimental results possible, and for his early-stage intuitions on involving equivalent sources directly in the imaging inverse problem.
This memoir is an introduction to an habilitation dossier composed of articles written between 1994 and 1998 and referenced below [a, b, c, d, e, f, g, h, i, j]. In addition, an appendix contains two expository articles, referenced [x, y], intended mainly for researchers in neighbouring disciplines. Few technical arguments will be found in this memoir. I have tried, instead, to give the motivations for this work, to put the various articles in perspective, to give examples, and to show what a continuation of this research could look like. The articles [a, b, c, d, e, f, h, i] have been published in journals or conference proceedings. The article [g], on the other hand, has only been submitted for publication. The article [j] is an invited paper at the symposium First order theorem proving, held in Vienna in November 1998. The articles published in journals appear in this dossier in their final form. Those published in conference proceedings or submitted for publication appear in extended versions.

First of all, I wish to thank Mr Guy Cousineau for the honour he does me by chairing this committee. Teaching in the DEA Programmation that he directs has been for me a constant source of questions and of self-questioning. I then wish to thank Messrs Henk Barendregt, Pierre-Louis Curien and Jean-Pierre Jouannaud, who read these pages with great patience. I owe to Henk Barendregt, among other things, having pointed out to me the passage in which Poincaré contrasts proof and verification, as well as several "informal" discussions on the relations between deduction and computation at the Types symposiums. I am grateful to Pierre-Louis Curien for the interest he took very early on in the use we made of explicit substitutions, which is, in appearance at least, rather far from what he had invented them for. Jean-Pierre Jouannaud invited me several times to the seminar he runs at Orsay. In these talks I always found a stimulating demand for rigour. I am also very grateful to Messrs Paul Gochet, Gilles Kahn and Per Martin-Löf for the interest they take in this work by forming the committee that examines it. The clarity of Paul Gochet, who did much to make Quine known to computer scientists and automated deduction known to philosophers, is an example to me.
The encyclopedism of Gilles Kahn is a model difficult to imitate. Many conversations with Per Martin-Löf taught me a certain sense of synthesis, and also of caution. All my gratitude goes to Gérard Huet, who accepted the role of research advisor for this habilitation. The influence of his research on this work, which touches on higher-order resolution, on rewriting and on the functional interpretation of proofs, is immense. This work, which is largely collective, could never have come into being had I not met Thérèse Hardin, Claude Kirchner, Frank Pfenning and Benjamin Werner, with whom many of the articles that make up this dossier are co-signed. I am aware of how lucky I have been to be able to work with each of them, and of what they have taught me. Finally, this work could not have been done either had I not had the chance to work in the Coq project at INRIA, directed by Christine Paulin. The intellectual richness and the freedom provided by this exceptional working environment deserve to be preserved.

I believe that two and two are four, Sganarelle, and that four and four are eight.
Molière, Dom Juan

Like any mathematical proposition, the proposition 2 + 2 = 4 can be proved. Figure 1 gives a possible proof. Some, however, like H. Poincaré, dispute that such a piece of reasoning is truly a proof: "One cannot deny that this reasoning is purely analytic. But ask any mathematician: 'It is not a proof properly speaking, he will reply, it is a verification'" [81]. The problem of knowing whether two and two make four, or more generally of finding the sum of two numbers, their product or their greatest common divisor, is indeed of a very different nature from that of knowing whether, for example, a square can be the double of another, or the sum of two nth powers an nth power. Solving the first problem only requires carrying out a computation, whereas solving the second requires constructing a piece of reasoning, for the quantification over an infinite collection of objects makes verification impossible. Assyro-Babylonian and Egyptian mathematics essentially consisted of computation methods: methods for carrying out accounting operations or for computing agricultural areas, and so on. Then Greek geometry introduced the notion of indefinite space; it then ceased to speak of this or that agricultural plot, and began to speak of abstract objects: triangles, circles, and so on. Likewise, arithmetic ceased to speak of this or that quantity, and began to speak, it too, of abstract objects: numbers. With these abstract objects came new problems that computation alone could not solve. A new tool was then introduced: reasoning. Euclid's Elements long remained the prototype of the mathematical method: solving problems by reasoning. It is still this point of view that dominates, at the beginning of the twentieth century, in the works of G. Peano, G. Frege, D. Hilbert, E. Zermelo, A.N. Whitehead, B. Russell or N. Bourbaki. In Peano's arithmetic, for example, there is no computation rule that makes it possible to verify that 2 + 2 = 4, but there are axioms that make it possible to prove this proposition.
The specificity of this problem, which can be solved by a simple computation, is erased in favour of a uniform method: reasoning. The same holds in Frege's Begriffsschrift, Hilbert's geometry, Zermelo's set theory, Whitehead and Russell's Principia mathematica or Bourbaki's Éléments de mathématique. While reasoning dominates in these works, which all concern the formalization of mathematics or of one of its branches, one may note that in informal mathematics, by contrast, an important place has always been given to the construction of computation methods, as witnessed by the algorithms for multiplying numbers in decimal notation (al-Uqlidisi, ...), for solving algebraic equations, and so on. This paradox is found again within logic. The notion of computation is, for example, central in Hilbert's program (finding an algorithm that decides whether a mathematical proposition is a theorem or not) and in the theory of computability, which makes it possible to answer it negatively. This notion is also central in the study of intuitionistic proofs, for example in the termination proof of the cut elimination algorithm, which transforms each existence proof into a witness of this existence. Yet, while logicians are interested in computation, the notion of proof they use, which is that of first-order logic, rests, for its part, on reasoning alone. Since the advent of computers, the notion of computation has enjoyed a certain revival. Beyond mere fashion, this renewed interest is explained, very concretely, by the fact that the solution of certain problems in computer science requires using a formalization of mathematics that gives a place to computation, unlike the traditional formulation of set theory. The first of these problems is automated theorem proving, that is, the construction of programs that search for proofs of mathematical propositions. Using formalizations of mathematics resting on reasoning alone leads to the following paradox: when such a program is asked to solve the problem "Is 2 + 2 = 4?", it does not see that this problem can be solved by a simple computation, and it tries to prove the proposition 2 + 2 = 4 using potentially all the axioms of mathematics. Searching for such a proof is naturally a much more costly operation than carrying out a simple addition. Another problem is proof checking. When checking the correctness of a proof that uses the fact that 2 + 2 equals 4, one may find it paradoxical to have to check a proof of this fact, when one can directly carry out the computation and check the fact itself. Another example of a problem is programming in mathematical language [64, 79, y]. It has become commonplace to remark that a program which associates a value (the output value) with one or several other values (the input values) is nothing but a function, in the mathematical sense of the term.
For example, the program that computes the monthly payments of a loan, as a function of the amount borrowed, the number of monthly payments and the interest rate, is nothing but the function

  f = e, n, t → (e t) / (1 − 1/(1 + t)^n)

Mathematical language being well suited to expressing functions, some propose to use it as a programming language. If one borrows 5000 € over 24 months at the monthly rate of 0.64% (which corresponds to an annual rate of 8%), the monthly payments will be f(5000, 24, 0.0064) €. Alas, it is not this piece of information that is useful, but the one saying that the monthly payments will be 225 €. Executing programs written in mathematical language thus requires a formalization of mathematics that distinguishes an arbitrary term, such as f(5000, 24, 0.0064), from a value, such as 225, and that makes it possible to compute the term f(5000, 24, 0.0064) into the value 225. A last example is computer algebra. Here again, it is not enough to know that the derivative of the function x → sin² x is the function x → 2 sin x cos x; the term (D (x → sin² x)) must compute to the term x → 2 sin x cos x. Formal systems integrating reasoning and computation have been developed in answer to these problems: G. Plotkin [Plotkin, Building-in equational theories] and P.B. Andrews [3] propose, in the setting of automated theorem proving, to build certain axioms into unification, which makes it possible to remove them from the set of axioms one reasons with; the language of the Automath system [de Bruijn, A survey of the project Automath], the type theory of P. Martin-Löf [Martin-Löf, Intuitionistic type theory], the Calculus of Constructions [Coquand, The calculus of constructions] and the Calculus of Inductive Constructions [Paulin-Mohring, Inductive definitions in the system Coq: rules and properties][Werner, Une théorie des constructions inductives] have a conversion rule that allows computing inside a proposition without changing its proof; second-order functional arithmetic [Krivine, Programming with proofs][Krivine, Lambda-calcul, types et modèles] makes it possible to use equational axioms without this appearing in the proofs. H. Barendregt and E. Barendsen [Barendregt and Barendsen, Autarkic computations in formal proofs] proposed to call "Poincaré principle" this principle, found here and there, which consists in computing inside a proposition without changing its proof. The aim of the collection of articles gathered in this dossier is to propose, and to study the properties of, a set of deduction rules, deduction modulo, which integrates computation and reasoning in the most general possible way, independently of any particular formalization of mathematics (set theory, second-order functional arithmetic, simple type theory, Martin-Löf's type theory, the Calculus of Constructions, the Calculus of Inductive Constructions, ...) or of any particular application (automated theorem proving, proof checking, programming in mathematical language, computer algebra, ...).

Chapter 1: Deduction modulo

The rules

The notions of language, term and proposition of deduction modulo are the same as those of first-order logic. In contrast, in this formalism a theory consists not only of a set of axioms, but also of a congruence ≡ defined on terms and propositions. Congruent propositions are identified. The modus ponens rule, for example, is no longer expressed in the traditional way

  A ⇒ B    A
  -----------
       B

but is formulated so as to take into account the case where the premises are not identical to A ⇒ B and A, but only congruent to them. A more general formulation will in fact be useful in what follows:

  C    A
  ------- if C ≡ (A ⇒ B)
     B

The natural deduction modulo rules are given in Figure 1.1 and those of the sequent calculus modulo in Figure 1.2. We proposed the sequent calculus modulo in [g], then natural deduction modulo in [h]. As an example, here is a proof, in arithmetic, in natural deduction modulo, of the proposition "4 is even", ∃x (2 × x = 4):

  ⊢≡ ∀x x = x            axiom
  ⊢≡ 4 = 4               (x, x = x, 4) ∀-elim
  ⊢≡ ∃x (2 × x = 4)      (x, 2 × x = 4, 2) ∃-intro, since 2 × 2 = 4 ≡ 4 = 4

For proof checking to remain decidable, the information removed from proofs must be recoverable.
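The distinction between the term f(5000, 24, 0.0064) and the value 225 is easy to make concrete; the following sketch simply evaluates the loan function given above (the rounding to whole euros is our own, illustrative choice).

```python
def monthly_payment(e, n, t):
    """f(e, n, t) = e * t / (1 - (1 + t)**(-n)): payment for a loan of e
    over n monthly instalments at monthly rate t."""
    return e * t / (1 - (1 + t) ** (-n))

print(round(monthly_payment(5000, 24, 0.0064)))  # 225
```

A formalization in which the term f(5000, 24, 0.0064) computes to 225 makes precisely this evaluation part of the congruence rather than of the reasoning.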
The congruence ≡ must therefore be decidable. Moreover, the witness must be given when applying the elimination rule of the universal quantifier and the introduction rule of the existential quantifier. Indeed, in the absence of these witnesses, the decidability of proof checking would require not only the decidability of the congruence ≡ itself, but also that of matching modulo this congruence, since one would have to be able to decide whether a proposition is an instance of another modulo ≡ or not.

Figure 1.1 - Natural deduction modulo (excerpt)

  Γ ⊢≡ C    Γ, A ⊢≡ B
  -------------------- (x, A) ∃-elim, if C ≡ (∃x A) and x ∉ FV(Γ, B)
        Γ ⊢≡ B

  -------- excluded middle, if A ≡ (B ∨ ¬B)
  Γ ⊢≡ A

Figure 1.2 - Sequent calculus modulo (excerpt)

  Γ, A ⊢≡ Δ
  ----------- (x, A) ∃-left, if B ≡ (∃x A) and x ∉ FV(Γ, Δ)
  Γ, B ⊢≡ Δ

  Γ ⊢≡ [t/x]A, Δ
  --------------- (x, A, t) ∃-right, if B ≡ (∃x A)
  Γ ⊢≡ B, Δ

Proposition 1.1.1. If the congruence ≡ is decidable, there is an algorithm that takes as arguments a proposition and a derivation (in natural deduction modulo or in sequent calculus modulo) and tells whether the derivation is a correct proof of the proposition or not.

However, nothing prevents one from considering, abstractly, "proofs" modulo an undecidable congruence, or one whose decidability is an open problem, and from studying some of their properties. This is, for example, the case of set theory, which we shall discuss in Chapter 3. If such a rewrite system terminates and is confluent, the congruence is decidable.

The congruences

Equations between terms

When using a commutative operation, one sometimes wants to identify the propositions P(x + y) and P(y + x), that is, to consider that the permutation of terms is a matter of computation and not of reasoning, even though the rule x + y → y + x does not terminate. One therefore considers congruences defined by rewrite rules and equations between terms.

Rewriting propositions

These various congruences are defined by rewrite rules and equations on the terms of the language. The congruence on propositions is induced by the congruence on terms: it is because the terms 2 + 2 and 4 are congruent that the propositions 2 + 2 = 4 and 4 = 4 are too. In many cases, one wants to define congruences directly on propositions. One is led to posit rewrite rules on the propositions themselves, for example the rule

  Succ(x) = Succ(y) → x = y

Note that, since the language of propositions contains binding symbols (the quantifiers), these rewrite systems are combinatory reduction systems (CRS) [Klop, Combinatory reduction systems: introduction and survey]. The form of atomic propositions plays a very minor role in proofs (only the axiom rule is sensitive to it). The possibility of applying a logical rule or not depends essentially on the connectives and quantifiers of the proposition. Hence, congruences that affect only terms have little influence on the possibility of applying a logical rule. The situation is quite different with rules such as the one above: the fact that the proposition x × y = 0 is congruent to the proposition x = 0 ∨ y = 0 makes it possible to prove it using an introduction rule of disjunction. Such rules thus allow a genuine interaction between computation and reasoning.
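As an illustration of a decidable congruence presented by a terminating and confluent rewrite system, here is a minimal sketch, with a toy term encoding of our own, that decides 2 + 2 ≡ 4 by comparing normal forms computed with the rules x + 0 → x and x + S(y) → S(x + y).

```python
# Rewrite rules for addition on Peano numerals: x + 0 -> x, x + S(y) -> S(x + y)
def norm(t):
    """Normalize a term; terms are ('0',), ('S', t) or ('+', t, u)."""
    if t[0] == '+':
        x, y = norm(t[1]), norm(t[2])
        if y == ('0',):
            return x
        if y[0] == 'S':
            return ('S', norm(('+', x, y[1])))
        return ('+', x, y)
    if t[0] == 'S':
        return ('S', norm(t[1]))
    return t

def congruent(t, u):
    """Decide t === u by comparing normal forms: the computation half of checking."""
    return norm(t) == norm(u)

two = ('S', ('S', ('0',)))
four = ('S', ('S', two))
print(congruent(('+', two, two), four))  # True: 2 + 2 = 4 is checked by computation alone
```

This is exactly the information a proof checker modulo can recompute instead of demanding a proof of 2 + 2 = 4.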
In the articles gathered in this dossier, we restricted ourselves to rules that rewrite atomic propositions to arbitrary propositions. This deprives us of the possibility of applying the propositional simplification rules proposed by J. Hsiang [Hsiang, Refutational theorem proving using term-rewriting systems]. The advantage of such a restriction is that, to be able to apply a deduction rule, it suffices to reduce the proposition under consideration. This would not be the case if we had, for example, a rule of the form

  A ∧ B → C

In that case, the proposition C being congruent to the proposition A ∧ B, it could be proved using the introduction rule of conjunction; but to put this proposition C in the form of a conjunction, the rule would have to be applied "backwards", which is contrary to the intuition of rewriting. The congruences we consider are therefore defined by
- a set E of rewrite rules and equations between terms or between atomic propositions,
- a set R of rewrite rules that rewrite atomic propositions to arbitrary propositions.

The equivalence lemma

The first property of these deduction systems modulo is the equivalence lemma proved in [g]. This lemma states that, from the point of view of provability, a theory modulo is always equivalent to a first-order theory. From the point of view of provability, deduction modulo is therefore not a new logic; it is simply a new formulation of first-order logic.

The notion of model for deduction modulo

The equivalence lemma allows us to define simply the notion of model for deduction modulo: the models of a theory modulo (≡, Γ) are defined as the models of T, Γ, where T is the first-order theory given by the equivalence lemma. This definition can also be formulated as follows.

Definition 1.4.1. A model of (≡, Γ) is a model of Γ such that if P ≡ Q then P and Q have the same denotation.

From K. Gödel's soundness and completeness theorems for first-order logic, one deduces the following propositions.

Proposition 1.4.1 (Soundness). If Γ ⊢≡ P then P is valid in all models of (≡, Γ).
Proposition 1.4.2 (Completeness). If P is valid in all models of (≡, Γ) then Γ ⊢≡ P.

With this notion of model, two congruent propositions necessarily have the same denotation, but two congruent terms need not. However, the denotations of two congruent terms cannot be distinguished by any predicate. As in the case of the theory of equality, the completeness lemma can be strengthened by restricting attention to a smaller class of models.

Definition 1.4.2. An equational model of (≡, Γ) is a model of Γ such that if P ≡ Q then P and Q have the same denotation, and if t ≡ u then t and u have the same denotation.

Clearly, all equational models of a theory (≡, Γ) are models of this theory. One shows that for every model of (≡, Γ) there is an equational model validating the same propositions. From this one deduces the soundness and completeness lemmas.

Proposition 1.4.4 (Soundness). If Γ ⊢≡ P then P is valid in all equational models of (≡, Γ).
Proposition 1.4.5 (Completeness). If P is valid in all equational models of (≡, Γ) then Γ ⊢≡ P.
The case of congruences presented by a rewrite system

When the congruence is presented by a rewrite system RE, a smaller theory T can be taken in the equivalence lemma: for each rewrite rule l → r or equation l = r of RE, one takes the universal closure of the proposition l = r or l ⇔ r, according to whether l and r are terms or propositions. This requires having an equality predicate and the corresponding axioms. If the theory does not contain such a predicate, it can be added in a conservative way. One also obtains a simpler characterization of the equational models as the models of the above theory in which equality denotes equality. Equivalently, they can be characterized as the models such that, for every rewrite rule l → r or equation l = r of RE, l and r have the same denotation.

Chapter 2: Simple type theory

Two examples of theories modulo deserve particular attention: simple type theory, also called higher-order logic, and set theory. This chapter is devoted to simple type theory. The interest of this theory is that many of the properties and algorithms of simple type theory (in particular proof normalization and higher-order resolution) will be particular cases of properties and algorithms of deduction modulo.

Naive set theory

Naive set theory, which is inconsistent, posits an axiom scheme, the comprehension scheme, stating the existence of all sets definable by a proposition P:

  ∀x1 ... ∀xn ∃y ∀w (w ∈ y ⇔ P)

where x1, ..., xn are the free variables of P except w. To obtain a language of terms, this scheme can be skolemized. One then introduces infinitely many function symbols. Writing R for the Skolem term associated with the proposition ¬(w ∈ w), one obtains the rule

  v ∈ R → ¬(v ∈ v)

and thus, writing A for the proposition R ∈ R,

  A → ¬A

which makes it possible to build an infinite reduction sequence A → ¬A → ¬¬A → ... Moreover, this theory is inconsistent, since the proposition ⊥ can be proved:

  A ⊢≡ ¬A   axiom (since A ≡ ¬A)      A ⊢≡ A   axiom
  -------------------------------------------------- ¬-elim
                       A ⊢≡ ⊥
                       ------- ¬-intro
                       ⊢≡ ¬A

and, by the same derivation read modulo A ≡ ¬A, ⊢≡ A; a final ¬-elim then gives ⊢≡ ⊥.

Simple type theory

Simple type theory resolves Russell's paradox by classifying objects according to the number and nature of their arguments: simple type theory is a many-sorted theory. Like naive set theory, simple type theory has terms expressing sets, but it also has terms expressing relations, and functions as well. Sets are then simply identified with relations of a single argument. If R is a relation term over objects of sorts T1, ..., Tn and t1, ..., tn are terms of sorts T1, ..., Tn, the proposition expressing the fact that the terms t1, ..., tn are related by R is usually written R(t1, ..., tn). As in naive set theory, if one wants to stay within a first-order language, a symbol ∈_{T1,...,Tn} must be introduced, and this proposition written ∈_{T1,...,Tn}(R, t1, ..., tn). Likewise, if f is a function term applying to objects of sorts T1, ..., Tn to give an object of sort U, and t1, ..., tn are terms of sorts T1, ..., Tn, the term expressing the result of applying f to t1, ..., tn is usually written f(t1, ..., tn).
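Returning to the naive-set-theory rule above, the non-termination of the congruence is easy to observe. In this sketch, with an ad hoc encoding of propositions, each pass rewrites R ∈ R one step further, and the term never reaches a normal form.

```python
# The skolemized comprehension for P = "not (w in w)" yields the rule
#   (v in R)  ->  not (v in v)
# Applying it at v = R makes the reduction loop: A -> not A -> not not A -> ...
def step(p):
    """One rewrite step on propositions built from ('in', a, b) and ('not', q)."""
    if p == ('in', 'R', 'R'):
        return ('not', ('in', 'R', 'R'))
    if p[0] == 'not':
        return ('not', step(p[1]))
    return p

p = ('in', 'R', 'R')
for _ in range(4):      # the loop has to be bounded by hand: there is no normal form
    p = step(p)
print(p)  # ('not', ('not', ('not', ('not', ('in', 'R', 'R')))))
```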
Here again, if one wants to stay within a first-order language, a symbol α_{T1,...,Tn,U} must be introduced, and this term written α_{T1,...,Tn,U}(f, t1, ..., tn). When n = 0, the symbol α_U is of little interest, since the term α_U(t) is redundant with the term t. The symbol ∈_ϕ, also written ε, is by contrast much more interesting, since it makes it possible to distinguish the term t from the proposition ε(t). The term t carries the same information as the proposition ε(t), but in the form of a term. It is called the content of the proposition ε(t). This symbol ε, and correlatively the distinction between propositions and their contents, is the main difference between this presentation of type theory and the more traditional presentations [Church, A formulation of the simple theory of types], [6], which insist less on the possibility of expressing this theory as a first-order theory or a theory modulo. We shall return to this symbol in section 2.3. In type theory, it is customary to identify the relation R with the function that maps n objects t1, ..., tn to the content of the proposition R(t1, ..., tn). It is also customary to curry functions, that is, to identify the function f with the function mapping x1 to the function mapping x2, ..., xn to f(x1, x2, ..., xn). In this way, all functions can be regarded as functions of a single argument. Thus, the sorts classifying objects can simply be defined inductively as the smallest set containing ι (the sort of basic objects) and o (the sort of propositional contents) and closed under the arrow →, which forms the sort of functions from one sort to another. These sorts are called simple types. Following usage, we write T1 → ... → Tn → U for the type T1 → (T2 → ... → (Tn → U)...). In this presentation of type theory, only the function symbols α_{T,U} and the predicate symbol ε are needed. The term α_{T1,...,Tn,U}(f, t1, ..., tn) is written α_{Tn,U}(... α_{T1,T2→...→Tn→U}(f, t1), ..., tn). Likewise, the proposition ∈_{T1,...,Tn}(R, t1, ..., tn) is written ε(α_{Tn,o}(... α_{T1,T2→...→Tn→o}(R, t1), ..., tn)). Following usage, in the examples, we write (t u) for the term α_{T,U}(t, u) and (t u1 ... un) for the term (...(t u1) ... un). As in naive set theory, a comprehension axiom scheme is posited

  ∃R ∀x1 ... ∀xn (ε(R x1 ... xn) ⇔ P)

where all the free variables of the proposition P are among x1, ..., xn. Skolemizing this scheme introduces, for each proposition P, a symbol {x1, ..., xn | P} together with the axiom

  ∀x1 ... ∀xn (ε({x1, ..., xn | P} x1 ... xn) ⇔ P)

and, for each term t, a symbol x1, ..., xn → t together with the axiom

  ∀x1 ... ∀xn ((x1, ..., xn → t) x1 ... xn) = t

These axioms can then be turned into rewrite rules

  ε({x1, ..., xn | P} y1 ... yn) → [y1/x1, ..., yn/xn]P
  ((x1, ..., xn → t) y1 ...
yn) → [y1/x1, ..., yn/xn]t

One can show that an equivalent theory is kept by restricting to certain instances of these schemes, or, equivalently, to certain Skolem symbols:
- S_{T,U,V} = x, y, z → ((x z) (y z)), where x is of sort T → U → V, y of sort T → U and z of sort T,
- K_{T,U} = x, y → x, where x is of sort T and y of sort U,
- ⇒̇ = {x, y | ε(x) ⇒ ε(y)},
- ∧̇ = {x, y | ε(x) ∧ ε(y)},
- ∨̇ = {x, y | ε(x) ∨ ε(y)},
- ¬̇ = {x | ¬ε(x)},
- ⊥̇ = {| ⊥},
- ∀̇_T = {x | ∀y ε(x y)},
- ∃̇_T = {x | ∃y ε(x y)}.

The language of simple type theory thus contains the individual symbols
- S_{T,U,V} of sort (T → U → V) → (T → U) → T → V,
- K_{T,U} of sort T → U → T,
- ⇒̇, ∧̇ and ∨̇ of sort o → o → o,
- ¬̇ of sort o → o,
- ⊥̇ of sort o,
- ∀̇_T and ∃̇_T of sort (T → o) → o,

the function symbols α_{T,U} of rank (T → U, T, U), and the predicate symbol ε of rank (o). The rewrite rules of simple type theory are those of Figure 2.1:

  (S x y z) → ((x z) (y z))
  (K x y) → x
  ε(⇒̇ x y) → ε(x) ⇒ ε(y)
  ε(∧̇ x y) → ε(x) ∧ ε(y)
  ε(∨̇ x y) → ε(x) ∨ ε(y)
  ε(¬̇ x) → ¬ε(x)
  ε(⊥̇) → ⊥
  ε(∀̇ x) → ∀y ε(x y)
  ε(∃̇ x) → ∃y ε(x y)

Figure 2.1 - The rewrite rules of simple type theory

This rewrite system is confluent because it is orthogonal. We show in [e] that it terminates strongly, by exhibiting an embedding into the language of simply typed combinators, which terminates strongly by W. Tait's theorem [Tait, Intensional interpretation of functionals of finite type I]. The congruence is therefore decidable. In this theory, two notions of function are used, which must not be confused. The individual symbol S, for example, is a term, and as such it has a sort. This term expresses an object of the theory which happens to be a function. Its sort is therefore a functional type. The function symbol α_{T,U}, by contrast, is not a term and does not express an object of the theory, but a function associating an object of the theory with objects of the theory. The sorts of the arguments and of the image of this function are given by the rank of this symbol. This situation is similar to that of set theory, where it is customary to distinguish sets, as objects of the theory, from collections of objects of the theory. This theory can then be extended by adding further axioms: the axiom of infinity, the axiom of descriptions, the axiom of choice and the extensionality axioms. The extensionality axioms can be formulated as follows:

  ∀f ∀g ((∀x (f x) = (g x)) ⇒ f = g)
  ∀x ∀y ((ε(x) ⇔ ε(y)) ⇒ x = y)

2.3 The symbol ε

Propositional contents

The main difference between this presentation of type theory and the more traditional presentations, which insist less on the possibility of expressing this theory as a first-order theory or a theory modulo, is the introduction of the symbol ε and, correlatively, the distinction between propositions and their contents, and between the connectives and quantifiers (for example, ⇒) and their contents (for example, ⇒̇). This distinction is essential for expressing type theory as a first-order theory or a theory modulo, since in such a theory one can quantify only over objects; hence, to quantify over propositions, these propositions must be reified, just as unary predicates are reified in naive set theory.
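A sketch of the first two rules of Figure 2.1 follows, with terms encoded as nested application pairs; the names and the encoding are our own, and the innermost strategy used here terminates on simply typed terms, consistently with the strong termination result cited above.

```python
# The first two rules of Figure 2.1, on terms encoded as nested ('app', f, a) pairs:
#   (S x y z) -> ((x z) (y z))        (K x y) -> x
def nf(t):
    """Innermost normalization with respect to the S and K rules."""
    if isinstance(t, tuple):
        f, a = nf(t[1]), nf(t[2])
        if isinstance(f, tuple) and f[1] == 'K':          # (K x) a  ->  x
            return f[2]
        if (isinstance(f, tuple) and isinstance(f[1], tuple)
                and f[1][1] == 'S'):                      # ((S x) y) a -> ((x a) (y a))
            return nf(('app', ('app', f[1][2], a), ('app', f[2], a)))
        return ('app', f, a)
    return t

def app(*ts):
    """Left-associated application: app(a, b, c) = ((a b) c)."""
    t = ts[0]
    for u in ts[1:]:
        t = ('app', t, u)
    return t

print(nf(app('S', 'K', 'K', 'a')))  # 'a': S K K behaves as the identity
```

(S K K) thus behaves as the identity, a classical consequence of the two rules.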
This construction is carried out in [b]. One might object that, since in type theory there is a perfect correspondence between propositions and certain terms, one could dispense with propositions, that is, regard propositions as terms like any others and use only two kinds of expressions: the term t and the metaproposition ⊢ t asserting that t is provable. This is, for example, the point of view of A. Church [Church, A formulation of the simple theory of types] and of Andrews [6]. The drawback of this point of view is that regarding propositions as terms, that is, facts as objects, is not "ontologically neutral". Indeed, while it is easy, and "relatively" neutral, to replace predicate symbols and connectives by function symbols, and thus to turn quantifier-free propositions into terms, expressing propositions formed with quantifiers as terms presupposes, by contrast, heavy ontological choices. Indeed, the symbols ∀ and ∃ must be applied to functions or sets; the definition of these notions must therefore be made to fall within logic rather than within the theory. First-order logic (and deduction modulo), by distinguishing terms (without binders) from propositions (with quantifiers), manages to explain the meaning of the quantifiers (that is, to formulate deduction rules) without using either functions or sets. The notion of deduction is defined in the same way whether or not the theory contains functions and sets. This makes it possible to separate clearly logic, which is ontologically neutral, from the theory, which expresses the choices concerning the objects of discourse. Another advantage is that many properties (completeness, cut elimination, ...) and algorithms (automated theorem proving, ...) can be studied generically, independently of the theory under consideration. The fact that the theory under consideration, such as simple type theory, can express functions and sets, and hence propositional contents, in no way forces us to give up the notion of proposition introduced by first-order logic, nor the general results and algorithms. It merely requires distinguishing propositions from propositional contents.

The language of terms and intensionality

As we said briefly above: in type theory, skolemizing the comprehension schemes yields a language for the objects of type theory in which the identity function can be written x → x and the set of even numbers {x | ∃y x = 2 × y}. However, this language does not contain binding symbols x → ... and {x | ...} in their full generality. We propose here a more flexible variant of this language.

The substitution of function and predicate variables

Some older formulations of type theory (for example, [Church, Introduction to mathematical logic]) include function and predicate variables, but no function or predicate terms properly speaking. For example, the induction axiom scheme is written

  ∀P ((P(0) ∧ ∀x (P(x) ⇒ P(Succ(x)))) ⇒ ∀x P(x))

but there is no term {y | y + 0 = y} that could be substituted for P.
Instead, a new substitution operation is used: substituting a proposition A for P(y1, ..., yn) consists in replacing each proposition of the form P(t1, ..., tn) by the proposition [t1/y1, ..., tn/yn]A. For example, substituting the proposition y + 0 = y for the proposition P(y) in the induction scheme yields the proposition

  (0 + 0 = 0 ∧ ∀x (x + 0 = x ⇒ Succ(x) + 0 = Succ(x))) ⇒ ∀x (x + 0) = x

This notion of substitution is notoriously difficult to define correctly, especially when one goes beyond the setting of second-order logic. Moreover, a new operation is introduced here, which complicates the formalism and makes it difficult to apply general results or algorithms.

The comprehension scheme

In [Henkin, Banishing the rule of substitution for functional variables], L. Henkin proposes to banish the substitution rule for function and predicate variables in second-order logic by adding the comprehension axiom scheme. He shows the equivalence of his system with the "traditional" formulation of second-order logic using this substitution rule. Henkin proves this equivalence without going through an intermediate theory in which the comprehension scheme is skolemized. He therefore does not really introduce a notation for functions and predicates, but only axioms expressing the existence of these objects.

Combinators

Henkin's comprehension scheme can be generalized to type theory. When this comprehension scheme is skolemized, one introduces, for each purely applicative term t whose free variables are among y1, ..., yn, a symbol f_{y1,...,yn,t}, and for each proposition A a symbol P_{y1,...,yn,A}. One then distinguishes the proposition y + 0 = y from the set P_{y,y+0=y}, or {y | y + 0 = y}, and likewise the number y + 0 from the function f_{y,y+0}, or y → y + 0, and ordinary substitution can be used for function and predicate variables. Thus a new proposition is introduced,

  ({y | y + 0 = y}(0) ∧ ∀x ({y | y + 0 = y}(x) ⇒ {y | y + 0 = y}(Succ(x)))) ⇒ ∀x {y | y + 0 = y}(x)

which is equivalent to the proposition

  (0 + 0 = 0 ∧ ∀x (x + 0 = x ⇒ Succ(x) + 0 = Succ(x))) ⇒ ∀x (x + 0) = x

But, unlike in the presentation with a special substitution for predicate variables, it is now necessary to introduce skolemized comprehension axioms, or conversion axioms, which express the meaning of the notations {y1, ..., yn | P} and y1, ..., yn → t and make it possible to show the equivalence of the two propositions above. Because of their resemblance to R.M.J. Hughes's supercombinators [Hughes, Super-combinators: a new implementation method for applicative languages], we called these symbols hypercombinators in [a]. This terminology is, in fact, rather ill-chosen, for these symbols are precisely what H.B. Curry calls combinators in [Curry, The combinatory foundations of mathematical logic][Curry, Combinatory logic] (before this term came to denote rather disparate things, such as the closed terms of the λ-calculus, or the symbols of any first-order language used to express functions). In a presentation using deduction modulo, the conversion axioms can be turned into rewrite rules.
Substituting a combinator for a function or predicate variable and then normalizing the resulting proposition gives the same result as the substitution defined in section 2.4.1. In this way one shows, at least for second-order logic, that the formulation with combinators is a conservative extension of the formulation of section 2.4.1. And it is precisely this decomposition into substitution and reduction that simplifies the formalism. The equivalence of the presentation of type theory with the comprehension scheme and with combinators is a simple consequence of Skolem's theorem (moreover, only the outermost existential quantifiers of the formula are skolemized, which allows a particularly simple equivalence proof). The combinator f_{x1,...,xn,t} may be written x1, ..., xn → t. But the impression of having a binding symbol in the language is deceptive. On the one hand, all the variables of t must be bound in x1, ..., xn → t, so we do not have the term x → y; on the other hand, the term t must not itself contain Skolem symbols. So we have neither the term x → (y → y), nor a fortiori the term x → (y → x), which violates both conditions at once. To build a somewhat more liberal language, one may try to use a more general comprehension scheme. The closed comprehension scheme

  ∃R ∀x1 ... ∀xn (ε(R x1 ... xn) ⇔ A)
  ∃f ∀x1 ... ∀xn (f x1 ... xn) = t

in which all the free variables of A (resp. t) are among x1, ..., xn, is equivalent to the open comprehension scheme, in which this condition is relaxed. But we show in [a] that the languages obtained by skolemizing these two schemes are very different. Skolemizing the instance of the open comprehension scheme

  ∀y ∃f ∀x (f x) = y

introduces a unary function symbol f and the axiom ∀y ∀x (f(y) x) = y. The term f(y), in which the variable y is free, may be written x → y; it is the constant function identically equal to y. One can thus have free variables in abstractions. One must, however, refrain from writing x → x for the term f(x), which is the constant function identically equal to x. This term must be written x′ → x, where the bound variable has been renamed to avoid confusion with the free variable. The symbol f can also be applied to a term containing an abstraction, for example to the term y → y, which gives the term f(y → y), which may be written x → (y → y). On the other hand, one must refrain from writing x → (y → x) for the term f(y → x); it must be written x′ → (y → x). The variable occurring free in y → x can therefore no longer be abstracted. It thus seems impossible to build a function mapping x and y to x by abstracting the two variables one after the other. We show in [a] that, to build this function, it is necessary to use an instance of the binary comprehension scheme

  ∃f ∀x ∀y (f x y) = x

Indeed, this instance of the comprehension scheme is independent of the unary comprehension scheme, since a model of the unary comprehension scheme can be built in which this proposition is not valid. The unary comprehension scheme is therefore weaker than the general comprehension scheme. The ternary comprehension scheme, on the other hand, is equivalent to the general scheme, since the instances corresponding to the combinators S and K suffice. We leave open the problem of the equivalence of the binary comprehension scheme and the general comprehension scheme.
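The observation opening this passage, that substituting a combinator and then normalizing recovers the usual instance, can be made concrete. In the sketch below (encodings ours), the conversion rule ε({y | A} t) → [t/y]A is applied after an ordinary, purely textual replacement of the variable.

```python
# Substituting the combinator {y | y + 0 = y} for the predicate variable P and
# normalizing with  eps({y | A} t) -> A[t/y]  recovers the instance of the scheme.
def subst(prop, var, term):
    """Plain replacement (sufficient here: no binder captures the variable)."""
    if prop == var:
        return term
    if isinstance(prop, tuple):
        return tuple(subst(p, var, term) for p in prop)
    return prop

def norm(prop):
    # conversion rule: eps(({y | A}) t)  ->  A[t/y]
    if isinstance(prop, tuple):
        prop = tuple(norm(p) for p in prop)
        if prop[0] == 'eps' and prop[1][0] == 'app' and prop[1][1][0] == 'compr':
            _, (_, (_, y, body), t) = prop
            return norm(subst(body, y, t))
    return prop

P_comb = ('compr', 'y', ('=', ('+', 'y', '0'), 'y'))   # {y | y + 0 = y}
instance = ('eps', ('app', P_comb, '0'))               # eps({y | y + 0 = y} 0)
print(norm(instance))                                  # ('=', ('+', '0', '0'), '0')
```

Normalizing ε({y | y + 0 = y} 0) thus yields 0 + 0 = 0, the expected instance of the induction scheme.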
The λ-calculus

The λ-calculus, introduced by Church [Church, The calculi of lambda-conversion] and used, also by Church, in type theory [Church, A formulation of the simple theory of types], generalizes the use of binders so that the term x → (y → x) can be formed; it is then written λx λy x. Compared with combinator languages, the λ-calculus brings greater homogeneity, since it becomes possible to bind any variable, including those occurring free in abstractions. One then posits the axiom

  (β) (λx t) u = [u/x]t

which can be turned into a rewrite rule, the β-reduction rule

  (λx t) u → [u/x]t

However, the λ-calculus introduces a new difficulty, namely the need to substitute under abstractions. The formulations of type theory that use combinators can easily be expressed as first-order theories (by introducing application symbols α_{T,U} and a symbol ε). Substitution in terms is then the simple substitution (replacement) of first-order logic. This is no longer the case with the formulation of type theory using the λ-calculus, which again requires a special substitution: when substituting the term u for the variable x in the term λy x, one must substitute u for x under the λ, and when substituting the term y for the variable x in the term λy x, one must, in addition, rename the bound variable y so as to avoid captures. The goal of banishing the substitution rule for function and predicate variables, which had led us to the language of combinators, is thus partially abandoned, since a special substitution must be introduced again. This substitution is the source of many oddities of the formulation of type theory using the λ-calculus, for example the rather subtle treatment of variable scope in G. Huet's higher-order unification algorithm [Huet, A unification algorithm for typed lambda calculus][Huet, Résolution d'équations dans les langages d...][Snyder, Higher-order unification revisited: complete sets of transformations]. This point is discussed in [c]. However, the simplicity and uniformity of the λ-calculus mean that this formulation of type theory is often preferred to the one using combinators. The equivalence between the formulations using the substitution rule for predicate and function variables, the comprehension scheme, and combinators is easy to establish. The equivalence of these formulations with those using the λ-calculus, by contrast, is a little harder. The idea is to translate the terms of the λ-calculus into the language of combinators. This translation is called λ-lifting [54, a]. The only interesting case is that of abstractions. To translate a term of the form λx1 ... λxn t, one first translates the term t. One obtains a term t′ which contains, besides the variables x1, ..., xn, variables y1, ..., yp and combinators c1, ..., cq. These combinators are replaced by variables z1, ..., zq, which gives a term t″, and the translation of the term λx1 ... λxn t is defined as ((y1, ..., yp, z1, ..., zq, x1, ..., xn → t″) y1 ... yp c1 ... cq). The problem is that, to show the equivalence of these two formulations of type theory, the extensionality axiom is needed in both formulations of the theory. For example, in the formulation using the λ-calculus, the proposition

  ((λx λy λz x) w w) = ((λx λy λz y) w w)

is provable, since it reduces to

  λz w = λz w
The problem is that, to show the equivalence of these two formulations of type theory, the extensionality axiom is needed in both. For instance, in the formulation using the λ-calculus, the proposition

((λx λy λz x) w w) = ((λx λy λz y) w w)

is provable, since it reduces to

λz w = λz w

Its translation

((x, y, z -→ x) w w) = ((x, y, z -→ y) w w)

however, requires the extensionality axiom to be proved. It is shown in [a], by building a very simple model, that in the absence of this axiom the proposition is independent of type theory. In fact, one may say that the axiom β contains a hint of extensionality, since it allows substitution under abstractions and therefore identifies the terms ((λx λy λz x) w w) and λz w, which strictly speaking are not intensionally identical. This point will be discussed in section 2.4.7.

Moreover, various variants of the translation from the λ-calculus to the combinator language give slightly different results. For instance, if the abstractions are translated in one block, as above, the proposition

((λx λy λz x) w w) = ((λy λx λz x) w w)

yields a proposition that cannot be proved without the extensionality axiom, whereas if the abstractions are translated one by one, one obtains the proposition

((d, e, x -→ (d e x)) (x, c, y -→ (c y)) (x, z -→ x) w w) = ((d, e, y -→ (d e)) (c, x -→ (c x)) (x, z -→ x) w w)

which is provable without the extensionality axiom, since it reduces to

((x, z -→ x) w) = ((x, z -→ x) w)

Another translation, which uses only the combinators S and K [Barendregt; Hindley; Krivine], uses extensionality even less often, but cannot dispense with it entirely. One may then wonder whether the problem lies in the translation rather than in the λ-calculus itself. Under the hypothesis that every translation preserves the purely applicative terms (i.e. those without abstractions) of the λ-calculus, this question is formulated in [a] as the problem, which we leave open, of finding an abstraction-free proposition that is provable in the presentation using the λ-calculus but not in the one using combinators.

The calculus of explicit substitutions

If the formulation of type theory using the λ-calculus is taken as the reference, combinators give a first-order presentation that is extensionally equivalent, i.e. the two formulations are equivalent when the extensionality axiom is posited in both cases. Without the extensionality axiom the two theories are no longer equivalent, and weak extensionality axioms (Curry's axioms [Barendregt; Hindley; Krivine]) must be added to the combinator formulation so as to recover the equivalence. These axioms are notoriously complicated, and we do not know well how to turn them into rewrite rules in a theory modulo.
Alongside the combinator language, N.G. de Bruijn's indices [de Bruijn], then categorical combinators [Curien] and the calculus of explicit substitutions, or λσ-calculus [1; Curien et al.], have provided other first-order languages for denoting functions and, as we shall see, other first-order formulations of type theory that are intensionally equivalent to the formulation using the λ-calculus.

De Bruijn indices are, initially, an alternative notation for the λ-calculus. Starting from the idea that the names of bound variables serve only to link an occurrence of such a variable to its binder, de Bruijn proposed replacing each occurrence of a bound variable by a natural number, which can be seen as the relative address of the binder: the number of binders one must cross to reach the one corresponding to this variable. Thus the term λx λy (x y λz (z y)), for example, is written λλ(2 1 λ(1 2)). This term can in turn be read as a term of a first-order language containing infinitely many individual symbols 1, 2, 3, ..., a unary function symbol λ and a binary function symbol α for application. Note that this language contains much more than the λ-calculus: if the term λ1 is naturally the expression of the term λx x, its subterm, the term 1, represents no λ-term, unless it is read in a context of variables, for instance x, y, z, in which case it expresses the open term x. Open λ-terms can thus be expressed either as closed terms in a context of variables or as open terms of the λ-calculus with de Bruijn indices: the term x, for example, can be expressed either as the term 1 in a context x, y, z or as the term x.

In the λ-calculus with de Bruijn indices, the β-conversion axiom is expressed as

((λt) u) = [u/1]t

This axiom can be turned into a rewrite rule, β-reduction

((λt) u) → [u/1]t

The substitution [u/n]t now denotes the substitution of the term u for the de Bruijn index n in t; it is therefore not the simple substitution operation (replacement) but another operation, defined by recursion on the structure of t:

- [u/n]x = x,
- [u/n](t1 t2) = ([u/n]t1 [u/n]t2),
- [u/n]λt = λ[lft(u, 0)/n + 1]t,
- [u/n]m = m - 1 if m > n,
- [u/n]m = u if m = n,
- [u/n]m = m if m < n,

where the term lft(u, i) (read: lift of u above i) is defined by recursion on the structure of u:

- lft(x, i) = x,
- lft((u1 u2), i) = (lft(u1, i) lft(u2, i)),
- lft(λu, i) = λ(lft(u, i + 1)),
- lft(m, i) = m + 1 if m > i,
- lft(m, i) = m if m ≤ i.

For example the term λx λy ((λz λw (x z)) y), which reduces to λx λy λw (x y), is expressed with de Bruijn indices by the term λλ((λλ(4 2)) 1). This term reduces as follows:

λλ((λλ(4 2)) 1) → λλ([1/1]λ(4 2)) = λλλ([lft(1, 0)/2](4 2)) = λλλ([2/2](4 2)) = λλλ(3 2)

which is indeed the translation of the term λx λy λw (x y).
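The two recursive definitions above transcribe directly into code. The following OCaml sketch is my own transcription (not part of the source): it implements lft and [u/n]t literally and replays the example just given.

type term =
  | Var of string        (* free (named) variables, left untouched *)
  | Idx of int           (* de Bruijn indices *)
  | App of term * term
  | Lam of term

(* lft(u, i): increment the indices of u that are greater than i. *)
let rec lft u i = match u with
  | Var x -> Var x
  | App (u1, u2) -> App (lft u1 i, lft u2 i)
  | Lam u -> Lam (lft u (i + 1))
  | Idx m -> if m > i then Idx (m + 1) else Idx m

(* subst u n t computes [u/n]t. *)
let rec subst u n t = match t with
  | Var x -> Var x
  | App (t1, t2) -> App (subst u n t1, subst u n t2)
  | Lam t -> Lam (subst (lft u 0) (n + 1) t)
  | Idx m -> if m > n then Idx (m - 1) else if m = n then u else Idx m

(* One step of beta reduction at the root: ((λt) u) → [u/1]t. *)
let beta = function
  | App (Lam t, u) -> Some (subst u 1 t)
  | _ -> None

(* The example: λλ((λλ(4 2)) 1) reduces to λλλ(3 2). *)
let () =
  let r = App (Lam (Lam (App (Idx 4, Idx 2))), Idx 1) in
  match beta r with
  | Some r' -> assert (r' = Lam (App (Idx 3, Idx 2)))
  | None -> assert false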
This notion of substitution is rather delicate to define, since it must account for two phenomena: on the one hand the need to decrement the free indices of t (those denoting a variable bound above the redex), as the variable x, expressed by the index 4 in the initial term, is decremented and expressed by the index 3 in the final term; on the other hand the need to increment the free indices of u when crossing an abstraction, as the variable y, represented by the index 1 in the initial term, is represented by the index 2 in the final term.

This calculus has two drawbacks. First, as was already the case for the λ-calculus, a substitution algorithm external to the rewrite rules is needed to define reduction; the rewrite system therefore contains an infinite number of rules. Second, free variables are not affected by these substitution and lifting operations; as a consequence, reduction on open terms does not commute with the simple substitution (replacement) of such a variable. For example the term (λ(x 1) y) reduces to (x y), but if the variable x is replaced by the term (z 1), one obtains the term (λ(z 1 1) y), which reduces to (z y y), whereas replacing the variable x by the term (z 1) in the term (x y) gives the term (z 1 y). Reduction does commute, however, with another form of substitution of variables, corresponding to the substitution of the λ-calculus, which requires incrementing the free indices of the substituted term when passing under a λ. Thus the term (λ(x 1) y) reduces to (x y), and if the variable x is substituted by the term (z 1) in this sense one obtains the term (λ(z 2 1) y), which reduces to (z 1 y), which is indeed the term obtained by substituting the variable x by the term (z 1) in the term (x y).

The calculus of categorical combinators and the calculus of explicit substitutions remedy these drawbacks of the λ-calculus with de Bruijn indices. On the one hand, they no longer require an external substitution operation and they have a finite number of rules; as a corollary, the substitution of free variables and reduction commute, as in any first-order rewrite system. The idea of this calculus is to internalize the notion of substitution as a symbol of the language, on a par with λ and α. One writes t[s] for the term t to which the substitution s is applied. Terms now come in five forms: de Bruijn indices, variables, applications of a term to another, abstractions, and applications of a substitution to a term. The latter operator takes as arguments a term and an explicit substitution, which is a list u1, ..., un of terms. Rewrite rules then reduce the term t[u1, ..., un] by substituting in t the index 1 by u1, the index 2 by u2, ..., the index n by un, and decrementing all the other free indices by n. To build these lists one needs the empty list, written id since it is the identity substitution, and the "cons" symbol, written ".". Two further symbols are needed: the substitution ↑, which increments the free indices of a term, and the symbol ∘, which composes substitutions.
The rewrite rules of the λσ-calculus are the following:

(λa) b → a[b.id]
(a b)[s] → (a[s] b[s])
1[a.s] → a
a[id] → a
(λa)[s] → λ(a[1.(s ∘ ↑)])
(a[s])[t] → a[s ∘ t]
id ∘ s → s
↑ ∘ (a.s) → s
(s1 ∘ s2) ∘ s3 → s1 ∘ (s2 ∘ s3)
(a.s) ∘ t → a[t].(s ∘ t)
s ∘ id → s
1.↑ → id
1[s].(↑ ∘ s) → s

The first rule is β-reduction; the others, called the σ-reduction rules, propagate an explicit substitution through a term. In this calculus, as in any first-order rewrite system, reduction commutes with the simple substitution (replacement) of variables. Thus the term (λ(x 1) y) reduces to (x[y.id] y), and if the variable x is replaced by the term (z 1) one obtains the term (λ(z 1 1) y), which reduces to (z[y.id] y y), which is the term obtained, after normalizing the substitution, by replacing the variable x by the term (z 1) in the term (x[y.id] y).

When the λ-calculus with de Bruijn indices is extended with explicit substitutions, another translation of the λ-calculus into this formalism can be considered. In the translation presented above, translating the term λx (y x) while keeping y as a free variable gave the term λ(y 1). In the λ-calculus, if the variable y is substituted by a term t, the bound variable x may have to be renamed so as to avoid captures. In the λ-calculus with de Bruijn indices, as we have seen, the free indices of t must likewise be incremented. Hence, although the λ-calculus with explicit substitutions is a first-order language, the substitution of the variables of this first-order language corresponding to that of the λ-calculus is not the simple substitution (replacement). This substitution operation can be decomposed into a lifting operation on the one hand and a simple substitution (a replacement) on the other. Since the λσ-calculus contains explicit lifting operators, the lifting can be anticipated by applying this operator to the variable y, giving the term λ(y[↑] 1) as the translation of λx (y x). Now, if the variable y is simply replaced by the term 1, one obtains the term λ(1[↑] 1), that is λ(2 1). In [c], then in [i], we called "pre-cooking" this translation that anticipates the lifting of the substituted term. Writing tF for the pre-cooking of the λ-term t, we proved in [c] the key result

([u/x]t)F = [x → uF]tF

where [u/x] is the substitution of the λ-calculus and [x → u] the simple substitution (the replacement). In other words, pre-cooking is a homomorphism from the λ-calculus to the λσ-calculus that maps the substitution of one calculus to the substitution of the other.

To use the calculus of explicit substitutions as the language of the functions of simple type theory, a type must be associated with each term of the calculus. Here a new problem arises. While the term λ1, which expresses the λ-term λx x, can be given the type ι → ι, it is harder to give a type to the term 1, which expresses a λ-term only in a context of variables. The types of the variables of this context must therefore be known in order to associate a type with this term [1; Curien et al.].
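Before turning to types, note that, as a first-order system, the rules above are straightforward to implement. The OCaml sketch below is mine (constructor names are hypothetical): it transcribes the thirteen rules as a root-step function and adds a naive normalizer, then replays the example of (λ(x 1)) y.

type term =
  | One                        (* the de Bruijn index 1 *)
  | Var of string              (* first-order variables x, y, ... *)
  | App of term * term
  | Lam of term
  | Sub of term * subst        (* t[s] *)
and subst =
  | Id
  | Shift                      (* ↑ *)
  | Cons of term * subst       (* a.s *)
  | Comp of subst * subst      (* s ∘ t *)

(* The thirteen rules, applied at the root. *)
let step_term = function
  | App (Lam a, b) -> Some (Sub (a, Cons (b, Id)))                (* beta *)
  | Sub (App (a, b), s) -> Some (App (Sub (a, s), Sub (b, s)))
  | Sub (One, Cons (a, _)) -> Some a
  | Sub (a, Id) -> Some a
  | Sub (Lam a, s) -> Some (Lam (Sub (a, Cons (One, Comp (s, Shift)))))
  | Sub (Sub (a, s), t) -> Some (Sub (a, Comp (s, t)))
  | _ -> None

let step_subst = function
  | Comp (Id, s) -> Some s
  | Comp (Shift, Cons (_, s)) -> Some s
  | Comp (Comp (s1, s2), s3) -> Some (Comp (s1, Comp (s2, s3)))
  | Comp (Cons (a, s), t) -> Some (Cons (Sub (a, t), Comp (s, t)))
  | Comp (s, Id) -> Some s
  | Cons (One, Shift) -> Some Id
  | Cons (Sub (One, s), Comp (Shift, s')) when s = s' -> Some s
  | _ -> None

(* Naive normalization (the untyped calculus need not terminate). *)
let rec norm_term t =
  match step_term t with
  | Some t' -> norm_term t'
  | None ->
    (match t with
     | App (a, b) ->
       let t' = App (norm_term a, norm_term b) in
       if t' = t then t else norm_term t'
     | Lam a -> Lam (norm_term a)
     | Sub (a, s) ->
       let t' = Sub (norm_term a, norm_subst s) in
       if t' = t then t else norm_term t'
     | t -> t)
and norm_subst s =
  match step_subst s with
  | Some s' -> norm_subst s'
  | None ->
    (match s with
     | Cons (a, s1) ->
       let s' = Cons (norm_term a, norm_subst s1) in
       if s' = s then s else norm_subst s'
     | Comp (s1, s2) ->
       let s' = Comp (norm_subst s1, norm_subst s2) in
       if s' = s then s else norm_subst s'
     | s -> s)

(* (λ(x 1)) y reduces to (x[y.id] y), as in the text. *)
let () =
  let t = App (Lam (App (Var "x", One)), Var "y") in
  assert (norm_term t = App (Sub (Var "x", Cons (Var "y", Id)), Var "y"))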
If one considers a context of three variables x, y, z of respective types A, B, C, then the term 1 has type A, the term 2 type B and the term 3 type C. Terms are therefore typed relative to a context: the sorts are of the form Γ ⊢ A for terms and Γ ⊢ Γ′ for substitutions, each symbol of the calculus carrying context and type annotations:

- 1 (for all A and Γ) of sort AΓ ⊢ A,
- α (for all Γ, A, B) of rank (Γ ⊢ A → B, Γ ⊢ A, Γ ⊢ B),
- λ (for all Γ, A, B) of rank (AΓ ⊢ B, Γ ⊢ A → B),
- [] (for all Γ, Γ′, A) of rank (Γ′ ⊢ A, Γ ⊢ Γ′, Γ ⊢ A),
- id (for all Γ) of sort Γ ⊢ Γ,
- ↑ (for all A and Γ) of sort AΓ ⊢ Γ,
- . (for all Γ, Γ′, A) of rank (Γ ⊢ A, Γ ⊢ Γ′, Γ ⊢ AΓ′),
- ∘ (for all Γ, Γ′, Γ″) of rank (Γ ⊢ Γ″, Γ″ ⊢ Γ′, Γ ⊢ Γ′).

The typed λσ-calculus with term variables is confluent [Ríos] and weakly terminating [Muñoz; Goubault-Larrecq], but P.-A. Melliès [Melliès] showed that it does not terminate strongly.

To these symbols we added in [i] the symbols

⇒̇ of sort ⊢ o → o → o
∧̇ of sort ⊢ o → o → o
∨̇ of sort ⊢ o → o → o
¬̇ of sort ⊢ o → o
⊥̇ of sort ⊢ o
∀̇T of sort ⊢ (T → o) → o
∃̇T of sort ⊢ (T → o) → o

and the predicate symbol ε of rank (⊢ o).

The free variables of this language have a sort of the form Γ ⊢ T, i.e. each variable is assigned not only a type but also a context, the one in which the terms substituted for this variable must be typed. One can show that if a λ-term t containing free variables x1, ..., xn of types T1, ..., Tn is well typed, then its pre-cooking is a well-typed λσ-term containing the free variables x1, ..., xn of sorts ⊢ T1, ..., ⊢ Tn.

On this language one then defines the rewrite system of figure 2.2. Its first rule is β-reduction, the next group consists of the σ-reduction rules, and the last group of the reduction rules associated with the symbols ⇒̇, ∧̇, etc.

Figure 2.2:
(λa) b → a[b.id]
(a b)[s] → (a[s] b[s])
1[a.s] → a
a[id] → a
(λa)[s] → λ(a[1.(s ∘ ↑)])
(a[s])[t] → a[s ∘ t]
id ∘ s → s
↑ ∘ (a.s) → s
(s1 ∘ s2) ∘ s3 → s1 ∘ (s2 ∘ s3)
(a.s) ∘ t → a[t].(s ∘ t)
s ∘ id → s
1.↑ → id
1[s].(↑ ∘ s) → s
ε(⇒̇ x y) → ε(x) ⇒ ε(y)
ε(∧̇ x y) → ε(x) ∧ ε(y)
ε(∨̇ x y) → ε(x) ∨ ε(y)
ε(¬̇ x) → ¬ε(x)
ε(⊥̇) → ⊥
ε(∀̇T x) → ∀y ε(x y)
ε(∃̇T x) → ∃y ε(x y)

A reduction rule corresponding to the η-reduction of the λ-calculus can also be added, incorporating a larger part of the extensionality axiom into reduction:

λ(a 1) → b if a =σ b[↑]

It is shown in [i] that this rewrite system is confluent, and that it is weakly normalizing, by embedding it into the typed λσ-calculus. It is also shown in [i] that the theory modulo consisting of this rewrite system and the empty set of axioms is intensionally equivalent to the formulation of type theory using the λ-calculus: a proposition t is provable in type theory using the λ-calculus if and only if the proposition ε(tF) is provable in this presentation of type theory. Moreover, the structure of proofs is preserved: computation steps correspond to computation steps, deduction steps to deduction steps. Unlike the combinator formulation of type theory, it is never necessary to appeal to the extensionality axioms or to Curry's axioms to prove a proposition that can be proved without them in the λ-calculus formulation. This formulation of type theory is the one we shall use in the following chapters.
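The rank discipline above amounts to a simple recursive check. Here is a minimal OCaml sketch of mine (not from the source) that computes the sort of a closed λσ-term in a given context, following the table of ranks; as a deviation from the text, λ carries a domain annotation so that sorts can be computed without unification.

type ty = Iota | O | Arrow of ty * ty

type term =
  | One
  | App of term * term
  | Lam of ty * term            (* annotated domain *)
  | Sub of term * subst
and subst =
  | Id
  | Shift
  | Cons of term * subst
  | Comp of subst * subst

(* sort_term gamma t = A    means  gamma ⊢ t : A
   sort_subst gamma s = gamma'  means  gamma ⊢ s : gamma' *)
let rec sort_term gamma t = match gamma, t with
  | a :: _, One -> a                                   (* 1 : AΓ ⊢ A *)
  | _, App (f, x) ->
    (match sort_term gamma f with
     | Arrow (a, b) when sort_term gamma x = a -> b
     | _ -> failwith "ill-typed application")
  | _, Lam (a, b) -> Arrow (a, sort_term (a :: gamma) b)
  | _, Sub (a, s) -> sort_term (sort_subst gamma s) a  (* Γ ⊢ s : Γ', Γ' ⊢ a *)
  | [], One -> failwith "index 1 in empty context"
and sort_subst gamma s = match gamma, s with
  | _, Id -> gamma                                     (* id : Γ ⊢ Γ *)
  | _ :: g, Shift -> g                                 (* ↑ : AΓ ⊢ Γ *)
  | _, Cons (a, s') -> sort_term gamma a :: sort_subst gamma s'
  | _, Comp (s1, s2) ->                                (* apply s2, then s1 *)
    sort_subst (sort_subst gamma s2) s1
  | [], Shift -> failwith "↑ in empty context"

(* λ1 gets the sort ⊢ ι → ι in the empty context. *)
let () = assert (sort_term [] (Lam (Iota, One)) = Arrow (Iota, Iota))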
We shall, however, conclude this section devoted to term languages in type theory by presenting two other, more prospective, languages.

Combinators, again

In [i], a model of simple type theory using explicit substitutions is constructed. The sorts of this theory being of the form Γ ⊢ T and Γ ⊢ Δ, a sort Γ ⊢ Δ, with Γ = T1, ..., Tn and Δ = U1, ..., Up, is interpreted by the set of functions mapping MT1, ..., MTn to MU1 × ... × MUp. This twofold notion of functionality (since the arrow → and the symbol ⊢ of the sorts Γ ⊢ Δ are both interpreted as constructors of function spaces) recalls the categorical origin of the calculus of explicit substitutions: cartesian closed categories have two arrows, that of morphisms, here ⊢, and the exponential, here →. In this interpretation, a term of sort Γ ⊢ T with Γ of length n denotes an n-ary function, which suggests expressing such terms with symbols 1_n, ↑_n and app_n; a substitution, for its part, translates as a finite sequence of terms, and the application of a substitution to a term translates, again, through the combinator app. These symbols are thus combinators in the sense of section 2.4.3, and the first rewrite rules one would like to adopt are

(1_n x1 ... xn) → xn
(↑_n a x1 ... xn y) → (a x1 ... xn)
(app_n a b x1 ... xn) → (a x1 ... xn (b x1 ... xn))

A λ-term t expressed with de Bruijn indices in a context of length n is translated as the term |t|_n, defined by

- |k|_n = (↑_{n-1} (...(↑_{n-k+1} 1_{n-k+1})...)),
- |(a b)|_n = (app_n |a|_n |b|_n),
- |λa|_n = |a|_{n+1}.

It is easy to see that this rewrite system does not simulate β-reduction: a β-redex of the form ((λa) b) translates as the term (app_n |a|_{n+1} |b|_n), which is not, in general, a redex. Such a β-redex is characterized by the fact that the first argument of the symbol app_n is translated in a context longer than n. One must here distinguish cases according to the form of a, which leads to the rules

(1) (app_n (app_p a b) c) → (app_{p-1} (app_n a c) (app_n b c)) if n < p
(2) (app_n 1_p c) → c if p = n + 1
(3) (app_n 1_p c) → 1_{p-1} if n + 1 < p
(4) (app_n (↑_p a) b) → a if p = n
(5) (app_n (↑_p a) b) → (↑_{p-1} (app_n a b)) if n < p
(6) (↑_n (app_p a b)) → (app_{p+1} (↑_n a) (↑_n b)) if n ≤ p
(7) (↑_n 1_p) → 1_{p+1} if n < p
(8) (↑_n (↑_p a)) → (↑_{p+1} (↑_n a)) if n ≤ p

One can show, though it is tedious, that if a →β b then |a|_n → |b|_n. The converse of this result, however, is false: the term ((λ(1 y)) z) translates as (app_2 (app_3 1_3 (↑_2 1_2)) (↑_1 1_1)), which reduces to (app_2 (app_2 1_3 (↑_1 1_1)) (app_2 (↑_2 1_2) (↑_1 1_1))), which is the translation of (((λ1) z) ((λy) z)). Here one pays the price of the conflation of the application of a term to a term with that of a substitution to a term; a solution may be to introduce two variants of the combinator app. Note that although the rule (app_n 1_{n+1} b) → b is not a consequence of the naive rewrite system above, it is a consequence of that system together with the extensionality axiom: the term (app_n 1_{n+1} b x1 ... xn) reduces to (1_{n+1} x1 ... xn (b x1 ... xn)) and then to (b x1 ... xn). The same can be shown for the other rules.
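Rules (1) to (8) form an ordinary first-order system and can be transcribed directly. A small OCaml sketch of mine (constructor names are hypothetical) of one-step rewriting at the root, checked on the example from the text:

type term =
  | One of int                 (* 1_p *)
  | Up of int * term           (* ↑_n a *)
  | App of int * term * term   (* app_n a b *)

(* One rewrite step at the root, rules (1)-(8). *)
let step = function
  | App (n, App (p, a, b), c) when n < p ->
    Some (App (p - 1, App (n, a, c), App (n, b, c)))            (* 1 *)
  | App (n, One p, c) when p = n + 1 -> Some c                  (* 2 *)
  | App (n, One p, _) when n + 1 < p -> Some (One (p - 1))      (* 3 *)
  | App (n, Up (p, a), _) when p = n -> Some a                  (* 4 *)
  | App (n, Up (p, a), b) when n < p ->
    Some (Up (p - 1, App (n, a, b)))                            (* 5 *)
  | Up (n, App (p, a, b)) when n <= p ->
    Some (App (p + 1, Up (n, a), Up (n, b)))                    (* 6 *)
  | Up (n, One p) when n < p -> Some (One (p + 1))              (* 7 *)
  | Up (n, Up (p, a)) when n <= p ->
    Some (Up (p + 1, Up (n, a)))                                (* 8 *)
  | _ -> None

(* (app_2 (app_3 1_3 (↑_2 1_2)) (↑_1 1_1)) rewrites by rule (1). *)
let () =
  let t = App (2, App (3, One 3, Up (2, One 2)), Up (1, One 1)) in
  assert (step t
          = Some (App (2, App (2, One 3, Up (1, One 1)),
                          App (2, Up (2, One 2), Up (1, One 1)))))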
Here we meet again the idea that the λ-calculus contains a weak form of extensionality that is absent from the combinator language. The traditional solution to this problem is to add Curry's axioms to the theory of combinators; unlike Curry's axioms, however, the extension proposed here is expressed as a set of rewrite rules and not as equational axioms.

While I was working on this rewrite system, I learned that it had been proposed independently, up to a few variations, by H. Goguen and J. Goubault-Larrecq [Goguen and Goubault-Larrecq]. Goguen and Goubault went much further than I did in the study of this system. In particular, they proved its confluence, whereas I had only shown local confluence, and they showed that the simply typed calculus has only the weak termination property. It is also remarkable that their counterexample to strong termination was discovered by a random-search program and bears only a distant relationship to the counterexample proposed by Melliès to the strong normalization of the typed λσ-calculus.

The weak calculus of explicit substitutions

As we have said several times, the λ-calculus contains a form of weak extensionality that is not present in the calculus of combinators. This form of extensionality is the possibility of substituting under a λ, which allows the term ((λx λy λz x) w w) or the term ((λx λy λz y) w w) to be transformed into λz w, whereas in the calculus of combinators the term ((x, y, z -→ t) w w) cannot be reduced. This is because in the λ-calculus the possibility of substituting under abstractions makes it possible to have only unary functions and to iterate the construction of unary functions to build n-ary ones, whereas this is impossible in the calculus of combinators, which is essentially n-ary. The calculus of explicit substitutions achieves the same form of extensionality as the λ-calculus while remaining a first-order language; the possibility of substituting under abstractions is expressed there by the rule

(λa)[s] → λ(a[1.(s ∘ ↑)])

One may want to keep the λ-calculus notation, which allows a free variable of a term to be abstracted wherever its occurrences are, while rejecting β-reduction and the extensionality it contains. Indeed, one can argue that from the intensional point of view the terms ((λx λy λz x) w w) and λz w do not express the same algorithm, and that identifying them is an error of the λ-calculus (an argument which, admittedly, requires having solved the thorny problem of the intensional equality of two functions). J.R. Hindley proposes in [Hindley] a notion of reduction in the λ-calculus without substitution under abstractions and shows its equivalence with that of the calculus of combinators. In the λσ-calculus, forbidding the propagation of substitutions under abstractions is particularly simple: it suffices to remove the rule above.
This leads to the weak λσ-calculus [Curien et al.; Martin-Löf; Maranget; Hardin et al.]. It then becomes natural to ask whether the weak λσ-calculus and the calculus of combinators are equivalent, since both claim intensional purity. Alas, λ-lifting, the translation of the weak λσ-calculus into the combinator language, does not respect reduction: the terms λy x and ((λx λy x) x), for example, are not convertible in the weak λσ-calculus, yet their translation is the same, ((x, y -→ x) x). Here the problem seems to come from λ-lifting itself, which is not intensionally neutral. A first step towards an equivalence theorem between the weak λσ-calculus and combinators was, however, taken by T. Strahm [Strahm], who showed that one can translate the combinators into the weak λσ-calculus while preserving convertibility.

In conclusion: which language?

To conclude this long section devoted to term languages in simple type theory, we have a choice between two first-order languages: that of combinators and that of the calculus of explicit substitutions. The latter is equivalent to the λ-calculus, which means that part of extensionality is included in the reduction rules. This can be seen as a positive point (more is transferred from reasoning to computation) or as a negative one if intensional purity is sought; the combinator language, by contrast, is more intensional. The language of explicit substitutions is more complex to define than that of combinators, but examples are easier to write thanks to the presence of an abstraction operator. We have also presented two other languages, the extended combinators and the weak λσ-calculus, which are more prospective and call for further study.

Completeness

Formulating type theory as a first-order theory (or a theory modulo) makes it possible to apply to it general theorems and algorithms valid for all first-order theories. The first example that comes to mind is the completeness theorem. It is a folklore result that Henkin's completeness theorem [Henkin] can be deduced from Gödel's by using a first-order formulation of type theory; we detailed the proof of this result in [b]. Applying Gödel's completeness theorem to a formulation of type theory as a first-order theory shows that a proposition P is provable in type theory if and only if it is valid in all models of this theory. The class of models of type theory is, however, larger than the class of Henkin models: Henkin models are the normal (equality-respecting), functional models of type theory. A first lemma shows that the class of normal models validates the same propositions as the class of all models of the axioms of equality.
A second lemma, an analogue of A. Mostowski's collapsing theorem, shows that the class of functional normal models validates the same propositions as the class of all normal models of the extensionality axiom. This second lemma holds only for normal models, so this condition must be included in the definition of Henkin models (which Henkin had not done explicitly), as Andrews had already remarked [5]. This completeness theorem can also be applied to intensional type theory, i.e. type theory deprived of the extensionality axiom; Henkin models are known to be insufficient for that purpose, since they always validate the extensionality axiom.

Skolemization

A second example of a general theorem that can be applied to the first-order formulations of type theory is Skolem's theorem.

The problem of skolemization in type theory

In the presentation of type theory using the λ-calculus, the skolemization rule is relatively complex. The naive rule, which skolemizes the axiom ∀x ∃y (P x y) by introducing a symbol f of type T → U, where T is the type of x and U that of y, is incorrect: from the proposition ∀x (P x (f x)) one can deduce ∃f ∀x (P x (f x)), which is not provable from the proposition ∀x ∃y (P x y), since the axiom of choice is independent of type theory [4]. A natural solution is to propose that for each term t one may build a term (f t) such that (P t (f t)), without thereby having the function f in its entirety. This led D.A. Miller [Miller] to propose an extension of the λ-calculus in which each symbol has an index, its Skolem index, indicating the number of arguments to which it must be applied in order to form a term. Thus the proposition ∀x ∃y (P x y) is skolemized by introducing a symbol f1 with Skolem index 1: the term (f1 x) is well formed, but the symbol f1 on its own is not. Alas, this restriction is not sufficient: although the symbol f1 alone is not a well-formed term, the term λx (f1 x) is, and from the proposition ∀x (P x (f1 x)) one can therefore deduce ∀x (P x ((λx (f1 x)) x)) and then ∃f ∀x (P x (f x)). Miller thus introduced a second restriction, which forbids λ-binding the variables occurring free in the mandatory arguments of a Skolem symbol; the term λx (f1 x) is then not well formed either. Miller showed that under these two conditions one recovers an analogue of Skolem's theorem: the propositions without occurrences of Skolem symbols are consequences of a proposition if and only if they are consequences of its skolemized form. Note, however, that if, as is customary in type theory, the λ-term ∀x P is regarded as a notation for the term ∀ (λx P), where ∀ is a constant of type (T → o) → o, then the skolemized proposition ∀x (P x (f1 x)) is itself not well formed: Miller's second condition forbids λ-binding the variable x, which is free in a mandatory argument of f1. One must therefore either introduce the quantifiers as additional binding symbols or give a more restricted form of the skolemization rule.
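Miller's two conditions are easy to state operationally: every Skolem symbol f with index k must appear applied to at least k arguments, and no λ may bind a variable occurring free in those k mandatory arguments. The following OCaml sketch is my own formulation (not from the source) of a well-formedness check for both conditions.

type term =
  | Var of string
  | Skolem of string * int * term list  (* f with index k, applied to args *)
  | Lam of string * term
  | App of term * term

let rec free x = function
  | Var y -> x = y
  | Skolem (_, _, args) -> List.exists (free x) args
  | Lam (y, t) -> x <> y && free x t
  | App (a, b) -> free x a || free x b

(* First condition: f_k carries at least k arguments.
   Second condition: no lambda binds a variable free in the first k
   (mandatory) arguments of some f_k below it. *)
let rec wf = function
  | Var _ -> true
  | Skolem (_, k, args) -> List.length args >= k && List.for_all wf args
  | App (a, b) -> wf a && wf b
  | Lam (x, t) -> wf t && not (captures x t)

and captures x = function
  | Var _ -> false
  | Lam (y, t) -> x <> y && captures x t
  | App (a, b) -> captures x a || captures x b
  | Skolem (_, k, args) ->
    let mandatory = List.filteri (fun i _ -> i < k) args in
    List.exists (free x) mandatory || List.exists (captures x) args

(* (f1 x) is well formed; λx (f1 x) is not. *)
let () =
  assert (wf (Skolem ("f", 1, [Var "x"])));
  assert (not (wf (Lam ("x", Skolem ("f", 1, [Var "x"])))))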
For instance, if skolemization is used during the clausal-form transformation of a proposition to be refuted by the resolution method [Robinson], then, since the universal quantifier is itself about to be removed, the clausal form of the proposition ∀x ∃y (P x y) can be defined as the (well-formed) proposition (P X (f1 X)), and Skolem's theorem takes the form of the soundness and completeness of this transformation (i.e. the clausal form of the negation of a proposition P is refutable by resolution if and only if P is provable in the ordinary sense). In [Miller], Miller states his theorem as the soundness of this transformation for the connection method.

Skolemization in the combinator presentation of type theory

The combinator presentation of type theory being a first-order theory, Skolem's theorem can be applied to it. The axiom ∀x ∃y ε(α(α(P, x), y)) is skolemized by introducing a function symbol f of rank (T, U), where T is the type of x and U that of y, and the axiom ∀x ε(α(α(P, x), f(x))). The symbol f on its own is not a term, since it is a function symbol and not an individual symbol: Miller's first condition is thus recovered automatically.

An intuition for Miller's second condition can be given by remarking that λ-lifting can be extended to the terms satisfying this condition but not to the others. To translate a term of the form λx1 ... λxn t, where t is not an abstraction, one first translates the term t, obtaining a term t′ which contains, besides the variables x1, ..., xn, variables y1, ..., yp, combinators c1, ..., cq and subterms u1, ..., ur of the form (fk v1 ... vk) in which the variables x1, ..., xn do not occur. Replacing these combinators by variables z1, ..., zq and these terms by variables z′1, ..., z′r yields a term t″, and the translation of the term λx1 ... λxn t is then defined as

((y1, ..., yp, z1, ..., zq, z′1, ..., z′r, x1, ..., xn -→ t″) y1 ... yp c1 ... cq u1 ... ur).

Naturally, if the variables x1, ..., xn occurred in u1, ..., ur, their binding would be lost.

Skolemization in the explicit-substitutions presentation of type theory

The presentation of type theory using explicit substitutions being a first-order theory, Skolem's theorem can be applied to it as well. Here the result is much more interesting, for, as shown in [i], one recovers exactly Miller's two conditions. As above, the axiom ∀x ∃y ε(α(α(P, x), y)) is skolemized by introducing a function symbol f of rank (Γ ⊢ T, Δ ⊢ U), where Γ ⊢ T is the sort of x and Δ ⊢ U the sort of y, and the axiom ∀x ε(α(α(P, x), f(x))). Here again, the symbol f on its own is not a term, since it is a function symbol and not an individual symbol: Miller's first condition is recovered. But the second is recovered too: if the proposition ∀x ∃y ε(α(α(P, x), y)) is obtained by pre-cooking and reduction from a term of type o of the λ-calculus, the variable x has the sort ⊢ T, in which the context is necessarily empty. The argument of f must therefore be of sort ⊢ T, which forbids it from containing de Bruijn indices that would point to binders outside the term itself. The emptiness of this context is exactly Miller's second condition.
Finally, here again, quantifiers are available independently of the symbols ∀̇ and ∃̇ of this particular language. There is thus no problem in stating the skolemized proposition ∀x ε(α(α(P, x), f(x))), and the theorem can be stated as follows.

Proposition 2.6.2. The propositions without occurrences of Skolem symbols are consequences of a proposition if and only if they are consequences of its skolemized form.

Chapter 3
Set theory

The second example of a theory modulo that we detail is set theory. The interest of this example lies in the fact that it is a counterexample to many properties of systems modulo (termination of the rewrite system, proof normalization, completeness of resolution, ...). As the brevity of this chapter shows, the work on this theory modulo is much less advanced than that on type theory; in particular, we do not yet have a first-order term language intensionally equivalent to the language with a binder.

Zermelo's set theory

Zermelo's set theory resolves Russell's paradox by replacing the comprehension scheme with three axioms and one axiom scheme: the axioms of pairing, union and powerset, and the subset scheme (or restricted comprehension). These axioms read

∀x ∀y ∃z ∀w (w ∈ z ⇔ (w = x ∨ w = y))
∀x ∃y ∀w (w ∈ y ⇔ ∃z (w ∈ z ∧ z ∈ x))
∀x ∃y ∀w (w ∈ y ⇔ ∀z (z ∈ w ⇒ z ∈ x))
∀x1 ... ∀xn ∀y ∃z ∀w (w ∈ z ⇔ (w ∈ y ∧ P))

where x1, ..., xn are the free variables of P except w. To these axioms one may add the extensionality axiom, the foundation axiom, and further existence axioms: the axiom of infinity, various large-cardinal axioms, the replacement scheme and the axiom of choice.

To obtain a language of terms, these axioms can be skolemized, yielding the axioms

∀x ∀y ∀w (w ∈ {}(x, y) ⇔ (w = x ∨ w = y))
∀x ∀w (w ∈ ∪(x) ⇔ ∃z (w ∈ z ∧ z ∈ x))
∀x ∀w (w ∈ P(x) ⇔ ∀z (z ∈ w ⇒ z ∈ x))
∀x1 ... ∀xn ∀y ∀w (w ∈ f_{x1,...,xn,w,P}(x1, ..., xn, y) ⇔ (w ∈ y ∧ P))

Here again, it is natural to turn these axioms into rewrite rules, arriving at the rules of figure 3.1:

Figure 3.1:
w ∈ {}(x, y) → w = x ∨ w = y
w ∈ ∪(x) → ∃z (w ∈ z ∧ z ∈ x)
w ∈ P(x) → ∀z (z ∈ w ⇒ z ∈ x)
v ∈ f_{x1,...,xn,w,P}(x1, ..., xn, y) → v ∈ y ∧ [v/w]P

This rewrite system does not terminate (as we shall see in chapter 4, Crabbé's proposition rewrites to a proposition that contains a copy of itself). The decidability of the congruence generated by the rewrite rules of set theory therefore cannot be established as a consequence of confluence and termination; we leave the problem of the decidability of this congruence open.
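The non-termination just mentioned can be observed mechanically. The following OCaml sketch is an illustration of mine (not from the source): it implements the subset-scheme rule for the single instance C = {x ∈ B | ¬ x ∈ x} and unfolds the proposition C ∈ C, which rewrites to a proposition of the form B ∧ ¬A containing a copy of itself, so the parameter n is needed to bound the unfolding.

type prop =
  | Mem of string * string        (* a ∈ b, sets named for simplicity *)
  | And of prop * prop
  | Not of prop

(* The rule  v ∈ C  →  v ∈ B ∧ ¬ v ∈ v  for C = {x ∈ B | ¬ x ∈ x},
   applied at most n times along each branch. *)
let rec unfold n p = match p with
  | Mem (v, "C") when n > 0 ->
    And (Mem (v, "B"), Not (unfold (n - 1) (Mem (v, v))))
  | And (a, b) -> And (unfold n a, unfold n b)
  | Not a -> Not (unfold n a)
  | p -> p

let rec show = function
  | Mem (a, b) -> a ^ " ∈ " ^ b
  | And (a, b) -> "(" ^ show a ^ " ∧ " ^ show b ^ ")"
  | Not a -> "¬" ^ show a

(* C ∈ C → (C ∈ B ∧ ¬ C ∈ C) → (C ∈ B ∧ ¬(C ∈ B ∧ ¬ C ∈ C)) → ... *)
let () =
  print_endline (show (unfold 1 (Mem ("C", "C"))));
  print_endline (show (unfold 2 (Mem ("C", "C"))))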
The language of terms

The last section of [a] deals with the problem of the term language for set theory. Skolemizing the axioms of pairing, union and powerset gives the expected symbols {}, ∪ and P. The case of the subset scheme (the restricted comprehension scheme) is more complicated. On the one hand, since set theory considers only sets and not relations, the comprehension scheme is not trivially equivalent to the closed comprehension scheme, as it was in type theory. One must therefore skolemize the open scheme

∀x1 ... ∀xn ∀y ∃z ∀w (w ∈ z ⇔ (w ∈ y ∧ P))

where x1, ..., xn are the free variables of P except w. The skolemization of this scheme introduces function symbols f_{x1,...,xn,w,P}, and writing {x ∈ z | P} for the term f_{x1,...,xn,w,P}(x1, ..., xn, z) one obtains the axiom

∀x1 ... ∀xn ∀z (w ∈ {x ∈ z | P} ⇔ (w ∈ z ∧ P))

As in the combinator language, one seems to have an abstraction symbol { | }, but with the restriction that in the term {x ∈ A | P} the proposition P must contain no abstractions. Some abstractions can be introduced in P by applying the symbol f to terms that contain abstractions, but this does not yield a general abstraction symbol: as in type theory, the variable x bound in {x ∈ A | P} cannot occur free in the abstractions appearing in P. Moreover, this language contains an infinite number of function symbols produced by the skolemization of the comprehension scheme, which makes it hard to use in practice; finite presentations of set theory, such as that of von Neumann-Bernays-Gödel [Mendelson], would therefore probably be useful here. One can also introduce a general abstraction symbol, obtaining a presentation of set theory analogous to the presentation of type theory with the λ-calculus. It is shown in [a] that this presentation of set theory is equivalent to the presentation with Skolem symbols, but, here again, the extensionality axiom is needed to prove this equivalence. We do not give a first-order presentation of set theory intensionally equivalent to the presentation with a general binding operator, but it is likely that such a language can be built using explicit substitutions.

Chapter 4
Cut elimination

In deduction modulo, as in other frameworks, cut elimination is studied first for intuitionistic logic and then generalized to classical logic. In deduction modulo, as in first-order logic, intuitionistic natural deduction is obtained by removing the excluded-middle rule, and the intuitionistic sequent calculus by restricting the calculus to sequents with at most one conclusion.

Cuts in deduction modulo

Consider a congruence in which A ≡ ¬A. Let π1 be the proof

  A ⊢≡ ¬A (axiom)    A ⊢≡ A (axiom)
  --------------------------------- ¬-elim
  A ⊢≡ ⊥
  --------------------------------- ¬-intro
  ⊢≡ ¬A

Since A ≡ ¬A, π1 is at once a proof of ⊢≡ ¬A and of ⊢≡ A, and the proof

  π1 : ⊢≡ ¬A    π1 : ⊢≡ A
  ------------------------ ¬-elim
  ⊢≡ ⊥

contains a single cut, and the elimination of this cut yields the same proof; it therefore has no normal form. A similar, but more interesting, case uses the rule A → B ∧ ¬A. Let π0 be the proof

  B, A ⊢≡ B ∧ ¬A (axiom, since A ≡ B ∧ ¬A)
  ----------------------------------------- ∧-elim
  B, A ⊢≡ ¬A                                  B, A ⊢≡ A (axiom)
  ------------------------------------------------------------- ¬-elim
  B, A ⊢≡ ⊥
  ------------------------------------------------------------- ¬-intro
  B ⊢≡ ¬A

The proof

                 B ⊢≡ B (axiom)    π0 : B ⊢≡ ¬A
                 ------------------------------ ∧-intro
  π0 : B ⊢≡ ¬A   B ⊢≡ B ∧ ¬A, i.e. B ⊢≡ A
  ---------------------------------------------- ¬-elim
  B ⊢≡ ⊥

reduces first to the proof

  B ⊢≡ B (axiom)    π0 : B ⊢≡ ¬A
  ------------------------------ ∧-intro
  B ⊢≡ B ∧ ¬A                        B ⊢≡ B (axiom)    π0 : B ⊢≡ ¬A
  ------------- ∧-elim                ------------------------------ ∧-intro
  B ⊢≡ ¬A                            B ⊢≡ A
  ------------------------------------------------------------------ ¬-elim
  B ⊢≡ ⊥

and then back to the initial proof. The interest of this counterexample is that, as we have seen, in set theory Crabbé's proposition A rewrites to a proposition of the form B ∧ ¬A. Set theory therefore does not satisfy the cut elimination property.
This counterexample, due to Crabbé, is discussed by L. Hallnäs [Hallnäs] and J. Ekman [Ekman] in a presentation of set theory with conversion axioms. We conjecture in [h] that cut elimination terminates modulo every congruence defined by a confluent and terminating rewrite system, and we prove certain particular cases of this conjecture.

The functional notation of proofs

To this end, we use a functional notation for proofs, following the Brouwer-Heyting-Kolmogorov semantics and the Curry-de Bruijn-Howard isomorphism. We therefore propose a typed λ-calculus whose types are propositions of deduction modulo. Our goal here is merely to have a notation for the proofs of deduction modulo, not to propose a theory in which proofs are objects (that will be the topic of section 4.6); in other words, the terms of the theory under consideration and the proof terms belong to two distinct languages. We use Latin letters x, y, ... for object variables and Greek letters α, β, ... for proof variables. The terms of this λ-calculus are

π ::= α | λα π | (π π′) | (π, π′) | fst(π) | snd(π) | i(π) | j(π) | (δ π απ′ βπ″) | (botélim π) | λx π | (π t) | (t, π) | (exélim π xαπ′)

Figure 4.1 gives the typing rules of this calculus; the two rules for the existential quantifier, for instance, read

  Γ ⊢≡ π : C
  ----------------- (x, A, t) ∃-intro, if B ≡ (∃x A), C ≡ [t/x]A
  Γ ⊢≡ (t, π) : B

  Γ ⊢≡ π : C    Γ, α : A ⊢≡ π′ : B
  -------------------------------- (x, A) ∃-elim, if C ≡ (∃x A), x ∉ FV(Γ, B)
  Γ ⊢≡ (exélim π xαπ′) : B

It is immediate that a proposition is an inhabited type of this calculus if and only if it is provable in intuitionistic natural deduction modulo. Crabbé's proof above, for example, is expressed by the term

λα (snd(α) α) (β, λα (snd(α) α))

One then defines the proof reduction rules corresponding to cut elimination:

(λα π1 π2) → [π2/α]π1
fst(π1, π2) → π1
snd(π1, π2) → π2
δ(i(π1), απ2, βπ3) → [π1/α]π2
δ(j(π1), απ2, βπ3) → [π1/β]π3
(λx π t) → [t/x]π
(exélim (t, π1) xαπ2) → [t/x, π1/α]π2
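These reduction rules are again directly implementable. The OCaml sketch below is my own (only the fragment with ⇒ and ∧ is covered; constructor names are hypothetical): it implements β-reduction and the projection rules, and checks that Crabbé's proof term reduces back to itself in two steps.

type proof =
  | PVar of string
  | Lam of string * proof           (* λα π *)
  | App of proof * proof
  | Pair of proof * proof
  | Fst of proof
  | Snd of proof

(* Substitution of a proof for a proof variable (the example involves no
   capture, so no renaming is performed here). *)
let rec subst x u = function
  | PVar y -> if x = y then u else PVar y
  | Lam (y, t) -> if x = y then Lam (y, t) else Lam (y, subst x u t)
  | App (a, b) -> App (subst x u a, subst x u b)
  | Pair (a, b) -> Pair (subst x u a, subst x u b)
  | Fst a -> Fst (subst x u a)
  | Snd a -> Snd (subst x u a)

(* One step of cut elimination, leftmost-outermost. *)
let rec step = function
  | App (Lam (x, t), u) -> Some (subst x u t)
  | Fst (Pair (a, _)) -> Some a
  | Snd (Pair (_, b)) -> Some b
  | App (a, b) ->
    (match step a with
     | Some a' -> Some (App (a', b))
     | None -> (match step b with Some b' -> Some (App (a, b')) | None -> None))
  | Pair (a, b) ->
    (match step a with
     | Some a' -> Some (Pair (a', b))
     | None -> (match step b with Some b' -> Some (Pair (a, b')) | None -> None))
  | Fst a -> (match step a with Some a' -> Some (Fst a') | None -> None)
  | Snd a -> (match step a with Some a' -> Some (Snd a') | None -> None)
  | Lam (x, t) -> (match step t with Some t' -> Some (Lam (x, t')) | None -> None)
  | PVar _ -> None

(* Crabbé's proof term: λα (snd(α) α) applied to (β, λα (snd(α) α)). *)
let () =
  let p0 = Lam ("a", App (Snd (PVar "a"), PVar "a")) in
  let crabbe = App (p0, Pair (PVar "b", p0)) in
  match step crabbe with
  | Some t1 ->
    (match step t1 with
     | Some t2 -> assert (t2 = crabbe)   (* back to the starting proof *)
     | None -> assert false)
  | None -> assert false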
We now want to show the termination of the cut elimination process, i.e. the normalization of this calculus.

Reducibility

The first attempt to prove proof normalization, at least modulo certain congruences, is to draw on the proof of normalization of proofs in first-order logic and to use Tait's method.

Cut elimination in first-order logic

In first-order logic, one cannot show that all proofs are normalizable by a simple induction on the structure of proofs. Indeed, the fact that π and π′ are normalizable does not allow one to deduce that (π π′) is, since this term may itself be a redex. The induction hypothesis is therefore strengthened: one shows by induction that all proofs are reducible. A reducible proof of the proposition A ⇒ B is a strongly normalizable proof such that whenever it reduces to a proof of the form λα π1, then for every reducible proof π′ of A, [π′/α]π1 is a reducible proof of B. The case where the term is a redex is thus anticipated.

The set of reducible proofs of a proposition of the form A ∧ B, A ∨ B, ¬A, ⊥, ∀x P, ∃x P is defined likewise; the reducible proofs of an atomic proposition are simply the strongly normalizable proofs. The set of reducible proofs of a proposition P is thus built by induction on the size of P; in λ-calculus vocabulary, the set of reducible terms of type T is built by induction on the size of T. One then shows, by induction on the structure of proofs, that every proof of a proposition A is reducible and hence strongly normalizable. To this end, writing RA for the set of reducible proofs of A, one first proves the following lemmas:

- if π ∈ RA, then π is strongly normalizable,
- if π ∈ RA and π → π′, then π′ ∈ RA,
- if π is neutral (i.e. a variable or an elimination) and π′ ∈ RA for every π′ such that π →1 π′, then π ∈ RA.

Following [Girard; Krivine; Gallier], a set satisfying these conditions is called a reducibility candidate.

Cut elimination in deduction modulo

In deduction modulo, if two propositions P and Q are congruent, every proof of one is a proof of the other; we must therefore have RP = RQ. The set of reducible proofs of an atomic proposition A can thus no longer be defined as the set of all strongly normalizable proofs, since this proposition may reduce, say, to a proposition of the form B ⇒ C, in which case a reducible proof must moreover satisfy the property that if it reduces to a proof of the form λα π1, then for every reducible proof π′, [π′/α]π1 is reducible. Consequently the construction of the family RP can no longer proceed simply by induction on the size of P.

One attempt is to define the sets RP by induction on the size of P for P normal, and then to define RP for arbitrary P as RP↓, where P↓ is the normal form of P. Alas, this attempt fails, since it leads to defining the reducible proofs of the normal proposition ∀x A as the set of proofs π of ∀x A satisfying the following properties:

- π is strongly normalizable,
- if π reduces to a proof of the form λx π1, then for every term t, [t/x]π1 is a reducible proof of ([t/x]A)↓,

and the proposition ([t/x]A)↓ need not be smaller than ∀x A, so the definition is no longer well founded. This remark is not new: in simple type theory the substitution of a predicate variable similarly modifies the complexity of propositions. Cut elimination in type theory is, incidentally, a corollary of the conjecture above.
The traditional counterexample [Girard; Krivine; Gallier], which shows that such a definition of reducibility cannot be given in type theory, adapts here. One would be led to posit that a proof π of the proposition ∀x ε(x) is reducible if it is strongly normalizable and if, whenever it reduces to a proof of the form λx π1, then for every term t, [t/x]π1 is a reducible proof of ε(t)↓. This requires having first defined the set of reducible proofs for all the propositions ε(t)↓, in particular for ε(∀̇ (λ1))↓ = ∀x ε(x).

This argument, however, does not apply to rewrite systems whose rules have no quantifiers in their right-hand sides. In that case the definition above is a correct definition by induction on the lexicographic order on the pair (q(A), c(A)), where q(A) is the number of quantifiers in the proposition A and c(A) the number of connectives. We therefore obtain the following proposition.

Proposition 4.3.1. Proofs modulo a congruence defined by a confluent, terminating rewrite system without occurrences of quantifiers in the right-hand sides of its rules are strongly normalizable.

Premodels

In the other cases, proving cut elimination requires constructing a family of sets RA, indexed by propositions, satisfying the following conditions.

- The sets RA are reducibility candidates.
- Conditions on compound propositions:
  - RA⇒B is the set of proofs π such that π is strongly normalizable and, if π reduces to a proof of the form λα π1, then for every proof π′ of RA, [π′/α]π1 is an element of RB,
  - and similar conditions for the other connectives and the quantifiers.
- If two propositions are congruent, the associated sets are equal.

The advantage of this choice is that the family of sets RA is entirely determined by the sets RA for A atomic. In other words, to define a family RA it suffices to give the sets RP(t1,...,tn), i.e., for each predicate symbol P, a function P̂ mapping every n-tuple of terms to a reducibility candidate. A similarity with the notion of model appears here: to give a model, one must likewise give, for each predicate symbol P, a function P̂ mapping every n-tuple of terms to a truth value. The comparison can be pushed further: if two terms t1 and t2 are congruent, they must receive the same denotation, just as two congruent propositions must be assigned the same set. A family of reducibility candidates satisfying these conditions is thus, in effect, a model whose truth values are reducibility candidates; such a structure is called a premodel, and it is shown in [h] that if a congruence has a premodel, then proofs modulo this congruence are strongly normalizable.

It is also shown that variants of the notion of premodel yield similar results for the intuitionistic and classical sequent calculus. In that case there is no cut elimination process whose termination is proved, since in sequent calculus the process of eliminating a cut is not local; one shows instead that if a congruence has a premodel (in a slightly different sense), then the cut rule is redundant in the sequent calculus modulo this congruence.
It is easy to show that, for rewrite systems without quantifiers in their right-hand sides, a very simple premodel can be built, reminiscent in some respects of the syntactic model built in the completeness theorem. It is also shown in [h] that the normalization proof for type theory expressed with combinators can be phrased as the construction of a premodel. This premodel construction resembles in many respects the construction of the functional model of type theory: in both cases the set Mι is a singleton and the set MT→U is the set of functions from MT to MU. Several aspects of the traditional proof of proof normalization, obtained by expressing those proofs in the polymorphic λ-calculus with type constructors Fω [Girard], also reappear: the fact that all terms of a given type built without the type o denote the same object corresponds to the elimination of first-order terms in the traditional proof, and the functionality of the model means that a symbol of type o → o, for instance, is interpreted as a function from candidates to candidates. In [i], the normalization of proofs in the formulation of type theory using explicit substitutions is shown, once again by constructing a premodel. Finally, it is shown in [h] that if a rewrite system has no negative occurrence of an atomic proposition in the right-hand sides of its rules, then a premodel can be built as a least fixed point.

The normalization conjecture

Our conjecture is that a premodel can be built for every confluent and terminating rewrite system, and hence that proofs strongly normalize modulo any such system. We believe this conjecture to be true because we have not managed to find a counterexample. Even if it is true, one may ask what tools are needed to prove it. Since it implies the consistency of type theory, we know by Gödel's second incompleteness theorem that it cannot be proved within type theory itself. An important question, from the methodological point of view, is whether this conjecture implies the consistency of set theory or not. The rewrite system we proposed for set theory does not terminate, but it can be modified as follows:

w ∈ {}(x, y) → ∀a (w = a ⇒ (a = x ∨ a = y))
w ∈ ∪(x) → ∃z (w ∈ z ∧ z ∈ x)
w ∈ P(x) → ∀z (z ∈ w ⇒ z ∈ x)
v ∈ f_{x1,...,xn,w,P}(x1, ..., xn, y) → ∀a (v = a ⇒ (a ∈ y ∧ [a/w]P))

One can show that this system terminates. Alas, cut elimination modulo this system does not trivially imply the consistency of set theory, since in this system the axioms of equality are needed to express the proofs of set theory. Still, one may imagine that this provides a lead for showing that our conjecture implies the consistency of set theory, and hence that it cannot be proved without appealing to axioms beyond set theory, such as large-cardinal axioms.
More modestly, one can observe that the rewrite system of figure 3.1 terminates for large fragments of set theory (for instance, most likely, for the fragment in which the subset scheme is restricted to propositions stratifiable in the sense of W.V.O. Quine), so that proving this conjecture requires going beyond this fragment.

From premodels to models

If a theory (≡, Γ) has a premodel, then the classical sequent calculus modulo the congruence ≡ has the cut elimination property. If, moreover, the reducibility candidate denoting ⊥ contains no closed proof (which is always the case, for example, when Γ is empty, since there is no cut-free, axiom-free proof of the proposition ⊥), then the theory is consistent and, by Gödel's completeness theorem, it has a model. One may ask about the relations between the premodels and the models of such a theory; in particular, one may want to transform premodels into models (which would justify our "premodel" terminology). A premodel is a many-valued model whose truth values are reducibility candidates. One lead is therefore to regard a candidate containing a closed proof as a positive truth value and a candidate containing only open proofs as a negative one. Alas, this idea is too naive, since an undetermined proposition of the theory would then denote a negative truth value, and so would its negation. As in the proof of the completeness theorem, one must therefore presumably begin by completing and saturating the premodel. We leave this project for future work.

Conversion cuts

As we have seen, a theory in deduction modulo is always equivalent to a first-order theory. The notion of cut modulo therefore translates, in first-order logic, into a notion of cut relative to certain axioms (just as the notions of cut in first-order logic with equality, or in arithmetic, are relative to the axioms of equality or to the induction axiom). In [e] we characterized these cuts, which we called conversion cuts. A conversion cut is a sequence formed of an introduction rule and an elimination rule separated by a conversion step, i.e. a sequence of uses of the equational or equivalence axioms of the theory. This makes it possible, for example, to state the cut elimination theorem for type theory while remaining entirely first-order, without even using deduction modulo.

Normalizing the proofs of Cantor's theorem

As an example, we present in this section several proofs of Cantor's theorem in different theories modulo, together with their normal forms. Cantor's theorem shows that there is no surjection from a set onto the set of its subsets.

In type theory, with a function

In type theory, a set is an object of type T → o. Here we choose to consider only the set of all the objects of type ι; the powerset of this set is the set of all the objects of type ι → o. The theorem then amounts to showing that there is no surjection from the type ι onto the type ι → o.
La première possibilité est de représenter cette surjection potentielle par une fonction f de type ι → ι → o. La surjectivité de cette fonction peut s'exprimer par l'existence d'une fonction g inverse à droite de f , c'est-à-dire d'une fonction de type (ι → o) → ι telle que pour tout x, (f (g x)) = x. En utilisant la définition de G.W. Leibniz de l'égalité, cette hypothèse se traduit par la proposition

H : ∀x ∀p (ε(p (f (g x))) ⇔ ε(p x))

On considère alors l'ensemble λx ¬((f x) x) des objets x de type ι qui n'appartiennent pas à (f x), c'est-à-dire

C = λ ¬[↑](f [↑] 1 1)

L'ensemble C est l'image par f de l'objet (g C) de type ι. Par définition de C, (g C) appartient à C si et seulement si il n'appartient pas à (f (g C)), mais cet ensemble n'est autre que C lui-même, donc (g C) appartient à C si et seulement si il ne lui appartient pas, ce qui est contradictoire. On construit le λ-terme démonstration du théorème de Cantor ainsi. La proposition A "(g C) appartient à C" s'écrit

ε(λ ¬[↑](f [↑] 1 1) (g λ ¬[↑](f [↑] 1 1)))

Sa forme normale est

¬ε((f (g λ ¬[↑](f [↑] 1 1)) (g λ ¬[↑](f [↑] 1 1))))

En appliquant l'hypothèse H au terme C et au prédicat

P = λ(1 (g[↑] λ ¬[↑²](f [↑²] 1 1)))

on obtient une démonstration (H C P ) de la proposition

ε(f (g λ ¬[↑](f [↑] 1 1)) (g λ ¬[↑](f [↑] 1 1))) ⇔ ε(λ ¬[↑](f [↑] 1 1) (g λ ¬[↑](f [↑] 1 1)))

qui se réduit sur

ε(f (g λ ¬[↑](f [↑] 1 1)) (g λ ¬[↑](f [↑] 1 1))) ⇔ ¬ε(f (g λ ¬[↑](f [↑] 1 1)) (g λ ¬[↑](f [↑] 1 1)))

c'est-à-dire sur A ⇔ ¬A. De cette proposition on peut déduire une démonstration de la proposition ¬A

λα (fst(H C P ) α α)

puis une démonstration de ⊥

(λα (fst(H C P ) α α) (snd(H C P ) λα (fst(H C P ) α α)))

Comme toute démonstration en théorie des types, cette démonstration est fortement normalisable ; sa forme normale est

(fst(H C P ) (snd(H C P ) λα (fst(H C P ) α α)) (snd(H C P ) λα (fst(H C P ) α α)))

À partir de cette démonstration de ⊥ sous l'hypothèse H, on peut facilement construire une démonstration de la proposition

¬∃f ∃g ∀x ∀p (ε(p (f (g x))) ⇔ ε(p x))

En théorie des types avec une relation

Au lieu d'utiliser la notion primitive de fonction de la théorie des types, on peut coder les fonctions comme des relations fonctionnelles. On pose alors un symbole R de type ι → (ι → o) → o et deux hypothèses qui expriment que R est surjective

E : ∀y ∃x ε(R x y)

et fonctionnelle

∀x ∀y ∀z (ε(R x y) ⇒ ε(R x z) ⇒ y = z)

Encore une fois, on utilise la définition de Leibniz de l'égalité

F : ∀x ∀y ∀z (ε(R x y) ⇒ ε(R x z) ⇒ ∀p (ε(p y) ⇔ ε(p z)))

Pour dire que x n'appartient pas à son image, on est obligé de dire qu'il n'appartient à aucune de ses images car on ne dispose plus d'un terme pour désigner cette image. L'ensemble C est donc λx ∀λy ((R x y) ⇒ ¬(y x)), c'est-à-dire

C = λ ∀λ((R 2 1) ⇒ ¬(1 2))

On commence par démontrer ⊥ sous l'hypothèse h : (R a C). Soit A la proposition ε(C a) ; cette proposition se réduit sur

∀y (ε(R a y) ⇒ ¬ε(y a))

Le terme π = λα (α C h) est une démonstration de A ⇒ ¬A et le terme

π′ = λα λy λβ (fst(F a C y h β (λ ¬[↑](1 a[↑]))) α)

une démonstration de ¬A ⇒ A.
On peut, comme ci-dessus, en déduire une démonstration de ⊥

(π (π′ (λγ (π γ γ))) (π′ (λγ (π γ γ))))

Comme toute démonstration en théorie des types, cette démonstration a une forme normale

π0 = (fst(F a C C h h (λ ¬[↑](1 a[↑]))) (λγ (γ C h γ)) (λy λβ (fst(F a C y h β (λ ¬[↑](1 a[↑]))) (λγ (γ C h γ)))))

On peut ensuite se passer de l'hypothèse h pour obtenir une démonstration normale de ⊥ sous les hypothèses E et F

(exélim (E C) a h π0)

À partir de cette démonstration de ⊥ sous les hypothèses E et F , on peut facilement construire une démonstration de la proposition

¬∃R ((∀y ∃x ε(R x y)) ∧ (∀x ∀y ∀z (ε(R x y) ⇒ ε(R x z) ⇒ ∀p (ε(p y) ⇔ ε(p z)))))

En théorie des ensembles

La démonstration du théorème de Cantor en théorie des ensembles ressemble beaucoup à celle en théorie des types avec une relation. La démonstration que nous donnons ici est, cela dit, un peu plus générale puisque nous considérons un ensemble B quelconque, alors que dans la démonstration en théorie des types ci-dessus, nous avions fait le choix de nous restreindre à l'ensemble des objets de type ι. On note < x, y > l'ensemble {{x}, {x, y}}, c'est-à-dire {}({}(x, x), {}(x, y)). On pose un ensemble R et deux axiomes qui expriment que R est une relation surjective

E : ∀y (y ∈ P(B) ⇒ ∃x (x ∈ B ∧ < x, y > ∈ R))

et fonctionnelle

F : ∀x ∀y ∀z (< x, y > ∈ R ⇒ < x, z > ∈ R ⇒ y = z)

On utilise aussi l'axiome de l'égalité

L : ∀z ∀x ∀y (x = y ⇒ ¬z ∈ x ⇒ ¬z ∈ y)

Pour dire que x n'appartient pas à son image, on dit qu'il n'appartient à aucune de ses images

C = {x ∈ B | ∀y (< x, y > ∈ R ⇒ ¬x ∈ y)}

On a donc une règle de réécriture

x ∈ C → x ∈ B ∧ ∀y (< x, y > ∈ R ⇒ ¬x ∈ y) (1)

(1) La skolémisation du schéma du sous-ensemble ne nous donne pas un tel terme puisque la proposition ∀r ∀b ∃e ∀x (x ∈ e ⇔ ((x ∈ b) ∧ ∀y (< x, y > ∈ r ⇒ ¬x ∈ y))), qui contient des symboles de Skolem, n'est pas une instance du schéma du sous-ensemble. On peut remplacer la proposition < x, y > ∈ r par ∀u ((∀v (v ∈ u ⇔ (∀w (w ∈ v ⇔ (w = x ∨ w = y)) ∨ ∀w (w ∈ v ⇔ w = x)))) ⇒ u ∈ r) mais on obtient une démonstration beaucoup plus compliquée. En revanche, on a un tel terme C dans le système avec lieurs présenté dans [a]. On aurait également un tel terme dans une présentation au premier ordre de ce système. En attendant, on se place dans un système dans lequel le terme C est une constante et où on pose la règle de réécriture ci-dessus.

On commence par démontrer la proposition ⊥ sous l'hypothèse h : a ∈ B ∧ < a, C > ∈ R. Soit A la proposition a ∈ C ; cette proposition se réduit sur

a ∈ B ∧ ∀y (< a, y > ∈ R ⇒ ¬a ∈ y)

Le terme π = λα (snd(α) C snd(h)) est une démonstration de A ⇒ ¬A et le terme

π′ = λα < fst(h), λy λβ (L a C y (F a C y snd(h) β) α) >

une démonstration de ¬A ⇒ A. On peut, comme ci-dessus, en déduire une démonstration de ⊥

(π (π′ (λγ (π γ γ))) (π′ (λγ (π γ γ))))

Cette démonstration a une forme normale

π0 = (L a C C (F a C C snd(h) snd(h)) (λγ (snd(γ) C snd(h) γ)) < fst(h), λy λβ (L a C y (F a C y snd(h) β) (λγ (snd(γ) C snd(h) γ))) >)

On peut ensuite se passer de l'hypothèse h pour obtenir une démonstration normale de ⊥ sous les hypothèses E et F . La proposition C ∈ P(B) se réduit sur

∀x ((x ∈ B ∧ ∀y (< x, y > ∈ R ⇒ ¬x ∈ y)) ⇒ x ∈ B)

Le terme ρ = λx λα fst(α) en est donc une démonstration.
On construit alors la démonstration suivante de ⊥ sous les hypothèses E et F

(exélim (E C ρ) a h π0)

À partir de cette démonstration de ⊥ sous les hypothèses E et F , on peut facilement construire une démonstration de la proposition

∀B ¬∃R ((∀y (y ∈ P(B) ⇒ ∃x (x ∈ B ∧ < x, y > ∈ R))) ∧ (∀x ∀y ∀z (< x, y > ∈ R ⇒ < x, z > ∈ R ⇒ y = z)))

Dans les théories étendues

Dans la formulation du théorème de Cantor en théorie des types avec une fonction, on peut étendre le système de réécriture avec la règle

(f (g x)) → x

de manière à faire entrer dans la congruence l'égalité entre (f (g x)) et x. L'hypothèse H peut être remplacée par la démonstration λx λp (λβ β, λβ β) et la démonstration ci-dessus se réduit sur

(λα (α α) λα (α α))

qui n'a pas de forme normale. Remarquons que, dans ce cas, le système de réécriture lui-même ne termine pas car la proposition A se réduit sur sa négation. Dans les formulations en théorie des types avec une relation et dans la formulation en théorie des ensembles, on utilise l'axiome F pour montrer la proposition C = C puis on applique l'axiome de Leibniz de manière à remplacer C par C dans la proposition ¬A. Dans [Ekman], Ekman propose d'étendre la notion de coupure en théorie des ensembles en considérant comme une coupure une application de l'axiome de Leibniz à une égalité de la forme a = b si a et b sont liées par une certaine relation d'équivalence. En suivant cette idée, on peut considérer comme une coupure une application de l'axiome de Leibniz à une égalité de la forme a = a, que cette proposition soit démontrée par l'axiome d'identité ou de n'importe quelle autre manière. Une telle coupure s'élimine naturellement en supprimant la démonstration de la proposition a = a et l'application de l'axiome de Leibniz. Cela amène à ajouter une règle de réduction des démonstrations liée à l'axiome F

(fst(F x y y α α′ P ) β) → β

Le plongement des théories du premier ordre dans le λ-calcul

Le type produit dépendant sert à exprimer les propositions formées avec l'implication et le quantificateur universel. Le type produit cartésien sert à exprimer les propositions formées avec la conjonction et le quantificateur existentiel, l'union disjointe sert à former les propositions formées avec la disjonction, et le type vide les propositions formées avec la négation et la contradiction. Pour que les propositions atomiques aient le type Type, il est nécessaire de plonger les symboles de prédicat de rang (s1, ..., sn) comme des symboles de type s1 → ... → sn → Type. Cela demande de pouvoir former des termes tels que s1 → ... → sn → Type. On introduit pour cela une nouvelle constante Kind et on donne le type Kind à ces termes. On aboutit au calcul de la figure 4.2. Remarquons que, contrairement aux présentations traditionnelles du λ-calcul avec des types dépendants [Harper et al., Barendregt], ce calcul ne comporte ni règle de conversion, ni règle d'abstraction pour former des objets dont le type est de type Kind :

Γ ⊢ T : Type   Γ, x : T ⊢ T′ : Kind   Γ, x : T ⊢ t : T′
──────────────────────────────────────
Γ ⊢ λx : T t : Πx : T T′

Suivant l'usage, on note T → U le terme Πx : T U et T × U le terme Σx : T U quand la variable x n'apparaît pas dans U . On pose, pour chaque sorte s de la théorie, une variable s′ de type Type, pour chaque variable x de sorte s une variable x′ de type s′, pour chaque symbole de fonction f de rang (s1, ..., sn, t) une variable f ′ de type s′1 → ... → s′n → t′, pour chaque symbole de prédicat P de rang (s1, ..., sn) une variable P ′ de type s′1 → ... → s′n → Type.
Les propositions se plongent alors comme des termes de type Type dans ce contexte :
- (f (t1, ..., tn))′ = (f ′ t′1 ... t′n),
- (P (t1, ..., tn))′ = (P ′ t′1 ... t′n),
- (A ⇒ B)′ = A′ → B′,
- (A ∧ B)′ = A′ × B′,
- (A ∨ B)′ = A′ + B′,
- (¬A)′ = A′ → ∅,
- ⊥′ = ∅,
- (∀x A)′ = Πx : T A′,
- (∃x A)′ = Σx : T A′.

Pour chaque axiome A, on pose une variable xA de type A′. Les démonstrations d'une proposition A peuvent alors s'exprimer comme des termes de type A′.

Le plongement des théories modulo dans le λ-calcul

On peut, de même, plonger une théorie modulo, en transformant les règles de ce calcul de manière à autoriser une conversion à chaque étape de déduction (on remplace alors le symbole ⊢ par le symbole ⊢≡). De manière équivalente, on peut ajouter une règle de conversion :

Γ ⊢≡ t : T   Γ ⊢≡ T : Type   Γ ⊢≡ T′ : Type   T ≡ T′
──────────────────────────────────────
Γ ⊢≡ t : T′

Dans ce cas, cela dit, les étapes de conversion apparaissent explicitement dans la dérivation bien qu'elles n'apparaissent pas dans le λ-terme.

Le cas de la théorie des types simples

Comme toute théorie modulo, la théorie des types simples avec des combinateurs peut se plonger dans le λ-calcul avec des types dépendants. Il y a, malgré tout, une petite difficulté due au fait que cette théorie a un nombre infini de sortes : ι, o, ι → o, ... Pour ne pas avoir à introduire un nombre infini de variables de type Type, on peut introduire deux variables ι et o, un nouveau symbole Arrow et une nouvelle règle

Γ ⊢≡ A : Type   Γ ⊢≡ B : Type
──────────────────────
Γ ⊢≡ Arrow(A, B) : Type

Mais ce plongement n'est pas satisfaisant. En effet, la théorie des types simples comportant déjà une notion de fonction, le calcul obtenu comporte deux notions, redondantes, de fonction : celle propre au λ-calcul et celle importée de la théorie des types. Au type A → B que permet de former le λ-calcul avec des types dépendants s'ajoute le type Arrow(A, B) qui est une sorte de la théorie des types simples. De même, au terme λx λy x s'ajoute le terme K de la théorie des types simples. Autrement dit, dans le cas de la théorie des types simples, parce que cette théorie comporte déjà une notion de fonction, on peut optimiser le plongement naïf ci-dessus : on ne pose que deux variables ι et o de type Type, et on plonge la sorte T → U (flèche de la théorie des types simples) comme le type T → U (flèche du λ-calcul). On plonge le combinateur K comme le terme λx λy x, et on plonge de manière similaire les autres combinateurs et les termes formés par application. Il faut alors ajouter la β-conversion des termes du λ-calcul avec des types dépendants dans la congruence ≡ utilisée dans la règle de conversion pour remplacer les règles de réduction des combinateurs.
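Pour fixer les idées, voici une esquisse, en Python, de la traduction (·)′ des propositions en termes de type Type définie plus haut ; la représentation des propositions et des types par des tuples est un choix d'implémentation hypothétique, qui ne figure pas dans le texte.

# Esquisse : plongement des propositions dans le λ-calcul avec types dépendants.
# Une proposition est un tuple : ("pred", P, [t1, ...]), ("imp", A, B),
# ("and", A, B), ("or", A, B), ("not", A), ("bot",), ("all", x, s, A),
# ("ex", x, s, A), où s est la sorte de la variable liée.

def embed(prop):
    """Calcule A' à partir de A, en suivant les équations du texte."""
    tag = prop[0]
    if tag == "pred":                  # (P(t1, ..., tn))' = (P' t1' ... tn')
        _, p, args = prop
        return ("app", p + "'", [embed_term(t) for t in args])
    if tag == "imp":                   # (A ⇒ B)' = A' → B'
        return ("arrow", embed(prop[1]), embed(prop[2]))
    if tag == "and":                   # (A ∧ B)' = A' × B'
        return ("prod", embed(prop[1]), embed(prop[2]))
    if tag == "or":                    # (A ∨ B)' = A' + B'
        return ("sum", embed(prop[1]), embed(prop[2]))
    if tag == "not":                   # (¬A)' = A' → ∅
        return ("arrow", embed(prop[1]), ("empty",))
    if tag == "bot":                   # ⊥' = ∅
        return ("empty",)
    if tag == "all":                   # (∀x A)' = Πx : s' A'
        _, x, sort, body = prop
        return ("pi", x + "'", sort + "'", embed(body))
    if tag == "ex":                    # (∃x A)' = Σx : s' A'
        _, x, sort, body = prop
        return ("sigma", x + "'", sort + "'", embed(body))
    raise ValueError(tag)

def embed_term(t):
    """x' = x', (f(t1, ..., tn))' = (f' t1' ... tn')."""
    if isinstance(t, str):
        return t + "'"
    f, args = t
    return ("app", f + "'", [embed_term(a) for a in args])

Par exemple, embed(("all", "x", "iota", ("imp", ("pred", "P", ["x"]), ("bot",)))) rend ("pi", "x'", "iota'", ("arrow", ("app", "P'", ["x'"]), ("empty",))), c'est-à-dire le type Πx′ : ι′ ((P′ x′) → ∅).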
On peut s'étonner du fait que nous parvenions ici à exprimer les démonstrations de la théorie des types simples dans le λ-calcul avec des types dépendants (non polymorphe) alors que le polymorphisme a précisément été inventé pour exprimer les démonstrations de cette théorie [Girard, Krivine, Gallier]. Cela est dû au fait que, si nous avons identifié l'implication ⇒ avec la flèche → du λ-calcul typé, et également la flèche → de construction des sortes de la théorie des types simples avec la flèche → du λ-calcul typé, nous avons, en revanche, gardé la distinction entre l'implication ⇒ et son contenu ⇒̇ qui a été introduite dans la présentation au premier ordre de la théorie des types simples. Certes, nous ne pouvons pas typer le terme λX : Type λx : X x de type ΠX : Type (X → X), mais nous pouvons typer le terme λX : o λx : ε(X) x de type ΠX : o (ε(X) → ε(X)). Certes, nous ne pouvons pas appliquer le terme λX : Type λx : X x au type T → T pour obtenir le terme λx : T → T x de type (T → T ) → (T → T ), mais nous pouvons appliquer le terme λX : o λx : ε(X) x au terme t ⇒̇ t et obtenir le terme λx : ε(t ⇒̇ t) x de type ε(t ⇒̇ t) → ε(t ⇒̇ t) qui est identique, modulo la congruence, au terme λx : ε(t) → ε(t) x de type (ε(t) → ε(t)) → (ε(t) → ε(t)). Cet exemple nous montre que le polymorphisme peut être obtenu en λ-calcul avec des types dépendants modulo, non seulement par l'ajout de règles de typage, comme c'est l'usage, mais aussi par l'ajout de règles de réécriture. On obtient ainsi une présentation du Calcul des constructions qui, si elle n'est pas la plus courante, n'est pas, pour autant, nouvelle. Elle a, par exemple, été étudiée en détail par T. Altenkirch [2] qui note pr notre symbole ε et qui distingue le terme P de type o du type pr(P ) des démonstrations de P . On retombe sur la présentation traditionnelle du Calcul des constructions en optimisant encore le plongement : au lieu de poser deux variables ι et o, on ne pose que la variable ι et on plonge la sorte o comme le symbole Type. De même, on plonge le symbole ⇒̇ comme la flèche du λ-calcul, et on donne un plongement similaire pour les symboles ∧̇, ∨̇, ... Il est alors nécessaire d'ajouter la règle des types polymorphes pour exprimer la quantification sur les objets de type o qui sont désormais de type Type, la règle des constructeurs de types pour exprimer la sorte o → o, et la règle de construction d'abstractions dont le type est de type Kind pour former le terme λx : o λy : ι x qui est désormais de type Type → ι → Type. Ces différentes présentations ne sont pas nécessairement équivalentes. En particulier, il reste à déterminer celles dans lesquelles on peut exprimer une démonstration de l'axiome du choix ou le paradoxe de la somme forte. Il y a, cela dit, un avantage à garder la distinction entre les connecteurs et leurs contenus. En effet, les extensions du λ-calcul avec des types dépendants sont traditionnellement classées en deux familles : celles obtenues par l'ajout de règles de typage (qui sont élégamment présentées dans le cube de H.
Barendregt [13]) : le polymorphisme et les constructeurs de types, et celles qui sont obtenues par l'ajout de règles de réécriture : les types inductifs. Dans le λ-calcul avec des types dépendants modulo, toutes ces extensions sont obtenues de manière uniforme par l'ajout de règles de réécriture. Notons enfin qu'on peut aussi plonger la théorie des types simples présentée avec des substitutions explicites dans le λ-calcul avec des types dépendants. Mais, pour optimiser le plongement comme ci-dessus, il est nécessaire de présenter le λ-calcul avec des types dépendants lui-même avec des substitutions explicites, comme l'ont proposé P. Martin-Löf [Martin-Löf], L. Magnusson [Magnusson] et C. Muñoz [Muñoz].

Chapitre 5

La démonstration automatique

La résolution équationnelle

Dans une théorie modulo dont la congruence est définie par des règles de réécriture et des équations portant sur les termes uniquement, les méthodes de démonstration automatique sont assez similaires à celles pour la logique du premier ordre. On peut utiliser la résolution, ou la méthode des tableaux, en remplaçant simplement l'algorithme d'unification du premier ordre par un algorithme d'unification équationnelle modulo cette congruence. Cette idée n'est pas nouvelle : Plotkin propose dans [Plotkin] d'abandonner l'axiome d'associativité en remplaçant l'unification par l'unification associative (ce qui revient, dans notre cadre, à raisonner modulo l'associativité) et M. Stickel [Stickel] donne un cadre général à la résolution équationnelle qu'il appelle theory resolution. Un problème d'unification équationnelle est une équation a = b (ou un ensemble d'équations) et une solution en est une substitution σ telle que les termes σa et σb soient congruents (et non plus nécessairement identiques). L'algorithme général d'unification équationnelle est appelé la surréduction [Fay, Hullot, Gallier et Snyder, Jouannaud et Kirchner]. L'idée de base est d'ajouter aux règles de l'unification du premier ordre une règle qui permet de surréduire (c'est-à-dire d'instancier puis de réduire) un sous-terme d'une équation qui n'est pas réductible mais dont une instance l'est.
Par exemple, le problème d'unification z + 5 = 7 n'a pas de solution pour l'unification du premier ordre, mais il a une solution z → 2, modulo les règles de réécriture

0 + y → y
Succ(x) + y → Succ(x + y)

En effet, le terme z + 5 n'est pas réductible, c'est-à-dire que ce n'est pas une instance d'un membre gauche d'une des règles, ou encore qu'il n'est pas filtré (au sens du filtrage du premier ordre) par le membre gauche de l'une des règles, mais il est surréductible, c'est-à-dire qu'une de ses instances est réductible, ou encore qu'il est unifiable (au sens de l'unification du premier ordre) avec le membre gauche de l'une des règles. En unifiant, au sens ordinaire, le terme z + 5 avec le terme Succ(x) + y, on instancie z en Succ(z′). Le terme Succ(z′) + 5 se réduit alors en Succ(z′ + 5). En procédant de même, on instancie z′ en Succ(z′′), puis z′′ en 0, ce qui donne la solution z → 2.

5.2 La résolution modulo

5.2.1 La règle de surréduction étendue

Dans le cas général, où on a des règles de réécriture qui réécrivent des propositions atomiques en des propositions non atomiques, la résolution équationnelle et la méthode des tableaux équationnelle ne sont plus complètes. Par exemple, modulo la règle

x × y = 0 → x = 0 ∨ y = 0

on peut démontrer la proposition a × a = 0 ⇒ a = 0 et donc

∃y (a × a = y ⇒ a = y)

Pourtant, la mise en forme clausale de la négation de cette proposition donne les deux clauses

a × a = Y
¬a = Y

et les propositions a × a = Y et a = Y n'étant pas unifiables, on ne peut pas appliquer la règle de résolution avec succès. Ce problème a déjà été rencontré dans le cadre de la méthode de résolution d'ordre supérieur de Huet [50, 51]. En logique d'ordre supérieur, en effet, la proposition ∃x x est démontrable car la proposition A ∨ ¬A l'est, mais on ne peut rien déduire de la forme clausale de sa négation. Dans notre présentation, cela s'exprime par le fait que la proposition ∃x ε(x) est démontrable car la proposition ε(A ∨̇ ¬̇A) l'est, mais on ne peut rien déduire de la forme clausale de sa négation ¬ε(X). En résolution d'ordre supérieur, cela a amené à proposer une nouvelle règle de déduction, la règle de scission (splitting), qui consiste à instancier arbitrairement les littéraux dont le symbole de tête est une variable en des propositions formées avec des connecteurs et des quantificateurs. Par exemple, en partant de la clause ¬X, on instancie la variable X par le terme Y ∨ Z, ce qui donne après mise en forme clausale les deux clauses ¬Y et ¬Z.

Les contraintes

Comme d'habitude, quand les problèmes d'unification ne sont pas décidables et unitaires, la bonne formulation de la résolution est celle de la résolution contrainte [50, 51]. Les problèmes d'unification ne sont pas toujours immédiatement résolus au moment de l'application des règles, mais leur résolution peut être retardée. On les garde alors comme des contraintes. Nous aboutissons alors au système de la figure 5.1 appelé Résolution et surréduction étendues ou, plus simplement, Résolution modulo.

La complétude

La complétude de cette méthode s'exprime comme le fait que si une proposition A est démontrable en déduction modulo, alors de la forme clausale de sa négation, on peut déduire par les règles ci-dessus une clause contrainte vide, c'est-à-dire une clause de la forme □/E où E est un ensemble unifiable de contraintes. Dans [g], on montre que cette complétude est atteinte pour toutes les théories modulo qui ont la propriété d'élimination des coupures, faute de quoi il faut se restreindre aux propositions A qui ont une démonstration sans coupures.

Un exemple simple

Pour comprendre pourquoi il est plus efficace de transformer un axiome A ⇔ B en une règle de réécriture de l'ensemble R et d'utiliser la règle de surréduction étendue plutôt que de garder cet axiome tel quel, on peut considérer l'exemple suivant. Si on veut réfuter la théorie

P1 ⇔ (Q2 ∨ P2), ..., Pn−1 ⇔ (Qn ∨ Pn), Q2 ⇔ ⊥, ..., Qn ⇔ ⊥, Pn ⇔ ⊥, P1

par résolution, il faut mettre toutes ces équivalences en forme clausale, puis explorer de manière non déterministe l'espace des résolvantes de l'ensemble de clauses obtenu. En revanche, si on transforme les propositions Pi ⇔ (Qi+1 ∨ Pi+1) en des règles de réécriture

Pi → Qi+1 ∨ Pi+1

les propositions Qi ⇔ ⊥ en des règles de réécriture

Qi → ⊥

et la proposition Pn ⇔ ⊥ en une règle de réécriture

Pn → ⊥

il ne reste plus que la proposition P1 à mettre en forme clausale. Cette proposition se réduit sur ⊥ ∨ ... ∨ ⊥ et sa forme clausale est la clause vide. Certes, réduire la proposition P1 a un certain coût, mais ce coût est sans comparaison avec celui de la recherche non déterministe d'une réfutation par résolution de l'ensemble de clauses ci-dessus. Ce processus de réduction est, en effet, déterministe car le système de réécriture est confluent.
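L'exemple de surréduction ci-dessus (résoudre z + 5 = 7 modulo les règles de l'addition) peut se rejouer mécaniquement. L'esquisse suivante, en Python, implémente une surréduction très simplifiée, spécialisée à ces deux règles : l'inconnue z est instanciée soit par 0 (première règle), soit par Succ(z′) (seconde règle), exactement comme dans le texte. C'est une illustration volontairement naïve, et non l'algorithme général de [c].

# Esquisse : surréduction pour résoudre z + n = m modulo
#   0 + y → y   et   Succ(x) + y → Succ(x + y).
# Les entiers de Peano sont représentés par des entiers Python.

def narrow(n, m):
    """Rend la valeur de z telle que z + n se réduise sur m, ou None."""
    # Règle 0 + y → y : on instancie z par 0 ; le but devient n = m.
    if n == m:
        return 0
    # Règle Succ(x) + y → Succ(x + y) : on instancie z par Succ(z') ;
    # le but Succ(z' + n) = m n'a de solution que si m est un successeur.
    if m > 0:
        z1 = narrow(n, m - 1)
        if z1 is not None:
            return z1 + 1
    return None

print(narrow(5, 7))  # affiche 2 : la solution z → 2 de z + 5 = 7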
La démonstration automatique en théorie des types simples

La surréduction et la scission

Dans le cas de la théorie des types simples, on partitionne les règles de réécriture en deux ensembles. Les règles de l'ensemble E, qui sont exactement les règles du λσ-calcul (avec η-réduction)

(λa)b → a[b.id]
λ(a 1) → b si a =σ b[↑]
(a b)[s] → (a[s] b[s])
1[a.s] → a
a[id] → a
(λa)[s] → λ(a[1.(s • ↑)])
(a[s])[t] → a[s • t]
id • s → s
↑ • (a.s) → s
(s1 • s2) • s3 → s1 • (s2 • s3)
(a.s) • t → a[t].(s • t)
s • id → s
1.↑ → id
1[s].(↑ • s) → s

et les règles de l'ensemble R qui sont les autres règles

ε(∧̇ x y) → ε(x) ∧ ε(y)
ε(∨̇ x y) → ε(x) ∨ ε(y)
ε(⇒̇ x y) → ε(x) ⇒ ε(y)
ε(¬̇ x) → ¬ε(x)
ε(⊥̇) → ⊥
ε(∀̇T x) → ∀y ε(x y)
ε(∃̇T x) → ∃y ε(x y)

Remarquons qu'en logique classique, on peut se restreindre à n'utiliser que les connecteurs ∨ et ¬ et le quantificateur ∀, les autres pouvant se définir à l'aide des règles de De Morgan. De même, on peut se restreindre à n'utiliser que les symboles ∨̇, ¬̇ et ∀̇. Dans ce cas, le système R se réduit aux règles

ε(∨̇ x y) → ε(x) ∨ ε(y)
ε(¬̇ x) → ¬ε(x)
ε(∀̇T x) → ∀y ε(x y)

La règle de surréduction étendue se spécialise alors exactement en la règle de scission de [50, 51]. En effet, un littéral réduit est surréductible s'il n'a pas la forme ε(∨̇ x y), ε(¬̇ x), ε(∀̇ x), mais s'il peut s'unifier avec une telle proposition. Il est facile de montrer que cette condition est équivalente au fait que le symbole de tête d'un tel littéral est une variable. La règle de surréduction étendue consiste à remplacer un tel littéral L, ou bien par ε(x) ∨ ε(y) en ajoutant la contrainte L = ε(∨̇ x y), ou bien par ¬ε(x) en ajoutant la contrainte L = ε(¬̇ x), ou bien par ∀y ε(x y) en ajoutant la contrainte L = ε(∀̇ x). C'est exactement la règle de scission. La règle de scission n'est donc pas propre à la théorie des types. La nécessité d'utiliser une telle règle s'explique par le fait que la théorie des types contient des règles qui réécrivent des propositions atomiques en des propositions non atomiques. La forme de la règle de scission se déduit simplement de la forme de ces règles de réécriture.

L'unification équationnelle et l'unification d'ordre supérieur

L'unification équationnelle qui est utilisée dans cette méthode est l'unification équationnelle modulo le système E ci-dessus, c'est-à-dire modulo le λσ-calcul avec η-réduction. Nous avons donné dans [c] un algorithme d'unification équationnelle dans le λσ-calcul qui peut être vu comme un algorithme optimisé de surréduction. Les optimisations portent sur trois points. D'une part, on utilise le théorème de terminaison de la réduction pour ne considérer que des équations en forme normale [Werner] et pour réduire l'espace de recherche en spéculant sur la forme normale des solutions (en particulier sur leur symbole de tête). Ensuite, on reconnaît certaines équations qui n'ont pas de solution et on en simplifie d'autres en gardant le même ensemble de solutions.
Enfin, on reconnaît certaines équations (entre termes flexibles) qui ont toujours des solutions et qu'il n'est donc pas nécessaire de résoudre, puisque l'utilisation des contraintes permet de se limiter au test d'unifiabilité et ne demande jamais l'énumération des solutions [50, 51]. Nous avons établi dans [c] un théorème d'équivalence entre l'unification équationnelle modulo la théorie λσ et l'unification d'ordre supérieur [Huet, Snyder et Gallier]. Ce théorème exprime qu'un problème d'unification d'ordre supérieur a = b a une solution si et seulement si c'est également le cas du problème d'unification équationnelle dans λσ aF = bF, où F est l'injection du λ-calcul dans le λσ-calcul appelée précuisson (pre-cooking) et décrite au paragraphe 2.4.5 (mais qui a été introduite pour la première fois dans le cadre de cet algorithme d'unification [c]). Le sens direct du théorème est facile : si le premier problème a une solution c1/x1, ..., cn/xn, alors le second a la solution x1 → c1F, ..., xn → cnF. Pour montrer la réciproque, on doit montrer que si le problème d'unification aF = bF a une solution, alors il a aussi une solution dans l'image de la précuisson. Nous avons montré dans [c] que l'algorithme d'unification dans le λσ-calcul reproduisait fidèlement l'algorithme d'unification d'ordre supérieur, c'est-à-dire que chaque étape de cet algorithme pouvait être simulée par un nombre fini d'étapes de l'algorithme d'unification dans le λσ-calcul. Nous avons enfin montré dans [c] que l'algorithme d'unification dans le λσ-calcul était plus simple, par certains aspects, que l'algorithme d'unification d'ordre supérieur. En particulier, l'utilisation de la substitution simple (du remplacement), et non de la substitution particulière du λ-calcul qui demande un renommage des variables, évite le codage fonctionnel de la portée des variables et permet une expression plus atomique des règles. Par exemple, une équation de la forme λa = λb se simplifie en a = b dans le λσ-calcul, alors que dans le λ-calcul une telle simplification est incorrecte puisque la variable liée par le λ ne peut pas être substituée aux variables libres de a et b dans la première équation, mais le peut dans la seconde. Autre exemple : une variable X de sorte Γ ⊢ T → U peut être instanciée par un terme de la forme λY où Y est une variable de sorte T Γ ⊢ U , alors qu'une telle instanciation est incomplète dans le λ-calcul, car il n'est plus possible ensuite d'instancier la variable Y par la variable liée par le symbole λ. Cette simplicité est due au fait que les contraintes de portée des variables sont incorporées dans les termes eux-mêmes par la précuisson. Nous avons enfin montré dans [d] que la restriction de l'unification d'ordre supérieur aux patterns de Miller [Miller] pouvait également se définir dans le λσ-calcul. La décidabilité et le caractère unitaire de l'unification sont alors des conséquences de l'inversibilité de certaines substitutions explicites.

De la résolution d'ordre supérieur à la résolution modulo

Par rapport à la résolution du premier ordre, la résolution d'ordre supérieur apporte trois nouveautés : l'unification d'ordre supérieur, la règle de scission et la skolémisation d'ordre supérieur (qui n'a, en fait, été formulée correctement que dix ans après l'apparition de la résolution d'ordre supérieur, dans [Miller]). Plusieurs auteurs, en particulier M. Davis [24], ont insisté assez tôt sur le fait que la théorie des types n'était qu'une théorie du premier ordre comme les autres. On pouvait en tirer l'intuition que les outils introduits pour la résolution d'ordre supérieur avaient une portée plus générale et pouvaient s'appliquer à d'autres théories, en particulier à la théorie des ensembles.
Cela suggérait le programme de définir une méthode de démonstration automatique pour la logique du premier ordre qui se spécialise exactement en la résolution d'ordre supérieur quand on l'applique à la formulation au premier ordre de la théorie des types. Ce programme présentait deux difficultés : d'une part, les présentations traditionnelles de la théorie des types comme une théorie du premier ordre utilisaient le schéma de compréhension ou des combinateurs, alors que l'unification d'ordre supérieur utilisait la notation plus élégante et plus compacte du λ-calcul. D'autre part, il semblait bien que la séparation entre calcul (β-réduction) et déduction que permettait la théorie des types était un élément essentiel pour le fonctionnement de l'unification d'ordre supérieur et de la scission. La première difficulté a été surmontée par l'introduction des combinateurs catégoriques puis du calcul des substitutions explicites, que nous avons utilisé dans [c] pour réduire l'unification d'ordre supérieur à l'unification équationnelle du premier ordre, puis dans [i] pour donner une formulation au premier ordre de la théorie des types intentionnellement équivalente à la formulation utilisant le λ-calcul. La seconde difficulté a été surmontée par l'introduction dans [g] d'un cadre général, la déduction modulo, qui permet de séparer calcul et déduction. Cela nous a permis de formuler une méthode de démonstration automatique pour la déduction modulo qui s'instancie exactement en la résolution d'ordre supérieur quand on l'applique à la formulation de la théorie des types donnée dans [i], et de réaliser ce programme. Les particularités de la skolémisation d'ordre supérieur sont donc avant tout une conséquence du système de sortes du λσ-calcul. L'unification d'ordre supérieur, en revanche, est une forme particulière d'unification équationnelle, et la scission une forme particulière de surréduction étendue.

La démonstration automatique en théorie des ensembles

En théorie des ensembles, toutes les règles de réécriture transforment des propositions atomiques en des propositions composées ; l'ensemble E est donc vide et l'ensemble R contient toutes les règles de la figure 3.1. L'unification est donc simplement l'unification du premier ordre. La résolution modulo est incomplète car elle ne permet de démontrer que les propositions qui ont une démonstration sans coupures. Elle permet de démontrer le théorème de Cantor, mais pas le théorème de Crabbé

¬{x ∈ A | ¬x ∈ x} ∈ A

En effet, notons f le symbole de Skolem introduit par la skolémisation de l'axiome

∀A ∃B ∀x ((x ∈ B) ⇔ (x ∈ A ∧ ¬x ∈ x))

On a la règle de réécriture

x ∈ f (y) → x ∈ y ∧ ¬x ∈ x

La proposition de Crabbé s'écrit ¬f (a) ∈ a et la forme clausale de sa négation est formée de l'unique clause f (a) ∈ a. Avec cette clause, on ne peut pas appliquer la règle de résolution ; on ne peut pas non plus appliquer la règle de surréduction étendue car on devrait alors unifier la proposition f (a) ∈ a avec une proposition de la forme w ∈ {}(x, y), w ∈ P(x), w ∈ ⋃(x) ou w ∈ g(x1, ..., xn). La contrainte se simplifierait en une contrainte de la forme a = {}(x, y), a = P(x), a = ⋃(x) ou a = g(x1, ..., xn), et aucune de ces équations n'a de solution puisque a est une constante. Ici, il faudrait que la méthode de démonstration automatique devine la coupure qui se trouve dans la démonstration de Crabbé. Les différences entre la résolution modulo en théorie des types et en théorie des ensembles sont discutées dans [j]. Cette discussion est illustrée par la recherche de démonstrations du théorème de Cantor dans les différentes formulations présentées au paragraphe 4.5.

Chapitre 6

Les types et la terminaison

Les types et les ensembles

Dans les chapitres précédents, nous avons donné deux exemples de théories modulo, la théorie des types simples et la théorie des ensembles, qui se présentent, l'une et l'autre, comme des axiomatisations possibles des mathématiques. L'une et l'autre peuvent être vues comme des restrictions de la théorie naïve (incohérente) des ensembles. La théorie des ensembles restreint le schéma de compréhension à quatre instances et la théorie des types le restreint en le plaçant dans un langage multisorté. La déduction modulo permet de jeter un regard nouveau sur les différences entre ces deux théories.
Le système de réécriture associé à la théorie des ensembles ne termine pas ; de ce fait, la théorie des ensembles n'a pas la propriété d'élimination des coupures et la résolution modulo n'est pas complète. En revanche, le système de réduction de la théorie des types simples termine, cette théorie a la propriété d'élimination des coupures et la résolution modulo est complète. Par ailleurs, la théorie des types dispose d'une notion primitive de fonction et de la possibilité de former un terme α(f, x) pour désigner l'image de l'objet x par la fonction f . Cela permet de définir des objets comme l'ensemble de Cantor plus simplement (en particulier avec moins de quantificateurs et de connecteurs) qu'en théorie des ensembles, et la recherche de démonstrations fait moins souvent appel à la règle de surréduction qu'en théorie des ensembles. Cependant, on ne peut pas ignorer certaines critiques souvent opposées à la théorie des types. D'abord, en utilisant un langage multisorté, c'est-à-dire restreint syntaxiquement, la théorie des types élude, plus qu'elle ne résout, le paradoxe de Russell. En caricaturant, on peut dire que la théorie des types résout le paradoxe du menteur en interdisant aux Crétois de prononcer la phrase "Tous les Crétois sont menteurs". D'autre part, il semble y avoir, dans cette théorie, une certaine redondance entre la notion de type et la notion d'ensemble, par exemple entre le type ι et l'ensemble λx ⊤ de type ι → o de tous les objets de type ι. Plus concrètement, on peut se demander comment choisir les ensembles qui sont destinés à être des types de base. Certains choisissent par exemple un type de base pour les entiers, alors que pour d'autres, les entiers ne sont que des objets de type (ι → o) → o qui se révèlent être des cardinaux finis. Autrement dit, la théorie des types renouvelle l'épineuse distinction entre propriété essentielle et propriété accidentelle. Dans [Quine], Quine a résolu le premier problème en donnant une formulation de la théorie des types comme une théorie du premier ordre ordinaire (c'est-à-dire monosortée). Pour cela, il suffit de relativiser [Enderton, Gallier] la présentation de la théorie des types comme une théorie du premier ordre multisortée. On peut, en effet, toujours traduire une théorie du premier ordre multisortée en une théorie monosortée en introduisant pour chaque symbole de fonction ou de prédicat un symbole de même arité et pour chaque sorte s un prédicat unaire Ts, et en traduisant les propositions comme suit :
- x′ = x,
- (f (t1, ..., tn))′ = f ′(t′1, ..., t′n),
- (P (t1, ..., tn))′ = P ′(t′1, ..., t′n),
- (A ⇒ B)′ = A′ ⇒ B′, (A ∧ B)′ = A′ ∧ B′, (A ∨ B)′ = A′ ∨ B′, (¬A)′ = ¬A′, ⊥′ = ⊥,
- (∀x A)′ = ∀x (Ts(x) ⇒ A′), (∃x A)′ = ∃x (Ts(x) ∧ A′).

La traduction d'une théorie Γ s'effectue en traduisant chacune des propositions et en ajoutant pour chaque symbole de fonction f de rang (s1, ..., sn, t) un axiome

∀x1 ... ∀xn ((Ts1(x1) ∧ ... ∧ Tsn(xn)) ⇒ Tt(f (x1, ..., xn)))

et pour chaque sorte s un axiome

∃x Ts(x)

On peut alors démontrer que Γ ⊢ P dans la théorie initiale si et seulement si Γ′ ⊢ P ′ dans la théorie monosortée. En exprimant ainsi la théorie des types comme une théorie du premier ordre monosortée, on résout le premier problème puisqu'on n'utilise plus d'artifice syntaxique pour éviter le paradoxe de Russell.
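Cette traduction se programme directement. L'esquisse ci-dessous, en Python, relativise une formule multisortée en suivant les équations précédentes ; la représentation des formules par des tuples, où chaque quantificateur porte la sorte de la variable qu'il lie, est un choix d'implémentation hypothétique.

# Esquisse : relativisation d'une théorie multisortée en théorie monosortée.
# Formules : ("pred", P, args), ("imp"/"and"/"or", A, B), ("not", A), ("bot",),
# ("all", x, s, A), ("ex", x, s, A) où s est la sorte de la variable liée.

def relativize(phi):
    tag = phi[0]
    if tag == "pred":                 # (P(t1, ..., tn))' = P'(t1', ..., tn')
        _, p, args = phi
        return ("pred", p + "'", [rel_term(t) for t in args])
    if tag in ("imp", "and", "or"):   # les connecteurs sont traduits tels quels
        return (tag, relativize(phi[1]), relativize(phi[2]))
    if tag == "not":
        return ("not", relativize(phi[1]))
    if tag == "bot":
        return ("bot",)
    if tag == "all":                  # (∀x A)' = ∀x (T_s(x) ⇒ A')
        _, x, s, body = phi
        return ("all", x, ("imp", ("pred", "T_" + s, [x]), relativize(body)))
    if tag == "ex":                   # (∃x A)' = ∃x (T_s(x) ∧ A')
        _, x, s, body = phi
        return ("ex", x, ("and", ("pred", "T_" + s, [x]), relativize(body)))
    raise ValueError(tag)

def rel_term(t):                      # x' = x, (f(t1, ...))' = f'(t1', ...)
    if isinstance(t, str):
        return t
    f, args = t
    return (f + "'", [rel_term(a) for a in args])

Par exemple, relativize(("all", "x", "iota", ("pred", "P", ["x"]))) rend la formule monosortée ∀x (T_iota(x) ⇒ P′(x)) : le quantificateur ne porte plus de sorte, celle-ci est devenue le prédicat unaire T_iota.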
Comme la théorie des ensembles, la théorie des types apparaît comme une restriction du schéma de compréhension à certaines instances : celles qui sont stratifiables et dont les quantificateurs sont bornés. L'instance du schéma de compréhension utilisée dans le paradoxe de Russell

∃R ∀x ((R x) ⇔ ¬(x x))

n'est pas un axiome de la théorie car la proposition ¬(x x) n'est la relativisation d'aucune proposition du langage multisorté. La question soulevée par Quine de savoir si on peut relâcher la condition des quantifications bornées et accepter toutes les instances stratifiables du schéma de compréhension, c'est-à-dire la cohérence de ses Fondations nouvelles (New Foundations ou NF), est, à notre connaissance, toujours ouverte. Si cette formulation de la théorie des types résout le premier problème, le second demeure car les types sont remplacés par les prédicats Tι, To, Tι→o, ... qui sont toujours redondants avec certains ensembles. Dans [b], nous avons proposé une formulation de la théorie des types dans laquelle les types ne sont que des ensembles particuliers. Pour cela, on introduit dans le langage lui-même des symboles I, O et →, on supprime les symboles Ts et on traduit les propositions de la forme Ts(t) par ε(s′ t), où s′ est défini comme suit :
- ι′ = I, o′ = O,
- (T → U )′ = T ′ → U ′.

L'expression I → I, à la différence du type ι → ι, est un terme du langage. Nous montrons dans [b], en utilisant des techniques de construction de modèles, qu'une proposition est démontrable en théorie des types si et seulement si sa traduction l'est dans cette théorie.

La réduction dans un cadre monosorté

On peut, à présent, s'interroger sur la possibilité de présenter cette théorie comme une théorie modulo. Dans cette théorie, les axiomes de conversion sont des axiomes gardés. Par exemple, l'un de ces axiomes se formule

(ε((I → I → I) a) ∧ ε((I → I) b) ∧ ε(I c)) ⇒ (S I,I,I a b c) = (a c (b c))

Les règles de réécriture correspondantes doivent donc également être gardées. Pour réduire (S I,I,I a b c) en (a c (b c)), il faut au préalable "vérifier" que a appartient à l'ensemble I → I → I, b à l'ensemble I → I et c à l'ensemble I. Autrement dit, rien dans ce langage n'interdit de former une proposition comme ε(λx ¬(x x) λx ¬(x x)), mais cette proposition n'est pas prouvablement équivalente à sa négation et a fortiori ne se réduit pas sur elle, car l'argument auquel la fonction est appliquée n'est pas dans son domaine de définition et donc l'image obtenue n'est pas spécifiée par la théorie. Dans ce cadre restreint où les ensembles de définition des fonctions ne sont que quelques ensembles particuliers (les types, c'est-à-dire les ensembles formés à partir des ensembles I et O avec le constructeur d'espace fonctionnel →), on peut espérer que ces gardes soient décidables. Mais nous proposons dans le dernier paragraphe de [b] une extension de cette théorie dans laquelle, comme dans les mathématiques informelles, n'importe quel ensemble peut être ensemble de définition d'une fonction. Il est alors illusoire d'espérer décider les gardes des règles de réécriture. Pour avoir à la fois un système de réécriture dont les gardes sont décidables et la possibilité de prendre n'importe quel ensemble comme ensemble de définition d'une fonction, il faut demander, à chaque fois qu'on applique une fonction à un objet, une démonstration que cet objet est dans l'ensemble de définition de cette fonction. Cela demande de faire des démonstrations des objets à part entière de la théorie.
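Pour illustrer cette notion de réduction gardée, voici une esquisse en Python où l'application est une opération ternaire α(f, a, p) : la règle de β-réduction ne se déclenche que si p est reconnu comme une démonstration que l'argument a est dans l'ensemble de définition de f. La fonction check_proof, qui tient lieu de vérificateur de démonstrations, est une hypothèse d'implémentation volontairement rudimentaire.

# Esquisse : β-réduction gardée. Une fonction porte son ensemble de définition ;
# l'application α(f, a, p) ne se réduit que si p démontre que a ∈ dom(f).

def check_proof(proof, claim):
    """Vérificateur de démonstrations (hypothétique) : une démonstration
    directe de « a ∈ dom » est ici un simple témoin du test d'appartenance."""
    return proof == ("direct", claim) and claim[0] in claim[1]

def guarded_apply(f, a, p):
    """α(f, a, p) : réduit (f a) si p est une démonstration que a ∈ dom(f) ;
    sinon le terme reste bloqué (sa valeur n'est pas spécifiée par la théorie)."""
    body, dom = f
    if check_proof(p, (a, dom)):
        return body(a)              # la règle de β-réduction gardée s'applique
    return ("bloqué", f, a, p)      # aucune règle de réduction ne s'applique

# La fonction identité définie sur l'ensemble P des nombres pairs : x ∈ P ⟼ x.
pairs = {0, 2, 4, 6, 8}
f = (lambda x: x, pairs)

print(guarded_apply(f, 2, ("direct", (2, pairs))))   # 2 : la garde est vérifiée
print(guarded_apply(f, 1, ("direct", (1, pairs))))   # terme bloqué : 1 n'est pas pair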
6.3 Les démonstrations objets de la théorie

Les motivations

Si on demande, à chaque fois qu'on applique une fonction à un objet, une démonstration que cet objet est dans l'ensemble de définition de cette fonction, l'application n'est plus une opération binaire qui s'applique à une fonction et à son argument, mais une opération ternaire qui s'applique à une fonction, à son argument et à une démonstration que cet argument est dans l'ensemble de définition de la fonction. Cette méthode est la manière traditionnelle de traiter les "fonctions partielles" (c'est-à-dire les fonctions dont le domaine de définition est une partie d'un type et non un type entier) en théorie des types de Martin-Löf, dans le Calcul des constructions ou dans le Calcul des constructions inductives. Ici, puisque nous sommes dans un cadre non typé, rien n'empêche de former le terme α(f, a, p) où a est un objet qui n'est pas dans le domaine de définition de f et p est un terme quelconque. Mais, dans ce cas, la vérification que p est une démonstration que a est dans l'ensemble de définition de f échoue et les règles de réduction ne s'appliquent pas. Une seconde motivation pour construire une théorie où les démonstrations sont des objets à part entière est, comme nous l'avons déjà évoqué au paragraphe 4.6, de donner des règles de calcul pour l'opérateur de descriptions.

Une formulation sans types des mathématiques où les démonstrations sont des objets

Plusieurs formalisations des mathématiques dans lesquelles les démonstrations sont des objets ont été proposées, parmi lesquelles la théorie des types de Martin-Löf, le Calcul des constructions et le Calcul des constructions inductives. Toutes ces théories sont des théories typées. Cela s'explique par le fait que leurs auteurs ont toujours cherché un formalisme dans lequel la β-réduction termine et par le fait que l'isomorphisme de Curry-De Bruijn-Howard entre propositions et types demande un langage typé. Nous défendons le point de vue qu'il n'est pas nécessaire d'interdire, par des règles syntaxiques, l'application d'une fonction à un argument qui est hors de son domaine de définition, mais qu'il est suffisant d'interdire la β-réduction quand l'argument d'une fonction n'est pas dans cet ensemble. (Simplement parce que la règle de β-réduction est fausse dans ce cas : si f est la fonction identité définie sur l'ensemble des nombres pairs, f = x ∈ P ⟼ x, alors f (1) n'est pas nécessairement égal à 1. La valeur de ce terme n'est pas spécifiée par les axiomes.) En suivant ce point de vue, seule la terminaison de la β-réduction gardée importe. Par ailleurs, l'isomorphisme de Curry-De Bruijn-Howard nous paraît un outil utile en théorie de la démonstration (c'est-à-dire quand on cherche à construire un langage L′ pour exprimer les démonstrations d'une théorie fondée sur un langage L, sans confusion entre ces langages (voir, par exemple, le chapitre 4)) mais pas nécessairement quand on cherche à construire une théorie fondée sur un langage L telle que les démonstrations de cette théorie soient exprimables dans ce même langage L. Dans [f], nous avons proposé une première tentative de langage non typé dans lequel les démonstrations sont des objets. Pour cela, nous avons introduit un symbole pr tel que si P est un contenu propositionnel, alors pr(P ) est l'ensemble des démonstrations de la proposition ε(P ).
La sémantique de Brouwer-Heyting-Kolmogorov s'exprime alors par certains axiomes de la théorie, par exemple

ε(pr(P ⇒̇ Q) = pr(P ) → pr(Q))

On retrouve ici plusieurs outils que nous avons introduits dans les versions multisortée et monosortée de la théorie des types simples : la distinction entre propositions et contenus propositionnels, la notion primitive de fonction, le constructeur d'espace fonctionnel →, ... Les propositions p ∈ pr(P ) ne sont pas toujours décidables, mais on donne dans [f] un algorithme qui permet d'en décider certaines : on montre qu'à chaque fois que la proposition ε(P ) est démontrable, il existe un terme p tel que la proposition p ∈ pr(P ) soit non seulement démontrable, mais de plus reconnue par l'algorithme. On appelle ces démonstrations reconnues par l'algorithme démonstrations directes. On montre aussi que si p est une démonstration indirecte de P et que q est une démonstration directe de p ∈ pr(P ), alors on peut construire à partir de p et q une démonstration directe de P .

Le paradoxe de Tarski

On montre enfin qu'on peut poser l'axiome "vérité = démontrabilité"

ε(P ) ⇔ ∃x (x ∈ pr(P ))

sans qu'on puisse pour autant appliquer le paradoxe de Tarski pour montrer l'incohérence de cette théorie. En effet, réfléchir les propositions comme des termes (c'est-à-dire avoir une fonction "·" qui associe un terme à chaque proposition), un prédicat de vérité T et un schéma P ⇔ T ("P ") n'est pas contradictoire en soi. Si Γ est une théorie qui a un modèle M d'au moins deux éléments, on peut montrer facilement qu'ajouter ce schéma à Γ n'est pas contradictoire : il suffit d'appeler 0 et 1 deux éléments distincts de M, de prendre 1 comme dénotation de "P " si la proposition P est valide dans M et 0 sinon, et d'interpréter T par l'ensemble {1}. La contradiction, dans le paradoxe de Tarski, vient de l'existence d'un prédicat de substitution, c'est-à-dire d'un prédicat R tel qu'on puisse démontrer la proposition

∀x ∀y ∃z R(x, y, z)

et, pour toute proposition P et tout terme t, la proposition (schéma de substitution)

R("P ", t, q) ⇔ q = "P [t/x]"

Le théorème de Tarski montre que dès qu'on a un prédicat de vérité, ce schéma de substitution est une forme du schéma de compréhension de la théorie naïve des ensembles. En effet, en skolémisant la proposition ∀x ∀y ∃z R(x, y, z), on obtient

∀x ∀y R(x, y, α(x, y))

et donc, pour toute proposition P et tout terme t, on a R("P ", t, α("P ", t)), soit

α("P ", t) = "P [t/x]"

En utilisant les axiomes de l'égalité, on obtient

T (α("P ", t)) ⇔ T ("P [t/x]")

et, comme T est un prédicat de vérité,

T (α("P ", t)) ⇔ P [t/x]

Le symbole α permet donc de construire, pour chaque proposition P , un terme qui joue le rôle de l'ensemble {x | P } de la théorie naïve des ensembles, d'où la contradiction. On peut s'interroger sur l'existence de restrictions de la théorie des ensembles pour lesquelles le système de réécriture termine et la propriété d'élimination des coupures est vérifiée. Une solution est sans doute de restreindre le schéma de compréhension aux propositions P stratifiables au sens de Quine. On aurait alors un système intermédiaire entre NF et Z. On peut remarquer que ce système n'est pas typé ; comme d'usage en théorie des ensembles, il restreint la formation des objets en agissant sur les axiomes et non sur la syntaxe du langage, et il est sans doute suffisant pour exprimer une bonne partie des mathématiques informelles. Un résultat qui semble indiquer que ce programme est réalisable est la démonstration par S.C.
Bailin [Bailin] de l'élimination des coupures pour une variante de la théorie des ensembles dans laquelle une règle de déduction

t ∈ t
─────
⊥

est ajoutée, ainsi qu'un mécanisme de priorité qui interdit d'appliquer une règle autre que celle-ci à une proposition de cette forme.

Conclusion

La principale contribution de ce dossier d'habilitation est d'avoir proposé un ensemble de règles de déduction qui permet d'intégrer le calcul et le raisonnement. Certes, cette idée n'est pas nouvelle : Plotkin et Andrews l'avaient déjà implicitement proposée dans le cadre de la démonstration automatique ; le langage du système Automath, la théorie des types de Martin-Löf, le Calcul des constructions et le Calcul des constructions inductives comportaient déjà une règle de conversion ; l'Arithmétique fonctionnelle du second ordre permettait déjà d'utiliser les axiomes équationnels sans laisser de trace dans la démonstration. Mais nous soutenons que cette distinction, qui est centrale en mathématiques, doit être étudiée dans le cadre le plus général possible : la logique du premier ordre. Les applications en démonstration automatique et en λ-calcul s'en déduisent alors naturellement. Par ailleurs, nous avons insisté sur la possibilité d'utiliser des règles de calcul qui transforment des propositions atomiques en des propositions non atomiques. C'est dans ce cadre qu'on peut exprimer des systèmes puissants comme la théorie des types simples et la théorie des ensembles. Ce sont ces règles de calcul qui sont responsables de la difficulté de démontrer l'élimination des coupures et ce sont elles qui, en démonstration automatique, rendent nécessaire la règle de surréduction étendue. Cette habilitation propose deux autres contributions plus techniques qui consistent à replacer le théorème d'élimination des coupures et les algorithmes de démonstration automatique développés pour la théorie des types simples dans le cadre plus général de la déduction modulo. Nous défendons le point de vue que les outils qui ont été développés dans ces deux cas ne sont en rien propres à cette théorie et qu'ils ont une portée beaucoup plus générale. Contrairement à ce que nous avions fait dans les articles qui constituent ce dossier, nous avons accordé une large place, dans ce mémoire, à des conjectures et à la discussion de problèmes encore mal dégrossis. On peut donc à présent tracer un programme de travail pour les années à venir : en premier lieu, nous avons conjecturé qu'on pouvait toujours construire un prémodèle pour une congruence présentée par un système de réécriture confluent et qui termine. Si nous croyons cette conjecture vraie, parce que nous n'avons pas réussi à construire de contre-exemple, nous pouvons cependant nous demander si elle peut être démontrée en théorie des ensembles, ou si, au contraire, elle implique la cohérence de cette théorie. Nous avons également uniquement esquissé les applications de la notion de déduction modulo dans le cadre du λ-calcul typé avec des types dépendants. Nous laissons également ouvert le problème de trouver une axiomatisation des mathématiques non typée dont le système de réécriture termine et dans laquelle les fonctions sont des objets primitifs. Si la notion de β-réduction gardée nous semble ici essentielle, il est probable que cela demande de développer une théorie non typée dans laquelle les démonstrations sont des objets, et nous avons évoqué les difficultés, notamment de cohérence, que cela pose.
Enfin, nous voudrions conclure par une perspective qui concerne la déduction modulo elle-même. Dans le système de déduction naturelle modulo que nous avons présenté au chapitre 1, les règles de déduction peuvent modifier les axiomes de la théorie (c'est, par exemple, le cas de la règle d'introduction de l'implication) mais jamais la congruence. De ce fait, la congruence est fixée une fois pour toutes au début de la démonstration et ne peut pas s'enrichir. On peut imaginer un cadre plus général où il serait possible de transformer dynamiquement un théorème en une règle de réécriture. Il faudrait alors donner, à ce moment, un méta-argument qui assurerait que le système de réécriture reste confluent et qu'il continue à terminer. Éventuellement, cela peut demander d'utiliser un algorithme de complétion pour compléter le système de réécriture enrichi. Cela nous semble particulièrement important pour les démonstrations par récurrence, où on veut souvent transformer l'hypothèse de récurrence en une règle de réécriture pour résoudre la conclusion. Une telle possibilité correspondrait également davantage à la pratique mathématique : au sein des mathématiques, nous développons des algorithmes, nous montrons leur terminaison, puis nous les incorporons peu à peu à nos automatismes de calcul qui, ainsi, s'enrichissent sans cesse.

Table des figures

Figure 1 - Deux et deux font quatre
Figure 1.2 - Le calcul des séquents modulo
Figure 2.2 - Les règles de réécriture de la théorie des types simples avec des substitutions explicites
Figure 2.3 - Règles étendues pour les combinateurs
Figure 3.1 - Les règles de réécriture de la théorie des ensembles
Figure 4.1 - Les règles de typage des démonstrations
Figure 4.2 - Le λ-calcul avec des types dépendants
Figure 5.1 - La résolution modulo
On construit, dans un premier temps, une famille d'ensembles M T en partant de deux ensembles M ι et M o et en définissant M T →U comme l'ensemble des fonctions de M T dans M U , puis dans un second temps les ensembles D Γ⊢T et D Γ⊢∆ , encore une fois comme des ensembles de fonctions. L'ensemble D T1,...,Tn⊢U est défini comme M U M T 1 ×...×M Tn ou mieux, en curryfiant, comme l'ensemble M U M Tn ... M T De même, l'ensemble D T1,...,Tn⊢U1,...,Up est défini comme Figure 2 . 2 - 22 Figure 2.2 -Les règles de réécriture de la théorie des types simples avec des substitutions explicites Figure 2 . 3 - 1 . 231 Figure 2.3 -Règles étendues pour les combinateurs Si a est par exemple l'indice de De Bruijn 1 on obtient le terme (app n 1 n+1 |b| n ). Ce terme n'est pas réductible par le système ci-dessus et la règle du λσ-calcul ((λa) b) → a[b.id] suggère la règle (app n 1 n+1 b) → b. L'étude des différents cas amène au système de réécriture de la figure 2.3. Figure 3 . 1 - 31 Figure 3.1 -Les règles de réécriture de la théorie des ensembles En déduction naturelle modulo, comme en déduction naturelle du premier ordre, une coupure est simplement une règle d'introduction d'un connecteur ou d'un quantificateur suivie d'une règle d'élimination de ce même symbole. Le processus d'élimination des coupures se définit comme en déduction naturelle du premier ordre. En revanche, sa terminaison pose davantage de problèmes. Si on n'impose pas de restriction sur la congruence, il n'est pas difficile d'exhiber une démonstration qui n'a pas de forme normale. Par exemple, avec la règle A → ¬A, la démonstration axiome Figure 4 . 1 - 41 Figure 4.1 -Les règles de typage des démonstrations xn,w,P (y 1 , ..., y n , z) → ∀a ∀b 1 ... ∀b n (a = v ∧ b 1 = y 1 ∧ ... ∧ b n = y n ) ⇒ a ∈ z ∧ [b 1 /x 1 , ..., b n /x n , a/w]P Figure 4 . 2 - 42 Figure 4.2 -Le λ-calcul avec des types dépendants 59 59 5. 2 2 La résolution modulo 5.2.1 La règle de surréduction étendue Dans le cas général, où on a des règles de réécriture qui réécrivent des propositions atomiques en des propositions non atomiques, la résolution équationnelle et la méthode des tableaux équationnelle, ne sont plus complètes. Par exemple, modulo la règle x × y = 0 → x = 0 ∨ y = 0 on peut démontrer la proposition a × a = 0 ⇒ a = 0 et donc ∃y (a × a = y ⇒ a = y) Pourtant, la mise en forme clausale de la négation de cette proposition donne les deux clauses a × a = Y ¬a = Y et les propositions a × a = Y et a = Y n'étant pas unifiables, on ne peut pas appliquer la règle de résolution avec succès. Ce problème a déjà été rencontré dans le cadre de la méthode de résolution d'ordre supérieur de Huet [50, 51]. En logique d'ordre supérieur, en effet, la la proposition ∃x x est démontrable car la proposition A ∨ ¬A l'est, mais on ne peut rien déduire de la forme clausale de sa négation. Dans notre présentation, cela s'exprime par le fait que la proposition ∃x ε(x) Figure 5 . 1 - 51 Figure 5.1 -La résolution modulo et la proposition P n ⇔ ⊥ en une règle de réécriture P n → ⊥ il ne reste plus que la proposition P 1 a mettre en forme clausale. Cette proposition se réduit sur ⊥ ∨ ... ∨ ⊥ et sa forme clausale est la clause vide. . Ce théorème exprime qu'un problème d'unification d'ordre supérieur a = b a une solution si et seulement si c'est également le cas du problème d'unification équationnelle dans λσ a F = b F où . 
F est l'injection du λ-calcul dans le λσ-calcul appelée précuisson (pre-cooking) et décrite au paragraphe 2.4.5 (mais qui a été introduite pour la première fois dans le cadre de cet algorithme d'unification [c]). Le sens direct du théorème est facile : si le premier problème a une solution c 1 /x 1 , ..., c n /x n alors le second a la solution x 1 → c 1F , ..., x n → c nF . Pour montrer la réciproque, on doit montrer que si le problème d'unification a F = b F a une solution alors il a aussi une solution dans l'image de la précuisson. Nous avons montré dans [c] que l'algorithme d'unification dans le λσ-calcul reproduisait fidèlement l'algorithme d'unification d'ordre supérieur, c'est-à-dire que chaque étape de cet algorithme pouvait être simulée par un nombre fini d'étapes de l'algorithme d'unification dans le λσ-calcul. ¬{x ∈ A | ¬x ∈ x} ∈ A en effet notons f le symbole de Skolem introduit par la skolémisation de l'axiome ∀A ∃B ∀x ((x ∈ B) ⇔ (x ∈ A ∧ ¬x ∈ x)) on a la règle de réécriture x ∈ f (y) → x ∈ y ∧ ¬x ∈ x la proposition de Crabbé s'écrit ¬f (a) ∈ a la forme clausale de sa négation est formée de l'unique clause f (a) ∈ a Avec cette clause on ne peut pas appliquer la règle de résolution, on ne peut pas non plus appliquer la règle de surréduction étendue car on devrait alors unifier la proposition f (a) ∈ a avec une proposition de la forme w ∈ {}(x, y), w ∈ P(x), w ∈ (x) ou w ∈ g(x 1 , ..., x n ). La contrainte se simplifierait en une contrainte de la forme a = {}(x, y) a = P(x) a = (x) ou a = g(x 1 , ..., x n ) et aucune de ces équations n'a de solution puisque a est une constante. (ε((I → I → I) a) ∧ ε((I → I) b) ∧ ε(I c)) ⇒ (S I,I,I a b c) = (a c (b c)) Les règles de réécriture correspondantes doivent donc également être gardées. Pour réduire (S I,I,I a b c) en (a c (b c)), il faut au préalable "vérifier" que a appartient à l'ensemble I → I → I, b à l'ensemble I → I et c à l'ensemble I. x1,...,xn,w,P et un axiome ∀x 1 ... ∀x n ∀w (w ∈ f x1,...,xn,w,P (x 1 , ..., x n ) ⇔ P ) On peut, si on veut, noter {w | P } le terme f x1,...,xn,w,P (x 1 , ..., x n ) (ce point sera discuté en détail aux paragraphes 2.4 et 3.2). On peut remarquer que ce système de réécriture est trivialement confluent car il est orthogonal, mais qu'il ne termine pas. Un contre-exemple à la terminaison est le paradoxe de Russell : en notant R le terme {w | ¬w ∈ w} c'est-à-dire le symbole d'individu f w,¬w∈w , on a la règle de réécriture Il est ensuite naturel de transformer ces axiomes en règles de réécriture v ∈ f x1,...,xn,w,P (y 1 , ..., y n ) → [y 1 /x 1 , ..., y n /x n , v/w]P Dans ce dernier schéma, on utilise un symbole d'égalité. On se place donc en logique du premier ordre avec égalité.Skolémiser ces axiomes amène à introduire un nombre infinis de symboles d'individu, qu'on peut noter {x 1 , ..., x n | P } et x 1 , ..., x n -→ t et les axiomes de conversion ∀x 1 ... ∀x n (ε({x 1 , ..., x n x 1 , ..., x n . Et on pose également un schéma d'axiome de compréhension fonctionnelle ∃f ∀x 1 ... ∀x n (f x 1 ... x n ) = t où toutes les variables libres de t sont parmi x 1 , ..., x n . Deux et deux sont quatre." assertion, énoncé ou jugement. Hélas, cette terminologie introduit une confusion. 
2.3.2 Terms, propositions and assertions

This distinction between a proposition ε(t) and its content t, or lexis [61], is traditional in grammar, where one distinguishes the sentence "Two and two are four." from the noun phrase "that two and two are four", which fulfils the functions of a noun phrase, for instance the function of object in the sentence "I believe that two and two are four." It thus allows propositional-attitude predicates to be used as ordinary predicates. A possible terminology for distinguishing these two expressions is to call the expression "that two and two are four" a proposition and the expression "Two and two are four." an assertion, a statement or a judgement. Unfortunately, this terminology introduces a confusion. On the one hand, the act of asserting, stating or judging is no more present in ε(t) than in t; on the other hand, the terminology of first-order logic opposes terms, which are the expressions denoting objects, to propositions, which are the expressions denoting facts, and it would be awkward to call the term t a proposition and the proposition ε(t) a statement. We therefore prefer to call the term t the propositional content, or information, and the proposition ε(t) the proposition.

Another terminological remark is that the symbol ε could be written like Frege's symbol "⊢", following, for example, Whitehead and Russell, who distinguish the lexis p from the assertion ⊢ p. This is not possible, because in the metalanguage (that is, in the language we use to speak about logical theories) we use the symbol ⊢ to state that a proposition is provable. We must therefore distinguish: the term p, the proposition ε(p), and the metaproposition ⊢ ε(p) stating that the proposition ε(p) is provable. In set theory one likewise distinguishes the set P of even numbers, the proposition 4 ∈ P, and the metaproposition ⊢ 4 ∈ P stating that this proposition is provable.

Note that when one posits the extensionality axiom ∀x ∀y ((ε(x) ⇔ ε(y)) ⇒ x = y), the content of a proposition reduces to its truth value. But in the absence of such an axiom, the content of a proposition may convey much more about the proposition itself: two theorems expressing different properties may have different contents even though both are provable. We shall see in Section 6.3.3, however, that certain syntactic features of a proposition must be erased from its content; otherwise the symbol ε can be viewed as A. Tarski's truth predicate and the resulting theory is contradictory.

One introduces for each term t whose free variables are among x1, ..., xn a symbol f_{x1,...,xn,t} and an axiom ∀x1 ... ∀xn (f_{x1,...,xn,t} x1 ... xn) = t. Likewise, one introduces for each proposition A whose free variables are among x1, ..., xn a symbol P_{x1,...,xn,A} and an axiom ∀x1 ... ∀xn ((P_{x1,...,xn,A} x1 ... xn) ⇔ A). For example, one introduces a symbol S and the axiom ∀x ∀y ∀z (S x y z) = ((x z) (y z)).

[Let T be the type of] x and U the type of y, and consider the axiom ∀x ε(α(α(P, x), f(x))). The symbol f is a function symbol of rank (T, U) and not an individual symbol of type T → U, so one writes f(x) and not α(f, x). Consequently, it goes without saying that the symbol f on its own is not a term: in a first-order language, function symbols of non-zero arity are not terms. We thus recover Miller's first condition. The second condition is moot, since in the language of combinators there is no abstraction operator.
Finally, as in any first-order language, quantifiers are available independently of the symbols ∀ and ∃ of this particular language. There is therefore no difficulty in formulating the skolemized proposition ∀x ε(α(α(P, x), f(x))), and one can state the theorem:

Proposition 2.6.1. The propositions without occurrences of Skolem symbols are consequences of a proposition if and only if they are consequences of its skolemized form.

This definition is incorrect, because the proposition [t/x]A may very well fail to be normal. One may then try to replace this second condition by: if π reduces to a proof of the form λx π1, then for every term t, [t/x]π1 is a reducible proof of ([t/x]A)↓, where ([t/x]A)↓ is the normal form of the proposition [t/x]A. This attempt also fails, because the proposition ([t/x]A)↓ is not necessarily smaller than the proposition ∀x P.

...if π reduces to a proof of the form λx π1, then for every term t, [t/x]π1 is a reducible proof of [t/x]A.
- R_{A∧B} is the set of proofs π such that π is strongly normalizing and, if π reduces to a proof of the form (π1, π2), then π1 is an element of R_A and π2 is an element of R_B.
- R_{A∨B} is the set of proofs π such that π is strongly normalizing and, if π reduces to a proof of the form i(π1) (resp. j(π2)), then π1 is an element of R_A (resp. π2 is an element of R_B).
- R_{¬A} is the set of proofs π such that π is strongly normalizing and, if π reduces to a proof of the form λα π1, then for every proof π′ of R_A, [π′/α]π1 is strongly normalizing.
- R_⊥ is the set of strongly normalizing proofs.
- R_{∀x A} is the set of proofs π such that π is strongly normalizing and, if π reduces to a proof of the form λx π1, then for every term t, [t/x]π1 is an element of R_{[t/x]A}.
- R_{∃x A} is the set of proofs π such that π is strongly normalizing and, if π reduces to a proof of the form (t, π1), then [t/x]π1 is an element of R_{[t/x]A}.
- If P ≡ Q then R_P = R_Q.

In this definition, we have chosen not to normalize the propositions in the statement of the conditions on the sets R_{∀x P} and R_{∃x P}. This forces us to add the last condition: congruent propositions must have the same set of reducible proofs.

In [h] we therefore used the vocabulary of model theory: the data of a set M (or of a family of sets in the many-sorted case) together with functions f̂ and P̂ is called a premodel, and the set R_A is called the denotation of A. The definition of a premodel is the same as that of a model, except that truth values are replaced by reducibility candidates and the denotation of compound propositions is defined as above. A premodel is thus a many-valued model whose truth values are reducibility candidates. A premodel is a premodel of a rewrite system if any two congruent propositions have the same denotation.

[If t1 and] t′1 are congruent, the sets P̂(t1, ..., tn) and P̂(t′1, ..., tn) must be identical. The function P̂ can therefore be defined as a function of an abstract object (for example, the class of t1 and t′1) that t1 and t′1 denote. Finally, the condition that two congruent propositions must have the same set of reducible proofs is the same as the one we saw in the definition of the models of a theory modulo in Section 1.4.
In [h] we proved the following result.

Proposition 4.3.2. If a congruence has a premodel, then proofs in natural deduction modulo this congruence strongly normalize.

...one then applies the resolution rule to obtain the empty clause. The splitting rule and the higher-order resolution rule together form a complete system for higher-order logic [Huet, "Constrained resolution: a complete method for higher order logic"; Huet, "A mechanization of type theory"]. In the setting of deduction modulo, the resolution rule must be supplemented with a rule that anticipates the instantiation of a literal, allowing it to be reduced to a compound proposition. For example, the literal...

Extended resolution: from {A1, ..., An, B1, ..., Bm}/E1 and {¬C1, ..., ¬Cp, D1, ..., Dq}/E2 infer {B1, ..., Bm, D1, ..., Dq}/[the constraint is lost in extraction].

...the proposition ¬(Y ∨ Z) yields the two clauses ¬Y and ¬Z; one then instantiates the variable Z by the term ¬Z′, which gives the two clauses ¬Y and Z′.

The naive solution gives the clauses ¬P1, Q2, P2 [and so on]. If instead one turns the proposition P1 ⇔ (Q2 ∨ P2) into a rewrite rule P1 → Q2 ∨ P2, and the propositions Q2 ⇔ ⊥ and P2 ⇔ ⊥ into the rules Q2 → ⊥ and P2 → ⊥, only the proposition P1 remains to be put in clausal form. This proposition reduces to ⊥ ∨ ⊥, and its clausal form is the empty clause. More generally, if one wants to refute the theory P1 ⇔ (Q2 ∨ P2), ..., Pi ⇔ (Qi+1 ∨ Pi+1), ..., Pn ⇔ (Qn+1 ∨ Pn+1), P1, Q2 ⇔ ⊥, ..., Qn+1 ⇔ ⊥, Pn+1 ⇔ ⊥, the naive solution gives the 4n + 2 clauses {¬Pi, Qi+1, Pi+1}, {¬Qi+1, Pi}, {¬Pi+1, Pi} for each i between 1 and n, together with {P1}, {¬Q2}, ..., {¬Qn+1} and {¬Pn+1}.

[Example in arithmetic modulo:] from 2 × 2 = 4 one infers, by the rule (x, 2 × x = 4, 2) ∃-intro, the proposition ∃x (2 × x = 4). Substituting the variable x by the term 2 in the proposition 2 × x = 4 gives the proposition 2 × 2 = 4, which is congruent to 4 = 4. The passage from one proposition to the other, which requires several tedious proof steps in the traditional formulation of Peano arithmetic, is here erased from the proof: it is a mere computation that need not be written down, since anyone can redo it. Note that this proof uses neither the axioms of addition nor those of multiplication. This is explained by the fact that reasoning modulo a congruence makes it possible to drop these axioms: indeed, if one postulates that the terms 0 + x and x are congruent, then the axiom ∀x (0 + x = x) is congruent to the equality axiom ∀x (x = x); it is therefore redundant and can be dropped.
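To make the clause-count comparison above concrete, here is a small executable sketch. It is our own illustration, with invented names (normalize, rules): a disjunction is represented as a set of atoms, ⊥ as the empty disjunction, and the oriented rules Pi → Qi+1 ∨ Pi+1, Qi+1 → ⊥, Pn+1 → ⊥ are applied until no atom is rewritable; P1 then normalizes to the empty clause, instead of producing 4n + 2 clauses.

def normalize(prop, rules):
    """Repeatedly rewrite the atoms of `prop` (a set of disjuncts) using
    `rules` (atom -> set of disjuncts; the empty set encodes bottom)."""
    disjuncts = set(prop)
    changed = True
    while changed:
        changed = False
        for atom in list(disjuncts):
            if atom in rules:
                disjuncts.remove(atom)
                disjuncts.update(rules[atom])
                changed = True
    return disjuncts

n = 10
rules = {}
for i in range(1, n + 1):
    rules[f"P{i}"] = {f"Q{i+1}", f"P{i+1}"}   # P_i -> Q_{i+1} v P_{i+1}
    rules[f"Q{i+1}"] = set()                  # Q_{i+1} -> bottom
rules[f"P{n+1}"] = set()                      # P_{n+1} -> bottom

print(normalize({"P1"}, rules))               # set(): the empty clause
print("naive clausal form would have", 4 * n + 2, "clauses")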
The rules of natural deduction modulo are the following (each rule acts up to the congruence ≡):
- axiom: Γ ⊢≡ B, provided A ∈ Γ and A ≡ B.
- ⇒-intro: from Γ, A ⊢≡ B infer Γ ⊢≡ C, if C ≡ (A ⇒ B).
- ⇒-elim: from Γ ⊢≡ C and Γ ⊢≡ A infer Γ ⊢≡ B, if C ≡ (A ⇒ B).
- ∧-intro: from Γ ⊢≡ A and Γ ⊢≡ B infer Γ ⊢≡ C, if C ≡ (A ∧ B).
- ∧-elim: from Γ ⊢≡ C infer Γ ⊢≡ A, and from Γ ⊢≡ C infer Γ ⊢≡ B, if C ≡ (A ∧ B).
- ∨-intro: from Γ ⊢≡ A infer Γ ⊢≡ C, and from Γ ⊢≡ B infer Γ ⊢≡ C, if C ≡ (A ∨ B).
- ∨-elim: from Γ ⊢≡ D, Γ, A ⊢≡ C and Γ, B ⊢≡ C infer Γ ⊢≡ C, if D ≡ (A ∨ B).
- ¬-intro: from Γ, A ⊢≡ ⊥ infer Γ ⊢≡ B, if B ≡ (¬A).
- ¬-elim: from Γ ⊢≡ B and Γ ⊢≡ A infer Γ ⊢≡ ⊥, if B ≡ (¬A).
- ⊥-elim: from Γ ⊢≡ B infer Γ ⊢≡ A, if B ≡ ⊥.
- (x, A) ∀-intro: from Γ ⊢≡ A infer Γ ⊢≡ B, if B ≡ (∀x A) and x ∉ FV(Γ).
- (x, A, t) ∀-elim: from Γ ⊢≡ B infer Γ ⊢≡ C, if B ≡ (∀x A) and C ≡ [t/x]A.
- (x, A, t) ∃-intro: from Γ ⊢≡ C infer Γ ⊢≡ B, if B ≡ (∃x A) and C ≡ [t/x]A.

The rules of sequent calculus modulo are:
- axiom: A ⊢≡ B, if A ≡ B.
- cut: from Γ, A ⊢≡ Δ and Γ ⊢≡ B, Δ infer Γ ⊢≡ Δ, if A ≡ B.
- contraction-left: from Γ, B1, B2 ⊢≡ Δ infer Γ, A ⊢≡ Δ, if A ≡ B1 ≡ B2.
- contraction-right: from Γ ⊢≡ B1, B2, Δ infer Γ ⊢≡ A, Δ, if A ≡ B1 ≡ B2.
- weakening-left: from Γ ⊢≡ Δ infer Γ, A ⊢≡ Δ.
- weakening-right: from Γ ⊢≡ Δ infer Γ ⊢≡ A, Δ.
- ⇒-left: from Γ ⊢≡ A, Δ and Γ, B ⊢≡ Δ infer Γ, C ⊢≡ Δ, if C ≡ (A ⇒ B).
- ⇒-right: from A, Γ ⊢≡ B, Δ infer Γ ⊢≡ C, Δ, if C ≡ (A ⇒ B).
- ∧-left: from Γ, A, B ⊢≡ Δ infer Γ, C ⊢≡ Δ, if C ≡ (A ∧ B).
- ∧-right: from Γ ⊢≡ A, Δ and Γ ⊢≡ B, Δ infer Γ ⊢≡ C, Δ, if C ≡ (A ∧ B).
- ∨-left: from Γ, A ⊢≡ Δ and Γ, B ⊢≡ Δ infer Γ, C ⊢≡ Δ, if C ≡ (A ∨ B).
- ∨-right: from Γ ⊢≡ A, B, Δ infer Γ ⊢≡ C, Δ, if C ≡ (A ∨ B).
- ¬-left: from Γ ⊢≡ A, Δ infer Γ, B ⊢≡ Δ, if B ≡ (¬A).
- ¬-right: from Γ, A ⊢≡ Δ infer Γ ⊢≡ B, Δ, if B ≡ (¬A).
- ⊥-left: Γ, A ⊢≡ Δ, if A ≡ ⊥.
- (x, A, t) ∀-left: from Γ, [t/x]A ⊢≡ Δ infer Γ, B ⊢≡ Δ, if B ≡ (∀x A).
- (x, A) ∀-right: from Γ ⊢≡ A, Δ infer Γ ⊢≡ B, Δ, if B ≡ (∀x A) and x ∉ FV(ΓΔ).

Proposition 1.5.1 (Conservativity). Let (≡, Γ) be a theory modulo over a language L. One obtains a conservative extension of this theory by adding an equality symbol and the corresponding equality axioms Eq.

One shows that every equality model of the theory (≡, Γ) over the language L extends to an equality model of (≡, Γ Eq) over the language L ∪ {=}, by interpreting equality by equality. Note that this proof does not work if one does not restrict to equality models. Indeed, since the congruence is defined on terms, when a new predicate is added the congruence automatically extends to the atomic propositions formed with this predicate. Thus, if t and u are two congruent terms, the proposition t = u is congruent to t = t and is therefore provable.

We write x1 ↦ t1, ..., xn ↦ tn for the simple substitution, or replacement, of the variables x1, ..., xn by the terms t1, ..., tn, as opposed to the notation t1/x1, ..., tn/xn, which denotes the substitution "with renaming" used in languages with binders. This notation x1 ↦ t1, ..., xn ↦ tn, with a short arrow ↦, must not be confused with the notation x1, ..., xn ⟼ t with a long arrow.

...the type C and the term is not well typed. Typing judgements thus have the form Γ ⊢ t : T, where Γ is a typing context, that is, a list of types. To express the typed λσ-calculus as a many-sorted first-order language, we proposed in [c] and then in [i] to consider sorts of the form Γ ⊢ T, where T is a simple type and Γ a list of simple types. Substitutions, which are lists of terms, naturally have types that are lists of simple types. Like terms, they must be typed in a context; they therefore have sorts of the form Γ ⊢ Δ, where Γ and Δ are lists of simple types. One must then, as was already the case with the combinators, distinguish various symbols according to their sort. For example, there are infinitely many constants 1_{Γ,A}, the constant 1_{Γ,A} being of sort AΓ ⊢ A. This leads to the following language.
The typing rules for proof terms in natural deduction modulo are:
- axiom: Γ ⊢≡ α : B, if α : A ∈ Γ and A ≡ B.
- ⇒-intro: from Γ, α : A ⊢≡ π : B infer Γ ⊢≡ λα π : C, if C ≡ (A ⇒ B).
- ⇒-elim: from Γ ⊢≡ π : C and Γ ⊢≡ π′ : A infer Γ ⊢≡ (π π′) : B, if C ≡ (A ⇒ B).
- ∧-intro: from Γ ⊢≡ π : A and Γ ⊢≡ π′ : B infer Γ ⊢≡ (π, π′) : C, if C ≡ (A ∧ B).
- ∧-elim: from Γ ⊢≡ π : C infer Γ ⊢≡ fst(π) : A and Γ ⊢≡ snd(π) : B, if C ≡ (A ∧ B).
- ∨-intro: from Γ ⊢≡ π : A infer Γ ⊢≡ i(π) : C, and from Γ ⊢≡ π : B infer Γ ⊢≡ j(π) : C, if C ≡ (A ∨ B).
- ∨-elim: from Γ ⊢≡ π : D, Γ, α : A ⊢≡ π′ : C and Γ, β : B ⊢≡ π″ : C infer Γ ⊢≡ (δ π απ′ βπ″) : C, if D ≡ (A ∨ B).
- ¬-intro: from Γ, α : A ⊢≡ π : ⊥ infer Γ ⊢≡ λα π : B, if B ≡ (¬A).
- ¬-elim: from Γ ⊢≡ π : B and Γ ⊢≡ π′ : A infer Γ ⊢≡ (π π′) : ⊥, if B ≡ (¬A).
- ⊥-elim: from Γ ⊢≡ π : B infer Γ ⊢≡ (botelim π) : A, if B ≡ ⊥.
- (x, A) ∀-intro: from Γ ⊢≡ π : A infer Γ ⊢≡ λx π : B, if B ≡ (∀x A) and x ∉ FV(Γ).
- (x, A, t) ∀-elim: from Γ ⊢≡ π : B infer Γ ⊢≡ (π t) : C, if B ≡ (∀x A) and C ≡ [t/x]A.

The rules of the λ-calculus with dependent types are the following.

Declaration and use of variables: [ ] is well formed; from Γ ⊢ T : Kind infer that Γ, x : T is well formed; from Γ ⊢ T : Type infer that Γ, x : T is well formed; if Γ is well formed and x : T ∈ Γ, then Γ ⊢ x : T.

Formation of objects of type Kind: if Γ is well formed, then Γ ⊢ Type : Kind; from Γ ⊢ T : Type and Γ, x : T ⊢ T′ : Kind infer Γ ⊢ Πx : T T′ : Kind.

Formation of objects of type Type: from Γ ⊢ T : Type and Γ, x : T ⊢ T′ : Type infer Γ ⊢ Πx : T T′ : Type and Γ ⊢ Σx : T T′ : Type; from Γ ⊢ T : Type and Γ ⊢ T′ : Type infer Γ ⊢ T + T′ : Type; if Γ is well formed, then Γ ⊢ ∅ : Type.

Formation and use of functions: from Γ ⊢ T : Type, Γ, x : T ⊢ T′ : Type and Γ, x : T ⊢ t : T′ infer Γ ⊢ λx : T t : Πx : T T′; from Γ ⊢ t : Πx : T T′ and Γ ⊢ t′ : T infer Γ ⊢ (t t′) : [t′/x]T′.

Formation and use of pairs: from Γ ⊢ T : Type, Γ, x : T ⊢ T′ : Type, Γ ⊢ a : T and Γ ⊢ b : [a/x]T′ infer Γ ⊢ (p_{Σx:T T′} a b) : Σx : T T′; from Γ ⊢ U : Type, Γ ⊢ a : Σx : T T′ and Γ, x : T, y : T′ ⊢ b : U infer Γ ⊢ (E a xyb) : U.

Formation and use of elements of disjoint unions: from Γ ⊢ T : Type, Γ ⊢ T′ : Type and Γ ⊢ a : T infer Γ ⊢ (i_{T,T′} a) : T + T′; from Γ ⊢ T : Type, Γ ⊢ T′ : Type and Γ ⊢ b : T′ infer Γ ⊢ (j_{T,T′} b) : T + T′.

Index (excerpt): λ-calculus (with type constructors; with explicit substitutions; with dependent types; with dependent types modulo; with inductive types; weak, with explicit substitutions; polymorphic); λσ-calculus; equivalence lemma; conservativity lemma; lexis; Tait's method; model; descriptions operator; Russell's paradox; Tarski's paradox; embedding of a theory in the λ-calculus; embedding of a theory modulo in the λ-calculus; pre-cooking; premodel; programming in mathematical language; Crabbé's proposition; reasoning; reducible; proof-reduction rules; rewrite rules (of set theory; of the theory of simple types); proof-typing rules; equational resolution; resolution modulo; comprehension scheme (closed; open); splitting.
00411480
en
[ "spi.tron" ]
2024/03/04 16:41:24
2006
https://hal.science/hal-00411480/file/ESSCAP2006_Venet_1.pdf
A. Hammar, R. Lallemand, P. Venet, G. Coquery, G. Rojat, J. Chabas

Electrical characterization and modelling of round spiral supercapacitors for high power applications

Keywords: Supercapacitor, characterization, Impedance Spectroscopy, Constant Current charge/discharge, modelling

Our laboratory research project consists in the study of accelerated cycling and ageing of supercapacitors under railway and electrical traction constraints. Electrical model parameters tied to physical and electrochemical phenomena are an important key to investigate ageing indicators of supercapacitors and give precious information on their state of health. These kinds of studies need powerful techniques and instrumentation. Several electrical characterization methods exist today; we have chosen to use two complementary ones: impedance spectroscopy and direct-current charge/discharge. The present work describes the metrology defined for the electrical characterization of supercapacitors. An electrical model based on a simple tortuous-pore impedance is explained. Some electrical parameters of the model defined for wound supercapacitors are investigated for different temperatures and different voltages. Finally, we assess the electrical model under full charge/discharge at high current levels.

Introduction
Supercapacitors are electrical energy devices with high power density and high expected reliability. They exhibit better temperature stability than other energy storage devices; as a consequence, cycling with high charge/discharge currents is possible. They have become an attractive alternative storage device for several applications. They may be used as the only energy and power device or in combination with other energy supplies such as batteries [Ruffer, "Le supercondensateur et la batterie se marient pour fournir de l'énergie électrique"] or fuel cells [Kötz, "Supercapacitors for peak power demand in fuel-cell driven cars"]. Some electrical applications, such as railway [Hammar, "Assessment of electrothermal model of supercapacitors for railway applications"; Chabas, "THALES: hybrid tram-train using ultracapacitors for electric power supply"] and electrical traction systems, present strong dynamic constraints. In such cases, accurate electrical modelling with adapted characterization methods is necessary. Consequently, we combine the impedance spectroscopy technique [Buller, "Modelling the dynamic behaviour of supercapacitors using impedance spectroscopy"] with the direct-current method. Impedance spectroscopy provides a powerful tool for analysing the dynamic behaviour of a supercapacitor under dynamic constraints [Karden, "A frequency-domain approach to dynamical modelling of electrochemical power sources"], while the charge/discharge method makes it possible to assess model validity under high current constraints. Some mathematical transformations are necessary to find a representative equivalent electric circuit. Moreover, impedance spectroscopy allows the state of charge of the supercapacitor to be detected at any working point. Ideally, the electrical parameters of any electrical model should reflect the physical and electrochemical phenomena of the supercapacitor; this allows observing and understanding the evolution of the state of health of the supercapacitor under current or power cycling and with ageing.
In this paper we use the impedance spectroscopy technique to characterize supercapacitors. We define its metrology [Hammar, "Impedance Spectroscopy characterization of supercapacitors for railway environment"], present it and discuss it. It meets practical requirements in measurement time and precision while covering the different properties of the supercapacitor under test. On the other hand, we propose an electrical model based on a homogeneous porous electrode impedance. A supercapacitor with carbon/carbon electrodes and organic electrolyte is characterized using the impedance spectroscopy method at different voltages. The correlation between impedance spectroscopy measurements and full charge/discharge at high current levels (100 A, 200 A, 400 A) is investigated.

Measurement techniques
Impedance spectroscopy method
Impedance spectroscopy consists in exciting the system under test with a small alternating signal (voltage or current) and measuring the other quantity (current, voltage). The operating mode used for the characterization of supercapacitors is the potentiostatic mode: a DC voltage V is applied to the system and must be regulated and controlled during the impedance measurement. An alternating sinusoidal voltage U(ω) of small amplitude and pulsation ω (frequency f) is superposed on the DC voltage V once it is stabilized:

U(ω) = U0 e^{i(ωt+α)}.

We measure the alternating current

I(ω) = I0 e^{i(ωt+δ)}

and compute the impedance

Z(ω) = dU(ω)/dI(ω) = Z0 e^{iϕ},

where the impedance amplitude Z0 is the ratio between the AC voltage amplitude and the AC current amplitude, and the phase ϕ of the impedance is the difference between the arguments of the alternating signals. By varying the frequency of the AC voltage signal we obtain a spectrum of impedance versus frequency. Numerous representations of the impedance spectrum exist: BODE diagram, complex-plane diagram (NYQUIST plot), etc.

The AC voltage magnitude is optimized according to several constraints:
o minimization of noise on the AC voltage and current signals;
o no violation of the linearity of the system response region in the U/I characteristic;
o characteristics of the supercapacitor, in particular its low impedance;
o technical hardware specifications of the impedancemeter (maximal current, maximal voltage, ...).

One problem inherent in impedance experiments is the balance between the improvement in data quality obtained by increasing the number of frequencies used and the number of cycles collected at each frequency, and the increase of the time required for the full experiment. This problem mainly appears when low frequencies are used. The THALES software of the IM6 impedancemeter offers a partial solution by dividing the frequency range into two: a high-frequency range (f > 66 Hz) and a low-frequency range (f < 66 Hz). At high frequencies, the impedance is computed as the mean value of the impedances measured over numerous cycles (10 cycles) at the same frequency, to increase data quality. At low frequencies, although precision is preserved, the measurement time limits acquisition to one cycle per frequency. The total impedance measurement time remains realistic for our laboratory use. The frequency spectra are chosen in the range between 10 mHz and 50 kHz, which corresponds to typical time constants in most high-power applications. The DC voltage applied to the supercapacitor is sufficiently stabilized before starting the impedance spectroscopy measurements.
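As a side note, the single-frequency impedance estimation described above can be reproduced numerically. The following Python sketch is our own illustration (it is not the THALES software, and the function and parameter names are invented): it projects sampled voltage and current signals onto e^{-i 2π f t} over an integer number of cycles, a single-bin Fourier (lock-in) detection.

import numpy as np

def impedance_at(f, t, u, i):
    w = np.exp(-2j * np.pi * f * t)
    U = np.mean(u * w)    # proportional to the complex voltage amplitude
    I = np.mean(i * w)    # proportional to the complex current amplitude
    return U / I          # the common factor cancels: Z = dU/dI at frequency f

# Synthetic check: a series R-C cell probed at 1 Hz with a 0.5 A AC current.
R, C, f = 1e-3, 2600.0, 1.0
t = np.linspace(0.0, 10.0, 200001)                 # exactly 10 cycles
i = 0.5 * np.cos(2 * np.pi * f * t)
z_true = R + 1.0 / (2j * np.pi * f * C)
u = np.real(z_true * 0.5 * np.exp(2j * np.pi * f * t))   # steady-state response
print(impedance_at(f, t, u, i), "vs", z_true)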
Limitation at high frequencies for low-impedance supercapacitors
Some errors appear for low-impedance supercapacitors at high frequencies. They are caused by the coupling effects between the current-feeding and potential-sensing lines [Schiller, "Main error sources at AC measurements on low impedance objects"]. They diminish at low frequencies but become dramatic at high frequencies. In practice, using one pair of twisted wires for the current-feeding lines and a second one for the potential-sensing lines helps minimize such errors. In spite of careful wiring, the measurement errors are minimized but not eliminated. Complementary methods to spectroscopy measurement exist in the literature, such as the high-current interrupt method [Richer, "Current interrupt technique measuring low impedances at high frequencies"]. We have proposed the solution represented in Fig. 1: the voltage sensors are separated from the power source in accordance with Ampère's law, so that, theoretically, the electromagnetic field outside the cylinder is equal to zero.

Fig. 1 - Basic scheme of the system minimizing the electromagnetic coupling effect on impedance spectroscopy measurements (cylindrical supercapacitor, power cylinder carrying the AC current, voltage measurement, insulation).

Supercapacitors, instrumentation
The two testing methods (impedance spectroscopy and constant-current charge/discharge) were applied to two supercapacitors, B and F, based on activated carbon and organic electrolyte and developed for power applications. B is a supercapacitor rated 2600 F and 2.5 V; F is rated 2600 F and 2.7 V. Impedance spectroscopy measurements are realised with Zahner IM6 + PP240 impedance analyzers [10][11]. The THALES software ensured data acquisition and control with good quality [Bott, "Electrochemical Impedance Spectroscopy using the BAS-Zahner IM6 and IM6e Impedance Analyzers"]. We have developed a constant charge/discharge test bench for supercapacitor characterization in our laboratory.

Measurement results and modelling
Impedance spectroscopy results and discussion
Two supercapacitors, E (2600 F, 2.5 V) and F (2600 F, 2.7 V), are tested with the impedance spectroscopy method. Figure 2 illustrates an example of the complex-plane plot of the impedance spectroscopy behaviour of supercapacitor E at different temperatures at a voltage of 2 V, in the frequency range from 10 mHz to 1 kHz. In contrast, Figure 4(a, b) represents the behaviour of the E and F supercapacitors at different DC voltages at a temperature of 25 °C, in the frequency range from 10 mHz to 1 kHz.

Simple pore model of a cylindrical supercapacitor
Fig. 3 gives the frequency behaviour of a supercapacitor with a homogeneous porous electrode at stabilized voltage. We consider accessible pores of uniform dimensions, and inaccessible pores behaving like a resistance contributing to self-discharge. As a consequence, the impedance of the electrode is equivalent to a cylindrical pore impedance in parallel with a resistance. The structure of the supercapacitor can be decomposed into three elements [de Levie, "Electrochemical response of porous and rough electrodes"; Murray, "Transformées de Laplace"]: an inductor L, a series resistance R_s, and a complex pore impedance Z_P(jω) (cf. Fig. 4, Fig. 5).

Fig. 3 - Complex-plane plot of a porous, ideally polarizable electrode in series with R_s. The pore impedance Z_P (part of the total supercapacitor impedance Z_sc) is represented by a ladder network: L and R_s in series with branches (R_n in parallel with C_dl/2, for n = 1, 2, ...) and a final series capacitance C_dl.
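The ladder network of Fig. 3 can be checked numerically against the closed-form coth expression made explicit in the next paragraph, using the branch values R_k = 2τ/(k²π²·C_dl) in parallel with C_dl/2 plus the final series capacitance C_dl. The following Python sketch is our own illustration; the values of C_dl and R_el are placeholders, not the fitted parameters of the paper.

import numpy as np

C_dl, R_el = 2600.0, 1e-3          # illustrative values only
tau = R_el * C_dl

def Zp_exact(s):
    z = np.sqrt(tau * s)
    return np.sqrt(tau) / (C_dl * np.sqrt(s)) / np.tanh(z)   # coth = 1/tanh

def Zp_ladder(s, n):
    z = 1.0 / (C_dl * s)                                     # series capacitor C_dl
    for k in range(1, n + 1):
        Rk = 2.0 * tau / (k**2 * np.pi**2 * C_dl)
        z = z + Rk / (1.0 + Rk * (C_dl / 2.0) * s)           # R_k parallel C_dl/2
    return z

for f in (0.01, 0.1, 1.0, 10.0):
    s = 2j * np.pi * f
    rel_err = abs(Zp_exact(s) - Zp_ladder(s, 20)) / abs(Zp_exact(s))
    print(f, rel_err)   # the 20-branch ladder tracks the coth formula closely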
The branch parameters of the ladder are R_n = 2τ/(n²π²·C_dl); in particular R_1 = 2τ/(π²·C_dl) and R_2 = τ/(2π²·C_dl).

The mathematical expression of Z_P can be given (with s = jω) by

Z_P(s) = √τ · coth(√(τs)) / (C_dl · √s).

Putting K_1 = √τ/C_dl and K_2 = √τ, this reads

Z_P(s) = (K_1/√s) · coth(K_2 √s).

The inverse transformation of Z_P(s) into the time domain is given by [Murray, "Transformées de Laplace"]

Z_P(t) = K_1/K_2 + (2K_1/K_2) Σ_{n≥1} e^{-n²π²t/K_2²},

that is,

Z_P(t) = 1/C_dl + (2/C_dl) Σ_{n=1}^{∞} e^{-n²π²t/(R_el·C_dl)}, with τ = R_el·C_dl,

where C_dl (average double-layer capacitance) is the capacitance at low frequencies and τ depends on C_dl and on the resistance R_el of the electrolyte filling the pore. Z_P(jω) is approximated by n RC circuits, fully described by two parameters (C_dl and τ); it describes in-pore diffusion. For the complete model of the supercapacitor we therefore need to identify four parameters (L, R_s, C_dl, R_el).

Analysis
According to the physical theory of Gouy, Chapman and Stern [Gouy; 16; 17] of the double-layer capacitor, the capacitance C_dl is equivalent to the combination of two capacitances. We denote by V the voltage at the terminals of the double layer. The dependency of C_dl on voltage is expressed by

C_dl = [1/C_1 + 1/(C_2 · cosh(C_3 · V))]^{-1}.

We have determined the parameters (C_1, C_2, C_3) of the capacitance C_dl for supercapacitor B at different temperatures (Table 1). Simulated curves of the double-layer capacitance C_dl and values of C_dl computed from impedance spectroscopy measurements show very good agreement (cf. Fig. 6).

The accessibility of the electrolyte within the pores depends on the molecular dimensions of the electrolyte and on the pore size; it varies with temperature. The electrolyte resistance can therefore be a good indicator of electrolyte-pore accessibility, and its behaviour reflects the temperature dependency of the supercapacitor. Denoting by T the supercapacitor temperature (K), the dependency of the electrolyte resistance R_el on temperature can be written

R_el(T) = r_3 + r_2 · exp(r_1 · T).

We have determined the factors (r_1, r_2, r_3) of this function for supercapacitor B at different voltages (cf. Table 2). Simulated curves of the electrolyte resistance R_el and values of R_el computed from impedance spectroscopy measurements show very good agreement (cf. Fig. 7).

Using the expression of C_dl defined above, we compared the constant-current discharge measured voltage with the model-simulated voltage. We observe a good concordance between the model and the constant-current discharge (Table 3). The discharge time is limited by our test bench. To strengthen the assessment of the validity of our modelling, further experiments were performed in the Cegely Laboratory: the model is assessed against full charge/discharge measurements of the supercapacitor at high current levels (100 A, 200 A, 400 A).

Assessment of the electrical model
The constant-current charge/discharge method is the most straightforward one. It requires the application of a current-step charge/discharge to the supercapacitor from an initial stabilized voltage; the voltage response as a function of time is measured by the four-wire method. In practice we applied full charge/discharge currents to supercapacitor B between 0 V and 2.5 V at three current levels (100 A, 200 A, 400 A), and a full discharge between 2.7 V and 0 V for the F (2600 F, 2.7 V) supercapacitor at two current levels (300 A, 500 A), at ambient temperature (25 °C).
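As a complement, the full small-signal model Z(s) = sL + R_s + Z_P(s) can be assembled together with the two parameter laws above. The Python sketch below is our own illustration; every numerical value (L, R_s, C_1, C_2, C_3, r_1, r_2, r_3) is a placeholder, not a fitted value from Tables 1-2.

import numpy as np

def C_dl(V, C1=1800.0, C2=600.0, C3=1.2):
    # double-layer capacitance vs voltage (Gouy-Chapman-Stern-type law)
    return 1.0 / (1.0 / C1 + 1.0 / (C2 * np.cosh(C3 * V)))

def R_elec(T, r1=-0.02, r2=5e-3, r3=2e-4):
    # electrolyte resistance vs temperature (Kelvin), decreasing with T here
    return r3 + r2 * np.exp(r1 * T)

def Z_total(s, V, T, L=20e-9, Rs=0.3e-3):
    Cdl, Rel = C_dl(V), R_elec(T)
    tau = Rel * Cdl
    z = np.sqrt(tau * s)
    Zp = np.sqrt(tau) / (Cdl * np.sqrt(s)) / np.tanh(z)   # pore impedance
    return s * L + Rs + Zp

for f in np.logspace(-2, 3, 6):                           # Nyquist-plot data
    Z = Z_total(2j * np.pi * f, V=2.0, T=298.0)
    print(f"{f:8.2f} Hz   Re={Z.real:.2e}   -Im={-Z.imag:.2e}")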
The comparison between measurements and simulations is represented in Fig. 8. It shows differences of less than 2% in the worst case. The difference observed between measurements and simulations may be explained as follows:
• the proposed model is based on the mean capacitance of a homogeneous porous electrode with uniform pore size; it gives a good approximation and good results, whereas the real electrode presents a non-uniform pore-size distribution;
• the proposed model takes into account the in-pore (frequency) diffusion but neglects the pore-size dispersion observed at low frequencies [18][19];
• the precision of our instruments is of the order of the percentage difference observed between measurements and simulations.

Conclusions
We proposed in this work a simplified method to avoid coupling effects in spectroscopy measurements, and we defined a measurement procedure that is practical and precise. The electric model gives good agreement between measurements and simulations. It is based on physical phenomena and is well suited to modelling supercapacitors in a good state of health. We have modelled in-pore dispersion with frequency; a complete model taking pore-size dispersion into account will be proposed in our next papers. The measured voltage and the model voltage agree well. The model introduces the dependency of the capacitance C_dl, of the resistance R_HF and of τ on voltage and temperature.

Fig. 2 - Complex plot of impedance spectroscopy measurement data for the B (2600 F, 2.5 V) supercapacitor at different voltages (T = 25 °C) and at different temperatures (U = 2 V).
Fig. 4, Fig. 5 - Equivalent circuit for the supercapacitor at constant voltage.
Fig. 6 - Behaviour of the capacitance C_dl with voltage at different temperatures.
Fig. 7 - [caption lost in extraction; the figure shows the electrolyte resistance R_el discussed above].
Fig. 8 - Comparison between measured voltage and simulated voltage under full charge/discharge with three current levels (100 A, 200 A, 400 A).
Table 2 - Approximation function of the electrolyte resistance R_el at different voltages.
Table 3 - Difference between measured voltage and model voltage in the time domain under constant-current discharge for supercapacitor B at different temperatures.
03286220
en
[ "math.math-ds", "math.math-ap", "math.math-ca", "sdv.bid.evo" ]
2024/03/04 16:41:24
2023
https://hal.science/hal-03286220v4/file/BDG-230602.pdf
Jean-Baptiste Burie, Arnaud Ducrot, Quentin Griette (email: [email protected])

Epidemic models in measure spaces: persistence, concentration and oscillations

AMS subject classifications (2020): 34D05, 92D25, 37L15, 37N25.
Keywords: ordinary differential equations, asymptotic behavior, population dynamics, Radon measure, evolution.

We investigate the long-time dynamics of a SIR epidemic model in the case of a population of pathogens infecting a homogeneous host population. The pathogen population is structured by a genotypic variable. When the initial mass of the maximal fitness set is positive, we give a precise description of the convergence of the orbit, including a formula for the asymptotic distribution. We also investigate precisely the case of a finite number of regular global maxima and show that the initial distribution may have an influence on the support of the eventual distribution. In particular, the natural process of competition is not always selecting a unique species, but several species may coexist as long as they maximize the fitness function. In many cases it is possible to compute the eventual distribution of the surviving competitors. In some configurations, species that maximize the fitness may still get extinct depending on the shape of the initial distribution and some other parameter of the model, and we provide a way to characterize when this unexpected extinction happens. Finally, we provide an example of a pathological situation in which the distribution never reaches a stationary distribution but oscillates forever around the set of fitness maxima.

1. Introduction
In this article we investigate the large-time behavior of the SIR epidemic model

S'(t) = Λ - θS(t) - S(t) ∫_X β(x) I(t, dx),
I_t(t, dx) = β(x) S(t) I(t, dx) - γ(x) I(t, dx),        (1.1a)
R'(t) = ∫_X γ(x) I(t, dx),

with the initial data

S(0) = S_0 ∈ (0, +∞), I(0, dx) = I_0(dx) ∈ M^+(X), R(0) = R_0 ∈ (0, +∞),    (1.1b)

where X is a Polish space and M^+(X) is the set of nonnegative Borel measures on X. This model describes the evolution of a population of hosts that can be, at any time t > 0, either free of infection (S(t), the susceptible population), infected by a pathogen of type x ∈ X (I(t, dx), the infected population of type x), or removed from the system (R(t), the recovered population), the latter class including two possible outcomes of the infection, complete immunity or death. The parameter Λ > 0 models a constant influx of susceptible hosts, θ > 0 the death rate of the hosts in the absence of infection, β(x) the transmission parameter of the pathogen of type x, and γ(x) the recovery rate of a pathogen of type x. Both β(x) and γ(x) are bounded continuous functions.

SIR models are ubiquitous in the literature on mathematical epidemiology and have been extensively studied. Without pretending to reconstruct the entire history of the model, let us cite the work of Kermack and McKendrick [29], which might well be its first occurrence in the literature and was immediately applied to a plague outbreak on the island of Bombay. In this article we consider that the phenotype of the pathogen (that is, the values of β(x) and γ(x)) depends on an underlying variable x ∈ X, where X is a set of attainable values that possesses a few mathematical properties.
This variable x may be a collection of quantitative phenotypic traits involved in the mechanism of transmission, reproduction or replication of the pathogen (expression of surface proteins at the cellular or viral level, impact on the host's behavior, ...) or the underlying genes that determine the values of β(x) and γ(x). We do not specify the particular mechanisms that link the underlying variable x ∈ X and the phenotype (β(x), γ(x)) but focus on the dynamics of the population under (1.1) conditionally on the knowledge of these mechanisms. By analogy, because it stands for a hidden process that determines an observable quantity, x will be called the genotypic variable, even if it could well stand for hidden quantitative phenotypic variables. We do not take mutations into account and consider that the pathogen is asexual; therefore, (1.1) can be considered a pure competition model where the pathogens compete for a single resource (the susceptible hosts).

When I_0(dx) is a finite collection of Dirac masses, I_0(dx) = Σ_{i=1}^{n} I_0^i δ_{x_i}(dx), our problem reduces to a system of ordinary differential equations (simulated in the sketch below),

dS/dt = Λ - θS(t) - S(t)(β_1 I_1(t) + β_2 I_2(t) + ... + β_n I_n(t)),
dI_1/dt = β_1 S(t) I_1(t) - γ_1 I_1(t),
...
dI_n/dt = β_n S(t) I_n(t) - γ_n I_n(t),            (1.2a)

with the initial data

S(0) = S_0 ∈ (0, +∞), I_1(0) = I_0^1 ∈ (0, +∞), ..., I_n(0) = I_0^n ∈ (0, +∞),    (1.2b)

where β_i = β(x_i), γ_i = γ(x_i) and I(t, dx) = Σ_{i=1}^{n} I_i(t) δ_{x_i}(dx). In this context, Hsu ["Limiting behavior for competing species"] and Hsu, Hubbell, and Waltman ["A mathematical theory for single-nutrient competition in continuous cultures of micro-organisms"] showed, for closely related systems of ordinary differential equations, that the solution eventually converges to an equilibrium which may not be unique but is always concentrated on the equations that maximize the fitness β_i/γ_i. One of the objectives of the current article is to establish an equivalent result in the context of measure-valued initial conditions that are not necessarily finite sums of Dirac masses (this will be given by Theorem 2.2). In a recent work ["Asymptotic behavior of an epidemic model with infinitely many variants"] we clarified the asymptotic behavior of an extension of (1.2) in the case of an infinite number of equations. This discrete setting is a particular case of the results that we present here, and we provide a number of illuminating examples that give a glimpse of the diversity of behaviors that can be expected for solutions to (1.1). In particular, we show that, when the assumptions of Proposition 2.7 are not satisfied, it is possible to construct exotic initial data for which the total mass of pathogens ∫_X I(t, dx) does not converge to a limit but oscillates between distinct values. We refer to ["Asymptotic behavior of an epidemic model with infinitely many variants"] for details.

The problem of several species competing for a single resource has received a lot of attention in the literature. In this context, the "competitive exclusion principle" states that "complete competitors cannot coexist", which in particular means that, given a number of species competing for the same resource in the same place, only one can survive in the long run.
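As a quick illustration (ours, not taken from the paper's numerical section), the finite system (1.2) can be integrated directly. With the made-up parameters below, the two strains sharing the maximal ratio β_i/γ_i persist while the third one dies out, and S(t) approaches 1/α*, in line with the results discussed in this article.

import numpy as np
from scipy.integrate import solve_ivp

Lam, theta = 1.0, 1.0
beta  = np.array([2.0, 3.0, 1.5])
gamma = np.array([1.0, 1.5, 1.0])   # strains 0 and 1 share the maximal
                                    # fitness beta/gamma = 2; strain 2 does not

def rhs(t, y):
    S, I = y[0], y[1:]
    dS = Lam - theta * S - S * np.dot(beta, I)
    return np.concatenate(([dS], (beta * S - gamma) * I))

y0 = np.concatenate(([1.0], [0.1, 0.1, 0.1]))
sol = solve_ivp(rhs, (0.0, 400.0), y0, rtol=1e-9, atol=1e-12)
print("S(inf) ~", sol.y[0, -1], "= 1/alpha* =", 1.0 / np.max(beta / gamma))
print("I(inf) ~", sol.y[1:, -1])   # mass remains only on the fitness maximizers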
The competitive exclusion idea was already present to some extent in the book of Darwin, and is sometimes referred to as Gause's law [Hardin, "The competitive exclusion principle"]. This problem of survival of competitors has attracted the attention of mathematicians since the '70s, and many studies have proved this property in many different contexts: let us mention the seminal works of Hsu ["Limiting behavior for competing species"] and Hsu, Hubbell, and Waltman ["A mathematical theory for single-nutrient competition in continuous cultures of micro-organisms"], followed by Armstrong and McGehee ["Competitive exclusion"], Butler and Wolkowicz ["A mathematical model of the chemostat with a general class of functions describing nutrient uptake"], Hsu, Smith, and Waltman ["Competitive exclusion and coexistence for competitive systems on ordered Banach spaces"], Li ["Global asymptotic behavior of the chemostat: general response functions and different removal rates"], Rapaport and Veruete ["A new proof of the competitive exclusion principle in the chemostat"], Wolkowicz and Lu ["Global dynamics of a mathematical model of competition in the chemostat: general response functions and differential death rates"], and Wolkowicz and Xia ["Global asymptotic behavior of a chemostat model with discrete delays"], to cite a few. The property has also been disproved in other contexts, for instance in fluctuating environments, see Cushing ["Two species competition in a periodic environment"] and Smith ["Competitive coexistence in an oscillating chemostat"]. Ackleh and Allen ["Competitive exclusion and coexistence for pathogens in an epidemic model with variable population size"] study competitive exclusion in an epidemic model with a finite number of strains, and describe how different species can coexist in some cases.

In our model, the fitness of a pathogen with genotype x is given by the formula

R_0(x) = Λβ(x)/(θγ(x)) = (Λ/θ) α(x), where α(x) := β(x)/γ(x);

the competitive exclusion principle implies that the only genotypes that eventually remain are the ones that maximize R_0(x). When R_0(x) has a unique maximum, it is clear that I(t, x) converges to a Dirac distribution concentrated at the maximum. But if R_0(x) attains its maximum at a more complex set (from two isolated maxima to an entire line segment), then the eventual weight of each fitness maximum in the population is less clear. We will give a partial description of this distribution here, which depends on the initial repartition of the infected as well as on the repartition of the phenotypic value γ(x) in the vicinity of the set of fitness maxima. We will show in particular that, while it is true that the species have to maximize the fitness function in order to survive, the natural process of competition is not always selecting a unique genotypic value: several may coexist as long as they maximize the fitness function. In many cases it is possible to compute the eventual repartition of the surviving competitors. In some cases, species that maximize the fitness may still get extinct if the initial population is not sufficient, and we provide a way to characterize when this unexpected extinction happens.
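A short sketch of how the set of fitness maxima can be located numerically for given trait functions follows; the particular β and γ below are invented for illustration and produce two symmetric maximizers.

import numpy as np

x = np.linspace(-2.0, 2.0, 4001)
beta  = 2.0 + np.exp(-(x - 1.0)**2) + np.exp(-(x + 1.0)**2)
gamma = 1.0 + 0.5 * x**2
alpha = beta / gamma                       # alpha(x) = beta(x)/gamma(x)
alpha_star = alpha.max()
maximizers = x[alpha >= alpha_star - 1e-10]   # grid points of {alpha = alpha*}
Lam, theta = 1.0, 1.0
print("R0 =", Lam / theta * alpha_star, "  maximizing set ~", maximizers)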
Considering a situation where R_0 has more than one maximum at exactly the same level may appear artificial but is not without biological interest. Indeed, the long-time behavior that we observe in these borderline cases can persist in transient time upon perturbing the function R_0. For example, in the epidemiological context of Day and Gandon ["Applying population-genetic models in theoretical evolutionary epidemiology"], it has been observed that a strain 1 with a higher value of γ and a slightly lower R_0 value than a strain 2 may nevertheless be dominant for some time (we reproduce such a behavior numerically in Figure 5). These borderline cases shed light on our understanding of the transient dynamics; see also Burie, Djidjou-Demasse, and Ducrot ["Asymptotic and transient behaviour for a nonlocal problem arising in population genetics"], where we make explicit the transient dynamics for a related evolutionary model depending on the local flatness of the fitness function.

Quantitative traits such as the virulence or the transmission rate of a pathogen, the life expectancy of an individual and, more generally, any observable feature such as height, weight, muscular mass, speed, size of legs, etc., are naturally represented using continuous variables. Such a description of a population seems highly relevant and has been used mostly in modelling studies involving some kind of evolution [Barles, "Concentration in Lotka-Volterra parabolic or integral equations: a general convergence result"; Barles, "Concentrations and constrained Hamilton-Jacobi equations arising in adaptive dynamics"; Bouin, "Invasion fronts with variable motility: phenotype selection, spatial sorting and wave acceleration"; Desvillettes, "On selection dynamics for continuous structured populations"; Ducrot, "Differential equations and population dynamics I. Introductory approaches"; Griette, "Singular measure traveling waves in an epidemiological model with continuous phenotypes"; Jabin, "On selection dynamics for competitive interactions"; Lorz, "Long-term behaviour of phenotypically structured models"; Lorz, "Dirac concentrations in a chemostat model of adaptive evolution"; "Mutation and recombination in a model of phenotype evolution"; Magal, "Mutation, selection, and recombination in a model of phenotype evolution"; Raoul, "Long time evolution of populations under selection and vanishing mutations"]. In this context, and this has been remarked before [Desvillettes, "On selection dynamics for continuous structured populations"; Lorenzi, "Asymptotic analysis of selection-mutation models in the presence of multiple fitness peaks"], concentration on the maximum level set of the fitness function R_0(x) means that the classical mathematical framework of functions is not sufficient to describe accurately the dynamics of the solutions to (1.1). In this article, we therefore extend our analysis to the case of Radon measures. Note that it is also natural to consider measures as initial data in epidemic models with an age-of-infection structure, to model cohorts of patients; see Demongeot et al. ["A Kermack-McKendrick model with age of infection starting from a single or multiple cohorts of infected patients"].
When X = R^N, system (1.1) arises naturally as the limit of a mutation-selection model of spore-producing pathogens proposed by [Lo Iacono, "The evolution of plant pathogens in response to host resistance: factors affecting the gain from deployment of qualitative and quantitative resistance"] and studied mathematically by Burie, Djidjou-Demasse, and Ducrot ["Asymptotic and transient behaviour for a nonlocal problem arising in population genetics"; "Slow convergence to equilibrium for an evolutionary epidemiology integro-differential system"], Burie et al. ["Concentration estimates in a multi-host epidemiological model structured by phenotypic traits"], Djidjou-Demasse, Ducrot, and Fabre ["Steady state concentration for a phenotypic structured problem modeling the evolutionary epidemiology of spore producing pathogens"], and Fabre et al. ["An epi-evolutionary model to predict spore-producing pathogens adaptation to quantitative resistance in heterogeneous environments"], when the dynamics of the spores is very fast. System (1.1) corresponds to the case of no mutations at all or, equivalently, to the case of a fully concentrated kernel (equal to a Dirac mass at 0).

Despite our efforts, we were unable to find a precise description of the behavior of the solutions of (1.1) in the literature when the initial condition I_0(dx) is a Radon measure. Here we remark that the vector field of (1.1) is locally Lipschitz continuous in the space R × M^+(X) × R (when M(X) is equipped with the total variation norm), so the existence of solutions is not the main difficulty. The solution I(t, dx) can be written as

I(t, dx) = exp( β(x) ∫_0^t S(s) ds - tγ(x) ) I_0(dx),

so at any finite time t > 0 the solution is a bounded continuous function multiplied by the initial data I_0(dx). But describing what happens as t → +∞ is not at all trivial.

In Theorem 2.2 we distinguish two typical situations. When there is a positive initial mass on the set of maximal fitness (∫_{α(x)=α*} I_0(dx) > 0, where α* = sup_{x ∈ supp I_0} α(x) and we recall that α(x) = β(x)/γ(x)), we can show that the distribution of pathogens converges to a stationary distribution that we can compute explicitly. This is done with the help of a Lyapunov function that is essentially the same as the one used by Hsu ["Limiting behavior for competing species"]. This is point i) of Theorem 2.2.

The case when there is no initial mass on the set of maximal fitness (∫_{α(x)=α*} I_0(dx) = 0) is less clear. We compactify the orbits by using the weak-* topology of measures and use this compactness to show the uniform persistence of the population, thanks to a general argument from Magal and Zhao ["Global attractors and steady states for uniformly persistent dynamical systems"]. Then we show that the population on the sets of high fitness always grows faster than the one on sets of low fitness, and this allows us to control uniformly the Kantorovitch-Rubinstein distance between the solution I(t, dx) and the space of measures that are concentrated on the set of maximal fitness, M^+({α(x) = α*}). Thus in this case also we can prove that the solution eventually concentrates on the set that maximizes the fitness. This is point ii) of Theorem 2.2. In general, it is not true that the distribution I(t, dx) eventually reaches a stationary distribution.
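The quasi-explicit formula above suggests a cheap numerical scheme in which only the scalar S(t) is stepped in time, while the measure is reconstructed from the cumulative integral of S. The Python sketch below is our own illustration, with an ad hoc discretization and invented parameters; it shows S(t) approaching 1/α* and the mass of I concentrating near the two fitness maxima.

import numpy as np

x = np.linspace(-2.0, 2.0, 401); dx = x[1] - x[0]
beta = 2.0 + np.exp(-4.0 * (x - 1.0)**2) + np.exp(-4.0 * (x + 1.0)**2)
gamma = np.ones_like(x)                  # alpha = beta here
I0 = np.exp(-x**2)                       # density of I_0 w.r.t. Lebesgue measure
Lam, theta = 1.0, 1.0
S, B, dt, T = 1.0, 0.0, 1e-2, 200.0      # B tracks int_0^t S(s) ds

for n in range(int(T / dt)):
    I = I0 * np.exp(beta * B - (n * dt) * gamma)   # explicit solution formula
    S += dt * (Lam - theta * S - S * np.sum(beta * I) * dx)  # Euler step for S
    B += dt * S

print("S(T) ~", S, " vs 1/alpha* =", 1.0 / np.max(beta / gamma))
print("masses near x=+1 and x=-1:",
      np.sum(I[np.abs(x - 1.0) < 0.2]) * dx,
      np.sum(I[np.abs(x + 1.0) < 0.2]) * dx)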
Indeed, we construct a counterexample in Section 2.3. By carefully choosing the initial data and the fitness function α(x) = β(x)/γ(x), we construct a solution of (1.1) with I(t, dxdy) that approaches the unit circle of R² but never stops turning around it. This fact is illustrated numerically in Section 3.3. We prove in Claim 2.14 that the ω-limit set of the integral of I(t, dxdy) over the upper half-plane, ∫_{R×R^+} I(t, dxdy), contains at least two values; therefore I(t, dxdy) does not converge to a stationary distribution. We also refer to ["Asymptotic behavior of an epidemic model with infinitely many variants"], where we construct an example in a discrete setting in which the total mass ∫_X I(t, dx) does not converge to a single value but oscillates between several values.

With an additional assumption we improve the description of the asymptotic behavior of I(t, dx) compared to Theorem 2.2 in case ii). In Assumption 2.5 we impose a condition on the disintegration of I_0(dx) with respect to α(x), requiring that the distribution of I_0(dx) be uniformly positive around the maximum of γ(x), γ* := sup_{α(x)=α*} γ(x). Under this assumption, in Proposition 2.7, we refine the localization of the asymptotic concentration set of I(t, dx) and we prove that the total mass of I(t, dx), ∫_X I(t, dx), converges to a limit value.

We also focus on the special case when the fitness function α(x) attains a finite number of regular maxima in the interior of supp I_0, and when X = R^N. In Theorem 2.10 we show that the initial distribution around the maxima of the fitness function plays a crucial role in the asymptotic behavior of the solution. We show that the fitness maxima that keep a nonzero asymptotic population are the ones that maximize an ad hoc score that involves the value of γ(x) but also the dimension of the Euclidean space and the polynomial decay of the initial data around the fitness maximum.

The structure of the paper is as follows. In Section 2 we present our main results. More precisely, we state our results on persistence and concentration in Section 2.1, we give precise statements of our results concerning fitness functions with a finite number of regular maxima in Section 2.2, and in Section 2.3 we provide a counterexample to the convergence of the distribution I(t, dx) when the initial mass of fitness maxima is negligible. In Section 3 we illustrate our results with numerical simulations; the corresponding figures are added at the end of the article. In Section 4 we prove our results concerning general measure initial data (corresponding to the statements in Section 2.1). In Section 5 we prove our statements on systems with a fitness function α(x) having a finite number of regular maxima (corresponding to the statements in Section 2.2).

Data availability. Data sharing is not applicable to this article, as no datasets were generated or analysed during the current study.

2. Main results
Without loss of generality, system (1.1) can be rewritten as

S'(t) = Λ - θS(t) - S(t) ∫_X α(x) γ(x) I(t, dx),
I_t(t, dx) = (α(x) S(t) - 1) γ(x) I(t, dx),  x ∈ X,        (2.1a)

with the initial data

S(0) = S_0 ∈ (0, +∞), I(0, dx) ∈ M^+(X),        (2.1b)

by setting α(x) := β(x)/γ(x) and removing the third equation, which has no impact on the dynamics of the system. In the rest of the article we study system (2.1) instead of (1.1). Before going to our results, we introduce some notation that will be used throughout this work. We work on a Polish space X (i.e.
a metrizable space which is separable and complete for at least one metric) equipped with a complete distance d. We denote by M(X) the set of finite signed Radon measures on X. Recall that M(X) is a Banach space when endowed with the total variation norm, given by

‖µ‖_TV = |µ|(X) = ∫_X |µ|(dx),  ∀µ ∈ M(X).

This fact is proved, for instance, in Bogachev [5, Vol. I, Theorem 4.6.1, p. 273]. When X is compact, it is possible to identify M(X) with the dual of the space C(X) of continuous functions over X; this is the Riesz representation theorem [5, Vol. II, Theorem 7.10.4, p. 111]. When X is an arbitrary Polish space, while it is true that every measure µ ∈ M(X) yields a continuous linear functional on BC(X) (the space of bounded continuous functions), the converse is no longer true [5, Vol. II, Example 7.10.3, p. 111]. We denote by M^+(X) the set of finite nonnegative measures on X. Observe that M^+(X) ⊂ M(X) and that M^+(X) is a closed subset of M(X) for the topology of the norm ‖·‖_TV.

An alternative topology on M(X) can be defined by the Kantorovitch-Rubinstein norm [5, Vol. II, Chap. 8.3, p. 191],

‖µ‖_0 := sup { ∫ f dµ : f ∈ Lip_1(X), sup_{x∈X} |f(x)| ≤ 1 },

wherein we have set

Lip_1(X) := { f ∈ BC(X) : |f(x) - f(y)| ≤ d(x, y), ∀(x, y) ∈ X² }.

Let us recall [5, Theorem 8.3.2] that the metric generated by ‖·‖_0 on M^+(X) is equivalent to the weak-* topology generated by tests against bounded continuous test functions. Note, however, that this equivalence is true only on M^+(X) and cannot be extended to M(X), since the latter space is not (in general) complete for the metric generated by ‖·‖_0. We denote by d_0 this metric on M^+(X), that is,

d_0(µ, ν) := ‖µ - ν‖_0 for all µ, ν ∈ M^+(X).        (2.2)

Regarding the parameters arising in (2.1), our main assumption reads as follows.

Assumption 2.1. The constants Λ > 0 and θ > 0 are given. The functions α(x) and γ(x) are bounded and continuous from X into R, and there exist positive constants α_∞ and γ_0 < γ_∞ such that

α(x) ≤ α_∞,  0 < γ_0 ≤ γ(x) ≤ γ_∞ for all x ∈ X.

We let S_0 > 0 be given and let I_0(dx) ≢ 0 be a finite nonnegative Radon measure. We define the two quantities α* ≥ 0 and R_0(I_0) by

α* := sup_{x ∈ supp I_0} α(x) and R_0(I_0) := (Λ/θ) α*.        (2.3)

We finally assume that the set

L_ε(I_0) := {x ∈ X : α(x) ≥ α* - ε} ∩ supp I_0 = {x ∈ supp I_0 : α(x) ≥ α* - ε}        (2.4)

is compact when ε > 0 is sufficiently small.

Let us observe that if S_0 ≥ 0, then (2.1) equipped with the initial data S(0) = S_0 and I(0, dx) = I_0(dx) has a unique solution S(t) ≥ 0 and I(t, dx) ∈ M^+(X) for all t ≥ 0. This is a direct application of the Cauchy-Lipschitz theorem in the Banach space R × M(X). It is not difficult to show that S(t) and I(t, dx) are a priori bounded (this will be proved in Lemma 4.1), hence the solution is global. In addition, I is given by a quasi-explicit formula:

I(t, dx) = exp( γ(x) ( α(x) ∫_0^t S(s) ds - t ) ) I_0(dx).

The above formula ensures that supp I(t, ·) = supp I_0 for all t ≥ 0.

We now split our main results into several parts. We first derive very general results about the large-time behavior of the solution (S, I) of (2.1) when I_0 is an arbitrary Radon measure. We show that I(t, dx) concentrates on the points that maximize both α and γ. We then apply this result to consider the case where I_0(dx) is a finite or countable sum of Dirac masses.
We continue our investigations with an absolutely continuous initial measure with respect to Lebesgue measure and a finite set {α(x) = α * }. In that setting we are able to fully characterize the points where the measure I(t, dx) concentrates as t → ∞. Persistence and concentration neral As mentioned above this subsection is concerned with the large time behavior of the solution (S, I) of (2.1) where the initial measure I 0 (dx) is an arbitrary Radon measure. Using the above notations our first result reads as follows. sures Theorem 2.2 (Asymptotic behavior of measure-valued initial data). Let Assumption 2.1 be satisfied and suppose that R 0 (I 0 ) > 1. Let (S(t), I(t, dx)) be the solution of (2.1) equipped with the initial data S(0) = S 0 and I(0, dx) = I 0 (dx). We distinguish two cases depending on the measure of the set {α(x) = α * } with respect to I 0 : where τ ∈ R denotes the unique solution of the equation i) If α(x)=α * I 0 (dx) > 0, X γ(x)1 α(x)=α * (x)e τ γ(x) I 0 (dx) = θ α * R 0 (I 0 ) -1 . The convergence of I(t, dx) to I ∞ (dx) holds in the total variation norm • T V . ii) If α(x)=α * I 0 (dx) = 0, then one has S(t) → 1 α * and I(t, dx) is uniformly persistent, namely lim inf t→+∞ X I(t, dx) > 0. Moreover I(t, dx) is asymptotically concentrated as t → ∞ on the set {α(x) = α * }, in the sense that d 0 I(t, dx), M + ({α(x) = α * }) ----→ t→+∞ 0, where d 0 is the Kantorovitch-Rubinstein distance. In the statement of the Theorem 2.2 and in the rest of the paper, we stress for clarity that M + {α(x) = α * } is the set of finite positive measures on the closed set {x ∈ X : α(x) = α * }. This set is naturally embedded as a subset of the space M + X , which is closed for the topology induced by the total variation norm • T V and also the one induced by the Kantorovitch-Rubinstein distance d 0 . Theorem 2.2 can be interpreted as follows. In case i), when the set of pathogens with maximal fitness is "already populated", the behavior of the dynamical system is no different from the case of a finite system: it converges to a equilibrium which is concentrated on the set of maximal fitness. The case ii), when the maximal fitness is not attained for the population but can only be reached asymptotically, is more intricate and we can only prove that the population of pathogens is uniformly persistent and asymptotically concentrated on the set of maximal fitness. We cannot prove the convergence to an equilibrium distribution in general; in fact, it is false, see the example in section 2.3 below and also the counterexamples in [START_REF] Burie | Asymptotic behavior of an epidemic model with infinitely many variants[END_REF]. In fact, as shown in the latter reference, it is not even true in general that the total mass of pathogen X I(t, dx) converges to a limit. We continue our general result by showing that under additional properties for the initial measure I 0 , the function I(t, dx) concentrates in the large times on the set of the points in {α(x) = α * } that also maximize the function γ. The additional assumption for the initial measure I 0 (dx) are expressed in term of some properties of its disintegration measure with respect to the function α on L ε (I 0 ) with ε sufficiently small. We refer to the book of Bourbaki [7, VI, §3, Theorem 1 p. 418] for a proof of the disintegration Theorem which is recalled in the Appendix, Theorem C.4. 
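Before stating the disintegration precisely, the following elementary example may help (we give it only as an illustration; it is not needed in the sequel). Take X = [0, 1]² equipped with the Lebesgue measure I₀(dx) = dx₁dx₂, and α(x₁, x₂) = x₁. Then the pushforward of I₀ by α is A(dy) = dy on [0, 1], and for A-almost every y the conditional measure I₀(y, dx) is the uniform probability measure on the fiber {x₁ = y}, that is,

∫_X f(x) I₀(dx) = ∫₀¹ ( ∫₀¹ f(y, x₂) dx₂ ) dy for all f ∈ BC(X).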
Let A(dy) be the image of I 0 (dx) under the continuous mapping α : X → R, then there exists a family of nonnegative measures I 0 (y, dx) (the disintegration of I 0 with respect to α) such that for almost every y ∈ α(supp I 0 ) with respect to A we have: supp I 0 (y, dx) ⊂ {x ∈ X : α(x) = y}, α(x)=y I 0 (y, dx) = 1 and I 0 (dx) = I 0 (y, dx)A(dy) (2.5) eq:dis wherein the last equality means that X f (x)I 0 (dx) = y∈R α(x)=y f (x)I 0 (y, dx)A(dy) for all f ∈ BC(X). Note that, by definition, the measure A is supported on the set α(supp I 0 ). The measure A is called the pushforward measure of I 0 under the mapping α. Note that the disintegration is unique up to a redefinition on an A-negligible set of fibers, see the disintegration theorem recalled in the Appendix, Theorem C.4. We shall also make use, for all y A-almost everywhere, of the disintegration measure of I 0 (y, dx) with respect to the function γ, as follows I 0 (y, dx) = R I α,γ 0 (y, z, dx)I α 0 (y, dz), where I α,γ 0 (y, z, dx) is concentrated on the set {x : α(x) = y and γ(x) = z}. This allows to the following reformulation of I 0 (dx): I 0 (dx) = y∈R z∈R I α,γ 0 (y, z, dx)I α 0 (y, dz)A(dy). Remark 2.3 (Explicit disintegration in Euclidean spaces). Suppose that X = R N and I 0 ∈ L 1 (R N ). Since we restrict to measures which are absolutely continuous with respect to the Lebesgue measure here, with a small abuse of notation we will omit the element dx when the context is clear. Assume that α is Lipschitz continuous on R N and that I 0 (x) |∇α(x)| ∈ L 1 (R N ). (2.6) eq:reg The coarea formula implies that, for all g ∈ L 1 (R N ), we have R N g(x)|∇α(x)|dx = R α(x)=y g(x)H N -1 (dx)dy, where H N -1 (dx) is the (N -1)-dimensional Hausdorff measure (see Federer [20, §3.2]). Therefore if g(x) = f (x) I0(x) |∇α(x)| we get R N f (x)I 0 (x)dx = R α(x)=y f (x) I 0 (x) |∇α(x)| H N -1 (dx)dy, (2.7) eq:coa and if moreover f (x) = ϕ(α(x)) we get R ϕ(y)A(dy) = R N ϕ(α(x))I 0 (x)dx = R ϕ(y) α(x)=y I 0 (x) |∇α(x)| H N -1 (dx)dy, where we recall that A(dy) is the image measure of I 0 (dx) through α. Therefore we have an explicit expression for A(dy): A(dy) = α(x)=y I 0 (x) |∇α(x)| H N -1 (dx)dy (2.8) eq:exp and (recalling (2.7)) we deduce the following explicit disintegration of I 0 : I 0 (y, dx) = 1 {x| α(x)=y} I0(x) |∇α(x)| H N -1 (dx) α(x)=y I0(z) |∇α(z)| H N -1 (dz) . (2.9) eq:exp Equations (2.8) and (2.9) give an explicit formula for the disintegration introduced in (2.5). Remark 2.4. In particular, if α is a C 2 function with no critical point in supp I 0 except for a finite number of regular maxima (in the sense that the bilinear form D 2 α(x) is non-degenerate at each maximum; this is a typical situation), then the assumption (2.6) above is automatically satisfied if N ≥ 3 and I 0 ∈ L ∞ (R N ). If N = 2 then a sufficient condition to satisfy (2.6) with I 0 ∈ L ∞ (R N ) should involve I 0 vanishing sufficiently fast in the neighborhood of each maximum of α. Now equipped with this disintegration of I 0 with respect to α we are now able to state our regularity assumption to derive more refine concentration information in the case where α(x)=α * I 0 (dx) = 0. bound Assumption 2.5 (Regularity with respect to α, γ). Let us define γ * > 0 by γ * := sup α(x)=α * γ(x). (2.10) def-ga We assume that, for each value γ < γ * in a neighborhoof of γ * , there exist constants δ > 0 and m > 0 such that m ≤ γ(x)∈([γ,γ * ]) and α(x)=y I 0 (y, dx) for A-almost every y ∈ (α * -δ, α * ]. Remark 2.6. 
The assumption 2.5 means that the initial measure I 0 (dx) is uniformly positive in a small neighborhood of {α(x) = α * } ∩ {γ(x) = γ * }. For instance, if the assumptions (2.6) is satisfied and if there exists an open set U containing {α(x) = α * } ∩ {γ(x) = γ * } on which I 0 (x) is almost everywhere uniformly positive, then Assumption 2.5 is automatically satisfied. The next proposition ensures that, when the initial measure I 0 satisfies Assumption 2.5, then the function I(t, dx) concentrates on {x ∈ X : α(x) = α * and γ(x) = γ * }. vival Proposition 2.7. Let Assumption 2.1 hold, and suppose that R 0 (I 0 ) > 1. Assume moreover that α(x)=α * I 0 (dx) = 0 and that Assumption 2.5 holds, then: i) The measure I(t, dx) concentrates on the set {α(x) = α * } ∩ {γ(x) = γ * }: d 0 (I(t, dx), M + ({α(x) = α * } ∩ {γ(x) = γ * })) ----→ t→+∞ 0. ii) The total mass of I(t, dx) converges to a limit value: X I(t, dx) ----→ t→+∞ θ α * γ * R 0 (I 0 ) -1 . iii) If there exists a Borel set U ⊂ X such that U ∩ {x : α(x) = α * and γ(x) = γ * } = ∅ and lim inf ε→0 A(dy) ess inf α * -ε≤y≤α * I α 0 (y,dz) ess inf γ * -ε≤z≤γ * U I α,γ 0 (y, z, dx) > 0, then the following persistence occurs lim inf t→∞ U I(t, dx) > 0. Here the notation µ(dy) ess inf K denotes the essential infimum taken on the set K and relatively to the measure µ (i.e. up to redefinition on µ-negligible sets). The condition on the set U in iii) means that the distribution of the population does not vanish in a neighborhood of {x : α(x) = α * and γ(x) = γ * } in U ; the existence of a set U cannot be guaranteed in general, in particular, no such U exists in the case of the counterexample given in subsection 2.3. Refined concentration estimates: regular fitness maxima in Euclidean spaces -data We now deepen our analysis of (2.1) set on X = R N when the fitness function has a finite number of regular maxima. We will not discuss here the case of a unique global maximum, in which the precise asymptotic behavior can be completely determined: see the Appendix for a statement of what we obtain. Rather, we describe the large time behavior of the solutions when the function α has a finite number of maxima on the support of I 0 (dx). We consider an initial data (S 0 , I 0 ) ∈ [0, ∞) × M + (R N ) with I 0 absolutely continuous with respect to the Lebesgue measure dx in R N (in other words and with a small abuse of notation, I 0 ∈ L 1 (R N )) in a neighborhood of the maxima of the fitness function. Recalling the definition of α * in (2.3), throughout this section, we shall make use of the following set of assumptions. By a small abuse of notation, we will identify in this section the function I 0 ∈ L 1 (R N ) and the associated measure I 0 (x)dx ∈ M + (R N ) when the context is clear. lculs Assumption 2.8. We assume that: (i) the set {α(x) = α * } is a finite set, namely there exist x 1 , ..., x p in the interior of supp (I 0 ) such that x i = x j for all i = j and {α(x) = α * } = {x 1 , • • • , x p } and R 0 := Λα * θ > 1. (ii) There exist ε 0 > 0, M > 1 and κ 1 ≥ 0,..,κ p ≥ 0 such that for all i = 1, .., p and for almost all x ∈ B(x i , ε 0 ) ⊂ supp (I 0 ) one has M -1 |x -x i | κi ≤ I 0 (x) ≤ M |x -x i | κi . Here and along this note we use | • | to denote the Euclidean norm of R N . (iii) The functions α and γ are of class C 2 and there exists > 0 such that for each i = 1, .., p one has D 2 α(x i )ξ 2 ≤ -|ξ| 2 , ∀ξ ∈ R N . Remark 2.9. Let us observe that since x i belongs to the interior of supp (I 0 ) then Dα(x i ) = 0. 
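As an illustration of (ii) (this particular density is ours; compare with the simulations of section 3), take N = 1 and I₀(x) = min(1, 4(x − xᵢ)²) near an interior maximum xᵢ. For |x − xᵢ| ≤ 1/2 one has I₀(x) = 4|x − xᵢ|², so that (ii) holds with κᵢ = 2, ε₀ = 1/2 and M = 4. More generally, a nonnegative C² density with a nondegenerate zero at xᵢ satisfies (ii) with κᵢ = 2, while a density which is bounded away from zero near xᵢ corresponds to κᵢ = 0.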
In order to state our next result, we introduce the following notation: we write f (t) g(t) as t → ∞ if there exists C > 1 and T > 0 such that C -1 |g(t)| ≤ |f (t)| ≤ C|g(t)|, ∀t ≥ T. According to Theorem 2.2 (ii), one has α * S(t) → 1 as t → ∞, and as a special case we conclude that S(t) = 1 t t 0 S(l)dl → 1 α * as t → ∞. As a consequence the function η(t) := α * S(t) -1 satisfies η(t) = o(1) as t → ∞. To describe the asymptotic behavior of the solution (S(t), I(t, dx)) with initial data S 0 and I 0 as above, we shall derive a precise behavior of η for t 1. This refined analysis will allow us to characterize the points of concentration of I(t, dx). Our result reads as follows. O-eta Theorem 2.10. Let Assumption 2.8 be satisfied. Then the function η = η(t) satisfies the following asymptotic expansion Moreover there exists ε 1 ∈ (0, ε 0 ) such that for all 0 < ε < ε 1 and all i = 1, .., p one has |x-xi|≤ε η(t) = ln t t + O 1 t , as t → ∞. ( 2 I(t, dx) t γ(xi) -N +κ i 2 as t → ∞. (2.13) mass-e As a special case, for all ε > 0 small enough and all i = 1, .., p one has |x-xi|≤ε I(t, dx) 1 if i ∈ J → 0 if i / ∈ J as t → ∞, where J is the set defined as J := i = 1, .., p : N + κ i 2γ(x i ) = . (2.14) def-J The above theorem states that the function I(t, dx) concentrates on the set of points {x i , i ∈ J} (see Corollary 2.12 below). Here Assumption 2.5 on the uniform positiveness of the measure I 0 (dx) around the points x i is not satisfied in general, and therefore the measure I concentrates on {α(x) = α * } as predicted by Theorem 2.2, but not necessarily on {α(x) = α * } ∩ {γ(x) = γ * } as would have been given by Proposition 2.7. In Figure 6 we provide a precise example of this non-standard behavior. In addition, the precise expansion of η = η(t) provided in the above theorem allows us obtain the self-similar behavior of the solution I(t, dx) around the maxima of the fitness function. This asymptotic directly follows from (4.1). milar Corollary 2.11. For each i = 1, ..., p and f ∈ C c (R N ), the set of the continuous and compactly supported functions, one has as t → ∞: t N 2 R N f (x -x i ) √ t I(t, dx) t γ(xi) -N +κ i 2 R N f (x)|x| κi exp γ(x i ) 2α * D 2 α(x i )x 2 dx. (2.15) eqCORO Our next corollary relies on some properties of the ω-limit set of the solution I(t, dx). Using the estimates of the mass around x i given in (2.13), it readily follows that any limit measures of I(t, dx) belongs to a linear combination of δ xi with i ∈ J and strictly positive coefficients of each of these Dirac masses. This reads as follows. -conv Corollary 2.12. Under the same assumptions as in Theorem 2.10, the ω-limit set O(I 0 ) as defined in Lemma 4.6 satisfies that there exist 0 < A < B such that O(I 0 ) ⊂ i∈J c i δ xi : (c i ) i∈J ∈ [A, B] J . Oscillations gence Here we construct a counterexample which shows that, in general, it is hopeless to expect convergence of the genotypic distribution to a stationary measure on X = R 2 . Such a counterexample is new in the case of "continuous" spaces; we provided some counterexamples in the case of discrete spaces in [START_REF] Burie | Asymptotic behavior of an epidemic model with infinitely many variants[END_REF]. Fix L > 0 and let Ω be the parametric curve described as Ω := ω(τ ) := 1 -e -τ cos π L τ , sin π L τ : τ ∈ R + , and define I 0 (dxdy) ∈ M(R 2 ) as the pushforward of the measure e -τ 1 τ ≥0 dτ by ω. That is to say, R 2 ϕ(x, y)I 0 (dxdy) = R + e -τ ϕ 1 -e -τ cos π L τ , 1 -e -τ sin π L τ dτ, for all ϕ ∈ BC(R 2 ). 
Then we select α(x, y) := 1 -1 -x 2 + y 2 , γ(x, y) ≡ 1. In this setting, α attains its global maximum on the unit circle in R 2 , while the support of I 0 (the curve Ω) approaches the unit circle from the inside with an exponentially decreasing mass as the radius converges to 1. We are thus in the situation described in case ii) of Theorem 2.2; in particular, it is true that S(t) = 1 t t 0 S(s)ds → 1 as t → +∞. Yet, explicit computations show that I(t, dxdy) = e (1-e -τ ) t 0 S(s)ds-t I 0 (τ )ω(dτ ) = e (1-e -τ )tS(t)-t-τ ω(dτ ) =: I(t, τ )ω(dτ ), where we denote ω(dτ ) the pushforward of the Lebesgue measure on R + onto Ω and, with a small abuse of notation, I 0 (τ ) = e -τ 1 τ ≥ 0. More precisely, R 2 ϕ(x, y)I(t, dxdy) = R + I(t, τ )ϕ 1 -e -τ cos π L τ , 1 -e -τ sin π L τ dτ, for all ϕ ∈ BC(R 2 ). Now by using explicit computations, we establish the following claim. :asym Claim 2.13. The function I(t, τ ) is, up to a multiplicative error of order zero, a solitary wave whose position behaves like τ 0 (t) := ln(t): I(t, τ ) = e u(τ -τ0(t))+o(1) , (2.16) where u(τ ) = ln(I ∞ ) -e -τ -τ and I ∞ := θ α * (R 0 -1) is given by Proposition 2.7. We now prove the claim. We note that Assumption 2.5 is clearly satisfied since γ ≡ 1 therefore, by Proposition 2.7, we have +∞ 0 I(t, τ )dτ = I ∞ + o(1) with I ∞ := θ Λ θ α * -1 > 0. S(t) = 1 + ln(t) t + ln(I ∞ ) t + o 1 t . We deduce that I t, τ + τ 0 (t) = exp 1 -e -(τ +τ0(t)) t + ln(t) + ln(I ∞ ) + o(1) -t -(τ + τ 0 (t)) = exp 1 - e -τ t ln(t) + ln(I ∞ ) + o(1) -te -(τ +τ0(t)) -(τ + τ 0 (t)) = exp ln(t) + ln(I ∞ ) + o(1) + e -τ -τ -ln (t) = exp ln(I ∞ ) -e -τ -τ + o(1) , which proves the claim. Note that the above computations and in particular the result of Claim 2.13 are completely independent of the parameter L. We see that I(t, dxdy) is asymptotically equivalent to a rotating mass which becomes concentrated on the unit circle and does not converge to a static distribution. We illustrate this fact in numerical simulations in section 3.3. We can also prove that the distribution does not reach stationarity when the rotation speed of the spiral (which behaves like L -1 ) is very slow. Indeed, the integral in the upper half-space never reaches a stationary value, as we show in the following Claim. m:osc Claim 2.14. There exists a function ε(L) ≥ 0 such that ε(L) → 0 as L → ∞, and two sequences t 1 n := e (2n+1/2)L → +∞ and t 2 n := e (2n+3/2)L → +∞ such that lim inf (2.18) eq:osc Indeed, we have n→+∞ R×R + I(t 1 n , dxdy) ≥ I ∞ -ε(L) (2.17 R×R + I(t, dxdy) = +∞ k=0 (2k+1)L 2kL I(t, τ )dτ = +∞ k=0 (2k+1)L 2kL e u(τ -τ0(t))+o(1) dτ = 1 + o(1) +∞ k=0 (2k+1)L-τ0(t) 2kL-τ0(t) e u(τ ) dτ = 1 + o(1) +∞ k=0 (2k+1)L-τ0(t) 2kL-τ0(t) I ∞ e -e -τ -τ dτ = 1 + o(1) +∞ k=0 I ∞ exp -e -(2k+1)L+τ0(t) -exp -e -2kL+τ0(t) , (2.19 ) eq:221 so when t = t 1 n = e (2n+1/2)L we have R×R + I(t 1 n , dxdy) = 1 + o(1) +∞ k=0 I ∞ exp -e -(2k+1)L+(2n+1/2)L -exp -e -2kL+(2n+1/2)L = I ∞ 1 + o(1) +∞ k=-n e -e -2kL-L/2 -e -e -2kL+L/2 We note that e -e -(2k±1/2)L ≤ e -(2k-1/2)L as L → ∞ when k ≤ -1, and that for k ≥ 1 we have by a Taylor expansion: |e -e -(2k-1/2)L -e -e -(2k+1/2)L | ≤ Ce -(2k-1/2)L , for some constant C > 0 independent of L and k. Therefore by the dominated convergence theorem we have lim L→+∞ -1 k=-∞ + +∞ k=1 e -e -2kL-L/2 -e -e -2kL+L/2 = 0. Now for k = 0 we have e -e -2kL-L/2 -e -e -2kL+L/2 = e -e -L/2 -e -e L/2 → 1 as L → +∞. We have shown lim inf L→+∞ lim inf n→+∞ R×R + I(t 1 n , dxdy) ≥ I ∞ , which proves (2.17). 
Now (2.19) with t = t 2 n = e (2k+3/2)L leads us to R×R + I(t 2 n , dxdy) = 1 + o(1) +∞ k=0 I ∞ exp -e -(2k+1)L+(2n+3/2)L -exp -e -2kL+(2n+3/2)L = I ∞ 1 + o(1) +∞ k=-n e -e -2kL+L/2 -e -e -2kL+3L/2 . Note that we have lim L→+∞ -1 k=-∞ + +∞ k=1 e -e -2kL+L/2 -e -e -2kL+3L/2 = 0. by the dominated convergence theorem and, for k = 0, e -e -2kL+L/2 -e -e -2kL+3L/2 = e -e L/2 -e -e 3L/2 → 0 as L → ∞. This shows lim sup L→+∞ lim sup n→+∞ R×R + I(t 2 n , dxdy) ≤ 0, which finishes the proof of Claim 2.14. 3 Comments and numerical illustrations erics Numerical illustrations: the main Theorem In this section we provide numerical illustrations to some of our results. Note however that the figures are included at the end of the manuscript. We start with an illustration of the long-time behavior of the solution to (2.1) when the initial mass of the fitness maximum is positive (I 0 ({α(x) = α * }) > 0) in Figure 1. To help with the visual representation, in our example, I 0 is chosen absolutely continuous with respect to the Lebesgue measure, i.e. carried by a L 1 (R 2 ) function. We provide a plot of the fitness function which is proportional to α (top left sub-figure) and the initial data (top right). The fitness function attains its maximum on the union of a rectangle [0.2, 0.5] × [-0.5, 0.5] with a line segment {-0.3} × [-0.5, 0.5] and the support of the initial data intersects this set with non-negligible intersection. We plot the time evolution of the converging function S(t) (bottom left) and a snapshot of the distribution I(t, dx) at t = 50 (bottom right). We observe that the mass that was initially located outside of the fitness maximum has vanished. What remains is a distribution of mass in the initial rectangle of maximal fitness (according to Theorem 2.2, the precise distribution can be computed). The distribution located at {x 1 = -0.3} is still positive, but is negligible with respect to the Lebesgue measure and does not contribute to the mass. In Figure 2 we present four snapshots of the distribution I(t, dx) to monitor the time evolution of this distribution with the same initial distribution. Next, we consider the case when the initial mass of the fitness maximum is equal to zero (I 0 ({α(x) = α * }) = 0) in Figure 3. Again, we provide a plot of the fitness function (top left) and of the initial data (top right). The fitness function attains its maximum on the line segment {0.35} × [-0.5, 0.5]. The time evolution of S(t) (bottom left) and a snapshot of the distribution I(t, dx) at t = 100 (bottom right) are also displayed. The distribution I(t, dx) at t = 100 is already concentrated on the maximum of the fitness. Figure 4 consists of four snapshots of the example presented in Figure 3. Transient dynamics on local maxima: a numerical example In many biologically relevant situations it may be more usual to observe situations involving a fitness function with one global maximum and several (possibly many) local maxima, whose values are not exactly equal to the global maximum but very close. In such a situation, while the long-term distribution will be concentrated on the global maximum, one may observe a transient behavior in which the orbits stay close to the equilibrium of the several global maxima situation (corresponding to Theorem 2.10), before it concentrates on the eventual distribution. We leave the analytical treatment of such a situation open for future studies, however, we present a numerical experiment in Figure 5 which shows such a transient behavior. 
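Such experiments can be reproduced with a standard discretization of (2.1); a minimal Python sketch of one possible scheme follows (the grid, time step and variable names are ours and are given for illustration only; the parameter values are those of (3.1)–(3.3) below).

import numpy as np

# trait grid on [-1, 1]; discretization choices are ours, for illustration
x = np.linspace(-1.0, 1.0, 2001)
dx = x[1] - x[0]
Lam, theta = 2.0, 1.0

def P(x, a, b):
    # downward parabola of height one and support [a, b]
    return np.maximum(1.0 - (a + b - 2.0*x)**2 / (a - b)**2, 0.0)

alpha = 0.95*P(x, -0.7, -0.3) + P(x, 0.3, 0.7)                 # fitness, cf. (3.1)
gamma = 1.0/(1.0 + P(x, -0.7, -0.3) + 3.0*P(x, 0.3, 0.7))      # cf. (3.3)
I0 = np.minimum(1.0, 4.0*(x + 0.5)**2)*np.minimum(1.0, 4.0*(x - 0.5)**2)  # cf. (3.2)

S, cumS, t, dt, T = 1.0, 0.0, 0.0, 1.0e-3, 50.0
I = I0.copy()
while t < T:
    # explicit Euler step for S' = Lambda - theta*S - S*int(alpha*gamma*I)dx
    S += dt*(Lam - theta*S - S*np.sum(alpha*gamma*I)*dx)
    cumS += dt*S          # running value of the integral of S over [0, t]
    t += dt
    # quasi-explicit formula: I(t, x) = I0(x)*exp(gamma(x)*(alpha(x)*cumS - t))
    I = I0*np.exp(gamma*(alpha*cumS - t))

In such a scheme the S-equation is advanced by an explicit Euler step, while I is updated through the quasi-explicit formula of section 2, so that no stiffness arises from the exponential growth and decay rates of I.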
In this simulation, we took a fitness function presenting one global maximum at x₂ = +0.5 and a local maximum at x₁ = −0.5, whose value is close to the global maximum. The precise definition of α(x) is

α(x) = 0.95 × P_{[x₁−δ,x₁+δ]}(x) + P_{[x₂−δ,x₂+δ]}(x) with δ = 0.2.    (3.1)

The function I₀(x) is chosen as

I₀(x) = min(1, 4(x − x₁)²) min(1, 4(x − x₂)²) 1_{[−1,1]}(x),    (3.2)

so that κ₁ = 2 and κ₂ = 2. Finally,

γ(x) = 1/(1 + P_{[x₁−δ,x₁+δ]}(x) + 3P_{[x₂−δ,x₂+δ]}(x)),    (3.3)

so that γ(x₁) = 1/2 and γ(x₂) = 1/4. Summarizing, we have

(N + κ₁)/(2γ(x₁)) = 3 < 6 = (N + κ₂)/(2γ(x₂)).

If α(x) had two global maxima at the same level, Theorem 2.10 would predict that the mass I(t, dx) vanishes near x₂ and concentrates on x₁. Since the value of α(x₂) is slightly higher than the value of α(x₁), however, it is clear that the eventual distribution will be concentrated on x₂. We observe numerically (see Figure 5) that the distribution first concentrates on x₁ on a transient time scale, before the dynamics on x₂ takes precedence. We refer to Burie, Djidjou-Demasse, and Ducrot (Asymptotic and transient behaviour for a nonlocal problem arising in population genetics) for a related model with mutations where these transient behaviors are analytically characterized.

Figure 3 illustrates the case when the maximal fitness is negligible for the initial measure (case ii) of Theorem 2.2). We provide a plot of the fitness function (top left-hand side) and the initial data (top right-hand side). The fitness function attains its maximum on a line segment, which has zero Lebesgue measure, so that the initial data puts no mass on the maximal fitness set, while the support of the initial data contains this segment. We also plot the time evolution of S(t) (bottom left-hand side) and a snapshot of the distribution I(t, x) at t = 100. We observe that the mass that was initially located away from the fitness maximum has vanished (bottom right-hand side). What remains is a distribution of mass around the line of maximal fitness, which is negligible for the initial data; the distribution, however, takes very high values.

In Figure 6 we provide a precise example of this non-standard behavior. The function α(x) is chosen to have two maxima x₁ = −0.5 and x₂ = 0.5; the precise definition of α(x) is

α(x) = P_{[x₁−δ,x₁+δ]}(x) + P_{[x₂−δ,x₂+δ]}(x),    (3.4)

where P_{[a,b]}(x) := max(1 − (a + b − 2x)²/(a − b)², 0) is the downward parabolic function of height one and support [a, b], and δ = 0.2. The function α(x) has the exact same local behavior in the neighborhood of x₁ and x₂. The function I₀(x) is chosen as

I₀(x) = min(1, 1024(x − x₁)⁸) min(1, 4(x − x₂)²) 1_{[−1,1]}(x),    (3.5)

so that κ₁ = 8 and κ₂ = 2. Finally we take

γ(x) = 1/(1 + P_{[x₁−δ,x₁+δ]}(x) + 3P_{[x₂−δ,x₂+δ]}(x)),    (3.6)

so that γ(x₁) = 1/2 and γ(x₂) = 1/4. Summarizing, we have

(N + κ₁)/(2γ(x₁)) = 9 > 6 = (N + κ₂)/(2γ(x₂)),

so that Theorem 2.10 predicts that the mass I(t, dx) will vanish near {x₁} = {α(x) = α*} ∩ {γ(x) = γ*} and concentrate on x₂.

Numerical illustration: Oscillations

In Figure 7 we present a numerical simulation that illustrates the example provided in section 2.3. The parameter L is set to 1. A supplementary movie is also available (spiraling.avi).

Comments

We have studied the asymptotic behavior of a simple epidemiological model whose originality is that the population is structured by a continuous genotypic variable. The population is thus divided into a compartment of susceptible individuals and compartments of individuals infected by the different strains of the pathogen. The duration of the infectious period (1/γ) and the basic reproduction number (α, up to a multiplicative constant) of the disease depend on the trait considered.
In this work, the pathogen population does not mutate and therefore if a trait is absent from the initial pathogen population, it cannot appear in the population afterwards. Assuming that the basic reproduction number has a maximum strictly greater than one for at least one phenotypic trait of the initial population, we have shown the convergence of the solution of this model towards an endemic equilibrium. Firstly, in the case where the maximum value of the basic reproduction number is reached on a continuum of phenotypic traits of the initial population, we can completely describe the asymptotic number of susceptible as well as the distribution of the infected population with respect to the different variants. Secondly, in the case where the maximum of R 0 is reached on a set of zero measure, we have shown the persistence of the infectious population and its asymptotic concentration on a subset of the set of the traits maximizing both the R 0 and the additional mortality rate due to the pathogen (or in other words minimizing the infectious period). To go further in the analysis, we then considered the case of a finite number of traits maximizing the R 0 . In this case, we were able to describe more precisely the set of traits around which the population of infected (and therefore of pathogens) is concentrated, as well as the profile of the asymptotic distribution of infected around these traits. In particular, we observed that even if there are no infected individuals with a trait maximizing the R 0 initially, the population can concentrate around this trait (but not on this trait since there is no mutation in this model). The selection of traits around which the population concentrates thus depends not only on the value of R 0 and the additional mortality rate, but also on the initial population distribution around the trait. A non-standard behavior may thus appear where the selected strain no longer maximizes γ, the virulence. The question arises whether such a configuration can be observed in vivo. 4 Measure-valued solutions and proof of Theorem 2.2 asure In this section we derive general properties of the solution of (2.1) equipped with the given and fixed initial data S(0) = S 0 ∈ [0, ∞) and I 0 (dx) ∈ M + (X). Recall that α * and R 0 (I 0 ) are both defined in (2.3). Next for ε > 0 we recall that L ε (I 0 ) is the following superlevel set (defined by (2.4) in Assumption 2.1): L ε (I 0 ) = {x ∈ supp I 0 : α(x) ≥ α * -ε} = α * -ε≤y≤α * {α(x) = y}. Recall also that the existence and uniqueness of a solution S(t), I(t, dx) ∈ R×M(X) corresponding to (S 0 , I 0 ) in the Banach space R × M(X) (where M(X) is equipped with the norm • T V ) follow directly from the Cauchy-Lipschitz Theorem. The following lemma holds true. . Next we return to the S-component of equation (2.1) and let ε > 0 be given. We have, for t 0 sufficiently large and t ≥ t 0 , Assume by contradiction that the conclusion of the Lemma does not hold, i.e. there exists ε > 0 and a sequence S t = Λ -θ + X α(x)γ(x)I(t, dx) S(t) ≥ Λ -θ + α * γ * Λ min(θ, γ * ) + ε S(t), therefore S(t) ≥ e -θ+ Λα * γ * min(θ,γ * ) +ε (t-t0) S(t 0 ) + Λ min(θ, γ * ) (θ + ε) min(θ, γ * ) + Λα * γ * 1 -e -θ+ Λα * γ * min(θ,γ * ) +ε (t- T n → +∞ such that 1 T n Tn 0 S(t)dt ≥ 1 α * + ε. Then T n Tn 0 S(t)dt ≤ 1 1 α * + ε ≤ α * -ε , where ε = (α * ) 2 ε + o(ε). Since the map x → α(x) is continuous, the set L ν (I 0 ) = {x ∈ supp I 0 : α(x) ≥ α * -ν} has positive mass with respect to the measure I 0 (dx) for all ν > 0, i.e. Lν (I0) I 0 (dx) > 0. 
This is true, in particular, for ν = ε 2 , therefore L ε /2 (I0) I(T n , dx) = L ε /2 (I0) exp γ(x) I(T n , dx) = +∞, which is a contradiction since I(t, dx) is bounded in M(X) by Lemma 4.1. This completes the proof of the Lemma. An important tool in later proofs is that the mass of I(t, dx) vanishes on any set sufficiently far away from I 0 when the Cesàro mean S(t) = 1 t t 0 S(s)ds of S is sufficiently close to α * , which we prove now. ishes Lemma 4.3. Let Assumption 2.1 hold and S(t), I(t, dx) be the corresponding solution of (2.1). Let {t τ } be a net t τ → ∞ and ε > 0 be such that we have eventually S(t τ ) = 1 t τ tτ 0 S(s)ds ≤ 1 α * -ε for all τ. (4.2) Then for any p > 1 we have {α(x)≤α * -pε} I(t τ , dx) -----→ tτ →+∞ 0. (4.3) Proof. Indeed, we can write {α(x)≤α * -pε} I(t τ , dx) = {α(x)≤α * -pε} exp γ(x)t τ S(t τ ) α(x) - 1 S(t τ ) I 0 (dx) ≤ {α(x)≤α * -pε} exp γ(x)t τ S(t τ ) α * -pε - 1 S(t τ ) I 0 (dx) ≤ {α(x)≤α * -pε} exp γ(x)t τ S(t τ )(1 -p)ε I 0 (dx). Since p > 1 and lim inf S(t τ ) > 0 by Lemma 4.1, the argument of the exponential converges to -∞ as t τ → +∞ therefore {α(x)≤α * -pε} I(t τ , dx) -----→ tτ →+∞ 0. The Lemma is proved. The following weak persistence property holds. nomut Lemma 4.4. Let Assumption 2.1 hold and suppose that R 0 (I 0 ) > 1. Let S(t), I(t, dx) be the corresponding solution of (2.1). Then lim sup t→+∞ X I(t, dx) ≥ θ α * γ * R 0 (I 0 ) -1 > 0, where γ * := sup x∈supp I0 γ(x). Proof. Assume by contradiction that for t 0 sufficiently large we have X I(t, dx) ≤ η < η =: θ α * γ * R 0 (I 0 ) -1 for all t ≥ t 0 , with η > 0. As a consequence of Lemma 4.2 we have lim inf t→+∞ S(t) ≤ lim sup T →+∞ 1 T T 0 S(t)dt ≤ 1 α * . ( 4 .4) eq:inf Let S := lim inf t→+∞ S(t). Let (t n ) n≥0 be a sequence that tends to ∞ as n → ∞ and such that lim n→+∞ S (t n ) = 0 and lim n→+∞ S(t n ) = S. As X I(t n , dx) ≤ η for n large enough we deduce from the equality Let us remind that M + (X), equipped with the Kantorovitch-Rubinstein metric d 0 defined in (2.2), is a complete metric space. nomut Lemma 4.5 (Compactness of the orbit and concentration). Let Assumption 2.1 hold and Let S(t), I(t, dx) be the corresponding solution of (2.1). Then, the closure of the orbit of (S 0 , I 0 ), S (t n ) = Λ -θS(t n ) -S(t n ) X α(x)γ(x)I(t n , dx), that 0 ≥ Λ -θS -Sα * γ * η so that S ≥ Λ θ + α * γ * η > Λ θ + α * O(S 0 , I 0 ) := (s, µ) ∈ R × M + (X) : there exists (t n ) such that d 0 (I(t n , dx), µ) -----→ n→+∞ 0 and |S(t n ) -s| -----→ n→+∞ 0 , is compact. Moreover R 0 (I 0 ) > 1 and t n → +∞ is an arbitrary sequence along which lim inf n→+∞ X I(t, dx) > 0, ( 4 I(t, dx) = 0. Thus given any threshold ν > 0, there is T ν such that I(t, X\L ε (I 0 )) ≤ ν for all t ≥ T ν , and since I 0 is Radon there exists a compact set K ν ⊂ X such that I 0 (X\K) ≤ νe -γ ∞ Tν α ∞ sup 0≤s≤Tν S(s) so that X\K I(t, dx) ≤ X\K I 0 (dx)e γ ∞ Tν (α ∞ sup 0≤s≤Tν S(s)-1) ≤ ν. Thus for all t ≥ 0 we have I t, X\ K ν ∪ L ε (I 0 ) . Next let (t n ) be an arbitrary sequence along which (4.5) holds. Thanks to the compactness of the orbit, we extract from (S(t n ), I(t n , dx)) a subsequence still denoted t n , such that S(t n ) → S ∞ and I(t n , dx) → I ∞ (dx) weakly. Clearly S (t + t n ) is bounded independently on n; differentiating the first line in (2.1a) we see that S (t + t n ) is also bounded independently on n when t is in an arbitrary compact set. Thus, up to a diagonal extraction, we may assume that both S(t + t n ) and S (t + t n ) converge locally uniformly in t to a limit S ∞ (t) and (S ∞ ) (t). 
This proves the final statement of the Lemma. We can now take the weak- * limit in the formula I(t + t n , dx) = I(t n , dx) exp γ(x)t α(x) 1 t t 0 S(t n + s)ds -1 which shows that, for fixed but arbitrary t ∈ R, we have that I(t n + t, dx) converges weakly to I ∞ (t, dx) := I ∞ (dx) exp γ(x)t α(x) 1 t t 0 S ∞ (s)ds -1 . In particular (S ∞ (t), I ∞ (t, dx)) is a complete orbit of the equation (2.1). Since 1 ∈ BC(X) we have that X I ∞ (dx) = X 1I ∞ (dx) = α(x)≤α * -1 k I ∞ (dx) = lim n→+∞ α(x)≤α * -1 k I(t n , dx) = 0, for all k ∈ N. Thus by Fatou's Lemma α(x)<α * I ∞ (dx) ≤ lim k→+∞ α(x)≤α * -1 k I ∞ (dx) = 0. This proves (4.7) and completes the proof of Lemma 4.5. Next we show the weak uniform persistence property if R 0 (I 0 ) > 1. nomut Lemma 4.6 (Uniform persistence). Let Assumption 2.1 hold and suppose that R 0 > 1. Let S(t), I(t, dx) be the corresponding solution of (2.1). Then lim inf t→+∞ X I(t, dx) > 0. Proof. We adapt here the argument of Magal and Zhao [START_REF] Magal | Global attractors and steady states for uniformly persistent dynamical systems[END_REF]Proposition 3.2] to our context. Suppose by contradiction that there exists a sequence t n → +∞ such that lim n→+∞ X I(t n , dx) = 0. By Lemma 4.1 we know that S(t n ) ≥ S > 0 for all n ∈ N. By Lemma 4.4, for each n ∈ N sufficiently large, there exists s < t n such that X I(s n , dx) = θ 2α * γ * R 0 (I 0 ) -1 and X I(t, dx) ≤ θ 2α * γ * R 0 (I 0 ) -1 for all s n ≤ t ≤ t n . (4.8) eq:230 Up to replacing t n by a subsequence, we will assume without loss of generality that t n-1 < s n < t n for all n ∈ N. Thanks to Lemma 4.5 and up to a further extraction, the shifted orbits S(t+s n ), I(t+s n , dx) converge to a complete orbit S ∞ (t), I ∞ (t, dx) which satisfies 0 < S ≤ S ∞ (t) ≤ Λ θ and X I ∞ (t, dx) > 0 for all t ∈ R. Moreover I ∞ (t, dx) is concentrated on the set {x : α(x) = α * }, and in particular R 0 (I ∞ (0, dx)) = R 0 (I 0 ) > 1. By Lemma 4.4 we have therefore lim sup t→+∞ X I ∞ (t, dx) ≥ θ α * γ * R 0 (I 0 ) -1 . (4.9) eq:230 Next we investigate the time T n := t n -s n . Up to extracting a subsequence, there are two options. T n → T as n → +∞. In that case we have X I ∞ (T, dx) = 0 so I ∞ (T, dx) ≡ 0, and by the uniqueness of the solution to (2.1) we have I ∞ (t, dx) ≡ 0 for all t ≥ T . This contradicts (4.9). T n → +∞ as n → +∞. In that case, recalling (4.8), we have X I ∞ (t, dx) ≤ θ 2α * γ * R 0 (I 0 ) -1 for all t ≥ 0, and this also contradicts (4.9). This completes the proof of Lemma 4.6. o-mut Lemma 4.7. Let Assumption 2.1 hold. Let S(t), I(t, dx) be the corresponding solution of (2.1). Assume that R 0 (I 0 ) > 1. Then lim inf T →+∞ 1 T T 0 S(t)dt ≥ 1 α * , with α * given in (2.3). Proof. Assume by contradiction that the conclusion of the Lemma does not hold, i.e. there exists ε > 0 and a sequence T n → +∞ such that 1 T n Tn 0 S(t)dt ≤ 1 α * -ε. Then T n Tn 0 S(t)dt ≥ 1 1 α * -ε ≥ α * + ε , where ε = (α * ) 2 ε + o(ε) , provided ε is sufficiently small. Therefore by Lemma 4.3 we have lim inf t→+∞ X I(t, dx) ≤ lim inf n→+∞ X I(T n , dx) = 0, which is in contradiction with Lemma 4.6. This proves the Lemma. Proof. Let ε > 0 be as in the statement of Lemma 4.9. By Lemma 4.2, there exists T ≥ 0 such that for all t ≥ T we have t t 0 S(s)ds ≥ α * - ε 2 . Hence Lemma 4.3 implies that X\Lε(I0) I(t, dx) ----→ t→+∞ 0. 
In particular, if I(t, dx)| Lε(I0) denotes the restriction of I(t, dx) to L ε (I 0 ), we have I(t, dx)-I(t, dx)| Lε(I0) T V ----→ t→+∞ 0 and hence d 0 I(t, dx), I(t, dx)| Lε(I0) ≤ d T V I(t, dx), I(t, dx)| Lε(I0) ----→ t→+∞ 0. Here ε > 0 can be chosen arbitrarily small. By Lemma 4.1 we know moreover that lim sup t→+∞ X I(t, dx) ≤ Λ min(θ, γ * ) , so that for t sufficiently large, we have X I(t, dx) ≤ 2 Λ min(θ, γ * ) . Finally by using Proposition B.5 (proved in the Appendix), we have d 0 I(t, dx), M + ({α(x) = α * }) ≤ d 0 I(t, dx), I(t, dx)| Lε(I0) + d 0 I(t, dx)| Lε(I0) , M + ({α(x) = α * }) ≤ d 0 I(t, dx), I(t, dx)| Lε(I0) + 2 Λ min(θ, γ * ) sup x∈Lε(I0) d x, {α(x) = α * } . Since sup x∈Lε(I0) d x, {α(x) = α * } ---→ ε→0 0, the Kantorovitch-Rubinstein distance between I(t, dx) and M {α(x) = α * } can indeed be made arbitrarily small as t → +∞. This proves the Lemma. const Lemma 4.10. Let Assumption 2.1 hold and suppose moreover that α(x) ≡ α * for all x ∈ supp I 0 and that R 0 (I 0 ) > 1. Let S(t), I(t, dx) be the corresponding solution of (2.1) and let S * := 1 α * and i * (x) := e τ γ(x) where τ ∈ R is the unique solution of the equation X γ(x)e τ γ(x) I 0 (dx) = θ α * (R 0 -1) . (4.10) eq:id- Define g(x) := x -ln(x) and let D(V ) := S, i(x) ∈ R × L ∞ i * (I 0 ) : S > 0 and I0(dx) ess inf i(x) i * (x) > 0 , where L ∞ i * (I 0 ) is the weighted L ∞ space equipped with the norm i L ∞ i * (I0) := i i * L ∞ (I0) . Then D(V ) is open in R × L ∞ i * (I 0 ) and the functional V (S, i) := S * g S S * + X i * (x)g i(x) i * (x) I 0 (dx) is well-defined and continuous on D(V ). Moreover if i(t, x) is the Radon-Nikodym derivative of I(t, dx) with respect to I 0 (in other words, .11) eq:Lya I(t, dx) = i(t, x)I 0 (dx)), then t → V (S(t), i(t, x)) is of class C 1 and we have d dt V S(t), i(t, x) ≤ -θ (S(t) -S * (t)) 2 S(t) for all t ∈ R. ( 4 Proof. The well-definition of τ is clear since the left-hand side of (4.10) is strictly increasing and connects 0 when τ → -∞, to +∞ when τ → +∞. The openness of D(V ), the well-definition and continuity of V (S, i) are also clear. Let us check (4.11). We first remark that i(t, x) = e tγ(x)(α(x)S(t)-1) , so it is clear that i(t, x) ∈ D(V ) and that t → V (S(t), i(t, x)) ∈ C 1 (R). We show (4.11). Let us write V 1 (S) = S * g S(t) S * and V 2 (t) = X i * (x)g i(t,x) i * (x) I 0 (dx), then we have V 1 (t) = S * S (t) S * g S(t) S * = Λ -θS(t) -S(t) X α * γ(x)i(t, x)I 0 (dx) 1 - S * S(t) = Λ -θS(t) -S(t) X α * γ(x)i(t, x)I 0 (dx) -Λ + θS * + S * X α * γ(x)i * (x)I 0 (dx) 1 - S * S(t) = -θ (S(t) -S * ) 2 S(t) + S * X α * γ(x)i * (x)I 0 (dx) -S(t) X α * γ(x)i(t, x)I 0 (dx) 1 - S * S(t) , = -θ (S(t) -S * ) 2 S(t) + S * X α * γ(x)i * (x)I 0 (dx) - (S * ) 2 S(t) X α * γ(x)i * (x)I 0 (dx) -S(t) X α * γ(x)i(t, x)I 0 (dx) + S * X α * γ(x)i(t, x)I 0 (dx), and V 2 (t) = X i * (x) i t (t, x) i * (x) g i(t, x) i * (x) I 0 (dx) = X γ(x) (α * S(t) -1) i(t, x) 1 - i * (x) i(t, x) I 0 (dx) = X γ(x) (α * S(t) -1) (i(t, x) -i * (x)) I 0 (dx) = X γ(x)α * S(t)i(t, x)I 0 (dx) - X γ(x)i(t, x)I 0 (dx) - X γ(x)α * S(t)i * (x)I 0 (dx) + X γ(x)i * (x)I 0 (dx). Recalling S * = 1 α * , we have therefore d dt V (S(t), i(t, •)) = d dt V 1 (t) + d dt V 2 (t) = -θ (S(t) -S * ) 2 S(t) + 2 X γ(x)i * (x)dx - (S * ) 2 S(t) X α * γ(x)i * (x)I 0 (dx) - X α * γ(x)S(t)i * (t)I 0 (dx). Since X α * γ(x)i * (x) S(t) + (S * ) 2 S(t) I 0 (dx) ≥ X α * γ(x)i * (x) × 2S * I 0 (dx), which stems from the inequality a + b ≥ 2 √ ab, we have indeed proved that (4.11) holds. 
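Let us point out a consequence of (4.11) that will be used implicitly below (the short computation is ours). Since g′(x) = 1 − 1/x, the function g decreases on (0, 1) and increases on (1, +∞), so that g(x) ≥ g(1) for every x > 0. Hence

V(S, i) ≥ g(1) ( S* + ∫_X i*(x) I₀(dx) ) > −∞ for all (S, i) ∈ D(V),

and by (4.11) the map t → V(S(t), i(t, x)) is nonincreasing along orbits: V is a Lyapunov functional for (2.1), bounded from below, and therefore converges as t → +∞ along any orbit on which it is finite.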
Next we can determine the long-time behavior when the initial measure I 0 puts a positive mass on the set of maximal fitness. unded Lemma 4.11. Let Assumption 2.1 hold. Assume that R 0 (I 0 ) > 1 and suppose that I 0 ({α(x) = α * }) > 0, or in other words, α(x)=α * I 0 (dx) > 0. Let η(t) := α * S(t) -1, then there exists a constant η such that -∞ < -η ≤ tη(t) ≤ η < +∞, (4.12 ) eq:230 for all t ≥ 0. If moreover lim inf t→-∞ α(x)=α * I(t, dx) > 0, then up to changing the constant η, (4.12) holds for all t ∈ R. Proof. We first remark that I(t, dx) can be written as I(t, dx) = exp γ(x)t η(t) + (α(x) -α * ) 1 t t 0 S(s)ds I 0 (dx). By Jensen's inequality we have exp α(x)=α * γ(x)tη(t) I 0 (dx) α(x)=α * I 0 ≤ α(x)=α * e γ(x)tη(t) I 0 (dx) α(x)=α * I 0 (dz) , so that tη(t) ≤ α(x)=α * I 0 α(x)=α * γ(x)I 0 (dx) ln X e γ(x)tη(t) I 0 (dx) X I 0 = α(x)=α * I 0 α(x)=α * γ(x)I 0 (dx) ln 1 α(x)=α * I 0 α(x)=α * I(t, dx) . Applying Lemma 4.1, I(t, dx) is bounded and we have indeed an upper bound for tη(t). Next, writing I(t, dx) = exp γ(x)tη(t) + (α(x) -α * ) t 0 S(s)ds I 0 (dx) and recalling that t 0 S(s)ds → +∞ as t → +∞, the function exp γ(x)tη(t) + (α(x) -α * ) t 0 S(s)ds converges almost everywhere (with respect to I 0 ) to 0 on X\{α(x) = α * }, so that by Lebesgue's dominated convergence theorem, we have lim t→+∞ X\{α(x)=α * } I(t, dx) = X\{α(x)=α * } lim t→+∞ exp γ(x)tη(t) + (α(x) -α * ) t 0 S(s)ds I 0 (dx) = 0. Next it follows from Lemma 4.7 that lim inf t→+∞ I(t, dx) > 0, so that lim inf t→+∞ α(x)=α * I(t, dx) = lim inf t→+∞ X I(t, dx) > 0. Assume by contradiction that there is a sequence (t n ) such that t n η(t n ) → -∞, then α(x)=α * I(t, dx) = α(x)=α * e γ(x)tnη(tn) I 0 (dx) ≤ α(x)=α * e γ * tnη(tn) I 0 (dx) = e γ * tnη(tn) α(x)=α * I 0 (dx) ----→ t→+∞ 0, where γ * := inf x∈supp I0 γ(x) > 0. This is a contradiction. Therefore there is a constant η > 0 such that tη(t) ≥ -η > -∞. In particular, the function t → tη(t) is bounded by two constants, -∞ < -η ≤ tη(t) ≤ η < +∞. This completes the proof of Lemma 4.11. Finally we prove that any complete orbit that is already concentrated on {α(x) = α * } is constant, provided the mass can be bounded when t → -∞. This is a kind of LaSalle principle, since we have a partial Lyapunov functional by Lemma 4.10. stant Lemma 4.12. Let Assumption 2.1 hold and assume that α(x) ≡ α * for all x ∈ supp I 0 . Suppose that S(t), I(t, dx) is a complete orbit of (2.1) with initial data S 0 , I 0 (dx) , and assume that lim inf t→-∞ X I(t, dx) > 0. Then S(t) ≡ 1 α * , for all t ∈ R. Proof. Thanks to our assumption and the results of Lemma 4.11, we know that tη(t) is bounded for t ∈ R; moreover by Lemma 4.10, the functional V (S(t), i(t, x)) is well-defined and decreasing along the orbit S(t), i(t, x) , t ∈ R. Since V (S(t), i(t, x)) is bounded and decreasing there exists V ∞ ∈ R such that V (S(t), i(t, x)) ----→ t→-∞ V ∞ . Let t n → -∞ be a sequence with S(t n ) → S -∞ and t n η(t n ) → η -∞ when t n → -∞, so that I(t n , dx) = e tnη(tn)γ(x) I 0 (dx) converges when t → -∞ to I -∞ 0 (dx) := e η -∞ I 0 (dx). Then the shifted orbits S(t+t n ), I(t+t n , dx) converge, as t n → -∞, to a complete orbit S -∞ (t), I -∞ (t, dx) with I -∞ (0, dx) = I -∞ 0 (dx) = e η -∞ I 0 (dx). By the continuity of V , along the new orbit S -∞ (t), I -∞ (t, dx) , we have that Then it follows from the first line in (2.1a) that 0 = Λ -θS * -S * X α * γ(x)I -∞ (t, dx) for all t ∈ R, therefore in particular V S -∞ (t), I -∞ (t, dx) ≡ V -∞ is a constant. 
Thus d dt V S -∞ (t), I -∞ (t, X γ(x)e η ∞ γ(x) I 0 (dx) = θ α * R 0 -1 . Thus η -∞ is a solution of (4.10) and, by the uniqueness of the solution, we have η -∞ = τ and I -∞ 0 (dx) = i * (x)I 0 (dx). Thus V -∞ = V S -∞ , e η -∞ γ(x) = V S * , i * (x) = 0. Thus V -∞ is the smallest possible value of V (S, i(x)). Since t → V (S(t), i(t, x)) is nonincreasing, we have therefore V S(t), i(t, x) ≡ 0, d dt V S(t), i(t, x) ≡ 0, for all t ∈ R. By (4.11), we have therefore S(t) ≡ S * = 1 α * for all t ∈ R, which completes the proof of Lemma 4.12. itive Lemma 4.13. Let Assumption 2.1 hold. Assume that R 0 (I 0 ) > 1 and suppose that I 0 ({α(x) = α * }) > 0, or in other words, α(x)=α * I 0 (dx) > 0. Then S(t) ----→ t→+∞ 1 α * and I(t, dx) -I ∞ (dx) T V ----→ t→+∞ 0, where we have the following formula for I ∞ (dx), with τ being the unique solution of (4.10), I ∞ (dx) = e τ γ(x) 1 α(x)=α * I 0 (dx). Proof. Suppose that there exists a sequence t n → +∞ and η I(s, dx) > 0 for all t ∈ R. Thus we can apply Lemma 4.12 which shows that S ∞ (t) ≡ 1 α * for all t ∈ R. Thus (S ∞ ) (t) ≡ 0 and by using the first line in (2.1a) we find that X γ(x)e η * γ(x) 1 α(x)=α * I 0 (dx) = θ α * R 0 (I 0 ) -1 . This is (4.10) which has a unique solution η * = τ . Since we can extract from any sequence t n → +∞ a subsequence with t n η(t n ) → τ , we conclude that lim t→+∞ tη(t) = τ therefore I(t, dx) • T V ----→ t→+∞ e τ γ(x) 1 α(x)=α * I 0 (dx). We show similarly that S(t) → 1 α * as t → +∞. When the set of maximal fitness {α(x) = α * } is negligible for I 0 , it is more difficult to obtain a general result for the long-time behavior of I(t, dx). We start with a short but useful estimate on the rate η(t) Proof. Assume by contradiction that there exists a sequence t n → +∞ such that t n η(t n ) has a uniform upper bound as t n → +∞, then observe that the quantity e γ(x)(α(x)S(tn)-1)tn = e γ(x)(α(x)-α * )S(tn)tn+γ(x)η(tn)tn is uniformly bounded in t n and vanishes as t n → +∞ almost everywhere with respect to I 0 (dx). By a direct application of Lebesgue's dominated convergence Theorem, we have therefore X I(t n , dx) = X e γ(x)(α(x)S(tn)-1)tn I 0 (dx) -----→ tn→+∞ 0, which is in contradiction with Lemma 4.6. We conclude that tη(t) → +∞ as t → +∞. We are now in the position to prove Theorem 2.2. Proof of Theorem 2.2. The convergence of S(t) and I(t, dx) in case i) was proved in Lemma 4.13. Let us focus on case ii), that is to say, we assume α(x)=α * I 0 (dx) = 0. The uniform persistence of I(t, dx) is a consequence of 4.6. The concentration on the maximal fitness was proved in Lemma 4.9. Let us show that S(t) → 1 α * . Suppose by contradiction that it is not the case, then there exists ε > 0 and a sequence t n → +∞ with S(t n ) -1 α * ≥ ε. By Lemma 4.5 we can extract t n a subsequence such that the shifted orbits S(t + t n ), I(t + t n , dx) converge to S ∞ (t), I ∞ (t, dx) . We have α(x)<α * I ∞ (t, dx) = 0, X I ∞ (t, dx) > 0, and lim inf t→-∞ X I ∞ (t, dx) ≥ lim inf t→+∞ X I(t, dx) > 0, so by Lemma 4.12 we have lim n→+∞ S(t n ) = S ∞ (0) = 1 α * . This is obviously a contradiction. Theorem 2.2 is proved. We now turn to the proof of Proposition 2.7 and we first prove that I(t, dx) concentrates on the set of points maximizing both α and γ. This property is summarized in the next lemma. Proof. We decompose the proof in several steps. Step 1: We show that I(t, dx) and 1 α(x)S(t)≥1 I(t, dx) are asymptotically close in • T V . That is to say, I(t, dx) -1 α(•)S(t)≥1 I(t, dx) T V ----→ t→+∞ 0. 
Indeed we have I(t, dx) -1 α(x)S(t)≥1 I(t, dx) = 1 α(x)S(t)<1 I(t, dx) = 1 α(x)S(t)<1 e γ(x)(α(x)S(t)-1)t I 0 (dx). First note that the function 1 α(x)S(t)<1 I(t, dx) = 1 α(x)S(t)<1 e γ(x)(α(x)S(t)-1)t I 0 (dx) is uniformly bounded. On the other hand, since I 0 ({α(x) = α * }) = 0 recall that S(t) → 1 α * for t → ∞, so that 1 α(x)S(t)<1 → 0 as t → ∞ almost everywhere with respect to I 0 . It follows from Lebesgue's dominated convergence Theorem that X 1 α(x)S(t)<1 e γ(x)(α(x)S(t)-1)t I 0 (dx) ----→ t→+∞ 0. Step 2: We show that the measure 1 S(t)y≥1 e γ(yS(t)-1)t A(dy) is bounded when t → ∞ for all γ < γ * . Recall that A is the pushforward measure of I 0 by the continuous map α. Note that I 0 ({α(x) = α * }) = 0 implies that A({α * }) = 0 and remark that one has X 1 α(x)S(t)≥1 I(t, dx) = α * min(α * ,1/S(t)) α(x)=y e γ(x)(yS(t)-1)t I 0 (y, dx)A(dy), so, according to Step 1, for t sufficiently large one has e γ(yS(t)- e γ(yS(t)-1)t A(dy) < ∞. Note that, if the constant m is independent of γ, then the above estimate does not depend on γ either. Step 3: We show that 1 γ(x)<γ 1 S(t)α(x)≥1 I(t, dx) vanishes whenever γ < γ * . Fix γ < γ * and let 0 < ε≤ γ * -γ 2 . Then we have I 0 (y, dx)e -ε(yS(t)-1)t e γ(yS(t)-1)t A(dy). Reducing ε if necessary we may assume that γ ε > 1. Therefore it follows from Hölder's inequality that γ(x)≤γ and α(x)≥1/S(t) I(t, dx) ≤ α * min(α * ,1/S(t)) e -ε(yS(t)-1)t γ ε e γ(yS(t)-1)t A(dy) ε γ ×   α * min(α * ,1/S(t)) α(x)=y I 0 (y, dx) γ γ-ε e γ(yS(t)-1)t A(dy)   1-ε γ ≤ α * min(α * ,1/S(t)) A(dy) ε γ α * min(α * ,1/S(t)) e γ(yS(t)-1)t A(dy) 1-ε γ = I 0 (L α * -min(α * ,1/S(t)) (I 0 )) ε γ α * min(α * ,1/S(t)) e γ(yS(t)-1)t A(dy) Next we prove the asymptotic mass. Pick a sentence t n → +∞. By the compactness of the orbit (proved in Lemma 4.6) we can extract from t n a subsequence t n such that there exists a Radon measure I ∞ (dx) with d 0 (I(t, dx), I ∞ (dx)) ----→ t→+∞ 0, and since S(t) → 1 α * and upon further extraction, S (t n ) → 0. Therefore, X α(x)γ(x)I(t n , dx) = Λ -S (t n ) S(t n ) -θ -----→ n→+∞ α * Λ -θ = θ (R 0 (I 0 ) -1) . By the concentration result in Lemma 4.15, I ∞ is concentrated on {α(x) = α * } ∩ {γ(x) = γ * }. Therefore α * γ * I ∞ (dx) = α(x)γ(x)I ∞ (dx) = lim n→+∞ X α(x)γ(x)I(t n , dx) = θ (R 0 (I 0 ) -1) , so that lim n→+∞ I(t n , dx) = I ∞ (dx) = θ α * γ * (R 0 (I 0 ) -1) . Since the limit is independent of the sequence t n , we have indeed shown that I(t, dx) + f (t) ≤ lim inf t→+∞ f (t) Remark that 1 U (x)1 γ≥γ 1 S(t)y≥1 I(t, dx) = α * min(α * ,1/S(t)) γ * γ {γ(x)=z} 1 U (x)e z(yS(t)-1)t I α,γ 0 (y, z, dx)I α 0 (y, dz)A(dy) = α * min(α * ,1/S(t)) γ * γ {γ(x)=z} 1 U (x)I α,γ 0 (y, z, dx)e z(yS(t)-1)t I α 0 (y, dz)A(dy) ≥ α * min(α * ,1/S(t)) γ * γ m 2 e z(yS(t)-1)t I α 0 (y, dz)A(dy) ≥ f (t) m 2 , provided t is sufficiently large and γ is sufficiently close to γ * , where m := lim inf ε→0 A(dy) ess inf α * -ε≤y≤α * I α 0 (y,dz) ess inf γ * -ε≤z≤γ * 1 x∈U I α,γ 0 (y, z, dx) > 0. Therefore lim inf t→+∞ U I(t, dx) ≥ m 2 lim inf t→+∞ f (t) > 0. This completes proof of Proposition 2.7. 5 The case of a finite number of regular maxima axima In this section we prove Theorem 2.10. To that aim, we shall make use of the following formula I(t, dx) = exp γ(x) α(x) t 0 S(s)ds -t I 0 (dx). ( 5 .1) formul Recall also the definition of η(t): η(t) = α * 1 t t 0 S(s)ds -1 = α * S(t) -1. Proof of Theorem 2.10. We split the proof of this result into three parts. We first derive a suitable upper bound. 
We then derive a lower bound in a second step and we conclude the proof of the theorem by estimating the large time asymptotic of the mass of I around each point of {α(x) = α * }. Upper bound: Let i = 1, .., p be given. Recall that ∇α(x i ) = 0. Now due to (iii) in Assumption 2.8 there exist m > 0 and T > ε -2 0 large enough such that for all t ≥ T and for all y ∈ B 0, t -1 2 we have α(x i + y) -α * ≤ -α * m y 2 . As a consequence, setting Γ(x) = γ(x) α(x) α * , we infer from (5.1) and the lower estimate of I 0 around x i given in Assumption 2.8 (ii), that for all t > T xi-x ≤t -1 2 I(t, dx) ≥ M -1 |y|≤t -1 2 |y| κi exp tη(t)Γ(x i + y) -tγ(x i + y)m|y| 2 dy. Next since the function I = I(t, dx) has a bounded mass, there exists some constant C > 0 such that R N I(t, dx) ≤ C, ∀t ≥ 0. Coupling the two above estimates yields for all t > T |y|≤t -1 2 |y| κi exp tη(t)Γ(x i + y) -tγ(x i + y)m|y| 2 dy ≤ M C. Hence setting z = y √ t into the above integral rewrites as |z|≤1 t -κ i 2 |z| κi exp tη(t)Γ(x i + t -1 2 z) -γ(x i + t -1 2 z)m|z| 2 dz t N/2 ≤ M C, ∀t > T. Now, since γ and α are both smooth functions, we have uniformly for |z| ≤ 1 and t 1: Γ(x i + t -1 2 z) = γ(x i ) + O t -1 2 , γ(x i + t -1 2 z) = γ(x i ) + O t -1 2 . This yields for all t 1 |z|≤1 t -κ i 2 |z| κi exp tη(t) γ(x i ) + O t -1 2 -γ(x i )m|z| 2 dz t N/2 ≤ CM, t -κ i 2 -N 2 e tη(t) γ(xi)+O t -1 2 |z|≤1 |z| κi e -γ(xi)m|z| 2 dz ≤ CM, that also ensures the existence of some constant c 1 ∈ R such that tη(t) γ(x i ) + O t -1 2 - N + κ i 2 ln t ≤ c 1 , ∀t 1, or equivalently η(t) ≤ N + κ i 2γ(x i ) ln t t + O 1 t as t → ∞. Since the above upper-bound holds for all i = 1, .., p, we obtain the following upper-bound η(t) ≤ ln t t + O 1 t as t → ∞, (5.2) upper where is defined in (2.12). Lower bound: Let ε 1 ∈ (0, ε 0 ) small enough be given such that for all i = 1, .., p and |y| ≤ ε 1 one has α(x i + y) ≤ α * - 2 |y| 2 . Herein > 0 is defined in Assumption 2. Recall that Γ(x) = α(x)γ(x) α * and ∇Γ(x) = 1 α * (α(x)∇γ(x) + γ(x)∇α(x)). Consider M > 0 such that for all k = 1, .., p and all |x - x k | ≤ ε 1 one has |Γ(x) -γ(x k ) -∇γ(x k ) • (x -x k )| ≤ M |x -x k | 2 . ( 5 .3) Gamma Next fix i = 1, .., p and ε ∈ (0, ε 1 ). Then one has for all t > 0 |x-xi|≤ε I(t, dx) ≤ |x-xi|≤ε exp tη(t)Γ(x) -tm|x -x i | 2 I 0 (dx) ≤ e tη(t)γ(xi) |x-xi|≤ε exp t η(t)∇γ(x i ) • (x -x i ) -(m + O(η(t))|x -x i | 2 I 0 (dx). Now observe that for all t 1 one has η(t)∇γ(x k ) • (x -x i ) -(m + O(η(t)))|x -x i | 2 = -(m + O(η(t))) x -x i - η(t)∇γ(x i ) 2(m + O(η(t))) 2 + η(t) 2 ∇γ(x i ) 2 4(m + O(η(t)) ) , so that we get, using Assumption 2.8 (ii), that |x-xi|≤ε I(t, dx) ≤ e tη(t)γ(xi)+ tη(t) 2 ∇γ(x i ) 2 4(m+O(η(t)) |x-xi|≤ε exp -(m + O(η(t))t x -x i - η(t)∇γ(x i ) 2(m + O(η(t)) 2 I 0 (dx) ≤ M e tη(t)γ(xi)+ tη(t) 2 ∇γ(x i ) 2 4(m+O(η(t)) × |x-xi|≤ε |x -x i | κi exp -(m + O(η(t))t x -x i - η(t)∇γ(x i ) 2(m + O(η(t)) 2 dx. We now make use of the following change of variables in the above integral z = √ t x -x i - η(t)∇γ(x i ) 2(m + O(η(t)) , so that we end up with C(t) → C ∞ := M R N |z| κi e -m 2 |z| 2 dz ∈ (0, ∞) as t → ∞. As a conclusion of the above analysis, we have obtained that there exists some constant C such that for all ε ∈ (0, ε 1 ) and all i = 1, .., p one has Now recalling the definition of and J in (2.12) and (2. Estimate of the masses: In this last step we turn to the proof of (2.13). Observe first that the upper estimate directly follows from the asymptotic expansion of η(t) in (2.11) together with (5.4). 
Next, the proof for the lower estimate follows from inequalities similar to the one derived in the second step above. Thus x ∈ B_2. The equality is proved. We conclude that P_K^{-1}(B_H(C, R)) is a Borel set for all C ∈ K(K) and R > 0, and since those sets form a basis of the Borel σ-algebra, P_K is indeed Borel measurable. The Lemma is proved.

Proof. Indeed, let us choose a Borel measurable metric projection P_K on K as in Theorem B.4. Let μ_K be the image measure defined on B(K) by μ_K(B) := μ(P_K^{-1}(B)) for all B ∈ B(K). Then in particular for all f ∈ BC(M) we have ∫_{R^N} f(P_K(x)) dμ(x) = ∫_K f(x) dμ_K(x). Let f ∈ Lip_1(M); then we have

∫_{R^N} f(x) d(μ − μ_K)(x) = ∫_{R^N} f(x) dμ(x) − ∫_{R^N} f(x) dμ_K(x) = ∫_{R^N} (f(x) − f(P_K(x))) dμ(x) ≤ ∫_{supp μ} |x − P_K(x)| dμ(x) ≤ sup_{y ∈ supp μ} d(y, K) ∫_{supp μ} 1 dμ = ‖μ‖_{TV} sup_{x ∈ supp μ} d(x, K).

Therefore d_0(μ, μ_K) ≤ ‖μ‖_{TV} sup_{x ∈ supp μ} d(x, K) and, since μ_K ∈ M^+(K),

d_0(μ, M^+(K)) ≤ d_0(μ, μ_K) ≤ ‖μ‖_{TV} sup_{x ∈ supp μ} d(x, K).

The Proposition is proved.

C Disintegration of measures

C.1 Bourbaki's disintegration theorem

We recall the disintegration theorem as stated in [7, VI, §3, Theorem 1 p. 418]. We use Bourbaki's version, which is proved by functional-analytic arguments, for convenience, although other approaches exist which are based on measure-theoretic arguments and may be deemed more intuitive. We refer to Ionescu Tulcea and Ionescu Tulcea for a disintegration theorem resulting from the theory of (strong) liftings [START_REF] Ionescu Tulcea | On the lifting property. IV. Disintegration of measures[END_REF][START_REF] Ionescu Tulcea | Topics in the theory of lifting[END_REF]. Let us first recall some background on adequate families. This is adapted from [7, V.16 §3] to the context of finite measures of R^N. We let T and X be locally compact topological spaces and μ ∈ M^+(T) be a fixed Borel measure.

then one has S(t) → 1/α* as t → +∞ and I(t, dx) → I_∞(dx) := 1_{α(x)=α*}(x) e^{τγ(x)} I_0(dx) as t → +∞, and lim sup_{n→+∞} ∫_{R×R_+} I(t_n^2, dx dy) ≤ ε(L).

(3.4) where P_{[a,b]}(x) := max(1 − (a + b − 2x)^2/(a − b)^2, 0) is the downward parabolic function of height one and support [a, b] and δ = 0.2. The function α(x) has the exact same local behavior in the neighborhood of x_1 and x_2. The function I_0(x) is chosen as … where γ* = inf_{x ∈ supp I_0} γ(x). Since ∫_{L_{ε/2}(I_0)} I_0(dx) > 0 and ∫_0^{T_n} S(t) dt → +∞ when n → +∞, we have therefore lim sup_{t→+∞} ∫_X I(t, dx) ≥ lim sup_{n→+∞} ∫_{L_{ε/2}(I_0)} …

Remark 4.8. By combining Lemma 4.2 and Lemma 4.7 we obtain that (1/T) ∫_0^T S(t) dt admits a limit when T → +∞ and lim …

Let Assumption 2.1 hold. Let S(t), I(t, dx) be the corresponding solution of (2.1). Then one has d_0(I(t, dx), M^+({α(x) = α*})) → 0 as t → +∞.

… (dx) = 0 and, by (4.11), S_{−∞}(t) ≡ S* = 1/α* and (S_{−∞})′(t) ≡ 0 for all t ∈ R.

Lemma 4.14. Let Assumption 2.1 hold. Assume that R_0(I_0) > 1. Suppose that I_0({α(x) = α*}) = 0 and set η(t) := α* S̄(t) − 1, with S̄(t) = (1/t) ∫_0^t S(s) ds, where α* := sup_{x ∈ supp I_0} α(x). Then it holds that tη(t) → +∞ as t → ∞.

Lemma 4.15. Let Assumption 2.5 hold. Assume that R_0(I_0) > 1 and that I_0({α(x) = α*}) = 0. Recalling the definition of α* in (2.3) and γ* in Assumption 2.5, let Γ_0(I_0) be the set of maximal points of γ on {α(x) = α*}, defined by Γ_0(I_0) := {x : γ(x) = γ* and α(x) = α*}. Then one has d_0(I(t, dx), M^+(Γ_0(I_0))) → 0 as t → +∞.

∫_{γ(x)∈[γ,γ*]} I(t, dx) = ∫_{min(α*,1/S̄(t))}^{α*} ∫_{α(x)=y, γ(x)∈[γ,γ*]} e^{γ(x)(yS̄(t)−1)t} I_0(y, dx) A(dy) + o(1) ≥ ∫_{min(α*,1/S̄(t))}^{α*} ∫_{α(x)=y, γ(x)∈[γ,γ*]} e^{γ(yS̄(t)−1)t} I_0(y, dx) A(dy) + o(1) = ∫_{min(α*,1/S̄(t))}^{α*} ( ∫_{α(x)=y, γ(x)∈[γ,γ*]} I_0(y, dx) ) e^{γ(yS̄(t)−1)t} A(dy) + o(1) ≥ m ∫_{min(α*,1/S̄(t))}^{α*} …

∫_{γ(x)≤γ, α(x)≥1/S̄(t)} I(t, dx) = ∫_{min(α*,1/S̄(t))}^{α*} ∫_{γ(x)≤γ, α(x)≥1/S̄(t)} e^{γ(x)(yS̄(t)−1)t} I_0(y, dx) A(dy) ≤ ∫_{min(α*,1/S̄(t))}^{α*} ∫_{γ(x)≤γ, α(x)≥1/S̄(t)} I_0(y, dx) e^{(γ*−2ε)(yS̄(t)−1)t} A(dy) ≤ ∫_{min(α*,1/S̄(t))}^{α*} … (4.13)

Since S̄(t) → 1/α* as t → ∞, I_0(L_ε(I_0)) → 0 as ε → 0, and by the boundedness of ∫_{min(α*,1/S̄(t))}^{α*} e^{γ(yS̄(t)−1)t} A(dy) shown in Step 2, we have indeed ∫_{γ(x)≤γ, α(x)≥1/S̄(t)} I(t, dx) → 0 as t → +∞, and this completes the proof of Lemma 4.15.

Proof of Proposition 2.7. The concentration of the distribution to M^+({α(x) = α*} ∩ {γ(x) = γ*}) was shown in Lemma 4.15.

lim_{t→+∞} ∫_X I(t, dx) = (θ/(α* γ*)) (R_0(I_0) − 1).

To prove the last statement, set f(t) := ∫_{min(α*,1/S̄(t))}^{α*} ∫_γ^{γ*} e^{z(yS̄(t)−1)t} I_0^α(y, dz) A(dy), where γ < γ*. It follows from (4.13) that ∫_{γ(x)≤γ, α(x)≥1/S̄(t)} I(t, dx) → … ∫ e^{z(yS̄(t)−1)t} I_0^{α,γ}(y, z, dx) I_0^α(z, dy) A(dy) = … and ∫_{γ≤γ(x)≤γ*, α(x)≥1/S̄(t)} I(t, dx) satisfies 0 < lim inf_{t→+∞} ∫ I(t, dx) = lim inf_{t→+∞} ∫_{γ(x)≤γ, α(x)≥1/S̄(t)} …

Next define m > 0 by … > 0.

∫_{|x−x_i|≤ε} I(t, dx) ≤ C t^{−(N+κ_i)/2} e^{tη(t)γ(x_i)}, ∀t ≫ 1. (5.4)

Since I(t, dx) concentrates on {α(x) = α*}, then for all ε ∈ (0, ε_1) one has ∫_{R^N} I(t, dx) = Σ_{i=1}^p ∫_{|x−x_i|≤ε} I(t, dx) + o(1) as t → ∞. Using the persistence of I stated in Theorem 2.2 (see Lemma 4.1), we end up with 0 < lim inf_{t→∞} ∫_{R^N} I(t, dx) ≤ lim inf_{t→∞} Σ_{i=1}^p ∫_{|x−x_i|≤ε} I(t, dx), so that (5.4) ensures that there exist c > 0 and T > 0 such that

0 < c ≤ Σ_{i=1}^p e^{γ(x_i)(tη(t) − ((N+κ_i)/(2γ(x_i))) ln t)}, ∀t ≥ T. (5.5)

Let n ≥ 1 and k ≥ 1. If z ∈ C then there is a sequence z_{φ(k)} such that z = lim z_{φ(k)}, and (by assumption) we have P_K(x) ∩ B(z_{φ(k)}, R + 1/k) ≠ ∅. Therefore d(z, P_K(x)) = lim_{k→+∞} d(z_{φ(k)}, P_K(x)) ≤ lim_{k→+∞} …

Theorem B.4 (Existence of a regular metric projection). Let K ⊂ M be compact. There exists a Borel measurable map P_K : M → K such that d(x, P_K(x)) = d(x, K).

Proof. The proof is immediate by combining Proposition B.3 and Proposition B.2.

Proposition B.5 (Metric projection on measure spaces). Let K ∈ K(M) be a given compact set. Let μ ∈ M^+(M) be a given nonnegative Borel measure on M. Then the Kantorovitch–Rubinstein distance between μ and M^+(K) can be bounded by the distance between K and the furthest point in supp μ: d_0(μ, M^+(K)) ≤ ‖μ‖_{TV} sup_{x ∈ supp μ} d(x, K).

Figure 1: Illustration of Theorem 2.2 in the case i), i.e., when I_0({α(x) = α*}) > 0. Parameters of this simulation are: Λ = 2, θ = 1, α(x) = 0.5 + T_{[−0.4,−0.2]}(x_1) + 1_{[0.2,0.8]}(x_1) 1_{[−0.6,0.6]}(x_2) where x = (x_1, x_2) ∈ R^2 and T_{[−0.4,−0.2]} is the triangular function of height one and support [−0.4, −0.2], and γ = 1/(2α). Initial condition is given by I_0(dx) = I_0(x_1, x_2) dx where I_0(x_1, x_2) = 1_{[−0.5,0.5]}(x_1) cos(πx_1) 1_{[−0.5,0.5]}(x_2) cos(πx_2). In particular, we have α* = 3/2 and {α(x) = α*} = ({−0.3} ∪ [0.2, 0.5]) × [−0.5, 0.5]. Function t → S(t) converges towards 1/α* = 2/3 and function x → I(t, x) at time t = 50 is asymptotically concentrated on {α(x) = α*} = ({−0.3} ∪ [0.2, 0.5]) × [−0.5, 0.5].

Figure 2: Illustration of Theorem 2.2 in the case i), i.e., when I_0({α(x) = α*}) > 0. Function x → I(t, x) at times t = 10, 20, 30 and 40. The function I remains bounded in this case.

Figure 3: Illustration of Theorem 2.2 in the case ii), i.e., when ∫_{α(x)=α*} I_0(dx) = 0. Parameters of this simulation are: Λ = 2, θ = 1, α(x) = 0.5 + T_{[−0.1,0.8]}(x_1) 1_{[−0.5,0.5]}(x_2) where x = (x_1, x_2) ∈ R^2 and T_{[−0.1,0.5]} is the triangular function of height one and support [−0.1, 0.5], γ = 1/(2α). Initial condition is given by I_0(dx) = I_0(x_1, x_2) dx where I_0(x) = 1_{[−0.5,0.5]}(x_1) cos(πx_1) 1_{[−0.5,0.5]}(x_2) cos(πx_2). In particular, α* = 3/2 and {α(x) = α*} = {0.35} × [−0.5, 0.5]. The function t → S(t) converges towards 1/α* = 2/3. The function x → I(t, x) at time t = 100 is asymptotically concentrated on {α(x) = α*} = {0.35} × [−0.5, 0.5].

Figure 4: Illustration of Theorem 2.2 in the case ii), i.e., when I_0({α(x) = α*}) = 0. Function x → I(t, x) at times t = 20, 40, 60 and 80. The function I asymptotically converges towards a singular measure.

Figure 5: Illustration of a transient behavior for (2.1). Parameters of this simulation are: Λ = 2, θ = 1, α(x) is given by (3.1), I_0(x) by (3.2) and γ(x) by (3.3). In particular, α* = 1, {α(x) = α*} = {x_2} with x_1 = −0.5, x_2 = 0.5. Other parameters are κ_1 = 2, κ_2 = 2, γ(x_1) = 1/2, γ(x_2) = 1/4. The value of the local maximum at x_1, α(x_1) = 0.95, being very close to α*, observe that the distribution I(t, x) first concentrates around x_1 before the global maximum x_2 becomes dominant (bottom right plot).

Figure 6: Illustration of Theorem 2.10 and Corollary 2.11. Parameters of this simulation are: Λ = 2, θ = 1, α(x) is given by (3.4), I_0(x) by (3.5) and γ(x) by (3.6). In particular, α* = 1, {α(x) = α*} = {x_1, x_2} with x_1 = −0.5, x_2 = 0.5, κ_1 = 8, κ_2 = 2, γ(x_1) = 1/2, γ(x_2) = 1/4, ρ = 6 and J = {2}. The initial condition I_0 vanishes more rapidly around x_1 than x_2 so that the solution I(t, x) vanishes around x_1 as t goes to ∞, even though γ(x_1) > γ(x_2), while around x_2 it takes the shape given by expression (2.11).

Figure 7: Numerical solution of the example provided in Section 2.3 of the main text with L = 1. The x, y axes correspond to the underlying physical space and we plot I(t, τ) on the z axis at every x = (1 − e^{−τ}) cos(τ) and y = (1 − e^{−τ}) sin(τ). The observation times are spaced exponentially from one another (t = 0, 10, 10^2, 10^3, 10^4, 10^5) to observe constant shifts of the fixed asymptotic profile. These plots are snapshots of the supplementary movie spiraling.avi.

wherein we have set … := min_{i=1,…,p} (N + κ_i)/(2γ(x_i)). (2.12)

By computing the integral of I(t, τ), we will now obtain a precise description of the behavior of S(t) as t → +∞. Indeed,

∫_0^{+∞} I(t, τ) dτ = (e^{tS(t)−t}/(tS(t))) ∫_0^{+∞} tS(t) e^{−τ} e^{−tS(t)e^{−τ}} dτ = (e^{tS(t)−t}/(tS(t))) [e^{−tS(t)e^{−τ}}]_0^{+∞} = (e^{tS(t)−t}/(tS(t))) (1 − e^{−tS(t)}) = e^{tS(t)−t}/(tS(t)) − e^{−t}/(tS(t)) = e^{tS(t)−t}/(tS(t)) + o(1),

therefore, recalling S(t) = 1 + o(1), we have e^{tS(t)−t} = t I_∞ + o(t), and finally

Lemma 4.2. Let Assumption 2.1 hold. Let S(t), I(t, dx) be the corresponding solution of (2.1). Then lim sup_{T→+∞} (1/T) ∫_0^T S(t) dt ≤ 1/α*.

Proof. Let us remark that the second component of (2.1) can be written as

I(t, dx) = I_0(dx) e^{γ(x)(α(x) ∫_0^t S(s) ds − t)} = I_0(dx) exp( γ(x) ∫_0^t S(s) ds ( α(x) − t/∫_0^t S(s) ds ) ). (4.1)

…, so that finally by letting t → +∞ we get lim inf_{t→+∞} S(t) ≥ min(θ, γ*) Λ / ((θ + ε) min(θ, γ*) + Λ α* γ*). Since ε > 0 is arbitrary we have shown lim inf_{t→+∞} S(t) ≥ min(θ, γ*) Λ / (θ min(θ, γ*) + Λ α* γ*). The Lemma is proved.

…, where α* is given in (2.3). (4.5) Then one can extract from (t_n) a subsequence (t_{n_k}) such that the shifted orbits t → (S(t + t_{n_k}), I(t + t_{n_k}, dx)) converge weak-* pointwise to a complete orbit (S_∞, I_∞(t, dx)) satisfying the following properties: ∫_X I_∞(t, dx) > 0 for all t ∈ R, (4.6) and ∫_{α(x)<α*} I_∞(t, dx) = 0. (4.7) Finally the convergence S(t + t_{n_k}) → S_∞(t) holds locally uniformly in C^1(R).

Proof. First of all let us remark that I(t, dx) = e^{(∫_0^t S(s) ds α(x) − t)γ(x)} I_0(dx), and therefore the orbit t → I(t, dx) is continuous for the metric d_0. By Lemma 4.2 we have lim sup_{T→+∞} (1/T) ∫_0^T S(s) ds ≤ 1/α*, where α* is defined in (2.3). We prove that the family {I(t, dx), t ≥ 0} is uniformly tight. Let ε > 0 be sufficiently small, so that the set L_ε(I_0) is compact. By Lemma 4.3 and Lemma 4.2 we have lim sup_{t→+∞} ∫_{X∖L_ε(I_0)} … The set {I(t, dx) : t ≥ 0} is uniformly tight. Moreover it is bounded in the total variation norm (see Lemma 4.1) in the complete separable metric space X, therefore precompact for the weak topology by Prokhorov's Theorem [5, Theorem 8.6.2, Vol. II p. 202].

lim_{n→+∞} ∫_X … I(t + t_n, dx) > 0, which shows (4.6). Next it follows from Lemma 4.2 that lim sup_{n→+∞} (1/t_n) ∫_0^{t_n} S(s) ds ≤ 1/α* and thus by Lemma 4.3 that ∫ 1_{α(x)=α*} I_0(dx) = … I_∞(t, dx). We know that supp I_∞(t, dx) ⊂ {α(x) = α*} and by Lemma 4.4 … By Lemma 4.5, the shifted orbits S(t + t_n), I(t + t_n, dx) converge to a complete orbit S_∞(t), I_∞(t, dx) with I(t_n, dx) → …, and we have ∫_X I_∞(t, dx) ≥ lim inf_{s→+∞} ∫_X … There exists η* ∈ [−η, η] such that lim_{n→+∞} t_n η(t_n) = η*. … e^{γ(x)η*} … (yS̄(t)−1)t A(dy) + o(1), wherein m > 0 is the constant associated with γ in Assumption 2.5 and we used the Landau notation o(1) to collect terms that converge to 0 as t → +∞. Recalling the upper bound for I(t, dx) from Lemma 4.1, we have

lim sup_{t→+∞} ∫_{min(α*,1/S̄(t))}^{α*} e^{γ(yS̄(t)−1)t} A(dy) ≤ lim sup_{t→+∞} (1/m) ∫_X I(t, dx) ≤ Λ / (m min(θ, γ_0)) < +∞.

This implies that lim sup_{t→+∞} … α(supp(I_0)) [START_REF] Day | Applying population-genetic models in theoretical evolutionary epidemiology[END_REF]. The upper bound for η(t) provided in (5.2) implies e^{γ(x_i)(tη(t) − ((N+κ_i)/(2γ(x_i))) ln t)} → 0 as t → ∞ for i ∉ J, and (5.5) rewrites as

0 < c_2 ≤ Σ_{i∈J} e^{γ(x_i)(tη(t) − ln t)}, ∀t ≫ 1.

This yields lim inf_{t→∞} (tη(t) − ln t) > −∞, that is η(t) ≥ (ln t)/t + O(1/t) as t → ∞. (5.6) Then (2.11) follows coupling (5.2) and (5.6).

J.-B. Burie and A. Ducrot are supported by the ANR project ArchiV ANR-18-CE32-0004. Q. Griette was partially supported by ANR grant "Indyana" number ANR-21-CE40-0008. A.D. and Q.G. acknowledge the support of the Math AmSud program for project 22-MATH-09.

Appendix A The case of a unique fitness maximum

If the function α(x) has a unique global maximum in the support of the initial data, then our analysis leads to a complete description of the asymptotic state of the population. This may be the unique case in which the behavior of the orbit is completely known, independently of the positivity of the initial mass of the fitness-maximizing set {α(x) = α*}.
Theorem A.1 (The case of a unique global maximum). Let Assumption 2.1 be satisfied. Suppose that the function α = α(x) has a unique maximum α* on the support of I_0 attained at x* ∈ supp I_0, and that R_0(I_0) := (Λ/θ) α* > 1. Then it holds that S(t) → 1/α* as t → +∞ and d_0(I(t, dx), …) → 0, where δ_{x*}(dx) denotes the Dirac measure at x*.

B Existence of a regular metric projection

In this Section we let (M, d) be a complete metric space. Let K(M) be the set of compact subsets in M and let K ∈ K(M). We first recall that we can define a kind of frame of reference, internal to K, which allows to identify each point in K. Let us denote K(M) the set formed by all compact subsets of M. Recall that (K(M), d_H) is a complete metric space, where d_H is the Hausdorff distance.

Proposition B.1 (Metric coordinates). There exists a finite number of points x_1, …, x_n ∈ K with the property that each y ∈ K can be identified uniquely by the distance between y and x_1, …, x_n. In other words the map c_K : y ↦ (d(y, x_1), …, d(y, x_n)) is one-to-one. Moreover c_K is continuous and its reciprocal function c_K^{−1} : c_K(K) → K is also continuous.

Proof. Let us choose x_1 ∈ K and x_2 ∈ K such that x_1 ≠ x_2. We recursively construct a sequence x_n and a compact set K_n such that …, the choice of x_{n+1} being arbitrary. Clearly K_n is a compact set and K_{n+1} ⊂ K_n. Suppose by contradiction that K_n ≠ ∅ for all n ∈ N; then (because K is compact) one can construct a sequence x_{φ(n)}, extracted from x_n, which converges to a point … In particular K_∞ is not empty. However we see that, by definition of K_∞, we have d(x, x_n) = d(x, x_1) > 0 for all n ∈ N, which contradicts the fact that lim … Hence we have shown by contradiction that there exists n_0 ∈ N such that K_{n_0} = ∅ and K_{n_0−1} ≠ ∅. This is precisely the injectivity of the map c_K : K → R^{n_0}. To show the continuity, we remark that c_K is continuous, and therefore for each closed set F ⊂ K, F is compact so that c_K(F) is compact and therefore closed. Therefore (c_K^{−1})^{−1}(F) = c_K(F) is closed, and c_K^{−1} is continuous. The proposition is proved.

Recall that the Borel σ-algebra B(M) is the closure of the set of all open sets in P(M) = 2^M under the operations of complement and countable union. A function φ : M → N is Borel measurable if the reciprocal image of any Borel set is Borel, i.e. φ^{−1}(B) ∈ B(M) for all B ∈ B(N).

Proposition B.2 (Borel function of choice). There exists a Borel measurable map c : …, where the minimum is taken with respect to the lexicographical order in R^{n_0} (which is a total order and therefore identifies a unique minimum for each K ∈ K(K)). Since the map K ⊂ R^{n_0} ↦ min_{y∈K} y is Borel for the topology on K(R^{n_0}) induced by the Hausdorff metric, so is c. The proposition is proved.

Proposition B.3 (Borel measurability of the metric projection). Let K ⊂ M be compact. The map P_K : M → K(K) defined by … is Borel measurable.

Proof. First we remark that the map is well-defined for each x ∈ M, and therefore forms a mapping from M into K(K) ⊂ K(M). Indeed P_K(x) is clearly closed in the compact space K, therefore is compact. To show the Borel measurability of P_K, we first remark that, given a compact space K′ ⊂ K, the set … is closed. Indeed let x_n → x be a sequence in P_K^{−1}(K′); then by definition there exists y_n ∈ K′ such that d(x_n, y_n) = d(x_n, K). By the compactness of K′, there exist y ∈ K′ and a subsequence y_{φ(n)} extracted from y_n such that y_{φ(n)} → y. Because of the continuity of z ↦ d(z, K), we have … therefore y ∈ P_K(x) ∩ K′, which shows that x ∈ P_K^{−1}(K′). Hence P_K^{−1}(K′) is closed.

We are now in a position to show the Borel regularity of P_K. Let C ∈ K(K) and R > 0 be given. We define B_H(C, R), the ball of center C and radius R in the Hausdorff metric: …, where B_1 := {x ∈ M : d(y, C) ≤ R for all y ∈ P_K(x)}, and … It can be readily seen that B_1 is a Borel set by writing …, where V_R(C) := {y ∈ K : d(y, C) ≤ R}. To see that B_2 is a Borel set, we choose a sequence z_n which is dense in C and write … Indeed if x ∈ B_2 then P_K(x) intersects every ball of radius R and center z ∈ C; in particular P_K(x) intersects every ball of radius R + 1/k and center z_n. Conversely suppose that P_K(x) intersects every ball B(z_n, R + 1/k) for …

Definition C.1 (Scalarly essentially integrable family). Let Λ : t ↦ λ_t be a mapping from T into M^+(X). Λ is scalarly essentially integrable for the measure μ if for every compactly supported continuous function f ∈ C_c(X), the function t ↦ ∫_X f(x) λ_t(dx) is in L^1(μ). Setting ν(f) = ∫_T ∫_X f(x) λ_t(dx) μ(dt) defines a linear form on C_c(X), hence a measure ν, which is the integral of the family Λ, and we denote …

Recall that every positive Borel measure μ on a locally compact space X defines a positive bounded linear functional on C_c(X) equipped with the inductive limit of the topologies on C_c(K) when K runs over the compact subsets of X. Conversely if μ is a positive bounded linear functional on C_c(X), there are two canonical ways to define a measure on the Borel σ-algebra.

1. Outer-regular construction. Let U ⊂ X be open; then one can define …

It is always true that μ• ≤ μ*; however it may happen that μ* ≠ μ• when μ* is not finite, see e.g. …

Definition C.2 (Pre-adequate and adequate families). We follow [7, Definition 1, V.17 §3]. Let Λ : t ↦ λ_t be a scalarly essentially μ-integrable mapping from T into M^+(X), ν the integral of Λ. We say that Λ is μ-pre-adequate if, for every lower semi-continuous function f ≥ 0 defined on X, the function t ↦ … We say that Λ is μ-adequate if Λ is μ′-pre-adequate for every positive Borel measure μ′ ≤ μ.

The last notion we need to define is the one of μ-proper function.

Definition C.3 (μ-proper function). We say that a function p : T → X is μ-proper if it is μ-measurable and, for every compact set K ⊂ X, the set p^{−1}(K) … If μ is Radon, in particular, then every μ-measurable mapping p : T → X (X being equipped with the Borel σ-algebra) is μ-proper.

The following Theorem is taken from [7, Theorem 1, VI.41 No.1, §3].

Theorem C.4 (Disintegration of measures). Let T and X be two locally compact spaces having countable bases, μ be a positive measure on T, p be a μ-proper mapping of T into X, and ν = p(μ) the image of μ under p. There exists a ν-adequate family x ↦ λ_x (x ∈ X) of positive measures on T, having the following properties: a) ‖λ_x‖ = 1 for all x ∈ p(T); b) λ_x is concentrated on the set p^{−1}({x}) for all x ∈ p(T), and λ_x = 0 for x ∉ p(T); c) μ = ∫ λ_x ν(dx). Moreover, if x ↦ λ′_x (x ∈ X) is a second ν-adequate family of positive measures on T having the properties b) and c), then λ_x = λ′_x almost everywhere with respect to the measure ν.
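As a reading aid for the concentration arguments above, the explicit formula for the infected component and the limit it forces can be restated cleanly in LaTeX; this is a reconstruction from the extracted fragments above (notably (4.1)), not a verbatim quote of the paper.

% Explicit form of the infected component, as used repeatedly above.
\[
  I(t,\mathrm{d}x) = I_0(\mathrm{d}x)\,
    e^{\gamma(x)\left(\alpha(x)\int_0^t S(s)\,\mathrm{d}s \;-\; t\right)},
  \qquad
  \bar S(t) := \frac{1}{t}\int_0^t S(s)\,\mathrm{d}s .
\]

Writing the exponent as $\gamma(x)\,t\,(\alpha(x)\bar S(t)-1)$ makes the mechanism visible: since $\bar S(t)\to 1/\alpha^*$, every trait with $\alpha(x)<\alpha^*$ eventually decays exponentially, while the maximizing set does not, which is the content of the convergence $d_0(I(t,\mathrm{d}x),\,M^+(\{\alpha(x)=\alpha^*\}))\to 0$ stated above.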
04115077
en
[ "info.info-lo" ]
2024/03/04 16:41:24
1994
https://inria.hal.science/hal-04115077/file/simple.pdf
Gilles Dowek email: [email protected] Third Order Matching is Decidable Introduction The higher order matching problem is the problem of determining whether a term is an instance of another in the simply typed λ-calculus i.e. to solve the equation a = b where a and b are simply typed λ-terms and b is ground. Pattern matching algorithms are used to check if a proposition can be deduced from another by elimination of universal quantifiers or by introduction of existential quantifiers. In automated theorem proving, elimination of universal quantifiers and introduction of existential quantifiers are mixed and full unification is required, but in proof-checking and semi-automated theorem proving, these rules can be applied separately and thus pattern matching can be used instead of unification. Higher order matching is conjectured decidable in [START_REF] Huet | Résolution d' Équations dans les Langages d[END_REF] and the problem is still open. In [START_REF] Huet | A Unification Algorithm for Typed λ-calculus[END_REF] [6] [START_REF] Huet | Proving and Applying Program Transformations Expressed with Second Order Patterns[END_REF] Huet has given a semi-decision algorithm and shown that in the particular case in which the variables occurring in the term a are at most second order this algorithm terminates, and thus that second order matching is decidable. In [START_REF] Statman | Completeness, Invariance and λ-definability[END_REF] Statman has reduced the conjecture to the λ-definability conjecture and in [START_REF] Wolfram | The Clausal Theory of Types[END_REF] Wolfram has given an always terminating algorithm whose completeness is conjectured. We prove in this paper that third order matching is decidable, i.e. we give an algorithm that decides if a matching problem, in which all the variables are at most third order, has a solution. The main idea is that if the problem a = b has a solution then it also has a solution whose depth is bounded by some integer s depending only on the problem a = b, so a simple enumeration of the substitutions whose depth is bounded by s gives a decision algorithm. This result can also be used to bound the depth of the search tree in Huet's semi-decision algorithm and thus turn it into an always-terminating decision algorithm. It can also be used to design an algorithm which enumerates a complete set of solutions to a third order matching problem and either terminates if the problem has a finite complete set of solutions or keeps enumerating solutions forever if the problem admits no such set. Finally we discuss the problems that occur when we try to generalize the proof given here to higher order matching. 1 Trees and Terms Trees Definitions 1 (Following [START_REF] Gorn | Explicit Definitions and Linguistic Dominoes[END_REF]) An occurrence is a list of strictly positive integers α =< s 1 , ..., s n >. The number n is called the length of the occurrence α. A tree domain D is a non-empty finite set of occurrences such that if α < n >∈ D then α ∈ D and if also n ≠ 1 then α < n -1 >∈ D. A tree is a function from a tree domain D to a set L, called the set of labels of the tree. If T is a tree and D its domain, the occurrence < > is called the root of T and the occurrence α < n > is called the n th son of the occurrence α. The number of sons of an occurrence α is the greatest integer n such that α < n >∈ D.
A leaf is an occurrence that has no sons. Let T be a tree and let α =< s 1 , ..., s n > be an occurrence in this tree, the path of α is the set of occurrences {< s 1 , ..., s p > | p ≤ n}. The number of elements of this path is the length of α plus one. The depth of the tree T is the length of the longest occurrence in D. This occurrence is, of course, a leaf. If T is a tree of domain D and α is an occurrence of D, the subtree T /α is the tree T ′ whose domain is D ′ = {β | αβ ∈ D} and such that T ′ (β) = T (αβ) By an abuse of language, if α < n > is an occurrence of a tree T , the subtree T /α < n > is also called the n th son of the occurrence α. If a is a label and T 1 , ..., T n are trees (of domains D 1 , ..., D n ) then the tree of root a and sons T 1 , ..., T n is the tree T of domain D = {< >} ∪ i {< i > α | α ∈ D i } such that T (< >) = a and T (< i > α) = T i (α) If T is a tree of domain D, α an occurrence of D and T ′ a tree of domain D ′ then the graft of T ′ in T at the occurrence α (T [α ← T ′ ]) is the tree T ′′ of domain D ′′ = D-{αβ | αβ ∈ D} ∪ {αβ | β ∈ D ′ } and such that T ′′ (γ) = T ′ (β) if γ = αβ and T ′′ (γ) = T (γ) otherwise Let T and T ′ be trees and let a be a label such that all the occurrences of a in T are leaves α 1 , ..., α n then the substitution of T ′ for a in T (T [a ← T ′ ]) is defined as T [α 1 ← T ′ ]...[α n ← T ′ ]. Note that since α 1 , ..., α n are leaves, the order in which the grafts are performed is insignificant. Types Definition 2 (Type) Let us consider a finite set T . The elements of T are called atomic types. A type is a tree whose labels are either the elements of T or → and such that the occurrences labeled with an element of T are leaves and the ones labeled with → have two sons. Let T be a type, if the root of T is labeled with an atomic type U then T is written U , if the root of T is labeled with → and its sons are written T 1 and T 2 then T is written (T 1 → T 2 ). By convention T 1 → T 2 → T 3 is an abbreviation for (T 1 → (T 2 → T 3 )). Definition 3 (Order of a Type) If T is a type, the order of T is defined by • o(T ) = 1 if T is atomic, • o(T 1 → T 2 ) = max{1 + o(T 1 ), o(T 2 )}. Typed λ-terms Definitions 4 For each type T we consider three sets C T , I T , L T . The elements of C T are called constants of type T , those of I T instantiable variables of type T and those of L T local variables of type T . We assume that we have in each atomic type at least a constant and that there is a finite number of constants i.e. that the set T C T is finite1 . We assume also that we have an infinite number of instantiable and local variables of each type. A typed λ-term is a tree whose labels are either App, or < Lam, x > where x is a local variable, or < V ar, x > where x is a constant, an instantiable variable or a local variable such that the occurrences labeled with App have two sons, the occurrences labeled with < Lam, x > have one son and the occurrences labeled with < V ar, x > are leaves. Let t be a term, if the root of t is labeled with < V ar, x > we write it x, if the root of t is labeled with < Lam, x > and its son is written u then we write it λx : T.u where T is the type of x, if the root of t is labeled with App and its sons are written u and v then we write it (u v). By convention (u v w) is an abbreviation for ((u v) w). In a term t, an occurrence α labeled with < V ar, x > is bound if there exists an occurrence β in the path of α labeled with < Lam, x >, it is free otherwise. 
A term is ground if no occurrence is labeled with a pair < V ar, x > with x instantiable. Let t and t ′ be terms and x be a variable, the substitution of t ′ for x in t (t[x ← t ′ ]) is defined as t[< V ar, x >← t ′ ]. Definition 5 (Type of a Term) A term t is said to have the type T if either: • t is a constant, an instantiable variable or a local variable of type T . • t = (u v) and u has type U → T and v type U for some type U , • t = λx : U.u, the term u has type V and T = U → V . A term t is said to be well-typed if there exists a type T such that t has type T . In this case T is unique and is called the type of t. Definition 6 (βη-reduction) The βη-reduction relation, written £, is defined as the smallest transitive relation compatible with term structure such that (λx : T.t u) £ t[x ← u] λx : T.(t x) £ t if x is not free in t We adopt the usual convention of considering terms up to α-conversion (i.e. bound variable renaming) and we consider that bound variables are renamed to avoid capture during substitutions. A rigorous presentation would use, for instance, de Bruijn indices [START_REF] De Bruijn | Lambda Calculus Notation with Nameless Dummies, a Tool for Automatic Formula Manipulation, with Application to the Church-Rosser Theorem[END_REF]. Obviously, if t is a term of type T , x is a variable of type U and u a term of type U then the term t[x ← u] has type T . In the same way if a term t has type T and t reduces to u then u has type T . Proposition 1 The βη-reduction relation is strongly normalizable and confluent on typed terms, and thus each term has a unique normal form. Proof See, for instance, [START_REF] Hindley | Introduction to Combinators and λ-Calculus[END_REF]. Proposition 2 Let t be a normal well-typed term of type U 1 → ... → U n → U (U atomic), the term t has the form t = λy 1 : U 1 . ... λy m : U m .(x u 1 ... u p ) where m ≤ n and x is a constant, an instantiable variable or a local variable. Proof The term t can be written in a unique way t = λy 1 : V 1 . ... λy m : V m .u where u is not an abstraction. The term u can be written in a unique way u = (v u 1 ... u p ) where v is not an application. The term v is not an application by definition, it is not an abstraction (if p = 0 because u is not an abstraction and if p ̸ = 0 because t is normal), it is therefore a constant, an instantiable variable or a local variable. Then since t has type U 1 → ... → U n → U , we have m ≤ n and for all i, V i = U i . Definition 7 (Head of a Term, Atomic Term) Let t = λy 1 : T 1 . ... λy m : T m .(x u 1 ... u p ) be a normal term. The symbol x is called the head of the term. If m = 0 then t is said to be atomic, it is an abstraction otherwise. Definition 8 (η-long Form) If t = λy 1 : U 1 . ... λy m : U m .(x u 1 ... u p ) is a term of type T = U 1 → ... → U n → U (U atomic) (m ≤ n) which is in βη-normal form then we define its β-normal η-long form as the term t ′ = λy 1 : U 1 . ... λy m : U m .λy m+1 : U m+1 . ... λy n : U n .(x u ′ 1 ... u ′ p y ′ m+1 ... y ′ n ) where u ′ i is the β-normal η-long form of u i and y ′ i is the β-normal η-long form of y i . This definition is by induction on the pair < c 1 , c 2 > where c 1 is the number of occurrences in t and c 2 the number of occurrences in T In the following all the terms are assumed to be in β-normal η-long form. 
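The type and term syntax of Definitions 2-5 is easy to render as code. The following Haskell sketch is ours, not the paper's: constructor names are invented, and the depth function only matches the Böhm-tree depth |t| for terms that are already in β-normal η-long form.

data Ty = Atomic String | Arrow Ty Ty deriving (Eq, Show)

data Term = Var String | Lam String Ty Term | App Term Term deriving (Eq, Show)

-- Definition 3: o(T) = 1 if T is atomic, o(T1 -> T2) = max (1 + o(T1)) (o(T2)).
order :: Ty -> Int
order (Atomic _)    = 1
order (Arrow t1 t2) = max (1 + order t1) (order t2)

-- Depth of the Boehm tree of a normal eta-long term: the leading lambdas
-- and the head of an atomic subterm form one node whose sons are the
-- arguments, so lambdas add no depth and each argument adds one level.
depth :: Term -> Int
depth (Var _)     = 0
depth (Lam _ _ u) = depth u
depth (App u v)   = max (depth u) (1 + depth v)

For instance, order (Arrow (Arrow (Atomic "T") (Atomic "T")) (Atomic "T")) evaluates to 3, matching the fact that (T → T) → T is a third order type.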
Böhm Trees Definition 9 (Böhm Tree) A (finite) Böhm tree is a tree whose occurrences are labeled with pairs < l, x > such that l is a list of local variables < y 1 , ..., y n > and x is a constant, an instantiable variable or a local variable. Definition 10 (Type of a Böhm Tree) Let t be a Böhm tree whose root is labeled with the pair << y 1 , ..., y n >, x > and whose sons are u 1 , ..., u p . The Böhm tree t is said to have the type T if the Böhm trees u 1 , ..., u p have type U 1 , ..., U p the symbol x has type U 1 → ... → U p → U (U atomic) and T = T 1 → ... → T n → U where T 1 , ..., T n are the types of the variables y 1 , ..., y n . A Böhm tree t is said to be well-typed if there exists a type T such that t has type T . In this case T is unique and is called the type of t. Definition 11 (Böhm Tree of a Normal Term) Let t = λy 1 : T 1 . ... λy n : T n .(x u 1 ... u p ) be a λ-term in normal (η-long) form. The Böhm tree of t is inductively defined as the tree whose root is the pair < l, x > where l =< y 1 , ..., y n > is the list of the variables bound at the top of this term, x is the head symbol of t and sons are the Böhm trees of u 1 , ..., u p . Remark Normal (η-long) well-typed terms and well-typed Böhm trees are in one-to-one correspondence. Moreover if t is a normal (η-long) term and t is its Böhm tree then occurrences in t labeled with a constant, an instantiable variable or a local variable and occurrences in t are in one-to-one correspondence. So we will use the following abuse of notation: if α is an occurrence in the Böhm tree of t we write (t/α) for the normal (η-long) term corresponding to the Böhm tree ( t/α) and t[α ← u] for the term t[α ′ ← u] where α ′ is the occurrence of a variable or a constant in t corresponding to α. Notation Let t be a term, we write |t| for the depth of the Böhm tree of the normal (η-long) form of t. Proposition 3 In each type T there is a ground term t such that |t| = 0. Proof Let T = U 1 → ... → U n → U with U atomic and let c be a constant of type U . The term t = λx 1 : U 1 . ... λx n : U n .c has type T and |t| = 0. Substitution Definition 12 (Substitution) A substitution is a finite set of pairs < x i , t i > where x i is an instantiable variable and t i a term of the same type in which no local variable occurs free such that if < x, t > and < x, t ′ > are both in this set then t = t ′ . The variables x i are said to be bound by the substitution. Definition 13 (Substitution applied to a Term) If σ is a substitution and t a term then we let σt = t[α 1 1 ← t 1 ]...[α p 1 1 ← t 1 ]...[α 1 n ← t n ]...[α pn n ← t n ] where α 1 i , ..., α p i i are the occurrences of x i in t. Note that since the α j i are leaves, the order in which the grafts are performed is insignificant. Definition 14 (Composition of Substitutions) Let σ and τ be two substitutions the substitution τ • σ is defined by τ • σ = {< x, τ t > | < x, t >∈ σ} ∪ {< x, t > | < x, t >∈ τ and x not bound by σ} Proposition 4 Let σ and τ be two substitutions and t is a term, we have (τ • σ)t = τ (σt) Proof By decreasing induction on the depth of an occurrence α in t we prove that we have (τ • σ)(t/α) = τ (σ(t/α)) 2 Definition 16 (Third Order Matching Problem) A third order matching problem is a matching problem Φ = {a 1 = b 1 , ..., a n = b n } such that the types of the instantiable variables that occur in a 1 , ..., a n are of order at most three. Definition 17 (Solution) Let Φ = {a 1 = b 1 , ..., a n = b n } be a matching problem. 
A substitution σ is a solution of this problem if and only if for every i, the normal form of the terms σa i and b i are identical up to α-conversion. Remark Usual unification terminology distinguishes variables (here instantiable variables) and constants. The need for local variables comes from the fact that we want to transform the problem λy : T.x = λy : T.y (where x is an instantiable variable of type T ) into the problem x = y by dropping the common abstraction. The symbol y cannot be an instantiable variable (because it cannot be instantiated by a substitution), it cannot be a constant because, if it were, we would have the solution x ← y to the second problem which is not a solution to the first. So we let y be a local variable and the solution x ← y is now forbidden in both problems because no local variable can occur free in the terms substituted for variables in a substitution. In Huet's unification algorithm [START_REF] Huet | A Unification Algorithm for Typed λ-calculus[END_REF] [6] these local variables are always kept in the head of the terms in common abstractions. In Miller's mixed prefixes terminology [START_REF] Miller | Unification Under a Mixed Prefix[END_REF], constants are universal variables declared to the left hand side of the instantiable variables and local variables are universal variables declared to the right hand side of all the instantiable variables. Remark In an alternative definition of matching problems, the terms b 1 , ..., b n do not need to be ground. The method of this paper can be adapted to such problems using the standard technique of variable freezing [START_REF] Huet | Résolution d' Équations dans les Langages d[END_REF]. Definition 18 (Ground Solution) Let Φ = {a 1 = b 1 , ..., a n = b n } be a problem and let σ be a solution to Φ. The solution σ is said to be ground if for each instantiable variable that has an occurrence in some a i , the term σx is ground. Proposition 5 If a matching problem has a solution then it has a ground solution. Proof Let Φ = {a 1 = b 1 , ..., a n = b n } be a matching problem and let σ be a solution to this problem. Let y 1 : T 1 , ..., y n : T n be the instantiable variables occurring in the term σx for some x instantiable variable occurring in some a i . Let u 1 , ..., u n be ground terms of the types T 1 , ..., T n . Let τ = {< y 1 , u 1 >, ..., < y n , u n >}, and σ ′ = τ • σ. Obviously, for each instantiable variable x of a, the term σ ′ x is ground and σ ′ is a solution to Φ. Definition 19 (Complete Set of Solutions) Obviously if σ is a solution to a problem Φ then for any substitution τ , τ • σ is also a solution to Φ. A set S of solutions to a problem Φ is said to be complete if for every substitution θ that is a solution to this problem there exists a substitution σ ∈ S and a substitution τ such that θ = τ • σ. So in contrast with second order matching [6] [START_REF] Huet | Proving and Applying Program Transformations Expressed with Second Order Patterns[END_REF] there is no (always terminating) algorithm that enumerates a complete set of solutions to a third order matching problem. We consider now algorithms that take as an input a matching problem and either give one solution to the problem or fail if it does not have any. A Bound on the Depth of Solutions All the problems considered in the rest of the paper are third order. 
To prove the decidability of third order matching we are going to prove that the depth of the term t substituted for a variable x by a solution σ to a problem Φ can be bounded by an integer s depending only on the problem Φ. Of course the previous example shows that a matching problem may have solutions of arbitrary depth, but to design a decision algorithm we do not need to prove that all the solutions are bounded by s but only that at least one is. To show this result we take a problem Φ that has a solution σ (by proposition 5, we can consider without loss of generality that this solution is ground) and we build another solution σ ′ whose depth is bounded by an integer s depending only on the problem Φ. The proof is divided into two parts. In the first part, we focus on a particular case in which the problem Φ is an interpolation problem, i.e. a set of equations of the form (x c 1 ... c n ) = b such that x is an instantiable variable and c 1 , ..., c n and b are ground terms. Then, in the second part, we reduce the general case to this particular case. Consider now an equation (x c 1 ... c n ) = b and a substitution σ solution to this equation. Let us write t = σx = λy 1 : T 1 . ... λy n : T n .u (u atomic). We have σ(x c 1 ... c n ) = (λy 1 : T 1 . ... λy n : T n .u c 1 ... c n ). This term reduces to u[y 1 ← c 1 , ..., y n ← c n ] whose normal form is b. The terms c i are at most second order. In the key lemma, we prove that, in the general case, when we substitute a second order term c for a variable y in a term u and we normalize the term u[y ← c], we get a term with a depth larger than or equal to the one of u. If this were true in all the cases, we would know that the depth of t (the solution) has to be less than or equal to the depth of b (the right hand side of the equation). A simple enumeration of the terms t whose depth is less than or equal to |b| would give a decision procedure. Actually, the key lemma shows that the depth of the normal form of u[y ← c] can be less than the depth of u in two cases: when c is a non relevant term and when |c| = 0. When such cases happen, solutions may have an arbitrary depth. In these cases, we show that if the problem Φ has a solution σ then it also has another solution σ ′ whose depth is bounded by some integer s depending only on the problem Φ. Interpolation Problems Definition 20 (Interpolation Problem) An interpolation problem is a set of equations of the form (x c 1 ... c n ) = b such that x is an instantiable variable and c 1 , ..., c n and b are ground terms. (2) If α is an occurrence in the Böhm tree of u such that no occurrence in the path of α is labeled with y, then α is also an occurrence in the normal form of u[y ← c] and has the same label in the Böhm tree of u and in the Böhm tree of the normal form of u[y ← c]. (3) If α =< s 1 , ..., s n > is an occurrence in the Böhm tree of u such that for each occurrence β =< s 1 , ..., s k > in the path of α, β ≠ α, labeled with y, the term c is relevant in its r th argument where r is the position of the son of β in the path of α, i.e. r = s k+1 , then there exists an occurrence α ′ of the Böhm tree of the normal form of u[y ← c] such that all the labels occurring in the path of α, except y, occur in the path of α ′ and the number of times they occur in the path of α ′ is greater than or equal to the number of times they occur in the path of α. Moreover if the occurrence α is labeled with a symbol different from y, then the occurrence α ′ is labeled with this same symbol.
(4) Moreover if |c| ̸ = 0 then the length of α ′ is greater than or equal to the length of α. Proof By induction on the number of occurrences of y in u. We substitute these occurrences one by one and we normalize the term. Let β be the occurrence in the Böhm tree of u corresponding to the occurrence of y in u we substitute. Let us write c = λz 1 : U 1 . ... λz p : U p .d The term (u/β) has the form λv 1 : V 1 . ... λv q : V q .(y e 1 ... e p ). When we substitute y by the term c in (y e 1 ... e p ) we get (c e 1 ... e p ) and when we normalize this term we get the term d[z 1 ← e 1 , ..., z p ← e p ] which is normal because the type of the e i are first order. Let us consider the occurrences in the Böhm tree of u, while substituting the occurrence of y corresponding to β, we have removed all the occurrences β < i > γ where i is an integer (i ≤ p) and γ is an occurrence in the Böhm tree of e i . We have added all the occurrences βδ where δ is an occurrence of the Böhm tree of c labeled with a symbol different from z 1 , ..., z p and all the occurrences βδγ where δ is a leaf occurrence in the Böhm tree of c labeled with a z i and γ is an occurrence of the Böhm tree of e i . (2) When an occurrence β of y is substituted by c all the occurrences removed have the form β < i > γ. So if no occurrence in the path of α is labeled with y, the occurrence α remains in the normal form of u[y ← c]. (3) If the occurrence β is not in the path of α then the occurrence α is still an occurrence in the normal form of u[y ← c], we take α ′ = α. If β = α then the occurrence β is an occurrence of the Böhm tree of the normal form of u[y ← c]. We take α ′ = β = α. If β is in the path of α and β ̸ = α, β =< s 1 , ..., s k > then let r be the position of the son of β in the path of α i.e. r = s k+1 . Let γ be such that α = β < r > γ. By hypothesis z r has an occurrence in d, let δ be such an occurrence. The occurrence βδγ is an occurrence in the Böhm tree of the normal form of u[y ← c]. We take α ′ = βδγ. In all the cases, all the labels occurring in the path of α, except y, occur in the path of α ′ and the number of times they occur in the path of α ′ is greater than or equal to the number of times they occur in the path of α. If the occurrence α is labeled with a symbol different from y, then the occurrence α ′ is labeled with the same symbol as α. • if one of the terms c i is not relevant in one of its arguments, • if one of the terms c i is such that |c i | = 0. So solutions may have an arbitrary depth. When this happens, we compute another solution to the problem whose depth is bounded by an integer s depending only on the initial problem. This new substitution is constructed in two steps. In the first step we deal with non relevant terms and in the second with terms of depth 0. Let us consider the Böhm tree of t. The set of the occurrences of the Böhm tree of t accessible with respect to the equation (x c 1 ... c n ) = b is inductively defined as: • the root of the Böhm tree of t is accessible, • if α is an accessible occurrence labeled with y i and c i is relevant in its j th argument then the occurrence α < j > (the j th son of α) is accessible, • if α is an accessible occurrence labeled with a symbol different from all the y i then all the sons of α are accessible. Definition 23 (Occurrence Accessible with Respect to an Interpolation Problem) An occurrence is accessible with respect to an interpolation problem if it is accessible with respect to one of the equations of this problem. 
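The inductive definition of accessible occurrences above translates directly into a small traversal. The following Haskell sketch is our own encoding, not the paper's: the Head/Node representation and the relevance oracle are assumptions made for illustration.

data Head = Bound Int | Other String deriving (Eq, Show)
data Boehm = Node Head [Boehm] deriving (Show)

-- accessible relevant t lists the accessible occurrences (as paths of
-- integers) of the Boehm tree of t with respect to one equation
-- (x c1 ... cn) = b: the root is accessible; the j-th son of an
-- occurrence labeled y_i is accessible iff c_i is relevant in its j-th
-- argument; every son of any other symbol is accessible.
accessible :: (Int -> Int -> Bool) -> Boehm -> [[Int]]
accessible relevant = go []
  where
    go occ (Node h sons) =
      occ : concat
        [ go (occ ++ [j]) s
        | (j, s) <- zip [1 ..] sons
        , case h of
            Bound i -> relevant i j
            Other _ -> True ]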
Definition 24 (Term Accessible with Respect to an Interpolation Problem) A term is accessible with respect to an interpolation problem if all the occurrences of its Böhm tree which are not leaves are accessible with respect to this problem. Definition 25 (Accessible Solution Built from a Solution) Let Φ be an interpolation problem and let σ be a solution to this problem. For each instantiable variable x occurring in the equations of Φ we consider the term t = σx. In the Böhm tree of t, we prune all the occurrences non accessible with respect to the equations of Φ in which x has an occurrence and put Böhm trees of ground terms of depth 0 of the expected type as leaves. The tree obtained that way is the Böhm tree of some term t ′ . We let σx = t ′ . We prove by decreasing induction on the depth of the occurrence α of the Böhm tree of u that if α is accessible with respect to the equation (x c 1 ... c n ) = b then α is also an occurrence of the Böhm tree of u ′ and (u ′ /α)[y 1 ← c 1 , ..., y n ← c n ] = (u/α)[y 1 ← c 1 , ..., y n ← c n ] and then since the root of u is accessible with respect to this equation we have u ′ [y 1 ← c 1 , ..., y n ← c n ] = u[y 1 ← c 1 , ..., y n ← c n ] i.e. ((σx) c 1 ... c n ) = b So σ is a solution to Φ. Proposition 7 Let Φ be an interpolation problem and let σ be a solution to Φ. Let h be the maximum depth of the right hand side of the equations of Φ. Let σ the accessible solution built from σ. Let t = σx = λy 1 : T 1 . ... λy n : T n .u (u atomic). There are at most h + 1 occurrences of symbols not in {y 1 , ..., y n } on a path of the Böhm tree of t. Proof Let α be an occurrence in the Böhm tree of t such that there are more than h+1 occurrences of symbols not in {y 1 , ..., y n } in the path of α. Let β be the (h+1)-th occurrence of such a symbol. Since there are more that h+1 occurrences of symbols not in {y 1 , ..., y n } in the path of α, the occurrence β is not a leaf, so it is accessible with respect to some equation (x c 1 ... c n ) = b of Φ. Also, since this occurrence is not a leaf, it is labeled with a symbol f whose type is not first order. For each occurrence γ =< s 1 , ..., s k > in the path of β labeled with y i , let r be the position of the son of this occurrence in this path (i.e. r = s k+1 ). Since the occurrence β is accessible with respect to the equation (x c 1 ... c n ) = b, the term c i is relevant in its r th argument. So using n times the part (3) of the key lemma there exists an occurrence β ′ in the Böhm tree of the normal form of the term b = (σx c 1 ... c n ) such that the path of β ′ contains at least h + 1 occurrences. Thus, the length of this occurrence is at least h. This occurrence is labeled with the symbol f whose type is not first order, so it has a son β ′′ whose length is at least h + 1. So the depth of b is greater than or equal to h + 1 which is contradictory. Definition 26 (Compact Term) A term t = λy 1 : T 1 . ... λy n : T n .u (u atomic) is compact with respect to an interpolation problem Φ if no variable y i has more than h + 1 occurrences in a path of its Böhm tree, where h is the maximum depth of the right hand side of the equations of Φ. Proposition 8 Let Φ be an interpolation problem and let σ be an accessible solution to Φ. Let h be the maximum depth of the right hand side of the equations of Φ. Let us consider an instantiable variable x and t = σx = λy 1 : T 1 . ... λy n : T n .u (u atomic). 
Let us consider a variable y i and an occurrence α of the Böhm tree of t such that there are more than h + 1 occurrences on the path of α labeled with the variable y i . We consider all the equations (x c 1 ... c n ) = b of Φ such that the (h + 2)-th occurrence of y i is accessible with respect to this equation. Then there exists an integer j such that for every such equation we have c i = λz 1 : U 1 . ... λz p : U p .z j . Proof Let β be the first occurrence of y i in the path of α. Let j be the integer such that α = β < j > β ′ . Let (x c 1 ... c n ) = b be an equation of Φ such that the (h + 2)-th occurrence of y i on the considered path is accessible with respect to this equation. If the head of c i is a symbol different from a z k then |c i | ≠ 0. Using part (3) of the key lemma, when we substitute c 1 , ..., c i-1 , c i+1 , ..., c n we have an occurrence α ′ that has more than h + 1 occurrences of y i on its path. Then using part (4) of the key lemma, when we substitute c i we have an occurrence α ′′ whose length is greater than or equal to h + 1, so h + 1 ≤ |u[y 1 ← c 1 , ..., y n ← c n ]|, i.e. h + 1 ≤ |b|, which is contradictory. So we have c i = λz 1 : U 1 . ... λz p : U p .z k . Since h + 2 > 1 the occurrence β < j > is accessible with respect to the equation (x c 1 ... c n ) = b. Thus as the occurrence β is labeled with y i and the occurrence β < j > is accessible with respect to this equation, the term c i is relevant in its j th argument. Therefore k = j and c i = λz 1 : U 1 . ... λz p : U p .z j . Definition 27 (Compact Accessible Solution Built from an Accessible Solution) Let Φ be an interpolation problem and let σ be an accessible solution to this problem. Let h be the maximum depth of a right hand side of the equations of Φ. We let σx = t = λy 1 : T 1 . ... λy n : T n .u. For each occurrence α in t labeled with y i such that the corresponding occurrence α ′ in the Böhm tree of t has more than h + 1 occurrences labeled with y i in its path, we have c i = λz 1 : U 1 . ... λz p : U p .z j in all the equations (x c 1 ... c n ) = b of Φ such that α ′ is accessible with respect to this equation. We substitute the occurrence α by the term λz 1 : U 1 . ... λz p : U p .z j . We get that way a term t ′ . We let σ ′ x = t ′ . This solution is accessible but not compact. The first occurrence of f is accessible with respect to both equations, but the second and third occurrences are accessible only with respect to the second one. We have h = 0, so we substitute the second and third occurrences of f by the term λy : T.λz : T.z and we get the substitution x ← λf : T → T → T.(f a b). Definition 29 Let Ψ be a third order matching problem and let σ be a solution to Ψ. We let Φ(Ψ, σ) be the following third order interpolation problem: Φ(Ψ, σ) = ∪ a=b∈Ψ Φ(a = b, σ). ... and for all i such that z has an occurrence in (σx σd 1 ... σd i-1 z σd i+1 ... σd n ), by induction hypothesis we have σ ′ d i = σd i , so c i = σ ′ d i . Therefore (σ ′ x c 1 ... c n )[z i ← σ ′ d i ] = b[z i ← σ ′ d i ], thus (σ ′ x c 1 ... c n )[z i ← σ ′ d i ] = b, hence (σ ′ x σ ′ d 1 ... σ ′ d n ) = b, i.e. σ ′ a = b. Proposition 14 Let Ψ be a third order matching problem and let σ be a solution to Ψ. Let h be the maximum of the depths of the right hand sides of the equations of Ψ. Then σ is a solution to the problem Φ(Ψ, σ), each substitution σ ′ solution to the problem Φ(Ψ, σ) is a solution to Ψ, and if a ′ = b ′ ∈ Φ(Ψ, σ) then |b ′ | ≤ h. Proof By propositions 12 and 13. Lemma 4 Let Ψ be a third order matching problem. Let h be the maximum of the depths of the right hand sides of the equations of Ψ. If this problem has a solution σ then it also has a solution σ ′ such that for every instantiable variable x, σ ′ x has a depth less than or equal to (n + 1)(h + 1) -1, where n is the arity of x. Proof The substitution σ is a solution to the problem Φ(Ψ, σ), thus, by lemma 3, this problem has a solution σ ′ such that for every instantiable variable x, σ ′ x has a depth less than or equal to (n + 1)(h + 1) -1. This solution σ ′ is a solution to the problem Ψ. Remark This method, in which an interpolation problem Φ(Ψ, σ) is constructed from a pair < Ψ, σ > where Ψ is an arbitrary problem and σ a solution to Ψ, can be compared to the one used in the completeness proof of [START_REF] Snyder | Higher-Order Unification Revisited: Complete Sets of Transformations[END_REF] in which a problem in solved form is constructed from such a pair. A Decision Procedure Theorem Third Order Matching is Decidable Proof A decision procedure is obtained by considering the problem Φ and enumerating all the ground substitutions such that the term substituted for x has a depth less than or equal to (n + 1)(h + 1) -1, where h is the maximum depth of b for a = b ∈ Φ and n is the arity of x. If one of these substitutions is a solution then success, else failure. This decision procedure is obviously sound. By lemma 4, it is complete. Remark A more efficient decision algorithm is obtained by enumerating the nodes of the tree obtained by pruning Huet's search tree [START_REF] Huet | A Unification Algorithm for Typed λ-calculus[END_REF] [6] at each node corresponding to a substitution whose depth is larger than (n + 1)(h + 1) -1. This tree is obviously finite and thus this algorithm terminates. It is obviously sound. By lemma 4, it is complete. Remark This result can be used to design an algorithm which enumerates a complete set of solutions to a third order matching problem and either terminates if the problem has a finite complete set of solutions or keeps enumerating solutions forever if the problem admits no such set. Such an algorithm is obtained by enumerating the nodes of the tree obtained by pruning Huet's search tree [START_REF] Huet | A Unification Algorithm for Typed λ-calculus[END_REF] [6] at each node labeled with a problem that has no solution (by the theorem above, it is decidable if such a problem has a solution or not). Obviously, this algorithm still produces a complete set of solutions. Let us now show that when a matching problem has a finite complete set of solutions then this algorithm terminates. Recall that a set of substitutions is called minimal if no substitution of this set is an instance of another and that Huet's algorithm applied to a matching problem produces a minimal complete set of solutions [START_REF] Huet | Résolution d' Équations dans les Langages d[END_REF]. It is routine to verify that if a problem has a finite complete set of solutions then any minimal complete set of solutions is also finite. So, if a problem has a finite complete set of solutions then Huet's tree for this problem has a finite number of success nodes and thus a finite number of nodes labeled with a problem that has a solution. The pruned tree is therefore finite and the algorithm obtained by enumerating its nodes terminates. Remark This decidability result can be compared with the decidability of the equations of the form P (x 1 , ..., x n ) = b where P is a polynomial whose coefficients are natural numbers and b is a natural number.
If this equation has a solution < a 1 , ..., a n > then it has a solution < a ′ 1 , ..., a ′ n > such that a ′ 1 ≤ b. Indeed either Q(X) = P (X, a 2 , ..., a n ) is not a constant polynomial and for all n, Q(n) ≥ n, so a 1 ≤ b, or the polynomial Q is identically equal to b and < 0, a 2 , ..., a n > is also a solution. So a simple induction on n proves that if the equation has a solution then it also has a solution in {0, ..., b} n and an enumeration of this set gives a decision procedure. Conclusion: Towards Higher Order Matching The proof given here is based on the fact that if t is a third order term then when we reduce the term (t c 1 ... c n ), in the general case, we get a term deeper than t (or, at least, if it is not, the depth loss can be bounded). This gives a bound (in terms of the depth of b) on the depth of the solutions of the equation (x c 1 ... c n ) = b. In the particular cases in which the depth loss is greater than the bound, some part of the term t is superfluous and we can construct a smaller term t ′ such that (t ′ c 1 ... c n ) = (t c 1 ... c n ). Generalizing this property of reduction to the full λ-calculus would give the decidability of higher order matching. To get the normal form of the term (t c 1 ... c n ) we have followed the strategy hinted by the weak normalization theorem and reduced first all the second order redexes, then all the first order redexes. So a generalization of this proof to higher order should require an induction on the maximal order of a redex. In the proof for the third order case, we quickly get the normal form of the term (t c 1 ... c n ) and we do not need to define the depth of a non-normal term. It seems that the generalization of this result to higher order requires such a definition. Pattern Matching Definition 15 (Matching Problem) A matching problem is a set Φ = {< a 1 , b 1 >, ..., < a n , b n >} of pairs of terms of the same type such that the terms b 1 , ..., b n are ground. A pair < a, b > is frequently written as an equation a = b. Lemma 1 Some problems have no finite complete set of solutions. Proof (Example 1) Consider an atomic type T and an instantiable variable x : T → (T → T ) → T and the problem λa : T.(x a λz : T.z) = λa : T.a. The substitutions x ← λo : T.λs : T → T.(s ... (s o) ... ) are solutions to this problem and they cannot be obtained as instances of a finite number of solutions. Remark In [6] [12], the similar examples (x λz : T.z) = a and (x λz : T.z) = b(a) are considered. 3.1.1 Key Lemma Definition 21 (Relevant Term) Let c = λz 1 : U 1 . ... λz p : U p .d (d atomic) be a normal term and i an integer, i ≤ p. We say that c is relevant in its i th argument if z i has an occurrence in the term d. Lemma 2 (Key Lemma) Let us consider a normal term u, a variable y of type T of order at most two and a normal ground term c of type T . (1) If y has an occurrence in u then |c| ≤ |u[y ← c]|. (1) Let β be an outermost occurrence of y in the Böhm tree of u. For each occurrence δ in the Böhm tree of c, βδ is an occurrence in the Böhm tree of the normal form of u[y ← c]. So |c| ≤ |u[y ← c]|. (4) If δ =< > then c = λz 1 : U 1 . ... λz p : U p .z r and |c| = 0. So if |c| ≠ 0 then δ ≠< > and the length of α ′ is greater than or equal to the length of α. Corollary Let us consider a normal term u, a variable y of type T of order at most two and a ground term c of type T . If c is relevant in all its arguments and |c| ≠ 0 then |u| ≤ |u[y ← c]|. Proof We take for α the longest occurrence in the Böhm tree of u. When we substitute one by one the occurrences of y, by part (4) of the key lemma, we get occurrences that are at least as long. So there is an occurrence in the Böhm tree of the normal form of u[y ← c] which is at least as long as α. So |u| ≤ |u[y ← c]|. 3.1.2 Computing the Substitution σ ′ Let us consider an equation (x c 1 ... c n ) = b. Let σ be a solution to this equation and let t = σx. Let us write t = λy 1 : T 1 . ... λy n : T n .u. The normal form of the term σ(x c 1 ... c n ) is the normal form of u[y 1 ← c 1 , ..., y n ← c n ]. If all the c i are relevant in their arguments and |c i | ≠ 0 then using the corollary of the key lemma we have |t| ≤ |(t c 1 ... c n )|, so |t| ≤ |b| and this gives a bound on the depth of t. But the depth of t may decrease when applied to the terms c i and normalized in two cases: Example 2 Let x be an instantiable variable of type T → (T → T ) → T . Consider the problem (x a λz : T.b) = b. The variable z has no occurrence in b so this problem has solutions of arbitrary depth x ← λo : T.λs : T → T.(s t) where t is an arbitrary term of type T . In this example we will compute the substitution x ← λo : T.λs : T → T.(s c) where c is a constant. Example 1 (continued) The term λz : T.z has depth 0, so we have solutions of an arbitrary depth. In this example we will compute the substitution x ← λo : T.λs : T → T.(s o). Definition 22 (Occurrence Accessible with Respect to an Equation of the Form (x c 1 ... c n ) = b) Let us consider an equation (x c 1 ... c n ) = b and the term t = σx = λy 1 : T 1 . ... λy n : T n .u. Example 2 (continued) From the solution x ← λo : T.λs : T → T.(s t) where t is an arbitrary term, we compute the substitution x ← λo : T.λs : T → T.(s c) where c is a constant. Proposition 6 Let Φ be an interpolation problem and let σ be a solution to Φ; then the accessible solution σ built from σ is a solution to Φ. Proof Let us consider an equation (x c 1 ... c n ) = b of Φ and the terms σx = t = λy 1 : T 1 . ... λy n : T n .u and σx = t ′ = λy 1 : T 1 . ... λy n : T n .u ′ . Example 1 (continued) We build the substitution x ← λo : T.λs : T → T.(s o). Example 3 Consider an instantiable variable x of type (T → T → T ) → T and the problem (x λy : T.λz : T.y) = a, (x λy : T.λz : T.z) = b. We have the solution x ← λf : T → T → T.(f a (f c (f d b))) and, after compaction, the substitution x ← λf : T → T → T.(f a b). Note that we must not substitute the first occurrence of f by λy : T.λz : T.z, because we would get the substitution x ← λf : T → T → T.b which is not a solution to the first equation. Definition 28 Let a = b be an equation and let σ be a (ground) solution to this equation. By induction on the number of occurrences of a we construct an interpolation problem Φ(a = b, σ). • If a = λx : T.d then since σ is a solution to the problem a = b we have b = λx : T.e and σ is a solution to the problem d = e. We let Φ(a = b, σ) = Φ(d = e, σ). • If a = (f d 1 ... d n ) with f a constant or a local variable then since σ is a solution to a = b we have b = (f e 1 ... e n ) and σ is a solution to the problems d i = e i . We let Φ(a = b, σ) = ∪ i Φ(d i = e i , σ). • If a = (x d 1 ... d n ) with x instantiable then for all i such that z has an occurrence in the normal form of the term (σx σd 1 ... σd i-1 z σd i+1 ... σd n ) we let c i = σd i and H i = Φ(d i = σd i , σ) (obviously σ is a solution to d i = σd i ). Otherwise we let c i = z i where z i is a new local variable and H i = ∅. We let Φ(a = b, σ) = {(x c 1 ...
c n ) = b} ∪ ∪ i H i . Proposition 11 Let t = (x d 1 ... d n ) be a term and let σ be a substitution. Let c i = σd i if z has an occurrence in (σx σd 1 ... σd i-1 z σd i+1 ... σd n ) and c i = z i where z i is a new local variable of the same type as d i otherwise. The variables z i do not occur in the normal form of (σx c 1 ... c n ). Proof Let us assume that some of these variables have an occurrence in the normal form of (σx c 1 ... c n ) and consider an outermost occurrence of such a variable z i in the Böhm tree of the normal form of (σx c 1 ... c n ). By part (2) of the key lemma, the variable z i also has an occurrence in the normal form of the term (σx c 1 ... c n )[z j ← σd j | j ≠ i], i.e. in the normal form of the term (σx σd 1 ... σd i-1 z i σd i+1 ... σd n ), which is contradictory. Proposition 12 Let a = b be an equation and let σ be a solution to this equation; • the substitution σ is a solution to Φ(a = b, σ), • conversely, if σ ′ is a solution to Φ(a = b, σ) then σ ′ is also a solution to the equation a = b. Proof • By induction on the number of occurrences of a. When a is an abstraction a = λx : T.d (resp. an atomic term whose head is a constant or local variable a = (f d 1 ... d n )) then b is also an abstraction b = λx : T.e (resp. an atomic term with the same head b = (f e 1 ... e n )) and by induction hypothesis σ is a solution to all the equations of the set Φ(d = e, σ) (resp. Φ(d i = e i , σ)), so it is a solution to all the equations of Φ(a = b, σ). When a = (x d 1 ... d n ) then by induction hypothesis σ is a solution to all the equations of the sets H i and, using the previous proposition, the variables z i have no occurrences in the term (σx c 1 ... c n ), so we have (σx c 1 ... c n ) = (σx c 1 ... c n )[z i ← σd i ] = (σx σd 1 ... σd n ) = b. So σ is a solution to the equation (x c 1 ... c n ) = b. • By induction on the number of occurrences of a. Let σ ′ be a substitution solution to Φ(a = b, σ). If a is an abstraction a = λx : T.d (resp. an atomic term whose head is a constant or a local variable a = (f d 1 ... d n )) then b is also an abstraction b = λx : T.e (resp. an atomic term with the same head b = (f e 1 ... e n )) and by induction hypothesis we have σ ′ d = e (resp. σ ′ d i = e i ) and so σ ′ a = b. If a = (x d 1 ... d n ) then we have (σ ′ x c 1 ... c n ) = b. Proposition 13 Let a = b be an equation and let σ be a solution to this equation; if a ′ = b ′ is an equation of Φ(a = b, σ) then |b ′ | ≤ |b|. Proof By induction on the number of occurrences of a. When a is an abstraction a = λx : T.d (resp. an atomic term whose head is a constant or a local variable a = (f d 1 ... d n )) then b is also an abstraction b = λx : T.e (resp. an atomic term with the same head b = (f e 1 ... e n )) and by induction hypothesis |b ′ | ≤ |e| (resp. |b ′ | ≤ |e i |), so |b ′ | ≤ |b|. When a = (x d 1 ... d n ) and the considered equation is (x c 1 ... c n ) = b then we have b ′ = b so |b ′ | ≤ |b|. When the considered equation is in one of the sets H i , the set H i is non-empty so z has an occurrence in the normal form of the term (σx σd 1 ... σd i-1 z σd i+1 ... σd n ) and (σx σd 1 ... σd i-1 z σd i+1 ... σd n )[z ← σd i ] = b, so using part (1) of the key lemma we have |σd i | ≤ |b| and by induction hypothesis |b ′ | ≤ |σd i |, so |b ′ | ≤ |b|.
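For readability, the three cases of Definition 28 used in Propositions 11-13 can be gathered in a single display (our LaTeX rendering of the definition above, with c_i and H_i as defined there):

\[
\Phi(a{=}b,\sigma)=
\begin{cases}
\Phi(d{=}e,\sigma) & \text{if } a=\lambda x{:}T.d,\ b=\lambda x{:}T.e,\\
\bigcup_i \Phi(d_i{=}e_i,\sigma) & \text{if } a=(f\,d_1\ldots d_n),\ b=(f\,e_1\ldots e_n),\\
\{(x\,c_1\ldots c_n)=b\}\cup\bigcup_i H_i & \text{if } a=(x\,d_1\ldots d_n),\ x \text{ instantiable}.
\end{cases}
\]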
This technical restriction is in fact superfluous, because a matching problem expressed in a language with an infinite number of constants can always be reduced to one expressed in the language with a finite number of constants obtained by considering only the constants occurring in the problem and one constant in each atomic type. Acknowledgments The author would like to thank Gérard Huet, Richard Statman and Gopalan Nadathur for many very helpful discussions on this problem and remarks on previous drafts of this paper. This research was partly supported by ESPRIT Basic Research Action "Logical Frameworks". Proposition 9 Let Φ be an interpolation problem and let σ be a solution to Φ. Let σ the accessible solution built from σ and σ ′ the compact accessible solution built from σ. Then σ ′ is a solution to Φ. Proof We consider an equation (x c 1 ... c n ) = b and we let σx = t = λy 1 : T 1 . ... λy n : T n .u The term u ′ is obtained by substituting in the term u some occurrences (say β 1 , ..., β k ) by some terms (say e 1 , ..., e k ). If α is an occurrence of u then we define u ′ α as the term obtained by substituting in the term u/α the occurrence γ i by the term e i if β i = αγ i . We prove by decreasing induction on the depth of the occurrence α of the Böhm tree of u that if α is accessible with respect to the equation ( Thus for the root we get So σ ′ is a solution to all the equations of Φ. Proposition 10 Let Φ be an interpolation problem and let σ be a solution to Φ. Let σ be the accessible solution built from σ and σ ′ the compact accessible solution built from σ. Let h be the maximum depth of the right hand side of the equations of Φ. For every instantiable variable x of arity n, σ ′ x has a depth less than or equal to (n + 1)(h + 1) -1. Proof In a path of the Böhm tree of σ ′ x each y i has at most h + 1 occurrences and there are at most h + 1 occurrences of other symbols, so there are at most (n + 1)(h + 1) occurrences. Therefore the depth of σ ′ x is bounded by (n + 1)(h + 1) -1. Lemma 3 Let Φ be a third order interpolation problem. If Φ has a solution σ then it also has a solution σ ′ such that for every instantiable variable x, σx has a depth less than or equal to (n + 1)(h + 1) -1, where h is maximum of the depths of the right hand side of the equations and n the arity of x. Proof The compact accessible solution σ ′ built from the accessible solution built from the solution σ is a solution and for every instantiable variable x, σ ′ x has a depth less than or equal to (n + 1)(h + 1) -1. This bound is met, for instance by the example 3. General Case
00411508
en
[ "info.info-ai" ]
2024/03/04 16:41:24
2009
https://inria.hal.science/inria-00411508/file/papier-soumission-taaable2.pdf
Fadi Badra Julien Cojan Amélie Cordier Jean Lieber Thomas Meilender Alain Mille Pascal Molli Emmanuel Nauer Amedeo Napoli Hala Skaf-Molli Yannick Toussaint Knowledge Acquisition and Discovery for the Textual Case-Based Cooking system WIKITAAABLE Keywords: textual case-based reasoning, case-based cooking, semantic wiki, opportunistic knowledge acquisition The textual case-based cooking system WIKITAAABLE participates to the second Computer cooking contest (CCC). It is an extension of the TAAABLE system that has participated to the first CCC. WIKITAAABLE's architecture is composed of a semantic wiki used for the collaborative acquisition of knowledge (recipe, ontology, adaptation knowledge) and of a case-based inference engine using this knowledge for retrieving and adapting recipes. This architecture allows various modes of knowledge acquisition for case-based reasoning that are studied within the TAAABLE project. In particular, opportunistic adaptation knowledge discovery is an approach for interactive and semi-automatic learning of adaptation knowledge triggered by a feedback from the user. Introduction As final result from last year did not make us good cooks, we decided to keep on doing research. Hence, for the second edition of the Computer Cooking Contest (CCC), the TAAABLE system has evolved towards a new architecture called WIKITAAABLE. This year, we focused our efforts on two intertwined aspects: knowledge and reasoning. Concerning reasoning, we worked on the inference engine improvement to enhance the adaptation ability of the system. Concerning knowledge we set up advanced knowledge acquisition facilities to support the evolution of the system during its life-cycle. This paper describes the innovations developed in WIKITAAABLE, whose architecture is described in section 2 and discusses current research issues. One innovation this year is that the system is embedded within a semantic wiki and that the collaborative aspects are also of main concern mainly for document and knowledge edition and update. The remainder of the paper shows how knowledge is manipulated within the system. Section 3 presents knowledge acquisition and representation within the semantic wiki. Section 4 illustrates how knowledge is used by the inference engine. Section 5 describes an opportunistic acquisition strategy guiding the evolution of the system knowledge. Finally, section 6 draws some conclusions and points out ongoing and future work. For qualification purpose, a restricted interface of the system is available online. This interface only allows users to ask queries to the system. The full application with embedded knowledge acquisition features will be available for the contest. 2 Architecture of WIKITAAABLE In a CBR system, results strongly depend on the quality of available knowledge. As a consequence, continuous improvement of knowledge is required to progressively enhance the obtained results. The previous version of TAAABLE [START_REF] Badra | Taaable: Text Mining, Ontology Engineering, and Hierarchical Classification for Textual Case-Based Cooking[END_REF] suffered from different problems making maintenance and evolution of the system knowledge difficult. For example, there was no way to capture user feedback and to reuse it for maintenance. Besides, knowledge was organized in several files of heterogeneous formats that were difficult to update. As a consequence, the evolutions of the system knowledge were tedious tasks, even for the designers of TAAABLE. 
In WIKITAAABLE [START_REF] Cordier | Wikitaaable: A semantic wiki as a blackboard for a textual case-based reasoning system[END_REF] we decided to use a semantic wiki (Semantic Media Wiki [START_REF] Völkel | Semantic Wikipedia[END_REF]) as a central module to manage all data and knowledge used in the system. Making use of a semantic wiki has two major advantages: it enables humans and machines to rely on the same tool for representing and reasoning on shared knowledge and it provides users with user-friendly interfaces for browsing and editing knowledge. Figure 1 gives an overview of the WIKITAAABLE architecture. Each component has been designed to work with the others and the components are strongly intertwined. For example, knowledge has not been represented at a general level but for reasoning purpose. The semantic wiki module manages knowledge of the system. The wiki is accessible for the users through a graphical interface and for the system through bots1 that automates several tasks. Section 3 details this module. The inference engine includes the CBR core and is able to reason on the knowledge available in the wiki. It is described in section 4. In order to facilitate knowledge acquisition, the architecture of WIKITAAABLE is designed in such a way that it enables as much interactions as possible. Opportunistic knowledge acquisition process developed in WIKITAAABLE is discussed in section 5. A Semantic Wiki for Collaborative Acquisition of Cooking Knowledge In [START_REF] Cordier | Wikitaaable: A semantic wiki as a blackboard for a textual case-based reasoning system[END_REF], Semantic Media Wiki (SMW [START_REF] Völkel | Semantic Wikipedia[END_REF]) is used as a blackboard for WIKITAAABLE knowledge management. WIKITAAABLE gathers the whole knowledge body required by the application. To import knowledge of the first version of the TAAABLE system [START_REF] Badra | Taaable: Text Mining, Ontology Engineering, and Hierarchical Classification for Textual Case-Based Cooking[END_REF] into WIKI-TAAABLE, we wrote several bots that use mediawiki API. Recipes, ontologies, and specific adaptation knowledge are now represented as a graph of semantic wiki pages. Each page can be freely read an updated by humans and by bots. Hence, TAAABLE is now maintained and improved by a collaboration between users and machines. Domain Ontology The domain ontology contains four hierarchies: ingredients, dish moments, dish types, and dish origins. For adapting a recipe by substituting some ingredients by other ingredients, the CBR engine requires knowledge stating similarity between ingredients. Therefore, ingredients are organized in the ingredient hierarchy. This hierarchy is used by the CBR engine to compute the cost of a substitution: the closer the ingredients, the lower the cost; e.g., orange is closer to lemon than apple. 2 In order to characterize recipes, three other hierarchies define and organize dish moments (appetizer, dessert), dish types (cake, pizza), and dish origins (Mediterranean, Chinese). The original acquisition of the hierarchies is described in [START_REF] Badra | Taaable: Text Mining, Ontology Engineering, and Hierarchical Classification for Textual Case-Based Cooking[END_REF]. The four hierarchies are imported into WIKITAAABLE by using the Category/Sub-Category relation of Semantic MediaWiki [START_REF] Völkel | Semantic Wikipedia[END_REF]. For example, there is a page for orange and another page for citrus and the two pages are linked by this relation. 
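The substitution costs mentioned above can be made concrete with a small executable sketch: a few Category/Sub-Category links are encoded as a child-to-parent map, and the closeness of two ingredients is measured by the weighted length of the shortest path joining them in the hierarchy, as suggested in the footnote. The mini-hierarchy, the unit edge weights, and the single-root assumption are illustrative choices, not the actual WIKITAAABLE ontology.

# Toy ingredient hierarchy (child -> parent); an illustrative subset only.
parent = {
    "lemon": "citrus", "orange": "citrus", "citrus": "fruit",
    "apple": "fruit", "lettuce": "green_salad", "endive": "green_salad",
}

def ancestors(ingredient):
    # Chain [x, parent(x), ...] up to a root of the hierarchy.
    chain = [ingredient]
    while chain[-1] in parent:
        chain.append(parent[chain[-1]])
    return chain

def closeness_cost(x, y, edge_weight=1.0):
    # Weighted length of the shortest path through the deepest common category.
    ax, ay = ancestors(x), ancestors(y)
    common = next(u for u in ax if u in ay)   # assumes a common ancestor exists
    return edge_weight * (ax.index(common) + ay.index(common))

# "orange is closer to lemon than apple":
assert closeness_cost("orange", "lemon") < closeness_cost("orange", "apple")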
For instance, the figure 2 shows the imported ingredient hierarchy. Adaptation knowledge To adapt a recipe, the CBR engine uses the ontology and a set AK of substitutions. A substitution σ ∈ AK states that, in a given context, some ingredients may be substituted by other ones. For instance, the following substitution states that, for the context of a salad without potato, vinegar may be substituted by lemon juice and salt. σ = context salad no potato from vinegar by lemon juice salt ( 1 ) Each substitution is represented in a semantic wiki page. For instance, figure 3 shows the wiki page of the above substitution. The acquisition of substitutions is detailed in section 5. Recipes The recipes are also imported into WIKITAAABLE, where a wiki page is created for each recipe. Then, a bot crawls all the recipe pages, parses ingredient information, sets dish types, moments, and origins. It updates recipe pages with this information encoded as semantic annotations. Figure 4 shows a recipe page in WIKITAAABLE. We used the n-ary relationship of Semantic Media Wiki to represent an ingredient line, for example, (1, c, Category:rice) represents 1 c rice, the first ingredient line in figure 4. The parsing of ingredient information and setting types is described in [START_REF] Badra | Taaable: Text Mining, Ontology Engineering, and Hierarchical Classification for Textual Case-Based Cooking[END_REF]. O is the domain ontology represented by a set of axioms of the form a ⇒ b where a and b are two variables representing recipe classes. For example, lemon (resp., citrus) represents the class of recipes with lemon (resp., with citrus) and the axiom lemon ⇒ citrus states that any recipe with lemon is a recipe with citrus. In fact, every ingredient name X such as lemon is interpreted here as "class of the recipes with ingredient X". Recipes is the set of recipes given by the CCC organizers, and consequently the case base of the CBR system TAAABLE. A recipe R ∈ Recipes cannot be directly handled by the CBR inference engine: the engine requires a formal representation whereas R is for the largest part written in natural language. Therefore, only the formalized part of the recipe R is used: its index idx(R), which is expressed by a conjunction of literals (the indexing process of the recipes coincides with the annotation process mentioned in section 3.3). For example idx(R) = lettuce ∧ vinegar ∧ olive_oil ∧ tomato ∧ Nothing else (2) is a formal representation of a recipe R having ingredients lettuce, vinegar, olive oil, and tomato. A closed world assumption is associated to idx(R) stating that if a prop- erty cannot be deduced from the recipe description and from the ontology then it is considered as false. Formally, if idx(R) O a then "Nothing else" contains the conjunct ¬a. It can be noticed that the substitution given by a triple (context, from, by) in the wiki pages (cf. equation ( 1)) are rewritten context ∧ from context ∧ by to suit the CBR engine formalism. The CBR inference is based on substitutions, either taken from AK or built with the help of ontology O (see below for details). The choice of substitutions is made according to the problem-solving context and to a cost function cost : σ → cost(σ) > 0; substitution σ is preferred to substitution τ when cost(σ) < cost(τ ). Therefore, the cost function (and its parameters) constitutes an additional knowledge container. CBR Inference Let Q be a query. 
For example Q = endive ∧ lemon_juice ∧ ¬onion (4) is a query for a recipe with endive and lemon juice and without onion. The CBR inference consists in (1) retrieving recipes matching exactly or approximately Q and (2) adapting the retrieved recipes. Retrieval aims at choosing indexes idx(R) matching the query Q. An exact match corresponds to idx(R) O Q. If no index exactly matches Q, the query Q is progressively relaxed into Γ (Q) such that idx(R) O Γ (Q), for some idx(R). The relaxation of Q is obtained by applying generalizations g k according to O: g k = a k b k is a substitution such that (a k ⇒ b k ) ∈ O. Thus Γ (Q) = g n (. . . (g 1 (Q)) . . .). Therefore, retrieval provides a similarity path idx(R) O Γ (Q) gn ←-. . . g1 ←-Q (5) This similarity path is built according to a best-first search minimizing k cost(g k ). For example, retrieval can give the following result Q = endive ∧ lemon_juice ∧ ¬onion Γ (Q) = green_salad ∧ ⊤ ∧ ¬onion ≡ green_salad ∧ ¬onion idx(R) = lettuce ∧ vinegar ∧ olive_oil ∧ tomato ∧ Nothing else (Γ consists in generalizing endive into green_salad and in removing lemon_juice from the query by generalizing it in several steps to ⊤, the hierarchy top). Adaptation is composed of two sub-steps. Let R be a retrieved recipe, with index idx(R). The first subset of adaptation is matching, that aims at building an adaptation path from idx(R) to Q of the form idx(R) σ1 -→ . . . σp -→ Σ(idx(R)) O Γ (Q) γq ←-. . . γ1 ←-Q (6) where σ i ∈ AK (i = 1 . . . p) and substitutions γ j (j = 1 . . . q) correspond to axioms of the ontology: γ j = a j b j with (a j ⇒ b j ) ∈ O. Such an adaptation path is built according to a best-first search in a state space minimizing i cost(σ i ) + j cost(γ j ). The second sub-step of adaptation consists in "following" the adaptation path: first R is adapted successively in σ 1 (R), σ 2 (σ 1 (R)), . . . σ p (. . . (σ 2 (σ 1 (R)) . . .) = Σ(R). Then, ingredients of Σ(R) are substituted by other ingredients according to a generalization-specialization process (generalization corresponds to the relation O and specialization to the substitutions γ -1 q , . . . , γ -1 1 ). For example, let idx(R) and Q be the example presented above. Matching may provide the following adaptation path: idx(R) σ -→ Σ(idx(R)) O Γ (Q) γ ←-Q where σ is defined by ( 3) and γ = endive green_salad. Thus, the adaptation of R consists in replacing vinegar by lemon juice and salt (cf. σ) and by substituting lettuce by endive (cf. O and γ -1 ). It can be noticed that retrieval provides a first matching: a similarity path is a kind of adaptation path involving no σ ∈ AK. Thus, the retrieved recipe R can be adapted following this similarity/adaptation path. However, during adaptation, some substitutions σ ∈ AK may be used and, if they do, the resulting adaptation requires less effort. 45 Opportunistic Knowledge Acquisition and Discovery WIKITAAABLE has been designed to facilitate continuous knowledge acquisition through interactions with its users: it is a reflexive and perpetually evolving system. However, due to the heterogeneity of knowledge acquisition situations that can be envisioned, setting up such a process is a complex task. This diversity of situations is explained by several factors: -The various types of knowledge (ontology, adaptation knowledge, substitutions costs, recipes) that can be acquired. 
-The different interaction modalities such as simple user feedback, direct modification on wiki pages, interaction through dedicated interfaces, use of external knowledge discovery methods, etc. -The provenance of knowledge that is acquired: single users, community of users, experts, other sources of data (web sites), or local knowledge from which new knowledge in inferred. In the following, a particular scenario of opportunistic knowledge discovery is detailed. In WIKITAAABLE, substitutions σ ∈ AK are acquired at problem-solving time through interactions with the user. The knowledge discovery process CABAMAKA [START_REF] D'aquin | Case base mining for adaptation knowledge acquisition[END_REF] is used to assist the user in the formulation of new pieces of knowledge. Its role is to generate a set of substitutions σ ∈ AK "on the fly" from the comparison of two sets of recipes of the case base. The generated substitutions can be used by the system to repair a failed adaptation. Each time a substitution is validated by the user, it is stored for future reuse. In the following, the main principles of the approach are illustrated on an example. More details on the proposed approach can be found in [START_REF] Badra | Opportunistic adaptation knowledge discovery[END_REF]. In section 4, the user wanted a salad recipe with lemon juice but without onion, which was modeled by the query Q defined by [START_REF] D'aquin | Case base mining for adaptation knowledge acquisition[END_REF]. The substitution σ ∈ AK defined by (3) was used to adapt the retrieved recipe R by replacing vinegar by lemon juice and salt. Such a substitution cannot be obtained from the ontology O, so let us assume in this scenario that σ is not available to the system. Thus, to perform adaptation, the system relies solely on the ontology O from which it generates the substitution vinegar lemon_juice. Now, in our scenario, the user is not satisfied with the proposed solution and gives this feedback to the system. Therefore, the knowledge discovery process is triggered: a set of substitutions is learned from the case base by comparing salad recipes with vinegar and salad recipes with lemon juice. Among the learned substitutions is the substitution σ learned = vinegar lemon_juice ∧ salt, which suggests to replace vinegar by lemon juice in the retrieved recipe R and to add salt. The user is satisfied with the adaptation resulting from the application of this substitution, so the latter is stored for future reuse. At this point, the user is encouraged to specify the condition of application of the substitution σ learned . The user states that vinegar can be replaced by lemon juice and salt in salad recipes that do not contain potato, which is modeled by the substitution context salad∧¬potato. Combining the learned substitution σ learned and its application context gives the substitution σ defined by [START_REF] Cordier | Wikitaaable: A semantic wiki as a blackboard for a textual case-based reasoning system[END_REF]. In WIKITAAABLE, the wiki is used as a gateway enabling to centralize knowledge used in the system. It provides functionalities to facilitate acquisition and maintenance of knowledge and enables to progressively add new acquisition features, allowing the evolution of the whole system. However, setting up a complex knowledge acquisition process raises several issues. For example, tools for ensuring consistency of knowledge used in the system must be developed. Another issue is to handle updates from multiple users. 
What happens when one believes that an avocado is to be eaten as a starter whereas someone else reckon that it has to be eaten as a dessert? Is the system supposed to converge towards a "commonly accepted" knowledge base or should it be able to deal with user's preferences? A strength of the architecture of WIKITAAABLE is that it will enable to progressively address these issues. A future work is to allow users to add their own recipes to the system. This functionality requires to be able to dynamically annotate new recipes within WIKITAAABLE in order to make them usable by the CBR inference engine. One of the advantages of such a functionality, combined with the benefits of a wiki, is that communities of users will be able to share their recipes and to collaborate in order to improve the global knowledge of the system. Next, we would like to tackle the multi-user issue which is a prerequisite for envisioning a collaborative building of the knowledge base of TAAABLE. Conclusion, Ongoing Work, and Future Work The textual case-based cooking system WIKITAAABLE participates to the second CCC. It is an extension of the TAAABLE system that has participated to the first CCC. WI-KITAAABLE's architecture is composed of a semantic wiki used for the collaborative acquisition of knowledge (recipe, ontology, adaptation knowledge) and of a CBR inference engine using this knowledge for retrieving and adapting recipes. This architecture allows various modes of knowledge acquisition for CBR that are studied within the TAAABLE project. In particular, opportunistic adaptation knowledge discovery is an approach for interactive and semi-automatic learning of adaptation knowledge triggered by a feedback from the users. The first ongoing work is the improvement of the WIKITAAABLE system (user interface, inference engine, knowledge base encoded in wiki pages, and links between these components). Another work planned for the next weeks is the development of tools within WIKITAAABLE for knowledge acquisition triggered by user feedbacks. Such a knowledge acquisition leads to a continuous evolution of the knowledge base and thus, of the behavior of the system. It is important to ensure that these evolutions are improvements and that the integrity of the knowledge is preserved. We plan to use non regression and consistency tests for this purpose. Today, only the compulsory task of the CCC is addressed by WIKITAAABLE but we plan to have also the two challenges addressed by the system for the day of the contest. Currently, wiki pages are accessed and maintained by a limited community: the TAAABLE project members. These pages encode the knowledge that have been acquired on the basis of a consensus. A long term objective is to have several open semantic wikis with cooking knowledge, each of them corresponding to a user community, the consensus being only realized at the level of a community. Fig. 1 . 1 Fig. 1. Overview of the WIKITAAABLE architecture. Fig. 2 . 2 Fig. 2. WIKITAAABLE ingredient ontology. Fig. 3 . 3 Fig. 3. A substitution semantic wiki page. Fig. 4 . 4 Fig. 4. indexed recipe of "Arroz dulce". 3 3 For example, this closed world assumption enables to deduce that idx(R) O ¬meat ∧ ¬fish, i.e., that R is a vegetarian recipe. The indexes idx(R) are used to access recipes through a hierarchy H idx , according to the partial ordering O : for C, D ∈ H idx , C O D iff there is a directed path in H idx from C to D. The indexes idx(R) are leaves of the H idx . Adaptation knowledge has two parts. 
The first part is included in ontology O. The second part is the set AK of substitutions (cf. section 3.2). Any σ ∈ AK may be considered as a domain specific inference rule R is a good recipe σ(R) is a good recipe . The substitutions have the form C D where C and D are conjunctions of literals. Applying C D to a conjunction of literals (such as an index or a query) consists in replacing the literals of C by literals of D. For example, the substitution σ described in figure 3 is written as follows σ = salad ∧ ¬potato ∧ vinegar salad ∧ ¬potato ∧ lemon_juice ∧ salt (3) A bot is a piece of software for doing automated repetitive tasks. This closeness can be measured by a weighted length of the shortest path between ingredients in the hierarchy. If f and g are two propositional formulas, f O g means that f implies g, given the ontology O. More precisely: if I satisfies both O and f then I satisfies g. If the cost function is an estimation of the adaptation effort, then the adapted recipe should be better by following (6) then by following[START_REF] Völkel | Semantic Wikipedia[END_REF]. Indeed, since adding new substitutions (the ones of AK) only adds new ways to connect indexes to queries, it comes that P i cost(σi) + P j cost(γj) ≤ P k cost(g k ). Acknowledgments The participants of the TAAABLE project wish to thank the organizers of the CCC for providing this benchmark, that entails many interesting problems, and the need to collaborate with researchers in various domains on knowledge engineering.
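As an illustration of the rewriting step just described, here is a minimal Python sketch of applying a substitution C ⇝ D to a recipe index: a conjunction of literals is modeled as a set of signed atoms, and the literals of C are replaced by those of D when the context matches. The "~" encoding of negation and the applicability test (C must be included in the index) are assumptions made for the illustration, not the engine's actual representation.

# A conjunction of literals is a set of signed atoms ("~" marks negation).
def apply_substitution(index, C, D):
    # Replace the literals of C by the literals of D when C matches the index.
    index, C, D = set(index), set(C), set(D)
    if not C <= index:          # the substitution context does not apply
        return None
    return (index - C) | D

# The substitution sigma of equation (3):
C = {"salad", "~potato", "vinegar"}
D = {"salad", "~potato", "lemon_juice", "salt"}

recipe_index = {"salad", "~potato", "vinegar", "olive_oil"}
adapted = apply_substitution(recipe_index, C, D)
assert adapted == {"salad", "~potato", "lemon_juice", "salt", "olive_oil"}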
04115275
en
[ "math.math-na" ]
2024/03/04 16:41:24
2023
https://hal.science/hal-04115275/file/cl_lbm_fvca10.pdf
Romane Hélie email: [email protected] Philippe Helluy email: [email protected]

Stable second order boundary conditions for kinetic approximations

Keywords: Boundary conditions, Lattice Boltzmann, Entropy stability. The method ensures entropy stability of the resulting approximation but also high order accuracy.

1 Introduction

The vectorial kinetic approach is a general method for approximating hyperbolic systems of conservation laws. It has been introduced by Bouchut in [START_REF] Bouchut | Construction of BGK models with a family of kinetic entropies for a given system of conservation laws[END_REF] and Aregba and Natalini in [START_REF] Aregba | Discrete kinetic schemes for multidimensional systems of conservation laws[END_REF]. It consists in representing any system of conservation laws by a finite set of transport equations coupled through a stiff relaxation term. This representation has several advantages: it allows rigorous mathematical analysis [START_REF] Bouchut | Construction of BGK models with a family of kinetic entropies for a given system of conservation laws[END_REF][START_REF] Aregba | Discrete kinetic schemes for multidimensional systems of conservation laws[END_REF][START_REF] Dubois | Simulation of strong nonlinear waves with vectorial lattice boltzmann schemes[END_REF]. Associated with time splitting techniques, it leads to the construction of very efficient schemes [START_REF] Baty | A robust and efficient solver based on kinetic schemes for magnetohydrodynamics (mhd) equations[END_REF]. We first describe the one-dimensional framework, but it can be generalized to higher dimensions. We consider a system of m conservation laws

∂_t w + ∂_x q(w) = 0,   (1)

where the unknown conservative variable w(x, t) ∈ R^m depends on a space variable x ∈ [L, R] and a real time variable t ≥ 0. The system (1) is supplemented with an initial condition and boundary conditions at x = L and x = R. We assume that (1) admits a Lax entropy s : R^m → R, associated to an entropy flux g : R^m → R. This means that s is strictly convex and that s′q′ = g′. A weak solution w to (1) is admissible if the entropy is decreasing:

∂_t s(w) + ∂_x g(w) ≤ 0.

2 Kinetic approximation

The system of conservation laws is approximated by the following system of 2m kinetic equations

∂_t f + Λ ∂_x f = (1/ε) (f^eq(w) − f),   (2)

where the kinetic vector is f(x, t) = (f_1(x, t), f_2(x, t))^T ∈ R^{2m}, with f_{1,2}(x, t) ∈ R^m; the matrix Λ is defined by (I is the identity matrix of size m × m)

Λ = ( λI    0
       0  −λI ),   λ > 0,

and the parameter ε > 0. The equilibrium f^eq is defined below; the conservative data are related to the kinetic data thanks to the m × 2m matrix P = (I, I), and w = P f. This kinetic model is consistent with the conservation laws (1) when ε → 0 as soon as

P f^eq(w) = w,   P Λ f^eq(w) = q(w).   (3)

The consistency conditions (3) lead to a unique possibility for the equilibrium data

f^eq(w) = ( w/2 + q(w)/(2λ)
            w/2 − q(w)/(2λ) ).

For the numerical resolution of (2) we use a Lattice Boltzmann Method (LBM). For a given integer N > 0, we define the space step h = (R − L)/N, the grid points x_i = L + ih + h/2, 0 ≤ i < N, and the time step τ = h/λ. The solution is approximated by f_i^n ≃ f(x_i, nτ). For going from time step n − 1 to time step n, we first solve the free transport equation ∂_t f + Λ ∂_x f = 0 exactly, which is possible by simple shift operations, thanks to the choice of the time step. We get

f_i^{n,−} = ( f_{1,i−1}^{n−1}
              f_{2,i+1}^{n−1} ),   w_i^n = P f_i^{n,−}.   (4)

Then we apply a relaxation procedure for staying close to equilibrium

f_i^n = ω f^eq(w_i^n) + (1 − ω) f_i^{n,−}.   (5)

The choice ω = 1 consists in returning to equilibrium at the end of each step. The resulting time marching scheme is first order accurate. The over-relaxation choice ω = 2 results in a second order accurate time scheme [START_REF] Paul | An interpretation and derivation of the lattice boltzmann method using strang splitting[END_REF]. We observe that the LBM is particularly simple and efficient: it is made of independent shift operations followed by independent local relaxations. It is highly parallelizable and the second order extension has no additional cost. It is clear that the shift algorithm (4) needs to be adapted at the boundaries, when i = 0 or i = N − 1. At the left point, the value f_{1,−1}^{n−1} is missing, and at the right point, the value f_{2,N}^{n−1} is missing. The boundary conditions are given by two "ghost cell" functions b_L and b_R whose role is to reconstruct the missing values from the first and last cell values:

f_{1,−1}^{n−1} = b_L(f_{1,0}^{n−1}, f_{2,0}^{n−1}),   f_{2,N}^{n−1} = b_R(f_{1,N−1}^{n−1}, f_{2,N−1}^{n−1}).

We propose below a general strategy to apply the boundary conditions on the kinetic scheme. We shall test it on the very simple transport conservation law at constant velocity v. In this case, we thus have m = 1 and

q(w) = v w.   (6)

Let us explicit the kinetic model in this case. We find P = (1, 1), w = f_1 + f_2,

f^eq(w) = ( w (1/2 + v/(2λ))
            w (1/2 − v/(2λ)) ).

The chosen entropy and entropy flux are

s(w) = w²/2,   g(w) = v w²/2.

3 Entropy stability and boundary conditions

Stability analysis

For constructing the kinetic model we follow [START_REF] Bouchut | Construction of BGK models with a family of kinetic entropies for a given system of conservation laws[END_REF][START_REF] Dubois | Simulation of strong nonlinear waves with vectorial lattice boltzmann schemes[END_REF]. We assume that it is possible to find convex kinetic entropies s_k such that

s(w) = min_{w = f_1 + f_2} (s_1(f_1) + s_2(f_2)) = s_1(f_1^eq) + s_2(f_2^eq).   (7)

For the simple transport equation (6) that we consider, we can take

s_1(f_1) = (λ/(λ + v)) f_1²,   s_2(f_2) = (λ/(λ − v)) f_2².

These two kinetic entropies are convex under the sub-characteristic condition |v| < λ. For a general system of conservation laws with a Lax entropy, it is also possible to compute the s_k. The practical calculations can be done thanks to the Legendre transform [START_REF] Dubois | Simulation of strong nonlinear waves with vectorial lattice boltzmann schemes[END_REF][START_REF] Guillon | Stability analysis of the vectorial Lattice-Boltzmann Method[END_REF]. Once the consistency (3) and the kinetic entropy property (7) are ensured, the lattice Boltzmann scheme (4-5) is entropy dissipative for ω ∈ [1, 2]. Indeed, the shift step (4) preserves the kinetic entropy exactly and the relaxation step (5) makes it decrease, by design. This reasoning has to be adapted at the boundaries. For establishing the entropy dissipative kinetic boundary condition, we assume that the initial condition is constant outside an interval [a, b] ⊂ ]L, R[:

w(x, 0) = w̄, if x < a or x > b.

Without loss of generality, we can assume that s(w̄) = 0 and g(w̄) = 0, because the properties of the entropy are not modified by the addition of affine functions. Then the boundary entropy condition simply states that, in the shift step (4), the kinetic entropy that enters the domain should be smaller than the kinetic entropy that leaves the domain. We obtain

s_1(b_L(f_1, f_2)) ≤ s_2(f_2),   s_2(b_R(f_1, f_2)) ≤ s_1(f_1).   (8)

First order stable boundary conditions

We consider the simple transport case q(w) = vw. We assume that w̄ = 0 and v > 0. It is then mandatory to impose w = 0 at the left boundary. A condition is missing at the right boundary. As in [START_REF] Drui | An analysis of over-relaxation in a kinetic approximation of systems of conservation laws[END_REF] we apply a null boundary condition on the flux error y, defined by

y = λf_1 − λf_2 − q(f_1 + f_2) = (λ − v) f_1 − (λ + v) f_2.

This quantity vanishes when f_k = f_k^eq. The boundary conditions are thus

f_1 + f_2 = 0 at x = L,   λf_1 − λf_2 − q(f_1 + f_2) = 0 at x = R.

This allows us to deduce the ghost cell functions b_L and b_R:

b_L(f_1, f_2) = −f_2,   b_R(f_1, f_2) = ((λ − v)/(λ + v)) f_1.   (9)

The kinetic entropy condition (8) is then satisfied, because v > 0. The boundary conditions (9) are thus stable. However, we have observed that the right outgoing condition is only first order accurate, even when ω = 2.

Stabilized second order boundary conditions

As proposed in [START_REF] Drui | An analysis of over-relaxation in a kinetic approximation of systems of conservation laws[END_REF] we replace the right Dirichlet boundary condition by a Neumann boundary condition, because a better second order accuracy was experimentally observed with

w(L, t) = 0,   ∂_x y(R, t) = 0.

The spatial derivative is approximated by

∂_x y(R, nτ) ≃ (y_{N−1}^n − y_{N−2}^n)/h = 0.

But with this approximation, the right ghost cell boundary function now depends on the kinetic data in cell N − 1 and cell N − 2:

f_{2,N}^{n−1} = b_R(f_{1,N−1}^{n−1}, f_{2,N−1}^{n−1}, f_{1,N−2}^{n−1}, f_{2,N−2}^{n−1}).   (10)

It is then generally not possible to ensure (8) directly. We thus propose the following nonlinear correction at the right boundary:
− Compute f_{2,N}^{n−1} with formula (10).
− If the right entropy inequality is not satisfied, i.e. if s_2(f_{2,N}^{n−1}) > s_1(f_{1,N−1}^{n−1}), then replace f_{2,N}^{n−1} by the closest value f̃_{2,N}^{n−1} such that s_2(f̃_{2,N}^{n−1}) = s_1(f_{1,N−1}^{n−1}).
This construction ensures the decrease of entropy, with a minimal adjustment. It is expected that it will remain a second order accurate approximation. Similar ideas are exposed for instance in [START_REF] Aregba | Kinetic approximation of a boundary value problem for conservation laws[END_REF] (see also included references) in the FV framework.

4 Extension to higher dimension

The above construction can be extended to higher dimensions. Let us comment on a two-dimensional example. We solve the transport equation

∂_t w + ∇ · q(w) = 0,   q_i(w) = v_i w.

The unknown w(x_1, x_2, t) depends on a two-dimensional variable (x_1, x_2) in a square Ω ⊂ R². The velocity vector is v = (v_1, v_2)^T. In this case, we can consider the D2Q4 model with four kinetic unknowns f_k, 1 ≤ k ≤ 4. The kinetic velocities are (λ is a positive parameter)

c_1 = λ (1, 0)^T,   c_2 = λ (−1, 0)^T,   c_3 = λ (0, 1)^T,   c_4 = λ (0, −1)^T.

The kinetic equations read

∂_t f_k + c_k · ∇f_k = (1/ε) (f_k^eq − f_k).
The equilibrium is given by

f_k^eq(w) = w/4 + (v · c_k) w/(2λ²),   w = Σ_{k=1..4} f_k.

We also define the flux error by

y = ( λ(f_1 − f_2) − v_1 (f_1 + f_2)
      λ(f_3 − f_4) − v_2 (f_3 + f_4)
      (λ/2) (f_1 + f_2 − f_3 − f_4) ).

As in the D1Q2 model, the flux error vanishes when f = f^eq. The kinetic entropy is given by

Σ(f) = f_1²/(λ + 2v_1) + f_2²/(λ − 2v_1) + f_3²/(λ + 2v_2) + f_4²/(λ − 2v_2).

Its convexity implies the sub-characteristic condition 2 max(|v_1|, |v_2|) < λ. We summarize two boundary condition strategies in Table 1 ((n_1, n_2) is the normal vector to the considered boundary).

5 Numerical results

D1Q2 model. For our tests, the initial data is the compact support function w(x, 0) = 0 if r(x) > 1 and (1 − r(x)²)⁵ otherwise, where r(x) = |x − x_0|/σ, with σ = 0.2. We consider several test cases corresponding to different initial positions x_0 of the peak: in test case 1, x_0 = −1/2; in test case 2, x_0 = 0; and in test case 3, x_0 = 1/2. In Figure 1 we check the accuracy and the stability of the second order boundary condition.

D2Q4 model. We choose a square geometry, aligned with the kinetic velocities: Ω = [0, 1] × [0, 1]. We initialize w with the compact support function w(x_1, x_2, 0) = 0 if r(x_1, x_2) > 1 and (1 − r(x_1, x_2)²)⁵ otherwise, with r(x_1, x_2) = √((x_1 − x_{1,0})² + (x_2 − x_{2,0})²)/σ and σ = 0.4. We consider several test cases in Table 2, corresponding to different initial positions of the peak and several velocities (v_1, v_2). We first check in Figure 2 the second order accuracy and the instability of the boundary condition described in the second column of Table 1. Finally, we check in Figure 3 the improvement given by the projection strategy.

Conclusion

In this short paper, we have described a general strategy for imposing boundary conditions on a vectorial kinetic approximation. The difficulty is that the number of necessary boundary conditions is different between the kinetic model and the approximated conservation laws. With our method, it is possible to rigorously stabilize boundary schemes that are accurate but slightly unstable. In future works, we will extend the approach to nonlinear systems of conservation laws and more complex boundary geometries.

Fig. 1. Second order boundary condition for the D1Q2 model (Dirichlet on w at the left boundary and Neumann on y at the right boundary). Left: error rates in the L² norm for the three test cases. Second order is achieved for all the tests. Right: entropy evolution for test case 1. Once the peak has entered the domain, the entropy decreases.

Fig. 2. Second order boundary condition for the D2Q4 model, without entropy limiting. Left: error rates in the L² norm for the test cases of Table 2. Second order is achieved for all the tests for short times. Right: entropy evolution for test case 1. The scheme is unstable on the long time.

Fig. 3. Second order boundary condition for the D2Q4 model, with entropy limiting. Left: error rates in the L² norm for the test cases of Table 2. Second order is achieved for all the tests. Right: entropy evolution for test case 1. The scheme is now stable thanks to the entropy limiting procedure.

Table 1. Two different boundary condition strategies for the D2Q4 model. Left: it is possible to prove that this boundary condition is entropy dissipative, but it is first order accurate. Right: second order boundary condition, but not stable for long time simulations. By projecting this second-order boundary condition on the space of the decreasing-entropy boundary conditions, we obtain stability and second-order accuracy.

Table 2. Parameters of the test cases for the D2Q4 model.
test case   x_{1,0}   x_{2,0}   v_1      v_2
5           −√2/4     −√2/4     √2/2     √2/2
6           √2/4      √2/4      −√2/2    −√2/2
7           0         0         √2/2     √2/2
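To make the D1Q2 scheme and its stabilized right boundary condition concrete, here is a hedged Python sketch of the time marching: the shift (4), the over-relaxation (5) with ω = 2, the left Dirichlet ghost value of (9), a discrete Neumann closure on the flux error y in the spirit of (10), and the entropy-limiting projection enforcing s_2(f̃_2) = s_1(f_1). The grid, the velocity, the initial bump and the particular extrapolation used for the ghost flux error are illustrative choices, not the authors' implementation.

import numpy as np

lam, v, N = 2.0, 1.0, 200                # lambda > |v|: sub-characteristic condition
L, R = -1.0, 1.0
h = (R - L) / N
x = L + h * (np.arange(N) + 0.5)
r = np.abs(x + 0.5) / 0.2                # test case 1: peak at x0 = -1/2, sigma = 0.2
w = np.where(r < 1, (1 - r**2)**5, 0.0)
f1 = w / 2 + v * w / (2 * lam)           # start at equilibrium
f2 = w / 2 - v * w / (2 * lam)

def step(f1, f2, omega=2.0, limit=True):
    y = (lam - v) * f1 - (lam + v) * f2  # flux error in each cell
    # Right ghost value: zero-slope extrapolation of y, one plausible
    # discretization of (10), reusing the last interior f1:
    y_ghost = 2 * y[-1] - y[-2]
    f2_ghost = ((lam - v) * f1[-1] - y_ghost) / (lam + v)
    if limit:
        # entropy limiter enforcing s2(f2_ghost) <= s1(f1[-1]), cf. (8)
        bound = abs(f1[-1]) * np.sqrt((lam - v) / (lam + v))
        f2_ghost = float(np.clip(f2_ghost, -bound, bound))
    f1_ghost = -f2[0]                    # w = 0 at the left boundary, cf. (9)
    f1n = np.concatenate(([f1_ghost], f1[:-1]))   # shift to the right
    f2n = np.concatenate((f2[1:], [f2_ghost]))    # shift to the left
    wn = f1n + f2n
    e1 = wn / 2 + v * wn / (2 * lam)     # equilibrium after the shift
    e2 = wn / 2 - v * wn / (2 * lam)
    return omega * e1 + (1 - omega) * f1n, omega * e2 + (1 - omega) * f2n

for _ in range(2 * N):                   # march in time (tau = h / lambda)
    f1, f2 = step(f1, f2)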
04115280
en
[ "info.info-dc" ]
2024/03/04 16:41:24
2023
https://inria.hal.science/hal-04115280/file/23-03-21-jlesc.pdf
Bouteiller, Bosilca, Faverge, Dongarra, Wu: « Hierarchical DAG Scheduling for Hybrid Distributed Systems ». https://starpu.gitlabpages.inria.fr/

High-Performance Computing: classical parallel programming, threads.

[Figure 1: performance of compute kernels on CPU and GPU. Performance (Gflop/s); CPU saturation; GPU saturation (C2070 and K40c); cuBLAS SGEMM on K40c and C2070; 8-core CPU MKL SGEMM.]

How big should a task be? Tuning the tile size?
• B = "big" tile size, for GPUs

#pragma omp task \
  depend(in:A[0][0], inout:B[1][0])
TRSM(A[0][0], B[1][0])
#pragma omp task \
  depend(in:A[1][0], in:B[0][0], inout:B[0][1])
GEMM(A[1][0], B[0][0], B[0][1])
...

(NumPEx project)

How big should a task be? GPUs:
• have thousands of cores to feed
• newer generations require yet larger sizes
• can run several kernels at the same time, but this is still limited

CPUs can use parallel implementations (e.g. from MKL), but it is better to have subtasks to interleave them.

How big should a task be? One size does not fit all. Gwenolé Lucas experimented with Chameleon:
• Cholesky inversion
• 2 NVIDIA V100
• 2 Xeon Gold 6142
• different matrix tile sizes: 2880 / 960 / 320
https://starpu.gitlabpages.inria.fr

[Figure 5: performance for different tile size parameters (DPOTRF, using 1 GPU on Bunsen).]
[Figure 6: performance of h-PaRSEC Cholesky and QR when both employ the same tile size for kernels executed on] From PaRSEC: « Hierarchical DAG Scheduling for Hybrid Distributed Systems », Wu, Bouteiller, Bosilca, Faverge, Dongarra.

From CEA: « A hierarchical fast direct solver for distributed memory machines with manycore nodes », Augonnet, Goudin, Kuhn, Lacoste, Namyst, Ramet: "... core (2,620 seconds). We can thus compare the speedup obtained within a KNL in Figure 20. We obtain a parallel efficiency of 78.2% instead of 49.9% on 64 cores. It is also worth noting that this is a very challenging testcase because there is only 56MB of data per core in average when using 64 threads. On the distributed memory side, Figure 22 shows the strong scalability of a sphere testcase with 867000 unknowns on several nodes of TERA1000-1. In addition to the ideal scaling line, three other lines depict the time performances"
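The depend(in/inout) clauses above implicitly define a task graph. As a language-neutral illustration (in Python rather than OpenMP C), the hedged sketch below derives the dependency edges that a runtime would infer from such access declarations, using a simplified last-writer rule; the tile names and the added POTRF task are illustrative, not taken from the slides.

# Infer task-dependency edges from in/inout access declarations, mimicking
# what an OpenMP-style runtime does with "depend(...)" clauses (simplified:
# only the last writer of each tile is tracked, anti-dependencies ignored).
tasks = [
    ("POTRF", {"inout": ["A00"]}),                 # illustrative extra task
    ("TRSM",  {"in": ["A00"], "inout": ["B10"]}),
    ("GEMM",  {"in": ["A10", "B00"], "inout": ["B01"]}),
]

last_writer = {}
edges = []
for name, access in tasks:
    for tile in access.get("in", []) + access.get("inout", []):
        if tile in last_writer:
            edges.append((last_writer[tile], name))   # read/write after write
    for tile in access.get("inout", []):
        last_writer[tile] = name
print(edges)    # -> [('POTRF', 'TRSM')]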
04115343
en
[ "spi.nano" ]
2024/03/04 16:41:24
2023
https://hal.science/hal-04115343/file/manuscript_doi_and_copyright.pdf
Thomas Bauvent email: [email protected] Paola Trotti Olivier Billoint Jean-François Nodin Yasser Moursy Gabriel Molas Gaël Pillonnet J-F Nodin RRAM Device Performances under Capacitive-Enhanced Current Programming Scheme Keywords: resistive random access memory, RRAM, OxRAM, HfO2, operation scheme, current driver, current overshoot, switching variability INTRODUCTION RESISTIVE random access memories (RRAM) are emerging as a promising candidate for non-volatile memory (NVM) applications. Various memory suppliers are now bringing it up to the embedded market [START_REF] Peters | Reliability of 28nm embedded RRAM for consumer and industrial products[END_REF], [START_REF] Ito | ReRAM Technologies for Embedded Memory and Further Applications[END_REF]. Oxide based RRAM are made of a stack of metal / oxide / metal and are generally called OxRAM. The cells can be switched from a high-resistive state (HRS) to a low-resistive state (LRS) by electrical means; we call it the set operation. The reset operation makes the cells go back from LRS to HRS. Most of the set schemes are based on voltage drivers. During set operation, the abrupt reduction of the resistance induces a current surge. To prevent damaging the cell, voltage drivers incorporates current limiting elements. To enhance diverse metrics, refinements of this general scheme have been studied such as write termination [START_REF] Liu | 4.7 A 65nm ReRAM-enabled nonvolatile processor with 6× reduction in restore time and 4× higher clock frequency using adaptive data retention and self-write-termination nonvolatile logic[END_REF]- [START_REF] Chen | A 16Mb dualmode ReRAM macro with sub-14ns computing-in-memory and memory functions enabled by self-write termination scheme[END_REF], delayed write termination [START_REF] Yang | 24.2 A 14nm-FinFET 1Mb Embedded 1T1R RRAM with a 0.022µm2 Cell Size Using Self-Adaptive Delayed Termination and Multi-Cell Reference[END_REF], [START_REF] Chou | An N40 256K×44 embedded RRAM macro with SL-precharge SA and low-voltage current limiter to improve read and write performance[END_REF], and voltage ramps [START_REF] Sassine | Novel Computing Method for Short Programming Time and Low Energy Consumption in HfO2 Based RRAM Arrays[END_REF], [START_REF] Feng | Improvement of State Stability in Multi-Level Resistive Random-Access Memory (RRAM) Array for Neuromorphic Computing[END_REF]. The conductance in LRS is an important parameter as it is linked to metrics such as the read margin or the retention. While RRAM writing is field activated [START_REF] Ielmini | Evidence for Voltage-Driven Set/Reset Processes in Bipolar Switching RRAM[END_REF], the conductance in LRS has been reported to be strongly linked to the maximum current passing through the cell [START_REF] Ielmini | Resistive switching memories based on metal oxides: mechanisms, reliability and scaling[END_REF]. In light of this observation, it seems attractive to control the current rather than the voltage. Current control writing has mainly been reported on in DC / quasi-static studies [START_REF] Li | Self-compliance SET switching and multilevel TaOx resistive memory by current-sweep operation[END_REF]- [START_REF] Lian | Improved Resistive Switching Uniformity in Cu/HfO2/Pt Devices by Using Current Sweeping Mode[END_REF]. 
To the best of the authors' knowledge, only a single publication [START_REF] Jain | 2 A 3.6Mb 10.1Mb/mm2 Embedded Non-Volatile ReRAM Macro in 22nm FinFET Technology with Adaptive Forming/Set/Reset Schemes Yielding Down to 0.5V with Sensing Time of 5ns at 0.7V[END_REF] wrote about using a current driver for writing RRAM at time scales of the order of the microsecond. However, its reliance on a voltage limiter subdues its potential benefits. The advantages and tradeoffs of current-based programming against the voltage-based approach have not been clarified so far. Another issue RRAM faces is high programming current. Since the main conducting mechanism in LRS is filamentary, the LRS conductance does not scale with the device [START_REF] Sandrini | OxRAM for embedded solutions on advanced node[END_REF]. This problem cannot be solved by programming the cells to a lower conductance, since that is detrimental to retention and to the read margin [START_REF] Chen | Improvement of data retention in HfO2/Hf 1T1R RRAM cell under low operating current[END_REF], [START_REF] Chen | Programming-conditions solutions towards suppression of retention tails of scaled oxide-based RRAM[END_REF]. In this letter, we explore a novel current-driven writing scheme. To tackle the current scaling issue, we exploit the current spike caused by the inherent parasitic line capacitance in RRAM matrices. This enables the scheme to write cells to a high conductance with limited current amplitude. This helps relax issues with IR drop and pushes back the electromigration barrier, which are both concerns for scaling. It is also beneficial toward lowering the programming energy consumption.

EXPERIMENTAL
The measurements presented hereafter were obtained on the stack TiN / Ti / HfO2-5nm / TiN with a cell surface of 300 x 450 nm. We used a 130 nm CMOS technology with RRAM co-integrated in between M4 and M5 [START_REF] Sandrini | OxRAM for embedded solutions on advanced node[END_REF]. The RRAM are individually selected using pass-gates, as we do not need current limitation in our proposed scheme. Fig. 1c presents the operating principle of our writing scheme. As explained in the next section, with a current driver, the parasitic line capacitance Cp plays an important part in defining the dynamics of the system. To have a realistic capacitance value Cp with respect to Mb matrices, the current source must be co-integrated with the RRAM bank. With post-layout extraction, we estimated that Cp was approximately 600 fF in our integrated circuit. With scaling, and at constant bank size, this capacitance will tend to decrease. Its value has no effect on the current spike amplitude, but the spike energy is directly proportional to it [START_REF] Shrestha | Analysis and Control of RRAM Overshoot Current[END_REF]. This leads us to believe that there might be some need to adapt the scheme for more advanced technology nodes.

CAPACITIVE-ENHANCED CURRENT DRIVER
When a voltage driver is used, the parasitic line capacitance Cp does not play a major role. Other parasitic capacitances play a role in current spiking, such as the one on the node linking the cell to its current limiter [START_REF] Shrestha | Analysis and Control of RRAM Overshoot Current[END_REF]. However, when using a current driver, Cp plays an integral part in the writing process. That is why we call the driver in this scheme the Capacitive-Enhanced Current Driver (CECD). Fig. 2 presents a comparison between these two driving methods on a set operation. With voltage driving, when the set event occurs, the output current of the driver quickly rises to the compliance. As a result, the current limiter makes the voltage on the RRAM drop, but the current through the device stays at the compliance level until the voltage pulse ends. In this work, with CECD, both the current passing through and the voltage on the RRAM rise at a limited rate which is proportional to Iprog / Cp, where Iprog is the output current of the driver. When the set occurs, the voltage drops and the current passing through the RRAM spikes as Cp discharges through the RRAM. As long as the switching dynamics of the RRAM is faster than the RC time constant of the parallel combination of Cp and the impedance of the cell (≈ 5 kΩ × 600 fF = 3 ns in this work), the amplitude of the spike does not depend on the value of the capacitance [START_REF] Shrestha | Analysis and Control of RRAM Overshoot Current[END_REF]. The capacitance value only affects the total extra energy received. Since the conductance of the LRS is mainly defined by the maximum peak current passing through the RRAM device [START_REF] Ielmini | Resistive switching memories based on metal oxides: mechanisms, reliability and scaling[END_REF], we can expect the conductance in LRS to be well controlled, independently of the precise value of Cp. In the case of memory matrices, this should make this scheme robust to cell placement. Thanks to the capacitive current spike, the maximum current passing through the RRAM is higher than Iprog. As such, with CECD, the output current of the driver can be significantly lower than the one in a voltage driver, for an equivalent LRS conductance. This reduces the constraints due to IR drop in memory matrices, pushes back the electromigration barrier, and reduces the total energy required per set.

RESULTS AND DISCUSSION
Fig. 3a shows the distributions of read currents obtained on a device for different Iprog. It shows that the distribution is mostly independent of the current pulse amplitude as long as it is above a given threshold, which in our experiment was found to be below 23.5 µA. In voltage-driven schemes, a minimum voltage is required for writing the cells. Here, this limitation translates to a minimum current through the relation linking the voltage, current and impedance of the cell. To obtain the same conductance level with a typical voltage driver, a compliance current of at least 200 µA would be required. The irregularities near the high-current tails seem to be linked to LRS instabilities that occur stochastically during cycling. This does not affect our conclusion on the independence of the LRS from Iprog, which contrasts sharply with the importance of the compliance current in voltage schemes. Fig. 3b shows the distributions of read currents obtained on a device for different pulse durations from 1 µs to 10 µs. Again, the read current distributions are found to be independent of the pulse duration, which is to be expected given that this scheme is naturally self-terminated. Indeed, by comparing the measured read current at 0.2 V with Iprog, we can infer that, after the set occurred, the voltage on the RRAM drops to a value close to 0.2 V.
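As a rough numerical illustration of the dynamics just described, the hedged Python sketch below integrates the node equation Cp dV/dt = Iprog − V/R for a current source charging the line capacitance in parallel with the cell, and drops the resistance from HRS to LRS when a set threshold is crossed. Only Cp ≈ 600 fF and the ≈ 3 ns RC scale come from the text; the threshold, the HRS/LRS levels and the drive current are illustrative assumptions, not the measured device parameters.

Cp, Iprog = 600e-15, 25e-6               # 600 fF line capacitance, 25 uA drive
R_hrs, R_lrs, V_set = 200e3, 5e3, 1.5    # illustrative device values
dt, T = 1e-11, 2e-7

V, R, t, I_peak = 0.0, R_hrs, 0.0, 0.0
while t < T:
    V += dt * (Iprog - V / R) / Cp       # node equation: Cp dV/dt = Iprog - V/R
    if R == R_hrs and V >= V_set:
        R = R_lrs                        # abrupt set event: HRS -> LRS
    I_peak = max(I_peak, V / R)          # current through the cell
    t += dt
print(f"peak cell current ~ {I_peak * 1e6:.0f} uA for a {Iprog * 1e6:.0f} uA drive")
print(f"cell voltage settles near {V:.2f} V")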
Therefore, from the point of view of the RRAM cell, the excitation roughly ends as soon as the spike from the set event wears off. That explains why the pulse duration has no measurable impact. Furthermore, the low remnant electric field could be beneficial to reduce relaxation as observed with delayed write termination [START_REF] Yang | 24.2 A 14nm-FinFET 1Mb Embedded 1T1R RRAM with a 0.022µm2 Cell Size Using Self-Adaptive Delayed Termination and Multi-Cell Reference[END_REF], [START_REF] Chou | An N40 256K×44 embedded RRAM macro with SL-precharge SA and low-voltage current limiter to improve read and write performance[END_REF]. Such as with the effect of the current pulse amplitude, we expect to hit a threshold in pulse duration under which the set operation would no longer be consistent. We successfully demonstrate that CECD works with pulses as short as 1 µs. Since the energy consumption is proportional to both the pulse duration and current amplitude, shorter pulses would be beneficial not only for writing speed, but also for energy efficient programming. Fig. 3c shows a comparison of energy consumption between our scheme and [START_REF] Jain | 2 A 3.6Mb 10.1Mb/mm2 Embedded Non-Volatile ReRAM Macro in 22nm FinFET Technology with Adaptive Forming/Set/Reset Schemes Yielding Down to 0.5V with Sensing Time of 5ns at 0.7V[END_REF], [START_REF] Chou | An N40 256K×44 embedded RRAM macro with SL-precharge SA and low-voltage current limiter to improve read and write performance[END_REF], [START_REF] Sassine | Novel Computing Method for Short Programming Time and Low Energy Consumption in HfO2 Based RRAM Arrays[END_REF]. This establishes our scheme as an energy efficient writing technique that does not require any extra write termination circuitry. Fig. 4a shows the scatter plot of the measured resistance under 0.2 V against the cycle index, in a 200 000 cycles experiment. After a period of seasoning of 2 000 cycles, both distributions remained stable throughout the remainder of the 200 000 cycles. Fig. 4b shows the distribution excluding seasoning. The observed raw bit failure rate is 5x10 -5 . For comparison, a distribution obtained with a conventional voltage scheme on the same technology is shown [START_REF] Giraud | Benefits of Design Assist Techniques on Performances and Reliability of a RRAM Macro[END_REF]. We can see that its LRS distribution is tighter, but achieves about the same raw bit failure rate. The low LRS/HRS ratio of less than one order of magnitude may seem insufficient for high capacity memories. However, by using circuit design strategies, a fully functional 128 kbit array with zero failures up to 1 M cycles was demonstrated on the same technology [START_REF] Giraud | Benefits of Design Assist Techniques on Performances and Reliability of a RRAM Macro[END_REF]. To further enhance the reliability, a write verify (WV) algorithm is employed. After each programming operation, if the resulting conductance is not satisfactory, the memory is cycled. This process is repeated until the conductance of the memory is within predefined bounds. Fig. 4c shows the histograms of the number of extra cycles required to meet the targeted read margin (RM). As expected, the larger the RM target, the more cycles are required. However, even for a target of 25 µA, only 4 extra cycles are required, at most, to meet our criteria on the 200 000 cycles of this experiment. 
In more than 99 % of the cases no extra cycle is necessary, and the maximum number of cycles is only necessary in rare events, less than 10 ppm. By comparison, the same write algorithm with the same rejection rate would only achieve a ~15 µA read margin with the conventional voltage scheme. For instance, a 25 µA read margin would hardly be possible with the voltage scheme. The lower slope of the distribution of LRS in the case of CECD, which could be seen as an inconvenience at first, actually makes WV more efficient. The WV algorithm requires extra cycles and as such consumes energy and ages the cell prematurely. However, since extra cycles are rare, its effect on global energy consumption and aging is marginal. The low write energy exhibited by this scheme raises a question about retention. This work being the first on this new current scheme concept, we focused on its key and unique features. Therefore, we did not specifically address retention. We can report that we did not experience any issue with shortterm retention during this study. We plan to fully address retention in future work. Given our previous observation that the LRS is independent of the value of Iprog above a threshold, the driver in this scheme need not be any more complicated than a fixed current source. As such, the surface required per driver is much lower than any voltage control implementation. CONCLUSION We presented a current driver based scheme for set operation in RRAM. The capacitive-enhanced current driver (CECD) utilizes the inherent parasitic line capacitance to set RRAM cells to a high conductance with limited current coming from the driver. We showed that the distribution of LRS is mostly independent of the pulse amplitude, duration and line capacitance. The scheme is used to program a cell for 200 000 cycles with a raw bit failure rate of 5 x 10 -5 . With write verification, it manages a read margin of 25 µA with only 4 extra cycles at worst. Since in more than 99 % of the cases the read margin is attained without requiring any extra cycle, the mean cost in time and energy of the write verification is marginal. The proposed scheme lowers the power consumption since it allows 10 x reduction in the current of the driver at a given LRS conductance compared to conventional drivers. This is a step forward to relax the constraints due to IR drop and electromigration and to promote the RRAM integration. Fig. 1 . 1 Fig. 1. (a) Photography of our test circuit. (b) Cross-section of the RRAM co-integrated in the BEoL (c) The writing scheme studied in this letter. The capacitance Cp models the parasitic line capacitance on BL in memory matrices. Fig. 3 . 3 Fig. 3. (a) Cycle-to-cycle distribution of read current for different write pulse current amplitude obtained with CECD. (b) Cycle-to-cycle distribution of read current for different write pulse duration. 
(c) Estimation of the energy consumption of the CECD scheme compared to a current driver scheme ignoring the effect of the parasitic capacitance [START_REF] Jain | 2 A 3.6Mb 10.1Mb/mm2 Embedded Non-Volatile ReRAM Macro in 22nm FinFET Technology with Adaptive Forming/Set/Reset Schemes Yielding Down to 0.5V with Sensing Time of 5ns at 0.7V[END_REF], the extra energy invested in D-WT [START_REF] Chou | An N40 256K×44 embedded RRAM macro with SL-precharge SA and low-voltage current limiter to improve read and write performance[END_REF], and Ramp Voltage Stress (RVS) [START_REF] Sassine | Novel Computing Method for Short Programming Time and Low Energy Consumption in HfO2 Based RRAM Arrays[END_REF]. The difference of spread in the LRS distribution between (a) and (b) is due to the use of two different dies.
Fig. 2. A comparison of the time evolution of the voltage and current in RRAM cells with a voltage driver on one side, and CECD on the other.
Fig. 4. (a) Scatter plot of the read resistance of 200 000 consecutive SET / READ / RESET / READ cycles with CECD. The bottom plot illustrates the effect of the WV with a RM of 25 µA. (b) Corresponding distribution (excluding the 2 000 seasoning cycles) with and without write verify (WV) and the distribution obtained in [19] with a conventional voltage scheme on the same technology. (c) Histograms of the number of extra cycles required by our WV scheme.
04115570
en
[ "phys", "spi" ]
2024/03/04 16:41:26
2023
https://hal.science/hal-04115570/file/79-2023-1152.pdf
François Richez Rohit Jain Prediction of the laminar-to-turbulent transition position on a helicopter rotor blade with cyclic pitch variation This work aims at assessing different laminar-to-turbulent transition models with different CFD codes for unsteady cases of a helicopter rotor with cyclic pitch variation. ONERA evaluates with the elsA code the semi-empirical transition criteria-based approach and the Langtry-Menter model, while the US Army investigates two versions of the Langtry-Menter model available in the Helios/Overflow code. The experimental transition position measurements provided by DLR show a complex hysteretic behavior of the transition front motion. The simulations show that the Langtry-Menter model can reproduce this hysteresis with a satisfactory agreement, if spatial accuracy is maximized using refined meshes and minimal numerical dissipation. The numerical results are then used to identify the transition mechanisms involved and the unsteady effects at the origin of the hysteresis.
NOTATION
a_∞ freestream speed of sound, m/s
b number of blades
c blade chord, m
C_f skin friction coefficient, τ_w/[0.5 ρ_∞ (rΩ)²]
C_T/σ rotor thrust coefficient, T/[ρ_∞ S σ (RΩ)²]
M²C_n section normal force in the local airfoil frame
M local Mach number, rΩ/a_∞
M_75 Mach number at 0.75R, (0.75RΩ)/a_∞
Re_75 Reynolds number at 0.75R, ρ_∞ (0.75RΩ) c/μ
R rotor radius, m
r radial coordinate, m
S rotor disk area, πR²
T_u freestream turbulence level, %
U_∞ freestream axial velocity, m/s
x_tr transition position in the chordwise direction, m
θ_75 collective pitch angle at r/R = 0.75, °
θ_1s longitudinal cyclic pitch angle, °
λ axial advance ratio, U_∞/(RΩ)
μ molecular viscosity, kg/m/s
μ_t eddy viscosity, kg/m/s
ρ_∞ freestream fluid density, kg/m³
σ rotor solidity, bcR/S
τ_w wall shear stress, N/m²
ψ blade azimuth, °
Ω rotor angular velocity, rad/s
INTRODUCTION
The laminar-to-turbulent transition process can have a strong impact on helicopter rotor performance. In hovering flight, laminar boundary layer regions can be observed on a very large part of the lower side of the blade and in the leading-edge area of the upper side. Due to the reduced levels of skin friction in these laminar regions, the blade profile power is decreased, which has a beneficial effect on the Figure of Merit. In forward flight, the blade sections see such large variations of both relative speed and local angle of attack that the transition position is expected to vary significantly with azimuth. However, boundary layer transition is still often neglected in most CFD simulations, where the flow is usually assumed fully turbulent. Consequently, the rotor torque is often overestimated with respect to experimental measurements. Turbulence transition models do exist but still suffer from a lack of validation for helicopter rotor applications, mainly because experimental measurements of the boundary layer transition position on helicopter rotor blades are challenging and still scarce in the public literature. Weiss et al. from the German Aerospace Center (DLR) recently proposed high-quality unsteady transition position measurements based on the Differential Infrared Thermography (DIT) technique on a small-scale dynamically pitching rotor
referred to as RTG hereafter (Ref. [START_REF] Weiss | Unsteady boundary-layer transition measurements and computations on a rotating blade under cyclic pitch conditions[END_REF]). CFD simulations were also performed in that article with the TAU code of DLR, using URANS (Unsteady Reynolds-Averaged Navier-Stokes) modeling coupled with a semi-empirical transition criteria approach. The experimental measurements show that the transition position as a function of the pitch angle exhibits a hysteresis loop, meaning that the transition position is different during the upstroke and the downstroke parts of the pitch cycle. The numerical simulation predicted the transition position well during downstroke but failed to accurately predict it during upstroke. One of the RTG test cases was later numerically investigated with transition modeling based on transport equations. Jain evaluated the Amplification Factor Transport (AFT) model and different versions of the Langtry-Menter model within the Helios CFD code and showed very good predictions of the transition hysteresis loop (Ref. 2). Carnes and Coder also used the AFT model within the OVERFLOW CFD code and paid particular attention to the effect of the freestream turbulence level (Ref. [START_REF] Carnes | Numerical Investigation of Unsteady Boundary Layer Transition on a Dynamically Pitching Rotor[END_REF]). We propose here to push the validation of the transition models further by investigating two different test cases of the RTG database, with two different CFD solvers involving different transition models. After introducing the test cases and the numerical approaches, a numerical sensitivity analysis is performed in order to define the best numerical strategies. Then the numerical simulations are assessed by comparison with the experimental data. In the last section, a discussion is conducted in order to clarify the transition mechanisms and the causes of the hysteresis.
TEST CASES
The RTG rotor, defined in Table 1, is a four-bladed rigid rotor with a radius R = 650 mm and a chord length c = 72 mm (Figure 1.a). The blades have a linear twist of -9.33°/R and are equipped with the DSA-9A airfoil (Figure 1.b). The rotor is operated with a small axial flow to prevent flow recirculation. The flow parameters of the two test cases considered here are presented in Table 2. Test Case V gives a Mach number at r/R = 0.75 of 0.21 and a cyclic pitch amplitude of 5.9° around a mean value of 9°. For Test Case II, the Mach number at 75% of the radius is reduced to 0.11 while the cyclic pitch amplitude is decreased to 2.9° around a mean value of 10.1°. For both cases, the azimuthal variations of the transition positions on the upper side of the blade are measured by means of the DIT technique as described in Ref. 1 (Figure 1.c). These experimental data will be used as the reference for the validation of the CFD simulations; the sketch below recalls the corresponding nondimensional quantities.
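To make the nondimensional quantities of the notation concrete, the short sketch below evaluates them for the Table 1 geometry. The rotor speed, axial inflow and air properties are assumed values chosen only to land near the Case V conditions; they are not data reproduced from Table 2.

```python
# Minimal sketch of the notation definitions applied to the RTG
# geometry of Table 1. omega, u_inf and the air properties are
# assumptions for illustration only.
from math import pi

R, c, b = 0.650, 0.072, 4                  # radius [m], chord [m], blade count
rho, mu, a_inf = 1.2, 1.8e-5, 340.0        # assumed air density, viscosity, sound speed
omega, u_inf = 146.0, 2.0                  # assumed rotor speed [rad/s], axial inflow [m/s]

sigma = b * c * R / (pi * R**2)            # solidity bcR/S -> 0.141, matching Table 1
M_75 = 0.75 * R * omega / a_inf            # Mach number at r/R = 0.75 -> ~0.21
Re_75 = rho * (0.75 * R * omega) * c / mu  # Reynolds number at r/R = 0.75 -> ~3e5
lam = u_inf / (R * omega)                  # axial advance ratio U_inf/(R*Omega)

print(f"sigma={sigma:.3f}, M_75={M_75:.2f}, Re_75={Re_75:.2e}, lambda={lam:.3f}")
```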
NUMERICAL APPROACHES
CFD solvers
Two different numerical approaches have been applied by the two partners of this joint research project. US Army DEVCOM AvMC used the Helios software (Ref. 4). In Helios, Overflow (Ref. [START_REF] Nichols | OVERFOW User's Manual, Version 2.2[END_REF]) is used for the blade body-fitted grids. Overflow used Roe's 3rd-order upwind scheme with MUSCL (Monotonic Upstream-centered Scheme for Conservation Laws) reconstruction for the convective flux, for both flow and turbulence. The 3rd-order flux used a 4th-order flux difference with a 3rd-order artificial dissipation term. Viscous fluxes were discretized using a second-order central discretization. The implicit time marching used the 2nd-order BDF (Backward Difference Formula). For temporal convergence, the Improved SSOR (Successive Symmetric Over-Relaxation) scheme with 80 Newton iterations was employed. A time step size of 0.25° was used. For the studies involving the effect of dissipation, a central difference scheme with different levels of dissipation was used for the inviscid flux, as described later. For temporal discretization, the second-order diagonalized Beam-Warming pentadiagonal scheme is used along with 80 dual-time stepping sub-iterations. The off-body solver, SAMCART, is a structured, adaptive, high-order Cartesian solver. The inviscid fluxes are discretized with a fifth-order central spatial scheme and a third-order explicit Runge-Kutta integration scheme is used for time advancement. The Helios simulations use the Delayed Detached-Eddy Simulation (DDES) based on the k-ω SST turbulence model coupled with the Langtry-Menter (LM) transition model (Ref. 6). The initial LM model has been modified to guarantee Galilean invariance (Ref. 2) and to include a crossflow transition correlation (Ref. 7). The Helios simulations based on Langtry-Menter with Galilean invariance are referred to as Helios-SST-LM-G, and the simulations that also include the crossflow correlation are referred to as Helios-SST-LM-G-CF in the following. ONERA applied the elsA solver (Ref. 8), which discretizes the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations with a finite volume approach. The temporal derivative discretization involves a second-order implicit Gear scheme, with an azimuthal time step of 0.25° and 80 sub-iterations. All the simulations are performed with the k-ω SST turbulence model (Ref. 9). Two different turbulence transition models are evaluated. The first one applies the semi-empirical transition criteria (TC) approach (Refs. 10 and 11) to detect laminar regions where the eddy viscosity is set to zero. The semi-empirical transition criteria are not computed locally but deduced from the chordwise history of the boundary layer, along the wall grid lines. Once transition is detected by one of the criteria, an intermittency function is defined on the blade wall surface. Then, following the wall normal direction, the intermittency is used as a weight in the calculation of the eddy viscosity of the turbulence model, in order to set the eddy viscosity to zero in the laminar regions. This approach is thus not based on additional transport equations. The details of this TC model can be found in Ref. 11. The second approach evaluated with elsA is the Langtry-Menter (LM) transition model which is, for its part, based on transport equations of two new variables as described in Ref. [START_REF] Langtry | Correlation-Based Transition Modeling for Unstructured Parallelized Computational Fluid Dynamics Codes[END_REF]. For TC simulations, the convective flux is discretized with Jameson's cell-centered second-order scheme using second- and fourth-order artificial viscosity coefficients (Ref. 12). For LM simulations, the convective flux is discretized with a specific version of the second-order AUSM+(P) scheme (Ref. [START_REF] Liou | 30 th Computational Fluid Dynamics, chap. AUSM schemes and extensions for low mach and multiphase[END_REF]) that reduces the numerical dissipation for low Mach number flows (Ref.
[START_REF] Mary | Large Eddy Simulation of Flow around an airfoil near stall[END_REF]). The elsA simulations performed with the LM model and the TC approach will be referred to as elsA-SST-LM and elsA-SST-TC hereafter.
Grid parameters
An overview of the grids used for the different solvers is shown in Figure 2. All the simulations are based on an overlapping grid strategy. The parameters of the blade grid and the background grid are detailed in Table 3 and Table 4, respectively. With Helios (Figure 2.a), the blade grids are composed of a main grid of 14.1 million nodes, completed with overlapping tip and root caps, leading to a total number of 22 million nodes per blade. The center body is also included in the simulation with a body-fitted grid of 11.3 million nodes. The background Cartesian grid is composed of several blocks with several levels of resolution, ensuring a finest grid resolution of 5% of chord around the rotor. It has 78 million points. For the elsA simulations, the rotor model is simpler. First, the center body is not included, as shown in Figure 2.b. Second, only the profiled part of the blade is meshed while the blade root is not considered. Two blade grids were built in order to investigate the sensitivity of the solution with respect to the grid resolution. The first grid (Figure 3.a), composed of 5.8 million nodes, has the same resolution as the grids typically used for rotor simulations at ONERA. It is referred to as the "coarse grid" hereafter. The second grid (Figure 3.b) is significantly refined, leading to a total number of 15.5 million points per blade. The background Cartesian grids are composed of 19 million nodes for the coarse grid and 48 million nodes for the fine grid (Table 4). All the parameters of the blade grids used by the partners are compared in detail in Table 3. One can remark that the grid resolution in the chordwise direction is very similar between the Helios grid and the elsA fine grid, while the elsA coarse grid is approximately a factor of two coarser. In the spanwise direction, the Helios grid is slightly finer than the two elsA grids. In the wall normal direction, the resolutions are similar between all grids, especially the wall normal spacing, which has been chosen in order to ensure Δy+ ≤ 1.
Extraction of the transition position
Estimating the transition location from a numerical simulation based on the Langtry-Menter model is not straightforward. The value of the eddy viscosity ratio or the intermittency is imposed to be zero at the wall. Therefore, an extraction of the flow field on the blade surface cannot be used to detect the transition. Thus, computing the transition position requires extracting a turbulent quantity at a certain distance from the wall that has to be defined by the user. Since the three CFD codes used here do not offer the same post-processing functionalities, each partner has defined its own strategy to estimate the transition position. On the ONERA side, the extraction of the transition position is done as follows: the flow field on the blade grids is stored at each time step. Then, a surface is extracted at a constant grid index in the wall normal direction (k). The value of the k-index is chosen to correspond approximately to the logarithmic region of the boundary layer. Then, at several blade sections, the transition position is defined as the first grid point in the chordwise direction where the eddy viscosity ratio gets larger than a user-defined threshold. Since the k-index could be out of the logarithmic region of the boundary layer, the process is repeated for five different values of k and the final transition position is defined as the minimum value of all the results (a sketch of this procedure is given below).
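A hedged sketch of this extraction, assuming the eddy viscosity ratio has been sampled on a (wall-normal index k, chordwise index i) array for one blade section; the array layout and the k-index values are illustrative placeholders, not the elsA post-processing itself:

```python
import numpy as np

def transition_index(mu_t_ratio, threshold=1.0, k_indices=(10, 12, 14, 16, 18)):
    """Most upstream chordwise index where mu_t/mu first exceeds the
    user-defined threshold, minimized over several wall-normal lines
    (k_indices are placeholder values near the logarithmic region)."""
    first_points = []
    for k in k_indices:
        above = np.flatnonzero(mu_t_ratio[k, :] > threshold)
        if above.size:                      # turbulence detected on this k-line
            first_points.append(above[0])   # first chordwise point above threshold
    return min(first_points) if first_points else None
```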
The transition location in the Helios/Overflow computations is determined using a special post-processing of the boundary layer flow field, as follows. For each azimuth and each radial station, the boundary layer over the blade section is examined for growth in turbulence. The chord location on the section where the turbulence level in the boundary layer has grown by an order of magnitude with respect to the inflow boundary is flagged as the transition location. Since structured grids were used for the blades, this post-processing method could be applied in a straightforward manner, as the grid lines are aligned along the surface tangent and normal directions.
NUMERICAL SENSITIVITY
In order to define the best numerical strategy, the partners first investigated the sensitivity of the solution with respect to numerical parameters. In this preliminary activity, only Case V of the database is considered (Table 2). ONERA investigated the sensitivity with respect to the time integration with the elsA solver. For that purpose, the number of sub-iterations of the Newton process used to iteratively solve the non-linear system at each time step is increased from 20 to 40 and finally 80, while the azimuthal time step is kept at a constant value of Δψ = 0.25°. All the simulations are performed with the LM model on the coarse grid defined in Table 3 and shown in Figure 3.a. The transition position x_tr is defined as the chordwise position where the eddy viscosity ratio gets higher than μ_t/μ = 1. The azimuthal evolutions of x_tr thus defined are compared at the section r/R = 0.75 in Figure 4.a. As expected, all the simulations predict that the transition x_tr/c is located around 80% of chord in the first part of the cycle (0° ≤ ψ ≤ 180°), where the pitch angle is the lowest, while the transition moves to the leading-edge region in the second half of the cycle (180° ≤ ψ ≤ 360°), where the pitch angle reaches its maximum value. However, it can be seen that the azimuthal angle where the transition moves upstream and downstream is very sensitive to the number of sub-iterations used in the time integration. In particular, the upstream transition motion occurs at ψ = 280° with 20 sub-iterations instead of ψ = 240° when 80 sub-iterations are used. This high sensitivity with respect to the time integration is probably due to the fact that the transition motions occur in a very fast manner. In light of this observation, the number of sub-iterations is fixed to a minimum value of 80 in the rest of this work. It can also be remarked that the transition position shows very large oscillations when it is located upstream, i.e. for 250° ≤ ψ ≤ 360°. This will be discussed in the next paragraph. The influence of the grid resolution is also investigated for the LM model, still with the elsA code. Two simulations are performed for Test Case V, using the coarse and fine grids shown in Figure 3 and defined in Table 3. The variations of the transition position at r/R = 0.75 are compared in Figure 4.b. The grid refinement completely eliminates the large oscillations of the transition position x_tr/c observed for 250° ≤ ψ ≤ 360° and previously mentioned. The reason lies in the nature of the transition process.
Indeed, as indicated by the sign of the skin friction coefficient at r/R = 0.75 and ψ = 270° (Figure 5), the transition occurs in a Laminar Separation Bubble (LSB) that extends over 15% of chord. The fine grid discretizes the LSB with 100 grid points in the chordwise direction while the coarse grid provides fewer than 20 points inside the LSB. The poor resolution of the coarse grid does not permit a sufficient production of turbulence kinetic energy to ensure a stable reattachment point of the LSB (Figure 6). Thus, the very high resolution of the fine grid as defined in Table 3 seems mandatory to properly capture the transition process for this specific test case. The grid refinement also slightly shifts the transition position downstream for 0° ≤ ψ ≤ 180°, but this effect is less significant than the one observed in the second half of the cycle. In conclusion, the fine grid is used in the following, since it provides a better capture of the LSB. The influence of the numerical dissipation was also assessed using Helios/Overflow. For this study a central difference scheme was used for the inviscid flux discretization, as the accuracy and dissipation levels can then be varied independently. Table 5 shows the various settings. The first row, DISS1, is the baseline case, which used Roe's 3rd-order upwind scheme. The other cases, DISS2, DISS3, and DISS4, used a central difference scheme in order of increasing numerical dissipation: DISS2 used a fifth-order scheme with a small artificial dissipation coefficient of 0.01; DISS3 had the coefficient increased to 0.04; and DISS4 was the most dissipative, with a second-order central scheme and a large dissipation coefficient. The effect on the transition location prediction is illustrated in Figure 7. DISS1 and DISS2 have minimal dissipation and their predictions agree closely. However, as the dissipation is increased, the transition onset is, in general, delayed. The Helios/Overflow results presented hereafter used the DISS1 settings. As previously mentioned, extracting the transition location x_tr from a numerical simulation based on the LM model is not straightforward. In the x_tr calculation method presented above, the threshold value of μ_t/μ used to define the transition has to be arbitrarily set by the user. In order to evaluate the sensitivity of x_tr with respect to this threshold value, the transition positions obtained with the criteria μ_t/μ = 1 and 0.1 were compared in the elsA computations. In both cases, the transition positions are computed from the same flow field provided by the elsA code for Test Case V. The two threshold values are significantly lower than the level reached in a turbulent boundary layer. Thus, we can expect x_tr to be weakly sensitive to these values. However, the results shown in Figure 8 indicate that the effect is not negligible. The threshold μ_t/μ = 0.1 provides a transition location that lies upstream by 2% of chord for 240° ≤ ψ ≤ 360° and by 5% of chord for 60° ≤ ψ ≤ 230°. These results show that the chordwise evolution of turbulence production, and consequently of eddy viscosity, is rather slow compared to more usual high Reynolds number flows. In the following, the threshold μ_t/μ = 0.1 is kept, since it provides a more continuous evolution of x_tr, especially in the azimuthal portion 0° ≤ ψ ≤ 60°.
VALIDATION
Following the best practices defined in the previous section, US Army and ONERA have performed the simulations for both Test Case V and Test Case II.
Test Case V
The rotor maps of the transition positions for Test Case V are shown in Figure 9. As a reminder, in the experimental rotor map (Figure 9.a), the minimum pitch angle is reached at ψ = 90° and the maximum pitch angle at ψ = 270°. The rotor map of the experimental measurements also reveals that the transition position is not symmetrical with respect to the lateral axis. In particular, the upstream motion of the transition occurs around 210° of azimuth and the downstream motion occurs around 30° of azimuth. This means that the transition position evolves differently between the upstroke and downstroke parts of the pitch cycle. The rotor maps of transition position provided by the Helios simulations (Figure 9.b and Figure 9.c) show a good agreement with the experiment. The addition of the crossflow correlation to the original LM transition model slightly improves the results, especially at the inboard sections around ψ = 210° and at the outboard sections around ψ = 30°. Despite these slight differences, one can conclude that both models provide the right trend, especially the dissymmetry of x_tr between the upstroke and downstroke parts of the pitch cycle. The elsA simulation based on the LM model is shown in Figure 9.d. Unlike the LM model in Helios, no correction has been added to the original model in the elsA code to ensure Galilean invariance. However, the elsA-SST-LM transition map (Figure 9.d) looks rather similar to the one provided by Helios (Figure 9.b). Once again, the asymmetry of the rotor map is quite well predicted, although the upstream transition motion during upstroke and the downstream transition motion during downstroke are slightly delayed. The elsA simulation based on the TC approach provides the rotor map shown in Figure 9.e. The agreement with experiment is not as good as with the LM model. The downstream motion of transition around 30° of azimuth is quite well predicted. However, the strong delay of the upstream motion of x_tr during the upstroke is not captured. Finally, the TC approach gives a symmetric transition map, which indicates that this transition model does not predict significant differences of x_tr between upstroke and downstroke. For a more quantitative assessment of the models, the azimuthal evolution of the transition position at r/R = 0.75 is shown in Figure 10 for both codes. Figure 10.a confirms that Helios-SST-LM-G provides a very good agreement with the experiments. Including the crossflow transition correlation in the model (SST-LM-G-CF) avoids a spurious evolution of x_tr observed for 20° ≤ ψ ≤ 40°, but also slightly degrades the agreement with experiment for 90° ≤ ψ ≤ 180°. The elsA simulation with the LM model shown in Figure 10.b provides a prediction similar to Helios-SST-LM-G. In particular, the spurious evolution of x_tr between 20° and 40° of azimuth is again observed. The transition position is, however, slightly more upstream with elsA at all azimuths. Another significant difference between the LM model results of elsA and Helios lies in the upstream motion of transition that occurs between 180° and 270° of azimuth. While the Helios prediction is perfectly in phase with the experimental measurements, elsA shows a noticeable delay of the upstream motion of x_tr. The reasons for the discrepancy between Helios-SST-LM-G and elsA-SST-LM can be manifold. As seen in the previous section, the solution is very sensitive to the numerical parameters.
Thus, the differences in numerical methods between Helios and elsA could explain the discrepancy. The corrections introduced in Helios to allow for Galilean invariance, which are not used in elsA, could also be the reason for this slight disagreement between the codes. Looking now at the elsA results based on the TC approach in Figure 10.b, one can observe a good agreement of the transition position during downstroke, i.e. from ψ = 270° to ψ = 90°. However, during upstroke, the TC model fails to predict the strong delay of the upstream transition motion. Plotting the transition position as a function of the pitch angle highlights the dissymmetry between upstroke and downstroke in the form of a hysteresis loop. As shown in Figure 11, the hysteresis observed with the DIT measurements is large. For instance, at θ_75 = 9°, the transition is located at 30% of chord during downstroke and 80% of chord during upstroke. The LM model with both the Helios and elsA codes reproduces this large hysteresis, although the agreement is less good with elsA in the second part of the downstroke. The TC approach predicts a small hysteresis, narrow around the experimental measurements during downstroke.
Test Case II
As shown in Table 2, Test Case II differs from Test Case V by a lower cyclic pitch amplitude (2.9° instead of 5.9°), a slightly higher collective pitch (10.1° instead of 9°) and a lower rotor angular velocity, which leads to a decrease of the Reynolds number at 0.75R from 3.16×10⁵ to 1.72×10⁵. The Helios and elsA results for Test Case II are shown in Figure 12 in the form of transition maps. The experimental measurements again reveal a dissymmetry of x_tr between upstroke and downstroke. The Helios simulations based on the LM model (Figure 12.b and Figure 12.c) show a satisfactory prediction of the transition motions, although the agreement is not as good as for Test Case V. The crossflow correlation (Helios-SST-LM-G-CF) improves the prediction especially during upstroke, for 0° ≤ ψ ≤ 30°, similarly to what was observed for Test Case V. The elsA simulation with the LM model (Figure 12.d) gives the right trend but shows the same shortcoming as already observed for Test Case V. Indeed, the upstream motion of transition for 180° ≤ ψ ≤ 270° is too delayed. Furthermore, the downstream motion of transition that occurs between 0° and 30° of azimuth shows a disrupted evolution. With the TC approach (Figure 12.e), the transition is located too far upstream at all azimuths, and more significantly for 30° ≤ ψ ≤ 210°. A detailed look at r/R = 0.75 (Figure 13) confirms the observations previously mentioned. Helios provides a good agreement of the transition between 0° and 220° of azimuth. The upstream motion of x_tr is also well captured, but the transition is located too far upstream for 230° ≤ ψ ≤ 360°. With the elsA code, the LM model gives an unexpected "bump" of x_tr between 45° and 90° of azimuth, similarly to what was observed for Test Case V, but this spurious behavior is more pronounced here. Again, the upstream motion of transition (180° ≤ ψ ≤ 270°) is delayed and too abrupt in comparison with the experimental measurements. At the beginning of the downstroke (270° ≤ ψ ≤ 360°), the transition position is very similar to the Helios results but located too far upstream in comparison with the experiment. The TC approach shows a less satisfactory prediction of x_tr. For 0° ≤ ψ ≤ 180°, the transition is located too far upstream. It reaches 60% of chord instead of 85% in the experiment.
On the second half of the cycle, the transition position shows some oscillations, probably due to a lack of turbulence production in the LSB. Plotting the transition position with respect to the pitch angle θ_75 reveals that Test Case II leads to a hysteretic behavior of the transition (Figure 14), in the same way as Test Case V. As observed for Test Case V, the LM model simulations predict this large hysteresis while the TC approach underpredicts its amplitude.
DISCUSSION
We propose in this section to analyze the origin of the hysteresis observed in the evolution of the transition location as a function of the pitch angle. Since the phenomenon looks similar between the two test cases, we will focus the discussion on Test Case V only. A first possible cause of the transition hysteresis could be a hysteresis in lift. Indeed, if lift, and thus the pressure distribution, is different during upstroke and downstroke, the transition position will consequently differ. Figure 15 shows the evolution of the section normal force at r/R = 0.75 as a function of the pitch angle θ_75 predicted by the numerical simulations. One can see that both Helios-SST-LM-G-CF and elsA-SST-TC provide the same normal force evolution as the one provided by a fully turbulent simulation (referred to as "elsA-SST-FT" in the figure). Furthermore, the hysteresis in normal force is very weak, which suggests that the hysteresis of transition is not due (or at least not only due) to a hysteresis in lift. This confirms what was suggested by Weiss et al. in Ref. [START_REF] Weiss | Unsteady boundary-layer transition measurements and computations on a rotating blade under cyclic pitch conditions[END_REF]. The semi-empirical transition criteria used for the elsA-SST-TC simulation are based on quasi-steady assumptions. At each time step, the transition position is deduced from the pressure distribution and the properties of the boundary layer at that time step only. Thus, the TC approach of the elsA code is able to predict a hysteretic behavior of transition only if this one is caused by a hysteresis in lift. Since the hysteresis in lift is small here, it is not surprising to observe that the elsA-SST-TC simulation predicts a small transition hysteresis. Furthermore, the discrepancy between the measurements and the elsA-SST-TC results finally gives more credence to the idea that the transition hysteresis observed in the experiments is not caused by lift hysteresis. Additional elsA-SST-TC simulations have been performed without cyclic pitch and with collective pitch angles of 6°, 9° and 12°. The transition positions obtained with these three "non-cyclic" simulations are added to the Test Case V results in Figure 16, in order to highlight how the unsteady case differs from a quasi-steady condition. The figure indicates that the transition position of Test Case V is very close to the non-cyclic results during downstroke, while it strongly diverges from them during upstroke. This tends to prove that the transition motion during downstroke follows a quasi-steady state, and that the hysteresis is mainly due to unsteady effects during upstroke. In order to identify these unsteady effects, we propose in Figure 17 and Figure 18 to follow the azimuthal evolution of the transition position together with the skin friction distribution C_f at r/R = 0.75. Figure 17 shows the Helios-SST-LM-G-CF and elsA-SST-LM results that are assumed to be representative of the hysteresis observed experimentally.
Figure 18 shows the elsA results using the TC approach, which are considered as the quasi-steady solution. At the beginning of downstroke (ψ = 270° and ψ = 315°), the transition occurs in an LSB located around 20% of chord, as indicated by the negative value of C_f. Both the LM model and the TC approach provide almost the same transition positions and skin friction distributions. In the second part of downstroke, from ψ = 0° to ψ = 90°, all the transition models predict that the transition progressively moves downstream to finally reach 80% of chord. However, the differences observed in the skin friction evolution between Figure 17 and Figure 18 point out that the transition process is different between the two models. On one hand, the LM model (Figure 17) predicts that transition is still induced by an LSB from 0° to 90° of azimuth and that the LSB moves downstream together with the transition point. On the other hand, the transition predicted by the TC approach (Figure 18) occurs at the limit of flow separation at ψ = 0°. Then, we observe an attached boundary layer transition that moves downstream until ψ = 90°. During the first part of upstroke (from 90° to 210° of azimuth), the quasi-steady assumption (Figure 18) shows that the attached flow transition moves progressively upstream, preventing any laminar separation in the leading-edge region. With the LM model (Figure 17), in contrast, the transition is stuck in the trailing-edge region, which allows another laminar separation to emerge around 20% of chord at ψ = 210°. Between 210° and 215°, the LM model shows two flow separation regions: a first one close to the trailing edge which corresponds to the vanishing LSB responsible for transition during the previous azimuths, and a second one around 20% of chord which corresponds to the arising LSB responsible for the transition at the current time. It is noted that the prediction of the coexistence of the two LSBs in the Helios simulation is limited to a small range of azimuth (210° ≤ ψ ≤ 215°), while in the elsA-SST-LM simulation it is stretched up to ψ = 232°. However, in both cases, the coexistence of the two separation regions is observed. At the end of upstroke, from 240° to 270° of azimuth, the trailing-edge flow separation disappears and all the simulations, whether with the LM model or the TC approach, give an LSB between 10 and 30% of chord.
CONCLUSIONS
Different turbulence transition models have been evaluated for unsteady conditions of a small rotor in low-speed axial flow with cyclic blade pitch variations. Two CFD solvers have been used: Helios on the US Army side and elsA on the ONERA side. US Army has evaluated two versions of the Langtry-Menter model: the standard version based on the Langtry correlation and a second one including a crossflow transition correlation. Since the Langtry-Menter model is not Galilean invariant, corrections were introduced in Overflow to ensure the invariance for both versions. ONERA has evaluated two methods for transition modeling: the Langtry-Menter model (without Galilean invariance) and a semi-empirical transition criteria approach. A numerical sensitivity analysis has first been performed and the conclusions are as follows: • The Langtry-Menter model is very sensitive to the grid resolution, especially in the chordwise direction. In regions where a laminar separation bubble appears, an overly coarse grid leads to a lack of turbulence production and the appearance of non-physical oscillations of the reattachment point.
The grid resolution required for the RTG test cases is significantly higher than the one typically used for full-scale helicopter rotor flow simulations. • The Langtry-Menter model is also very sensitive to the numerical dissipation and to the accuracy of the time integration. These parameters mainly impact the azimuthal phase lag of the upstream motion of the transition front. • The Transition-Criteria approach does not require such a refined grid and small time step to reach a converged solution (although this is not shown in the paper). The capability of the simulations to predict the unsteady motions of the transition front has then been assessed and the analysis has led to the following conclusions: • The experimental measurements reveal a strong hysteretic behavior of the transition motion, meaning that the transition positions are different during the upstroke and downstroke parts of the pitch cycle. • The LM model in the Helios code provides a very good agreement with the experiment, especially concerning the hysteresis. The crossflow transition correlation slightly improves the prediction. • The LM model in the elsA code provides a satisfactory agreement with experiment, but not as good as Helios. The slight discrepancies between the two codes could be due to the numerical methods or the Galilean invariance corrections. • The Transition-Criteria approach provides a good estimation of the upstream and downstream extreme positions of transition for Test Case V but not for Test Case II. In both cases, this approach fails to predict the large hysteresis of the transition. Finally, an analysis has been conducted in order to understand the transition mechanisms and the origins of this hysteresis. It has led to the following conclusions: • The hysteresis in transition is probably not caused by hysteresis in lift. • The transition position during downstroke follows a quasi-steady state, while unsteady effects appear during upstroke. • Since the transition-criteria approach is based on quasi-steady assumptions, it can only reproduce the downstroke part of the cycle. • According to the LM model, the transition is induced by a laminar separation bubble almost all along the pitch cycle. • The phase lag of the transition position during the upstroke leads to the coexistence of two laminar separation bubbles over a short range of azimuth.
Figure 1. Overview of the RTG rotor test model (from Ref. 1).
Figure 5. Skin friction coefficient at r/R = 0.75 and ψ = 270° (LM model, elsA, fine grid).
Figure 7. Sensitivity of the Langtry-Menter model with respect to numerical dissipation (Helios simulations).
Figure 8. Sensitivity of the x_tr calculation with respect to the eddy viscosity ratio threshold μ_t/μ.
Figure 15. Section normal force M²C_n as a function of pitch angle at r/R = 0.75 (Test Case V).
Figure 16. Transition position as a function of pitch angle at r/R = 0.75 (Test Case V), including the results for non-cyclic conditions with elsA-SST-TC (red triangle symbols).
Figure 17. Skin friction coefficient C_f at r/R = 0.75 for Test Case V: elsA-SST-LM (green line), Helios-SST-LM-G-CF (blue line).
Letter "T" indicates the transition position for elsA (green) and Helios (blue). Figure 18 . 18 Figure 18. Skin friction coefficient 𝑪𝑪 𝒇𝒇 at 𝒓𝒓/𝑹𝑹 = 𝟎𝟎. 𝟕𝟕𝟕𝟕 for Test Case V with elsA-SST-TC. Letter "T" indicates the transition position. Table 1 . Rotor blade parameters. 1 Radius, 𝑅𝑅 650 mm Chord length, 𝑏𝑏 72 mm Tip sweep Parabolic (0.91 ≤ 𝑟𝑟/𝑅𝑅 ≤ 1) Twist rate -9.33°/𝑅𝑅 Airfoil DSA-9A Number of blades, 𝑏𝑏 4 Solidity, 𝜎𝜎 0.141 Table 2 . Flow parameters of the two test cases of the RTG database. 2 Case V Case II Table 3 . Blade grid parameters. 3 Helios elsA coarse grid elsA fine grid Table 4 . Background Cartesian grid parameters. 4 Helios elsA coarse grid elsA fine grid Table 5 . Numerical dissipation parameters evaluated with Helios. 5 Order IRHS Dissipation Number of Coefficient Sub-iterations DISS 1 Third Roe Upwind - 80 DISS 2 Fifth Central 0.01 80 DISS 3 Fifth Central 0.04 40 DISS 4 Second Central 0.04 40 ACKNOWLEDGMENTS The authors would like to acknowledge the U.S./France Project Agreement on Rotary-Wing Aeromechanics and Human Factors Integration Research, a longstanding cooperation between the U.S. Army, ONERA and DGA. The authors would like to acknowledge the DLR for sharing the experimental measurements of the RTG rotor test that were used for the validation of the numerical simulations.
04115684
en
[ "phys.phys.phys-optics", "phys.cond.cm-ms" ]
2024/03/04 16:41:26
2023
https://hal.science/hal-04115684/file/Article-PLD-Erbium-2023-vHal.pdf
A Gassenq email: [email protected] H-S Nguyen E Cleyet-Merle S Cueff A Pereira Diffraction grating enhanced photoluminescence from etching-free erbium thin films Micro-structuration by etching is commonly used in integrated optics, adding complex and costly processing steps which can also potentially degrade device performance through the degradation of the etched sidewalls. For diffraction grating fabrication, different strategies have been developed to avoid etching, like layer deposition on structured surfaces or grating deposition on top of active layers. However, etching remains one of the best processes for making high aspect ratio diffraction gratings. In this work, we have developed fully structured diffraction gratings (i.e. like fully etched gratings) using lift-off based processing performed in pulsed laser deposited layers, since the combination of both techniques is of great interest for making microstructures without etching. We have first studied the influence of the lithography dose in the lift-off process, showing that (1) micrometric spatial resolution can be achieved and (2) the sidewall angle can be controlled from 50 to 150° in 0.5 µm thick layers. Using such optimizations, we have then fabricated Er-doped Y2O3 uniaxial diffraction gratings with different periods ranging from 3 to 8 µm. The fabricated devices exhibit emission and reflectivity properties as a function of the collection angle in good agreement with the modeling, with a maximum luminescence enhancement of ×15 compared to an unstructured layer at a wavelength of 1.54 µm. This work thus highlights lift-off based processing combined with pulsed laser deposition as a promising technique for etch-free practical applications such as luminescence enhancement in Er-doped layers.
INTRODUCTION
Material etching is a standard process in photonics for the fabrication of a large variety of micro-devices like waveguides, lasers, photodetectors, gratings, and so on. It consists of the ablation of a surface through a micro-structured mask (e.g. photoresist). Such a processing step remains complicated, expensive and potentially detrimental for practical applications, since it can also induce extra losses due to the degradation of materials along etched sidewalls [1,2] as well as undesired roughness. For instance, grating devices are widespread in photonics for light coupling [3], filters [4], mirror definition [5] and so on. Furthermore, luminescence enhancement by diffraction gratings is also very interesting to increase the spontaneous emission, to control the light direction or to increase the pump absorption [6][7][8][9][10][11]. For that purpose, different strategies have been developed to avoid etching in the emissive layer, like using diffraction elements below [6][7][8][12] or above the active layer [9,10], but these methods cannot be used to fully structure the emitting layers (like the fully etched gratings compared to the shallow etched gratings). For rare earth doped materials, etching is also commonly used in photonics [13][14][15][16][17]. Other methods have also been studied for integration, but with millimetric resolution [18,19], without structuration [10,20,21], using ion beams [22][23][24], or by layer deposition on pre-structured substrates [25,26].
Indeed, rare earth emitters have recently emerged as a very interesting platform for quantum photonics [27,28] due to the high coherence of lanthanides [29], which allows the realization of many quantum functions like sources [16], spin readout [17,24], memories [22], microwave transducers [23] and so on. Therefore, developing diffraction gratings with rare earth emitters without etching is of great interest to control the light emission of such layers for different applications like laser integration [20,30] with gratings, signal processing [31], quantum optics [27,28], and so on. In this work, we have fabricated Er3+ rare-earth-doped gratings by Pulsed Laser Deposition (PLD) and lift-off processing, without etching. Lift-off processing is usual in microelectronics but is limited to low thermal budget layers [32,33] and can also induce high sidewall roughness [34]. Since PLD can provide high quality films at low deposition temperature due to the high kinetic energy of the atoms during deposition [35], it is thus compatible with lift-off processing to fabricate thicker micro-devices without etching [36]. However, low roughness and high-resolution patterning in thick layers are relatively challenging with lift-off processing. For that purpose, we have first investigated the influence of the lithography dose on the shape of the microstructure to determine the minimum resolution size. We have then fabricated diffraction gratings with different periods and measured their emission and reflection properties as a function of the wavevector. Structured layers exhibit a controlled angular dependence in emission, in good agreement with theory, and a strong luminescence enhancement at certain angles. This work thus shows that lift-off processing combined with PLD is a relevant approach for etch-free fabrication of practical photonic devices like diffraction gratings, as future building blocks for many applications in integrated optics like rare-earth-ion based quantum nanophotonics [27].
GRATING
A. Description
Our reference design consists of a 0.6 µm thick Y2O3:Er3+ layer on SiO2, with refractive indices of 1.87 [37] and 1.44 [38] at 1.54 µm wavelength, respectively. This structure corresponds to a single-mode slab waveguide in the 1.3-1.8 µm infrared spectral window. Most of the Y2O3:Er3+ emission radiates into this guided mode and is trapped within the slab due to total internal reflection, while only ~8% of the emitted light may escape to the air. To enhance light extraction and absorption, a periodic corrugation of period p is introduced (see Figure 1-a). This modulation diffracts the initial guided mode and induces a band-folding effect: each diffraction order m applies a wavevector shift of Km = 2mπ/p to the dispersion of the initial guided mode. As a consequence, for certain values of the diffraction order m, the folded guided mode is brought above the light line and becomes a guided mode resonance (GMR). The GMRs are out-coupling channels for the Y2O3:Er3+ guided emission, and the angular pattern of the emitted light is dictated by the dispersion of the GMRs. At first approximation, when only the band-folding mechanism is taken into account, the dispersion of the GMRs can be analytically calculated (a short sketch is given below). Figure 1-c presents the calculated GMR dispersions over the 1.3-1.8 µm wavelength range, for three different periods: 2, 5 and 8 µm. The red dashed line corresponds to the wavelength at which the erbium emission is maximal (λEr = 1.54 µm for the 4I13/2 → 4I15/2 transition of Er3+). The gray shading region indicates the numerical aperture of our experimental setup (NA = 0.42) studied later in this work. For a small period of p = 2 µm, we "only" have eight GMRs, corresponding to diffraction orders m = ±1, ±2, ±3, ±4.
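Under the band-folding approximation above, a folded guided mode of effective index n_eff radiates at sin θ = ±n_eff − mλ/p, and the sketch below counts the GMRs falling within a given numerical aperture. The value n_eff = 1.7 is an assumed effective index for this slab, not a computed one.

```python
from math import asin, degrees

def gmr_angles(wavelength_um, period_um, n_eff=1.7, na=0.42, m_max=10):
    """Out-coupling angles (deg) of band-folded GMRs within the NA,
    for the two counter-propagating guided modes (+/- n_eff)."""
    angles = []
    for sign in (+1, -1):
        for m in range(-m_max, m_max + 1):
            s = sign * n_eff - m * wavelength_um / period_um
            if abs(s) <= na:               # radiates within the objective NA
                angles.append((m, round(degrees(asin(s)), 1)))
    return angles

# p = 2 um at the erbium line: only m = +/-2 survive, near +/-9 degrees
print(gmr_angles(1.54, 2.0))
```

With these assumptions, the same call for p = 3 µm returns four GMRs (m = ±3 near ±9° and m = ±4 near ∓21°), consistent with the four resonances discussed for the 3 µm grating later in this work.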
When limited to a numerical aperture NA = 0.42 and focusing in the vicinity of the erbium emission wavelength at 1.54 µm, only the two GMRs of m = ±2 exist for the 2 µm period, and these are responsible for the extraction of erbium emission to free space. However, for large periods, many more GMRs are present and the light extraction mechanism involves multiple channels. Similar arguments can be applied for the enhancement of the absorption mechanisms in the visible range (λp = 0.532 µm for the 4I15/2 → 2H11/2 transition of Er3+). In that case, the waveguide is already transversely multimode in the visible range, thus the number of GMRs in the visible range is much more important (band folding for different guided modes, with different diffraction orders). Therefore, the mode positions have been summarized as a function of the period at 0° in Figure 1-b around the Er3+ extracted emission (λ = 1.54 µm for the 4I13/2 → 4I15/2 transition) and absorption (λp = 0.532 µm for the 4I15/2 → 2H11/2 transition), which will be experimentally studied later in this work. We clearly see that the lower period absorbs at a lower diffraction mode (with different transverse modes k = 1, 2 or 3) and that the emission is single-mode (with only one transverse mode k = 1).
B. Modeling
FABRICATION
A. Processing
Figure 2 presents the process flow used for the grating fabrication. Micro-strips were first patterned with AZ5214 photoresist, previously spin coated on top of a 2 µm thick SiO2 layer on Si, with a laser writer using GDS files. Different exposure doses were investigated, whereas the development time was kept constant at 60 s. A 500 nm thick Y2O3:Er3+ (1 at. %) layer was then deposited on top of the structured substrate with a KrF excimer laser (λ = 248 nm, t = 17 ns, Coherent Compex Pro) operating at 5 Hz. The laser beam was focused on the target over 2 mm², and the laser energy and the Y2O3:Er3+ target to substrate distance were kept constant at 73 mJ and 6 cm, respectively. Substrate rotation was used to obtain a homogeneous layer thickness at a growth speed of 3.5 nm/min. The film was grown at room temperature in an oxygen-gas atmosphere (10⁻³ mbar). Lift-off was then performed in an ultrasonic wet bench with acetone. After the lift-off, the samples were annealed for 24 h at 650 °C in order to activate the doping and to improve the structural quality of the film [36].
B. Dose influence
Figure 3-a presents the influence of the lithography dose before and after the lift-off processing (corresponding to the process steps in Figure 2-b and 2-c) with SEM (Scanning Electron Microscopy) images. SEM images were taken at a 45° tilt angle with a TESCAN SEM at 5 kV with a secondary electron detector. Low doses were obtained with negative photoresist and high doses with positive photoresist, using the AZ5214E reversible photoresist from the MicroChemicals company, in order to control the slope angle of the photoresist [39]. By comparing the imaging before and after lift-off, we clearly see that the PLD layer follows the lithography shape. Figure 3-b plots the sidewall slope angle θ of the microstructure and the pattern width difference Δa (see inset of Figure 3-b) as a function of the lithography dose. This width difference corresponds to the absolute difference of the pattern size compared to the theoretical width (a in Figure 1-a), which was used in the GDS file sent to the laser lithography. For negative photoresist, the pattern size increases with the dose since the resist remains on the sample after exposure (i.e. negative mode). For positive photoresist, the dimension of the developed pattern decreases with the dose since the exposed region is removed from the sample after exposure (i.e. positive mode).
Regarding the shape of the devices, the obtained microstructures have a sidewall angle ranging from 50° to 150°. For positive photoresist, the sidewall angle is below 90°, with higher roughness coming from the breaking of the layer during the lift-off. For negative photoresist, low roughness is observed, since the deposited material is not in contact with the photoresist, especially for the lowest dose. These behaviors come from the reversible nature of the AZ5214 photoresist used. Therefore, the exposure dose is a key parameter which needs to be controlled as a function of the targeted shape. Figure 3-c presents two extreme examples of the minimum and maximum angles we can obtain on a disk. In that case, we observe "bowl" (for low dose with negative photoresist) or "pancake" (for high dose with positive photoresist) shapes, which confirms that we can obtain atypical shapes and very low roughness (for low dose) with this technique. Indeed, as can be seen in the SEM images after lift-off for low dose in Figure 2-a, the sidewall roughness is of the same order of magnitude as the roughness of the unstructured part of the layer.
FIG. 3: Influence of the lithography dose on the shape of the microstructure with a) SEM tilted imaging before and after lift-off; b) sloped sidewall angle and width difference (compared to the theoretical width of the used GDS file); c) examples of the disk micro-structuration at very low and very high doses, resulting in "bowl" and "pancake" shapes, respectively.
GRATING CHARACTERIZATION
A. Measurements
Different gratings have been fabricated with 0.5 µm thick Y2O3:Er3+ on 2 µm SiO2 on a Si substrate. Figure 4-a presents SEM images for 3, 4.5, 6 and 8 µm periods. We clearly see that high spatial resolution is obtained, with sub-micrometric trenches. Photoluminescence and reflectivity measurements were performed on these gratings. Figure 4-b presents the micro-reflectivity measurements performed with an infrared lamp focused over 10 µm² on the microstructures. Light was collected through a ×50 objective of numerical aperture NA = 0.42 and directed to a spectrometer with an IR InGaAs camera used as the output detector, with a 10 ms integration time. In order to resolve the angular distribution, the far-field image is captured in Fourier space by projecting the back-focal plane of the objective on the entrance slits of the spectrograph. The corrugation direction of the sample is aligned along the spectrometer slit, so that a single-shot image taken by the camera directly provides an angle-resolved reflectivity map [40]. The GMRs appear in the reflectivity as Fano resonances in the measured spectrum. The associated modeling (similar to the one discussed in Figure 1) has been added as red dashed lines on the measurements, with the adjusted period pth indicated on top of Figure 4-b. We found a very good agreement between the measured and adjusted periods, with the number of GMRs increasing from 4 for the 3 µm period up to 10 for the largest period, as expected. Micro-PL measurements were also performed on the same microstructures with a 532 nm wavelength CW laser and the same measurement setup (Figure 4-c). First of all, we observe the expected erbium transition in the polycrystalline Y2O3 matrix, with a maximum emission at 1.54 µm wavelength [41]. Secondly, we observe that the photoluminescence intensity is strongly increased at angular positions driven by the GMR dispersions.
For instance, the emission is enhanced at two angles (±10°) for the 3 µm period, which correspond to the diffracted modes observed in the reflectivity measurement (Figure 4-b).
B. Photoluminescence enhancement
In order to evaluate the luminescence enhancement quantitatively, the gratings were compared to the unstructured part of the sample. Figure 5-a presents the luminescence of the bulk layer measured with the same setup. Since the emitted signal was lower, the integration time used was 10 times longer (i.e. 100 ms). We also see a maximum of emission at 1.54 µm. However, no preferential orientation of the emission is observed, since rare earth emission is isotropic in bulk materials. This emission was compared, in counts/s, with the structured layers at 1.54 µm as a function of the emission angle (Figure 5-c) and at -10° as a function of the wavelength (Figure 5-d). A maximum luminescence enhancement of 15 is reported for the 3 µm period compared to the bulk layer at the maximum of the detected signal (i.e. 1.54 µm and -10°). This improvement results from the extraction enhancement, evaluated to be a factor of 2 (by comparing the intensity between 0° and ±10°), and the absorption enhancement, evaluated here to be around 7.5 (15 divided by the extraction factor of 2); this decomposition is written out below. Indeed, the absorption enhancement of the pump laser is a key factor for luminescence enhancement in diffraction gratings [10]. As shown in Figure 1-b, at the pump wavelength (0.532 µm) the coupled in-plane modes have an order of around 15 for the 3 µm period, while it is closer to 40 for the 8 µm period. According to diffraction grating theory, the diffraction will be more efficient for lower periods since it occurs at a lower mode order, thus giving higher absorption. Our maximal enhancement factor (i.e. ×15) could thus be further increased using lower periods. Furthermore, the luminescence enhancement is also related to the extraction factor. As indicated by the comparison between the reflection measurement and the photoluminescence (Figure 4-b and c), for the 3 µm period sample we clearly see that the crossing of both Fano resonances does not coincide with the maximum of the erbium emission at 1.54 µm. Therefore, the previous enhancement value of 15 at λ = 1.54 µm could also be further improved by carefully adjusting the Fano resonances to the maximum of the erbium emission through period optimization.
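The decomposition quoted above can be written out explicitly, with the numbers taken directly from the measurements reported here:

```python
total_gain = 15.0        # PL enhancement at 1.54 um, -10 deg, 3 um period
extraction_gain = 2.0    # emission at +/-10 deg vs. normal incidence
absorption_gain = total_gain / extraction_gain   # ~7.5x pump absorption at 532 nm
assert abs(extraction_gain * absorption_gain - total_gain) < 1e-9
```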
FIG. 2: Process flow for the grating fabrication using a) lithography of photoresist, b) Y2O3:Er3+ PLD, c) lift-off and d) annealing.
B. Dose influence
Figure 3-a presents the influence of the lithography dose before and after the lift-off processing (corresponding to the process steps in Figures 2-b and 2-c) with SEM (Scanning Electron Microscopy) images. SEM images were taken at a 45° tilt angle with a TESCAN SEM at 5 kV with a secondary electron detector. Low doses were obtained with negative photoresist and high doses with positive photoresist, using the AZ5214E reversible photoresist from the MicroChemicals company, in order to control the slope angle of the photoresist [39]. By comparing the imaging before and after lift-off, we clearly see that the PLD layer
Fig. 4. Diffraction gratings characterization for different periods with a) SEM tilted image; b) reflectivity and c) photoluminescence measurements as a function of the wavelength and the angle.
Fig. 5. Diffraction gratings with a) photoluminescence of the unstructured layers as a function of the wavelength and the angle; b) far-field photoluminescence measurement in the Fourier plane; comparison of the emission intensity for the different periods compared to the unstructured layers c) as a function of the angle and d) of the wavelength.
CONCLUSION
In summary, we have investigated Y2O3:Er3+ diffraction gratings made by PLD and lift-off processing. The processing conditions were first optimized, showing that the lithography dose is a key parameter to control the sidewall angle and to reach high-resolution patterning. Diffraction gratings were then fabricated down to a period of 3 µm. Reflectivity and photoluminescence measurements of such devices show very good agreement with modeling, with a maximum luminescence enhancement of 15 compared to the unstructured layer. This work therefore highlights PLD-based lift-off processing for fabricating photonic devices. It thus opens perspectives for the realization of new microstructures for integrated optics without etching, for future applications in free-space communications and integrated quantum optics.
Acknowledgments. The authors would like to thank the NanoLyon platform for the cleanroom facilities, Salahedine Toubi and Ylyes Betka for their M1 projects, and the ILM for funding.
Disclosures. The authors declare no conflicts of interest.
04115739
en
[ "phys", "spi" ]
2024/03/04 16:41:26
2023
https://hal.science/hal-04115739/file/79-2023-0028.pdf
Rocco Moretti, Hyeonsoo Yeo, François Richez
Rotor Loads Prediction on UH-60A Flight Test using Loose Fluid/Structure Coupling
This work exploits the high-quality UH-60A flight test campaign for comparison with concurrent simulations at the U.S. Army and ONERA. Rotor airloads and structural loads are predicted by coupling Computational Fluid Dynamics (CFD) with rotorcraft comprehensive analysis (CA). A high-speed test point is examined. Comparisons are shown for airloads, structural loads and rotor controls. Agreement with test data is improved when the flexibility of the pitch link is considered. Results show good agreement between the two partners' predictions and with test data.
INTRODUCTION
In the last decades, numerical simulation has taken its place in the study of aerodynamic loads on aircraft and helicopters. It is essential to reveal and resolve possible design deficiencies/inconsistencies before wind tunnel or flight tests are performed. In the case of helicopters, accurate rotor loads prediction is a really challenging task because of the interaction of several complex phenomena: unsteady low-speed and transonic aerodynamics of flexible blades undergoing large displacements and rotations. This work validates rotor loads prediction with existing tools at ONERA and the U.S. Army on the UH-60A flight test data (Ref. [START_REF] Bousman | UH-60A Airloads Catalog[END_REF]. A similar comparison has already been carried out on the ONERA 7A rotor (Refs. [START_REF] Ortun | Rotor Loads Prediction on the ONERA 7A Rotor Using Loose Fluid/Structure Coupling[END_REF][START_REF] Yeo | High-Fidelity Structural Loads Analysis of the ONERA 7A Rotor[END_REF]. The UH-60A rotor has already been extensively studied by the U.S. Army (Refs. [START_REF] Yeo | Rotor Structural Loads Analysis Using Coupled Computational Fluid Dynamics/Computational Structural Dynamics[END_REF][START_REF] Potsdam | Rotor Airloads Prediction Using Loose Aerodynamic/Structural Coupling[END_REF]. The state of the art in rotor loads prediction is coupling the rotorcraft comprehensive analysis (CA) with computational fluid dynamics (CFD) (Ref. 5). Indeed, this allows improving the fidelity of the aerodynamics by replacing the CA low-order aerodynamic model with the most accurate flow solver for this kind of computation. The novelty of this work stands on four points: (1) the cooperation between the U.S. Army Combat Capabilities Development Command Aviation & Missile Center (DEVCOM AvMC) and the ONERA French Aerospace Research Lab for the analysis of the UH-60A rotor under the United States/France Project Agreement on Rotary-Wing Aeromechanics and Human Factors Integration Research, (2) the success of ONERA's coupled CFD/CA methodology on the prediction of the UH-60A rotor loads, (3) a systematic comparison of two high-fidelity CFD/CA coupled analyses, and (4) a demonstration of how simple assumptions on blade modeling in the CA tool can have an important impact on rotor loads prediction.
(Presented at the Vertical Flight Society's 79th Annual Forum & Technology Display, West Palm Beach, FL, USA, May 16-18, 2023. This material is declared a work of the U.S. Government and is not subject to copyright protection in the United States. DISTRIBUTION STATEMENT A: approved for public release; distribution is unlimited.)
The test data used in the present study were obtained during the NASA/Army UH-60A Airloads Program conducted from August 1993 to February 1994 (Figure 1). This seminal flight-test program provided a detailed airloads and blade structural loads database for a full-scale rotor covering a broad flight envelope. The UH-60A main rotor is a four-bladed, articulated rotor. The blade construction uses a titanium spar with a fiberglass outer contour and two airfoils, the SC1095 and SC1094R8 (Figure 2). The present study investigates a high-speed flight condition: counter c8534, µ = 0.37, C_T/σ = 0.081, with advancing blade negative lift.
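For a sense of scale, the dimensionless test-point values can be converted back to physical quantities with the rotor data of Table 1. The sketch below assumes the common normalization C_T = T/(ρ A (ΩR)²); if a different convention were used, the thrust figure would change accordingly.

import math

# Back-of-the-envelope conversion of the c8534 test point (Table 1).
# Assumes C_T = T / (rho * A * (Omega*R)^2), a common rotor convention.
R = 8.179                              # rotor radius [m]
sigma = 0.0817                         # rotor solidity
rho = 1.0732                           # air density [kg/m^3]
Omega = 258.1 * 2.0 * math.pi / 60.0   # rotational speed [rad/s]
mu = 0.37                              # advance ratio
CT_over_sigma = 0.081                  # rotor thrust coefficient over solidity

V_tip = Omega * R                      # blade tip speed [m/s]
V = mu * V_tip                         # flight speed [m/s]
A = math.pi * R**2                     # rotor disk area [m^2]
T = CT_over_sigma * sigma * rho * A * V_tip**2   # rotor thrust [N]

print(f"tip speed: {V_tip:.1f} m/s")
print(f"airspeed : {V:.1f} m/s ({V*1.9438:.0f} kt)")
print(f"thrust   : {T/1000:.1f} kN ({T/4.448:.0f} lbf)")

With these values the tip speed is about 221 m/s, the airspeed about 82 m/s (roughly 159 kt), and the thrust about 73 kN (about 16,400 lbf), of the order of the aircraft weight.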
COMPUTATIONAL METHODOLOGIES
Rotor loads prediction is accomplished by coupling a rotor comprehensive analysis with a CFD code solving the Unsteady Reynolds-averaged Navier-Stokes (URANS) equations. The CFD airloads replace, through a delta method, the airloads of the CA. The coupling exchanges are done on a per-revolution (periodic) basis (loose coupling).
The U.S. Army used RCAS (Ref. 6) for comprehensive analysis and Helios (Ref. 7) for CFD. RCAS is a comprehensive multidisciplinary computer software system for predicting rotorcraft aerodynamics, performance, stability and control, aeroelastic stability, loads, and vibration. RCAS is capable of modeling a wide range of complex rotorcraft configurations operating in hover, forward flight, and maneuvering conditions. RCAS has been used extensively for correlation of performance and loads measurements of the UH-60A in various flight conditions. Helios is the rotary-wing product of the U.S. Army and the CREATE Air Vehicles program sponsored by the U.S. Department of Defense High Performance Computing Modernization Office. Helios uses an innovative dual-mesh paradigm that employs unstructured meshes in the near body, close to the surface, to capture the wall-bounded viscous effects, and structured Cartesian grids in the off body to resolve the wake through a combination of higher-order algorithms and adaptive mesh refinement (AMR). The unstructured solver NSU3D is used for the near-body and the structured solver SAMARC for the Cartesian off-body grid system. An overset procedure facilitates the data exchange and also enables the relative motion between the two meshes. The parallel domain connectivity solver PUNDIT automatically handles the data exchange between the two meshes.
ONERA used HOST (Ref. 8) for comprehensive analysis and elsA (Ref. 9) for CFD. HOST (Helicopter Overall Simulation Tool) is a rotorcraft comprehensive analysis developed by Airbus Helicopters. HOST modeling of blade dynamics is multibody-like. The blade is represented by a series of rigid elements connected by virtual hinges. Inertial properties (e.g., mass and moment of inertia) are assigned to each rigid element, and elastic properties (e.g., flap bending, chord bending, and torsion stiffness) are assigned to each hinge. A modal reduction approach is used to reduce the number of degrees of freedom from a large system of equations. The aerodynamics of HOST is based on a lifting-line approach built on airfoil tables combined with a wake model. In this effort, among the several wake models available, a prescribed helical wake geometry was used. The elsA CFD code, developed at ONERA, solves the unsteady Reynolds-averaged Navier-Stokes equations on both background Cartesian grids and blade curvilinear grids. An overset procedure facilitates the data exchange and also enables the relative motion between the two meshes. The near-body grids of the blades are rotated and deformed following the blade motion and trim provided, through the loose coupling, by the rotorcraft comprehensive analysis HOST. In the current work, ONERA and the U.S. Army used similar grid meshes. Helios used unstructured near-body grids, whereas elsA used structured near-body grids, see Figure 3. The four rotor blade grids in Helios have 15.4 million nodes and 36.5 million cells, whereas in elsA the cell count is 16 million. The finest off-body spacing in Helios is 5% chord with a fixed refinement region surrounding the rotor plane. The off-body grid contains 146 million unblanked grid points on 8 levels. In elsA, the finest off-body spacing is 10% chord.
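The per-revolution delta exchange described at the beginning of this section can be summarized in a few lines. The sketch below is a generic illustration of the standard loose-coupling delta method, not the actual RCAS/Helios or HOST/elsA interface; the ca and cfd objects and their methods are placeholders, and in practice the delta is an array over blade span and azimuth rather than a scalar.

# Generic sketch of loose CFD/CA coupling with the delta-airloads method.
# 'ca' and 'cfd' are placeholder objects standing in for the comprehensive
# analysis and the CFD solver; the exchange happens once per rotor revolution.

def loose_coupling(ca, cfd, n_iter=10, tol=1e-3):
    delta = 0.0                        # CFD-minus-CA airload correction
    controls = ca.trim(delta)          # initial trim with CA airloads only
    for _ in range(n_iter):
        motion = ca.blade_motion(controls)   # periodic blade motion/deformation
        f_cfd = cfd.revolution(motion)       # CFD airloads over one revolution
        f_ca = ca.airloads(controls)         # CA low-order airloads, same state
        new_delta = f_cfd - f_ca             # updated correction
        if abs(new_delta - delta) < tol:     # converged: total airloads are CFD's
            break
        delta = new_delta
        controls = ca.trim(delta)      # re-trim the CA with corrected airloads
    return controls, delta

At convergence the correction no longer changes, so the trimmed periodic solution effectively carries the CFD airloads while the CA retains the structural dynamics and the trim.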
Table 1 summarizes the main parameters characterizing the rotor and the studied flight point.
RESULTS
The results chapter is organised in two parts: the first part shows how the structural modeling of the UH-60A blade in HOST was modified in order to model the pitch link stiffness and hence become comparable to RCAS. This was achieved by modifying the inboard torsional stiffness in HOST and comparing the Campbell diagrams produced by HOST and RCAS. Once the structural models are shown to be equivalent, the second part compares the airloads, structural loads and control angles.
Campbell diagrams
Structural dynamics modeling of the UH-60A rotor blade is briefly described. First, the Campbell diagrams obtained when considering a rigid pitch link in HOST and the nominal pitch link stiffness in RCAS are compared, see Figure 5. It can be seen that there is good agreement for the lower modes, while significant differences arise in the torsion mode. The flexibility of the pitch link is then introduced in HOST by lowering the torsional stiffness of the blade at the radius corresponding to the attachment of the pitch link. This process was done iteratively, judging by the agreement between RCAS and HOST. Eventually, a torsional stiffness adjustment was obtained such that an equivalent pitch link behaviour was created in HOST, yielding very good agreement with the RCAS Campbell diagram, see Figure 6. In conclusion, this chapter has shown the relevance of properly modeling the pitch link stiffness in rotorcraft comprehensive analyses. The remainder of the results presented in this paper correspond to coupled CFD/CA analyses and include this modeling.
Airloads
In this work calculated airloads were obtained by integrating pressure over the whole chord, and not only at the locations of the pressure taps. This is expected to be acceptable given that up to 30 unsteady pressure transducers were installed at every instrumented section of the UH-60A blade (Ref. [START_REF] Biedron | An Examination of Unsteady Airloads on a UH-60A Rotor: Computation versus Measurement[END_REF]), yet any error in the blade pressures may change the integrated pitching moments significantly. Consequently, all plots of pitching moment have the mean removed. The pitching moment is marked by a pronounced negative, nose-down, pitching moment on the advancing side, resulting from the highly compressible flow on the outboard part of the blade. Agreement between both partners' calculations and test data is good, although less so for the most outboard section, at r/R = 0.965.
Trim control angles
The rotor control angles used for trimming the simulation are the collective pitch angle, the lateral cyclic pitch angle and the longitudinal cyclic pitch angle. The rotor shaft angle of attack is prescribed at the experimental value, -7.31 deg.
Structural loads
Figure 10 compares the calculated and measured oscillatory chord bending moment at r/R = 0.113, 0.30, 0.50 and 0.70. Here the differences between the two coupled analyses (elsA/HOST and Helios/RCAS) are more marked than for the flap bending moment, although both capture the experimental trends. Figure 13 compares the half peak-to-peak chord bending moment, showing for elsA/HOST a closer match to experiment. The significant underprediction of half peak-to-peak chord bending moment was previously observed by both RCAS and CAMRAD II (Ref. 4). Figure 11 compares the calculated and measured oscillatory torsion moment at r/R = 0.30 and 0.70. Modelling a rigid or flexible pitch link in HOST has a visible influence on the predicted torsion moment, with the rigid pitch link exhibiting higher-amplitude oscillations and less harmonic content.
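The metrics used in these comparisons (oscillatory loads, half peak-to-peak amplitude, per-rev harmonic magnitudes) can be computed from one revolution of a periodic load signal as sketched below; the signal here is synthetic and for illustration only.

import numpy as np

# How the structural-load metrics used in this paper are typically obtained
# from one rotor revolution of a periodic signal (synthetic data here).
n = 360                                    # one sample per degree of azimuth
psi = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
load = 100 + 80*np.sin(psi) + 30*np.sin(3*psi + 0.4)   # fake bending moment

oscillatory = load - load.mean()           # "oscillatory" = mean removed
half_p2p = 0.5*(load.max() - load.min())   # half peak-to-peak amplitude

spec = np.fft.rfft(load) / n
harm_mag = 2.0*np.abs(spec[1:6])           # 1/rev ... 5/rev magnitudes

print(f"half peak-to-peak: {half_p2p:.1f}")
for k, mag in enumerate(harm_mag, start=1):
    print(f"{k}/rev magnitude: {mag:.1f}")

The printed 1/rev and 3/rev magnitudes recover the 80 and 30 amplitudes put into the synthetic signal.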
Figure 14 compares the half peak-to-peak torsion moment and shows that the equivalent pitch link model substantially improves the correlation with the test data; both partners underestimate the torsion moment amplitude in the outboard part of the blade. Finally, Figure 15 compares the harmonic magnitude of calculated and measured structural loads. In this comparison, elsA/HOST analysis results with rigid pitch link are not included for clarity. This comparison is analysed next in the light of the following assumptions: harmonics 1 and 2 are more related to rotor trim, the lag damper and pitch-lag coupling, while harmonics higher than 2 are dominated by the blade elastic model and the coupling with the drivetrain. On the first two harmonics, the U.S. Army calculations (Helios/RCAS) feature better agreement with experiment, particularly for flap bending moment and torsion moment. The first two harmonics of chord bending moment are harder to capture; elsA/HOST shows a quadratic trend, whereas Helios/RCAS shows a more linear one. Measurements are somewhere in between. However, the elsA/HOST worsening agreement with experiment at the blade root suggests a difference in the way the lag damper is modelled between HOST and RCAS. The fourth harmonic of the chord bending moment remains elusive to both calculations, whereas the third and fifth are better captured by Helios/RCAS. Overall, compared to chord bending moment, the harmonics of flap bending moment and torsion moment exhibit better agreement between calculations and with measurement. The challenges in capturing chord bending moment were not observed in (Refs. 2, 3), where calculations were compared to the 7A rotor wind tunnel experiment. This may be related to the fact that rotor-drivetrain coupling may be smaller in the 7A rotor wind tunnel rig than in the UH-60A flight testing. It should be noted that the effects of drivetrain dynamics on the chord bending moment for the flight test, and the effects of hub impedance using a test stand NASTRAN model on the chord bending moment for the wind-tunnel test, have also been investigated for the UH-60A rotor. In general, the effects of drivetrain dynamics slightly improved both the 4/rev and 5/rev harmonic magnitude correlation, and the effects of hub impedance improved the correlation of both the 3/rev and 4/rev harmonic components. Readers who are interested in those details are referred to (Ref. 11).
CONCLUSIONS
This work is an achievement of the cooperation between the U.S. Army and ONERA under the U.S./France Project Agreement on Rotary-Wing Aeromechanics and Human Factors Integration Research, a more than 50-year-old cooperation between the United States of America and France in the field of rotorcraft. Results support previous evidence that accounting for pitch link flexibility is an enabling factor for high-fidelity predictions of rotor loads. The good agreement between the predictions of the two partners paves the way towards joint investigations of more challenging flight conditions and blade shapes, in order to ultimately improve the performance and loads prediction capability of high-performance, high-speed rotors.
Figure 1. UH-60A flight test picture.
Figure 2.
UH-60A rotor blade planforms showing locations of airfoil sections, pressure taps and strain gauges.
Figure 3. Structured overset grids.
Figure 4 shows the RCAS modeling of the rotor hub with blade hinges, pitch control, and the lag damper. The blade is composed of 13 nonlinear beam elements. Three coincident hinge elements are offset from the center of rotation. The three hinge elements allow for simultaneous flap, lag, and pitch rotations of the blade. Rigid bars and spring elements are used to represent the pitch control linkage. HOST assumes, by default, the pitch link to be rigid (i.e., infinite stiffness). An equivalent pitch link model was developed in this work by using a torsionally soft beam element. More recently, a pitch control elasticity model has been introduced in HOST by its main developers at Airbus Helicopters, but the authors did not have the time to use it for this paper. Detailed blade frequency comparisons are shown below. Campbell diagrams are calculated in vacuum conditions.
Figure 4. RCAS modeling of the UH-60A rotor.
Figure 5. Fan plot; rigid pitch link in HOST.
Figure 6.
Figures 7 and 8 compare the normal force and pitching moment, respectively, at three span-stations. The black line corresponds to measurement; blue, to Helios/RCAS calculations.
Figure 9 compares the calculated and measured oscillatory flap bending moment at r/R = 0.113, 0.30, 0.50 and 0.70. The two coupled analyses (elsA/HOST and Helios/RCAS) agree well with each other and with experiment. The equivalent pitch link model reduces the amplitude at r/R = 0.7 compared to the rigid pitch link model, and thus improves the correlation with test data. Figure 12 compares the half peak-to-peak flap bending moment. The rigid pitch link model significantly overpredicts the half peak-to-peak amplitude at the outboard locations. The equivalent pitch link model improves the correlation by decreasing the half peak-to-peak amplitude. Due to strong couplings among the high-frequency modes shown in Figure 6, pitch link stiffness has an important influence on flap bending moment. Overall, elsA/HOST tends to overpredict the amplitudes between r/R = 0.40 and r/R = 0.70, while the opposite is true for Helios/RCAS.
Figure 15. c8534. Comparison of harmonic magnitude of calculated and measured structural loads.
Table 1. Summary of UH-60A rotor flight test
Characteristic               Symbol    Value
Rotor geometry
  Rotor radius               R         26.833 ft (8.179 m)
  Mean blade chord           c         1.722 ft (0.525 m)
  Number of blades           N_b       4
  Rotor solidity             σ         0.0817
  Airfoils                             SC1095, SC1094R8
UH-60A test point C8534: high speed
  Air density                ρ_inf     0.002082 slug/ft³ (1.0732 kg/m³)
  Air temperature            T_inf     71.8 F (295.3 K)
  Rotational speed           Ω         258.1 rev/min
  Advance ratio              µ         0.37
  Rotor thrust coefficient   C_T/σ     0.081
  Sideslip angle             β         1.27 deg
Table 2 compares the calculated and measured control angles. The U.S. Army predicts an angle nearly 2 degrees smaller (in absolute value) than measurement, while the ONERA underestimation is close to 3 degrees. Previously published work by the U.S. Army (Ref.
[START_REF] Yeo | Rotor Structural Loads Analysis Using Coupled Computational Fluid Dynamics/Computational Structural Dynamics[END_REF] showed Helios/CAMRAD II to calculate very similar pitch control angles compared to Helios/RCAS for point c8534.
Table 2. UH-60A c8534 control angles
              Collective θ_0   Lateral cyclic θ_1c   Long. cyclic θ_1s
c8534         13.73            2.84                  -11.05
Helios/RCAS   13.61            1.81                  -9.35
elsA/HOST     13.87            3.38                  -8.23
Figure 7. Sectional normal force coefficient C_n M² versus azimuth at r/R = 0.675, 0.865 and 0.965 (elsA sections: r/R = 0.652, 0.860 and 0.964), comparing c8534 measurements, elsA/HOST, elsA/HOST with rigid pitch link, and Helios/RCAS.
Figure 8. Sectional pitching moment coefficient C_m M² (mean removed) versus azimuth at the same sections.
ACKNOWLEDGMENTS
The authors would like to acknowledge the U.S./France Project Agreement on Rotary-Wing Aeromechanics and Human Factors Integration Research, a longstanding cooperation between the U.S. Army, ONERA and DGA.
04115831
en
[ "phys.meca.mefl" ]
2024/03/04 16:41:26
2022
https://theses.hal.science/tel-04115831/file/2022theseStruyvenF.pdf
Hydrogen is an essential energy carrier for a successful ecological and energy transition. However, most hydrogen is produced by cracking hydrocarbons of fossil origin. Only 1% of the hydrogen currently produced comes from the electrolysis of water. Hydrogen from electrolysis is too expensive to produce. The objective of this thesis is to study electrogenerated bubbles in order to identify microfluidic aspects that could contribute to the improvement of water electrolysis. Bubbles act as an electrical insulator. By covering the electrode, they reduce the efficiency of electrolysis. Therefore, once they nucleate, they must quickly detach from the electrode. There is currently no consensus on the phenomena influencing the growth and detachment of bubbles at the microfluidic scale. Among others, there are still many uncertainties on how the wettability of the electrodes and the Marangoni effect of thermal or solutal origin influence the nucleation, growth and detachment of bubbles. To this end, the mathematical and numerical bases for the simulation of a two-phase fluid were reviewed. In order to study such a phenomenon numerically, it is necessary to be able to simulate the surface tension variations along a liquid-gas interface, to integrate the mass transfer across the interface from the dissolved species present in the electrolyte to the gas phase, and to take into account the moving contact line. The use of the continuum surface force (CSF) model in the volume-of-fluid (VOF) framework is known to introduce non-physical velocities, called parasitic currents. An alternative model based on the height function (HF) approach has been developed and tested. Its use limits the spurious currents and makes the VOF methodology suitable for the study of Marangoni currents at the interface of an electrogenerated bubble. A correlation for determining the growth rate of a bubble by integrating the Marangoni currents, based on the penetration theory, was developed and compared to the Epstein-Plesset relation. A dimensionless study was conducted to relate the Sherwood number, representing interfacial mass transfer, to the Marangoni number. There are too many unknowns to draw definitive conclusions. However, the implementation of a contact line model in the numerical model could remove many uncertainties. The work done in this thesis to develop a holistic model is a first step towards using an inverse problem to determine the unknowns that need to be removed to optimise bubble detachment.
Résumé
Hydrogen is an essential energy carrier for achieving the ecological and energy transition. However, most hydrogen is produced by cracking fossil hydrocarbons; only 1% of the hydrogen currently produced comes from water electrolysis, and electrolytic hydrogen remains too expensive to produce. The objective of this thesis is to study electrogenerated bubbles in order to identify microfluidic aspects that could contribute to improving water electrolysis. Bubbles act as an electrical insulator: by covering the electrode, they reduce the efficiency of electrolysis.
Therefore, after nucleation, they must detach quickly from the electrode. There is currently no consensus on the phenomena influencing bubble growth and detachment at the microfluidic scale. Among others, many uncertainties remain on how the wettability of the electrodes and the Marangoni effect of thermal or solutal origin influence the nucleation, growth and detachment of bubbles. To this end, the mathematical bases required for the simulation of a two-phase fluid were reviewed. In order to study such a phenomenon numerically, it is necessary to be able to simulate surface tension variations along a liquid-gas interface, to integrate the mass transfer across the interface from the dissolved species present in the electrolyte to the gas phase, and to take the moving contact line into account. The use of a continuum surface force (CSF) model within the volume-of-fluid (VOF) framework is known to introduce non-physical velocities, called spurious currents. An alternative model based on the height-function approach was developed and tested. Its use limits the spurious currents and makes the VOF methodology suitable for the study of Marangoni currents at the interface of an electrogenerated bubble. A correlation for determining the growth rate of a bubble, integrating the Marangoni currents and based on the penetration theory, was developed and compared to the Epstein-Plesset relation. A dimensionless study was conducted to relate the Sherwood number, representing interfacial mass transfer, to the Marangoni number. There are too many unknowns to draw definitive conclusions. However, the implementation of a contact-line model in the numerical model could remove many uncertainties. The work carried out in this thesis to develop a holistic model is a first step towards using an inverse problem to determine the unknowns to be resolved in order to optimise bubble detachment.
Nomenclature
Greek symbols
λ_elec        electrical conductivity [S·m⁻¹]
α             volume fraction
δ_1           diffusion layer thickness [m]
ϵ             void ratio
η_anode       anode overpotential [V]
η_cathode     cathode overpotential [V]
γ             surface tension [N·m⁻¹]
κ             curvature [m⁻¹]
λ             thermal conductivity [W·m⁻¹·K⁻¹]
ν             kinematic viscosity [m²·s⁻¹]
Φ             electric potential [V]
Φ_B           current efficiency
ρ             density [kg·m⁻³]
Latin symbols
c_p           heat capacity [J·kg⁻¹·K⁻¹]
c_bulk        bulk concentration [mol·m⁻³]
c_sat         saturation concentration [mol·m⁻³]
D             diffusion coefficient [m²·s⁻¹]
D_conv        diffusion coefficient, chaotic motion [m²·s⁻¹]
D_H2          diffusion coefficient of H2 in the electrolyte [m²·s⁻¹]
D_th          thermal diffusivity [m²·s⁻¹]
F             Faraday constant [C·mol⁻¹]
f_G           gas evolution efficiency
H             Henry's constant [N·m·mol⁻¹]
h_electrolyte thickness of the electrolyte [m]
J             material flow density [mol·m⁻²·s⁻¹]
j             current density [A·m⁻²]
j_avg         average current density [A·m⁻²]
j_lim         limiting current density [A·m⁻²]
k             mass transfer coefficient [m·s⁻¹]
k_B           Boltzmann constant [J·K⁻¹]
k_b           bubble-induced mass transfer coefficient [m·s⁻¹]
k_diff        diffusion-induced mass transfer coefficient [m·s⁻¹]
M_g           molar mass of gas [kg·mol⁻¹]
M_H2          molar mass of dihydrogen [kg·mol⁻¹]
p_b           pressure inside the bubble [N·m⁻²]
p_e           pressure in the electrolyte [N·m⁻²]
R             bubble radius [m]
R_c           critical radius [m]
R_g           universal gas constant [J·K⁻¹·mol⁻¹]
V             bubble volume [m³]
W             work [N·m]
w_elec        energy required for the gas to form [J]
Y             mass fraction
1 Introduction
This thesis is the subject of a joint agreement between the University of Canterbury and the University of South Brittany. The subject is "Study of electrogenerated two-phase and microfluidic flows". It follows Damien Le Bideau's ADEME (Environment and Energy Management Agency) thesis, which concerned the study of two-phase electrolysis for the improvement of hydrogen production. While Damien Le Bideau's thesis focused on macroscopic parameters at the scale of an electrolyzer, the aim of this thesis is to investigate the phenomena involved at the microfluidic scale in the production of electrogenerated bubbles. The challenge of the 21st century is to achieve an ecological and energy transition from a world view in which energy resources are abundant and growth is unlimited to a world where international tensions over the remaining resources are increasingly felt and where ecological constraints such as global warming jeopardize human development. There are at least three main areas of research that can influence the success of the energy transition: the reduction of energy consumption in buildings and industry, the improvement of renewable energy resources, and the improvement of energy storage. One of the main obstacles to the use of renewable energies is the set of spatial and temporal constraints related to their production. For example, day-night cycles restrict photovoltaic fields to daytime production, peaking at noon. This production is not in phase with consumption. Thus, the challenge here is to succeed in storing this energy during production peaks and restoring it during consumption peaks. As energy production is intermittent, its integration into the electricity grid cannot be done effectively. National electricity grids are not designed to receive large-scale intermittent electricity production. Production must be in line with consumption in order for the network to remain stable.
It is for this reason that storage of the electricity produced is necessary, in order to be able to deliver energy according to demand. Some of the main energy storage technologies are as follows: supercapacitors (storage time around 1 second), batteries (storage time around a week), pumped-storage power plants (unlimited storage time, but limited storage capacity, depending on the water reservoirs available in the country), and "Power to Gas" (P2G). Regarding inter-seasonal cycles, consumption decreases in summer while photovoltaic production increases, and vice versa in winter. Batteries are not suited to storing energy over such long periods. P2G usually refers to the combined process of producing dihydrogen by the electrolysis of water using electricity from renewable energies, and then methane from this dihydrogen using the Sabatier reaction [START_REF] Gruber | Integrated high-temperature electrolysis and methanation for effective power to gas conversion[END_REF]. The storage capacity is potentially unlimited, and the storage time is more than one month. The problem of self-discharge observed in batteries does not arise in the case of hydrogen. The properties of hydrogen are both an advantage and a disadvantage. Indeed, the amount of energy produced by the combustion of one kilogram of hydrogen is equivalent to that of about 3 kilograms of diesel, but the density of hydrogen at atmospheric pressure is extremely low compared to other fuels. It must therefore be compressed in order to store energy in an acceptable volume; a pressure of about 700 bar is typically used for compressed storage. Alternatively, hydrogen can be transformed into another, more easily storable gas such as methane or ammonia, but this results in a loss of overall energy efficiency due to the additional transformation. The efficiency of electricity production with hydrogen is 35%, and about 25% for methane transformed from hydrogen. As far as the alkaline electrolysis process is concerned, no greenhouse gases are present in the reaction products, which makes it a clean process (and its use only produces water, which does not directly contribute to global warming). Hydrogen can bring flexibility to energy networks powered by intermittent renewable energies via energy storage, promote self-consumption of renewable energies by providing a local storage solution, and can also be considered for use as a fuel to decarbonise the transport sector. However, the energy required, as well as the cost of a production unit, currently makes hydrogen produced by water electrolysis too expensive to be used as a large-scale storage solution. To sum up, the energy efficiency is too low for it to be an economically competitive solution. In 2018, the production of hydrogen amounted to 115 million tonnes worldwide, of which 70 million tonnes in pure form were used for petroleum refining and the production of ammonia for fertilisers and explosives, and 45 million tonnes were used in industry without prior separation from other gases [START_REF] Iea | The future of hydrogen[END_REF]. Most of it is obtained via the cracking of hydrocarbons of fossil origin. The products of these reactions are therefore hydrogen and CO2, so the production of this hydrogen contributes to greenhouse gas emissions. The electrolysis of water makes it possible to obtain hydrogen and oxygen of high purity.
However, its cost, which is about twice as high as that of natural gas reforming, limits its field of application. Currently only 1% of hydrogen production is supplied by water electrolysis. The production of hydrogen generates 830 million tonnes of CO2 per year, or 2% of total world emissions. An example of the application of electrolysis is the Swedish "Hybrit" steel production project. By using "green hydrogen" as an ingredient and for heat production, the Hybrit project emits only 25 kg of CO2 per tonne of steel instead of 1850 kg. However, producing the equivalent of current world hydrogen production through electrolysis would require an additional 3600 TWh of electricity, which is about the European Union's electricity production, or 13.5% of the world's electricity. Thus, there are many issues at stake. The global project in which this thesis is integrated will support the design of new-generation electrolyzers and fuel cells. The aim is to allow a greater production capacity of dihydrogen (average current density of 1 A·cm⁻² instead of 0.5 A·cm⁻² currently) and a cell voltage of 2 V, i.e. an energy cost of the hydrogen produced of around 52 kWh·kg⁻¹ of H2 instead of 55 to 60 kWh·kg⁻¹ currently. The figures quoted here are the European PAA H2020 FCH2 figures for 2017 (Game Changer Water Electrolyser) and 2018 (large production water electrolyser). These European PAAs serve as references for the ambitions of the Brittany region. The aim would be to give the Brittany region an industrial and technological development path combining the arrival of new mechanical (3D printers) and electrical (pulsed processes) technologies in the field of sustainable energy based on hydrogen. In addition to the ethical aspects linked to energy sobriety, and therefore to innovative technological development, the economy and employment, this path of progress is particularly sensitive for the region, which must improve the robustness of its electrical distribution. The IRDL (Institut de Recherche Dupuy de Lôme) is in contact with companies in offshore wind (Saint-Brieuc, Ailes Marines; Groix-Belle Ile and EOLFI), tidal turbines (Naval Group, Naval Energie, SABELLA), river turbines (Guinard Energie) and photovoltaics (roof of the K2 submarine base in Lorient with EOLFI). All these Brittany companies need storage or conversion solutions. Intermittent energies must be stored to ease the concerns of EDF (Électricité de France S.A., a French multinational electric utility), the French electricity supplier, which claims to be "at the end of the network". At the same time, Morbihan Energie acquired in 2017 a hydrogen station to power HYUNDAI cars. The concern is the price of these acquisitions: 250 k€ for 2 kg/day of "green H2" produced.
2 Electrogenerated bubbles and their impact on the electrochemical process
The objective of this first part is to gather knowledge on how the bubbles that are produced during the electrolytic process can affect its efficiency. This thesis focuses on the microfluidic aspect and therefore only briefly addresses the influence of macroscopic aspects. The reader interested in research on larger-scale aspects to improve water electrolysis can refer to the review by [START_REF] Wang | The intensification technologies to water electrolysis for hydrogen production -A review[END_REF]. There is of course a link between what happens at the bubble scale and the electrode scale.
First, it is necessary to understand the basic functioning of electrolysis and then the factors that influence it, by recalling the work that has been done on electrolysis over the past years. This first step leads to a detailed study of the mechanisms involved in the nucleation, growth and detachment of electrogenerated bubbles. This second step introduces the need for a better understanding of the transport of species in the vicinity of the bubble and of the interfacial mass transfer that drives the growth of the bubbles. The last section takes into account all of the microfluidic phenomena suspected of influencing the development of bubbles. This first part allows us to identify the problematic of the study, and is a prerequisite to identify the needs that will allow us to carry out a numerical study of an electrogenerated bubble.
1 The electrolysis process
The history of the electrolysis process begins in 1785, when Martinus van Marum's static electricity generator was used to reduce tin and zinc from their salts. The history of water electrolysis began in 1800, when William Nicholson and Anthony Carlisle succeeded in breaking down water into hydrogen and oxygen. It was not until 1836 that Michael Faraday published his two laws of electrolysis and established the terminology associated with electrolysis (anode, cathode, electrolyte, etc.). In order to improve the efficiency of the electrolysis process, it is necessary to recall its basic principles and to know the factors that hinder its performance.
Influence of voltage and current density on process efficiency
The products of the electrolysis reaction of water are in gaseous form, which makes this process a two-phase system. The advantage of two-phase systems is that they promote mass transfer and heat transfer. Whatever the type of electrolysis studied (alkaline electrolysis of water, acid electrolysis, high-temperature electrolysis), they all work on the same principle. A voltage is imposed between two electrodes located in an electrolyte (liquid or solid). The energy supplied via electricity causes an oxidation at the anode and a reduction at the cathode. What differs between these technologies is the half-reactions and the charge carrier. In order to understand how the efficiency of the electrolysis process is reduced by the very bubbles whose production is its purpose, it is advisable to introduce here the notions that describe its functioning, taking water electrolysis as an example.
Energy required
By reducing the energy required to produce hydrogen, the efficiency of the process is improved. The energy required for the gas to form is governed by the following equation:
$$w_{elec} = \int_0^t U_{cell}\, I\, dt = \int_0^t U_{cell}\, j\, S_{elec}\, dt \quad (2.1)$$
where $w_{elec}$ is the energy required for the gas to form in [J], $U_{cell}$ is the voltage of the electrolysis cell in [V], $j$ the average current density in [A·m⁻²], $S_{elec}$ is the surface area of the electrode in [m²], and $t$ is the duration of the electrolysis in [s]. Taking into account the previous equation, this amount of energy can be reduced by decreasing the voltage across the cell or the current density.
The current density
The current density is an extensive quantity (it is a flow of charges) which can be defined at any point of the electrochemical system, i.e. at the surface of the electrodes, but also in the electrolytic medium. The current density is directly related to the amount of hydrogen produced via Faraday's law.
The rate of dihydrogen production $\dot{m}_{H_2}$ in [kg·s⁻¹] at the electrode can be expressed by the following equation:
$$\dot{m}_{H_2} = \frac{j\, S_{elec}}{2F}\, M_{H_2} \quad (2.2)$$
where $F$ is the Faraday constant in [C·mol⁻¹] and $M_{H_2}$ is the molar mass of dihydrogen in [kg·mol⁻¹]. Thus, it is not advisable to decrease the current density, otherwise the hydrogen production will be reduced. With the current density out of the way, it remains to study the influence of the voltage.
Difference between theoretical and real cell voltage
The electrolysis of water entails a hydrogen gas release reaction at the cathode and an oxygen gas release reaction at the anode. At a temperature of 25°C and a pressure of 1 atmosphere, the reactions and standard equilibrium electrode potentials E in an alkaline electrolyte are:
• Cathode: 2H2O + 2e⁻ = H2 + 2OH⁻, E_cathode = -0.83 V
• Anode: 2OH⁻ = H2O + ½O2 + 2e⁻, E_anode = 0.4 V
By combining the two half-reactions, the total reaction is obtained: H2O = H2 + ½O2. To obtain both gaseous emissions, a theoretical electrical voltage of $U_{theory} = E_{anode} - E_{cathode} = 1.23\,V$ must be applied between the two electrodes. However, in practice no reaction occurs when the voltage is less than 1.6-1.7 V. The practical voltage between the electrodes is obtained from the following relationship:
$$U_{cell} = U_{theory} + |\eta_{anode}| + |\eta_{cathode}| + j \times R_{ohm} \quad (2.3)$$
where $R_{ohm}$ is the total ohmic resistance, and $\eta_{anode}$ and $\eta_{cathode}$ are the reaction overpotentials at the anode and cathode, respectively. In the absence of bubbles, the ohmic resistance of the electrolyte can be written as:
$$R_{electrolyte} = \frac{h_{electrolyte}}{\sigma} \quad (2.4)$$
where $h_{electrolyte}$ in [m] is the thickness of the electrolyte and $\sigma$ is the electrical conductivity in [S·m⁻¹]. As bubbles appear, they act as an electrical insulator and this resistance increases. The overpotential can be defined as the difference between the theoretical thermodynamic voltage required for the half-reaction to take place and the voltage observed experimentally. It increases with growing current density, as described by Tafel's empirical equation [START_REF] Wang | The intensification technologies to water electrolysis for hydrogen production -A review[END_REF]:
$$\eta = a + b \log(j) \quad (2.5)$$
where a and b are the Tafel constants; a depends on the properties and surface structure of the electrode material. There are two types of overpotential: the activation overpotential and the concentration overpotential.
Activation overpotential
The activation overpotential $\eta_{act}$ represents the activation energy of the electrochemical reaction taking place at the cathode and the anode. It is caused by the resistance against the reaction at the electrolyte-electrode interface. This overpotential increases logarithmically with the current density, as shown in equation (2.5), and depends on the electrode material used. To reduce this overpotential, it is possible to use a suitable electrocatalyst. An electrocatalyst is a material that offers a low-activation path for a given electrochemical reaction. The catalytic activity depends on the electronic configuration of the catalyst and the surface structure (nanometric, micrometric structure).
Concentration overpotential
The concentration overpotential can be caused either by a lack of reagent or by a too high concentration of products.
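As a quick numerical check of equations (2.1) and (2.2) before detailing these overpotentials: at a cell voltage of 2 V, the specific energy U_cell · 2F/M_H2 comes out near 53 kWh per kg of H2, consistent with the ~52 kWh·kg⁻¹ target quoted in the introduction. A minimal sketch:

# Numerical check of Faraday's law (2.2) and the specific energy implied
# by equation (2.1) at constant cell voltage and current density.
F = 96485.0        # Faraday constant [C/mol]
M_H2 = 2.016e-3    # molar mass of H2 [kg/mol]
U_cell = 2.0       # cell voltage [V]
j = 1.0e4          # current density [A/m^2] (i.e. 1 A/cm^2)
S_elec = 1.0       # electrode area [m^2]

m_dot = j * S_elec / (2.0 * F) * M_H2      # H2 production rate [kg/s], eq. (2.2)
w_spec = U_cell * 2.0 * F / M_H2           # energy per kg of H2 [J/kg]

print(f"H2 production  : {m_dot*3.6e6:.0f} g/h")
print(f"specific energy: {w_spec/3.6e6:.1f} kWh/kg")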
In the case of a concentration overpotential due to a high concentration of products, the electrolyte close to the electrode quickly reaches the saturation of the dissolved gas and largely exceeds it, which makes the electrolyte supersaturated. This type of overpotential is easily measurable and has been studied in various publications by Vogt et al., Dukovic et al., as well as Gabrielli et al. [Vogt, 1990a; [START_REF] Dukovic | The Influence of Attached Bubbles on Potential Drop and Current Distribution at Gas-Evolving Electrodes[END_REF][START_REF] Gabrielli | Potential drops due to an attached bubble on a gas-evolving electrode[END_REF]]. The concentration overpotential is usually calculated by the following equation:
$$\eta_{conc} = \frac{R_g T}{n_e F} \ln\frac{c_e}{c_{sat}} \quad (2.6)$$
where $c_e$ is the dissolved gas concentration closest to the electrode in [mol·m⁻³], $c_{sat}$ is the saturation concentration of the dissolved gas at a pressure of 1 atm in [mol·m⁻³], $R_g$ is the universal gas constant in [J·mol⁻¹·K⁻¹], $T$ is the temperature, and $n_e$ is the number of electrons transferred to form one gas molecule ($n_e = 2$ for dihydrogen and $n_e = 4$ for dioxygen). Since $c_e$ varies with the distance from the adhering bubble, $c_e$ can be assimilated to an average concentration of dissolved gas close to the parts of the electrode not covered by the bubbles. Assuming that the number of moles of gas produced at the electrode by the reaction per unit time and area is equal to that transported from the electrode to the bulk, the current density can be expressed as a function of the concentration at the electrode:
$$\frac{j}{n_e F} = k\,(c_e - c_{bulk}) \quad (2.7)$$
where $c_{bulk}$ is the concentration in the bulk in [mol·m⁻³] and $k$ is the mass transfer coefficient in [m·s⁻¹] describing the mass transfer of species near the electrode. The concentration $c_e$ increases until it reaches a maximum at bubble nucleation. When the current density becomes higher, the bubbles start to participate in the mass transfer. Knowing $k$, $j$ and $c_{bulk}$, $c_e$ can be estimated, which allows $\eta_{conc}$ to be calculated using equation (2.6). By combining the two previous equations, Vogt et al. obtain [Vogt, 1990a]:
$$\eta_{conc} = \frac{R_g T}{n_e F} \ln\left(\frac{j}{n_e F k c_{sat}} + \frac{c_{bulk}}{c_{sat}}\right) \quad (2.8)$$
Assuming a constant temperature, equation (2.8) shows that the value of this overpotential is a function not only of the current density, but also of the bulk concentration and the flow of the electrolyte. Vogt et al. studied the overpotential for current densities from 1 to 10⁴ A·m⁻² [Vogt, 1990a]. The overpotential increases until the current density reaches a value around 1000 A·m⁻², where the mass transfer is so enhanced by the agitation of the bubbles that the concentration of dissolved gas can no longer increase, and the overpotential thus reaches a plateau. Vogt et al. define two regimes, one controlled by macroconvection (convection induced by the electrolyte flow) and the other controlled by microconvection (convection induced by the appearance of bubbles at the electrode). For current densities below 10 A·m⁻², macroconvection determines the value of the concentration overpotential because convection due to bubbles is minimal. Above 1000 A·m⁻², microconvection determines the value of the concentration overpotential. In the case of a concentration overpotential due to a lack of reactants, the transfer of species is limited at the electrode and the current density can no longer increase, reaching a limit value $j_{lim}$.
In this case, the concentration overpotential can be expressed with the following equation:
$$\eta_{conc} = \frac{R_g T}{n_e F} \ln\left(1 - \frac{j}{j_{lim}}\right) \quad (2.9)$$
When the transfer of species becomes insufficient, the current density reaches a limit value: the flux of consumption of species at the electrode becomes equal to the flux of their transport, so the current density can no longer increase. This limiting current density $j_{lim}$ can be expressed by the following equation:
$$j_{lim} = n_e F k c_{bulk} \quad (2.10)$$
1.2 Bubble effect: effect of the bubbles on current density and overpotential
The energy required for the gas to form is proportional to the voltage, as shown in equation (2.1). The higher the voltage to be applied, the greater the energy to be supplied to produce the same quantity of gas. For this reason, water electrolysis improvement technologies focus on reducing the overpotentials η and the ohmic voltage drop (j × R_ohm). The bubbles produced during the electrolysis process have a direct impact on η and (j × R_ohm). Understanding their influence on these quantities helps to find ways to improve the electrolysis process.
Bubbles act as an electrical insulator
Attached bubbles are only produced at active sites on the electrode surface where the gas molecules are supersaturated. Generally speaking, bubbles act as an electrical insulator. Bubbles attached to the surface of the electrodes disturb the current distribution and isolate active sites from the reacting ions during nucleation and growth, preventing other bubbles from being produced. For an effective electrolysis process, the gas released must be promptly removed from the active sites in order to provide more space for the gas release reaction. Consequently, the fast elimination of these bubbles from the electrode is crucial to increase process efficiency and to allow the process to operate at higher current densities, which results in higher production rates [START_REF] Wang | The intensification technologies to water electrolysis for hydrogen production -A review[END_REF][START_REF] Zeng | Recent progress in alkaline water electrolysis for hydrogen production and applications[END_REF]. In our study we focus only on the bubbles attached to the surface, so we only briefly describe the impact of bubbles dispersed in the bulk electrolyte on the overall efficiency of the process. When water electrolysis occurs, the bubbles are not quickly released from the electrolytic system and coat the electrode area. This phenomenon is reported to lead to a high reaction overpotential and a large ohmic voltage drop, and is commonly referred to as the bubble effect [START_REF] Vogt | The bubble coverage of gas-evolving electrodes in stagnant electrolytes[END_REF][START_REF] Vogt | On the supersaturation of gas in the concentration boundary layer of gas evolving electrodes[END_REF][START_REF] Sides | Phenomena and effects of electrolytic gas evolution[END_REF].
Influence of the bubbles on the void ratio
A first study by Sides and Tobias experimentally evaluated the effect of attached bubbles on the conductivity of the electrolyte using insulating particles in contact with the electrode [START_REF] Sides | Resistance of a planar array of spheres: Gas bubbles on an electrode[END_REF].
The experimental results, obtained by using spherical insulating particles simulating the shape of the bubbles, showed the relationship between the electrical conductivity and the void ratio ϵ, which represents the gas volume fraction of the electrolyte mixture:
$$\frac{\sigma(\epsilon)}{\sigma(\epsilon = 0)} = \frac{1 - \epsilon}{1 + 0.5\,\epsilon} \quad (2.11)$$
This relationship is valid for ϵ < 0.5, and was later taken up by Vogt and by Sides et al. [Vogt, 1983a; [START_REF] Sides | Phenomena and effects of electrolytic gas evolution[END_REF]]. As the proportion of gas increases, the electrical conductivity decreases. When the void ratio is higher than 0.5, this model loses its validity and another model, called the constriction model, must be used [START_REF] Sides | Resistance of a planar array of spheres: Gas bubbles on an electrode[END_REF]. As shown by equation (2.4), a decrease in electrical conductivity increases the resistance of the electrolyte closest to the electrode, decreasing the efficiency of the process.
Main factors
A rigorous theoretical description of the electrical effects of attached bubbles, taking into account the nonuniform distribution of current density and gas supersaturation, was later carried out by [START_REF] Dukovic | The Influence of Attached Bubbles on Potential Drop and Current Distribution at Gas-Evolving Electrodes[END_REF]. Their results come from the numerical calculation of current density distributions around truncated spherical bubbles attached to an electrode. These results show that the effect of attached bubbles on cell voltage depends on:
• the number of bubbles per unit area of the electrode;
• the contact angles of the attached bubbles;
• the rate of the electrochemical reaction;
• the conductivity of the electrolyte, as shown by a previous study [START_REF] Sides | Resistance of a planar array of spheres: Gas bubbles on an electrode[END_REF].
Links between overpotential, current density and bubble coverage
The overpotential is a function of the current density, as shown by equation (2.5). Understanding how the current density is distributed around the bubble helps to understand how the overpotential is impacted. From their simulation, Dukovic et al. find that the bubbles attached to the electrodes capture the dissolved gas produced by the electrochemical reaction, thereby decreasing the concentration overpotential [START_REF] Dukovic | The Influence of Attached Bubbles on Potential Drop and Current Distribution at Gas-Evolving Electrodes[END_REF]. This decrease in concentration overpotential predicted by calculation was later observed experimentally by means of advanced electrochemical techniques quantifying the effects of an isolated bubble generated on a defect positioned at the edge of an electrode [START_REF] Gabrielli | Potential drops due to an attached bubble on a gas-evolving electrode[END_REF]. Specifically, Dukovic and Tobias find that when the concentration overpotential is low and the rate of the electrochemical reaction is fast, the calculation indicates that the current density is relatively lower at the anchor line of the bubble at the electrode [START_REF] Dukovic | The Influence of Attached Bubbles on Potential Drop and Current Distribution at Gas-Evolving Electrodes[END_REF]. Conversely, at the contact line between the bubble and the electrode, the capture of dissolved gas by the bubble is greater, and so is the current density.
When the electrochemical reaction rate is slow, the distribution of current lines is smoothed over the electrode and the current fluctuations at the foot of the bubble are small. According to their analysis, in order to take into account the effect of attached bubbles on the activation overpotential and the concentration overpotential, the relationship between current density and overpotential must be calculated using a real average current density $j_{avg}$. They conclude that, considering only the attached bubbles, the impact of the ohmic effect is small compared to the additional overpotential required for charge transfer with part of the electrode surface masked by the bubbles, and that a good approximation of the voltage addition due to attached bubbles can be calculated by considering only the surface coverage Θ they create. However, it is important to remember that these conclusions were reached under the assumption that the dissolved gas obeys a steady state of diffusion in the concentration boundary layer.
The bubble coverage Θ
As pointed out in the previous paragraphs, the covering of the electrode by the bubbles is one of the main parameters for studying the influence of the overpotential on the overall efficiency. Vogt studied this bubble coverage in more detail [START_REF] Eigeldinger | The bubble coverage of gas-evolving electrodes in a flowing electrolyte[END_REF][START_REF] Vogt | The bubble coverage of gas-evolving electrodes in stagnant electrolytes[END_REF].
Definition
The bubble coverage of an electrode is the area of the electrode masked by all attached bubbles:
$$\Theta = \frac{S_{bubble}}{S_{elec}} \quad (2.12)$$
where $S_{bubble}$ is the area of the electrode masked by the attached bubbles. A commonly used definition for the fractional bubble coverage Θ is the fraction of the electrode area shaded by the normal projection of the bubbles on the electrode surface (Figure 1). This is a reasonable approximation of the surface actually in contact with the bubbles, because the region under an adherent bubble does not participate in the electrode reaction [START_REF] Eigeldinger | The bubble coverage of gas-evolving electrodes in a flowing electrolyte[END_REF].
Theoretical relationship between bubble coverage and overpotential
The average current density $j_{avg}$ can be estimated as the ratio of the intensity of the current I applied to the electrodes to their surface exposed to the electrolyte S: $j_{avg} = I/S_{elec}$. By using Tafel's equation we obtain [START_REF] Wang | The intensification technologies to water electrolysis for hydrogen production -A review[END_REF]:
$$\eta = a + b \log\frac{I}{S} \quad (2.13)$$
Since the covered fraction of the electrode surface is electrochemically inactive, the actual current density $j_{cov}$ at the electrode is higher than the nominal current density $j_{avg}$. It can be estimated as a function of the bubble coverage:
$$j_{cov} = \frac{I}{S(1 - \Theta)} \quad (2.14)$$
A large bubble coverage reduces the effective active area of the electrode. As a result, $j_{cov}$ increases and η is also higher, according to equation (2.13):
$$\eta' = a + b \log\frac{I}{S(1 - \Theta)} = a + b \log\frac{j_{avg}}{1 - \Theta} = \eta + b \log\frac{1}{1 - \Theta} \quad (2.15)$$
As an example, Wang reports that in a typical device the bubble coverage generates an overpotential of 0.4 V (at a current density of 300 mA·cm⁻²), and inexorably raises the energy requirements for water electrolysis [START_REF] Wang | The intensification technologies to water electrolysis for hydrogen production -A review[END_REF]. A small numerical illustration of this coverage penalty is sketched below. One of the usual ways to reduce bubble coverage is to impose a forced flow of electrolyte.
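Returning to equation (2.15): with an assumed Tafel slope b, the extra overpotential caused by coverage is b·log10(1/(1−Θ)), independent of the other Tafel parameters. The minimal sketch below uses b = 120 mV/decade as an illustrative assumption, not a value from the cited works.

import math

# Extra overpotential due to bubble coverage, from eq. (2.15):
# eta' - eta = b * log10(1/(1 - Theta)).  b is an ASSUMED Tafel slope.
b = 0.120          # Tafel slope [V/decade], illustrative value only

for theta in (0.1, 0.3, 0.5, 0.7, 0.9):
    d_eta = b * math.log10(1.0 / (1.0 - theta))
    print(f"Theta = {theta:.1f} -> extra overpotential = {d_eta*1000:5.1f} mV")

The penalty grows slowly at first but diverges as Θ approaches 1, which is why prompt bubble removal matters most at high current density, where coverage is largest.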
Zhang and Zeng studied the detachment of bubbles on a vertical electrode, first in a stagnant electrolyte and then with flow [START_REF] Zhang | Evaluating the Behavior of Electrolytic Gas Bubbles and Their Effect on the Cell Voltage in Alkaline Water Electrolysis[END_REF]. Their results showed that for a flow with a Reynolds number of 2500 the diameter of the bubbles was only slightly reduced. Estimate Vogt and Balzer give a relation allowing a first approximation of the bubble coverage to be obtained [START_REF] Vogt | The bubble coverage of gas-evolving electrodes in stagnant electrolytes[END_REF]: $$\Theta = 2.3\times 10^{-2}\,(j_{avg})^{0.3} \quad (2.16)$$ This relationship, in accordance with their observations, correlates the data points to a good approximation. Parameters influencing bubble coverage Since the bubbles grow on the electrode surface over a certain period of time, the screened area is especially time dependent [START_REF] Vogt | Superposition of microconvective and macroconvective mass transfer at gas-evolving electrodes-a theoretical attempt[END_REF]: $$\Theta = \frac{n}{S}\,\frac{1}{t_r}\int_0^{t_r} \pi R(t)^2\,dt \quad (2.17)$$ where n is the number of bubbles in contact with the electrode, $t_r$ is the average residence time of the bubble on the electrode, and $R_r$ is the bubble radius at $t = t_r$. Vogt relates the bubble coverage to the rate of gas produced, the average residence time of the bubbles and the average volume of the bubbles when they leave. He demonstrates the interdependence of these quantities. He establishes a semi-empirical relationship between the ratio of a nominal current density I/S to the summit value of this nominal current density $(I/S)_{su}$ and the bubble coverage [START_REF] Vogt | Superposition of microconvective and macroconvective mass transfer at gas-evolving electrodes-a theoretical attempt[END_REF]: $$\frac{I/S - j_0}{(I/S)_{su}} = 3.08\,\frac{\Theta^{1.5}}{(1 - \Theta)^{1.5}} \quad (2.18)$$ where $j_0$ is an exchange current density effective at small values of Θ only. According to Vogt, the summit value of this nominal current density $(I/S)_{su}$ is mainly determined by the surface condition and wettability of the electrode. Large values of $(I/S)_{su}$ are characteristic of strong wettability and smooth surfaces. The thermodynamic conditions (temperature, pressure) also play an important role in the departure of bubbles and thus in the bubble coverage. Temperature decreases the density of the gas, increases the partial pressure and decreases the solubility of the gas. Pressure decreases the size of the bubbles and therefore the bubble coverage, while temperature increases it. In order to account for the influence of these thermodynamic parameters on bubble coverage, Vogt developed the following relationship [START_REF] Vogt | The rate of gas evolution of electrodes-I. An estimate of the efficiency of gas evolution from the supersaturation of electrolyte adjacent to a gas-evolving electrode[END_REF]: $$\Theta = \frac{\pi R_r^2\,t_r}{2\,V_r}\;f_G\,\Phi_B\,\frac{(I/S)\,R_g\,T}{n_e\,F\,(p - p_s)} \quad (2.19)$$ where $V_r$ is the bubble volume at $t = t_r$, $\Phi_B$ is the current efficiency, and $f_G$ is the gas evolution efficiency and can be described as the portion of dissolved gas produced at the electrode and participating in the growth of bubbles, with the remainder of the dissolved gas dispersed in the bulk of the electrolyte. The above equation requires knowledge of the temperature and pressure conditions near the electrode, but in addition to this the residence time and radius of the bubbles.
However, there is little data on the residence time and radius of bubbles, so it is difficult to use the above equation. What this equation highlights is that in order to be able to describe bubble coverage it is necessary to have a better knowledge of the nucleation, growth and departure of the bubbles from the electrode. The gas evolution efficiency $f_G$ $f_G$, a measure of electrolysis efficiency The gas is produced at the electrode as a dissolved gas. This dissolved gas is transported to the core of the electrolyte by diffusion and convection. Some of it participates in the growth of bubbles. But the rest can leave the electrolysis cell without being captured by the bubbles, as shown in Fig. 2. For this gas to participate in the germination of new bubbles, its concentration must be sufficient. There are several competing mechanisms: absorption and transport by the bubbles of the gas phase, and transport by diffusion and convection within the electrolyte. To account for this phenomenon, a coefficient expressing the effective gas production is defined. The gas evolution efficiency $f_G$ is the ratio between $\dot{N}_b$ and $\dot{N}$ [START_REF] Matsushima | Single bubble growth during water electrolysis under microgravity[END_REF]Vogt, 1984a;Vogt, 2011a;[START_REF] Vogt | Mechanisms of mass transfer of dissolved gas from a gas-evolving electrode and their effect on mass transfer coefficient and concentration overpotential[END_REF]: $$f_G \equiv \left.\frac{\dot{N}_b}{\dot{N}}\right|_{0}^{Y} \quad (2.20)$$ where $\dot{N}_b$ is the rate of dissolved gas in [mol·s⁻¹] transported from the electrode to the bubbles attached to the electrode and participating in their formation by desorption from the liquid phase into the gas phase, and $\dot{N}$ is the total rate of dissolved substance produced at the electrode area and entering the liquid electrolyte, both evaluated over a layer of thickness Y adjacent to the electrode. The distance Y from the electrode in the normal direction is considered equal to $\delta_1$, the thickness of the diffusion layer, when the bubble average diameter d exceeds this diffusion layer thickness, and Y = d otherwise. The diffusion layer thickness defined in the vicinity of an electrode is the distance over which the concentrations are different from their value in the bulk solution. The definition of the thickness of this diffusion layer is arbitrary because the concentration approaches the value in the bulk solution asymptotically. Bubbles population A material balance allows us to evaluate the proportion of gas that passes from the electrode to the core of the electrolyte in the form of dissolved gas. The amount of bubbles at the electrode is necessary for the calculation of $f_G$ and of the effects of the attached bubble layer on: • the average bubble coverage Θ; • the volume of gas contained in the attached bubbles per unit area; • the average equivalent radius of the attached bubbles; • the average number of bubbles per unit area. The nucleation step is partly responsible for the number and position of bubbles attached to the electrodes and for the value of the dissolved gas concentration at the electrode. The nucleation frequency $f_{nuc}$, the coalescence frequency $f_{coal}$, and the detachment frequency $f_{det}$ allow the population balance of the bubbles to be established. This population balance allows the layer of attached bubbles and the population of bubbles in the bulk to be described. This balance is established locally on the electrode: $$\frac{dN_b}{dt} = f_{nuc} - f_{coal} - f_{det} \quad (2.21)$$ Influencing parameters and estimate In his calculations Vogt emphasises the interrelation between $f_G$ and Θ.
He establishes several relations allowing $f_G$ to be expressed as a function of several parameters [START_REF] Vogt | Mechanisms of mass transfer of dissolved gas from a gas-evolving electrode and their effect on mass transfer coefficient and concentration overpotential[END_REF]: • the surface of the electrode covered by the bubbles Θ; • a Sherwood number $Sh_1$ simulating the effect of mass transfer to the liquid bulk by diffusion; • a Sherwood number $Sh_2$ simulating the effect of mass transfer to the adhering bubbles; • the concentration profile in the diffusion layer; • the average diameter of the bubbles at the time of their departure $d_{dep}$. An estimate of the concentration profile normal to the electrode over a distance $\delta_1$ can be obtained from $f_G$ [START_REF] Vogt | Mechanisms of mass transfer of dissolved gas from a gas-evolving electrode and their effect on mass transfer coefficient and concentration overpotential[END_REF]: $$\frac{c - c_{bulk}}{(c_e - c_{bulk})_{f_G=0}} = 1 - \frac{y}{\delta_1} - \frac{2}{3}\,f_G\left(1 - \frac{y}{\delta_1}\right)^{1.5} \quad (2.22)$$ When y = 0 this relation can be simplified: $$\frac{c - c_{bulk}}{(c_e - c_{bulk})_{f_G=0}} = 1 - \frac{2}{3}\,f_G \quad (2.23)$$ which coincides with the results of a previous work [START_REF] Vogt | Mechanisms of mass transfer of dissolved gas from a gas-evolving electrode and their effect on mass transfer coefficient and concentration overpotential[END_REF]. Three models are established. The first model assumes a linear concentration profile. For big bubbles, $Sh_1 \equiv \frac{d_{dep}}{\delta_1} \geq 1.5$: $$f_G = \left(\frac{1}{3}\,\frac{Sh_1^2}{\Theta\,Sh_2} + \frac{2}{3}\right)^{-1} \quad (2.24)$$ For small bubbles, $Sh_1 \equiv \frac{d_{dep}}{\delta_1} \leq 1.5$: $$f_G = \left(\frac{\frac{1}{4}\,\frac{Sh_1}{\Theta\,Sh_2}}{1 - \frac{Sh_1}{3}} + \frac{2}{3}\right)^{-1} \quad (2.25)$$ The second model assumes a more complex concentration profile taking into account equation (2.22). For big bubbles, $Sh_1 \geq 1.5$: $$f_G = \left(\frac{1}{3}\,\frac{Sh_1^2}{\Theta\,Sh_2} + 0.8\right)^{-1} \quad (2.26)$$ For small bubbles, $Sh_1 \leq 1.5$: $$f_G = \frac{1 - \frac{Sh_1}{3}}{\frac{1}{4}\,\frac{Sh_1}{\Theta\,Sh_2} + \frac{2}{3} - 0.145\,Sh_1^{1.5}} \quad (2.27)$$ In the third model the assumption of a constant $Sh_2$ number is replaced by a relation obtained for a sphere in an infinite solution. For big bubbles: $$f_G = \left(\frac{Sh_1^2}{6\,\Theta} + 0.8\right)^{-1}\left[1 + 2.036\left(\frac{Pe}{Sh_1}\right)^{0.8}\left(1 - \frac{2}{3}\,f_G\right)^{1.8}\right] \quad (2.28)$$ For small bubbles: $$f_G = \frac{1 - \frac{Sh_1}{3} + 1.53\left(\frac{Pe}{Sh_1}\right)^{0.8}\left(1 - \frac{2}{3}\,f_G\right)^{1.8}\,\frac{1}{Sh_1}\left[1 - \left(1 - \frac{2}{3}\,Sh_1\right)^{2.8}\right]}{\frac{Sh_1}{8\,\Theta} + \frac{2}{3} - 0.145\,Sh_1^{1.5}} \quad (2.29)$$ where Pe is the Péclet number, defined as the ratio of the rate of advection by the flow to the rate of diffusion. All three models are subject to assumptions about the diameter of the bubbles at the time of detachment, their volume, the average concentration and the concentration profile, but they give a better approximation than what was previously formulated in the work of Vogt [START_REF] Vogt | The rate of gas evolution of electrodes-I. An estimate of the efficiency of gas evolution from the supersaturation of electrolyte adjacent to a gas-evolving electrode[END_REF]. Although difficult to use as is, they highlight the need to know the bubble coverage Θ and the bubble detachment diameter $d_{dep}$ at the moment of departure. From the three models, a group of dimensionless numbers influencing $f_G$ can be distinguished: $\left(\frac{Sh_1^2}{\Theta\,Sh_2}\right)^{-1}$. The $Sh_1/Sh_2$ ratio accounts for the competition between the transport phenomena. This ratio is itself dependent on the concentration profile. Vogt's work thus makes it possible to refine the overall view of the phenomena taking place in the boundary layer in the presence of bubbles. But it also highlights the fact that a better knowledge of the microfluidic phenomena around the bubble is necessary, hence motivating the work in this thesis.
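As a simple numerical illustration of the first model, the sketch below evaluates $f_G$ from equations (2.24)-(2.25) as reconstructed above; the input values of $Sh_2$ and Θ are illustrative only:

```python
# Gas-evolution efficiency f_G from the first (linear concentration
# profile) model, eqs. (2.24)-(2.25) as reconstructed above.
# Sh1 = d_dep / delta_1; all input values here are illustrative.
def f_g_model1(sh1: float, sh2: float, theta: float) -> float:
    if sh1 >= 1.5:  # "big bubbles"
        inv = sh1**2 / (3.0 * theta * sh2) + 2.0 / 3.0
    else:           # "small bubbles"
        inv = (0.25 * sh1 / (theta * sh2)) / (1.0 - sh1 / 3.0) + 2.0 / 3.0
    return min(1.0 / inv, 1.0)  # clip to the physical range f_G <= 1

for sh1 in (0.5, 1.5, 3.0, 6.0):
    print(f"Sh1 = {sh1:3.1f} -> f_G ~ {f_g_model1(sh1, sh2=2.0, theta=0.3):.2f}")
```

As expected, $f_G$ decreases as $Sh_1$ grows, i.e. as transfer to the bulk becomes more effective relative to transfer to the attached bubbles.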
Bubble development To sum up, the formation of hydrogen and oxygen bubbles affects electrical efficiency in three different ways: • the dispersed bubbles in the bulk increase the bulk-electrolyte resistivity; on the same principle, the attached bubbles also increase the resistivity of the gas-electrolyte mixture near the electrode; • the bubbles attached to the electrode area raise the overpotential by insulating parts of the electrode surface and crowding the current into the remaining area; • the attached bubbles modify the concentration overpotential by affecting the level of gas supersaturation near the electrode (capture of dissolved gases). It is therefore advisable to speed up the detachment of the bubbles from an electrolytic system in order to improve its efficiency. To understand how to accelerate the detachment of bubbles from the electrode, it is necessary to study the physical phenomena that cause their nucleation, growth and detachment. There are of course several ways to speed up electrolytic processes, for example by using different types of electrode materials or by circulating the electrolytic fluid. A non-exhaustive list of them is described in [START_REF] Zeng | Recent progress in alkaline water electrolysis for hydrogen production and applications[END_REF]. However, this study focuses on the microfluidic aspects. This is why it is necessary to have a better knowledge of the main works on the nucleation, growth, and detachment of electrogenerated bubbles. The goal is to know the phenomena related to their development and to have an overview of the models used to describe it. Nucleation General principle In a pure and homogeneous liquid, bubbles can form when the liquid is overheated. The liquid undergoes a phase change and the process is mainly controlled by the transfer of heat. In such a system, the concentration gradient plays no role. Another process for generating bubbles is the desorption of gas molecules from a liquid to a gas phase. This generation depends on the rate of supersaturation and is essentially controlled by the concentration gradient, assuming that the heat gradient in the system is negligible [START_REF] Jones | Bubble nucleation from gas cavities -a review[END_REF]. A third type of bubble generation is cavitation, which can be caused by reducing the external pressure below the vapour pressure of the pure liquid. Another type of bubble generation is that driven by chemical processes such as electrolysis. Upon supersaturation of the reduced (or oxidised) ions at the liquid-electrode interface, the molecular species produced is transformed into gas in the form of bubbles. This phenomenon begins at specific sites on the electrode called "nucleation sites". These sites are often imperfections, scratches, and grooves on the electrode surface. After the bubble detaches, some of the gas remains trapped in these imperfections, which encourages the formation of a new bubble. As the current density increases, the sites become more and more active, allowing for an increased gas flow at the electrode. A low current density implies low supersaturation, which in turn implies a low number of active sites. In addition to the current density, the temperature and concentration gradients can be locally significant and control the efficiency of the process. The heat in the system is due to the Joule effect and the presence of gas molecules is due to the chemical reaction.
It should be noted, as pointed out by Wang, that the theoretical electrical voltage $U_{theory}$ is a function of temperature and may decrease when the electrolyte temperature is increased [START_REF] Wang | The intensification technologies to water electrolysis for hydrogen production -A review[END_REF]. So temperature can also have an influence on the production of gas molecules. To account for the nucleation process in the case of electrolysis, the three intensive variables temperature, concentration, and pressure are to be considered. Although in most electrochemical applications the overall system pressure can be considered equivalent to atmospheric pressure, at the microscopic scale the pressure variation is to be taken into account. As reported by Tawfik and Diez, while the growth or detachment of bubbles focuses attention in electrolysis publications, there is a lack of information on nucleation and on the prediction of its onset [START_REF] Tawfik | On the relation between onset of bubble nucleation and gas supersaturation concentration[END_REF]. Supersaturation Lubetkin and Blackwell defined a saturation ratio $\zeta_r$ and a supersaturation degree $\zeta_d$ as [START_REF] Lubetkin | The nucleation of bubbles in supersaturated solutions[END_REF]: $$\zeta_r = \frac{c_e}{c_b}, \qquad \zeta_d = \zeta_r - 1 \quad (2.30)$$ where $c_b$ is the bulk concentration. There is no direct experimental means of measuring the supersaturation degree near the electrode surface. Temperature and interfacial tension are controlled parameters that depend on the electrolysis conditions and the nature of the electrolyte. On the other hand, the rate of supersaturation is linked to numerous variables such as the wettability of the electrodes, the solubility of the gas, the transport of matter and the current density. The rate of supersaturation reached at the surface of the electrode is therefore a difficult parameter to predict. However, Lubetkin distinguishes two approaches for evaluating it [START_REF] Lubetkin | The motion of electrolytic gas bubbles near electrodes[END_REF]. One is theoretical and based on nucleation theory [START_REF] Dapkus | Nucleation of electrolytically evolved hydrogen at an ideally smooth electrode[END_REF][START_REF] Lubetkin | The nucleation of bubbles in supersaturated solutions[END_REF]. The other is experimental: from the kinetic parameters obtained experimentally, the supersaturation degree is deduced [START_REF] Westerheide | Isothermal growth of hydrogen bubbles during electrolysis[END_REF][START_REF] Shibata | The Concentration of Molecular Hydrogen on the Platinum Cathode[END_REF]. Production and transport of dissolved gas To know the degree of supersaturation, one needs to know the amount of dissolved gas produced at the electrode and how it is transported. The number of moles of dissolved gas generated at the electrode, n, can be estimated with Faraday's law and is related to the electrical current I(t) flowing through the electrodes: $$n = \frac{\int I(t)\,dt}{z\,F} \quad (2.31)$$ where z is the valency number of ions of the substance, and $F = 9.648533\times 10^{4}$ [C·mol⁻¹] is the Faraday constant. The dissolved gas produced is then partially transported to the nucleation site. A part of these molecules is transported into the bulk, another part contributes to the growth of other bubbles, and the rest is involved in the nucleation process.
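Equation (2.31) translates directly into a one-line estimate. The sketch below assumes a constant current (so the integral reduces to $I\,t$) and uses illustrative values, with z = 2 electrons per hydrogen molecule:

```python
# Moles of gas generated at the electrode from Faraday's law (2.31),
# n = Integral(I dt) / (z F), for a constant current I over a time t.
F = 9.648533e4  # C/mol, Faraday constant

def moles_of_gas(current_a: float, duration_s: float, z: int) -> float:
    return current_a * duration_s / (z * F)

# Illustrative: 1 A for 60 s; z = 2 for H2.
n_h2 = moles_of_gas(1.0, 60.0, z=2)
print(f"n(H2) ~ {n_h2 * 1e6:.0f} umol")  # ~311 umol
```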
Vogt reports on the various modes of mass transfer involved [Vogt, 1990a]: • micro-convection induced by gas bubbles forming and periodically detaching from the electrode surface; • forced or natural macro-convection of the electrolyte superimposed on the previous micro-convection phenomenon; • diffusion, which is governed by Fick's second law. These modes act with an intensity that depends on the operating conditions. Vogt then obtains a general expression of the concentration of the dissolved gas, which makes it possible to obtain approximations in different regions of the electrolytic device [Vogt, 1990a]. However, it does not allow a profile of the concentration gradient close to the wall to be deduced. Methodology based on Fick's second law Tawfik and Diez deduced a profile of the concentration gradient by measuring the nucleation time [START_REF] Tawfik | On the relation between onset of bubble nucleation and gas supersaturation concentration[END_REF]. Assuming mass transfer without convection in a stagnant electrolyte near the electrode surface and based on Fick's second law, Eq. (2.32), the authors established a dissolved dihydrogen concentration profile as a function of time and wall distance: $$\frac{\partial c(x,t)}{\partial t} = D\,\frac{\partial^2 c(x,t)}{\partial x^2} \quad (2.32)$$ where D is the diffusion coefficient. The distance over which the concentration is greater than $c_{bulk}$ scales with $\delta \sim \sqrt{Dt}$. Over time, $c_e$ increases and so does the thickness of this diffusion layer. By combining the analytical solution of Eq. (2.32) with experimental measurements of the time of appearance of the first bubbles, the evolution and concentration profile in the diffusion layer can be estimated. In the case of a flowing electrolyte, when convection is taken into account, there is no obvious method to determine the concentration gradient in the diffusion layer. Energy state When the dissolved gas molecules oversaturate the solution, the environment becomes favourable to the formation of bubbles. Traditionally, it is assumed that instant nucleation occurs when this environment reaches a sufficient energetic state. Ward et al. describe this energy state [START_REF] Ward | On the Thermodynamics of Nucleation in Weak Gas-Liquid Solutions[END_REF]. Assuming that a bubble is spherical and as shown on Fig. 3, the work required for a bubble to form corresponds to a critical radius $R_c$ for which a bubble could exist in thermodynamic equilibrium, although unstable, with the surrounding liquid. Using Laplace's theorem, Eq. (2.33), and the laws of thermodynamics, they deduced the reversible work W, Eq. (2.34), necessary to generate a bubble with a critical radius $R_c$. The necessary work must oppose the surface energy, i.e. the pressure created by the surface tension of the future bubble: $$p_b - p_e = \frac{2\,\gamma}{R_c} \quad (2.33)$$ $$W = \frac{4\pi\,\gamma\,R_c^2}{3} \quad (2.34)$$ where γ is the surface tension of the liquid-gas interface and $p_b - p_e$ is the difference between the pressure inside the bubble and the pressure in the electrolyte. This work varies according to the radius of the expected bubble, as illustrated in Fig. 3 (adapted from [START_REF] Blander | Bubble nucleation in liquids[END_REF]), which shows the work required to form a spherical bubble according to its radius. Since the system seeks to achieve a minimum energy state, below a radius $R_c$ the bubble cannot form; above $R_c$ the bubble can grow. In their review Blander and Katz take up the basics of classical nucleation theory [START_REF] Blander | Bubble nucleation in liquids[END_REF].
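Equations (2.33) and (2.34) give an immediate order of magnitude for the critical nucleus. The sketch below uses a pressure difference of the order of the experimental values of Finkelstein and Tamir quoted further on, and is illustrative only:

```python
import math

# Critical radius (2.33) and reversible work of formation (2.34) of a
# bubble nucleus: R_c = 2*gamma/(p_b - p_e), W = 4*pi*gamma*R_c**2 / 3.
gamma = 0.072    # N/m, surface tension of water near room temperature
delta_p = 20e6   # Pa, illustrative, within the 13-42 MPa range measured
                 # by Finkelstein and Tamir

r_c = 2.0 * gamma / delta_p
w = 4.0 * math.pi * gamma * r_c**2 / 3.0
print(f"R_c ~ {r_c * 1e9:.1f} nm, W ~ {w:.2e} J")  # ~7 nm, ~1.6e-17 J
```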
The distribution of the number of bubbles n with a critical radius $R_c$ per unit volume containing x molecules is similar to a Boltzmann distribution, in relation to the reversible work necessary to form the bubble: $$n = Z\,e^{-\frac{W}{k_B T}} \quad (2.35)$$ where Z is a pre-exponential factor to be determined and $k_B$ is the Boltzmann constant. The addition of a molecule to a bubble with a critical radius is sufficient to trigger the growth. Henry's Law is a gas law stipulating that the concentration of gas molecules dissolved in the liquid is proportional to their partial pressure; the proportionality constant $k_H = p/c$ is called the Henry constant. It links the degree of supersaturation of a gas to its partial pressure: $$p_b - p_e = k_H\,(c_e - c_b) = p_e\left(\frac{c_e}{c_b} - 1\right) = p_e\,\zeta_d \quad (2.36)$$ From Eq. (2.36), it follows that the higher the degree of supersaturation, the higher the pressure. From Eq. (2.33) and Eq. (2.34), we conclude that the higher the pressure, the smaller the critical radius of the bubbles, and therefore, from Eq. (2.35), the greater the number of bubbles per unit volume [START_REF] Jones | Bubble nucleation from gas cavities -a review[END_REF]. Rate of nucleation From Eq. (2.34) and Eq. (2.35), the rate of formation of bubbles per unit time per unit volume can be estimated. Ward et al. give a general expression of this rate [START_REF] Ward | On the Thermodynamics of Nucleation in Weak Gas-Liquid Solutions[END_REF]: $$J = Z\,\exp\left(-\frac{4\pi\,\gamma\,R_c^2}{3\,k_B\,T}\right) \quad (2.37)$$ For a heterogeneous nucleation, Lubetkin gives an expression taking into account the degree of supersaturation σ [START_REF] Lubetkin | Why Is It Much Easier To Nucleate Gas Bubbles than Theory Predicts?[END_REF]: $$J = Z\,\exp\left(-\frac{16\pi\,\gamma^3\,\Phi(\theta)}{3\,k_B\,T\,(\sigma P')^2}\right) \quad (2.38)$$ where Φ(θ) is a function dependent on the contact angle and defined according to the geometry of the surface (here a flat surface). P′ is the pressure at which the nucleation takes place. The pre-exponential factor Z depends on the operating conditions. According to Lubetkin, the nucleation rate J is a thousand times less sensitive to a variation in the pre-exponential factor Z than to a variation in the exponential factor. So its importance is minor compared to changes in the degree of supersaturation or surface tension. The classical nucleation theory gives access to the degree of supersaturation from the nucleation rate, the measurement of the contact angle, the pressure and the temperature. Flaws in the classical nucleation theory Usually nucleation is said to be homogeneous when bubbles form in the liquid and heterogeneous when they develop in contact with a surface. If the classical nucleation theory makes it possible to describe the appearance of bubbles for homogeneous nucleation, a gap between theory and experiment has been reported when it comes to heterogeneous nucleation on a surface. Dapkus, observing electrogenerated hydrogen gas bubbles in aqueous sulfuric acid solutions at a mercury pool electrode, found that they appeared at much lower concentrations than predicted by classical theory [START_REF] Dapkus | Nucleation of electrolytically evolved hydrogen at an ideally smooth electrode[END_REF]. Nucleation was observed on the surface of the electrode at degrees of supersaturation at least two orders of magnitude lower than the values predicted by the theory.
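Lubetkin's remark on sensitivity can be checked numerically from equation (2.38): in a ratio of rates at two supersaturations the unknown pre-exponential factor Z cancels. All parameter values below, in particular the geometric factor Φ(θ), are arbitrary placeholders:

```python
import math

# Relative heterogeneous nucleation rate from eq. (2.38); Z cancels in
# the ratio J(sigma1)/J(sigma2). All values are illustrative only.
k_b = 1.380649e-23   # J/K, Boltzmann constant
gamma = 0.072        # N/m
T = 298.15           # K
phi = 1e-3           # Phi(theta), hypothetical geometric factor
p_prime = 101325.0   # Pa, nucleation pressure

def log_j(sigma: float) -> float:
    """ln J up to the additive constant ln(Z)."""
    return -16.0 * math.pi * gamma**3 * phi / (
        3.0 * k_b * T * (sigma * p_prime) ** 2)

ratio = math.exp(log_j(200.0) - log_j(100.0))
print(f"J(sigma=200)/J(sigma=100) ~ {ratio:.1e}")  # several orders of magnitude
```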
Other authors report this same discrepancy [START_REF] Lubetkin | Why Is It Much Easier To Nucleate Gas Bubbles than Theory Predicts?[END_REF][START_REF] Hemmingsen | Spontaneous formation of bubbles in gas-supersaturated water[END_REF][START_REF] Chen | Electrochemical Measurements of Single H2 Nanobubble Nucleation and Stability at Pt Nanoelectrodes[END_REF]. Jones et al. point out that the thermodynamic approach to describe systems that are not in equilibrium is not ideal [START_REF] Jones | Bubble nucleation from gas cavities -a review[END_REF]. At the very small scales where nucleation occurs, the concept of surface tension is no longer valid; this is referred to as the capillary approximation. The liquid-gas interface is not so sharp. Considering the Laplace equation, enormous pressure differences are necessary between the inside of a bubble embryo of a few nanometres radius and the liquid. Finkelstein and Tamir developed a reliable method to determine this pressure difference. For different gases they found pressure differences between 13 and 42 MPa [START_REF] Finkelstein | Formation of gas bubbles in supersaturated solutions of gases in water[END_REF]. According to classical theory, this value is 142 MPa, at least three times higher than that obtained experimentally. Moreover, the classical theory does not take into account the different types of gases. Surfaces acting as catalysts In his review of nucleation from gas cavities, Jones points out that solid surfaces act as catalysts [START_REF] Jones | Bubble nucleation from gas cavities -a review[END_REF]. According to the classical theory, in the absence of surfaces or gas cavities, the energy required to create a bubble must be higher than that caused by the surface tension of the liquid-gas interface that opposes it. In the presence of a solid substrate, the surface area of this interface decreases. Less interfacial free energy is needed for the bubble to grow to the critical size. Nucleation is facilitated. Lubetkin and Wilt take this action of the solid surface into account by introducing a function established according to its geometry [START_REF] Wilt | Nucleation rates and bubble stability in water-carbon dioxide solutions[END_REF][START_REF] Lubetkin | Why Is It Much Easier To Nucleate Gas Bubbles than Theory Predicts?[END_REF]. In Eq. (2.38) the Φ(θ) function allows the nucleation rate to be reduced accordingly. In addition, Jones distinguishes between bubbles nucleating from nothing and those developing from an existing gas cavity. When the bubbles detach, part of the gas remains attached to the solid surface. For bubbles developing on these gas cavities the required energy is even lower. Therefore, the degree of supersaturation required is lower. From these reflections Lubetkin hypothesises that on electrodes containing asperities with a geometry adequate to retain these gaseous cavities, nucleation would be greatly facilitated, which would improve the overall electrolysis process [START_REF] Lubetkin | The motion of electrolytic gas bubbles near electrodes[END_REF]. Lubetkin suggests that the conditions required to make a flat surface more favourable to nucleation are those that, on the contrary, make it more difficult to detach the bubbles once formed [START_REF] Lubetkin | The motion of electrolytic gas bubbles near electrodes[END_REF].
Sakuma et al. carried out experiments under microgravity and demonstrated that bubbles at the time of their detachment are smaller on a hydrophilic surface than on a hydrophobic surface [START_REF] Sakuma | Nucleation and growth of electrolytic gas bubbles under microgravity[END_REF]. On a hydrophilic surface the contact angle approaches zero, facilitating the detachment of bubbles. However, in this case the nucleation approaches the conditions of a homogeneous nucleation, which makes it more difficult. On the other hand, on hydrophobic surfaces, the bubble nucleates and spreads more easily: the hydrophobic support not being favourable to the electrolyte, the gas covers this surface more easily. It becomes larger but consequently takes longer to detach. While the bubble remains attached, it successively blocks neighbouring nucleation sites as it develops. Growth Bubble growth in supersaturated solutions has been studied from a theoretical, experimental, and numerical point of view. Among the papers adopting the theoretical approach one can find those of Scriven and of Epstein and Plesset [START_REF] Epstein | On the Stability of Gas Bubbles in Liquid-Gas Solutions[END_REF][START_REF] Scriven | On the dynamics of phase growth[END_REF]. In their work, Epstein and Plesset studied the evolution of the radius of a bubble at different levels of supersaturation, assuming a spherical bubble in an infinite solution and neglecting the possible effects of convection surrounding the bubble. Scriven details this work taking into account all assumptions and limitations. Exact solutions of the equations are obtained for typical bubble growth conditions. An asymptotic expression of the evolution of the bubble radius is obtained. These early studies established the first relationships to describe bubble growth. Mass balance at the interface By performing a mass balance in the gas phase and assuming that the bubble is spherical, the growth rate of the bubble $\dot m_B$ [kg·s⁻¹] can be obtained by differentiation: $$\dot m_B = \rho_g\,\frac{d}{dt}\left(\frac{4}{3}\pi R(t)^3\right) = 4\pi\,\rho_g\,R(t)^2\,\frac{dR(t)}{dt} \quad (2.39)$$ Calculating the same quantity but considering the liquid side this time and assuming transport by diffusion: $$\dot m_B = M_g\,4\pi R^2\,D\left.\frac{dc}{dr}\right|_R \quad (2.40)$$ where r is the radial coordinate in a spherical coordinate system in which the origin coincides with the centre of the bubble. In the diffusive regime, the concentration profile at each time and position is defined by the radially symmetric diffusion equation: $$\frac{\partial c(r,t)}{\partial t} = D\,\nabla^2 c(r,t) = \frac{D}{r^2}\,\frac{\partial}{\partial r}\left(r^2\,\frac{\partial c(r,t)}{\partial r}\right) \quad (2.41)$$ Considering a single bubble in an infinite domain, the boundary conditions for solving equation (2.41) are defined as: $$c(r, 0) = c_0, \quad r \gg R \quad (2.42a)$$ $$\lim_{r\to\infty} c(r,t) = c_0, \quad t > 0 \quad (2.42b)$$ $$c(R, t) = c_s, \quad t > 0 \quad (2.42c)$$ These boundary conditions establish that, at the beginning of the bubble growth, the solution has a homogeneous concentration $c_0 = Hp_0$, which depends on the temperature $T_0$ and the pressure $p_0$ at which the saturated solution was prepared, Eq. (2.42a); that very far away from the bubble the concentration remains unaltered at all times, Eq. (2.42b); and finally, that the concentration at the bubble boundary remains constant at a value $c_s = Hp_s$, which depends also on the temperature $T_s$ (which usually coincides with $T_0$) and the pressure $p_s$ at which experiments are performed, Eq. (2.42c). Solving Eq.
(2.41) using these boundary conditions results in: $$\left.\frac{\partial c}{\partial r}\right|_R = (c_0 - c_s)\left(\frac{1}{R} + \frac{1}{\sqrt{\pi D t}}\right) \quad (2.43)$$ This brings us to the Epstein-Plesset equation, which defines the change in the growth of the bubble with respect to time: $$\frac{dR}{dt} = \frac{M\,D\,(c_0 - c_s)}{\rho_g}\left(\frac{1}{R} + \frac{1}{\sqrt{\pi D t}}\right) = \frac{M\,(c_0 - c_s)}{\rho_g}\sqrt{\frac{D}{\pi t}}\left(1 + \frac{\sqrt{\pi D t}}{R}\right) \quad (2.44)$$ If $c_s > c_0$ the solution is undersaturated and the gas flow goes from the bubble to the bulk; if $c_s = c_0$ the system is at equilibrium; if $c_s < c_0$ the solution is supersaturated. Radius of the bubble In experimental approaches, a relation of the bubble radius as a function of time is usually used to account for its growth: $$R(t) = \beta\,t^b \quad (2.45)$$ where β is the growth coefficient, principally dependent on the current density according to Brandon and Kelsall, and b is the time coefficient [START_REF] Brandon | Growth kinetics of bubbles electrogenerated at microelectrodes[END_REF]. These two parameters vary according to the studies and the different phases of the growth. Several authors use this model as a basis for presenting their experimental results. Bubble growth involves several successive processes: • the chemical reaction, which is determined by the electrolyte and surface properties; • the gas molecule transfer process, which is determined, among other things, in a stagnant electrolyte by the diffusion of the gas molecules and the radial micro-convection effect due to the expansion of the bubble interface; • the desorption and the absorption of the gas molecules at the bubble interface. From these steps, Wang et al. assume that the competition between desorption and adsorption at the liquid-bubble interface is not the process limiting bubble growth, but rather the combination of mass transfer and chemical reaction rate [START_REF] Wang | Investigations on bubble growth mechanism during photoelectrochemical and electrochemical conversions[END_REF]. Its importance in explaining the dynamics of growth would therefore be minor. Another phenomenon to be taken into account is the coalescence of small bubbles into large ones. According to Vogt, the validity of equation (2.45) is restricted to cases where bubbles do not interfere with each other [Vogt, 1983a]. The parameter b takes different values depending on which process is limiting. Inertial growth During the initial tens of microseconds, b = 1 and the growth is inertia-controlled [START_REF] Brandon | Growth kinetics of bubbles electrogenerated at microelectrodes[END_REF]. The growth is described by the Rayleigh equation: $$R(t) = \left(\frac{2\,\Delta p}{3\,\rho}\right)^{0.5}\,t \quad (2.46)$$ where ρ is the electrolyte density. The bubble growth is driven by the high pressure difference Δp determined by the Laplace equation and due to the interfacial tension force. Growth by diffusion When the movement of gas molecules is slow and controlled by diffusion phenomena, Scriven showed that the radius increases with the square root of time, i.e. b = 0.5, and the growth coefficient is a function of the dissolved gas concentration and the diffusion coefficient [START_REF] Scriven | On the dynamics of phase growth[END_REF]: $$R(t) = 2\,\beta\,(D\,t)^{0.5} \quad (2.47)$$ where β can be calculated as follows: $$\beta = \left[\frac{\Delta c}{2\pi\rho_{g}}\left(1 + \frac{2\pi\rho_{g}}{\Delta c}\right)^{0.5}\right]^{0.5} \quad (2.48)$$ where Δc is the supersaturation of the species under consideration near the electrode.
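The Epstein-Plesset equation (2.44) is straightforward to integrate numerically. The sketch below uses a simple explicit Euler scheme with illustrative parameter values (they are not data from the cited works):

```python
import math

# Minimal explicit-Euler integration of the Epstein-Plesset growth law
# (2.44) for a supersaturated solution (c0 > cs). All parameter values
# are illustrative placeholders.
M = 2e-3        # kg/mol, molar mass of H2
D = 5e-9        # m^2/s, diffusivity of dissolved H2 (order of magnitude)
rho_g = 0.08    # kg/m^3, gas density inside the bubble
c0, cs = 1.5, 0.75   # mol/m^3, bulk and interface concentrations

R = 1e-6        # m, initial radius
dt = 1e-4       # s, time step
t = dt
for _ in range(100_000):  # integrate up to t ~ 10 s
    dRdt = (M * D * (c0 - cs) / rho_g) * (1.0 / R + 1.0 / math.sqrt(math.pi * D * t))
    R += dRdt * dt
    t += dt
print(f"R(t={t:.1f} s) ~ {R * 1e6:.0f} um")
```

At long times the computed radius indeed approaches the diffusive $R \propto t^{0.5}$ behaviour of equation (2.47).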
Wang extends this theory to the phenomenon of micro-convection due to the spreading of the bubble, using the surface renewal theory to simulate the gas-liquid interfacial mass transfer [START_REF] Wang | Investigations on bubble growth mechanism during photoelectrochemical and electrochemical conversions[END_REF]. To simulate the mass transfer around the bubble, they add the diffusive flow to that caused by the expansion of the bubble. The time coefficient remains unchanged, but he finds that the effect due to micro-convection on growth may be 100 times higher than that of diffusion. Growth limited by the reaction rate When gas molecules move rapidly in the liquid or when the gas-liquid interface is close to the electrode, the process limiting the growth is the reaction rate. Some of the generated molecules are dispersed in the electrolyte while the others contribute to the generation of bubbles, at a rate $N_G$ [mol·s⁻¹]. Assuming that the bubble is spherical, we obtain: $$N_G = \frac{d}{dt}\left(\frac{4\pi R^3}{3\,V_m}\right) \quad (2.49)$$ where $V_m$ is the molar volume of the gas. Wang deduces that b = 1/3 when the bubble growth is controlled by the surface chemical reaction [START_REF] Wang | Investigations on bubble growth mechanism during photoelectrochemical and electrochemical conversions[END_REF]: $$R(t) = \left(\frac{3\,V_m\,N_G}{4\pi}\right)^{1/3}\,t^{1/3} \quad (2.50)$$ It is almost impossible to study the individual behaviour of bubbles on conventional electrodes due to the high bubble coverage and void fraction at practical current densities [START_REF] Massing | Thermocapillary convection during hydrogen evolution at microelectrodes[END_REF]. This is why most experimental studies on the subject are done on microelectrodes. These electrodes are about 100 micrometers in diameter and allow the observation of the development of a single bubble. This facilitates the study of its nucleation, growth and detachment. However, one of the main differences between a microelectrode and a conventional electrode is that the bubble completely covers its surface before detachment. The bigger the bubble gets, the more it blocks the electrode. There is therefore a high current density and a high local supersaturation level at the bubble foot. Whereas for a conventional electrode part of the gas escapes into the liquid, for a microelectrode almost all of the generated hydrogen diffuses directly into the bubble at its foot. Therefore the behaviour of a bubble on a microelectrode and on a conventional electrode will not be the same, and the observed growth relationships can be expected to be different. Detachment Break-off diameter An appropriate analysis of the detachment is essential to determine the bubble coverage and therefore the overpotential and the efficiency of the electrolysis process. Usually a balance of the forces acting on the bubble is used to determine the diameter that the bubble will have at the time of detachment and the residence time of the bubble on the electrode surface. These two parameters are usually used to account for the detachment. They are determined by the nucleation and growth steps. Lubetkin notes that nucleation kinetics and detachment kinetics are closely related and that the classical theory of heterogeneous nucleation is only valid when the detachment is fast compared to the nucleation rate [START_REF] Lubetkin | The nucleation and detachment of bubbles[END_REF]. Vogt et al. point out that the analogy between boiling and the evolution of electrolysis bubbles is limited and that the detachment diameters of electrolysis and boiling bubbles differ greatly [START_REF] Vogt | The limits of the analogy between boiling and gas evolution at electrodes[END_REF].
In most of the relationships established by Vogt and Balzer to determine the mass transfer coefficients, a term is related to the break-off diameter of the bubble [START_REF] Vogt | The bubble coverage of gas-evolving electrodes in stagnant electrolytes[END_REF]. They use a force analysis to correlate the bubble coverage with the electrolyte flow. Study of detachment on microelectrodes Under normal operating conditions the bubble coverage near the electrodes is very high, which limits observation and makes it difficult to analyse bubble growth and detachment. In addition, bubbles that detach and rise near the surface create turbulence and prevent the determination of the forces on an isolated bubble. This is why many electrolysis experiments are performed on micro- or nanoelectrodes [Fernández et al.]. In this case the bubble forms on a small surface. Usually, before the detachment takes place, the bubble covers the entire surface of the micro- or nanoelectrode, forcing the current to pass through the corner formed by the base of the bubble and the contact surface, causing an increase in current density. Its distribution is thus different compared to larger electrodes. Force balance Depending on the experimental conditions (stagnant electrolyte or electrolyte with flow, vertical or horizontal electrodes, micro or macro electrode), the intensity and direction of the forces involved in the balance may change. However, the principle remains the same. Their point of application is assumed to be at the centre of the bubble. The bubble is assumed to be spherical. For a horizontal electrode, detachment is supposed to occur when the sum of the force projections on the vertical axis is equal to zero. The force balance usually includes: • the surface tension force holding the bubble on the electrode; • the buoyancy, inertial, and contact pressure forces pushing the bubble away from the surface; • the hydrodynamic forces acting in either direction depending on the experimental conditions [START_REF] Hibiki | Lift force in bubbly flow systems[END_REF]. Inertial force The inertial force is due to the growth of the bubble: $$F_i = \frac{4\pi R^3}{3}\,\rho_g\,\frac{d^2R}{dt^2} \quad (2.51)$$ It is usually neglected because the mass of the bubble is very small and the growth rate of the bubble is also very small. Buoyancy force The buoyancy force is the resultant of the difference between Archimedes' thrust and the force of gravity; it is given by: $$F_b = \frac{4\pi R^3}{3}\,(\rho_l - \rho_g)\,g \quad (2.52)$$ Contact pressure force The contact pressure force is due to the pressure difference between the inside and outside of the bubble (see the Laplace equation) pushing on the solid surface $S_{CP}$ of the electrode in contact with the bubble: $$F_{CP} = \int_{S_{CP}} (p_G - p_L)\,\mathbf{n}\,dS \approx \pi R_c^2\,\frac{2\gamma}{R} \quad (2.53)$$ where $R_c$ is the radius of the contact surface. Surface tension force The surface tension force depends on the contact angle θ of the bubble with the surface: $$F_S = 2\pi\,R_c\,\gamma\,\sin(\theta) \quad (2.54)$$ In a stagnant electrolyte on a horizontal microelectrode, Chen et al. note that the surface tension and contact pressure forces are the main forces that influence the balance, followed by the buoyancy force, the inertial and hydrodynamic forces being negligible [START_REF] Chen | Dynamics of single bubble departure from TiO2 nanorod-array photoelectrode[END_REF]. This means that the surface tension of the liquid-gas interface is one of the most sensitive parameters in determining the bubble departure. A missing force? With only these forces taken into account, the force balance usually fails to predict the bubble departure diameter.
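To see what such a balance looks like in practice, the sketch below solves a deliberately reduced version of it: inertia and contact pressure are neglected and the contact-line radius $R_c$ is held fixed, so that buoyancy (2.52) alone balances the surface tension force (2.54). This is a sketch under strong assumptions, not the full balance discussed above:

```python
import math

# Reduced detachment criterion:
#   (4/3) * pi * R^3 * (rho_l - rho_g) * g = 2 * pi * R_c * gamma * sin(theta)
# solved for R in closed form. All values are illustrative.
gamma = 0.072               # N/m, surface tension
rho_l, rho_g = 1000.0, 0.08 # kg/m^3, liquid and gas densities
g = 9.81                    # m/s^2
theta = math.radians(30.0)  # contact angle, hypothetical
r_c = 50e-6                 # m, fixed contact-line radius, hypothetical

r_detach = (3.0 * r_c * gamma * math.sin(theta)
            / (2.0 * (rho_l - rho_g) * g)) ** (1.0 / 3.0)
print(f"detachment radius ~ {r_detach * 1e3:.2f} mm")  # ~0.65 mm
```

The millimetre-scale result is the right order of magnitude for departing electrolytic bubbles, but, as the text notes, such balances often fail quantitatively when Marangoni forces are ignored.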
Lubetkin reports different bubble phenomena observed during electrolysis that cannot be explained by the usual force balance [START_REF] Lubetkin | The motion of electrolytic gas bubbles near electrodes[END_REF]. One of them is called the "bubble jump-off and return". When two bubbles coalesce, the new bubble comes off the surface. The surface tension force is then zero and the buoyancy force is supposed to carry the bubble into the bulk. However, the bubble immediately returns to the electrode surface. Lubetkin deduced from this the evidence of a force created by a Marangoni effect; the origin of this interfacial stress and of the flows it generates is detailed in the surface tension section below. 3 Transport of the diluted species in the vicinity of the bubble Mass transfer coefficient Definition The mass transfer coefficient k is a term that accounts for the effect of agitation in the calculations of the interfacial mass transfer or of the mass transfer in the bulk. It has the dimensions of a velocity: $$k = \frac{J}{c} \quad (2.55)$$ with J the material flow density in [mol·m⁻²·s⁻¹], and c the concentration in [mol·m⁻³]. Diffusion induced transport coefficient In the diffusion layer the mass transfer coefficient can be expressed as: $$k_{diff} = \frac{D}{\delta} \quad (2.56)$$ with D the diffusion coefficient of the species in [m²·s⁻¹], and δ the thickness of the diffusion layer in [m]. In the absence of gas bubble production, the material transport coefficient depends solely on the nature of the flow, the characteristics of the fluid (kinematic viscosity) and the geometry of the cell. Expression in terms of the Sherwood number This coefficient has been measured or calculated for many geometries and flow types. It is expressed by means of dimensionless number correlations [START_REF] Reid | Chemical engineers[END_REF][START_REF] Fahidy | Elements de genie electrochimique[END_REF]. These correlations are established between the Sherwood, Reynolds (or Grashof), and Schmidt numbers and dimensional ratios characteristic of the flow: $$Sh = A\,Re^a\,Sc^b\,\left(\frac{L}{d}\right)^c \quad (2.57)$$ with A, a, b, c constants, and L/d a ratio of characteristic flow dimensions. The Sherwood number for transport at the electrode is proportional to the transport coefficient: $$Sh = \frac{k\,\delta}{D} \quad (2.58)$$ The Schmidt number is defined by: $$Sc = \frac{\mu}{\rho\,D} \quad (2.59)$$ The Reynolds and Schmidt numbers are used to characterise the thickness of the electrolyte layer near the electrodes in which transport takes place mainly by diffusion. Generally, the greater the forced convection or agitation, the higher the Reynolds number, the thinner the layer and the faster the transport. Bubble induced transport coefficient When an electrode generates gas, the bubbles cause agitation of the liquid in the vicinity of the electrode. This bubble-induced convection strongly increases the transport coefficient [Vogt, 1983b;[START_REF] Ward | Bubble evolution in solutions with gas concentrations near the saturation value[END_REF]. The transport coefficient describing the bubble-induced transport is denoted $k_b$.
Experimental data show that $k_b$ increases with the amount of gas produced [Vogt, 1983b]. There is therefore a relationship between the bubble-induced transport coefficient $k_b$ and the current density, of the form $k_b = p_1\,j^{p_2}$. A very large number of experimental relationships are available as a function of the gas produced, the electrode material, the temperature, the pressure and the electrolyte; these relationships are collected in [START_REF] Janssen | Behaviour of and mass transfer at gas-evolving electrodes[END_REF]Stephan and Vogt, 1979a]. It is shown that the transport coefficient, at a given current density, decreases as the size of the bubbles at detachment increases. Superposition of transport coefficients It is common practice to superimpose the bubble-induced transport coefficient and the diffusion-induced transport coefficient to describe the electrolyte flow. In their study Bisang et al. validate this practice experimentally by showing the effects of this superposition on the current density distribution at the surface of an electrode [START_REF] Bisang | Effect of mass transfer on the current distribution in monopolar and bipolar electrochemical reactors with a gas-evolving electrode[END_REF]. Equations (2.60) and (2.61) are usually used to calculate the global transport coefficient $k_t$ from the independently calculated coefficients $k_{diff}$ and $k_b$ [Vogt, 1983a]: $$k_t = k_{diff} + k_b \quad (2.60)$$ $$k_t = \left(k_{diff}^2 + k_b^2\right)^{1/2} \quad (2.61)$$ The choice between one or the other of these equations is validated a posteriori with regard to the results obtained by the model used. Therefore, to calculate $k_t$, a model is needed to calculate the effect of the movement of the electrolyte in the vicinity of the electrode, $k_{diff}$, without the presence of the bubbles, and a model to describe the effect of the bubbles produced on $k_b$. The calculation of $k_t$ allows us to evaluate the movement of the electrolyte in the area around the bubble and the electrode. It allows us to estimate how the dissolved species are transported and to evaluate the concentration overpotential from equation (2.8). Models describing the effect of bubbles on the transport coefficient Models estimating the average transport in the vicinity of bubbles As shown in the diagram in Fig. 4, the amount of dissolved gas involved in the growth of the bubble depends on three mechanisms: the production of dissolved gas at the electrode, its transport from the electrode to the interface, and its absorption at the interface. The transport of dissolved species is a key step, yet there is no consensus on how to model it. Different types of competing models have been developed to describe the effect of bubbles on the transport coefficient near an electrode. Each model assumes that the phenomenon it models is sufficient to describe the transport of species. Some authors do not exclude that different forms of agitation of the diffusion boundary layer can coexist and that consequently the transport coefficients calculated by the different models can add up [START_REF] Janssen | Behaviour of and mass transfer at gas-evolving electrodes[END_REF]. In the "penetration model", transport is increased by replacing the volume occupied by a bubble at the time of its detachment by the same volume of liquid from the core of the electrolyte just after detachment. The second type of model is "hydrodynamic". This model considers that the mechanism controlling the transport of material around the electrodes is the free convection brought about by the detached bubbles.
This model was initially developed to describe the material transport in an electrolysis cell in which gas is injected to agitate the electrolyte. Finally, the last type of model considers that the bubbles increase the transport of matter by agitating the electrolyte that surrounds them during their growth [Stephan and Vogt, 1979b;[START_REF] Vogt | Superposition of microconvective and macroconvective mass transfer at gas-evolving electrodes-a theoretical attempt[END_REF]. This model is called the "microconvective model" and predicts a transport of matter described by a correlation built from dimensionless numbers. Microconvective model The microconvective model assumes that the overall flow in the electrolyte can be decomposed into different microconvection phenomena. Relationships based on the Sherwood number are established theoretically and experimentally. The mass transfer coefficient is determined from this Sherwood number [Vogt, 1984b;[START_REF] Vogt | Superposition of microconvective and macroconvective mass transfer at gas-evolving electrodes-a theoretical attempt[END_REF][START_REF] Vogt | Local microprocesses at gas-evolving electrodes and their influence on mass transfer[END_REF]. A first relationship was determined by Wragg to account for free convection on a horizontal electrode [START_REF] Wragg | Free convection mass transfer at horizontal electrodes[END_REF]. This phenomenon of microconvection taking place in the vicinity of the electrode is assumed to be caused by the difference between $c_e$, the concentration of diluted species adjacent to the electrode, and $c_{bulk}$, the bulk concentration: $$Sh = \frac{k_f\,L}{D} = 0.64\,(Gr\,Sc)^{0.25} \quad (2.62)$$ The Grashof number is: $$Gr = \frac{g\,\beta^*\,(c_e - c_{bulk})\,L^3}{\nu^2} \quad (2.63)$$ where $\beta^*$ is: $$\beta^* = -\frac{1}{\rho}\left(\frac{\partial \rho}{\partial c}\right)_{T,p} \quad (2.64)$$ According to Vogt, the density difference in equation (2.63) is affected by the temperature difference between the liquid bulk and the electrode-liquid interface in addition to the corresponding concentration differences of all subspecies [Vogt, 1993a]. The author introduces the term of single-phase free convection, emphasising that the origin of this microconvection is independent of the gas phase, but not its extent, and repeats the previous equation while focusing on the temperature and concentration gradients that cause the density difference. The dimensionless mass flux of the dissolved gas is given by: $$Sh = \frac{k\,d}{D} = 0.72\left(\frac{1-\Theta}{1 - \frac{2}{3}f_G}\right)^{0.8}\left(\frac{j\,\epsilon\,\alpha_o\,g\,L^4}{2F\,\nu_L^3}\,Sc^2\right)^{0.2} \quad (2.65)$$ where the expansion coefficient $\alpha_o$ is: $$\alpha_o = \left[13 + 71\left(1 - \frac{2}{3}f_G\right)\right]\times 10^{-6} \quad (2.66)$$ The expansion coefficient $\alpha_o$ accounts for the difference in density caused by concentration and temperature gradients. Another type of microconvection influencing the mass transfer of the dissolved substance once the bubble begins to grow at the electrode surface is the bubble-induced microconvection [Vogt, 1993a;Vogt, 1993b]: $$Sh = \frac{k\,d}{D} = 1.89\,(Re\,f_G)^{0.5}\,Sc^{0.487}\,\frac{\left[\Theta^{0.5}\left(1 - \Theta^{0.5}\right)\right]^{0.5}}{1 - \frac{2}{3}f_G} \quad (2.67)$$ According to Matsushima et al., the above relationships were later confirmed on the growth of a single bubble under microgravity [START_REF] Matsushima | Single bubble growth during water electrolysis under microgravity[END_REF]. Mass transport within a phase depends directly on the concentration gradient of the species being transported in that phase. Mass can also be transported from one phase to another, and this process is called interphase mass transfer.
General principles At the liquid-gas interface, the sorption phenomenon can be described as physical, whereas at the electrode it can be described as chemical. Physical absorption, or non-reactive absorption, is a process of mass transfer that does not involve a chemical reaction occurring in the liquid phase. The rate of absorption-desorption at the interface depends on the properties of the gas-liquid fluid dynamics, the interfacial area between phases, the concentration difference, the solubility of the gas, the temperature, the pressure, and the duration of contact. Equilibrium at the interface One of the most important factors is the solubility of the gas in the liquid. It depends on temperature, pressure and the characteristics of the substance itself. There is an equilibrium between the gas and the liquid phase. This gas-liquid equilibrium can be described with Henry's Law, with the assumption of an ideal liquid solution and that the perfect gas law can be applied: $$c(T, P) = \frac{p}{k_H(T)} \quad (2.68)$$ where T is the temperature, $k_H(T)$ is the Henry constant (dependent on each gas-liquid couple and on temperature; the solubility typically decreases as T increases) and p is the partial pressure of the gas in the liquid. In dilute conditions, Henry's law has a good capability to predict the gas-liquid equilibrium. In any multiphase, multicomponent system, the chemical species will always move towards equilibrium. The state of non-equilibrium is what causes the mass transfer, as it is the mass transfer that drives the system towards equilibrium. Unsteady mass balance at the interface The basic theory of mass transfer with absorption is the two-film theory. At the interface of the two phases, there are two films, a gas film and a liquid film, connected to each other. However, the interfacial mass transfer between the electrolyte and the gas bubble is an unsteady process. This non-stationary aspect is not taken into account by the film theory. Usually, to model species/mass transfer across an interface, a mass transfer coefficient k is used. The mass transfer rate through the bubble surface $\dot m$ in [kg·s⁻¹] is calculated as follows: $$\dot m = M_{H_2}\,k\,A_I\,(c_{H_2,I} - c_{H_2,sat}) = \rho\,k\,A_I\,(y_{H_2,I} - y_{H_2,sat}) \quad (2.69)$$ where $A_I$ is the interfacial area, $c_I$ the concentration of hydrogen around the gas bubble, $c_{sat}$ the saturation concentration of dissolved hydrogen, and $y_I$ and $y_{sat}$ the equivalent values converted into species mass fractions. In their simulation of the growth of a hydrogen bubble on the surface of an electrode, Liu et al. used this kind of relationship to model the mass transfer across the interface [START_REF] Liu | Numerical simulation of hydrogen bubble growth at an electrode surface[END_REF]. The difference $(c_I - c_{sat})$ represents the driving force of the interfacial species transfer. Estimate from the Sherwood number The species transfer coefficient $k_{H_2}$ can be obtained from the Sherwood number: $$Sh = \frac{2\,k_{H_2}\,R}{D_{H_2}} \quad (2.70)$$ where R is the bubble radius, and $D_{H_2}$ is the diffusion coefficient of the diluted $H_2$ in the bulk. The Sherwood number is used to describe the ratio of the overall species transfer to pure diffusive species transfer. Empirical or analytical relations that are used to evaluate the mass transfer through a spherical surface can be adopted to calculate the Sherwood number: $$Sh = 2 + 0.6\left(\frac{2R\,v_s\,\rho}{\mu}\right)^{1/2}\left(\frac{\mu}{\rho D}\right)^{1/3} \quad (2.71)$$ where $v_s$ is the liquid velocity around the sphere surface [START_REF] Kashchiev | Kinetics of the initial stage of isothermal gas phase formation[END_REF], and can be evaluated as $v_s = \frac{dR}{dt}$.
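The chain formed by equations (2.71) and (2.70) can be illustrated numerically. The sketch below computes the Sherwood number from the sphere correlation and deduces the interfacial transfer coefficient; all values are illustrative:

```python
# Sherwood number from the sphere correlation (2.71), then the
# interfacial transfer coefficient from (2.70). Values illustrative.
R = 100e-6      # m, bubble radius
v_s = 1e-3      # m/s, interface velocity ~ dR/dt
rho, mu = 1000.0, 1e-3   # liquid density (kg/m^3) and viscosity (Pa.s)
D_h2 = 5e-9     # m^2/s, diffusivity of dissolved H2

Re = 2.0 * R * v_s * rho / mu          # bubble Reynolds number
Sc = mu / (rho * D_h2)                 # Schmidt number
Sh = 2.0 + 0.6 * Re**0.5 * Sc**(1.0 / 3.0)
k_h2 = Sh * D_h2 / (2.0 * R)           # from eq. (2.70)
print(f"Re = {Re:.2f}, Sc = {Sc:.0f}, Sh = {Sh:.2f}, k_H2 = {k_h2:.2e} m/s")
```

For a slowly growing 100 µm bubble the convective correction is modest (Sh stays close to its diffusive limit of 2), which is consistent with diffusion-dominated transfer at these scales.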
By combining Eq. (2.43) and Eq. (2.40) we can obtain an analytical solution for the Sherwood number [START_REF] Bird | Transport Phenomena[END_REF]: $$Sh = 2\left(1 + \frac{R}{(\pi D t)^{0.5}}\right) \quad (2.72)$$ These correlations depend both on the shape of the bubble and on the operating conditions, which is why modelling mass transfer on the basis of Sh may lead to erroneous results [START_REF] Deising | Direct numerical simulation of mass transfer in bubbly flows[END_REF]. Penetration theory Principle The penetration theory was suggested in 1935 by Higbie, who was investigating whether or not transfer resistance existed at the interface when a pure gas was absorbed into a liquid [Higbie, 1935a]. As shown in Fig. 5, Higbie assumed that each surface element of the liquid was exposed to the gas for the time required for the gas bubble to pass through it. Vortices in the fluid bring an element of the fluid to the interface, where it is exposed to the second phase for a defined time interval, after which the element of fluid is mixed back into the mass of the fluid. Thus, an element of fluid whose initial composition matches that of the main fluid away from the interface is suddenly exposed to the gas phase. An unsteady molecular diffusion process then occurs within the fluid element. Equilibrium is assumed to be reached immediately at the interface, i.e. transfer across the interface is instantaneous. The element is then displaced or remixed after a fixed time interval, i.e. each element remains in contact with the gas for the same period of time. The existence of a velocity gradient within the liquid element is ignored and the fluid at all depths of the element is assumed to move at the same speed as the interface. Expression of the mass transfer coefficient Under these conditions the convection terms in the transport equation within the fluid element can be neglected, and the equation can be written as: $$\frac{\partial c}{\partial t} = D\,\frac{\partial^2 c}{\partial z^2} \quad (2.73)$$ where z is the distance from the interface in the normal direction, with the conditions: $$c(z, 0) = c_0, \quad z > 0,\ t = 0 \quad (2.74a)$$ $$c(z, t) = c_i, \quad z = 0,\ t > 0 \quad (2.74b)$$ where $c_0$ is the initial concentration of the fluid element, and $c_i$ is the equilibrium concentration at the interface. Solving this differential equation in the fluid element gives a value for the mass transfer coefficient: $$k(t) = \sqrt{\frac{D}{\pi t}} \quad (2.75)$$ The average mass transfer coefficient during a time interval corresponding to the contact time $t_c$ of the fluid element with the interface can be obtained by integrating the previous equation: $$k = \frac{1}{t_c}\int_0^{t_c} k(t)\,dt = 2\sqrt{\frac{D}{\pi t_c}} \quad (2.76)$$ The mass transfer coefficient is proportional to the square root of the diffusivity. There are important differences in the implications of the theories when one must consider the impact of contaminants (surfactants) on gas transfer [START_REF] Painmanakul | Effect of surfactants on liquid-side mass transfer coefficients[END_REF].
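Equations (2.75) and (2.76) are illustrated below; the diffusivity and contact time are illustrative values:

```python
import math

# Penetration-theory transfer coefficients, eqs. (2.75)-(2.76):
# instantaneous k(t) and its average over a contact time t_c.
D = 5e-9    # m^2/s, illustrative diffusivity

def k_inst(t: float) -> float:
    return math.sqrt(D / (math.pi * t))

def k_avg(t_c: float) -> float:
    return 2.0 * math.sqrt(D / (math.pi * t_c))

t_c = 0.1   # s, illustrative contact time
print(f"k(t_c) = {k_inst(t_c):.2e} m/s, average = {k_avg(t_c):.2e} m/s")
```

Note that the average coefficient over [0, t_c] is exactly twice the instantaneous value at $t_c$, as the integration shows.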
5 Microfluidic phenomena in electrolysis
Theory -surface tension
Surface tension, an expression of the bonds between molecules
The molecules of condensed phases have strong attractive interactions between them. The molecules located at the interface have fewer neighbours and therefore form fewer interactions, which creates an energy deficit. In order to decrease this energy deficit, a force pulls tangentially on the interface to reduce its area. The interfaces of condensed phases are therefore subjected to a tension, called surface tension. We note γ the value of this tension, which is expressed in [N · m⁻¹]. It also corresponds to the energy deficit per unit area and can therefore also be expressed in [J · m⁻²]. In a liquid, the molecules are constantly changing neighbours due to thermal agitation. The interaction energy between molecules is therefore of the order of the thermal energy k_B T, where k_B is the Boltzmann constant and T the temperature. The order of magnitude of the surface energy of a liquid can be estimated by relating this thermal energy to the average surface area a² occupied by a molecule:
γ ∼ k_B T / a² ≈ 25 mN · m⁻¹
where a is the average distance between molecules, about 4 Å. This value of 25 mN · m⁻¹ is the right order of magnitude for ethanol, silicone oils or alkanes. Due to their strong polarity, water molecules bind to each other by interactions stronger than van der Waals forces, called hydrogen bonds. The surface energy of water is therefore higher, about 73 mN/m at 20 °C.
Pressure jump
By writing the surface tension balance at the edges of a curved interface element, Young deduced that it results in a pressure jump across the interface [START_REF] Young | III. An essay on the cohesion of fluids[END_REF]. This pressure jump is called the Laplace pressure in honour of a contemporary of Young, Pierre-Simon de Laplace, who provided a more precise formalism:
∆p = p_inside − p_outside = γκ (2.77)
where κ [m⁻¹] is the curvature of the interface. Depending on the sign of the curvature, the Laplace pressure jump can be positive or negative; an order-of-magnitude illustration is given at the end of this subsection.
Marangoni stress
The curvature of a plane interface is zero, so the Laplace pressure is also zero. However, the surface tension balance is not necessarily zero. The surface energy of a liquid depends on its composition and temperature, among other things. Thus, variations in composition or temperature along the surface of a liquid imply variations in surface tension. Areas of higher tension pull harder, resulting in a tangential stress at the interface, called the Marangoni stress. This stress introduces a tangential stress jump at the interface. This jump is compensated by the viscous stresses of the two fluids. The two fluids on either side of the interface are therefore entrained by viscosity, generating Marangoni flows [START_REF] Scriven | The marangoni effects[END_REF]. Thus, in the general case of a curved and heterogeneous interface, the surface tension balance on an elementary surface can be broken down into two terms: a normal stress, the Laplace pressure, and a tangential stress, the Marangoni stress. Due to the surface tension gradient, the fluid near the interface moves from areas where the surface tension is low to areas where it is higher. In electrolysis, the surface tension varies as a function of:
• the temperature, in which case the Marangoni effect is referred to as the thermocapillary effect;
• the voltage, in which case it is referred to as the electrocapillary effect;
• the concentration of surfactants, in which case it is referred to as the solutocapillary effect.
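To fix orders of magnitude for the normal stress introduced above, the following sketch evaluates the Laplace pressure jump of Eq. (2.77) for a spherical bubble, where κ = 2/R; the radii are indicative of electrogenerated bubbles.

```python
# Laplace pressure jump, Eq. (2.77), for a spherical bubble (kappa = 2/R).
gamma = 72e-3   # surface tension of water at ~20 C [N/m]

for R in (1e-6, 10e-6, 100e-6):      # bubble radius [m]
    dp = gamma * (2.0 / R)           # Delta p = gamma * kappa [Pa]
    print(f"R = {R*1e6:5.0f} um -> Laplace pressure = {dp/1e3:8.1f} kPa")
```

The micrometric bubbles of interest here therefore sustain internal overpressures of tens to hundreds of kilopascals, which is why surface tension dominates their early dynamics.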
Marangoni motion near electrolytic gas bubbles
Observations
In his review, Lubetkin reports the existence of bubble-related phenomena that cannot be explained by classical theories [START_REF] Lubetkin | The motion of electrolytic gas bubbles near electrodes[END_REF]. He suspects the existence of a Marangoni effect at the interface between the electrolyte and the bubble. Based on this hypothesis, he provides an explanation for these phenomena, which include:
• a phenomenon of radial adhesion, whereby a larger bubble attracts other smaller bubbles;
• bubbles following oscillating trajectories along a vertical electrode [START_REF] Janssen | The effect of electrolytically evolved gas bubbles on the thickness of the diffusion layer[END_REF];
• oscillations of bubbles attached to the electrode.
The Marangoni effect is a transient effect. Lubetkin assumes that these bubble displacements or oscillations are due to a force caused by the soluto-Marangoni effect. This force varies with the concentration of dissolved species. As the bubbles absorb the dissolved gases, the force varies. As a result, the bubbles leave a low-concentration area after absorption and move towards higher-concentration areas, which creates an oscillatory movement if the bubble does not coalesce. According to his reasoning, the soluto-Marangoni effect is stronger than the thermo-Marangoni effect created by a temperature gradient, but it has a shorter range than the thermocapillary effect. A more significant phenomenon was reported by Lubetkin [START_REF] Lubetkin | The motion of electrolytic gas bubbles near electrodes[END_REF]. When two bubbles of similar size sit next to each other on the horizontal surface of the electrode and they meet and merge, the centre of the new bubble is higher than the position of either of the two original smaller bubbles. An impulse normal to the surface of the electrode is produced and may in some cases force the bubble to leave the electrode. What is interesting in this case is that the bubble subsequently returns to the electrode. This phenomenon was first observed by Westerheide and Westwater in 1964, who hypothesised that the return could be due to surface tension gradients or electrostatic interactions [START_REF] Westerheide | Isothermal growth of hydrogen bubbles during electrolysis[END_REF]. Lubetkin's reasoning rejects the impact of drag and electrostatic forces, leaving only Marangoni forces to explain the phenomenon. Lubetkin hypothesises that this phenomenon was not suspected earlier because it suffers from interface masking due to surfactants. Using Particle Tracking Velocimetry (PTV), Yang et al. were able to observe convective vortices around the bubble; an example of a result is shown in Fig. 6. From these results, Yang et al. deduced that the convective motion of the electrolyte was caused by the Marangoni effect.
Origin of the surface tension gradient
However, it is still unclear whether this Marangoni effect is due to a variation in temperature, in the concentration of dissolved species, in potential, or to pollution by surfactants. Lubetkin originally seemed to favour the soluto-Marangoni effect. Yang et al. assumed that the solutal and thermal effects were of the same order of magnitude. In their work, Meulenbroek et al. show that Marangoni convection in the vicinity of electrochemically generated bubbles is the result of thermo- and solutocapillary effects at the bubble interface [START_REF] Meulenbroek | Competing Marangoni effects form a stagnant cap on the interface of a hydrogen bubble attached to a microelectrode[END_REF]. Unlike Lubetkin, they do not attribute this Marangoni effect to dissolved species but to other surfactants present and adsorbed on the liquid-gas interface. Meulenbroek et al. include the electrocapillary effect in their study.
This appears to be of minor importance compared to the other effects.
The Marangoni force
The Marangoni flow around a growing bubble at an electrode surface is known to delay the detachment of bubbles. By examining the motion of a gas bubble at a distance z from a heated surface, and by solving the linearised Navier-Stokes equation with appropriate boundary conditions, it was shown that the thermal Marangoni effect depends linearly on the temperature gradient [START_REF] Young | The motion of bubbles in a vertical temperature gradient[END_REF][START_REF] Hardy | The motion of bubbles in a vertical temperature gradient[END_REF][START_REF] Morick | Migration of Air Bubbles in Silicone Oil under the Action of Buoyancy and Thermocapillarity[END_REF][START_REF] Lubetkin | The motion of electrolytic gas bubbles near electrodes[END_REF]. The shear stress on the interface induced by the thermal Marangoni effect is:
τ_MT = (dγ/dT)(dT/dz) (2.78)
The thermal Marangoni force F_MT acting on a bubble attached to an electrode can be approximated by integrating the shear stress over the entire spherical surface of the bubble, giving the following expression [START_REF] Lubetkin | The motion of electrolytic gas bubbles near electrodes[END_REF][START_REF] Chen | Dynamics of single bubble departure from TiO2 nanorod-array photoelectrode[END_REF]:
F_MT = (dγ/dT) ∆T π R (2.79)
where ∆T represents the temperature difference between the surface of the electrode and the liquid bulk. By analogy, it is possible to obtain a similar expression for the solutocapillary effect:
F_MC = (dγ/dc) ∆c π R (2.80)
Assuming that the two effects are independent of each other, it is possible to obtain a general expression for this force:
F_M = F_MT + F_MC = (dγ/dT) ∆T π R + (dγ/dc) ∆c π R (2.81)
Both dγ/dT and dγ/dc are negative, so the combined Marangoni force pushes the bubble towards the electrode surface.
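A hedged order-of-magnitude evaluation of Eq. (2.81) is sketched below. The thermal coefficient is the tabulated value for water, but ∆T, ∆c and dγ/dc are assumptions chosen for illustration; as discussed in the text, the solutal coefficient for dissolved gases is in fact poorly known.

```python
import math

R = 50e-6            # bubble radius [m] (assumed)
dgamma_dT = -0.16e-3 # dgamma/dT for water [N/(m.K)]
dT = 2.0             # electrode/bulk temperature difference [K] (assumed)
dgamma_dc = -1.0e-6  # dgamma/dc [N/m per mol/m^3] (assumed)
dc = 100.0           # dissolved-gas concentration difference [mol/m^3] (assumed)

F_MT = dgamma_dT * dT * math.pi * R   # thermal term of Eq. (2.81)
F_MC = dgamma_dc * dc * math.pi * R   # solutal term of Eq. (2.81)
print(f"F_MT = {F_MT:.2e} N, F_MC = {F_MC:.2e} N, total = {F_MT + F_MC:.2e} N")
# Both coefficients being negative, the resulting force points toward the electrode.
```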
Stagnant cap model
Meulenbroek et al. used a stagnant cap model that describes the transport of the contaminants along the interface [START_REF] Levich | Surface-Tension-Driven Phenomena[END_REF][START_REF] Griffith | The effect of surfactants on the terminal velocity of drops and bubbles[END_REF][START_REF] Sadhal | Stokes flow past bubbles and drops partially coated with thin films. Part 1. Stagnant cap of surfactant film -exact solution[END_REF][START_REF] He | The size of stagnant caps of bulk soluble surfactant on the interfaces of translating fluid droplets[END_REF][START_REF] Hosokawa | Evaluation of adsorption of surfactant at a moving interface of a single spherical drop[END_REF][START_REF] Shmyrov | Phase transitions on partially contaminated surface under the influence of thermocapillary flow[END_REF]. The competition of the Marangoni effects results in the formation of a stagnant cap at the top of the bubble. The surfactants cover the top of the bubble and prevent the Marangoni effect from taking place there, while the bottom part of the bubble interface is mobile and drives a Marangoni flow. The interface is stiffened by the surfactants, which suppresses weaker surface tension variations such as those caused by a temperature or dissolved species gradient.
Effect of electrode size
A particularity of the previous studies is that they were conducted on microelectrodes. Hossain et al. performed several numerical simulations of the thermocapillary effect around electrogenerated bubbles on electrodes of varying size [START_REF] Hosokawa | Evaluation of adsorption of surfactant at a moving interface of a single spherical drop[END_REF]. Depending on the size of the electrode, the hot spot described earlier by Massing et al. moves. As the size of the electrode changes, the bubble covers the electrode to a greater or lesser extent, which changes the current density profile. The Joule effect is thus modified, which moves the hot spot. In the case of microelectrodes, the hot spot is close to the electrode. As the surface area of the electrode increases, the hot spot moves upwards. This reveals a double vortex structure of the thermocapillary flow, which had not been taken into account before because the lower vortex is small at the microelectrode. When several bubbles develop simultaneously, the electric current must flow through the inter-bubble space. This causes an increase in the maximum current density near the inter-bubble equatorial region. The temperature hotspot is then located near the equator and an almost symmetrical double vortex structure is generated near the bubble interface.
Influence of surface tension on electrogenerated bubbles
Surface tension as a parameter influencing nucleation
As a general rule, surface tension is an essential parameter for understanding the formation of electrogenerated bubbles. Surface tension is a function of temperature, concentration of different surfactants, pressure and voltage. These empirically determined relationships are assumed to be linear and independent of each other [START_REF] Lubetkin | The motion of electrolytic gas bubbles near electrodes[END_REF][START_REF] Massoudi | Effect of pressure on the surface tension of water. Adsorption of low molecular weight gases on water at 25.deg[END_REF][START_REF] Weissenborn | Surface Tension of Aqueous Solutions of Electrolytes: Relationship with Ion Hydration, Oxygen Solubility, and Bubble Coalescence[END_REF][START_REF] Zhang | Evaluating the Behavior of Electrolytic Gas Bubbles and Their Effect on the Cell Voltage in Alkaline Water Electrolysis[END_REF]. Starting from Lubetkin's observation that an increase in the concentration of gases dissolved in water (including dihydrogen and oxygen) lowers the surface tension, the nucleation phenomenon can be interpreted as follows:
• as the concentration of dissolved gases increases, the surface tension of the liquid-gas interface decreases, which reduces the work required in Eq. (2.34) and allows nucleation;
• after the nucleation event, the concentration of dissolved gases decreases rapidly and the surface tension increases.
This brief reasoning needs to be validated, but it shows how the evolution of the surface tension conditions nucleation.
Surface tension as a parameter influencing growth
Similarly, as shown in Fig. 7, the evolution of the surface tension influences the growth of the bubble. Surface tension is a function of temperature and concentration, among other things. As these two quantities vary at the interface, a variation of the surface tension is created. This variation has two plausible consequences. Firstly, it will generate a Marangoni effect which will modify the temperature and concentration gradients. Secondly, it will modify the contact angle, which will change the microconvection phenomenon generated by the spreading of the bubble.
These two microconvection phenomena will affect the distribution of dissolved species around the bubble, thus modifying its growth. In the studies conducted on the nucleation, growth, and detachment of electrogenerated bubbles, only the surface tension of the gas-liquid interface is taken into account. The resultant forces due to the surface tensions of the three interfaces are estimated from a measurement of the contact angle. The problem of contact angle hysteresis is rarely addressed [START_REF] Brussieux | Controlled electrochemical gas bubble release from electrodes entirely and partially covered with hydrophobic materials[END_REF], which prevents a distinction from being made between the wettability of the material used and the roughness of the surface. With regard to the impact of surface tension on bubble formation, its use in models of the formation of electrogenerated bubbles needs to be specified. As described above, the dynamic contact angle cannot be predicted, and therefore must be considered as an input parameter of the model. The contact angle depends on the wettability of the electrode and on the surface tension of the hydrogen-electrolyte interface. The measurement of the contact angle requires particular attention and great rigour in the setting up of the experimental device. Any form of pollution that could affect the measurements must be avoided. The surface of the electrode must be clean and free of any asperity that could affect the contact line [De Gennes and Brochard-Wyart, 2015]. The contact angle could be defined as the angle between the tangent to the interface and the solid surface at the triple contact point. However, this measurement depends on the magnification with which the contact line is observed. Yang et al. defined the contact angle from geometrical parameters of the bubble, as shown in Fig. 8. They measured the contact angle for one configuration in which the platinum electrode is embedded in a hydrophilic surface (glass) and another in which it is embedded in a hydrophobic surface (epoxy). The results are shown in Fig. 9. As shown by the study conducted by Sakuma et al., the wettability of the surface influences the effectiveness of the microconvection effect: a bubble on a hydrophobic surface spreads more easily, thus modifying the dynamics of growth. For electrogenerated oxygen bubbles, Sakuma et al. observed initial sizes in the range of 10-30 µm depending on whether the electrode surface is hydrophobic or hydrophilic. During these first moments, the degree of supersaturation of the liquid closest to the bubble varies greatly. The time coefficient does not vary from one electrode to another (hydrophilic or hydrophobic), which means that in Eq. (2.45) β depends on the wettability of the surface. Consequently, the contact angle is another parameter to consider when studying bubble growth; a geometrical sketch of its estimation is given below.
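In the spirit of the geometrical definition used by Yang et al. (Fig. 8), the following sketch estimates a contact angle from two quantities measurable on images, under the assumption (ours, for illustration) that the bubble is a spherical cap of apex height h and foot radius a.

```python
import math

def contact_angle_deg(h, a):
    """Contact angle (measured through the gas) of a spherical cap:
    theta = 2 * atan(h / a), with h the cap height and a the foot radius."""
    return math.degrees(2.0 * math.atan2(h, a))

print(contact_angle_deg(h=40e-6, a=25e-6))   # taller than a hemisphere -> ~116 deg
print(contact_angle_deg(h=10e-6, a=25e-6))   # shallow cap -> ~44 deg
```

The spherical-cap assumption is only valid while the bubble is small enough for surface tension to dominate gravity, which is the case at the microfluidic scale considered here.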
6 How to improve electrodes?
In the theoretical overview of the nucleation, growth and detachment of a bubble given in the previous section, it should be noted that, because of the coupling between these three steps, they must be studied together in order to be able to predict the detachment of the bubble. The conditions required to make an electrode surface more favourable to nucleation are those that, on the contrary, make it more difficult to detach the bubbles once formed. Wilt gives several relationships to determine the nucleation rate for different surface geometries (Fig. 11) [START_REF] Wilt | Nucleation rates and bubble stability in water-carbon dioxide solutions[END_REF]. These relationships were established within the framework of the classical theory of heterogeneous nucleation. The discrepancy between experiments and the results provided by the classical theory discourages its use as it stands. However, it makes it clear that modifying the geometry modifies the contact surface of the bubble with the electrode and the ratio of the radius of curvature of the bubble to its volume, hence the pressure difference, and can thus make the bubble easier to nucleate and to detach. By modifying the contact surface, we modify the contact pressure force F_CP and the surface tension force F_S, which are the main forces involved in the force balance [START_REF] Chen | Dynamics of single bubble departure from TiO2 nanorod-array photoelectrode[END_REF]. Using these two levers to facilitate detachment (wettability and design of the electrode), Lubetkin imagines an electrode design that allows rapid nucleation and detachment of dihydrogen bubbles from the electrode surface, see Fig. 12, whose caption is reproduced here [START_REF] Lubetkin | The motion of electrolytic gas bubbles near electrodes[END_REF]: "A hypothetical composite conical or crack-like nucleation site. The region A is hydrophobic, possibly as a result of capillary condensation of a volatile organic material. This deposition mechanism would restrict its presence to the narrowest region of the pore. Nucleation is easy in this environment, with its high contact angle. As the nascent bubble grows past the hydrophobic region, and into the hydrophilic region B, a small remnant may be detached, remaining in the hydrophobic region. This residual gas would promote the instantaneous growth of the next bubble. As it grows, it will be shot out of the conical site, its zero contact angle ensuring the absence of friction. The rapidity of its departure might cause sufficient stirring in the vicinity C, to disrupt the Nernst layer, thus eliminating the concentration Marangoni effect, which would otherwise tend to hold the bubble near the electrode until it was larger. Alternatively, the rapid growth of the first bubble during its rise, and the immediate following of others might cause depletion of the Nernst layer in a narrow bubble chimney, again destroying the concentration gradient."
However, Lubetkin suggests that another type of design could be more efficient and that this sophistication is not necessary to achieve what he calls the phenomenon of "rapid-fire emission". This phenomenon has been observed by [START_REF] Weissenborn | Surface Tension of Aqueous Solutions of Electrolytes: Relationship with Ion Hydration, Oxygen Solubility, and Bubble Coalescence[END_REF]. He assumes that the rapid departure of a first bubble could cause sufficient disruption and reduce the concentration gradient, thus minimizing the Marangoni effect. Most current nucleation models do not take into account the contribution of the Marangoni effect or the impact of surface wettability, and often focus on only one of the three stages, so they are not able to predict bubble detachment. A holistic approach is required. Most of the studies on the subject are experimental and require the use of high-speed cameras and microelectrodes to observe the vortex flow of the fluid around the bubble, which is characteristic of the Marangoni effect. On the other hand, there are no numerical studies that model the development of an electrogenerated bubble and include the Marangoni effect on the liquid-gas interface of the bubble.
Integration into an inverse resolution problem
From the previous sections several questions emerge:
• How can the growth of a bubble with the Marangoni effect be understood? Hypotheses about microconvection currents have been formulated, but there is no direct way to measure them. Based on this observation, other methods should be used to access information on bubble development that is not available through experiments. The presence of a Marangoni effect invalidates the hypothesis of transport of dissolved species by pure diffusion from the electrode to the bubble.
• What is the origin of this Marangoni effect? Through experiments it is possible to measure the diameter of the bubble, or the velocity of the fluid around the bubble.
However, it is not possible to access the concentration gradient of dissolved species or of temperature, or to identify the presence of surfactants. From this point of view it is impossible to distinguish between a Marangoni effect of thermal, solutal or surfactant origin.
• The Marangoni effect delays the detachment of the bubbles, but to what extent? While the presence of a Marangoni effect has been proven, it is currently impossible to measure its impact on bubble growth and bubble detachment.
The diameter of the bubble at the time of detachment is an essential parameter for predicting the efficiency of the electrolytic process. However, the uncertainties on the phenomena influencing the evolution of this diameter cannot be resolved with the current state of theory and the available experimental techniques. One of the solutions proposed in this thesis to answer the questions mentioned above is to pose an inverse problem. The objective here is to briefly present the inverse methodologies and how they can be used in the work related to this thesis. Inverse problems are ill-posed problems: the calculated "solutions" are often quite different from the real solution. Nevertheless, the need to solve inverse problems keeps growing. Usually, in order to analyse the behaviour of a system as well as possible, it is necessary to build models to represent reality. These are determined from equations derived from physical laws and allow the behaviour of a system to be predicted under the effect of a known stimulus. A characteristic of these models is their causality: subsequent conditions depend on previous ones. When the input data and parameters are assumed to be known, solving the modelling equations predicts the output of the model. To validate a model, it is usual to compare the experimental results y_mes with the modelling results y_mod. If both fit, the assumptions are considered validated. Inverse problems are the opposite of these direct problems. They are non-causal problems. They describe the situation in which one tries to determine the causes of a phenomenon from experimental observations of its effects. In other words, the objective may be to identify one or more of the elements that define the model.
The principle of an inverse methodology is to test several sets of parameters x in a model A to find those for which the results obtained by the model, A(x), best fit the experimental results y_mes:
A(x) ≈ y_mes (2.82)
Without going into the details of an inverse methodology, it is necessary to discuss the measurable quantities in the case of an electrogenerated bubble on a microelectrode, and the unknown or supposedly known parameters. As described above, the experiments of Yang et al. allowed the observation of the temperature and velocity fields around the bubble [START_REF] Yang | Marangoni convection at electrogenerated hydrogen bubbles[END_REF]. The temperature field is one of the quantities that can be used to determine the solutocapillary effect. The velocity field in a stagnant electrolyte is the result of the Marangoni effect. The other measurable quantity is the evolution of the bubble radius. These three quantities are the only ones that can be measured. The local concentration and the current density are determined by the model. One of the unknown parameters of the model is the presence of surfactants. This parameter is a priori very sensitive, as it influences both the interfacial mass transfer and the surface tension gradient by inhibiting a large part of the bubble interface. As described above, there is no certainty about the value of ∂γ/∂T, ∂γ/∂Φ, or ∂γ/∂c_S, the nature of the surfactant being unknown. The uncertainty in these parameters makes it impossible to determine the surface tension gradient correctly. Usually, in the framework of an inverse methodology, it is advisable to carry out a sensitivity study on each of the parameters taken independently, as a small error on one parameter can strongly modify the output values. The diagram in Fig. 13 provides a non-exhaustive inventory of the sources of error that can occur in the process. In spirit, beyond being an optimisation problem, an inverse methodology is about estimating the sources of error as well as possible, together with their influence on the result. The aim of this section on inverse methods is to illustrate that one of the objectives of this thesis has been to build some of the necessary components of the inverse problem.
8 Overview of the chapter and objectives of the thesis
It has been shown in the previous sections that the major lever for improving the efficiency of the electrolysis process is to limit the coverage of bubbles on the electrode surface. For this it is necessary to accelerate the departure of the bubbles from the electrode. Consequently, a literature review on bubble nucleation, growth, and detachment was conducted in order to evaluate the phenomena involved at the microfluidic scale in the production of electrogenerated bubbles. It was found that the hypothesis of a Marangoni effect should be taken into account in order to understand the development of bubbles and thus be able to facilitate their detachment. However, several questions remain. It is impossible to determine the origin of the Marangoni effect. Is it solutal, thermal or due to surfactants? Furthermore, how can we understand the growth of a bubble with the Marangoni effect? In the course of the literature review carried out, it became clear that the experimental means are not sufficient to resolve these uncertainties. This is why the research track proposed in this thesis is to use inverse methods. The work presented in this thesis is part of the inverse problem approach; a minimal sketch of the parameter-fitting loop underlying Eq. (2.82) is given below.
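In this sketch the "model" A is a hypothetical stand-in growth law R(t) = β t^n (names and values are ours, for illustration); in the thesis, A would be the bubble-growth simulation and x could contain, for instance, the surface tension gradient coefficients.

```python
import numpy as np
from scipy.optimize import least_squares

def model_A(x, t):
    """Hypothetical direct model: bubble radius R(t) = beta * t**n."""
    beta, n = x
    return beta * t**n

t = np.linspace(0.01, 1.0, 50)                           # observation times [s]
y_mes = 1e-4 * t**0.5 + 1e-6 * np.random.randn(t.size)   # synthetic "measurements"

# Inverse step, Eq. (2.82): adjust x so that A(x) best fits y_mes.
res = least_squares(lambda x: model_A(x, t) - y_mes, x0=[1e-4, 0.4])
print("estimated parameters:", res.x)
```

Even in this toy setting, the sensitivity of the residual to each parameter can be examined, which is the sensitivity study recommended above.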
Therefore, the objectives of this thesis are the following:
• To provide a mathematical or numerical model

3 Numerical modeling
To model an electrogenerated two-phase fluid at the microfluidic scale, it is necessary to take into account the Marangoni effect and the mass transfer at the interface, and to include the problem of the contact lines with the electrode. For this, multiphase microfluidic modelling ingredients are needed, in particular to model the phases, the mass transfer and the Marangoni effect at the interfaces. A review on the subject was done by [START_REF] Wörner | Numerical modeling of multiphase flows in microfluidics and micro process engineering: a review of methods and applications[END_REF]. There are several types of methods for modelling a multiphase fluid. They all have their advantages and disadvantages, and some are more suitable for microfluidic flows. The first objective of this chapter is to describe these methods and to evaluate their suitability for the problem at hand. Then, the mathematical basis for the development of the numerical model is given. Finally, the numerical model created and the reasons for the choices made during its design will be presented.
Toward a direct numerical simulation of the phenomenon
In order to explain the reasons that led to the choice of the specific numerical method, it is appropriate first to give a brief description of the methods available for modelling a multiphase system and then to state the requirements of the model.
Overview of the numerical methodology
As stated in the introduction to this chapter, the model must capture the Marangoni effect, the interfacial mass transfer and the contact lines, which constrains the choice among the available multiphase methods. The aim here is to describe these methods and to assess their suitability for the problem at hand.
Modelling an interface
Reproducing and following an interface with complex shape and dynamics, which can develop large deformations, singularities and topological changes, is a numerical challenge. The interface to be described (gas-liquid) has material properties, i.e. it changes the flow behaviour, as is the case with surface tension. There is a zone of discontinuity for the pressure and the fluid properties between the two phases. If the interface has material properties, there may be diffusion of these properties along the interface, or transfer between phases. Several difficulties must be overcome when dealing with the numerical simulation of interfaces:
• the domain of each phase is unknown;
• the deformation of the interface is governed by a discontinuous pressure jump condition;
• the very shape of the interface has a direct influence on the action of surface tension;
• in the case of adsorption-desorption of dissolved species at the interface, or of coalescence, we are at the limit of validity of the macroscopic description.
Interface characteristics
Tryggvason et al. point out that in order to simulate gas-liquid multiphase flows it is necessary to make several hypotheses [START_REF] Tryggvason | Direct Numerical Simulations of Gas-Liquid Multiphase Flows[END_REF]:
• the length scale of the problem is much larger than the mean free path of the molecules, so the continuum hypothesis is valid and the flow equations take their classical form;
• the interface separating two or more fluids, which in fact has a finite thickness of the order of a nanometre and constitutes a transition zone for the properties of the fluid, is assumed to be sharp, with negligible thickness;
• the intermolecular forces, which determine the dynamics of the interface, are modelled on a continuum scale as a force localised at the interface, proportional to a surface tension coefficient; this coefficient represents the rate of change of the excess free surface energy produced by a unit increase in the interface area resulting from the deformation.
Explicit or implicit representation of the interface
An overview of the methods for simulating a multiphase fluid is shown in Fig. 14 [START_REF] Wörner | Numerical modeling of multiphase flows in microfluidics and micro process engineering: a review of methods and applications[END_REF]. There are two main classes of interface treatment methods: Lagrangian methods and Eulerian methods (volume tracking). The choice of an efficient and robust method to take the interface into account depends on the physical problem to be studied, as each method has its strengths and weaknesses. The difference between these two classes of methods lies in the representation of the interface: explicit or implicit. In Lagrangian methods, the interface is explicitly tracked: the mesh adapts over time so as to coincide with the interface. Eulerian methods, by contrast, use a fixed mesh, with an interface that is not explicitly tracked but reconstructed from a phase indicator function, or colour function. Lagrangian methods maintain the interface as a discontinuity and explicitly track its evolution. No modelling is required to define the interface or its effects on the flow. Moreover, boundary conditions can be applied exactly at the interface. However, these methods require remeshing at each time step. In case of strong distortions of the interface, the mesh may be strongly altered and no longer uniformly distributed, which may degrade the accuracy of the resolution methods. The main disadvantage of these methods lies in their difficulty in handling topological changes, in particular break-up and coalescence. Eulerian methods, also called fixed-grid methods, front-capturing methods or volume-tracking methods, require modelling, with additional equations needed to obtain information on the location of phases and discontinuities. Indeed, unlike the Lagrangian methods, the interface is not explicitly tracked in the Eulerian methods. To locate the different phases and impose the interfacial conditions, a phase indicator function, or colour function, is introduced. This phase indicator function is defined over the whole computational domain and allows the different phases to be located. At each time step, the interface can be located and reconstructed from this phase indicator function. The phase indicator function allows the boundary conditions to appear in the flow equations, but these boundary conditions are altered: in these methods the interface is diffuse and of non-zero thickness.
Therefore the information on the interface is smoothed, leading to a spreading and dispersion of this information. Eulerian methods have the advantage of not needing remeshing procedures. In addition, these methods automatically take into account topology changes such as coalescence and fragmentation. However, due to the smoothing of the interface, the physical phenomena related to the interface are not described precisely, as the interface is not explicitly represented.
Eulerian methods
Eulerian methods include Volume Of Fluid (VOF) and Level-Set (LS). These are the most common methods used in CFD software (FLUENT, COMSOL, OpenFOAM). VOF methods naturally ensure the conservation of volume and mass in incompressible flows and, with some improvements, in compressible flows. However, the description of the interface is not precise, which makes it difficult to evaluate the curvature of the interface and to impose boundary conditions. The level-set method, like the VOF methods, automatically takes topological changes into account. It describes the interface implicitly using a signed distance function, which gives a more precise definition of the interface than in standard VOF methods. But the signed distance function has to be reinitialised frequently, which alters mass conservation. Finally, the Lagrangian methods are very precise, with a thickness-free interface and boundary conditions that are imposed exactly on the interface. However, changes in topology and highly deformed interfaces are not easily handled by this type of method, because the remeshing procedure that preserves an adequate mesh size can become very complicated in this case.
Requirements of the simulation
Towards a holistic approach
Numerical models simulating the Marangoni effect along the wall of an electrogenerated bubble have already been presented [START_REF] Massing | Thermocapillary convection during hydrogen evolution at microelectrodes[END_REF]. But these consider the interface as fixed and assume that the bubble does not grow, considering only a small time interval during its development. Liu et al. have modelled the growth and detachment of an electrogenerated bubble [START_REF] Liu | Numerical simulation of hydrogen bubble growth at an electrode surface[END_REF], but they point out that their numerical results do not match the experimental results, partly because the Marangoni effect is not modelled. The real added value of a numerical model is to combine bubble growth with the Marangoni effect.
Dealing with numerical errors
The model must be reliable and must, among other things, limit problems related to mass conservation and spurious currents. Spurious currents are proportional to capillary effects; in a simulation where capillary effects are at the origin of the main forces governing the evolution of the system, they compete directly with the currents caused by the Marangoni effect, thus distorting the results. It is therefore important, if they cannot be eliminated completely, to at least know their magnitude and to reduce their effects. The spurious currents depend, among other things, on the surface tension, the viscosity, and the discretization techniques used. They increase proportionally with decreasing capillary number Ca = µ v_0 / γ [START_REF] Ren | Heterogeneous multiscale method for the modeling of complex fluids and micro-fluidics[END_REF][START_REF] Herrmann | A balanced force refined level set grid method for two-phase flows on unstructured flow solver grids[END_REF][START_REF] Dupont | Numerical simulation of static and sliding drop with contact angle hysteresis[END_REF].
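As a quick check that the flows of interest are indeed in this capillary-dominated regime, the capillary number can be evaluated for magnitudes representative of the micro-convection reported around electrogenerated bubbles; the values below are illustrative, not measurements.

```python
mu = 1.0e-3     # electrolyte viscosity [Pa.s]
v0 = 1.0e-3     # characteristic Marangoni velocity [m/s] (assumed)
gamma = 72e-3   # surface tension [N/m]

Ca = mu * v0 / gamma
print(f"Ca = {Ca:.1e}")   # ~1e-5 << 1: spurious currents are a genuine concern
```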
In the simulations we want to run, the capillary effects are predominant. The problem is that most of the available CFD software uses models close to the Brackbill model [START_REF] Brackbill | A continuum method for modeling surface tension[END_REF].
Choice of a numerical method
Choosing an appropriate numerical method turned out to be complex because of the difficulties in modelling an interface at the microfluidic scale. There is a multitude of choices and several criteria to consider. While it is possible to express the qualities required for a simulation in terms of equations, the choice of a numerical method calls for another type of knowledge. What is needed is a numerical method capable of "reasonably" simulating the problems we are interested in. It is not simply a question of modelling an interface, but of being able to simulate the Marangoni effect with mass transfer while managing the problems related to contact angles. The growth and detachment of a bubble from the wall of an electrode involves large deformations and a change of topology, which excludes moving mesh methods. The final choice was a VOF method. The advantage of this method is that it limits errors due to volume conservation. Even if the curvature calculations are not very accurate, there are ways to improve them and thus limit the spurious currents. Combining the methods described by Guo et al. with those of Seric et al. provides a good mass conservation method (VOF) together with a means of accurately calculating the curvature (height function), while being able to model the growth of a bubble with the Marangoni effect [START_REF] Guo | Implementation of a height function method to alleviate spurious currents in CFD modelling of annular flow in microchannels[END_REF][START_REF] Seric | Direct numerical simulation of variable surface tension flows using a Volume-of-Fluid method[END_REF]. This is a first step towards a holistic model.
Mathematical model
In order to understand the choices of the numerical methodology used, it is necessary to recall the foundations of the mathematical model. In a mono-fluid system, the Navier-Stokes equations and boundary conditions describe the whole flow; in a two-phase liquid-gas system, however, the principle of conservation of mass and momentum must be reapplied in order to describe what happens at the interface.
Mathematical operators
Before the conservation principle can be applied to the two-phase system, a jump operator and a surface gradient operator must first be defined. The Reynolds transport theorem must also be extended to systems with mass transfer at the interface.
Jump operator
The jump operator, as shown in Fig. 15, describes the passage of a quantity Q through an interface between two distinct volumes. Assuming that Q has a limit on each side of the interface I, we define, for any point x belonging to I, the jump relation for Q by:
[[Q]] = lim_{h→0⁺} [ Q_l(x + h n) − Q_g(x − h n) ] (3.1)
where n is the vector normal to the interface. The direction of the vector n is chosen here, arbitrarily, from gas to liquid.
Surface gradient operator
The surface gradient operator ∇_s Q is defined as the projection of the gradient ∇Q onto the surface:
∇_s Q = ∇Q − n(n • ∇Q) (3.2)
The diagram in Fig. 16 gives a geometric representation of this operator: the gradient ∇Q is projected onto the tangent plane at the interface.
Two-phase particle
Fig. 17 shows a fluid particle consisting of a liquid phase and a gas phase (e.g. liquid l, gas g). The two phases are separated by an interface I. In order to derive the boundary conditions at the interface, the Reynolds transport theorem is first applied to this bi-fluid volume. The total volume is divided into two sub-volumes:
V(t) = V_g(t) + V_l(t) (3.3)
Green-Ostrogradsky bi-fluid theorem
For a fluid particle, the Green-Ostrogradsky theorem links the flux of a vector field E(x, t) to the divergence of this field:
∫_∂V(t) E(x, t) • n_∂V dS = ∫_V ∇ • E(x, t) dV (3.4)
The surface integral, over the boundary ∂V, of the scalar product of E with n_∂V can be decomposed, and the Green-Ostrogradsky theorem applied separately to the boundaries of the gas and liquid volumes; each of these boundaries includes the interface I. This decomposition into two fluids reveals a jump relationship at the interface:
∫_∂V(t) E • n_∂V dS = ∫_{V_g + V_l} ∇ • E dV + ∫_I [[E]] • n dS (3.5)
Two-phase Reynolds transport theorem with mass transfer at the interface
For a volume V with a single-phase fluid, the time derivative of the volume integral of a scalar field Q over the time-dependent volume V(t) is equal to the sum of the volume integral of the partial time derivative of Q and the surface integral, over the volume boundary ∂V(t), of the product of Q with the normal displacement velocity of the boundary v_∂V • n_∂V:
d/dt ∫_V(t) Q(x, t) dV = ∫_V(t) ∂Q(x, t)/∂t dV + ∫_∂V(t) Q(x, t) v_∂V • n_∂V dS (3.6)
Then, using Eq. (3.6) on the two separate volumes, we obtain:
d/dt ∫_V(t) Q dV = d/dt ∫_{V_g(t)} Q dV + d/dt ∫_{V_l(t)} Q dV (3.7)
d/dt ∫_V(t) Q dV = ∫_{V_g(t)} ∂Q/∂t dV + ∫_{V_l(t)} ∂Q/∂t dV + ∫_{∂V_g(t)} Q v_∂Vg • n_∂Vg dS + ∫_{∂V_l(t)} Q v_∂Vl • n_∂Vl dS (3.8)
A surface integral on the interface I between the two phases appears, knowing that:
∂V(t) = [∂V_l(t) − I(t)] + [∂V_g(t) − I(t)] (3.9)
The vectors n_∂Vl and n_∂Vg are directed outwards from the volumes V_l and V_g, and are normal to the surfaces ∂V_l and ∂V_g. At the interface they are opposite. The previous equation becomes:
d/dt ∫_V(t) Q dV = ∫_{V_g+l(t)} ∂Q/∂t dV + ∫_∂V(t) Q v_∂V • n_∂V dS + ∫_I(t) Q_g v_∂Vg • n_∂Vg dS + ∫_I(t) Q_l v_∂Vl • n_∂Vl dS (3.10)
The speed of movement of an interface is generally identical to the speed of movement of the adjacent continuous media. However, when there is mass transfer or phase change at its location, the interface no longer moves at the same speed as the surrounding continuous media, so v ≠ v_I, where v is the velocity of the continuous media and v_I the velocity of the interface. Locally, in the immediate vicinity of the interface, the following equalities apply:
v_∂Vg = v_∂Vl = v − v_I (3.11)
Elsewhere, the velocity of the volume boundary is the same as that of the continuous medium; thus v = v_∂Vl, v = v_∂Vg, v = v_∂V. The vector n is defined as being normal to the interface and directed from the liquid phase to the gaseous phase:
n_∂Vl = −n_∂Vg = n (3.12)
From Eq. (3.10) we get:
d/dt ∫_V(t) Q dV = ∫_{V_g+l(t)} ∂Q/∂t dV + ∫_∂V(t) Q v • n_∂V dS + ∫_I(t) Q_l (v − v_I) • n dS − ∫_I(t) Q_g (v − v_I) • n dS (3.13)
The Green-Ostrogradsky theorem applied to the surface integral, i.e. the second term on the right-hand side of Eq. (3.13), yields:
∫_∂V(t) Q v • n_∂V dS = ∫_V(t) ∇ • (Q v) dV (3.14)
Then the jump relationship for Q(x, t)(v − v_I) is formed with the surface integrals applied to the interface:
d/dt ∫_V(t) Q dV = ∫_V(t) [ ∂Q/∂t + ∇ • (Q v) ] dV + ∫_I(t) [[Q (v − v_I)]] • n dS (3.15)
Conservation principle
Mass conservation
In the same way as in the single-phase case, the mass of a small volume element is conserved; it is considered that there is no mass source term in the volume or on the surfaces:
d/dt [m(t, x)] = d/dt ∫_V(t) ρ(x, t) dV = 0 (3.16)
Equation (3.15) then implies that:
d/dt ∫_V(t) ρ dV = ∫_V(t) [ ∂ρ/∂t + ∇ • (ρ v) ] dV + ∫_I(t) [[ρ (v − v_I)]] • n dS = 0 (3.17)
The localization principle states that a null integral for any volume V(t) implies that its integrand is null. This establishes the principle of local conservation of mass. For the volume V(t) we obtain:
∂ρ/∂t + ∇ • (ρ v) = 0 (3.18)
And at the interface I(t), the jump relation for mass conservation is:
[[ρ (v − v_I)]] • n = 0 (3.19)
This last relation reflects the equality of the mass flows on each side of the interface. It introduces an input parameter of the model, the interfacial mass transfer rate ṁ in [kg · m⁻² · s⁻¹]:
ṁ = ρ_g (v_g − v_I) • n = ρ_l (v_l − v_I) • n (3.20)
Momentum conservation
The temporal variation of the momentum m v of a fluid particle results from the forces acting on this particle. As in the single-phase case, the forces affecting the particle are divided into surface forces F_s and volume forces F_v. However, a new type of force is added, the line force F_l, which is exerted exclusively on the edge ∂I of the interface. Introducing the surface tension γ, we obtain for the three forces:
F_l = ∫_∂I γ n_∂I dL (3.21)
F_s = ∫_∂V T • n_∂V dS (3.22)
F_v = ∫_V ρ f_v dV (3.23)
with T the stress tensor and f_v the density of the volume forces. Before establishing the conservation law, the line force is rewritten as a surface integral. By applying Stokes' theorem, which is the surface version of the Green-Ostrogradsky theorem, the line integral is transformed into a surface integral:
F_l = ∫_I (n × ∇) × (γ n) dS (3.24)
Then, the surface gradient operator ∇_s, the surface divergence operator ∇_s •, and the definition of the curvature of the interface κ = ∇_s • (−n) allow the line force to be written as:
F_l = ∫_I (∇_s γ + γ κ n) dS (3.25)
This expression gives rise to the Marangoni term ∇_s γ, which appears in the case of spatial variations of the surface tension, and to the term γ κ n, the Laplace pressure responsible for the pressure difference between the two phases. Now that the line force is expressed as a surface integral, Newton's second law is applied to the fluid particle:
d/dt ∫_V(t) ρ v dV = ∫_∂I γ n_∂I dL + ∫_∂V T • n_∂V dS + ∫_V ρ f_v dV (3.26)
Then Eq. (3.15) is applied to the left-hand term, the line force term is rewritten, and Eq. (3.5) is applied to the surface force term, which gives:
∫_V(t) [ ∂(ρv)/∂t + ∇ • (ρ v ⊗ v) − ∇ • T − ρ f_v ] dV + ∫_I(t) ( [[ρ v ⊗ (v − v_I)]] • n − [[T]] • n − γ κ n − ∇_s γ ) dS = 0 (3.27)
Locally, in each phase, we have:
∂(ρv)/∂t + ∇ • (ρ v ⊗ v) = ∇ • T + ρ f_v (3.28)
And at the interface:
[[ρ v ⊗ (v − v_I)]] • n − [[T]] • n − γ κ n − ∇_s γ = 0 (3.29)
The stress tensor can be decomposed into a pressure component and a viscous stress tensor:
T = −p I + 2µ D (3.30)
where µ is the dynamic viscosity and D = (∇v + (∇v)ᵀ)/2. Recalling the mass conservation at the interface, [[ρ (v − v_I)]] • n = 0, the term [[ρ v ⊗ (v − v_I)]] • n can be expressed as ṁ [[v]]. At the interface, introducing the mass transfer rate and developing the stress tensor, we finally obtain:
ṁ [[v]] + [[p]] n − [[2µ D]] • n = γ κ n + ∇_s γ (3.31)
Projections along the normal and along a unit tangent t to the interface result in:
ṁ [[v]] • n + [[p]] − [[2µ D • n]] • n = γ κ (3.32)
ṁ [[v]] • t − [[2µ D • n]] • t = ∇_s γ • t (3.33)
As shown in Eq. (3.32) and Eq. (3.33), the flow at the interface is influenced by the surface tension, the Marangoni effect, and the mass transfer. The term ṁ [[v]] represents the contribution of the mass transfer across the interface to the tangential and normal stress balances. This means that slip can occur at the interface as a consequence of the mass transfer.
3 From continuous to discrete model
Having laid the foundations of the mathematical model, it is necessary to describe how the transition from the mathematical model to the numerical model is made.
FVM -volume of fluid
Averaging of continuous quantities over a finite volume
In order to move from a mathematical model to a discrete model, the partial differential equations representing the conservation laws must be transformed into discrete algebraic equations. In the finite volume method, the continuous quantities (functions of space and time) are averaged over each volume represented by the cells of the mesh; we note ⟨•⟩ this cell average:
⟨β_i⟩ = (1/V_cell) ∫_{V_cell} β_i dV (3.34)
The momentum and continuity equations resulting from the conservation principle are adapted to the single-field approach using a weighted average of the quantities they govern. Averaging the divergence and gradient operators, however, is not straightforward: integrals and derivatives cannot be interchanged in volumes that contain the interface I. The spatial volume averaging theorem has to be used [START_REF] Whitaker | Theory and applications of transport in porous media: The method of volume averaging[END_REF]:
⟨∇β_i⟩ = ∇⟨β_i⟩ + (1/V_cell) ∫_{A_I,cell} β_i n dA (3.35)
⟨∇ • β_i⟩ = ∇ • ⟨β_i⟩ + (1/V_cell) ∫_{A_I,cell} β_i • n dA (3.36)
where A_I,cell is the part of the interface surface contained in the cell.
Volume fraction
Instead of considering two fluids (gas-liquid), the Volume Of Fluid (VOF) method assumes a single fluid and solves a single conservation equation for mass and momentum. A continuous indicator function 1_i is used to distinguish the phase i from the others. It takes the value 1 when phase i is present at a given point of the system and 0 otherwise. In the VOF methodology, the integral of this function over the cell defines the volume fraction α_i:
α_i = (1/V_cell) ∫_{V_cell} 1_i dV (3.37)
In cells entirely filled with phase i, α_i = 1, and 0 otherwise. A notable consequence of this kind of multiphase modelling is that the interface I is diffuse and spans several cells of the mesh.
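The following sketch illustrates the definition of Eq. (3.37) on a uniform 2D grid: α is approximated by sub-sampling the indicator function of a circular "bubble" in each cell. This is a simple, non-geometric initialisation for illustration; production codes use exact geometric intersections.

```python
import numpy as np

N, L, R0 = 64, 1.0, 0.25   # grid size, domain length, bubble radius (assumed)
sub = 4                    # sub-samples per cell and per direction
h = L / N

alpha = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        # centres of the sub-cells of cell (i, j), domain centred on 0
        xs = (i + (np.arange(sub) + 0.5) / sub) * h - L / 2
        ys = (j + (np.arange(sub) + 0.5) / sub) * h - L / 2
        X, Y = np.meshgrid(xs, ys)
        # fraction of sample points inside the bubble ~ volume fraction
        alpha[i, j] = np.mean(X**2 + Y**2 < R0**2)

print("gas volume:", alpha.sum() * h * h, "vs exact:", np.pi * R0**2)
```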
Transport of the volume fraction
A transport equation is added to the Navier-Stokes equations to track the volume fraction of phase i, which makes it possible to distinguish the phases and identify their interface:
∂α/∂t + ∇ • (α v) = 0 (3.38)
In cells crossed by the interface I, we get 0 < α_i < 1. This equation is common in CFD and is routinely solved, which allows precise control of the volume of each phase in each cell of the mesh at a low computational cost. Therefore, the VOF methodology is known to have good volume conservation properties. The several immiscible fluids are considered as one effective fluid throughout the domain; this is referred to in the literature as a one-fluid formulation. The diffusivity of the interface, which spreads over several cells of the mesh, is a handicap of the VOF methodology. Surface tension is a local force whose points of application lie directly on the interface and which varies along it. In this context, it may seem counter-intuitive and inaccurate to choose a multiphase model that diffuses the interface in order to account for the impact of surface tension on it.
Continuum surface force formulation
There are several formulations for modelling surface tension. In a literature review, Popinet describes the recent development of the Eulerian surface tension formulations [START_REF] Abu-Al-Saud | A conservative and well-balanced surface tension model[END_REF]. One of the most common formulations of surface tension for VOF was first introduced by Brackbill et al. and is called the "Continuum Surface Force" (CSF) [START_REF] Brackbill | A continuum method for modeling surface tension[END_REF]. The formulation adopted in this thesis is based on it. It is therefore necessary to describe it first, in order to understand its shortcomings and the modifications made to overcome them.
Surface tension expressed as a volumetric force
In this formulation, the surface tension force is converted into a volumetric force and introduced into the momentum equation. This surface tension source term can be calculated from the values of the volume fraction and the surface tension coefficient. Among the different models, one of the most commonly used is the continuum surface force model, CSF. Using a Dirac delta function, the surface integral of Eq. (3.25) accounting for the surface tension becomes a volume integral:
F_l = ∫_I (∇_s γ + γ κ n) dS = ∫_V (∇_s γ + γ κ n) δ_S dV (3.39)
This volumetric formulation makes it possible to integrate the surface tension as a source term in the momentum equation Eq. (3.28):
f_γ = (∇_s γ + γ κ n) δ_S (3.40)
∂(ρv)/∂t + ∇ • (ρ v ⊗ v) = ∇ • T + ρ f_v + f_γ (3.41)
Interface density
The operator δ_S represents the interface density present in the cell. ∥V ∩ I∥ is the amount of interface present in the volume V:
∥V ∩ I∥ = ∫_{V∩I} dS = ∫_V |∇α| dV (3.42)
In three spatial dimensions this volume integral gives the interface area; in two dimensions the corresponding area integral gives the interface length. Since |∇α| is constant within each mesh cell of volume V_cell, the previous integral is transformed into:
∫_{V_cell} |∇α| dV = V_cell |∇α| = ∥V_cell ∩ I∥ = I_cell (3.43)
I_cell represents the part of the interface present in the cell.
In each cell of index (i, j), |∇α_(i,j)| = I_cell / V_cell represents the interface density of the cell, and so:
δ_S = |∇α| (3.44)
Normal and curvature
The geometrical properties of the interface, the normal vector and the curvature, can be calculated from the gradient of α:
n = ∇α / |∇α| (3.45)
κ = ∇ • n = ∇ • ( ∇α / |∇α| ) (3.46)
The volumetric surface tension force f_γ can then be expressed in the CSF formulation (neglecting the Marangoni term), that is, using |∇α|:
f_γ = γ κ n δ_S = γ κ (∇α / |∇α|) |∇α| = γ κ ∇α (3.47)
CSF Marangoni effect
Very often, authors seeking to model the effects of surface tension assume that it is constant. Consequently, they do not express the Marangoni effect and remove the term ∇_s γ from Eq. (3.40). Surface tension, however, depends on temperature and concentration, among other things. A value of the surface tension coefficient can thus be calculated in each cell of the mesh, even if the interface does not pass through it, and hence a surface tension gradient can be computed. From the definition of the surface gradient we get:
∇_s γ = ∇γ − n(n • ∇γ) = ∇γ − (∇α / |∇α|) ( (∇α / |∇α|) • ∇γ ) (3.48)
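The CSF ingredients of Eq. (3.44)-(3.47) can be summarised in a short sketch. The naive, purely gradient-based curvature below is precisely what generates spurious currents; the height-function approach of Guo et al. and Seric et al. improves on it.

```python
import numpy as np

def csf_force(alpha, h, gamma):
    """Naive CSF surface tension force from a volume fraction field alpha
    on a uniform grid of spacing h (centred finite differences)."""
    gx, gy = np.gradient(alpha, h)             # grad(alpha)
    norm = np.sqrt(gx**2 + gy**2) + 1e-12      # |grad(alpha)|, regularised
    nx, ny = gx / norm, gy / norm              # interface normal, Eq. (3.45)
    # curvature kappa = div(n), Eq. (3.46)
    kappa = np.gradient(nx, h, axis=0) + np.gradient(ny, h, axis=1)
    # f_gamma = gamma * kappa * grad(alpha), Eq. (3.47)
    return gamma * kappa * gx, gamma * kappa * gy
```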
Well-balanced relation
It is assumed that there is a discretized interface geometry such that a zero velocity field is the solution of the Navier-Stokes equations. The mechanical equilibrium of the discrete system at zero velocity is characterized by the momentum equation in which all velocity-dependent terms are removed:

∂(ρv)/∂t + ∇ · (ρ v ⊗ v) + ∇p − ∇ · (2µD) = f_γ    (3.49)
which reduces to ∇p = f_γ    (3.50)

This leaves a balance between the discrete pressure gradient and the surface tension term. In the absence of gravity and for a constant surface tension, we obtain [START_REF] Abu-Al-Saud | A conservative and well-balanced surface tension model[END_REF]:

−∇p + γκn_I δ_I = −∇p + γκ∇α = 0    (3.51)

Assuming a constant curvature we obtain:

−∇*(p − γκα) = 0    (3.52)

where ∇* is a numerical approximation of the gradient. The exact discrete numerical solution, which guarantees exact balance between surface tension and pressure in the case of constant curvature and surface tension, is then simply:

p − γκα = constant    (3.53)

In the case of a numerical simulation of a static bubble in a liquid in a zero-gravity environment, giving the (constant) curvature as an input parameter of the simulation rather than trying to calculate it, this relationship should logically be verified. If it is not, an imbalance is created by the discretization scheme. Popinet specifies that, for the equilibrium condition to be met, the pressure gradient should be estimated using the same discrete operator as the one used to estimate the gradient of the indicator function appearing in the volumetric surface tension force [START_REF] Abu-Al-Saud | A conservative and well-balanced surface tension model[END_REF]. He points out that Brackbill et al., in the original CSF article, use two different operators: the pressure gradient is evaluated from values taken at the centre of the faces, while the gradient of the volume fraction is evaluated from values taken at the centre of the cells [START_REF] Brackbill | A continuum method for modeling surface tension[END_REF]. They calculate the surface tension force at the centre of the faces by averaging the values taken at the centre of the cells to perform the force balance. The values used to calculate the discretized gradients of pressure and volume fraction must be taken at the same location in the mesh in order to obtain a well-balanced relation. If this is not the case, the discrete operators used to calculate the pressure gradient and the volume fraction gradient are not the same and an imbalance is created. However, the use of this discrete operator does not give a sufficiently accurate approximation of the gradient of the volume fraction to estimate the interface normal and the curvature. Popinet therefore suggests calculating the curvature by another method than the one appearing in the CSF method, i.e. without using the gradient of the volume fraction [START_REF] Abu-Al-Saud | A conservative and well-balanced surface tension model[END_REF].
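The well-balanced property of Eq. (3.53) is easy to check numerically. The sketch below (a 1D illustration under the stated assumptions, with hypothetical names) initialises p = γκα + constant and verifies that the residual −∇*p + γκ∇*α vanishes to round-off when both gradients use the same central-difference operator; replacing one of the two stencils by a different one breaks the balance and leaves an O(1) residual, which is the discrete origin of spurious currents.

#include <math.h>

/* residual of Eq. (3.51) with identical discrete gradients for p and alpha;
 * if p[i] = gamma*kappa*alpha[i] + C, the return value is ~1e-16 */
double max_residual(const double *p, const double *alpha, int n,
                    double gamma, double kappa, double dx)
{
    double rmax = 0.0;
    for (int i = 1; i < n - 1; i++) {
        double gp = (p[i + 1] - p[i - 1]) / (2.0 * dx);      /* same stencil */
        double ga = (alpha[i + 1] - alpha[i - 1]) / (2.0 * dx);
        double r  = fabs(-gp + gamma * kappa * ga);
        if (r > rmax) rmax = r;
    }
    return rmax;
}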
Abadie et al. point out that not every discrete vector field is the gradient of a scalar field: its curl operator must be null [START_REF] Abadie | On the combined effects of surface tension force calculation and interface advection on spurious currents within Volume of Fluid and Level Set frameworks[END_REF]. Looking at Eq. (3.50), the curl of the right-hand side must vanish (i.e. ∇ × (f_γ + ρg) = 0) for it to be exactly equal to the discrete pressure gradient, which is the gradient of a scalar field. This mathematical condition is a prerequisite for equilibrium, but it is not fulfilled if the approaches mentioned in the CSF model are combined to calculate f_γ. Moreover, the gravity term ρg is not the gradient of a scalar, even in the discrete sense: only the sum f_γ + ρg is written as a gradient at equilibrium. These two terms must therefore be discretized together in order to hope to achieve a balance of the discrete system. Some authors have addressed this issue and implemented well-balanced algorithms in the VOF, level-set and front-tracking frameworks [Francois et al.].

Time step condition
For explicit schemes, to ensure that spurious currents do not develop over time, a stability condition on the time step introduced by Brackbill et al. must be applied [START_REF] Brackbill | A continuum method for modeling surface tension[END_REF]:

Δt < sqrt( ρ_avg (Δx)³ / (2πγ) )    (3.54)

where Δx is the grid spacing, γ the surface tension and ρ_avg the average density of the two phases. The physical reason given by Brackbill et al. is that the time step must be small enough to resolve the fastest capillary waves. The value of the time step is thus limited by the size of the mesh: as shown by Eq. (3.54), Δt ∝ (Δx)^(3/2). See [START_REF] Abu-Al-Saud | A conservative and well-balanced surface tension model[END_REF] for a detailed discussion of the subject.
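A one-line C helper makes the constraint of Eq. (3.54) concrete (a sketch; the function name is hypothetical):

#include <math.h>

/* Brackbill capillary time-step bound, Eq. (3.54):
 * dt_max = sqrt(rho_avg * dx^3 / (2*pi*gamma)) */
double capillary_dt(double rho_l, double rho_g, double dx, double gamma)
{
    const double pi = 3.14159265358979323846;
    double rho_avg = 0.5 * (rho_l + rho_g);
    return sqrt(rho_avg * dx * dx * dx / (2.0 * pi * gamma));
}

With the electrolyte and hydrogen properties used later in this work (ρ_l = 1000 kg·m⁻³, ρ_g ≈ 0.09 kg·m⁻³, γ = 0.075 N·m⁻¹), a grid spacing of 1 µm gives Δt ≲ 3 × 10⁻⁸ s, which illustrates how severely the (Δx)^(3/2) scaling constrains fine meshes.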
Volume conservation
Transporting materials by an incompressible flow results in the conservation of volume and mass. This property is a very important stake for the VOF method. By nature, the VOF method has good volume conservation properties, but some steps of the algorithm are approximate and the conservation is inaccurate in most existing algorithms [START_REF] Aulisa | A geometrical area-preserving Volume-of-Fluid advection method[END_REF]. A natural definition of mass conservation is a method which conserves the total area at each time step, so that:

Σ_ij α^n_ij A = Σ_ij α^(n+1)_ij A

The advection of the interface requires the calculation of cell boundary fluxes. The use of volume-of-fluid data and fluxes should lead directly to exact mass conservation, but in practice it does not. Moreover, interface advection algorithms may produce systematic errors, such as volume fractions that do not satisfy:

0 ≤ α^n_ij ≤ 1

These inconsistencies are difficult to correct: it is not obvious where the excess or missing mass should be disposed of, or retrieved. Code writers then routinely redistribute it in the surrounding cells with some diffusion algorithm, or reset the volume fraction to 1 or 0, thus destroying mass conservation. Various attempts have been made to ensure the boundedness and conservativeness of the phase fraction [START_REF] Scardovelli | DIRECT NUMERICAL SIMULATION OF FREE-SURFACE AND INTERFACIAL FLOW[END_REF][START_REF] Cummins | Estimating curvature from volume fractions[END_REF][START_REF] Afkhami | Height functions for applying contact angles to 2D VOF simulations[END_REF]. The error on the volume balance has important consequences on the simulation of bubble growth on walls: even if the error is small, the growth time and the diameter at detachment can be poorly predicted. Therefore, the accuracy of the phase volume balance is an indispensable ingredient of the numerical method.

The first challenge to overcome in the context of multiphase flows is the modelling of the pressure jump at the interface. After having described in the previous section the methods usually used to manage this discontinuity at the interface and the spurious currents they can generate, it is appropriate to present here the method that has been chosen to carry out the simulations of chapters 4 and 5. The objective of this section is to calculate γκn by an alternative and more efficient method than the CSF methodology. For this purpose a first method based on height functions is described. This methodology allows the determination of the curvature, the normal and the tangent at the interface. However, the use of height functions is not straightforward. A first necessary step is to define local coordinate systems to retrieve the data needed to calculate the height functions, as described in subsection 5.2. Furthermore, the height function method loses accuracy when the slope of the interface is too steep compared to the local coordinate system; the methodology for choosing between the vertical and horizontal local coordinate systems is described in subsection 5.3. Despite a good choice of the coordinate system, the use of a second method based on a polynomial fit, described in subsection 5.4, is necessary to obtain an accurate calculation of the curvature. The choice of the transition from one method to the other is not straightforward and requires tests which will be described in chapter 4. Finally, the last subsection describes how the term γκn is integrated into the Navier-Stokes equations.

Height definition
Definition of the height
The height function method makes the calculation of the normal and the curvature more accurate, thus reducing parasitic currents [START_REF] Poo | A computational method for determining curvatures[END_REF]. As shown in Fig. 19, the integral is represented by the area under the curve. On a mesh crossed by an interface, this value is given exactly by the volume fraction, which gives the part of the cell area occupied by a phase. A stencil of several cells around the cell through which the interface passes, and for which the curvature is to be calculated, is used as the basis for a coordinate system. In each cell of width Δx and height Δy, we have the value of the volume fraction occupied by one of the phases. Summing the volume fractions of a column of the stencil and multiplying by the cell height is equivalent to computing the integral of the function whose graph is the interface, in the local coordinate system defined by the stencil:

f(x₀) ≈ H(x₀) = (1/Δx) ∫ from x₀−Δx/2 to x₀+Δx/2 of f(x) dx    (3.55)
H(x₀) = H(i) = Σ_j α_ij · Δy    (3.56)

Curvature calculation
From this quantity, an approximation of the first and second derivatives of the function in the i-th column can be obtained by using the values of the height in the left (i − 1) and right (i + 1) columns of the stencil:

H′(i) = (H(i+1) − H(i−1)) / (2Δx)    (3.57)
H″(i) = (H(i+1) − 2H(i) + H(i−1)) / Δx²    (3.58)

An estimation of the tangential vector, the normal vector and the curvature can be obtained from there:

t = 1/√(1 + H′(i)²) · (1, H′(i))ᵀ    (3.59)
n = 1/√(1 + H′(i)²) · (−H′(i), 1)ᵀ    (3.60)

The curvature is calculated as the negative of the divergence of the normal vector:

κ = −∇ · n = − H″(i) / (1 + H′(i)²)^(3/2)    (3.61)

The key to the success of this method is having access to sufficiently accurate discrete values of the height function [START_REF] Francois | Interface curvature via volume fractions, heights, and mean values on nonuniform rectangular grids[END_REF]: the smaller the cell width, the better the approximation. An issue arises when the slope of the interface tends to infinity, in which case the function is no longer defined. More generally, according to the analysis of Popinet, the error on the curvature increases with the magnitude of H′(i). For each cell the coordinate system is therefore oriented so that H′(i) ≤ 1. Usually a vertical stencil and a horizontal stencil are used, and the one retained is the one in which the interface is represented by the function graph with the lowest slope.
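As an illustration, here is a minimal C sketch of Eqs. (3.56)-(3.61) for a single vertical 3-column stencil (the layout is hypothetical: vof[c][r] holds the volume fractions of a 3 × 7 stencil with the full cells, α = 1, at the bottom and the empty cells, α = 0, at the top).

#include <math.h>

#define NROWS 7

/* column height, Eq. (3.56): sum of volume fractions times cell height */
static double column_height(const double vof_col[NROWS], double dy)
{
    double h = 0.0;
    for (int r = 0; r < NROWS; r++)
        h += vof_col[r] * dy;
    return h;
}

double hf_curvature(const double vof[3][NROWS], double dx, double dy)
{
    double Hm = column_height(vof[0], dy);   /* column i-1 */
    double H0 = column_height(vof[1], dy);   /* column i   */
    double Hp = column_height(vof[2], dy);   /* column i+1 */

    double H1 = (Hp - Hm) / (2.0 * dx);            /* Eq. (3.57) */
    double H2 = (Hp - 2.0 * H0 + Hm) / (dx * dx);  /* Eq. (3.58) */

    return -H2 / pow(1.0 + H1 * H1, 1.5);          /* Eq. (3.61) */
}

Note that no volume fraction gradient appears anywhere in this computation; the only requirement is that each column of the stencil fully brackets the interface.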
Definition of a local coordinate system
The use of the height function methodology is not straightforward. A first necessary step is to define local coordinate systems to retrieve the data needed to calculate the height functions. The goal is to capture the interface so that it can be assimilated to a mathematical function. By using stencils, the data necessary to calculate the curvature of the interface in each cell can be retrieved. However, it is necessary to adapt the size and direction of the stencil and, in some cases, to modify the data to fit the mathematical formalism of the analytical calculation. To be able to use Eq. (3.58), the values of the volume fractions allowing the calculation of three contiguous heights must be retrieved; this is why the stencil is usually made up of 3 cells in width. For the number of cells to be considered in height, the choices differ according to the authors: Guo et al. chose to start from a 3 × 7 stencil base, whereas Popinet adapts the number of cells of each column as needed. The essential requirement is to be able to compute the integral of the interface in each column correctly: the volume fraction of the lower cell must be equal to 1 and the volume fraction of the upper cell must be zero, as shown in Fig. 20. One of the problems to take into account when switching from an interface to the graph of a function is that, for one x-coordinate, the interface can have several y-coordinates. A stencil can also be oriented horizontally with respect to the mesh, i.e. a stencil for which the height is calculated along the x-axis of the mesh.

Offset procedure
The interface extends over several cells, and each cell for which the interface density is not zero must be taken into account: as long as ∇α ≠ 0, a source term is calculated for the cell. Since the interface is diffuse in the VOF method, the volume fraction gradient represents the portion of the interface present in the cell. In cells where the volume fraction is equal to 0 or 1 while its gradient is not zero, the calculated height does not fall within the cell concerned.
For example, in the diagram in Fig. 19, the gradient of the volume fraction of the cells in the top row is different from zero, so the source term needs to be calculated; however, the volume fraction is zero, and when the height is computed the value falls outside the cell concerned. The calculation of the curvature is then false. To overcome this defect it is necessary to recover the curvature from another nearby cell. An offset procedure originally devised by Magnini can be used to recover this value in one of the cells of the stencil column [Magnini, 2016b]. However, this target cell can sometimes be 4 cells away from the original cell (usually when the interface is diagonal to the mesh). Results can be misleading when testing the code on a stagnant bubble in a zero-g environment: the curvature of a circle is constant, so when the algorithm retrieves data from a remote cell, the result looks better, but this does not work for an arbitrary interface. This is why another procedure, averaging the curvature weighted by the gradient of the volume fraction over the cells adjacent to the original cell, has been used, as shown in Alg. 1.

Algorithm 1: Calculation of the curvature of the cells with α = 0 or 1
Result: return κ_(i,j)
num_(i,j) = 0; den_(i,j) = 0;
if |∇α_(i,j)| > 0 then
  if α_(i,j) = 0 or 1 then
    for k1 = −1 to 1 do
      for k2 = −1 to 1 do
        if (k1 ≠ 0 and k2 ≠ 0) and |∇α_(i+k1,j+k2)| > 0 and (α_(i+k1,j+k2) ≠ 0 or 1) then
          num_(i,j) = num_(i,j) + κ_(i+k1,j+k2) × |∇α_(i+k1,j+k2)|;
          den_(i,j) = den_(i,j) + |∇α_(i+k1,j+k2)|;
        end
      end
    end
  end
end
κ_(i,j) = num_(i,j) / den_(i,j);

Orientation of the local coordinate system
As mentioned above, the HF method loses accuracy when the interface slope is too steep relative to the local coordinate system: the approximation of the first and second derivatives becomes weaker. This is why it is necessary to use two types of local coordinate systems, one horizontal and one vertical, with axes parallel to those of the mesh, and to switch from one to the other as needed. In the mesh coordinate system, the components of the volume fraction gradient provide a first estimate of the orientation of the interface. This estimate allows the selection of the most favourable stencil for the calculation of the heights, i.e. the one in which the interface is oriented most parallel to the abscissa axis of the local coordinate system. However, when the interface is diagonal to the mesh axes, another method must be used. As shown in Fig. 23, one solution is to use a mesh-decoupled method which computes heights within columns not aligned with the computational mesh but rather with the interface normal vector [START_REF] Ito | A high-precision calculation method for interface normal and curvature on an unstructured grid[END_REF][START_REF] Owkes | A mesh-decoupled height function method for computing interface curvature[END_REF]. The problem is that these methods need a first computation of the interface normal, and thus of the volume fraction gradient. Another point to emphasize is that these methods cut the cells of the mesh to compute the integrals: data from several columns of the mesh are used to reconstruct the integral under the curve, and the equality between the volume fraction and the integral under the interface demonstrated in Fig. 19 is lost. The heights are then approximated by relying on the volume fraction gradient.
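The axis-aligned selection rule itself reduces to a few lines of C (a sketch with hypothetical names): the columns are built along the axis of the largest gradient component, so that the interface graph has a slope of at most 1 in the retained frame.

#include <math.h>

typedef enum { COLUMNS_ALONG_Y, COLUMNS_ALONG_X } stencil_dir;

/* |n_y| >= |n_x|: the interface is mostly horizontal, so the heights are
 * accumulated in vertical columns, and conversely */
stencil_dir choose_stencil(double grad_ax, double grad_ay)
{
    return (fabs(grad_ay) >= fabs(grad_ax)) ? COLUMNS_ALONG_Y
                                            : COLUMNS_ALONG_X;
}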
The approximation of the volume fraction gradient is the first source of error in the calculation of the curvature in more traditional methods such as CSF. The advantage of the HF method is to be able to calculate the curvature without using this gradient; its disadvantage is that its usage is restricted to rectangular meshes.

Selection of the heights
In the previous step, heights were calculated for the horizontal and the vertical stencil. These heights define points, in a local coordinate system to be determined, in which the interface will be approximated by a second-degree polynomial whose curvature is then evaluated. It is not obvious which method is preferable between using the height functions directly to determine the curvature and using a polynomial fit. The answer to this question is discussed in a later chapter, as it requires a dedicated numerical study. It should be noted that the polynomial fit technique is more complex and computationally intensive than the one based on height functions. In a way, the heights are used to reconstruct the interface. The diagram in Fig. 24 shows a representation of the interface: in the two stencils, vertical and horizontal, a total of six heights were calculated, each stencil having 3 columns, but only four heights, i.e. four points representing the interface, were retained. One of the tools helping to select these points is the normal to the interface n, calculated previously using the gradient of the volume fraction: stencil columns in which the interface is too steep are excluded. An orthonormal system (O, i′, n) is defined, whose origin O is the midpoint of the two points closest to the centre of the cell and whose ordinate axis is directed along the normal to the interface. The coordinates (x_m, y_m) of each point P are then computed in this new system.

Fit of a parabola
The parameters (a₀, a₁, a₂) of the equation of a parabola are estimated from the minimisation of an objective function:

f_pol(a_i, x_m) = a₀x_m² + a₁x_m + a₂    (3.62)
F_objective = Σ_m [y_m − f_pol(a_i, x_m)]²    (3.63)

The curvature is estimated from these parameters:

κ = 2a₀ / (1 + a₁²)^(3/2)    (3.64)

As pointed out by Popinet, while the least-square minimisation is not particularly complex or computationally expensive, the difficulty is to select the right points [START_REF] Popinet | An accurate adaptive solver for surface-tension-driven interfacial flows[END_REF]. More generally, the main difficulty of an HF algorithm is for the programmer to handle the many particular cases, which can lead to code with many conditional statements.

Identification of parameters
Two methodologies were used to identify the parameters: one based on the direct minimisation of Eq. (3.63), the other based on the Levenberg-Marquardt algorithm. The second method is more flexible and can be adapted to both linear and non-linear problems. This second methodology was coded firstly with the intention of validating the two methodologies by comparing their results, and secondly with the objective of providing a minimisation tool for testing purposes. The objective function reaches a minimum when F′_objective = 0, which corresponds to solving the following system of equations:

Σ_m y = a₂ m + a₁ Σ_m x + a₀ Σ_m x²    (3.65)
Σ_m xy = a₂ Σ_m x + a₁ Σ_m x² + a₀ Σ_m x³    (3.66)
Σ_m x²y = a₂ Σ_m x² + a₁ Σ_m x³ + a₀ Σ_m x⁴    (3.67)

where m is the number of points considered. A direct solve of this 3 × 3 system is sketched below.
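A minimal C sketch of the direct solve (names hypothetical; Cramer's rule is sufficient here since the moment matrix is only 3 × 3):

#include <math.h>

/* fit y = a0*x^2 + a1*x + a2 to m points and return kappa, Eq. (3.64);
 * returns -1 if the moment matrix is (nearly) singular */
int fit_parabola_curvature(const double *x, const double *y, int m,
                           double *kappa)
{
    double Sx = 0, Sx2 = 0, Sx3 = 0, Sx4 = 0, Sy = 0, Sxy = 0, Sx2y = 0;
    for (int k = 0; k < m; k++) {
        double xk = x[k], x2 = xk * xk;
        Sx += xk; Sx2 += x2; Sx3 += x2 * xk; Sx4 += x2 * x2;
        Sy += y[k]; Sxy += xk * y[k]; Sx2y += x2 * y[k];
    }
    /* rows of M*(a2,a1,a0)^T = b are Eqs. (3.65)-(3.67) */
    double M[3][3] = {{ (double)m, Sx,  Sx2 },
                      { Sx,        Sx2, Sx3 },
                      { Sx2,       Sx3, Sx4 }};
    double b[3] = { Sy, Sxy, Sx2y };
    double det =  M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
                - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
                + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]);
    if (fabs(det) < 1e-14) return -1;
    double a1 = ( M[0][0]*(b[1]*M[2][2] - M[1][2]*b[2])
                - b[0]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
                + M[0][2]*(M[1][0]*b[2] - b[1]*M[2][0]) ) / det;
    double a0 = ( M[0][0]*(M[1][1]*b[2] - b[1]*M[2][1])
                - M[0][1]*(M[1][0]*b[2] - b[1]*M[2][0])
                + b[0]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]) ) / det;
    *kappa = 2.0 * a0 / pow(1.0 + a1 * a1, 1.5);   /* Eq. (3.64) */
    return 0;
}

Only a₀ and a₁ are needed for the curvature; a₂ (the offset) drops out of Eq. (3.64), which is why it is not extracted in this sketch.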
The Levenberg-Marquardt methodology is shown in Alg. 2.

Algorithm 2: Determination of parameters - Levenberg-Marquardt
d_LM is the direction of descent of the algorithm; J is the Jacobian matrix
Result: return crit_quad
n_iter = 0;
while n_iter < cste do
  crit_quad = 0
  for i = 0; i < m; i + 1 do
    dev[i] = y_m[i] − f_poly(x_m, a₀, a₁, a₂)
    crit_quad = dev²[i] + crit_quad
  end
  if n_iter == 0 then
    λ = cste; λ_rec = λ
    crit_quad,rec = crit_quad
    a₂,rec = a₂, a₁,rec = a₁, a₀,rec = a₀
  else
    if crit_quad < crit_quad,rec then
      λ = 0.1λ; λ_rec = λ
      crit_quad,rec = crit_quad
      a₂,rec = a₂, a₁,rec = a₁, a₀,rec = a₀
    else
      λ = 10λ
      a₂ = a₂,rec, a₁ = a₁,rec, a₀ = a₀,rec
    end
  end
  d_LM = (JᵀJ + λΩ)⁻¹ (Jᵀ[y − f_poly(x)])
  for i = 0; i < m; i + 1 do
    a[i] = a_rec[i] + d_LM[i]
    crit_quad = dev²[i] + crit_quad
  end
  n_iter = n_iter + 1
end

Insertion in the Navier-Stokes equations
Averaging of physical properties at the interface
The physical properties of each fluid are calculated as weighted averages based on the distribution of the phase volume fraction, thus being equal to the properties of each fluid in their corresponding occupied regions and varying only across the interface. Single-field quantities are defined over the whole computational domain. In the case of a system with two fluids, gas and liquid, in each cell we get:

β = α_g β_g + α_l β_l    (3.68)
β = (1 − α_l)β_g + α_l β_l    (3.69)

To simplify the reasoning, a single variable α is considered such that:

α = α_l = 1 − α_g    (3.70)

Thus density is considered a function of the volume fraction:

ρ(α) = (ρ_l − ρ_g) α + ρ_g    (3.71)

However, in order to correctly capture the viscous term at the interface, µ(∇v + (∇v)ᵀ), an average of the dynamic viscosity µ performed as for the density can be too approximate. At the interface, large and abrupt changes in viscosity take place, and an accurate evaluation of µ is crucial to reproduce the correct free surface. The use of an arithmetic mean actually causes an artificial acceleration of the fluid in the less dense phase, resulting in speeds that are too high due to a non-physical viscous term. According to Kothe et al., the relative interface/cell-face orientation must be taken into account [START_REF] Kothe | Perspective on Eulerian finite volume methods for incompressible interfacial flows[END_REF]:

µ = η [(1 − α)µ_g + αµ_l] + (1 − η) [(1 − α)/µ_g + α/µ_l]⁻¹,  with η = |n · n_F|    (3.72)

where n_F is a unit vector normal to the cell face. The formulations of the averaged quantities vary from one model to the other to ensure code stability and reduce discretization errors.
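A compact C sketch of Eq. (3.72) (names hypothetical; both normals are assumed to be unit vectors):

#include <math.h>

double face_viscosity(double alpha, double mu_g, double mu_l,
                      double nx, double ny,      /* interface normal */
                      double nfx, double nfy)    /* face normal      */
{
    double eta  = fabs(nx * nfx + ny * nfy);                   /* |n . nF| */
    double arit = (1.0 - alpha) * mu_g + alpha * mu_l;         /* arithmetic */
    double harm = 1.0 / ((1.0 - alpha) / mu_g + alpha / mu_l); /* harmonic  */
    return eta * arit + (1.0 - eta) * harm;                    /* Eq. (3.72) */
}

The two limits make the design choice explicit: when the face is normal to the interface (η = 1) the arithmetic mean is recovered, and when the face is tangential to it (η = 0) the harmonic mean is used, as for resistances in series.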
Density shift procedure
In the original CSF method, the final momentum source is defined as:

f_γ′ = ρ / (½(ρ_g + ρ_l)) · f_γ    (3.73)

Surface tension should act uniformly, regardless of the instantaneous liquid fraction and density in the cell. It may be argued that the force should act on the mass of a shifted cell, Δx/2 thick on both sides of the interface, in which the density is constant and equal to the mean density ρ̄ = ½(ρ_g + ρ_l). By dividing by ρ̄ rather than ρ_(i,j), spurious currents can indeed be reduced. Guo et al. used a modified density shifting procedure to improve the stability of the momentum sources [START_REF] Guo | Implementation of a height function method to alleviate spurious currents in CFD modelling of annular flow in microchannels[END_REF]:

f′_(γ,i,s) = N_(i,s) ρ f_(γ,i,s)    (3.74)
N_(i,s) = Σ_cells V_cell f_(γ,i,s) / Σ_cells V_cell ρ f_(γ,i,s)    (3.75)

where i = x, y denotes the coordinates and s = ps, ns denotes the positive and negative sources. This shifting procedure causes the surface tension force to be applied more to the phase with the higher density and therefore improves the numerical stability. ρ is the bulk density and N_(i,s) is a normalisation factor used to conserve the source. The shifting for positive and negative sources needs to be kept separate, otherwise a negative value of the normalisation factor may occur and reverse the signs of the local source values. The problem with Eq. (3.73) is that it may not strictly conserve the source in the computational domain, although it can improve the numerical stability compared to the direct implementation of f_γ. By using the normalisation factor, however, the density shifting procedure of Eq. (3.74) achieves exact source conservation.

Marangoni model
Having described in the previous section how the pressure jump at the interface is calculated, it is appropriate to describe how ∂γ/∂s t, i.e. the Marangoni stress, can be calculated. The methodology presented here is based on height functions, and the calculations use the data collected by the part of the algorithm presented in section 5.2.

Deficiency of the surface gradient operator
Seric et al. underline that using the surface gradient operator as defined in Eq. (3.48) can result in inaccuracies when implemented in the VOF method for a general variable surface tension [START_REF] Seric | Direct numerical simulation of variable surface tension flows using a Volume-of-Fluid method[END_REF], for two reasons. First, the discontinuities of the material properties across the interface, represented by Eq. (3.1), can give the scalar field on which the surface tension depends a large jump across the interface: for example, in the case of a temperature-dependent surface tension where the fluids on each side of the interface have a large difference in conductivity. Second, surface tension can in general depend on the concentration: for example, in the case of the mixing of two liquids with different surface tensions, or in the case of a surface tension dependent on the surfactant concentration. The Marangoni effect is caused by a tangential stress located on the interface, due to a variation in surface tension. Surface tension is a concept that only makes physical sense at the interface, and its variation only makes sense along that interface. An estimation of the surface tension gradient is necessary to compute the surface gradient operator. The surface tension is a function of, among other things, the temperature and the concentration of chemical species; these quantities generally vary abruptly on either side of the interface. Therefore, in VOF and similar methods, the gradient of surface tension is a vector whose direction is close to the normal of the interface, which makes no physical sense. The surface gradient operator subtracts its interface-normal component to recover its tangential component.
This approximation, constructed from the data of a few cells (depending on how the gradient operator is calculated), is very sensitive to errors.

Calculation of the Marangoni term with HF
Seric et al. propose to implement the variation of surface tension using a method inspired by height functions [START_REF] Seric | Direct numerical simulation of variable surface tension flows using a Volume-of-Fluid method[END_REF]. The algorithm for implementing ∇_s γ(x) in the VOF method starts with the approximation of the interfacial values of the surface tension in each cell containing an interface segment. More precisely, the idea of constructing columns of cells, inspired by the computation of the interfacial curvature and normals using height functions, is used. In this method, the derivative of the surface tension along the interface is calculated directly:

f_st = (∂γ/∂s) δ_s t    (3.76)

where s is the arc length. The derivative ∂γ/∂s, δ_s and t are evaluated using the cell-centre values. First the surface tension values are defined at the interface, then the derivatives of γ are computed along the interface. The tangential component of the surface tension gradient at the interface is thus obtained directly, which avoids the projection along the tangent to the interface and the calculation errors generated by the surface gradient operator. Its other advantage is that the diffuse data due to the VOF method are averaged in a direction close to the normal to the interface, the columns of the stencil being oriented in the direction most perpendicular to the interface.

Calculation of the average surface tension coefficient in each cell
The first step is to determine the surface tension evaluated from the concentration at the centre of all interfacial cells, γ(pos, c_H2), with the volume fraction α_pos, where pos is the coordinate of the targeted cell centre. The surface tension in each column, denoted γ^x(pos_x, c_H2), is defined so that it has only one value per column, regardless of how many interfacial cells the column contains. The superscripts x, y represent the column direction. For columns with only one interfacial cell, the surface tension is not averaged among the neighbouring cells; the value is taken directly from the surface tension of that cell. If there is more than one interfacial cell in the column, then γ^x(pos_x, c_H2) is approximated by the volume-weighted average of the surface tension values belonging to the same column:

γ̃^x_i(pos_x, c_H2) = (α_(i,j) γ_(i,j) + α_(i,j+1) γ_(i,j+1) + ···) / Σ_j α_(i,j)    (3.77)
γ̃^y_j(pos_y, c_H2) = (α_(i,j) γ_(i,j) + α_(i+1,j) γ_(i+1,j) + ···) / Σ_i α_(i,j)    (3.78)

In this implementation, γ̃^x_i(pos_x, c_H2) is first defined for all interfacial cells. For a given coordinate, it is possible to define γ̃ for more than one interfacial cell. The direction of the column has to be determined along with the computation of the surface forces. The choice of the direction is based on the interface orientation: x or y is chosen to match the largest component of the normal vector to the interface, the same choice as for computing the curvature and the interface normal using height functions.

Derivative of surface tension
The next step is to evaluate the derivative along the interface, ∂γ/∂s. It is approximated by the derivative of the interfacial value γ̃ in the column formed in the x or y direction.
The derivative is computed as:

(∂γ/∂s)_(i,j) = (γ̃(i+1) − γ̃(i−1)) / ds    (3.79)

γ̃(i) is the weighted average of the values taken by the surface tension for the cells of column i; the same value of surface tension is therefore used in the calculation of the derivative for all the cells of a given column of the stencil:

γ̃(i) = Σ_j |∇α_ij| γ_(i,j) / Σ_j |∇α_ij|    (3.80)

In each interfacial cell, the derivative is computed along the interface using a central difference, i.e. the finite difference of γ̃ in the two neighbouring columns:

(∂γ^x/∂s)_(i,j) = (γ̃_(j+1) − γ̃_(j−1)) / ds    (3.81)

γ̃_j is the interfacial value of the surface tension in the column j constructed in the x direction.

Derivative of the arc length
The arc length increment is then calculated from the derivative of the height function:

ds = 2Δx √(1 + H′(i)²)    (3.82)

The arc length ds is computed from the height function in the same direction as ∂γ^x/∂s. For the example of the previous equation, the arc length is:

ds = 2Δy √(1 + H′(j)²)    (3.83)

where H′(j) is the derivative of the height function and Δy is the cell size. The next part of the surface gradient implementation is the choice of the tangent vector t, computed so that it satisfies t · n = 0. The direction of t depends on the direction used for computing ∂γ/∂s: t points in the direction of the positive component orthogonal to the x or y column direction. For example, t points in the positive x direction if the column is built in the y direction. An intermediate value has to be considered when choosing the tangent vector:

MG_x = (∂γ/∂s) sgn(t_x)    (3.84)
MG_y = (∂γ/∂s) sgn(t_y)    (3.85)

Fig. 25 shows examples of how ∂γ/∂s, MG_x and MG_y, respectively, are computed in all interfacial cells when a positive uniform gradient of the surface tension is imposed in the y direction. On the left figure, ∂γ/∂s changes sign in the first and third quadrants at the angles, measured from the positive x axis, of π/4 and 5π/4, respectively. At these points the direction of the columns used in the gradient computation changes, hence the two neighbouring cells have opposite signs of ∂γ/∂s. However, once the correct sign of the tangent vector components is included and each component is considered separately, as in the two previous equations, this inconsistency in the sign is corrected. The numerical expressions of the components of the surface force are then:

F_st,x = MG_x |t_x| δ_s    (3.86)
F_st,y = MG_y |t_y| δ_s    (3.87)

where δ_s is the Dirac function defined previously, which represents, from the numerical point of view, the part of the interface present in the cell. The two values MG_x and MG_y have to be evaluated in all the cells where δ_s ≠ 0. The same approach as the one used to define the curvature of the cells neighbouring the interface is used: the values in the cells neighbouring the interfacial cells are defined by averaging the values in the direct neighbours that already have a defined value. This procedure is repeated twice, ensuring that the values for the corner neighbours of the interfacial cells are defined as well. An identical approach is used for defining the x and y components of MG in the cells around the interface, which are subsequently used in the two previous equations.
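The following C sketch assembles the Marangoni components for one interfacial cell from Eqs. (3.80)-(3.85); the data layout is entirely hypothetical: gbar[] holds the gradient-weighted column averages of γ for the columns j−1, j, j+1 and Hj[] the corresponding heights, with d the column spacing.

#include <math.h>

void marangoni_components(const double gbar[3], const double Hj[3],
                          double d, double tx, double ty,
                          double *MGx, double *MGy)
{
    double Hp   = (Hj[2] - Hj[0]) / (2.0 * d);      /* height slope       */
    double ds   = 2.0 * d * sqrt(1.0 + Hp * Hp);    /* arc length, (3.83) */
    double dgds = (gbar[2] - gbar[0]) / ds;         /* Eq. (3.81)         */
    *MGx = dgds * (tx >= 0.0 ? 1.0 : -1.0);         /* Eq. (3.84)         */
    *MGy = dgds * (ty >= 0.0 ? 1.0 : -1.0);         /* Eq. (3.85)         */
}

The final force components of Eqs. (3.86)-(3.87) are then obtained by multiplying by |t_x| δ_s and |t_y| δ_s in the caller.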
Numerical model of mass transfer across the interface
This section first presents how interfacial mass transfer is modelled and why an accurate calculation of the interfacial area is necessary to ensure mass conservation on both sides of the interface. In a second step, the numerical method used to calculate this area is presented.

Conservation of dissolved species
Capturing the fluid behaviour of a multiphase and multicomponent system composed of a gas phase and a liquid phase, with the gas diluted in the liquid, requires the modelling of both its multiphase and its multicomponent characteristics. From a numerical point of view, the multiphase behaviour can be captured by solving the VOF equations, and the multicomponent aspect by solving the species transport equations. However, certain considerations must be taken into account in order to develop a holistic numerical model simulating multiphase and multicomponent physics simultaneously. Firstly, a dissolved species must be tracked only within the phase it exists in: the solution of the species transport equation for the dissolved concentration must have non-zero values only within the liquid phase, not within the gas phase. A non-zero solution in the gas phase would imply the existence of dissolved gas species within the gas phase itself, which is non-physical. Secondly, the mass of the studied species must be preserved during mass transfer. For example, to capture the absorption of dissolved species into the gas phase, a source/sink term S_(α,sp-trsft) should be added to the VOF equation, and a source/sink term S_(Y,sp-trsft) to the species transport equation:

∂α(t)/∂t + ∇ · (α(t) v) = S_(α,sp-trsft)    (3.88)
∂(ρY_i)/∂t + ∇ · (ρ (vY_i − D_i∇Y_i)) = S_(Y,sp-trsft)    (3.89)

where Y_i is the mass fraction of species i and D_i its diffusion coefficient. The mass transfer rate must be determined by the interface jump conditions so that there is chemical equilibrium between the species. In the bulk of each phase these source terms must be zero; they act only at the interface. These source terms are volumetric and, from a numerical point of view, are applied at the centre of each cell of the mesh. However, interfacial mass transfer is a phenomenon that occurs on a surface, localised at the interface; it is therefore necessary to ensure, from a numerical point of view, that the species are well conserved. In a multiphase, multicomponent system, the chemical species move to bring the system to equilibrium: the state of non-equilibrium is at the origin of mass transfer, and it is the mass transfer that pushes the system towards equilibrium. In many numerical simulations, the rate required for the system to reach equilibrium is used as the basis for formulating the rate of mass transfer across the interface. The objective of the mass transfer model is to calculate the mass transfer flow rate ṁ.

Interfacial area calculation
For the mass transfer rate to be calculated accurately, A_I must be evaluated correctly [Soh et al., 2016; [START_REF] Schlottke | Direct numerical simulation of evaporating droplets[END_REF]]. The gradient of α gives a biased representation of the interface: at the local level, |∇α| can have non-zero values in cells of the mesh where, in the continuous model, the interface is not present.
These cells are adjacent to the interface cells: |∇α| is computed using the α values of the neighbouring cells, including the interface cells, so |∇α| may be non-zero in cells where α = 0 or α = 1. These cells can generate an artificial mass transfer, and after the transfer step the calculation can result in a value of α that is negative or greater than unity. With the developments made previously on the height functions, it is possible to calculate A_I without using the gradient of the volume fraction. For a 2D simulation, by determining the coordinates of the points where the interface crosses the axes of the mesh, and knowing f_pol, the length of the interface can be determined using Alg. 3, which estimates the arc length of the curve of a function. The prerequisite is to know the intersection points of the parabola with the cell boundaries, named here arbitrarily P_a(x_a, y_a) and P_b(x_b, y_b). Meier et al. used a similar method [START_REF] Meier | A novel technique for including surface tension in PLIC-VOF methods[END_REF]. The calculation of A_I can be extended to 3D simulations [Soh et al., 2016]. Such a computational method is necessary in order to integrate a mass transfer model across the interface [START_REF] Soh | A CFD model for the coupling of multiphase, multicomponent and mass transfer physics for micro-scale simulations[END_REF].

Algorithm 3: Calculation of A_I in 2D
Result: return A_I
n_iter = 0
A_I = 0
step_x = |x_2 − x_1| / n_L
x_a = 0, x_b = x_a + step_x
while n_iter < n_L do
  y_a = f_pol(x_a)
  y_b = f_pol(x_b)
  L = sqrt((x_b − x_a)² + (y_b − y_a)²)
  A_I = A_I + L
  x_a = x_a + step_x
  x_b = x_b + step_x
  n_iter = n_iter + 1
end

Integration of the code in Fluent
This section presents how the numerical methods described above have been integrated into the Ansys Fluent code.

User Defined Function
The commercial software Ansys Fluent 2020 R2 was used to perform the numerical simulations. The model was coded and implemented using User-Defined Functions (UDFs), written in C. The code was inserted within a DEFINE_ADJUST macro, which Fluent executes just before solving the mass, momentum, volume fraction and species transport equations. The main function of the macro code is to calculate the source terms added to the conservation and transport equations. Fluent allows its users to calculate the surface tension terms with several built-in methodologies through a graphical interface, in particular the CSF method. This module was used for comparison and testing purposes, but disabled in the general case so as not to interfere with the calculations performed by the DEFINE_ADJUST macro. The code has been adapted to parallel calculation in order to reduce the computation time. One of the main difficulties of the coding was to adapt the custom-made calculations to Fluent's calculation system: Fluent offers a turnkey CFD solution, which means that the user can customise the software but is not expected to replace an entire computing system. In order to identify each cell, Fluent uses its own referencing system, whereas the calculations performed on the height functions require exact knowledge of the coordinates of each cell. This is why one of the first tasks of the DEFINE_ADJUST macro is to establish its own coordinate and referencing system, different from that of Fluent, and to establish the correspondence between the two systems when necessary. A minimal skeleton of such a macro is sketched below.
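The skeleton below is only an illustrative scaffold, assuming the standard Fluent UDF macros (DEFINE_ADJUST, thread_loop_c, begin_c_loop, C_VOF, C_UDMI); the actual thesis macro is far more elaborate and also rebuilds its own cell-referencing system.

#include "udf.h"

DEFINE_ADJUST(st_adjust, domain)
{
    Thread *t;
    cell_t  c;

    thread_loop_c(t, domain)              /* all cell threads of the domain */
    {
        begin_c_loop(c, t)
        {
            /* liquid volume fraction from the secondary-phase thread
             * (phase index 1 is an assumption of this sketch) */
            real alpha = C_VOF(c, THREAD_SUB_THREAD(t, 1));

            /* ... height-function curvature and f_gamma computed here ... */
            C_UDMI(c, t, 0) = 0.0;        /* placeholder: x-source, read back
                                             later by a DEFINE_SOURCE macro */
            (void)alpha;
        }
        end_c_loop(c, t)
    }
}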
This also explains the use of a particular mesh: the DEFINE_ADJUST code can only work with a mesh consisting of square, orthonormal cells, as shown in Fig. 26. Different mesh sizes were used for the simulations: 40 × 40, 80 × 80, 120 × 120 and 160 × 160. Increasing the mesh size increases the computation time. To make the calculation of the surface tension forces more accurate at the interface, it would be interesting to refine the mesh locally, and Fluent provides macros for local mesh refinement. However, this changes the cell referencing system, which makes it difficult to transfer the information to the DEFINE_ADJUST referencing system. From a coding point of view, the simplest solution is therefore to refine the mesh globally to obtain the desired accuracy of the surface tension force calculation.

Discretisation
The gradients of scalars are calculated as cell centroid values from the centroid values of the faces surrounding the cell; the Green-Gauss node-based method is used for this calculation. The PRESTO scheme is used for pressure interpolation. The QUICK scheme is used for the discretisation of the momentum and energy equations. The Piecewise-Linear Interface Calculation (PLIC) scheme is used for the discretisation of the volume fraction equation. When simulations are made using the CSF method, the default node-based smoothing of the volume fraction field prior to the calculation of the curvature is enabled; no smoothing of the calculated curvatures is performed. A first-order implicit scheme is used for the temporal discretisation of the transient terms. Finally, the SIMPLE algorithm is used for the pressure-velocity coupling.

Presentation of the code used in the simulations
The numerical methods presented above are the pieces of the puzzle that were used to code the DEFINE_ADJUST macro used in the simulations presented in this thesis. The diagram in Fig. 27 describes how the DEFINE_ADJUST macro is integrated into the Fluent algorithm. The source term f_γ of Eq. (3.90) is calculated at each iteration before the Navier-Stokes equations are solved.

Suitability of the VOF approach to model electrogenerated bubbles with Marangoni micro-convection flow
In order to disentangle the effects of spurious currents from Marangoni currents, the overall approach presented in this chapter is to evaluate the spurious currents when no Marangoni effect is simulated. First, the error in the curvature calculation is evaluated. Then, tests on bubbles in stagnant fluids are performed; these allow the evaluation of the spurious currents, since theoretically no current should be generated in the case of a stagnant bubble. These tests are performed for different mesh resolutions, the errors generated being dependent on the mesh resolution. The objective of this chapter is to evaluate the errors generated by the algorithm presented in the previous chapter, to compare it with the CSF methodology, and to evaluate its suitability for modelling an electrogenerated bubble with Marangoni micro-convection flow.

Curvature calculation errors
Choice of the curvature calculation method
It has been shown by Cummins et al. that, starting from an exact volume fraction field, the standard height function method estimates the curvature asymptotically with second-order accuracy [START_REF] Cummins | Estimating curvature from volume fractions[END_REF].
But, as previously mentioned, the height function method loses its effectiveness when the interface approaches the diagonal of the mesh axes, in which case a polynomial fit method is more appropriate. However, from a theoretical point of view there is no obvious way to decide when to switch from one of these methods to the other: an orientation angle θ_switch of the interface must be determined, and numerical tests are needed. A first step is to evaluate the performance of each method; then, by comparing the errors of each method, the most suitable angle θ_switch can be chosen.

Study parameters and evaluation criteria
The first parameter to take into account is the size of the mesh, so several tests were performed for different spatial resolutions. The curvature calculation tests were performed on circular interfaces, and the ratio 1/(κΔ) is used to compare the mesh size to the curvature. Another criterion to consider is the theoretical position of the interface within the cell: depending on this position, the value of the volume fraction changes, which can influence the computed curvature. To prevent the curvature calculation from being polluted by errors in the numerical calculation of the volume fraction, the curvatures are evaluated from analytically calculated, exact volume fractions, so that only the curvature calculation method is evaluated. In this study the interface is made up of circular arcs, for which the exact volume fraction can be determined from the integral of the function representing them. The circles are centred on the origin of the reference frame used to determine the coordinates of the mesh and satisfy x² + y² = R². In this trivial case, the exact curvature is the inverse of the radius of the circle. The evaluated curvature is compared with the exact curvature:

Δκ_error = (1/κ_exact) |κ − κ_exact|    (4.1)

The quantity Δκ_error is used to determine the accuracy of the numerical algorithm for calculating the curvature.

Calculation of the exact volume fraction
As shown in Fig. 29, each cell of the mesh is used as the basis of a local coordinate system whose centre, on the diagram, has the coordinates (x₀, y₀). This gives the circle equation in the local system:

(x + x₀)² + (y + y₀)² = R²    (4.2)

In order to calculate the integral of the function, the coordinates are expressed in explicit form:

y = √(R² − (x + x₀)²) − y₀    (4.3)
x = √(R² − (y + y₀)²) − x₀    (4.4)

To simplify the following calculations, only the cases where x > 0 and y > 0 are considered; the calculations and reasoning remain equivalent, to the nearest sign, in the other cases. As shown in Fig. 29, the arc of the circle intersects the y-axis at m_y and the x-axis at m_x in the local coordinate system:

m_x = √(R² − y₀²) − x₀    (4.5)
m_y = √(R² − x₀²) − y₀    (4.6)

In Fig. 29, the surface A_c, which determines the volume fraction, is the intersection of the surface of the mesh cell with the surface represented by the integral of the curve, and satisfies A_c = A₀ − A₁ − A₂. The integral under the curve, A₀, is split into three areas in order to isolate the area representing the volume fraction.
By determining A₀ and the two surfaces A₁ and A₂, the surface A_c is obtained:

A₀ = ∫ from 0 to m_x of y dx    (4.7)
A₁ = 0 if |m_x| < |Δx|, otherwise ∫ from Δx to m_x of y dx    (4.8)
A₂ = 0 if |m_y| < |Δy|, otherwise ∫ from Δy to m_y of x dy    (4.9)

In the case where |m_x| < |Δx| or |m_y| < |Δy|, the surfaces A₁ and A₂ do not exist; this is why a condition must be included in Eq. (4.8) and Eq. (4.9). From these areas, A_c can be determined, which yields the volume fraction, following the definition of Eq. (3.37):

α = (A₀ − A₁ − A₂) / (Δx Δy) = A_c / (Δx Δy)    (4.10)
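A C sketch of this analytic computation is given below for a simplified configuration: the cell [x₁, x₂] × [y₁, y₂] is expressed directly in the frame of the circle centre, in the first quadrant, with the interface given by y = √(R² − x²); the function names are hypothetical. The antiderivative of √(R² − x²) replaces the piecewise integrals (4.7)-(4.9).

#include <math.h>

/* antiderivative of sqrt(R^2 - x^2), valid for |x| <= R */
static double F(double x, double R)
{
    return 0.5 * (x * sqrt(R * R - x * x) + R * R * asin(x / R));
}

static double clampd(double v, double lo, double hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* exact volume fraction of the cell cut by the circle x^2 + y^2 = R^2 */
double exact_alpha(double x1, double x2, double y1, double y2, double R)
{
    if (x1 >= R) return 0.0;                     /* cell beyond the disc */

    /* abscissae where the arc crosses the top (y2) and bottom (y1) edges */
    double xa = clampd(sqrt(fmax(R * R - y2 * y2, 0.0)), x1, x2);
    double xb = clampd(sqrt(fmax(R * R - y1 * y1, 0.0)), x1, x2);

    double area = (xa - x1) * (y2 - y1)          /* fully wet strip      */
                + (F(xb, R) - F(xa, R))          /* strip under the arc  */
                - (xb - xa) * y1;                /* remove part below y1 */
    return area / ((x2 - x1) * (y2 - y1));
}

The clamping reproduces the conditions of Eqs. (4.8)-(4.9): when the arc does not reach an edge of the cell, the corresponding partial area simply vanishes.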
Determination of θ_switch
Beyond a certain angle of inclination θ, the height function method becomes less efficient, while the polynomial fit method is more expensive to compute, since a system of equations must be solved for each cell of the mesh. Both the computational cost and the precision must be considered, and an angle defining when to switch from one methodology to the other must be determined. A suitable angle value, ensuring that the results will not diverge, has to be found: the choice of this angle is more a question of safety than of optimization. Curvatures are evaluated for interfaces whose tangent is inclined at an angle θ of 0 to 45° with respect to the horizontal axis of the mesh. The usual height function method, as mentioned before, presents good results for interfaces whose inclination is close to the axes of the mesh. However, beyond a certain inclination the results diverge from the real value of the curvature, as shown in Fig. 30: as the inclination of the interface increases, the error increases, and the alternative polynomial fit method is to be considered. In view of the results obtained, the transition from one method to the other is to be made for an interface tilt around θ_switch = 22.5°. The main difficulty of the polynomial fit method is to choose the interface points to use. When θ is smaller than θ_switch, the height function method is used; the transition to the polynomial fit method is made for θ greater than θ_switch. With regard to the results presented in Fig. 31, for resolutions where the radius of the circular interface has a length equivalent to 5 cell widths, the two techniques give inconsistent results: the points approximating the position of the interface are not close enough to estimate the polynomial parameters correctly. On the other hand, when the spatial resolution becomes finer, the fitting method gives better results for θ greater than θ_switch. These curvature tests allowed a suitable value of θ_switch to be established. The centre of the circular interface was also moved to test the robustness of the methodology; in general this has no influence on the results. However, in cases where the part of the interface present in the cell is too small, the error on the curvature diverges. The weighted-average calculation of the curvatures over the adjacent cells was therefore preferred, which gives much better results; the electrogenerated bubbles having almost circular interfaces, the choice of an average seems coherent. Generally speaking, for resolutions for which the interface radius is equivalent to 15 cell widths, calculation errors of less than 0.3% are obtained. This preliminary test gives confidence in the curvature calculation methodology.

Static bubble test case
Pressure jump at the interface
The analytical solution for the simulation of a stationary bubble in a zero velocity field is known, and the analytical curvature is easily obtained from the bubble radius. A circular interface with surface tension should remain at rest, with the pressure jump at the interface exactly balancing the surface tension force (Laplace's law). The velocity field being zero, Eq. (3.28) reduces to:

−∇ · (pI) = 0    (4.11)

In this test case the pressure is constant within each fluid. The jump relation Eq. (3.31), which is applicable only at the interface, reduces to the mathematically exact formulation:

[pI] · n_I + γκn_I = 0    (4.12)

This brings us back to the relation of Laplace: in each phase the pressure is constant and a pressure jump occurs at the interface. As shown in Fig. 32, the pressure jump created by the CSF model at the interface is smeared over several cells, which deviates from the real conditions, while the height function methodology gives a better approximation.

Time to reach equilibrium
In practice, depending on the method used to discretize the pressure gradient and the surface tension force, parasitic currents appear; the exact numerical balance is difficult to obtain [START_REF] Abu-Al-Saud | A conservative and well-balanced surface tension model[END_REF], and the numerical imbalance created is at the origin of these currents. Similarly to what was done by Popinet, it is appropriate to first test the model by imposing the exact curvature over the entire domain for the calculation of f_γ [START_REF] Popinet | An accurate adaptive solver for surface-tension-driven interfacial flows[END_REF]. This tests the adequacy of the model while excluding the curvature calculation, and thus verifies that the balance between the pressure term and the surface tension term is indeed achieved. The time required for the momentum to diffuse over a distance L is proportional to t_ν, where

t_ν ∝ L² / ν    (4.13)

and ν is the kinematic viscosity of the liquid. As noticed by Popinet, the time scale needed to reach the numerical equilibrium solution is comparable with the time scale of viscous dissipation t_ν, as expected from physical considerations [START_REF] Popinet | An accurate adaptive solver for surface-tension-driven interfacial flows[END_REF]. In practice, this means that test cases designed to evaluate the accuracy of a given surface tension model (for a stagnant-bubble-type problem) must be run for time scales comparable with t_ν; in our case t_ν is close to 2 ms. The other quantity to consider is the velocity associated with the capillary wave, u_γ:

u_γ ∝ √(γ / (ρL))    (4.14)

It can be interpreted as the scale of the velocities associated with a capillary wave of wavelength comparable with L. As shown in Fig. 33, the average velocity obtained decays over a time equivalent to the viscous dissipation time; the velocity and time have been scaled using u_γ and t_ν. Thus, the numerical calculation verifies the theoretical equilibrium, and the spurious currents observed in the following can be attributed to errors in the curvature calculation.
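For reference, the two scales reduce to one-liners (a sketch; names hypothetical):

#include <math.h>

double viscous_time(double L, double nu)          /* t_nu ~ L^2 / nu, (4.13) */
{
    return L * L / nu;
}

double capillary_velocity(double gamma, double rho, double L)
{
    return sqrt(gamma / (rho * L));               /* u_gamma, Eq. (4.14) */
}

With ν = µ_l/ρ_l ≈ 1.2 × 10⁻⁶ m²·s⁻¹ from the fluid properties used in this work, a length scale of about 50 µm gives t_ν ≈ 2 ms, consistent with the value quoted above.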
Errors caused by spurious currents
In this second test the curvature is calculated by the model. In order to evaluate the impact that spurious currents could have on a simulation with Marangoni effect, the maximum speed of the spurious currents, u_max,spurious currents, is scaled by an average Marangoni speed, u_average,Marangoni. Several simulations were performed for different mesh resolutions for each of the two tested models: CSF, and HF, the algorithm based on height functions presented in section 3.9. The simulations were carried out for a time equivalent to the experimentally observed growth time of a bubble. As shown in Fig. 34 and Fig. 35, for the CSF method the spurious currents increase when Δx decreases, which is consistent with the analysis presented by Harvie et al. [START_REF] Harvie | An analysis of parasitic current generation in volume of fluid simulations[END_REF]. The CSF method is therefore clearly not suitable for the simulation that is the objective of this study, which validates the use of a more efficient interface representation method. The error generated using HF decreases with the grid spacing. The results obtained in this section show that the method used is well balanced and estimates the curvature accurately enough to obtain a solution close to the exact equilibrium (for the velocity): the numerical equilibrium obtained is very close to the theoretical one. Even for coarse resolutions, the error induced on the final simulation is less than 1%.

Interfacial area error calculation
In order to evaluate the accuracy of the calculation of the interfacial area in each cell of the mesh, several simulations were conducted, and the calculated value is compared to the exact value A_I,exact. The static bubble test case described above was used: the interface being circular, the exact interfacial area in each cell can be calculated analytically. As the position of the interface within a cell can influence the computed value, the values of all cells crossed by the interface were averaged, as described by the following equation:

E(A_I) = Σ_N |A_I,exact − A_I| / N    (4.16)

where N is the number of cells for which the calculation was performed. The results are shown in Fig. 36. The calculations were performed as a function of the parameter n_L, which determines the accuracy of Alg. 3, and of the ratio R/Δx, which determines the fineness of the mesh. The graph in Fig. 37 shows the maximum error found, as described by the following equation:

E_max(A_I) = max |A_I,exact − A_I|    (4.17)

The error decreases with a finer mesh and with an increasing value of the parameter n_L.

Surface gradient error calculation
Next, the accuracy of the surface tension gradient calculation of Eq. (3.79) is tested. The static bubble is subjected to different temperature gradients over a given distance, as shown in Fig. 38. The objective is to expose the interface of the bubble to variations in surface tension similar to what it might encounter as it grows: the bubble is exposed to surface tension gradients ranging from 0.1 N·m⁻² to 50 N·m⁻². As the interface is circular, the exact value of the surface tension gradient can be calculated: for each cell crossed by the interface, the length of the interface is known, and the temperature difference can be calculated every two cells. For each simulation, the surface tension gradient is averaged along the interface in order to compensate for uncertainties concerning the influence of the position of the interface within the cell. The relative error is calculated according to the following equation:

E(∇_s γ) = Σ_N (|∇_s γ_exact − ∇_s γ| / |∇_s γ_exact|) / N    (4.18)

where N is the number of cells used for the calculation.
Surface gradient error calculation

Next, the accuracy of the surface tension gradient calculation, Eq. (3.79), should be tested. The static bubble is subjected to different temperature gradients over a given distance, as shown in Fig. 38. The objective is to expose the interface of the bubble to variations in surface tension similar to those it might encounter as it grows; the bubble is exposed to surface tension gradients ranging from 0.1 N·m⁻² to 50 N·m⁻². As the interface is circular, the exact value of the surface tension gradient can be calculated. For each cell crossed by the interface the length of the interface is known, and every two cells the temperature difference can be calculated. For each simulation the surface tension gradient is averaged along the interface in order to compensate for uncertainties concerning the influence of the position of the interface within the cell. The relative error is calculated according to the following equation:

E(∇_s γ) = ( Σ_N |∇_s γ_exact − ∇_s γ| / |∇_s γ_exact| ) / N (4.18)

where N is the number of cells used for the calculation. The maximum errors found are also recorded, calculated using the equation:

E_max(∇_s γ) = max ( |∇_s γ_exact − ∇_s γ| / |∇_s γ_exact| ) (4.19)

The errors found are plotted in Fig. 39. As shown in the graph, the calculated errors are not very sensitive to the value of the surface tension gradient, but they decrease rapidly as the mesh is refined. The maximum errors are mainly due to cells where the interface fraction is small compared to the total cell volume.

Translating bubble test case

While the case of a stagnant bubble allows us to test the equilibrium of the model against an exact solution of the velocity field, it does not allow us to evaluate the combined accuracy of the interface advection and surface tension model. As proposed by Popinet, the horizontal translation of a bubble carried by a uniform flow field is a more robust and realistic test [START_REF] Popinet | An accurate adaptive solver for surface-tension-driven interfacial flows[END_REF]. In the case of electrogenerated bubbles, as the bubble grows the interface translates at a vertical speed of a few millimeters per second. This is a preliminary test before testing mass transfer models across the interface. A uniform horizontal velocity u₀ is imposed in the whole domain, with periodic boundary conditions on the lateral sides and symmetry boundary conditions on the top and bottom. As already reported by Popinet, the absolute error on the velocity does not depend on u₀ and is weakly dependent on the Laplace number La = ργL/µ². It is thus the transport scheme of the interface that is directly tested, and therefore the impact of the spatial resolution. In our study the Laplace number varies between 5,000 and 20,000. A new time scale is introduced to account for the time needed for the bubble to cross a length L:

t_u₀ ∝ L/u₀ (4.20)

The velocity has been scaled with u_M and the time with t_u₀. The Laplace number was fixed at 12,000. The results presented here as an example reflect a general trend in the evolution of the spurious velocity over time, as observed by Abadie et al. [START_REF] Abadie | On the combined effects of surface tension force calculation and interface advection on spurious currents within Volume of Fluid and Level Set frameworks[END_REF]. Popinet notes that these oscillations are proportional to u₀/∆x; the advection errors of the models continuously disturb the calculations related to the interface [START_REF] Popinet | An accurate adaptive solver for surface-tension-driven interfacial flows[END_REF]. As shown in Fig. 40, the model behaves similarly to the previous studies [START_REF] Popinet | An accurate adaptive solver for surface-tension-driven interfacial flows[END_REF][START_REF] Abadie | On the combined effects of surface tension force calculation and interface advection on spurious currents within Volume of Fluid and Level Set frameworks[END_REF]. A drastic drop in accuracy compared with the static case can be noticed: even if the height function method allows accurate curvature calculations, the weakness of the method comes essentially from the advection scheme. The previous simulation was repeated for different mesh resolutions. The dimensionless quantity used on the abscissa is R/∆x. As in the static case, the method converges when the mesh is refined. We obtain an error of 2.5% for R/∆x = 70.
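For orientation, the scales of this test can be estimated numerically. The sketch below assumes the electrolyte properties used elsewhere in this chapter and a translation velocity of a few millimetres per second, as quoted above; the length scale is back-computed from the fixed Laplace number, so both u₀ and the resulting L are assumptions of this illustration.

```python
rho, mu, gamma = 1000.0, 1.2e-3, 0.075    # assumed electrolyte properties
La = 12000.0                               # Laplace number fixed in the text

L = La * mu**2 / (rho * gamma)             # length scale implied by La = rho*gamma*L/mu^2
u0 = 5e-3                                  # imposed translation velocity [m/s] (assumed)
t_u0 = L / u0                              # crossing time scale, Eq. (4.20)

print(f"L ~ {L*1e6:.0f} um, t_u0 ~ {t_u0*1e3:.0f} ms")   # ~230 um, ~46 ms
```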
Conclusion of the chapter

As shown in the example in Fig. 41, the use of the height function method greatly reduces the error due to spurious currents. The objective of this simulation is to represent visually the impact that spurious currents could have on a solutal Marangoni type effect on an electrogenerated bubble attached to an electrode. The interface is fixed, the contact line model is not implemented, and so the bubble is not in contact with any edge. To replicate what might exist for an electrode-generated bubble, a gradient of dissolved species is initiated along the interface, where the Marangoni effect takes place, as shown in Fig. 42. The gradients of scalars are calculated as cell centroid values from the centroid values of the faces surrounding the cell. The Green-Gauss node-based method is used for this calculation. The PRESTO scheme is used for pressure interpolation. The QUICK scheme is used for the discretisation of the momentum and energy equations. The Piecewise-Linear Interface Calculation (PLIC) scheme is used for the discretisation of the volume fraction equation. When simulations are made using the CSF method, the default node-based smoothing of the volume fraction field prior to calculation of the curvature is enabled; no smoothing of the calculated curvatures is performed. A first order implicit scheme is used for the temporal discretisation of the transient terms. Finally, for the pressure-velocity coupling, the SIMPLE algorithm is used. Globally scaled residuals are used and the residual targets for all the equations are set to 1 × 10⁻⁶. The mesh consists of square and orthonormal cells; the mesh with the ratio R/∆x = 20 is shown in Fig. 42. The properties of the electrolyte and the gas considered (H₂) are listed below:

ρ_l [kg·m⁻³] | ρ_g [kg·m⁻³] | µ_l [kg·m⁻¹·s⁻¹] | µ_g [kg·m⁻¹·s⁻¹]
1000 | 0.0899 | 1.2 × 10⁻³ | 8.79 × 10⁻⁶

The surface tension varies with the concentration of dissolved gases [START_REF] Lubetkin | The motion of electrolytic gas bubbles near electrodes[END_REF]. To be consistent with the experiments performed on microelectrodes, the initial surface tension was set at 0.075 N·m⁻¹ [START_REF] Glas | Measurements of the growth of electrolytic bubbles[END_REF][START_REF] Liu | Numerical simulation of hydrogen bubble growth at an electrode surface[END_REF]. The work of [START_REF] Massoudi | Effect of pressure on the surface tension of water. Adsorption of low molecular weight gases on water at 25.deg[END_REF] established relationships between the variation in partial pressure of gas and surface tension. At low pressures, the concentration of dissolved hydrogen and the partial hydrogen pressure can be related through Henry's law:

c_H₂ = p/K_H (4.21)

where the constant of proportionality K_H depends on temperature and pressure, but in our case can be estimated as [START_REF] Wiebe | The Solubility of Hydrogen in Water at 0, 50, 75 and 100°from 25 to 1000 Atmospheres[END_REF][START_REF] Sander | Compilation of Henry's law constants (version 4.0) for water as solvent[END_REF]:

K_H = 7.8 × 10⁻⁶ mol·m⁻³·Pa⁻¹ (4.22)

As a result, the variation of surface tension as a function of concentration can be obtained:

∂γ/∂c_H₂ = (1/K_H) ∂γ/∂p = −3.2 × 10⁻⁵ N·m³·m⁻¹·mol⁻¹ (4.23)

A concentration gradient was initially applied along the bubble interface, as shown in Fig. 42. From this concentration gradient results a surface tension gradient along the bubble interface, which generates a Marangoni current as shown in Fig. 41.
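The chain rule behind Eq. (4.23) can be checked numerically. In the sketch below, the value of ∂γ/∂p is back-computed from the result quoted in Eq. (4.23), and is therefore an assumption of this illustration rather than a measured input.

```python
# Recovering Eq. (4.23) from Henry's law, Eqs. (4.21)-(4.22).
K_H = 7.8e-6            # Henry constant [mol/(m^3 Pa)], Eq. (4.22)
dgamma_dp = -2.5e-10    # d(gamma)/dp [N/(m Pa)], implied by the value in Eq. (4.23)

dgamma_dc = dgamma_dp / K_H   # chain rule through c = p / K_H
print(f"dgamma/dc ~ {dgamma_dc:.2e} N m^3 / (m mol)")   # ~ -3.2e-5, as in Eq. (4.23)
```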
This artificial situation is only intended to illustrate visually the impact of spurious currents on a simulation, and the improvement brought by modelling the surface tension with height functions as described in the previous chapters. In the image on the right, obtained with the CSF method, spurious currents appear along the interface and disrupt the Marangoni currents. This is not the case in the image on the left, obtained with the height function method. In order to disentangle the effects of spurious currents from Marangoni currents, the overall approach of this chapter has been to evaluate spurious currents when no Marangoni effect is simulated, as described earlier for the static bubble. This last simulation of the chapter illustrates the error that spurious currents would have introduced into the calculation of Marangoni currents if a generic methodology such as CSF had been used. Overall, the surface tension variation itself does not create spurious currents provided the curvature calculation is correct in the first place.

1 The Marangoni effect as an alternative to diffusive transport

1.1 Alternative to the concept of diffusion in the Nernst layer

The Marangoni effect is a transient phenomenon, but in the case of electrogenerated bubbles it is sustained by the Joule effect and the production of gas at the electrode. The observation of vortices by Yang et al. is most likely due to the Marangoni effect, as hypothesised by Lubetkin [START_REF] Lubetkin | The motion of electrolytic gas bubbles near electrodes[END_REF][START_REF] Yang | Marangoni convection at electrogenerated hydrogen bubbles[END_REF]. This calls into question the hypothesis of transport by pure diffusion of the species produced at the electrode towards the bubble interface. The microscopic chaotic motions that take place in the viscous sublayer adjacent to the surface of an electrode have long been regarded as diffusive motion. Moreover, the bubble growth laws assuming pure diffusive motion give a relatively good approximation of what is observed. The work of Amatore et al. sheds light on this interpretation [START_REF] Amatore | The real meaning of Nernst's steady diffusion layer concept under non-forced hydrodynamic conditions. A simple model based on Levich's seminal view of convection[END_REF]. In particular, they introduce a diffusion coefficient D_conv that depends on the species flux and the convective movement over a given distance:

D_conv = v⟨∆⟩ (5.1)

where ⟨∆⟩ is the length over which the dissolved species travel during a time interval, and v is the average velocity of the chaotic motion that develops over this length. The difference with a classical diffusion coefficient obeying the Nernst relationships is that this diffusion coefficient is spatially dependent. On the basis of this analysis, and without the possibility of observing the currents in the vicinity of the bubble, it is understandable that chaotic movements that were in fact generated by a Marangoni effect could have been interpreted in the past as a diffusive phenomenon. Yang et al. observed vortex flows around the bubble, as shown in Fig. 6, and the only possible interpretation is the presence of a stress on the interface caused by a variation in the surface tension [START_REF] Yang | Marangoni convection at electrogenerated hydrogen bubbles[END_REF]. The presence of a Marangoni effect around electrogenerated bubbles is still a recent discovery.
While the vortex currents resulting from the Marangoni effect can be visualised, it is not yet clear how they affect the growth and detachment of a bubble on a conventional electrode. There is also no consensus on their origins.

Existence of interfacial gradients

The Marangoni effect is due to a variation of the surface tension. This variation, as a function of temperature and of the concentration of different surfactants or pollutants, is essential to understand the formation of electrogenerated bubbles [START_REF] Lubetkin | The motion of electrolytic gas bubbles near electrodes[END_REF][START_REF] Massoudi | Effect of pressure on the surface tension of water. Adsorption of low molecular weight gases on water at 25.deg[END_REF][START_REF] Weissenborn | Surface Tension of Aqueous Solutions of Electrolytes: Relationship with Ion Hydration, Oxygen Solubility, and Bubble Coalescence[END_REF][START_REF] Zhang | Evaluating the Behavior of Electrolytic Gas Bubbles and Their Effect on the Cell Voltage in Alkaline Water Electrolysis[END_REF]. The interfacial gradients of these physical properties cause the surface tension to vary along the interface. Due to the resulting unbalanced forces, fluid elements at the interface experience a net shear stress and move towards regions of the interface where the interfacial tension is higher. The assumptions made about the strength of the Marangoni effects depend directly on the knowledge of the gradients of the quantities involved. The important physical prerequisite is the presence of a sufficiently long-lasting gradient to allow observation of the resulting motion. If the presence of these gradients at the interface is proven, their magnitudes remain a model hypothesis. The convection generated by these gradients ends very quickly, as it promotes rapid equilibration of the temperature or concentration distribution along the interface. To sustain convection for a long time, a mechanism must be provided to maintain the surface tension gradient. In the case of the thermocapillary effect, the Joule effect maintains the phenomenon. A distinction must then be made between the solutal capillary effect caused by pollutants or surfactants and that caused by dissolved gas species. In the first case, the surfactants present in the electrolyte only modify the value of the surface tension of the interface when they are adsorbed; there is no source of surfactants that could maintain the phenomenon over the long term. In the second case, dissolved gas species are generated at the electrode and then absorbed by the bubble as it grows; there is therefore a driving force that could sustain the Marangoni effect over the long term. To understand how temperature or concentration gradients occur, it is worth examining the couplings that take place, as they highlight the complexity of the competing phenomena and their influence on the Marangoni effect. The couplings between the model equations are shown in Fig. 43, and their implications are explained in section 3. Before commenting on this diagram, and as a prerequisite, the heat equation and the transport equation for dissolved species must be stated.

2 Heat, transport, and surfactant models

Heat equation

As the surface tension is a function of the temperature, in order to study this variation it is necessary to include in the model the heat equation Eq. (5.2), as well as the Laplace equation Eq. (5.3), which is used to calculate the Joule effect Eq. (5.5) appearing as the source term S_T of the heat equation:
ρC_p (∂T/∂t + v·∇T) = ∇·(λ∇T) + S_T (5.2)

∇²Φ = 0 (5.3)

j = λ_elec ∇Φ (5.4)

S_T = j·∇Φ = λ_elec |∇Φ|² (5.5)

where j is the current density in A·m⁻². As shown in Fig. 43, as the temperature varies at the interface, the value of the surface tension is modified. This generates the Marangoni effect. The velocity at the interface is in turn modified, which changes the temperature profile around the bubble. The surface tension gradient enters the momentum jump relation:

⟦ṁ v + p𝐈·n_I − 2µD·n_I⟧ = γκ n_I + ∇_s γ(T, c) (5.6)

The Marangoni effect will therefore depend on the Joule effect. The conductivity of the electrolyte and the size and shape of the electrode will therefore have a direct influence on the fluid currents around the bubble. It is worth recalling here the study by Hossain et al. on the influence of electrode size on the thermo-Marangoni effect [Hossain et al., 2020]; Fig. 44 summarises a key point of this study. The temperature gradient along the interface depends on a hot spot created by the Joule effect. The more the electric current lines tighten around the bubble, the greater the Joule effect, and the profile of the electric current lines depends on the size of the electrode. For microelectrodes the hot spot is at the foot of the bubble; for larger electrodes the point where the current lines narrow is near its equator. Around this hot spot two vortices form. The hot spot reduces the surface tension, and the fluid along the interface moves from areas of low surface tension to areas of high surface tension. The results obtained by Hossain et al. exclude other sources of the Marangoni effect [Hossain et al., 2020]. Although these results do not capture the complexity of the real phenomenon, they clarify the influence of the Joule effect on the Marangoni effect, and put into perspective the experimental results obtained from a microelectrode compared to those obtained from a larger electrode.

Transport of dissolved species

In order to evaluate the interfacial mass transfer it is necessary to know the concentration near the interface. To do this, the transport equation of dissolved species must be introduced:

∂c/∂t + v·∇c = ∇·(D∇c) + S_electrode + S_interface (5.7)

Two source terms are taken into account in this equation: the production of dissolved species at the electrode, S_electrode, and their absorption at the interface, S_interface. The source term at the electrode can be calculated by the Faraday equation:

ṁ_electrode = |j| M / (F v) (5.8)

where ṁ_electrode [kg·m⁻²·s⁻¹] is the production rate of dissolved species at the electrode. The second source term depends on ṁ, which appears in the jump relation established from the conservation of mass at the interface:

ṁ = ρ_g (v_g − v_I)·n_I = ρ_l (v_l − v_I)·n_I (5.9)

This transfer rate depends on the assumptions made about interfacial mass transfer and on the concentration of dissolved species closest to the interface. As the bubble grows, the interface is supersaturated on average; it is the difference between the interface concentration and the saturation concentration of the dissolved gas that drives this growth. This changes the transport equation, which in turn changes the value of the surface tension. As with the thermocapillary effect, this variation in surface tension creates a Marangoni effect which modulates the flow and thus the transport of dissolved species.
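As an illustration of Eq. (5.8), the sketch below evaluates the production rate of dissolved hydrogen for an assumed current density; only the current density is a free assumption here, the other constants are standard.

```python
# Production rate of dissolved H2 at the electrode, Eq. (5.8).
j = 1.0e3        # current density [A/m^2] (assumed, for illustration)
M = 2.0e-3       # molar mass of H2 [kg/mol]
F = 96485.0      # Faraday constant [C/mol]
v = 2            # electrons exchanged per H2 molecule

mdot_electrode = abs(j) * M / (F * v)   # [kg/(m^2 s)]
print(f"mdot_electrode ~ {mdot_electrode:.2e} kg/(m^2 s)")   # ~1.0e-5
```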
Presence of contaminants

The presence of surfactants or pollutants in the electrolyte is a very likely hypothesis, and their impact on the surface tension cannot be neglected. Levich et al. show that the adsorption of an insoluble surfactant stiffens the interface [START_REF] Levich | Surface-Tension-Driven Phenomena[END_REF]. Such a rigid interface is unable to transfer tangential stress, which blocks the effects of the surface tension gradient at the interface. The surfactants move along the interface and clump together to form this stiffened area. A first modelling approach is to consider two zones on the interface: one stiffened by the surfactants and a mobile one where other sources of surface tension variation are not inhibited by the surfactant influence. The conclusions are limited to the surfactant distribution for a fixed bubble size whose diameter corresponds to a given stage in the bubble cycle. The proportion of the surface covered by surfactants is unknown and can only be estimated a posteriori by comparing the modelling results with the experimental results. As shown in Fig. 45, in the case of a hydrogen bubble growing on the surface of a microelectrode, Meulenbroek et al. assume that the surface stiffened by the surfactants covers the top of the bubble and forms a cap [START_REF] Meulenbroek | Competing Marangoni effects form a stagnant cap on the interface of a hydrogen bubble attached to a microelectrode[END_REF]. They define a stagnation angle θ_S to give the position of the surfactant stagnation point on the interface, below which the interface is free of surfactant. Using the contact angle θ_c, the area over which surface tension changes are not inhibited can be calculated.

3 Couplings

Model uncertainties

The Marangoni effect is a phenomenon generated at the interface that has an influence on what happens in the bulk. Including it in the analysis of the development of electrogenerated bubbles profoundly changes the way in which bubble behaviour should be interpreted. In view of what has been written above, the diagram in Fig. 43 contains all the elements for a holistic direct numerical simulation: four equations must be solved in the electrolyte, and two jump relations at the interface. To this must be added a model to take into account the presence of surfactants on the interface. In the case of electrogenerated bubbles the surface tension can be affected by the temperature, the concentration of any surfactants, and the concentration of dissolved gases. For the sake of completeness, the electrocapillary effect should be added. The electrocapillary effect is a phenomenon that occurs at the gas-liquid interface due to the presence of a surface charge, and it can cause currents [START_REF] Johnson | Electrocapillary Flows[END_REF]. However, compared to the thermocapillary and solutocapillary effects, the electrocapillary effect is less well understood. It should also be added that no indication of the presence of surfactants within the electrolyte can be measured; it remains an unknown which can only be deduced a posteriori from the experimental results [START_REF] Sellier | Unraveling surfactant transport on a thin liquid film[END_REF]. This leaves temperature and dissolved gas concentration, for which the uncertainties in the governing equations are smaller: the quantities involved in these equations are known. However, as previously mentioned, solutions for tracking the Marangoni effect on a free interface are not common. Numerical studies of the Marangoni effect on an electrogenerated bubble assume that the interface is fixed, i.e. that the growth of the bubble is negligible over the time interval of study considered [START_REF] Massing | Thermocapillary convection during hydrogen evolution at microelectrodes[END_REF][START_REF] Hosokawa | Evaluation of adsorption of surfactant at a moving interface of a single spherical drop[END_REF][START_REF] Meulenbroek | Competing Marangoni effects form a stagnant cap on the interface of a hydrogen bubble attached to a microelectrode[END_REF]. These studies examine the bubble at one stage of its growth and aim to reproduce the velocity and temperature fields observed experimentally. In this context, it is legitimate to question the initial conditions used. The argument developed by the above authors is that the temperature and velocity fields evolve rapidly enough for the steady solution of a numerical simulation assuming a fixed interface to account for the development of an electrogenerated bubble on a microelectrode in a real situation.
The assumption made here is that the parameter that can make the numerical system evolve is the growth of the bubble. By differentiating the bubble radius-time relationship for reaction-rate-limited bubble growth, Eq. (2.50), it can be shown that at long times, towards the end of the bubble's development, the growth rate becomes less and less important:

dR/dt = β / (3t^(2/3)) (5.11)

Using the value β = 360 found experimentally by Massing et al., and for a bubble radius of R = 560 µm, a characteristic bubble growth time in the experiment can be expressed:

t_(growth, R=560 µm) = R / (dR/dt) ≈ 6.7 s (5.12)

By comparing this time with the recirculation time within the vortex of 0.12 s, it can be concluded that the flow field develops rapidly relative to the bubble growth rate. In other words, it is possible to neglect the bubble growth rate by using a fixed bubble size. However, this assumption is only valid at the end of the bubble's growth: in the first few moments, for example for a bubble radius of 50 µm, we obtain a characteristic bubble growth time of about 1 s, and the growth rate of the bubble is no longer negligible.

Mass transfer and surface tension variation

Concerning the transport of dissolved species, it is appropriate to examine the corresponding Peclet number:

Pe = uR/D_H₂ ≈ 758 (5.13)

Microconvection movements clearly dominate diffusive phenomena in the case of an electrogenerated bubble on a microelectrode. One of the objectives of a model based on the diagram in Fig. 43 is to determine the growth rate of the bubble. This growth rate depends on the interfacial mass transfer, which depends on the transport of dissolved species in the electrolyte, which is in turn driven by microconvection currents. To perform a direct simulation it is necessary to associate the interfacial mass transfer with the microconvective currents in the vicinity of the bubble, i.e. with the Marangoni effect. As previously discussed, from a numerical point of view it is necessary to combine mass transfer and surface tension variation at the interface, and more particularly at the contact line between the solid, liquid and gaseous phases, where the problem of a moving contact line must be taken into account. At the level of the contact line, deformations are important and it is probable that the hypothesis of a spherical interface no longer holds. This deformation could influence the velocity profile and the pressure distribution. In addition, it has been observed that a mat of microbubbles forms under the detaching large bubble, which could also significantly influence the detachment and the force balance [START_REF] Bashkatov | Oscillating Hydrogen Bubbles at Pt Microelectrodes[END_REF]. Solving these problems, in the form of simulations with a growing and deformable bubble interface, is a crucial step in understanding bubble detachment.

Competing Marangoni effect

The other emerging issue is the competition between the different Marangoni effects. Lubetkin assumed that the magnitude of the solutal Marangoni force is greater than that of the thermal one [START_REF] Lubetkin | The motion of electrolytic gas bubbles near electrodes[END_REF]. A first approach to assess the competition between the different effects is to estimate and compare the different surface tension gradients ∇_s γ along the interface according to their source. The surface tension gradient of the thermocapillary effect can be approximated by:

(∇_s γ)_T ≈ (∂γ/∂T)(∂T/∂z) (5.14)

As shown in Tab. 5.2, the value of ∂γ/∂T varies according to the references considered [START_REF] Young | The motion of bubbles in a vertical temperature gradient[END_REF][START_REF] Hardy | The motion of bubbles in a vertical temperature gradient[END_REF][START_REF] Prigogine | Statistical mechanics of surface tension and adsorption[END_REF][START_REF] Morick | Migration of Air Bubbles in Silicone Oil under the Action of Buoyancy and Thermocapillarity[END_REF][START_REF] Vazquez | Surface Tension of Alcohol Water + Water from 20 to 50 .degree.C[END_REF]:

Tab. 5.2: Comparison of some key references for ∂γ/∂T [N·K⁻¹·m⁻¹]
Low: 5.5 × 10⁻⁵ | Typical: 6.5 × 10⁻⁵ | High: 1.6 × 10⁻⁴

The low and typical values were reported as such by Lubetkin in his review [START_REF] Lubetkin | The motion of electrolytic gas bubbles near electrodes[END_REF]. The high value comes from the work of Vazquez et al. [START_REF] Vazquez | Surface Tension of Alcohol Water + Water from 20 to 50 .degree.C[END_REF]; it was cited by Yang et al. and then adopted by other authors [Yang et al., 2018; Massing et al., 2019; Hossain et al., 2020; Meulenbroek et al., 2021]. The temperature gradient will depend on the size of the electrode (micro or conventional), the bubble coverage, and the current density. The observations of Yang et al. allow ∆T ≈ 10 K to be estimated for a microelectrode [Yang et al., 2018]. In their work Hossain et al. obtained similar results for a microelectrode. By carrying out a numerical simulation for a conventional electrode, the authors obtain ∆T ≈ 1.5 K for Θ = 0.87 and ∆T ≈ 0.5 K for Θ = 0.31: the greater the bubble coverage, the greater the Joule effect. The temperature difference thus decreases by roughly a factor of 10 between a microelectrode and a conventional electrode. By taking into account the length of interface over which this temperature difference is observed, an approximation of the temperature gradient, and hence of the surface tension gradient, can be obtained:

(∇_s γ)_T [N·m⁻²]: Microelectrode, Low 6, High 22 | Regular electrode, Low 0.1, High 0.4
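An order-of-magnitude check of Eq. (5.14) can be made with these numbers. The sketch below assumes that the ∆T ≈ 10 K estimated for a microelectrode acts over an interface length of roughly 100 µm near the hot spot (an assumed length scale); the result falls within the 6-22 N·m⁻² range tabulated above.

```python
# Order-of-magnitude estimate of the thermocapillary stress, Eq. (5.14).
dgamma_dT = (5.5e-5, 1.6e-4)   # low/high values of |dgamma/dT| [N/(K m)], Tab. 5.2
dT = 10.0                      # microelectrode temperature difference [K]
L_hot = 1.0e-4                 # interface length near the hot spot [m] (assumed)

for val in dgamma_dT:
    grad_gamma_T = val * dT / L_hot
    print(f"(grad_s gamma)_T ~ {grad_gamma_T:.1f} N/m^2")   # ~5.5 and ~16
```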
The surface tension gradient of the solutocapillary effect of dissolved gas can be approximated by:

(∇_s γ)_c ≈ (∂γ/∂c_H₂)(∂c/∂z), with ∂γ/∂c_H₂ = (1/K_H)(∂γ/∂p) = −3.2 × 10⁻⁵ N·m³·m⁻¹·mol⁻¹ (5.17)

As shown in Tab. 5.4, the value of ∂c/∂z varies according to the references considered [START_REF] Westerheide | Isothermal growth of hydrogen bubbles during electrolysis[END_REF][START_REF] Shibata | The Concentration of Molecular Hydrogen on the Platinum Cathode[END_REF][START_REF] Glas | Measurements of the growth of electrolytic bubbles[END_REF][START_REF] Sides | A Close View of Gas Evolution from the Back Side of a Transparent Electrode[END_REF]:

Tab. 5.4: Reported values of ∂c/∂z [mol·m⁻⁴]
Low: 4 × 10³ | Typical: 4 × 10⁵ | High: 2 × 10⁷

The surface tension gradient can then be calculated:

(∇_s γ)_c [N·m⁻²]: Low 0.1 | Typical 14 | High 500

In view of the values obtained for the surface gradients, it is difficult to say which capillary effect dominates: the ratio (∇_s γ)_c / (∇_s γ)_T can vary from 0.5 × 10⁻³ to 5 × 10³. The dominant effect will be determined essentially by the experimental conditions.
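The same order-of-magnitude exercise can be repeated for the solutal effect. The sketch below simply multiplies ∂γ/∂c by the tabulated concentration gradients and recovers values of the same order as those quoted above; the small differences come from rounding in the text.

```python
# Order-of-magnitude estimate of the solutocapillary stress, Eq. (5.17).
dgamma_dc = -3.2e-5            # [N m^3/(m mol)], Eq. (5.17)
dc_dz = (4e3, 4e5, 2e7)        # low/typical/high dc/dz [mol/m^4], Tab. 5.4

for val in dc_dz:
    grad_gamma_c = abs(dgamma_dc) * val
    print(f"(grad_s gamma)_c ~ {grad_gamma_c:.3g} N/m^2")   # ~0.13, ~13, ~640
```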
Bubble growth rate with Marangoni flow

There is currently no model to simulate the growth of a bubble in conjunction with the Marangoni effect; it is therefore not possible to establish a radius growth relationship R(t). On the other hand, there are data on the velocity field around the bubble created by the Marangoni effect for a given bubble size.

Growth with the diffusion hypothesis

To close the bubble growth model it is necessary to express a value for the interfacial mass transfer, and hence to estimate the interfacial mass transfer coefficient k [m·s⁻¹]. Equation (2.43), modelling a bubble with no contact with the electrode, gives the variation of the concentration as a function of radius. Under this assumption, and by combining this equation with Eq. (2.40), it is possible to obtain an expression for the interfacial mass transfer ṁ_diff,hyp by dividing by the area of a sphere:

ṁ_diff,hyp = M D (∂c/∂r)|_R = M (c₀ − c_s) ( D/R + √(D/(πt)) ) (5.18)

The mass transfer coefficient k can then be estimated by:

k = D/R + √(D/(πt)) (5.19)

This mass transfer coefficient is used to express the mass transfer rate ṁ appearing in the interfacial mass transfer equation, thus closing the model:

ṁ = M k (c_i − c_s) (5.20)

This relationship was used by Liu et al. to simulate the growth of a hydrogen bubble at an electrode surface [START_REF] Liu | Numerical simulation of hydrogen bubble growth at an electrode surface[END_REF]. The authors do not include the Marangoni effect in that study, and their results for the growth of a bubble can be considered to be in poor agreement with the measurements made by Glas and Westwater [START_REF] Glas | Measurements of the growth of electrolytic bubbles[END_REF]. The mass transfer coefficient here can be considered as a global variable determined at the bubble scale. The assumptions made are strong and depart considerably from reality: the bubble is considered as a sphere, and the analytical solution starts from the assumption that the bubble sits in the middle of an infinite electrolyte of homogeneous concentration c₀ in which the transport of dissolved species takes place only by diffusion. In contrast to this theoretical value c₀, the interfacial concentration c_i is not homogeneous along the interface.

Growth with the penetration theory

The penetration theory was suggested by Higbie, who investigated whether a resistance to transfer exists at the interface when a pure gas is absorbed in a liquid [Higbie, 1935b]. In the penetration theory, the mass transfer coefficient of a fluid element with a given residence time is expressed by:

k = 2√(D/(πt_c)) (5.21)

where t_c is the contact time of a fluid element containing the dissolved species with the bubble interface. On the electrolyte side, the growth rate of the bubble ṁ_B [kg·s⁻¹] can be evaluated from k:

ṁ_B = M k (c_i − c_s) S_interface (5.22)

where S_interface is the surface over which the mass transfer takes place, in other words the surface not covered by the surfactants. As shown in Fig. 45, the assumption of a stagnant surfactant cap prevents interfacial mass transfer over part of the bubble. Combining this relationship with what happens on the gas side,

ṁ_B = ρ_g 4πR(t)² dR(t)/dt (5.23)

we can determine the growth rate of the bubble:

dR(t)/dt = (M/ρ_g) 2√(D/(πt_c)) (c_i − c_s) S_interface / (4πR(t)²) (5.24)

This relationship is close to Eq. (2.44), which supports the reasoning that the use of the penetration theory is appropriate. The objective here is to establish that Eq. (5.24) is suitable for the study of bubble growth. With the experimental protocol used by Yang et al. it is possible to measure the fluid velocity as close to the interface as possible [START_REF] Yang | Marangoni convection at electrogenerated hydrogen bubbles[END_REF]. Similarly, this value can be deduced from a numerical simulation such as that of [START_REF] Meulenbroek | Competing Marangoni effects form a stagnant cap on the interface of a hydrogen bubble attached to a microelectrode[END_REF]. Using the data from this study, an average velocity along the interface can be calculated: v_i,avg = 12 mm·s⁻¹.
The transfer area S_interface can be calculated from the bubble radius, the contact angle θ_c = 4.2° and the stagnation angle θ_S = 57° estimated by the same authors. The time t_c is the contact time of the fluid with the interface; it can be deduced from the average velocity along the interface. From these data we can estimate an average contact time along the interface of t_c ≈ 0.0414 s. Tab. 5.5 shows the other data used.

Tab. 5.5: Parameters common to both equations, Eq. (2.44) and Eq. (5.24)
Quantity | Unit | Estimate
R | m | 5.6 × 10⁻⁴
D_H₂ | m²·s⁻¹ | 7.38 × 10⁻⁹
ρ_H₂ | kg·m⁻³ | 0.09
M_H₂ | kg·mol⁻¹ | 0.002

The main unknown here is the average concentration at the interface, c_i,avg. The growth rate of the bubble measured by Massing et al. is dR(t)/dt ≈ 84 µm·s⁻¹. Using this value it is possible to find c_i,avg = 37.5 mol·m⁻³ from Eq. (5.24), which is a plausible value [START_REF] Westerheide | Isothermal growth of hydrogen bubbles during electrolysis[END_REF][START_REF] Shibata | The Concentration of Molecular Hydrogen on the Platinum Cathode[END_REF][START_REF] Vogt | On the supersaturation of gas in the concentration boundary layer of gas evolving electrodes[END_REF]. However, many parameters are approximated, and Eq. (5.24) needs to be tested against more experimental results to be validated. This relationship is thus a new tool to better understand what is happening around the bubble.
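The inversion of Eq. (5.24) described here can be reproduced numerically. In the sketch below, the surfactant-free area is modelled as the spherical zone between the contact angle and the stagnation angle, which is an assumption of this illustration; with that choice the computed interface concentration is close to the 37.5 mol·m⁻³ quoted above.

```python
import math

# Inverting Eq. (5.24) for the average interface concentration, using the
# parameters of Tab. 5.5 and the measured growth rate of Massing et al.
R, D, rho_g, M = 5.6e-4, 7.38e-9, 0.09, 2.0e-3
t_c = 0.0414                         # contact time deduced from v_i,avg [s]
theta_c = math.radians(4.2)          # contact angle
theta_S = math.radians(57.0)         # stagnation angle
dRdt = 84e-6                         # measured growth rate [m/s]

k = 2.0 * math.sqrt(D / (math.pi * t_c))   # Eq. (5.21), ~4.8e-4 m/s

# Assumed geometry: surfactant-free spherical zone between theta_c and theta_S
S_interface = 2*math.pi*R**2 * (math.cos(theta_c) - math.cos(theta_S))

delta_c = dRdt * rho_g * 4*math.pi*R**2 / (M * k * S_interface)
print(f"k ~ {k:.2e} m/s, c_i - c_s ~ {delta_c:.1f} mol/m^3")   # ~35 mol/m^3
```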
6 Non-dimensional study

Simplification of the model

There are few studies that relate the Marangoni effect to interfacial mass transfer. The aim of this dimensionless study is to evaluate the interfacial mass transfer, by means of a Sherwood number, from an estimated or measured Marangoni effect, as in the Yang et al. experiment [START_REF] Yang | Marangoni convection at electrogenerated hydrogen bubbles[END_REF]. In a previous section we were able to estimate the errors of the numerical model, which allows a reliable evaluation of the Sherwood number. The main assumption used here is the penetration theory. The other assumption is that the quantity influencing the value of the surface tension at the interface can be treated like a temperature, i.e. the coupling between the absorption of dissolved species at the interface and the resulting change in surface tension is not taken into account: the surface tension gradient is generated from the temperature profile deduced from the heat equation, and the variable of interest is the surface tension along the interface. This type of study is thus in line with the hypotheses made by Massing et al., Meulenbroek et al. and Hossain et al., which neglect the soluto-Marangoni effect due to absorbed dissolved species [Hossain et al., 2020; Massing et al., 2019; Meulenbroek et al., 2021]. This assumption greatly simplifies the model. The assumptions outlined above are shown in Fig. 46. The model takes into account three input parameters:

• L_interface, the length of the interface over which the interfacial mass transfer and the Marangoni effect take place;
• κ, the curvature of this interface;
• ∇_s T, the temperature gradient along the interface. From this surface temperature gradient the surface tension gradient can be deduced, the surface tension in this model being a function of temperature only.

Dimensionless equations

Surface tension is a force per unit length, so the resulting stress must scale as ∆γ/L, while the viscous stress scales as µu/L, for u the speed of the Marangoni flow. Equating the two gives a flow speed u = ∆γ/µ. As Ma is a type of Péclet number, it is a velocity times a length, divided by a diffusion coefficient D. The Marangoni number can thus be defined as:

Ma = v_char L_char / D ≈ |∇_s γ| L_char² / (µD_T) ≈ |∆γ| L_char / (µD_T) (5.25)

Ma = (advective transport rate due to the surface tension gradient) / (diffusive transport rate of the quantity of interest) (5.26)

In order to make the Marangoni number appear, the Navier-Stokes and transport equations and the corresponding jump relations are made dimensionless. As illustrated in Fig. 47 and Fig. 48, the aim is to expose the perimeter of the bubble to different gradient values to generate a Marangoni effect. Due to blocking by surfactants, only a portion of the interface participates in the surface tension gradient and in the interfacial mass transfer. The objective of this study is to create a tool relating a surface tension gradient, through a Marangoni number, to the interfacial mass transfer, through a Sherwood number; the common characteristic length of these two dimensionless numbers is the surfactant-free interface length L_interface. Locally, in an experiment, it is possible to estimate this length. As mentioned earlier the Marangoni effect is intermittent, so the interfacial mass transfer is only retained when the velocity of the Marangoni flow is highest. The graph in Fig. 47 details the correspondence between the observation that can be made experimentally and the numerical study performed. The interface is assumed to be exposed to a temperature gradient over a length of interface, and mass transfer is only simulated along the length L_interface. Surfactants are assumed to cover the rest of the bubble interface and to block the interfacial mass transfer of dissolved species there; a condition has been added so that interfacial mass transfer is suppressed outside the length L_interface. In Fig. 47 the diagram on the left shows a bubble attached to an electrode. However, the numerical model presented in Chapter 3 does not include a contact line module; therefore, the choice was made to perform a simulation on a bubble in a stagnant liquid. The idea is to simulate a surface tension variation along the interface to obtain local information on the interfacial mass transfer. As suggested by Hossain et al., the surface tension variation along the interface is not necessarily uniform, nor linear [START_REF] Hosokawa | Evaluation of adsorption of surfactant at a moving interface of a single spherical drop[END_REF]. The interface can be divided into several sections where the surface tension stresses differ, and the Marangoni currents can be of opposite directions depending on the portion of the interface considered, as suggested in Fig. 44. The objective of this dimensionless study is to develop a tool to determine the local interfacial mass transfer as a function of the interface section considered. Fig. 48 shows the result of one of the simulations carried out to produce the graphs in Fig. 49 and Fig. 50. In each cell of the mesh the fluid is exposed to the interface for a contact time t_c, from which the local mass transfer coefficient is deduced. The graphs in Fig. 49 and Fig. 50 summarise the results of many simulations. As an example, Tab. 5.6 shows the data of the point indicated by the label "data" in Fig. 49 and Fig. 50. The Sherwood numbers and the mass transfer coefficient are calculated with the methodology indicated above; the Marangoni number is calculated from the input parameters as indicated by Eq. (5.25). Marangoni currents were generated along a spherical interface over a distance L_interface.
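As an illustration of Eq. (5.25), the sketch below evaluates Ma for a configuration similar to Fig. 48; the thermal diffusivity and the surfactant-free interface length are assumed values, so the result should be read as an order of magnitude only.

```python
# Evaluating the Marangoni number of Eq. (5.25) for a Fig. 48-like case.
mu = 1.2e-3            # dynamic viscosity [kg/(m s)]
D_T = 1.4e-7           # thermal diffusivity of water [m^2/s] (assumed)
dgamma_dT = 1.6e-4     # high value of Tab. 5.2 [N/(K m)]
dT = 0.1               # imposed temperature difference [K], as in Fig. 48
L = 1.0e-4             # surfactant-free interface length [m] (assumed)

Ma = dgamma_dT * dT * L / (mu * D_T)
print(f"Ma ~ {Ma:.0f}")   # order of 10 for these assumed values
```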
These graphs provide local information on the interfacial mass transfer and can accordingly be used to interpret experimental observations. In accordance with the observations made by Golovin, the Sherwood number increases as a function of the Marangoni number [START_REF] Golovin | Mass transfer under interfacial turbulence: kinetic regulaties[END_REF]. The graph in Fig. 49 shows the Sherwood number as a function of the Marangoni number: within the framework of a simulation made on a fixed liquid-bubble interface, this allows the Sherwood number describing the interfacial mass transfer to be estimated. The graph in Fig. 50 shows the mass transfer coefficient as a function of the Marangoni number; the three curves correspond to different ratios of the interface length to the bubble radius. The overall objective here was to provide a tool to better understand the mass transfer due to the Marangoni effect.

Conclusion of the chapter and discussion of model uncertainties

The models described above cannot confirm the real situation of interfacial mass transfer at the bubble interface; there are too many unknowns to draw definite conclusions. However, the implementation of a contact line model in the numerical model could remove many uncertainties. At this stage it is appropriate to list the main quantities for which these uncertainties exist:

• the concentration gradients around the bubble, i.e. the local value of the concentration of dissolved species along the interface;
• the value of the surface tension gradient along the interface;
• the local value of the interfacial mass transfer coefficient;
• the presence and impact of surfactants;
• the contact angle.

A direct numerical simulation of a holistic model such as the one shown in Fig. 43 would provide a microscopic approach to the problem. One of the major objectives of this thesis has been to develop a model to address these uncertainties. In most models, the interfacial mass transfer coefficient remains an input parameter. The advantage of the penetration theory is to integrate the mass transfer coefficient as a local value calculated directly by the model, and not as an average quantity calculated a priori. Validating the penetration theory in the case of electrogenerated bubbles would remove the uncertainty regarding the mass transfer coefficient. The coupling between dissolved species transport, interfacial mass transfer and the momentum jump relation would remove the uncertainty in the concentration gradient around the bubble. Removing this uncertainty would provide a value for the soluto-Marangoni effect, which could distinguish its importance from other effects, and thus provide a value for the surface tension gradient along the interface. Even with a holistic model, however, the value of the contact angle and the impact of surfactants would remain unknown: there is currently no agreed theory to predict the evolution of the contact angle, and the impact of surfactants can only be assessed a posteriori by comparing model results with experiments.

Conclusion

1 Perspective: integration of a moving contact line model

As previously stated, in order to simulate a holistic model like the one shown in Fig. 43, a moving contact line model must be included. It is therefore consistent to introduce the problem of simulating these moving contact lines, and how this can be integrated into the numerical model presented in this thesis through height functions.
The contact line is a singularity of an even higher order than the interface: it is a linear singularity located at the edge of a surface singularity. Not only are the physical quantities discontinuous there, which poses numerical problems, but even a simple physical model is not always available to describe them [START_REF] Legendre | Comparison between numerical models for the simulation of moving contact lines[END_REF]. Experience shows that the behaviour of contact lines depends on a large number of factors, few of which are controlled in practice (wall roughness, surface condition and chemical contamination, composition of the fluid and possible contamination by surfactants, etc.). These contact lines play a crucial role in the phenomenon we want to study: if we want to study the influence of a hydrophobic surface compared to a hydrophilic surface, it is essential that the model used can account for their differences. The Navier-Stokes equations include some aspects of intermolecular forces; viscosity, pressure and surface tension are all quantities that result directly from the interactions between the molecules of the fluid. The application of these interactions to the contact lines leads naturally to the surface energies of the Young-Dupré relationship, and the link with the equilibrium contact angle is direct:

γ_lg cos θ_E = γ_sg − γ_sl (6.1)

where θ_E is the static contact angle, defined as the angle that the interface makes with the wall when the contact line is stationary. Unfortunately, this relationship is only relevant for a static contact line without mass transfer. In the presence of mass transfer, not only is the contact angle modified, but the balances and exchanges are modified by the presence of the wall. Theoretical studies on the dynamics of contact lines and contact angles are easy to find; they are based on classical continuum mechanics and their two essential ingredients are the Navier-Stokes equation and the Laplace relationship. There are also other approaches to modelling contact lines, where the microscopic aspects are more emphasized [START_REF] Hadjiconstantinou | Hybrid Atomistic-Continuum Formulations and the Moving Contact-Line Problem[END_REF][START_REF] Gouin | The wetting problem of fluids on solid surfaces: Dynamics of lines and contact angle hysteresis[END_REF][START_REF] Gouin | Energy of Interaction between Solid Surfaces and Liquids[END_REF][START_REF] Pomeau | Recent progress in the moving contact line problem: a review[END_REF]. Experiments on the subject often use perfect surfaces (usually glass, sometimes coated to modify its wettability) and equally ideal fluids. In [START_REF] Gennes | Wetting: statics and dynamics[END_REF], de Gennes draws a particularly complete panorama of the different aspects of wetting by incorporating a very large number of physical ingredients. Studies of contact angle hysteresis are rarer. The common feature of these models is the calculation of the liquid-gas interface profile from a simplified solution of the Stokes flow near the contact line. This flow gives rise to a non-integrable singularity [START_REF] Hocking | A moving fluid interface. Part 2. The removal of the force singularity by a slip flow[END_REF] which is resolved by introducing a cut-off scale and a physical mechanism at the molecular scale to close the system of equations.
When a contact line moves at a velocity U_cl along a wall and a no-slip condition is imposed, a stress is generated:

τ_singularity ≈ µU_cl/∆ (6.2)

where ∆ is the grid spacing and µ is the fluid viscosity. When ∆ tends towards zero this stress diverges: refining the mesh makes the calculations diverge. To deal with the singularity, several authors introduce the Navier slip condition in their model. In fact, many models can be interpreted as variants of the Dussan model [START_REF] Dussan | The moving contact line: the slip boundary condition[END_REF], where microscopic phenomena are summarized by a microscopic contact angle and a slip length. In such models the component of the velocity tangential to the wall is estimated using the following relation:

U_w = λ_N (∂u/∂n)|_w (6.3)

where U_w is the fluid velocity at the wall, n_w is the normal to the wall, and λ_N is the slip length, usually estimated to be of nanometer scale. Grid convergence is then achieved by solving the complete hydrodynamic problem within the moving contact line region. However, the λ_N values used in most simulations are unrealistically large, because of the limited refinement of the grid. In practice λ_N becomes an adjustable parameter of the simulation: the calculations converge, but the slip condition becomes unphysical. This boundary condition for the velocity field is used in conjunction with a dynamic contact angle model. The first step is to determine the dynamic contact angle. One solution is to assume the dynamic contact angle to be constant and equal to the static contact angle. Then, in the VOF framework, the idea is to impose on the contact line a normal to the interface depending on the contact angle:

n_I = sin θ n_w,∥ + cos θ n_w,⊥ (6.4)

where n_w,∥ and n_w,⊥ are the components of the normal vector parallel and perpendicular to the wall. This value is used in the surface tension model and is then imposed as a boundary condition. Most models vary in how the contact angle is determined and in the description of the slip condition; however, the solution in this type of model depends on the size of the mesh [START_REF] Afkhami | A mesh-dependent model for applying dynamic contact angles to VOF simulations[END_REF]. Afkhami et al., building on the theory of [START_REF] Cox | The dynamics of the spreading of liquids on a solid surface. Part 1. Viscous flow[END_REF], proposed the following expression for modelling the contact angle [START_REF] Afkhami | A mesh-dependent model for applying dynamic contact angles to VOF simulations[END_REF][START_REF] Sheng | Immiscible-fluid displacement: Contact-line dynamics and the velocity-dependent capillary pressure[END_REF]:

cos(θ_num) = cos(θ_app) + 5.63 Ca ln( K / (∆/2) ) (6.5)

where θ_num is an angle defined according to the mesh. They point out that there is a linear dependence of cos(θ_num) − cos(θ_app) on Ca ln( K/(∆/2) ) when applying both no-slip and Navier-slip boundary conditions. Ca is the capillary number, Ca = µU_cl/γ (U_cl is the contact line velocity and µ the fluid viscosity). K is a constant which can be determined by fitting numerical data to data obtained experimentally. The big advantage of this method is that it eliminates the stress singularity at the contact line: the solutions converge with mesh refinement.
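Equation (6.5) is straightforward to evaluate. The sketch below shows how the angle actually imposed at the wall drifts with the grid spacing for a fixed Ca, using illustrative values of K, Ca and ∆, none of which are taken from the references.

```python
import math

def numerical_contact_angle(theta_app_deg, Ca, K, dx):
    """Mesh-dependent numerical contact angle of Afkhami et al., Eq. (6.5).

    theta_app_deg: apparent contact angle [deg]; Ca: capillary number;
    K: fitted constant [m]; dx: grid spacing [m]. K, Ca and dx are
    illustrative values here, not fitted data.
    """
    c = math.cos(math.radians(theta_app_deg)) + 5.63 * Ca * math.log(K / (dx / 2.0))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Refining the mesh changes the angle imposed at the wall
for dx in (4e-6, 2e-6, 1e-6):
    print(f"dx = {dx:.0e} m -> theta_num = "
          f"{numerical_contact_angle(60.0, 1e-3, 1e-5, dx):.1f} deg")
```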
As shown in Fig. 51, in order to model the numerical contact angle, Afkhami et al. used a methodology based on height functions [START_REF] Afkhami | Height functions for applying contact angles to 2D VOF simulations[END_REF]. This model, implemented in the VOF framework, has been used in a more recent publication [START_REF] Afkhami | Transition in a numerical model of contact line dynamics and forced dewetting[END_REF].

Conclusion of the work

This work was carried out with the aim of gathering the knowledge necessary to model bubbles generated in electrolysis cells. It started with a thorough literature review of the behaviour of bubbles in electrolysis cells and of the effect of these bubbles on the efficiency of the process. The outgassing produced in electrolysis cells has been studied extensively by several generations of researchers; this review revealed that knowledge of outgassing characteristics, such as size at detachment, growth rate or bubble movement, is fragmentary. The aim of this literature review was twofold: to describe the knowledge needed to model an electrolysis cell, and to highlight aspects that could improve the electrochemical process. The covering of the electrode by the bubbles is an essential parameter in the calculation of the efficiency of the electrolysis process. Therefore, without knowledge of the residence time of the bubbles on the electrode, their growth rate, and their diameter at the moment of detachment, it is not possible to model the electrolysis process correctly. It turned out that a better understanding of the transport of dissolved species, of the interfacial mass transfer, and of the Marangoni effect that contributes to delaying the detachment of bubbles from the electrode would elucidate the behaviour of bubbles in certain experimental cases. The desire to produce a holistic model of an electrogenerated bubble stems from the observation of these three gaps. Without a model capable of simulating a Marangoni effect on a free surface, an essential simulation tool is missing. In this manuscript, the mathematical model for simulating two-phase flows with a Marangoni effect and interfacial mass transfer has been presented, and the numerical methods needed to apply this model have been outlined. An algorithm based on the VOF methodology and height functions was detailed. Finally, it was shown that this tool successfully reduces parasitic currents on a static and a moving isolated bubble and is suitable for the study of the Marangoni effect around an electrogenerated bubble. This tool proves to be a first step towards a holistic model. The couplings necessary to perform a direct simulation of the phenomenon were presented; to achieve this objective, the implementation of a moving contact line module is still missing. However, a parametric study accounting for the interfacial mass transfer as a function of the Marangoni effect was carried out using the numerical model. In the absence of a direct holistic simulation, an equation giving the growth rate of a bubble, based on the penetration theory and allowing the Marangoni effect as well as the covering of the bubble by surfactants to be taken into account, was established. This relationship is close to the Epstein-Plesset equation and gives comparable results. It was pointed out that the amount of experimental data is still insufficient to validate a numerical model of an electrogenerated bubble including the Marangoni effect.
The work carried out in this manuscript has laid the theoretical and numerical foundations for an improved understanding of dissolved gas transport, interfacial transfer and the Marangoni effect around an electrogenerated bubble. As mentioned earlier, the work presented in this thesis is part of an approach based on inverse problems. In this perspective, among all the sources of error mentioned in the diagram in Fig. 13, this thesis has focused on reducing three of them:

• Errors in hypotheses and models: these have been extensively discussed in this thesis. Different models have been presented to account for the transfer of species from the electrode to the bubble interface, interfacial mass transfer, surface tension variation at the interface, bubble growth and detachment. The model shown in the diagram in Fig. 43 presents the elements necessary to correctly describe an electrogenerated bubble.

• Errors due to assumed known parameters: it was noted that, depending on the references considered, ∂γ/∂T can vary by a factor of 10. From this point of view, it is likely that the thermocapillary effect is often poorly evaluated; evaluating the surface tension variation on the interface while excluding other sources of variation may lead to a wrong interpretation.

• Errors in direct or numerical calculations: in order to limit the direct calculation errors, several numerical methods have been tested (height functions), the aim being to estimate these errors and their impact on the simulation result. As a reminder, in the case of a moving bubble, the error due to parasitic currents was limited to 2.5% for a ratio R/∆x = 70.

In conclusion, this thesis:

• presented and discussed the assumptions and structural choices of a holistic model for simulating electrogenerated bubbles;
• provided a numerical model for simulating the Marangoni currents around an electrogenerated bubble;
• evaluated and reduced the errors (spurious currents) of this model using innovative methodologies;
• established a new relation for evaluating the growth rate of a bubble as a function of the intensity of the Marangoni effect around it;
• provided a tool for evaluating the interfacial mass transfer, through a mass transfer coefficient, as a function of a Marangoni number.

Fig. 1: Definition of bubble coverage; the grey surface represents the normal projection of the bubble on the electrode surface.
Fig. 2: Extract from [Vogt, 2011b], gas evolution efficiency.
Fig. 3: Diagram inspired by [START_REF] Blander | Bubble nucleation in liquids[END_REF] showing the work required to form a spherical bubble according to its radius. The system seeking a minimum energy state, below a radius R_c the bubble cannot form; above R_c the bubble can grow.
Fig. 4: Local microprocesses at gas-evolving electrodes and their influence on mass transfer.
Fig. 5: Penetration theory.
Fig. 6: Extract from [Yang et al., 2018], particle trajectories and corresponding velocities around the growing bubble at a potential of -8 V.
Fig. 7: Influence of surface tension variation on mass transfer.
Fig. 8: Extract from [Yang et al., 2015], bubble geometry.
Fig. 10: Extract from [Sakuma et al., 2014], bubble growth model for the absorption of the dissolved oxygen gas on (A) hydrophobic and (B) hydrophilic electrode surfaces.
Fig. 11: Figure and comments extracted from [START_REF] Wilt | Nucleation rates and bubble stability in water-carbon dioxide solutions[END_REF]: "Definition of geometrical parameters for heterogeneous nucleation in the cases of conical and spherical cavities and projections."
Fig. 12: Figure and comments extracted from [START_REF] Lubetkin | The motion of electrolytic gas bubbles near electrodes[END_REF]: "A hypothetical composite conical or crack-like nucleation site. The region A is hydrophobic, possibly as a result of capillary condensation of a volatile organic material. This deposition mechanism would restrict its presence to the narrowest region of the pore. Nucleation is easy in this environment, with its high contact angle. As the nascent bubble grows past the hydrophobic region, and into the hydrophilic region B, a small remnant may be detached, remaining in the hydrophobic region. This residual gas would promote the instantaneous growth of the next bubble. As it grows, it will be shot out of the conical site, its zero contact angle ensuring the absence of friction. The rapidity of its departure might cause sufficient stirring in the vicinity C, to disrupt the Nernst layer, thus eliminating the concentration Marangoni effect, which would otherwise tend to hold the bubble near the electrode until it was larger. Alternatively, the rapid growth of the first bubble during its rise, and the immediate following of others might cause depletion of the Nernst layer in a narrow bubble chimney, again destroying the concentration gradient."
Fig. 13: Diagram showing the errors that can influence the inverse process.
Fig. 14: Illustration of the different continuum methods for describing the evolution of deforming interfaces, extract from [START_REF] Wörner | Numerical modeling of multiphase flows in microfluidics and micro process engineering: a review of methods and applications[END_REF].
Fig. 15: The jump operator describes the passage of a quantity Q through an interface between two distinct volumes.
Fig. 16: Diagram of the surface gradient operator ∇_s Q. It is defined as the projection of the gradient ∇Q onto the surface.
Fig. 17: Diagram of a fluid particle. The two volumes, gaseous V_g and liquid V_l, are separated by an interface I.
Fig. 18: Example of a 2D simulation with Ansys Fluent of a bubble within a stagnant liquid in a zero-gravity system. The classical CSF formulation was used to calculate the surface tension forces. Spurious currents appear at the interface.
Fig. 19: Methodology of height functions. The sum of the volume fractions present in a column is equal to the average value of a function f on an interval ∆x whose curve represents the interface.
Fig. 20: Stencil adapted according to the interface.
Fig. 23: Extracted from [Owkes and Desjardins, 2015], example of mesh-decoupled columns and heights used to compute the interface curvature.
Fig. 24: Diagram showing the interface. In the columns and rows of the mesh the height functions are estimated. The local coordinate system is identified by two vectors.
Fig. 25: (a) example gradient of ∂γ/∂s when switching from one direction to another.
Fig. 26: 40 × 40 structured mesh used for the simulation.
Fig. 27: Algorithm used by Fluent to solve the Navier-Stokes equations.
Fig. 28: Integration of the DEFINE_ADJUST macro to calculate the source term f_γ.
Fig. 29: Calculation of the exact volume fraction.
Fig. 30: Relative curvature errors with the height function methodology along the circular interface as a function of the interface inclination angle with respect to the horizontal.
Fig. 31: Relative curvature errors along the circular interface as a function of the interface inclination angle with respect to the horizontal for two interface resolutions.
Fig. 32: Comparison between the analytically calculated Young-Laplace pressure and the numerically evaluated pressures around the bubble for the continuous surface force (CSF) model and the height function (HF) model. The points represent the average pressure at the center of the cells and the x-axis represents the distance from the center of the bubble.
Fig. 33: Evolution of the maximum intensity of the spurious currents observed around the bubble. With the use of an exact curvature for the simulation, the equilibrium is reached for a time t_ν.
Fig. 35: Spurious currents generated using two mesh densities: on the left R/∆x = 10, on the right R/∆x = 20.
Fig. 36: Relative interfacial area errors averaged along a circular interface as a function of n_L and the mesh refinement.
Fig. 37: Maximal relative interfacial area errors along a circular interface as a function of n_L and the mesh refinement.
Fig. 38: Surface gradient calculation. The interface of the bubble is exposed to a temperature difference along the interface. The value calculated with Eq. (3.79) is compared with the exact value.
Fig. 39: Relative error of the surface tension gradient as a function of mesh resolution. The maximum error E_max and average error E_avg are calculated for two values of the surface tension gradient, 0.1 N·m⁻² and 50 N·m⁻².
Fig. 41: Numerical modelling made with Ansys Fluent of a bubble exposed to a surface tension gradient. The image on the left shows the velocity vectors of the current generated by the Marangoni effect using the method described in this thesis. The Marangoni currents in the image on the right are obtained using the CSF methodology.
Fig. 42: Gradient of a dissolved species initiated along the interface, at the origin of the solutocapillary effect shown in Fig. 41. The mesh is represented by the grey grid with R/∆x = 20.
Fig. 43: Diagram representing the different couplings between the relationships involved in the production of electrogenerated bubbles.
Fig. 44: Diagram extracted from Hossain et al. showing the flows around the bubble as a function of electrode size [Hossain et al., 2020]. The red lines to the left of each bubble represent the electrical current lines. The narrowing of these electrical current lines makes it possible to determine the position of a hot spot which is at the origin of a temperature gradient and consequently of a surface tension gradient. The black lines to the right of the bubbles represent the lines of fluid movement generated by the surface tension gradient.

[...] influence on the fluid currents around the bubble. It is worth recalling here the study by Hossain et al. on the influence of electrode size on the thermo-Marangoni effect [Hossain et al., 2020]. Fig. 44 summarises a key point of this study.
The temperature gradient along the interface depends on a hot spot created by the Joule effect. The more the electric current lines tighten around the bubble, the greater the Joule effect. The profile of the electric current lines depends on the size of the electrodes. For microelectrodes the hot spot is at the foot of the bubble; for larger electrodes the point where the current lines narrow is near its equator. Around this hot spot two vortices form. The hot spot reduces the surface tension, and the fluid along the interface moves from areas of low surface tension to areas of high surface tension. The results obtained by Hossain et al. exclude other sources of the Marangoni effect [Hosokawa et al.]. Although these results do not capture the complexity of the real phenomenon, they clarify the influence of the Joule effect on the Marangoni effect, and they put into perspective the experimental results obtained from a microelectrode compared to those obtained from a larger electrode.

Fig. 45: Diagram of an electrogenerated bubble on a microelectrode coated with surfactants. The angle θ_S is used to define the free surface of surfactants.

[...] and then adopted by other authors [Yang et al., 2018; Massing et al., 2019; Hossain et al., 2020; Meulenbroek et al., 2021]. The temperature gradient will depend on the size of the electrode (micro or conventional electrode), the bubble coverage, and the current density. The observations of Yang et al. allow ∆T ≈ 10 K to be estimated for a microelectrode [Yang et al., 2018]. In their work, Hossain et al. obtained similar results for a microelectrode.

Tab. 5.2: Comparison of some key references (recovered fragment):
[...]·m⁻¹: 5.5 × 10⁻⁵, 6.5 × 10⁻⁵, 1.6 × 10⁻⁴
mol·m⁻⁴: 4 × 10³, 4 × 10⁵, 2 × 10⁷

[...] neglect the soluto-Marangoni effect due to absorbed dissolved species [Hossain et al., 2020; Massing et al., 2019; Meulenbroek et al., 2021]. This assumption greatly simplifies the model. The assumptions outlined above are shown in the graph in Fig. 46. The model takes into account three input parameters, listed in the caption of Fig. 46.

Fig. 46: Simplified model of the dimensionless study. The model depends on 3 input parameters: the interface length L_interface over which the interfacial mass transfer and the Marangoni effect take place, the curvature of the interface κ, and the surface temperature gradient ∇_s T.

[...] is thus found in the transport equation and in the momentum jump relation: [ṁ ṽ + p I] · n_I [...] of the mass transfer coefficient and the Sherwood number as a function of the Marangoni number.

Fig. 47: Diagram explaining the correspondence between the experimental observation and the numerical modelling carried out. In each case the interface is exposed to the same assumed surface tension gradient.
Fig. 48: Numerical modelling made with Ansys Fluent of a bubble exposed to a gradient of an extensive quantity (∆T = 0.1 K). The first image on the left shows the initial gradient applied. The second image shows the evolution of this gradient due to the Marangoni effect. The image on the right shows the velocity vectors of the current generated by the Marangoni effect.
Fig. 50: Average mass transfer coefficient over a length L as a function of bubble radius R and Marangoni number Ma.
Tab. 5.6: Data from the point identified on the graphs in Fig. 49 and Fig. 50:
Ratio of interface length to radius of curvature: 0.7
D_T [m²·s⁻¹], thermal diffusivity: 1.39 × 10⁻⁷
µ [kg·m⁻¹·s⁻¹], dynamic viscosity: 1.02 × 10⁻³
∆γ ≈ (∂γ/∂T)∆T [N·m⁻¹], surface tension change: 2.84 × 10⁻³

Fig. 51: Figure extracted from [Afkhami and Bussmann, 2008]: using phantom cells below the electrode position it is possible to calculate the curvature at the contact line from height functions.

[... Glas and Westwater, 1964; Fernández et al., 2014; Sakuma et al., 2014; Yang et al., 2015; Liu et al., 2016; Wang et al., 2016; Van Der Linde et al., 2018].

Processes driving bubble growth. Important aspects of the growth are not yet fully understood. According to Wang, in the case of electrolysis growth is controlled by 3 processes [Wang et al., 2016]. Most of the studies [..., 2012; Luo and White, 2013; Chen et al., 2014; Fernández et al., 2014; Sakuma et al., 2014; Liu et al., 2015; Yang et al., 2015; Baczyzmalski et al., 2016; Karnbach et al., 2016; Liu et al., 2016; Massing et al., 2019] conducted on bubbles use microelectrodes [...; Popinet, 2009; Abadie et al., 2015; Abu-Al-Saud et al., 2018]. When discretizing and choosing the surface tension model, one must make sure of: [...; Cummins et al., 2005; Popinet, 2009; Guo et al., 2015; Magnini, 2016a; Owkes and Desjardins, 2015]. It can be integrated into surface tension models such as Brackbill's model. An interface can always be described locally as the graph of a function. The principle of the height function method is to use a local coordinate system in order to find the curvature of the interface. By calculating the integral of this function and dividing it by the interval over which it has been integrated, we obtain the mean value, or height, of this function over the interval (Table 4.1). As noticed by Lubetkin, the surface [...] (Tab. 4.1: Physical parameters used.)

Yang et al. found that the thermal and solutal effects had amplitudes of the same order of magnitude [Yang et al., 2018]. Massing, with a microelectrode in a 1 M H2SO4 solution, found that the solutal effect was negligible [Massing et al., 2019]. The study by Meulenbroek et al. also excludes the soluto-Marangoni effect caused by dissolved gases, but introduces the one caused by surfactants that are not absorbed by the bubble but, on the contrary, remain fixed on the interface and partially block the interfacial mass transfer [Meulenbroek et al., 2021]. This uncertainty cannot be resolved if the effects are not studied simultaneously. As shown in Tab. 5.3, Massoudi established how the surface tension of water varies as a function of the partial pressure of different dissolved gases [Massoudi].
Tab. 5.3: Linear approximation of the surface tension variation for different gases, γ = γ₀ + bp + cp² + dp³, with γ in mN/m, p in bar, and γ₀ = 71.98 mN/m:
Gas | b | c | d
H₂ | -0.025 | - | -
O₂ | -0.0779 | +0.000104 | -
CO₂ | -0.7789 | +0.00543 | -0.000042

The solutal contribution to the surface tension gradient along the interface follows from the chain rule:

$\frac{\partial \gamma}{\partial z} = \frac{\partial \gamma}{\partial c}\frac{\partial c}{\partial z}$   (5.15)

At low pressures, the concentration of dissolved hydrogen and the partial hydrogen pressure can be related through Henry's law:

$c_{H_2} = k_H \, p$   (5.16)

where k_H, the constant of proportionality, depends on temperature and pressure, but in our case can be estimated as k_H = 7.8 × 10⁻⁶ mol·m⁻³·Pa⁻¹ [Wiebe; Sander]. As a result we obtain [...].

Based on the velocity field observed by Massing et al. and numerically recovered by Meulenbroek et al., it is possible to establish a relationship that allows the growth rate dR(t)/dt to be recovered [Massing et al., 2019; Meulenbroek et al., 2021].
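For hydrogen, combining the linear term b of Tab. 5.3 with the k_H value above gives an explicit estimate for eq. (5.15); a worked sketch, valid only under the stated low-pressure, linear-approximation assumptions and with the ∂c/∂z values taken from the Tab. 5.2 fragment recovered above:

```latex
% Hydrogen: b = -0.025 mN/m/bar = -2.5e-10 N.m^-1.Pa^-1,
%           k_H = 7.8e-6 mol.m^-3.Pa^-1  (both quoted above)
\frac{\partial \gamma}{\partial c}
  = \frac{\partial \gamma}{\partial p}\,\frac{\partial p}{\partial c}
  = \frac{b}{k_H}
  = \frac{-2.5\times 10^{-10}}{7.8\times 10^{-6}}
  \approx -3.2\times 10^{-5}\ \mathrm{N\,m^{2}\,mol^{-1}}
```

With concentration gradients ∂c/∂z of the order of 4 × 10³ to 2 × 10⁷ mol·m⁻⁴, eq. (5.15) then yields solutal Marangoni stresses |∂γ/∂z| ranging roughly from 0.1 to a few hundred N·m⁻².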
Fixed interface hypothesis. A first consideration is to examine the time scales involved. The time corresponding to the bubble cycle varies according to the studies from 2 to 5 s [Sakuma et al.; Yang et al.; Guo et al.; Massing et al.]. This time must be compared with the evolution of the temperature and velocity profiles near the interface, which evolve quite rapidly, in less than a second. Data on the thermal properties of the electrolyte when growing an electrogenerated bubble on a microelectrode, according to the experiments by Massing et al., are given in Tab. 5.1 [Massing et al., 2019]. The value of the radius R = 560 µm chosen by these authors for their numerical simulations, corresponding to an advanced stage of the bubble's development, serves as a basis for the following reasoning.

Tab. 5.1 (fragment): Symbol | Description | Value; thermal conductivity k: 0.58 W·m⁻¹·K⁻¹

Taking the radius of the bubble as the characteristic length, and starting from the first instant when the Joule effect starts to modify the temperature of the electrolyte, theoretically close to the ambient temperature, the heat will diffuse within the electrolyte over a time of order τ_diff = R²/D_th. The heat is thus diffused slowly with respect to the bubble cycle. However, heat is also advected by the thermocapillary movement of the electrolyte at the interface. The ratio of convective to diffusive heat transfer is expressed by the Péclet number. Based on the results of Yang et al., a mean velocity u = 10 mm·s⁻¹ is a good compromise for estimating the convective motions within the electrolyte. The Péclet number is expressed as:

$\mathrm{Pe} = \frac{u R}{D_{\mathrm{th}}}$   (5.10)

where D_th = k/(ρ c_p) is the thermal diffusivity. Convective heat transport is therefore much more important than diffusive transport. Another time scale to consider is the recirculation time of the electrolyte within the vortex near the bubble. A characteristic size of this vortex of 200 µm seems to be a good approximation in view of the observations made by [Yang et al., 2018]. By assimilating this vortex to a circle, we obtain a characteristic recirculation time within the vortex of 0.12 s.
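These three estimates can be checked directly; a back-of-the-envelope sketch in which the 200 µm vortex size is read as a radius (this assumption is what reproduces the quoted 0.12 s):

```latex
% Inputs quoted above: R = 560e-6 m, D_th = 1.39e-7 m^2/s,
%                      u = 10e-3 m/s, vortex radius r_v = 200e-6 m
\tau_{\mathrm{diff}} = \frac{R^2}{D_{\mathrm{th}}}
  = \frac{(5.6\times 10^{-4})^2}{1.39\times 10^{-7}} \approx 2.3\ \mathrm{s}
\qquad
\mathrm{Pe} = \frac{uR}{D_{\mathrm{th}}}
  = \frac{10^{-2}\times 5.6\times 10^{-4}}{1.39\times 10^{-7}} \approx 40 \gg 1
\qquad
t_{\mathrm{rec}} = \frac{2\pi r_v}{u}
  = \frac{2\pi\times 2\times 10^{-4}}{10^{-2}} \approx 0.126\ \mathrm{s}
```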
04115865
en
[ "info.info-ts", "phys.phys.phys-optics" ]
2024/03/04 16:41:26
2023
https://hal.science/hal-04115865/file/Hybrid_PPM_FSK_PAM.pdf
A Novel Optical Wireless Modulation exploiting Time, Frequency and Amplitude divisions enabling Link and Illumination Reliability

Andrea Petroni, Member, IEEE (email: [email protected]), Antonio Costanzo, Senior Member, IEEE (email: [email protected]), Valeria Loscri (email: [email protected]), Mauro Biagi (email: [email protected])

Abstract: The simultaneous task of data transmission and illumination achievable with Visible Light Communication continues to raise the interest of the research and industry communities. The joint use of the same infrastructure for illumination and communication may enable new location-based services by implementing sensing functionalities. In this paper, we propose a new multi-dimensional modulation scheme combining different techniques, in order to increase the spectral and energy efficiency while meeting the optical power emission constraints. Specifically, Pulse Position Modulation, Pulse Amplitude Modulation and Frequency Shift Keying are effectively combined in a joint fashion, in order to exploit time, amplitude and frequency information. The innovation lies in the smart combination of these modulation formats, based on the exploitation of the specific features of the different approaches. In particular, we combine time, amplitude and frequency features to enhance the robustness of the system without sacrificing the data rate performance, as happens instead for coded systems. The major implication of such an approach is twofold: we grant a constant illumination level per symbol and, moreover, we outperform coded modulations in terms of reliability at the same level of spectral efficiency. Theoretical results, validated through experimental evaluations, demonstrate that the combined approach achieves very good performance.

Index Terms: Modulation, Coding, Illumination, Detection, Optical Wireless

I. INTRODUCTION

It is a well-recognized fact that the main limitation of the Internet of Everything (IoE) paradigm is the scarcity of wireless resources [Langley et al.]. In the last decade, Optical Wireless Communication Systems (OWCSs) have emerged as a green and effective solution to alleviate the spectral scarcity and boost the ubiquitous deployment and usage of IoE devices [Ghassemlooy et al.]. The significant advances in optoelectronic technology have made Light Emitting Diodes (LEDs) and photodetectors (PDs) inexpensive and efficient hardware for equipping optical transceivers [Zhang et al.]. Furthermore, the emission of LEDs can be conveniently managed to provide the illumination service jointly with wireless connectivity. Such a scenario is peculiar to the so-called Visible Light Communication (VLC) [Karunatilaka et al.], representing one of the most promising fields of application for OWCSs. Focusing on communication aspects, the design of energy-efficient and effective modulation schemes is paramount to maximize the performance in terms of both data rate and reliability [Dogan Tusha et al.]. One of the key features of OWCSs is that they are Intensity-Modulation/Direct-Detection (IM/DD) systems, requiring highly energy-efficient modulation techniques.
Among the different modulation schemes, one of the most widespread is On-Off Keying (OOK), a special case of Pulse-Amplitude Modulation (PAM) employing only two transmit signal power levels, one of which is set to zero. In general, even though PAM is very suitable for OWCSs, its main issue is the growth of the electrical/optical energy per bit, as the modulation order increases, that is necessary to guarantee a certain Bit Error Rate (BER). A higher level of energy efficiency can be achieved with the adoption of orthogonal modulation schemes, although with a negative impact on spectral efficiency. Frequency-Shift Keying (FSK) belongs to the orthogonal modulation category [Ngo et al.]. Its main drawback in the context of OWCSs is its bipolar nature; however, the variant of FSK based on a Direct-Current (DC) offset, namely DC-FSK, allows a scheme compatible with the IM/DD approach to be obtained [Yamga et al.]. In fact, from an energy point of view, DC-FSK is more efficient than an asymmetric version of FSK, but suffers from a greater spectral efficiency reduction. Another orthogonal modulation technique is Pulse Position Modulation (PPM) [Lee et al.]. PPM is interesting for OWCSs since it is also particularly fitting for localization purposes, but it is inefficient in terms of peak-to-mean optical power ratio [Arnon]. Based on these premises, it is clear that each modulation scheme has its own strengths and weaknesses. Hence, combining different techniques is a promising approach to capitalize on the benefits and mitigate the drawbacks. One of the earliest proposals for hybrid schemes is described in [Batshon et al.], where a low-density parity-check coded hybrid subcarrier/amplitude/phase/polarization modulation is investigated for optical channels, providing rates up to 240 Gb/s. Another work about hybrid solutions for OWCSs has been presented in [Liu et al.]. The authors propose a class of optical modulation techniques combining PPM and FSK with the addition of a polarization component and/or phase modulation, demonstrating the achievement of a higher power efficiency. Both the schemes in [Batshon et al.] and [Liu et al.] were developed for optical systems, but not specifically wireless ones; hence, potential issues related to dimming control, typical of indoor VLC, are not addressed.
A hybrid DC Frequency and Phase Shift Keying modulation scheme for optical wireless systems has been investigated in [Azim et al.]. The authors established the particular combination of phase and frequency leading to an optimal energy and spectral efficiency. Phase and frequency shift applied to OOK are considered by the authors in [Atta et al.] to demonstrate the feasibility of a 160-meter outdoor VLC link, employing an image-sensor-based receiver for reliable signal detection. Still dealing with free space optics (FSO), the joint use of quadrature amplitude modulation and multi-pulse position modulation is evaluated in [Khallaf et al.] for both turbulence-free and gamma-gamma channels, with results demonstrating its superiority to other known schemes in terms of power efficiency and outage probability. Although the techniques in [Azim et al.]-[Khallaf et al.] are tailored to optical wireless communications, no focus has been placed on their application in indoor VLC systems; therefore, performance related to lighting control was not investigated. An interesting contribution on hybrid modulation schemes has been proposed in [Xu et al.], where the authors combine Pulse Width Modulation, PPM and Discrete Pulse Amplitude Modulation (DPAM) in order to increase the data transmission rate. Furthermore, the use of DPAM is also aimed at achieving the dimming function. Similarly, the features of DPAM and the reliability of variable PPM are jointly exploited in [Hua et al.] to improve the communication rate while providing illumination control as well. The effectiveness of the approaches presented in [Xu et al.] and [Hua et al.] was validated through several experiments; however, the lack of a preliminary simulation analysis does not allow the measured performance to be compared with a reliable benchmark. Besides, in [Bui et al.] a new modulation format able to constrain the lighting level macro-symbol by macro-symbol has been proposed by merging PAM and PPM. Its receiver is very complex, and a sub-optimal solution has been proposed in ["Optical energy-constrained slot-amplitude modulation for dimmable VLC: suboptimal detection and performance evaluation"]. Based on the encouraging results available in the literature, the idea of this work is to explore the potentialities of three popular modulation formats and merge them into a single framework.
Hence, in this work we propose a joint multi-modulation scheme, based on the combination of non-orthogonal PPM, PAM and FSK, in order to efficiently exploit the time, amplitude and frequency features of the different schemes. More in detail, it is known from theory [Proakis] that the adoption of spectrally efficient pure modulations like PAM allows the achievement of high data rates. However, the main drawback concerns power inefficiency which, in OWCSs, prevents the provision of a constant lighting level per symbol; moreover, modulation alone is not sufficient to achieve high reliability in real systems. Nonetheless, solving the reliability problem through coding does not solve the illumination issues. On the other hand, light emission and robustness to errors can be effectively addressed by using different schemes, such as FSK or PPM, but the rate is unavoidably penalized due to their bandwidth inefficiency. With the aim of achieving a convenient compromise between rate and reliability, the use of hybrid modulations has recently been spreading. Current works in the literature mainly deal with the merging of pairs of modulation schemes, typically the phase and amplitude, frequency and amplitude, or phase and frequency domains. Furthermore, most of the proposed solutions are developed for FSO applications or, in general, not specifically for indoor wireless scenarios. Hence, issues related to power and illumination control that characterize indoor VLC are still very often neglected. The main novelty of this work is that we propose a multi-modulation scheme relying on the combination of non-orthogonal PPM, PAM and FSK to leverage the specific features and smartly mitigate the main drawbacks of each modulation. Such popular schemes are merged into a whole, novel framework, in order to efficiently exploit the time, amplitude and frequency features of the different schemes. The goal is to achieve performance improvements in terms of spectral and power efficiency with respect to pure modulations, while also providing the constant illumination level that represents a fundamental requirement in real-world indoor VLC systems. The main contributions can be summarized as follows. The multi-dimensional modulation we propose:

• is able to simultaneously exploit the spectral efficiency of PAM and the power efficiency of PPM and FSK by combining them in a single and more effective scheme, where the impact of the pure modulations' weaknesses is reduced;
• is able to grant a constant power level, thus meaning a constant lighting level on a per-symbol basis, allowing illumination control, while other coded and uncoded schemes generally do not;
• allows the use of different receivers, ranging from the optimum one to several sub-optimal ones, each providing a particular trade-off between computational complexity and performance;
• is able to outperform block coding strategies, for the same level of spectral efficiency, in terms of communication reliability, as confirmed by experimental validation as well.

The paper is organized as follows. In Sec. II, we introduce the system model, including the channel description and the multi-dimensional modulation we propose. In Sec. III, we detail the optimum receiver structure and discuss the implementation of several sub-optimal and less costly receivers. In order to evaluate both the performance of the modulation itself and the effectiveness of the different receivers, we proceed in Sec. IV with the description of the numerical results, also showing some test results. Last, in Sec. V we draw the final conclusions.
II. SYSTEM MODEL AND PROPOSED MODULATION

Let us consider a point-to-point optical link with transmitter and receiver equipped with a single LED and PD, respectively. The signal propagation can be modeled as:

$y(t) = \rho h x(t) + w(t)$   (1)

where x(t) is the instantaneous power associated with the transmitted optical signal, h accounts for the channel effect from the source to the receiver, ρ is the responsivity of the PD (measured in ampere/watt), and y(t) is the received current signal resulting from opto-electrical conversion. Finally, w(t) is the noise term, modeled as zero-mean additive white Gaussian, whose variance is indicated as σ²_w. Regarding the channel characterization, by assuming the signal propagation as Lambertian and line-of-sight, we can model h as [Petroni et al.]:

$h = \begin{cases} \dfrac{(n+1)A_{pd}}{2\pi D^2}\cos^n(\varphi)\cos(\psi)\,g(\psi), & \psi \le \Psi \\ 0, & \psi > \Psi \end{cases}$   (2)

where A_pd is the area of the PD, D is the distance between LED and PD, g represents the gain of a potentially employable optical concentrator, and n = -ln 2 / ln(cos Φ_{1/2}) is the Lambertian emission factor derived from the LED -3 dB semi-angle Φ_{1/2}. Finally, φ and ψ are the angles of radiance and incidence, respectively, with Ψ defining the PD field of view (FOV).

Herein, we propose a novel multi-dimensional modulation scheme based on a PPM-like signaling, an FSK-inspired shape and a PAM-based amplitude modulation. The general framework description is provided in Figure 1. Transmit-side operations are described in the current section, while received signal processing is detailed in the next one. Now, we proceed by first introducing the generic symbol emitted by the modulator in the following compact form:

$x(t) = I_0 + A_m s_n(t - \ell T_c)$   (3)

where I_0 is a light bias used to keep the lighting level per symbol constant; this choice is fundamental in order to guarantee a constant illumination level. The amplitude A_m belongs to the PAM-based alphabet A = {A_0, A_1, ..., A_{M-1}}, T_c is the elementary PPM-like delay, and the index ℓ = 0, 1, ..., L-1 rules the signal time shift according to one of the L possible PPM delays gathered in L = {0, T_c, ..., (L-1)T_c}. Finally, s_n(t) is the square wave describing the N-FSK signal shape, with n = 1, 2, ..., N indicating the number of wave cycles within the signaling time T_p. In detail, s_n(t) can be represented as:

$s_n(t) = \sum_{k=1}^{n}\left[\mathrm{rect}\!\left(\frac{t - \frac{T_p}{4n} - (k-1)\frac{T_p}{n}}{T_p/2n}\right) - \mathrm{rect}\!\left(\frac{t - \frac{3T_p}{4n} - (k-1)\frac{T_p}{n}}{T_p/2n}\right)\right]$   (4)

where the first term is the positive portion of the square wave and the second term represents its negative component. The modulator combining the principles of PPM, FSK and PAM signaling is depicted in Fig. 2, while a graphical example of symbol emission is reported in Fig. 3. Once more, it is possible to appreciate that the symbol power is constant; hence, the illumination has a constant value on a per-symbol basis. Given T_c and T_p, the use of L possible delays, N possible frequencies and M possible amplitudes leads to an overall symbol time equal to T_s = T_p + (L-1)T_c (it is worth clarifying that the PAM modulation does not impact the symbol duration, but only the amplitude). Interestingly, it can be noted that if T_c = T_p the orthogonal PPM signaling is realized, while T_c < T_p returns non-orthogonal PPM.
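The transmit chain of eqs. (3)-(4) can be sketched as follows; a minimal NumPy illustration in which all numeric values (sampling rate, amplitudes, bias) are hypothetical placeholders rather than values from the paper:

```python
import numpy as np

def fsk_square_wave(n, Tp, fs):
    """Bipolar square wave with n full cycles in Tp (the s_n(t) of eq. (4))."""
    t = np.arange(int(round(Tp * fs))) / fs
    phase = (t % (Tp / n)) / (Tp / n)      # position within the current cycle
    return np.where(phase < 0.5, 1.0, -1.0)

def modulate(l, n, m, L, Tp, beta, I0, A, fs):
    """One multi-dimensional symbol x(t) = I0 + A[m]*s_n(t - l*Tc), eq. (3)."""
    Tc = beta * Tp                         # elementary PPM delay, Tc = beta*Tp
    Ts = Tp + (L - 1) * Tc                 # overall symbol duration
    x = np.full(int(round(Ts * fs)), float(I0))
    start = int(round(l * Tc * fs))
    x[start:start + int(round(Tp * fs))] += A[m] * fsk_square_wave(n, Tp, fs)
    return x

# Example (hypothetical values): 4-PPM / 2-FSK / 4-PAM with beta = 0.5
A = [0.25, 0.5, 0.75, 1.0]                 # assumed PAM alphabet
x = modulate(l=2, n=2, m=3, L=4, Tp=1e-3, beta=0.5, I0=1.0, A=A, fs=1e6)
```

Note that, since s_n(t) is zero-mean and the bias I_0 is fixed, the per-symbol average optical power is independent of the transmitted bits, which is exactly the constant-illumination property discussed above.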
In this direction, by expressing T_c = βT_p, with 0 < β ≤ 1 (β = 1 representing orthogonal PPM), the spectral efficiency of the proposed modulation format (related to the electrical signal) is given by:

$\eta = \frac{\log_2(L \cdot N \cdot M)}{N\left(1 + \beta(L-1)\right)}$   (5)

According to the modulator block diagram in Fig. 2, the bit mapping can be done by composing the bit string related to the generic symbol as log₂(L·M·N) bits, where the first log₂ L bits are related to PPM, the following log₂ N bits are linked to FSK and the last log₂ M bits deal with PAM. It is worth highlighting that the proposed scheme combines a highly spectrally efficient modulation such as PAM with spectrally inefficient but reliable modulations like PPM and FSK. In this regard, PPM and FSK realize a sort of coding (even though no channel coding is present here), as they provide higher robustness to errors with respect to pure PAM signaling, which is unavoidably paid for with a spectral efficiency reduction. In fact, by looking at eq. (5), the use of PAM signaling (M > 1) allows η to increase, while the presence of PPM and FSK (L > 1, N > 1) lowers η. Moreover, β, ruling the width of the PPM delay, also impacts the trade-off between spectral efficiency and reliability. Another interesting aspect to highlight is that, by smartly combining the three different techniques, we are able to improve energy efficiency. In particular, energy efficiency here means that the system is able to reduce the BER at the same SNR values, thus requiring lower power to achieve the same performance as other schemes.

III. DIGITAL DEMODULATION: OPTIMAL AND SUB-OPTIMAL RECEIVERS

As detailed in the previous section, by combining PPM, FSK and PAM we realize a multi-dimensional modulation scheme jointly exploiting the time, frequency and amplitude domains. Regarding digital demodulation, we must anticipate that the relationship linking PPM, FSK and PAM still holds. In fact, a possible wrong decision about the PPM part of the symbol may induce errors in both symbol frequency and amplitude detection. An unreliable decision on the FSK part may make PAM detection unreliable as well. Finally, having many close signal amplitudes available makes PAM detection very challenging, and errors may also have a significant impact on the recognition of the PPM symbol part. Such a circular dependency among PPM, FSK and PAM suggests that parallel detection of the three modulation components cannot be pursued. Therefore, a different approach combining the time, frequency and amplitude dimensions is required. In this regard, we resort to the principles of the Maximum Likelihood (ML) criterion, allowing symbol detection to be described as follows. First, the transmit signal model in eq. (3) allows eq. (1) to be rewritten as:

$y(t) = \rho h I_0 + \rho h A_m s_n(t - \ell T_c) + w(t)$   (6)

from which it is possible to recover the unbiased continuous-time received signal as:

$y_u(t) = y(t) - \rho \hat{h} I_0 = \rho h A_m s_n(t - \ell T_c) + w(t)$   (7)

where $\hat{h}$ is the estimated version of the channel which, under the hypothesis of reliable estimation, leads y_u(t) to be zero-mean. As previously outlined, the bits carried by PPM, FSK and PAM must be decoded in a joint fashion. The most effective, but also most expensive, way to achieve this goal is to implement the so-called optimum receiver (OR) [Proakis], based on signal filtering operated by considering all the possible combinations of PPM, FSK and PAM signals.
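Eqs. (6)-(7) amount to a linear link plus a bias-removal step; a minimal sketch under an assumed perfect channel estimate, with all numeric values being hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def optical_link(x, rho, h, sigma_w):
    """Opto-electrical conversion with AWGN, eqs. (1)/(6): y = rho*h*x + w."""
    return rho * h * x + rng.normal(0.0, sigma_w, x.size)

def remove_bias(y, rho, h_hat, I0):
    """Unbiased received signal of eq. (7); exact whenever h_hat equals h."""
    return y - rho * h_hat * I0

x = np.ones(2500)                     # placeholder for one I0-biased symbol waveform
rho, h, sigma_w, I0 = 0.5, 1e-2, 1e-4, 1.0
yu = remove_bias(optical_link(x, rho, h, sigma_w), rho, h_hat=h, I0=I0)
```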
As alternatives, we present three sub-optimal receivers, referred to as quasi-optimum receiver (QOR), direct receiver (DR) and feedback receiver (FBR), respectively. Below, we detail the characteristics of the mentioned receivers, discussing the different trade-offs between complexity and performance provided by each configuration.

A. Optimum receiver

By looking at the scheme of the OR reported in Fig. 4 (excluding the gray section), it is possible to appreciate that, by employing L-PPM and N-FSK, L·N filters matched to the different delays and frequencies are required in order to evaluate the corresponding L·N decision metrics, given as:

$r_{\ell,n} = \frac{1}{T_p}\int_{\ell T_c}^{\ell T_c+T_p} y_u(t)\, s_n(t-\ell T_c)\, dt$   (8)

with n = 1, ..., N and ℓ = 0, ..., L-1. Then, the decision related to the PPM-FSK symbol components is taken according to the following rule, which yields the pair $(\hat{\ell}, \hat{n})$:

$(\hat{\ell}, \hat{n}) = \arg\max_{\ell=0,\dots,L-1;\; n=1,\dots,N} |r_{\ell,n}|$   (9)

that is, the indexes of the decided PPM and FSK symbols, respectively. It is important to highlight that in eq. (9) we take the modulus of r_{ℓ,n}, since the amplitude of the electrical signal can be negative due to the presence of noise; relying on the plain maximum of r_{ℓ,n} could instead lead to a complete misdetection of the frequency component. At this stage, once $\hat{\ell}$ and $\hat{n}$ are decided, only the PAM component still needs to be detected. Hence, by resorting to the ML criterion based on the Euclidean distance, we can perform the PAM detection according to the following rule:

$\hat{m} = \arg\min_{m=0,\dots,M-1}\left(\rho \hat{h} A_m - r_{\hat{\ell},\hat{n}}\right)^2$   (10)

so that $\hat{\ell}$, $\hat{n}$ and $\hat{m}$ return the decided indexes of the PPM, FSK and PAM symbols composing the transmitted multi-dimensional symbol. Summarizing, the computational cost required by the OR is O_OR = L·N + M, given by the L·N filtering operations related to PPM and FSK detection plus the M metric computations for PAM detection. As the decision on the received multi-dimensional symbol is taken by comparing it with all the possible symbols belonging to the PPM-FSK-PAM alphabet, the OR is the best performing receiver. On the other hand, the required computational cost O_OR may be high, especially as the modulation orders grow.
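The OR of eqs. (8)-(10) reduces to an L×N matched-filter bank followed by two arg-searches; a sketch reusing fsk_square_wave from the modulator sketch above (the discretization of the integral is an implementation choice, not prescribed by the paper):

```python
import numpy as np

def matched_metrics(yu, L, N, Tp, beta, fs):
    """All L*N matched-filter outputs r[l, n-1] of eq. (8)."""
    Tc, W = beta * Tp, int(round(Tp * fs))
    r = np.zeros((L, N))
    for l in range(L):
        seg = yu[int(round(l * Tc * fs)):int(round(l * Tc * fs)) + W]
        for n in range(1, N + 1):
            # discretized (1/Tp) * integral of yu(t) * s_n(t - l*Tc) dt
            r[l, n - 1] = np.mean(seg * fsk_square_wave(n, Tp, fs))
    return r

def optimum_receiver(yu, L, N, A, rho, h_hat, Tp, beta, fs):
    """Joint PPM-FSK decision (eq. 9), then ML PAM decision (eq. 10)."""
    r = matched_metrics(yu, L, N, Tp, beta, fs)
    l_hat, n_idx = np.unravel_index(np.argmax(np.abs(r)), r.shape)
    m_hat = int(np.argmin((rho * h_hat * np.asarray(A) - r[l_hat, n_idx]) ** 2))
    return int(l_hat), int(n_idx) + 1, m_hat
```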
Please notice that, in this case, the operator |.| applied to the set |L θ | means its cardinality, that is, the number of elements that it is composed of. As in general L θ ≤ L, the number of metrics computed according to eq. ( 8) is lower in QOR than in the optimal receiver, thus saving computational effort. The setup of the threshold θ depends on the energy level expected to be measured within the PPM slot where the FSK signal is placed on. In this regard, let us assume the transmission of a pilot multi-dimensional symbol x P (t) where ℓ, n and m are known. The estimated energy, calculated as in eq. ( 11) in the ℓ-th PPM slot and referred as ξ P , can be exploited to define the threshold theta as θ = (1 -β)ξ P , that essentially represents the energy level expected in the (ℓ + 1)-th PPM slot immediately next to that one where the signal is placed on. In fact, due to their closeness, the ℓ-th PPM symbol is likely to be confused with the (ℓ + 1)-th one, so this is why the threshold θ is set as a function of the energy expected on the (ℓ + 1)-th PPM slot. Finally, concerning the detection of the PAM symbol, we proceed in the same way reported in eq. [START_REF] Batshon | Beyond 240 gb/s per wavelength optical transmission using coded hybrid subcarrier/amplitude/phase/polarization modulation[END_REF]. Overall, the processing cost related to the QOR is O QOR = L + L θ • N + M , C. Direct receiver The receiver introduced as DR is probably the less complex sub-optimal solution in terms of computational cost since signal detection is performed sequentially, starting from the PPMrelated part, then moving to FSK and finally to PAM parts. The use of such detection approach is the reason why the receiver is referred as direct. For the sake of clarity, we would remark that, in this case, detection is not operated jointly on the PPM, FSK and PAM signal components. The block diagram of DR is depicted in Fig. 5 (excluding the gray box). Regarding PPM detection, an energy detector as that one introduced in eq. ( 11) is considered, so as: l = max ℓ=0,...,L-1 ξ(ℓ) (13) returns the index of the decided symbol related to the PPM part. From l it is possible to identify the time delay characterizing the FSK signal s n (t) within the received symbol y u (t), so that the FSK detection can be performed according to the following rule: n = argmax n=1,...,N |r l,n | (14) where it is important to highlight that we are not dealing with r ℓ,n as in eq. ( 8), but with r l,n defined as: r l,n = 1 T p lTc+Tp lTc y u (t)s n (t -lT c )dt. (15) In other words, eq. ( 15) defines the filtering operations that consider only the specific time delay given by l, while filtering in eq. ( 8) is performed for ℓ=0,1,...,L-1. Finally, for what concerns PAM detection, we can still resort to the ML based approach followed in eq. ( 10). Regarding the DR, the number of filtering operations and comparisons requested for symbol demodulation is really low since, as previously outlined, detection is performed sequentially on the PPM, FSK and PAM parts. Therefore, the required computational cost is given as O DR = L + N + M . D. Feedback receiver The last receiver we propose is referred as FBR and it is somewhat similar to the DR. In fact, its reference block diagram is that one reported in Fig. 5, but including the gray area as well. With FBR, it is possible to take a temporary decision on the PPM-FSK symbol part, with the detection being potentially performed multiple times iteratively. Therefore, as highlighted in Fig. 
D. Feedback receiver

The last receiver we propose is referred to as FBR, and it is somewhat similar to the DR. In fact, its reference block diagram is the one reported in Fig. 5, but including the gray area as well. With the FBR it is possible to take a temporary decision on the PPM-FSK symbol part, with the detection being potentially performed multiple times iteratively. Therefore, as highlighted in Fig. 5, a feedback-like mechanism is realized and integrated into the DR architecture. In this regard, we indicate the iteration number with the subscript s, so that $\hat{\ell}_s$ refers to the PPM symbol part decided at the s-th detection iteration. For s = 0, we have the very first decision on PPM, taken according to eq. (13) as in the DR case. Also for FSK detection, for s = 0, $\hat{n}_s$ is determined as in eq. (14), representing the maximum output of the matched filtering. Differently from the DR, we also consider a second potential candidate for the decided FSK symbol, formally defined as:

$\hat{n}^{\diamond}_s = \arg\max_{n \in \mathcal{N}\setminus\{\hat{n}_s\}} |r_{\hat{\ell}_s,n}|$   (16)

that is, the second-highest output of eq. (15) (in other words, the maximum output of the matched filtering excluding $\hat{n}_s$). Given such two candidates, $\hat{n}_s$ is recognized as a reliable decision if the following conditions are simultaneously met:

$|r_{\hat{\ell}_s,\hat{n}_s}| - |r_{\hat{\ell}_s,\hat{n}^{\diamond}_s}| > \Psi$   (17a)
$|r_{\hat{\ell}_s,\hat{n}_s}| > \Gamma$   (17b)

Specifically, eq. (17a) is used to explicitly verify the accuracy of the FSK decision. In fact, if the difference between the metrics of $\hat{n}_s$ and $\hat{n}^{\diamond}_s$ is larger than a suitably defined threshold Ψ, it follows that the computation of the N matched filters returns one output, namely the one associated with $\hat{n}_s$, significantly higher than the others. So, it is likely that $\hat{n}_s$ correctly identifies the index of the received FSK symbol. The reference threshold is defined as Ψ = 0.25(A_{M-1} - A_0)/(M-1) and is a function of the expected distance between adjacent PAM amplitudes, with 0.25 being an empirically chosen scaling factor. Eq. (17b) allows us not only to explicitly verify the reliability of the FSK detection, but also to understand whether the decision on PPM has been taken correctly. In this regard, if the N matched filters computed for FSK detection return very low output values, it may be that a wrong decision on PPM has led to a misaligned identification of the FSK signal within y_u(t), causing the FSK filtering to be unmatched. On the other hand, if the highest filter output $|r_{\hat{\ell}_s,\hat{n}_s}|$ exceeds a certain threshold Γ, conveniently chosen to deal with the energy detector behavior, it is likely that the PPM detection has been performed reliably. To this aim, the threshold has been set as Γ = 0.25|A_min|, with A_min being the minimum amplitude of a PAM symbol and 0.25 representing a scaling factor as for Ψ. Therefore, the joint meeting of eqs. (17a)-(17b) should prevent a wrong decision on the PPM-FSK symbol pair $(\hat{\ell}_s, \hat{n}_s)$. Otherwise, if eq. (17) is not met, we move to the next iteration and get back to the energy detection stage by evaluating:

$\hat{\ell}_{s+1} = \arg\max_{\ell \in \mathcal{L}\setminus\{\hat{\ell}_0,\dots,\hat{\ell}_s\}}\, \xi(\ell)$   (18)

that is, a new decision on the PPM symbol part, excluding the indexes found during the previous iterations. Once the new decided PPM symbol is available, we proceed again with the FSK detection by computing eqs. (14)-(16) and checking whether, given $\hat{\ell}_{s+1}$, eq. (17) is finally met. Such a procedure is repeated until eq. (17) is verified or, at most, L times, corresponding to the maximum number of decisions that can be considered for PPM, thus realizing the feedback path highlighted in Fig. 5. In case eq. (17) is never met, the final symbol decision is the one related to iteration s = 0. Finally, for PAM detection we can still resort to eq. (10). It is important to note that in the FBR, as in the QOR, the number of filtering operations and symbol comparisons is not fixed, since it depends on the meeting of the conditions in eq. (17). Hence, the cost is O_FBR = L + (s+1)·N + M, where the fixed terms L and M are related to the implementation of the PPM energy detector and the PAM detector, respectively, while the remaining variable term is a function of the number of feedback iterations.
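The feedback loop of eqs. (16)-(18) can then be sketched on top of the same helpers; note that in this sketch the thresholds Ψ and Γ are applied in the received-metric domain, i.e., scaled by ρĥ, which is an assumption of the sketch (the paper defines them on the amplitude alphabet):

```python
import numpy as np

def feedback_receiver(yu, L, N, A, rho, h_hat, Tp, beta, fs):
    """FBR: DR-style decisions retried over PPM candidates until eq. (17) holds."""
    A = np.asarray(A, dtype=float)
    psi = 0.25 * rho * h_hat * (A[-1] - A[0]) / (len(A) - 1)   # eq. (17a) threshold
    gamma = 0.25 * rho * h_hat * np.min(np.abs(A))             # eq. (17b) threshold
    xi = ppm_energies(yu, L, Tp, beta, fs)
    order = np.argsort(xi)[::-1]          # eq. (18): next-best PPM slot each pass
    fallback = None
    for l_hat in order:                   # at most L feedback iterations
        seg = yu[int(round(l_hat * beta * Tp * fs)):][:int(round(Tp * fs))]
        r = np.array([np.mean(seg * fsk_square_wave(n, Tp, fs)) for n in range(1, N + 1)])
        k = int(np.argmax(np.abs(r)))
        runner_up = np.partition(np.abs(r), -2)[-2] if N > 1 else 0.0   # eq. (16)
        if fallback is None:
            fallback = (int(l_hat), k + 1, r[k])               # keep the s = 0 decision
        if abs(r[k]) - runner_up > psi and abs(r[k]) > gamma:  # eqs. (17a)-(17b)
            fallback = (int(l_hat), k + 1, r[k])
            break
    l_hat, n_hat, metric = fallback
    m_hat = int(np.argmin((rho * h_hat * A - metric) ** 2))    # eq. (10)
    return l_hat, n_hat, m_hat
```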
Remark - about the possible receiver selection

Although the implementation of the DR is worthwhile in terms of cost, the optimum receiver achieves the minimum error rate, thus granting the highest level of reliability. Hence, the choice of the most convenient receiver can be seen in two different ways. First, if the setup is fixed and no significant changes to the communication link are expected, once the parameters are assigned it is possible to select the receiver that achieves the requested target error-rate performance while optimizing the receiver computational effort. This represents an offline solution. Second, if the channel is subject to variations (for example, a user moving in a room), the received optical power changes and so does the signal-to-noise ratio (SNR). This implies that, if the receiver is implemented in software, it is possible to realize an SNR-threshold-based tuning of the detection method, so that the channel conditions drive the selection of the most convenient receiver. By doing so, the requested reliability level can be provided at the lowest processing cost. We comment further on the possible use of the receiver selection procedure in the numerical results section.

IV. NUMERICAL RESULTS

In this section, we investigate the performance of the proposed multi-dimensional modulation scheme, highlighting the benefits brought by such an approach in terms of both communication reliability and illumination control. Furthermore, we evaluate the effectiveness of the three sub-optimal receivers presented in the previous section, discussing the trade-off between computational complexity and error rate with respect to the optimum receiver case. Matlab software was employed to simulate the transmission of 10⁷ symbols over a point-to-point optical link. In addition, some results obtained from experimental validation are also reported and discussed. Different communication distances between transmitter and receiver have been considered, so as to investigate the performance as a function of the average electrical SNR per symbol, formally defined as:

$\gamma = \frac{1}{K_s}\sum_{i=0}^{K_s-1} \frac{(\rho P_i h)^2}{\sigma_w^2}$   (19)

where K_s = L·N·M is the size of the considered multi-dimensional modulation vocabulary and P_i is the power of the i-th transmitted symbol, with i = 0, 1, 2, ..., K_s-1. For the simulations, we considered normalized LED and PD parameters, since performance is evaluated as a function of the SNR. On the other hand, a detailed description of the hardware parameters and setup is provided when presenting the experimental results.
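Before moving to the comparisons, the link between geometry and SNR can be made concrete by evaluating eq. (2) and eq. (19) together; a sketch with purely hypothetical hardware values (none of the numbers come from the paper):

```python
import numpy as np

def los_gain(A_pd, D, phi, psi, fov, half_angle, g=1.0):
    """Lambertian LOS channel gain of eq. (2). All angles in radians."""
    if psi > fov:
        return 0.0
    n = -np.log(2) / np.log(np.cos(half_angle))   # Lambertian emission order
    return (n + 1) * A_pd / (2 * np.pi * D**2) * np.cos(phi)**n * np.cos(psi) * g

def avg_snr(P, rho, h, sigma_w):
    """Average electrical SNR per symbol, eq. (19); P holds the K_s symbol powers."""
    P = np.asarray(P)
    return np.mean((rho * P * h) ** 2) / sigma_w**2

# Hypothetical example: 1 cm^2 PD, 1.5 m link, perfect alignment, 60-deg semi-angle
h = los_gain(A_pd=1e-4, D=1.5, phi=0.0, psi=0.0,
             fov=np.radians(60), half_angle=np.radians(60))
gamma_dB = 10 * np.log10(avg_snr(P=[0.25, 0.5, 0.75, 1.0], rho=0.5, h=h, sigma_w=1e-6))
```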
A. Optimum and multi-dimensional modulation performance

As outlined in Sec. II, the multi-dimensional modulation allows the high spectral efficiency of PAM to be combined with the reliability of PPM and FSK, so as to achieve good data rates while providing robustness to errors. In this direction, we now compare the BER performance of uncoded and coded PAM with some PPM-FSK-PAM schemes.

Comparison with uncoded strategy: we selected some modulation formats providing target spectral efficiencies η* = 1 bit/s/Hz and η* = 2 bit/s/Hz, achieved with 2-PAM (or OOK) and 4-PAM, respectively. Such PAM formats have been taken as a reference since they are the most widespread in OWCSs [Ma et al.; Wang et al.]. In fact, the optical signal starts suffering from high attenuation after a few tens of centimeters of propagation; therefore, using higher-order PAM modulations turns out to be unreliable due to the detection errors caused by the presence of noise. According to eq. (5), the use of PPM and FSK (that is, L > 1 and N > 1) lowers the spectral efficiency. So, there is a limited number of combinations of L, N and β that guarantee the achievement of the target η*. Moreover, the spectral efficiency reduction caused by the use of PPM and FSK must be counterbalanced by increasing the PAM modulation order as well. This is what happens when dealing with the considered 4-PPM/2-FSK/4-PAM case. Unfortunately, the use of PPM and FSK to achieve higher reliability requires the PAM order M to be increased up to 4 so as to guarantee η* = 1 bit/s/Hz. Hence, the benefits brought by PPM and FSK in terms of robustness to errors are lost, since the use of 4-PAM makes the communication more sensitive to errors than uncoded 2-PAM. As a consequence, such a kind of multi-dimensional modulation is not convenient with respect to 2-PAM. Moving to the more challenging case of η* = 2 bit/s/Hz, the corresponding results are reported in Fig. 6(b).

Comparison with coded strategy: an easy comment and objection is that, in order to grant reliability, a coding strategy may be applied, obviously at the cost of a spectral efficiency reduction, since a part of the bits sent on the channel are for error correction. In this regard, we proceed with an additional performance comparison related to block coding (BC) with coding rate R = 1/2, meaning that the number of bits entering the coder is half of the number of bits leaving it. In order to provide a comparison at the same spectral efficiency η* = 1 bit/s/Hz, we considered 4-PAM. As just mentioned, the role of channel coding is to improve the robustness to errors, but at the expense of a rate reduction. Therefore, the achievement of the target spectral efficiency requires the PAM order to be necessarily increased. From the results, we can appreciate that the performance of coded 4-PAM is significantly lower than that of uncoded 2-PAM, thus revealing that the proposed multi-dimensional modulation is more convenient than coding. Of course, many other coding schemes may be considered, but the achievement of good performance unavoidably requires an increase in decoding strategy complexity. Differently, in 16-PPM/4-FSK/2-PAM with β = 0.05 and 2-PPM/2-FSK/2-PAM with β = 0.5 the PAM order remains the lowest possible, while PPM and FSK are introduced to achieve higher reliability. In fact, these solutions perform better than uncoded 2-PAM even though providing the same spectral efficiency. In Fig. 6(b), the same problem concerns 16-PAM with block coding and R = 1/2, where the unreliability of the high PAM modulation order cannot be properly counterbalanced by coding. On the other hand, the implementation of 16-PPM/2-FSK/4-PAM turns out to be more convenient than uncoded 4-PAM, meaning that PPM and FSK are fruitfully exploited to achieve a lower BER.

Results discussion
Interestingly, it can be noted from Fig. 6 that the most reliable modulation schemes achieving both η* = 1 bit/s/Hz and η* = 2 bit/s/Hz consider the use of a high PPM order, equal to L = 16, which may induce a significant spectral efficiency decrease according to eq. (5). However, in both cases we have β = 0.05, allowing the realization of a non-orthogonal PPM, which provides a lower spectral efficiency reduction with respect to orthogonal PPM (where β = 1). Furthermore, it is worth highlighting that, in general, when dealing with the proposed multi-dimensional modulation, using high PPM modulation formats may be more convenient than increasing the FSK order. In fact, regarding PPM, we have two degrees of freedom to adjust the spectral efficiency, namely L and β. On the other hand, FSK performance is ruled by the unique parameter N. Hence, by increasing N the spectral efficiency decreases, and there is no other way to mitigate such a reduction, as instead happens in PPM by adjusting β. With the proposed scheme we have the capability to select the most appropriate configuration to guarantee the quality-of-service achievement based on the propagation conditions, while selecting the least expensive approach from a computational point of view. In this direction, in order to evaluate the computational effort required by the proposed sub-optimal receivers with respect to the optimum one, we introduce now the metric C_d = O_d / O_OR, where d denotes the receiver under evaluation. Since O_DR is fixed, the use of the DR allows a significant computational effort saving of about 70% and 40% with respect to the OR for 16-PPM/4-FSK/2-PAM and 16-PPM/2-FSK/4-PAM, respectively. On the other hand, the performance of the QOR and FBR depends on the SNR impacting eq. (12) and eq. (17). In this regard, Table I reports the results related to the considered QOR and FBR, measured at different SNR levels, and considering 2-PPM/2-FSK/2-PAM, 16-PPM/4-FSK/2-PAM and 16-PPM/2-FSK/4-PAM, which resulted as the best performing schemes from the previous analysis. Interestingly, it can be noted that, when dealing with 2-PPM/2-FSK/2-PAM, the sub-optimal receivers seem to be more costly than the optimum one. Such a result is explained by the fact that, since L = 2, the filtering operations for PPM-FSK detection required by the QOR and FBR are essentially the same as for the OR. In fact, for the QOR we have that |L_θ| can be 1 or 2, thus very close or equal to L, while in the FBR the running of a single feedback iteration leads the computational cost to be essentially the same as in the optimum case. Moreover, both the QOR and FBR consider the implementation of an energy detector as the initial processing stage, which is not present in the OR. This is the reason why C_QOR and C_FBR are greater than or equal to 1. Moving to 16-PPM/4-FSK/2-PAM and 16-PPM/2-FSK/4-PAM, where L is very large, the processing saving provided by the QOR and FBR (despite the implementation of the PPM energy detector) becomes remarkable since, in general, in the QOR |L_θ| may be much smaller than L and in the FBR the feedback iterations may be few. Specifically, in 16-PPM/4-FSK/2-PAM the cost of the QOR is less than half of that of the OR, while the cost of the DR is even lower, with processing savings larger than 60%. In the case of 16-PPM/2-FSK/4-PAM, representing a higher performing modulation since it provides a spectral efficiency equal to 2 bit/s/Hz, the computational effort requested by the QOR and DR increases. However, more than 30% of the processing is avoided with respect to the OR case, and this result is still significant. Furthermore, the computational cost characterizing the QOR and FBR tends to decrease as the SNR grows, even though such a decrease is only slight.
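The fixed-cost part of the C_d metric is easy to verify; the following check reproduces the ~70% and ~40% DR savings quoted above (QOR and FBR costs are omitted since they additionally depend on |L_θ| and on the iteration count s):

```python
def costs(L, N, M):
    """Fixed computational costs: O_OR = L*N + M, O_DR = L + N + M."""
    return {"OR": L * N + M, "DR": L + N + M}

for label, (L, N, M) in {"16-PPM/4-FSK/2-PAM": (16, 4, 2),
                         "16-PPM/2-FSK/4-PAM": (16, 2, 4)}.items():
    c = costs(L, N, M)
    saving = 1 - c["DR"] / c["OR"]        # 1 - C_DR
    print(f"{label}: O_OR={c['OR']}, O_DR={c['DR']}, DR saving={saving:.0%}")
# -> 16-PPM/4-FSK/2-PAM: O_OR=66, O_DR=22, DR saving=67%
# -> 16-PPM/2-FSK/4-PAM: O_OR=36, O_DR=22, DR saving=39%
```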
Regarding this dependence, note that both 16-PPM/4-FSK/2-PAM and 16-PPM/2-FSK/4-PAM consider a very small β, so it follows that the PPM energy detector returns L values that are comparable in amplitude, since the PPM is far from being orthogonal. Hence, the number of filtering operations for QOR and FBR may not be the minimum one. On the other hand, by enlarging β the PPM detection becomes more reliable. As a consequence, in QOR L_θ tends to 1 and in FBR the feedback becomes rarely necessary (thus approaching the performance of DR). So far, we have only briefly discussed the role of β; it is now important to explain its impact in depth. As previously outlined, the use of a high PPM order, as for 16-PPM/4-FSK/2-PAM and 16-PPM/2-FSK/4-PAM, is made possible by the realization of a non-orthogonal signaling with β << 1, allowing the spectral efficiency reduction due to L to be mitigated. By referring to the considered schemes, we now show in Fig. 8 how the choice of β governs the trade-off between BER and spectral efficiency. Results have been obtained at γ = 2.5 dB and γ = 8 dB for 16-PPM/4-FSK/2-PAM and 16-PPM/2-FSK/4-PAM, respectively, which represent the SNR conditions where a BER equal to 10⁻³ is achieved (see Fig. 6). Specifically, each point in Fig. 8 is characterized by a unique x coordinate, related to the BER, and a double y coordinate, related to β and η. Here, β is the independent variable (reported on the y-axis instead of the x-axis, as in conventional plotting), while BER and η are the dependent ones. In fact, according to eq. (5), there is a direct relationship between the values reported on the left y-axis (β) and on the right one (η) of Fig. 8. Furthermore, β also impacts the reliability of the PPM detection and hence the BER, reported on the x-axis. Referring to Fig. 8(a), describing the performance of 16-PPM/4-FSK/2-PAM providing η* = 1 bit/s/Hz, it is possible to appreciate that, for any detection mechanism employed at the receiver, the BER decreases as β grows. When dealing with OR and QOR, the high reliability of the considered detection approaches allows the use of β < 0.15 to achieve a BER below 10⁻⁵. On the other hand, for FBR and DR, representing less complex solutions, a larger β would be required for a performance improvement. Moving to the case of 16-PPM/2-FSK/4-PAM providing η* = 2 bit/s/Hz, the performance of all the considered receivers follows an essentially similar behavior, with a BER below 10⁻⁵ achievable only for β > 0.2. Of course, it can be verified from the right y-axis of Fig. 8 that the increase of β to improve the communication reliability is unavoidably paid in terms of spectral efficiency. We recall that the main goal of our approach is to combine the robustness of PPM and FSK with the high spectral efficiency of PAM. By doing so, a more reliable communication can be achieved without excessively sacrificing the data rate. This is what typically happens when coding is applied, with overhead being introduced to protect data from propagation errors at the cost of a reduced rate. In this direction, we now provide a comparison with some multi-dimensional modulations and coded PAM schemes. Specifically, we considered BC with R = 1/2 and R = 1/3 and Reed-Solomon (RS) coding with R = 11/15, representing the most widespread techniques providing a good trade-off between effectiveness and error protection complexity. Given the reference BER equal to 10⁻⁴, we show in Fig. 9 a map where the points with coordinates (γ, η) describe the performance of the schemes under investigation.
In Fig. 9, the multi-dimensional modulations are highlighted in red, pure PAM schemes in black, PAM with BC in blue and PAM with RS coding in green. Different modulation orders referring to the same scheme are differentiated with markers. For the multi-dimensional modulations, we considered the performance achieved with the OR. Furthermore, Fig. 9 also reports the Shannon limit curve (note that the curve does not show the typical logarithmic behavior, since the SNR x-axis is on a logarithmic scale), so that it is possible to infer how close the modulation performance is to the maximum achievable. Overall, it can be seen how the proposed multi-dimensional modulations outperform the other coded and uncoded PAM solutions. In fact, among markers of the same color, those referring to multi-dimensional schemes are closer to the Shannon limit (see the square red marker representing 16-PPM/4-FSK/2-PAM, the red circled marker describing 4-PPM/2-FSK/2-PAM and the red star-shaped marker related to 8-PPM/2-FSK/4-PAM). This means that, for a given pair of target BER and spectral efficiency to be provided, multi-dimensional modulations perform better in terms of SNR. Finally, it is worth noting that the results shown in Fig. 9 allow multi-dimensional modulations and coding-based solutions to be compared only from a communication point of view. Another fundamental benefit brought by multi-dimensional modulation concerns illumination, since the use of FSK signaling allows the average source output power to be constant, thus avoiding problems of dimming and flickering. On the other hand, when using PAM, with or without coding, although the emitted symbols can be assumed equiprobable, it is not possible to achieve lighting control, which may represent a severe limitation in indoor VLC scenarios where illumination and connectivity services must be provided simultaneously. To sum up the main results obtained in this section, we can highlight that the multi-dimensional modulation scheme allows a dynamic and computationally cost-efficient selection of the most suitable configuration, with an eye on the power level of the light, so as to guarantee homogeneous light levels for users. This is a key point for deploying real-world VLC systems. In other words, with multi-dimensional modulation schemes, we aim to provide the most cost-effective solution while guaranteeing the quality of service imposed by the underlying applications. Another important consideration regards energy efficiency: as we can observe in Fig. 9, multi-dimensional modulations allow the same BER to be achieved with a reduced SNR with respect to other coded and uncoded schemes or, in other words, at the same SNR level it is possible to reduce the BER, with an evident impact in terms of energy reduction. From this we can infer that the proposed modulation scheme enables higher energy efficiency.

C. Experimental validation

In order to validate the feasibility of the proposed multi-dimensional modulation, we performed several experiments by realizing an ad hoc test-bed based on the Software Defined paradigm (Fig. 10(a)). All the PDs are arranged in parallel, acting as a single current generator. Although we actually realized a Multiple-Input Multiple-Output configuration, the LED and PD alignment has been accurately performed so as to let the system act in Single-Input Single-Output mode. This is because using a single LED does not allow reasonable power levels to be generated, due to the LED characteristics (only 150 mW power each).
In detail, we evaluated the communication performance by transmitting a sequence of 10⁸ symbols over two different link distances, namely 175 cm and 75 cm, where we measured an SNR equal to 5.6 dB and 13.9 dB, respectively. Regarding signal detection, we implemented the optimum receiver and the QOR, with the latter having proven to be the most effective sub-optimal detector. The results, expressed in terms of BER, are reported in Table III. For the two considered receivers and communication distances, we compared the BER values achieved with simulations and experiments. Specifically, it is possible to appreciate that the test results differ only slightly from the simulated ones. The very limited mismatch is due to the fact that the realized test-bed is not a pure single-LED/single-PD point-to-point link, so the propagation involves different angular emission paths that are not accounted for in the simulation model. However, it is possible to appreciate that at 75 cm we achieve a BER of 3.3 · 10⁻⁷ where the simulations report 2 · 10⁻⁷. Obviously, at 175 cm the BER is higher in both cases (simulation and experiment) since the SNR is quite low.

V. CONCLUSION

In this work we have proposed a novel joint modulation approach based on different techniques, namely PAM, FSK and PPM. The main idea is to combine the different schemes in order to exploit their key features and limit the disadvantages of each modulation. The selection of the different schemes is based on the rationale that amplitude, frequency and time information can be exploited in an effective way. The combination of the different schemes is not trivial, and they have to be combined so as to keep a constant power level in a VLC system that is designed for illumination and communication simultaneously. In particular, we have proposed a framework that has been shown, through both numerical evaluation and experimental results, to be effective and to outperform block coding schemes. The proof of concept demonstrates the feasibility of the system by means of commercial components, such as LEDs and PDs, which do not require complex, ad hoc hardware to integrate the framework. Performance results are very encouraging and prove not only the feasibility of such frameworks, but also their superiority with respect to block-coding-based approaches.

Fig. 2. Block diagram of the PPM-FSK-PAM modulator.
Fig. 3. Graphical representation of modulated symbols with different combinations of ℓ, n and m.
Fig. 4. Block diagram of optimum receiver (OR) and quasi-optimum receiver (QOR, in blue).
Fig. 5. Block diagram of direct receiver (DR) and feedback receiver (FBR).

Given η* = 1 bit/s/Hz, 2-PAM has been compared to 2-PPM/2-FSK/2-PAM with β = 0.5, to 4-PPM/2-FSK/4-PAM with β = 0.5 and to 16-PPM/4-FSK/2-PAM with β = 0.05. The BER results are plotted in Fig. 6(a), with the curves related to the multi-dimensional modulations being obtained considering OR-based detection. In particular, it can be appreciated that the multi-dimensional modulations characterized by 16-PPM/4-FSK/2-PAM and 2-PPM/2-FSK/2-PAM are able to outperform 2-PAM, while 4-PPM/2-FSK/4-PAM provides worse performance. Such results can be explained by recalling eq. (5), as discussed above.

Fig. 6. BER performance, based on the OR, of different modulation schemes providing the same spectral efficiency η*.
Fig. 6(b) shows the BER for those schemes providing a target spectral efficiency η* = 2 bit/s/Hz, that are uncoded 4-PAM, 2-PPM/2-FSK/8-PAM with β = 0.25, 4-PPM/2-FSK/16-PAM with β = 0.25 and 16-PPM/2-FSK/4-PAM with β = 0.05. Curves still refer to the performance achieved with the OR. Even in this case, it can be observed that 2-PPM/2-FSK/8-PAM and 4-PPM/2-FSK/16-PAM are not feasible, since the introduction of PPM and FSK requires the increase of the PAM order to achieve the desired spectral efficiency. So, as discussed before, the benefits of PPM and FSK are canceled by the use of an unreliable PAM order.

Fig. 7. BER performance of different multi-dimensional modulations with optimal and sub-optimal receivers.

Fig. 8. Performance trade-off between BER and η* as a function of β.

As described in the schematic diagram in Fig. 10(d), at the transmit side the test signals have been generated through an Arduino board, using low-level code in order to directly operate on the registers of the micro-controller, thus avoiding unwanted latency which could compromise the correctness of the tests. Data were transmitted by modulating the optical emission of a Kingbright 104500 10 mm red LED array. Each light is controlled by a dedicated digital output of the board, and the number of LEDs simultaneously turned on determines the output optical flux. This approach allows the PAM modulation to be easily implemented, avoiding non-linearities due to the current-optical flux characteristic of the LED. Since each LED is characterized by a limited output power, an array of LEDs has been employed, as discussed above.

Fig. 10. Experimental setup description.

The received signal has not been amplified and no filtering operations have been performed. The transimpedance operation has been guaranteed simply using a 1 MΩ resistor. Other information about the receiver and the transmitter is summarized in Table II. The experimental validation concerned 16-PPM/2-FSK/4-PAM with β = 0.05, chosen as it represents the most efficient scheme in terms of spectral efficiency and reliability among those previously investigated.

TABLE I — Computational cost for QOR and FBR.

Modulation scheme             | Subopt. receiver | γ = 3 dB | 6 dB | 9 dB | 12 dB
2-PPM/2-FSK/2-PAM (β = 0.5)   | QOR              | 1.15     | 1.15 | 1.15 | 1.16
2-PPM/2-FSK/2-PAM (β = 0.5)   | FBR              | 1.01     | 1.00 | 1.00 | 0.99
16-PPM/4-FSK/2-PAM (β = 0.05) | QOR              | 0.49     | 0.47 | 0.46 | 0.45
16-PPM/4-FSK/2-PAM (β = 0.05) | FBR              | 0.37     | 0.34 | 0.33 | 0.33
16-PPM/2-FSK/4-PAM (β = 0.05) | QOR              | 0.70     | 0.69 | 0.67 | 0.65
16-PPM/2-FSK/4-PAM (β = 0.05) | FBR              | 0.64     | 0.63 | 0.61 | 0.61

TABLE II — Experimental parameters.

Receiver (photodiode CENTRONIC) | Transmitter (LED Kingbright)
Model: OSD15-5T                 | Model: RED 104500
Active area: 15 mm²             | Output power: 150 mW
Responsivity: 0.4 (620 nm)      | Operative current: 30 mA
Bandwidth: 29.1 MHz             | Light flux: 1.4 lm
FOV: 45°                        | FOV: 80°
Rise time: 12 ns                | Forward current: 30 mA

TABLE III — BER comparison between simulation and experiment with OR and QOR.

Detection type | 75 cm link (γ = 13.9 dB): Sim. | Exp.       | 175 cm link (γ = 5.6 dB): Sim. | Exp.
OR             | 2 · 10⁻⁷                       | 2.6 · 10⁻⁷ | 8.15 · 10⁻³                    | 1.16 · 10⁻²
QOR            | 3.3 · 10⁻⁷                     | 3.9 · 10⁻⁷ | 1.45 · 10⁻²                    | 2.03 · 10⁻²
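To relate the Table III figures to the length of the experiment: assuming each 16-PPM/2-FSK/4-PAM symbol jointly carries log₂16 + log₂2 + log₂4 = 7 bits, as in the proposed scheme, a BER of 3.3 · 10⁻⁷ over 10⁸ symbols corresponds to roughly 231 bit errors. A minimal sketch of this bookkeeping (ours):

```python
import math

def ber_from_counts(bit_errors: int, n_symbols: int, L: int, N: int, M: int) -> float:
    """BER from an error count for L-PPM/N-FSK/M-PAM symbols."""
    bits_per_symbol = math.log2(L) + math.log2(N) + math.log2(M)  # 7 bits here
    return bit_errors / (n_symbols * bits_per_symbol)

print(ber_from_counts(231, 10**8, L=16, N=2, M=4))  # ~3.3e-07
```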
B. Trade-off between computational complexity, spectral efficiency and error rate

Having identified the most suitable multi-dimensional modulation formats allowing the achievement of the considered target spectral efficiency, we now investigate the performance of the proposed sub-optimal receivers. The results are still shown in terms of BER as a function of the SNR. Dealing with η* = 1 bit/s/Hz, Fig. 7(a) shows that QOR and OR guarantee the same reliability level, with the performance of FBR and DR being very close as well. This is due to the fact that, since L, N and M are equal to the lowest possible order, that is 2, the number of transmittable symbols is the lowest possible as well; hence, the reliability performance is less sensitive to the type of detection mechanism. Regarding 16-PPM/2-FSK/4-PAM, as depicted in Fig. 7(b), the sub-optimal receivers still provide performance close to the optimum one. In this case, where η* = 2 bit/s/Hz, symbol detection is more challenging than for the previously investigated schemes. Therefore, having similar performance between optimal and sub-optimal solutions allows the receiver selection to be potentially performed based on computational effort constraints.

ACKNOWLEDGMENT

This publication has been based upon work from COST Action CA19111 NEWFOCUS, supported by COST (European Cooperation in Science and Technology).
04115890
en
[ "spi.meca.solid" ]
2024/03/04 16:41:26
2022
https://theses.hal.science/tel-04115890/file/2022theseGuimaraes-de-OliveraMJ.pdf
Efficient strategies to speed up material characterization and parameter identification for virtual forming of metallic sheets

Miguel Jorge Guimarães de Oliveira

2022

agradecimentos

Ao Professor Doutor António Andrade-Campos e à Professora Doutora Sandrine Thuillier, pelo desafio lançado e confiança demonstrada no meu trabalho. A vossa orientação, enorme disponibilidade, compreensão e amizade foram fundamentais para o sucesso deste trabalho. O meu muito obrigado. A todos aqueles que me acolhem na Universidade de Aveiro desde sempre, o meu sincero agradecimento. Aos colegas do M9'Calibre, por criarem um grupo de trabalho divertido, com importantes momentos de discussão científica e de lazer promovidos. Em especial, ao Doutor João Martins, que desde o primeiro dia foi um constante apoio e exemplo para mim. Agradeço a sua disponibilidade nas nossas discussões, que muito contribuíram para o sucesso deste trabalho. A todos aqueles que me receberam na Université Bretagne Sud, onde realizei uma grande parte deste trabalho. Os períodos vividos em Lorient muito contribuíram para o desenvolvimento deste trabalho e para um enorme crescimento pessoal. Em especial, gostaria de agradecer ao Amine Lagroum, à Cynthia Tayeh e ao Federico Amenini pela amizade desde o primeiro dia. Merci à tous! A todos aqueles que me receberam na Università Politecnica delle Marche, onde tive oportunidade de realizar uma parte deste trabalho. Ao Professor Doutor Marco Rossi e ao Doutor Atillio Lattanzi, a vossa disponibilidade e conhecimento transmitido em muito me enriqueceram. De salientar, também, as experiências vividas e pessoas que conheci durante os três meses em Ancona. Em especial, ao Gabrielle Buratti, colega de casa e amigo, pela amizade e conversas proporcionadas. Grazie a tutti! À Fundação para a Ciência e Tecnologia, pelo apoio financeiro concedido através da bolsa de doutoramento SFRH/BD/143665/2019. Agradeço, também, o apoio financeiro concedido pela associação ESAFORM e pela Région Bretagne durante o período de mobilidade na Università Politecnica delle Marche. À minha família, cuja presença é fundamental na minha vida e a quem devo grande parte daquilo que sou como pessoa. Aos meus pais, pelo apoio incondicional e compreensão que só eles conseguem dar. À minha irmã e ao Luís, pela amizade, carinho e compreensão. Aos meus sobrinhos, Afonso e Francisca, que são o meu maior orgulho e motivo de enorme felicidade. À Luísa, por sempre me apoiar e também questionar para que possa tornar-me uma melhor pessoa. A tua força, motivação e compreensão foram peças essenciais para o sucesso desta etapa.

keywords

constitutive models, inverse methodologies, parameter identification, heterogeneous mechanical tests, optimization algorithms, volume reconstruction

abstract

The mechanical design of sheet metal forming parts tends to be more virtual, reducing delays and manufacturing costs. Reliable numerical simulations can also lead to optimized metallic parts using accurately calibrated advanced constitutive models.
Thus, the aim of this thesis is to improve the representation of the mechanical behavior of the material in the numerical model, by developing efficient and accurate methodologies to calibrate advanced constitutive models. A recent trend in material characterization is the use of a limited number of heterogeneous mechanical tests, which provide more valuable data than classical quasi-homogeneous tests. Yet, the design of the most suitable tests is still an open question. To that end, an overview of heterogeneous mechanical tests for metallic sheets is provided. However, no standards exist for such tests, so specific metrics to analyze the achieved mechanical states are suggested and applied to four tests. Results show that the use of various metrics provides a good basis to qualitatively and quantitatively evaluate heterogeneous mechanical tests. Due to the development of full-field measurement techniques, it is possible to use heterogeneous mechanical tests to characterize the behavior of materials. However, no analytical solution exists between the measured fields and the material parameters. Inverse methodologies are required to calibrate constitutive models using an optimization algorithm to find the best material parameters. Most applications tend to use a gradient-based algorithm without exploring other possibilities. The performance of gradient-based and gradient-free algorithms in the calibration of a thermoelastoviscoplastic model is discussed in terms of efficiency and robustness of the optimization process. Often, plane stress conditions are assumed in the calibration of constitutive models. Nevertheless, it is still unclear whether these are acceptable when dealing with large deformations. To further understand these limitations, the calibration of constitutive models is compared using the virtual fields method implemented in 2D and 3D frameworks. However, the 3D framework requires volumetric information of the kinematic fields, which is experimentally difficult to obtain. To address this constraint, an already existing volume reconstruction method, named internal mesh generation, is further improved to take into account strain gradients in the thickness. The uncertainty of the method is quantified through virtual experiments and synthetic images. Overall, the impact of this thesis is related to (i) the importance of establishing standard metrics in the selection and design of heterogeneous mechanical tests, and (ii) enhancing the calibration of advanced constitutive models from a 2D to a 3D framework.

palavras chave

modelos constitutivos, metodologias inversas, identificação de parâmetros, ensaios mecânicos heterogéneos, algoritmos de otimização, reconstrução volúmica

resumo

O projeto mecânico de peças por conformação de chapas metálicas tende a ser mais virtual, reduzindo atrasos e custos de produção. Simulações numéricas confiáveis também podem levar a peças optimizadas usando modelos constitutivos avançados calibrados com precisão. Assim, o objetivo desta tese é melhorar a representação do comportamento mecânico do material no modelo numérico, através do desenvolvimento de metodologias eficientes e precisas para a calibração de modelos constitutivos avançados. Uma tendência recente na caracterização de materiais é o uso de um número limitado de ensaios mecânicos heterogéneos, que fornecem dados mais valiosos do que os ensaios clássicos quase-homogéneos. No entanto, a concepção de ensaios mais adequados ainda é uma questão em aberto.
Este trabalho detalha os ensaios mecânicos heterogêneos para chapas metálicas. No entanto, não existem ainda normas para estes ensaios, pelo que métricas específicas para analisar os estados mecânicos são sugeridas e aplicadas a quatro ensaios. Os resultados mostram que o uso de várias métricas disponibiliza uma boa base para avaliar ensaios mecânicos heterogéneos. Devido ao desenvolvimento de técnicas de medição de campo total, é possível utilizar ensaios mecânicos heterogéneos para caracterizar o comportamento dos materiais. No entanto, não existe uma solução analítica entre os campos medidos e os parâmetros do material. Metodologias inversas são necessárias para calibrar os modelos constitutivos usando um algoritmo de otimização para encontrar os melhores parâmetros do material. A maioria das aplicações tende a usar um algoritmo baseado em gradiente sem explorar outras possibilidades. O desempenho de vários algoritmos na calibração de um modelo termoelastoviscoplástico é discutido em termos de eficiência e robustez do processo de otimização. Frequentemente, são utilizadas condições de estado plano de tensão na calibração de modelos constitutivos, hipótese que é questionada quando se trata de grandes deformações. A calibração de modelos constitutivos é comparada usando o método de campos virtuais implementado em 2D e 3D. No entanto, a implementação 3D requer informações volumétricas dos campos cinemáticos, o que é experimentalmente difícil de obter. Um método de reconstrução volúmica já existente é melhorado para considerar os gradientes de deformação ao longo da espessura. A incerteza do método é quantificada através de ensaios virtuais e imagens sintéticas. No geral, o impacto desta tese está relacionado com (i) a importância de estabelecer métricas na seleção e concepção de ensaios mecânicos heterogéneos, e (ii) promover desenvolvimentos na calibração de modelos constitutivos avançados de 2D para 3D. mots clés modèles de comportement, méthodologies inverses, identification de paramètres, essais mécaniques hétérogènes, algorithmes d'optimisation, reconstruction volumique résumé La conception mécanique des pièces métalliques tend à être plus virtuelle, réduisant les délais et les coûts de fabrication. Des simulations numériques fiables peuvent conduire à des pièces optimisées en utilisant des modèles mécaniques avancés calibrés avec précision. Ainsi, l'objectif de cette thèse est d'améliorer la représentation du comportement mécanique du matériau dans le modèle numérique, en développant des méthodologies efficaces et précises pour calibrer des modèles de comportement avancés. Une tendance récente dans la caractérisation des matériaux est l'utilisation d'un nombre limité d'essais mécaniques hétérogènes, qui fournissent des données plus riches que les essais classiques quasi-homogènes. Pourtant, la conception des tests les plus adaptés reste une question ouverte. Ce travail détaille les essais mécaniques hétérogènes pour les tôles métalliques. Cependant, aucune norme n'existe pour de tels tests, ainsi des métriques spécifiques pour analyser les états mécaniques sont suggérées et appliquées à quatre tests. Les résultats montrent que l'utilisation de diverses métriques fournit une bonne base pour évaluer des essais mécaniques hétérogènes. L'utilisation des essais mécaniques hétérogènes pour caractériser le comportement des matériaux est rendue possible par des mesures de champ. Cependant, aucune solution analytique n'existe entre les champs mesurés et les paramètres du matériau. 
Des méthodologies inverses sont nécessaires pour calibrer les modèles de comportement à l'aide d'un algorithme d'optimisation afin de déterminer les meilleurs paramètres de matériau. Un algorithme basé sur le gradient est très fréquemment utilisé, sans explorer d'autres possibilités. La performance de plusieurs algorithmes dans la calibration d'un modèle thermoélastoviscoplastique est discutée en termes d'efficacité et de robustesse du processus d'optimisation. Souvent, des conditions de contraintes planes sont supposées dans la calibration des modèles, hypothèse qui est remise en cause dans le cas de forte localisation des déformations. La calibration de modèles de comportement est comparée à l'aide de la méthode des champs virtuels développée dans les cadres 2D et 3D. Cependant, le cadre 3D nécessite des informations volumétriques des champs cinématiques, ce qui est expérimentalement difficile à obtenir. Une méthode de reconstruction volumique déjà existante est encore améliorée pour prendre en compte les gradients de déformation dans l'épaisseur. L'incertitude de la méthode est quantifiée par des expériences virtuelles, à l'aide d'images de synthèse. Dans l'ensemble, l'impact de cette thèse est lié à (i) l'importance d'établir des métriques dans la sélection et la conception d'essais mécaniques hétérogènes, et (ii) à faire progresser la calibration de modèles de comportement avancés d'un cadre 2D à un cadre 3D.

Introduction

Motivation

The mechanical design of sheet metal forming parts tends to be more and more virtual, by numerical simulation. The sheet metal forming industry is economically driven, so it is only natural that one of its main goals is to decrease the delays and costs associated with the manufacturing process. Nevertheless, designing better products and improving their performance by using innovative materials is also crucial [START_REF] Adams | Building Better Products with Finite Element Analysis[END_REF]. This is only possible due to the major developments in simulation and computational systems that have given every engineer access to, for example, finite element analysis (FEA) software. This has led to the development of many computational tools, some of general application and others tailored for specific tasks. In the scope of sheet metal forming, computational analysis software is meaningless without accurate information on the part being manufactured, ranging from something as simple as the initial blank dimensions to how the material behaves. Generally, industrial processes use phenomenological constitutive models to characterize the material behavior from a macroscopic description [START_REF] Banabic | Sheet Metal Forming Processes: Constitutive Modelling and Numerical Simulation[END_REF]. Compared to the alternative micromechanical models based on the crystallographic texture of the material, the phenomenological approach is much simpler, easier to implement, and computationally efficient. Their accuracy depends largely on the model's ability to reproduce the real behavior under several mechanical states and deformation amplitudes. Advanced phenomenological models can describe multiple phenomena, but require the determination of many material parameters. This leads to exhaustive experiments and complicated methodologies [START_REF] Yoshida | A model of large-strain cyclic plasticity describing the Bauschinger effect and workhardening stagnation[END_REF][START_REF] Cazacu | Orthotropic yield criterion for hexagonal closed packed metals[END_REF][START_REF] Reyne | A new concept for continuum distortional plasticity[END_REF].
Therefore, the topic of parameter identification of constitutive models has received increased attention due to the need for reliable input data [START_REF] Banabic | Advances in anisotropy of plastic behaviour and formability of sheet metals[END_REF]. Conventionally, the parameters of constitutive models are determined from experimental data acquired by standard mechanical tests, such as uniaxial, biaxial and shear tests [START_REF] Kuwabara | Measurement and analysis of differential work hardening in cold-rolled steel sheet under biaxial tension[END_REF][START_REF] Boger | Continuous, large strain, tension/compression testing of sheet material[END_REF][START_REF] Vincze | A comparison of the mechanical behaviour of an AA1050 and a low carbon steel deformed upon strain reversal[END_REF]. An advantage of these tests over other methods is the relatively straightforward way of obtaining the stress-strain curves required to calibrate constitutive models. These tests generate quasi-homogeneous strain and stress fields, describing the material behavior only for a specific mechanical state, ranging from uniaxial compression to biaxial tension (Oliveira et al. 2022b). Therefore, to characterize the material behavior for different mechanical states, additional tests need to be carried out. For example, to calibrate an advanced plasticity model describing the material anisotropy under various mechanical states, 13 quasi-homogeneous tests are required [START_REF] Barlat | Linear transfomation-based anisotropic yield functions[END_REF]. Moreover, the stress field generated by these tests does not resemble the multiple mechanical states simultaneously observed in many sheet metal forming processes [START_REF] Thuillier | Occurence of strain path changes in a two-stage deep drawing process[END_REF][START_REF] Banabic | Sheet Metal Forming Processes: Constitutive Modelling and Numerical Simulation[END_REF]. Recently, the development of non-contacting full-field measurement techniques has led to advances in experimental testing [START_REF] Grédiac | Full-Field Measurements and Identification in Solid Mechanics[END_REF]. Instead of obtaining a single stress-strain curve from a mechanical test, these techniques allow the extraction of complete coordinate fields from the surface of the specimen, which can then be used to calculate derived quantities, such as displacements or strains. One of the most widespread techniques is the digital image correlation (DIC) technique, which correlates the evolution of a pattern applied to the surface of the specimen [START_REF] Sutton | Image Correlation for Shape, Motion and Deformation Measurements: Basic Concepts, Theory and Applications[END_REF]. Consequently, these advances opened up the opportunity to explore new test configurations, with the goal of obtaining more mechanical data from a single mechanical test (Rossi et al. 2022b). Since then, many so-called heterogeneous mechanical tests have been proposed [START_REF] Aquino | Design of heterogeneous mechanical tests: Numerical methodology and experimental validation[END_REF]. These tests are distinguished by complex specimen geometries that achieve heterogeneous strain fields. In general, the tests involve either uniaxial or biaxial loading, with a few exceptions.
Moving from homogeneous to heterogeneous mechanical tests will eventually allow a reduction of the number of experiments required to characterize a material's behavior, as well as an improvement in the quality of the calibrated models [START_REF] Teaca | Identification of sheet metal plastic anisotropy using heterogeneous biaxial tensile tests[END_REF][START_REF] Pottier | Contribution of heterogeneous strain field measurements and boundary conditions modelling in inverse identification of material parameters[END_REF]. However, designing heterogeneous tests is a complicated task, as proven by the increased complexity of the most recent methodologies to arrive at such geometries [START_REF] Souto | Mechanical design of a heterogeneous test for material parameters identification[END_REF]. For example, one test was proposed by using a test simulator to generate synthetic images (Rossi et al. 2016a), and another by using topology optimization [START_REF] Barroqueiro | Design of mechanical heterogeneous specimens using topology optimization[END_REF]. The scientific community seems not yet fully convinced of the advantages of heterogeneous mechanical tests. There is a strong need to establish ways to evaluate existing and novel methodologies, as well as clear standards for the so-called "Material Testing 2.0" [START_REF] Pierron | Towards material testing 2.0. A review of test design for identification of constitutive parameters from full-field measurements[END_REF]. Due to the heterogeneity of such tests, it is no longer possible to directly obtain the stress-strain relationship from experimental data. Thus, inverse methodologies to identify the material parameters were developed, where an iterative procedure is employed [START_REF] Cooreman | Elasto-plastic material parameter identification by inverse methods: Calculation of the sensitivity matrix[END_REF][START_REF] Andrade-Campos | On the inverse identification methods for forming plasticity models using full-field measurements[END_REF]. Among several inverse methodologies, the most widely used is finite element model updating (FEMU), since its publication by [START_REF] Kavanagh | Finite element applications in the characterization of elastic solids[END_REF]. In FEMU, a finite element model of the real test configuration is built, and the material parameters are iteratively updated by repeatedly performing an FEA until a close correspondence between numerical simulations and real experiments is achieved. Although this methodology is popular, it incurs a high computational effort due to the many simulations required. Moreover, obtaining information on the real boundary conditions may be delicate, which is another drawback of FEMU. As an example, [START_REF] Kacem | Influence of experimental boundary conditions on the calibration of a ductile fracture criterion[END_REF] has shown the influence of boundary conditions for rupture tests. More recently, the virtual fields method (VFM) has been largely investigated and efficiently used in various applications [START_REF] Pierron | The Virtual Fields Method: Extracting Constitutive Mechanical Parameters from Full-field Deformation Measurements[END_REF]. The VFM is derived from the principle of virtual work (PVW), which is a statement of the equilibrium equations in weak form. One interesting feature of the VFM is that it directly uses the actual displacement field acquired during the experimental test and, by means of special virtual fields, does not require reproducing the boundary conditions of the real test.
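For reference, the quasi-static statement of the PVW on which the VFM relies can be written, neglecting body and inertia forces (a standard form, not quoted from this thesis), as

\[ \int_{V} \boldsymbol{\sigma} : \boldsymbol{\varepsilon}^{*} \, \mathrm{d}V = \int_{\partial V} \mathbf{T} \cdot \mathbf{u}^{*} \, \mathrm{d}S , \]

which must hold for any kinematically admissible virtual displacement field u*, with ε* the associated virtual strain field and T the traction vector on the boundary ∂V. Choosing virtual fields that vanish where the tractions are unknown is what removes the need to reproduce the real boundary conditions.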
An overview of inverse methodologies based on full-field measurements can be found in Avril et al. (2008b), where the authors conclude that inverse methods are highly sensitive to errors in the experimental data. Martins et al. (2018a) compared the performance of, among others, the FEMU and the VFM, concluding that the latter provides the best compromise between the final solution and the computational effort. A key feature common to all inverse methodologies is the requirement for an optimization procedure to update the material parameters. A general-purpose optimization algorithm is commonly employed but, as of yet, there is no clear consensus on this topic [START_REF] Andrade-Campos | On the determination of material parameters for internal variable thermoelastic-viscoplastic constitutive models[END_REF][START_REF] De-Carvalho | Optimization strategies for non-linear material parameters identification in metal forming problems[END_REF][START_REF] Kowalewski | Assessment of optimization methods used to determine plasticity parameters based on DIC and back calculation methods[END_REF]. Often, the material behavior of thin sheet metals is conveniently characterized using plane stress constitutive models [START_REF] Hill | Constitutive modelling of orthotropic plasticity in sheet metals[END_REF][START_REF] Vegter | A plane stress yield function for anisotropic sheet material by interpolation of biaxial stress states[END_REF], provided the specimen thickness is small relative to its width and height. This leads to a simplification of identification procedures, such as the use of two-dimensional (2D) instead of three-dimensional (3D) finite element models with the FEMU. Moreover, it also leads to a considerable improvement in computational efficiency. However, when dealing with thick plates and necking at large deformations, plane stress assumptions may no longer be valid. Indeed, after necking, a triaxial stress state develops and eventually leads to the rupture of the material [START_REF] Bridgman | Studies in Large Plastic Flow and Fracture: With Special Emphasis on the Effects of Hydrostatic Pressure[END_REF][START_REF] Hill | On the mechanics of localized necking in anisotropic sheet metals[END_REF]. As a result, the hardening behavior beyond the point of diffuse necking is usually extrapolated from the pre-necking information, which can lead to large errors in the predicted stress-strain curve. Moreover, the triaxial state developed in the necking region is not used, even though it represents rich mechanical information that could improve the material characterization. These limitations and the use of plane stress conditions can also be linked with the limitations of DIC, which only provides the kinematic fields on the surface of the material. An alternative technique, named digital volume correlation (DVC), could potentially reconstruct the 3D kinematic fields over the whole volume [START_REF] Bay | Digital volume correlation: Three-dimensional strain mapping using x-ray tomography[END_REF]. However, this technique requires a highly advanced and costly experimental apparatus and is limited to materials that present a heterogeneous microstructure, such as wood or composites. Moreover, the application of inverse methodologies up to rupture, and in particular of the VFM, has not been extensively investigated.
A few studies using the VFM in a 2D framework have shown that its validity is violated after necking [START_REF] Kim | Characterization of the post-necking strain hardening behavior using the virtual fields method[END_REF][START_REF] Martins | Calibration of a modified Johnson-Cook model using the virtual fields method and a heterogeneous thermo-mechanical tensile test[END_REF]. This means the model does not accurately predict the behavior of the material after necking. Therefore, there is a need and an interest in extending the VFM to a 3D framework (Rossi and Pierron 2012a), to provide a more accurate representation of the data.

Objectives

This thesis deals with the development of efficient and accurate methodologies for robust numerical characterization and proper determination of the material parameters of advanced constitutive models. As already established, this field concerns many topics that need to be addressed to make further advances. Concerning the virtual forming of metallic sheets, the parameter identification of constitutive models can be divided into three pillars, which can be understood as necessary steps of the process: (i) material testing, including the mechanical tests and the experimental and numerical techniques to acquire and reconstruct the kinematic fields; (ii) constitutive models, used to describe different phenomena of material behavior; and (iii) inverse methodologies, employed in the identification of the material parameters governing the constitutive models (see Figure 1.1). In many cases, there are research groups solely dedicated to the development of constitutive models, while other groups are more focused on researching inverse methodologies. Although each of these pillars is often investigated separately, they are not independent and can each contribute to the entire process. The process is a multiplicative system (Farnam [START_REF] Street | Multiplicative Systems: Understanding The Power of Multiplying by Zero[END_REF]), which means that if one pillar fails, it will undoubtedly jeopardize the successful outcome of the parameter identification. One could say that the integration of all these pillars is the essence of the "Material Testing 2.0" paradigm. This thesis will not only provide an overview of the field, but will also contribute to each pillar by expanding on current research. As such, a critical topic is the design of heterogeneous mechanical tests that achieve several mechanical states and a wide range of deformation. To gain a better understanding, an overview of heterogeneous mechanical tests reported in the literature is the starting point. Furthermore, it can still be quite difficult to select the most appropriate test based on qualitative or quantitative metrics. Recently, a few studies have tackled this problem, but their approaches are mostly quantitative and complex from an implementation point of view. In this thesis, a contribution is made by suggesting standard metrics, based on the strain and stress fields, to qualitatively evaluate the richness of heterogeneous mechanical tests.

Figure 1.1: The three pillars of parameter identification: (a) material testing [START_REF] Aquino | Design of heterogeneous mechanical tests: Numerical methodology and experimental validation[END_REF], (b) constitutive models [START_REF] Barlat | Linear transfomation-based anisotropic yield functions[END_REF], and (c) inverse methodologies [START_REF] Cooreman | Elasto-plastic material parameter identification by inverse methods: Calculation of the sensitivity matrix[END_REF].

It is expected that these
metrics will be used after any experiment or numerical simulation, providing a way to easily compare different studies. The topic of constitutive models can be further divided into the development and the selection of models. The former is important to accurately describe different material phenomena, and will be crucial for the development of a new generation of materials. However, selecting the correct model to describe a given phenomenon might be just as important. A model might be well calibrated, but not adequate to describe the material behavior, leading to unreliable numerical simulations. Therefore, there is a great need in the scientific and industrial communities for indicators and tools that help select constitutive models. Nevertheless, the numerical implementation of constitutive models can be an arduous task, requiring expertise and time. To bridge the gap between selection and implementation, this thesis aims to explore the use of an open source library of constitutive models and contribute to its spread among the scientific community. A largely developed topic is the use of inverse methodologies, in particular the FEMU and the VFM. While the latter is not yet fully adopted and is restricted to highly specialized research groups, the former is already established in the scientific community. Both methods need an optimization algorithm to find the best material parameters. However, studies tend to use a gradient-based least squares algorithm, or the default algorithms of a given programming language, without knowledge of their characteristics. Therefore, this thesis investigates the use of different optimization algorithms within the FEMU. These algorithms are compared regarding their efficiency and robustness (a minimal sketch of such a calibration loop is given at the end of this section). It is expected to contribute to the definition of more efficient optimization strategies. Perhaps the VFM is the inverse methodology that generates the most interest nowadays in the scientific community. The computational efficiency and the direct use of experimental data in the identification procedure are its major advantages. Its use within a 2D framework has already been largely investigated, because the VFM is still limited to surface measurements. However, it is not the VFM that is inherently limited, but rather the experimental techniques available to acquire the kinematic fields inside the material. By further developing and proposing new techniques to reconstruct the deformation field inside the material, the use of the VFM in a 3D framework is promoted. Therefore, this thesis aims to contribute to the enhancement of a numerical method able to reconstruct the kinematic fields inside the material. In the future, it is expected to use this method in combination with the VFM to more accurately describe the material behavior up to rupture. Until then, the VFM can be extended to a 3D framework and its use investigated within a numerical approach. By investigating the limitations of the VFM in 2D and 3D frameworks, we can learn more about the VFM and how it can be improved.
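As a minimal, self-contained sketch of such a FEMU-style calibration loop (ours; a Swift-type analytical hardening curve stands in for the finite element solver, and all parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Toy forward model standing in for the FE simulation: a Swift-type
# hardening curve sigma = K * (eps0 + eps)^n at fixed strain points.
strains = np.linspace(0.0, 0.2, 50)

def forward(params):
    K, eps0, n = params
    return K * np.maximum(eps0 + strains, 1e-8) ** n

# Synthetic "experimental" data from known parameters plus noise.
rng = np.random.default_rng(1)
true_params = np.array([500.0, 0.01, 0.25])
exp_curve = forward(true_params) + rng.normal(0.0, 1.0, strains.size)

def cost(params):
    residual = forward(params) - exp_curve
    return np.sum(residual**2)  # least-squares cost, as in FEMU

res = minimize(cost, x0=np.array([400.0, 0.02, 0.20]), method="Nelder-Mead")
print(res.x)  # should approach true_params
```

Swapping "Nelder-Mead" for a gradient-based method is a one-line change, which is what makes such algorithm comparisons convenient in Python's optimization libraries.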
Scientific Contributions

The studies developed during this thesis have led to contributions to scientific and industrial communities through the publication of articles in international journals and conference proceedings.

Thesis Outline

This chapter provides an overview of the contents addressed in this thesis. Therefore, every chapter has its own introduction as well as a review of the current state of the art. Including the present chapter, this thesis is divided into eight chapters, which are outlined in Figure 1.2 and organized as follows:

Chapter 1 Introduces the motivation for this thesis by highlighting the latest developments and technological challenges in the field. Enumerates the scientific contributions of this thesis and describes its content.

Chapter 2 Introduces fundamental mathematical concepts applied to the constitutive modeling of elastoplasticity. Several yield criteria and hardening laws used throughout this thesis are presented and mathematically described. Additionally, an open source user subroutine library for plasticity models is presented and its use investigated.

Chapter 3 Presents a literature survey of heterogeneous mechanical tests by focusing on the type and amount of mechanical information provided. Moreover, the tests are classified according to the loading type, either uniaxial, biaxial or other.

Chapter 4 Discusses the use of standard metrics to evaluate heterogeneous mechanical tests. All these metrics are calculated from the stress and strain tensors. A metric named rotation angle is also suggested to evaluate the sensitivity of tests to anisotropy. To illustrate the approach, four heterogeneous mechanical tests are selected from the literature and numerically evaluated through the use of the suggested metrics.

Chapter 5 Compares the use of optimization algorithms, one gradient-based and two gradient-free algorithms, in the inverse identification of constitutive model parameters. To avoid unnecessary implementation and take advantage of highly developed programming languages, the optimization algorithms available in Python's optimization libraries are used. A FEMU-based approach is developed and implemented in the calibration of a thermoelastoviscoplastic model. The results are discussed in terms of efficiency and robustness of the optimization process.

Chapter 6 Investigates a numerical method able to reconstruct the volume displacement field inside a flat specimen. This method can potentially enable the material characterization at large deformations and up to rupture. However, its inherent error was not yet fully investigated, and improvements are proposed to address possible sources of error. Moreover, the uncertainty of the method is evaluated through a simulated 3D test and the measurement chain associated with a virtual stereo-DIC system composed of four cameras.

Chapter 7 Extends and investigates the use of the VFM in a 3D framework, and compares it with a 2D framework. For that purpose, a code is developed in the Python programming language, coupled with its optimization library, and uses an open source user subroutine for the stress reconstruction. The prediction of the load and the identification of the Swift hardening law are analyzed and compared for both frameworks.

Chapter 8 Presents, globally, the main conclusions and contributions of this thesis. Furthermore, outlines some perspectives of potential future developments following the thematics addressed.
Constitutive Modeling

Introduction

Material modeling in mechanics is the description of the mechanical behavior of materials through mathematical models, also known as constitutive models, based on physical and empirical evidence. In practice, the stress tensor acting on a material body is obtained from constitutive models as a function of the deformation history. For each material, different constitutive models might be required to describe its unique behavior. In fact, a complete characterization of the material behavior might require the description of different phenomena, such as anisotropy, strain hardening behavior, rupture, or Bauschinger's effect [START_REF] Banabic | Advances in anisotropy of plastic behaviour and formability of sheet metals[END_REF]. Consequently, the development and selection of the constitutive model can play a major role in accurately predicting the material's behavior. One of the most important phenomena in sheet metal characterization is the anisotropic behavior that some materials exhibit. This phenomenon is mainly related to the crystallographic structure of the materials, which is affected by thermomechanical processes and is characterized by the symmetry of the mechanical properties with respect to three orthogonal planes. The intersection lines of the symmetry planes are the orthotropic axes and, in the case of rolled sheet metals, their orientations are defined as (see Figure 2.1): rolling direction (RD), transverse direction (TD), and normal direction (ND). In sheet metal forming processes, anisotropy can have a significant effect on predicting the drawing limits of the sheet. As such, a parameter known as the Lankford strain ratio or coefficient r_θ was introduced to measure the material anisotropy [START_REF] Lankford | New criteria for predicting the press performance of deep-drawing sheets[END_REF]. This parameter is defined as the ratio of the specimen's width strain ε_yy to the thickness strain ε_zz, along different material orientations θ, as

\[ r_\theta = \frac{\varepsilon_{yy}}{\varepsilon_{zz}} . \tag{2.1} \]

When the material demonstrates isotropic behavior, r_θ is equal to 1 for any value of θ between 0° and 90°. Therefore, in order to quantify the material anisotropic behavior, several uniaxial tensile tests are usually performed with the material at different orientations. The three most common orientations are 0°, 45°, and 90° from the RD.
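As a minimal worked example of eq. (2.1) (ours; the strain values are illustrative), note that plastic incompressibility, discussed in the next section, is commonly used to deduce the thickness strain from the measured in-plane strains:

```python
def lankford_ratio(eps_yy: float, eps_zz: float) -> float:
    """Lankford coefficient r_theta = eps_yy / eps_zz (eq. 2.1)."""
    return eps_yy / eps_zz

# With plastic incompressibility, eps_zz = -(eps_xx + eps_yy).
eps_xx, eps_yy = 0.10, -0.045
eps_zz = -(eps_xx + eps_yy)
print(lankford_ratio(eps_yy, eps_zz))  # ~0.82; r_theta = 1 if isotropic
```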
In sheet metal forming, materials generally present a linear elastic behavior in the initial stages of deformation. However, after a given amount of deformation, the material might undergo irreversible deformations and not recover its initial configuration. In this case, the material has deformed plastically, and the stress-strain relationship is no longer linear. Different phenomena are responsible for governing the transition from the elastic to the plastic regime, as well as for controlling the deformation. In the following sections, the theoretical basis of the constitutive models adopted throughout this thesis is described. Emphasis is given to the constitutive models used to describe each material's mechanical behavior, though many others could have been presented. Afterwards, an open source library of plasticity models is investigated.

Linear Elasticity

A material body that regains its initial shape after being submitted to an external force is said to be in the elastic regime. In this case, the material body only undergoes elastic deformations [START_REF] Dunne | Introduction to Computational Plasticity[END_REF]. It is then possible to determine the material body's linear state of stress through Hooke's law, which corresponds to

\[ \boldsymbol{\sigma} = \mathbf{D}\,\boldsymbol{\varepsilon}^{\mathrm e} , \tag{2.2} \]

where σ represents the Cauchy stress tensor and ε^e is the elastic strain tensor, both defined in the (x, y, z) coordinate system (see Figure 2.1). According to the Voigt notation, this relationship can be rewritten as

\[ \begin{bmatrix} \sigma_{xx} \\ \sigma_{yy} \\ \sigma_{zz} \\ \sigma_{xy} \\ \sigma_{xz} \\ \sigma_{yz} \end{bmatrix} = \mathbf{D} \begin{bmatrix} \varepsilon^{\mathrm e}_{xx} \\ \varepsilon^{\mathrm e}_{yy} \\ \varepsilon^{\mathrm e}_{zz} \\ 2\varepsilon^{\mathrm e}_{xy} \\ 2\varepsilon^{\mathrm e}_{xz} \\ 2\varepsilon^{\mathrm e}_{yz} \end{bmatrix} , \tag{2.3} \]

where D is the elasticity matrix in three dimensions. The elasticity matrix depends on the material's elastic constants, the Young's modulus E and the Poisson's ratio ν, as

\[ \mathbf{D} = \frac{E}{(1+\nu)(1-2\nu)} \begin{bmatrix} 1-\nu & \nu & \nu & 0 & 0 & 0 \\ \nu & 1-\nu & \nu & 0 & 0 & 0 \\ \nu & \nu & 1-\nu & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1-2\nu}{2} & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{1-2\nu}{2} & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{1-2\nu}{2} \end{bmatrix} . \tag{2.4} \]
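A minimal NumPy sketch (ours) assembling the elasticity matrix of eq. (2.4) and applying Hooke's law (eq. 2.2) in Voigt form; the material constants are illustrative, steel-like values with E in MPa:

```python
import numpy as np

def elasticity_matrix(E: float, nu: float) -> np.ndarray:
    """Isotropic 3D elasticity matrix D in Voigt notation (eq. 2.4)."""
    c = E / ((1.0 + nu) * (1.0 - 2.0 * nu))
    D = np.zeros((6, 6))
    D[:3, :3] = nu
    np.fill_diagonal(D[:3, :3], 1.0 - nu)
    D[3:, 3:] = np.eye(3) * (1.0 - 2.0 * nu) / 2.0
    return c * D

# Elastic strain in Voigt form [exx, eyy, ezz, 2exy, 2exz, 2eyz] (eq. 2.3)
eps_e = np.array([1e-3, -3e-4, -3e-4, 0.0, 0.0, 0.0])
sigma = elasticity_matrix(210e3, 0.3) @ eps_e  # stress components in MPa
print(sigma)
```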
Yield Criterion

As mentioned above, the yield criterion expresses the relationship between stress components at the onset of plastic yielding. Consequently, it is also responsible for determining the transition between the elastic and plastic regimes [START_REF] De Souza Neto | Computational Methods for Plasticity: Theory and Applications[END_REF]. Generically, the yield criterion can be defined as an implicit function

$$\phi(\boldsymbol{\sigma}, \bar{\varepsilon}^{\mathrm{p}}) = \bar{\sigma}(\boldsymbol{\sigma}) - \sigma_{\mathrm{y}}(\bar{\varepsilon}^{\mathrm{p}}) = 0, \quad (2.6)$$

where $\bar{\sigma}$ is the equivalent stress and $\bar{\varepsilon}^{\mathrm{p}}$ is the equivalent plastic strain governing the hardening behavior. Mathematically, Equation 2.6 represents a surface in the stress space, known as the yield surface, which is necessarily closed, smooth, and convex. Therefore, a given material point is in the plastic regime when $\phi(\boldsymbol{\sigma}, \bar{\varepsilon}^{\mathrm{p}}) = 0$, while $\phi(\boldsymbol{\sigma}, \bar{\varepsilon}^{\mathrm{p}}) < 0$ is associated with the elastic regime. It should be noted that the condition $\phi(\boldsymbol{\sigma}, \bar{\varepsilon}^{\mathrm{p}}) > 0$ is not admissible and has no physical meaning.

Yield criteria can be differentiated by their ability to describe the material anisotropic behavior. On the one hand are the isotropic yield criteria, often used to describe the behavior of a material that behaves similarly in all directions under a given loading state. On the other hand, if the material presents a different behavior depending on the loading direction, it is said to be anisotropic, and specific yield criteria able to describe this behavior are required. Additionally, yield criteria can also be classified as micromechanical or phenomenological. The first approach is based on the mechanism of plastic deformation in metallic crystals at the microscopic level, while the latter corresponds to the macroscopic description of the mechanical behavior. Compared to micromechanical criteria, phenomenological criteria are simpler, easier to implement, and lead to computationally efficient numerical simulations. In the field of sheet metal forming, a large body of work is dedicated to the development of novel yield criteria, comprehensively presented in [START_REF] Banabic | Sheet Metal Forming Processes: Constitutive Modelling and Numerical Simulation[END_REF]. Throughout this thesis, only phenomenological yield criteria are considered, and the ones used are presented below.

von Mises

The von Mises isotropic yield criterion was independently proposed by [START_REF] Huber | Przyczynek do podstaw wytorymalosci[END_REF] and [START_REF] Von Mises | Mechanik der festen Körper im plastisch-deformablen Zustand[END_REF], and later developed by [START_REF] Hencky | Zur theorie plastischer deformationen und der hierdurch im material hervorgerufenen nachspannungen[END_REF]. The criterion is based on the observation that hydrostatic pressure cannot induce plastic yielding of the material, and is thus often known as $J_2$-plasticity. The von Mises yield criterion can be expressed in terms of the deviatoric Cauchy stress tensor $\mathbf{s}$ as

$$\bar{\sigma} = \sqrt{\tfrac{3}{2}\,\mathbf{s}:\mathbf{s}}, \quad (2.7)$$

with

$$\mathbf{s} = \boldsymbol{\sigma} - \tfrac{1}{3}\operatorname{tr}(\boldsymbol{\sigma})\,\mathbf{I}, \quad (2.8)$$

where $\mathbf{I}$ represents the second order identity tensor, and $\operatorname{tr}(\boldsymbol{\sigma})$ represents the trace of the Cauchy stress tensor. The criterion can also be written in terms of the normal and shear stress components as

$$2\bar{\sigma}^2 = (\sigma_{xx}-\sigma_{yy})^2 + (\sigma_{yy}-\sigma_{zz})^2 + (\sigma_{zz}-\sigma_{xx})^2 + 6(\sigma_{xy}^2 + \sigma_{xz}^2 + \sigma_{yz}^2). \quad (2.9)$$

Hill 1948

The Hill 1948 anisotropic yield criterion was proposed as a generalization of the von Mises criterion [START_REF] Hill | A theory of the yielding and plastic flow of anisotropic metals[END_REF].
Because of its simplicity and physical meaning, it is one of the most used yield criteria to describe the anisotropic behavior of metals. It can be expressed as

$$\bar{\sigma}^2 = F(\sigma_{yy}-\sigma_{zz})^2 + G(\sigma_{zz}-\sigma_{xx})^2 + H(\sigma_{xx}-\sigma_{yy})^2 + 2L\sigma_{yz}^2 + 2M\sigma_{xz}^2 + 2N\sigma_{xy}^2, \quad (2.10)$$

where $F$, $G$, $H$, $L$, $M$, and $N$ are material-dependent parameters describing the anisotropic behavior. In plane stress conditions ($\sigma_{zz} = \sigma_{xz} = \sigma_{yz} = 0$), the parameters reduce to $F$, $G$, $H$, and $N$. These four parameters can also be obtained as a function of the Lankford strain ratios $r_0$, $r_{45}$, and $r_{90}$, respectively evaluated at 0°, 45°, and 90° from the RD of the sheet metal, according to:

$$F = \frac{r_0}{(1+r_0)\,r_{90}}; \quad G = \frac{1}{1+r_0}; \quad H = \frac{r_0}{1+r_0}; \quad N = \frac{(r_0+r_{90})(2r_{45}+1)}{2r_{90}(1+r_0)}. \quad (2.11)$$

The Hill 1948 yield criterion reduces to the von Mises yield criterion when $F = G = H = 0.5$ and $L = M = N = 1.5$.

Yld2000-2d

The Yld2000-2d anisotropic yield criterion describes the anisotropic behavior of a material under plane stress conditions [START_REF] Barlat | Plane stress yield function for aluminum alloy sheets-part 1: Theory[END_REF]. It can be expressed as

$$2\bar{\sigma}^a = |X'_1 - X'_2|^a + |2X''_2 + X''_1|^a + |2X''_1 + X''_2|^a, \quad (2.12)$$

where $a$ is a material-dependent parameter, usually assuming the values of 6 or 8 depending on the crystallographic structure of the material, respectively body-centred cubic (BCC) or face-centred cubic (FCC). $X'_i$ and $X''_i$ ($i = 1, 2$) are the eigenvalues of the tensors $\mathbf{X}'$ and $\mathbf{X}''$ obtained after two linear transformations of the stress tensor. These transformations can be directly determined from $\boldsymbol{\sigma}$ as

$$\mathbf{X}' = \mathbf{L}'\boldsymbol{\sigma} \quad \text{and} \quad \mathbf{X}'' = \mathbf{L}''\boldsymbol{\sigma}, \quad (2.13)$$

where $\mathbf{L}'$ and $\mathbf{L}''$ are the linear transformation tensors defined as

$$\begin{bmatrix} L'_{11}\\ L'_{12}\\ L'_{21}\\ L'_{22}\\ L'_{66} \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 2 & 0 & 0\\ -1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 2 & 0\\ 0 & 0 & 3 \end{bmatrix}\begin{bmatrix} \alpha_1\\ \alpha_2\\ \alpha_7 \end{bmatrix}, \quad (2.14)$$

and

$$\begin{bmatrix} L''_{11}\\ L''_{12}\\ L''_{21}\\ L''_{22}\\ L''_{66} \end{bmatrix} = \frac{1}{9}\begin{bmatrix} -2 & 2 & 8 & -2 & 0\\ 1 & -4 & -4 & 4 & 0\\ 4 & -4 & -4 & 1 & 0\\ -2 & 8 & 2 & -2 & 0\\ 0 & 0 & 0 & 0 & 9 \end{bmatrix}\begin{bmatrix} \alpha_3\\ \alpha_4\\ \alpha_5\\ \alpha_6\\ \alpha_8 \end{bmatrix}, \quad (2.15)$$

with $\alpha_k$ ($k = 1, \ldots, 8$) the anisotropy coefficients representing the material-dependent parameters. It is worth noting that when all coefficients are equal to 1 and $a = 2$, the Yld2000-2d criterion reduces to the von Mises criterion.

Yld2004-18p

The Yld2004-18p is an advanced non-quadratic yield criterion [START_REF] Barlat | Linear transfomation-based anisotropic yield functions[END_REF] given by

$$4\bar{\sigma}^a = \sum_{i=1}^{3}\sum_{j=1}^{3} |S'_i - S''_j|^a, \quad (2.16)$$

where $a$ is a material-dependent parameter, and $S'_i$ and $S''_j$ are the principal values of the tensors $\mathbf{s}'$ and $\mathbf{s}''$ defined by two linear transformations of $\mathbf{s}$ as

$$\mathbf{s}' = \mathbf{C}'\mathbf{s} \quad \text{and} \quad \mathbf{s}'' = \mathbf{C}''\mathbf{s}, \quad (2.17)$$

where $\mathbf{C}'$ and $\mathbf{C}''$ are the matrices containing the 18 anisotropy coefficients, with the generic form of $\mathbf{C}$ as

$$\mathbf{C} = \begin{bmatrix} 0 & -c_{12} & -c_{13} & 0 & 0 & 0\\ -c_{21} & 0 & -c_{23} & 0 & 0 & 0\\ -c_{31} & -c_{32} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & c_{44} & 0 & 0\\ 0 & 0 & 0 & 0 & c_{55} & 0\\ 0 & 0 & 0 & 0 & 0 & c_{66} \end{bmatrix}. \quad (2.18)$$

Similarly to the Yld2000-2d criterion, the exponent $a$ usually assumes the values of 6 or 8 depending on the crystallographic structure of the material, respectively BCC or FCC [START_REF] Logan | Upper-bound anisotropic yield locus calculations assuming <111> -pencil glide[END_REF]. When the coefficients are all equal to 1 and $a = 2$ (or 4), the Yld2004-18p yield criterion reduces to the von Mises criterion.
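As a quick consistency check on the quadratic criteria above, the following sketch (with a hypothetical plane stress state and the isotropic parameter set) evaluates Equations 2.9–2.11 and verifies that Hill 1948 collapses to von Mises.

```python
import numpy as np

def von_mises(sig):
    """Equivalent stress of Equation 2.9; sig = [xx, yy, zz, xy, xz, yz]."""
    xx, yy, zz, xy, xz, yz = sig
    return np.sqrt(0.5 * ((xx - yy)**2 + (yy - zz)**2 + (zz - xx)**2)
                   + 3.0 * (xy**2 + xz**2 + yz**2))

def hill48(sig, F, G, H, L, M, N):
    """Equivalent stress of the Hill 1948 criterion (Equation 2.10)."""
    xx, yy, zz, xy, xz, yz = sig
    s2 = (F * (yy - zz)**2 + G * (zz - xx)**2 + H * (xx - yy)**2
          + 2.0 * L * yz**2 + 2.0 * M * xz**2 + 2.0 * N * xy**2)
    return np.sqrt(s2)

def hill48_params_from_r(r0, r45, r90):
    """Plane stress Hill 1948 parameters from Lankford ratios (Equation 2.11)."""
    F = r0 / ((1.0 + r0) * r90)
    G = 1.0 / (1.0 + r0)
    H = r0 / (1.0 + r0)
    N = (r0 + r90) * (2.0 * r45 + 1.0) / (2.0 * r90 * (1.0 + r0))
    return F, G, H, N

sig = np.array([100.0, 40.0, 0.0, 25.0, 0.0, 0.0])   # hypothetical stress state
# With F = G = H = 0.5 and L = M = N = 1.5, Hill 1948 recovers von Mises:
assert np.isclose(hill48(sig, 0.5, 0.5, 0.5, 1.5, 1.5, 1.5), von_mises(sig))
```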
Associated Flow Rule

The associated flow rule is valid in the theory of plasticity for the generality of metallic materials. A plasticity law is said to be associated if the same potential is used to describe the plastic behavior in terms of yield surface and plastic deformation. As the material deforms plastically after the onset of plastic yielding, the normality hypothesis is used to determine the direction of plastic flow. This hypothesis establishes that the plastic strain rate $\dot{\boldsymbol{\varepsilon}}^{\mathrm{p}}$ develops in a direction normal to the yield surface (see Figure 2.3) and is given by

$$\dot{\boldsymbol{\varepsilon}}^{\mathrm{p}} = \dot{\lambda}\,\frac{\partial\phi}{\partial\boldsymbol{\sigma}}, \quad (2.19)$$

where the magnitude is determined by the plastic multiplier $\dot{\lambda}$ and the direction by $\partial\phi/\partial\boldsymbol{\sigma}$. The determination of $\dot{\lambda}$ is based on the condition that during plastic loading a given material point remains on the yield surface, also known as the consistency condition.

Hardening Law

The hardening law regulates the evolution of the yield surface after the onset of plastic yielding, according to the evolution of plastic deformation as a function of the equivalent plastic strain $\bar{\varepsilon}^{\mathrm{p}}$. The yield surface evolution is usually associated with three types of hardening laws (see Figure 2.4):

1. isotropic hardening;
2. kinematic hardening;
3. mixed hardening.

In isotropic hardening, the development of plastic deformation results exclusively from the uniform expansion of the yield surface in all directions, while its center is kept fixed throughout the loading path. On the other hand, in kinematic hardening, the development of plastic deformation results only from the translation of the yield surface in the stress space. Lastly, in mixed hardening, the development of plastic deformation combines both isotropic and kinematic hardening laws, leading to simultaneous expansion and translation of the yield surface.

Many models exist to describe the isotropic hardening behavior of metallic materials (such as the Ludwik law). One of the best known and most used models is Swift's law, given as

$$\sigma_{\mathrm{y}} = K(\varepsilon_0 + \bar{\varepsilon}^{\mathrm{p}})^n, \quad (2.20)$$

with

$$\varepsilon_0 = \left(\frac{\sigma_0}{K}\right)^{1/n}, \quad (2.21)$$

where $K$, $\varepsilon_0$, $n$, and $\sigma_0$ are material-dependent parameters, respectively the strength coefficient, the initial yield strain, the strain-hardening exponent, and the initial yield stress. Another well known isotropic hardening law is Voce's law, given as

$$\sigma_{\mathrm{y}} = \sigma_0 + Q\,[1 - \exp(-b\,\bar{\varepsilon}^{\mathrm{p}})], \quad (2.22)$$

where $Q$ and $b$ are material-dependent parameters.

The Johnson-Cook model can describe the hardening behavior of metals under various temperatures and strain rates [START_REF] Johnson | A constitutive model and data for materials subjected to large strains, high strain rates, and high temperatures[END_REF]. The model has a decoupled formulation, because it describes the yield stress evolution by taking into account strain hardening, temperature, and strain rate effects as independent phenomena. The model can be written as

$$\sigma_{\mathrm{y}} = \left[A + B\left(\bar{\varepsilon}^{\mathrm{p}}\right)^n\right]\left[1 - \left(\frac{T - T_{\mathrm{tr}}}{T_{\mathrm{m}} - T_{\mathrm{tr}}}\right)^m\right]\left[1 + C\ln\left(\frac{\dot{\bar{\varepsilon}}^{\mathrm{p}}}{\dot{\varepsilon}_0}\right)\right], \quad (2.23)$$

where $A$, $B$, and $n$ are material-dependent parameters describing the strain hardening behavior. The effects of temperature are described by the temperature $T$, the transition temperature $T_{\mathrm{tr}}$ (governing the threshold of the temperature effect), the melting temperature $T_{\mathrm{m}}$, and the exponent $m$. Lastly, the strain rate sensitivity is described by the parameters $C$ and $\dot{\varepsilon}_0$, the latter defining the threshold of strain rate dependence.

Kinematic hardening is typically used to predict the Bauschinger effect, often observed in materials subject to cyclic loading.
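The isotropic hardening laws of Equations 2.20–2.22 are straightforward to evaluate; the sketch below does so for hypothetical parameter values, using Equation 2.21 so that both laws share the same initial yield stress.

```python
import numpy as np

def swift(p, K, eps0, n):
    """Swift isotropic hardening (Equation 2.20): sigma_y = K*(eps0 + p)**n."""
    return K * (eps0 + p) ** n

def voce(p, sigma0, Q, b):
    """Voce isotropic hardening (Equation 2.22)."""
    return sigma0 + Q * (1.0 - np.exp(-b * p))

# Hypothetical parameter sets, for illustration only.
p = np.linspace(0.0, 0.5, 6)          # equivalent plastic strain
K, n = 500.0, 0.25                    # MPa, -
sigma0 = 150.0                        # MPa
eps0 = (sigma0 / K) ** (1.0 / n)      # Equation 2.21: swift(0) equals sigma0
print(swift(p, K, eps0, n))
print(voce(p, sigma0, Q=200.0, b=15.0))
```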
Also, when considering mixed hardening, the generic yield function from Equation 2.6 takes the form of

$$\phi(\boldsymbol{\sigma}, \boldsymbol{\alpha}, \bar{\varepsilon}^{\mathrm{p}}) = \bar{\eta}(\boldsymbol{\sigma} - \boldsymbol{\alpha}) - \sigma_{\mathrm{y}}(\bar{\varepsilon}^{\mathrm{p}}) = 0, \quad (2.24)$$

where $\bar{\eta}$ is the equivalent stress as a function of the tensor $\boldsymbol{\eta} = \boldsymbol{\sigma} - \boldsymbol{\alpha}$. The back stress tensor $\boldsymbol{\alpha}$ describes the kinematic hardening and is defined in the stress space analogously to the stress tensor. To account for the need to describe the hardening behavior under cyclic loading, several models have been proposed, such as those by [START_REF] Prager | The theory of plasticity: A survey of recent achievements[END_REF] and [START_REF] Ziegler | A modification of Prager's hardening rule[END_REF], or the more advanced one by [START_REF] Yoshida | A model of large-strain cyclic plasticity describing the Bauschinger effect and workhardening stagnation[END_REF]. The kinematic hardening model considered in this study is based on the additive contribution of several back stress terms as proposed by [START_REF] Chaboche | Time-independent constitutive theories for cyclic plasticity[END_REF], following a modification of the model originally proposed in 1966 by [START_REF] Frederick | A mathematical representation of the multiaxial Bauschinger effect[END_REF]. This model defines the evolution of the back stress tensor $\boldsymbol{\alpha}$ as

$$\dot{\boldsymbol{\alpha}} = \sum_{i=1}^{n}\dot{\boldsymbol{\alpha}}_i = \sum_{i=1}^{n}\left(\tfrac{2}{3}c_i\,\dot{\boldsymbol{\varepsilon}}^{\mathrm{p}} - \gamma_i\,\boldsymbol{\alpha}_i\,\dot{\bar{\varepsilon}}^{\mathrm{p}}\right), \quad (2.25)$$

where $c_i$ and $\gamma_i$ ($i = 1, \ldots, n$) are material-dependent parameters, $\dot{\boldsymbol{\varepsilon}}^{\mathrm{p}}$ is the plastic strain rate (see Equation 2.19), and $\dot{\bar{\varepsilon}}^{\mathrm{p}}$ is the equivalent plastic strain rate. Additionally, this model has also been formulated based on the stress tensor as

$$\dot{\boldsymbol{\alpha}} = \sum_{i=1}^{n}\dot{\boldsymbol{\alpha}}_i = \sum_{i=1}^{n}\left(\frac{c_i}{\bar{\eta}}\,\boldsymbol{\eta} - \gamma_i\,\boldsymbol{\alpha}_i\right)\dot{\bar{\varepsilon}}^{\mathrm{p}}. \quad (2.26)$$

According to [START_REF] Carbonnière | Comparison of the work hardening of metallic sheets in bending-unbending and simple shear[END_REF], this formulation leads to an uncoupling between the initial anisotropy and the hardening behavior, which is not the case when using the original formulation based on the plastic strain tensor.

Unified Material Model Driver for Plasticity

The Unified Material Model Driver for Plasticity (UMMDp) was developed by the Japan Association for Nonlinear Computer Aided Engineering (JANCAE). It is an open source user subroutine library for plasticity models that supports various anisotropic yield criteria, as well as isotropic and kinematic hardening laws. Moreover, it is compatible with several FEA software packages: Abaqus/Standard, ANSYS, ADINA, LS-DYNA, and MARC (see Figure 2.5). The structure of the UMMDp is modular and flexible, making it relatively easy for any user to implement their own models into the library. Although this library has considerable potential, few validations are reported, and none outside the original contributions have been found yet [START_REF] Takizawa | Development of the user subroutine library "Unified Material Model Driver for Plasticity (UMMDp)" for various anisotropic yield functions[END_REF][START_REF] Oide | Implementation of anisotropic yield functions into the subroutine library "UMMDp[END_REF][START_REF] Inoue | Practical examples of sheet metal forming simulations using the subroutine library 'UMMDp[END_REF][START_REF] Ida | Development of plug-ins for bridging variables between advanced finite element codes and 'UMMDp[END_REF]. With this study, it was intended to start exploring the capabilities of the UMMDp, as well as to validate its use with Abaqus/Standard (Dassault Systèmes 2019).
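As an illustration of Equation 2.26, the following sketch integrates a single back stress term with an explicit Euler scheme for monotonic uniaxial loading, where the axial component of the normalized flow direction reduces to unity; the parameters and step size are hypothetical, and an implicit UMAT-style integration would differ.

```python
import numpy as np

def chaboche_backstress_uniaxial(dp_steps, c, gamma):
    """Explicit-Euler integration of one back stress term of Equation 2.26
    for monotonic uniaxial loading: d_alpha = (c - gamma*alpha) * d_p."""
    alpha, history = 0.0, []
    for dp in dp_steps:
        alpha += (c - gamma * alpha) * dp
        history.append(alpha)
    return np.array(history)

# Hypothetical parameters; alpha saturates towards c/gamma (here 100 MPa).
hist = chaboche_backstress_uniaxial(np.full(1000, 1e-3), c=5000.0, gamma=50.0)
print(hist[-1])   # close to the saturation value c/gamma
```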
To validate a few models, a user material subroutine (UMAT) developed for Abaqus/Standard in the scope of previous doctoral theses is used as a reference [START_REF] Grilo | On the development and computational implementation of complex constitutive models and parameters' identification procedures[END_REF][START_REF] Grilo | Development of computational anisotropic hypoelastic-and hyperelastic-based models including nonlinear kinematic hardening[END_REF][START_REF] Souto | Computational design of a mechanical test for material characterization by inverse analysis[END_REF]. This UMAT implements the Yld2004-18p anisotropic yield criterion, various isotropic hardening laws, and the non-linear kinematic hardening law composed of three back stress terms from Equation 2.26. It has already been successfully used in the identification of material parameters [START_REF] Grilo | On the modelling of complex kinematic hardening and nonquadratic anisotropic yield criteria at finite strains: application to sheet metal forming[END_REF](Souto et al. 2015b)[START_REF] Pradeau | Prediction of failure in bending of an aluminium sheet alloy[END_REF], and thus constitutes a reliable reference in the validation of the UMMDp. For simplicity, the latter UMAT is hereon designated as UMAT_Yld2004_Mixed. Additionally, the built-in constitutive models provided by Abaqus/Standard are used as a second reference, hereon designated as Std_Aba. Initially, the implementation of the Yld2004-18p anisotropic yield criterion is discussed by analyzing differences between the formulations in the UMMDp and in the UMAT_Yld2004_Mixed. Afterwards, FEA simulations of homogeneous tests and of a deep drawing cup test are performed to validate the UMMDp and analyze its computational efficiency.

Formulation of Yld2004-18p

Concerning the formulation of the Yld2004-18p anisotropic yield criterion implemented in the UMMDp and in the UMAT_Yld2004_Mixed, there are two important details to discuss: (i) the order of the anisotropy coefficients, and (ii) the formulation of the yield criterion.

Firstly, it is important to note that in the formulation of Yld2004-18p in the UMMDp, the order of the anisotropy coefficients differs from the one in the original formulation (see Equation 2.18). This difference is related to the Voigt notation of the Cauchy stress tensor, which is given by $(\sigma_{xx}, \sigma_{yy}, \sigma_{zz}, \sigma_{yz}, \sigma_{xz}, \sigma_{xy})$ in the original formulation and by $(\sigma_{xx}, \sigma_{yy}, \sigma_{zz}, \sigma_{xy}, \sigma_{yz}, \sigma_{xz})$ in the UMMDp, while the formulation in the UMAT_Yld2004_Mixed is known to be the same as in the original formulation. This difference in the Voigt notation changes the order of the anisotropy coefficients $c_{44}$, $c_{55}$, and $c_{66}$ as

$$\{c_{66},\ c_{44},\ c_{55}\}_{\text{Barlat et al. (2005)}} \equiv \{c_{44},\ c_{55},\ c_{66}\}_{\text{UMMDp}} \equiv \{c_{66},\ c_{44},\ c_{55}\}_{\text{UMAT\_Yld2004\_Mixed}}. \quad (2.27)$$

Secondly, in the original formulation of the yield criterion, the value 4 on the left hand side of Equation 2.16 is constant. However, in the formulation of the UMMDp, this value takes the form of a coefficient $R$, herein named the coefficient of equivalent stress, defined through

$$R\,\bar{\sigma}^a = \sum_{i=1}^{3}\sum_{j=1}^{3} |S'_i - S''_j|^a. \quad (2.28)$$

In the UMMDp, the coefficient depends on the values of the material parameters $c_{12}$, $c_{13}$, $c_{21}$, $c_{23}$, $c_{31}$, and $c_{32}$, and is computed according to

$$R = \sum_{i=1}^{3}\sum_{j=1}^{3} |B'_i - B''_j|^a, \quad (2.29)$$

with the quantities $B_i$ obtained from the closed-form eigenvalue solution given below in Equation 2.30. The latter set of equations was deduced from the source code files of the UMMDp.
The scientific background for this formulation is not documented, but it appears to be related to assuming $\sigma_{\mathrm{y}}$ as the yield stress in the RD. For each of the two linear transformations, the quantities $B_i$ in Equation 2.29 are computed as

$$B_1 = 2\sqrt{p}\cos\left(\frac{\theta}{3}\right) + H_1, \quad B_2 = 2\sqrt{p}\cos\left(\frac{\theta + 4\pi}{3}\right) + H_1, \quad B_3 = 2\sqrt{p}\cos\left(\frac{\theta + 2\pi}{3}\right) + H_1,$$
$$p = H_1^2 + H_2, \quad q = \frac{1}{2}\left(2H_1^3 + 3H_1H_2 + 2H_3\right), \quad \theta = \arccos\left(\frac{q}{p^{3/2}}\right), \quad (2.30)$$

where $H_1$, $H_2$, and $H_3$ are the invariants of the corresponding transformed tensor. The assumption of $\sigma_{\mathrm{y}}$ as the yield stress in the RD can be imposed by the following condition:

$$\frac{\sigma_{xx}}{\bar{\sigma}} = \left(\frac{3^a R}{\xi}\right)^{1/a} = 1, \quad (2.31)$$

where $\xi$ depends on the anisotropy coefficients as

$$\begin{aligned}
\xi ={}& |c'_{12} + c'_{13} - c''_{12} - c''_{13}|^a + |c'_{12} + c'_{13} + 2c''_{21} - c''_{23}|^a + |c'_{12} + c'_{13} - 2c''_{31} - c''_{32}|^a\\
&+ |c'_{23} - 2c'_{21} - c''_{12} - c''_{13}|^a + |c'_{23} - 2c'_{21} + 2c''_{21} - c''_{23}|^a + |c'_{23} - 2c'_{21} + 2c''_{31} - c''_{32}|^a\\
&+ |c'_{32} - 2c'_{31} - c''_{12} - c''_{13}|^a + |c'_{32} - 2c'_{31} + 2c''_{21} - c''_{23}|^a + |c'_{32} - 2c'_{31} + 2c''_{31} - c''_{32}|^a. \quad (2.32)
\end{aligned}$$

According to Equation 2.16, for a given set of material parameters with $R = 4$, it is not guaranteed that Equation 2.31 is satisfied. In the formulation of the UMMDp, the value of $R$ can be calculated from the material parameters, thus satisfying Equation 2.31. This approach is more practical in terms of the identification procedure, as it reduces the number of externally imposed constraints. For example, [START_REF] Güner | Characterization of anisotropy of sheet metals employing inhomogeneous strain fields for Yld2000-2D yield function[END_REF] and [START_REF] Martins | Calibration of anisotropic plasticity models using a biaxial test and the virtual fields method[END_REF] both used such a constraint in the identification of the material parameters of the Yld2000-2d yield criterion, to obtain $\sigma_{\mathrm{y}}$ as the yield stress in the RD. To validate this assumption, a comparison between both formulations is performed with $\sigma_{xx} = 100$ MPa and the material parameters reported by Souto et al. (2015b) presented in Table 2.1, with the exponent $a$ equal to 6. The obtained results are presented in Table 2.2, where it is observed that in the original formulation of the yield criterion Equation 2.31 is not satisfied, whereas in the formulation of the UMMDp it is, with $R = 4.215$. Although the formulation of the UMMDp calculates $R$, it is possible to manually assign it the value of 4. Therefore, to fairly compare the UMMDp with the UMAT_Yld2004_Mixed, the constant value of 4 is adopted in all the following analyses.

Additionally, the order of the anisotropy coefficients in $\mathbf{C}'$ and $\mathbf{C}''$ is analyzed with a simple validation test. A prescribed stress tensor $\boldsymbol{\sigma}$ is given as input to the yield criterion, and the resulting equivalent stress $\bar{\sigma}$ and the first and second derivatives of the yield criterion, represented respectively by $\partial\phi/\partial\boldsymbol{\sigma}$ and $\partial^2\phi/(\partial\boldsymbol{\sigma})^2$, are analyzed. By manually prescribing a stress tensor as input to the yield criterion, it is possible to compare the formulations directly, whereas when running a numerical simulation with a FEA software the input is more difficult to control, although it would be equal for both the UMAT_Yld2004_Mixed and the UMMDp. The material parameters used in this verification are presented in Table 2.3, according to the values reported for the material 6111-T4 by [START_REF] Barlat | Linear transfomation-based anisotropic yield functions[END_REF], with $a = 8$. In Table 2.4, the prescribed input stress tensor and the resulting first and second derivatives for the UMAT_Yld2004_Mixed and for the UMMDp, with and without modifications, are presented. In all formulations, the order of the anisotropy coefficients is maintained according to Equation 2.27.
Nevertheless, the position of the out-of-plane components $\sigma_{xz}$ and $\sigma_{yz}$ of the prescribed stress tensor had to be changed when using the UMMDp without modification, to obtain the same value of equivalent stress ($\bar{\sigma} = 355.769$) as with the UMAT_Yld2004_Mixed.

Table 2.4 Comparison of the first and second derivatives of the yield criterion, for a given stress tensor, for the UMAT_Yld2004_Mixed and the UMMDp, with and without modifications. The differences between formulations are confined to the out-of-plane positions.

UMAT_Yld2004_Mixed:
input $\boldsymbol{\sigma} = [-149, -5, 35, -12, -19, 204]$ MPa;
$\partial\phi/\partial\boldsymbol{\sigma} = [-0.632, 0.266, 0.366, -0.168, -0.120, 1.205]$;
$\partial^2\phi/(\partial\boldsymbol{\sigma})^2 =$
[ 0.004 -0.002 -0.001  0.000  0.000  0.003
 -0.002  0.003 -0.001  0.001  0.000 -0.001
 -0.001 -0.001  0.002  0.000  0.000 -0.001
  0.000  0.001  0.000  0.017 -0.002  0.001
  0.000  0.000  0.000 -0.002  0.008  0.000
  0.003 -0.001 -0.001  0.001  0.000  0.002 ].

UMMDp without modification:
input $\boldsymbol{\sigma} = [-149, -5, 35, -12, 204, -19]$ MPa;
$\partial\phi/\partial\boldsymbol{\sigma} = [-0.632, 0.266, 0.366, -0.168, 1.205, -0.120]$;
$\partial^2\phi/(\partial\boldsymbol{\sigma})^2 =$
[ 0.004 -0.002 -0.001  0.000  0.003  0.000
 -0.002  0.003 -0.001  0.001 -0.001  0.000
 -0.001 -0.001  0.002  0.000 -0.001  0.000
  0.000  0.001  0.000  0.017  0.001 -0.002
  0.003 -0.001 -0.001  0.001  0.002  0.000
  0.000  0.000  0.000 -0.002  0.000  0.008 ].

UMMDp with modification:
input $\boldsymbol{\sigma} = [-149, -5, 35, -12, -19, 204]$ MPa;
$\partial\phi/\partial\boldsymbol{\sigma} = [-0.632, 0.266, 0.366, -0.168, -0.120, 1.205]$;
$\partial^2\phi/(\partial\boldsymbol{\sigma})^2$ identical to the UMAT_Yld2004_Mixed result above.

However, changing the order of $\sigma_{xz}$ and $\sigma_{yz}$ led to differences in the out-of-plane components of the first and second derivatives. These differences are not in terms of values, but rather in terms of the positions at which they appear, which are consistent with the positions of $\sigma_{xz}$ and $\sigma_{yz}$ in the prescribed stress tensor. Although the latter situation is not documented in the UMMDp, it is understood that the formulation used in the yield criterion cannot be the same for each software for which the UMMDp is prepared, as shown in Table 2. With the modification applied to the yield criterion in the UMMDp, the results of the equivalent stress and of the first and second derivatives are identical for the UMAT_Yld2004_Mixed and the UMMDp (see Table 2.4). Therefore, the modification is validated and maintained in all subsequent analyses. It is recommended that the Voigt notation be verified for the other yield criteria implemented in the UMMDp, taking into account the software to be used. In addition, future modifications to the UMMDp could include an automatic detection of the software in use, changing the Voigt notation accordingly.
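The Voigt reordering discussed above amounts to a simple permutation of components; the sketch below reproduces the mapping between the order used by the original formulation (and by Abaqus) and the order assumed in the UMMDp, using the stress values of Table 2.4.

```python
import numpy as np

# Component order of the original Yld2004-18p formulation as used by Abaqus:
ORDER_ABAQUS = ("xx", "yy", "zz", "xy", "xz", "yz")
# Component order assumed in the UMMDp source, as discussed above:
ORDER_UMMDP = ("xx", "yy", "zz", "xy", "yz", "xz")

def reorder(voigt_vector, src, dst):
    """Permute a 6-component Voigt vector from one component order to another."""
    idx = [src.index(comp) for comp in dst]
    return np.asarray(voigt_vector)[idx]

sig_abaqus = np.array([-149.0, -5.0, 35.0, -12.0, -19.0, 204.0])
sig_ummdp = reorder(sig_abaqus, ORDER_ABAQUS, ORDER_UMMDP)
print(sig_ummdp)   # [-149.  -5.  35. -12. 204. -19.]
```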
Homogeneous Tests

Numerical simulations of three homogeneous tests (uniaxial tensile, simple shear, and biaxial) are performed using the material parameters reported in (Souto et al. 2015b) [START_REF] Souto | Computational design of a mechanical test for material characterization by inverse analysis[END_REF]. The numerical models are displacement-driven, with imposed displacements of 0.6 mm for the tensile and shear tests, and 0.4 mm in each direction for the biaxial test. In addition, the 8-node brick element with reduced integration (C3D8R) is used. The three tests are each performed with three combinations of constitutive models, namely: the von Mises yield criterion with mixed hardening; the Yld2004-18p yield criterion with isotropic hardening at 0° from the RD; and the Yld2004-18p yield criterion with mixed hardening at 0° from the RD. In the first case, all three user material subroutines are used. However, in the other two cases only the UMMDp and the UMAT_Yld2004_Mixed are used, because Std_Aba does not provide the Yld2004-18p yield criterion. The evolution of the strain and stress tensor components was analyzed and compared. In particular, the evolutions of $\varepsilon_{xx}$–$\sigma_{xx}$, $\varepsilon_{xx}$–$\varepsilon_{yy}$, and $\varepsilon_{xx}$–$\varepsilon_{zz}$ are presented for the tensile and biaxial tests, while the evolution of $\sigma_{xy}$–$\varepsilon_{xy}$ is presented for the shear tests. The results are identical independently of which user material subroutine is used, thus validating their use for this particular application. For that reason, each curve shown in Figure 2.7 is representative of all the corresponding user material subroutines.

[Table: isotropic hardening parameters $\sigma_0$ [MPa], $Q$ [MPa], $b$ [-] and kinematic hardening parameters $c_i/\gamma_i$ [MPa], $\gamma_i$ [-] ($i = 1, 2, 3$) used in the simulations.]

Deep Drawing Cup Test

Additionally, it is relevant to compare the results of each user material subroutine for a more complex model, as well as their convergence efficiency. For that purpose, a deep drawing cup test is carried out using the setup presented in Figure 2.8a, for a sheet 0.7 mm thick. The FEA simulation is displacement-driven with an imposed vertical displacement of 15 mm on the punch, considering an implicit integration scheme using Abaqus/Standard. To reduce the computational cost of the simulation, only one quarter of the blank is modeled by assuming symmetry conditions. In addition, the finite element mesh of the blank is built using 8-node brick elements with reduced integration (C3D8R). The mesh presents two regions of different element size: a coarse one in the center region in contact with the punch, which does not undergo considerable deformation, and a finer one in the regions withstanding higher deformations (see Figure 2.8b). The blank is also built with 4 elements through the thickness, leading to a total of 47 011 elements. The tools, namely the die, the punch, and the blank holder, are modeled as analytical rigid surfaces. The contact between the blank and the tools is modeled using the Coulomb friction model with a friction coefficient of 0.06. Moreover, a force of 15 kN is applied on the blank holder. Two FEA simulations are performed using two combinations of constitutive models, namely, one using the von Mises yield criterion with a mixed hardening law (test 1), and another using the Yld2004-18p yield criterion with mixed hardening (test 2). The results of major strain $\varepsilon_1$, minor strain $\varepsilon_2$, and total back stress tensor component $\alpha_{xx}$ are analyzed for each user material subroutine and presented in Figure 2.9. Similarly to the homogeneous tests, it is observed that the results for the deep drawing cup test are identical between the UMMDp, the UMAT_Yld2004_Mixed, and the Std_Aba, thus a single map representative of the three user material subroutines is shown in Figure 2.9. It should be noted that the finite element mesh of test 2 is highly distorted at the end of the test, as no rupture criterion has been considered. Nevertheless, this simulation is still valid for comparison purposes.
Analyzing the convergence and computational time of the UMAT_Yld2004_Mixed and the UMMDp, it is important to note that both are implemented with a backward-Euler scheme combined with a sub-incremental method. However, it is unclear whether the algorithms are implemented with the same conditions, and the computation of the consistent tangent matrix is not identical. What is known is that the code complexity in the UMMDp is higher than in the UMAT_Yld2004_Mixed. Some examples are: (i) the implementation of the yield criteria in the UMMDp is programmed using arrays, while in the UMAT_Yld2004_Mixed the differential equations are previously derived in an external software and then implemented in the subroutine in the form of scalar operations; (ii) in the UMMDp, the user can select the level of printed information during the analysis, so that every time a printing option is available, the subroutine must check the level required by the user, while in the UMAT_Yld2004_Mixed the amount of printing is low and must be commented out if the user does not wish it to be printed; and (iii) a large number of yield criteria and of isotropic and kinematic hardening laws are available in the UMMDp, and the subroutine is required to select each option every time it is called, whereas in the UMAT_Yld2004_Mixed there are only options for selecting the isotropic hardening law.

All simulations required 2032 increments for the step corresponding to the displacement of the punch, and the number of cutbacks was identical in each increment. In terms of the computational time, the total central processing unit (CPU) time required by both user material subroutines is analyzed for test 2. In this case, the UMMDp is approximately 14 % slower than the UMAT_Yld2004_Mixed (see Table 2.7). To verify the influence of the conditional (if) statements used in the selection of constitutive models and in the printing, a new simulation is performed, named UMMDp_Simplified, where these conditions are removed. Now, the UMMDp_Simplified is approximately 11 % slower than the UMAT_Yld2004_Mixed, which represents a noticeable improvement. However, the computational gap remains significant, and it is therefore relevant to investigate whether it is related to the stress integration algorithm or to programming efficiency. For that purpose, the following quantities are monitored for a single element of the model, from the first yield increment onward: the number of Newton-Raphson (NR) iterations in the global equilibrium per increment, the number of sub-increments in the stress integration scheme, and the number of NR iterations in the stress integration scheme, per increment. Moreover, the ratio of NR iterations between the UMMDp and the UMAT_Yld2004_Mixed is used in the analysis: values above 1 mean the UMAT_Yld2004_Mixed required more NR iterations than the UMMDp, and vice versa. The CPU time required per increment is also monitored for the same element. In Figure 2.10a, the number of NR iterations in the global equilibrium per increment is presented; this analysis shows that the two subroutines behave identically at the global level. On the other hand, analyzing the results for the ratio of NR iterations presented in Figure 2.10b, it is observed that the UMAT_Yld2004_Mixed requires more iterations, particularly in the late stages of the test. The same is observed on average throughout the test, as indicated by the average value of 1.037. Regarding the number of NR iterations in the stress integration scheme, a large difference between the two user material subroutines is observed (see Figure 2.11a).
Analyzing Figure 2.11b, where above the threshold of 1 are the points for which the UMAT_Yld2004_Mixed takes more iterations than the UMMDp and below it the opposite, the average ratio of 2.624 indicates the ability of the UMMDp to converge faster than the UMAT_Yld2004_Mixed. Further observing the results of the sub-increments required in each NR iteration (see Figure 2.11d), it is interesting to note that the UMMDp only requires one sub-increment per NR iteration in each increment, while the UMAT_Yld2004_Mixed requires at least two sub-increments and a maximum of 15. From these results, it is possible to conclude that the UMMDp presents higher levels of efficiency, even though its total CPU time was higher than that of the UMAT_Yld2004_Mixed.

Figure 2.11 Results for the stress integration scheme in terms of (a) NR iterations, (b) ratio of NR iterations per increment, (c) sub-increments, and (d) sub-increments per NR iteration, per increment, of the deep drawing cup test using the Yld2004-18p yield criterion with mixed hardening.

The results of the CPU time per increment presented in Figure 2.12a show that the UMMDp is slower than the UMAT_Yld2004_Mixed, and that the simplification of the code does not improve the CPU time considerably. In Figure 2.12b, the ratio of the CPU time of the UMAT_Yld2004_Mixed to that of the UMMDp is represented, where values above 1 mean the UMAT_Yld2004_Mixed is slower than the UMMDp and vice versa. The differences in the CPU time are clearly visible. Overall, the UMMDp requires fewer iterations than the UMAT_Yld2004_Mixed to converge to the solution, but the difference in CPU time is significantly in favor of the UMAT_Yld2004_Mixed. Thereby, it is possible to assume that this difference is related to the programming efficiency of the user material subroutines, and not to the algorithm's efficiency.

Figure 2.12 Results for the stress integration scheme in terms of (a) CPU time and (b) ratio of the CPU time, per increment, of the deep drawing cup test using the Yld2004-18p yield criterion with mixed hardening.

Conclusions

The UMMDp is a valuable open source library for FEA software thanks to its wide range of constitutive models. Even though it presents great potential, it still lacks numerical validations of the implemented models. The Yld2004-18p yield criterion in the UMMDp presented an interesting formulation, which might be of great use in material parameter identification. However, the observed incoherence between the Voigt notation of the stress tensor used in its formulation and the FEA software conventions is of great importance. When accounting for the implemented modifications to the Yld2004-18p, the results are similar to those of the user material subroutine used as reference, validating its formulation. The analyzed kinematic hardening law in the UMMDp also presents results similar to those of the user material subroutines used as reference, validating its formulation. Finally, the numerical simulations performed validate the use of the Yld2004-18p yield criterion and of the isotropic and kinematic hardening laws in the UMMDp. Even though the UMMDp proved significantly slower in the deep drawing cup test than the reference, it is nevertheless advantageous, as it enables the use of complex yield criteria without the need to implement them from scratch. Therefore, the UMMDp will be used for the remainder of the studies in this thesis.
Overview of Heterogeneous Mechanical Tests

Introduction

To decrease the associated delays and costs, the mechanical design of sheet metal forming parts tends nowadays to be more and more virtual, relying on numerical simulation. Therefore, the characterization of materials has received increased attention due to the need for precise input data for computational analysis software. The material mechanical behavior is numerically described through constitutive equations and material parameters. Conventionally, the identification of material parameters is achieved using standard quasi-homogeneous mechanical tests [START_REF] Rusinek | Shear testing of a sheet steel at wide range of strain rates and a constitutive relation with strain-rate and temperature dependence of the flow stress[END_REF][START_REF] An | A novel and simple method for the measurement of plane strain work hardening[END_REF][START_REF] Gilles | Experimental characterization and elasto-plastic modeling of the quasi-static mechanical response of TA-6V at room temperature[END_REF]. In this approach, the strain field is quasi-homogeneous along the gauge part of the sample under tensile conditions before necking occurs, and under shear conditions the heterogeneity is negligible, which is restrictive. Moreover, from a single classical test it is difficult to extract many material parameters, so several tests are required to identify the many parameters of a single constitutive model. The number of material parameters to identify can be as low as 3, considering an isotropic material described by the von Mises yield criterion with the Swift hardening law [START_REF] Reis | Inverse identification of the Swift law parameters using the bulge test[END_REF]. On the other hand, some studies have identified many more material parameters; for example, in Souto et al. (2015b), 24 parameters are identified for a highly anisotropic material using the Yld2004-18p yield criterion with a mixed hardening law.

More recently, research has focused on alternative identification methods based on heterogeneous strain fields, measured using full-field experimental techniques [START_REF] Prates | Inverse strategies for identifying the parameters of constitutive laws of metal sheets[END_REF][START_REF] Pierron | Towards material testing 2.0. A review of test design for identification of constitutive parameters from full-field measurements[END_REF]. The accuracy of these alternative methods depends on three main issues: (i) the geometry of the specimen to be used, (ii) the choice of an appropriate measurement technique for the strain field, and (iii) the definition of an identification strategy. By using these techniques, it is possible to extract a greater amount and variety of information from the strain field developed during the test. Ideally, the use of a single heterogeneous test could be sufficient to characterize the material behavior (Rossi et al. 2022b). However, the selection of a geometry that provides a rich strain field, both in type and amount of information, is still a topic under research [START_REF] Thoby | Robustness of specimen design criteria for identification of anisotropic mechanical behaviour from heterogeneous mechanical fields[END_REF]. Additionally, the use of full-field measurement techniques requires inverse methodologies to determine the material parameters of a given constitutive model.
The identification procedure consists of minimizing the gap between experimental and numerical data, measured by an objective function, for example, defined in a least squares form. In sheet metal characterization, the thickness of the material is generally small, of the order of a few millimeters or below, and stress levels in the ND to the sheet plane are most of the time neglected. In addition, measuring deformation perpendicular to the plane of the sheet is a difficulty in conventional DIC techniques. For that reason, studies investigating the material's behavior tend to assume that it presents isotropic behavior in the ND to the sheet plane. To evaluate the richness of the test, studies generally use the major and minor stresses diagram and the major and minor strains diagram, in the sheet plane, respectively illustrated in Figures 3.1a and 3.1b. Both types of representation illustrate the mechanical state observed at different material points of the test. In the case of the major and minor stresses diagram, the mechanical states are represented by the ratio of the minor stress $\sigma_2$ to the major stress $\sigma_1$. For example, $\sigma_2/\sigma_1 = 1$ corresponds to an equibiaxial state, and $\sigma_2/\sigma_1 = -1$ to a shear state. In the case of the major and minor strains diagram, the mechanical states are represented by the ratio of the minor strain $\varepsilon_2$ to the major strain $\varepsilon_1$. For example, $\varepsilon_2/\varepsilon_1 = 1$ corresponds to an equibiaxial state, and $\varepsilon_2/\varepsilon_1 = -0.5$ to a uniaxial state. One of the goals of using heterogeneous mechanical tests is to apply them in the identification of material parameters with inverse methodologies. Therefore, the following sections provide an overview of heterogeneous mechanical tests reported in the literature, with a focus on the type and quantity of information. Details of applications to the identification of material parameters are also provided where applicable. The tests are presented chronologically in order of their publication, and are distinguished between uniaxial, biaxial, and out-of-plane loading. The latter tests are more complex, requiring advanced experimental apparatus with contact between the material and the tools; therefore, friction exists and should be accounted for. On the other hand, the first and second types of tests are simpler, and no contact is required between the material and the tools. Additionally, no out-of-plane or shear stresses occur, that is, the tests are insensitive to the material parameters associated with these stresses. For each test, a description of the geometry, materials, and constitutive models used will be provided when this information is available. In some cases, the studies do not explain how the test has been designed or selected, so no information is reported.
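The least-squares minimization mentioned above can be sketched as follows; `simulate` is a hypothetical wrapper around the FEA model (e.g. an Abaqus run with the UMMDp), and the structure is a generic illustration of a FEMU-type objective rather than the specific implementation used later in this thesis.

```python
import numpy as np

def femu_objective(params, experimental_fields, simulate):
    """Least-squares gap between measured and simulated full-field data.

    `experimental_fields` maps each load step to an array of measured
    quantities (e.g. strains at DIC points); `simulate` is a placeholder
    returning the corresponding numerical fields for a parameter set.
    """
    residuals = []
    for load_step, exp in experimental_fields.items():
        num = simulate(params, load_step)      # hypothetical FEA wrapper
        residuals.append((num - exp).ravel())
    r = np.concatenate(residuals)
    return 0.5 * float(r @ r)                  # classical least-squares form

# A least-squares optimizer (e.g. scipy.optimize.least_squares) can then
# be used to minimize this objective with respect to `params`.
```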
Additionally, [START_REF] Meuwissen | Determination of the elasto-plastic properties of aluminium using a mixed numerical-experimental method[END_REF] also proposed a shear-like specimen to investigate the material characterization of an aluminum alloy with 0.5 mm of thickness (see Figure 3.2b). These tests were selected so that the strain and stress levels in the materials varied over a wide range. However, it is unclear how the geometry was obtained. Later, the shear-like specimen was also used to characterize a dual-phase steel [START_REF] Haddadi | Improving the characterization of a hardening law using digital image correlation over an enhanced heterogeneous tensile test[END_REF]) and a pure titanium with 0.5 mm of thickness [START_REF] Pottier | Contribution of heterogeneous strain field measurements and boundary conditions modelling in inverse identification of material parameters[END_REF]. [START_REF] Kajberg | Characterisation of materials subjected to large strains by inverse modelling based on in-plane displacement fields[END_REF] designed a specimen with a narrow part in the middle to control the location of plastic instabilities (see Figure 3.2c). This way, it was possible to focus the acquisition cameras in this region and achieve better spatial resolution. The test achieved high values of equivalent plastic strain (around 0.75), leading to a successful characterization of the hardening behavior up to large strains. [START_REF] Belhabib | Heterogeneous tensile test on elastoplastic metallic sheets: comparison between FEM simulations and full-field strain measurements[END_REF] and [START_REF] Haddadi | Improving the characterization of a hardening law using digital image correlation over an enhanced heterogeneous tensile test[END_REF] proposed a heterogeneous tensile test for the identification of material parameters of a dual-phase steel with 1 mm of thickness using full-field measurements (see Figure 3.2d). The specimen consists of a hybrid geometry between a classical tensile test and a plane tensile test, and was designed with the aim of presenting: (i) large strain heterogeneity in the gauge area; (ii) large strain paths diversity; and (iii) good sensitivity of the strain fields to the material parameters. The study performed real experiments using this test, but from the available data, it is not easy to identify the type of strain paths achieved. [START_REF] Robert | Identification of hardening parameters using finite element models and full-field measurements: Some case studies[END_REF] investigated the use of the specimens proposed by [START_REF] Meuwissen | Determination of the elasto-plastic properties of aluminium using a mixed numerical-experimental method[END_REF] and [START_REF] Belhabib | Heterogeneous tensile test on elastoplastic metallic sheets: comparison between FEM simulations and full-field strain measurements[END_REF] to identify the material parameters of Hill 1948 yield criterion and Ludwig isotropic hardening law for a 2024-T3 aluminum alloy. Through real experiments, it was observed that the range of strain paths is located between uniaxial tension and plane strain for both specimens. 
The maximum major strain values observed are approximately 0.20 and 0.16 (concentrated in the notches), respectively, for the specimens of [START_REF] Belhabib | Heterogeneous tensile test on elastoplastic metallic sheets: comparison between FEM simulations and full-field strain measurements[END_REF] and [START_REF] Meuwissen | Determination of the elasto-plastic properties of aluminium using a mixed numerical-experimental method[END_REF]. The study concludes that, for this specific material, the specimen proposed by [START_REF] Belhabib | Heterogeneous tensile test on elastoplastic metallic sheets: comparison between FEM simulations and full-field strain measurements[END_REF] exhibited similar strain paths, but for higher values of strain and with a better distribution over the surface of the specimen. Avril et al. (2008c), [START_REF] Pierron | Extension of the virtual fields method to elasto-plastic material identification with cyclic loads and kinematic hardening[END_REF], Rossi et al. (2016b), and [START_REF] Marek | Sensitivity-based virtual fields for the non-linear virtual fields method[END_REF] investigated the use of double notched specimens with similar geometries to characterize material parameters using the VFM (see Figure 3.2e). According to Rossi et al. (2016b), this shape is often used because it can produce a heterogeneous strain field, is easy to machine, and, in plasticity, allows a large zone of the specimen to be under plastic deformation. However, Rossi et al. (2016b) observe that the adopted notched specimens generate states of stress covering only a small part of the yield surface, so that a good identification of the material parameters of the Hill 1948 yield criterion is obtained using a 45° specimen or the combination of 0° and 90° specimens from the RD. [START_REF] Cooreman | Identification of the plastic material behaviour through full-field displacement measurements and inverse methods[END_REF] proposed two perforated specimens of different geometric complexity to identify the material parameters of the Hill 1948 yield criterion and the Swift hardening law (see Figures 3.2f and 3.2g). In total, there were 6 material parameters to identify. By comparing the identification against a traditional approach using multiple conventional uniaxial tensile tests, the study showed that using a single heterogeneous test yields better results. The study concluded that mechanical tests should be proposed according to the forming process to be employed afterwards. [START_REF] Kim | Determination of anisotropic plastic constitutive parameters using the virtual fields method[END_REF] investigated four different geometries of heterogeneous tests (see Figure 3.3), requiring that they present very heterogeneous stress states under a uniaxial load, and that the relative errors between the parameters identified using the VFM and the input values of the FEA simulations be less than 1 %. The study considers the latter condition important because the identification was unsuccessful for several geometries that, despite producing heterogeneous stress states, concentrated high strains in a few elements. Geometry A exhibits various stress distributions, from biaxial tension, uniaxial tension, and pure shear to uniaxial compression. Though this geometry yields satisfactory identification results, the specimen tends to buckle in the experiments.
Geometry B exhibits a similar stress distribution to geometry A, and the identification results were successful, even though it is considered that the specimen will eventually buckle when imposing a large displacement. Geometry C exhibits biaxial stress states in the central area between the two holes. However, in the experiments, it was found that necking occurs prematurely in the hole areas, leading to unsuccessful identification. Geometry D exhibits a stress distribution mainly concentrated over the uniaxial tension range. The identification using this geometry was unsuccessful, as the specimen does not provide sufficient information to identify each material parameter. Finally, the study combined two tests of geometry B in different directions, and the identification results are satisfying. Although applied to dynamic testing, [START_REF] Fu | Inertia-based identification of elastic anisotropic properties for materials undergoing dynamic loadings using the virtual fields method and heterogeneous impact tests[END_REF] designed a specimen similar to geometry B, named M-shaped specimen. Goga ( 2014) proposed a specimen for plane stress analysis, which only requires a uniaxial testing machine (see Figure 3.4a). Due to its geometry, the specimen exhibits tension in the vertical arm and compression in the horizontal one. The geometry was numerically and experimentally investigated for an aluminum alloy, and appears to exhibit various mechanical states. However, the test was only carried out in the elastic regime. Rossi et al. (2016a) proposed a couple of specimens based on a numerical procedure that relies on a test simulator that can generate synthetic images similar to the ones obtained during an actual test, to identify the hardening behavior of sheet metals with the VFM. The external shape of the specimen is fixed, while in the zone of measurement, there are two notches whose geometries are defined according to 7 independent design variables (see Figure 3.4b). The study proposes a few geometries (with different variables) considered optimal geometries, which present a diffuse plastic strain distribution, while non-optimal geometries exhibit a strain concentration only close to the notches. [START_REF] Souto | Mechanical design of a heterogeneous test for material parameters identification[END_REF] proposed a specimen, known as butterfly test, based on an optimization procedure of the outer shape (see Figure 3.4c). A dedicated indicator guides the optimization that quantifies, among others, the heterogeneity of strain states (Souto et al. 2015a). Later, [START_REF] Aquino | Design of heterogeneous mechanical tests: Numerical methodology and experimental validation[END_REF] investigated the experimental application of the specimen to characterize a 0.7 mm thick mild steel. One interesting aspect of this test is that the shape of the top boundary is created using special grips with the desired shape. To avoid sliding between the specimen and the grips, special patterned contact marks were used in the jaws. The study performs experiments and numerical simulations of the test, observing that the range of principal strains distribution is between shear and close to plane strain, but most of the specimen is the uniaxial tension region. The maximum value of major strain observed is approximately 0.8, but only a few points are observed in this range. Most material points are concentrated for values lesser than 0.4. 
The identification procedure uses two different material orientations (0° and 90° from the RD), and the results show that the identified parameters can reproduce the real material behavior. [START_REF] Jones | Parameter covariance and non-uniqueness in material model calibration using the virtual fields method[END_REF] proposed a specimen to investigate the identification of the material parameters of a viscoplastic material model for a 1.5 mm thick stainless steel, using the VFM (see Figure 3.4d).

[Figure 3.4: specimens proposed by (a) Goga (2014), (b) Rossi et al. (2016a), (c) Souto et al., (d) Jones et al. (2018), (e) Küsters et al., and (f) Barroqueiro et al.]

The study designed the specimen via an iterative process, starting with an initial shape selected by engineering intuition, and then performing a finite element analysis of the specimen using the reference material model parameters. Afterwards, the design is manually adjusted until the geometry meets the following criteria: (i) maximize stress heterogeneity, (ii) maximize the range of strain rates, (iii) minimize large gradients in stress or strain near sample edges, (iv) restrict the geometry to planar, but with arbitrarily complex in-plane geometry, while preventing buckling, and (v) be a uniaxial loading test. The strain paths that occur in the specimen are not presented, as the study does not explore in detail the characteristics of the optimized specimen. [START_REF] Küsters | Damage characterization on heterogeneous tensile tests[END_REF] developed a specimen to achieve various stress states in the plastic regime (see Figure 3.4e). This test is intended to be used in damage characterization, and has also been designed to achieve more than one failure mode in each experiment. [START_REF] Barroqueiro | Design of mechanical heterogeneous specimens using topology optimization[END_REF] developed an innovative methodology to design heterogeneous mechanical tests from topology optimization and a dedicated indicator. The goal was to design tests promoting uniformity of equivalent stress, while simultaneously achieving uniaxial tension, uniaxial compression, and shear stress states. Following this innovative methodology, a specimen mainly achieving tensile stress states was obtained (see Figure 3.4f). The geometry has already been experimentally investigated, but severe buckling was observed. [START_REF] Fu | A method for the simultaneous identification of anisotropic yield and hardening constitutive parameters for sheet metal forming[END_REF] proposed a specimen, named bridge-like, that exhibits balanced heterogeneous stress states (see Figure 3.5a). This test was designed based on the plate buckling theory to avoid significant buckling under compressive loadings. One important aspect of their study focused on avoiding the use of buckling devices, because they block the field of view during experiments. The test has been experimentally validated under cyclic loadings and has been shown not to be liable to buckle within a reasonable range of in-plane compression. The study also performed a calibration of the Hill 1948 yield criterion and a non-linear mixed hardening law, and found that combining two tests with different material orientations yielded accurate and robust results.
[START_REF] Maček | Calibration of advanced yield criteria using uniaxial and heterogeneous tensile test data[END_REF] developed a heterogeneous test that primarily achieves a biaxial stress state away from free edges, to avoid difficulties with DIC measurements (see Figure 3.5b). Additionally, the study also wanted to avoid any stress concentrations or high strain gradients. Their goal was to use this test to identify only biaxial related parameters of the Yld2000-2d yield criterion, in combination with classic uniaxial tensile tests. One issue was the pronounced buckling of the upper and lower bridges during experimental tests. Another specimen designed by topology optimization was proposed by [START_REF] Chamoin | Coupling between topology optimization and digital image correlation for the design of specimen dedicated to selected material parameters identification[END_REF] to maximize the sensitivity of displacement fields to material parameters (see Figure 3.5c). The obtained geometry was directly manufacturable. Both numerical and experimental results that confirm the validity of the methodology were obtained. However, this study is only applied to fir wood and the identification of a shear modulus. [START_REF] Conde | Design of heterogeneous interior notched specimens for material mechanical characterization[END_REF] used shape optimization methodologies to design a perforated specimen (see Figure 3.5d). In this study, the exterior geometry was fixed, and only the interior shape was optimized to maximize stress heterogeneity. The obtained specimen demonstrates to achieve stress states ranging from uniaxial tension to compression. Although this specimen has not yet been experimentally validated, it has already been successfully applied to the identification of material parameters of Swift hardening law [START_REF] Conde | Parameter identification of Swift law using a FEMU-based approach and an innovative heterogeneous mechanical test[END_REF]. Zhang et al. (2022a) designed a new specimen using shape optimization of the exterior notches by maximizing the heterogeneity of strain states (see Figure 3.5e). Interestingly, this study uses the same indicator as proposed by Souto et al. (2015a). This specimen has shown to achieve strain states located between shear to uniaxial tension, though closer to the latter. Additionally, the study validated the use of this test to successfully identify the material parameters of the Hill 1948 and the Yld2000-2d yield criteria. [START_REF] Gonçalves | On the topology design of a mechanical heterogeneous specimen using geometric and material nonlinearities[END_REF] extended the study of [START_REF] Barroqueiro | Design of mechanical heterogeneous specimens using topology optimization[END_REF] to plasticity and obtained new geometries (see Figure 3.5f). A large part of the specimen presents strain states between uniaxial compression and uniaxial tension. In terms of equivalent plastic strain, it is observed that the highest values are concentrated in small regions and away from regions of uniaxial tension. So far, this test has not yet been applied in the identification of material parameters or experimentally validated. [START_REF] Chapelier | Spline-based specimen shape optimization for robust material model calibration[END_REF] developed an innovative methodology that leads to the improvement of existing specimens, instead of generating new geometries from scratch. 
This methodology is based on a non-invasive CAD-inspired optimization strategy, which relies on univariate spline tools to update the finite element model mesh. An example is shown in Figure 3.5g, where the shape and position of holes are optimized to improve the sensitivity of the tests to the material parameters of a given model. Many uniaxial loading tests exist, as shown by the 25 different tests presented here. It is expected that this number will continue to increase, as studies will continue to design tests using even more innovative methodologies. The first heterogeneous tests used, in general, simple notches or holes, while nowadays irregular shapes and multiple holes are more prominent. If future tests follow the trend observed here, the complexity of the specimens will increase. This will lead to the need for advanced manufacturing processes and complex experimental apparatus, such as buckling devices. However, selecting the best test is not an easy task due to the lack of comparative studies using suitable metrics. The following section provides an overview of biaxial loading tests.

Biaxial Loading Tests

The use of a cruciform specimen, coupled with full-field displacement or strain measurements, has gained interest in the development of inverse identification strategies. In general, this test allows reaching strain paths ranging from uniaxial tension (in the arms region of the specimen) to equibiaxial tension (in the center region of the specimen), and high strain gradients from the center region of the specimen to the end of the arms region. Also, by changing the load or displacement ratio over the two normal loading axes, several biaxial stress states can be achieved in the central region of the specimen. However, this type of test only allows obtaining low values of equivalent plastic strain before rupture occurs. Nevertheless, by loading the specimen in two directions (RD and TD), it is possible to characterize a larger spectrum of the material anisotropy compared to most uniaxial loading tests [START_REF] Hannon | A review of planar biaxial tensile test systems for sheet metal[END_REF]. [START_REF] Makinde | Development of an apparatus for biaxial testing using cruciform specimens[END_REF] pioneered the development of heterogeneous mechanical tests to tackle the lack of suitable standard biaxial loading tests. The study designed a cruciform specimen using a statistical method of factorial and response surface designs in combination with FEA simulations. The resulting geometry presents a circular reduced central region (see Figure 3.6a), and its stress and strain state distributions were better than those of other geometries. [START_REF] Yu | Design of a cruciform biaxial tensile specimen for limit strain analysis by FEM[END_REF] designed an optimal shape of a cruciform specimen by manually adjusting the geometrical parameters. The specimen was designed to obtain a uniform stress distribution in the central region and achieve large deformations. The geometry presents a chamfer on the arms and on the central region (see Figure 3.6b). Moreover, different strain paths in the central region, ranging from uniaxial tension to biaxial tension, can be obtained by adjusting the velocity of the boundary conditions.
[START_REF] Cooreman | Identification of the plastic material behaviour through full-field displacement measurements and inverse methods[END_REF] proposed a biaxial test on a perforated cruciform specimen (see Figure 3.6c) to simultaneously identify the material parameters of the Hill 1948 anisotropic yield criterion and the Swift isotropic hardening law to describe the plastic behavior of a dual-phase steel with 0.8 mm of thickness. The study performed an experimental test with this specimen using DIC and used 7 load steps in the identification procedure. The isovalue distributions of strain at the last step show that the test presents considerable heterogeneity, though the levels of strain in the RD and the TD are relatively low. On the other hand, the levels of shear strain are considerably high. Additionally, the obtained values of equivalent plastic strain at the end of the last load step are low (approximately 0.23), indicating that this test might not adequately represent the higher strain values that occur in sheet metal forming operations. The results of the Swift hardening law are satisfactory, except for the parameter controlling the initial yield strain, which was justified by the lack of information close to the yield point. [START_REF] Zidane | Development of an in-plane biaxial test for forming limit curve (FLC) characterization of metallic sheets[END_REF] investigated several cruciform specimen geometries and identified a promising one for which necking appears in the central region. Based on the most promising geometry, an optimized geometry was manually proposed by adjusting two geometrical parameters (see Figure 3.6d). This geometry concentrates and achieves stress states between uniaxial tension and biaxial tension in the central region, and has been numerically and experimentally validated for an aluminum alloy. [START_REF] Teaca | Identification of sheet metal plastic anisotropy using heterogeneous biaxial tensile tests[END_REF] designed two cruciform specimens that achieve a wide range of mechanical states and are highly sensitive to the material anisotropy (see Figure 3.7), to identify the material parameters of an advanced anisotropic yield criterion for two materials with 1 mm of thickness. The material characterization is performed for stress paths ranging from uniaxial to plane strain tension with the specimen UT/PST and from uniaxial to equibiaxial tension with the specimen UT/EBT. For the UT/PST specimen, average mechanical states in the arms (RD region) and connecting parts between arms are close to uniaxial tension and plane strain tension, respectively. In the case of the UT/EBT specimen, mechanical states along the RD vary between equibiaxial tension in the center and uniaxial tension in the central ligament of the arm. According to the study, the main advantage of both specimens is the multiple mechanical states achieved in a single biaxial test, allowing the reproduction of mechanical states close to those encountered in industrial sheet metal forming processes. Nevertheless, the observed mechanical states and deformation levels are similar to those observed in other cruciform specimens. [START_REF] Schmaltz | Comparison of different biaxial tests for the inverse identification of sheet steel material parameters[END_REF] explored the usability of the biaxial test on three cruciform specimen geometries (see Figure 3.8), promoting different types of heterogeneous strain fields.
The study identified the material parameters of the Hill 1948 yield criterion and an isotropic hardening law for a 2 mm thick steel. The first geometry is designed to present a homogeneous strain distribution in the center region of the specimen, without using slits or perforations (G#1). The second geometry presents the same shape as the first, but with a hole in the center region of the specimen to generate a heterogeneous strain distribution (G#2), and the third geometry is designed so that, under biaxial tensile conditions, the center region rotates, generating tension, shear and compressive stresses (G#3). Additionally, the study performed a fourth test to generate more compressive stresses (G#4), using the same geometry as G#2, but loaded by compressive forces in the y direction and by tensile forces in the x direction. From the experimentally determined strain state, tests G#2 and G#3 present a wide range of strain distributions compared to test G#1 in the tensile spectrum, with test G#2 also presenting some equibiaxial tension state, though with relatively small values of major and minor strains. On the other hand, test G#4 presents a strain distribution mainly in the shear state, with both dominant tensile and compressive components. The study concludes that test G#4 should yield the best identification results, though this is not observed later in their identification. The study argues that the implemented model does not take into account that the real material behavior in the compression region differs from that in the tensile direction, and does not include kinematic or anisotropic hardening. [START_REF] Prates | A new strategy for the simultaneous identification of constitutive laws parameters of metal sheets using a single test[END_REF] proposed an inverse methodology for determining the material parameters of the Hill 1948 yield criterion and the Swift hardening law from a single biaxial tensile test using an optimized 1 mm thick cruciform specimen (see Figure 3.9a). The geometry of this specimen was designed to reproduce heterogeneous strain paths observed in sheet metal forming operations. The specimen geometry was shown to cover a range of strain paths from equibiaxial tension to uniaxial tension. The proposed specimen was not experimentally tested, and the numerical results show that, at the center of the specimen, the equivalent plastic strain presents low values of approximately 0.03, while the maximum values of approximately 0.3 are obtained along the arms of the specimen. Moreover, the study performed a sensitivity study using five virtual materials (variations of the parameters of the constitutive models). The results showed that variations in the material parameters of the hardening law did not influence the strain paths, but variations in the anisotropic material parameters could lead to substantial differences. [START_REF] Liu | Identification of sheet metal hardening for large strains with an in-plane biaxial tensile test and a dedicated cross specimen[END_REF] designed an optimized cruciform sample with a thickness-reduced central zone and four slots at each arm (see Figure 3.9b), to characterize the hardening behavior of an aluminum alloy with 2 mm of thickness. The optimized geometry was obtained from a parametric study of five variables controlling the position of the notches and slots.
The purpose of using a thickness-reduced central zone and slots at each arm in biaxial tensile specimens is to concentrate strains in the central zone, avoiding strain localization at the notches. The study performed an experimental test of the specimen using DIC to calculate the strain fields on the surface. The equivalent strain reaches 0.30 just before cracking, and the principal strains are about 0.16 for the major strain and 0.14 for the minor strain in the center of the specimen. The maximum deformation is observed in the fillet between the flat central zone and the non-reduced thickness zone on the central axis of the specimen (TD). Although the maximum equivalent plastic strain is relatively high, the range of strain paths is not wide (mainly between equibiaxial and uniaxial tension), therefore not providing the wide variety of strain paths required to characterize the material states in sheet metal forming operations. The study attempted to identify the hardening behavior of the material using three distinct yield criteria (one isotropic and two anisotropic), and the results are relatively good. However, the potential of the specimen to identify the material parameters of anisotropic yield criteria was not investigated. [START_REF] Zhang | Potential of the cross biaxial test for anisotropy characterization based on heterogeneous strain field[END_REF] investigated the use of a cruciform specimen without slits or perforations (see Figure 3.9c) to identify the material parameters of anisotropic yield criteria for an aluminum alloy and a dual-phase steel, respectively, with 1 mm and 1.75 mm of thickness. The experimental results of the designed cruciform specimen show that the observed strain path is nearly equibiaxial for both materials in the central region. It then changes gradually along the diagonal direction to nearly uniaxial tension near the corner. Overall, the observed strains are relatively small for both materials, and the range of strain paths is not wide enough to characterize the anisotropic behavior of a given material. [START_REF] Martins | Calibration of anisotropic plasticity models using a biaxial test and the virtual fields method[END_REF] investigated the use of the same geometry as [START_REF] Zhang | Potential of the cross biaxial test for anisotropy characterization based on heterogeneous strain field[END_REF], as well as some variations, to identify the material parameters of the Hill 1948 and the Yld2000-2d yield criteria with the Swift isotropic hardening law, using virtual experimental data of a mild steel and an aluminum alloy. The study concluded that the original geometry proposed by [START_REF] Zhang | Potential of the cross biaxial test for anisotropy characterization based on heterogeneous strain field[END_REF] lacks information on shear stress states for the mild steel using the Hill 1948 yield criterion, and that, using the adapted geometry (see Figure 3.9d), an increase of areas with shear states is observed. Then, an identification of the material parameters for the aluminum alloy using the Yld2000-2d yield criterion was carried out using the adapted geometry. The results show that the adapted test is sensitive to the material, because the shear paths differ from those of the mild steel. Also, the plastic behavior of the aluminum alloy leads to more information near plane strain, but to a decrease of information in the shear region.
From these two studies, it is essential to note that the richness of the test seems to depend significantly on the material used. Therefore, it is of utmost importance to analyze the richness of a test for more than one material. [START_REF] Kim | Design of a new cruciform-like specimen for combined tension and shear of metal sheets[END_REF] designed a cruciform specimen that generates combined tension and shear states in the gauge section. The specimen shape is based on the combination of a general cruciform specimen and a simple shear specimen (see Figure 3.9e). Its geometry was optimized by evaluating several shapes restricted by geometrical constraints to avoid generating plastic deformation outside the gauge section [START_REF] Kim | Shape optimization of a cruciform-like specimen for combined tension and shear loading[END_REF]. The optimized shape showed large stress concentrations near the fillets, but provided good stress uniformity in the central region for different combinations of force ratio, leading to either a predominant tension or shear state. In general, the biaxial test with cruciform specimens results in relatively low values of equivalent plastic strain, and some studies use thickness-reduced center regions of the specimen to increase these. However, the mechanical state usually ranges from equibiaxial to uniaxial tension, with a reduced number of, or even no, strain paths in the shear state.

Out-of-Plane Loading Tests

[START_REF] Pottier | Out-of-plane testing procedure for inverse identification purpose: application in sheet metal plasticity[END_REF] designed an experimental procedure based on out-of-plane displacements similar to the Nakazima test. However, the specimen geometry is designed to exhibit highly heterogeneous strain paths (see Figure 3.10a), where a hemispherical punch is used to apply the prescribed displacement at the sample center, which is circular, tightly encircled and fastened between the die and the holder. The specimen is designed to exhibit tension (at 0° and 90° from the RD), shear (at 0° and 90°), and expansion, as well as an isoprobability of fracture in the tension and the shear zones (isotropic material). The strain level in expansion remains weaker than in the tension or shear zones, where the maximum levels of strain are relatively high, of 0.62 and 0.95, respectively. From the identification results, the study concludes that the test can lead to the identification of the material parameters of an anisotropic plastic model. [START_REF] Wang | Identification of 7B04 aluminum alloy anisotropy yield criteria with conventional test and Pottier test at elevated temperature[END_REF] used this test to characterize the anisotropic behavior of an aluminum alloy at 200 °C, using the Yld2000-2d yield criterion. The obtained results were validated using a deep drawing test, which showed a good agreement between experiments and numerical simulations. [START_REF] Hapsari | Instrumented incremental sheet testing for material behavior extraction under very large strain: Information richness of continuous force measurement[END_REF] designed a micro incremental deformation test based on the principle of single point incremental forming (see Figure 3.10b). This test consists of locally deforming a clamped blank using a hemispherical tool, where large deformations (approximately 240 %) and multiple strain paths can be achieved. The study characterizes the ductile damage behavior of a copper foil with 0.21 mm of thickness.
In a first step, the study identifies the material parameters of a hardening law using classic uniaxial tensile tests up to deformation values of 0.3. Then, the material parameters of the same hardening law with a damage model are identified using a FEMU approach, where the objective function was composed of the forming forces between experiments and numerical simulations. The material parameters of the hardening law identified in the second step are similar to those obtained in the first step. Nevertheless, without the second step, it would not have been possible to identify the damage material parameters, due to damage growth and necking in the tensile tests.

Conclusions

An extensive overview of the heterogeneous mechanical tests reported in the literature to characterize the mechanical behavior of sheet metals has been presented. In Figure 3.11, the distribution of designed tests over time is presented, showing that this topic emerged approximately 20 years ago and that more than 40 new tests have since been developed. Historically, uniaxial loading tests have always been largely developed, though biaxial loading tests were similarly investigated up to 2014. However, the trend has been to design uniaxial loading tests using different and innovative methodologies. Many studies have focused on the use of cruciform specimens under biaxial loading, where it is possible to achieve mechanical states ranging from uniaxial to equibiaxial tension. To reach large levels of deformation, some studies have experimented with the use of perforated cruciform specimens, thickness-reduced zones, or even slots in the arms. However, the levels of strain are still too low to fully characterize the hardening behavior of the material. Other studies have used uniaxial loading tests, where it is possible to use complex geometries and achieve various mechanical states (such as shear, uniaxial tension, or biaxial tension). To design the specimen geometries, some studies use innovative methodologies, such as shape or topology optimization. Nevertheless, it was also shown that others are manually proposed based on the authors' experience and creativity. However, a single test might not characterize the material behavior or fully retrieve the best material parameters. For that reason, some studies have combined two identical tests with different material orientations to enrich the quality of the available data. Moreover, one common problem with these tests is their sensitivity to the material, as the mechanical state and level of deformation can vary. Therefore, it is not an easy task to quantify the richness of a heterogeneous test in a general way. Regarding the application of heterogeneous tests to identify the material parameters of constitutive models, it is fair to observe that studies tend to focus on simple yield criteria and isotropic hardening laws. Only a single study is reported that identifies the material parameters of a non-linear kinematic hardening law, for which reverse loading conditions are necessary. When the identification is performed with finite element based inverse methods (such as the FEMU), accurate reproduction of the tests' boundary conditions is essential for its success [START_REF] Kacem | Influence of experimental boundary conditions on the calibration of a ductile fracture criterion[END_REF].
However, the latter is not always easy due to possible sliding between the grips and the specimen, lateral loads, or contact between the tools and the specimen. Furthermore, only a few specimens are analyzed by different studies and under different conditions, resulting in a lack of information on the proposed tests. The comparison of the proposed specimens is not trivial, as some studies do not present the same information, or their results lack the information needed to assess the richness of the tests. These tests are not yet able to characterize the material behavior as extensively as is achievable using multiple quasi-homogeneous tests. Nevertheless, the use of heterogeneous mechanical tests is promising in terms of decreasing the number and cost of experiments, by providing richer mechanical information in a single experiment.

Evaluation of Heterogeneous Mechanical Tests

Introduction

The mechanical design of sheet metal forming parts tends to be more virtual, decreasing the associated delays and manufacturing process costs. Consequently, the materials' mechanical characterization has received increased attention due to the need for accurate input data for computational analysis software. The mechanical behavior of the materials can be numerically described through constitutive models, which can be calibrated using standard quasi-homogeneous mechanical tests (Souto et al. 2015b). Nevertheless, from a single classical mechanical test, it is difficult to extract many material parameters. Several tests are required to identify the many material parameters of a single constitutive model [START_REF] Barlat | Linear transfomation-based anisotropic yield functions[END_REF][START_REF] Cazacu | Orthotropic yield criterion for hexagonal closed packed metals[END_REF]. More recently, research has focused on alternative methods based on mechanical tests that induce heterogeneous strain fields, measured through full-field experimental techniques, providing significantly more valuable data [START_REF] Haddadi | Improving the characterization of a hardening law using digital image correlation over an enhanced heterogeneous tensile test[END_REF][START_REF] Grédiac | On the optimal pattern for displacement field measurement: Random speckle and DIC, or checkerboard and LSA?[END_REF]. Ideally, a single mechanical test should be enough to characterize the material's mechanical behavior. However, the accuracy of these alternative methods depends on several factors [START_REF] Pierron | Towards material testing 2.0. A review of test design for identification of constitutive parameters from full-field measurements[END_REF]: (i) the applied load and shape of the specimen used in the mechanical test [START_REF] Pierron | Test design for identification from full-field measurements: A concise review[END_REF], (ii) the choice of an appropriate strain field measurement technique [START_REF] Blaysat | Towards criteria characterizing the metrological performance of full-field measurement techniques[END_REF], or (iii) the definition of an identification strategy [START_REF] Prates | Inverse strategies for identifying the parameters of constitutive laws of metal sheets[END_REF].
Indeed, the heterogeneity does not allow for any analytical post-treatment, and a suitable inverse methodology, such as the FEMU [START_REF] Aquino | Design of heterogeneous mechanical tests: Numerical methodology and experimental validation[END_REF] or the VFM [START_REF] Grédiac | The virtual fields method for extracting constitutive parameters from full-field measurements: A review[END_REF][START_REF] Pierron | The Virtual Fields Method: Extracting Constitutive Mechanical Parameters from Full-field Deformation Measurements[END_REF], is required. Although many challenges still exist in this field, research still focuses on selecting and designing the most suitable heterogeneous mechanical tests for the calibration of sheet metals' constitutive models. In recent times, the design of novel heterogeneous mechanical tests using different strategies has increased (see Chapter 3). An innovative approach adopted a shape optimization procedure to design a butterfly-shaped specimen under uniaxial tensile loading [START_REF] Souto | A numerical methodology to design heterogeneous mechanical tests[END_REF][START_REF] Souto | Mechanical design of a heterogeneous test for material parameters identification[END_REF]. A different approach analyzed configurations of notched specimens by evaluating the error between synthetic images and numerical simulations (Rossi et al. 2016a). More recently, some studies used topology optimization, allowing more flexibility in the design process, and resulting in complex geometries [START_REF] Barroqueiro | Design of mechanical heterogeneous specimens using topology optimization[END_REF][START_REF] Chamoin | Coupling between topology optimization and digital image correlation for the design of specimen dedicated to selected material parameters identification[END_REF]. Another study proposed a bridge-like specimen based on the plate buckling theory [START_REF] Fu | A method for the simultaneous identification of anisotropic yield and hardening constitutive parameters for sheet metal forming[END_REF]. The bridge-like specimen is the first attempt to design a heterogeneous mechanical test that can characterize the mechanical behavior of sheet metals under cyclic loadings. Generally, the quality and quantity of heterogeneous mechanical tests have increased. However, a qualitative or quantitative comparison is not straightforward, because studies use different materials and representations of output data. To solve this problem, Souto et al. (2015a) proposed a quantitative indicator to evaluate and classify mechanical tests. The indicator evaluates the mechanical test up to rupture and is based on the strain heterogeneity. Although this methodology is innovative, it can depend on the material, and it did not consider the sensitivity of the tests to material anisotropy [START_REF] Martins | Calibration of anisotropic plasticity models using a biaxial test and the virtual fields method[END_REF]. Additionally, the sensitivity of heterogeneous mechanical tests to the material parameters is a relevant metric to evaluate their richness (Souto et al. 2015a). In this scope, [START_REF] Hapsari | Instrumented incremental sheet testing for material behavior extraction under very large strain: Information richness of continuous force measurement[END_REF] used an indicator that quantifies the identifiability of a parameters' subset based on the largest and smallest eigenvalues of the dimensionless sensitivity matrix.
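As an illustration of this class of indicators, the sketch below computes an eigenvalue-based identifiability index from a dimensionless sensitivity matrix. It is a minimal sketch under stated assumptions: the matrix S, its scaling, and the log-ratio index are generic choices, not the exact formulation used in the cited study.

```python
import numpy as np

def identifiability_index(S):
    """Eigenvalue-based identifiability indicator for a dimensionless
    sensitivity matrix S (n_observations x n_parameters). Returns the
    eigenvalues of S^T S and the log ratio of the largest to the
    smallest one; a large ratio signals parameter combinations that
    are poorly identifiable from the test."""
    eigvals = np.linalg.eigvalsh(S.T @ S)  # real, ascending order
    return eigvals, np.log10(eigvals[-1] / eigvals[0])

# Hypothetical usage: sensitivities of 1000 strain observations to
# 4 material parameters (columns scaled by the parameter values).
S = np.random.default_rng(0).normal(size=(1000, 4))
eigvals, index = identifiability_index(S)
```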
Nevertheless, this methodology depends highly on the constitutive model and can be computationally expensive. More recently, Zhang et al. (2022b) also investigated the identifiability of the anisotropic material parameters through a sensitivity analysis. The study concluded, among other things, that the material orientation plays a major role in the success of the identification. Moreover, the study recommends against evaluating the identification quality by comparison with the reference parameters or the objective function values. Instead, representations of the material behavior should be used, such as the Lankford coefficients or normalized yield stresses at various material orientations. Another interesting study evaluated the robustness of heterogeneous mechanical tests in terms of measurement biases through virtual experiments [START_REF] Thoby | Robustness of specimen design criteria for identification of anisotropic mechanical behaviour from heterogeneous mechanical fields[END_REF]. To fill the existing gap in the evaluation of heterogeneous mechanical tests, this study proposes metrics based on the strain and stress states. Within a numerical approach, four heterogeneous mechanical tests are evaluated considering three different materials. Furthermore, a metric based on the Mohr's circle formulation is proposed to evaluate the tests' sensitivity to material anisotropy.

Heterogeneous Mechanical Tests

The four heterogeneous mechanical tests were selected from the previous survey to include different shapes in the present study (see Chapter 3). In general, these tests were proposed to maximize the strain field heterogeneity. One test resembles a uniaxial tensile test (see Figure 3.2d), another is a biaxial loading test (see Figure 3.8c), and two others are uniaxial loading tests with complex geometries (see Figures 3.3b and 3.4d). The tests are modeled using symmetry conditions to reduce the computational cost of the numerical simulations, and the loading is displacement driven. The tests, named A, B, C, and D, and the boundary conditions of the finite element models are described and shown in Figure 4.1. It should be noted that the 𝑥 and 𝑦 axes correspond to the global coordinate system (𝑥, 𝑦), respectively coincident with the TD and the RD for all tests. The material coordinate system (𝑥′, 𝑦′), rotating with deformation, is initially coincident with the global coordinate system. Test A is a biaxial tensile test with a centered hole, designed to generate a heterogeneous strain field [START_REF] Schmaltz | Comparison of different biaxial tests for the inverse identification of sheet steel material parameters[END_REF]. The authors considered that the test could reach levels of deformation able to properly calibrate the hardening behavior, and that it was also able to characterize the anisotropic behavior. The specimen's design is interesting, as the arms are wide and create a short empty region between them. However, the specimen's geometry is not easily modeled because of the lack of given dimensions. The authors concluded that the range of stress states was confined between the uniaxial tension and equibiaxial tension states. Nevertheless, the test constitutes a good basis for this analysis, because it presents classical characteristics of a biaxial tensile test and adds features that enhance the heterogeneity of strain and stress states. The numerical model uses symmetry conditions along the 𝑥 and 𝑦 axes, with both corresponding to loading directions (see Figure 4.1a).
Test B was designed as a hybrid shape between a uniaxial tensile test and a uniaxial plane strain tensile test [START_REF] Belhabib | Heterogeneous tensile test on elastoplastic metallic sheets: comparison between FEM simulations and full-field strain measurements[END_REF]. The shape's interest was investigated by comparing it with standard mechanical tests and evaluating the strain field's heterogeneity and sensitivity to the material parameters. Additionally, the numerical comparison accounted for the limitations of DIC, by not considering the regions of the sample near the free edges, and those where the strain level is below the DIC accuracy. This study concluded that the new heterogeneous mechanical test performed better than the standard mechanical tests in all criteria. Thanks to its simple geometry, test B represents a good benchmark in this analysis, avoiding the complexity of other heterogeneous mechanical tests. The numerical model uses symmetry conditions along the 𝑥 and 𝑦 axes, with the latter corresponding to the loading direction (see Figure 4.1b). Test C is a uniaxial tensile test, designed from empirical knowledge and trial and error, resulting in a geometry resembling the Greek capital letter sigma 𝛴 [START_REF] Kim | Determination of anisotropic plastic constitutive parameters using the virtual fields method[END_REF]. The authors showed that this test exhibits various stress states. Also, combining two tests loaded in different material orientations increased the mechanical information and improved the calibration. Test C is numerically modeled using one symmetry condition along the 𝑦 axis, aligned with the loading direction (see Figure 4.1c). Test D is a uniaxial tensile test, designed via an iterative process of finite element analyses [START_REF] Jones | Parameter covariance and non-uniqueness in material model calibration using the virtual fields method[END_REF]. The process started with a shape based on empirical knowledge, then manually adjusted to meet several criteria, such as maximizing stress heterogeneity and minimizing large gradients in strain near free edges. The optimized geometry resembles the capital letter "D", and the authors showed that it could cover a reasonable extent of stress states. Similarly to test C, one symmetry condition along the 𝑦 axis, which coincides with the loading direction, is used in the numerical model of test D (see Figure 4.1d). The numerical models are developed in 2D, assuming a plane stress state formulation. A four-node bilinear plane stress element, with reduced integration and stiffness hourglass control, is used, as well as a large strain formulation. A refined mesh with a maximum element size of 0.1 mm is used, to obtain many material points approaching a continuous state of the material. Furthermore, to account for the limitations of a subset-based DIC system, which does not provide information near the specimen's free edges, the results are analyzed for a region of interest (ROI), defined by an arbitrary boundary of 0.25 mm on the free edges of the specimen. The numerical simulations are performed with automatic time stepping, and a maximum increment size of 0.01. Finally, the numerical simulations are performed with the Abaqus/Standard software (Dassault Systèmes 2019), and the constitutive models are implemented through the UMMDp (see Section 2.4).
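In post-processing, such a ROI can be obtained by discarding material points closer than 0.25 mm to the free edges. The following is a minimal numpy sketch of that filter, assuming the free edges are available as a closed polyline of coordinates; the names and the rectangular example boundary are hypothetical.

```python
import numpy as np

def distance_to_polyline(points, polyline):
    """Minimum distance from each point in `points` (n, 2) to a
    polyline (m, 2) treated as m - 1 consecutive segments."""
    a, b = polyline[:-1], polyline[1:]
    ab = b - a                                   # segment vectors
    ap = points[:, None, :] - a[None, :, :]      # point-to-start vectors
    t = np.einsum("nmk,mk->nm", ap, ab) / np.einsum("mk,mk->m", ab, ab)
    t = np.clip(t, 0.0, 1.0)                     # clamp onto segments
    proj = a[None, :, :] + t[..., None] * ab[None, :, :]
    return np.linalg.norm(points[:, None, :] - proj, axis=2).min(axis=1)

# Hypothetical usage: keep integration points farther than 0.25 mm
# from the free edge, mimicking a subset-based DIC limitation.
free_edge = np.array([[0., 0.], [10., 0.], [10., 5.], [0., 5.], [0., 0.]])
ip_xy = np.random.default_rng(1).uniform([0., 0.], [10., 5.], size=(500, 2))
roi_mask = distance_to_polyline(ip_xy, free_edge) > 0.25
```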
Virtual Materials

Three virtual materials are considered to account for the heterogeneous mechanical tests' sensitivity to different materials: (i) an aluminum alloy (AA2090-T3) with a nominal thickness of 1.6 mm [START_REF] Yoon | Prediction of six or eight ears in a drawn cup based on a new anisotropic yield function[END_REF][START_REF] Comsa | A new formulation of the MMFC to avoid the numerical instability[END_REF][START_REF] Abedini | Evaluation and calibration of anisotropic yield criteria in shear loading: Constraints to eliminate numerical artefacts[END_REF]; (ii) a dual-phase steel (DP600) with a nominal thickness of 0.8 mm [START_REF] Ozturk | Effects of anisotropic yield functions on prediction of forming limit diagrams of DP600 advanced high strength steel[END_REF]; and (iii) a cold-rolled sheet of 99.9 % pure copper (Cu) with a nominal thickness of 0.1 mm [START_REF] Pham | Experimental and numerical investigation of the formability of an ultra-thin copper sheet[END_REF][START_REF] Thuillier | 2D springback and twisting after drawing of copper alloy sheets[END_REF]. For simplicity's sake, a plane stress state formulation is chosen. The elastic properties of the materials, the material parameters of the Yld2000-2d anisotropic yield criterion, and those of the Swift isotropic hardening law are provided in Table 4.1 (see Section 2.3 for details on the models' formulation). It is worth highlighting that the material parameters of AA2090-T3 and DP600 are selected from the literature. On the other hand, the material parameters of Cu were identified using a classical approach, with quasi-homogeneous mechanical tests and a methodology identical to Souto et al. (2015b). Although the latter identification is not presented in this study, it can be said that a good agreement was obtained between the experimental and numerical data. Therefore, the three materials considered in this study come from real experiments, providing a realistic description of the mechanical behavior up to a certain deformation level. Moreover, the same phenomenological models are used for the three materials to have a straightforward and easier comparison between them. Concerning the materials' mechanical behavior, DP600 is characterized by a much higher initial yield stress than AA2090-T3 and Cu, while the latter presents a much lower hardening rate than the former (see Figure 4.2). Moreover, Figures 4.3a and 4.3b present the normalized yield stress and the Lankford coefficient for various material orientations. These representations show the considerable variation of both the normalized yield stress and the Lankford coefficient of AA2090-T3 compared to Cu and DP600. Besides, the latter exhibits, on average, values close to 1 in both representations, while the Lankford coefficients of Cu are below 1, representing a relatively high normal anisotropy. The yield surface projections on the planes of normalized RD-TD and shear-RD stresses also highlight the anisotropic behavior, respectively represented in Figures 4.3c and 4.3d.
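To make the hardening description concrete, the sketch below evaluates the Swift isotropic hardening law used for all three materials (see Section 2.3). The parameter values are placeholders for illustration only, not the Table 4.1 values.

```python
import numpy as np

def swift_flow_stress(eps_p_bar, K, eps0, n):
    """Swift isotropic hardening law: flow stress as a power law of
    the accumulated equivalent plastic strain."""
    return K * (eps0 + eps_p_bar) ** n

# Placeholder parameters (illustrative, not the Table 4.1 values):
# a high-hardening steel-like set and a low-hardening copper-like set.
eps_p_bar = np.linspace(0.0, 0.3, 301)
sigma_steel_like = swift_flow_stress(eps_p_bar, K=1000.0, eps0=0.002, n=0.18)
sigma_copper_like = swift_flow_stress(eps_p_bar, K=450.0, eps0=0.001, n=0.06)
```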
Forming Limit Curves

In evaluating heterogeneous mechanical tests, it is relevant to consider the maximum deformation the material can sustain relative to a specific condition. This condition can be, among others, a rupture criterion or a forming limit curve (FLC). The latter is a feature of the forming limit diagram (FLD), typically used to evaluate the formability of sheet metals [START_REF] Keeler | Plastic instability and fracture in sheets stretched over rigid punches[END_REF][START_REF] Zhang | A review on modelling techniques for formability prediction of sheet metal forming[END_REF]. The FLC corresponds to the strain limits the material can sustain without failing. Essentially, the FLC separates the regions where the material may fail (above the FLC) from the regions where it is safe (below the FLC). With the purpose of a fair comparison between the heterogeneous mechanical tests, an FLC is virtually generated for each material. The adopted shape is a simplified version of real FLCs, based on typical FLCs and on the materials' strain hardening exponent 𝑛 (see Table 4.1) for the maximum value in the plane strain tension region (ε2/ε1 = 0), with ε1 and ε2, respectively, the major and minor strains [START_REF] Paul | Theoretical analysis of strain-and stress-based forming limit diagrams[END_REF]. Moreover, the virtually generated FLCs are characterized by a slope of -1.0 (left-hand side) and 0.5 (right-hand side), as presented in Figure 4.4. Reference data is also added in Figure 4.4 for comparison with the virtually generated FLCs [START_REF] Comsa | A new formulation of the MMFC to avoid the numerical instability[END_REF][START_REF] Ozturk | Effects of anisotropic yield functions on prediction of forming limit diagrams of DP600 advanced high strength steel[END_REF][START_REF] Pham | Experimental and numerical investigation of the formability of an ultra-thin copper sheet[END_REF]. Although not exact, the virtually generated FLCs are correct approximations of the reference data. Considering that the analysis is limited to monotonic loadings, a material point A is expected to present a linear strain path. Thus, if at time instant 𝑡 the material point A_t is located at a given position in the FLD, the strain path is linear, and the material point is expected to reach the FLC at A_∞. However, the purpose of using the FLC here is simply to know when it is reached. By numerically evaluating the vertical distance (evaluated on the major strain axis, for the same value of minor strain) from the material point A_t to the FLC, represented by the point B_t, it is possible to identify when the limit is reached. Consequently, this distance, represented by W_FLC, is numerically evaluated as

$$W_\mathrm{FLC} = \frac{\varepsilon_1^{\mathrm{A}_t}}{\varepsilon_1^{\mathrm{B}_t}}, \quad (4.1)$$

where $\varepsilon_1^{\mathrm{A}_t}$ and $\varepsilon_1^{\mathrm{B}_t}$ are, respectively, the major in-plane principal strain at the material point A_t and at the corresponding point B_t on the FLC (see Figure 4.4). Whenever the condition W_FLC > 1 is satisfied, the FLC is reached and the test stops.

(Figure 4.4: Virtually generated FLCs and reference data [START_REF] Comsa | A new formulation of the MMFC to avoid the numerical instability[END_REF][START_REF] Ozturk | Effects of anisotropic yield functions on prediction of forming limit diagrams of DP600 advanced high strength steel[END_REF][START_REF] Pham | Experimental and numerical investigation of the formability of an ultra-thin copper sheet[END_REF]. The position in the FLD of a material point A at the time instant 𝑡 is represented by A_t, while A_∞ is the position of A when it reaches the FLC on a linear strain path. Material point B_t represents the position used to evaluate the vertical distance from A_t to the FLC.)
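A minimal sketch of the virtually generated FLC and of the W_FLC indicator of Equation 4.1, assuming the simplified shape described above (apex ε1 = n at ε2 = 0, slopes -1.0 and 0.5); the function names are illustrative.

```python
import numpy as np

def flc_major_strain(eps2, n):
    """Major strain on the virtually generated FLC at a given minor
    strain: apex (0, n), slope -1.0 on the left, 0.5 on the right."""
    return np.where(eps2 < 0.0, n - eps2, n + 0.5 * eps2)

def w_flc(eps1, eps2, n):
    """Equation 4.1: ratio of the major strain at a material point A_t
    to the major strain of the FLC point B_t at the same minor strain.
    The test is stopped when max(w_flc) over the specimen reaches 1."""
    return eps1 / flc_major_strain(eps2, n)

# Point on a uniaxial-tension strain path (eps2/eps1 = -0.5), n = 0.18:
print(w_flc(eps1=0.15, eps2=-0.075, n=0.18))  # ~0.59, below the FLC
```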
Evaluation Metrics

As many heterogeneous mechanical tests already exist, selecting the best one is not a straightforward task. To aid in the decision or design process, studies tend to use standard metrics to evaluate the richness of the tests. This section presents the five metrics selected to evaluate heterogeneous mechanical tests: equivalent plastic strain, major and minor strains, major and minor stresses, stress triaxiality and Lode angle parameter, and rotation angle. Indeed, total strains can be experimentally measured, and the linear strain paths used for the determination of FLDs are easily represented through a strain-based diagram. However, some studies prefer a stress-based representation. To represent the six independent stress tensor components, it is worth using several invariants, such as the principal values, but also the hydrostatic pressure and the position in the deviatoric plane. For simplicity's sake, the definition of each metric already assumes a plane stress state formulation.

Equivalent Plastic Strain

The equivalent plastic strain $\bar{\varepsilon}^\mathrm{p}$ is a metric that quantifies the level of plastic strain reached during the tests and is given by

$$\bar{\varepsilon}^\mathrm{p} = \int_0^t \dot{\bar{\varepsilon}}^\mathrm{p} \, \mathrm{d}t, \quad (4.2)$$

where $\dot{\bar{\varepsilon}}^\mathrm{p}$ is the equivalent plastic strain rate (see Equation 2.19). This metric is relevant for metallic materials and in the context of sheet metal forming. Indeed, if more data is available for higher levels of equivalent plastic strain, the calibration accuracy is expected to improve. Concerning heterogeneous mechanical tests, it is beneficial to achieve high equivalent plastic strain levels in many regions. This way, the equivalent plastic strain can also be used as a measure of the heterogeneity of the tests, where a large region of the test presenting high levels of strain is desired, rather than strain localization in a reduced region.

Major and Minor Strains

The major and minor strains, respectively ε1 and ε2, with ε1 > ε2, are the principal values of the strain tensor 𝜺 calculated on the sheet plane as

$$\varepsilon_i = \frac{\varepsilon_{xx} + \varepsilon_{yy}}{2} \pm \sqrt{\left(\frac{\varepsilon_{xx} - \varepsilon_{yy}}{2}\right)^2 + \varepsilon_{xy}^2}, \quad \text{with } i = 1, 2, \quad (4.3)$$

where εxx, εyy, and εxy are the strain tensor components calculated in the global coordinate system. The combination of the major and minor strain values can characterize the strain state. This metric is typically analyzed using the major and minor strains diagram, as shown in Figure 4.5a. Then, as reference values of ε2/ε1 define typical mechanical states for isotropic materials (such as uniaxial tension defined by ε2/ε1 = -0.5 and shear by ε2/ε1 = -1, as listed in Table 4.2), the diagram identifies the strain states achieved. The major and minor strains diagram is also known as the FLD. Moreover, the major and minor strains are relevant to infer the diversity of strain states achieved. A primary characteristic of using the major and minor strains to evaluate heterogeneous mechanical tests is that they can be experimentally measured [START_REF] Zhang | Potential of the cross biaxial test for anisotropy characterization based on heterogeneous strain field[END_REF].
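A minimal numerical sketch of Equations 4.2 and 4.3: the equivalent plastic strain is accumulated with a simple rectangle rule (the increment scheme is an assumption), and the in-plane principal strains follow directly from the tensor components.

```python
import numpy as np

def accumulate_eq_plastic_strain(rate_history, dt):
    """Equation 4.2: time integration of the equivalent plastic strain
    rate, here with a rectangle rule over fixed time increments dt."""
    return np.cumsum(np.asarray(rate_history) * dt)

def principal_strains(exx, eyy, exy):
    """Equation 4.3: in-plane major and minor strains from the strain
    tensor components (exy is the tensor shear component)."""
    center = 0.5 * (exx + eyy)
    radius = np.sqrt((0.5 * (exx - eyy)) ** 2 + exy ** 2)
    return center + radius, center - radius  # eps1 >= eps2

eps1, eps2 = principal_strains(exx=0.10, eyy=-0.05, exy=0.02)
```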
Major and Minor Stresses

The major and minor stresses, respectively σ1 and σ2, with σ1 > σ2, are the principal values of the stress tensor 𝝈 calculated on the sheet plane as

$$\sigma_i = \frac{\sigma_{xx} + \sigma_{yy}}{2} \pm \sqrt{\left(\frac{\sigma_{xx} - \sigma_{yy}}{2}\right)^2 + \sigma_{xy}^2}, \quad \text{with } i = 1, 2, \quad (4.4)$$

where σxx, σyy, and σxy are the stress tensor components calculated in the global coordinate system. Analogously to the major and minor strains, the mechanical state analyzed through the stress tensor represents the stress state. This analysis is typically presented using the major and minor stresses diagram, as shown in Figure 4.5b. Similarly to the major and minor strains, reference values of σ2/σ1 represent typical mechanical states (such as uniaxial tension represented by σ2/σ1 = 0 and shear by σ2/σ1 = -1, as listed in Table 4.2). These last two metrics are commonly used to evaluate the distribution of mechanical states, either through a strain- or a stress-based representation. Because the strain fields are obtained from experiments, a strain-based representation is usually associated with experimental data [START_REF] Belhabib | Heterogeneous tensile test on elastoplastic metallic sheets: comparison between FEM simulations and full-field strain measurements[END_REF][START_REF] Güner | Characterization of anisotropy of sheet metals employing inhomogeneous strain fields for Yld2000-2D yield function[END_REF][START_REF] Robert | Identification of hardening parameters using finite element models and full-field measurements: Some case studies[END_REF]; Souto et al. 2015a; [START_REF] Aquino | Design of heterogeneous mechanical tests: Numerical methodology and experimental validation[END_REF][START_REF] Wang | Identification of 7B04 aluminum alloy anisotropy yield criteria with conventional test and Pottier test at elevated temperature[END_REF][START_REF] Marek | Experimental validation of the sensitivity-based virtual fields for identification of anisotropic plasticity models[END_REF]. On the other hand, a stress-based representation is more associated with numerical data, because it cannot be directly obtained from experiments [START_REF] Cooreman | Identification of mechanical material behavior through inverse modeling and DIC[END_REF]; Avril et al. 2008c; [START_REF] Kim | Determination of anisotropic plastic constitutive parameters using the virtual fields method[END_REF]; Rossi et al. 2016b; [START_REF] Jones | Parameter covariance and non-uniqueness in material model calibration using the virtual fields method[END_REF][START_REF] Barroqueiro | Design of mechanical heterogeneous specimens using topology optimization[END_REF][START_REF] Fu | A method for the simultaneous identification of anisotropic yield and hardening constitutive parameters for sheet metal forming[END_REF][START_REF] Thoby | Robustness of specimen design criteria for identification of anisotropic mechanical behaviour from heterogeneous mechanical fields[END_REF]. Studies considering both representations are limited [START_REF] Schmaltz | Comparison of different biaxial tests for the inverse identification of sheet steel material parameters[END_REF][START_REF] Martins | Calibration of anisotropic plasticity models using a biaxial test and the virtual fields method[END_REF].
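The same Mohr's circle construction gives the principal stresses of Equation 4.4; a minimal sketch follows, where the σ2/σ1 ratio can then be matched against the Table 4.2 reference values.

```python
import numpy as np

def principal_stresses(sxx, syy, sxy):
    """Equation 4.4: in-plane major and minor stresses from the plane
    stress tensor components in the global coordinate system."""
    center = 0.5 * (sxx + syy)
    radius = np.sqrt((0.5 * (sxx - syy)) ** 2 + sxy ** 2)
    return center + radius, center - radius  # sigma1 >= sigma2

s1, s2 = principal_stresses(sxx=200.0, syy=0.0, sxy=0.0)
print(s2 / s1)  # 0.0 -> uniaxial tension (see Table 4.2)
```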
Stress Triaxiality and Lode Angle Parameter

The stress triaxiality and the Lode angle parameter are two dimensionless indicators that can characterize a mechanical state. Both represent the stress tensor 𝝈 through stress invariants and have been used in studies related to ductile fracture [START_REF] Bai | A new model of metal plasticity and fracture with pressure and Lode dependence[END_REF][START_REF] Danas | Influence of the Lode parameter and the stress triaxiality on the failure of elasto-plastic porous materials[END_REF][START_REF] Kiran | A triaxiality and Lode parameter dependent ductile fracture criterion[END_REF]. Nevertheless, their interest for evaluating the richness of heterogeneous mechanical tests has not yet been investigated. The stress triaxiality 𝜂 represents the distribution of stress over the three orthotropic axes and can be given by

$$\eta = \frac{\sigma_\mathrm{h}}{\bar{\sigma}_\mathrm{VM}} = \frac{\sigma_{xx} + \sigma_{yy}}{3\sqrt{\sigma_{xx}^2 + \sigma_{yy}^2 - \sigma_{xx}\sigma_{yy} + 3\sigma_{xy}^2}}, \quad (4.5)$$

where σh is the hydrostatic stress, and $\bar{\sigma}_\mathrm{VM}$ is the von Mises equivalent stress. The range of stress triaxiality values is -∞ < 𝜂 < +∞. However, in the case of a plane stress state, the range is reduced to -2/3 ≤ 𝜂 ≤ 2/3. Additionally, stress triaxiality values of 𝜂 > 0 characterize tensile states, and values of 𝜂 < 0 characterize compressive states. The Lode angle 𝜃, or the Lode parameter 𝐿, relates to the third invariant of the deviatoric stress tensor, 𝐽3, normalized by the von Mises equivalent stress $\bar{\sigma}_\mathrm{VM}$, through

$$L = -\cos(3\theta) = -\frac{27}{2}\frac{J_3}{\bar{\sigma}_\mathrm{VM}^3}, \quad (4.6)$$

where 𝐽3 is given by

$$J_3 = -\frac{1}{27}\left(\sigma_{xx} - 2\sigma_{yy}\right)\left(\sigma_{yy} - 2\sigma_{xx}\right)\left(\sigma_{xx} + \sigma_{yy}\right). \quad (4.7)$$

The range of the Lode angle is 0 ≤ 𝜃 ≤ 𝜋/3, and the Lode parameter's range is -1 ≤ 𝐿 ≤ 1. Additionally, the Lode angle 𝜃 can be normalized as

$$\bar{\theta} = 1 - \frac{2}{\pi}\arccos\left(\frac{27}{2}\frac{J_3}{\bar{\sigma}_\mathrm{VM}^3}\right), \quad (4.8)$$

where $\bar{\theta}$ is the Lode angle parameter, of range -1 ≤ $\bar{\theta}$ ≤ 1. A diagram is typically used to represent the stress triaxiality and the Lode angle parameter, where the latter is represented on the horizontal axis and the former on the vertical axis. Similarly to the two previous metrics, reference values of 𝜂 and $\bar{\theta}$ represent typical mechanical states, such as uniaxial tension by 𝜂 = 1/3 and $\bar{\theta}$ = 1, and shear by 𝜂 = $\bar{\theta}$ = 0 (see Figure 4.5c and Table 4.2). Moreover, the relationship between 𝜃 and 𝜂 in a plane stress state (σzz = 0) is also represented in Figure 4.5c by the plane stress curve defined by

$$\cos(3\theta) = \cos\left[\frac{\pi}{2}\left(1 - \bar{\theta}\right)\right] = -\frac{27}{2}\eta\left(\eta^2 - \frac{1}{3}\right). \quad (4.9)$$

Table 4.2: Reference values of the metrics characterizing typical mechanical states.

Mechanical state | ε2/ε1 | σ2/σ1 | η and θ̄
Equibiaxial compression | ε2/ε1 = 1 (ε1 < 0, ε2 < 0) | σ2/σ1 = 1 (σ1 < 0, σ2 < 0) | η = -2/3, θ̄ = 1
Plane strain compression | ε1 = 0, ε2 < 0 | σ2/σ1 = ν (σ1 < 0, σ2 < 0) | η = -√3/3, θ̄ = 0
Uniaxial compression | ε2/ε1 = -2 (ε1 > 0, ε2 < 0) | σ1 = 0, σ2 < 0 | η = -1/3, θ̄ = -1
Shear | ε2/ε1 = -1 (ε1 > 0, ε2 < 0) | σ2/σ1 = -1 (σ1 > 0, σ2 < 0) | η = 0, θ̄ = 0
Uniaxial tension | ε2/ε1 = -0.5 (ε1 > 0, ε2 < 0) | σ2/σ1 = 0 (σ1 > 0, σ2 = 0) | η = 1/3, θ̄ = 1
Plane strain tension | ε2/ε1 = 0 (ε1 > 0, ε2 = 0) | σ2/σ1 = ν (σ1 > 0, σ2 > 0) | η = √3/3, θ̄ = 0
Equibiaxial tension | ε2/ε1 = 1 (ε1 > 0, ε2 > 0) | σ2/σ1 = 1 (σ1 > 0, σ2 > 0) | η = 2/3, θ̄ = -1
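A minimal sketch of Equations 4.5 to 4.8 for plane stress, written in terms of the in-plane principal stresses (Equation 4.7 is the shear-free form, so principal values are used); the normalization factor 27/2 is assumed so that the Lode parameter stays within [-1, 1].

```python
import numpy as np

def triaxiality_lode(s1, s2):
    """Equations 4.5-4.8 under plane stress, evaluated from the
    in-plane principal stresses (Equation 4.7 holds for a shear-free
    state, so the principal values are used here)."""
    svm = np.sqrt(s1**2 + s2**2 - s1 * s2)       # von Mises stress
    eta = (s1 + s2) / (3.0 * svm)                # Eq. 4.5
    j3 = -(s1 - 2.0 * s2) * (s2 - 2.0 * s1) * (s1 + s2) / 27.0  # Eq. 4.7
    xi = np.clip(13.5 * j3 / svm**3, -1.0, 1.0)  # 27/2 * J3 / svm^3
    theta_bar = 1.0 - 2.0 / np.pi * np.arccos(xi)  # Eq. 4.8
    return eta, theta_bar

print(triaxiality_lode(1.0, 0.0))   # uniaxial tension: (1/3, 1)
print(triaxiality_lode(1.0, -1.0))  # shear: (0, 0)
print(triaxiality_lode(1.0, 1.0))   # equibiaxial tension: (2/3, -1)
```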
Formulation of Rotation Angle

A metric able to evaluate the test's sensitivity to anisotropy has not yet been fully explored in the evaluation of heterogeneous mechanical tests. A possible representation of this behavior is based on the principal angle between the principal stress base and the material orthotropic axes. The formulation is derived from Mohr's circle equations for a plane stress state and is given by

$$\tan(2\beta) = \frac{2\sigma_{xy}}{\sigma_{xx} - \sigma_{yy}}, \quad (4.10)$$

where 𝛽 is a principal angle measured in Mohr's circle. Because the tangent function is 𝜋-periodic, satisfying tan(2𝛽) = tan[2(𝛽 + 𝜋/2)], there are two principal directions for which Equation 4.10 is fulfilled. These principal directions are 𝛽 and 𝛽 + 𝜋/2, which are perpendicular. These two principal directions, represented by 𝛽1 and 𝛽2, are, respectively, the principal angles of the major and minor principal stresses. Knowing the values of the stress tensor components, 𝛽 corresponds to the angle from the material coordinate system 𝑥′ axis to the closest principal stress axis. For this reason, it is impossible to know from its value alone whether the principal angle is associated with the major or the minor principal stress axis. Additionally, the values of 2𝛽 range from -90° to 90° in Mohr's circle, so that 𝛽 varies from -45° to 45° in the material coordinate system. [START_REF] Martins | Calibration of anisotropic plasticity models using a biaxial test and the virtual fields method[END_REF] applied this formulation, presenting the principal angle related to the major principal stress axis, to show that a cruciform specimen exhibited a relatively broad distribution of the principal angle. However, this formulation ignores the case when the mechanical state is predominantly compressive in Mohr's circle, by only considering the principal angle related to the major principal stress axis. It also yields different values of 𝛽 depending on the shear loading direction. Moreover, the presented range of values (from -45° to 135°) is not readily associated with the material orientation from the RD, typically evaluated between 0° and 90° for sheet metals. Therefore, a metric based on the principal angle's formulation is suggested to evaluate the tests' sensitivity to anisotropy. It considers the maximum principal stress in absolute value, and the range of material orientations typically used to calibrate the material's anisotropic behavior. This metric is named rotation angle, represented by 𝛾, and refers to the principal direction associated with the maximum principal stress in absolute value. Its values range from 0° to 90°, and it is given by

$$\gamma = \begin{cases} 45 & \text{if } \sigma_{xx} = \sigma_{yy} \text{ and } \sigma_{xy} \neq 0 \\ 45(1 - q) + q|\beta| & \text{otherwise} \end{cases}, \quad (4.11)$$

where 𝛽 is the principal angle (in degrees) calculated from Equation 4.10, and 𝑞 is an integer of value -1 or 1, calculated as

$$q = \frac{\sigma_{xx} - \sigma_{yy}}{\left|\sigma_{xx} - \sigma_{yy}\right|} \, \frac{|\sigma_1| - |\sigma_2|}{\left||\sigma_1| - |\sigma_2|\right|} = \operatorname{sign}(\sigma_{xx} - \sigma_{yy}) \operatorname{sign}(|\sigma_1| - |\sigma_2|). \quad (4.12)$$

Conditions 1 and 2 are schematically illustrated in Figure 4.7. Additionally, a test that demonstrates good sensitivity to material anisotropy should exhibit material points in the complete range of the rotation angle. Ideally, the material points reach the plastic regime and are equally distributed in the range of 0° to 90°.
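A minimal sketch of Equations 4.10 to 4.12 for a single material point; the handling of the degenerate equibiaxial case (σxx = σyy, σxy = 0), where 𝛾 is undefined, is an assumed convention.

```python
import numpy as np

def rotation_angle(sxx, syy, sxy):
    """Equations 4.10-4.12: rotation angle gamma (degrees, 0 to 90) of
    the largest-magnitude principal stress axis from the material axis."""
    if np.isclose(sxx, syy):
        # Eq. 4.11, first branch; for the purely equibiaxial point
        # (sxy = 0) gamma is undefined, 45 is an assumed convention.
        return 45.0
    beta = 0.5 * np.degrees(np.arctan(2.0 * sxy / (sxx - syy)))  # Eq. 4.10
    radius = np.hypot(0.5 * (sxx - syy), sxy)
    s1, s2 = 0.5 * (sxx + syy) + radius, 0.5 * (sxx + syy) - radius
    q = np.sign(sxx - syy) * np.sign(abs(s1) - abs(s2))          # Eq. 4.12
    return 45.0 * (1.0 - q) + q * abs(beta)                      # Eq. 4.11

print(rotation_angle(200.0, 0.0, 0.0))   # tension along x': gamma = 0
print(rotation_angle(0.0, -200.0, 0.0))  # compression along y': gamma = 90
```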
Numerical Evaluation

This section evaluates the heterogeneous mechanical tests A, B, C, and D, considering the three different materials, specifically Cu, DP600, and AA2090-T3. The evaluation is performed using the metrics described above, namely: the equivalent plastic strain, major and minor strains, major and minor stresses, stress triaxiality and Lode angle parameter, and rotation angle. The tests present linear strain paths, without strain-path changes, thus the time instant just before necking defines the maximum heterogeneity of the test. Moreover, this time instant is representative of the heterogeneity observed throughout the test (Souto et al. 2015b). For that reason, the results are exclusively evaluated for the time instant where the FLC is reached, in other words, when W_FLC = 1 (see Equation 4.1). Table 4.3 summarizes the displacements obtained at the FLCs of each material for every test.

Equivalent Plastic Strain

The equivalent plastic strain distribution is evaluated using the deformed shape, where the equivalent plastic strain levels are represented through a color map (see Figure 4.8). The finite element mesh is not represented in the shapes, as it would add significant visual noise due to the fine mesh, and the color map would not be visible. Additionally, a distinction is made in the representations between material points in the elastic and plastic regimes, with the latter represented through the color scale and the former in a single color. This distinction between regimes enables the visualization to focus on the material points in the plastic regime, which are preferable, rather than on the material points in the elastic regime. Also, this representation provides a visual outline of the mechanical solicitation; and, as will be shown ahead, it is used to connect the metrics. The comparison of equivalent plastic strain levels between materials is of little interest, as the observed differences are intrinsically related to the material itself. Instead, it is more interesting to analyze differences between tests for the same material. In this regard, the test presenting the highest equivalent plastic strain levels is, in general, test D, for the three materials. Concerning the equivalent plastic strain contour, the variation in the amount of material points under the plastic regime between materials is the lowest for Cu and the highest for DP600. Therefore, more mechanical information is retrieved from the latter before rupture, which could be related to the strain hardening behavior. Indeed, DP600 exhibits the highest strain hardening rate (see Figure 4.2), and Cu the lowest, leading rapidly to a high localization of deformation. An important aspect is also the homogeneity of deformation. The deformation in test A mainly localizes between adjacent arms, while most of the specimen's area is under low deformation levels. For test B, the deformation is localized in the vertically centered region, particularly in the central region and close to the free edges. Although slight differences are observed between materials, test B presents intermediate to high levels of deformation in this region, while the rest of the specimen presents low equivalent plastic strain levels. In opposition, test C presents two primary regions where deformation localizes: one at the left vertically centered region and another at the right-hand side curvature. Low equivalent plastic strain levels are observed between these two regions, while the rest of the specimen is in the elastic regime or close to it. The distinct regions of localized deformation in tests B and C can perhaps be associated with different mechanical states. Lastly, the deformation in test D appears to localize at the right-hand side curvature close to the mechanical grips, and distinct regions present low to intermediate equivalent plastic strain levels. As the analysis of the equivalent plastic strain is more related to the amount and homogeneity of strain, the following metrics will evaluate the diversity and richness of mechanical states.
Nevertheless, each metric is associated with the equivalent plastic strain through a color map.

Major and Minor Strains

The evaluation of the major and minor strains is performed through the strain-based diagram, where a circular marker represents each material point, colored by the corresponding level of equivalent plastic strain (see Figure 4.9). In addition, the diagrams are presented with both axes normalized by an arbitrary value, to reduce the visual noise of using different axes limits. This value is specific to each material and is identified in each diagram's bottom left corner. The major and minor strains results show that the strain states are mainly located between uniaxial and plane strain tension for tests A and B. In the former, for AA2090-T3, the strain states tend to uniaxial tension, while for DP600, they tend to plane strain tension. Although some material points with low equivalent plastic strain levels are observed in the equibiaxial tension region, it is surprising that a biaxial test presents so few material points in this region. In test B, the two regions of high equivalent plastic strain levels are identified, particularly for Cu and DP600, where two peaks of material points are formed, one in the specimen center and another along the free edge. Tests C and D present similar strain state distributions, ranging from plane strain compression to plane strain tension. However, the material points distribution in shear is denser for test C than for test D, while the opposite is observed in uniaxial compression, even though with low, but not zero, levels of equivalent plastic strain. Concerning test C, the single peak with high equivalent plastic strain levels can suggest that the two main regions previously identified present similar uniaxial tension strain states, therefore not providing additional mechanical information.

Major and Minor Stresses

Similarly to the major and minor strains, the evaluation of the major and minor stresses is performed through the stress-based diagram (see Figure 4.10). The same visualization characteristics are employed, with colored circular markers and normalized axes, which are valuable in this representation, as the magnitude of stress values varies greatly between materials. The first observation on the major and minor stresses diagram is that material points under or close to the elastic regime are more easily identified than in the strain-based diagram. The opposite is observed for material points under the plastic regime. This observation is explained by the natural shape of the typical stress-strain curve of metallic sheets. The curve typically presents an accentuated slope in the elastic regime (proportional to Young's modulus), enhancing stress differences. On the contrary, the curve typically presents a low slope in the plastic regime, enhancing strain differences. This observation is particularly evident for test A, and with less expression for tests C and D. Concerning test B, most material points are close to the uniaxial tension region, presenting the lowest range of stress states of all tests. Between tests C and D, the major and minor stresses diagram is useful to accentuate differences from equibiaxial compression to shear.
Stress Triaxiality and Lode Angle Parameter

The last metric evaluating the diversity of mechanical states is the stress triaxiality and Lode angle parameter diagram, presented in Figure 4.11. Overall, this representation appears to convey less information than the two previous ones, because only a fraction of the material points is visible, even though an equal amount of information is represented. Because of the plane stress formulation, it was expected that all material points fall on the plane stress curve. Even in a 3D analysis, the same would be expected for most material points, because the stress tensor component σ_zz is generally negligible up to necking, and perhaps beyond for small thicknesses (see Chapter 7). Therefore, this metric can be more useful in studies of rupture or failure of metallic sheets, because it gives greater importance to σ_zz. It can also be a good way to verify whether the plane stress conditions hold in a 3D analysis. The levels of stress triaxiality in test A indicate that all material points are under a tensile state. For DP600, the full range of states from uniaxial to equibiaxial tension is achieved with material points in the plastic regime. On the other hand, for Cu and AA2090-T3, a higher range of stress states with high equivalent plastic strain levels is observed. In relation to tests B, C, and D, it is readily observed that these tests achieve both tensile and compressive stress states. Test C lacks material points in shear under the plastic regime, as well as from equibiaxial to uniaxial compression. Finally, it is seen that test D is the test achieving the highest range of stress states, from equibiaxial compression to equibiaxial tension, with material points in both regimes. In general, this diagram replicates the major and minor stresses diagram through a different stress tensor representation. While the former represents the decomposition of the hydrostatic and deviatoric stress contributions, the latter stands for the principal values. Nevertheless, because all the material points are represented on top of the plane stress curve, this metric can complement the previous two, providing an instant understanding of the stress states achieved.

Rotation Angle

Finally, the rotation angle results are presented in Figure 4.12 through a histogram of area densities. The histogram divides the entire range of rotation angle values into 90 equal intervals, each representing the proportional amount of the specimen's area. Using the area density minimizes the influence of the finite element mesh on the results. Each interval is also discretized by the levels of equivalent plastic strain, similarly to the material points of the previous metrics. Test A presents material points in the full range of the rotation angle (both elastic and plastic regimes), indicating a good sensitivity of the test to anisotropy. However, material points presenting high equivalent plastic strain levels are only located between 30° and 60°. It can be noticed that the rotation angle distribution is centered close to 0° for tests B, C and D, which can be related to the material orientation. When changing the material orientation, mainly a shift of the distribution is noticed, but also some spreading of the distribution, especially at 45°, as will be shown ahead. Test B presents the smallest range of values for all materials, exclusively from 0° to 45°.
Most material points are also located between 0° and 15°, indicating low sensitivity to anisotropy for a given loading direction. Specifically, for Cu, the density of material points presenting rotation angle levels higher than 15° is low, and these points present low levels of equivalent plastic strain. On a positive note, for AA2090-T3 and DP600, the density of material points with rotation angle values higher than 15° tends to increase. Both tests C and D achieve the full range of rotation angle values. However, from 45° to 90°, the material points are in the elastic regime or close to it. The ability to cover a high range of values means the density of values will be more evenly distributed. Particularly for test C, this remark is reflected in the good density of material points in the central region of the histogram, despite the low density values (lower than 5 %). Nevertheless, test C still exhibits the largest amount of material points located close to 0°, but material points with high levels of equivalent plastic strain are observed up to 30°. In test D, the rotation angle values are not as equally distributed as in test C, because the histogram's first interval (0° to 1°) aggregates more than 10 % of the specimen's total area. Even so, the density and equivalent plastic strain levels of material points between 0° and 30° are of the same order as for test C. Ultimately, test A is the one that presents the highest range of rotation angle levels for material points in the plastic regime. Therefore, it can be considered the test most sensitive to anisotropy. However, as test A is a biaxial loading test, it is naturally advantaged relative to the other tests. Test C presents the most even distribution of the four tests, while achieving values up to 45° under the plastic regime. To demonstrate how the histograms would change for a different material orientation, Figure 4.13 represents the rotation angle histograms of test C and AA2090-T3 for material orientations of 0°, 45°, and 90° from the RD. The first case is similar to the one found in Figure 4.12. Concerning the other two cases, it can be observed that for a material orientation of 90° from the RD, the histogram is approximately mirrored relative to the one of 0° from the RD. It is not an exact copy, because the material behavior changes with material orientation, but the general distribution is similar. In the case of 45° from the RD, it is observed that the region with a higher area density of material points lies between 45° and 60° of the rotation angle. Here, it is expected that most material points present rotation angle values of the same order as the material orientation angle. Additionally, the distribution of material points is more spread from 0° to 90° of the rotation angle, rather than concentrating near the material orientation angle. The transformation is more complex than a simple shift in the distribution. However, it is possible to approximately anticipate how the transformation will occur. These results reinforce the idea that at least two tests with different material orientations are needed to fully characterize the anisotropic behavior of sheet metals. From a global perspective, it appears that these tests individually cannot fully characterize the material's strain hardening and anisotropic behavior. Nevertheless, test D could better characterize the strain hardening than the other tests due to higher equivalent plastic strain levels and diversity of mechanical states.
However, test B presents a more homogeneous distribution of deformation and can also be considered a suitable choice for the strain hardening characterization, even if the range of mechanical states is reduced. Regarding the anisotropic behavior of the materials, the rotation angle suggests that test A can better characterize this behavior. However, this test requires a nonstandard testing machine, unlike the other tests, which can be performed on a universal testing machine. Given the possibility of performing two tests, combining different material orientations, such as 0° and 90°, of test D or C can increase the anisotropic information observed in the rotation angle.

Conclusions

In this study, the richness of heterogeneous mechanical tests is evaluated using five metrics. The analysis considers four heterogeneous mechanical tests and three virtual materials, providing additional validation of each test's potential and demonstrating the relevance of evaluating tests for different materials. The performed evaluation shows that the equivalent plastic strain is relevant to evaluate the levels of strain achieved and the homogeneity of deformation. The diversity of the mechanical states is evaluated using the major and minor strains, the major and minor stresses, and the stress triaxiality and Lode angle parameter. Although these metrics all represent the strain and stress tensors, each one presents characteristics that complement the others. However, the simultaneous use of the three can be considered excessive. To avoid redundant representations, the use of the major and minor strains should be a priority, because they can be experimentally measured. Additionally, the major and minor stresses are an excellent complement to the major and minor strains and should be used whenever possible. In contrast, the use of the stress triaxiality and Lode angle parameter can be considered optional, as the amount of visible information is lower than in the other two. Lastly, the rotation angle is proposed to evaluate the tests' sensitivity to anisotropy. Through its use, it is possible to assess the range of material orientations achieved. It is expected that this metric can be useful to evaluate the potential of heterogeneous mechanical tests to calibrate the material's anisotropic behavior. A test achieving a high range of mechanical states, from equibiaxial compression to equibiaxial tension, was identified. Moreover, the biaxial loading test leads to the lowest equivalent plastic strain levels, but is also the test most sensitive to anisotropy. Nevertheless, two uniaxial loading tests also show great potential to characterize the material's anisotropic behavior, if two tests of different material orientations are combined. In summary, this study provides a basis to evaluate heterogeneous mechanical tests using various metrics. Hopefully, it can contribute to the development of heterogeneous mechanical tests by considering the materials' anisotropic behavior and providing a basis to evaluate their richness. Future studies should validate these findings by applying the suggested metrics to real experiments, and by calibrating constitutive models using the selected tests.

Efficiency of Optimization Algorithms

Introduction

The mechanical behavior of sheet metals is usually sensitive to strain, strain rate, and temperature effects.
Recently, due to the growth of heat-assisted manufacturing processes (Karbasian and Tekkaya 2010), the effects of strain rate and temperature have gained more impact. Therefore, being able to precisely forecast the mechanical response of sheet metals, under a broad variety of strain rates and temperatures, is increasingly essential for advanced manufacturing processes. To predict such behavior, thermoelastoviscoplastic constitutive models, characterized by their nonlinearity and many material parameters, can be used. However, the traditional calibration of such models requires an extensive database, with tests performed at various strain rates and temperatures (Markiewicz et al.). The use of full-field measurement techniques, heterogeneous mechanical tests, and inverse methods has reduced the number of required experimental tests (Kajberg and Wikman). Applying full-field measurement techniques, such as DIC, the entire displacement field on the specimen's surface can be recorded and directly used to calibrate a constitutive model. The information obtained from these tests can be further enriched using temperature measurements. The material parameters can be identified using inverse methods, such as the FEMU (Kavanagh and Clough 1971). Solely using full-field measurements with inverse methodologies does not guarantee that suitable material parameters are found. An automatic strategy using optimization algorithms is required to find these material parameters. However, most studies related to the calibration of constitutive models tend to overlook the importance of optimization algorithms by resorting to familiar ones, such as gradient-based least-squares algorithms (Cooreman et al.; Coppieters et al.; Souto et al. 2015b). These algorithms may perform well and be suitable for nonlinear least-squares problems, such as the ones formulated using the mentioned inverse methodologies, but may also present disadvantages. A few studies have explored the use of other optimization algorithms and strategies, such as direct-search and stochastic methods (de-Carvalho et al.; Coelho et al.; Kowalewski et al.). This study aims to implement different optimization algorithms in the calibration of a thermoelastoviscoplastic constitutive model. Three heterogeneous thermomechanical tests performed at different average strain rates are used in this study. This application is selected because it is considered challenging.
The FEMU is used as the inverse methodology, and three optimization algorithms are applied in the optimization procedure. The results obtained with the different optimization algorithms are compared in terms of efficiency and robustness.

Finite Element Model Updating Method

The finite element model updating (FEMU) method is used to identify the material parameters of the constitutive model (Kavanagh and Clough 1971). This method is based on the simple idea of iteratively adjusting the unknown material parameters to minimize the difference between reference and FEA numerical results. The FEMU has been largely adopted in many applications, partly because of its ease of implementation and flexibility (Pottier et al.; Prates et al.). In fact, the FEMU mainly requires the finite element method, which is already widely spread across the scientific community. However, the FEMU requires a finite element model that accurately reproduces the real experiments and their boundary conditions (Kacem et al.). In addition, the high computational cost of the FEMU is a major disadvantage compared to more computationally efficient methods, such as the VFM (Martins et al. 2018a). The FEMU employs an objective function to be minimized, which can be composed of different data, such as the strain tensor components, the displacement components, or the load. This flexibility has contributed to an increase in the number of formulations presented in the literature (Lin et al.; Cao et al.; Andrade-Campos et al.). Recently, the FEMU has been mainly used in combination with full-field measurements (Cooreman et al.; Souto et al. 2015b; Henriques et al.). In that regard, the adopted objective function φ(ξ), with ξ = (A, B, n, m, C) the vector of material parameters to be identified, includes the three tests with equal weights and can be decomposed into two separate terms as

φ(ξ) = (1/3) Σ_{i=1}^{3} [φ_1(ξ) + φ_2(ξ)]_i, (5.1)

where the first term φ_1 compares the reference and numerical in-plane strain fields, and n_t and n_p are, respectively, the number of time instants and in-plane measurement points. To distinguish between reference and numerical variables, the superscripts "ref" and "num" are used, respectively. The variable ε^ref_max represents the maximum strain value over all in-plane components and time instants. Because the displacement field represents the raw data, the strain field is computed from the displacement field using a total Lagrangian formulation. The reference strain field is computed before the calibration procedure, and the updated strain field is computed after every finite element simulation from the extracted numerical displacement field.
The second term φ_2 consists of the difference between the reference and numerical loads, normalized by the maximum load value of all time instants F^ref_max, as

φ_2(ξ) = (1/n_p) Σ_{j=0}^{n_p} [(F^num(ξ) − F^ref)/F^ref_max]²_j. (5.3)

Optimization Algorithms

The optimization algorithms are essential in the FEMU to minimize the objective function (see Equation 5.1) through an iterative procedure. In this study, different algorithms are used to evaluate their performance. The optimization procedure is implemented in the Python programming language, using the optimization algorithms from the SciPy library (Virtanen et al. 2020; SciPy 2022b). This library provides several optimization algorithms (e.g., gradient-based, stochastic), enabling easy implementation in user programs. Three optimization algorithms of different types were selected for the optimization procedure: the Levenberg-Marquardt, the Nelder-Mead, and the Differential Evolution algorithms.

Levenberg-Marquardt Algorithm

The Levenberg-Marquardt (LM) is a gradient-based algorithm that uses the approximated Hessian and Jacobian matrices (Levenberg 1944; Marquardt 1963). It is widely used to calibrate constitutive models (Güner et al.; Marek et al.; Martins et al.) because it is well suited for solving nonlinear least-squares problems. The LM algorithm requires the user to select an initial solution and, if not well chosen, it can lead to convergence difficulties or to local minima instead of the global minimum. Nevertheless, if the problem is well-conditioned and a suitable initial solution x_0 is selected, the algorithm can converge rapidly. For an iteration i, the LM update can be written as

[H(x_i) + λ diag(H(x_i))] h_i = −[J(x_i)]^T r(x_i), (5.4)

where H(x_i) is the Hessian matrix, λ is the damping factor, h_i is the increment of the identification parameters, and r(x_i) is the corresponding residuals vector of the objective function φ. Note that r(x_i) contains as many rows as the number of tests, time instants, and measurement points, for each strain component and load. This is one of the main advantages of the LM algorithm, because it quantifies the influence of each row on the objective function. The damping factor is particularly important in the LM algorithm, because it creates a trust region around the solution. For low values of λ, the algorithm approaches a Gauss-Newton algorithm, while for high values it approaches the steepest descent algorithm. The Hessian matrix H(x_i) can be approximated by

H(x_i) = [J(x_i)]^T J(x_i), (5.5)

where J(x_i) is the Jacobian matrix representing the derivatives of φ with respect to x_i. In the scope of this study, J is calculated by forward finite differences. Finally, the new solution x_{i+1} is obtained by

x_{i+1} = x_i + h_i. (5.6)

More details on the LM algorithm implementation used in this study can be found in Moré (1978) and SciPy (2022d).
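As a minimal, hedged sketch of how such a least-squares identification loop could be driven with SciPy (the residual function below is a synthetic stand-in for the FEA-based evaluation; the parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params):
    # Stand-in for the real residual vector: in the actual procedure this
    # would run the FEA, extract the displacement field, compute strains,
    # and stack the normalized strain and load residuals of Eqs. 5.1-5.3.
    target = np.array([205.21, 1124.0, 0.092, 1.36, 0.05])  # A, B, n, m, C
    return (params - target) / target

x0 = np.array([300.0, 900.0, 0.10, 1.0, 0.03])  # illustrative initial solution
res = least_squares(residuals, x0, method="lm", diff_step=1e-3)
print(res.x, 2.0 * res.cost)  # res.cost is half the sum of squared residuals
```

Note that the "lm" method does not support bounds, which is consistent with the parameter transformation adopted later in this chapter.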
However, it should be noted that this implementation within SciPy does not provide much flexibility, because it is essentially a wrapper of the MINPACK project in Fortran (Moré et al. 1984).

Nelder-Mead Algorithm

The Nelder-Mead (NM) is one of the best known and simplest direct-search algorithms used in unconstrained optimization (Nelder and Mead 1965). The NM algorithm employs a simplex with one vertex per identification parameter, plus one. In practice, each vertex corresponds to a given solution. Based on a series of transformations of the simplex, the algorithm iteratively reduces the simplex size. The algorithm is known to achieve satisfactory results in few iterations, but may also present convergence problems. Compared to gradient-based algorithms, this algorithm stands out for not requiring derivatives. At each iteration i, the simplex vertices x^i_1, x^i_2, ..., x^i_{n+1}, with n the number of material parameters, are ordered according to their objective function values as

φ(x^i_1) ≤ φ(x^i_2) ≤ ... ≤ φ(x^i_{n+1}), (5.7)

where x^i_1 represents the best vertex and, consequently, x^i_{n+1} the worst. Additionally, let us consider that x^i_o is the centroid of the n best vertices, given by

x^i_o = (1/n) Σ_{j=1}^{n} x^i_j. (5.8)

After the vertices are ordered and the centroid calculated, the algorithm uses four main transformations to improve the simplex: reflection, expansion, contraction, and shrink. Each is associated with one of four scalar parameters ω_r, ω_e, ω_c, and ω_s.

Reflection

The first and mandatory transformation, named reflection, is performed by finding a new vertex x^i_r that reflects x^i_{n+1} through x^i_o as

x^i_r = x^i_o + ω_r (x^i_o − x^i_{n+1}). (5.9)

Then, the objective function value of x^i_r is calculated and, if φ(x^i_1) ≤ φ(x^i_r) ≤ φ(x^i_n), a new simplex is obtained where x^{i+1}_{n+1} = x^i_r. However, if the latter condition is not observed, additional transformations can be performed.

Expansion

If φ(x^i_r) < φ(x^i_1), the reflected vertex is expanded further away from the centroid as

x^i_e = x^i_o + ω_e (x^i_r − x^i_o), (5.10)

and the new simplex takes x^{i+1}_{n+1} = x^i_e if the expanded vertex is better than the reflected one, or x^{i+1}_{n+1} = x^i_r otherwise.

Contraction

If φ(x^i_n) ≤ φ(x^i_r) < φ(x^i_{n+1}), an outside contraction is performed, where the reflected point is contracted towards the centroid as

x^i_oc = x^i_o + ω_c (x^i_r − x^i_o). (5.11)

If the new vertex is better than the reflected vertex, then x^{i+1}_{n+1} = x^i_oc. Instead, if φ(x^i_r) ≥ φ(x^i_{n+1}), an inside contraction is performed as

x^i_ic = x^i_o − ω_c (x^i_r − x^i_o), (5.12)

where the new simplex will have x^{i+1}_{n+1} = x^i_ic if the latter is better. However, if the contraction transformation does not yield a better solution, a last transformation is required.

Shrink

Finally, the shrink transformation is performed as a last resort, in case contraction fails. Here, a new simplex is obtained by replacing all the vertices, except x^i_1, with new vertices closer to the best solution as

x^{i+1}_j = x^i_1 + ω_s (x^i_j − x^i_1). (5.13)

The operations and transformations of the NM algorithm are schematically illustrated in Figure 5.1 for a simplex of size n = 2, considering ω_r = 1, ω_e = 2, ω_c = 0.5, and ω_s = 0.5. Nevertheless, the NM implementation used in this study adapts the scalar parameters ω_r, ω_e, ω_c, and ω_s to the dimensionality of the problem at hand (Gao and Han 2012). More details of the implementation used in this study can be found at SciPy (2022a).
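A comparable sketch with the NM algorithm, again with a synthetic objective standing in for Equation 5.1; the callback argument illustrates the per-iteration hook discussed next:

```python
import numpy as np
from scipy.optimize import minimize

def objective(params):
    # Synthetic stand-in for the FEMU objective of Equation 5.1
    target = np.array([205.21, 1124.0, 0.092, 1.36, 0.05])
    return float(np.sum(((params - target) / target) ** 2))

history = []  # filled by the callback, e.g., to monitor convergence

x0 = np.array([300.0, 900.0, 0.10, 1.0, 0.03])  # illustrative initial solution
res = minimize(objective, x0, method="Nelder-Mead",
               callback=lambda xk: history.append(xk.copy()),
               options={"adaptive": True})
print(res.x, res.fun, len(history))
```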
Nevertheless, implementation-wise, the NM algorithm allows more control over the identification procedure, as it allows a subroutine to be called at the end of every iteration.

Differential Evolution Algorithm

The Differential Evolution (DE) is a population-based stochastic algorithm that generates new solutions from other solutions (Storn and Price 1997). The DE algorithm can be characterized by its ease of implementation, robustness, and ability to find the global minimum of a problem in most attempts. However, a significant drawback of the DE algorithm is its high computational cost and low convergence rate compared to other optimization algorithms. On the other hand, compared to other population-based stochastic algorithms, the DE algorithm tends to outperform them. As is common in population-based algorithms, the number of solutions used as the population can significantly impact the convergence and success of the optimization procedure (Piotrowski et al.). Moreover, the DE algorithm requires the definition of bounds for the identification parameters, to generate the initial set of solutions, either manually, randomly, or distributed over the search space. A population of NP solutions, each with a size of n material parameters, is initially randomly generated and should try to cover the design space as much as possible. At each iteration i, a solution x^i is represented in the design space and limited by its upper and lower bounds, respectively, x_max and x_min. The generation of the initial population is defined for a given initial solution x^0 by

x^0 = x_min + r (x_max − x_min), (5.14)

where r is a vector of random numbers uniformly distributed in the range [0, 1]. The optimization process is then initialized, with the DE algorithm divided into three main operations: mutation, crossover, and selection.

Mutation

The mutation scheme allows for a more diversified and robust search in the design space. In this step, for every target solution x^i, a mutant solution v^{i+1} is generated by randomly choosing three mutually different solutions x^i_{r1}, x^i_{r2}, and x^i_{r3}, where r1, r2, and r3 are integer indices sampled in the range [1, 2, ..., NP]. The mutant solution v^{i+1} is then given by

v^{i+1} = x^i_{r1} + F (x^i_{r2} − x^i_{r3}), (5.15)

where F is a positive scalar that controls the rate at which the population evolves. This parameter is often referred to as the scale factor. The solutions x^i_{r2} and x^i_{r3} should also be different from the target solution x^i. Depending on the strategy used, x^i_{r1} can be randomly chosen from the population, or can even be the best solution from the previous generation.

Crossover

To complement the mutation strategy, the DE algorithm introduces crossover, controlled by a parameter C_r that defines the probability of crossover. In this step, the trial solution u^{i+1} is generated from two different solutions, the mutant solution v^{i+1} and the target solution x^i. The type of crossover can either be binomial or exponential. In the binomial crossover, the trial solution u^{i+1} is generated as

u^{i+1}_j = v^{i+1}_j if r_j ≤ C_r; u^{i+1}_j = x^i_j otherwise, (5.16)

where r_j is a vector of random numbers uniformly distributed in the range [0, 1]. Each material parameter of the trial solution u^{i+1} is determined independently.
To determine which solution contributes a given material parameter, C_r is compared to r_j. If r_j is less than or equal to C_r, the material parameter u^{i+1}_j is inherited from the mutant solution v^{i+1}; otherwise, it is inherited from the target solution x^i. In the exponential crossover, a random material parameter is selected and, starting from that one, the trial solution u^{i+1} receives material parameters from v^{i+1} until C_r is less than r_j. From that point forward, u^{i+1} inherits all the remaining material parameters from x^i.

Selection

Finally, the selection step decides if the trial solution u^{i+1} replaces the target solution x^i in the population. The objective function is evaluated at u^{i+1} and, if its value is less than or equal to the value at x^i, u^{i+1} replaces x^i in the population as

x^{i+1} = u^{i+1} if φ(u^{i+1}) ≤ φ(x^i); x^{i+1} = x^i otherwise. (5.17)

This procedure repeats until a termination criterion is satisfied. The operations of DE are schematically illustrated in Figure 5.2, and more details of the implementation used in this study can be found at SciPy (2022c).
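The corresponding call for the DE algorithm is sketched below. The bounds are illustrative placeholders for those of Table 5.2, and popsize is a multiplier of the number of parameters, so popsize=10 yields the population of 50 solutions used later in this study; SciPy's default initialization is, conveniently, Latin hypercube sampling:

```python
import numpy as np
from scipy.optimize import differential_evolution

def objective(params):
    # Synthetic stand-in for the FEMU objective of Equation 5.1
    target = np.array([205.21, 1124.0, 0.092, 1.36, 0.05])
    return float(np.sum(((params - target) / target) ** 2))

bounds = [(100.0, 500.0), (500.0, 2000.0), (0.01, 0.5),
          (0.5, 2.0), (0.001, 0.1)]  # illustrative bounds for A, B, n, m, C
res = differential_evolution(objective, bounds, popsize=10,
                             mutation=(0.5, 1.0), recombination=0.7,
                             seed=1, polish=False)
print(res.x, res.fun)
```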
Heterogeneous Thermomechanical Test

A numerical database from three heterogeneous thermomechanical tests is used as reference for the identification procedure, as no real experiments are considered in this study (Martins et al. 2020b; Martins et al.). Nevertheless, the reference data is based on real experiments performed on a Gleeble 3500 thermomechanical simulator, using a hydraulic servo system able to impose tension or compression forces, as well as a direct resistance heating system (Coër et al.). These tests present both heterogeneous temperature and strain fields, leading to richer mechanical information. The study considers a virtual material that presents the characteristics of a DP980 dual-phase steel with a thickness of 1.5 mm. The tests are uniaxial loading tests using the specimen geometry represented in Figure 5.3a. The ROI, defined by a width and length of, respectively, 28 and 60 mm, corresponds to the gauge length of the specimen. A uniaxial tensile load is imposed on the specimens at constant displacement rates of 0.006, 0.06, and 0.6 mm/s, respectively, for each test. According to the length of the ROI, the displacement rates correspond to average strain rates of 10⁻⁴, 10⁻³, and 10⁻² s⁻¹, respectively, which will be used to refer to each test. In addition, a heterogeneous temperature field is imposed on the specimens through the direct resistance heating system, controlling and maintaining the temperature at the specimen's center during the test. The remaining part of the specimens presents a temperature gradient due to the water-cooling system of the machine's grips. An example of the resulting temperature field is shown in Figure 5.3b for a real experiment. Three thermocouples are used to monitor the temperature field along the gauge length, and the red circles show their relative position in Figure 5.3b. The added value of this procedure is that the temperature gradient triggers a heterogeneous deformation, providing information on the material's mechanical behavior at different temperatures and strain rates. The temperature field acquired with the Gleeble equipment usually presents a parabolic shape, symmetrical about the specimen's center (Coelho et al.). Measurements of three thermocouples placed at −40, 0, and 40 mm along the x axis during real experiments confirm the approximately symmetrical and parabolic shape of the profile along the specimen's length. Temperature variations along the width of the specimen can be neglected. It was also confirmed that the temperature field remains constant throughout the deformation process (Coër et al.; Coelho et al.). Because of its parabolic shape, each test's temperature field can be described by a second-order polynomial. For this study, a temperature field T(x) is numerically generated between −30 and 30 mm along the x axis, and is defined for each test by

T(x) = −0.143x² − 0.251x + 506.404 for 10⁻⁴ s⁻¹,
T(x) = −0.149x² − 0.123x + 499.364 for 10⁻³ s⁻¹,
T(x) = −0.139x² − 0.040x + 509.084 for 10⁻² s⁻¹. (5.18)

For the three tests, the temperature field is in the range of [360 °C, 510 °C]. The maximum temperature is reached at the specimen's center, decreasing to its lowest values at −30 and 30 mm along the x axis, as represented in Figure 5.4. The reference data is numerically generated by an FEA simulation of each test. Only the ROI is modeled, due to the lack of temperature measurements outside this region. The finite element model is built in 2D, assuming a plane stress formulation. The Abaqus/Standard software (Dassault Systèmes 2019) is used, with the four-node bilinear plane stress quadrilateral element (CPS4) and a large strain formulation. The finite element mesh is composed of 1680 elements. Displacement-driven boundary conditions are imposed on the model extremities, respectively at −30 and 30 mm along the x axis. In addition, the temperature field is imposed on each node of the finite element model through the second-order polynomial, and kept constant throughout the test (see Figure 5.4). The virtual material is considered isotropic, and its elastic behavior is described by Hooke's law. The elastic properties of the material, namely Young's modulus E = 210 GPa and Poisson's ratio ν = 0.30, are known a priori and assumed to be constant in the temperature range of the study. The von Mises yield criterion is assumed, thus anisotropy is not considered in this study. In addition, the Johnson-Cook thermoelastoviscoplastic constitutive model (see Equation 2.23) is used to describe the strain hardening behavior. The models available in Abaqus/Standard are used in the FEA simulations. The reference material parameters are characteristic of a DP980 dual-phase steel (Martins et al. 2020b), and are presented in Table 5.1. Although it is common to identify the material parameters of the Johnson-Cook model by considering each term individually (describing the effects of strain hardening, temperature, and strain rate), in this study the material parameters are identified simultaneously.
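Assuming the standard multiplicative form of the Johnson-Cook model (the exact expression used in this study is given by Equation 2.23, not reproduced here), a minimal sketch of the flow stress with the reference parameters of Table 5.1 reads:

```python
import numpy as np

def johnson_cook_stress(eps_p, eps_rate, T, A=205.21, B=1124.0, n=0.092,
                        C=0.05, m=1.36, eps0=0.001, T_tr=25.0, T_m=1000.0):
    """Flow stress [MPa] of the standard Johnson-Cook model; defaults are
    the reference DP980 parameters of Table 5.1."""
    T_star = (T - T_tr) / (T_m - T_tr)        # homologous temperature
    hardening = A + B * eps_p ** n            # strain hardening term
    rate = 1.0 + C * np.log(eps_rate / eps0)  # strain-rate sensitivity term
    thermal = 1.0 - T_star ** m               # thermal softening term
    return hardening * rate * thermal

# e.g., flow stress at 5 % plastic strain, 1e-3 1/s, and 450 degrees Celsius
print(johnson_cook_stress(0.05, 1e-3, 450.0))
```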
Nevertheless, some material parameters are considered known a priori (Martins et al. 2020b) and kept fixed throughout the identification procedure, such as the reference strain rate ε̇_0 = 0.001 s⁻¹, the transition temperature T_tr = 25 °C, and the melting temperature T_m = 1000 °C. The reference material parameters are listed in Table 5.1.

Table 5.1 Reference material parameters of the Johnson-Cook model for the virtual DP980 material.

A [MPa]   B [MPa]   n [-]    T_tr [°C]   T_m [°C]   m [-]   C [-]   ε̇_0 [s⁻¹]
205.21    1124.0    0.092    25.0        1000.0     1.36    0.05    0.001

The melting temperature is a value commonly used for the DP980 (Erice et al.). The transition temperature T_tr is set to a low value, as it can describe a wide range of temperatures. Additionally, the value of ε̇_0 is also low, to account for the early sensitivity of the material to strain rate effects. Moreover, reducing the number of material parameters to be identified might mitigate the problem of non-uniqueness of the solutions (Notta-Cuvier et al.; Zhang et al. 2022b). As such, the material parameters A, B, n, m, and C are the only ones identified. In Figure 5.5, the maps of equivalent plastic strain are presented for each test at the last time instant, respectively at 950, 106, and 11.75 s. It is observed that an increase in average strain rate results in lower levels of equivalent plastic strain being achieved. As a result, the strain localization is delayed. The reference database is composed of the displacement field and the evolution of the load at each time instant. The evolution of the load for each test performed at the three different average strain rates is shown in Figure 5.6, where the markers represent the distribution of time instants considered. It is observed that the evolution of the load is sensitive to the average strain rate, as an increase leads to higher values of the maximum load. In addition, it should be noted that different numbers of time instants compose the reference database, specifically 62, 58, and 47, respectively, for the tests with average strain rates of 10⁻⁴, 10⁻³, and 10⁻² s⁻¹. Therefore, the objective function indirectly assigns different weights to each test. Moreover, a lower weight is assigned to the early stages of each test, as observed from the distribution of time instants considered in the objective function.

Numerical Implementation

One of the goals of this study was to develop a parameter identification software in the Python programming language, starting from a previously developed Fortran code (Martins et al. 2020b). Python was selected as the main programming language because it is a simple and modern language that allows easy and fast development (Marchand). Moreover, it is already one of the most used languages, and any user can easily learn it and use this code in the future. However, one drawback of Python compared to compiled programming languages, such as C++ or Java, is the higher computational cost (Oliveira et al.). To circumvent this computational limitation of Python, a hybrid code with Fortran is adopted, where the latter is used to perform the tasks that require more computational effort. There are multiple reasons for selecting Fortran, such as: (i) the availability of previously developed codes, (ii) its wide use in the scientific community, (iii) its low computational cost (Elton 2015), and (iv) a framework to easily connect it with Python. This framework is particularly important in this study, as it allows the use of code already developed and validated without additional development. The framework is called F2PY, and provides a connection between Python and Fortran (Peterson 2009). Simply put, Fortran subroutines can be called from a Python script as if they were Python functions, while only a few modifications are required in the Fortran subroutines. It is part of the NumPy package, and more details about its use can be found at NumPy (2022).
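As a hedged, minimal illustration of the F2PY workflow (the Fortran kernel below is a toy example, not the thesis's actual strain-computation subroutine):

```python
import numpy as np
import numpy.f2py

source = """
subroutine scale_field(u, s, v, n)
    integer, intent(in) :: n
    real(8), intent(in) :: u(n), s
    real(8), intent(out) :: v(n)
    v = s * u
end subroutine scale_field
"""
# Compile the Fortran source into an importable extension module
numpy.f2py.compile(source, modulename="ftools", extension=".f90", verbose=False)

import ftools
print(ftools.scale_field(np.arange(4.0), 2.0))  # the hidden size n is inferred
```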
The flow of the implementation is presented in Figure 5.7, where the main tasks are represented, as well as the programming language or software in which they are executed. Initially, the reference data (displacement field and evolution of the load) is read, and then the reference strain field is calculated. The latter task is performed in Fortran because it is one of the most computationally heavy tasks. Then, the optimization algorithm is selected from the ones presented above. It should be noted that additional algorithms could be made available, but only a few could be selected for this study. The optimization procedure starts by invoking the SciPy functions, at which point the control over the procedure is diminished. From then on, all the other tasks are performed within this framework. The FEA simulation is performed with Abaqus/Standard at every iteration, and an Abaqus-Python script is then required to extract the numerical data. Both of these tasks must be executed externally to the main Python code. Again, the strain field is calculated from the displacement field in Fortran, as well as the objective function. Although the latter is not of high computational cost, the Fortran code was reused from a previous implementation.

Numerical Evaluation

Contrary to the DE, the LM and the NM algorithms are designed for unconstrained problems. However, to limit the search space of the three algorithms, lower and upper bounds are defined for the material parameters (see Table 5.2). The defined bounds are based on the order of magnitude of each parameter, and special attention was taken not to overly restrict the search space. For the DE algorithm, the bounds are directly imposed, while for the LM and the NM algorithms, a transformation of the material parameters is used. For all algorithms, the bounds normalize the material parameters. Considering x̄ = x/x_0, the vector of material parameters normalized relative to the initial solution x_0, the lower and upper bounds x_min and x_max are normalized as

x̄_min = x_min/x_0 and x̄_max = x_max/x_0, (5.19)

where x̄_min and x̄_max are the normalized bounds. Then, the transformed material parameters X̄ for x̄ ≥ 1 correspond to

X̄ = 1 + (x̄_max − 1) [1 − exp((1 − x̄)/(x̄_max − 1))], (5.20)

while for x̄ < 1 they correspond to

X̄ = 1 + (x̄_min − 1) [1 − exp((1 − x̄)/(x̄_min − 1))]. (5.21)

The use of these transformations guarantees that the algorithms achieve feasible solutions, i.e., solutions that lie within the search space.
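A vectorized sketch of this transformation is given below; mapping the result back to physical values by multiplying with x_0, and the bound values themselves, are assumptions of this example:

```python
import numpy as np

def transform(x, x0, x_min, x_max):
    """Smoothly maps any candidate solution into [x_min, x_max],
    following Equations 5.19-5.21 (parameters normalized by x0)."""
    xb = x / x0
    lb, ub = x_min / x0, x_max / x0  # Equation 5.19
    upper = 1.0 + (ub - 1.0) * (1.0 - np.exp((1.0 - xb) / (ub - 1.0)))  # Eq. 5.20
    lower = 1.0 + (lb - 1.0) * (1.0 - np.exp((1.0 - xb) / (lb - 1.0)))  # Eq. 5.21
    return np.where(xb >= 1.0, upper, lower) * x0  # back to physical scale

x0 = np.array([300.0, 900.0, 0.10, 1.0, 0.03])
x_min = np.array([100.0, 500.0, 0.01, 0.5, 0.001])   # illustrative bounds
x_max = np.array([500.0, 2000.0, 0.5, 2.0, 0.1])
# A wildly out-of-range first parameter is pulled back towards its bound
print(transform(np.array([1.0e4, 900.0, 0.10, 1.0, 0.03]), x0, x_min, x_max))
```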
To mimic image noise from actual full-field measurements, random noise from a normal Gaussian distribution is added to the reference displacement field, while, for simplicity's sake, the load signals are kept noiseless. To evaluate the robustness of the methodology and the performance of the algorithms, three data sets are used as reference data for the calibration: (i) without noise, (ii) with noise of amplitude 10⁻⁵ mm, and (iii) with noise of amplitude 10⁻³ mm. The objective function values using the reference material parameters are 0.0, 3.13 × 10⁻⁹, and 3.017 × 10⁻⁵, respectively, for each data set. Because the LM and the NM algorithms are sensitive to the initial solution, five different initial sets are generated using the Latin hypercube sampling method (McKay et al. 1979), generating solutions evenly distributed over the search space (see Table 5.2). The number of initial sets was arbitrarily selected as equal to the number of material parameters. This sampling method does not consider the reference material parameters, avoiding an initial bias towards the global minimum. The distribution of the material parameters of each generated initial set over the search space is shown in Figure 5.8. These five initial sets are used for the LM and the NM algorithms, while for the DE algorithm, a population of 50 solutions, ten times the number of material parameters, is generated using the same sampling method. This number is considered sufficiently high to avoid a significant impact on the performance of the DE algorithm (Piotrowski et al.; Oliveira et al.).
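A sketch of how such initial sets and the DE population can be drawn with SciPy's quasi-Monte Carlo module (the bounds are again illustrative placeholders for Table 5.2):

```python
import numpy as np
from scipy.stats import qmc

lower = np.array([100.0, 500.0, 0.01, 0.5, 0.001])   # illustrative bounds
upper = np.array([500.0, 2000.0, 0.5, 2.0, 0.1])

sampler = qmc.LatinHypercube(d=5, seed=0)            # one dimension per parameter
initial_sets = qmc.scale(sampler.random(n=5), lower, upper)   # LM/NM initial sets
population = qmc.scale(sampler.random(n=50), lower, upper)    # DE population
print(initial_sets.round(3))
```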
The three algorithms are, in general, used with the default settings defined in the SciPy library. The exception is the step size for the finite difference approximation of the Jacobian in the LM algorithm, set equal to 10⁻³. The adaptive setting in the NM algorithm is also enabled, allowing the algorithm to adapt its parameters to the problem's dimensionality. The final solutions and objective function values obtained in the optimization procedures are summarized in Table 5.3 for the three algorithms. Additionally, in Figure 5.9, the evolution of the objective function throughout the function evaluations is represented for the three algorithms. Let us consider, for the purpose of analysis, that the final solutions with all material parameters within 0.1 % of error from the reference ones achieve the global minimum. As such, it is observed that the LM algorithm presents the worst performance of the three algorithms. Overall, the LM algorithm achieves the global minimum only twice, whereas the NM algorithm achieves the global minimum six times, and the DE algorithm finds the global minimum in two out of three attempts. The fact that the LM algorithm cannot achieve the global minimum more often can be related to the step size for the finite difference approximation of the Jacobian, which is probably not small enough. The algorithm could benefit from starting a new optimization procedure, using the obtained solution as the initial solution and reducing the step size. The results of the LM and the NM algorithms confirm their sensitivity to the initial solution, with different results depending on the initial set. This is particularly evident when the parameter A of the initial set is close to its upper bound, as is the case of set 2. In this situation, the solution stagnates early, either at the upper or the lower bound of parameter A. Based on these results, it can be said that avoiding initial solutions close to the parameters' bounds can potentially decrease the chance of the algorithm converging there. In general, parameters n, m, and C present higher success rates in achieving the reference solution. However, the results of set 5 with the LM algorithm appear to have converged to local minima. In fact, one disadvantage of the LM algorithm is its susceptibility to converging to local minima. The effect of noise on the results is interesting, as it is observed that noise can benefit the algorithm. In the case of the LM and NM algorithms, the final solutions achieved are, in general, very similar between the data sets without noise and with noise of amplitude 10⁻⁵ mm. However, with the LM algorithm and set 3, it is observed that noise helps the algorithm achieve a better solution than without noise. When the noise is increased to an amplitude of 10⁻³ mm, both the LM and the NM algorithms are not as efficient in achieving the reference solution, but the NM algorithm appears less negatively affected by the additional noise. The DE algorithm appears less affected by noise regarding the final solution, as each parameter of the solution for the data set with noise of amplitude 10⁻³ mm is within 0.1 % of error from the reference solution. However, the final solution for the data set with noise of amplitude 10⁻⁵ mm is not close to the reference one, which can be related to a local minimum. To further investigate the effect of noise on the DE, more attempts should be performed, as the DE algorithm is not deterministic, and a component of randomness always exists. Noise also affects the convergence of the objective function. For example, for the LM algorithm with initial set 1, where the final solutions are relatively similar, the algorithm converges at around 1000, 600, and 50 function evaluations, respectively, for the data sets without noise, with noise of amplitude 10⁻⁵ mm, and with noise of amplitude 10⁻³ mm. In summary, the differences between the LM and the NM algorithms are more evident in terms of convergence rate, where the former tends to converge rapidly, whereas the latter typically requires more function evaluations. Although only a single run was performed with the DE algorithm for each data set, it appears more robust than the other two algorithms. Nevertheless, it generally requires more function evaluations to converge to the global optimum. Using the LM or the NM algorithms in a multiple starting approach can lead to results similar to those of the DE algorithm.

Conclusions

A methodology to calibrate a thermoelastoviscoplastic constitutive model, based on full-field measurements and the FEMU, is considered to reduce the number of thermomechanical tests involved. Three heterogeneous thermomechanical tests performed at different average strain rates are used as the reference data required by the calibration procedure. By using a modern programming language library, three optimization algorithms are easily implemented in the calibration process, namely the LM, the NM, and the DE algorithms. The identification results of each algorithm were compared for three data sets, involving different noise levels introduced in the reference displacement field. The DE algorithm has demonstrated to be the most robust algorithm, by reaching or being close to the global minimum in two of the three data sets. However, it is also susceptible to local minima, even though it appears less so than the LM and the NM algorithms.
Moreover, the number of function evaluations required for convergence by the DE is higher than for the other algorithms. The latter is particularly relevant when dealing with the FEMU. Using a computationally efficient algorithm can potentially compensate for the effort required by the FEMU. A multiple starting approach could be used to circumvent the sensitivity of the LM and the NM algorithms to the initial solution. Additionally, it would be interesting to explore the use of the DE algorithm early on to reduce the search space, and then use the LM or the NM algorithms to fine-tune the solution. Nevertheless, this approach could be susceptible to local minima in the vicinity of the global optimum. In the scope of the present study, a parameter identification software was developed in the Python and Fortran programming languages, combining the benefits of both. In addition, the use of the optimization library within Python will allow an easier implementation and combination of optimization algorithms in the future. Later, the NM algorithm is used in the VFM, because it is computationally more efficient than the DE. Compared to the implementation of the LM algorithm, it allows more flexibility to control the identification procedure.

Volume Reconstruction from 3D Full-Field Measurements

Introduction

From a macroscopic point of view, it is essential to characterize and model the mechanical behavior of metallic parts up to large deformations, enabling an accurate simulation of manufacturing processes. However, the occurrence of plastic instabilities at large deformations, such as diffuse and localized necking (Bayoumi et al.; Hill), introduces additional challenges to the experimental techniques and inverse methodologies. Necking makes the deformation non-uniform and the stress state inside the specimen triaxial, eventually leading to fracture (Bridgman 1952; Tvergaard). Besides, plastic deformation becomes highly heterogeneous and concentrates within the necking region. A classical method to characterize the mechanical behavior is based on the information up to necking, but this can limit the strain range drastically. Alternatively, by combining full-field measurements with inverse methods, it is possible to calibrate a constitutive model beyond necking, where deformation is no longer homogeneous, for example using the FEMU (Pradeau et al.). The VFM can potentially be used as an alternative to the FEMU, but it has been mainly used in the 2D space, where plane stress conditions are assumed. However, because the stress state is triaxial, using the VFM under plane stress assumptions is no longer valid, and a 3D analysis is required (Rossi and Pierron 2012b; Kim et al.; Coppieters et al.; Martins et al.).
Nevertheless, it would be interesting to know for which thicknesses the VFM can still be applied in a 2D framework, but no previous study has investigated this. In the field of material testing and characterization, full-field measurement techniques are being used more and more frequently as the standard procedure. Pierron and Grédiac give a comprehensive overview of this topic and introduce the concept of "Material Testing 2.0". One advantage of these techniques is that they allow for the measurement of displacement and strain fields on the entire surface of the specimen. This enables the transition from classical mechanical tests to more complex ones, leading to more and richer data acquisition from a single test. Consequently, it reduces the cost and effort of material testing and improves the accuracy of calibrated constitutive models (Cooreman et al.). Among full-field measurement techniques, DIC is the most widespread and fastest-growing method in solid mechanics (Sutton et al. 2009). The most common optical setup nowadays is a two-camera setup that can acquire the surface displacement field in the 3D space (stereo-DIC). However, using DIC to obtain information in the bulk of the material is still a big challenge (Buljac et al.), but it is required to use the VFM in a 3D framework.

Figure 6.1 Schematic illustration of measurement points at the front and back surfaces of the specimen, in the undeformed and deformed configurations.

In the absence of bulk measurements, plane stress conditions have been commonly assumed to estimate the deformation in the thickness direction for sheet metals. While the technology is still limited to surface displacements, efforts have been made to, at least, reconstruct the 3D displacement field over the whole external surface of solids (Badel et al.; Genovese et al.). For example, Li et al. used a multiple-camera DIC system, with two cameras in front and another two at the back of the specimen, to measure the whole-field thickness strain. Starting from surface full-field measurements, a volume reconstruction method able to reconstruct the volume displacement field of a homogeneous solid was introduced with the aim of using the VFM in a 3D framework (Rossi and Pierron 2012a). The potential benefit of this proposed method is that it could eventually replace more advanced and costly techniques, such as DVC. However, an in-depth study of the uncertainties associated with this method is required. The final aim of using the volume reconstruction method is to accurately reconstruct the strain field, from real experiments, over the whole volume of the specimen. Then, it is intended to identify the material parameters through the VFM in a 3D framework.
To achieve that, this study performs an in-depth analysis of the uncertainty associated with the internal mesh generation (IMG). At first, the volume is directly reconstructed from FEA data, and afterwards virtual experiments mimicking real experiments are employed [START_REF] Lava | Assessment of measuring errors in DIC using deformation fields generated by plastic FEA[END_REF]. The interest of using virtual experiments is to reduce the gap to experimentally acquired measurements without having to perform the real experiment.

Internal Mesh Generation

The IMG method was proposed to reconstruct the volume displacement field inside a solid starting from surface measurements (Rossi and Pierron 2012b; [START_REF] Rossi | Characterization of aluminum alloys using a 3D full field measurement[END_REF] Rossi et al. 2018b). It can be applied to flat or cylindrical specimens, but only the former is considered here. In the undeformed configuration, flat specimens are assumed to have a constant thickness and two opposite parallel faces, one on the back and another on the front of the specimen. A requirement of the IMG is that the displacement field is known a priori on both surfaces, named here surfaces A and B. The coordinate system is chosen so that the 𝑧 axis is aligned with the thickness direction in the undeformed configuration. Let us consider that 𝐍_A = (N_Ax, N_Ay, N_Az) and 𝐍_B = (N_Bx, N_By, N_Bz) are points located on surfaces A and B, respectively. In the undeformed configuration, 𝐍_A and 𝐍_B share the same coordinates along the 𝑥 axis (N_Ax = N_Bx) and along the 𝑦 axis (N_Ay = N_By), but different coordinates along the 𝑧 axis. A schematic illustration of points 𝐍_A and 𝐍_B on surfaces A and B, both in the undeformed and deformed configurations, is shown in Figure 6.1.

The IMG adopts quadratic Bézier curves to construct the points inside the specimen. The Bézier curve is defined by one point on each surface, 𝐍_A and 𝐍_B, and an internal control point 𝐏 = (P_x, P_y, P_z) inside the volume, located in a plane between points 𝐍_A and 𝐍_B. The Bézier curve is denoted 𝐁 and mathematically defined as

𝐁(𝑧) = (1 − 𝑧)² 𝐍_A + 2(1 − 𝑧)𝑧 𝐏 + 𝑧² 𝐍_B,    (6.1)

with 0 ≤ 𝑧 ≤ 1 defining the position of each internal point along the Bézier curve, so that 𝐁(0) = 𝐍_A and 𝐁(1) = 𝐍_B. The distribution of the internal points 𝐳 along the Bézier curve is linearly defined. Additionally, the curve is tangent to segment 𝐍_A𝐏 at 𝐁(0) and to segment 𝐍_B𝐏 at 𝐁(1). The methodology behind the IMG is schematically illustrated in Figure 6.2 for the undeformed and deformed configurations. For clarity, the point of view is chosen perpendicular to the 𝑥 axis; however, points and vectors are considered in the 3D space. The internal point 𝐏 is defined as the intersection between the straight line perpendicular to surface A (or surface B) at 𝐍_A (or 𝐍_B) and the middle plane perpendicular to segment 𝐍_A𝐍_B. In the undeformed configuration, 𝐏 is simply defined as the average of the surface points 𝐍_A and 𝐍_B:

𝐏 = (𝐍_A + 𝐍_B) / 2,    (6.2)

yielding a straight line from 𝐍_A to 𝐍_B. However, in the deformed configuration, the specimen is not flat anymore, and two different internal points 𝐏_A and 𝐏_B are obtained by starting from 𝐍_A and 𝐍_B, respectively. According to geometrical considerations, the internal points 𝐏_A and 𝐏_B are defined as

𝐏_A = 𝐍_A − [ |𝐍_A − 𝐍_B|² / (2 (𝐍_A − 𝐍_B) ⋅ 𝐧_A) ] 𝐧_A,    (6.3)

𝐏_B = 𝐍_B − [ |𝐍_B − 𝐍_A|² / (2 (𝐍_B − 𝐍_A) ⋅ 𝐧_B) ] 𝐧_B,    (6.4)

where ⋅ is the scalar product, and 𝐧_A and 𝐧_B are the surface normals at points 𝐍_A and 𝐍_B, respectively.
The internal point 𝐏 is then computed as

𝐏 = (𝐏_A + 𝐏_B) / 2.    (6.5)

It can be pointed out that if the deformation is symmetrical, then 𝐏_A ≡ 𝐏_B ≡ 𝐏, as theoretically occurs in isotropic materials. However, due to measurement errors and the possible anisotropy or heterogeneity of the material, this is usually not the case. Finally, a 3D regular mesh is obtained with n_x, n_y, and n_z nodes along the 𝑥, 𝑦, and 𝑧 axes, respectively. Consequently, there will be m_x = n_x − 1, m_y = n_y − 1, and m_z = n_z − 1 elements along the 𝑥, 𝑦, and 𝑧 axes, respectively. Deciding the number of internal points is similar to deciding the number of elements through the thickness in a 3D finite element model: the more points through the thickness, the finer the mesh and the higher the precision. However, the computational time of subsequent operations, such as the strain calculation, increases proportionally. It is important to balance the required precision and the computational time; ideally, a convergence study can be used to determine the most suitable number of internal points. The number of nodes of the surface mesh, in contrast, depends mostly on the number of measurement points coming from the experiments. In the end, the volume displacement and strain fields are calculated using the shape functions of hexahedral elements, as illustrated in the last schematic of Figure 6.2.

From an implementation point of view, the steps of the IMG can be defined as:

1. Read the experimental coordinates field of the undeformed configuration 𝐗^E_0 and the 3D displacement field of the deformed configuration 𝐮^E_1, for every considered time instant;

2. Define the coordinates field of a regular grid over the specimen surfaces in the undeformed configuration 𝐗^S_0. The regular grid coordinates can be calculated based on the sparsity and coordinates of the experimental points;

3. Interpolate the displacement field 𝐮^E_1 over the regular grid 𝐗^S_0 to obtain the interpolated displacement field 𝐮^S_1 through an interpolation function ℐ defined as

𝐮^S_1 = ℐ(𝐮^E_1, 𝐗^E_0, 𝐗^S_0),    (6.6)

and obtain the regular grid coordinates field in the deformed configuration 𝐗^S_1 as

𝐗^S_1 = 𝐗^S_0 + 𝐮^S_1;    (6.7)

4. Calculate the surface normals of the regular grid in the deformed configuration 𝐧^S_1 through a function 𝒮 as

𝐧^S_1 = 𝒮(𝐗^S_1);    (6.8)

5. Calculate the middle plane between the front and back surfaces defined over the regular grid in the undeformed configuration;

6. Reconstruct the internal points through the Bézier curve definition of Equation 6.1 to obtain the reconstructed points coordinates fields 𝐗^IMG_0 and 𝐗^IMG_1, representing, respectively, the undeformed and deformed configurations. The reconstructed displacement field of the deformed configuration 𝐮^IMG_1 is then obtained as

𝐮^IMG_1 = 𝐗^IMG_1 − 𝐗^IMG_0;    (6.9)

7. Finally, calculate the reconstructed strain field 𝜺^IMG_1 using the hexahedral shape functions ℱ as

𝜺^IMG_1 = ℱ(𝐗^IMG_0, 𝐮^IMG_1).    (6.10)

Lastly, the IMG is numerically implemented in the Python programming language, using an array-based approach.
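To make the geometry of steps 5 and 6 concrete, the sketch below reconstructs the internal points for one pair of surface points in the deformed configuration, following Equations 6.1 and 6.3–6.5. It is a minimal NumPy illustration of the published formulas, not the thesis implementation; the input points and normals are arbitrary example values.

```python
import numpy as np

def internal_points(NA, NB, nA, nB, nz=5):
    """Reconstruct nz points between surface points NA and NB
    (deformed configuration) with a quadratic Bezier curve."""
    # Control points from the surface normals, Eqs. (6.3)-(6.4).
    PA = NA - (np.dot(NA - NB, NA - NB) / (2.0 * np.dot(NA - NB, nA))) * nA
    PB = NB - (np.dot(NB - NA, NB - NA) / (2.0 * np.dot(NB - NA, nB))) * nB
    P = 0.5 * (PA + PB)                      # Eq. (6.5)
    z = np.linspace(0.0, 1.0, nz)            # linear point distribution
    # Quadratic Bezier curve, Eq. (6.1): B(0) = NA, B(1) = NB.
    return ((1 - z)**2)[:, None] * NA + (2 * (1 - z) * z)[:, None] * P \
           + (z**2)[:, None] * NB

# Example: one pair of measured points and their unit surface normals.
NA = np.array([0.0, 0.0, 0.0])
NB = np.array([0.1, 0.0, 1.0])              # slightly offset back-surface point
nA = np.array([0.2, 0.0, -1.0]); nA /= np.linalg.norm(nA)
nB = np.array([0.2, 0.0, 1.0]);  nB /= np.linalg.norm(nB)
print(internal_points(NA, NB, nA, nB))
```

In the actual array-based implementation, the same operations are applied simultaneously to all pairs of surface points of the regular grid.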
Plane Strain Tensile Test

To evaluate the accuracy of the IMG before applying it to real experiments, a FEA simulation for which the whole volume deformation history is known is used. The finite element model reproduces the necking evolution in specimens under severe plastic deformation. In this study, a 20 mm thick flat specimen with an initial rectangular cross-section, as represented in Figure 6.3a, is considered under a plane strain tensile test. The specimen is modeled in 3D using the commercial software Abaqus/Standard (Dassault Systèmes 2019). The bottom extremity of the specimen is fixed, and a vertical displacement is imposed on the top extremity. Moreover, an eight-node brick element with reduced integration is used, as well as a large strain formulation. The global element size is 1 mm, and a structured mesh is used in the volume of interest (VOI) of 76 × 20 mm² in the 𝑥𝑦 plane, resulting in 1520 elements and 1617 nodes in the VOI. The dimensions of the latter were defined based on the equivalent plastic strain distribution, as it is observed that it mainly localizes in the center region of the specimen (see Figure 6.3c).

A virtual material is considered, where the constitutive models and material parameters are selected to resemble the behavior of a metal with high anisotropy. The Hill 1948 yield criterion is used to describe the yield surface, and Swift's hardening law is selected to describe the strain hardening curve (see Section 2.3). The material parameters, including Young's modulus 𝐸 and Poisson's ratio 𝜈, are provided in Table 6.1.

Table 6.1 Elastic properties of the material and parameters of the constitutive models used for the virtual material (Elasticity: 𝐸 = 70 GPa, 𝜈 [-]; Hill 1948: 𝐹, 𝐺, 𝐻, 𝑁, 𝑀, 𝐿 [-]; Swift: 𝜎₀ [MPa], 𝐾 [MPa], 𝑛 [-]).

The FEA simulation is displacement-driven, with a maximum imposed vertical displacement of 16 mm at the top extremity, and is composed of 100 time instants. At the last time instant, the maximum values of the equivalent plastic strain are approximately 0.8, which falls within the range of fracture strains observed for ductile metals, as seen in the deformed configuration of Figure 6.3c. In this representation, it is possible to notice that severe necking occurs at the center of the specimen. The necking is associated with a complex 3D deformation field, as illustrated over the cross-sections. The maximum strain is reached in the inner part of the specimen and therefore cannot be directly evaluated by surface measurements.

Analyzing the distribution of the strain tensor components, it is observed that the test is largely dominated by the components 𝜀_yy and 𝜀_zz (see Figure 6.4). These components reach their highest values in the middle of the specimen, 0.542 and −0.422 for 𝜀_yy and 𝜀_zz, respectively. Both strain components also present a considerable gradient, increasing towards the middle of the specimen. The other components present considerably smaller magnitudes, particularly 𝜀_xz and 𝜀_yz, whose maximum values are lower than 0.1. Although the strain component 𝜀_xx presents low values compared to 𝜀_yy, its maximum values are close to the free edges of the specimen, which might be difficult regions to reconstruct. The strain field obtained from the FEA simulation is used in the following sections as the reference strain field 𝜺^REF. The next step is to use the IMG to reconstruct the internal displacement field and to compare the reconstructed strain field to 𝜺^REF.
Volume Reconstruction from Finite Element Analysis

The necking occurs in the central part of the specimen, where the stress state is triaxial and cannot be inferred from surface measurements. The VOI used to verify the reconstruction algorithm is the central zone where localized necking occurs. A regular grid of 77 × 21 nodes, respectively along the specimen's width (𝑥 axis) and length (𝑦 axis), is selected in the reconstruction algorithm to coincide with the number of nodes of the finite element model. Therefore, it is not necessary to interpolate the displacement fields to the regular grid, because these are coincident, i.e., 𝐮^S_1 = 𝐮^E_1 (see Equation 6.6). Moreover, 21 nodes are used in the reconstruction of the internal points, to have a direct correspondence between the reference and the reconstruction. In terms of computational effort, the IMG is very fast at reconstructing the displacement field: in this study, it takes approximately two minutes to reconstruct the displacement field of all 100 time instants.

The reconstructed volume is evaluated by comparing the reconstructed strain field 𝜺^IMG to the reference strain field 𝜺^REF, for every time instant. The absolute error and the average absolute error are used to express the difference between the two strain fields. Mathematically, for a given time instant 𝑡 and material point 𝑖, the absolute error 𝜹 is given by

𝜹(𝑡, 𝑖) = |𝜺^IMG(𝑡, 𝑖) − 𝜺^REF(𝑡, 𝑖)|,    (6.11)

while the average absolute error δ̄ over all material points is given by

δ̄(𝑡) = (1/n_p) Σ_{𝑖=1}^{n_p} 𝜹(𝑡, 𝑖),    (6.12)

where n_p is the total number of material points.

In Figure 6.5, the evolution of 𝜹 is represented for each strain tensor component throughout all time instants. All material points of the VOI are represented within the plotted area; the upper boundary represents the maximum observed error, and the intermediate white line represents δ̄. A first observation can be made regarding the increase of the error with deformation: the higher the deformation, the more the IMG is susceptible to error. Nevertheless, a reasonable agreement is observed between the reference and the reconstruction, with maximum errors of up to 0.08 in the 𝜀_yy and 𝜀_yz components. Even though the maximum errors are relatively high compared to the observed strain values, the average values are much closer to the minimum than to the maximum. The error evolution of the thickness strain 𝜀_zz is distinct from the other components, as the error increases more gradually, rather than abruptly, towards the end of the test. This is a sign that the source of error in 𝜀_zz is not only related to the increase of deformation, as discussed further ahead.

The 3D maps of Figure 6.6 also present the spatial distribution of the absolute error. Overall, 𝜀_xx and 𝜀_yy are well reconstructed, as a large region presents low errors. However, 𝜀_xx presents large errors towards the free edges of the specimen, and 𝜀_yy towards the middle of the specimen. The accentuated error at the free edges can also be related to the estimation of the surface normals close to the free edges, as no data is available beyond the specimen's boundaries. The use of additional information from the specimen's sides could potentially improve the accuracy of the IMG towards the free edges. However, using the sides' information would add more complexity to the methodology, both in terms of data acquisition and of the integration of these data within the IMG methodology. The spatial distribution of the absolute error of 𝜀_zz is more complex (see Figure 6.6c).
It is observed that the highest error is at the center of the specimen's surface. Moreover, following a path towards the middle of the specimen, the error decreases between the specimen's surface and the middle plane, but then increases again towards the middle. This behavior indicates that the reconstructed strain 𝜀_zz is approximately homogeneous through the thickness, representing an average of the reference gradient values. The source of this behavior is the linear distribution of the internal points along the Bézier curve: combined with surface normals that are perpendicular to the undeformed surfaces, it leads to Bézier curves with little or no curvature. Although this is less pronounced away from the specimen's center, the most problematic region is still that of higher deformations. This drawback of the IMG had not been previously described or identified.

An investigation of the specimen's global volume evolution, calculated as the sum of all the elements' volumes, shows a discrepancy between the reconstruction and the reference, as shown in Figure 6.7a. During the elastic regime, which corresponds to the initial rapid evolution, and at the beginning of the plastic regime, the reconstruction shows a global volume approximately identical to the reference. However, starting from the time instant of maximum load, the global volume of the reconstruction increases rapidly and deviates from the reference values, which remain approximately constant afterwards. Additionally, a local analysis of the volume of the elements showed a high discrepancy. In the reference, the initial volume of the elements is approximately preserved, with a residual increase coming from the elastic deformation. In the reconstruction, on the other hand, the elements' volumes start deviating from the initial volume at equivalent plastic strain values of approximately 0.13, which are rather low relative to the maximum values at the last time instant. This is shown in Figure 6.7b, where the volume evolution of two elements is represented, one located on the specimen's surface and another at mid-thickness (between the two opposite surfaces). These two deviations from the reference, in the global and local volumes, are thought to be a source of error of the IMG, leading to incorrect reconstructed strain fields. Therefore, the next section focuses on improving the IMG by integrating two types of correction.

Improvements to Internal Mesh Generation

The analysis of the reconstructed strain field relative to the reference identified two possible sources of error of the IMG: the increase of the geometry's global volume and the deviation of the local element volume from its initial value. This section proposes two strategies to improve the accuracy of the IMG in reconstructing the strain field.

Correction of Global Volume

As previously observed in Figure 6.7a, the reconstructed global volume deviates abruptly from the reference after a certain amount of deformation, whereas the reference global volume evolves approximately linearly after the onset of plastic deformation. The proposed solution attempts to correct the global volume after it starts deviating from the reference. However, to generalize the algorithm to applications where the reference is unknown, the threshold is identified once the abrupt increase is detected in the global volume evolution.
The global volume is, mathematically, the sum of all the elements' volumes, but in the IMG it can also be seen as the volume enclosed by the geometry's external surfaces. This way, by adjusting the enclosing surfaces, it is possible to correct the global volume. However, the front and back surfaces are the only known information about the specimen (either from the FEA simulation or from experimental measurements), while all the other surfaces are reconstructed using the IMG. In turn, the IMG is based on the normals of the front and back surface points, which govern the unknown enclosing surfaces. Therefore, the proposed algorithm attempts to correct the global volume by adjusting the normals of the front and back surface points (see Figure 6.8).

The first part of the algorithm identifies the threshold of the abrupt increase and determines the predicted global volume of the subsequent time instants. Initially, the global volume evolution is only due to elastic deformation. After the onset of plastic deformation, the evolution rate changes, and it is necessary to quantify it. Therefore, the global volume evolution is incrementally fitted by a second-order polynomial regression, and the coefficient of determination R² is calculated. Whenever R² drops below 0.99, the given time instant is considered the onset of plastic deformation. From this time instant onward, the global volume evolution is incrementally fitted by a linear regression. Similarly, whenever R² drops below 0.99, an abrupt increase is considered to have occurred. Then, the linear regression is used to predict the global volume of the subsequent time instants and correct it to a linear evolution. This first part of the algorithm is schematically represented in the top section of Figure 6.8 and sketched in code after this paragraph's companion equations.

The second part of the algorithm is dedicated to correcting the global volume evolution by adjusting the surface normals. Let us consider that 𝐧 represents the surface normal of any given point. The corrected surface normal 𝐧_c is obtained by multiplying each component of 𝐧 by a given weight 𝐰 = (w_x, w_y, w_z), as

𝐧_c = (w_x n_x, w_y n_y, w_z n_z),    (6.13)

which is then normalized by its magnitude to obtain a unit normal vector. The search for the optimal weight is performed using the Levenberg-Marquardt algorithm from Python's optimization library (see Section 5.3). The optimization starts from the known solution, where 𝐰 = (1, 1, 1), and iteratively searches for the optimum weight by comparing the corrected global volume with the one predicted by the linear regression. The optimization procedure stops whenever the corrected global volume drops below the predicted global volume. This second part of the algorithm is schematically presented in the bottom section of Figure 6.8.

Different combinations of weights are implemented to investigate which one yields the best results. These combinations are differentiated by the surface normal component or components adjusted by a given weight: the individual weight associated with the x component (IMG-Global-X), the y component (IMG-Global-Y), or the z component (IMG-Global-Z), and a combination where the three components are simultaneously adjusted by optimizing three different weights (IMG-Global-XYZ). Mathematically, these combinations result in corrected surface normals given by

𝐧_c = (w_x n_x, n_y, n_z)          with IMG-Global-X
𝐧_c = (n_x, w_y n_y, n_z)          with IMG-Global-Y
𝐧_c = (n_x, n_y, w_z n_z)          with IMG-Global-Z
𝐧_c = (w_x n_x, w_y n_y, w_z n_z)  with IMG-Global-XYZ.    (6.14)
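The sketch below illustrates the first part of the algorithm: detecting a regime change in a global volume history with incrementally fitted regressions and the R² criterion, then predicting the subsequent global volume by the linear trend. It is a simplified reading of the procedure described above, with numpy.polyfit standing in for whichever regression routine was actually used, and a synthetic volume history in place of the reconstructed one.

```python
import numpy as np

def detect_change(volume, degree, r2_min=0.99, start=5):
    """Index at which an incremental polynomial fit of the given
    degree stops describing the volume history (R^2 < r2_min)."""
    t = np.arange(len(volume), dtype=float)
    for k in range(start, len(volume) + 1):
        coeffs = np.polyfit(t[:k], volume[:k], degree)
        res = volume[:k] - np.polyval(coeffs, t[:k])
        r2 = 1.0 - np.sum(res**2) / np.sum((volume[:k] - volume[:k].mean())**2)
        if r2 < r2_min:
            return k - 1
    return len(volume)

# Synthetic global volume history: linear plastic evolution with an
# abrupt (quadratic) increase after time instant 60.
t = np.arange(100, dtype=float)
volume = 1.0e4 + 2.0 * t + np.where(t > 60.0, (t - 60.0)**2, 0.0)

# Detect the abrupt increase with an incremental linear fit (degree 1);
# the onset of plasticity would use the same function with degree 2.
k_abrupt = detect_change(volume, degree=1)

# Predict the global volume of the subsequent instants by the linear
# trend fitted before the abrupt increase; this prediction is the
# target of the weight optimization in the second part.
a, b = np.polyfit(t[:k_abrupt], volume[:k_abrupt], 1)
predicted = a * t[k_abrupt:] + b
print(k_abrupt, predicted[:3])
```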
The combination IMG-Global-XYZ poses a bigger optimization challenge than the other three, because three different weights are optimized instead of only one. In Figure 6.9, the corrected global volume evolution obtained with the four combinations is represented. The only combination that failed to correct the global volume was the one optimizing only the weight given to n_x, while the others successfully corrected it to the linearly predicted values. The evolutions of the latter overlap in Figure 6.9; thus, only the evolution of IMG-Global-XYZ is visible. Although three combinations of weights succeeded in correcting the global volume, this does not mean they yielded three similar solutions. To decide which combination is best suited to improve the IMG, the evolution of the average absolute error of the strain tensor components for each combination is analyzed and presented in Figure 6.10. Compared to the average absolute error obtained with the IMG without correction, it is observed that the only combinations that perform worse for any component are IMG-Global-X and IMG-Global-XYZ, in particular for 𝜀_xx, 𝜀_xy, and 𝜀_xz. The average absolute error of 𝜀_zz barely changes for any combination, but improvements are observed in the other components when using the combinations IMG-Global-Y and IMG-Global-Z. The differences between these two combinations are slight, but both decrease the average absolute error of 𝜀_yy, 𝜀_xy, and 𝜀_yz by 50 % or more. In further analyses, the combination IMG-Global-Z is adopted because it corrects the component through the thickness and is therefore a more general approach that can be applied to different tests. Computationally, the correction of the global volume takes approximately 15 minutes for all time instants. Compared to the time required by the IMG without global correction, this is a substantial increase, but it can still be considered computationally fast.

Correction of Local Volume

Due to the global volume increase, it would be expected that the volume of the elements also changes, as demonstrated in Figure 6.7b. However, the volume of the elements does not increase proportionally; instead, some elements increase their volume while others decrease it. This behavior is a consequence of the linear distribution of nodes through the thickness along the Bézier curves. Assuming 𝜀_xx and 𝜀_yy are correctly reconstructed, 𝜀_zz is then either over- or underestimated, resulting in an incorrect evolution of the elements' volumes. By adjusting the distribution of nodes through the thickness along the Bézier curve, the proposed solution attempts to correct the local volume of the reconstructed elements. The algorithm adopts a node-by-node approach, where the distribution of nodes along the Bézier curve is iteratively optimized to improve the elements' volumes in its neighborhood, meaning that the updated nodes affect the volume of the elements to which they belong. Therefore, an individual cost function is used to guide the optimization towards the optimum distribution. This individual cost function is given by the sum of the maximum absolute volume change (relative to a given reference) along the 𝑧 axis. The node distribution associated with a given pair of surface nodes (one on the back and another on the front surface) is denoted by 𝐳, such that the distribution of the nodes between points 𝐍_A and 𝐍_B illustrated in Figure 6.2 is given by 𝐳_ij = [0.0, 0.25, 0.5, 0.75, 1.0].
Each pair of nodes 𝑖𝑗 is associated with m_s elements on each surface, corresponding to m_s × m_z elements through the thickness. The individual cost function 𝜙_I, for a given distribution 𝐳_ij, is then given by

𝜙_I(𝐳_ij) = Σ_{l=1}^{m_s} max_z |𝐕_I(𝐳_ij, l) − 𝐕_I0(l)|,    (6.15)

where 𝐕_I represents the optimized volume of each element affected by 𝐳_ij, and 𝐕_I0 the corresponding reference volume. As such, 𝐕_I(l) stands for the set of the elements' volumes along a given surface element l. The optimization procedure is simplified by adopting symmetry conditions along the thickness between the front and back surfaces. This way, the node distribution is optimized for only half of the specimen's thickness, while the distribution in the other half is identical. Moreover, the location of the nodes on the front and back surfaces is unchanged, as well as that of the node in the middle between the two outer surfaces. Nevertheless, the optimization of the individual node distributions could lead to global solutions worse than the reference. Thus, after all nodes are individually optimized, a global cost function 𝜙_G is calculated to evaluate whether the global solution has improved. The global cost function is defined as the sum of the maximum absolute volume change along the 𝑧 axis for all elements (relative to a given reference), and is mathematically expressed as

𝜙_G(𝐙) = Σ_{𝑖=1}^{m_x} Σ_{𝑗=1}^{m_y} max_z |𝐕_G(𝐙, 𝑖, 𝑗) − 𝐕_G0(𝑖, 𝑗)|,    (6.16)

where 𝐙 represents the set of all the optimized node distributions 𝐳_ij, 𝐕_G represents the optimized volume of each element in the VOI, and 𝐕_G0 the corresponding reference volume. As such, 𝐕_G0(𝑖, 𝑗) stands for the set of the elements' volumes along a given surface element 𝑖𝑗. The optimization procedure is repeated until the global solution no longer improves. The structure of the algorithm to correct the elements' volumes is schematically presented in Figure 6.11. Additionally, the methodology to find the optimal node distribution along the Bézier curve could be implemented in different ways. Two strategies are proposed here, named biased and optimized.

Biased Strategy

The biased distribution uses a logarithmic function to distribute the nodes along the Bézier curve with a certain bias in one direction, either towards the middle of the specimen or towards the external surfaces. The position of node 𝐳_ij(k) ∈ [0, 0.5] along the biased distribution is given by

𝐳_ij(k, w) = (1 / (2|w|)) [10^{(k/n_b) log₁₀(|w|+1)} − 1], with k = 1, 2, ..., n_b,    (6.17)

where n_b = (n_z + 1)/2 represents the number of nodes in the biased distribution, assuming symmetry along the 𝑧 axis, and w ≠ 0 is the bias weight. The bias weight w can take negative or positive values, with its absolute value governing the bias magnitude and its sign governing the bias direction. Equation 6.17 is formulated to generate a bias towards the outer surfaces of the specimen; therefore, to also consider a bias towards the middle of the specimen, 𝐳_ij is inverted when w < 0. Mathematically, the two bias directions are given by

𝐳_ij = 𝐳_ij                  if w > 0
𝐳_ij = flip(|𝐳_ij − 0.5|)    if w < 0,    (6.18)

where flip(⋅) stands for reversing the order of the vector components. The advantage of this strategy is that there is only one optimization variable, independently of the number of nodes through the thickness. The initial solution of the optimization algorithm is a linear node distribution 𝐳^linear_ij.
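A minimal sketch of the biased distribution of Equations 6.17 and 6.18 follows, assuming symmetry along the thickness so that only half of the node positions are generated; the weights used in the example are arbitrary.

```python
import numpy as np

def biased_distribution(n_b, w):
    """Biased node positions z in [0, 0.5] for half the thickness,
    following Eq. (6.17); w > 0 biases towards the outer surfaces,
    w < 0 towards the middle of the specimen (Eq. (6.18))."""
    if w == 0:
        raise ValueError("the bias weight w must be non-zero")
    k = np.arange(1, n_b + 1)
    z = (10.0**(k * np.log10(abs(w) + 1.0) / n_b) - 1.0) / (2.0 * abs(w))
    if w < 0:
        z = np.flip(np.abs(z - 0.5))  # invert the bias direction
    return z

# Example: 5 nodes over half the thickness for both bias directions.
print(biased_distribution(5, 4.0))   # bias towards the outer surfaces
print(biased_distribution(5, -4.0))  # bias towards the middle
```

With w = 4, the last node lands exactly at z = 0.5, the mid-thickness position, consistent with the symmetry assumption.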
Optimized Strategy

The optimized distribution targets each node along the Bézier curve individually. In this approach, the number of optimization variables is equal to the number of nodes through the thickness (considering the symmetry condition and excluding the surface and middle nodes). Thus, the higher the number of nodes, the more complex the optimization problem. Moreover, because this approach optimizes the position of each node individually, there is a risk of producing unfeasible meshes. Therefore, bound constraints need to be imposed on the node distribution as

z^linear_ijk − δ_z ≤ z_ijk ≤ z^linear_ijk + δ_z,    (6.19)

where δ_z represents the interval of accepted values, given by

δ_z = 0.5 / (n_z − 1).    (6.20)

Similarly to the biased strategy, the initial solution of the optimization algorithm is a linear distribution. Moreover, the Nelder-Mead algorithm is employed in the optimization procedure (see Section 5.3).

The local volume correction was tested for both strategies, without considering the global volume correction. Additionally, a total and an incremental approach are used for the reference that guides the optimization procedure: the former uses the initial undeformed configuration as the reference, and the latter the last deformed configuration. Four different combinations are tested, using the biased (IMG-Local-B) and optimized (IMG-Local-O) strategies with the total (-T) and incremental (-I) approaches. The evolution of the average absolute error for each strain tensor component is presented in Figure 6.12 for all combinations. It is observed that the IMG-Local-B-I combination completely fails to correct the local volume, as the error vastly increases relative to the IMG combination. In this particular combination, the biased distribution converged excessively towards the middle surface: either the adopted cost function failed to penalize such solutions, or bound constraints should have been adopted for the weights. Except for this combination, all the others decrease or maintain the error. Specifically, the average absolute error associated with the thickness strain substantially decreases when adopting the optimized strategy, either with a total or an incremental approach. A drawback of the local volume correction is its high computational cost, as a few hours may be required to correct all time instants.

Correction to Global and Local Volume

Here, the combination of the global and local volume corrections (IMG-Mixed) is analyzed. Again, four different combinations are tested, where the local volume strategy is changed. In Figure 6.13, the evolution of the average absolute error of each reconstructed strain component is presented. The combination using the biased strategy with an incremental approach should be discarded, as it completely fails again. Using the other combinations, the results are encouraging for all strain components, except for 𝜀_xx, which is unchanged. It is interesting to observe that the biased strategy with a total approach decreases the error, though this was not the case when using both corrections individually. Nevertheless, the optimized strategy with a total approach is the one that decreases the error the most. In Figure 6.14, the profile of the strain component 𝜀_zz is plotted along the thickness direction for the center elements, corresponding to 20 elements. It is observed that the reference produces a curved profile, while the IMG yields an approximately linear one. When applying the volume corrections, the reconstructed profile is no longer linear and is more similar to the reference.
However, the reconstructed profile of the combination IMG-Mixed-O-I is irregular along the thickness, with a few peaks observed. Although the error decreases, its reconstructed profile is still far from the reference. Based on this result, the total approach appears to be the best for both the biased and optimized strategies. Between these two combinations, both reconstructed profiles are similar to the reference. However, using the biased strategy, the maximum value of 𝜀_zz is overestimated, and the gradient evolves linearly towards the center. Consequently, combining the global volume correction with the local volume correction using the optimized strategy and a total approach is the best combination. Figure 6.15 shows the elements' volume map and the thickness strain map of the reconstructed volume of interest at the last time instant, using the IMG-Mixed-O-T combination. Although the corrected values are still far from the reference, the maximum and minimum values have improved (see Figure 6.7b). Nevertheless, the improvements resulted in the thickness strain map of Figure 6.15b, which is almost identical to the reference (see Figure 6.4c). Lastly, to show that the proposed global and local volume corrections improve the IMG, Figure 6.16 presents the values of the normal strain components along different paths. In all cases, the combination IMG-Mixed-O-T improves the agreement of the strain reconstruction with the reference.

Up to this point, the error coming from the IMG itself has been established and reduced. Nevertheless, the previous studies only considered the displacement field from FEA simulations, which does not contain any "imperfections". However, the aim of the IMG is to be applied to real experiments, where many sources of error exist, which could lead to increased errors in the reconstructed fields. As such, it is important to also estimate the errors that would be obtained when using real experiments. With that purpose, the following section estimates the uncertainty of the IMG using virtual experiments that mimic real experiments.

Uncertainty Quantification

The IMG was developed to be applied to real experiments, but many experimental conditions can influence the measurement quality, and consequently the quality of the reconstructed displacement and strain fields (Oliveira et al. 2022a). To include these conditions in the volume reconstruction, virtual experiments are introduced to evaluate the uncertainty of the IMG under various experimental conditions. In particular, virtual experiments mimicking a stereo-DIC setup are used to measure the out-of-plane displacements more accurately. These virtual experiments are useful to study the influence of experimental conditions or DIC settings without ever performing the real experiments [START_REF] Lava | Validation of finite-element models using full-field experimental data: Levelling finite-element analysis data through a digital image correlation engine[END_REF][START_REF] Henriques | Identification of orthotropic elastic properties of wood by a synthetic image approach based on digital image correlation[END_REF]. Consequently, time and cost decrease relative to performing the same study with real experiments.

Generation of Stereo-DIC Virtual Experiments

To simulate the measurement chain associated with a virtual stereo-DIC setup, four virtual cameras are used to measure the displacement field on the front and back surfaces of the specimen, two for each.
The virtual experiments are generated from displacement fields coming from the FEA simulation, which are used to deform a reference speckle pattern image. This image can be numerically generated. Further details on the procedure and its mathematical formulation can be found in Balcaen et al. (2017b) and Balcaen et al. (2017a). The necessary steps to obtain virtual experiments for each surface of the specimen are as follows:

1. Export the FEA mesh and associated 3D displacement fields of the outer surfaces;
2. Impose the FEA mesh onto a reference speckle pattern image;
3. Generate the synthetically deformed stereo-DIC images;
4. Perform the stereo-DIC measurements;
5. Export the stereo-DIC measurements to the IMG.

These steps are illustrated in Figure 6.17 for a large region of the plane strain tensile test. As previously established, Abaqus/Standard was used to perform the FEA simulation. All subsequent steps were performed using the MatchID software suite (MatchID 2022a). In particular, step 1 was performed with the MatchID FEA Converter (MatchID 2022b), which allows an easy extraction of the mesh coordinates and displacement fields from Abaqus output database files (see Figure 6.17a). Using this software, the obtained output files are directly compatible with the MatchID software. In step 2, the FEA mesh is imposed onto the speckle pattern image from the point of view of camera 0 (see Figure 6.17b). Then, in step 3, the FEA mesh and displacement fields are interpolated to camera 1 using the stereo-camera calibration parameters (see Figure 6.17c). Steps 2 and 3 are performed with the virtual image generation module of MatchID Stereo (MatchID 2022c). Using this module, it is possible to use any speckle pattern image and easily impose the FEA mesh on top of it. Finally, full-field measurements are obtained in step 4 through MatchID Stereo (MatchID 2022d) and exported in a convenient structure to be used in the IMG (see Figures 6.17d and 6.17e).

One key aspect of stereo-DIC virtual experiments is the definition of the calibration parameters. Without these parameters, it is impossible to generate virtual experiments, because they are responsible for interpolating data from one camera to the other. In contrast, generating virtual experiments for a 2D-DIC application is much simpler, because there is no need to define these calibration parameters. These parameters can be obtained from a standard calibration procedure, where more than 50 real images are captured with each camera. Then, through a calibration software, the intrinsic and extrinsic parameters are obtained. The intrinsic parameters of a camera describe the way images are acquired, such as the resolution or focal length. The extrinsic parameters, on the other hand, describe the position and orientation of a camera relative to another. Normally, the extrinsic parameters of camera 1 are defined relative to a reference camera, named camera 0. Calibration parameters can also be manually defined, even though they will not be a complete representation of a real acquisition. Nevertheless, an image of a real speckle pattern is used as a reference to generate all virtual experiments, as shown in Figure 6.18. This image was obtained using a real camera with a resolution of 1040 × 1392 px², respectively the width r_x and height r_y of the camera sensor in pixels.
Therefore, some calibration parameters are inherited from this camera, assuming perfect conditions; in particular, c_x and c_y, defining the coordinates of the detector's center, are set to half-width and half-height, respectively. Additionally, a camera lens with a focal length f = 50 mm is considered here. From the camera resolution and focal length, it is possible to obtain the parameters f_x and f_y, governing the focal length along the 𝑥 and 𝑦 axes. By considering perfect conditions, it is also assumed that the cameras present no skewing of the axes and no lens distortions. Thus, the skew factor f_s, the radial distortion coefficients κ₁, κ₂, and κ₃ of the camera lens, as well as the tangential distortion coefficients p₁ and p₂, are equal to zero. Perfect conditions are also assumed in the definition of the extrinsic parameters, meaning that camera 1 is perfectly aligned and oriented relative to camera 0. Therefore, the position of camera 1 is only governed by its relative orientation with respect to camera 0 around the 𝑥 axis, whose magnitude is defined by θ_x, referred to as the stereo-angle. The other two orientations θ_y and θ_z, representing the relative orientation of camera 1 with respect to camera 0 around the 𝑦 and 𝑧 axes, respectively, are set to zero. Consequently, the distance T_x of camera 1 relative to camera 0 along the 𝑥 axis is zero. The distances T_y and T_z of camera 1 relative to camera 0, along the 𝑦 and 𝑧 axes respectively, depend on θ_x and are calculated as

T_y(S_o, θ_x) = S_o sin θ_x    (6.21)

and

T_z(S_o, θ_x) = S_o (1 − cos θ_x),    (6.22)

where S_o is the distance from camera 0 to the specimen along the 𝑧 axis, given by

S_o = f L_FOV / L_CS,    (6.23)

where L_FOV is the dominant length of the field of view, and L_CS is the corresponding length of the camera sensor in millimeters (Jones and Iadicola 2018). A schematic of the position and orientation of cameras 0 and 1 is shown in Figure 6.19 for a generic stereo-vision system. In addition, the intrinsic and extrinsic parameters used in the generation of the synthetic images are presented in Table 6.2. It should be noted that the intrinsic parameters correspond to all four cameras involved: cameras 0, 1, 2, and 3. Moreover, the extrinsic parameters correspond to both stereo-vision systems, in front (cameras 0 and 1) and back (cameras 2 and 3) of the specimen.
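The following sketch evaluates Equations 6.21–6.23 for the perfect-conditions setup described above. Only the focal length f = 50 mm comes from the text; the field-of-view and sensor lengths are illustrative placeholders, not values from Table 6.2.

```python
import numpy as np

def extrinsics(f, L_FOV, L_CS, theta_x_deg):
    """Extrinsic parameters of camera 1 relative to camera 0 under
    the perfect-conditions assumptions (Eqs. 6.21-6.23)."""
    S_o = f * L_FOV / L_CS                 # stand-off distance, Eq. (6.23)
    theta_x = np.radians(theta_x_deg)      # stereo-angle
    T_y = S_o * np.sin(theta_x)            # Eq. (6.21)
    T_z = S_o * (1.0 - np.cos(theta_x))    # Eq. (6.22)
    return {"S_o": S_o, "T_x": 0.0, "T_y": T_y, "T_z": T_z,
            "theta_y": 0.0, "theta_z": 0.0}

# f = 50 mm (from the text); L_FOV, L_CS, and the stereo-angle are
# illustrative values only.
print(extrinsics(f=50.0, L_FOV=100.0, L_CS=10.0, theta_x_deg=20.0))
```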
Design of Experiments

When using stereo-vision systems in real experiments, many factors can influence the accuracy of the final measurements [START_REF] Forsström | Quantifying the effectiveness of patterning, test conditions, and DIC parameters for characterization of plastic strain localization[END_REF]. Evaluating the influence of all of them on the volume reconstruction method would be a hard and long task. A design of experiments (DOE) can be used to evaluate the variation of results under different conditions [START_REF] Mason | Statistical Design and Analysis of Experiments: With Applications to Engineering and Science[END_REF]. Therefore, a DOE is performed by selecting five factors which can influence the final stereo-DIC measurements and, consequently, the volume reconstruction: field of view, stereo-angle, noise, subset size, and step size. All these factors are introduced at some step of the procedure chain presented above. Moreover, three levels are evaluated for each factor to obtain a quadratic response [START_REF] Mason | Statistical Design and Analysis of Experiments: With Applications to Engineering and Science[END_REF]. Table 6.3 presents the different levels defined for each factor. In total, the DOE is composed of 243 virtual experiments.

The field of view of the cameras is usually determined by the specimen ROI and its relative motion and deformation (Jones and Iadicola 2018). Ideally, the ROI should almost fill the field of view to optimize the spatial resolution of the measurements: the better the spatial resolution, the more accurate the measurements. Thus, it is important to estimate the total motion of the specimen to better define the field of view. In practice, the field of view determines the distance at which the cameras should be located from the specimen. In the virtual experiments, the field of view effect is introduced when imposing the FEA mesh onto the speckle pattern image (step 2), by increasing or decreasing the space occupied by the FEA mesh, as shown in Figure 6.20. It should be noted that the ROI in the generation of the virtual experiments differs from the VOI defined previously and considered for the analysis; the ROI here corresponds to a 76 × 76 mm² area centered on the specimen.

The stereo-angle between two cameras depends on the measurement quantities that are most important in a given experiment (Jones and Iadicola 2018). Smaller stereo-angles lead to more accurate in-plane displacements, while larger stereo-angles are often associated with a better out-of-plane displacement accuracy ([START_REF] Reu | Stereo-rig design: Stereo-angle selection -Part 4[END_REF] Balcaen et al. 2017b). Thus, defining the stereo-angle results from a compromise between in-plane and out-of-plane accuracy. In the present study, because the out-of-plane accuracy is important to better reconstruct the out-of-plane deformation fields, angles between 15° and 26° are considered. It should be noted that the stereo-angle values shown in Table 6.3 correspond to the front and back stereo-vision systems, respectively. This difference was introduced to obtain different results between the front and back measurements. A schematic representation of the different levels is presented in Figure 6.20.

In real experiments, the presence of noise in the obtained measurements must always be considered. Noise can come from various sources, such as the quality of the speckle pattern, lighting, and the cleanliness of cameras and sensors, among others. In virtual experiments, most possible sources of noise are avoided. Nevertheless, by using a real image of a speckle pattern, some noise is already present. To go even further, artificial noise can be added to the virtual experiments by changing the gray level of the pixels [START_REF] Rossi | Effect of DIC spatial resolution, noise and interpolation error on identification results with the VFM[END_REF]. Therefore, three levels of artificial noise are defined through the standard deviation of a Gaussian distribution for an 8-bit image (step 3). The first level does not add artificial noise, while the noise distributions of the second and third levels are defined with standard deviations of 5 and 10 gray levels, respectively, approximately 2 % and 4 % of the gray level range.

While the first three factors are related to the acquisition setup, the subset size and step size are related to the post-processing settings of the stereo-DIC software (step 4).
The subset size is important for the software to correlate the motion of a subregion of the ROI throughout the test. In theory, the subset should be large enough to contain sufficient information, such that one subset can be distinguished from all other subsets in the ROI. Generally, it is recommended that a subset contains at least three transitions between dark and light features of the speckle pattern [START_REF] Lava | Assessment of measuring errors in DIC using deformation fields generated by plastic FEA[END_REF][START_REF] Lava | Study of systematic errors in strain fields obtained via DIC using heterogeneous deformation generated by plastic FEA[END_REF]. The step size governs the density of the measurement points obtained by the stereo-DIC software and directly affects the spatial resolution of the measurements. The smaller the step size, the more measurement points are obtained. However, many measurement points also lead to higher computational costs, and are no guarantee of an improved spatial resolution if the subsets largely overlap. Nevertheless, using a smaller step size allows obtaining measurement points closer to the free edges of the specimen, which are important for a better volume reconstruction, as previously established. The three different subset and step sizes considered in the DOE are shown in Figure 6.20.

As already established, the MatchID software is used to generate the virtual experiments and perform the stereo-DIC measurements. However, manually generating 243 virtual experiments would be a long task. For that reason, a Python program was developed to automatically generate the virtual experiments for the different conditions and also perform the stereo-DIC measurements. This was only possible using MatchID in batch mode, which enables performing all necessary tasks through the command line in the background. It was only necessary to build a template for a single case and replicate it for the others.
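A sketch of such a batch driver is given below. The five factors and the 3⁵ = 243 full-factorial structure follow the DOE described above, whereas the numerical levels, template placeholders, file names, and the commented-out command line are hypothetical: the actual MatchID batch-mode invocation and input format are not reproduced here.

```python
import itertools
import subprocess
from pathlib import Path

# Three levels per factor, 3**5 = 243 combinations; the numerical
# levels are illustrative stand-ins for those of Table 6.3.
factors = {
    "fov":    [1.0, 1.25, 1.5],   # field-of-view scaling
    "angle":  [15, 20, 26],       # stereo-angle [deg]
    "noise":  [0, 5, 10],         # Gaussian noise std [gray levels]
    "subset": [15, 21, 27],       # subset size [px]
    "step":   [5, 10, 15],        # step size [px]
}

# Stand-in for the real input template built from a single case.
template = "fov={fov}; angle={angle}; noise={noise}; subset={subset}; step={step}"

for i, levels in enumerate(itertools.product(*factors.values())):
    case = dict(zip(factors, levels))
    workdir = Path(f"doe_case_{i:03d}")
    workdir.mkdir(exist_ok=True)
    (workdir / "case.inp").write_text(template.format(**case))
    # Hypothetical batch-mode call; the real MatchID command line differs.
    # subprocess.run(["matchid", str(workdir / "case.inp")], check=True)
```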
Results

To isolate the influence of introducing virtual experiments in the IMG, the error is computed as the absolute difference between the reconstructions from the FEA and from the virtual experiments, mathematically identical to Equation 6.11. Additionally, the relative error is investigated with respect to the maximum values observed in each component. The results for the displacement field are presented in Figure 6.21a and for the strain field in Figure 6.21b. The presented results analyze the main effects of each factor, allowing a comparison of their influence on the error. Therefore, each level within each factor corresponds to the average result of 81 experiments. Moreover, for simplicity, only the average error of all the points or elements in the VOI is represented, as it is considered that the average absolute error can describe the influence of each factor on average.

It is observed that an increase in the field of view leads to an error increase in all components of displacement and strain. Moreover, the field of view appears to be the factor with the biggest impact out of the five evaluated, as the variations between levels are more pronounced. For example, for u_y, the average error approximately doubles from the first to the third level of the field of view. The increase of noise also leads to an increase in the average error, even though its impact on the displacement components is low and more evident in the strain components. On the other hand, the subset size appears to have a low impact on the error, at least for the range of values considered. To further understand its impact, the range of values could have been extended to higher values. The impact of the stereo-angle is not well understood, and some conflicting results are observed. Perhaps the stereo-angle could be analyzed as a function of the lenses' focal length, because the impact of the stereo-angle on the measurement accuracy is also affected by the choice of lenses. Overall, the error is relatively low for all factors. However, these low variations can also be explained by the range of values selected for each factor. Expanding the range of values could potentially yield different results, but the error is not expected to increase drastically.

Conclusions

This study analyzed the accuracy of a volume reconstruction method, named internal mesh generation. This method reconstructs the volume of a solid starting from surface measurements and is purely based on geometrical considerations. A finite element model of a plane strain tensile test with a 20 mm thickness was used as a reference. The ability of the IMG to reconstruct the deformed volume is good, with a reduced computational effort. However, the accuracy of the reconstructed strain field is not perfect, particularly when a large strain gradient is observed through the thickness. To improve the accuracy of the IMG, two strategies were proposed. The first aims to correct the global volume evolution of the reconstructed geometry and has been shown to improve the accuracy of the reconstructed strain field. The second strategy focuses on correcting the elements' local volume by adjusting the position of the nodes through the thickness. Using the latter strategy, it has been shown that the strain tensor component 𝜀_zz is well reconstructed. Future developments could attempt to improve the reconstructed displacement field by including the information of all surfaces: front, back, and sides.

Using this method with real experiments is the final aim. However, real experiments are susceptible to varying conditions that ultimately affect the final measurements and, consequently, the reconstruction itself. Therefore, the uncertainty of the IMG under different experimental conditions and stereo-DIC settings was estimated using virtual experiments. In particular, a DOE was performed with five factors, each with three levels. The study showed that the field of view is the primary factor influencing the error, but that all the considered factors have a rather low influence.

Globally, a volume reconstruction method that is computationally efficient and able to estimate the displacement field inside a solid from the surface displacement fields within a reasonable error has been evaluated. However, applying the method to real experiments with different materials of varying thicknesses is still lacking. Moreover, investigating the stress reconstruction from the reconstructed strain field is of utmost importance to use this method in the context of parameter identification. In the end, the interest of this method is to use the VFM in the identification of material parameters with improved accuracy.

Framework

Introduction

Conventionally, the elastoplastic behavior of sheet metals is characterized through standard uniaxial tensile tests. From these tests, it is possible to extract the true stress-strain curve, essential to characterize the strain hardening behavior. This is important because many sheet metal forming processes reach plastic strains beyond the point of maximum uniform elongation.
Moreover, to simulate these processes in FEA software, the key requirement is the definition of the true stress-strain curve. Without an accurate characterization of this curve, numerical simulations will not be representative of the real material behavior. However, diffuse necking is observed after the point of maximum uniform elongation in standard uniaxial tensile tests [START_REF] Hill | On discontinuous plastic states, with special reference to localized necking in thin sheets[END_REF][START_REF] Hutchinson | Bifurcation analysis of the onset of necking in an elastic/plastic cylinder under uniaxial tension[END_REF][START_REF] Nichols | Plastic instabilities and uniaxial tensile ductilities[END_REF]. Diffuse necking leads to a non-uniform deformation concentrated within the necking region [START_REF] Tvergaard | Necking in tensile bars with rectangular cross-section[END_REF][START_REF] Tong | An experimental investigation of necking in thin sheets[END_REF], which cannot be measured with a conventional extensometer. Commonly, localized necking is also observed in sheet metals after diffuse necking. While the former phenomenon is associated with a decrease of the specimen width, in the latter a rapid decrease of the thickness is observed. A triaxial stress state also develops within the necking region, eventually leading to fracture of the material [START_REF] Dunand | Hybrid experimental-numerical analysis of basic ductile fracture experiments for sheet metals[END_REF][START_REF] Tardif | Determination of anisotropy and material hardening for aluminum sheet metal[END_REF]. Measuring the deformation with an extensometer will therefore underestimate the resulting strain values after the onset of necking. Several studies have thus proposed solutions to determine the true stress-strain curve beyond this point [START_REF] Tu | Stress-strain curves of metallic materials and post-necking strain hardening characterization: A review[END_REF]. Usually, the hardening behavior is extrapolated from the pre-necking stress-strain curve, but this can lead to large errors.

[START_REF] Bridgman | Studies in Large Plastic Flow and Fracture: With Special Emphasis on the Effects of Hydrostatic Pressure[END_REF] was one of the first studies to tackle the problem of characterizing the post-necking hardening behavior. In this pioneering study, an analytical method based on geometrical parameters of the necking profile was proposed to correct and compensate for necking in a round bar. Although this method has been shown to perform well for round tensile specimens, it requires significant experimental work and cannot be applied to specimens with rectangular cross-sections. Moreover, round specimens are not practical for thin sheet applications. Later, [START_REF] Zhang | Determining material true stress-strain curve from tensile specimens with rectangular cross-section[END_REF] proposed a method that can be applied to specimens with a rectangular cross-section by determining a relationship between the area and thickness reduction of the cross-section. A drawback of this method is the required use of small specimens of thin sheets. Also, [START_REF] Mirone | A new model for the elastoplastic characterization and the stress-strain determination on the necking section of a tensile specimen[END_REF] proposed a material-independent method that characterizes the necking behavior with reduced errors and much less experimental effort compared to the method of Bridgman.
The advance of full-field measurement techniques combined with inverse methodologies has allowed the development of complete solutions that consider the stress and strain of all measured material points in the identification procedure. Most proposed solutions use a finite element based inverse method for the identification of the post-necking hardening behavior [START_REF] Koc | Computer-aided identification of the yield curve of a sheet metal after onset of necking[END_REF][START_REF] Tao | An iterative procedure for determining effective stress-strain curves of sheet metals[END_REF][START_REF] Dunand | Hybrid experimental-numerical analysis of basic ductile fracture experiments for sheet metals[END_REF][START_REF] Pottier | Contribution of heterogeneous strain field measurements and boundary conditions modelling in inverse identification of material parameters[END_REF][START_REF] Tardif | Determination of anisotropy and material hardening for aluminum sheet metal[END_REF]. Nevertheless, building a finite element model that accurately predicts the occurrence of necking is a hard task, for which several aspects need to be considered. The finite element model should be capable of predicting the location of diffuse necking, which is often a difficulty in real experiments as well. Special attention should also be paid to building a finite element mesh that avoids highly distorted elements, due to the large strains involved in post-necking [START_REF] Kajberg | Characterisation of materials subjected to large strains by inverse modelling based on in-plane displacement fields[END_REF]. Employing information about the test's actual boundary conditions in the finite element model can lead to more accurate results [START_REF] Denys | Multi-DIC setup for the identification of a 3D anisotropic yield surface of thick high strength steel using a double perforated specimen[END_REF][START_REF] Kacem | Influence of experimental boundary conditions on the calibration of a ductile fracture criterion[END_REF], but obtaining this information is not straightforward. Another often-mentioned drawback of such methodologies is their high computational cost.

More recently, alternatives to finite element based methodologies are becoming more popular by avoiding these shortcomings and directly using the measured strain fields. Rossi et al. (2018a) proposed an identification method, known as the linear stress-strain curve identification method (LSSCI), that uses a piecewise linear function to describe the stress-strain curve even beyond necking. This method has been successfully applied, provided that good full-field measurements are obtained up to the free edges of the specimen. Nevertheless, to fully characterize the elastoplastic behavior (strain hardening and anisotropy) of sheet metals, a decoupled identification approach is required: the LSSCI method is used to characterize the hardening behavior, and the VFM is used to identify the parameters of a yield criterion [START_REF] Lattanzi | Inverse identification strategies for the characterization of transformation-based anisotropic plasticity models with the non-linear VFM[END_REF]. [START_REF] Coppieters | Identification of the postnecking hardening behaviour of sheet metal by comparison of the internal and external work in the necking zone[END_REF] proposed an energy method, based on the observation that under quasi-static conditions the internal work is equal to the external work, provided the hardening behavior is correctly described.
Using real experiments, it has been shown that this method is computationally efficient, without the need to perform any FEA simulations [START_REF] Coppieters | Identification of post-necking hardening phenomena in ductile sheet metal[END_REF]. This method is not drastically different from the VFM and can be seen as a special case of it. The VFM is based on the PVW and employs virtual displacement fields instead of actual displacement fields to balance the internal virtual work with the external virtual work [START_REF] Pierron | The Virtual Fields Method: Extracting Constitutive Mechanical Parameters from Full-field Deformation Measurements[END_REF]. It is one of the most prominent inverse methodologies and has already been applied to various applications (Avril et al. 2008a;[START_REF] Notta-Cuvier | An innovative procedure for characterising a coupled elastoplastic damage model of behaviour using the virtual fields method[END_REF][START_REF] Jones | Investigation of assumptions and approximations in the virtual fields method for a viscoplastic material model[END_REF][START_REF] Martins | Calibration of anisotropic plasticity models using a biaxial test and the virtual fields method[END_REF][START_REF] Marek | Experimental validation of the sensitivity-based virtual fields for identification of anisotropic plasticity models[END_REF][START_REF] Fu | Inertia-based identification of elastic anisotropic properties for materials undergoing dynamic loadings using the virtual fields method and heterogeneous impact tests[END_REF]Rossi et al. 2022a). In particular, it has already been successfully applied to the characterization of the strain hardening behavior of thin sheets up to large deformations (Rossi et al. 2016b). However, some studies have highlighted that after diffuse necking, a deviation between internal and external virtual works is observed [START_REF] Coppieters | On complete solutions for the problem of diffuse necking in sheet metal[END_REF][START_REF] Martins | Calibration of a modified Johnson-Cook model using the virtual fields method and a heterogeneous thermo-mechanical tensile test[END_REF], violating the validity of the VFM and leading to an inaccurate description of the hardening behavior. The triaxial stress state that develops in the necking region can explain the observed deviation [START_REF] Kim | Characterization of the post-necking strain hardening behavior using the virtual fields method[END_REF]. This rich information about the necking behavior is not used in the VFM because of the plane stress limitations imposed by the lack of available 3D full-field measurements. In fact, in sheet metal characterization, the VFM is limited to use in combination with surface measurements, as techniques are not yet available to capture the 3D behavior inside the material. To circumvent this limitation, the IMG method investigated in Chapter 6 might provide the necessary 3D displacement fields for using the VFM in a 3D framework. However, this topic has not been extensively investigated and, to the best of the author's knowledge, only one study has extended the use of the VFM to a 3D framework (Rossi and Pierron 2012a), assuming no plane stress or plane strain conditions. Considering the number of studies already using the VFM in the identification of material parameters, its interest and potential are undeniable. Yet, a better understanding of the consequences of its limitations is still lacking, particularly regarding the use of a 2D instead of a 3D framework.
Although several problems might exist in the latter, such as the lack of volumetric full-field measurements, it might encourage the scientific community to develop new methodologies. Therefore, this study investigates the application of the VFM in 2D and 3D frameworks, respectively named 2D-VFM and 3D-VFM, to the identification of a hardening law up to large deformations. Even though only simulated experiments are considered, the application of the VFM is investigated for materials of varying thicknesses to understand the influence of thickness. The following section introduces the formulation of the VFM general to any 3D application, as well as its numerical implementation. Then, the simulated tests and virtual materials used in this study are presented and analyzed. A section dedicated to the definition of virtual fields follows, and finally, the results of an analysis of predicted load and identifications of a hardening law are presented.

Virtual Fields Method

Let us consider, in the Euclidean space, a body ℬ subject to a general deformation process at a time instant 𝑡. According to the finite deformation theory [START_REF] Dunne | Introduction to Computational Plasticity[END_REF], the position of its particles in the undeformed or reference configuration ℬ₀ (Lagrangian description) is given by 𝐗, while in the deformed or current configuration ℬₜ (Eulerian description) it is given by 𝐱. The motion of each material point can be described by a function 𝐱 = 𝜒(𝐗, 𝑡), mapping the position of each particle from the undeformed to the deformed configuration. The displacement field 𝐮 = (𝑢x, 𝑢y, 𝑢z) is defined as the difference between the deformed and undeformed configurations as

$$\mathbf{u}(\mathbf{X}, t) = \chi(\mathbf{X}, t) - \mathbf{X} = \mathbf{x} - \mathbf{X}. \tag{7.1}$$

Spatial derivatives can be used to calculate the deformation gradient 𝐅 as

$$\mathbf{F} = \frac{\partial \mathbf{x}}{\partial \mathbf{X}} = \frac{\partial \mathbf{u}}{\partial \mathbf{X}} + \mathbf{I}, \tag{7.2}$$

where 𝐈 is the second order identity tensor. The deformation gradient 𝐅 provides a complete description of deformation, including stretch and rigid body rotations. The latter do not contribute to shape or size change, and thus do not contribute to the strain tensor. Therefore, it is necessary to separate the stretch from the rigid body rotations in the deformation gradient. Using the theorem of polar decomposition [START_REF] De Souza Neto | Computational Methods for Plasticity: Theory and Applications[END_REF], 𝐅 can be decomposed as

$$\mathbf{F} = \mathbf{R}\mathbf{U} = \mathbf{V}\mathbf{R}, \tag{7.3}$$

where 𝐔 is the right stretch tensor, 𝐕 is the left stretch tensor, and 𝐑 is the orthogonal rotation tensor. A consequence of this mathematical description is that, for every material point, a local coordinate system (1, 2, 3), also known as the corotational coordinate system, rotates during deformation, as schematically represented in Figure 7.1. The VFM is based on the PVW, which represents an integral form of mechanical equilibrium, written either in the undeformed or deformed configuration. Using the deformed configuration, neglecting body forces, and assuming quasi-static conditions, the PVW can be written as

$$\int_{\mathcal{B}_t} \boldsymbol{\sigma} : \frac{\partial \mathbf{u}^*}{\partial \mathbf{x}} \, d\mathcal{B}_t + \int_{\partial \mathcal{B}_t} (\boldsymbol{\sigma}\mathbf{n}) \cdot \mathbf{u}^* \, d\partial\mathcal{B}_t = 0, \tag{7.4}$$

where 𝝈 is the Cauchy stress tensor, 𝐮* is the virtual displacement field, and 𝐧 the outward vector of ∂ℬₜ. Alternatively, the PVW can be formulated in the undeformed configuration as

$$\underbrace{\int_{\mathcal{B}_0} \mathbf{P} : \frac{\partial \mathbf{U}^*}{\partial \mathbf{X}} \, d\mathcal{B}_0}_{W_{\mathrm{int}}} + \underbrace{\int_{\partial \mathcal{B}_0} (\mathbf{P}\mathbf{N}) \cdot \mathbf{U}^* \, d\partial\mathcal{B}_0}_{W_{\mathrm{ext}}} = 0, \tag{7.5}$$

where 𝐏 is the first Piola-Kirchhoff stress tensor, the virtual displacement field is now represented by 𝐔*, and 𝐍 is the outward vector of ∂ℬ₀.
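To make the kinematic steps above concrete, the following minimal NumPy/SciPy sketch maps Equations (7.2) and (7.3) to code and returns the logarithmic strain introduced later in Equation (7.10). The function name and the assumption of one 3×3 displacement gradient per call are illustrative only; this is not the thesis implementation.

```python
# Minimal sketch of Eqs. (7.2), (7.3) and (7.10); illustrative names only.
import numpy as np
from scipy.linalg import polar, logm

def log_strain_from_grad(grad_u):
    """Logarithmic strain tensor from a 3x3 displacement gradient dU/dX."""
    F = grad_u + np.eye(3)          # Eq. (7.2): F = dU/dX + I
    R, V = polar(F, side="left")    # Eq. (7.3): F = V R (left polar decomposition)
    return logm(V)                  # Eq. (7.10): strain = ln V, free of rotations

# Example: 10 % uniaxial stretch along x, no rotation
grad_u = np.diag([0.10, -0.03, -0.03])
print(np.round(log_strain_from_grad(grad_u), 4))
```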
The PVW written in the undeformed configuration is more suitable for practical implementation, as 𝐔 * does not need to be updated with virtual boundary conditions [START_REF] Marek | Extension of the sensitivity-based virtual fields to large deformation anisotropic plasticity[END_REF][START_REF] Lattanzi | Inverse identification strategies for the characterization of transformation-based anisotropic plasticity models with the non-linear VFM[END_REF]). The first Piola-Kirchhoff stress tensor 𝐏 relates forces applied in the deformed configuration to volumes in the undeformed configuration, and is defined as (7.6) where det(𝐅) is the determinant of 𝐅 and 𝐅 -T the transpose of its inverse. The Cauchy stress tensor 𝝈 is calculated from the strain tensor 𝜺 by means of a constitutive model combined with a given set of material parameters 𝝃. 𝐏 = det(𝐅)𝝈𝐅 -T , According to Equation 7.5, the PVW states that, in equilibrium and for a continuous virtual field, the internal virtual work 𝑊 int is equal to the external virtual work 𝑊 ext in absolute value, which must be satisfied for any time instant. Let us now consider that the body ℬ can be discretized by a given number of material points (similarly to a finite element mesh), then 𝑊 int can be approximated by a discrete sum as 𝑊 int (𝝃, 𝐔 * , 𝑡) = 𝑛 p ∑ 𝑖=1 (𝐏 𝑘 (𝝃, 𝑡) ∶ ∂𝐔 * 𝑖 ∂𝐗 𝑖 𝑉 𝑖 ) , (7.7) where 𝑛 p represents the number of material points and 𝑉 𝑖 is the volume covered by 𝑖-th material point. Due to the formulation of the PVW in the undeformed configuration, 𝑉 𝑖 is constant for every time instant. Additionally, the external virtual work 𝑊 ext can be simplified by selecting a virtual field that is constant along the boundary ∂ℬ 0 as 𝑊 ext (𝐔 * , 𝑡) = ∫ ∂ℬ 0 (𝐏𝐍) ⋅ 𝐔 * 𝑑∂ℬ 0 = 𝐔 * ∫ ∂ℬ 0 (𝐏𝐍) 𝑑∂ℬ 0 = 𝐔 * ⋅ 𝐋(𝑡), (7.8) where 𝐋 is the resultant of the load acting on ∂ℬ 0 , recorded during the test. Because the load distribution on ∂ℬ 0 is usually unknown, this simplification is crucial for the VFM. Finally, the VFM is used to identify the best set of material parameters by minimizing the difference between the internal and external virtual work, through an objective function 𝜑 written in a least-squares formulation as 𝜑(𝝃) = 𝑛 t ∑ 𝑡=1 𝑛 v ∑ 𝑗=1 [𝑊 int (𝝃, 𝐔 * 𝑗 , 𝑡) -𝑊 ext (𝐔 * 𝑗 , 𝑡) ] 2 , (7.9) where 𝑛 v is the number of virtual fields and 𝑛 t the total number of time instants. Various time instants and different virtual fields can be used to enrich the objective function. Usually, an optimization algorithm is used to iteratively adjust the material parameters. Stress Reconstruction from the Strain Fields As previously established, the VFM requires, at each iteration, the calculation of 𝐏, which is effectively dependent on the strain tensor. Therefore, it is convenient to use a measure of strain that is independent of rigid body rotations and dependent on stretch alone, given by 𝜺 = ln 𝐕, (7.10) where 𝐕 is defined in the global coordinate system (𝑖, 𝑗, 𝑘) in the undeformed configuration. However, constitutive equations are generally defined in the material coordinate system (𝜉, 𝜂, 𝜁), aligned with material texture. 
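Before moving to the stress reconstruction details, the discrete virtual works of Equations (7.7)-(7.9) can be sketched in the array-based style described later in the numerical implementation. The array shapes and function names below are assumptions for illustration, not the thesis code.

```python
import numpy as np

def w_int(P, dUdX, V):
    """Eq. (7.7): sum over material points of (P : dU*/dX) * volume.
    P, dUdX: (n_p, 3, 3) arrays; V: (n_p,) undeformed volumes."""
    return np.einsum("pij,pij,p->", P, dUdX, V)

def w_ext(U_star_boundary, L):
    """Eq. (7.8): constant boundary virtual field dotted with the load resultant."""
    return np.dot(U_star_boundary, L)

def cost(P_per_time, dUdX_fields, V, Ub_fields, L_per_time):
    """Eq. (7.9): least-squares cost over n_t time instants and n_v virtual fields."""
    return sum((w_int(P, g, V) - w_ext(ub, L)) ** 2
               for P, L in zip(P_per_time, L_per_time)
               for g, ub in zip(dUdX_fields, Ub_fields))
```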
Thus, the calculated strain tensor 𝜺₍ᵢ,ⱼ,ₖ₎ must be projected to the material coordinate system (𝜉, 𝜂, 𝜁) according to

$$\boldsymbol{\varepsilon}_{(\xi,\eta,\zeta)} = \mathbf{R}^{\mathrm{T}}_{\mathrm{mat}} \left( \mathbf{R}^{\mathrm{T}} \boldsymbol{\varepsilon}_{(i,j,k)} \mathbf{R} \right) \mathbf{R}_{\mathrm{mat}}, \tag{7.11}$$

where $\mathbf{R}_{\mathrm{mat}}$ is the material rotation tensor given by

$$\mathbf{R}_{\mathrm{mat}} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}, \tag{7.12}$$

which is responsible for rotating the local coordinate system to the material one as a function of the material orientation 𝜃, constant throughout deformation. The projected strain tensor 𝜺₍𝜉,𝜂,𝜁₎ is used to reconstruct the stress tensor 𝝈₍𝜉,𝜂,𝜁₎ via an implicit integration algorithm, similar to the ones found in FEA software. However, many time instants should be used to guarantee the stability and convergence of the algorithm. Alternatively, other algorithms could be used, such as the direct method proposed by [START_REF] Rossi | An approximated computational method for fast stress reconstruction in large strain plasticity[END_REF], which provides a computationally faster reconstruction but cannot be incorporated in FEA software. Afterwards, the stress tensor 𝝈₍𝜉,𝜂,𝜁₎ defined in the material coordinate system (𝜉, 𝜂, 𝜁) should be projected back to the global coordinate system (𝑖, 𝑗, 𝑘) as

$$\boldsymbol{\sigma}_{(i,j,k)} = \mathbf{R}_{\mathrm{mat}} \left( \mathbf{R} \boldsymbol{\sigma}_{(\xi,\eta,\zeta)} \mathbf{R}^{\mathrm{T}} \right) \mathbf{R}^{\mathrm{T}}_{\mathrm{mat}}, \tag{7.13}$$

and used in Equation 7.5 to calculate the first Piola-Kirchhoff stress tensor 𝐏 defined in the global coordinate system (𝑖, 𝑗, 𝑘) in the undeformed configuration. This formulation is general to any 3D application. However, the VFM is commonly used with full-field measurements limited to the surface of the specimen. These measurements only provide in-plane kinematic quantities, and often plane stress conditions are assumed, provided the specimen's thickness is small relative to its width and height. Thus, the application is considered 2D, and the out-of-plane stress components are negligible (𝜎zz = 𝜎xz = 𝜎yz = 0), but the deformation gradient 𝐅 is in reality fully 3D. As a result, the calculation of 𝐏 requires additional assumptions [START_REF] Marek | Extension of the sensitivity-based virtual fields to large deformation anisotropic plasticity[END_REF]. Assuming a thin specimen, the out-of-plane shear components can be neglected ($F_{xz} = F_{yz} = F_{zx} = F_{zy} = 0$), and the determinant of 𝐅 is redefined as

$$\det(\mathbf{F}) = F_{zz}\,(F_{xx} F_{yy} - F_{xy} F_{yx}), \tag{7.14}$$

where $F_{xx}$, $F_{yy}$, $F_{xy}$, and $F_{yx}$ are the in-plane components of 𝐅, and $F_{zz}$ is the out-of-plane component. The in-plane components are obtained from the displacement field. However, $F_{zz}$ is unknown, but it can be approximated through the out-of-plane strain component at each time instant, $\varepsilon_{zz}(t)$, as

$$F_{zz}(t) = 1 + \varepsilon_{zz}(t) = 1 + \int_0^t \dot{\varepsilon}_{zz} \, dt. \tag{7.15}$$

Finally, assuming isotropic elastic behavior described by Hooke's law and the isochoric behavior of plasticity, $\dot{\varepsilon}_{zz}$ can be determined as

$$\dot{\varepsilon}_{zz} = -\frac{\nu}{1-\nu} \left( \dot{\varepsilon}^{\mathrm{e}}_{xx} + \dot{\varepsilon}^{\mathrm{e}}_{yy} \right) - \left( \dot{\varepsilon}^{\mathrm{p}}_{xx} + \dot{\varepsilon}^{\mathrm{p}}_{yy} \right). \tag{7.16}$$

Numerical Implementation

The VFM is numerically implemented in a code fully developed in the Python programming language. In terms of computational efficiency, the code is developed with array-based calculations, instead of using loops over the material points, as was the case in the FEMU from Chapter 5. Using arrays to directly perform the operations, such as calculating the logarithmic strain of all material points, largely speeds up the computation and reduces the performance gap relative to an implementation in Fortran. Compared to the implementation of Chapter 5, the small loss in computational speed is compensated by the gain in simplicity.
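As a small illustration of the plane-stress approximation of Equations (7.14)-(7.16) above, and of the array-based style just mentioned, the following sketch integrates the out-of-plane stretch over the test. The per-increment elastic and plastic strain inputs are assumptions for illustration, and Equation (7.16) itself was reconstructed here from Hooke's law and plastic incompressibility.

```python
import numpy as np

def fzz_plane_stress(de_e_xx, de_e_yy, de_p_xx, de_p_yy, nu):
    """Out-of-plane stretch F_zz(t) under plane stress.
    Eq. (7.16): elastic part from Hooke's law, plastic part from incompressibility.
    Eq. (7.15): time integration, here a cumulative sum over increments.
    Inputs are 1D arrays of in-plane strain increments per time step."""
    de_zz = -nu / (1.0 - nu) * (de_e_xx + de_e_yy) - (de_p_xx + de_p_yy)
    return 1.0 + np.cumsum(de_zz)   # F_zz(t) = 1 + eps_zz(t)
```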
Moreover, the array-based calculation easily allows the program to perform the operations in either a 2D or 3D framework. As schematically presented in Figure 7.2, the VFM requires as input the geometry (coordinates of the undeformed configuration), the displacement fields, the evolution of load, and the constitutive model. It should be noted that this implementation uses a finite element mesh and shape functions similar to those of FEA software (such as Abaqus/Standard). In the pre-processing stage, the deformation gradient is calculated, and the logarithmic strain tensor is obtained from the polar decomposition of the deformation gradient. The latter operation is performed using a function available in the NumPy library (NumPy 2022). The logarithmic strain tensor calculated in the global coordinate system is rotated to the material coordinate system in the undeformed configuration. Then, the manual virtual fields are generated. The identification procedure uses the gradient-free NM algorithm (see Section 5.3) included in the SciPy library (SciPy 2022a). This algorithm was chosen, in part, because it allows more control over the optimization procedure compared to the implementation of the LM algorithm. For example, using the NM algorithm it is possible to call a function at the end of each iteration, which is not available for all algorithms of the SciPy library. This functionality is important to allow updates during the optimization procedure. Inside the identification procedure, the first and main operation is the calculation of the Cauchy stress tensor. Specifically, UMMDp is integrated in the VFM to reconstruct the stress field through the backward-Euler scheme combined with an elastic predictor and plastic corrector (see Section 2.4). The major advantages of using UMMDp are that it guarantees that the same constitutive model is used both in the identification procedure and in future FEA simulations, and that it makes it easy to identify different constitutive models. A driver code for UMMDp is developed that allows communication with the main program through the F2PY framework (NumPy 2022). This operation requires the most computational effort, because an array-based calculation is impossible here. The stress tensor is history dependent, so the calculation is performed iteratively for each material point and time instant. Afterwards, the Cauchy stress tensor is rotated to the global coordinate system, and the first Piola-Kirchhoff stress tensor is calculated. The internal and external virtual works are then calculated and, finally, the objective function. Lastly, the stopping criteria are evaluated, and the program either ends or updates the material parameters and continues the optimization procedure.

Simulated Tensile Tests

Specimen Geometry and Finite Element Model

This study only considers numerical simulations of a uniaxial tensile test. A specimen with varying cross-section is selected ([START_REF] Kim | Characterization of the post-necking strain hardening behavior using the virtual fields method[END_REF]; Martins et al. 2018b) and its geometry is presented in Figure 7.3. Relative to a classic uniaxial tensile specimen, this geometry has the advantage of generating a heterogeneous strain field over a large area, providing richer mechanical information for the identification of material parameters (see Chapter 3). Moreover, the varying cross-section will trigger necking in the center region of the specimen, facilitating the setup of acquisition cameras during real experiments.
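Returning to the identification loop described above, a minimal sketch of the Nelder-Mead driver with a per-iteration callback (as exposed by SciPy) could look as follows. The objective here is a stand-in quadratic so the snippet runs on its own, and all numeric values are arbitrary placeholders rather than the parameters of this study; the real objective would call UMMDp for the stress reconstruction.

```python
import numpy as np
from scipy.optimize import minimize

REF = np.array([1000.0, 300.0, 0.2])   # arbitrary dummy values, NOT Table 7.1

def phi(xi):
    """Stand-in for the virtual-work cost of Eq. (7.9)."""
    return float(np.sum((np.asarray(xi) - REF) ** 2))

def on_iteration(xi):
    # Called once per NM iteration; useful for logging or state updates.
    pass

result = minimize(phi, x0=1.5 * REF, method="Nelder-Mead",
                  callback=on_iteration,
                  options={"maxiter": 500, "xatol": 1e-8})
print(result.x)
```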
The test is simulated with Abaqus/Standard (Dassault Systèmes 2019), considering a fully 3D model of the test, without assuming symmetry conditions. The finite element mesh is built using 8-node brick elements with reduced integration (C3D8R) and considering three layers through the thickness. Ideally, more layers could have been used to describe the deformations along the thickness more accurately. However, it was found that increasing the number of layers also largely increased the computational time of the numerical simulations and post-processing procedures. The finite element mesh was generated with an in-plane global element size of 0.2 mm, resulting in a total of 197 955 elements. Nevertheless, the center region of the specimen is its richest part, thus a VOI (shaded region in Figure 7.3) is defined, comprising 97 307 elements. The in-plane mesh size was defined after performing a convergence study evaluating the maximum equivalent plastic strain, considering only three layers of elements through the thickness. The selected mesh size provided the best balance between an accurate description of the equivalent plastic strain and the computational effort. A Cartesian coordinate system is defined with the 𝑥 axis along 𝑑w, the 𝑦 axis along 𝑑h, and the 𝑧 axis along 𝑑t. The constants 𝑑w and 𝑑h represent, respectively, half of the width and length of the VOI, while 𝑑t is the material thickness. The origin of the coordinate system is located on the front surface of the specimen (𝑥 = 𝑦 = 𝑧 = 0), while 𝑧 = 𝑑t defines the back surface (see Figure 7.3). The numerical simulations are displacement-driven, with an imposed constant vertical displacement 𝑢y of 5 mm applied across the top edge, which was additionally fixed along the 𝑥 and 𝑧 axes to simulate the effect of the grip of a testing machine. The bottom edge of the model was fixed. The numerical simulations are performed at a constant time step, resulting in 200 time instants. Finally, a large strain formulation is adopted, and the constitutive model is implemented using UMMDp (see Section 2.4).

Virtual Materials

The materials considered in this study are based on the real characteristics of a DP780 dual-phase steel with a nominal thickness of 1.5 mm. This material was experimentally tested using quasi-static classical uniaxial tensile tests at different material orientations. The results showed that the material exhibited anisotropy, and the recorded longitudinal strain just before rupture was approximately 0.65. However, it should be emphasized that the goal of this study is not to characterize this material, but only to use it as a basis for the virtual materials considered throughout. Many virtual materials of varying thicknesses are used in this study, all sharing the same description of the elastoplastic behavior. The elastic isotropic behavior of the materials is described by Hooke's law, and the elastoplastic behavior is described by the von Mises isotropic yield criterion and the Swift isotropic hardening law (see Sections 2.2 and 2.3). The latter model was selected for its simplicity and wide use in the scientific community. The elastic properties of the materials and the parameters of Swift's law are provided in Table 7.1. The parameters of Swift's law were obtained by fitting the experimental stress-strain curve of the DP780. Additionally, in Figure 7.4, the hardening behavior according to Swift's law is presented up to an equivalent plastic strain of 0.75.
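For illustration, Swift's law expresses the yield stress as a power function of the equivalent plastic strain, with the offset recovering the initial yield stress. A minimal sketch, using placeholder parameter values rather than the Table 7.1 values (which are not reproduced here), could read:

```python
import numpy as np

def swift_stress(eps_p, K, sigma0, n):
    """Swift hardening law: yield stress vs. equivalent plastic strain.
    eps0 is chosen so that the stress equals sigma0 at eps_p = 0."""
    eps0 = (sigma0 / K) ** (1.0 / n)
    return K * (eps0 + eps_p) ** n

# Placeholder parameters (NOT the identified DP780 values), up to 0.75 strain
eps_p = np.linspace(0.0, 0.75, 100)
stress = swift_stress(eps_p, K=1000.0, sigma0=400.0, n=0.2)
```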
The materials are defined with thicknesses of 1.5, 2, 2.5, 3, 4, 6, 8, and 10 mm. The vertical displacement of 5 mm is used in all numerical simulations to allow a direct comparison between the materials. As a result, only the numerical model of the 1.5 mm thick material is modeled to reach rupture at approximately the same strain magnitudes as the real material. The equivalent plastic strain map for this material, on the surface (𝑧 = 0) and on the middle plane (𝑧 = 𝑑t), is shown in Figure 7.5c at the last time instant, corresponding to 𝑢y = 5 mm. A large concentration of deformation is observed in the central region, indicating a severe localization of necking just before rupture. At the time instant of 𝑢y = 4 mm (see Figure 7.5a), it can be observed that the material was already in diffuse necking, though the values of equivalent plastic strain are much lower than at the last time instant. This difference shows that large deformations develop in the later stages of the test. On the other hand, the tests with materials of large thicknesses (such as 8 mm or 10 mm) do not even reach localized necking; necking is increasingly delayed with increasing material thickness. This is observed in the equivalent plastic strain maps for the 10 mm thick material presented in Figure 7.5b. The values of equivalent plastic strain at the last time instant are only comparable to the ones observed at 𝑢y = 4 mm for the 1.5 mm thick material. The evolution of load as a function of the vertical displacement is also shown in Figure 7.6 for all materials. As expected, the load increases proportionally to the thickness of the materials.

Extraction of the Displacement Fields

The simulated tests are modeled in 3D, but the 2D-VFM uses information from the surface of the specimen, thus only in-plane kinematic fields are considered. For the 3D-VFM, the displacement field is extracted from the nodes of the finite element mesh in the VOI of the simulated tests. This mesh is already regularized, thus it can be directly used as input to the 3D-VFM. For the 2D-VFM, the in-plane displacement field is extracted from the nodes on the surface of the 3D test, and a 2D finite element mesh is generated by rearranging the elements' connectivity.

Definition of the Virtual Fields

The main area of development and discussion around the VFM is probably the selection of virtual fields. Because their definition assigns weights to material points, it directly affects the objective function and can lead to different identification results. Nevertheless, one point on which all methodologies agree is that the virtual fields must be kinematically admissible, continuous, and differentiable over the entire domain [START_REF] Pierron | The Virtual Fields Method: Extracting Constitutive Mechanical Parameters from Full-field Deformation Measurements[END_REF]. In the framework of plasticity, two strategies can be distinguished according to the way virtual fields are generated. The most common strategy is based on manually defined virtual fields. This strategy usually employs polynomial or harmonic functions defined by the user and adapted to the geometry and boundary conditions of the application (Rossi et al. 2016b;[START_REF] Fu | A method for the simultaneous identification of anisotropic yield and hardening constitutive parameters for sheet metal forming[END_REF][START_REF] Martins | Calibration of a modified Johnson-Cook model using the virtual fields method and a heterogeneous thermo-mechanical tensile test[END_REF]).
For this reason, this strategy depends highly on the user's experience and understanding of the application, even though it is simpler from an implementation point of view. The other strategy is based on the automatic generation of virtual fields and was developed to overcome the problems of the first. A major advantage is that it reduces the user dependence of the first strategy, making the method more automatic and suited for integration in parameter identification software. A first implementation, named optimized virtual fields, relied on the minimization of noise effects in the material parameters [START_REF] Pierron | Extension of the virtual fields method to elasto-plastic material identification with cyclic loads and kinematic hardening[END_REF]. Later, [START_REF] Marek | Sensitivity-based virtual fields for the non-linear virtual fields method[END_REF] proposed the sensitivity-based virtual fields. This implementation relies on the sensitivity of the stress field to each material parameter, and it has been shown to outperform the manual virtual fields, particularly when dealing with experimental data [START_REF] Marek | Experimental validation of the sensitivity-based virtual fields for identification of anisotropic plasticity models[END_REF][START_REF] Lattanzi | Inverse identification strategies for the characterization of transformation-based anisotropic plasticity models with the non-linear VFM[END_REF]). Nevertheless, it has also been shown that increasing the number of manual virtual fields can lead to interesting results (Martins et al. 2020a). Considering that the 3D-VFM is still less explored than the 2D-VFM, only manually defined virtual fields are employed in this study, because of their simplicity and ease of implementation. The generic form of the virtual displacement fields 𝐔*, for the general case of 3D, is given by

$$\mathbf{U}^* = \begin{bmatrix} U^*_x \\ U^*_y \\ U^*_z \end{bmatrix}, \tag{7.17}$$

where $\partial U^*_x/\partial Z = \partial U^*_z/\partial X = \partial U^*_y/\partial Z = \partial U^*_z/\partial Y = \partial U^*_z/\partial Z = 0$ in the 2D-VFM. Three manual virtual fields are defined in this study, and their components are presented in Table 7.2. The first virtual field 𝐔*1 is commonly used in various applications of the VFM because it includes the load's contribution. The external virtual work is proportional to the load and is balanced by the internal virtual work calculated from the stress component yy, aligned with the loading direction. In theory, it is the only virtual field that gives a non-zero value in the calculation of the internal virtual work. Moreover, the resulting gradient presents a constant spatial distribution across the whole VOI. The second and third virtual fields, denoted by 𝐔*2 and 𝐔*3, respectively, present non-constant spatial distributions, with different weights assigned to the stress components in the calculation of the internal virtual work. Both are null on the boundary of the applied load, which results in a zero external virtual work. The second virtual field was adopted from other studies using the 2D-VFM, where its use has proven successful (Rossi et al. 2016b;[START_REF] Marek | Extension of the sensitivity-based virtual fields to large deformation anisotropic plasticity[END_REF][START_REF] Lattanzi | Inverse identification strategies for the characterization of transformation-based anisotropic plasticity models with the non-linear VFM[END_REF]). It only considers the in-plane components of the stress tensor, neglecting the out-of-plane components.
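As a concrete illustration of the first of these fields, the sketch below generates 𝐔*1 and its constant gradient; the components of 𝐔*2 and 𝐔*3 come from Table 7.2, which is not reproduced here. The form 𝐔*1 = (0, 𝑌/𝑑h, 0) is inferred from the description above (unit value on the loaded boundary, constant gradient weighting only the yy stress component), so it is an assumption rather than the table's exact definition.

```python
import numpy as np

def virtual_field_1(X, d_h):
    """Assumed first manual virtual field: U*_1 = (0, Y/d_h, 0).
    X: (n_p, 3) float array of undeformed coordinates.
    Returns the field and its constant gradient, whose only
    non-zero entry weights P_yy in the internal virtual work."""
    U = np.zeros_like(X)
    U[:, 1] = X[:, 1] / d_h            # equals 1 on the loaded boundary (Y = d_h)
    dUdX = np.zeros((X.shape[0], 3, 3))
    dUdX[:, 1, 1] = 1.0 / d_h          # constant over the whole VOI
    return U, dUdX
```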
Also, 𝐔*2 does not depend on 𝑧, leading to a constant spatial distribution across the thickness. Lastly, 𝐔*3 assigns weight to all the stress tensor components and also generates a non-constant spatial distribution across the thickness. The third virtual field can be considered an extension of the second to the 3D-VFM, because it reduces to the second when 𝑧 = 0. However, it should be noted that this virtual field would not be useful if the origin of the coordinate system were positioned in the middle of the specimen, because the kinematic fields are symmetrical across the thickness. This would lead to a combination of symmetrical stress fields with symmetrical virtual fields across the thickness, resulting in a null internal virtual work. This is only an issue because the present study considers simulated tests; in real experiments, unsymmetrical kinematic fields across the thickness would be expected. A graphical representation of the displacement maps of the three virtual fields is shown in Figure 7.7. The presented maps of 𝐔*1 and 𝐔*2 are representative of the whole VOI, while for 𝐔*3 the maps at the front (𝑧 = 0) and back (𝑧 = 𝑑t) surfaces are presented.

Results

Analysis of the Predicted Load

Because this study only considers numerical data, the reference parameters and reference test data are known. Taking advantage of that, it is possible to analyze the predicted load using the 2D-VFM and the 3D-VFM, and compare it with the reference load obtained from the numerical simulations. In particular, the evolution of load can be extracted from the numerical simulations and used in the external virtual work term, as shown in Equation 7.8, provided a constant virtual field over the boundary is used. Also, considering the test is uniaxially loaded along the 𝑦 axis, the evolution of load can be simplified to its y component, denoted by 𝐿ref(𝑡). Using the first virtual field 𝐔*1, the external virtual work 𝑊ext(𝐔*1, 𝑡) can be simplified to

$$W_{\mathrm{ext}}(\mathbf{U}^*_1, t) = \frac{y}{d_\mathrm{h}} \cdot L_{\mathrm{ref}}(t) = L_{\mathrm{ref}}(t), \tag{7.19}$$

where $U^*_{1y} = y/d_\mathrm{h} = 1$ on the boundary of the applied load. Consequently, the evolution of predicted load 𝐿(𝝃, 𝑡) can be calculated from the internal virtual work 𝑊int(𝝃, 𝐔*1, 𝑡) as

$$L(\boldsymbol{\xi}, t) = \frac{1}{2} W_{\mathrm{int}}(\boldsymbol{\xi}, \mathbf{U}^*_1, t) = \frac{1}{2} \int_{\mathcal{B}_0} \mathbf{P}(\boldsymbol{\xi}, t) : \frac{\partial \mathbf{U}^*_1}{\partial \mathbf{X}} \, d\mathcal{B}_0 = \frac{1}{2} \int_{\mathcal{B}_0} P_{yy}(\boldsymbol{\xi}, t) \cdot \frac{1}{d_\mathrm{h}} \, d\mathcal{B}_0, \tag{7.20}$$

where $P_{yy}$ represents the yy component of the first Piola-Kirchhoff stress tensor. The constant 1/2 is necessary to account for the reaction load on the fixed boundary. The relative error $\delta_{\mathrm{re}}(\boldsymbol{\xi}, t)$ between the evolutions of predicted and reference load, given by

$$\delta_{\mathrm{re}}(\boldsymbol{\xi}, t) = 100 \times \frac{\left| L(\boldsymbol{\xi}, t) - L_{\mathrm{ref}}(t) \right|}{L_{\mathrm{ref}}(t)}, \tag{7.21}$$

is used as the indicator to compare the results using the 2D-VFM and the 3D-VFM. In Figure 7.8, the evolution of the relative error is presented for all materials. It is observed that, using the 3D-VFM, the maximum relative error is lower than 0.5 % for all thicknesses and tends to decrease with the increase in thickness. Thus, it can be considered that the relative error of the 3D-VFM is negligible and that the evolution of load is successfully predicted.

Figure 7.8 Evolution of the relative error for predicted load using the 2D-VFM and the 3D-VFM with the reference material parameters, for tests with materials of thickness (a) 1.5, 2, 2.5, and 3 mm, and (b) 4, 6, 8, and 10 mm.
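In discrete form, the load prediction of Equation (7.20) and the error of Equation (7.21) can be sketched as follows; the array shapes are illustrative assumptions.

```python
import numpy as np

def predicted_load(P_yy, V, d_h):
    """Eq. (7.20) in discrete form: L = 0.5 * sum(P_yy * V) / d_h.
    P_yy: (n_p,) stress component at the material points;
    V: (n_p,) undeformed volumes; the 1/2 accounts for the fixed boundary."""
    return 0.5 * np.sum(P_yy * V) / d_h

def relative_error(L, L_ref):
    """Eq. (7.21): percentage error of predicted vs. reference load."""
    return 100.0 * np.abs(L - L_ref) / L_ref
```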
However, the same is not observed using the 2D-VFM, where the relative error is not negligible. Overall, the relative error increases with the increase in thickness. For 1.5 mm, the evolution of the relative error is close to zero up to 4 mm of displacement, the instant at which the error rapidly increases up to 1 %. It was shown before that the time instant corresponding to 𝑢y = 4 mm is when the equivalent plastic strain values rapidly increase, leading to the rupture of the material. Increasing the material's thickness, an increase in the relative error is observed sooner in the test, even though the same displacement corresponds to lower deformations in the thicker tests. For the thickest material of 10 mm, the relative error is as high as 6 % at the last time instant, though it can be expected that it would be even higher at later stages of the test leading to rupture. In general, the increase in relative error appears proportional to the material's thickness. To better understand the origin of these deviations, the evolution of the stress components xx, yy, and zz is investigated for two material points, one in the center of the front surface, denoted 𝑃S, and another in the center of the middle plane, denoted 𝑃M. The exact location of these two material points in the finite element mesh of all tests is shown in Figure 7.9, for the 1.5 mm thick material as an example. This analysis is only performed for the materials of thickness 1.5, 3, and 10 mm, as presented in Figure 7.10. It can be seen that there is a dominant stress component 𝜎yy associated with the loading direction, which is expected from a uniaxial tensile test. More interestingly, it can be seen for the 1.5 mm thick material that the time instant where the error rapidly increases is consistent with the appearance of the stress component 𝜎zz (see Figure 7.10a). In fact, this is also consistent with the onset of localized necking, as previously observed in Figure 7.4. For larger thicknesses, the development of 𝜎zz is not as sudden and intense, but rather a steady increase from the initial time instant, in particular for the 10 mm thick material (see Figure 7.10c). Moreover, for the three materials, there is a difference in the stress values between the two material points. Additionally, an analysis of the evolution of the ratio of stress components between the two material points is also performed. Its formulation is given by

$$R_{ii} = \frac{\sigma_{ii}(P_\mathrm{M})}{\sigma_{ii}(P_\mathrm{S})} \quad \text{with} \quad i = x, y, z. \tag{7.22}$$

This analysis shows that the magnitude of 𝜎zz in 𝑃M is two times higher than in 𝑃S (see Figure 7.11). The same behavior is observed for all materials, as shown in Table 7.3, where the maximum values of the stress ratios are presented. The observed tendency of 𝑅xx to increase for higher thicknesses is also interesting. Since the 2D-VFM only considers the in-plane stress components and the surface material points in the calculation of the internal virtual work, it is not able to capture these behaviors, leading to errors in the internal virtual work and, consequently, in the predicted load. Although errors will exist when using the 2D-VFM, it is important to understand whether the error is acceptable. For example, a relative error of 1 % (1.5 mm) can be considered acceptable for an application not requiring high accuracy. As such, it can be considered that for thicknesses up to 1.5 mm, using the 2D-VFM is acceptable. However, for larger thicknesses, a larger deviation in the internal virtual work should be expected, leading to larger errors in the predicted load.
Identification of the Swift Hardening Law

The VFM code developed in the Python programming language is applied to identify the material parameters of Swift's law, with 𝝃 = (𝐾, 𝜎0, 𝑛). The adopted termination criteria of the optimization procedure are a maximum number of iterations of 500, or an absolute difference between parameters in consecutive iterations lower than or equal to 1 × 10⁻⁸. The identification is carried out for only the three thicknesses of 1.5, 3, and 10 mm. In a first part, the identification is carried out using both the 2D-VFM and the 3D-VFM. Only the first virtual field is used here. The three material parameters of Swift's law are identified, and the starting guess is equal to the reference parameters (see Table 7.1). This way, it is easier to verify whether the reference parameters are the global minimum solution used by the numerical simulations. It is expected that the optimization is easily carried out, without taking many iterations or stopping at local minima. The results of this identification are summarized in Table 7.4, where it is observed that, using the 3D-VFM, the final material parameters are identical to the reference ones. In fact, considering the objective function is not normalized, the initial and final values are of low magnitude. These results reinforce the previous analysis, where very low errors were obtained for the predicted load using the 3D-VFM.

Table 7.4 Results of the identification starting from the reference material parameters and using the 2D-VFM and the 3D-VFM, for tests with thicknesses of 1.5, 3, and 10 mm. Only the first virtual field is employed in the identification.

However, the results using the 2D-VFM are completely different. By comparison, the initial objective function values are higher than those observed using the 3D-VFM, which directly indicates a worse prediction of the load using the reference material parameters. For the 1.5 mm thick material, the identified material parameters are not that far from the reference ones, even though this does not directly mean that these are suitable parameters. In fact, the objective function values have not largely improved, and they are still much higher than those observed using the 3D-VFM. Moreover, the 2D-VFM uses a third of the material points considered in the 3D-VFM, thus, proportionally, the objective function values using the 2D-VFM would be even higher. For the 3 and 10 mm thick materials, the identified parameters are far from the reference parameters. In particular, the identified value of 𝜎0 for the 10 mm thick material is not even physically acceptable. Bounds could have been imposed to avoid the unfeasible solutions, though this would not change the conclusions of the identification. In Figure 7.12, the evolution of the relative error for the predicted load (see Equation 7.21) is represented using both the reference and the identified material parameters from Table 7.4 using the 2D-VFM. For the 1.5 mm thick material, it is observed that the predicted load with the identified parameters has improved in the range of necking (from approximately 4 mm of displacement). In spite of that, the relative error increases in the early stages of the test, where the error was close to zero using the reference parameters. The same phenomenon is observed for the 3 and 10 mm thick materials with even greater emphasis.
It can be concluded that the identification using the 2D-VFM results in identified material parameters that are not representative of the materials' hardening behavior when applied to materials with large thicknesses. To analyze the influence of each virtual field (see Table 7.2), more identifications are performed, but only using the 3D-VFM. In these identifications, various combinations of virtual fields are used. Three identifications evaluate the influence of each virtual field individually, and three more are performed by combining the first with the second virtual field, the second with the third, and a final identification combining the three of them. Again, the identifications are performed only for the 1.5, 3, and 10 mm thick materials. However, the initial material parameters are different from the reference ones to investigate the effect on the efficiency of convergence. The selected initial material parameters correspond to 1.5 times the reference ones. Overall, the identification results are good, as summarized in Table 7.5. A common result among all identifications is the exact identification of the parameter 𝑛, while for the parameters 𝐾 and 𝜎0 the identified values are not exact, but almost identical to the reference ones. The values of the identified parameters are all within 1 % of the reference ones, which is more than an acceptable result. Nevertheless, for the sake of comparing the influence of different virtual fields, it can be said that the best results are achieved using the first virtual field, either alone or in combination with others. In fact, this virtual field is the only one that leads to a non-zero value of the external virtual work, adding valuable information to the identification of the hardening material parameters. When using either the second or the third virtual field alone, the identified parameters 𝐾 and 𝜎0 tend not to reach the exact solution. Nevertheless, this does not mean these virtual fields are not suitable, but only reinforces the idea that, when using manual virtual fields, increasing their number leads to better and more stable results (Martins et al. 2020a). Interestingly, for the 10 mm thick material, the results using the second virtual field are slightly worse compared to the results of the 1.5 and 3 mm thick materials. This finding suggests that, when out-of-plane stress components are not negligible, using virtual fields similar to the second leaves out important information that could be captured in the objective function. In terms of computational efficiency, the number of iterations required for the convergence of the identification procedure does not suggest any major differences between the combinations of virtual fields.

Conclusions

The purpose of this study is to better understand the limitations of using the VFM in a 2D framework compared to a 3D framework. The study only considers data from numerical simulations, and the methodology was applied to the identification of a hardening law, beyond necking and up to rupture. The simulated tests are based on a uniaxial tensile test with a varying cross-section, generating a heterogeneous strain field over a large area. Moreover, various materials of different thicknesses are considered to understand the influence of thickness on the use of the VFM. Although these materials present different thicknesses, their hardening behavior is based on the real behavior of a dual-phase steel with 1.5 mm of thickness. Three manual virtual fields are implemented in this study, because of their simplicity and the lack of studies in a 3D framework.
The first virtual field is commonly used in the VFM, because it is the only one that includes the external virtual work in the calculation of the objective function. The second virtual field only assigns weight to in-plane stress components. It is also widely used in the VFM in a 2D framework. The third, and last, virtual field is an extension of the second to a 3D framework, and can be considered a novel contribution of this study. The advantage of the latter is that it assigns non-zero weights to all stress components and is not constant along the thickness. In a first part, the evolution of the predicted load was analyzed using the 2D-VFM and the 3D-VFM, for all materials. The relative error between the predicted load and the reference load from the numerical simulations was used as an indicator. The results showed that the relative error was negligible when using the 3D-VFM, successfully predicting the evolution of the load. On the contrary, using the 2D-VFM, the relative error was no longer negligible, ranging from 1 % for the 1.5 mm thick material up to 6 % for the 10 mm thick material, at the last time instant. With this analysis, it has been shown that the relative error increases proportionally with the material's thickness. Moreover, the source of the error can be related to the development of non-negligible out-of-plane stress components not considered in the 2D-VFM, as well as considerable differences in the stress values between the surface and the middle of the specimen. In a second part, the identification of the material parameters of the Swift hardening law was investigated. Initially, the identification was carried out using both VFM frameworks, only one virtual field, and starting from the reference material parameters. The results have shown that the reference material parameters correspond to the global optimum solution using the 3D-VFM, but using the 2D-VFM led to solutions diverging from the reference, and one unfeasible solution was also found. A verification of the predicted load using the identified parameters has shown that these do not lead to better results, but rather improve the prediction at some stages at the expense of others. Lastly, the three virtual fields were employed in the identification of the material parameters using the 3D-VFM. This time, a starting solution far away from the reference was chosen to analyze the convergence efficiency using different combinations of virtual fields. Nevertheless, few differences were observed between the identifications, with all reaching solutions within 1 % of the reference values or, in some cases, identical. Finally, it can be said that this study performed a pioneering analysis of the limitations of the VFM in a 2D framework compared to a 3D framework. However, this was only a preliminary numerical study that can be enlarged and extended to materials developing pronounced out-of-plane stress fields.

Final Remarks

Conclusions

The ultimate objective of this thesis was to develop and investigate efficient and accurate methodologies for the characterization of metallic sheets. Following the "Material Testing 2.0" paradigm, the parameter identification of constitutive models of metallic sheets can be divided into three pillars: (i) material testing, (ii) constitutive models, and (iii) inverse methodologies. This topic can be seen as a multiplicative system, meaning that if one of the pillars fails, the outcome of the parameter identification will not be successful.
Thus, this thesis aimed to contribute to each of the three pillars, by investigating the use of heterogeneous mechanical tests, techniques to reconstruct the kinematic fields, advanced constitutive models, and inverse methodologies. The starting point for this thesis dealt with the use of advanced constitutive models, because the scientific and industrial communities need accurate models to describe multiple phenomena of materials. However, implementing and validating a single model is an arduous task that can take months of work. For that reason, studies generally tend to use well-known models, which are, in some cases, relatively simple and have already been implemented and tested. Nevertheless, with the recent development of an open source user subroutine library of constitutive models, named Unified Material Model Driver for Plasticity (UMMDp), an opportunity to easily access advanced models opened up. This library contains various isotropic and kinematic hardening laws and more than 10 advanced yield criteria, and it is compatible with five different FEA software packages. Moreover, its programming structure is highly modular, allowing its expansion through an easy integration of other models. However, at the start of this thesis, few validations of the library had been reported. Therefore, the validity and performance of the UMMDp were investigated by comparing it with an already developed and validated user material subroutine and one of the most used FEA software packages, Abaqus/Standard. In particular, the use of the advanced Yld2004-18p anisotropic yield criterion and a non-linear mixed hardening law was analyzed for single-element homogeneous tests and a deep drawing cup test. Results showed that the UMMDp yields results identical to the references used, validating its use, but with added computational effort. Nevertheless, the latter disadvantage is compensated by its practicality and potential. These outcomes led to the adoption of the UMMDp throughout this thesis. Moreover, this investigation also led to the use of the UMMDp by several other studies and research teams, as well as to its integration in the VFM module of the MatchID commercial software (MatchID 2022e). Afterwards, an extensive overview of heterogeneous mechanical tests reported in the literature was performed, with a focus on the type and amount of information provided. This survey was justified by the large number of heterogeneous tests designed in the last decade. However, as more and more tests were proposed, the question was raised of which was best, i.e. which test provided the widest range of mechanical states and a larger degree of deformation, leading to more accurate parameter identification. The survey divided the tests between uniaxial, biaxial, and out-of-plane loading, showing that the first type corresponds to most of the developed tests, which may be related to the fact that only a common uniaxial testing machine is required. These tests can achieve various mechanical states, mainly in the shear and uniaxial tension regions. Moreover, having only two edges of the specimen fixed leads to more flexibility in the design process, which in turn leads to complex shapes obtained by different methodologies. On the other hand, biaxial and out-of-plane loading tests require more complex experimental apparatus and are therefore found in smaller numbers.
Nevertheless, biaxial tests have been largely investigated for a long time, but interest in them has recently dropped because it has been shown that the range of mechanical states they provide is reduced and only low levels of deformation are achieved. The previous survey also led to the conclusion that there is a lack of information on the proposed tests, as only a few are analyzed by different studies and under different conditions. Comparing heterogeneous mechanical tests is not a trivial task, as there are no clear standards or metrics to evaluate them. Therefore, a study was performed in which the richness of four heterogeneous mechanical tests is numerically evaluated through five different metrics. These metrics are the equivalent plastic strain, the major and minor strains, the major and minor stresses, the stress triaxiality and Lode angle parameter, and, lastly, the rotation angle. It has been shown that the first metric is relevant to evaluate the levels of deformation achieved, as well as their homogeneity over the specimen surface. The other three metrics evaluate the diversity of mechanical states achieved, providing different representations. However, it is suggested that the major and minor strains are always used, because they can be directly calculated from experiments. The last metric is proposed to evaluate the range of anisotropy that can be characterized from a single test. Also, the use of three virtual materials is considered in the analysis. This has shown the importance of evaluating the tests for different materials, as the results can vary. Finally, the study has shown that these can be suitable metrics to qualitatively evaluate and compare heterogeneous mechanical tests, without the complexity of quantitative indicators. Concerning the inverse methodologies, optimization algorithms are transversal to most of them, but studies often overlook their importance. Often, studies tend to use a gradient-based least squares algorithm without exploring other possibilities. Thus, the following study evaluated the use of different optimization algorithms within the finite element model updating (FEMU) technique. The FEMU was implemented using the Python and Fortran programming languages, combining the benefits of both. Also, by using the optimization library within Python, three optimization algorithms were employed in the calibration of a thermoelastoviscoplastic model, namely the Levenberg-Marquardt (LM), the Nelder-Mead (NM), and the Differential Evolution (DE) algorithms. Artificial noise was introduced in the numerical data to evaluate the robustness of the algorithms. The results showed that the most robust algorithm is the DE algorithm. However, the latter requires many function evaluations, leading to a high computational cost. The LM and NM algorithms are more computationally efficient, but often are not able to achieve the best solution. Their robustness can be improved by considering a multi-start approach. Although it was not explored in this study, the best methodology could be to combine the DE algorithm with either the LM or the NM algorithm in a sequential approach. Then, it was intended to investigate the use of the virtual fields method (VFM) up to rupture. However, a few studies dealing with the VFM in a 2D framework have shown that, after necking, the validity of the VFM is compromised, leading to inaccurate descriptions of the material hardening behavior. These studies highlighted that the deviations in the VFM were caused by the appearance of a triaxial stress state in the necking region.
They suggested extending the VFM to a 3D framework. However, the VFM requires the input of the experimental strain field, which is limited to the specimen surface. To circumvent this limitation, a numerical method, named internal mesh generation (IMG), was proposed a few years ago to reconstruct the volume displacement field inside a solid material. This method, which only uses surface measurements, is purely based on geometrical considerations and has shown great potential. However, only a few studies had investigated it, and its uncertainty had not yet been fully quantified. For that reason, a study was performed with the intent of estimating the error resulting from the application of the IMG. Initially, the reconstructed strain field of a plane strain tensile test was compared with the reference from FEA simulations. The results showed that there was a good agreement between the reference and the reconstructed field. However, a limitation of the method in accurately reconstructing the strain field through the thickness was identified when large strain gradients are observed. To address this limitation and further improve the method, two strategies were proposed. The first one corrects the global volume, leading to, on average, more accurate strain values. The second one corrects the local volume, which is the volume of each element, by adjusting the position of the nodes through the thickness. It has been shown that combining both strategies leads to the best results. The latter improves the reconstruction of the normal strain components, particularly the through-thickness gradient of the strain tensor component 𝜀zz. This study only deals with numerical data, but the final aim is to apply the IMG to real experiments. Thus, it was attempted to estimate the uncertainty of the method under different experimental conditions and varying DIC settings. A design of experiments was carried out with five factors, using virtual experiments from a dedicated software. The results showed that the investigated factors have a relatively low influence, meaning the error comes mainly from the IMG itself. Thanks to the IMG, it will be possible to use the VFM in a 3D framework with real experiments, which is a great advance. A fully numerical study to investigate the limitations of the VFM in a 2D framework, and consequently the advantages of a 3D framework, was performed. In particular, the influence of the material's thickness was analyzed regarding, at first, the evolution of the predicted load, and afterwards, the identification of the material parameters of the Swift hardening law. In the first part, the evolution of the predicted load using the reference parameters was compared to the one obtained from the FEA simulations. For the 3D-VFM, the relative error was negligible, accurately predicting the evolution of the load. However, using the 2D-VFM resulted in non-negligible relative errors after necking, showing that the error is proportional to the thickness. The source of error was related to non-negligible out-of-plane stress components and the considerable differences between the stress values at the surface and inside the material. In the second part, the identification of the Swift law was carried out using different combinations of three manually defined virtual fields. While two of them are commonly used in other 2D applications, a third one, specific to 3D, was extended from the 2D one.
Although the material parameters identified with the 2D-VFM did not match the reference parameters, the 3D-VFM was able to correctly identify them in every instance. The purpose of this study was to explore the potential benefits of using the VFM in a 3D framework. It is expected that future studies will build on the findings of this research. Overall, this thesis has contributed to the three main pillars of parameter identification of constitutive models for metallic sheets, and has demonstrated how important it is to consider all of them as part of the entire process. Nevertheless, the field of parameter identification in the paradigm of "Material Testing 2.0" is fairly wide and relatively recent, as proven by the variety of studies presented here. Consequently, some topics lack clear standards or are still underdeveloped. Thus, it is expected that further research will lead to clear standards and convince more communities of its advantages. Finally, even though the studies presented here only included numerical data, it is hoped that they can be extended to real experiments in the future.

Perspectives of Future Studies

Considering the studies developed in this thesis, it is expected that the field of parameter identification of constitutive models will continue to receive more attention from the scientific and industrial communities. Therefore, some perspectives for future studies may be outlined. Additionally, several computational tools developed within this thesis are available open source and can be used as a basis for future studies. The following points represent ideas and suggestions for future studies:

• The investigation of the UMMDp has led to its validation and spread across the scientific community. The large range of models it includes can be a great tool for various studies, reducing the arduous task of model implementation. Nevertheless, it is expected that the UMMDp will be enlarged with models from the community and continuously shared. In particular, it is expected to extend it with thermoelastoviscoplastic models. Moreover, the UMMDp can be the starting point for research studies, leading to better model selection;

• Following the overview and evaluation of heterogeneous mechanical tests, it is suggested to clearly define ways to compare and evaluate such tests. In the future, more tests will be designed with even more innovative methodologies. Therefore, it will be important to dedicate time and effort to the definition of standard metrics, allowing different tests to be easily compared and even ranked. Additionally, if different research groups agree on such metrics, it could lead to easier adoption, as well as cultivate a sense of healthy competition. As a suggestion, this discussion could be initiated through informal sessions organized within international conferences;

• Concerning the IMG, its potential to reconstruct the strain field inside materials has been shown. However, this study was purely numerical and must be extended to real experiments to consolidate its potential. Although not included in this thesis, real experiments on the DP780 dual-phase steel have already been performed and are expected to be included in future studies. Nevertheless, the IMG should be applied to different materials of varying thickness to understand its robustness. In addition, the method can be further improved by integrating measurements of the four surfaces, namely the front, the back, and the sides.
Moreover, it is expected that the IMG will be used in combination with inverse methodologies, in particular with the VFM, to characterize the material behavior up to rupture. A starting point could be the use of reconstructed fields from virtual experiments, followed by real experiments;

• Some limitations of the 2D-VFM relative to the 3D-VFM have been identified when dealing with large deformations up to rupture and with larger material thicknesses. However, this study can still be considered preliminary, opening the path for more research on this topic. To avoid general assumptions and conclusions based on a single study, it is suggested to investigate the 3D effects in different materials, particularly those with a more predominant out-of-plane stress field;

• Furthermore, the VFM code developed within this thesis can be the basis for future studies, in particular by extending the sensitivity-based virtual fields to a 3D framework.

List of Figures

Figure 1.1 Illustrative images of the three pillars addressed in this thesis: (a) material testing (Aquino et al. 2019), (b) constitutive models (Barlat et al.), and (c) inverse methodologies (Cooreman et al.).
Figure 1.2 Schematic of the thesis structure, including chapters and scientific contributions.
Figure 2.1 Orthotropic axes of rolled sheet metals, respectively, the rolling direction (RD), transverse direction (TD), and normal direction (ND). A tensile test specimen is represented on the sheet metal at an angle θ from the RD (van den Boogaard 2002).
Figure 2.2 Schematic representation of the classical stress-strain relationship of an elastoplastic material from a uniaxial tensile test. The classical additive decomposition of strain into elastic and plastic parts is also represented.
Figure 2.3 Schematic representation of the von Mises yield surface in the principal stress space for plane stress conditions. The plastic strain rate is represented in a direction normal to the tangent of the yield surface.
Figure 2.5 Framework of the UMMDp, including the compatible FEA software, the yield criteria, and the hardening laws.
Figure 2.6 Boundary conditions applied on the numerical models of the single-element homogeneous tests: (a) tensile, (b) shear, and (c) biaxial.
Figure 2.7 Results of the numerical simulations of homogeneous tensile, shear, and biaxial tests using the three user material subroutines, represented by εxx-σxx and εxy-σxy (left), εxx-εyy (center), and εxx-εzz (right), using: (a)-(c) the von Mises yield criterion with mixed hardening, (d)-(f) the Yld2004-18p yield criterion with isotropic hardening at 0° from RD, and (g)-(i) the Yld2004-18p yield criterion with mixed hardening at 0° from RD.
Figure 2.8 Schematic of the deep drawing cup test (dimensions in mm): (a) setup and geometry (Souto et al. 2015b) and (b) finite element mesh used in the blank.
Figure 2.9 Maps of the major strain ε1 (left), minor strain ε2 (center), and total back stress tensor component αxx (right) of the deep drawing cup simulations at the last time instant for (a)-(c) test 1 and (d)-(f) test 2.
Figure 2.10 Results for global equilibrium of (a) NR iterations and (b) ratio of NR iterations, per increment, of the deep drawing cup test with the Yld2004-18p yield criterion and a mixed hardening law.
Figure 3.1 Different mechanical states reached during testing of sheet metal, defined in (a) the major and minor stresses diagram (Brosius et al. 2011) and (b) the major and minor strains diagram (Paul et al. 2013), in the sheet plane.
Figure 3.2 Geometry of uniaxial loading specimens (dimensions in mm): (a) Meuwissen (1998), (b) Meuwissen et al. (1998), (c) Kajberg and Lindkvist (2004), (d) Belhabib et al. (2008), (e) Pierron et al. (2010), and (f)-(g) Cooreman (2008).
Figure 3.3 Geometry of the specimens (dimensions in mm) proposed by Kim et al. (2014).
Figure 3.4 Geometry of uniaxial tensile specimens (dimensions in mm): (a) Goga (2014), (b) Rossi et al. (2016a), (c) half-shape of Souto et al. (2017), (d) Jones et al. (2018), (e) Küsters et al., and (f) Barroqueiro et al.
Figure 3.5 Geometry of the specimens proposed by: (a) Fu et al. (2020), (b) Maček et al. (2020), (c) Chamoin et al. (2020), (d) Conde et al. (2021), (e) Zhang et al. (2022a), (f) quarter-shape of Gonçalves et al. (2022), and (g) Chapelier et al. (2022).
Figure 3.6 Geometry of biaxial loading specimens (dimensions in mm): (a) Makinde et al. (1992), (b) quarter-shape of Yu et al. (2002), (c) Cooreman (2008), and (d) Zidane et al. (2010).
Figure 3.7 Geometry of the biaxial loading specimens (dimensions in mm) proposed by Teaca et al. (2010).
Figure 3.8 Geometry of the biaxial loading specimens (dimensions in mm) proposed by Schmaltz and Willner (2014).
Figure 3.9 Geometry of biaxial loading specimens (dimensions in mm): (a) Prates et al. (2014), (b) Liu et al. (2015), (c) Zhang et al. (2015), (d) Martins et al. (2019), and (e) Kim et al. (2021).
Figure 3.10 Configuration of the out-of-plane loading tests proposed by (a) Pottier et al. (2012) and (b) Hapsari et al. (2018).
Figure 3.11 Distribution over time of designed heterogeneous mechanical tests for sheet metals, divided into uniaxial, biaxial, and out-of-plane loading.
Figure 4.1 Geometry (dimensions in mm) and boundary conditions of the finite element models for the heterogeneous mechanical tests (a) A, (b) B, (c) C, and (d) D. The specimen's geometry is represented using symmetry conditions, while the full geometry is recalled on the right-hand side.
Figure 4.2 Virtual representation of the strain hardening behavior of AA2090-T3, DP600, and Cu, through the yield stress as a function of the equivalent plastic strain.
Figure 4.3 Virtual representations of the anisotropic behavior of AA2090-T3, DP600, and Cu: (a) anisotropy of normalized yield stress along material orientations between 0° and 90° from the RD, (b) anisotropy of the Lankford coefficient along material orientations between 0° and 90° from the RD, (c) projection of the initial yield surface on the plane of RD-TD normalized stress, and (d) projection of the initial yield surface on the plane of shear-RD normalized stress.
Figure 4.4 Virtual FLCs of AA2090-T3, DP600, and Cu, characterized by the materials' strain hardening exponent n in the plane strain tension region, and by a slope of -1.0 on the left-hand side and 0.5 on the right-hand side of the diagram. Reference data (REF) from the literature is also represented for each material (Comsa et al.; Ozturk et al.; Pham et al.). The position in the FLD of a material point A at the time instant t is represented by A_t, while A_∞ is the position of A when it reaches the FLC on a linear strain path. Material point B_t represents the position used to evaluate the vertical distance from A_t to the FLC.
Figure 4.5 An overview of mechanical states, for an isotropic material and a plane stress state, represented through the diagrams of (a) major and minor strains, (b) major and minor stresses, and (c) stress triaxiality and Lode angle parameter.
Figure 4.6 Schematic representation of possible states in Mohr's circle covered by the formulation of the rotation angle. Situation A is characterized by a predominant compressive state, and situation B by a predominant tensile state. Cases 1, 2, 3, and 4 represent the different possibilities of calculating the principal angle.
Figure 4.7 Schematic representation of material points under all possible states in Mohr's circle, covered by the formulation of the rotation angle, for conditions (a) γ = |β| and (b) γ = 90° - |β|. The x and y axes represent the global coordinate system (x, y), and the x′ and y′ axes represent the material coordinate system (x′, y′).
Figure 4.8 Equivalent plastic strain contours. Tests are represented in rows and materials in columns. Material points in the elastic and plastic regimes are distinguished by a gray color and the color map.
Figure 4.9 Major and minor strains diagrams. Each row is associated with a test, and each column with a material. Material points in the elastic and plastic regimes are distinguished by a single color and the color map. An arbitrary value, shown in each diagram's bottom left corner, normalizes both axes. The gray lines represent each material's FLC and the red lines represent reference mechanical states.
Figure 4.10 Major and minor stresses diagrams. Each row is associated with a test, and each column with a material. Material points in the elastic and plastic regimes are distinguished by a single color and the color map. An arbitrary value, shown in each diagram's bottom left corner, normalizes both axes. The red lines represent reference mechanical states.
Figure 4.11 Stress triaxiality and Lode angle parameter diagrams. Each row is associated with a test, and each column with a material. Material points in the elastic and plastic regimes are distinguished by a single color and the color map. All the material points are located on top of the plane stress curve, represented in red.
Figure 4.12 Rotation angle histograms. Each row is associated with a test, and each column with a material. The rotation angle's range of values (0° to 90°) is divided into 90 equal intervals.
Material points in the elastic and plastic regimes are distinguished by a single color and the color map.
Figure 4.13 Rotation angle histograms of test C and AA2090-T3 for material orientations of 0°, 45°, and 90° from RD, respectively, from left to right. The rotation angle's range of values (0° to 90°) is divided into 90 equal intervals. Material points in the elastic and plastic regimes are distinguished by a single color and the color map. The maximum equivalent plastic strain value is shown above each histogram.
Figure 5.1 Schematic illustration of the operations and transformations of the NM algorithm.
Figure 5.2 Schematic illustration of the operations of the DE algorithm.
Figure 5.3 Schematic representation of the specimen used in the heterogeneous thermomechanical tests: (a) geometry of the specimens (dimensions in mm) and (b) example of the temperature field recorded during real experiments at the surface of the specimens.
Figure 5.4 Temperature field of the three heterogeneous thermomechanical tests, imposed along the x axis and described by second-order polynomials.
Figure 5.5 Maps of equivalent plastic strain at the last time instant for the thermomechanical heterogeneous tests at (a) 10^-2 s^-1, (b) 10^-3 s^-1, and (c) 10^-4 s^-1.
Figure 5.6 Evolution of the load throughout the deformation process of the heterogeneous thermomechanical tests at an average strain rate of (a) 10^-2 s^-1, (b) 10^-3 s^-1, and (c) 10^-4 s^-1.
Figure 5.7 Flowchart of the FEMU implementation.
Figure 5.8 Distribution in the search space of the material parameters, normalized by the bounds, in each initial set for the LM and the NM algorithms. The reference material parameters are also represented.
Figure 6.2 Schematic of the IMG for flat specimens in the undeformed and deformed configurations.
Figure 6.3 Finite element model of the plane strain tensile test in the (a) undeformed configuration and (b) deformed configuration, and (c) 3D view cut of the equivalent plastic strain distribution in the deformed configuration. Dimensions in mm.
Figure 6.4 Maps of the 3D cross-sections for strain tensor components (a) εxx, (b) εyy, (c) εzz, (d) εxy, (e) εxz, and (f) εyz, at the last time instant and represented on the reference configuration. For simplicity's sake, the axes and coordinates are only represented in the top-left map.
Figure 6.5 Evolution of the absolute error for each reconstructed strain tensor component throughout all time instants. The upper bound represents the maximum absolute error, and the white line represents the average absolute error.
Figure 6.6 Maps of the 3D cross-sections of absolute error for reconstructed strain tensor components (a) εxx, (b) εyy, (c) εzz, (d) εxy, (e) εxz, and (f) εyz, at the last time instant and represented on the undeformed configuration. For simplicity's sake, the axes and coordinates are only represented in the top-left map.
Figure 6.7 Evolution throughout time instants of (a) the global volume of the reference and the reconstruction, with the reference load for comparison, and (b) the volume of an element on the surface and of another in the middle, for the reference and the reconstruction.
Figure 6.8 Flowchart of the algorithm to correct the global volume in the IMG.
Figure 6.9 Comparison of the global volume evolution throughout time instants between the IMG and different combinations of the corrected global volume.
Figure 6.10 Evolution of the average absolute error for each reconstructed strain tensor component throughout time instants, between the IMG and different combinations of the corrected global volume.
Figure 6.11 Flowchart of the algorithm to correct the element volumes in the IMG.
Figure 6.12 Evolution of the average absolute error for each reconstructed strain tensor component throughout time instants, for the IMG, both standard and with different combinations of local volume correction.
Figure 6.13 Evolution of the average absolute error for each reconstructed strain tensor component throughout all time instants, for the IMG, both standard and with different combinations of global and local volume corrections.
Figure 6.14 Profile of the strain component εzz, along the intersection of the middle planes xz-yz, of the reference and the IMG, both standard and with different combinations of global and local volume corrections.
Figure 6.15 Maps of the 3D cross-sections of the reconstruction using IMG-Mixed-O-T for (a) the volume of the elements and (b) the strain component εzz, at the last time instant and represented on the undeformed configuration.
Figure 6.16 Profile of the strain tensor components (a) εxx, (b) εyy, and (c) εzz, along the intersections of the middle planes xy-xz, xy-yz, and xz-yz, respectively, from left to right.
Figure 6.17 Schematic illustrations of the steps involved in the generation of virtual experiments for each surface: (a) export of the FEA mesh and displacement field of the outer surface, (b) FEA mesh imposed on the speckle pattern image from the point of view of camera 0, in the undeformed and deformed configurations, (c) generated synthetic images from the point of view of camera 0, in the undeformed and deformed configurations, (d) pre-processing of stereo-DIC measurements from the point of view of camera 0, in the undeformed configuration, and (e) measured displacement field component from the point of view of camera 0.
Figure 6.18 Image of the real speckle pattern used as reference to generate the virtual experiments.
Figure 6.19 Schematic illustration of the position and orientation of the cameras in a generic stereovision system, in (a) the yz plane and (b) the xz plane.
Figure 6.20 Schematic illustrations of the factors and levels considered in the DOE: (a) field of view, (b) stereo-angle, (c) noise, (d) subset size, and (e) step size.
Figure 6.21 Average absolute and relative errors of the VOI between the reconstruction using FEA and virtual experiments for the (a) displacement field components and (b) strain field components. Each column corresponds to a different factor in the DOE.
Figure 7.2 Flowchart of the VFM implementation.
Figure 7.3 Geometry and finite element model of the specimen used in the numerical simulations of the simulated tensile test (dimensions in mm).
Figure 7.4 Virtual representation of the materials' strain-hardening behavior using the Swift isotropic hardening law and the reference parameters.
Figure 7.5 Maps of equivalent plastic strain at time instants corresponding to vertical displacements of 2, 4, and 5 mm for materials with (a) 1.5 mm and (b) 10 mm of thickness.
Maps on the surface (z = 0) and middle plane (z = d_t/2) are both represented.
In Equation (7.17), U*_z = 0 in the 2D-VFM.
Figure 7.7 Maps of virtual displacement fields represented for the three virtual fields (a) U*_1, (b) U*_2, and (c) U*_3. The maps of U*_1 are represented on the front (z = 0) and back (z = d_t) surfaces, while the maps of U*_2 and U*_3 are representative of the whole VOI.
Figure 7.9 Location of the material points P_S and P_M, represented on the finite element mesh of the 1.5 mm thick material test.
Figure 7.10 Evolution of the stress components σxx, σyy, and σzz for the material points P_S and P_M, located, respectively, at the center of the surface and at the center of the middle plane, for materials with thicknesses of (a) 1.5 mm, (b) 3 mm, and (c) 10 mm.
Figure 7.11 Evolution of the ratio of the stress components σxx, σyy, and σzz between the material points P_S and P_M, for materials with thicknesses of (a) 1.5 mm, (b) 3 mm, and (c) 10.0 mm.
Figure 7.12 Evolution of the relative error of the predicted load using the reference and the identified material parameters with the 2D-VFM, for tests with thicknesses of (a) 1.5 mm, (b) 3 mm, and (c) 10.0 mm.

H_1 = (1/9)(c_12 + c_13 + c_23 - 2c_21 + c_32 - 2c_31),
H_2 = (1/27)[(c_23 - 2c_21)(c_32 - 2c_31) + (c_32 - 2c_31)(c_12 + c_13) + (c_12 + c_13)(c_23 - 2c_21)],   (2.30)
H_3 = (1/54)(c_12 + c_13)(c_23 - 2c_21)(c_32 - 2c_31).

Table 2.1 Material parameters for the Yld2004-18p yield criterion used in the verification of the coefficient of equivalent stress (Souto et al. 2015b). All parameters are dimensionless.

Table 2.3 Set of Yld2004-18p material parameters used in the verification of the coefficients order (Barlat et al. 2005). All parameters are dimensionless.

Term   c12     c13     c21     c23     c31     c32     c44     c55     c66
C′     1.241   1.078   1.216   1.223   1.093   0.889   0.501   0.557   1.349
C″     0.775   0.922   0.765   0.793   0.918   1.027   1.115   1.112   0.589

Table 2.5 shows that there are two different orders of Voigt notation used by the software. Abaqus/Standard (Dassault Systèmes 2014) and ADINA (ADINA R&D 2021) share the first type of notation, while LS-DYNA (Livermore Software Technology 2021), ANSYS (ANSYS, Inc. 2021) and MARC share the second. Because the software used in this study is Abaqus/Standard, the formulation of the yield criterion in the UMMDp is modified according to the Voigt notation of the stress tensor used in this FEA software (σxx, σyy, σzz, σxy, σxz, σyz), resulting in a new relation of the anisotropy coefficients c44, c55 and c66 as

{c44, c55, c66}_(Barlat et al. 2005) ≡ {c44, c55, c66}_(UMMDp) ≡ {c44, c55, c66}_(UMAT_Yld2004_Mixed).

Table 2.5 Convention of Voigt notation for each finite element software prepared for the UMMDp.

Software           Voigt notation
ABAQUS/Standard    σxx σyy σzz σxy σxz σyz
ADINA              σxx σyy σzz σxy σxz σyz
LS-DYNA            σxx σyy σzz σxy σyz σxz
ANSYS              σxx σyy σzz σxy σyz σxz
MARC               σxx σyy σzz σxy σyz σxz
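Purely as an illustration of this reordering (the helper below is an assumption for this text, not part of the UMMDp sources), converting a Voigt stress vector from the Abaqus-type convention to the LS-DYNA-type convention of Table 2.5 amounts to swapping the last two shear components:

```python
# Abaqus-type order:  (xx, yy, zz, xy, xz, yz)
# LS-DYNA-type order: (xx, yy, zz, xy, yz, xz)
def abaqus_to_lsdyna(stress):
    """Reorder the shear components of a Voigt stress vector between the
    two conventions listed in Table 2.5."""
    xx, yy, zz, xy, xz, yz = stress
    return (xx, yy, zz, xy, yz, xz)

print(abaqus_to_lsdyna((100.0, 50.0, 0.0, 10.0, 20.0, 30.0)))
# (100.0, 50.0, 0.0, 10.0, 30.0, 20.0)
```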
2.4.2 Numerical Validation

After implementing the modifications to the UMMDp, it is possible to directly compare the results of the Std_Aba, the UMAT_Yld2004_Mixed, and the UMMDp by using FEA simulations. With that purpose, single-element homogeneous tests and a deep drawing cup test are used in this validation. The results are compared for the von Mises yield criterion and the Yld2004-18p yield criterion, using the parameters from Table 2.3. In addition, a mixed hardening law, composed of the Voce law for isotropic hardening (see Equation 2.22) and the non-linear kinematic hardening law of Equation 2.26, is used with the material parameters presented in Table 2.6. The latter material parameters were identified from real experiments on a DC04 mild steel with 0.7 mm of thickness in Souto et al. (2015b).

Table 2.6 Material parameters of the Voce isotropic hardening law and the kinematic hardening law used in the numerical simulations.

110.3   5.92   44.57   22.85   106.2   258.38   5629.7   0.0258

Single-Element Homogeneous Tests

Three types of homogeneous tests, namely tensile, shear, and biaxial, are modeled using a single 3D element of 1 mm³ with boundary conditions according to Figure 2.6. The use of single-element models, particularly in the tensile and shear tests, is justified by the homogeneous strain distributions commonly observed in real experiments using classical tests. In the case of the biaxial test, it has been previously shown, in comparison with a bulge test, that the evolution of the strain tensor components εxx and εyy is approximately identical.

Table 2.7 Total CPU time of the deep drawing cup test 2.

User material subroutine    Time [h]
UMMDp                       53.946
UMMDp_Simplified            52.614
UMAT_Yld2004_Mixed          47.544

Table 4.1 Elastic properties of the materials and parameters used in the Yld2000-2d anisotropic yield criterion and the Swift isotropic hardening law, for AA2090-T3 (Yoon et al.; Abedini et al.), DP600 (Ozturk et al.), and Cu.

Table 4.2 An overview of mechanical states represented through the reference values of major and minor strains, major and minor stresses, and stress triaxiality and Lode angle parameter, considering an isotropic material and a plane stress state.

In Equation (4.12), |.| = √(.)² represents the absolute value, and the stress tensor components are defined in the material coordinate system. This formulation represents all possible states in Mohr's circle, as shown in Figure 4.6. Although each state has a different physical meaning, it is possible to mathematically reduce them to three conditions, illustrated by the sketch after this list, where the rotation angle is:

1. equal to 45°, when σxx = σyy and σxy ≠ 0;
2. equal to |β|, when σxx < σyy for a predominant compressive state (|σ1| < |σ2|), and when σxx > σyy for a predominant tensile state (|σ1| > |σ2|), respectively, cases 1 and 2 of situations A and B (see Figure 4.6);
3. equal to 90° - |β|, when σxx > σyy for a predominant compressive state (|σ1| < |σ2|), and when σxx < σyy for a predominant tensile state (|σ1| > |σ2|), respectively, cases 3 and 4 of situations A and B (see Figure 4.6).
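As an illustration of these three conditions (an example added for this text, with the helper name `rotation_angle` and the principal angle β computed from Mohr's circle as assumptions, not code from the thesis):

```python
import math

def rotation_angle(s_xx, s_yy, s_xy):
    """Rotation angle (degrees) between the material and principal axes,
    following the three conditions stated above."""
    if math.isclose(s_xx, s_yy) and s_xy != 0.0:
        return 45.0
    # Principal angle beta from Mohr's circle.
    beta = 0.5 * math.degrees(math.atan2(2.0 * s_xy, s_xx - s_yy))
    s1 = 0.5 * (s_xx + s_yy) + math.hypot(0.5 * (s_xx - s_yy), s_xy)
    s2 = 0.5 * (s_xx + s_yy) - math.hypot(0.5 * (s_xx - s_yy), s_xy)
    tensile = abs(s1) > abs(s2)  # predominant tensile state
    if (s_xx < s_yy and not tensile) or (s_xx > s_yy and tensile):
        return abs(beta)         # cases 1 and 2
    return 90.0 - abs(beta)      # cases 3 and 4
```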
Table 4.3 Displacements (in mm) corresponding to a material point reaching the forming limit curves of Cu, DP600, and AA2090-T3 for tests A, B, C, and D.

Test    Cu      AA2090-T3   DP600
A       0.195   0.583       0.571
B       0.514   2.041       2.223
C       0.848   1.924       2.587
D       0.852   2.345       3.266

Table 5.1 Reference material parameters used in the Johnson-Cook thermoelastoviscoplastic constitutive model to generate the reference data.

Table 5.2 Reference set, and upper and lower bounds, for the material parameters. The sets of material parameters represent the initial solutions used for the LM and the NM algorithms.

Parameter     A [MPa]    B [MPa]     n [-]   m [-]   C [-]
Reference     205.210    1124.000    0.092   1.360   0.050
Lower bound   50.000     100.000     0.010   0.500   0.000
Upper bound   1200.000   2000.000    0.500   4.000   1.000
Set 1         838.230    258.620     0.340   3.970   0.210
Set 2         1158.140   1704.400    0.060   0.680   0.680
Set 3         707.870    525.550     0.180   2.950   0.700
Set 4         51.600     1368.290    0.440   2.110   0.010
Set 5         397.470    1218.310    0.300   1.400   0.460

Table 5.3 Final solutions and objective function values of the LM, the NM, and the DE algorithms. The highlighted values indicate a relative error of 0.1% of the identified material parameters.

Evolution of the objective function throughout function evaluations using the LM, the NM, and the DE algorithms, represented from left to right. Results correspond to data sets without noise, with noise of amplitude 10^-5 mm, and with noise of amplitude 10^-3 mm, from top to bottom.

Table 6.2 Calibration parameters for the stereo-vision system considered in this study. Intrinsic: cx = 520 px, cy = 696 px, fx = 7909 px, fy = 7909 px. Extrinsic: θx [°], θy = 0°, θz = 0°, Tx = 0 mm, Ty [mm], and Tz [mm], with Ty and Tz functions of (S_o, θx).

Table 6.3 Factors and levels for the design of experiments to evaluate the volume reconstruction from virtual experiments.

Figure 7.1 Description of the motion of a material body ℬ in the Euclidean space defined by the global coordinate system (x, y, z), its undeformed configuration, its deformed configuration at a time instant t, and the definition of the material point coordinate systems: (i, j, k) is the global coordinate system defined in the undeformed configuration, (1, 2, 3) is the local coordinate system defined in the deformed configuration, and (Ξ, Η, Π) and (ξ, η, ζ) are the material coordinate systems, respectively, in the undeformed and deformed configurations. The angle θ represents the material orientation.

Table 7.1 Elastic properties of the materials and reference parameters used in the Swift isotropic hardening law.

E [GPa]   ν [-]   K [MPa]   σ0 [MPa]   n [-]
200.0     0.3     1250.0    350.0      0.150

Table 7.2 Components of the virtual displacements and of the gradient of the virtual displacements for the three virtual fields.

Table 7.3 Maximum values of the ratio of the stress components σxx, σyy, and σzz between the material points P_S and P_M, for all materials.

Table 7.5 Results of the identification using the 3D-VFM and combining different virtual fields, for tests with thicknesses of 1.5, 3, and 10 mm.
Thickness   Virtual fields   K [MPa]    σ0 [MPa]   n [-]   Iterations   Initial [-]    Final [-]
Initial parameters           1875.000   525.000    0.225   -            -              -
1.5 mm      1                1250.591   350.365    0.150   217          7.64 × 10^9    8.21 ×
            2                1247.688   348.374    0.150   226          1.97 × 10^8    9.68 ×
            3                1247.355   348.439    0.150   217          8.25 × 10^7    8.72 ×
            1 + 2            1250.454   350.240    0.150   218          7.89 × 10^9    3.71 ×
            1 + 3            1250.525   350.307    0.150   240          6.73 × 10^9    2.70 ×
            1 + 2 + 3        1250.394   350.187    0.150   232          6.93 × 10^9    5.52 ×
3 mm        1                1250.463   350.255    0.150   234          3.06 × 10^10   8.32 ×
            2                1248.317   348.930    0.150   208          9.13 × 10^8    7.28 ×
            3                1248.670   349.627    0.150   217          3.80 × 10^8    3.88 ×
            1 + 2            1250.357   350.160    0.150   216          3.15 × 10^10   6.45 ×
            1 + 3            1250.423   350.223    0.150   221          3.10 × 10^10   2.56 ×
            1 + 2 + 3        1250.321   350.132    0.150   220          3.19 × 10^10   8.04 ×
10 mm       1                1249.936   349.863    0.150   240          3.51 × 10^11   5.39 ×
            2                1246.999   347.853    0.150   210          9.40 × 10^9    1.67 ×
            3                1249.914   349.843    0.150   275          3.09 × 10^11   1.09 ×
            1 + 2            1249.792   349.726    0.150   248          3.62 × 10^11   1.18 ×
            1 + 3            1249.914   349.843    0.150   275          3.09 × 10^11   1.09 ×
            1 + 2 + 3        1249.773   349.710    0.150   260          3.19 × 10^11   1.22 ×

The JANCAE, a non-profit organization, has held lectures twice a year, four days each, to provide training on theories and technology related to nonlinear simulation for researchers and engineers engaged in various computer-aided engineering (CAE) businesses. Since 2009, as one of its subcommittee activities, it has developed and verified the user subroutine library UMMDp (https://www.jancae.org/annex/annexUMMDe/index.html).
A simple model for Carnot heat engines

Jacques Arnaud (Mas Liron), Laurent Chusseau, Fabrice Philippe

A brief history of the Carnot discovery is recalled in Appendix B. Only elementary physical and mathematical concepts are employed.

I. INTRODUCTION

The purpose of this paper is to give newcomers to the field of thermodynamics a feel for the concepts involved in a simple manner. The paper is self-contained and employs only elementary mathematics. The results are derived for a particular urn (or bag, or reservoir) model, related to the one introduced in 1907 by P. Ehrenfest (Ehrenfest; Güémez et al.; Tobochnik). Our model consists of two reservoirs with N possible ball locations each, containing respectively n_l and n_h weight-1 balls. This model is directly applicable to Otto heat engines (in which the working-agent parameter stays fixed when in contact with either bath) employing two-level atoms as the working agent. (For an exhaustive discussion of quantum heat engines, see Quan.) Classical heat engines may, of course, be considered as special cases of quantum heat engines.

In Section II we describe our model in more detail and evaluate the average work produced, the average energy lost by the upper reservoir, and the efficiency. The general properties of heat engines are recalled in Section III. The relationship between our urn model and the properties of heat engines is discussed in Section IV. The reservoir absolute temperatures, T_l and T_h, are defined in the limit of large ball numbers. When n_l ≈ n_h, the system efficiency and the average work obtained in Section II tend to coincide with the expressions for the efficiency and the average work given by Carnot for an ideal heat engine. The fluctuations of the work produced are evaluated in Appendix A. A discussion of the history of the Carnot discovery is given in Appendix B.

It is well known that heat exchange between two contacted bodies may be described by an urn model (see Section III). The present model, employing two urns at different altitudes, and the comparison that we make with heat engines have not, to our knowledge, been presented before. It seems to us that mathematically minded students would understand the entropy concept better from the present model than from classical thermodynamics, because empirical results (such as the irreversibility of thermal contacts) are not needed in our discussion. For other students, temperature is a primary intuitive concept, and they may prefer an introduction based on gas-filled cylinders of variable length instead.

II. EXCHANGE OF BALLS BETWEEN TWO RESERVOIRS

We consider a system consisting of two reservoirs at altitudes E_l and E_h > E_l, respectively, with respect to some lower reference level, in the earth's gravitational field. There are N possible ball locations in each reservoir, labeled 1, 2, ..., N, as shown in Fig. 1. A single ball is allowed in any given location. The lower reservoir contains n_l ≤ N weight-1 balls and the higher reservoir contains n_h ≤ N weight-1 balls. By "weight-1 ball" we mean an object with a weight of 1 N, that is, with a mass of approximately 0.1 kg. In order to lift such an object by 1 m in the earth's gravitational field, an energy of 1 J is required. Conversely, such an object delivers an energy of 1 J whenever its altitude gets lowered by 1 m. These energies may be removed or added to some external device with the help, for example, of cords and pulleys, or of electrical motors (or generators). We leave the nature of these mechanisms unspecified in the discussion that follows. Note, however, that a particularly interesting mechanism is the action of resonant optical fields on two-level atoms. They may convert upper-state atoms into lower-state atoms and conversely, thereby receiving or delivering energy.

In Figure 1 we have N = 5, n_l = 1 and n_h = 2. In the higher reservoir there are 10 distinguishable ball configurations and in the lower reservoir there are 5 distinguishable configurations (the general formula for the number of configurations will be given later on). Considering the two reservoirs together, there are therefore 5 × 10 = 50 distinguishable configurations. These 50 configurations correspond to the same energy. If one assumes that the reservoirs are being shaken up frequently (at least more frequently than the occurrence of a ball-exchange event), all possible ball configurations are equally likely to occur.

We consider ball displacements from the lower and higher reservoirs at a single location, say the left-most location, labeled 1. One may draw all the possible configurations and evaluate by inspection quantities of interest such as the average work produced by the system. For example, if N = 3, n_l = 1, n_h = 2, the nine distinguishable configurations are (● denotes a weight-1 ball and ○ an empty location):

higher   lower   work
○●●      ●○○     -1
●○●      ●○○      0
●●○      ●○○      0
○●●      ○●○      0
●○●      ○●○     +1
●●○      ○●○     +1
○●●      ○○●      0
●○●      ○○●     +1
●●○      ○○●     +1

The work column gives the energy produced over a cycle, setting for brevity E ≡ E_h - E_l = 1. Indeed, restricting our attention to the first location, labeled 1, and, for example, the first configuration shown above, we note that there is an empty location in the higher reservoir and a ball in the lower reservoir. In that case, a cycle consists of transferring the lower-reservoir ball into the higher empty location. The system then delivers an energy equal to -1, that is, absorbs an energy equal to 1. From the above numbers, we calculate that the average energy produced, called the "average work," is W = (-1 + 0 + 0 + 0 + 1 + 1 + 0 + 1 + 1)/9 = 1/3. This pedestrian method also provides the variance of the work produced.
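This enumeration is easy to check by brute force. The short script below (an illustration added here, not part of the paper) lists the work values for all nine equally likely configuration pairs and recovers the average of 1/3:

```python
from itertools import combinations

# Brute-force check of the N = 3, n_l = 1, n_h = 2 example: enumerate all
# equally likely pairs of reservoir configurations and average the work
# exchanged at the first location (+1 if a ball falls, -1 if one is raised).
N, n_l, n_h = 3, 1, 2
lowers = [set(c) for c in combinations(range(N), n_l)]
highers = [set(c) for c in combinations(range(N), n_h)]
works = [(0 in hi) - (0 in lo) for lo in lowers for hi in highers]
print(sorted(works))            # one -1, four 0s, four +1s
print(sum(works) / len(works))  # 0.333... = h - l = 2/3 - 1/3
```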
We now turn to an equivalent but more convenient picture (see the abstract), involving the probability of picking up a ball from a reservoir. It is then convenient to suppose that empty locations are occupied by weight-0 balls. Such balls, of course, do not carry any energy when displaced. A cycle consists of exchanging two balls (of weight 0 or 1), one randomly picked from the lower reservoir and one randomly picked from the higher reservoir. The probability of picking a weight-1 ball from the lower reservoir is l ≡ n_l/N and the probability of picking a weight-1 ball from the higher reservoir is h ≡ n_h/N. As we discuss below, the average energy produced is W = h - l. In the above example (namely N = 3, n_l = 1, n_h = 2), l = 1/3 and h = 2/3, so we obtain W = 2/3 - 1/3 = 1/3, which coincides with the previous result. The equivalence between the two methods is general.

The total (potential) energy of a reservoir at altitude E (with respect to some arbitrarily selected level) containing n weight-1 balls is obviously Q = nE (kinetic energy not being considered, the total energy coincides with the potential energy). The letter Q is employed anticipating a correspondence with heat. When a weight-1 ball is added to a reservoir at altitude E, the reservoir energy is incremented by E. On the other hand, if a ball is randomly picked from the N locations of a reservoir containing n weight-1 balls, the probability that this ball has weight 1 is clearly n/N. Accordingly, if the picked-up ball is subsequently carried to a reservoir at altitude E, the latter reservoir's average energy is incremented by ΔQ = E n/N. The word "average" will henceforth be omitted in the main text since only average values are considered.

Consider now two such reservoirs, one at altitude E_l (lower reservoir) containing n_l weight-1 balls, the other at altitude E_h (higher reservoir) containing n_h weight-1 balls. A cycle consists of exchanging two randomly picked balls between the two reservoirs. From what has just been said, and setting l ≡ n_l/N, h ≡ n_h/N, the energies added to the lower and higher reservoirs read, respectively,

ΔQ_l = E_l (h - l),   ΔQ_h = -E_h (h - l).   (1)

The work performed follows from the law of conservation of energy,

W = -ΔQ_l - ΔQ_h = (E_h - E_l)(h - l).   (2)

The engine efficiency, defined as the ratio of the work performed W to the energy -ΔQ_h lost by the higher reservoir, is therefore

η ≡ W/(-ΔQ_h) = 1 - E_l/E_h.   (3)

In the next section we recall well-known properties of heat engines. In the subsequent section, we show that when h ≈ l the efficiency given in Eq. (3) coincides with the Carnot efficiency and the work given in Eq. (2) coincides with the expression given by Carnot. Note that after a cycle the number of weight-1 balls in a reservoir may be incremented by -1, 0 or 1. The next cycle therefore operates with different values of the parameters l and h. We will not consider the evolution of the work produced cycle after cycle. Indeed, we are interested in a comparison with heat engines operating between two baths whose heat capacity is so large that their temperatures do not vary significantly. Likewise, in the present reservoir model, one may suppose that the number of balls in each reservoir is so large that their change after any number of cycles is insignificant.
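Equations (1)-(3) can be checked numerically with a few lines; the values of E_l, E_h, l, and h below are arbitrary illustrative choices, not values from the paper:

```python
# Numerical check of Eqs. (1)-(3) for one cycle (illustrative values).
E_l, E_h = 1.0, 3.0
l, h = 0.2, 0.5
dQ_l = E_l * (h - l)            # Eq. (1): energy added to the lower reservoir
dQ_h = -E_h * (h - l)           # Eq. (1): energy added to the higher reservoir
W = -dQ_l - dQ_h                # Eq. (2): work, by conservation of energy
eta = W / (-dQ_h)               # Eq. (3): efficiency
print(W, eta, 1.0 - E_l / E_h)  # 0.6 0.666... 0.666...
```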
III. HEAT ENGINES

The purpose of the present section is to recall some well-known facts about heat and heat engines (for a detailed discussion see, for example, Ref. 5). It is empirically known that when two bodies at different temperatures are contacted, they eventually reach the same temperature. This observation can be verified using only the intuitive concept of temperature.
Classically, this fact may be interpreted by supposing that some heat (energy) flows from the high-temperature body into the low-temperature body, but that the converse never occurs. A similar observation can be made with two urns (see the figure with E_l = E_h). Suppose that the urns contain N balls each, of either weight 1 (black) or weight 0 (white). If randomly picked balls are repeatedly exchanged between the two urns, eventually the ratios of black and white balls become nearly the same in both urns, irrespective of the initial conditions. For the analogy with the phenomenon of thermal contact to hold, one must define the temperature of an urn as a monotonically increasing function of the ratio of black to white balls [for an exact expression, see Eq. (6) with E > 0 and n = n_black, N - n = n_white].
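The equalization of the ratios can be reproduced with a short simulation (an illustration added here, not from the paper):

```python
import random

# Two urns at the same altitude (E_l = E_h): repeatedly swapping randomly
# picked balls equalizes the black-ball ratios, mimicking thermal contact.
random.seed(1)
N = 500
urn_a = [1] * 450 + [0] * 50   # initial black-ball ratio 0.9
urn_b = [1] * 50 + [0] * 450   # initial black-ball ratio 0.1
for _ in range(50_000):
    i, j = random.randrange(N), random.randrange(N)
    urn_a[i], urn_b[j] = urn_b[j], urn_a[i]
print(sum(urn_a) / N, sum(urn_b) / N)  # both ratios end up close to 0.5
```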
It is also an empirical fact that when the energy originating, for example, from a dropping weight is dissipated into a body, the body heats up. According to the law of conservation of energy, one thus presumes that a hot body contains an energy called "heat." The fact that this heat cannot be converted back into usable energy is one form of the second law of thermodynamics. In order to get usable energy from heat, one needs two bodies at different temperatures, one of them called the high-temperature bath and the other the low-temperature bath. If the low-temperature bath is at zero absolute temperature, all the heat may in principle be converted back into usable energy, but this is not the case in general. Heat is usually pictured as the average kinetic energy of randomly moving atoms, following the pictures of Bernoulli and Maxwell. For example, the average kinetic energy of a helium atom in a gas is 3T/2 in appropriate units, where T denotes the gas absolute temperature. We will not discuss this conventional model further, because in the present paper we instead represent heat by a random potential energy. Consider, for example, the first location of the higher reservoir in Fig. 1. For the configuration shown in that figure the energy is equal to E_h, since a weight-1 ball at altitude E_h is present. But for other configurations (not shown in the figure), the energy may be zero. On average, the higher reservoir energy is easily found to be (2/5)E_h.

Let us now recall how, concretely, usable energy may be retrieved from two baths at different temperatures. What one needs is a "working agent," which may be any piece of material whose properties can be changed by varying a parameter. An example is a helium-filled cylinder with a piston, of length ε, where ε is the parameter. Another example (more relevant to this paper's discussion) is a collection of two-level atoms. We suppose that the energy separation ε of the two levels may be varied through, e.g., the application of an electrical field. It is the external agent causing the parameter ε to vary that collects, or delivers, the energy. A typical closed cycle consists of putting the working agent with parameter ε1 in contact with the low-temperature bath and slowly varying this parameter to ε2. The working agent is then carried from the low-temperature bath to the high-temperature bath while the parameter is slowly changed to ε3. The parameter is then changed to ε4. The working agent is finally carried back to the low-temperature bath while the parameter recovers its initial value ε1. A closed cycle is thus defined by the nature of the working agent and four parameter values.

If the parameter ε does not vary at all, obviously no energy is delivered or received. The cycle then simply transfers heat from the high-temperature bath to the low-temperature bath and is analogous to a thermal contact. A more interesting situation is the Otto cycle, which describes an idealized form of the gasoline engine (discovered by Beau de Rochas in 1862 and Otto in 1876). In that case ε2 = ε1 and ε4 = ε3, meaning that the parameter does not vary when the working agent is in contact with either bath. The parameter varies only during the adiabatic transitions from one bath to the other. This cycle may deliver work or receive work (heat pump) for appropriate choices of the parameters. Usually it does not achieve the maximum (Carnot) efficiency. It does so approximately, however, when ε4 ≈ ε2. For the exact description of Carnot heat engines, see the generalization of the ball model in Ref. 6 (Arnaud et al., Mechanical equivalent of quantum heat engines). In the celebrated Carnot cycle, the parameters are so chosen that the temperature of the working agent is nearly the same as the bath temperature when contacted with it. Then the efficiency is η = 1 - T_l/T_h, and the work produced is W = (T_h - T_l)S, where S denotes the entropy transferred from the high-temperature bath to the low-temperature bath. These concepts will be made clearer in the next section. To summarize, whenever we have at our disposal two baths at different temperatures, one may always find a heat engine that delivers energy. But many kinds of cycles fail to deliver energy even though T_h > T_l. As a matter of fact, they may instead absorb energy and act as heat pumps. As we shall see, our potential-energy model is fully consistent with these well-known considerations.

IV. ENTROPY AND TEMPERATURE

We consider again reservoirs containing N locations, with n weight-1 balls and N - n weight-0 balls. To relate this device to heat engines, let us first recall that the number of ball configurations in a reservoir is N!/[n!(N - n)!]. For example, if N = 3 and n = 1, there are 3!/(1! 2!) = 3 configurations, namely (●○○), (○●○) and (○○●). Next, we define the entropy as the logarithm of the number of configurations, the Boltzmann constant being set equal to unity, that is, for a reservoir,

S(n) = ln[N!/(n!(N - n)!)].   (4)

Note that

S(n + 1) - S(n) = ln[N!/((n + 1)!(N - n - 1)!)] - ln[N!/(n!(N - n)!)] = ln[(N - n)/(n + 1)] ≈ ln(N/n - 1),   (5)

for large n. The absolute temperature of a reservoir is then defined as

T(n) = [Q(n + 1) - Q(n)]/[S(n + 1) - S(n)] ≈ E/ln(N/n - 1).   (6)

Temperature is an intensive quantity. For example, the temperature of two identical bodies at temperature T, considered together, is again T. Because heat has the nature of an energy and is an extensive quantity, it is required that S also be an extensive quantity. Since the number of configurations of two separate bodies is the product of the numbers of configurations (for each configuration of one body one must consider all the configurations of the other body), and the logarithmic function has the property that ln(ab) = ln(a) + ln(b), the above definitions do ensure that T is an intensive quantity.
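A few lines suffice to evaluate Eqs. (4)-(6) numerically (a script added for this text, not part of the paper); `math.lgamma` is used to compute the log-factorials of Eq. (4):

```python
import math

def entropy(N, n):
    # Eq. (4): S(n) = ln(N! / (n! (N - n)!)), with the Boltzmann constant = 1.
    return math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)

def temperature(N, n, E):
    # Eq. (6), exact finite-difference form: T = [Q(n+1) - Q(n)] / [S(n+1) - S(n)].
    return E / (entropy(N, n + 1) - entropy(N, n))

N, E = 1000, 1.0
for n in (100, 300, 499):
    # Exact finite difference versus the approximation E / ln(N/n - 1).
    print(n, temperature(N, n, E), E / math.log(N / n - 1))
```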
Note that we have chosen a temperature unit such that the Boltzmann constant k_B is unity. By doing so, the distinction between extensive and intensive quantities drops out of sight. For example, the energy of a single-mode oscillator, E = k_B T, reads in our notation E = T. However, the distinction may be restored, while keeping k_B = 1, by writing E = T × (number of modes). The number of modes depends on the volume, while T does not. Note also that the temperature is positive if n < N/2.

The cycle efficiency given in Eq. (3) may now be written in terms of temperatures as

η = 1 - E_l/E_h = 1 - (T_l/T_h) ln(1/l - 1)/ln(1/h - 1).   (7)

Thus, when l ≈ h, the last fraction in the above equation drops out and the Carnot efficiency is indeed obtained. In the limit l ≈ h, the work W produced per cycle is very small. However, one may always add up the work contributions of any number of similar devices having the same reservoir temperatures (but possibly different values of E and n) and achieve any specified work at the Carnot efficiency.

The ball exchange discussed above may increment the reservoir entropies. The number of weight-1 balls in a reservoir may indeed be incremented by one, remain the same, or be decremented by one. From what was said before, the probability that a weight-1 ball is transferred from the high reservoir to the lower one is h ≡ n_h/N, and the probability that a weight-1 ball is transferred from the low reservoir to the higher one is l ≡ n_l/N. Since these events are independent, the lower-reservoir entropy increment reads

ΔS_l = h(1 - l)[S(n_l + 1) - S(n_l)] + l(1 - h)[S(n_l - 1) - S(n_l)].   (8)

Using Eq. (5) we obtain

ΔS_l = (h - l) ln(1/l - 1).   (9)

The increment of the higher-reservoir entropy is obtained by exchanging the h and l labels in the above expression, that is,

ΔS_h = -(h - l) ln(1/h - 1).   (10)

We thus find that, in the limit n_l/N ≈ n_h/N (or l ≈ h), ΔS_l ≈ -ΔS_h, so that there is no net entropy produced. Entropy is just carried from the higher reservoir to the lower one. The Carnot expression for the work recalled in Section III may thus be written as

W = (T_h - T_l)ΔS_l ≈ [E_h/ln(1/h - 1) - E_l/ln(1/l - 1)] ΔS_l ≈ (E_h - E_l)(h - l),   (11)

so that the Carnot general formula for W indeed coincides with the expression for the work performed per cycle evaluated for our model from simple reasoning. More precisely, the ratio of the total entropy produced, ΔS_l + ΔS_h, to the work produced W tends to zero as h → l; see Eq. (A4).

The final expression tells us that the engine, according to our model, delivers work only if the terms E_h - E_l and h - l are both positive or both negative. But since E_h > E_l by convention, this implies that we must have h > l. Going back to the expression for the temperature in Eq. (6), and remembering that ln(.) is a monotonically increasing function of its argument, we find that work may be produced only if T_h > T_l. Whenever T_h > T_l, there exist heat engines that may deliver work. But this is not so for every heat engine. In particular, one may consider the properties of an Otto heat engine that stops delivering work (and turns into a heat pump) when a condition similar to our l = h condition holds, even though T_h > T_l.

Conventional heat engines operate with two large baths, or reservoirs, one hot and one cold. Because these baths are not infinite in size, cycle after cycle the hot bath cools down and the cold bath warms up. Eventually no work is produced. The same situation occurs in our model. After a very large number of cycles, the values of h and l tend to coincide and no work is produced any more. Because the reservoir temperatures do not equalize, however, one may say that the system has then reached a state of equilibrium, but not a state of thermal equilibrium. This is not a peculiarity of our model, but a general property of some heat engines.
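The limits discussed around Eqs. (7)-(11) are easily verified numerically; in the illustrative script below (arbitrary values of E_l, E_h, and l), the efficiency approaches the Carnot value and the net entropy production vanishes as h → l:

```python
import math

E_l, E_h = 1.0, 2.0
l = 0.2
for h in (0.3, 0.21, 0.201):
    T_l = E_l / math.log(1 / l - 1)          # Eq. (6)
    T_h = E_h / math.log(1 / h - 1)          # Eq. (6)
    carnot = 1 - T_l / T_h
    eta = 1 - E_l / E_h                      # Eq. (3), here 0.5
    dS_l = (h - l) * math.log(1 / l - 1)     # Eq. (9)
    dS_h = -(h - l) * math.log(1 / h - 1)    # Eq. (10)
    W = (E_h - E_l) * (h - l)                # Eq. (2)
    print(h, round(eta - carnot, 6), round(dS_l + dS_h, 6),
          round(W - (T_h - T_l) * dS_l, 6))  # all three gaps shrink as h -> l
```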
This is not a peculiarity of our model, but a general property of some heat engines.

V. CONCLUSION

We have seen that heat engines may be equivalent to random mechanical engines of a special kind. Precisely, the model consists of two reservoirs having N locations, with n_l, n_h weight-1 balls, at different altitudes. The only concepts involved in the present paper are those of potential energy and of uniform probability. We have shown that the efficiency and work in our model coincide with the Carnot expressions in the limit where n_l ≈ n_h. Full Carnot cycles may be generated out of this elementary configuration. In some limits, the efficiency is the same as for a Carnot cycle.

FIG. 1: Schematic representation of an engine that converts potential energy into work. The figure represents two lottery-like reservoirs located at altitudes E_l and E_h ≥ E_l, respectively, with N possible ball locations labeled 1, 2, 3, ..., N (N = 5 in the figure). The number of weight-1 balls (black circles) is n_l in the lower reservoir, and n_h in the higher reservoir (with n_l = 1, n_h = 2 in the figure). Open circles may be viewed as weight-zero balls. For each reservoir, every ball configuration is equally likely to occur considering that the energies are the same. The complete figure should therefore consist of 5 × 10 = 50 similar figures exhibiting all the possible system configurations. Balls may be transferred from one reservoir to the other in location 1 only. If there is a ball in the upper reservoir at that location and none in the lower one (as is the case in the figure), the ball gets transferred from the upper reservoir to the lower one, thereby delivering energy. Conversely, if there is a ball in the lower reservoir and none in the upper one, the ball gets transferred from the lower reservoir to the upper one, thereby absorbing energy.

ACKNOWLEDGMENTS

The authors thank the anonymous referees for sharp reading and useful comments.

APPENDIX A: FLUCTUATIONS

Some readers may not be particularly interested in the fluctuations of the quantities of major interest considered in the main text, namely the work produced and the high-temperature reservoir heat loss. However, fluctuations are important in some applications. We show in this appendix that the variance of these quantities can be readily obtained from the ball model. Recall that in our model a cycle consists of exchanging simultaneously a ball from the higher reservoir (at altitude $E_h$ and containing $n_h$ weight-1 balls and $N - n_h$ weight-0 balls) and a ball from the lower reservoir (at altitude $E_l$ and containing $n_l$ weight-1 balls and $N - n_l$ weight-0 balls). The probability that a weight-1 ball be picked up from the higher reservoir is $h \equiv n_h/N$. The probability that a weight-1 ball be picked up from the lower reservoir is $l \equiv n_l/N$. The two events are independent. Setting $E \equiv E_h - E_l$, we have seen in the main text that the average work produced per cycle is $\langle W \rangle = E(h-l)$. We now evaluate $\langle W^2 \rangle$. The probability that a weight-1 ball falls and none is raised is $h(1-l)$. If this event occurs, the work performed squared is equal to $E^2$. Conversely, the probability that a weight-1 ball is raised and none falls is $l(1-h)$. If this event occurs, the work performed squared is again equal to $E^2$. Because the two other cases produce no work, it follows that

$\langle W^2 \rangle = E^2 [h(1-l) + l(1-h)] = E^2 (h + l - 2hl).$ (A1)

Therefore, the variance of the work produced reads

$\mathrm{var}(W) \equiv \langle W^2 \rangle - \langle W \rangle^2 = E^2 [h + l - 2hl - (h-l)^2] = E^2 [h(1-h) + l(1-l)].$ (A2)

In the limit $h \approx l$ considered in the main text, we have $\mathrm{var}(W) \approx 2E^2\, l(1-l)$.

Let us now consider the total entropy produced, $\Delta S \equiv \Delta S_l + \Delta S_h$. When a weight-1 ball is being transferred from the high reservoir to the lower one and none from the low reservoir to the higher one, an event that occurs with probability $h(1-l)$, the increment of $S_l$ is, according to Eq. (5), $\Delta S(n_l+1) - \Delta S(n_l) = \ln(1/l - 1)$, and the increment of $S_h$ is $\Delta S(n_h-1) - \Delta S(n_h) = -\ln(1/h - 1)$. It follows that the increment in total entropy is $\ln[(1/l-1)/(1/h-1)]$ with probability $h(1-l)$. When a ball is being transferred from the low reservoir to the higher one and none from the high reservoir to the lower one, an event that occurs with probability $l(1-h)$, the increment of $S_l$ is, according to Eq. (5), $\Delta S(n_l-1) - \Delta S(n_l) = -\ln(1/l - 1)$, and the increment of $S_h$ is $\Delta S(n_h+1) - \Delta S(n_h) = \ln(1/h - 1)$. It follows that the increment in total entropy is $\ln[(1/h-1)/(1/l-1)]$ with probability $l(1-h)$. The average increment in total entropy is therefore

$\langle \Delta S \rangle = h(1-l) \ln\frac{1/l-1}{1/h-1} + l(1-h) \ln\frac{1/h-1}{1/l-1} = (h-l) \ln\frac{1/l-1}{1/h-1} \geq 0.$ (A3)

As was said in the main text, when $h \approx l$, $\langle \Delta S \rangle \approx 0$ and the system tends to be reversible and to achieve the highest efficiency. Note that the entropy increment is non-negative for both a heat engine ($h > l$) and a heat pump ($l > h$). More precisely, noting that to first order in $\delta \equiv h - l$ we have $\ln[(1/l-1)/(1/h-1)] \approx \delta/[l(1-l)]$, we obtain

$\langle \Delta S \rangle \approx \frac{\delta^2}{l(1-l)}.$ (A4)

The whole model presented makes sense because the generated entropy is proportional to $\delta^2$ while the work produced is proportional to $\delta$, so that, for small $\delta$, near reversibility does not imply vanishing work. Finally, we evaluate the variance of the total entropy increment. From the above expressions, it follows that

$\langle (\Delta S)^2 \rangle = h(1-l) \left[\ln\frac{1/l-1}{1/h-1}\right]^2 + l(1-h) \left[\ln\frac{1/h-1}{1/l-1}\right]^2 = (h + l - 2lh) \left[\ln\frac{1/l-1}{1/h-1}\right]^2,$ (A5)

and the variance reads

$\mathrm{var}(\Delta S) \equiv \langle (\Delta S)^2 \rangle - \langle \Delta S \rangle^2 = [h + l - 2lh - (h-l)^2] \left[\ln\frac{1/l-1}{1/h-1}\right]^2 = [h(1-h) + l(1-l)] \left[\ln\frac{1/l-1}{1/h-1}\right]^2,$ (A6)

which vanishes, as well as the average entropy produced, when $h \approx l$. To first order in $\delta \equiv h - l$, we have

$\mathrm{var}(\Delta S) \approx 2 \langle \Delta S \rangle,$ (A7)

a remarkably simple result. This result has been presented before in Ref. 7 just after Eq. (18), and also in previous works. This agreement shows that the properties of our model are, at least up to a point, generic, that is, generally applicable to heat engines.
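Relation (A7) is easy to check numerically. The following short script is not part of the original paper; it is a minimal Monte Carlo sketch of the ball model, with N, n_l and n_h chosen arbitrarily for illustration, that compares the sample averages with Eqs. (A3) and (A7):

```python
import math
import random

# Ball-model parameters (illustrative values, not from the paper)
N, n_l, n_h = 100, 40, 45   # locations; weight-1 balls in the low/high reservoirs
E = 1.0                     # altitude difference E = E_h - E_l
l, h = n_l / N, n_h / N     # probabilities of drawing a weight-1 ball

def one_cycle():
    """Simulate one cycle: draw location 1 in each reservoir, return (W, dS)."""
    low = random.random() < l    # weight-1 ball at location 1 of the low reservoir
    high = random.random() < h   # weight-1 ball at location 1 of the high reservoir
    if high and not low:         # ball falls: work +E, entropy ln[(1/l-1)/(1/h-1)]
        return E, math.log((1/l - 1) / (1/h - 1))
    if low and not high:         # ball raised: work -E, entropy of opposite sign
        return -E, math.log((1/h - 1) / (1/l - 1))
    return 0.0, 0.0              # the two other cases transfer nothing

samples = [one_cycle() for _ in range(500_000)]
W = [w for w, _ in samples]
S = [s for _, s in samples]
mean = lambda x: sum(x) / len(x)
var = lambda x: mean([v * v for v in x]) - mean(x) ** 2

print("<W>  simulated vs E(h-l):", mean(W), E * (h - l))
print("<dS> simulated vs (A3):  ", mean(S),
      (h - l) * math.log((1/l - 1) / (1/h - 1)))
print("var(dS)/<dS>, close to 2 when h ~ l:", var(S) / mean(S))
```

With the values above (delta = 0.05), the last ratio comes out very close to 2, as predicted by (A7).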
APPENDIX B: BRIEF HISTORY OF THE CARNOT DISCOVERY

The motivation for introducing the present account of Carnot's discoveries is that, in spite of the efforts of a number of motivated scientists (see below), they remain insufficiently appreciated. The Carnot theory, which appeared in a book in 1824 and in unpublished notes, established both the first and the second laws of thermodynamics. This fact has been pointed out by a number of authors who took the trouble of looking carefully at what Carnot actually said, clarifying the terminology employed, updating the system of units, and correcting minor errors in the experimental data. One of these authors is the Nobel-prize winner A. Kastler. [START_REF] Kastler | Sadi Carnot et l'essor de la Thermodynamique[END_REF] We translate from Kastler's paper: "Had Sadi Carnot lived longer [...] he would be considered today not only as the author of the Carnot principle (called by Clausius the second principle of thermodynamics) but also as the author of the first principle of that science." Another author is the Russian scientist V. M. Brodiansky. [START_REF] Brodiansky | Sadi Carnot[END_REF] At the end of his book, pp. 228 and 229, the author lists ten major achievements of Carnot. The first reads "Carnot is the first to formulate the second principle of thermodynamics" and the eighth says "He was among the first to formulate strictly the law of equivalence between heat and work, and the first to calculate with sufficient accuracy its numerical value."

One reason for the current misunderstanding is that part of the Carnot contribution appeared in print only decades after his early death. A second one is that his work was popularized by Clapeyron in a partly erroneous manner. A third one is the unfortunate use by Carnot of the word "calorique" to designate what Clausius later on called "entropy." The word "calorique" had been formerly employed by Lavoisier to designate some hypothetical heat substance. Clausius and Lord Kelvin, though highly appreciative of the Carnot work, missed part of his contribution because the notes mentioned above were not available to them. Let us cite also the paper by La Mer, who expressed himself forcefully as follows: "Unless the view-point that the Carnot theory is accurate is adopted, one is placed in the position of maintaining that Carnot succeeded in demonstrating some of the most fundamental and profound principles of physical science by the most masterly display of scientific double-talk that has ever been perpetrated upon the scientific world. This view is untenable. [START_REF] La Mer | Some current misinterpretations of N. L. Sadi Carnot's memoir and cycle[END_REF]" Much clarification is due to Hoyer. [START_REF] Hoyer | How did Carnot calculate the mechanical equivalent of heat?[END_REF] The historian of science R. Fox says: "Until recently there were very few studies concerning [the physics of Carnot's reflections]. Thanks to the work of Hoyer, we now have papers on the logical implications of the Carnot theory, and its analogy with modern thermodynamics [...]. It is not at all obvious to understand how Carnot [discovered the mechanical equivalent of heat]. Hoyer examined this question in two important papers. His articles provide complete references to earlier attempts [...]. He explains the exactness of Carnot's calculation (which is even more striking if one uses modern values for the specific heats) by noticing that the Carnot theory is entirely accurate."

As far as the first law of thermodynamics is concerned, let us quote Carnot: [START_REF] Carnot | Réflexions sur la puissance motrice du feu[END_REF] "Heat is nothing but motive power, or rather another form of motion. Wherever motive power is destroyed, heat is generated in precise proportion to the quantity of motive power destroyed; conversely, wherever heat is destroyed, motive power is generated." Carnot calculated that 1 calorie of heat is equivalent to 3.27 J, instead of the modern value, 4.18 J. As far as the second law is concerned, the key fact is that engine efficiencies reach their maximum value when they are reversible. Carnot reached this conclusion from the consideration that energy cannot be obtained for free. He therefore looked for heat processes that could work in a reversed manner, ending up with the celebrated "Carnot cycle." Slow processes are reversible, with the exception of thermal contacts. Because there is some confusion in the literature concerning the significance of the Carnot contribution with respect to the second law of thermodynamics, let us quote Zemansky and Dittman: [START_REF] Zemansky | Heat and Thermodynamics[END_REF] "Carnot used chaleur when referring to heat in general, but when referring to the motive power of fire that is brought about when heat enters an engine at high temperature and leaves at low temperature, he uses the expression chute de calorique, never chute de chaleur [...]. Carnot had in the back of his mind the concept of entropy, for which he reserved the term calorique."
01589471
en
[ "spi" ]
2024/03/04 16:41:26
2015
https://hal.science/hal-01589471/file/Papa2015.pdf
S Papa email: [email protected] S Patalano A Lanzotti email: [email protected] S Gerbino J Y Choley Towards the Integration of Thermal Physics and Geometrical Constraints for a 3D-Multiphysical Sketcher Keywords: Multiphysics, TTRS, Modelica language, preliminary design The paper deals with the relationship between the geometrical or topological entities of complex systems and the physics in which the systems are involved. In particular, the paper deepens the integration of thermal physics with geometrical constraints. The results of the work could therefore be used within the development of a 3D-multiphysical sketcher, viz. a tool for the preliminary design of complex systems characterized by the presence of one or more overlapping physics. Firstly, the model of Topologically & Technologically Related Surfaces (TTRS) is used, and the related Minimal Reference Geometrical Elements (MRGEs) and constraint conditions are implemented by means of the Modelica language. Then, the implementation of the new objects for MRGEs and constraint conditions is applied to a mechanical assembly. Finally, the integration of the TTRS model with thermal physics is applied to the case of layout design for electronic boards. I. INTRODUCTION The design of a complex system nowadays deals with the interaction of many components involving different physics. The designer follows a typical procedure, passing from the preliminary design to the final configuration, validated through several simulations. In general, the multidisciplinary nature of a complex system, such as a mechatronic system [START_REF] Van Amerongen | Mechatronic design[END_REF], requires an environment suited to all steps of the design, which allows mechanics, electronics and automation to be modeled, as well as the interaction between the different physics involved [START_REF] Plateaux | Méthodologie de conception d'un produit mécatronique" 19ème Congrès Français de Mécanique[END_REF]. A methodology for the conceptual design of complex systems, developed at SUPMECA in Paris, proposes the combined use of various instruments widely applied during the different phases of the design process [START_REF] Plateaux | A need for the definition of a topological structure for the complex systems modeling[END_REF], [START_REF] Plateaux | Vers un environnement intégré pour le prédimensionnement-Modelica 3D", 19ème Congrès Français de Mécanique[END_REF]. Many studies have been carried out to date, but they focus on individual levels of the V-cycle and do not provide continuity of modeling from the definition of requirements to virtual prototyping [START_REF] Ferretti | Virtual prototyping of mechatronic systems[END_REF]. By contrast, the approach in [START_REF] Plateaux | Towards an Integrated Mechatronic Design Process[END_REF] introduces a hybrid methodology based on different tools, languages and methodologies, such as SysML [START_REF] Johnson | Integrating models and simulations of continuous dynamics into SysML[END_REF], [START_REF] Penas | A telescope robust preliminary design with SysML[END_REF], Modelica [START_REF] Lind | Model Based Systems Engineering for Aircraft Systems-How does Modelica Based Tools Fit?[END_REF] and CATIA [START_REF] Plateaux | Integrated Design for a Mechatronic System with CATIA V6[END_REF], [START_REF] Kleiner | Model based design with systems engineering based on RFLP using V6[END_REF].
In particular, the analysis of a complex system starts from its breakdown into several devices, sub-assemblies or components that can be considered homogeneous and consistent from the topological, functional and multi-physical points of view, respectively. This goal is accomplished by means of dedicated tools [START_REF] Choley | A consistent preliminary design process for mechatronic systems[END_REF]. Each device has a behavior related to one or more physics and therefore contributes to the overall multiphysical field that characterizes the complex system. Once the functional requirements and the physical parameters of the complex system have been identified, the next step deals with the definition of the logical architecture, the connection of the different components characterized by their dynamic behaviors and, finally, the simulation of the whole system's performance. The language used for this step is Modelica [START_REF]Modeling of Complex Physical Systems[END_REF], [START_REF] Fritzson | The modelica object-oriented equation-based language and its open modelica environment with metamodeling, interoperability, and parallel execution[END_REF]. In fact, the libraries developed in Modelica already contain objects and models aimed at simulating different physics, including the related equations, and the existing connectors can be used to connect devices by means of compatible parameters [START_REF] Patalano | A Digital Pattern Approach to the Design of an Automotive Power Window by means of Object-Oriented Modelling[END_REF], [START_REF] Herfs | Design of Feed Drives with Object-Oriented Behavior Models[END_REF]. The simulation of the multiphysical behavior of a complex system can be very useful during preliminary design. By creating such a tool, i.e. a 3D multiphysical sketcher for the preliminary design of complex systems, it is in fact possible to reduce the subsequent use of Finite Element (FE) simulations. Such simulations represent the most time-consuming task in solving the dynamic behavior of multiphysical systems [START_REF] Plateaux | Introduction of the 3D Geometrical Constraints in Modelica[END_REF]. To accomplish the simulation of different and overlapping physics, it is necessary to determine the relationship existing between the geometrical or topological entities of the system and the physics in which the system is involved. A method to establish this relationship is to use Topologically & Technologically Related Surfaces (TTRS) [START_REF] Clement | The TTRS: 13 constraints for dimensioning and tolerancing. geometric design tolerancing: theories, standards and applications[END_REF], together with the related Minimal Reference Geometrical Elements (MRGEs) and constraint conditions. The present paper deepens the relationship between the geometrical or topological entities of complex systems and the thermal physics in which the systems are involved. The paper is arranged as follows. Section 2 presents the implementation of the set of MRGEs within the Modelica environment and the application of such MRGEs to a mechanical assembly. Section 3 summarizes the logical scheme supporting the action of a 3D multiphysical sketcher. Section 4 illustrates the application of the integration of the TTRS model with thermal physics to the case of layout design for electronic boards. Finally, Section 5 draws the conclusions.
II. MODELLING TTRS WITHIN THE MODELICA LANGUAGE The coupling of information related to the position and orientation of different objects in a three-dimensional space can be accomplished using Topologically & Technologically Related Surfaces (TTRS) and the Modelica language. According to the TTRS model [START_REF] Clement | The TTRS: 13 constraints for dimensioning and tolerancing. geometric design tolerancing: theories, standards and applications[END_REF], any surface or association of real surfaces of an object can be associated with a kinematic invariance class named TTRS. There are 7 classes of TTRS, classified according to increasing Degrees of Freedom (DOF): Identity equal to 0; Revolute, Prismatic and Helical equal to 1; Cylindrical equal to 2; Spherical and Planar equal to 3. Kinematic joints can be expressed by TTRS. Each TTRS is characterized by an MRGE. Each MRGE is made up of a combination of one point and/or one line and/or one plane, but it does not take into account the intrinsic dimensional aspect of the object (Table I). In order to assemble two geometrical objects, i.e. to define geometrical constraints between two TTRS, 44 associations were identified depending on the relative orientations and positions. They correspond to the most elementary formulation of a kinematic connection between objects.

TABLE I. A cone associated with the TTRS "Revolute Surface", whose MRGEs are a point and a line.
Surface Class: Revolute Surface | MRGE: Point-Line

Finally, the study of the related MRGEs shows that only 13 cases of constraints are possible. These constraints between MRGEs, numbered from C1 to C13, use algebraic expressions and parameters (Fig. 1) [START_REF] Clement | The TTRS: 13 constraints for dimensioning and tolerancing. geometric design tolerancing: theories, standards and applications[END_REF]. Firstly, each MRGE (point, line and plane) was implemented as an object in the Modelica environment. The primary object is the point, defined as a vector with three variable coordinates. The coordinates of the point can be assigned as a datum or evaluated during a simulation (point or point_u in Fig. 2, respectively). The line block is created using a class containing both two vectors for the coordinates of two points, and two matrices for the parametric and Cartesian coordinates. When the simulation starts, the environment evaluates both the Cartesian and the parametric coordinates of the objects. A line can be defined by means of two points, or obtained using geometric conditions, such as the intersection of two planes. A plane can be defined by means of three points. For this reason, the line object contains both parametric and Cartesian coordinates. Fig. 3 shows the coordinates of a line defined through two datum points or through the intersection of two planes. The equations of the planes are ax+by+cz+d=0 and a'x+b'y+c'z+d'=0, respectively. In particular, Fig. 3 depicts the condition related to the planes represented by the equations y=0 and z=0. The 13 constraints were generated in the Modelica environment by using MRGEs. Each block was created by using vectors and parameters, in order to calculate the coordinates of the MRGE involved. The constraint C2, for example, related to the distance between two points, uses the parametric equation in (1):

point2 = point1 + d * v    (1)

In particular, v is the unitary vector between the two points and d is the parameter that defines the position of point2 with respect to point1.
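As a cross-check of how the C2 block behaves, Eq. (1) can be reproduced in a few lines. The following is a minimal sketch in Python rather than in the paper's Modelica implementation; the function name c2 and the sample values are ours, chosen for illustration only:

```python
import numpy as np

def c2(point1, v, d):
    """C2 constraint of Eq. (1): point2 = point1 + d*v.

    v is the unitary vector between the two points; d parameterizes the
    position of point2 with respect to point1.
    """
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)            # enforce that v is unitary
    return np.asarray(point1, dtype=float) + d * v

# Example: a point translated along the y direction, much as the eccentric
# axis is translated from the nominal axis in the valve assembly (illustrative)
point1 = np.zeros(3)
point2 = c2(point1, v=[0.0, 1.0, 0.0], d=2.5)
print(point2)                            # -> [0.  2.5 0. ]
```

When d is left free and supplied by another block, as with the additional connector described below, the same relation acts as a constraint rather than an assignment.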
The components of the vector v are evaluated from initial conditions related to the two points or by means of geometrical constraints coming from other blocks. In fact, an additional connector located on the top right of the block was added: when this connector is used, the direction of v is evaluated from other blocks of the model. Similar procedures were used to implement the whole set of 13 constraints. Fig. 4 depicts the 13 constraints and the implementation of the C2 constraint. In particular, a counter "_x" was added to the label of each constraint to univocally identify the instance used in the system context. In order to verify the implemented objects for the MRGEs and the 13 constraints, a lifting valve assembly was preliminarily modelled within the Modelica environment. The assembly consists of a vertical valve moved by an eccentric shaft rotating around the horizontal axis. The nominal axis (axis_nom) is aligned to the x axis and defined by two datum points (point1 and point2), while the eccentric axis (eccentric) is parallel to the nominal one and translated along the y direction by the constraint C12_1. The valve, according to the TTRS model, is a revolute surface, so it is associated with the MRGEs line and point. The line is the axis of revolution (valve); it is defined perpendicular to axis_nom by the constraint C12_2 and passes through point2. The point (point_u1) is the center of the valve and belongs to the axis of revolution by means of the C4_1 constraint; the distance for the point_u1 position is given through the C5_1 constraint (Fig. 5). III. THE ROLE OF TTRS MODEL CONSTRAINTS IN A 3D MULTIPHYSICAL SKETCHER The strong integration and geometrical compactness among the different components of modern complex systems bring about, as a consequence, the proximity of different multiphysical domains and inevitable interactions between physics. In particular, the devices that compose the system are immersed in media with known physical characteristics, which determine the values of the multiphysical interactions. The connectors implemented for each device enable the multiphysical interactions (Fig. 6). The simulation of a complex system works both in terms of time, thanks to the solution of equations by means of Modelica solvers, and in terms of variational changes of geometries. The role of the geometrical constraints from the TTRS model, here implemented in the Modelica environment, is to accomplish the updating of the geometrical conditions between devices, in a 3D space, during multiphysical simulations. Fig. 6. Logical scheme for the interaction between devices to be modelled in a 3D multiphysical sketcher. At this step, this goal is accomplished by using the same parameters and objects related both to the 13 constraints and to the simulation of the physical interactions. IV. CASE STUDY: LAYOUT OF COMPONENTS IN PRESENCE OF HEAT TRANSFER The case study consists of two masses in a 3D space, one thermal relation for heat transfer, and one metal medium for conduction. The two-mass system refers to the case of electronic boards as in [START_REF] Roumizadeh | Pre-designing an electronic card using a multi-domain models approach with DYMOLA[END_REF], or to the case of the evaluation board depicted in Fig. 7 [START_REF] Vv | Stratix Documentation[END_REF]. On the board, a 10-watt Field-Programmable Gate Array (FPGA) transmits thermal energy to a set of DDR3 memory modules. Therefore, a limit working temperature for the DDR3 modules should not be exceeded.
At this stage, a preliminary model for conductive-convective thermal exchange has been implemented. In particular, the model uses the classes belonging to the Modelica libraries together with the ad hoc models implemented for the 13 constraints and related to TTRS. In fact, by considering the TTRS model for the C2 constraint, i.e. the distance between two geometrical points, the final displacement of the mass representing the DDR3 modules was accomplished (Fig. 8). The distance between the two lumped masses was associated with the balance distance in the conduction equation, by using the Modelica environment (Eq. 2):

e = e0 + alfa*dT;    (2)
TTRS.C2 c2_1(Vector={1,2,3});
c2_1.d = thermalConductor2_1.e;

with the thermal conductor defined as

Real e;
equation
  G = k*A/e;
  Q_flow = G*dt;

The result was the displacement of the second mass to the allowed distance, evaluated according to the thermal parameters in terms of power, heat capacity, thermal conductivity and initial temperatures (Fig. 9). The automatic displacement of the two lumped masses is actually accomplished using the 3D animation window related to the Modelica library for multi-body systems. V. CONCLUSIONS The paper summarizes the preliminary results related to the integration of thermal physics with geometrical constraints, i.e. the possibility of using a thermal exchange to impose a variational change on a set of assigned MRGEs (point, line, plane) that represent the complex system. Such results could therefore be used within the development of a 3D-multiphysical sketcher aimed at the preliminary design of complex systems. The objects developed within the Modelica environment ensure the possibility of modeling every complex system, including all possible constraint conditions between components, thanks to the completeness of the TTRS approach that is used as a basis. Furthermore, the approach to simulation used in this work operates in terms of time, thanks to the solution of equations through Modelica solvers, but also in terms of variational changes related to geometrical elements. Different design solutions can therefore be explored by means of simulation. A first possibility deals with the evaluation of transient events related to parameters, as in the case of working temperatures for critical components. Alternatively, final displacements, such as the relative positioning of components, can be simulated, providing significant support to designers. The developed case study related to a two-mass system showed the potential of the approach and the data coming from such a simulation. Further work has to be carried out to test the whole set of constraints as well as the presence of coexisting and overlapping physics.

Fig. 1. The 13 constraints within the TTRS model [START_REF] Clement | The TTRS: 13 constraints for dimensioning and tolerancing. geometric design tolerancing: theories, standards and applications[END_REF]. Fig. 2. Objects of MRGEs in the Modelica environment. Fig. 4. Blocks related to the 13 constraints and the code related to the C2 constraint, within the Modelica environment. Fig. 7. Relative positioning of the DDR3 modules and the FPGA on the electronic board in [START_REF] Vv | Stratix Documentation[END_REF].

ACKNOWLEDGMENT The present work was developed with the contribution of the Italian Ministry of University and Research (MIUR) performing the activities of the PON01_01268 DIGIPAT Project, Digital Pattern Product Development: A Pattern Driven approach for industrial product design, simulation and manufacturing.
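To make the coupling of Eq. (2) with the C2 constraint concrete, the following sketch is a rough Python transcription of the idea, not the paper's Modelica model; all numerical values are illustrative assumptions. It evaluates the balance distance from the thermal parameters and uses it as the distance parameter of C2, mirroring the assignment c2_1.d = thermalConductor2_1.e:

```python
import numpy as np

def balance_distance(e0, alfa, dT):
    """Eq. (2): balance distance e between the two lumped masses."""
    return e0 + alfa * dT

def heat_flow(k, A, e, dT):
    """Conduction through the metal medium: G = k*A/e, Q_flow = G*dT."""
    return (k * A / e) * dT

# Illustrative parameters (assumed, not from the paper)
e0, alfa = 0.010, 2e-4      # nominal distance [m], sensitivity [m/K]
k, A = 400.0, 1e-5          # conductivity [W/(m K)], cross-section [m^2]
dT = 35.0                   # FPGA-to-DDR3 temperature difference [K]

e = balance_distance(e0, alfa, dT)     # plays the role of thermalConductor2_1.e
Q = heat_flow(k, A, e, dT)

# C2 constraint: displace the DDR3 mass by distance e along the given direction
v = np.array([1.0, 2.0, 3.0]); v /= np.linalg.norm(v)
ddr3_position = e * v                  # c2_1.d = e, applied from the origin
print(f"e = {e:.4f} m, Q_flow = {Q:.2f} W, position = {ddr3_position}")
```

The design choice is the same as in the paper: the geometry is not fixed a priori but is driven by a physical quantity, so that the layout updates as the thermal state evolves.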
Finally, the authors thank Eng. Giovanni Marco Di Vincenzo for his technical support.
04116006
en
[ "shs" ]
2024/03/04 16:41:26
2023
https://hal.science/hal-04116006/file/CMDL%20ABII.pdf
Alexis Blouët

From the Right to Privacy to the Regime of the Police Film: The Conseil constitutionnel and the Law on Criminal Responsibility and Internal Security

Alexis Blouët: contract lecturer-researcher, Université de Grenoble; associate researcher, University of Edinburgh

On 20 January 2022, the Conseil constitutionnel ruled on the law "on criminal responsibility and internal security" (loi relative à la responsabilité pénale et à la sécurité intérieure), following an a priori referral by opposition deputies1. The provisions reviewed mostly fell under Title III of the law, concerning the use of cameras, or "image-capture devices"2, by police forces3, resulting in what we here call "police films". The decision of 20 January 2022 follows on from an earlier one, handed down on 20 May 2021, on the law "for comprehensive security preserving liberties" (loi pour une sécurité globale préservant les libertés)4. At first sight, the two are markedly different. Whereas on 20 May 2021 the Conseil struck down most of the provisions on the use of cameras by police forces5, on 20 January 2022 it upheld nearly all of them. A continuity nevertheless emerges, anticipated in particular by Vanessa Barbé and Serge Slama in their commentaries on the decision of 20 May 20216. Both authors argued that the latter was in fact a "two-stage" decision. The Conseil admittedly censured certain modalities of police video capture, but it never called into question the principle of their existence, nor did it question more generally the idea of police recourse to cameras. The judges of the rue de Montpensier had, through their censures, pointed out the path for the legislature to follow, just as the Conseil d'État had earlier, in the Quadrature du Net ruling of 20207, encouraged the adoption of a textual framework for the use of drones. The "sécurité globale" decision of 20 May 2021 thus fell within the Conseil's "switchman" (aiguilleur) function, not in the traditional sense of the notion, whereby the constitutional judge indicates to the political authorities the formal rank of the reforms they intend to adopt [START_REF] Eisenmann | La justice constitutionnelle de la Haute Cour constitutionnelle en Autriche[END_REF]. The court rather acted as a substantive switchman, indicating the content of the adjustments to be made in order to institute new types of police camera by statute. The function of the Conseil constitutionnel vis-à-vis the political authorities, in light of the decisions on comprehensive security and internal security, can also be captured by the notion of "companion" (accompagnateur), which Julie Alix and Christine Lazerges regard as characterizing the court's role in police and security matters [START_REF] Alix | La loi « sécurité globale[END_REF]. The term "co-legislator" could also be apt, not in the sense that the Conseil exercised an office similar to that of the political authorities [START_REF]Sur les convergences (et divergences) entre les activités du juge constitutionnel et celle du législateur voir la synthèse de D. Baranger[END_REF], but from an objective procedural perspective, insofar as it intervened upstream in the making of the law by guiding the legislature.

I. A Visa Regime

The notion of visa refers to two different dimensions united by a common point: that of dealing with the police film from the outside, without determining its content.
The first dimension echoes the authorizing aspect of the visa and encompasses the provisions attributing to an authority the power to decide on the film's existence (A). The second resonates with the etymology of the term, visa meaning in Latin "things seen", and refers to the provisions concerning the material and technical exploitation of the film, insofar as these tend to be subsumed under a norm of viewing (B). Like that of habilitation, the notion of an exploitation regime refers to the hierarchical idea, without that idea being its sole mainspring.

A. An Authorization (Habilitation) Regime

B. An Exploitation Regime

With respect to police films, the notion of exploitation is understood far more broadly than in cinema. Whereas for movie theatres it is limited to the activity of showing the film to an audience, generally for commercial purposes42, exploitation here is articulated to a much more general conception of use.

II. A Production Regime

The idea of production in cinema designates, lato sensu, the activities of fabricating the film's content [START_REF] Masson | Production, cinéma[END_REF]. This content can be apprehended from a narrative and symbolic point of view through the notion of the scenario, which will help grasp the hypotheses for recourse to the police camera (A). It can also be apprehended from a material point of view, joining the notion of réalisation, which designates the concretization of the film through recourse to the camera [START_REF] Aumont | Mare indiquent que traditionnellement la réalisation s'appliquait au scénario, en tant que mise par écrit à fins cinématographiques d'une histoire orale[END_REF]. The notion of a réalisation regime will then help apprehend the rules relating to the sensory content of the police film (B).

... social narratives [START_REF] Cover | The Supreme Court, 1982 Term --Foreword: Nomos and Narrative[END_REF]. From this perspective, it can be argued that the criminal responsibility and internal security law provides that recourse to the camera is articulated to narratives called, here, "scenarios". Recourse to the police camera thus appears structured around the occurrence of unlawful, or more broadly dangerous, situations calling for a reaction from the public force, which scholarship designates by the terms "purpose" (finalité) [START_REF] Debove | Surveiller et punir dans un monde 3.0[END_REF] or "case"59 of recourse to the police camera, and which we shall designate by the term "hypothesis". Unlike in cinema, where the scenario is identified with the film and corresponds to it entirely, the "police film" detaches itself in part from its scenario and is conceived as an instrument intended to act, potentially while events are under way, on their unfolding. The degree of precision of the preventive-stakes "scenario" for the on-board camera, applicable when "an incident is liable to occur, having regard to the circumstances of the intervention or the behaviour of the persons concerned"68, is likewise intermediate. This hypothesis for recourse to the camera refers to the canvas of the police intervention by vehicle and designates elements attesting to the probability of the outcome to be avoided, namely "the circumstances of the intervention or the behaviour of the persons concerned". That outcome is, however, designated by the term "incident", which refers to an indeterminable field of events that may even fall outside the missions of the police. Finally, the degree of precision of the five other hypotheses for recourse to the administrative drone ("the regulation of transport flows for the sole purposes of maintaining public order and safety; border surveillance; the prevention of cross-border movements of prohibited goods; the prevention of acts of terrorism; and rescue of persons"69) appears low. These hypotheses for recourse to the camera contain no reference to elements attesting to the probability of the outcome to be avoided, and the last two contain no canvas either, being defined solely by the outcome to be avoided.

Beyond the hypotheses for recourse to the camera, the notion of scenario also feeds reflection on the norms relating to the individuals filmed, through the notion of the character (personnage), understood as a human form created by the scenario75. The notion of character underlines the distance and the reduction produced by the eye of the police camera between the images of an individual in the determinate space-time of the film and the reality of his individuality, inscribed both in an eminently longer historical construction and in an inner self that is difficult to fathom. In the decision, the Conseil constitutionnel takes an interest in the law from the point of view of the "characters" on whom this status is imposed, that is, the filmed individuals who do not belong to the police forces. For the purposes of upholding the law, the court highlights those of its provisions capable of mitigating the violence of this reduction and distancing by the police camera. The Conseil constitutionnel identifies two types of provision. The first concerns the identification of the filmed individuals before the camera is used, so as to limit their number. This type applies to the judicial drone through the reference to the "determinate" individuals who are the subject of the judicial proceedings76, even though the Conseil doubts the measure's effectiveness and states that the drone will inevitably film a "very large number of persons unconnected with the judicial proceedings in question"77. The second type of provision relates to the modalities for informing the filmed individuals of the camera's use, reflecting the principle of information that lies at the heart of the European personal data protection package78. The point here is not to obtain the individuals' consent but, far more probably, to encourage them to help ensure that the dangerous or unlawful situation does not come about. This information is meant to assign them an objective, like the characters of a scenario in film theory79. Depending on one's point of view, this information aims either to discipline the individual or to apprise him of what is at stake in the situation so that he does not merely undergo it and, like an actor, appropriates the character and acts accordingly. The provisions in question, put forward by the Conseil constitutionnel to justify the law's validity, are the obligation of notice by signage when the on-board camera is used80, from which the police may dispense only in case of necessity81, and the communication of the video-surveillance measure to the individual concerned by police custody82. Analogous provisions exist in the law for administrative drones, but the Conseil constitutionnel did not consider it necessary to mention them in order to uphold those provisions. While the characterization of the Conseil's role in this sequence is open to debate, what is less debatable is that the decision of 20 January 2022 validated an increase in the means, and hence in the power, of the police forces.
In this sense, as Stéphanie Hennette-Vauchez points out, before framing devices liable to restrict individual liberties, the legislature legalizes them [START_REF] Henette-Vauchez | La démocratie en état d'urgence[END_REF]. The decision of the Conseil constitutionnel thus came to punctuate a development of police capacities through additional technologies and/or hypotheses of use for recourse to the camera: fixed video-surveillance cameras in police custody, drones in administrative and judicial police matters, and cameras on board police vehicles. These devices were added to the pre-existing ones: non-airborne cameras in judicial matters [START_REF]Pour l'espace privé Article 706-96 du code de procédure pénale. 13 Article L.252-6 du code de sécurité intérieure. 14 Article L. 241-2 du code de sécurité intérieure. 15 Code de procédure pénale articles[END_REF], non-airborne cameras around demonstrations13 and body-worn cameras14. Short of the capacity to produce their own films, police forces could also, already, consult the images taken by the various fixed "video-surveillance" devices of private entities and other public entities15.

The Conseil constitutionnel presents its holding as the result of a balancing between two norms: on the one hand, the objective of constitutional value of safeguarding public order16 and, on the other, the protection of the right to respect for private life included in Article 2 of the Declaration of the Rights of Man and of the Citizen17. An analysis of the Conseil's reasoning nevertheless leads one to question whether it treated these two norms as equivalent. The judges appear in fact to have positioned themselves a priori on the side of the legislature and of the efforts it made to limit infringements of the right to privacy, far more than on the side of the individuals liable to fall within the law's scope. In other words, the question structuring the review appears to have been: in seeking to protect public order, did the legislature sufficiently limit the infringements of the right to respect for private life? Rather than: in view of all the infringements of the right to respect for private life, is the gain in terms of public order sufficient? This preferential affinity with the legislature matches the scholarly discourse highlighting the court's systemic identity as a moderator of the political organs, to the detriment of that of a pure protector of individual rights and liberties18. Another characteristic of the Conseil, more specific to it from a comparative perspective, is the minimalism of its reasoning19. Thus, the text of the decision offers no definition of the reference norm "right to respect for private life", whether in general or in relation to the use of cameras by police forces. The right to respect for private life can only be understood in relation to the rare provisions of the law that the Conseil authoritatively invalidates or subjects to reservations of interpretation and, for the rest, restates synthetically, without argument, in order to show that the right to respect for private life is not disregarded by the law. The boundary between the constitutional reference norm (the right to respect for private life) and the statutory norm under review (the criminal responsibility and internal security law) is therefore difficult to establish on reading the text of the decision, since the former norm is understood by the Conseil constitutionnel only in light of the latter. The court thus appears to have sculpted a constitutional regime of the police film out of the very text of the law, even if external European and domestic normative elements manifestly influenced the review. We shall proceed mainly by systematizing the provisions of the law to which the Conseil constitutionnel refers in the decision, according to its assertion that they violate, or potentially violate in the case of the reservations of interpretation, the right to respect for private life or, on the contrary, contribute to its sufficient protection. The value of this approach is to bring out a general framework for qualifying the norms of video production by the police. Moreover, in contrast with the rest of the literature, which anchors the organization of the analysis of these norms on, for example, the technological device itself20, the notion of data21 or the framework of the texts22, the regime proposed here will be articulated on the product of the cameras considered descriptively and materially, through the notion of the film, in the sense of a "recording of moving images"23. In so doing, the article will borrow from the cinematographic field, referring occasionally to the legal provisions framing the seventh art, but also to non-legal notions associated with that activity. The objective is, thanks to the cinematographic metaphor, to see more clearly into the practical stakes of the police camera. The wager consists in confronting the difficulties linked to technological specificity by mobilizing a notional and linguistic framework likely to appear more familiar to those unversed in video surveillance. This regime of the police film, which the Conseil constitutionnel seems to trace, can be organized around two meta-types of norms: those referring to a sub-regime called the "visa" regime, external to the content of the film (I), and those referring to a sub-regime called the production regime, bearing on the content of the film (II). The conclusion will deal directly and specifically with individual rights and liberties, in light of this in-depth analysis of the law and of the Conseil's decision. The perspective will be evaluative, particularly as regards the choice of respect for private life as the sole individual right and liberty of reference.
The authorization (habilitation) of police forces to use cameras is a guiding thread of the Conseil constitutionnel's review. More precisely than with the visa, whose intervention comes after the production of the film in order to authorize its showing to the public24, the analogy with cinema operates here through the notion of filming permits, which are sought upstream by film crews for the public or private places they traverse25. The power of authorization, however, is not articulated to the ownership of the places, but finds its foundation in the principle of control that structures personal data law in the European protection package26. To justify upholding the devices, the Conseil constitutionnel underlines that the law frames them with a norm of prior authorization of their use. The duration of this authorization is temporally delimited, whether that temporality is inherent in the object of the film (the legal duration of police custody27) or set by the decision of the authority (three months for administrative drones and eight months for judicial drones28). Moreover, the authorization emanates from an authority that inscribes the film shot by the police officer(s) within a legitimation device of a hierarchical type. The authority is external or internal to the police forces, implying a stronger or weaker control depending on the greater or lesser number of persons concerned by the device [START_REF]Le contrôle apparaît devoir être plus intense pour le drone et la caméra embarquée en raison du plus grand nombre de personnes couvertes par ces dispositifs. En revanche, en matière de vidéosurveillance, le dispositif ne concerne qu'un individu sur lesquelles il existe des présomptions de commission d'acte « illicites[END_REF]. The authorization is internal for video surveillance in police custody and emanates from the head of service30. It is external for the administrative drone and falls to the prefect, the ordinary authority in matters of general administrative police31, who already holds an analogous competence for the installation of the fixed "video-surveillance" devices of private and public entities32 and for non-airborne cameras around demonstrations [START_REF]L.252-6 du code de sécurité intérieure[END_REF]. The external character of this authorization must nevertheless be qualified, for it is to be understood organically and not functionally: while the prefect is an authority distinct from the police forces, he is supposed to contribute to the same mission of general administrative police. Following the analogy with cinema, this would amount to conceiving that filming permits could be granted by the production teams themselves and not by the owners of the places filmed. Moreover, the prefect's intense relationship of subordination to the political authorities can, depending on one's point of view, be conceived either as the guarantee of a responsible decision or, on the contrary, as carrying a risk of deficient control in the event of a government with little concern for individual liberties. For the judicial drone, the authorization is also external, with a nuance similar to that applying to administrative drones. The competence falls to the magistrate of the judicial order34 who directs the investigation, a prerogative he holds in his capacity as guardian of individual liberties35 and in continuity with his competences regarding non-airborne cameras36. Symmetrically, the notion of an authorization regime underlines the attention the Conseil constitutionnel pays to the existence in the law of a competence to withdraw that authorization. As in the sécurité globale decision37, the absence of this competence regarding the use of "administrative drones" by the municipal police is among the grounds leading the court to censure the device38. By contrast, the Conseil constitutionnel duly underlines that this withdrawal competence is built into the other devices it upholds. For drone uses, the law enshrines a parallelism of forms, and withdrawal emanates from the same authorities that authorized: the prefect and the magistrate of the judicial order39. As regards video surveillance in police custody, the competence falls to an authority different from the one that authorizes. The authority is this time external and judicial, and rules where appropriate upon request of the person subject to the device40. The Conseil constitutionnel thereby validates a change from the sécurité globale version, whose device it had censured on the ground, in particular, that this competence fell to the head of service, who already held the authorization competence41. Thus, if one follows the Conseil constitutionnel, the introduction of an external authority at the withdrawal stage compensates for the organic deficit of control implied by the internal character of the authorization. The notion of an authorization norm does not apply, however, where the hypothesis of recourse to video refers to the immediacy of the police intervention, that is, in situations where the police face the crystallization of an a priori unlawful or dangerous situation. We refer to on-board cameras and to the administrative drone in cases of "particular and unforeseeable exposure to a risk of a characterized attack on persons or property". The notion of an authorization regime does not thereby lose all heuristic value for understanding the law and the Conseil's decision. The stake of authorization simply rises in generality and dissolves into the general principle of control mentioned above, which refers, for both devices, to informing the prefect after their use. Beyond noting these statutory provisions, the Conseil appears to have watched over the effectiveness of this control by censuring the immediate use of administrative drones on the ground that the content of this information was not sufficiently precise.

Notes: 35 Article 66 de la Constitution. 36 Pour la voie publique: C. Cass. Crim., 11 décembre 2018, n° 18-82.365; Crim., 8 déc. 2020, n° 20-83.885, Dalloz Actualités, notes S. Fucini. Pour l'espace privé: Article 706-96 du code de procédure pénale. 37 Décision n° 2021-817 DC du 20 mai 2021, Loi pour une sécurité globale préservant les libertés, Points 129 à 141. 38 Point 36 (Article 8 du projet de loi 7° bis). 39 Points 28 (Article L. 242-7 du Code de sécurité intérieure introduit par l'article 15 de la loi) et 43 (Article 230-48 du Code de procédure pénale introduit par l'article 16 de la loi). 40 Point 9 (Article L. 256-2 du Code de sécurité intérieure al. 3 et 5 introduit par l'article 13 de la loi). 41 Décision n° 2021-817 DC du 20 mai 2021, Loi pour une sécurité globale préservant les libertés, Points 83 à 88. F. Debove, « Surveiller et punir dans un monde 3.0 », AJA Pénal, 2022, p. 165.

The notion of use thus covers all the human activities that can be undertaken directly on the police film, and the exploitation regime refers to the provisions relating to the technical manipulation of the film. The Conseil constitutionnel underlines that, except for the judicial drone, the only technique for using the films must be viewing, understood as the human gaze. In cinema law, an analogous norm is the one prohibiting spectators from recording the film for piracy purposes, for it underlines the necessity for them to watch the film themselves [START_REF]Le fait de filmer avec un caméscope ou son téléphone portable l'écran sur lequel est projeté un film dans une salle de cinéma est punissable d'une amende de 300.000 euros et de trois ans d'emprisonnement[END_REF].
To justify the upholding, the Conseil constitutionnel thus puts forward that the criminal responsibility and internal security law prohibits, for video surveillance in police custody, supplementation by biometric devices44 and, for the administrative drone and on-board cameras, supplementation by facial recognition devices45. The court also adds, for these two devices, a reservation of interpretation providing that the prohibition also applies to coupling with devices external to the cameras46. The Conseil further underlines that, for the three aforementioned devices, the law ensures that the films are not digitally manipulated after shooting in order to be associated with other personal data processing software47. The issue of technological intermediation between the human gaze and the image also exists in cinema, but with a markedly attenuated scope. It refers to 3D glasses, to which some films are adapted, but which, unlike the devices reviewed by the Conseil constitutionnel, serve a recreational function and do not add to the film elements external to the image. To justify the upholding with respect to video surveillance in police custody and the on-board camera, the Conseil constitutionnel also underlines that the criminal responsibility and internal security law limits this viewing competence in terms of individual attribution. The law also contained similar provisions concerning drones48, which the Conseil did not, however, consider necessary to mention in the decision. The most precise analogy with cinema probably lies in the classification mechanisms49 restricting access to a film according to the youth of potential spectators. In the criminal responsibility and internal security law, however, the main idea underlying the individual restriction is obviously not age but the hierarchical principle, in its intra-police form. The Conseil constitutionnel thus notes that, during the operation, the law provides, for video surveillance, that it may be viewed only by the head of service and a person designated by him in advance50 and, for the on-board camera, only by agents of the command post and the personnel involved in the conduct and execution of the intervention51. After the operation, a functional logic is added to restrict the viewing competence, for viewing must accompany a step of reporting to the judicial authority52 or assist in drafting the intervention report53. The notion of a visa regime has made it possible to consider the provisions of the law noted by the Conseil constitutionnel that apprehend the police film from a disembodied point of view. The counterpart of this notion is that of the production regime, which refers to the filmic content of police cameras.
In other words, the police film would be an accessory of the scenario attached to its hypothesis of recourse, whereas in cinema the scenario would rather be thought of as an accessory of the film. This instrumental character of the camera is found in the Conseil constitutionnel's reservation of interpretation on the subsidiarity [START_REF] Debove | Surveiller et punir dans un monde 3.0[END_REF] of recourse to the administrative drone. The mobilization of drone filming can only be a means in the service of the public order purposes set out in the law and, as such, the prefect must ensure that its use is necessary, as against any other means, to accomplish the purpose pursued60. The scenarios of the criminal responsibility and internal security law can be classified into two meta-types, according to whether the elements are directly or only indirectly capable of attesting to the constitution of unlawful or dangerous situations. The dichotomy maps functionally, though not necessarily in positive law, onto the distinction between judicial and administrative police. In the first type of scenario, called "administrative", the stakes are mainly preventive, and the camera intervenes above all to help the police keep situations from crystallizing. We refer to the devices of video surveillance in police custody, administrative drones and the on-board camera ex ante [START_REF]Les notions ex ante et ex post distinguent selon que l'usage de la caméra embarquée intervient lorsqu'est « susceptible de se produire un incident » ou se « produit un incident[END_REF]. In the second type of scenario, called "judicial", the stakes are mainly resolutive, and the camera intervenes to help the police deal with the situation, that is, cumulatively where appropriate, to authenticate it, to bring it to an end and to identify those responsible. We refer to the devices of judicial drones and the on-board camera ex post. The Conseil constitutionnel duly underlines that the criminal responsibility and internal security law, in relation to each technology, determines the different "scenarios", and the judge sets out all the hypotheses for recourse to the camera contained in the law. In this sense, the Conseil implicitly derives a norm of characterization of the recourse to the camera by police forces62, in order to avoid, in its own words, a "generalized and discretionary" use63. Its censure, on grounds of imprecision, of recourse to the administrative drone as regards "particular and unforeseeable exposures to a risk of a characterized attack on persons or property" might also suggest that the Conseil derived a norm of precision for these hypotheses64. The rest of the decision, however, leads one to doubt this, since the judges of the rue de Montpensier upheld hypotheses for recourse to the police camera whose precision appears variable and, for some, low. The notion of scenario can help bring this to light in that, in its "administrative" strand, it helps decompose the hypotheses for recourse to the camera by the police forces into three categories of objectivizing elements. The overall assessment of a hypothesis's precision is thereby substantiated, since it rests on three components whose presence and precision can be identified. These three components correspond to the ternary structure of scenarios as ordinarily conceptualized in film theory65. The first refers to the exposition phase, which consists in establishing the film's environment; the second to the action phase, narrating the occurrence of a disruptive element in that environment and the reaction it provokes among the characters; the third to the concluding phase, showing the results of that action on the environment. Transposed to the police camera, this ternary structure yields: (1) "the canvas", corresponding to the exposition phase and referring to a situational framework of police intervention; (2) "the outcome to be avoided", referring to the concluding phase; (3) "the elements attesting to the probability of that outcome occurring", designating the action phase as a call for the police camera to intervene in the situation. Among the law's hypotheses for recourse to the camera, it is in the matter of video surveillance that the degree of precision appears highest. The hypothesis designates a canvas corresponding to a device defined by law (police custody), an outcome to be avoided that is distinctly stated ("flight or danger to oneself or to others" of the person in custody) and a reference to elements attesting to the probability of that outcome ("serious reasons"66). The degree of precision of the hypotheses appears intermediate for certain recourses to the drone in administrative matters, in that the three components (canvas, outcome to be avoided and elements attesting to its probability) refer to less precise provisions and/or merge with one another: "the prevention of attacks on the safety of persons and property in places particularly exposed to risks of commission of certain offences; the protection of public buildings and installations and their immediate surroundings particularly exposed to risks of intrusion or degradation; the security of gatherings of persons on the public highway or in places open to the public where such gatherings are liable to give rise to serious disturbances of public order"67.

The scenario norms have made it possible to apprehend the content of the police film from a narrative point of view. The réalisation norms will do so from a more material and technical perspective.

B. A Réalisation Regime

Understood as the fabrication of the film from the scenario83, the notion of réalisation in cinema refers to choices of two orders on the director's part: first, technological, concerning the material device he uses, and second, figurative, concerning the content of the image he films.

Notes: 63 Points 6, 52. 64 L'autre justification est que ces cas ne sont pas d'une particulière gravité. Point 31. 65 Y. Lavandier, « La dramaturgie : l'art du récit », Bruxelles, Les impressions nouvelles, 2019, pp. 210-215. 66 Article L. 256-1 alinéa 1 du Code de sécurité intérieure introduit par l'article 13 de la loi. 67 Article L. 242-5 du Code de sécurité intérieure introduit par l'article 15 de la loi. 75 A. Gardies et J. Bessalel, « 200 mots-clés de la théorie du cinéma », Paris, Les éditions du cerf, p. 161. 76 Point 41. Article 230-47 du Code de procédure pénale introduit par l'article 16 de la loi refers to "one or more persons located in a public place"; it is probably this provision that the Conseil constitutionnel has in mind when it mentions "determinate persons".
This dichotomy has led us to constitute two sub-types of norms in the criminal responsibility and internal security law, noted by the Conseil constitutionnel here solely to justify its upholding of the law. As regards the sub-type of so-called technological norms, the first is the prohibition of sound capture84 for the administrative drone and for video surveillance in police custody85. The film these devices produce must therefore be silent, and the individuals filmed appear only through gesture. This provision, already present in the sécurité globale law, can moreover be subsumed under the principle of viewing discussed in §I.B. The absence of sound indeed confirms that police officers can rely only on sight to analyse the films. This connection is congruent with the fact that, in the law, the prohibition of sound capture appears in the same paragraphs as those prohibiting biometric devices, facial recognition and interconnection or automated linkage with other personal data processing operations. The other technological norm concerns the number of devices that may be used within a single operation. The Conseil constitutionnel thus duly underlines that the criminal responsibility and internal security law institutes a limit for the administrative drone, defined not arithmetically but through authorization, by the number mentioned in the prefect's authorization86. This provision of the criminal responsibility and internal security law responds to the censure of the sécurité globale bill, which referred in particular to the absence of any cap, raising the spectre of recourse to drone "swarms"87. The prohibition does not apply to the on-board camera, nor to the judicial drone, a domain in which sound capture was in any event already instituted, though only in private space (Article 706-96 du Code de procédure pénale). As regards the sub-type of so-called image norms, it refers to the cinematographic notion of the field (champ), in the sense of the portion of space perceived at each instant of the filmic image [START_REF] Aumont | Dictionnaire théorique et critique du cinéma[END_REF]. The provisions of the law noted by the Conseil constitutionnel for the purposes of upholding constrain the field filmed by police cameras so as to restrict it or to densify it. The restriction of the field can be understood abstractly, in terms of zones identifiable by geographical data in which recourse to the police camera is authorized. The Conseil constitutionnel underlines that, under the law, the authorizations of the prefect and of the magistrate delimit, for the one, a "perimeter" [START_REF]IV du code de sécurité intérieure introduit par l'article 15 du projet de loi. 90 Point 44[END_REF] and, for the other, a "place"90 that the drones may overfly. In cinema law, an analogy can be drawn with the spaces, identified for example by street names and numbers, that municipalities grant to film crews when shooting takes place on the public highway91. The restriction of the field is also understood concretely, by reference to material elements endowed with a social meaning which, whatever the zone, may not appear in the image. The Conseil constitutionnel's review thus refers to the provisions of the law prohibiting the filming of domestic space and its entrances for administrative drones and on-board cameras92. These provisions take up the statutory rules framing the fixed "video-surveillance" devices of private and other public entities93, and the Conseil constitutionnel's related 1995 case law enshrining the principle of the "inviolability of the home" was one of the references of the review94. In cinema law, this restriction is akin to censorship, which exists with greater or lesser intensity depending on the country and the historical period, directed against "sexuality and its representation, violence, and the denunciation of political abuses" [START_REF] Aumont | Dictionnaire théorique et critique du cinéma[END_REF] or exercised in the name of the authorities' own conception of human dignity [START_REF] France | la loi dispose qu'un visa peut être refusé pour des motifs tirés de la protection de l'enfance et de la jeunesse ou du respect de la dignité humaine[END_REF]. Beyond the object of the restrictions, one difference is, however, that in cinema censorship is exercised, on the one hand, a posteriori, after the shooting of the film, at least formally [START_REF]Nous pensons à l'autocensure, se référant au cas où le réalisateur ne tourne pas certains plans afin de s'assurer que son film passera la censure officielle[END_REF], and, on the other hand, through an authority external to the film crew. For the police film, by contrast, the practical value of the norm restricting the field is weaker. The "censorship" of domestic space and its entrances depends in fact on an instantaneous decision of "interruption" [START_REF]III du code de sécurité intérieur introduit par l'article 15 de la loi[END_REF], taken in the course of shooting, not by an external authority but by the filming authority itself, that is, the police. Finally, image norms are not necessarily to be seen as diminishing the field; they can also be conceived as densifying it. In the matter of video surveillance in police custody, the Conseil constitutionnel duly underlines that the law provides that an item of "set decoration", a screen meant to preserve the intimacy of the person in custody, must appear in the image99.

Conclusion

The idea of cinema has thus helped analyse in depth the criminal responsibility and internal security law, the Conseil constitutionnel's related review, and the practical stakes of police recourse to the camera. From the outset, by placing the notion of the film at the heart of the reflection, the idea of cinema made it possible to sketch the contours of a regime capable of generating a cross-cutting and systematized, though not fully exhaustive100, reflection on the provisions of the law's various devices noted by the Conseil constitutionnel. This was done in light of transversal notions and norms that can likewise interrogate and illuminate the police camera devices pre-existing the law and those that will be instituted in the future. These transversal norms and notions are reflected in the structure of this article. Within that structure, the cinematographic universe also provided a counterpoint to that of police cameras, suggesting convergences and contrasts, and notions, such as those of scenario and character, that enrich the analysis.
Cette compréhension affinée peut concourir à un regard évaluatif en terme de droits et libertés individuels, par lequel il est possible, au préalable, de regretter que le Conseil constitutionnel ait contrôlé les modalités d'encadrement des dispositifs de caméra policière sans jamais s'interroger sur leur légitimité, opérant ainsi un contrôle très formaliste. La décision et la loi ont néanmoins certaines forces: principe d'autorisation et de retrait, principe de visionnage, principe de caractérisation des hypothèses de recours à la caméra, principe de contraintes sur le champ filmé… Émergent cependant, également, de sérieux doutes et potentielles lacunes: adéquation de la seule autorité préfectorale en matière d'autorisation et de retrait pour l'usage des drones administratifs, faiblesse de l'encadrement de l'exploitation matérielle du film en matière de drones judiciaires, imprécision de certaines hypothèses de recours la caméra, absence de garanties à l'effectivité de l'interdiction de filmer les domiciles 101 . La métaphore cinématographique a également servi de palliatif au style minimaliste du Conseil constitutionnel traduit ici par l'inconsistance de son argumentation dans son contrôle de la loi à la lumière du droit au respect de la vie privée. La juridiction n'a pas pour autant contrôlé à « l'aveugle ». Elle semble en effet avoir effectué son contrôle à la lumière d'une constellation de textes traitant des caméras de police ou d'objets approchants : RGPD, directive police-justice, loi informatique et liberté 102 , avis de la CNIL, lois précédentes de 1995, 2011, et sécurité globale 103 et, décisions de la Cour de Justice Européenne 104 et de la Cour européenne des droits de l'homme en matière de surveillance 105 . Ces textes semblent en fait composer un bloc de constitutionnalité non formalisé, et donc fragilisé, sur la question des caméras policières que la loi responsabilité pénale et sécurité intérieure et son contrôle par le Conseil constitutionnel sont appelés à rejoindre. Si à peu énoncer, le Conseil gagne probablement en flexibilité et en adaptabilité 106 , la question de l'usage des caméras par les forces de polices aurait pourtant semblé mériter un traitement approfondi et davantage explicite en matière de droit et libertés individuels, ne serait-ce que pour ancrer plus fermement les forces de la décisions en la matière. Ainsi, le principe de visionnage 107 , désignant l'analyse du film de police par un regard strictement humain 108 , est d'ores et déjà menacé par le projet de loi concernant les jeux Olympiques et Paralympiques de 2024 qui autorise la vidéosurveillance algorithmique. L'affirmation claire d'une telle norme par le Conseil, plutôt qu'implicite par renvoi aux dispositions législatives contrôlées, aurait pu par ailleurs couper court aux dispositifs analogues déjà institués par certaines municipalités de 77 Ibid. 78 Article 14 du RGPD ; Article 13 de la directive police-justice ; Article 48 et 104 de la loi CNIL. 79 Y. Lavandier, « La dramaturgie : l'art du récit », Bruxelles, Les impressions nouvelles, 2019, pp. 211-S'agissant du sous-type de normes dites d'image, il renvoie à la notion cinématographique de champ, au sens de portion d'espace perçu à chaque instant de 83 Voir supra. 84 Points 10, (Article L. 256-3 du Code de sécurité intérieure introduit par l'article 13 du projet de loi). Point 30, (Article L. 242-4 alinéa 2 du Code de sécurité intérieure modifié par l'article 15 du projet de 215 B Un régime de réalisation loi). Point 55. À 85 87 ; S. 
Slama, « Censure partielle de la loi « sécurité globale » : après demain les drones ? », Décision n°2021-834 DC du janvier 2022, Loi relative à la sécurité pénale et à la sécurité intérieure. 2 F. Debove « Surveiller et punir dans un monde 3.0 »,AJ Pénal, 2022, p. 164. 3 L'institution policière est la seule à laquelle nous nous réfèrerons, par volonté de simplification et car elle est la principale concernée par l'objet de la loi. Le texte habilite néanmoins d'autres autorités comme l'armée, la gendarmerie ou la douane, à recourir à la vidéosurveillance et aux drones.4 Décision n°2021-817 DC du 20 mai 2021, Loi pour une sécurité globale préservant les libertés.5 Le Conseil constitutionnel avait validé la compétence des policiers municipaux de visionner les images de vidéosurveillance des entités publiques et privées ainsi que les dispositifs dits de « caméras piéton ». Il avait, toutefois, censuré les dispositifs au coeur de la loi « sécurité pénale et sécurité intérieure » : la vidéosurveillance en garde à vue, le drone, et la caméra embarquée. 6 S. Slama « Censure partielle de la loi « sécurité globale » : après demain les drones ? », Le Club des Juristes, 10 juin 2021 ; V. Barbé, « Quarante ans après la décision « Sécurité et liberté », le Conseil constitutionnel filtre toujours « le moustique [pour] laisser passer le chameau », Le blog Droit administratif, 7 juillet 2021. 7 Dans l'arrêt Quadrature du Net, la juridiction administrative enjoignait aux préfet de police de Paris de cesser de recourir aux drones à moins qu'il n'opère des ajustements techniques afin que le dispositif ne relève plus du traitement de données personnelles. Sinon, un cadre textuel devait être adopté au moins sous forme règlementaire. CE, ord., 18 mai 2020, Association « La Quadrature du Net » et Ligue des Droits de l'Homme, n°440442, n°440445. B. Le Querrec, « Le Conseil d'État ouvre l'espace aux drones », RDLF, 2020, n° 81. Décision n°1982-141 DC du 27 juillet 1982, Loi sur la communication audiovisuelle. Décision n°1999-416 DC du 23 juillet 1999, Loi portant création d'une couverture maladie universelle. Voir par exemple D. Rousseau, « De quoi le Conseil constitutionnel est-il le nom ? », JusPoliticum, n°7, 2012 ; N . Belloubet, « La motivation des décisions du Conseil constitutionnel : justifier et réformer », Nouveaux Cahiers du Conseil Constitutionnel, n°55, 2017. Voir par exemple D. Baranger, « Sur la manière française de rendre la justice constitutionnelle : Motivation et raisons politiques dans la jurisprudence du Conseil constitutionnel », JusPoliticum n°7, ; A. Le Quinio, « La motivation des décisions du Conseil constitutionnel au prisme du modèle ibéro-américain », Nouveaux Cahiers du ConseilConstitutionnel, 2017, pp. 33-43. Voir par exemple F. Debove, « Surveiller et punir dans un monde 3.0 », AJA Pénal, 2022, p.164. Voir par exemple X. Bioy, « Les drones produisent-ils des données personnelles ? », AJDA, 2020, p.1552. Voir par exemple C. Crichton, « Encadrement législatif de la surveillance des drones », Dalloz IP/IT, 2022, p.63 J. Aumont et M. Marie, « Dictionnaire théorique et critique du cinéma » 3 ème ed.,Paris, Armand Colin, 2016, p. 112. Article L. 211-1 alinéa 1 du Code du cinéma et de l'image animée. P. Kamina, Droit du cinéma, Paris,LexisNexis, 2014, p.384. Le paquet européen de protection est composé du RGPD, de la directive police-justice et de leur traduction dans la loi informatique et liberté. P. Kamina, Droit du cinéma, Paris,LexisNexis, 2014, p. 275. 
Concernant les « scénarios » à enjeux résolutifs, la donne de l'analyse change. La notion de scénario perd quelque peu de son potentiel heuristique, car elle ne permet plus de décomposer les hypothèses de recours à la caméra en trois composantes. En effet, ne comptent, pour évaluer leur précision, que les éléments définissant les situations illicites ou dangereuses à résoudre, correspondant à la phase conclusive du scénario au cinéma. La caméra de police intervient en effet après la phase initiale d'exposition et l'enclenchement de la phase d'action. En dépit de cela, la notion de scénario peut quand même contribuer à évaluer la précision des hypothèses de recours à la caméra policière en soulignant bien qu'ici, l'enjeu réside dans la précision de la finalité du recours à la caméra, autrement dit dans celle de la situation à résoudre. En matière de caméra embarquée ex post,« lorsque se produit un incident », ce syntagme continue, comme plus haut, à désigner un champ évènementiel indéterminable et correspond donc à un faible degré de précision 70 . En revanche, en matière de drone judiciaire, les enjeux du recours à la caméra apparaissent bien plus déterminés. Les situations à résoudre sont définies par le droit pénal 71 où elles renvoient à la fuite 72 , l'ignorance des causes de la mort d'un individu ou/et de sa disparition73 , et aux éléments caractérisant un crime ou un délit puni d'au moins trois ans d'emprisonnement. Le législateur a ainsi suivi la voie tracée par la décision sécurité globale, dans laquelle le Conseil constitutionnel avait censuré le dispositif au motif que la loi ne limitait pas les infractions susceptibles de susciter un recours au drone judiciaire[START_REF]Le dispositif avait été censuré dans la décision sécurité globale car il comprenait les contraventions dans son champ d'application[END_REF] . Point 55. (Article L. 243-2 du Code de sécurité intérieure alinéa 2 par l'article 17 de la loi). Ibid. Le dispositif caméra embarquée avait été censuré dans la loi sécurité globale car la dispense d'information n'était pas suffisamment encadrée. Décision n°2021-817 DC du 20 mai 2021, Loi pour une sécurité globale préservant les libertés, Point 144. Point 9, (Article L. 256-1 du Code de procédure pénale introduit par l'article 13 de la loi). Point 10, du Code de sécurité intérieure introduit par l'article 13 de la loi). La loi et son contrôle par le Conseil constitutionnel évoquent également d'autres enjeux tenant notamment à la durée de conservation des enregistrements, à leur intégrité, et à la traçabilité des consultations. Voir à cet égard le scepticisme de la doctrine, d'associations et de différentes autorités consultatives sur l'usage de la caméra par la police. Ce scepticisme s'était exprimé principalement lors de l'adoption de la loi « sécurité globale » mais la loi « responsabilité pénale et sécurité intérieure » apparaît loin de pouvoir dissiper tous les doutes et critiques. P. Cassia, « Demain les drones », Mediapart, 20 mai 2020 ; S. Slama, « Censure partielle de la loi « sécurité globale » : après demain les drones ? », Le Club des Juristes, 10 juin 2021 ; La Quadrature du Net, « Sécurité globale : nos arguments juridiques », 15
00411610
en
[ "info.info-ds" ]
2024/03/04 16:41:26
2009
https://ujm.hal.science/ujm-00411610/file/conf-2009-mlg.pdf
Guillaume Damiand (email: [email protected]), Colin de la Higuera, Jean-Christophe Janodet (email: [email protected]), Emilie Samuel (email: [email protected]), Christine Solnon (email: [email protected])

A Polynomial Algorithm for Subisomorphism of Holey Plane Graphs

Presented at ILP-MLG-SRL, Leuven, Belgium, 2009.

Abstract: We address the problem of searching for a pattern in a plane graph, that is, a planar drawing of a planar graph. We define plane subgraph isomorphism and give a polynomial algorithm for this problem. We show that this algorithm may be used even when the pattern graph has holes.

Introduction

Many applications involve mining graphs in order to discover frequent connected subgraphs. While this problem may be solved in output-polynomial time for some specific classes of graphs, such as trees [START_REF] Chi | Frequent subtree mining: an overview[END_REF], tenuous outerplanar graphs [START_REF] Horvath | Frequent subgraph mining in outerplanar graphs[END_REF], or bounded-treewidth graphs [START_REF] Horvath | Efficient frequent connected subgraph mining in graphs of bounded treewidth[END_REF], it remains challenging in the general case, mainly because subgraph isomorphism is NP-complete in general. In this paper, we focus on plane graphs, i.e., planar graphs that are embedded in planes. Indeed, when graphs model objects defined on planes, such as images, one may consider the planar embedding of the graph. In [START_REF] Damiand | A polynomial algorithm for submap isomorphism: Application to searching patterns in images[END_REF], we defined and studied the plane (sub)graph isomorphism problem and showed that it can be solved in quadratic time whenever the pattern graph is compact, that is, whenever the pattern graph may be obtained by iteratively removing nodes and edges incident to the unbounded face. Compact plane graphs are somewhat restricted, however, because they cannot have any hole. It would therefore be impossible to use a compact plane graph to model and search for a cup with a handle, for instance (see Fig. 1): the background of the cup, visible through the handle, would be integrated into the modelling graph, so that the cup could not be searched for independently of its background. In this paper, we extend [START_REF] Damiand | A polynomial algorithm for submap isomorphism: Application to searching patterns in images[END_REF]'s algorithm to solve the subisomorphism problem for plane graphs with holes.

Plane Graphs

A graph is a pair G = (V, E), where V is a set of vertices and E is a set of edges. Below, all graphs are assumed to be connected, that is, every pair of vertices is linked by a sequence of edges. A planar embedding of a graph G is an injective mapping φ that assigns 2D points to vertices and 2D curves to edges. G is planar if an embedding exists such that no two embedded edges intersect, except at their endpoints. A theorem by [START_REF] Fàry | On straight-line representation of planar graphs[END_REF] states that, given a non-crossing representation of a planar graph, it is always possible to move the vertices so that the edges are drawn as straight-line segments. Hence, we only consider planar embeddings in which embedded edges are straight-line segments defined by the 2D embeddings of their endpoint vertices. Several embeddings may, however, exist for a given graph.
A plane graph is a triple G = (V, E, φ) such that (V, E) is a planar graph and φ : V → R² is an embedding of the vertices such that no two embedded edges intersect, except at their endpoints. A plane graph is made of (bounded or unbounded) faces: in the planar embedding, the complement of the set of edges is a disjoint union of simply connected regions called faces. For instance, the plane graph of Fig. 2 is made of 9 faces: faces A to H are bounded, whereas the white face is unbounded. Let faces(G) denote the set of faces defined by a plane graph G = (V, E, φ). For each f ∈ faces(G), we write boundary(f) for the sequence of vertices encountered when walking along the boundary of f with f on the right-hand side; this boundary is unique up to cyclic permutations. We finally introduce face-connectivity, which is based on sequences of faces sharing common edges. Formally, two faces f, g ∈ faces(G) are sewn if there exists an edge {i, j} ∈ E that belongs to both boundary(f) and boundary(g). The graph G is face-connected if for each f, g ∈ faces(G) there exists a sequence of faces f1, f2, …, fn such that f = f1, g = fn, and faces fi and fi+1 are pairwise sewn, for all 1 ≤ i ≤ n − 1.

Compact Plane Subgraph Isomorphism and Combinatorial Maps

A compact plane subgraph isomorphism problem between a pattern plane graph Gp = (Vp, Ep, φp) and a target plane graph Gt = (Vt, Et, φt) consists in deciding whether Gp is isomorphic to some subgraph of Gt obtained from Gt by iteratively removing nodes and edges adjacent to the unbounded face (see Fig. 3). More precisely, one must find a mapping h : Vp → Vt such that (i) h is injective; (ii) h preserves the edges, i.e., for all {x, y} ∈ Ep, one has {h(x), h(y)} ∈ Et; and (iii) h preserves the faces, i.e., for every face f ∈ faces(Gp), there exists a face g ∈ faces(Gt) such that for every edge {x, y} ∈ boundary(f), one has {h(x), h(y)} ∈ boundary(g). In [START_REF] Damiand | A polynomial algorithm for submap isomorphism: Application to searching patterns in images[END_REF], we proposed a polynomial algorithm for the plane subgraph isomorphism problem in which the pattern graph must be face-connected. This algorithm is derived from an algorithm that solves the subisomorphism problem for combinatorial maps. Combinatorial maps were introduced in the early 1960s to efficiently implement plane graphs [START_REF] Edmonds | A combinatorial representation for polyhedral surfaces[END_REF][START_REF] Tutte | A census of planar maps[END_REF]. They describe the topological organisation of plane graphs by decomposing every edge {i, j} into two darts (i → j) and (j → i), and by using two functions β1 and β2, which respectively encode dart succession along face boundaries and face adjacency (see Fig. 4). The algorithm proposed in [START_REF] Damiand | A polynomial algorithm for submap isomorphism: Application to searching patterns in images[END_REF] for solving the submap isomorphism problem is based on the fact that, given any starting dart, the traversal of a combinatorial map (that is, the order in which darts are discovered) is unique, provided that one has fixed (1) the strategy used to memorize darts that have been discovered but not yet exploited (e.g., Last In First Out / LIFO), and (2) the order in which β1 and β2 are used to discover new darts (e.g., β1 before β2).
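To make the dart formalism concrete, here is a minimal sketch encoding a small hypothetical map as plain Python dictionaries: a square with vertices 1-4, split by the diagonal {1, 3} into two triangular faces. The conventions used (string darts "i>j", β2 left undefined on the outer boundary) are ours, not those of the paper's implementation; the matching sketch given after the next paragraph reuses this encoding.

```python
# Darts are written "i>j" for the oriented edge from vertex i to vertex j.
beta1 = {  # successor dart along the same face boundary
    "1>2": "2>3", "2>3": "3>1", "3>1": "1>2",   # triangular face (1, 2, 3)
    "1>3": "3>4", "3>4": "4>1", "4>1": "1>3",   # triangular face (1, 3, 4)
}
beta2 = {  # opposite dart across the shared edge {1, 3}
    "3>1": "1>3", "1>3": "3>1",
}
# Darts bordering the unbounded face carry no beta2 entry in this toy
# encoding, which is also how a removed (hole) face can be represented.
```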
Hence, to determine whether a pattern map Mp is subisomorphic to a target map Mt, we choose a starting dart dp in Mp and, for every dart dt of Mt, we perform a traversal of Mp starting from dp in parallel with a traversal of Mt starting from dt. Each time a new dart is discovered in Mp, it is matched with the corresponding dart in Mt. At the end of the traversal, we check whether the matching that has been built actually defines a subisomorphism (in which case the problem is solved) or not (in which case we try another starting dart in Mt). Each traversal may be done in linear time with respect to the number of darts and, in the worst case, one traversal must be performed for every dart of Mt; the algorithm is therefore quadratic in the number of darts. Moreover, we showed in [START_REF] Damiand | A polynomial algorithm for submap isomorphism: Application to searching patterns in images[END_REF] that this algorithm for solving the submap isomorphism problem can be used to solve the compact plane subgraph isomorphism problem, provided that the pattern graph is face-connected. We also showed that it can be used to find patterns in images modelled by plane graphs (see Fig. 5).

Plane Subgraph Isomorphism for Graphs with Holes

As pointed out in the introduction, when looking for a pattern in an image, one may want to remove some parts of the image (corresponding to the background). This can be done by modelling the pattern image with a holey plane graph, i.e., a plane graph from which some faces have been removed. We define plane subgraph isomorphism for graphs with holes as follows. Consider a pattern compact plane graph Gp = (Vp, Ep, φp), a set of required faces F ⊆ faces(Gp), and a target plane graph Gt = (Vt, Et, φt). Let V_p^F denote the set of vertices that appear in F, and E_p^F the corresponding set of edges. One must find a mapping h : V_p^F → Vt such that (i) h is an injection; (ii) h preserves edges, i.e., for all {x, y} ∈ E_p^F, one has {h(x), h(y)} ∈ Et; and (iii) h preserves the faces of F, i.e., for every face f ∈ F, there exists a face g ∈ faces(Gt) such that for every edge {x, y} ∈ boundary(f), one has {h(x), h(y)} ∈ boundary(g). For example, in Fig. 2, the graph obtained from G1 by eliminating faces D and H (thus setting F = {A, B, C, E, F, G}) would be a plane graph with holes and a face-connected subgraph of graph G1. The algorithm for deciding plane subgraph isomorphism for graphs with holes is derived from the submap isomorphism algorithm of [START_REF] Damiand | A polynomial algorithm for submap isomorphism: Application to searching patterns in images[END_REF] as follows: in the traversal of the pattern graph, the faces that do not belong to the set of required faces F must not be considered. Note that the set of required faces F has to be face-connected.
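Using the dictionary encoding sketched above, the parallel traversal-and-match step can be outlined as follows. This is a simplified illustration of the idea, assuming the fixed traversal order described in the text (LIFO stack, β1 before β2); it omits some of the consistency checks of the full algorithm of Damiand et al. (2009), so it is a sketch, not their implementation.

```python
def match_from(d_p, d_t, b1_p, b2_p, b1_t, b2_t):
    """Parallel traversal of the pattern map (seeded at dart d_p) and the
    target map (seeded at dart d_t); returns a dart mapping or None."""
    mapping = {d_p: d_t}
    stack = [d_p]                                       # LIFO, as in the text
    while stack:
        p = stack.pop()
        t = mapping[p]
        for b_p, b_t in ((b1_p, b1_t), (b2_p, b2_t)):   # beta1 before beta2
            q = b_p.get(p)
            if q is None:                 # free dart (hole): no constraint
                continue
            u = b_t.get(t)
            if u is None:                 # target lacks a required link
                return None
            if q in mapping:
                if mapping[q] != u:       # contradicts an earlier match
                    return None
            else:
                mapping[q] = u
                stack.append(q)
    return mapping

def submap_search(b1_p, b2_p, b1_t, b2_t):
    """Quadratic scheme: one fixed pattern dart, every target dart as seed."""
    d_p = next(iter(b1_p))
    for d_t in b1_t:
        m = match_from(d_p, d_t, b1_p, b2_p, b1_t, b2_t)
        if m is not None and len(set(m.values())) == len(m):  # injectivity
            return m
    return None
```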
Figure 1. Two cups that should be modelled by holey graphs in order to remove the background, which is visible through the handle.
Figure 2. A face-connected plane graph G1 with 8 bounded faces (labelled A to H) and 1 unbounded (white) face.
Figure 3. A face-connected plane graph G2 which is a compact subgraph of G1 (given in Fig. 2).
Figure 4. An example of a combinatorial map. Darts are represented by numbered segments. Concerning functions β1 and β2, consider e.g. dart 11: one has β1(11) = 8 and β2(11) = 12.
Figure 5. Finding a car in an image: the original image, from the Movi dataset (Luo et al., 2003), is on the left; the plane graph obtained after segmentation is in the middle; the car has been extracted and rotated on the right. It is found again in the original image using Damiand et al. (2009)'s algorithm.

Acknowledgments: This work was supported in part by the IST Programme of the European Community, under the PASCAL 2 Network of Excellence, IST-2006-216886, and Project Blanc07-1 184534 of the French ANR.
01008952
en
[ "spi.mat" ]
2024/03/04 16:41:26
2011
https://hal.science/hal-01008952/file/Dalian_Fireintunnels_paper_2011b_AL.pdf
B. A. Schrefler, F. Pesavento, F. Chinesta, A. Leygue

Steps towards a real time solution of fire in tunnels

Abstract: We show a solution for a realistic tunnel fire simulation using a 2D-3D coupling strategy and a full multiphysics model for concrete (2D sections), also known as the three-fluids model (water, vapour, dry air). The computing times are too high for a real-time solution. We investigate the applicability of the Proper Generalized Decomposition (PGD) with two goals: achieving a full 3D solution for the solid domain while simultaneously reducing the computing time. The first results are encouraging.

Introduction

The availability of an efficient tool for simulating a fire scenario in a tunnel is of paramount importance for fire-safety management in emergency situations, for training fire brigades ahead of emergencies so that the right decisions can be taken when needed, and for evaluating measures geared to increasing the resistance of existing tunnel vaults against spalling. We have developed such a tool, which fully accounts for the thermal fluid-structural coupling in a tunnel fire [START_REF] Schrefler | Thermal coupling of fluid flow and structural response of a tunnel induced by fire[END_REF]. It appears to be one of the largest coupled problems actually solved in the community of computational interaction problems. The simulation of a realistic fire scenario is still a time-consuming task, and the tool is not yet completely ready for the first of the three goals mentioned above. One of the bottlenecks is the heavy computational burden linked with the three-fluids model for concrete. It is not possible to disregard the enormous heat sink that the tunnel vault represents, with the phase changes and chemical reactions going on in heated concrete: such an omission can yield temperature fields some 1000 °C above those measured experimentally. On the other hand, simplifications of these phenomena are not possible either, as highlighted in two recent companion papers [START_REF] Gawin | What physical phenomena can be neglected when modelling concrete at high temperature? A comparative study: Part 1: physical phenomena and mathematical model[END_REF][START_REF] Gawin | What physical phenomena can be neglected when modelling concrete at high temperature? A comparative study: Part 2: comparison between models[END_REF]. In the existing model [START_REF] Schrefler | Thermal coupling of fluid flow and structural response of a tunnel induced by fire[END_REF] we chose a 3D-2D coupling strategy in which the thermally driven CFD part is solved in a three-dimensional cavity, i.e. the tunnel, while the concrete part is solved on 2D sections normal to the tunnel axis, at appropriate intervals, see Figure 1. The heat-flux and temperature values that serve as coupling terms between the fluid and the structural problem are interpolated between the sections. With this approximation, heat transfer in the tunnel vault along the tunnel axis is disregarded. As an example, with such an approach the fully coupled simulation of a one-hour, 20 MW fire in a realistic tunnel takes more than one day on a PC. The aim of our current research effort is twofold: to realize a true 3D-3D coupling on the one hand, and to reduce the computing time drastically on the other.
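The coupling terms of the 3D-2D strategy can be illustrated with a small sketch: a wall quantity resolved on a few 2D sections is interpolated along the tunnel axis before being handed to the 3D CFD solver. The section positions below follow the example presented later (0, 30, 40, 50 and 80 m); the flux values are made-up placeholders, not data from the paper.

```python
import numpy as np

# Positions of the 2D concrete sections along the tunnel axis (m), and
# hypothetical wall heat fluxes (W/m^2) computed on each section by the
# multiphysics concrete model.
z_sections = np.array([0.0, 30.0, 40.0, 50.0, 80.0])
q_sections = np.array([1.2e3, 8.5e3, 2.1e4, 9.0e3, 1.3e3])

def wall_flux(z):
    """Coupling term seen by the 3D fluid solver at axial coordinate z:
    linear interpolation between the fluxes resolved on the 2D sections."""
    return np.interp(z, z_sections, q_sections)

print(wall_flux(45.0))   # flux between the fire section (40 m) and 50 m
```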
The way to achieve this is the adoption of an extremely fast equation solver, which can reach speed-ups of up to 3600 times [START_REF] Bertoldo | A fast multifrontal solver for non-linear multi-physics problems[END_REF], together with the adoption of the Proper Generalized Decomposition (PGD) [START_REF] Dureisseix | A computational strategy for multiphysics problemsapplication to poroelasticity[END_REF][START_REF] Chinesta | Recent Advances and New Challenges in the Use of the Proper Generalized Decomposition for Solving Multidimensional Models[END_REF] for a fast 3D solution of the heated-concrete problem. Steps in this direction, as well as the general model, are shown below.

Full 3D solution of heated concrete with PGD

For the complex behaviour of the tunnel vault (transient, non-linear, coupled multi-physics models), we would like to avoid the 3D-to-2D dimensionality reduction mentioned above. Because of the richness of the through-thickness description, with many coupled physics exhibiting strong and fast evolutions in the thickness direction, a full 3D description may involve millions of degrees of freedom in the solid domain, which would have to be solved many times because of the history-dependent chemo-hygro-thermo-mechanical behaviour. Today, the solution of such fully 3D models remains intractable for large problems, despite the impressive progress made in mechanical modelling, numerical analysis, discretization techniques and computer science during the last decade. New numerical techniques are needed to approach such complex scenarios, able to solve fully 3D multiphysics models on geometrically complex parts. Well-established mesh-based discretization techniques fail because of the excessive number of degrees of freedom involved in fully 3D discretizations, where very fine meshes are required in the thickness direction (despite its reduced dimension) and also in the in-plane directions to avoid overly distorted meshes. A way around this problem is the adoption of the Proper Generalized Decomposition (PGD). In what follows we illustrate the construction of the PGD for a model defined on the tunnel shell Ξ = Ω × I, with x = (x, y) ∈ Ω ⊂ ℝ² (the tunnel transverse section) and z ∈ I = [0, L] its axis. We consider a generic transient partial differential equation, linear for the sake of simplicity:

∂u/∂t − ∇·(K·∇u) + S = 0,

whose weak form writes

∫_{Ξ×T} u* · ( ∂u/∂t − ∇·(K·∇u) + S ) dΞ dt = 0,

with T the time interval on which the model is defined. The solution u(x, y, z, t) ≡ u(x, z, t) is sought in the separated form

u(x, z, t) ≈ Σ_{i=1}^{N} X_i(x) · Z_i(z) · Θ_i(t).

In what follows we illustrate the construction of one such decomposition.
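Before detailing the enrichment construction (next paragraph), the structure of such separated representations can be illustrated numerically. The toy sketch below computes a greedy separated approximation U[x, t] ≈ Σ X_i[x]·Θ_i[t] of a precomputed space-time field, using the same kind of alternating fixed point as the PGD enrichment. It is only an algebraic illustration of the separated format and of the enrichment/fixed-point/deflation structure, not the Galerkin PGD solver of the actual model.

```python
import numpy as np

def greedy_separated(U, n_terms=5, n_fp=50, tol=1e-8):
    """Greedy separated approximation U[x, t] ~ sum_i X_i[x] * T_i[t].
    Each enrichment is computed by an alternating fixed point:
    freeze T and solve for X, freeze X and solve for T, iterate."""
    R = U.copy()                              # current residual
    Xs, Ts = [], []
    for _ in range(n_terms):
        T = np.random.rand(U.shape[1])        # initial guess for the 1D mode
        for _ in range(n_fp):
            X = R @ T / (T @ T)               # best X for frozen T
            T_new = R.T @ X / (X @ X)         # best T for frozen X
            if np.linalg.norm(T_new - T) < tol * np.linalg.norm(T_new):
                T = T_new
                break
            T = T_new
        Xs.append(X); Ts.append(T)
        R = R - np.outer(X, T)                # deflate: next term corrects residual
    return Xs, Ts

# Usage: separated approximation of a smooth space-time field.
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 300)
U = np.exp(-5.0 * np.subtract.outer(x, t) ** 2)   # u(x, t) stored as a matrix
Xs, Ts = greedy_separated(U, n_terms=4)
approx = sum(np.outer(X, T) for X, T in zip(Xs, Ts))
print("relative error:", np.linalg.norm(U - approx) / np.linalg.norm(U))
```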
For this purpose, assume that at iteration n < N the solution uⁿ(x, z, t) is already known,

uⁿ(x, z, t) = Σ_{i=1}^{n} X_i(x) · Z_i(z) · Θ_i(t),

and that at the present iteration we look for the solution enrichment R(x) · S(z) · θ(t):

uⁿ⁺¹(x, z, t) = uⁿ(x, z, t) + R(x) · S(z) · θ(t).

The test function involved in the weak form is sought in the form

u* = R*(x) · S(z) · θ(t) + R(x) · S*(z) · θ(t) + R(x) · S(z) · θ*(t).

By introducing the trial and test functions into the weak form of the problem, we obtain the problem to be solved for computing the functions involved in the enrichment, e.g. the function S(z). The process continues until reaching convergence. The converged solutions define the next term of the finite-sum decomposition: R(x) → X_{n+1}(x), S(z) → Z_{n+1}(z), and θ(t) → Θ_{n+1}(t). Note that this procedure involves a 2D solution for computing the function R(x), and two 1D solutions for computing the functions S(z) and θ(t). Moreover, the resulting algorithm is non-incremental, because at each iteration we are looking for the whole field history. In the following we show an application of the 2D-3D coupling strategy for a 20 MW fire in a tunnel, and a PGD solution for the heat-transfer problem alone.

Tunnel fire: example of the 3D-2D coupling strategy

The structure under consideration is the Virgolo tunnel close to Bolzano (Italy), which has also been used for an experimental test in the framework of the UPTUN project [START_REF]HITECO -High Temperature Concrete[END_REF]. We have considered the central part of the tunnel, 80 m long. Its geometry is decomposed into the fluid and the solid domains, see Figs. 1 and 2. The solid domain consists of the cross section of the tunnel vault. In the simulations, five cross sections are considered, at 0, 30, 40, 50 and 80 meters along the longitudinal axis z; the fire is located at the section at 40 m. The fluid is considered an ideal gas with the following properties: dynamic viscosity μ = 1.8×10⁻⁵ kg/ms, specific heat c_p = 1006 J/kgK, thermal conductivity λ_f = 0.026 W/mK, density ρ = 1.225 kg/m³. The concrete used for the solid domain (i.e. the sections of the concrete tunnel) is C60 concrete (with a final compressive strength of 60 MPa) and has the following main properties at ambient temperature (20 °C): elastic modulus E = 40000 MPa, porosity n = 0.082, intrinsic permeability k = 2×10⁻¹⁸ m², solid density ρ_s = 2564 kg/m³, solid thermal conductivity λ_s = 1.92 W/mK, solid specific heat c_ps = 855.52 J/kgK. The volumetric heat source corresponding to the fire is located at the coordinates (x, y, z) = (1.0, 0.5, 40.0) and has a volume of 8 m³; this means that the fire is located in the central section of the tunnel, at 0.5 m from the longitudinal axis and at a height of 1 m above the road pavement. The total thermal power involved by the fire increases linearly to 20 MW over 10 minutes and is then kept constant. For this analysis, 15300 hexahedral elements are used in the fluid domain, while each cross section is discretized with 640 eight-noded quadrilateral elements. The initial and boundary conditions selected for this case are:
• For the fluid domain: atmospheric pressure is imposed at the ends of the tunnel; the initial fluid velocity is zero; close to the tunnel vault, the fluid can exchange heat with the surface of the concrete structure according to the universal profiles ("wall law") described in [START_REF] Schrefler | Thermal coupling of fluid flow and structural response of a tunnel induced by fire[END_REF], with a heat exchange coefficient α_c = 500 W/m²K. The initial temperature is set to 298.15 K over the whole fluid domain.
• For the solid domain: on the inner side of the cross section, i.e. the vault surface in contact with the fluid, two convective (i.e. Robin) boundary conditions are imposed. The convective heat exchange is governed by the same universal profiles described for the fluid domain, with the same exchange coefficient α_c. As for the mass exchange between the concrete surface and the surrounding environment, a water-vapour pressure of 1300 Pa and an exchange coefficient of 0.02 m/s are set. The initial conditions for the concrete structure are p_g = 101325 Pa, p_c = 7×10⁷ Pa, T = 298.15 K; this set of values corresponds to an initial relative humidity of 58%. On the outer side of the cross sections, the values of gas pressure, capillary pressure and temperature are fixed (i.e. Dirichlet b.c.s) to the initial ones.
The total simulation time is 1 hour. The case under consideration corresponds to a real fire case in terms of the total thermal power involved, the duration of the fire and the value of the heat exchange coefficient selected. Figure 3 shows that the velocity of the ascending flux close to the fire source exceeds 9 m/s, while the horizontal fluxes flowing toward the ends of the tunnel have a velocity of 3 m/s (t = 600 s). The temperature distribution in the fluid domain and in the top part of sections S2, S3 (the fire section) and S4 is shown in Figure 4. The central section, which is the most stressed one, is the most exposed to spalling risk.
Indeed, the peak of gas pressure (2.4 MPa) in the external layer of the concrete vault (10 cm thick), the formation of the "moisture clog" (the relative humidity in this zone exceeds 90%) and, finally, the total damage (55%) can lead to progressive spalling starting from that layer, see Fig. 5.

Heat transfer simulation in the tunnel vault with PGD

In this section we present some preliminary results for the simulation of the tunnel fire described above, obtained by applying the Proper Generalized Decomposition technique illustrated in Section 2. The main aim of this calculation is to demonstrate the validity of the approach by considering, as an initial stage of the ongoing research, a simplified model limited to heat transfer through conduction. The mesh used is the same as in the previous example in the transversal section, but with nine-noded elements, while 8000 elements have been used in the longitudinal direction. The duration is 1 hour, with a time step of 1 s. As already pointed out, the problem is solved under some simplifications as far as both the physical model and the boundary conditions are concerned. The surface temperatures reached are comparable to those obtained with the full model [START_REF] Schrefler | Thermal coupling of fluid flow and structural response of a tunnel induced by fire[END_REF] in the case of pure CFD simulations, for which the solid domain is neglected together with the related heat-exchange fluxes. In that case the temperature values were much higher than those obtained with the model of Section 3, approximately equal to 4000 K [START_REF] Schrefler | Thermal coupling of fluid flow and structural response of a tunnel induced by fire[END_REF], which is the same order of magnitude as the thermal field attained using PGD in this work.

Conclusions

We have shown a 3D-2D coupled solution for a tunnel fire simulation that takes the three-fluids concrete model into account. For a fire lasting one hour, the calculation time on a PC is well over one day. For the thermal problem in the tunnel vault, a PGD approach on a fully 3D solid-domain model has been shown; the computing time was reduced to two and a half minutes. Considering the short computational time, and the fact that physically reasonable results can be obtained simply by introducing more sophisticated physical models and physically correct boundary conditions into the PGD analysis (i.e. taking into account the role of the air), this set of preliminary computations is extremely encouraging.
The volumetric heat source corresponding to the fire is located at the coordinates (x,y,z)=(1.0,0.5,40.0) and has a volume equal to 8 m This means that the fire is located in the central section of the tunnel at 0.5 m from the longitudinal axis and at a height of 1 m from the road pavement. Figure 1 .BFigure 2 . 12 Figure 1. Global geometry of the tunnel. e. the vault surface in contact with the fluid, two convective (i.e. Robin) boundary conditions are imposed. The convective heat exchange is governed by the same universal profiles described for the fluid domain with the same exchange coefficient α c . As far as the mass exchange between the surface of concrete and the surrounding environment is concerned, a water vapor pressure equal to 1300 Pa and an exchange coefficient of 0.02 m/s are set. The initial condition for the concrete structure are p g =101325 Pa, p c =7×10 -7 Pa, T=298.15 K. This set of values corresponds to an initial relative humidity equal to 58%. On the outer side of the cross sections the values of gas pressure, capillary pressure and temperature are fixed (i.e. Dirichlet bc.s) to the initial ones. Figure 3 . 3 Figure 3. Velocity field (m/s) at t= 600 s (C), (A) cross-section and (B) longitudinal section. Figure 4 .BFigure 5 .BFigure 6 . 456 Figure 4. Temperature distribution (K) at t = 3600 s in the fluid domain and in sections S2, S3 (fire), S4. Figure 7 .Figure 8 . 78 Figure 7. Sketch of the mesh (A) and of the bc.s (B) used in the case of tunnel fire solved by PGD.
04116281
en
[ "info" ]
2024/03/04 16:41:26
2023
https://hal.science/hal-04116281/file/aleatory.pdf
Luciano da Fontoura Costa (email: [email protected])

Randomness: A Challenging Central Concept

Introduction

One of the tasks most systematically shared between science and everyday human experience is prediction (e.g. [START_REF] Da | Modeling: The human approach to science[END_REF]). In situations involving a choice between two or more alternatives, it becomes critical to predict the implications of each of those alternatives, so that the most suitable choice, given specific requirements, can be made. Predicting the implications of decisions actually constitutes the main aspect of science, through the activity of modeling, in which quantitative constructs are developed (e.g. [START_REF] Da | Statistical modeling[END_REF]) as the means not only to better understand physical systems, but also to make predictions about the unfolding of their dynamics. Another closely related situation involving prediction is, given two or more possible outcomes, to assign probabilities that quantify how likely each of those possible results is. This type of problem constitutes the main subject studied in the areas of probability and statistics (e.g. [START_REF] Da | Statistical modeling[END_REF][START_REF] Da | Multivariate statistical modeling[END_REF][START_REF] Bertsekas | Introduction to Probability[END_REF][START_REF] Degroot | Probability and Statistics[END_REF][START_REF]Probability and statistics ebook[END_REF][START_REF] Kreyszig | Advanced Engineering Mathematics[END_REF][START_REF] Mukhopadhyay | Probability and Statistical Inference[END_REF][START_REF] Gallager | Stochastic Processes: Theory for Applications[END_REF]), and is often approached in terms of statistical models involving random variables, probability density functions and/or moments. Thus, prediction can be understood as directly related to the following two important activities: (a) assigning probabilities to possible outcomes, which has been addressed mostly with concepts and methods from probability and statistics; and (b) identifying the implications of specific outcomes or choices, which has been tackled mainly through scientific modeling. Predictions are necessary because of two main factors: (i) true, intrinsic randomness; and (ii) incomplete knowledge. It should be observed that these two factors can potentially underlie either or both of the activities (a) and (b) above. In the former case, we have situations such as rolling an ideal die, whose outcome cannot be known with absolute certainty. Henceforth, 'true' or 'intrinsic' randomness is understood to lead to outcomes that are completely unaccountable for. The latter case is characterized by the fact that the observed experiment is actually deterministic, but lack of knowledge about all its relevant aspects, including its initial condition, implies uncertainty when predicting the unfolding of its dynamics. Prediction is directly motivated by the existence of either of these two types of randomness, without which it would not exist. Henceforth, we will understand a random event as a situation whose outcome is impossible to predict with certainty, for any reason.
Thus, every measurement from the real world intrinsically involves a random aspect, because a certain level of uncertainty and incompleteness is always present. The quantities observed when measuring a given property are often called random variables. Events and measurements that are not random, and can therefore be predicted with certainty, are said to be deterministic. In particular, the classical-mechanics approach to the real world is intrinsically deterministic. All in all, we shall consider three types of events/measurements: deterministic, truly random, and random because of incomplete knowledge. The possible existence of random and deterministic events and measurements implies the important problem of devising approaches that can, given a specific system, decide on its randomness. As we shall discuss at more length in Section 5, this turns out to be a particularly difficult problem, which has motivated several multidisciplinary approaches in areas including probability and statistics, scientific modeling, pattern recognition, dynamical systems, and classical and quantum mechanics, among many others. In addition to developing means for identifying randomness, it is also important to have means for quantifying its level and for generating sequences of random values or symbols. One of the few sure aspects regarding predictions about the dynamics of a deterministic dynamical system is that prediction would not be required provided one knew everything about the system, in the sense of having a complete model as well as fully accurate knowledge of the initial conditions: the dynamics of the system could then be obtained directly from the complete model, unfolding from the initial condition. Thus, if our universe were deterministic, intrinsic randomness could not be verified. Predictions would still be required, however, for incompletely modeled dynamical systems, or when trying to identify randomness experimentally from measurements possibly influenced by noise and interference. Despite its critical implications for the activity of predicting, it is not known for certain whether or not our universe is deterministic. While classical mechanics indicated that the universe was deterministic, the advent of quantum mechanics completely changed this perspective. While the odds currently seem to point toward a non-deterministic universe, the possibility that it eventually turns out to be deterministic cannot be completely excluded in the present work. In addition, as will be argued here, true intrinsic randomness is possibly a graded quantity, presenting different levels of intensity, which could tend to decrease as one moves from smaller to larger space-time scales, but which could also be amplified by non-linear effects (e.g. chaos). Though it discusses this interesting and centrally challenging question, the present work by no means aims at answering it. When approached from the perspectives briefly reviewed above, it becomes hard to overstate the importance of prediction and randomness: more effective decisions could be taken provided the probabilities of outcomes, as well as their implications, could be accurately predicted. Ultimately, these prospects have played a major role in the development of science and technology.
Yet, despite long-continuing research efforts, randomness remains an elusive concept, especially when taken in a stricter and more comprehensive manner. The present work aims at presenting and discussing some of the several concepts and methods that underlie and/or relate to randomness. In addition, special attention is given to the issues of identifying, quantifying and generating randomness. The intriguing issue of true randomness is also briefly discussed. Developed in a mostly informal, somewhat unconventional, and hopefully relatively accessible manner, involving several figures and examples, the present work should not be understood as complete, nor taken as a basic text on the related subjects. It is hoped, however, that the discussion of several aspects of randomness will give some indication of the main facets involved, as well as of the challenges they imply.

Experimental and Constructive Approaches to Randomness

Figure 1 illustrates a key situation regarding randomness. Here we have two agents (natural or artificial), A and B, the former transmitting a sequence of integer digits (from 0 to 9) to the receiving agent B, whose problem is to decide whether the received sequence of integer values is random or not. Consider the situation in which the digits being transmitted correspond to an intermediate portion of the number π, which is irrational and infinite. Two possible outcomes can be expected: (a) agent B has great knowledge about numbers and identifies the sequence as a portion of π; or (b) agent B does not recognize the sequence as a portion of π. In case (a), the sequence will be deemed very likely to be deterministic, in which case the agent will always be able to foresee the next digits. In case (b), however, the sequence will be understood as possibly random, as no accurate prediction can be made. This latter situation provides an example of randomness as a possible consequence of lack of knowledge about the observed system. Another interesting point regarding the possibility of using mathematical irrational constants such as π, √2 and e is that these values are particularly well known and prominent, being more likely to be eventually recognized as possible sources of randomness, and therefore even less 'random'. A substantially more difficult situation would arise in the experiment of Figure 1 in the case of less well-known irrational numbers. The situation in Figure 1 makes it plain that randomness is a concept relative to the agent analyzing the observations, depending critically on the knowledge about the respective random experiments. This knowledge can be effectively approached in terms of respective models (e.g. [START_REF] Da | Modeling: The human approach to science[END_REF]), in the sense that a complete model of the specific random experiment can provide critical information for deciding on its possible randomness. We can summarize this point as follows: lack of knowledge can be perceived as randomness. This initial conclusion already poses a substantial challenge for deciding on randomness, as it is not possible to have fully complete models of the physical world (e.g. [START_REF] Da | Modeling: The human approach to science[END_REF]). As a brief illustration of this principle, the motion of a real-world pendulum, when observed at full precision, ultimately depends on every possible mass in the universe.
Thus, in principle, even if there are fully random sequences in the universe, we will not be able to fully identify their nature, as we will always be uncertain whether our limitations in the respective prediction are a consequence of intrinsic randomness or of lack of full information about the respective system. Other limitations, including the fact that no measurement can be fully accurate, and Heisenberg's uncertainty principle, further complicate the experimental identification of randomness. It follows from the above considerations that the experimental verification of randomness can only be made up to a given level of certainty. In other words, it is in principle impossible to conclude on real intrinsic randomness from observations, measurements and predictions alone. There is, however, a second possibility that can be considered when trying to decide whether or not there is intrinsic randomness in a system, which consists in devising abstract, logic-mathematical models capable of intrinsic randomness. This alternative approach to randomness, henceforth referred to as constructive, involves developing or considering abstract models that can lead to or explain randomness. Though randomness is often illustrated by flipping a coin, that situation is actually potentially influenced by several complex factors, including the way the coin is thrown (e.g. initial speed, direction, rotation, etc.) and falls. In the present work we adopt instead the abstract, ideal situation depicted in Figure 2(a), concerning a hypothetical experiment in which a perfect sphere sits, initially static, on a perfectly sharp edge. Observe that the center of mass of the sphere is initially in perfect alignment with the edge, and no external influences other than gravity are present.

Figure 2: A perfectly symmetric sphere, initially static and under the action of gravity only, lies on a perfectly sharp edge (a). The center of mass of the sphere is initially aligned with the edge. From the classical-mechanics perspective, the sphere remains in unstable equilibrium until some additional perturbation (external gravitational fields, vibrations, etc.), even of infinitesimal intensity, eventually makes it fall toward one of the sides and touch the surface marked 0 or 1. Situations involving two or more outcomes can be effectively represented by respective trees, characterized by branches or bifurcations, as illustrated in (b) for the sphere-on-the-edge case. Another situation involving two possible outcomes in a mechanical system is illustrated in (c); here, substantially more work and time need to be invested for the sphere to move into one of the possible upper branches. The outcome of this situation can also be abstracted by a respective tree, shown in (d).

When approached from the classical-physics perspective (Newton's laws of motion), the situation depicted in Figure 2(a) will be preserved until some mechanical perturbation, even an infinitesimally small one, is applied to the sphere, in which case it will fall to one of the two possible sides until touching, after some time, the surface marked 0 or 1. In other words, if true randomness is ever to be verified, this is the type of situation in which it would be most prone to happen.
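The classical claim, that any nonzero perturbation fully determines the outcome, can be illustrated with a crude numerical sketch of the unstable equilibrium. The equation and the parameter values (a tilt angle θ obeying θ'' = (g/r)·sin θ, with an illustrative radius r = 0.05 m) are our own choices for the sketch, not part of the original discussion.

```python
import math

def fall_side(eps, g=9.81, r=0.05, dt=1e-4, t_max=5.0):
    """Integrate theta'' = (g/r) * sin(theta) from a tiny initial tilt eps
    (explicit Euler); return the side (0 or 1) the sphere falls toward."""
    theta, omega, t = eps, 0.0, 0.0
    while abs(theta) < math.pi / 2 and t < t_max:
        omega += (g / r) * math.sin(theta) * dt
        theta += omega * dt
        t += dt
    return 1 if theta > 0 else 0

for eps in (1e-12, -1e-12, 1e-9):
    print(eps, "->", fall_side(eps))
# Any nonzero perturbation, however small, decides the outcome:
# classically there is no room left for intrinsic randomness.
```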
Because mechanical perturbations necessarily involve mass and/or energy, these perturbations must themselves be the consequence of some (classical) mechanical effect governed by Newton's laws of motion, and therefore deterministic. It follows from this simple example that, from the classical perspective, it is impossible to have true intrinsic randomness in nature. Indeed, being unaccountable for, true randomness would need to be caused by something other than an effect involving mass/energy. Figure 2(c) presents a somewhat different situation involving a sphere being eventually displaced until it touches surface 0 or 1. The sphere, which is initially at rest on the plane at the bottom of the figure and under the action of gravity, can be displaced upwards toward either of the two possible sides. Though this situation also involves an intrinsic bilateral symmetry identical to that in Figure 2(a), the complete displacement of the sphere until it touches surface 0 or 1 now requires a substantially larger quantity of work (e.g. [START_REF] Serway | Physics for scientists and engineers[END_REF][START_REF] Da | Single point particle motion: Mass, force, momenta, impulse[END_REF]) over a much longer period of time, implying a much larger energy requirement. The cause of this displacement, be it random or not, would therefore typically involve much more 'determination', in the sense of investing larger amounts of energy and time. The randomness of the outcome of the situation in Figure 2(c) would therefore depend on how the mechanism required for the complete motion of the sphere is initiated and maintained. Figure 3 depicts a possible (and relatively naïve) way of approaching the previously discussed sphere-equilibrium situation in terms of quantum mechanics. Given that the mass of the sphere is assumed homogeneous, the diversity of wave-function lengths indicates respective projections onto the plane of the image which, as a consequence of the symmetry, suffice for the discussion. The equilibrium will be maintained only if the contributions of all wave functions are completely symmetric along time and space; any small bias taking place over a long enough period of time will imply possible oscillations or even the falling of the sphere. It should also be observed that the situations illustrated in Figure 2 relate to the phenomenon of symmetry breaking (e.g. [START_REF] Strocchi | Symmetry breaking[END_REF]), in which some overall property of the system changes from a symmetric to an asymmetric configuration.

Randomness, Time, and Causation

Given that randomness depends inexorably on change, it is also strongly related to causation (e.g. [START_REF] Pearl | Causality[END_REF][START_REF] Da | Continuous and event-driven causality: A simple model-based approach[END_REF][START_REF] Granger | Some recent development in a concept of causality[END_REF][START_REF] Hiemstra | Testing for linear and nonlinear Granger causality in the stock price-volume relation[END_REF][START_REF] Bressler | Wiener-Granger causality: a well established methodology[END_REF][START_REF] Salmon | Causality and explanation[END_REF][START_REF] Bunge | Causality and modern science[END_REF]) taking place along time, an issue discussed very briefly in the present section.
As indicated in [START_REF] Da | Continuous and event-driven causality: A simple model-based approach[END_REF], there are two main types of causation unfolding along time: continuous and event-driven. While the former type involves causal influences taking place continuously along time, the latter is characterized by discrete events taking place at specific time instants. Interestingly, the latter type of causation is relatively more intricate, being illustrated in Figure 4 [START_REF] Da | Continuous and event-driven causality: A simple model-based approach[END_REF]: the cause C needs to take place for a minimum period of time Δ, which is then followed by a respective delay δ and by the effect E, along which the causation continues but is no longer under the control of the agent that initiated it. It is understood that, once the causation period is completed, the effect can no longer be avoided other than by some eventual external influence (not included in the diagram). In continuous causation, we have Δ → 0 and δ → 0, so that the effect always immediately follows the respective immediate cause. Though time is typically understood naturally by humans, it turns out not to be an easy concept to formalize in physico-mathematical terms. The main issue is that every physical event is, in principle, reversible: there are no intrinsic constraints in the laws of physics forcing the respective dynamics to unfold in only one direction, a property called T-symmetry. The time irreversibility typically observed in physical systems has been linked to the concept of entropy, in particular the second law of thermodynamics (e.g. [START_REF] Serway | Physics for scientists and engineers[END_REF][START_REF] Newman | Monte Carlo Methods in Statistical Physics[END_REF]). Informally speaking, this law indicates that the entropy of an isolated system cannot decrease, and it is also related to the fact that heat always flows from hotter to colder objects. Another interesting perspective involving randomness and time regards the possibility of traveling to, or observing, the future: were this possible, the outcomes of events could be predicted, thereby precluding true randomness. At the same time, the possibility of non-local, simultaneous events (as in the quantum-mechanical EPR setting, e.g. [START_REF] Blaylock | The EPR paradox, Bell's inequality, and the question of locality[END_REF]) also has potentially critical implications for causality and true randomness.

Randomness and Probability

Random experiments are typically modeled in terms of random variables and respective probabilities, including probability density functions. This section provides a brief overview of these concepts. Every real-world measurement can be understood and modeled in terms of a respective random variable X. A numeric variable X can be described/modeled in terms of its probability density function p(x), which necessarily has to satisfy

p(x) ≥ 0,    (1)
∫_{−∞}^{+∞} p(x) dx = 1.    (2)

In that case, we have

P(x₁ ≤ x < x₂) = ∫_{x₁}^{x₂} p(x) dx.    (3)

As an example, the uniform probability density on an interval [a, b], with a, b ∈ ℝ and a ≤ b, can be expressed as

p(x) = c = 1/(b − a) for a ≤ x ≤ b, and p(x) = 0 otherwise,    (4)

with the understanding that

∫_a^a p(x) dx = 0.    (5)

The uniform probability density function defined on the interval (0, 1) has special importance, being henceforth expressed as rand().
Each time this function is invoked, it will provide a real value x chosen uniformly from the interval (0, 1). Discrete random variables can be modeled by using the Dirac delta function (e.g. [START_REF] Da | Statistical modeling[END_REF]). For instance, the perfectly uniform coin experiment can be modeled in terms of the following discrete probability density function:

p(x) = 0.5 δ(x) + 0.5 δ(x − 1)    (6)

which can be understood as an example of a discrete uniform probability density function, being respectively illustrated in Figure 6. Given a set of numbers S_M = {1, 2, . . . , M}, values can be drawn from it with uniform probability by using the rand() function as follows:

x = ⌊rand() · M⌋ + 1, x ∈ S_M    (7)

where ⌊⌋ is the floor function. The density function of a constant variable taking the value c can be expressed as:

p(x) = δ(x − c)    (8)

One frequent problem in probability and statistics consists of generating events with given probabilities. This problem can be addressed by using the Monte Carlo approach (e.g. [START_REF] Newman | Monte Carlo Methods in Statistical Physics[END_REF][START_REF] Robert | Monte Carlo statistical methods[END_REF]). In its most basic form, the Monte Carlo method involves performing estimations while drawing values uniformly distributed in a given interval, typically [0, 1], in order to select events with a given probability. Let us illustrate the gist of this approach in terms of a specific example involving drawing an event with a given probability p. One value x is uniformly chosen from the interval (0, 1), and the event is taken if and only if x ≤ p. As a simple illustration of a Monte Carlo approach, we mention Buffon's needle problem (e.g. [START_REF] Badger | Lazzarini's lucky approximation of π[END_REF]), in which the constant π is estimated by dropping a needle of length l in a random manner (angles and positions) onto a plane with (infinite) marked parallel lines separated by a distance L (see Fig. 7). In case l < L, we would have:

π ≈ (2 l / L) (#experiments / #cases in which the needle crosses a line)    (9)

5 Identifying Randomness

In this section, we develop a possible approach to the understanding of randomness, aiming at a possible respective identification. Given that random variables of several types (e.g. integer, real, complex) can always be represented as (possibly infinite) binary numbers, for generality's sake, these values are henceforth understood as being represented as respective binary numbers, composed only of 0 and 1 digits. We start by considering flipping a given coin N times (events) and recording the obtained results (outcomes). Such a procedure provides one of the simplest examples of a random experiment, characterized by uncertainty about the possible results. We shall also assume that only two results are possible, corresponding to head and tail, which could be arbitrarily represented by the values 1 and 0, respectively. After N experiments, the numbers of respective head and tail observations will be represented as n_h(N) and n_t(N), leading to the following transient probabilities:

p_h(N) = p_1(N) = n_1/N    (10)

p_t(N) = p_0(N) = n_0/N    (11)

The probabilities of the two outcomes can now be defined in terms of the following limits to infinity:

P_1 = p_1(N = ∞) = lim_{N→∞} n_1/N    (12)

P_0 = p_0(N = ∞) = lim_{N→∞} n_0/N    (13)

In case the above experiment were fully uniformly random, we would have:

P_1 = P_0 = 0.5    (14)

meaning that the number of observed heads and tails would be identical when N = ∞.
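The convergence expressed by Equations 10 to 14 can be illustrated numerically. The sketch below, a simple illustration rather than a strict test, simulates fair coin flips by means of the Monte Carlo criterion x ≤ p with p = 0.5 and tracks the transient probability p_1(N) as N grows.

# Simulating flips of a fair coin and tracking the transient probability
# p1(N) = n1/N, which should approach 0.5 as N increases.
set.seed(42)
N <- 10000
flips <- as.integer(runif(N) <= 0.5)   # 1 = head, 0 = tail
p1 <- cumsum(flips) / seq_len(N)       # transient probabilities p1(N)
# Inspect the convergence at a few increasing values of N.
for (n in c(10, 100, 1000, 10000))
  cat("N =", n, " p1(N) =", p1[n], "\n")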
Though the above discussion suggests that the coin could be classified as being truly uniformly random provided P_1 = P_0, this turns out not to be necessarily the case. That is because the overall probabilities P_1 and P_0 are completely general and do not tell us anything about the specific sequence and arrangement of heads and tails along an observed series. Indeed, consider the following sequence of observations, which is known, by hypothesis, to be completely periodic:

h, t, h, t, h, t, h, t, h, t, h, t, . . .

This sequence is plainly not random, as the next outcome can always be fully accurately predicted. Even so, we will nevertheless have P_1 = P_0 for even values of N tending to infinity:

lim_{N→∞} p_1(N) = lim_{N→∞} p_0(N)    (15)

It follows from this example that the equality of the overall probabilities constitutes only a necessary condition for randomness, not being a sufficient requirement, for it is also required that the whole respective sequence of observed values be as unpredictable as possible. A possible manner to obtain further indication about the randomness of a given sequence consists of considering the probabilities of respective pairs of adjacent values, of which there are the four following joint possibilities:

00; 01; 10; 11    (16)

A true uniformly random sequence would then be characterized by:

P_00 = p_00(N = ∞) = lim_{N→∞} n_00/N = 0.25    (17)

P_01 = p_01(N = ∞) = lim_{N→∞} n_01/N = 0.25    (18)

P_10 = p_10(N = ∞) = lim_{N→∞} n_10/N = 0.25    (19)

P_11 = p_11(N = ∞) = lim_{N→∞} n_11/N = 0.25    (20)

For practical reasons, transient probabilities (N < ∞) would need to be taken. As an example, a sequence of N = 20 subsequent pairs of observed digits could be characterized by transient probabilities, here indicated in percentages, of 10%, 35%, 35%, and 20%. Though the four obtained probabilities present biases, in the sense of not corresponding to 25%, this provides a relatively weak indication of whether or not the sequence is uniformly random, because of the small number N of observations of adjacent pairs. In other words, a fully random sequence, from the specific perspective of paired values, would be (partially) characterized by the above probabilities converging to 25% when N → ∞. Even if a sequence were estimated to be random when its values are observed singly or in adjacent pairs, these two conditions would still be only necessary, and not sufficient, for deeming the sequence as being truly uniformly random. Parenthetically, observe also that the two tests described above assumed that heads and tails have the same individual probability of 50%. Different probabilities of joint (pair) occurrences would be expected otherwise (see Section 7).

Each specific subsequence of heads and tails of length L can have a respective probability assigned in a manner similar to that discussed above. This allows us to consider the following test for the fully uniform randomness of the coin flipping experiment: it will be truly random if, after an infinite number N of observations, the probabilities of each possible subsequence of length L = 1, 2, . . . , N are all mutually identical. Conveniently, each of these subsequences can be effectively represented by a respective integer decimal value obtained from the respective binary subsequence. Observe, however, that the number of subsequences decreases with L.
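The test just outlined can be sketched computationally. The function below, a minimal illustration in R, counts the occurrences of every binary subsequence of a given length L along an observed sequence and returns the respective transient probabilities, readily exposing the non-randomness of the periodic example above.

# Transient probabilities of all 2^L binary subsequences of length L
# observed (with overlap) along a binary sequence s of 0s and 1s.
subseq_probs <- function(s, L) {
  N <- length(s) - L + 1                 # number of overlapping subsequences
  counts <- rep(0, 2^L)
  for (i in seq_len(N)) {
    # Map each binary subsequence to its decimal value (0 .. 2^L - 1).
    v <- sum(s[i:(i + L - 1)] * 2^((L - 1):0))
    counts[v + 1] <- counts[v + 1] + 1
  }
  counts / N                             # transient probabilities
}
# The periodic sequence h,t,h,t,... passes the test for L = 1...
s <- rep(c(1, 0), 500)
print(subseq_probs(s, 1))   # 0.5 0.5
# ...but clearly fails it for L = 2 (far from the uniform value 0.25).
print(subseq_probs(s, 2))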
More specifically, given a specific value N, the number of subsequences of length L will be limited to N − L + 1 (as also reflected in the sketch above). In particular, for L = N, we would have just a single sequence of L digits. In order to have statistical relevance, the whole experiment would need to be repeated a large number of times, actually an infinite number in case we wanted a fully precise test. Even if this were possible, it would still be necessary that the experiment be stationary, in the sense of perfectly preserving all possible single and joint probabilities. In addition, each whole experiment would need to be independent of the previous experiments. In particular, the test considering L = N would be enough by itself, without requiring the complementary tests for L < N. That is because all possible combinations of digits 0 and 1 are considered in the case L = N. As observed above, this test would require a stationary experiment to be independently repeated an infinite number of times (or a very large number of times in the case of a provisional estimation). In case the above mentioned test for L = N were viable, assuming fully uniform probabilities, the probability of each possible sequence would be:

P = 1/2^L = 2^{−L}    (21)

as there are 2^L possible distinct binary subsequences with L digits. Though we have been able to develop a hypothetical test which can, in principle, indicate if the coin experiment is truly random, there is a practical impossibility in its application, as it involves an infinite number of repetitions and observations. This indicates that it is virtually impossible to be completely certain about an observed sequence being fully random or not. What can be aimed at instead is to use tests similar to those discussed above for a finite number of experiments of finite length, which can only provide preliminary indications about the respective randomness. Several interesting aspects underlie the discussion developed in the present section, some of which are covered in more detail in the following sections.

6 Measuring Randomness

Interestingly, even constant variables are often understood as particular cases of random variables, for the sake of generality. This indicates that random variables can have markedly varying levels (or degrees) of randomness, extending from 0 in the case of a constant random variable up to a maximum randomness value. The level of randomness of a given random variable X can be associated with the dispersion of its respective probability density function p(x). Therefore, the wider the dispersion, the more random the variable can be understood to be. While the statistical concepts of variance and standard deviation (e.g. [START_REF] Da | Statistical modeling[END_REF][START_REF] Bertsekas | Introduction to Probability[END_REF][START_REF] Degroot | Probability and Statistics[END_REF][START_REF]Probability and statistics ebook[END_REF][START_REF] Kreyszig | Advanced Engineering Mathematics[END_REF]) can be effectively adopted to quantify the level of dispersion of a random variable, there is another related measurement, namely entropy (e.g. [START_REF] Cover | Elements of Information Theory[END_REF]), which has some particularly interesting features. More specifically, the entropy of a continuous random variable X described by a respective density p(x) can be formally expressed as:

ε_X = −∫_{−∞}^{∞} p(x) log[p(x)] dx    (22)

In the case of a discrete random variable X taking values x in the set S_M = {1, 2, . . . , M}, the respective entropy can be calculated as:

ε_X = −Σ_{i=1}^{M} p_i log(p_i)    (23)
where p_i is the probability of drawing value x = i. One of the interesting features of entropy is that it is directly related to information and statistical mechanics, constituting one of the bases of the second law of thermodynamics (e.g. [START_REF] Serway | Physics for scientists and engineers[END_REF][START_REF] Cover | Elements of Information Theory[END_REF]). In addition, when the radix-2 logarithm is employed in Equation 23, the obtained entropy can be understood to have units of bits, corresponding to the estimated average number of bits necessary to represent the respective random variable. Given an entropy measurement ε_X, it becomes interesting to consider also its respective exponential, giving rise to the respective exponential entropy (e.g. [START_REF] Campbell | Exponential entropy as a measure of extent of a distribution[END_REF][START_REF] Travençolo | Accessibility in complex networks[END_REF][START_REF] Pielou | Shannon's formula as a measure of specific diversity: its use and misuse[END_REF][START_REF] Benatti | Complex networks accessibility symmetry[END_REF][START_REF] Viana | Effective number of accessed nodes in complex networks[END_REF]):

η_X = e^{ε_X}    (24)

The exponential entropy, a real-valued quantity, is of particular interest as it is directly related to the effective number of choices that can be derived from a given random variable X. Thus, in a sense, an analogy can be drawn between the exponential entropy and the concept of fractal dimension (e.g. [START_REF] Peitgen | Chaos and Fractals: New Frontiers of Science[END_REF]), which is another real-valued quantity interpolating between the integer values of the more traditional topological dimension. For instance, in the case of the constant density, we would have:

ε_X = −Σ_{i=1}^{1} p_c log(p_c) = −1 log(1) = −(1)(0) = 0    (25)

This makes complete sense, because the randomness of a constant random variable could indeed be expected to be null. The respective exponential entropy can then be obtained as:

η_X = e^{ε_X} = e^0 = 1    (26)

confirming that a single value can be effectively derived from X. In the case of the example related to Equations 17 to 20, with transient pair probabilities of 10%, 35%, 35%, and 20%, we would have:

ε_X = −[0.1 log(0.1) + 0.70 log(0.35) + 0.2 log(0.2)] = 1.287...

η_X = e^{1.287} = 3.622...

Observe that the effective number of values in this case is substantially greater than 1, but smaller than the total of 4 possible outcomes. It can be shown that, among discrete densities, the discrete uniform probability density function is characterized by the largest entropy. Indeed, in case the previous example were perfectly uniform, it would follow that:

ε_X = −[4 (0.25) log(0.25)] = −log(0.25) = log(4)

η_X = e^{log(4)} = 4

As could be expected, the effective number of derivable values becomes identical to the number of possible outcomes when the entropy is maximum. Though the illustrations above consider the application of entropy to characterize the dispersion of a given sequence while taking into account isolated symbols, as discussed in Section 5 this test should also be applied to all respective subsequences in order to characterize more completely the randomness of that sequence. The quantification of the level of randomness associated with a random variable provides an interesting and not so often considered perspective for appreciating the very nature of randomness, as well as its possible impact.
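The worked values above can be reproduced with a few lines of code; the minimal sketch below assumes natural logarithms, as in the examples.

# Entropy (Equation 23) and exponential entropy (Equation 24) of a
# discrete distribution, using the natural logarithm.
entropy <- function(p) -sum(p[p > 0] * log(p[p > 0]))
exp_entropy <- function(p) exp(entropy(p))
p_biased  <- c(0.10, 0.35, 0.35, 0.20)   # the biased pair example
p_uniform <- rep(0.25, 4)                # the perfectly uniform case
cat(entropy(p_biased), exp_entropy(p_biased), "\n")    # 1.287... 3.622...
cat(entropy(p_uniform), exp_entropy(p_uniform), "\n")  # log(4) = 1.386... and 4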
For instance, a measurement involving a small level of randomness can be considered as not being random in specific situations in which that level of randomness does not influence the respective dynamics or outcome in any relevant way. In addition to taking into account the level of randomness of a random variable, it is also interesting to consider its respective maximum values, which are particularly important in situations involving thresholded activations. It is important to keep in mind that there are many other approaches to quantifying the randomness of a sequence of observations, including combinations of several tests such as the Diehard Battery of Tests (e.g. [START_REF] Marsaglia | Some difficult-topass tests of randomness[END_REF][START_REF]Diehard tests[END_REF]).

7 Randomness and Hierarchy

One important aspect of randomness concerns whether the observation of one outcome may influence the randomness (probability) of a subsequent outcome. Thus, we have observations of joint random outcomes or variables. For instance, let us assume that we are receiving a sequence of two digits 0 or 1, which can be understood in terms of the two following events: (A) observing the first received digit; and (B) observing the second digit. The probabilities of the outcomes of event (A) will henceforth be expressed as P(0) and P(1), while those of the second event will be represented as P(0|0), P(0|1), P(1|0), P(1|1). For the sake of proper normalization, it is henceforth assumed that:

P(0) + P(1) = 1    (27)

P(0|0) + P(1|0) = P(0|1) + P(1|1) = 1    (28)

The vertical bar character '|' can be read as 'given that', so that B|A means that outcome B follows outcome A, being possibly influenced, or conditioned, by that previous occurrence. For this reason, probabilities of type P(B|A) are said to be conditional. In principle, two situations may occur when observing conditional events: (a) complete independence, in the sense that a previous observation by no means affects the probabilities of subsequent observations; and (b) otherwise, i.e. a previous observation influences the successive probabilities. The conditional probability between two events A and B can be expressed as:

P(B|A) = P(A ∩ B) / P(A)    (29)

P(A|B) = P(A ∩ B) / P(B)    (30)

where P(A ∩ B) means the joint observation of A and B. Expression 29 can be readily rewritten as follows:

P(A, B) = P(A ∩ B) = P(A) P(B|A)    (31)

which more directly reflects eventual interest in the joint, subsequent events, being also more closely related to the following figures presenting trees of probabilities. Recall that the intersection between two sets A and B is commutative, i.e. A ∩ B = B ∩ A. However, the conditional probability is, in general, not commutative, in the sense that:

P(A|B) ≠ P(B|A)    (32)

We will also adopt the alternative representation:

P(A ∩ B) = P(A, B) = P(B, A)    (33)

Figure 8 illustrates the above experiment involving an event B taking place subsequently to another event A. Interestingly, the sequence of subsequent events or outcomes therefore defines a respective hierarchy in which each following level is potentially (but not necessarily) conditioned by the outcomes at the previous hierarchical levels. A first possibility is that the first outcome has no effect whatsoever on the second outcome.
In this case, the two outcomes are said to be independent, and we have that:

P(B = 0|A = 0) = P(B = 0|A = 1) = P(B = 0)

P(B = 1|A = 0) = P(B = 1|A = 1) = P(B = 1)

Figure 9 depicts an example of a situation in which the two events A and B are independent, also indicating the involved probabilities in this particular case. An even more specific situation, in which all involved probabilities are identical (and the events thus mutually independent and equiprobable), is illustrated in Figure 10. In the particular situation in which the events are independent, the order of the digits 0 and 1 will not matter, and the probability of observing a sequence with n_0 digits 0 and n_1 digits 1 can be expressed as:

P = p_0^{n_0} p_1^{n_1}    (34)

where p_0 and p_1 are the probabilities of observing a digit 0 and a digit 1, respectively. Another important situation concerning two subsequent events A and B happens when the events are dependent on one another, in the sense that P(B|A) is influenced by P(A). In this case, we have that:

P(A, B) = P(B|A) P(A)    (35)

A particular example of dependent events is depicted in Figure 11. The just illustrated situation, in which the occurrence of one event influences the probability of a subsequent event, is particularly interesting, as it indicates that the randomness of a given random variable is not necessarily independent, also revealing a possible network of influences between random events. In a sense, P(B|A) ≠ P(B) suggests that the observation of particular outcomes from A 'causes' a modification of the probabilities of the outcomes of B. However, this by no means implies that the event A causes event B, which would otherwise be a deterministic effect.

8 Generating Randomness

Now that we have become, even if preliminarily, familiarized with some of the important basic concepts underlying randomness, it is time to proceed to the particularly challenging task of generating randomness. Interestingly, the difficulty of generating random events provides an indication that true, inherent randomness is not easily, if ever, found in the real world. There are two main approaches to generating randomness: (a) by using physical means; and (b) by using computational means. The former method involves performing random experiments and/or measuring properties of the physical world, such as rolling a dice and observing the outcome, measuring the noise in an electronic circuit, or measuring the decay time of particles. The latter family of methods involves conceiving and applying computational algorithms (typically digital) for generating pseudo-random numbers, i.e. random numbers that are obtained in a deterministic manner but which appear to be random (e.g. satisfy some incomplete statistical test for randomness). The following discussion will focus on the computational approach. The first respective difficulty to be observed concerns the fact that, almost invariably, digital computer algorithms operate in a deterministic manner, which is necessary in order for a program to yield the same results whenever the same input data is supplied. Interestingly, the new area of quantum computing (e.g. [START_REF] Steane | Quantum computing[END_REF][START_REF] Gruska | Quantum computing[END_REF]) presents a completely different nature, in which the computations are performed in a stochastic manner. The challenge of computing random numbers has motivated a wide range of interesting approaches, which cannot be fully reviewed and explained in the present work. Rather, here we provide just a simple approach, in order to give some idea of a possible principle leading to pseudo-random numbers. It should be observed that this is only a didactic example, chosen for its simplicity, being likely to provide biased results.
The approach to be discussed involves the adoption of the function providing the remainder of the integer division between two integer values. For simplicity's sake, we shall be limited to non-negative integer values. The remainder function can be expressed as:

r(m, n) = m mod n, m, n ∈ {1, 2, . . . , M}    (36)

Necessarily, we have that 0 ≤ r(m, n) ≤ n − 1. Figure 12 illustrates the remainder function for n = 7 and m = 0, 1, 2, . . . , 50. The linear congruential generator family of methods is based on a recurrence expression yielding successive pseudo-random numbers x_i, after having started with a specific seed x_0 (e.g. [START_REF] Eichenauer | A non-linear congruential pseudo random number generator[END_REF][START_REF] Marsaglia | The structure of linear congruential sequences[END_REF][START_REF] Impagliazzo | Pseudorandom generation from one-way functions[END_REF][START_REF] Payne | Coding the lehmer pseudo-random number generator[END_REF][START_REF] Fuller | The period of pseudo-random numbers generated by lehmer's congruential method[END_REF]). An example of this type of expression, known as the Lehmer generator, is as follows:

x_{i+1} = [a x_i] mod m    (37)

Observe that this recurrence expression involves two parameters, namely a and m, which are typically called multiplier and modulus, respectively. There are several ways in which these parameters can be chosen, and each choice will have specific implications for the generated sequence of pseudo-random values, starting with the seed x_0. One possible choice of these parameters is known as MINSTD (from 'minimal standard', e.g. [START_REF] Ronald | Random numbers and computers[END_REF]):

m = 2^31 − 1    (38)

a = 7^5    (39)

A straightforward implementation of the above method in R is provided as follows, for didactic purposes only. Starting with the seed x[1] = 2, it will generate pseudo-random values in the interval (0, 10).

N <- 1000
a <- 7^5
m <- 2^31 - 1
x <- matrix(0, 1, N)
x[1] <- 2                      # seed
for (i in seq(2, N))
  x[i] <- (a * x[i-1]) %% m    # Lehmer recurrence (Equation 37)
x <- x / m * 10                # rescaling to the interval (0, 10)

Pseudo-random values in the set {1, 2, . . . , 10} can then be obtained from x[i] as:

X[i] = floor(x[i]) + 1    (40)

As previously observed, exactly the same sequence of pseudo-random values will be obtained as long as the same seed is employed. It should be noticed that the above implementation is very basic and does not cater for possible 32-bit multiplication overflow, also yielding a limited level of randomness. More effective algorithms/codes should be used for practical applications.

9 A Case-Example

In order to illustrate the difficulties in generating random sequences and quantifying their randomness, let us consider a simple example involving the application of the approach discussed in Section 5 as a means to verify the level of randomness of binary sequences obtained by the MINSTD method as straightforwardly implemented in Section 8. More specifically, we will estimate the distribution of the observed subsequences for increasing values of L = 1, 2, . . . , 9. The obtained distributions, presented in terms of respective histograms in Figure 13, are moderately uniform, with bars of similar heights, especially for the smallest considered values of L. However, as L increases, implying successively longer subsequences and fewer samples of a larger number of possible values (2^L), a smaller number of occurrences of each of the possible values is consequently obtained.
As a consequence, larger oscillations along the histograms are observed for the larger considered values of L. This tendency is further illustrated, in more quantitative terms, in Figure 14. In addition, despite the moderately uniform distributions obtained, a closer observation of Figure 13 indicates that the smallest and largest values for each respective L tend to be overrepresented, possibly corresponding to a bias in the random value generation. As already observed, the implemented MINSTD method provides limited randomness, so that other approaches need to be considered in practice for enhanced randomness. Similarly, it should be observed that the test of randomness presented in this section is not only limited to quite small values of L, but also not enough to provide a more substantive and strict quantification of random number generation. More comprehensive tests should be considered in practice.

10 Randomness and Generalization

Consider an experiment generating a sequence of measured numbers as illustrated in Figure 15(a), which can be understood as a source ρ_1. This source could be derived from a radio signal at a given frequency, the temperature at a given point along time, or the pressure taken along several heights at a given location. All these sequences of measurements can be understood, represented, and treated as real-valued signals, which can have continuous or discrete domains. These sequences are also often called random signals (e.g. [START_REF] Srinath | Introduction to statistical signal processing with applications[END_REF][START_REF] Gray | An introduction to statistical signal processing[END_REF][START_REF] Kay | Fundamentals of statistical signal processing: estimation theory[END_REF]) or stochastic processes (e.g. [START_REF] Cinlar | Introduction to stochastic processes[END_REF][START_REF] Parzen | Stochastic processes[END_REF][START_REF] Ross | Stochastic processes[END_REF]). Observe that the experimental recording of these signals will necessarily imply the observed values, as well as their domain (typically time), to be sampled, usually at equal time steps. The measurement of these signals will be affected by limited resolution as well as possible noise and interferences. We have already seen that sequences of numbers as long as possible would be needed in case we are interested in quantifying and studying the randomness of these signals. More specifically, it would be interesting to have a large number of sequences of a same length N. There are two main manners in which this could be approached: (i) long sequences are obtained serially from the same source along time; and (ii) each of the sequences is recorded in parallel from distinct sources with the same statistical properties. More information could therefore be obtained from the latter situation. While the former of the above possibilities would require the observed experiment to be stationary, in the sense of preserving all its statistical properties along time, the latter approach assumes all considered sources not only to be stationary, but also to have identical statistical properties whether observed at a single source or among all the considered sources, a condition known as ergodicity (e.g. [START_REF] Petersen | Ergodic theory[END_REF][START_REF] Gray | Probability, random processes, and ergodic properties[END_REF]). In a sense, ergodicity is therefore related to the concept of two signal sources being statistically equivalent.
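As a simple numeric illustration of statistical equivalence, the sketch below generates two distinct realizations of statistically identical sources and compares a few of their estimated properties; a more complete characterization would involve further moments and correlations.

# Two distinct realizations of statistically identical sources:
# a sinusoid corrupted by independent uniform noise.
set.seed(1)
t <- seq(0, 10, by = 0.01)
rho1 <- sin(2 * pi * t) + runif(length(t), -0.5, 0.5)
rho2 <- sin(2 * pi * t) + runif(length(t), -0.5, 0.5)
# The two sequences are clearly not identical...
print(max(abs(rho1 - rho2)))
# ...but their estimated statistical properties nearly coincide.
print(c(mean(rho1), mean(rho2)))
print(c(sd(rho1), sd(rho2)))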
It is interesting to observe that the sources do not need to supply identical sequences, but they need to be characterized by all respective statistical properties being identical (e.g. average, moments, etc.). Therefore, though not yielding identical sequences, the three examples (sources ρ_1, ρ_2 and ρ_3) in Figure 15 could still have the same statistical characteristics when characterized more completely.

11 Randomness and Dynamical Systems

One important aspect of randomness that is not very often observed concerns the fact that it is inexorably related to change. Indeed, by remaining static along time, systems cannot be random, because there are no choices or outcomes taking place. More formally speaking, the state of one such system would always remain the same. So, we have that change (and dynamics) is required for randomness. The phenomenon of change can be effectively described by using concepts from the area of dynamical systems (e.g. [START_REF] Thelen | Dynamic systems theories[END_REF][START_REF] Kailath | Linear systems[END_REF][START_REF] Hannan | The statistical theory of linear systems[END_REF][START_REF] Da | Visualizing the content of differential equations[END_REF]). Given a deterministic system and its respective state space, its dynamical unfolding along time can be fully represented in terms of a respective oriented trajectory, as illustrated in Figure 16(a).

Figure 16: The dynamics of a deterministic system can be represented as a single trajectory along time t in the respective state space (a). Contrariwise, the dynamics of a non-deterministic system, illustrated in (b), involves multifurcations corresponding to two or more outcomes or choices taking place at a respective time instant. As briefly discussed later in this work, multifurcations can also be a consequence of incomplete state space representations of the observed phenomenon (please refer to Section 13).

State spaces are N-dimensional vector spaces having each axis associated with a respective property or measurement of the system dynamics. Therefore, at each time instant, the observed properties of the system are mapped into a respective point in the state space. As the system properties undergo successive changes, a respective trajectory unfolds in the associated state space. A particularly intuitive example of a state space would correspond to the positions of a point particle moving along time in a respective Euclidean plane. The properties to be mapped in this case would typically include the position and velocity of the particle at each instant of time. More general state spaces are defined by the several properties required to represent the dynamics of the system of interest. Though the examples of state spaces in the present work will refer to two-dimensional planes, more general state spaces can have any dimension. A complete representation of the system state would allow the complete characterization of the respective dynamics. This requires that enough properties are observed. In case these properties are not enough to completely specify the system dynamics, characterizing a lack of information about the system, the representation will be incomplete. State space representations of dynamic systems are frequently limited by two main factors: (a) it is difficult or impossible to consider measurements of all properties that can influence the respective dynamics; and (b) the limited resolution and noise/interference characterizing experimental measurements.
Observe that both these effects can be understood as lack of knowledge about the system, leading to incomplete representations. The prerequisite of change for randomness is particularly important because it emphasizes the question of why and how changes can take place, ultimately leading to the concept of causality (e.g. [START_REF] Pearl | Causality[END_REF][START_REF] Da | Continuous and event-driven causality: A simple model-based approach[END_REF][START_REF] Granger | Some recent development in a concept of causality[END_REF][START_REF] Hiemstra | Testing for linear and nonlinear Granger causality in the stock price-volume relation[END_REF][START_REF] Bressler | Wiener-Granger causality: a well established methodology[END_REF][START_REF] Salmon | Causality and explanation[END_REF][START_REF] Bunge | Causality and modern science[END_REF]). Given its own nature, this question can be addressed from the perspective of physics, more specifically according to its classical and quantum approaches. When approached from the perspective of classical physics, every change undergone by mass and energy in the real world is explainable in terms of Newton's laws of motion (e.g. [START_REF] Serway | Physics for scientists and engineers[END_REF][START_REF] José | Classical Dynamics: A Contemporary Approach[END_REF][START_REF] Morin | Introduction to classical mechanics: with problems and solutions[END_REF][START_REF] Thornton | Classical dynamics of particles and systems[END_REF][START_REF] Da | Point motion in flat spaces: An ample starting point[END_REF][START_REF] Da | Single point particle motion: Mass, force, momenta, impulse[END_REF]). For instance, in the case of the point mass particle example mentioned above, the obtained trajectories will necessarily correspond to continuous and smooth curves as a consequence of the inertia implied by the respective particle mass under the action of finite forces. Given any point at a specific time along a respective trajectory (e.g. the initial conditions), the whole of the potential trajectory can be determined before and after that time. Indeed, as a consequence of the time symmetry of Newton's laws of motion, there is no intrinsic distinction between past and future. We thus have that, from the classical mechanics perspective (e.g. [START_REF] Serway | Physics for scientists and engineers[END_REF][START_REF] José | Classical Dynamics: A Contemporary Approach[END_REF][START_REF] Morin | Introduction to classical mechanics: with problems and solutions[END_REF][START_REF] Thornton | Classical dynamics of particles and systems[END_REF][START_REF] Da | Point motion in flat spaces: An ample starting point[END_REF][START_REF] Da | Single point particle motion: Mass, force, momenta, impulse[END_REF]), the universe would be fully deterministic, and therefore predictable, provided one could have access to one of its complete state configurations along time, from which the respective trajectory could then be determined. In a deterministic universe, changes would be the consequences of a chain of causal influences extending along time (e.g. [START_REF] Da | Continuous and event-driven causality: A simple model-based approach[END_REF]). Even so, prediction would still be necessary, as it would very likely be impossible to know the complete state of the universe at any given time. However, in a fully deterministic universe, the predictions could be approached knowing that no intrinsic randomness would be involved.
The phenomenon of choice in a non-deterministic system can be modeled by allowing a trajectory to branch, or multifurcate, into two or more branches, as illustrated in Figure 16(b). The structure adopted here should not be confused with a related concept, also from dynamical systems, in which the trajectories in a state space bifurcate as a consequence of small changes in an involved parameter. Each of these multifurcations represents a possibility for the system to change dynamics which, generally speaking, can be controlled by an external effect, or take place in an intrinsic and truly random manner. In addition, as illustrated in Section 13, a multifurcation can simply be the consequence of an incomplete representation of the properties of the system. In the case of a deterministic system, the branching could be controlled by a respectively associated external effect or signal. As a literal example of branching under the control of an external effect, we could mention a production line in which products not satisfying given specifications are respectively directed (branched) for rejection. In non-deterministic, or random, systems, the decision to be taken at multifurcations can be a consequence of true randomness, which cannot be accounted for by any means. From the perspective of the physical universe, it is in the quantum physics (e.g. [START_REF] Dirac | The Principles of Quantum Mechanics[END_REF][START_REF] Serway | Modern Physics[END_REF][START_REF] Mandl | Quantum mechanics[END_REF]) approach that true randomness assumes intrinsic importance. According to this approach, particles involving mass and/or energy (and information) are all intrinsically subjected to randomness, at least at the smallest spatial scales (e.g. particles). More specifically, as implied by the Schrödinger equation, particles are associated with complex-valued wave functions that are in turn associated with real-valued probability density functions. As a consequence, the state (position and momentum) of a given particle is only statistically characterized along a wave of probability before the respective measurement. The action of measurement implies the wave function to collapse, resulting in a particle with its specific state. Even so, according to the Heisenberg uncertainty principle (e.g. [START_REF] Dirac | The Principles of Quantum Mechanics[END_REF][START_REF] Serway | Modern Physics[END_REF][START_REF] Mandl | Quantum mechanics[END_REF]), it is impossible to measure, with full accuracy and simultaneously, both the position and momentum of a particle. One approach to having a basic unit of information in a quantum universe can be abstracted in terms of the concept of the qubit, which has intrinsically random properties. For example, measuring a spin along a direction orthogonal to its orientation will yield uniformly distributed values of 1 and −1. Because spins can be modeled as qubits, and particles such as electrons have spin, the universe could potentially incorporate aspects of randomness that would be more intense at the smallest spatial scales. Although macroscopic structures such as a soccer ball ultimately are quantum objects, the implied random variations are too small to be typically observed. However, it should be kept in mind that non-linear effects can amplify smaller scale randomness up to larger spatial scales. This possibility is particularly intense in chaotic systems which, even if deterministic, are characterized by potentially exacerbated sensitivity to tiny perturbations.
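This amplification of tiny perturbations can be illustrated by the classic logistic map, a deterministic recurrence adopted here purely as an example; in the sketch below, two trajectories differing initially by only 10^{−10} diverge to order one after a few dozen iterations.

# Sensitivity to tiny perturbations in the (deterministic) logistic map
# x[i+1] = r x[i] (1 - x[i]) in its chaotic regime (r = 4).
r <- 4
N <- 60
x <- numeric(N); y <- numeric(N)
x[1] <- 0.2
y[1] <- 0.2 + 1e-10            # perturbation of only 1e-10
for (i in seq(2, N)) {
  x[i] <- r * x[i-1] * (1 - x[i-1])
  y[i] <- r * y[i-1] * (1 - y[i-1])
}
# The divergence grows from 1e-10 up to order 1 along the iterations.
print(abs(x - y)[c(1, 10, 20, 30, 40, 50, 60)])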
In case true randomness at mesoscopic or macroscopic scales could indeed emanate from small scale quantum effects, an interesting question remains: given that even quantum effects necessarily take place in terms of energy and/or mass, the determination of a truly random event would possibly need to be immaterial, or happen materially in a domain that is not known and/or observable. As the latter situation does not correspond to true randomness, but only lack of knowledge, the only remaining possibility would be to have immaterial causes of randomness. Otherwise, if a random event takes place under the effect of material causation, it would no longer be unaccountable, because the material causation could itself be tracked back to previous causal effects. It should be kept in mind that, in addition to not answering the great question about true, unaccountable randomness, the above considerations are quite informal and by no means represent a strict or formal approach, especially when considering the quantum perspective, which is particularly challenging and non-intuitive. Nevertheless, the above discussion of physical randomness should provide additional indications about the complexity and challenges implied as one searches for true randomness.

12 Randomness and Noise

The experimental observation of the properties of a physical system relies on taking respective measurements, which are often subjected not only to limited resolution, but also to several types of noise and interferences. Given that these effects can therefore influence the characterization of the randomness of a system, it is important to consider them with particular attention. One first important aspect concerns the very nature of noise and interference. In principle, noise corresponds to an unwanted component of a measurement or a signal. The concept of interference is closely related, and will actually be understood as a synonym of noise in the present work. An example of noise is the 60 Hz (or 50 Hz, depending on the locality) oscillation which is frequently found in electric signals as a consequence of electromagnetic interference from the power line. Observe that, in the case of this example, the noise actually corresponds to a well-defined signal, albeit one representing an unwanted influence. Noise can also have a more intrinsically random nature, such as thermal noise in electrical and electronic circuits, or the decay of particles. In addition, observe that noise can be combined with a measurement in several manners, including but not limited to addition and multiplication. Interestingly, in the case of a composed signal corresponding to the sum of a source signal and a noise signal, either of these two components can, in principle, be deterministic or fully random. The resulting signal will be deterministic only in case both the source signal and the noise are deterministic; otherwise it will be random. The level of randomness of the resulting composed signal will depend on the relative amplitude of the two added components. Figure 17 illustrates an interesting situation involving two composite signals x(t) and y(t) derived from the same deterministic source signal z(t) (a sinusoidal signal with frequency 1 and amplitude 1) by respective scalings by a and b and the addition of distinct (and mutually independent) uniformly random noises n_x(t) and n_y(t) with null mean and respective standard deviations σ_1 and σ_2.
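A minimal sketch of this construction is given below, assuming a = 1, b = 2, and σ_1 = σ_2 = 0.4 as in the example reported next; the Pearson correlation between the two composite signals is then estimated with R's built-in cor() function.

# Two composite signals derived from the same sinusoidal source z(t),
# scaled by a and b and corrupted by independent uniform noise with
# null mean and standard deviation sigma (as in Figure 17).
set.seed(7)
t <- seq(0, 5, by = 0.001)
z <- sin(2 * pi * t)                  # source: frequency 1, amplitude 1
a <- 1; b <- 2; sigma <- 0.4
half <- sigma * sqrt(3)               # half-width of a uniform noise with sd sigma
x <- a * z + runif(length(t), -half, half)
y <- b * z + runif(length(t), -half, half)
print(cor(x, y))    # Pearson correlation, reduced by the added noise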
The two obtained signals result intrinsically interrelated, in the sense that an increase (or decrease) in one of them will tend to be accompanied by a respective increase (decrease) also in the other signal. This tendency for joint variation is illustrated in the scatterplot in Figure 18 in the case of a = 1, b = 2, and σ_1 = σ_2 = 0.4. The linear interrelationship between the two obtained signals can be quantified in terms of the respective Pearson correlation coefficient.

Several interesting aspects can be observed from the above example. First, we have that two correlated signals may actually derive from the same common source, which causes both signals. Thus, except for the contribution of the added noise, the two signals would actually convey the same original information. Though a relevant level of joint variation can be observed even in the presence of noise, the two signals would have no mutual causation in this case. Then, we have that the level of noise can strongly influence the estimation of the relationship between signals. As observed in the above example, increased intensity of noise tends to reduce the Pearson correlation coefficient between the two signals. In addition, we also have that a signal composed of a source signal and noise can be deterministic only if both those signals are deterministic. However, signals originating from arithmetic combinations of two or more random signals can be deterministic provided the components derived from identical randomness mutually cancel one another (as in the division between two scaled versions of the same random source).

13 Randomness and Completeness

Given that randomness is intrinsically associated with changes, it becomes critically important to be able to observe these changes. Complete observation of all involved changes may not always be possible as a consequence of several factors, including the infinite number of points along a continuous dynamical trajectory (in practice, it is only possible to observe or measure a finite number of points in the state space), the adoption of insufficient properties to define the respective state space, as well as the existence of states that are unobservable. We shall preliminarily assume that the adopted measurements are enough to fully represent the respective dynamics. This important principle is illustrated in Figures 20(a) and (b). Additional independent and simultaneous trajectories are presented in both cases for generality's sake. The enumerated green stars identify the only points of the state spaces that are assumed in this example to be observable (or measurable) during the analysis of the respective system dynamics. In Figure 20(a), we have a deterministic trajectory (shown in magenta), as well as an additional trajectory (shown in black) respective to some other simultaneous dynamic system taking place in the same state space. It is assumed that these additional trajectories are not preliminarily known. In addition, the developed trajectories are assumed to be continuous and, as in a classic mechanics situation implying inertia, smooth. Four available observation points are marked in the state space by green stars, which are assumed to generate an event whenever the system visits the respective state. Thus, the timing and sequence of these events can be analyzed in order to estimate possible causal effects and/or randomness along the dynamics.
Starting with an observation of an event at point 1, let us now consider some possible hypothetical unfoldings, such as:

(a) The next event takes place at point 2. This suggests that the ordered pair of points (1, 2) belongs to a same trajectory. The eventual repeated observation of this sequence of events can be taken as providing amassing indication of a deterministic dynamics proceeding along a trajectory that contains the points (1, 2). Such repeated observation of this pair of points could be a consequence of the system being restarted at state 1, or could be taken as the trajectory being periodic;

(b) The next event takes place at point 4, which is not related to the dynamics respective to point 1. In case the sequence (1, 4) is repeated several times (by chance or as a consequence of another effect, such as synchronization of the two dynamics), it could be incorrectly inferred that the points (1, 4) belong to a same deterministic trajectory, which is not the case;

(c) The event 4 is often observed, by chance or synchronization, also to follow the event 1. In case this situation repeats itself several times, it could be incorrectly inferred that the dynamics corresponds to a non-deterministic trajectory with a bifurcation involving the sequences (1, 2) and (1, 4).

In the case of the non-deterministic system shown in Figure 20(b), possible observations after the dynamics is observed to pass through point 1 would include:

(i) As above, the next event takes place at point 2. This suggests that the ordered pair of points (1, 2) belongs to a same trajectory. The eventual repeated observation of this sequence would then suggest that the trajectory contains the points (1, 2);

(ii) The event 1 is followed by events 2 or 3 with equal probabilities. In case this happens several times, a possible non-deterministic trajectory could be inferred in which event 1 is followed by equally possible outcomes 2 and 3. Given that the third branch cannot be observed in the case of the specific situation depicted in Figure 20(b), the trifurcation would be incorrectly taken for a bifurcation.

Some important conclusions can be drawn from the above examples and discussion, including but not limited to:

• Causal interactions between two events along a same deterministic trajectory can be statistically suggested, but full confirmation would require either the knowledge of the system structure and respectively implied trajectory, or an infinite number of observations of respective subsequent occurrences;

• Sequences of causal events along a trajectory will go unnoticed when the number and position of the monitoring points are not adequate (e.g. only one point falling on the trajectory);

• The verification of a sequence of events stemming from a specific set of observation points as being fully random could be suggested by the identification of multifurcations, but this can be misled by the existence of additional trajectories in the same state space;

• Some of the trajectories branching out of a multifurcation can be overlooked as a consequence of the lack of respective monitoring points.
Though the situations discussed above addressed cases in which not enough monitoring points are available, there is another case related to the incomplete observation of the dynamical system of interest. More specifically, this situation relates to adopting a set of measurements to define the respective state space that is not enough to fully characterize the respective dynamics. As could be expected, this situation tends to be even more critical than the lack of monitoring points, because typically more information is lost when one or more essential measurements are left out. Figure 21 illustrates one such case, in which a fully deterministic trajectory in an R³ space leads to a non-deterministic trajectory in the projected space R² when the measurement x_3 is missed, therefore implying a lack of information about the respective dynamics. Observe that both true randomness, as well as randomness stemming from incomplete knowledge, lead to multifurcations in respective state spaces. Only a deterministic system completely represented in a state space will necessarily be guaranteed to be devoid of multifurcations. The considerations developed in this section indicate that, though causality and randomness can be estimated by using experimental means in which events are triggered along a state space, the respectively obtained results are not definitive, as definitive conclusions would involve an infinite repetition of observations, besides being constrained by the limited number and positioning of the monitoring points, while assuming that the adopted set of measurements is enough to fully characterize the observed dynamics. It should also be kept in mind that every experimental measurement is also limited by unavoidably limited resolution/accuracy, as well as the eventual presence of noise and interferences, which may incorporate long distance effects that are impossible to incorporate in the respective analysis, such as the influence of the gravity of a yet unknown distant star on the motion of a chaotic pendulum. In this sense, even in a deterministic system, it would be impossible to have a full description of all the influences on the respective dynamics, which indicates that even if the real world is deterministic, we will always have limited knowledge about its properties and states, therefore characterizing a situation of randomness being a consequence of lack of knowledge about the observed dynamics.

14 Hybrid Random Systems

A particularly interesting possibility that is not often considered relates to having a dynamical system with hybrid dynamics, in the sense of incorporating a mix of deterministic and random modules, or presenting levels of randomness, as illustrated in Figures 22 and 23, respectively. Figure 22 depicts one of the simplest possible configurations, involving one deterministic system interconnected to a random counterpart. In such a system, the deterministic module would control, to a limited extent, the dynamics of the random counterpart. Two important aspects can be observed. First, the control needs to be limited, otherwise the random counterpart would become deterministic. At the same time, no net influence can be exerted by the random module on the deterministic counterpart, otherwise the latter could become random. Another possibility of a hybrid random dynamic system is illustrated in Figure 23, in which case most of the system is actually random, but at successive, graded respective levels. Hybrid dynamic systems provide an interesting means to try to accommodate concomitant randomness and determinism.
As a possible example, we could mention a system aimed at analyzing a set of real-world measurements intrinsically incorporating randomness (e.g. noise and interferences), which progressively removes this randomness by using filters and other signal processing methods. Another example would correspond to the real physical world, in which randomness tends to be more intense at its smallest scales, then decreasing along the meso- and macroscopic scales.

15 Randomness and Decision

Randomness has an important implication for taking decisions, in the sense that in a fully deterministic system there would actually be no decisions to be taken, as the complete unfolding of the dynamics is determined from any previous state configuration. As a consequence, and strictly speaking, decisions are only possible in truly random systems or in deterministic systems about which we do not have complete knowledge. We shall start by considering the situation depicted in Figure 24. Here, we have a system S of interest from which five measurements, indicated by respective distinct colors, are taken as subsidies for taking a specific decision ∆ with binary values 0 or 1.

Figure 24: A decision ∆ has to be taken with respect to a system S, while considering measurements, indicated in respective distinct colors, of five of its properties. The strengths of the measurements are indicated by the areas of the respective rectangles. Observe that the adopted five measurements may not be enough to completely characterize the state of the system, characterizing randomness by lack of knowledge.

A possible manner to take decisions is to weight the respective measurements contributing respectively to the binary decisions 0 and 1. In the case of the particular example illustrated in Figure 24, we assume the measurements shown in green and blue to support taking decision 0, while the yellow and orange measurements are understood to contribute to taking decision 1. Figure 25 depicts an abstraction of reaching the decision ∆ by using a scale where the five measurements are placed according to their respective support for the decisions 0 and 1. In the case of the illustrated situation, 0 would be taken as the decision as a consequence of the respective support (measurements in green and blue) having greater 'weight'.

Figure 25: Taking a decision on the system S while considering the adopted five respective measurements can be abstractly implemented by using a scale in which the 'weights' of the measurements supporting decisions 0 and 1 are compared. In the case of this particular example, the decision 0 would have been taken.

In case the measurements were fully accurate and provided a complete description of the state of the system, the above procedure would lead to a deterministic and well-defined outcome. However, neither of these two conditions is typically (if ever) met in practice. The implications of these two limitations are briefly discussed in the remainder of this section. First, we assume that the five measurements do provide a complete representation of the state of the system. However, these measurements are taken with limited resolution and/or in the presence of noise, leading to a respective error. Figure 26 illustrates the effect of a relatively small level of noise, induced by limited measurement accuracy/resolution, on the weighting procedure. Because the noise is assumed to be small enough, the respective displacements of the scale in this example preserve the decision of taking value 1 as the outcome.
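This weighting procedure can be sketched numerically; the measurement strengths adopted below are hypothetical values chosen only for illustration, and the script estimates how often additive measurement noise of increasing intensity flips the noiseless decision.

# Weighted binary decision under additive measurement noise.
# Hypothetical noiseless strengths: the first two measurements support
# decision 0, the remaining ones support decision 1.
w0 <- c(2.0, 1.5)             # support for decision 0
w1 <- c(1.0, 1.2, 1.1)        # support for decision 1
base <- sum(w0) - sum(w1)     # > 0, so the noiseless decision is 0
flip_rate <- function(noise_sd, trials = 10000) {
  flips <- 0
  for (k in seq_len(trials)) {
    m0 <- w0 + rnorm(length(w0), 0, noise_sd)
    m1 <- w1 + rnorm(length(w1), 0, noise_sd)
    if ((sum(m0) - sum(m1) > 0) != (base > 0)) flips <- flips + 1
  }
  flips / trials
}
for (s in c(0.05, 0.2, 0.5, 1.0))
  cat("noise sd =", s, " flip rate =", flip_rate(s), "\n")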
However, in the case of lower resolution and/or higher levels of noise, the maximum error implied in the weight comparison can lead to changing the decision between 0 and 1, as illustrated in Figure 27. The above considerations indicate that taking a decision in a system completely described by the adopted measurements, but with limited accuracy or high levels of noise/interference, can lead to incorrect outcomes. However, provided these unwanted effects can be kept at adequate levels, the outcome can still be deterministic. Though in the above examples we considered limited accuracy and noise as sources of randomness, the respective discussion can be immediately extended to the case in which the measurement errors are a consequence of true, intrinsic randomness manifested at specific levels. For instance, if the measurements are influenced by true randomness at small enough intensity, the decision outcome would not be affected. This line of reasoning suggests that deterministic decisions can be taken even in the presence of true randomness.

16 Concluding Remarks

Though not often realized, the concept and phenomenon of randomness constitutes one of the most important and challenging aspects shared by science and human experience. This importance stems mainly from the fact that, in both those areas, there often arises the need to estimate the effects of decisions or outcomes, so that the chances of achieving aimed results can be enhanced. Indeed, the impact of accurately predicting outcomes, as well as the consequences of these outcomes, can hardly be overemphasized regarding several of the human dimensions, including but not limited to environmental, social, economic, and quality of life aspects, among many others. Yet, despite all the efforts that have been directed at better understanding randomness, motivated by the above summarized aspects, we still cannot be certain whether the universe is (or not) truly random, or if the limitations of our ability to make predictions are a sole consequence of incomplete information about the state space of an otherwise hypothetically fully deterministic universe. The challenges implied by randomness are further emphasized by the interrelationships between this concept and several other concepts from an intricate multidisciplinary perspective. The present work has been mainly aimed at presenting, briefly reviewing, and discussing what constitutes randomness, and whether this property can be truly encountered in the physical world. In order to keep the presentation relatively simple and accessible, only some of the main concepts and methods have been addressed, in a relatively informal manner. As such, the present work does not correspond to a formal and complete approach to the covered concepts and methods, and therefore should not be understood as a more orthodox basic text. Given the inter- and multidisciplinary nature of randomness and its relationships, our approach unfolded while considering several subsequent, interrelated concepts and issues, including types of randomness, temporality, causation, probability, identification and quantification of randomness, the hierarchical nature of random effects, the challenging issue of generating random values, the generalization of randomness in time and space, a dynamical systems approach to randomness, noise, completeness of observation, and the task of taking decisions.
It is hoped that a more complete picture of randomness and its properties can be achieved when approached from these several perspectives. In addition to providing some rudiments about the covered topics and their relationship with randomness, the presented material also discusses several respective difficulties that make randomness a concept that is as important as it is challenging. In particular, regarding the core issue of the possible existence of true, unaccountable randomness, no definitive answer has been provided, as this is probably an undecidable question. However, the two considered types of randomness, namely randomness stemming from incomplete knowledge and true randomness, as well as the consideration of hybrid (modular or graded) random dynamic systems, could provide additional subsidies from which to approach the intriguing concept of being random. Another interesting point is that knowing that the randomness underlying a specific case stems from lack of knowledge, and not from true randomness, would motivate one to invest further efforts in the respective study and research. As it may be impossible to decide on this issue, it remains an interesting prospect to perform experiments and measurements that are as comprehensive as possible, taking into account as many factors as possible, with ever increasing accuracy. It is hoped that the presented discussion, illustrations and numeric examples may motivate the reader to probe further (e.g. starting from the several provided bibliographical references) into the fascinating subject of randomness and its myriad relationships and implications.
Figure 1: A receiver (B) observes a sequence of digits unknowingly corresponding to an intermediate portion of the number π and has to decide whether the sequence is random or deterministic. The problem of defining and recognizing randomness is more subtle and complex than it may initially appear, involving in this particular case the recognition, or not, by the receiver of the origin of the digits as being part of the number π.
Figure 3: A possible (and naïve) quantum mechanics approach to the situation depicted in Fig. 2(a) consists of considering several uniformly distributed wave functions representing all the involved particles of the object.
Figure 4: The basic timing diagram typically underlying an event-driven causation[START_REF] Da | Continuous and event-driven causality: A simple model-based approach[END_REF]. The cause C needs to take place for a minimum period of time ∆, which is then followed by a respective delay δ, and the effect E, along which the causation continues but is no longer under control of the agent that initiated it. It is understood that, once the causation period is completed, the effect can no longer be avoided other than by some eventual external influence (not included in the diagram). In continuous causation, we have that ∆ → 0 and δ → 0, so that the effect will always immediately follow the respective immediate cause.
Figure 5: Illustration of the uniform probability density function p(x) for [a, b] = [2, 5] and c = 1/3. Observe that p(x) ≥ 0, ∀x and ∫_{-∞}^{∞} p(x)dx = 1.
Figure 6: Example of the discrete uniform probability density function of a discrete random variable taking values 0 and 1 with identical probabilities 0.5. Observe that, as in Fig. 5, p(x) ≥ 0, ∀x and ∫_{-∞}^{∞} p(x)dx = 1.
Figure 7: Illustration of Buffon's experiment to estimate the constant π.
−[0.1 log(0.1) + 0.70 log(0.35) + 0.2 log(0.2)] = 1.287..., so that η_X = e^{1.287} = 3.622...
Figure 8: The representation of the outcomes, and respective probabilities, involved in the considered two-events experiment. The probabilities of outcomes at subsequent levels may (or not) be conditioned by the outcomes at the previous levels. When a subsequent event B does not depend on the previous event A, they are said to be independent, and we have P(B|A) = P(B), also implying P(A ∩ B) = P(A, B) = P(A) P(B).
Figure 9 depicts an example of a situation in which the two events A and B are independent, also indicating the involved probabilities in this particular case.
Figure 9: Illustration of a particular situation in which the two involved events are independent, so that P(A ∩ B) = P(A, B) = P(A) P(B).
Figure 10: A specific situation in which not only are the events A and B independent, but they also involve identical probabilities. As a consequence, each of the four possible joint outcomes will have precisely the same probability 0.25 (equiprobable).
Figure 11: An example of a situation where the outcomes of event A influence (or condition) the subsequent event B. In this case, the joint probabilities are calculated as P(A, B) = P(B|A) P(A).
Figure 12: Illustration of the results of the function that provides the remainder of the integer division of a non-negative integer value m = 0, 1, 2, . . . by the number n = 7. Observe the periodic nature of this function, with period equal to n = 7.
Figure 13: Histograms illustrating the distribution of the values obtained for L = 1 to 9 by using the MINSTD method in Section 8, considering N = 30000. The red line in each plot indicates the respectively expected (average) number of counts in each of the histogram bins. The dispersion of counts tends to increase with L as a consequence of the respectively reduced sampling of the subsequences. Moderately uniform distributions are obtained, except for the first and last bins, which tend to present slightly higher counts for several of the values of L considered in this figure, suggesting a respective bias in the random value generation.
Figure 14: The relative error (in %) of the exponential entropy of the number of possible values for L = 1, 2, . . . , 9 with respect to the considered number N of drawn samples (shown in the legend). For each value of N, the error tends to increase monotonically with L, which is caused by the smaller number of samples obtained for each respective possible value. In addition, the overall errors tend to decrease as N increases.
Figure 15: Three sources (ρ1, ρ2, and ρ3) of potentially random sequences that are not identical but may (or not) be characterized by presenting full sets of identical statistical properties.
Figure 17: Flow diagram illustrating the obtention of two composite signals x(t) and y(t) obtained from a noiseless sinusoidal source signal z(t) by respective scalings by a and b and the addition of independent uniform noise nx(t) and ny(t) with null mean and respective standard deviations σ1 and σ2.
Figure 18: Scatterplot of the composite signals x(t) and y(t) for a = 1, b = 2, and σ1 = σ2 = 0.4. Though the two signals have a tendency for positive joint variation, the added noise makes this relationship less defined. In the case of this particular scatterplot, the Pearson correlation coefficient is P ≈ 0.938.
Figure 19(b) illustrates the Pearson correlation coefficients obtained from the above composite signals in terms of several values of σ = σ1 = σ2. The presented values were averaged over 1000 simulations.
Figure 19: The average Pearson correlation coefficients obtained for the signals in the above example in terms of several standard deviation values.
Panels (a) and (b) involve the deterministic and random trajectories previously shown in Figure 16; equal probabilities are assumed for the three possible outcomes in the latter situation.
Figure 20: Illustration of the concept of observability of points in a state space respectively to the deterministic and random trajectories respectively shown in magenta in (a) and (b). Additional independent and simultaneous trajectories are presented in both cases for generality's sake. The enumerated green stars identify the only points of the state spaces that are assumed in this example to be observable (or measurable) during the analysis of the respective system dynamics.
Figure 21: Illustration of a situation in which a deterministic trajectory in a higher dimensional state space (R³), projected into a smaller dimension (R²), leads to a seemingly non-deterministic trajectory including a bifurcation point P, as a consequence of not taking property x3 into account, which would otherwise ensure that none of the points of the original trajectory overlap.
Figure 22: A hybrid dynamic system S subdivided into a deterministic component D and a non-deterministic (or random) component N. Some aspects of the non-deterministic part can be influenced by the deterministic component, but the incorporation of random dynamics into an otherwise deterministic system would possibly make that system random (unless the random effects mutually cancel one another).
Figure 23: Example of a graded random dynamic system, where randomness gradually extends through the whole system.
Figure 26: In the situation illustrated in this figure, the presence of a small level of accuracy/resolution error in the considered measurements induces a maximum displacement of the scale arms that is not enough to change the outcome of taking 0 as the decision.
Figure 27: In case the previous measurements are taken with less accuracy and/or in the presence of higher levels of noise, the respectively induced displacement of the scale arms can be enough to change the decision.
Acknowledgments Luciano da F. Costa thanks CNPq (grant no. 307085/2018-0) and FAPESP (grant 15/22308-2). Observations As with all other preprints by the author, the present work contains preliminary material subject to further revision and validation. Modification, commercial use, or distribution of any of its parts is not permitted, as this work is under the author's copyright. Many of the preprints by the author are also available on Hal and/or arXiv. This work can also be cited by using the DOI number or the article identification link. Thanks for reading.
04116307
en
[ "info.info-ro", "spi.gciv.it" ]
2024/03/04 16:41:26
2021
https://hal.science/hal-04116307/file/TRB2021_AED50_OR-NC-JPN_rev1-HAL.pdf
Keywords: Shared Autonomous Mobility, First-Last Mile Service, Vehicle Routing, Node Embeddings Autonomous vehicles are anticipated to revolutionize ride-sharing services and subsequently enhance public transportation systems through a first-last mile transit service. Within this context, a fleet of autonomous vehicles can be modeled as a Dial-a-Ride Problem with certain features. In this study, we propose a holistic solving approach for this problem, which combines a mixed-integer linear programming formulation with a novel graph dimension reduction method based on the graph embedding framework. This latter method is effective since accounting for the heterogeneous travel demands of the covered territory tends to increase the size of the routing graph drastically, thus rendering exact solving computationally infeasible even for small instances. An application is provided for the real transport demand of the industrial district of "Vallée de la Chimie" in Lyon city, France. Instances involving more than 50 transport requests and 10 vehicles could be easily solved. Results suggest that this method generates routes over reduced graphs with lower vehicle kilometers traveled compared to the constrained K-means based reduction. Reductions in terms of GHG emissions are estimated to be around 75% lower than the private vehicle mode in our applied service. A sensitivity analysis is also provided. INTRODUCTION The urban expansion of most cities around the world has followed an automobile-oriented pattern, leading to the emergence of several low-density metropolitan areas and thus increasing the overall infrastructure costs for public transportation (PT) (Sinha, 2003 [35]). Since then, and for this reason, PT systems have been relegated to a secondary role behind private car use in urban mobility. PT coverage of low-density areas and of commercial and industrial districts outside the city is limited. Shared mobility, i.e., the shared use of vehicles, bicycles, and other means of transport to access PT networks, has started to appear as a viable solution to this issue, known as the first- and last-mile commute. The concept of shared mobility is not novel in itself, and can be traced back to the 1990s, when it was popular, and even to 1948 for the first car-sharing program (Shaheen & Chan, 2016 [36]). Yet, the major shift in shared mobility is still expected by many to happen with the advent of shared autonomous vehicles (SAV). Several AV manufacturers and transportation service providers have heavily invested in and adopted specific ride-sharing plans, e.g., Waymo in the Phoenix area (Waymo, 2020 [41]), Jaguar Land Rover (Jaguar Land Rover, 2020 [START_REF] Land | Article titled "Jaguar Land Rover reveals Project Vector autonomous ride-share vehicle[END_REF]), and eventually Amazon, a fresh new competitor in this market (Amazon, 2020 [START_REF] Amazon | Article titled "Amazon Buys Autonomous Taxi Company Zoox. Look Out, Uber and Lyft[END_REF]). The factors making SAVs an attractive transit mode are mainly economic, for customers as well as for PT and transportation network operators. Due to lower operational and investment costs over fixed routes, transportation companies could propose services at fair prices with a higher frequency. Higher customer satisfaction and time savings may equally be obtained compared with other transit modes such as park-and-ride and active modes.
Another argument in favor of SAVs lies in the environmental sustainability of this technology, mainly due to the use of electric vehicles in SAV services. Autonomous fleets could reduce GHG emissions by up to 73% compared to conventional taxi fleets (Bauer et al., 2018 [3]). Agent-based simulations have shown that each SAV could replace around 11 privately owned vehicles, with a 10% increase in travel distance, in the city of Austin, Texas [START_REF] Fagnant | The travel and environmental implications of shared autonomous vehicles, using agent-based model scenarios[END_REF], and from 5 to 10 privately owned vehicles, with an increase of 6 to 89% in travel distance, in Lisbon (International Transport Forum, 2015 [START_REF]Urban mobility system upgrade: How shared self-driving cars could change city traffic[END_REF]). The existence of empty trips explains this latter increase. All these favorable reports have led to a resurgence of small-size experiments with SAV services in Europe, the U.S., and globally. In Lyon city (France) alone, where the case study of this paper is located, there are currently three such experiments: Navya shuttle buses in the Confluence area and around Groupama stadium, and Mia buses in the Meyzieu-Décine suburb area. Therefore, there is a great need for efficient design models for the short-term deployment of SAVs, especially to handle the first-last mile transit issue (see [START_REF] Hyland | Taxonomy of shared autonomous vehicle fleet management problems to inform future transportation mobility[END_REF] for a taxonomy of SAV fleet management problem classes accompanying this current mobility shift). A critical component of the design of an SAV service is vehicle assignment, i.e., the process of matching customer requests to vehicles. Most SAV approaches rely on rule-based assignment methods, e.g., [START_REF] Gurumurthy | Analyzing the dynamic ride-sharing potential for shared autonomous vehicle fleets using cellphone data from Orlando, Florida[END_REF]. Optimization models are rarely used here. This is because most SAV studies consider on-demand systems, i.e., dynamic systems that require solving approaches that are computationally cost-efficient and easy to implement (Narayanan et al., 2020 [30]). On the other hand, SAV models that are reservation-based, in other terms static systems that can be solved beforehand, generally rely on optimization. However, they are quite few in number, e.g., [START_REF] Levin | Congestion-aware system optimal route choice for shared autonomous vehicles[END_REF], [START_REF] Ma | Designing optimal autonomous vehicle sharing and reservation systems: A linear programming approach[END_REF].
Both [START_REF] Levin | Congestion-aware system optimal route choice for shared autonomous vehicles[END_REF] and [START_REF] Ma | Designing optimal autonomous vehicle sharing and reservation systems: A linear programming approach[END_REF] proposed linear programming models for SAV fleets. In this study, we propose using the well-known Dial-a-Ride Problem (DARP), which is derived from the classical Vehicle Routing Problem (VRP) with additional constraints to account for pickup and drop-off requests. VRP and DARP are both computationally intractable and can be formulated as mixed-integer linear programming models. According to [START_REF] Ho | A survey of dial-a-ride problems: Literature review and recent developments[END_REF], the largest instance solved to optimality for the basic DARP with time windows involves up to 8 vehicles and 96 requests, which is quite small for the practical deployment of an SAV service. Therefore, a mechanism has to be found to reduce the dimension of the routing graph. To address this research gap, we develop a new reduction method based on the graph embedding framework, specifically around the node2vec algorithmic framework (Grover and Leskovec, 2016 [14]), described later. The problem we solve is a reservation-based SAV service. A second objective is to design an SAV service as close as possible to how it would work in a real environment. First, this is done through the choice of an application territory. We choose an industrial district of Lyon city (France) called "Vallée de la Chimie" (VC). Although VC is one of the biggest chemical and petrochemical parks in Europe, it is poorly served by public transport systems. It presents a concrete instance of the last-mile transit problem. This study focuses on a classical commute problem between the city center and the suburbs, where two transportation alternatives compete: the highway and a rapid transit system. Second, we aim to generate realistic traffic demand accounting for intermodality, i.e., the use of multiple transport modes on the same trip. Traditional four-step travel models can only produce demand matrices for separate modes (McNally, 2007 [26]). We rely on a Land Use and Transport Interactions (LUTI) model-based tool called OPTIREL to account for intermodality, without using agent-based simulations. OPTIREL has previously been applied to design new subway lines in Paris, among other projects. Therefore, the problem we solve is tested on VC for the commute problem with a local rapid transit system. The remainder of the paper is organized as follows. A brief literature review on the integration of PT-SAV systems, the DARP problem, and the graph embedding framework is presented in the next section. Section 3 introduces the optimization model and the solving approach, while Section 4 describes the study case and the generation of traffic demand. In Section 5, we experimentally evaluate the benefit of the designed last-mile service, before concluding with some final thoughts. LITERATURE REVIEW In this section, we briefly review the literature of interest in this paper within the following three subsections. Integration of PT-SAV systems Several recent studies have started to explore the benefits of integrating SAV services with existing PT systems.
The most commonly adopted approach in this regard is agent-based simulation, whether it concerns the integration of PT-SAV systems, e.g., [START_REF] Shen | Integrating shared autonomous vehicle in public transportation system: A supply-side simulation of the first-mile service in Singapore[END_REF], Pinto et al. (2020) [START_REF] Pinto | Joint design of multimodal transit networks and shared autonomous mobility fleets[END_REF], or merely their intersection, e.g., [START_REF] Fagnant | The travel and environmental implications of shared autonomous vehicles, using agent-based model scenarios[END_REF]. [START_REF] Liang | Optimizing the service area and trip selection of an electric automated taxi system used for the last mile of train trips[END_REF] proposed an optimization model for an automated taxi service serving the last-mile transit of a train system. Their results are shown for a train station in Delft and include comparisons with human-driven taxis. [START_REF] Shen | Integrating shared autonomous vehicle in public transportation system: A supply-side simulation of the first-mile service in Singapore[END_REF] simulate several scenarios of integrating bus lines with SAVs for first-mile connectivity during the morning peak hours. Their results, based on the Singapore PT system, show significant savings if low-demand buses are substituted by SAVs, savings that go up to 860 passenger car unit-kilometers. Pinto et al. (2020) [START_REF] Pinto | Joint design of multimodal transit networks and shared autonomous mobility fleets[END_REF] proposed a bi-level optimization framework to jointly model the transit network parameters and the SAV mobility service. The lower-level problem is a dynamic combined mode choice-traveler assignment problem, while the upper-level is a modified transit network frequency setting problem. Their results are given for the Chicago metropolitan area. A more detailed review of SAV services overall is provided by [START_REF] Narayanan | Shared autonomous vehicle services: A comprehensive review[END_REF]. Dial-a-Ride problems Systems for designing vehicle routes and schedules for collective people transportation, such that each user request has a pickup and a drop-off location, are referred to as Dial-a-Ride systems, as in early versions of this service customers had to phone in their requests. From a modeling perspective, the DARP belongs to the family of routing problems. It is a variant of the vehicle routing problem (VRP), precisely the capacitated VRP with pickup and delivery and time windows, with a focus on transporting passengers rather than goods.
Early solving approaches include [START_REF] Psaraftis | A dynamic programming solution to the single vehicle many-to-many immediate request dial-a-ride problem[END_REF], who solved the single-vehicle DARP using dynamic programming, and [START_REF] Jaw | A heuristic algorithm for the multi-vehicle advance request dial-a-ride problem with time windows[END_REF], who developed one of the first heuristics for the multi-vehicle DARP. Among exact approaches, Branch-and-Bound and its derived algorithms are the most applied, e.g., [START_REF] Ropke | Models and branch-and-cut algorithms for pickup and delivery problems with time windows[END_REF], Gschwind & Irnich (2014) [START_REF] Gschwind | Effective handling of dynamic time windows and its application to solving the dial-a-ride problem[END_REF]. Concerning metaheuristics, those based on local search, especially those combining local search elements with other metaheuristics, currently constitute state-of-the-art solvers, e.g., [START_REF] Masmoudi | Three effective metaheuristics to solve the multidepot multi-trip heterogeneous dial-a-ride problem[END_REF]. Applications of DARPs cover real-life problems with a diverse range of backgrounds, ranging from the standard application to the transportation of elderly and disabled people and to health care services, up to the currently emerging applications in public transportation and shared mobility (Ho et al. 2018 [17], Mourad et al. 2019 [START_REF] Mourad | A survey of models and algorithms for optimizing shared mobility[END_REF]). For a more comprehensive review of the problem classes, solving algorithms, and applications of the DARP, see Cordeau & Laporte (2007) [START_REF] Cordeau | The dial-a-ride problem: models and algorithms[END_REF], [START_REF] Molenbruch | Typology and literature review for dial-a-ride problems[END_REF], and [START_REF] Ho | A survey of dial-a-ride problems: Literature review and recent developments[END_REF]. Graph embeddings Graph embedding is a powerful method to represent a graph in a low-dimensional vector space while preserving as much as possible of its topological structure. It addresses the research question of efficient graph analytics, since traditional methods suffer from the curse of dimensionality and space costs. Essential graph analytic tasks, such as graph classification, node clustering, and link prediction, become scalable to large instances. Representation learning and visualization are other research problems tackled by graph embeddings. There are three main categories of graph embeddings: probabilistic, matrix factorization-based, and deep learning-based methods (see Goyal & Ferrara, 2018 [13] for a survey on the topic). Probabilistic models learn the embedding of graphs through random walks.
The samples given by those walks can capture the neighborhood structure of nodes, connectivity, and other graph properties such as node centrality. Compared with other probabilistic models like DeepWalk (Perozzi et al., 2014 [32]) and LINE (Tang et al., 2015 [38]), the probabilistic model of node2vec (Grover & Leskovec, 2016 [14]) has been shown to perform better. This is why we chose the node2vec implementation for the graph embedding in the current study. The random walk exploration of this algorithm interpolates between breadth-first (BFS) and depth-first (DFS) searches in order to construct a more informative embedding. PROPOSED METHOD Figure 1 illustrates the flowchart of the proposed approach. It is composed of two main distinct blocks: optimization according to the DARP formulation, and the graph reduction mechanism. Given a set of stations in the studied area, the method's inputs are the origin-destination (OD) matrix of the traffic demand, i.e., the number of customers traveling between any two stations, and the travel duration and traveled distance between all pairs of stations. The earliest visit time, service time, and latest visit time for each O-D pair of the demand matrix are also input data. Routing problems in their general formulation are defined on an underlying graph where the set of vertices corresponds to the depot and the points to visit, and the set of arcs, weighted with travel costs (generally distances or travel times), represents shortest paths between stations. On this routing graph, each point has to be visited exactly once by exactly one vehicle. Since each station could be a pickup and a drop-off point with respect to all the remaining stations, and taking into account all traffic demand inputs of the OD matrix of the area, the dimension of the routing graph to be solved can quickly increase. If n is the total number of stations, the graph dimension can reach a maximum of n(n-1) points to visit, without counting the depots. We assume that the vehicle's capacity is greater than or equal to every entry of the OD demand matrix. This prevents us from duplicating the same O-D pair of stations in the routing graph to account for the total demand. The graph reduction mechanism we propose aims to detect groups of pickup and drop-off point pairs to be merged, and outputs a reduced graph that can be solved easily by a mixed-integer programming solver. Note that clustering and community detection algorithms could be used as well in the step of forming groups (see footnote 1). We use the constrained K-means clustering method proposed by Bradley et al. (2000) [START_REF] Bradley | Constrained k-means clustering[END_REF] for comparison purposes. Our method is overall a model-based heuristic built around a central component of mathematical programming. More details on the two main parts of our model are presented in the following. Mathematical formulation In this study, we build on the three-index formulation x_ij^k of the DARP by [START_REF] Cordeau | A branch-and-cut algorithm for the dial-a-ride problem[END_REF], considering a new objective function and adding new variables. The points to visit are partitioned into two sets: the set of pickup points P and the set of drop-off points D, such that each demand request of the OD demand matrix has a pickup point i ∈ P, a corresponding delivery point denoted deliv(i) ∈ D, a maximum riding time L_i, and a maximum waiting time M_i.
Each point i ∈ P ∪ D has a service time ser_i, an earliest visit time tw_i^early, a latest visit time tw_i^late, and a demand load denoted req_i, such that for i ∈ P, req_i ≥ 0 and req_deliv(i) = -req_i. K is the set of vehicles. Each vehicle k ∈ K has a capacity Q_k ∈ ℕ, GHG emissions of up to E_k per kilometer, starts from the depot denoted s_k, and ends at the depot denoted e_k. Let D_start = { s_k : k ∈ K } be the set of all start depots and D_end = { e_k : k ∈ K } the set of all end depots. For all j ∈ D_start ∪ D_end, the values ser_j = req_j = 0 are fixed. We associate different start and end depots to each vehicle to make the model more general. However, all these depots may correspond to the same geographical location, which coincides in the case of our application with the transit station of the PT network. The underlying routing graph G = (V, A) has a vertex set equal to V = P ∪ D ∪ D_start ∪ D_end. The arc set is defined as A = { (i, j) : (i ∈ D_start, j ∈ P) ∨ (i ∈ D, j ∈ D_end) ∨ (i, j ∈ P ∪ D, i ≠ j, i ≠ deliv(j)) }, and the weights of the graph are given by c_ij and d_ij, which denote respectively the travel duration and the distance from i to j. The decision variable x_ij^k indicates whether the arc (i, j) is traversed by vehicle k ∈ K or not, r_i and w_i give respectively the riding time and the waiting time for the request associated with pickup i ∈ P, t_i^k is the arrival time at point i using vehicle k, and q_i^k is the load of vehicle k when leaving point i. In addition to the total travel times of all vehicles, the degree of customer dissatisfaction in the sense of [START_REF] Psaraftis | A dynamic programming solution to the single vehicle many-to-many immediate request dial-a-ride problem[END_REF] is also minimized, along with the greenhouse gas emissions due to the transport of passengers. Plausible values of E_k are drawn from the life-cycle assessment (LCA) conducted by [START_REF] Gawron | Deep decarbonization from electrified autonomous taxi fleets: Life cycle assessment and case study in Austin, TX[END_REF]. (ω_1, ω_2, ω_3) gives the weights of the three quantities in the objective function. The model is as follows. Footnote 1: This is different from cluster-and-route methods, which first apply a clustering algorithm to the input visiting points and then generate optimal routes for each cluster, e.g., Chen et al. (2020) [START_REF] Chen | Solving the first-mile ridesharing problem using autonomous vehicles[END_REF].
Min ω_1 Σ_{k∈K} Σ_{i,j∈V} c_ij x_ij^k + ω_2 Σ_{i∈P} (r_i + w_i) + ω_3 Σ_{k∈K} Σ_{i∈V} Σ_{j∈V} E_k d_ij x_ij^k (1)
Subject to:
Σ_{i∈P} x_{s_k,i}^k = Σ_{i∈D} x_{i,e_k}^k = 1 (k ∈ K), (2)
Σ_{k∈K} Σ_{j∈V} x_ij^k = 1 (i ∈ P), (3)
Σ_{j∈V} x_ij^k - Σ_{j∈V} x_{deliv(i),j}^k = 0 (i ∈ P, k ∈ K), (4)
Σ_{j∈V} x_ji^k - Σ_{j∈V} x_ij^k = 0 (i ∈ P ∪ D, k ∈ K), (5)
t_j^k ≥ (t_i^k + ser_i + c_ij) x_ij^k (i, j ∈ V, k ∈ K), (6)
t_{deliv(i)}^k ≥ t_i^k + ser_i + c_{i,deliv(i)} (i ∈ P, k ∈ K), (7)
tw_i^early ≤ t_i^k ≤ tw_i^late (i ∈ P ∪ D, k ∈ K), (8)
q_j^k ≥ (q_i^k + req_i) x_ij^k (i, j ∈ V, k ∈ K), (9)
r_i ≥ t_{deliv(i)}^k - (t_i^k + ser_i) (i ∈ P, k ∈ K), (10)
w_i ≥ t_i^k - tw_i^early (i ∈ P, k ∈ K), (11)
c_{i,deliv(i)} ≤ r_i ≤ L_i (i ∈ P), (12)
0 ≤ w_i ≤ M_i (i ∈ P), (13)
0 ≤ q_i^k ≤ Q_k (i ∈ V, k ∈ K), (14)
x_ij^k ∈ {0, 1} (i, j ∈ V, k ∈ K). (15)
The routing constraints (2)-(5) ensure that each vehicle starts and ends at its corresponding depot points (constraint (2)), that each request is answered (constraint (3)), that the same vehicle is used for pickup and drop-off (constraint (4)), and flow conservation (constraint (5)). Constraint (6) tracks the service time, while constraint (7) ensures that pickup points are visited before their delivery points, and the time window constraints are given in (8). Constraint (9) tracks the load of vehicles, which is needed later to compute the vehicle occupancy rate (VOR) indicator. Constraints (10) and (11) relate respectively to the riding time and the waiting time of each request. Constraints (12)-(15) represent the binary and bounding restrictions on the decision variables. Note that current mixed-integer programming solvers can efficiently handle indicator constraints such as (6) and (9). Other useful constraints to add concern the battery management aspect, which includes the recharge time, the battery energy consumption, and detours to recharging stations as in Bongiovanni et al. (2019) [START_REF] Bongiovanni | The electric autonomous dial-a-ride problem[END_REF]. Recent simulations (Vosooghi et al., 2020 [40]) suggest that the best strategy for SAVs could be to deploy batteries with more charging space so as to avoid charging the battery within the rush hours of the morning and evening peaks, since otherwise alternating charging and routing could drastically increase the passenger kilometers traveled. For this reason, and for the time being, we suppose that battery charging operations take place outside the routing process, and that vehicles are fully charged before transporting customers. For the heuristic determining the number of vehicles |K| (mentioned in Figure 1), we use the following formula for vehicles with an identical capacity Q = Q_k: vehicles(m) = Σ_{(i,j)∈V², i≠j} demand(i, j) / (m × Q), (16) where m is an estimate of the average number of O-D requests each vehicle is expected to answer during the time horizon (usually m = 3 or 4).
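As an illustration, the following Python sketch shows how the core of model (1)-(15) could be written with the open-source PuLP modeling library. It is a minimal, hedged rendition, not the authors' implementation: the bilinear constraints (6) and (9) are linearized here with a standard big-M reformulation instead of the solver indicator constraints mentioned above, only a subset of the constraints is shown, and all instance data (stations, durations, demands, weights) are placeholder assumptions.

```python
import pulp

# --- Placeholder instance data (illustrative assumptions only) ---
P, D = ["p1", "p2"], ["d1", "d2"]          # pickup and drop-off points
deliv = {"p1": "d1", "p2": "d2"}           # pickup -> drop-off mapping
K = ["veh1"]                               # vehicles
s, e = {"veh1": "s1"}, {"veh1": "e1"}      # start/end depots per vehicle
V = P + D + list(s.values()) + list(e.values())
c = {(i, j): 5.0 for i in V for j in V if i != j}   # travel durations c_ij
d = {(i, j): 2.0 for i in V for j in V if i != j}   # distances d_ij
ser = {v: 1.0 for v in V}                  # service times
Q, Ek, L, M_wait = 25, 0.171, 15.0, 10.0
tw_early = {v: 0.0 for v in V}
tw_late = {v: 180.0 for v in V}
w1, w2, w3 = 1.0, 1.0, 0.1
BIG_M = 1e4                                # big-M for linearizing (6)

prob = pulp.LpProblem("DARP_sketch", pulp.LpMinimize)
x = pulp.LpVariable.dicts(
    "x", [(i, j, k) for i in V for j in V if i != j for k in K], cat="Binary")
t = pulp.LpVariable.dicts("t", [(i, k) for i in V for k in K], lowBound=0)
# Load variables; bounds encode constraint (14) directly.
q = pulp.LpVariable.dicts("q", [(i, k) for i in V for k in K], lowBound=0, upBound=Q)
r = pulp.LpVariable.dicts("r", P, lowBound=0, upBound=L)        # (12) upper part
w = pulp.LpVariable.dicts("w", P, lowBound=0, upBound=M_wait)   # (13)

# Objective (1): travel time + dissatisfaction + GHG emissions.
prob += (w1 * pulp.lpSum(c[i, j] * x[i, j, k] for i in V for j in V if i != j for k in K)
         + w2 * pulp.lpSum(r[i] + w[i] for i in P)
         + w3 * pulp.lpSum(Ek * d[i, j] * x[i, j, k] for i in V for j in V if i != j for k in K))

for k in K:
    # (2) each vehicle leaves its start depot and enters its end depot once.
    prob += pulp.lpSum(x[s[k], i, k] for i in P) == 1
    prob += pulp.lpSum(x[i, e[k], k] for i in D) == 1
for i in P:
    # (3) each request is served exactly once.
    prob += pulp.lpSum(x[i, j, k] for j in V if j != i for k in K) == 1
    for k in K:
        # (4) the same vehicle handles pickup i and its drop-off deliv(i).
        prob += (pulp.lpSum(x[i, j, k] for j in V if j != i)
                 == pulp.lpSum(x[deliv[i], j, k] for j in V if j != deliv[i]))
# (6) time propagation: big-M linearization of t_j >= (t_i + ser_i + c_ij) x_ij.
for i in V:
    for j in V:
        if i == j:
            continue
        for k in K:
            prob += t[j, k] >= t[i, k] + ser[i] + c[i, j] - BIG_M * (1 - x[i, j, k])
for i in V:
    for k in K:
        prob += t[i, k] >= tw_early[i]   # (8) time windows
        prob += t[i, k] <= tw_late[i]
for i in P:
    for k in K:
        # (10) riding time definition.
        prob += r[i] >= t[deliv[i], k] - (t[i, k] + ser[i])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("status:", pulp.LpStatus[prob.status])
```

In a faithful implementation, the remaining constraints (5), (7), (9), (11), and the lower part of (12) would be added in the same fashion, and a commercial solver such as CPLEX with native indicator constraints would replace the big-M trick.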
Graph reduction mechanism The steps of the graph reduction are listed in the algorithm below. First, node2vec is applied to the routing graph G introduced in the previous section, weighted by the travel durations (c_ij)_{(i,j)∈A}, for the following inputs: the desired dimension d of the embedding, the number of random walks to launch from each node and their length, plus other parameters related to the guided exploration strategy of the walks. Thereafter, each node of G can be represented by a feature vector of dimension d. To obtain a similarity matrix S for the nodes of G, we apply the cosine similarity, which is defined for any two real-valued vectors v_1 and v_2 as the cosine of the angle θ between them, i.e., cos(θ) = v_1 · v_2 / (‖v_1‖ × ‖v_2‖). Other similarity measures and distances could also be used at this step. This step is important because it quantifies the similarity between any two nodes of G while taking into account as much as possible the topological structure of the graph. The remaining steps concern merging the points to visit. We opt for merging pickup and drop-off (P-D) couples two by two. This gives us more flexibility in the rate of contraction of the graph G, and allows us to quickly compute the optimal visiting order within the merged P-D couples. For each two P-D couples (p_1, deliv(p_1)) and (p_2, deliv(p_2)), we compute the following average similarity using the similarity matrix S: simil(p_1, p_2) = (S(p_1, p_2) + S(deliv(p_1), deliv(p_2)) + S(p_1, deliv(p_1))) / 6 + (S(p_2, deliv(p_2)) + S(p_2, deliv(p_1)) + S(p_1, deliv(p_2))) / 6, (17) which indicates how close the two couples are. Note that we assign a higher input parameter to BFS than to DFS in the guided search of node2vec in order to favor closer nodes within the vicinity of each node of G. After ranking pairs of P-D couples by their similarity, we choose the top r × |P ∪ D| / 2 pairs, where r is the contraction rate of the graph. The following step computes the optimal order of visiting points within each of the chosen pairs. There are only six possible orders respecting the pickup and drop-off constraints among the 24 permutations in total, and they are shown in Figure 2-a. For each optimal order, we output two aggregated nodes in the new graph; i.e., if an optimal order is for instance (p_1, deliv(p_1), p_2, deliv(p_2)), the new nodes will correspond to p' = (p_1, deliv(p_1)) and deliv(p') = (p_2, deliv(p_2)). The final step updates the graph weights involving the new nodes, as shown in Figure 2-b. Distances between nodes are updated in the same manner. The graph reduction algorithm has an overall time complexity of O(|V|²), since node2vec runs in O(|V|²) and the number of pairs of P-D couples processed in the merging steps is at most C(|P ∪ D| / 2, 2) ≤ |V|² / 8. Algorithm: node2vec-based graph reduction. Inputs: the routing graph G = (V, A), the contraction rate r. 1. Apply the node2vec algorithm to G. 2. Compute the similarity matrix S of G by applying the cosine measure to the embedding of step 1. 3. Compute for each two P-D couples (p_1, deliv(p_1)) and (p_2, deliv(p_2)) the value simil(p_1, p_2). 4. Select the top r × |P ∪ D| / 2 most similar pairs of P-D couples. 5. Compute the optimal order of points to visit within each chosen pair. 6. Update the graph's costs for the arcs going to/coming from the newly merged nodes.
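The following Python sketch illustrates steps 1-4 of the reduction algorithm. It is an assumed reconstruction, not the authors' code: it relies on the networkx library and the third-party node2vec package (whose API is assumed here), and the toy graph, parameter values, and P-D couples are placeholders.

```python
from itertools import combinations

import networkx as nx
import numpy as np
from node2vec import Node2Vec  # third-party package; API assumed
from sklearn.metrics.pairwise import cosine_similarity

# --- Placeholder routing graph weighted by travel durations c_ij ---
G = nx.DiGraph()
G.add_weighted_edges_from([("p1", "d1", 4.0), ("p1", "p2", 3.0),
                           ("p2", "d2", 5.0), ("d1", "p2", 2.0),
                           ("d1", "d2", 6.0), ("d2", "d1", 6.0)])
deliv = {"p1": "d1", "p2": "d2"}   # pickup -> drop-off mapping
r = 0.5                            # contraction rate

# Step 1: node2vec embedding; q > 1 biases the walks toward local,
# BFS-like exploration, as favored in the paper.
n2v = Node2Vec(G, dimensions=30, walk_length=6, num_walks=1000,
               p=1.0, q=2.0, weight_key="weight")
model = n2v.fit(window=5, min_count=1)
nodes = list(G.nodes())
emb = np.array([model.wv[str(v)] for v in nodes])

# Step 2: cosine similarity matrix S over the embedded nodes.
S_mat = cosine_similarity(emb)
idx = {v: i for i, v in enumerate(nodes)}
S = lambda a, b: S_mat[idx[a], idx[b]]

# Step 3: average similarity of Eq. (17) for every pair of P-D couples.
def simil(p1, p2):
    d1, d2 = deliv[p1], deliv[p2]
    return (S(p1, p2) + S(d1, d2) + S(p1, d1)
            + S(p2, d2) + S(p2, d1) + S(p1, d2)) / 6.0

pairs = sorted(combinations(deliv.keys(), 2),
               key=lambda pq: simil(*pq), reverse=True)

# Step 4: keep the top r * |P ∪ D| / 2 pairs for merging.
n_keep = int(r * (2 * len(deliv)) / 2)
print("pairs selected for merging:", pairs[:n_keep])
```

Steps 5 and 6 would then enumerate the six feasible visiting orders of each selected pair, keep the cheapest one, and rewire the arc costs around the two merged nodes as in Figure 2-b.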
CASE STUDY: THE INDUSTRIAL DISTRICT OF "VALLÉE DE LA CHIMIE", LYON CITY In this section, we describe the studied territory and the process used to generate the input demand data. Territory Lyon is the second-largest urban area of France and is home to several industrial hubs. The largest of them is "Vallée de la Chimie" (VC), covering approximately 17 km². Situated about 10 km away from the city center, VC currently hosts over 1,000 companies and 6 research and development centers, mostly working in the chemical, energy, and environmental sectors, and it attracts over 20,000 jobs (Métropole de Lyon, 2020 [START_REF]Vallée de la Chimie Presentation[END_REF]). In terms of mobility infrastructure, a Rapid Transit System (RTS) links VC to the city center on a regular schedule (every 30 min during rush hours, otherwise once per hour). Only a few buses connect VC to Lyon city (three in total, two of which are drawn in the right map of Figure 3 as light red lines). The highway A7 also crosses the territory. This artery is essential for logistics transportation and gets severely congested during the morning rush hours, as the A7 is also the southern entry point to the city. Commuting to work accounts for a significant part of the mobility flows from/to VC. Given the low PT supply and the highway's high saturation, a typical last-mile transit issue arises. A number of experiments to overcome this issue are currently being conducted in this territory: Personal Rapid Transit (PRT) from the Esprit project (Esprit, 2019 [9]) and a demand-responsive transit (DRT) service from the local PT operator (TCL, 2020 [START_REF] Tcl | Transport à la demande[END_REF]). VC thus constitutes a good study example for SAV deployment. This test case is, finally, classic and generalizable to several other mobility situations all over the world. Indeed, it corresponds to the framework of a commute problem with two main alternatives: PT and the private vehicle, the latter experiencing bottlenecks. Demand generation To generate the traffic demand, we apply a Land Use and Transport Interaction (LUTI) model. LUTI models are spatial interaction models that integrate the socio-demographic and economic features of the population with the transport infrastructure and mobility features, to answer questions such as: what is the transport modal share of a middle-income household living in the suburbs and commuting to work in the city center? With LUTI models, the cause/effect relationships between land use and transport become well understood, and mobility patterns are well identified. It is noteworthy that LUTI models are extensively used for urban and transport planning (see [START_REF] Acheampong | Land use-transport interaction modeling: A review of the literature and future research directions[END_REF] for a review on the topic). We use the LUTI model called OPTIREL, which has the advantage of accounting for intermodality. OPTIREL proceeds by decomposing the area's space according to the transport mode used from/to the target territory. For instance, the first zone of the studied territory is solely accessible by walking; the second zone is larger and is accessible by bus or by the RTS system (from/to the territory); the third zone is even larger and delimits areas accessible by private vehicles combined with park-and-ride facilities. In this manner, it is possible to define mobility solutions combining several modes in the modal decomposition step.
The classical four-step travel models define zones according to an administrative or population-census-based spatial decomposition, which turns out to be a rigid assumption for combining several modes in the third step of modal share. The traffic demand considered in our application corresponds to the PT demand arriving in the territory by train during the morning peak [7h-10h]. This interval sets our time horizon. Ten stations are defined for the planned SAV service, corresponding to stations of the Esprit project (Esprit, 2019 [9]) and covering the northern part of VC. They are shown in Figure 4. The main transit station is "Gare Feyzin", which coincides with the depots of all vehicles. We can notice from our input OD demand matrix, generated by OPTIREL and shown in Figure 4, that the traffic coming from "Gare Feyzin" is the highest. Interestingly, the traffic is also high from the stations "Hector Berlioz" and "Thomas", which are the closest stations to the transit station. Finally, durations and distances between stations are generated using web mapping services. Indicators Two main performance measures are used: the vehicle kilometers traveled (VKT) and the vehicle occupancy rate (VOR). Two additional measures are derived from them. VKT indicates the total distance traveled by all vehicles of K to satisfy the demand, and is equal to VKT(x) = Σ_{k∈K} Σ_{(i,j)∈V², i≠j} d_ij x_ij^k. We also use a measure correlated with VKT, namely the total GHG emissions generated over all trips, to provide an order of magnitude of SAV emissions: GHG(x) = E_k VKT(x). To get a glimpse of the environmental GHG gain of using SAVs, which has previously been established in the literature, we report the gap GHG_gain(x) = (GHG_pv(demand) - GHG(x)) / GHG_pv(demand), where GHG_pv(demand) represents the GHG emissions estimated for the input demand under the personal vehicle mode. The third measure, VOR, is an average of the load rate of all vehicles at each step of the routing, and is equal to VOR(x, q) = (1 / |K|) (1 / |P ∪ D|) Σ_{k∈K} Σ_{i,j∈P∪D, i≠j} x_ji^k q_i^k / Q. VOR can also be seen as an index of comfort of the mobility service. The last measure, the zero occupancy rate (ZOR), is equal to the percentage of trips between any two stations with an occupancy rate equal to zero, and is given by ZOR(x, q) = (1 / |K|) Σ_{k∈K} Σ_{i,j∈P∪D, i≠j, q_i^k = 0} x_ji^k / (|P ∪ D| - 1). All measures are computed on the original non-reduced problem. Baseline scenario Accounting for the total demand of VC leads to solving a problem with 54 points to visit (see footnote 2) plus the dummy start and end vehicle depots, which is a computationally hard problem for |K| > 2 vehicles. The graph reduction mechanism makes this optimization more manageable. The CPLEX solver could attain a solution for the baseline scenario within a relative MIP gap of 23% in 0.28, 277.17, and 3609.62 seconds when the graph contraction rate is respectively 50%, 45%, and 40%. In the rest of the study, we set the contraction rate of the routing graph to r = 50%. Figure 5 illustrates the SAV routes obtained for the baseline scenario. This scenario has a total traveled distance of VKT = 65.46 km (respectively 74.44 km) for an average occupancy rate of VOR = 42.1% (resp. 31.6%), and total emissions of GHG = 11.12 kg CO₂-eq (resp. 12.73 kg CO₂-eq) for the node2vec-based (resp. constrained K-means-based) node reduction.
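To make the indicators defined above concrete, the following minimal Python sketch computes VKT, GHG, and the GHG gain from a hypothetical solution. The data structures (a set of selected arcs per vehicle and per-arc distances) and all numerical values are illustrative assumptions, not the paper's data; only the emission rates are taken from the values quoted in the text.

```python
# Hypothetical solution: arcs (i, j) traversed by each vehicle k.
routes = {"veh1": [("depot", "p1"), ("p1", "d1"), ("d1", "depot")],
          "veh2": [("depot", "p2"), ("p2", "d2"), ("d2", "depot")]}
dist = {("depot", "p1"): 2.0, ("p1", "d1"): 4.5, ("d1", "depot"): 3.0,
        ("depot", "p2"): 1.5, ("p2", "d2"): 5.0, ("d2", "depot"): 2.5}
demand = {("p1", "d1"): 12, ("p2", "d2"): 8}       # passengers per O-D pair
od_dist = {("p1", "d1"): 4.5, ("p2", "d2"): 5.0}   # direct O-D distances in km

E_K = 0.171      # kg CO2-eq per km for the SAV fleet (value quoted in the text)
PV_RATE = 0.175  # kg CO2-eq per km for a private vehicle (Lyon estimate)
PV_OCC = 1.33    # average persons per private car (Lyon estimate)

def vkt(routes, dist):
    """Total vehicle kilometers traveled over all vehicles."""
    return sum(dist[arc] for arcs in routes.values() for arc in arcs)

def ghg(routes, dist, rate=E_K):
    """Total SAV emissions: GHG(x) = E_k * VKT(x)."""
    return rate * vkt(routes, dist)

def ghg_pv(demand, od_dist):
    """GHG estimate if every passenger used a private vehicle instead."""
    return sum(demand[od] * od_dist[od] * PV_RATE / PV_OCC for od in demand)

g_sav, g_pv = ghg(routes, dist), ghg_pv(demand, od_dist)
print(f"VKT = {vkt(routes, dist):.2f} km, GHG = {g_sav:.2f} kg CO2-eq")
print(f"GHG gain vs private vehicle = {100 * (g_pv - g_sav) / g_pv:.1f}%")
```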
By drawing on the estimates of vehicle occupancy rate and GHG emissions of private vehicles computed for the Lyon urban area (François et al., 2017 [START_REF] François | Environmental assessment of urban mobility: combining life cycle assessment with land-use and transport interaction modelling-application to Lyon (France)[END_REF]), we obtain: GHG_pv(demand) = Σ_{(i,j)∈V², i≠j} demand(i, j) × d_ij × (175 g CO₂-eq/km) / (1.33 persons/car) = 43.8 kg CO₂-eq. Therefore, compared to the private vehicle mode, the SAV service leads to a 74.61% reduction of GHG emissions for the territory of VC when relying on the node2vec reduction. Sensitivity analysis Figure 6 shows the sensitivity analysis performed on the baseline scenario by varying the fleet size |K|, the maximum riding time L, and the vehicle capacity Q. For a number of vehicles equal to 1, 5, and 10, the total traveled distance VKT slowly decreases to resp. 67.70, 64.53, and 59.88 km for the node2vec reduction. K-means clustering generates tours with the same decreasing trend but with higher VKT, an indication of lower performance in finding the best pairs of P-D couples to merge. While |K| is multiplied by 10, the VKT of node2vec is divided by 1.13. Having more vehicles actually allows more flexibility for the routing optimization to produce efficient assignments of vehicles to transport requests. VKT decreases as a function of Q, which is expected since larger capacities imply smaller tours. Node2vec is again much more efficient than the K-means reduction in this regard. VKT is overall stable under varying L. As some values of L and Q may constrain the number of possible arrangements of Figure 2-a, an oscillatory effect when varying L and Q can be seen for VKT and for the VOR index as well. The GHG gain plots, on the other hand, show the substantial cut in GHG emissions if SAVs are deployed instead of private vehicles, especially for the node2vec method. The reduction corresponds to resp. 73.57% and 76.62% of GHG emissions when the fleet size is equal to resp. 1 and 10. Concerning the vehicle occupancy rate, we notice that this rate is not affected much by the fleet size |K| or the riding time L, and oscillates around a mean value of 35%. The vehicle capacity Q, on the other hand, has some influence on VOR. VOR decreases from 38.5% (resp. 43.3%) to 24.9% (resp. 22.7%) when the capacity increases from 20 to 50 for the node2vec (resp. K-means) reduction. This particular point should be carefully considered by the designers of SAV services to achieve a fair trade-off between users' comfort and the cost of additional capacity for automated vehicles and buses. As for empty trips, ZOR is solely impacted by the fleet size |K|. This index decreases as |K| increases; otherwise it keeps a roughly constant value around 10%. CONCLUSION In this study, we propose a model-based heuristic design for a shared autonomous vehicle (SAV) service by combining the formulation of the Dial-a-Ride Problem (DARP) with a graph reduction mechanism. This latter procedure is based on the node2vec graph embedding framework and allows us to merge similar couples of pickup and drop-off points, hence solving instances with a large number of O-D requests and a large fleet size. A case study analysis is provided for an industrial district of the Lyon urban area (France), wherein the traffic demand is generated by a LUTI model.
In the paper, we address a classical commute problem between the city center and suburbs where two transportation alternatives compete: the highway and a rapid transit system (RTS). The SAV service is deployed as a last-mile transit for the RTS. Our results suggest that node2vec is more efficient for node reduction than the constrained K-means (in terms of vehicle traveled distance); in addition, a reduction of around 75% in GHG emissions is gained by the SAV service when compared to the private vehicle choice. The limitations of the study mainly concern the maximum reduction rate, which is currently 50% since the reduction is done by merging couples of pickup and drop-off pairs, and the specificity of our application case. One natural improvement is to extend the reduction mechanism to three pickup and drop-off pairs. We also think that integrating the current approach with the LUTI model would generate precise partitions of the mode share for the studied territory and population, which would in turn help to set the adequate parameters of the SAV service, i.e., load capacity, fleet size, and maximum riding time, in a cause-and-effect loop. Table 1. Numerical values of the sensitivity analysis (plotted in Figure 6). Figure 1. Flowchart showing the method's main modules. Figure 2. Possible arrangements for a pair of P-D couples in (a), and the updated travel costs involving merged P-D pairs in (b). Figure 3. The placement of VC within the Lyon urban area in the left map, and the transport network connecting VC in the right map. Source: www.openstreetmap.org/. Figure 4. The locations of the ten stations considered in the study (left map), and the input OD demand matrix (right plot), which is colored according to the demand intensity: green = low, yellow and orange = middle, and red = high. Footnote 2: The number of points to visit for the full OD demand matrix of Figure 4 is 180 points. We have only 54 points in our input data, since some O-D couples have a null demand. Figure 5. SAV routes for the baseline scenario with the graph embedding reduction (first row) and the K-means based reduction (second row). Figure 6. Sensitivity analysis on the baseline scenario of the fleet size (first column), the clients' maximum riding time (second column), and the vehicles' capacity (third column) for the indicators VKT (first row), VOR (second row), ZOR (third row), and GHG gain (fourth row). ACKNOWLEDGEMENTS This work is supported by "Lyon Urban School" (ANR-17-CONV-0004) of Université de Lyon, within the program "Investissements d'Avenir" (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). We would like to warmly thank Guy Bourgeois for making the data of OPTIREL available. RESULTS In this section, we test the proposed method on the study case and show its usefulness. Parameters setting For solving the DARP, the ILOG CPLEX 12.10 solver is used, with the termination criterion set to a relative gap of 1%. Although the routing graph is reduced, we have noticed that the problem (1)-(15) takes an extended time to find a first solution, which is not the case for the problem (1)-(15) excluding the precedence constraint (7).
Thus, we solve the latter problem in a first step, rearrange the points to visit for each vehicle by solving the single-vehicle DARP problem (1)-(15), and then use this solution as a starting point for our main optimization. Adding a good-quality initial solution proved beneficial in our CPLEX solving process. In the absence of precise data about the customers' time windows, we set the constraints (8) to two time windows, each of one-hour duration, and uniformly at random partition the P and D points over the two time windows, with the condition that each drop-off point has the same time window as its pickup point. We do not consider customer waiting times w_i with this configuration of time windows. A fixed service time ser_i = 1 minute is associated with each point to visit i ∈ P ∪ D. In the baseline scenario, we set the capacity of vehicles to Q = 25 > 20 = max_{(i,j)∈V², i≠j} demand(i, j), the number of vehicles to vehicles(m=3) = 4, the maximum riding time to L = 15 minutes, and the weights of the objective function to (ω_1, ω_2, ω_3) = (1, 1, 0.1). The GHG emission rate is chosen to be E_k = 0.171 kg CO₂-eq/km, which corresponds to the short-range electric autonomous taxi scenarios in [START_REF] Gawron | Deep decarbonization from electrified autonomous taxi fleets: Life cycle assessment and case study in Austin, TX[END_REF]. For the input parameters of node2vec, the dimension size is set to 30, while the walk length is equal to 6 for 1000 random walks launched from each node. Since node2vec is a stochastic method, we launch it multiple times, exactly 50 times for each configuration of Q and L, then choose the instance with the minimal inside travel time. This latter value is given by the sum of travel times inside the reduced nodes. For instance, the inside travel time of the graph of Figure 2-b is equal to c_ij + c_kl. The calculation is done for each configuration of Q and L since the possible arrangements of P-D couples can be fewer than six (Figure 2-a). All experiments are performed on an Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz machine with 32 GB of RAM. Constrained K-means The constrained K-means clustering proposed by [START_REF] Bradley | Constrained k-means clustering[END_REF] extends the traditional algorithm by ensuring that each cluster contains at least a minimum number of elements. A minimum-cost flow optimization problem is used to solve the new formulation. Similarly, it is possible to add a maximum size constraint to clusters. In order to have a fair comparison with node2vec, we associate with each pickup node a value similar to the average of expression (17): cost_avg(p) = Σ_{q∈P} (c_pq + c_{deliv(p),deliv(q)} + c_{p,deliv(p)} + c_{q,deliv(q)} + c_{p,deliv(q)} + c_{q,deliv(p)}) / 12 + Σ_{q∈P} (c_qp + c_{deliv(q),deliv(p)} + c_{deliv(p),p} + c_{deliv(q),q} + c_{deliv(q),p} + c_{deliv(p),q}) / 12. This expression has twelve terms for each q ∈ P in order to account for the asymmetry of the travel times c_ij. Clusters are generated with a maximum size of two and a minimum size of one, keeping the clustering with the best inside travel time among 50 runs.
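As an illustration of this baseline, the sketch below shows how such size-constrained clusters could be produced in Python. It is a hedged example: it uses the third-party k-means-constrained package (whose API is assumed here) rather than the authors' implementation, and the feature vectors, built from hypothetical cost_avg values, are placeholders.

```python
import numpy as np
from k_means_constrained import KMeansConstrained  # third-party package; API assumed

# Hypothetical cost_avg value for each pickup node (placeholder data).
pickups = ["p1", "p2", "p3", "p4"]
cost_avg = np.array([[3.2], [3.4], [7.9], [8.1]])   # one feature per pickup

# Pair pickups two by two: |P|/2 clusters of size one or two.
clf = KMeansConstrained(n_clusters=len(pickups) // 2,
                        size_min=1, size_max=2, random_state=0)
labels = clf.fit_predict(cost_avg)

for cluster in range(len(pickups) // 2):
    members = [p for p, lab in zip(pickups, labels) if lab == cluster]
    print(f"cluster {cluster}: merge P-D couples of {members}")
```

Each resulting cluster of two pickups designates a pair of P-D couples to merge, playing the same role as the top pairs selected by the node2vec-based reduction.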
04116371
en
[ "info" ]
2024/03/04 16:41:26
2023
https://hal.science/hal-04116371/file/_IS2023__On_the__In_Efficiency_of_Acoustic_Features_Extractors_for_Self_Supervised_Speech_Representation_Learning-2.pdf
Titouan Parcollet email: [email protected] Shucong Zhang Alberto Gil C P Ramos Rogier Van Dalen Sourav Bhattacharya

On the (In)Efficiency of Acoustic Feature Extractors for Self-Supervised Speech Representation Learning

Keywords:

Speech representations learned with self-supervised learning (SSL) have the potential to significantly improve the performance of a number of audio applications, especially when the availability of labeled data from the deployment domain is limited. Despite their successes, SSL training methods are compute- and memory-heavy, and require large investments in computing infrastructure, thus putting them out of the reach of most institutions. Therefore, building efficient model architectures is essential for the wide-scale adoption of SSL in speech technologies. CNN-based Acoustic Feature Extractors (AFE), which are widely used as encoders of acoustic waveforms, remain one of the main efficiency bottlenecks. This work proposes replacing CNN-based AFEs with more efficient ones and demonstrates that SSL pre-training time and memory consumption can be reduced by a factor of two to three over existing methods while preserving performance in speech-, command-, and speaker-recognition tasks.

Introduction
Self-Supervised Learning (SSL) of deep learning systems leverages a vast amount of unlabeled data to deliver groundbreaking performance across a wide range of domains, including computer vision, robotics [START_REF] Jing | Self-supervised visual feature learning with deep neural networks: A survey[END_REF][START_REF] Sinha | S4rl: Surprisingly simple self-supervision for offline reinforcement learning in robotics[END_REF], and audio, speech, and language processing [START_REF] Mohamed | Self-supervised speech representation learning: A review[END_REF][START_REF] Liu | Self-supervised learning: Generative or contrastive[END_REF]. Most sub-fields of speech processing, in particular, increasingly use large pre-trained SSL models to reach previously unseen performance.
For instance, SSL has led to state-of-the-art (SOTA) performance for Automatic Speech Recognition (ASR) [START_REF] Hsu | Hubert: Self-supervised speech representation learning by masked prediction of hidden units[END_REF][START_REF] Chen | Wavlm: Large-scale selfsupervised pre-training for full stack speech processing[END_REF], automatic emotion recognition [START_REF] Evain | Task agnostic and task specific self-supervised learning from speech with lebenchmark[END_REF][START_REF] Yang | SUPERB: Speech Processing Universal PERformance Benchmark[END_REF], automatic speaker verification [START_REF] Chen | Large-scale self-supervised speech representation learning for automatic speaker verification[END_REF][START_REF] Yang | SUPERB: Speech Processing Universal PERformance Benchmark[END_REF], Automatic Speech Translation (AST) [START_REF] Nguyen | Investigating self-supervised pre-training for end-to-end speech translation[END_REF][START_REF] Babu | XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale[END_REF], Spoken Language Understanding (SLU) [START_REF] Yang | SUPERB: Speech Processing Universal PERformance Benchmark[END_REF][START_REF] Laperriere | On the use of semantically-aligned speech representations for spoken language understanding[END_REF], speech enhancement [START_REF] Huang | Investigating self-supervised learning for speech enhancement and separation[END_REF][START_REF] Wang | Selfsupervised learning for speech enhancement[END_REF], and speech separation [START_REF] Huang | Investigating self-supervised learning for speech enhancement and separation[END_REF][START_REF] Tsai | Superb-sg: Enhanced speech processing universal performance benchmark for semantic and generative capabilities[END_REF]. Alongside remarkable performance in English, SSL-trained models are reducing the gap between low- and high-resource languages [START_REF] Babu | XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale[END_REF][START_REF] Nguyen | Investigating self-supervised pre-training for end-to-end speech translation[END_REF], increasing the accessibility of cutting-edge technologies across many languages. However, SSL pre-training requires significant computing infrastructure, limiting participation to a handful of industrial actors. For example, pre-training a SOTA SSL architecture requires extremely large datasets, e.g., tens or hundreds of thousands of hours of speech. It also requires architectures with billions of neural network parameters to reach an optimal level of performance, so that a single pre-training run can require 64 or 128 high-end GPUs and take a few weeks to complete [START_REF] Evain | Task agnostic and task specific self-supervised learning from speech with lebenchmark[END_REF]. This leads to high environmental costs and can easily run into hundreds of thousands of US dollars. The high cost alone acts as a barrier to accessibility in the development of novel SSL techniques. Current SSL leaderboards, such as the SUPERB benchmark, are occupied by models originating from only two companies [START_REF] Tsai | Superb-sg: Enhanced speech processing universal performance benchmark for semantic and generative capabilities[END_REF]. It is therefore vital to make SSL training more efficient.
Three culprits for the high cost can be identified [START_REF] Gao | Federated Self-supervised Speech Representations: Are We There Yet?[END_REF]: (i) the Acoustic Feature Extractor (AFE), which transforms the raw waveform into a latent representation; (ii) the "context encoder", which is often a large Transformer; and (iii) the SSL training objective. The last aspect, the objective, has seen efficiency improvements such as Data2Vec [START_REF] Baevski | Data2vec: A general framework for self-supervised learning in speech, vision and language[END_REF] and HuBERT [START_REF] Hsu | Hubert: Self-supervised speech representation learning by masked prediction of hidden units[END_REF]. Improvements to the first two parts, on the other hand, are hardly explored. SOTA SSL-trained models rely on the same CNN-based AFE combined with a large vanilla Transformer. Careful engineering of these two parts could lead to significant efficiency gains [START_REF] Vyas | On-demand compute reduction with stochastic wav2vec 2.0[END_REF] and allow pre-training on mid-tier GPUs (e.g., Nvidia Ti 80/90 families) [START_REF] Gao | Federated Self-supervised Speech Representations: Are We There Yet?[END_REF]. There exists some scattered work on improving the efficiency of the AFE. One line of research, which this paper continues, is to exploit decades of research in the signal-processing domain aiming at extracting the best representation of the speech signal. [START_REF] Lin | Melhubert: A simplified hubert on mel spectrogram[END_REF] proposed to train a small HuBERT model with Mel filterbanks instead of the standard CNN-based AFE. However, that paper focuses on downstream performance and the streaming capabilities of the model, without mentioning pre-training efficiency. [START_REF] Wu | Performance-efficiency trade-offs in unsupervised pre-training for speech recognition[END_REF], on the other hand, introduced a new CNN-based AFE inspired by the wav2vec 2.0 AFE [START_REF] Baevski | wav2vec 2.0: A framework for self-supervised learning of speech representations[END_REF], but optimized to increase throughput. This demonstrates that a careful design combined with a time decimation of the input sequence can lead to improvements both in training time and in downstream performance. However, there exists no systematic study of the gains in efficiency and accessibility obtained from choosing different AFEs for speech SSL pre-training. This paper has three main contributions:
1. Introducing new efficient AFEs for training with speech SSL.
2. A systematic evaluation of different efficiency metrics over eight AFEs on a carefully crafted contrastive SSL pre-training task with mid-tier GPUs (e.g., RTX 3090).
3. An examination of the effects on downstream performance for Automatic Speech Recognition (ASR), Keyword Spotting (KS), and Automatic Speaker Verification (ASV).
The conducted experiments, implemented with methods from the widely adopted SpeechBrain toolkit [START_REF] Ravanelli | Speechbrain: A general-purpose speech toolkit[END_REF] to facilitate reproducibility, show that a few AFEs introduced in this article make pre-training faster, e.g., reducing it from 7 days for wav2vec 2.0 to 1.8 days (Section 3.1), as well as more memory-efficient, e.g., fitting in a 24 GB instead of an 80 GB GPU (Section 3.2), with no degradation in downstream performance. Hence, greater accessibility to speech SSL research can be reached without sacrificing task-specific results or adding implementation complexity.
Acoustic Feature Extractors
In this section, we consider AFEs that can replace the standard and inefficient CNN AFE [START_REF] Baevski | wav2vec 2.0: A framework for self-supervised learning of speech representations[END_REF] in wav2vec 2.0 to improve the efficiency of pre-training. In Section 2.1, we introduce the AFE of standard wav2vec 2.0 and a more efficient variant, SEW. In Section 2.2, we consider AFEs based on Mel filterbanks and learnable filters, with fewer trainable parameters.

Existing AFEs used in self-supervised learning
The majority of large-scale speech SSL models rely on a one-dimensional, end-to-end convolutional network first introduced in wav2vec 2.0 [START_REF] Baevski | wav2vec 2.0: A framework for self-supervised learning of speech representations[END_REF].

Wav2vec 2.0. The original wav2vec 2.0 AFE, also used in WavLM [START_REF] Chen | Wavlm: Large-scale selfsupervised pre-training for full stack speech processing[END_REF] and HuBERT [START_REF] Hsu | Hubert: Self-supervised speech representation learning by masked prediction of hidden units[END_REF], is composed of a number of 1D convolutions operating directly on the raw audio waveform. The direct connection to the high-dimensional waveform has been identified as the main reason for the high VRAM consumption of this AFE [START_REF] Gao | Match to win: Analysing sequences lengths for efficient self-supervised learning in speech and audio[END_REF][START_REF] Wu | Performance-efficiency trade-offs in unsupervised pre-training for speech recognition[END_REF]. We use the original wav2vec 2.0 frontend [START_REF] Baevski | wav2vec 2.0: A framework for self-supervised learning of speech representations[END_REF], which has seven 1D-convolutional layers with kernel sizes [10, 3, 3, 3, 3, 2, 2] and strides [5, 2, 2, 2, 2, 2, 2]. This is equivalent to a 25 ms window with a hop length of 20 ms, resulting in an output frequency of 49 Hz (i.e., 49 vectors emitted per second of speech).
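For illustration, the following is a minimal PyTorch sketch of such a convolutional frontend. It is not the released implementation (which additionally uses group/layer normalization and dropout): the class name is ours, and only the kernel/stride/width pattern above is reproduced.

```python
import torch
import torch.nn as nn

class Wav2Vec2AFE(nn.Module):
    """Seven 1-D conv layers over the raw 16 kHz waveform.
    Kernels (10,3,3,3,3,2,2) and strides (5,2,2,2,2,2,2) produce one
    512-dim vector every 320 samples (20 ms hop, ~25 ms window)."""
    def __init__(self, dim=512):
        super().__init__()
        kernels = [10, 3, 3, 3, 3, 2, 2]
        strides = [5, 2, 2, 2, 2, 2, 2]
        layers, in_ch = [], 1
        for k, s in zip(kernels, strides):
            layers += [nn.Conv1d(in_ch, dim, k, stride=s, bias=False),
                       nn.GELU()]
            in_ch = dim
        self.net = nn.Sequential(*layers)

    def forward(self, wav):  # wav: (batch, samples)
        return self.net(wav.unsqueeze(1)).transpose(1, 2)  # (batch, frames, dim)

# One second of audio yields 49 frames, matching the 49 Hz output rate.
frames = Wav2Vec2AFE()(torch.randn(2, 16000))
print(frames.shape)  # torch.Size([2, 49, 512])
```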
The "Squeezed and Efficient Wav2vec" (SEW) [START_REF] Wu | Performance-efficiency trade-offs in unsupervised pre-training for speech recognition[END_REF] AFE shares a similar network architecture as the wav2vec 2.0 AFE, with almost identical receptive field, e.g., window of 24.8 ms and the hop length of 20 ms. Compared to the wav2vec 2.0 AFE, the SEW AFE reduces the computational complexity by cutting the number of channels in the lower layers as well as the kernel size. The AFE contains seven 1D-convolutional layers with kernel sizes [START_REF] Evain | Task agnostic and task specific self-supervised learning from speech with lebenchmark[END_REF][START_REF] Mohamed | Self-supervised speech representation learning: A review[END_REF][START_REF] Mohamed | Self-supervised speech representation learning: A review[END_REF][START_REF] Mohamed | Self-supervised speech representation learning: A review[END_REF][START_REF] Mohamed | Self-supervised speech representation learning: A review[END_REF][START_REF] Sinha | S4rl: Surprisingly simple self-supervision for offline reinforcement learning in robotics[END_REF][START_REF] Sinha | S4rl: Surprisingly simple self-supervision for offline reinforcement learning in robotics[END_REF] and stride [5, 2, 2, 2, 2, 2, 2] respectively. The AFE doubles the number of channels whenever the sequence length is reduced by a factor of four generating channels size of [64, 128, 128, 256, 256, 512, 512]. AFEs inspired by Speech Processing Representing the speech signal in an efficient way has been an active field of research well before the emergence of deep learning, offering plenty of techniques [START_REF] Lawson | Survey and evaluation of acoustic features for speaker recognition[END_REF][START_REF] Abdel-Hamid | Applying convolutional neural networks concepts to hybrid nn-hmm model for speech recognition[END_REF]. We would like to exploit this knowledge to build a more efficient AFE for training with SSL. Mel Filterbanks. Mel filterbanks, whose outputs we will call "FBanks", are a well-known and extensively used transformation of the waveform to the frequency domain inspired by human hearing [START_REF] Kopparapu | Choice of mel filter bank in computing mfcc of a resampled speech[END_REF]. Even large pre-trained models, such as Whisper [START_REF] Radford | Robust speech recognition via large-scale weak supervision[END_REF], rely on FBank as an input representation. FBank are extremely quick and cheap to compute enabling a good compression of any speech waveform at a very low cost. A standard and common parametrization of FBank gives a window of size 25 ms and a hop length of 10 ms with a per-frame size of 40 or 80 bins. Lin et al. [START_REF] Lin | Melhubert: A simplified hubert on mel spectrogram[END_REF] have first proposed to pre-train a small HuBERT model using FBanks only. However, the authors do not explore the efficiency and downstream superiority of their approach. In this paper, 80 FBanks are extracted every 20 ms and with a window size of 25 ms to mimic the time resolution of the wav2vec 2.0 AFE, resulting in an output frequency of 50 Hz. FastAudio. This AFE is a learnable FBank acoustic extractor. Speech filters are trained alongside the rest of the neural network to optimise the considered objective [START_REF] Fu | Fastaudio: A learnable audio front-end for spoof speech detection[END_REF]. 
FastAudio. This AFE is a learnable FBank acoustic extractor: speech filters are trained alongside the rest of the neural network to optimize the considered objective [START_REF] Fu | Fastaudio: A learnable audio front-end for spoof speech detection[END_REF]. In FastAudio, triangular FBanks are initialized following the standard mel scale, before their central frequencies and frequency bands are adapted during training. FastAudio is not faster than a standard FBank extraction at training time; it only becomes equivalent at inference. However, FastAudio has never been trained with SSL and, unlike the original FastAudio implementation, we propose to only optimize the central frequencies alongside the rest of the neural network. Indeed, adding the bands as learnable parameters would make the FastAudio filters collapse to a single value when combined with the quantization process of wav2vec 2.0 pre-training. Fixing the bands forces FastAudio to keep filtering the signal properly. The output frequency is equivalent to that of FBanks.

Mel Filterbanks and CNN. FBanks have been combined for years with 1D or 2D CNNs to either enhance their feature representation or reduce their time resolution. For instance, most SOTA ASR systems using Transformers, Transducers, or recurrent architectures rely on this type of AFE. To the best of our knowledge, these AFEs have never been properly investigated in the context of speech SSL, and this paper proposes two different combinations of FBanks and CNNs following common approaches from the speech-processing literature. First, "FBank-CNN1d" aggregates a standard mel-filterbank extraction with a 25 ms window and a hop length of 10 ms with a two-layered one-dimensional CNN made of 512 filters with kernel sizes [3, 3] and strides [2, 1] respectively. Layer normalization and GELU activations are applied between the layers. The resulting output frequency is 50 Hz. Second, "FBank-CNN2d" is inspired by SOTA Transformer architectures originating from well-known toolkits. Hence, it combines the same standard mel-filterbank parametrization with a two-layered two-dimensional CNN made of (128, 64) filters with kernel sizes [3, 3] and strides [2, 2] respectively. The output frequency is therefore halved to 25 Hz, while the feature dimension is increased from 512 for "FBank-CNN1d" to 1280. Layer normalization and Leaky ReLU activations are applied between the layers.
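A minimal sketch of how an "FBank-CNN2d"-style AFE could be assembled is shown below; it is our illustrative reading of the description above (padding choices are assumptions and the layer normalizations are omitted), kept only to make the 25 Hz / 1280-dim output concrete.

```python
import torch
import torch.nn as nn

class FBankCNN2d(nn.Module):
    """Two 2-D conv layers over 80-bin FBanks (kernel 3, stride 2 twice):
    the frame rate drops from 100 Hz to 25 Hz and the 64 x 20
    channel/frequency grid is flattened into 1280-dim frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 128, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(),
            nn.Conv2d(128, 64, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(),
        )

    def forward(self, fbanks):             # (batch, 80, time) at 100 Hz
        x = self.net(fbanks.unsqueeze(1))  # (batch, 64, 20, time/4)
        b, c, f, t = x.shape
        return x.permute(0, 3, 1, 2).reshape(b, t, c * f)  # (batch, frames, 1280)

out = FBankCNN2d()(torch.randn(2, 80, 100))  # 1 s of 100 Hz FBanks
print(out.shape)  # torch.Size([2, 25, 1280])
```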
SincNet and Leaf. Following the success of Mel filterbanks and CNNs, Leaf [START_REF] Zeghidour | Leaf: A learnable frontend for audio classification[END_REF] and SincNet [START_REF] Parcollet | E2e-sincnet: Toward fully end-to-end speech recognition[END_REF] merge those two techniques into a single paradigm. Here, the mel-filterbank computation is replaced with a one-dimensional convolutional layer parametrized with common signal-processing filters, e.g., triangular filters for SincNet and Gabor filters for Leaf. The parameters of those filters are then optimized via backpropagation on the task of interest. Other convolutional layers are then added to the filtering layer to complete the AFE. This results in layers with lower computational and memory costs than standard CNN layers, as they require fewer parameters. SincNet and Leaf AFEs have never been applied to speech SSL before. Based on previous experiments for end-to-end ASR [START_REF] Parcollet | E2e-sincnet: Toward fully end-to-end speech recognition[END_REF], we parametrize the SincNet layer with filters of size 129 and a stride of 5 samples. The number of filters is reduced from 512 in [START_REF] Parcollet | E2e-sincnet: Toward fully end-to-end speech recognition[END_REF] to 128 to further reduce the complexity of the CNN. The convolutional frontend is composed of five one-dimensional layers of [128, 128, 256, 256, 512] filters with kernel sizes [3, 3, 3, 3, 3] and strides [2, 2, 2, 2, 2] respectively. The resulting output frequency is equal to 50 Hz. Lastly, Leaf follows its original implementation [START_REF] Zeghidour | Leaf: A learnable frontend for audio classification[END_REF], adapted to the 20 ms hop length of wav2vec 2.0 to obtain an output frequency of 50 Hz.

Experiments
In this section we present the performance of each AFE following a two-step process. First, efficiency is benchmarked to offer empirical insights on potential training speed and memory gains (see §3.1). Then, downstream performance is evaluated, offering a complete view of the advantages of each solution, following three tasks from the SUPERB benchmark [START_REF] Tsai | Superb-sg: Enhanced speech processing universal performance benchmark for semantic and generative capabilities[END_REF]: automatic speech recognition (ASR), automatic speaker verification (ASV), and keyword spotting (KS) (see §3.2).

Datasets. The Librispeech dataset is used to evaluate both efficiency and downstream performance. For the former, two different sets are created by randomly selecting 1,000 sentences from the 100-hour clean set and then cropping them to either 5 s or 15 s. For full SSL pre-training, however, the full Librispeech dataset of 960 hours is considered, with an upper duration limit of 30 s to maximise downstream performance.

Practical considerations. Due to the high pre-training cost associated with contrastive learning, no architecture search has been conducted. We finalized the learnable AFE architectures by running a few pre-training epochs with a small set of architectural hyperparameters taken from the literature. In all experiments, the AFEs are the only difference between models; the rest of the architecture strictly follows the original wav2vec 2.0 BASE model [START_REF] Baevski | wav2vec 2.0: A framework for self-supervised learning of speech representations[END_REF], as described in the SpeechBrain Librispeech recipe (commit 71c1490). Efficiency experiments run on an isolated RTX 3090 for precise measurements, while pre-training uses four shared Tesla A100s.

(In)Efficiency Analysis
The overall pre-training efficiency of the eight AFEs is evaluated for the first time on a carefully crafted benchmark to enable an extensive comparison of training costs.
Metrics and evaluation. Each AFE is pre-trained for a single epoch over the two above-described datasets. A warm-up of 5 backward steps is used to avoid the cuDNN optimization overhead. The mini-batch size is fixed to fit the memory budget. AFEs are compared over standard efficiency metrics including the average training time necessary to process the 1,000 sentences, the average forward and backward propagation times, the peak VRAM consumption, and the time necessary to process one second of speech including forward and backward computations. The latter metric expresses the throughput of the neural network. Measurements are obtained with full-precision operations on a single RTX 3090 24 GB. Note, however, that our SSL AFEs can exhibit even smaller GPU requirements via low-precision arithmetic, e.g., half-precision, or lower-precision optimizers, e.g., 8-bit [START_REF] Dettmers | 8-bit optimizers via block-wise quantization[END_REF], among others.

Leaf is the only AFE exhibiting a higher processing time for all its components compared to the baseline. This is due to the lack of cuDNN support in its implementation. Hence, one may expect a roughly halved training time in real pre-training conditions for the candidate AFEs, excluding Leaf, compared to the baseline. Memory- and throughput-wise, FBank-CNN2d clearly outperforms the other AFEs by only consuming 7 GB of memory against 18 GB and 13 GB for wav2vec 2.0 and FBank-CNN1d respectively on 15-second-long utterances. The latter behavior is expected due to the lower time resolution of the model. This lowered time resolution also enables FBank-CNN2d to exhibit the highest gain in throughput when dealing with longer sentences, as the processing time per second of speech decreases from 2.2 ms to 1.4 ms for 5 s and 15 s sentences respectively. Again, all AFE models except Leaf offer significant gains both in terms of VRAM consumption and throughput compared to the base wav2vec 2.0. Mini-batch size variations have also been investigated, but are not reported for the sake of readability. In fact, the less memory-demanding AFEs, e.g., FBank-CNN2d, enable much larger mini-batch sizes than the standard wav2vec 2.0 (e.g., 64 against 16 with 15 s long sentences). This results in better utilization of the GPU, smaller training times, and, critically, enables the training of speech SSL models on cheap GPUs such as the Ti 80 and 90 families.
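The timing and peak-VRAM measurements above can be collected with a few lines of PyTorch; the helper below is a simplified sketch of the procedure (names are ours, and the warm-up steps described earlier are assumed to have already run), using CUDA synchronization for meaningful timings and the peak-allocation counter for VRAM.

```python
import time
import torch

def profile_step(model, loss_fn, batch, device="cuda"):
    """Time one forward/backward pass and report peak VRAM in MB.
    loss_fn must reduce the model output to a scalar loss."""
    model.to(device)
    batch = batch.to(device)
    torch.cuda.reset_peak_memory_stats(device)
    torch.cuda.synchronize(device)
    t0 = time.perf_counter()
    loss = loss_fn(model(batch))          # forward
    torch.cuda.synchronize(device)
    t_fwd = time.perf_counter() - t0
    loss.backward()                       # backward
    torch.cuda.synchronize(device)
    t_total = time.perf_counter() - t0
    vram_mb = torch.cuda.max_memory_allocated(device) / 2**20
    return t_fwd, t_total - t_fwd, vram_mb
```

Dividing the total step time by the batch duration in seconds of speech gives the per-second throughput metric reported here.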
Downstream Evaluation
The downstream performances of seven AFEs are evaluated on three downstream tasks originating from the SUPERB benchmark. We decided to discard Leaf due to its poor processing speed, resulting in intractable training times.

Table 1: Downstream results of wav2vec 2.0 pre-training with seven AFEs (Leaf is discarded due to poor efficiency measurements). For speech recognition, word error rates (WER) after fine-tuning are reported on the subsets of Librispeech (dev-clean and test-clean). For keyword spotting, accuracies are reported on Google Speech Commands. For speaker verification, the equal error rate (EER) is reported after fine-tuning on VoxCeleb1. GPU requirements are for 200k steps with 4 GPUs in less than 7 days of pre-training, within GPU memory.

Pre-training follows the original wav2vec 2.0 setup [START_REF] Baevski | wav2vec 2.0: A framework for self-supervised learning of speech representations[END_REF]. Hence, the number of negatives (100), the length of the masks (10), their probability (0.65), and the 1.6-hour mini-batch length are identical.

Downstream evaluation. ASR training is performed with the 100-hour clean subset of Librispeech. Word error rates are reported on the dev and test clean sets with 4-gram rescoring. Two dense layers alongside layer normalization are added on top of the pre-trained model following the official SpeechBrain recipe. The entire architecture, including wav2vec 2.0, is fine-tuned with the CTC loss. KS is done with the Google Speech Commands dataset using the 12-command setup. Average pooling is applied to the output of the wav2vec 2.0 model, with a final dense layer for classification. Again, and in contrast to SUPERB, the whole architecture is fine-tuned end-to-end. Finally, ASV is conducted with the VoxCeleb1 dataset. In this case, the wav2vec 2.0 model is frozen to add some diversity to our evaluation. Hence, we follow the SUPERB approach and instead extract a learned weighted sum from all the layers of the wav2vec 2.0 model. Nevertheless, this evaluation uses the SOTA ECAPA-TDNN architecture [START_REF] Desplanques | Ecapa-tdnn: Emphasized channel attention, propagation and aggregation in tdnn based speaker verification[END_REF] to consume those features instead of the mere x-vector [START_REF] Snyder | X-vectors: Robust dnn embeddings for speaker recognition[END_REF] model from SUPERB. Hyper-parameters are identical to existing SpeechBrain recipes for each dataset.

Downstream results. First, the results reported in Table 1 are in line with previous works evaluating partially pre-trained wav2vec 2.0 [START_REF] Vyas | On-demand compute reduction with stochastic wav2vec 2.0[END_REF]. Speaker verification EERs, for instance, are even better than those reported on the official SUPERB leaderboard, with 4.1% in our experiments against 6.0%, for the base wav2vec 2.0. This difference is due to the use of the ECAPA-TDNN architecture as a downstream decoder. Then, pre-training and Librispeech fine-tuning times also align with the findings from the efficiency analysis, with every AFE offering speed-ups ranging from 1.4x for SincNet to 3.7x for FBANK-CNN2d compared to wav2vec 2.0. Hence, the total pre-training time is lowered to 180 GPU hours for FBANK-CNN2d against 670 for wav2vec 2.0. Most newly introduced AFEs also lower the bar in terms of GPU requirements, as they enable pre-training in 7 days or less on four Tesla V100 32GB or RTX 3090 24GB for FBANK-CNN2d, compared to four Tesla A100 80GB for wav2vec 2.0. The latter change reduces the per-GPU price by a factor of roughly 10. Wav2vec 2.0, however, offers the most consistent downstream performance across the three tasks. We hypothesize that such behavior is the result of the extensive hyperparameter tuning at the origin of the wav2vec 2.0 architecture. Nevertheless, the drops in accuracies and word error rates observed with more efficient AFEs are far from dramatic. Indeed, FBANK-CNN1d obtains similar performance to the baseline, with a 0.1% relative increase in WER and EER and an equivalent accuracy on KS, while the pre-training time is reduced by 2.5x. The fastest and cheapest alternative, FBANK-CNN2d, suffers from a higher WER, with a relative increase of 1.3%, but shows similar or identical ASV and KS performance compared to wav2vec 2.0. SEW, the standard FBANK, and FastAudio also offer substantial speed and memory improvements, but show notable increases in WER. Their ASV and KS accuracies illustrate, however, that these AFEs remain perfectly suitable for those tasks.
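For the frozen ASV setting described above, the learned weighted sum over the encoder's hidden layers can be sketched as follows; this mirrors the SUPERB-style recipe, with illustrative names and a softmax-normalized scalar weight per layer.

```python
import torch
import torch.nn as nn

class LayerWeightedSum(nn.Module):
    """SUPERB-style feature extraction from a frozen SSL encoder:
    one learnable scalar weight per hidden layer, softmax-normalized."""
    def __init__(self, n_layers):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(n_layers))

    def forward(self, hidden_states):        # list of (batch, frames, dim)
        stack = torch.stack(hidden_states)   # (layers, batch, frames, dim)
        w = torch.softmax(self.weights, dim=0)
        return (w.view(-1, 1, 1, 1) * stack).sum(dim=0)

# e.g., 13 hidden states from a frozen wav2vec 2.0 BASE model
feats = LayerWeightedSum(13)([torch.randn(2, 49, 768) for _ in range(13)])
```

The resulting features are then consumed by the ECAPA-TDNN speaker-verification head.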
SincNet sits in the middle, as it offers the smallest speed improvement (1.4x) but exhibits comparable WER, EER, and KS accuracy against wav2vec 2.0. Overall, Table 1 shows that certain AFEs, such as FBANK-CNN1d or SincNet, can significantly lower the pre-training cost of wav2vec 2.0-based SSL models while ensuring a similar level of downstream performance. There exist use cases where a relative gain of 0.1% in raw performance at a 3x increase in computing cost might not be justified. Hence, such results should encourage the community to integrate better descriptors of model efficiency into standardized benchmarks.

Conclusion
Acoustic feature extractors play an important role in the large compute and memory costs of speech SSL pre-training. This paper bridges the gap between the abundant existing speech-processing literature and SSL pre-training, and extensively compares three novel and five existing AFEs, at both the efficiency and downstream-performance levels. The observed results suggest that significant compute and memory savings can be achieved without a disproportionate downstream performance impact. The latter finding is a major step towards lowering the entry bar to speech SSL pre-training.

Figure 1: Efficiency analysis of the eight AFEs. Time measurements are averages for processing 1,000 sentences. The stacked bar charts report forward and backward timings, while the scatter plot offers a view of the throughput (i.e., averaged processing time in ms to train on one second of speech) against peak VRAM. Aside from Leaf, all proposed AFEs offer a training speed-up and a better throughput/VRAM ratio.
04116461
en
[ "shs.phil", "shs.hisphilso" ]
2024/03/04 16:41:26
2023
https://hal.science/hal-04116461/file/Are%20Hobbesian%20states%20as%20passionate%20as%20Hobbesian%20individuals.%20HAL%20version.pdf
Jerónimo Rilla

Are Hobbesian states as passionate as Hobbesian individuals?

Keywords: Hobbes, passions, states, international relations

This article deals with the possibility of ascribing passions to states in Thomas Hobbes's political theory. According to Hobbes, the condition of sovereign states vis-à-vis one another is comparable to that of individuals in the state of nature, namely, a state of war. Consequently, the three causes of war (competition, diffidence, and glory) identified in chapter XIII of Leviathan could also be relevant to interstate relations. Since these war triggers are mainly passions, one could presume that state action is motivated by passions as well. Some argue that it is just a figurative way of speaking. Others claim that the passions of war affect only sovereign rulers. I explore an alternative answer based on the ability of sovereigns to direct the preexisting passions of their people.

Introduction
In chapter XIII of Leviathan, Thomas Hobbes claims that while the state of nature as a war of all against all may not have existed among "particular men", it certainly exists among sovereign states (L, XIII.12, 196). 1 The best example of the state of nature is international relations. Commonwealths are in a latent state of hostility towards each other (DCv, XIII.7, 144). 2 This is the intended meaning of Hobbes's famous formulation: "man is a wolf to man", which "is true of… relations between commonwealths" (DCv, Preface, 3-4). In chapter XXX of Leviathan, Hobbes asserts that the rules that guide individual behavior in the state of nature are the same that regulate the behavior of sovereign persons: "the Law of Nations and the Law of Nature is the same thing" (L, XXX.30, 552). The sovereign's right to protect its people against foreign enemies is identical to the individual's right to protect his or her body in the state of nature. It is no accident, then, that Hobbes depicted states as artificial men (L, Introduction, 16).

1 Thomas Hobbes, Leviathan, ed. Noel Malcolm (Oxford: Oxford University Press, 2012). Henceforth referred to as L and LL.
2 Thomas Hobbes, On the Citizen, ed. Richard Tuck (Cambridge: Cambridge University Press, 1998), referred to as DCv.

In view of this, it could be argued that Hobbes's reasoning about the war of all against all is especially germane to the situation of sovereign states towards one another. If their condition is comparable to that of individuals in the state of nature, the three causes of war, i.e., competition, diffidence, and glory (L, XIII.6, 192), might also apply to sovereign states. I explore one aspect of this analogy, namely, the fact that Hobbes's description of war's triggers in the state of nature is an "inference made from the passions" (L, XIII.10, 194). As such, we may conclude that state warfare should also be motivated by passions. Some interpreters concur with this. David Gauthier speaks of "interests and values" of states, which are "subjective and selfish" like "Hobbesian men". 3 Hedley Bull claims that "all of what Hobbes says about the life of the individual men in the state of nature may be read as a description of the condition of states in relation to one another". 4 In Glen Newey's terms, "non-state corporations" and "sovereign states" are actors "to which the state-of-nature motives of competition, diffidence and glory can be ascribed". 5
David Armitage argues that "the commonwealth once constituted as an artificial person took on the characteristics and the capacities of the fearful, self-defensive individuals who fabricated it". 6 None of these commentators, however, address the question explored in this article: in what sense are states selfish, distrustful, or glorious, and in what sense can passions be attributed to states? Section 2 considers one possible answer: the metaphorical attribution thesis, which holds that if Hobbes's theory leads us to attribute passions to states, it does so only figuratively. It then shows why this account is inadequate. Section 3 reviews a second possible answer: the reducibility thesis, which argues that it is individual rulers who are competitive, distrustful, and proud, not the states. I point out some problems with this thesis and present an alternative reading of Hobbes as proposing a passionate-compound account, in which sovereigns must rearrange the preexisting passions of the people and mobilize them in a coherent way towards war. State passions that elicit war are those in which the sovereign power (an individual or a group) manages to shape the wills of its citizens and infuse them with a coherent direction. In Section 4, I justify this thesis by delving into each cause of war: competition, diffidence, and glory. Finally, I address cases in which the passions promoted by the sovereign to conduct war are not entirely shared by its people. In trying to solve the interpretative puzzle of whether passions explain the occurrence of war between states in the same way they explain it for individuals, I draw from Hobbes's main political work, Leviathan.

My article contributes to Hobbes studies by presenting an original solution to a textual problem. Unlike other readings, the Hobbesian answer I defend allows us to reflect on the relationship between the passions of those in power and the passions of those governed. To go to war and wage it effectively, I claim, sovereigns must tap into the emotions of their people. More broadly, the Hobbesian reasoning I put forward highlights the role of passions over reason in the origination and conduct of war. Hobbes describes a state of nature delimited by competitive ambition, mistrust, and glory. Embedded in this "known disposition" to fight (L, XIII.8, 192), actors can behave in accordance with rational decision-making. As we will see, attacking preemptively is a rational course of action when the general climate of mistrust is taken for granted. But the prudential thing to do will always be determined by this framework of passions. Likewise, the moral thing to do, "to endeavor peace" (L, XIV.4, 200), the "first and fundamentall law of nature" (L, XIV.4, 200), will also be constrained by this passionate background, since it is valid only when it is possible to achieve peace. If not, one must employ all means of war to ensure one's safety (L, XIV.4, 200). If, as mentioned above, Hobbes equates laws of nature with the laws of nations, then elucidating how passions work at the state level provides us with conceptual tools for understanding not only how war is provoked, but also how the passions determine rational and moral courses of action. In this regard, my article can also be read as a contribution to the Hobbesian tradition in International Relations. 8

The metaphorical attribution thesis
One possible solution to the question of the attribution of passions to states is what I call the metaphorical attribution thesis.
George Kateb, for example, denies the legitimacy of this attribution, arguing that Hobbes's personification of states is "the most ingrained kind of political irrationality" because it applies to them terminology that is only sensible when used for individuals. 9 The vocabulary of passions should be restricted to individuals. If it is possible to ascribe passions to states, it is due to Hobbes's careless fondness for hypallage. Therefore, it only makes sense figuratively, which is why I refer to this as the metaphorical attribution thesis. I identify four reasons why passions can be attributed to states in a non-figurative way.

8 The scholarship on the Hobbesian legacy in international relations is huge. For an outline of the "Hobbesian Tradition", see Michael Williams, "Recasting the Hobbesian Legacy in International Political Theory", in International Political Theory after Hobbes, ed. Raia Prokhovnik et al. (London: Palgrave, 2010), 147-67. Also, Theodore Christov, Before Anarchy: Hobbes and his Critics in Modern International Thought (Cambridge: Cambridge University Press, 2015), 6-24, glosses the interpretations of Hobbes's international thought during the 20th century and the present.
9 George Kateb, "Hobbes and the Irrationality of Politics," Political Theory 17 (3) (1989), 381.

Passions are motions
To find out whether these alleged state passions are real or metaphorical, we must clarify what the notion of passion consists of. As the title of chapter VI of Leviathan indicates, Hobbes holds passions to be "the interiour beginnings of voluntary motions", which are "commonly called ENDEAVOUR" (L, VI.1, 78). Passions are simply endeavors or motions that affect certain bodies, i.e., "animals". Animal bodies have two sorts of motions: vital and voluntary (L, VI.1, 78). Vital motion is the regular motion of the body controlled by the heart. Voluntary motions or endeavors are the passions, which are "voluntary" in a Hobbesian sense, namely, as physical responses to the action of an external object on the animal body. External things transmit motions that first generate a sensation and are then communicated to the heart (L, VI.9, 82). Voluntary motions are connected to the vital motion of the body under a criterion of self-preservation. If the motion produced by the external thing "helps" the body's vital motion controlled by the heart (L, VI.10, 82), then it will make the body move toward that thing, a reaction "called appetite or desire" (L, VI.2, 78).
It could be argued that Hobbes's homology of the state with a human body is only figurative, a residue of an organicist political tradition. However, he is adamant in his attribution of movement to the commonwealth. In his Critique to Thomas White's De Mundo, he analyzes on what grounds we can affirm that a river, a human being, and a state are "the same entity" (AW, XII.4, 190). 10 The criterion that explains identity in all three instances is kinetic, the continuity of the movement that governs it. If the body's "movement or flux is one and the same" (AW, XII.4, 190), he argues, then the entity in question is the same. 11 The continuity of the body's vital motion determines its identity. Because "life is but a motion of limbs", Hobbes also attributes artificial life to automata (L, Introduction, 16). So, if movement is a property predicable of the state, then the state must be treated as an actual body. Indeed, Hobbes thinks that speaking of bodies metaphorically "is but an absurd speech" (L, VI.2, 78). The state is, as Philippe Crignon explains, a body among other bodies, an artificial body that owes its identity to its peculiar internal motion. 12 His reworking of the organicist tradition, Annabel Brett has proposed, is the dynamic conception of the state as something that moves itself. 13 This subsection has provided reasons to consider that motion can be attributed to the state in a substantial, non-figurative, way. A continuous motion or flux is, in fact, what accounts for the identity of the state. The sovereign power is the unifier of motion If the state has a motion of its own, we still need an explanation of how this motion is obtained. In the Critique Hobbes ventures an answer: through the unity of the government's actions. If the "continuous order and the movement of the government" remains the same, we can say that it is the same state (AW, XII.4, 191). Drawing on this passage, Sean Fleming asserts that sovereignty is what confers a corporate identity on a people. 14 As we learn from De Cive, the institution of a sovereign power operates by unifying many voluntary motions into one. When a multitude agrees that the will of a representative "is to be taken as the will of them all" then it "becomes one person, for it is endowed with a will and can therefore perform voluntary actions" (DCv, VI.1.Ann., 76-7). We should remember here what was claimed in 2.1: for Hobbes, wills or volitions are motions. So, as Mikko Jakonen summarized it, "governing the commonwealth means to control the motions of the people". 15 Now, the specific motion of a state depends on the sort of institution that governs it. The sovereign power instituted by the multitude will instill different sorts of motions to the political corporation depending on it being a monarchy, an aristocracy, or a democracy (DCv, X.16, 125). As we will see in section 4, the various institutions of sovereign power differ in their ability to unify the movement of the state. In Leviathan, Hobbes reiterates that the people "should receive their motion from the authority of the soveraign" (L, XXIX.20, 516). By its authority, the sovereign "is inabled to conforme the wills of them all" (L, XVII.13, 260). To will as one means being able to move and act as one, especially when ensuring internal peace and going to war. In sum, the state is an artificial body and sovereignty is its artificial soul, that gives "life and motion to the whole body" (L, Introduction, 16). 
When the sovereign is removed, the members stop receiving their motion from the state, which in turn becomes a corpse (L, XXIX.23, 518). Hobbes also explains how the sovereign infuses motion into the people. In chapter XXX he argues that the laws do not "bind the People from all voluntary actions" but "direct and keep them in such a motion, as not to hurt themselves by their own impetuous desires" (L, XXX.21, 540, my emphasis). Sovereigns are to enforce the law of "compleasance" and ensure that people are willing to live in society. Those who, "for the stubbornness of [their] passions, cannot be corrected", must be expelled (L, XV.17, 232). I have shown how the motion that gives identity to the state is generated by the unifying action or motion of the sovereign institution. Hobbes argues that the institution of a sovereign power establishes a unique principle of motion for the political body. As long as this principle endures, the state remains the same. In addition, he clarifies that the sovereign institution acts by conforming the wills of its people, that is, their voluntary motions.

State motions are construable as passions
The question of individuation has a final rearticulation in De Corpore, where Hobbes examines the criterion according to which a body's identity remains the same or changes. In this question, he includes "whether a city [civitas] be in different ages the same or another city" (CB, XI.7, 135). He then considers the "beginning of motion" as the principle of individuation (CB, XI.7, 137). If the origin of motion of a thing persists, the thing will be the same. In the case of the body politic, this means that it will be the same if its "acts proceed continually from the same institution" (CB, XI.7, 138). States retain their identity if their origin of motion, their sovereign institution, remains the same. To describe the individuating function of the sovereign power, Hobbes falls back on the terminology he uses to define passions: the "beginning of motion". Andrea Bardin underscores that "the physical concept of conatus as beginning of motion" seeks to explain all motions, both of natural and artificial bodies. 16 The sovereign institution is the "endeavor" that both initiates and conducts the state's motion. It initiates because it generates a coherent motion out of several incoherent wills. It conducts because it preserves the state's internal motion and thus maintains its identity throughout its actions. Hence, motions originated by the sovereign institution are construable as passions, since they transmit the will (the desires and aversions) of the political body. Through legislation the sovereign power leads the people's motions and is thus "inabled to conforme" their wills (L, XVII.13, 260). Affecting the subjects' desires and aversions, the sovereign "keeps them in such a motion" (L, XXX.21, 540) as preserves internal peace and cohesion against foreign enemies. This version of the state does not replace or cancel, but supplements, Hobbes's sophisticated elaboration of its fictive personality. The establishment of a unique principle of motion depends on a representation "by fiction". As is well known, a multitude becomes one person when represented by an individual or an assembly (L, XVI.13, 248). Each member of the multitude pledges to "owne and acknowledge himselfe to be author of whatsoever" the representative does (L, XVII.13, 260). They authorize the will of the sovereign power to function as the will of all. On this account, the state is an artificial person.
At the same time, the person of the commonwealth is upheld by a particular configuration of real human beings whose motions are led by a representative and whose actions have effects in the real world. The covenant is not enough if not enforced by an effectively awe-inspiring power (L, XVII.12, 260). To function as the will of them all, the sovereign must be able to "conform" and "direct" their wills. In this regard, the state is an artificial body. Sandra Field has expounded this difference in terms of the sovereign's potestas and potentia, between its "entitled capacity" and its "effective capacity". Hobbes's challenge is "to bring that effective power to coincide with right". 17 I claim similarly that the fictitious personhood of the state and the juridical prerogatives with which it has been endowed need to be backed up by kinetic support, by the exercise of power through movement. To hold sovereign power means being authorized by the people, but also being able to move and direct that people. Hobbes recognizes that the state has a distinctive motion as a body politic. This motion begins when the sovereign power is instituted. As long as this centralized generation of motion persists, the state remains the same. This motion also conveys the will of the state. To deliver this motion, the sovereign power reshapes the wills of the multitude in a coherent way and gives them a direction. Just as passions express the voluntary motions of an individual (Section 2.1), the voluntary motions initiated by the sovereign power can be construed as the desires and aversions of the state. The motions of the state are its voluntary motions, i.e., its passions. Since Hobbes is referring to motions that are real features of the political body, the attribution of passions to states can be interpreted in a non-figurative way. In what follows I elucidate why these passions should be attributed to the state as a whole and not simply to the sovereign.

17 Sandra Field, "Hobbes and the Question of Power," Journal of the History of Philosophy 52 (1) (2014), 79-80.

The reducibility thesis
Another possible approach to the attribution of passions to states is to maintain that those conflictive passions affect only sovereigns. It is the human individuals who govern the commonwealths that are competitive, diffident, and proud, not the states. I call this position the reducibility thesis. Glen Newey takes this approach when he claims that "Hobbes thought that the state of nature obtained between both individual humans in the state of nature and persons who exercise sovereign power in international affairs". 18 Haig Patapan explains events in Hobbesian international relations as products of the glory of sovereigns, who pose "the same problems that Hobbes discerned in the glory seeker in the state of nature". 19 Scholars who underscore Hobbes's lack of a genuine notion of group persons also support this view. Otto von Gierke argues that Hobbes dissolves group personhood "into representing and represented individuals", suggesting that the actions of the state can be reduced to the actions of the sovereign. 20 The most articulate objection, though, comes from Christian List and Philip Pettit, who hold that Hobbes falls into "an easy translation from talk of group agents into talk of individual agents". 21 According to them, Hobbes would end up presenting a "thin", "redundant" or even "degenerate" version of collective action. 22
On this account, Hobbes's artificial men are to be equated for practical purposes with sovereigns, and the landscape of international relations would boil down to the effects of the passions that impinge on the natural body of state rulers.

18 Newey, Routledge Guidebook, 162.

My previous reasoning calling into question the metaphorical attribution thesis may appear to reinforce this reducibility thesis. After all, sovereign rulers are the ones who generate and conduct the motions of the state. One could argue that what they will is what counts as what the entire political corporation wills. However, I explore a non-reductionist way of understanding state passions. My thesis is that passions, especially those that function as drivers of state wars, are not emotions that occur simply in the minds of sovereign rulers. Instead, the mobilization of the body politic is a dynamic process that entails the sovereign's recognition and reshaping of the passions of the people. I call this the thesis of the passionate compound. In my interpretation, the motion of the commonwealth is a whole. As Hobbes explains, "the cause of the whole is compounded of the causes of the parts" (DCo, VI.2, 67). The components of this whole are two: (i) the incoherent and preexisting motions of the multitude; and (ii) the superimposing and coordinating motions of the sovereign institution. In causal relations, Hobbes distinguished between the "material cause" or the "patient" and the "efficient cause" or the "agent" (DCo, IX.3, 121). In order to elicit an effect, both the active and the passive factors must be in place. As we have seen in Sections 2.3 and 2.4, the sovereign power plays an active role in the causation of the state's motion. It acts as a vector of the state's movement, initiating it and deciding where to move. Now we can expand that contention and claim that the sovereign operates as an efficient cause on a material cause, i.e., the voluntary motions of its people. Samantha Frost has emphasized the synergy, "in both the philosophical causal sense as well as in the affective sense", of the actions of the sovereign and the will of the people. 23 This also means that the will of the sovereign power does not operate in a void. The sovereign, understood as the initiator of the state's motion, will necessarily encounter a reaction in the patient: "reaction is nothing but endeavour in the patient to restore itself to that situation from which it was forced by the agent" (DCo, XXII.19, 348). The patient certainly does its part and needs to be taken into account. So, the motion of the state is the result of a compound of two factors: the voluntary motions of the sovereign and those of the people. This compound motion of the state is the kinetic ground that supports the attribution of passions (i.e., voluntary motions) to the state. I have already established that the sovereign power's success in maintaining the state's identity amounts to acting in a "continuous" way. In De Corpore Hobbes refers to the continuity of motion when discussing the impression of a habit in the material cause. Habit is "an easy conducting of the moved body in a certain and designed way". It is achieved "by the weakening" of contrary endeavors and "by the long continuance of action" (DCo, XXII.20, 349, my emphasis). Moreover, to impress a habit, Hobbes argues, one needs a particular "skill" that consists in "compounding interrupted motions or endeavours into one equal endeavour" (DCo, XXII.20, 349, my emphasis).
Analogously, it is possible to interpret the way the state's motion is shaped in light of a compound of endeavors. In the Introduction of Leviathan Hobbes identifies the material cause of the state with the human individuals that compose it (L, Introduction, 19). To get to know the nature of the state, one needs to know the stuff it is made of. Above all, the sovereign, acting as the efficient cause, needs to know the material cause on which it acts. "He that is to govern a whole nation, must read in himself not this or that particular man, but man-kind" (L, Introduction, 20). Since passions tend to operate similarly in all human beings, sovereigns must read humanity in themselves to understand how the passions of their people work (L, Introduction, 19). This is a difficult skill, however, because everyone tries to hide their true feelings (L, Introduction, 18). Because sovereigns are in charge of international relations, they must not only be familiar with the passions of their people, but also know how the passions of the people of other nations work. This is why Hobbes exhorts them to discover "man-kind" in themselves. To impress a continuous motion on their people and compound their endeavors into one, sovereigns must be attentive to how that people feel. But to lead them effectively to war, they also need to know how the people of other nations feel. Further evidence for this passionate-compound thesis can be obtained from Hobbes's reference to war as a "unanimous endeavour against a forraign enemy" "governed and directed by one judgement" (L, XVII.5, 258). In my analysis, this unanimous endeavor should be understood as the preexisting matter of passions. The sovereign power enjoys a juridical prerogative that validates its active function: "the right of making warre and peace to other nations" (L, XVIII.12, 274). In that capacity, it acts on the material cause, i.e., governs and directs the endeavors or voluntary motions of its citizens fittingly. There is a way of attributing passions to states that would not fall into a metaphorical license, nor into an easy translation into the state of mind of the rulers: the thesis of the passionate compound, according to which the sovereign power conducts the preexisting endeavors or passions of the multitude by reshaping them and creating a new endeavor. The sovereign initiates a specific movement by infusing a coherent direction into the voluntary motions of its subjects. In line with the Hobbesian model of causality, it operates as an efficient or active cause. Conversely, the people's wills are the material cause that is conformed by the active contribution of the sovereign. Viewed in this way, the voluntary motions of the commonwealth, its desires and aversions, are constituted conjointly by the voluntary motions of the sovereign as an agent and the disordered motions of the people as a patient. As we shall see in the next section, this passionate compound is particularly germane to the attribution of passions to states in the case of war.

Three passions of war

Desire and hope
The first cause, competition, is not a passion, but is engendered by passions, namely, desire and hope. Competition leads to war because "the way of one competitor to the attaining of his desire is to kill, subdue, supplant or repell the other" (L, XI.3, 152, my emphasis). Without desire, there would be no competition. Human beings strive both to fulfil present desires and to acquire the means that will also enable future fulfillment (L, XI.2, 150).
And those means are defined by Hobbes as "power" (L, X.1, 132). Even if we do not desire the same things, any object of desire of one person might be reinterpreted by another as an instrument for the fulfillment of his or her future desires. Hence, we end up in a zero-sum scenario: the object of desire one attains is a potential instrument taken away from the others. As Arash Abizadeh points out, Hobbes is thinking of "goods that are intrinsically, not incidentally, scarce". 24 Desire, nonetheless, is not enough to elicit war. Additionally, there should be an "equality of hope in the attaining of our ends" (L, XIII.3, 190). Hobbes tells us that competition "maketh men invade for gain" (L, XIII.7, 192). When we examine his description of invasions, it is quite evident that they are activities that suit both individuals and groups. Invasions are carried out "with forces united" (L, XIII.3, 190). These forces can be construed as small "systems" or gangs of individuals gathered by "one interest or one businesse" (L, XXII.1, 348), that is, the desire and expectation of obtaining gain. Desire and hope for gain are feelings that "dissociate and render man apt to invade and destroy one another" (L, XIII.10, 194), but at the same time, they join people together. In chapter XVII of Leviathan, Hobbes mentions that in the past, plunder was an honorable trade that kept small families united around a purpose (L, XVII.2, 254-6). Conquering groups stirred by desire for gain also include political corporations: "as small families did then, so now do cities and kingdoms, which are but greater families (for their own security), enlarge their dominions" (L, XVII.2, 256, my emphasis). Indeed, war and invasion are sometimes "necessary for the citizens to prosper" because they can increase their wealth (DCv, XIII.14, 149) and help finance a tax exemption for the poorest (DCv, XIII.14, 150). States struggle to make a profit, too. The difference between a gang raid and a state invasion is one of identity and continuity of motion. Leagues dedicated to looting depend on a contingent "similitude of wills and inclinations" (L, XXII.28, 370). A state, by contrast, operates with a continuous flux of passions administered by the sovereign power. As long as it is a true union, it maintains a regular motion towards an object of "common interest" (L, XIX.4, 288), which in this case is described as an "appetite… of enlarging dominion" (L, XXIX.22, 518), "limited… by externall accidents, and the appetites of their neighbours" (L, XXIV.8, 390-2). The issue of the continuity of motion is also germane to the superiority of monarchy over sovereign assemblies, which are unsuitable "for the government of a multitude, especially in time of warre" (L, XVI.17, 250, my emphasis). In assemblies the "inconstancy" of human beings is aggravated by the fickleness of numbers: the decision supported by a majority one day may be a minority opinion the next (L, XIX.6-7, 290). Also, in assemblies, passions do not converge but block each other "and reduce their strength, by mutuall opposition, to nothing" (L, XVII.4, 256). In sum, a monarch is better equipped to guide the passionate components that sustain a war for gain. This does not mean that those passions can be narrowed to the emotions of the individual sovereigns in power. During a conquest, the preservation of the state's motion depends not only on the sovereign or on the forces raised, but also on extracting money from subjects to finance it (L, XVIII.12, 274). The state's endeavor has to inhere both in the citizens that fight and in the ones that give monetary support. All are comprised in the "appetite" for dominion. Hence, this cause of war is neither explicable as the contingent aggregation of the citizens' passions, nor in terms of the psychological state of mind of the ruler in charge. One state competes against another on the basis of a continuous flow of movement that, while steered by the sovereign power, fuels a collective quest for prosperity.

Diffidence

If two parties desire the same thing, a commodity or an instrument of power, and have fairly equal expectations of attaining it, they "become enemies" (L, XIII.3, 190) and the danger of an invasion or an attack looms. 25 Owing to this generalized state of anxiety and misgivings, they try to defend their interests by means of preventive attacks. This is the most "reasonable" way of ensuring one's safety (L, XIII.4, 190). Violence breaks out not by virtue of the aggressive nature of human beings, but "for safety", "to defend" (L, XIII.7, 192) one's position against a presumptive attack, and to the extent that it is what "conservation requires" (L, XIII.4, 190) in that situation. What Hobbes signifies by diffidence is a "degree" of fear originated by "distrust" towards others (EL, IX.9, 53), 26 or simply a "fear from each other" (LL, 191). Analogously, fear is the dominating feature of international relations. States expand their territories driven by "fear of invasion" (L, XVII.2, 256). This is reiterated in De Cive: all states, even those that maintain peaceful relations with their neighbors, vigilantly guard their frontiers and thus "admit their fear and distrust of each other" (DCv, Preface, 10-1). Distrust, then, is an expression of hostility (DCv, XVII.27, 231).
24 Arash Abizadeh, "Hobbes on the Causes of War: A Disagreement Theory," American Political Science Review 105 (2) (2011), 310.
25 I reconstruct the problem of diffidence and anticipatory violence without resorting to the distinction between dominators and moderate agents. For an alternative reading, see Daniel Eggers, "Hobbes and game theory revisited: Zero-sum games in the state of nature," Southern Journal of Philosophy 49 (3) (2011), 201-6.
26 Thomas Hobbes, The Elements of Law Natural and Politic, ed. John Gaskin (Oxford: Oxford University Press, 1999), 53, henceforth referred to as EL. As Richard Tuck explains in "Hobbes's moral philosophy," in The Cambridge Companion to Hobbes, ed. Tom Sorell (Cambridge: Cambridge University Press, 1996), 161, while diffidence is defined as "constant despair" in L, VI.20, 84, this formulation does not quite convey the nature of the feeling.
Fear of external enemies serves as a binding element for the political corporation (L, XXV.16, 412), because this sort of fear is experienced in a collective fashion. Hobbes argues that the size and power of "the enemy we fear" are constantly compared with our own (L, XVII.3, 256, my emphasis). His lifelong brotherhood with fear ratifies this contention. Faced with the imminent arrival of the Spanish Armada, his "mother had so much fear that she gave birth to twins: myself and fear at once. This is why, I believe, I hate the enemies of the fatherland" (Vita, lxxxvi, my emphasis). [START_REF] Hobbes | Thomae Hobbes Malmesburiensis Vita Carmine Expressa[END_REF] The collective feeling of fear serves not only to develop an external enmity, but also to consolidate a national identity. [START_REF]On the topic of negative association as key to the formation and maintenance of Hobbesian political corporations, see Ioannis Evrigenis, Fear of enemies and collective action[END_REF] Hobbes is careful to point out that these images of aversion must be directed by the state. Sovereigns must "forewarn" and "forearm" their citizens (DCv, XIII.7, 144). The preservation of the state's identity entails, among other things, delineating an external source of fear against which the sovereign power is the sole guardian. Hobbes's warning extends also to internal enemies, i.e., groups sponsored by foreign nations to propagate pernicious doctrines and undermine the power of a state (L, XXII.27, 368). [START_REF] Untea | External Authority or External Threat? Thomas Hobbes and the Politically Troubled Times of Early Modern England[END_REF] Now, diffidence may lead to an aggressive action or may temporarily prevent states from attacking each other. Given that sovereigns might not know what forces other commonwealths possess and fear they are greater than expected, they may promote cautious behavior. [START_REF] Silviya | Hobbesian Internationalism: Anarchy, Authority and the Fate of Political Philosophy[END_REF] In a passage of the Dialogue, Hobbes considers that "mutual fear may keep them quiet for a time" (D, 12). [START_REF] Hobbes | A Dialogue Between a Philosopher and a Student, of the Common Laws of England[END_REF] As long as this fear lasts, states will hold their positions. These contingent moments of rest or peace are not exceptions to, but foreseeable components of, Hobbes's state of war. War is like "foul weather", not "actual fighting", but "the known disposition thereto" (L, XIII.8, 192). Thus, "upon every visible advantage" (D, 12), battle will be resumed. Focusing on fear, we can also verify the superiority of monarchy over a sovereign assembly. For Hobbes, panic is a passion specific to "a throng, or multitude of people", a kind of fear in which everyone acts by imitation, copying the fear of others, but without a clear notion of its origin (L, VI.37, 86). In the words of Mikko Jakonen, panic "introduces the disordered motion typical of the multitude". 32 Whereas monarchs will be able to detect the origin of their fear (e.g., a real threat from a neighboring state) and impress a coherent motion on the commonwealth accordingly, an assembly is susceptible to being affected by panic. Also, a sovereign assembly might be too prone to dismiss fear, as Daniel Kapust explains, because no member wants to be considered a coward in front of their peers. 33
Therefore, everyone adopts an aggressive stance and avoids participating in the passion of fear, even though it is a relevant factor in the relationship with an enemy state. By excess or by defect, collective bodies are incapable of adequately compounding the passion of fear. In mechanistic terms, fear is decisive for the movement and rest of the state. It can mobilize citizens to engage in combat against a loathed enemy or provoke a mistrustful quiescence, a state of permanent alert in the face of an alien threat that holds the community together. This kinetics is consistent with, and attributable to, a political corporation as a whole only when guided by a sovereign power. A multitude that panics without a clear understanding of the source of its terror cannot wage a war. A league may temporarily assemble out of concern about an external menace (L, XXII.29, 370), but when "they have no common enemy", they will separate due to the difference of their interests and fall back into a war of all against all (L, XVII.5, 258). Only a state with a principle of motion can maintain a continuous flow of fear that draws people together, alerts them to possible invasion or mobilizes them for a preemptive attack.
32 Jakonen, Multitude, 96.
33 Daniel Kapust, "The Problem of Flattery and Hobbes's Institutional Defense of Monarchy," The Journal of Politics 73 (3) (2011), 686 and 690.

Glory

As Hobbes elucidates in chapter VI of Leviathan, glory is an "exultation of the mind", the "joy arising from imagination of a mans own power and ability" (L, VI.39, 88). This satisfaction entails comparison (DCv, I.2, 131), 34 for the bliss resulting from the conception of our own power depends on the corroboration that we are more powerful than others (L, XVII.8, 258). Power is "the excess of the power of one above that of another" (EL, VIII.4, 48). Hobbes thinks that human beings fight "for reputation" (L, XIII.7, 192) for two reasons. There are exceedingly glorious people who fall into aggressive behavior because they gloat over "the pleasure of contemplating their own power" (L, XIII.4, 190). This set of people are usually characterized as "conquerors" or "aggressors". 35 At the same time, glory is a universal passion, because we all want others to value us as we value ourselves. If they despise or underestimate us, we will try to extract from them by force the valuation that we think we deserve (L, XIII.5, 190). As Gauthier argues, no one can willingly admit the inferiority of power that comes with contempt. 36 Thus, everybody seeks to rejoice by judging themselves more powerful than others and is therefore willing to exert violence if that judgement is not recognized. Hobbes defines this state of mind as "pride" or the "breach" of the ninth law of nature, "that everyman acknowledge other for his equall by nature" (L, XV.21, 234). On the face of it, this thirst for glory may come across as an irrational or delusional passion that drives people to fight "for trifles" (L, XIII.7, 192). 37 To be glorious in the state of nature is, however, a good proxy for sanity and good sense. More precisely, to succeed in assigning oneself a higher value than one's neighbors means to unbalance the condition of symmetry by which all human beings are "in the same danger" (L, XIII.1, 188).
34 On the comparative and subjective aspects of power, see Gabriella Slomp, Thomas Hobbes and the Political Philosophy of Glory (New York: St. Martin's Press, 2000), 39-40; and Yves Zarka, Hobbes and modern political thought (Edinburgh: Edinburgh University Press, 2016), 75.
35 There is no consensus on this issue. Kavka, Hobbesian Moral and Political Theory, 116, distinguishes "dominators" as those "who possess… the desire of power over other people", from "moderates", whose "considerations of safety [are] their primary motives". Pärtel Piirimäe, "The explanation of conflict in Hobbes's Leviathan," TRAMES 10 (60/55) (2006), 7, and Ioannis Evrigenis, "The State of Nature," in The Oxford Handbook of Hobbes, ed. Kinch Hoekstra & Aloysius Martinich (Oxford: Oxford University Press, 2016), 230, understate this distinction.
36 Gauthier, Logic, 16.
37 Jean Hampton, Hobbes and the Social Contract Tradition (Cambridge: Cambridge University Press, 1986), 38-9; Michael Oakeshott, Hobbes on civil association (Indianapolis: Liberty Fund, 2000), 127-8; Delphine Thivet, "Thomas Hobbes: A philosopher of war or peace?," British Journal for the History of Philosophy 16 (4) (2008), 714; and Julie Cooper, "Vainglory, modesty, and political agency in the political theory of Thomas Hobbes," The Review of Politics (2010), 248, held this contention to some extent.

Power is only useful if it is "eminent": distributed equally, it is of no use (DH, XI.6, 98). 38 Those whose self-assigned values are acquiesced to by potential competitors are thereby powerful, because "reputation of power is power" (L, X.5, 132). 39 Hence, the prideful conception of one's own power, when endorsed by others, betokens power and higher chances of survival. Hobbes acknowledges glory's collective dimension in his description of the causes of war. Glory causes people to engage in violence in response to "undervalue" that was directed at themselves "or by reflexion in their kindred, their friends, their nations, their profession, or their name" (L, 192, XIII). The worth of one's family, trade, gang, or nation reflects on one's own worth. Individuals experience glory when they manage to assert the value they attribute to themselves or to the groups they belong to. On this account, rejoicing in the power of a group is a feeling that can be shared by all its members. 40 This collective facet of glory through reflection is evident when Hobbes refers to the revolt of the "so-called Beggars" in Holland during the 16th century, stressing how powerful contempt and undervalue are as sources of sedition against the government (LL, XXX.16, 537). In Behemoth Hobbes claims that the driving force of the Parliamentarian army during the civil wars was simply spite (B, 253). 41 Glory mobilizes groups of people to conflict. It is in the interest of states, as it is of individuals and groups in the state of nature, to be glorious. A good deal of the success in international relations depends on the capacity of states to uphold their pride. Since a glorious state is literally the one that enjoys more recognized power than its rivals, it is less likely to be attacked. As with the two previous causes of war, the glorious motivations of a political corporation are not reducible to the mental states of its representative(s). The glory of sovereigns consists in the "vigor of their subjects" (L, XVIII.20, 282). In a clear allusion to the ends for which human beings quarrel, i.e., "gain", "safety" and "reputation" (L, XIII.7, 192), Hobbes claims that "no king can be rich, nor glorious, nor secure; whose subjects are either poore, or contemptible, or too weak through want or dissention, to maintain war against their enemies" (L, XIX.4, 288).
38 Thomas Hobbes, De Homine in Opera Philosophica Quae Latine Scripsit, vol. II, ed. William Molesworth (London: John Bohn, 1849). Referred to as DH.
39 On reputation as a positional good, see Barbara Carnevali, "Glory. La lutte pour la réputation dans le modèle hobbesien," Communications, 93 (2013), 54.
40 Slomp, Political Philosophy of Glory, 52, maintains that "[f]or Hobbes, as for Thucydides, ambition and pride characterise not only the behaviour of single individuals, but also the actions of entire peoples and nations".
41 Thomas Hobbes, Behemoth, ed. Paul Seaward (Oxford: OUP, 2009). Referred to as B.

Conversely, the vain aspiration to glory on the part of rulers will have counterproductive effects, making life painful for both kings and subjects (D, 16). Sovereigns cannot elicit by themselves emotions that are not in their people. 42 Hobbes recounted the situation of a defeated Charles I, who complained about the harsh treatment his people gave him, "which made them pity of him, but not yet rise in his behalfe" (B, 305). The feelings needed to undertake a war for reputation cannot emerge either from a contingent accumulation of arrogant individuals. To ascertain the state's power internationally, sovereigns need to mobilize and administer the glory of their people in a coherent and durable way. Those who face the risk of losing their lives for reputation are in a position to reflect on the appropriateness of war. For Hobbes, the obligation to carry out a dangerous mission assigned by the sovereign depends on its "intention" or "end" (L, XXI.15, 338). Hence, the mobilizing force of the citizens' individual glory may dwindle, leaving rulers with no other resource than "pressure" (L, XVIII.20, 282). And even if motivation remains strong, individual prides will not engender a harmonious effort, but more probably lead citizens to compete with each other for "preeminence". Herein lies a further Hobbesian argument for monarchy over sovereign assemblies. Whereas the monarch's glory will coincide with that of the commonwealth because in a monarchy private and public interests coincide (L, XIX.4, 288), members of a sovereign assembly will not act for glory in a coordinated manner. Instead, they will fight each other to monopolize the glory attributed to the state: "a monarch cannot disagree with himselfe out of envy or interest, but an assembly may, and that to such height as may produce a civill warre" (L, XIX.7, 290). The sovereign must galvanize the subjects' emotional energy, convincing them of the direction in which they are moving as a body politic. Corporate glory expresses a collective motion of affirmation of the state's identity and power. While this glory 'reflects' on its participants, it is not simply attributable to them individually. Citizens may feel glorious as parts of a common enterprise and a common superiority vis-à-vis other states. But that glory is attributable to the state only when it is soundly administered by the sovereign power.

A disjointed compound?

I have presented a schematic version of how state passions can be construed. In reality, however, it is not easy to determine if the passions of the people, the sovereign, and the commonwealth are conjoined. For instance, while the sovereign may decide to attack a foreign nation for glory, soldiers may obey because they fear punishment. A corporation of merchants might promote war on account of the profits they hope to make. Even though Hobbes claims that wars are "unanimous" endeavors, this is often not the case.
The disjunction is exacerbated, as we have seen, when sovereign power is held by a democratic assembly. Although individuals might experience fear in private, since they do not want to be considered cowards, they take the most reckless positions in public. Hence, what is shown in public as the will of the commonwealth does not coincide with the voluntary motions of the individuals who constitute it. In short, it is difficult to gauge whether the passions attributed to the state as a whole in a war are the passions that drive each individual to wage the war effort. My claim is that the direction of motion set by the sovereign must take into account the wills of the people and be sufficiently, but not unanimously, shared by them. To what extent the sovereign power can pressure a people who does not share its passions is an empirical question. At the beginning of Behemoth, Hobbes ventures into this kind of exercise as he enumerates the different "sorts" that intervened in the English civil wars and clarifies their motivations. From ministers who claimed to have a divine right to government (B, 108) to gentlemen infatuated with Greek and Roman institutions (B, 110), to those betting on staying in the winning party and benefitting from the war (B, 110), the emotions that drove the majority were too dissonant with the war effort of Charles I. One of the characters of the dialogue concludes that with such a people "the King is already ousted of his government" (B, 111). Charles' inability to conform their wills and impress a coherent motion to the body politic determined his fate. This might also explain the prominent role that the concept of "popularity" acquired in Leviathan. Popularity is particularly relevant in the army. To adequately execute their office, commanders must be popular, and therefore loved and feared (L, XXX.28, 550). Similarly, when sovereigns are popular, their power is strengthened because soldiers love them and their "cause". A popular sovereign will be able to "turn the hearts" of the people, that is, unite their endeavors and lead them to war (L, XXX.29, 550). One might think that popularity is too demanding a requirement for rulers as a means of reshaping the wills of their subjects. But Hobbes makes it clear that occupying the seat of the sovereign is itself a "popular quality" (L, XXX.29, 550). If sovereigns preserve the popularity inherent in their office, their ability to consistently lead and reshape the voluntary motions of the people is guaranteed. 43 Hobbes offers a theoretical model to think about the attribution of passions to states that is rooted in the joint operation of the passions of the sovereign as the efficient cause and the passions of the people as the material cause. In any war there are multiple motivations that drive individuals into battle. These may differ from the main passion that drives the body politic to war, i.e., the will of the sovereign power. The extent to which the passions that lead the war effort must be shared by the people can only be answered empirically. Nevertheless, Hobbes believes that if the sovereign remains popular, it will tap into the hearts of its people effectively enough to lead the motion of the body politic.

Conclusion

This article is built on the premise that international relations, and the "posture of war" among sovereign states, are actual examples of the state of nature.
As such, the three causes of conflict that Hobbes identifies as dominant in that condition may be relevant to account for the behavior of states. And since these drivers are mainly passions, states can be thought of as motivated by passions. This attribution of passions to states should not be dismissed as a figurative way of speaking or resolved by invoking the frame of mind of individual rulers. States can be construed as moveable bodies whose kinetics are guided by the sovereign power. A state preserves its identity if its governing institution manages to impart a permanent motion to it, if the sovereign reads its subjects' passions and forms a coherent will out of them. This feature of the sovereign's office is particularly important in the face of war, when the entire community must be mobilized through desire, fear, and glory. Focusing on each cause of conflict, I have explained how sovereign representatives and citizens contribute to generating what I consider to be state passions.
Leviathan (1651) and other works such as Elements of Law (1640), De Cive (1642/1647), the Anti-White (1643), De Corpore (1655), De Homine (1658), the Dialogue between a Philosopher and a Student of the Common Laws (1666/1681), the Latin Leviathan (1668), and Behemoth (1668/1681). I reconstruct an argument from premises set out explicitly by Hobbes to arrive at what, as Gregory Kavka claimed, "may be fairly dubbed" a "Hobbesian" conclusion.
7 Haig Patapan, "The Glorious Sovereign: Thomas Hobbes' Understanding of Leadership and International Relations," in British International Thinkers from Hobbes to Namier, ed. Ian Hall & Lisa Hill (New York: Palgrave Macmillan, 2009), 12.
20 Otto von Gierke, Natural Law and the Theory of Society. 1500 to 1800, ed. Ernest Barker (Cambridge: Cambridge University Press, 1934), 84.
21 Christian List and Philip Pettit, Group Agency. The Possibility, Design, and Status of Corporate Agents (Oxford: Oxford University Press, 2011), 76.
22 List and Pettit, Group Agency, 76.
David Gauthier, The Logic of Leviathan (Oxford: OUP, 1969), 207; Glen Newey, Routledge Philosophy Guidebook to Hobbes and Leviathan (London: Routledge, 2008), 208.
Hedley Bull, "Hobbes and the International Anarchy," Social Research 48(4) (1981), 720-1.
Newey, Guidebook to Hobbes and Leviathan, 166.
David Armitage, Foundations of Modern International Thought (Cambridge: Cambridge University Press, 2013), 64.
Gregory Kavka, Hobbesian Moral and Political Theory (Princeton: Princeton University Press, 1986), xii.
Thomas Hobbes, Thomas White's De Mundo Examined, ed. Harold Jones (London: Bradford University Press, 1976), referred to as AW. This contention is reiterated in Thomas Hobbes, Concerning Body, in The English Works of Thomas Hobbes, vol. I, ed. William Molesworth (London: John Bohn, 1839), XI.7, 137. Referred to as CB.
Philippe Crignon, De l'incarnation à la représentation (Paris: Garnier, 2012), 109.
Annabel Brett, "The Matter, Forme, and Power of a Common-wealth: Thomas Hobbes and Late Renaissance Commentary on Aristotle's Politics," Hobbes Studies, 23(1) (2010), 96.
Sean Fleming, Leviathan on a Leash (Princeton: Princeton University Press, 2020), 117.
Mikko Jakonen, Multitude in Motion: Re-Readings on the Political Philosophy of Thomas Hobbes (PhD Thesis, University of Jyväskylä, 2013), 116.
Andrea Bardin, "Liberty and representation in Hobbes: a materialist theory of conatus," History of European Ideas (2021), 9.
See also Douglass Jesseph, "Hobbes on 'Conatus': A Study in the Foundations of Hobbesian Philosophy," Hobbes Studies, 29(1) (2016), 83.
23 Samantha Frost, Lessons from a Materialist Thinker (Stanford: Stanford University Press, 2008), 172.
42 This point of the Hobbesian theory was raised by Gerald Gaus, "Hobbes's idea of public judgement," (n.d., 18).
43 The concept of popularity features only negatively in the Elements (EL, IX.7, 175-76) and De Cive (DCv, XIII.13, 149), associated with subjects who, on account of their popularity, can form factions and rebel against the sovereign power.
04116471
en
[ "qfin" ]
2024/03/04 16:41:26
2003
https://hal.science/hal-04116471/file/2003%20garcia_working%20paper%20GEA.pdf
Serge Garcia Measuring economies of scale and the efficient size of intermunicipalities
04116532
en
[ "qfin" ]
2024/03/04 16:41:26
2023
https://hal.science/hal-04116532/file/Taxes%20and%20economic%20growth%20in%20the%20WAEMU.pdf
Zacharia Zabsonre
email: [email protected]
Taxes and economic growth in the WAEMU
Keywords: taxes, economic growth, SURE model, UEMOA, interaction
This article analyses the interaction between taxes and economic growth in WAEMU countries by breaking down total taxes into direct and indirect taxes. The application of Generalized Least Squares (GLS) estimation to data for the period 1980-2020 through the Seemingly Unrelated Regression Equations (SURE) model has led to conclusive results. The results show a lack of significant interaction between total taxes and economic growth in Benin, Niger, Senegal and Guinea-Bissau. Unidirectional causality running from tax revenues to economic growth is found in Burkina Faso and Côte d'Ivoire. However, feedback is observed between the two variables in Mali and Togo. In the light of these results, it is necessary to strengthen the WAEMU Community Directives on direct and indirect taxation. This will allow the different countries to further improve their tax systems in order to boost their economic growth.

Introduction

The ideology of many low-income countries is systematic state intervention to stimulate economic growth [START_REF] Nafziger | Economic development. Fourth Edition[END_REF]. Tax reforms are therefore presented as having strong macroeconomic growth effects [START_REF] Engen | Taxation and economic growth[END_REF]. This hope is based on the fact that the tax system influences economic growth [START_REF] Thomakos | Taxation in Crisis: Tax Policy and the Quest for Economic Growth[END_REF][START_REF] Romer | Increasing Returns and Long Run Growth[END_REF][START_REF] Lucas | Supply-Side Economics: An Analytical Review[END_REF]. On the other hand, some governments want more economic growth to generate significant tax revenues [START_REF] Co %3b2-L | Fiscal policy and economic growth : Lessons for Eastern Europe and Central Asia[END_REF]. Indeed, the tax ratio increases with economic growth [START_REF] Chelliah | Tax Ratios and Tax Effort in Developing Countries[END_REF][START_REF] Tait | International Comparisons of Taxation for Selected Developing Countries[END_REF]Tanzi 1987[START_REF] Tanzi | Termites of the state : Why Complexity Leads to Inequality[END_REF]. They argue that the causality between economic growth and taxation runs from higher growth to higher levels of taxation. Yet some of the causality can also run from tax revenues to economic growth [START_REF] Todaro | Economic Development[END_REF] if tax resources are wisely spent, for example, to improve human capital and make the necessary investments in infrastructure. Finally, only empirical evidence can determine whether tax policy has a strong influence on economic growth or whether it is the reverse [START_REF] Engen | Taxation and economic growth[END_REF]. This theoretical controversy is also observed empirically. Indeed, some researchers have identified a unidirectional relationship between economic growth and taxes. Among them, many find that taxes positively affect economic growth ([START_REF] Köse | Effects of Monetary and Fiscal Policies on Economic Growth: An Empirical Study On The Iraqi Economy[END_REF][START_REF] Moh | The effect of tax revenues on GDP growth in Jordan[END_REF][START_REF] Amedanou | Do The WAEMU Member States Still Have Fiscal Space? Answering By ?[END_REF], etc.)
while others have reached the opposite result, that is, that taxation negatively impacts economic growth ([START_REF] Hakim | Direct Versus Indirect Taxes: Impact on Economic Growth and Total Tax Revenue[END_REF][START_REF] Oyinlola | Governance, domestic resource mobilization, and inclusive growth in sub-Saharan Africa[END_REF][START_REF] Ozpence | The relationship between tax burden and economic growth: Turkey case[END_REF][START_REF] Baiardi | Tax policy and economic growth: does it really matter?[END_REF][START_REF] Atems | Another look at tax policy and state economic growth:The long-run and short-run of it[END_REF], etc.). While previous work has focused on the effects of taxation on economic growth, some authors have looked at the opposite effect. They have thus provided evidence that economic growth generates tax revenues ([START_REF] Gurdal | The relationship between tax revenue, government expenditure, and economic growth in G7 countries: new evidence from time and frequency domain approaches[END_REF][START_REF] Hassan | Governance: A Source to Increase Tax Revenue in Pakistan[END_REF]Kobyagda, 2019, etc.). In contrast to those who have highlighted a one-way relationship, researchers such as [START_REF] Arvin | Are there links between institutional quality, government expenditure, tax revenue and economic growth? Evidence from low-income and lower middleincome countries[END_REF], [START_REF] Gurdal | The relationship between tax revenue, government expenditure, and economic growth in G7 countries: new evidence from time and frequency domain approaches[END_REF], [START_REF] Baiardi | Tax policy and economic growth: does it really matter?[END_REF] and [START_REF] Kane | Spillover Effects of Budgetary Policies in Monetary union: The Case of West African Economic and Monetary Union[END_REF] are among those who have highlighted a two-way relationship between taxation and growth. A third group of authors reported an absence of any relationship between the two variables ([START_REF] Baiardi | Tax policy and economic growth: does it really matter?[END_REF][START_REF] Atems | Another look at tax policy and state economic growth:The long-run and short-run of it[END_REF][START_REF] Kalaš | The relationship between taxes and economic growth: evidence from serbia and croatia[END_REF], etc.). These theoretical and empirical controversies lead us to investigate the causal relationship between taxes and economic growth. What is the interaction between economic growth and taxation in WAEMU countries? What are its nature, direction and intensity? Are there discernible differences in GDP growth as a result of tax cuts? To what extent has this growth been caused by tax cuts? The main objective of this research is therefore to analyse the interaction between economic growth and taxes in the WAEMU countries. We assume that there is feedback between these two variables in WAEMU countries. The application of Generalized Least Squares (GLS) estimation to data for the period 1980-2020 through the Seemingly Unrelated Regression Equations (SURE) model allowed us to test this hypothesis. The implementation of the SURE model for the first time in the context of WAEMU is one of the innovations of this research. Another important contribution of the present work is to highlight the interaction between direct taxation and economic growth separately from that between indirect taxation and economic growth. The rest of the document is structured as follows.
Section 2 provides a review of some relevant literature on the issue and Section 3 outlines the methodological approach. The results and their discussion are presented in Section 4, and the conclusion and economic policy implications in Section 5.

Literature Review

Here we review the literature from both a theoretical and an empirical standpoint.

Theoretical interaction between taxes and economic growth

The interaction between taxes and economic development is analysed in three directions. First, taxes influence economic growth; second, economic growth affects taxation; and finally, the existence of neutrality is often highlighted between the two variables.

Influence of taxes on economic growth

It is well known that taxes and tax regimes affect economic growth [START_REF] Thomakos | Taxation in Crisis: Tax Policy and the Quest for Economic Growth[END_REF]. Reducing the distortions of the tax structure would permanently increase economic growth [START_REF] Auerbach | Dynamic Fiscal Policy[END_REF]Auerbach, 1996a;Engen and Gale, 1996). But how does tax policy affect economic growth? Taxation essentially works through five channels to influence economic growth. First, high statutory corporate and personal income tax rates can discourage investment rates [START_REF] Solow | A Contribution to the Theory of Economic Growth[END_REF]. Second, taxes can dampen labour supply growth by discouraging labour force participation [START_REF] Solow | A Contribution to the Theory of Economic Growth[END_REF]. Third, tax policy can discourage productivity growth by mitigating research and development (R&D) and venture capital development [START_REF] Solow | A Contribution to the Theory of Economic Growth[END_REF]. Fourth, tax policy can influence the marginal productivity of capital by reducing investment in heavily taxed sectors in favour of those with lower overall productivity [START_REF] Harberger | The Incidence of the Corporation Income Tax[END_REF]. Fifth, high taxation of labour supply can impede the effective use of human capital by discouraging labour in sectors where social productivity is high but where the tax burden is high [START_REF] Engen | Fiscal Policy and Economic Growth[END_REF]. In the Harrod-Domar model, the rate of economic growth decreases with the increase in the tax rate [START_REF] Harrod | An essay in dynamic theory[END_REF][START_REF] Domar | Capital expansion, rate of growth, and employment[END_REF]. However, the question remains: how big are these tax effects on economic development?

Influence of economic growth on taxation

As real per capita GNP increases, people demand relatively more public goods and relatively less private goods [START_REF] Wagner | Three Extracts on Public Finance[END_REF]. As a result, a country's tax ratio increases with economic growth [START_REF] Chelliah | Tax Ratios and Tax Effort in Developing Countries[END_REF]Tanzi, 1987).
Indeed, taxation capacity is closely associated with administrative capacity, which is likely to improve with economic development [START_REF] Burgess | Taxation and Development[END_REF][START_REF] Co %3b2-L | Fiscal policy and economic growth : Lessons for Eastern Europe and Central Asia[END_REF][START_REF] Todaro | Economic Development[END_REF]. On the other hand, tax systems tend to have "integrated" mechanisms for increasing tax revenues as the incomes of economic agents rise with economic activity [START_REF] Musgrave | Public finance in theory and practice[END_REF]. The level of real income per capita is therefore an important factor in a country's fiscal potential [START_REF] Todaro | Economic Development[END_REF]. The explanation is that people with higher incomes theoretically pay a higher percentage of that income in taxes, whereas it is too costly administratively and economically regressive to try to collect substantial taxes from the poor [START_REF] Todaro | Economic Development[END_REF]. As a result, developed countries collect a much higher percentage of GDP in tax revenues than developing countries [START_REF] Todaro | Economic Development[END_REF].

Tax neutrality and economic growth

Solow's conventional growth model assumes that productivity growth is fixed and not affected by tax policy. This paradoxical result is also due to a distinction between changes in the level of GDP and changes in GDP growth rates. In the Solow model, growth in investment and labour supply returns to their initial rates determined by long-term population growth. In other words, Solow's simple model implies that fiscal policy, even if distortive, does not have an impact on long-term economic growth rates, even if it reduces the level of long-term economic output. Thus a revenue-neutral change that would eliminate all taxes on capital income while increasing taxes on labour income would increase growth rates only negligibly [START_REF] Lucas | Supply-Side Economics: An Analytical Review[END_REF].

Empirical interaction between taxes and economic growth

Empirically, there are three types of interaction between taxation and economic growth: one-way, two-way, and no interaction.

One-way interaction between taxation and economic growth

The authors who identified a one-way relationship between economic growth and taxes fall into two categories. There are those who think that taxation affects economic growth and those for whom it is exactly the opposite.

Taxation positively affects economic growth

Based on an ARDL model of data from Iraq covering the period 2005-2019, [START_REF] Köse | Effects of Monetary and Fiscal Policies on Economic Growth: An Empirical Study On The Iraqi Economy[END_REF] found that the effects of government revenues on growth were positive. Similarly, [START_REF] Moh | The effect of tax revenues on GDP growth in Jordan[END_REF] demonstrated that there is a positive impact of tax revenues on growth in Jordan over the period 2000-2018. They relied on the VAR model and the Granger causality test; the ordinary least squares estimation technique was used. Using a Scully model (1996, 2000) and a quadratic model, Amedanou (2020) identified an inverted-U relationship between economic growth and tax burden in West African countries over the period 1980-2017.
Pooled Mean Group (PMG) and Mean Group (MG) panel estimation techniques were used to arrive at the conclusion that rising tax levels are accompanied by improved living conditions in countries. But in some countries, such as Burkina Faso and Guinea-Bissau, greater mobilization of tax revenues as a percentage of GDP has led to lower levels of well-being. Using the time-domain and frequency-domain panel causality test on annual data from the G7 countries (Canada, France, Germany, Italy, Japan, the United Kingdom and the United States) for the period 1980-2016 allowed [START_REF] Gurdal | The relationship between tax revenue, government expenditure, and economic growth in G7 countries: new evidence from time and frequency domain approaches[END_REF] to demonstrate that there is a long-term unidirectional relationship running from tax revenues to economic growth in Japan. Using a seemingly unrelated regression model (SURE) on data from 2000 to 2015, [START_REF] Oboh | Tax Revenue and Economic Growth in Selected ECOWAS Countries, Evidence from Sure Model[END_REF] showed that a 1% increase in indirect taxes stimulates economic growth by about 47.7% in Ghana, Sierra Leone, Benin, Burkina Faso and Nigeria. In Nigeria, the effect is positive but negligible. Similarly, the taxation of corporate profits positively influences the economy when the one-period-lagged tax rate is less than 12.7% in six (6) WAEMU countries for the period 1970-2016 [START_REF] Maxime | Effets de la Politique Fiscale sur la Croissance Mendoza[END_REF]. The authors used a panel smooth transition regression (PSTR) model based on the composition of the tax structure to achieve this result. Comlan (2017) also found that tax revenues have a positive effect on the growth of WAEMU economies over the period 1980-2014. The author used a PCSE (panel-corrected standard errors) econometric approach. Empirical results from the generalized least squares estimate indicate that tax revenues are positively and statistically related to GDP in Africa from 2004 to 2013 [START_REF] Babatunde | Taxation revenue and economic growth in Africa[END_REF]. N'Yilimon (2014) notes, in disagreement with Arthur Laffer's curve, that there is no non-linear relationship between taxation and economic growth in WAEMU countries. Thus, the absence of a non-linear relationship suggests that both high and low levels of fiscal performance are conducive to per capita economic growth.

Taxation negatively affects economic growth

Unlike previous authors who have identified a positive effect, other authors argue that taxation hinders economic growth ([START_REF] Hakim | Direct Versus Indirect Taxes: Impact on Economic Growth and Total Tax Revenue[END_REF][START_REF] Oyinlola | Governance, domestic resource mobilization, and inclusive growth in sub-Saharan Africa[END_REF][START_REF] Ozpence | The relationship between tax burden and economic growth: Turkey case[END_REF][START_REF] Baiardi | Tax policy and economic growth: does it really matter?[END_REF][START_REF] Atems | Another look at tax policy and state economic growth:The long-run and short-run of it[END_REF], etc.). [START_REF] Hakim | Direct Versus Indirect Taxes: Impact on Economic Growth and Total Tax Revenue[END_REF] found that an increase in direct taxes leads to a decrease in GDP growth in 51 countries over the period 1992-2016. The dynamic panel Generalized Method of Moments (GMM) was used.
Using the Generalized Method of Moments (GMM) on data from 27 sub-Saharan African countries covering the period 1999-2015, [START_REF] Oyinlola | Governance, domestic resource mobilization, and inclusive growth in sub-Saharan Africa[END_REF] indicated that indirect tax coefficients are mainly negative on growth while direct tax coefficients are all positive but not significant. Based on impulse-response analysis and variance decomposition, [START_REF] Ozpence | The relationship between tax burden and economic growth: Turkey case[END_REF] found that the tax burden negatively affected economic growth over the period 1970-2018 in Turkey. [START_REF] Oboh | Tax Revenue and Economic Growth in Selected ECOWAS Countries, Evidence from Sure Model[END_REF] showed that a 1% increase in direct tax decreases economic growth by 3.08% in five states of the Economic Community of West African States (ECOWAS). But this tax has not been productive in Ghana, Sierra Leone, Benin and Burkina Faso. Using the pooled average estimator (GMG) on data from 23 OECD countries over the period 1971-2014, [START_REF] Baiardi | Tax policy and economic growth: does it really matter?[END_REF] showed that there is a negative and significant long-term correlation between tax burden and GDP per capita. A positive long-term correlation with GDP per capita is confirmed when the tax mix shifts from consumption taxes to property taxes. [START_REF] Atems | Another look at tax policy and state economic growth:The long-run and short-run of it[END_REF] shows that property and sales taxes have reduced economic growth in the short and long term. The pooled mean group (PMG) technique was used on data for 48 states of the United States of America for the period 1967-2008.

Economic growth impacts taxation

While previous work has focused on the effects of taxation on economic growth, some authors have focused on the reverse effect. Thus, [START_REF] Gurdal | The relationship between tax revenue, government expenditure, and economic growth in G7 countries: new evidence from time and frequency domain approaches[END_REF] (2018) reached the conclusion that per capita income positively affects tax revenues while corruption negatively affects tax revenues. Official development assistance and inflation have no significant effect on tax revenues. Using a panel vector error-correction model (PVEC) on WAEMU data for the period 1980-2016, [START_REF] Kane | Spillover Effects of Budgetary Policies in Monetary union: The Case of West African Economic and Monetary Union[END_REF] showed that a shock to GDP per capita in a Union country significantly influences the variance of tax revenues among its neighbours.

Two-way interaction between taxation and economic growth

Researchers such as [START_REF] Arvin | Are there links between institutional quality, government expenditure, tax revenue and economic growth? Evidence from low-income and lower middleincome countries[END_REF], [START_REF] Gurdal | The relationship between tax revenue, government expenditure, and economic growth in G7 countries: new evidence from time and frequency domain approaches[END_REF], [START_REF] Baiardi | Tax policy and economic growth: does it really matter?[END_REF] and [START_REF] Kane | Spillover Effects of Budgetary Policies in Monetary union: The Case of West African Economic and Monetary Union[END_REF] are among those who have highlighted a two-way relationship between taxation and growth.
[START_REF] Arvin | Are there links between institutional quality, government expenditure, tax revenue and economic growth? Evidence from low-income and lower middleincome countries[END_REF] showed that in the short term, economic growth has a significant impact on tax revenues and vice versa. They used a panel vector error-correction model (P-VECM) for 51 countries over the period 2005-2019. [START_REF] Gurdal | The relationship between tax revenue, government expenditure, and economic growth in G7 countries: new evidence from time and frequency domain approaches[END_REF] report two-way causality between tax revenues and economic growth for the period 1980-2016 when all G7 countries are tested together. [START_REF] Baiardi | Tax policy and economic growth: does it really matter?[END_REF] confirmed that a positive and significant long-term correlation between tax burden and per capita GDP emerges with a shift from consumption taxes to property taxes.

Lack of interaction between taxation and economic growth

The lack of a relationship between the two variables was found in the work of [START_REF] Baiardi | Tax policy and economic growth: does it really matter?[END_REF][START_REF] Atems | Another look at tax policy and state economic growth:The long-run and short-run of it[END_REF][START_REF] Kalaš | The relationship between taxes and economic growth: evidence from serbia and croatia[END_REF], etc. By expanding the data set to 34 OECD countries, [START_REF] Baiardi | Tax policy and economic growth: does it really matter?[END_REF] concluded that the tax burden is not significantly associated with long-term economic growth for the period 1995-2014.
[START_REF] Atems | Another look at tax policy and state economic growth:The long-run and short-run of it[END_REF] shows that income taxes had no impact on economic growth in either the short or the long term in 48 states of the United States of America for the period 1967-2008. Applying a standard panel data model, [START_REF] Kalaš | The relationship between taxes and economic growth: evidence from serbia and croatia[END_REF] also confirmed that there is no significant relationship between taxes (namely corporate income tax, value added tax, social security contributions and excise taxes) and gross domestic product in Serbia and Croatia over the period 2007-2016.

Research methodology

Theoretical models

Theoretical models of causality

Let $X_t$ and $Y_t$ be two stationary time series with zero means. [START_REF] Granger | Investigating causal relations by econometric models and cross-spectral methods[END_REF] simple causal model is:

$$X_t = \sum_{j=1}^{m} a_j X_{t-j} + \sum_{j=1}^{m} b_j Y_{t-j} + \varepsilon_t, \qquad Y_t = \sum_{j=1}^{m} c_j X_{t-j} + \sum_{j=1}^{m} d_j Y_{t-j} + \eta_t \qquad (1)$$

where $\varepsilon_t$ and $\eta_t$ are considered as two uncorrelated white noises, i.e., $E[\varepsilon_t \varepsilon_s] = E[\eta_t \eta_s] = 0$ for $s \neq t$ and $E[\varepsilon_t \eta_s] = 0$ for all $t, s$. In model (1), $m$ may be infinite, but in practice, of course, due to the finite length of the available data, $m$ will be considered finite and shorter than the given time series. The above definition of causality implies that $Y_t$ causes $X_t$ provided that some $b_j$ are not null. Similarly, $X_t$ causes $Y_t$ if some $c_j$ are not null. If these two events occur, there would be a feedback relationship between $X_t$ and $Y_t$. This new definition of causality is in fact identical to the one previously introduced. The more general model with instantaneous causality is

$$X_t = \sum_{j=1}^{m} a_j X_{t-j} + \sum_{j=0}^{m} b_j Y_{t-j} + \varepsilon_t, \qquad Y_t = \sum_{j=0}^{m} c_j X_{t-j} + \sum_{j=1}^{m} d_j Y_{t-j} + \eta_t \qquad (2)$$

In model (2), instantaneous causality is allowed [START_REF] Granger | Investigating causal relations by econometric models and cross-spectral methods[END_REF].

Theoretical model of the interaction between taxation and economic growth

Aigner et al. (1977) show that tax revenues (T) are a function of GDP and other variables. Thus,

$$T = f(GDP, X_1, \dots, X_k) + U \qquad (3)$$

where the $X_i$ represent the various variables likely to influence tax revenues and $U$ denotes the error term. Conversely, taxation acts on economic wealth, as evidenced by the macroeconomic model of [START_REF] Harrod | An essay in dynamic theory[END_REF][START_REF] Domar | Capital expansion, rate of growth, and employment[END_REF]. Thus,

$$\omega = g(s, A, \lambda, t) \qquad (4)$$

where $\omega$ refers to the long-term economic growth rate, $s$ to the savings rate, $A$ to productivity, $\lambda$ to the share of public investment in public expenditure and $t$ to the tax rate. Keeping all other variables constant, the complete system of interaction between growth and taxation is formalized as follows:

$$\begin{cases} T = f(GDP, X_1, \dots, X_k) + U \\ \omega = g(s, A, \lambda, t) \end{cases} \qquad (5)$$

Empirical Models

Based on theoretical models (1) and (5), we write our empirical model in the form of an equation system.

Interaction model between economic growth and total taxes

$$\ln GDP_{it} = \alpha_i + \beta_i \ln Tax_{i,t-1} + \delta_i \ln PopAct_{it} + \varepsilon_{1it} \qquad (1)$$

and

$$\ln Tax_{it} = \alpha_i + \gamma_i \ln GDP_{it} + \delta_i \ln PopAct_{it} + \varepsilon_{2it} \qquad (2)$$

Compared to (1), this substitution specification has two distinct characteristics. First, each equation in (1) and also in (2) has different predetermined variables. The only possible link between individual regressions is the simultaneous correlation within systems. Therefore, these sets of equations are not VAR systems, but a Seemingly Unrelated Regression Equations (SURE) model. In the systems of equations (1) and (2), GDP indicates GDP per capita at constant 2015 prices, Tax indicates nominal tax revenues, PopAct indicates the labour force (population aged 15 to 64) and is the control variable, N is the number of panel units, t is the period (t = 1,…,T), i indexes the panel units, and k is the selected offset time (lag) in the system. The common coefficient is α, the slopes are β, δ and γ, while ε is the error term; ln refers to the natural logarithm. This allows us to obtain consistent, reliable and easily interpretable empirical results (Shahbaz et al., 2016). A maximum lag is also allowed between variables and between equations. In this document, the system is estimated for each possible pair of lm1, ln1, lm2, ln2, lk1 and lk2, and it is assumed that there is a single lag, reflecting the fact that taxation in a year (n) is based on the profits of the previous year (n-1). Our present work falls under instantaneous causality as defined above.
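As a minimal illustration of how such a two-equation system could be estimated by GLS, the sketch below uses the linearmodels Python package; the file name and column names (waemu.csv, gdp_pc, tax, pop_act) are illustrative assumptions, not the authors' actual dataset.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from linearmodels.system import SUR

# Illustrative data: one country's annual series, 1980-2020 (assumed layout).
df = pd.read_csv("waemu.csv")
df["l_gdp"] = np.log(df["gdp_pc"])        # ln GDP per capita, constant 2015 prices
df["l_tax"] = np.log(df["tax"])           # ln nominal tax revenues
df["l_pop"] = np.log(df["pop_act"])       # ln labour force aged 15-64 (control)
df["l_tax_lag1"] = df["l_tax"].shift(1)   # single lag: year-n taxes reflect year n-1
df = df.dropna()

# Each equation has its own predetermined regressors; the equations are linked
# only through the contemporaneous correlation of the errors (the SURE setting).
equations = {
    "growth": {"dependent": df["l_gdp"],
               "exog": sm.add_constant(df[["l_tax_lag1", "l_pop"]])},
    "tax":    {"dependent": df["l_tax"],
               "exog": sm.add_constant(df[["l_gdp", "l_pop"]])},
}
res = SUR(equations).fit(method="gls")
print(res)  # log-log slopes read directly as elasticities

Because the slopes are heterogeneous across countries, running this sketch country by country, rather than pooling, mirrors the estimation strategy adopted below.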
Interaction model between economic growth and direct taxes

$$\ln GDP_{it} = \alpha_i + \beta_i \ln Tax^{D}_{i,t-1} + \delta_i \ln PopAct_{it} + \varepsilon_{1it} \qquad (1)$$

and

$$\ln Tax^{D}_{it} = \alpha_i + \gamma_i \ln GDP_{it} + \delta_i \ln PopAct_{it} + \varepsilon_{2it} \qquad (2)$$

where $Tax^{D}$ refers to direct taxes.

Interaction model between economic growth and indirect taxes

$$\ln GDP_{it} = \alpha_i + \beta_i \ln Tax^{I}_{i,t-1} + \delta_i \ln PopAct_{it} + \varepsilon_{1it} \qquad (1)$$

and

$$\ln Tax^{I}_{it} = \alpha_i + \gamma_i \ln GDP_{it} + \delta_i \ln PopAct_{it} + \varepsilon_{2it} \qquad (2)$$

where $Tax^{I}$ refers to indirect taxes.

Variable definition and data source

Results and Discussions

The heterogeneity of the slopes obliges us to estimate country by country. Since the R-squared values are high, we can say that the overall significance of the models is good.

Interaction between economic growth and total tax revenues in WAEMU

The interaction between GDP and total tax revenues is presented in Table 5. A lack of significant interaction between the two variables is noted in Benin, Niger, Senegal and Guinea-Bissau. This finding contradicts the theories of researchers such as [START_REF] Wagner | Three Extracts on Public Finance[END_REF]; [START_REF] Musgrave | Public finance in theory and practice[END_REF]; [START_REF] Todaro | Economic Development[END_REF]. This is due to the importance of the informal economy, fraud and tax corruption. These scourges shield certain incomes from taxation. The financing of unproductive public expenditure by tax revenues may also justify this result. Another problem is that taxes are not necessarily allocated to expenditures that are conducive to economic growth, either because of political "inefficiencies" or because of redistribution policies [START_REF] Atkinson | The Welfare State and Economic Performance[END_REF]. Our results are consistent with those of researchers such as AL-Tamimia and Bataineha (2021); Abdoulaye (2018); [START_REF] Baiardi | Tax policy and economic growth: does it really matter?[END_REF] and [START_REF] N'yilimon | Taxation and Economic Growth : An Empirical Analysis on Dynamic Panel Data of WAEMU Countries[END_REF], who also highlighted the non-significant impact of economic development on tax revenues. In Burkina Faso and Côte d'Ivoire, the causality between economic growth and tax revenues is unidirectional, running from tax revenues to economic growth. Thus, a 1% tax increase leads to economic growth of 0.14% and 0.07% respectively in Burkina Faso and Côte d'Ivoire, which contradicts our original hypothesis. Fiscal resources are therefore spent wisely in these countries, for example on the improvement of human capital and the necessary investments in infrastructure [START_REF] Todaro | Economic Development[END_REF]. This supports the thesis of Ihori (2017), [START_REF] Pigou | The Economics of Welfare[END_REF][START_REF] Co %3b2-L | Fiscal policy and economic growth : Lessons for Eastern Europe and Central Asia[END_REF] and [START_REF] Barro | Economic Growth[END_REF], who argued that an increase in the tax rate affects public investment and positively affects the growth rate. [START_REF] Pigou | The Economics of Welfare[END_REF], [START_REF] Zagler | Fiscal policy and economic growth[END_REF], and [START_REF] Musgrave | The theory of public finance: a study in public economy[END_REF] achieved the same results. In the long term, however, about two-thirds of the effect of tax changes on economic development passes through productivity changes [START_REF] Niskanen | Reflections of a political economist : selected articles on government policies and political processes Cato Institute[END_REF].
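Because both equations are estimated in logarithms, these coefficients read directly as elasticities; as a worked illustration with the Burkina Faso estimate reported above:

$$\hat{\beta}_{BF} = \frac{\partial \ln GDP}{\partial \ln Tax} = 0.14 \quad\Rightarrow\quad \%\Delta GDP \approx 0.14 \times \%\Delta Tax$$

so a 10% rise in Burkinabe tax revenues is associated with roughly a 1.4% rise in GDP per capita, holding the labour force constant.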
[START_REF] Pigou | The Economics of Welfare[END_REF], [START_REF] Zagler | Fiscal policy and economic growth[END_REF], and [START_REF] Musgrave | The theory of public finance: a study in public economy[END_REF] reached the same results. In the long term, however, about two-thirds of the effect of tax changes on economic development comes through productivity changes [START_REF] Niskanen | Reflections of a political economist : selected articles on government policies and political processes Cato Institute[END_REF]. In Mali and Togo, feedback is observed between the two variables: a change in GDP in these countries causes a change in tax revenues, which in turn influences GDP. More specifically, an increase in tax revenues of 1% improves economic wealth by 0.138% and 0.154% in Mali and Togo respectively. In return, a 1% increase in wealth causes an increase in taxes of 1.134% and 1.346% in Mali and Togo respectively. Thus, the causal effect of economic growth on taxes is greater than the inverse effect. Indeed, the combined effect of tax distortion and beneficial public spending can lead to a net improvement in the functioning of the private sector economy [START_REF] Barro | Government spending in a simple model of endogenous growth[END_REF] (Barro, 1991a,b). An increase in taxes affects public investment and positively affects the growth rate [START_REF] Ihori | Principles of Public Finance[END_REF][START_REF] Pigou | The Economics of Welfare[END_REF][START_REF] Co %3b2-L | Fiscal policy and economic growth : Lessons for Eastern Europe and Central Asia[END_REF][START_REF] Barro | Economic Growth[END_REF][START_REF] Pigou | The Economics of Welfare[END_REF][START_REF] Zagler | Fiscal policy and economic growth[END_REF][START_REF] Musgrave | The theory of public finance: a study in public economy[END_REF]. The action of GDP on tax revenues is also consistent with the theories developed by [START_REF] Wagner | Three Extracts on Public Finance[END_REF], [START_REF] Musgrave | Public finance in theory and practice[END_REF] and [START_REF] Todaro | Economic Development[END_REF]. Empirically, our results are similar to those found by authors such as [START_REF] Arvin | Are there links between institutional quality, government expenditure, tax revenue and economic growth? Evidence from low-income and lower middleincome countries[END_REF]; [START_REF] Köse | Effects of Monetary and Fiscal Policies on Economic Growth: An Empirical Study On The Iraqi Economy[END_REF]; Tamimia and Bataineha (2021); [START_REF] Gurdal | The relationship between tax revenue, government expenditure, and economic growth in G7 countries: new evidence from time and frequency domain approaches[END_REF]; [START_REF] Maxime | Effets de la Politique Fiscale sur la Croissance Mendoza[END_REF], who also demonstrated a positive impact of tax revenues on economic growth.

Interaction between economic growth and direct taxes in WAEMU

The results of the estimation of the interaction between GDP and direct taxes are presented in Table 6 below. They show that there is no significant interaction between economic wealth and taxes in Benin, Burkina Faso and Guinea-Bissau. This finding confirms the theory of Mendoza, Milesi-Ferretti and Asea (1996) that income taxes are more harmful to growth than general consumption taxes. In contrast, a significant unidirectional relationship between GDP and direct taxes, running from direct taxes to GDP, is noted in Niger: a 1% increase in direct taxes leads to economic growth of 0.049%.
Feedback between the two variables is noted in Côte d'Ivoire, Mali, Senegal and Togo. When direct taxes rise by 1%, GDP improves by 0.048%, 0.073%, 0.094% and 1.152% in Côte d'Ivoire, Mali, Senegal and Togo respectively. Conversely, economic growth of one percentage point leads to an increase in direct taxes of 1.182%, 1.116%, 0.829% and 2.120% respectively. As with total tax revenues, the effect of economic development on direct taxes is greater than the reverse effect.

Interaction between economic growth and indirect taxes in WAEMU

Table 7 presents the results of the interaction between economic growth and indirect taxes. The analysis of the results reveals essentially three groups of countries: countries in which there is no significant interaction between economic growth and indirect taxes, countries for which a unidirectional relationship running from indirect taxes to economic growth is noted, and countries where there is feedback between the two variables. No interaction is proven between economic growth and indirect taxes in Benin, Niger and Senegal. In Guinea-Bissau, indirect taxes have an impact on economic development, but not vice versa: a 1% increase in indirect taxes generates economic growth of around 0.04%. Finally, significant feedback is identified between economic growth and indirect taxes in Burkina Faso, Côte d'Ivoire, Mali and Togo. In Burkina Faso, a 1% increase in indirect taxes causes economic growth of 0.135%, while 1% economic growth causes an increase in indirect taxes of 0.961%. In Côte d'Ivoire, a 1% increase in indirect taxes raises economic growth by 0.066%, while 1% economic growth leads to a 1.148% increase in indirect taxes. In Mali, the corresponding figures are 0.138% against 1.407%, and in Togo 0.124% against 0.948%.

Conclusion

The interaction between economic growth and taxation in the WAEMU countries was analysed using the Generalized Least Squares (GLS) estimator on data for the period 1980-2020 through the Seemingly Unrelated Regression Equations Model (SURE). Total taxes have been broken down into three levels: the overall level, the level of direct taxes and the level of indirect taxes. Three SURE models have been estimated. In terms of overall taxes, three groups of countries stand out. There are countries where there is no significant interaction between the overall level of taxes and the level of economic growth (Benin, Niger, Senegal and Guinea-Bissau). In some countries the interaction between the two variables is unidirectional, running from taxes to economic growth (Burkina Faso and Côte d'Ivoire). Finally, the existence of feedback between the two variables is noted in Mali and Togo. In terms of direct taxes, feedback is identified in Côte d'Ivoire, Mali, Senegal and Togo, while unidirectional causality is found in Niger. In Benin, Burkina Faso and Guinea-Bissau, no interaction between the two variables is revealed.
In terms of indirect taxes, the lack of interaction is observed in Benin, Niger and Senegal, while feedback between the two variables is observed in Burkina Faso, Côte d'Ivoire, Mali and Togo. In Guinea-Bissau, indirect taxes act on growth and not the other way around. In light of these results, it is necessary to strengthen the WAEMU Community Directives on direct and indirect taxation. This will allow the different countries to further improve their tax systems in order to boost their economic growth. Countries like Mali and Togo have an interest in improving their tax performance in order to boost economic growth, which in turn will generate tax revenues. To do this, civic-mindedness and tax compliance actions must be strengthened. Senegal and Niger need to focus more on mobilizing direct taxes, while Burkina Faso and Guinea-Bissau need to focus on indirect taxes. On the other hand, Côte d'Ivoire, Mali and Togo must prioritize both direct and indirect taxes to better support economic growth. [START_REF] North | Institutions, Institutional Change, and Economic Performance[END_REF], [START_REF] Acemoglu | The Role of Institutions in Growth and Development[END_REF][START_REF] Acemoglu | The Role of Institutions in Growth and Development[END_REF] and [START_REF] Hayek | The Use of Knowledge in Society[END_REF] emphasized, across a wide range of economic theories, the importance of the role played by institutions (protection of property rights, political power, free competition) in economic growth. It is therefore necessary to have institutions capable of combating tax evasion, promoting the formalisation of economic activities and guaranteeing competition on the markets. Thus, strengthening institutions is an imperative for tax compliance, for the fight against fraud and corruption, and for compliance with the rules of a competitive market. The results show that the tax priority differs from country to country depending on the type of tax. This calls into question the common policy of macroeconomic convergence within the WAEMU. A coordinated global tax system, designed with the full participation of developing countries, could be an effective tool for tax mobilization. With this type of tax system in place, innovation in the global distribution of corporate income could be promising to support economic sustainability strategies in developing countries. We propose to pursue a regulatory corporate income tax objective by linking the overall effective corporate tax rate to corporate performance based on factors such as profitability, employment, social and environmental sustainability, and "wealth redistribution" in a community. The implementation of this proposal would allow low-income countries to structure their tax systems in a way that pursues their development objectives (vocational training, environmental sustainability, job creation), attract investment and start building the kind of social and technological infrastructure that would strengthen their economies. Further empirical work highlighted a long-term unilateral relationship between tax revenues and economic growth, running from economic growth to tax revenues, in the UK and Italy; in the United States, this relationship is short-term. Based on an ARDL model applied to Pakistani data from 1976 to 2019, Hassan et al. (2021) showed that increased industrial activity would increase direct and indirect tax revenues.
Based on a stochastic frontier model, Kobyagda (2019) showed that GDP per capita has a positive and significant influence on the tax burden in the WAEMU area; his research used data from 1990-2017. Based on the estimation technique of a fixed-effect model with heteroscedasticity correction and instrumental variables for WAEMU countries over the period 1996-2015, Abdoulaye D. (

Table 1: Variables and Data Sources

  Variable                                   Notation   Source
  GDP per capita at constant 2015 prices     PIBh       World Bank (WDI)
  Total tax revenues                         Tax        UNU-WIDER
  Direct taxes                               DirTax     UNU-WIDER
  Indirect taxes                             IndTax     UNU-WIDER
  Labour force (pop. aged 15-65)             PopAct     World Bank (WDI)

These data cover the period 1980-2020.

Table 5: Estimation results of the interaction between total taxes and growth in the WAEMU countries

               (Bénin)              (Niger)              (Sénégal)            (Guinée-Bissau)
VARIABLES      lnPIBh     lnTax     lnPIBh     lnTax     lnPIBh     lnTax     lnPIBh     lnTax
L.lnPIBh       0.840***             0.821***             0.902***             0.740***
               (0.0981)             (0.0697)             (0.0687)             (0.105)
lnTax          0.000663             0.0388               0.0406               0.0403
               (0.0158)             (0.0254)             (0.0430)             (0.0255)
lnPopAct       0.0632     0.346     -0.0674    0.146     -0.0671    0.992***  -0.159*    1.774***

Table 6: Estimation results of the interaction between GDP and direct taxes in the WAEMU countries

               (Bénin)                (Burkina Faso)         (Guinée-Bissau)        (Niger)
VARIABLES      lnPIBh     lnDirTax    lnPIBh     lnDirTax    lnPIBh     lnDirTax    lnPIBh     lnDirTax
L.lnPIBh       0.812***               0.712***               0.720***               0.763***
               (0.0987)               (0.101)                (0.110)                (0.0704)
lnDirTax       -0.0196                0.0290                 0.00498                0.0489***
               (0.0193)               (0.0321)               (0.0208)               (0.0189)
lnPopAct       0.125**    0.681*      0.127      1.420***    -0.0543    1.275***    -0.0995*   0.355
               (0.0582)   (0.367)     (0.133)    (0.528)     (0.101)    (0.471)     (0.0537)   (0.259)
lnPIBh                    0.224                  0.202                  -0.627                 -0.100
                          (0.562)                (0.427)                (0.572)                (0.407)
L.lnDirTax                0.697***               0.562***               0.753***               0.891***
                          (0.109)                (0.132)                (0.0997)               (0.0985)
Constant       -0.380     -8.457**    -0.510     -18.55***   2.487**    -10.93*     2.483**    -3.707
               (0.616)    (3.344)     (1.512)    (6.022)     (1.165)    (6.272)     (1.009)    (5.274)
Observations   40         40          40         40          40         40          40         40
R-squared      0.960      0.977       0.991      0.993       0.523      0.955       0.891      0.975
Standard errors in parentheses; *** p<0.01, ** p<0.05, * p<0.1

               (Côte d'Ivoire)        (Mali)                 (Sénégal)              (Togo)
VARIABLES      lnPIBh     lnDirTax    lnPIBh     lnDirTax    lnPIBh     lnDirTax    lnPIBh     lnDirTax

Table 7: Estimation result of the interaction between economic growth and indirect taxes in WAEMU

               (Bénin)                (Niger)                (Sénégal)              (Guiné-Bissau)
VARIABLES      lnPIBh     lnIndTax    lnPIBh     lnIndTax    lnPIBh     lnIndTax    lnPIBh     lnIndTax
L.lnPIBh       0.797***               0.841***               0.926***               0.730***
               (0.102)                (0.0667)               (0.0643)               (0.104)
lnIndTax       0.0156                 0.0302                 0.0116                 0.0404*
               (0.0159)               (0.0251)               (0.0379)               (0.0233)
lnPopAct       0.0299     0.403       -0.0441    0.304       0.00506    0.304       -0.171**   2.002***
               (0.0489)   (0.268)     (0.0663)   (0.187)     (0.102)    (0.187)     (0.0853)   (0.523)
lnPIBh                    0.545                  -0.0799                -0.0799                -0.142
                          (0.551)                (0.218)                (0.218)                (0.682)
L.lnIndTax                0.815***               0.898***               0.898***               0.425***
                          (0.0856)               (0.0745)               (0.0745)               (0.141)
Constant       0.759      -7.514**    1.306      -2.996      0.296      -2.996      3.640***   -20.42***
               (0.689)    (3.643)     (1.065)    (3.156)     (1.111)    (3.156)     (1.030)    (7.534)
Observations   40         40          40         40          40         40          40         40
R-squared      0.962      0.986       0.877      0.987       0.928      0.987       0.565      0.889
Standard errors in parentheses; *** p<0.01, ** p<0.05, * p<0.1

               (Côte d'Ivoire)        (Burkina Faso)         (Mali)                 (Togo)
VARIABLES      lnPIBh     lnIndTax    lnPIBh     lnIndTax    lnPIBh     lnIndTax    lnPIBh     lnIndTax
L.lnPIBh       0.904***               0.674***               0.470***               0.636***
               (0.0433)               (0.0835)               (0.121)                (0.0749)
lnIndTax       0.0660***              0.134***               0.138***               0.124***
               (0.0161)               (0.0264)               (0.0333)               (0.0271)
lnPopAct       -0.0846**  1.908***    -0.173*    1.316***    -0.168     0.613       -0.256***  1.046***
               (0.0405)   (0.376)     (0.105)    (0.480)     (0.116)    (0.384)     (0.0694)   (0.267)
lnPIBh                    1.448***               0.961*                 1.407***               0.948***
                          (0.407)                (0.510)                (0.367)                (0.331)
L.lnIndTax                0.152                  0.362**                0.582***               0.575***
                          (0.159)                (0.165)                (0.115)                (0.112)
Constant       1.164      -29.75***   3.114***   -18.76***   4.350***   -13.44***   4.645***   -16.47***
               (0.728)    (6.646)     (1.116)    (5.555)     (1.399)    (4.818)     (1.066)    (4.329)
Observations   40         40          40         40          40         40          40         40
R-squared      0.962      0.884       0.994      0.991       0.963      0.989       0.846      0.975
Standard errors in parentheses; *** p<0.01, ** p<0.05, * p<0.1
04116533
en
[ "info.info-cr" ]
2024/03/04 16:41:26
2023
https://hal.science/tel-04116533/file/PhDThesis-1.pdf
The Internet of Things (IoT) is one of the most important technologies in our current world. It is composed of connected devices with sensors and processing abilities, all connected to a single platform that orchestrates them. The integration of these IoT devices into many real-life applications (e.g., transportation, healthcare, industry, ...) has brought significant performance and efficiency improvements. As a consequence, we have seen a boom in the number of deployed IoT devices and their corresponding platforms. These IoT devices use real data from their deployment environment and send them to the platform. The data collected by these devices are often sensitive information. Hence, the privacy of users' data is one of the important concerns in IoT. Moreover, IoT applications rely on automating frequent tasks to achieve better efficiency. Unfortunately, moving control of usually human-controlled operations to the IoT risks the safety of IoT users. This thesis deals with the privacy and safety concerns raised by IoT. We propose security protocols that preserve the privacy of the users' data. In addition to privacy, we aim to design verifiable solutions that guarantee the correctness of the computations performed by the IoT devices and the platform. We design these solutions while focusing on their performance specifically for IoT. More precisely, we propose protocols that are scalable to cope with the increasing number of IoT devices. We also consider protocols that are fault-tolerant to cope with the mobility and frequent dropouts of IoT devices. We focus on two security protocols: Secure Aggregation and Remote Attestation. Secure aggregation is a protocol where an aggregator computes the sum of the private inputs of a set of users. In this thesis, we propose the first verifiable secure aggregation protocol (VSA) that gives formal guarantees of security in the malicious model. Our solution preserves the privacy of all users' inputs and the correctness of the aggregation result. Moreover, we propose a novel fault-tolerant secure aggregation protocol (FTSA) based on additively-homomorphic encryption. The scheme allows users in secure aggregation to drop from the protocol and offers a mechanism to recover the aggregate without affecting the privacy of the data. We show that FTSA outperforms the state-of-the-art solutions in terms of scalability with respect to the number of users. On the other hand, a remote attestation protocol is a protocol that allows an IoT device (acting as a prover) to prove its software integrity to the IoT platform (acting as the verifier). We propose a new collaborative remote attestation protocol (FADIA) in which devices collect attestations from each other and aggregate them. FADIA deals with the heterogeneity and dynamic nature of IoT by considering fairness in its design. The evaluation of FADIA shows an increase in the lifetime of a network.

Lastly, I dedicate this work to the loving memory of my father, whose soul departed one week before my PhD defense day. His unwavering support, encouragement, and belief in my abilities were a constant source of inspiration. He dreamed of this moment and was my biggest advocate throughout my academic journey. Though he couldn't witness the culmination of my efforts, his presence will forever remain in my heart. To the members of my thesis committee, collaborators, colleagues, my family, and the soul of my father, I offer my deepest appreciation.
Abrégé

L'internet des objets (IoT) est l'une des technologies les plus importantes de notre monde actuel. Il est composé d'appareils connectés dotés de capteurs et de capacités de traitement, tous reliés à une plateforme unique qui les orchestre. L'intégration de ces dispositifs IoT dans de nombreuses applications de la vie réelle (par exemple, le transport, les soins de santé, les industries, ...) a impliqué des améliorations significatives de la performance et de l'efficacité. En conséquence, nous avons assisté à un boom du nombre de dispositifs IoT déployés et de leurs plateformes correspondantes. Ces dispositifs IoT utilisent les données réelles de leur environnement de déploiement et les envoient à la plateforme. Les données collectées par ces dispositifs sont souvent des informations sensibles. Par conséquent, la confidentialité des données des utilisateurs est l'une des principales préoccupations de l'IoT. En outre, les applications IoT reposent sur l'automatisation de tâches fréquentes pour une meilleure efficacité. Malheureusement, le transfert du contrôle d'opérations habituellement contrôlées par l'homme vers l'IoT risque de compromettre la sécurité des utilisateurs de l'IoT. Cette thèse traite des problèmes de confidentialité et de sécurité soulevés par l'IoT. Nous proposons des protocoles de sécurité qui préservent la confidentialité des données des utilisateurs. En plus de la confidentialité, nous voulons concevoir des solutions vérifiables qui garantissent l'exactitude des calculs effectués par les dispositifs IoT et la plateforme. Nous concevons ces solutions en nous concentrant sur leurs performances spécifiquement pour l'IoT. Plus précisément, nous proposons des protocoles qui sont évolutifs pour faire face au nombre croissant de dispositifs IoT. Nous considérons également des protocoles tolérants aux pannes pour faire face à la mobilité et aux abandons fréquents des dispositifs IoT. Nous nous concentrons sur deux protocoles de sécurité : l'agrégation sécurisée et l'attestation à distance. L'agrégation sécurisée est un protocole où un agrégateur calcule la somme des entrées privées d'un ensemble d'utilisateurs. Dans cette thèse, nous proposons le premier protocole d'agrégation sécurisée vérifiable (VSA) qui donne des garanties formelles de sécurité dans le modèle malveillant. Notre solution préserve la confidentialité des entrées de tous les utilisateurs et l'exactitude du résultat de l'agrégation. En outre, nous proposons un nouveau protocole d'agrégation sécurisée tolérant aux pannes (FTSA) basé sur le cryptage additif-homomorphe. Le schéma permet aux utilisateurs de l'agrégation sécurisée de se retirer du protocole et offre un mécanisme pour récupérer l'agrégat sans affecter la confidentialité des données. Nous montrons que le FTSA surpasse les solutions de l'état de l'art en termes d'évolutivité par rapport au nombre d'utilisateurs. D'autre part, un protocole d'attestation à distance est un protocole qui permet à un dispositif IoT (agissant en tant que prouveur) de prouver son intégrité logicielle à la plateforme IoT (agissant en tant que vérificateur). Nous proposons un nouveau protocole collaboratif d'attestation à distance (FADIA) dans lequel les dispositifs collectent des attestations les uns des autres et les agrègent. FADIA traite de l'hétérogénéité et de la nature dynamique de l'IoT en tenant compte de l'équité dans sa conception. L'évaluation de FADIA montre une augmentation de la durée de vie d'un réseau.
I would like to begin by expressing my profound gratitude to all those who have contributed to the completion of this PhD thesis. It is with great pleasure that I acknowledge the invaluable support, guidance, and encouragement I have received from numerous individuals and organizations throughout this journey. I am immensely grateful to both of my supervisors, who have provided invaluable guidance and support throughout this research journey. Melek Önen, my academic supervisor, has been instrumental in shaping the direction of this thesis and enhancing its quality. Wafa Ben Jaballah, my industrial supervisor, has provided valuable insights and practical expertise, bridging the gap between academia and industry. I am truly fortunate to have had the opportunity to work under their mentorship. I extend my heartfelt appreciation to the members of my thesis committee, who have significantly contributed to the success of this research. Ghassan Karame, Marc Joye, Mauro Conti, Ferhat Karakoç, and Aurélien Francillon provided invaluable feedback, constructive criticism, and rigorous evaluation. I am also grateful to Katerina Mitrokotsa and Sayantan Mukherjee for their valuable insights and contributions to this research. I would like to express my sincere gratitude to the French National Association for Research (ANTS) for funding this research. Furthermore, I am deeply thankful to EURECOM and Thales for hosting this thesis and providing the necessary resources and facilities to conduct this work. EURECOM provided an inspiring academic environment that nurtured intellectual growth, while Thales offered invaluable industry insights, collaborative opportunities, and technical expertise. Within Thales, I had the privilege of being a part of the exceptional Therisis team, led by Olivier Bettan. I extend my deepest gratitude to the members of the Therisis team for their unwavering support, fruitful discussions, and shared experiences, which greatly enriched my research journey. I offer my deepest gratitude to my family for their exceptional support throughout this journey. Their love, encouragement, and belief in my abilities have been my greatest motivation. I would also like to extend a special thanks to my wife, Fatima Maali. During the long nights and demanding work, she stood by my side, providing unwavering support and understanding. I am deeply grateful for her enduring love and the sacrifices she made throughout this endeavor.

Chapter 1 Introduction

IoT and its Applications

With the recent advances in embedded systems and communication technologies, the Internet of Things (IoT) paradigm emerged and evolves rapidly. Indeed, researchers expect that the number of installed IoT devices will rise to 41.6 billion devices that will generate 79.4 ZB of data in the year 2025 [START_REF] Dignan | Iot devices to generate 79.4zb of data in 2025, says idc[END_REF]. IoT is composed of connected objects with sensors and processing abilities. The connected objects exchange data with each other and with external systems. IoT spans different life sectors such as medical care, end-consumer, industrial, and even military applications. The main benefit of IoT originates from its ability to automate daily tasks, leading to improved efficiency, productivity, and accuracy of existing systems. Many applications use IoT technologies to improve the quality of their services and to reduce their operational cost. We discuss the main applications where IoT has made a significant contribution.
Home Automation

It may involve temperature and humidity sensors, live monitoring devices, electric appliance control, etc. Smart homes [START_REF] Prospero | Best smart home hubs of[END_REF] provide a platform for end consumers to collect data from their houses and automate actions. The main benefit of smart homes is offering more life convenience for end consumers and cost savings by optimizing energy consumption.

Healthcare

The medical sector also benefits from IoT technologies to improve healthcare systems [PSJH + 20]. It allows the collection of richer medical data from patients, which is used to improve research work and studies on existing diseases [START_REF] Istepanian | The potential of internet of m-health things "m-iot" for non-invasive glucose level sensing[END_REF]. Additionally, IoT can improve the live monitoring of patients, leading to improved treatment of emergency cases.

Transportation

Automated cars are the future of the world. This promising technology strongly depends on IoT sensors and devices that are installed inside vehicles and on the roads [START_REF] Mahmud | Integration of electric vehicles and management in the internet of energy[END_REF]. These devices communicate with each other to provide autonomous driving features for passengers.

Industry 4.0

One of the main reasons behind the boom of IoT technologies is their use in industries [START_REF] Yang | The internet of things in manufacturing: Key issues and potential applications[END_REF]. Industry 4.0 involves the deployment of smart sensors and IoT platforms in the production lines. The exchange of data between the product and the production line enables a much more efficient connection of the supply chain. Thus, it leads to a reduction in production costs and an improvement in product quality.

Infrastructures

Large-scale applications that cover metropolitan projects also have a huge interest in IoT technologies. They involve smart cities [START_REF] Poon | Sleepy in songdo, korea's smartest city[END_REF], where smart sensors enable services like parking search, environmental monitoring, water distribution, and traffic management. They also involve smart grids [START_REF] Sayed | Chapter 18 -scada and smart energy grid control automation[END_REF] that enable smart monitoring of electricity consumption in a large inhabited area.

IoT Components

The IoT involves two main components: the IoT end-devices (the sensors, actuators, etc.) and the IoT platform to which these end-devices are connected. We describe the role and the properties of each of these components.

IoT End-Devices

These are the devices installed at the peripheries of the IoT network. They collect live data from the environment and perform simple actions. Examples of these devices are smart sensors, programmable logic controllers (PLCs), cameras, electricity relays, etc. Although there exist many types of IoT end-devices, they all share some common characteristics:

• Deployed in large scale: It is common in many IoT applications that the IoT end-devices are deployed in large numbers and spread across multiple geo-dispersed locations. For example, smart city applications install end-devices in different locations to track live information about the environment. This allows authorities to take better decisions that help improve the quality of life in some regions.
• Low in resources: Due to the need for a large number of devices in some applications, manufacturers have to decrease device costs to make them affordable. This comes at the cost of minimizing the resources these devices possess, such as their processing power, their data transfer speed, and their energy resources.

• Heterogeneity: The complexity of IoT applications requires deploying different types of IoT end-devices. These devices can have different properties in terms of the amount of resources they have and their software stack.

• Mobility: In many applications, the end-devices are expected to move frequently within some area. For example, in industrial applications, smart sensors may be attached to objects to track the production or business process.

IoT Platform

In addition to the IoT end-devices, IoT applications deploy an IoT platform to provide their services. The IoT platform is the brain of the IoT, which manages all the connected IoT end-devices. It involves the network architecture and the processing servers that collect data from the end-devices and analyze/process them. The IoT platform administrators control these servers. They define how the data is processed and what decisions and actions to take based on the data analysis. These IoT platforms may be deployed physically on-premises or might use cloud technologies and thus be installed as software services on the cloud.

The Dark Side of IoT

Along with the bright side of IoT comes a dark side. Security researchers raise concerns about the security of IoT devices and their impact on the privacy and safety of users [START_REF] Lo'ai Tawalbeh | Iot privacy and security: Challenges and solutions[END_REF]. We identify three types of adversaries for IoT that raise serious security problems: adversaries controlling some IoT devices, adversaries controlling the IoT platform, and external adversaries that are on the same network as the IoT devices. In this section, we present the different types of adversaries and their impact on users' safety and privacy.

Adversaries Controlling The End-Devices

It is possible to have IoT end-devices performing attacks on the IoT application. This can happen either due to users of these devices acting maliciously or due to the devices being compromised. In such cases, the adversary aims to compromise the IoT application to influence its logic or to leak private information about other users. In these attacks, the adversary controls the inputs of the compromised devices. Hence, it can influence the results and decisions. An example of such attacks is the attack on the implantable cardiac devices from St. Jude Medical (2017) [START_REF] Larson | Fda confirms that st. jude's cardiac devices can be hacked[END_REF]: the attacker compromised these implantable devices and threatened the safety of the users. Another example is the famous Stuxnet malware (2009) [START_REF] Kushner | The real story of stuxnet[END_REF]. This malware infected PLC devices at an Iranian nuclear plant and caused damage to its components while remaining undetected by reporting false information about the actual performance. Therefore, to secure the IoT, one should consider the malicious behavior of IoT end-devices and thus verify their inputs.

Adversaries Controlling The Platform

The IoT platform collects users' data for further processing. However, users cannot trust the platform, as this data often contains private information.
IoT platform owners might be interested in increasing their profit by reselling these data to malicious entities. Indeed, in 2009 the Dutch parliament rejected a smart metering program, basing its decision on privacy concerns [START_REF] Mckenna | Smart meter data: Balancing consumer privacy concerns with legitimate applications[END_REF]. The smart metering program aimed to collect live data from Dutch households about their electricity consumption; however, the Dutch parliament raised concerns about the misuse of this data by electricity companies. Therefore, to secure the IoT, one should consider the malicious behavior of the IoT platform.

External Adversaries

External attackers are connected to the IoT network. Hence, they can see the messages sent from the devices to the platform. This allows the adversary to tamper with these messages. Moreover, external attackers may exploit existing vulnerabilities on the IoT end-devices and thus take control of these end-devices. In addition, some external attackers may also have physical access to the IoT end-devices, which allows them to perform hardware attacks on the devices. These types of adversaries are mainly interested in three attack types:

• Leaking Sensitive Data: In this type of attack, the attacker exploits the lack of security mechanisms on IoT end-devices to collect private information about the user of the device. These attacks happen either due to insecure communication between the IoT end-devices and the IoT platform or due to software and hardware vulnerabilities that allow the attacker to leak private data.

• Privilege Escalation: External adversaries aim to get more privileges by compromising the end-devices. This allows the adversaries to perform more sophisticated attacks and gain more information, as shown in Section 1.3.1.

• Creating Botnets: The adversaries can compromise a large number of IoT devices to create botnets. A botnet is a group of Internet-connected devices used to perform Distributed Denial-of-Service (DDoS) attacks, steal data, send spam, and allow the attacker to access the device and its connection. The owner can control the botnet using command and control (C&C) software [START_REF] Sabanal | Thingbots: The future of botnets in the internet of things[END_REF]. One of the most popular botnets that hit the IoT is the "Mirai Botnet (2016)" [AAB + 17]. It is composed primarily of embedded and IoT devices, and it took the Internet by storm in late 2016 when it overwhelmed several high-profile targets with some of the largest distributed denial-of-service (DDoS) attacks on record.

Problem Statement

It is clear how important IoT is to our world. It is even more clear that the trend of integrating more and more IoT devices will keep increasing. Therefore, one very important goal for researchers is securing these IoT devices to make sure that this integration is not a curse but rather a benefit for humanity. In this thesis, we aim to design security protocols for IoT devices. We first pose the question: can existing protocols designed for standard IT devices (e.g., computers) solve the security concerns of IoT? The direct answer to this question is no. Researchers have already identified the limitations of IT-world security solutions and traced these limitations back to the unique characteristics of IoT devices (see Section 1.2.1). Therefore, in this thesis, we aim to design customized security protocols that take into consideration all the constraints of the IoT ecosystem.
We specifically study the security protocols on three main axes: scalability, fault tolerance, and verifiability.

• Scalability: We aim to design scalable security protocols that involve thousands or possibly millions of devices. This allows for more practical solutions targeting IoT applications that run with a large number of IoT devices.

• Fault Tolerance: We aim to design security protocols that are reliable in case of failures of some devices. This allows for more practical solutions targeting IoT applications with mobile devices that may encounter connectivity problems and occasionally fail to receive and deliver messages.

• Verifiability: We aim to design security protocols that do not rely on trusting the different IoT components (processing platform, end-devices, etc.). By integrating verification mechanisms into our design, we can verify that third-party IoT platforms are correctly processing the collected data. Additionally, verifiable security protocols can ensure that compromised devices cannot affect the results of our protocols without being detected.

Security Protocols for IoT

In this thesis, we focus on two very important security protocols that can be game-changers for improving IoT security: secure aggregation protocols and remote attestation protocols.

Secure Aggregation Protocols

Secure aggregation consists of computing the sum of data collected from multiple sources without disclosing the individual inputs. Hence, the goal of a secure aggregation protocol is to preserve the privacy of the data sources. Researchers have identified that such protocols are useful for improving privacy guarantees in IoT applications. More precisely, secure aggregation can be used at the network layer to secure the data exchange from the end-devices to the platform; smart grid applications are a good example. Moreover, secure aggregation can also be used at the processing layer to allow multiple IoT platforms to securely aggregate their collected data, for example when IoT platforms use federated learning. Therefore, we classify secure aggregation into the following two categories.

• Intra-Platform Secure Aggregation: Smart meters are IoT end-devices that monitor energy consumption in households and industries. These devices collect live measurements of energy consumption and send them to service providers. The service providers compute statistics on the collected data, enabling them to optimize their energy supply and improve their business model. Collecting these data from households poses a privacy problem: it allows the service providers to perform real-time surveillance and profile the tenants by determining their behavior patterns. Service providers may also determine the electric appliances used in households and track the behavior of renters/leasers. Secure aggregation can be used to protect the energy consumption measurements of each household and allow the service provider to compute only their sum. By disclosing only the statistical average over a large inhabited area, the privacy of inhabitants is preserved.

• Inter-Platform Secure Aggregation: One of the most important technologies used at the IoT processing layer is machine learning. The processing units of IoT platforms use machine learning to train models from the data collected from IoT devices. These models are used to improve the quality of the IoT application.
Recently, federated learning emerged as a new collaborative machine learning technology to train machine learning models. This new technology consists of several federated clients holding private data and contributing to the training of a machine learning model collaboratively: each client first trains a local machine learning model using its dataset and then shares training updates with a server. The server computes the average of the training updates and sends the average back to the clients. The clients update their local models and repeat the process so that they finally converge to one shared, global model. Thanks to this new technology, several IoT platforms (acting as FL clients) can train more accurate machine learning models using larger and more diverse datasets (collected from different IoT infrastructures). Unfortunately, federated learning may compromise the privacy of IoT users: adversaries who have access only to the training updates (sent by each FL client) can perform inference attacks to reveal some data samples from the private training dataset. Therefore, it is very important to secure the federated learning protocol. Secure aggregation can be used to protect the training updates and allow the federated learning server to compute only the average of the updates. By protecting the individual training updates from each IoT platform, secure aggregation helps improve the privacy of IoT users.
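To make the underlying mechanism concrete, the sketch below illustrates the classic pairwise-masking idea behind many secure aggregation schemes. The parameters, input values, and the in-memory mask table are illustrative assumptions for the demo; real protocols derive each pairwise mask from a key agreement between the two users rather than from a shared table.

```python
import random

R = 2**32                  # values are aggregated modulo R
n_users = 4
inputs = [7, 13, 42, 99]   # each user's private value

# Every pair (i, j), i < j, shares a random mask m_ij; i adds it, j subtracts it.
pair_mask = {(i, j): random.randrange(R)
             for i in range(n_users) for j in range(i + 1, n_users)}

def masked_input(i):
    """What user i reports: its value blinded by all its pairwise masks."""
    y = inputs[i]
    for j in range(n_users):
        if i < j:
            y += pair_mask[(i, j)]
        elif j < i:
            y -= pair_mask[(j, i)]
    return y % R

reports = [masked_input(i) for i in range(n_users)]
print("server sees:", reports)            # each report is uniformly masked
print("aggregate:", sum(reports) % R)     # masks cancel: 7+13+42+99 = 161
assert sum(reports) % R == sum(inputs) % R
```

Because each mask appears once with a plus sign and once with a minus sign, the masks cancel in the sum, so the aggregator learns the total but nothing about any individual input.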
In this thesis, we are interested in designing secure aggregation protocols that can scale to a large number of users, do not require the availability of all users, and whose results are publicly verifiable.

Remote Attestation Protocols

Remote attestation (RA) schemes are protocols that enable a device to prove the integrity of its software to another remote device. They involve two roles: the prover (the device that proves its software integrity) and the verifier (the device that verifies the integrity of the prover's software). In RA, the verifier relies on a root of trust existing on the prover device to perform integrity checks on the prover's running software. The prover then creates a cryptographic proof and sends it to the verifier. The verifier, upon validating this proof, authenticates the prover and thus trusts its software. RA is very useful for IoT applications since it can be used to detect compromised IoT end-devices. Basic RA solutions require installing a verifier in the IoT platform which verifies the state of all IoT end-devices. Such a solution is not practical since it does not scale well with the number of IoT devices. To solve this problem, collaborative RA protocols are used, where the IoT devices attest to each other and create a single proof of their software integrity. In this thesis, we raise questions about the scalability of existing collaborative RA protocols in dynamic and heterogeneous networks. Thus, we aim to design efficient RA protocols that scale to a large number of IoT end-devices and can support the dynamic nature and heterogeneity of IoT devices in the network.

Contributions

The thesis describes novel secure aggregation and remote attestation protocols that are suitable for the IoT. The thesis is organized into three parts. In what follows, we summarize the contributions of each part.

Part I: Secure Aggregation. In Chapter 3, we perform an in-depth literature study of secure aggregation protocols that are based on cryptographic techniques and consequently propose a new definition of this concept. We present existing techniques for secure aggregation as instantiations of our new definition and we compare them. The chapter gives a clear view of the advantages and disadvantages of each of the secure aggregation types. In Chapter 4, we present the first formal definition of security for secure aggregation protocols in the malicious model. Our new security definition captures malicious aggregators and users. Then, we design a novel verifiable secure aggregation (VSA) protocol and prove that it achieves our security definitions.

Part II: Secure Aggregation for Federated Learning. In Chapter 5, we conduct a large-scale study of secure aggregation solutions specifically designed for federated learning applications. We identify the main security, privacy, and performance challenges raised by using secure aggregation for federated learning and we categorize the existing solutions based on the challenges they tackle. The study finishes with key takeaways that can help developers design, develop or improve secure aggregation schemes for federated learning. In Chapter 6, we design a novel secure and fault-tolerant aggregation protocol for federated learning. For this purpose, we propose a new threshold encryption scheme (based on the Joye-Libert encryption scheme [START_REF] Joye | A scalable scheme for privacy-preserving aggregation of time-series data[END_REF]) and we use it to achieve a highly scalable secure aggregation protocol that supports users dropping frequently from the network. We reach the theoretical limit in terms of scalability with respect to the number of users.

Part III: Remote Attestation. In Chapter 7, we design a novel collaborative remote attestation protocol that relies on device-to-device connections to perform attestation of a large network. Our solution has better scalability than existing collaborative remote attestation protocols. It focuses on the heterogeneity aspect of IoT networks and integrates fairness by design. We show by experiments that, by developing a fair-by-design protocol, we can achieve better scalability for an RA protocol.

Chapter 2 Preliminaries and Cryptographic Building Blocks

In this thesis, we propose new protocols that improve IoT in terms of security. To build these protocols we rely on a set of security assumptions and some existing cryptographic primitives. For this purpose, we introduce, in this chapter, the relevant assumptions and tools used throughout the thesis. Additionally, we present the basic security protocols and cryptographic schemes that are considered interesting building blocks for our solutions.

Notations

We use some common notations throughout the thesis. We present these notations in Table 2.1.

Table 2.1: Notations
  N             The set of natural numbers {1, .., ∞}.
  Z_n           The set of integers modulo n: {0, .., n−1}.
  [n]           The set {1, .., n}.
  [n]⁻          The set {1, .., n−1}.
  x⟨i⟩          The i-th bit of x = x⟨1⟩||...||x⟨ℓ⟩.
  |U|           The cardinality of the set U.
  r ←$ D        r is sampled uniformly at random from distribution D.
  D_1 c≈ D_2    The two distributions are not distinguishable by a computationally bounded adversary.
  D_1 s≈ D_2    The two distributions are not distinguishable by a computationally unbounded adversary.

Hard Problems and Assumptions

In cryptography, it is common to rely on the hardness of some mathematical problems to build secure systems. In this thesis, we rely on a set of hard problems, which we present in this section.
Decisional and Computational Diffie-Hellman Assumptions

Given a finite, cyclic group G of order p and generator g, we present the following four assumptions (ordered from strongest to weakest):

Definition 2.2.1 (Inv-DDH Assumption): Given the two distributions $D_1: \{(g, g^a, g^c)\}$ and $D_2: \{(g, g^a, g^{a^{-1}})\}$ such that $a, c \leftarrow_{\$} \mathbb{Z}_p$ and $a^{-1}$ is the inverse of $a$ in $\mathbb{Z}_p$, the assumption states that $D_1 \stackrel{c}{\approx} D_2$.

Definition 2.2.2 (DDH Assumption): Given the two distributions $D_1: \{(g, g^a, g^b, g^c)\}$ and $D_2: \{(g, g^a, g^b, g^{ab})\}$ such that $a, b, c \leftarrow_{\$} \mathbb{Z}_p$, the assumption states that $D_1 \stackrel{c}{\approx} D_2$.

Definition 2.2.3 (GDH Assumption): The assumption states that given $g^a, g^b \in G$ and an oracle $O_{DDH}$, where $a, b \leftarrow_{\$} \mathbb{Z}_p$ and $O_{DDH}(g^x, g^y, g^z)$ returns $(g^{xy} \stackrel{?}{=} g^z)$, there is no polynomial-time algorithm that outputs $g^{ab}$ except with negligible probability.

Definition 2.2.4 (CDH Assumption): The assumption states that given $g^a, g^b \in G$ such that $a, b \leftarrow_{\$} \mathbb{Z}_p$, there is no polynomial-time algorithm that outputs $g^{ab}$ except with negligible probability.

Decisional Composite Residuosity Assumption

Given $N = pq$ where p and q are two large primes, we define the following assumption (initially introduced in [START_REF] Paillier | Public-key cryptosystems based on composite degree residuosity classes[END_REF]) in the multiplicative group $\mathbb{Z}^*_{N^2}$:

Definition 2.2.5 (DCR Assumption): Given the two distributions $D_1: \{x^N \bmod N^2\}$ and $D_2: \{x \bmod N^2\}$ such that $x \leftarrow_{\$} \mathbb{Z}^*_{N^2}$, the assumption states that $D_1 \stackrel{c}{\approx} D_2$.

Cryptographic Schemes

In this section, we present the main cryptographic schemes used in our thesis.

Threshold Secret Sharing (SS)

A threshold secret sharing scheme (SS) is a scheme that allows sharing a secret value with n parties such that a threshold number of these parties can reconstruct the secret value. It is composed of two algorithms:

• SS.Share(s, t, n, I) → $\{(i, \langle s \rangle_i)\}_{\forall i \in [n]}$: The algorithm takes as input a secret s ∈ I and a reconstruction threshold t, and generates n random shares.

• SS.Recon($\{(i, \langle s \rangle_i)\}_{\forall i \in A}$, I) → s: Given the shares of a subset A ⊆ [n] with |A| ≥ t, the algorithm reconstructs the secret s.

Definition 2.3.1 (Correctness): SS is correct in I if and only if, for all s ∈ I, all n, t, and all A ⊆ [n] with |A| ≥ t, reconstruction from the shares of A returns s with probability 1.

Definition 2.3.2 (Security): SS is secure in I if and only if, for all s, s′ ∈ I, all n, t, and all A ⊂ [n] with |A| < t:

$$\left\{(i, \langle s \rangle_i)\right\}_{\forall i \in A} \stackrel{s}{\approx} \left\{(i, \langle s' \rangle_i)\right\}_{\forall i \in A}$$

where the shares are produced by SS.Share(s, t, n, I) and SS.Share(s′, t, n, I) respectively.

Definition 2.3.3 (Homomorphism): SS is said to be homomorphic in I if and only if, for all $s_1, s_2 \in I$, all n, t, and all A ⊆ [n] with |A| ≥ t:

$$\Pr\left[ r \neq s_1 + s_2 \;\middle|\; \begin{array}{l} \mathrm{SS.Share}(s_1, t, n, I) \to \{(i, \langle s_1 \rangle_i)\}_{\forall i \in [n]}, \\ \mathrm{SS.Share}(s_2, t, n, I) \to \{(i, \langle s_2 \rangle_i)\}_{\forall i \in [n]}, \\ \mathrm{SS.Recon}(\{(i, \langle s_1 \rangle_i + \langle s_2 \rangle_i)\}_{\forall i \in A}, I) \to r \end{array} \right] = 0$$

Realization over the Field Z_p

Shamir's secret sharing scheme (SSS) [START_REF] Shamir | How to share a secret[END_REF] is a realization of the threshold secret sharing scheme in the field $\mathbb{Z}_p$. SSS is correct, secure, and homomorphic, and is defined as follows:

• SSS.Share(s, t, n, $\mathbb{Z}_p$) → $\{(i, \langle s \rangle_i)\}_{\forall i \in [n]}$: The algorithm first generates a polynomial p(x) with uniformly random coefficients in $\mathbb{Z}_p$ and of degree t−1 such that p(0) = s. It then sets $\langle s \rangle_i = p(i)$, ∀i ∈ [n].
• SSS.Recon($\{(i, \langle s \rangle_i)\}_{\forall i \in A}$, $\mathbb{Z}_p$) → s: The algorithm uses the Lagrange interpolation formula [START_REF] Meijering | A chronology of interpolation: from ancient astronomy to modern signal and image processing[END_REF] to compute the value of p(0) as follows:

$$s = p(0) = \sum_{\forall i \in A} \eta_i \langle s \rangle_i \bmod p \quad \text{where} \quad \eta_i = \prod_{\forall j \in A, j \neq i} \frac{j}{j - i} \bmod p$$

We additionally define the following algorithm for Shamir's secret sharing:

• SSS.ReconExp($\{(i, g^{\langle s \rangle_i})\}_{\forall i \in A}$, G) → $g^s$: The algorithm uses the Lagrange interpolation formula in the exponent to compute the value $g^s = g^{p(0)} = \prod_{\forall i \in A} (g^{\langle s \rangle_i})^{\eta_i}$.

Lemma 2.3.1: Given the cyclic group G of prime order p, it holds that for all $s \in \mathbb{Z}_p$, all n, t, all A ⊆ [n] with |A| ≥ t, and all g ∈ G:

$$\Pr\left[ r \neq g^s \;\middle|\; \begin{array}{l} \mathrm{SSS.Share}(s, t, n, \mathbb{Z}_p) \to \{(i, \langle s \rangle_i)\}_{\forall i \in [n]}, \\ \mathrm{SSS.ReconExp}(\{(i, g^{\langle s \rangle_i})\}_{\forall i \in A}, G) \to r \end{array} \right] = 0$$

Lemma 2.3.2: Given the cyclic group G of prime order p, it holds that for all $s_1, s_2 \in \mathbb{Z}_p$, all n, t, all A ⊆ [n] with |A| ≥ t, and all $g_1, g_2 \in G$:

$$\Pr\left[ r \neq g_1^{s_1} g_2^{s_2} \;\middle|\; \begin{array}{l} \mathrm{SSS.Share}(s_1, t, n, \mathbb{Z}_p) \to \{(i, \langle s_1 \rangle_i)\}_{\forall i \in [n]}, \\ \mathrm{SSS.Share}(s_2, t, n, \mathbb{Z}_p) \to \{(i, \langle s_2 \rangle_i)\}_{\forall i \in [n]}, \\ \mathrm{SSS.ReconExp}(\{(i, g_1^{\langle s_1 \rangle_i} g_2^{\langle s_2 \rangle_i})\}_{\forall i \in A}, G) \to r \end{array} \right] = 0$$
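As a concrete companion to the definitions above, here is a minimal Python sketch of SSS over $\mathbb{Z}_p$. The prime, parameters and variable names are illustrative choices of ours, and the code pays no attention to side channels; it demonstrates Share, Recon via Lagrange interpolation at zero, and the additive homomorphism of Definition 2.3.3.

```python
import random

P = 2**61 - 1  # a Mersenne prime, for illustration only

def share(s, t, n, p=P):
    """SSS.Share: random degree-(t-1) polynomial with poly(0) = s; share i is poly(i)."""
    coeffs = [s] + [random.randrange(p) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p
    return [(i, poly(i)) for i in range(1, n + 1)]

def recon(shares, p=P):
    """SSS.Recon: Lagrange interpolation at 0 over any subset of >= t shares."""
    idx = [i for i, _ in shares]
    s = 0
    for i, y in shares:
        eta = 1
        for j in idx:
            if j != i:
                eta = eta * j % p * pow(j - i, p - 2, p) % p  # j/(j-i) mod p
        s = (s + eta * y) % p
    return s

shares = share(42, t=3, n=5)
assert recon(shares[:3]) == 42          # any t shares reconstruct the secret

# Homomorphism: share-wise sums reconstruct the sum of the secrets.
sh2 = share(100, t=3, n=5)
summed = [(i, (a + b) % P) for (i, a), (_, b) in zip(shares, sh2)]
assert recon(summed[:3]) == 142
```

The same interpolation coefficients $\eta_i$ computed in `recon` are the ones used "in the exponent" by SSS.ReconExp.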
Realization over the Integers

A variant of Shamir's secret sharing is defined over the integers (rather than in a field). The secret sharing scheme over the integers is defined by Rabin [START_REF] Rabin | A simplified approach to threshold and proactive rsa[END_REF] and we denote it by ISS. The scheme shares a secret integer s in an interval I = [−η, η] and provides σ-bit statistical security, where σ is a security parameter.

• ISS.Share(s, t, n, I) → $\{(i, \langle s \rangle_i)\}_{\forall i \in [n]}$: The algorithm first generates a polynomial p(x) with uniformly random coefficients in $[-2^{\sigma} \Delta^2 \eta, 2^{\sigma} \Delta^2 \eta]$ and of degree t−1 such that $p(0) = \Delta s$, where Δ = n!. It then sets $\langle s \rangle_i = p(i)$, ∀i ∈ [n].

• ISS.Recon($\{(i, \langle s \rangle_i)\}_{\forall i \in A}$, I) → s: The algorithm uses the Lagrange interpolation formula to compute the value of p(0) as follows:

$$s = p(0) = \sum_{\forall i \in A} \frac{\nu_i \langle s \rangle_i}{\Delta^2} \quad \text{where} \quad \nu_i = \Delta \cdot \frac{\prod_{\forall j \in A, j \neq i} j}{\prod_{\forall j \in A, j \neq i} (j - i)}$$

Symmetric Encryption

A symmetric encryption scheme parameterized by a security parameter λ, key space K, message space M, and ciphertext space C is composed of the following algorithms:

• $\mathrm{Enc}_k(m) \to e$: It encrypts the message m ∈ M with key k ∈ K.
• $\mathrm{Dec}_k(e) \to m$: It decrypts the ciphertext e ∈ C with key k ∈ K.

Definition (Correctness): $\forall k \in K, m \in M, \; \Pr[m \neq \mathrm{Dec}_k(\mathrm{Enc}_k(m))] = 0$.

Definition 2.3.7 (Non-Committing Encryption): The scheme is non-committing if $D_1 \stackrel{c}{\approx} D_2$, where:

• $D_1: \{(e, k)\}$ such that $k \leftarrow_{\$} K$ and $e \leftarrow \mathrm{Enc}_k(m)$;
• $D_2: \{(e', k')\}$ such that $e' \leftarrow_{\$} S_1(1^{\lambda})$, $k' \leftarrow_{\$} S_2(e', m)$, and $S_1, S_2$ are any PPT algorithms that may share a state.

Definition 2.3.7 says that it is possible for a simulator to come up with a ciphertext which can later be explained as an encryption of any message, in such a way that the joint distribution of the ciphertext and the key in this simulated experiment is indistinguishable from the normal use of the encryption scheme, where a key is first sampled and then an encryption of m is generated. In this thesis, we will use (Enc, Dec) for an authenticated encryption scheme and (Enc*, Dec*) for a non-committing authenticated encryption scheme.

Pseudo-Random Generator

A pseudo-random generator PRG: $\{0,1\}^{\lambda} \to \{0,1\}^{\ell}$, with ℓ > λ, deterministically expands a short, uniformly random seed into a longer string that is computationally indistinguishable from a uniformly random string.

Key Agreement Scheme

A key agreement scheme KA is a scheme parameterized by the security parameter λ and the key space K. It is composed of the following algorithms:

• KA.Param($1^{\lambda}$) → pp: Given a security parameter λ, it generates the public parameters pp.
• KA.Gen(pp) → (pk, sk): This algorithm generates a key pair from the public parameters.
• KA.Agree(pp, sk, $pk_1$, $pk_2$, G) → k: This algorithm uses a private key, two public keys, and a hash function G to generate an encryption key.

Definition 2.3.9 (Correctness): The key agreement scheme is correct if and only if, for all λ:

$$\Pr\left[ k_{1,2} \neq k_{2,1} \;\middle|\; \begin{array}{l} \mathrm{KA.Param}(1^{\lambda}) \to pp, \\ \mathrm{KA.Gen}(pp) \to (pk_1, sk_1), \; \mathrm{KA.Gen}(pp) \to (pk_2, sk_2), \\ \mathrm{KA.Agree}(pp, sk_1, pk_1, pk_2, G) \to k_{1,2}, \\ \mathrm{KA.Agree}(pp, sk_2, pk_2, pk_1, G) \to k_{2,1} \end{array} \right] = 0$$

Definition 2.3.10 (Security): The key agreement scheme is secure if and only if $D_1 \stackrel{c}{\approx} D_2$, where the two distributions are defined as follows:

$$D_1: \left\{ (pk_1, pk_2, r) \;\middle|\; \begin{array}{l} \mathrm{KA.Param}(1^{\lambda}) \to pp, \\ \mathrm{KA.Gen}(pp) \to (pk_1, sk_1), \; \mathrm{KA.Gen}(pp) \to (pk_2, sk_2), \\ r \leftarrow_{\$} K \end{array} \right\}$$

$$D_2: \left\{ (pk_1, pk_2, k) \;\middle|\; \begin{array}{l} \mathrm{KA.Param}(1^{\lambda}) \to pp, \\ \mathrm{KA.Gen}(pp) \to (pk_1, sk_1), \; \mathrm{KA.Gen}(pp) \to (pk_2, sk_2), \\ \mathrm{KA.Agree}(pp, sk_1, pk_1, pk_2, G) \to k \end{array} \right\}$$

Realization based on DDH

Under the DDH assumption we can construct a KA scheme that is secure in the random oracle model for G (see Section 2.4.3) as follows:

• KA.Param($1^{\lambda}$) → pp: From the security parameter λ, it chooses a group G of order p and generator g where DDH holds in G. The public parameters are set to pp = (G, g, p).
• KA.Gen(pp) → (pk, sk): It outputs $(g^a, a)$ where $a \leftarrow_{\$} \mathbb{Z}_p$.
• KA.Agree(pp, sk, $pk_1$, $pk_2$, G) → k: It outputs $k \leftarrow G_{(pk_1, pk_2)}(pk_2^{sk})$.
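The following small sketch mirrors the KA interface, assuming the third-party Python `cryptography` package: X25519 stands in for the DDH-hard group of the construction, and SHA-256 models the random oracle G seeded with the two public keys. This is an illustrative instantiation, not the thesis's prescribed one.

```python
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def raw(pk):
    """Serialize a public key to raw bytes."""
    return pk.public_bytes(serialization.Encoding.Raw,
                           serialization.PublicFormat.Raw)

def ka_gen():
    """KA.Gen: return (pk, sk)."""
    sk = X25519PrivateKey.generate()
    return sk.public_key(), sk

def ka_agree(sk, pk_self, pk_peer):
    """KA.Agree: derive k = G_(pk1,pk2)(shared secret), with SHA-256 as G."""
    shared = sk.exchange(pk_peer)
    # Seed the KDF with both public keys in a fixed order, so both parties
    # compute the same G_(pk1,pk2)(.) regardless of who calls.
    seed = b"".join(sorted([raw(pk_self), raw(pk_peer)]))
    return hashlib.sha256(seed + shared).digest()

pk1, sk1 = ka_gen()
pk2, sk2 = ka_gen()
k12 = ka_agree(sk1, pk1, pk2)
k21 = ka_agree(sk2, pk2, pk1)
assert k12 == k21   # Definition 2.3.9 (correctness): both derive the same key
print(k12.hex())
```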
Ideal Functionalities and Protocols

The Universal Composability Framework (UC)

In this thesis, we use the standard universal composability framework proposed by Canetti [START_REF] Canetti | Universally composable security[END_REF]. The UC framework defines a probabilistic polynomial time (PPT) environment machine Z that oversees the execution of a protocol in one of two worlds: the "ideal world" execution involves "dummy parties" (some of whom may be corrupted by an ideal adversary S) interacting with a functionality F; the "real world" execution involves PPT parties (some of whom may be corrupted by a PPT real-world adversary A) interacting only with each other in some protocol Π. Let $\mathrm{EXEC}_{\Pi,A,Z}(x)$ denote the random variable (over the local random choices of all the real-world parties) describing the output of an execution of Π with environment Z and adversary A on input x. Similarly, let $\mathrm{IDEAL}_{F,S,Z}(x)$ denote the random variable (over the local random choices of a simulator S) describing the output of an execution of F with environment Z and simulated adversary S on input x. Moreover, let $\mathrm{EXEC}_{\Pi,A,Z}$ and $\mathrm{IDEAL}_{F,S,Z}$ denote the ensembles $\{\mathrm{EXEC}_{\Pi,A,Z}(x)\}_{x \in \{0,1\}^*}$ and $\{\mathrm{IDEAL}_{F,S,Z}(x)\}_{x \in \{0,1\}^*}$ respectively. We refer to [START_REF] Canetti | Universally composable security[END_REF] for a more detailed description of both executions.

Definition 2.4.1 (Protocol Realization): Let F be an ideal functionality. A protocol Π is said to UC-realize F if for any adversary A there exists a simulator S such that for all environments Z, $\mathrm{IDEAL}_{F,S,Z} \stackrel{c}{\approx} \mathrm{EXEC}_{\Pi,A,Z}$. Additionally, we say that protocol Π operates in the G-hybrid model if Π uses the ideal functionality G.

Definition 2.4.2 (Protocol Emulation): Let $G_1, G_2$ be ideal functionalities and let $\Pi_1, \Pi_2$ be multi-party protocols. We say $\Pi_1$ in the $G_1$-hybrid model UC-emulates $\Pi_2$ in the $G_2$-hybrid model if for any adversary $A_1$ there exists an adversary $A_2$ such that for all environments Z, $\mathrm{EXEC}_{\Pi_1,A_1,Z} \stackrel{c}{\approx} \mathrm{EXEC}_{\Pi_2,A_2,Z}$.

Furthermore, we follow the definitions in [START_REF] Canetti | Universally composable security[END_REF] for modeling adversary corruptions. When the adversary corrupts a party, it sends a backdoor message (sid, ID), where sid denotes the session id and ID denotes the identifier of the corrupted party. This happens at the very beginning of the protocol execution in the case of static corruption. For simplicity, we do not show these messages in the ideal functionalities, as we assume the default behavior is that party ID is recorded as corrupted. We also use the work in [START_REF] Canetti | Universal composition with joint state[END_REF], which describes the execution of multiple sessions of a protocol. We denote by Π and F the multi-session extensions of the protocol Π and functionality F respectively. We recall the UC theorem with joint state (JUC theorem):

Theorem 2.4.1 (Universal Composability with Joint State [START_REF] Canetti | Universal composition with joint state[END_REF]): Given a protocol Π that UC-realizes F in the G-hybrid model and a protocol Ψ that UC-realizes functionality Ĝ in the real-world model, the composition of Π and Ψ, Π ∘ Ψ, in the real-world model UC-emulates Π in the G-hybrid model.

In this section, we present the ideal functionalities of the primitives we use in this thesis. Additionally, we discuss a possible realization for some of these functionalities.

Common Reference String (CRS)

We use the ideal functionality $F^{D,n}_{CRS}$ parameterized by the sampling algorithm D and the number of parties n. The purpose of this functionality is to have a common string crs sampled from a fixed distribution and available to all parties of the protocol. Notice that this proposed functionality differs from the one proposed in [START_REF] Canetti | Universal composition with joint state[END_REF] because it supports distributing the CRS to n parties. It is described in Figure 2.1.

Functionality $F^{D,n}_{CRS}$ (Figure 2.1): $F^{D,n}_{CRS}$ runs with parties $P_1$, ..., $P_n$ and is parameterized by an algorithm D and a security parameter λ.
• When receiving a message (sid, CRS, $P_i$) from $P_i$, if this is the first CRS message, compute crs ←$ D($1^{\lambda}$). Then, send (sid, CRS, crs) to $P_i$ and S. Check whether the message (sid, CRS, ...) has been received from all parties: if yes, halt; if no, continue.

Random Oracle Model

Random oracles are typically used as an idealised replacement for cryptographic hash functions in schemes where strong randomness assumptions are needed. The ideal functionality of the random oracle returns a uniformly random response from the output domain. Additionally, if a query is repeated, it responds the same way every time the same query is submitted. Throughout this thesis, we use the following hash functions modeled as random oracles:

• $H^G: \{0,1\}^* \to G$. This hash function is used to generate random group elements from arbitrary strings. The group used in this hash function is indicated as a parameter in its superscript.
• $G^G: G^3 \to \{0,1\}^{\lambda}$. This hash function is used as a key-derivation function to extract a λ-bit key from a group element; the first two inputs are used to seed the function.

Sharing Random Number (RShare)

We define the ideal functionality $F^{t,n,F}_{RShare}$ parameterized by the field F, the number of parties n, and a threshold t ≤ n. The purpose of this functionality is to generate a uniformly random element of the field F and secretly share it (using Shamir's secret sharing) with the n parties. The functionality is described in Figure 2.2.

Functionality $F^{t,n,F}_{RShare}$ (Figure 2.2): $F^{t,n,F}_{RShare}$ runs with parties $P_1$, ..., $P_n$ and is parameterized by the field F, the number of parties n, and a threshold t ≤ n.
• When receiving a message (sid, init, $P_i$) from $P_i$, if this is the first init message, compute r ←$ F.
• When all init messages have been received from the n parties, compute SS.Share(r, t, n, F) → $\{(i, \langle r \rangle_i)\}_{\forall i \in [n]}$, send (sid, share, $\langle r \rangle_i$) to each $P_i$, and send (sid, share) to S.

Realization: This functionality can be realized by letting each party $P_i$ generate its random value $r_i \leftarrow_{\$} F$ and secretly share it (using authenticated channels) with the other parties: SS.Share($r_i$, t, n, F) → $\{(j, \langle r_i \rangle_j)\}_{\forall j \in [n]}$. Then, each party $P_i$ simply sums the shares it receives to obtain the share $\langle r \rangle_i = \sum_{j=1}^{n} \langle r_j \rangle_i$, where $r = \sum_{i=1}^{n} r_i$.
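A quick sketch of this realization follows; it assumes honest-but-curious parties and omits the authenticated channels, and all parameters are illustrative. It relies on the homomorphism of Shamir's scheme: summing shares of the $r_i$ yields a share of $r = \sum_i r_i$.

```python
import random

P = 2**61 - 1   # illustrative prime field
t, n = 3, 5

def poly_shares(s):
    """Shamir-share s: shares[j] is the evaluation at point j+1."""
    c = [s] + [random.randrange(P) for _ in range(t - 1)]
    return [sum(ck * pow(i, k, P) for k, ck in enumerate(c)) % P
            for i in range(1, n + 1)]

def recon_at_zero(points):
    """Lagrange interpolation at 0 from (index, share) pairs."""
    total = 0
    for i, y in points:
        eta = 1
        for j, _ in points:
            if j != i:
                eta = eta * j % P * pow(j - i, P - 2, P) % P
        total = (total + eta * y) % P
    return total

# Each party P_i samples r_i and Shamir-shares it with everyone.
r = [random.randrange(P) for _ in range(n)]
shares = [poly_shares(ri) for ri in r]            # shares[i][j] = <r_i>_{j+1}

# Party P_j locally sums the shares it received: its share of r = sum(r_i).
my_share = lambda j: sum(shares[i][j] for i in range(n)) % P
points = [(j + 1, my_share(j)) for j in range(t)]  # any t parties suffice
assert recon_at_zero(points) == sum(r) % P
```

As long as fewer than t parties collude, no subset learns r, yet any t parties can jointly reconstruct it, matching the guarantees of $F^{t,n,F}_{RShare}$.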
Oblivious Transfer (OT)
An oblivious transfer protocol (OT) is a protocol executed between two parties: the sender and the receiver. The sender inputs a pair (m_0, m_1), whereas the receiver inputs a selection bit x ∈ {0,1}. At the end of the protocol, the receiver outputs the value m_x.

Functionality F_OT
F_OT interacts with a receiver R and a sender S.
Auxiliary Inputs: the length ℓ of the sender's inputs.
• Upon receiving a message (sid, send, S, m_0, m_1) from S, where m_i ∈ {0,1}^ℓ, store (m_0, m_1).
• Upon receiving a message (sid, recv, R, x) from R, check if a (send, sid, S, ...) message was previously sent. If yes, send (sid, sent, m_x) to R and (sid, sent) to the adversary S and halt. If not, send nothing to R (but continue running).

We present the construction proposed by Chou and Orlandi [START_REF] Chou | The simplest protocol for oblivious transfer[END_REF] in Figure 2.4. The authors of the OT protocol prove that it UC-realizes the functionality F_OT in the random oracle model. Their proof relies on the CDH assumption. Later, Hauck and Loss [START_REF] Hauck | Efficient and universally composable protocols for oblivious transfer from the cdh assumption[END_REF] identify a flaw in the proof from [START_REF] Chou | The simplest protocol for oblivious transfer[END_REF] and provide a patch for the proof that relies on the hardness of the GDH assumption. Based on the work of Hauck and Loss [START_REF] Hauck | Efficient and universally composable protocols for oblivious transfer from the cdh assumption[END_REF] we restate the following two lemmas.

Lemma 2.4.1
For any PPT adversary A that statically corrupts a receiver in the Π_OT protocol, and any PPT environment Z, the algorithm S_rcv (defined in [START_REF] Hauck | Efficient and universally composable protocols for oblivious transfer from the cdh assumption[END_REF]) simulates the execution for A under the GDH assumption (i.e., IDEAL_{F_OT,S_rcv,Z} ≈_c EXEC_{Π_OT,A,Z}).

Lemma 2.4.2
For any PPT adversary A that statically corrupts a sender in the Π_OT protocol, and any PPT environment Z, the algorithm S_snd (defined in [START_REF] Hauck | Efficient and universally composable protocols for oblivious transfer from the cdh assumption[END_REF]) simulates the execution for A under the CDH assumption (i.e., IDEAL_{F_OT,S_snd,Z} ≈_c EXEC_{Π_OT,A,Z}).

Figure 2.4: OT protocol Π_OT, parameterized by a cyclic group G of order p with generator g:
• The sender (with input (m_0, m_1)) samples α ←$ Z_p and sends A = g^α to the receiver.
• The receiver (with input x ∈ {0,1}) samples β ←$ Z_p, computes B ← A^x g^β and κ ← G^G_{(A,B)}(A^β), and sends B to the sender.
• The sender computes κ_0 ← G^G_{(A,B)}(B^α) and κ_1 ← G^G_{(A,B)}((B/A)^α), then sends c_0 ← Enc*_{κ_0}(m_0) and c_1 ← Enc*_{κ_1}(m_1).
• The receiver outputs m_x ← Dec_κ(c_x).

In a nutshell, the simulator S_rcv takes the messages of the receiver and extracts the receiver's choice. It then gets the output from F_OT and generates the simulated sender messages accordingly. On the other hand, the simulator S_snd takes the messages of the sender and extracts the sender's input pair. In our work, we are interested in reusing the S_snd and S_rcv algorithms for our proofs.
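A toy run of the protocol in Figure 2.4 is sketched below in Python; the group parameters are illustrative (not secure sizes), Enc*/Dec are instantiated as a one-time pad, and the key derivation uses SHAKE-256 in place of G^G.

```python
import hashlib, random

# Toy group: Z_P* with P = 2^64 - 59 (a prime); g and sizes are illustrative only
P = 2**64 - 59
g = 5
q = P - 1  # exponent modulus used in this sketch

def G_hash(A, B, elem, n):
    """Key derivation G^G_(A,B)(elem) -> n-byte key (modeled with SHAKE-256)."""
    data = b"|".join(str(v).encode() for v in (A, B, elem))
    return hashlib.shake_256(data).digest(n)

def xor(key, msg):  # Enc*/Dec as a one-time pad
    return bytes(a ^ b for a, b in zip(key, msg))

# Sender
alpha = random.randrange(1, q); A = pow(g, alpha, P)
m0, m1 = b"left msg", b"rightmsg"
# Receiver (choice bit x)
x = 1
beta = random.randrange(1, q)
B = (pow(A, x, P) * pow(g, beta, P)) % P
kappa = G_hash(A, B, pow(A, beta, P), len(m0))
# Sender derives both keys and encrypts
k0 = G_hash(A, B, pow(B, alpha, P), len(m0))
k1 = G_hash(A, B, pow(B, alpha, P) * pow(pow(A, alpha, P), -1, P) % P, len(m1))
c0, c1 = xor(k0, m0), xor(k1, m1)
# Receiver decrypts only its chosen message
assert xor(kappa, (c0, c1)[x]) == (m0, m1)[x]
```

The receiver's single key κ matches exactly one of κ_0, κ_1 (depending on x), since B^α = A^{xα} g^{αβ} and (B/A)^α differ from A^β by A^α unless the exponents align.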
Garbled Circuits (GC)
The garbled circuit protocol proposed by Yao [START_REF] Andrew | How to generate and exchange secrets[END_REF] runs between two parties (a garbler G and an evaluator E). It is used to evaluate an arbitrary function f chosen by G on the private input x of E. Yao's garbling scheme is defined by two algorithms:

• GC.Grb(1^λ, f) → (F, {l^b_{x,i}}_{∀b∈{0,1},∀i∈[ℓ_x]}, {l^b_{out,i}}_{∀b∈{0,1},∀i∈[ℓ_out]}): Given the security parameter λ and the function f, the algorithm generates a garbled circuit F and a label for each bit of the input and output (ℓ_x and ℓ_out are the bit-lengths of the input and output, respectively).

• GC.Eval(F, {l^{x⟨i⟩}_{x,i}}_{∀i∈[ℓ_x]}) → {l^{out⟨i⟩}_{out,i}}_{∀i∈[ℓ_out]}: Given the garbled circuit F and the labels of the input bits, the algorithm outputs the labels of the result out = f(x).

In this thesis, we are interested in the function f_{θ,M} which checks if the evaluator's input is less than θ and, if so, returns a message M. We define the two functions f_θ and f_{θ,M} as follows:

f_θ(x) = 1 if x ≤ θ, and 0 otherwise;    f_{θ,M}(x) = M if x ≤ θ, and ⊥ otherwise.   (2.1)

To evaluate f_{θ,M} using a garbled circuit, the garbler G generates a garbled circuit F of f_θ and a label for each bit of the input. G sends F to E and then transfers the labels of x to E using OT. G also uses the output label l^1_out to encrypt the message M and sends the ciphertext to E. Finally, E evaluates F on the labels of x and obtains l^res_out where res ∈ {0,1}. If res = 1, the evaluator decrypts the ciphertext and obtains M. We describe the protocol in Figure 2.5.

Figure 2.5: Protocol to evaluate f_{θ,M} using a garbled circuit and oblivious transfer, parameterized by the cyclic group G of order p and generator g:
• The garbler (with input (θ, M)) runs GC.Grb(1^λ, f_θ) → (F, {l^b_{x,i}}_{∀i∈[ℓ],∀b∈{0,1}}, l^b_out), samples α ←$ Z_p, and sends A = g^α together with F.
• The evaluator (with input x = x_1||...||x_ℓ) samples β_i ←$ Z_p, computes B_i ← A^{x⟨i⟩} g^{β_i} and κ_i ← G^G_{(A,B_i)}(A^{β_i}) for all i ∈ [ℓ], and sends {B_i}_{∀i∈[ℓ]}.
• The garbler computes κ^0_i ← G^G_{(A,B_i)}(B_i^α) and κ^1_i ← G^G_{(A,B_i)}((B_i/A)^α), encrypts the labels c^0_i ← Enc*_{κ^0_i}(l^0_{x,i}) and c^1_i ← Enc*_{κ^1_i}(l^1_{x,i}) for all i ∈ [ℓ], computes C ← Enc*_{l^1_out}(M), and sends {(c^0_i, c^1_i)}_{∀i∈[ℓ]} and C.
• The evaluator recovers its input labels l^{x⟨i⟩}_{x,i} ← Dec_{κ_i}(c^{x⟨i⟩}_i), evaluates l^res_out ← GC.Eval(F, {l^{x⟨i⟩}_{x,i}}_{∀i∈[ℓ]}), and, if res = 1, obtains M ← Dec_{l^res_out}(C).

Notice that this protocol is secure when the garbler is honest-but-curious (i.e., follows the protocol steps).

Zero-Knowledge Proofs (ZKP)
Zero-knowledge proof protocols are two-party protocols where a prover P interacts with a verifier V to convince it that a given statement is true, while P avoids conveying any additional information apart from the fact that the statement is indeed true. We show the ideal functionality of ZKP in Figure 2.6.

Functionality F^R_ZK
F^R_ZK interacts with a prover P and a verifier V and is parameterized by the relation R.
• Upon receiving a message (sid, prove, x, w) from P, store x and w, and continue.
• Upon receiving a message (sid, verify, x′) from V, abort if x′ ≠ x; otherwise, send (sid, R(x, w)) to V and S, and halt.

Discrete Logarithm Relation R^n_DL
Given G (a group where DDH holds) of order p and generator g, we define the relation R^n_DL(x, w) : G^n × Z_p^n → {0,1} as:

R^n_DL({B_i}_{i∈[n]}, {w_i}_{i∈[n]}) : ⋀_{i∈[n]} (B_i = g^{w_i})

Realization: We can realize F^{R^n_DL}_ZK as an extension of Schnorr's zero-knowledge proof for discrete logarithm [START_REF] Schnorr | Efficient identification and signatures for smart cards[END_REF]. More precisely, P samples n uniformly random values r_i ←$ Z_p ∀i ∈ [n] and sends {g^{r_i}}_{∀i∈[n]} to V. V samples the challenge c ←$ Z_p and sends it to P. P computes the proofs z_i ← w_i·c + r_i mod p and sends {z_i}_{∀i∈[n]} to V, which finally checks ⋀_{i∈[n]} (g^{r_i} · B_i^c = g^{z_i}). This protocol can be transformed into a non-interactive zero-knowledge proof using the Fiat-Shamir transformation [START_REF] Fiat | How to prove yourself: Practical solutions to identification and signature problems[END_REF].
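The following Python sketch implements the Fiat-Shamir-compiled version of this batched Schnorr proof; the prime, the generator, and the helper names are illustrative choices, not secure parameters.

```python
import hashlib, random

P = 2**255 - 19   # field prime (illustrative); we work in the subgroup generated by g
g = 2
q = P - 1         # exponent modulus for this sketch

def fs_challenge(*elems):
    """Fiat-Shamir: hash the transcript to derive the challenge c."""
    h = hashlib.sha256(b"|".join(str(e).encode() for e in elems)).digest()
    return int.from_bytes(h, "big") % q

def prove(ws):
    Bs = [pow(g, w, P) for w in ws]                 # statements B_i = g^{w_i}
    rs = [random.randrange(q) for _ in ws]
    Ts = [pow(g, r, P) for r in rs]                 # commitments g^{r_i}
    c = fs_challenge(*Bs, *Ts)
    zs = [(w * c + r) % q for w, r in zip(ws, rs)]  # responses z_i = w_i c + r_i
    return Bs, Ts, zs

def verify(Bs, Ts, zs):
    c = fs_challenge(*Bs, *Ts)
    return all(pow(g, z, P) == T * pow(B, c, P) % P
               for B, T, z in zip(Bs, Ts, zs))

ws = [random.randrange(q) for _ in range(4)]        # witnesses w_i
assert verify(*prove(ws))
```

A single hash over the whole transcript produces one shared challenge c for all n instances, which is exactly what makes the batched proof as cheap as n independent Schnorr proofs with a common challenge.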
Tag Relation R_Tag
Given G (a group where DDH holds) of order p, we define the tag relation R_Tag(x, w) : G^3 × Z_p^2 → {0,1} as:

R_Tag({A, B, C}, (a, b)) : C = A^a B^b

Realization: Similarly, we can realize F^{R_Tag}_ZK as an extension of Schnorr's zero-knowledge proof for discrete logarithm [START_REF] Schnorr | Efficient identification and signatures for smart cards[END_REF]. More precisely, P samples two uniformly random values r_1, r_2 ←$ Z_p and sends T = A^{r_1} B^{r_2} to V. V samples the challenge c ←$ Z_p and sends it to P. P computes the proofs z_1 ← a·c + r_1 mod p and z_2 ← b·c + r_2 mod p, then sends (z_1, z_2) to V, which finally checks A^{z_1} B^{z_2} = C^c · T. This protocol can also be transformed into a non-interactive zero-knowledge proof using the Fiat-Shamir transformation [START_REF] Fiat | How to prove yourself: Practical solutions to identification and signature problems[END_REF].

Part I
Secure Aggregation

Chapter 3
Characterization of Secure Aggregation

In this chapter, we provide a formal definition of a secure aggregation protocol and we study the threat models considered in the literature: the honest-but-curious model and the malicious model. We first identify the four major technologies used to build secure aggregation solutions: differential privacy, trusted execution environments, anonymity, and cryptography. Then, we focus on solutions that are based on cryptography. We study the existing cryptography-based solutions in the honest-but-curious model and we suggest a systematic categorization of these solutions.

Environment of Secure Aggregation
One of the most important functionalities of the IoT is aggregating the data collected by the end-devices. Many IoT applications, such as smart grids [START_REF] Muhammad Rizwan Asghar | Smart meter data privacy: A survey[END_REF], water management systems, and traffic control in cities, use this functionality. In most of these applications, aggregating the data involves the statistical sum of the data points collected from multiple geo-dispersed data sources (e.g., sensors). Nevertheless, these individual data points usually involve sensitive data. This raises serious privacy concerns and thus calls for suitable privacy-enhancing technologies to protect the confidentiality of end-devices' data while still being able to perform the aggregation operation. During the past 20 years, a large amount of research has focused on designing secure aggregation (SA) solutions [ÖM07, LEM14a, BSK+19, BIK+17, DA16, CMT05] for various applications. The goal is to enable the computation of the sum of several parties' inputs without leaking any information about each individual input except the aggregate (the sum). In this thesis, aggregation refers to the process where data collected from multiple sources is summed up. Usually, data is collected at consecutive timestamps and the sum (aggregate) is calculated for each timestamp. For many applications, the data contains sensitive information.
Usually, secure aggregation involves the following actors:

• Users (U): Parties that provide the input data x_{i,τ} at each timestamp τ.
• Aggregators (A): Parties that perform the aggregation to obtain the sum X_τ of the input data at each timestamp τ.
• Trusted Third Party (TP): A trusted party that may be required for setup purposes in some secure aggregation protocols.

Threat Models and Security Requirements for Secure Aggregation
We describe the most popular threat models considered for secure aggregation, namely the honest-but-curious model and the malicious model. Then, we define the security requirements for secure aggregation.

Threat Models
Honest-but-curious Model: A common security model for secure aggregation schemes is the honest-but-curious model. An honest-but-curious adversary correctly follows the protocol steps but remains curious to discover any private information. The adversary may corrupt the aggregator and some users. When the adversary corrupts multiple parties, it accesses all their private information. This models collusion between the corrupted aggregator and corrupted users.

Malicious Model: In addition to the honest-but-curious model, several research works consider a malicious model where the adversaries are more powerful. A malicious adversary may deviate from the protocol steps and may thus provide arbitrary inputs to the protocol. There are some variations of this model where the adversary only corrupts the aggregator (a.k.a. the malicious aggregator model) or only corrupts a subset of users (a.k.a. the malicious users model). In this chapter, we only study SA solutions secure under the honest-but-curious model.

Security Requirements
We present the two main security requirements that define the security of secure aggregation protocols.

Aggregator Obliviousness: This security notion was initially proposed by Shi et al. [SCR+11]. It ensures that the aggregator (even when colluding with a subset of corrupted users) learns nothing about the individual inputs of honest users beyond what is revealed by the aggregate itself.

Aggregate Unforgeability: This security notion ensures that an adversary (by corrupting the aggregator) cannot output an incorrect sum without being detected by honest users. If the adversary corrupts a small subset of users (U_corr), the notion requires that the adversary can only make an insignificant change in the result of the sum. This means that, given X the sum of the honest users' inputs, the adversary can only output a result X′ such that X ≤ X′ ≤ X + θ|U_corr|, where θ is the maximum allowed value of a user input (x_{i,τ} < θ).

Existing Secure Aggregation Protocols
There are mainly four types of secure aggregation (SA) protocols: SA based on differential privacy, SA based on Trusted-Execution Environment (TEE), SA based on anonymity, and SA based on cryptography. In the following, we elaborate on each type of secure aggregation. Table 3.1 presents a comparison between the different types.

SA based on differential privacy
Global Differential Privacy: GDP relies on the idea of adding noise to the output of some algorithm run on the private data, as opposed to adding noise to the data entries themselves. This is not practical in the case of SA since adding noise to the aggregation results does not protect the users' inputs from untrusted aggregation servers (the IoT platform).

Local Differential Privacy: On the other hand, LDP solutions [MASN21, ASY+18, TF19, YWW21, CYD20, STL20, DHCP21] are more practical for SA. These solutions propose to add noise to the user input locally at each user device. The noise serves as a layer that prevents attackers from learning private information from the user's input.
A common type of noise used is Gaussian noise. Since all users need to add noise to their inputs before sending them, this method ends up adding too much noise to the final aggregate, thus significantly affecting the accuracy of the results. Therefore, the main concern with this approach is the accuracy of the result, as the noise is amplified during the aggregation.

SA based on trusted-execution environment
Using a TEE provides important advantages in terms of isolating the SA task from the real-world execution environment, which can be compromised by attackers. However, a TEE can only protect against external attackers who remotely compromise the platform's devices. More precisely, it does not protect against malicious platforms (i.e., platform owners with malicious intentions). Indeed, a malicious platform owner has physical access to the TEE hardware and can simply misconfigure it during the setup phase.

SA based on anonymity
A different method relies on the anonymous communication assumption [START_REF] Ishai | Cryptography from anonymity[END_REF]. This method is also known as secure shuffling, where users split their inputs into shares and send them anonymously to the aggregator [ZWC+21] (i.e., using different anonymous channels). The aggregator can simply sum up all the received shares to compute the aggregate. Thanks to the anonymous communication, the server cannot map the received shares to their corresponding users and is thus unable to reconstruct the user inputs. This solution offers a good advantage in terms of the accuracy of the result and in terms of the security guarantee against a malicious platform. Unfortunately, realizing these communication channels is often impossible in the context of IoT. Indeed, it is very hard to hide the end-devices' identity from the IoT platform even when using anonymous networks such as Tor [Ödé22]. This is mainly because IoT devices can be profiled and de-anonymized based on side-channel information such as activity time, frequency of input generation, network delays, etc.

SA based on cryptography
Finally, cryptographic techniques are also used to design secure aggregation protocols. This technique relies on strong mathematical assumptions to protect the user inputs, which can then be sent in public. By relying on hard mathematical problems, the user inputs are transformed into random-looking values which can still undergo linear operations such as addition. Hence, the aggregation result can be computed without decrypting the individual user inputs at any point in time. Many researchers are particularly interested in this type of secure aggregation since it does not significantly affect the accuracy of the aggregation result. It also does not require trusting the platform owners to configure a TEE environment. Moreover, it does not rely on strong assumptions regarding the communication channels between the users and the server, as is the case for secure shuffling. These important advantages render cryptography-based SA a suitable choice for many IoT applications. In this thesis, we only study secure aggregation (SA) solutions based on cryptographic schemes, as these seem to be the most popular ones for many applications.

Secure Aggregation based on Cryptography
We first identify and investigate the three phases of a secure aggregation protocol: Setup, Protection, and Aggregation. Then, we regroup secure aggregation solutions into two categories based on the underlying cryptographic tools used to build the SA protocol.
Specifically, we distinguish encryption-based secure aggregation and multi-party-computation (MPC)-based secure aggregation. For each category, we show how to build the baseline secure aggregation protocol from the existing cryptographic schemes by defining the algorithms executed at each SA phase. We provide some realizations of the different categories and we summarize their advantages and disadvantages in Table 3.2.

Secure Aggregation Protocol Phases
A secure aggregation protocol consists of three consecutive phases: SA.Setup, SA.Protect, and SA.Agg. Each of these phases achieves a specific task, described as follows:

• SA.Setup: In this phase, the n users and the aggregator get the public parameters and the key material. The public parameters and the keys are generated either by a trusted third party (TP) or through a distributed mechanism. At the end of this phase, each user stores a single, unique key k_i where i ∈ [n], and the aggregator stores its aggregation key k_0.

• SA.Protect: Each user U_i locally executes a protection algorithm to protect its input x_{i,τ} at timestamp τ. The resulting protected input is sent to the aggregator(s).

• SA.Agg: Once the aggregators collect all the protected inputs, they collaboratively execute an aggregation algorithm to retrieve the sum of user inputs for timestamp τ. In the case of a single aggregator, the aggregation algorithm is locally executed by the aggregator.

Encryption-based SA
Encryption-based SA protocols use encryption schemes to protect the inputs of the users. Encryption utilizes a secret key to ensure the confidentiality of the user input. To achieve Aggregator Obliviousness (see Definition 3.2.1), users should not be allowed to encrypt their inputs with the same key. Moreover, the encryption scheme should allow the computation of the sum of the inputs over the ciphertexts without leaking the individual cleartext values. There are three types of encryption schemes that are used to build a secure aggregation protocol: (i) masking, (ii) additively homomorphic encryption (AHE), and (iii) functional encryption (FE). In general, encryption-based SAs rely on a single aggregator to perform the aggregation, which minimizes the communication overhead of the protocol.

Secure Aggregation using Masking
Masking is a symmetric encryption technique based on the one-time pad [START_REF] Rubin | One-time pad cryptography[END_REF]. It uses modular addition to mask the data owners' inputs. Given a shared key k between two parties and an upper bound R on the input, masking is defined by two algorithms:

• Masking.Mask(k, x) → c: It masks the input x by computing c ← x + k mod R.
• Masking.UnMask(k, c) → x: It removes the mask by computing x ← c − k mod R.

Layered Masking Variant
In this type of masking scheme, the users are assumed to have network connectivity with each other. Hence, the users are arranged in a tree structure. In the SA.Setup phase, the users that are at a distance of h hops from each other share the same keys (h being a security parameter). Each user representing a node in the tree runs a secure aggregation process with its children. In SA.Protect, each child node masks its input with the sum of the keys it holds using Masking.Mask. It then sends it to its parent node. In SA.Agg, the parent sums the masked inputs received from all its children and then removes the layers of the masks that correspond to their keys using Masking.UnMask. The same process is repeated from bottom to top for each parent node until the aggregated value reaches the root of the tree, at which point the final layers are removed.
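As an illustration, the following Python sketch (with an arbitrary modulus R and illustrative helper names) implements the two masking algorithms and shows the additive homomorphism that both the layered and the DC-net variants rely on.

```python
import random

R = 2**32  # modulus: an upper bound on the summed inputs, arbitrary here

def mask(k, x):      # Masking.Mask
    return (x + k) % R

def unmask(k, c):    # Masking.UnMask
    return (c - k) % R

# Masks are additively homomorphic: the sum of masked values is the
# masked sum, so a party holding the sum of the masking keys can
# unmask the aggregate without ever seeing an individual input.
keys = [random.randrange(R) for _ in range(3)]
xs = [5, 7, 11]
cs = [mask(k, x) for k, x in zip(keys, xs)]
assert unmask(sum(keys), sum(cs) % R) == sum(xs)
```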
Castelluccia et al. [START_REF] Castelluccia | Efficient aggregation of encrypted data in wireless sensor networks[END_REF] proposed a specific version of this scheme for h = ∞. Önen et al. [ÖM07] later generalized this scheme.

DC-Net Variant
In this variant, secure aggregation is seen as a variation of the dining cryptographers problem [START_REF] Chaum | The dining cryptographers problem: Unconditional sender and recipient untraceability[END_REF]. The solution is realized from masking as follows: In the SA.Setup phase, each pair of users (U_i, U_j) agrees on a random key k_{(i,j),τ} using a key agreement protocol (e.g., Diffie-Hellman (DH) [START_REF] Diffie | New directions in cryptography[END_REF], using the aggregator as a proxy to forward the public keys). Also, each user U_i agrees on a random key k_{(i,0),τ} with the aggregator. As a result, each user U_i and the aggregator A compute their own unique key as follows:

k_{i,τ} ← Σ_{j=1}^{i−1} k_{(i,j),τ} − Σ_{j=i+1}^{n} k_{(i,j),τ} − k_{(i,0),τ}   s.t.   k_{(i,j),τ} = k_{(j,i),τ} and k_{(i,i),τ} = 0, ∀i ∈ [0, .., n]   (3.1)

In the SA.Protect phase, each user masks its own input x_{i,τ} with the key k_{i,τ}: c_{i,τ} ← Masking.Mask(k_{i,τ}, x_{i,τ}).

In the SA.Agg phase, the aggregator adds the masked inputs from all users. Then, it removes the mask using its key k_{0,τ} (all the operations are mod R = nR_u where Z_{R_u} is the range of the input values of each user):

Masking.UnMask(k_{0,τ}, Σ_{i=1}^n c_{i,τ}) = Σ_{i=1}^n c_{i,τ} − k_{0,τ}
= Σ_{i=1}^n (x_{i,τ} + Σ_{j=1}^{i−1} k_{(i,j),τ} − Σ_{j=i+1}^{n} k_{(i,j),τ} − k_{(i,0),τ}) − k_{0,τ}
= Σ_{i=1}^n x_{i,τ} + Σ_{i=1}^n (Σ_{j=1}^{i−1} k_{(i,j),τ} − Σ_{j=i+1}^{n} k_{(i,j),τ}) − Σ_{i=1}^n k_{(i,0),τ} − k_{0,τ}   (the middle term cancels to 0)
= Σ_{i=1}^n x_{i,τ} − Σ_{i=1}^n k_{(i,0),τ} − (− Σ_{i=1}^n k_{(0,i),τ})
= Σ_{i=1}^n x_{i,τ}   (3.2)

Analysis (DC-Net Variant): This scheme does not require a key dealer (KD) to distribute the masks. However, it relies on a trusted public key infrastructure (PKI). On the other hand, the masking operations are themselves very lightweight since they only include modular additions. However, the setup phase incurs a significant overhead in terms of computation and communication costs per user, which increases linearly with the total number of users. Since masking uses one-time pad encryption, the setup phase is performed at each timestamp τ (notice the use of the tag τ for each key). Another disadvantage is that once keys are distributed, all users must provide their protected inputs (i.e., the scheme does not support dynamic users). Indeed, if some users do not participate, the masks on the aggregated value cannot be removed. Note that masking itself is information-theoretically secure, but the setup relies on a key agreement protocol that is only computationally secure.
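The cancellation in Equation 3.2 can be checked with the following toy sketch; for simplicity, the pairwise keys are drawn directly at random rather than derived with per-timestamp Diffie-Hellman.

```python
import random

n, R_u = 4, 100
R = n * R_u  # aggregate modulus; inputs live in Z_{R_u}

# Pairwise keys k_(i,j) among the n users (1..n) and the aggregator (0)
k = {}
for i in range(n + 1):
    for j in range(i + 1, n + 1):
        k[i, j] = k[j, i] = random.randrange(R)

def user_key(i):
    """Equation 3.1 for user i: +k for smaller user indices, -k for larger ones
    and for the aggregator."""
    return (sum(k[i, j] for j in range(1, i))
            - sum(k[i, j] for j in range(i + 1, n + 1))
            - k[i, 0]) % R

k0 = -sum(k[0, j] for j in range(1, n + 1)) % R  # aggregator's key

xs = [random.randrange(R_u) for _ in range(n)]
cs = [(x + user_key(i)) % R for i, x in enumerate(xs, start=1)]
# The pairwise user-user masks cancel in the sum; k_0 removes the rest (Eq. 3.2)
assert (sum(cs) - k0) % R == sum(xs)
```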
Secure Aggregation using Additively Homomorphic Encryption
A special type of Additively Homomorphic Encryption (AHE) scheme can be used for secure aggregation. Specifically, multi-user AHE is proposed such that the "addition homomorphism" property is maintained across ciphertexts generated by different users with different keys. These schemes are generally defined by the three following algorithms:

• AHE.Setup(λ) → (pp, {k_i}_{i∈[n]}, k_0): Given a security parameter λ, it generates the public parameters, the encryption keys, and the decryption key.
• AHE.Enc(pp, k_i, τ, x_{i,τ}) → c_{i,τ}: It encrypts message x_{i,τ} for timestamp τ using key k_i and outputs ciphertext c_{i,τ}.
• AHE.Agg(pp, k_0, τ, {c_{i,τ}}_{i∈[n]}) → Σ_{i=1}^n x_{i,τ}: It evaluates the homomorphic operation on the n ciphertexts generated at timestamp τ, then decrypts the resulting ciphertext using decryption key k_0.

A multi-user AHE scheme can guarantee Aggregator Obliviousness if each user encrypts only one input per timestamp. Several instantiations are proposed in [SCR+11, ET12, JL13, BJL16]. Multi-user AHE schemes are specifically designed for secure aggregation protocols: In the SA.Setup phase, TP runs AHE.Setup(λ) and distributes the keys to the users and the aggregator; this phase is executed only once. In the SA.Protect phase, U_i executes AHE.Enc(pp, k_i, τ, x_{i,τ}) and sends the ciphertext to the aggregator. Finally, in the SA.Agg phase, the aggregator executes AHE.Agg(pp, k_0, τ, {c_{i,τ}}_{i∈[n]}) and retrieves the sum of the inputs.

Shi-Chan-Rieffel-Chow-Song Scheme (SCRCS)
The SCRCS scheme [SCR+11] is the first AHE scheme used for secure aggregation. It guarantees Aggregator Obliviousness under the Decisional Diffie-Hellman (DDH) assumption. The three algorithms (AHE.Setup, AHE.Enc, and AHE.Agg) are defined as follows:

• AHE.Setup(λ) → (pp, {k_i}_{i∈[n]}, k_0): Given security parameter λ, it chooses a generator g ∈ G where G is a cyclic group of prime order p for which DDH holds. Additionally, it defines the hash function H^G (see Section 2.4.3). It also generates n random secrets k_1, ..., k_n ∈ Z_p and k_0 = −Σ_{i=1}^n k_i. It outputs the public parameters pp = (G, g, H^G), the secret keys of the users {k_i}_{i∈[n]}, and the secret key of the aggregator k_0.

• AHE.Enc(pp, k_i, τ, x_{i,τ}) → c_{i,τ}:   c_{i,τ} ← g^{x_{i,τ}} · H^G(τ)^{k_i}

• AHE.Agg(pp, k_0, τ, {c_{i,τ}}_{i∈[n]}) → Σ_{i=1}^n x_{i,τ}:
c_τ ← ∏_{i=1}^n c_{i,τ} = g^{Σ_{i=1}^n x_{i,τ}} · H^G(τ)^{Σ_{i=1}^n k_i}
V ← H^G(τ)^{k_0} · c_τ = g^{Σ_{i=1}^n x_{i,τ}}
Then, it computes the discrete logarithm base g of V to obtain Σ_{i=1}^n x_{i,τ} mod p. For an efficient computation of the discrete logarithm using Pollard's method [START_REF] Pollard | Monte carlo methods for index computation[END_REF], the output Σ_{i=1}^n x_{i,τ} should be a small number.

Theorem 3.4.1
The scheme provides Aggregator Obliviousness (see 3.2.1) security under the DDH assumption in the random oracle model if each user encrypts at most one value per timestamp.
Proof. For the proof of correctness and security of this scheme, refer to [SCR+11].
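A toy run of SCRCS over a small Schnorr group is sketched below; the group parameters, the hash into the group, and the brute-force discrete logarithm are purely illustrative (a real deployment would use a large group and Pollard's method).

```python
import hashlib, random

# Order-q subgroup of Z_P* with P = 2q + 1 (toy safe-prime parameters)
q = 1019
P = 2 * q + 1          # 2039, prime: the squares of Z_P* form an order-q group
g = 4                  # a square, hence a generator of the subgroup

def H(tau):
    """Hash a timestamp into the order-q subgroup (toy construction)."""
    h = int.from_bytes(hashlib.sha256(str(tau).encode()).digest(), "big")
    return pow(h % (P - 1) + 1, 2, P)

n = 5
keys = [random.randrange(q) for _ in range(n)]
k0 = -sum(keys) % q    # aggregator key cancels the sum of user keys

tau, xs = 42, [random.randrange(10) for _ in range(n)]
cs = [pow(g, x, P) * pow(H(tau), k, P) % P for x, k in zip(xs, keys)]

c = 1
for ci in cs:
    c = c * ci % P
V = pow(H(tau), k0, P) * c % P              # = g^{sum x_i}
X = next(s for s in range(n * 10) if pow(g, s, P) == V)  # brute-force dlog
assert X == sum(xs)
```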
Joye-Libert Scheme (JL)
The JL scheme [START_REF] Joye | A scalable scheme for privacy-preserving aggregation of time-series data[END_REF] is another AHE scheme for SA, designed as an improvement of the SCRCS scheme. The JL scheme has a simpler decryption function, as it does not require the computation of a discrete logarithm in a group in which the DDH assumption holds. The JL scheme guarantees Aggregator Obliviousness under the Decision Composite Residuosity (DCR) assumption [START_REF] Paillier | Public-key cryptosystems based on composite degree residuosity classes[END_REF]. It defines the three algorithms (AHE.Setup, AHE.Enc, and AHE.Agg) as follows:

• AHE.Setup(λ) → (pp, {k_i}_{i∈[n]}, k_0): Given security parameter λ, it randomly generates two equal-size prime numbers p and q and sets N = pq. Then, it defines a cryptographic hash function H^{Z*_{N²}} (see Section 2.4.3). It randomly generates n secret keys {k_i}_{i∈[n]} ∈ {0,1}^{2ℓ} and sets k_0 = −Σ_{i=1}^n k_i. It outputs the public parameters pp = (N, H^{Z*_{N²}}), the secret keys of the users {k_i}_{i∈[n]}, and the secret key of the aggregator k_0.

• AHE.Enc(pp, k_i, τ, x_{i,τ}) → c_{i,τ}:   c_{i,τ} ← (1 + x_{i,τ}·N) · H^{Z*_{N²}}(τ)^{k_i} mod N²

• AHE.Agg(pp, k_0, τ, {c_{i,τ}}_{i∈[n]}) → Σ_{i=1}^n x_{i,τ}:
c_τ ← ∏_{i=1}^n c_{i,τ} = (1 + N·Σ_{i=1}^n x_{i,τ}) · H^{Z*_{N²}}(τ)^{Σ_{i=1}^n k_i} mod N²
(H^{Z*_{N²}}(τ)^{k_0} · c_τ − 1) / N = Σ_{i=1}^n x_{i,τ} mod N

Theorem 3.4.2
The scheme provides Aggregator Obliviousness (see 3.2.1) security under the DCR assumption in the random oracle model if each user encrypts at most one value per timestamp.
Proof. For the proof of correctness and security of this scheme, refer to [START_REF] Joye | A scalable scheme for privacy-preserving aggregation of time-series data[END_REF].

Analysis: The main advantage of AHE schemes is that they require running the setup phase only once, and hence they are effective when aggregating a stream of data. This originally comes at the cost of relying on a trusted key dealer (KD) to perform the setup. Nevertheless, previous work has improved these schemes to enable running them without the need for a key dealer [START_REF] Leontiadis | Private and dynamic time-series data aggregation with trust relaxation[END_REF]. The per-user computational cost resulting from SA.Protect does not depend on the total number of users but remains significant. Similarly, the communication cost per user does not depend on the total number of users but incurs a size expansion because of the size of the ciphertext. Additionally, similar to masking schemes, AHE does not support dynamic users, since all users should provide their inputs to correctly aggregate them.
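A toy run of the JL scheme is sketched below; the tiny primes and the simplistic hash into Z*_{N²} are for illustration only.

```python
import hashlib, random
from math import gcd

p, q = 1009, 1013            # toy primes; real keys use large equal-size primes
N = p * q
N2 = N * N

def H(tau):
    """Hash a timestamp into Z*_{N^2} (toy construction)."""
    h = int.from_bytes(hashlib.sha256(str(tau).encode()).digest(), "big") % N2
    while gcd(h, N) != 1:    # ensure invertibility mod N^2
        h += 1
    return h

n = 4
keys = [random.getrandbits(32) for _ in range(n)]
k0 = -sum(keys)

tau = 7
xs = [random.randrange(100) for _ in range(n)]
cs = [(1 + x * N) * pow(H(tau), k, N2) % N2 for x, k in zip(xs, keys)]

c = 1
for ci in cs:
    c = c * ci % N2
V = pow(H(tau), k0, N2) * c % N2   # = 1 + N * sum(x_i) mod N^2
assert (V - 1) // N == sum(xs)
```

Note how decryption is just one modular exponentiation followed by integer arithmetic, with no discrete logarithm computation.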
Secure Aggregation using Functional Encryption
Functional encryption (FE) is a type of encryption scheme that enables a user to learn a function of the encrypted data [START_REF] Boneh | Functional encryption: Definitions and challenges[END_REF]. Multi-Input Functional Encryption (MIFE), introduced by Goldwasser et al. [GGG+14], enables the learning of a function over multiple encrypted inputs. A special type of MIFE scheme can be designed to compute the inner-product function of multiple inputs [AGRW17, ACF+18a, DOT18]. Assuming that we have two vectors x and y, each consisting of ℓ elements, the inner product of x and y is:

IP(x, y) = Σ_{i=1}^ℓ x_i y_i   (3.3)

An inner-product MIFE scheme is defined by four algorithms:

• MIFE.Setup(λ) → (pp, msk, {k_i}_{i∈[n]}): Given a security parameter λ, it generates the public parameters pp, the master secret key msk, and n user keys {k_i}_{i∈[n]}.
• MIFE.Enc(pp, k_i, x_{i,τ}) → c_{i,τ}: It encrypts message x_{i,τ} using key k_i and outputs ciphertext c_{i,τ}.
• MIFE.DKGen(pp, msk, y_τ) → dk_τ: It generates decryption key dk_τ using the master secret key and a vector y_τ of n elements.
• MIFE.Dec(pp, dk_τ, c_τ, y_τ) → IP(x_τ, y_τ): It takes the vector c_τ = [c_{1,τ}, .., c_{n,τ}], the vector y_τ, and the decryption key dk_τ generated from y_τ. It decrypts c_τ such that the result is the inner product of x_τ = [x_{1,τ}, .., x_{n,τ}] and y_τ.

MIFE schemes for the inner product can be used to construct a secure aggregation protocol [XBZ+19, WPX+20]. In the SA.Setup phase, TP runs MIFE.Setup(λ) and distributes the keys to the users. In the SA.Protect phase, U_i executes MIFE.Enc(pp, k_i, x_{i,τ}) and sends the ciphertext c_{i,τ} to the aggregator. Finally, in the SA.Agg phase, the aggregator first sends the vector y_τ = [1, ..., 1] to TP, which executes MIFE.DKGen(pp, msk, y_τ) and sends the decryption key dk_τ for timestamp τ to the aggregator. The aggregator then executes MIFE.Dec(pp, dk_τ, [c_{1,τ}, .., c_{n,τ}], y_τ) and retrieves the inner product Σ_{i=1}^n x_{i,τ}·y[i] = Σ_{i=1}^n x_{i,τ}.

Construction based on one-time pad
An efficient FE-based secure aggregation can be built using one-time pad encryption and a pseudo-random generator only. The MIFE scheme from [ACF+18b] defines the four algorithms (MIFE.Setup, MIFE.Enc, MIFE.DKGen, and MIFE.Dec) as follows:

• MIFE.Setup(λ) → (pp, msk, {k_i}_{i∈[n]}): Given security parameter λ, it chooses a prime p and a pseudo-random generator PRG : (Z_p, Z) → Z_p as the public parameters, then chooses n user keys at random, k_i ←$ Z_p ∀i ∈ [n]. Finally, it sets the master key msk = {k_i}_{i∈[n]}.
• MIFE.Enc(pp, k_i, x_{i,τ}) → c_{i,τ}: It first computes k_{i,τ} ← PRG(k_i, τ), then c_{i,τ} ← x_{i,τ} + k_{i,τ} mod p.
• MIFE.DKGen(pp, msk, y_τ) → dk_τ:   dk_τ ← Σ_{∀k_i∈msk} y_{i,τ}·PRG(k_i, τ) mod p
• MIFE.Dec(pp, dk_τ, c_τ, y_τ) → IP(x_τ, y_τ):   IP(c_τ, y_τ) − dk_τ mod p

Analysis: Similar to AHE schemes, MIFE-based SA incurs constant computation and communication costs per user with respect to the total number of users. A very important property of these schemes is that they can deal with dynamic users, by placing zero weights in the vector y_τ for the users that do not provide input at timestamp τ. On the other hand, the disadvantage of these schemes is that they require an online key dealer (KD) as a trusted third party to generate the decryption key for each timestamp.
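A sketch of the one-time-pad construction follows; the PRG is instantiated with SHA-256 and the prime is arbitrary, both purely for illustration.

```python
import hashlib, random

p = 2**61 - 1  # public prime (illustrative)

def PRG(key, tau):
    """PRG: (Z_p, Z) -> Z_p, modeled with a hash for this sketch."""
    h = hashlib.sha256(f"{key}|{tau}".encode()).digest()
    return int.from_bytes(h, "big") % p

n, tau = 5, 3
msk = [random.randrange(p) for _ in range(n)]          # user keys k_i

xs = [random.randrange(1000) for _ in range(n)]
cs = [(x + PRG(k, tau)) % p for x, k in zip(xs, msk)]  # MIFE.Enc

y = [1] * n           # aggregation weights
y[2] = 0              # a dropped-out user simply gets weight zero
dk = sum(yi * PRG(k, tau) for yi, k in zip(y, msk)) % p  # MIFE.DKGen

ip = (sum(yi * c for yi, c in zip(y, cs)) - dk) % p      # MIFE.Dec
assert ip == sum(yi * x for yi, x in zip(y, xs)) % p
```

The zero weight in y illustrates how the key dealer handles dynamic users: the pads of absent users are simply not included in dk_τ.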
MPC-based SA
Another cryptographic tool used to build secure aggregation protocols is multi-party computation (MPC). In MPC, keys are not needed to protect user inputs. Instead, private messages are split into shares and distributed to multiple servers such that t of them can collaborate to reconstruct them. The secret sharing scheme presented in Section 2.3.1 can be used to construct a secure aggregation protocol. To design a secure aggregation protocol from MPC, the SA.Setup phase is not needed since no keys are generated. In the SA.Protect phase, a user protects its input by splitting it into l random shares using SSS.Share(x_{i,τ}, t, l, Z_p), where l is the number of aggregators and p is the smallest prime such that p ≥ nR_u. It then sends one unique share to each aggregator. In the SA.Agg phase, each aggregator locally sums up the shares. Because secret sharing is additively homomorphic, the sum of the shares results in a share of the sum. Finally, at least t aggregators broadcast their shares of the sum, so that any aggregator can then run SSS.Recon and retrieve the sum Σ_{i=1}^n x_{i,τ}.

Analysis: An important property of MPC-based SA is that it does not need a trusted third party, since it does not need a key setup phase. Also, MPC supports dynamic users since it allows any subset of users to participate in the aggregation. This is mainly because MPC does not rely on secret keys that uniquely identify a user. On the contrary, MPC incurs high computation and communication costs since the protection of a user input involves creating O(nm) shares, where n is the number of users and m is the size of the input. Furthermore, to distribute the shares, pre-existing secure channels are needed between the users and the aggregators. The secure channels ensure that each share is received and accessed only by its destined aggregator.

3.5 Summary of Honest-but-Curious Secure Aggregation
In this chapter, we introduced secure aggregation protocols by defining their three phases of execution. Then, we surveyed existing solutions and presented them as instantiations of our definitions. We categorized these solutions into encryption-based solutions (Masking, AHE, FE) and MPC-based solutions, and we compared their pros and cons. We found that secure aggregation based on MPC can offer less strict setup assumptions and allow user dynamics. However, this comes at the cost of higher computation and communication overhead. On the other hand, secure aggregation based on additively homomorphic encryption performs with much better scalability in terms of the number of users. However, the latter requires a strong setup assumption where a key dealer is needed to initialize the protocol. Secure aggregation based on masking serves as a middle ground between AHE and MPC solutions. Finally, all properties can be achieved by secure aggregation based on functional encryption, but it requires the very strong assumption that the trusted key dealer stays online during the protocol execution.

Table 3.2: Comparison of honest-but-curious SA schemes (TP: trusted third party; SC: secure channels; boolean entries are derived from the analyses in this chapter).

Category   | Schemes                                              | TP required | No SC required | Dynamic users | Comp.    | Comm.
Encryption | Masking (DC-net) [ÁC11, DA16]                        | PKI         | ✓              | ✗             | O(n + m) | O(n + m)
Encryption | AHE [LEM14a, JL13, SCR+11]                           | KD          | ✓              | ✗             | O(m)     | O(m)
Encryption | FE [ACF+18b, ABM+20, LŢ19]                           | KD          | ✓              | ✓             | O(m)     | O(m)
MPC        | n-out-of-n SS [BSMD10, GJ11], t-out-of-n SS [Sha79]  | –           | ✗              | ✓             | O(nm)    | O(nm)

This chapter tackled the solutions in the honest-but-curious threat model only. Unfortunately, the adversary in many real-life applications is more powerful. Therefore, these schemes provide a good starting point to build solutions that can achieve stronger and more realistic threat models. In the next chapters, we focus on secure aggregation in the malicious model and we propose a new secure aggregation protocol.

Chapter 4
Secure Aggregation in the Malicious Model

In this chapter, we study secure aggregation protocols when the aggregator and a subset of the users are considered malicious. We review some of the previous work and then we propose a new verifiable secure aggregation protocol (VSA). We first formalize the security definitions of verifiable secure aggregation, which capture stronger security guarantees than previous work. Then we construct our protocol VSA, which satisfies these security definitions.

Secure Aggregation with Malicious Parties
In the malicious model, the adversary controlling the aggregator and a subset of the users may deviate from the protocol steps. This model represents more realistic scenarios than the honest-but-curious model. For example, in smart grid systems, the adversary may control some of the smart meters and additionally succeed in compromising the aggregation server. In such a case, we need to guarantee that the electricity consumption of honest users remains private. Moreover, we need to guarantee that the adversary is detected if the aggregation server produces incorrect statistics about the total electricity consumption. In general, a secure aggregation protocol in the malicious model involves some verification mechanism to ensure Aggregate Unforgeability in addition to Aggregator Obliviousness (see Definitions 3.2.1 and 3.2.2). For this purpose, we propose a formal definition for verifiable secure aggregation protocols which captures both the privacy and the verifiability of the protocol. Moreover, we present in this chapter the first verifiable secure aggregation protocol (VSA) that meets our definition.

Secure Aggregation against Malicious Aggregator
Previous Work
All the solutions in the malicious aggregator model share the idea of using tags created by the users to authenticate the sum of the inputs.
More precisely, the user sends a tag along with its protected input. Then, the aggregator aggregates all the tags to produce a final tag that proves the correctness of the sum. There are mainly two types of tags proposed in the literature:

• Hash-based tags: In DEVA [TLB+21] and VERSA [START_REF] Hahn | Versa: Verifiable secure aggregation for cross-device federated learning[END_REF], the authors use homomorphic hashes [START_REF] Yao | Homomorphic hash and blockchain based authentication key exchange protocol for strangers[END_REF][START_REF] Krohn | On-the-fly verification of rateless erasure codes for efficient content distribution[END_REF] as tags. The problem with the hash approach is that the tags need to be authenticated and saved on a public bulletin board. The public verifier later aggregates the hashes and verifies the sum. This approach leads to a linear increase in the size of the tags with respect to the number of users.

• Signature-based tags: In PUDA [LEÖM15], the authors modify the SCRCS secure aggregation scheme (see Section 3.4.2.2) to provide a verification mechanism for the aggregate. PUDA users generate homomorphic tags based on a homomorphic signature scheme, which is an extension of the signature scheme in [START_REF] Mandell | Improved security for linearly homomorphic signatures: A generic framework[END_REF] to the multi-user setting. The advantage of PUDA over hash-based tags is the succinctness of the aggregated tag (i.e., constant size with respect to the number of users). We present the PUDA scheme in detail in Section 4.4.1, since it is one of the building blocks of VSA.

The problem with these solutions is that they do not consider the case where one or more users are corrupted. In such a case, an adversary who controls both the aggregator and a single user can forge a tag for any arbitrary value. Therefore, these solutions are limited to a few applications.

Secure Aggregation against Malicious Users
Another type of secure aggregation protocol targets applications where the user is not trusted. In these protocols, the user produces a tag that proves to the aggregator that the input is within a valid range. To generate these tags, Karakoç et al. propose KOB [START_REF] Karakoç | Secure aggregation against malicious users[END_REF], which uses an Oblivious Programmable Pseudo-Random Function (OPPRF) protocol. The protocol runs between the user, which provides its input, and the aggregator, which provides an upper-bound value. If the user input is less than the upper bound chosen by the aggregator, the user receives a valid tag on its input. They build their OPPRF using an OT protocol. More specifically, the user and the aggregator perform a secure comparison of their inputs using ℓ OT executions (ℓ is the bit-size of the input). In each run, the aggregator sends a masked part of the tag. After executing all the OTs, the user can reconstruct the tag and unmask it if its input is less than the aggregator's upper bound. This solution ensures that a malicious user cannot provide out-of-bound input and thus limits its influence on the sum of all users' inputs. Unfortunately, in this solution, the aggregator is trusted to perform the check on the user inputs. Additionally, in the case of a malicious aggregator, this solution cannot ensure privacy. Indeed, the malicious aggregator can compute an arbitrary function of the user input instead of computing the tag.
Therefore, this secure aggregation solution is insecure when the aggregator is controlled by the adversary.

Secure Aggregation against both Malicious Users and Aggregator
There is only one solution that studies the extreme case where both the aggregator and the users are malicious. Guo et al. propose a solution [GLL+21] that considers the case where the aggregator is malicious and colludes with a subset of the users. However, the authors assume that the users provide correct inputs. In their solution, the users commit to their inputs using a homomorphic commitment scheme [START_REF] Pryds | Non-interactive and information-theoretic secure verifiable secret sharing[END_REF] and send the commitments to all the other users before the aggregation process starts. After the users receive the commitments, they send their protected inputs to the aggregator. This solution does not prevent the adversary from providing bad inputs and changing the sum. Instead, it only guarantees that the adversary must fix its influence on the aggregation result before knowing the actual sum of the honest users. Hence, this solution does not provide strong security guarantees. Therefore, we identify the need for a strong definition of security for secure aggregation in the presence of malicious parties as a missing requirement.

VSA -Overview
We aim to build a verifiable secure aggregation protocol that ensures both Aggregator Obliviousness and Aggregate Unforgeability in the malicious model. Thus, we propose to extend PUDA [LEÖM15] (which is secure in the malicious aggregator model) to achieve security when both the aggregator and a subset of users are malicious. The main problem with PUDA is that users are trusted to generate tags for their inputs. Thus, a malicious user can produce a tag for any input and hence compromise the sum while still holding a valid tag. Alternatively, the KOB protocol [START_REF] Karakoç | Secure aggregation against malicious users[END_REF] lets the aggregator generate the tags instead of the user, after privately checking the correctness of the user inputs. However, this approach requires the aggregator to be trusted. Therefore, our idea is to involve additional parties in the protocol called taggers. The purpose of the taggers is to generate the user tags. More precisely, each user runs a tagging protocol with the taggers. If the user input is less than the upper bound set by each tagger, the user receives a PUDA-like tag. Then, the user continues as in the PUDA protocol but, instead of generating its own tag, it uses the tag computed by the taggers. The aggregator receives the protected input and the tag of each user and finally outputs the sum and its corresponding tag. In our protocol VSA, the final tag ensures both that the users provided well-formed inputs and that the aggregator correctly computed the sum.

One challenge in building VSA is designing a secure and robust tagging protocol such that, even when some taggers (fewer than a threshold) are malicious, the tag can still be produced. To build this protocol, we use threshold secret sharing to share the tagging keys among the taggers. Each tagger computes a partial tag using its key share. The user robustly aggregates the partial tags under the assumption that there is a sufficient number of honest taggers. One more challenge to deal with is how each tagger privately checks that the user input is in the correct range. KOB proposes an OPPRF approach for this purpose.
However, this solution is only secure when the tagger is honest-but-curious, since it allows the tagger to compute any function of the user input instead of computing the tag. To solve this problem, we design our tagging protocol using zero-knowledge proofs to ensure the correct behavior of the tagger.

Leontiadis et al. presented PUDA [LEÖM15] as a model for aggregating inputs from n users using a single aggregator. We describe this model and its properties. Then, we present the construction proposed by the authors.

PUDA model
The PUDA model involves n users, one aggregator, and a trusted key dealer. It consists of the five following algorithms:

• PUDA.Setup(1^λ): An interactive randomized algorithm which, on input of security parameter λ, generates public parameters pp, an encryption and tagging key (ek_i, tk_i) for each user U_i, an aggregator key ak, and a public verification key vk.
• PUDA.Enc(ek_i, x_i, τ) → c_i: It encrypts the private value x_i using key ek_i for timestamp τ.
• PUDA.Tag(tk_i, x_i, τ) → σ_i: It generates a tag for private value x_i using tagging key tk_i for timestamp τ.
• PUDA.Agg(ak, {c_i}_{∀i∈[n]}, {σ_i}_{∀i∈[n]}, τ) → (X_τ, Υ_τ): It aggregates the ciphertexts and the tags of all users for timestamp τ using the aggregation key ak. It outputs the sum X_τ and the corresponding tag Υ_τ.
• PUDA.Verify(vk, X_τ, Υ_τ, τ) → {0, 1}: It uses the verification key vk to verify the tag Υ_τ of the sum X_τ for timestamp τ.

The authors define security using the two notions of Aggregator Obliviousness and Aggregate Unforgeability (see Section 3.2). They propose a formal definition of these notions using the security games AO_PUDA and AU_PUDA, respectively.

PUDA construction
The PUDA construction is based on the SCRCS secure aggregation scheme (see Section 3.4.2.2). It is described as follows:

• PUDA.Setup(1^λ):
– The trusted key dealer generates the public parameters: It selects the groups G_1, G_2, and G_T (where DDH holds) of prime order p; the generators g_1 ∈ G_1 and g_2 ∈ G_2; a bilinear map e : G_1 × G_2 → G_T; and a hash function H^{G_1} (see Section 2.4.3). It gives pp = (g_1, g_2, G_1, G_2, G_T, e, H^{G_1}) to all users and the aggregator.
– The key dealer generates a uniformly random encryption key per user, ek_i ←$ Z_p. It sets the aggregation key ak = −Σ_{∀i∈[n]} ek_i. It gives ek_i to each user and ak to the aggregator.
– The key dealer generates a uniformly random key a and sends it to all the users. Then, each user generates a uniformly random key b_i ←$ Z_p and forwards g_2^{b_i} to the key dealer. The user sets tk_i = (a, b_i) as its tagging key.
– The key dealer computes vk_1 = g_2^a and vk_2 = ∏_{i=1}^n g_2^{b_i} = g_2^{Σ_{i=1}^n b_i}, and then sets the verification key vk = (vk_1, vk_2).

• PUDA.Enc(ek_i, x_i, τ): The user encrypts x_i for timestamp τ as follows:   c_i ← g_1^{x_i} · H^{G_1}(τ)^{ek_i}

• PUDA.Tag(tk_i, x_i, τ): The user parses tk_i = (a, b_i) and generates a tag of x_i for timestamp τ as follows:   σ_i ← (g_1^a)^{x_i} · H^{G_1}(τ)^{b_i}

• PUDA.Agg(ak, {c_i}_{∀i∈[n]}, {σ_i}_{∀i∈[n]}, τ): The aggregator aggregates the ciphertexts and the tags from each user:
V_τ ← (∏_{i∈[n]} c_i) · H^{G_1}(τ)^{ak} = g_1^{X_τ}
Υ_τ ← ∏_{i∈[n]} σ_i = (g_1^a)^{X_τ} · H^{G_1}(τ)^{Σ_{i∈[n]} b_i}
It then finds X_τ = Σ_{i∈[n]} x_i by computing the discrete logarithm of V_τ.

• PUDA.Verify(vk, X_τ, Υ_τ, τ):   e(Υ_τ, g_2) ?= e(g_1^{X_τ}, vk_1) · e(H^{G_1}(τ), vk_2)
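The tag aggregation and the pairing-based verification can be exercised with the py_ecc library's BN128 curve, as sketched below; hashing the timestamp into the exponent replaces H^{G_1} purely for illustration (it would not be secure, since it reveals the discrete logarithm of H(τ)).

```python
import hashlib, random
from py_ecc.bn128 import G1, G2, add, multiply, pairing, curve_order as p

def H(tau):
    """Toy H^{G1}: hash into the exponent -- fine for a demo, not for security."""
    h = int.from_bytes(hashlib.sha256(str(tau).encode()).digest(), "big") % p
    return multiply(G1, h)

n, tau = 3, 1
a = random.randrange(1, p)
b = [random.randrange(1, p) for _ in range(n)]
vk1, vk2 = multiply(G2, a), multiply(G2, sum(b) % p)
Ht = H(tau)

xs = [random.randrange(1, 50) for _ in range(n)]
# PUDA.Tag: sigma_i = (g1^a)^{x_i} * H(tau)^{b_i}
sigmas = [add(multiply(multiply(G1, a), x), multiply(Ht, bi))
          for x, bi in zip(xs, b)]

# PUDA.Agg over the tags, then PUDA.Verify via the pairing equation
Upsilon = sigmas[0]
for s in sigmas[1:]:
    Upsilon = add(Upsilon, s)
X = sum(xs)
assert pairing(G2, Upsilon) == pairing(vk1, multiply(G1, X)) * pairing(vk2, Ht)
```

The check succeeds because both sides equal e(g_1, g_2)^{aX_τ} · e(H(τ), g_2)^{Σ b_i}, by bilinearity.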
VSA -Formal Definitions
In this section, we present the VSA model and its security properties. VSA is composed of n users U = {U_1, ..., U_n}, m taggers T = {T_1, ..., T_m}, and one aggregator A. The users generate inputs at each timestamp τ. The taggers are parties that compute the tag of a user's input at a given timestamp. Finally, the aggregator is the party that produces the sum of all inputs and its corresponding authenticating tag for a given timestamp. VSA is composed of the following PPT algorithms:

• VSA.Setup(1^λ, t): An interactive randomized algorithm which, on input of security parameter λ and threshold t ≤ m, generates the public parameters pp, an encryption key ek_i for each user U_i, an aggregator key ak, a list of n tagging keys TK_j = {tk_{1,j}, ..., tk_{n,j}} (one key per user) for each tagger T_j, and a public verification key vk.
• VSA.Enc(ek_i, x_i, τ) → c_i: It encrypts the private value x_i using key ek_i for timestamp τ.
• VSA.Tag({tk_{i,j}}_{j∈J_i}, {θ_j}_{j∈J_i}, x_i, τ) → σ_i: An interactive randomized algorithm which takes an input x_i, upper bounds θ_j on the inputs, a subset of tagging keys {tk_{i,j}}_{j∈J_i}, and a timestamp τ. It outputs a tag σ_i.
• VSA.Agg(ak, {c_i}_{∀i∈[n]}, {σ_i}_{∀i∈[n]}, τ) → (X_τ, Υ_τ): It aggregates the ciphertexts and the tags of all users for timestamp τ using the aggregation key ak. It outputs the sum X_τ and a tag Υ_τ.
• VSA.Verify(vk, X_τ, Υ_τ, τ) → {0, 1}: It uses the verification key vk to verify the tag Υ_τ of the sum X_τ for timestamp τ.

The main difference between the VSA model and the PUDA model is due to the additional parties called taggers. Moreover, each user's tagging key is distributed to the m taggers and not to the user. Consequently, the tagging algorithm becomes an interactive protocol between the user and the taggers to produce the tag of the user's input.

Correctness of VSA
We require that when all users provide inputs that are less than all the bounds set by the participating taggers, and when the number of taggers that participate in producing the tag of each user is higher than a threshold t, the verification algorithm outputs 1 with overwhelming probability. More formally: ∀λ, ∀t, ∀J_i, ∀x_i, and ∀θ_j s.t. i ∈ [n], j ∈ J_i, J_i ⊂ [m], |J_i| ≥ t, and x_i ≤ θ_j:

Pr[ VSA.Verify(vk, X_τ, Υ_τ) = 0 | VSA.Setup(1^λ) → (pp, {ek_i}, {tk_{i,j}}, ak, vk); VSA.Enc(ek_i, x_i, τ) → c_{i,τ} ∀i ∈ [n]; VSA.Tag({tk_{i,j}}_{j∈J_i}, {θ_j}_{j∈J_i}, x_i, τ) → σ_{i,τ} ∀i ∈ [n]; VSA.Agg(ak, {c_i}_{i∈[n]}, {σ_i}_{i∈[n]}, τ) → (X_τ, Υ_τ) ] ≤ µ(λ)

where µ is a negligible function.

Security of VSA
We consider that the adversary controls the aggregator A, a set of taggers T_corr ⊂ T, and a set of users U_corr ⊂ U. We present the oracle O_EncTag that we use to define the security games. Throughout the games, we consider that the minimal upper bound value θ_min = min({θ_j}_{∀j∈[m]}) is known to the adversary.

• O_EncTag: The oracle is called with a tuple (i, x, τ). If U_i ∉ U_corr, it computes the ciphertext c_i and the tag σ_i of x for timestamp τ and returns (c_i, σ_i). If U_i ∈ U_corr, it returns only the tag σ_i. In both cases, the returned tag is σ_i = ⊥ whenever x > θ_min.

Aggregator Obliviousness (AO)
We give a formal definition of AO using a security game. Our game is an improved version of the AO_PUDA game from PUDA [LEÖM15] that captures (in addition to the static corruption of the aggregator and some users) the corruption of some taggers. The game proceeds in three phases:

Setup and Corruption Phase: The adversary A chooses the security parameter λ and the threshold t ≤ m and accordingly gets the public parameters of the protocol pp, the aggregator's key ak, the verification key vk, the secret keys of the parties in U_corr (i.e., {ek_i}_{∀U_i∈U_corr}), and the secret keys of the parties in T_corr (i.e., {TK_j}_{∀T_j∈T_corr}).
Learning Phase: The adversary A interacts with the oracle O_EncTag polynomially many times by sending tuples (i, x, τ) such that, for every user identifier i, if U_i ∉ U_corr, then x ≤ θ.

Challenge Phase: The adversary A chooses a timestamp τ* that has never been queried, and a set of non-corrupted users U* (i.e., U* ∩ U_corr = ∅). It chooses two lists X^0_{τ*} and X^1_{τ*}, each corresponding to inputs for the users in U* at τ*. It queries the oracle O_AO with these lists, where O_AO is defined as follows:

• O_AO: The oracle is called with a set of users U* ⊂ U \ U_corr and two lists of inputs (i, x^0_i, τ)_{∀U_i∈U*} and (i, x^1_i, τ)_{∀U_i∈U*} such that Σ_{∀U_i∈U*} x^0_i = Σ_{∀U_i∈U*} x^1_i. It samples a uniformly random bit b ←$ {0,1}, computes the ciphertexts and tags of the inputs in X^b_{τ*}, and returns {(c^b_i, σ^b_i)}_{∀U_i∈U*} to A. Finally, A outputs a guess b* ∈ {0,1} and wins the game if b* = b.

Game 1 Aggregator Obliviousness (AO)
The Learning Phase:
/* A queries the oracle a polynomial number of times for any i, x, and τ s.t. if U_i ∉ U_corr then x ≤ θ */
(c_i, σ_i) ← O_EncTag(i, x, τ)
/* A queries the oracle a polynomial number of times for any i, x, and τ s.t. U_i ∈ U_corr */
(σ_i) ← O_EncTag(i, x, τ)
The Challenge Phase:
A → τ*, U* ⊂ U \ U_corr
A → X^0_{τ*}, X^1_{τ*}
{(c^b_i, σ^b_i)}_{∀U_i∈U*} ← O_AO(X^0_{τ*}, X^1_{τ*})
A → b* ∈ {0, 1}

Aggregate Unforgeability (AU)
We give a formal definition of AU using a security game. Our game is an improved version of the AU_PUDA game that captures (in addition to the static corruption of the aggregator) the corruption of some users and taggers. Similar to the previous definition (i.e., AO), the game proceeds in three phases. The Setup and Corruption Phase and the Learning Phase are the same as in the AO game. In the Challenge Phase, A submits a tuple (τ*, sum_{τ*}, Υ_{τ*}).

Game 2 Aggregate Unforgeability (AU)
The Learning Phase:
/* A queries the oracle a polynomial number of times for any i, x, and τ s.t. U_i ∉ U_corr and x ≤ θ */
(c_i, σ_i) ← O_EncTag(i, x, τ)
/* A queries the oracle a polynomial number of times for any i, x, and τ s.t. U_i ∈ U_corr */
(σ_i) ← O_EncTag(i, x, τ)
The Challenge Phase:
A → (τ*, sum_{τ*}, Υ_{τ*})

The adversary wins the game if it succeeds in forging a valid tag of a sum. In [LEÖM15], the authors define two types of forgeries. We reconsider these types and add a new type:

• Type I Forgery: VSA.Verify(vk, sum_{τ*}, Υ_{τ*}) = 1 and the adversary A never made a previous query with timestamp τ*.
• Type II Forgery: VSA.Verify(vk, sum_{τ*}, Υ_{τ*}) = 1 and A already made queries with timestamp τ*; however, the sum sum_{τ*} ≠ Σ_{∀U_i∈U} x_{i,τ*}.
• Type III Forgery: VSA.Verify(vk, sum_{τ*}, Υ_{τ*}) = 1 and A already made queries with timestamp τ*; however, the contribution of the corrupted users to sum_{τ*} exceeds the allowed bound, i.e., Σ_{∀U_i∈U_corr} x_{i,τ*} > |U_corr|·θ_min.

Ideal Functionality for Distributed Tagging Protocol
In this section, we present the ideal functionality F^{t,G}_DTAG for the distributed tagging protocol Π^{t,G}_DTAG ≡ VSA.Tag. The protocol runs between the user and the m taggers. The user provides an input x, and each tagger provides a share of the tagging key tk and a bound θ_j. The protocol outputs a tag computed as PUDA.Tag(tk, x, τ) if x is less than all the bounds of the honest taggers. Otherwise, the user gets no output. Figure 4.2 shows the functionality F^{t,G}_DTAG.

Functionality F^{t,G}_DTAG
F^{t,G}_DTAG runs between a user U and m taggers T = {T_1, ..., T_m} and is parameterized by the reconstruction threshold t ≤ m of the SSS scheme. Let T_h ⊂ T be the set of honest taggers such that |T_h| ≥ t.
Auxiliary Inputs: The bit size ℓ of the user's input x and the taggers' bounds θ_j; a function PUDA.Tag : Z_p^2 × [2^ℓ]^- × {0,1}* → G where G is a cyclic group of order p.
• Upon receiving a message (sid, input, x) from U where x ∈ [2^ℓ]^-, store x. Then, send the message (sid, input, U) to S.
• Upon receiving a message (sid, input, ⟨tk⟩_j, θ_j) from T_j where ⟨tk⟩_j = (⟨a⟩_j, ⟨b⟩_j) ∈ Z_p^2 and θ_j ∈ [2^ℓ]^-, store ⟨tk⟩_j and θ_j. Then, send the message (sid, input, T_j) to S and continue.
• When (sid, input) has been received from U and all T_j, set tk ← (a, b) such that a ← SSS.Recon({(j, ⟨a⟩_j)}_{∀j|T_j∈T_h}, Z_p) and b ← SSS.Recon({(j, ⟨b⟩_j)}_{∀j|T_j∈T_h}, Z_p). Finally, check if x ≤ min{θ_j}_{∀j|T_j∈T_h}. If yes, compute σ ← PUDA.Tag(tk, x, sid)ᵃ. If no, set σ ← ⊥. Send (sid, output, σ) to U and halt.
ᵃ The session id (sid) is the current timestamp (sid = τ).

VSA -Construction in the F^{t,G}_DTAG-Hybrid Model
In this section, we first present our VSA protocol. Then, we prove its correctness and that it achieves the security properties defined in Section 4.5. We describe the protocol in the hybrid model of the functionalities F^{D,n}_CRS (Section 2.4.2), F^{t,G}_DTAG (Section 4.6), and F^{t,n,F}_RShare (Section 2.4.4). Figure 4.3 illustrates a single run of the protocol.

VSA Scheme
Let Λ be a sampling algorithm that returns a uniformly random tuple of the form (g_1, g_2, p, G_1, G_2, G_T, e, H^{G_1}) where G_1, G_2, and G_T are groups of prime order p in which DDH holds; g_1 is a generator of G_1; g_2 is a generator of G_2; e : G_1 × G_2 → G_T is a bilinear map; and H^{G_1} is a hash function as defined in Section 2.4.3.
• VSA.Setup(1^λ, t):
– Distributing the public parameters: The m taggers, the aggregator, and the n users query F^{Λ,m+n+1}_CRS and receive the public parameters pp = (g_1, g_2, p, G_1, G_2, G_T, e, H^{G_1}).
– Distributing the encryption and aggregation keys: Each tagger T_j uniformly samples ek_{i,j} ←$ Z_p per user (i.e., ∀U_i ∈ U) and computes ak_j = −Σ_{∀U_i∈U} ek_{i,j}. It sends to each user U_i ∈ U the key ek_{i,j} and to the aggregator ak_j. Each user computes its encryption key ek_i = Σ_{j∈[m]} ek_{i,j} and the aggregator computes the aggregation key ak = Σ_{j∈[m]} ak_j.
– Distributing the tagging keys: Each tagger T_j queries F^{t,m,Z_p}_RShare n + 1 times and receives the share ⟨a⟩_j and the n shares ⟨b_i⟩_j for each i ∈ [n]. It sets the tagging keys tk_{i,j} = (⟨a⟩_j, ⟨b_i⟩_j) ∀i ∈ [n]. The tagger's key is TK_j = {tk_{1,j}, ..., tk_{n,j}}.
– Distributing the verification keys: Each tagger T_j publishes vk_{1,j} = g_2^{⟨a⟩_j} and vk_{2,j} = g_2^{Σ_{i∈[n]} ⟨b_i⟩_j}. Each tagger then computes SSS.ExpRecon([m], {vk_{1,j}}_{∀j∈[m]}, G_2) → vk_1 = g_2^a and SSS.ExpRecon([m], {vk_{2,j}}_{∀j∈[m]}, G_2) → vk_2 = g_2^{Σ_{i∈[n]} b_i}, and finally vk = (vk_1, vk_2) is published.

• VSA.Enc(ek_i, x_i, τ) ≡ PUDA.Enc(ek_i, x_i, τ)

• VSA.Tag({tk_{i,j}}_{j∈J_i}, {θ_j}_{j∈J_i}, x_i, τ): The user U_i (with input x_i) and each tagger T_j (with input (tk_{i,j}, θ_j)) query F^{t,G_1}_DTAG. The user receives the output σ_τ from F^{t,G_1}_DTAG.

• VSA.Agg(ak, {c_i}_{∀i∈[n]}, {σ_i}_{∀i∈[n]}, τ) ≡ PUDA.Agg(ak, {c_i}_{∀i∈[n]}, {σ_i}_{∀i∈[n]}, τ)

• VSA.Verify(vk, X_τ, Υ_τ, τ) ≡ PUDA.Verify(vk, X_τ, Υ_τ, τ)

Proof of Correctness
The correctness of our construction follows directly from the correctness of the PUDA protocol. To prove this argument, observe that ∀λ, ∀t, such that VSA.Setup(1^λ, t) and PUDA.Setup(1^λ), the following holds:

• ∀x_i, the ciphertexts produced by the PUDA and VSA schemes are statistically indistinguishable since VSA.Enc ≡ PUDA.Enc.
• ∀J_i, ∀x_i, ∀θ_j s.t. (J_i ⊂ [m]) ∧ (|J_i| ≥ t) ∧ (x_i ≤ θ_j), the tags produced by the PUDA and VSA schemes are statistically indistinguishable. This holds because VSA outputs the result of F^{t,G_1}_DTAG, which produces tags computed with the PUDA.Tag algorithm if x_i is less than the minimum bound and a sufficient number of taggers participate.
• The aggregation and verification algorithms are equivalent: VSA.Agg ≡ PUDA.Agg and VSA.Verify ≡ PUDA.Verify.
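The verification-key derivation in the setup above relies on Shamir reconstruction "in the exponent": the Lagrange coefficients are computed as usual, but applied as exponents on the published group elements. A sketch over a toy multiplicative group (names and parameters illustrative):

```python
import random

q = 1019               # share field: the prime order of the group (toy)
P = 2 * q + 1          # group: order-q subgroup of Z_P*
g = 4                  # generator of the subgroup

def lagrange_at_zero(js):
    """Lagrange coefficients lambda_j for interpolation at x = 0, mod q."""
    lams = {}
    for j in js:
        lam = 1
        for m in js:
            if m != j:
                lam = lam * m % q * pow(m - j, -1, q) % q
        lams[j] = lam
    return lams

def exp_recon(shares):
    """SSS.ExpRecon: given {(j, g^{<s>_j})}, output g^s."""
    lams = lagrange_at_zero(list(shares))
    out = 1
    for j, gj in shares.items():
        out = out * pow(gj, lams[j], P) % P
    return out

# Shamir-share a secret s among 5 parties with threshold 3
t, n, s = 3, 5, random.randrange(q)
coeffs = [s] + [random.randrange(q) for _ in range(t - 1)]
sh = {j: sum(c * pow(j, e, q) for e, c in enumerate(coeffs)) % q
      for j in range(1, n + 1)}
pub = {j: pow(g, sh[j], P) for j in (1, 3, 5)}  # parties publish g^{<s>_j}
assert exp_recon(pub) == pow(g, s, P)
```

Since ∏_j (g^{⟨s⟩_j})^{λ_j} = g^{Σ_j λ_j ⟨s⟩_j} = g^s, the secret itself never needs to be reconstructed in the clear.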
VSA - Security Analysis

We now perform a security analysis of the scheme. To prove that our VSA scheme achieves Aggregator Obliviousness and Aggregate Unforgeability, we first show that both our AO game and our AU game can be reduced to the AO PUDA game and the AU PUDA game, respectively.

Lemma: If a PPT adversary A wins our AU game (resp. our AO game) with non-negligible probability, then there exists a PPT adversary B that wins the AU PUDA game (resp. the AO PUDA game) with non-negligible probability.

Proof (sketch). We first describe the reduction B for the AU game. During the Setup and Corruption Phase: B samples a′ ←$ Z_p and b′_i ←$ Z_p for each i ∈ [n]. Then it computes SSS.Share(a′, t, m, Z_p) and SSS.Share(b′_i, t, m, Z_p) for each i ∈ [n]. It sets TK_j = {(⟨a′⟩_j, ⟨b′_1⟩_j), ..., (⟨a′⟩_j, ⟨b′_n⟩_j)} for each corrupted T_j ∈ T_corr. B gives A the public parameters pp, the verification key of PUDA vk, the aggregation key ak, the corrupted users' encryption keys {ek_i}_{∀U_i∈U_corr}, and the randomly generated lists of shares TK_j of the corrupted taggers. B maintains a list History of the queried values.

During the Learning Phase: When A queries O_EncTag, B returns the values from History if the user identifier and timestamp were previously queried. Otherwise,
• If U_i ∉ U_corr: it queries O^PUDA_EncTag with (i, x, τ) and gets (c_i, σ_i), saves it to History, and sends it to A.
• If U_i ∈ U_corr and x ≤ θ_min: B computes PUDA.Tag(tk_i, x, τ) → σ_i, saves σ_i to History, and sends it to A.
• If U_i ∈ U_corr and x > θ_min: it sets σ_i ← ⊥, saves σ_i to History, and sends it to A.
In the Challenge Phase, B forwards A's output tuple (τ*, sum_τ*, Υ_τ*) to its own challenger.

The reduction for the AO game proceeds similarly. During the Setup and Corruption Phase: B samples a′ ←$ Z_p and b′_i ←$ Z_p, computes their shares as above, samples a random encryption key rk_i ←$ Z_p for each corrupted user, and sets TK_j = {(⟨a′⟩_j, ⟨b′_1⟩_j), ..., (⟨a′⟩_j, ⟨b′_n⟩_j)} for each corrupted T_j ∈ T_corr. B gives A the public parameters pp, the verification key of PUDA vk, the aggregation key ak, the randomly generated encryption keys of the corrupted users {rk_i}_{∀U_i∈U_corr}, and the randomly generated lists of shares TK_j of the corrupted taggers. B maintains a list History of the queried ciphertexts and tags.

During the Learning Phase: When A queries O_EncTag, B returns the values from History if the user identifier and timestamp were previously queried. Otherwise,
• If x ≤ θ_min: B queries O^PUDA_EncTag with (i, x, τ) and gets (c_i, σ_i). B computes c′_i ← VSA.Enc(rk_i, x, τ), saves (c′_i, σ_i) to History, and sends it to A.
• If x > θ_min: B computes c′_i ← VSA.Enc(rk_i, x, τ), saves (c′_i, ⊥) to History, and sends it to A.

In this reduction, the differences between A's simulated view (by B) and A's real view (when it plays the real game) are: (i) the encryption keys of the corrupted users, (ii) the tagging keys' shares of the corrupted taggers, and (iii) the ciphertexts received from the oracles. The tagging keys' shares are statistically indistinguishable in both views, as shown in the previous proof. Similarly, the encryption keys are chosen uniformly at random, as if they were generated by the setup algorithm. It remains to show that the ciphertexts received from the oracles are indistinguishable in both views. In both views, the ciphertexts are generated with randomly generated encryption keys. Additionally, they both satisfy the relation Π_{i∈[n]} c_i = (g_1^{Σ_{i∈[n]} x_i}) H(τ)^{-ak}, which is the relation satisfied by PUDA ciphertexts.

We now bound the probability of a Type III forgery. Let Pr[A^AU_III] denote the probability that A outputs a tuple (τ*, sum_τ*, Υ_τ*) such that VSA.Verify(vk, sum_τ*, Υ_τ*) = 1. Now observe that to have Σ_{∀U_i∈U_corr} x_{i,τ*} > |U_corr| θ_min, there must exist at least one x_{i,τ*} > θ_min. Let U′_corr ≠ ∅ be the set of users whose identifiers were queried with x_{i,τ*} > θ_min. Observe that VSA.Tag returns ⊥ when run with an input x > θ_min. Therefore, the oracle O_EncTag returns σ_i = ⊥ (no output) for all queries (i, x, τ*) where U_i ∈ U′_corr. Let sum′_τ* = Σ_{∀U_i∈U\U′_corr} x_{i,τ*} + Σ_{∀U_i∈U′_corr} x′_{i,τ*} such that sum_τ* ≠ sum′_τ* and x′_{i,τ*} > θ_min ∀U_i ∈ U′_corr.
The oracle O_EncTag returns the same result (i.e., ⊥) for both queries (i, x_{i,τ*}, τ*) and (i, x′_{i,τ*}, τ*) ∀U_i ∈ U′_corr. Thus, we can deduce that Pr[VSA.Verify(vk, sum_τ*, Υ_τ*) = 1] ≃ Pr[VSA.Verify(vk, sum′_τ*, Υ_τ*) = 1]. Therefore, Pr[A^AU_III] = Pr[A^AU_II]. By relying on Corollary 4.8.3, we conclude that our scheme ensures aggregate unforgeability against Type III Forgery under the LEOM assumption in the random oracle model and the F^{t,G_1}_DTAG-hybrid model.

Realization of Distributed Tagging Protocol

In this section, we show how to build the protocol Π^{t,G}_DTAG in the F^G_TAG-hybrid model and prove that it UC-realizes the functionality F^{t,G}_DTAG as long as |T_corr| < m - t (e.g., for t = m/2, it is sufficient to choose |T_corr| < m/2).

Functionality F^G_TAG

F^G_TAG runs between a user U and a tagger T and is parameterized by the function f and the reconstruction threshold t ≤ m of the SSS scheme.
Auxiliary Inputs: The bit size ℓ of the user's input x and of the tagger's bound θ. A function PUDA.Tag : Z^2_p × [2^ℓ]⁻ × {0,1}* → G where G is a cyclic group of order p.
• Upon receiving a message (sid, input, x) from U where x ∈ [2^ℓ]⁻, store x. Then, if T is corrupted, send the message (sid, input, U) to S and receive the message (sid, ok, η) where η : [2^ℓ]⁻ → {0,1}. If η(x) = 0, abort; otherwise, continue.
• Upon receiving a message (sid, input, k, θ) from T where k ∈ Z^2_p and θ ∈ [2^ℓ]⁻, store k and θ. Then, if U is corrupted, send the message (sid, input, T) to S.
  - If S sends (sid, abort), send abort to all parties and abort.
  - If S sends (sid, ok), continue.
• When (sid, input) has been received from both U and T, check if x ≤ θ. If yes, compute σ ← PUDA.Tag(k, x, sid^a). If no, set σ ← ⊥. Send (sid, output, σ) to U and halt.
^a The session id (sid) is the same as the timestamp (sid = τ).

Protocol Π^{t,G}_DTAG

Inputs: The user's input x ∈ [2^ℓ]⁻ and each tagger T_j's input ⟨tk⟩_j = (⟨a⟩_j, ⟨b⟩_j), θ_j ∈ Z^2_p × [2^ℓ]⁻.
• User U initializes an empty set Ω, then interacts with each tagger T_j as follows:
  - U sends (sid, ssid, input, x) to F^G_TAG.
  - T_j sends (sid, ssid, input, ⟨tk⟩_j, θ_j) to F^G_TAG.
  - U receives (sid, ssid, output, σ_j) from F^G_TAG (or the sub-session ssid aborts). If σ_j ≠ ⊥, it adds (j, σ_j) to the set Ω.
• For every J_k ⊂ Ω s.t. |J_k| = t, U computes SSS.ReconExp({(j, σ_j)}_{∀(j,σ_j)∈J_k}, G) → σ′_k. The user outputs σ ← majority({σ′_k}_{∀k∈[C(n,t)]}), where C(n,t) denotes the number of such subsets.

Theorem: The protocol Π^{t,G}_DTAG UC-realizes the functionality F^{t,G}_DTAG in the F^G_TAG-hybrid model against a static malicious adversary, provided that |T_corr| < m - t.

Proof. To prove the theorem, we build a simulator S that runs in the ideal world and interacts with F^{t,G}_DTAG and Z. It simulates for the adversary A the interaction with the non-corrupted parties and Z. We consider the different cases depending on which parties A corrupts. In each case, S runs internally an instance of A. It simulates the interaction with Z by forwarding the messages sent from Z to A and vice-versa. It simulates the interaction with the protocol parties based on a well-defined strategy described in what follows.

Case 1: A corrupts U. Observe that U only interacts with F^G_TAG. Thus, S simply forwards the messages between A and F^G_TAG and vice-versa. Hence, the simulated execution of A and the real one are identical. Additionally, the honest parties do not get any output. Therefore, we proved that Π^{t,G}_DTAG UC-realizes the functionality F^{t,G}_DTAG in the F^G_TAG-hybrid model if A corrupts only U.

Case 2: A corrupts T_corr ⊂ T. Observe that each T_j ∈ T_corr sends a message to F^G_TAG only and receives nothing. So S simply forwards the message from A to F^G_TAG. Hence, the simulated execution of A and the real one are identical. We only need to show that the output of the honest parties (the user, since it is the only party that gets an output) is identical in both worlds. In the real world, the user outputs σ_REAL ← majority({σ′_k}_{∀k∈[C(n,t)]}) where σ′_k ← SSS.ReconExp({(j, σ_j)}_{∀(j,σ_j)∈J_k}, G). In the ideal world, the user outputs σ_IDEAL ← PUDA.Tag(tk, x, sid) where tk ← (a, b) such that a ← SSS.Recon({(j, ⟨a⟩_j)}_{∀j|T_j∈T_h}, Z_p) and b ← SSS.Recon({(j, ⟨b⟩_j)}_{∀j|T_j∈T_h}, Z_p). Thus, we need to show that σ_IDEAL = σ_REAL. Notice that T_h = T \ T_corr. Thus, for |T_corr| < m - t, we have |T_h| > t and thus σ_IDEAL = (g_1^x)^a H(τ)^b where g_1, H(τ) ∈ G. In the real world, σ_j = (g_1^x)^{⟨a⟩_j} H(τ)^{⟨b⟩_j} ∀T_j ∈ T_h. So, for each σ′_k ← SSS.ReconExp({(j, σ_j)}_{∀(j,σ_j)∈J_k}, G) where J_k ⊂ T_h, we have σ′_k = (g_1^x)^a H(τ)^b = σ_IDEAL (from Lemma 2.3.2). Since |T_h| > t, we have C(|T_h|, t) > 1. Hence, more than one of the σ′_k's is equal to σ_IDEAL. On the other hand, since |T_corr| < t, for any two subsets J_{k_1}, J_{k_2} we have Pr[σ′_{k_1} = σ′_{k_2} ≠ σ_IDEAL] ≃ 0 (from Definition 2.3.2). Therefore, Pr[majority({σ′_k}_{∀k∈[C(n,t)]}) ≠ σ_IDEAL] ≃ 0.

Realization of The Tagging Protocol

The ideal functionality of the protocol Π^G_TAG (depicted in Figure 4.5) computes the tag PUDA.Tag(k = (a, b), x, τ) = (g^a)^x H(τ)^b if the user input x is less than a bound θ chosen by the tagger. For illustration purposes, we first present a version of the protocol that is secure in the honest-but-curious (HBC) model (the user and the tagger follow the protocol steps). Next, we present our Π^G_TAG protocol and prove that it UC-realizes F^G_TAG in the malicious model.

Tagging Protocol in the HBC model

The user generates a uniformly random value r ←$ Z_p, computes P = (g^x)^r and Q = H(τ)^r, and sends P and Q to the tagger. The tagger computes E = P^a Q^b. Notice that the user can now obtain the tag from E by computing σ = E^{1/r}. To prevent the user from obtaining the tag σ if its input x > θ, the tagger does not send E to the user. Instead, it runs a garbled circuit protocol as described in Section 2.4.6 with the function f_{θ,E}:

f_{θ,E}(x) = E if x ≤ θ; ⊥ otherwise.

This protocol is only secure in the HBC model since a malicious user may compute the tag on a value x > θ and then evaluate the garbled circuit on a different value x′ ≤ θ. On the other hand, a malicious tagger may garble an arbitrary function f′ and thus influence the protocol to output f′(x) instead of σ.

Tagging Protocol Π^G_TAG in the malicious model

To correctly realize the protocol Π^G_TAG in the malicious model, we need the tagger to ensure that the tag is evaluated on the same input used to evaluate the garbled circuit. Additionally, we need the user to ensure that the garbled circuit is evaluating a tag of the data point x. To solve the first issue, we propose to modify the OT protocol used in the garbled circuit protocol. The goal is to use the messages sent by the user in the OT protocol to evaluate the tag σ directly. By reusing these messages, we make sure that the tag is computed on the same input used to evaluate the circuit. To solve the second issue, we use a zero-knowledge proof of knowledge of the language R_Tag (see Section 2.4.7) to verify the result of the protocol. The details of the protocol are described in Figure 4.7.

Tagging Protocol Π^G_TAG (Figure 4.7)

User input: x = x_1||...||x_ℓ. Tagger input: θ and k = (a, b).
1. Tagger: computes F, {l^b_{x,i}}, l^b_out ← Grb(1^λ, f_θ) ∀i ∈ [ℓ], ∀b ∈ {0,1}; samples α ←$ Z_p and γ ←$ Z_p; sends (γ, A = g^α, F) to the user.
2. User: samples r ←$ Z_p and β ∈ Z_p such that γ = rβ, and β_i ←$ Z_p ∀i ∈ [ℓ] such that β = Σ_{i=1}^ℓ (β_i × 2^{i-1}); computes B_i ← A^{x⟨i⟩} g^{β_i}, κ_i ← G_{(A,B_i)}(A^{β_i}), and P_i ← B_i^r ∀i ∈ [ℓ], as well as Q ← H(τ)^r; sends {B_i, P_i}_{∀i∈[ℓ]} and Q to the tagger.
3. User: proves via F^{R^{ℓ+1}_DL}_ZK the statement ({(B_i, P_i)}_{i∈[ℓ]}, (H(τ), Q)) with witness r; the tagger receives True.
4. Tagger: computes κ^0_i ← G_{(A,B_i)}(B_i^α) and κ^1_i ← G_{(A,B_i)}((B_i/A)^α) ∀i ∈ [ℓ]; c^0_i ← Enc*_{κ^0_i}(l^0_{x,i}) and c^1_i ← Enc*_{κ^1_i}(l^1_{x,i}) ∀i ∈ [ℓ]; P ← (g^{-γ} Π_{i=1}^ℓ (P_i)^{2^{i-1}})^{1/α}; E ← P^a Q^b; C ← Enc*_{l^1_out}(E); sends {(c^0_i, c^1_i)}_{∀i∈[ℓ]} and C to the user.
5. User: computes l^{x⟨i⟩}_{x,i} ← Dec_{κ_i}(c^{x⟨i⟩}_i) ∀i ∈ [ℓ], l^res_out ← Eval(F, {l^{x⟨i⟩}_{x,i}}_{∀i∈[ℓ]}), and E ← Dec_{l^res_out}(C).
6. Tagger: proves via F^{R_Tag}_ZK the statement (P, Q, E) (where P = g^{xr}) with witness (a, b); the user receives True.
7. User: returns σ ← E^{1/r}.

Theorem: The protocol Π^G_TAG UC-realizes the functionality F^G_TAG in the F_ZK-hybrid model under static corruption, under the CDH and Inv-DDH assumptions.

Proof. To prove the theorem, we build a simulator S that runs in the ideal world and interacts with F^G_TAG and Z. It simulates for the adversary A the interaction with the non-corrupted parties and Z.
We consider two different cases depending on which parties A corrupts. In each case, S runs internally an instance of A. It simulates the interaction with Z by forwarding the messages sent from Z to A and vice-versa. It simulates the interaction with the protocol parties based on a well-defined strategy described in what follows.

Case 1: A corrupts U.
1. S computes F, {l^b_{x,i}}, l^b_out ← GC.Grb(1^λ, f_{θ_IDEAL}) ∀i ∈ [ℓ], ∀b ∈ {0,1} (with θ_IDEAL = 2^ℓ - 1, as discussed below) and samples the values α, γ ←$ Z_p uniformly at random. It sends the message (sid, γ, A = g^α, F) to A and waits for A's response.
2. When A sends the protocol message (sid, {B_i, P_i}_{∀i∈[ℓ]}, Q) and the message (sid, ({(B′_i, P′_i)}_{i∈[ℓ]}, (H′, Q′)), r) to F^{R^{ℓ+1}_DL}_ZK, S checks if H′ = H(sid), Q′ = Q, B_i = B′_i, and P_i = P′_i ∀i ∈ [ℓ].
3. Using the extracted witness r, S computes β ← γ/r and checks whether Π_{i∈[ℓ]} (B_i)^{2^{i-1}} = (g^α)^x g^β for some x ∈ [2^ℓ]⁻. If this equality does not hold, this means A chose values β_i such that Σ_{i=1}^ℓ (β_i × 2^{i-1}) ≠ β. In this case, S sets the value bad_input = 1 and continues.
4. S sends the message (sid, input, x) to F^G_TAG and receives (sid, output, σ). S computes:
• κ^0_i ← G_{(A,B_i)}(B_i^α) ∀i ∈ [ℓ],
• κ^1_i ← G_{(A,B_i)}((B_i/A)^α) ∀i ∈ [ℓ],
• c^0_i ← Enc*_{κ^0_i}(l^0_{x,i}) ∀i ∈ [ℓ],
• c^1_i ← Enc*_{κ^1_i}(l^1_{x,i}) ∀i ∈ [ℓ].
Then, S computes:
• if σ = ⊥, it samples ε ←$ {0,1}^λ and E ←$ G, then computes C ← Enc*_ε(E);
• if σ ≠ ⊥ and bad_input = 0, it sets E = σ^r, then computes C ← Enc*_{l^1_out}(E);
• if σ ≠ ⊥ and bad_input = 1, it sets E ←$ G, then computes C ← Enc*_{l^1_out}(E).
S finally sends (sid, {(c^0_i, c^1_i)}_{∀i∈[ℓ]}, C) to A.
5. (Proceed to this step only if σ ≠ ⊥.) When A sends (sid, (P′, Q′, E′)) to F^{R_Tag}_ZK, if P′ = g^{rx}, Q′ = Q, and E′ = E, S sends (sid, True) to A; otherwise, it sends (sid, False) to A.

In this simulation, the difference between the simulated execution of A and the real one is in the generated garbled circuit F and in the ciphertext C. We argue that both values are indistinguishable between the real world and the ideal world.

Indistinguishability of the simulated garbled circuit F: The simulated garbled circuit is garbled from the function f_{θ_IDEAL} where θ_IDEAL = 2^ℓ - 1. The only difference is in the value of θ. However, the security of the garbled circuit scheme (see [Yao86]) guarantees computational indistinguishability between these two circuits.

Indistinguishability of the simulated ciphertext C: There are three different cases in our simulation.
Case (i), when σ = ⊥: This means that A's input x is greater than θ_REAL (the input of the honest tagger). In this case, the adversary in the real world will not be able to decrypt the value of C since it will not obtain l^1_out from the evaluation of the garbled circuit (A will obtain l^0_out since x ≰ θ_REAL).
Hence, the ciphertext C in the ideal world, which is encrypted with a random key ε, is computationally indistinguishable from C in the real world based on the security property of the encryption scheme (see Definition 2.3.5 and Definition 2.3.7).

Case (ii), when σ ≠ ⊥ and bad_input = 0: In the ideal world, the value of C is the encryption of σ^r with l^1_out, where σ is the output of F^G_TAG and r is the randomness chosen by A (extracted from the F^{R^{ℓ+1}_DL}_ZK message). In the real world, the value of C is the encryption of E with l^1_out, where E = P^a Q^b. We have P = (g^{-γ} Π_{i=1}^ℓ (P_i)^{2^{i-1}})^{1/α} = g^{(rαx + rβ - γ)/α} = g^{rx} (since γ = rβ when bad_input = 0). Additionally, Q = H(τ)^r, so we have E = (g^{rx})^a H(τ)^{rb} = σ^r. Hence, the values of C are identical in both worlds.

Case (iii), when σ ≠ ⊥ and bad_input = 1: In the ideal world, the value of C is the encryption of E_IDEAL with l^1_out, where E_IDEAL is chosen uniformly at random in G. In the real world, the value of C is the encryption of E_REAL with l^1_out, where E_REAL = P^a Q^b. We have P = (g^{-γ} Π_{i=1}^ℓ (P_i)^{2^{i-1}})^{1/α} = g^{(rαx + rβ - γ)/α} ≠ g^{rx} (since γ ≠ rβ when bad_input = 1). So we have E_REAL = (g^{(rαx + rβ - γ)/α})^a H(τ)^{rb}. Notice that Z does not know α, which is chosen uniformly at random in Z_p. We can write E_REAL = g^{c_1 + c_2 α^{-1}} H(τ)^{c_3} where c_1 = arx, c_2 = b(rβ - γ) ≠ 0 (since rβ ≠ γ), and c_3 = rb are three values chosen by Z. Let us assume that Z can distinguish E_REAL from E_IDEAL with non-negligible probability given only g, g^α ∈ G and τ ∈ {0,1}*. We show that we can then solve the Inv-DDH problem (i.e., given the tuple (g, g^X, g^Z), tell whether g^Z = g^{X^{-1}}) using Z. For a tuple (g, g^X, g^Z), query Z with (g, g^α = g^X, E_REAL = g^{c_1}(g^Z)^{c_2} H(τ)^{c_3}). If g^Z = g^{X^{-1}}, this value is distributed exactly as E_REAL; otherwise, it is independent of α and thus distributed like E_IDEAL. Hence, Z's answer solves Inv-DDH, which contradicts our assumption. Therefore, C is computationally indistinguishable in both worlds under the Inv-DDH assumption.

Case 2: A corrupts T.
1. When A sends the message (sid, γ, A = g^α, F), S runs the honest user's algorithm on the dummy input x = 1: it samples r, β, and β_i as the user would, computes {B_i, P_i}_{∀i∈[ℓ]} and Q, and sends them to A.
2. S sends the statement (sid, ({(B′_i, P′_i)}_{i∈[ℓ]}, (H′, Q′))) for F^{R^{ℓ+1}_DL}_ZK. If H′ = H(sid), Q′ = Q, B_i = B′_i and P_i = P′_i ∀i ∈ [ℓ], and the relation R^{ℓ+1}_DL holds, S sends (sid, True) to A; otherwise, it sends (sid, False) to A and (sid, abort) to F^G_TAG.
3. When A sends the message (sid, {(c^0_i, c^1_i)}_{∀i∈[ℓ]}, C), S starts internally S_snd (defined in Lemma 2.4.2) and hands the OT sender's message A = g^α, the pairs (c^0_i, c^1_i), and the random oracle G_G queries over to S_snd. S_snd extracts the sender's input (l^0_{x,i}, l^1_{x,i}) and outputs it to S. S repeats the process for every i ∈ [ℓ]. Hence, S extracts all the input labels of the garbled circuit {l^b_{x,i}}_{∀i∈[ℓ], b∈{0,1}}. Then, S evaluates the garbled circuit F on all possible values of x ∈ [2^ℓ]⁻ using the labels and uses the output label to decrypt C and obtain E. If this operation fails for some x, S sets η(x) = 0; otherwise, η(x) = 1. S finally sends (sid, ok, η) to F^G_TAG.
4. When A sends (sid, ((P′, Q′, E′), (a, b))) to F^{R_Tag}_ZK: if P′ ≠ P, Q′ ≠ Q, or the relation R_Tag does not hold, S sends (sid, False) to A; otherwise, it sends (sid, True) to A.

In this simulation, the difference between the simulated execution of A and the real one is only in the choice of the input x = 1. As a result, the only messages that are different are B_i and P_i, ∀i ∈ [ℓ]. Recall that B_i is the OT message defined in the OT protocol in Section 2.4.5. By relying on the UC proof of Hauck et al. (when simulating the corrupted sender), we can show that the B_i are computationally indistinguishable in both views under the CDH problem. The indistinguishability of P_i follows directly since P_i = B_i^r, and r is chosen uniformly at random in Z_p. This proves that Π^G_TAG UC-realizes the functionality F^G_TAG in the F^R_ZK-hybrid model if A statically corrupts T, under the CDH assumption.
We finally deduce that IDEAL_{F^G_TAG, S, Z} ≈ˢ EXEC_{Π^G_TAG, A, Z} under static corruption and under the CDH and Inv-DDH assumptions, which proves our theorem.

Conclusion on Verifiable SA

In conclusion, we proposed a new formal definition for verifiable secure aggregation protocols. Our new definition captures the privacy of the user inputs and the verifiability of the aggregation result in the malicious model. Moreover, we presented the VSA protocol and proved that it satisfies these security guarantees. VSA improves the PUDA protocol, which originally provides a verification mechanism against a malicious aggregator; VSA extends it by generating the PUDA tags using a distributed tagging protocol. Our distributed tagging protocol runs between a user and m taggers. It guarantees that an honest user receives a valid tag of its input and that a malicious user receives nothing. Thanks to this tagging protocol, VSA guarantees that an adversary controlling the aggregator and a few users cannot compromise the aggregation result.

Part II Secure Aggregation for Federated Learning

Chapter 5 Privacy-Preserving Federated Learning with Secure Aggregation

In this chapter, we study federated learning, one of the recent technologies used in IoT platforms. We discuss its limitations in terms of privacy and propose to improve its security using secure aggregation. For this purpose, we study all existing solutions that integrate secure aggregation within federated learning and regroup them based on the specific challenge they tackle. We finally derive some takeaway messages that help towards a secure design of federated learning protocols and identify research directions in this topic.

Introduction to Privacy-Preserving Federated Learning

With the recent advancements in information technologies, machine learning techniques play a substantial role in data processing. Machine learning is a set of techniques that uses real data (e.g., measurements) to improve the accuracy and performance of existing systems [Mit97]. These techniques aim to develop so-called machine learning models by learning to perform some well-specified tasks. This operation is known as training machine learning models on collected data. As a consequence, data becomes the most important resource for such systems to achieve better accuracy. In the case of IoT platforms, machine learning is an essential technique to meet their business targets. IoT platforms collect data from IoT devices and process them using machine learning techniques. Lately, these IoT platforms have started to collaborate with each other to train better machine learning models. However, they cannot simply share their collected data due to privacy reasons. Thus, federated learning emerged as a privacy-preserving technique to train on private datasets from multiple sources. The main goal of FL is to protect the privacy of the local data while still being able to use them for training public models. This technology provides a great advantage over other techniques that try to achieve the same goal (e.g., training on encrypted data [VNP + 20, HTGW18, WGC19, DGBL + 16, WTB + 20]). The latter add a large computational overhead since they involve encrypting the inputs and then performing complex computations on the encrypted data. FL requires less computation as it only involves the averaging operation at the server.
While FL is proposed for privacy-protection purposes, it lacks a formal guarantee of privacy. For example, adversaries who have access to the training results sent from each client to the server might be able to infer a training sample from a client's private dataset. Many types of inference attacks on FL have been investigated in [MSDCS19, ZLH19, LHCH20, NSH19]. One of the solutions to mitigate these inference attacks is secure aggregation. In this chapter, we study the use of secure aggregation to perform the averaging operation in federated learning. To better understand the integration of secure aggregation in federated learning, we study the specific requirements and characteristics of federated learning compared to the legacy applications of secure aggregation. Additionally, we survey all 37 existing solutions that propose to do this integration and we regroup them based on the specific challenge they tackle. As a result, we identify the limitations and gaps in the literature and present them as a set of take-aways.

Characteristics of Federated Learning

Federated learning has a wide range of use cases and applications. These applications differ based on the scale of federation, the partitioning of the training data, and the learning algorithm used in training.

Scale of Federation

Two types of FL exist with respect to the scale of federation: Cross-silo FL and cross-device FL [KMA + 19].
• Cross-silo Scenarios (X-Silo): A small number of powerful users host the data. These users often have substantial computational power and a reliable, high-bandwidth network connection.
• Cross-device Scenarios (X-Device): These involve a large number of users. The users often correspond to end-devices with moderate computational power. In many applications, these devices directly interact with end-users from which they collect data.

Partitioning of the training data

There exist three categories of data partitioning [YLCT19, LWH19, DP21]: Horizontal partitioning, vertical partitioning, and hybrid partitioning.
• Horizontal Partitioning: Each FL client holds a set of complete training samples. Each sample contains all the training features and the corresponding label. Hence, each client can train a local model on these samples.
• Vertical Partitioning: A client may hold part of the features of each training sample while the other parts might be held by other FL clients. In this FL type, the clients are not able to locally train a model without collecting the missing information of each sample from other clients.
• Hybrid Partitioning: A hybrid partitioned dataset is a combination of horizontally and vertically partitioned datasets.
Secure aggregation is only suitable for FL based on horizontally partitioned datasets, since those based on vertically partitioned datasets require more operations than just summing the clients' updates. In this thesis, we only consider horizontally partitioned datasets, as they represent the most realistic case in real-life applications.

Learning algorithm

The most used learning algorithm for horizontal FL is Federated Averaging [MMR + 17], which is based on Stochastic Gradient Descent (SGD) [iA93]. SGD is an iterative algorithm used to train a model on a dataset (i.e., find the best weights of a model that can fit the dataset).
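Before the formal description, the following minimal sketch illustrates one federated round: each client takes a local SGD step and the server averages the results. The scalar model, squared loss, and learning rate are illustrative assumptions, not a prescribed configuration.

```python
# Minimal sketch of Federated Averaging with a scalar model w and the
# squared loss f(w, x, y) = (w*x - y)^2 / 2. Data and hyper-parameters
# below are toy assumptions chosen only to make the loop runnable.
def local_step(w, dataset, lr=0.1):
    grad = sum((w * x - y) * x for x, y in dataset)   # gradient of the loss
    return w - lr * grad                              # M_i = M - eta * g_i

clients = [[(1.0, 2.0), (2.0, 4.0)],                  # D_1: roughly y = 2x
           [(1.0, 1.0), (3.0, 3.0)]]                  # D_2: roughly y = x
w_global = 0.0
for _ in range(10):                                   # FL rounds
    updates = [local_step(w_global, d) for d in clients]
    w_global = sum(updates) / len(updates)            # server-side averaging
print(round(w_global, 3))                             # settles between 1 and 2
```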
At each SGD step, client i uses the model M^τ and a loss function f to compute the gradient g^τ_i from the values in its dataset D_i:

g^τ_i = ∇f(M^τ, D_i) = Σ_{(x,y)∈D_i} ∇f(M^τ, x, y)

Then, the gradient is used to update the weights of the model with learning rate η: M^τ_i = M^τ - η g^τ_i. The FL clients send their newly trained models M^τ_i to the FL server, which aggregates them:

M^{τ+1} ← (Σ_{i=1}^n M^τ_i) / n

Finally, each FL client obtains the aggregated model M^{τ+1} and starts a new federated learning round.

Privacy of the Datasets in Federated Learning

An adversary having access to the model update M^τ_i sent by client i can perform inference attacks. These attacks allow an attacker to retrieve private information about the client's dataset. Based on the type of private information, there exist three categories of inference attacks:
• Membership Inference Attacks [SSSS17a, NSH19]: The attacker learns whether a specific data record is part of the training dataset or not.
• Reconstruction Attacks [DN03, WLW + 09]: The attacker learns some of the attributes of a record in the dataset. These attacks are also known as model inversion attacks [FJR15].
• Data Properties Inference Attacks [AMS + 15, GWY + 18]: The attacker learns global properties of the training dataset, such as the environment in which the data was produced.
All these attacks show that federated learning alone is not sufficient to preserve data privacy. To mitigate them, it is crucial to protect the model updates sent by the federated learning clients while still being able to compute their aggregate.

Existing Secure Aggregation for Privacy-Preserving Federated Learning

Secure aggregation schemes aim to prevent inference attacks by hiding the model updates from any potential adversary. Based on the definition given in Chapter 3, it involves two types of entities: the users and the data aggregator.

Table 5.1: Challenges in using secure aggregation for FL based on the specific requirements of FL. It shows for each challenge whether the baseline SA protocols defined in Chapter 3 can originally cope with that challenge.

Federated learning features some unique properties and characteristics that differ from previous applications where secure aggregation was used. This makes integrating secure aggregation schemes into federated learning a challenging task. We hereby identify seven unique properties of FL that raise significant challenges for the integration of secure aggregation in FL. We further analyze the suitability of each secure aggregation category (see Chapter 3) to cope with these characteristics. We summarize the results in Table 5.1.

Failures and Drops of Clients at Realtime (C 1 )

In cross-device FL scenarios, it is common to have mobile, unreliable FL clients. The mobility of a client may cause failures (drops) that make it unavailable for some federated learning rounds; failures may even happen within an FL round. This is a problem for secure aggregation schemes that do not support dynamic users. In particular, SA schemes based on masking and AHE are not fundamentally designed to cope with user failures. Fault-tolerant secure aggregation is therefore a requirement for FL.

Client's Inputs are Vectors of High Dimension (C 2 )

In FL, the user's input is a vector that holds all the model parameters (weights). Not all types of secure aggregation protocols can efficiently work with vectors.
For example, MPC-based SA incurs a significant communication overhead since the shares of the inputs have the same size as the input; it is therefore not practical to run secret sharing to share large vectors. Also, in masking-based SA, the users should run a key-agreement protocol to compute new masks for each FL round, which adds significant overhead. Additionally, for AHE, the encryption algorithm results in large ciphertexts; encrypting each element of the input vector thus adds a large overhead. This calls for efficient packing techniques designed for AHE-based SA. Unlike other methods, for FE-based secure aggregation, it is possible to construct a solution where each party sends exactly one value of the size of the input vector (see the SA scheme construction presented in Section 3.4.2.3 in Chapter 3).

Client's Inputs are of Floating Point Type (C 3 )

All the baseline secure aggregation protocols are designed to operate on integer types. In FL, the user's input usually is of floating point (float) type. This calls for efficient quantization techniques that transform floats to integers while preserving a high accuracy in the result. Representing floats with integers incurs an increase in the size of the input. Hence, the use of quantization techniques helps achieve a good trade-off between accuracy and communication overhead.

Huge Number of Clients (C 4 )

Recently, we have started to observe FL applications involving thousands of FL clients. Google is researching how to train Gboard's (the Android keyboard application) search suggestion system using federated learning at large scale [YAE + 18, HKR + 18]. With secure aggregation integrated into federated learning, the scalability problem becomes a serious challenge. MPC-based SA protocols do not scale well with a huge number of users since they suffer from a quadratic complexity in terms of communication and computation. Similarly, masking-based SA suffers from a quadratic complexity in the setup phase. Additionally, with a large number of clients, the typical synchronized FL protocol is not practical. In an asynchronous FL protocol, clients do not wait for the updates of a sufficient number of users at each FL round. Instead, the updates of the users are incorporated as soon as they arrive at the server. Adopting SA for asynchronous FL is challenging because updates may be protected with keys corresponding to different FL rounds.

Privacy Attacks that Bypass SA (C 5 )

The aggregated model M^{τ+1} is public information that is accessible to all FL clients. Therefore, secure aggregation is not used to hide this value. There exists a different type of inference attack that can still infer private information from the aggregated model only [SSSS17a]. For example, So et al. [SAG + 21] recently pointed out a new attack that leaks the clients' updates even when they are protected with secure aggregation. The authors notice that the models from the FL clients do not change a lot between one training step and the next once the trained model starts to converge. This causes a privacy leakage if an FL client does not participate. In more detail, if all FL clients participate in round τ - 1 and all clients except one participate in round τ, and if the inputs did not change a lot, an adversary who has access to the aggregated model updates for rounds τ and τ - 1 will be able to approximate the input of the missing FL client.
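The following numeric sketch makes this leakage concrete; the number of clients and the magnitude of the per-round drift are illustrative assumptions.

```python
import random

# Sketch of the leakage noted by So et al.: near convergence, if one client
# skips round tau, the difference of two consecutive aggregates approximates
# that client's input, even though the adversary only sees the two sums.
random.seed(0)
n = 5
x_prev = [random.uniform(-1, 1) for _ in range(n)]          # inputs at tau-1
x_now  = [v + random.uniform(-1e-3, 1e-3) for v in x_prev]  # tiny drift
agg_prev = sum(x_prev)                                      # all n participate
agg_now  = sum(x_now[1:])                                   # client 0 drops
leaked = agg_prev - agg_now              # computed from public sums only
print(abs(leaked - x_prev[0]) < 0.01)    # True: client 0's input is exposed
```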
Such specific attacks can bypass the security measures of SA. Gao et al. [GHG + 21] implemented these types of attacks and showed how they can effectively infer the category of given data samples. Secure aggregation protocols by definition do not provide protection against these types of attacks. Therefore, additional security mechanisms should be used with secure aggregation to mitigate them.

Malicious Users (C 6 )

Earlier SA protocols, proposed before the FL paradigm appeared, mainly consider an honest-but-curious threat model with colluding users (see Section 3.2 in Chapter 3). Such a threat model is not sufficient in the context of federated learning. Specifically, FL clients cannot be trusted to provide their inputs truthfully at each FL round. Thus, we should consider an extended threat model with malicious users. Indeed, poisoning attacks (a.k.a. backdooring attacks) are attacks where malicious FL clients manipulate their model updates M^τ_i to affect the aggregated model M^{τ+1}. Their goal is to install a backdoor in the trained model. A "backdoored" model behaves almost normally on all inputs except for some attacker-chosen inputs, on which it outputs attacker-desired predictions. Malicious FL clients use two main methods to poison a model: dataset poisoning [STS16], where attackers insert malicious records in their dataset; and model poisoning [BVH + 20a] (a.k.a. constrain-and-scale attacks), where the attacker replaces the trained model by a malicious model and sends it instead of the trained model. An even more recent attack method consists of distributed poisoning attacks, introduced by Xie et al., in which the poison is distributed among several malicious clients' inputs so that the malicious models are harder to detect. On the other hand, malicious clients can perform less stealthy attacks by sending ill-formed inputs to prevent the computation of the aggregate. To prevent all these types of attacks, we need to implement additional security mechanisms for SA.

Malicious Aggregator (C 7 )

Similar to challenge C 6 , the honest-but-curious threat model is not sufficient to prevent the cheating of a server in the context of federated learning. More specifically, the SA protocols described in Chapter 3 prevent a curious FL server from learning the clients' inputs, but cannot protect against a malicious server that modifies the aggregated model. Indeed, a malicious server can cause huge damage because it has full control of the final aggregated value. Therefore, an adversary controlling the FL server can force the clients to receive an adversary-chosen model. In fact, the impact of a malicious aggregator can even go beyond forging the aggregation result. Pasquini et al. showed that a malicious aggregator can even compromise privacy by bypassing the secure aggregation protocol. An example of these attacks illustrated by the authors is when the malicious aggregator chooses specific values for the aggregated result: the values are chosen such that, when the clients train the forged model sent by the aggregator, the training outputs a model of zero parameters. Hence, the malicious aggregator can suppress arbitrary clients of his choice from the aggregation by sending them malformed models.
Therefore, it can suppress all clients except a targeted one and leak its input. To prevent such attacks, SA protocols should consider a malicious aggregator in their threat model.

Secure Aggregation Solutions for Federated Learning

A lot of research has been conducted on designing secure aggregation protocols based on cryptographic schemes for federated learning applications. Most of the proposed schemes are improvements of the basic secure aggregation protocols described in Chapter 3 and tackle one or more of the aforementioned challenges (C 1 - C 7 ). The proposed schemes can therefore be categorized based on the challenge they tackle. We summarize below how these solutions address each of the challenges. Table 5.2 presents an overview of these solutions grouped by their challenge scope. Also, Figure 5.3 regroups them per SA category.

Fault-Tolerant Secure Aggregation

MIFE-based SA schemes are fault-tolerant by design since the data aggregator can assign zero weights to missing clients [XBZ + 19]. However, these schemes require a key dealer to stay online for each federated learning round. On the other hand, Bonawitz et al. [BIK + 17] propose a fault-tolerant variant of masking-based SA. This scheme was later widely adopted and improved by [BBG + 20, EA20, SGA21b, KLS21, XLL + 20, GLL + 21]. The idea of this scheme is to merge Shamir's SS scheme with masking. More specifically, it benefits from the lightweight operations and low communication overhead of the masking scheme while inheriting the fault-tolerance property of Shamir's SS scheme. Thanks to this trade-off, it is considered a big step towards designing a practical secure aggregation scheme for cross-device FL scenarios. A similar scheme is proposed by Stevens et al. [SSV + 21].

Accurate Secure Aggregation

Batch encryption allows encrypting multiple values in a single operation and thus optimizes the encryption of vector inputs (representing machine learning models). In BatchCrypt [ZLX + 20], the authors propose a method to quantize and batch the elements of a model before encryption. The strength of their approach is that it preserves the additively homomorphic property of the ciphertexts. Another interesting technique, presented by Wu et al. [WPX + 20], is to use the All Or Nothing Transformation (AONT) [Riv97]. AONT is a technique for transforming data into a different form such that the new data can only be understood if all of it is known. The authors show that, by transforming the clients' models with AONT, it is sufficient to encrypt only a small part of the transformed model: thanks to the AONT property, the non-encrypted part of the transformed model does not give any useful information as long as the other part of it is encrypted. This can decrease the size of the protected user input by several orders of magnitude. For masking-based SA, Bonawitz et al. [BIK + 17] propose to execute a key-agreement protocol to produce small random numbers that are used as seeds of a pseudo-random generator. These seeds are then used to generate the masks.

To cope with the huge number of clients (C 4 ), Bell et al. [BBG + 20] observe that the fault-tolerant masking scheme of [BIK + 17] does not require that all the clients be connected. Thus, they propose to generalize the scheme by creating random graphs; each FL client executes the FT-Masking scheme with its neighbors only. The new protocol assumes that not all the neighbors are corrupted at the same time, and it provides a method to build the so-called "good" graphs. Similarly, both So et al. (TurboAgg) [SGA21b] and Sandholm et al. (SAFE) [SMH21] propose a circular topology.
Clients perform a chain of aggregations by passing the aggregated updates to the next client. To further deal with a large number of clients, So et al. propose a SA protocol that can be integrated into asynchronous FL. The solution uses the scheme proposed in [YSH + 21] and adapts it to enable secure aggregation of inputs from different timestamps.

Secure Aggregation Resilient to Privacy Attacks

To deal with inference attacks on the aggregated model (C 5 ), Differential Privacy (DP) solutions [Dwo06] should be used with secure aggregation. Notice that the goal of using a DP mechanism with SA is to protect the aggregated model only, and not the user inputs. A simple method is to let the aggregator apply a DP mechanism on the aggregated model, as done by Delgado Fernández et al. However, this requires trusting the aggregator. A better method is to use a distributed version of DP (DDP) along with SA to mitigate the information leakage caused by the public aggregated model. A few works have followed this approach for FL [KLS21, TBA + 19]. These solutions add Gaussian noise to the FL clients' inputs. They leverage the fact that the FL clients' inputs are protected with cryptographic tools (thanks to SA), which permits decreasing the level of noise while achieving a sufficient privacy level. Therefore, using DDP with SA limits the degradation of the accuracy of the trained model compared to using DP alone. Stevens et al. [SSV + 21] follow a similar approach by using a Learning with Errors (LWE) masking technique to make the final aggregate differentially private. On the other hand, a multi-round privacy concept is introduced by So et al. [SAG + 21]. This concept is to ensure that an adversary cannot learn valuable information by monitoring the changes in the aggregated model across different FL rounds. The authors propose a solution inspired by the work in [TNW + 21]: they randomly and fairly (using weights) select the participants in each FL round based on well-defined criteria called Batch Partitioning. Using this technique, they can guarantee the long-term privacy of the data at the FL clients.

Secure Aggregation Against Malicious Users

To deal with malicious users who perform poisoning attacks (C 6 ), the FL server needs a mechanism to validate the inputs of the clients. Mitigating poisoning attacks has been studied by researchers independently from the use of secure aggregation in FL. One of the methods used to prevent such attacks is to use the cosine distance (Fung et al.) to detect poisoned inputs that deviate from the other, benign inputs. Clustering [STS16, BEMGS17] and anomaly detection methods (e.g., BAFFLE, by Andreina et al.) are also used to detect malicious model updates. An orthogonal approach is to use clipping and noising to smooth the model updates and remove the differences [BVH + 20b]. While all these solutions are shown to be efficient in preventing poisoning attacks, using them with secure aggregation is a big challenge.
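To illustrate what such a defence computes, the following plain-text sketch applies a cosine-similarity filter to toy updates. The threshold and the vectors are illustrative assumptions; the key point is that the server needs each individual update in the clear to run the check.

```python
import math

# Plain-text sketch of a cosine-distance filter against poisoned updates.
# The acceptance threshold and the toy two-dimensional updates are
# illustrative assumptions, not parameters from any cited scheme.
def cos_sim(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

updates = {"u1": (1.0, 0.9), "u2": (0.9, 1.1),
           "u3": (-2.0, 1.5), "u4": (1.1, 1.0)}      # u3 is the poisoned one
mean = tuple(sum(v[i] for v in updates.values()) / len(updates) for i in (0, 1))
accepted = [uid for uid, v in updates.items() if cos_sim(v, mean) > 0.5]
print(accepted)                                      # ['u1', 'u2', 'u4']
```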
The problem is that all these solutions rely on analyzing the FL clients' inputs, while secure aggregation aims to hide and protect these inputs. Several methods are proposed to verify the inputs while keeping them protected to preserve their privacy. For MPC-based SA, it is possible to build circuits that can perform complex operations on the shares. This can be used to evaluate functions on the inputs beyond just computing the sum. Indeed, MLGuard proposes to verify the users' inputs by transforming a verification function into a circuit that gets executed by two servers using 2PC. The verification function computes the distance between the clients' inputs. The circuit compares the distance to pre-defined thresholds and rejects an input if it exceeds them. FLGuard [NRY + 21] follows the same approach by building two circuits: one circuit for detecting poisoned inputs using a dynamic clustering algorithm (HDBSCAN) and another circuit for reducing the impact of poisoned inputs using clipping and noising. The communication cost of running these circuits is significant, thus making scalability even harder to achieve for SA in the federated learning context. A promising approach to reduce this cost is through the use of secret-shared non-interactive proofs (SNIP). This approach was proposed in [CGB17] (Prio). Using SNIP enables the aggregators to validate the user inputs without interacting with the users and with minimal interaction between themselves. This scheme is not yet deployed in FL applications. SNIP brings a great advantage over standard 2PC validation circuits since it does not limit the number of aggregators, thanks to its lower communication cost. The limitation of SNIP is that it only supports specific validation functions. Therefore, it is an open challenge to design validation circuits for detecting poisoning attacks using SNIP. On the other hand, regarding AHE-based SA, Karakoç et al. propose OPPRF, an algorithm based on private set membership (PSM) and oblivious transfer (OT). OPPRF uses PSM to perform equality checks between values (i.e., equivalent to finding an intersection between sets of cardinality equal to one). Using OPPRF, the users can create tags that are only valid if their inputs are lower than a threshold provided by the aggregator. Karakoç et al. applied this scheme to AHE-based SA schemes and evaluated it in FL applications. The scheme enables the FL server to detect poisoning attacks by checking that the minimum, maximum, and average of the model elements do not cross a certain threshold value. The threshold is configured based on an observation of the models of benign clients. Another approach is proposed by Lukas et al. [BLV + 21]: the authors use the non-interactive commitment scheme proposed by Pedersen [Ped91].
Using this scheme, the users create proofs that the Euclidean distance of their inputs satisfies the bound set by the aggregator. Upon receiving a client's protected input and the commitment, the server verifies that the proof is valid. For masking-based SA, two techniques are proposed. One technique is proposed by So et al. [SGA21a], in which users secretly share their model updates with all other clients and then compute the squared distances between the model shares. The server can finally reconstruct the squared distances and use the result to detect malicious inputs. An alternative technique is proposed by Zhang et al. and Velicheti et al. In more detail, users are anonymously and randomly grouped into clusters. Aggregation happens per cluster, and then a following round of aggregation happens on the results of each cluster. For each cluster, the intermediate aggregation results are checked to prevent poisoning attacks. The fact that attackers do not know to which cluster a compromised device belongs protects against distributed poisoning attacks (see C 6 ).

Secure Aggregation Against Malicious Aggregator

In a malicious aggregator threat model, the FL server forges false aggregation results (C 7 ). Mitigating these attacks requires a verifiable secure aggregation scheme. Many solutions are proposed to enable the verification of the aggregation outcome [KShS12, SS11, DOS18, CDE + 18]. However, these solutions do not fit well in federated learning applications due to their high communication overhead. In the context of federated learning, six solutions are proposed [GLL + 21, HKKH21, XLL + 20, ZFW + 20, TLB + 21, BTL + 21]. Several of them use Homomorphic Hash Functions (HHF) to verify the result of the aggregation. HHF can be built using bilinear maps [BGLS03, Fre12]. First, the data owners create authentication tags for their inputs and send them to the aggregator. The latter aggregates them to prove the outcome of the aggregation. Finally, the users verify the result. Hahn et al. [HKKH21] detect possible brute-force attacks on VerifyNet and improve it by deploying a keyed HHF. None of these solutions prevents collusions between the users and the aggregator. Therefore, they cannot be considered secure based on our security definitions for secure aggregation (see Chapter 3). Another problem with these solutions is that they significantly affect the performance of SA: the computation and communication overhead increases linearly with the dimension of the inputs. This is a clear limitation since the performance of the ML model highly depends on its size (i.e., its number of parameters). To solve this problem, Guo et al. (VeriFL) [GLL + 21] focus on designing a verification scheme specifically for secure aggregation applications with inputs of high dimension. To support user-aggregator collusions, the authors integrated a commitment scheme to prevent users from changing their hashes after the computation of the aggregate.
The authors of VeriFL apply this scheme to the fault-tolerant masking scheme in [BIK + 17]. The evaluation of their solution on federated learning applications shows a significant reduction in communication overhead with respect to other verification schemes. However, VeriFL still suffers from a quadratic computation and communication cost with respect to the number of FL clients. Achieving better scalability for verification systems is an open problem. For MPC-based SA, Brunetta et al. [BTL + 21] propose NIVA as a non-interactive secure aggregation protocol that includes the verification of the result. The users create a tag for each of their input shares. Upon computing the aggregate, the result can be verified using all the generated tags. Tsaloli et al. [TLB + 21] propose DEVA which improves the number of tags created for each user. DEVA requires that a user creates a single tag for its input rather than creating a tag for each share. Both approaches do not support collusions between users and the aggregator and incur very high communication overhead since they use MPC. Observations and Conclusion on Secure Aggregation for FL We have extensively studied federated learning solutions that integrate secure aggregation schemes. In this section, we identify and share the following observations and takeaway messages: O 2 We notice that secure aggregation solutions based on AHE are not widely adopted in federated learning. This is mainly because they do not support user dynamics. However, we see that AHE-based SA is promising since they provide long-term security using the same user keys. We hope to see more research improving these schemes towards a practical deployment in federated learning context. O 3 We notice that some of the solutions proposed to preserve the privacy in federated learning do not adhere to the minimal security requirements for secure aggregation protocols (see Definition 3.2.1 for Aggregator Obliviousness). Specifically, AHE schemes [PAH + 18, LCV19, ZLX + 20, ZFW + 20] and masking-based verification schemes [ZFW + 20, XLL + 20, HKKH21] that use the same key for all users should not be considered secure since they do not guarantee security in case of a collusion between a user and the aggregator. O 4 We note that secure aggregation alone is insufficient to guarantee the privacy of the clients' datasets in the context of federated learning. Although SA helps prevent inference attacks, the global model that is collaboratively computed from private individual inputs can still leak information. Therefore, additional protection mechanisms are required. Differentially private mechanisms and multi-round privacy solutions are suitable candidates to cope with this problem. O 5 Poisoning attacks against federated learning call for some integrity mechanisms that would allow the aggregator (the FL server in this context) to verify the correctness/veracity of received inputs. Nevertheless, the cost of such mechanisms can be significant. Therefore, we can consider the design of such mechanisms that can (i) detect stealthy and sophisticated poisoning attacks, and (ii) ensure the security and scalability requirements, as an open challenge. O 6 Similar to the previous observation, we identify the need for an integrity mechanism for verifying the correctness of the actual aggregate. Basic solutions would linearly increase the size of the transmitted data between parties w.r.t. the model size. Using incremental HHF is promising as shown in VeriFL [GLL + 21]. 
However, this solution is still far from being applicable to FL applications at larger scale, since it still implies a linear increase in communication and computation cost w.r.t. the number of clients. Based on all the previous observations, we propose revisiting the definition of crypto-based secure aggregation to make it suitable for FL. Specifically, we revise the description of the protocol phases (i.e., SA.Setup, SA.Protect, and SA.Agg) to meet all the security requirements of FL applications. Following observation O 4 , the SA.Protect phase should be modified such that users first pre-process their inputs with distributed DP mechanisms before running the actual protection algorithm. Additionally, based on observation O 5 , SA.Protect should also generate integrity proofs of the inputs, which are sent together with the protected inputs to the data aggregators. On the other hand, SA.Agg should include a verification mechanism that validates these integrity proofs. Moreover, observation O 6 indicates that SA.Agg should compute a proof of the aggregation, which is sent to the users along with the aggregation result. In order for the users to validate the aggregation result, we require an additional phase: namely, the SA.Verify phase should be performed as a final step by the users. In this phase, the users verify the received result of the aggregation.

Summary

In summary, we propose a better definition of secure aggregation protocols based on cryptographic schemes which copes with the security requirements of federated learning. The definition consists of four phases: SA.Setup, SA.Protect, SA.Agg, and SA.Verify. It is worth noting that this new definition combines and generalizes all the improvements proposed by the state-of-the-art solutions. It would be interesting to develop the first SA solution for FL implementing our proposed definition by combining all the state-of-the-art techniques.

Chapter 6 Scalable and Fault-Tolerant Secure Aggregation for Federated Learning

In the previous chapter, we identified and enumerated the main challenges of integrating secure aggregation schemes in the federated learning (FL) protocol. In this chapter, we tackle one of these challenges, namely, failures and drops of FL clients in real time. We first provide an extensive study of the existing solutions against such failures. We then propose a novel fault-tolerant secure aggregation solution for federated learning (FTSA) that is the first one based on additively homomorphic encryption.

Fault-Tolerant Secure Aggregation

Consider a scenario where hundreds of companies manage IoT platforms and would like to collaboratively train a machine learning model over their private datasets. Their goal is to provide better services to their customers by sharing their resources. The different IoT platforms use federated learning to train a common model. Additionally, these platforms do not wish to rely on a trusted third party to run the federated learning server. Therefore, they integrate secure aggregation into federated learning to preserve the privacy of their customers. Each company may have multiple federated learning clients (clients may correspond to different geolocated branches or different company sectors).
Consequently, running federated learning protocols at large scale (with a large number of clients) may frequently encounter client failures and dropouts. Such a use case calls for a secure and fault-tolerant federated learning solution with an untrusted server (that can be located on any of the companies' premises). Most secure aggregation solutions fall short of addressing such a problem (see Chapter 5). Previous work from Bonawitz et al. [BIK + 17] develops a fault-tolerant secure aggregation scheme that enables the server to recover the aggregate from up to t out of n client failures. The authors design their solution by extending an existing masking scheme due to Dimitriou with Shamir's secret sharing to enable fault tolerance. Their scheme has been used as a building block for a significant number of privacy-preserving federated learning solutions [BSK + 19, EA20, BEG + 19, BBG + 20, SGA21b, KLS21, SAG + 21, XLL + 20, GLL + 21, SGA21a]. Nevertheless, masking is based on one-time-pad encryption (i.e., modular addition) and hence requires the re-distribution of new keys for each aggregation process. This incurs a significant computation and communication overhead originating from the execution of the key re-distribution among clients and the generation of new masks at each FL round. We therefore propose a new solution that enables a client to use the same key for multiple FL rounds. We revisit the Joye-Libert (JL) secure aggregation scheme proposed in [JL13] (see Section 3.4.2.2 in Chapter 3) and propose a variant that supports client failures. Compared to [BIK + 17], we provide a more efficient fault-tolerant secure aggregation scheme, since it does not require the re-distribution of the protection keys at each FL round.

Threat Model

We consider a threat model with an untrusted FL server that colludes with some clients. Additionally, we consider that some of the honest clients unintentionally fail (i.e., drop from the protocol) in some federated learning rounds. The failures can happen at any stage of the protocol. The adversary (controlling the server and the colluding clients) is interested in discovering any private information about the individual inputs of the honest clients. We consider the two possible settings for the adversary as defined in Section 3.2, namely the honest-but-curious model and the malicious model. For each of these settings, we identify the minimum required security parameters to achieve Aggregator Obliviousness (see Definition 3.2.1). We rely on a trusted party only to generate the public parameters of our protocol. Attacks that aim to change the result of the aggregated data or to perform denial of service are out of scope. Additionally, we do not consider attacks where the adversary impersonates existing clients, as these can be prevented by deploying a public key infrastructure with signed certificates. Note that this threat model is common among secure aggregation protocols and is the same as the one adopted by [BIK + 17], except for the dependency on a trusted party for the setup. We show later how to avoid this dependency.

Prior Work based on Masking

In this section, we discuss the previous works on fault-tolerant secure aggregation. All the existing works focus on masking-based secure aggregation.
To achieve fault tolerance, these solutions integrate different types of secret sharing with masking.

6.3.1 SecAgg

Bonawitz et al. propose SecAgg [BIK+17], a secure aggregation scheme based on masking. The solution is briefly described in Section 5.4; we here give a more detailed description of the protocol. Every two clients agree on a mutual mask using a key-agreement scheme and use these masks to protect their inputs. To recover from potential drops of clients, the clients secretly share their secret keys such that any t online clients can help the aggregator reconstruct the missing clients' masks and remove them from the aggregate. In more detail, in the SA.Setup phase, the users agree on mutual seeds using the Diffie-Hellman key-agreement scheme, as in standard masking-based SA. Additionally, they use t-out-of-n Shamir's Secret Sharing [Sha79] to share their key-agreement [DH76] private keys. Using this approach, masks of dropped users can be recovered as long as t users are still online. While this solves the problem of dropped users, it causes a new security problem: the aggregator can claim that an online user actually dropped, ask for the reconstruction of its masks, and hence reveal the user's input. To solve this problem, a double-masking technique is used in the SA.Protect phase. Each user adds another layer of masking using a randomly generated mask b_{i,τ}:

c_{i,τ} ← x_{i,τ} + k_{i,τ} + b_{i,τ}

where k_{i,τ} is computed based on the masking construction discussed in Chapter 3 (Equation 3.1). This new mask b_{i,τ} is generated from a randomly generated seed which is also shared with all other users using Shamir's Secret Sharing. In SA.Agg, the aggregator can ask to reconstruct either the blinding mask b_{i,τ} or the Diffie-Hellman private key of a user (but never both). Therefore, the aggregator is not able to reveal individual inputs. It first collects t shares of the seed of the mask b_{i,τ} of every online user U_i and reconstructs it. Then, it gets t shares of the Diffie-Hellman secret key of each dropped user and thus reconstructs the missing masks. Denoting by X and Y the sets of remaining and dropped users, respectively, the aggregation operation is described as follows (all the operations are performed mod R = nR_i, where Z_{R_i} is the range of the input values of each user):

Σ_{U_i∈X} c_{i,τ} − Σ_{U_i∈X} b_{i,τ} + Σ_{U_j∈Y} k_{j,τ}
  = Σ_{U_i∈X} x_{i,τ} + Σ_{U_i∈X} k_{i,τ} + Σ_{U_i∈X} b_{i,τ} − Σ_{U_i∈X} b_{i,τ} + Σ_{U_j∈Y} k_{j,τ}
  = Σ_{U_i∈X} x_{i,τ} + Σ_{U_i∈X∪Y} k_{i,τ}      (the pairwise masks sum to zero)
  = Σ_{U_i∈X} x_{i,τ}
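As a toy illustration of why this double-masking algebra cancels, the following self-contained sketch simulates pairwise masks, blinding masks, and the aggregator's recovery step over a small modulus; all names and parameters are illustrative.

```python
import random

R = 2**16                      # aggregation modulus (illustrative)
n = 5
x = [random.randrange(100) for _ in range(n)]        # private inputs

# Pairwise masks with k_pair[i][j] = -k_pair[j][i], so all of them cancel
# in the sum over every client (here generated directly; SecAgg derives
# them from Diffie-Hellman seeds and recovers them from Shamir shares).
k_pair = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        s = random.randrange(R)
        k_pair[i][j], k_pair[j][i] = s, -s
k = [sum(row) % R for row in k_pair]                 # k_{i,tau}
b = [random.randrange(R) for _ in range(n)]          # blinding masks b_{i,tau}

c = [(x[i] + k[i] + b[i]) % R for i in range(n)]     # double-masked inputs

dropped = {4}                                        # U_5's ciphertext is lost
alive = [i for i in range(n) if i not in dropped]

# Aggregator: remove b_i of alive users, add back k_j of dropped users.
agg = (sum(c[i] for i in alive) - sum(b[i] for i in alive)
       + sum(k[j] for j in dropped)) % R
assert agg == sum(x[i] for i in alive) % R
```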
6.3.2 SecAgg+

Bell et al. propose SecAgg+ [BBG+20] to increase the scalability of secure aggregation protocols. In their solution, the clients do not need to share secret keys with all other clients to achieve robustness against dropouts. The authors propose to build partially connected (as opposed to fully connected) graphs where secure aggregation runs between connected graph nodes (representing clients) only. More specifically, each client sends and receives secret shares of its keys to/from its neighbor nodes (according to the graph). Since the graph is connected, the correctness of mask cancellation and the privacy of individual values are both guaranteed under certain thresholds. The authors further propose a method to build these graphs that achieves a good trade-off between security and efficiency. They apply this technique to SecAgg and show that it reduces the complexity by a logarithmic scale with respect to the number of users.

6.3.3 TurboAgg

TurboAgg [SGA21b] takes a different route to scalability: it divides the clients into groups and performs the aggregation across the groups in a circular fashion, relying on additive secret sharing within the groups to tolerate client dropouts. This reduces the per-client overhead to a logarithmic factor in the number of clients (cf. Table 6.1).

6.3.4 FastSecAgg

Kadhe et al. propose FastSecAgg [KRKR20], which replaces Shamir's secret sharing with FastShare, a multi-secret sharing scheme that relies on the Fast Fourier Transform (FFT) and decreases the computation cost of sharing and reconstructing a secret. Using FastShare, the authors reduce the computational complexity of fault-tolerant secure aggregation. However, this comes at the cost of lower security guarantees. Indeed, FastSecAgg can reconstruct the aggregate (with good probability) when the failed clients are arbitrarily chosen; an adversary may therefore prevent the recovery of the aggregate by cherry-picking a few clients and isolating them.

FTSA - Overview

In this section, we present our fault-tolerant secure aggregation protocol (FTSA) by giving an overview of the main ideas. The Joye-Libert (JL) scheme [JL13] is an AHE-based SA scheme (see the construction of the scheme in Section 3.4.2.2). The scheme allows n users to encrypt their private inputs with unique long-term pre-distributed keys. Its main interesting property is that it is homomorphic with respect to both the messages and the encryption keys: an aggregator holding the sum of the user keys can decrypt the aggregate of the messages. Since this secure aggregation scheme supports long-term keys, it features a significant advantage in terms of computation and communication cost, as it allows the use of the same keys for all FL rounds. However, JL introduces the following challenges when used for federated learning:

Client failures: It is common in federated learning that some clients drop out in some FL rounds. When one or more clients do not provide a ciphertext for FL round τ, the aggregation fails. This is because the aggregation key used to aggregate the ciphertexts is equal to the sum of the user keys; if a client fails, its encryption key is not involved in the aggregation. We deal with this problem by designing a threshold version of JL to recover from dropouts.

Our Idea (Threshold variant of the JL scheme): We design a threshold JL scheme whereby clients secretly share their individual keys with each other, so that when one or more clients fail, t out of n clients that are still online help provide a protected zero-value that is computed with the failed clients' individual keys. By computing the protected zero-value of the missing clients, the server can correctly compute the aggregated result.

Larger inputs: In federated learning, the inputs are the parameters of the client's model and are of vector type. JL is originally designed to encrypt single integers. We extend JL to support vector inputs.

Our Idea (JL scheme with vector inputs): We propose to encode the vector elements into a single integer. The vector sum is decoded after aggregation.

No trusted key dealer: JL requires a key dealer to distribute keys to the users and the aggregator. However, a trusted key dealer may not be feasible in federated learning applications. Therefore, we propose a decentralized key setup phase to distribute the keys.

Our Idea (JL scheme with decentralized key setup): In order not to rely on a trusted key dealer, we propose a distributed key generation mechanism. We mainly use secure multi-party computation such that each of the n users and the aggregator obtains a random additive share of zero. Each user uses its share as its secret key, so that the sum of the user keys and the aggregator key equals zero.
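A minimal sketch of this decentralized key setup idea: each pair of users derives a common value, and a signed pairwise combination makes all user keys sum to zero (here the pairwise values stand in for Diffie-Hellman agreed keys; all names are illustrative).

```python
import random

n = 4
# k_pair[i][j]: mutual key agreed between U_i and U_j (stand-in for DH output).
k_pair = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        k_pair[i][j] = k_pair[j][i] = random.getrandbits(64)

def user_key(i):
    # k_i = sum_j delta_{i,j} * k_{i,j}, with delta = +1 if i > j else -1.
    return sum((1 if i > j else -1) * k_pair[i][j]
               for j in range(n) if j != i)

# The keys form an additive sharing of zero, so the aggregator key can be 0.
assert sum(user_key(i) for i in range(n)) == 0
```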
Threshold Joye-Libert Scheme

We describe a threshold version of the Joye-Libert secure aggregation scheme (see Section 3.4.2.2 for the original scheme). The design of this scheme mainly transplants the design of the threshold version of the Paillier encryption scheme [DJ01] into this context. This extended solution mainly helps the aggregator recover failed users' inputs (by providing the protection of the zero-value under each failed user's individual key) and hence compute the final aggregate value. The goal is to distribute user key k_i to the n users such that any subset of at least t (online) users can produce a ciphertext on behalf of user U_i, while any subset of fewer than t users learns nothing. To secret-share the keys, we use secret sharing over the integers, ISS (see Section 2.3.1). Let U = {U_1, ..., U_n} be the set of all users and U_on ⊂ U the set of online users. The threshold-variant Joye-Libert secure aggregation scheme, denoted TJL, consists of the following PPT algorithms:

• TJL.Setup(λ, σ) → (pp, k_0, {k_i}_{∀i∈[n]}): Given a security parameter λ, this algorithm calls the original JL.Setup(λ) and outputs the server key, one secret key per user, and the public parameters pp = (N, H, I), where H : Z → Z*_{N²} is a hash function, I = [−2^l, 2^l], and l corresponds to the bit-size of the modulus N. Additionally, it sets the security parameter of the ISS scheme to σ.

• TJL.SKShare(pp, k_i, t, n) → {(i, ⟨k_i⟩_j)}_{∀j∈[n]}: On input of user U_i's secret key k_i, this algorithm splits it into n shares using the t-out-of-n ISS scheme, where ⟨k_i⟩_j denotes the share of k_i destined for user U_j.

• TJL.ShareProtect(pp, {⟨k_j⟩_i}_{∀U_j∈U\U_on}, τ) → ⟨y′_τ⟩_i: On input of user U_i's shares of the keys of all failed users, this algorithm protects the zero-value for timestamp τ, i.e., it computes ⟨y′_τ⟩_i = H(τ)^{Σ_{U_j∈U\U_on} ⟨k_j⟩_i} mod N². The output is a share of the protected zero-value y′_τ.

• TJL.ShareCombine(pp, {(i, ⟨y′_τ⟩_i)}_{∀U_i∈U_shares}) → y′_τ: On input of at least t shares of the protected zero-value, this algorithm combines them using Lagrange interpolation in the exponent, y′_τ = Π_{∀U_i∈U_shares} (⟨y′_τ⟩_i)^{ν_i}, where the ν_i denote the (integer, Δ²-scaled) Lagrange coefficients of the ISS scheme.

• TJL.Protect(pp, k_i, τ, x_{i,τ}) → y_{i,τ}: This algorithm calls JL.Protect(pp, k_i, τ, x_{i,τ}) and hence outputs the ciphertext y_{i,τ}.

• TJL.Agg(pp, k_0, τ, {y_{i,τ}}_{U_i∈U_on}, y′_τ) → X_τ: On input of the public parameters pp, the aggregation key k_0, the ciphertexts of the online users, and the ciphertext of the zero-value corresponding to the failed users for timestamp τ, this algorithm computes:

y_τ = (Π_{U_i∈U_on} y_{i,τ})^{Δ²} · y′_τ mod N²   (6.1)

To decrypt the final result, the algorithm proceeds as follows (with Δ = n!):

V_τ = H(τ)^{Δ²·k_0} · y_τ,    X_τ = ((V_τ − 1)/N) · (Δ²)^{−1} mod N   (6.2)
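To ground these algorithms, here is a runnable toy of the underlying (non-threshold) JL flow that TJL extends. The parameters are deliberately tiny and insecure, and the threshold ShareProtect/ShareCombine steps are omitted; only the key-homomorphic Protect/Agg behavior is demonstrated.

```python
import random, hashlib
from math import gcd

p, q = 11, 13
N = p * q             # in the real scheme N is a 1024-bit RSA modulus
NN = N * N

def H(tau: int) -> int:
    """Toy hash into Z*_{N^2} (stand-in for the full-domain hash)."""
    v = int.from_bytes(hashlib.sha256(str(tau).encode()).digest(), "big") % NN
    while gcd(v, N) != 1:
        v += 1
    return v

def protect(k: int, tau: int, x: int) -> int:        # JL.Protect
    return (1 + x * N) * pow(H(tau), k, NN) % NN

n, tau = 3, 7
keys = [random.randrange(NN) for _ in range(n)]      # user keys
k0 = -sum(keys)                                      # aggregator key

y = [protect(keys[i], tau, x) for i, x in enumerate([2, 3, 4])]
agg = pow(H(tau), k0, NN)    # Python 3.8+: negative exponent = modular inverse
for c in y:
    agg = agg * c % NN
# agg = 1 + (sum of inputs) * N mod N^2, so decryption is a division by N.
assert (agg - 1) // N == 2 + 3 + 4
```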
Definition 6.5.1: Correctness

Given the set of users U and the set of online users U_on ⊂ U, the correctness of the TJL scheme is defined as follows: ∀λ, ∀σ, ∀t, ∀τ, ∀x_{i,τ}, ∀U_shares s.t. (t ≤ n) ∧ (U_shares ⊂ U_on) ∧ (|U_shares| ≥ t),

Pr[ X_τ ≠ Σ_{∀U_i∈U_on} x_{i,τ} ] = 0,

where TJL.Setup(λ, σ) → (pp, k_0, {k_i}_{∀i∈[n]}); TJL.SKShare(pp, k_i, t, n) → {(i, ⟨k_i⟩_j)}_{∀j∈[n]} for all i ∈ [n]; TJL.ShareProtect(pp, {⟨k_j⟩_i}_{∀U_j∈U\U_on}, τ) → ⟨y′_τ⟩_i for all U_i ∈ U_shares; TJL.Protect(pp, k_i, τ, x_{i,τ}) → y_{i,τ} for all U_i ∈ U_on; TJL.ShareCombine(pp, {(i, ⟨y′_τ⟩_i)}_{∀U_i∈U_shares}) → y′_τ; and TJL.Agg(pp, k_0, τ, {y_{i,τ}}_{∀U_i∈U_on}, y′_τ) → X_τ.

Theorem 6.5.1: The scheme TJL is correct.

Proof. To prove the correctness of TJL, observe that the following equalities hold:

1) y_{i,τ} = (1 + x_{i,τ}·N)·H(τ)^{k_i} mod N²

2) ⟨y′_τ⟩_i = H(τ)^{Σ_{U_j∈U\U_on} ⟨k_j⟩_i} mod N²

3) y′_τ = Π_{U_i∈U_shares} (⟨y′_τ⟩_i)^{ν_i}
        = H(τ)^{Σ_{U_i∈U_shares} ν_i · Σ_{U_j∈U\U_on} ⟨k_j⟩_i}
        = H(τ)^{Σ_{U_j∈U\U_on} Σ_{U_i∈U_shares} ν_i·⟨k_j⟩_i}
        = H(τ)^{Δ² · Σ_{U_j∈U\U_on} k_j}

4) y_τ = (Π_{U_i∈U_on} y_{i,τ})^{Δ²} · y′_τ mod N²
       = (1 + Δ²·Σ_{∀U_i∈U_on} x_{i,τ}·N)·H(τ)^{Δ²·Σ_{∀U_i∈U_on} k_i} · H(τ)^{Δ²·Σ_{∀U_i∈U\U_on} k_i}
       = (1 + Δ²·Σ_{∀U_i∈U_on} x_{i,τ}·N)·H(τ)^{Δ²·Σ_{∀U_i∈U} k_i}
       = (1 + Δ²·Σ_{∀U_i∈U_on} x_{i,τ}·N)·H(τ)^{−Δ²·k_0}

5) V_τ = H(τ)^{Δ²·k_0} · y_τ = 1 + Δ²·Σ_{∀U_i∈U_on} x_{i,τ}·N

Hence X_τ = ((V_τ − 1)/N)·(Δ²)^{−1} mod N = Σ_{∀U_i∈U_on} x_{i,τ}.

Theorem 6.5.2: This scheme provides Aggregator Obliviousness security under the DCR assumption in the random oracle model if the number of corrupted users is less than the threshold t.

Proof. The security of this scheme mainly relies on the security of the JL secure aggregation scheme, which is proved secure under the DCR assumption (Theorem 3.4.2). As in the original scheme, we assume that each user encrypts at most one input per timestamp. Additionally, we assume that failed users do not provide any encrypted values for that timestamp and that the non-corrupted users construct at most one encrypted value on behalf of the failed clients. By relying on the security of the secret sharing scheme over the integers (which is proved to be statistically secure in Theorem 1 of [VAS19]), we can show that fewer than t users cannot construct an encrypted value. Therefore, Aggregator Obliviousness is guaranteed in the random oracle model under the DCR assumption if fewer than t users are corrupted.

FTSA - Complete Specifications

We now describe the newly designed secure and fault-tolerant aggregation protocol based on the proposed TJL scheme. The protocol consists of a setup phase and an online phase, each of them defined by two communication rounds. The setup phase is performed once, while the online phase is repeated for each federated learning round. We describe the details of each protocol phase.

The Setup Phase

The setup phase consists of the registration of the clients and the distribution of the security keys. In TJL, a trusted key dealer is needed to generate the keys. To avoid the dependence on a key dealer, we propose a distributed method to set up the TJL user keys.

Distribution of TJL keys: We recall that the aggregator's key k_0 allows the aggregator to recover the sum of the inputs from the set of users' ciphertexts (see Equation 6.1). The goal of this key is to protect the final aggregate result. In the case of FL applications, the aggregated ML model is considered public and thus does not need to be hidden. Therefore, we set the aggregator's key to a publicly known value (for example, zero). Indeed, the security proof in [JL13] considers the case where an adversary controls the key of the aggregator: in that case, the scheme cannot protect the result of the aggregation, but it still protects the individual inputs of the users, which is sufficient for FL. To generate the user keys, every two clients U_i and U_j agree, using the KA scheme, on a shared mutual key k_{i,j}. Then, client U_i computes its protection key k_i ← Σ_{U_j∈U}(δ_{i,j}·k_{i,j}), where δ_{i,j} = 1 when i > j and δ_{i,j} = −1 when i < j.
The correctness of the protocol is preserved since:

Σ_{∀U_i∈U} k_i = Σ_{∀U_i∈U} ( Σ_{∀U_j∈U} δ_{i,j}·k_{i,j} ) = 0 = −k_0

FTSA - Setup Phase

• Setup - Registration:

Trusted Dealer:
- Choose security parameters λ, σ; run KA.Param(λ) → (G, g, q) = pp_KA and TJL.Setup(λ, σ) → ((N, H, I), ⊥, ⊥). Set the public parameters pp = (pp_KA, pp_TJL, G_G, t, n, m, R, F), where t is the secret sharing threshold, n is the number of clients, Z^m_R is the space from which inputs are sampled, G_G is the hash function for key derivation (see Section 2.4.3), and F is the field for the SSS scheme. Send pp to the server and to all the clients.

User U_i (pp):
- Receive the public parameters from the trusted dealer.
- Generate key pairs (cPK_i, cSK_i) ← KA.gen(pp_KA) and (sPK_i, sSK_i) ← KA.gen(pp_KA).
- Send (cPK_i || sPK_i) to the server (through the private authenticated channel) and move to the next round.

Server (pp):
- Receive the public parameters pp from the trusted dealer.
- Collect all public keys from the users (denote by U the set of registered users). Abort if |U| < t; otherwise move to the next round.
- Broadcast to all users the list {U_i, (cPK_i, sPK_i)}_{∀U_i∈U}.

• Setup - Key Setup:

User U_i:
- Receive the public keys of all registered users U.
- For each registered user U_j ∈ U\{U_i}, compute the channel key c_{i,j} ← KA.agree(pp_KA, cSK_i, cPK_j, G_G).
- For each registered user U_j ∈ U\{U_i}, compute k_{i,j} ← KA.agree(pp_KA, sSK_i, sPK_j, G_G) (set k_{i,i} = 0). Then compute the TJL secret key k_i ← Σ_{U_j∈U} δ_{i,j}·k_{i,j}, where δ_{i,j} = 1 when i > j and δ_{i,j} = −1 when i < j.
- Generate t-out-of-n shares of the TJL secret key: {(j, ⟨k_i⟩_j)}_{∀U_j∈U} ← TJL.SKShare(pp_TJL, k_i, t, n).
- For each registered user U_j ∈ U\{U_i}, encrypt its corresponding share: ε_{i,j} ← Enc_{c_{i,j}}(U_i || U_j || ⟨k_i⟩_j).
- If any of the above operations fails, abort.
- Send all the encrypted shares {ε_{i,j}}_{∀U_j∈U} to the server (each implicitly containing the addressing information U_i, U_j as metadata).
- Store all messages received and values generated in the setup phase, and move to the online phase.

Server:
- Collect from each user U_i its encrypted shares {(U_i, U_j, ε_{i,j})}_{∀U_j∈U}.
- Forward to each user U_j ∈ U its corresponding encrypted shares {(U_i, U_j, ε_{i,j})}_{∀U_i∈U} and move to the online phase.

FTSA - Online Phase

• Online - Encryption (step τ):

User U_i:
- Sample a random element b_{i,τ} ←$ F (to be used as a seed for a PRG).
- Extend b_{i,τ} using the PRG: B_{i,τ} ← PRG(b_{i,τ}).
- Protect the private input X_{i,τ} ∈ Z^m_R (after masking it with B_{i,τ}) using the TJL scheme: Y_{i,τ} ← TJL.Protect(pp, k_i, τ, X_{i,τ} + B_{i,τ}).
- Generate t-out-of-n shares of b_{i,τ} using the SSS scheme: {(j, ⟨b_{i,τ}⟩_j)}_{∀U_j∈U} ← SSS.Share(b_{i,τ}, t, n, F).
- For each registered user U_j ∈ U\{U_i}, encrypt its corresponding share: e_{(i,j),τ} ← Enc_{c_{i,j}}(U_i || U_j || ⟨b_{i,τ}⟩_j).
- If any of the above operations fails, abort.
- Send all the encrypted shares {e_{(i,j),τ}}_{∀U_j∈U} (with the addressing information U_i, U_j as metadata) and the protected input Y_{i,τ} to the server.

Server:
- Collect from each user U_i its encrypted shares {(U_i, U_j, e_{(i,j),τ})}_{∀U_j∈U} and its protected input Y_{i,τ} (or time out).
- Denote by U^τ_on ⊂ U the set of online users. Abort if |U^τ_on| < t.
- Forward to each user U_j ∈ U^τ_on its corresponding encrypted shares {(i, j, e_{(i,j),τ})}_{∀U_i∈U^τ_on}.

• Online - Aggregation (step τ):

User U_i:
- Receive the encrypted shares and deduce the list of online users U^τ_on from the received shares. Verify that U^τ_on ⊂ U and |U^τ_on| ≥ t.
- Decrypt all the encrypted secret shares: U′_j || U′_i || ⟨b_{j,τ}⟩_i ← Dec_{c_{i,j}}(e_{(j,i),τ}). Assert that U_i = U′_i ∧ U_j = U′_j.
- Compute the share of the zero-value corresponding to all failed users: ⟨Y′_τ⟩_i ← TJL.ShareProtect(pp_TJL, {⟨k_j⟩_i}_{∀U_j∈U\U^τ_on}, τ).
- Abort if any operation failed.
- Send the secret shares of the blinding mask seeds {⟨b_{j,τ}⟩_i}_{∀U_j∈U^τ_on} and the share of the protected zero-value ⟨Y′_τ⟩_i to the server.

Server:
- Collect shares from at least t honest users. Denote by U^τ_shares ⊂ U^τ_on this set of users. Abort if |U^τ_shares| < t.
- Reconstruct the blinding mask seed of every user b_{i,τ}, ∀U_i ∈ U^τ_on: b_{i,τ} ← SSS.Recon({⟨b_{i,τ}⟩_j}_{∀U_j∈U^τ_shares}, F).
- Recompute the blinding masks: B_{i,τ} ← PRG(b_{i,τ}).
- Construct the protected zero-value corresponding to all failed users: Y′_τ ← TJL.ShareCombine(pp, {⟨Y′_τ⟩_j}_{∀U_j∈U^τ_shares}).
- Aggregate all the protected inputs of the online clients and the protected zero-value: C_τ ← TJL.Agg(pp, 0, τ, {Y_{i,τ}}_{∀U_i∈U^τ_on}, Y′_τ).
- Remove the blinding masks: C_τ − Σ_{∀U_i∈U^τ_on} B_{i,τ} = Σ_{∀U_i∈U^τ_on} X_{i,τ}.

Security in the honest-but-curious model

As explained in Section 3.2, in the honest-but-curious model, the adversary, who corrupts the aggregator and a subset of the clients, correctly follows the protocol but colludes with up to n − t clients. Let U_corr be the set of corrupted clients and C = U_corr ∪ A (A represents the aggregator).

Theorem 6.7.1: Security in the honest-but-curious model

In the honest-but-curious model, the protocol FTSA achieves Aggregator Obliviousness if the number of corrupted clients is less than the threshold t (|U_corr| < t).

Proof. To prove this argument, we show that the view of C is computationally indistinguishable from a simulated view where all inputs of the honest clients are replaced with random values (given that the sum of the random inputs is equal to the sum in the real view). For the offline phase, the proof is trivial, since the offline phase amounts to distributing a secret sharing of a zero value between n parties using Diffie-Hellman. Thus, we focus on proving security in the online phase using a hybrid argument that relies on the security of the underlying cryptographic primitives.

H0: This hybrid view is the same as the real view.

H1: In this hybrid view, we replace all channel keys established by KA.agree between honest clients with uniformly random keys. The Decisional Diffie-Hellman assumption guarantees that this hybrid view is indistinguishable from the previous one.

H2: In this hybrid view, we replace all ciphertexts generated by Enc between honest clients with encryptions of zeros (padded to the right length). The IND-CPA security of the encryption scheme (see Definition 2.3.5) guarantees that this hybrid view is indistinguishable from the previous one.

H3: In this hybrid view, we replace all shares of b_i sent in the online phase from honest clients U_i ∈ U\U^τ_on\U_corr to corrupted ones with random shares of zero. Notice that the adversary does not receive additional shares of b_i in the construction phase, since the honest clients do not provide shares of b_i for U_i ∈ U\U^τ_on. Hence, if |U_corr| < t, this hybrid view is indistinguishable from the previous one due to the security of the ISS scheme (see Definition 2.3.2).

H4: In this hybrid view, we replace all honest clients' (U\U_corr) inputs with uniformly random values such that the sum of the online clients' random inputs is equal to the aggregate.
The simulator masks these random values, protects them with TJL.Protect, and gives the resulting ciphertexts to the adversary. Notice that we allow the adversary to receive TJL.Protect ciphertexts from offline clients, as this may happen in benign scenarios where the aggregator receives a delayed ciphertext after it had considered the user offline.

- For the online honest clients (U_i ∈ U^τ_on\U_corr), the adversary does not receive the shares of the zero-protected value from the honest clients. Thus, the adversary cannot construct the zero-protected value of the user U_i if |U_corr| < t. Hence, the adversary only sees one TJL.Protect ciphertext of the user U_i. The security of TJL states that the ciphertexts of two different values cannot be distinguished under the DCR assumption if only one ciphertext per timestamp is provided (see Theorem 6.5.2). Therefore, the ciphertext of the uniformly random input in this hybrid is indistinguishable from the ciphertext of the real input in the real view.

- For the offline honest clients (U_i ∈ U\U^τ_on\U_corr), the adversary does not receive the shares of the mask b_i from honest clients. Thus, the adversary cannot construct the mask b_i of the user U_i if |U_corr| < t. Hence, the masked random input in this hybrid view cannot be distinguished from the masked real input in the real view.

Since the adversary cannot distinguish the messages in the hybrid view from those in the real view for both online and offline clients, we can deduce that this hybrid view is indistinguishable from the previous one if |U_corr| < t.

We showed that we can define a simulator that can simulate the view of the parties in C such that, for any polynomial-time adversary, the output of the simulation is indistinguishable from the real view of C if |U_corr| < t. Thus, the aggregator learns nothing more than the sum of the inputs of the online honest clients. This proves Aggregator Obliviousness.

Corollary 6.7.1

In the honest-but-curious model, security is guaranteed if the minimum number of honest clients t is strictly larger than half of the number of clients in the protocol (t > n/2), and the protocol can recover from up to n/2 − 1 client failures.

Proof. From Theorem 6.7.1, |U_corr| < t. We also have the number of honest users set to the threshold t (i.e., |U_corr| = n − t). Therefore, t > n/2.

Security in the malicious model

As explained in Section 3.2, in the malicious model, the adversary, who corrupts the aggregator and a subset of users, can additionally manipulate the messages of the corrupted parties.

Theorem 6.7.2: Security in the malicious model

In the malicious model, the protocol FTSA achieves Aggregator Obliviousness if the number of corrupted clients satisfies |U_corr| < 2t − n.

Proof. The only messages that the aggregator distributes are the encrypted shares. The aggregator cannot modify the values of these encrypted shares thanks to the underlying authenticated encryption scheme Enc (see Definition 2.3.6). Therefore, the aggregator's power in the protocol is limited to not forwarding some of the shares. This can make clients reach false conclusions about the set of online clients in the protocol. Note, importantly, that the aggregator can give different views to different clients about which clients have failed.
Therefore, the aggregator can convince one set of honest clients to send the protected zero-value Y′_τ corresponding to client U_i (i.e., claiming U\U^τ_on = {U_i}) while asking another set of honest clients to send the shares of the blinding mask b_{i,τ}. The aggregator should not obtain both the protected zero-value Y′_τ corresponding to a client U_i and its blinding mask b_{i,τ} for the same FL round τ: if that happens, the aggregator can recover the masked input from Y′_τ and Y_{i,τ}, and then remove the mask using b_{i,τ}. Knowing that the aggregator may collude with |U_corr| corrupted clients, the adversary controlling C obtains |U_corr| shares of Y′_τ (s.t. U\U^τ_on = {U_i}) and |U_corr| shares of b_{i,τ} for free. Each of the n − |U_corr| honest clients contributes a share of at most one of the two secrets, and each secret requires t shares to be reconstructed; the adversary can therefore obtain both secrets only if 2(t − |U_corr|) ≤ n − |U_corr|, i.e., only if (n + |U_corr|)/2 ≥ t. Finally, we can use the same hybrid argument used in the proof of security in the honest-but-curious model, except that in the active model the adversary chooses whether to receive the share ⟨Y′_τ⟩_j or the share ⟨b_{i,τ}⟩_j from each honest client U_j (instead of this being fixed according to U^τ_on). Hence, indistinguishability can be ensured as long as the maximum number of shares of both values obtained by the adversary is less than the reconstruction threshold ((n + |U_corr|)/2 < t). Therefore, we can prove Aggregator Obliviousness if the number of corrupted clients satisfies |U_corr| < 2t − n. For example, with n = 100 clients and threshold t = 70, the protocol tolerates up to 2·70 − 100 − 1 = 39 corrupted clients.

Corollary 6.7.2

In the active model, security is guaranteed if the minimum number of honest clients t is strictly larger than two thirds of the number of clients in the protocol (t > 2n/3), and the protocol can recover from up to n/3 − 1 client failures.

Proof. From Theorem 6.7.2, |U_corr| < 2t − n. We also have the number of honest users set to the threshold t (i.e., |U_corr| = n − t). Therefore, the reconstruction threshold is t > 2n/3.

Performance Analysis

In this section, we analyze the performance of FTSA in terms of computation, communication, and storage costs at both the client and the server, and compare it with the state of the art. We first perform the complexity analysis with respect to the number of clients n and the dimension of the client's input m. Then, we implement the protocol and conduct an experimental evaluation of it. For completeness, we analyze both the offline and online phases. We only compare the online phases of the different protocols, since the offline setup phase is a one-time process.

Table 6.1: Complexity analysis of the online phase of FTSA and the state-of-the-art protocols. The values represent the order of complexity; n is the number of clients (order of hundreds) and m is the dimension of the client's input (order of hundreds of thousands).

Protocol | Client Comm. | Server Comm. | Client Comp. | Server Comp.
SecAgg [BIK+17] | m + n | nm + n² | nm + n² | n²m
SecAgg+ [BBG+20] | m + log(n) | nm + n·log(n) | log(n)·m + log²(n) | n·log(n)·m + n·log²(n)
TurboAgg [SGA21b] | log(n)·m | n·log(n)·m | log(n)·m | nm
FastSecAgg [KRKR20] | m + n | nm + log²(n) | log(n)·m | nm
FTSA | m + n | nm + n² | m + n² | nm + n²
FTSA+ | m + log(n) | nm + n·log(n) | m + log²(n) | nm + n·log²(n)

Scalability Analysis of The Offline Phase

In terms of communication cost, the offline phase consists of each client receiving n public keys and sending n shares of its secret keys. Thus, the complexity is O(n) for the client and O(n²) for the server. In terms of computation cost, the server only relays the messages between the clients, so it does not perform cryptographic operations.
However, the client's cost is dominated by creating t shares of its secret keys, which is O(n²). Finally, the storage requirement of the client depends on the number of shares, which is O(n), while it is O(n²) for the server, since it has to collect and forward the encrypted shares of all clients.

Scalability Analysis of The Online Phase at the Client

Communication: During the Encryption step, the communication cost consists of (i) sending O(n) shares of b_{i,τ} and receiving O(n) shares, and (ii) sending the encrypted client input, which is O(m). The total communication cost is therefore O(m + n).

Computation: The client's cost is dominated by one TJL.Protect call on the m-dimensional input and by the sharing and share-protection operations, i.e., O(m + n²) overall (cf. Table 6.1).

Scalability Analysis of The Online Phase at the Server

Computation: The server's cost consists of reconstructing the shared values and of aggregating the ciphertexts and unmasking the result, which is O(nm). The total computation cost equals O(n² + nm), which is lower than the one in [BIK+17] (O(n²m)). Note that the reconstruction of t out of n shares normally costs O(n²), since it consists of computing the Lagrange coefficients (O(n²)) and then applying the Lagrange formula (O(n)); the reconstruction for n clients would thus cost O(n³). However, we optimize the reconstruction by computing the Lagrange coefficients only once per FL round and reusing them for all the reconstructions, which results in O(n² + n) ≡ O(n²).

Storage: The server storage is O(n² + nm): it consists of t shares of b_{i,τ} for each client, which is O(n²), and t shares of Y′_τ, which is O(nm).

Experimental Evaluation

We have implemented a prototype of the TJL scheme and of our fault-tolerant secure aggregation protocol. Additionally, we have also implemented a prototype of the protocol presented in [BIK+17]. Our implementation is written in Python and is available on GitHub. We first benchmark our newly proposed TJL secure aggregation scheme. Then, we run several experiments with our secure aggregation protocol and the one in [BIK+17] while varying the number of clients n = {100, 300, 600, 1000}. We also use different dimensions for the clients' inputs m = {10³, 10⁴, 10⁵, 10⁶} and different client failure rates f = {0%, 10%, 20%, 30%}. In each experiment, we run all the clients' and the server's processes and measure their execution time. We also measure the size of the data transmitted over the network. The measurements are performed using a single-threaded process on a machine with an Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz processor and 32 GB of RAM.

Implementation Details

We use the same implementation and parameters for the building blocks of our protocol and of the protocol proposed in [BIK+17].

• Pseudo-Random Generator (PRG): We use AES encryption [DBN+01] in counter mode with a 128-bit key. Thus, the blinding mask seed (b_i) is 128 bits long.

• Key Agreement (KA): We use Elliptic-Curve Diffie-Hellman over the NIST P-256 curve. For the hash function, we use SHA256.

• Secret Sharing (SS): We use the finite field F_p (integers modulo a prime p) in the implementation of Shamir's secret sharing. To share the seed of the blinding mask b_i (128 bits) we set p = 2¹²⁹ − 1365. To share the DH secret key (256 bits) we set p = 2²⁵⁷ − 2233.

• Authenticated Encryption (Enc): We use AES-GCM [Dwo07] with a key size of 256 bits.

• Threshold Joye-Libert Scheme (TJL): We use 1024 bits for the public parameter N; thus, the user keys are of size 2048 bits. We implement the hash function H : Z → Z*_{N²} as a Full-Domain Hash using a chain of eight SHA256 hashes (see the sketch below).
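A minimal sketch of such a chained-SHA256 full-domain hash into Z*_{N²}; the exact construction in the prototype may differ, and the retry-until-invertible step is our assumption.

```python
import hashlib
from math import gcd

def fdh_hash(tau: int, N: int) -> int:
    """Full-domain hash H: Z -> Z*_{N^2} built from chained SHA256 digests."""
    n_squared = N * N
    counter = 0
    while True:
        # Concatenate 8 chained SHA256 digests to cover the 2048-bit size of N^2.
        digest, chunks = hashlib.sha256(f"{tau}|{counter}".encode()).digest(), []
        for _ in range(8):
            chunks.append(digest)
            digest = hashlib.sha256(digest).digest()
        candidate = int.from_bytes(b"".join(chunks), "big") % n_squared
        if gcd(candidate, N) == 1:      # ensure membership in Z*_{N^2}
            return candidate
        counter += 1                    # retry; extremely rare for a large N
```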
Additionally, for the secret sharing over the integers scheme (ISS) used by TJL.SKShare, we set the security parameter σ to 128 bits.

Benchmarks for JL and TJL schemes

We show in Table 6.2 the execution time of all the algorithms of the JL and TJL schemes.

Benchmarks for FTSA and SecAgg

Running time for clients: We plot the wall-clock running time of a client in both protocols (i.e., our protocol and the one in [BIK+17]) in Figure 6.4a. Our protocol shows a better running time in most of the scenarios. Additionally, our protocol scales better with respect to an increasing number of clients (our solution is ×1.5 faster with 100 clients, ×5.5 faster with 600 clients, and ×8 faster with 1000 clients). This confirms the study in Section 6.8.2. We also show the setup time for our protocol in Table 6.x. The results show that the offline time becomes negligible after running a sufficient number of FL rounds.

Running time for the server: We plot the wall-clock running time of the server in both protocols in Figure 6.4b. In general, the cost is higher in our protocol, since the reconstruction operation (i.e., the Lagrange formula) is computed in the exponent. However, this overhead is constant in our protocol w.r.t. the number of failed clients (i.e., a single value Y′_τ is constructed to represent all failed clients). Therefore, the scalability of our protocol with respect to the number of failures is better. We show the detailed running time (per protocol round) in Table A.1 in Appendix A.1.

Data transfer: To measure the data transfer, we measure the size of the data sent by the client to the server and the data received by the client from the server. We plot the total data transfer (sent and received) per client in both protocols in Figure 6.4c. Our protocol has a larger data transfer in scenarios with a low number of clients (n = 300) but is more efficient when the number of clients is increased to n = 1000. This shows that our protocol scales better with an increasing number of clients. On the other hand, our protocol involves a larger data transfer when the input dimension increases. This is mainly because of the larger size of the ciphertext (an element of Z_{N²}, i.e., twice the bit-size of the modulus N).

Optimized Protocol: FTSA*

The key idea of our optimized protocol, FTSA*, is to move all secret-sharing operations to the setup phase; with this optimization, the computation and communication complexity of the user in the online phase becomes constant with respect to the number of users.

The Protocol

Similar to FTSA, our optimized secure and fault-tolerant aggregation protocol is composed of an offline and an online phase. In the setup phase, the users compute their TJL secret keys as in the FTSA protocol. In addition, each user generates a masking key bk_i ←$ {0,1}^{2l} and shares it with the other users using the t-out-of-n secret sharing scheme; each user U_i thus gets a share {(j, ⟨bk_j⟩_i)}_{∀U_j∈U}. In the encryption step of the online phase, the users do not need to mask their input with B_{i,τ} generated from the secretly shared seed b_{i,τ}. Instead, they compute TJL.Protect of their input with the masked key and send it to the server. Next, in the construction step, the online users construct the zero-protected values of the offline users as in the FTSA protocol. In addition, they help the aggregator remove the masking keys from the ciphertexts of the online users. The details of the online phase of the new protocol are shown in Figure 6.5.
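The core trick of FTSA* — encrypting under a masked key — can be sketched as follows; jl_protect mirrors the JL protection equation above, and the parameter values are illustrative toys, not secure sizes.

```python
# Sketch: TJL.Protect under a masked key k_i + bk_i (toy parameters).
def jl_protect(pp_N, H_tau, key, x):
    n_sq = pp_N * pp_N
    return ((1 + x * pp_N) * pow(H_tau, key, n_sq)) % n_sq

# Encrypting with k + bk equals the normal ciphertext re-masked by H(tau)^bk,
# so t collaborating users can later strip the masks without new sharing.
N, H_tau = 187, 25          # toy modulus and hash value (NOT secure sizes)
k, bk, x = 11, 7, 3
lhs = jl_protect(N, H_tau, k + bk, x)
rhs = jl_protect(N, H_tau, k, x) * pow(H_tau, bk, N * N) % (N * N)
assert lhs == rhs
```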
Security Analysis

Theorem 6.9.1

The protocol FTSA* achieves Aggregator Obliviousness in the passive model if the number of corrupted clients is less than the threshold t (|U_corr| < t), and it achieves Aggregator Obliviousness in the active model if the number of corrupted clients satisfies |U_corr| < 2t − n.

Proof. FTSA* uses the TJL scheme, which is secure under the DCR assumption if the users encrypt no more than one ciphertext for each timestamp. Encrypting with a masked key (k_i + bk_i) is equivalent to masking the ciphertext of each user at each timestamp with a fresh random mask H(τ)^{bk_i} (i.e., TJL.Protect(k_i + bk_i, τ, x_{i,τ}) = (1 + x_{i,τ}·N)·H(τ)^{k_i}·H(τ)^{bk_i}). Similar to FTSA, removing the mask requires at least t users to collaborate, where t is the threshold of the secret sharing scheme. Hence, following the security analysis in Section 6.7 and based on Theorems 6.7.1 and 6.7.2, FTSA* achieves Aggregator Obliviousness when |U_corr| < t in the case of a passive adversary, and when |U_corr| < 2t − n in the active case.

Corollary 6.9.1

In the passive (resp. active) model, security is guaranteed if the minimum number of honest clients satisfies t > n/2 (resp. t > 2n/3), and the protocol can recover from up to n/2 − 1 (resp. n/3 − 1) client failures.

Proof. The proof follows the proofs of Corollaries 6.7.1 and 6.7.2.

6.9.4 Performance Analysis

Table 6.5: Complexity analysis of the online phase of the FTSA* scheme and the state-of-the-art protocols. The values represent the order of complexity; n is the number of clients (order of hundreds) and m is the dimension of the client's input (order of hundreds of thousands).

Protocol | Client Comm. | Server Comm. | Client Comp. | Server Comp.
SecAgg [BIK+17] | m + n | nm + n² | nm + n² | n²m
SecAgg+ [BBG+20] | m + log(n) | nm + n·log(n) | log(n)·m + log²(n) | n·log(n)·m + n·log²(n)
TurboAgg [SGA21b] | log(n)·m | n·log(n)·m | log(n)·m | nm
FastSecAgg [KRKR20] | m + n | nm + log²(n) | log(n)·m | nm
FTSA | m + n | nm + n² | m + n² | nm + n²
FTSA+ | m + log(n) | nm + n·log(n) | m + log²(n) | nm + n·log²(n)
FTSA* | m | nm | m | nm

FTSA* shows a significant improvement in the performance of the user in the online phase. More specifically, the computation cost of the user amounts to one call of the algorithm TJL.Protect and two calls to TJL.ShareProtect. Consequently, the communication cost amounts to sending three ciphertexts. Therefore, both costs depend only on the size of the user input (i.e., O(m)). On the other hand, the server computation cost involves computing TJL.ShareCombine twice to reconstruct Y′_τ and Ỹ_τ from their shares; these operations are O(nm). It also involves aggregating the ciphertexts and unmasking the result, which is O(nm). Therefore, the asymptotic costs of the user and the server are reduced to O(m) and O(nm), respectively, which corresponds to the theoretical bound, since it is equivalent to the cost of federated learning without secure aggregation. Table 6.5 summarizes these results.

Conclusions

We presented two versions of a new secure and fault-tolerant aggregation protocol (FTSA and FTSA*). Our solutions combine the additively homomorphic encryption scheme of Joye and Libert [JL13] with Shamir's secret sharing to achieve fault tolerance. The FTSA protocol outperforms the state-of-the-art protocols in terms of the computational cost at the federated learning clients.
On the other hand, our optimized version FTSA* allows performing all secret-sharing operations during the setup phase. Hence, the online phase of FTSA* has only a constant-factor overhead compared to a base federated learning protocol (without secure aggregation). Consequently, FTSA* outperforms all the existing fault-tolerant secure aggregation protocols in terms of the asymptotic cost on both the clients and the server. We show that FTSA* indeed achieves optimal scalability.

Part III

Remote Attestation Protocols

Chapter 7

Fairness-Driven Remote Attestation

In this chapter, we investigate remote attestation protocols when the IoT network is large, dynamic, and heterogeneous. We first study state-of-the-art solutions for collaborative remote attestation and we identify challenges due to the large size of IoT networks. We focus on two particular problems: the heterogeneity of the devices used in the network and the dynamic nature of IoT networks. To deal with these challenges, we propose FADIA, the first lightweight collaborative remote attestation protocol that is designed with fairness in mind: FADIA enables the fair distribution of the attestation load/tasks to achieve better performance. We implement our solution on heterogeneous embedded devices and evaluate it in real scenarios. The evaluation shows that FADIA can (i) increase the lifetime of a network by an order of magnitude and (ii) decrease the remote attestation runtime by a factor of 1.6.

Remote attestation (RA) is a technique that enables a device called the prover to prove the integrity of its software configuration to another remote device called the verifier.

• The prover: the device that needs to prove its software integrity. The device executes an internal procedure that checks its software configuration and consequently generates a cryptographic proof. The procedure is usually run inside a trusted execution environment on the device.

• The verifier: the device that verifies the integrity of the prover's software. It receives the proof from the prover and validates it. The verifier relies on a root of trust, such as a trusted execution environment installed on the prover.

Root of Trust for Remote Attestation Protocols

In remote attestation protocols, the verifier relies on a root of trust existing on the prover device. Based on the kind of root of trust, there are three types of attestation:

Hardware Root of Trust [AFS97, Gro08, KSA+09, SZJvD04]: Hardware-based attestation relies on assumptions concerning the hardware that hosts the attestation code. This involves hardware components such as Trusted Platform Modules (TPM) [TPM] and secure co-processors [Smi02]. These extra hardware components perform verification tasks such as secure booting. The high cost of these techniques limits their use for IoT devices.

Software Root of Trust [GGR09, SLP08]: Software-based attestation requires no secure hardware on the prover; instead, it typically relies on strong assumptions about the prover and its environment, such as precise time measurements of a checksum computation over the prover's memory. This makes it attractive for legacy and low-cost devices, but its guarantees are weaker than those of hardware-based approaches.

Hybrid Root of Trust [FNRT14, KPS+14, EFPT12, BES+15]: More recently, hybrid approaches for the root of trust have been proposed. These approaches rely on a combination of hardware and software assumptions. They propose to design a minimal secure hardware architecture on the prover, which can be used to perform integrity checks on the prover's running software. The prover then creates a cryptographic proof and sends it to the verifier.
Upon validating this proof, the verifier authenticates the prover and thus trusts its software.

Collaborative Remote Attestation

Lately, to adhere to the increasing number of embedded devices in the network, collaborative remote attestation protocols have been proposed. The idea of collaborative remote attestation is to let the IoT devices collaboratively collect attestations from each other and finally produce a single proof for the IoT platform administrators. Hence, it improves the scalability of the protocol. In this thesis, we are interested in studying collaborative remote attestation techniques, since they are very practical in IoT networks.

Previous Work on Collaborative Remote Attestation

In this section, we present previous work on collaborative remote attestation. We first regroup the solutions into three categories based on the type of connection topology between provers. Then, we categorize them based on the type of key management. Finally, we identify the main limitations of the previous work in the literature.

Categorization based on Connection Topology

In collaborative remote attestation, provers communicate with each other to collect the proofs. We identify three main types of collaborative remote attestation based on the topology of the connections between provers.

Remote Attestation using Tree Topology

Collaborative remote attestation protocols such as SANA [ACI+16], LISA [CERT17], SEDA [ABI+15], and SHeLA [RVW+19] are designed for static or quasi-static networks. These approaches run on devices deployed in a static tree topology. Tree structures provide an efficient and scalable method for verifying a large number of devices. In general, there exist three roles in the tree based on the position of the nodes:

• Initiator node: This is the root node of the tree. It initiates the tree construction (attestation) process.

• Intermediate node: This corresponds to any node in the tree that has a parent node and children nodes. It forwards messages from its parent to its children and vice versa. It usually also aggregates the attestations received from the children nodes.

• Leaf node: This is any node in the tree that has no children nodes. The message forwarding stops at this node. It sends its attestation to its parent node.

Using this tree structure, provers can collaboratively attest by aggregating their proofs, as illustrated by the sketch below. Depending on the type of cryptographic tools used to generate the proof, different aggregation algorithms may be used. For example, SANA [ACI+16] uses a combination of Boldyreva's multi-signature scheme [Bol03] and Boneh et al.'s aggregate signature scheme [BGLS03] to generate a proof. Other solutions like LISA [CERT17] use simple message authentication codes (MAC) to generate proofs.
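The following toy sketch illustrates MAC-based aggregation up an attestation tree, in the spirit of the MAC-based schemes above but not the exact message format of any of them: each prover MACs its software-configuration hash, and parents fold their children's reports into one digest.

```python
import hashlib, hmac

def attest(key: bytes, sch: bytes, nonce: bytes) -> bytes:
    """A prover's proof: MAC over its software configuration hash."""
    return hmac.new(key, sch + nonce, hashlib.sha256).digest()

def aggregate(own_proof: bytes, child_proofs: list[bytes]) -> bytes:
    """Fold children's (aggregated) proofs and the own proof into one digest."""
    h = hashlib.sha256(own_proof)
    for proof in sorted(child_proofs):   # order-independent aggregation
        h.update(proof)
    return h.digest()

# Leaf -> intermediate -> initiator, for a 3-node chain (toy keys).
nonce = b"round-42"
leaf = attest(b"k-leaf", b"sch-leaf", nonce)
mid = aggregate(attest(b"k-mid", b"sch-mid", nonce), [leaf])
root = aggregate(attest(b"k-root", b"sch-root", nonce), [mid])
# The verifier recomputes 'root' from the expected sch values and keys.
```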
Remote Attestation using Mesh Topology

Another line of research proposes RA schemes that use a gossip-based mechanism and run in dynamic mesh networks. Examples of such an approach are DARPA [ISTZ16], PADS [ACL+18], SALAD [KBK18], and SARA [DRC+20]. In these protocols, provers share their attestations with all their neighbors; the received attestations are then aggregated and forwarded. The process is repeated until convergence is reached about the state of all provers. Although gossip-based protocols offer high tolerance against failures and a high degree of autonomy, they suffer from a high bandwidth overhead and a long runtime.

Remote Attestation using Hybrid Topology

Recently, Kohnhäuser et al. proposed PASTA [KBK19], an autonomous RA protocol in which provers create multiple spanning trees in the network and generate so-called tokens to verify the attestations of the provers participating in each tree. Each token embeds the attestations of all the nodes in the tree. Later, these tokens are distributed among all the provers in a gossip-like approach. The parallelized tree generation process in PASTA allows for a relatively low runtime of the protocol. Also, the token distribution offers autonomy (decentralization) to the RA protocol. However, PASTA does not scale well in terms of communication due to the broadcasting of large tokens. Also, PASTA requires provers with high storage and computation capabilities, because it uses asymmetric cryptography for token generation.

Categorization based on Key Management

In remote attestation protocols, two types of secret keys are defined: the communication keys, which are used to establish a secure channel between provers, and the attestation keys, which are used for attestation, i.e., to generate a proof of the integrity of a prover's software. We identify two main types of collaborative remote attestation with regard to the underlying key management mechanism.

Remote Attestation using a Single Shared Key

In such RA solutions, the network operator defines a single key shared among all provers. This key is used as both a communication key and an attestation key (e.g., PADS [ACL+18]). Such an approach enables the efficient addition of new devices to the network at runtime. Nevertheless, these solutions become inefficient if a single node is compromised, as all nodes then need to update their keying material.

Remote Attestation using Pairwise Unique Keys

In such RA solutions, each prover holds a unique attestation key and a pairwise mutual communication key with each other prover. Examples of this type of RA are SALAD [KBK18], PASTA [KBK19], and DARPA [ISTZ16]. This approach ensures better security guarantees and enables an efficient revocation mechanism. However, it does not scale to a high number of provers. Moreover, the addition of new devices to the network becomes costly, because each newly joined device requires a key distribution mechanism to be run with all existing provers.
Limitations of Previous Work

The aforementioned collaborative remote attestation solutions tackle different challenges such as scalability, security, and robustness. Unfortunately, these solutions may become inefficient and sometimes even impractical for some IoT network applications. This is mainly because the current state-of-the-art solutions disregard two common characteristics of IoT networks: (i) the heterogeneity of IoT networks and (ii) the continuous increase in the size of the network (i.e., devices are added to the network gradually over time).

Heterogeneity of devices: Large-scale RA protocols involve collaborative tasks across devices. These tasks include the generation and forwarding of attestations. Existing solutions distribute this load (i.e., the RA tasks) randomly or uniformly. This may imply a significant performance decrease in heterogeneous networks, because it usually creates bottlenecks, resulting in a degradation of the overall performance. We define heterogeneity in an IoT network as the diversity in the (hardware and/or software) characteristics of the IoT devices. For instance, in Industry 4.0 IoT applications [RMK16], sensor devices in the network (e.g., Tmote Sky [Tmote]) have Microcontroller Units (MCUs) with low computational capabilities compared to a Raspberry Pi operating in robots. The quality of the MCU directly affects the speed of processing the attestation messages from peer devices. Therefore, a collaborative RA protocol that does not consider this gap in the hardware capabilities between devices will end up putting either an equal attestation load on different devices or a higher load on less capable devices. The number of proofs (i.e., attestations) that a node can receive and forward should thus depend on its capabilities. The heterogeneity of the network can also threaten the lifetime of the services. Running a RA protocol on a device consumes a significant amount of its battery due to the frequent participation in the transmission and reception of attestation messages. Thus, if sensors with lower battery levels engage in many energy-consuming operations, this can end with the battery depletion of some devices, causing potential disruptions of the service. To this end, we see heterogeneity as a problem that can have a strong impact on both the performance and the lifetime of a collaborative RA protocol.

Dynamic nature of IoT networks: With the continuous advancement of IoT applications, IoT networks gradually increase in size (i.e., new devices are added). In collaborative remote attestation, existing devices need to agree on keys with new devices in order to communicate. These keys ensure secure communication between devices to transmit their attestations. The existing solutions lack scalability and add a high overhead to the runtime of the RA protocol. Based on that, we identify the need for dynamic management and distribution of the keying material as a missing requirement for a practical remote attestation protocol.

FADIA - Overview

We present our approach to solving the problems mentioned in Section 7.2.3 (namely, heterogeneity and device addition). We design a lightweight collaborative RA protocol (FADIA). To solve the heterogeneity problem, FADIA is designed with fairness in mind. In a collaborative RA protocol, we define fairness as the ability to distribute the load of the protocol according to the capabilities of the provers.
The goal is to increase the performance of the protocol and to reach a better lifetime for the network; Table 7.x compares the asymptotic per-prover costs of existing collaborative RA protocols with those of FADIA.

Table 7.x: Asymptotic per-prover costs of collaborative RA protocols.
LISA_α [CERT17]: O(log n), O(1)
LISA_s [CERT17]: O(1), O(1)
DARPA [ISTZ16]: O(n), O(n)
SALAD [KBK18]: O(n), O(n)
PADS [ACL+18]: O(n), O(n)
PASTA [KBK19]: O(n), O(n)
FADIA: O(1), O(1)

In a fair RA protocol, provers can be assigned a score depending on their capabilities and behave accordingly. This score is computed frequently, and the protocol should adapt to any change. In FADIA, similar to PASTA [KBK19], a group of provers collaborate and create a spanning tree in which parent nodes collect attestations from children nodes. However, in contrast to PASTA, the position and the number of children of a prover in the tree are adaptively regulated. These are determined by the scoring function, which is computed based on the hardware capabilities (e.g., CPU) of the prover and its current residual battery. We therefore define a score function that outputs a score value between 0 and 1: when the score is closer to 1, the prover can take more tasks with respect to the RA protocol (for example, the node can have a higher number of children).

To cope with the dynamic nature of the network, the protocol should support the addition of new devices at runtime. We propose to rely on the existing Eschenauer-Gligor (E-G) scheme [EG02] for the distribution of communication keys. In FADIA, each node is pre-loaded with a keyring (a set of communication keys) randomly selected from the key pool. Devices that share at least one key in their keyrings can directly establish a secure communication channel. Thanks to this scheme, FADIA easily addresses the trade-off between the connectivity of the provers and the security of the communication. Furthermore, the addition of a new prover to the protocol does not require any modification of the other provers.

Building Block: Eschenauer and Gligor's Scheme

We present Eschenauer and Gligor's (E-G) scheme [EG02], which is used in the design of our protocol. The E-G key distribution scheme follows a probabilistic approach to efficiently distribute keys over a large number of devices. It facilitates the addition and the revocation of nodes (and of the corresponding keys) in the network without substantial computation and communication overhead on the end devices. The scheme defines a main key pool as a set of p keys. Each participating device picks a key ring as a random subset of size r from the key pool. Devices having at least one shared key in their key rings can communicate securely. This scheme has been shown to be simple and highly scalable and is, therefore, suitable for resource-constrained devices. In this work, we utilize the E-G scheme to distribute the keys to the provers. Only provers who share common keys can establish secure communication channels.

Connectivity of the Provers

Connectivity between provers is defined as the average percentage of provers a given prover can connect to (i.e., communicate with). The connectivity between provers using the E-G scheme is equivalent to the probability that two provers share at least one key in their key rings (P_s):

P_s = 1 − ((p − r)!)² / ((p − 2r)!·p!)   (7.1)

For example, with a key ring of size r = 300 and a key pool of size p = 100000, we obtain a connectivity of P_s ≈ 0.6.
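Equation 7.1 can be evaluated without computing huge factorials by working with log-Gamma, as in this small sketch (the function name is ours):

```python
from math import lgamma, exp

def eg_connectivity(p: int, r: int) -> float:
    """P_s = 1 - ((p-r)!)^2 / ((p-2r)! * p!), computed via log-factorials."""
    lf = lambda k: lgamma(k + 1)                 # log(k!)
    return 1.0 - exp(2 * lf(p - r) - lf(p - 2 * r) - lf(p))

print(eg_connectivity(100000, 300))              # ~0.594, matching P_s ≈ 0.6
```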
Assumptions and Threat Model

In this section, we describe the assumptions on the network. Then we define our security model.

Network Assumptions

We consider a mesh network topology where devices acting as provers communicate within their communication range. Additionally, all provers are connected to a more powerful device (the controller C) acting as the verifier; for example, this can be an edge router. The network may contain more than one controller, in which case the controllers share their information and synchronize their data on a different layer. For the sake of simplicity, we consider in this chapter a single controller that connects to all provers in the network. Both the provers and the verifier are managed by the network operator O. The participating provers can have heterogeneous characteristics. Moreover, they can be static or mobile within the network. New provers may be added to the network at any point in time.

Security Model

Security assumptions: We assume that provers have the minimal secure hardware features required to perform RA [FNRT14]. Additionally, provers are equipped with loosely synchronized real-time clocks. The minimal secure hardware can be implemented using a secure Read-Only Memory (ROM) that stores the keys and a Memory Protection Unit (MPU) that protects FADIA's attestation code. This protected execution space on each prover is referred to as the Trusted Anchor (T_A). Additionally, we assume that the controller is not compromised and is fully trusted. Furthermore, similar to previous research, FADIA only considers invasive and semi-invasive physical attacks; non-invasive attacks such as side-channel attacks are out of the scope of this work. In this context, we rely on the common assumption that, for an attacker to successfully bypass T_A, it needs to take a device offline for more than some δ_h time, which is predefined and known to be non-negligible [CDPG+10, CDPMM08]. This is because such attacks require expensive and complex laboratory equipment and the full possession of the target for a significant amount of time (from hours to weeks) [Sko12, Sko16]. This becomes even more expensive when devices are equipped with tamper-resistant mechanisms [RRC04, ION+18, MGGA17]. We finally assume that the implementation of FADIA and of its cryptographic components does not suffer from any security bugs.

Adversarial model: The main objective of an adversary is to perform malicious activities by corrupting the memory of a prover and damaging the network communication while remaining undetected. We consider two types of adversaries: a software adversary A_s and a hardware adversary A_h.

• A_s has full control of the execution of a prover apart from the T_A. It also has full access to the prover's memory, except the memory protected by the MPU. Thus, A_s can launch attacks such as spoofing attacks, man-in-the-middle attacks, and replay attacks.

• A_h has, in addition to A_s's capabilities, physical access to the devices in the network. This provides it with the ability to leak any secret or modify FADIA's code on the targeted prover.
However, this is only possible after turning the prover off for more than δ_h time, as stated previously. δ_h is defined by C for all provers participating in the protocol.

Definition 7.5.1 (Security): FADIA is considered secure if an adversary (under the aforementioned assumptions) cannot forge a "healthy" state of a compromised prover.

In line with other RA schemes [ABI+15, ACI+16, KBK19, ACL+18], we do not consider Distributed Denial of Service (DDoS) attacks. Nevertheless, in Section 7.7 we mention possible ways to detect DDoS attacks in FADIA.

Notation (cf. Table 7.2): K_ij denotes the secret key (and its corresponding key id) shared between provers P_i and P_j; sch denotes the hash of the software configuration on a prover.

FADIA - Complete Specifications

In this section, we describe the design of FADIA. The protocol is composed of four phases: the initialization, joining, attestation, and revocation phases. The initialization phase is an offline setup phase during which the keying material is installed on all involved provers. During the joining phase, a prover identifies itself to the controller and then starts the attestation phase. During the attestation phase (the core phase of FADIA), the active prover periodically participates in virtual attestation trees to send its own attestation report and forward others'. The active prover keeps running this phase until it is dropped from the network (either intentionally, because it is detected as malicious, or incidentally, because it left the network). On such an event, the revocation phase starts: the dropped prover becomes offline and its keys are revoked. In the following, we describe each phase in more detail.

Initialization Phase

Provers, before participating in FADIA, are considered offline. In order for a prover P_i to enter the protocol, it first has to be set up by the network operator O. O defines a key pool (O.Pool) according to the E-G scheme [EG02] (see Section 7.4). This key pool is stored in a safe (offline) location. The key pool contains p symmetric keys together with their key ids {(K_ID, K)}. P_i randomly receives a key ring of size r from the key pool:

P_i.Ring ← {(K_ID, K)} ⊂ O.Pool

O also assigns a unique id (uid) to P_i and a cryptographic hash of the current software configuration (sch). P_i then obtains a unique key K_ic (the attestation key) derived from the chosen keyring ids and the prover's unique id. This key is computed by O using a master key K_s:

K_ic ← KDF(K_s, {K_ID ∀ (K_ID, K) ∈ P_i.Ring} ∪ {uid})    (7.2)

K_ic is used in the later phases of the protocol to provide secure communication between P_i and the controllers. Moreover, O also defines a function score(), which evaluates at runtime the load P_i should take based on its hardware capabilities and its current capacity. This function outputs a value between 0 and 1 that indicates the amount of load P_i can take (0 indicating that the device should take the least load possible, and 1 indicating that it should take the maximum load that can be given to one device). The implementation of score() depends on the underlying environment and application. For example, in a network of battery-powered wireless devices, score() evaluates the battery percentage level of a prover; in a network of devices with microcontrollers of different computational speeds, score() categorizes the different types of microcontrollers into classes and outputs a higher value for the more powerful classes.
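A minimal sketch of such a scoring function is given below. The prover attributes (has_battery, battery_level, cpu_class) are hypothetical names used only for illustration, and the class values mirror the ones used later in Test Case 2 (0.05 for MSP430-class provers, 1 for Raspberry Pi-class provers):

```python
def score(prover) -> float:
    """Return a load score in [0, 1]; 0 = least load, 1 = maximum load."""
    if prover.has_battery:
        # Battery-driven deployment: follow the residual energy level.
        return prover.battery_level  # assumed already normalized to [0, 1]
    # Class-driven deployment: more powerful microcontroller classes
    # receive a higher score (illustrative values, cf. Test Case 2).
    return {"msp430": 0.05, "raspberry_pi": 1.0}.get(prover.cpu_class, 0.5)
```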
Joining Phase

Once P_i is initialized, it can join the network. P_i sends a join request message (jreq) to the controller C in the network. The join message contains its unique id (uid) and the set of key ids in its key ring. This message is authenticated using a message authentication code (MAC) computed with K_ic. Based on the received uid and key ids, C computes K_ic (see Equation 7.2) and authenticates P_i. Upon validation, C registers P_i in the table of provers currently participating in the protocol. The table of registered provers records the current state of each prover (either healthy or unhealthy) along with the last time the prover attested. P_i is first registered as "healthy". C sends back a response message jresp (authenticated using K_ic) to confirm the joining process. jresp includes (i) cid: the controller's id, (ii) δ_h: the maximum time a prover can stay active without attesting in the network, (iii) α_g: the maximum number of attestations that can be aggregated, and (iv) c_max: the maximum number of children a prover can accept.

Attestation Phase

At the beginning of each attestation round, P_i evaluates score() and uses the result to decide on its role in an attestation tree. More specifically, P_i updates two parameters, namely c_limit and δ_c:

• c_limit is the maximum number of children P_i can accept in the upcoming attestation tree: c_limit ← score() × c_max

• δ_c (which is less than δ_h/2) is the amount of time P_i waits to receive a participation invitation to an attestation tree. When δ_c is reached and P_i has not received any invitation, it starts the tree construction protocol: δ_c ← score() × δ_h/2

Neighboring provers that share common keys and are ready to provide their attestations construct a tree. Note that if a prover does not share any key with its neighbors, it directly sends its attestation to the controller. However, such cases appear with low probability in realistic networks. For example, if the average number of neighbors of a prover is 5 and the connectivity is P_s = 0.6, then the probability of a prover being isolated is (1 − P_s)^5 ≈ 0.01. The tree construction and the attestation collection processes are described next.

Tree Construction

A tree construction is started by an initiator prover P_i after it has waited for δ_c time. The initiator broadcasts an invitation message (invite) to its neighbors. The invitation message includes the unique id (uid) of P_i, a tree id (treeid) generated from a timestamp to guarantee freshness, and the set of key ids which P_i holds in its keyring. When P_j receives invite, it first checks that it has not attested in the last δ_h/2 time. Then it checks whether it shares at least one key with P_i. Let K_ij denote the shared key between P_i and P_j, and K_ID_ij its corresponding key id. P_j responds with an invitation acceptance message (iaccept) containing the key id K_ID_ij. The message also includes the unique ids of the source and the destination, the tree id sent by P_i, a random value (r), and a message authentication code (MAC) computed using the shared key K_ij.
P_i accepts P_j as a child only if it has not yet acquired c_limit children, and responds with a child acceptance message (caccept) authenticated with the shared key concatenated with the random value (r). At this point, both provers have established a secure channel with the shared key K_ij. Next, P_j either extends the tree, thus acting as an intermediate prover, or finishes the tree construction process, thus acting as a leaf prover. A prover acts as a leaf prover in two cases: either it does not accept any children (i.e., c_limit = 0) or its invite message times out without receiving any response from its neighbors. The details of the tree construction messages are shown in Figure 7.3. The secure channels established between parent nodes and children nodes in the tree are then used to transmit the attestation messages.

When C receives an aggregated attestation, it validates the proofs. For each set, it computes the MAC according to Equation 7.3 using the unique key K_jc of each prover P_j. For each verified aggregated proof, the status of all provers in the set is updated. More specifically, C updates the time of the last proof received from these provers to the current time. We show the attestation collection details in Figure 7.4.

Key Revocation Phase

When a prover P_i becomes "unhealthy", it cannot be trusted anymore, so C is required to revoke all the keys shared between P_i and the other provers. Since C knows the ids of all the keys in the keyrings of all provers, it can determine the affected devices (i.e., the ones that share at least one key with the "unhealthy" device). C sends a revocation message (revk) to each of them. The message contains the uid of the receiving prover and the set of key ids that should be revoked; revk is authenticated with a MAC using the K_ic of the corresponding prover:

revk ← cid || uid || affectedKeys()    (7.4)

When a prover receives revk, it removes the affected keys from its keyring. When the size of its keyring goes below a certain threshold θ, the prover goes to the offline state and requires re-initialization to go back online.

Role of the score function

The fairness-by-design approach is achieved mainly through the tuning of two parameters set with the help of the score function: c_limit and δ_c. This allows configuring the behavior and the computational load of each prover during the attestation phase; their correct setting hence ensures a fair distribution of the load caused by FADIA on the active devices. The first parameter, c_limit, is the number of children a prover can hold during one attestation round in the virtual attestation tree. The number of children has a direct impact on the amount of load put on the prover. The load involves the use of cryptographic tools (MACs) to establish a secure channel with each child and to forward the attestation proofs. The second parameter, δ_c, is the time a prover waits to receive an invitation message (invite) before it decides to start its own tree and send invite itself. It is important to mention that when a prover finishes an attestation round, it can switch to a sleeping state, since it has completed its attestation for this round. Research on wireless ad-hoc networks has studied the scheduling of the nodes' sleeping state to optimize the network lifetime [MM08]. Inspired by these studies, we control the sleeping time of a prover based on a fair policy.
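Both parameters are recomputed from the score at every round. A minimal sketch of this per-round update, reusing the illustrative score() function from above, with c_max and δ_h as received in jresp:

```python
def update_round_parameters(prover, c_max: int, delta_h: float) -> None:
    """Recompute the load parameters at the start of an attestation round."""
    s = score(prover)                  # re-evaluated at every round
    prover.c_limit = int(s * c_max)    # children accepted in the next tree
    prover.delta_c = s * delta_h / 2   # wait time before initiating a tree
```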
In FADIA, in each attestation round a prover first actively waits for invitation messages, then performs the requested attestation operation in the tree, and finally switches to a sleep state until the next round. Consequently, δ_c is a critical parameter that directly influences the average amount of time a prover spends in the sleep state and hence its resource consumption. Research has shown that being in an active state (listening or sending) can be intensively resource-consuming compared to being in a sleep state [Kim19]. Therefore, the δ_c parameter is also used to control the amount of load on a prover in FADIA. In FADIA, these two parameters are calculated at runtime by each prover, and their values are updated at each round of the attestation phase using the score function. We present examples of how to build this function depending on the deployment scenario in Section 7.8.3.

Security Analysis

The main goal of an adversary is to perform malicious activities and evade detection. A remote attestation scheme, however, should identify the presence of malicious actors in the network to safeguard the network's operations. We consider the system secure if an attacker cannot forge a "healthy" state of a "non-healthy" prover. The remainder of this section informally discusses the security of FADIA with respect to the adversarial assumptions of Section 7.5.

Attacks Performed by A_s:

• Spoofing attacks: FADIA is immune to attackers trying to spoof a prover's identity. Since all message exchanges are protected by keys stored in a location inaccessible to software attackers, which can only be accessed through T_A, an attacker is not able to produce authenticated messages without the keys from the key ring. Note that the invite message is an exception, as it is not authenticated. However, this does not affect the security of FADIA, since an attacker spoofing this message will not be able to complete the 3-way handshake (i.e., respond with a valid caccept).

• MITM attacks: FADIA identifies this attack, as the attacker faces the same limitations as with spoofing attacks. An A_s using this attack technique will not be able to manipulate messages without being detected, since the integrity of these messages is ensured using the keys from the key rings, which are protected by T_A.

• Replay attacks: FADIA prevents replay attacks, since all messages are unique and cannot be used twice without being detected. Specifically, freshness is guaranteed in invite messages thanks to the timestamp in the treeid. Similarly, iaccept and caccept include randomness for each prover-to-prover communication. Further, attst messages are also resilient to replay attacks, since they contain a counter which is incremented at each attestation round.

• DoS attacks: Although FADIA does not include these attacks in its threat model, it is worth noting that FADIA can detect their effect. This is because an A_s performing a DoS attack on a prover (or a group of provers) prevents the attestation of these provers; the controller will therefore inevitably discover a missing attestation from the provers under attack.

Attacks Performed by A_h

In addition to software adversaries, hardware adversaries have the ability to tamper with a device to extract the keys from its keyring or alter the attestation code.
Under the assumption that a hardware adversary needs to take the system offline for at least a δ_h duration, FADIA provides resilience against these attackers. This is because FADIA requires that, within every δ_h period, each prover provides evidence of its "healthiness". This helps the network owner ensure that the provers are not taken offline and thus not corrupted. On the other hand, a hardware adversary may use leaked keys to attack other provers sharing the same key. However, the key revocation process ensures that if a prover no longer participates in the protocol, these keys are revoked. Additionally, even if an attacker leaks all the key material from a prover, it will not be able to forge a legitimate proof on behalf of another prover, as such a forgery involves the targeted prover's K_ic. The state of an "unhealthy" device can thus not be forged by A_h.

Performance Analysis

In this section, we evaluate the performance of FADIA in a heterogeneous network and demonstrate the advantages originating from its fairness-driven approach by evaluating the energy consumption, the computational cost, and the bandwidth. We further study its performance in a homogeneous network and show that even in this case, FADIA outperforms the relevant state-of-the-art solutions, namely PASTA [KBK19] and SALAD [KBK18].

Experimental Setup

Our implementation relies on primitives that can run on constrained devices such as the Tmote Sky. We use HMAC-SHA256 for authenticating the messages and creating the proofs, and the SHA256 hash of the device's firmware as the software configuration hash (sch). In order to conduct our study, we first evaluate the cost of one attestation round for a prover in terms of execution time and energy consumption. Since one round mainly involves MAC and hash computations, we obtained benchmarks for HMAC-SHA256 and SHA-256, respectively; the results are shown in Table 7.3. As expected, the Tmote Sky takes more time to generate attestation proofs. We also benchmarked the throughput and the round-trip time (RTT) of the CC2420 transceiver in a real environment: the throughput on the application layer is 25.2 Kbps and the RTT is 61.4 ms.

Environment for Experimental Evaluation

We first evaluate FADIA on a heterogeneous network to measure the influence of fairness on the performance in terms of energy consumption and computational cost. To evaluate the benefits of fairness, we consider two variants of FADIA: variant (1) with fairness activated and variant (2) without fairness. Our results show that fairness improves the lifetime of the network and the runtime of the RA protocol. We then evaluate the performance of FADIA on a homogeneous network in terms of storage, computation, and communication costs. Our results show that FADIA outperforms the state-of-the-art solutions. Additionally, we evaluate the robustness of FADIA against selfish provers, as well as the efficiency of the revocation phase. We perform our simulations using the OMNeT++ simulator [VH08]. We implement FADIA on the application layer of the devices. For the lower layers, we use a simplified medium access control layer which makes sure that no two devices within communication range transmit at the same time. Provers communicate in a half-duplex fashion and store messages in queues when the medium is busy.
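To give a sense of what the benchmarked operations compute, a sketch of proof generation is shown below. The exact field layout of the attst message (Equation 7.3) is not reproduced in this thesis excerpt, so the concatenation order and field widths here are illustrative assumptions only:

```python
import hashlib
import hmac

def attestation_proof(k_ic: bytes, uid: int, cntr: int, firmware: bytes) -> bytes:
    """HMAC-SHA256 proof over the prover id, the attestation counter
    (replay protection) and the software configuration hash sch."""
    sch = hashlib.sha256(firmware).digest()  # sch = SHA256 of the firmware
    msg = uid.to_bytes(4, "big") + cntr.to_bytes(4, "big") + sch
    return hmac.new(k_ic, msg, hashlib.sha256).digest()
```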
Evaluation in Heterogeneous Networks

Test Case 1: Optimizing the energy consumption. To measure the optimization of energy consumption, we simulate FADIA on 500 Tmote Sky sensors acting as provers. sch is set as the SHA256 hash of a firmware of size 30 KB. The network and cryptographic delays are set according to the values measured in Table 7.3. The energy consumption depends on the status of the transceiver and the microcontroller of the prover: the transceiver can either be transmitting, listening, or off; similarly, the microcontroller can either be on or idle. For each of these statuses, the energy is computed according to the energy measurements shown in Table 7.4. The provers move in a random waypoint model at a linear speed uniformly distributed between 1 m/s and 2 m/s in a 300 m × 300 m area. Each prover is equipped with a battery of 1000 J maximum capacity. The initial energy level a prover starts with is chosen uniformly at random in [100 J, 1000 J]. An evaluation of different random distributions for the initial energy is shown in Appendix B.1. Note that the maximum capacity of an alkaline AA battery is around 13,000 J, but we use 1000 J as the maximum capacity for the sake of the feasibility of the experiment. We run FADIA for 60,000 seconds. For variant (1) of FADIA, we implement score() to return the current battery level of a device, whereas for variant (2), score() always returns 0.5. We measure the energy consumption of each prover with respect to time. Figure 7.5a shows the energy traces of the provers in both variants of FADIA. As expected, variant (2) of FADIA (i.e., fairness not activated) shows a fast depletion of the energy of all the provers, while for variant (1), most of the provers remain active after 40,000 seconds. The reason for the fast depletion of energy in the "unfair" protocol (i.e., variant (2)) is that provers with low energy are treated no differently from high-energy provers. This puts a significant load on these devices due to the high (i.e., unfair) number of children they need to collect attestations from. Moreover, since δ_c is not adapted to the energy level of the device, provers may spend more time waiting to be invited to a tree construction process. This keeps the transceivers of these devices in the listening state for a longer time instead of switching sooner to the off/idle state. This is a serious problem, since the component that consumes the most energy in sensor devices is the transceiver; this is the case for the Tmote Sky, as shown in Table 7.4. Notice that when provers with low battery crash, the number of provers in the network decreases and fewer tree constructions appear, causing a domino effect and faster depletion for the other devices. In contrast, with FADIA variant (1), the low-battery provers preserve their energy, which prevents the early loss of provers. We also look into the energy consumed by the provers at the end of the experiment (i.e., after 16.6 hours). We group provers that had similar initial battery levels at initialization and measure their average consumed energy at the end of the experiment.
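The per-prover energy bookkeeping behind these measurements can be sketched as follows. The power-draw values below are placeholders standing in for the measurements of Table 7.4, not the actual figures:

```python
# Hypothetical power-draw table (watts); the real values come from Table 7.4.
POWER_W = {
    "tx": 0.052, "listen": 0.059, "radio_off": 0.0,  # CC2420 transceiver states
    "cpu_on": 0.005, "cpu_idle": 0.0,                # microcontroller states
}

def consumed_energy(durations_s: dict) -> float:
    """Energy in joules, given the time (in seconds) spent in each state."""
    return sum(durations_s[state] * POWER_W[state] for state in durations_s)
```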
Figure 7.5b shows the relation between the consumed energy and the initial energy. The graph shows that provers with high initial battery levels (more than 50%) consumed more energy than provers with lower initial battery levels, thus providing a longer lifetime to the network. Additionally, we measure the time taken until we detect the 1st, 3rd, and 5th crash of a prover (i.e., complete depletion of its energy). We observe that the lifetime of the provers in a fair protocol is an order of magnitude longer.

Test Case 2: Optimizing the runtime. To measure the optimization of the runtime achieved thanks to fairness, we evaluate the runtime of FADIA on a heterogeneous static tree topology. In this scenario, we use two types of devices such that 50% of the provers are Tmote Sky (MSP430) devices and the other 50% are Raspberry Pi 2 devices. Both types of devices use the same transceiver (CC2420). We use the throughput, network delay, and cryptographic delays for each type of device according to the benchmarks measured in Table 7.3. We measure the time taken from the start of the tree construction until the final attestation is sent from the root node to the collector. The function score() is defined such that it returns 0.05 for the MSP430 provers and 1 for the Raspberry Pi provers. Accordingly, the number of accepted children (c_limit) is 1 for the less powerful provers and 20 for the powerful ones. In contrast, FADIA variant (2) assigns 10 children to each prover regardless of its type. Figure 7.6 shows the runtime results of both approaches with respect to a varying number of provers in the network. The results show that FADIA with the fairness option can run 1.6 times faster in static topologies.

Evaluation in Homogeneous Networks

Memory consumption of FADIA. Each prover in FADIA stores one key ring. The storage consumption derived from the key ring is (4 B + 32 B) × R, where R is the size of the keyring. The prover also stores uid (4 B), the controller key K_ic (32 B), the attestation counter cntr (4 B), and other FADIA parameters (20 B). The total memory consumption is thus 56 + 36 × R bytes; for example, a key ring of size 300 results in 10.6 KB of memory. It is worth noticing that the memory consumption depends only on the key ring size and is independent of the number of provers in the network. This provides very high scalability compared to most state-of-the-art solutions, which incur a cost linear in the number of provers. Our experimental study shows that FADIA can run on 10,000 provers while each of them consumes only 10.6 KB, which is significantly low compared to PASTA with 780 KB of memory usage and SALAD with 365 KB.

Runtime of FADIA. To measure the runtime of FADIA, we have implemented the protocol in a static network under four common topologies: tree, chain, ring, and grid. We consider the construction of a single attestation tree in which all provers participate. The running time is evaluated by measuring the time it takes until the report is collected by the controller. For a fair comparison with the state-of-the-art, the scenario, the types of devices, the network delays, and the cryptographic benchmarks are all set as in the evaluation of the PASTA protocol [KBK19]. We use ESP32-PICO-D4 devices as provers in the simulation and set the size of their firmware to 50 KB. The throughput of the provers on the application layer is 12.51 MB/s and the round-trip time is 4.63 ms.
The time a device takes to generate a proof is set according to Table B.1 in the appendix. The provers perform 10 attestation rounds and the runtime of the attestation is averaged. The position of the nodes is randomized between rounds to force re-initialization of the tree construction. We set the keyring size to r = 300 and α_g = ∞. Figure 7.7 shows the runtime results of FADIA and PASTA. We observe that FADIA shows a low runtime in tree topologies: it can attest 1,000,000 provers in less than 2 seconds in a 4-ary tree. The runtimes of FADIA and PASTA are close to each other in a tree topology, since most of the attestation time corresponds to network delays. Moreover, FADIA shows better performance on higher-degree trees because messages are broadcast during the construction of the attestation tree, whereas in PASTA, one-to-one tree commitment requests are sent from a prover to its neighbors. Additionally, FADIA is faster than PASTA by approximately 17 times in grid topologies and 1.3 times in ring topologies. In grid topologies, the tree construction results in an unbalanced tree. This creates a problem for PASTA, since its tree construction happens in two steps: in the first step, all provers commit to the tree; in the second step, all provers receive the aggregated commitment. This requires all provers to wait for the tree construction to finish before they start attesting. FADIA does not have this problem, since leaf nodes can immediately send their attestations without waiting for the tree construction to finish. On the other hand, PASTA outperforms FADIA in chain topologies with a large number of provers (i.e., > 8000). This is explained by the fact that in FADIA, every prover sends all the key ids in its keyring to its neighbors, which turns out to be ineffective in large chain topologies. Fortunately, such topologies do not often exist in real applications.

Bandwidth consumption of FADIA. We evaluate the bandwidth consumption of FADIA in a dynamic network. We consider devices moving in a random waypoint model (i.e., provers choose random destinations and move toward them) at a linear speed uniformly distributed between 1 m/s and 2 m/s, in an area of 500 m × 500 m. We measure the average amount of data sent and received by a prover in an attestation round (i.e., the time for all provers to attest). The scenario, the device type, the network delays, and the cryptographic benchmarks are all set as in the evaluation of the SALAD protocol [KBK18]. We use Stellaris LM4F120H5QR devices as provers in the simulation and set the size of their firmware to 30 KB. The throughput of the provers on the application layer is set to 35.0 kbps and the round-trip time to 15 ms. The cryptographic delays are set according to Table B.1 in the appendix. We choose the value of α_g as 1, 10, and infinity. We also use different values for the keyring and pool sizes: more specifically, we set r = 100, p = 10,000 and r = 300, p = 100,000. Both settings give the same graph connectivity, P_s ≈ 0.6 (see Section 7.4). Figure 7.8 shows the results. In particular, the results show that FADIA has highly scalable bandwidth consumption, since the data consumption is nearly constant with respect to the number of provers. They also show that the bandwidth consumption depends mostly on the size of the keyring; however, this cost always remains below the average consumption of SALAD, which increases linearly with the number of provers in the network.
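The dependence on the keyring size comes mainly from the invite message, which carries all key ids of the sender. A rough, illustrative size estimate is sketched below; the field widths for uid and key ids follow the memory-consumption figures above, while the treeid width is an assumption:

```python
def invite_size_bytes(r: int, uid_bytes: int = 4, treeid_bytes: int = 4,
                      key_id_bytes: int = 4) -> int:
    """Approximate size of an invite message, dominated by the key-id list."""
    return uid_bytes + treeid_bytes + r * key_id_bytes

print(invite_size_bytes(100), invite_size_bytes(300))  # 408 vs 1208 bytes
```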
Evaluation with Selfish Provers

We evaluate the impact of selfish provers on the lifetime of the network (time until the 1st, 3rd, and 5th crash of a prover). A selfish prover is a prover that is not willing to participate in the tree-generation process: it greedily attests individually to the controller and goes to the sleep state as early as possible. Note that such extreme selfish behavior can be easily detected; we consider this extreme case to evaluate the worst-case scenario. A more careful selfish prover would still collaborate, albeit less than it is supposed to. We consider the same energy consumption scenario as in Test Case 1 (see Section 7.8.3), but the selfish provers are chosen among those with an initial battery level greater than 250 J. Figure 7.5c shows the results with different percentages of selfish provers. We observe that FADIA is robust against selfishness: the lifetime drops significantly only when more than 40% of the provers are selfish. With a high number of selfish provers, the performance degrades, since there is a sort of race toward the collector to provide attestations. This creates too much contention between provers accessing the wireless medium, leading to delayed attestations.

Evaluation of the Revocation Phase in FADIA

The revocation phase starts when a prover is dropped from the network. The controller detects the event within the next δ_h time and sends a revocation message revk to all the affected provers. To measure its efficiency, we evaluate the following:

• (E_1) The number of provers affected when a device is dropped.
• (E_2) The number of connections affected.
• (E_3) The number of keys revoked for each prover.
• (E_4) The time taken until all affected provers receive the revocation.

E_1 is estimated as the number of provers in the network multiplied by the probability of two provers being connected (n × P_s). This means that the average number of affected devices is proportional to the connectivity of the provers in the network. For example, a keyring of size 300 and a key pool of size 100,000 give P_s ≈ 0.6; thus, in this example, 60% of the provers are affected by the revocation process. However, this is not a problem, since most of the keys revoked at a prover are not actually in use. A more important metric to look into is therefore the average number of affected connections (i.e., connections established using a key that is revoked). This is equal to c × r/p, which estimates E_2 (c is the total number of connections established at the time of the revocation). Following our example, the average number of affected connections is only 0.003 × c; thus, only 0.3% of the current connections need to be re-established. Third, E_3 estimates the expected decrease in the size of a keyring on each revocation process (i.e., the expected keyring overlap between two provers) and is equal to:

E_3 = Σ_{i=0}^{K} i · C(K, i) · [ Π_{j=0}^{i−1} (K − j) · Π_{j=0}^{K−i−1} (P − K − j) ] / Π_{j=0}^{K−1} (P − j)

where K is the keyring size and P is the key pool size. We illustrate this equation in Figure 7.9a. The figure shows the relation between the average number of keys revoked per prover and the connectivity of the graph, with respect to different keyring and key pool sizes. We can see that for r = 300 and p = 100,000, 0.8 keys are revoked per prover on average. It thus requires 190 provers to drop before an active prover revokes half of its keys.
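Returning to the first two metrics, they reduce to one-line estimates. A small sketch reproducing the numbers of the running example; the values of n and c below are hypothetical and chosen only for illustration:

```python
def revocation_impact(n: int, c: int, r: int, p: int, p_s: float):
    """E1: expected number of affected provers; E2: affected connections."""
    return n * p_s, c * (r / p)

e1, e2 = revocation_impact(n=1000, c=5000, r=300, p=100_000, p_s=0.6)
print(e1, e2)  # 600 provers affected (60%); 15 of 5,000 connections (0.3%)
```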
Finally, to evaluate E_4, we run a simulation in which we deploy FADIA in a network of provers with r = 300 and p = 100,000. During the run of the protocol, we start the revocation process at a random point in time and measure the time it takes until all affected provers receive the revocation message. We also consider cases where multiple revocations start simultaneously. Results are shown in Figure 7.9b. The revocation time increases linearly with the number of provers in the network and completes within 34 seconds for 1,000 provers. When multiple provers are dropped simultaneously, the revocation time scales linearly with the number of dropped provers.

Conclusion on Fairness-Driven Remote Attestation

In this chapter, we proposed FADIA, a lightweight collaborative attestation protocol that can be deployed on heterogeneous networks of IoT devices. FADIA is the first RA protocol that integrates fairness in its design, and it deploys a scalable key management mechanism based on the E-G scheme. We showed that fairness is an important feature of remote attestation protocols: it can increase the performance of the protocol by a factor of 1.6 in a network where Tmote Sky sensors and Raspberry Pis coexist, and the lifetime of the network can increase by an order of magnitude, thus achieving fewer failures. We also showed that FADIA outperforms the state-of-the-art solutions in terms of scalability on networks with a large number of provers. Finally, we evaluated the efficiency of the key revocation in FADIA. Our results show that the key management in FADIA offers a good trade-off between the connectivity of the provers and the efficiency of adding and removing provers in the network.

Chapter 8

Final Conclusion and Future Work

Conclusion

In this thesis, we have studied the design of security protocols that are suitable for the IoT. We focused on two security protocols: secure aggregation and remote attestation. We observed that the existing secure aggregation protocols suffer from several limitations when used in an IoT context. Namely, the existing SA protocols do not consider malicious users. We argued that this type of solution does not fit IoT use cases, since the devices may be compromised. Moreover, we identified that existing SA protocols do not cope with dropouts of users. This is a serious limitation in large-scale deployments, where failures of some devices are inevitable. On the other hand, we observed that existing remote attestation protocols are also not suitable for the IoT: the major challenges facing the efficient design of RA protocols are the heterogeneity of IoT devices and the dynamic nature of IoT networks. This thesis therefore proposed solutions that fix the limitations of the existing work on secure aggregation and remote attestation.

For secure aggregation, we proposed VSA, a new secure aggregation protocol that considers a stronger threat model than existing SA protocols. VSA preserves the privacy of the users' inputs and guarantees the correct computation of the aggregation result even when the aggregator and a few users act maliciously. To design VSA, we built on an existing work (PUDA [LEÖM15]) which considers only a malicious aggregator and uses user-generated tags to verify the aggregation result. To cover the case of malicious users, we integrated a newly designed tagging protocol into PUDA. Our tagging protocol (which is secure in the malicious model) involves new entities called taggers that issue the tags. The tagging protocol guarantees that only honest users receive valid tags for their inputs.
Therefore, thanks to the tagging protocol, VSA ensures that neither malicious users nor a malicious aggregator can produce an incorrect aggregation result.

Additionally, we proposed FTSA, a new secure and fault-tolerant aggregation protocol dedicated to federated learning. FTSA allows a robust computation of the aggregation result even when some users drop during the aggregation process. To build our new solution, we improved the Joye-Libert secure aggregation scheme (JL [JL13]) and proposed a threshold variant of it (TJL). Our new TJL scheme uses an integer secret sharing scheme (ISS, see Section 2.3.1) and allows a threshold number of users to protect a zero-value on behalf of other users. FTSA leverages the TJL scheme to recover the aggregation result by enabling online users to submit a protected zero-value on behalf of the offline ones. We deployed FTSA in the context of federated learning and compared it with the state-of-the-art. Our evaluation showed that FTSA scales better than all existing solutions and achieves the theoretical limit in terms of scalability with respect to both the number of users and the size of the machine-learning models.

Last but not least, we proposed FADIA, a new collaborative remote attestation protocol with fairness integrated into its design. FADIA enables a fair distribution of the attestation load and tasks to achieve better performance. We showed that fairness is an important feature for remote attestation protocols: it can increase the performance of the protocol by a factor of 1.6 in a network where Tmote Sky sensors and Raspberry Pis coexist, and the lifetime of the network can increase by an order of magnitude, thus achieving fewer failures. We also showed that FADIA outperforms the state-of-the-art solutions in terms of scalability.

Future Work

In this thesis, we studied secure aggregation and remote attestation protocols. Possible future research directions for each of the solutions are listed below:

• The verifiable secure aggregation protocol (VSA) proposed in this thesis is secure under static corruptions. To achieve security under adaptive corruptions, we need to improve the design of our protocol to cover adversaries that may corrupt honest users during the execution of the protocol. In the current design, we build our tagging protocol over an OT protocol [CO15] which is secure in the static corruption model. To replace this protocol with an OT protocol secure under adaptive corruptions, we need a technique that still allows computing the tags from the OT messages, as done in the current design.

• The VSA protocol verifies the users' inputs by checking that they are less than an upper bound. In some applications, it might be required to check a more complex condition on the inputs. Fortunately, our solution uses a garbled circuit to implement this condition, which can be generalized to a circuit that checks any predicate. It is interesting to study which predicates are useful for secure aggregation and how to implement them.

• The fault-tolerant secure aggregation protocol (FTSA) proposed in this thesis presents a method to compute protected inputs on behalf of offline users.
To achieve fault tolerance in verifiable secure aggregation solutions, it is possible to use the same approach to compute the tags on behalf of offline users. However, this would require distributed zero-knowledge proof techniques to prove that the computed ciphertext is an actual protection of a zero-value.

• In this thesis, we studied secure aggregation as a solution to preserve the privacy of clients in federated learning. However, as discussed in Chapter 5, secure aggregation should be combined with differential privacy to achieve better privacy guarantees. Studying the suitability of differential privacy techniques in combination with secure aggregation is thus an interesting area of research that complements our work.

• The collaborative remote attestation protocol (FADIA) proposed in this thesis improves the performance of RA by optimizing the management of the attestation between provers. Research on the trusted execution environment that runs the attestation code, and on the integrity-checking algorithm itself, can also lead to significant improvements of RA protocols for IoT.
Figure 3 . 2 : 32 Figure 3.2: The main operations and communication between parties in Masking-based SA. • Masking.Mask(k, x) → c : Masks an input x with the masking key k c = x + k mod R • Masking.UnMask(k, c) → x : Unmasks the ciphertext c using the masking key k c = c -k mod R It is one of the oldest techniques for designing a secure aggregation protocol. It was first used in tree-structured networks. These schemes are called layered masking schemes [CMT05, ÖM07, CCMT09]. More recently, Dining Cryptographers network (DC-net) variants are proposed in [ ÁC11, BIK + 17, BBG + 20, SGA21b]. Figure 3 . 3 : 33 Figure 3.3: The main operations and the communication between parties in AHE-based SA. Figure 3 . 4 : 34 Figure 3.4: The main operations and the communication between parties in FE-based SA. Figure 3 . 5 : 35 Figure 3.5: The main operations and communication between parties in MPC-based SA. Most of the existing work focus either on the malicious aggregator model [LE ÖM15, TLB + 21, HKKH21] or the malicious users model [KOB21, CGB17, BLV + 21]. Figure 4 . 1 : 41 Figure 4.1: PUDA protocol. The aggregator is malicious and users are honest. flips a coin b ∈ {0, 1} and returns the ciphertexts and tags of the input (i, x b i , τ ) ∀U i ∈U * . After receiving the ciphertexts {c b i } ∀U i ∈U * and their corresponding tags {σ b i } ∀U i ∈U * , A submits a guess bit b * and wins the game if b * = b. 4.5.1Let P r[A AO ] denote the probability that the adversary A outputs b * = b. Then, VSA is said to ensure Aggregator Obliviousness if for any poly-bounded adversary A the probability P r[A AO ] ≤ 1 2 + µ(λ), where µ(λ) is a negligible function and λ is the security parameter. Figure 4 . 2 : 42 Figure 4.2: Ideal functionality for a Distributed Tagging Protocol. Figure 4 . 3 : 43 Figure 4.3: Our verifiable secure aggregation protocol. All parties can be malicious and may collude except a threshold number taggers. 3 . 3 During the Challenge Phase: B outputs what A outputs. Figure 4 . 5 : 45 Figure 4.5: Ideal functionality for a Tagging Protocol. Figure 4 . 6 : 46 Figure 4.6: Distributed Tagging Protocol in F G TAG -hybrid model. Theorem 4 .9. 2 42 Protocol Π GTAG UC-realizes the functionality F G TAG in the F R ZK -hybrid model under static corruption and under the Inv-DDH assumption in G. Case 1 : 1 A corrupts the user U 1. S first chooses θ = 2 ℓ -1 (largest bound possible). It garbles the circuit similar to an honest tagger F, {l b Figure 4 . 4 Figure 4.7: Tagging Protocol in F R ZK -hybrid model. Black text is the GC part and Blue text is the modifications. Recall that timestamp τ =sid represents the session id. and E ′ =E, and the relation R Tag holds, S sends (sid,True) to A, otherwise send (sid,False) to A and (sid, abort) to F G TAG . and E ′ ̸ =E, or the relation R Tag does not hold, send (sid, abort) to F G TAG . Otherwise, S chooses θ = 2 ℓ -1 and then sends the message (sid, input, k = (a, b), θ) to F G TAG . Figure 5.1: One federated learning round with three FL clients and one server. Figure 5 . 2 : 52 Figure 5.2: A secure aggregation protocol integrated into federated learning. The secure aggregation protocol ensures that the aggregators do not learn anything about the clients' locally trained ML models except their aggregate. Figure 5 . 3 : 53 Figure 5.3: Summary of existing FL solutions that use crypto-based secure aggregation grouped by the type of secure aggregation used and the specific challenge they tackle. 
Bordered boxes indicate that the solution presents a technique that can be deployed in other types of SA protocols (eg., [XLL + 20] is implemented on masking-based SA but can be also used for AHE-based SA). Hatched boxes indicate that the scheme cannot achieve the security requirements since they do not support collusions (this is discussed in Section 5.5). Different colors represent research groups of the authors. To deal with the floating point challenge while preserving good accuracy, researchers propose to use different quantization techniques. Elkordy et al. [EA20] and Bonawitz et al. [BSK + 19] propose to use auto-tuned quantization. This technique allows adapting the quantization level of the model vector based on the requirements. More specifically, for some elements of the trained model that do not have a large impact on its accuracy, this technique reduces the quantization level which consequently reuced the communication cost. Auto-quantization is integrated with FT-Masking [BIK + 17]. Alternatively, in [BT20], Beguier et al. propose a quantization technique called TopBinary quantization. This technique is essentially a combination of top-k sparsification [SCJ18] with 1-bit quantization [BWAA18]. We observe that these quantization techniques is that they can be used for all categories of secure aggregation protocols. Scalable Secure Aggregation To tackle challenge C 4 , scalability of SA started to gain researchers' attention thanks to the new large-scale applications of FL. Bonawitz et al. [BEG + 19] set up a general framework to scale a secure aggregation framework to millions of devices. The authors propose to simply run multiple instances of the scheme, one for each subgroup of clients. Each subgroup computes intermediate aggregates which are combined later. The same intuition of grouping clients is followed up by Bell et al. [BBG + 20] and by So et al. [SGA21b]. Bell et al. observe that the FT-Masking scheme in [BIK Figure 5 . 4 : 1 541 Figure 5.4: New Components of Secure Aggregation Figure 6 . 1 : 61 Figure 6.1: Demonstration of an execution of TJL scheme with four users (n = 4) and a reconstruction threshold t = 2. Figure 6 . 2 : 62 Figure 6.2: Detailed description of the setup phase of FTSA Figure 6 . 3 : 63 Figure 6.3: Detailed description of the online phase of FTSA 2 = n+|Ucorr| 2 shares 22 and |U corr | shares of b i,τ . Additionally, for the remaining U \ U corr honest clients, the adversary's best strategy is to set U i as online for half of them and set it as offline for the other half. It thus receives shares of Y ′ τ of U i from n-|Ucorr| 2 honest clients and shares of b i,τ from the other n-|Ucorr| 2 honest clients. Hence, in total, the adversary can learn a maximum number of |U corr | + n-|Ucorr| of Y ′ τ and b i,τ of the same client. Figure 6 . 4 : 64 Figure 6.4: The wall-clock running time (a,b) and the total data transfer sent and received (c). The measurements are performed using a single-threaded python implementation of our solution and the solution in [BIK + 17] (only the online phase time is shown). When varying the number of clients, we fix the input dimension to m = 100K and when varying the dimension we fix the number of clients to n = 300. Bars represent the average value based on 10 runs and the error margins represent the standard deviation. Figure 7 . 1 : 71 Figure 7.1: Illustration of an remote attestation execution between a prover and verifier . . 3 .Figure 7 . 5 : 375 Figure 7.5: Evaluation of the energy consumption. 
(a) the residual energy over time. w/ (left side) and w/o (right side) activating fairness. (b) the average consumed energy w.r.t. initial energy after 16.6 hours of running FADIA with fairness activated. The green-colored area represents the residual energy and the orange-colored area represents the consumed energy at the end of the experiment. (c) the time taken until 1st, 3rd, and 5th crash (energy depletion at a prover) with different percentages of selfish provers in the network. Dotted lines represent the results when fairness is not activated. Figure 7 . 6 : 76 Figure 7.6: Runtime of FADIA with (in green) and without (in orange) fairness integrated in dynamic network. Figure 7 7 Figure 7.7: Runtime of FADIA and PASTA with a different number of provers in tree, grid, chain, and ring topologies. Figure 7.8: Average amount of data sent by a prover during one round of attestation in FADIA and SALAD protocols. Figure 7.9: Revocation efficiency. (a) shows the number of revoked keys per prover with respect to different key ring sizes. (b) shows the average amount of time taken till the revocation process is completed with respect to a variable number of provers in the network. Table 2 . 2 1: Common Notations al. [SCR + 11]. We state this security requirement in Definition 3.2.1. The second requirement is Aggregator Unforgeability and it was initially proposed by Leontiadis et al. [LE ÖM15] and improved later in several other works. We state this security requirement in Definition 3.2.2. This security notion ensures that an adversary (by corrupting the aggregator) cannot learn more than what could be learned from the sum of the users' inputs. If the adversary corrupts some users, the notion only requires that the adversary gets no extra information about the values of the honest users beyond their sum. Definition 3.2.1 Aggregator Obliviousness: Definition 3.2.2 Table 3 . 3 2: Table comparing the baseline constructions of the different categories of secure aggregation. T P stands for trusted third party. SC stands for pre-established secure channels. Dynamic users property shows whether the aggregation can be performed with a subset of the users, only. Comp. stands for computation cost on users. Comm. stands for communication cost between users and aggregators. n and m represent the number of users and the size of the user's input, respectively. • O EncTag : When queried with a user identifier i, input x, and timestamp τ , the oracle first checks if i and τ are queried before. If so, it aborts. The oracle computes c i and σ i as VSA.Enc(ek i , x, τ ) and VSA.Tag({tk i,j } ∀j∈[m] , {θ min , ..., θ min }, x, τ ) respectively. Then, if U i / ∈ U corr , it outputs the result (c i , σ i ), otherwise it outputs (σ i ). A queries the oracle poly-number of times for any i, x, and τ s.t. U Game 1 Aggregator Obliviousness (AO) The Learning Phase: /* i / ∈ Ucorr and x≤θ */ (c i , σ i ) ← O EncTag(i,x,τ ) /* A queries the oracle poly-number of times for any i, x, and τ s.t. U i ∈ Ucorr*/ Type III Forgery: VSA.Verify(vk, sum τ * , Υ τ * ) = 1 and A made queries with timestamp τ * , however the sum sum τ * > ∀U i ∈U \Ucorr x i,τ * + |U corr |θ min . AU III ] denote the probability that the adversary A outputs a tuple that satisfies Type I, II, and III Forgery respectively. 
Then, VSA is said to ensure Aggregate Unforgeability if for any poly-bounded adversary A the probability P r[A AU ] = P r[A AU I ] + P r[A AU II ] + P r[A AU III ] ≤ µ(λ), where µ(λ) is a negligible function and λ is the security parameter. Definition 4.5.2 features stronger security guarantees than PUDA. First, it captures the Type I and II Forgeries that represent a corrupted aggregator forcing an incorrect sum, but this time, even when a corrupted aggregator colludes with corrupted users. Second, it captures a new type of forgery Type III Forgery that represents corrupted users adding invalid inputs to the sum. Definition 4.5.2 Let P r[A AU I ], P r[A AU II ], and P r[A 4.8.1 For any ϵ > 0 and poly-bounded adversary A that statically corrupts U corr ⊂ U, A, or T corr ⊂ T s.t. |T | < t (t is the SSS threshold ), or any combination of them where P r[A AO ] ≤ 1 2 + ϵ there exists a poly-bounded adversary B that statically corrupt U corr ⊂ U and/or A such that P r[B AO PUDA ] ≥ 1 2 + ϵ. Notice that this reduction is only true when the adversary does not corrupt t or more taggers (|T corr | < t). To prove this reduction, we construct the adversary B using A. Let us denote the oracles that B has access to by O PUDA Setup , O PUDA Corrupt , O PUDA EncTag , and O PUDA It then queries O PUDA Corrupt with each user identifier i of the corrupted users U i ∈ U corr which returns {(ek i , tk i )} ∀U i ∈Ucorr . B samples uniformly at random a ′ ←$ Z p and b Proof. AO defined in [LE ÖM15 ]. B proceeds as follows: 1. During the Setup and Corruption Phase: Given that the corrupted parties in U corr ∪ T corr ∪ {A}, when A chooses the security parameter λ and the threshold t, B queries O PUDA Setup which returns pp, ak, and vk. Challenge Phase: When A queries O AO with some parameters, B queries O PUDA AO with the same parameters then gives the result to A and outputs what it outputs.In this reduction, the difference between A's simulated view (by B) and A's real view (when it plays the real game) is only the tagging keys' shares of the corrupted taggers. Since A does not see a sufficient number of shares (because |T corr | < t), the shares from both views are indistinguishable (see Security definition of Shamir's secret sharing in section 2.3.1). Therefore, if A outputs b * = b (wins VSA AO game) with a probability 1 2 + ϵ, B will also output b * = b (win PUDA AO game) with the same probability. This proves the lemma. For any ϵ > 0 and poly-bounded adversary A that statically corrupts U corr ⊂ U, A, or T corr ⊂ T s.t. |T | < t (t is the SSS threshold ), or any combination of them where P r[A AU I ] + P r[A AU II ] ≤ ϵ there exists a poly-bounded adversary B that statically corrupt A such that P r[B AU PUDA ] ≥ ϵ. Notice that this reduction is only true when the adversary does not corrupt t or more taggers (|T corr | < t). To prove this reduction, we construct the adversary B using A. Denote the oracles that B has access to by O PUDA Setup and O PUDA EncTag defined in [LE ÖM15]. Notice that B does not have access to oracle O PUDA Corrupt since in PUDA the aggregator is not allowed to collude with the users. However, B should provide A with the encryption keys of the corrupted users. It turns out that B can simply generate a random encryption key for each user and use that key to produce ciphertexts instead of using O PUDA It then samples the user encryption keys uniformly at random for all users rk i ←$ Z p such that i∈[n] rk i = -ak. It also samples a ′ ←$ Z p and b ′ Lemma 4.8.2 Proof. 
EncTag (B needs O PUDA EncTag to only produce the tags). In more detail, B proceeds as follows: 1. During the Setup and Corruption Phase: Given the corrupted parties are U corr ∪ T corr ∪ {A}, B queries O PUDA Setup which returns pp, ak, and vk. i ←$ Z p uniformly at random for each i ∈ [n]. Then it computes SSS.Share(a ′ , t, m, Z p ) and SSS.Share(b ′ i , t, m, Z p ) for each i ∈ [n]. It sets TK j ak thus, they are statistically indistinguishable.We conclude that the simulated view of A (by B) and the real view of A are statistically indistinguishable. Hence, if A succeeds in producing a Type I or II Forgery with probability ϵ then B succeeds with the same probability. This proves the lemma.For any poly-bounded adversary A that statically corruptsU corr ⊂ U, A, or T corr ⊂ T s.t. |T | < t (t is the SSS threshold ),or any combination of them, VSA achieves Aggregator Obliviousness under the decisional Diffie-Hellman (DDH) assumption in G 1 in the random oracle model and F t,G 1For any poly-bounded adversary A that statically corrupts U corr ⊂ U, A, or T corr ⊂ T s.t. |T | < t (t is the SSS threshold ), or any combination of them, VSA scheme achieves Aggregate Unforgeability against a Type II Forgery under the LEOM assumption in the random oracle model and F t,G 1 DTAG -hybrid model. Since PUDA ensures aggregate unforgeability against Type II Forgery under the BCDH assumption in the random oracle model, and since our AU game reduces to AU PUDA game (see Lemma 4.8.2), we can conclude directly that our scheme also ensures aggregate unforgeability against Type II Forgery under the LEOM assumption in the random oracle model and F t,G 1 DTAG -hybrid model. Corollary 4.8.3 Proof. Corollary 4.8.1 Proof. Since PUDA ensures aggregate unforgeability against Type I Forgery under the BCDH assumption in the random oracle model, and since our AU game reduces to AU PUDA game (see Lemma 4.8.2), we can conclude directly that our scheme also ensures aggregate unforgeability against Type I Forgery under the BCDH assumption in the random oracle model and F t,G 1 DTAG -hybrid model. DTAG -hybrid model. Proof. Since PUDA ensures aggregator obliviousness under the DDH assumption in G 1 in the random oracle model, and since our AO game reduces to AO PUDA game (see Lemma 4.8.1), we can conclude directly that our scheme also ensures aggregator obliviousness under the decisional Diffie-Hellman (DDH) assumption in G 1 in the random oracle model and F t,G 1 DTAG -hybrid model. Corollary 4.8.2 For any poly-bounded adversary A that statically corrupts U corr ⊂ U, A, or T corr ⊂ T s.t. |T | < t (t is the SSS threshold ), or any combination of them, VSA scheme achieves Aggregate Unforgeability against a Type I Forgery under the BCDH assumption in the random oracle model and F t,G 1 DTAG -hybrid model. Theorem 4.8.1 For any poly-bounded adversary A that statically corrupts U corr ⊂ U, A, or T corr ⊂ T s.t. |T | < t (t is the SSS threshold ), or any combination of them, VSA scheme achieves Aggregate Unforgeability against a Type III Forgery under the LEOM assumption in the random oracle model and F t,G 1 DTAG -hybrid model. Proof. Let us assume that sum τ * > ∀U i ∈U \Ucorr x i,τ * + |U corr |θ min . This gives us that ∀U i ∈Ucorr x i,τ * > |U corr |θ min . Notice that in this case the probability that VSA.Verify(vk, sum τ TAG that runs between one user and one tagger only. The ideal functionality F G TAG is shown in Figure4.5. 
We design the protocol Π_DTAG^{t,G} ≡ VSA.Tag that realizes the functionality F_DTAG^{t,G}. To build this protocol, we first propose a tagging protocol Π_TAG^G, with ideal functionality F_TAG^G, that runs between one user and one tagger only. The ideal functionality F_TAG^G is shown in Figure 4.5. Notice that F_TAG^G is very similar to F_DTAG^{t,G} (with one tagger), except that the adversary in F_TAG^G is allowed to perform a selective failure attack (i.e., choose to abort the protocol on certain inputs from the honest user). Given the functionality F_TAG^G, the protocol Π_DTAG^{t,G} is defined in the F_TAG^G-hybrid model as shown in Figure 4.6.

Figure 4.4: Overview of the tagging and distributed tagging functionalities.

Theorem 4.9.1. Protocol Π_DTAG^{t,G} UC-realizes the functionality F_DTAG^{t,G} in the F_TAG^G-hybrid model under static corruption if the number of corrupted taggers satisfies |T_corr| < t and |T_corr| < m − t.

Case 1: A corrupts the user U.
2. When A sends the message (sid, {B_i, P_i}_{∀i∈[ℓ]}, Q), S internally starts S_rcv (defined in Lemma 2.4.1) and hands the OT receiver's message (B_i) and the random oracle G queries to S_rcv. S_rcv extracts the receiver's choice x⟨i⟩ and sends it out to S. S repeats the process for every i ∈ [ℓ]. Hence, S extracts the user input x.
3. When A sends the message (sid, prove, ({(B′_i, P′_i)}_{i∈[ℓ]}, (H(sid), Q)), r), S checks that B_i = B′_i and P_i = P′_i for all i ∈ [ℓ] and that the relation R_DL^{ℓ+1} holds, and sends (sid, abort) to F_TAG^G if any of the checks fail. If the values are valid and the relation R_DL^{ℓ+1} holds, then S sets β = γ^r. Hence, the values of C are computationally indistinguishable in both worlds under the Inv-DDH assumption. This proves that Π_TAG^G UC-realizes the functionality F_TAG^G in the F_ZK^R-hybrid model under the Inv-DDH assumption in G if A statically corrupts U.

Case 2: A corrupts the tagger T.
1. When A sends the message (sid, γ, A = g^α, F), S chooses x = 1 (the smallest input possible) and proceeds similarly to an honest user to compute {B_i}_{∀i∈[ℓ]}, {P_i}_{∀i∈[ℓ]}, and Q. S sends (sid, {B_i, P_i}_{∀i∈[ℓ]}, Q) to A.
2. S sends the message (sid, prove, ({(B_i, P_i)}_{i∈[ℓ]}, (H(sid), Q)), r) to F_ZK^{R_DL^{ℓ+1}} and waits for A's response.

For the distributed protocol, the simulation concludes as follows. This proves that σ_IDEAL = σ_REAL, and consequently we proved that Π_DTAG^{t,G} UC-realizes the functionality F_DTAG^{t,G} in the F_TAG^G-hybrid model if A corrupts T_corr ⊂ T with |T_corr| < t and |T_corr| < m − t.

Case 3: A corrupts U and T_corr ⊂ T. Similar to the previous two cases, S forwards messages between A and F_TAG^G. Hence, the simulated execution of A and the real one are identical. Additionally, the honest parties do not get any output. Therefore, we proved that Π_DTAG^{t,G} UC-realizes the functionality F_DTAG^{t,G} in the F_TAG^G-hybrid model if A corrupts U and T_corr ⊂ T. We finally deduce that, if |T_corr| < t and |T_corr| < m − t, IDEAL_{F_DTAG^{t,G}, S, Z} ≈_s EXEC_{Π_DTAG^{t,G}, A, Z}, which proves our theorem.

The construction of [SSV+21] replaces the standard masking with a Learning With Errors masking and uses a packed and verifiable version of Shamir's secret sharing. Also, Yang et al. propose LightSecAgg [YSH+21], which replaces the Shamir's secret sharing scheme with a secret sharing scheme based on Maximum Distance Separable (MDS) codes [RL89]. The work of Yang et al. reduces the computation time at the server. Another approach, proposed by Kadhe et al. [KRKR20], uses the Fast Fourier Transform (FFT) for secret sharing.

Communication Efficient Secure Aggregation
Researchers have proposed several techniques to bound the communication overhead incurred by SA (see C_2).
For AHE-based SA, batch encryption has been leveraged by Liu et al. [LCV19], Phong et al. [PAH+18], and Yang et al. [ZLX+21]. For masking-based and AHE-based SA, communication-reduction techniques have also been proposed by Zhang et al. [ZFW+20] and Xu et al. (VerifyNet) [XLL+20].

So et al. propose TurboAgg [So et al.], which uses additive secret sharing instead of Shamir's secret sharing. To achieve robustness against dropouts, Lagrange coding [Yu et al.] is used. The main drawback of the scheme is that it divides the clients into groups, and each client in a group needs to communicate with every other client in the next group. Therefore, the communication cost of TurboAgg is relatively high. Additionally, the protocol can fail, with non-negligible probability, to recover the aggregate value when some clients drop.

6.3.4 FastSecAgg
Kadhe et al. propose FastSecAgg [KRKR20], which uses FastShare instead of Shamir's secret sharing. FastShare is a new cryptographic primitive proposed by the same authors.

• TJL.SKShare(k_i, t, n, I) → {⟨k_i⟩_j}_{∀j∈[n]}: Given user U_i's secret key, this algorithm calls ISS.Share(k_i, t, n, I) (see Section 2.3.1). It outputs n shares of the key, where each share is intended for a different user U_j ∈ U.

• TJL.ShareProtect(pp, {⟨k_j⟩_i}_{∀j|U_j∈U\U_on}, τ) → ⟨y′_τ⟩_i: This algorithm protects a zero-value with U_i's shares of all the secret keys corresponding to the failed users. It calls JL.Protect(pp, Σ_{j∈U\U_on} ⟨k_j⟩_i, τ, 0) and outputs ⟨y′_τ⟩_i.

• TJL.ShareCombine(pp, {(i, ⟨y′_τ⟩_i)}_{∀U_i∈U_shares^τ}) → y′_τ: This algorithm computes the Lagrange interpolation in the exponent, using the ν_i coefficients defined in ISS.Recon in Section 2.3.1, to reconstruct the protected zero-value y′_τ from the shares.

Table 6.2: Benchmarks of the JL and TJL schemes. Time measurements are in milliseconds, where "x / y" denotes x ms for the JL scheme and y ms for the TJL scheme. The benchmarks are run with 30% user failures and a 70% threshold; Setup is executed only once.

Party  Algorithm       n = 100     n = 300     n = 600
TP     Setup           12 / 10     11 / 15     13 / 13
User   SKShare         -  / 2      -  / 23     -  / 149
User   ShareProtect    -  / 7      -  / 14     -  / 26
User   Protect         4  / 4      4  / 4      4  / 4
Agg    ShareCombine    -  / 88     -  / 946    -  / 4220
Agg    Agg             4  / 8      5  / 21     7  / 42

Table 6.3: Wall-clock running time per client in the online phase for one FL round with different percentages of client failures. The number of clients varies over n = {100, 300, 600, 1000} and the dimension is fixed to m = 100K.

# Clients    0%         10%        20%        30%
100          9.8 sec    20.9 sec   20.8 sec   20.8 sec
300          10.9 sec   28.1 sec   27.9 sec   26.6 sec
600          10.5 sec   35.5 sec   35.6 sec   35.8 sec
1000         10.7 sec   51.6 sec   51.2 sec   52.1 sec

The table reports the running time with different failure rates of clients; with a 0% failure rate, the recovery overhead is negligible since there is no need for a recovery phase, which is the most costly operation.
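To make the "interpolation in the exponent" step of TJL.ShareCombine concrete, here is a minimal Python sketch, assuming Shoup-style integer Lagrange coefficients (scaled by Δ = n!, as in integer secret sharing) and a multiplicative group modulo N²; the helper names, the Δ convention, and the toy parameters are illustrative assumptions rather than the thesis implementation.

```python
from fractions import Fraction
from math import factorial

def lagrange_int_coeffs(xs, delta):
    """Integer coefficients nu_i = delta * prod_{j != i} (0 - x_j) / (x_i - x_j).
    With delta = n! (Shoup's trick) these are guaranteed to be integers."""
    coeffs = []
    for xi in xs:
        c = Fraction(delta)
        for xj in xs:
            if xj != xi:
                c *= Fraction(-xj, xi - xj)
        assert c.denominator == 1
        coeffs.append(int(c))
    return coeffs

def share_combine(shares, N2, delta):
    """Interpolation 'in the exponent': prod_i y_i^{nu_i} mod N^2.
    If y_i = g^{f(x_i)}, the result is g^{delta * f(0)}, a delta-scaled
    value that ISS-based schemes account for when decoding."""
    xs = [x for x, _ in shares]
    out = 1
    for (x, y), nu in zip(shares, lagrange_int_coeffs(xs, delta)):
        term = pow(y, nu, N2) if nu >= 0 else pow(pow(y, -1, N2), -nu, N2)
        out = out * term % N2
    return out

N2 = (2**31 - 1) ** 2          # toy modulus; TJL would use a real N^2
g, delta = 7, factorial(3)
pts = [(x, pow(g, 5 + 3 * x, N2)) for x in (1, 2, 3)]   # shares of f(x) = 5 + 3x
assert share_combine(pts[:2], N2, delta) == pow(g, delta * 5, N2)
```

Working over the exponent is what forces the integer (rather than field) version of the Lagrange coefficients: the group order behind the exponent is secret, so divisions modulo it are unavailable, and the Δ scaling clears all denominators instead.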
Software-based attestation [SPvK04, LMP11, SLS+05, KJ03] relies on assumptions concerning the software implementation of the attestation code. An example of software-based attestation is Pioneer [SLS+05]. Pioneer computes a checksum of device memory using a function that includes side effects in its computation, such that any emulation of this function incurs a timing overhead sufficiently long to detect cheating. Software-based attestation is useful only if the verifier and the prover are directly connected (without passing through intermediate hops) [Castelluccia et al.], and it is thus impractical as a remote attestation technique.

Table 7.1: Comparison between previous work on collaborative remote attestation and FADIA. The big-O notation represents the complexity of the cost on a prover with respect to n (the number of provers in the network). The table compares scalability (communication cost and storage cost) and supported features (mobility, detection of hardware attacks, autonomous adding of devices, and fairness). SEDA [ABI+15] incurs O(1) communication and O(1) storage cost per prover; SENA [ACI+16] incurs O(1) communication cost.

Table 7.2: Notations.
P_i, C, O : A prover of index i, the Collector, and the Network Operator
A_h, A_s  : Hardware adversary and software adversary
uid       : Unique id of a prover
cntr      : Counter for the number of attestations of a prover
α_g       : Maximum size of the set of uids in an attestation message
δ_h       : Minimum time required by A_h to compromise a prover
δ_c       : Time a prover waits to receive an invite before it calls generateTree()
c_max     : Maximum number of children for a node in a tree
c_limit   : Maximum number of children a prover can accept in a tree
K_ic      : Secret key shared between P_i and the controller
K_s       : Secret key shared between C and O
K_ij^ID, K_ij : Keys shared between provers P_i and P_j

Table 7.3: Benchmarks and energy consumption measurements of cryptographic functionalities of FADIA when implemented on Tmote Sky and Raspberry PI 2 devices.

               HMAC-SHA256                SHA256
               Time (ms)   Energy (mJ)    Time (ms)   Energy (mJ)
PI 2   32 B    0.068       -              0.025       -
       4 KB    1.075       -              1.049       -
       8 KB    2.083       -              2.032       -
       32 KB   8.131       -              8.079       -
SKY    32 B    63.28       0.3384         15.54       0.0862
       4 KB    1035        5.5908         988         5.3388
       8 KB    1998        10.7892        1960        10.6128

Implementation of FADIA on Tmote Sky and Raspberry PI 2
To illustrate heterogeneity, we implement FADIA on two types of devices: the Tmote Sky [Tmote Sky datasheet] and the Raspberry PI 2. The Tmote Sky, which represents a resource-constrained device, is equipped with a 16-bit 8 MHz MSP430 MCU, 10 KB of RAM, and 48 KB of non-volatile memory. The Raspberry PI 2, on the other hand, is more powerful, as it is equipped with a 900 MHz quad-core ARM Cortex-A7 CPU, 1 GB of RAM, and 32 GB of non-volatile memory. Both types of devices are equipped with CC2420 RF transceivers. The CC2420 chip operates at 2.4 GHz and is compliant with the IEEE 802.15.4 standard. We do not follow a specific security architecture when implementing FADIA on the devices; however, FADIA can be implemented on top of any security architecture. Previous research has shown that achieving a security architecture on sensor devices is indeed possible [EFPT12, BES+15, DRT17, NBM+17]. In our implementation, and without loss of generality, an ARM TEE can be used for the Raspberry PI and TYTAN [BES+15] for the Tmote Sky.

Table 7.4: Energy consumption of Tmote Sky devices while in different states. MCU ON represents a state where the microcontroller is performing computations. TX and RX represent the CC2420 state while transmitting or receiving, respectively. IDLE represents an idle state where the transceiver is off and the MCU is in low-power mode. Data taken from the Tmote Sky datasheet [Tmote Sky datasheet].
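As a rough way to reproduce the flavor of the Table 7.3 measurements on a commodity machine, the following Python sketch times HMAC-SHA256 and SHA256 over the same payload sizes; the absolute numbers will of course differ from the MSP430 and Cortex-A7 figures, and the key, payloads, and loop count are arbitrary choices made for illustration.

```python
import hashlib, hmac, os, time

KEY = os.urandom(32)
SIZES = {"32 B": 32, "4 KB": 4096, "8 KB": 8192, "32 KB": 32768}
ITERS = 1000  # average over many runs for stable timings

for label, size in SIZES.items():
    payload = os.urandom(size)
    t0 = time.perf_counter()
    for _ in range(ITERS):
        hmac.new(KEY, payload, hashlib.sha256).digest()
    t_hmac = (time.perf_counter() - t0) / ITERS * 1e3   # ms per operation
    t0 = time.perf_counter()
    for _ in range(ITERS):
        hashlib.sha256(payload).digest()
    t_sha = (time.perf_counter() - t0) / ITERS * 1e3
    print(f"{label:>6}: HMAC-SHA256 {t_hmac:.4f} ms, SHA256 {t_sha:.4f} ms")
```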
Table A.2: Data transfer per client for our protocol. The dimension is fixed to m = 10000.

# Clients  Failures           Registration  KeySetup     Encryption  Aggregation
100        0%       sent      0.13 KB       32.84 KB     62.47 KB    1.96 KB
                    rcvd      -             45.73 KB     -           5.46 KB
                    total     0.13 KB       78.57 KB     62.47 KB    7.42 KB
           30%      sent      0.13 KB       32.84 KB     62.46 KB    58.81 KB
                    rcvd      -             45.73 KB     -           3.81 KB
                    total     0.13 KB       78.57 KB     62.46 KB    62.62 KB
300        0%       sent      0.13 KB       135.95 KB    79.00 KB    5.86 KB
                    rcvd      -             174.62 KB    -           16.50 KB
                    total     0.13 KB       310.57 KB    79.00 KB    22.36 KB
           30%      sent      0.13 KB       135.95 KB    79.00 KB    67.09 KB
                    rcvd      -             174.62 KB    -           11.53 KB
                    total     0.13 KB       310.57 KB    79.00 KB    78.62 KB
600        0%       sent      0.13 KB       400.79 KB    97.30 KB    11.72 KB
                    rcvd      -             478.13 KB    -           33.05 KB
                    total     0.13 KB       878.92 KB    97.30 KB    44.77 KB
           30%      sent      0.13 KB       400.78 KB    97.30 KB    72.96 KB
                    rcvd      -             478.13 KB    -           23.12 KB
                    total     0.13 KB       878.91 KB    97.30 KB    96.08 KB

Notes:
- Baseline secure aggregation protocols correspond to the basic constructions from the corresponding cryptographic tool.
- For the formal definitions, please refer to the original paper.
- This does not affect the security proof, as it gives, in a general sense, more power to the adversary.
- If at least one client failed, constructing the protected zero-value Y′_τ from its t shares is O(nm).
- The implementation of our protocol can be found at https://github.com/MohamadMansouri/fault-tolerant-secure-agg
- The C++ implementation of FADIA for OMNeT++ can be found at https://github.com/MohamadMansouri/FADIA

Acknowledgements

List of Publications
We have presented the following publications during this PhD study. Most of them have already been presented through the different venues enumerated below.

Fault-tolerant Secure Aggregation
To tackle the problem of client failures (see C_1), a fault-tolerant secure aggregation protocol should be used. MPC-based SA, and more specifically Shamir's SS scheme [Shamir], is fault-tolerant by design. It is used in EaSTFLy [Dong et al.] and FastSecAgg [KRKR20], where the FL server role is distributed among several parties.

Note that this distributed method allows the distribution of the JL user keys, but it still relies on a trusted third party to generate the public parameters. There exist techniques to distribute the computation of the public RSA modulus N ([BF01, NS11, VAS19] in the passive setting and [CDK+22] in the active setting). In this work, we assume the existence of a trusted offline third party that distributes the public parameters.

Protocol steps
During Registration, clients register by sending their public keys to the aggregator, who further broadcasts them to all clients. Notice that each client generates two public keys: one used to create secret communication channels among clients and the other used to compute the TJL secret key. Later, in the Key Setup step, each client U_i computes mutual channel keys c_i,j and its TJL key k_i. It also creates secret shares of the TJL key using TJL.SKShare and sends them to the corresponding clients (via the server, through the authenticated channels). The specifications of the setup phase are given in Figure 6.2.

The Online Phase
The online phase consists of two communication rounds. In the first communication round (i.e., the Encryption step), the clients protect their inputs and send them to the aggregator. In the second communication round (i.e., the Aggregation step), the clients and the aggregator construct the ciphertext of the failed clients.
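For intuition on what "protecting" an input means here, the following is a minimal Python sketch of Joye-Libert-style additively homomorphic masking over Z_{N²}, with a toy prime modulus and a hash built from SHA-256; the parameter sizes, helper names, and the aggregator-key convention k_0 = −Σ k_i are illustrative assumptions rather than the exact thesis instantiation.

```python
import hashlib

def h_tau(tau, N2):
    # Toy random oracle H(tau) in Z_{N^2}; a real instantiation must hash
    # into the proper subgroup and guarantee invertibility.
    d = hashlib.sha256(str(tau).encode()).digest()
    return int.from_bytes(d * 4, "big") % N2

def jl_protect(N, k_i, tau, x_i):
    """Ciphertext y_i = (1 + x_i * N) * H(tau)^{k_i} mod N^2."""
    N2 = N * N
    return (1 + x_i * N) * pow(h_tau(tau, N2), k_i, N2) % N2

def jl_agg(N, k_0, tau, ciphertexts):
    """With aggregator key k_0 = -sum(k_i), the masks cancel in the product
    and the plaintext sum is read off the (1 + x*N) component."""
    N2 = N * N
    h = h_tau(tau, N2)
    v = pow(h, k_0, N2) if k_0 >= 0 else pow(pow(h, -1, N2), -k_0, N2)
    for y in ciphertexts:
        v = v * y % N2
    return (v - 1) // N

N = 2**61 - 1          # toy prime modulus; a real deployment uses an RSA modulus
keys = [11, 22, 33]
cts = [jl_protect(N, k, tau=1, x_i=x) for k, x in zip(keys, [5, 7, 9])]
assert jl_agg(N, -sum(keys), tau=1, ciphertexts=cts) == 21
```

The fault-tolerance problem addressed next is visible in this sketch: if one client never sends its ciphertext, its mask H(τ)^{k_i} is missing from the product and nothing cancels, which is exactly what the TJL share-based reconstruction of the zero-value repairs.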
Blinding inputs to ensure privacy
One problem with the TJL scheme is that it guarantees privacy under the assumption that only one ciphertext per user is computed for each timestamp. However, this assumption may break even in the honest-but-curious model. Consider the case where a client sends its protected input with some delay. The delay may cause the aggregator to request that the online clients construct the protected zero-value of the presumably failed client. In this case, two ciphertexts for the same timestamp are collected, breaking the security assumption of the JL scheme. To deal with this problem, the client masks its input before encrypting it. The goal of the mask is to prevent any leakage in case two ciphertexts of the same period are obtained. To remove these masks from the aggregated value, each client secretly shares its mask with all other clients. If the client survives the federated learning round, the online clients help construct its blinding mask. Otherwise, the clients construct the ciphertext of the zero-value using the TJL scheme.

Input vector encoding
The TJL scheme is originally designed to work with integers. In federated learning applications, FL clients send a vector of parameters instead of a single value. We therefore propose a dedicated encoding solution to encode a vector into a long integer (a sketch of this packing is given after this subsection). Each element of the initial vector V is first expanded by log2(n) bits of 0's at the beginning of the element. Then the elements of the vector are packed to form a large integer ω. The number of elements that ω represents is ⌊ptsize / (log2(R) + log2(n))⌋, where ptsize denotes the plaintext size of TJL and R is the maximum possible value of a vector element. Note that for the TJL scheme, the plaintext size is equal to half of the size of the user key. The decoding operation simply consists of unpacking ω into bitmaps of log2(R) + log2(n) bits. To execute TJL.Protect and TJL.ShareProtect, the user input is first encoded. In the case where the user input vectors are too large to be encoded in a single integer of size ptsize, the vectors are split into multiple vectors of length ⌊ptsize / (log2(R) + log2(n))⌋ and encoded separately. We use a counter to generate a unique timestamp for each encoded part.

Protocol steps
In the Encryption step, the client generates a random seed b_i, which is further extended to a blinding mask B_i using a PRG. The client first blinds its input with B_i and then protects it with TJL.Protect. After that, the client secretly shares the seed b_i with the other clients and sends its masked and protected input to the server. In the Aggregation step, the client learns the list of failed clients and computes TJL.ShareProtect using the sum of their TJL key shares. Then, it sends to the server the blinding-mask share of each online client and the share of the protected zero-value corresponding to all the failed clients. The server constructs the blinding masks and the protected zero-value using TJL.ShareCombine. Finally, it uses TJL.Agg to obtain the blinded sum, which is unblinded by removing the clients' masks. The detailed specification of this phase is provided in Figure 6.3.
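As a concrete illustration of the packing described above, here is a minimal Python sketch of the encode/decode operations; the function names and the big-endian packing order are assumptions made for illustration, not the thesis implementation.

```python
import math

def encode(vec, R, n):
    """Pack vector elements (each < R) into one integer, padding each slot
    with log2(n) leading zero bits so that sums of n vectors cannot overflow."""
    slot = math.ceil(math.log2(R)) + math.ceil(math.log2(n))
    omega = 0
    for v in vec:
        omega = (omega << slot) | v
    return omega

def decode(omega, R, n, length):
    """Unpack the integer back into `length` slots of log2(R)+log2(n) bits."""
    slot = math.ceil(math.log2(R)) + math.ceil(math.log2(n))
    mask = (1 << slot) - 1
    out = []
    for _ in range(length):
        out.append(omega & mask)
        omega >>= slot
    return out[::-1]

# The padding makes the packing additively homomorphic for up to n vectors:
a, b = [3, 5, 7], [1, 2, 3]
s = encode(a, R=16, n=4) + encode(b, R=16, n=4)
assert decode(s, R=16, n=4, length=3) == [4, 7, 10]
```

The log2(n) guard bits are the key design point: they absorb the carries produced when up to n slot values are added, so slot-wise sums never spill into the neighboring element.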
Deployment of FTSA on Semi-Connected Graphs (FTSA+)
In the current description of the protocol, we assumed that all clients secretly share their keys with each other. This gives the maximum security guarantee. However, as shown in [BBG+20], a client does not need to share its key with all other clients (see Section 6.3). Indeed, it is sufficient to build a connected graph where each node (representing a client) is connected to a subset of all clients and only shares its key with this subset. As we presented in Section 6.3, this technique is used to improve the scalability of SecAgg [BIK+17]. We observe that the same method can also be applied to FTSA. We refer to our protocol as FTSA+ when deployed on such graphs. Given a connected graph as described in [BBG+20], FTSA+ involves the same steps as FTSA, except that the sharing of the key is done only with the neighboring clients. By using this optimization, the complexity of our protocol in terms of the number of users is reduced to a logarithmic factor.

Security Analysis
In this section, we evaluate the security of our secure and fault-tolerant aggregation protocol (FTSA) and prove that it ensures Aggregator Obliviousness (see Definition 3.2.1) in both the honest-but-curious and malicious model settings, given a dedicated threshold t of honest clients.

Scalability Analysis of the Online Phase at the Client
Communication: During the Aggregation step, the additional cost is, if at least one client failed, sending the share of the protected zero-value. In total, the complexity is O(n + m), which is the same as in [BIK+17].

Computation: The computation cost consists of:
During the Encryption step:
1. Generating t-out-of-n shares of b_i,τ, which is O(n²).
2. Encrypting the client's message X_i,τ, which is O(m).
During the Aggregation step:
3. Encrypting the zero-value using the secret shares, which is O(m).
Therefore, the total complexity is O(n² + m), which is lower than in [BIK+17] (O(n² + nm)). The higher complexity in [BIK+17] is due to the one-to-one key agreement on the shared masking seeds and the extension of each of them to the dimension of the client's input, which adds an extra O(nm).

Storage: The client must store the keys and shares of all clients as well as the data vector, which results in a storage overhead of O(n + m); this is the same as in [BIK+17].

Scalability Analysis of the Online Phase at the Server
Communication: The server's communication cost is n times the client's communication cost, thus a complexity of O(n² + nm), which is the same as in [BIK+17].
Computation: During the Encryption step, the server's computation cost is not significant, since the server only forwards messages. During the Aggregation step, the dominant cost is, if at least one client failed, constructing the protected zero-value Y′_τ from its t shares, which is O(nm).

Summary
In summary, the major improvement of our protocol over SecAgg [BIK+17] is in the running time at the client. Our protocol reduces the overhead on the client's processor by up to 8 times when running in large networks. For the server running time, the improvement is only achieved in networks with significantly high failure rates (∼30%). Finally, the data transfer is better in our protocol in the case of a large number of clients but worse in the case of huge input dimensions.

6.9 FTSA*: An Optimized Fault-Tolerant SA
We present an optimized version of our protocol FTSA presented in Section 6.6. The optimization can be considered a major change in the way the TJL scheme is used to achieve fault-tolerant secure aggregation. We first discuss the main idea of this modification and its consequences on the protocol. Second, we present the new version of our protocol (we call it FTSA*). Finally, we present the new performance results of the protocol.
The Idea
The main limitation of FTSA is that it requires masking the user input X_i,τ with a random mask B_i,τ before applying the TJL.Protect algorithm. The masks are secretly shared with all users, which results in a quadratic complexity in terms of computation and communication. The reason for masking the input (X_i,τ + B_i,τ) is to prevent a malicious aggregator from learning the user input. This attack is performed by a malicious aggregator claiming that an actually online user has dropped. It allows the aggregator to obtain two ciphertexts (for the same timestamp) produced with the same user key (the first ciphertext is received from the online user and the second is the zero-protected value computed using the shares of the key). Masking guarantees that the user input is protected when such an attack happens. We propose a different solution to mitigate this attack. Our solution is to mask the encryption key of the user instead of masking the message itself. The advantage of this approach is that it is performed in the offline phase, when the keys are created. Hence, the sharing of the masking key is also performed in the offline phase.

Joining Phase
During the joining phase, each prover is assigned c_limit, the maximum number of children a prover can have. After P_i is registered, C regularly checks its status, and if P_i does not attest within δ_h time, its status becomes "unhealthy". Figure 7.2 provides the specification of the joining phase.

Attestation Phase
Provers start running this phase immediately after completing the joining process. P_i enters a new attestation period every δ_h/2 time (δ_h/2 is chosen to guarantee that two consecutive attestations are always received within no longer than δ_h time). At each attestation period, P_i starts by performing an integrity check on its software configuration. The check is performed against the software configuration hash (sch) stored in the prover. The method of this check and the format of sch are out of the scope of this paper. This can be a simple technique based on computing the hash of the firmware or more complex techniques such as the ones proposed in [AAD+16, DZN+17, ZDA+17, CTR18]. If the integrity check fails, then P_i quits the attestation phase and will eventually be dropped from the protocol. After succeeding in the integrity check, P_i participates in the construction of a tree in which it will attest. First, P_i generates a unique proof of its attestation using the function generateProof(). The leaf nodes send attestation messages (attst) to their parents. The attestation message of P_i contains the generated proof, which is computed from the shared key K_ic and a counter cntr that is incremented each time P_i attests. Parent nodes aggregate the attestation messages from their children using the function aggregateAttestation(). Similarly, the new attestation message is sent to the parent and processed. This is repeated until the initiator prover receives all the aggregated attestation messages. It then sends the final attestation message to the controller. An attestation message is composed of multiple sets of prover ids. The number of provers in a set is controlled by the parameter α_g. Each set is linked with a single proof, which is the XOR of all proofs provided by the provers in that set. The algorithm that describes aggregateAttestation() is depicted in Algorithm 1 (a sketch is given below). The granularity of the aggregation of the attestation is parameterized by α_g: if α_g is 0, then none of the proofs are XORed, and thus all individual proofs are transmitted to the controller; if α_g is larger than the number of provers participating in the tree, all proofs are XORed, forming a single proof for all provers.
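Algorithm 1 itself is not reproduced here, but the following Python sketch conveys the α_g-bounded XOR aggregation described above. The message representation (a list of (uid-set, proof) pairs) and the HMAC-based generateProof() are assumptions made for illustration, not FADIA's exact specification.

```python
import hashlib, hmac

def generate_proof(key_ic: bytes, uid: int, cntr: int) -> bytes:
    # Hypothetical proof: an HMAC over the prover id and its attestation counter.
    msg = uid.to_bytes(4, "big") + cntr.to_bytes(4, "big")
    return hmac.new(key_ic, msg, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def aggregate_attestation(messages, alpha_g):
    """Merge child attestation messages: each entry is (set_of_uids, proof).
    Entries are merged greedily while the combined uid set stays <= alpha_g;
    merged entries carry the XOR of their proofs."""
    merged = []
    for uids, proof in (entry for msg in messages for entry in msg):
        if merged and len(merged[-1][0]) + len(uids) <= alpha_g:
            last_uids, last_proof = merged[-1]
            merged[-1] = (last_uids | set(uids), xor(last_proof, proof))
        else:
            merged.append((set(uids), proof))
    return merged

p1 = [({1}, generate_proof(b"k1" * 16, 1, 7))]
p2 = [({2}, generate_proof(b"k2" * 16, 2, 7))]
print(aggregate_attestation([p1, p2], alpha_g=8))   # one merged set {1, 2}
```

With alpha_g = 0 the merge condition never holds and every individual proof reaches the controller; with alpha_g at least the tree size, everything collapses into a single XORed proof, matching the two extremes described above.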
Appendices
04116901
en
[ "scco.neur" ]
2024/03/04 16:41:26
2021
https://theses.hal.science/tel-04116901/file/Guerreiro_2021_These.pdf
This dissertation gathers the work developed throughout my last 4 years as a Ph.D. student at ENS. Many people have crossed paths with me during this time and played an essential part in this journey. I would like to acknowledge them here. I was fortunate to have a directeur de thèse who understands what life in research entails and makes sure his students have the tools they need to have a fulfilling career. I would like to thank Boris Gutkin for entrusting me with such an exciting and challenging research project and for his dedicated support and guidance even (and especially!) during a global pandemic and a total lockdown. Furthermore, I would like to thank Jerrel Yakel and Zhenglin Gu for guiding me through the mystic world of neurophysiology and for always taking the time to answer my questions. I value our discussions greatly, as well as the time I spent in your laboratory.

1 | Introduction

1.1 The hippocampal formation
The hippocampal formation comprises two main structures: the hippocampus and the entorhinal cortex (EC). The hippocampus is organized into subdivisions, namely, the dentate gyrus (DG), the cornu ammonis 3 (CA3), the cornu ammonis 2 (CA2), and the cornu ammonis 1 (CA1). Additionally, each subdivision is divided into layers: stratum oriens (s.o.), stratum pyramidale (s.p.), stratum radiatum (s.r.), and stratum lacunosum-moleculare (s.l.m.). Similarly, the EC presents a laminar organization (layers I, II, III, IV, V, and VI). The hippocampus and EC are interconnected through the trisynaptic loop. The EC projects to the dentate gyrus via the perforant pathway, granule cells in the dentate gyrus project to CA3 through mossy fibers, and the CA3 pyramidal cells project to CA1 via the Schaffer collateral (SC) pathway. Entorhinal inputs can also reach the hippocampal CA1 field directly through the temporoammonic pathway. Pyramidal cells in CA1 project to the deep layers of the EC, closing the hippocampal-entorhinal loop (see Figure 1.1). This general layout holds across the full range of mammalian species (Li; Amaral; Amaral). The hippocampal formation receives a vast amount of highly processed sensory information from neocortical areas, which converges into the hippocampal formation mainly through the EC. The exchange of information between the hippocampal formation and other cortical areas is fundamental for memory consolidation processes. Based on such extrinsic connectivity, the hippocampal formation exerts control over widespread regions, and it occupies a privileged position to coordinate the activity of the different brain regions.

Hippocampus: structure and organization
As previously mentioned, the hippocampus is divided into different fields with distinct morphological, anatomical, and cellular profiles. The CA3 region comprises a homogeneous population of pyramidal cells that form extensive recurrent connections with each other, allowing it to function as an auto-associative network (Marr; McNaughton; Treves).
CA1 pyramidal cells form remarkably fewer recurrent connections and are uniformly distributed, with the cell body in the pyramidale layer. The CA1 area also comprises populations of highly diverse GABAergic interneurons that form a complex neural network and control the activity of CA1 pyramidal cells by feedback or feedforward inhibition (Knowles). The EC is considered the main extrinsic source of excitatory inputs to the CA1 region. The two regions form a closed loop, with entorhinal layer III neurons projecting to CA1 and pyramidal CA1 neurons targeting entorhinal layer V/VI pyramidal cells. Additionally, CA1 neurons receive substantial cholinergic and GABAergic inputs from the medial septum (MS). CA1 circuits are fundamental for processes of memory formation (Bartsch), and impairment of CA1 neurons contributes to memory deficits in patients with damage to the hippocampus (Kadar).

CA1 GABAergic interneurons
Despite representing only 10-15% of the total hippocampal neural population (Pelkey), the interneurons form a complex, recurrently connected local network and target the excitatory pyramidal cells at different dendritic compartments. Thus, they play a crucial role in regulating the activity of pyramidal cells and the excitability of the hippocampal network. Hippocampal GABAergic interneurons can generally be classified based on morphology, neurochemical markers, or physiological features. Morphologically, hippocampal interneurons are classified by relating their somatodendritic location to the layer specificity of synaptic input and the axonal projections to the postsynaptic target domain. For example, oriens-lacunosum moleculare (OLM) cells refer to interneurons whose soma is in the s.o. layer and whose axons extend to s.l.m.; bistratified interneurons have dendrites and axons that ramify within the s.o. and s.r. layers, emerging from the cell body in s.p. (Booker). Regarding neurochemical markers, interneurons can be parvalbumin (PV+), somatostatin (SOM+), cholecystokinin (CCK+), or vasoactive intestinal peptide (VIP+) expressing interneurons. Lastly, hippocampal interneurons can have fast-spiking dynamics or present slower dynamics with low-frequency subthreshold oscillations. Please note that this is not an exhaustive list of all the interneuron types that form the hippocampal network. For example, in the CA1 area of the hippocampus, 21 classes of GABAergic interneurons have been identified to date (Freund; Klausberger; Bezaire), and this is likely to be an underestimation. It is unclear whether the current classification methods are adequate, as one GABAergic interneuron often spans different categories.
For example, both OLM interneurons with intrinsic low-frequency spiking dynamics and fast-spiking bistratified cells express somatostatin immunoreactivity (Booker; Müller). It is then challenging to dissect the functional properties of the different interneurons that form the hippocampal microcircuits.

Cholinergic signaling in the hippocampus
Cholinergic receptors can be found in hippocampal pyramidal cells and GABAergic interneurons and can be located pre- or postsynaptically. To complicate matters further, there are various cholinergic receptor subtypes with distinct physiological profiles and dynamics that can modulate the hippocampal circuitry in specific ways. There are two main classes of cholinergic receptors: muscarinic (mAChR) and nicotinic (nAChR) receptors. Muscarinic receptors are metabotropic receptors responsive to ACh and muscarine. They act through second messengers and are indirectly linked with ion channels. There are five subtypes (M1-M5) expressed across the CNS. In the hippocampus, M1 and M3 receptors are mainly expressed in principal neurons, while M2 and M4 are present on interneurons (Volpicelli). They have been shown to regulate ionic conductances and mobilize calcium (Lanzafame). Nicotinic receptors are ionotropic channels responsive to ACh and nicotine, consisting of five subunits arranged symmetrically around a pore. Each subunit of hippocampal nAChRs can be of type α2-α7 or β2-β4. The combination of subunits that composes the nAChR determines the dynamical and physiological properties of the receptor channel. Notably, while all nAChR subtypes are permeable to Na+ and K+, they differ in their permeability to calcium, with the homomeric α7 nAChR having the highest calcium permeability (Castro). The α7 nAChR is one of the most abundant cholinergic receptors in the hippocampus. It has been the subject of great interest, as its dysfunction is believed to be at the origin of cognitive deficits and neurodegenerative diseases such as Alzheimer's disease (Guan; Wang). In addition, its high calcium permeability makes it potentially involved in synaptic plasticity (Ji; Gu) and neurotransmitter release mechanisms (Wanaverbecq; Sharma and Vijayaraghavan, 2003, 2001). Another important feature of α7 nAChRs is their rapid desensitization.
Desensitization is a mechanism whereby prolonged exposure to the receptor's agonist drives it into a refractory state in which there is no ion flux. It impacts their response to repetitive inputs, but their functional role in generating and maintaining hippocampal rhythms is still unclear. Notably, even though currents mediated by α7 nAChRs decline strikingly during activation at theta frequency (Buhler), knockout of these receptors in vivo disrupts hippocampal theta oscillations (Gu and Yakel, 2017).

Entorhinal Cortex local circuit
The entorhinal cortex is commonly perceived as the nodal point of cortico-hippocampal circuits. Neurons in the superficial layers (II/III) receive most of their input from cortical areas and constitute a major excitatory input to the hippocampus; neurons in the deep layers receive extensive input from the hippocampus and project to the EC superficial layers (Amaral). The EC comprises a mixture of excitatory pyramidal cells, PV+ interneurons, and stellate cells distributed among the different layers. The superficial layers are mainly made up of densely packed excitatory stellate and pyramidal cells. The stellate cells are the most abundant cell type in these layers, and they provide the primary entorhinal excitatory input to the hippocampal region. One of their most striking features is their ability to generate rhythmic subthreshold oscillations (Alonso). Connections between stellate cells have rarely been found, and they are believed to communicate with each other through PV+ fast-spiking interneurons that can be found in the same layers (Witter). The deep layers comprise a heterogeneous population of excitatory pyramidal cells with axon collaterals terminating both in the deep and superficial layers of the EC. In the superficial layers, they target mainly pyramidal cells in layer III, generating prolonged excitatory responses (Hamam; Witter; Canto). There is a rapidly growing interest in understanding the functional properties of the EC. This is primarily motivated by studies demonstrating that some aspects of memory impairment can be attributed to damage of the EC (Davis; Buckmaster et al., 2004a), and that stellate and pyramidal cells in this brain region act as grid cells, i.e., they represent equally spaced locations in an environment via their firing rates (Tang; Moser).
1.2 Synaptic plasticity
Synaptic plasticity is defined as the ability of neurons to change the strength of the synapses in a neural network (Konorski; Hebb). It is considered to be the cellular process underlying learning and the storage of information in the hippocampus (Riedel). It implies alterations in the pre- and/or postsynaptic neurons, and it can be expressed as changes in the probability of neurotransmitter release from the presynaptic neuron or in the number and sensitivity of postsynaptic receptors. Since long-term potentiation (Bliss) and depression (Dudek) were first induced in the hippocampus, this remained the region of choice to study the mechanisms of synaptic plasticity, with the CA1 area being the most extensively studied model of activity-dependent plasticity in the mammalian brain. To this day, the SC-CA1 synapse continues to be widely used as a model synapse for the study of LTP and synaptic plasticity in general. In hippocampal excitatory synapses, long-term synaptic potentiation typically involves a calcium flux mediated by NMDARs (Dudek; Lüscher; Cummings; Bear). The NMDAR is an ionotropic glutamate receptor highly permeable to calcium. At the resting potential, extracellular Mg2+ binds to specific sites of the NMDARs, blocking the passage of ions. Postsynaptic depolarization relieves this block, allowing calcium (and Na+) to enter the cell. Depolarization of the postsynaptic membrane is typically induced through the activation of AMPARs co-localized on the dendritic spine (Lüscher; Collingridge; Muller; Tsien).
An elevation of the intracellular calcium concentration mediated by postsynaptic NMDARs can activate protein kinases such as Ca2+/calmodulin-dependent protein kinase II (CaMKII), which ultimately leads to changes in the density of AMPARs on the postsynaptic terminal (Asztely; Kullmann; Mainen; Perkel; Barria). The calcium response determines the polarity of synaptic modification. Typically, a moderate increase in intracellular calcium concentration induces LTD, while a substantial elevation of intracellular calcium induces LTP (Cummings; Lisman; Artola; Bear and Malenka, 1994). The depolarization of the postsynaptic membrane is a critical step in the induction of NMDAR-dependent plasticity. Thus, it is not surprising that GABAergic circuits can modulate hippocampal plasticity at excitatory synapses through feedforward or feedback inhibition. Similarly, activating cholinergic receptors on glutamatergic or GABAergic neurons, located pre- or postsynaptically, can regulate the induction of potentiation (or depression) in the hippocampal region.

GABAergic modulation
It is firmly established that inhibitory inputs modulate local hippocampal synaptic plasticity (Wigström; Meredith; Ormond; Yang). Moreover, selective activation of certain interneuron classes can mediate the induction of plasticity in distinct ways. For example, activation of OLMα2 interneurons facilitates potentiation of SC inputs onto proximal dendrites while inhibiting EC inputs onto distal dendrites of CA1 pyramidal neurons (R. Leão et al., 2012), and high-frequency bursts acting on GABAergic interneurons containing GABA_B autoreceptors permit the induction of LTP at the SC-CA1 synapse (Davies). The different pathways and mechanisms through which GABAergic interneurons modulate plasticity remain elusive despite all the experimental efforts.
Cholinergic modulation
Due to the abundance of cholinergic receptors and the complexity of the neural networks in which they are embedded, it is difficult to assess the mechanisms through which cholinergic inputs regulate hippocampal activity and synaptic plasticity. The effects of ACh vary depending on which type of cholinergic receptor and neuron is being activated. For example, presynaptic mAChRs can decrease neurotransmitter release, reducing synaptic strength (Valentino; Raiteri), while postsynaptic mAChRs enhance responses of NMDA receptors (Markram) and inhibit calcium-activated K+ currents, inducing the opposite effect (Cole). Presynaptic activation of α7 nAChRs enhances synaptic transmission (Radcliffe), and postsynaptic α7 nAChRs facilitate LTP at hippocampal excitatory synapses by producing calcium signals that contribute to the induction of LTP (Vernino; Vernino; Rathouz; Shoop; Berg). In addition, studies also show that the timing of cholinergic inputs is important in modulating SC-evoked responses (Ji; Gu).
Mechanistic models of calcium-dependent synaptic plasticity
There is a large variety of heuristic models of synaptic plasticity focusing on the timing of inputs (Poo; Gerstner; Appleby; Badoual; Bi; Burkitt), on the correlations in the pre- and postsynaptic activity (Hebb; Kempter; Lisman), and on the modulatory role of neuromodulators (Ang; Pedrosa; Maki-Marttunen). In this thesis, we focus on mechanistic models, which we review briefly in this section. Many computational models have been developed to understand the mechanisms of synaptic plasticity. In particular, there are abundant models that focus on the role of calcium signaling, either by detailing the calcium-dependent processes of CaMKII phosphorylation and the consequent changes in AMPAR density, or by directly modeling changes in synaptic efficacy as a function of the intracellular calcium concentration (Lisman; Holmes; Lisman; Shouval; Abarbanel; Graupner and Brunel, 2012, 2005a; Inglebert). According to the model developed by Shouval and colleagues (Shouval), changes in the synaptic strength of a synapse j, W_j, can be formulated as

dW_j/dt = η([Ca]_j) (Ω([Ca]_j) − λ W_j)    (1.1)

where η is a calcium-dependent learning rate, Ω is a function that describes the changes in synaptic efficacy induced by calcium, and λ represents a decay constant that stabilizes synaptic growth. A calcium-dependent learning rate η avoids unwanted oscillations in the synaptic weights, while the function Ω accounts for the fact that different levels of intracellular calcium trigger different forms of plasticity (see Figure 1.2). The model assumes that the primary source of calcium is the postsynaptic NMDAR.
The calcium dynamics are then described as follows:

d[Ca]_j/dt = I_NMDA − [Ca]_j / τ_Ca    (1.2)

where τ_Ca is the calcium time constant and I_NMDA is the current through the NMDAR. The NMDA current is generally described as

I_NMDA = G_NMDA B(V) (V − E_r)    (1.3)

where G_NMDA is the channel's conductance, E_r the reversal potential, and B(V) = 1/(1 + exp(−0.062 V) [Mg2+] / 3.57) is a voltage-dependent term that accounts for the presence of the Mg2+ block when the cell is hyperpolarized. Please note that although in the original model of Shouval and colleagues it is considered that calcium ions carry all of the current through the NMDAR, this is not accurate, as NMDARs are also permeable to other ions such as Na+. This imprecision can easily be corrected by including a parameter α that accounts for the fraction of the total current carried by calcium ions (d[Ca]_j/dt = α I_NMDA − [Ca]_j / τ_Ca). Following the work of Shouval and colleagues, Graupner and Brunel (Graupner and Brunel, 2012) devised a simplified calcium-based model that provides a link between stimulation protocols and evoked synaptic changes and can reproduce different STDP curves as seen experimentally. In this thesis, we use the calcium-based model developed by Shouval and colleagues (Shouval) to describe changes in the synaptic strength of the SC-CA1 synapse. The model is simple enough to be implemented computationally and to reproduce the experimental results on which we base our modeling work, such as the absence of bistability.
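To make Equations (1.1)-(1.3) concrete, here is a minimal Python sketch that integrates the model with a forward-Euler step. The sigmoidal shape of Ω follows the form used by Shouval and colleagues, but every numerical value (thresholds, conductance, α, the stimulation pattern) is an illustrative placeholder rather than a calibrated parameter from this thesis; the sign handling also assumes the electrophysiological convention in which inward currents are negative, so the calcium influx is taken as −αI_NMDA.

```python
import math

MG, TAU_CA, LAM, ALPHA = 1.0, 0.02, 1.0, 1.0   # [Mg2+] (mM), s, decay, Ca fraction
G_NMDA, E_R = 1.0, 130.0                       # toy conductance; Ca reversal (mV)

def B(v):
    # Voltage dependence of the Mg2+ block in Eq. (1.3).
    return 1.0 / (1.0 + math.exp(-0.062 * v) * MG / 3.57)

def sig(x, beta=80.0):
    return 1.0 / (1.0 + math.exp(-beta * x))

def omega(ca, theta_d=0.1, theta_p=0.3):
    # Omega dips toward 0 (LTD) between theta_d and theta_p,
    # and rises toward 1 (LTP) above theta_p.
    return 0.25 + sig(ca - theta_p) - 0.25 * sig(ca - theta_d)

def eta(ca, k=5.0):
    # Calcium-dependent learning rate: updates speed up with calcium.
    return k * ca

def step(w, ca, v, glu, dt=1e-4):
    i_nmda = glu * G_NMDA * B(v) * (v - E_R)     # Eq. (1.3), gated by glutamate
    ca += dt * (-ALPHA * i_nmda - ca / TAU_CA)   # Eq. (1.2); minus sign turns inward current into influx
    w += dt * eta(ca) * (omega(ca) - LAM * w)    # Eq. (1.1)
    return w, ca

w, ca = 0.5, 0.0
for n in range(20000):                  # 2 s of simulated time at dt = 0.1 ms
    pre = (n % 1000) < 20               # 2 ms presynaptic pulses at 10 Hz
    v = 0.0 if pre else -65.0           # paired postsynaptic depolarization
    w, ca = step(w, ca, v, glu=1.0 if pre else 0.0)
print(f"final weight: {w:.3f}")
```

With these placeholder values, the 10 Hz pairing drives calcium transients into the intermediate range and the weight drifts downward (LTD); a stimulation pattern that pushes calcium above theta_p would instead produce potentiation, reproducing the two branches of the Ω function.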
A study shows that place fields are formed by hippocampal place cells of rats when the animal is in a new environment, despite the blockade of theta rhythm and theta entrainment [START_REF] Brandon | New and distinct hippocampal place codes are generated in a new environment during septal inactivation[END_REF]. Such observation suggests that theta rhythmicity is not required to form spatial memory representations at the single cell level. On the other hand, there is compelling evidence that theta and synaptic plasticity, a cellular mechanism of information storage, are strongly correlated [START_REF] Larson | Patterned stimulation at the theta frequency is optimal for the induction of hippocampal long-term potentiation[END_REF][START_REF] Orr | Hippocampal synaptic plasticity is modulated by theta rhythm in the fascia dentata of adult and aged freely behaving rats[END_REF][START_REF] Hyman | Stimulation in hippocampal region ca1 in behaving rats yields long-term potentiation when delivered to the peak of theta and long-term depression when delivered to the through[END_REF][START_REF] Griffin | Theta-contingent trial presentation accelerates learning rate and enhances hippocampal plasticity during trace eyeblink conditioning[END_REF]. According to their frequency, physiology, and behavioral correlations, the hippocampal theta rhythm can be classified into type 1 or type 2 [START_REF] Kramis | Two types of hippocampal rhythmical slow activity in both the rabitt and the rat: relations to behavior and effects of atropine diethyl ether, urethane, and pentobarbital[END_REF]. Type 1 theta (8-12 Hz) occurs during active motor behaviors and REM sleep. It is considered to be atropine resistant, despite studies suggesting that it may have an atropine-sensitive and an atropine-resistant component [START_REF] Kramis | Two types of hippocampal rhythmical slow activity in both the rabitt and the rat: relations to behavior and effects of atropine diethyl ether, urethane, and pentobarbital[END_REF][START_REF] Vanderwolf | Evidence that serotonin mediates noncholinergic neocortical low voltage fast activity, non-cholinergic hippocampal rhythmical slow activity and contributes to intelligent behavior[END_REF]. Type 2 (4-7 Hz) occurs during states of still alertness 1.3. Hippocampal theta rhythm and urethane anesthesia. It is abolished by the administration of atropine, and it is therefore considered to be atropine-sensitive [START_REF] Lee | Hippocampal theta activity following selective lesion of the septal cholinergic system[END_REF][START_REF] Vanderwolf | Evidence that serotonin mediates noncholinergic neocortical low voltage fast activity, non-cholinergic hippocampal rhythmical slow activity and contributes to intelligent behavior[END_REF]. Mechanisms of theta rhtyhm Neural oscillations can arise on two levels of organization. On the cellular level, it can appear as oscillations in the membrane potential or persistent rhythm action potentials. On the network level, the synchronized activity of large numbers of neurons can give rise to macroscopic oscillations with a well-defined frequency. It is essential to understand the mechanisms through which oscillations are generated and maintained on the different scales as they are most likely complementary. Some neurons in the hippocampal formation are endowed with intrinsic properties that give rise to slow subthreshold oscillations and resonate at theta frequency. 
This is the case of OLM interneurons in the hippocampus [START_REF] Zemankovics | Differences in subthreshold resonance of hippocampal pyramidal cells and interneurons: the role of h-current and passive membrane charactheristics[END_REF], and of stellate cells in the EC [START_REF] Alonso | Differential electroresponsiveness of stellate and pyramidal-like cells of medial entorhinal cortex layer ii[END_REF]. Even though these cells have been implicated in the generation of theta in local hippocampal circuits, their role is still a topic of discussion [START_REF] Dickson | Properties and role of i h in the pacing of subthreshold oscillations in entorhinal cortex layer ii neurons[END_REF][START_REF] Kispersky | Spike resonance properties in hippocampal o-lm cells are dependent on refractory dynamics[END_REF][START_REF] Fernandez | Entorhinal stellate cells show preferred spike phase-locking to theta inputs that is enhanced by correlations in synaptic activity[END_REF][START_REF] Wang | Pacemaker neurons for the theta rhythm and their synchronization in the septohippocampal reciprocal loop[END_REF][START_REF] Moser | Place cells, grid cells, and the brain's spatial representation system[END_REF][START_REF] Rowland | Functional properties of stellate cells in medial entorhinal cortex layer ii[END_REF](Rotstein et al., 2005a). On the network level, two brain regions known to be essential to the generation of hippocampal theta rhythm are the medial septum and the entorhinal cortex.

The medial septum

The medial septum is composed of cholinergic, GABAergic, and glutamatergic cells, and it mainly targets the hippocampal formation. It is regarded as a crucial brain structure for the generation and maintenance of hippocampal theta activity, a notion corroborated by experimental observations that lesions or inactivation of the medial septum disrupt (or even abolish) hippocampal theta oscillations [START_REF] Petsche | The significance of the rabbit's septum as a relay station between the midbrain and the hippocampus i. the control of hippocampus arousal activity by the septum cells[END_REF][START_REF] Gogolák | The firing pattern of septal neurons and the form of the hippocampal theta wave[END_REF][START_REF] Green | Hippocampal electrical activity in arousal[END_REF][START_REF] Mizumori | Reversible inactivation of the medial septum differentially affects two forms of learning in rats[END_REF]. Cholinergic neurons modulate the excitability of hippocampal neurons in a way that promotes theta rhythm, likely through the activation of mAChRs on CA1 pyramidal neurons and the activation of α7 nAChRs on GABAergic interneurons that can inhibit or disinhibit hippocampal pyramidal cells (Gu et al., 2017; [START_REF] Gu | Hippocampal interneuronal α7 nachrs modulate theta oscillations in freely moving mice[END_REF]). The role of the septal GABAergic neurons is still a topic of discussion.
Despite presenting rhythmic activity, there is no direct evidence that they directly pace the hippocampal rhythm [START_REF] Stewart | Do septal neurons pace the hippocampal theta rhythm?[END_REF][START_REF] Yoder | Involvement of gabaergic and cholinergic medial septal neurons in hippocampal theta rhythm[END_REF][START_REF] Gogolák | The firing pattern of septal neurons and the form of the hippocampal theta wave[END_REF][START_REF] King | The rhyhmicity of cells of the medial septum/diagonal band of broca in the awake freely moving rate: relationships with behaviour and hippocampal theta[END_REF].

The entorhinal cortex

Even though theta oscillations can be observed in an in vitro preparation of the isolated hippocampus [START_REF] Goutagny | Self-generated theta oscillations in the hippocampus[END_REF], lesions of the EC lead to disruptions of the hippocampal theta rhythm and of spatial learning [START_REF] Chenani | Hippocampal ca1 replay becomes less prominent but more rigid without inputs from medial entorhinal cortex[END_REF][START_REF] Davis | Effects of entorhinal cortex lesions on sensory integration and spatial learning[END_REF][START_REF] Buzsáki | Cellular bases of hippocampal eeg in the behaving rat[END_REF]. Moreover, administration of an AMPAR antagonist to the EC in a septo-entorhinal-hippocampal co-culture preparation blocked theta expression in the entire hippocampal formation (Gu and Yakel, 2017). However, it is important to note that the EC may contribute differently to the generation and expression of the two subtypes of theta rhythm (type 1 and type 2), since lesions to the EC abolish type 1 theta while type 2 oscillations remain. Still, lesions to the EC disrupt the behavioral correlates of both types of theta, suggesting that the EC is an integral part of both systems [START_REF] Montoya | The effects of entorhinal cortex lesions on type 1 and type 2 theta[END_REF]. Recent experimental work indicates that the role of the EC may go beyond simply responding to external rhythmic inputs and coordinating the activity of the hippocampal regions; instead, it may be where the theta rhythm is generated (Gu and Yakel, 2017). Hippocampal excitatory inputs and NMDARs in the EC seem to be two essential components for the generation of theta in the EC, but the underlying mechanisms remain elusive (Gu and Yakel, 2017; Gu et al., 2017; [START_REF] Nuñez | The theta rhythm of the hippocampus: from neuronal and circuit mechanisms to behavior[END_REF]).

Modeling hippocampal theta oscillations

Theoretical and mathematical models are convenient tools for understanding the functional properties of brain circuitry and for studying the emergence of neural phenomena on different scales. They rely on experimental data to construct biological approximations and shape their output. Due to the difficulty of recording simultaneously from the septum, hippocampus, and entorhinal cortex, most of the experimental data collected to date focus on the septal-hippocampal network, the hippocampal-entorhinal network, or the isolated hippocampus. This is reflected in the models of theta rhythm.
Models that study the interaction between the septum and the hippocampus often consider the septum to be the pacemaker in the production of hippocampal theta rhythm, with rhythmic cholinergic and GABAergic septal neurons imposing the theta rhythm on the hippocampal neurons and coordinating their activity, in particular that of hippocampal GABAergic interneurons [START_REF] Stewart | Do septal neurons pace the hippocampal theta rhythm?[END_REF][START_REF] Denham | A model of theta rhythm production in the septal-hippocampal system and its modulation by ascending brain stem pathways[END_REF]. However, recordings from the medial septum indicate that the theta-locked cells of the region do not fire with a common phase, which is inconsistent with the pacemaker hypothesis [START_REF] King | The rhyhmicity of cells of the medial septum/diagonal band of broca in the awake freely moving rate: relationships with behaviour and hippocampal theta[END_REF]. Concerning the role of the EC in the generation of theta rhythm, models suggest that rhythmic hippocampal inputs can initiate theta activity in the EC by driving a population of inhibitory interneurons that in turn coordinates the activity of stellate cells [START_REF] Neru | Theta oscillations gate the transmission of reliable sequences in the medial entorhinal cortex[END_REF]. Recent experimental evidence suggests that the EC is not simply responding to external rhythmic inputs, namely from the hippocampus, but is where the theta oscillations are generated (Gu and Yakel, 2017; [START_REF] Mitchell | Generation of theta rhythm in medial entorhinal cortex of freely moving rats[END_REF]), in particular during the exploration of novel environments [START_REF] López-Madrona | Functional interactions between entorhinal cortical pathways modulate theta activity in the hippocampus[END_REF]. However, computational models addressing the origins of theta in the intrinsic circuit of the EC are still lacking.

Whereas theta rhythms are traditionally thought to be imposed extrinsically, the hippocampus contains intrinsic mechanisms that may actively contribute to the rhythm, through the resonance of external inputs or as a local phenomenon. Moreover, the hippocampus contains neurons with slow synapses and intrinsic spiking dynamics in the theta range, such as the OLM interneurons, which may contribute to the generation of theta [START_REF] White | Networks of interneurons with fast and slow γ-aminobutyric acid type a (gaba a ) kinetics provide substrate for mixed γ-θ rhythm[END_REF][START_REF] Rotstein | Slow and fast inhibition and an h-current interact to create a theta rhythm in a model of ca1 interneuron network[END_REF][START_REF] Kopell | Gamma and theta rhythms in biophysical models of hippocampal circuits[END_REF]. Despite this, there is evidence that the CA1 region can autonomously generate theta oscillations using mainly a network of pyramidal cells and PV+ interneurons. More specifically, models suggest that spike-frequency adaptation and post-inhibitory rebound provide the necessary conditions for rhythmic activity to arise in a minimally connected network of CA1 pyramidal cells and fast-spiking PV+ interneurons, where the pyramidal-to-PV+ connections control the frequency
[START_REF] Goutagny | Self-generated theta oscillations in the hippocampus[END_REF][START_REF] Bezaire | Interneuronal mechanisms of hippocampal theta oscillations in a full-scale model of the rodent ca1 circuit[END_REF][START_REF] Ferguson | Combining theory, model and experiment to explain how intrinsic theta rhythms are generated in an in vitro whole hippocampus preparation without oscillatory inputs[END_REF]. In this scenario, OLM cells regulate the robustness of hippocampal theta rhythms, without affecting their frequency and power [START_REF] Chatzikalymniou | Deciphering the contribution of oriens-lacunosum/moleculare (olm) cells to intrinsic θ rhythms using biophysical local field potential (lfp) models[END_REF].

Outline of the work

This thesis investigates the cholinergic and circuit mechanisms underlying the generation of theta rhythm in a septal-entorhinal-hippocampal circuit. We use a quantitative, data-based modeling approach built on a unique experimental setup (a septal-entorhinal-hippocampal in vitro co-culture preparation) developed by Zhenglin Gu and Jerrel Yakel at the National Institute of Environmental Health Sciences (NIEHS), USA. The dissertation is divided into two parts:

I. In the first part of this dissertation, we derive a biophysical model of cholinergically induced hippocampal plasticity. Experimental studies show that the induction of theta in a septal-entorhinal-hippocampal in vitro preparation depends on the co-activation of the septal cholinergic and SC pathways. Moreover, repeated co-activation of the two pathways potentiates the EPSCs of CA1 pyramidal cells and facilitates the expression of the theta rhythm, which can then be readily generated through SC stimulation alone (Gu and Yakel, 2017). We show that cholinergic activation of α7 nAChRs on OLMα2 interneurons can disinhibit CA1 pyramidal cells by inhibiting a class of fast-spiking interneurons targeting CA1 pyramidal cells. Repeated disinhibition paired with SC stimulation can upregulate the conductance of AMPA receptors and potentiate the SC-CA1 excitatory synapse.

II. The second part focuses on the entorhinal and hippocampal network properties that permit theta oscillations to arise and propagate in the circuit. We start by deriving an exact mean-field model reduction that we use to describe the macroscopic activity of the entorhinal and hippocampal networks. Next, we examine how increased hippocampal excitatory inputs (a consequence of the increased hippocampal excitability described in part I) can drive the entorhinal network into an oscillatory regime with theta frequency. Finally, we study the response of the hippocampal network to external rhythmic theta inputs before and after hippocampal plasticity is induced.

Part I: Hippocampal synaptic plasticity

Chapter 2 | Recurrent cholinergic inputs induce local hippocampal plasticity through feedforward disinhibition

Introduction

The hippocampal networks are characterized by a variety of locally connected GABAergic interneurons exerting robust control on network excitability.
Previous work has detailed the importance of inhibitory inputs in modulating local hippocampal synaptic plasticity [START_REF] Saudargiene | Inhibitory control of site-specific synaptic plasticity in a model ca1 pyramidal neuron[END_REF][START_REF] Chevaleyre | Modulating excitation through plasticity at inhibitory synapses[END_REF][START_REF] Wigström | Facilitated induction of hippocampal longlasting potentiation during blockade of inhibition[END_REF][START_REF] Meredith | Maturation of long-term potentiation induction rules in rodent hippocampus: Role of gabaergic inhibition[END_REF][START_REF] Ruiz | Presynaptic gaba a receptors enhance transmission and ltp induction at hippocampal mossy fiber synapses[END_REF]. Furthermore, several experimental studies show that disinhibition facilitates the induction of LTP at excitatory synapses [START_REF] Ormond | Disinhibition mediates a form of hippocampal long-term potentiation in area ca1[END_REF][START_REF] Yang | A dendritic disinhibitory circuit mechanism for pathway-specific gating[END_REF]. However, how the disinhibition controlling hippocampal excitatory synapses is modulated (e.g., by neuromodulators) is not clearly understood, and the precise circuitry and dynamics underlying this type of plasticity remain an open question.

GABAergic interneurons receive significant cholinergic innervation from the medial septum. They are endowed with various subtypes of nicotinic acetylcholine receptors (nAChRs) that regulate excitability, plasticity, and cognitive functions [START_REF] Griguoli | Regulation of hippocampal inhibitory circuits by nicotinic acetylcholine receptors: nachrs and inhibitory circuits in the hippocampus[END_REF][START_REF] Levin | Nicotinic receptor subtypes and cognitive function[END_REF][START_REF] Yakel | Nicotinic ach receptors in the hippocampus: Role in excitability and plasticity[END_REF]. Moreover, alterations of cholinergic action on hippocampal GABAergic interneurons have been implicated in cognitive dysfunction in Alzheimer's disease (AD) [START_REF] Schmid | Dysfunction of somatostatin-positive interneurons associated with memory deficits in an alzheimer's disease model[END_REF]. These studies, among others, furnish clear evidence that cholinergic inputs exert a powerful role in regulating hippocampal activity. Still, due to the abundance of cholinergic receptors (both muscarinic and nicotinic) and the complexity of the networks in which they are embedded, it is difficult to assess the exact mechanisms through which cholinergic action on the hippocampus modulates its microcircuits.

Previous studies showed that activation of OLMα2 interneurons increases SC to CA1 transmission and suggested that this happens through disinhibition, by reducing the activity of stratum radiatum (s.r.) interneurons that in turn provide feedforward inhibition onto pyramidal neurons (R. Leão et al., 2012). Consistent with these studies, Gu and colleagues found that activation of OLMα2 interneurons increased SC to CA1 EPSCs and reduced IPSCs [START_REF] Gu | Hippocampal interneuronal α7 nachrs modulate theta oscillations in freely moving mice[END_REF]. However, the mechanisms through which the activation of the inhibitory OLMα2 interneurons regulates the activity of inhibitory interneurons targeting the CA1 pyramidal cell, and how this facilitates the potentiation of SC-evoked EPSPs of the CA1 pyramidal cells, remain elusive.
In this chapter, we use a minimal biophysical circuit model, driven quantitatively by in vitro data, to show how modulation of OLM cells influences the activity of fast-spiking interneurons whose GABAergic inputs are co-localized with the SC glutamatergic synapses onto a CA1 pyramidal cell dendrite, and how this promotes the induction of plasticity at the SC-CA1 synapse. We seek to determine how cholinergic activation of the OLM cells through postsynaptic α7 nAChRs can down-regulate the GABAergic signaling onto the pyramidal cells, and how recurrent decreased inhibitory inputs can directly induce plasticity of the excitatory SC-CA1 synapse. We constructed a minimal circuit consisting of a single-compartment spiking model of an OLM interneuron with α7 nAChRs, a fast-spiking interneuron with AMPA and GABA_A receptors, and a pyramidal cell dendritic compartment with AMPA, NMDA, and GABA_A receptors. They are connected as schematically shown in Figure 2.1.

Overwhelming evidence suggests that most types of LTP involve calcium influx through NMDARs and subsequent changes in the properties of postsynaptic AMPARs, namely changes in their number and phosphorylation state [START_REF] Barria | Regulatory phosphorylation of ampa-type glutamate receptors by cam-kii during long-term potentiation[END_REF][START_REF] Collingridge | Excitatory amino acids in synaptic transmission in the schaffer collateral-commissural pathway of the rat hippocampus[END_REF][START_REF] Lüscher | Nmda receptor-dependent long-term potentiation and long-term depression (ltp/ltd)[END_REF]. To reflect these mechanisms, we employ the calcium-based synaptic plasticity model (proposed by [START_REF] Shouval | A unified model of nmda receptor-dependent bidirectional synaptic plasticity[END_REF]) to model synaptic plasticity of the SC-CA1 excitatory synapse.

We use a combination of experiments and computational modeling to put together a coherent picture of the multiple mechanisms through which concurrent disinhibition directly induces local SC-CA1 plasticity. More specifically, we show how repeated concurrent disinhibition induces LTP by mediating AMPAR trafficking. Our modeling results also put together all the pieces of the puzzle to lay out how nAChR cholinergic action on OLM interneurons, working through calcium-dependent regulation of GABA neurotransmission, can downregulate the GABAergic signaling onto CA1 pyramidal cells and induce potentiation of the SC-CA1 synapse.

[Figure 2.1: circuit schematic; labels: Schaffer collateral, cholinergic pathway, OLMα2 interneuron, fast-spiking interneuron.]

Methods

Animals and materials

All procedures related to the use of mice followed protocols approved by the Institutional Animal Care and Use Committees of the NIEHS. ChAT-cre mice (B6;129S6-Chattm2(cre)Lowl/J), Sst-cre mice (Ssttm2.1(cre)Zjh), and floxed α7 nAChR knockout mice (B6(Cg)-Chrna7tm1.1Ehs/YakelJ) were originally purchased from Jackson Laboratory and then bred at NIEHS. OLMα2-cre mice (Tg(Chrna2cre)OE29Gsat/Mmucd) were originally obtained from the Mutant Mouse Resource and Research Centers (MMRRC) and then bred at NIEHS. Mice (of either sex) were used for slice culture from day 6 to 8. Culture media were from Sigma and Invitrogen. AAV serotype 9 helper plasmid was obtained from James Wilson at the University of Pennsylvania.
The AAV vectors containing floxed ChR2 (Addgene #20297) and floxed eNpHR (Addgene #26966) were obtained from Karl Deisseroth [START_REF] Witten | Cholinergic interneurons control local circuit activity and cocaine conditioning[END_REF][START_REF] Gradinaru | Molecular and cellular approaches for diversifying and extending optogenetics[END_REF]. AAV viruses were packaged with the serotype 9 helper at the Viral Vector Core facility at NIEHS.

Brain slice culture and AAV virus infection

To study the effects of cholinergic co-activation on the plasticity of SC to CA1 synapses in Figure 2.2, coronal septal slices (350 µm) from ChAT-cre mice and horizontal hippocampal slices from floxed α7 nAChR mice or OLMα2-cre/floxed α7 nAChR mice (350 µm) were cut with a Leica VT1000S vibratome. Medial septal tissue containing cholinergic neurons was then dissected out, placed next to the hippocampus on a 6-well polyester Transwell insert (Corning), and cultured there for about 2 weeks before being used for experiments, similarly to what is described in Gu and Yakel (2017). AAV viruses containing the double-floxed ChR2 construct (5 nl) were microinjected into the septal tissue with a microinjector (Drummond Scientific) on the second day of culture. To study the effects of disinhibition on the plasticity of SC to CA1 synapses in Figure 2.4, horizontal hippocampal slices from Sst-cre mice were cultured, and AAV viruses containing the double-floxed eNpHR construct were microinjected into the hippocampus the next day.

Whole-cell patch-clamp recordings

SC to CA1 excitatory postsynaptic currents (EPSCs) were recorded from hippocampal CA1 pyramidal neurons under whole-cell patch clamp, similarly to what is described in Gu and Yakel (2017, 2011). Briefly, 2-3 weeks after culturing, the slices were removed from the Transwell inserts and put into a submerged chamber, continuously perfused with 95% O₂/5% CO₂. EPSCs were evoked every 60 seconds by stimulating the SC pathway with an electrode placed in the stratum radiatum through a stimulator (Grass S88X). The stimulation intensity was 1-10 µA for 0.1 ms. To study the effects of cholinergic co-activation on SC to CA1 synaptic plasticity in Figure 2.2, cholinergic terminals in the hippocampus were optogenetically activated (10 pulses at 10 Hz, 1 sec before SC stimulation) through ChR2 that was selectively expressed in ChAT-cre positive (cholinergic) neurons. ChR2 was activated with 488-nm laser light (5 mW, 20 ms) through a 40× objective over CA1 stratum oriens near the septum with an Andor spinning disk confocal microscope (Andor Technology). To examine the effects of disinhibition on SC to CA1 synaptic plasticity in Figure 2.4, Sst-positive neurons were inhibited optogenetically through eNpHR, which was activated through a 40× objective over CA1 stratum oriens with 530-nm laser light (20 mW) for 1 sec flanking SC stimulation.

The amplitudes of EPSCs were analyzed with Clampfit and graphs were drawn with Excel. The amplitudes were normalized to the mean of the 5-min baseline recording before cholinergic pairing or disinhibition pairing. Values were presented as mean ± SEM. Amplitude changes were compared with the baseline before pairing by Student's t-test. Recordings were done in 5 slices from 3 individual mice in each group. The sample size was estimated by Student's t-test with an expected effect of 40% change, an expected standard deviation of 15%, and an 80% confidence interval width.
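For completeness, the sample-size reasoning above can be reproduced with statsmodels' power solver for an independent-samples t-test. The sketch below assumes that the 80% figure plays the role of statistical power and that α = 0.05 (two-sided); both are assumptions, since the text only states the expected effect, the expected standard deviation, and an 80% confidence interval width.

```python
from statsmodels.stats.power import TTestIndPower

# Stated design targets: 40% expected change, 15% expected standard deviation
effect = 0.40 / 0.15    # standardized effect size (Cohen's d), ~2.67

# Assumption: "80%" is interpreted as statistical power, alpha = 0.05, two-sided
n_per_group = TTestIndPower().solve_power(effect_size=effect,
                                          alpha=0.05,
                                          power=0.80,
                                          alternative='two-sided')
print(f"required n per group: {n_per_group:.1f}")  # a handful of slices per group
```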
Model

The minimal network used in this study consists of an OLM cell (O), a fast-spiking interneuron (I), and a pyramidal cell. All the cells in the network are modeled as point neurons. Since we are interested in the local changes at the SC-CA1 synapse, the pyramidal cell is represented by a dendritic compartment ($E_D$). The cells of the network are connected through feedforward connections. Adding connections between the CA1 pyramidal cell and the OLM interneuron did not significantly alter our results (simulations not shown). Therefore, we did not include synapses between the CA1 pyramidal and the OLM cells in our model. Our modeling choice is further supported by experimental studies showing that the IPSC elicited by an OLM interneuron has a small amplitude at the soma of CA1 pyramidal cells, since these synapses are on the distal parts of the dendritic tree [START_REF] Maccaferru | Cell surface domain specific postsynaptic currents evoked by identified gabaergic neurones in rat hippocampus in vitro[END_REF], and that an action potential in CA1 pyramidal cells is insufficient to make the OLM cell membrane potential cross the action potential threshold [START_REF] Ali | Ca1 pyramidal to basket and bistratified cell epsps: dual intracellular recordings in rat hippocampal slices[END_REF].

Neuron dynamics models

The O and I cells are modeled following the Hodgkin-Huxley formalism [START_REF] Hodgkin | A quantitative description of membrane current and its application to conduction and excitation in nerve[END_REF] (transient $I_{Na}$, delayed rectifier potassium $I_K$, and leak $I_{leak}$), with synaptic currents $I_{syn}$. The membrane potential $V_m$ is described as follows:

$$C_m \frac{dV_m}{dt} = -I_{leak} - I_K - I_{Na} - I_{syn} \qquad (2.1)$$

where $C_m$ is the membrane capacitance. The $I_{leak}$, $I_K$ and $I_{Na}$ currents are given by:

$$I_{leak} = g_{leak}(V_m - E_{leak}) \qquad (2.2)$$
$$I_K = g_K n^4 (V_m - E_K) \qquad (2.3)$$
$$I_{Na} = g_{Na} m^3 h (V_m - E_{Na}) \qquad (2.4)$$

where $g_i$ and $E_i$ are, respectively, the maximal conductance and reversal potential of channel $i$ (i = leak, K, Na), and $m$, $h$ and $n$ are gating variables that obey the following differential equation:

$$\frac{dx}{dt} = \alpha_x (1 - x) - \beta_x x \qquad (2.5)$$

where $\alpha_x$ and $\beta_x$ are voltage-dependent rate constants. Following [START_REF] Rotstein | Slow and fast inhibition and an h-current interact to create a theta rhythm in a model of ca1 interneuron network[END_REF], we included on the O-cells an applied current $I_{app} = -260$ pA, a persistent Na-current $I_p$, and a hyperpolarization-activated inward current $I_h$ (with a slow and a fast component):

$$I_p = g_p p (V_m - E_{Na}) \qquad (2.6)$$
$$I_h = g_h (0.65 h_f + 0.35 h_s)(V_m - E_h) \qquad (2.7)$$

While the gating variable $p$ obeys equation (2.5), $h_f$ and $h_s$ are described by the following equation:

$$\frac{dx}{dt} = \frac{x_\infty(V_m) - x}{\tau_x(V_m)} \qquad (2.8)$$

where $x_\infty$ is the voltage-dependent steady state and $\tau_x$ the time constant. Definitions of $\alpha_x$, $\beta_x$, $x_\infty$ and $\tau_x$ for each of the dynamic variables are as follows.
For the O-cells:

$$\alpha_n = \frac{-0.01(V_m + 27)}{\exp(-0.1(V_m + 27)) - 1} \qquad \beta_n = 0.125\exp\left(-\frac{V_m + 37}{80}\right)$$
$$\alpha_m = \frac{-0.1(V_m + 23)}{\exp(-0.1(V_m + 23)) - 1} \qquad \beta_m = 4\exp\left(-\frac{V_m + 48}{18}\right)$$
$$\alpha_h = 0.07\exp\left(-\frac{V_m + 37}{20}\right) \qquad \beta_h = \frac{1}{\exp(-0.1(V_m + 7)) + 1}$$
$$\alpha_p = \frac{1}{0.15\left(1 + \exp\left(-\frac{V_m + 38}{6.5}\right)\right)} \qquad \beta_p = \frac{\exp\left(-\frac{V_m + 38}{6.5}\right)}{0.15\left(1 + \exp\left(-\frac{V_m + 38}{6.5}\right)\right)}$$
$$h_{f\infty} = \frac{1}{1 + \exp\left(\frac{V_m + 79.2}{9.78}\right)} \qquad \tau_{h_f} = \frac{0.51}{\exp\left(\frac{V_m - 1.7}{10}\right) + \exp\left(-\frac{V_m + 340}{52}\right)} + 1$$
$$h_{s\infty} = \frac{1}{\left[1 + \exp\left(\frac{V_m + 2.83}{15.9}\right)\right]^{58}} \qquad \tau_{h_s} = \frac{5.6}{\exp\left(\frac{V_m - 1.7}{14}\right) + \exp\left(-\frac{V_m + 260}{43}\right)} + 1$$

For the I-cells:

$$\alpha_n = \frac{0.032(V_m + 52)}{1 - \exp\left(-\frac{V_m + 52}{5}\right)} \qquad \beta_n = 0.5\exp\left(-\frac{V_m + 57}{40}\right)$$
$$\alpha_m = \frac{0.32(V_m + 54)}{1 - \exp\left(-\frac{V_m + 54}{4}\right)} \qquad \beta_m = \frac{0.28(V_m + 27)}{\exp\left(\frac{V_m + 27}{5}\right) - 1}$$
$$\alpha_h = 0.128\exp\left(-\frac{V_m + 50}{18}\right) \qquad \beta_h = \frac{4}{1 + \exp\left(-\frac{V_m + 27}{5}\right)}$$

The parameter values used in the simulations are presented in Table 2.1. Since we are interested in studying local synaptic changes of the SC-CA1 synapse, we use the following equation to describe the activity of the pyramidal cell dendritic compartment:

$$C \frac{dV_{E_D}}{dt} = -I_{leak} - I_{syn} \qquad (2.9)$$

The parameters $C$, $g_{leak}$, and $E_{leak}$ were set to 100 pF, 1 nS and -68 mV, respectively. For the simulations of Figure 2.2D, noise was added to the dendritic compartment $E_D$ to allow direct comparison with the experimental results portrayed in Figure 2.2C. In addition to $E_D$, white noise was added to the O and I-cells in Figure S5 to study plasticity induction when these cells show spontaneous spiking. Since we used the Euler method to solve the differential equations describing $V_O$, $V_I$, and $V_{E_D}$ ($V_x[i+1] = V_x[i] + dt\,\frac{dV_x}{dt}$), noise was incorporated by adding a stochastic term $\sqrt{dt}\,\zeta$ ($V_x[i+1] = V_x[i] + dt\,\frac{dV_x}{dt} + \sqrt{dt}\,\zeta$).

All the parameter values and expressions described here were taken from [START_REF] Rotstein | Slow and fast inhibition and an h-current interact to create a theta rhythm in a model of ca1 interneuron network[END_REF], considering a surface area of 1 × 10⁻⁴ cm², except for the reversal potential of the leakage current of the OLM cells, which was set so that the resting potential of the OLM cells is -60 mV, as reported in R. Leão et al. (2012).

Synaptic models

The O-cell model includes a current mediated by α7 nAChR channels, which in the real OLM neurons are presynaptic to the O-to-I-cell synapse. The description of the current used here is an adaptation of the model proposed in [START_REF] Graupner | Endogenous cholinergic inputs and local circuit mechanisms govern the phasic mesolimbic dopamine response to nicotine[END_REF], and it is given by

$$I_{\alpha7} = g_{\alpha7} r_{\alpha7} (V_m - E_{\alpha7}) \qquad (2.10)$$

where $g_{\alpha7}$ is the maximal conductance of the α7 nAChR channel, and $E_{\alpha7}$ the reversal potential. The opening gate variable $r_{\alpha7}$ is described by equation (2.8), with $\tau_{r_{\alpha7}}$ constant and $r_{(\alpha7)\infty}$ given by

$$r_{(\alpha7)\infty} = \frac{[ACh]^n}{EC_{50}^n + [ACh]^n} \qquad (2.11)$$

where $EC_{50}$ is the half-maximum concentration and $n$ the Hill coefficient of activation. The I-cell has excitatory AMPA and inhibitory GABA_A synaptic currents, described by the following set of equations:

$$I_{GABA_A} = g_G r_G (V_m - E_G) \qquad (2.12)$$
$$I_{AMPA} = g_{AMPA} r_A (V_m - E_A) \qquad (2.13)$$

The gating variable $r_x$ is, as described in [START_REF] Destexhe | Kinetic models of synaptic transmission[END_REF], given by

$$\frac{dr_x}{dt} = \alpha_x [T](1 - r_x) - \beta_x r_x \qquad (2.14)$$

where $\alpha_x$ and $\beta_x$ are the opening and closing rates of the receptor channel, and $[T]$ the neurotransmitter concentration available for binding.
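As an illustration of the first-order kinetic scheme of equation (2.14), the sketch below drives an AMPA-like gating variable with a 1 mM, 1 ms square transmitter pulse and converts it into a synaptic current through equation (2.13). The rate constants and conductance are placeholders ($\alpha_A$ = 1.1 ms⁻¹ mM⁻¹ appears in Table 2.2; $\beta_A$ and the remaining numbers are assumptions, not the fitted Destexhe values).

```python
import numpy as np

# Placeholder kinetics; the fitted values live in Table 2.2
alpha_A, beta_A = 1.1, 0.19   # ms^-1 mM^-1 and ms^-1 (beta_A assumed)
g_A, E_A = 4.0, 0.0           # nS, mV (assumed)
dt = 0.01                      # ms

t = np.arange(0.0, 20.0, dt)
T = np.where(t < 1.0, 1.0, 0.0)   # 1 mM transmitter pulse lasting 1 ms

r = np.zeros_like(t)
for i in range(1, len(t)):
    # eq. (2.14): dr/dt = alpha [T](1 - r) - beta r
    dr = alpha_A * T[i - 1] * (1.0 - r[i - 1]) - beta_A * r[i - 1]
    r[i] = r[i - 1] + dt * dr

V = -70.0                           # clamped postsynaptic potential (mV)
I_AMPA = g_A * r * (V - E_A)        # eq. (2.13); nS * mV gives pA
print(f"peak |I_AMPA|: {np.abs(I_AMPA).max():.0f} pA")
```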
The GABA released by the I-cell is described using the simplified neurotransmitter release model of [START_REF] Destexhe | Kinetic models of synaptic transmission[END_REF], where a stationary relationship between presynaptic voltage and neurotransmitter release is deduced by fitting the model to experimental results. The intervening reactions in the release process are considered fast: a presynaptic action potential elicits a rapid influx of calcium, leading to the activation of transmitter-containing vesicles and neurotransmitter release. The following equation gives the neurotransmitter release as a function of the presynaptic voltage:

$$[GABA]_I = \frac{T_{max}}{1 + \exp\left(-\frac{V_m - V_p}{K_p}\right)} \qquad (2.15)$$

where $T_{max}$ = 1 mM is the maximal neurotransmitter concentration, $K_p$ = 5 mV gives the function's steepness, and $V_p$ = 2 mV sets the value at which the function is half-activated. These parameters were directly taken from [START_REF] Destexhe | Kinetic models of synaptic transmission[END_REF].

Concerning the GABA released by the O-cell, we assume that the α7 nAChR current is not strong enough to elicit an action potential directly but, as the channels are presynaptic to the O-I GABAergic synapses, it can generate an increase in the intracellular calcium concentration sufficient to activate the vesicular release of GABA [START_REF] Griguoli | Regulation of hippocampal inhibitory circuits by nicotinic acetylcholine receptors: nachrs and inhibitory circuits in the hippocampus[END_REF]. To avoid the detailed computation of the mechanisms whereby calcium leads to exocytosis, we assume a sigmoid relationship between calcium and transmitter concentration given by:

$$[GABA]_O = \frac{T_{max}}{1 + \exp\left(-\frac{Ca_i - Ca_p}{K_{(Ca)p}}\right)} \qquad (2.16)$$

where $T_{max}$ = 1 mM is the maximal neurotransmitter concentration, $K_{(Ca)p}$ = 1 × 10⁻⁶ mM gives the function's steepness, and $Ca_p$ = 4 × 10⁻⁵ mM sets the value at which the function is half-activated. These parameters were chosen so that a pulse of calcium elicits GABA release with approximately the same characteristics (amplitude and duration) as the detailed model of transmitter release of [START_REF] Destexhe | Kinetic models of synaptic transmission[END_REF] (see Figure S2).

The passive dendritic compartment of the pyramidal cell, $E_D$, is modeled using synaptic GABA_A, AMPA, and NMDA currents. The GABA_A and AMPA currents are given by equations (2.12) and (2.13), respectively. The NMDA current is described according to the following equation:

$$I_{NMDA} = g_N r_N B(V_m)(V_m - E_N) \qquad (2.17)$$

where $r_N$ is the gating variable described by equation (2.14). Due to the presence of a $Mg^{2+}$ block, the NMDA channels include a voltage-dependent term, $B(V_m)$, defined as:

$$B(V_m) = \frac{1}{1 + \exp(-0.062 V_m)\frac{[Mg^{2+}]}{3.57}} \qquad (2.18)$$

The parameters $\alpha_A$, $\beta_A$, $E_A$, $\alpha_N$, $\beta_N$, $E_N$, $[Mg^{2+}]$, $\alpha_G$, $\beta_G$ and $E_G$ were estimated by [START_REF] Destexhe | Kinetic models of synaptic transmission[END_REF] by fitting the models of postsynaptic AMPA, NMDA and GABA_A currents to experimental data. Regarding the synaptic currents of $E_D$, the AMPA and NMDA receptor maximal conductances were chosen such that at V = -70 mV, a glutamate pulse of 1 mM and 10 msec duration evoked AMPA and NMDA currents with amplitudes of 240 pA and 40 pA, respectively [START_REF] Andrásfalvy | Impaired regulation of synaptic strength in hippocampal neurons from glur1-deficient mice[END_REF].
The maximal conductance of GABA_A receptors was chosen such that at V = 0 mV, a pulse of GABA with 1 msec duration and a concentration of 1 mM evokes a current with an amplitude of 500 pA [START_REF] Schulz | Dendrite-targeting interneurons control synaptic nmda-receptor activation via nonlinear α5-gabaa receptors[END_REF]. For the I-cell, the AMPA receptor maximal conductance value is such that one pulse of glutamate coming from the SC evokes a volley of action potentials. Concerning the α7 nAChR postsynaptic current, the parameters $EC_{50}$, $\tau_{r_{\alpha7}}$ and $n$ were taken from [START_REF] Graupner | Endogenous cholinergic inputs and local circuit mechanisms govern the phasic mesolimbic dopamine response to nicotine[END_REF]. The parameter $E_{\alpha7}$ was deduced from [START_REF] Castro | alpha-bungarotoxin-sensitive hippocampal nicotinic receptor channel has a high calcium permeability[END_REF], and $g_{\alpha7}$ was chosen such that activation of the α7 nAChR by a pulse of ACh evokes a current of 35 pA, as seen in R. Leão et al. (2012).

Calcium-induced calcium release (CICR) mechanism

Calcium entry through α7 nAChRs initiates calcium release from internal stores [START_REF] Tsuneki | Calcium mobilization elicited by two types of nicotinic acetylcholine receptors in mouse substantia nigra pars compacta[END_REF][START_REF] Dajas-Bailador | Intracellular ca2+ signals evoked by stimulation of nicotinic acetylcholine receptors in sh-sy5y cells: contribution of voltage-operated ca2+ channels and ca2+ stores[END_REF][START_REF] Griguoli | Regulation of hippocampal inhibitory circuits by nicotinic acetylcholine receptors: nachrs and inhibitory circuits in the hippocampus[END_REF]. The calcium concentration in the cytosol of OLM cells, $Ca_i$, is described by the following equation:

$$\frac{dCa_i}{dt} = -\xi' \alpha' I_{\alpha7} + w_\infty^3 (Ca_{IS} - Ca_i) - \frac{Ca_i}{\tau_{Ca}} \qquad (2.19)$$

where $\xi'$ = 2.1 × 10⁻⁶ mM/(msec pA) is a parameter that converts current into concentration, $\alpha'$ = 0.05 reflects the 5% calcium permeability of the α7 nAChRs [START_REF] Vernino | Quantitative measurement of calcium flux through muscle and neuronal nicotinic acetylcholine receptors[END_REF], and $\tau_{Ca}$ is the calcium decay constant. The parameter $\xi'$ was chosen so that the intracellular calcium concentration is of the same order of magnitude as observed experimentally in [START_REF] Sabatini | The life of cycle of ca 2+ ions in dendritic spines[END_REF]. The parameter $\tau_{Ca}$ was taken directly from the same study. $Ca_{IS}$ represents the calcium concentration of the internal stores, given by:

$$\frac{dCa_{IS}}{dt} = -w_\infty^3 (Ca_{IS} - Ca_i) - \frac{Ca_{IS} - 0.4 \times 10^{-3}}{\tau} \qquad (2.20)$$

where $\tau$ (= 10 msec) is the calcium decay constant, and $w_\infty$ is the open probability of calcium-permeable channels on the internal store, given by

$$w_\infty = \frac{Ca_i}{Ca_i + k_d} \qquad (2.21)$$

where $k_d$ (= 2 × 10⁻⁴ mM) is the half-activation of the function. The model assumes three calcium-binding sites [START_REF] Young | A single-pool inositol 1,4,5-trisphosphatereceptor-based model for agonist-stimulated oscillations in ca2+ concentration[END_REF] and a calcium concentration at the internal stores of 0.4 µM at rest (this value can be different as long as it is bigger than the intracellular calcium concentration $Ca_i$ at rest).
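The CICR system (2.19)-(2.21) can be integrated directly; the sketch below does so under a brief square pulse standing in for the α7-mediated influx term $-\xi'\alpha' I_{\alpha7}$ (a 35 pA current for 5 ms, per the calibration above). The cytosolic decay constant $\tau_{Ca}$ is an assumed 50 ms, since its value is taken from the Sabatini study and not quoted here.

```python
import numpy as np

# Constants from the text; tau_Ca for the OLM cytosol is an assumed 50 ms
xi_p, alpha_p = 2.1e-6, 0.05     # mM/(ms pA) and Ca2+ fraction of I_alpha7
I_pulse = 35.0                    # pA, alpha7 current amplitude (calibration above)
tau_Ca, tau_IS = 50.0, 10.0       # ms (tau = 10 ms as stated)
k_d = 2e-4                        # mM, half-activation of the store channels
Ca_IS_rest = 0.4e-3               # mM, store calcium at rest
dt = 0.05                          # ms

t = np.arange(0.0, 300.0, dt)
influx = np.where(t < 5.0, xi_p * alpha_p * I_pulse, 0.0)  # -xi'*alpha'*I_a7 > 0

Ca = np.full_like(t, 5e-5)        # cytosolic Ca (mM), ~50 nM at rest (assumed)
Ca_IS = np.full_like(t, Ca_IS_rest)
for i in range(1, len(t)):
    w = Ca[i-1] / (Ca[i-1] + k_d)                                        # eq. (2.21)
    dCa = influx[i-1] + w**3 * (Ca_IS[i-1] - Ca[i-1]) - Ca[i-1] / tau_Ca # eq. (2.19)
    dIS = -w**3 * (Ca_IS[i-1] - Ca[i-1]) - (Ca_IS[i-1] - Ca_IS_rest) / tau_IS  # (2.20)
    Ca[i], Ca_IS[i] = Ca[i-1] + dt * dCa, Ca_IS[i-1] + dt * dIS

print(f"peak cytosolic Ca: {Ca.max() * 1e6:.0f} nM")
```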
Please note that the CICR mechanism described is a simplification of the model proposed by [START_REF] Rinzel | Bursting oscillations in an excitable membrane model[END_REF], where we limit the model to account for the calcium-activation sites of the calcium-permeable IP₃ receptors on the endoplasmic reticulum.

Model of synaptic plasticity

To study plasticity induction at the SC-$E_D$ synapse, we use a calcium-based synaptic plasticity model based on [START_REF] Shouval | A unified model of nmda receptor-dependent bidirectional synaptic plasticity[END_REF]. We assume that changes in the AMPA receptor conductance reflect changes in the strength of the excitatory SC-CA1 synapse. Our synaptic plasticity model is formulated as follows:

$$\frac{dg_{AMPA}}{dt} = \eta(Ca)\left(\Omega(Ca) - \sigma(g_{AMPA} - g_0)\right) \qquad (2.22)$$

where $\sigma$ is a decay constant and $g_0$ (= 4 nS) is the value of the maximal conductance of the AMPAR at t = 0. The variable $\eta$ is a calcium-dependent learning rate described by equation (2.23), and $\Omega$ determines the sign and magnitude of synaptic plasticity as a function of the intracellular Ca levels (equation (2.24)):

[Table 2.2 (fragment): $\alpha_A$ = 1.1 ms⁻¹ mM⁻¹; the full list of synaptic parameter values is given in Table 2.2.]

$$\Omega(Ca) = \gamma_\uparrow \frac{\exp(900(Ca - \theta_\uparrow))}{1 + \exp(900(Ca - \theta_\uparrow))} - \gamma_\downarrow \frac{\exp(900(Ca - \theta_\downarrow))}{1 + \exp(900(Ca - \theta_\downarrow))} \qquad (2.24)$$

The parameters $\theta_\uparrow$ and $\theta_\downarrow$ define the potentiation and depression onset, i.e., the calcium levels that trigger the insertion and removal of AMPARs in the membrane, respectively, and $\gamma_\uparrow$ and $\gamma_\downarrow$ represent the maximal insertion and removal rates of the AMPARs from the membrane. Please note that in the original model, the parameters $\theta_\uparrow$ and $\theta_\downarrow$ are represented by $\theta_p$ and $\theta_d$ and define the potentiation and depression threshold, respectively; but, as will become evident in the Results section, we find that this terminology can be misleading (i.e., we show that crossing these levels is necessary but not sufficient for potentiation).

We assume that the primary source of Ca²⁺ in $E_D$ is the calcium flux entering the cell through the NMDA receptor channels. The intracellular Ca²⁺ concentration evolves according to the following equation:

$$\frac{dCa}{dt} = -\xi \alpha I_{NMDA} - \frac{Ca}{\tau_{Ca}} \qquad (2.25)$$

where $\xi$ is a parameter that converts current into concentration, $\alpha$ = 0.1 refers to the fact that only about 10% of the NMDA current is composed of calcium ions [START_REF] Burnashev | Fractional calcium currents through recombinant glur channels of the nmda, ampa and kainate receptor subtypes[END_REF], and $\tau_{Ca}$ is the calcium decay constant. The parameter $\xi$ was chosen so that the intracellular calcium concentration is of the same order of magnitude as observed experimentally in [START_REF] Sabatini | The life of cycle of ca 2+ ions in dendritic spines[END_REF].¹

¹ The reversal potential of the compound of all the ionic currents flowing through the NMDARs is employed in the voltage equation (2.1) and the calcium dynamics equation (2.25), even though only the calcium component of the NMDAR total current contributes to the intracellular calcium concentration. We recognize that this is an ad hoc simplification which does not qualitatively affect the model results, since $V_{E_D}$ ranges between -30 and -67 mV in the simulations.
The model can be modified to account for the calcium component of the NMDA current in the calcium dynamics equation by using the calcium reversal potential of 140 mV to describe the fractional calcium current through the NMDARs in equation (2.25), similarly to how it was laid out in [START_REF] Graupner | Stdp in a bistable synapse model based on camkii and associated signaling pathways[END_REF]. We note that this would not quantitatively change our results provided the parameter $\xi$ is altered accordingly, i.e., to ensure the resultant calcium magnitude remains in the same order of magnitude as observed experimentally. On the other hand, this simplification becomes important when considering a spiking model for the pyramidal cell, where the transmembrane voltage exceeds 0 mV, thereby inverting the polarity of the total but not the calcium current.

The parameter $\tau_{Ca}$ was taken directly from the same study. $P_1$, $P_2$, $P_3$ and $P_4$ were chosen to have a calcium-dependent learning rate that increases monotonically with calcium levels [START_REF] Shouval | A unified model of nmda receptor-dependent bidirectional synaptic plasticity[END_REF]. The parameters $\theta_\uparrow$ and $\theta_\downarrow$ were determined such that before the co-pairing period the calcium concentration crosses neither of them, while crossing the potentiation onset $\theta_\uparrow$ when pairing starts (with $\theta_\uparrow > \theta_\downarrow$). The parameters $\sigma$, $\gamma_\uparrow$ and $\gamma_\downarrow$ were chosen to reproduce the experimental results concerning the potentiation of CA1 pyramidal cell EPSCs during co-activation of SC and disinhibition/cholinergic inputs (with $\gamma_\uparrow > \gamma_\downarrow$).

[Table 2.3 (fragment): synaptic plasticity parameters
  σ = 0.0040 (-)
  P₁ = 1.5 × 10⁻⁶ (Shouval et al., 2002)
  P₂ = P₁ × 10⁻⁴]

Parameters of the model: We used experimentally determined values or values from previous modeling studies for most of the parameters. Others that could not be determined experimentally were set by experimental constraints imposed on the model, namely the maximal conductances $g_x$ and the synaptic plasticity model parameters indicated with a dash in Table 2.3. All the parameter values are defined in Tables 2.1, 2.2 and 2.3. With our choice, all parameters are within the physiological range.

Results

Co-activation of cholinergic and glutamatergic inputs modifies the SC-CA1 synaptic transmission

Previously, it has been observed that co-activation of hippocampal cholinergic inputs and the local SC pathway increases the amplitude of SC to CA1 pyramidal EPSCs [START_REF] Gu | Hippocampal interneuronal α7 nachrs modulate theta oscillations in freely moving mice[END_REF]. Moreover, repeated pairing of cholinergic and hippocampal inputs (8 times) increased EPSC amplitudes in pyramidal neurons during the pairing and long after, indicating long-term synaptic plasticity at SC to CA1 excitatory synapses. The induction of potentiation was abolished by a knockout of OLMα2 interneuronal α7 nAChRs, but not by knockout of these receptors on hippocampal pyramidal cells or other interneurons (see Figures 2.2(A) and 2.2(C)). SC stimulation elicits EPSCs in s.r. interneurons and in the CA1 pyramidal cell proximal dendrites (R. Leão et al., 2012). Given the high calcium permeability of the α7 nAChRs, we assume their activation modulates transmitter release through calcium-mediated signal transduction cascades. We constructed a minimal feedforward circuit with an OLM cell (O), a fast-spiking interneuron (I), and the pyramidal cell s.r.
dendritic compartment ($E_D$), connected as schematically shown in Figure 2.2(B), to examine mechanistically how pairing cholinergic activation of the O-cell with glutamatergic activation of the I-cell and $E_D$ can potentiate the EPSCs of $E_D$. We look at how the EPSC of $E_D$, modeled as the sum of the postsynaptic AMPA and NMDA currents ($I_{AMPA}$ and $I_{NMDA}$), changes when the glutamatergic inputs acting on the I-cell and $E_D$ are paired with the cholinergic inputs that act on the presynaptic α7 nAChRs of the O-cell during a co-pairing period of 8 minutes, identical to the experimental protocol. The I-cell and $E_D$ receive one glutamate pulse per minute before, during and after the co-pairing period. During the co-pairing period, the O-cell gets one pulse of ACh per minute, 100 msec before each glutamate pulse. Not much is known about the concentration profile of ACh in vivo, but it is believed that it can be cleared from the synaptic cleft within milliseconds. After testing different ACh profiles, we decided to model ACh as a square pulse with a duration of 5 msec and a concentration of 1 mM, similar to the glutamate; similar results were obtained for a variety of ACh profiles (see appendix B for more details).

From Figure 2.2(D), we see that during the co-pairing period (from t=10 min to t=18 min), the EPSC is increased. In our model this increase is maintained for an extended period after the co-pairing period is over (black line), matching the experimental results. We also see that GABA release from the I-cells, $GABA_I$, decreases significantly (Figure 2.2(D), inset). Before the co-pairing period, glutamatergic inputs activate the I-cell. This results in inhibition of $E_D$, which shows an SC-evoked depolarization immediately followed by hyperpolarization of its membrane potential. During the co-pairing period, activation of α7 nAChRs 100 msec before SC stimulation results in a calcium flux into the OLM cell that initiates calcium-induced calcium release (CICR) from internal stores, exerting positive feedback. The increase in intracellular calcium concentration induces the release of GABA, as described by equation (2.16). GABAergic inputs from the OLM cell disable the SC-evoked activation of the I-cell. As a result, $E_D$ does not receive GABAergic inputs (see Figure S4). If we reduce the maximal conductance of the α7 nAChR, $g_{\alpha7}$, from 3 nS to 0.3 nS as an approximation of the effect of the α7 knockout, co-pairing no longer potentiates the EPSC of $E_D$ (Figure 2.2(D), orange line). These observations are in accordance with experimental results showing that this form of EPSC boost was abolished by knockout of the α7 nAChR in OLMα2 interneurons (Figure 2.2(C)).

We then examined how the key parameters of the co-pairing protocol influence the plasticity of the SC-CA1 EPSCs. According to our model, the duration of the co-pairing period, the relative time between the cholinergic and glutamatergic inputs, as well as their frequency during the co-pairing period, can modulate the efficiency and direction of plasticity. Our simulations show that the longer the co-pairing period, the longer the transient duration, where the potentiation transient duration was defined as the time it takes the EPSCs to return to the baseline value once the co-pairing period is over (Figure 2.3(A)).
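The stimulation schedule just described is easy to express as event times; the helper below generates them (pulse widths, such as the 5 ms ACh pulses, are handled by the simulation itself and omitted here). The window boundaries mirror the protocol above; the function and its name are ours, for illustration only.

```python
def build_protocol(total_min=30, pair_start_min=10, pair_end_min=18,
                   ach_lead_ms=100.0):
    """Return (glutamate_times, ach_times) in milliseconds.

    Glutamate arrives once per minute throughout; ACh arrives only during
    the co-pairing window, 100 ms before each glutamate pulse.
    """
    glu_times, ach_times = [], []
    for minute in range(total_min):
        t_glu = minute * 60_000.0          # one pulse per minute, in ms
        glu_times.append(t_glu)
        if pair_start_min <= minute < pair_end_min:
            ach_times.append(t_glu - ach_lead_ms)
    return glu_times, ach_times

glu, ach = build_protocol()
print(f"{len(glu)} glutamate pulses, {len(ach)} ACh pulses "
      f"(co-pairing minutes 10-18)")
```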
We observe a positive relationship between the frequency of the glutamatergic and cholinergic inputs during a fixed pairing period and the potentiation transient duration (Figure 2.3(B)). Interestingly, our simulations suggested that while changing the co-pairing period and the frequency of stimulation modulates the efficiency of the induction of potentiation, it does not change the direction of plasticity. Only varying the relative time between the cholinergic and glutamatergic inputs changes the direction of plasticity.

The time-dependent plasticity curve does not change shape if we pair doublets of glutamate and ACh at a θ frequency (4 Hz) instead of single pulses; still, it changes the potentiation and depression windows. The potentiation window is 10.3 < ∆t < 299.3 msec, while the depression windows are -10.7 < ∆t < 10.3 msec and 299.3 < ∆t < 375 msec (see Figure 2.3(D) and Figure S7). These results agree with experimental findings by [START_REF] Gu | Timing-dependent septal cholinergic induction of dynamic hippocampal synaptic plasticity[END_REF] showing that activation of cholinergic inputs 100 msec and 10 msec prior to SC stimulation induced SC to CA1 long-term potentiation and short-term depression, respectively.

Disinhibition of the CA1 pyramidal cell dendritic compartment enables potentiation of the SC-CA1 synaptic transmission

Our model shows a decrease in GABA release from I-cells during the co-pairing period (Figure 2.2(D), inset). To study the role of disinhibition of $E_D$ in the potentiation of the SC-CA1 excitatory synapse, we use a model where $E_D$ receives a pulse of glutamate followed by a pulse of GABA, except during a disinhibition period when it only receives pulses of glutamate. According to our model, the rise and decay times of the GABA concentration released as a result of I-cell spiking are almost instantaneous (see Figure S6). Therefore, in this section, GABAergic inputs into $E_D$ are modeled as a square pulse. For simplicity, both glutamate and GABA release are modeled as square pulses with a duration of 1 msec and an amplitude of 1 mM. It is important to note that pulses with amplitudes and durations different from those considered here would reproduce the same results, as long as the duration and amplitude of glutamate and GABA are similar (simulations not shown). Thus, $E_D$ receives one pulse of glutamate per minute, followed by a pulse of GABA 2 msec after, except during a disinhibition period when it only receives pulses of glutamate. We note that this simulated stimulation and pairing choice directly follows the experimental protocol (see Methods).

We observed that before the disinhibition period, there were no changes in the EPSC amplitude of $E_D$. During the disinhibition period, the EPSC amplitude increases, and the longer the disinhibition period lasts, the longer these changes last. More specifically, for a disinhibition period of 5 minutes, the EPSC returns to baseline once the disinhibition period is over. For a longer disinhibition period of 8 minutes, the EPSC remains potentiated long after the disinhibition period is over (Figure 2.4(D)). After 5 minutes of $E_D$ disinhibition, the EPSC amplitude was increased from 169.40 pA to 285.34 pA. After 8 minutes of disinhibition, the EPSC amplitude increased to 361.33 pA. These results hold for different values of $\gamma_\uparrow$ and $\gamma_\downarrow$ (see Figure S8).
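For readers who want to experiment with the plasticity rule, here is a sketch of equations (2.22) and (2.24) together with an Euler step for (2.25). Equation (2.23) for η is not reproduced in this excerpt, so the code assumes the functional form of Shouval et al. (2002), η(Ca) = 1/(P₁/(P₂ + Ca^P₃) + P₄); that form, the calcium units (µM), and the onset and rate values are assumptions for illustration, not the fitted values of Table 2.3.

```python
import numpy as np

# Illustrative values; the fitted parameters live in Table 2.3 (units assumed: µM, ms)
theta_up, theta_dn = 0.30, 0.20       # potentiation / depression onsets (µM)
gamma_up, gamma_dn = 0.30, 0.10       # maximal insertion / removal rates
sigma, g0 = 0.0040, 4.0               # decay constant and initial g_AMPA (nS)
P1, P2, P3, P4 = 1.5e-6, 1.5e-10, 3, 1.0   # P3 and P4 are assumptions

def _sig(x):
    """Numerically safe logistic: exp(x)/(1+exp(x)) rewritten as 1/(1+exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def eta(Ca):
    """Assumed Shouval-style learning rate, eta = 1/tau(Ca) (eq. 2.23 not shown here)."""
    return 1.0 / (P1 / (P2 + Ca**P3) + P4)

def Omega(Ca):
    """Eq. (2.24): sign and magnitude of plasticity versus calcium."""
    return gamma_up * _sig(900 * (Ca - theta_up)) - gamma_dn * _sig(900 * (Ca - theta_dn))

def step_g(g, Ca, dt):
    """Eq. (2.22): Euler update of the AMPAR maximal conductance."""
    return g + dt * eta(Ca) * (Omega(Ca) - sigma * (g - g0))

def step_Ca(Ca, I_nmda, xi=1e-3, alpha=0.1, tau=50.0, dt=0.1):
    """Eq. (2.25); xi and tau are placeholders, alpha = 0.1 as in the text."""
    return Ca + dt * (-xi * alpha * I_nmda - Ca / tau)

# Toy usage: ten 1 ms steps at a supra-onset calcium level potentiate g_AMPA
g = g0
for _ in range(10):
    g = step_g(g, Ca=0.35, dt=1.0)
print(f"g_AMPA after 10 ms at Ca = 0.35 µM: {g:.2f} nS")
```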
This is in accordance with experimental results, where inhibition of Sst interneurons projecting to CA1 pyramidal cells was paired with SC stimulation for a short and a long period (Figure 2.4(C)). Inhibition of Sst interneurons via eNpHR resulted in increased SC-CA1 EPSC amplitude not only during the Sst inhibition but also after the end of Sst inhibition. The EPSC enhancement after the Sst inhibition lasted about 10 min after 5 minutes of Sst inhibition and more than 30 min after 8 minutes of Sst inhibition. After 5 repetitions of Sst inhibition, the EPSC amplitude was significantly increased at 5 min after the end of Sst inhibition (31.8% increase compared with baseline, p = 0.0003) but returned to baseline at 30 min after Sst inhibition (2.8% increase compared with baseline, p = 0.79). After 8 repetitions of Sst inhibition, the EPSC amplitude was significantly increased both at 5 min after the end of Sst inhibition (37.3% increase compared with baseline, p < 0.0001) and at 30 min after Sst inhibition (32.5% increase compared with baseline, p < 0.0001). Experiments showed that inhibition of OLMα2 interneurons via eNpHR did not change the amplitude of the SC-CA1 EPSC, indicating that the Sst interneurons inducing potentiation do not include OLM cells (Figure 2.4(C), grey line).

AMPARs are known to play an important role in regulating and expressing synaptic plasticity in the hippocampus [START_REF] Barria | Regulatory phosphorylation of ampa-type glutamate receptors by cam-kii during long-term potentiation[END_REF]. From Figure 2.5 we see that there is an increase of $g_{AMPA}$ during the disinhibition period. The longer the disinhibition period, the more significant the increase. For a disinhibition period of 5 minutes, $g_{AMPA}$ increases from 4 to 6.9 nS during disinhibition. Afterward, $g_{AMPA}$ gradually goes back to its baseline value (Figure 2.5(A)). For a disinhibition period of 8 minutes, $g_{AMPA}$ increases from 4 to 8.83 nS. When the disinhibition period is over, $g_{AMPA}$ remains potentiated (Figure 2.5(B)). It is important to note that without regular synaptic stimulation, $g_{AMPA}$ decays back to its resting value after the disinhibition period, i.e., $g_{AMPA}$ has only one stable fixed point and is not bistable.

In this study, we focused on a calcium-based synaptic plasticity model to describe changes in the excitatory SC-CA1 synapse. To gain a more detailed understanding of how the evolution of the calcium levels relates to the changes in the synaptic weights, we can examine the calcium dynamics before, during and after the disinhibition period. Figures 2.5(C) and (D) show that the calcium concentration increases significantly during the disinhibition period, crossing the potentiation onset $\theta_\uparrow$ with a significant margin. Immediately after the end of the disinhibition period, the calcium levels decrease, yet they remain above $\theta_\uparrow$. We can see a clear difference in calcium dynamics for the short and the long disinhibition periods. In the case of a short disinhibition period, each pairing of GABA and glutamate after the disinhibition period elicits a calcium pulse with a smaller amplitude than the previous one. Eventually, at t=25 min, the calcium transient produced by the pairing is no longer large enough to cross the potentiation onset $\theta_\uparrow$. By t=30 min, calcium crosses neither the potentiation ($\theta_\uparrow$) nor the depression onset ($\theta_\downarrow$), having a similar amplitude as before the disinhibition period.
In the case of a long disinhibition period, each pairing performed after the disinhibition period evokes a calcium pulse with a constant amplitude. In other words, long disinhibition periods ensure that the subsequent pairings yield calcium responses that do not drop below the onset levels. To better visualize the synaptic and calcium dynamics immediately after the disinhibition period in both cases, we plot the system's trajectory in the Ca-$g_{AMPA}$ plane. We do so for $g_{AMPA}(t_{init})$ = 6.9 nS and for $g_{AMPA}(t_{init})$ = 8.83 nS (Figure 2.5(E)), which are the values of $g_{AMPA}$ at the end of the disinhibition period for the short and long disinhibition durations. For $g_{AMPA}(t_{init})$ = 6.9 nS, the calcium concentration crosses the potentiation onset $\theta_\uparrow$ ($Ca_{max}$ = 0.353 µM), but there is a decrease of $g_{AMPA}$ from 6.9 to 6.8 nS. For $g_{AMPA}(t_{init})$ = 8.83 nS, the calcium concentration crosses $\theta_\uparrow$ to a larger extent ($Ca_{max}$ = 0.389 µM) and there is an increase of $g_{AMPA}$ from 8.83 to 8.92 nS. These results suggest that crossing the potentiation onset is necessary but not sufficient to induce potentiation. To verify this, we looked at changes in the maximal conductance of the postsynaptic AMPARs, $\Delta g_{AMPA}$, as a function of the amplitude of the intracellular calcium, $Ca_{max}$. From Figure 2.5(F), we see that as $Ca_{max}$ increases, we only start to have potentiation ($\Delta g_{AMPA} > 0$) when $Ca_{max}$ crosses not the potentiation onset $\theta_\uparrow$ but a higher level, approximately 0.36 µM, that we term the potentiation threshold $\theta_{pot}$. We do note that a fixed potentiation threshold $\theta_{pot}$ is not an ideal indicator of potentiation, as it may need to be re-calculated depending on the specific calcium dynamics time scales and/or the induction protocol. As seen in Figure 2.5, the dynamics of calcium is important in the induction of plasticity. Therefore, changing it by, for example, changing the calcium decay rate can alter $\theta_{pot}$ by changing the time calcium spends in the depression/potentiation onset region. This kind of analysis can also fail to identify mechanisms of induction of potentiation. As shown in Figure 2.6(B), if we consider a second calcium source that becomes activated at t = 80 msec, neither of the two calcium pulses generated crosses $\theta_{pot}$; however, the synapse is potentiated. These examples suggest that the key indicator of potentiation is not the peak calcium concentration but a measure based on the total amount of calcium that exceeds the onset levels. We suggest that a better quantity that can be used more generally as an indicator of plasticity is the ratio between the integral of calcium when its concentration is above the potentiation onset $\theta_\uparrow$, which we will call the area of AMPAR insertion (orange area in Figure 2.6), and the integral of calcium when its concentration is above the depression onset $\theta_\downarrow$ and below the potentiation onset $\theta_\uparrow$, which we will call the area of AMPAR removal (grey area in Figure 2.6), weighted by the calcium-dependent learning rate $\eta$; we named this quantity $(A_\uparrow/A_\downarrow)_w$ (see Figure 2.6).

Figure 2.5 (caption): (A) Time course of $g_{AMPA}$ when the dendritic compartment is disinhibited for a short period (from t=5 min to t=10 min). The maximal AMPAR conductance increases from its initial value $g_{AMPA}$ = 4 nS to 6.9 nS during the disinhibition period. (B) Time course of $g_{AMPA}$ when the dendritic compartment is disinhibited for a long period (from t=5 min to t=13 min). It increases from $g_{AMPA}$ = 4 nS to 8.83 nS during the disinhibition period.
Changes in the AMPAR conductance $g_{AMPA}$ are described by equation (2.22). (C) Time course of the intracellular calcium concentration when the dendritic compartment $E_D$ is disinhibited for a short period (from t=5 min to t=10 min), where $\theta_\downarrow$ is the depression onset and $\theta_\uparrow$ the potentiation onset. (D) Time course of the intracellular calcium concentration when the dendritic compartment is disinhibited for a long period (from t=5 min to t=13 min). The calcium dynamics is described by equation (2.25) (see Methods). (E) Trajectories of the system in the $g_{AMPA}$-Ca plane when a pulse of glutamate is paired with a pulse of GABA, for $g_{AMPA}$ = 6.9 nS and $g_{AMPA}$ = 8.83 nS, where $\theta_{pot}$ is the potentiation threshold as defined in [START_REF] Shouval | A unified model of nmda receptor-dependent bidirectional synaptic plasticity[END_REF]. (F) Changes in the maximal AMPAR conductance, $\Delta g_{AMPA}$, as a function of the amplitude of the intracellular calcium pulse, $Ca_{max}$. Each point of the graph was obtained by submitting $E_D$ to a glutamate pulse for different initial values of $g_{AMPA}$; this induced different depolarization levels and, consequently, different activation levels of NMDARs and calcium pulses of different amplitudes.

GABA amplitude and Glu-GABA pairing timing control membrane potential

Disinhibition of the pyramidal cell, i.e., reduction of GABAergic inputs, can facilitate the depolarization of the cell, which can control plasticity, as we have shown in the previous section. Therefore, we hypothesize that the amplitude of the GABA pulse, $GABA_{max}$, and the relative time between the glutamate and GABA pulses, $\Delta t_{(GABA-Glu)}$, can modulate plasticity. To explore this hypothesis, we pair glutamatergic inputs with GABAergic inputs into $E_D$. We vary the relative time between the inputs, $\Delta t_{(GABA-Glu)}$, and the amplitude of the GABAergic inputs, $GABA_{max}$, and measure the changes induced in $g_{AMPA}$. Simulations were repeated for different values of $g_{AMPA}$ to understand why pulses of glutamate and GABA with the same characteristics (same amplitude and same duration) have different outcomes when administered after the short or long disinhibition periods. Simulations were done with three initial values of $g_{AMPA}$: $g_{AMPA}$ = 4 nS, $g_{AMPA}$ = 6.9 nS and $g_{AMPA}$ = 8.83 nS. We identified well-defined regions of potentiation and depression in the $\Delta t_{(GABA-Glu)}$-$GABA_{max}$ parameter space (see Figure 2.7). We also saw that the regions change with the value of $g_{AMPA}$. More specifically, the depression region moves towards the right of the plot as $g_{AMPA}$ increases. In other words, as $g_{AMPA}$ increases, the GABAergic inputs need to arrive with a longer delay relative to the glutamatergic inputs to induce depression. It is important to note that the level of potentiation or depression induced also changes as we increase $g_{AMPA}$. Generally, the magnitude of potentiation decreases, and the magnitude of depression increases. This is because the system saturates as $g_{AMPA}$ increases, i.e., $g_{AMPA}$ cannot increase indefinitely; this is a restriction imposed by the model. These results suggest that, through feedforward disinhibition, the same induction protocol may induce either potentiation or depression, more or less efficiently, depending on the current phosphorylation state of the AMPA receptors, i.e., $g_{AMPA}$, and on the decrease of GABA during disinhibition. In other words, the net effect of a pairing protocol is state-dependent.
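A schematic version of the $\Delta t_{(GABA-Glu)}$ × $GABA_{max}$ sweep behind Figure 2.7 is sketched below. The single-pairing simulation is a deliberately crude stand-in for the full $E_D$ model (instantaneous-rise gates, arbitrary conversion from NMDA current to a calcium proxy), so only the qualitative trend, less calcium with stronger or better-timed GABA, should be read from it.

```python
import numpy as np

def run_pairing(dt_gaba_ms, gaba_scale):
    """Crude single-pairing simulation of E_D: 1 ms glutamate pulse at t = 10 ms,
    GABA shifted by dt_gaba_ms and scaled by gaba_scale in [0, 1].
    Returns the peak of a calcium proxy (arbitrary units)."""
    dt, El, C = 0.05, -68.0, 100.0                          # ms, mV, pF
    t = np.arange(0.0, 200.0, dt)
    rA = np.exp(-(t - 10.0) / 2.0) * (t >= 10.0)            # AMPA gate, instant rise
    rN = np.exp(-(t - 10.0) / 80.0) * (t >= 10.0)           # NMDA gate, slow decay
    tg = 10.0 + dt_gaba_ms
    rG = gaba_scale * np.exp(-(t - tg) / 10.0) * (t >= tg)  # GABA_A gate
    V, Ca, Ca_max = El, 0.0, 0.0
    for i in range(len(t)):
        B = 1.0 / (1.0 + np.exp(-0.062 * V) / 3.57)         # Mg2+ unblock, [Mg] = 1 mM
        I_N = 1.0 * rN[i] * B * V                           # NMDA current, E_N = 0 mV
        I = 1.0 * (V - El) + 4.0 * rA[i] * V + I_N + 20.0 * rG[i] * (V + 80.0)
        V += dt * (-I) / C
        Ca += dt * (-0.01 * I_N - Ca / 50.0)                # arbitrary conversion
        Ca_max = max(Ca_max, Ca)
    return Ca_max

# Coarse sweep over the two protocol knobs (cf. the Figure 2.7 grid)
for dt_g in (2.0, 10.0, 30.0):
    for amp in (0.0, 0.5, 1.0):
        print(f"dt(GABA-Glu) = {dt_g:4.1f} ms  GABA_max scale = {amp:.1f}  "
              f"Ca_max = {run_pairing(dt_g, amp):.2f}")
```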
Figure 2.6: (A) For a weighted ratio between the calcium areas of AMPAR insertion and removal below 3.00, depression is induced. For a value above 3.00, potentiation is induced. (B) By adding a second source of calcium that becomes activated at t = 80 msec, it is possible to have situations where the calcium never crosses the potentiation threshold θ_pot but potentiation is induced. The normalized ratio (A↑/A↓)_w accurately identifies these cases as potentiation. In these numerical simulations, E_D receives a pulse of glutamate followed by a pulse of GABA 2 msec later, each with an amplitude of 1 mM and a duration of 1 msec.

Model predictions and implications

Results of model simulations and analysis make several testable predictions. First, while experiments so far have not identified precisely the exact type of s.o. interneuron that provides feedforward inhibition to the CA1 pyramidal cell, our model predicts that it should be an interneuron with fast dynamics, i.e., with dynamics comparable to the pyramidal cells. More specifically, we expect that hippocampal PV+ interneuron EPSCs in the stratum radiatum would decrease during cholinergic pairing due to the inhibition provided by the OLM neurons. Consequently, GABA_A-mediated IPSCs on the proximal dendrites of CA1 pyramidal cells would also decrease. In this work (both in modeling and experimentally), modulation of the OLM cells is due to cholinergic activation of α7 nAChRs. Our model more specifically suggests that GABA release by the OLM cells is regulated by the activation of α7 nACh receptors without necessarily altering OLM firing. However, GABA release can also be controlled by the depolarization of the OLM cells and/or by modulation of their spiking activity by somatic nAChRs. Our model predicts a relationship between the relative timing of the septal and hippocampal stimulus pairing and the direction of synaptic plasticity at the SC-PYR synapse. According to our simulations, increasing the frequency of septal and hippocampal paired stimulation can induce plasticity more efficiently, i.e., fewer pairings would be required to induce LTP. At the same time, we predict that changing the relative time between septal and hippocampal activation can induce LTD instead of LTP. Finally, our modeling results suggest that for plasticity to be induced, the excitatory NMDA and AMPA receptors and the inhibitory GABA_A receptors should be located sufficiently proximal to each other in the pyramidal dendritic compartment.

Discussion

This work set out to explain how nicotinic cholinergic modulation of hippocampal OLM interneurons paired with hippocampal stimulation can potentiate CA1 pyramidal cell EPSC responses. Our modeling results suggest that co-pairing cholinergic activation of α7 nAChRs on OLM interneurons results in the disinhibition of CA1 pyramidal cells. We also show by mathematical analysis how synaptic plasticity is controlled by the disinhibition of the postsynaptic pyramidal membrane through a disynaptic GABAergic circuit. To our knowledge, this is the first report to reveal how repeated disinhibition can directly induce LTP (both experimentally and computationally). It is also the first computational study that explicitly shows how cholinergic action on OLM interneurons can directly induce SC-CA1 plasticity through disinhibition.
OLM cells are a major class of GABAergic interneurons located in the stratum oriens hippocampal layer that inhibit the pyramidal cell dendritic compartments located in the stratum lacunosum-moleculare layer, reducing the strength of EC inputs. OLM cells also target bistratified interneurons, expressing parvalbumin (PV) and somatostatin (Sst), that receive feedforward excitatory inputs from the Schaffer collaterals [START_REF] Müller | Dendritic inhibition mediated by o-lm and bistratified interneurons in the hippocampus[END_REF]. Recent findings show that activation of OLM cells can facilitate LTP in the SC-CA1 pathway, likely by inhibiting s.r. interneurons that synapse on the same dendritic compartment as the SC, counteracting SC feedforward inhibition (R. Leão et al., 2012). We found that repeated pairing of cholinergic inputs with hippocampal stimulation can induce plasticity if the inputs are tightly timed. The time window for potentiation depends significantly on the dynamics of the O-cells and I-cells, and on the calcium dynamics. This agrees with experimental findings showing that activating cholinergic inputs to the hippocampus can directly induce different forms of synaptic plasticity depending on the hippocampus's input context, with a timing precision in the millisecond range [START_REF] Gu | Timing-dependent septal cholinergic induction of dynamic hippocampal synaptic plasticity[END_REF]. Our model also shows that the longer the co-pairing period and the higher the frequency of stimulation during the co-pairing period, the longer lasting is the potentiation of the synapse. According to our model, the key mechanism behind paired cholinergic induction of synaptic plasticity is the disinhibition of the pyramidal cell dendritic compartment. Cholinergic activation of the O-cell synapses inhibits the fast-spiking I-cell that projects to the dendritic compartment E_D. The disinhibition of E_D paired with glutamatergic stimulation allows for the depolarization of the pyramidal dendritic compartment. This increases NMDAR activation and raises the intracellular calcium concentration sufficiently to upregulate postsynaptic AMPAR permeability and potentiate the excitatory synapse. Our model puts these elements together to give the following sequence of events: SC stimulation results in the activation of CA1 fast-spiking interneurons, I, and the subsequent release of GABA. At the same time, it evokes an EPSP mediated by AMPARs on the CA1 pyramidal cell dendritic compartment E_D. Since I and E_D have comparable dynamics, the EPSP is closely followed by a GABA_A-mediated IPSP. Because of their slow kinetics and voltage dependence, at that time the NMDA receptors are not in the open state and there is no influx of calcium. When the SC inputs are tightly timed with cholinergic inputs acting on OLM interneurons, GABA release from I-cells is suppressed. The pyramidal cell membrane at (or sufficiently near to) the glutamatergic synapse can then depolarize enough to relieve the Mg2+ block from the NMDA receptors, allowing calcium to permeate through the receptor channel (Figure 2.8). Therefore, every time the pyramidal cell receives a glutamate pulse during the disinhibition period, the intracellular calcium concentration crosses the potentiation onset θ↑ and g_AMPA increases. Down-regulation of GABAergic signaling during disinhibition thus leads to increased NMDAR activation.
We see that when we reduce the GABA concentration, glutamatergic activation of E_D results in postsynaptic NMDA currents with an amplitude of 7.90 pA (with a depolarization of -58.25 mV), as opposed to the 6.75 pA that results from the pairing of glutamate and GABA inputs (with a depolarization of -63.56 mV) (see Figure S9). Because of the receptor's high calcium permeability, there is an elevation in intracellular calcium concentration large enough to initiate the molecular mechanisms that result in the insertion/phosphorylation of AMPARs. In our model, this translates into an increase in the AMPAR maximal conductance g_AMPA. Moderate calcium concentrations, on the other hand, result in the removal of AMPARs. Because changes in calcium concentration are not instantaneous, the potentiation/depression of the synapse results from a balance between the insertion/removal of AMPARs during the period in which the Ca concentration is above the potentiation/depression threshold. During disinhibition, this balance is positive and there is a net increase in g_AMPA. The more times we pair disinhibition with SC stimulation, i.e., the longer the disinhibition period, the higher the value of g_AMPA by the end of the disinhibition period. After the disinhibition period, if the increase of g_AMPA was large enough, the calcium resulting from glutamatergic and GABAergic stimulation is such that the balance between potentiation and depression is close to zero. That is, g_AMPA stabilizes by oscillating around its value at the end of the disinhibition period (8.83 nS). Therefore, the synapse remains potentiated long after the disinhibition period is over. If there is no stimulation after the disinhibition period, g_AMPA slowly decays to its initial value (i.e., its value before the disinhibition period). Provided that the increase in AMPAR permeability is high enough, the potentiation of the excitatory synapse is maintained after the disinhibition period is over through repeated stimulation of the SC, which keeps a balance between the down- and upregulation of the AMPARs. This is in accordance with experimental results showing that repeated pairing of inhibition of Sst interneurons (that were not OLM) targeting the CA1 pyramidal cell with SC stimulation can induce plasticity. Our model is robust to changes of parameters that maintain the same ratio of insertion/removal of AMPARs. Thus, for example, for different values of γ↑ there is (at least) one value of γ↓ for which our results remain the same (Figure S9). It is worth noting that the type of synaptic plasticity induced depends on the value of the maximal conductance of the postsynaptic AMPAR, g_AMPA, as shown in Figure 2.7. Therefore, our model indicates that future changes in synaptic strength depend on previous plasticity events and on how these changed g_AMPA. This explains why, after the disinhibition period, pairs of glutamate and GABA pulses with the same characteristics will induce different results when the disinhibition period is short or long. Earlier studies pointed out that reduced inhibition (disinhibition) can facilitate LTP induction under various conditions [START_REF] Ormond | Disinhibition mediates a form of hippocampal long-term potentiation in area ca1[END_REF][START_REF] Yang | A dendritic disinhibitory circuit mechanism for pathway-specific gating[END_REF][START_REF] Wigström | Facilitated induction of hippocampal longlasting potentiation during blockade of inhibition[END_REF].
Our results show that repeated, temporally precise disinhibition can directly induce SC-to-CA1 LTP, providing a novel mechanism for inhibitory interneurons to directly modify glutamatergic synaptic plasticity. This expands the original spike-timing-dependent plasticity framework, which concerns the concurrent activation of two excitatory pathways, to include the interneuron network. Furthermore, our modeling work implies that GABAergic neurotransmission should control the local pyramidal voltage in the vicinity of the glutamatergic synapses. Thereby, the inhibitory synapses critically modulate excitatory transmission and the induction of plasticity at excitatory synapses. This points towards the importance of dendritic GABA and glutamate co-location in shaping local plasticity rules. Our work also suggests a cholinergic mechanism for controlling GABA release at the pyramidal dendrites and the subsequent potentiation of excitatory synapses, unraveling the intricate interplay of the hierarchical inhibitory circuitry and cholinergic neuromodulation as a mechanism for hippocampal plasticity. Previous work by Gu and Yakel (2017) showed that co-paired activation of the cholinergic input pathway from the septum to the hippocampus with stimulation of the Schaffer collateral pathway could readily induce theta oscillations in a co-culture septal-hippocampal-entorhinal preparation. Moreover, after performing co-paired activation several times, not only was the SC-PYR synapse potentiated, but it also became easier to evoke the theta rhythm in the preparation (a one-pulse stimulus of the SC is sufficient to generate theta oscillations in the circuit with the same characteristics as before). Therefore, induction of hippocampal plasticity, particularly potentiation of the CA1 EPSPs, appears to facilitate the generation of the theta rhythm. Moreover, recent studies directly linked OLMα2 interneurons to theta oscillations [START_REF] Mikulovic | Ventral hippocampal olm cells control type 2 theta oscillations and response to predator odor[END_REF], and suggest that OLM cells can regulate the robustness of the hippocampal theta rhythm [START_REF] Chatzikalymniou | Deciphering the contribution of oriens-lacunosum/moleculare (olm) cells to intrinsic θ rhythms using biophysical local field potential (lfp) models[END_REF]. Thus, we may speculate that the action of ACh on the α7 nAChRs of the OLMα2 neurons potentiates the SC-CA1 synapses to close the critical link in the synaptic chain of events, enabling recurrent reverberation of excitation in the hippocampal-entorhinal theta-generating circuit. Understanding the mechanisms underlying the induction of hippocampal plasticity by this co-pairing mechanism will allow future studies of how changes at the synaptic level can propagate to the network level and change the mechanisms of theta generation. Our results are also relevant to understanding the neural circuit origins of pathological conditions and to uncovering potential targets for therapeutic intervention in disorders linked to memory deficits. For example, the hippocampus is one of the earliest brain structures to develop neurodegenerative changes in Alzheimer's disease (AD) [START_REF] Arriagada | Neurofibrillary tangles but not senile plaques parallel duration and severity of alzheimer's disease[END_REF]. Furthermore, numerous studies suggest that cogni-
A Calculating the weighted potentiation/depression area ratio (A↑/A↓)_w

While the calcium concentration is above the depression onset θ↓ (but below the potentiation onset θ↑), the maximal conductance of the AMPARs, g_AMPA, is decreasing. On the other hand, when the calcium concentration is above θ↑, g_AMPA is increasing. The induction of plasticity at the excitatory synapse depends on the net result of these changes of g_AMPA. The more time calcium spends above θ↑/θ↓, the more likely it is that potentiation/depression is induced at the synapse. Furthermore, the more time calcium spends above θ↑/θ↓, the bigger the area underneath the calcium curve in this region of insertion/removal of AMPARs. Therefore, the ratio between the area of insertion and the area of removal (A↑/A↓) can be used as a measure of the induction of plasticity. There is an optimal ratio for which the decrease of g_AMPA resulting from time spent in the removal region and the increase of g_AMPA resulting from time spent in the insertion region cancel each other and no plasticity is induced. If the ratio A↑/A↓ is below this value, depression is induced; if the ratio is above this value, potentiation is induced. Denoting by t0 and t3 the times at which the calcium concentration crosses the depression onset θ↓ (upwards and downwards, respectively), and by t1 and t2 the times at which it crosses the potentiation onset θ↑, the ratio A↑/A↓ is given by the following expression:

A↑/A↓ = ∫_{t1}^{t2} Ca dt / ( ∫_{t0}^{t1} Ca dt + ∫_{t2}^{t3} Ca dt )    (26)

Because the decrease and increase of g_AMPA are not the same throughout the removal and insertion regions, we need to calculate the calcium integral weighted by the calcium-dependent learning rate η. The normalized ratio is then given by

(A↑/A↓)_w = ∫_{t1}^{t2} Ca·η dt / ( ∫_{t0}^{t1} Ca·η dt + ∫_{t2}^{t3} Ca·η dt ).

To calculate (A↑/A↓)_w, we use the trapezoidal rule to perform numerical integration of the potentiation and depression areas.

B A qualitative study of the synaptic profile of ACh

Not much is known about the ACh profile in the synaptic cleft upon release from cholinergic neurons; more specifically, not much is known about how long ACh remains available to bind to cholinergic receptors before being broken down into choline. We have considered two different types of profiles for the ACh concentration in the synaptic cleft, and explored their validity for different parameters (amplitude, duration and decay time constant). We take into account the observations made by [START_REF] Gu | Timing-dependent septal cholinergic induction of dynamic hippocampal synaptic plasticity[END_REF] that pairing cholinergic inputs 10 msec prior to SC stimulation induces depression of the SC-CA1 synapse, while if the cholinergic inputs are activated 100 msec prior to SC stimulation, potentiation is induced. We pair one pulse of ACh with a square pulse of glutamate with a relative time of 10 and 100 msec, and measure the resulting changes in AMPAR conductance. A square pulse of ACh followed by a pulse of glutamate 10 and 100 msec later will induce, respectively, depression and potentiation if the duration of the ACh pulse is equal to or greater than that of the glutamate pulse (even if the amplitude of ACh is smaller than that of glutamate). If ACh is described by an alpha function with an instantaneous rise time, the smaller the amplitude of the ACh pulse, the longer the decay time needs to be for the results to be in agreement with [START_REF] Gu | Timing-dependent septal cholinergic induction of dynamic hippocampal synaptic plasticity[END_REF] (for the same glutamate pulse duration).
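The two candidate ACh profiles just described can be written down in a few lines; the sketch below implements them, with the amplitude and decay values taken from the case of Figure S2(B). These settings are illustrative, not measured quantities.

```python
import numpy as np

def square_pulse(t, amplitude, onset, duration):
    """Square neurotransmitter pulse (amplitude in mM, times in msec)."""
    return amplitude * ((t >= onset) & (t < onset + duration)).astype(float)

def decaying_pulse(t, amplitude, onset, tau_decay):
    """Pulse with an instantaneous rise at `onset` followed by exponential decay
    with time constant tau_decay (the 'alpha function with instantaneous rise
    time' considered in the text)."""
    dt = t - onset
    return amplitude * np.exp(-np.clip(dt, 0.0, None) / tau_decay) * (dt >= 0)

t = np.arange(0.0, 150.0, 0.01)                                    # msec
ach = decaying_pulse(t, amplitude=0.39, onset=0.0, tau_decay=4.0)  # Figure S2(B) case
glu = square_pulse(t, amplitude=1.0, onset=10.0, duration=5.0)     # Glu 10 msec after ACh
```

Shifting the glutamate onset from 10 to 100 msec reproduces the two pairing delays considered above.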
If the duration of the glutamate pulse increases (decreases), we must also increase (decrease) the decay time of ACh (simulations not shown). Please note that the decay and duration times, as well as the amplitudes, of both the ACh and glutamate pulses serve merely as a guide to what types of neurotransmitter profiles we should consider. They are qualitative, and not quantitative, predictions of the synaptic profile of ACh.

Figure S1: (A) One square pulse of ACh with a duration of 1 msec and a concentration of 0.5 mM, followed 10 msec later by a square pulse of glutamate with a duration of 5 msec and an amplitude of 1 mM, produces no changes in the maximal conductance of AMPAR, g_AMPA (left panel). Similarly, if the pulse of glutamate arrives 100 msec later, no changes are induced (right panel). (B) One square pulse of ACh with a duration of 5 msec and a concentration of 0.5 mM, followed 10 msec later by a pulse of glutamate with a duration of 5 msec and a concentration of 1 mM, decreases g_AMPA (left panel). If the pulse of glutamate arrives 100 msec later, potentiation is induced (right panel). (C) One square pulse of ACh with a duration of 1 msec and a concentration of 1 mM, followed 10 msec later by a square pulse of glutamate with a duration of 5 msec and an amplitude of 1 mM, produces no changes in g_AMPA (left panel). Similarly, if the pulse of glutamate arrives 100 msec later, no changes are induced (right panel). (D) One square pulse of ACh followed 10 msec later by a pulse of glutamate with the same characteristics (duration of 5 msec and concentration of 1 mM) decreases g_AMPA (left panel). If the pulse of glutamate arrives 100 msec later, potentiation is induced (right panel).

Figure S2: (A) One pulse of ACh with an amplitude of 0.39 mM, an instantaneous rise time and a decay time constant of 1 msec, followed 10 msec later by a square pulse of glutamate with an amplitude of 1 mM and a duration of 5 msec, induces no changes in g_AMPA (left panel). Similarly, if the pulse of glutamate arrives 100 msec later, no changes are induced (right panel). (B) One pulse of ACh with an amplitude of 0.39 mM, an instantaneous rise time and a decay time constant of 4 msec, followed 10 msec later by a square pulse of glutamate, depresses g_AMPA (left panel). If the pulse of glutamate arrives 100 msec later, potentiation is induced (right panel). (C) One pulse of ACh with an amplitude of 1 mM, an instantaneous rise time and a decay time constant of 1 msec, followed 10 msec later by a square pulse of glutamate with the same amplitude and a duration of 5 msec, provokes a decrease in g_AMPA (left panel). If the pulse of glutamate arrives 100 msec later, no changes are induced (right panel). (D) One pulse of ACh with an amplitude of 1 mM, an instantaneous rise time and a decay time constant of 4 msec, followed 10 msec later by a square pulse of glutamate, depresses g_AMPA (left panel). If the pulse of glutamate arrives 100 msec later, potentiation is induced (right panel).

Part II: Theta rhythm generation

3 | Exact reduction for networks of neurons with complex dynamic phenotypes

Introduction

For decades, neuroscientists have been using mean-field theory to reduce the description of neural circuits composed of many interacting neurons to a low-dimensional system that describes the macroscopic dynamical states of the network. This approach generates a reduced picture of the neural population that can be used to study how brain functions arise from the collective behavior of spiking neurons.
[START_REF] Montbrió | Macroscopic description for networks of spiking neurons[END_REF] pioneered an exact mean-field approach to link the microscopic dynamics of the individual neurons to the macroscopic state of large neural networks in terms of the firing rate and mean voltage. However, this approach was limited to networks of one-dimensional quadratic integrate-and-fire (QIF) neurons, which cannot account for complex spiking and bursting dynamics. Two-dimensional quadratic integrate-and-fire models (e.g., the Izhikevich neuron model [START_REF] Izhikevich | Simple model of spiking neurons[END_REF]) reproduce a wide variety of spiking and bursting behaviors. An exact mean-field reduction of such neuron models would enable us to derive the macroscopic dynamics of populations of neurons with any kind of spiking properties. In other words, it would allow us to use techniques of dynamical systems theory to study the underlying mechanisms that lead to the emergence of specific population behaviors, such as neural oscillations. Despite recent advances [START_REF] Di Volo | Biologically realistic mean-field models of conductance-based networks of spiking neurons with adaptation[END_REF][START_REF] Nicola | Bifurcations of large networks of twodimensional integrate and fire neurons[END_REF], exact analytical mean-field reduction methods for two-dimensional QIF neurons are still lacking. In this chapter, we derive a macroscopic exact-reduction (ER) description for large networks of conductance-based interacting Izhikevich neurons. We start by presenting the two-dimensional QIF neuron model we will use to describe the neurons in our population. We then show how a separation of time scales of the variables describing the state of the neurons allows us to explicitly solve the continuity equation of the system, which represents the evolution of the state of the neural population. By doing so, we obtain a system of two coupled variables, the firing rate and the mean voltage, which together describe the evolution of the macroscopic system. We support our findings with extensive numerical evidence involving simulations of finite-size networks of neurons with different spiking properties, compared with the respective reduced systems. This approach opens the possibility of generating realistic mean-field models from electrophysiological recordings of individual neurons and can be used to relate the biophysical properties of neurons with emerging behavior at the network scale.

Methods

Population model of coupled QIF neurons

We derive a mean-field model for populations of coupled Izhikevich neurons. Each neuron i from a population W is described by a fast variable representing the membrane potential, V (mV), and a slow variable representing the recovery current, u (pA):

C_m dV_i^W/dt = a(V_i^W − V_r)(V_i^W − V_t) − u_i^W + I_i    (3.1a)
du_i^W/dt = α(β(V_i^W − V_r) − u_i^W)    (3.1b)

where the onset of an action potential is taken into account by a discontinuous reset mechanism: if V_i^W > V_peak, then V_i^W ← V_reset and u_i^W ← u_i^W + u_jump.

The parameters are as follows: C_m stands for the membrane capacitance, V_r is the resting potential, V_t the threshold potential, a is a scaling factor, α the time constant of the recovery variable u, β modulates the sensitivity of the recovery current to subthreshold oscillations, and I_i is the total current acting on neuron i. We consider I_i = η_i + I_ext + I_syn,i.
The parameter η_i represents a background current. To account for the network heterogeneity, η_i is randomly drawn from a Lorentzian distribution with half-width Δ centered at η̄:

g(η) = (1/π) Δ / ((η − η̄)² + Δ²).

I_ext is an external current acting on population W (identical for all neurons). I_syn,i is the total synaptic current acting on neuron i, given by:

I_syn,i = Σ_Z s_WZ (E_r^Z − V_i^W)    (3.2)

where E_r^Z is the reversal potential of the synapse and s_WZ the synaptic conductance. If we assume that all neurons of population W are connected to all neurons of population Z, the synaptic conductance s_WZ can be described according to the following equation:

ds_WZ/dt = −s_WZ/τ_s + (p_WZ/N_Z) Σ_{k=1}^{N_Z} Σ_f δ(t − t_f^k)    (3.3)

where δ is the Dirac mass measure and t_f^k is the firing time of neuron k. The parameter τ_s represents the synaptic time constant, N_Z is the number of neurons of population Z, and p_WZ is the coupling strength of population Z on population W.

Results

Adiabatic approximation of the two-dimensional QIF neuron model

We exploit the difference between the time scales of the membrane potential V and the recovery variable u to reduce the dimensionality of the neural network. If the time scale of the recovery variable is much slower than that of the other variables, we can invoke an adiabatic approximation by considering that all neurons of population W receive a common recovery variable u_W. This results in the modified Izhikevich QIF model:

C_m dV_i^W/dt = a(V_i^W − V_r)(V_i^W − V_t) − u_W + I_i    (3.4)
du_W/dt = α(β(⟨V⟩_W − V_r) − u_W) + u_jump Σ_{k=1}^{N_W} Σ_f δ(t − t_f^k)    (3.5)

where ⟨V⟩_W is the mean membrane potential of population W, described as follows:

⟨V⟩_W = (Σ_{k=1}^{N_W} V_k^W) / N_W    (3.6)

Note that we have incorporated the resetting mechanism of the variable u_W in the last term of equation (3.5). The onset of an action potential is now described by: V_i^W > V_peak ⇒ V_i^W ← V_reset.

From now on we will consider equation (3.4) written in terms of the parameters b = a(−V_r − V_t) and c = aV_rV_t:

C_m dV_i^W/dt = a(V_i^W)² + bV_i^W + c − u_W + η_i + Σ_Z s_WZ (E_r^Z − V_i^W) + I_ext    (3.7)

The main consequence of the adiabatic approximation is the reduction in the number of state variables describing a neuron in the population from (V_i^W, u_i^W) to (V_i^W). This is a crucial step in our method, since it enables us to solve the continuity equation of the system analytically, as we demonstrate in the next section.

Mean-field reduction

In the mean-field limit, a population of neurons is well represented by its probability density function, ρ. This function represents the proportion of neurons that are in a particular state at time t. In our case, the state of a neuron is fully described by its membrane potential. We denote by ρ(V^W|η, t) the probability of finding a neuron from population W with voltage V at time t, knowing that its intrinsic parameter is η. Defining the flux J(V|η, t) (= (dV/dt) ρ(V|η, t)) as the net fraction of trajectories per time unit that crosses the value V, we can write the continuity equation

∂ρ(V|η, t)/∂t = −∂J(V|η, t)/∂V    (3.8)

which expresses the conservation of the number of neurons. Note that in integrate-and-fire models, the number of trajectories is not conserved at V = V_reset and V = V_peak.
By taking V_reset and V_peak to infinity, we ensure that the boundary conditions are the same and that the number of trajectories is conserved. According to the Lorentzian ansatz (LA) [START_REF] Montbrió | Macroscopic description for networks of spiking neurons[END_REF], solutions of the continuity equation (3.8) for a population of QIF neurons converge to a Lorentzian-shaped function with half-width x(η, t) and center at y(η, t), of the form:

ρ(V^W|η, t) = (1/π) x(η, t) / ([V − y(η, t)]² + x(η, t)²)    (3.9)

We discuss the validity of the LA applied here in Appendix A. Here, x(η, t) and y(η, t) are statistical variables that represent the low-dimensional behavior of the probability density function ρ. Adopting the LA, we obtain the low-dimensional system:

C_m ∂x(η, t)/∂t = (b − Σ_Z s_WZ) x + 2axy    (3.10)
C_m ∂y(η, t)/∂t = −ax² + ay² + (b − Σ_Z s_WZ) y + c − u_W + I_ext + Σ_Z s_WZ E_r^Z + η    (3.11)

which can be written in complex form as:

C_m ∂w(η, t)/∂t = i(−aw² + c − u + I_ext + Σ_Z s_WZ E_r^Z + η) + (b − Σ_Z s_WZ) w    (3.12)

with w(η, t) = x(η, t) + iy(η, t).

The macroscopic variables: firing rate and mean voltage

The firing rate is obtained by summing the flux over all η at V = V_peak. Taking V_peak → ∞, the firing rate of a population W is defined as follows:

r_W(t) = lim_{V→∞} ∫ J(V^W|η, t) g(η) dη    (3.13)

The mean voltage of the population is obtained by integrating the probability density function ρ over all V and η values:

v_W(t) = ∫∫ ρ(V^W|η, t) g(η) dV^W dη    (3.14)

Adopting the solution of the continuity equation (3.9) and inserting it into equations (3.13) and (3.14), we have that the phenomenological variables x and y relate to the firing rate, r, and mean voltage, v, as follows:

r_W(t) = (a/(C_m π)) ∫ x(η, t) g(η) dη    (3.15)
v_W(t) = ∫∫ (x(η, t)/π) V^W(t) / ((V^W(t) − y(η, t))² + x(η, t)²) g(η) dV^W dη    (3.16)

To avoid the indeterminacy of the improper integral, we resort to the Cauchy principal value to evaluate the integral (3.16) (p.v. ∫_{−∞}^{+∞} h(x) dx = lim_{R→∞} ∫_{−R}^{R} h(x) dx). In the case of a Lorentzian distribution, the principal value is given by p.v. ∫_{−∞}^{+∞} (σ/π) x / ((x − x₀)² + σ²) dx = x₀. We then have that the mean voltage is given by:

v_W(t) = ∫ g(η) p.v. ∫ (x/π) V^W / ((V^W − y)² + x²) dV^W dη    (3.17)
       = ∫ g(η) y dη    (3.18)

As previously mentioned, in the mean-field limit the probability distribution function g(η) is given by

g(η) = (1/π) Δ / ((η − η̄)² + Δ²) = (1/π) Δ / ((η − (η̄ + iΔ))(η − (η̄ − iΔ))).

The distribution g(η) has poles at η̄ − iΔ and η̄ + iΔ, and can be written as

g(η) = (1/(2πi)) ( 1/(η − (η̄ + iΔ)) − 1/(η − (η̄ − iΔ)) ).

The integrals in equations (3.15) and (3.18) are evaluated by closing the integration contour in the complex η plane and using the residue theorem.
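As a worked step, and under the standard assumption that x(η, t) and y(η, t), analytically continued in η, have no singularities in the lower half-plane, closing the contour of (3.18) there encircles only the pole at η = η̄ − iΔ, traversed clockwise:

v_W(t) = ∮ g(η) y(η, t) dη = −2πi · Res_{η = η̄ − iΔ} [g(η) y(η, t)] = −2πi · (−1/(2πi)) y(η̄ − iΔ, t) = y(η̄ − iΔ, t),

where the factor −2πi accounts for the clockwise orientation. The same computation applied to (3.15) gives r_W(t) = (a/(C_m π)) x(η̄ − iΔ, t).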
We then have that the firing rate and mean potential relate to the Lorentzian coefficients x and y according to the following expressions:

r_W(t) = (a/(C_m π)) x(η̄ ± iΔ, t)    (3.19)
v_W(t) = y(η̄ ± iΔ, t)    (3.20)

Given equations (3.19) and (3.20), and noting that

C_m dx(η̄ ± iΔ, t)/dt = (b − Σ_Z s_WZ) x + 2axy − (±Δ)    (3.21)
C_m dy(η̄ ± iΔ, t)/dt = −ax² + ay² + c − u + (b − Σ_Z s_WZ) y + I_ext + η̄    (3.22)

we have that the continuity equation reduces to the low-dimensional macroscopic dynamical system:

C_m dr/dt = (b − Σ_Z s_WZ) r + 2arv − (±Δ) a/(C_m π)
C_m dv/dt = −(C_m² π²/a) r² + av² + c − u + (b − Σ_Z s_WZ) v + I_ext + η̄

Since the firing rate always has to be non-negative, we need to evaluate the closed integration contour containing the pole of g(η) in the lower half of the η plane, i.e., η̄ − iΔ. Until now, we considered the integration contour in both the upper and lower halves of the η plane. This is because the Lorentzian variables x and y have no physical meaning; therefore, we could not draw any conclusions about which contour to consider when using the residue theorem to solve (3.15) and (3.18). We thus have that an exact mean-field reduction of a population of interacting conductance-based Izhikevich two-dimensional QIF neurons is given by:

C_m dr_W/dt = (b − Σ_Z s_WZ) r_W + 2a r_W v_W + Δa/(C_m π)    (3.23a)
C_m dv_W/dt = −(C_m² π²/a) r_W² + a v_W² + c − u_W + (b − Σ_Z s_WZ) v_W + I_ext + Σ_Z s_WZ E_r^Z + η̄    (3.23b)
du_W/dt = α(β(v_W − V_r) − u_W) + u_jump r_W    (3.23c)

with

ds_WZ/dt = −s_WZ/τ_s + p_WZ r_Z    (3.24)

Numerical simulations

The Izhikevich two-variable QIF model can, with an adequate choice of parameters, reproduce many of the key features of firing patterns observed in neurons, such as tonic spiking, subthreshold oscillations, and bursting [START_REF] Izhikevich | Simple model of spiking neurons[END_REF]. We apply the exact reduced system (3.23) described in the previous section to populations of neurons with different firing dynamics and compare the resulting population activity with direct simulations of QIF neurons to explore the versatility of the model. Figure 3.1 illustrates a comparison of the dynamics of the full network of Izhikevich QIF neurons with its corresponding reduced system. For the full system, each population is made up of N = 3000 neurons. The neurons are described by the two-dimensional QIF model (3.1), with the respective parameters specified in Table 3.1. The firing rate is calculated according to r(t) = (1/N) Σ_{k=1}^{N} Σ_f δ(t − t_f^k). For the reduced system, the firing rate is calculated according to equation (3.23a). The reduced description closely follows the firing activity of the full network for all populations.

Limitations of the reduction formalism

One crucial assumption of the mean-field reduction formalism presented here is the slow dynamics of the recovery variable u. However, every time a neuron reaches V_peak, the membrane potential V is reset to V_reset and the recovery variable u is instantaneously increased by u_jump, adding a discontinuity to the system. Given that the adiabatic approximation made in section 3.3.1 relies on the assumption that the variable u is the slowest variable in the system, so that all the neurons in the population can be considered to receive a variable u with approximately the same value, adding an instantaneous jump invalidates the approximation. The higher the jump, the more evident this is.
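This sensitivity can be probed directly in simulation. The sketch below integrates the full adiabatic network (3.4)-(3.5) and the reduced system (3.23) side by side for a single uncoupled population; the parameter values are illustrative placeholders, not those of Table 3.1, and increasing u_jump makes the growing mismatch visible. The 1/N normalization of the jump term follows the reduced equation (3.23c).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholder parameters (not the values of Table 3.1)
N, Cm, a, Vr, Vt = 3000, 100.0, 0.7, -60.0, -40.0
alpha, beta, u_jump = 0.03, -2.0, 30.0
V_peak, V_reset = 200.0, -200.0              # pushed outwards, cf. the reduction
eta_bar, Delta, I_ext = 120.0, 10.0, 0.0
b, c = -a * (Vr + Vt), a * Vr * Vt
dt, steps = 0.01, 50_000                     # time step in msec

# Lorentzian-distributed background currents via inverse-transform sampling
eta = eta_bar + Delta * np.tan(np.pi * (rng.random(N) - 0.5))

# --- full network with a common recovery variable (equations 3.4-3.5) ---
V, u_full, rate_full = np.full(N, Vr), 0.0, []
for _ in range(steps):
    V += dt * (a * (V - Vr) * (V - Vt) - u_full + eta + I_ext) / Cm
    spikes = V > V_peak
    V[spikes] = V_reset
    u_full += dt * alpha * (beta * (V.mean() - Vr) - u_full) + u_jump * spikes.sum() / N
    rate_full.append(spikes.sum() / (N * dt))    # spikes per msec per neuron

# --- reduced system (equations 3.23, no synaptic coupling) ---
r, v, u, rate_red = 0.0, Vr, 0.0, []
for _ in range(steps):
    dr = (b * r + 2 * a * r * v + Delta * a / (Cm * np.pi)) / Cm
    dv = (-(Cm * np.pi * r) ** 2 / a + a * v ** 2 + b * v + c - u + I_ext + eta_bar) / Cm
    du = alpha * (beta * (v - Vr) - u) + u_jump * r
    r, v, u = r + dt * dr, v + dt * dv, u + dt * du
    rate_red.append(r)
```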
In the examples considered in Figure 3.1, u_jump is small enough for the reduced system to describe the activity of all populations accurately. This means that the reduced system derived here can be used to study the activity of populations with any of the spiking dynamics portrayed in Figure 3.1. Still, it may not be adequate to describe the activity of specific populations of neurons found in the brain that require a large u_jump value to describe their dynamics accurately. This is the case for rat spiny projection neurons of the neostriatum and basal ganglia. In Figure 3.2 we compare the full and reduced systems for a population of uncoupled spiny projection neurons (u_jump = 150 pA). We then systematically decrease the value of u_jump and see how that changes the agreement between the dynamics of the full and reduced systems. All the neurons receive an external current as described in Figure 3.1(D). We see that there is not a perfect agreement between the full and reduced systems for a population of spiny projection neurons (left panel). Decreasing the value of u_jump notably improves the agreement between the full and reduced systems, confirming that the high u_jump is at the origin of the observed mismatch. For u_jump = 150 pA, one way to improve the representation of the population activity would be to decrease Δ. By decreasing the width Δ of the distribution of the intrinsic variable η, we decrease the heterogeneity of the network. As a result, we can again consider that all the neurons in a population W receive the same variable u at any given time (simulations not shown).

The particular case of bursting neurons

A critical point of the derivation of our reduced mean-field model is the assumption that the firing rate of a population is defined as the flux at infinity. In other words, we consider V_peak → ∞. Similarly, we assume that V_reset → −∞. While moving V_peak towards infinity does not change the intrinsic spiking properties of the neurons that constitute the population, moving V_reset in the direction of −∞ changes the microscopic behavior of bursting neurons. Starting at point A, we are on the V-nullcline, where by definition dV/dt = 0, and the dynamics is governed by the u-component. Since we are to the left of the u-nullcline, the trajectory follows a downward flux. As u slowly decreases, we reach point B below the V-nullcline, and the fast dynamics in the V direction pushes the system towards V_peak, at which point the system is reset to V_reset. This last process repeats while u slowly increases until it reaches point C, where a voltage reset takes the system to a point above the V-nullcline. In this region, the flux is directed towards the left, which brings the system back to point A. By decreasing the value of V_reset, we lose the bursting dynamics, and the neuron model shows tonic spiking instead (see Figure 3.3(C)). One way to preserve the bursting dynamics of the microscopic system would be to move the V- and u-nullclines by the same amount as V_reset (Figure 3.3(D)). We do so by decreasing the values of V_r and V_t (remember that b = −a(V_r + V_t) and c = aV_rV_t). From Figure 3.3(E), we see that by adopting this change the full and reduced system activities have approximately the same shape, but they do not perfectly agree. It is important to note, however, that this method has important flaws: it implies that as V_reset → −∞, the resting and threshold potentials, V_r and V_t, should also move to −∞. This is not only a problem from the biological point of view, but it can also invalidate some mathematical results adopted during the derivation of the mean-field reduction; namely, when explicitly solving the integrals that define the firing rate and mean voltage of the population (equations (3.15) and (3.16)).

Figure 3.3: Nullclines dV/dt = 0 (green line) and du/dt = 0 (yellow line) for a bursting neuron, with the respective trajectory (black line) on the phase plane. (C) Nullclines and trajectory of the system when V_reset decreases from −55 to −70 mV. The trajectory of the system no longer shows bursting behavior. (D) Nullclines and trajectory of the system on the phase plane when b = 6, c = 232, V_r = −80 and V_reset = −70. As a result of the changes in b, c and V_r, the nullclines move to the left of the phase plane and we recover the trajectory of bursting neurons. (E) Comparison between the full and reduced systems for a population of bursting neurons (with b = 6, c = 232, V_r = −80 and V_reset = −70). The reduced system captures some but not all of the structure of the full bursting system.

Discussion

In this chapter, we presented a reduction formalism that allows us to predict the collective dynamics of large networks of conductance-based interacting spiking neurons with different spiking properties. Starting with a population of two-dimensional QIF neurons, we considered an adiabatic approximation of a slow recovery variable, which enabled us to reduce the number of variables describing the state of a neuron in the network. By doing so, we simplified the continuity equation describing the evolution of the state of our population, and we could directly apply the Lorentzian ansatz to solve the continuity equation and reduce our full network to a low-dimensional macroscopic system. This mean-field formalism provides a paradigm to bridge the scale between population dynamics and the microscopic complexity of the physiology of the individual cells, opening the perspective of generating biologically realistic mean-field models from electrophysiological recordings for a variety of neural populations. A similar idea appears in di Volo et al. (2019) and [START_REF] Nicola | Bifurcations of large networks of twodimensional integrate and fire neurons[END_REF]. Di Volo and colleagues propose a mean-field model of spiking neurons with a recovery variable by calculating the transfer function in a semi-analytical way. This approach, however, is limited to cases where the neuron dynamics has a stationary firing rate, and it cannot be used to study populations of neurons whose transfer function cannot be well defined (di [START_REF] Di Volo | Biologically realistic mean-field models of conductance-based networks of spiking neurons with adaptation[END_REF]). Similar to our approach, Nicola and Campbell use moment closure and a steady-state approximation of the recovery variable to write an expression for the population firing rate, defined as the integral of the population density function. However, they cannot apply the Lorentzian ansatz to solve the integral, because they do not consider the heterogeneity of the population. Therefore, for some types of networks the firing-rate integral cannot be evaluated explicitly [START_REF] Nicola | Bifurcations of large networks of twodimensional integrate and fire neurons[END_REF].
Sufficient requirements for our approach to be valid are that the recovery variable u is the slowest in the system and that u_jump is relatively small. This means that even though it is possible to describe any class of spiking dynamics, the reduced model might be unable to describe the activity of specific neural populations, such as spiny projection neurons of the neostriatum and basal ganglia. It is important to note that even though the original QIF neuron model for tonic bursting neurons fulfills all the requirements, the mean-field system seems to be inadequate to describe the population's behavior. Since, in the Izhikevich two-dimensional QIF model, the bursting mechanism depends on the position the system acquires in the phase plane (V, u) upon reset, moving V_reset to −∞ alters the behavior of the microscopic system. Therefore, despite a good agreement between the full and reduced systems, the population under study is no longer a population of tonic bursting neurons but one of tonic spiking neurons. A solution we found was to move the u- and V-nullclines together with V_reset, so that an action potential resets the system to the same position in the phase plane relative to the nullclines and preserves the bursting mechanisms of the original model. We do so by decreasing the resting and threshold potentials, V_r and V_t, by the same amount as V_reset. The full and reduced systems of the resulting tonic bursting neurons do not perfectly agree, but the reduction accurately reproduces the oscillatory behavior of the population. In other words, when the system receives a big enough external input I_ext, both the full and reduced systems show damped oscillations, but with different frequencies. Therefore, the mean-field description may still be useful to study certain features and the qualitative behavior of a population of bursting neurons. However, it is important to note that the approach taken for the case of the bursting neurons presents some fundamental problems; namely, it implies that the reset, resting and threshold potentials are all sent to −∞. An alternative solution is to consider the two-dimensional theta neuron model with a slow recovery variable, which with the appropriate choice of parameters can produce bursting [START_REF] Ermentrout | Parabolic bursting in an excitable system coupled with a slow oscillation[END_REF], and to apply the same steps as for the derivation for the two-dimensional QIF. In the theta neuron model, the system evolves along a circle and V ∈ [−∞, +∞] maps to θ ∈ [0, 2π]. A deeper analysis of this approach is necessary to prove its validity. In the following chapter, we show how the exact mean-field model derived here can be applied to study the generation and expression of macroscopic oscillations in an entorhinal and hippocampal network.

A Validity of the Lorentzian ansatz

Previous work by [START_REF] Montbrió | Macroscopic description for networks of spiking neurons[END_REF] shows how the dynamics of a class of QIF neurons generally converges to the Ott-Antonsen ansatz (OA) manifold. This is known as the Lorentzian ansatz (LA). In this section, we clarify why the Lorentzian ansatz holds for the ensembles of QIF neurons considered here.
We start by introducing the following transformation:

V_i^W = tan(θ_i^W / 2)    (25)

Then, equation (3.1a) transforms into:

C_m dθ_i^W/dt = a(1 − cos θ_i^W) + (1 + cos θ_i^W)(c − u_W + η_i + Σ_Z s_WZ E_r^Z + I_ext) + (b − Σ_Z s_WZ) sin θ_i^W    (26)

Note that V = ±∞ corresponds to θ = ±π. According to the Ott-Antonsen ansatz [START_REF] Ott | Low dimensional behavior of large systems of globally coupled oscillators[END_REF], in the thermodynamic limit, the dynamics of a class of systems

dθ/dt = Ω(η, t) + Im(H(η, t) e^{−iθ})    (27)

converges to the OA manifold

ρ(θ|η, t) = (1/2π) Re[(1 + α(η, t) e^{iθ}) / (1 − α(η, t) e^{iθ})]    (28)

where the function α(η, t) is related to w(η, t) = x(η, t) + iy(η, t) as

α(η, t) = (1 − w(η, t)) / (1 + w(η, t))    (29)

Noticing that in the new variable θ_W our system belongs to the class (27), with Ω(η, t) = a + c + Σ_Z s_WZ E_r^Z + I_ext + η − u_W and H(η, t) = (−b + Σ_Z s_WZ) + i(a − c − Σ_Z s_WZ E_r^Z − I_ext − η + u_W), we infer that it converges to:

ρ(θ|η, t) = (1/π) Re[(1 + x tan²(θ/2) + y tan(θ/2) + i(y tan²(θ/2) + (1 − x) tan(θ/2))) / (tan²(θ/2) + x − y tan(θ/2) + i(y − (1 − x) tan(θ/2)))]    (30)

Therefore, in the original variable V^W, our system converges to:

ρ(V^W|η, t) = (1/π) Re[(1 + x(V^W)² + yV^W + i(y(V^W)² + (1 − x)V^W)) / ((V^W)² + x − yV^W + i(y − (1 − x)V^W))]    (31)

After some algebraic manipulation, we recover the LA (3.9):

ρ(V^W|η, t) = (1/π) x(η, t) / ((V^W − y(η, t))² + x(η, t)²)    (32)

The LA solves the continuity equation exactly, making the system amenable to theoretical analysis. In section 3.3.4, we showed that these solutions agree with the numerical simulations of the original QIF neurons, further validating the application of the LA.

4 | The Entorhinal Cortex as a theta rhythm generator

4.1 Introduction

Local field potentials in the entorhinal cortex (EC) show theta oscillations under different conditions. A regular, prominent theta rhythm is observed in the EC during voluntary movements and REM sleep [START_REF] Alonso | Neuronal sources of theta rhythm in the entorhinal cortex of the rat. i. laminar distribution of theta field potentials[END_REF], as well as under anesthesia [START_REF] Mitchell | Generation of theta rhythm in medial entorhinal cortex of freely moving rats[END_REF]. Early work suggested that the medial septum may be enforcing the theta rhythm into which the EC network is entrained [START_REF] Gogolák | The firing pattern of septal neurons and the form of the hippocampal theta wave[END_REF][START_REF] Stewart | Do septal neurons pace the hippocampal theta rhythm?[END_REF]. This view is challenged by experimental work showing that lesions in the medial septum reduce but do not terminate theta rhythms in the hippocampal formation [START_REF] Colgin | Mechanisms and functions of theta rhythms[END_REF][START_REF] Winson | Loss of hippocampal theta rhythm results in spatial memory deficit in rat[END_REF] and that rhythmic activity seems to originate in the medial entorhinal cortex [START_REF] Mitchell | Generation of theta rhythm in medial entorhinal cortex of freely moving rats[END_REF] (Gu and Yakel, 2017).
The EC is believed to provide the major excitatory rhythmic drive to hippocampal theta oscillations [START_REF] Buzsáki | Theta oscillations in the hippocampus[END_REF][START_REF] Kamondi | Theta oscillations in somata and dendrites of hippocampal pyramidal cells in vivo: activity-dependent phase-precession of action potentials[END_REF][START_REF] Brankack | Current source density analysis of the hippocampal theta rhythm: associated sustained potentials and candidate synaptic generators[END_REF]. Therefore, a thorough knowledge of the intrinsic circuit properties of the EC is essential to understanding the origins of hippocampal theta and how the entorhinal structure modulates the rhythm's power and frequency. The EC is organized into layers that can be characterized by different input-output connectivity and constituent neuron types. The deep layers (V/VI) are made up of a heterogeneous population of excitatory pyramidal cells selectively targeted by outputs from the hippocampal CA1 region, and they project locally to the deep and superficial (II/III) layers of the EC. The superficial layers comprise fast-spiking PV+ interneurons, excitatory pyramidal cells, and stellate cells, with the stellate cells making up the largest group of principal cells [START_REF] Witter | Architecture of the entorhinal cortex a review of entorhinal anatomy in rodents with some comparative notes[END_REF]. So far, there is no anatomical evidence that stellate cells form synaptic connections among themselves. In vitro studies revealed that stellate cells form strong reciprocal connections with PV+ interneurons and sparse connections with pyramidal cells of the superficial layers [START_REF] Couey | Recurrent inhibitory circuitry as a mechanism for grid formation[END_REF][START_REF] Pastoll | Feedback inhibition enables theta-nested gamma oscillations and grid firing fields[END_REF]. Despite conflicting reports concerning stellate cells' projections to the deep layers, it is well established that they constitute the primary excitatory input to the hippocampus [START_REF] Surmeli | Molecularly defined circuitry reveals input-output segregation in deep layers of the medial entorhinal cortex[END_REF][START_REF] Ohara | Intrinsic projections of layer vb neurons to layers va, iii, and ii in the lateral and medial entorhinal cortex of the rat[END_REF][START_REF] Tamamaki | Projection of the entorhinal layer ii neurons in the rat as revealed by intracellular pressure-injection of neurobiotin[END_REF][START_REF] Klink | Morphological characteristics of layer ii projection neurons in the rat medial entorhinal cortex[END_REF] (Buckmaster et al., 2004b) [START_REF] Canto | What does the anatomical organization of the entorhinal cortex tell us?[END_REF]. Principal stellate cells have long been considered a key contributor to the entorhinal theta rhythm. They are endowed with slow hyperpolarizing currents that give them the ability to generate persistent rhythmic subthreshold oscillatory activity at theta frequency when depolarized [START_REF] Alonso | Differential electroresponsiveness of stellate and pyramidal-like cells of medial entorhinal cortex layer ii[END_REF][START_REF] Rowland | Functional properties of stellate cells in medial entorhinal cortex layer ii[END_REF]. This chapter proposes a circuit model of the EC to study the intrinsic properties that allow external excitatory inputs to drive the system into an oscillatory regime.
Firstly, we use Izhikevich's QIF neuron model with an adaptation variable to describe the entorhinal pyramidal cells, stellate cells, and fast-spiking interneurons, and apply the exact reduction formalism presented in chapter 3 to obtain a macroscopic description of the entorhinal network. Then, to study how synchronized theta oscillations can arise in such a network, we infer the space of connectivity parameters that generate a coherent theta rhythm. Our results suggest that the EC may utilize distinct subnetworks to generate low-frequency theta oscillations (type 2) and high-frequency theta oscillations (type 1).

Methods

Network of QIF neurons

In this study, we use a minimal spiking neural network model to represent the entorhinal cortex region. We consider a population of regular-spiking pyramidal cells (E), as found in the deep layers of the EC, and a population of stellate cells (S) and fast-spiking interneurons (I), as observed in the superficial layers. Note that although pyramidal cells are also found in the superficial layers of the EC, we do not consider this population in our model since, in contrast to stellate cells, it is not clear whether they play a role in the generation of the theta rhythm. In addition, they only form sparse connections with stellate cells [START_REF] Witter | Architecture of the entorhinal cortex a review of entorhinal anatomy in rodents with some comparative notes[END_REF][START_REF] Couey | Recurrent inhibitory circuitry as a mechanism for grid formation[END_REF][START_REF] Pastoll | Feedback inhibition enables theta-nested gamma oscillations and grid firing fields[END_REF]. For similar reasons, we do not take into account the activity of other types of interneurons found in the superficial layers of the EC, such as CCK-expressing interneurons, since they are less abundant than PV+ cells and do not form connections with stellate cells [START_REF] Witter | Architecture of the entorhinal cortex a review of entorhinal anatomy in rodents with some comparative notes[END_REF]. Each cell i of each population W is described by the modified version of the Izhikevich QIF neuron model:

C_m dV_i^W/dt = a(V_i^W)² + bV_i^W + c − u_W + η_i + I_ext + I_syn,i    (4.1)
du_W/dt = α(β(⟨V⟩_W − V_r) − u_W) + u_jump Σ_{k=1}^{N_W} Σ_f δ(t − t_f^k)    (4.2)

where V_i is the membrane potential of neuron i, and u_W the slow recovery variable of population W. The parameters C_m, a, b, c, V_r, α, β and u_jump determine the dynamics of the neuron (see chapter 3, section 3.2, for a more detailed explanation). The function δ is the Dirac mass measure and t_f^k is the firing time of neuron k. The parameter η_i represents a background current randomly drawn from a Lorentzian distribution that accounts for the network's heterogeneity, I_ext is an external current acting on all the neurons of the population, and I_syn,i is the total synaptic current acting on neuron i. The parameters C_m, a, b, c, V_r, α, β and u_jump are chosen so as to reproduce the electrophysiological profiles of the three neuron types: stellate cells with intrinsic subthreshold oscillations (S), class 1 pyramidal cells (E), and fast-spiking PV+ interneurons (I). All the parameters used are described in Table 4.1. To our knowledge, there are no anatomical studies that determine precisely the relative size of each population of neurons in the EC.
Therefore, we consider the three populations (S, I and E) to have the same size, similar to what is done in [START_REF] Neru | Theta oscillations gate the transmission of reliable sequences in the medial entorhinal cortex[END_REF]. We assume that each population is made of 3000 neurons. We find that this is enough to get a reasonable estimate of the populations' firing rates.

Table 4.1: Parameters of the S-cells, I-cells and E-cells, taken from Izhikevich (2007a) and Izhikevich (2007c), respectively.

Synaptic model

In vitro studies show that pyramidal cells in the deep layers of the EC receive external excitatory inputs from the hippocampal CA1 region and project to interneurons and stellate cells in the superficial layers [START_REF] Witter | Architecture of the entorhinal cortex a review of entorhinal anatomy in rodents with some comparative notes[END_REF][START_REF] Alonso | Differential electroresponsiveness of stellate and pyramidal-like cells of medial entorhinal cortex layer ii[END_REF][START_REF] Hamam | Morphological and electrophysiological characteristics of layer v neurons of the rat medial entorhinal cortex[END_REF]. Some studies also suggest the existence of reciprocal connections from stellate cells to pyramidal cells in the deep layers [START_REF] Surmeli | Molecularly defined circuitry reveals input-output segregation in deep layers of the medial entorhinal cortex[END_REF]. Fast-spiking interneurons, in turn, form strong bidirectional connections with stellate cells [START_REF] Witter | Architecture of the entorhinal cortex a review of entorhinal anatomy in rodents with some comparative notes[END_REF][START_REF] Alonso | Differential electroresponsiveness of stellate and pyramidal-like cells of medial entorhinal cortex layer ii[END_REF][START_REF] Hamam | Morphological and electrophysiological characteristics of layer v neurons of the rat medial entorhinal cortex[END_REF]. That being said, we consider the S-E-I network connected as schematically shown in Figure 5.1, where all populations are recurrently connected through gap junctions, except for the S-cells, which do not form monosynaptic connections among themselves [START_REF] Witter | Architecture of the entorhinal cortex a review of entorhinal anatomy in rodents with some comparative notes[END_REF][START_REF] Winterer | Excitatory microcircuits within superficial layers of the medial entorhinal cortex[END_REF].

The synaptic currents were modeled as follows:

I_syn,i = Σ_Z s_WZ (E_r^Z − V_i^W)    (4.3)

where E_r^Z is the reversal potential of the synapse and s_WZ the synaptic conductance. The reversal potential depends on the nature of the synapse: if the synapse originates from an inhibitory cell population Z, E_r^Z = −80 mV; if it originates from an excitatory population, E_r^Z = 0 mV. The synaptic conductance s_WZ is given by:

ds_WZ/dt = −s_WZ/τ_s + (p_WZ/N_Z) Σ_{k=1}^{N_Z} Σ_f δ(t − t_f^k)    (4.4)

where the parameter τ_s represents the synaptic time constant, N_Z is the number of neurons of population Z, and p_WZ is the coupling strength of population Z onto population W.

Mean-field description of the EC

We take advantage of the exact mean-field reduction derived in chapter 3 to obtain a macroscopic description of the EC neural network. For simplicity, we consider instantaneous synapses (s_WZ = p_WZ r_Z).
Mean-field description of the EC

We take advantage of the exact mean-field reduction derived in chapter 3 to obtain a macroscopic description of the EC neural network. For simplicity, we consider instantaneous synapses (s_{WZ} = p_{WZ} r_Z). The following reduced system then describes our network.

For the S-cells:

C_m \frac{dr_S}{dt} = (b - p_{SI} r_I - p_{SE} r_E) r_S + 2 a r_S v_S + \frac{\Delta a}{\pi C_m} \quad (4.5a)

C_m \frac{dv_S}{dt} = -\frac{(\pi r_S C_m)^2}{a} + a v_S^2 + (b - p_{SI} r_I - p_{SE} r_E) v_S + c - 80 p_{SI} r_I - u_S + I_{ext} + \eta \quad (4.5b)

\frac{du_S}{dt} = \alpha(\beta(v_S - V_r) - u_S) + u_{jump} r_S \quad (4.5c)

For the I-cells:

C_m \frac{dr_I}{dt} = (b - p_{II} r_I - p_{IS} r_S - p_{IE} r_E) r_I + 2 a r_I v_I + \frac{\Delta a}{\pi C_m} \quad (4.6a)

C_m \frac{dv_I}{dt} = -\frac{(\pi r_I C_m)^2}{a} + a v_I^2 + (b - p_{II} r_I - p_{IS} r_S - p_{IE} r_E) v_I + c - 80 p_{II} r_I - u_I + I_{ext} + \eta \quad (4.6b)

\frac{du_I}{dt} = \alpha(\beta(v_I - V_r) - u_I) + u_{jump} r_I \quad (4.6c)

For the E-cells:

C_m \frac{dr_E}{dt} = (b - p_{EE} r_E - p_{ES} r_S - p_{EI} r_I) r_E + 2 a r_E v_E + \frac{\Delta a}{\pi C_m} \quad (4.7a)

C_m \frac{dv_E}{dt} = -\frac{(\pi r_E C_m)^2}{a} + a v_E^2 + (b - p_{EE} r_E - p_{ES} r_S - p_{EI} r_I) v_E + c - 80 p_{EI} r_I - u_E + I_{ext} + \eta \quad (4.7b)

\frac{du_E}{dt} = \alpha(\beta(v_E - V_r) - u_E) + u_{jump} r_E \quad (4.7c)

where Δ is the half-width of the Lorentzian distribution of the background currents η_i, and η its center (see chapter 3).

Figure 4.1 illustrates a comparison of the dynamics of the full network with the low-dimensional reduced system. It shows the time evolution of the external stimulus acting on all three populations (Figure 4.1 (a)), the spiking activity obtained from simulations of the full network, and the firing rates of the three populations given by the reduced description ((4.5a), (4.6a) and (4.7a) for the S, I and E-cells, respectively) and calculated from the full network simulations (r_W = \frac{1}{N_W} \sum_{k=1}^{N_W} \sum_f \delta(t - t_f^k)). The reduced description captures the shape of the firing activity of the full network. This confirms that we can safely employ the reduced mean-field model to interpret the phenomena observed in the spiking network and to obtain theoretical predictions for its dynamics.

Bayesian inference algorithm for model parameter identification

In Bayesian inference, one can infer the parameters of interest θ from observed data x_0 given a model of their statistical relationship. In other words, given an estimate of the parameter distribution, which we call the prior, and a likelihood (or sampling function) p(x|θ), one can compute a posterior distribution over the parameters, which is high for parameters θ consistent with the data x_0 and approaches zero as discrepancies increase. However, the likelihood function of most mechanistic models is intractable. In that case, one can use likelihood-free inference methods, such as Sequential Neural Posterior Estimation (SNPE), that compute posterior beliefs over parameters using simulations from the model rather than likelihood evaluations [START_REF] Leuckmann | Flexible statistical inference for mechanistic models of neural dynamics[END_REF].

We use a simulation-based inference algorithm that implements SNPE [START_REF] Gonçalves | Training deep neural density estimators to identify mechanistic models of neural dynamics[END_REF] to infer the connectivity parameters of the S-E-I network that enable the generation of theta rhythm. SNPE is a machine learning tool that identifies all the parameters of a mechanistic model that reproduce the target data (or selected data features). Given the data (or selected data features) x_0, a mechanistic model with parameters θ, and a prior distribution of the parameters p(θ), it returns a posterior distribution p(θ|x_0). Contrary to other likelihood-free methods, SNPE uses all simulations to train an artificial network to identify all admissible parameters instead of filtering out simulations, i.e., it finds not only the best but all parameters consistent with the data.
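In practice, a single round of this workflow with the sbi package looks as sketched below. The simulator here is a toy surrogate, labeled as such in the code, standing in for an integration of the reduced system (4.5)-(4.7) that returns the frequency and amplitude features described in the next paragraphs; the parameter ordering and the surrogate's functional form are assumptions made for illustration only.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

def simulate_theta_features(theta: torch.Tensor) -> torch.Tensor:
    # Placeholder for the real simulator: integrate the reduced system
    # (4.5)-(4.7) for 3000 msec and return (frequency, amplitude) of the
    # firing rates. A toy surrogate keeps the sketch runnable end to end.
    # Assumed ordering: theta = (p_SI, p_SE, p_IE, p_II, p_EE).
    freq = 4.0 + 8.0 * torch.sigmoid((theta[1] - 100.0) / 30.0) + 0.1 * torch.randn(1)
    amp = torch.sigmoid((theta[4] - 50.0) / 20.0) + 0.05 * torch.randn(1)
    return torch.cat([freq.reshape(1), amp.reshape(1)])

# Uniform priors over the symmetric connectivity parameters, as in the text.
prior = BoxUniform(low=torch.zeros(5),
                   high=torch.tensor([100.0, 190.0, 100.0, 100.0, 100.0]))

theta = prior.sample((500,))                        # one round of 500 simulations
x = torch.stack([simulate_theta_features(t) for t in theta])

inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

x_o = torch.tensor([8.0, 1.0])                      # target: ~8 Hz, sustained amplitude
samples = posterior.sample((10_000,), x=x_o)        # posterior over connectivity
```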
The SNPE algorithm runs N simulations for a range of parameter values and trains an artificial neural network to map any simulation result onto a range of possible parameters. Parameter samples θ_n are drawn from the prior p(θ). A simulated response of the mechanistic model is obtained for each parameter sample, and a summary statistic x_n is computed. This results in N pairs of parameters and summary statistics (θ_n, x_n). The network is then trained to find a mapping from summary statistics to parameter distributions by minimizing the loss function:

L = -\log q_\phi(\theta_n | x_n) \quad (4.8)

where q_φ(θ_n|x_n) is the weighted posterior distribution (the network weights φ are adjusted based on the simulation results and inference settings). If a single round of inference is not sufficient for the results to converge, SNPE can be run in multiple rounds, in which samples are drawn from the distribution q_φ(θ_n|x_n) obtained at the end of the previous round instead of from the prior distribution p(θ). After the last round, q_φ(θ_n|x_n) is returned as the inferred posterior distribution over the parameters θ [START_REF] Gonçalves | Training deep neural density estimators to identify mechanistic models of neural dynamics[END_REF].

We use an SNPE framework to identify the connectivity parameters of the network (p_IS, p_SI, p_EI, p_IE, p_SE, p_ES, p_EE, p_II) for which each population of the S-E-I network is synchronized at theta frequency. We simulate the network's firing rates when an excitatory external current I_ext acts on the E-cells. This follows anatomical studies of the EC showing that excitatory hippocampal inputs target pyramidal cells in the deep layers of the EC [START_REF] Witter | Architecture of the entorhinal cortex a review of entorhinal anatomy in rodents with some comparative notes[END_REF]. We define the frequency and amplitude of the firing rates of the three populations as the model output used for inference. More specifically, we consider firing rates with a frequency between 4 and 12 Hz and constant amplitude throughout 3000 msec as the desired simulated feature. We find that a simulation period of 3000 msec is long enough to accurately calculate the frequency of oscillations of the firing rates and detect a potential decrease in amplitude, while not being too computationally expensive. We based our algorithm on the Python code available at https://www.mackelab.org/sbi/. Inference is calculated after one round of 500 simulations. We chose uniform distributions for all priors. These simulation specifications proved to be enough for the system to converge to a solution.

Results

Estimating the EC network connectivity

Despite the progress made in the last years in mapping the entorhinal connectivity, it is still not clear which synaptic connections in the EC enable this structure to generate sustained theta oscillations. We use a Bayesian inference machine learning tool, SNPE, to derive the posterior distribution of the connectivity parameters (p_SI, p_IS, p_ES, p_SE, p_IE, p_EI, p_II, p_EE) for which the EC network model produces sustained rhythmic activity with theta frequency. We look at the network's response when an excitatory external current I_ext acts on the E-population. More precisely, we use a frequency of the firing rates r_S, r_E and r_I between 4 and 12 Hz as the target feature that the model needs to reproduce.
This follows anatomical studies showing that hippocampal CA1 pyramidal cells target pyramidal cells in the deep layers of the EC [START_REF] Witter | Architecture of the entorhinal cortex a review of entorhinal anatomy in rodents with some comparative notes[END_REF], and experimental results showing that the EC is not an independent generator and needs excitatory hippocampal inputs to generate theta oscillations (Gu and Yakel, 2017). For simplicity, we consider symmetric connections between the populations, i.e. p_WZ = p_ZW. As a first approach, we consider that this simplification can still give us an estimate of how the activity of each population constrains the generation of oscillations. Moreover, by considering symmetric connections we reduce the number of parameters to explore, which increases the efficiency and speed of the inference algorithm. Regarding the prior distribution of the connectivity parameters, we consider the uniform distributions p_SI, p_IS ∈ U(0,100), p_SE, p_ES ∈ U(0,190), p_IE, p_EI ∈ U(0,100), p_II ∈ U(0,100), p_EE ∈ U(0,100). Despite the lack of data to support our choice of priors, we find that the system converges to a solution in the chosen prior intervals.

Initially, we infer both the connectivity parameters p_WZ and the external current I_ext to get an estimate of the magnitude that I_ext needs to have for the network to oscillate. From the resulting posteriors, we estimate that with an external current I_ext = 100 pA there is a connectivity configuration for which the system will generate theta (see Figure S1). We then adopt this value for the current and re-optimize the posteriors of the connectivity parameters. By doing so, we get a more accurate estimate of the parameters p_WZ, since the system needs to learn how to represent theta oscillations by sampling from a smaller number of parameters. The posterior distributions of the connectivity parameters are shown in Figure 4.3 (A). We select connectivity parameter values given by the mean of the posteriors, indicated by the violet lines/points in Figure 4.3 (A) (p_IS, p_SI = 43.9714, p_SE, p_ES = 160.2503, p_IE, p_EI = 34.4222, p_II = 55.4267, p_EE = 84.4322). By doing so, we obtain a set of parameters sampled from the high probability region. As we can see in Figure 4.3 (B) and (C), these parameters lead to simulations with the selected features. In other words, the population firing rates obtained show sustained oscillations with a frequency in the theta range (6.3 Hz).

We next study the behavior of the network when its parameters are in the high probability posterior region. For that, we adopt the mean values of the parameter posterior distributions defined before and examine the system's dynamics when the external current changes. Namely, we plot the bifurcation diagram of the population firing rates, r_E, r_S and r_I, with the external current I_ext as a bifurcation parameter (Figure 4.3 (D)). The three populations' firing rates exhibit two Hopf bifurcation points, at I_ext = 55.42 pA and I_ext = 129.7 pA. This indicates the system is in an oscillatory regime for 55.42 < I_ext < 129.7 pA.
Notably, for I_ext > 69 pA the frequency of the oscillations is in the theta range (Figure 4.3 (E)). We notice that the connectivity of the S-E subnetwork, p_SE, p_ES and p_EE, is more constrained (with standard deviations of 15 and 12, respectively) and has higher mean values (160.2503 and 84.4322, respectively) than the other parameters p_SI, p_IS, p_IE, p_EI and p_II (with mean values 43.9714, 34.4222 and 55.4267 and standard deviations 22, 18 and 24, respectively). In other words, the high posterior probability region is strongly constrained by high values of the S-E and E-E connections.

(Figure 4.3, caption excerpt: (C) power spectrum of the S, I and E firing rates calculated over 10 seconds, showing a maximum at 6.3 Hz for all populations; (D) bifurcation diagrams of the firing rates of the three neural populations, oscillatory for HB1 < I_ext < HB2, with HB1 = 55.42 pA and HB2 = 129.7 pA (HB: Hopf bifurcation); (E) frequency of oscillations in the stable limit cycle regime (HB1 < I_ext < HB2); for I_ext > 69 pA the system oscillates with a frequency between 4 and 8.5 Hz, within the theta range (grey area).)

These observations may indicate that the network relies more on the S-E subnetwork than on the full S-E-I network to generate theta rhythm. To explore this hypothesis, we start by examining changes in the firing rates of the S and E-cell populations when we change the strength of the connections of these two populations with the I-cells, p_IE, p_EI and p_SI, p_IS.

(Figure 4.4, caption excerpt: (A) bifurcation diagram of the S and E firing rates with the E-I connectivity, p_IE, p_EI, as bifurcation parameter; the system is in an oscillatory regime for p_IE, p_EI < 83.26; bottom panel: frequency of the stable limit cycle as a function of the I-E connectivity, always in the theta range (grey area); (B) bifurcation diagram of the S and E firing rates with the S-I connectivity, p_SI, p_IS, as bifurcation parameter; the system is in an oscillatory regime for p_SI, p_IS < 133.6; bottom panel: frequency of the stable limit cycle as a function of the I-S connectivity, always in the theta range (grey area); remaining parameters are taken from the high probability region (from Figure 4.3 (A), in purple); HB: Hopf bifurcation.)

When plotting bifurcation diagrams using p_IE, p_EI and p_SI, p_IS as bifurcation parameters, we see that oscillations with a theta frequency persist when these parameters are zero and cease to exist when we increase them beyond the Hopf bifurcation points at p_IE, p_EI = 83.26 and p_SI, p_IS = 133.6, respectively (see Figure 4.4). It is important to note, however, that the E-I connection, although not necessary to generate oscillations, can modulate their frequency: increasing p_IE, p_EI decreases the frequency of oscillations (Figure 4.4 (C)). The S-I connection, on the other hand, does not significantly modulate the frequency of the network (Figure 4.4 (D)). These results indicate that removing the S-I or E-I connections from the circuit does not impair its ability to generate theta rhythm.

S-E circuit as a theta rhythm generator

To further explore the role of the I-cells and the S-E subnetwork in the generation of theta oscillations, we test the ability of the S-E circuit to act as a theta rhythm generator.
Removing the I population while keeping the same values for the p_SE, p_ES and p_EE connectivity parameters (p_SE, p_ES = 160.2503 and p_EE = 84.4322) did not prevent the generation of theta in the S-E circuit (Figure 4.5). Furthermore, it did not significantly change the bifurcation diagrams of the S and E population firing rates. For 44.77 < I_ext < 110.3 pA, the system is in an oscillatory regime with approximately the same amplitude as the one generated by the S-E-I network (Figure 4.5 (C)). Similarly, the stable limit cycles generated in the S-E network are primarily in the theta range (Figure 4.5 (D)). This is in agreement with previous studies showing that model stellate cells synchronize with fast excitatory synapses [START_REF] Acker | Synchronization of strongly coupled excitatory neurons: relating network behavior to biophysics[END_REF].

(Figure 4.5 (D), caption excerpt: frequency of oscillations of the network in the stable limit cycle regime (HB1 < I_ext < HB2); for I_ext > 58 pA the system oscillates with a frequency in the theta range (grey area).)

Next, we focused on the potential functional role of the S-E network connectivity parameters in the network dynamics. From Figure 4.6 we see that the recurrent connections in the E population are not necessary to obtain sustained oscillations in our population model: when p_EE = 0, the system is still in an oscillatory regime for 172 < p_SE, p_ES < 247. However, increasing p_EE inside the region of the oscillatory regime (grey area in Figure 4.6, left panel) increases the power of the firing rates of both S and E. Regarding the frequency of the oscillations, even though it seems that overall increasing p_EE or p_SE, p_ES increases the frequency of oscillations, this effect is small.

Entorhinal mechanisms of type 1 and type 2 theta generation

The theta rhythm can be classified into type 1 or type 2, according to its frequency and behavioral correlates. Type 1 theta has a frequency of 8-12 Hz and is typically observed during exploratory behavior and REM sleep; type 2 has a lower frequency of 4-7 Hz and appears during states of alert immobility and under anesthesia. Experimental studies suggest that the EC may modulate the two types of theta differently; namely, while lesions to the EC can abolish type 1 theta, they disrupt behavioral and sensory correlates of both type 1 and type 2 [START_REF] Montoya | The effects of entorhinal cortex lesions on type 1 and type 2 theta[END_REF][START_REF] Buzsáki | Theta oscillations in the hippocampus[END_REF]. We hypothesize that the EC uses different subnetworks to generate type 1 and type 2 theta oscillations. We suggest that while the S-E network can efficiently generate type 1 oscillations, under certain conditions the full S-E-I network may be needed to generate type 2 theta oscillations, in particular when I_ext is very high. The main observations leading to this hypothesis were that the frequency of oscillations increases rapidly with I_ext in the S-E circuit (Figure 4.5 (D)) while increasing slowly in the S-E-I network (Figure 4.3 (E)), and that increasing the strength of the E-I connection decreases the network's frequency of oscillations, as already mentioned (see Figure 4.4 (A)).
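In practice, labeling a simulated firing rate as type 1 or type 2 amounts to locating the dominant peak of its power spectrum. A minimal, self-contained sketch of this classification is given below; the function name and the sampling step are illustrative assumptions, not part of the original analysis code.

```python
import numpy as np

def classify_theta(r: np.ndarray, dt_ms: float = 0.1) -> str:
    """Label a firing-rate trace by the dominant frequency of its power spectrum."""
    r = r - r.mean()                                   # remove the DC component
    freqs = np.fft.rfftfreq(r.size, d=dt_ms / 1000.0)  # frequencies in Hz
    power = np.abs(np.fft.rfft(r)) ** 2
    f_peak = freqs[power.argmax()]
    if 8.0 <= f_peak <= 12.0:
        return "type 1 theta (%.1f Hz)" % f_peak
    if 4.0 <= f_peak <= 7.0:
        return "type 2 theta (%.1f Hz)" % f_peak
    return "non-theta (%.1f Hz)" % f_peak

# Example: a synthetic 6.3 Hz trace over 3000 msec is classified as type 2.
t = np.arange(0.0, 3.0, 1e-4)                          # seconds, dt = 0.1 msec
print(classify_theta(np.sin(2 * np.pi * 6.3 * t), dt_ms=0.1))
```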
To further explore this hypothesis, we repeat the method used in section 4.3.1 to infer the posterior distribution of the S-E-I network connectivity that generates sustained oscillations with a frequency of 8-12 Hz or 4-7 Hz (type 1 and type 2 rhythm, respectively) when it receives an external input I_ext = 100 pA. We consider the prior distributions p_SI, p_IS ∈ U(0,100), p_SE, p_ES ∈ U(0,190), p_IE, p_EI ∈ U(0,100), p_II ∈ U(0,190), p_EE ∈ U(0,100). For these prior intervals, our model converges to a solution (Figure 4.7). To generate low-frequency theta rhythm, the system makes more use of the E-I connection than to generate high-frequency theta, i.e. the mean values of the posterior distributions of p_EI, p_IE and p_II are higher than in the high-frequency case. The opposite is true for the recurrent connections p_EE. The posteriors of p_SE, p_ES and p_SI, p_IS are identical for both type 1 and type 2 theta generation. This seems to indicate that the I-cells become more relevant in the generation of low-frequency theta rhythm. Note that the posterior parameter distributions represent the probability that parameters drawn from the distribution reproduce the target feature, here high-frequency or low-frequency theta. That being said, it does not mean that the S-E subnetwork cannot reproduce low-frequency theta oscillations. In fact, the S-E subnetwork considered in section 4.3.2 reproduces theta oscillations in the low-frequency range (4-7 Hz).

For an S-E network receiving a current I_ext = 150 pA, the SNPE algorithm could not find a connectivity configuration capable of generating type 2 oscillations, while for an S-E-I network the system converged to a plausible solution (see Figure 4.8) if we incorporate a slow connection between the S and I-cell populations (with τ_s = 100 msec).

These results indicate that there are different entorhinal network configurations for which an external current (originating, for example, in the hippocampal CA1 region) gives rise to type 2 theta oscillations. For low values of external current, both an S-E and an S-E-I network with instantaneous synapses can generate type 2 oscillations with identical power; for high values of external current, only an S-E-I network with a slow S-to-I synapse can reproduce type 2 theta. Also, a strong external input in an S-E-I network produces type 2 theta with the highest power (see Figure 4.9).

Next, we explore the different subnetworks capable of generating type 1 theta oscillations and under which conditions. We found that both the S-E and S-E-I subnetworks, under different conditions (with or without slow synapses, and with low or high external currents), generate type 1 theta with different frequencies and power. The inference converges to a solution for an S-E network with I_ext = 100 pA or 150 pA, as well as for an S-E-I network with I_ext = 150 pA (with or without slow S-to-I synapses) and with I_ext = 100 pA (see Figures 4.7 and 4.10). From Figure 4.11, we see that out of all the network configurations considered, an S-E-I network with I_ext = 150 pA produces type 1 oscillations with the highest power, while the S-E subnetwork with I_ext = 150 pA produces oscillations with the highest frequency. On the other hand, the S-E-I network with I_ext = 100 pA produces type 1 oscillations with the lowest power and frequency.
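The bifurcation and frequency diagrams reported in this section rest on direct numerical integration of the reduced system (4.5)-(4.7) while sweeping I_ext. A minimal sketch of such a sweep is given below; the connectivity values are the posterior means quoted in section 4.3.1, while all other numerical values are placeholders rather than the actual entries of Table 4.1, so the output is illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reduced S-E-I system (4.5)-(4.7). Connectivity: posterior means of section
# 4.3.1; the remaining values are placeholders (not those of Table 4.1).
Cm, a, b, c = 100.0, 0.7, -40.0, 0.0
alpha, beta, u_jump, Vr, Delta, eta = 0.03, 5.0, 50.0, -60.0, 5.0, 10.0
p_SI = p_IS = 43.9714; p_SE = p_ES = 160.2503
p_IE = p_EI = 34.4222; p_II = 55.4267; p_EE = 84.4322

def rhs(t, y, I_E):
    rS, vS, uS, rI, vI, uI, rE, vE, uE = y

    def block(r, v, u, coupling, inh, I_ext):
        # One population: firing rate, mean voltage, adaptation (eqs a-c).
        dr = ((b - coupling) * r + 2.0 * a * r * v + Delta * a / (np.pi * Cm)) / Cm
        dv = (-(np.pi * r * Cm) ** 2 / a + a * v ** 2 + (b - coupling) * v + c
              - 80.0 * inh - u + I_ext + eta) / Cm
        du = alpha * (beta * (v - Vr) - u) + u_jump * r
        return dr, dv, du

    # The external drive acts on the E-cells only, as in the text.
    dS = block(rS, vS, uS, p_SI * rI + p_SE * rE, p_SI * rI, 0.0)
    dI = block(rI, vI, uI, p_II * rI + p_IS * rS + p_IE * rE, p_II * rI, 0.0)
    dE = block(rE, vE, uE, p_EE * rE + p_ES * rS + p_EI * rI, p_EI * rI, I_E)
    return [*dS, *dI, *dE]

# Sweep the external drive and record the oscillation amplitude of r_E.
for I_E in np.linspace(40.0, 140.0, 6):
    sol = solve_ivp(rhs, (0.0, 3000.0), [0.01, -60.0, 0.0] * 3,
                    args=(I_E,), max_step=0.1)
    rE = sol.y[6][sol.t > 1000.0]          # discard the initial transient
    print(f"I_ext = {I_E:6.1f} pA  ->  r_E amplitude = {rE.max() - rE.min():.4f}")
```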
Discussion

Despite it being well established that the EC is a necessary structure for the generation of theta oscillations, its role in the induction and maintenance of theta is poorly understood. Previous work suggests that the EC simply responds to external theta inputs from the medial septum, since inactivation of the medial septum disrupts the receptive fields of grid cells in the EC [START_REF] Koenig | The spatial periodicity of grid cells is not sustained during reduced theta oscillations[END_REF]. Recent studies challenge this view by suggesting that the EC circuit may intrinsically generate theta rhythm as a response to excitatory hippocampal inputs (Gu and Yakel, 2017). However, it is still unclear what intrinsic neuronal properties and network mechanisms enable the entorhinal circuit to generate theta rhythm.

In this chapter, we used a computational modeling framework to investigate the intrinsic properties of the entorhinal circuitry that give rise to theta oscillations. We used a mean-field description of the entorhinal network, composed of pyramidal cells, stellate cells, and fast-spiking interneurons, to study the connectivity requirements for coherent theta oscillations to arise. Our results suggest that the EC may utilize distinct subnetworks under different conditions to generate type 1 and type 2 theta rhythm. If the entorhinal network receives a small excitatory current (for example, I_ext = 100 pA), both the S-E and S-E-I subnetworks can generate type 2 theta oscillations with identical power and frequency. However, if the EC receives a strong external excitatory input (for example, I_ext = 150 pA), the S-E subnetwork is not capable of generating type 2 oscillations; the system requires synaptic connections with the I-cells. Interestingly, it requires the existence of a slow excitatory synapse between the S and I cells (with τ_s = 100 msec). Concerning the generation of type 1 oscillations (8-12 Hz), the EC circuit can generate oscillations more robustly; that is, both an S-E and an S-E-I network (with or without slow synapses) produce type 1 theta rhythm with similar powers (between 100 and 150 mV²/Hz) for different values of I_ext. These observations suggest that blocking PV+ activity should impact type 1 and type 2 theta oscillations differently.

In all the subnetworks and theta subtypes considered, an excitatory drive acting on the E-cell population is necessary for the system to enter an oscillatory regime. This agrees with experimental results showing that the EC is not an independent rhythm generator and needs hippocampal excitatory inputs to generate theta. Interestingly, our modeling work suggests that fast synapses between S and E cells are crucial to achieving synchrony, defined here as sustained oscillations of the population firing rates, given that the high probability posterior of p_SE, p_ES has a high value and was highly constrained for the generation of both type 1 and type 2 theta oscillations. Synaptic connections between the I-cells and the other populations (S and E) are likely also playing a role in synchronizing the activity of the full network, given that abolishing the I-cell population from our network can change the power of the oscillations. In some cases, it can even eradicate theta oscillations: for example, an EC network without I-cells cannot generate type 2 oscillations when the excitatory input acting on the network is high.
Also, suppressing the I population considerably decreases the power of type 1 theta. However, given the different effects that eliminating the I-cell population can have on the generation of the different types of theta, it is not easy to assess what role the I-cells play in the synchronization of the network. Moreover, both the p_SI, p_IS and p_EI, p_IE parameters have poorly constrained posterior distributions. Therefore, it is not possible to conclude from our results how the I-connections modulate synchrony in the network.

Several studies indicate that NMDAR activation is crucial for the generation of hippocampal theta oscillations [START_REF] Buzsáki | Theta oscillations in the hippocampus[END_REF][START_REF] Leung | Apv, an n-methyl-d-aspartate receptor antagonist, blocks the hippocampal theta rhythm in behaving rats[END_REF][START_REF] Leung | Glutamatergic synaptic transmission participates in generating the hippocampal eeg[END_REF][START_REF] Korotkova | Nmda receptor ablation on parvalbumin-positive interneurons impairs hippocampal synchrony, spatial representations, and working memory[END_REF]. Some even suggest that the EC uses NMDAR-dependent mechanisms to generate theta rhythm (Gu et al., 2017). According to our modeling work, adding a slow excitatory synapse between the S and I cell populations is, in some cases, crucial for the generation of theta oscillations, in particular for the generation of type 2 theta. Our results are consistent with experiments showing that theta oscillations are impaired in mice with selective NMDAR knockout in PV interneurons during anesthesia [START_REF] Korotkova | Nmda receptor ablation on parvalbumin-positive interneurons impairs hippocampal synchrony, spatial representations, and working memory[END_REF]. Adding a slow synapse between the other populations (S and E) did not change our results. However, a more accurate description of the NMDAR dynamics is necessary to judge the role of NMDARs at the S-E and E-E synapses; namely, one should include a voltage-dependent term in the description of the NMDA-mediated synapse (in our study, we generally consider a synapse with a decay time of 100 msec). Furthermore, NMDARs are known to present a high calcium permeability [START_REF] Burnashev | Fractional calcium currents through recombinant glur channels of the nmda, ampa and kainate receptor subtypes[END_REF]. Therefore, NMDARs can potentially regulate the neurons' excitability and enhance neurotransmitter release by triggering calcium-induced cascades. Taking that into account, it might be valuable to consider these effects of NMDAR activation to address their role in the modulation of the EC circuit and the generation of theta.

| The hippocampus as a theta rhythm resonator

Introduction

Hippocampal theta oscillations are a prominent 4-12 Hz rhythm observed in the hippocampus and associated structures. They have been linked to spatial and episodic memory, and their malfunction is strongly correlated with cognitive dysfunction such as memory deficits [START_REF] Colgin | Mechanisms and functions of theta rhythms[END_REF][START_REF] Colgin | Rhythms of the hippocampal network[END_REF][START_REF] Hinman | Neural mechanisms of navigation involving interactions of cortical and subcortical structures[END_REF]. Studies indicate that the hippocampus contains the necessary circuitry to generate and maintain theta oscillations without any extrinsic inputs.
Goutagny and colleagues measured a spontaneous theta rhythm in the CA1 region of an intact in vitro hippocampus preparation [START_REF] Goutagny | Self-generated theta oscillations in the hippocampus[END_REF]. Moreover, several modeling studies confirm that a CA1 microcircuit can produce oscillations with a theta frequency [START_REF] Ferguson | Combining theory, model and experiment to explain how intrinsic theta rhythms are generated in an in vitro whole hippocampus preparation without oscillatory inputs[END_REF][START_REF] Chatzikalymniou | Deciphering the contribution of oriens-lacunosum/moleculare (olm) cells to intrinsic θ rhythms using biophysical local field potential (lfp) models[END_REF][START_REF] Kopell | Gamma and theta rhythms in biophysical models of hippocampal circuits[END_REF][START_REF] Giovannini | The can-in network: A biologically inspired model for self-sustained theta oscillations and memory maintenance in the hippocampus[END_REF][START_REF] Chatzikalymniou | Linking minimal and detailed models of ca1 microcircuits reveals how theta rhythms emerge and how their frequencies are controlled[END_REF]. While some assume that OLM cells play a crucial role in the generation of theta rhythm [START_REF] Kopell | Gamma and theta rhythms in biophysical models of hippocampal circuits[END_REF][START_REF] White | Networks of interneurons with fast and slow γ-aminobutyric acid type a (gaba a ) kinetics provide substrate for mixed γ-θ rhythm[END_REF], others suggest that a microcircuit of CA1 fast-spiking and pyramidal cells is capable of robustly generating oscillations with a theta rhythm [START_REF] Giovannini | The can-in network: A biologically inspired model for self-sustained theta oscillations and memory maintenance in the hippocampus[END_REF][START_REF] Ferguson | Combining theory, model and experiment to explain how intrinsic theta rhythms are generated in an in vitro whole hippocampus preparation without oscillatory inputs[END_REF][START_REF] Chatzikalymniou | Linking minimal and detailed models of ca1 microcircuits reveals how theta rhythms emerge and how their frequencies are controlled[END_REF].

Besides the hippocampus, two other brain regions are known to be essential for the generation and maintenance of hippocampal theta rhythm in vivo: the medial septum (MS) and the entorhinal cortex (EC). Still, the origins of in vivo theta remain elusive, partly due to the difficulty of recording the activity of all three essential regions (hippocampus, MS, and EC) simultaneously. To address this question, Gu and Yakel established an in vitro tri-culture preparation of the septal-entorhinal-hippocampal circuit (Gu and Yakel, 2017). Their study indicates that theta originates in the EC and then propagates to the hippocampus, namely to the CA1 region, through the temporoammonic pathway. The generation of theta in the septal-entorhinal-hippocampal circuit was initially dependent on the co-activation of septal cholinergic inputs acting on OLM cells and of SC inputs. However, after performing co-paired activation several times, the hippocampal PYR-OLM and SC-PYR synapses were potentiated, and theta could be induced by SC stimulation alone. In light of these results, in this chapter we use a network model to examine CA1 hippocampal responses to theta oscillatory inputs from the EC when cholinergic co-paired activation is being performed and when only the SC is stimulated.
We use the mean-field framework presented in chapter 3 to build a network model of OLM cells, fast-spiking neurons, and pyramidal cells. We then assess the role of each neural population and of the connections they form in the modulation of the network's behavior. More specifically, we examine the connectivity configurations for which the network resonates to external rhythmic inputs with theta frequency under different conditions, that is, when the network receives SC glutamatergic inputs paired or not with cholinergic inputs.

We start by introducing the CA1 network model, composed of OLM cells (O), fast-spiking interneurons (I), and pyramidal cells (E). We then use a statistical inference algorithm [START_REF] Gonçalves | Training deep neural density estimators to identify mechanistic models of neural dynamics[END_REF] to derive the distribution of the network's connectivity parameters that permit the O-I-E system (subjected to glutamatergic inputs paired with or without cholinergic inputs) to resonate to theta inputs. Finally, we study how the potentiation of the hippocampal PYR-OLM and SC-PYR synapses that results from cholinergic pairing changes the hippocampal mechanisms of theta induction and expression.

Methods

CA1 network of QIF neurons

We use a minimal spiking neural network model to represent the hippocampal CA1 region. We consider a population of OLM cells (O), fast-spiking interneurons (I), and pyramidal cells (E). Each cell i of each population W is described by a modified version of the Izhikevich QIF neuron model:

C_m \frac{dV_i^W}{dt} = a (V_i^W)^2 + b V_i^W + c - u^W + \eta_i + I_{ext} + I_{syn,i} \quad (5.1)

\frac{du^W}{dt} = \alpha\left(\beta(\langle V \rangle^W - V_r) - u^W\right) + \frac{u_{jump}}{N_W} \sum_{k=1}^{N_W} \sum_f \delta(t - t_f^k) \quad (5.2)

where V_i^W is the membrane potential of neuron i, and u^W the slow recovery variable acting on population W. The parameters C_m, a, b, c, V_r, α, β and u_jump determine the dynamics of the neuron (see Chapter 3, section 3.2 for a more detailed explanation). The function δ is the Dirac mass measure and t_f^k is the f-th firing time of neuron k. The parameter η_i represents a background current randomly drawn from a Lorentzian distribution that accounts for the network's heterogeneity, I_ext is an external current acting on all the neurons of the population, and I_syn,i is the total synaptic current acting on neuron i.

The parameters C_m, a, b, c, V_r, α, β and u_jump that describe each neuron type (OLM, fast-spiking and pyramidal cells) were adapted from previous models of neurons with similar dynamics. Namely, the E-cells are described by Izhikevich's model of a non-bursting CA1 pyramidal neuron (Izhikevich, 2007b), and the I-cells by Izhikevich's fast-spiking interneuron model (Izhikevich, 2007a). Given that hippocampal OLM cells and EC stellate cells have similar electrophysiological profiles and synchronization properties [START_REF] Kopell | Gamma and theta rhythms in biophysical models of hippocampal circuits[END_REF], we use the S-cell model described in the previous chapter to describe the dynamics of the O-cells (see chapter 4, section 4.2). A summary of all the parameters can be found in Table 5.1.

Synaptic model

We consider bidirectional synaptic connections among all populations of our network (O, I, and E), as schematically shown in Figure 5.1.
We also consider self-connections among the neurons of each population, except for the O-cells, given that previous studies show that recurrent connections among OLM cells do not contribute to the production of theta [START_REF] Kopell | Gamma and theta rhythms in biophysical models of hippocampal circuits[END_REF]. The synaptic currents were modeled as follows:

I_{syn,i} = \sum_Z s_{WZ} (E_r^Z - V_i^W) \quad (5.3)

where E_r^Z is the reversal potential of the synapse, and s_{WZ} is the synaptic conductance. If the synaptic connections originate in a population of inhibitory cells, the reversal potential is -80 mV; if they originate in a population of excitatory cells, the reversal potential is 0 mV. Similar to what has been done in previous work, we do not consider slow NMDA and GABA_B synapses [START_REF] Kopell | Gamma and theta rhythms in biophysical models of hippocampal circuits[END_REF] (Aussel, 2018). Moreover, experimental studies showing that NMDARs in the hippocampus are not necessary for the generation of theta rhythm in an EC-hippocampal circuit support the decision not to consider slow synapses (Gu et al., 2017). Instead, we consider instantaneous synapses, i.e. whenever a pre-synaptic neuron in Z spikes, the conductance s_{WZ} is instantaneously increased:

s_{WZ} = \frac{p_{WZ}}{N_Z} \sum_{k=1}^{N_Z} \sum_f \delta(t - t_f^k) \quad (5.4)

where N_Z is the number of neurons of population Z, and p_WZ is the coupling strength of population Z on population W.

Network input

When studying the hippocampal responses to paired cholinergic and SC inputs, we do not explicitly model the time course of the cholinergic activation of the O-cells and instead consider an external current I_ext = 50 pA acting on the O-cells. Similarly, SC inputs are modeled as an external current I_ext = 50 pA acting on the I and E-cells. Even though cholinergic activation of O-cells typically results in excitatory currents of smaller amplitude than SC stimulation of I and E-cells, taking a smaller current I_ext acting on the O-cells did not significantly change our results (simulations not shown). As we have seen in chapter 2, cholinergic activation of OLM cells results in the inhibition of the I-cells. Therefore, during the simulation of the pairing protocol, we fix the synaptic connection from the O to the I population to a high value (p_IO = 70).

When studying the hippocampal responses during SC stimulation alone, the O-cells are not activated by any external current, while the E and I populations receive an external input I_ext = 50 pA.

Mean-field description of the CA1 network

Similar to what was done in the previous sections, we use the exact mean-field reduction derived in Chapter 3 to obtain a macroscopic description of the CA1 neural network.
The low-dimensional system reads as follows.

For the O-cells:

C_m \frac{dr_O}{dt} = (b - p_{OI} r_I - p_{OE} r_E) r_O + 2 a r_O v_O + \frac{\Delta a}{\pi C_m} \quad (5.5a)

C_m \frac{dv_O}{dt} = -\frac{(\pi r_O C_m)^2}{a} + a v_O^2 + (b - p_{OI} r_I - p_{OE} r_E) v_O + c - 80 p_{OI} r_I - u_O + I_{ext} + \eta \quad (5.5b)

\frac{du_O}{dt} = \alpha(\beta(v_O - V_r) - u_O) + u_{jump} r_O \quad (5.5c)

For the I-cells:

C_m \frac{dr_I}{dt} = (b - p_{II} r_I - p_{IO} r_O - p_{IE} r_E) r_I + 2 a r_I v_I + \frac{\Delta a}{\pi C_m} \quad (5.6a)

C_m \frac{dv_I}{dt} = -\frac{(\pi r_I C_m)^2}{a} + a v_I^2 + (b - p_{II} r_I - p_{IO} r_O - p_{IE} r_E) v_I + c - 80 p_{II} r_I - u_I + I_{ext} + \eta \quad (5.6b)

\frac{du_I}{dt} = \alpha(\beta(v_I - V_r) - u_I) + u_{jump} r_I \quad (5.6c)

For the E-cells:

C_m \frac{dr_E}{dt} = (b - p_{EE} r_E - p_{EO} r_O - p_{EI} r_I) r_E + 2 a_E r_E v_E + \frac{\Delta a_E}{\pi C_m} \quad (5.7a)

C_m \frac{dv_E}{dt} = -\frac{(\pi r_E C_m)^2}{a_E} + a_E v_E^2 + (b - p_{EE} r_E - p_{EO} r_O - p_{EI} r_I) v_E + c - 80 p_{EI} r_I - u_E + I_{ext} + \eta \quad (5.7b)

\frac{du_E}{dt} = \alpha(\beta(v_E - V_r) - u_E) + u_{jump} r_E \quad (5.7c)

Figure 5.1 illustrates a comparison of the dynamics of the full network with the low-dimensional reduced system. It shows the time evolution of the external stimulus acting on all three populations (Figure 5.1 (a)), the spiking activity obtained from simulations of the full network, and the firing rates of the three populations given by the reduced description ((5.5a), (5.6a) and (5.7a) for the O, I and E-cells, respectively) and calculated from the full network simulations (r_W = \frac{1}{N_W} \sum_{k=1}^{N_W} \sum_f \delta(t - t_f^k)). We assume that each population is made of 3000 neurons. Although the populations of OLM cells, fast-spiking interneurons, and pyramidal cells in CA1 do not have the same size [START_REF] Jinno | Stereological estimation of numerical densities of glutamatergic principal neurons in the mouse hippocampus[END_REF][START_REF] Ferguson | Combining theory, model and experiment to explain how intrinsic theta rhythms are generated in an in vitro whole hippocampus preparation without oscillatory inputs[END_REF][START_REF] Chatzikalymniou | Linking minimal and detailed models of ca1 microcircuits reveals how theta rhythms emerge and how their frequencies are controlled[END_REF], the mean-field model presumes neural populations with N → ∞. The reduced description accurately captures the shape of the spiking activity of the full QIF neural network, meaning we can safely employ the reduced mean-field model to study the dynamics of the O, I, and E-cell populations.

Excitatory entorhinal inputs arrive in the CA1 region through the temporoammonic pathway, targeting the distal dendrites of pyramidal cells [START_REF] Amaral | Hippocampal neuroanatomy[END_REF]. Therefore, we presume that the CA1 network will resonate if the E population has a natural frequency in the theta range. Bearing that in mind, we infer the connectivity parameters of the O-I-E network for which the imaginary part of the eigenvalues of the E population is between 0.02512 rad/msec and 0.07539 rad/msec, which corresponds to a natural frequency between 4 and 12 Hz.
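This criterion comes from linearizing the reduced system around its fixed point: with time measured in msec, a natural frequency f in Hz corresponds to an eigenvalue imaginary part of 2πf/1000 rad/msec, which maps the 4-12 Hz band onto the interval quoted above. A minimal sketch of this check is given below; the function name is ours, the Jacobian is approximated by finite differences, and a toy two-dimensional system stands in for the full nine-dimensional O-I-E right-hand side.

```python
import numpy as np
from scipy.optimize import fsolve

def natural_frequency_hz(rhs, y0):
    """Natural frequency (Hz) from the linearization of dy/dt = rhs(y), t in msec."""
    y_fp = fsolve(rhs, y0)                       # fixed point near the initial guess
    n, eps = y_fp.size, 1e-6
    J = np.empty((n, n))
    for j in range(n):                           # finite-difference Jacobian
        dy = np.zeros(n); dy[j] = eps
        J[:, j] = (np.asarray(rhs(y_fp + dy)) - np.asarray(rhs(y_fp - dy))) / (2 * eps)
    lam = np.linalg.eigvals(J)
    return np.abs(lam.imag).max() * 1000.0 / (2.0 * np.pi)

# Toy check: a damped oscillator with omega = 0.0503 rad/msec, i.e. ~8 Hz.
toy = lambda y: np.array([-0.001 * y[0] - 0.0503 * y[1],
                          0.0503 * y[0] - 0.001 * y[1]])
print(natural_frequency_hz(toy, np.array([0.1, 0.1])))   # ~8.0 Hz, inside 4-12 Hz
```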
Bayesian inference of connectivity parameters

One question that has been disputed in the past few years is the contribution of OLM cells to the induction and expression of hippocampal theta rhythm [START_REF] Kopell | Gamma and theta rhythms in biophysical models of hippocampal circuits[END_REF][START_REF] White | Networks of interneurons with fast and slow γ-aminobutyric acid type a (gaba a ) kinetics provide substrate for mixed γ-θ rhythm[END_REF][START_REF] Chatzikalymniou | Deciphering the contribution of oriens-lacunosum/moleculare (olm) cells to intrinsic θ rhythms using biophysical local field potential (lfp) models[END_REF][START_REF] Giovannini | The can-in network: A biologically inspired model for self-sustained theta oscillations and memory maintenance in the hippocampus[END_REF]. To address this question, we start by analyzing the posteriors of the p_EI, p_IE, p_II and p_EE parameters while the other parameters are kept fixed. For the pairing stimulation protocol, we considered p_OE = p_EO = p_OI = 0, and p_IO = 70. For the no-pairing protocol, we consider p_OE = p_EO = p_OI = p_IO = 0. We then sampled a set of parameters from the low probability region, i.e. for which the network cannot resonate to theta inputs, and inferred the previously fixed O-connectivity parameters. By doing so, we investigate how OLM cells modulate the CA1 network; more specifically, how they change the robustness and power of the produced rhythm. Inference is calculated after one round of 500 simulations. We chose uniform distributions for all connectivity priors. These simulation specifications proved to be enough for the learning algorithm to converge to a solution.

Results

Estimating CA1 network connectivity

To identify the CA1 neural populations that modulate the hippocampal response to extrinsic rhythmic inputs, we use a statistical inference tool, SNPE, to infer the posteriors of the synaptic connections of the O-I-E network that turn the network into a theta resonator. To perform inference of the posteriors, we only evaluate the behavior (natural frequency and power) of the E-cell population, given that it is the target of the rhythmic entorhinal inputs as well as the source of current into the EC, i.e. it forms the population that links the EC and CA1 regions and closes the entorhinal-hippocampal loop. We are interested in assessing the role of the OLM cells in the modulation of the natural frequency of the E-cells and of the power of theta oscillations during the cholinergic pairing protocol (cholinergic activation of the O-cells paired with glutamatergic activation of the I and E-cells) and during the no-pairing protocol (glutamatergic activation of the I and E-cells).

We start by focusing on the pairing protocol. To simulate the state of the O-I-E network during the pairing protocol, all three populations receive an external current I_ext = 50 pA, and the O-to-I connection is fixed to p_IO = 70. This follows the results obtained in chapter 2 showing that cholinergic pairing activates OLM cells, which results in strong inhibition from the O onto the I-cells. First, we focus on the contributions of the E-I subnetwork, i.e. we set all connections with the O-population (except p_IO) to zero, and infer the posterior distributions of p_II, p_EI, p_IE and p_EE for which the E-cells have a natural frequency in the theta range (Figure 5.2 (A)).
From the resulting posteriors, we extract the mean values, which put us in the high probability region: p_IE = 20.2821, p_EI = 64.578830, p_II = 45.3345, p_EE = 28.8861. We confirm our results by calculating the power of the network's oscillatory activity when the E-cells are subjected to a periodic input with different frequencies (Figure 5.2 (A), right panel). We see that there is a peak in power for periodic inputs with a frequency in the theta range (Figure 5.2 (A)). When we choose parameters in the low probability region (p_IE = 20.2821, p_EI = 30, p_II = 45.3345, p_EE = 28.8861), the peak of the power spectrum falls outside the theta range (grey area). These observations indicate that the network will selectively respond to oscillatory inputs with a theta frequency for connectivity parameters in the high probability region.

From the inferred posteriors, we can estimate the relative contribution of the different synaptic connections to this selective response. The network's response to oscillatory theta inputs is mainly constrained by p_EE, p_EI and p_IE. While the posteriors of p_EE and p_IE are skewed towards low values, the posterior of p_EI is skewed towards high values. This indicates that the bidirectional interactions between the I and E-cells modulate the network's response to external rhythmic inputs differently.

We then fixed the parameters p_IE, p_EI, p_II and p_EE to their low-probability-region values, and inferred the remaining connectivity parameters p_OI, p_OE, p_EO. The learning algorithm converged to a solution. This means that adding the E-O, O-E and I-O connections modulates the network in such a way that enables resonance to theta inputs (Figure 5.2 (B)). To confirm these findings, we drew the mean values of the posteriors (p_OI = 51.8445, p_OE = 47.0968, p_EO = 49.0425) and used these parameters to study the power spectrum of the network as a function of the frequency of a periodic input. We see that the system now resonates to theta inputs with the parameters p_IE, p_EI, p_II and p_EE from the low probability region (Figure 5.2 (B), right panel). This result indicates that the O-cells increase the robustness of the hippocampal response to theta inputs. In other words, the O-cells increase the range of connectivity values for which the system resonates to theta inputs.

We repeated the process for the no-pairing protocol. The I and E-cells receive an external current I_ext = 50 pA, while the O-cells do not receive any external inputs. Initially, we study the contributions of the E-I subnetwork by setting all connections with the O-population to zero. We infer the posterior distributions of p_II, p_EI, p_IE and p_EE for which the E-cells have a natural frequency in the theta range, and extract the mean values of the posterior distributions of the connectivity parameters (p_IE = 30.5809, p_EI = 71.5955, p_II = 49.8003, p_EE = 34.7191) (Figure 5.3 (A)). To confirm our results, we calculate the power spectrum of the E population in the high and low probability regions. We confirm that for parameters from the posterior high probability region (Figure 5.3 (A)), the system resonates to theta inputs. For parameters from the low probability region (Figure 5.3 (A), pink line), the system resonates to inputs with a frequency outside the theta range. We then fixed the parameters p_IE, p_EI, p_II and p_EE in their low probability region, and inferred the connectivity parameters involving the O-cells (p_OI, p_IO, p_OE, and p_EO).
Similarly to the pairing case, adding connections with the O-cells makes the system resonate at theta frequencies (Figure 5.3 (B)).

The hippocampus as a theta rhythm generator

It is well established that the CA1 region contains the circuitry necessary for the generation of theta oscillations [START_REF] Goutagny | Self-generated theta oscillations in the hippocampus[END_REF][START_REF] Ferguson | Combining theory, model and experiment to explain how intrinsic theta rhythms are generated in an in vitro whole hippocampus preparation without oscillatory inputs[END_REF][START_REF] Chatzikalymniou | Linking minimal and detailed models of ca1 microcircuits reveals how theta rhythms emerge and how their frequencies are controlled[END_REF][START_REF] Giovannini | The can-in network: A biologically inspired model for self-sustained theta oscillations and memory maintenance in the hippocampus[END_REF]. Yet, recent studies indicate that, in an in vitro septal-entorhinal-hippocampal network, the theta rhythm induced by co-pairing septal cholinergic and SC inputs originates in the EC structure (Gu and Yakel, 2017), and not in the hippocampal region. Furthermore, it was also shown that repeated co-pairing of cholinergic and SC inputs potentiates the SC-PYR and PYR-OLM synapses [START_REF] Gu | Hippocampal interneuronal α7 nachrs modulate theta oscillations in freely moving mice[END_REF]. In this section, we inquire whether changes in the SC-PYR and PYR-OLM synapses can change the mechanisms of theta generation in the septal-entorhinal-hippocampal circuit. In particular, we test the possibility that changes in the mentioned synapses enable the CA1 region to generate theta oscillations when the co-pairing stimulation protocol is performed. For that, we consider that the potentiation of the SC-PYR and PYR-OLM synapses translates into an increase of the magnitude of the external current acting on the E-cells and of the connectivity parameter p_OE, respectively. For the remaining connectivity parameters, we adopt the values inferred in the previous section, i.e. the parameters for which the CA1 network resonates to theta inputs (p_IE = 20.2821, p_EI = 30, p_II = 45.3345, p_EE = 28.8861, p_OI = 51.8445, p_IO = 70, p_OE = 47.0968, p_EO = 49.0425).

We plot a two-parameter bifurcation diagram of the E-I-O system when all the populations of the network receive an external current I_ext = 50 pA (see section 5.2 for justification). We see that if the magnitude of the external current acting on E and the strength of the E-O synapse p_OE increase, the hippocampal network enters an oscillatory regime without the need for external periodic inputs (Figure 5.4 (B)).

If the system is in the non-oscillatory region, it will only produce theta oscillations if, besides an external current acting on the O, I and E-cells, the E-cells receive a periodic input I_per = 8 sin(ωt) + 8 with a frequency ω in the theta range. The oscillations generated have the same frequency as the periodic input (11 Hz) and a power of 8.7 mV²/Hz. On the other hand, if the system is in the oscillatory regime, it will produce oscillations of similar power whether it receives a periodic input or not (see Figure 5.4 (B)). However, in the oscillatory regime, pairing I_ext with a periodic theta input I_per produces oscillations outside the theta range (Figure 5.4 (B), right panel 1).
These results indicate that the mechanisms for the generation of theta in a septo-entorhinal-hippocampal circuit might differ before and after inducing potentiation of the SC-PYR and PYR-OLM synapses.

Gu and Yakel also observed that after repeatedly inducing theta in a septo-entorhinal-hippocampal circuit using the co-pairing protocol, and thereby inducing potentiation of the mentioned hippocampal synapses, theta could be generated by activating the SC pathway alone (Gu and Yakel, 2017). To further examine how potentiation of the hippocampal synapses alters the mechanisms of theta generation, we repeated the previous analysis considering that the system is subjected to the no-pairing protocol. That is, we plot a two-parameter bifurcation diagram of the E-I-O system when the E and I populations receive an external input I_ext = 50 pA and the O population does not receive any external input. Once again, if the magnitude of the external current acting on E and the strength of the E-O synapse p_OE increase, the network generates oscillations without any external periodic drive (Figure 5.5). If the system is in the non-oscillatory region, it will only produce theta oscillations if the E-cells receive a periodic input I_per = 8 sin(ωt) + 8 with a frequency ω in the theta range (see Figure 5.5 (B)). The oscillations generated have the same frequency as the periodic input (11 Hz) and a power of 8.2 mV²/Hz. If the system is in the oscillatory regime, it can autonomously generate oscillations with a frequency of 11.1 Hz, i.e. in the theta range; if, besides the external current acting on the E and I-cells, the E-cells receive a periodic input with a frequency of 12 Hz, it produces oscillations with the same frequency. Regarding the power of the generated oscillations, it is larger when the system does not receive oscillatory inputs I_per (14.4 mV²/Hz) than when it does (11.2 mV²/Hz). These results confirm that the induction of plasticity in the hippocampal region can change the mechanisms of theta generation in a septo-entorhinal-hippocampal circuit.

(Figure 5.5, caption excerpt: two-parameter bifurcation diagram with the external current acting on the E-cells, I_ext, and the strength of the connection between the O and E populations, p_OE, as bifurcation parameters; the activity of the E-cells is simulated when the system is in the non-oscillatory or oscillatory region, with or without paired periodic inputs.)

Discussion

In this chapter, we used a network model of the hippocampal CA1 region to study how its connectivity modulates the hippocampal responses to external periodic inputs. This is particularly important in light of recent experimental work suggesting that, in a septal-hippocampal-entorhinal circuit, rhythmic activity is not generated in the hippocampus; instead, it originates in the intrinsic entorhinal circuit and is fed back to the hippocampus, probably through the temporoammonic pathway (Gu and Yakel, 2017). Moreover, the same study indicates that the induction of theta in the tri-culture preparation requires co-paired activation of cholinergic and SC inputs and that SC stimulation alone is not sufficient to induce theta. Assuming that the theta rhythm does not originate in the hippocampus and that the CA1 region receives rhythmic theta inputs from the EC, we analyze the connectivity parameters of a CA1 network composed of OLM cells (O), fast-spiking interneurons (I), and pyramidal cells (E) for which the network resonates to rhythmic theta inputs.
Given that the EC rhythmic inputs target the CA1 E-cells and that, in turn, the E-cells project back to the EC network, closing the entorhinal-hippocampal circuit, we assume that the E-cells must resonate to the rhythmic entorhinal inputs for theta to be maintained in the full circuit. Generally speaking, a given system will resonate under the influence of an external force if its natural frequency is equal to the frequency of the external input. Therefore, we were interested in finding the different connectivity configurations of the network for which the E-cells have a natural frequency in the theta range. We consider two distinct situations: when the hippocampal circuit is subjected to paired activation of cholinergic and SC inputs, and when it is subjected to SC inputs alone.

Given the contrasting ideas regarding the role of OLM cells in the generation and maintenance of the hippocampal theta rhythm [START_REF] Kopell | Gamma and theta rhythms in biophysical models of hippocampal circuits[END_REF][START_REF] White | Networks of interneurons with fast and slow γ-aminobutyric acid type a (gaba a ) kinetics provide substrate for mixed γ-θ rhythm[END_REF][START_REF] Chatzikalymniou | Deciphering the contribution of oriens-lacunosum/moleculare (olm) cells to intrinsic θ rhythms using biophysical local field potential (lfp) models[END_REF], for each of the mentioned situations we ascertain the role of the O-cells by considering the E-I subnetwork and examining how adding connections with the O-population changes the behavior of the network. Our results show that while the E-I subnetwork can resonate to rhythmic inputs with theta frequency, adding connections with the O-cells increases the robustness of the network. This is true both when we subject the network to paired activation of the O-cells and the E and I cells, and when we activate the E and I cells alone. Such observations indicate that cholinergic activation of the O-cells is unnecessary for the CA1 region to resonate to extrinsic theta inputs. Instead, we hypothesize that they might only play a role in the modulation of the hippocampal excitability (see chapter 2) that gates the generation of theta rhythm in the EC.

Experimental and theoretical studies have shown that the CA1 region has the necessary circuitry to generate theta oscillations intrinsically [START_REF] Goutagny | Self-generated theta oscillations in the hippocampus[END_REF][START_REF] Ferguson | Combining theory, model and experiment to explain how intrinsic theta rhythms are generated in an in vitro whole hippocampus preparation without oscillatory inputs[END_REF][START_REF] Chatzikalymniou | Linking minimal and detailed models of ca1 microcircuits reveals how theta rhythms emerge and how their frequencies are controlled[END_REF][START_REF] Giovannini | The can-in network: A biologically inspired model for self-sustained theta oscillations and memory maintenance in the hippocampus[END_REF]. Yet, Gu and colleagues' observations indicate that in a septo-entorhinal-hippocampal circuit theta originates in the EC and not in the hippocampal region, and that the hippocampus responds to theta inputs coming through the temporoammonic pathway (Gu and Yakel, 2017). We confirm that if the connectivity of the CA1 O-I-E network is such that the network resonates to entorhinal theta inputs, paired cholinergic and SC inputs (or SC inputs alone) do not initiate theta oscillations in the CA1 region.
Furthermore, we investigate how changes in the magnitude of the external current acting on the E-cells and in the strength of the E-O connection, p_OE, modify the mechanisms of theta generation. This follows experimental results showing that repeated paired activation of cholinergic and SC inputs potentiates the SC-PYR and PYR-OLM hippocampal synapses (Gu and Yakel, 2017; [START_REF] Gu | Hippocampal interneuronal α7 nachrs modulate theta oscillations in freely moving mice[END_REF]). According to our model, potentiation of the hippocampal synapses enables the generation of oscillations in the hippocampal region. In other words, if the external current acting on the E-cells and/or the connectivity p_OE increase, paired co-activation of the O-cells and the E and I cells (or activation of the E and I cells alone) generates theta rhythm in the E-I-O network, without the need for extrinsic periodic inputs. Note that these are preliminary results, and supplementary simulations are required for a complete analysis. For example, when inferring the connectivity parameters that endow the E-cells with theta resonant properties, we did not consider the effects of potentiation of the hippocampal SC-PYR and OLM-PYR synapses. Additional simulations are required to verify how changes in the strength of these synapses alter the resonant properties of the network. Moreover, we did not consider the different coupling strengths that the different external inputs, I_ext and I_per, can have on the network. Overall, our results indicate that the mechanisms for the generation of theta in a septo-entorhinal-hippocampal circuit differ before and after potentiation of the SC-PYR and PYR-OLM synapses is induced. Before the hippocampal synapses are potentiated, paired cholinergic and SC inputs (or SC inputs alone) cannot initiate theta in the local hippocampal circuit, and CA1 merely responds to rhythmic inputs from the EC. However, if the strength of the E-O synapse and/or the external current acting on the E-cells increase, co-pairing or SC activation alone can drive the system into an oscillatory regime with theta frequency.
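The two-parameter analysis above (scanning the external current against a coupling strength and asking whether the network settles into a limit cycle) can be sketched numerically. The sketch below is an illustration under stated assumptions: it uses an E-I pair of exact QIF mean-field populations as a stand-in for the full E-I-O model, a crude steady-state amplitude criterion as the oscillation test, and illustrative parameter values and coupling constants of our own choosing, so the location of any oscillation boundary depends entirely on them.

```python
import numpy as np

def steady_amplitude(I_ext, J_ee, T=8.0, dt=1e-4, delta=1.0, eta=-5.0):
    """E-I pair of exact QIF mean-field populations. Returns the
    peak-to-peak amplitude of the E-cell mean potential after the
    transient; a large value indicates a limit cycle."""
    re, ve, ri, vi = 0.1, -2.0, 0.1, -2.0
    n = int(T / dt)
    lo, hi = np.inf, -np.inf
    for i in range(n):
        dre = delta / np.pi + 2.0 * re * ve
        dve = ve * ve + eta + I_ext - (np.pi * re) ** 2 + J_ee * re - 10.0 * ri
        dri = delta / np.pi + 2.0 * ri * vi
        dvi = vi * vi + eta - (np.pi * ri) ** 2 + 15.0 * re
        re += dt * dre; ve += dt * dve
        ri += dt * dri; vi += dt * dvi
        if i > n // 2:                     # discard the transient
            lo, hi = min(lo, ve), max(hi, ve)
    return hi - lo

# Coarse sweep in the spirit of the (I_ext, p_OE) two-parameter diagram.
for I_ext in (0.0, 2.0, 4.0, 6.0):
    print(I_ext, [round(steady_amplitude(I_ext, J), 2) for J in (0.0, 5.0, 10.0)])
```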
| Conclusion and future perspectives

This thesis set out to investigate the mechanisms of theta induction and expression in a septal-hippocampal-entorhinal circuit. We use computational models to study the intrinsic properties of each region and how they can contribute to the generation and maintenance of theta rhythm in the hippocampal formation. Although it is known that three brain regions (septum, hippocampus, and entorhinal cortex) are necessary for the generation of theta rhythm in the hippocampal formation, it is not clear what role each of them plays in the interplay that gives rise to synchronous activity with a theta frequency. Thanks to the groundbreaking experimental work of Gu and Yakel (2017), who established an in vitro septal-entorhinal-hippocampal brain co-culture preparation, it was possible to study how theta is generated and how the activity flows among the three regions during theta generation and propagation. It was found that while activation of septal cholinergic inputs or activation of SC inputs alone could not induce theta, tightly paired co-activation of the two pathways could readily induce theta in the circuit. Repeated pairing of cholinergic and SC inputs potentiated the EPSCs of CA1 OLM and pyramidal cells in the deep layers of the EC. Moreover, SC stimulation alone could then give rise to theta oscillations in the hippocampal-entorhinal circuit. Experiments also revealed that the generation of theta oscillations depends on the activation of α7 nAChRs and mAChRs in the hippocampus (on OLMα2 and CA1 pyramidal neurons, respectively), and of NMDARs in the EC. In contrast, re-expression only depends on the activation of the NMDARs. Experiments also show that theta rhythmic inputs first appear in the deep layers of the EC, then spread to the superficial layers, and finally to the hippocampal slm layer and to hippocampal pyramidal neurons, which project back to the deep layers of the EC and close the hippocampal-entorhinal circuit (Gu and Yakel, 2017, 2011; Gu et al., 2017; [START_REF] Gu | Hippocampal interneuronal α7 nachrs modulate theta oscillations in freely moving mice[END_REF]). Several questions arise from these results. First, how does the pairing of acetylcholine and glutamatergic hippocampal inputs gate local plasticity and facilitate theta rhythm generation? Second, what intrinsic properties of the entorhinal circuit permit theta oscillations to arise? And finally, how do entorhinal rhythmic inputs drive the hippocampus into an oscillatory regime? To answer these questions, we used a combination of local biophysical and network models. We started by using a biophysical model to study how cholinergic inputs paired with SC stimulation modulate synaptic strength in the hippocampus. We constructed a minimal circuit with a single-compartment spiking OLM cell with α7 nAChRs, a fast-spiking interneuron with AMPA and GABA_A receptors, and the pyramidal cell proximal dendritic compartment with AMPA, NMDA, and GABA_A receptors. Our results show that recurrent cholinergic activation of α7 nAChRs expressed in OLMα2 interneurons can potentiate SC-evoked CA1 pyramidal EPSCs by inhibiting the fast-spiking interneurons that provide feedforward inhibition onto CA1 pyramidal cells. These results suggest that septal cholinergic inputs regulate hippocampal plasticity, promoting the generation of theta oscillations rather than pacing theta frequency. This is in accordance with optogenetic studies showing that changes in the firing frequency of septal cholinergic inputs do not significantly change the frequency of hippocampal theta [START_REF] Dannenberg | Synergy of direct and indirect cholinergic septo-hippocampal pathways coordinates firing in hippocampal networks[END_REF][START_REF] Vandecasteele | Optogenetic activation of septal cholinergic neurons suppresses sharp wave ripples and enhances theta oscillations in the hippocampus[END_REF], and that after blockade of septal inputs to the hippocampus, the in vivo hippocampus was still able to generate theta under simultaneous excitation and disinhibition [START_REF] Colom | In vivo intrahippocampal microinfusion of carbachol and bicuculline induces thetalike oscillations in the septally deafferented hippocampus[END_REF]. In our modeling work, we did not consider the action of mAChRs on the CA1 pyramidal cell. The dynamics of these receptors is challenging to model due to the multitude of different outcomes and protein kinase cascades that their activation entails. We believe that cholinergic activation of mAChRs mainly affects the pyramidal cell's excitability, enhancing the induction of plasticity. Still, more experimental and computational results are necessary to confirm this hypothesis.
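The plasticity rule underlying the potentiation results summarized above follows the calcium-control hypothesis of Shouval et al. (2002), cited in the thesis (Figure 1.2): the synaptic weight relaxes, at a calcium-dependent rate η(Ca), toward a calcium-dependent target Ω(Ca) that dips (depression) at intermediate calcium and rises (potentiation) at high calcium. A minimal sketch of that rule is below; the thresholds, steepnesses and the P1-P4 values are illustrative placeholders, not the thesis's fitted parameters.

```python
import numpy as np

def sig(x, beta):
    # Logistic sigmoid used in Shouval et al. (2002).
    return 1.0 / (1.0 + np.exp(-beta * x))

def omega(ca, theta_d=0.35, theta_p=0.55, beta=80.0):
    # Calcium-dependent target: baseline at low Ca, a dip (depression)
    # for theta_d < Ca < theta_p, and a rise (potentiation) above theta_p.
    return 0.25 + sig(ca - theta_p, beta) - 0.25 * sig(ca - theta_d, beta)

def eta(ca, p1=0.1, p2=1e-4, p3=3.0, p4=1.0):
    # Calcium-dependent learning rate: eta(Ca) = (P1/(P2 + Ca^P3) + P4)^-1.
    return 1.0 / (p1 / (p2 + ca ** p3) + p4)

def relax(ca, w0=0.25, steps=400, dt=1.0, lam=1.0):
    # dW/dt = eta(Ca) * (Omega(Ca) - lambda * W)
    w = w0
    for _ in range(steps):
        w += dt * eta(ca) * (omega(ca) - lam * w)
    return w

for ca in (0.2, 0.45, 0.8):    # low, intermediate, high calcium
    print("Ca = %.2f  ->  W = %.3f" % (ca, relax(ca)))
```

Run as written, the weight stays near baseline at low calcium, depresses at intermediate calcium, and potentiates at high calcium, which is the qualitative behavior of Figure 1.2.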
Future work also includes expanding our pyramidal cell model to include more dendritic compartments, namely a distal dendritic compartment and the spiking soma, to study how direct inhibitory inputs from the OLM interneurons onto slm affect the induction of plasticity at the proximal dendritic compartment, as well as the firing activity of the pyramidal cell and the consequent neurotransmitter release into the EC. It would also be interesting to include a PYR-OLM connection and model plasticity induction at this synapse, which is also potentiated following repeated pairing of cholinergic and SC inputs. In particular, we would like to study how plasticity at this site is induced and how it affects the induction of plasticity at the SC-PYR synapse. Based on the results obtained, we hypothesize that the pairing of septal cholinergic and SC inputs promotes the generation of hippocampal theta rhythm by increasing the hippocampal excitability, which presumably results in an increased excitation onto the deep layers of the EC. We next built a network model of the entorhinal circuit to study how an increase in the hippocampal excitatory drive can trigger synchronous activity with a theta frequency in this brain region. Our results suggest that connections between stellate cells and pyramidal cells can synchronize the activity of the network. Additionally, connections between pyramidal cells and fast-spiking interneurons modulate the frequency of the oscillations. The stellate cells are endowed with currents that give rise to subthreshold oscillations with theta frequency. Therefore, they may also indirectly control the frequency of theta rhythm in the EC by selectively resonating to inputs with a theta frequency. Our model also argues that slow S-to-I excitatory synapses can promote the generation of theta oscillations in the EC. Although this observation agrees with previous experimental results indicating that NMDARs in the EC play a crucial role in the generation of theta in the hippocampal formation (Gu and Yakel, 2017; Gu et al., 2017), a more detailed and accurate description of these receptors is necessary to draw any further conclusions from our model. Finally, we built a network model of the CA1 region that included inhibitory OLM and fast-spiking interneurons and excitatory pyramidal cells. We inferred the connectivity for which the system has a natural frequency in the theta range (the inference sketch below illustrates the approach). This allowed us to study how the connectivity of the hippocampal neurons modulates the network's response to entorhinal rhythmic inputs. According to our results, a minimal network of pyramidal cells and fast-spiking interneurons can amplify external rhythmic inputs with theta frequency. Connections from fast-spiking interneurons to pyramidal cells seem to be particularly important in modulating this response. The OLM cells increase the robustness of the network, i.e., for configurations of the E-I network that would not resonate at theta, we can make the system resonate by including connections with the O-cells. We also found that if the hippocampal connectivity is such that the network resonates to inputs with theta frequency, it cannot intrinsically generate theta oscillations as a response to the pairing of cholinergic and SC inputs (or to SC inputs alone) unless the E-O synapse is strong enough. This seems to indicate that the mechanisms of induction and expression of hippocampal theta rhythm are different.
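The connectivity inference referred to above uses SNPE (Sequential Neural Posterior Estimation), as stated in the methods. A minimal sketch with the sbi Python package follows; the simulator here is a trivial placeholder mapping parameters to fake summary statistics (a "peak frequency" and a "peak power"), and the prior bounds and target observation are illustrative, not the thesis's.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Prior over four connectivity parameters (e.g., p_IE, p_EI, p_II, p_EE);
# the bounds are illustrative.
prior = BoxUniform(low=torch.zeros(4), high=100.0 * torch.ones(4))

def simulator(theta):
    # Placeholder for the mean-field network simulation: it should return
    # summary statistics of the E-cell activity. Here: fake "peak frequency"
    # and "peak power" with observation noise, just to make the sketch run.
    freq = 4.0 + theta[1] / 10.0 + 0.1 * torch.randn(1)
    power = theta[0] * theta[3] / 100.0 + 0.1 * torch.randn(1)
    return torch.cat([freq, power])

theta = prior.sample((2000,))
x = torch.stack([simulator(t) for t in theta])

inference = SNPE(prior=prior)
posterior = inference.build_posterior(inference.append_simulations(theta, x).train())

# Condition on a target observation (a theta-range peak with high power),
# then sample high-probability connectivity parameters.
x_obs = torch.tensor([8.0, 40.0])
samples = posterior.sample((1000,), x=x_obs)
print(samples.mean(dim=0))
```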
For the induction of hippocampal theta, the rhythmic activities are generated in the EC circuit as a response to increased excitatory hippocampal inputs, and then feed back to the hippocampus, driving the hippocampal circuit into a resonant regime. When it comes to expression, the hippocampal formation may use a similar mechanism, or the hippocampus might generate theta rhythm intrinsically as a response to SC inputs. This oscillatory activity can then propagate to the EC, or the hippocampus and EC can function as two coupled oscillators. To explore these ideas, one would need to couple the entorhinal and hippocampal networks described in chapters 4 and 5, respectively, and study how rhythmic activity flows in the entire circuit. Another possible approach is to study the macroscopic Phase-Response Curve (mPRC) of the different populations involved. PRCs illustrate the transient changes in an oscillatory system's period induced by small perturbations, as a function of the phase at which the perturbation is delivered. In other words, a PRC quantifies by how much the next spike of a regularly spiking neuron is advanced or delayed as a function of the timing of a small perturbation delivered to that neuron. From PRCs, we can extract useful information regarding the excitability type and synchronization properties of different neuron types. For example, a biphasic PRC indicates that an excitatory input can delay or advance the firing of the next spike, depending on the phase at which it is delivered to the neuron. Neurons that present this type of PRC are known to synchronize through fast excitatory synapses [START_REF] Acker | Synchronization of strongly coupled excitatory neurons: relating network behavior to biophysics[END_REF][START_REF] Hansel | Synchrony in excitatory neural networks[END_REF][START_REF] Stiefel | Neurons as oscillators[END_REF]. Similarly, we can derive the mPRC of a population of identical neurons to determine how the phase of the global oscillation of a macroscopic system responds to incoming perturbations acting on a population of neurons [START_REF] Dumont | Macroscopic phase-resetting curves for spiking neural networks[END_REF]. Preliminary results show that populations of entorhinal stellate (S) and pyramidal (E) cells both have biphasic mPRCs (Figure 6.1), which suggests that they can synchronize with external rhythmic excitatory inputs. This indicates that two intercoupled populations of oscillating S and E-cells can synchronize with each other, as we have seen in chapter 4. Moreover, it suggests that external excitatory inputs acting on the E-cells, presumably from the hippocampal region, can synchronize a population of pyramidal entorhinal cells. Still, it does not give us any information about the phase shifts that arise when we couple the hippocampus and EC. The mPRC has been used to study the phase shifts that occur in a system of identical intercoupled networks [START_REF] Dumont | Macroscopic phase resetting-curves determine oscillatory coherence and signal transfer in inter-coupled neural circuits[END_REF]. However, to our knowledge, no methods have yet been developed that allow us to quantify the phase shifts that arise in intercoupled populations of non-identical neurons. Understanding how regularly firing hippocampal and entorhinal neural populations synchronize with each other is a crucial step towards studying the maintenance of theta rhythm in a closed hippocampal-entorhinal loop.
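The "direct simulations" route to a PRC mentioned in Figure 6.1 works as follows: measure the unperturbed period, deliver a brief pulse at a grid of phases, and record the normalized period change. A minimal sketch for a single quadratic integrate-and-fire neuron is below; the macroscopic mPRC is obtained analogously on the mean-field equations. Note that this canonical one-dimensional QIF yields a purely positive (monophasic) PRC, while the biphasic shape reported in Figure 6.1 arises from the recovery variable of the two-dimensional model. All parameter values here are illustrative.

```python
import numpy as np

def period(I=1.0, perturb_time=None, eps=0.2, dt=1e-4,
           v_reset=-30.0, v_peak=30.0):
    """Time to the next spike of a QIF neuron, dv/dt = v^2 + I, with an
    optional instantaneous voltage kick at perturb_time (direct method)."""
    v, t = v_reset, 0.0
    while v < v_peak:
        v += dt * (v * v + I)
        if perturb_time is not None and abs(t - perturb_time) < dt / 2:
            v += eps                       # brief excitatory perturbation
        t += dt
    return t

T0 = period()                              # unperturbed period
for p in np.linspace(0.05, 0.95, 10):      # perturbation phases
    shift = (T0 - period(perturb_time=p * T0)) / T0
    print("phase %.2f  phase advance %+.4f" % (p, shift))
```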
The present study sheds light on the underlying mechanisms of hippocampal theta rhythm generation. To our knowledge, this is the first computational study that addresses the role of all three brain regions (medial septum, hippocampus, and entorhinal cortex) involved in the induction and expression of hippocampal theta rhythm. We combined minimal, detailed models, which establish a cellular basis for how cholinergic action can modulate the hippocampal network and promote the induction of theta, with network models that highlight the intrinsic properties of the entorhinal and hippocampal circuits.

Figure 1.1: Simplified diagram of the hippocampal formation. (A) Wiring diagram of the hippocampus and entorhinal cortex. (B) Laminar structure of the CA1 region.

Figure 1.2: Calcium-dependent plasticity functions from Shouval et al. (2002). (A) Synaptic efficacy function Ω. When the calcium concentration is below a depression threshold Θ_d, Ω remains at its baseline value; when calcium is above Θ_d and below a potentiation threshold Θ_p (Θ_d < Ca < Θ_p), the synaptic weight is reduced; when calcium is above Θ_p, the synaptic weight increases. (B) Synaptic plasticity learning rate function η as a function of calcium. Figures reproduced from [START_REF] Shouval | A unified model of nmda receptor-dependent bidirectional synaptic plasticity[END_REF].

Figure 2.1: Disynaptic disinhibition circuit for nAChR-modulated long-term plasticity in the CA1. (A) Simplified wiring diagram of an interneuron network that mediates feedforward inhibition in the CA1 region of the hippocampus. Activating the Schaffer collateral (SC) pathway leads to the activation of CA1 pyramidal cell dendrites and stratum radiatum (s.r.) interneurons, which provide feedforward inhibition onto the pyramidal cell. Cholinergic activation of OLMα2 interneurons in stratum oriens (s.o.) leads to the inhibition of the s.r. interneurons, counteracting SC feedforward inhibition (R. Leão et al., 2012). (B) Minimal network to investigate plasticity induced by the pairing of cholinergic and SC activation. Glutamate activates postsynaptic AMPA and NMDARs at the pyramidal cell dendritic compartment E_D and postsynaptic AMPARs at I-cells, providing feedforward inhibition onto E_D by activating postsynaptic GABA_A Rs. Cholinergic inputs act on postsynaptic α7 nAChRs of O-cells, which results in GABA release from the O-cells that binds to postsynaptic GABA_A Rs of the I-cell.

Figure 2.2: Cholinergic activation of OLM interneurons potentiates SC-evoked EPSCs. (A) Scheme of in vitro induction of cholinergic-pairing-induced hippocampal synaptic plasticity. EPSCs were recorded from CA1 pyramidal neurons. Cholinergic neurons were activated via channelrhodopsin-2 that was specifically expressed in ChAT-positive neurons. The Schaffer collateral (SC) pathway was activated by a stimulating electrode. Adapted from Gu et al. (2020). (B) Scheme of the minimal network used to study the role of cholinergic inputs in the potentiation of SC-evoked EPSCs. Glutamatergic inputs activate the pyramidal cell dendritic compartment (E_D) and the fast-spiking interneuron (I) that projects to it.
ACh activates the OLM interneuron (O) during the co-pairing period. (C) Normalized SC-evoked EPSC responses from CA1 pyramidal neurons showing that the enhancement of EPSCs was impaired in hippocampal slices from mice with selective α7 nAChR knockout in OLMα2 interneurons. Adapted from Gu et al. (2020). (D) Numerical simulation of the normalized EPSC amplitude when glutamatergic inputs acting on the I-cell and E_D are paired with cholinergic inputs acting on the O-cell (from t=10 min to t=18 min). The EPSCs are calculated as the sum of the postsynaptic AMPA and NMDA currents, I_AMPA and I_NMDA, resulting from 10 simulations with white noise on the E_D membrane potential. Normalization of the results was calculated according to the expression (100 + (EPSC − EPSC_min)·(150 − 100))/(EPSC_max − EPSC_min). The same results are obtained if a noisy background current inducing spontaneous spiking is added to the O and I-cells (see Figure S5). Inset: concentration of GABA released from the fast-spiking interneurons (I), calculated according to equation (2.15).

Figure 2.3: Co-pairing temporal parameters determine the duration and polarity of synaptic plasticity: relative timing between cholinergic and glutamatergic stimulation, extent of the co-pairing period, and frequency of stimulation. (A) The synaptic strength transient duration is proportional to the extent of the pairing period. Here, the transient duration is defined as the time it takes the EPSC to return to baseline after co-pairing is over. The I-cell and E_D receive a pulse of glutamate per minute. During the co-pairing period, the O-cell receives a pulse of ACh per minute, 10 msec before the glutamate pulses. (B) The synaptic strength transient duration is proportional to the frequency of the ACh and glutamate pulses during the co-pairing period. Before and after the co-pairing period, the I-cell and E_D receive a glutamate pulse per minute. During the co-pairing period (4 minutes), the frequency changes to 1/120, 1/60 or 1/30 Hz, and the O-cell receives a pulse of ACh 10 msec before the glutamate pulses, with the same frequency.

Figure 2.4: Disinhibition of the CA1 pyramidal cell facilitates induction of hippocampal synaptic plasticity. (A) Scheme of in vitro induction of hippocampal synaptic plasticity through concurrent Sst inhibition. EPSCs were recorded from CA1 pyramidal neurons. Sst neurons were inhibited via eNpHR that was specifically expressed in Sst-positive neurons. The Schaffer collateral (SC) pathway was activated by a stimulating electrode. (B) Schematic representation of a CA1 pyramidal neuron dendritic compartment E_D with postsynaptic GABA_A, AMPA and NMDA receptors, used to study the disinhibitory mechanisms for induction of plasticity at the SC-CA1 excitatory synapse. The pyramidal cell's dendritic compartment receives one pulse of both glutamate and GABA per minute, except during the disinhibition period, where it only receives pulses of glutamate. Glutamate binds to the excitatory AMPA and NMDA receptors, while GABA binds to the inhibitory GABA_A receptor. (C) Experimental measurements showing the effects of inhibition of Sst and OLMα2 interneurons in s.o. on SC-evoked EPSCs. Inhibition of Sst interneurons from t=5 min to t=10 min enhanced the SC-evoked EPSC amplitude of the CA1 pyramidal cell, followed by a return to baseline after the inhibition period (blue line).
Inhibition of Sst interneurons from t=5 min to t=13 min increased the SC-evoked EPSC amplitude, which remained potentiated after the inhibition period (orange line). (D) Numerical simulation of the normalized EPSCs of E_D for a disinhibition period of 5 minutes (from t=5 min to t=10 min) and 8 minutes (from t=5 min to t=13 min). Normalization of the results was calculated according to the expression (100 + (EPSC − EPSC_min)·(150 − 100))/(EPSC_max − EPSC_min).

Figure 2.5: Calcium dynamics are key for the induction of synaptic plasticity. (A) Time course of the maximal AMPAR conductance, g_AMPA, when the dendritic compartment is disinhibited for a short period (from t=5 min to t=10 min). The maximal AMPAR conductance increases from its initial value g_AMPA = 4 nS to g_AMPA = 6.9 nS during the disinhibition period. (B) Time course of g_AMPA when the dendritic compartment is disinhibited for a long period (from t=5 min to t=13 min). It increases from g_AMPA = 4 nS to g_AMPA = 8.83 nS during the disinhibition period. Changes in the AMPAR conductance g_AMPA are described by equation (2.22). (C) Time course of the intracellular calcium concentration when the dendritic compartment E_D is disinhibited for a short period (from t=5 min to t=10 min), where θ↓ is the depression onset and θ↑ the potentiation onset. (D) Time course of the intracellular calcium concentration when the dendritic compartment is disinhibited for a long period (from t=5 min to t=13 min). The calcium dynamics are described by equation (2.25) (see Methods). (E) Trajectories of the system in the g_AMPA-Ca plane when a pulse of glutamate is paired with a pulse of GABA, for g_AMPA = 6.9 nS and g_AMPA = 8.83 nS, where θ_pot is the potentiation threshold as defined in [START_REF] Shouval | A unified model of nmda receptor-dependent bidirectional synaptic plasticity[END_REF].

Figure 2.6: The weighted ratio (A↑/A↓)_w can accurately be used as a predictor of induction of depression or potentiation. (A) Different values of g_AMPA evoke different levels of depolarization and, consequently, different intracellular calcium concentrations. For a weighted ratio between the calcium areas of AMPAR insertion and removal below 3.00, depression is induced. For a value above 3.00, potentiation is induced. (B) By adding a second source of calcium that becomes activated at t=80 msec, it is possible to have situations where the calcium never crosses the potentiation threshold θ_pot but potentiation is induced. The normalized ratio (A↑/A↓)_w still predicts the outcome in this case.

Figure 2.7: The amplitude of the GABA pulse, GABA_max, and the relative time between GABA and glutamate pulses, ∆t(GABA-Glu), control the direction and efficiency of induction of synaptic plasticity. (A) Depression and potentiation regions for g_AMPA = 4 nS. This is the AMPAR's maximal conductance value used in our simulations before the disinhibition period starts. (B) Depression and potentiation regions for g_AMPA = 6.9 nS, which represents the state of phosphorylation of the AMPAR at the end of the short disinhibition period. (C) Depression and potentiation regions for g_AMPA = 8.83 nS, which is the state of phosphorylation of the AMPAR at the end of the long disinhibition period. For each of the plots (A), (B), and (C) we pair one pulse of glutamate (with a concentration of 1 mM and 1 msec of duration) with one pulse of GABA with a duration of 1 msec and varying concentrations and initial times, and measure the resultant change in g_AMPA for each case.
Figure 2.8: Scheme of the cholinergic and disinhibitory mechanisms that drive SC-CA1 potentiation. (A) Glutamatergic activation of I-cells leads to spiking activity and consequent GABA release. Subsequently, glutamate inputs acting on E_D evoke an EPSP mediated by AMPARs, immediately followed by an IPSP mediated by GABA acting on GABA_A receptors. (B) Cholinergic activation of α7 nAChRs on the OLM interneuron initiates a CICR process mediated by calcium internal stores (IS). This results in GABA release that inhibits the I-cell. The dendritic compartment does not receive GABAergic inhibition. The dendritic compartment can depolarize enough - and remain depolarized for long enough - to relieve the Mg²⁺ block from NMDA receptors, allowing calcium to permeate through the receptor channel.

Figure S4: Before co-pairing, the network only receives one pulse of glutamate. The α7 nAChRs on the OLM cell are not activated, there are no changes in the intracellular calcium concentration (Ca_i) and, consequently, no GABA is released (GABA_O). Glutamatergic activation of the I-cell results in two spikes, and the I-cell inhibits E_D, which cannot depolarize enough. During co-pairing, α7 nAChR activation increases the intracellular concentration Ca_i. GABA_O is released from the OLM cell and inhibits the I-cell. E_D does not receive inhibition, only excitation from glutamatergic stimulation, and depolarizes.

Figure S6: Membrane potential of the I-cell when it receives two pulses of glutamate (with an amplitude of 1 mM and a duration of 3 msec) with a frequency of 0.2 msec⁻¹, and the respective GABA release. The GABA concentration can be described by a square function.

Figure 3.1: Comparison between the full network and the exact reduced system for networks of neurons with distinct dynamics. (A) Membrane potential of spiking neurons with different spiking features. Results were obtained using the Izhikevich two-dimensional QIF neuron model [START_REF] Izhikevich | Simple model of spiking neurons[END_REF] with the adequate choice of parameters (see Table 3.1). (B) Firing rate of populations of uncoupled neurons with different dynamics obtained from simulations of the full and reduced system, and the respective external current.

Figure 3.2: Comparison between the full and reduced system for a population of spiny projection neurons. Spiny projection neurons of the neostriatum and basal ganglia can be described by the two-dimensional QIF neuron model with a = 1 nS/mV, b = 105 nS, c = 2000 nS·mV, C_m = 50 pF, V_r = -80 mV, α = 0.01 msec⁻¹, β = -20 nS, V_peak = 40 mV, V_reset = -55 mV and u_jump = 150 pA (Izhikevich, 2007d). Decreasing the value of u_jump improves the representation of the population activity.

Figure 3.3 (B) depicts the phase portrait of an intrinsically bursting neuron. Starting at point A, we are on the V-nullcline, where by definition dV/dt = 0, and the dynamics is governed by the u-component. Since we are on the left of the u-nullcline, the trajectory follows a downward flux. As u slowly decreases, we reach point B below the V-nullcline, and the fast dynamics in the V direction pushes the system towards V_peak, at which point the system is reset to V_reset. This last process repeats while u slowly increases until it reaches point C, where a voltage reset takes the system to a point above the V-nullcline. In this region, the flux is directed towards the left, which brings the system back to point A.
Figure 3.3: Comparison of the reduced and full system for a class of bursting neurons. (A) Voltage trace of a bursting neuron using the Izhikevich QIF neuron model. Parameters: a = 0.04, b = 5, c = 150, C_m = 1, V_r = -65, α = 0.02, β = 0.32, u_jump = 0. (B) Nullclines, dV/dt = 0 (green line) and du/dt = 0 (yellow line), for a system of a bursting neuron and the respective trajectory (black line) on the phase plane. (C) Nullclines and trajectory of the system on the phase plane when V_reset decreases from -55 to -70 mV. The trajectory of the system no longer shows a bursting behavior. (D) Nullclines and trajectory of the system on the phase plane when b = 6, c = 232, V_r = -80 and V_reset = -70. As a result of the changes in b, c and V_r, the nullclines moved to the left of the phase plane and we recover the trajectory of bursting neurons. (E) Comparison between the full and reduced system for a population of bursting neurons (with b = 6, c = 232, V_r = -80 and V_reset = -70). The reduced system captures some but not all of the structure of the full bursting system.

Figure 4.1: Dynamics of neurons in the entorhinal cortex. (A) Membrane potential of stellate cells using Izhikevich's QIF with recovery variable model. Parameters taken from Izhikevich (2007e). (B) Membrane potential of regular spiking pyramidal cells using Izhikevich's QIF with recovery variable model. Parameters taken from Izhikevich (2007c). (C) Membrane potential of fast-spiking interneurons using Izhikevich's QIF with recovery variable model. Parameters taken from Izhikevich (2007a). All parameters are described in Table 4.1.

Figure 4.2: Comparison between the full network dynamics and the reduced system. Left panel: Schematic illustration of the neural network. The parameter p_ij denotes the connectivity strength of population j onto population i. The external current acting on the different populations is denoted by I_ext. Right panel: (A) Time evolution of the stimulus I_ext. (B) Spiking activity obtained from simulations of the full network. The first 300 neurons are stellate cells (S); the following 300 are inhibitory (I); the last 300 are excitatory (E). (C) Firing rate of the SC obtained from simulations of the full and reduced system. (D) Firing rate of the I-cells obtained from simulations of the full and reduced system. (E) Firing rate of the E-cells obtained from simulations of the full and reduced system. Parameters: N = 3000; ∆ = 15; η = 25; p_SI = p_IS = 50; p_SE = p_ES = 90; p_IE = p_EI = 40; p_II = 55; p_EE = 40.

Figure 4.3: Estimating network connectivity for theta generation in a mean-field model of the EC network using statistical inference. Top left panel: Schematic illustration of the neural network with stellate cells (S), fast-spiking interneurons (I), and pyramidal cells (E). (A) Posterior distribution over 5 connectivity parameters, with I_ext = 100 pA. The means of the parameter posterior distributions represent high-probability parameters (in purple): p_IS, p_SI = 43.9714; p_SE, p_ES = 160.2503; p_IE, p_EI = 34.4222; p_II = 55.4267; p_EE = 84.4322. (B) Network activity generated by posterior samples from a high-probability region (in purple, in (A)). Top panel: Spiking activity obtained from simulations of the full network. We look at the activity of 300 random neurons of each population. The first 300 neurons are stellate cells (S); the following 300 are inhibitory (I); the last 300 are pyramidal cells (E).
Bottom panel: Firing rates of the S, I and E-cell populations obtained from simulations of the reduced mean-field system.

Figure 4.4: System dynamics for changing S-I and E-I connectivities. (A) Bifurcation diagram of the S and E firing rates with the connectivity between the E and I populations, p_IE, p_EI, as the bifurcation parameter. The system is in an oscillatory regime for p_IE, p_EI > 83.26. Bottom panel: Frequency of the network's stable limit cycle (83.26 < p_IE, p_EI) as a function of the I-E connectivity. The stable limit cycle always has an oscillation frequency in the theta range (grey area). (B) Bifurcation diagram of the S and E firing rates with the connectivity between the S and I populations, p_SI, p_IS, as the bifurcation parameter. The system is in an oscillatory regime for p_SI, p_IS > 133.6. Bottom panel: Frequency of the network's stable limit cycle (133.6 < p_SI, p_IS) as a function of the I-S connectivity. The stable limit cycle always has an oscillation frequency in the theta range (grey area). Remaining parameters from the high-probability region (from Figure 4.3 (A), in purple). HB: Hopf bifurcation.

Figure 4.5: Dynamical analysis of the reduced S-E network. Top panel: Schematic illustration of the S-E sub-network. (A) Network activity generated by posterior samples from a high-probability region of Figure 4.3 (A) (in purple): p_SE, p_ES = 160.2503 and p_EE = 84.4322, with p_SI, p_IS = p_EI, p_IE = p_II = 0 and I_ext = 100 pA. Top panel: Spiking activity obtained from simulations of the full network. We look at the activity of 300 random neurons of the S and E populations. The first 300 neurons are stellate cells (S); the last 300 are pyramidal cells (E). Bottom panel: Firing rates of the S and E-cell populations obtained from simulations of the reduced mean-field system. (B) Power spectrum of the S and E population firing rates, calculated over 10 seconds. The power spectrum of both populations shows a maximum at 7 Hz. (C) Bifurcation diagrams of the firing rates of the S and E populations. The system is in an oscillatory regime for HB1 < I_ext < HB2, with HB1 = 44.77 pA and HB2 = 110.3 pA (HB: Hopf bifurcation). (D) Frequency of oscillations of the network in the stable limit cycle regime (HB1 < I_ext < HB2). For I_ext > 58 pA the system oscillates with a frequency in the theta range (grey area).

Figure 4.6: Exploring the reduced EC network's dependence on connectivity parameters. Stability region of the S-E subnetwork for I_ext = 100 pA, constrained by the S-E and E-E connections (grey area). The power and frequency of the oscillations produced change as we move inside the stability region. Coordinates: (1) p_EE = 0 and p_SE, p_ES = 200; (2) p_EE = 44 and p_SE, p_ES = 173; (3) p_EE = 84 and p_SE, p_ES = 160; (4) p_EE = 84 and p_SE, p_ES = 125; (5) p_EE = 100 and p_SE, p_ES = 124.

Figure 4.7: Estimating the connectivity of the EC network using statistical inference for the generation of type 1 (8-12 Hz) and type 2 (4-7 Hz) theta oscillations with I_ext = 100 pA. (A) Posterior distribution over the S-E-I network connectivity parameters for the generation of type 1 theta rhythm. High-probability parameters (in purple): p_SI, p_IS = 46.0033, p_SE, p_ES = 161.6447, p_IE, p_EI = 30.5906, p_II = 97.5407, p_EE = 139.4172. (B) Posterior distribution over the S-E-I network connectivity parameters for the generation of type 2 theta rhythm.
High-probability parameters (in purple): p_SI, p_IS = 46.8981, p_SE, p_ES = 156.8319, p_IE, p_EI = 40.3033, p_II = 55.3938, p_EE = 69.4229.

Figure 4.8: Inference of connectivity parameters that enable the generation of type 2 theta for different EC subnetwork configurations. (A) Left panel: Posterior distributions of the connectivity parameters of the S-E subnetwork. Even though the high-probability region is not well defined, we consider the following parameter samples (in purple): p_SE, p_ES = 106.7219 and p_EE = 51.1498. Right panel: Spiking activity from simulations of the full network (the first 300 neurons are pyramidal cells, and the last 300 neurons are stellate cells), and firing rates of the S and E populations from simulations of the reduced mean-field model, with parameters from the high-probability region.

Figure 4.10: Inference of connectivity parameters that enable the generation of type 1 theta, for different configurations of the EC network. (A) Inference of the connectivity parameters of an S-E-I network with instantaneous synapses, subjected to an external current I_ext = 150 pA. We consider parameters sampled from the high-probability region (in purple): p_IS, p_SI = 27.0279; p_SE, p_ES = 141.1522; p_IE, p_EI = 44.3601; p_II = 109.0740; p_EE = 128.0583. (B) Inference of the connectivity parameters of an S-E-I network with instantaneous synapses and a slow S-to-I synapse (τ_s = 100 msec), subjected to an external current I_ext = 150 pA. We consider parameters sampled from the high-probability region (in purple): p_IS, p_SI = 27.8001; p_SE, p_ES = 157.5060; p_IE, p_EI = 21.0786; p_II = 45.0699; p_EE = 82.0658; p_IS,slow = 5.3456. (C) Inference of the connectivity parameters of an S-E network with instantaneous synapses, subjected to an external current I_ext = 100 pA. We consider parameters sampled from the high-probability region (in purple): p_SE, p_ES = 149.6219; p_EE = 135.3354. (D) Inference of the connectivity parameters of an S-E network with instantaneous synapses, subjected to an external current I_ext = 150 pA. p_SE, p_ES = 144.0672; p_EE = 119.2519.

Figure S2: Estimating network connectivity in a mean-field model of the EC network using statistical inference (with recurrent S-S connections). Left panel: Schematic illustration of the neural network. Right panel: Posterior distribution over 6 parameters, with I_ext = 100 pA. Means of the parameter posterior distributions: p_IS, p_SI = 46.3876; p_SE, p_ES = 149.9954; p_IE, p_EI = 39.5836; p_II = 49.6283; p_EE = 68.8112; p_SS = 48.1571. Adding recurrent S-S connections (p_SS) did not significantly change the posterior distribution of the remaining connectivity parameters (see Figure 4.3 (A) for comparison).

Figure 5.1: Comparison between the full network dynamics and the reduced system. Left panel: Schematic illustration of the neural network. The parameter p_ij denotes the connectivity strength of population j onto population i. The external current acting on the different populations is denoted by I_ext. Right panel: (A) Time evolution of the stimulus I_ext. (B) Spiking activity obtained from simulations of the full network, where we randomly selected 300 neurons from each population. The first 300 neurons are OLM cells (O); the following 300 are fast-spiking inhibitory cells (I); the last 300 are pyramidal cells (E). (C) Firing rate of the O-cells obtained from simulations of the full and reduced system.
(D) Firing rate of the I-cells obtained from simulations of the full and reduced system. (E) Firing rate of the E-cells obtained from simulations of the full and reduced system. Parameters: N = 3000; ∆ = 15; η = 25; p_OI = p_IO = 50; p_OE = p_EO = 90; p_IE = p_EI = 60; p_II = 30; p_EE = 60.

Figure 5.2: Inference of the connectivity parameters that enable resonance to a periodic input with theta frequency when the hippocampal network is subjected to the pairing protocol. During pairing, all three populations (O, I and E) receive an external current I_ext. We fix the O-I connection p_IO to a high value (70); this follows the results obtained in Chapter 2. (A) Posterior distribution over 4 connectivity parameters (p_IE, p_EI, p_II and p_EE). We sampled parameters from the high (purple line) and low (pink line) probability regions: p_IE = 20.2821, p_EI = 64.5788 (purple line) or 30 (pink line), p_II = 45.3345, p_EE = 28.8861. Using a periodic input I_per = 8sin(wt) + 8, where w defines the frequency of the input, we estimated the power spectrum of the E-cell activity obtained using parameters from the high- and low-probability regions. (B) Posterior distribution over 3 connectivity parameters (p_OI, p_OE and p_EO). The remaining connectivity parameters were fixed to the values sampled from their low-probability region (p_IE = 20.2821, p_EI = 30, p_II = 45.3345, p_EE = 28.8861). We sampled from their posterior the parameters from the high-probability region, p_OI = 51.8445, p_OE = 47.0968 and p_EO = 49.0425 (pink line), and calculated the respective power spectrum.

Figure 5.3: Inference of the connectivity parameters that enable resonance to a periodic input with theta frequency when the hippocampal network is subjected to the no-pairing protocol. During the no-pairing protocol, the E and I populations receive an external current I_ext, while the O-cells do not receive any extrinsic stimulus. (A) Posterior distribution over 4 connectivity parameters (p_IE, p_EI, p_II and p_EE). We sampled parameters from the high (purple line) and low (pink line) probability regions: p_IE = 30.5809, p_EI = 71.5955 (purple line) or 35 (pink line); p_II = 49.8003; p_EE = 34.7191. Using a periodic input I_per = 8sin(wt) + 8, we estimated the power spectrum of the E-cell activity obtained using parameters from the high- and low-probability regions. (B) Posterior distribution over 4 connectivity parameters (p_OI, p_IO, p_OE and p_EO). The remaining connectivity parameters were fixed to the values sampled from their low-probability region (p_IE = 30.5809, p_EI = 35; p_II = 49.8003; p_EE = 34.7191). We sampled from their posterior the parameters from the high-probability region, p_OI = 40.7411, p_IO = 50.9074, p_OE = 72.2952, p_EO = 79.4703 (pink line), and calculated the resultant power spectrum.

Figure 5.4: Evaluating the effect that an increase of I_ext acting on the E-cells and of p_OE has on the mechanisms of hippocampal theta rhythm generation when the hippocampal network is subjected to the co-activation protocol. (A) Schematic illustration of the O-I-E network when subjected to the co-activation protocol paired (1) or not (2) with a periodic input with theta frequency acting on the E-cells. (B) Two-parameter bifurcation diagram of the E-I-O system, with the external current acting on the E-cells, I_ext, and the strength of the connection between the O and E populations, p_OE, as bifurcation parameters.
We simulate the activity of the E-cells when the system is in the non-oscillatory or oscillatory region, with or without paired periodic inputs.

Figure 6.1: Macroscopic phase-response curve (mPRC) of the stellate cell (S) and pyramidal cell (E) populations, obtained using direct simulations and the adjoint method. Both populations have a biphasic mPRC, indicating that periodic excitatory inputs into the S and E-cells facilitate the entrainment of the circuit [START_REF] Acker | Synchronization of strongly coupled excitatory neurons: relating network behavior to biophysics[END_REF].

Table 2.1: Parameters of the pyramidal cell, OLM interneuron, and fast-spiking interneuron dynamics.

Table 2.2: Parameter values of the synaptic currents I_AMPA, I_NMDA, I_GABA_A and I_α7. The values indicated with † refer to the conductances of postsynaptic channels on the fast-spiking interneurons, while the ones noted with ‡ refer to the conductances of the dendritic compartment E_D.

η(Ca) = (P1/(P2 + Ca^P3) + P4)^(-1)

Table 2.3: Parameter values for calcium dynamics and synaptic plasticity. The values indicated with § were used to reproduce Figures 2.2, S4, S5, and S7. The values indicated with ¶ were used to reproduce the remaining figures (see appendix A for more details). If this ratio is below 3.0, depression is induced in our model; if the ratio is above 3.0, potentiation is induced.

Table 3.1: Parameter values of the two-dimensional QIF neuron model for neurons displaying different firing properties. Parameters adapted from [START_REF] Izhikevich | Simple model of spiking neurons[END_REF].

Table 4.1: Parameter values of the two-dimensional QIF neuron model for entorhinal stellate cells (S), fast-spiking interneurons (I), and pyramidal cells (E). Parameters adapted from Izhikevich (2007e).
                O-cells   I-cells   E-cells
a (nS/mV)       0.75      1         0.5
b (nS)          78.75     98        52.5
c (nS/mV)       2025      2320      1350
C_m (pF)        200       40        50
V_r (mV)        -60       -58       -60
α (msec⁻¹)      0.01      0.11      0.02
β (nS)          15        1.2       0.5
u_jump (pA)     0         0         50
V_peak (mV)     30        30        30
V_reset (mV)    -50       -65       -60

Table 5.1: Parameter values of the two-dimensional QIF neuron model for CA1 hippocampal OLM cells (O), fast-spiking interneurons (I), and pyramidal cells (E). Parameters adapted from Izhikevich (2007e).

We use a simulation-based inference algorithm that implements SNPE (Sequential Neural Posterior Estimation) to infer the connectivity parameters of the O-I-E network (p_OI, p_IO, p_OE, p_EO, p_IE, p_EI, p_II, p_EE) that enable it to resonate to entorhinal oscillatory theta inputs. A detailed description of the inference algorithm can be found in Chapter 4, section 4.2.

Connections between septal and CA3 neurons also exist [START_REF] Amaral | An analysis of the origins of the cholinergic and noncholinergic septal projections to the hippocampal formation of the rat[END_REF], but they go beyond the scope of this project. By considering V_peak = −V_reset = ∞, the resetting rule still captures the spike reset as well as the refractoriness of the neurons. Adding S-S connections in our model did not significantly alter the posterior of the remaining connectivity parameters (Figure S2). We later investigate how adding synapses with slow dynamics affects our results (section 4.3.3).

Cognitive deficits in AD, such as memory impairment, are caused in part by dysfunctional cholinergic action on hippocampal GABAergic interneurons [START_REF] Schmid | Dysfunction of somatostatin-positive interneurons associated with memory deficits in an alzheimer's disease model[END_REF][START_REF] Haam | Cholinergic modulation of the hippocampal region and memory function[END_REF]. Here, we have shown that a decrease in the conductance of cholinergic α7 nAChRs on OLM interneurons impaired the induction of hippocampal synaptic plasticity. Bear in mind that so far we had only considered instantaneous synapses. Adding a slow synapse only improved convergence to a solution in the S-E-I network. More specifically, adding slow synapses between S and E cells in the S-E network did not lead to the generation of low-frequency theta.

C. Supplementary Figures

In summary, if the external current I_ext is small enough (for example, 100 pA), both the S-E and S-E-I networks are capable of generating low-frequency oscillations with similar power (Figure 4.9 (A) and (B)). However, as we increase I_ext to 150 pA, only the S-E-I network can generate low-frequency theta oscillations.

In summary, we propose a multi-circuit mechanism for the generation of theta oscillations in a septal-hippocampal-entorhinal network, where the three brain regions play an active role in the induction and expression of the theta rhythm. Cholinergic inputs regulate hippocampal excitability, which acts as a gate that permits theta oscillations to arise in the EC circuit and spread to the hippocampus, thus closing the entorhinal-hippocampal loop.

KEYWORDS: theta rhythm; hippocampus; cholinergic receptors
04116912
en
[ "scco.neur" ]
2024/03/04 16:41:26
2019
https://hal.science/hal-04116912/file/Lounis_29468.pdf
Christophe Antony Lounis, Vsevolod Peysakhovich, Mickaël Causse. Flight Eye Tracking Assistant (FETA): Proof of Concept. Keywords: Eye-Tracking, Aviation, Human Factors, Human Computer Interaction, Neuroergonomics, Flying Assistant, Assistive Technology.

Introduction

Over the past 50 years, continuous technical and technological improvements in commercial aviation have made it one of the safest modes of transportation [START_REF]Boeing. Statistical Summary of Commercial Jet Airplane Accidents Worldwide Operations, 1959-2014[END_REF]. Progress in cockpit systems and aircraft design [START_REF] Sarter | From tool to agent": The evolution of (Cockpit) automation and its impact on human-machine coordination[END_REF], in pilot training, and in flight crew and air-traffic control procedures remains essential to maintain a low accident rate despite ever-increasing traffic [START_REF] Button | Wings across Europe: towards an efficient European air transport system[END_REF]. Nevertheless, accidents still occur, and a large part of them involve human error (approximately 60 to 80 percent of accidents), as shown in Figure 1. One critical solution provided by the industry to reduce crew workload [START_REF] Lee | Human factors and ergonomics in automation design[END_REF] and to deal with human errors [START_REF] Chialastri | Automation in aviation[END_REF][START_REF] Lee | Human factors and ergonomics in automation design[END_REF] has been the introduction of automation. However, automation also shifted the role of the crew from controllers to supervisors [START_REF] Mumaw | Analysis of pilots' monitoring and performance on an automated flight deck[END_REF]. Unfortunately, automation is not always fully understood nor correctly monitored [START_REF] Parasuraman | Complacency and bias in human use of automation: An attentional integration[END_REF]. It can induce complacency and overconfidence, and airline pilots sometimes rely too much on it [START_REF] Wickens | Complacency and automation bias in the use of imperfect automation[END_REF]. Therefore, increasing the pilots' ability to stay in the loop, in particular by promoting appropriate monitoring of the cockpit instruments, is a major current safety challenge [START_REF] Berberian | The out-of-the-loop Brain: a neuroergonomic approach of the human automation interaction[END_REF]. This is particularly true during the approach and landing phases, critical periods of a flight in which the safety margin and the tolerance on flight parameter deviations are very low [START_REF] Sumwalt | Examining How Breakdowns in Pilot Monitoring of the Aircraft Flight Path[END_REF]. A report of the active pilot monitoring working group published by the Flight Safety Foundation [START_REF] Sumwalt | Examining How Breakdowns in Pilot Monitoring of the Aircraft Flight Path[END_REF] investigated 188 cases involving monitoring issues leading to accidents. It found that 66% of monitoring errors occurred during dynamic phases of flight (e.g., climb, descent, approach, and landing). It also identified the failure to cross-check instruments as a major cause of those monitoring errors, resulting in excessive deviations of the flight parameters (e.g., altitude, trajectory, or speed deviations).
Recently, the Federal Aviation Administration (FAA) published a final training rule that requires enhanced pilot monitoring training to be included in existing air carriers' training programs [START_REF]A Practical Guide for Improving Flight Path Monitoring: Final Report of the Active Pilot Monitoring Working Group[END_REF]. Furthermore, the French accident investigation bureau (Bureau Enquêtes et Analyses, BEA) suggested analyzing pilots' monitoring with eye tracking to improve piloting procedures [START_REF][END_REF]. In this sense, a recent paper described the different ways in which eye tracking could be used to assist commercial pilots during the flight [START_REF]Study on Aeroplane State Awareness during Go-Around[END_REF]. Based on the latter studies, we propose the FETA system. It compares the current visual scan of a pilot with a database of "standard" visual circuits. If the current visual scan deviates too much from the database (e.g., the speed indicator has not been fixated for too long a period), FETA emits a vocal alarm (e.g., "check speed"). The current paper describes the development and the evaluation of the FETA (Flight Eye Tracking Assistant) system [START_REF] Peysakhovich | The neuroergonomics of aircraft cockpits: the four stages of eye-tracking integration to enhance flight safety[END_REF]. In particular, we evaluated the impact of FETA on situation awareness [START_REF] Lounis | Intelligent cockpit: eye tracking integration to enhance the pilot-aircraft interaction[END_REF], subjective workload, flight performance, and visual scans.

FETA system development

The main purpose of the FETA system is to warn the pilot when he does not look sufficiently at an instrument. In order to calibrate the "not sufficiently", i.e. the threshold beyond which visual scans become "abnormal", we built a database of standard visual circuits in the cockpit with a sample of 16 airline pilots. They performed approach-landing phases in a flight simulator while their eye movements were recorded. We also ensured that their flight performance remained within standard safety thresholds.

Participants

Sixteen male professional airline pilots (ATPL: Airline Transport Pilot License or CPL: Commercial Pilot License) volunteered to participate in this study. Their mean age was 34 years old (range: 23-59). Their total flight experience ranged from 1,600 to 13,000 hours (M = 4,321.73 hrs, SD = 2,911.41 hrs). They were not paid for their participation. They had normal or corrected-to-normal vision. The experiment was approved by the Research Ethics Committee (CER, n°2019-131).

Procedure

Each participant signed a consent form and provided demographic information, their flight qualifications (type of aircraft), and their total flight experience hours. Pilots were briefed on the study and received instructions about the flight scenario and the goal of this experiment. They filled out a fatigue questionnaire. Next, pilots were installed in the flight simulator and underwent the eye-tracking calibration procedure. Participants took the captain position and performed a training session consisting of two approach-landing scenarios in order to familiarize themselves with the flight simulator. Then, they performed the two experimental approach-landing scenarios.

Flight simulator

The study was conducted in the PEGASE (Platform for Experiments on Generic Aircraft Simulation Environment) flight simulator of ISAE-SUPAERO (Toulouse, France), illustrated in Figure 2. It simulates an Airbus A320 with a glass cockpit.
The simulator includes pilots' seats, sidestick controllers, throttles, trim wheels, and rudder pedals.

Eye-tracking measurements

Eye-tracking data were collected with a Smart Eye system embedded in the cockpit. The system consists of 5 remote cameras (0°-2° of accuracy), with a sampling frequency of 60 Hz. Furthermore, the cockpit was divided into several Areas of Interest (AOIs), as presented in Figure 3. They correspond to the main flight displays. These AOIs are used by the FETA system to evaluate the current visual scans online. We also used these AOIs during the human factors evaluation to examine the impact of FETA on visual scans. The threshold for detecting a fixation on an AOI was set at 100 ms [17].

Experimental conditions

The 16 pilots performed the same flight scenario twice. The flight scenario consisted of a manual approach-landing task to Toulouse-Blagnac Airport runway LFBO 14R. The flight began at longitude 1.2159 and latitude 43.7626. During the scenario, the pilot had to comply with specific instructions, in particular: maintain a vertical speed between +500 ft/min and -800 ft/min, a speed of 130 knots, and a heading of 143°.

Flight parameters

Firstly, we checked the flight performance of the pilots, assuming that correct flight performance is likely correlated with efficient cockpit monitoring. Figure 5 shows the mean flight parameter deviations for vertical speed, speed, and heading during the landing task. The flight performance of each pilot was adequate. The average vertical speed was in the correct range, and the average speed and average heading were very close to the target values.

Visual Behavior Database and notification threshold

The Visual Behavior Database (VBD) was established from the eye recordings of the 16 pilots who performed the two approach-landing scenarios. Mean non-dwell times were calculated for each AOI. While dwell times represent the time during which an individual gazes inside an AOI [START_REF] Russi-Vigoya | Analysis of pilot eye behavior during glass cockpit simulations[END_REF], non-dwell times correspond to the period of time during which an individual does not look at an AOI (see Figure 6). We used the non-dwell times of the 16 expert pilots as the metric for the FETA notification thresholds. More precisely, the threshold for each AOI consisted of the average non-dwell time plus one standard deviation, as given in equation (1):

φ_threshold = μ_NDT + σ_NDT   (1)

This metric indicates the maximum non-dwell time tolerance for each AOI (i.e., the value beyond which insufficient monitoring is diagnosed).
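A minimal sketch of the notification logic follows: per-AOI thresholds are computed from the recorded non-dwell times (equation 1), and an alert is raised when the elapsed time since the last fixation on an AOI exceeds its threshold. This is an illustration of the principle only, not the actual C# implementation; the data structures, names, and example values are ours.

```python
import numpy as np

def thresholds(non_dwell_times):
    # Equation (1): threshold = mean non-dwell time + one standard deviation.
    return {aoi: float(np.mean(t) + np.std(t))
            for aoi, t in non_dwell_times.items()}

# Example Visual Behavior Database (seconds), illustrative values only.
vbd = {
    "speed":          [2.1, 3.4, 2.8, 4.0],
    "vertical_speed": [3.0, 5.1, 4.2, 4.8],
    "heading":        [4.5, 6.0, 5.2, 5.8],
}
limits = thresholds(vbd)

def check(last_fixation, now):
    # Emit one alert per AOI whose current non-dwell time exceeds its limit.
    for aoi, t_last in last_fixation.items():
        if now - t_last > limits[aoi]:
            print("check %s" % aoi.replace("_", " "))   # vocal alarm stub

check({"speed": 10.0, "vertical_speed": 13.5, "heading": 9.0}, now=16.0)
```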
FETA interface
Besides the Visual Behavior Database, the eye tracking system, and the vocal alarms, FETA also includes an application that allows the activity to be visualized from outside the cockpit. Coded in C#, the FETA interface offers several features, shown in Figure 7 (a sketch of the corresponding alerting logic is given after the list):
1. AOI Monitoring Panel (on the left of Figure 7). It shows the state of each AOI. The color turns from green to blue when the AOI is not monitored enough according to the VBD.
2. Show Timer (center of Figure 7). The user can tick the box of any of the AOIs to see its timer, which shows the elapsed duration since the last monitoring (in seconds).
3. AOI Heat Map Panel (on the right of Figure 7). This heat map panel indicates the proportion of fixation times on each AOI since the beginning of the flight.
4. Timer (center of Figure 7). This feature shows the elapsed time since the beginning of the simulation in seconds.
5. AOI Text Alert and Current Area of Interest Annunciator (at the bottom left of Figure 7). The AOI Text Alert shows the name of the AOI that needs to be monitored. If more than one AOI needs to be monitored, this alert only shows the name of the AOI with the highest priority. The Current Area of Interest Annunciator shows the currently monitored AOI.
6. Flight Parameter Indicators (at the bottom left of Figure 7). This feature shows the several flight parameters that affect the dynamics of the database.
7. Start/Stop Tracking Button and Show/Hide Heat Map Button (centred at the bottom of Figure 7). The Start/Stop Tracking Button starts or stops FETA, while the Show/Hide Heat Map Button shows or hides the AOI heat map.
8. Audio Alarm (cannot be shown). FETA emits an audio alarm that corresponds to the AOI Text Alert (e.g., "check speed").
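The alerting behavior of features 1, 5, and 8 can be sketched as follows, a simplified outline assuming gaze fixations arrive as (timestamp, AOI) pairs from the eye tracker; the priority ordering and the speak callback are illustrative assumptions, not the actual C# implementation:

```python
def monitor(fixations, thresholds, priority, speak):
    """Track the elapsed time since each AOI was last fixated; when one
    or more AOIs exceed their notification threshold, alert only the
    highest-priority one (as the AOI Text Alert does)."""
    last_fixated = {aoi: 0.0 for aoi in thresholds}
    for t, aoi in fixations:              # t in seconds, aoi an AOI name
        if aoi in last_fixated:
            last_fixated[aoi] = t
        overdue = [a for a in thresholds
                   if t - last_fixated[a] > thresholds[a]]
        if overdue:
            worst = min(overdue, key=priority.get)  # rank 0 = highest priority
            speak(f"check {worst}")       # e.g. the "check speed" audio alarm
            last_fixated[worst] = t       # reset so the alarm is not repeated
```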
FETA system assessment
The second part of this paper focuses on the evaluation of the FETA system, in particular its impact on mental workload, situation awareness, flight performance, and cockpit monitoring. As a preliminary assessment, five pilots were submitted to three different scenarios varying in terms of monitoring difficulty.

Participants
Five male professional pilots (ATPL, CPL) volunteered to participate in this study. They had normal or corrected-to-normal vision. Their mean age was 39 years old (range: 33-50). Their total flight experience ranged from 2,500 to 8,500 hours (M = 3,176 hrs, SD = 2,645 hrs). Pilots were not paid for their participation. The experiment was approved by the Research Ethics Committee (CER, n°2019-131).

Procedure
The procedure was essentially the same as during the FETA calibration, except that the five new pilots performed four additional landings. During this evaluation, FETA auditory notifications (in case of abnormal monitoring) were restricted to three instruments: speed, vertical speed, and heading. These instruments were chosen because they corresponded to the flight parameter values that pilots had to maintain. The possible auditory alarms emitted by FETA were: "check speed", "check vertical speed", and "check heading".

Apparatus
This experiment also took place in the PEGASE flight simulator, using the same eye tracking system.

Experimental conditions
Pilots performed three different randomized landing scenarios, each twice. The first scenario (Scenario 1) was identical to the one performed by the pilots for the building of the VBD. In the second and third scenarios, we increased monitoring difficulty: pilots were asked to read aloud the distance between the aircraft and a specific radio beacon (information displayed in the ND-zone), either every 0.5 Nm (Scenario 2) or every 0.2 Nm (Scenario 3). The pilots had to comply with the same speed, vertical speed, and heading constraints as during the VBD building. At the end of the simulation, pilots filled out two subjective questionnaires: situation awareness was measured using SART [START_REF] Goldberg | Eye tracking in web search tasks: design implications[END_REF], and workload using the Instantaneous Self-Assessment (ISA) [START_REF] Tattersall | An experimental evaluation of instantaneous selfassessment as a measure of workload[END_REF], a subjective scale ranging from 1 to 5 that assesses overall workload. After the flight scenarios, open interviews were conducted to gather the pilots' various opinions of the system.

Human factors assessment
Due to the low number of participants, we only present descriptive statistics for subjective assessments and flight performance. However, eye tracking data allowed us to use inferential statistics for the comparison with and without FETA, in particular by taking the difficulty of the scenarios (1, 2, 3) into account as a covariate.

Subjective results
Figure 9 shows the SART results. A higher SART score indicates better situation awareness. On average, FETA seemed to disturb situation awareness when the flying context was easy (Scenarios 1 and 2), but it tended to be the opposite when the flying context was more complex (Scenario 3). As presented in Figure 9 (right), the ISA workload indicator did not show a marked difference with or without the FETA system. However, in an easy flying context (Scenario 1), the FETA system seemed to induce more workload, and this trend was reversed when the flying context was more difficult (Scenarios 2 and 3).

Flight performance results
Figure 8 shows the flight parameter deviations. During the easy scenario (Scenario 1), pilots had higher speed deviations with FETA than without. Concerning the heading in the difficult condition (Scenario 3), pilots had on average lower heading deviations with the FETA system than without.

Eye tracking results
Figure 10 shows the percentage of dwell times on each AOI for all scenarios with and without the FETA system. The Wilcoxon-Mann-Whitney nonparametric test shows a significant effect (p < 0.05) of the FETA vs. without-FETA condition on the dwell times for the speed, vertical speed, heading, flight mode annunciator, and out-the-window AOIs.

Discussion and conclusions
The purpose of this study was, on the one hand, to present the concept and the development of a flight eye-tracking assistant (FETA) calibrated with eye-movement recordings from 16 airline pilots. On the other hand, we also proposed a user-centered evaluation (e.g., situation awareness, mental workload) of the first version of this assistant, together with an assessment of its impact on cockpit monitoring. This evaluation was performed with 5 other airline pilots. Overall, this first version of FETA demonstrated mixed results. First, results showed no clear improvement of the flight parameters that had to be maintained during the landing (speed, vertical speed, and heading). There was an increased speed deviation during the easier landing and, on the contrary, an improvement in heading accuracy during the most difficult landing scenario. Consistently, subjective results tended to show that FETA was not detrimental only when the flight scenario was difficult. In particular, situation awareness seemed slightly improved by FETA in Scenario 3. Eye tracking results were more favorable to FETA, with an increase in the time spent on the instruments subject to FETA audio notifications in case of insufficient visual consultation. In the presence of FETA, pilots checked the speed, the vertical speed, and the heading more often. This additional time gazing at these instruments reduced the time spent looking out the window. Most likely, FETA was efficient at redirecting attention toward the critical flight instruments thanks to the vocal alarm triggered when the visual circuit deviated too much from the database. Despite this positive result, our experiment shed light on several issues that should be addressed in the future.
Open interviews with the pilots revealed some areas of improvement. For example, the auditory modality is not necessarily the best one: this channel is already used by the synthetic voice in the cockpit, and also during the exchanges between pilots and air traffic control. To overcome this problem, other notification methods could be explored, such as visual and/or haptic modalities. Another important improvement would be to integrate both flight parameter values and eye movements in FETA. Indeed, it would be more appropriate to trigger notifications when both visual scans and flight parameters deviate too much from standards, for example when the speed declines too fast. This would help avoid triggering spurious notifications (useless auditory notifications from FETA), which was one of the main problems raised by the pilots during the debriefing. More generally, the FETA system should consider other eye tracking metrics for the landing task; for example, it could analyze the visual patterns (transitions between AOIs, not only the fixations on each AOI) and correct them when they deviate from established standards, using artificial intelligence. Furthermore, FETA could take into account other flight phases, automatically identified from the flight data (e.g., altitude, speed, flight mode). This would make it possible to adapt the eye-tracking metrics to the given flight phase. For example, cockpit monitoring is much less intense during the cruise, but this phase is more prone to drowsiness or fatigue. FETA could integrate metrics based on the percentage of eye closure [START_REF] Taylor | Situational awareness rating technique (SART): The development of a tool for aircrew systems design[END_REF] or on the frequency of eye blinks [START_REF] Sommer | Evaluation of PERCLOS based current fatigue monitoring technologies[END_REF]. Future studies should consider these improvements and assess FETA during complex flight phases with a higher number of pilots.

Fig. 1. Aeronautical accidents from 1969 to 2019 with human factors, technical failure, and other causes as contributory factors. These data were retrieved from the Bureau of Aircraft Accidents Archives (www.baaa-acro.com).
Fig. 2. The PEGASE flight simulator used during FETA development and assessment.
Fig. 3. Cockpit display with AOIs and sub-AOIs: (1) Primary Flight Display (PFD), (2) Navigation Display (ND), (3) Electronic Centralized Aircraft Monitoring (ECAM), (4) Out of Window (OTW), (5) Flight Control Unit (FCU), (6) Flight Mode Annunciator (PFD.FMA), (7) Speed Tape (PFD.SPD), (8) Attitude Indicator (PFD.ATT), (9) Vertical Speed Tape (PFD.VS), (10) Heading Tape (PFD.HDG), (11) VOR tag reading area in ND (ND-zone).
Fig. 4. The landing scenario with the flight parameter values that pilots had to maintain.
Fig. 5. Violin plot of flight parameter deviations during the landing task. The red lines correspond to target values, as given by the experimenter before the flight scenarios. N = 16.
Fig. 6. Violin plot of non-dwell times during the landing task for the main AOIs. N = 16.
Fig. 7. FETA interface with its 7 different features.
Fig. 9. Left: SART results (the higher the value, the better the situational awareness); Right: ISA results (the lower the value, the lower the subjective workload). All three scenarios with and without the FETA system are shown. N = 5.
Fig. 8. Root Mean Square (RMS) of the flight parameters for each scenario with and without FETA (the higher the value, the lower the performance). N = 5.
Fig. 10. Bar plot of the percentage of dwell times on each AOI. All three scenarios with and without the FETA system are shown. N = 5. (*p < 0.05, Wilcoxon-Mann-Whitney test).

Acknowledgments
This work was supported by a chair grant from Dassault Aviation ("CASAC", holder: Prof. Mickaël Causse). The authors thank the PEGASE simulator technical team and all the pilots who participated in this study.
04116940
en
[ "sdv", "sdv" ]
2024/03/04 16:41:26
2023
https://hal.science/hal-04116940/file/Brenet%20A%20et%20al.pdf
Alexandre Brenet, Julie Somkhit, Zsolt Csaba, Sorana Ciura, Edor Kabashi, Constantin Yanicostas, Dr. Nadia Soussi-Yanicostas

Microglia mitigate neuronal activation in a zebrafish model of Dravet syndrome

Keywords: Microglia, Epilepsy, Seizures, Calcium imaging, Zebrafish, Cytokines, Neuroprotection

It has been known for a long time that epileptic seizures induce brain neuroinflammation through microglia activation. Nevertheless, these cells have not yet received the attention they deserve in epilepsy research, and the consequences of this microglial response on subsequent neuronal activity remain poorly understood. Here, we sought to fill this gap and gain a broader understanding of the role of microglia in the pathophysiology of epilepsy, using an established zebrafish Dravet syndrome epilepsy model based on Scn1Lab sodium channel loss-of-function, combined with live microglia and neuronal Ca2+ imaging, local field potential (LFP) recording, and genetic microglial ablation. First, in scn1Lab-deficient larvae experiencing epileptiform seizures, microglia displayed morphological and biochemical features suggesting M1-like pro-inflammatory activation, including reduced branching, amoeboid-like morphology, and a marked increase in the number of microglia expressing the pro-inflammatory cytokine Il1β. More importantly, scn1Lab-KD larvae fully lacking microglia showed significantly increased neuronal activation compared with that seen in scn1Lab-KD individuals with microglia, as shown by LFP recording and Ca2+ imaging, and also by the epileptiform seizure-related whirling swimming of the larvae. These findings provide evidence that, despite microglial activation and the synthesis of pro-inflammatory cytokines, microglia provide neuroprotection to epileptic neuronal networks, making these cells a promising therapeutic target in epilepsy.

Introduction
Epilepsy is the most frequent neurological disorder, affecting more than 1% of the world's population [START_REF] Fisher | ILAE Official Report: A practical clinical definition of epilepsy[END_REF][START_REF] Fisher | Redefining epilepsy[END_REF]. The disease is characterized by recurrent seizures caused by synchronized hyperactivation of neuronal networks of various sizes, ranging from a limited brain region to the whole cortex, leading to focal or generalized seizures, respectively [START_REF] Thurman | Standards for epidemiologic studies and surveillance of epilepsy[END_REF][START_REF] Stafstrom | Seizures and epilepsy: An overview for neuroscientists[END_REF] (World Health Organization, 2019). Potent anti-epileptic drugs (AEDs) have been developed over the last decades. However, they are ineffective in approximately one-third of patients, especially children with developmental epileptic encephalopathies, such as Dravet syndrome (DS) [START_REF] Brunklaus | Comorbidities and predictors of healthrelated quality of life in Dravet syndrome[END_REF][START_REF] Kwan | Drug-Resistant Epilepsy[END_REF][START_REF] Wu | Incidence of dravet syndrome in a US population[END_REF]. In addition, they can cause significant side effects, such as teratogenicity in pregnant women [START_REF] Xue-Ping | Risk factors for drug-resistant epilepsy: A systematic review and meta-analysis[END_REF][START_REF] Marxer | A review of the evidence on the risk of congenital malformations and neurodevelopmental disorders in association with antiseizure medications during pregnancy[END_REF]. These limitations emphasize the need for novel anti-epileptic strategies.
Notably, most existing AEDs are "neurocentric", targeting neuronal proteins directly involved in neuronal network activation. At the same time, microglia have so far received little attention as a therapeutic target in epilepsy, despite the large amount of data accumulated over the last decades strongly suggesting that these cells are essential players in the pathophysiology of the disease [START_REF] Vezzani | Neuroinflammatory pathways as treatment targets and biomarkers in epilepsy[END_REF]. Microglial cells, the resident brain macrophages, have several essential functions, including immunoprotection, maintenance of neuron health and homeostasis, and maturation of neuronal networks through the developmentally controlled apoptosis of supernumerary neurons and the pruning of excess synapses [START_REF] Lukens | Microglia and Neurodevelopmental Disorders[END_REF]. As immune cells, microglia also respond rapidly to brain injury or infection through a complex reprogramming, referred to as microglia activation, which comprises morphological and biochemical changes, including the synthesis and release of pro-inflammatory cytokines [START_REF] Woodburn | The semantics of microglia activation: neuroinflammation, homeostasis, and stress[END_REF]. In epileptic patients, it has long been known that seizures cause the activation of microglial cells and the synthesis of inflammatory mediators [START_REF] Ravizza | Innate and adaptive immunity during epileptogenesis and spontaneous seizures: Evidence from experimental models and human temporal lobe epilepsy[END_REF][START_REF] Vezzani | Glia as a source of cytokines: Implications for neuronal excitability and survival[END_REF][START_REF] Vezzani | Fetal brain inflammation may prime hyperexcitability and behavioral dysfunction later in life[END_REF][START_REF] Vezzani | Neuroinflammatory pathways as treatment targets and biomarkers in epilepsy[END_REF]. Analysis of post-mortem brain tissues from epileptic patients has indeed shown massive microglial activation, directly correlated with seizure severity [START_REF] Boer | Inflammatory processes in cortical tubers and subependymal giant cell tumors of tuberous sclerosis complex[END_REF][START_REF] Morin-Brureau | Microglial phenotypes in the human epileptic temporal lobe[END_REF]. Moreover, in various pharmacological rodent epilepsy models, pronounced morphological changes of microglia were observed rapidly after seizures, prior to the onset of neuronal damage [START_REF] Eyo | Neuronal Hyperactivity Recruits Microglial Processes via Neuronal NMDA Receptors and Microglial P2Y12 Receptors after Status Epilepticus[END_REF][START_REF] Avignone | Altered morphological dynamics of activated microglia after induction of status epilepticus[END_REF]. However, other studies have highlighted the wide variability of the inflammatory responses observed in patients and animal epilepsy models [START_REF] Gasmi | Low grade inflammation in the epileptic hippocampus contrasts with explosive inflammation occurring in the acute phase following status epilepticus in rats: translation to patients with epilepsy[END_REF].
Some data suggest that microglia may be harmful in an epileptic context, probably owing to the release of pro-inflammatory cytokines, which are known pro-epileptogenic factors [START_REF] Vezzani | Interleukin-1β immunoreactivity and microglia are enhanced in the rat hippocampus by focal kainate application: Functional evidence for enhancement of electrographic seizures[END_REF][START_REF] Musto | The role of inflammation in the development of epilepsy[END_REF][START_REF] Andoh | Synaptic Pruning by Microglia in Epilepsy[END_REF]. Conversely, other studies suggest that microglia may play a beneficial role in epilepsy by decreasing the activity of neuronal networks (Li, X. Du, et al., 2012; [START_REF] Vinet | Neuroprotective function for ramified microglia in hippocampal excitotoxicity[END_REF][START_REF] Eyo | Neuronal Hyperactivity Recruits Microglial Processes via Neuronal NMDA Receptors and Microglial P2Y12 Receptors after Status Epilepticus[END_REF][START_REF] Badimon | Negative feedback control of neuronal activity by microglia[END_REF]. Data also suggest that the response of microglia to epileptic seizures is not uniform throughout the brain, with cells showing a wide range of phenotypes within the same individual [START_REF] Morin-Brureau | Microglial phenotypes in the human epileptic temporal lobe[END_REF]. Moreover, the levels of both pro-inflammatory and anti-inflammatory cytokines are increased in microglia after epileptic seizures, supporting the notion that the response of these cells to seizures is not limited to classic M1-type pro-inflammatory activation, thus highlighting the complexity of this microglial response [START_REF] Benson | Complex alterations in microglial M1/M2 markers during the development of epilepsy in two mouse models[END_REF]. Whether microglia activity has an overall detrimental or beneficial role in epilepsy is still a largely open question, with far-reaching therapeutic implications [START_REF] Lukens | Microglia and Neurodevelopmental Disorders[END_REF]. To address this issue and gain a better understanding of the role of microglia in epilepsy, we used a zebrafish model of Dravet syndrome (DS), a severe intractable epileptic syndrome in infants caused in 70-80% of cases by de novo mutations in the gene encoding the voltage-gated sodium channel, SCN1A [START_REF] Claes | De Novo Mutations in the Sodium-Channel Gene SCN1A Cause Severe Myoclonic Epilepsy of Infancy[END_REF][START_REF] Dravet | Dravet syndrome history[END_REF]. We used this zebrafish epilepsy model to investigate the morphology and dynamic behavior of microglial cells in vivo, using the Tg[mpeg1:mCherryF] line, which enables imaging of microglial cells in living zebrafish embryos and larvae, combined with neuronal Ca2+ imaging, local field potential (LFP) recording, and genetic microglia ablation. We found that microglia were deeply remodeled in scn1Lab-deficient larvae, with their morphology shifted toward an amoeboid and less branched phenotype, combined with a markedly increased mobility of their cell bodies and an increased number of Il1β-expressing cells. In addition, in microglia-depleted scn1Lab-deficient larvae, neuronal hyperactivation was significantly increased compared with their siblings with microglia, suggesting that the activity of these cells lent neuroprotection to epileptic neurons and decreased their activity.
Results

Microglia morphological changes in scn1Lab-KD larvae
To increase our understanding of microglia phenotypes in epileptic brains in vivo, we first made use of the live imaging techniques available in zebrafish, especially real-time microglia imaging [START_REF] Hassan-Abdi | Neurons Expressing Pathological Tau Protein Trigger Dramatic Changes in Microglial Morphology and Dynamics[END_REF], combined with two zebrafish scn1Lab models of Dravet syndrome epilepsy: scn1Lab morphant [START_REF] Brenet | Defective Excitatory/Inhibitory Synaptic Balance and Increased Neuron Apoptosis in a Zebrafish Model of Dravet Syndrome[END_REF] and scn1Lab s552/s552 mutant larvae (Scott C. Baraban, Dinday and Hortopan, 2013). Both models show recurrent spontaneous epileptiform seizures at 4 days post-fertilization (dpf), as visualized by LFP recording and Ca2+ imaging and by the characteristic swirling behavior of the larvae (Scott C. Baraban, Dinday and Hortopan, 2013; [START_REF] Zhang | Pharmacological Characterization of an Antisense Knockdown Zebrafish Model of Dravet Syndrome: Inhibition of Epileptic Seizures by the Serotonin Agonist Fenfluramine[END_REF][START_REF] Brenet | Defective Excitatory/Inhibitory Synaptic Balance and Increased Neuron Apoptosis in a Zebrafish Model of Dravet Syndrome[END_REF]. Specifically, we mainly used scn1Lab morphants, referred to below as scn1Lab-KD individuals, and verified that the results were identical in scn1Lab s552/s552 mutants, which display a complete loss of Scn1Lab function (Scott C. Baraban, Dinday and Hortopan, 2013; [START_REF] Brenet | Defective Excitatory/Inhibitory Synaptic Balance and Increased Neuron Apoptosis in a Zebrafish Model of Dravet Syndrome[END_REF]. To visualize microglial cells in vivo, we chose the transgenic Tg[mpeg1:mCherryF] line, which shows intense labeling of these cells [START_REF] Travnickova | Primitive macrophages control HSPC mobilization and definitive haematopoiesis[END_REF]. However, as this transgenic line expresses mCherryF in both microglia and macrophages, we verified that all the microglia imaged in this study also expressed the Tg[p2y12:P2Y12-GFP] transgene [START_REF] Sieger | Long-Range Ca2+ Waves Transmit Brain-Damage Signals to Microglia[END_REF], as P2RY12 is a specific marker of microglial cells [START_REF] Mildner | Ghosts in the shell: Identification of microglia in the human central nervous system by p2y12 receptor[END_REF]. As previously shown [START_REF] Peri | Live Imaging of Neuronal Degradation by Microglia Reveals a Role for v0-ATPase a1 in Phagosomal Fusion In Vivo[END_REF][START_REF] Hassan-Abdi | Neurons Expressing Pathological Tau Protein Trigger Dramatic Changes in Microglial Morphology and Dynamics[END_REF], in wild-type (WT) larvae, microglia displayed the characteristic branched morphology of "resting" microglia, with a relatively small cell body and several long and ramified processes (Figure 1A; 1A1-3 and Supp. Video 1: https://urlz.fr/iPxZ). By contrast, in scn1Lab-KD larvae, microglia exhibited a more rounded morphology, with a larger cell body combined with fewer and shorter branches (Figure 1B; 1B1-3 and Supp. Fig. 1B3 (Video 2): https://urlz.fr/iPxP). To further characterize the morphological differences in the two genetic contexts, we used the 3D image analysis software Imaris (Bitplane) to measure several microglial cell morphological parameters (sphericity, volume, surface area, branch number, branch length, and ramification index). The data confirmed that microglia in scn1Lab-KD larvae displayed an increased sphericity (WT: 0.567 ± 0.006 vs. scn1Lab-KD: 0.644 ± 0.006; p < 0.0001) (Figure 1C), a decreased number of branches (WT: 6.3 ± 0.2 vs. scn1Lab-KD: 5.1 ± 0.2; p < 0.0001) (Figure 1D), a decreased total (WT: 115 ± 3 µm vs.
scn1Lab-KD: 85 ± 3 µm; p < 0.0001) (Figure 1E) and mean process length (control: 23.5 ± 0.4 µm vs. scn1Lab-KD: 19.6 ± 0.3 µm; p < 0.0001) (Figure 1F), and a decreased ramification index (WT: 1.94 ± 0.05 vs. scn1Lab-KD: 1.74 ± 0.05; p < 0.0001) (Figure 1G) compared with microglia in WT individuals. Sholl analysis, a method used to quantify both the extent and the complexity of cell branches (Imaris software), further confirmed that microglia in scn1Lab-KD individuals had fewer and shorter branches than those of their WT siblings (Figure 1H). Taken together, our data show that in scn1Lab-KD larvae, microglia exhibited an amoeboid-like morphology reminiscent of that of M1-type activated macrophages.

To better evaluate the extent of the microglia morphological differences between scn1Lab-KD and WT larvae, we performed a cluster analysis of these cells in the two genetic contexts based on the five morphological parameters showing significant changes as indicated above (sphericity, branch number, branch length, surface area, and volume) (see Materials and Methods). The first population comprised cells showing low sphericity, numerous elongated branches, and high surface area and volume, referred to below as "branched" microglia. The second comprised microglia displaying high sphericity, few short processes, and low surface area and volume, referred to below as "amoeboid" cells. The third comprised cells with intermediate characteristics, referred to below as "transitional" microglia (Figure 1J). Although the three populations of microglia were found in both contexts, their respective proportions varied considerably. In scn1Lab-KD larvae, we observed a significant increase in the percentage of "amoeboid" microglia (29.1 ± 1.2 vs. 10.4 ± 0.3%; p < 0.001) combined with a significant decrease in the percentage of "branched" microglia (10.2 ± 0.8 vs. 27.8 ± 1.5%; p < 0.01) (Figure 1K). These findings further prove that microglia display an M1-like reactive phenotype in scn1Lab-KD larvae.

Visual examination of the behavior of microglia in scn1Lab-KD and WT larvae provided a first indication that these cells displayed increased mobility in the mutant context. To confirm this observation, we measured the speed of microglia cell bodies in the two genetic contexts. In WT larvae, as previously described [START_REF] Hassan-Abdi | Neurons Expressing Pathological Tau Protein Trigger Dramatic Changes in Microglial Morphology and Dynamics[END_REF], microglia had several branches that were constantly extending and retracting, while their cell bodies remained almost immobile (Figure 2A1-A2, 2C1, 2F1) (Supp. Video 3, https://urlz.fr/iOIz). By contrast, the cell bodies of scn1Lab-KD microglia were much more mobile (Figure 2B1-B2, 2C2, 2F2) (Supp. Video 4, https://urlz.fr/iOIA).

Increased microglia-mediated brain inflammation in scn1Lab-KD larvae
Because the microglia phenotypic changes observed in scn1Lab-KD larvae were strongly reminiscent of those of "activated" M1-type neuroinflammatory microglia observed in various neuronal disease situations, including epilepsy [START_REF] Morin-Brureau | Microglial phenotypes in the human epileptic temporal lobe[END_REF], we next evaluated the neuroinflammatory profile in the brain of scn1Lab-KD larvae. First, we used qRT-PCR to quantify the brain accumulation of transcripts encoding two key pro-inflammatory cytokines, interleukin-1β (Il1β) and interleukin-8 (Il8), and three anti-inflammatory cytokines, interleukin-10 (Il10), interleukin-4 (Il4) and transforming growth factor β-3 (Tgfβ3). Surprisingly, we did not observe any significant changes, neither in the expression of the pro-inflammatory markers il1β (WT: 1.00 ± 0.22 vs. scn1Lab-KD: 1.33 ± 0.18; p > 0.05) and il8 (WT: 1.00 ± 0.09 vs.
scn1Lab-KD: 0.93 ± 0.11; p > 0.05) (Figure 3A), nor in that of the anti-inflammatory markers il10 (WT: 1.00 ± 0.21 vs. scn1Lab-KD: 0.767 ± 0.17; p > 0.05), il4 (WT: 1.00 ± 0.12 vs. scn1Lab-KD: 0.89 ± 0.19; p > 0.05) and tgfβ3 (WT: 1.00 ± 0.04 vs. scn1Lab-KD: 0.876 ± 0.08; p > 0.05) (Figure 3B). Therefore, although the morphological changes seen in scn1Lab-KD larvae suggested pro-inflammatory microglial activation, no significant increase in the expression of pro- or anti-inflammatory cytokines could be detected in the brain of these individuals. It may be that in this genetic model of epilepsy, levels of neuroinflammation are lower than those observed in rodent pharmacological models, as also suggested by the relatively small percentage of amoeboid microglia in scn1Lab-KD individuals (~30%). To analyze more precisely the microglial expression of Il1β in this genetic context, we generated scn1Lab-KD larvae also expressing the Tg[mpeg1:mCherryF] and Tg[il1β:GFP] transgenes [START_REF] Nguyen-Chi | Identification of polarized macrophage subsets in zebrafish[END_REF]. The results revealed a significant increase in the percentage of microglia expressing Il1β (WT: 14.6 ± 1.7 vs. scn1Lab-KD: 33.6 ± 1.7%; p < 0.0001) (Figure 3C-F). These results confirmed an increased number of activated microglial cells in scn1Lab-KD larvae, consistent with the increased number of "amoeboid" microglia characterized above.

Microglia ablation increases neuronal activation in scn1Lab-KD larvae
The phenotypic changes and Il1β expression observed in microglia of scn1Lab-KD larvae strongly suggest that a small, albeit significant, proportion of microglia, approximately 30% of the cells, respond to neuronal seizures through M1-type pro-inflammatory activation. These results raised the question of whether this microglial response globally mitigates or worsens the subsequent activity of neuronal networks. As a first attempt to address this question, we generated scn1Lab-KD larvae expressing the Tg[HuC:GCaMP5G] transgene, a sensitive calcium biosensor [START_REF] Akerboom | Optimization of a GCaMP calcium indicator for neural activity imaging[END_REF], and entirely devoid of microglia following morpholino-mediated inactivation of the pU.1 gene, which encodes a factor essential for the proper differentiation of macrophages and microglia [START_REF] Rhodes | Interplay of pu.1 and Gata1 determines myelo-erythroid progenitor cell fate in zebrafish[END_REF][START_REF] Peri | Live Imaging of Neuronal Degradation by Microglia Reveals a Role for v0-ATPase a1 in Phagosomal Fusion In Vivo[END_REF]. As previously described [START_REF] Peri | Live Imaging of Neuronal Degradation by Microglia Reveals a Role for v0-ATPase a1 in Phagosomal Fusion In Vivo[END_REF], the brains of larvae injected with PU.1 morpholinos are devoid of microglia, as shown by the absence of L-plastin immunostaining (Figure 4A2) compared with non-injected larvae (Figure 4A1). With these tools, we analyzed and compared neuronal activation in WT and scn1Lab-KD larvae, with or without microglia, using calcium imaging and local field potential (LFP) recordings simultaneously.
As previously shown [START_REF] Brenet | Defective Excitatory/Inhibitory Synaptic Balance and Increased Neuron Apoptosis in a Zebrafish Model of Dravet Syndrome[END_REF][START_REF] Liu | Network properties revealed during multi-scale calcium imaging of seizure activity in Zebrafish[END_REF], when compared with WT individuals, scn1Lab-KD larvae displayed markedly increased neuronal activity, as indicated by the larger number of seizure-like events revealed by both calcium imaging (WT: 16.5 ± 1.0 vs. scn1Lab-KD: 32.2 ± 1.4, p < 0.0001) (Figures 4B, 4D1; Supp. Video 5, https://urlz.fr/jlSX; Supp. Video 6, https://urlz.fr/jlT1) and LFP recording (WT: 4.8 ± 1.2 vs. scn1Lab-KD: 28.2 ± 2.4, p < 0.0001) (Figures 4C, 4E1). More importantly, microglia depletion increased the amplitude of neuronal events in both WT and scn1Lab-KD larvae (Figures 4D2, 4E2; Supp. Video 7, https://urlz.fr/jlTa; Supp. Video 8, https://urlz.fr/jlTr), and exacerbated the number of epileptiform events in scn1Lab-KD individuals, suggesting that microglia displayed a neuroprotective activity that was able to mitigate, at least partially, neuronal hyperactivation.

Microglia ablation exacerbates seizure-like swimming behavior of scn1Lab-KD larvae
It has been shown that scn1Lab mutants and morphants exhibit spontaneous seizure-like swimming behavior consisting of whole-body convulsions and rapid undirected movements (Scott C. Baraban, Dinday and Hortopan, 2013; [START_REF] Brenet | Defective Excitatory/Inhibitory Synaptic Balance and Increased Neuron Apoptosis in a Zebrafish Model of Dravet Syndrome[END_REF]. Therefore, we asked whether the locomotor activity of scn1Lab-KD larvae was dependent on microglia (Figure 5A). As previously shown, scn1Lab-KD larvae swam longer and over longer distances than WT (WT: 212 ± 11 vs. scn1Lab-KD: 355 ± 11 mm, p < 0.0001), reflecting their neuronal hyperactivation (Figures 5F, 5G, 5H). Interestingly, in scn1Lab-KD larvae devoid of microglia, we observed a markedly increased locomotor activity compared with that observed in scn1Lab-KD individuals with microglia (scn1Lab-KD: 355 ± 11 vs. microglia-depleted scn1Lab-KD: 423 ± 12 mm, p < 0.0001) (Figures 5B-D). Microglia depletion thus worsened the locomotor behavior of scn1Lab-KD larvae, confirming that neuronal activation was exacerbated in scn1Lab-KD larvae lacking microglia compared with scn1Lab-KD individuals with microglia.

Discussion
A close relationship between epileptic seizures and microglia-mediated brain inflammation was established long ago [START_REF] Vezzani | Functional role of proinflammatory and anti-inflammatory cytokines in seizures[END_REF]. However, the precise response of microglia to epileptic seizures and its consequences on subsequent neuronal network function have been scarcely studied so far, especially in in vivo models. Furthermore, it has not been known whether seizure-induced microglia activities reduce or exacerbate neuronal network activity [START_REF] Lukens | Microglia and Neurodevelopmental Disorders[END_REF]. This knowledge gap has several causes, one being that microglia cannot be imaged in vivo through the skull in rodents and most other animal species.
Transparent windows must be cut, involving delicate surgery and risks of brain trauma that can influence the state of microglia, which are highly sensitive to disturbances in brain homeostasis [START_REF] He | RNA sequencing analysis reveals quiescent microglia isolation methods from postnatal mouse brains and limitations of BV2 cells[END_REF]. Zebrafish larvae offer a helpful alternative model because they are transparent, enabling easy imaging of microglial cells in real time, in a living brain and in its physiological environment [START_REF] Peri | Live Imaging of Neuronal Degradation by Microglia Reveals a Role for v0-ATPase a1 in Phagosomal Fusion In Vivo[END_REF]. Another difficulty has been that most studies investigating microglia activities in rodent epilepsy models have been conducted using pharmacologically induced seizures. Those chemicals can cause extended status epilepticus-like activity, usually lasting several hours [START_REF] Ravizza | Innate and adaptive immunity during epileptogenesis and spontaneous seizures: Evidence from experimental models and human temporal lobe epilepsy[END_REF][START_REF] Avignone | Altered morphological dynamics of activated microglia after induction of status epilepticus[END_REF][START_REF] Vezzani | Neuroinflammatory pathways as treatment targets and biomarkers in epilepsy[END_REF][START_REF] Maupu | Diisopropylfluorophosphate-induced status epilepticus drives complex glial cell phenotypes in adult male mice[END_REF], and evoke much stronger brain inflammation than that observed in genetic models, and probably in human patients [START_REF] Gasmi | Low grade inflammation in the epileptic hippocampus contrasts with explosive inflammation occurring in the acute phase following status epilepticus in rats: translation to patients with epilepsy[END_REF][START_REF] Maupu | Diisopropylfluorophosphate-induced status epilepticus drives complex glial cell phenotypes in adult male mice[END_REF][START_REF] Somkhit | Microglia Remodelling and Neuroinflammation Parallel Neuronal Hyperactivation Following Acute Organophosphate Poisoning[END_REF]. This difficulty can be overcome by using a genetic epilepsy model. We report here what is, to our knowledge, the first study of the response of microglial cells in vivo in such a model. We chose an established zebrafish model of Dravet syndrome (DS), a severe epileptic encephalopathy caused by a deficit in the activity of inhibitory interneurons, resulting most commonly from de novo loss-of-function mutations in the SCN1A gene encoding the voltage-gated sodium channel NaV1.1. This zebrafish DS model is based on the loss of function of one of the two zebrafish orthologues of the SCN1A gene, scn1Lab. Mutant and morphant scn1Lab larvae display recurrent epileptiform seizures from 3 dpf onward, which we and others have previously characterized (Scott C. Baraban, Dinday and Hortopan, 2013; [START_REF] Brenet | Defective Excitatory/Inhibitory Synaptic Balance and Increased Neuron Apoptosis in a Zebrafish Model of Dravet Syndrome[END_REF]. In particular, we previously showed that loss of scn1Lab function induced a shift in the excitatory/inhibitory balance, with a decreased number of GABAergic synapses and an increased number of glutamatergic ones, combined with an increased number of apoptotic neurons [START_REF] Brenet | Defective Excitatory/Inhibitory Synaptic Balance and Increased Neuron Apoptosis in a Zebrafish Model of Dravet Syndrome[END_REF].
In close agreement with findings in both preclinical animal models and human patients [START_REF] Altmann | A systems-level analysis highlights microglial activation as a modifying factor in common epilepsies[END_REF], microglia displayed significant morphological differences in scn1Lab-KD larvae compared with their wild-type siblings. In the mutant context, we observed a decreased number of "branched" microglia and an increased percentage of microglia showing an amoeboid morphology. These "amoeboid" microglia probably correspond to the previously described M1-type pro-inflammatory "activated" cells observed after brain injuries or in various neuronal disease situations including epilepsy [START_REF] Morin-Brureau | Microglial phenotypes in the human epileptic temporal lobe[END_REF], but also following intoxication by neurotoxic compounds, such as organophosphates [START_REF] Nakajima | Microglia: Activation and their significance in the central nervous system[END_REF][START_REF] Somkhit | Microglia Remodelling and Neuroinflammation Parallel Neuronal Hyperactivation Following Acute Organophosphate Poisoning[END_REF]. Interestingly, although the disease was associated with an increased number of microglia with an "activated" phenotype, the proportion of "transitional" microglia was similar in the two genetic contexts, and a small percentage of "resting" microglia was still observed in scn1Lab-KD individuals. These observations highlight the complexity of the microglial response in epilepsy and, more importantly, provide evidence that different populations of microglial cells are present in epileptic brains, each likely playing a distinct role in the pathophysiology of the disease. Although increased expression of pro-inflammatory mediators has been observed both in patients with different forms of epilepsy and in various animal models [START_REF] Leal | Brain expression of inflammatory mediators in Mesial Temporal Lobe Epilepsy patients[END_REF][START_REF] Morin-Brureau | Microglial phenotypes in the human epileptic temporal lobe[END_REF], other studies have shown that anti-inflammatory cytokines are also increased in the epileptic brain [START_REF] Benson | Complex alterations in microglial M1/M2 markers during the development of epilepsy in two mouse models[END_REF]. All this is evidence of a complex response of microglia to epileptic seizures. Surprisingly, and in contrast with reports in most rodent epilepsy models, qRT-PCR analysis found no significant overexpression of the pro-inflammatory cytokine-encoding genes il1β and il8 in the brain of scn1Lab-KD individuals. This result is probably due to the moderate severity of the seizures in these larvae compared with those induced by pharmacological agents, such as PTZ, kainate, or DFP [START_REF] Maupu | Diisopropylfluorophosphate-induced status epilepticus drives complex glial cell phenotypes in adult male mice[END_REF][START_REF] Somkhit | Microglia Remodelling and Neuroinflammation Parallel Neuronal Hyperactivation Following Acute Organophosphate Poisoning[END_REF], whatever the model species. However, using the transgenic Tg[il1β:GFP] line, which enables Il1β-expressing cells to be visualized [START_REF] Nguyen-Chi | Identification of polarized macrophage subsets in zebrafish[END_REF], we observed a significant increase in the percentage of microglia expressing Il1β in scn1Lab-KD individuals compared with their WT siblings.
This observation, which is in close agreement with our cluster analysis, further confirmed not only the increased number of M1-like "activated" microglia in scn1Lab-KD larvae, but also the existence of at least two distinct microglia populations, with and without expression of Il1β. However, the increased expression of il1β was probably too weak to be detected by qRT-PCR analysis of brain RNAs, again highlighting the differences between genetic and pharmacological epilepsy models [START_REF] Maupu | Diisopropylfluorophosphate-induced status epilepticus drives complex glial cell phenotypes in adult male mice[END_REF][START_REF] Somkhit | Microglia Remodelling and Neuroinflammation Parallel Neuronal Hyperactivation Following Acute Organophosphate Poisoning[END_REF]. It is also of note that studies of microglia-mediated neuroinflammation in Scn1A +/- mice have yielded widely varying results. In some studies, significant activation of microglial cells was observed in the prefrontal cortex and dentate gyrus, combined with an increased expression of the Il1β and Tnfα genes [START_REF] Satta | Neuropathological Characterization of a Dravet Syndrome Knock-In Mouse Model Useful for Investigating Cannabinoid Treatments[END_REF], while in others, no microglial activation was detected in the hippocampus of Scn1A +/- mice, together with a lack of overexpression of pro-inflammatory mediators [START_REF] Salgueiro-Pereira | A two-hit story: Seizures and genetic mutation interaction sets phenotype severity in SCN1A epilepsies[END_REF]. Furthermore, [START_REF] Benson | Complex alterations in microglial M1/M2 markers during the development of epilepsy in two mouse models[END_REF] showed that whereas microglia expressed both pro- and anti-inflammatory mediators in the mouse pilocarpine model, only pro-inflammatory mediators were overexpressed in microglia of mice exposed to kainate [START_REF] Benson | Complex alterations in microglial M1/M2 markers during the development of epilepsy in two mouse models[END_REF]. These complex alterations in microglial M1/M2 markers during the development of epilepsy highlight the wide variety of microglial responses to seizures. Genetic depletion of microglia in scn1Lab-KD larvae exacerbated the hyperactivity of neuronal networks, as shown by the increased number of spontaneous epileptiform seizures and the worsening of the characteristic swirling swimming behavior. These results thus suggest that microglia played a protective role in epileptic brains and mitigated neuronal activity. Like ours, other studies have concluded that microglia play a beneficial role in epilepsy through the mitigation of neuronal activity and the modulation of synaptic plasticity [START_REF] Mirrione | Microglial ablation and lipopolysaccharide preconditioning affects pilocarpine-induced seizures in mice[END_REF][START_REF] Li | Reciprocal Regulation between Resting Microglial Dynamics and Neuronal Activity In Vivo[END_REF][START_REF] Badimon | Negative feedback control of neuronal activity by microglia[END_REF].
In addition, [START_REF] Mirrione | Microglial ablation and lipopolysaccharide preconditioning affects pilocarpine-induced seizures in mice[END_REF] showed that total microglia ablation in adult mice led to an increased sensitivity to the pro-convulsant substance pilocarpine [START_REF] Mirrione | Microglial ablation and lipopolysaccharide preconditioning affects pilocarpine-induced seizures in mice[END_REF], further evidence that microglia display neuroprotective functions in an epileptic context. However, preventing microglia proliferation using the selective CSF1-R inhibitor GW2580 during the late stage of epileptogenesis (but not the early stage) decreased spontaneous seizures in the kainate mouse model [START_REF] Di Nunzio | Microglia proliferation plays distinct roles in acquired epilepsy depending on disease stages[END_REF].

This work extends our understanding of the role of microglia in epilepsy. The principal added value of our study is that it is the first investigation of the response of microglial cells in vivo in a genetic epilepsy model. We show that epileptiform seizures rapidly induce microglia reprogramming characterized by significant changes in cell shape and dynamic behavior, and an increased expression of the pro-inflammatory cytokine Il1β. However, this reprogramming is partial and concerns only part of the microglia, suggesting that the response of these cells to seizures is complex and not limited to M1-type activation. Importantly, we also show that scn1Lab-KD larvae entirely devoid of microglia presented an increased number of seizures, with greater intensity, suggesting that microglia activity lends neuroprotection to epileptic neurons. Future research will be needed to investigate how microglial cells modulate neuronal activity. Indeed, modulating microglial cell activity may offer a novel therapeutic strategy of great potential interest by reducing neuronal hyperactivity in epilepsies.

Materials and Methods

Transgenic lines and fish maintenance
Zebrafish were maintained at 26.5°C in 14 h light and 10 h dark cycles. Embryos were collected by natural spawning and raised at 28.5°C in E3 medium. Developmental stages were determined as hours post-fertilization (hpf) or days post-fertilization (dpf) as previously described. 1-Phenyl-2-thiourea (PTU, 0.003%) was added at 1 dpf to avoid pigmentation. The scn1Lab s552 line has been previously described [START_REF] Schoonheim | Optogenetic localization and genetic perturbation of saccade-generating neurons in Zebrafish[END_REF] and was generously provided by the laboratory of Dr. Herwig Baier. Calcium imaging was performed using Tg[Huc:GCaMP5G] [START_REF] Akerboom | Optimization of a GCaMP calcium indicator for neural activity imaging[END_REF]. Microglia were visualized using Tg[mpeg1:mCherryF] (Ellett et al., 2011) and cytokine Il1β expression using Tg[il1β:GFP] (Nguyen-Chi et al., 2015). All the animal experiments described were conducted at the French National Institute of Health and Medical Research (INSERM), UMR 1141, Paris, in compliance with European Union guidelines for the handling of laboratory animals (http://ec.europa.eu/environment/chemicals/lab_animals/home_en.htm). They were approved by the Direction Départementale de la Protection des Populations de Paris and the French Animal Ethics Committee under reference No. 2012-15/676-0069.

Morpholino knockdown
The following morpholinos, obtained from Gene Tools (Philomath, Oregon, USA), were used in this study: 5'-CTGAGCAGCCATATTGACATCCTGC-3' was used at 0.62 mM to block the translation of the zebrafish scn1Lab mRNA; 5'-GATATACTGATACTCCATTGGTGGT-3' was used at 0.88 mM to block the translation of the zebrafish PU.1 mRNA. These morpholinos were injected with 0.03 mM rhodamine B dextran and 0.1 mM KCl into embryos at the 1-2 cell stage.

Microglia morphology
4 dpf Tg[mpeg1:mCherryF] larvae were paralyzed using 300 µM pancuronium bromide (Sigma Aldrich, Saint-Louis, Missouri, USA) and agar-immobilized in the center of a 35 mm glass-bottom dish.
A 100 µm stack of the living larval optic tectum was acquired at 1024 × 1024-pixel resolution using a Leica SP8 laser scanning confocal microscope equipped with a 20×/0.75 multi-immersion objective. Images were processed using AutoQuant X3 (Media Cybernetics, Rockville, Maryland, USA) and Fiji ImageJ (NIH) software. Microglial cell surface area, volume, and sphericity were quantified using Imaris MeasurementPro (Oxford Instruments, Abingdon, UK). The number and length of microglial processes and their Sholl analysis were determined using Imaris Filament Tracer (Oxford Instruments, Abingdon, UK).

Clustering of microglial cells
Cluster analysis was performed using the sphericity (S), number of processes (NP), total process length (TL), surface area (A), and volume (V) of microglial cells. Microglial cells with at least three of the following features: sphericity less than 0.5, number of processes greater than 7, total process length greater than 140 µm, and a surface area greater than 2400 µm², were classified as "branched" microglia. Conversely, microglial cells with at least three of the following features: sphericity greater than 0.7, number of processes less than 3, total process length less than 60 µm, and surface area less than 1500 µm², were classified as "amoeboid" microglia. Microglial cells with fewer than three "branched" or "amoeboid" characteristics were classified as "transitional" microglia.
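The rule-based classification above can be transcribed directly in code; a minimal illustration (Python is used here for readability; the thresholds are those given in the text, and volume, although part of the cluster analysis, has no stated cut-off, so only the four thresholded criteria appear):

```python
def classify_microglia(sphericity, n_processes, total_length_um, area_um2):
    """Classify a microglial cell as "branched", "amoeboid", or
    "transitional" using the at-least-three-criteria rule described
    above (lengths in µm, areas in µm²)."""
    n_branched = sum([sphericity < 0.5, n_processes > 7,
                      total_length_um > 140, area_um2 > 2400])
    n_amoeboid = sum([sphericity > 0.7, n_processes < 3,
                      total_length_um < 60, area_um2 < 1500])
    if n_branched >= 3:
        return "branched"
    if n_amoeboid >= 3:
        return "amoeboid"
    return "transitional"

print(classify_microglia(0.45, 8, 150.0, 2500.0))  # "branched"
print(classify_microglia(0.72, 2, 50.0, 1200.0))   # "amoeboid"
```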
RNA isolation and quantitative RT-PCR
For RNA isolation, whole larvae or dissected brains (10 brains per sample) were homogenized using a syringe equipped with a 26G needle, and total RNA was extracted using the RNA XS Plus kit (Macherey-Nagel, Düren, Germany). Following RNA quantification with a Nanodrop 2000 (ThermoScientific, Waltham, Massachusetts, USA) and RNA integrity assessment using denaturing gel electrophoresis, total RNA (1 µg) was reverse transcribed into cDNA using the iScript cDNA Synthesis Kit (Bio-Rad, Germany), and qPCRs were performed using iQ SYBR Green Supermix (Bio-Rad, Germany). Samples were run in triplicate and the expression levels of the studied genes were normalized to that of the tbp gene. The primers (Eurofins Genomics, Ebersberg, Germany) used are listed in Supplementary Table 1.

Microglial dynamics
4 dpf pancuronium bromide-paralyzed Tg[mpeg1:mCherryF] larvae were agar-immobilized in the center of a 35 mm glass-bottom dish. A 42 µm stack of half of the living larval optic tectum was acquired at 1024 × 512-pixel resolution every 40 s for 1 h using a Leica SP8 laser scanning confocal microscope equipped with a 40×/1.1 water objective (Leica, Wetzlar, Germany). Videos were processed using ImageJ software. Microglial cell body displacement, the distance traveled between the first and the last frame, and microglial cell body displacement speed were quantified using Imaris MeasurementPro.

Calcium imaging
For quantification, 4 dpf Tg[Huc:GCaMP5G] larvae were paralyzed using pancuronium bromide, agar-immobilized in the center of a recording chamber, and placed under a Leica SP8 laser scanning confocal microscope equipped with a 20×/0.75 multi-immersion objective. A single focal plane of the optic tectum was captured at high frequency for 40 minutes. The mean grey value of each frame was measured using ImageJ software, and the fluorescence variation (ΔF/F0) was calculated using Excel (Microsoft, Redmond, Washington, USA) by subtracting the mean fluorescence intensity of all frames from the fluorescence intensity of the frame of interest, then normalizing this difference by the mean fluorescence intensity of all frames. Fluorescence drift during the recording was corrected by subtracting the background intensity of the surrounding frame. Calcium events were defined as any fluorescence increase greater than 0.04 ΔF/F0.
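The ΔF/F0 normalization and the event criterion described above can be sketched as follows, a simplified outline using numpy in place of Excel; drift correction is omitted, and the per-frame trace is assumed to be the mean grey values exported from ImageJ:

```python
import numpy as np

def calcium_events(trace, threshold=0.04):
    """Normalize a per-frame mean-fluorescence trace to ΔF/F0, with F0
    the mean over all frames, and count events, defined as any
    fluorescence increase greater than 0.04 ΔF/F0."""
    trace = np.asarray(trace, dtype=float)
    f0 = trace.mean()
    dff = (trace - f0) / f0
    above = dff > threshold
    # An event starts wherever ΔF/F0 crosses the threshold upwards.
    onsets = np.flatnonzero(above[1:] & ~above[:-1])
    return dff, len(onsets)
```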
For movie illustration (Supp. Videos 5, 6, 7, 8), larvae were paralyzed and immobilized as described above. Calcium transients in a single focal plane of the optic tectum were captured for 5 min, at 5 Hz, using a Yokogawa CSU-X1 spinning disk confocal microscope (Yokogawa, Tokyo, Japan) equipped with a LD LCI Plan-Apochromat 25x/0.8 DIC Imm Korr (UV) objective (Zeiss, Oberkochen, Germany).

Local field potential recording
Local field potential recording was performed as described in [START_REF] Brenet | Defective Excitatory/Inhibitory Synaptic Balance and Increased Neuron Apoptosis in a Zebrafish Model of Dravet Syndrome[END_REF]. Briefly, 4 dpf larvae were paralyzed using pancuronium bromide and agar-immobilized in the center of a recording chamber. A glass electrode filled with artificial cerebrospinal fluid was placed in the right neuropil of the optic tectum. Neuronal activity was recorded for 1 h in current clamp mode at a 10 µs sampling interval, amplified with a digital gain of 10, and filtered through a 0.1 Hz high-pass filter and a 1 kHz low-pass filter. Recordings were analyzed using Clampfit software (Molecular Devices, San Jose, California, USA). Every downward membrane potential variation exceeding -0.3 mV in amplitude and lasting more than 100 ms was defined as an event.
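A minimal sketch of this event criterion (downward deflections beyond -0.3 mV lasting more than 100 ms), assuming the recording is available as a numpy array of membrane potentials in mV sampled at the 10 µs interval stated above; this outlines the criterion itself, not the Clampfit analysis:

```python
import numpy as np

def count_lfp_events(signal_mv, dt_s=1e-5, amp_mv=-0.3, min_dur_s=0.1):
    """Count LFP events: runs of consecutive samples below amp_mv whose
    duration exceeds min_dur_s (dt_s is the sampling interval)."""
    below = np.asarray(signal_mv) < amp_mv
    edges = np.diff(below.astype(int))
    starts = np.flatnonzero(edges == 1) + 1   # run beginnings
    ends = np.flatnonzero(edges == -1) + 1    # run endings
    if below[0]:
        starts = np.r_[0, starts]             # run already open at t = 0
    if below[-1]:
        ends = np.r_[ends, below.size]        # run still open at the end
    durations_s = (ends - starts) * dt_s
    return int(np.sum(durations_s > min_dur_s))
```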
N = number of larvae analyzed.

Figure 4: Consequences of microglial genetic depletion on neuronal activity in the scn1Lab model. (A) Schematic of the experimental setup. (B) 20 min representative calcium traces of 4 dpf control (N = 11), scn1Lab-KD (N = 16), control without microglia (N = 9) and scn1Lab-KD without microglia (N = 9) larvae (from top to bottom). (C) 20 min representative LFP traces of 4 dpf control (N = 5), scn1Lab-KD (N = 5), control without microglia (N = 5) and scn1Lab-KD without microglia (N = 5) larvae (from top to bottom). Scale bars of calcium traces: 0.05 ΔF/F0 and 1 min; of LFP traces: 0.5 mV and 1 min. (D) Quantification of calcium events (fluorescence increases greater than 0.04 ΔF/F0), frequency (D1) and amplitude (D2), during 1-hour recordings in 4 dpf control (white) and scn1Lab-KD (blue) larvae. (E) Quantification of LFP events (downward depolarizations greater than 0.3 mV and lasting more than 100 ms), frequency (E1) and amplitude (E2), during 1-hour recordings in 4 dpf control (white) and scn1Lab-KD (blue) larvae. N = number of larvae. All data are represented as the mean ± sem. ****, p < 0.0001.

Figure 5: Consequences of genetic ablation of microglia on the locomotor activity of scn1Lab-KD larvae. (A) Diagram describing the experimental protocol. (B) Plots of distances traveled by control larvae (N = 148), scn1Lab-KD (N = 184), controls without microglia (N = 45) and scn1Lab-KD without microglia (N = 288) (from right to left). The swimming speed is represented according to the following color code: black represents a low speed, green a medium speed, and red a high speed. (C) Quantification of the total distance traveled by 4 dpf control (white) and scn1Lab-KD (blue) larvae during the 30 minutes of recording. (D) Quantification of the time spent by 4 dpf control (white) and scn1Lab-KD (blue) larvae in a motionless position (D1), and at low (D2), medium (D3), and high (D4) swimming speed. N = number of larvae. All graphs represent the average ± sem. The p-values were calculated by a two-way ANOVA followed by a Tukey post-test. ***, p < 0.001; ****, p < 0.0001: indicate a statistical difference between genotypes. $$$, p < 0.001; $$$$, p < 0.0001: indicate significant differences depending on the presence or absence of microglia.

Microglial cells were visualized using Tg[mpeg1:mCherryF] (Ellett et al., 2011) and cytokine Il1b expression using Tg[il1b:GFP] (Nguyen-Chi et al., 2015). All the animal experiments described were conducted at the French National Institute of Health and Medical Research (INSERM), UMR 1141, Paris, in compliance with European Union guidelines for the handling of laboratory animals (http://ec.europa.eu/environment/chemicals/lab_animals/home_en.htm). They were approved by the Direction Départementale de la Protection des Populations de Paris and the French Animal Ethics Committee under reference No. 2012-15/676-0069.

Acknowledgments: We thank Francesca Peri (University of Zurich, Switzerland) for providing us with the Tg(p2y12:P2Y12-GFP) transgenic line and George Lutfalla (University of Montpellier, France) for providing us with the Tg(il1b:GFP) line. We also thank Christiane Romain and Olivier Bar (INSERM UMR 1141) for their technical assistance.

Funding: This work was supported by the Institut National de la Santé et de la Recherche Médicale (INSERM) and the National Centre for Scientific Research (CNRS). Funding sources had no involvement in the study design, collection, analysis, or interpretation of data, nor in the decision to publish.
Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary figures

Supplementary video 1: 3D reconstruction of a representative microglia from a control larva's brain (Figure 1A). 3D images were generated using Imaris software (Bitplane Inc., Version 9.5.0). (Link: https://urlz.fr/iPxZ).

Supplementary video 2: 3D reconstruction of a representative microglia from a scn1Lab-KD larva's brain (Figure 1B). 3D images were generated using Imaris software (Bitplane Inc., Version 9.5.0). (Link: https://urlz.fr/iPxP).

Supplementary video 3: Time-lapse of a 4 dpf control larva, dorsal view, showing microglial displacement and processes dynamics (Figure 2A). Interval between frames: 39 s. Video played at x6. Microglial displacement tracking was performed using Imaris software (Bitplane Inc., Version 9.5.0). (Link: https://urlz.fr/iOIz).

Supplementary video 4: Time-lapse of a 4 dpf scn1Lab-KD larva, dorsal view, showing microglial displacement and processes dynamics (Figure 2B). Interval between frames: 39 s. Video played at x6. Microglial displacement tracking was performed using Imaris software (Bitplane Inc., Version 9.5.0). (Link: https://urlz.fr/iOIA).

Supplementary table: il-1β
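As a companion to the Data analysis paragraph above, the test-selection logic can be sketched as follows. This is shown in Python/scipy purely for illustration, since the authors worked in R-studio; the normality and variance checks used here (Shapiro-Wilk and Levene) are assumed choices, as the text does not name them.

from scipy import stats

def compare_groups(a, b, alpha=0.05):
    # Mann-Whitney if either sample deviates from normality, otherwise
    # Student's unpaired t-test, with Welch's correction if variances differ
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if not normal:
        return "Mann-Whitney", stats.mannwhitneyu(a, b).pvalue
    equal_var = stats.levene(a, b).pvalue > alpha
    name = "Student t-test" if equal_var else "Welch t-test"
    return name, stats.ttest_ind(a, b, equal_var=equal_var).pvalue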
04116951
en
[ "scco.neur" ]
2024/03/04 16:41:26
2023
https://theses.hal.science/tel-04116951/file/ZHU_HONGMEI_2023.pdf
PhD. Pascal Branchereau (email: [email protected]), Elodie Martin, Anne-Emilie Allain, William Cazenave, Laura Supiot, Fara Hodeib, Amandine Laupénie, Urvashi Dalvi, Hongmei Zhu, Daniel Cattaert

Prenatal dysfunctions of chloride-related inhibition in lumbar motoneurons of the SOD1G93A ALS mouse model

Author contributions (CRediT): Conceptualization, Data curation, Formal analysis, Supervision, Funding acquisition, Validation, Investigation, Visualization, Methodology, Project administration; Conceptualization, Formal analysis, Investigation; Laura Supiot: Data curation, Formal analysis; Daniel Cattaert: Conceptualization, Data curation, Software, Formal analysis, Supervision, Validation, Investigation, Visualization, Methodology.

Amyotrophic lateral sclerosis (ALS) is a fatal, adult-onset neurodegenerative disease characterized by a progressive degeneration of motoneurons (MNs) with a complex multifactorial aetiology. Most ALS studies have focused on symptomatic stages, based on the hypothesis that ALS pathogenesis occurs when the disease becomes symptomatic. However, growing evidence indicates that ALS pathogenesis might start long before symptom onset. My PhD thesis work was based on the hypothesis that ALS, familial and sporadic, stems from deficits taking place during early development. With the aim of identifying early changes underpinning ALS neurodegeneration, the first part of my thesis analysed the GABAergic/glycinergic inhibitory postsynaptic currents (IPSCs) to embryonic day (E) 17.5 MNs located in the ventro-lateral motor column of SOD1 G93A (SOD) mice, in parallel with the analysis of chloride homeostasis. Our results showed that IPSCs are less frequent in SOD animals, in accordance with a reduction of synaptic VIAAT-positive terminals in the close proximity of MN somata. SOD MNs exhibited an ECl 10 mV more depolarized than that of wild type (WT) MNs. This deficit in GABA/glycine inhibition was due to a reduction of the neuronal chloride transporter KCC2. SOD spontaneous IPSCs and evoked GABAAR currents exhibited a slower decay, correlated to the elevated [Cl-]i. Using a computer modelling approach, we revealed that the slower relaxation of synaptic inhibitory events acts as a compensatory mechanism that strengthens the efficacy of GABA/glycine inhibition when ECl is more depolarized. Interestingly, simulations revealed an excitatory effect of low frequency (<50 Hz) depolarizing GABA/glycine post-synaptic potentials (dGPSPs) in SOD-like MNs but not in WT-like littermates. At high frequency, dGPSPs switched to an inhibitory effect resulting from the summation of the shunting components. The second part of my PhD thesis focussed on the effect of electrically evoked dGPSPs when stimulation at different frequencies (7.5 […] Hz) was imposed. The aim was to examine whether the excitatory effect could be linked to morphological changes previously described in E17.5 SOD MNs. Results showed that some MNs were excited by low frequency dGPSPs and inhibited by high frequency dGPSPs (Dual MNs), whereas others were inhibited at all frequencies (Inhibited MNs). The Dual effect was more often detected in SOD MNs. WT MNs were classified into two clusters according to their input resistance (Rin), Dual MNs being specific to high Rin and Inhibited MNs to low Rin. Morphometric data pointed out a reduced dendritic tree in high Rin WT Dual MNs and a large dendritic tree in low Rin Inhibited MNs. This was not the case in SOD MNs, which were excited or inhibited whatever their morphology and Rin. In agreement with simulations showing that a lower density of inhibitory current on the MN soma favors excitatory dGPSPs, we found fewer synaptic VIAAT terminals on the soma and proximal dendrites of SOD MNs, compared to littermate WT MNs, as well as a lower frequency of spontaneous dGPSPs. Altogether, my thesis data emphasize a prenatal defect in Cl- homeostasis and GABA/glycine innervation in SOD1 G93A ALS MNs. Before birth, a dominant population of MNs with low Rin emerges in WT animals. These MNs, which are inhibited by dGPSPs, could represent future ALS-vulnerable fast MNs (putative FF). Interestingly, those MNs are not inhibited in SOD animals. The inhibitory dysfunction could be attributed to two distinct factors: morphology and perisomatic inhibitory synapse density. Of these two factors, the latter plays the major role by controlling the capability of GABAergic/glycinergic neurons to shape the spinal motor output.

Keywords: ALS disease, SOD1G93A mouse, inhibitory synaptic events, synaptic integration, GABA/glycine, patch-clamp, modelling, spinal cord, spinal lumbar motoneuron
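Two quantities at the heart of this abstract can be illustrated with a short sketch: the chloride reversal potential ECl, given by the Nernst relation, and the frequency-dependent summation of the shunting component of dGPSPs. All parameter values below are illustrative assumptions, not the thesis' measured or fitted values.

import numpy as np

R, T, F = 8.314, 310.0, 96485.0  # gas constant, body temperature (K), Faraday

def e_cl_mV(cl_in_mM, cl_out_mM=130.0):
    # Nernst potential for Cl- (valence -1), in mV
    return -1e3 * (R * T / F) * np.log(cl_out_mM / cl_in_mM)

# A rise in [Cl-]i depolarizes ECl (values chosen only for illustration):
for cl_in in (8.0, 12.0):
    print(f"[Cl-]i = {cl_in:4.1f} mM -> ECl = {e_cl_mV(cl_in):6.1f} mV")

# Shunting component of a dGPSP train: the mean synaptic conductance of an
# event train is roughly g_peak * rate * tau, so at high frequency the
# membrane is clamped toward E_GABA while any other input is divided by the
# total conductance (shunt), even though each event is depolarizing.
g_leak, g_peak, tau = 5e-9, 2e-9, 0.015          # S, S, s (assumed)
v_rest, e_gaba, i_test = -65e-3, -55e-3, 50e-12  # V, V, A (assumed)
for f_hz in (7.5, 15, 50, 100, 200):
    g_syn = g_peak * f_hz * tau
    g_tot = g_leak + g_syn
    v_inf = (g_leak * v_rest + g_syn * e_gaba) / g_tot
    epsp = i_test / g_tot
    print(f"{f_hz:5.1f} Hz: V -> {v_inf*1e3:6.2f} mV, test EPSP = {epsp*1e3:4.2f} mV")

With these assumed numbers, raising [Cl-]i from 8 to 12 mM depolarizes ECl by about 11 mV, of the same order as the ~10 mV shift reported above, and the test EPSP shrinks roughly twofold between 7.5 Hz and 200 Hz while the membrane is pulled toward E_GABA: this is the trade-off underlying the dual excitatory/inhibitory action of dGPSPs.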
To my supervisor - Professor Pascal Branchereau

You are one of the best people I have ever met. Thank you for your invaluable guidance and immeasurable support throughout my doctoral studies. You have always been on my side and cared about what is best for me. You have been very generous in providing me with the available resources you have to broaden my research perspectives and achieve as many outcomes as possible during my four-year PhD studies. I really appreciate that you offered me valuable opportunities to get involved in those exciting and interesting projects. Moreover, you have given me essential guidance and encouragement in the dissection of spinal cords, electrophysiological experiments and data analysis, making posters, participating in international conferences, writing my thesis, etc. You have always been patient in explaining to me when I had any questions. I sincerely thank you for your continuous help over the years and your patience with my dearth of knowledge. In most cases, you spent hours communicating and discussing projects, experimental results and project-related articles with me, and also extended some additional knowledge to me by sharing some good articles you reviewed and telling some stories about the authors. I really enjoy discussing scientific questions with you. In addition, you always went over the results very carefully, corrected any drafts I wrote, and led me to ask why and how. One scenario is still fresh in my mind: GraphPad Prism could directly provide the statistical results of the chi-square test, but you wanted to know how Prism calculates them. Finally, we re-learned together by searching online and using the statistics textbook in your office library, and then we manually calculated the p-values together by looking at the appendix tables of this book. You have set an example for me: know the what, and endeavour to know the how and the why. Besides, during the years I worked with you, I observed a particularly good habit: you always write down a To-Do list on your desk or calendar, crossing one item off as you complete it, and constantly review and use newly learned English words. You have taught me that every step must be very solid and conscientious when doing research and learning (new) knowledge. Another important lesson you taught me is that doing scientific research requires questioning, not trusting 100% of published articles, or even the recommended instructions for the use of antibodies when they do not explain why, but, more importantly, trying. New results are often obtained through trying after questioning. I am indebted for the valuable knowledge I have learnt from you.

To three most important Collaborators

A very big thank you to Prof. Daniel Cattaert of our institute for his guidance throughout the tenure of the main project. He gave me priority and excellent assistance in my thesis in spite of his packed schedule. His support has been invaluable in providing me with his simulation results and PCA analysis, the method of asymmetry analysis, and in correcting posters and manuscripts as well.

"Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less."
The impact of COVID-19 on my work

The COVID-19 pandemic hit France during my PhD candidature, causing considerable uncertainty and disruptions to the daily activities in my lab, which affected my research and hence this thesis. The problems included government restrictions on access to workplaces and reduced interactions with other colleagues in the department. COVID-19 restrictions halted my laboratory-based work for several months, delaying the production of data and its analysis. They also caused delays in the delivery of KCC2 transgenic animals from the USA and in the requisition of experimental animals. Moreover, I caught COVID during my PhD studies and it took me over 3 weeks to recover, but muscle and body aches lasted for more than one month, especially the pain and very heavy hands that made washing and brushing my hair difficult, as well as the inability to use the mouse for long hours per day for data analysis and making figures. Overall, COVID-19 significantly reduced my output during my PhD.

Funding acknowledgement

Hongmei Zhu was supported by a grant from the China Scholarship Council (CSC). We warmly thank the "Association pour la recherche sur la Sclérose Latérale Amyotrophique et autres maladies du Motoneurone" (ARSLA), as well as AFM-Téléthon, for their financial support. This study received financial support from the French government in the framework of the University of Bordeaux's IdEx "Investments for the Future" program/GPR BRAIN_2030 to Professor Pascal Branchereau.

I am also very grateful to two key collaborators at University College London, Prof. Marco Beato and Dr. Filipe Nascimento, for their helpful discussions and keen insights into the side project. At the very beginning, for each low-power image I provided, Marco gave his opinion and that of an expert in the field to help me identify the dorsal and ventral nuclei at different lumbar levels. His serious academic attitude in annotating each image left a very deep impression on me. Unluckily, I did not obtain any positive outcomes or results in this tough side project because of the resolution limit of the confocal microscope. However, through discussions and interactions with them, following a "question - answer - discussion of results - problems - possible causes" process, I actively or passively absorbed a lot of topic-related knowledge and experienced the growing pleasure of answering questions and doing research. Furthermore, I learnt from their very rigorous assessment of scientific results: results were analysed along multiple dimensions to determine their reliability.
A valid conclusion rests on a sample size that is as large as possible and on choosing the correct statistical analysis method. Much more than that, I gradually got a lot of inspiration from them and drove myself to open my mind and practice more critical and comprehensive scientific thinking. I really appreciated the fruitful discussions and communications with them via email.

To the Branchereau Lab

To all the members of the Branchereau lab - a big shout out to you guys. To William Cazenave - thank you for your assistance with the western blot experiments and for converting Excel files from csv format to xltm. But also, thank you for promptly accompanying me to the hospital emergency room to check in and treat my wound when my finger was bitten by a pregnant FUS mouse. To two internship students, at Master (Urvashi Dalvi) and Bachelor (Alissa Dimitrova) levels - thank you for showing me instantaneous frequency analysis and for helping me re-check the labelled dendritic tree numbers, respectively. To Philippe Chauvet - laboratory engineer - a special thank you for your assistance with technical problems (e.g., fixing dissecting instruments) and also for enthusiastically teaching me French in the lab, although my French is still poor. To Antonny Czarnecki - thank you for answering some of the project-related academic questions I asked, for giving suggestions, and for proofreading my poster and PhD manuscript. To Alexandra Reichova - thank you for your care about the progress of my thesis and your suggestions on the defense, and for your help in analysing the raw image data for the side project, even though we did not end up using it.

To all staff at INCIA and the University of Bordeaux

Many others were part of my PhD journey. Gilles Courtand - Principal Engineer - I would like to give you a big thank you for your excellent technical assistance in coding different macro files for the image analysis of my IHC data and for teaching me how to use the ZEISS confocal microscope for image acquisition and deconvolution processing. Dr. Muriel Thoby-Brisson - I really appreciate your kind assistance in lending me your team's powerful computer when my computer was running very slowly. Loïc Grattier and Christophe Halgand - our IT specialists - thank you for solving many problems related to my computer and the servers. Laura Cardoit - lab technician - thank you for teaching me to use the Leica cryostat and for helping me cut a few P20 spinal cords. Marie-Jeanne Cabirol - research engineer - thank you for teaching me the CLARITY tissue clearing technique, although I eventually gave up using it because of the long processing time. Nathalie Argenta - thank you for breeding the animals and providing the experimental animals I needed. Dr. Grégory Barrière - thank you for helping to code an Excel macro for efficient file format conversion from csv to xltm. Camille Lancelin - thank you for giving me helpful advice on the technical problems that I encountered during the side project. Anne Fayoux - thanks for your warm help in bringing the idle cooling fan from your home to the lab for me to use during the hot summer vacation of 2020, while I was analysing electrophysiological data in the lab.

To the facility of the Bordeaux Imaging Center

To Fabrice Cordelieres - BioImage Analyst engineer in the Photonic unit - thank you for your assistance with the deconvolution processing of Airyscan images and for your suggestions for image analysis.
To Lysiane Brocard and Matthieu Buridan - Technical Manager and Engineer in the Plant unit - thank you for your professional technical assistance in teaching me to use the Zeiss confocal microscope equipped with the Airyscan module to capture super-resolution images for my side project, and for responding positively to any questions I asked. Special thanks to Matthieu Buridan for his handy help in delivering and uploading the captured Airyscan images to me.

To the Doctorate Funding Program of the CSC and University of Bordeaux

I am really grateful to the CSC (China Scholarship Council) for providing me with a 4-year scholarship, so that my PhD aspirations were fulfilled without financial worries. I am especially grateful to Prof. Zhicheng Xiao, who was my supervisor during my master study and early academic career. Thanks to his support and recommendations, I won the selection to study for a PhD overseas with a national scholarship. I cannot thank the University of Bordeaux enough for providing me with the opportunity to pursue a PhD at its prestigious Neuroscience Institute. I have been given access to a world of possibilities to develop my professional and research skills. In addition, I had the pleasure of taking a very high-quality online course - Computational Neuroscience Crash Course - with a professional teaching team and talented students from across the world, to gain new knowledge and skills.

To my family, friends, flatmates and officemates

I wish to express my heartfelt thanks to my officemates, Anne Bessaguet-Olechowski, Zoé Mazurie, Gabriel Barrios, Laura Puygrenier and Gabriel Pitollat, for their daily discussions and daily help. Among them, when I arrived in our lab, Zoé provided me with timely assistance in administrative procedures, shared her experience of whole-cell patch clamp recording, and constantly cared about my life; Anne and Gabriel Barrios shared with me unreservedly on the academic side (e.g., sharing academic experiences and the use of Adobe Illustrator). I would like to extend my gratitude to the Chinese researchers and friends whom I met in Bordeaux and who shared some journeys with me before the COVID pandemic. They are Binbin Xu, Sze Ying Lam, Zhe Zhao, Wencan He, Xiaoyan Ma, Fang Tong, Li Lu, Wei Yuan, Linhan Shao, Zhenhui Chen, Jianqiao Jiang, Jiaojiao Wang, Oleg Solopchuck, et al. We exchanged all kinds of information (scientific news, some interesting knowledge in different fields, etc.), and shared the scientific methodology learned in France (e.g., reductionism) and learning resources. During the journey, I was so lucky to gain a Tai Chi instructor, Xiaoyan, who has been practicing martial arts since childhood. For me, she is also a good teacher and friend. Special thanks to Binbin for his great support and help in using the R software for the clustering analysis of my side project and for teaching me how to use R. Sze Ying, a big thank you for your constant care and encouragement, and for some very helpful advice. You're just like my elder sister, caring about my life and studies, bringing me gifts and books after your travels, giving me a red packet for Chinese New Year (because you said that in Hong Kong, people who are not married are allowed to receive red packets no matter their age), making food and bringing it to my residence or the lab, and letting me celebrate with you a very important event in your life.
I also would like to give thanks to the students who got the same scholarship in the same year, namely Naixin Kang, Weiwei Cao, Yi Yu, Qilei Song, Junjin Che, Fenghuan Zhao, et al., for the happy time we have spent together. I am grateful to have met some friendly flatmates in Bordeaux who come from different countries (i.e., France, Spain, Italy, Singapore, Syria, Germany, etc.) and study different majors (i.e., mathematics, public health, computer science, chemistry, physics, political science, literature, etc.). They are Antoine, Anne, Ahmad, David, Ali, Edgar, Lorenzo, Katie, Timothy, Alessa, Ahcene and Angela. We greet and communicate daily, share food or cuisines from our respective countries, run together, and go hiking together. We open our hearts to each other and get along like family: on weekends, we had family breakfasts or dinners together, and Antoine and Anne often got up early to make crêpes and coffee for us, and then we enjoyed a family breakfast and chatted together. But when the kitchen and bathroom were untidy, we would have a harmonious family meeting to talk about the problems and improvement measures, and we posted the family rules that each family member agreed to follow. When they learned something new (e.g., a foreign language, etc.) or obtained some important achievement (e.g., a finished product - a novel hands-on playable video game - designed by coding thousands of lines of code, or winning an award and certificate for an oral presentation at an international conference, etc.), they shared it with me; when they bought new but very expensive chairs, they let me try them; when anyone finished their study and was about to leave Bordeaux, they always said goodbye before leaving, and we also gave gifts and good wishes to each other. Although Antoine had moved to another residence, he still came back to our residence to give us one COVID rapid test kit containing 5 tests after his institute had distributed two free kits to each staff member. Sometimes I discussed some English expressions or phrases with them, which was very interesting and funny. One example: I didn't know the meaning of "a can of worms" from an email reply from Marco, and when I got home and asked Timothy from Singapore, "Do you know?", he blurted out "Don't want to open it". Honestly, this explanation is much better than the English-to-Chinese translation given by the translation software, because I had just such a picture in my mind that I could relate to a similar experience: we had cleaned a garbage bin with lots of worms together in the summer. There were many similar moments when I really appreciated the beauty and precision of the academic English used by people like Marco. When I was sick, Antoine and Edgar helped me contact the doctor and took me to the nearest emergency center. When I heard the news of my grandmother's death from my little niece, I felt very sad; it was Anne and Ahmad who first noticed my abnormality and offered me companionship and comfort. When I got COVID, Timothy took good care of me, buying food and cooking for me. Too many beautiful and vivid details of life cannot be written down one by one. I give my deepest thanks to my family, my Papa (Zhiwei Zhu) and Mummy (Huanying Zhu), grandmas (Yuxian Xuan and Heshun Wang), grandpas (Guifu Zhu, Jie Xuan and Hexiang Xie), siblings (Hongguo Zhu, Yunqin Zhu and Yunlou Zhu), aunt (Bin Xuan) and uncle (Ying Xuan), for giving me infinite love and support.
I will never be able to hug you again, my dear grandma and grandpa (Guifu Zhu and Jie Xuan), but I will remember you forever and wish you a second life and a happy life in another world.

The PhD journey in France is absolutely the most amazing and unforgettable experience of my whole life and a great treasure to me. This is because of not only the knowledge and expertise I have gained here, but also the unique and colourful overseas life I have had here. I really enjoyed the time working here and all those after-work journeys. I wish all my friends the very best in their careers. Good health to all (bonne santé à tous).

Abbreviations

ALS: Amyotrophic Lateral Sclerosis
AMPA: α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid

Glossary

Transcription factor (TF) = a protein that binds to DNA and controls the transcription from DNA to RNA. TFs are expressed in specific populations of neurons during development.
Central pattern generator (CPG) = a specific group of neurons that autonomously generate rhythmic patterns of activity.
EGABAAR = the reversal potential for GABAAR currents. GABAAR-mediated signalling consists of two main components: phasic and tonic inhibition.
"Phasic" inhibition = synaptic inhibition (fast inhibition), one form of GABAAR-mediated inhibitory signalling. It is triggered by transient activation of synaptic GABAARs at GABAergic synapses and consists of individual inhibitory postsynaptic events.
"Tonic" inhibition = non-synaptic constant (tonic) inhibition, one form of GABAAR-mediated inhibitory signalling. It is generated in large part through persistent activation of extrasynaptic GABAARs and may also involve spontaneously opening GABAARs whose activation does not require binding of GABA.
IPSC = inhibitory postsynaptic current generated by phasic inhibition, related to a synaptic release of GABA and/or glycine.
Miniature IPSC = an mIPSC is triggered by the spontaneous release of GABA and/or glycine from a single synaptic vesicle, without any pre-synaptic action potential. mIPSCs are revealed in the presence of TTX.
Spontaneous IPSC = an sIPSC is triggered by spontaneously occurring action potentials in presynaptic terminals and typically involves the release of GABA and/or glycine from multiple synaptic vesicles.
Evoked IPSC = an eIPSC is triggered following experimentally induced action potentials and, like the sIPSC, involves the release of GABA and/or glycine from multiple synaptic vesicles.

Table of contents

List of figures

Chapter 1
Figure 1. Organization of Motor cortex and spinal cord.
Figure 2. A hierarchy of MNs identities.
Figure 3. Columnar organization of spinal MNs by rostro-caudal axis patterning.
Figure 4. Spinal MNs specification and connectivity.
Figure 5. The development of the embryonic spinal cord.
Figure 6. Neuronal specification during spinal cord development along the dorso-ventral axis.
Figure 7. Organization of the locomotor system in vertebrates.
Figure 8. Organization of Mouse Spinal Locomotor CPG Network.
Figure 9. Positioning ipsilateral interneurons within the Spinal Locomotor CPG Network in ~P21 mice.
Figure 10. Schematic representation of the different types of motoneurons.
Figure 11. Local spinal motoneuronal network control of the excitation/inhibition balance in the normal condition.
Figure 12. The diagrams summarize the changes in the integration of Renshaw cells and Ia interneurons in the ventral spinal network during development.
Figure 13. Schema of GABAergic (left) and glycinergic (right) axon terminals.
Figure 14. Modes of GABAA receptor activation.
Figure 15. Structure of Cys-loop receptor (GABAA receptor) ion channels.
Figure 16. Structure of the glycine receptor.
Figure 17. Development of the GABAAR-mediated inhibitory transmission in mouse lumbar spinal MNs.
Figure 18. Developmental changes in the proportions of GABAergic and glycinergic synaptic activity in various areas of the CNS.
Figure 19. Schema of shunting inhibition.
Figure 20. Intracellular Cl- homeostasis.
Figure 21. Schematic representation of GABAergic signaling in mouse.
Figure 22. A developmental switch in cation-chloride cotransporter expression regulates excitatory GABAergic signaling in the developing CNS and inhibitory GABAergic signaling in the adult CNS.
Figure 23. Relationship between [Cl-]i and Cl- load (NKCC1 activity), KCC2 activity.
Figure 24. Hypothetical structural model of the K+-Cl- cotransporter.
Figure 25. Amyotrophic lateral sclerosis: muscle denervation and atrophy upon lower motor neuron loss.
Figure 26. ALS Gene Discovery since 1990.
Figure 27. Typical pathological hallmark features of ALS in human patients.
Figure 28. ALS pathomechanisms and targets tested in recent preclinical trials in rodents.
Figure 29. The "dying-forward" and "dying-back" hypotheses.
Figure 30. Schematic representation of the selective and progressive degeneration of MNs.
Figure 31. Time course of neurodegeneration in the SOD1 G93A mouse model of ALS.
Figure 32. Embryonic morphological abnormalities in E17.5 SOD1 G93A MNs.
Figure 33. ALS-associated alterations in ventral spinal cord circuitry.
Figure 34. Summary of the changes in motoneuron properties over time.
Figure 35. Schematic of the evolution of MN degeneration and glial activation during the course of SOD1 mutant-initiated ALS disease.
Figure 36. Graphic summary of natural killer cells modulating motor neuron-immune cell cross talk in models of ALS.
Figure 37. Hyperexcitability in SOD1G93A MNs at E17.5.
Figure 38. Summary diagram of thesis aims.

Chapter 3
Figure 39. Summary diagram of the mechanisms involved in Cl- homeostasis regulating cellular hyperexcitability and affecting the synaptic inhibition efficacy of GABA/glycine in prenatal SOD G93A MNs.
Figure 40. Recurrent inhibition is preferentially reduced in delayed firing motoneurons from SOD1G93A mice.
Figure 41. Alterations in the size of the postsynaptic GlyR clusters onto Fast MNs.
Figure 42. A summary of size measurements of GlyR clusters on rodent spinal MNs.
Figure 43. Schematic drawing of a recorded MN receiving recurrent inhibition from the Renshaw cell but with variable glycinergic input efficacy under physiological conditions.

CHAPTER 1 LITERATURE REVIEW

1.1 PHYSIOLOGY OF SPINAL CORD ORGANIZATION

A major task of the central nervous system (CNS) is to control behavioural actions, which necessitates a precise regulation of muscle activity. The final components of the circuitry controlling muscles are the motoneurons (MNs), which settle into pools in the ventral horn of the spinal cord in positions that mirror the organization of the musculature within the body (Figure 1). An animal's behavioural repertoire has evolved to suit its particular survival needs and originates from the nervous system circuitries that control movements. One major strategy for an animal to organize the neuronal networks for diverse behaviours is to possess ensembles of neurons that are distributed to form maps of external space or body space. These 'topographic' arrangements are a common theme in the vertebrate nervous system.
Figure 1: The descending corticospinal motor neurons (upper motor neurons) project from the motor cortex and make synapses in the brainstem and spinal cord onto the bulbar or spinal motor neurons (lower motor neurons) that project to skeletal muscles. Upper motor neurons from different areas of the primary motor cortex control the functions of the tongue (green), arm (blue), and leg (red). Adapted from (Taylor).

Neurons have complex and extended morphologies compared to other cell types, and within the CNS, neurons can vary greatly in their properties. MNs are unique cells amongst neurons because they are large, with very long axons, up to 1 m in length in an adult human. Motoneurons can be distinguished into two main categories according to their location in the CNS: upper motoneurons (UMNs), located in the cortex, and lower motoneurons (LMNs), located in the brainstem and spinal cord (Figure 1). The spinal MNs comprise both visceral "MNs" (pre-ganglionic neurons) of the thoracic and sacral regions, which control autonomic functions, and somatic MNs, which regulate the contraction of skeletal muscles and thus control movement.
MNs from the MMC are located in the medial region of the ventral grey matter of the spinal cord and target to the dermomyotome [START_REF] Gutman | Selective degeneration of a physiological subtype of spinal motor neuron in mice with SOD1-linked ALS[END_REF][START_REF] Tsuchida | Topographic organization of embryonic motor neurons defined by expression of LIM homeobox genes[END_REF], which gives rise to the axial musculature later in development [START_REF] Fetcho | A review of the organization and evolution of motoneurons innervating the axial musculature of vertebrates[END_REF][START_REF] Gutman | Selective degeneration of a physiological subtype of spinal motor neuron in mice with SOD1-linked ALS[END_REF]. Axial muscles are mainly involved in the maintenance of the body posture and are found all along the body axis. Consequently, MMC MNs are not segmentally restricted and are located all along the rostro-caudal axis of the spinal cord. MNs from the LMC are detected in the most lateral portion of the ventral spinal grey matter [START_REF] Bueker | DIFFERENTIATION OF THE LATERAL MOTOR COLUMN IN THE AVIAN SPINAL CORD[END_REF]. They connect to muscles of the appendages and therefore are present only at limb levels: LMC MNs are positioned at brachial level (C5 to T1), which innervates the forelimb, and at lumbar levels (L1-L5), which innervates the hindlimb [START_REF] Hollyday | An autoradiographic study of the formation of the lateral motor column in the chick embryo[END_REF][START_REF] Hollyday | Location of motor pools innervating chick wing[END_REF]. MNs from the HMC are located in the ventro-lateral grey matter and innervate muscles derived from the ventral mesenchyme [START_REF] Smithm | The development and postnatal organization of motor nuclei in the rat thoracic spinal cord[END_REF]. The ventral mesenchyme produces the body wall musculature composed of the intercostal and abdominal muscles present only at thoracic level [START_REF] Prasad | Development and migration of avian sympathetic preganglionic neurons[END_REF]. Thus, HMC MNs are only present at thoracic level [START_REF] Tsuchida | Topographic organization of embryonic motor neurons defined by expression of LIM homeobox genes[END_REF][START_REF] Sharma | Genetic and epigenetic mechanisms contribute to motor neuron pathfinding[END_REF]. PGC "MNs" are located in the thoracic and upper lumbar spinal segments (T1-L2) where they occupy an intermedio-lateral location, and the sacral segments (S2-S4). These cells known as spinal visceral "MNs" constitute the CNS component of the autonomic nervous system responsible for the control of smooth muscles (i.e., heart and arteries) and glands. Thoracic PGC MNs are connected to the sympathetic chain ganglia in the proximity of the spine and belong to the sympathetic autonomic system (fight or flight), whereas sacral PGC MNs are connected to ganglia in the vicinity of the effector targets (kidney, bladder, gonads) and belong to the parasympathetic system (rest and digest). In addition, spinal MNs are also organized into two less well-characterized motor columns: SAC and PMC. SAC MNs are located in the intermedio-lateral region of the spinal cord and expand from the end of the medulla until the 5th cervical segment (C1-C5) [START_REF] Ullah | Localization of the spinal nucleus of accessory nerve in rat: a horseradish peroxidase study[END_REF][START_REF] Jacobson | Neuroanatomy for the Neuroscientist[END_REF]. 
Axons of the SAC MNs constitute the accessory nerve (XI), which supplies the sternocleidomastoid and trapezius muscles. PMC MNs are located in the cervical segments (C2-C6) at embryonical stages and become progressively confined between cervical levels (C3-C5) by birth [START_REF] Webber | Structural and functional characteristics of individual phrenic motoneurons[END_REF]Allan et al., 1997aAllan et al., , 1997b;;[START_REF] Song | Development of the rat phrenic nucleus and its connections with brainstem respiratory nuclei[END_REF]. Axonal projections from PMC MNs converge into phrenic nerves innervating the respiratory diaphragm. Each MMC, LMC, HMC and PGC column possesses a coherent gene expression profile as well as a uniform axonal projection pattern [START_REF] Stifani | Motor neurons and the generation of spinal motor neuron diversity[END_REF]. Neural circuits are assembled with remarkable precision during embryonic Cellular identities are established by the inductive signalling of morphogen gradients within the ventricular zone of the ~E9.5 neural tube across two major spatial axes: 1) the dorso-ventral axis and 2) the rostro-caudal axis [START_REF] Jessell | Neuronal specification in the spinal cord: inductive signals and transcriptional codes[END_REF]. As development proceeds, neural progenitor cells within these axes differentiate into different neural categories with specific identities. Within mouse embryo, rostrocaudal patterning is established in early neural progenitors around the time of neural tube closure [START_REF] Ensini | The control of rostrocaudal pattern in the developing spinal cord: specification of motor neuron subtype identity is initiated by signals from paraxial mesoderm[END_REF] and in the developing spinal cord it is manifested by the differential expression of a phylogenetically conserved family of Hox transcription factors (Figure 4). Whilst the activity of rostro-caudal patterning signals is transient, it has a lasting effect on the pattern of Hox gene expression that is maintained for the rest of embryonic development [START_REF] Ensini | The control of rostrocaudal pattern in the developing spinal cord: specification of motor neuron subtype identity is initiated by signals from paraxial mesoderm[END_REF]. Maintenance of Hox gene expression is essential for the generation of segment-specific types of spinal MNs including columnar and pool specific MN subtypes, which are necessary for proper wiring and functionality of spinal motor circuits [START_REF] Mazzoni | Saltatory remodeling of Hox chromatin in response to rostrocaudal patterning signals[END_REF]. Dorso-ventral patterning is regulated by two main secreted proteins or morphogens: As neuron development progresses, the neurotransmitter properties of the cells emerge and express phenotypic markers, such as neurotransmitter biosynthetic enzymes (e.g., choline acetyltransferase (ChAT) or glutamic acid decarboxylase (GAD)) and vesicular transport proteins (e.g., vGluT2). During the formation of the CNS, cholinergic signalling, the binding of neurotransmitter acetylcholine (ACh) to its receptors, is essential for necessary physiological processes, including spontaneous neuronal activity, axon pathfinding and synaptogenesis [START_REF] Rima | Dynamic regulation of the cholinergic system in the spinal central nervous system[END_REF]. Glutamate decarboxylase (GAD) is the biosynthetic enzyme for the neurotransmitter γ-aminobutyric acid (GABA). 
GABA signalling plays several roles in neuronal development by modulating immature progenitor proliferation, neuron migration, survival and differentiation, proper morphological development of neurons, as well as potentially primes the earliest neuronal networks [START_REF] Wu | GABA receptors in brain development, function, and injury[END_REF]. Many progenitor domains tend to produce neurons with similar initial axonal growth trajectories, whereas other domains (e.g., pd1, p0, and p1) produce "V" interneuron subtypes with diverse axonal projection. Although each class of "V" cells is highly heterogeneous, the best example of the diversification of a single neuronal class is MNs (Figure 6). Generic postmitotic MNs become subdivided into the medial and hypaxial motor columns (MMC and HMC), which innervate the back (epaxial) and trunk (hypaxial) musculature, respectively (Figure 4). MNs from both columns express Isl1 and Isl2, although the ratio of expression varies, with greater Isl1 expression in the HMC than MMC at E11.5 and greater Isl2 in the MMC than HMC by E13.5 in mouse [START_REF] Tsuchida | Topographic organization of embryonic motor neurons defined by expression of LIM homeobox genes[END_REF][START_REF] Thaler | A postmitotic role for Isl-class LIM homeodomain proteins in the assignment of visceral spinal motor neuron identity[END_REF]. The MMC innervates dorsal or epaxial musculature, while HMC innervates ventral or hypaxial musculature (Figure 4). Initially, all MN progenitors express the LIM homeodomain transcription factor, Lhx3. Lhx3 expression is maintained in the MMC, while its expression is downregulated in the HMC and LMC [START_REF] Tsuchida | Topographic organization of embryonic motor neurons defined by expression of LIM homeobox genes[END_REF]. MN-(Hb9 promoter) dependent expression of Lhx3 results in conversion of LMC MNs to a MMC identity [START_REF] Sharma | Genetic and epigenetic mechanisms contribute to motor neuron pathfinding[END_REF]. At limb levels, the medial and lateral portions of the lateral motor column (LMCm and LMCl) innervate the ventral and dorsal portions of the limb, respectively. MNs pools innervating the same muscle can be defined by unique combinations of transcription factors (e.g., Nkx6 and Ets classes, which are a group of evolutionarily related, DNAbinding transcriptional factors) (Figure 6). Currently, over 20 interneuron types have defined in the spinal cord. Historically, two broad groups have been defined: the "V" interneurons with progenitors that are found in the ventral cord and are grossly associated with motor function, and a dorsal interneuron, dI class, associated predominantly with sensory processing. Furthermore, each cardinal "V"' class of ventral interneurons can be subdivided into several subsets according to further combinatorial expression of transcription factors [START_REF] Francius | Identification of multiple subsets of ventral interneurons and differential distribution along the rostrocaudal axis of the developing spinal cord[END_REF]. As we can see on Figure 6, V and dl interneurons are either excitatory (glutamatergic) or inhibitory (glycinergic and/or GABAergic). 
Conversely, elimination of dorsal inhibitory V0 D interneurons triggered hopping at slow locomotor frequencies, but the alternation remained conserved at fast locomotion while the phenotype was reversed during selective ablation of ventral glutamatergic V0 V interneurons [START_REF] Talpalar | Dualmode operation of neuronal networks involved in left-right alternation[END_REF][START_REF] Bellardita | Phenotypic characterization of speed-associated gait changes in mice reveals modular organization of locomotor networks[END_REF]O. Kiehn, 2016). These results demonstrate that the V0 population, through commissural projections, plays a critical role in setting the basic pattern of terrestrial locomotion, while also regulating the rhythm of left-right alternation A genetic approach selectively eliminating Pax6 + and En1 + V1 class interneurons from the spinal cord [START_REF] Gosgnach | V1 spinal neurons regulate the speed of vertebrate locomotor outputs[END_REF], showed that the overall pattern of left-right alternation and rostro-caudal propagation of activity was normal when fictive locomotion was pharmacologically induced, whereas MN bursting episodes revealed longer cycle periods and durations (,frequency). This resulted in a markedly slower locomotor activity rhythm in Pax6 -/-and En1-DTA mice preparations compared to wild type preparations. These results demonstrate that the V1 population as a whole regulates the step-cycle period of fictive locomotion and is necessary for 'fast' locomotor outputs in mice. Given that, V1 class interneurons are more homogeneous compared to V0 in the sense that they are all inhibitory and characterized by the coexpression of glycine and GABA [START_REF] Wenner | Topographical and physiological characterization of interneurons that express engrailed-1 in the embryonic chick spinal cord[END_REF][START_REF] Alvarez | Postnatal phenotype and localization of spinal cord V1 derived interneurons[END_REF]. These findings also suggest the important role for inhibition in regulating the frequency of the locomotor CPG rhythm. Ablation of ipsilateral projection of excitatory Chx10 + V2a cells using diphtheria toxin also disturbs right-left alternation and, notably, at higher speeds, animals transition to a left-right synchronous gallop that is not seen in wildtype mice. Genetic ablation of Chx10 + V2a in mice is also associated with defects in left-right coordination of locomotion activity and perturbs gait transitions [START_REF] Crone | Genetic ablation of V2a ipsilateral interneurons disrupts left-right locomotor coordination in mammalian spinal cord[END_REF][START_REF] Zhong | Frequency-dependent recruitment of V2a interneurons during fictive locomotion in the mouse spinal cord[END_REF]. This effect is possibly mediated by a lack of excitatory drive onto contralateral glutamatergic interneurons as suggested by the pattern of V2a synaptic connectivity in mice, in turn weakening left-right alteration [START_REF] Crone | Genetic ablation of V2a ipsilateral interneurons disrupts left-right locomotor coordination in mammalian spinal cord[END_REF]. These results demonstrate that a group of ipsilaterally projecting excitatory interneurons, Chx10 + V2a interneurons, has an important role in driving the excitatory commissural interneurons (a subset of CIN) responsible for left-right alternation. 
Moreover, in neonatal (P0-P4) mice, spinal V2a interneurons are dispensable for leftright coordination at low locomotor frequencies, but their function is essential for maintaining left-right coordination at high frequencies, i.e., frequency-dependent recruitment of V2a interneurons during fictive locomotion [START_REF] Zhong | Frequency-dependent recruitment of V2a interneurons during fictive locomotion in the mouse spinal cord[END_REF]. Half of the V2a interneurons receive rhythmic locomotor synaptic drive, which increases with cycle frequency, recruiting more of the neurons to fire at higher frequencies, while the other V2a interneurons do not receive locomotion-related synaptic drive and are not recruited into the locomotor network at any frequency. This gradual recruitment of V2a interneurons underlies the maintenance of left-right alternation in normal mice during acceleration from low speeds, in which V2a interneurons do not have a major role, to higher speeds, in which maintenance of left-right alternation depends on the increasing recruitment of rhythmic V2a interneurons. The increased role of V2a interneurons at higher locomotor frequencies arises from increased synaptic drive to recruit subthreshold oscillating V2a neurons, and not from recruitment of a second set of silent V2a interneurons [START_REF] Zhong | Frequency-dependent recruitment of V2a interneurons during fictive locomotion in the mouse spinal cord[END_REF]. Inactivation of contralaterally projecting excitatory Sim1 + V3 interneuron class disrupts the regularity of the rhythm. The ipsilaterally projecting glutamatergic Hb9 + Vx class is rhythmically active during locomotion [START_REF] Hinckley | Locomotorlike rhythms in a genetically distinct cluster of interneurons in the mammalian spinal cord[END_REF]. Inactivation of the Pitx2 + V0 C class disrupts aquatic locomotion due to altered integration of sensory feedback. The Ptf1a + dI4 class forms inhibitory presynaptic contacts on glutamatergic proprioceptive sensory neurons in the ventral spinal cord. The dI6 and dI3 interneuron classes make direct connections onto MNs, however, their roles have not been determined (Figure 6). As a whole, we can see that many subtypes of spinal interneurons are involved in the genesis of an appropriate locomotor activity. LOCOMOTOR SYSTEM AND SPINAL LOCOMOTOR CPG MNs acquire molecular properties during their differentiation throughout embryonic development as a result of dynamic interplay between spatial and temporal expression of families of transcription factors and diffusible morphogens. In the terminal step of differentiation, the combinatorial activity of terminal effector genes defines the features of individual postmitotic MNs. This battery of terminal identity genes governs the synthesis of neurotransmitters and expression of neurotransmitter receptors, ion channels, and axon guidance and synaptic adhesion molecules [START_REF] Deneris | Maintenance of postmitotic neuronal cell identity[END_REF][START_REF] Li | Establishment and maintenance of motor neuron identity via temporal modularity in terminal selector function[END_REF]. At these early stages of development, the first wirings of neuronal circuits proceed with axon outgrowth toward appropriate targets and by the complex interaction of both intrinsic genetic instructions and environmental cues. Spinal MNs are organized into motor columns along the rostro-caudal and ventro-dorsal axes that project to a single muscle target in the periphery. 
Long descending premotor projection neurons from the spinal cord and supraspinal centres, as well as proprioceptive afferents, begin to establish the spinal motor circuitry during embryonic development [START_REF] Glover | Development of Specific Connectivity Between Premotor Neurons and Motoneurons in the Brain Stem and Spinal Cord[END_REF][START_REF] Arber | Organization and function of neuronal circuits controlling movement[END_REF]. The extensive dendritic arborization of MNs that integrates synaptic inputs critical for circuit formation and plasticity [START_REF] Dong | Intrinsic and Extrinsic Mechanisms of Dendritic Morphogenesis[END_REF]. Also, the diversity of local interneuron subtypes that direct early motor output enable further adaptive motor behaviour [START_REF] Berg | Principles Governing Locomotion in Vertebrates: Lessons From Zebrafish[END_REF]. Animals rely on locomotion or movements to explore their environment, to feed, to escape from predators, to dominate habitats and to find partners for reproduction. These locomotor behaviours are fundamental motor acts that give animals the ability to exploit their surroundings to seek the essential resources, to rapidly change their gait to adapt their locomotion to various terrains and avoid hazards, and also give humans the ability to move. To perform these locomotor movements, the central nervous system generates elaborate goal-directed locomotor sequences translating intent into action through the transformation of the activity of neuronal circuits into the orderly contraction of muscles [START_REF] Mcneill Alexander | Energetics and optimization of human walking and running: the 2000 Raymond Pearl memorial lecture[END_REF][START_REF] Bellardita | Phenotypic characterization of speed-associated gait changes in mice reveals modular organization of locomotor networks[END_REF]O. Kiehn, 2016). Work over many years has demonstrated that the motor control system exhibits a multitude of interleaved layers of organization in a modular manner [START_REF] Goulding | Circuits controlling vertebrate locomotion: moving in a new direction[END_REF]. Topography of neural circuits is clearly a major organizational strategy of the CNS. Thereby, the motor network is presented as topographic spatial organization within the spinal MNs, motor cortex, spinal sensory system, and spinal interneurons function together with the thalamus, basal ganglia, brainstem, cerebellum, and other brain regions to initiate and coordinate movements (Figure 7). In vertebrates, the best example of modular organization of motor circuits is the central pattern generators (CPGs) in the spinal cord, which generate neural oscillations and control rhythmic movements, providing an experimentally tractable model system for investigating how moderately complex ensembles of neurons generate specific motor behaviours. ("alteration" in solid-line path, Figure 8). These pathways appear well-suited to the control of left-right alternation during locomotion since they are likely to connect to rhythm-generating neurons on the contralateral side of the cord ("alteration" in dashline path, Figure 8). The single excitatory pathway is composed of a set of glutamatergic CINs that directly excite contralateral MNs ("Synchrony" in solid-line path, Figure 8), with possible additional connections to contralateral rhythmgenerating neurons ("Synchrony" in dash-line path, Figure 8). This pathway is therefore positioned to promote synchrony in left-right motor bursting. 
Accordingly, these two types of CIN pathways produced the different locomotion patterns: the dual inhibitory pathway might be active during a pattern of alternation activity (walking), whereas the direct excitatory pathway might be active during a pattern of synchronized movements (hopping). The general organization of these pathways seems to be similar in the adult cat spinal cord [START_REF] Jankowska | Spinal interneuronal networks in the cat: elementary components[END_REF]. Based on neuronal elimination experiments and genetic tracing studies, at least four classes of interneurons (INs), including V0, V2, V3 and V1, can be defined in the locomotor CPG region in the embryonic and early postnatal spinal cord (Figure 8). Almost all the V0-INs and most of the V3-INs are commissural. V0 interneurons were the first of the parent genetically defined classes to be characterized, and have their functional role in stepping determined [START_REF] Lanuza | Genetic identification of spinal interneurons that coordinate left-right locomotor activity necessary for walking movements[END_REF][START_REF] Gosgnach | Synaptic connectivity amongst components of the locomotor central pattern generator[END_REF] and, as a whole, more than 90% of this population has been shown to be inhibitory [START_REF] Sapir | Pax6 and engrailed 1 regulate two distinct aspects of renshaw cell development[END_REF][START_REF] Gosgnach | Synaptic connectivity amongst components of the locomotor central pattern generator[END_REF]. V1-INs are ipsilateral, Pax6 and En1-positive, and GABAergic and glycinergic. This set constitutes over one third of all inhibitory interneurons in the ventral spinal cord, and includes all Renshaw and many ipsilaterally projecting inhibitory interneurons [START_REF] Sapir | Pax6 and engrailed 1 regulate two distinct aspects of renshaw cell development[END_REF]J. Zhang et al., 2014). Genetic ablation of V1-INs slows the speed of rhythmic locomotor output and perturbs flexor-extensor alternation [START_REF] Gosgnach | V1 spinal neurons regulate the speed of vertebrate locomotor outputs[END_REF]J. Zhang et al., 2014). Ipsilateral inhibitory circuits underlying flexor-extensor alternation The spinal locomotor CPG network generating flexor-extensor alternation is organized in flexor and extensor modules which are reciprocally connected through ipsilateral inhibitory interneurons [START_REF] Kiehn | Development and functional organization of spinal locomotor circuits[END_REF] 9). From an evolutionary perspective, the diversity displayed by mammalian V1 neuronal circuits may be an adaptation to the complex control features of multi-jointed limbs. MNs SUBTYPES The spinal cord is a highly organized final executive center for body movement and MNs are the final stage of neural processing for the execution of locomotor behaviours. As all motor-related circuitries with their particular organizational schemes must signal or converge to the MNs that control muscles, it is helpful to examine the arrangement of MNs first. Romanes and colleagues have used retrograde neuronal fills from the muscles or nerves in the periphery and identified discrete clusters of MNs, called pools, devoted to each muscle (George J. [START_REF] Romanes | The motor cell columns of the lumbo-sacral spinal cord of the cat[END_REF]; G.J. 
[START_REF] Romanes | The Motor Pools of the Spinal Cord Progress in Brain Research[END_REF][START_REF] Tj | The localization of the motoneurons supplying the hindlimb muscles of the mouse[END_REF][START_REF] Nicolopoulos-Stournaras | Motor neuron columns in the lumbar spinal cord of the rat[END_REF][START_REF] Vanderhorst | Increased glutathione biosynthesis by Nrf2 activation in astrocytes prevents p75NTR-dependent motor neuron apoptosis: Astrocytic GSH and motor neuron apoptosis[END_REF]. These studies also identified the organizational principles of inter-pool relationships. Behaviourally relevant movements naturally involve muscles within the same part of the body, leading to a significant local clustering of the neurons that control neighbouring body regions. The MN population can be divided into several sub-populations in adults (Figure 10). γ-MNs innervate intrafusal fibres, whereas α-MNs innervate extrafusal fibres and β-MNs innervate both. γ- and β-MNs are further divided into static and dynamic MNs according to the way they modulate muscle spindle activity. When investigating the properties of motor units, Burke et al. [START_REF] Burke | Mammalian Motor Units: Physiological-Histochemical Correlation in Three Types in Cat Gastrocnemius[END_REF][START_REF] Burke | Physiological types and histochemical profiles in motor units of the cat gastrocnemius[END_REF] defined three physiological types: fast and slow motor units, according to the contraction time of the muscle fibres, and, within the fast motor units, fatigue-resistant and fatigable motor units, according to whether the contractile material of their muscle fibers can sustain repeated stimulation. Motor units therefore comprise slow (S), fast fatigue-resistant (FR) and fast fatigable (FF) types, innervated by S MNs, FR MNs and FF MNs, respectively. The investigation of motor unit properties was initially carried out in the cat, but similar results were obtained in many other mammals (rat, rabbit, human, guinea-pig) [START_REF] Kernell | The Motoneurone and its Muscle Fibres[END_REF]. However, some authors argued that FR and FF motor units form a continuum rather than discrete populations, especially in rodents, in which motor neuron size differences are less marked (R. Bakels et al., 1993; [START_REF] Gardiner | Physiological properties of motoneurons innervating different muscle unit types in rat gastrocnemius[END_REF]). In mammals, every muscle contains the three types of muscle units, although in highly variable proportions. Burke et al. [START_REF] Burke | Mammalian Motor Units: Physiological-Histochemical Correlation in Three Types in Cat Gastrocnemius[END_REF][START_REF] Burke | Physiological types and histochemical profiles in motor units of the cat gastrocnemius[END_REF] further demonstrated that all the muscle fibers of a given motor unit exhibit the same histochemical profile. Furthermore, they found a correlation between the physiological types and the histochemical profiles: type S motor units have type I muscle fibers, type FR motor units have type IIa muscle fibers, and type FF motor units have type IIb muscle fibers (Figure 10). Therefore, Burke et al.
[START_REF] Burke | Mammalian Motor Units: Physiological-Histochemical Correlation in Three Types in Cat Gastrocnemius[END_REF][START_REF] Burke | Physiological types and histochemical profiles in motor units of the cat gastrocnemius[END_REF] showed an exquisite correlation between the muscle unit contractile properties (contraction velocity and fatigue resistance) and the molecular identity of the fibers (differences in the amount of oxidative enzymes such as SDH or in the amount of myofibrillar ATPases) [START_REF] Edstrom | Histochemical composition, distribution of fibres and fatiguability of single motor units. Anterior tibial muscle of the rat[END_REF][START_REF] Schiaffino | Fiber Types in Mammalian Skeletal Muscles[END_REF]. Later on, staining of the different isotypes of the myosin heavy chain proteins was established [START_REF] Schiaffino | Embryonic and neonatal myosin heavy chain in denervated and paralyzed rat skeletal muscle[END_REF], further tightening the link between the contractile proteins and the muscle unit properties.

Figure 10. MNs in the adult can be classified into functionally diverse classes and subtypes. (A) One division, into alpha (α), beta (β) and gamma (γ) MNs, is made according to the type of muscle fiber that each class innervates: α-MNs innervate extrafusal fibers, γ-MNs innervate intrafusal fibers, and β-MNs innervate both intrafusal and extrafusal fibers (this class is not shown in the schema). α-MNs target distinct muscle fibers, and this class is therefore further subdivided into three subtypes: FF (fast-twitch fatigable), FR (fast-twitch, fatigue-resistant) and S (slow-twitch, fatigue-resistant). (B) A typical motor pool contains all types of MNs, fast and slow, α and γ: α-FF, α-FR, α-S and γ. The upper pool innervates a fast muscle but also contains α-S and γ MNs. Similarly, the lower pool would be classified as slow. Adapted from [START_REF] Kanning | Motor Neuron Diversity in Development and Disease[END_REF]

The fact that different MNs impinge on different muscle units and that a given MN connects exclusively to a homogenous muscle fiber population calls for a matching between a MN and the characteristics of its efferent fibers (i.e., the electrophysiological properties of MNs need to be tuned to their output). Indeed, MN properties differ widely. Among all characteristics, MNs of a given kind segregate according to their electrotonic size (evaluated by measuring the input conductance) [START_REF] Zengel | Membrane electrical properties and prediction of motor-unit type of medial gastrocnemius motoneurons in the cat[END_REF]. In addition, Friese et al. [START_REF] Friese | Gamma and alpha motor neurons distinguished by expression of transcription factor Err3[END_REF] showed that the average size of the soma already allows discrimination between γ- and α-MNs as early as P14 in mice. MN size therefore increases from γ-MN to α-FF MN (γ < α-S < α-FR < α-FF). Given the evidence indicating that α-MNs receive a common input (Elwood Henneman, 1957; [START_REF] Mendell | Terminals of Single Ia Fibers: Distribution within a Pool of 300 Homonymous Motor Neurons[END_REF]), Henneman posited an orderly recruitment of the motor units based on the size principle [START_REF] Henneman | The size-principle: a deterministic output emerges from a set of probabilistic connections[END_REF]. That is to say, for a fixed input, the most electrotonically compact MNs will see their voltage rise more and will reach their firing threshold before the less compact ones.
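As a minimal quantitative sketch of the size principle (the conductance and current values below are purely illustrative, not measurements from the studies cited above), Ohm's law gives the steady-state depolarization produced by a shared synaptic current:

\[ \Delta V = \frac{I_{syn}}{G_{in}}, \qquad G_{in}(\alpha\text{-S}) < G_{in}(\alpha\text{-FR}) < G_{in}(\alpha\text{-FF}) \]

For a common drive of, say, \(I_{syn} = 5\) nA, a compact S-type MN with \(G_{in} = 0.5\) µS depolarizes by 10 mV, whereas a large FF-type MN with \(G_{in} = 1\) µS depolarizes by only 5 mV; the S unit therefore crosses its firing threshold first, yielding the recruitment order S → FR → FF.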
In the neonatal rodent, heterogeneity between motor units is already established [START_REF] Close | Properties of motor units in fast and slow skeletal muscles of the rat[END_REF] and motor units present a biased composition in fiber types as early as E16 [START_REF] Condon | Development of muscle fiber types in the prenatal rat hindlimb[END_REF]. But what about putative differences between the intrinsic electrical properties of neonatal MNs? So far, we do not know how the functional properties of the different subtypes of spinal MNs mature. It is well known that, in adult mammals, spinal MNs start to fire without any delay at the onset of a liminal current pulse (Elwood Henneman, 1957; [START_REF] Manuel | Fast Kinetics, High-Frequency Oscillations, and Subprimary Firing Range in Adult Mouse Spinal Motoneurons[END_REF]). In sharp contrast, at least two different firing behaviours were reported during the second postnatal week for rat abducens MNs [START_REF] Russier | A-, T-, and Htype Currents Shape Intrinsic Firing of Developing Rat Abducens Motoneurons[END_REF] and mouse spinal MNs (Pambo-Pambo et al., 2009). Upon injection of liminal current pulses, the discharge starts at the current onset in some MNs but is delayed in others. Neonatal and adult bistable motoneurons display a delayed spike-frequency acceleration, which reflects slow inactivation of Kv1.2 channels [START_REF] Bos | Kv1.2 Channels Promote Nonlinear Spiking Motoneurons for Powering Up Locomotion[END_REF].

RECURRENT INHIBITION

An important and relatively simple circuit that controls MN firing is the one established between Renshaw cells (RCs) and MNs (Figure 11), which together form a recurrent inhibitory loop that adjusts the motor output. RCs receive synaptic inputs from intraspinal collaterals of motoneuronal axons and, once activated, inhibit the same MNs and close synergists (Birdsey Renshaw, 1946;[START_REF] Eccles | Cholinergic and inhibitory synapses in a pathway from motor-axon collaterals to motoneurones[END_REF][START_REF] Fyffe | Spatial distribution of recurrent inhibitory synapses on spinal motoneurons in the cat[END_REF]Francisco J. Alvarez et al., 2007). In this manner, RCs slow down and stabilize MN firing and shape input-output relations to excitatory drive [START_REF] Windhorst | ON THE ROLE OF RECURRENT INHIBITORY FEEDBACK IN MOTOR CONTROL[END_REF]. Renshaw cells were first identified in cats by their high-frequency discharge in response to antidromic motor axon action potentials [START_REF] Willis | The case for the Renshaw cell[END_REF] and are located in the most ventral regions of laminae VII and IX of the spinal cord (R. C. [START_REF] Thomas | Precise Localization of Renshaw Cells with a New Marking Technique[END_REF]. They belong to the V1 interneuron subclass [START_REF] Gonzalez-Forero | Differential postnatal maturation of GABAA, glycine receptor, and mixed synaptic currents in Renshaw cells and ventral spinal interneurons[END_REF] and can be identified by their medium-to-large size, their location, their expression of biochemical markers such as GlyT2, calbindin and parvalbumin, and their electrophysiological properties, such as a high postsynaptic sensitivity to acetylcholine and large glycine- and GABA-evoked currents (Francisco J. Alvarez et al., 2007).
Their inhibitory action is mediated by both GABAergic and glycinergic synapses, although synaptic boutons immunoreactive to glycine alone are more frequent on MNs than boutons immunoreactive to both GABA and glycine [START_REF] Shigenaga | The distribution of inhibitory and excitatory synapses on single, reconstructed jaw-opening motoneurons in the cat[END_REF]. Since Renshaw cells release both GABA and glycine, the recurrent inhibition they induce exerts a longer inhibitory synaptic action than the inhibition induced by Ia interneurons, in which neurotransmission is more phasic and solely glycinergic [START_REF] Bhumbra | Co-release of GABA does not occur at glycinergic synapses onto lumbar motoneurons in juvenile mice[END_REF](Ole Kiehn, 2016). Renshaw cells are the only interneurons that receive direct excitatory synaptic inputs from MNs and, in turn, exert inhibitory feedback on them, known as recurrent inhibition (Francisco J. Alvarez et al., 2013). However, the inhibitory synapses of Renshaw cells are located on dendrites rather than on the cell body [START_REF] Fyffe | Spatial distribution of recurrent inhibitory synapses on spinal motoneurons in the cat[END_REF] and the effectiveness of recurrent inhibition at reducing the MN firing rate is limited [START_REF] Mattei | Pharmacologically induced enhancement of recurrent inhibition in humans: effects on motoneurone discharge patterns[END_REF]. This is in keeping with the small amplitude of the postsynaptic inhibitory potential or current generated by Renshaw cells [START_REF] Windhorst | ON THE ROLE OF RECURRENT INHIBITORY FEEDBACK IN MOTOR CONTROL[END_REF]. In contrast, the synapses of Ia inhibitory interneurons are close to the MN soma and have a more significant impact in counteracting the excitatory input arriving in the dendrites. Thus, Renshaw cells and Ia interneurons present distinct synaptic connectivity that serves different functions. Individual Renshaw cells receive inputs from particular motor pools and spread their inhibitory output to the same MNs directly, as well as to Ia inhibitory interneurons mediating reciprocal inhibition of antagonistic muscles (flexor-extensor alternation), to γ-MNs controlling muscle spindle length, and to other Renshaw cells (Francisco J. Alvarez et al., 2007) (Figure 12). Thus, recurrent inhibition is primarily generated by input from motor axon collaterals. In summary, Renshaw cells are activated by axon collaterals of α-MNs, in particular those in proximal motor pools that are vulnerable in ALS, and in turn inhibit α- and γ-MNs in homonymous and synergist MN pools. The functional role of Renshaw inhibition is not clear.
Several hypotheses have been proposed: Renshaw inhibition may serve to limit MN firing [START_REF] Noga | The role of Renshaw cells in locomotion: antagonism of their excitation from motor axon collaterals with intravenous mecamylamine[END_REF], to curtail plateau potentials and persistent firing [START_REF] Hultborn | Variable amplification of synaptic input to cat spinal motoneurones by dendritic persistent inward current[END_REF][START_REF] Bui | Relative Location of Inhibitory Synapses and Persistent Inward Currents Determines the Magnitude and Mode of Synaptic Amplification in Motoneurons[END_REF], or conversely to support high firing rates of MNs [START_REF] Obeidat | Modulation of motoneuron firing by recurrent inhibition in the adult rat in vivo[END_REF], to restructure recruitment at the motor pool level [START_REF] Hultborn | On the function of recurrent inhibition in the spinal cord[END_REF], or to guide the tuning of MN properties at the microcircuit level [START_REF] Brownstone | Spinal circuits for motor learning[END_REF].

Figure 12. | V1 interneurons take a ventral-lateral migratory path and position themselves in close relationship to MNs, with whom they start to interact synaptically. V1 interneurons produce several classes of ipsilaterally projecting GABAergic/glycinergic premotor interneurons: RCs and IaIN precursors (top, middle). At mid-embryonic stages, all early connectivity (primarily cholinergic and GABAergic, and less glycinergic and glutamatergic) is excitatory; however, morphological details of these early connections are largely unknown (question marks). It is believed that this early connectivity has important roles in transmitting motoneuron spontaneous activity to the whole spinal network and that this spontaneous activity is important during early axonal guidance and target recognition. An amplifier role for early MN-initiated spontaneous activity could be accomplished by RCs directly or indirectly through connections with other interneurons like IaINs. The anatomical nature and organization of these connections remains speculative at present. Current studies are actively pursuing clarification of this early synaptology. | At late embryonic stages (top, right), inhibitory synapses become hyperpolarizing and the role of RCs in transmitting excitation down-regulates in favor of exerting inhibitory influences over the MNs. Primary afferent axons commence to invade the ventral horn of the spinal cord in late embryos, at the same time as they induce the differentiation of the sensory apparatus in the periphery (muscle spindles and Golgi tendon organs). By the time of birth (bottom, right), RCs are contacted by both sensory afferents and motor axons and, in this sense, they are a class of proprioceptive inhibitory interneuron that also receives direct excitation from MNs. The source of the sensory input is as yet unknown. If it originates from antagonist muscles, then Renshaw cells could behave at this stage as a special class of IaIN. After maturation and onset of weight-bearing locomotion (bottom, left), this sensory input is functionally lost (indicated by an X), although the sensory inputs are still present structurally, and the role of RCs then becomes more focused on recurrent inhibition, as it is in the adult. From Alvarez and Fyffe 2007.

GABAergic/Glycinergic INHIBITION AND GABA/Glycine TRANSMISSION

There are two major inhibitory neurotransmitters that modulate excitatory signals in the CNS: γ-aminobutyric acid (GABA) and glycine.
Inhibitory circuits across different brain regions rely preferentially on GABAergic (brain) or glycinergic (brainstem and spinal cord) transmission, but some neural circuits utilize both GABA and glycine at an individual synapse (Andréa Dumoulin et al., 2001). GABA and glycine can also be co-released from the axon terminal of an individual interneuron, allowing a wider dynamic range of inhibitory modulation than could be conferred by the action of a single neurotransmitter type [START_REF] Nerlich | Dynamic Fidelity Control to the Central Auditory System: Synergistic Glycine/GABAergic Inhibition in the Cochlear Nucleus[END_REF] (Figure 13). In the spinal cord sensory-motor circuit, distinct populations of proprioceptive sensory afferents target specific MNs, and different populations of inhibitory interneurons form synapses onto the sensory afferent terminals and MNs, respectively. Inhibitory synapses onto the sensory afferents are usually GABAergic, whereas those on MNs are glycinergic and/or GABAergic [START_REF] Hughes | P boutons in lamina IX of the rodent spinal cord express high levels of glutamic acid decarboxylase-65 and originate from cells in deep medial dorsal horn[END_REF][START_REF] Betley | Stringent Specificity in the Construction of a GABAergic Presynaptic Inhibitory Circuit[END_REF].

Figure 13. Schema of GABAergic (left) and glycinergic (right) axon terminals. GABA is synthesized from L-glutamate by GAD-65 and GAD-67 and transported into presynaptic storage vesicles by VGAT/VIAAT. Degradation of GABA occurs through GABA transaminase (GABA-T) to form succinic semialdehyde (SSA), which is further metabolized in the Krebs cycle. Glycine is formed from serine by serine hydroxymethyltransferase and taken up from the extracellular space by GlyT2. Transport into presynaptic vesicles again occurs through VGAT/VIAAT. GABA acts at postsynaptic GABAA receptors and at presynaptic and postsynaptic GABAB receptors. Some neurons, in particular primary sensory afferent neurons, also carry GABAA receptors on their presynaptic terminals. Glycine acts at postsynaptic glycine receptors (modified from Zeilhofer et al., 2012a).

Inhibitory neural transmission in the adult nervous system is mediated by GABA and glycine (Figure 13), with fast synaptic inhibition occurring largely through the ionotropic GABA A receptor (GABA A R) [START_REF] Hevers | The diversity of GABAA receptors: Pharmacological and electrophysiological properties of GABAA channel subtypes[END_REF](Martin Heubl et al., 2017;[START_REF] Siucinska | Γ-Aminobutyric acid in adult brain: an update[END_REF]. Because the GABA A R is a GABA-gated chloride (Cl - ) channel, the consequence of GABA A R signalling depends on the intracellular Cl - concentration, which determines the reversal potential for GABA A R currents (E GABA ). In the mature CNS, which includes the spinal cord, GABA and glycine negatively regulate neuronal activity (Richard W. [START_REF] Olsen | Molecular biology of GABA A receptors[END_REF][START_REF] Macdonald | GABA A Receptor Channels[END_REF]Legendre, 2001a;[START_REF] Kirsch | Glycinergic transmission[END_REF]. In the spinal cord, there are three types of inhibitory neurons and terminals: GABAergic, GABA/glycine co-releasing, and glycinergic [START_REF] Jonas | Corelease of Two Fast Neurotransmitters at a Central Synapse[END_REF][START_REF] Vaaga | Dual-transmitter neurons: functional implications of co-release and co-transmission[END_REF][START_REF] Tritsch | Mechanisms and functions of GABA co-release[END_REF].
These neurons and terminals are arranged within the spinal cord neural circuit and are involved in many vital roles, such as regulating locomotor movements and respiratory rhythms (Andrew J. Todd et al., 1990;Hanns Ulrich Zeilhofer et al., 2005;Mehdi Hossaini et al., 2007). At the spinal cord level, in adults, the activation of GABA A and glycine receptor-gated Cl - channels inhibits neurons as a result of a low intracellular chloride concentration ([Cl - ] i ) and a hyperpolarized Cl - equilibrium potential (E Cl ). Low [Cl - ] i is maintained by the potassium-chloride cotransporter KCC2 [START_REF] Delpire | Human and murine phenotypes associated with defects in cation-chloride cotransport[END_REF](J. A. Payne et al., 2003), which extrudes Cl - . However, at early developmental stages, GABA and glycine are often excitatory because E Cl is strongly depolarized (Ziskind-Conhaim, 1998). A decrease of inhibition can also be caused by downregulation of KCC2, which induces a depolarized E Cl (Figure 13) [START_REF] Coull | Trans-synaptic shift in anion gradient in spinal lamina I neurons as a mechanism of neuropathic pain[END_REF][START_REF] Boulenguez | Down-regulation of the potassium-chloride cotransporter KCC2 contributes to spasticity after spinal cord injury[END_REF]. The GABAergic system plays an important role in motor control and sensory transmission in the spinal cord [START_REF] Cramer | The role of cation-dependent chloride transporters in neuropathic pain following spinal cord injury[END_REF][START_REF] Sibilla | GABAergic and glycinergic interneuron expression during spinal cord development: dynamic interplay between inhibition and excitation in the control of ventral network outputs[END_REF]. GABA receptors are present at pre- and post-synaptic sites of primary afferent terminals, in laminae I-III [START_REF] Bowery | GABAA and GABAB receptor site distribution in the rat central nervous system[END_REF] and on MNs [START_REF] Delgado-Lezama | Extrasynaptic GABA A Receptors in the Brainstem and Spinal Cord: Structure and Function[END_REF]. This system modulates motor performance and sensory processing [START_REF] Rudomin | In search of lost presynaptic inhibition[END_REF]. The fast actions of GABA and glycine are mediated by their ionotropic receptors, which are ligand-gated ion channels. Fast inhibition is mostly mediated by glycine receptors and GABA A receptors. GABA C receptors are also ionotropic, but their expression is much more restricted than that of GABA A receptors, being found mainly in the retina (S. M. Jones et al., 2009). Since their contribution to fast inhibition is minor, they are often omitted. GABA B receptors are also inhibitory, but they are metabotropic, G-protein coupled receptors; they do not contribute to fast inhibitory transmission because the consequences of their activation are slower [START_REF] Purves | Damage to Descending Motor Pathways: The Upper Motor Neuron Syndrome[END_REF]. GABA A receptors mediate two modes of inhibitory neurotransmission (Robert L. [START_REF] Macdonald | GABAA Receptor Channels Physiology and Pathology of Chloride Transporters and Channels in the Nervous System[END_REF] (Figure 14). The first, termed "phasic" inhibition, involves the transient activation of postsynaptic GABA A receptors by nearly saturating concentrations of GABA released from presynaptic vesicles.
GABA A receptors are also involved in slower forms of non-synaptic inhibition, a phenomenon termed "tonic" inhibition; both forms are illustrated in Figure 14. Phasic inhibition gives rise to inhibitory postsynaptic currents (IPSCs) that activate rapidly (rise times of 1 ms or less) but decay slowly [START_REF] Maconochie | How quickly can GABAA receptors open?[END_REF]; M. V. [START_REF] Jones | Desensitized states prolong GABAA channel responses to brief agonist pulses[END_REF]. Several types of IPSCs can be recorded, each having slightly different kinetic properties [START_REF] Otis | Lasting potentiation of inhibition is associated with an increased number of gamma-aminobutyric acid type A receptors activated during miniature inhibitory postsynaptic currents[END_REF][START_REF] Mody | Diversity of inhibitory neurotransmission through GABAA receptors[END_REF]. (1) Miniature IPSCs (mIPSCs) are triggered by the spontaneous release of GABA from a single synaptic vesicle (i.e., action potential independent) (Figure 14A). (2) In contrast, spontaneous IPSCs (sIPSCs) are triggered by spontaneously occurring action potentials in presynaptic terminals and typically involve release of GABA from multiple synaptic vesicles (Figure 14B). (3) Evoked IPSCs (eIPSCs) are triggered following experimentally induced action potentials and, like sIPSCs, involve release of GABA from multiple synaptic vesicles. Tonic inhibition is mediated by peri- and extra-synaptic GABA A receptors that are persistently activated by sub-saturating concentrations of ambient GABA (Figure 14C). While the sources and precise concentration of ambient GABA are still uncertain, it is generally believed to arise from a combination of synaptic overflow and non-vesicular release, and to reach concentrations of about 1 µM [START_REF] Attwell | Early signs of motoneuron vulnerability in a disease model system: Characterization of transverse slice cultures of spinal cord isolated from embryonic ALS mice[END_REF][START_REF] Zoli | Volume transmission in the CNS and its relevance for neuropsychopharmacology[END_REF](Mark Farrant et al., 2005). Interestingly, the contribution of the tonic current to overall inhibitory tone may actually be greater than the summed charge transfer of phasic currents [START_REF] Brickley | Development of a tonic form of synaptic inhibition in rat cerebellar granule cells resulting from persistent activation of GABAA receptors[END_REF][START_REF] Hamann | Tonic and spillover inhibition of granule cells control information flow through cerebellar cortex[END_REF]. GABA A receptors, like other Cys-loop receptors, are ligand-gated channels mediating inhibition in the CNS and are assembled from homologous subunits (Figure 15A) as pseudo-symmetrical heteropentamers (Figure 15B) (Robert L. [START_REF] Macdonald | GABAA Receptor Channels Physiology and Pathology of Chloride Transporters and Channels in the Nervous System[END_REF]. Their assembly depends on endoplasmic reticulum chaperones, which facilitate folding and oligomerization of GABA A receptor subunits and provide a stringent quality control system [START_REF] Bollan | Multiple assembly signals in gamma-aminobutyric acid (type A) receptor subunits combine to drive receptor construction and composition[END_REF][START_REF] Ellgaard | Calnexin, Calreticulin, and ERp57: Teammates in Glycoprotein Folding[END_REF].
There are also specific subunit requirements for GABA A receptor assembly that serve to limit receptor heterogeneity [START_REF] Angelotti | Assembly of GABAA receptor subunits: alpha 1 beta 1 and alpha 1 beta 1 gamma 2S subunits produce unique ion channels with dissimilar single-channel properties[END_REF], and the functional and pharmacological signatures of receptors have been identified for a subset of subunit combinations (R. W. [START_REF] Olsen | GABA A receptors: subtypes provide diversity of function and pharmacology[END_REF]. Recently, Garifulina and colleagues [START_REF] Garifulina | beta subunits of GABAA receptors form proton-gated chloride channels: Insights into the molecular basis[END_REF] demonstrated that β-subunit homomers are proton-gated anion channels. Mutation of a single histidine (H267A) in β3 subunits completely abolishes channel activation by protons. In molecular dynamics simulations of the β3 crystal structure, protonation of H267 increased the formation of hydrogen bonds between H267 and E270 of the adjacent subunit, leading to the formation of a pore-stabilising ring and to accumulation of Cl - within the transmembrane pore. Different subunits are also involved in the formation of GlyRs with different functional properties (Legendre, 2001a). The quaternary structure of GlyR has been examined by chemical cross-linking of its subunits [START_REF] Langosch | Conserved quaternary structure of ligand-gated ion channels: the postsynaptic glycine receptor is a pentamer[END_REF] (Figure 16), demonstrating a pentameric quaternary structure with an invariant stoichiometry of three α and two β subunits [START_REF] Langosch | Conserved quaternary structure of ligand-gated ion channels: the postsynaptic glycine receptor is a pentamer[END_REF]. Neurons express both homomeric and heteromeric GlyRs. Homomeric GlyRs are the dominant form in foetal rat spinal cord neurons [START_REF] Virginio | Glycine-activated whole cell and single channel currents in rat cerebellar granule cells in culture[END_REF] and are also expressed by foetal neurons of the rat neocortex [START_REF] Flint | Nonsynaptic Glycine Receptor Activation during Early Neocortical Development[END_REF] and cerebellar granule cells [START_REF] Kaneda | Whole-cell and single-channel currents activated by GABA and glycine in granule cells of the rat cerebellum[END_REF][START_REF] Virginio | Glycine-activated whole cell and single channel currents in rat cerebellar granule cells in culture[END_REF], where glycine synapses have not been detected. Homomeric GlyRs seem to be located extrasynaptically and to be responsible for a "tonic" inhibition (E. Muller et al., 2008), whereas heteromeric αβ GlyRs are predominantly expressed at postsynaptic sites [START_REF] Takahashi | Functional correlation of fetal and adult forms of glycine receptors with developmental changes in inhibitory synaptic receptor channels[END_REF][START_REF] Rampon | Distribution of glycineimmunoreactive cell bodies and fibers in the rat brain[END_REF][START_REF] Kneussel | Clustering of inhibitory neurotransmitter receptors at developing postsynaptic sites: the membrane activation model[END_REF]. The existence of numerous GlyR subtypes confers functional diversity on inhibitory glycinergic synapses, and GlyRs also have functions distinct from their role in inhibitory synaptic transmission during brain development.
The intracellular Cl - concentration ([Cl - ] i ) in neurons is developmentally regulated, with a relatively high postnatal concentration that drops to a lower value going into adulthood. This has been confirmed in a wide range of animal species from worms to higher mammals, in many tissue structures (i.e., retina, olfactory bulb, hippocampus, neocortex, spinal cord, etc.), and in preparations ranging from neuronal cultures to slices and intact organs in vitro and in vivo [see the large table in Ben-Ari, 2002].

NKCC1 inactivation combined with KCC2 activity leads to a significant decrease in [Cl - ] i and a disappearance of GABAAR-mediated excitatory effects. In parallel to the maturation of the chloride cotransporters KCC2 and NKCC1, the spinal cord starts to convey its first synaptic activity, which is GABAergic, at E12.5 (green horizontal bar). Bottom: maturation of the chloride equilibrium potential (Ecl), spike threshold and resting membrane potential (Vrest) across the embryonic stages of development. Note: the drop of Ecl at E16.5 accounts for the GABAAR-mediated "shunting" effect. Adapted from [START_REF] Allain | Maturation of the GABAergic transmission in normal and pathologic motoneurons[END_REF]

Our laboratory has shown that there is a shift of E GABA A R toward negative values during the embryonic development of mouse lumbar spinal MNs (Alain Delpy et al., 2008). Until E15.5, the reversal potential of Cl - ions (chloride equilibrium potential, Ecl) is above the spike threshold, whereas after E16.5 it drops significantly below spike threshold.

SHUNTING INHIBITION

Shunting inhibition, also known as divisive inhibition, is a form of postsynaptic potential inhibition that can be represented mathematically as reducing the excitatory potential by division rather than by linear subtraction. The term "shunting" is used because the synaptic conductance short-circuits the currents generated at adjacent excitatory synapses. If a shunting inhibitory synapse is activated, the input resistance is reduced locally. According to Ohm's law, the amplitude of subsequent excitatory postsynaptic potentials (EPSPs) is thus reduced (Figure 19). Consider a model neuron with an excitatory and an inhibitory channel. The reversal potential of the excitatory channel is typically much more depolarized than the resting potential; this provides the depolarizing current flow of an excitatory input. The reversal potential of the inhibitory channel, however, is assumed to be slightly above the resting potential. Both excitatory and inhibitory channels are organized such that preferred motion leads to temporally non-overlapping activation of the channels. For a two-flash preferred-direction stimulus, this arrangement will lead to a depolarizing current from the excitatory channel as well as a depolarizing current from the inhibitory channel. When motion in the anti-preferred direction is presented to the model shunting neuron, the excitatory and inhibitory conductances are activated at the same time. Under these circumstances the inhibitory channel essentially creates a large hole in the membrane: any current flow caused by the simultaneous activation of excitatory channels will simply seep out through this hole (Figure 19). In other words, this so-called shunting inhibitory channel can veto nearby excitatory currents. The shunting inhibition model provides a possible mechanism to implement a multiplicative nonlinearity, but what is the evidence that shunting inhibition is actually used in neural systems?
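Before turning to that evidence, a minimal single-compartment sketch makes the divisive action explicit (the symbols follow standard conductance-based modelling conventions and are not taken from the cited studies). At steady state, the membrane potential is the conductance-weighted average of the reversal potentials of the leak, excitatory and inhibitory channels:

\[ V = \frac{g_L E_L + g_E E_E + g_I E_I}{g_L + g_E + g_I} \]

If the inhibitory reversal potential sits at the resting potential (\(E_I = E_L\)), the inhibitory channel injects no net current of its own, yet the depolarization produced by the excitatory input becomes

\[ \Delta V = \frac{g_E \,(E_E - E_L)}{g_L + g_E + g_I} \]

so increasing the inhibitory conductance \(g_I\) enlarges the denominator and divides the EPSP rather than subtracting a fixed amount from it, which is precisely the "veto" operation described above.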
Many direction-selective neurons contain gamma-aminobutyric acid (GABA)-activated chloride channels whose reversal potential is close to the resting potential. Thus, the GABAergic synapse could function as a shunt. When GABAergic synapses are inactivated pharmacologically, direction selectivity (DS) in many systems is greatly reduced [START_REF] Sillito | Inhibitory processes underlying the directional specificity of simple, complex and hypercomplex cells in the cat's visual cortex[END_REF][START_REF] Ariel | Pharmacological analysis of directionally sensitive rabbit retinal ganglion cells[END_REF][START_REF] Schmid | Using neuropharmacology to distinguish between excitatory and inhibitory movement detection mechanisms in the fly Calliphora erythrocephala[END_REF]. GABA A receptor activation causes Cl - channels to open in the membrane, and the usual consequence is a change in the membrane potential. Since the Cl - equilibrium potential is usually more negative than the resting membrane potential, this gives rise to an influx of Cl - and a hyperpolarization of the postsynaptic neuron, observed as an IPSP [START_REF] Glickfeld | Interneurons hyperpolarize pyramidal cells along their entire somatodendritic axis[END_REF]. However, the synaptic equilibrium potential for chloride is not always more negative than the resting potential, particularly in early development or after a period of sustained Cl - influx sufficient to increase the chloride concentration inside the cell [START_REF] Staley | Ionic mechanisms of neuronal excitation by inhibitory GABAA receptors[END_REF]. If the synaptic reversal potential is more positive, lying between the resting potential and the threshold for the generation of action potentials, activation of the GABAergic synapse can depolarize the membrane and produce an EPSP. Finally, if the Cl - equilibrium potential is equal to the resting potential, then there will be no obvious PSP after activation of a GABA receptor (Alain Delpy et al., 2008;[START_REF] Branchereau | Depolarizing GABA/glycine synaptic events switch from excitation to inhibition during frequency increases[END_REF]. Shunting inhibition can be quantified as the sum of the EPSP, the IPSP and a nonlinear term proportional to their product (k × EPSP × IPSP), where the coefficient k reflects the strength of the shunting effect (J. [START_REF] Hao | An arithmetic rule for spatial summation of excitatory and inhibitory inputs in pyramidal neurons[END_REF]. The k value strongly depends on the locations (dendritic trunk, oblique branches, soma) of the excitatory and inhibitory inputs and on the distance between them. The duration of shunting inhibition is relatively short in comparison with the membrane polarization produced at the GABAergic synapse: the transmembrane resistance is reduced only while the Cl - channels are open and returns to normal when they close.

CHLORIDE HOMEOSTASIS

The intraneuronal ionic composition is an important determinant of brain functioning. There is growing evidence that aberrant homeostasis of [Cl - ] i , in addition to that of Na + and Ca 2+ , evokes robust impairments of neuronal excitability and neurotransmission and thereby neurological conditions. Thus, understanding the mechanisms underlying the regulation of [Cl - ] i is crucial for deciphering the variability in GABAergic and glycinergic signalling of neurons, in both health and disease.
The homeostatic level of [Cl - ] i is determined by various regulatory mechanisms, including those mediated by plasma membrane Cl - channels and transporters [START_REF] Rahmati | Chloride Homeostasis in Neurons With Special Emphasis on the Olivocerebellar System: Differential Roles for Transporters and Channels[END_REF]. Cl - is the most abundant transportable anion in all cells of the body, where it underlies fundamental biological functions in all tissues. The [Cl - ] i is regulated and maintained by a delicate functional balance between the operations of plasma membrane Cl - channels, those of transporters, and those of local impermeant anions (Rivera et al., 1999;J. Glykys et al., 2014). In the CNS, Cl - channels and transporters play key roles in neuronal growth and development, neurotransmitter uptake, intracellular pH modulation, cell volume regulation and, perhaps most importantly, setting [Cl - ] i , and hence E Cl , either above or below the resting membrane potential (Figure 20) [START_REF] Sangan | Cloning and Expression of a Chloride-dependent Na+-H+ Exchanger[END_REF][START_REF] Deidda | Modulation of GABAergic transmission in development and neurodevelopmental disorders: investigating physiology and pathology to gain therapeutic perspectives[END_REF][START_REF] Ruffin | Intracellular pH regulation by acid-base transporters in mammalian neurons[END_REF][START_REF] Jentsch | VRACs and other ion channels and transporters in the regulation of cell volume and beyond[END_REF](Joseph Glykys et al., 2017). GABA A receptors (GABA A Rs) are prototypical ligand-gated Cl - channels (J Bormann et al., 1987). The direction of Cl - flow through them depends on the difference between E Cl and the resting membrane potential (RMP) (Figure 21); the worked Nernst relation after this paragraph makes this dependence explicit. If E Cl is more negative than the RMP of the neuron, Cl - flows into the neuron; GABA A Rs in these cells mediate hyperpolarizing Cl - currents, which in turn lead to inhibition of the postsynaptic neuronal activity (Figure 21a). In contrast, if E Cl becomes more positive than the RMP, depolarizing Cl - efflux through GABA A Rs leads to excitation of the postsynaptic neuron (Figure 21c). Therefore, the activities of the Cl - channels and transporters that regulate [Cl - ] i are critical for determining the polarity of the impact of GABA A Rs on the neuronal membrane potential. Mutations or deletions of Cl - channels and transporters in the brain have been linked to genetic disorders, such as particular forms of neonatal seizures and epilepsy, ataxia, hyperekplexia (startle disease), and autism spectrum disorders [START_REF] Cohen | On the Origin of Interictal Activity in Human Temporal Lobe Epilepsy in Vitro[END_REF][START_REF] Vermeer | Targeted Next-Generation Sequencing of a 12.5 Mb Homozygous Region Reveals ANO10 Mutations in Patients with Autosomal-Recessive Cerebellar Ataxia[END_REF][START_REF] Pizzarelli | Alterations of GABAergic Signaling in Autism Spectrum Disorders[END_REF][START_REF] Deidda | Modulation of GABAergic transmission in development and neurodevelopmental disorders: investigating physiology and pathology to gain therapeutic perspectives[END_REF].
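As a worked illustration (the concentrations below are rounded textbook values, not measurements from the studies cited here), E Cl follows from the Nernst equation for a monovalent anion:

\[ E_{Cl} = \frac{RT}{z_{Cl} F} \ln \frac{[Cl^-]_o}{[Cl^-]_i} = -\frac{RT}{F} \ln \frac{[Cl^-]_o}{[Cl^-]_i}, \qquad \frac{RT}{F} \approx 26.7\ \text{mV at } 37\,^{\circ}\text{C} \]

With [Cl - ] o ≈ 130 mM, a mature neuron holding [Cl - ] i ≈ 7 mM has E Cl ≈ -26.7 ln(130/7) ≈ -78 mV, below a typical RMP of about -65 mV, so GABA A R activation hyperpolarizes. An immature neuron with [Cl - ] i ≈ 25 mM instead has E Cl ≈ -26.7 ln(130/25) ≈ -44 mV, above the RMP, so the very same receptors depolarize.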
In addition, impaired Cl - homeostasis has been associated with pathology of the brain following acute injuries, such as hypoxic-ischemic encephalopathy, brain edema, and post-traumatic seizures [START_REF] Galeffi | Changes in Intracellular Chloride after Oxygen-Glucose Deprivation of the Adult Hippocampal Slice: Effect of Diazepam[END_REF][START_REF] Jin | Impaired Cl -Extrusion in Layer V Pyramidal Neurons of Chronically Injured Epileptogenic Neocortex[END_REF][START_REF] Pond | The Chloride Transporter Na + -K + -Cl -Cotransporter Isoform-1 Contributes to Intracellular Chloride Increases after In Vitro Ischemia[END_REF][START_REF] Papp | Relationship between neuronal vulnerability and potassium-chloride cotransporter 2 immunoreactivity in hippocampus following transient forebrain ischemia[END_REF]. Therefore, targeting Cl - channels/transporters has been investigated as a therapeutic tool for re-balancing neuronal [Cl - ] i and rescuing the consequential neurological symptoms. One example of such a Cl - -based intervention is dampening the elevation of [Cl - ] i following traumatic brain injury (TBI), so as to prevent further neuronal swelling, excitatory GABA signalling, and seizure susceptibility [START_REF] Annegers | A Population-Based Study of Seizures after Traumatic Brain Injuries[END_REF][START_REF] Hung | Treatment of Post-Traumatic Epilepsy[END_REF](Goubert et al., 2019). Developing drugs that specifically target Cl - channels or transporters may thereby not only ameliorate the short-term pathological processes induced by TBI, but also mitigate its long-term behavioural consequences [START_REF] Rungta | The Cellular Mechanisms of Neuronal Swelling Underlying Cytotoxic Edema[END_REF][START_REF] Ben-Ari | NKCC1 Chloride Importer Antagonists Attenuate Many Neurological and Psychiatric Disorders[END_REF](Goubert et al., 2019). Chloride channels and transporters may become activated in response to membrane potential changes (such as ClC channels), intracellular Ca 2+ signalling (such as anoctamin channels), and changes in intracellular pH (SLC4 and SLC26). Although regulated by different actors (Figure 20), [Cl - ] i mostly relies on cation-chloride co-transporters (CCCs). In fact, Cl - is transported across the membrane by CCCs, mainly represented by the Na + -K + -2Cl - cotransporter NKCC1 and the K + -Cl - cotransporters (KCCs). Investigating the impact of such a rich set of widely expressed ion channels/transporters on neuronal functioning is a complex matter because of the heterogeneity of neuronal populations and the diverse functional interactions of Cl - channels/transporters with each other and with other ion carriers.

KCC2 AND NKCC1

[Cl - ] i is regulated by CCCs in neural cells, and it changes during development. The chloride importer NKCC1 and the chloride exporter KCC2 are key regulators of neuronal chloride concentration [START_REF] Williamson | Slowing of axonal transport is a very early event in the toxicity of ALS-linked SOD1 mutants to motor neurons[END_REF]. Among the cation-chloride cotransporter gene family, the K + -Cl - cotransporters exhibit the greatest genetic diversity. Four distinct isoforms of the K + -Cl - cotransporter have been identified: KCC1 (Slc12a4), KCC2 (Slc12a5), KCC3 (Slc12a6) and KCC4 (Slc12a7). In brains of new-born mice, transcript expression of KCC2a and KCC2b is similar. Upon postnatal development, however, KCC2b is steeply up-regulated whereas KCC2a is not [START_REF] Uvarov | A novel N-terminal isoform of the neuron-specific K-Cl cotransporter KCC2[END_REF].
KCC2b appears to be the predominant variant in the mature rodent cortex. KCC2a exhibits significant levels of mRNA expression in the brainstem and spinal cord during prenatal and early postnatal development, indicating that it may play an important developmental role in these CNS regions. Structure-function studies have examined some unique operational and structural features of KCC2, which has a large extended region in the carboxy-terminus exhibiting little identity with the other KCCs (Figure 24). Three phosphoacceptor sites have been identified in the N-terminal domain of the protein NKCC1: Thr189 is necessary for activation of the protein, whereas phosphorylation at Thr184 and Thr202 is modulatory [START_REF] Darman | A regulatory locus of phosphorylation in the N terminus of the Na-K-Cl cotransporter, NKCC1[END_REF].

ALS: AN INCURABLE DISEASE

Amyotrophic lateral sclerosis (ALS) is a fatal adult-onset neurodegenerative disease characterized by progressive muscular weakness and atrophy due to degeneration of corticospinal motor neurons in the brain and of spinal motoneurons (MNs). Upper motor neurons are glutamatergic descending neurons that synapse directly, or indirectly via interneurons, onto lower MNs, typically through the corticobulbar and corticospinal tracts. Lower MNs are multipolar cholinergic neurons whose axons exit the CNS to innervate skeletal muscles and produce movement. Consequently, ALS leads to progressive muscle atrophy and weakness and eventual paralysis (Figure 25). In fact, the name "Amyotrophic Lateral Sclerosis" reflects the strikingly selective degeneration of MNs in ALS. It is derived from a combination of three words: "Lateral" refers to the lateral spinal cord, given that corticospinal MNs are particularly vulnerable to degeneration; "Amyotrophic" is from the Greek "amyotrophia," meaning lacking muscle nourishment; and "Sclerosis" refers to the hardening, or scarring, of the degenerated lateral tracts. In 1874, Jean-Martin Charcot, a French neurologist, bridged the knowledge gap between patient symptoms and neurological presentations and officially named the disease amyotrophic lateral sclerosis (Martin R Turner et al., 2015). However, in the decades since that time, neurologists and clinical researchers have been mystified and challenged by the puzzle of the disease's pathogenesis, aetiology, and epidemiology. Over time, neurologists gradually improved the diagnostic procedures and attempted to understand ALS pathology, but, because ALS was not well understood and was virtually unknown to the public (Martin R Turner et al., 2015), it was not until 1990 that researchers developed a uniform diagnostic procedure [START_REF] Geevasinga | Amyotrophic lateral sclerosis diagnostic index: Toward a personalized diagnosis of ALS[END_REF]. In 1939, the famous New York Yankee baseball player, Lou Gehrig, was diagnosed with ALS and delivered his famous farewell speech at Yankee Stadium [START_REF] Smith | The Luckiest Man: Errol Morris's A Brief History of Time and The Pride of the Yankees[END_REF], bringing the disease out of obscurity. Later, the disease would become informally known in the United States as Lou Gehrig's disease. In 1963, Stephen Hawking, a 21-year-old doctoral student, was diagnosed and became a public face for ALS as he completed his doctorate, made groundbreaking scientific discoveries, and taught cosmology and physics [START_REF] Smith | The Luckiest Man: Errol Morris's A Brief History of Time and The Pride of the Yankees[END_REF].
As the longest recorded ALS survivor, Hawking contributed a lifetime of scientific publications, inspirational documentaries, and television show appearances, which sparked an awareness movement for the ALS community. Nevertheless, given the history of ALS as an untreatable and incurable disease that kills tens of thousands of people throughout the world every year, many people with ALS, ALS advocates, and allies are growing impatient for clinical progress (K. Traynor, 2018). Given that it is unknown when researchers will find a cure, increased financial emphasis on social research is needed to guide social programs that help people with ALS and their families adaptively cope with the emotional and mental health challenges associated with this disease. ALS, also known as Lou Gehrig's or Charcot disease, was first described by Jean-Martin Charcot in 1874 [START_REF] Rowland | How Amyotrophic Lateral Sclerosis Got Its Name: The Clinical-Pathologic Genius of Jean-Martin Charcot[END_REF]. ALS is a progressive motor neuron disease in which motor neurons die in the motor cortex, brain stem, and spinal cord [START_REF] Boillée | ALS: A Disease of Motor Neurons and Their Nonneuronal Neighbors[END_REF]. Two forms of ALS have been identified: sporadic ALS (sALS) and familial ALS (fALS) [START_REF] Harikrishnareddy | Roots to start research in amyotrophic lateral sclerosis: molecular pathways and novel therapeutics for future[END_REF]. Almost 90% of ALS patients suffer from the sporadic form and only 10% from the familial form [START_REF] Harikrishnareddy | Roots to start research in amyotrophic lateral sclerosis: molecular pathways and novel therapeutics for future[END_REF]. In general, recent studies have shown that the incidence of ALS is lower in Asian countries than in Europe and North America [START_REF] Longinetti | Epidemiology of amyotrophic lateral sclerosis: an update of recent literature[END_REF]. Two to three people per 100,000 of the general European population develop ALS each year, using the person-year calculation [START_REF] Al-Chalabi | The epidemiology of ALS: a conspiracy of genes, environment and time[END_REF][START_REF] Longinetti | Epidemiology of amyotrophic lateral sclerosis: an update of recent literature[END_REF]. The incidence of ALS is higher in males than in females, with a lifetime risk of about 1:350 for males and 1:400 for females [START_REF] Al-Chalabi | The epidemiology of ALS: a conspiracy of genes, environment and time[END_REF]. ALS is a fatal disorder in which most patients do not survive more than 3 to 5 years following diagnosis [START_REF] Wijesekera | Amyotrophic lateral sclerosis[END_REF]. Despite decades of research, riluzole [START_REF] Bensimon | A Controlled Trial of Riluzole in Amyotrophic Lateral Sclerosis[END_REF] remains one of the very few approved treatments, and it extends survival by only a few months.

SOD1 G93A AND OTHER ALS ANIMAL MODELS

At least 25 genes have now been reproducibly implicated in familial ALS, sporadic ALS, or both (Figure 26). The SOD1 gene was the first gene identified as a genetic cause of ALS [START_REF] Rosen | Mutations in Cu/Zn superoxide dismutase gene are associated with familial amyotrophic lateral sclerosis[END_REF]. The SOD1 gene is located on chromosome 21 in humans; its five exons encode the Cu-Zn superoxide dismutase enzyme. A hallmark pathology of ALS is protein deposition and inclusion formation within cortical motor neurons and/or spinal motoneurons (MNs) (Figure 27).
Study of the mSOD1 mouse model reveals a number of cell and non-cell autonomous mechanisms that are involved in the pathophysiology of ALS (Figure 28). Cell autonomous mechanisms include changes within motor neurons such as axonal transport defects, mitochondrial dysfunction, increased endoplasmic reticulum (ER) stress, glutamate toxicity and caspase activation, abnormal RNA metabolism and impaired proteostasis [START_REF] Soo | Rab1-dependent ER-Golgi transport dysfunction is a common pathogenic mechanism in SOD1, TDP-43 and FUS-associated ALS[END_REF][START_REF] Mathis | Current view and perspectives in amyotrophic lateral sclerosis[END_REF]. Non-cell autonomous mechanisms include neuroinflammation mediated centrally by astrocytes and microglia [START_REF] Hooten | Protective and Toxic Neuroinflammation in Amyotrophic Lateral Sclerosis[END_REF] and by oligodendrocytes that develop spontaneous neuroinflammation, and/or involve peripherally acting contributors (Schwann cells, skeletal myocytes, and T-cells) to the survival of MNs [START_REF] Taylor | Decoding ALS: from genes to mechanism[END_REF]. Overall, both cell and non-cell autonomous mechanisms contribute to motor neuron death in ALS. Despite this and other knowledge of the disease mechanisms leading to ALS, the primary factors that trigger ALS remain unknown. Findings from studies in mSOD1 mice suggest that mSOD1 in motor neurons initiates toxic actions which, in combination with mSOD1 in non-neuronal cells, accelerate motor neuron death and disease progression. Initially, two studies failed to demonstrate ALS in transgenic mice (SOD1 G37R , SOD1 G93A , SOD1 G85R ) expressing mSOD1 specifically in neurons [START_REF] Pramatarova | Neuron-Specific Expression of Mutant Superoxide Dismutase 1 in Transgenic Mice Does Not Lead to Motor Impairment[END_REF][START_REF] Lino | Accumulation of SOD1 mutants in postnatal motoneurons does not cause motoneuron pathology or motoneuron disease[END_REF]. In contrast, in a third study, SOD1 G93A transgenic mice overexpressing mSOD1 selectively in neurons developed ALS [START_REF] Jaarsma | Neuron-Specific Expression of Mutant Superoxide Dismutase Is Sufficient to Induce Amyotrophic Lateral Sclerosis in Transgenic Mice[END_REF]. In another study, disease onset in chimeric SOD1 G37R mice was accelerated when mSOD1 was present in cells other than motor neurons [START_REF] Yamanaka | Mutant SOD1 in cell types other than motor neurons and oligodendrocytes accelerates onset of disease in ALS mice[END_REF]. Conversely, the presence of wild-type non-neuronal cells substantially delayed disease onset in chimeric SOD1 G37R and SOD1 G85R mice [START_REF] Clement | Wild-Type Nonneuronal Cells Extend Survival of SOD1 Mutant Motor Neurons in ALS Mice[END_REF][START_REF] Yamanaka | Mutant SOD1 in cell types other than motor neurons and oligodendrocytes accelerates onset of disease in ALS mice[END_REF]. Similarly, in transgenic SOD1 G93A mice, wild-type microglia slowed motor neuron loss and prolonged disease duration and survival compared with mice possessing mSOD1-expressing microglia [START_REF] Beers | Wild-type microglia extend survival in PU.1 knockout mice with familial amyotrophic lateral sclerosis[END_REF]. Together, these findings clarify that the presence of mSOD1 promotes motor neuron loss by both cell and non-cell autonomous processes in transgenic mSOD1 mice.
Following motor neuron death, mSOD1 aggregates are released into the extracellular environment, providing a positive feedback loop that further induces motor neuron death [START_REF] Brundin | Prion-like transmission of protein aggregates in neurodegenerative diseases[END_REF]. These extracellular mSOD1 aggregates are not directly toxic to motor neurons, but rather activate microglia, which then mediate neurotoxicity. SOD1 aggregates in the extracellular space are taken up by neighbouring cells via macropinocytosis or in association with vesicles [START_REF] Zeineddine | The role of macropinocytosis in the propagation of protein aggregation associated with neurodegenerative diseases[END_REF][START_REF] Silverman | Disease Mechanisms in ALS: Misfolded SOD1 Transferred Through Exosome-Dependent and Exosome-Independent Pathways[END_REF]. In vitro studies have demonstrated that extracellular mSOD1 causes detectable injury to motor neurons when they are co-cultured with microglia, but has no effect on motor neurons in the absence of microglia (W. Zhao et al., 2010). Over the last 20 years, several transgenic mouse strains expressing human mutant SOD1 have been generated. These mice have been used either to examine disease mechanisms or to trial potential therapeutic strategies for ALS, although the latter has led to questionable success [START_REF] Perrin | Preclinical research: Make mouse studies work[END_REF]. The transgenic line harboring the Gly93 → Ala substitution (SOD1 G93A ) has been used most extensively (M. E. [START_REF] Gurney | Motor-Neuron Degeneration in Mice That Express a Human Cu,Zn Superoxide-Dismutase Mutation[END_REF], followed by the SOD1 G37R [START_REF] Wong | An adverse property of a familial ALS-linked SOD1 mutation causes motor neuron disease characterized by vacuolar degeneration of mitochondria[END_REF], SOD1 G85R (L.I. Bruijn et al., 1997), SOD1 G86R [START_REF] Ripps | Transgenic mice expressing an altered murine superoxide dismutase gene provide an animal model of amyotrophic lateral sclerosis[END_REF] and SOD1 D90A [START_REF] Jonsson | Motor Neuron Disease in Mice Expressing the Wild Type-Like D90A Mutant Superoxide Dismutase-1[END_REF] models. The B6SJL-TgN(SOD1-G93A)1Gur mouse (M. E. [START_REF] Gurney | Motor-Neuron Degeneration in Mice That Express a Human Cu,Zn Superoxide-Dismutase Mutation[END_REF] carries 25 ± 1.5 copies of the transgene within chromosome 12 and, as a result, expresses very high levels of human mutant SOD1 G93A [START_REF] Alexander | Effect of transgene copy number on survival in the G93A SOD1 transgenic mouse model of ALS[END_REF]. Whilst these significant levels of overexpression are criticized as a major limitation [START_REF] Alexander | Effect of transgene copy number on survival in the G93A SOD1 transgenic mouse model of ALS[END_REF], these animals remain the most widely used mouse model for therapeutic studies in ALS (M. E. [START_REF] Gurney | Motor-Neuron Degeneration in Mice That Express a Human Cu,Zn Superoxide-Dismutase Mutation[END_REF]. These SOD1 G93A mice become paralyzed in the hindlimbs as a result of MN loss from the spinal cord, resulting in death by 5 months of age.
Another variant of this model, B6SJL-TgN(SOD1-G93A)dl1Gur, possesses fewer copies of the transgene: 8 ± 1.5 (Mark E [START_REF] Gurney | The use of transgenic mouse models of amyotrophic lateral sclerosis in preclinical drug studies[END_REF][START_REF] Alexander | Effect of transgene copy number on survival in the G93A SOD1 transgenic mouse model of ALS[END_REF]). This "low-copy" mouse, hereafter referred to as "G93A-slow" (s-SOD1 G93A ), develops a comparatively slower disease course, in which paralysis begins at 6-8.5 months of age [START_REF] Alexander | Effect of transgene copy number on survival in the G93A SOD1 transgenic mouse model of ALS[END_REF]; F. L. Muller et al., 2008;[START_REF] Acevedo-Arozena | A comprehensive assessment of the SOD1G93A low-copy transgenic mouse, which models human amyotrophic lateral sclerosis[END_REF]. In addition, several other "low-copy" mouse lines have subsequently been generated, with even fewer copies of the human SOD1 G93A transgene. These models also exhibit greater life spans compared to the higher-copy lines [START_REF] Alexander | Effect of transgene copy number on survival in the G93A SOD1 transgenic mouse model of ALS[END_REF]. Similarly, four lines of mice expressing another SOD1 mutant, SOD1 G37R , at different levels (5- to 14-fold) have been produced, with variable phenotypes [START_REF] Wong | An adverse property of a familial ALS-linked SOD1 mutation causes motor neuron disease characterized by vacuolar degeneration of mitochondria[END_REF]. Multiple mouse models based on transgenic expression of wild-type or mutant TDP-43 have also been generated [START_REF] Philips | Rodent Models of Amyotrophic Lateral Sclerosis[END_REF]. Overexpressing human TDP-43 with a defective nuclear localization signal (NLS) in mice, in the absence of an ALS mutation, results in cytoplasmic expression of hTDP-43 and nuclear TDP-43 clearance. This results in a severe motor phenotype and reduced survival in the resulting 'rNLS8' mice compared to littermate controls [START_REF] Walker | Functional recovery in new mouse models of ALS/FTLD after clearance of pathological cytoplasmic TDP-43[END_REF]. Several mouse models also exist based on transgenic expression of mutant FUS. These mice display progressive, age- and mutation-dependent degeneration that also models aspects of ALS [START_REF] Sharma | ALS-associated mutant FUS induces selective motor neuron degeneration through toxic gain of function[END_REF]. Furthermore, several newer models based on the C9orf72 repeat expansion have also been produced, although their phenotypes are more reminiscent of FTD than of ALS [START_REF] Batra | Mouse Models of C9orf72 Hexanucleotide Repeat Expansion in Amyotrophic Lateral Sclerosis/ Frontotemporal Dementia[END_REF]. Alternative splicing of the MAPT gene at exon 10, which generates 4-repeat Tau (4R-Tau) and 3-repeat Tau (3R-Tau), is one of the most impactful targets regulated by FUS. Additionally, loss of FUS function can affect dendritic spine maturation by destabilizing mRNAs such as Glutamate receptor 1 (GluA1), a major AMPA receptor, and Synaptic Ras GTPase-activating protein 1 (SynGAP1). Moreover, FUS is involved in axonal transport and morphological maintenance of neurons. These findings indicate that a biological link between loss of FUS function, Tau isoform alteration, aberrant post-synaptic function, and phenotypic expression might lead to the sequential cascade culminating in FTLD [START_REF] Ishigaki | Importance of Functional Loss of FUS in FTLD/ALS[END_REF].
The first generation of transgenic pigs expressing mutant G93A hSOD1 was reported [START_REF] Yang | Speciesdependent neuropathology in transgenic SOD1 pigs[END_REF]. This model shows hind limb motor defects, which are germline transmissible, and motor neuron degeneration in dose- and age-dependent manners. Importantly, in the early disease stage, mutant hSOD1 did not form cytoplasmic inclusions, but showed nuclear accumulation and ubiquitinated nuclear aggregates, as seen in some ALS patient brains, but not in transgenic ALS mouse models. SOD1 binds PCBP1, a nuclear poly(rC) binding protein, in pig brain, but not in mouse brain, suggesting that the SOD1-PCBP1 interaction accounts for nuclear SOD1 accumulation and that species-specific targets are key to ALS pathology in large mammals and in humans. Another transgenic (Tg) cloned swine model expressing the human pathological hSOD1G93A allele was reported [START_REF] Crociara | Motor neuron degeneration, severe myopathy and TDP-43 increase in a transgenic pig model of SOD1-linked familiar ALS[END_REF]. As in patients, these Tg pigs transmitted the disease to the progeny with an autosomal dominant trait and showed ALS onset from about 27 months of age. Post mortem analysis revealed MN degeneration, gliosis and hSOD1 protein aggregates in the brainstem and spinal cord. Severe skeletal muscle pathology, including necrosis and inflammation, was also observed at the end stage. Remarkably, as in human patients, these Tg pigs showed a quite long presymptomatic phase in which gradually increasing amounts of TDP-43 were detected in peripheral blood mononuclear cells. Thus, these transgenic swine models open the unique opportunity to investigate ALS biomarkers even before disease onset, in addition to testing novel drugs and possible medical devices (Figure 28). PATHOPHYSIOLOGY OF ALS ALS is a rapidly progressive neurodegenerative disorder of the human motor system, clinically characterized by dysfunction of the upper and lower motor neurons, which forms the basis of diagnosis (J. [START_REF] Dharmadasa | Motor neurone disease: progress and challenges[END_REF]. Understanding the relationship between upper and lower motor neuron dysfunction is critical for unravelling ALS pathogenesis, and three opposing theories have been proposed [START_REF] Eisen | Cortical influences drive amyotrophic lateral sclerosis[END_REF]. A first school of thought suggested that ALS originates at a cortical level, with corticomotoneuronal hyperexcitability mediating neuronal degeneration via a transsynaptic anterograde mechanism, the dying forward hypothesis [START_REF] Eisen | Amyotrophic lateral sclerosis (ALS): A phylogenetic disease of the corticomotoneuron?[END_REF]. A second school of thought proposed a contrasting theory, whereby lower motor neuron dysfunction was postulated to occur as a primary event, the dying back hypothesis (Figure 29) [START_REF] Williamson | Slowing of axonal transport is a very early event in the toxicity of ALS-linked SOD1 mutants to motor neurons[END_REF][START_REF] Fischer | Amyotrophic lateral sclerosis is a distal axonopathy: evidence in mice and man[END_REF]. A third school of thought, the independent hypothesis, proposed an independent and random degeneration of upper and lower motor neurons, with patterns of disease spread being contiguous and random, conforming to underlying neuroanatomical boundaries (J. Ravits et al., 2007; J. M. Ravits et al., 2009). Figure 29.
The "dying-forward" and "dying-back" hypotheses. The "dying-forward" hypothesis postulates that ALS commences in the motor and pre-motor cortices' pyramidal neurons and, through antegrade mechanisms, causes dysfunction and death of the bulbar and spinal MNs. Excitotoxicity is important but not the only factor. The hallmark TAR DNA-binding protein 43 (TDP-43) pathology, seen in >95% of patients with ALS, is largely restricted to corticofugal projecting neurons ("dying forward"). In broader terms, this site of origin may be considered as the nidus of a spreading network disorder associated with frontotemporal dementia in ALS. In any event, ALS is best regarded as a degenerative brain disease. The figure indicates that there are alternative hypotheses of origin site which include dying-back and independent degeneration of the upper and lower MNs. The "dying-back" hypothesis proposes that ALS begins within the muscle cells or at the neuromuscular junction. Specifically, there is deficiency of a motor neurotrophic hormone, which is normally released by postsynaptic cells and retrogradely transported up the presynaptic axon to the cell body where it exerts its effects. Adapted from Kiernan et al., 2011. NEURONAL LOSS ALS is a late-onset, progressive and fatal neurodegenerative disease which primarily affects motor neurons (MNs) located in the motor cortex of the brain, brainstem motor nuclei and anterior horn of the spinal cord (Matthew C. [START_REF] Kiernan | Amyotrophic lateral sclerosis[END_REF][START_REF] Renton | State of play in amyotrophic lateral sclerosis genetics[END_REF][START_REF] Alsultan | The genetics of amyotrophic lateral sclerosis: current insights[END_REF][START_REF] Taylor | Decoding ALS: from genes to mechanism[END_REF]. In ALS, as MNs degenerate, the ability to control movement of the muscles is progressively lost (Figure 30). Specific MNs in the brain, brainstem and spinal cord are selectively targeted, and pathology appears first in these restricted MN populations. However, some MNs are spared until disease end stage, such as oculomotor neurons and Onuf's nuclei MNs, and as a result, patients retain normal visual, sexual and bladder function throughout the disease course. The resistant MNs differ significantly from the vulnerable MNs anatomically and functionally, and they possess distinct transcriptomes, metabolic and developmental profiles. Surprisingly, there are also differences in vulnerability amongst spinal MNs, because those that are part of the faster motor units degenerate before those in the slower motor units [START_REF] Frey | Early and Selective Loss of Neuromuscular Synapse Subtypes with Low Sprouting Competence in Motoneuron Diseases[END_REF][START_REF] Pun | Selective vulnerability and pruning of phasic motoneuron axons in motoneuron disease alleviated by CNTF[END_REF][START_REF] Hegedus | Time course of preferential motor unit loss in the SOD1G93A mouse model of amyotrophic lateral sclerosis[END_REF]Hadzipasic et al., 2014;[START_REF] Sharma | ALS-associated mutant FUS induces selective motor neuron degeneration through toxic gain of function[END_REF][START_REF] Spiller | Selective Motor Neuron Resistance and Recovery in a New Inducible Mouse Model of TDP-43 Proteinopathy[END_REF], thus adding further complexity to the question of MN vulnerability. 
To investigate the progression of motor neuron loss, morphological techniques and immunohistochemistry for reactive astrocytosis and for ubiquitin and neurofilament proteins were used; loss of MNs in SOD1 G93A mice followed a biphasic progression, with an initial loss at 126 days of age, followed by a gradual loss from onset of symptoms through to end-stage disease. Reactive astrocytosis was first observed at 70 days of age and showed a gradual increase through to end-stage disease [START_REF] Feeney | Presymptomatic motor neuron loss and reactive astrocytosis in the SOD1 mouse model of amyotrophic lateral sclerosis[END_REF]. The progression of clinical and pathological disease was studied in a line of mice expressing human mutant SOD1. Clinical disease started at 91 ± 14 days of age with fine shaking of the limbs, followed by paralysis and death by 136 ± 7 days of age. Pathological changes began by 37 days of age, with vacuoles derived from swollen mitochondria accumulating in motor neurons. At the onset of clinical disease (90 days), significant death of somatic MNs innervating limb muscles had occurred; mice at end-stage disease (136 days) showed up to 50% loss of cervical and lumbar MNs. However, neither thoracic nor cranial MNs showed appreciable loss despite vacuolar changes. Autonomic MNs were also unaffected. Mice that express wild-type human SOD1 remain free of disease, indicating that mutations cause neuron loss by a gain-of-function mechanism [START_REF] Chiu | Age-Dependent Penetrance of Disease in a Transgenic Mouse Model of Familial Amyotrophic Lateral Sclerosis[END_REF]. Thus, the age-dependent penetrance of MN disease in this transgenic model is due to the gradual accumulation of pathological damage in select populations of cholinergic neurons. MOTONEURONAL ALTERATIONS One of the main pathological characteristics of ALS is the presence of insoluble protein inclusions in the soma of MNs. TAR DNA binding protein-43 (TDP-43) is the major component of these cytoplasmic inclusions [START_REF] Arai | TDP-43 is a component of ubiquitin-positive tau-negative inclusions in frontotemporal lobar degeneration and amyotrophic lateral sclerosis[END_REF][START_REF] Neumann | Ubiquitinated TDP-43 in Frontotemporal Lobar Degeneration and Amyotrophic Lateral Sclerosis[END_REF] in almost all (97%) ALS patients and ~50% of frontotemporal dementia (FTD) patients [START_REF] Arai | TDP-43 is a component of ubiquitin-positive tau-negative inclusions in frontotemporal lobar degeneration and amyotrophic lateral sclerosis[END_REF][START_REF] Neumann | Ubiquitinated TDP-43 in Frontotemporal Lobar Degeneration and Amyotrophic Lateral Sclerosis[END_REF]. The expression of specific proteins can vary between MN subpopulations, and this may be linked to their vulnerability to degenerate. Evidence for this hypothesis comes from the existing mouse models of ALS. Whilst mutant SOD1 G93A is expressed in all MNs in these mice [START_REF] Jaarsma | Neuron-Specific Expression of Mutant Superoxide Dismutase Is Sufficient to Induce Amyotrophic Lateral Sclerosis in Transgenic Mice[END_REF], its propensity to induce neurodegeneration and disease is proportional to its expression level (M. E. [START_REF] Gurney | Motor neuron degeneration in mice that express a human Cu,Zn superoxide dismutase mutation[END_REF]; L.I.
[START_REF] Bruijn | ALS-Linked SOD1 Mutant G85R Mediates Damage to Astrocytes and Promotes Rapidly Progressive Disease with SOD1-Containing Inclusions[END_REF][START_REF] Alexander | Effect of transgene copy number on survival in the G93A SOD1 transgenic mouse model of ALS[END_REF]. At lower levels of expression, pathology is restricted to MNs in the spinal cord and brainstem only, whereas higher expression levels also induce severe abnormalities in the brain. Fewer copies of the SOD1 G37R transgene correlate with delayed disease progression and a significant increase in lifespan compared to animals with higher copy numbers [START_REF] Zwiegers | Reduction in hSOD1 copy number significantly impacts ALS phenotype presentation in G37R (line 29) mice: implications for the assessment of putative therapeutic agents[END_REF]. Similarly, in TDP-43 models, higher levels of overexpression are associated with a worse phenotype [START_REF] Philips | Rodent Models of Amyotrophic Lateral Sclerosis[END_REF]. Moreover, disease is evident in both wild-type and mutant TDP-43 models, indicating that the expression levels of TDP-43, rather than the presence of a mutation per se, induce neurodegeneration. Hence, the effect of the TDP-43 mutation can be difficult to segregate from the effects of overexpression in these models [START_REF] Philips | Rodent Models of Amyotrophic Lateral Sclerosis[END_REF]. Retaining both the physiological expression levels and the normal nuclear localization of TDP-43 has been linked to maintaining cellular homeostasis [START_REF] Swarup | Deregulation of TDP-43 in amyotrophic lateral sclerosis triggers nuclear factor κB-mediated pathogenic pathways[END_REF][START_REF] Philips | Rodent Models of Amyotrophic Lateral Sclerosis[END_REF]. These studies together highlight the role of differing protein expression levels in the development and progression of ALS. The time course of neurodegeneration in the SOD1 G93A mouse model of ALS has been compiled from multiple publications over the past ten years (Figure 31) [START_REF] Kanning | Motor Neuron Diversity in Development and Disease[END_REF]. In the spinal cord, the earliest detectable pathological alterations occur before muscle denervation. The earliest reported phenotypes concern cultured embryonic motor neurons, which show remarkably increased sensitivity to external stressors such as Fas ligand, which triggers their death by activating caspase-8 and p38 [START_REF] Raoul | Motoneuron Death Triggered by a Specific Pathway Downstream of Fas[END_REF]. They are also hyperexcitable owing to abnormally strong persistent inward currents (Jason J. Kuo et al., 2004; Kuo et al., 2005). Axonal transport defects (both anterograde and retrograde) are also found in cultured motor neurons [START_REF] Kieran | A mutation in dynein rescues axonal transport defects and extends the life span of ALS mice[END_REF][START_REF] De Vos | Familial amyotrophic lateral sclerosis-linked SOD1 mutants perturb fast axonal transport to reduce axonal mitochondria content[END_REF]. In vivo, the earliest change detected to date is an increase in the electrical excitability of hypoglossal motor neurons at P4, reflecting an abnormally strong persistent inward sodium current. In parallel, early pruning of dendrites may reflect premature functional maturation (van Zundert et al., 2008). Not all motor neuron subtypes are equally affected by the disease. This is first apparent at the morphological level.
FF motor units undergo atrophy earliest in mutant SOD mice, with near total loss of FF terminals from type IIb muscle fibres in the triceps surae of SOD1 G93A mice by P50 [START_REF] Frey | Early and Selective Loss of Neuromuscular Synapse Subtypes with Low Sprouting Competence in Motoneuron Diseases[END_REF]. Notably, dendritic morphometric parameters in foetal E17.5 WT and SOD1 G93A MNs presented striking differences (Figure 32): topologic and morphologic analysis revealed a ~60% reduction of the terminal dendritic length in SOD1 G93A MNs (Martin et al., 2013). As the role of dendritic trees is to collect and transmit synaptic inputs to the soma, their structural properties are crucial for the local integration of distal inputs, and of all inhibitory and excitatory inputs, that shape the state of excitability of a circuit throughout life. Neurite outgrowth is an important process in the formation of neuronal networks [START_REF] Aoki | Local phosphatidylinositol 3,4,5-trisphosphate accumulation recruits Vav2 and Vav3 to activate Rac1/Cdc42 and initiate neurite outgrowth in nerve growth factor-stimulated PC12 cells[END_REF]. The morphology of neurons and networks plays an important role in processing electrical and biochemical signals, and can be interpreted as a strong indicator that anatomy at the cellular and network level is deeply involved at various functional levels [START_REF] Breit | Anatomically Detailed and Large-Scale Simulations Studying Synapse Loss and Synchrony Using NeuroBox[END_REF]. GLUTAMATE EXCITOTOXICITY Glutamic acid is an excitatory neurotransmitter involved in long-term potentiation (LTP). Neuronal cell death decreases neuronal glutamate (Glu) uptake, which is required for the rapid removal of Glu from the extracellular space, thus terminating the excitatory signal and reducing excitotoxic neuronal damage [START_REF] Heath | Update on the glutamatergic neurotransmitter system and the role of excitotoxicity in amyotrophic lateral sclerosis[END_REF][START_REF] Geevasinga | Pathophysiological and diagnostic implications of cortical dysfunction in ALS[END_REF]. A large number of studies with SOD1 G93A ALS-model mice provide strong evidence for the role of oxidative stress in the disease pathogenesis [START_REF] Geevasinga | Pathophysiological and diagnostic implications of cortical dysfunction in ALS[END_REF][START_REF] Spalloni | Cognitive impairment in amyotrophic lateral sclerosis, clues from the SOD1 mouse[END_REF]. Additionally, ROS-mediated mitochondrial dysfunction, in connection with glutamate excitotoxicity, has been implicated in ALS pathogenesis (Z. Xu et al., 2004). Glutamate is abundant in the diet and is the most abundant excitatory neurotransmitter in the nervous system, binding to receptors such as the NMDA (N-methyl-D-aspartate) receptor [START_REF] Meldrum | Glutamate as a Neurotransmitter in the Brain: Review of Physiology and Pathology[END_REF]. Excitotoxicity is currently being targeted pharmacologically in dementia patients, with the FDA-approved therapeutic memantine [START_REF] Molinuevo | Memantine: Targeting glutamate excitotoxicity in Alzheimer's disease and other dementias[END_REF]. Changes in the excitability of neurons may also play an important role in ALS progression.
Over-stimulation of neuronal glutamate receptors (GluR) leads to excitotoxicity, causing neuronal dysfunction and ultimately cellular death through activation of Ca2+-dependent enzymatic pathways [START_REF] Arundine | Molecular mechanisms of calcium-dependent neurodegeneration in excitotoxicity[END_REF]Kuo et al., 2005). Defects in glutamate transport that may contribute to excessive extracellular glutamate have been noted in patients with sporadic ALS and in the transgenic mutant SOD1 mouse model of ALS [START_REF] Rothstein | Selective loss of glial glutamate transporter GLT-1 in amyotrophic lateral sclerosis[END_REF][START_REF] Lin | Aberrant RNA Processing in a Neurodegenerative Disease: the Cause for Absent EAAT2, a Glutamate Transporter, in Amyotrophic Lateral Sclerosis[END_REF][START_REF] Dunlop | Impaired Spinal Cord Glutamate Transport Capacity and Reduced Sensitivity to Riluzole in a Transgenic Superoxide Dismutase Mutant Rat Model of Amyotrophic Lateral Sclerosis[END_REF]. Numerous reports of alterations to glutamate receptors have also been central to the development of the excitotoxicity hypothesis in ALS and the ALS-parkinsonism dementia complex [START_REF] Wilson | Late appearance of glutamate transporter defects in a murine model of ALS-parkinsonism dementia complex[END_REF]; P. [START_REF] Zhao | Altered presymptomatic AMPA and cannabinoid receptor trafficking in motor neurons of ALS model mice: implications for excitotoxicity[END_REF][START_REF] Blasco | The Glutamate Hypothesis in ALS: Pathophysiology and Drug Development[END_REF]. In humans, low GluR2 expression was found in both upper and lower motor neurons compared to other neuronal types (P. [START_REF] Shaw | Molecular factors underlying selective vulnerability of motor neurons to neurodegeneration in amyotrophic lateral sclerosis[END_REF]. Specific downregulation of GluR1 was also reported in a FUS depletion model of ALS [START_REF] Udagawa | FUS regulates AMPA receptor function and FTLD/ALS-associated behaviour via GluA1 mRNA stabilization[END_REF]. Currently, the only approved pharmacological therapeutic for the treatment of ALS (Riluzole, or Rilutek™) is thought to act as a mitigator of glutamate excitotoxicity. The drugs Riluzole and Edaravone have been reported to work by an anti-glutamate mechanism [START_REF] Cheah | Riluzole, Neuroprotection and Amyotrophic Lateral Sclerosis[END_REF][START_REF] Rothstein | Edaravone: A new drug approved for ALS[END_REF]. Given that this therapeutic extends life by approximately 3 months in ALS patients, it is likely that ALS involves some degree of glutamate excitotoxicity in its pathogenesis, although this may not be the predominant mechanism of neurodegeneration [START_REF] Bensimon | A Controlled Trial of Riluzole in Amyotrophic Lateral Sclerosis[END_REF][START_REF] Doble | The pharmacology and mechanism of action of riluzole[END_REF]. SYNAPTOPATHY Synaptopathy is a broad term for diseases with synaptic dysfunction, regardless of the disease mechanisms [START_REF] Lepeta | Synaptopathies: synaptic dysfunction in neurological disorders -A review from students to students[END_REF] (Fogarty, 2019). In the motor system, the synapse is critically important for the integration of signals between motor neurons and target muscles, as well as between neurons in the cortex [START_REF] Azpurua | Neuronal epigenetics and the aging synapse[END_REF].
Neuronal morphology as well as synaptic form, number and structure are tightly regulated, and are essential for appropriate and functional connections. The developing nervous system, stimulated by changes in spontaneous activity, regulates the formation of the synapse. Effectively, cortical circuits require the elimination of redundant synapses during synaptogenesis and rely on intimate communication between pre- and post-synaptic compartments [START_REF] Cohen-Cory | The Developing Synapse: Construction and Modulation of Synaptic Structures and Circuits[END_REF]. The majority of excitatory glutamatergic neurotransmission in the CNS is mediated by α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors (Figure 33). GLIAL DEFECTS The neuroglia includes the neuroectoderm-derived macroglia, which comprise astrocytes, oligodendrocytes and NG2-positive progenitors, and the microglia, the resident macrophages of the CNS, which have a mesodermal origin. Astrocytes and microglia are major players in the neuroinflammatory response (Figure 35). Astrocytes are a large portion of the glial cell population in the CNS and accomplish a plethora of essential functions that have been extensively summarized [START_REF] Verkhratsky | Physiology of Astroglia[END_REF]. Microglia, the resident macrophages of the CNS, play a crucial role in neuroinflammation to promote motor neuron loss in ALS [START_REF] Hooten | Protective and Toxic Neuroinflammation in Amyotrophic Lateral Sclerosis[END_REF]. In ALS, immune cells also contribute to MN degeneration. There is clear evidence of immune activation in some patients with ALS and in animal models of disease, including abnormalities of the peripheral immune system, with alterations of T lymphocytes, monocytes, complement and cytokines in the peripheral blood of patients with ALS. For instance, the total leukocyte count is elevated in patients with ALS and correlates with progression of disease (Murdock et al., 2017). The ratio of neutrophils to monocytes was also shown to be increased (Murdock et al., 2016), as was the total number of granulocytes [START_REF] Gustafson | Comprehensive immune profiling reveals substantial immune system alterations in a subset of patients with amyotrophic lateral sclerosis[END_REF]. HYPEREXCITABILITY OF MNs Electroneurography studies revealed marked variability in the hyperexcitability index scores of patients with ALS. Studies focused on spinal motor neurons revealed that affected neurons within the anterior horn present with both synapse loss and narrowed synaptic contact, with this occurring at an early stage in ALS patients [START_REF] Sasaki | Synapse loss in anterior horn neurons in amyotrophic lateral sclerosis[END_REF]. Moreover, transcranial magnetic stimulation (TMS) studies indicated that both the UMNs and LMNs of the motor cortex and of the spinal cord exhibit hyperexcitability in sporadic and SOD1 familial ALS cases, months prior to symptom onset. Hyperexcitability enables neurons to have a lower threshold to fire action potentials upon stimulation, and is often described as a condition in which the neuron is excessively excitable. The mechanisms leading to hyperexcitability have also been documented in the most characterised SOD G93A mouse model of ALS, with changes in excitability occurring prior to cell loss [START_REF] Pieri | Altered excitability of motor neurons in a transgenic mouse model of familial amyotrophic lateral sclerosis[END_REF], as illustrated by recordings from a WT and a SOD1 G93A MN (Figure 37).
Note that, in response to the largest positive current pulse, the SOD1 G93A MN fires an action potential (AP) whereas the WT MN remains silent. A2. Mean voltage-current (V/I) relationship in WT (black squares) and SOD1 G93A MNs (gray circles), revealing the increased input resistance in SOD1 G93A MNs (steeper slope). B1. Characteristic membrane response of a WT MN (top trace) and a SOD1 G93A MN (middle trace) to current injection of increasing steps (bottom trace). Note that the SOD1 G93A MN fires APs in response to the second current step, unlike the WT MN. B2. Average AP frequency, plotted against current intensity, revealing the hyperexcitability of SOD1 G93A MNs (same symbols as in A2). *p < 0.05, ***p < 0.001, two-way ANOVA. Adapted from Martin et al., 2013. Hyperexcitability was also found in human iPSC-derived MNs from SOD1-, FUS- and C9orf72-associated familial patients (Brian J. Wainger et al., 2014). This hyperexcitability might damage nerve cells and lead to excitotoxicity. The critical role of excitotoxicity in ALS is an enduring concept and poses an attractive target for potential therapeutic strategies. Accordingly, until the recent approval of the drug Edaravone [START_REF] Rothstein | Edaravone: A new drug approved for ALS[END_REF], the only drug on the market for the treatment of ALS was Riluzole, which is thought to positively alter ALS progression by reducing excitotoxic pre-synaptic glutamate release [START_REF] Miller | Glutaminase Immunoreactivity and Enzyme Activity Is Increased in the Rat Dorsal Root Ganglion Following Peripheral Inflammation[END_REF]. Supporting the role excitotoxic mechanisms play in disease genesis and progression, ALS patients display elevated levels of cerebrospinal fluid (CSF) glutamate (P. J. [START_REF] Shaw | CSF and Plasma Amino Acid Levels in Motor Neuron Disease: Elevation of CSF Glutamate in a Subset of Patients[END_REF], and reduced expression and activity of the excitatory amino acid transporter 2 (EAAT2) in pathologically affected areas of the CNS [START_REF] Rothstein | Decreased Glutamate Transport by the Brain and Spinal Cord in Amyotrophic Lateral Sclerosis[END_REF][START_REF] Rothstein | Selective loss of glial glutamate transporter GLT-1 in amyotrophic lateral sclerosis[END_REF][START_REF] Fray | The expression of the glial glutamate transporter protein EAAT2 in motor neuron disease: an immunohistochemical study: EAAT2 expression in motor neuron disease[END_REF]. Moreover, findings in transgenic mice expressing a rare ALS-causing G127X mutant SOD1 protein indicate the protein is linked to excitotoxicity in ALS pathology by increasing the sensitivity of motor neurons to excitotoxicity [START_REF] Meehan | Intrinsic properties of lumbar motor neurones in the adult G127insTGGG superoxide dismutase-1 mutant mouse in vivo: evidence for increased persistent inward currents: G127X SOD1 mice motor neurone properties in vivo[END_REF], reducing the expression and activity of EAAT2 [START_REF] Boston-Howes | Caspase-3 Cleaves and Inactivates the Glutamate Transporter EAAT2[END_REF] and altering the inhibitory-excitatory synaptic ratio [START_REF] Sunico | Reduction in the Motoneuron Inhibitory/Excitatory Synaptic Ratio in an Early-Symptomatic Mouse Model of Amyotrophic Lateral Sclerosis: Inhibitory Loss and Excitatory Gain in ALS[END_REF]. Renshaw cell alterations may lead to hyperexcitability and eventually to MN degeneration.
Loss of Renshaw cell function could result from degeneration of the corticospinal fibers directed to these cells; the loss of cortical inhibitory influence, in association with ion channel alterations, may then participate in increased motor network excitability [START_REF] Brunet | Cortical Circuit Dysfunction as a Potential Driver of Amyotrophic Lateral Sclerosis[END_REF]. Two hypotheses have been proposed to explain how Renshaw cell alterations may lead to a hyperexcitable state and eventual degeneration of MNs [START_REF] Scamps | Synaptic Transmission and Motoneuron Excitability Defects in Amyotrophic Lateral Sclerosis[END_REF]. The first hypothesis postulates that the hyperexcitability is caused by loss of recurrent Renshaw cell-mediated inhibition and is based on electrophysiological findings suggesting an impairment of Renshaw cells in patients with ALS [START_REF] Raynor | Recurrent inhibition is decreased in patients with amyotrophic lateral sclerosis[END_REF][START_REF] Shefner | The mixed nerve silent period in normal subjects and patients with amyotrophic lateral sclerosis[END_REF]. The second hypothesis proposes that the recurrent inhibitory circuit is altered prior to MN hyperexcitability and neurodegeneration, but not as a consequence of Renshaw cell loss, given the finding that loss of inhibitory spinal interneurons occurred after loss of motoneurons (M. Hossaini et al., 2011). SKELETAL MUSCLE ALTERATIONS ALS had long been considered a pure motor neuron disease; however, recent studies have shown that motor neuron protection is not sufficient to prevent the course of the disease, since the dismantlement of NMJs occurs before motor neuron degeneration. Skeletal muscle alterations have been described in the early stages of the disease. Studies in ALS mouse models have corroborated this hypothesis and described ALS as a distal axonopathy also caused by alterations in skeletal muscle [START_REF] Fischer | Amyotrophic lateral sclerosis is a distal axonopathy: evidence in mice and man[END_REF]. The NMJ connects muscle fibers and motor neurons, allowing their communication. Trophic factors (i.e. retrograde messengers) play an important role in the selection process of NMJ connections at nerve terminals [START_REF] Je | Role of probrain-derived neurotrophic factor (proBDNF) to mature BDNF conversion in activitydependent competition at developing neuromuscular synapses[END_REF][START_REF] Hurtado | Muscle Contraction Regulates BDNF/TrkB Signaling to Modulate Synaptic Function through Presynaptic cPKCalpha and cPKCbetaI[END_REF]. One of the most studied neurotrophic factors is brain-derived neurotrophic factor (BDNF), which shows a neuroprotective effect under adverse conditions such as glutamatergic stimulation, cerebral ischemia, hypoglycemia, and neurotoxicity [START_REF] Maisonpierre | Human and rat brain-derived neurotrophic factor and neurotrophin-3: gene structures, distributions, and chromosomal localizations[END_REF][START_REF] Bathina | Brain-derived neurotrophic factor and its clinical implications[END_REF]. proBDNF-p75NTR (p75 neurotrophin receptor) signaling promotes retraction of the less active terminals, whereas mature BDNF-tyrosine-related kinase B (TrkB) signaling facilitates stabilization of the active ends [START_REF] Je | Role of probrain-derived neurotrophic factor (proBDNF) to mature BDNF conversion in activitydependent competition at developing neuromuscular synapses[END_REF].
Significantly higher levels of BDNF in CSF were found in ALS patients with faster disease progression, and lower serum levels of proBDNF were associated with a shorter survival, indicating that BDNF and proBDNF might be used as ALS prognostic biomarkers [START_REF] Riolo | BDNF and Pro-BDNF in Amyotrophic Lateral Sclerosis: A New Perspective for Biomarkers of Neurodegeneration[END_REF]. ALS is the classic example of severely compromised communication between muscles and nerves [START_REF] Lepore | Neuromuscular Junction as an Entity of Nerve-Muscle Communication[END_REF]. NMJ degradation is the first event of denervation, and it occurs before motor neuron loss [START_REF] Fischer | Amyotrophic lateral sclerosis is a distal axonopathy: evidence in mice and man[END_REF][START_REF] Loeffler | The Role of Skeletal Muscle in Amyotrophic Lateral Sclerosis[END_REF]. NRG1-ErbBs signalling and, hence, NMJ development and maintenance depend on the activation of muscle-specific receptor tyrosine kinase (MuSK) [START_REF] Burden | Building the vertebrate neuromuscular synapse[END_REF]. MuSK orchestrates the muscle-derived retrograde signal through its interaction with LRP4 and agrin, ensuring NMJ stability and maintenance while preventing disassembly [START_REF] Kong | Inhibition of synapse assembly in mammalian muscle in vivo by RNA interference[END_REF][START_REF] Hesser | Synapse disassembly and formation of new synapses in postnatal muscle upon conditional inactivation of MuSK[END_REF][START_REF] Herbst | MuSK function during health and disease[END_REF][START_REF] Cruz | The Neuromuscular Junction in Health and Disease: Molecular Mechanisms Governing Synaptic Formation and Homeostasis[END_REF]. MuSK overexpression in SOD1 G93A double transgenic mice delayed the onset of ALS, improved motor ability, and preserved the integrity of NMJs (Pérez-García et al., 2012). Preclinical genetic interventions on skeletal muscle highlighted the importance of this tissue in ALS progression, as modifying the expression of genes involved in skeletal muscle physiology, metabolism, and functions had a strong impact (either positive or negative) on the disease. For example, IGF-1 regulates skeletal muscle physiology and promotes satellite cell proliferation and neuronal survival (Musaro et al., 2002;[START_REF] Song | The therapeutic potential of IGF-I in skeletal muscle repair[END_REF][START_REF] Ahmad | Implications of Insulin-Like Growth Factor-1 in Skeletal Muscle and Various Diseases[END_REF][START_REF] Yoshida | Mechanisms of IGF-1-Mediated Regulation of Skeletal Muscle Hypertrophy and Atrophy[END_REF]. The expression of IGF-1 in the skeletal muscle of ALS models gave the most remarkable results on disease progression and survival, delaying the death of SOD1 G93A mice by about one month [START_REF] Dobrowolny | Muscle expression of a local Igf-1 isoform protects motor neurons in an ALS mouse model[END_REF]. In these mice, the regeneration pathways through calcineurin and CDK5 were induced, while apoptotic and ubiquitin pathways were inhibited, protecting muscles against atrophy and denervation and preserving NMJs and motor neurons [START_REF] Dobrowolny | Muscle expression of a local Igf-1 isoform protects motor neurons in an ALS mouse model[END_REF][START_REF] Dobrowolny | Local expression of mIgf-1 modulates ubiquitin, caspase and CDK5 expression in skeletal muscle of an ALS mouse model[END_REF].
Interestingly, high concentrations of IGF-1 in patients' serum correlate with a better prognosis but not with a lower risk of ALS, suggesting that IGF-1 plays a role in the survival of ALS patients [START_REF] Nagel | Association of Insulin-like Growth Factor 1 Concentrations with Risk for and Prognosis of Amyotrophic Lateral Sclerosis -Results from the ALS Registry Swabia[END_REF]. HYPOTHESIS / AIMS OF THE THESIS As we have seen, ALS is an adult-onset neurodegenerative disease characterized by a progressive and selective degeneration of MNs. Most ALS studies have focused on symptomatic adult stages, based on the hypothesis that ALS pathogenesis occurs when the disease becomes symptomatic. However, growing evidence indicates that ALS pathogenesis might start long before symptom onset (Jason J. Kuo et al., 2004; Kuo et al., 2005), with early alterations in ion channels. Moreover, our lab previously found that E17.5 SOD G93A MNs are hyperexcitable because of a shorter dendritic tree and increased input resistance (Martin et al., 2013). In our study, we cannot exclude a participation of the persistent Na + current in the hyperexcitability of E17.5 SOD G93A MNs, but our study demonstrates that its contribution would be minor compared to the alteration of dendrite morphology (summarized in Figure 38). However, little is known about the synaptic mechanisms underlying the hyperexcitability of prenatal SOD G93A MNs. Does a reduction in the inhibitory inputs or an increase of the excitatory inputs lead to higher firing rates? How are altered synaptic inputs and dendritic morphology integrated (i.e., dendritic integration), leading to hyperexcitability? From a theoretical viewpoint, increasing the excitability of a network, or cellular hyperexcitability, can be achieved equally well by increasing excitatory input or removing inhibitory input. In fact, silent interconnections can be unmasked better by disinhibition than by enhancing excitability. Furthermore, among the numerous potential causative links in ALS-associated dysfunction and degeneration, more and more evidence points to the key role of inhibitory alteration (Ramirez-Jarquin et al., 2014; Clark et al., 2015a; Ramírez-Jarquín et al., 2018; Van den Bos et al., 2018), with vulnerability being induced in the perinatal period (Eisen et al., 2014b). Therefore, this thesis focused on investigating changes of prenatal (E17.5) inhibition in spinal locomotor CPG networks. At E17.5, GABA/glycine is still depolarizing, but a powerful shunting effect allows adult-like alternated activities between ipsi- and contra-lateral lumbar spinal CPGs. It is therefore worth examining the inhibitory system in the E17.5 SOD1 G93A mouse. As summarized in Figure 38, the aims of my PhD thesis were to: 1- analyze the Cl - homeostasis in SOD G93A and littermate spinal lumbar MNs from the lateral motor column, in order to identify whether a deficit of GABA/glycine inhibition leads to hyperexcitability in prenatal SOD G93A MNs. Hyperexcitability of SOD G93A MNs could also be induced by mechanisms besides these intrinsic mechanisms, such as an alteration in the synaptic inputs received by a motoneuron. We focused on the GABA/glycine inhibitory pathway from the premotor network to investigate whether this pathway is involved in the hyperexcitability of E17.5 SOD G93A MNs. Accordingly, four scientific questions needed to be addressed: 1) What is the state of GABAAR/GlyR-mediated inhibition in SOD G93A MNs?
(to evaluate changes in Cl - homeostasis), and what are the mechanisms of GABA/glycine inhibition deficiency leading to hyperexcitability of prenatal SOD G93A MNs, i.e., does a reduction in the inhibitory inputs lead to higher firing rates? (to analyse the density of GABAergic/glycinergic inputs); 2) What is the consequence of putative alterations in inhibitory pathways on motor function? (to assess the outcome on fictive locomotor-like activity). Answering this question will help to determine whether the properties of hyperexcitability caused by inhibitory dysfunction are maintained in a compensatory state or have been transformed into a fully pathological state; 3) What early compensatory or pathological mechanisms are involved in the putative alteration of Cl - homeostasis? 4) Is it the integration of multiple factors such as morphology, inhibitory synaptic inputs and ECl that synergistically regulates neuronal hyperexcitability? (Abbreviations in Figure 38: HVA, high voltage activated; PCca, persistent Ca 2+ current.) Introduction Amyotrophic Lateral Sclerosis (ALS), also known as Lou Gehrig's disease, is a rapidly progressive neurodegenerative disease that targets motor neurons (MNs). It is one of the most common and most devastating neurodegenerative diseases. The incidence rate of ALS in the general European population has been estimated at ~2 per 100 000 persons per year [START_REF] Logroscino | Incidence of amyotrophic lateral sclerosis in Europe[END_REF]. What makes ALS particularly devastating is that there is no known curative treatment. Death ultimately occurs due to the respiratory paralysis that concludes the progressive course of the disease, with the mean survival time of patients post-diagnosis being 3 to 5 years. In 90% of cases, ALS is idiopathic and sporadic (sALS). Only 10% of ALS cases are familial (fALS) in origin, inherited through an autosomal dominant pattern. Around 20 genes are associated with ALS, with the most common causes of typical ALS being associated with mutations in SOD1, TARDBP, FUS and C9orf72 (Van Damme et al., 2017). Mutations in C9orf72 are the most common cause of fALS (10% of total ALS), followed by mutations in SOD1 (superoxide dismutase 1) (2% of total ALS). Interestingly, misfolded SOD1 was found in MNs of a subset of patients with sALS who did not have SOD1 mutations, suggesting that there is a SOD1-dependent pathway common to both sALS and fALS [START_REF] Bosco | Wild-type and mutant SOD1 share an aberrant conformation and a common pathogenic pathway in ALS[END_REF][START_REF] Pare | Misfolded SOD1 pathology in sporadic amyotrophic lateral sclerosis[END_REF]. Because ALS is a lethal adult-onset neurodegenerative disease, most studies have focused on symptomatic adult stages.
However, a growing body of evidence indicates that ALS pathogenesis might develop earlier than previously expected (Kuo et al., 2004; Kuo et al., 2005; [START_REF] Avossa | Early signs of motoneuron vulnerability in a disease model system: characterization of transverse slice cultures of spinal cord isolated from embryonic ALS mice[END_REF] Bories et al., 2007; Amendola and Durand, 2008; van Zundert et al., 2008; Pambo-Pambo et al., 2009; Chang and Martin, 2011; Quinlan et al., 2011; Filipchuk and Durand, 2012; Leroy et al., 2014; [START_REF] Milan | Age-Related changes in pre-and postsynaptic partners of the cholinergic C-Boutons in Wild-Type and SOD1G93A lumbar motoneurons[END_REF] Chang and Martin, 2016; [START_REF] Medelin | Altered development in GABA co-release shapes glycinergic synaptic currents in cultured spinal slices of the SOD1(G93A) mouse model of amyotrophic lateral sclerosis[END_REF]. Using the transgenic mouse model SOD1 G93A (SOD, Gly93→Ala substitution), which expresses high levels of human mutant SOD1 and faithfully recapitulates the vast majority of the pathological abnormalities seen in ALS patients [START_REF] Fogarty | Driven to decay: excitability and synaptic abnormalities in amyotrophic lateral sclerosis[END_REF], we previously found that SOD MNs are hyperexcitable at the prenatal (embryonic day (E) 17.5) stage because of a shorter dendritic tree and increased input resistance (Martin et al., 2013). Among the numerous potential causative links in ALS-associated dysfunction and degeneration, more and more evidence points to the key role of inhibitory alteration (Ramírez-Jarquín et al., 2014; Clark et al., 2015; Ramírez-Jarquín and Tapia, 2018; Van den Bos et al., 2018), with vulnerability being induced in the perinatal period (Eisen et al., 2014). Here, using gramicidin intracellular recordings of SOD and wild type (WT) lumbar spinal prenatal E17.5 MNs from the lateral motor column (LMC, ventral pool), we first analyzed chloride homeostasis and found that EGABAAR was 10 mV more depolarized in the SOD fetus compared to WT, associated with a KCC2 down-regulation. Interestingly, locomotor-like activity remained normal. We then examined the premotor network at the same fetal developmental stage in order to determine underlying mechanisms that could compensate for MN hyperexcitability and chloride homeostasis deficiency. This was performed by combining whole-cell patch-clamp recordings and immunohistochemistry. Miniature (m) and spontaneous (s) inhibitory postsynaptic currents (IPSCs), as well as evoked (e) IPSCs and puff-evoked GABAAR currents, were quantified in parallel with the staining of synaptic VIAAT boutons. Our results show a significant reduction in IPSC frequency in SOD MNs compared to WT MNs, in agreement with a reduction in synaptic VIAAT-positive terminals. Our results also show a significantly slower decay time in SOD IPSCs and GABAAR currents compared to WT. Interestingly, this slower decay is associated with a higher intracellular chloride concentration [Cl -]i in SOD MNs compared to WT.
In prenatal E17.5 MNs, GABAergic/glycinergic postsynaptic potentials are depolarizing (dGPSPs) and exert mixed excitatory (depolarizing) and inhibitory (shunting) effects on the input-output (I-O) relationship [START_REF] Gulledge | Excitatory actions of GABA in the cortex[END_REF][START_REF] Alger | Pharmacological evidence for two kinds of GABA receptor on rat hippocampal pyramidal cells studied in vitro[END_REF][START_REF] Michelson | Excitatory synaptic responses mediated by GABAA receptors in the Hippocampus[END_REF][START_REF] Misgeld | Depolarizing IPSPs and depolarization by GABA of rat neostriatum cells in vitro[END_REF]Staley and Mody, 1992). A higher [Cl -]i would then result in shifting the balance of dGPSPs toward more excitation [START_REF] Branchereau | Depolarizing GABA/glycine synaptic events switch from excitation to inhibition during frequency increases[END_REF]. However, using computer simulations, we demonstrate that a longer decay time reinforces the inhibitory component of dGPSPs. Therefore, the longer relaxation of dGPSPs constitutes a partial compensatory mechanism sustaining a well-coordinated, although slightly slower, locomotor activity in prenatal SOD mice. Together, our findings show that although the presynaptic network of SOD MNs is altered at fetal stages, cellular mechanisms leading to long-lasting IPSCs operate to compensate for the depolarized EGABAAR and hyperexcitability of E17.5 SOD MNs in order to maintain a coordinated locomotor activity. Results Fetal MNs exhibit a perturbed chloride homeostasis but normal locomotor-like activity Initially, gramicidin perforated patch-clamp recordings from L4-L5 MNs were used to obtain Rin and Cm values for individual MNs. SOD MN values for Rin and Cm were 148.2 ± 13.5 MΩ, n = 21 and 105.6 ± 5.9 pF, n = 21, N = 16, respectively, and were significantly different (p<0.05, Mann Whitney test) from WT MN values (104.9 ± 10.8 MΩ, n = 16 and 143.7 ± 14.2 pF, n = 16, N = 10) (Table 1). These values corresponded to the previously reported differences between WT and SOD MNs (Martin et al., 2013). Puff application of the GABAAR agonist isoguvacine allowed us to measure the reversal potential of GABAAR (EGABAAR) in individual SOD and WT MNs (Figure 1A), and our pooled data indicated that mean EGABAAR was significantly different in SOD MNs (-50.5 ± 2.4 mV, n = 25, N = 18) and WT MNs (-62.0 ± 2.0 mV, n = 24, N = 17) (p<0.001, Mann Whitney test) (Figure 1B1). From these EGABAAR values, and because Cl - ions are mainly involved in GABAAR currents in fetal MNs (Bormann et al., 1987; Gao and Ziskind-Conhaim, 1995), we calculated the intracellular chloride concentration [Cl -]i as being 19.5 ± 1.5 mM and 12.2 ± 0.9 mM in SOD and WT MNs, respectively (p<0.001, Mann Whitney test) (Figure 1B2). Interestingly, the [Cl -]i values reported in E17.5 WT MNs are in agreement with those previously reported, i.e., 12 mM (Delpy et al., 2008). The resting membrane potential (Em) was not significantly different between SOD and WT MNs (-73.1 ± 0.5 mV in SOD and -74.1 ± 1.0 mV in WT MNs) (p>0.05, Mann Whitney test) (Figure 1B3), in contrast to the mean driving force (DF = Em - EGABAAR), which was 22.6 ± 2.3 mV and 12.1 ± 2.2 mV in SOD and WT MNs, respectively (p<0.001, Mann Whitney test) (Figure 1B4). The dissipation of an elevated intracellular Cl - concentration is mediated by the activation of the neuronal K + /Cl - co-transporter type 2 (KCC2) (Payne et al., 1996; Rivera et al., 1999).
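For reference, the conversion above from measured EGABAAR values to [Cl -]i follows from the Nernst relation, given the assumption stated in the text that GABAAR currents in fetal MNs are essentially carried by Cl -. The worked numbers below are a sketch only, assuming an extracellular chloride concentration of ~130 mM (a typical aCSF value; the exact composition used is not restated in this excerpt) and RT/F ≈ 26.7 mV:

E_{\mathrm{GABA_AR}} \approx E_{\mathrm{Cl}} = \frac{RT}{F}\,\ln\frac{[\mathrm{Cl}^-]_i}{[\mathrm{Cl}^-]_o} \qquad\Longrightarrow\qquad [\mathrm{Cl}^-]_i = [\mathrm{Cl}^-]_o\,e^{\,E_{\mathrm{GABA_AR}}F/RT}

Under these assumptions, EGABAAR = -50.5 mV gives [Cl -]i ≈ 130 × e^(-50.5/26.7) ≈ 20 mM, and EGABAAR = -62.0 mV gives ≈ 13 mM, in line with the 19.5 mM (SOD) and 12.2 mM (WT) values reported above.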
This protein, which extrudes Cl -, is expressed at early stages in the embryonic SC (Delpy et al., 2008). The [Cl -]i is also related to the Na + -K + -2Cl - co-transporter NKCC1, which intrudes chloride ions (Russell, 2000; Delpy et al., 2008). Therefore, in order to identify the cellular mechanisms underlying the higher [Cl -]i in SOD MNs, we first assessed the amount of KCC2 and NKCC1 in lumbar SCs using WB. Our data showed that KCC2 was reduced by ~19% in SOD lumbar SCs compared to WT (p<0.05, Mann Whitney test) (Figure 2A1), whereas NKCC1 was unchanged (Figure 2A2). When analyzed in the MN area (Hb9-eGFP), the KCC2 staining appeared significantly reduced in SOD SCs (181.5 ± 16.2) (Figure 2C). This difference was absent when the specific KCC2 blocker VU0240551 was applied (Figure 2C). In order to assess whether the alteration of chloride homeostasis in SOD MNs could affect the left-right alternation in locomotor-like activity expressed by E17.5 mouse spinal lumbar networks, owing to their half-center organization and commissural inhibition (Branchereau et al., 2000), we recorded this activity in L3-L5 ventral roots after exogenous application of 5-HT/NMDA/DA. As illustrated in Figure 3A1, this cocktail evoked stable bilateral segmental alternation between left and right ventral roots in WT SC preparations. Chemically-activated locomotor-like activity was also expressed in the ventral roots of SOD SCs (Figure 3A2), although analysis (Rayleigh's test) revealed a slower rhythm in SOD SCs (period: 2.01 ± 0.03 s, n = 8, N = 5) compared to WT controls, where bursts recurred with a period of 1.88 ± 0.03 s (n = 9, N = 7) (p<0.01, Mann Whitney test) (Figure 3B). However, the rhythm phase relationship between left and right sides was close to anti-phase in both genotypes (0.49 ± 0.003 and 0.52 ± 0.004 in SOD and WT SCs, respectively) (see polar plots in Figure 3A1-A2). Therefore, surprisingly, the alteration of chloride homeostasis in SOD MNs has limited consequences for the locomotor-like activity, and hence little physiological impact. Could compensatory mechanisms in the SOD spinal synaptic network explain this result? In order to gain information about the synaptic inputs from the premotor inhibitory neuronal networks that underlie this MN activity, we then compared the occurrence and properties of GABA-/glycinergic synaptic events in SOD and WT E17.5 MNs in the absence of rhythmic fictive locomotion. We found that miniature GABAergic/glycinergic inhibitory synaptic currents (mIPSCs), which persisted in the presence of TTX, were significantly (p<0.001; K-S test) less frequent in SOD: the mean inter-event interval (IEI) was 887 ± 30 ms (n = 19, N = 9, 1589 events) for SOD MNs and 838 ± 33 ms (n = 15, N = 7, 1366 events) for WT MNs from the same littermates (p<0.001, Mann Whitney test) (Figure 4A1,A2). The mIPSCs of SOD MNs were also slightly smaller (52.8 ± 0.9 pA) compared to WT MNs (57.9 ± 1.2 pA) (p<0.05, Mann Whitney test) (Figure 4A3). Taurise was not significantly different between SOD MNs (1.25 ± 0.03 ms) and WT MNs (1.11 ± 0.02 ms) (p>0.05, Mann Whitney test), whereas taudecay was strongly significantly larger in SOD MNs (20.53 ± 0.08 ms) compared to WT MNs (16.11 ± 0.40 ms) (p<0.0001, Mann Whitney test) (Figure 4A4, see inset traces).
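As an aside, the left-right coordination metrics above (polar plots, mean phase, Rayleigh's test) reduce to standard circular statistics. The following minimal Python sketch is illustrative only, not the analysis code used here (all names are hypothetical); it computes the mean phase, the resultant vector length R that is plotted in polar plots, and an approximate Rayleigh p-value from burst phase lags expressed in cycle units (0-1):

import numpy as np

def circular_stats(phase_lags):
    # Convert cycle units (0-1) to angles on the unit circle.
    theta = 2.0 * np.pi * np.asarray(phase_lags, dtype=float)
    n = theta.size
    mean_vector = np.mean(np.exp(1j * theta))
    r = np.abs(mean_vector)                    # resultant vector length R
    mean_phase = (np.angle(mean_vector) / (2.0 * np.pi)) % 1.0
    z = n * r**2                               # Rayleigh statistic
    p = np.exp(-z) * (1.0 + (2.0 * z - z**2) / (4.0 * n))  # Zar's approximation
    return mean_phase, r, p

# Phases clustered near 0.5 with a small Rayleigh p indicate significant
# left-right anti-phase coupling, as in the polar plots of Figure 3, e.g.:
# print(circular_stats([0.49, 0.51, 0.50, 0.48, 0.52]))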
The total charge of mIPSCs, calculated for best-fitted (R > 0.98) mIPSCs, was similar in SOD MNs (-1.14 ± 0.04 pA.s, n = 841, N = 9) and WT MNs (-1.18 ± 0.04 pA.s, n = 901, N = 7) (p>0.05, Mann Whitney test), indicating that the longer relaxation of SOD mIPSCs likely compensated for their reduced amplitude. The development of synaptic inhibition in spinal MNs involves a functional switch from GABA to glycine events (Gao et al., 2001). We therefore focused on pharmacologically dissected GABA and glycine mIPSCs, isolated using strychnine and GABAzine, respectively. Interestingly, strychnine application revealed a tonic glycine current in 5/7 SOD MNs and 4/6 WT MNs (Figure 4A5), without any significant difference in amplitude between SOD MNs (11.72 ± 3.34 pA, n = 5, N = 5) and WT MNs (11.15 ± 2.36 pA, n = 4, N = 4) (p>0.05, Mann Whitney test). We found a tonic GABA current in only 3/8 SOD MNs and 2/7 WT MNs, this tonic current being comparable in both genotypes: 15.03 ± 1.09 pA (n = 3, N = 3) for SOD MNs and 10.40 ± 0.60 pA (n = 2, N = 2) for WT MNs (p>0.05, Mann Whitney test). This tonic glycine and GABA current likely favors a tonic shunting effect, as previously demonstrated [START_REF] Branchereau | Depolarizing GABA/glycine synaptic events switch from excitation to inhibition during frequency increases[END_REF][START_REF] Song | Tonic excitation or inhibition is set by GABA(A) conductance in hippocampal interneurons[END_REF]. The pharmacological dissection revealed that mIPSCs were ~50% pure glycine and ~40% pure GABA, the remaining ~10% of events being mixed (see Figure 4A6-A8). No difference was found between SOD MNs (48.43 ± 8.26% glycine, n = 5, N = 5; 39.65 ± 16.77% GABA, n = 3, N = 3) and WT MNs (51.46 ± 9.41% glycine, n = 3, N = 3; 41.79 ± 7.97% GABA, n = 5, N = 5) (p>0.05, Mann Whitney test). Interestingly, E17.5 SOD MNs, recorded at a holding potential of -70 mV with ECl set at ~0 mV, displayed pure glycine and GABA mIPSCs with a longer relaxation than E17.5 WT animals from the same littermates. Taudecay of pure glycine mIPSCs was 13.35 ± 3.14 ms (n = 3, N = 3) in SOD MNs and 6.53 ± 0.44 ms in WT MNs (n = 5, N = 5) (p<0.05, Mann Whitney test) (Figure 4A6), whereas taurise was similar in both genotypes (1.24 ± 0.12 ms for SOD and 1.18 ± 0.07 ms for WT, p>0.05, Mann Whitney test). Taudecay of pure GABA mIPSCs was also larger in SOD MNs (24.69 ± 1.18 ms, n = 5, N = 5) compared to WT MNs (19.51 ± 0.56 ms, n = 3, N = 3) (p<0.05, Mann Whitney test) (Figure 4A7), whereas taurise was similar in both genotypes (3.02 ± 0.24 ms for SOD and 2.12 ± 0.52 ms for WT, p>0.05, Mann Whitney test). Most of the differences found between SOD and WT mIPSCs were also observed in spontaneous GABAergic/glycinergic inhibitory synaptic currents (sIPSCs). An increase in IEI was found for sIPSCs (p<0.001; K-S test): mean IEI was 182.3 ± 5.9 ms (n = 14, N = 10, 1499 events) and 157.9 ± 5.6 ms (n = 17, N = 11, 1685 events) in SOD and WT MNs, respectively (p<0.001, Mann Whitney test) (Figure 4B1-B2). No difference was found in the amplitude of sIPSCs (p>0.05; K-S test): 68.0 ± 1.8 pA in SOD MNs and 61.7 ± 1.4 pA in WT MNs (p>0.05, Mann Whitney test) (Figure 4B3). Again, taurise was not significantly different between SOD MNs (1.38 ± 0.02 ms) and WT MNs (1.37 ± 0.02 ms) (p>0.05, Mann Whitney test), whereas taudecay was again strikingly larger in SOD MNs compared to WT MNs: 21.89 ± 0.42 ms in WT MNs and 23.58 ± 0.53 ms in SOD MNs (p<0.0001, Mann Whitney test) (Figure 4B4, see inset traces).
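The taurise and taudecay values quoted throughout are the time constants of the event waveforms; such constants are commonly extracted by fitting each (or each averaged) event with a difference-of-two-exponentials template. The Python sketch below is illustrative only: the actual detection and fitting pipeline is not described in this excerpt, and all names and initial guesses are hypothetical.

import numpy as np
from scipy.optimize import curve_fit

def ipsc_template(t, amp, tau_rise, tau_decay, t0):
    # Zero before onset t0, then a fast rise (tau_rise) followed by a
    # slower mono-exponential relaxation (tau_decay); negative-going (pA).
    dt = np.asarray(t, dtype=float) - t0
    y = np.zeros_like(dt)
    m = dt > 0
    y[m] = -amp * (np.exp(-dt[m] / tau_decay) - np.exp(-dt[m] / tau_rise))
    return y

# t_ms: time base of one averaged event (ms); i_pa: current trace (pA).
# p0 holds rough initial guesses in the range of the values reported above:
# popt, _ = curve_fit(ipsc_template, t_ms, i_pa, p0=(60.0, 1.2, 20.0, 5.0))
# amp, tau_rise, tau_decay, t0 = popt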
VLF (ventro-lateral funiculus)-evoked IPSCs (eIPSCs), recorded in MNs at a holding potential of -70 mV with an imposed ECl of -45 mV, also revealed a longer taudecay (30.3 ± 5.7 ms, n = 6, N = 3) in SOD MNs compared to WT MNs (18.3 ± 2.7 ms, n = 10, N = 4) (p<0.05, Mann Whitney test) (Figure 4C1-C2), while the amplitude and taurise of eIPSCs were not significantly different (62.4 ± 16.6 pA and 2.8 ± 0.5 ms in SOD, n = 6; 72.1 ± 13.2 pA and 2.3 ± 0.3 ms in WT, n = 10). This was in agreement with our data showing a reduced efficacy of KCC2 (Figure 2C) leading to a higher [Cl-]i in SOD MNs compared to WT, and therefore a slower relaxation of IPSCs (Pitt et al., 2008; Houston et al., 2009). Finally, the taudecay of the puff-evoked GABAAR current was also affected in the same way: 707.

The transport of GABA/glycine (GABA/Gly) into synaptic vesicles is mediated by VIAAT transporters. Therefore, to seek anatomical correlates of our electrophysiological findings, we performed an immunohistochemical analysis of GABAergic/glycinergic (VIAAT-positive) synaptic puncta (Figure 5A, red) in the lumbar MN area identified by the presence of Hb9-eGFP neurons (Figure 5A, green). Synaptic terminals were identified using double staining for the synaptic vesicle protein synaptophysin (Figure 5A, white), which is ubiquitously expressed in the marginal zone of the SC in close proximity to Hb9 somata. We did not detect any significant difference in the percentage of VIAAT puncta between SOD and WT SCs: 0.47 ± 0.10% of the analyzed area in SOD (n = 5, N = 4) and 0.54 ± 0.14% in WT (n = 6, N = 5) (p>0.05, Mann Whitney test) (Figure 5B1-B2). Synaptophysin staining in SOD SCs was decreased by 44% compared to WT: 2.17 ± 0.47% of area in SOD and 3.93 ± 0.72% of area in WT (p<0.01, Mann Whitney test) (Figure 5B3). We then calculated the percentage of the synaptophysin surface colocalized with VIAAT and found a ~23% decrease in this percentage of colocalization in SOD SCs compared to WT: 24.50 ± 2.55% and 32.05 ± 2.73% for SOD and WT, respectively (p<0.001, Mann Whitney test) (Figure 5B4).

Our electrophysiological data indicated a clear alteration in the frequency and shape of miniature and spontaneous synaptic currents in SOD MNs, with IPSCs being less frequent in SOD animals. This reduced basal network activity was therefore consistent with the anatomical labeling of GABAergic/glycinergic synaptic terminals. Prenatal SOD MNs also exhibited a more depolarized EGABAAR than WT MNs, which would in turn impinge on the efficacy of GABA/Gly inhibition (Branchereau et al., 2016). In SOD MNs, EGABAAR was 10 mV more depolarized, which could preclude efficient inhibition. On the other hand, surprisingly, well-coordinated locomotor-like activity was still elicited in SOD SCs (Figure 3A), indicating the presence of putative compensatory mechanisms. In this respect, the slower decay times of both IPSCs and puff-evoked GABAAR currents recorded in SOD MNs could be one such mechanism. To test this idea further, we conducted a series of computer simulations of SOD-like and WT-like E17.5 spinal MNs in which the effect of taudecay on the strength of GABA/Gly inhibition was assessed.
A slower decay time of GABA/Gly synaptic currents to strengthen inhibition in SOD MNs

The impact of synaptic current shape on MN activity was assessed using numerical simulations of WT-like and SOD-like MNs in the NEURON simulation environment (Carnevale and Hines, 2006). The topology and morphometry of the canonical WT and SOD E17 MNs were taken from Martin et al. (2013). The two MN types differed only in the length of their terminal dendrites, which was shorter in SOD-like MNs (i.e., 60% of WT-like MNs) (Figure 6-figure supplement 1A-B). As a result, the SOD-like MNs are more excitable (Martin et al., 2013). A continuous depolarizing current was injected into the somata of both the WT-like (250 pA) and SOD-like MNs (200 pA) (Figure 6-figure supplement 1C) to produce a spiking discharge of ~12.5 Hz (Figure 6A1,B1). This spike frequency in E17.5 MNs was reached during bursts of activity occurring during fictive locomotion (Figure 6-figure supplement 2A-B). During this MN discharge, a train of GABA/Gly synaptic events was delivered to the MN soma. Various frequencies of GABA/Gly synaptic events were tested to assess the frequency (cut-off frequency) needed to totally block the ongoing MN discharge. When EGABAAR was set to -50 mV (the value measured in biological SOD MNs), increasing the GABA/Gly taudecay from 20 ms to 25 ms drastically increased its inhibitory effect on the MN discharge (Figure 6): whereas a 71 Hz GABA/Gly synaptic event train with a taudecay of 20 ms slowed down the WT model MN's discharge from 12 Hz to 8-10 Hz, the same train totally blocked spike activity with a GABA/Gly taudecay of 25 ms (Figure 6A1-A2) and almost blocked it in a SOD MN (Figure 6B1-B2). We then repeated these simulations for a set of synaptic frequencies ranging from 0 Hz to 100 Hz for the two taudecay values (20 and 25 ms), with EGABAAR set at -50 or -60 mV for the WT (Figure 6A3-A4) and SOD MNs (Figure 6B3-B4).

Figure 6. Simulation of a putative compensatory role of taudecay for the inhibitory strength of GABA/Gly synaptic events in SOD-like MNs. (A) Simulations made with a WT-like MN whose soma was continuously injected with depolarizing current (250 pA) to induce spiking discharge at a rate of ~12.5 Hz. After the spiking discharge was stabilized (t = 2 s), a train of GABA/Gly synaptic events was delivered to the soma at a frequency of 71 Hz, with either a taudecay of 20 ms (A1) or 25 ms (A2) and EGABAAR set at -50 mV. Using these two values of GABA/Gly taudecay (20 ms: dark blue circles; 25 ms: light blue squares), various frequencies of GABA/Gly synaptic train (from 0 to 100 Hz) were tested with either EGABAAR = -50 mV (A3) or EGABAAR = -60 mV (A4). The horizontal dashed line represents the stabilized spiking frequency before application of the GABA/Gly synaptic train. Inhibitory effects (below the dashed line) are in red, excitatory effects (above this line) are in green. (B1-B5) Simulations made with a SOD-like MN, in which the soma was continuously depolarized (200 pA injection) to induce spiking discharge at ~12.5 Hz (same layout as in A). Note that in the SOD-like MN with EGABAAR = -50 mV, excitatory effects are observed when the GABA/Gly synaptic frequency is below 50 Hz (B3). This effect is not observed when EGABAAR = -60 mV (B4) and is almost absent in the WT-like MN with EGABAAR = -50 mV.
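As an illustration of the protocol just described, here is a minimal NEURON (Python) sketch. It uses a single-compartment stand-in rather than the canonical multi-compartment morphology, and all numerical values (geometry, injected current, synaptic weight) are illustrative assumptions, not the published model parameters.

```python
from neuron import h
h.load_file("stdrun.hoc")

# Single-compartment stand-in for the MN soma (not the canonical
# multi-compartment E17.5 MN model used in the study)
soma = h.Section(name="soma")
soma.L = soma.diam = 20              # µm, illustrative
soma.insert("hh")                    # generic spiking conductances

# Tonic depolarization producing a stable background discharge
stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 100, 4000, 0.25   # ms, ms, nA (illustrative)

# GABA/Gly synapse: two-exponential kinetics, EGABAAR as reversal potential
syn = h.Exp2Syn(soma(0.5))
syn.tau1, syn.tau2, syn.e = 0.3, 25.0, -50.0       # taurise, taudecay (ms), mV

# Periodic presynaptic train; its frequency is swept to find the
# cut-off frequency that silences the ongoing discharge
ns = h.NetStim()
ns.start, ns.number, ns.noise = 2000, 1e9, 0
nc = h.NetCon(ns, syn)
nc.weight[0] = 0.004                 # µS, illustrative gGABA/Gly

spikes = h.Vector()
rec = h.NetCon(soma(0.5)._ref_v, None, sec=soma)
rec.record(spikes)

for freq in (10, 25, 50, 71, 100):
    ns.interval = 1000.0 / freq      # ms between synaptic events
    spikes.resize(0)
    h.finitialize(-70)
    h.continuerun(4000)
    rate = sum(1 for t in spikes if t > 2000) / 2.0   # Hz over the 2 s train
    print(f"{freq:>3} Hz GABA/Gly train -> firing at {rate:.1f} Hz")
```

Sweeping `ns.interval` while comparing `syn.tau2` values of 20 and 25 ms reproduces the logic of the cut-off frequency measurement, although the exact numbers depend on the assumed stand-in parameters.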
The value of EGABAAR was a key determinant of the inhibitory effect of GABA/Gly events because, in the WT-like MN as in the SOD-like MN, the efficacy of the inhibitory train was much stronger when EGABAAR was set to -60 mV rather than -50 mV. Indeed, with EGABAAR = -60 mV, a 15 Hz GABA/Gly event train totally blocked the discharge in both MN types (Figure 6A4-B4), with little or no difference occurring between the two (taudecay of 20 and 25 ms) event/firing rate curves. By contrast, the corresponding curves differed significantly when EGABAAR was set to -50 mV (Figure 6, compare A3 with B3). Significantly, the morphology of the MNs also played a role in the inhibitory effect of the 71 Hz GABA/Gly train on their activity with a taudecay set to 20 ms: the inhibition was stronger in the WT-like MN (discharge reduction from 12.5 Hz to 7.33 Hz, Figure 6A3) than in the SOD-like MN (reduction from 12.5 Hz to only 9.83 Hz, Figure 6B3). The effect of morphology was also evident in the difference (frequency shift) between the two curves (taudecay of 20 and 25 ms) being more pronounced in the SOD-like (17 Hz) than in the WT-like MN (14 Hz). A final influence of MN morphology was observed for GABA/Gly synaptic frequencies below 50 Hz: in the SOD-like MN with EGABAAR = -50 mV, the GABA/Gly synaptic trains produced excitatory effects (Figure 6B3), although these effects were much smaller (if present at all) in the WT-like MN (Figure 6A3). This excitatory action of GABA/Gly synapses was not observed when EGABAAR = -60 mV (Figure 6A4-B4). We chose a firing frequency of 12.5 Hz for the model MNs since this corresponded to the mean value measured during locomotor-like activity in E17 WT embryos (Figure 6-figure supplement 2A-B). Together, our results from computer simulations thus indicate that an increase of taudecay from 20 to 25 ms may constitute a partial compensatory mechanism contributing to reinforcing the inhibitory strength of GABA/Gly synaptic events, which is dramatically reduced in SOD-type MNs mainly because of their more depolarized EGABAAR (-50 mV instead of -60 mV).

Effect of slower decay time of GABA/Gly synaptic currents on locomotor rhythm

In a final step, we wished to assess the functional consequence of increasing taudecay during ongoing locomotor activity. To this end, a simplified computer simulation of two half-centers driving antagonistic MNs was employed (Figure 7A). Pacemaker activity was obtained using a simplified model (see Appendix 1) based on previous physiological observations of the fundamental role played by INaP and IKCa in pacemaker activity (Tazerart et al., 2008). Interestingly, increasing taudecay slowed down the locomotor rhythm when EGABAAR was set to -60 mV (the WT MN value) and gGABA/Gly was set to 0.04 µS (Figure 7A1,A2). This slow-down of the locomotor rhythm was also observed when gGABA/Gly was increased to 0.2 µS with EGABAAR = -60 mV (Figure 7B1). When EGABAAR was set to -50 mV with a gGABA/Gly of 0.2 µS, increasing taudecay still slowed down the locomotor rhythm, but the effect was much smaller (Figure 7B2). This result was in agreement with our findings that the pharmacologically evoked locomotor rhythm in SOD SCs was slower than in WT SCs (Figure 3B).
Again, although this slight change in the locomotor rhythm observed in SOD SCs likely has no physiological consequences, it reveals one side effect of increasing the taudecay of GABA/Gly synaptic events.

Discussion

In the present study we show that fetal E17.5 SOD1 G93A MNs express an impairment of chloride homeostasis that leads to a more depolarized reversal potential for GABAAR (EGABAAR), indicating that a very early inhibitory dysfunction may initiate the pathogenesis in ALS MNs, as hypothesized by others (van Zundert et al., 2012; Clark et al., 2015). This could lead to a less efficient inhibitory input to MNs (Branchereau et al., 2016), which in turn would be expected to affect locomotor coordination. However, this was not observed here, likely due to a compensatory prolongation of inhibitory synaptic events revealed in our study. Moreover, we found that the passive properties of E17.5 MNs (Rin and Cm) complied with previous observations (Martin et al., 2013), in that Rin increased and Cm decreased in SOD MNs (Table 1), in accordance with a morphometric alteration (shorter dendritic length without changes at the soma level). The decreased capacitance and membrane conductance were therefore not related to a smaller size of E17.5 SOD mouse MNs. These findings therefore support the conclusion that fetal E17.5 SOD MNs are also hyperexcitable. The following will consider possible compensatory mechanisms that counteract the reduction in effective inhibitory input and the hyperexcitability of SOD MNs.

The frequency and amplitude of inhibitory synaptic inputs are only weakly affected in fetal SOD MNs

In our study we show that the frequency of inhibitory synaptic inputs is reduced in SOD MNs. This conclusion was supported by both physiological and anatomical observations. First, SOD MNs displayed less frequent mIPSCs (~6%) (Figure 4A2), which correlates with the finding that VIAAT/synaptophysin co-localization was diminished (~23%) in these mice (Figure 5B3). Second, the amplitudes of the mIPSCs were slightly smaller in SOD1 MNs than in WT, especially for large events (Figure 4A3), and sIPSC events also exhibited this tendency (Figure 4B3). These main features of SOD IPSCs compared to WT are summarized in the schematic representation of Figure 8A1-A2. The reduced degree of synaptic input to mouse prenatal SOD MNs assessed by VIAAT/synaptophysin staining is reminiscent of anatomical data from adult human ALS patients, in which a reduction of synaptophysin was also identified (Ikemoto et al., 2002). It cannot be explained by a reduced SOD MN size (i.e., smaller cells can host fewer synapses), because we have shown in a previous article that the soma perimeters and soma surfaces of WT MNs and SOD MNs are comparable (Martin et al., 2013). Even though the reduced degree of synaptic input might reflect a developmental delay in the maturation of SOD SCs compared to WT, data from the literature seem to indicate that this reduction persists with time. In fact, MNs cultured from ALS patients were shown to progressively lose their synaptic activity (Devlin et al., 2015), as was also shown in pre-symptomatic (P60-P120) adult SOD mice (Zang et al., 2005).
The mixed GABA/glycine IPSCs recorded in our study are likely to reflect a major contribution of GlyRs because of their fast taudecay (Gao et al., 2001; Muller et al., 2006). Therefore, the IPSCs that were slightly reduced in amplitude in our recordings from SOD MNs likely represent glycinergic events, whereas the amplitude of GABAergic IPSCs remained unaffected. This hypothesis is supported by the fact that the ratio gGABAAR/Cm measured in our perforated patch-clamp recordings did not differ between SOD and WT MNs (Table 1). Interestingly, a reduction in the amplitude of glycinergic synaptic events, but not of GABAergic events, was described in SOD1 G93A lumbar MNs maintained in culture (Chang and Martin, 2011). Less frequent and smaller-amplitude IPSCs convey a reduced inhibitory drive onto prenatal SOD1 MNs. Measurements of IPSC amplitudes derived from CsCl medium-based whole-cell recordings in which EGABAAR approached 0 mV. Therefore, WT and SOD MNs shared the same artificial driving force for chloride, and so the observed reduction of IPSC amplitude in SOD MNs was likely related to a change in post-synaptic GlyRs, as described in cultured SOD1 G93A MNs (Chang and Martin, 2011).

A prolongation of inhibitory synaptic events in SOD MNs as a compensatory mechanism

Our data revealed a noticeable increase of taudecay in both SOD1 mIPSCs and sIPSCs. We found a mean taudecay of around 20-25 ms, which is in the same range as that previously described for mIPSCs during perinatal stages in rat lumbar MNs (Gao et al., 1998). Because part of our study did not differentiate unequivocally between GABAAR-mediated and GlyR-mediated mIPSCs, we pharmacologically dissected GABAAR-mediated versus GlyR-mediated mIPSCs. This confirmed that the mean taudecay lies between the taudecay of pure GABAAR-mediated mIPSCs and that of pure GlyR-mediated mIPSCs, and that both types of mIPSCs exhibit a longer relaxation. Interestingly, GlyRs switch from immature homomeric α2 GlyRs, with slow decay kinetics of glycinergic mIPSCs, to heteromeric GlyRs that include α1, α3 and β subunits, with fast kinetics (Legendre, 2001; Raltschev et al., 2016). E17.5 SOD1 MNs may therefore exhibit a delayed development with a preponderance of homomeric α2 GlyRs. However, this is unlikely to be the case, since most electrical properties measured from birth to postnatal day 12 in SOD1 G93A MNs instead show an accelerated rate of maturation (Quinlan et al., 2011). During the perinatal development of rat spinal MNs, GlyR-mediated mIPSCs become dominant over GABAAR-mediated mIPSCs (Gao et al., 2001). We also found a majority of pure GlyR-mediated mIPSCs, and the same ratio of pure GABAAR-mediated to pure GlyR-mediated mIPSCs in SOD and WT MNs, indicating a similar prenatal developmental stage for both genotypes. Considering that SOD SCs do not express a developmental delay compared to WT SCs from the same littermate, we questioned the origin of the prolonged taudecay in GABAAR- and GlyR-mediated mIPSCs, sIPSCs and eIPSCs. It has been demonstrated that a high [Cl-]i slows down the decay of glycine- and GABA-mediated inhibitory synaptic events (Pitt et al., 2008; Houston et al., 2009), reducing the firing probability of Purkinje neurons (Houston et al., 2009).
Interestingly, the structural basis of this effect was described as a direct effect of chloride ions acting in the pore of glycine and GABA channels (Moroni et al., 2011). Therefore, the elevated [Cl-]i likely accounts for the prolonged relaxation of GABA/Gly IPSCs in SOD MNs. The increase in [Cl-]i affects GABAARs and GlyRs and thus IPSC duration (Pitt et al., 2008; Houston et al., 2009; Moroni et al., 2011). In P10 rat MNs, a change from 10 mM to 131 mM [Cl-]i led to an increase in taudecay from 9.2 ms to 19.8 ms (Pitt et al., 2008). In P12 Purkinje cells, a change from 10 mM to 150 mM [Cl-]i led to an increase in taudecay from 14 ms to 19.8 ms (Houston et al., 2009). Finally, in HEK-293 cells, a change from 10 mM [Cl-]i to 30 mM and then to 131 mM led to an increase in taudecay from 9.6 ms to 12 ms and then to 22 ms (Pitt et al., 2008). Even though our data highlight a more prolonged taudecay in SOD GABAAR/Gly IPSCs, we cannot directly link taudecay values to physiological [Cl-]i values, because the taudecay values were collected from CsCl recordings in which [Cl-]i was set as elevated.

Impaired KCC2 expression or function is involved in a number of neurodevelopmental and neurological disorders (Doyon et al., 2016). We provide evidence that the altered EGABA in the E17.5 SOD1 G93A mouse embryo likely arises from a down-regulation of KCC2 in the lumbar cord and lumbar MNs. Again, this down-regulation of KCC2 is unlikely to be linked to a decrease in the size of the SOD MNs (i.e., smaller neurons having less cytosol volume and less membrane surface to host KCC2), because the soma perimeters and surfaces of E17.5 WT MNs and SOD MNs were found to be similar (Martin et al., 2013). KCC2 protein was quantified at the level of the soma and proximal dendrites, which are not affected in SOD1 MNs. KCC2 efficacy and EGABAAR were also assessed at the soma level. SOD MNs therefore do not have a smaller volume in which to regulate [Cl-]i, and the lower efficacy of KCC2 in SOD MNs is not related to a change in MN size. In hippocampal pyramidal neurons, NKCC1 up-regulation has been proposed as the mechanism that drives the transient perinatal GABA switch in rodent models of autism (Tyzio et al., 2014). However, our data show that NKCC1 is unaltered in the E17.5 lumbar SC. KCC2 down-regulation in the hippocampus delays the postnatal GABA shift in the oxytocin receptor knockout mouse model of autism (Leonzino et al., 2016). More recent evidence shows that KCC2 down-regulation explains long-lasting effects of adolescent nicotine exposure (Thomas et al., 2018). In ALS-vulnerable motoneurons of the adult SOD1 G93A mouse lumbar cord, a strong decrease in KCC2 mRNA and protein expression levels was reported (Fuchs et al., 2010). Thus, KCC2 is a likely molecular substrate underlying the perinatal alteration of chloride homeostasis in the lumbar spinal cord.

It may seem surprising to observe a more prolonged taudecay in SOD MNs when [Cl-]i is imposed (ECl of ~0 mV or -45 mV). Indeed, when we imposed a similar [Cl-]i in SOD and WT MNs, we expected a similar taudecay in SOD and WT.
However, since our data clearly indicate a deficit in KCC2 efficacy in SOD MNs, the imposed [Cl-]i may have been counteracted by the KCC2 protein more effectively in WT MNs than in SOD MNs. Therefore, we observed a longer relaxation of GABAAR-/GlyR-mediated mIPSCs, sIPSCs and VLF-evoked IPSCs in SOD MNs compared to WT. Our first series of computer simulations demonstrated that increasing taudecay from 20 ms to 25 ms clearly potentiated the efficacy of GABA/glycine inhibition. With a taudecay of 20 ms, a 20 Hz barrage of inhibitory inputs totally blocks WT-like MN firing activity (12 Hz, i.e. locomotor firing frequency, with ECl = -60 mV). In contrast, SOD-like MNs (with ECl = -50 mV) required a ~90 Hz inhibitory barrage to suppress their firing. Increasing taudecay to 25 ms significantly reduced the cut-off frequency to ~70 Hz in SOD-like MNs. These blocking frequencies are in agreement with the GABA/glycinergic event frequencies encountered in lumbar spinal MNs during fictive locomotion (>100 Hz) (Figure 6-figure supplement 2C,D). Increasing taudecay from 20 ms to 25 ms also leads to a higher capacity for shunting-effect summation (Branchereau et al., 2016). This is clearly visible in Figure 8B, in which the increased shunting effect tends to maintain Em towards EGABAAR and prevents spike occurrence (compare green and blue traces).

Our second series of computer simulations demonstrated that increasing taudecay slowed down the locomotor rhythm (Figure 7). This is in agreement with our findings that the pharmacologically evoked locomotor rhythm in SOD SCs was slower than in WT SCs (Figure 3B). The effect of IPSC taudecay on the locomotor rhythm was only observed when EGABAAR was set to -60 mV in the interneurons of the central pattern generator (CPG, half-centers) (Figure 7). To our knowledge, however, no EGABAAR and gGABA/Gly measurements have been obtained in the CPG interneurons of ALS mouse models. In their study of 2-3 week in vitro organotypic slice cultures of the spinal cord (from E12-E13), Medelin et al. (2016) suggested that the Cl- reversal potential is similar in SOD1 G93A and WT ventral interneurons, around -60 mV. However, neither the excitatory versus inhibitory nature of the recorded interneurons, nor their relationship with the CPG circuitry, was reported in their study. Our computer model of the locomotor rhythm was based on half-center interneurons without considering the output MNs. It may therefore be argued that the changes in IPSC taudecay observed in spinal MNs have nothing to do with the spinal locomotor rhythm generator. However, a recent study clearly demonstrated that MNs participate in rhythm generation in the neonatal mouse (Falgairolle et al., 2017). The increase in taudecay observed in SOD MNs could therefore impinge on the locomotor rhythm frequency.

Ethical considerations and mouse model

All procedures were carried out in accordance with the local ethics committee of the University of Bordeaux and European Committee Council directives. All efforts were made to minimize animal suffering and reduce the number of animals used.
B6SJL-TgN(SOD1-G93A)1Gur/J mice expressing the human G93A Cu/Zn superoxide dismutase (SOD1) mutation (Gly93→Ala substitution) were obtained from the Jackson Laboratory (https://www.jax.org/strain/002726). Heterozygous B6SJL-TgN(SOD1-G93A)1Gur/J mice (named SOD in this report) were maintained by crossing heterozygous transgene-positive male mice with B6SJL F1 hybrid females (Janvier Labs, France). Gestation in SOD lasted ~18.5 days, embryonic day 0.5 (E0.5) being defined as the day after the mating night.

Materials and methods

Experiments were performed on E17.5 fetuses, i.e., collected one day before their birth.

Dissection and isolation of the embryonic spinal cord (SC)

Pregnant mice were sacrificed by cervical dislocation. A laparotomy was performed and the fetuses were removed after cutting the uterine muscle. Fetuses were removed from their individual embryonic sacs and transferred into cooled artificial cerebrospinal fluid (aCSF) oxygenated with a 95% O2 and 5% CO2 mixture. The composition of the aCSF was (in mM): 114.5 NaCl, 3 KCl, 2 CaCl2-2H2O, 1 MgCl2-6H2O, 25 NaHCO3, 1 NaH2PO4-H2O, 25 D-glucose; pH 7.4 and osmolarity 307 mosmol/kg H2O. Only fetuses that displayed active movements were chosen for experiments. The selected fetuses were then decapitated and the SC preparation was dissected out. Their tails were preserved for subsequent genotyping. An incision was performed on the ventral side of the lumbar cord (between the midline and the ventral roots) at the level of the lumbar 4-5 (L4-L5) ventral roots, which innervate the extensor muscles, in order to remove the meninges. This exposed the MNs and made them accessible to the patch-clamp electrode (generally 1-2 MNs were recorded in each SC). The SC preparation was placed in a recording chamber and continuously superfused (~1.5 mL•min-1) with oxygenated aCSF. All experiments were carried out at constant temperature (30˚C). Experiments were performed blind to the genotype of the animals. Fetuses were genotyped at the Genotyping Platform of the Magendie Neurocentre (Bordeaux). Genotyping was performed by standard PCR from mouse tail samples, using established primers and the protocol provided by the Jackson Laboratory.

Protein extraction and western blotting (WB)

WB analysis was performed on whole E17.5 lumbar spinal cords. For protein extraction, 15 WT and 15 SOD mice from four different litters were used. The 15 lumbar spinal cords of each genotype were divided into three groups of five lumbar SCs, crushed (micro-potter, PP750ACD), and homogenized in mammalian lysis buffer (Qproteome Mammalian Protein Prep Kit, Qiagen) with a phosphatase inhibitor cocktail (Millipore Sigma). The homogenate was incubated (5 min at 4˚C), centrifuged (10 min at 4˚C, 14,000 RCF) to recover the supernatant from debris, and stored for 48 hr at -80˚C. The total protein concentration in the supernatant solution was determined by the DC protein assay (Bio-Rad) using an iMark microplate reader (Bio-Rad). Briefly, a colorimetric technique was used to measure a color change proportional to the total protein concentration. Based on the determined concentration of total protein, the volume of supernatant solution containing a desired amount of total protein could be calculated. Samples were then stored at -80˚C for future use.
WB was performed by denaturing the samples at 100˚C for 5 min and loading 20 µg of protein per sample (3 wells of WT and 3 wells of SOD samples) in Mini-PROTEAN TGX Stain-Free gels (Bio-Rad) with migration buffer (2.5 mM Tris, 19.2 mM glycine, 0.01% SDS, prepared from 10X Tris/Glycine/SDS, Bio-Rad). Molecular weight markers (Precision Plus Protein All Blue Standards, Bio-Rad) were used in each individual gel. Electrophoresis of the samples and MW markers was powered by an EPS 600 power supply (Pharmacia Biotech, Sweden) at 250 V. In order to visualize the total proteins (stain-free), the gel was activated in a ChemiDoc MP imaging system (Bio-Rad). The transfer from the gel to a mini format 0.2 µm nitrocellulose membrane pack (Bio-Rad) was made using a Trans-Blot Turbo Transfer System (Bio-Rad). Once the transfer was confirmed with the ChemiDoc MP imaging system, the membrane was blocked with 5% milk in Tris-buffered saline (Millipore Sigma) containing 2% Tween-20 (Millipore Sigma) (TBST) for 1 hr on an agitator (Heidolph UNIMAX 1010, Germany), and then incubated with primary antibody at 4˚C overnight. The primary antibodies used were rabbit polyclonal anti-KCC2 (1:1000, reference Millipore 07-432, Millipore Sigma) or mouse monoclonal anti-NKCC1 (1:400, reference T4, Developmental Studies Hybridoma Bank, The University of Iowa, USA). The next day, the membrane was washed 3-5 times in TBST and incubated with a goat anti-rabbit or anti-mouse horseradish peroxidase (HRP)-conjugated secondary antibody diluted 1:20,000 (Euromedex or Bio-Rad) for 1.5 hr at room temperature. After 3-5 rinses in TBST, total proteins on the membrane were visualized using the ChemiDoc MP imaging system (Bio-Rad). The membrane was then incubated for 5 min in Clarity Western ECL substrate (Bio-Rad) and the KCC2 or NKCC1 protein was visualized using the ChemiDoc. Four rounds of WB were run from the same deposited WT and SOD spinal cord samples. The ~140 kDa KCC2 band and the ~150 kDa NKCC1 band were analyzed. The stain-free staining of total loaded protein was used as the normalization control to quantify the KCC2 or NKCC1 bands. Quantification was performed using Image Lab (Bio-Rad). The data presented are normalized to the mean values of the WT samples.

Immunohistochemistry

Immunohistochemistry was performed on frontal sections of lumbar SCs prepared as follows. The mice used were offspring of female Hb9-eGFP mice crossbred with SOD male mice. These mice express GFP in the dendrites and somata of spinal MNs. The lumbar SC samples of SOD and WT embryos at E17.5 were fixed in 4% paraformaldehyde (PFA) for 2 hr at room temperature. They were rinsed three times with 0.1 M phosphate-buffered saline (PBS) and then cryoprotected in 15% sucrose for 24 hr, followed by 30% sucrose for another 24 hr. After embedding in Tissue-Tek (O.C.T. Compound, Sakura Finetek) and freezing, the samples were sliced using a Leica 3050 S cryostat. Sections (20 µm thickness) were affixed to gelatinized slides and preserved at -25˚C until use. Each slide was rinsed three times with 0.1 M PBS, blocked with a medium containing 2% bovine serum albumin (BSA), then incubated for 48 hr with primary antibodies prepared in PBST (1% Triton, 0.2% BSA). We processed the sections with an anti-synaptophysin antibody (1:500, mouse monoclonal, clone SVP38, Millipore Sigma) together with a rabbit polyclonal antibody directed against the vesicular inhibitory amino acid transporter (1:1000, VIAAT; antibody provided by B. Gasnier, Paris Descartes University).
VIAAT reflects the synaptic release of the GABA and glycine neurotransmitters (Dumoulin et al., 1999). The rabbit polyclonal anti-FoxP1 antibody (1:500, AB2277, Millipore) was also used to assess the LMC identity of recorded MNs (Figure 1-figure supplement 1) (Dasen et al., 2008). For KCC2 staining, we used the rabbit polyclonal anti-KCC2 antibody (1:400, reference Millipore 07-432, Millipore Sigma). After three rinses, slides were incubated for 2 hr at room temperature with a goat anti-rabbit Alexa Fluor 546 secondary antibody (A-11035) and/or a goat anti-mouse Alexa Fluor 647 secondary antibody (A-21235) (1:500, Invitrogen, France) and then rinsed with 0.1 M PBS. After rinsing, slides were mounted with an anti-fade reagent (Fluoromount, Electron Microscopy Sciences) and stored at 4˚C in the dark until confocal observation.

Confocal microscopy

Samples were visualized either in the laboratory with a BX51 Olympus Fluoview 500 confocal microscope or at the Bordeaux Imaging Center (BIC) with a Leica SP5. Serial optical sections (0.2 µm thickness) were obtained using a ×60 oil-immersion objective. Lasers were selected according to the wavelength required for visualization. The different transporters and proteins were visualized as spot aggregates. Spots were detected using the spot detector plugin in Icy version 1.8.1.0, a quantitative image analysis program (Institut Pasteur - CNRS UMR 3691). Spots were then quantified, and the colocalization of synaptophysin with VIAAT was assessed in the marginal zone edging the MN soma location. The percentage of co-localization was calculated according to the surface of the synaptophysin-positive area. The global and membrane KCC2 staining densities were assessed using ImageJ. For the KCC2 membrane staining, a specific macro allowing the periphery of MNs to be delineated and the KCC2 staining density to be quantified was developed by Sébastien Marais from the BIC. Briefly, the contour of Hb9-eGFP MNs was manually outlined in order to build an area (five pixels on each side of the outline), in which KCC2 punctiform profiles were automatically detected. The density of KCC2 puncta was then calculated as the number of puncta relative to the outline area. Values are expressed in arbitrary units (AU).
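The membrane-density measurement just described can be sketched as follows. This is a minimal Python/scikit-image illustration of the idea (a band of five pixels on each side of the outline, puncta counted per unit band area), not the actual Icy/ImageJ macro; the Otsu threshold and the synthetic data are assumptions.

```python
import numpy as np
from skimage import filters, measure, morphology

def kcc2_membrane_density(kcc2_img, mn_mask, band_px=5):
    """Count KCC2-positive puncta inside a band of +/- band_px pixels
    around the motoneuron outline, normalized by the band area (AU)."""
    outer = morphology.binary_dilation(mn_mask, morphology.disk(band_px))
    inner = morphology.binary_erosion(mn_mask, morphology.disk(band_px))
    band = outer & ~inner                 # band on each side of the contour
    # Detect punctiform profiles by thresholding within the band
    thresh = filters.threshold_otsu(kcc2_img[band])
    puncta = measure.label((kcc2_img > thresh) & band)
    return puncta.max() / band.sum()      # puncta per band pixel

# Illustrative synthetic data
rng = np.random.default_rng(1)
img = rng.random((128, 128))              # stand-in KCC2 channel
mask = np.zeros((128, 128), bool)
mask[40:90, 40:90] = True                 # stand-in Hb9-eGFP soma outline
print(kcc2_membrane_density(img, mask))
```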
Electrophysiological procedures and data analysis

Patch-clamp electrodes were constructed from thin-walled, single-filamented borosilicate glass (1.5 mm outer diameter, Harvard Apparatus, Les Ulis, France) using a two-stage vertical microelectrode puller (PP-830, Narishige, Tokyo, Japan). Patch electrode resistances ranged from 3 to 5 MΩ. All recordings were made with an Axon Multiclamp 700B amplifier (Molecular Devices, Sunnyvale, CA, USA). Data were low-pass filtered (2 kHz) and acquired at 20 kHz on a computer via an analog-to-digital converter (Digidata 1322A, Molecular Devices) and data acquisition software (Clampex 10.3, Molecular Devices). Motorized micromanipulators (Luigs and Neumann, Ratingen, Germany) were used to position the patch-clamp electrode on a visually identified MN using a CCD video camera (Zeiss Axiocam MR). Recorded MNs were located in the lateral column (Figure 1-figure supplement 1) and were identified by their pear-shaped cell bodies. Values of Rin and Cm further confirmed the motoneuron identity of the recorded neurons (Martin et al., 2013).

For the KCC2 efficacy assessment, the aCSF contained (in mM): 127 NaCl, 6 KCl, 2 CaCl2-2H2O, 1 MgCl2-6H2O, 20 HEPES, 11 D-glucose; osmolarity 306 mosmol/kg H2O, oxygenated with 100% O2, pH adjusted to 7.4 with NaOH. The patch-clamp recording pipette was filled with (in mM): 120 K-gluconate, 13 KCl, 1 CaCl2-2H2O, 10 HEPES, 5 CsCl, 1 TEA-Cl, 10 verapamil, 2 ATP-Mg2+, 10 EGTA; pH adjusted to 7.4 with KOH. This led to a theoretical ECl of -49 mV. In order to challenge KCC2 efficacy, MNs were voltage-clamped at -20 mV (above ECl) while isoguvacine was applied for 40 s, inducing a massive influx of chloride ions. The actual ECl was measured by ramping the holding voltage from -100 mV to 10 mV using a 1 s ramp protocol (Figure 2C1). Subtracting the control ramp current from the ramp current obtained in the presence of isoguvacine allowed ECl to be determined (Figure 2C1).

For recordings of IPSCs, patch pipettes were filled with a cesium chloride (CsCl) intracellular medium, the composition of which was (in mM): 130 CsCl, 4 MgCl2-6H2O, 10 HEPES, 10 EGTA, 4 ATP-2Na; 302 mosmol/kg H2O. The pH was adjusted to 7.4 with CsOH. The intracellular CsCl medium led to an EGABAAR that approached 0 mV (2.89 mV), allowing a clear visualization of Cl--dependent IPSCs. All synaptic events were recorded while holding the membrane potential of MNs at -70 mV. In order to assess both Glu and GABA/Gly synaptic events in individual neurons (Figure 6-figure supplement 2), whole-cell patch-clamp recordings were conducted in voltage-clamp mode (holding membrane potential -45 mV). The intracellular medium was composed of (in mM): 130 K-gluconate, 5 NaCl, 1 CaCl2-2H2O, 10 HEPES, 10 EGTA, and 2 Mg-ATP (284 mosmol/kg H2O), adjusted to pH 7.4 using 1 M KOH.

For EGABAAR assessment, gramicidin perforated patch-clamp recordings were conducted. For perforation, a gramicidin (Millipore Sigma) stock solution was dissolved at 10 mg ml-1 in DMSO (Millipore Sigma) and diluted to a final concentration of 10-20 µg ml-1 in an intracellular medium composed of (in mM): 130 KCl, 10 HEPES, 10 EGTA, and 2 Mg-ATP (284 mosmol/kg H2O), adjusted to pH 7.4 using 1 M KOH. Gramicidin stock and diluted solutions were prepared <1 hr before each experiment. Pipette tips were filled with ~3 µL of filtered intracellular medium and subsequently backfilled with the gramicidin solution. The GABAAR-specific agonist isoguvacine (50 µM, Bio-Techne, France) was pressure-ejected (~3 psi; 50 ms) via a puff pipette placed in the vicinity of motoneuron somata, with a PicoSpritzer II (Parker Hannifin Corporation, Fairfield, NJ, USA) driven by a programmable Master-8 Stimulator/Pulse Generator (A.M.P.I., Jerusalem, Israel). Isoguvacine applications were repeated at different membrane voltages, allowing assessment of the reversal potential of the evoked GABAAR-related currents. Measurements were corrected for liquid junction potentials (3.3 mV) calculated with the Clampex junction potential calculator. Using the Nernst equation, EGABAAR was calculated to be 1.5 mV. Thus, to confirm that the perforated-patch configuration had not ruptured during the experiment, repetitive EGABAAR tests were performed. Measurements leading to an EGABAAR close to 0 mV were considered to be whole-cell recordings, and these cells were discarded. The Clampex membrane test was used to monitor the input resistance (Rin) and membrane capacitance (Cm): a -60 mV holding membrane potential and 5 mV steps (negative and positive, 40 ms duration) were chosen.
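As an illustration of the two calculations above (the ramp-subtraction estimate of ECl and the Nernst computation), here is a minimal Python sketch. The conductance values and the synthetic I-V ramps are assumptions chosen only to reproduce the logic, and the chloride concentrations (~21 mM pipette, ~139 mM aCSF) are approximations summed from the solution recipes.

```python
import numpy as np

R, T, F, z = 8.314, 273.15 + 30, 96485.0, -1   # 30 degC recordings, Cl- valence

def nernst_ecl(cl_in_mM, cl_out_mM):
    """Nernst potential for chloride, in mV."""
    return 1000 * (R * T) / (z * F) * np.log(cl_out_mM / cl_in_mM)

def ecl_from_ramps(v, i_control, i_isoguvacine):
    """ECl = voltage at which the isoguvacine-sensitive current reverses."""
    i_diff = i_isoguvacine - i_control
    k = np.where(np.diff(np.sign(i_diff)))[0][0]
    # Linear interpolation around the zero crossing
    v0, v1, i0, i1 = v[k], v[k + 1], i_diff[k], i_diff[k + 1]
    return v0 - i0 * (v1 - v0) / (i1 - i0)

# Illustrative 1 s ramp from -100 to +10 mV with a GABAAR conductance
# reversing at -49 mV (the theoretical ECl quoted above)
v = np.linspace(-100, 10, 221)
g_leak, g_gaba, e_leak, e_cl = 5.0, 8.0, -70.0, -49.0   # nS, nS, mV, mV
i_ctrl = g_leak * (v - e_leak)
i_isog = i_ctrl + g_gaba * (v - e_cl)
print(f"measured ECl = {ecl_from_ramps(v, i_ctrl, i_isog):.1f} mV")
print(f"Nernst ECl (21 mM in / 139 mM out) = {nernst_ecl(21, 139):.1f} mV")
```

With these assumed chloride totals, the Nernst estimate lands at about -49 mV, matching the theoretical ECl quoted for the K-gluconate pipette solution.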
The reversal potential for GABAAR (EGABAAR) was calculated by a linear fit (Prism 7, GraphPad Software Inc., USA). The taurise (ms) and taudecay (ms) of IPSCs were determined using Spike2 software version 7.15 (Cambridge Electronic Design, Cambridge, England) by fitting a single exponential. In order to elicit evoked IPSCs (eIPSCs) in MNs, a concentric bipolar wire stimulating electrode (FHC, USA) was placed on the ventro-lateral funiculus (VLF) in order to activate local inhibitory fibers. Single stimulations delivered to the VLF (duration 1 ms, intensity 10-20 µA) were driven by a programmable Master-8 Stimulator/Pulse Generator (A.M.P.I., Jerusalem, Israel). For extracellular recordings of locomotor-like activity, left and right lumbar ventral roots (at the level of L3 to L5) were recorded. Raw signals were collected with a high-gain AC amplifier (ISO-DAM8A-4 Bio-amplifier System, World Precision Instruments Ltd, Stevenage, UK). Filtered (cutoff frequency: 0.3-3 kHz) raw signals were integrated off-line and analyzed using Spike2 software. Locomotor-like activity was elicited by applying a cocktail of 5-HT (10 µM), NMDA (10 µM) and DA (100 µM) (5-HT/NMDA/DA). This cocktail has previously been established (Han et al., 2007; Milan et al., 2014) to evoke stable locomotor-like activity. Two 50 s representative episodes of strongly alternating locomotor-like activity (assessed by Rayleigh's test) were analyzed in each WT and SOD experiment.

Pharmacology

In gramicidin perforated-patch experiments and in experiments performed to assess KCC2 efficacy, GABAAR responses were isolated using a cocktail of drugs containing 0.2 µM tetrodotoxin (TTX, Latoxan Laboratory, France), 4 mM kynurenic acid (Millipore Sigma), 10 µM (+)-tubocurarine (Millipore Sigma), 5 µM dihydro-β-erythroidine hydrobromide (DHβE, Bio-Techne, France), and 3 µM strychnine (Millipore Sigma), which respectively blocked voltage-dependent Na+ action potentials and glutamatergic, cholinergic, and glycinergic input to MNs. In CsCl experiments, IPSCs were isolated pharmacologically using 40 µM DL-AP5 ((2R)-amino-5-phosphonovaleric acid, Bio-Techne, France) and 20 µM CNQX (6-cyano-7-nitroquinoxaline-2,3-dione, Bio-Techne, France). mIPSCs were isolated in the presence of 0.2 µM TTX (Latoxan, France). GABA and glycine mIPSCs were isolated by adding 3 µM strychnine or 3 µM GABAzine (SR 95531 hydrobromide, Bio-Techne, France), respectively. These blockers were added to the cocktail containing 0.2 µM TTX, 4 mM kynurenic acid, 10 µM (+)-tubocurarine and 5 µM DHβE. In experiments assessing KCC2 efficacy, 10 µM bumetanide (Millipore Sigma) was applied to block NKCC1 and 10 µM VU0240551 (Bio-Techne, France) to block KCC2. Serotonin (5-HT, 10 µM), dopamine (DA, 100 µM) and N-methyl-D-aspartic acid (NMDA, 10 µM) were from Millipore Sigma.

Membrane properties and data analysis

Data analysis was performed using Clampfit 10.6 (Axon Instruments), MiniAnalysis 6.0 or Spike2. For synaptic event analysis, 100-200 s recording samples were selected from each MN. All selected synaptic events were concatenated for SOD versus WT MNs. After analysis, the information obtained included mean values for event amplitude (pA), taurise (ms) and taudecay (ms), as well as the inter-event interval (IEI) (ms). The MN capacitance (Cm), membrane input resistance (Rin) and access resistance (Ra) were recorded immediately after establishing the whole-cell patch-clamp configuration. The membrane voltage was held at -70 mV, as this corresponds to the resting membrane potential of mouse MNs at E17.5 (Delpy et al., 2008). As a result, all events appeared as downward events.
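To make the single-exponential decay fit concrete, here is a minimal Python/SciPy sketch on a synthetic IPSC; the event shape, noise level and starting values are illustrative assumptions, and the R > 0.98 acceptance criterion used above is not implemented.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, amp, tau):
    """Single-exponential relaxation from the IPSC peak."""
    return amp * np.exp(-t / tau)

def fit_taudecay(t_ms, current_pA):
    """Fit the decay phase (from the peak onward) of one averaged IPSC."""
    peak = np.argmax(np.abs(current_pA))
    t_fit = t_ms[peak:] - t_ms[peak]
    (amp, tau), _ = curve_fit(mono_exp, t_fit, current_pA[peak:],
                              p0=(current_pA[peak], 20.0))
    return amp, tau

# Illustrative downward IPSC: taurise 1.2 ms, taudecay 22 ms
t = np.arange(0, 150, 0.05)                   # ms, 20 kHz sampling
ipsc = -60 * (np.exp(-t / 22) - np.exp(-t / 1.2))
ipsc += np.random.default_rng(2).normal(0, 1, t.size)   # recording noise
amp, tau = fit_taudecay(t, ipsc)
print(f"fitted taudecay = {tau:.1f} ms")
```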
Computer simulations

GABA/glycine synaptic events in E17.5 MNs were simulated using a multi-compartment neuron model elaborated with the NEURON 7.3 program (Hines and Carnevale, 1997). Two simulated neurons were constructed: an E17.5 WT-like MN and an E17.5 SOD-like MN (Figure 6-figure supplement 1). These canonical MNs, designed from the average topology of real E17.5 MNs, were built on models used in a previous study (Martin et al., 2013). Both MNs were virtually identical, with similar channels and morphologies, except for the terminal dendritic segments of the SOD-like MN, which were 40% shorter than the WT terminal segments (Martin et al., 2013). In each section (dendrites and axon), the number of segments was an odd number calculated according to the d-lambda rule (Hines and Carnevale, 2001; Carnevale and Hines, 2006). The following conductances were used: a passive leakage current (Ileak), a transient K+ channel (IA), a voltage-dependent calcium-activated potassium channel (IC), also called IK(Ca) (McLarnon et al., 1995; Gao and Ziskind-Conhaim, 1998), and a high-threshold calcium current (IL) (Walton and Fulton, 1986). Ileak was simulated in each MN section (Ileak = (Eleak - E) Gleak, with Eleak = -73 mV and Gleak = 1/Rm). Rm, the specific membrane resistance, was set to 21,200 Ω.cm2 in order to obtain an input resistance of 120 MΩ in the WT MN. IA was present in the axon initial segment (AIS), with GIA = 0.0033 S.cm-2; IC was present in the soma, with GIC = 0.0025 S.cm-2; IL was present in the soma, with GIL = 8.10-5 S.cm-2. Each of the axon segments (n = 750) was equipped with Na and K Hodgkin-Huxley (HH) channels (GNa = 0.012 S.cm-2 and GK = 0.0036 S.cm-2, respectively) used to generate spikes. The densities of Na and K channels in the initial segment (GNa = 0.5 S.cm-2 and GK = 0.15 S.cm-2) were adjusted in order to obtain a spike threshold of -48 mV for the WT MN (for more details see Appendix 1). In addition, calcium dynamics (Blaustein, 1988) were added in the soma to reproduce calcium accumulation, diffusion and pumping (for more details see Destexhe et al., 1993). The parameters used in the intracellular calcium dynamics were: depth = 0.17 µm, taur = 1e10 ms, Cainf = 0.0002 mM, Kt = 0.00025 mM.ms-1, Kd = 0.0001 mM, Cai = 2.4e-4 mM and Cao = 3 mM. Inhibitory and excitatory synaptic inputs were inserted on the somatic MN compartment using two-exponential equations (one for the rising phase and the other for the decay phase). Depolarizing GABAergic/glycinergic post-synaptic current kinetics were set with taurise = 0.3 ms and taudecay = 15/20 ms (for WT/SOD, respectively). Excitatory synaptic current kinetics were set with taurise = 0.1 ms and taudecay = 10 ms. The equations describing channel properties and GABAAR synaptic activation are summarized in Appendix 1.
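The leak and two-exponential synaptic current equations above can be evaluated directly. The following Python sketch uses only the parameters quoted in the text; the maximal synaptic conductance (gmax) is an illustrative assumption.

```python
import numpy as np

# Passive leak in each section: Ileak = (Eleak - E) * Gleak, Gleak = 1/Rm
Rm = 21200.0            # specific membrane resistance, ohm.cm2 (WT model)
Eleak = -73.0           # mV
def i_leak(E_mV):
    """Leak current density (sign convention as in the text)."""
    return (Eleak - E_mV) / Rm

# Two-exponential GABA/Gly synaptic conductance (taurise, taudecay),
# normalized so that its peak equals gmax
def g_syn(t_ms, gmax_uS, tau_rise=0.3, tau_decay=20.0):
    t = np.asarray(t_ms, dtype=float)
    t_peak = (tau_rise * tau_decay / (tau_decay - tau_rise)
              * np.log(tau_decay / tau_rise))      # time of the conductance peak
    norm = np.exp(-t_peak / tau_decay) - np.exp(-t_peak / tau_rise)
    return gmax_uS * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise)) / norm

def i_syn(t_ms, V_mV, E_gaba=-50.0, gmax_uS=0.004):
    """Synaptic current: Isyn = gsyn * (V - EGABAAR); µS * mV = nA."""
    return g_syn(t_ms, gmax_uS) * (V_mV - E_gaba)

t = np.arange(0, 120, 0.1)   # ms
print(f"peak gsyn = {g_syn(t, 0.004).max():.4f} uS, "
      f"peak Isyn at -70 mV = {i_syn(t, -70.0).min():.3f} nA")
```

Lengthening tau_decay in this expression widens the conductance time course without changing its peak, which is the mechanism behind the stronger shunting summation discussed above.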
Statistical analysis

GraphPad Prism 7 software was used to analyze all the data. The results are presented as means ± standard error of the mean (SEM) unless otherwise specified; n is the number of MNs used in the analysis and N the number of fetuses. Significance was determined as p<0.05 (*), p<0.01 (**), p<0.001 (***) or p<0.0001 (****). The difference between cumulative frequencies was analyzed using the Kolmogorov-Smirnov (K-S) test. The statistical differences between two data sets were assessed with the Mann-Whitney test for non-parametric data.

In familial forms of ALS, the disease is transmitted in families, usually with dominant inheritance traits (Taylor et al., 2016). The SOD1 G93A transgenic mouse line is the most widely studied mouse model of ALS, with mice carrying the mutated human SOD1 gene (Gly93→Ala substitution) (Gurney et al., 1994) and developing many pathological hallmarks and clinical symptoms similar to those of ALS patients (Tu et al., 1996). Using this transgenic ALS mouse model, we previously found that spinal SOD1 G93A MNs are hyperexcitable at embryonic day (E) 17.5 because of a shorter dendritic tree leading to an elevated input resistance (Martin et al., 2013). Hyperexcitability of cultured mouse E12-E14 SOD1 G93A MNs (Kuo et al., 2004) and of MNs derived from ALS patients carrying mutations in the SOD1, C9orf72, FUS and TARDBP genes (Wainger et al., 2014; Devlin et al., 2015; Burley et al.) has also been described, highlighting motoneuronal hyperexcitability as a possible contributor to the pathology of ALS.

In SOD1 G93A mice, E17.5 is the day before birth. At this prenatal stage, the locomotor networks become functional and it is possible to evoke alternating locomotor-like activities from ex-vivo brainstem-spinal cord preparations (Branchereau et al., 2019), indicating that inhibitory commissural connections are likely functional (Branchereau et al., 2000a). However, at E17.5, GABA and glycine effects on lumbar spinal MNs are not hyperpolarizing but evoke depolarizing GABAergic/glycinergic post-synaptic potentials (dGPSPs), which have mixed inhibitory (shunting) and excitatory (depolarizing) components (Branchereau et al., 2016). Converging lines of evidence highlight alterations of the inhibitory system (Ramirez-Jarquin et al., 2014; Clark et al., 2015b; Bos et al., 2018), with a perinatal time-window of vulnerability (Kiernan et al.; Eisen et al., 2014a), among the numerous potential causative links in ALS-associated dysfunction and degeneration. In this line of evidence, we found that the inhibitory system is altered as early as E17.5 in SOD1 G93A MNs, which exhibit a reversal equilibrium potential for chloride ions (ECl) 10 mV more depolarized relative to littermate wild-type (WT) MNs (ECl being -50 mV and -60 mV in SOD and WT, respectively) (Branchereau et al., 2019).
At high-frequency stimulation, dGPSPs are inhibitory because of gCl summation (Branchereau et al., 2016), but evidence from a computer modelling approach performed on E17.5 MN models indicated that low-frequency (<50 Hz) dGPSPs could be excitatory in SOD-like MNs but not in WT-like MNs with a similar ECl (Branchereau et al., 2019). However, we did not study the effect of dGPSPs on the repetitive firing ability of real E17.5 lumbar MNs, which drive the contraction of muscle fibers and constitute the final pathway of the motor system (Manuel and Zytnicki, 2011). Are these results the same under physiological conditions? Does the same mechanism operate in a real MN? In addition, the relationship between morphological alterations and the effects of dGPSPs on real E17.5 lumbar MNs had not been studied.

We address these issues here in three ways. First, we performed ex-vivo electrophysiological experiments using whole-cell patch-clamp recordings from WT and SOD1 G93A fetal E17.5 mouse lumbar spinal MNs. ECl was imposed, and physiological dGPSPs were evoked by activating local, pharmacologically isolated GABA/glycine fibers with ventro-lateral funiculus (VLF) electrical stimulation. The effect of dGPSPs was assessed on MN firing elicited by intracellular injection of current. Our results confirmed the dual effects of dGPSPs on MN firing frequency, i.e., an excitatory effect at low dGPSP frequency and an inhibitory effect at high frequency (Dual MNs), and showed that dGPSPs are also able to exert purely inhibitory effects at low frequency (Inhibited MNs). SOD MNs were mainly Dual MNs whatever their Rin, whereas the majority of WT MNs, which had a low Rin, were Inhibited MNs; the remaining WT MNs, with a high Rin, were Dual MNs. Second, we examined whether morphology could explain the different dGPSP responses in SOD1 G93A and WT MNs by morphometric analysis of neurobiotin-injected recorded MNs. MN reconstructions revealed that morphology is highly associated with Rin in both WT and SOD1 G93A MNs. The morphometric results showed that Dual WT MNs displayed a decreased total dendritic area and a trend towards reduced dendritic trees, which is linked to a high Rm. However, SOD1 G93A MNs could show dual or inhibited responses whatever their morphology. Given that morphology could not explain the different dGPSP-evoked responses in SOD1 G93A MNs, we proposed that GABA/glycine synaptic integration may be a potential cause. We then simulated different synaptic integration configurations in SOD MNs using NEURON simulations. The simulation results revealed that a smaller number of inhibitory synapses on the SOD-like MN soma can induce a more excitatory action of evoked dGPSPs before the switch to inhibition. Interestingly, pure excitation could be obtained in SOD-like MNs by moving the inhibitory input away from the cell body to distal dendrites (~100 µm). This indicates that the way dGPSP synapses are integrated may play the major role in the dGPSP responses of SOD1 G93A MNs. Third, at the molecular level, quantitative analysis of synaptic VIAAT density verified that a lower abundance of synaptic and non-synaptic VIAAT on SOD1 G93A MN somata and proximal dendrites could induce a more excitatory action of evoked dGPSPs before the switch to inhibition, compared to WT MNs.
Taken together, our data show that the low density and the location of GABA/glycine synaptic inputs on MNs favor excitatory effects of dGPSP trains in fetal SOD1 G93A MNs, highlighting the early dysfunction of the GABA/glycine inhibitory system in the SOD1 G93A mouse model of ALS.

Results

SOD lumbar MNs are more excited by low-frequency dGPSPs

Using simulations made on E17.5 WT-like MNs, we previously found that repetitive dGPSPs delivered to the MN soma were able to totally block the ongoing MN discharge when reaching a cut-off frequency (Branchereau et al., 2019). Interestingly, similar simulations made on E17.5 SOD-like MNs exhibiting shorter dendritic trees (Martin et al., 2013) revealed that such repetitive dGPSPs were also able to exert an excitatory effect on the ongoing MN discharge when occurring at low frequency, before switching to a fully inhibitory effect at higher frequency (cut-off frequency) (Branchereau et al., 2019). Here, we initially sought to verify whether this excitatory effect was found in real prenatal SOD1 G93A MNs. To answer this question, we used whole-cell patch-clamp recordings from lumbar E17.5 MNs (71 MNs in total; N = 39 litters, Table 1) located in the lateral column (Branchereau et al., 2019), in which ECl was set to -57.5 mV, i.e., in the range of the physiological EGABAAR recorded in E17.5 MNs (Branchereau et al., 2019). The SOD or WT genotype was not known at the time of recording. MNs were depolarized, in current-clamp mode, by injecting a positive current through the recording patch-clamp pipette in order to elicit tonic firing (~20 Hz, Figure 1A-C) for 5 s, while dGPSPs were evoked at different frequencies by electrical stimulation of the ventro-lateral funiculus (VLF) in the presence of glutamatergic, cholinergic and serotonergic blockers. As illustrated in Figure 1, three types of dGPSP effects were found: an excitatory dGPSP effect at low frequency and also at high frequency, up to 200 Hz (termed Excited MNs, Figure 1A); an excitatory dGPSP effect at low frequency switching to an inhibitory effect above 50 Hz (termed Dual MNs, Figure 1B); and inhibitory dGPSP effects at both low and high frequencies (between 10 Hz and 200 Hz) (termed Inhibited MNs, Figure 1C). Genotyping revealed that ~60% of the WT MNs (19/32; 7 female and 8 male MNs, N = 11 litters) were Inhibited, while the remaining ~40% (13/32; 7 female and 6 male MNs, N = 13) were Dual MNs. Interestingly, it also revealed that Excited MNs were exclusively SOD MNs (n = 7; 3 female and 4 male MNs, N = 7), corresponding to ~18% of the SOD MNs (7/39), most of the remaining SOD MNs being Dual (~54%, n = 21; 8 female and 13 male MNs, N = 16) or Inhibited (~28%, n = 11; 7 female and 4 male MNs, N = 10) (Figure 1D) (****p<0.0001, Chi-square test).
Therefore, although our electrophysiological data did not confirm the exclusivity of the dual dGPSP effect in SOD MNs, since some WT MNs could also be Dual, they showed that fetal E17.5 SOD MNs were more excited by low-frequency repetitive dGPSPs than WT MNs from the same littermate, some of them being unable to switch to inhibition when the dGPSP frequency was above 40 Hz. This propensity to be excited by low-frequency repetitive dGPSPs was not due to the resting membrane potential (ERest), spike amplitude (Spike Amp.), spike width or the difference between spike threshold and membrane potential (Vthr-Vr), which had similar values in WT Dual, WT Inhibited, SOD Dual, SOD Inhibited and SOD Excited MNs (Table 1). Nor was it due to the initial firing frequency of the MNs, since this was not significantly different between the five groups (Figure 1E1) (Table 1). For each experiment, the VLF intensity was adjusted in order to evoke dGPSPs with an amplitude corresponding to that of spontaneous dGPSPs (see upper traces in Figures 1A-C). We found that the VLF intensity was significantly higher (p<0.0001).

Low input resistance SOD MNs are not specifically inhibited

We previously showed that SOD E17.5 spinal lumbar MNs have a higher input resistance (Rin) than WT MNs from the same littermate (Martin et al., 2013; Branchereau et al., 2019). Interestingly, our present data set revealed a similar finding: Rin was 101.0 ± 8.2 MΩ (n = 32; 18 female and 14 male MNs, N = 24) for WT MNs and 132.4 ± 9.3 MΩ (n = 39; 18 female and 21 male MNs, N = 33) for SOD MNs (p<0.05, Mann Whitney test) (Figure 3A1). Rheobase, the minimum current injection needed to elicit spiking activity, was lower in SOD MNs.

Somatic morphometric parameters were compared between reconstructed WT and SOD MNs. We found that the soma area measured in the transversal plane (Atrans), the soma area (Asoma), the soma volume (Vsoma) as well as the minimum Feret diameter of the soma (Fmin) were significantly higher in SOD MNs relative to age-matched WT MNs (Table 7). As a whole, the somatic parameters showed that E17.5 SOD lumbar spinal MNs exhibited bigger somata than WT MNs from the same littermate.

Estimation of the effects of ECl, distal dendritic length, soma diameter and synapse position on dGPSP burst effects by simulations

In previous work, we used simulations to show how ECl interacts with Vrest and other parameters (morphology, synaptic conductance, etc.) of MNs in order to produce mixed inhibitory and excitatory responses to single PSPs (Branchereau et al., 2016). Here, we addressed this question more precisely using the same protocol as in the physiological experiments. Basically, the MN received a continuous amount of constant current in order to produce a stable 12 Hz discharge (Figure 5A2). On this basal activity, a train of dGPSPs (GABA/glycine inputs) was elicited at a range of frequencies from 0.2 to 300 Hz. When ECl was around -50 mV, a small variation of ECl could produce opposite effects.
For example, in a WT MN, when ECl was set to -47.4 mV (as in experiments), a 20 Hz fiber stimulation of GABAergic/glycinergic neurons (GABA/glycine stimulation) evoked an excitatory response that increased with frequency up to 166 Hz, then decreased and became inhibitory at higher frequencies.

In the same manner, increasing the soma diameter had a very modest effect on the MN response to GABA/glycine stimulation (figure 5C1). In this new simulation, we increased the soma membrane area by a factor of 1.46 (matching values observed in some SOD MNs compared to WT MNs, see Results). Therefore, two types of CE MNs were modeled (soma diameters of 11.17 µm and 13.50 µm, respectively, figure 5C2). These CE MNs with increased soma diameter were slightly less excitable than the previous CE MNs, but presented roughly the same characteristic response to GABA/glycine stimulation, with a dual response present when ECl = -52.1 mV (figure 5C1). In the last series of simulations, we aimed at illustrating the effect of dGPSP synapse position on the response of SOD MNs to GABA/glycine stimulation (figure 5D1-D3). This effect was estimated for various dGPSP synaptic conductances. The dGPSP synapse was moved from the soma (x = 0 µm) to a proximal dendrite at x = 100 µm from the soma (figure 5D3). This modest displacement had tremendous consequences on the effect of dGPSP trains. Whereas a dGPSP synapse in somatic position (x = 0 µm) induced a mixed excitatory response (at low GABA/glycine stimulation frequencies) and inhibitory response (at high stimulation frequencies) (figure 5D1) whatever the dGPSP conductance (12-20 nS), only excitatory responses were elicited by dGPSP trains when the dGPSP synapse was on the dendrite (x = 100 µm, figure 5D2).

SOD MNs exhibit fewer inhibitory inputs

Our simulation data suggest that SOD MNs lack GABA/glycine terminals on the somatic compartment. Therefore, some neurobiotin-injected MNs (in blue in figure 6) were processed for VIAAT immunostaining to detect presynaptic GABA/glycine terminals (in green) on the soma and proximal dendrites of MNs, and for gephyrin (in red), the GABAA/glycine receptor anchoring protein. We found a reduced VIAAT staining on the cytoplasmic membrane of SOD MN somata (figure 6A) relative to age-matched WT MNs (figure 6B).

What do the different subtypes of fetal MNs correspond to?

We previously described that prenatal E17.5 lumbar spinal MNs from the SOD1 G93A ALS mouse model are hyperexcitable because of shorter dendrites, and exhibit a higher Rin compared to WT MNs from the same littermates (Martin et al., 2013). Our present data, collected from similar fetal MNs located in the lateral motor column [START_REF] Branchereau | Relaxation of synaptic inhibitory events as a compensatory mechanism in fetal SOD spinal motor networks[END_REF], confirm the high Rin in SOD MNs but also show some SOD MNs with low Rin and an extended dendritic tree (figures 4C-E). Similarly, some fetal WT MNs have a high Rin and exhibit short dendrites (figure 4B). Does this correspond to two types of MNs? Leroy and colleagues have shown, in post-natal (P) 6-10 days (P6-P10) SOD1 G93A mice, that only S-type (innervating the slow-contracting muscle fibers) α MNs displayed intrinsic hyperexcitability and a shrunk dendritic tree, the excitability of F-type (innervating the fast-contracting muscle fibers) α MNs being unchanged (Leroy et al., 2014).
Similarly, Venugopal and collaborators found in P8-P12 SOD1 G93A trigeminal MNs that predicted fast SOD MNs showed hyperexcitable shifts marked by a reduced rheobase and an increased input resistance, whereas predicted slow SOD MNs did not present significant alterations in these properties when compared to WT littermate MNs [START_REF] Venugopal | Homeostatic dysregulation in membrane properties of masticatory motoneurons compared with oculomotor neurons in a mouse model for amyotrophic lateral sclerosis[END_REF]. Leroy and colleagues described an immediate firing in S-type MNs and a delayed firing in F-type MNs. S-type MNs were identified as expressing the estrogen-related receptor β (Errβ) but not the matrix metalloproteinase-9 (MMP9), whereas most of the F-type MNs were enriched in MMP9 (Leroy et al., 2014) and Errβ negative (Leroy et al., 2014). MMP-9 is not expressed by embryonic MNs and is first detected in the spinal cord around P5 [START_REF] Kaplan | Neuronal matrix metalloproteinase-9 is a determinant of selective neurodegeneration[END_REF]. We tried to detect MMP-9 in neurobiotin-injected fetal MNs but could not detect any staining. We also checked for immediate vs delayed firing when MNs were depolarized at threshold with a 5 s long depolarizing pulse, and could only detect immediate firing. In addition, the voltage difference between the threshold potential for spikes and the membrane potential (Vthr-Vr) and the spike width (SpkWdth) did not differ between E17.5 MNs (table 1), unlike P6-10 MNs (Leroy et al., 2014). However, even though it is difficult to conclude about the identity of our recorded fetal MNs, low Rin (< 100 MΩ) WT MNs, inhibited by dGPSPs, likely correspond to future F-type MNs. Their morphology complies with that described for WT MNs in our previous study [START_REF] Martin | Embryonic alteration of motoneuronal morphology induces hyperexcitability in the mouse model of amyotrophic lateral sclerosis[END_REF]. Interestingly, our data highlight a more hyperpolarized actual ECl in low Rin WT Inhibited MNs compared to high Rin WT Dual MNs (figure 1F3), suggesting a higher efficacy of KCC2. However, the protocol used in the present study was not a KCC2-challenging protocol and did not reveal any main difference in actual ECl values between WT and SOD MNs, as previously described [START_REF] Branchereau | Relaxation of synaptic inhibitory events as a compensatory mechanism in fetal SOD spinal motor networks[END_REF].

Motoneuronal properties

Our data did not highlight any differences in MN ERest, spike amplitude and spike width between E17.5 SOD1 G93A and WT littermate mice. No difference in ERest and action potential shape was also reported between G93A and control MNs from spinal cord slices cultured during 3 weeks starting at E12-14 days (J. J. Kuo et al., 2004). P6-10 SOD1 G93A spinal MNs also did not show any change in ERest, spike amplitude and spike width (Leroy et al., 2014), whereas the ERest of P30-P60 SOD1 G93A spinal MNs is 10 mV more hyperpolarized than in WT littermate MNs, this difference in ERest becoming undetectable at P90-P120 [START_REF] Huh | Time Course of Alterations in Adult Spinal Motoneuron Properties in the SOD1(G93A) Mouse Model of ALS[END_REF]. This hyperpolarization of MN ERest, which was not detected at early and late stages, was described as a cellular adjustment useful to compensate for an increase in the persistent inward current (PIC) [START_REF] Huh | Time Course of Alterations in Adult Spinal Motoneuron Properties in the SOD1(G93A) Mouse Model of ALS[END_REF].
Interestingly, the authors described an overall pattern of oscillations across the lifespan of the SOD1 G93A ALS mouse model, with the PIC oscillations tending to increase excitability but the conductance and ERest oscillations tending to reduce it [START_REF] Huh | Time Course of Alterations in Adult Spinal Motoneuron Properties in the SOD1(G93A) Mouse Model of ALS[END_REF].

Prenatal SOD motoneurons receive fewer VIAAT-positive inputs

Our results show that fetal SOD MNs can be Dual or Inhibited MNs whatever their Rin and morphology, unlike WT MNs from the littermates. What could explain this surprising result? If we consider that Inhibited low Rin WT MNs are future F-type (innervating the fast-contracting muscle fibers) α MNs that start maturing at E17.5, then we may consider that the spinal cord from the SOD1 G93A strain matures less rapidly than littermate WT spinal cords, i.e., the developmental sequence of the lumbar motoneuronal network (decrease of the MN Rin, increase of the MN Cm) being delayed. Our physiological data show that we had to increase the VLF intensity in SOD preparations to obtain evoked single dGPSPs of a size similar to WT preparations (figure 1E2), in agreement with anatomical data highlighting a lower density of VIAAT-positive terminals on SOD MNs. A reduction of inhibitory synapses was not found in SOD1 G93A E12.5 spinal cords maintained in culture during 14 days; rather, an increase was described (a higher inhibitory/excitatory synapse ratio) [START_REF] Avossa | Early signs of motoneuron vulnerability in a disease model system: characterization of transverse slice cultures of spinal cord isolated from embryonic ALS mice[END_REF]. However, in a recent study, Allodi and collaborators have shown that F-type MNs from P45 SOD1 G93A mice lose inhibitory glycinergic connections before S-type MNs [START_REF] Allodi | Locomotor deficits in a mouse model of ALS are paralleled by loss of V1-interneuron connections onto fast motor neurons[END_REF]. A major reduction of inhibitory synapse densities (gephyrin puncta) in the ventral horn of the SOD1 G93A slow mutant was also described at early stages of the disease (P50) (S. [START_REF] Saxena | Neuroprotection through excitability and mTOR required in ALS motoneurons to delay disease and extend survival[END_REF]). Our previous data show that VIAAT-positive terminals are less abundant in the marginal zone edging the E17.5 MN soma location [START_REF] Branchereau | Relaxation of synaptic inhibitory events as a compensatory mechanism in fetal SOD spinal motor networks[END_REF]. Here, we found a reduction of VIAAT density (global and synaptic) on the soma of SOD MNs, relative to age-matched WT. In addition, as in our previous study [START_REF] Branchereau | Relaxation of synaptic inhibitory events as a compensatory mechanism in fetal SOD spinal motor networks[END_REF], we found a reduction in the frequency of GABA/glycine synaptic events in SOD MNs, corroborating the reduction of synaptic VIAAT density. This highlights the selective reduction of GABA/glycine inputs on MN cell bodies, which likely accounts for the lack of inhibitory effect of dGPSP barrages in prenatal SOD G93A MNs and for the reduction in the frequency of GABA/glycine synaptic events in SOD MNs. This selective reduction of GABA/glycine inputs may start occurring around birth and may strengthen over time in adults.
Therefore, even if we did not analyze the lack of VIAAT staining on F-type vs S-type MNs (because of the lack of specific markers at fetal stages, see above), we may hypothesize that it reflects an early reduction of GABA/glycine terminals on F-type MNs, which are low Rin neurons.

Somatic morphometric parameters

Our morphometric analysis has uncovered an increase in SOD MN soma size compared to WT littermates. This was not evidenced in our previous morphometric analysis performed on E17.5 SOD1 G93A mice (Martin et al., 2013), nor in other studies done on postnatal (P6-10) SOD1 G93A mice (Leroy et al., 2014) or P3-9 SOD G85R mice (Amendola et al., 2008; Filipchuk et al., 2012). Interestingly, such an increase in SOD MN soma size was described in a comprehensive study performed on a large sample of immunolabelled P10 SOD1 G93A MNs, specifically in F-type α MNs [START_REF] Dukkipati | The vulnerability of spinal motoneurons and soma size plasticity in a mouse model of amyotrophic lateral sclerosis[END_REF]. A larger sample size was also considered in the study performed by Shoenfeld and collaborators, who observed an increase in the soma size of P30 SOD1 G93A MNs [START_REF] Shoenfeld | Soma size and Cav1.3 channel expression in vulnerable and resistant motoneuron populations of the SOD1G93A mouse model of ALS[END_REF]. It is possible that we sampled more future F-type MNs in the present study compared to Martin and collaborators (Martin et al., 2013).

Effect of recurring dGPSPs on MN firing: a complex equation

The present study highlights the effect of recurring dGPSPs on the repetitive firing ability of E17.5 lumbar MNs, which drive the contraction of muscle fibers and are the final physiological pathway [START_REF] Manuel | Alpha, beta and gamma motoneurons: functional diversity in the motor system's final pathway[END_REF]. We show that the excitatory action of low frequency dGPSPs is favored in SOD1 G93A MNs.

Limitations of the study

In the present study, we did not analyze the lack of VIAAT staining on F-type vs S-type MNs because of the lack of specific markers at fetal stages (see Discussion part). Therefore, we could not address whether any specific early reduction of GABA/glycine terminals on F-type MNs exists at the prenatal stage. We also did not look at GABAAR and GlyR on spinal MNs. A decrease of surface postsynaptic GlyR on SOD1 G93A MNs maintained in culture from E12.5 during 12-16 days has been described (Q. Chang et al., 2011).

Materials and Methods

Ethical considerations and mouse model

All procedures were carried out in accordance with the local ethics committee of the University of Bordeaux and the EU Directive 2010/63. In order to apply the 3Rs rule, all efforts were made to curtail animal suffering and reduce the number of animals used. B6SJL-Tg(SOD1*G93A)1Gur/J mice expressing the human G93A Cu/Zn superoxide dismutase (SOD1) mutation (Gly93→Ala substitution) were obtained from the Jackson Laboratory (https://www.jax.org/strain/002726). Heterozygous B6SJL-Tg(SOD1*G93A)1Gur/J mice (named SOD in this manuscript) were maintained by crossing heterozygous transgene-positive male mice with B6SJL F1 hybrid females (Janvier labs, France). Gestation in SOD mice lasted ~18.5 days, embryonic day 0.5 (E0.5) being defined as the day after the mating night. Experiments were performed on E17.5 fetuses of either sex, collected one day before their birth.

Immunohistochemistry

Immunohistochemistry was performed as follows.

The actual ECl of each MN was assessed from IPSCs evoked at different holding potentials (Figure 1F1); see Figure S2 for the specificity of GABAAR/GlyR activation.
The reversal potential was calculated by a linear fit (Prism 9, GraphPad Software Inc., USA) (Figure 1F2). The Clampex (10.7, Molecular Devices) membrane test was used to monitor the passive properties of a neuron, namely the membrane resistance (Rin), access resistance (Ra) and membrane capacitance (Cm). Only neurons with a resting membrane potential more hyperpolarized than -50 mV (value not corrected for the junction potential) were used for recording. Rheobase was recorded as the minimum current pulse injected to elicit an action potential. The intracellular stimulation intensity was adjusted for each recording to evoke a train of action potentials at ~20 Hz (Figure 1E1), the duration of which was set to 5 seconds. The intensity of VLF stimulation was adjusted in order to elicit synaptic events equivalent to the spontaneous ones observed in MNs (Figure 1A-C, upper traces). Nine VLF frequencies were selected from previous simulation data: 7.5 Hz, 10 Hz, 20 Hz, 30 Hz, 40 Hz, 50 Hz, 100 Hz, 150 Hz and 200 Hz. The duration of VLF stimulation was kept at 2.5 s for all experiments, occurring 2.5 s after the onset of MN spiking activity (Figure 1A-C). The recording protocol consisted of one control burst (intracellular stimulation of the cell to produce a train of spikes), followed by one test burst (intracellular stimulation accompanied by VLF stimulation after 2.5 s). The spike number change (% of control) was assessed during 1 s by comparing the spike number during VLF stimulation to the spike number in the preceding control at the same time point. The inter-burst duration was set to ~40 s. This frequency discharge protocol was recorded in current-clamp mode and analyzed using a Spike 2 V7 script developed by D.C., whereas the actual ECl of each MN was assessed from voltage-clamp recordings analyzed using Clampfit 10.7 (Axon Instruments).

Pharmacology

In whole-cell patch-clamp experiments, GABAAR/GlyR responses were isolated by using a cocktail of pharmacological drugs containing 4 mM kynurenic acid (Bio-techne, France) and 5 μM dihydro-β-erythroidine, which respectively blocked glutamatergic and cholinergic inputs to MNs. Two serotonin (5-HT) receptor blockers, methysergide (10 μM) and ketanserin (10 μM), were also applied to prevent any 5-HT release from descending fibers that are already dense in the VLF [START_REF] Ballion | Ontogeny of descending serotonergic innervation and evidence for intraspinal 5-HT neurons in the mouse spinal cord[END_REF]. Gabazine 3 μM (SR-95531, Bio-techne, France) and strychnine 3 μM (Sigma-Aldrich Chimie SARL, France) were used to verify that our VLF stimulation specifically elicited GABAAR- and GlyR-related synaptic events (Figure S2).

Membrane properties and data analysis

Data analysis was performed using Clampfit 10.7 (Axon Instruments) and Spike 2 V7 (Cambridge Electronic Design Ltd). For synaptic event analysis, representative 1-2 min recording samples were selected from each MN. The MN capacitance (Cm), membrane input resistance (Rin) and access resistance (Ra) were recorded immediately after establishing the whole-cell patch-clamp configuration.

Computer simulations

GABA/glycine synaptic events in E17.5 MNs were simulated using a multi-compartment neuron model elaborated with the NEURON 7.3 program (Hines et al., 1997). Two simulated neurons were constructed: an E17.5 WT-like MN and an E17.5 SOD-like MN (figure 5B2). These canonical MNs, designed from the average topology of real E17.5 MNs, were built on models used in a previous study.
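A minimal sketch of this kind of protocol in NEURON's Python interface, using the passive and synaptic parameters detailed below; the morphology is reduced here to a single soma for illustration, standard hh channels stand in for the custom conductances of the actual model, and all names and dimensions are placeholders:

```python
# Minimal single-compartment sketch of the dGPSP-train protocol
# (illustrative only: the actual model is multi-compartmental with
# custom I_A, I_C and I_L conductances).
from neuron import h
h.load_file("stdrun.hoc")

soma = h.Section(name="soma")
soma.L = soma.diam = 11.17       # placeholder soma size (um)
soma.insert("hh")                # stand-in spiking conductances

# Two-exponential GABA/glycine synapse on the soma
syn = h.Exp2Syn(soma(0.5))
syn.tau1, syn.tau2 = 0.3, 20.0   # tau_rise / tau_decay (ms)
syn.e = -50.0                    # ECl (mV), the key variable here

# dGPSP train delivered at a chosen frequency
stim = h.NetStim()
stim.interval = 1000.0 / 20.0    # 20 Hz
stim.number, stim.start, stim.noise = 50, 2500.0, 0
nc = h.NetCon(stim, syn)
nc.weight[0] = 0.015             # ~15 nS (NetCon weight is in uS)

# Constant depolarizing current to produce the basal discharge
ic = h.IClamp(soma(0.5))
ic.delay, ic.dur, ic.amp = 0.0, 5000.0, 0.1   # nA, tuned by hand

h.finitialize(-65.0)
h.continuerun(5000.0)            # 5 s, as in the experiments
```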
Both MNs were virtually identical, with similar channels and morphologies, except for the terminal dendritic segments of the SOD-like MN, which were 40% shorter than the WT terminal segments (Martin et al., 2013). In each section (dendrites and axon), the number of segments was an odd number calculated according to the d-lambda rule (Hines et al., 2001) (Carnevale et al., 2006). The following conductances were used: a passive leakage current (Ileak), a transient K+ channel (IA), a voltage-dependent calcium-activated potassium channel (IC), also called IK(Ca) (McLarnon et al., 1995) (B. X. Gao et al., 1998), and a high-threshold calcium current (IL) (Walton et al., 1986). Ileak was simulated in each MN section (Ileak = Gleak · (Eleak - E), with Eleak = -73 mV and Gleak = 1/Rm). Rm, the specific membrane resistance, was set to 21200 Ω·cm² in order to obtain an input resistance Rin of 120 MΩ in the WT MN. IA was present in the axon initial segment (AIS), with GIA = 0.0005 S·cm⁻²; IC was present in the soma, with GIC = 0.0035 S·cm⁻²; IL was present in the soma, with GIL = 9×10⁻⁵ S·cm⁻². Each of the axon segments (n = 750) was equipped with Na and K Hodgkin-Huxley (HH) channels (GNa = 0.012 S·cm⁻² and GK = 0.0036 S·cm⁻², respectively) used to generate spikes. The densities of Na and K channels in the initial segment (GNa = 0.5 S·cm⁻² and GK = 0.15 S·cm⁻²) were adjusted in order to obtain a spike threshold of -48 mV. In addition, calcium dynamics (Blaustein, 1988) were added in the soma to reproduce calcium accumulation, diffusion and pumping (for more details see (Destexhe et al., 1993)). The parameters used in the intracellular calcium dynamics were: depth = 0.17 mm, τr = 10¹⁰ ms, Cainf = 0.0002 mM, Kt = 0.00025 mM·ms⁻¹, Kd = 0.0001 mM, Cai = 2.4×10⁻⁴ mM and Cao = 3 mM. Inhibitory synaptic inputs were inserted on the somatic MN compartment using two-exponential equations (one for the rising phase and the other for the decay phase). Depolarizing GABAergic/glycinergic post-synaptic current kinetics were set with τrise = 0.3 ms and τdecay = 20 ms. More details about the equations for channel properties and synaptic activation can be found in (Martin et al., 2013) and [START_REF] Branchereau | Relaxation of synaptic inhibitory events as a compensatory mechanism in fetal SOD spinal motor networks[END_REF].

Principal component analysis (PCA)

Principal component analysis was performed using the R software (Ade4 package) based on the variables described in table 2. In a first step, the contribution of each variable to the variance of the first and second components of the PCA was calculated (tables 2-3). Only the variables whose contribution to the variance of a given component was larger than the mean contribution of all variables to that component were considered (see *).

GABAA and glycine receptors (GABAAR/GlyR) show high permeability to chloride (Cl-) and low permeability to bicarbonate (HCO3-) [START_REF] Kaila | The role of bicarbonate in GABAA receptor-mediated IPSPs of rat neocortical neurones[END_REF]. Therefore, the strength of inhibition critically depends upon the intracellular concentration of Cl- ([Cl-]i), which determines the reversal potential for GABAAR- and GlyR-mediated currents (EGABAAR/GlyR). Adult neurons in the healthy nervous system express low levels of the cation-chloride cotransporter NKCC1 (a Cl- importer) and high levels of KCC2 (a Cl- exporter), the latter extruding Cl- out of the cells.
This assures a low [Cl-]i and a Cl- electrochemical gradient that mediates Cl- inflow after opening of the GABAAR/GlyR upon binding of the inhibitory neurotransmitters GABA/glycine. EGABAAR/GlyR is thus set below the resting potential, near the Cl- equilibrium potential (ECl). It follows that Cl- produces a net hyperpolarization of the membrane and an inhibitory action. However, GABAergic neurotransmission predominates in developing neurons [START_REF] Allain | Maturation of the GABAergic transmission in normal and pathologic motoneurons[END_REF], so Cl- ions are mainly involved in GABAAR currents in fetal MNs (i.e., EGABAAR = ECl). In fetal mouse lumbar spinal MNs, EGABAAR is very depolarized, above the spike threshold, which induces a strong excitatory action at early developmental stages (E13.5-E15.5), whereas it drops significantly below the spike threshold at late embryonic stages (E16.5-E18.5) (Alain Delpy et al., 2008). Besides, our lab previously found that at E17.5 a fictive alternating locomotor activity may be elicited by 5-HT bath application (Branchereau et al., 2000b). This implies that at E17.5 the lumbar spinal networks may express a left-right coordination, indicating the presence of inhibition between ipsi- and contra-lateral sides. Interestingly, during the E16.5-E18.5 period, mouse spinal motoneurons still exhibit a depolarizing ECl (ECl above the resting membrane potential but largely below the spike threshold), so such inhibition between ipsi- and contra-lateral sides may be caused by shunting effects of GABA/glycine inputs (Alain Delpy et al., 2008). This shunting inhibition is a form of postsynaptic inhibition owing to an increase of the postsynaptic membrane conductance due to the sustained activation of GABAA receptors and the opening of ion channels, and it is always present independently of the depolarizing or hyperpolarizing effect of GABA [START_REF] Kurbatova | Dynamic changes of depolarizing GABA in a computational model of epileptogenic brain: Insight for Dravet syndrome[END_REF]. My thesis work aimed to examine whether molecular and/or cellular mechanisms affecting chloride homeostasis are responsible for the hyperexcitability of E17.5 SOD1 G93A MNs, possibly due to shunting inhibition or to inhibition via excitatory effects upon inhibitory interneurons. However, the specific roles of these two types of GABA/glycine inhibition (indirect synaptic inhibition and shunting inhibition) in the cellular hyperexcitability of E17.5 SOD1 G93A MNs are not fully examined. This thesis aimed to investigate the chloride-related inhibitory system in lumbar spinal motor networks underlying the modulation of hyperexcitability in SOD1 MNs, in accordance with a morphometric alteration exhibiting a shorter dendritic length but no changes at the soma level.

Decreased capacitance and membrane conductance were thus not related to a smaller size of MNs in E17.5 SOD G93A mice. These findings therefore support the conclusion that prenatal E17.5 SOD G93A MNs are also hyperexcitable. The following will consider possible compensatory mechanisms that counteract the reduction in effective inhibitory input and the hyperexcitability of SOD G93A MNs.

• The frequency and amplitude of inhibitory synaptic inputs are only weakly affected in fetal SOD MNs.

We showed that SOD G93A MNs have a decreased GABA/glycine inhibition, reflected by smaller amplitude and less frequent mIPSCs and sIPSCs. This conclusion was supported by both physiological and anatomical observations.
According to functional criteria, the frequency of IPSC events is thought to represent the pre-synaptic release mechanism whereas the amplitude is indicative of post-synaptic receptor function [START_REF] Frerking | Are some minis multiquantal?[END_REF]. First, SOD G93A MNs displayed less frequent mIPSCs (~6%), which correlates with diminished (~23%) inhibitory synaptic inputs (VIAAT/synaptophysin co-localization). Similarly, SOD G93A MNs also exhibited less frequent sIPSCs (~13%). Second, the amplitudes of the mIPSCs were slightly smaller in SOD G93A MNs than in WT, especially for large events, and sIPSC events also exhibited this tendency. The reduced degree of synaptic input to mouse prenatal SOD G93A MNs assessed by VIAAT/synaptophysin staining is reminiscent of anatomical data from adult human ALS patients in which a reduction of synaptophysin was also identified (Ikemoto et al., 2002). It cannot be explained by a reduced SOD G93A MN size. Therefore, the IPSCs that were slightly reduced in amplitude in our recordings of SOD G93A MNs represent glycinergic events, whereas the GABAergic IPSC amplitude remained unaffected. This hypothesis is supported by the fact that the ratio gGABAAR/Cm, measured using perforated patch-clamp recordings under physiological conditions, did not differ between SOD G93A and WT MNs. Interestingly, a reduction in amplitude of glycinergic synaptic events, but not GABAergic events, was described in SOD G93A lumbar MNs maintained in culture (Q. Chang et al., 2011). However, in the present study, since no GlyR staining was performed, we cannot determine whether the reduction of IPSC amplitude is attributable to a reduction in the number of channels clustered at postsynaptic sites.

Our findings indicate that there are two possibilities that could underlie the deficits in GABA/glycine inhibition. They could result from a decreased GABA/glycine release from presynaptic terminals, as indicated by a reduced frequency of IPSC events and a reduction in the number of pre-synaptic GABAergic/glycinergic inputs, and also from a modification in glycinergic synaptic transmission, as evidenced by the decreased mIPSC amplitude and a possible reduction in the number of post-synaptic GlyR channels (i.e., the size of GlyR clusters), excluding the involvement of GABAARs. As a result, this inhibitory insufficiency induced the hyperexcitability of E17.5 SOD G93A MNs.

• A prolongation of inhibitory synaptic events in SOD MNs as a compensatory mechanism.

Another key property can be extracted from post-synaptic GABA/Gly IPSC events: the kinetics changes representing how fast (time course) cells can respond to inputs. Our data revealed a noticeable increase of τdecay in both SOD G93A mIPSCs and sIPSCs. We found a mean τdecay of around 20-25 ms, which is in the same range as that previously described for mIPSCs during perinatal stages in rat lumbar MNs (B.-X. Gao et al., 1998). Because a part of our study did not differentiate unequivocally between GABAAR-mediated and GlyR-mediated mIPSCs, we pharmacologically dissected GABAAR-mediated versus GlyR-mediated mIPSCs. This confirmed that the mean τdecay lies between the τdecay of pure GABAAR-mediated mIPSCs and that of pure GlyR-mediated mIPSCs, and that both types of mIPSCs exhibit a longer relaxation.
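As an illustration of how such a τdecay can be extracted, a minimal curve-fitting sketch (assuming IPSC traces are available as NumPy arrays; the two-exponential form mirrors the synaptic model used in the simulations, and all variable names are placeholders):

```python
# Hypothetical extraction of tau_rise / tau_decay from an averaged IPSC.
import numpy as np
from scipy.optimize import curve_fit

def ipsc(t, a, tau_rise, tau_decay):
    # Two-exponential PSC: rising and decaying phases.
    return a * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise))

# t_ms: time from IPSC onset (ms); i_pa: baseline-subtracted current (pA)
t_ms = np.linspace(0, 100, 1000)
i_pa = ipsc(t_ms, 50.0, 0.3, 20.0) + np.random.normal(0, 1, t_ms.size)

popt, _ = curve_fit(ipsc, t_ms, i_pa, p0=(40.0, 0.5, 15.0))
print(f"tau_rise = {popt[1]:.2f} ms, tau_decay = {popt[2]:.2f} ms")
```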
Interestingly, GlyRs switch from immature homomeric α2 GlyRs, with slow decay kinetics of glycinergic mIPSCs, to heteromeric GlyRs that include α1, α3 and β subunits, with fast kinetics (Legendre, 2001b; Raltschev et al., 2016). E17.5 SOD1 G93A MNs may therefore exhibit a delayed development with a preponderance of homomeric α2 GlyRs. However, this is unlikely to be the case, since most electrical properties measured from birth to postnatal day 12 in SOD1 G93A MNs instead show an accelerated rate of maturation (Quinlan et al., 2011). During the perinatal development of rat spinal MNs, GlyR-mediated mIPSCs become dominant over GABAAR-mediated mIPSCs (B.-X. Gao et al., 2001). We also found a majority of pure GlyR-mediated mIPSCs, and found the same ratio of pure GABAAR-mediated mIPSCs and pure GlyR-mediated mIPSCs in SOD1 G93A and WT MNs, indicating a similar prenatal developmental stage for both genotypes.

Considering that SOD spinal cords do not express a developmental delay compared to WT cords from the same littermates, we questioned the origin of the prolonged τdecay of GABAAR- and GlyR-mediated mIPSCs, sIPSCs and eIPSCs. It has been demonstrated that a high [Cl-]i slows down the decay of glycine- and GABA-mediated inhibitory synaptic events (Pitt et al., 2008b; Houston et al., 2009b), reducing the firing probability of Purkinje neurons (Houston et al., 2009b). Interestingly, the structural basis of this effect was described as a direct effect of chloride ions acting in the pore of glycine and GABA channels (Moroni et al., 2011b). Therefore, the elevated [Cl-]i likely accounts for the prolonged relaxation of GABA/Gly IPSCs in SOD1 G93A MNs. The increase in [Cl-]i affects GABAAR and GlyR and thus IPSC duration (Pitt et al., 2008b; Houston et al., 2009b; Moroni et al., 2011b; Doyon et al., 2016). We therefore quantified the KCC2 protein expression and provide evidence that the altered EGABAAR in the E17.5 SOD1 G93A mouse embryo likely arises from a down-regulation of KCC2 in the lumbar cord and in the membrane of lumbar MNs. This downregulation of KCC2 was unlikely to be linked to a decrease in the size of the SOD G93A MNs (namely, smaller neurons having a smaller cytosolic volume and a smaller membrane surface to host KCC2), because the soma perimeters and surfaces of E17.5 WT MNs and SOD1 G93A MNs were found to be similar (Martin et al., 2013). Also, KCC2 efficacy and EGABAAR were assessed at the soma level. SOD G93A MNs do not have less volume to adjust, and the lower efficacy of KCC2 in SOD G93A MNs is not related to a change in MN size. This suggests that molecules upstream of KCC2 probably regulate its efficacy. In hippocampal pyramidal neurons, NKCC1 up-regulation has been the proposed mechanism that drives the transient perinatal GABA switch in rodent models of autism (Tyzio et al., 2014). However, our data show that NKCC1 is unaltered in E17.5 lumbar spinal cords. KCC2 down-regulation in the hippocampus delays the postnatal GABA shift in the oxytocin receptor knockout mouse model of autism (Leonzino et al., 2016). More recent evidence shows that KCC2 down-regulation explains the long-lasting effects of adolescent nicotine exposure (A. M. Thomas et al., 2018). In ALS-vulnerable motoneurons of the adult SOD1 G93A mouse lumbar cord, a strong decrease of KCC2 mRNA and protein expression levels was reported (Fuchs et al., 2010). Thus, KCC2 is a likely molecular substrate underlying the perinatal alteration of chloride homeostasis in the lumbar spinal cord.
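The next paragraph reasons from imposed ECl values to theoretical [Cl-]i via the Nernst equation; for reference, a worked form (assuming 37 °C and an extracellular chloride of ~130 mM, values not specified in the text):

$$E_{\mathrm{Cl}} = \frac{RT}{zF}\,\ln\!\frac{[\mathrm{Cl}^-]_o}{[\mathrm{Cl}^-]_i} \approx -61.5\ \mathrm{mV}\times\log_{10}\!\frac{[\mathrm{Cl}^-]_o}{[\mathrm{Cl}^-]_i} \qquad (z = -1)$$

Under these assumptions, an imposed ECl of -57.5 mV (as used in the recordings above) corresponds to [Cl-]i ≈ 15 mM; the point made below is that identical imposed ECl values imply identical theoretical [Cl-]i in both genotypes.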
The lower efficacy of KCC2 raised the physiological [Cl-]i of E17.5 SOD1 G93A MNs. It may be surprising to observe a more prolonged τdecay in SOD G93A MNs when ECl was imposed at 0 mV (to visualize the Cl--dependent IPSCs) or at -45 mV. Owing to the similar ECl values that were experimentally imposed in SOD G93A and WT MNs, the theoretical [Cl-]i values of SOD G93A and WT MNs, calculated from the Nernst equation, are also similar. We therefore expected a similar τdecay in SOD G93A and WT MNs. However, since our data clearly indicated a deficit in KCC2 efficacy in SOD G93A MNs, the imposed [Cl-]i may have been counteracted by the higher KCC2 efficacy of WT MNs compared to SOD G93A MNs. Therefore, we observed a longer relaxation of GABAAR/GlyR-mediated mIPSCs and sIPSCs, as well as of VLF-evoked IPSCs, in SOD G93A MNs compared to WT littermate MNs. Our first series of computer simulations demonstrated that increasing τdecay from 20 to 25 ms also led to a higher capacity for shunting-effect summation [START_REF] Branchereau | Depolarizing GABA/glycine synaptic events switch from excitation to inhibition during frequency increases[END_REF]. Our second series of computer simulations demonstrated that increasing τdecay slowed down the locomotor rhythm. This was in agreement with our findings showing that the pharmacologically-evoked locomotor rhythm in SOD spinal cords was slower than in WT. The effect of IPSC τdecay on the locomotor rhythm was only observed when EGABAAR was set to -60 mV in the interneurons of a central pattern generator (CPG, half-centers). To our knowledge, however, no EGABAAR and gGABA/Gly measurements have been obtained in the CPG interneurons of ALS mouse models. In their study of 2-3 week in vitro organotypic slice cultures of the spinal cord (from E12-E13), [START_REF] Medelin | Altered development in GABA co-release shapes glycinergic synaptic currents in cultured spinal slices of the SOD1(G93A) mouse model of amyotrophic lateral sclerosis[END_REF] suggested that ECl is similar in SOD G93A and WT ventral interneurons, around -60 mV. However, neither the excitatory versus inhibitory nature of the recorded interneurons, nor their relationship with the CPG circuitry, were reported in their study. Our computer model of the locomotor rhythm was based on half-center interneurons without considering the output MNs. Therefore, it may be argued that the changes in IPSC τdecay observed in spinal MNs have nothing to do with the spinal locomotor rhythm generator. However, a recent study clearly demonstrated that MNs participate in rhythm generation in the neonatal mouse (Falgairolle et al., 2017). The increase in τdecay observed in SOD G93A MNs could therefore impinge on the locomotor rhythm frequency. This change in the kinetics of GABAAR/GlyR-mediated IPSCs might be due to a prolonged opening duration of GABAAR/GlyR, thereby maximizing shunting inhibition and making inhibition more efficient (even if EGABAAR is depolarized), as reported in Kurbatova's study [START_REF] Kurbatova | Dynamic changes of depolarizing GABA in a computational model of epileptogenic brain: Insight for Dravet syndrome[END_REF]. To summarize (Figure 39), we found that E17.5 SOD G93A MNs are hyperexcitable due to a depolarized ECl and a weakening of GABAAR/GlyR-mediated inhibition, mediated by a rise in [Cl-]i through downregulation of the principal neuronal potassium-chloride cotransporter KCC2 and by a reduction of GABA/glycine inputs.
The rise in [Cl-]i is sufficient to make MNs hyperexcitable, but the shunting inhibition of GABA/glycine inputs is potentiated by the increased τdecay of GABA/Gly IPSCs, which compensates for the reduction in effective inhibitory inputs and results in a normal locomotor coordination. The GABA/glycine inhibition of E17.5 SOD G93A MNs thus combines a decreased GABAAR/GlyR-mediated inhibition (less frequent and smaller amplitude IPSCs) and an enhanced shunting inhibition (longer τdecay) arising from distinct mechanisms, suggesting that the maintenance of neuronal inhibition depends not only on the frequency and peak amplitude of IPSCs, but also on their time course. This slower or longer τdecay could be attributable to the expression of synaptic GlyR channels with slower kinetic properties, likely a consequence of altered channel gating efficacy (opening and closing rates). GABAergic/glycinergic disinhibition and a depolarized ECl together render ALS neuronal networks hyperexcitable and susceptible to spasticity, leading to important disabling complications of manual dexterity and gait, and point to KCC2 as a therapeutic target.
iii. How does inhibition alteration evolve during disease progression in the SOD1 G93A mouse?
Given our findings in this thesis, and the studies by Chang and Martin pointing out that spinal SOD G93A MNs in ALS exhibit deficient inhibition as early as the prenatal stage, this question naturally arises.

Glycinergic neurons appear during development of the vertebrate central nervous system (CNS) from embryonic day E12.5 on the somata of ventral horn cells in the spinal cord, followed by cells in the dorsal horn one day later [START_REF] Allain | Expression of the glycinergic system during the course of embryonic development in the mouse spinal cord and its co-localization with GABA immunoreactivity[END_REF][START_REF] Chalphin | The specification of glycinergic neurons and the role of glycinergic transmission in development[END_REF]. In the spinal cord, GlyRs are expressed in the ventral and dorsal horns [START_REF] Baer | Localization of glycine receptors in the human forebrain, brainstem, and cervical spinal cord: an immunohistochemical review[END_REF]. Important for the maturation of ventral inhibitory synapses are the developing interneurons (e.g., Ia INs, V2b, Renshaw cells), which play diverse roles in shaping locomotor patterns. These interneurons perform reciprocal (Ia inhibitory neurons), feedback (Renshaw cells), and feedforward (presynaptic sensory) inhibition [START_REF] Siembab | Target selection of proprioceptive and motor axon synapses on neonatal V1derived Ia inhibitory interneurons and Renshaw cells[END_REF]. The Renshaw cell is the only interneuron in the spinal ventral horn that receives afferents directly from collateral axons of the MNs and mediates recurrent inhibition back to the MNs themselves, through the co-released inhibitory neurotransmitters glycine and GABA (B. Renshaw, 1946; [START_REF] Eccles | Distribution of recurrent inhibition among motoneurones[END_REF]; A. J. Todd et al., 1990; [START_REF] Schneider | Involvement of GABA and glycine in recurrent inhibition of spinal motoneurons[END_REF]; F. J. Alvarez et al., 2007; Q. Chang et al., 2009b; F. J. Alvarez et al., 2013). Each Renshaw cell receives inputs from particular motor pools and spreads its inhibitory output to the same MNs, their synergists (i.e., motor pools exerting a similar action on the same joint), and sometimes selected motor pools across joints [START_REF] Eccles | Distribution of recurrent inhibition among motoneurones[END_REF]. Recurrent inhibition may thus act as a variable gain regulator at the motoneuronal level. Renshaw inhibition dysfunction, or even Renshaw cell loss, has been proposed as a cause of MN degeneration in pathological conditions such as ALS [START_REF] Fornai | Lithium delays progression of amyotrophic lateral sclerosis[END_REF][START_REF] Pasquali | Autophagy, lithium, and amyotrophic lateral sclerosis[END_REF]. My thesis therefore also included another project, in collaboration with Dr. Marco Beato from University College London (UCL), to investigate whether recurrent inhibition deficits occur in juvenile SOD G93A ALS MNs, which could be linked to a reduction in the size of the postsynaptic GlyR clusters. Beato's lab examined recurrent inhibition at ~P21 in SOD G93A MNs and littermate WT MNs, and we investigated the expression of functional GlyRs forming postsynaptic receptor aggregates (i.e., the size of GlyR clusters) on ChAT-positive α-MNs through immunostaining of coronal cryosections from PFA-fixed spinal cords. We confirmed the postsynaptic GlyRs by their colocalization with GlyT2-positive presynaptic glycinergic terminals (10% overlap).
Electrophysiological results from Beato's lab revealed that recurrent inhibition was preferentially reduced in large MNs (F-type MNs) from SOD G93A mice (Figure 40). Their Bayesian quantal analysis disclosed that the quantal size was reduced at Renshaw-motoneuron contacts, suggesting the possibility of some postsynaptic impairment at RC-MN synapses. Beato's lab recordings of glycinergic miniature IPSCs (mIPSCs) from MNs revealed that the amplitude of the mIPSCs was smaller in SOD G93A MNs. Their data therefore indicated that the size of postsynaptic GlyR clusters could be smaller in SOD G93A MNs compared to WT littermate MNs [START_REF] Lim | Quantal size is correlated with receptor cluster area at glycinergic synapses in the rat brainstem[END_REF]. Several factors may explain why our anatomical analysis did not clearly capture this difference:

(a) The nervous system is a dynamic structure: the topography and translocation of transmitter receptors and ion channels in the neuronal membrane are susceptible to modulation under physiological conditions [START_REF] Poo | Mobility and localization of proteins in excitable membranes[END_REF]. Recording recurrent inhibition requires stimulating the ventral roots in order to activate the local recurrent inhibitory circuit. Functional GlyRs therefore cluster or aggregate towards the GABA/glycine release sites where functional glycinergic synapses between Renshaw cells and MNs are formed. As a result, the size of synaptic GlyR clusters in the physiological state would be larger than that obtained at the immunohistochemical level, because GlyRs on motoneuronal membranes lose their mobility and capacity for translocation after the PFA fixation process.

(b) In the ventral horn of the spinal cord, not all MNs form a recurrent inhibitory circuit with Renshaw cells (see Figure 9 in this thesis introduction) [START_REF] Bikoff | Spinal Inhibitory Interneuron Diversity Delineates Variant Motor Microcircuits[END_REF], which may lead to some "fake-recurrent inhibition" MNs (i.e., MNs without axonal collateral branches) being analyzed and their data being pooled into the anatomical data, diluting valid results.

(c) Resolution limitation of the ZEISS confocal microscope (LSM900). The distributions of cluster sizes for WT and SOD MNs (Figure 41d) exhibit a slight excess of small clusters near the resolution limit in SOD MNs. There is a sharp cutoff at the level of the resolution (minimum detectable cluster size, 0.02 µm²), but if we were to fit both distributions, the size distribution could be shifted to the left, beyond the resolution limit. We thus may miss more "below resolution" SOD clusters than WT ones. To conclude, if it is true that there was a change in GlyR cluster size due to the SOD mutation, we think that this technical issue is now the limiting factor and that the reduced GlyR clusters in SOD are more likely to be small-size clusters below 0.02 µm². We would not be able to capture such small clusters without using high-grade equipment with super-resolution. Moreover, it is hard to estimate to what extent these small-size clusters are reduced in SOD, so it is possible that they are so small that they are reduced directly to complete disappearance. In this case, it is worth trying to analyze the density of receptors on the cell membrane from images acquired with super-resolution instruments.
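As a rough consistency check on this 0.02 µm² cutoff (assuming a typical confocal configuration, e.g., λ ≈ 488 nm excitation and NA ≈ 1.4, which are not specified above), the Abbe lateral resolution gives

$$d = \frac{\lambda}{2\,\mathrm{NA}} \approx \frac{488\ \mathrm{nm}}{2 \times 1.4} \approx 174\ \mathrm{nm}, \qquad \pi\left(\frac{d}{2}\right)^{2} \approx 0.024\ \mu\mathrm{m}^{2},$$

i.e., a diffraction-limited spot area on the order of the stated minimum detectable cluster size.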
Last but not least, it would be more worthwhile to conduct the immunostaining of GlyRs on the recorded MNs after the electrophysiological recordings.

First, the inadequacy of ALS models is considered an important reason for failed clinical trials. At the preclinical level, genetically modified rodents carrying fALS mutations remain the most widely used models. However, they do not fully recapitulate the complete pathophysiological and phenotypic spectrum present in ALS patients. ALS is caused by defects in different interacting pathways that culminate in a large network. The relative extent to which each of these mechanisms contributes to the overall pathobiology of ALS has not been fully ascertained, making it difficult to discriminate initiating factors from secondary consequences and to target the primary processes underlying ALS. A more comprehensive understanding of ALS pathogenic mechanisms will allow for advances in treatment. Second, the lack of presymptomatic biomarkers and the delay in clinical diagnosis have significantly limited the therapeutic potential of putative disease-modifying drugs. It is now increasingly accepted that by the time patients fulfil the diagnostic criteria for ALS, a considerable disease burden has already occurred during the long presymptomatic phase. The timing of application of potentially neuroprotective interventions is fundamental for increasing the chance of success. The results of treatments started well in advance of symptom onset in animal models should be considered as indicative of their potential success in humans. Presymptomatic diagnosis and clinical trials of early therapeutic intervention in fALS patients identified as carrying ALS-related gene mutations are strongly recommended in the future. Also, potential biomarkers that allow early intervention to improve the therapeutic outcomes of ALS are anticipated in the near future. There is an urgent need to develop novel disease-modifying therapeutics to slow disease progression and extend the lifespan of ALS patients. Since emerging evidence has supported the notion that dysregulation of autophagy is critical for the pathogenesis of ALS, the autophagic signalling pathway may be a potential therapeutic target. Furthermore, studies elucidating the genotype-phenotype correlations in ALS patients in recent years have laid the ground for individualized, gene-specific therapeutic approaches. The results of these studies advance our understanding of ALS and boost our hope that disease progression can be curbed in the future. To date, most studies of ALS pathogenesis have focused on symptomatic adult stages, and there is no cure for ALS. Disease-modifying therapies for ALS remain restricted to two drugs, riluzole and edaravone. However, results from clinical trials have suggested greater benefits from riluzole treatment for patients at an early stage (i.e., less than 2 years) than at a later stage of their disease (i.e., more than 2 years) (B. J. [START_REF] Traynor | An outcome study of riluzole in amyotrophic lateral sclerosis--a population-based study in Ireland, 1996-2000[END_REF][START_REF] Zoing | Riluzole therapy for motor neurone disease: an early Australian experience (1996-2002)[END_REF]). Thus, there is an urgent need to find useful pathological hallmarks in the pre-symptomatic stage of ALS for early diagnosis and medical treatment, in order to prolong the survival of patients and achieve better therapeutic outcomes. Evidence for ALS pathogenesis before the symptomatic adult stages is growing.
Deciphering the biology of pre-symptomatic or pre-manifest ALS relies on a clear conceptual framework for defining the earliest stages of the disease. Following our lab's previous finding, ALS may even start at the developmental stage, because fetal E17.5 SOD G93A MNs showed a hyperexcitability in the lumbar spinal cord associated with a reduced terminal dendrite morphology and an increased input resistance (Martin et al., 2013). My PhD thesis investigated hyperexcitability-related physiological abnormalities in the SOD G93A spinal motoneuronal network at this prenatal stage, the causative link to motoneuronal vulnerability being a Cl--related inhibitory dysfunction. The work aimed at identifying novel pathological hallmarks in the presymptomatic stage of ALS and at unravelling the underlying cellular and molecular mechanisms. Finally, one of the most important results reveals that an elevated [Cl-]i induced the hyperexcitability of SOD G93A MNs. We then investigated the molecular (the Cl- transporters KCC2 and NKCC1) and cellular (Cl--related inhibitory synapses) mechanisms that regulate this Cl- homeostasis alteration, showing an association with a downregulation of the Cl- extruder KCC2 and a reduction in GABAergic/glycinergic inputs. Thus, my thesis work highlights the importance of Cl- homeostasis deficiency in the progression of ALS. Cl- homeostasis can be considered as a potential candidate pathological hallmark for ALS. Further work could also be conducted to: 1) decipher the upstream molecular mechanisms involved in the perinatal KCC2 deregulation; 2) investigate the alteration of Cl- homeostasis in other ALS mouse models (e.g., FUS, TDP43) during the perinatal period (E17.5, P0, P2, P6); 3) rescue the KCC2 deficiency using the KCC2-T906A/T1007A knock-in mouse that exhibits a potentiated KCC2 activity (targeting the KCC2 protein); 4) identify whether KCC2 is differentially enriched in inhibitory glycinergic clusters (i.e., GlyRs) across development at two key postnatal stages (P20, P50) in SOD1 G93A versus WT mice and in specific MN populations. This future work may provide us with a better understanding of how Cl- homeostasis affects synaptic inhibition and of its role in the progression of ALS.

Amyotrophic lateral sclerosis (ALS) is a fatal adult-onset neurodegenerative disease characterized by the degeneration of motoneurons (MNs) and with a multifactorial etiology. Most ALS studies have focused on the symptomatic stages, under the hypothesis that pathogenicity appears when the disease becomes symptomatic. However, a growing body of evidence indicates that pathogenicity may develop well before the symptoms. My PhD thesis work was based on the hypothesis that ALS, both familial and sporadic, stems from deficits present from early development onwards. The first part of my thesis consisted in analyzing the GABA/glycine post-synaptic currents (IPSCs) in embryonic (E) E17.5 MNs, located in the ventro-lateral motor column, in the SOD1 G93A (SOD) mouse model of ALS, in parallel with the analysis of chloride homeostasis. Our results showed that IPSCs are less frequent in SOD animals, in agreement with a reduction of VIAAT synaptic terminals around MNs. SOD MNs had an ECl 10 mV more positive than wild-type (WT) MNs from the same litter. This deficit was linked to a reduction of the chloride co-transporter KCC2.
Evoked and spontaneous IPSCs exhibited a longer relaxation in SOD MNs, in correlation with a higher [Cl-]i. Modeling showed that this excess relaxation compensated for the lower efficacy of GABA/glycine inhibition linked to the depolarized ECl. Interestingly, the simulations revealed the excitatory nature of depolarizing GABA/glycine post-synaptic potentials (dGPSPs) occurring at low frequency (<50 Hz) on SOD MNs but not on WT MNs. At higher frequency, the dGPSPs switched to an inhibition of the MN linked to the summation of shunting components. The second part of my thesis therefore focused on the effects of dGPSPs evoked electrically at different frequencies (7.5-100 Hz) on real lumbar E17.5 MNs in which a depolarizing ECl (below spike threshold) was imposed. The goal was to examine whether the excitatory effect could be linked to the morphological changes of E17.5 MNs described previously. The results showed that some MNs were indeed excited by low-frequency dGPSPs and inhibited at higher frequency (dual-effect MNs), whereas other MNs were inhibited whatever the frequency (inhibited MNs). The dual effect was more often detected in SOD MNs. WT MNs were classified into two groups according to their input resistance (Rin), the dual-effect MNs having a high Rin and the inhibited MNs a low Rin. Morphometric data highlighted a reduced dendritic tree for the dual-effect WT MNs (high Rin) and an extended dendritic tree for the inhibited MNs (low Rin). This was not the case for SOD MNs, which were excited or inhibited independently of their morphology. In agreement with simulations showing that a decrease in the density of inhibitory currents on the MN soma favors the excitation by dGPSPs, we found fewer VIAAT synaptic terminals on the soma and proximal dendrites of SOD MNs, and a reduced frequency of spontaneous dGPSPs. Taken together, the data of my thesis underline an early alteration of chloride homeostasis and of the GABA/glycine innervation of SOD1G93A MNs. Before birth, a dominant population of low-Rin MNs emerges in WT animals. These MNs, which are inhibited by dGPSPs, could correspond to the future vulnerable (fast, FF) MNs. These MNs are not inhibited in SOD animals. The dysfunction of inhibition could be attributed to two distinct factors: the morphology and the density of peri-somatic inhibitory synapses. Among these factors, the second plays a major role by controlling the capacity of GABA/glycine neurons to shape the spinal motor output.

Keywords: chloride homeostasis | synaptic integration | KCC2 | inhibition | spinal motoneurons | amyotrophic lateral sclerosis | SOD1G93A mouse model | simulation | patch-clamp | prenatal stage | GABAergic/glycinergic synaptic transmission | GABA/glycine
Figure 1. Organization of motor cortex and spinal cord. The descending corticospinal motor neurons (upper motor neurons) that project from the motor cortex make synapses in the brainstem and spinal cord, and the bulbar or spinal motor neurons (lower motor neurons) project into skeletal muscles. Upper motor neurons from different areas of primary motor cortex control the functions of tongue (green), arm (blue), and leg (red). Adapted from Taylor et al., 2016.

The diversity of MNs reflects the variety of targets they innervate, including a wide range of muscle fibre types. UMNs and LMNs differ in the location of their cell bodies, the neurotransmitters they release, their targets and the symptoms resulting from their injury. Terrestrial vertebrates possess hundreds of anatomically distinct muscle groups. Spinal MNs are located in the ventral horn of the spinal cord, where they control effector muscles located in the periphery. MNs that innervate these discrete targets are organized into many diverse subtypes within the spinal cord. The diversity of spinal MNs is built by hierarchical divisions and identified by the location of their cell bodies in the spinal cord, their axon trajectory, and their pattern of muscle innervation (Figure 2). Initially, all spinal MNs are generated from a single ventral progenitor domain. At a first level, the columnar organization, MNs are spatially and molecularly allocated to columns according to the body region they innervate. These columns occupy a defined position along the rostro-caudal axis of the spinal cord. The first division among spinal MNs separates the somatic MNs, which innervate skeletal muscles, from the visceral "MNs", which regulate the contraction of smooth muscle cells through a synaptic relay with the sympathetic neurons of the paravertebral ganglia. At a second level of organization, MNs within the main motor columns (e.g., LMC and MMC) are subdivided into medial and lateral divisions and project axons along different trajectories. The MMC contains MNs projecting axons towards proximal muscles, whereas the LMC contains MNs projecting axons onto distal (limb) muscles. The LMC can be divided into two divisions: a medial division projecting onto ventral limb muscles and a lateral division projecting onto dorsally located limb muscles. Medial LMC axons typically innervate flexor muscles, whereas lateral LMC axons frequently innervate extensor muscles. The functional relevance of this anatomical segregation is still unclear.

Figure 2. A hierarchy of spinal MN identities. MN subtype organization in the developing spinal cord depending on cell body position, axonal projections and gene expression. MNs are subdivided in two classes based on the innervation of skeletal muscle targets (somatic) and neuronal or glandular targets (visceral). Visceral and somatic MNs are derived from the same ventral progenitor domain at spinal levels. Somatic MNs reside in the motor column and are organized into distinct motor columns (e.g., LMC and MMC), where MN sub-groups can be segregated into medial and lateral divisions. Medial/lateral-division MNs are further subdivided into discrete MN pools innervating separate muscle groups. LMC: lateral motor column, MMC: median motor column, INT: interneurons. Adapted from Jessell 2000.

Figure 3. Columnar organization of spinal MNs by rostro-caudal axis patterning. Segmental distribution of the spinal motor columns present along the rostro-caudal axis.
MMC: median motor column, SAC: spinal accessory column, PMC: phrenic motor column, PGC: preganglionic column, HMC: hypaxial motor column, LMC: lateral motor column. Adapted from Stifani 2014.

Figure 4. Spinal MN specification and connectivity. Segmental identity of spinal MNs is established by specific Hox genes. Graded levels of the morphogens (FGFs, RA, and GDF11) determine the spatial expression of Hox gene paralogues that shape the spinal cord into a brachial, a thoracic, and a lumbar segment. Spinal MNs congregate into four distinct motor columns organized along the rostro-caudal axis: LMC MNs innervate limb muscles, PGC MNs (magenta) innervate sympathetic chain ganglia, MMC (formerly MMCm) MNs (light blue) innervate axial muscles and HMC (formerly MMCl) MNs (dark blue) innervate body wall muscles. LMC MNs are further subdivided into medial (LMCm) and lateral (LMCl) MN pools innervating ventral and dorsal limb muscles, respectively. FGFs: fibroblast growth factors, RA: retinoic acid, GDF11: growth differentiation factor 11, Hox: homeobox. Hox6, Hox9 and Hox10 are MN determinants at the brachial, thoracic and lumbar regions of the spinal cord, respectively. Adapted from Francius et al., 2013.

Neural circuits are assembled during development, and the selectivity inherent in their formation helps to define the behavioural repertoire of the mature organism. This developmental program begins with the differentiation of distinct classes of neurons from progenitor cells located at defined positions within the neural tube. The development of the embryonic spinal cord consists of four successive stages (Figure 5). At the neural plate stage, during gastrulation, the chordamesoderm induces a neural "fate" in the overlying ectoderm, which becomes the neural plate, a process known as neural induction. Newly formed neural cells are flanked laterally by epidermal ectoderm (ECT), while notochord cells (N) are positioned centrally in the neural plate and segmental plate mesoderm (S) is positioned laterally. At the neural fold stage, floor plate cells (F) are observed in the ventral midline region and the somitic mesoderm begins to take shape. At the neural tube stage, the notochord (N), forming from the chordamesoderm, induces ventral structures, while the dorsal lips of the neural plate fold inwards to generate the neural tube. Roof plate cells (R) differentiate at the dorsal midline, while neural crest (NC) cells delaminate from the dorsal neural tube. At the spinal cord stage, as development advances, the spinal cord develops a wide diversity of neuron classes arrayed across the dorso-ventral axis: MNs and ventral interneurons (V) differentiate in the ventral region of the spinal cord, while dorsal root ganglion (DRG) neurons, commissural (C) neurons and association (A) neurons differentiate in the dorsal region. DRG neurons differentiate from NC progenitors.

Figure 5. The development of the embryonic spinal cord. Abbreviations: a, epidermal ectoderm (ECT), notochord cells (N), segmental plate mesoderm (S). b, floor plate cells (F). c, roof plate cells (R), neural crest cells (NC). d, commissural (C) neurons, association (A) neurons, motor neurons (M), ventral interneurons (V), dorsal root ganglion (DRG) neurons, dorsal (D) and ventral (v) axes. Adapted from Jessell 2000.
Two opposing morphogen gradients pattern the dorso-ventral axis of the neural tube: 1) the morphogen Sonic hedgehog (Shh), secreted from the floor plate (FP) and the notochord (NC), modulates the ventral patterning of the neural tube; 2) Bone morphogenetic proteins (Bmps), secreted from the roof plate (R) and the surface ectoderm, modulate the dorsal patterning of the neural tube. Within an idealized segment of spinal cord, these morphogen gradients establish 12 progenitor domains: 7 dorsal interneuron progenitor divisions (pd1-6 and pdIL) that generate the dorsal interneuron classes (dI1-dI6), together with 4 ventral interneuron progenitor divisions (p0-3) and 1 MN progenitor domain (pMN) that generate the ventral interneuron classes (V0, V1, V2, V3) and MNs, respectively. All MNs are initially derived from ventral progenitor cells of the pMN domain, which are specified to become Olig2+ motor neuron progenitors via Shh and retinoic acid (RA) signals. These ventral progenitor cells give rise to one cell type early (e.g., MNs) followed by another cell type later (e.g., oligodendrocytes) (Figure 6). Typically, the transcription factors expressed in adjacent progenitor domains repress the expression of the factors of neighbouring domains, preventing cells from developing with hybrid identities. Transforming growth factor β (TGF-β) signalling is necessary for proper neural development, differentiation and function throughout life. Many elements of the nervous system are regulated by TGF-β signals produced by sequential waves of activation, inhibition, and reactivation of TGF-β family members (Meyers et al., 2017). TGF-β family proteins from the overlying ectoderm (e.g., Bmp4) and roof plate (e.g., Gdf7) produce dorsalizing signals. The dorsal-most progenitor domains, pd1-pd3, depend on TGF-β, whereas pd4-pd6 and pdIL are independent of TGF-β. The somites produce RA, which controls subtype and dorso-ventral identity through Pax6. RA signalling is required at several steps during the development of the spinal cord, from the specification of generic properties to the final acquisition of neuronal subtype identities. As cells mature within their respective progenitor zones and begin to exit the cell cycle, an abrupt transition occurs in their transcriptional profile. The postmitotic cells from some progenitor domains (e.g., p2) become further diversified through intercellular signalling interactions (e.g., Notch-Delta), resulting in the generation of excitatory V2a and inhibitory V2b neurons from common ancestral progenitor cells. The Delta/Notch pathway governs the specification, proliferation, and differentiation of neuronal precursors in embryonic and adult tissues. Notch receptors and Delta ligands are transmembrane proteins that elicit signalling between adjacent cells. Delta/Notch interactions trigger a metalloprotease-mediated cleavage of the Notch extracellular domain, followed by a γ-secretase-dependent intramembrane proteolysis that releases the Notch IntraCellular Domain (NICD) into the cytoplasm. NICD enters the nucleus, where it interacts with the DNA-binding CSL transcription factor and promotes target gene transcription (Figure 6). The neuronal diversity produced in the embryonic nervous system during neurogenesis stems from cellular gene expression. The initial subdivision of MNs is driven by the action of the Hox factors, according to the segmental level they occupy along the rostro-caudal axis.
Afterwards, following the consolidation of their MN fate, newly born MNs undergo a diversification process that results in the production of cell types displaying distinct molecular identities, migration pathways, axonal trajectories, and target connectivity, located in the different motor columns and pools. Defective consolidation of MN identity results in a partial or complete transformation of the MN population into the dorsally adjacent V2a interneurons. This is likely due to the fact that MNs and V2a interneurons share some of their developmental determinants, including the Nkx6 and Lhx3 factors (Francius & Clotman, 2013). Several molecular mechanisms are involved in the consolidation of MN identity: early MN markers include Hb9, Isl1, and Hnf-6, whereas V2a interneuron markers include Chx10. Inactivation of Hb9 results in a defective consolidation of MN identity and a partial conversion into V2a interneurons. MNs are subdivided by their cell body positions within motor columns. Each motor column consists of multiple MN pools innervating individual muscles.

Figure 6. Neuronal specification during spinal cord development along the dorso-ventral axis. Top panels | In an idealized developing mouse spinal cord, sequential genetic steps produce the patterning and specification of embryonic spinal cord progenitors and the diversity of their neuronal progeny, including MNs, interneurons and oligodendrocytes. The progression from neural progenitor cells to postmitotic neurons spanning embryonic day 9.5 (E9.5) to E18.5 is shown from left to right, although some events are not strictly linear. At E9.5, morphogen gradients of Shh (ventrally), and Bmps and Gdf7 (dorsally) provide instructive positional signals to dividing progenitors in the ventricular zone. This leads to the restricted activation of patterning factors in distinct dorso-ventral domains, including Nkx6.1 (ventral), Pax6 (intermediate), and Pax3 and Pax7 (dorsal). At E11, eleven early classes of postmitotic neurons are present in the embryonic spinal cord: dI1-dI5 neurons derived from dorsal progenitors primarily contribute to sensory spinal pathways, whereas dI6, MN and V0-V3 neurons arising from intermediate/ventral progenitors are involved in the locomotor circuitry. A number of postmitotic transcription factors are indicated for the identification of each generic neuronal population, e.g., Chx10 is a marker of V2a neurons and Isl1/2 is a marker of MNs. Bottom panels | Spinal premotor interneurons are organized in the contralateral medial ventral horn, and in the ipsilateral spinal cord in the deep dorsal horn, intermediate region, and ventral horn. The functions of the cardinal cell types in the spinal cord, particularly those that relate to locomotor behaviours, have been partially identified. The differential expression patterns of the Hox (i.e., homeodomain) transcription factor paralogues along the rostro-caudal axis further subdivide MNs, but for clarity, these patterns are not reflected in this schema. Shh: Sonic hedgehog, Bmps: Bone morphogenetic proteins, Gdf7: Growth differentiation factor 7. Adapted from Alaynick et al., 2011.

Figure 7. Organization of the locomotor system in vertebrates. (a) Schematic of the rodent central nervous system, showing the neural structures that configure the motor pathways controlling simple behaviours such as respiration, chewing, and locomotion.
(b) Motor pathways in aquatic and terrestrial vertebrates share a similar neuroanatomical structure. Local control of muscle movements is regulated by pools of MNs in the spinal cord that are part of a decentralized locomotor CPG network. Spinal motor centers are modulated by proprioceptive sensory feedback via sensory afferents. Descending reticulospinal (RtS), vestibulospinal (VS) and rubrospinal (RbS) pathways control the locomotor network in the spinal cord, although the RtS pathway is the primary pathway for initiating locomotion. The RtS pathway can be activated by the mesencephalic locomotor region (MLR), which receives inputs from the basal ganglia and thalamus. The cerebellum coordinates motor behaviors by mediating sensory and internal feedback and optimizing the motor pattern to the task at hand. It also coordinates spinal motor actions through the supraspinal motor pathways. Connections from the motor cortex refine and initiate motor actions. The black lines indicate direct command pathways, the grey lines indicate feedback pathways. Adapted from Goulding 2009.

Figure 8. Organization of the mouse spinal locomotor CPG network. Commissural interneurons (CINs) are the core of the assembly of the locomotor CPG network in rodents that ensures left-right alternation in motor activity. Two CIN populations are involved in left-right alternation: a set of inhibitory CINs (CINi) inhibits contralateral MNs directly, whereas a set of excitatory CINs (CINe) inhibits contralateral MNs indirectly by acting on GABAergic/glycinergic inhibitory INs, including ipsilaterally projecting inhibitory interneurons (IINi) and Renshaw cells (RC). Left-right synchrony is obtained via a single CINe acting directly on the MN, or driven directly from the rhythm-generating core on the ipsilateral side. To obtain left-right coordination during locomotion, these crossed connections should also connect to the rhythm-generating core (dotted lines) on the other side of the cord and/or to the corresponding commissural interneurons. The drive of the left-right coordinating pathways is mediated through multiple ipsilaterally projecting glutamatergic neurons, including the V2a population and a group of rhythm-generating neurons. In this diagram, a single neuron represents a group of neurons. Adapted from Kiehn 2011 and Crone et al., 2008.

Figure 9. Wiring diagram of inhibitory interneuron microcircuits within the spinal locomotor CPG network in ~P21 mice. Ipsilateral interneurons belong mainly to two groups: sensory afferent neurons and the V1 group, including Sp8-expressing V1 spinal neurons (V1 Sp8) and V1 Renshaw cells (V1 R), which send direct monosynaptic inputs to three hindlimb pools of MNs acting on the hip, ankle and foot. Gluteus (GL) MNs innervate hip extensor muscles and occupy an extreme ventral position in the LMC. Tibialis anterior (TA) and intrinsic foot (IF) MNs innervate ankle flexor and foot plantar-flexor muscles, respectively, and occupy a similar dorsal position within the LMC. V1 R interneurons do not receive collateral input from IF MNs, but can receive collateral input from hip and ankle MNs. V1 Sp8 interneurons, however, receive no motor axon collateral contacts, suggesting that the collateral branches of MNs are restricted to V1 R interneurons, which mediate a feedback or recurrent inhibition. The solid and dotted lines represent prevalent and sparse synaptic connectivity, respectively. From Bikoff et al., 2016.
Figure 10. Schematic representation of the different types of motoneurons. MNs in the adult can be classified into functionally diverse classes and subtypes. (A) One division, into alpha (α), beta (β), and gamma (γ) MNs, is made according to the type of muscle fibre that each class innervates: α-MNs innervate extrafusal fibres, γ-MNs innervate intrafusal fibres, and β-MNs innervate both intrafusal and extrafusal fibres (this class is not shown in this schema). α-MNs target distinct muscle fibres and are therefore further subdivided into three subtypes: FF (fast-twitch fatigable), FR (fast-twitch, fatigue-resistant) and S (slow-twitch, fatigue-resistant). (B) A typical motor pool contains all types of MNs, fast and slow, α and γ: α-FF, α-FR, α-S and γ. The upper pool innervates a fast muscle but also contains α-S and γ MNs. Similarly, the lower pool would be classified as slow. Adapted from Kanning et al., 2010.

Figure 11. The local spinal motoneuronal network controls the excitation/inhibition balance in the normal condition. Lower motoneurons execute voluntary movements by integrating central descending and peripheral commands, as well as local inhibitory (GABAergic and/or glycinergic interneurons and Renshaw cells) and modulatory (cholinergic and serotonergic interneurons) inputs. Adapted from Crabé et al., 2020.

Figure 12. The diagrams summarize the changes in the integration of Renshaw cells and Ia interneurons in the ventral spinal network during development. Renshaw cells and Ia interneurons are derived from V1 interneurons that are generated from p1 progenitors in the early neural tube (top, left). | V1 interneurons take a ventro-lateral migratory path and position themselves in close relationship to MNs, with which they start to interact synaptically. V1 interneurons produce several classes of ipsilaterally projecting GABAergic/glycinergic premotor interneurons: RC and IaIN precursors (top, middle). At mid-embryonic stages, all early connectivity (primarily cholinergic and GABAergic, and less glycinergic and glutamatergic) is excitatory; however, the morphological details of these early connections are largely unknown (question marks). It is believed that this early connectivity has important roles in transmitting motoneuron spontaneous activity to the whole spinal network and that this spontaneous activity is important during early axonal guidance and target recognition. An amplifier role for early MN-initiated spontaneous activity could be accomplished by RCs directly, or indirectly through connections with other interneurons such as IaINs. The anatomical nature and organization of these connections remain speculative at present, and current studies are actively pursuing the clarification of this early synaptology. | At late embryonic stages (top, right), inhibitory synapses become hyperpolarizing and the role of RCs in transmitting excitation down-regulates in favor of exerting inhibitory influences over the MNs. Primary afferent axons begin to invade the ventral horn of the spinal cord in late embryos, at the same time as they induce the differentiation of the sensory apparatus in the periphery (muscle spindles and Golgi tendon organs). By the time of birth (bottom, right), RCs are contacted by both sensory afferents and motor axons and, in this sense, they are a class of proprioceptive inhibitory interneuron that also receives direct excitation from MNs. The source of the sensory input is as yet unknown.
If it originates from antagonist muscles, then Renshaw cells could behave at this stage as a special class of IaIN. After maturation and the onset of weight-bearing locomotion (bottom, left), this sensory input is functionally lost (indicated by an X), although the sensory inputs are still present structurally, and the role of RCs then becomes more focused on recurrent inhibition, as in the adult. From Alvarez and Fyffe 2007.

Figure 14. Modes of GABAA receptor activation. (a) The release of GABA (blue shading) from a single presynaptic vesicle activates only those postsynaptic GABAA receptors clustered immediately beneath the release site (yellow). The current record shows an averaged waveform of the resulting miniature inhibitory postsynaptic currents (mIPSCs). (b) Action potential-dependent release of multiple vesicles, or evoked release from several terminals, promotes GABA "spillover", which activates both synaptic and extrasynaptic receptors (blue). The current record shows the larger and much slower averaged waveform of the resulting eIPSCs. The area of the mIPSC shown in panel (a) is superimposed for comparison. (c) A low concentration of ambient GABA, which persists despite the activity of the neuronal and glial GABA transporters (GAT1 and GAT3, respectively), tonically activates extrasynaptic GABAA receptors. The trace shows the tonic current that results from the stochastic opening of these high-affinity receptors, with superimposed phasic currents. The GABAA receptor antagonist gabazine (SR-95531) blocks phasic IPSCs and tonic channel activity, causing a change in the "holding" current and a reduction in current variance. The shaded area beneath the current record before SR-95531 application represents the charge carried by tonically active GABAA receptors. The current records are from whole-cell patch-clamp recordings of granule cells in acute cerebellar slices from adult mice. Adapted from Macdonald and Botzolakis 2009.

Although simpler receptor assemblies have been described, for example in hippocampal neurons (Mortensen et al., 2006), the overwhelming majority of native GABAA receptors are composed of ternary subunit combinations (McKernan & Whiting, 1996).

Figure 15. Structure of Cys-loop receptor (GABAA receptor) ion channels. A. Individual subunits contain four hydrophobic transmembrane (TM) domains. The large N-terminal domain is located extracellularly and is believed to contain the neurotransmitter and modulator binding sites. The C-terminal domain is also located extracellularly; however, it is typically only a few amino acids long. The intracellular domain between TM3 and TM4 is the most divergent region and contains numerous consensus sites for the action of both serine/threonine and tyrosine protein kinases. B. Receptors are assembled as pseudo-symmetrical pentamers. The ion channel is formed by a ring of TM2 domains (blue). This is surrounded by a second ring composed of TM1 and TM3 domains, which is in turn surrounded by a third ring composed of TM4 domains. Adapted from Macdonald and Botzolakis 2009.

Figure 16. Structure of the glycine receptor. A. Membrane topology of a subunit with the four transmembrane domains, showing the position of functionally important amino acid residues (see text). The filled circle (black) in the TM3-TM4 intracellular cytoplasmic loop (serine 391) is the phosphorylation site for PKC. The transmembrane domain M2, forming the pore of the chloride channel, is in black.
B. Arrangement of α and β GlyR subunits in the heteromeric (left) and homomeric (right) receptors. The single-channel recordings (bottom) show the subconductance levels, depending on the GlyR subunit combinations. Adapted from Legendre 2001.

GABA shifts from depolarizing to hyperpolarizing during development (Ben-Ari et al., 2007; Ben-Ari et al., 2012). During early neuronal development, GABAAR responses are often depolarizing and excitatory, and this property is important for the facilitation of neuronal proliferation, migration, and synaptic integration (Ge et al., 2006). The developmental switch of GABAAR transmission towards a hyperpolarizing, inhibitory response is due primarily to changes mediated by the neuronal sodium-potassium-chloride cotransporter 1 (NKCC1, mediating Cl- influx) (Yamada et al., 2004) and the potassium-chloride cotransporter 2 (KCC2, mediating Cl- efflux), in particular an enhanced KCC2 surface expression and function shortly after birth (Rivera et al., 1999). There is heterogeneity in the timing of this developmental switch in the polarity of GABA. In the rodent hippocampus, the change of polarity of GABA occurs during the first two postnatal weeks (Farrant & Kaila, 2007; Tyzio et al.; Valeeva et al., 2016). In the embryonic mouse spinal cord, this switch occurs after E15.5 (Jean-Xavier et al., 2006; Delpy et al., 2008; Stil et al., 2009; Allain et al.).

Figure 17. Development of the GABAAR-mediated inhibitory transmission in mouse lumbar spinal MNs. From top to bottom: schematic drawings (frontal views) depict the transient expression of GABA in spinal ventral interneurons (in green), while horizontal bars indicate the permanent KCC2 (in blue) and transient NKCC1 (in violet) activity. The color intensity encodes the level of activity. NKCC1 inactivation combined with KCC2 activity leads to a significant decrease in [Cl-]i and a disappearance of the GABAAR-mediated excitatory effects. In parallel with the maturation of the chloride cotransporters KCC2 and NKCC1, the spinal cord starts to convey its first synaptic activity at E12.5, which is GABAergic (green horizontal bar). Bottom: maturation of the chloride equilibrium potential (ECl), spike threshold and resting membrane potential (Vrest) across the embryonic developmental stages. Note: the drop of ECl at E16.5 accounts for the GABAAR-mediated "shunting" effect. Adapted from Allain et al.

During the course of embryonic development, the resting membrane potential (Vrest) of mouse spinal MNs remains below ECl.
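The dependence of ECl on [Cl-]i that underlies this maturation follows directly from the Nernst equation; the concentrations in the worked example below are illustrative assumptions, not values measured in this thesis:

\[
E_{\mathrm{Cl}} \;=\; \frac{RT}{z_{\mathrm{Cl}}F}\,\ln\frac{[\mathrm{Cl}^-]_o}{[\mathrm{Cl}^-]_i} \;=\; -\frac{RT}{F}\,\ln\frac{[\mathrm{Cl}^-]_o}{[\mathrm{Cl}^-]_i},
\qquad \frac{RT}{F}\approx 26.7\ \mathrm{mV}\ \text{at}\ 37^{\circ}\mathrm{C}.
\]

With an assumed [Cl-]o of 130 mM, an immature-like [Cl-]i of 30 mM gives ECl = 26.7 mV x ln(30/130), i.e. about -39 mV, well above a Vrest near -60 mV, so GABAAR/GlyR opening is depolarizing; a mature-like [Cl-]i of 8 mM gives ECl of about -74 mV, below Vrest, so the same channels hyperpolarize the cell. On this scale, a 10 mV positive shift of ECl, as reported above for SOD MNs, corresponds to roughly a 1.5-fold increase in [Cl-]i.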
Although GABAAR activation can trigger the firing of MNs until E15.5, beyond this embryonic stage such activation, although still producing a depolarization, fails to trigger action potentials (Delpy et al., 2008) (Figure 17). This shunting depolarizing GABA effect likely persists during postnatal stages, even though our experimental measurements indicate that ECl reaches the MN Vrest at P0 (Delpy et al., 2008). During the first three weeks of rodent postnatal development, inhibitory synaptic transmission changes in multiple ways that differ between brain areas. The respective contributions of the glycinergic and GABAergic transmission to the overall inhibitory message received by postsynaptic neurons may vary during the developmental period. For example, a developmental switch from a predominantly GABAergic to a mainly glycinergic neurotransmission occurs in the lumbar spinal cord (Gao et al., 2001) and in the lateral superior olive of young rodents (Kotak et al., 1998; Nabekura et al., 2004), while GABAergic neurotransmission dominates in developing collicular neurons (Meier et al., 2002) (Figure 18).

Figure 18. Developmental changes in the proportions of GABAergic and glycinergic synaptic activity in various areas of the CNS. A developmental switch from a predominantly GABAergic to a mainly glycinergic neurotransmission occurs in the lumbar spinal cord and in the lateral superior olive of young rodents.

Figure 19. Schema of shunting inhibition. A neuron receives one excitatory and one inhibitory input. (a) Stimulation of the excitatory input causes an inward post-synaptic current that spreads to the soma, where it can be recorded as an EPSP. (b) When the inhibitory and excitatory inputs are stimulated together, the depolarizing current leaks out before it reaches the soma. Adapted from Paulus and Rothwell 2016.

Figure 20. Intracellular Cl- homeostasis. Intracellular Cl- homeostasis is determined by the functional activities of the various Cl- transporters expressed in the plasma membrane of a given cell. These Cl- transporter molecules include the cation-chloride cotransporters NKCC, NCC and KCC, members of the SLC6 family, Cl-/HCO3- antiporters (AEs), and Cl-/H+ antiporters (ClC-3, -4, -5 or -7). X represents a neurotransmitter such as GABA, dopamine, glycine, etc. (Chen et al., 2004). The direction of Cl- transport depends on the chemical or electro-chemical potential gradients across the plasma membrane. Adapted from Yasunobu et al., 2009.

Figure 21. Schematic representation of GABAergic signaling in mouse. The level of the intracellular chloride concentration ([Cl-]i) dictates the polarity of the current through GABAA receptors (GABAARs). If [Cl-]i is low, the reversal potential of Cl- (ECl) becomes negative compared to the resting membrane potential (RMP). In this condition, GABAARs mediate an inward Cl- current that results in hyperpolarization of the cell membrane (a). In contrast, a high [Cl-]i results in a positive shift of ECl and leads to an outward Cl- current through GABAARs and a depolarization of the cell membrane that potentially induces action potential firing (c). In conditions where ECl shifts to values similar to the RMP, there will be no net Cl- current through GABAARs (b). Adapted from Rahmati et al., 2018.
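The shunting effect illustrated in Figure 19 can be summarized by the steady-state voltage of a single-compartment membrane receiving leak, excitatory and inhibitory conductances; this is a minimal sketch with generic symbols, not a model used in this thesis:

\[
V_{ss} \;=\; \frac{g_L E_L + g_e E_e + g_i E_i}{g_L + g_e + g_i}.
\]

When the inhibitory reversal potential E_i is close to V_rest (= E_L), adding g_i barely moves V_ss by itself, but it enlarges the denominator and thereby divides down the depolarization produced by g_e: this is shunting inhibition. When E_i is slightly depolarized, as with the embryonic ECl described above, a small g_i adds a net depolarizing drive, whereas a large, summating g_i (e.g., during high-frequency dGPSPs) is dominated by the shunt, which is consistent with the frequency-dependent dual effect described in the thesis summary.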
Neuronal [Cl-]i is set primarily by the two cation-chloride cotransporters NKCC1 and KCC2 (Payne et al., 2003; Medina et al., 2014; Pisella et al., 2019), which work by accumulating and extruding Cl-, respectively. KCC2 is encoded by Slc12a5 and is exclusively expressed in the plasma membrane of the somata and dendrites of pyramidal neurons and interneurons, from the hippocampus and neocortex to the spinal cord. It pumps Cl- across the plasma membrane out of the cell. In contrast, NKCC1, which is widely expressed in central and peripheral neurons as well as glial cells, is encoded by Slc12a2 and facilitates the uptake of Cl- into cells (Benarroch; Rahmati et al., 2018). KCC2 exists as two splice variants, KCC2a and KCC2b; KCC2b is responsible for establishing the hyperpolarizing GABAA receptor-mediated transmission (Zhu et al.). NKCC1a and NKCC1b are two highly homologous NKCC1 isoforms; the NKCC1a isoform exhibits mRNA expression primarily in the brain (Cutler & Cramb). NKCC1 and KCC2 both comprise 12 membrane-spanning segments, 6 extracellular loops, and intracellular N- and C-termini. They differ in the position of their regulatory sequences, phosphorylation sites, and long extracellular loops (Gamba, 2005; Acton et al.; Hartmann & Nothwang). In immature neurons, an age-specific up-regulation of NKCC1 and a relative deficiency in KCC2 load more Cl- into the cell, resulting in a net Cl- outflow and a subsequent depolarization when GABA activates GABAARs (Figure 22). Conversely, a higher KCC2 and lower NKCC1 expression results in a net Cl- influx in adult neurons (Plotkin et al., 1997; Dzhala et al., 2005; Löscher et al.). The developmental shift in the expression and/or function of NKCC1 and KCC2 has sparked a large number of studies aiming to understand the physiological and pathological mechanisms underlying this reversal of GABA function. Additionally, cation-chloride cotransporters also regulate cell volume (Kahle et al., 2008).
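The push-pull between Cl- loading and KCC2-mediated extrusion described here, and pictured in Figure 23 below, can be captured by a first-order sketch; the linear extrusion term and the symbols J_load and k_KCC2 are simplifying assumptions introduced for illustration only:

\[
\frac{d[\mathrm{Cl}^-]_i}{dt} \;=\; J_{\mathrm{load}} \;-\; k_{\mathrm{KCC2}}\big([\mathrm{Cl}^-]_i - [\mathrm{Cl}^-]_i^{\,\mathrm{eq}}\big)
\quad\Rightarrow\quad
[\mathrm{Cl}^-]_i^{\,ss} \;=\; [\mathrm{Cl}^-]_i^{\,\mathrm{eq}} + \frac{J_{\mathrm{load}}}{k_{\mathrm{KCC2}}}.
\]

Here J_load lumps NKCC1-mediated uptake together with synaptic Cl- influx, and k_KCC2 stands for the extrusion capacity. The steady state makes the two routes to a high [Cl-]i explicit: increasing the load (high NKCC1, as in immature neurons) or decreasing the extrusion capacity (KCC2 down-regulation) both raise [Cl-]i and shift ECl towards depolarized values.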
Dysfunction of these cotransporters has been associated with the damaging secondary effects of cerebral edema after ischemic and traumatic brain injury, and with neurodegeneration in the CNS and PNS (Kahle et al., 2008).

Figure 22. A developmental switch in cation-chloride cotransporter expression regulates excitatory GABAergic signaling in the developing CNS and inhibitory GABAergic signaling in the adult CNS. A. During embryonic and early postnatal life, immature hippocampal and spinal neurons express the chloride-importing Na-K-2Cl cotransporter NKCC1 at higher levels than the chloride-extruding K-Cl cotransporter KCC2. Owing to the high activity of NKCC1 and the minimal activity of KCC2, [Cl-]i is high, and ECl is positive relative to the Vm of the neuron. When [Cl-]i is high, the opening of chloride channels by GABAA receptor activation results in a chloride efflux, thereby depolarizing and exciting the neuron. B. During neuronal maturation, NKCC1 activity decreases and KCC2 activity increases. KCC2 maintains a low [Cl-]i, and ECl is negative relative to Vm. Therefore, activation of the GABAA receptors results in a chloride influx, hyperpolarization and inhibition of the neuron. [Cl-]i: intracellular chloride concentration; ECl: equilibrium potential for chloride; Vm: membrane potential; GABA: γ-aminobutyric acid. Adapted from Kahle et al., 2008.

Figure 23. Relationship between [Cl-]i, Cl- load (NKCC1 activity) and KCC2 activity. A high [Cl-]i results from a combination of a low Cl- extrusion capacity (left) and a high Cl- load (bottom). A low Cl- load coupled with a low Cl- extrusion capacity (top left), or a high Cl- load coupled with a high Cl- extrusion capacity (bottom right), will both yield a low [Cl-]i. Adapted from Doyon et al., 2016.

Phosphorylation of S940 (Silayeva et al., 2015) and dephosphorylation of T906/T1007 (Moore et al., 2018; Watanabe et al., 2019) are essential for the potentiation of KCC2 activity. In contrast, phosphorylation of the T906/T1007 residues reduces KCC2 membrane expression, which can lead to spine swelling (Heubl et al., 2017), thereby affecting both GABA signaling and dendritic spine morphology.

Figure 24. Hypothetical structural model of the K+-Cl- cotransporter. Human KCC2 (hKCC2) and human KCC1 (hKCC1) are compared to one another by colors that indicate the degree of similarity on a per-residue basis: red residues are identical and black residues are absent from hKCC1. The branched lines are potential N-linked glycosylation sites between putative transmembrane segments 5 and 6. Predicted secondary structural elements are shown as helices for predicted alpha-helices and wavy lines for predicted beta-sheets. A re-entrant loop between putative transmembrane segments 2 and 3 has been predicted for NKCC2 and is shown for KCC2. Adapted from Payne 2009.

The term "lateral sclerosis" (fibrosis) refers to gliosis of the crossed corticospinal tract in the dorsolateral quadrant of the spinal cord (Charcot, 1874; Frey et al., 2000; Pun et al., 2006). Sir Charles Bell, an Edinburgh surgeon, first described the clinical features of ALS in the mid-19th century (Turner et al., 2015). In 1862, Charles Radcliffe and Lockhart Clarke further classified the disease as a disease of nervous origin consisting of paralysis and muscular atrophy (Turner et al., 2015).
Figure 25. Amyotrophic lateral sclerosis: muscle denervation and atrophy upon lower MN loss. Note the loss of the myelin sheath in ALS MNs. From Hajer EL OUSSINI-BEN CHAABANE's PhD manuscript, 2016.

The SOD1 gene encodes the superoxide dismutase-1 (SOD1) subunit, a protein consisting of 153 amino acids, which, when bound with copper and zinc, forms a SOD1 homodimer (Turner & Talbot, 2008). The dimeric form of SOD1 acts as an antioxidant, metabolising the toxic superoxide radical to molecular oxygen and hydrogen peroxide (Rosen et al., 1993; Kaur et al.). Mutations in the SOD1 gene result in conformational instability of the protein, which leads to the formation of large intracellular protein aggregates that contribute to motor neuron death (Bruijn et al., 1998).

Figure 26. ALS gene discovery since 1990. The size of each circle reflects the proportion of all familial ALS cases associated with that gene (e.g., 20% for SOD1 and 45% for C9ORF72). Blue circles indicate genes associated only with familial ALS, red circles indicate genes associated only with sporadic ALS, and circles that are half blue and half red indicate genes associated with both familial and sporadic ALS. Each of these genes has been found to be mutated in more than one ALS-affected family or in multiple, unrelated cases of sporadic ALS. From Brown et al., 2017.

Figure 27. Typical pathological hallmark features of ALS in human patients. Affected spinal MNs accumulate small dot-like granular aggregates (inclusions) of misfolded wild-type SOD1 (misSOD1) in spinal cord sections of patients with mutations in six ALS-associated genes: C9orf72HRE (A-B, cases #2 and #8), NEK1 (C), C9orf72HRE (D, case #16), FUS (E), KIF5A (F), ALSIN (G), VAPB (H). In patients with different SOD1 gene mutations, affected spinal MNs accumulate larger or skein-like inclusions (images not shown here). Adapted from Forsberg et al., 2019.

Figure 28. ALS pathomechanisms and targets tested in recent preclinical trials in rodents. ALS pathophysiology encompasses both cell-autonomous and non-cell-autonomous mechanisms. From McGoldrick et al., 2013.

The consequences of C9orf72 repeat expansions are increasingly understood. A GGGGCC hexanucleotide repeat expansion in intron 1 of the chromosome 9 open reading frame 72 (C9ORF72) gene is the most common genetic cause of ALS and frontotemporal dementia. Repeat-associated non-ATG translation of dipeptide repeat proteins (DPRs) contributes to the neuropathological features of c9FTD/ALS. Among the five DPRs, the arginine-rich poly-PR is reported to be the most toxic. A study by Maor-Nof and colleagues elucidated how the dipeptide repeat protein poly-proline-arginine (poly-PR), which is generated from the repeat expansion, induces apoptotic pathways and, ultimately, neurodegeneration through the epigenetic modulation of gene transcription via the transcription factor p53, thereby indicating p53 ablation as a novel therapeutic strategy (Maor-Nof et al., 2021). Hao and colleagues generated a transgenic mouse model that specifically expresses poly-PR (GFP-PR28), without repeat RNA, in neurons under the Thy1 promoter.
The homozygous GFP-PR28 mice showed a reduced body size, a decreased body weight, and a reduced survival. Heterozygous GFP-PR28 mice showed motor deficits, especially a progressive gait and balance impairment. Consistent with these cerebellum-associated abnormal behaviours, a loss of Purkinje cells, but not of hippocampal neurons, was present in heterozygous GFP-PR28 mice. Moreover, microglia and astrocytes in the cerebellum and lumbar spinal cord of heterozygous GFP-PR28 mice were significantly activated. Finally, the poly-PR-expressing neurons developed a dysregulation of synaptic transmission-related genes. These data demonstrate that GFP-PR28 transgenic mice partly model the neuropathological features of c9FTD/ALS (Hao et al., 2019). However, BIIB078, an investigational antisense oligonucleotide for C9orf72-associated ALS, did not show clinical benefit in a Phase 1 study, and this clinical program was discontinued in March 2022 (https://investors.biogen.com/news-releases/news-release-details/biogen-and-ionis-announce-topline-phase-1-study-results).

Since its discovery as a primary component of the cytoplasmic aggregates found in post-mortem tissue of patients with ALS, the identification of mutations in the TAR DNA Binding Protein 43 kDa gene (TARDBP, encoding TDP-43) has provided crucial insight into common pathogenic themes in ALS (Kabashi et al., 2008; Sreedharan et al., 2008; Van Deerlin et al., 2008; Pesiridis et al., 2011). TDP-43 links the familial and sporadic forms of ALS, as mutations are causative for the disease and cytoplasmic aggregates are a hallmark of nearly all cases, regardless of the TDP-43 mutational status. Research has focused on the formation and consequences of cytosolic protein aggregates as drivers of ALS pathology, through both gain- and loss-of-function mechanisms. Not only does aggregation sequester the normal function of TDP-43, but these aggregates also actively block normal cellular processes, inevitably leading to cellular demise in a short time span. TDP-43 pathology appears to be tightly linked to its mislocalization from the nucleus to the cytoplasm, making it difficult to decouple the consequences of nuclear-to-cytoplasmic mislocalization from those of protein aggregation. Studies focusing on the effects of TDP-43 mislocalization have demonstrated both gain- and loss-of-function consequences, including altered splicing regulation, over-responsiveness to cellular stressors, increases in DNA damage, and transcriptome-wide changes. Additionally, mutations in TARDBP confer a baseline increase in cytoplasmic TDP-43, thus suggesting that small changes in the subcellular localization of TDP-43 could in fact drive early pathology (Suk et al., 2020). Certainly, insights into the early events of TDP-43 pathogenesis (i.e., up to the protein mislocalization stage) will inform disease mechanisms, therapeutic targets, and novel biomarkers for ALS.

Fused in sarcoma (FUS) is a DNA/RNA-binding protein containing 526 amino acids. The FUS gene was initially identified as a fusion oncogene on chromosome 16 in human liposarcoma (Crozat et al., 1993), whose translocation and fusion to transcription factors results in a strong transcriptional activation of the fusion proteins. FUS is one of the components of the heterogeneous nuclear ribonucleoprotein (hnRNP) complex.
Increasing evidence suggests that FUS is involved in various cellular processes, including gene transcription and regulation, DNA repair, RNA splicing, RNA transport, translation, the processing of microRNAs, and the maintenance of genomic stability (Lagier-Tourenne et al., 2010). FUS is genetically and pathologically involved in frontotemporal lobar degeneration (FTLD)/ALS. Multiple lines of evidence across diverse models suggest that a functional loss of FUS can lead to neuronal dysfunction and/or neuronal cell death. Loss of FUS in the nucleus can impair alternative splicing and/or transcription, whereas dysfunction of FUS in the cytoplasm, especially in the dendritic spines of neurons, can cause mRNA destabilization.

Figure 30. Schematic representation of the selective and progressive degeneration of MNs. In ALS, as MN axons die back at the NMJ, the ability to control the movement of the muscles is progressively lost. The general sequence of degeneration is from FF to FR to S. (A) At P50, FF, FR and S units are normal in terms of controlling the movement of the muscles. (B) At P60, there is a selective dieback of FF motor axons, resulting in a failure to control the movement of the muscles. However, denervated endplates can, for a period, be reinnervated by axonal regrowth or sprouting from either FR or S MNs (only S shown for simplicity). (C) At P90, both FF and FR MNs fail to control the movement of the muscles. Similarly, due to the high sprouting capacity of S motor axons, their denervated endplates can be reinnervated for a period by sprouting from S MNs. However, at late stages, even S axons die back (not shown in the drawing). Motor units: FF (fast-twitch fatigable): high force, fast contraction speed, but fatigues in a few seconds; FR (fast-twitch fatigue-resistant): intermediate force, fast contraction speed and resistant to fatigue; S (slow-twitch fatigue-resistant): low force, slower contraction speed, highly fatigue-resistant. Adapted from Kanning et al., 2010 and Clemence Martinot's PhD manuscript, 2011.

TDP-43 proteinopathy is a pathological hallmark of most ALS cases (Mackenzie et al., 2007; Mackenzie et al., 2010; Scotter et al., 2015; Le et al.; Tan et al.). Loss of TDP-43 from the nucleus is evident in MNs from ALS/FTD patient tissues, concomitant with the formation of TDP-43 inclusions in the cytoplasm of both MNs and glia. Neuropathological studies have also revealed that the clinical course of ALS reflects the presence of TDP-43 pathology, from its deposition at an initial site of onset to its spread to contiguous regions of the CNS (Brettschneider et al., 2013). Mutations in TDP-43 are also present in 5% of the familial forms of ALS (Sreedharan et al., 2008). In the genetic forms of ALS, it remains unclear why MNs are specifically affected when the mutant proteins are ubiquitously expressed.
Males are affected by ALS more often than females, and ethnic populations show differences in the incidence rates of ALS, further highlighting the contribution of genetics to ALS.

Figure 31. Time course of neurodegeneration in the SOD1G93A mouse model of ALS. The diagram provides an overview of the complex ballet of cellular and molecular mechanisms that lead, over six months, to death in this severe model of ALS. Many changes occur before muscle strength is reduced by half, including initial alterations in electrophysiology and behaviour, followed by ubiquitination and ER stress in the susceptible FF motor neurons, leading to axonal dieback as well as microgliosis and astrogliosis in the spinal cord. These are accompanied by subcellular changes such as Golgi fragmentation and mitochondrial swelling. During the following months, these changes become exacerbated and generalized to other motor units, leading to extensive MN loss and muscle paralysis. The indicated stages (scale in days) represent those in the G93A high-expressor line. Some parameters have not been studied at earlier stages, so the indicated dates represent the latest possible onset. The overall layout progresses from systemic and behavioural changes on the left toward molecular and cellular changes in motor units on the right. Adapted from Kanning et al., 2010.

Figure 32. Embryonic morphological abnormalities in E17.5 SOD1G93A MNs. (A1-A2) Representative WT and SOD1G93A MNs filled with neurobiotin. The illustrations correspond to projected images of stacks of 54 and 61 optical sections (0.6 μm in Z) for the WT and SOD1G93A MN, respectively (×20 magnification). (B1-B2) 3D reconstructions of the complete dendritic trees of the WT and SOD1G93A MNs illustrated in A. Reconstructions were made from ×60 high-magnification confocal acquisitions (0.2 μm in Z). (C1-C2) Two-dimensional representations of the whole dendritic arborization (dendrograms) obtained from the 3D reconstructions illustrated in B1-B2. Intermediate segments are shown in black and terminal segments in blue. L = lateral, C = caudal. Adapted from Martin et al., 2013.

Figure 33. ALS-associated alterations in the ventral spinal cord circuitry. LMNs receive inhibitory inputs via Ia-INs, Ib-INs and RCs, and excitatory inputs from the corticospinal tract (UMN), II-INs and SNs. γ-motor neurons, which are spared in ALS, do not receive direct inputs from Ia-SNs. Excitatory inputs to LMNs via Ia afferent terminals are controlled by PI-INs. Both excitatory and inhibitory inputs are tightly regulated by the proprioceptive feedback provided by sensory afferents (Ia-, Ib- and II-SNs) and astrocytes. Axonal hyperexcitability and hypoexcitability are both reported in ALS patients. A decreased number of RC synapses on LMNs and a lower number of RCs have been reported. LMN hypoexcitability is present in vivo in the SOD1G93A tg mouse model. Ia-SN neurons exhibit irregular firing patterns as an indication of their altered excitability/activity. The cholinergic C-bouton number is decreased in sALS patients, but C-boutons are enlarged, especially onto vulnerable FF LMNs, in SOD1G93A tg mice. Protein and mRNA levels of ChAT are decreased in the spinal cord of ALS patients. A reduced ChAT expression is reported in II-INs and C-boutons on MNs of SOD1G93A tg mice. Serotonergic boutons on LMNs are increased in low-copy SOD1G93A tg mice, whereas serotonergic neurons in the brainstem degenerate in both ALS patients and SOD1G86R tg mice (not shown). Please note that monosynaptic connections between UMNs and LMNs are only present in humans.
Neuromodulatory synapses are depicted as one (somata located in the brainstem). Studies with an unspecified type of ALS are referred to as (ALS). AP, action potential; AS, astrocytes; CMAP, compound muscle action potential; ChAT, choline acetyltransferase; CPG, central pattern generator; DD, double-discharge; fALS, familial ALS; FF, fast-fatigable; FP, fasciculation potential; FR, fast-resistant; γ-MN, gamma motor neuron; GAD65/67, glutamic acid decarboxylase 65/67; Glu, glutamate; GlyT2, glycine transporter 2; Ia-/Ib-IN, class Ia/Ib spinal interneuron; II-IN, class II spinal interneuron; LMN, lower motor neuron; MU, motor unit; NE, norepinephrine; Ia-/Ib-SN, class Ia/Ib sensory neuron; II-SN, class II sensory neuron; PI-IN, presynaptic inhibitory interneuron; RC, Renshaw cell; sALS, sporadic ALS; SOD1, superoxide dismutase 1; S, slow; TEd, depolarizing threshold electrotonus; TEh, hyperpolarizing threshold electrotonus; τSD, strength-duration constant; UMN, upper motor neuron; VGAT, vesicular GABA transporter; VGLUT, vesicular glutamate transporter 2; 5-HT, serotonin. Adapted from Gunes 2020.

Figure 34. Summary of the changes in motoneuron properties over time. Schematic representation of the changes in four key electrophysiological properties over time. The dots represent the effect size (Hedges' g) and the vertical bars show the 95% CI around g. The thin lines are cubic spline interpolations of the data over time. The points have been slightly staggered so that the vertical bars do not occlude each other. †Data from embryonic motoneurons are from Martin et al. (2013); these authors did not measure PICs in embryonic motoneurons. Kuo et al. (2004) did measure PICs, but their embryonic motoneurons were cultured for 10-30 d in vitro, and their developmental stage is therefore uncertain. ‡Data from neonates (P0-P5 and P6-P12) are from Quinlan et al. (2011). Adapted from Huh et al.

Both cell types (astrocytes and microglia) can be polarized to a neurotoxic/proinflammatory or a neurotrophic/resolutive phenotype in response to different signals, and close molecular conversations and reciprocal modulation are established between them during the course of activation (Jha et al., 2019).

Figure 35. Schematic of the evolution of MN degeneration and glial activation during the course of SOD1 mutant-initiated ALS disease. Four stages are defined (normal, early phase, symptomatic, and end stage). Toxicity is non-cell-autonomous, produced by a combination of damage incurred directly within motor neurons, which is central to disease initiation, and damage within their non-neuronal neighbors, including astrocytes and microglia, whose actions amplify the initial damage and drive disease progression and spread. The selective vulnerability of motor neurons to the ubiquitously expressed mutant SOD1 is determined by the unique functional properties of motor neurons (e.g., they are very large cells with large biosynthetic loads and high firing rates, and they respond to glutamate inputs) and by the damage to the supporting cells in their neighborhood. Adapted from Boillee et al., 2006.

Microglia invade the brain early in development, convert into a highly ramified cell type, and constantly screen their environment.
Microglia are activated by any type of pathologic event or change in brain homeostasis, and they can strongly influence the outcome of, or the response to, a stressor through the release of a variety of substances, including cytokines, chemokines, and growth factors. They are the professional phagocytes of the brain and help orchestrate the immunological response by interacting with infiltrating immune cells (Wolf et al., 2017). Increased microgliosis has been observed in the spinal cords of ALS patients (Yiangou et al., 2006). In ALS, the dual role of microglia is well established. In response to damage or injury, microglia become activated, which induces their migration to the site of injury and their proliferation, and causes the release of damage-associated molecular patterns, cytokines, chemokines and reactive nitrogen and oxygen species that promote neuroinflammation (Becher et al., 2017). It is generally considered that activated microglia exhibit two different phenotypes in ALS, the M1 and M2 phenotypes (Parisi et al.; Geloso et al., 2017), although this concept remains controversial and is considered an oversimplification (Ransohoff, 2016). In ALS, microglia show an early protective M2 phenotype, characterised by small cell bodies with shorter and simpler processes, and a late neurotoxic M1 phenotype, exhibiting large cell bodies with short and extensively branched processes (Ohgomori et al.). In ALS mice, mutant SOD1 (mSOD1)-expressing M2 microglia isolated at disease onset protected co-cultured motor neurons, whilst mSOD1-expressing M1 microglia isolated at disease end-stage damaged co-cultured motor neurons (Liao et al., 2012). The activation of microglia (microgliosis) and the subsequent neuroinflammation is a major cause of fALS (Hooten et al., 2015), although it remains unknown whether a similar mechanism is a major cause of sALS. In humans, increased microgliosis has been demonstrated in spinal cord specimens from ALS patients (Yiangou et al., 2006). Microgliosis occurs specifically with motor neuron injury in the motor cortex, along the corticospinal tract, and in the ventral horn of the spinal cord (Philips & Robberecht, 2011). During the early phases of the disease, the M2 phenotype provides a protective response against the signals triggered by motor neuron damage.
M2 microglia release anti-inflammatory cytokines and neurotrophic factors, including interleukin (IL)-4, IL-10, IL-13, and insulin-like growth factor-1, to promote motor neuron survival (Tang et al.; Du et al.). Consistently, in SOD1G93A and SOD1G37R mice, overexpression of anti-inflammatory IL-10 has been demonstrated at the pre-onset phase of disease progression (Gravel et al.). The anti-inflammatory cytokine IL-4 has also been shown to suppress the release of reactive oxygen species and nitric oxide, and protected co-cultured primary motor neurons against microglia-mediated motor neuron damage (W. Zhao et al.). Although the actual processes that transform protective microglia into a toxic form are still unknown, it is believed that a prolonged period of microglial activation transforms the protective M2 phenotype into the deleterious M1 phenotype. M1 microglia release pro-inflammatory factors such as IL-1α, IL-1β, IL-6, IL-12, IL-23 and tumor necrosis factor (TNF)-α, chemokines, prostaglandin E2, chemokine (C-C motif) ligand 2, ROS and inducible nitric oxide synthase, which amplify motor neuron death (Hooten et al.; Du et al.; Geloso et al.). Furthermore, the death of motor neurons results in the release of mSOD1, providing a positive feedback loop that further drives microglia-mediated motor neuron death. The modulation of microglial function therefore serves as a promising therapeutic strategy.

Myelin is the lipid-rich structure formed by oligodendrocytes (OLs) that wraps axons in multilayered sheaths, ensuring protection, efficient saltatory signal conduction and metabolic support to neurons. The impact of OL dysfunction and myelin damage is now considered a major contributing factor to neurodegeneration in several neurological diseases, including ALS. Upon OL injury, oligodendrocyte precursor cells (OPCs) of the adult nervous tissue sustain the generation of new OLs for myelin reconstitution, but this spontaneous regeneration process fails to successfully counteract myelin damage. The functions of OPCs exceed the formation and repair of myelin, and also involve trophic support to axons and the capability to exert an immunomodulatory role (Raffaele et al., 2021), which are particularly relevant in the context of neurodegeneration. Intriguingly, degeneration of mature OLs and myelin defects were found to start at the presymptomatic phase in the SOD1 G93A mouse spinal cord (Niebroj-Dobosz et al., 2007; Kang et al., 2013; Rodolfo G. Gatto et al., 2018; R. G. Gatto et al., 2018; Bonfanti et al., 2020), suggesting that myelin disruption anticipates MN degeneration and directly contributes to disease exacerbation.
In line with this, OL abnormalities are present in the lumbar spinal cord of SOD1 G93A mice at very early developmental stages (P7-10), including increased levels of immature markers like NG2 and GPR17 and a reduced density of CC1+ mature cells (Bonfanti et al., 2020). However, despite the growing understanding of oligodendrocyte maturation during developmental myelination, the underlying molecular mechanisms or factors involved in the context of ALS remain poorly investigated. The possible mechanisms underlying OL degeneration, defective OPC maturation, and impairment in energy supply to MNs have also been examined to provide insights for future therapeutic interventions.

In the SOD1 G93A transgenic mouse, circulating neutrophils are increased (Lee et al.), and neutrophils and mast cells are present along peripheral motor axons. As with inflammation in the CNS, peripheral immune activation could be a reaction to tissue damage but, once established, could exacerbate disease. NK cells have been reported in post-mortem ALS motor cortex and spinal cord tissues, together with the expression of NKG2D ligands on MNs. Using a mouse model of hSOD1 G93A, Garofalo and colleagues (Garofalo et al.) demonstrated NK cell accumulation in the motor cortex and spinal cord, with an early CCL2-dependent peak. NK cell depletion reduces the pace of MN degeneration, delays motor impairment and increases survival. This is confirmed in another ALS mouse model, TDP43 A315T. NK cells are neurotoxic to hSOD1 G93A MNs, which express NKG2D ligands, while IFNγ produced by NK cells instructs microglia toward an inflammatory phenotype and impairs FOXP3+/Treg cell infiltration in the spinal cord of hSOD1 G93A mice. Together, these data suggest a role of NK cells in determining the onset and progression of MN degeneration in ALS, and in modulating Treg recruitment and microglia phenotype (Figure 36).

Figure 36. Graphic summary of natural killer cell modulation of motor neuron-immune cell cross talk in the hSOD1 G93A mouse model of ALS. (Left) NK cells' effects in the spinal cord of hSOD1 G93A mice at the early symptomatic stage. NK cells directly exert cytotoxic activity against MNs through the NKG2D-Mult-1 axis. Furthermore, IFNγ released by NK cells: (i) increases the release of CCL2 by damaged neurons; (ii) reduces the number of Treg Foxp3+ cells; (iii) shapes microglia toward a proinflammatory phenotype. (Right) NK cell depletion, in hSOD1G93A mice, induces a neuroprotective microglia phenotype and increases the number of Treg Foxp3+ cells that protect the MNs. Adapted from Garofalo et al., 2020.

Cortical hyperexcitability precedes the onset of motor symptoms (Steve Vucic et al., 2006; van Zundert et al., 2008; Martin R. Turner et al., 2012; J. S. Bae et al., 2013; Menon et al., 2015; M. J. Fogarty et al.; Murdock et al., 2015).
Furthermore, in vitro and in vivo studies using experimentally induced excitotoxicity have demonstrated the vulnerability of both UMNs and LMNs (King et al.; Southam et al.; Blizzard et al.).

Figure 37. Hyperexcitability in SOD1G93A MNs at E17.5. A1. Membrane potential responses to similar current steps in a WT and a SOD1G93A MN. Note that, in response to the largest positive current…

Motor neuron activity regulates muscle physiology and function; in turn, muscles affect neuronal activity by sending retrograde signals that preserve NMJ functionality and structure (Heckman et al., 2012). The so-called "dying back" hypothesis suggests that retrograde signals contribute to the centripetal motor neuron degeneration in ALS (Dadon-Nachum et al., 2011).

The aims of this thesis were to: 1- understand the cellular mechanisms underlying putative Cl- homeostasis changes (including an examination of the alteration in pre-synaptic compartments by analyzing the density of GABAergic/glycinergic inputs); 2- investigate the functional outcome of putative altered Cl- homeostasis on fictive locomotor-like activity; 3- unravel early compensatory or pathological mechanisms; 4- study the involvement of morphology in the physiological integration of inhibitory GABA/glycine synaptic inputs, in parallel with other parameters such as ECl. The first part of this thesis (i.e., the first article) addresses the first three aims, focusing on two factors - pre-synaptic GABAergic/glycinergic input density and post-synaptic GABAAR/GlyR responses (i.e., ECl) - that regulate GABA/glycine inhibition, while the second part (i.e., the second article) addresses the last objective by considering how MN morphology, ECl and inhibitory synaptic input density integrate together to regulate GABA/glycine inhibition.

Figure 38. Summary diagram of thesis aims. The thin solid arrows represent demonstrated relationships, whereas the faded dashed arrows represent relationships that have not been proven. In mouse spinal cord organotypic cultures and the ex vivo brainstem-spinal cord preparation, hyperexcitability of the output of spinal SOD G93A MNs was demonstrated at middle and late embryonic stages (see text in purple). Such alterations in neuronal excitability may be regulated by intrinsic mechanisms and/or synaptic mechanisms (i.e., extrinsic or network). The intrinsic mechanisms for hyperexcitability of immature SOD G93A MNs have been well studied, but synaptic mechanisms have not been demonstrated (see text in blue). Therefore, my PhD…

The global KCC2 staining in the MN area was significantly reduced in SOD SCs (… AU, arbitrary units, n = 43, N = 4) compared to WT (395.0 ± 20.3 AU, n = 42, N = 4) (p<0.001, Mann Whitney test) (Figure 2B1-B2 and B5). At the level of the MN membrane, the KCC2 density was also significantly reduced in SOD SCs (5.8 ± 0.6 AU, n = 81, N = 4) compared to WT SCs (8.5 ± 0.6 AU, n = 116, N = 4) (p<0.01, Mann Whitney test) (Figure 2B3-B4 and B6).
Functionally, we found that the KCC2 efficacy, assessed using indirect measurements, was reduced in SOD MNs: EGABAAR reached a maximum of -43.8 ± 0.8 mV (n = 19, N = 16) in SOD MNs and -46.5 ± 0.7 mV (n = 24, N = 17) in WT (p<0.05, Mann Whitney test) after 30 s of isoguvacine pressure ejection (Figure 2C), and returned to control values, -49.3 ± 1.2 mV (n = 19) in SOD MNs and -54.8 ± 1.0 mV (n = 24) in WT (p<0.001, Mann Whitney test), after a 4 min isoguvacine washout (Figure 2C).

Figure 1. Alterations in chloride homeostasis. (A) Representative traces of isoguvacine (Iso) responses illustrating the reversal of the evoked GABAAR-related current (EGABAAR) in WT and SOD MNs. Holding voltages -70 mV, -60 mV, -50 mV and -40 mV. I/V relationships, plotted below the traces, revealed that EGABAAR was -61.4 mV and -48.5 mV in the representative WT and SOD MNs, respectively. (B1) Mean EGABAAR was significantly lower in E17.5 WT (n = 25) compared to SOD (n = 24) MNs from the same litter. (B2) Accordingly, the mean [Cl-]i, calculated from individual EGABAAR values, indicated that SOD MNs had a higher [Cl-]i than WT. (B3) Mean resting membrane potential (Em) did not differ between WT and SOD MNs. (B4) Mean driving force (DF) for chloride ions was doubled in SOD MNs compared to WT MNs. Values illustrated were from the same set of MNs. ****p<0.0001, ***p<0.001, ns non-significant, Mann Whitney test. The online version of this article includes the following figure supplement for figure 1: Figure supplement 1. Recorded MNs belong to the lateral motor column (LMC).

Figure 2. Expression of KCC2 and NKCC1 in the E17.5 SOD spinal cord. (A1) Analysis of the KCC2 protein in SOD and WT SCs. The stain-free staining of total proteins loaded (lower left panel) was used as the normalization control to quantify the ~140 kDa KCC2 band (upper left panel). Right panel: quantification of the KCC2 WB. (A2) Analysis of the NKCC1 protein in SOD and WT SCs. The ~150 kDa NKCC1 band (upper left panel) was analyzed. Quantification of the NKCC1 WB. Three loads (20 µg of protein) from SOD and WT lumbar spinal cord extracts were used, each load including five individual SCs from four different litters. (B1-B4) Representative immunohistochemical KCC2 (red) staining in WT (B1-B3) and SOD (B2-B4) lumbar 0.2 µm thickness optical sections (L3-L5 level) containing Hb9-eGFP MNs (green). Images correspond to x60 confocal acquisitions. White arrows point to the KCC2 labeling surrounding spinal MNs. (B5-B6) Quantification of the global KCC2 staining (B5) in the MN area and of the membrane KCC2 staining (B6) in E17.5 SOD and WT SCs. (C1) Illustration of the protocol applied to assess the KCC2 efficacy, as explained in the Materials and methods section. (C2-C3) Evolution of ECl values with time during the isoguvacine (iso) application in control aCSF (C2) and in the presence of the highly specific KCC2 blocker VU0240551 (10 µM) (C3). ***p<0.001, **p<0.01, *p<0.05, ns non-significant, Mann Whitney test.
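As an indication of how the [Cl-]i values of Figure 1B2 can be derived from individual EGABAAR measurements, here is a minimal Python sketch of the Nernst calculation. It assumes EGABAAR ≈ ECl (neglecting the small HCO3- permeability of the GABAAR, consistent with Cl- carrying the GABAAR current in fetal MNs); the extracellular chloride concentration and temperature below are illustrative placeholders, not the exact values of the recording conditions:

import math

R = 8.314     # gas constant, J/(mol*K)
F = 96485.0   # Faraday constant, C/mol

def cl_in_from_egaba(e_gaba_mv, cl_out_mm=130.0, temp_c=32.0):
    """Estimate [Cl-]i (mM) from a measured EGABAAR (mV).

    Nernst relation for Cl- (valence z = -1):
        ECl = (RT/F) * ln([Cl-]i / [Cl-]o)
    hence [Cl-]i = [Cl-]o * exp(ECl * F / (R * T)).
    """
    t = temp_c + 273.15
    return cl_out_mm * math.exp((e_gaba_mv / 1000.0) * F / (R * t))

def chloride_driving_force(e_gaba_mv, em_mv):
    """Depolarizing driving force DF = EGABAAR - Em (mV), as in Figure 1B4."""
    return e_gaba_mv - em_mv

# Representative EGABAAR values from Figure 1A:
print(cl_in_from_egaba(-61.4))  # WT  -> ~12.6 mM
print(cl_in_from_egaba(-48.5))  # SOD -> ~20.6 mM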
Figure 3. Motor activity in the E17.5 SOD spinal cord. (A) Fictive locomotor-like activity evoked by 5-HT/NMDA/DA (see Materials and methods) recorded from contralateral lumbar ventral roots (lL3, rL3; first two traces) displayed regular alternating activities (see circular plots above each raw data panel) in both WT (A1) and SOD (A2) SCs. Below the raw data are traces of the floating mean spike frequencies (mFr) and the corresponding troughs of activity (burst start, Bst) from which the phase relationships between the left and right SC sides were calculated (circular plots). (B) The cycle period was significantly longer in SOD SCs (n = 9) compared to WT (n = 8) SCs. **p<0.01, ns non-significant, Mann Whitney test.

Figure 4. SOD E17.5 MNs exhibit a reduced IPSC frequency compared with WT MNs along with a persistent increase in their taudecay. (A1) Sample miniature IPSC (mIPSC) recordings obtained from representative SOD and WT MNs (holding membrane potential -70 mV). (A2) Cumulative frequency of inter-event intervals (IEI) and mean values of mIPSCs revealed a small but significant reduction in frequency in SOD MNs (n = 19) compared with WT (n = 15) MNs. (A3) mIPSC peak amplitude was slightly smaller in SOD than in WT MNs (same layout as in A2). (A4) Mean taurise was unchanged but mean taudecay was significantly increased in SOD mIPSCs compared with WT mIPSCs. Inset: traces (mean of 30 events) from representative experiments. (A5) Blocking the GlyR or GABAAR (black horizontal bars) reveals a tonic current. The distribution of pure GABAAR-mediated mIPSCs in WT (n = 5) and SOD (n = 3) MNs and pure GlyR-mediated mIPSCs in WT (n = 3) and SOD (n = 5) MNs is shown between the two traces. (A6) Taudecay of pure GlyR-mediated mIPSCs is significantly increased in SOD MNs (n = 3) compared to WT MNs (n = 5). Traces are means of 53 events (WT) and 57 events (SOD) from representative experiments. (A7) Relaxation of pure GABAAR-mediated mIPSCs is significantly longer in SOD MNs (n = 3) compared to WT MNs (n = 5). Traces are means of 35 events (WT) and 35 events (SOD) from representative experiments. (A8) Mixed GlyR-GABAAR-mediated events from representative experiments. (B1) Samples of spontaneous IPSC (sIPSC) recordings from representative SOD and WT MNs. (B2) Cumulative frequency of IEI and mean values of sIPSCs revealed a slight frequency reduction in SOD MNs (n = 17) compared to WT MNs (n = 14). (B3) sIPSC peak amplitudes were similar in SOD and WT MNs. (B4) Mean taurise was not significantly different, but SOD sIPSCs exhibited a higher taudecay. Inset: traces (mean of 30 events) from representative experiments. (C1-C2) SOD eIPSCs (n = 6) displayed a higher taudecay than WT eIPSCs (n = 10). (D1-D2) Puff-evoked SOD GABAAR currents (n = 21) exhibited a higher taudecay than WT GABAAR currents (n = 16). Left traces in (C1) and (D1) are from representative SOD and WT eIPSCs and puff-evoked GABAAR currents. ****p<0.0001, ***p<0.001, **p<0.01, *p<0.05, ns non-significant, Mann Whitney test.

The taudecay of puff-evoked GABAAR currents was significantly longer in SOD MNs (… ± 57.8 ms, n = 21, N = 16) than in WT (521.3 ± 33.3 ms, n = 16, N = 10) (p<0.05, Mann Whitney test) (Figure 4D1-D2), the amplitude and taurise of evoked IPSCs (eIPSCs) being not significantly different (49.3 ± 5.9 pA and 34.9 ± 3.6 ms in SOD, n = 21, versus 58.1 ± 8.6 pA and 43.3 ± 5.8 ms in WT, n = 16).

Figure 5. Inhibitory terminal staining in E17.5 SOD and WT SCs. (A1-A2) Confocal visualization of Hb9-eGFP MNs (green) along with VIAAT (red) and synaptophysin (white) staining in lumbar E17.5 WT (A1) and SOD1 (A2) SCs (frontal sections). Arrows point to the synaptophysin-VIAAT co-localized staining highlighted in the left-corner insets (0.2 µm thickness optical section). (B1) The density of VIAAT puncta was similar in WT (n = 6) and SOD1 (n = 5) SCs.
(B2) SOD SCs again exhibited a significant reduction in their density of synaptophysin terminals. (B3) VIAAT puncta co-localizing with synaptophysin puncta were slightly reduced in SOD SCs compared with WT SCs. ***p<0.001, **p<0.01, *p<0.05, ns non-significant, Mann Whitney test.

Figure supplement 1. Simulated E17.5 WT-like and E17.5 SOD-like MNs.
Figure supplement 2. Firing activity and synaptic inputs recorded from E17.5 MNs during fictive locomotion.

Figure 7. Functional consequence of increasing taudecay on locomotor activity. (A) Effect of taudecay on the locomotor rhythm when EGABAAR (ECl) is set to -60 mV (A1-A2, B1) and -50 mV (B2).

Figure 8. Summary. (A) Schematic drawing summarizing the altered inhibitory inputs (~25% reduction, see Figure 3B3) to fetal SOD MNs. [Cl-]i is higher in SOD MNs (A1) than in WT MNs (A2) because of a KCC2 downregulation, leading to an increased GABA/Gly-induced depolarizing effect (see insets). (B) Consequence of increasing taudecay on the GABA/gly inhibitory effect in SOD-like MNs. Due to Cl- accumulation in the intracellular compartment, EGABAAR exerts a strong depolarizing effect. A burst of spikes generated by MNs is hardly blocked by a barrage of GABA/gly events (see blue traces) when taudecay is set to 20 ms. Increasing taudecay to 25 ms allows a better summation of the shunting component of the depolarizing GABA/gly post-synaptic event, leading to a better clamp of Em towards EGABAAR and to the blockade of MN discharge (see green traces). (C) Impact of a larger taudecay in SOD MNs on the frequency of the locomotor rhythm.

Figures and tables: 6 figures, 7 tables, 2 supplementary figures. Number of words: Abstract: 249; Introduction: 858; Discussion: 1653. Declaration of interest: None.

Figure 1: Differential effects of dGPSPs on E17.5 MN spiking activity. (A1) Representative current-clamp traces of a lumbar E17.5 spinal MN (lateral motor column) exhibiting constant firing when depolarized by a positive square pulse (150 pA). The upper graph shows the instantaneous spiking frequency, whereas the upper traces (right corner) depict evoked and spontaneous dGPSPs with similar amplitudes. (A2) In the same MN, a 30 Hz VLF stimulation, specifically activating local GABA/glycine fibers (see Methods), increases the MN firing frequency. (A3) A 60 Hz VLF stimulation induces a further increase in MN firing frequency. (A4) Quantification of the change in MN firing frequency (spike number change, % of control) for VLF stimulations between 7.5 Hz and 200 Hz. Note that VLF stimulation always evoked an increase in firing frequency, so the MN was considered as being excited. (B) Similar traces for another representative E17.5 MN showing an excitation (increase of spike frequency) at 30 Hz VLF frequency (B2) switching to inhibition at 60 Hz (B3). The graph of the spike number change shows that the switch occurs between 50 Hz and 60 Hz for this MN, which is therefore considered as being dual. (C) A third representative fetal MN shows partial inhibition of its firing activity when receiving dGPSPs…

Figure 2: Principal Component Analysis (PCA) of groups of MNs made on the basis of their responses to VLF inhibitory stimulation. (A) In SOD MNs, three groups of MNs were tested: Excited, Dual and Inhibited (see text for explanations). The PCA is presented in (A1); results of Monte Carlo tests simulating 1000 inertia values are shown as histograms in (A2), with the real inertia value presented as a red vertical line. P-values were then calculated as the ratio of the number of simulations in which the simulated inertia was larger than the real inertia to the total number of runs. The top histogram represents the inertia calculated for all groups; the three smaller histograms represent inertias calculated for pairs of groups. (B) In WT MNs, the PCA was made for the two groups Dual and Inhibited (B1); the Monte Carlo test indicates that these two groups are significantly different (B2). Between-Class Analysis (BCA)…
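A minimal numpy sketch of this Monte Carlo inertia test (shuffling the group labels, recomputing the between-group inertia at each run, and counting how often the shuffled inertia exceeds the real one) might read as follows; the variable names and data shapes are hypothetical, not those of the original analysis scripts:

import numpy as np

def between_group_inertia(X, labels):
    """Inertia = between-group variance / global variance of X (n_obs x n_vars)."""
    grand = X.mean(axis=0)
    total = ((X - grand) ** 2).sum()
    between = sum(
        (labels == g).sum() * ((X[labels == g].mean(axis=0) - grand) ** 2).sum()
        for g in np.unique(labels)
    )
    return between / total

def monte_carlo_inertia_test(X, labels, n_runs=1000, seed=0):
    """P-value = fraction of label shuffles whose inertia exceeds the real one."""
    rng = np.random.default_rng(seed)
    real = between_group_inertia(X, labels)
    shuffled = np.array(labels)
    exceed = 0
    for _ in range(n_runs):
        rng.shuffle(shuffled)
        if between_group_inertia(X, shuffled) > real:
            exceed += 1
    return real, exceed / n_runs

# hypothetical usage: 5 parameters per MN, one group label per MN
# X = np.column_stack([sp_nb_ch30, rin, rheobase, vthr_vr, spk_width])
# real_inertia, p = monte_carlo_inertia_test(X, group_labels)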
The rheobase was lower in SOD MNs (193.9 ± 21.0 pA, n = 39) compared to WT MNs (227.6 ± 25.3 pA, n = 32) (p < 0.05, Mann Whitney test), in agreement with higher Rin values (Figure 3A2). The between-groups analysis (Kruskal-Wallis test followed by Dunn's comparisons test) showed that the Rin of WT Inhibited MNs was significantly reduced when compared to the Rin of Dual and Inhibited SOD MNs (Figure 3B), and a similar trend was found when compared to the Rin of Dual WT MNs, even though not significant (p = 0.0534). Plots of rheobase according to Rin values revealed the expected relationship for WT and SOD MNs, i.e., MNs with low Rin express a high rheobase and vice versa (Figures 3C1-C2). But they also revealed that most Rin values were below 100 MΩ for Inhibited MNs (Figure 3C1), which was not the case in SOD MNs, which were Dual or Inhibited whatever their Rin values (Figure 3C2). The percentage of Inhibited MNs with Rin below 100 MΩ was ~90% (17/19) in WT and only 45% (5/11) in SOD (**** p < 0.0001, Chi-square test) (Figure 3C3).

The frequency of GABA/glycine synaptic events was lessened in SOD E17.5 MNs compared to WT MNs (Figure 6F1) (5.44 ± 0.39 Hz for WT MNs and 4.33 ± 0.27 Hz for SOD MNs), without any change in amplitude (Figure 6F2) (0.43 ± 0.02 for WT and 0.45 ± 0.01 for SOD).

Figure 6: SOD MNs lack VIAAT terminals on soma and proximal dendrite. (A-B) Confocal images (x63) of a neurobiotin-injected soma (blue), VIAAT (green), gephyrin (red), and merged, for a E17.5 WT MN (A panels) and a SOD MN (B panels). An example of a band image analysis is given for the WT MN as raw images (A1.1-A2.1-A3.1-A4.1) and detected particles (see A1.2-A2.2-A3.2-A4.2). (C1) Global VIAAT density on soma and proximal dendrite, expressed as number / 100 µm. (C2) Synaptic VIAAT density on soma and proximal dendrite (number / 100 µm). (D) Representative raw traces from a WT and a SOD MN illustrating the detection of GABA/glycine synaptic events. (E) Corresponding amplitude (E1) and frequency (E2) analysis. (F) GABA/glycine event frequency (F1) and amplitude (F2) in WT and SOD MNs. Quartiles and medians are indicated in the violin plots as white dotted and hatched lines, respectively.

Figure S2: VLF stimulation selectively activates GABAAR/GlyR. The downward synaptic current evoked by a single stimulation of the VLF (a) is fully blocked by a bath application of gabazine (G, 3 µM) / strychnine (S, 3 µM) (G/S) (b), this blockade being partially reversible when gabazine and strychnine (G/S) are washed out (c). Kynurenic acid (KA, 4 mM) and dihydro-beta-erythroidine (DHBE, 5 µM) were applied throughout the experiment. The recorded MN was voltage-clamped at -70 mV.

In the simulations, with ECl set to -47.4 mV, the MN discharge was still increased by dGPSPs at GABA/glycine stimulation frequencies > 300 Hz (Figure 5A1, red symbols, illustrated up to 200 Hz). A modest decrease of ECl (to -52.1 mV) led to a total absence of excitatory effect from 20 Hz; the MN discharge was even totally blocked at GABA/glycine stimulation frequencies of 50 Hz and above (Figure 5A1, blue symbols; Figure 5A2).

Figure 5: Simulation of the effects of ECl, distal dendritic length, soma diameter and synapse position on the dGPSP burst. (A) Comparison of the effect of two ECl values: -47.4 mV (red circles) and -52.1 mV (light blue circles)…
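To make the logic of these simulations concrete, here is a self-contained toy model, not the compartmental model used for Figure 5: a single-compartment leaky integrate-and-fire MN bombarded by depolarizing GABA/glycine conductance events. All parameter values are illustrative; the sketch only reproduces the qualitative result that a depolarized ECl, or a shorter taudecay, weakens the shunting blockade of firing:

import math, random

def simulate_mn(e_cl_mv, tau_decay_ms, inh_rate_hz, t_stop_ms=2000.0, seed=1):
    """Spike count of a toy LIF MN receiving depolarizing GABA/glycine events.

    Each event increments a synaptic conductance that decays with tau_decay_ms;
    the synapse pulls Vm toward e_cl_mv (shunting if e_cl_mv < spike threshold).
    """
    random.seed(seed)
    dt = 0.05                              # time step (ms)
    c_m, g_leak = 0.15, 0.0015             # nF, uS (membrane tau = 100 ms)
    e_leak, v_thr, v_reset = -75.0, -45.0, -75.0
    i_inj = 0.075                          # nA, suprathreshold drive (~11 Hz alone)
    g_inc = 0.005                          # uS added to g_syn per event
    p_event = inh_rate_hz * dt / 1000.0    # Poisson event probability per step
    v, g_syn, spikes = e_leak, 0.0, 0
    for _ in range(int(t_stop_ms / dt)):
        if random.random() < p_event:
            g_syn += g_inc
        g_syn *= math.exp(-dt / tau_decay_ms)
        v += dt / c_m * (-g_leak * (v - e_leak) - g_syn * (v - e_cl_mv) + i_inj)
        if v >= v_thr:
            spikes += 1
            v = v_reset
    return spikes

for e_cl in (-60.0, -50.0):                # WT-like vs SOD-like ECl
    for tau in (20.0, 25.0):               # taudecay values discussed in the text
        n = simulate_mn(e_cl, tau, inh_rate_hz=50.0)
        print(f"ECl={e_cl} mV, taudecay={tau} ms -> {n} spikes in 2 s")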
The reduced density of synaptic inputs cannot be attributed to smaller cells (i.e., smaller cells hosting fewer synapses), because we have shown in a previous article that soma perimeters and soma surfaces of WT MNs and SOD G93A MNs are comparable (Martin et al.). Even though the reduced degree of synaptic input might reflect a developmental delay in the maturation of SOD G93A cords compared to WT, data from the literature seem to indicate that this reduction persists with time. In fact, MNs cultured from ALS patients were shown to progressively lose their synaptic activity (A.-C. Devlin et al., 2015), as was also shown in surviving MNs from symptomatic (P90-120) adult SOD G93A mice (Zang et al., 2005).

The mixed GABA/glycine IPSCs recorded in our study are likely to reflect a major contribution of GlyRs because of their fast taudecay (Gao et al.; Muller et al.). IPSCs convey a reduced inhibitory drive onto prenatal SOD G93A MNs. Measurements of IPSC amplitudes were derived from CsCl medium-based whole-cell recordings in which EGABAAR approached 0 mV. Therefore, WT and SOD G93A MNs shared the same artificial driving force for chloride, and so the observed reduction of IPSC amplitude in SOD G93A MNs was likely related to a decrease of surface post-synaptic GlyR clusters, as reported in cultured SOD G93A MNs (Q. Chang et al., 2011).

Increasing taudecay from 20 ms to 25 ms clearly potentiated the efficacy of GABA/glycine inhibition. With a taudecay of 20 ms, a 20 Hz barrage of inhibitory inputs totally blocked WT-like MN firing activity (12 Hz locomotor firing frequency, with ECl = -60 mV). In contrast, SOD-like MNs (with ECl = -50 mV) required a ~90 Hz inhibitory barrage to suppress their firing. Increasing taudecay to 25 ms significantly reduced the threshold cutting frequency to ~70 Hz in SOD-like MNs. Those blocking frequencies are in agreement with GABA/glycinergic event frequencies encountered in lumbar spinal MNs during fictive locomotion (>100 Hz).

Figure 39. Summary diagram of the mechanisms involved in Cl- homeostasis regulating cellular hyperexcitability and affecting the synaptic inhibition efficacy of GABA/glycine in prenatal SOD G93A MNs. The solid lines represent direct evidence proven by experimental data, while the dashed lines and boxes represent indirect evidence or possible interpretations mentioned in the discussion section. The circles with crosses represent excluded causal evidence. Neuronal Cl- homeostasis ([Cl-]i), acting as an intracellular messenger, may modulate the expression of GABAAR/GlyR and their subunit composition, as well as control channel gating efficacy.
SOD motoneurons receive less VIAAT-positive inputs

Our results show that fetal SOD MNs can be Dual or Inhibited MNs whatever their Rin and morphology, unlike WT MNs from the littermates. What could explain this surprising result? If we consider that Inhibited low-Rin WT MNs are future F-type MNs (innervating the fast-contracting muscle fibers) that start maturing at E17.5, then we may consider that the spinal cord from the SOD1 G93A strain matures less rapidly than littermate WT spinal cords, i.e., the developmental sequence of the lumbar motoneuronal network (decrease of the MN Rin, increase of the MN Cm (A. Delpy et al., 2008), increase of the synaptic input onto MNs (Scain et al.)) is delayed. Another explanation is that the lack of GABA/glycine input impedes the occurrence of Inhibited low-Rin SOD MNs. We have shown that increasing gGABA/glycine favors dGPSP inhibition occurring on the SOD MN soma. The role of gGABA/glycine (gCl) in the dual effect of dGPSPs was also demonstrated in a previous study, where we found that increasing gCl favors inhibition, either during a single dGPSP or during trains in which gCl summates (Branchereau et al.). Our immunohistochemical data indicate a lower density of VIAAT staining on MN somata from the SOD1 G93A strain compared to WT littermate MNs. Consequently, VLF stimulation activates fewer GABA/glycine inputs, lowering the shunting (inhibitory) effect of dGPSPs. Our data show that we had to increase the VLF intensity in SOD preparations to obtain evoked single dGPSPs of similar size to those of WT preparations, in agreement with the anatomical data highlighting a lower density of VIAAT-positive terminals on SOD MNs. Interestingly, in a recent study, Allodi and collaborators have shown that F-type MNs from P45 SOD1 G93A mice lose inhibitory glycinergic connections before S-type MNs (Allodi et al.). A major reduction of inhibitory synapse densities (gephyrin puncta) in the ventral horn of the slow SOD1 G93A mutant was also described at early stages of disease (P50) (S. Saxena et al.). Our previous data show that VIAAT-positive terminals are less abundant in the marginal zone edging the E17.5 MN soma location (Branchereau et al.). Here we found a reduction of VIAAT density (global and synaptic) on the soma of SOD MNs, relative to age-matched WT.

Insufficient synaptic inhibition leads to neuronal hyperexcitability due to reduced surface postsynaptic GlyRs in spinal SOD1 G93A MNs cultured from E12-E14 embryos (Q. Chang et al., 2011) and in ex vivo lumbar E17.5 SOD G93A MNs (indirect evidence) (Branchereau et al.). We thus would like to know how this (glycinergic) inhibitory insufficiency evolves during the progression of ALS disease.
Moreover, other studies also show that SOD1 G93A transgenic ALS mice develop an age-related loss of glycinergic innervation of MNs, starting from 8 weeks of age (Q. Chang et al., 2009b) and from P45 for F-type MNs (Allodi et al.), whereas GABAergic innervation is unaffected (Q. Chang et al., 2009b). However, these findings focus only on changes in global inhibition, which may derive from a process of degeneration involving Renshaw cells and/or neural networks. To explore this further, we aimed to identify the specific inhibitory pathways or cellular substrates involved in the glycinergic inhibitory deficits.

Figure 40. Recurrent inhibition is preferentially reduced in delayed firing motoneurons from SOD1G93A mice. (a) Schematic of the 350 µm thick oblique spinal cord preparation used in this study, with the patch-clamp recording electrode and the suction electrode used for ventral root (VR) stimulation. (b) Representation of early (top) and delayed (bottom) firing profiles observed in lumbar motoneurons. (c) The left panel represents the voltage response to different current injections before and during the 200 Hz VR stimulation used to elicit recurrent inhibition (by evoking a series of IPSPs that reach a steady-state voltage) for WT and SOD1G93A early and delayed firing motoneurons; plots on the right show the respective current-frequency relationships before (light blue) and during (dark blue) VR stimulation, which were used to estimate the conductance of recurrent inhibition. Group data for (d) absolute and (e) scaled recurrent inhibition for early and delayed firing motoneurons, shown as estimation plots with the bootstrapped mean difference aligned and bootstrapped Hedges' g effect sizes.
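The estimation plots in Figure 40 rest on bootstrap resampling of the mean difference and on Hedges' g, i.e., the pooled-SD standardized difference with the usual small-sample correction, g = J(x̄₂ - x̄₁)/s_p with J ≈ 1 - 3/(4(n₁+n₂) - 9). A minimal numpy sketch follows; the sample values are hypothetical, for illustration only:

import numpy as np

def bootstrap_mean_diff(a, b, n_boot=5000, seed=0):
    """Bootstrapped difference of means (b - a) with a 95% percentile CI."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    diffs = np.array([
        rng.choice(b, b.size).mean() - rng.choice(a, a.size).mean()
        for _ in range(n_boot)
    ])
    return diffs.mean(), np.percentile(diffs, [2.5, 97.5])

def hedges_g(a, b):
    """Hedges' g (b vs a) with the small-sample correction factor J."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n1, n2 = a.size, b.size
    sp = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1))
                 / (n1 + n2 - 2))
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)
    return j * (b.mean() - a.mean()) / sp

# hypothetical recurrent-inhibition conductances (nS), WT vs SOD delayed MNs
wt = [4.1, 3.8, 5.0, 4.6, 4.2, 3.9]
sod = [2.1, 2.8, 1.9, 2.5, 3.0, 2.2]
print(bootstrap_mean_diff(wt, sod))
print(hedges_g(wt, sod))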
Figure 41. Alterations in size of the postsynaptic GlyR clusters onto fast MNs. (a) Example of multiple glycine receptor (GlyR) clusters in a presynaptic GlyT2+ terminal. The bottom image is a raw image from the membrane of a GlyR-immunostained MN located in the lateral motor column of a GlyT2-positive spinal cord. A spot highlighted in yellow in the bottom image is used to show how the macro detects the size of the GlyR cluster; it also represents the median size of 0.051 µm² measured by Fiji. The top image is a binary image generated after running the macro. The middle image composites the raw image and its corresponding binary image, for observing the effect of the macro-detected GlyR spots. (b) Summed area of all clusters apposed to a presynaptic GlyT2-positive (GlyT2+) terminal, per motoneuron (left) and for individual clusters (right). (c) The size (area) of each single GlyR cluster colocalized to a presynaptic GlyT2+ terminal, per motoneuron (left) and for individual clusters (right). (d) Distribution of all synaptic GlyR clusters. The experiment was conducted on four (in b) and five (in c and d) WT and two (in b) and four (in c and d) SOD G93A (SOD) mice at ~P21. No data were statistically different between WT and SOD MNs. (Note: the bottom image in Figure 41a was extracted from an analyzed band image in the Annex.)

Although we did not obtain the expected experimental results, the size measurements of GlyR clusters that we obtained are in agreement with those recently described by Maynard et al. (Figure 42). We propose the following possible reasons for the negative experimental results obtained for the size measurements of postsynaptic GlyR clusters: 1) we did not use a specific biomarker to identify presynaptic glycinergic terminals from Renshaw cells (En1 and calbindin D28K (Sapir et al.)), the GlyT2+ presynaptic terminals representing the global inhibitory inputs; 2) anatomical results from spinal cord tissues perfused with PFA via the heart do not accurately reflect the dynamic changes in the size of synaptic GlyR clusters under the physiological conditions of the ventral root stimulation applied in the electrophysiological experiments.

Figure 42. A summary of size measurements of GlyR clusters on rodent spinal MNs.

The timing of fixation relative to the completion of electrophysiological recordings also matters, because the localization (or clustering) of GlyRs can be dynamically modulated during electrophysiological recordings: each VR stimulation recruits different (numbers of) MNs converging onto Renshaw cells, so that the convergence of MNs onto each Renshaw cell is variable (Figure 43). The more MNs converge onto a Renshaw cell, the more GlyRs on the motoneuronal membrane cluster towards the GABA/glycine release sites from Renshaw cells, with strong efficacy. Therefore, the hypothesis is that GlyR clusters are reduced in SOD, which under physiological conditions is more likely to occur in functional large-size clusters.

Figure 43. Schematic drawing of a recorded MN receiving recurrent inhibition from a Renshaw cell, but with variable glycinergic input efficacy under physiological conditions. (a) A stimulated single motoneuron (MN) connected to a Renshaw cell (RC) providing recurrent inhibition back to the same MN. (b) Synergistic heteronymous motoneuron pools connected to Renshaw cells can vary the glycinergic input efficacy fed back to the recorded MN (the convergence of synergistic homonymous motoneuron pools onto Renshaw cells is not shown here). Recurrent inhibition was obtained in the presence of D-2-amino-5-phosphonopentanoic acid (APV, 50 µM), 1,2,3,4-tetrahydrobenzo(f)quinoxaline-7-sulphonamide (NBQX, 3 µM) and gabazine (3 µM), in order to isolate the contribution of glycinergic inputs to motoneurons derived from Renshaw cells.

The colocalization analysis was performed as follows: 1) RGB color mode for each channel of interest: GlyR clusters in red on channel 1 (C1), GlyT2 boutons in blue on channel 3 (C3). 2) The color mode is changed to a composite image in order to see both C1 and C3 channels on the same image after the binary process: GlyR clusters (red), GlyT2 boutons (green), colocalization (yellow). A >30% colocalization intensity between post/pre or pre/post channels is defined as a colocalized (synaptic) cluster.
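As a sketch of the colocalization rule stated above (a cluster counted as synaptic when more than 30% of it overlaps the other channel), the following Python fragment uses scipy's connected-component labelling in place of the Fiji particle analysis; the binarization and mask names are placeholders, not the exact Fiji macro:

import numpy as np
from scipy import ndimage

def synaptic_clusters(post_mask, pre_mask, min_overlap=0.30):
    """Return labels of post-synaptic particles overlapping pre_mask > 30%.

    post_mask, pre_mask: 2-D boolean arrays, e.g. binarized GlyR (C1) and
    GlyT2 (C3) channels. Overlap is measured as the fraction of each
    post-synaptic particle's area that falls on the pre-synaptic mask.
    """
    labels, n = ndimage.label(post_mask)
    kept = []
    for lab in range(1, n + 1):
        particle = labels == lab
        overlap = np.logical_and(particle, pre_mask).sum() / particle.sum()
        if overlap > min_overlap:
            kept.append(lab)
    # multiplying particle pixel counts by the pixel area (um^2/pixel) would
    # give cluster areas comparable to the ~0.051 um^2 median reported above
    return labels, kept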
The planning and initiation of locomotion take place in supraspinal areas, including the cortex, the basal ganglia, the midbrain and the hindbrain, but the precise timings and patterns of locomotor movements in vertebrates are generated by the activity of the locomotor CPG network, in neuronal assemblies that are located in the spinal cord itself (for review, see O. Kiehn, 2016). These CPGs are involved in many important functions, including different forms of locomotion, such as swimming, walking, running, and flying. The first schematic of the locomotor CPG, called the "half-center" model, was proposed by Graham Brown (1914) and elaborated on by Lundberg (1981) (McCrea et al., 2008; Stuart et al., 2008). This model includes two (flexor and extensor) half-centers reciprocally inhibiting each other. The mutual inhibitory interactions between the half-centers are mediated by inhibitory interneurons, which ensure that only one half-center can be active at a time. Due to some fatigue or adaptation mechanism, the activity of the active half-center gradually decreases, leading to the activation of the antagonistic half-center. The antagonistic half-center then inhibits the previously active half-center, thus switching the locomotor phase. It is suggested that the flexor and extensor half-centers directly project to and activate the flexor and extensor MNs, respectively.

At a very general level of organization, locomotor CPG networks in all vertebrates need to secure two basic functions: rhythm-generation and pattern-generation. The rhythm is the clock. For example, rhythmic spinal circuits control locomotor gait, thalamic oscillations reflect attentional state, cerebellar rhythms are important for motor coordination, and circadian rhythms entrain our biological clocks to a 24-hour cycle. In the spinal cord, pattern-generation involves the rhythmic activation of MNs and left-right alternation. The pattern generators are neuronal ensembles capable of generating the basic spatiotemporal patterns underlying "automatic" movements (e.g., locomotion, respiration, swallowing and defence reactions), in the absence of peripheral feedback.

A series of lesion experiments have demonstrated that the walking mammalian locomotor CPG network, or its core CPG components, is located in the ventral half of the caudal spinal cord (Kjaerulff et al., 1996) (for review, see Goulding, 2009). The spinal locomotor CPG comprises a distributed network of premotor interneurons and MNs that may generate basic movement patterns by mediating the synergistic drive to multiple muscles, such as limb and body muscles (O. Kiehn, 2016; Takei et al., 2017). The core of the network consists of groups of excitatory and inhibitory premotor interneurons that serve designated roles in the network operation. The central organizing feature of the motor circuitry is the grouping of MNs into discrete operational units, motor pools, each of which innervates a single muscle. These premotor interneurons receive activating inputs from the brain and are able to produce the rhythms and patterns of locomotion that are conveyed to MNs and then to the axial and limb muscles (Figure 8).

Dual inhibitory CIN pathways controlling left-right alternation

We have seen above the role of different commissural interneurons in pattern-generation. Anatomical tracing and electrophysiological studies in mice reveal that the neural substrate for driving left-right coordination consists of at least two components: commissural interneurons (CINs) and a class of ipsilateral interneurons (Crone et al., 2008).
These neurons have converged towards a common network structure controlling left-right coordination at the segmental level. CINs must coordinate activity on the two sides of the body through a complex segmental CIN system in newborn rodents, with a dual inhibitory CIN pathway and a single excitatory CIN pathway (Crone et al., 2008; Kiehn, 2011). The dual inhibitory pathway is composed of two distinct projections: a direct inhibitory CIN input to contralateral MNs, and an indirect excitatory CIN projection that provides input to contralateral inhibitory interneurons, including Renshaw cells (RCs) and other inhibitory interneurons (e.g., Ia interneurons).

V0-CINs are composed of two subclasses: one-third of V0-CINs are glutamatergic and Evx1-positive (V0V), while the remaining two-thirds are inhibitory and Evx1-negative (V0D). V3-CINs are Sim1-positive and all excitatory; the V3 population was initially shown to extend contralaterally projecting axons at early embryonic time points (Y. Zhang et al., 2008; Gosgnach, 2022). Genetic loss-of-function experiments in normal mice show that: 1) when V0-CINs are ablated, left-right alternation is upset, resulting in a hopping, rabbit-like gait, which indicates that the V0-CINs may be important for left-right alternation through the action of V0V via the excitatory indirect pathway and V0D via the inhibitory direct pathway; lacking these V0-CIN pathways, the default option for rhythmic outputs is a hopping gait. 2) When V3-CINs are removed, spinal locomotor activity shows increased variability in the locomotor burst amplitude and period, and an imbalance between the left and right activity, which suggests that the V3-CINs play a minor role in left-right alternation, although projections to inhibitory INs via an indirect inhibitory pathway are mediated by a subset of V3-CINs (V3-CINei). V3-CINs also project directly to contralateral MNs, indicating that V3-CINs (V3-CINe) may be active under conditions of left-right synchrony via a direct excitatory pathway (Kiehn, 2011). These findings suggest that the V3-CIN population, although expressing the same gene, can be functionally divided into two distinct sub-populations: V3-CINei (involved in "left-right alternation" circuits) and V3-CINe (involved in "left-right synchrony" circuits). This also implies that the direct pathway may play a more important role than the indirect pathway in terms of functional effects. Moreover, the V3-CIN class receives rhythmic excitation during fictive locomotion.

The V2 interneuronal population can be divided into two distinct subpopulations: excitatory V2a neurons, which express the transcription factor Chx10 (Al-Mosawie et al., 2007; Lundfald et al., 2007), and inhibitory V2b cells, which express Gata3 (Lundfald et al., 2007; J. Zhang et al., 2014; Britz et al., 2015). V2a-INs are ipsilateral, Chx10-positive and glutamatergic. Upon ablation of V2a-INs using Chx10-DTA mice, the locomotor frequency and motor burst amplitude of drug-induced locomotor-like activity become more variable than with the V2a-INs present. Left-right alternation is disrupted, especially at medium to high locomotor frequencies, with no disruption of left-right synchronous activity.
V2a-INs therefore provide significant drive to one pathway of the dual inhibitory system without being the main source of rhythm-generation, indicating that V2a-INs are critical for the initiation of left-right alternation circuits and are recruited in a frequency-dependent manner (i.e., at medium to high locomotor frequencies). V1-INs express the transcription factor En1.

Riluzole and edaravone (Abe et al., 2014) remain the only drugs approved by the Food and Drug Administration; however, both have limited efficacy in ALS patients. Riluzole is a drug that blocks the persistent sodium current, likely reducing chronic glutamate excitotoxicity. Edaravone is a scavenger of peroxyl radicals and peroxynitrites that promotes cell survival against oxidative stress and potentially inhibits apoptosis. There is an urgent need for more effective drugs for the treatment of this incurable disorder.

The hallmark of ALS is the presence of lower and upper motor neuron signs and symptoms. ALS can present with four different types of onset: limb onset, bulbar onset, primary lateral sclerosis and progressive muscular atrophy (Matthew C. Kiernan et al., 2011; Al-Chalabi et al., 2013). In limb onset, voluntary muscles of both upper and lower limbs are affected, causing muscle weakness, fasciculations, spasticity, wasting and altered tendon reflexes. Bulbar onset is mainly characterised by difficulty in speaking, breathing and swallowing. Primary lateral sclerosis and progressive muscular atrophy involve dysfunction of the upper and lower motor neurons, respectively. While disease onset is typically focal, involving upper or lower limbs, bulbar or respiratory regions, the ensuing progressive course affects contiguous body regions, resulting in global muscle weakness, with respiratory dysfunction representing a terminal phase of the disease (Charcot et al., 1869; Brooks et al., 2000; Steve Vucic et al., 2014; Geevasinga et al., 2016).

The disease mechanism in ALS is a complex process involving multiple factors. Despite numerous studies, disease pathogenesis in ALS is still not clear. Gene mutation, epigenetic regulation and environmental exposure have been associated with ALS causes (Paez-Colasante et al., 2015). Gene mutation is considered one of the major reasons for the occurrence of fALS. Mutations in genes such as C9orf72, SOD1, TARDBP (TDP-43) and FUS occur frequently; however, mutations in 40 other genes have also been associated with ALS (Al-Chalabi et al., 2017). Among all genetic mutations, mutation in C9orf72 is the most common, accounting for 40-50% of total familial cases. Mutation in SOD1 accounts for 20% of fALS, TARDBP and FUS account for 1-5% of fALS, and ANG accounts for <1% of fALS (Robberecht et al., 2013). Although genetic mutation is closely associated with fALS, studies conducted in monozygotic twins discovered an association of epigenetic regulation, such as deoxyribonucleic acid (DNA) methylation, histone remodeling, ribonucleic acid (RNA) editing and non-coding RNAs, in ALS patients with no family history of ALS (Paez-Colasante et al., 2015). Along with the genetic contribution, different environmental factors are also known to contribute to the onset and progression of ALS (Al-Chalabi et al., 2013).

The NMDA ionotropic glutamate receptor can mediate Ca2+ influx in response to agonist binding. Excitotoxicity refers to a situation where neurons are killed due to excessive stimulation, leading to uncontrolled Ca2+ influx and activating cellular signalling processes which can then lead to cell death.
Glutamate excitotoxicity is thought to be involved in the generation of ROS and in the mitochondrial permeability transition, which causes an increase in membrane permeability to smaller molecules, leading to mitochondrial dysfunction (Schinder et al.). The mitochondria remain sensitive to oxidative damage, resulting in further mitochondrial dysfunction (Spalloni et al.). Mitochondrial dysfunction, in turn, enhances glutamate-mediated excitotoxicity by disrupting the normal voltage-dependent Mg2+-mediated blockade of glutamate receptors (Bowling et al.; Xu et al.). Glutamate excitotoxicity is currently being studied as a key pathological mechanism in several neurodegenerative diseases including dementia, Parkinson's disease, and ALS.

In the broadest definition, synaptopathy encompasses a wide range of features which, in due course, lead to synaptic dysfunction. These features include changes in Ca2+ levels at synapses, glutamate excitotoxicity, structural changes in pre- and postsynaptic anchoring proteins, altered synaptic structure and function (often associated with dendritic spine loss), dysfunctional neurotransmitter release (quantal content and frequency), impaired maintenance and regeneration of axons by Schwann cells, and cognitive deficit (Lepeta et al., 2016; M. Fogarty, 2019). Furthermore, synaptopathies lead to neuronal loss, mitochondrial dysfunction, accumulation of misfolded proteins associated with defective proteostasis, and defective neuromuscular junctions (NMJ) (Wishart et al.; Lepeta et al.; Poo et al., 2018).

1.2.2.5 ELECTROPHYSIOLOGICAL CHANGES IN ALS

ALS is a fatal neurological disorder affecting both upper MNs in the cerebral cortex and lower MNs in the brainstem and spinal cord. The primary feature of ALS is the selective loss of MNs in the brain and spinal cord. However, changes in synaptic transmission and MN excitability are among the first events that take place during development and accompany the relentless deterioration of the motor circuitry. The earliest signs, which are detected during embryonic and postnatal development and that will pave the way for the rest of the disease course in ALS mice, are linked to the electrophysiological properties and circuitry of MNs. In SOD1 G93A mice, the first motor symptoms appear at around 90 days of age, but activation of cellular stress pathways can be observed in vulnerable MNs as early as postnatal day (P)12, and dysfunction of the neuromuscular junction is already noticeable at P50 (Pun et al., 2006; Smita Saxena et al., 2009). Glutamate exerts its effects through postsynaptic receptors.
Increased extracellular glutamate concentration, or increased sensitivity of the postsynaptic neuron to glutamate, results in increased activation of glutamate receptors (Van Damme et al., 2005). In the CNS, NMDA receptors are abundantly expressed; they affect synaptic plasticity, and their alterations are implicated in many neurodegenerative diseases, such as learning and memory disorders (Castellano et al.) and depression-related disorders (Dang et al.). The regulation of neuronal signalling occurs in parallel with the trafficking of neurotransmitter transporters, which shuttle rapidly to and from the plasma membrane in response to the excitatory release of the neurotransmitter itself (Deken et al.). Abnormal synaptogenesis and synaptic signalling have been implicated in the initiation and progression of a range of neurodegenerative diseases including ALS (Gillingwater et al., 2013; J. R. Bae et al.). Investigations have shown that synaptic loss was not due to cortical atrophy but instead was strongly related to the severity of cognitive impairment of the individual patients, suggesting that synapse loss occurs before neurodegeneration (Henstridge et al.).

The earliest alterations that evidence a functional defect are those observed during the developmental stages of the motor system and are associated with MNs' acquisition of electrophysiological properties and the integration of MNs into the motor circuitry. Both isolated embryonic SOD1 G93A-expressing MNs (Pieri et al., 2003) and those in slice preparations (Quinlan et al., 2011) consistently exhibit an increased rate of repolarization compared to controls. Action potential (AP) frequency is increased in ALS embryonic MNs in culture (Pieri et al., 2003; Jason J. Kuo et al., 2004; Qing Chang et al., 2016), in the embryonic spinal cord preparation (Martin et al., 2013), and in postnatal spinal cord and brainstem slices (Jason J. Kuo et al., 2004; van Zundert et al., 2008), as well as in human MNs derived from induced pluripotent stem cells (iPSCs) obtained from patients with ALS (Brian J. Wainger et al., 2014; A.-C. Devlin et al., 2015), relative to controls. Spinal MNs from P6-P10 spinal cord slices or preparations show decreased firing frequency compared with wildtype, although, by age, the gain is lower in motoneurons from P6-P7 transgenic mice and unchanged in MNs from older P8-P10 transgenic mice versus those from wildtype mice (Bories et al., 2007). Altogether this evidence highlights altered MN excitability, inhibitory imbalance, and changes in spinal locomotor networks as salient traits of the earliest origins of the pathology described to date. LMN activity/excitability in humans was indirectly measured through nerve conduction studies (NCS) and electromyography (EMG) (Joyce et al.).
These studies revealed an increase in motor unit excitability, evidenced by the increased presence of fasciculation potentials, double discharges of the motor unit (Kostera-Pruszczyk et al.; Piotrkiewicz et al.), aberrant single motor unit firing (Piotrkiewicz et al.) and increased axonal excitability (Bostock et al., 1995; Kanai et al., 2006; Nakata et al., 2006) (Figure 33). Increased axonal excitability in ALS is likely due to enhanced persistent axonal Na+ conductance and impairments in axonal K+ conductance (Bostock et al., 1995; Horn et al., 1996; Mogyoros, 1998; Kanai et al., 2006; Nakata et al., 2006; Tamura et al., 2006; Steve Vucic et al., 2006; S. Vucic et al., 2010) and was suggested to contribute to the fasciculation potentials typical of ALS (de Carvalho et al., 2013; Howells et al., 2018; Gunes et al., 2020) (Figure 33).

Astrocytes react to CNS damage with a phenotypic shift characterized by morphological, molecular, and functional changes, including an increased proliferation rate, higher levels of astrocytic glial fibrillary acidic protein expression, and the extension of thicker processes (Sofroniew et al.). Since many terms have been used to describe these astroglial responses, leading to confusing terminologies, it was recently proposed to reserve "reactive astrocytes" as an umbrella term encompassing multiple potential states of astrocytes undergoing disease-associated remodelling (Escartin et al., 2021). During ALS progression, increased numbers of reactive astrocytes are identified in the spinal cord, even before motor symptoms are detected (Yamanaka et al., 2018). In the human cortex, a single astrocyte enwraps >1 million synapses and most, if not all, have at least one process with endfeet contacting a blood vessel (von Bartheld et al., 2016). Therefore, astrocytes are polarized cells that are in a key position to provide structural, metabolic, and trophic support to neurons. Astrocytes overexpressing the fALS-linked SOD1 mutation [G93A] reduce motor neuron survival in co-culture conditions, whether they are isolated from neonatal rats (Vargas et al., 2006) or mice (Nagai et al., 2007), or derived from human cells (Di Giorgio et al., 2007).

Altered electrical properties of SOD1 mutant MNs have been reported across many preparations and developmental stages (Avossa et al., 2006; Bories et al., 2007; Amendola et al., 2008; van Zundert et al., 2008; Pambo-Pambo et al., 2009; Q. Chang et al., 2011; Quinlan et al., 2011; Filipchuk et al., 2012; Leroy et al., 2014; Milan et al., 2014; Qing Chang et al., 2016; Medelin et al., 2016). A large number of abnormal cellular and molecular mechanisms underlying MN degeneration have been identified, but to date such efforts have led to few improvements in therapy and no cure for ALS (X. Xu et al., 2021). "Hit hard, hit early" applies to any disease, including ALS (Keon et al., 2021). Earlier initiation of therapy is favorable for ALS patients to slow the fundamental neurodegenerative process, resulting in cost savings and prolonged survival. Hence, how can we facilitate an earlier diagnosis and mechanism-based therapies for ALS? Investigating the very early initiation of ALS pathogenesis is crucial.
Interestingly, a recent study published in the journal Science demonstrated that Huntington's disease is one of the neurodegenerative diseases whose pathogenesis originates in the early developmental period [START_REF] Braz | Treating early postnatal circuit defect delays Huntington's disease onset and pathology in mice[END_REF]. In ALS, hyperexcitability is likewise triggered much earlier, prior to the manifestation of symptoms. This early sign - cellular hyperexcitability - could be considered a hallmark during early pathogenesis of ALS and maybe a first or primary pathological event. Cellular excitability is tightly regulated by, for example, intrinsic mechanisms such as ion channel density and function, and by synaptic mechanisms, both of which vary with the neurodevelopmental stage. Thereby, my PhD thesis hypothesized that the pathogenesis of ALS disease originates in the prenatal development period. Changes in excitability of diseased MNs in ALS mouse models occur remarkably early and are detected during prenatal development ([START_REF] Pieri | Altered excitability of motor neurons in a transgenic mouse model of familial amyotrophic lateral sclerosis[END_REF]; Jason J. Kuo et al., 2004; Kuo et al., 2005; [START_REF] Martin | Embryonic alteration of motoneuronal morphology induces hyperexcitability in the mouse model of amyotrophic lateral sclerosis[END_REF]; Qing Chang et al., 2016) and during the first two postnatal weeks (van Zundert et al., 2008; Pambo-Pambo et al., 2009). Hyperexcitability is consistently exhibited in embryonic spinal SOD1G93A MNs in culture (embryonic day (E) 15.5 (E15.5), Pieri et al., 2003; E12-14, Jason J. Kuo et al., 2004; E13, Qing Chang et al., 2016), in the E17.5 brainstem-spinal cord preparation (Martin et al., 2013), in postnatal P0-12 spinal cord slices (Quinlan et al., 2011) and brainstem slices (van Zundert et al., 2008), as well as in human MNs derived from iPSCs obtained from patients with ALS (Brian J. Wainger et al., 2014; A.-C. Devlin et al., 2015), relative to controls. Based on these findings, hyperexcitability may arise from intrinsic mechanisms or from an altered balance of excitation to inhibition. The intrinsic mechanisms for hyperexcitability in ALS at the prenatal stage are well delineated and are mainly mediated by voltage-gated ion channels, including Na+ (Kuo et al., 2005) and Ca2+ (Qing Chang et al., 2016) conductances. The synaptic side involves Cl- homeostasis, evaluated through [Cl-]i calculated from individual ECl (ECl = EGABAAR, because Cl- ions are the main carriers of GABAAR currents in foetal MNs (J. Bormann et al., 1987; B. X. Gao et al., 1995)).

Table 1. Passive membrane properties of E17.5 MNs and GABAAR conductances. Data are from perforated patch-clamp recordings and are shown as mean ± SEM. Statistical significance was calculated by a nonparametric Mann-Whitney test. ns, p>0.05; *p<0.05.

       Rin (MΩ)        Cm (pF)         gGABAAR (pS)     gGABAAR/Cm (pS/pF)   n    N
WT     104.9 ± 10.8    143.7 ± 14.2    3.90 ± 0.97      0.029 ± 0.007        16   10
SOD    148.2 ± 13.5*   105.6 ± 5.9*    1.91 ± 0.24 ns   0.019 ± 0.003 ns     21   16
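Where [Cl-]i is inferred from a measured ECl, as described above, the underlying relation is the Nernst equation for chloride (valence z = -1). Below is a minimal illustrative sketch of this back-calculation; the function name, the assumed extracellular chloride concentration and the recording temperature are our assumptions, not values or code from the authors:

```python
import math

R = 8.314    # J/(mol*K), gas constant
F = 96485.0  # C/mol, Faraday constant

def cl_in_from_ecl(ecl_mV, cl_out_mM=129.0, temp_C=30.0):
    """Invert the Nernst equation for Cl- (z = -1):
    ECl = (RT/F) * ln([Cl-]i / [Cl-]o)  =>  [Cl-]i = [Cl-]o * exp(ECl*F/RT).
    cl_out_mM and temp_C are hypothetical example values."""
    T = 273.15 + temp_C
    return cl_out_mM * math.exp((ecl_mV / 1000.0) * F / (R * T))

# Example: a 10 mV depolarization of ECl corresponds to a sizeable rise in [Cl-]i.
print(cl_in_from_ecl(-60.0))  # ~13 mM with the assumed [Cl-]o
print(cl_in_from_ecl(-50.0))  # ~19 mM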
Table 1. Physiological values measured in lumbar E17.5 spinal MNs.

MN sub-group             WT Inhibited    WT Dual         SOD Inhibited   SOD Dual        SOD Excited
Number of MNs            n=19            n=13            n=11            n=21            n=7
Female MNs               n=11            n=7             n=7             n=8             n=3
Male MNs                 n=8             n=6             n=4             n=13            n=4
Number of litters        N=11            N=13            N=10            N=16            N=7
E Rest (mV)              -74.2 ± 1.1     -74.9 ± 1.4     -75.5 ± 1.4     -76.8 ± 0.8     -77.8 ± 1.4
Rin (MΩ)                 82.0 ± 5.7      128.7 ± 15.7    143.3 ± 21.1    119.8 ± 9.9     153.1 ± 26.6
Cm (pF)                  171.9 ± 13.2*   122.3 ± 10.3    130.5 ± 10.9    129.4 ± 8.9     124.1 ± 9.9
Rheobase (pA)            246.8 ± 29.5    199.5 ± 45.1    173.8 ± 28.4    201.2 ± 26.6    202.0 ± 79.0
SpNbCh30 (%)             34.9 ± 9.5**    101.6 ± 8.2     65.7 ± 9.5**    97.6 ± 6.1***   121.3 ± 3.5****
Spike Amp. (mV)          50.2 ± 1.4      49.0 ± 2.0      48.1 ± 1.6      52.8 ± 1.4      54.6 ± 2.7
Spike Width (ms)         1.5 ± 0.06      1.4 ± 0.04      1.7 ± 0.12      1.4 ± 0.06      1.4 ± 0.08
Vthr-Vr (mV)             28.5 ± 1.4      31.2 ± 2.2      30.5 ± 1.9      31.6 ± 1.4      31.8 ± 1.0
Firing Freq (Hz)         18.8 ± 1.2      18.0 ± 1.4      20.1 ± 1.3      18.3 ± 1.0      16.0 ± 1.8
VLF Intensity (µA)       8.3 ± 1.5**     8.4 ± 1.3       15.0 ± 2.1      15.1 ± 1.4      13.6 ± 3.8
Evoked dGPSP Amp. (mV)

Principal Component Analysis (PCA) of the three groups of responses in SOD MNs and two groups of responses in WT MNs

As described above, depending on their response to VLF stimulation, E17.5 MNs were classified into Excited (MNs excited at all tested VLF stimulation frequencies), Dual (MNs excited at low VLF frequencies and then inhibited when increasing the VLF stimulation frequency), and Inhibited (MNs that were exclusively inhibited by VLF discharges). Five non-correlated parameters were used to characterize those responses: the percentage of variation of the number of spikes produced by the MN during the one-second VLF stimulation at 30 Hz, compared to the control spiking discharge over the same time (SpNbCh30); MN input resistance (Rin); Rheobase; the voltage difference between the threshold potential for spikes and the membrane potential (Vthr-Vr); and spike width (SpkWdth). The first parameter corresponded to the cut-off VLF frequency at which dGPSPs were either excitatory or inhibitory, whereas the remaining four parameters were those discriminating mouse P6-10 delayed MNs from immediate MNs, likely F-type MNs and S-type MNs, respectively (Leroy et al., 2014). The result of the PCA is shown in figure 2A for WT MNs and figure 2B for SOD MNs. Contributions of variables to the first and second components (tables 3 and 4) indicate that the first component of the PCA (horizontal axis) mostly represented Rheobase, Rin and spike width (in red) in both SOD and WT. However, spike width (SpkWdth) was also determinant in the first component for SOD MNs but not for WT MNs. Using a similar method, the second component was interpreted as representing the change in MN discharge (number of spikes during VLF stimulation at 30 Hz expressed as % of the control: SpNbCh30), and the spike width (in green).

Table 3. SOD MNs: Contributions to PCA 1st and 2nd components variance.

            Comp1    Comp2
SpNbCh30    1421     4177*
Rin         2797*    1294
Rheobase    3190*    1044
Vthr_Vr     504      1343
SpkWdth     2088*    2142*
mean        2000     2000

In order to estimate whether the groups were distinct, separation between pairs of groups was evaluated by calculating the inertia, defined as the ratio of the between-group variance to the global variance. The statistical significance of inertia for group separation was estimated using a Monte Carlo permutation test (1000 runs) (see methods). For WT MNs, the Dual group was significantly different from the Inhibited group (p=0.001, figure 2A2 and table 5).
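The inertia statistic and its 1000-run permutation test described above can be sketched in a few lines. This is a hypothetical re-implementation for illustration only; the use of NumPy and all variable names are our assumptions, not the authors' analysis code:

```python
import numpy as np

def between_group_inertia(X, labels):
    """Ratio of between-group variance to global variance of PCA scores X."""
    labels = np.asarray(labels)
    grand_mean = X.mean(axis=0)
    total = ((X - grand_mean) ** 2).sum()
    between = 0.0
    for g in np.unique(labels):
        Xg = X[labels == g]
        between += len(Xg) * ((Xg.mean(axis=0) - grand_mean) ** 2).sum()
    return between / total

def permutation_test(X, labels, n_perm=1000, seed=0):
    """Monte Carlo p-value for group separation: shuffle the group labels
    and count how often the shuffled inertia reaches the observed one."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    observed = between_group_inertia(X, labels)
    null = np.array([between_group_inertia(X, rng.permutation(labels))
                     for _ in range(n_perm)])
    return observed, (np.sum(null >= observed) + 1) / (n_perm + 1)
```

The returned p-value is the fraction of label permutations whose inertia reaches the observed value, matching the Monte Carlo procedure reported above.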
For SOD MNs, Excited and Dual groups were not found statistically different (p = 0.213), while Inhibited MNs were statistically different from Dual (p=0.023) and Excited (p=0.013) (see histograms of simulated inertia in figure 2B2 and table 5). Therefore, in the following, Dual and Excited were grouped.

Table 4. WT MNs: Contributions to PCA 1st and 2nd components variance.

            Comp1     Comp2
SpNbCh30    403       4494*
Rin         4142*     402
Rheobase    4480*     40
Vthr_Vr     274       1816
SpkWdth     700       3248*
mean        1999.8    2000

Table 5. p values from Monte Carlo test after PCA.

                 SOD MNs              WT MNs
Groups           Excited    Dual      Dual
Dual             0.213      -         -
Inhibited        0.013      0.023     0.001

Figure 3: WT Inhibited MNs are low Rin, but not SOD MNs. (A1) SOD MNs have a lower Rin value compared to WT MNs from the same littermates. (A2) Rheobase is low in SOD MNs, which are more excitable. * p < 0.05, Mann-Whitney test. (B) Rin is low exclusively in WT Inhibited MNs. * p < 0.05, ** p < 0.01, Mann-Whitney test. (C) Rheobase values are inversely proportional to Rin values in WT and SOD MNs. Unlike SOD MNs (C2), most Inhibited WT MNs are < 100 MΩ (C1). (C3) Percentage of Inhibited MNs < 100 MΩ (bright blue) in both genotypes. *** p < 0.0001, Chi-square test.

dGPSP effects on SOD MN firing activity do not correlate with morphology

In an attempt to verify whether Dual (Excited/Inhibited) MNs exhibited an altered dendritic tree, as expected in our working hypothesis, some recorded MNs tested for electrophysiology were neurobiotin-injected and reconstructed (figure 4). Dendrograms were performed and morphometric parameters collected. Eight WT MNs (N = 7) and 11 SOD MNs (N = 10) were encompassed in the present study. In our 2013 paper (Martin et al., 2013) we have shown that morphometric parameters of E17.5 MNs correlated with their measured Rin, MNs with shorter dendrites having higher Rin and vice versa. Therefore, in order to probe whether such a relationship also exists in the present study, reconstructed MNs were grouped according to their high (H) Rin (>100 MΩ) or low (L) Rin (<100 MΩ). As illustrated in table 6, the total dendritic length (ΣLen) was lower in WT H Rin MNs (2408 ± 379 µm) compared to WT L Rin MNs (3433 ± 458 µm), the difference between the two groups not reaching significance, likely because only 3 H Rin WT MNs were considered. A similar trend was found for the mean length of the terminal dendritic segment (Mean Len term), the mean length of the intermediate dendritic segment (Mean Len interm) and the total dendritic volume (ΣVolume). The total dendritic area (ΣArea) was significantly lower in WT H Rin MNs (6838 ± 1417 µm²) compared to WT L Rin MNs (10335 ± 1069 µm²) (table 6). ΣLen was significantly lessened in SOD H Rin MNs (2653 ± 460 µm) compared to SOD L Rin MNs (4417 ± 313 µm), and a similar trend was obtained for the other considered morphometric parameters (table 6). Interestingly, WT H Rin MNs were Dual MNs (in green in table 6) and WT L Rin MNs were Inhibited MNs (in blue in table 6), whereas Dual MNs and Inhibited MNs were found in both SOD H Rin and SOD L Rin groups (table 6), indicating that dGPSP effects on SOD MN firing activity do not correlate with morphology.

Table 6. Rin and dendritic morphometric parameters. Rin = input resistance; Mean Len term = mean length of the terminal dendritic segment; Mean Len interm = mean length of the intermediate dendritic segment; ΣLen = total dendritic length; ΣArea = total dendritic area; ΣVolume = total dendritic volume. H Rin = high Rin; L Rin = low Rin. Sign indicates the level of significance in the Mann-Whitney test: ns = not significant; * p < 0.05; ** p < 0.01. Blue values are Inhibited MNs whereas green values correspond to Dual MNs.
Group            Rin (MΩ)   Mean Len term (µm)   Mean Len interm (µm)   ΣLen (µm)   ΣArea (µm²)   ΣVolume (µm³)
WT
MN1  H Rin       188        124.55               112.03                 1656.08     4090.22       978.39
MN2  H Rin       155        152.73               181.24                 2824.56     8814.96       2903.26
MN3  H Rin       187        96.21                176.51                 2743.15     7610.29       1991.05
Mean WT H Rin    176.7      124.5                156.6                  2408        6838.00       1958.00
SEM              10.84      16.32                22.32                  376.7       1417.00       555.90
MN4  L Rin       85         164.21               273.34                 3719.74     9043.98       2115.35
MN5  L Rin       99         448.03               298.49                 5076.68     14579.9       4396.22
MN6  L Rin       87         252.07               221.71                 3094.78     9368.69       2934.65
MN7  L Rin       65         188.16               209.35                 2761.36     8979.47       2820.47
MN8  L Rin       95         130.18               172.61                 2510.09     9703.52       4106.88
Mean WT L Rin    86.2       236.5                235.1                  3433        10335.00      3275.00
SEM              5.886      56.5                 22.61                  458.3       1069.00       425.20
P                0.0357     0.0714               0.1429                 0.2500      0.0357        0.1429
Sign             *          ns                   ns                     ns          *             ns
SOD
MN1  H Rin       105        204.30               187.97                 3955.28     10715.30      3110.49
MN2  H Rin       119        144.87               191.78                 2117.87     4922.91       1387.94
MN3  H Rin       168        325.13               281.12                 3356.41     10379.10      3561.50
MN4  H Rin       192        97.78                109.26                 1621.91     5183.74       1617.13
MN5  H Rin       211        60.41                68.67                  1266.00     3192.48       776.19
MN6  H Rin       257        183.90               146.98                 3603.44     9840.53       2681.77
Mean SOD H Rin   175.3      169.4                164.3                  2653        7372.00       2189.00
SEM              23.38      37.99                30.24                  460.7       1349.00       445.00
MN7  L Rin       58         253.75               202.52                 3903.86     9251.66       2355.83
MN8  L Rin       82         226.62               209.35                 4813.01     12275.40      3198.43
MN9  L Rin       70         284.93               174.23                 4243.14     12564.50      3957.78
MN10 L Rin       90         220.69               212.86                 3704.70     9030.16       2432.64
MN11 L Rin       64         202.28               264.89                 5418.15     11675.00      2561.46
Mean SOD L Rin   72.8       237.7                212.8                  4417        10959         2901.00
SEM              5.851      14.41                14.69                  313.1       756.9         303.10
P                0.0043     0.1255               0.1775                 0.0173      0.1255        0.4286
Sign             **         ns                   ns                     *           ns            ns

Table 7. Somatic morphometric parameters. P soma = soma perimeter; A trans = soma area measured in the transversal plane; A soma = soma area; V soma = soma volume; F max = maximum Feret diameter of the soma; F min = minimum Feret diameter of the soma. Sign indicates the level of significance in the Mann-Whitney test: ns = not significant; * p < 0.05; ** p < 0.01.

          P soma (µm)   A trans (µm²)   A soma (µm²)   V soma (µm³)   F max (µm)   F min (µm)
WT
MN1       80            251.45          410.67         1027.75        26.79        4.99
MN2       75.99         326.24          514.19         1223.51        27.17        8.23
MN3       78.68         159.41          406.11         822.66         31.07        3.42
MN4       93.61         342.82          406.91         1104.42        34.18        1.87
MN5       96.09         395.75          451.51         1269.31        37.16        5.77
MN6       89.73         299.48          265.27         750.97         25.83        1.72
MN7       69.32         235             412.35         985.72         24.25        4.99
MN8       68.63         282.3           1014.4         2599.08        23.56        2.18
Mean WT   81.51         286.6           485.2          1223.00        28.75        4.146
SEM       2.687         13.41           87.43          270.5          1.2          0.5155
P         0.7168        0.0157          0.0091         0.005          0.778        0.0119
Sign      ns            *               **             **             ns           *

A similar change on the soma membrane and the proximal dendrite membrane is in agreement with the literature. Indeed, the density of GAD65/67-immunoreactive synaptic boutons has been described in neonatal rat MNs as being similar when analyzed on the soma and on dendrites located up to 100 µm away from the soma [START_REF] Jean-Xavier | Dual personality of GABA/glycine-mediated depolarizations in immature spinal cord[END_REF]. Here, the proximal dendrite length that was analyzed was 28.5 ± 0.8 µm and 24.6 ± 0.7 µm, for WT and SOD MNs respectively. The synaptic coverage diminished for the most distal dendrites (200-250 µm). In order to verify whether the lowered VIAAT innervation of SOD MNs, compared to WT littermate MNs, could impact the motoneuronal network, we analyzed the synaptic GABA/glycine activity in SOD vs WT MNs.
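The P and Sign rows in tables 6 and 7 come from Mann-Whitney comparisons between the two groups of each table. The test can be reproduced in a few lines; this is a hypothetical sketch using SciPy with the ΣLen values taken from table 6, not the authors' analysis code:

```python
from scipy.stats import mannwhitneyu

# Total dendritic length (µm) for SOD MNs, values from table 6
sod_h_rin = [3955.28, 2117.87, 3356.41, 1621.91, 1266.00, 3603.44]
sod_l_rin = [3903.86, 4813.01, 4243.14, 3704.70, 5418.15]

stat, p = mannwhitneyu(sod_h_rin, sod_l_rin, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # p ≈ 0.0173, the value reported in table 6
```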
Interestingly, we found that the frequency of GABA/glycine synaptic events differed between genotypes (see figure 6D for representative traces and figure 6E for group data). This was confirmed by a quantitative analysis of global VIAAT (figure 6C1), both on the soma and the proximal dendrite: 97.23 ± 2.68 / 100 µm for WT vs 66.04 ± 3.44 / 100 µm for SOD MN somata, and 90.49 ± 1.78 / 100 µm for WT vs 70.03 ± 1.79 / 100 µm for SOD proximal dendrites. Synaptic VIAAT (co-localizing with gephyrin) (figure 6C2) also pointed out a reduction in SOD MNs compared to WT littermates: 17.80 ± 2.12 / 100 µm for WT vs 12.05 ± 2.11 / 100 µm for SOD MN somata, and 23.75 ± 1.54 / 100 µm for WT vs 17.51 ± 1.41 / 100 µm for SOD proximal dendrites. One explanation is that the developmental increase of the synaptic input onto MNs (Delpy et al., 2008; [START_REF] Scain | Glycine release from radial cells modulates the spontaneous activity and its propagation during early spinal cord development[END_REF]) is delayed. Another explanation is that the lack of GABA/glycine impedes the occurrence of Inhibited low-Rin SOD MNs. We have shown that increasing gGABA/glycine favors dGPSP inhibition occurring on the SOD MN soma (figure 5D1). The role of gGABA/glycine (gCl) in the dual effect of dGPSPs was also demonstrated in a previous study where we found that increasing gCl favors inhibition either during a single dGPSP or during trains in which gCl summates (Branchereau et al., 2016). Our immunohistochemical data indicate a lower density of VIAAT staining on MN somata and proximal dendrites from the SOD1G93A strain compared to WT littermate MNs. Consequently, VLF stimulation activates fewer GABA/glycine inputs, lowering the shunting (inhibitory) effect in SOD1G93A MNs, and we explain this alteration by a lower density and location of GABA/glycine synaptic inputs on SOD MNs compared to WT MNs. Simulations also indicate that morphological changes in SOD MNs - a shorter dendritic tree - favor the excitatory action (figure 5B). Additional parameters govern the effect of repetitive dGPSPs on MN firing. The ECl value clearly controls this effect, and a slight hyperpolarization of ECl is able to switch the effect from pure excitation to pure inhibition (figure 5A). A shorter decay time of dGPSPs favors their excitatory effect on MN firing, as we previously described [START_REF] Branchereau | Relaxation of synaptic inhibitory events as a compensatory mechanism in fetal SOD spinal motor networks[END_REF], because it leads to a higher capacity of shunting-effect summation [START_REF] Branchereau | Depolarizing GABA/glycine synaptic events switch from excitation to inhibition during frequency increases[END_REF]. The decay of dGPSPs is closely dependent on [Cl-]i (Pitt et al., 2008a; Houston et al., 2009a) because of a direct effect of chloride ions acting in the pore of glycine and GABA channels (Moroni et al., 2011a). Here, [Cl-]i was set as similar in SOD and WT MNs, hence the decay time of dGPSPs was maintained as similar in all recorded MNs. We have previously shown that E17.5 SOD1G93A MNs exhibit dGPSPs with an increased decay time compared to WT littermate MNs [START_REF] Branchereau | Relaxation of synaptic inhibitory events as a compensatory mechanism in fetal SOD spinal motor networks[END_REF], and this change could compensate for the more depolarized ECl value found in SOD MNs. In physiological conditions, this compensatory change may limit the over-excitatory action of repetitive dGPSPs in SOD MNs demonstrated in the present study.
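Since the argument above hinges on the decay time constant of dGPSPs/IPSCs, it is worth recalling how tau decay is typically extracted: by fitting a single exponential to the decaying phase of the averaged synaptic current. The following is a minimal hypothetical sketch with SciPy; the sampling rate and all names are our assumptions, not the authors' code:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, amplitude, tau, baseline):
    """Single-exponential decay: I(t) = amplitude * exp(-t/tau) + baseline."""
    return amplitude * np.exp(-t / tau) + baseline

def fit_tau_decay(current_pA, fs_Hz=20000.0):
    """Fit the decay phase of an averaged IPSC, from its peak onwards.
    Returns tau in ms."""
    i_peak = int(np.argmax(np.abs(current_pA)))
    decay = current_pA[i_peak:]
    t_ms = np.arange(decay.size) / fs_Hz * 1000.0
    p0 = (decay[0], 10.0, 0.0)  # initial guess: peak amplitude, 10 ms, zero baseline
    (amp, tau, base), _ = curve_fit(mono_exp, t_ms, decay, p0=p0)
    return tau
```

A longer tau, as reported for SOD1G93A dGPSPs, directly increases the temporal summation of the shunting conductance during trains.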
It is however likely that the lack of inhibitory GABA/Gly inputs on SOD MNs and their morphological alteration (Martin et al., 2013) make them vulnerable MNs. Chang et al., 2011) . A possible reduction of GlyR on SOD MNs may also account for the higher excitability of dGPSPs in our experiments. Finally, the main limitation of our study is based on the fact that MNs were recorded in whole-cell configuration with similar ECl values. As physiological ECl is 10 mV more depolarized in E17.5 SOD1 G93A MNs compared to WT littermate MNs[START_REF] Branchereau | Relaxation of synaptic inhibitory events as a compensatory mechanism in fetal SOD spinal motor networks[END_REF], it would have been better to test the effect of dGPSPs barrages on the MN spiking activity using perforated patch-clamp recordings that preserve physiological ECl. Dumoulin et al., 1999). The recording electrode, in order to activate local inhibitory fibers that are pharmacologically MNs were recorded with patch pipettes isolated (see pharmacology below) (Figure S1). Monophasic single stimulations delivered to filled with 0.4% (wt/vol) Neurobiotin (CliniSciences, Montrouge, France) diluted in the the VLF (duration 1ms, intensity ranging 3-30 µA) were driven by a programmable Master 8 intracellular medium (see below). After recording sessions, whole brainstem-SC preparations Stimulator/Pulse generator (Master-8, A.M.P.I. Jerusalem, Israel). (SOD or WT embryos) were fixed in 4% paraformaldehyde (PFA) prepared with 0.1 M PB E GABAA/glyR , considered as ECl because of the low HCO3 -conductance of (wt/vol) for 2h at room temperature. They were rinsed three times with 0.1 M Phosphate GABA A R/glycineR in embryonic spinal MNs (J. Bormann et al., 1987), was set to -45mV (- Buffer Saline (PBS), blocked with a blocking buffer containing 2 % (wt/vol) bovine serum 57.5mV after correction of the junction potential, see below), which was in the range of the albumin (BSA) and 0.5% PBST [0.5% Triton X-100 in 0.1M PBS (vol/vol)], followed by recorded physiological E GABAAR in E17.5 MNs (Branchereau et al., 2019). Intracellular incubation with primary antibody and streptavidin-Cy3 (1:400, Invitrogen) diluted in medium composed (in mM) was the following: 115 K gluconate, 5 NaCl, 15 KCl, 1 CaCl 2 , incubating solution made with 0.1 M PBS containing 0.1% Triton X-100 (Sigma) and 0.2% 2H 2 O, 10 HEPES, 10 EGTA, and 2 MgATP, (297 mosmol/kg H 2 O) adjusted to pH 7.4 using BSA (Sigma,St Louis, MO, USA) for 48h at 4 °C. We processed brainstem-SC preparations 1M KOH. Measurements were corrected for liquid junction potentials (12.5 mV) calculated with an anti-gephyrin antibody (1:400, mouse monoclonal, mAb7a (GlyR7a), Synaptic Systems GmbH, Germany), coupled with a rabbit antibody directed against the vesicular inhibitory amino acid transporter (VIAAT, 1:1000, antibody Provided by B. Gasnier, Institut de Biologie Physico-chimique, Paris). Gephyrin is the GABA brainstem-SC preparations were thereafter washed four times for 20 min, incubated with secondary antibody conjugated to Alexa Fluor488 goat anti-rabbit IgG(H + L) (1/500, Invitrogen, France) for 2 h, at room temperature, abundantly rinsed in 0.1 M PBS, and finally mounted with an anti-fade reagent (Fluoromount, Electron Microscopy Sciences, CliniSciences, France) and stored at 4°C in obscurity until confocal observation. 
Table 2. Variables used for PCA.

Variable      Definition                                                        Measurement   Range
SpNbCh30      % of change in number of spikes during VLF stimulation at 30 Hz   %             0-150
Rin           Input resistance                                                  MΩ            113 to 264
Rheobase      Threshold current                                                 pA            60 to 510
Vthr-Vr       Threshold membrane potential - resting membrane potential        mV            21 to 43
Spike Width   Mid-amplitude spike width                                         ms            1 to 2.25

We studied the SOD1G93A mouse model of ALS disease at an early stage. We first investigated the changes in physiological [Cl-]i or ECl values by perforated patch-clamp recording. Our study revealed that prenatal E17.5 SOD1G93A MNs express an impairment of chloride homeostasis that leads to a 10 mV more depolarized ECl, indicating that a very early inhibitory dysfunction may initiate the pathogenesis in ALS MNs, as hypothesized by others (van Zundert et al., 2012; Clark et al., 2015a). This could lead to less efficient inhibitory inputs to MNs [START_REF] Branchereau | Depolarizing GABA/glycine synaptic events switch from excitation to inhibition during frequency increases[END_REF], which in turn would be expected to affect locomotor coordination. Moreover, we found that the passive properties (Rin and Cm) of E17.5 MNs are consistent with previous observations (Martin et al., 2013), in that Rin was increased and Cm decreased in SOD1G93A MNs.

In P10 rat MNs, a change from 10 mM [Cl-]i to 131 mM led to an increase in tau decay from 9.2 ms to 19.8 ms (Pitt et al., 2008b). In P12 Purkinje cells, a change from 10 mM [Cl-]i to 150 mM led to an increase in tau decay from 14 ms to 19.8 ms (Houston et al., 2009b). Finally, in HEK-293 cells, a change from 10 mM [Cl-]i to 30 mM and then 131 mM led to an increase in tau decay from 9.6 ms to 12 ms and then 22 ms (Pitt et al., 2008b). Even though our data highlight a more prolonged tau decay in SOD1G93A GABAAR/GlyR IPSCs, we cannot directly link tau decay values to physiological [Cl-]i values, because tau decay values have been collected from CsCl recordings in which [Cl-]i was set as being elevated.

Molecular mechanisms modulating the elevated [Cl-]i in SOD1G93A MNs
Which Cl- transporter raised or upregulated the [Cl-]i in E17.5 SOD1G93A MNs? The intracellular Cl- concentration ([Cl-]i) is mainly regulated by the action of two Cl- transport proteins: NKCC1 (Cl- importer) and KCC2 (Cl- excluder). To determine the molecular mechanisms underlying the elevated [Cl-]i that prolonged tau decay in E17.5 SOD1G93A MNs, we examined whether the dissipation of elevated [Cl-]i was mediated by activation of neuronal KCC2 (Payne et al., 1996; Rivera et al., 1999), which is expressed at early stages in the embryonic spinal cord (Alain Delpy et al., 2008). Impaired KCC2 expression or function is involved in a number of neurodevelopmental and neurological disorders (Nicolas

Acknowledgements
The completion of this thesis would not have been possible without the support of many people.
Here, I wish to express my deep and sincere gratitude to all the people who contributed to my research or PhD studies, directly and indirectly, with experiments, support, advice and encouragement.

Acknowledgements
We thank Marie-Paule Algeo, Marie-Alix Derieppe and Nathalie Argenta for excellent technical assistance and animal breeding. A part of the microscopy was done in the Bordeaux Imaging Center. The help of Sébastien Marais is specifically acknowledged. This work was supported by funding from the 'Fédération pour la Recherche sur le Cerveau' (FRC) and the 'Association pour la recherche sur la Sclérose Latérale Amyotrophique et autres maladies du Motoneurone' (ARSLA).

Acknowledgements
H.Z. was granted by the China Scholarship Council (CSC). We thank Nathalie Argenta for excellent technical assistance and animal breeding. The help of Gilles Courtand for image analyses is acknowledged. We finally warmly thank the "Association pour la recherche sur la Sclérose Latérale Amyotrophique et autres maladies du Motoneurone" (ARSLA), as well as AFM-Téléthon, for their financial support. This study received financial support from the French government in the framework of the University of Bordeaux's IdEx "Investments for the Future" program / GPR BRAIN_2030. Team: MotoPsyn (Branchereau / Le Ray).

Fédération pour la Recherche sur le Cerveau
Association pour la Recherche sur la Sclérose Latérale Amyotrophique et autres Maladies du Motoneurone
Pascal Branchereau
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Potentiating KCC2 activity is sufficient to limit the onset and severity of seizures. Proc Natl Acad Sci U S A, 115(40), 10166-10171. doi: 10.1073/pnas.1810134115
Moroni, M., Biro, I., Giugliano, M., Vijayan, R., Biggin, P. C., Beato, M., & Sivilotti, L. G. (2011a). Chloride ions in the pore of glycine and GABA channels shape the time course and voltage dependence of agonist currents. J Neurosci, 31(40), 14095-14106. doi: 10.1523/JNEUROSCI.1985-11.2011
Moroni, M., Biro, I., Giugliano, M., Vijayan, R., Biggin, P. C., Beato, M., & Sivilotti, L. G. (2011b). Chloride ions in the pore of glycine and GABA channels shape the time course and voltage dependence of agonist currents. Journal of Neuroscience, 31(40), 14095-14106. doi: 10.1523/JNEUROSCI.1985-11.2011
Mortensen, M., & Smart, T. G. (2006). Extrasynaptic αβ subunit GABAA receptors on rat hippocampal pyramidal neurons: Extrasynaptic GABAA receptor isoforms. The Journal of Physiology, 577(3), 841-856. doi: 10.1113/jphysiol.2006.117952
Moustafa, Y. (2018). Gleason: A Rebirth. American Journal of Psychiatry Residents' Journal, 13(8), 11-11. doi: 10.1176/appi.ajp-rj.2018.130807
Muller, E., Le-Corronc, H., & Legendre, P. (2008). Extrasynaptic and postsynaptic receptors in glycinergic and GABAergic neurotransmission: a division of labor? Front Mol Neurosci, 1, 3. doi: 10.3389/neuro.02.003.2008
Muller, E., Le Corronc, H., Triller, A., & Legendre, P. (2006). Developmental dissociation of presynaptic inhibitory neurotransmitter and postsynaptic receptor clustering in the hypoglossal nucleus. Molecular and Cellular Neuroscience, 32(3), 254-273. doi: 10.1016/j.mcn.2006.04.007
Muller, F. L., Liu, Y., Jernigan, A., Borchelt, D., Richardson, A., & Van Remmen, H. (2008). MnSOD deficiency has a differential effect on disease progression in two different ALS mutant mouse models. Muscle & Nerve, 38(3), 1173-1183. doi: 10.1002/mus.21049
Murdock, B. J., Bender, D. E., Kashlan, S. R., Figueroa-Romero, C., Backus, C., Callaghan, B. C., . . . Feldman, E. L. (2016). Increased ratio of circulating neutrophils to monocytes in amyotrophic lateral sclerosis. Neurology - Neuroimmunology Neuroinflammation, 3(4), e242. doi: 10.1212/NXI.0000000000000242
Murdock, B. J., Bender, D. E., Segal, B. M., & Feldman, E. L. (2015). The dual roles of immunity in ALS: Injury overrides protection. Neurobiology of Disease, 77, 1-12. doi: 10.1016/j.nbd.2015.02.017
Murdock, B. J., Zhou, T., Kashlan, S. R., Little, R. J., Goutman, S. A., & Feldman, E. L. (2017). Correlation of Peripheral Immunity With Rapid Amyotrophic Lateral Sclerosis Progression. JAMA Neurology, 74(12), 1446. doi: 10.1001/jamaneurol.2017.2255
Musaro, A., & Rosenthal, N. (2002). The Role of local Insulin-like Growth Factor-1 Isoforms in the Pathophysiology of Skeletal Muscle. Current Genomics, 3(3), 149-162. doi: 10.2174/1389202023350462
Nabekura, J., Katsurabayashi, S., Kakazu, Y., Shibata, S., Matsubara, A., Jinno, S., . . . Ishibashi, H. (2004). Developmental switch from GABA to glycine release in single central synaptic terminals. Nature Neuroscience, 7(1), 17-23. doi: 10.1038/nn1170
Nagai, M., Re, D. B., Nagata, T., Chalazonitis, A., Jessell, T. M., Wichterle, H., & Przedborski, S. (2007). Astrocytes expressing ALS-linked mutated SOD1 release factors selectively toxic to motor neurons. Nature Neuroscience, 10(5), 615-622. doi: 10.1038/nn1876

Data availability
All data generated or analysed during this study are included in the manuscript and supporting files.

Highlights
- Prenatal (E17.5) spinal SOD1G93A MNs exhibit more excitatory actions of low-frequency electrically evoked dGPSPs than WT MNs.
- At the late embryonic stage, stimulation by dGPSPs allows functional and morphological identification of two types of spinal MNs located in the lateral motor column: low-Rin (the dominant population) and high-Rin WT MNs.
- Low-Rin WT MNs are preferentially inhibited by dGPSP stimulation, unlike SOD1G93A MNs.
- dGPSP effects do not correlate with morphology (and Rin) in SOD1G93A MNs.
- Simulations show that moving GABA/glycine input away from the cell body to the dendrites favors the excitatory effect of dGPSPs.
- The density of perisomatic VIAAT terminals in SOD1G93A MNs is lower compared to WT MNs.
- Low-Rin SOD1G93A MNs - putative fast, ALS-vulnerable MNs - lack inhibition at the fetal stage.

The VLF intensity required to evoke dGPSPs was significantly higher (Mann-Whitney test) in SOD MNs (14.8 ± 1.1 µA, n = 39) compared to WT MNs (8.3 ± 1.0 µA, n = 32) (Figure 1E2; see table 1 for individual comparisons). Even though the theoretical ECl was set to -57.5 mV, the actual ECl was measured for each MN by assessing, in voltage-clamp mode, the reversal potential of single VLF-evoked IPSCs (Figures 1F1-F2). We did not find any significant difference between the five groups (Figure 1F3) (table 1). The slope of the I/V curves performed for assessing the actual ECl gives the IPSC conductance (gIPSC), which was divided by the membrane capacitance (gIPSC/Cm). No statistical differences were found between WT and SOD MN groups (Figure 1F4) (table 1), indicating an equal contribution of GABAA/glycine receptors when the VLF intensity was adjusted to obtain comparable dGPSPs (or IPSCs). dGPSP effects on SOD MN firing activity were not closely correlated to morphology (and Rin) (see also figures 3C2-C3), unlike in WT MNs.

Figure 4: Examples of reconstructed MNs: a Low-Rin MN (tables 6 and 7) and a High-Rin MN (B1-B2) (MN#1 in tables 6 and 7). Reconstructions were made from ×40 high-magnification confocal acquisitions (0.2 µm in Z).
The Rin value for each illustrated MN is given below the 3D reconstruction image.

Dissection and isolation of the embryonic brainstem-spinal cord preparation
Pregnant mice were sacrificed by cervical dislocation. The fetuses were removed from the mother using the surgical procedure of laparotomy and transferred into cold (6-8 °C) artificial cerebrospinal fluid (aCSF) oxygenated with a 95% O2 and 5% CO2 mixture. The aCSF was composed of the following components (in mM): 114.5 NaCl, 3 KCl, 2
We recorded only 1-2 MNs from each brainstem-spinal cord preparation. Fetuses were genotyped at the genotyping platform (Magendie Neurocentre, Bordeaux). Genotyping was performed by standard PCR from mouse tail samples using primers stated in a protocol from the Jackson Laboratory (https://www.jax.org/Protocol?stockNumber=002726&protocolID=29082).

Confocal microscopy and neuron reconstruction
Fluorescent images were acquired with a Zeiss LSM 900 confocal microscope with a 63× oil-immersion objective (NA 1.40); the optical section thickness was set at 300 nm and images were acquired at 16-bit depth. The VIAAT density was calculated by dividing the total number of VIAAT clusters by the total length of the selected band image. Three to six proximal dendrites (dendritic length < 50 µm) per MN, and two sides of each dendrite, were quantified. Three-dimensional (3D) reconstructions of patch-clamp recorded WT and SOD MNs were performed with the Neurolucida confocal module (MBF Bioscience Inc., Williston, VT, USA). Morphometric parameters, characterizing morphologic and topologic features of the MN dendritic arborization and cell body, were harvested with the Neurolucida Explorer software package (MBF Bioscience Inc.) (see table 2).

Electrophysiological procedures and data analysis
Patch-clamp electrodes were constructed from thin-walled single-filamented borosilicate glass.

Supplementary figures
Figure S1: E17.5 MN recording paradigm. The MN, located in the lateral motor column (LMC), was recorded in whole-cell configuration in the presence of the following drugs dissolved in the aCSF: kynurenic acid (4 mM, blocks glutamate receptors), dihydro-beta-erythroidine (5 μM, blocks cholinergic receptors), and methysergide (10 μM) and ketanserine (10 μM), which block the serotoninergic system. Synaptic activation of the GABAAR/GlyR was elicited thanks to electrical stimulation of local inhibitory fibers using a concentric bipolar wire stimulating electrode positioned on the ventro-lateral funiculus (VLF), rostral to the recording electrode.

Author Contributions
Abbreviations
Additional Information
Competing financial interests: The authors declare no competing financial interests.

- The FIJI macro defines the synapse as a region of apparent colocalization (at least 30% overlap) between the staining of GlyT2-positive boutons and the staining of postsynaptic GlyR clusters (Verstraelen et al., 2018).
- Procedures:
First part
Select a ROI, which is a segment (band image) containing the intact soma membrane of large-size MNs (>40 µm Feret diameter) and unambiguous GlyR-ir spots and presynaptic GlyT2 terminals. The width of the segment (ROI) is set to 30 pixels in order to include GlyR-ir clusters apposed to GlyT2 boutons.
- Notes on the selection criteria of band images:
All soma segments of ChAT-ir MNs with an intact membrane (red channel #2), unambiguous GlyR clusters (grey channel #1) and co-localized GlyT2 boutons (green channel #3) were selected as band images.
- It makes the analysis much more accurate compared to selecting the whole MN soma periphery, which exhibits parts with interference and missing signals. Interference signals come from numerous densely-packed GlyT2-positive fibers (white arrows in Fig. 1A-C) and GlyR-ir spots with artefacts (white thick arrow in Fig. 2). Missing signals of GlyT2-positive boutons are due to tissue damage (white thick arrow in Fig. 3A-B).
- Darkness around ChAT-ir MNs suggests tissue deficiency or damage, which means that GlyT2 terminals may be missing.
Second part
Pre- and post-synaptic spot detection and colocalization analysis: theoretical aspects
Object-based and intensity-based colocalization analyses are used in the FIJI macro for detecting pre- and post-synaptic spots or particles:
- Object-based colocalization analysis: only GlyT2 particles with area > 0.2 µm² (size = 0.2-Infinity) and GlyR particles with area > 0.02 µm² (size = 0.02-Infinity) are detected by the macro.
For the colocalization analysis, the criteria are set such that at least 30% of the area of a GlyR particle colocalizes with a GlyT2 bouton: 0.02 µm² × 0.3 = 0.006 µm². Overlap is measured using a binary quantification (8-bit, 0-255): 100% colocalization gives 255 (the maximum intensity in 8-bit) and 30% colocalization gives 76.5 (used as the threshold when collecting the overlap percentage).
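The thresholding logic described above (a GlyR particle counts as synaptic when at least 30% of its area overlaps a GlyT2 bouton) can be sketched outside FIJI as well. Below is a hypothetical Python re-implementation on binary masks; the function and argument names are our assumptions, and the actual analysis was performed with a FIJI macro:

```python
import numpy as np
from scipy import ndimage

def synaptic_glyr_particles(glyr_mask, glyt2_mask, px_area_um2,
                            min_glyr_um2=0.02, min_overlap=0.30):
    """Label GlyR particles and keep those whose overlap with the
    GlyT2 mask reaches min_overlap of their own area (30% here)."""
    labels, n = ndimage.label(glyr_mask)
    synaptic = []
    for lab in range(1, n + 1):
        particle = labels == lab
        area = particle.sum() * px_area_um2
        if area < min_glyr_um2:          # size filter: keep particles > 0.02 µm²
            continue
        overlap = np.logical_and(particle, glyt2_mask).sum() / particle.sum()
        if overlap >= min_overlap:       # e.g. 0.02 µm² * 0.3 = 0.006 µm²
            synaptic.append(lab)
    return synaptic
```

On an 8-bit overlap image, the same criterion corresponds to the 76.5 threshold quoted above (30% of 255).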
04116953
en
[ "scco.neur" ]
2024/03/04 16:41:26
2023
https://theses.hal.science/tel-04116953/file/CASTELLI_CECILIA_2023.pdf
Cecilia Castelli, Marilyn Lepleux, Aurélie Lampin-Saint-Amaux, Frédéric Lanore, Yann Humeau
Hippocampal Sharp Wave-Ripples Differentially Modulate Medial Prefrontal Cortex and Posterior Parietal Cortex After Acquisition of a Semantic-Like Rule
Keywords: Memory, hippocampus, associative cortices, behaviour, electrophysiology

Title: Study of the link between synaptic plasticity and behavioural adaptation
Abstract: Memory can be defined as the capacity to store information that can later be retrieved when needed, facilitating behavioural adaptation. At the biological level, memorization is a complex mechanism sustained by the coordinated interaction of several brain regions and subject to progressive decline with aging. Understanding the mechanisms supporting its formation and storage is therefore a prerequisite for developing treatments adapted to memory disorders. Several theories describe memory consolidation from an integrative point of view: 1) the engram theory postulates the existence of neuronal assemblies stably modified by experience; 2) the synaptic plasticity and memory theory links these two concepts in a causative manner; 3) the system consolidation theory postulates that engrams are distributed across several interconnected brain areas. In the latter, determining where, when and how a particular modification of the system takes place during consolidation is still an open question. For each of these theories, several lines of research have emerged and identified several key brain regions: first the hippocampus, considered the main site for the consolidation of recent memories, but also the prefrontal cortex as an important site for the consolidation of remote memories, making the communication between these two areas an important axis of research. Sleep, characterized by the absence of additional external information, favours memory consolidation. The coordinated reactivation of neuronal assemblies in the hippocampus and the neocortex during high-frequency hippocampal oscillations called Sharp Wave-Ripples (SPW-Rs), considered hallmark events of memory consolidation periods, promotes this consolidation. At the cellular level, long-term synaptic potentiation (LTP) is the most studied mechanism underlying the formation of long-term memories. During my thesis, I studied the interplay between the dorsal hippocampus (dHPC) and two associative neocortical areas in the context of learning a delayed spatial alternation rule in mice.
My objective was to answer the following questions: 1) Do the medial prefrontal cortex (mPFC) and the posterior parietal cortex (PPC) participate in the engram supporting the acquisition of this rule? Is their respective contribution quantitatively or qualitatively different? 2) The network activity (oscillations and coordinated neuronal reactivation) of these two areas is modulated by hippocampal SPW-Rs; is this modulation modified by consolidation of the rule? 3) Is consolidation of the rule affected by the inhibition of LTP within these neocortical areas? To answer these questions, we used mice implanted with electrodes in the dHPC, mPFC and PPC to record the Local Field Potential (LFP) and neuronal spiking activity during the task and during three hours of rest at the end of each testing day. We observed that, during the task, PPC neurons were mostly correlated with spatial navigation parameters, whereas mPFC neurons were preferentially activated by cognitive attributes. We also showed that the positive modulation of neocortical neurons by SPW-Rs was increased after learning of the rule in the mPFC, but not in the PPC. Moreover, the mPFC neurons most active during the task were the neurons preferentially modulated by SPW-Rs. Finally, our preliminary data show that inhibiting LTP within the mPFC during the sleep period following the first learning session does not seem to affect consolidation of the rule, but suggest nevertheless that the modulation of neurons by SPW-Rs could be affected.
Keywords: Memory, hippocampus, associative cortices, behaviour, electrophysiology

Title: Study of the link between synaptic plasticity and behavioural adaptation
Abstract: Memory can be defined as the ability to store information within the brain in a way that allows retrieval of such information and promotes behavioural adaptation. At the biological level, memorization is a complex mechanism which demands coordinated brain-wide interactions and is subject to progressive decline upon natural aging. It is a shared idea that understanding the mechanisms regulating the formation and storage of memories could help in developing targeted approaches for mnemonic deficiencies. Different theories have been formulated to describe memory consolidation in an integrated system: 1) the engram theory postulates the existence of neuronal assemblies persistently modified by experience and memory consolidation; 2) the synaptic plasticity and memory theory postulates that these persistent modifications rely on synaptic plasticity mechanisms; 3) the system consolidation theory postulates that engrams are, in fact, distributed across an interconnected network encompassing multiple brain areas. In each of these theories, memory consolidation is a multi-step process, and the precise identification of whether, where, when and how a specific modification is produced inside this complex system is a hot topic in research. For each of these questions, collected evidence has directed the focus toward precise lines of research. The hippocampus is considered to be the main site of recent memory consolidation and the prefrontal cortex has progressively emerged as an important site for remote consolidation, making the communication between these two areas a main centre of interest.
During sleep, the lack of additional experience favours memory consolidation, and the coordinated reactivation of neuronal assemblies of both the hippocampus and neocortical areas during fast-oscillatory hippocampal events called Sharp Wave-Ripples (SPW-Rs) is now taken as the principal hallmark of memory consolidation. At the molecular level, long-term potentiation (LTP) is the most studied mechanism for the formation of long-lasting memories. I studied the interplay between the dorsal hippocampus (dHPC) and two neocortical areas in the context of consolidation of the rule for successful completion of a Delayed Spatial Alternation (DSA) task. We aimed to answer the following questions: 1) Do the medial prefrontal cortex (mPFC) and the posterior parietal cortex (PPC) participate in the engram sustaining acquisition of this rule? Is their participation equal, or are they quantitatively or qualitatively differentially solicited? 2) Both areas display signs of hippocampal modulation, in the form of coherent oscillations and coordinated neuronal firing patterns during hippocampal SPW-Rs; how is this modulation modified by consolidation of this rule? 3) Does preventing LTP within the neocortex affect consolidation of this rule? To answer these questions, we used mice implanted with single electrodes in the dHPC, mPFC and PPC to record Local Field Potential (LFP) and single neurons' activity, both during the task and during a three-hour-long rest period at the end of each behavioural training day. We observed that, during behavioural training, PPC neurons were mostly engaged in navigation, while mPFC neurons engaged with more cognitive features of the task. Surprisingly, mPFC neurons, but not PPC neurons, exhibited an increase in positive modulation during hippocampal SPW-Rs following learning, with a high proportion of the mPFC neurons active during the behavioural protocol being positively modulated around SPW-R peaks during sleep. Lastly, preliminary data showed that prevention of LTP within the mPFC during the sleep period allocated to memory consolidation does not affect the behavioural performance on the following day, but might affect the modulation of mPFC neurons around SPW-R peaks.

First and foremost, I would like to thank my mentor and PhD supervisor Frédéric Lanore, who believed in me when I didn't. I want to thank you for the incredible patience and availability you showed all throughout the past four years and for the huge effort you put in over the past months to keep me anchored in a task where I felt I was floating without any control over the direction. You taught me a lot, both on the scientific and the human level. Thank you also to Yann Humeau, for hosting me in his lab and making me feel welcome, and for all the technical and intellectual help you were always ready to give. And thank you for the friendly environment you created and promoted within the team. Thank you to all the members of the jury for accepting to be part of the very last step of my career as a student; thank you Dr. Thierry Gallopin, Dr. Audrey Hay, Dr. Aude Panatier, Pr. Christoph Schmidt-Hieber and Dr. Cyril Dejean. I hope this thesis will raise your interest and that you will consider ours a stimulating discussion. I would like to thank Cyril Dejean also for the help and the scientific exchanges we had in the past years. Thank you to all of the staff of the IINS and of the PIV-EXPE.
All throughout these four years, your hard work made my work so smooth that I don't think I even realize the privilege of having so many technicians taking care of all of the small things in my place.
To my team. To Aurélie, who is the best Lab Manager one could imagine, who can handle sanitary crises, administrative crises, drug-supply crises and, above all, anxiety crises without ever losing her composure or her kindness. You were my anchor point for four years; it will be strange not to share the office with you anymore. A thousand thanks. Thanks to my new office mates. Thank you Pablo for your help with the experiments and for your calm and helpful attitude. Thank you Ha-Rang, who after four years finally moved to the right side of the footbridge and is already proposing "young people's" activities. Thank you Legeolas for always hanging around a little longer, backpack on your shoulders, to talk about all subjects peripheral to science. Thank you Cécile for your fresh and dynamic attitude. Thank you Anass, who in truth is neither new nor an office mate, for not making me feel alone in the institute at two in the morning and for the interesting discussions that follow from the tiredness of such strange hours. Thanks to all the former members of the office, in particular to Hajer and Marylin; the first for having introduced me to this world, to which she returns often enough for a funny bit of gossip or a kind word, the second for all the fits of laughter and the jokes, of which few were kind. My four years in this office were full of happiness and it was a pleasure to come to work to exchange with colleagues like you. The team spirit, the midday coffees, the chocolate breaks and the terrible Christmas songs coming out of nowhere are the things I will miss the most. Without your continuous and precious support, there would probably be no thesis on which to write these acknowledgements.
To my family. Thank you for the support, for the presents, for the evenings spent playing games, for the twelve hours of driving just to bring me a canoe. Thank you for respecting my space without ever leaving me alone. Thank you, sister, for growing more and more every day and for sharing all those passions that make us strange, but happy. Thank you for continuing to be a safe harbour, an always-available alternative when I don't know what the future holds for me, even if you don't know it.
To my friends. A public page at the beginning of a manuscript that cost me a quantity of effort and tears for which I was honestly not prepared is not the right place to thank you for everything you did for me. You took the worst year of my life and made it into something acceptable by constantly reminding me that, when I feel at my most worthless, I am still loved by so many incredible people. I could always find someone ready to listen to and comfort me, people genuinely interested in my well-being. Thank you. And thank you for all the nights out, the fun times, the board games, the sports, the movies. Thank you to those who have been sticking around for a good while, to those who are brand new, to those who are not much more than acquaintances, and to those with whom I had unexpected meaningful conversations on the hardships of life that are common to us. Each and every one of you contributed to shape the person I am today; each and every one of you shared a part of these crazy years and worked to make them a little bit better. These years were quite tough; thanks to all of you they turned into an experience I will never regret.
To Zoé, for everything. To Fede, for the next 19 years. To Guillaume, for being who you are.

Premise
"Memory" can be defined as the ability to store learned information within the brain in a way that allows retrieval of such information. In humans as well as in animals, memory processes promote behavioural adaptation to different environmental stimuli and are therefore pivotal for survival. Efficient storage of past positive or negative stimuli is, in fact, fundamental for their correct recognition upon a second encounter, and a damage at this level may result in failure to recollect food and shelter locations or to recognize a potential death threat, with obvious consequences. At the biological level, memorization is a complex mechanism which involves most if not all brain regions, demands coordinated brain-wide interactions and is subject to progressive decline upon natural aging. With an increase in the aging population worldwide and the increasing prevalence of many different cognitive disorders that impact memory in its different forms, memory has gained the status of a hot topic in the scientific community. It is a shared idea that understanding the mechanisms regulating the formation and storage of different types of memories could help in developing targeted approaches for each specific type of mnemonic deficiency. However, due to its vastness and complexity, as any form of information acquisition might be, indeed, reduced to a memorization process, a comprehensive investigation of "memory" results in an elusive matter. Research tracks focus on single memory types or even single phases of the memorization process, and many branches of research are still at a fundamental level. Therefore, a comprehensive discussion of "memory" in its entirety is beyond the scope of this work, as it would take the focus far away from the main experimental question. During my PhD I investigated the acquisition, storage and expression of a cognitive rule for reward retrieval in mice, focusing in particular on the mechanisms regulating the second phase: storage (or consolidation). In my introduction, I will start by presenting and defining the tri-phasic process that has been linked to memorization and give a brief introduction to the main memory forms identified in humans and how they rely on different memory systems, encompassing different brain regions. I will then discuss my behavioural model based on this knowledge. In the second part, I will introduce the brain areas that supposedly compose part of the memory system sustaining memory consolidation in my model and that were the object of my investigation. My aim is to highlight their individual roles but also to present them as a functional network. In the third part, I will discuss the main theories regarding memory consolidation, focusing on synaptic plasticity and the role of sharp wave-Ripples. I hope to give a concise, comprehensive and coherent summary of the different levels (from molecular to behavioural) of investigation required to produce a complete theory of memory consolidation.

INTRODUCTION
Chapter 1: Memory
Phases of Memory
In order to clarify some principles behind the classification of different types of memories, it is important to state that memorization is not a unitary process, but three distinct phases can be distinguished:
Encoding: it is the moment in which the item or experience that will later become the subject of a memory is first encountered, resulting in the activation of specific brain areas and the first formation of a memory trace. It is often equated to the concept of learning and, in the context of repetitive behavioural tasks designed for animals, it specifically defines the moment in which the animal starts to display a clearly recognizable goal-directed behaviour, resulting in performance improvement.
Consolidation: it is the process of reinforcing the memory trace to allow long-term storage of the item or experience.
It has been linked to re-activation of neurons first activated during encoding, an event that has mainly been observed during the slow-wave phase of sleep; however, reactivation to a certain extent has also been observed during wake periods following encoding or during inter-trial intervals in repetitive tasks. The importance of sleep for memory consolidation, evolutionary-wise, might be justified by the need for an experience-free time slot allocated to the stabilization of memory traces for relevant experiences encountered during the day and the concurrent suppression of competing traces of irrelevant experiences (Seibt and Frank, 2019). For certain types of memory, consolidation may involve the transfer of the memory trace from a primary site of storage to a secondary site to assure long-term storage of the information [START_REF] Frankland | The organization of recent and remote memories[END_REF]; e.g., in the case of declarative memory, the information is transferred from the Hippocampus (primary storage site) to the Neocortex (long-term storage site).
Retrieval: it is the ability to recall specific information upon exposure to an appropriate trigger. Even though retrieval often occurs upon re-exposure to all or part of the same conditions encountered during encoding, the brain areas activated during this phase are not necessarily the same, and this is particularly true for those memory types presenting distinct primary and long-term storage sites [START_REF] Kitamura | Engrams and circuits crucial for systems consolidation of a memory[END_REF]. Other than retrieval, re-exposure to already encountered encoding conditions can lead to subsequent phases of reconsolidation of the memory trace.
Types of Memory
The hypothesis that "memory" is not a unitary concept originated in psychology. In this field, starting at the end of the 19th century and proceeding all throughout the first half of the 20th century, many researchers formulated theories based on different observations and using different terms, but all could be ascribed to an ideal dichotomy between conscious and unconscious forms of memory [START_REF] Squire | Conscious and Unconscious Memory Systems[END_REF]. From a biological point of view, the enquiry into whether psychologically distinguishable types of memories also relied on distinct areas within the brain started in the wake of the tests performed on patient H. M. [START_REF] Scoviille | LOSS OF RECENT MEMORY AFTER BILATERAL HIPPOCAMPAL LESIONS[END_REF]. Patient H. M. had been subjected to bilateral medial-temporal lobe resection as a treatment for severe epileptic seizures, which resulted in extended damage to his ability to recall past events (retrograde amnesia) but also, and foremost, to form new memories (anterograde amnesia). Multiple memory tests were conducted on him, and the results showed that patient H. M. was able to learn new motor skills even in the complete absence of recollection of going through practice sessions: the medial temporal lobe was needed for conscious recollection of daily practicing with the test, but not for unconscious acquisition and storage of the motor skill, which proceeded at a normal learning rate. Further research on other patients and animal models eventually led to the division of memory types into declarative and non-declarative [START_REF] Squire | Conscious and Unconscious Memory Systems[END_REF], a distinction that still holds and is presented first when approaching studies on memory.
The distinction between declarative and non-declarative forms of memory is based on consciousness: declarative memories are consciously expressed (i.e. retrieved; oftentimes it also means that they are consciously encoded) through the process that is commonly called recollection; hence, these forms of memory are also called noetic or explicit. Non-declarative memories are, instead, expressed unconsciously through performance, implying that encoding of the memory also takes place in an unaware or semi-unaware subject; these forms of memory can also be called anoetic or implicit. This definition based on consciousness is certainly relevant for humans, but it jeopardizes our ability to translate memory-related concepts formulated on humans to animals and vice versa, as consciousness is a debated concept in its application to animals, mainly because the only recognized way to assess awareness in humans is by its vocalization [START_REF] Clayton | Can animals recall the past and plan for the future?[END_REF]. Classification and schematization serve the purpose of identifying distinct memory systems (i.e. specific brain regions and their mechanisms of network connection that sustain one or more specific forms of memories sharing a distinguishable and unique list of properties) implicated in the encoding, consolidation and retrieval of a specific form of memory, in order to restrain the area of investigation when only a single form of memory is at the centre of the query. Given that a large part of scientific research, especially at the fundamental level, is still exerted in animals, having a translatable work-frame from animals to humans is of a certain importance in order to provide consistent data. Hence, in recent years, new models for memory systems classification were proposed, removing consciousness from the spotlight and focusing on other characteristics. In a review, Henke [START_REF] Henke | A model for memory systems based on processing modes rather than consciousness[END_REF] proposes a classification based on three processing modes:
- Number of learning trials required for memory formation, distinguishing between rapid encoding (one trial) and slow encoding (multiple repetitive trials);
- Cognitive complexity of the memory processing, meaning the number of processing modules solicited by the experience (e.g. a memory can be made up of a single sensory input or encompass multiple senses and be blended with conceptual, emotional and/or spatiotemporal information);
- Nature of the mental representation, dwelling in particular on compositionality (i.e. whether the memory represents a unitized item or can be divided into more than one individual element accessible on its own) and flexibility (i.e. whether individual elements of the memory can be used to infer from a similar but different situation in a retrieval setting).
From these processing modes, Henke then distinguishes three memory systems: fast encoding of flexible associations, slow encoding of rigid associations, and rapid encoding of single or unitized items.
It is important to notice that, in the classification system we are more familiar with, declarative and non-declarative memories do not represent two distinct memory systems, but are terms expressing one characteristic that can be dichotomously found in each individual system. Removing the consciousness criterion from the classification therefore does not really jeopardize the fundamental distinctions that had already been stated between different memory systems, nor does it invalidate the memory forms individuated in the past; rather, it shifts the frame of their classification. That classification distinguishes five memory systems [START_REF] Schacter | What are the Memory systems of 1994? Bradf[END_REF]:
-Perceptual Representation System (PRS). This system belongs to the non-declarative group and processes structural properties of objects (but also words, for humans) at a pre-semantic level, meaning that said properties are recognized unconsciously but cannot be verbalized or consciously structured. The main form of memory ascribed to this system is priming, which is the phenomenon guiding the preference for an object even in the absence of explicit recollection of having already encountered that object. Avoiding the consciousness criterion, priming is listed by Henke as part of the system supporting rapid encoding of single or unitized items, as a single exposure to the object is sufficient to solicit this form of memory. In both classifications, this system relies on the parahippocampal gyrus and neocortex and is independent from the hippocampus.
-Procedural memory. This is another system belonging to the non-declarative group, and it actually serves as a super-structure that probably contains more specific but not yet sufficiently explored subsystems. Procedural memory refers to all types of memories dealing with learnt procedures and automatic behaviour developed over multiple trials, but also with progressively acquired motor and cognitive skills. Schacter and Tulving list simple associative conditioning as one of the subsystems included in procedural memory; however, the two concepts are often presented separately in more recent literature, even though it is not clear whether the distinction is made at the level of memory forms or of memory systems. For Henke, both forms of memory belong to the memory system supporting slow encoding of rigid associations, due to the repetitive nature of learning and the lack of flexibility in acquired responses. This system relies on the basal ganglia, cerebellum, parahippocampal gyrus and neocortex, which are sufficient to assure encoding, consolidation and retrieval of the memory in the absence of hippocampal function. However, it has been demonstrated that activation of the hippocampus during tasks that are used to measure these types of memories, and that can be acquired by individuals with hippocampal lesions (both humans and animals), speeds up learning, probably by binding the memory to an episodic-like context.
-Semantic memory. This system belongs to the declarative group and processes the form of memory that is generally known as "knowledge". In fact, information held in semantic memory is generic, factual, abstract and devoid of context and of any personal meaning (i.e. meaning specific to the memorizing subject). This lack of contextualization means that, in its ultimate form, semantic memory is composed of unitized items and rigid associations; thus, it can be ascribed to the memory system supporting slow encoding of rigid associations.
As for procedural memory, this system relies on the basal ganglia, cerebellum, parahippocampal gyrus and neocortex and is independent from the hippocampus, especially when semantic knowledge is acquired as the result of multiple learning sessions. However, semantic memory can also be the result of a process of progressive transformation of episodic memories (called "semanticization"), which lose detail and gain in abstraction over time: in this case in particular (but, to a certain extent, also in the case of repetitive learning), the hippocampus is involved in memory encoding and in the first phase of consolidation and retrieval, facilitating learning [START_REF] Moscovitch | Functional neuroanatomy of remote episodic, semantic and spatial memory: a unified account based on multiple trace theory[END_REF].
-Episodic memory. This system belongs to the declarative group and processes personal experiences considered in their spatiotemporal and emotional context (so-called "episodes"). Based on the processing modes, Henke identifies episodic memory with the memory system supporting rapid encoding of flexible associations, as a single exposure is sufficient to form complex and flexible composite memories. Eliminating the consciousness criterion also allows the inclusion of recognized forms of unconscious episodic memory [START_REF] Hannula | The Eyes Have It: Hippocampal Activity Predicts Expression of Memory in Eye Movements[END_REF]. This system relies on the hippocampus and neocortex for encoding, consolidation and retrieval.
-Working memory. Unlike the other four, working memory is not a form of long-term memory but of short-term memory (a term that is actually disappearing from the scientific jargon, oftentimes substituted by the term working memory itself because of its more specific and less confounding nature). This system can hold a large but limited number of items (up to 5-7 at a time) useful for completing a cognitive task within a limited amount of time. Unlike for long-term forms of memory, the subdivision into the three phases of encoding, consolidation and retrieval is not relevant to working memory, as no persisting trace of it can be identified in the brain. However, working memory can be trained in order to expand its capacity in terms of the number of items contemporaneously held and the length of the retention time [START_REF] Constantinidis | The neuroscience of working memory capacity and training[END_REF]. Items are either specific features present in the environment and collected through the senses, or cognitive rules and/or previous knowledge stored in and retrieved from other forms of long-term memory, which are flexibly blended together during the reduced amount of time in which working memory is solicited [START_REF] Tsutsui | Comparative Overview of Visuospatial Working Memory in Monkeys and Rats[END_REF]. The main theory regarding working memory is that its expression can be identified as sustained activity, in the neocortex, of certain assemblies each representing an item. The processing modes classification does not include working memory, as it concentrates on long-term memory systems.
Memory Investigation in Rodents
For obvious reasons, research in humans is limited in terms of the invasiveness of adopted procedures, and active manipulation is almost entirely forbidden, with the exception of drug administration in clinical trials; thus, animal models are needed for more thorough research.
Nowadays, rodents are the most common animal model used in research because of a multitude of characteristics (e.g. quick reproductive cycle, simple housing conditions, easy handling and manipulation and, especially for mice, amenability to genetic modification) and are therefore a commonly used model also for cognitive and memory research. As stated in the preceding paragraph, a major issue in the translation of memory research from humans to animals and vice versa is represented by the pivotal role played by consciousness in traditional human memory research: because of the difficulty of translating this concept into animal behaviour, rodent research has traditionally been founded on a more brain region-centred dichotomy, distinguishing hippocampal-dependent and hippocampal-independent forms of memory, sustained by distinct memory systems [START_REF] Schacter | What are the Memory systems of 1994? Bradf[END_REF]. In this context, dependency on the hippocampus has to be understood as meaning that an intact hippocampus is required for encoding, consolidation and early retrieval of the memory. The distinction stems from the fundamental work conducted on patient H. M. throughout the years following his medial-temporal lobe resection, during which many tests were performed in order to discriminate, among different memory forms, those that had been spared from those that were irreversibly damaged [START_REF] Corkin | What's new with the amnesic patient H.M.?[END_REF]. The first group comprises memories that are now considered hippocampus-independent, such as:
-emotional memories (e.g. cued fear conditioning)
-recognition/familiarity memories (e.g. novel object recognition)
-short-term memory (e.g. working memory)
-motor memory (e.g. visuomotor skills)
The second group, instead, comprises a variety of memory forms that can all be traced to two major branches:
-declarative memory, in both its episodic and semantic forms
-spatial memory
Spatial memory deserves a discussion of its own, as it is not included as a separate form of memory in any description of memory systems, yet it is a key concept in rodent studies. Spatial memory is the ability to learn and remember spatial locations and to associate them with other stimuli [START_REF] Bannerman | Hippocampal synaptic plasticity, spatial memory and anxiety[END_REF]. This is a fundamental ability for survival, allowing animals to recognize the environment surrounding them and not only to correctly locate places of interest (such as their nest or a food location), but also to recall the best path to follow to reach them. Spatial navigation through an environment exploits spatial cues, which can be defined as complex multimodal representations of the environment that comprise information from different sensory modalities. Based on the way the animal relates these cues to one another and to itself, spatial navigation can be of two natures [START_REF] Rinaldi | Flexible use of allocentric and egocentric spatial memories activates differential neural networks in mice[END_REF]:
-Egocentric. Based on self-references, such as turning right or left or moving toward or away from certain stimuli. Exploiting this type of navigation, the animal always refers each encountered cue to itself; success therefore strictly depends on the animal's initial position.
-Allocentric. Based on external references: cues are linked together and exploited to build a cognitive map of the environment, inside which different places of interest (e.g.
a food well) are located. Unlike in egocentric navigation, cues are put into relationship with one another in a way that is independent of the animal's current position; the animal is thus always able to create a path through the environment, even from different starting points.
Proof of a heavy implication of the hippocampus in spatial navigation was given by the discovery of place cells during the 1970s (O'Keefe and Nadel, 1978). Place cells are located inside the dorsal hippocampus and have the peculiarity of firing at specific locations inside an environment. Clusters of place cells can encode whole environments, inside which each neuron presents a "place field", meaning a specific location at which it is activated, characterized by high spatial selectivity [START_REF] Strange | Functional organization of the hippocampal longitudinal axis[END_REF]. This place field remains stable across subsequent explorations and is totally independent of the orientation of the animal. For this reason, place cell activity is hypothesized to contribute to the building up of a cognitive map of the environment that refers to extended boundaries rather than local features [START_REF] Bird | The hippocampus and memory: insights from spatial processing[END_REF], thus putting the hippocampus in a crucial position for allocentric, but not egocentric, spatial navigation [START_REF] Bannerman | Hippocampal synaptic plasticity, spatial memory and anxiety[END_REF]. This map is preserved inside the hippocampus for several weeks, so that the place field of each neuron remains unvaried upon multiple visits to a familiar environment, even when orienting cues are removed. However, substantial changes within the familiar environment can lead to an abrupt change in place cell firing patterns, suggesting a remapping that adapts to the environment now considered as new [START_REF] Bird | The hippocampus and memory: insights from spatial processing[END_REF]. Remapping can also take place following a change in the value of one of the features of the environment: for example, changing the position of a food reward to a different location changes the value of each location and induces remapping of place cells (Dupret et al., 2010). Formation of a cognitive map of the surrounding environment is a dominant feature of animal behaviour, which means that any task presenting a spatial connotation becomes hippocampal-dependent, even if directly derived from hippocampal-independent tasks. A classic example is the dichotomy between cued fear conditioning, which relies entirely on the amygdala, and contextual fear conditioning, where the need to recognize the environment and associate it with the unconditioned stimulus also requires the activation of the hippocampus [START_REF] Kitamura | Engrams and circuits crucial for systems consolidation of a memory[END_REF]. Due to their dependency on the recollection of a spatial map for their completion, most spatial tasks have, throughout the years, been defined as episodic-like, because their dependency on an intact hippocampus led researchers to equate them to what was considered to be the most hippocampal-dependent memory system: the episodic one.
However, the hippocampus has also been implicated in the early phases of encoding of non-spatial associations [START_REF] Bradfield | Goal-directed actions transiently depend on dorsal hippocampus[END_REF], corroborating Henke's statement that the hippocampus is involved with the memory system supporting slow encoding of rigid associations in order to speed up learning in the early stage [START_REF] Henke | A model for memory systems based on processing modes rather than consciousness[END_REF], the system she identifies as supporting procedural and semantic-like memories.
The Delayed Spatial Alternation Task
When faced with the choice of an appropriate task to study the relationship between the hippocampus and cortical associative areas during memory consolidation, we identified a few criteria to guide the choice:
-The task needed to be hippocampal-dependent and to solicit known associative areas for its completion
-It needed to rely on a semantic-like rule that could be encoded, consolidated and retrieved on testing days
-It needed to be simple enough to reach optimal performance within a day of training, in order to allow a clear temporal distinction among the three phases of memory
-It could not be a one-trial task, because we needed to be able to observe progressive learning behaviour as a direct measure of ongoing encoding mechanisms
The Delayed Spatial Alternation (DSA) task is a spatial task that is commonly used to assess hippocampal, but also neocortical, integrity in rodents, and in particular the function of the prefrontal cortex [START_REF] Zhang | Protein Kinase A Deregulation in the Medial Prefrontal Cortex Impairs Working Memory in Murine Oligophrenin-1 Deficiency[END_REF]. The task is delivered in a Y-shaped three-arm maze and requires the rodent to alternate between left-arm and right-arm choices in order to collect 10 food rewards. A 30-second delay separates each trial from the next (see Materials and Methods for a more detailed explanation). The behavioural training phase is preceded by a habituation phase during which the rodent constructs a cognitive spatial map of the environment (environmental visual cues surrounding the maze are presented for vision-guided navigation), which it will later exploit for goal-directed maze exploration during behavioural training. This task presents two components (a minimal sketch of the trial structure and of its scoring is given after this list):
-Working memory is solicited during the 30-second delay to hold the information about the previously chosen arm; integrity of the working memory system is therefore needed for correct behavioural expression at every moment of the training.
-The most efficient way to collect all ten rewards is to follow an "alternation rule" that the rodent learns by trial and error over multiple trial repetitions; acquisition of this rule falls within the description of "slow encoding of rigid associations" and can be viewed as a semantic-like type of knowledge. In later phases of the training, the rodent actually starts to perform in a procedural-like manner (i.e. the behavioural response is executed quickly, as opposed to the initial phase of the training, where hesitation is considered to be a landmark of cognitive engagement during navigation), which however pertains to the same memory system.
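To make the logic of the task explicit, the sketch below formalizes the alternation rule and the way a session's performance can be quantified. It is purely illustrative: the function and parameter names are mine and hypothetical, and the sketch is not part of the behavioural apparatus or analysis pipeline, which operate on a physical maze.

```python
# Illustrative sketch of the DSA trial structure and performance scoring.
# All names and constants here are hypothetical conventions for this example.

DELAY_S = 30     # inter-trial delay, in seconds
N_REWARDS = 10   # rewards to collect in one session

def score_session(choices):
    """Return the fraction of correct (alternating) arm choices.

    `choices` is the ordered list of visited arms, e.g. ["L", "R", "L"].
    The first choice cannot be scored, since there is no previous arm
    to alternate from.
    """
    if len(choices) < 2:
        return 0.0
    # A choice is correct when it differs from the immediately preceding one.
    correct = sum(1 for prev, cur in zip(choices, choices[1:]) if cur != prev)
    return correct / (len(choices) - 1)

# Example: a session that alternates perfectly except for one repeated arm.
session = ["L", "R", "L", "R", "R", "L", "R", "L", "R", "L"]
print(f"alternation score: {score_session(session):.2f}")  # prints 0.89
```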
Integrity of this memory system, which relies, among other regions, on the neocortex, is therefore needed for encoding, consolidation and retrieval of the rule (combined with working memory in the latter phase), and hippocampal activity is needed to speed up the learning process, all the more so as the task relies on the exploitation of a spatial map for its completion. The protocol we followed guaranteed progressive learning of the rule governing efficient reward collection over the first day of training, while optimal behavioural performance was maintained on the following days and could be tested. We were thus able to clearly separate in time a progressive phase of encoding (training on day 1), consolidation (mainly taking place during the resting phase between days 1 and 2) and retrieval (training on the following days).
Chapter 2: Anatomical Substrates for Memory
The Concept of Memory Engram
As can be observed, the definition of each phase supposes the existence of a "memory trace". The concept that memories repose on a physical substrate within the brain was first introduced by Richard Semon at the beginning of the 20th century [START_REF] Semon | Die Mneme (The Mneme)[END_REF] and has been debated multiple times since. Semon hypothesized that experiences imprint on neurons by inducing specific modifications on a given subset, and postulated that, to produce an engram, these modifications had to comply with two requirements: being persistent and tagging cells for specific reactivation upon triggering during memory retrieval. Part of the reason why the existence of such a physical trace of memory as the engram has been questioned is that the search for the engram proved to be such an elusive matter throughout the last century, for a few good reasons. First of all, memory systems tend to be redundant and allow for compensatory mechanisms. Most naturally encountered tasks can be solved through multiple strategies, possibly involving different brain areas. For example, a single problem demanding spatial navigation could be resolved using an allocentric strategy (i.e. exploiting environmental cues) or an egocentric strategy (i.e. referring to the position of the subject and the actions needed to modify it in a goal-directed way, independently of the surrounding environment): the first strategy mainly activates the hippocampus, the second one the lateral striatum and parietal cortex [START_REF] Rinaldi | Flexible use of allocentric and egocentric spatial memories activates differential neural networks in mice[END_REF]. This issue explains why behavioural results in lesion or silencing experiments might be confusing, and sheds light on the importance of carefully crafting behavioural tasks in order to refine their memory-type specificity. Second, redundancy and compensatory mechanisms are also at work within the brain areas involved in the memory. Cell assemblies representing one specific memory are not a fixed and never-changing unit: the single neurons composing them dynamically reorganize even over a short period of time (i.e. days), undergoing loss and gain of activity and, less frequently, even changing behavioural significance [START_REF] Driscoll | Dynamic Reorganization of Neuronal Activity Patterns in Parietal Cortex[END_REF][START_REF] Schoonover | Representational drift in primary olfactory cortex[END_REF].
Computational models confirmed that activity drift of the single neurons composing the memory trace constitutes the most optimized model to achieve long-term storage of memories in a context that remains highly permissive for new learning [START_REF] Fusi | Cascade Models of Synaptically Stored Memories[END_REF]. These models predict that what is fundamental for the permanence of a memory is not a specific set of neuronal connections, but the activity pattern elicited by those connections, which can be achieved by many different and ever-changing combinations [START_REF] Ajemian | A theory for how sensorimotor skills are learned and retained in noisy and nonstationary neural circuits[END_REF]. It appears clearly that this second point seriously jeopardizes the persistence requirement postulated by Semon, leading to a third point: rather than a readily identifiable physical location within the brain, "engrams" are defined today as dynamic network connections encompassing multiple brain areas, continuously rearranging as memories are solicited and reconsolidated. Furthermore, some areas are involved in the early phases of memory (encoding, consolidation and retrieval of temporally close memories) but not in later phases (reconsolidation and retrieval of temporally distant memories): a classical example of this phenomenon is the hippocampus [START_REF] Bradfield | Goal-directed actions transiently depend on dorsal hippocampus[END_REF][START_REF] Kitamura | Engrams and circuits crucial for systems consolidation of a memory[END_REF]. The focus of engram research is therefore gradually switching from interrogating the properties of single neurons in single locations to an integrated framework in which neuronal populations are considered in their relationships with one another, as parts of a dynamic and ever-changing network [START_REF] Eichenbaum | Still searching for the engram[END_REF]. This makes the simultaneous investigation of multiple brain areas an inescapable feature of this type of research, as the individual properties of each area are not as important as the relationships governing their inter-area interactions. When approaching the study of memory, and of memory consolidation in particular, from a network perspective, two regions emerge as inescapable: the hippocampus and the medial prefrontal cortex. The hippocampus emerged early on as a fundamental hub for memory encoding, consolidation and early retrieval, while the medial Prefrontal cortex is the most studied among neocortical associative regions because of its implication in most higher brain functions, including memory, a field in which a unique role for this region has emerged over the years [START_REF] Euston | The Role of Medial Prefrontal Cortex in Memory and Decision Making[END_REF]. However, other associative cortices might reveal a similarly pivotal role if adequately studied. The posterior parietal cortex is a much less studied neocortical associative region which, in human and primate studies, has been implicated in navigation and working memory [START_REF] Lyamzin | The mouse posterior parietal cortex: Anatomy and functions[END_REF]. Hence, the delayed spatial alternation task is a perfect model task to study whether and how the posterior parietal cortex might play a major role in memory consolidation.
The Hippocampus
The hippocampus is a medial temporal lobe structure and part of the forebrain, present in a conserved way in all mammalian orders.
Some of its features and functions have made it one of the most studied areas of the brain, allowing researchers to highlight its participation in many different processes, several of which I have listed above. To summarize:
-it is part of the limbic system for the expression of behavioural and emotional responses [START_REF] Strange | Functional organization of the hippocampal longitudinal axis[END_REF]
-it presents neurons specifically deputed to the encoding of a precise location within an environment (i.e. place cells), determining the creation of a cognitive map exploited for spatial navigation [START_REF] Aao ; O'keefe | The hippocampus as a cognitive map[END_REF]
-its integrity is necessary for encoding, consolidation and, to different degrees, retrieval of certain forms of memory [START_REF] Holtmaat | Functional and structural underpinnings of neuronal assembly formation in learning[END_REF]
From a technical point of view, research in the hippocampus is facilitated by its simplified cortical structure (presenting a single layer of principal cells surrounded by strictly organized layers of processes and interneurons) and by the rigidly ordered connectivity between its different regions, granting easily recognizable and distinguishable cell types and a certain ease of experimental reproducibility.
Anatomy of the Hippocampus
In humans and non-human primates, the hippocampus is composed of small dispersed structures bilaterally buried under the neocortex in the medial temporal lobe. Based on their position and functionality, these structures are divided into posterior and anterior regions. In rodents, instead, it has a characteristic bilobate and curved structure that develops along a principal longitudinal axis, along which it is divided into dorsal and ventral regions (corresponding, respectively, to the posterior and anterior hippocampus of humans and non-human primates), rotated by 90° with respect to the human structures [START_REF] Strange | Functional organization of the hippocampal longitudinal axis[END_REF]. Multiple lines of evidence raised the hypothesis that this distinction is not only anatomical, but most importantly functional: the dorsal hippocampus (and the corresponding posterior region in humans) hosts the majority of place cell populations and has thus been linked to spatial navigation; on the other hand, the ventral hippocampus (and the corresponding anterior region in humans) shows the highest degree of connection with neocortical areas and with the limbic system, and has therefore been more strongly related to declarative and, in particular, emotional memory [START_REF] Fanselow | Are the Dorsal and Ventral Hippocampus Functionally Distinct Structures?[END_REF]. The hippocampus is part of the hippocampal formation, which comprises the subiculum, the entorhinal cortex, the dentate gyrus and the hippocampus proper (or cornu ammonis) [START_REF] Topolnik | The role of inhibitory circuits in hippocampal memory processing[END_REF]. Because of their tight physiological connection through the tripartite synapse, the dentate gyrus and the cornu ammonis are often described together as two intertwining U-shaped regions: the cornu ammonis is located on the external (ventricular) surface, while the dentate gyrus is on the internal one.
2.2.1.a General Organisation
The hippocampus is part of the archeocortex, which is mainly characterized by the presence of a single layer containing the somata of excitatory neurons, instead of the two normally present in the neocortex.
The cellular architecture is different between the dentate gyrus and the cornu ammonis (O'Keefe and [START_REF] Aao ; O'keefe | The hippocampus as a cognitive map[END_REF]):
-Excitatory neurons in the dentate gyrus are called granule cells and are characterized by a small, round body and the presence of an apical but not a basal dendritic arborisation. These characteristics lead to the distinction of three cortical layers (from external-most to internal-most):
o molecular layer, containing the apical dendrites of granule cells, intertwined with the projections of afferent neurons, which can be either local interneurons or long-range projections from other brain regions, such as the entorhinal cortex
o granular layer, containing the densely packed cell bodies of granule cells
o polymorph layer, containing the first segment of the axon of granule cells, which later bundle to form mossy fibres; in the hilus, this layer merges with the CA4 region of the hippocampus proper; it also hosts the cell bodies of the inhibitory interneurons of the dentate gyrus
-Excitatory neurons within the cornu ammonis are pyramidal cells (as in the neocortex), characterized by a pyramid-shaped soma, which can be smaller or larger depending on the subregion of the cornu ammonis, and by both apical and basal dendritic arborisations. These characteristics lead to the distinction of 5 layers, incremented to 6 in the CA3 region (from internal-most to external-most):
o stratum lacunosum/moleculare, containing the distal apical dendrites of pyramidal cells
o stratum radiatum, containing the proximal apical dendrites of pyramidal cells; the relative importance of each of these two layers of the dendritic tree depends on the species considered, and their morphological distinction is mainly based on the orientation of the axons contacting the dendritic tree and coming from different regions
o stratum lucidum, containing the initial segment of the apical dendrites of CA3 pyramidal cells, which is characterized by the presence of thorn-like spines serving as contact points for the mossy fibres arriving from the granule cells of the dentate gyrus, which run perpendicular to the dendritic tree of CA3 pyramidal neurons; this layer is present neither in the CA1 and CA2 regions, because pyramidal cells of these two regions are not contacted by mossy fibres, nor in the CA4 region, because of its unstructured nature (which actually prevents the identification of any of the other 5 layers)
o stratum pyramidale, containing the cell bodies of pyramidal cells and some inhibitory interneurons
o stratum oriens, containing the basal dendrites of pyramidal cells and the cell bodies of most inhibitory interneurons, together with projections from both local circuit interneurons and long-range projecting neurons from the septum
o alveus, containing the axons of pyramidal cells, directed toward the fimbria (rostral efferents) or the subiculum (caudal efferents), and a few afferent projections
The dentate gyrus has been divided into an exposed (external) blade and a buried (internal) blade, but this division does not correspond to a substantial difference in the structure of the two blades, which are mostly uniform.
On the other hand, the cornu ammonis has been divided into four sections, based on the different morphology and connectivity of the pyramidal cells present within each sub-region (O'Keefe and [START_REF] Aao ; O'keefe | The hippocampus as a cognitive map[END_REF]):
-CA1: represents the dorsal-most region of the hippocampus in rodents (the most lateral in primates) and is composed of pyramidal neurons whose soma is small and whose apical dendritic tree is characterized by a main, undivided branch from which many small side branches project.
-CA2: presents pyramidal neurons whose morphological features are similar to those of CA3 pyramidal neurons, but which lack inputs from mossy fibres.
-CA3: presents pyramidal neurons whose soma is large and whose apical dendritic tree is characterized by a single, spiny proximal segment where mossy fibres make contact, and which then divides into a highly branched structure in which no principal branch can be identified.
-CA4: unlike the rest of the cornu ammonis, this region is not strictly structured into layers, and its (CA3-like) pyramidal neurons appear scattered and dispersed inside the hilus of the dentate gyrus; their apical dendrites are reached by mossy fibres.
2.2.1.b Neuronal Population
I will now focus mainly on the cellular population of the hippocampus proper, as it is one of the regions I targeted for my study, and in particular on its dorsal portion, due to its implication in spatial navigation. Pyramidal neurons (PNs) represent the vast majority of neurons present in the hippocampus and are glutamatergic excitatory projecting cells. They have historically been considered a fairly homogeneous group, sharing recognizable morphological, molecular and physiological properties also with PNs belonging to neocortical regions; however, this view has started to be challenged by more recent experiments, highlighting a heterogeneity that should not be underestimated and that is hypothesized to explain the differential specialization of PNs belonging to different regions (such as place cells vs emotion-related cells) [START_REF] Cembrowski | Heterogeneity within classical cell types is the rule: lessons from hippocampal pyramidal neurons[END_REF]. The few morphological differences highlighted in the previous paragraph are the result (or the cause) of different spatial distributions of afferent synapses: in fact, synapses onto PNs can be located on their dendrites, soma or axon, and the relative abundance and distribution of these afferences determine the pattern of integration of the different information within the pyramidal neuron and, therefore, its firing output. While almost the entirety of excitatory inputs contact PNs on their dendrites, inhibitory inputs are much more varied in their targeting and are thought to be the fine regulators of information integration through the spatial and temporal organization of inputs [START_REF] Klausberger | Neuronal Diversity and Temporal Dynamics: The Unity of Hippocampal Circuit Operations[END_REF].
At least 21 different classes of inhibitory interneurons (INs) have been identified in the hippocampus [START_REF] Klausberger | Neuronal Diversity and Temporal Dynamics: The Unity of Hippocampal Circuit Operations[END_REF], even though they represent only 10-15% of its total neuronal population [START_REF] Topolnik | The role of inhibitory circuits in hippocampal memory processing[END_REF]; an enumeration of their individual characteristics is well beyond the scope of this introduction, as at no point during my research was I able to distinguish between different classes of interneurons from my electrophysiological data. However, even in general terms, interneurons are widely accepted as being the master regulators of brain oscillatory patterns, and plasticity at their excitatory afferences is thought to be crucial for memory processing in the hippocampus [START_REF] Topolnik | The role of inhibitory circuits in hippocampal memory processing[END_REF]. Interneurons receive excitatory inputs from projecting excitatory neurons of different brain regions or from PNs located in the same sub-region, and send local inhibitory outputs to PNs, constituting circuits of feedforward inhibition (when the main input to the IN is constituted by external afferents) or feedback inhibition (when the main excitatory input is constituted by synapses from local PNs themselves). Interneurons can also send inhibitory projections to other interneurons, and this, together with the fact that different IN classes project to different locations on pyramidal neurons, allows the creation of complex local circuit mechanisms that have been implicated in fine population tuning and the generation of oscillatory patterns [START_REF] Topolnik | The role of inhibitory circuits in hippocampal memory processing[END_REF].
2.2.1.c Internal Connectivity
The main excitatory pathway within the hippocampus is one of the most studied excitatory circuits and is known as the "trisynaptic circuit" (O'Keefe and Nadel, 1978). It actually starts outside of the hippocampus, with afferent projections coming from the entorhinal cortex (EC, the closest neocortical region) through the perforant path and contacting the granule cells of the dentate gyrus in the molecular layer. Granule cell projections, bundled into mossy fibres, then project to the CA3 region of the cornu ammonis in the stratum lucidum, and CA3 pyramidal neurons relay the information to the CA1 region through their projecting axons, known as Schaffer collaterals, which reach the apical dendrites of CA1 pyramidal neurons mostly in the stratum radiatum. CA1 is the main output region of the hippocampus, projecting through the alveus to multiple subcortical and cortical regions (see next section), including the entorhinal cortex, thus closing this excitatory loop. Another branch of the perforant path directly connects the EC with CA1 pyramidal neurons in the stratum lacunosum/moleculare; however, no direct excitatory projection from the dentate gyrus to the CA1 region has ever been observed, nor a "reverse" excitatory pathway connecting CA1 to CA3 and eventually to the dentate gyrus: hence, this circuit is thought to directionally polarize the transmission of information within the hippocampus (O'Keefe and Nadel, 1978).
Aside from this main excitatory pathway, a "recurrent" excitatory pathway has been observed in the CA3 region of the cornu ammonis, meaning that excitatory pyramidal neurons of CA3 project to other excitatory pyramidal neurons of CA3, in the stratum radiatum. This represents a unique case, not replicated in other sections of the hippocampus, and is hypothesized to be pivotal for the autonomous generation of the oscillatory patterns that can be observed in this region (Le Duigou et al., 2014).
2.2.1.d Main Afferent and Efferent Pathways
Both the dorsal and ventral parts of the hippocampus present reciprocal, topographically arranged connections with the entorhinal cortex (O'Keefe and Nadel, 1978; [START_REF] Strange | Functional organization of the hippocampal longitudinal axis[END_REF]). The entorhinal cortex represents the link between the hippocampus and other neocortical areas and, through reciprocal projections, works as a hub for both highly processed sensory information (received through the mediation of the perirhinal cortex) and information from associative areas (e.g. the prefrontal cortex). Furthermore, the EC is deeply involved in spatial navigation through the firing of specific neurons named grid cells [START_REF] Hafting | Microstructure of a spatial map in the entorhinal cortex[END_REF]. Thus, the hypothesis is that the hippocampus and the entorhinal cortex form a heavily connected system that is fundamental for the correct completion of hippocampal functions in memory and spatial navigation. The EC itself receives topographically arranged afferences from other neocortical regions: for example, in the rat, the medial section of the entorhinal cortex receives inputs from the infralimbic and prelimbic areas of the cingulate cortex, which are more involved in emotional processing, while the lateral section of the EC receives inputs from the retrosplenial region of the cingulate cortex, more involved in spatial processing [START_REF] Jones | Cingulate cortex projections to the parahippocampal region and hippocampal formation in the rat[END_REF]. Accordingly, reciprocal projections between the EC and the hippocampus follow a continuous gradient, in which more dorsal regions of the hippocampus are connected to more lateral sections of the entorhinal cortex, and more ventral regions of the hippocampus to more medial sections of the EC [START_REF] Cenquizca | Spatial organization of direct hippocampal field CA1 axonal projections to the rest of the cerebral cortex[END_REF], maintaining the distinction between a dorsal hippocampus more heavily implicated in navigation and a ventral one more heavily implicated in emotional processing [START_REF] Strange | Functional organization of the hippocampal longitudinal axis[END_REF]. The same topographic gradient is observed for projections to the subiculum, which is the region receiving the densest proportion of projections from the CA1 field of the hippocampus throughout its whole longitudinal axis [START_REF] Cenquizca | Spatial organization of direct hippocampal field CA1 axonal projections to the rest of the cerebral cortex[END_REF], and which can serve as a relay between the hippocampus and the entorhinal cortex [START_REF] Strange | Functional organization of the hippocampal longitudinal axis[END_REF].
The subiculum can also have a relay function toward the lateral septum and hypothalamus, maintaining the same ordered topographic structure of projections [START_REF] Cenquizca | Spatial organization of direct hippocampal field CA1 axonal projections to the rest of the cerebral cortex[END_REF]; the lateral septum, however, also receives direct projections from the hippocampus through the fimbria (O'Keefe and [START_REF] Aao ; O'keefe | The hippocampus as a cognitive map[END_REF]). The lateral septum in turn projects to the thalamus, creating a network that is involved in memory and spatial navigation, but also in other behavioural outcomes more related to the associative and emotional part of the hippocampus [START_REF] Risold | Structural Evidence for Functional Domains in the Rat Hippocampus[END_REF]. The medial septum, which is considered, together with the supramammillary nuclei, the region controlling the initiation of the theta rhythms that are later propagated to the hippocampus, projects instead toward CA1 and CA3 neurons located prevalently in the dorsal part of the hippocampus [START_REF] Buzsáki | Theta Oscillations in the Hippocampus[END_REF]. This functional gradient of segregation is also maintained in the pattern of projections to other neocortical areas. The density of projections from the hippocampus gradually increases going from the dorsal to the ventral portion. The ventral portion of the hippocampus sends direct projections to different sensory neocortices (auditory, visual, olfactory, somatosensory, gustatory and visceral), to the amygdalar system (involved in emotional processing) and to association cortices in the prefrontal and cingulate areas, with a complex system of projections that can belong either to the fornix or to the longitudinal association bundle [START_REF] Cenquizca | Spatial organization of direct hippocampal field CA1 axonal projections to the rest of the cerebral cortex[END_REF]. Projections from the dorsal portion are much less dense and are directed primarily toward the retrosplenial cortex: this is again in line with the functional segregation between a more spatially involved dorsal hippocampus and a ventral hippocampus mainly participating in emotional processing [START_REF] Cenquizca | Spatial organization of direct hippocampal field CA1 axonal projections to the rest of the cerebral cortex[END_REF].
Hippocampus and Memory
The hippocampus emerged as an important focus for memory-related studies during the 1950s, thanks to the case of patient H.M. [START_REF] Scoviille | LOSS OF RECENT MEMORY AFTER BILATERAL HIPPOCAMPAL LESIONS[END_REF]. Most of the lines of research which emerged from this medical case have already been discussed in the first chapter of this introduction, which on its own highlights the central role played by the hippocampus in any memory-related discussion. As stated before, the hippocampus has become first in line for the scientific investigation of memory: it is thought to be the pivotal structure for the encoding and early consolidation of episodic memory, and to play an active helping role in speeding up the learning of procedural rules and semantic knowledge [START_REF] Henke | A model for memory systems based on processing modes rather than consciousness[END_REF]. In rodents, its involvement has been demonstrated in all contextual and spatial navigation tasks, but also in some forms of associative learning. An important aspect of patient H. M.'s
amnesic condition that has not yet been discussed in this dissertation is the fact that he became affected by both retrograde and anterograde amnesia following the resection of the medial-temporal lobe [START_REF] Scoviille | LOSS OF RECENT MEMORY AFTER BILATERAL HIPPOCAMPAL LESIONS[END_REF]. Anterograde amnesia refers to the inability to form new memories, which in patient H. M. was mostly restricted to declarative memories, while he preserved the ability to acquire new skills, even though in the complete absence of any recollection of practicing them. The implications of this distinction have already been discussed and led to the division of memory forms into hippocampal-dependent and hippocampal-independent, a division that still stands, especially in rodent research [START_REF] Schacter | What are the Memory systems of 1994? Bradf[END_REF]. Retrograde amnesia, instead, refers to the loss of previously acquired memories and, in the case of patient H. M., was graded: his ability to recall past events failed the more recent the events he was asked to recall were, while very old memories from his infancy were preserved. The fact that he did not undergo a complete retrograde amnesic process, but that his amnesia was actually temporally graded, led to the hypothesis that the hippocampus is required for the long-term storage of memories, but only for a limited time period, and that after a certain amount of time memories are transferred from the hippocampus to a different brain region. This multi-step and multi-area process of memory consolidation was formalized into the theory of system consolidation [START_REF] Squire | Memory and the Hippocampus: A Synthesis From Findings With Rats, Monkeys, and Humans[END_REF]. In its most recent form, system consolidation theory states that the hippocampus is required to provide context to a memory and, at the moment of encoding, stores an index of the cortical activity registered during the experience [START_REF] Tonegawa | The role of engram cells in the systems consolidation of memory[END_REF]. The neocortex, on the other hand, would hold from the beginning information about the abstract and semantic features of the memory. In this view, the relevant behavioural content would be stored from the beginning within neocortical networks, while the hippocampus would be required to provide the correct pattern of neocortical activation. Consolidation would then consist in the reactivation of the neocortical network following the indexed instructions contained within the hippocampus, eventually leading to a strengthening of the neocortical network itself and progressively diminishing the need for the hippocampus to initiate the reactivation pattern [START_REF] Euston | The Role of Medial Prefrontal Cortex in Memory and Decision Making[END_REF][START_REF] Tonegawa | The role of engram cells in the systems consolidation of memory[END_REF]. Controversies around the classical vision of a progressive passage of memory storage from the hippocampus to the neocortex have repeatedly arisen because of contradictory behavioural results, with examples of hippocampal-dependent long-term memory retrieval to be found in the literature [START_REF] Bayley | The Neuroanatomy of Remote Memory[END_REF]. The development of optogenetic tools helped provide a more thorough insight into this issue.
For example, in a classic experiment of contextual fear conditioning, an engram of DG cells was identified based on their activation during fear conditioning: the same cells were reactivated upon recent retrieval (after 1 day) but not upon remote retrieval (after 14 days), when instead a subset of prefrontal neurons was activated to support the behavioural response [START_REF] Kitamura | Engrams and circuits crucial for systems consolidation of a memory[END_REF]. However, two additional interesting results were found: 1) the neurons forming the neocortical engram at remote retrieval were the same ones that had been activated at the moment of fear conditioning; 2) optogenetic activation of DG engram cells at remote retrieval triggered a freezing response. These two results corroborate the hypothesis that relevant behavioural information is present within the neocortex from encoding onwards, but also suggest that the memory trace within the hippocampus is not lost after a certain amount of time: it is just silent [START_REF] Kitamura | Engrams and circuits crucial for systems consolidation of a memory[END_REF]. The current hypothesis is that, upon multiple reactivations of the neocortical network holding the abstract or semantic side of the information, its activation can eventually be triggered by external stimuli in the absence of the contextual pattern provided by the hippocampus [START_REF] Tonegawa | The role of engram cells in the systems consolidation of memory[END_REF]. However, this does not result in the erasure of the mnemonic trace from the hippocampus; on the contrary, during the consolidation process each reactivation would trigger the formation of a new contextual trace within the hippocampus. This hypothesis, called multiple trace theory, would explain the otherwise contradictory results obtained with patients and animal models affected by partial hippocampal disruption, predicting that the memories that have undergone a higher number of replay events have a higher probability of surviving [START_REF] Nadel | Memory consolidation, retrograde amnesia and the hippocampal complex[END_REF]. However, the subject is still debated, and theories surrounding the involvement of the hippocampus in remote retrieval are ever evolving.
The Medial Prefrontal cortex
The medial Prefrontal cortex (mPFC) is a neocortical associative region considered to be involved in most higher brain functions; in the rodent, it is classically subdivided into four areas, the medial precentral (PrCm), anterior cingulate (ACC), prelimbic (PL) and infralimbic (IL) cortices. The two dorsal-most areas, PrCm and ACC, compose the dorsal division of the mPFC and have been more heavily implicated in motor behaviour, while PL and IL, forming the ventral division, have been related to emotional, mnemonic and cognitive processes [START_REF] Heidbreder | The medial prefrontal cortex in the rat: evidence for a dorso-ventral distinction based upon functional and anatomical characteristics[END_REF].
Anatomy and Connectivity of the Medial Prefrontal Cortex
The neocortex is classically subdivided into 6 layers, from the one closest to the pial surface to the deepest one [START_REF] Anastasiades | Circuit organization of the rodent medial prefrontal cortex[END_REF]:
-Layer I (or molecular layer), containing the apical dendrites of pyramidal neurons and characterized by a very scarce population of cell bodies
-Layer II (or external granular layer) and layer III (or external pyramidal layer), which are often impossible to distinguish anatomically and are therefore always considered together; they contain the cell bodies of local interneurons and of pyramidal cells projecting to other cortical areas
-Layer IV (or internal granular layer), which is considered to be the main input site of the neocortex, where axons projecting from other structures make synapses
-Layer V (or internal pyramidal layer), which is considered the main output site of the neocortex, characterized by the presence of the large cell bodies of pyramidal neurons projecting to subcortical target regions
-Layer VI (or polymorph layer), characterized by a high neuronal diversity and presenting neurons which also project to extra-cortical regions, in particular to the thalamus
These layers can be easily observed and distinguished in sensory cortices, but not as much in associative ones. The mPFC, in particular, lacks layer IV (it is therefore classified as an agranular cortex), presenting instead a more homogeneous distribution of input fibres. This, together with the distribution of the somata of projecting neurons from layer II to layer VI without a clear laminar organization, makes the appearance of this region of the neocortex fairly homogeneous when subjected to Nissl staining [START_REF] Anastasiades | Circuit organization of the rodent medial prefrontal cortex[END_REF][START_REF] Van Eden | Cytoarchitectonic development of the prefrontal cortex in the rat[END_REF]. The prefrontal cortex as a whole (comprehensive of the orbitofrontal cortex) projects to and from all major regions of the brain, cortical and subcortical, being the cortical region which receives inputs from and sends outputs to the highest number of different regions, therefore displaying the highest number of reciprocal connections [START_REF] Le Merre | The mouse prefrontal cortex: Unity in diversity[END_REF]. The mPFC (especially IL and PL) is considered to be the final target of a medial cortical network conveying information from the subiculum, through the relay of the Retrosplenial cortex (RSC) and the Anterior Cingulate cortex (ACC) [START_REF] Zingg | Neural Networks of the Mouse Neocortex[END_REF]. It is also considered to be the final target of a lateral cortical network involving the Insular cortex, a region with which the mPFC shares dense reciprocal projections. Together, these two networks are hypothesized to serve as an interface for the integration of external and internal sensory stimuli [START_REF] Zingg | Neural Networks of the Mouse Neocortex[END_REF]. The mPFC also shares reciprocal connections with the Temporal Association area (TEa) and the lateral Entorhinal cortex, the latter possibly serving as a relay for information to the Hippocampus, as no direct afferences from mPFC to HPC have been identified [START_REF] Zingg | Neural Networks of the Mouse Neocortex[END_REF].
On the other hand, the IL and PL regions both receive inputs from the ventral hippocampus, while direct inputs from the dorsal HPC are sparser and potentially not sufficient to assure correct communication without a supporting alternative polysynaptic pathway [START_REF] Anastasiades | Circuit organization of the rodent medial prefrontal cortex[END_REF]. The mPFC and the ventral Hippocampus, together with the Basolateral Amygdala, form a network involved in emotional processing and emotional learning [START_REF] Ghashghaei | Sequence of information processing for emotions based on the anatomic dialogue between prefrontal cortex and amygdala[END_REF]. Dense reciprocal connections with multiple thalamic nuclei are thought to be fundamental for the support of many cognitive functions, among which working memory, attention and learning [START_REF] Anastasiades | Circuit organization of the rodent medial prefrontal cortex[END_REF]. Finally, the mPFC shares reciprocal connections with important centres of the neuro-modulatory system, such as the dorsal raphe nuclei, the ventral tegmental area and the locus coeruleus, which are hypothesized to be important for adaptive responses to stress and reward [START_REF] Euston | The Role of Medial Prefrontal Cortex in Memory and Decision Making[END_REF]. By virtue of its large connectivity, and because perturbation of PFC activity has been demonstrated to affect cortex-wide connectivity patterns [START_REF] Allen | Global Representations of Goal-Directed Behavior in Distinct Cell Types of Mouse Neocortex[END_REF], the prefrontal cortex has been hypothesized to be a source of top-down regulation of many if not all of its cortical targets, inducing appropriate behavioural adaptation through feedback projections to other cortical regions [START_REF] Le Merre | The mouse prefrontal cortex: Unity in diversity[END_REF]. A division of "competences" has also been hypothesized among prefrontal regions, corroborated by the pattern of reciprocal connections between the three prefrontal subdivisions (dorsal mPFC, ventral mPFC and OFC) [START_REF] Zingg | Neural Networks of the Mouse Neocortex[END_REF], but convincing experimental results confirming or rejecting this hypothesis have yet to be produced [START_REF] Le Merre | The mouse prefrontal cortex: Unity in diversity[END_REF].
The Medial Prefrontal Cortex and Memory
Just as the medial Prefrontal cortex has been attributed a crucial role in cognition, so it has in memory, with many experiments testing its participation in a wide variety of learning and memory tasks [START_REF] Euston | The Role of Medial Prefrontal Cortex in Memory and Decision Making[END_REF], and with mPFC being the main neocortical area involved in experiments based on the system consolidation theory [START_REF] Tonegawa | The role of engram cells in the systems consolidation of memory[END_REF]. Classically, the role of mPFC has been related to remote memory retrieval, stemming from the observation that the activity of certain neocortical areas is stronger during memory retrieval after longer (25 days after learning) rather than shorter (5 days after learning) time-lapses [START_REF] Bontempi | Time-dependent reorganization of brain circuitry underlying long-term memory storage[END_REF]; among these areas, mPFC showed the strongest activity.
The accumulation of evidence supporting a pivotal role for mPFC in remote retrieval led to the hypothesis that mPFC may play, in this matter, a role similar to that played by the hippocampus in recent retrieval, providing other cortical areas, which would store the information in an abstract form, with context and an indexed pattern for activation [START_REF] Frankland | The organization of recent and remote memories[END_REF]. This view, which implies the uniqueness of the medial Prefrontal cortex and is generally widely accepted, might be challenged by accumulating evidence of the implication of mPFC also in the recent retrieval of certain memory types, which would rather suggest that the region is implicated in the engrams of specific memories from an early stage, like any other cortical area, instead of assuming a mastermind role over memory in general later on [START_REF] Euston | The Role of Medial Prefrontal Cortex in Memory and Decision Making[END_REF]. However, several inactivation experiments have been successful in suppressing remote retrieval but not recent retrieval in a plethora of different tasks [START_REF] Euston | The Role of Medial Prefrontal Cortex in Memory and Decision Making[END_REF], while evidence of the involvement of mPFC in recent retrieval is scarcer and the effect is generally weaker than in remote retrieval [START_REF] Quinn | Inverse temporal contributions of the dorsal hippocampus and medial prefrontal cortex to the expression of long-term fear memories[END_REF]. What has been established, instead, is the activation of mPFC engram neurons during encoding [START_REF] Kitamura | Engrams and circuits crucial for systems consolidation of a memory[END_REF], as described before. Because these engram neurons cannot be activated by natural triggers during the early phases, they are considered to be silent and are thought to undergo a process of maturation, corresponding to the progressive acquisition of retrieval strength by neocortical neurons and the disengagement of the hippocampus (whose engram neurons become progressively silent) [START_REF] Tonegawa | The role of engram cells in the systems consolidation of memory[END_REF]. This process of maturation is hypothesized to be strictly related to memory consolidation [START_REF] Euston | The Role of Medial Prefrontal Cortex in Memory and Decision Making[END_REF][START_REF] Tonegawa | The role of engram cells in the systems consolidation of memory[END_REF]. Multiple experiments have highlighted that mPFC inactivation right after learning leads to deficits in memory retrieval even after short time-lapses (24-48 hours), suggesting that this silent engram is required for the consolidation of the memory [START_REF] Euston | The Role of Medial Prefrontal Cortex in Memory and Decision Making[END_REF]. In both long-term storage and memory consolidation, this prominent role might be played by mPFC by virtue of its preferential targeting by monosynaptic afferent projections from the hippocampus [START_REF] Cenquizca | Spatial organization of direct hippocampal field CA1 axonal projections to the rest of the cerebral cortex[END_REF]. Finally, the medial Prefrontal cortex, especially in its dorsal division, has been linked to working memory and inserted in a fronto-parietal network displaying sustained activity related to item identity during the delay periods of working memory-guided tasks [START_REF] Constantinidis | The neuroscience of working memory capacity and training[END_REF].
Findings on this subject have mainly been collected in humans and primates, but lesion studies in rats have confirmed that mPFC impairment prevents correct task execution when working memory is required [START_REF] Horst | The role of rat dorsomedial prefrontal cortex in spatial working memory[END_REF].

The Posterior Parietal Cortex

The Posterior Parietal cortex (PPC) is one of the main associative cortices of the mammalian brain, along with the prefrontal and temporal cortices. Anatomically, it is located between the somatosensory and visual cortices on the antero-posterior axis, from which it can be distinguished by its high number of callosal projections from cortical areas [START_REF] Lyamzin | The mouse posterior parietal cortex: Anatomy and functions[END_REF]. It presents a wide network of reciprocal connections with different cortical and subcortical areas, which supports its participation in multiple cognitive processes, among which: sensory-motor integration, early motor planning, spatial attention, spatial navigation, representation of spatial relationships, working memory and decision making [START_REF] Whitlock | Posterior parietal cortex[END_REF].

Anatomy and Connectivity of the Posterior Parietal Cortex

Like the mPFC, the PPC presents a homogeneous lamination under Nissl staining, although in more lateral sections a denser layer II/III with smaller cells and a sparser layer V with larger cells emerge [START_REF] Gilissen | Reconsidering the Border between the Visual and Posterior Parietal Cortex of Mice[END_REF][START_REF] Hovde | Architecture and organization of mouse posterior parietal cortex relative to extrastriate areas[END_REF]. The PPC is often regarded as a relay region between sensory cortical areas and "higher" associative areas involved in more cognitive tasks, by virtue of its connectivity pattern [START_REF] Lyamzin | The mouse posterior parietal cortex: Anatomy and functions[END_REF]. It shares reciprocal connections with both visual and auditory sensory areas and might be a site of integration of these two types of information before relay to the Orbitofrontal cortex (OFC), constituting an alternative pathway to the direct input of information from sensory cortices to the OFC (which nevertheless share direct reciprocal connections) [START_REF] Zingg | Neural Networks of the Mouse Neocortex[END_REF]. In this context, it has been placed within a cortical medial subnetwork which also comprises two other associative regions with which the PPC shares reciprocal connections: the Retrosplenial cortex (RSC) and the Anterior Cingulate cortex (ACC). Altogether, this subnetwork is hypothesized not only to deliver integrated sensory-motor information to the OFC, but also to be heavily implicated in spatial navigation, as all the associative areas involved have been independently demonstrated to be required for this type of task [START_REF] Clark | The retrosplenial-parietal network and reference frame coordination for spatial navigation[END_REF][START_REF] Zingg | Neural Networks of the Mouse Neocortex[END_REF]. As far as the neocortex is concerned, other, albeit less dense, reciprocal connections exist between the PPC and the Temporal Association area (TEa) and the lateral Entorhinal cortex [START_REF] Zingg | Neural Networks of the Mouse Neocortex[END_REF].
The PPC's projections also target subcortical regions, in particular the dorsal striatum, some thalamic nuclei (lateral posterior, laterodorsal and posterior complex) and the claustrum, from which it also receives dense afferent projections [START_REF] Lyamzin | The mouse posterior parietal cortex: Anatomy and functions[END_REF]. The PPC is not directly connected to the Hippocampus, in either its dorsal or ventral division. The relay might be provided by the lateral EC, in line with its general role as a gateway for neocortical information to the hippocampus, but an interesting alternative hypothesis is the Retrosplenial cortex, which shares dense reciprocal connections with the PPC and receives afferent projections from the dorsal portion of the Hippocampus [START_REF] Cenquizca | Spatial organization of direct hippocampal field CA1 axonal projections to the rest of the cerebral cortex[END_REF]. Such a relay would therefore likely be heavily implicated in spatial navigation. It is worth noting that the PPC and mPFC have been ascribed to two different cortical medial subnetworks. The RSC and ACC are associative relay areas shared between the two, and therefore ensure their interaction. However, the main target region of the PPC, the OFC, is segregated from the mPFC by a lack of reciprocal connections, suggesting that the PPC and mPFC might actually make unique and independent contributions to information processing and behaviour [START_REF] Zingg | Neural Networks of the Mouse Neocortex[END_REF].

The Posterior Parietal cortex and Navigation

Ordered patterns of neuronal activity have been observed in the Posterior Parietal cortex during navigation in humans, primates and rodents. In rodents, subsets of PPC neurons are tuned to the position of the animal within the environment and with respect to the beginning and end of a spatial trajectory, to specific navigational movements (e.g. a left or right turn) and to head and body orientation in the environment [START_REF] Krumin | Decision and navigation in mouse parietal cortex[END_REF][START_REF] Nitz | Tracking Route Progression in the Posterior Parietal Cortex[END_REF][START_REF] Whitlock | Navigating actions through the rodent parietal cortex[END_REF]. In one case [START_REF] Nitz | Tracking Route Progression in the Posterior Parietal Cortex[END_REF], navigational firing patterns were conserved even in dark conditions, thereby demonstrating their independence from environmental visual cues. Altogether, the evidence points to the insertion of the PPC within a network dedicated to egocentric navigation, corroborated by its strong connection to the striatum [START_REF] Rinaldi | Flexible use of allocentric and egocentric spatial memories activates differential neural networks in mice[END_REF].
However, the PPC has also been implicated in visual information integration, and its integrity is required to solve visual decision tasks [START_REF] Driscoll | Dynamic Reorganization of Neuronal Activity Patterns in Parietal Cortex[END_REF]; moreover, a subset of PPC cells displays firing tuned to the direction of the goal location with respect to head direction [START_REF] Krumin | Decision and navigation in mouse parietal cortex[END_REF]. A participation, albeit minimal, in allocentric navigation processes therefore cannot be ruled out, also by virtue of the dense reciprocal connections shared with the Retrosplenial cortex, which might represent a point of integration between the two navigation systems [START_REF] Clark | The retrosplenial-parietal network and reference frame coordination for spatial navigation[END_REF].

The Posterior Parietal cortex and decision making

The PPC has been implicated in decision making in a plethora of different tasks involving a binary choice guided by visual, auditory or multimodal cues, in combination or not with navigation [START_REF] Lyamzin | The mouse posterior parietal cortex: Anatomy and functions[END_REF]. In these tasks, PPC neurons displayed choice-specific activity, with different subsets tuned to one or the other behavioural response [START_REF] Harvey | Choice-specific sequences in parietal cortex during a virtualnavigation decision task[END_REF]. The most convincing results come from experiments involving tasks demanding evidence accumulation [START_REF] Hanks | Distinct relationships of parietal and prefrontal cortices to evidence accumulation[END_REF] and delay periods [START_REF] Goard | Distinct roles of visual, parietal, and frontal motor cortices in memory-guided sensorimotor decisions[END_REF], during which sustained neuronal activity was observed, supporting cue holding in working memory. Sustained activity in the region has also been linked to the conservation of a lasting trace of previous trials' choices and outcomes [START_REF] Akrami | Posterior parietal cortex represents sensory history and mediates its effects on behaviour[END_REF], used as a history bias to guide behaviour. These results are in line with studies in humans and primates which led to the insertion of the PPC in a fronto-parietal network sustaining working memory [START_REF] Constantinidis | The neuroscience of working memory capacity and training[END_REF]. The participation of the PPC in any phase of the processing of long-term memories of any type is much more debated and much less studied. In humans, the PPC is strongly activated during the retrieval of episodic memories, but patients with PPC lesions do not display amnesic symptoms [START_REF] Sestieri | The contribution of the human posterior parietal cortex to episodic memory[END_REF]. However, recent experiments in rodents revealed that behaviour-relevant firing patterns in the PPC are replayed during sleep following a learning experience, a phenomenon commonly associated with memory consolidation [START_REF] Wilber | Laminar Organization of Encoding and Memory Reactivation in the Parietal Cortex[END_REF].

Chapter 3: Mechanisms for Memory Encoding and Consolidation

The Synaptic Plasticity and Memory Hypothesis

When Richard Semon first formalized the concept of the engram, he stated that, to allow its formation, neuronal substrates had to undergo persistent changes that tag them for later reactivation upon triggering [START_REF] Semon | Die Mneme (The Mneme)[END_REF].
It had already been suggested by the scientific community that these changes had to involve a growth or strengthening of the connections between neurons, and since the discovery of Long Term Potentiation (LTP) [START_REF] Bliss | Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path[END_REF], synaptic plasticity events have been the privileged field of investigation for the mechanisms leading to the creation of memory engrams [START_REF] Takeuchi | The synaptic plasticity and memory hypothesis: encoding, storage and persistence[END_REF]. The Synaptic Plasticity and Memory (SPM) hypothesis states, in its general formulation, that activity-dependent synaptic plasticity events are generated at a subset of synapses during an experience and that they are both necessary and sufficient to induce encoding and storage of the memory of that experience in the area of the brain where the plasticity is observed [START_REF] Martin | Synaptic Plasticity and Memory: An Evaluation of the Hypothesis[END_REF]. This hypothesis has guided research in the field for the past 50 years, building a body of evidence demonstrating that synaptic changes attributable to plasticity can be observed in the hippocampus after learning, such as AMPA receptor insertion at the post-synaptic density [START_REF] Matsuo | Spine-Type-Specific Recruitment of Newly Synthesized AMPA Receptors with Learning[END_REF]. Furthermore, administration of an NMDA receptor blocker (in the hippocampus, but also in other brain areas implicated in the learning circuit) at a concentration matching that which blocks LTP induction prevents learning [START_REF] Morris | Selective impairment of learning and blockade of long-term potentiation by an N-methyl-D-aspartate receptor antagonist, AP5[END_REF], while LTP induction in the dentate gyrus after learning induced memory erasure [START_REF] Brun | Retrograde Amnesia for Spatial Memory Induced by NMDA Receptor-Mediated Long-Term Potentiation[END_REF]. The latter finding rests on the principle that, in order to be efficacious for memory encoding, synaptic plasticity must concern only a subset of neurons within a specific experience-related timeframe, whereas generalized and prolonged excitation "drowns" the relevant trace signal and results in encoding failure [START_REF] Staubli | Factors regulating the reversibility of long-term potentiation[END_REF]. An exciting challenge for the SPM hypothesis is determining when and where synaptic plasticity takes place and characterizing the features that distinguish, for example, encoding from consolidation, or memory formation within the hippocampus from that in neocortical areas. A widespread theory differentiating between the synaptic plasticity events underlying encoding and consolidation is the synaptic tagging and capture (STC) hypothesis [START_REF] Redondo | Making memories last: the synaptic tagging and capture hypothesis[END_REF]. Synaptic tagging happens during memory encoding at stimulated synapses.
Although tagging generally co-occurs with early LTP expression, the mechanisms underlying the two phenomena appear to be separate [START_REF] Kopec | Glutamate Receptor Exocytosis and Spine Enlargement during Chemically Induced Long-Term Potentiation[END_REF][START_REF] Sajikumar | Resetting of 'synaptic tags' is time-and activity-dependent in rat hippocampal CA1in vitro[END_REF], and tagging is heavily dependent on Calcium/Calmodulin II kinase (CaMKII) activation [START_REF] Redondo | Synaptic Tagging and Capture: Differential Role of Distinct Calcium/Calmodulin Kinases in Protein Synthesis-Dependent Long-Term Potentiation[END_REF]. The exact nature of the tag is unknown: it was long thought to come down to one or a few molecules able to attract plasticity stabilizers at a later time, but it is now considered more likely to be a transient structural state of the synapse which, over a lifespan of around 90 minutes, makes the synapse "permissive" for long-term synaptic enhancement [START_REF] Redondo | Making memories last: the synaptic tagging and capture hypothesis[END_REF]. Concurrently with tagging, and therefore still during encoding, LTP induces the transcription and translation of Plasticity-Related Products (PRPs), a plethora of proteins and mRNAs, including kinases, receptor subunits and the cytoskeleton-associated protein Arc, that are necessary for the stabilization of long-term potentiation at synapses. Because of the delay imposed by transcription and translation, PRPs become available to the synapse around 45 minutes after LTP induction and their presence is still detected 90 minutes after induction, in line with the "permissive" window assured by the tag [START_REF] Frey | Weak before strong: dissociating synaptic tagging and plasticity-factor accounts of late-LTP[END_REF]. Synaptic capture takes place during this time-window, which might also correspond to the time-window for consolidation, being severely delayed with respect to encoding. It consists of long-lasting structural and functional plastic changes induced by PRPs, which are "captured" by tagged synapses, restricting stable long-term potentiation to synapses already solicited at the moment of encoding [START_REF] Redondo | Making memories last: the synaptic tagging and capture hypothesis[END_REF]. Synaptic capture would depend on the consistent reactivation of neurons during consolidation. Even though the STC theory is mostly discussed with regard to synaptic potentiation, the existence of "negative" tags has been hypothesized, and negative PRPs, promoting synaptic depotentiation, have been observed [START_REF] Okuno | Inverse Synaptic Tagging of Inactive Synapses via Dynamic Interaction of Arc/Arg3.1 with CaMKIIβ[END_REF]. An interesting aspect of this formulation of the STC is the distinction between functional early-LTP, synaptic tagging and the induction of PRP translation, which are described as interconnected but substantially independent events [START_REF] Redondo | Making memories last: the synaptic tagging and capture hypothesis[END_REF]. This independence has two consequences. On the one hand, when multiple LTP-inducing events happen within a short time of one another, PRPs are shared equally by all tagged synapses at the moment of capture; this means that time-coupling a weakly salient event (which would not normally be remembered upon testing 24 hours later) to a highly salient event would lead to strong encoding of both events.
This has been demonstrated by multiple experiments showing that novelty exploration enhances the consolidation of an unrelated but time-coupled experience, whether it occurs before or after the other experience [START_REF] Park | Reset of hippocampal-prefrontal circuitry facilitates learning[END_REF][START_REF] Wang | Relevance of synaptic tagging and capture to the persistence of long-term potentiation and everyday spatial memory[END_REF]. On the other hand, even when an event fails to express strong functional LTP during encoding, or when LTP is experimentally blocked after induction, bringing the synapse back to its functional baseline, the memory can be rescued, provided the experience induced synaptic tagging, by PRP translation induced by unrelated events at nearby synapses. A main drawback of the STC theory is that most of the evidence has been accumulated in in vitro models, while in vivo experiments have provided only indirect proof of the concept [START_REF] Redondo | Making memories last: the synaptic tagging and capture hypothesis[END_REF]. Even though it is supported by much more in vivo evidence, the SPM hypothesis in general suffers from similar concerns, especially regarding LTP duration: with very few exceptions, LTP has been observed to decay back to baseline within a few hours of induction (partly a consequence of its characterization mainly in in vitro models), in stark contrast with the persistence of long-term memories over days, months or even years. Another general concern with the SPM is that the involvement of synaptic plasticity in memory has been studied at the cellular level within the hippocampus, whereas the link with system consolidation involving neocortical areas has been difficult to harmonize, as the latter has primarily been studied at the level of neuronal populations and network activity [START_REF] Takeuchi | The synaptic plasticity and memory hypothesis: encoding, storage and persistence[END_REF]. Overall, the SPM is a blatant example of the challenges posed by the need to change the "level" of investigation: from in vitro (where synaptic plasticity is well characterized within a readily measurable framework) to in vivo experiments (where direct markers of plasticity are more difficult to evaluate, and measurements are often indirect or rely on loss-of-function experiments), and from the single-neuron level to the network level. Experiments centred on synaptic plasticity often focus on single neurons (even single dendritic spines, thanks to novel high-resolution techniques), implying a privileged connection that, once initiated, strengthens itself. From the network level, on the other hand, engrams have been shown to be ever-drifting entities in which new neurons are continuously recruited while others are dismissed [START_REF] Driscoll | Dynamic Reorganization of Neuronal Activity Patterns in Parietal Cortex[END_REF][START_REF] Schoonover | Representational drift in primary olfactory cortex[END_REF].
Furthermore, even though examples of isomorphism between experience-induced synaptic strengthening and behavioural response have been observed, such as fear conditioning in the amygdala [START_REF] Johansen | Neural substrates for expectationmodulated fear learning in the amygdala and periaqueductal gray[END_REF], memories are often much more complex spatio-temporal associations, consolidated in a distributed network which relies on a complex pattern of multiple excitatory and inhibitory interconnections [START_REF] Klausberger | Neuronal Diversity and Temporal Dynamics: The Unity of Hippocampal Circuit Operations[END_REF]. Bridging the gap between synaptic plasticity, network activity and behaviourally relevant system consolidation is one of the exciting challenges of modern neuroscience.

Synaptic Plasticity

Synaptic plasticity is defined as the activity-induced modification of the strength or efficacy of synaptic transmission at pre-existing synapses [START_REF] Citri | Synaptic Plasticity: Multiple Forms, Functions, and Mechanisms[END_REF]. Many different parameters determine the efficacy of a synaptic connection: some are pre-synaptic (such as action potential propagation, firing pattern, and the probability of neurotransmitter release from vesicles), others are post-synaptic (such as the global excitability of the cell, the shape and size of the dendritic spine, and the number and position of neurotransmitter receptors), depicting a complex situation in which different types of changes can take place. In the middle of the 20th century, Donald Hebb hypothesized that synchronous firing of the pre- and post-synaptic neurons was a mandatory requisite for synaptic plasticity, especially for changes inducing a strengthening of the connection, summarizing his hypothesis in the famous statement "what fires together, wires together" (Donald O. Hebb, 1949) [START_REF] Morris | The Organization of Behavior[END_REF]. Hebbian plasticity causally links the synchronous discharge of action potentials on the two sides of a synapse to bilateral functional and structural changes at the level of the synaptic bouton. Hebbian synapses can therefore be considered coincidence detectors, strengthened only when both neurons participating in the synapse are activated at the same time [START_REF] Morris | The Organization of Behavior[END_REF] (see the sketch below). Most forms of plasticity discovered in the following years fall within the canon described by Hebb, the most famous example being Long Term Potentiation (LTP); however, examples of non-hebbian forms of plasticity have been found, where plastic changes at the synapse can be observed even in the absence of either a pre-synaptic or a post-synaptic spike. One interesting form of non-hebbian synaptic plasticity is synaptic scaling, in which the weakening of a synapse is induced by the strengthening of a nearby one and is realized through a reduction of AMPA receptors at the post-synaptic site; it is thus an entirely post-synaptic mechanism [START_REF] Whitt | Experience-dependent homeostatic synaptic plasticity in neocortex[END_REF]. Different forms of synaptic plasticity have been classified based on their characteristics. Besides the division between hebbian and non-hebbian forms of plasticity, a main distinction can be drawn between mechanisms that strengthen a synaptic connection and those that weaken it.
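To make the coincidence requirement concrete, the toy rate-based update below sketches a hebbian rule in Python. It is purely illustrative: the function name, learning rate and firing-rate values are arbitrary assumptions, not quantities taken from the literature discussed here.

```python
# Toy rate-based hebbian update: the weight grows only when pre- and
# post-synaptic activity coincide ("what fires together, wires together").
def hebbian_update(w, pre_rate_hz, post_rate_hz, lr=0.001):
    """Return the synaptic weight after one time step of joint activity."""
    return w + lr * pre_rate_hz * post_rate_hz

w0 = 0.5
w_coincident = hebbian_update(w0, pre_rate_hz=20.0, post_rate_hz=15.0)  # both active -> strengthened
w_silent = hebbian_update(w0, pre_rate_hz=20.0, post_rate_hz=0.0)       # silent post-synaptic cell -> unchanged
print(w_coincident, w_silent)  # 0.8, 0.5
```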
LTP is again the most famous example of plastic synaptic strengthening, and it is countered by a mirror mechanism called Long Term Depression (LTD), which instead weakens the synapse concerned [START_REF] Alger | Long-term and short-term plasticity in the CA1, CA3, and dentate regions of the rat hippocampal slice[END_REF]. The relationship between LTP and LTD is quite interesting, as they can be induced by similar protocols delivering the same stimulation pattern with different timing. In fact, in line with their hebbian nature, both LTP and LTD are Spike Timing-Dependent forms of Plasticity (STDP), differing in the timing of the coordinated activation of the pre- and post-synaptic sites: when the pre-synaptic spike precedes the post-synaptic one, LTP is observed, while when the post-synaptic spike leads the pattern, LTD occurs [START_REF] Bi | Synaptic Modifications in Cultured Hippocampal Neurons: Dependence on Spike Timing, Synaptic Strength, and Postsynaptic Cell Type[END_REF] (a standard formulation of this timing rule is sketched below). This provided strong confirmation of Hebb's theory and showed how the same mechanism can be tuned to obtain very different and even opposite results. Both LTP and LTD are forms of long-term plasticity, whose effects last for hours or even days (although, given that most experiments were conducted in vitro, evidence for longer durations is scarce), as opposed to short-term plasticity, which is extinguished within milliseconds to minutes. The most common forms of short-term plasticity involve pre-synaptic mechanisms, such as facilitation and short-term depression, which are induced by changes in Calcium levels in the pre-synaptic bouton, determining a higher or lower rate of vesicular release [START_REF] Citri | Synaptic Plasticity: Multiple Forms, Functions, and Mechanisms[END_REF]. Lastly, synaptic plasticity can be divided into functional and structural forms. Functional plasticity covers all mechanisms that alter the strength of a synapse without permanently altering its structure, such as changes in the rate of vesicular release at the pre-synaptic site or in receptor composition at the post-synaptic site; structural synaptic plasticity, instead, is mainly studied at the post-synaptic site and covers all structural and cytoskeletal alterations that require protein synthesis, resulting, for example, in a change in the size of a dendritic spine or even in the formation of a new one [START_REF] Sala | Dendritic Spines: The Locus of Structural and Functional Plasticity[END_REF]. Structural plasticity has long been considered a late stage of LTP, but research in recent decades has highlighted that the interaction between functional and structural plasticity is more complex than a sequential step organization, and that the two rely on independent, though interconnected, mechanisms [START_REF] Redondo | Making memories last: the synaptic tagging and capture hypothesis[END_REF]. A thorough discussion of all the different forms of synaptic plasticity, or even of the different forms of LTP alone, is beyond the scope of this introduction, as they are vast subjects in themselves. I will limit myself to the description of AMPA receptor- and NMDA receptor-dependent LTP at excitatory synapses, as it is the form of LTP that we addressed during this study.
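The timing dependence of STDP described above is commonly summarized by an exponential kernel: potentiation when the pre-synaptic spike leads, depression when it lags. The sketch below is a minimal phenomenological illustration; the amplitudes and time constants are generic assumptions rather than values from the studies cited here.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one spike pair; dt_ms = t_post - t_pre."""
    if dt_ms > 0:   # pre leads post -> LTP
        return a_plus * np.exp(-dt_ms / tau_plus)
    else:           # post leads pre -> LTD
        return -a_minus * np.exp(dt_ms / tau_minus)

for dt in (10.0, -10.0):
    print(dt, round(stdp_dw(dt), 5))  # +10 ms -> +0.00607 (LTP); -10 ms -> -0.00728 (LTD)
```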
Post-synaptic Long-Term Potentiation at Excitatory Synapses

Long Term Potentiation was first described by Bliss and Lomo in the 1970s, who observed a persistent (lasting several hours) increase in the amplitude of both Excitatory Post-Synaptic Potentials (EPSPs) and spikes, and an increase in spike frequency, in granule cells of the dentate gyrus following repetitive stimulation of the perforant path in the rabbit brain [START_REF] Bliss | Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path[END_REF]. Since then, different forms of LTP have been characterized, of which post-synaptic NMDA receptor-dependent LTP is the most studied. The NMDA (N-methyl-D-Aspartate) receptor is a transmembrane glutamate receptor found at excitatory post-synaptic sites. It is a tetrameric ion channel which allows the transmembrane diffusion of Sodium (Na+), Potassium (K+) and Calcium (Ca2+) and which is doubly gated, requiring both ligand binding and the crossing of a voltage threshold to be activated [START_REF] Vyklicky | Structure, Function, and Pharmacology of NMDA Receptor Channels[END_REF]. When Glutamate binds to NMDA receptors it triggers the opening of the ion channel, which is, however, obstructed by a Magnesium (Mg2+) ion that prevents transmembrane diffusion; removal of the Mg2+ block is triggered by depolarization of the post-synaptic membrane. NMDA receptors thus function as coincidence detectors of pre- and post-synaptic activation, expressed respectively as Glutamate release and membrane depolarization, and stand out as the perfect substrate for the investigation of hebbian LTP (see the sketch below). Post-synaptic depolarization might result from the activation of other Glutamate receptors at the same synapse, from wider excitatory activity involving surrounding spines in the dendritic tree, or from the back-propagation of action potentials, thus expressing a more generalized activation of the post-synaptic neuron [START_REF] Magee | A Synaptically Controlled, Associative Signal for Hebbian Plasticity in Hippocampal Neurons[END_REF]. In any case, the main glutamate receptors ensuring basal synaptic transmission are AMPA (2-amino-3-hydroxy-5-methyl-4-isoxazole-propionic acid) receptors [START_REF] Greger | Structural and Functional Architecture of AMPA-Type Glutamate Receptors and Their Auxiliary Proteins[END_REF]. The AMPA receptor is a transmembrane tetrameric ligand-gated ion channel permeable to Na+ and K+ and activated by glutamate binding. It displays rapid activation and inactivation kinetics, yielding a sharp response curve, and AMPA receptor blockers have been demonstrated to completely abolish the neuronal response to excitatory stimuli [START_REF] Greger | Structural and Functional Architecture of AMPA-Type Glutamate Receptors and Their Auxiliary Proteins[END_REF]. The AMPA receptors most commonly found at synapses under basal conditions are either GluA1/2 or GluA2/3 heteromers [START_REF] Huganir | AMPARs and Synaptic Plasticity: The Last 25 Years[END_REF]. The GluA2 subunit of AMPA receptors carries an amino acid substitution that makes GluA2-containing AMPA receptors Calcium-impermeable, thus preventing Ca2+-mediated signalling cascades at the post-synaptic site under basal excitatory transmission conditions [START_REF] Greger | Structural and Functional Architecture of AMPA-Type Glutamate Receptors and Their Auxiliary Proteins[END_REF].
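The coincidence-detection logic of the NMDA receptor can be illustrated with a common phenomenological description of the voltage-dependent Mg2+ block, of the type introduced by Jahr and Stevens (1990). The parameter values below are the commonly cited ones and are used here purely as illustrative assumptions, not measurements from this study.

```python
import numpy as np

def nmda_unblocked_fraction(v_mv, mg_mM=1.0):
    """Fraction of NMDA conductance not blocked by Mg2+ at membrane potential v_mv."""
    return 1.0 / (1.0 + (mg_mM / 3.57) * np.exp(-0.062 * v_mv))

glutamate_bound = True  # ligand gate: no current at all without glutamate
for v in (-70.0, 0.0):
    g_rel = nmda_unblocked_fraction(v) if glutamate_bound else 0.0
    print(v, round(g_rel, 2))
# At rest (-70 mV) the channel is ~96% blocked; near 0 mV it is mostly open:
# current flows only when glutamate binding AND depolarization coincide.
```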
For LTP to take place, basal cationic conductance through AMPA receptors must trigger a depolarization strong enough to remove the Magnesium block from the NMDA receptor ion channel and allow Ca2+ entry at the post-synaptic site. This step of LTP is referred to as LTP induction and is NMDA receptor-dependent. Ca2+ transients at the post-synaptic site initiate a cascade of events that results in LTP expression, a composite event independent of NMDA receptors but dependent on AMPA receptors, at least in its functional component [START_REF] Nicoll | A Brief History of Long-Term Potentiation[END_REF]. The first step in the cascade is Calcium binding to Calcium/Calmodulin II kinase (CaMKII), which is considered the main effector of LTP-related synaptic changes [START_REF] Herring | Long-Term Potentiation: From CaMKII to AMPA Receptor Trafficking[END_REF]. CaMKII phosphorylates multiple downstream targets, among which are different subunits of AMPA receptors, modifying their ion conductance and mobility pattern, for example facilitating the insertion of new AMPA receptors at the post-synaptic membrane [START_REF] Huganir | AMPARs and Synaptic Plasticity: The Last 25 Years[END_REF]. The addition of new AMPA receptors strengthens the efficacy of synaptic transmission, as it augments the pool of receptors that can be activated during basal transmission, and is seen as the main marker of functional LTP expression. Conversely, a low Calcium concentration at the post-synaptic site induces dephosphorylation and a reduction of the pool of AMPA receptors, determining long-lasting depression of the synapse [START_REF] Herring | Long-Term Potentiation: From CaMKII to AMPA Receptor Trafficking[END_REF]. Other downstream targets of CaMKII are actin and the transcription factor cAMP Response Element Binding protein (CREB), which respectively account for early cytoskeletal remodelling and late protein synthesis, both components of structural LTP [START_REF] Kandel | The molecular biology of memory: cAMP, PKA, CRE, CREB-1, CREB-2, and CPEB[END_REF].

AMPA Receptors' Trafficking at Synapses and LTP Expression

Dendritic excitatory synapses are organized in spines and present a zone of higher concentration of scaffolding proteins, cell-adhesion proteins and glutamate receptors, called the post-synaptic density (PSD) [START_REF] Suratkal | Imaging dendritic spines: molecular organization and signaling for plasticity[END_REF]. This zone aligns with the portion of the pre-synaptic site showing the highest concentration of neurotransmitter vesicles and has therefore been identified as the site where synaptic transmission actually takes place [START_REF] Jang | Synaptic adhesion molecules and excitatory synaptic transmission[END_REF]. The surface of the spine surrounding the PSD is called the peri-synaptic space, while the spine neck and the rest of the dendritic shaft make up the extra-synaptic space [START_REF] Suratkal | Imaging dendritic spines: molecular organization and signaling for plasticity[END_REF]. AMPA receptors localize to all three of these compartments, as well as to a cytosolic pool constituted by endosomal vesicles [START_REF] Choquet | Linking Nanoscale Dynamics of AMPA Receptor Organization to Plasticity of Excitatory Synapses and Learning[END_REF].
Under basal conditions, around 50% of the total AMPA receptor population at the cell membrane displays a high mobility index, diffusing freely among the three membrane compartments in a Brownian manner, while the other 50% is reversibly anchored to cytoskeletal elements that stabilize its position, the most stable AMPA receptors localizing at the PSD [START_REF] Tardin | Direct imaging of lateral movements of AMPA receptors inside synapses[END_REF]. Altogether, under basal conditions AMPA receptors dynamically pass between a stably anchored state and free diffusion, and a constant ratio of PSD/peri-synaptic/extra-synaptic receptors is maintained. During the Ca2+ transients of LTP induction, phosphorylation of AMPA receptors at the PSD stabilizes their anchorage, preventing their lateral diffusion out of the PSD; at the same time, new AMPA receptors diffusing into the PSD are more easily trapped and stabilized, overall increasing the number of AMPA receptors at the post-synaptic density and therefore augmenting the transmission strength of the synapse [START_REF] Opazo | CaMKII Triggers the Diffusional Trapping of Surface AMPARs through Phosphorylation of Stargazin[END_REF]. AMPA receptor trapping at the post-synaptic density is considered the main effector of functional LTP expression [START_REF] Choquet | Linking Nanoscale Dynamics of AMPA Receptor Organization to Plasticity of Excitatory Synapses and Learning[END_REF]. Multiple experiments have observed a subunit preference for LTP-related receptor trapping, with GluA1 homomers being the preferential type of AMPA receptor moving to the PSD following LTP induction, while GluA2-containing heteromers follow at a later time-point [START_REF] Plant | Transient incorporation of native GluR2-lacking AMPA receptors during hippocampal long-term potentiation[END_REF]. However, molecular replacement experiments, in which individual GluA subunits were knocked out and replaced with other GluA subunits, have failed to identify a specific subunit strictly required for LTP expression [START_REF] Granger | LTP requires a reserve pool of glutamate receptors independent of subunit type[END_REF]. Furthermore, immobilization of GluA2-containing AMPA receptors at the cell membrane, preventing lateral diffusion, has been shown to be sufficient to prevent LTP both in vitro and in vivo [START_REF] Penn | Hippocampal LTP and contextual learning require surface diffusion of AMPA receptors[END_REF]. Interestingly, the same paper showed that only continuous infusion of the AMPA receptor blocker completely prevented LTP, while washout of the blocker prior to LTP induction completely abolished short-term potentiation but allowed an attenuated expression of early LTP. This residual potentiation is probably due to AMPA receptors newly inserted into the plasma membrane from the endosomal compartment, in line with the discovery that LTP expression depends on an intact exocytic machinery [START_REF] Kopec | Glutamate Receptor Exocytosis and Spine Enlargement during Chemically Induced Long-Term Potentiation[END_REF]. This dependency is actually surprising: the peri-synaptic and extra-synaptic pools of AMPA receptors should be large enough to meet the enhanced demand from the PSD.
Furthermore, lateral diffusion of receptors is quicker than exocytosis, all the more so as exocytic events tend to take place in the extra-synaptic compartment, and to a lesser extent in the peri-synaptic one, but not at the PSD, so that newly inserted receptors must still diffuse laterally to the PSD. In recent years, my team collaborated on the characterization and in vivo application of agents blocking AMPA receptor trafficking, more specifically a divalent antibody directed against the GluA2 subunit of endogenous AMPA receptors and a knock-in mouse strain expressing Acceptor Protein (AP)-tagged GluA2 subunits that can be targeted with tetravalent Neutravidin [START_REF] Getz | High-resolution imaging and manipulation of endogenous AMPA receptor surface mobility during synaptic plasticity and learning[END_REF][START_REF] Penn | Hippocampal LTP and contextual learning require surface diffusion of AMPA receptors[END_REF]. Both strategies share the same trafficking-blocking principle: the agent induces crosslinking of two or more neighbouring AMPA receptors, forming aggregates that diffuse less readily within the plasma membrane and have a high probability of containing at least one receptor stably anchored to the cytoskeleton, thus stabilizing AMPA receptors at their current location. These methods have the advantage of not affecting basal transmission: the proportion of AMPA receptors present in each compartment does not change, only their mobility index is affected. Following LTP induction, however, cross-linking prevents the lateral diffusion of AMPA receptors toward the PSD, thus preventing LTP expression [START_REF] Penn | Hippocampal LTP and contextual learning require surface diffusion of AMPA receptors[END_REF]. At the behavioural level, this resulted in memory impairment in both fear conditioning [START_REF] Penn | Hippocampal LTP and contextual learning require surface diffusion of AMPA receptors[END_REF] and a delayed spatial alternation task (El Oussini et al., 2023, Annexes). Interestingly, in this second work, not only did AMPA receptor cross-linking not affect learning of the spatial task (even though retrieval was affected on the following day), suggesting that early LTP expression might not be required for learning, but injection of the blocking agent after training and before rest was sufficient to impair memory performance after 24 hours, suggesting that AMPA receptor diffusion to the PSD is necessary for the stabilization of long-term potentiation during memory consolidation (El Oussini et al., 2023, Annexes). The interest of developing a knock-in strategy instead of using the antibody lies in the higher targeting precision that can be achieved with the former. Indeed, binding of biotin to the Acceptor Protein is mediated by the biotin ligase BirA, which is not endogenously expressed in the mouse brain and whose stereotaxic injection can be restricted to the region of interest. Furthermore, BirA expression can be made dependent on a cell-type-specific promoter, further restricting the investigation to a specific cell type [START_REF] Getz | High-resolution imaging and manipulation of endogenous AMPA receptor surface mobility during synaptic plasticity and learning[END_REF].

Synaptic plasticity in the neocortex

In the neocortex, all the most common forms of synaptic plasticity have been observed and characterized in ways that do not differ from the corresponding forms of synaptic plasticity in the hippocampus.
However, most studies have focused on synaptic plasticity events taking place during sensory development or in sensory deprivation experiments, collecting evidence mainly in sensory cortices [START_REF] Feldman | Synaptic Mechanisms for Plasticity in Neocortex[END_REF], and only in recent years has interest awakened in studying learning- and memory-related synaptic plasticity in associative cortices. An increase in the functional strength of specific synapses following learning has been observed in many neocortical regions, including the prefrontal and anterior cingulate cortices. Recently, disruption of structural LTP in the anterior cingulate cortex 24 hours, but not 8 hours, after an inhibitory avoidance learning paradigm resulted in memory impairment [START_REF] Goto | Stepwise synaptic plasticity events drive the early phase of memory consolidation[END_REF], consistent with the delayed role played by neocortical engrams in system memory consolidation. Engram neurons of the medial prefrontal cortex generated after fear conditioning proved to be more strongly synaptically connected than non-engram neurons at 14 and 28 days, but not at 7 days, after fear conditioning [START_REF] Lee | Neocortical synaptic engrams for remote contextual memories[END_REF]. The same experiment demonstrated that neocortical synaptic strengthening was abolished in the absence of hippocampal inputs.

Mechanisms for System Consolidation

Almost all the theories presented in this introduction are founded on the idea that the process of memory consolidation takes place a certain time after the encoding of the experience. Uncoupling the encoding and consolidation phases is advantageous from a computational standpoint: it spares neurons the complex handling of multiple simultaneous consolidation cascades, each at a different step in its progression owing to the sequential nature of experiences [START_REF] Redondo | Making memories last: the synaptic tagging and capture hypothesis[END_REF]. Instead, establishing tags that will later be captured during experience-free time-windows allows the allocation of a dedicated window devoted solely to the consolidation of all the experiences encountered since the end of the previous consolidation window. Naturally, this experience-free time-window has been hypothesized to coincide with Sleep.

Sleep and Memory Consolidation

Brain States have been distinguished and classified based on the prominent oscillatory frequencies in the Local Field Potential (LFP) trace, which reflect coordinated network activity within the region. The most common oscillatory bands are delta waves (0.5-4 Hz), theta waves (4-12 Hz), beta waves (12-30 Hz) and gamma waves (30-100 Hz). Based on distinctive oscillatory patterns, sleep can be divided into two distinct phases (Rasch and Born, 2013):

- Rapid Eye Movement (REM) sleep, characterized by high theta power. Because some of its characteristics resemble wake (including the predominance of oscillations in the theta range) it is called paradoxical sleep; in humans it is considered the lightest phase of sleep, while in mice it always precedes a waking epoch.

- Non-REM sleep, also called Slow Wave Sleep (SWS) because of the characteristic oscillations in the delta range that can be observed throughout the brain, especially in the neocortex. Other oscillatory patterns typical of SWS are thalamic spindles and, in the hippocampus, high-frequency oscillations (100-250 Hz) called Sharp Wave-Ripples (SPW-Rs).
In humans, SWS can be further divided into three phases based on depth, but convincing evidence for a similar subdivision in mice has not yet been produced. SWS has been the privileged subject of memory consolidation investigations. Sharp Wave-Ripple events occurring during this phase have been hypothesized to be pivotal for synaptic capture by triggering widespread patterns of cell reactivation in both the hippocampus and the neocortex (Buzsáki, 2015). In more recent years, however, a new field has opened up investigating the long-neglected role of REM sleep in consolidation [START_REF] Boyce | Causal evidence for the role of REM sleep theta rhythm in contextual memory consolidation[END_REF]. The most interesting line of investigation in this respect is the newfound evidence that REM sleep is characterized by a higher rate of protein translation than SWS, suggesting that, following synaptic capture, REM sleep is needed for the effective translation of the proteins required for the stabilization of long-term structural potentiation [START_REF] Li | REM sleep selectively prunes and maintains new synapses in development and learning[END_REF]. Sleep has also been implicated in the global downscaling of the brain's excitatory drive. Indeed, multiple lines of evidence point to the idea that during wake the brain undergoes a global increase in neuronal excitability [START_REF] Vyazovskiy | Cortical Firing and Sleep Homeostasis[END_REF]. Increased excitability is a form of synaptic plasticity, but the exact mechanisms regulating the increase in global excitability during wake, and whether it is a homogeneous process at all, are not clear. A widely accepted hypothesis is that it reflects the priming of neurons activated during wakeful experiences, whose excitability threshold would be lowered to facilitate replay during consolidation periods (Seibt and Frank, 2019). In this hypothesis, the priming of whole neurons would be concurrent with but independent from synaptic tagging and would equally concern neurons whose synapses have been tagged for potentiation or depression (Seibt and Frank, 2019). Hence, during sleep, processes of synaptic potentiation and synaptic depression would be equally ongoing, leading to a general homeostatic effect important for keeping neurons within their physiological firing range and preventing excitotoxicity [START_REF] Whitt | Experience-dependent homeostatic synaptic plasticity in neocortex[END_REF]. Homeostatic synaptic events during sleep would eventually result in the removal of tags and priming effects from neurons and synapses, bringing the network back to a state permissive for the integration of future experiences (Seibt and Frank, 2019). All of these events would take place on a global scale encompassing the whole brain.

The Role of Sharp Wave-Ripples in Memory Consolidation

Sharp Wave-Ripple events reflect the rapid, synchronous activation of a vast population of hippocampal neurons and can be observed in both the CA3 and CA1 regions of the hippocampus during Slow Wave Sleep, but also during periods of "consummatory" behaviour during wake, such as feeding, grooming or simple immobility (Buzsáki, 2015). Fast oscillations are autonomously generated within the CA3 field via the activation of recurrent excitatory connections among pyramidal neurons, which eventually result in bursts of synchronous population activity.
Through the fibres of the Schaffer collateral, this activity propagates to the CA1 field, inducing a massive, non-rhythmic depolarization of apical dendrites in the stratum radiatum which is detectable as a "Sharp Wave" in the LFP trace [START_REF] Girardeau | Hippocampal ripples and memory consolidation[END_REF]. Excitatory signals from the Schaffer collateral also activate local interneurons of CA1, which engage in a pattern of synchronous inhibitory firing that, together with the excitatory drive coming from CA3, generates the fast oscillations (150-250 Hz) called "Ripples" (Buzsáki, 2015). The commonly used name Sharp Wave-Ripples therefore refers to two distinct but concurrent and causally linked oscillatory events typical of the CA1 field of the hippocampus. SPW-Rs have long been regarded as the privileged substrate for memory consolidation because of their hippocampal localization and because their intrinsic frequency is compatible with the induction of synaptic modifications such as LTP in downstream neurons. Furthermore, during SPW-R events, sequences of hippocampal place cells corresponding to behavioural trajectories explored during the day are replayed in a compressed manner [START_REF] Wilson | Reactivation of Hippocampal Ensemble Memories During Sleep[END_REF], satisfying the requirement for cell reactivation to trigger synaptic capture. Replay of compressed neuronal sequences synchronized to SPW-R activity has also been observed in the neocortex (Peyrache et al., 2009; [START_REF] Wang | Coordinated Interaction between Hippocampal Sharp-Wave Ripples and Anterior Cingulate Unit Activity[END_REF][START_REF] Wilber | Laminar Organization of Encoding and Memory Reactivation in the Parietal Cortex[END_REF]), suggesting that SPW-Rs might play a pivotal role in system consolidation. In particular, a strong focus has been put on the bursting activity displayed by SPW-Rs, which are usually detected in bursts of two or three events separated by a short delay (70-150 ms) (Buzsáki, 2015), hypothesized to be the perfect substrate for memory association within both the hippocampus and the extended memory network including the neocortex. The incidence and amplitude of SPW-R bursts are augmented by high-frequency stimulation of the Schaffer collateral in an NMDA receptor-dependent way, suggesting that synaptic plasticity events may modulate SPW-R physiology [START_REF] Ishikawa | Operant Conditioning of Synaptic and Spiking Activity Patterns in Single Hippocampal Neurons[END_REF]. Previous experiments conducted in my laboratory reached similar conclusions by blocking AMPA receptor trafficking during sleep and observing a decrease in the occurrence rate of SPW-Rs in vivo (El Oussini et al., 2023, Annexes). Pairing SPW-Rs with the depolarization of a single pyramidal cell of the CA1 region resulted in the enhanced participation of that neuron in subsequent SPW-R events [START_REF] King | Hebbian modification of a hippocampal population pattern in the rat[END_REF], strengthening the hypothesis that SPW-Rs might act as a pivotal mechanism for synaptic capture by providing a coordinated frame for the reactivation of primed neurons.
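In practice, SPW-Rs are usually detected from the hippocampal LFP by band-pass filtering in the ripple band and thresholding the envelope of the filtered trace. The sketch below illustrates this standard approach; the sampling rate, threshold and minimum duration are generic assumptions, not the parameters used in this study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_ripples(lfp, fs=1250.0, band=(150.0, 250.0), n_sd=3.0, min_ms=20.0):
    """Return (start, stop) sample indices of putative ripple events."""
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, lfp)                  # ripple-band trace
    envelope = np.abs(hilbert(filtered))            # instantaneous amplitude
    above = envelope > envelope.mean() + n_sd * envelope.std()
    padded = np.concatenate(([False], above, [False]))
    edges = np.diff(padded.astype(int))
    starts, stops = np.where(edges == 1)[0], np.where(edges == -1)[0]
    return [(s, e) for s, e in zip(starts, stops) if (e - s) / fs * 1000.0 >= min_ms]

lfp = np.random.randn(int(1250 * 10))  # 10 s of noise as a stand-in for real data
print(len(detect_ripples(lfp)))        # pure noise should yield few or no events
```

Real detection pipelines typically add further criteria (merging nearby events, peak-power thresholds, exclusion of movement artefacts), which are omitted here for brevity.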
Consistent with their supposed role in the consolidation of novel memories, the occurrence rate of SPW-Rs during SWS increases following training in a behavioural paradigm and after novelty exploration, while repeated exploration of the same environment leads to a decrease in the occurrence rate of SPW-Rs after a few days [START_REF] Ramadan | Hippocampal Sharp Wave/Ripples during Sleep for Consolidation of Associative Memory[END_REF]. Within the hippocampus, this increase is also associated with a stronger participation in SPW-R events of neurons belonging to the trajectories explored during daily training, compared with neurons not particularly solicited during the experience (Dupret et al., 2010). Furthermore, neurons participating in the training correlate more strongly with the neuronal patterns engaged in SPW-Rs during the rest period that follows training than during the one that precedes it [START_REF] Wilson | Reactivation of Hippocampal Ensemble Memories During Sleep[END_REF]. However, this change in coupling concerns only a small portion of neurons and, overall, mean population activity is conserved across days, probably owing to homeostatic mechanisms preserving the physiological firing range of the population (Buzsáki, 2015). Administration of NMDA receptor blockers in the hippocampus before training, but not before the rest period following training, abolished the preferential participation in SPW-Rs of the neurons activated during training and also prevented memory retrieval on the following day (Dupret et al., 2010), confirming both the importance of correct SPW-R physiology for memory consolidation and the dependence of this mechanism on an intact synaptic plasticity apparatus. Interestingly, disruption of SPW-Rs occurring during wake periods, through stimulation of the ventral hippocampal commissure, prevented efficient learning of a working memory task but did not disrupt the SPW-R replay pattern during subsequent SWS periods [START_REF] Jadhav | Awake Hippocampal Sharp-Wave Ripples Support Spatial Memory[END_REF]. The same method had been adopted to disrupt SPW-Rs during SWS following behavioural training, resulting in the impairment of memory consolidation (Girardeau et al., 2009).

Sharp Wave-Ripples and System Consolidation

SPW-Rs show a complex relationship with the neocortical oscillatory patterns typical of SWS, i.e. slow waves and spindles (Buzsáki, 2015). Slow waves are typical of the neocortex and reflect the bimodal distribution of neocortical neurons across two states of membrane potential: a depolarized state, called the UP state, and a hyperpolarized state, called the DOWN state, corresponding respectively to high synchronous activity and almost complete "silence" [START_REF] Steriade | A Novel Slow (<I Hz) Oscillation of Neocortical Neurons in viva: Depolarizing and Hyperpolarizing Components[END_REF]. Hippocampal SPW-Rs typically occur at the transition from the DOWN to the UP state in the neocortex. This correlation is stronger for DOWN-to-UP state transitions recorded in the entorhinal cortex than in the neocortex, suggesting that slow waves may propagate from more frontal neocortical areas to the entorhinal cortex and finally the hippocampus, modulating the occurrence rate of SPW-Rs [START_REF] Sirota | Communication between neocortex and hippocampus during sleep in rodents[END_REF].
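Temporal coupling of the kind described above (between SPW-R times and the firing of cortical units or state transitions) is commonly quantified with a peri-event time histogram (PETH) of spike times around ripple onsets. The sketch below is a generic illustration, not the analysis pipeline of this study; the window and bin sizes are arbitrary assumptions.

```python
import numpy as np

def peth(spike_times_s, event_times_s, window_s=0.5, bin_s=0.01):
    """Mean firing rate (Hz) around events, in bins from -window to +window."""
    edges = np.arange(-window_s, window_s + bin_s, bin_s)
    counts = np.zeros(len(edges) - 1)
    for t0 in event_times_s:
        rel = spike_times_s - t0
        counts += np.histogram(rel[np.abs(rel) <= window_s], bins=edges)[0]
    centers = edges[:-1] + bin_s / 2.0
    return centers, counts / (len(event_times_s) * bin_s)

spikes = np.sort(np.random.uniform(0, 600, 3000))   # fake 10-min spike train (~5 Hz)
ripples = np.sort(np.random.uniform(0, 600, 200))   # fake ripple onset times
centers, rate = peth(spikes, ripples)
print(round(rate.mean(), 2))  # flat ~5 Hz here; a ripple-modulated unit shows a peak near 0
```

Comparing the observed PETH to a surrogate baseline (e.g. obtained by jittering the ripple times) is a common way to assess whether any modulation is significant.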
Spindles, instead, are faster oscillations originating in the thalamus, which propagate to the neocortex and coordinate the activation of thalamic nuclei through neocortical feedback [START_REF] Steriade | A Novel Slow (<I Hz) Oscillation of Neocortical Neurons in viva: Depolarizing and Hyperpolarizing Components[END_REF]. They eventually reach the hippocampus through the neocortical-entorhinal pathway and modulate the activity of the whole hippocampal formation. SPW-Rs are often phase-locked to spindle cycles, but the nature of their interactions is complex and varies with location: thalamic and parietal spindles tend to precede SPW-Rs, while prefrontal spindles, which are also slower, tend to follow [START_REF] Sirota | Communication between neocortex and hippocampus during sleep in rodents[END_REF]. In both cases, the coordination might result from the modulation of both oscillatory frequencies by neocortical slow waves. SPW-R events have been correlated with spindles and with Ripple-like fast oscillatory activity (80-200 Hz) in both the medial prefrontal cortex and the posterior parietal cortex (Khodagholy et al., 2017). The same research showed that the coherence between hippocampal SPW-Rs and Ripple-like activity in the PPC was increased following the learning of a spatial task. SPW-Rs have also been associated with the compressed replay of neurons active during daily behavioural training in both regions (Peyrache et al., 2009; [START_REF] Wilber | Laminar Organization of Encoding and Memory Reactivation in the Parietal Cortex[END_REF]), suggesting that SPW-Rs might exert on neocortical neurons the same role of coordinating primed neurons and promoting synaptic capture as they do for neurons within the hippocampal circuit.

Aim of the study

We started the project with the idea of performing an integrative study to bridge the gap between synaptic plasticity and system consolidation. From this solid base, we wanted to explore the modulatory role of hippocampal SPW-Rs on neocortical associative regions and eventually manipulate synaptic plasticity in neocortical engrams. We kept the DSA task for consistency with previous experiments and identified the medial Prefrontal cortex (mPFC, InfraLimbic and PreLimbic areas) as our primary area of investigation among associative cortices, because of its implication in memory consolidation in general and in this task specifically [START_REF] Zhang | Protein Kinase A Deregulation in the Medial Prefrontal Cortex Impairs Working Memory in Murine Oligophrenin-1 Deficiency[END_REF]. Interest in another associative area, the Posterior Parietal Cortex, arose from its implication in visuo-spatial navigation [START_REF] Krumin | Decision and navigation in mouse parietal cortex[END_REF], even though there was no direct evidence of its involvement in the DSA task.
A comparative study of these two areas is particularly interesting because they have been linked in a fronto-parietal network for working memory execution in humans and non-human primates, only partially confirmed in rodents [START_REF] Hanks | Distinct relationships of parietal and prefrontal cortices to evidence accumulation[END_REF], and because they display very different patterns of connection with the hippocampus: the mPFC receives afferents, including monosynaptic ones, mainly from the ventral hippocampus [START_REF] Cenquizca | Spatial organization of direct hippocampal field CA1 axonal projections to the rest of the cerebral cortex[END_REF], while the PPC is a main projection output of the Retrosplenial cortex, which is targeted mainly by projections from the dorsal hippocampus [START_REF] Zingg | Neural Networks of the Mouse Neocortex[END_REF]. We found intriguing the possibility of unravelling a full network for the system consolidation of the semantic-like rule governing efficient reward collection in the DSA task, but we were also open to the possibility of observing and characterizing two distinct networks, possibly working in parallel for the consolidation of different features. We expected hippocampal SPW-Rs to exert a modulatory activity on the firing of single neurons within the two neocortical areas, and we expected this modulation to increase after acquisition of the behavioural rule, with an enrichment of behaviourally relevant neurons among the modulated population. To tackle this last question, we implemented electrophysiological recording during behavioural sessions as well, which had not been done in the previous set of experiments. Finally, once the pattern of modulation was established, we aimed to manipulate synaptic plasticity within the neocortex through the AP-GluA2 KI strategy, by injecting tetravalent neutravidin into the neocortex following the behavioural protocol and observing the effects on both animal behaviour and the modulation of neocortical neurons by hippocampal SPW-Rs.

Introduction

The consolidation of memories is a complex, multistep process involving a distributed network within the brain. The system consolidation theory postulates at least two distinct steps in memory consolidation, separated in time by the order of days: the consolidation of recent memories, taking place mainly within the hippocampus, and the consolidation of remote memories, supported by the gradual stream of information and the transfer of the engram from the hippocampus to neocortical areas [START_REF] Frankland | The organization of recent and remote memories[END_REF]. A pivotal role in both steps is played by sleep, in particular by Slow Wave Sleep, which is characterized by bursts of massive neuronal firing within the hippocampus that produce both a sharp, ample wave of depolarization and a high-frequency oscillatory pattern (150-250 Hz), two phenomena collectively called Sharp Wave-Ripples (SPW-Rs) [START_REF] Girardeau | Hippocampal ripples and memory consolidation[END_REF]. Perturbation of SPW-Rs during sleep effectively prevents memory consolidation and memory trace stabilization following learning (Girardeau et al., 2009).
SPW-Rs entrain the replay of neurons participating in trajectories explored during a behavioural protocol in the hippocampus [START_REF] Wilson | Reactivation of Hippocampal Ensemble Memories During Sleep[END_REF], in subcortical regions (Girardeau et al., 2017) and in the neocortex (Peyrache et al., 2009;[START_REF] Wang | Coordinated Interaction between Hippocampal Sharp-Wave Ripples and Anterior Cingulate Unit Activity[END_REF][START_REF] Wilber | Laminar Organization of Encoding and Memory Reactivation in the Parietal Cortex[END_REF]). This process is thought to induce synaptic plasticity at reactivated neurons, tying in to the synaptic tagging and capture hypothesis [START_REF] Redondo | Making memories last: the synaptic tagging and capture hypothesis[END_REF]. This hypothesis postulates that, during encoding, a neuron's activation creates tags at appropriate synapses; the nature of these tags is not completely understood, but they are associated with synaptic plasticity changes that would last only a few hours without stabilization. Stabilization of tagged synapses would happen during consolidation and would consist mostly of structural synaptic plasticity resulting in stable long-term potentiation or depression. Synaptic tagging during encoding would also be accompanied by priming, a series of modifications that lower the spiking threshold of neurons, putting them in a "permissive" state for reactivation at the moment of synaptic capture (Seibt and Frank, 2019). Indeed, administration of an NMDA receptor blocker before a spatial task efficiently prevents the enrichment of SPW-Rs with neuronal patterns expressing the trajectories explored during the behavioural paradigm in the following SWS period (Dupret et al., 2010), suggesting that tagging is NMDA receptor-dependent and that it is needed to ensure the reactivation, during subsequent SPW-Rs, of neurons participating in behaviourally relevant trajectories during the day. AMPA receptor crosslinking, which efficiently prevents LTP [START_REF] Penn | Hippocampal LTP and contextual learning require surface diffusion of AMPA receptors[END_REF], lowers SPW-Rs frequency and prevents memory consolidation (El Oussini et al., 2023, Annexes). On the other hand, neocortical engrams are formed at the same time as the hippocampal ones, but need a maturation time before being naturally activated by memory triggers [START_REF] Kitamura | Engrams and circuits crucial for systems consolidation of a memory[END_REF]. Formation and maturation of these engrams need hippocampal input [START_REF] Lee | Neocortical synaptic engrams for remote contextual memories[END_REF]; however, whether and how this input induces synaptic plasticity at neocortical synapses has not been established. In this study, we investigated the modulation of neocortical neurons by SPW-Rs detected in the dorsal hippocampus in the context of the consolidation of a spatial working memory rule for efficient reward collection in a Y maze. We chose to address two associative neocortical regions: the medial Prefrontal Cortex (mPFC) and the Posterior Parietal Cortex (PPC). These two regions are not connected in any functional network in the mouse [START_REF] Zingg | Neural Networks of the Mouse Neocortex[END_REF], but have been connected in a fronto-parietal network for working memory in humans and non-human primates [START_REF] Whitlock | Posterior parietal cortex[END_REF].
The mPFC is heavily implicated in all higher cognitive functions [START_REF] Euston | The Role of Medial Prefrontal Cortex in Memory and Decision Making[END_REF] and is thought to play a pivotal role in the consolidation of remote memories [START_REF] Tonegawa | The role of engram cells in the systems consolidation of memory[END_REF], comparable even to that played by the hippocampus in recent memories. The PPC is heavily implicated in sensory integration [START_REF] Driscoll | Dynamic Reorganization of Neuronal Activity Patterns in Parietal Cortex[END_REF] and navigation [START_REF] Harvey | Choice-specific sequences in parietal cortex during a virtual navigation decision task[END_REF][START_REF] Rinaldi | Flexible use of allocentric and egocentric spatial memories activates differential neural networks in mice[END_REF]. SPW-Rs induce neuronal replay in both areas (Peyrache et al., 2009;[START_REF] Wilber | Laminar Organization of Encoding and Memory Reactivation in the Parietal Cortex[END_REF]) and are coherent with specific oscillatory patterns in each area, specifically with Slow oscillations and Spindles in the mPFC [START_REF] Binder | Monosynaptic Hippocampal-Prefrontal Projections Contribute to Spatial Memory Consolidation in Mice[END_REF] and with high-frequency Ripple-like oscillations in the PPC (Khodagholy et al., 2017). With our approach, we unravel the distinct contribution of each of these two areas to the acquisition and consolidation of a Delayed Spatial Alternation (DSA) task that demands working memory and spatial navigation for its execution and leads to the consolidation of a semantic-like rule for efficient reward collection.

Materials and Methods

Biological models

The experiments described in the Results section of this manuscript were conducted on a total of 26 male mice belonging to two strains: the C57BL6/J wild type strain and the C57BL6/J transgenic AP-GluA2 knock-in (KI) strain, detailed below. Mice were kept on a 12-hour light/dark cycle and provided with ad libitum food and water, except when food restriction was demanded by the protocol (detailed within the description of each procedure). Mice were housed with littermates respecting the original number of mice per cage [START_REF] Goto | Stepwise synaptic plasticity events drive the early phase of memory consolidation[END_REF][START_REF] Seibt | Primed to Sleep: The Dynamics of Synaptic Plasticity Across Brain States[END_REF][START_REF] Rasch | About Sleep's Role in Memory[END_REF], except when demanded otherwise by the protocol (detailed within the description of each procedure). Manipulations (surgery, behaviour, drug administration, perfusion, histology) were conducted on young adult/adult mice aged between 6 weeks and 4 months (further detailed within the description of each procedure). The experimental design and all procedures were in accordance with the European Guide for the care and use of laboratory animals.

C57BL6/J wild type strain

C57BL6 is the most commonly used wild type mouse strain, ensuring a high degree of comparability and reproducibility with data shared by the scientific community.
The transgenic strain AP-GluA2 KI [START_REF] Getz | High-resolution imaging and manipulation of endogenous AMPA receptor surface mobility during synaptic plasticity and learning[END_REF] that was employed for part of the experiments was derived from a C57BL6 ancestor; thus, employing C57BL6/J mice as wild type control subjects minimizes the differences between the two strains involved in my experiments, maximizing the comparability of the two datasets.

C57BL6/J AP-GluA2 KI transgenic strain

The AP-GluA2 KI strain was developed and validated as a mouse model to study AMPA receptor surface trafficking [START_REF] Getz | High-resolution imaging and manipulation of endogenous AMPA receptor surface mobility during synaptic plasticity and learning[END_REF]. AP-GluA2 KI mice are similar to wild type C57BL6/J mice in terms of weight, size, growth and fertility, but also in their behavioural phenotype, assessed through the SHIRPA protocol [START_REF] Getz | High-resolution imaging and manipulation of endogenous AMPA receptor surface mobility during synaptic plasticity and learning[END_REF]. At the genetic level, this strain presents a substitution of the endogenous GluA2 subunit of the AMPA receptor by a genetically modified one bearing an AP (Acceptor Protein) tag on the extracellular domain of the subunit. The AP tag can bind Biotin, which is endogenously present inside neurons of the murine brain, but only in the presence of the BirA ligase enzyme, which is not endogenously expressed by the same cells. Thus, expression of AMPA receptors bearing biotinylated GluA2 subunits is restricted to neurons in which BirA ligase has been introduced by viral transfection, making biotinylation region-specific. Intracranial administration of tetrameric Neutravidin (further detailed in the "Chemicals" and "Surgery" sections of these Materials and Methods) eventually leads to the cross-linking of multiple AMPA receptors and their immobilization in the synaptic and peri-synaptic space.

Surgery

Different surgery protocols were performed on different mice depending on the aim of the procedure, dividing into two major subgroups: stereotaxic injections and stereotaxic implantations. All types of surgeries, however, shared some common steps, hereby listed. The same surgery protocol was observed for both mouse strains. Mice were anaesthetised through exposure to the anaesthetic gas agent Isoflurane (concentration: 4% mixed with air) for 4 minutes and anaesthesia was maintained throughout the surgery through administration of the same anaesthetic agent (concentration: 2% mixed with air) via a vaporising mask. Mice were positioned in the stereotaxic apparatus (David Kopf Instruments) on a 37°C heating pad and received a subcutaneous injection of Buprenorphine (100μl, 0.1mg/kg) and a local injection of Lidocaine (100μl, 0.4mg/kg) for analgesia. The scalp was rinsed with Betadine to prevent infections. After incision and opening of the scalp, the Bregma and Lambda points were identified in order to locate the region of interest using Paxinos atlas coordinates. To conclude the surgery, sutures were applied to close the incision point, except where noted otherwise.
After the end of the surgery, mice were subcutaneously injected with another analgesic agent, Carprofen (100µl, 4mg/kg) or Meloxicam (200µl, 20mg/kg), fed with powdered nutrient-enriched food and left to recover inside a recovery cage positioned on a heating pad for 30 to 120 minutes, depending on the length and severity of the surgery. A post-surgery care routine was observed for 2-6 days following the surgery, depending on its length and severity, during which the weight and general presentation of the mice were monitored and analgesic drugs (Carprofen or Meloxicam) were administered if needed.

Stereotaxic injection

Stereotaxic injection was performed on mice aged 6-8 weeks, except when coupled to electrode implantation, because this latter procedure demands slightly older mice (8-10 weeks). For the purpose of this dissertation, stereotaxic injection was never performed alone, but always coupled to an implantation. Viruses were loaded inside 1ml graduated glass pipettes (960 01 05 5µl, Hirshman ringcaps) and pressurized via a 5ml syringe (Terumo). The pipette was manually descended into the target region at a speed of approximately 20µm/s (mPFC: AP: +1.94, ML: ±0.4, DV: -2.5/-2/-1.5). Injection of 250nl of the product was performed manually by applying light and constant pressure on the syringe. The pipette was maintained in position within the target region for 5 minutes after the end of the injection to allow local diffusion of the injected product, then retracted at a slow speed. When combined with other surgical procedures, stereotaxic injection always preceded stereotaxic implantation.

Stereotaxic implantation

Stereotaxic implantation was performed on mice aged 6-8 weeks (cannulas) or 8-10 weeks (electrodes, alone or coupled with cannulas). The age difference was due to the considerable length of the electrode implantation surgery (on average 2.5 hours) compared to the guide cannula implantation surgery (on average 1.5 hours). Various types of implants were used, all of which are detailed in the following section ("Implanted materials"), but surgical procedures common to all or multiple types of implants are hereby listed. Prior to proper implantation, the skull was prepared by briefly applying Peroxidase RED ACTIVATOR (Super-Bond, Sun Medical Co) (3-5 seconds, after which the chemical was promptly washed off to avoid long-lasting bone damage) to remove the periosteum and by generously scratching the skull to augment the surface for cement bonding. Single guide cannulas were manually descended into the target region at a speed of approximately 20µm/s (mPFC: AP: +1.94, ML: ±1.65/2.05, DV: -1.42, 30° angle). Electrodes were descended into the target region with the aid of a micromanipulator (Narishige) which allowed regulating the descent speed to a steady pace of 10µm/s (first 2/3 of the descent) and 1µm/s (last third), to minimize the tissue damage caused by the implantation (mPFC: AP: +1.94, ML: -0.4, DV: -2.0; PPC: AP: -2.0, ML: -1.5, DV: -0.3; dHPC: AP: -2.0, ML: -1.4, DV: -1.7); ground references were implanted in the Cerebellum without any precision instrument (AP: -5.6, ML: ±0.3, DV: ~-2.0; grounds from different connectors were implanted in opposite hemispheres). All implants were held in place with the aid of the descending support during dental cement application for chronic fixation of the implant.
Guide cannulas were fixed with dental cement (Super-Bond, Sun Medical Co), while for electrode fixation a layer of Paladur resin (Kulzer) was put on top of the layer of Super-Bond cement, in order to augment the volume of material protecting the implant. Once the cement solidified, implants were gently released from their supports. To end the surgery, the scalp of mice implanted only with cannulas was sutured, resulting in partial coverage of the cemented region with skin. The skin of mice implanted with electrodes, instead, was glued to the skull in the region surrounding the cemented implant, to prevent contact with the cement in its still unsolidified state. Post-surgery care for implanted mice always included 1-4 days of Carprofen (100µl, 4mg/kg) or Meloxicam (200µl, 20mg/kg) subcutaneous injection, depending on the severity of the surgery and the mouse's rate of recovery. After implantation, mice were housed alone to prevent damage to the implants.

Implanted materials

Guide cannulas

Stainless steel guide cannulas (Mouse Guide cut 2.5MM below pedestal 26 gauge, 1.5mm of length, Bilaney) were used for implantation. Prior to the surgery, guide cannulas were kept in alcohol to minimize the risk of bacterial contamination, and plugs were maintained on them at all times to avoid penetration of external material.

Electrodes

Electrodes were crafted starting from an 18-position male connector (nano 18 positions 2 guides ISC-DISTREL SA, Omnetics) compatible with the Intan recording system (further detailed in the "In vivo electrophysiological recording" section of these Materials and Methods). Each pin of the connector was wrapped with the naked end of a nichrome wire (diameter: 13µm, Sandvik Kantal) and silver paint (RS Components Ltd) was added to strengthen conductivity. Wires targeted to the same region were organized in bundles and passed through a stainless steel tube (diameter 0.25mm external / 0.12mm internal x 1.2mm length, Unimed) serving as a guide to protect them from damage and enhance the rigidity of the structure. Guides bearing bundles were either glued to the side of the connector or left unattached, allowing a certain freedom in positioning the connector on the mouse skull during surgery without compromising correct targeting of the bundle (deported bundles). All exposed sections of nichrome wires, with the exception of the portion of the bundle meant to be implanted in the mouse's brain, were protected either with nail paint (when glued to the side of the connector) or with silicone (floating portion of the wires in deported bundles). Silicone was also applied on the top portion of the connector, covering all of the connected or unconnected pins, to ensure electrical insulation. A silver wire (diameter: 70µm, A-M Systems) of about 2cm length was stripped of its insulating plastic envelope on both ends and welded to a dedicated pin to serve as ground reference for whole-brain electrical activity generated by movement and physiological activities of the mouse (e.g. breathing). Silicone was also applied to the stripped portion of the ground reference closer to the connector to ensure electrical insulation. Between 48h and 2h before implantation, bundles were plated with a solution of gold particles (Sifco) and carbon nanotubes (Cheap Tubes) via an electrolytic process, in order to reduce the impedance of each nichrome wire to an optimal value of 60kΩ, ensuring a better signal-to-noise ratio and enhancing spike collection.
For this study, three types of electrodes were used: a connector with a single glued bundle of 15 wires targeted to the mPFC; a connector with a single deported bundle of 16 wires targeted to the mPFC; and a connector with a 12-wire bundle and a 4-wire bundle, both glued and targeted to the PPC and dHPC respectively. dHPC/PPC connectors were always used in combination with either a glued-bundle or a deported-bundle mPFC electrode. Deported-bundle electrodes were used in combination with guide cannula implantation in the mPFC. A total of two electrodes were implanted in each mouse.

Intracranial injection

Intracranial injection was performed on awake mice freely moving inside their home-cage. Plugs closing the implanted guide cannulas were removed and injection cannulas (Internal Cannula FIS 2.5mm guide, Bilaney; 0.5mm of projection) were inserted into the guides. Injections were performed via an automatic pump (Legato 101, Kd Scientific Inc.) that applied a constant pressure on two 1µl Hamilton syringes (7101 KH), allowing the regulation of injection speed to 50nl/min. Pre-rest injections were performed immediately after the last session of the first day of the behavioural protocol.

Chemicals

Viral vectors

All viral vectors used for the experiments described in the Results section are adeno-associated viruses (AAV) and their engineering is detailed in [START_REF] Getz | High-resolution imaging and manipulation of endogenous AMPA receptor surface mobility during synaptic plasticity and learning[END_REF]. Ongoing production was assured either by the viral core facility of the Bordeaux Neurocampus IMN or by Charité Universitätsmedizin Berlin, or viral vectors were ordered from Addgene. All viruses were stocked at -80°C for long-term storage, conserved at 4°C during surgery preparation and injected at room temperature. 250nl of viral vector solution per injection site were administered through stereotaxic injection during surgery. pAAV9a-pSyn-BirA-ER-IRES-eGFP (5.6 x 10^13 gcp/ml, IMN). The pSyn promoter allows the expression of the BirA enzyme in all neuronal types without distinction. BirA ligase expression promotes biotinylation of the extracellular portion of the GluA2 subunit of AMPA receptors, thus inducing AMPA receptor cross-linking in the presence of Neutravidin. eGFP is used as a tag to identify neurons expressing the enzyme.

Others

Tetravalent Neutravidin. Texas Red-conjugated tetravalent Neutravidin (8.33µM; Invitrogen, A2665) was used to operate cross-linking of AMPA receptors in the AP-GluA2 KI mouse model, owing to its capacity to theoretically bind up to 4 GluA2 subunits (potentially belonging to 4 different receptors) at a time. 500nl of tetravalent Neutravidin solution per injection site were administered via intracranial injection in the awake, freely moving mouse.

Saline physiological solution. Saline physiological solution was used as a control for cross-linking in the AP-GluA2 KI mouse model. 500nl of Saline solution per injection site were administered via intracranial injection in the awake, freely moving mouse.

Food restriction

Food restriction was required to ensure the mice's motivation during the Delayed Spatial Alternation task (described in the next section). Mice were weighed right before food withdrawal and this weight was used to set the 80%-of-initial-weight limit fixed for protocol termination.
On the first day of restriction, mice were fed with ~30 pellets (either Dustless precision pellets, purified, Chocolate, PLEXX; or dehydrated pasta pellets, Panzani) of the same type as those used to bait the maze during the behavioural task, in order to habituate them to the new food. On subsequent days, mice were fed at the end of all behavioural manipulation with 2-3g of powdered nutrient-enriched food, in order to maintain their weight around 85% of their initial weight.

Behavioural protocols

Delayed Spatial Alternation (DSA) task

The DSA task is a delayed non-matching-to-place task used to assess spatial navigation and cognitive functions in rodents [START_REF] Zhang | Protein Kinase A Deregulation in the Medial Prefrontal Cortex Impairs Working Memory in Murine Oligophrenin-1 Deficiency[END_REF].

Materials. A semi-transparent white PVC Y maze was used for the task (custom made). All three arms are identical (40cm length, 8cm width, 15cm high walls), except for an additional rectangular chamber (15cm x 25cm) which is accessible from the bottom of the "Starting arm" via a pivot door. Arms are spaced by a 120° angle. An opaque blue Falcon top was positioned at the end of each "Goal arm" to serve as a food well for reward delivery. Environmental cues are positioned on the three walls surrounding the maze (one positioned in the middle between the two goal arms, one on the left of the maze and one on the right). Video recordings are realized through an infrared camera (1 Basler USB camera - ac1920-155um - Noldus) positioned on the ceiling above the centre of the maze. The experiment was conducted under dim light, and an infrared lamp indirectly reflecting light onto the behavioural apparatus allowed infrared-camera recordings via the Media Recorder (Noldus) system. For behaviour coupled to electrophysiological recordings, a partially automated Y-shaped maze was used for the task (Imetronic Systems). All three arms are identical (40cm length, 8cm width, 20cm high walls on the two sides) and equally spaced by a 120° angle. The proximal two-thirds of each arm present walls built with transparent PVC (hence, the centre of the maze presents completely transparent walls), while the distal third presents opaque grey PVC walls on all three sides; this same third can be closed by an automatically raising and descending opaque sliding door, so that mice could be restrained in this restricted area, providing a darker, completely opaque "box" at the end of each arm. An opaque blue Falcon top was positioned within the distal third of each arm to serve as a food well for reward delivery. Environmental visual cues were positioned on the three walls surrounding the maze (one positioned in the middle between the two goal arms, the other two behind each of the goal arms). Doors were operated through a software interface provided by the maze manufacturer (Imetronic Systems). Video recordings were realized through an infrared camera (Dalsa Genie Nano-M800 CS-Mount, Teledyne Dalsa) positioned on the ceiling above the centre of the maze (slightly on the right side, as the centre is occupied by a turning collector for weight compensation of the electrophysiology recording system). The same software operating the maze also operates the camera used for visual recording of each behavioural session.
All behavioural experiments were conducted under dim light, with an infrared lamp indirectly reflecting light onto the behavioural apparatus to allow infrared-camera recordings.

Habituation. Habituation lasted for 5-8 days, depending on the training reactivity of each individual mouse, and was divided into 3 phases. A first phase, starting before food restriction, consisted of 2-3 days of handling by the researcher, in order to habituate the mouse to being manipulated and held by the head (for injection cannula insertion and/or electrode plug-in). Proper habituation for the task started on the second day of food restriction and consisted of multiple sessions of free exploration of the maze, aimed at familiarizing mice with the environment and the environmental visual cues. 1 or 2 sessions per day were administered, of progressively decreasing length, as each session was stopped as soon as the mouse had eaten a reward food-pellet in all three baited arms (the three arms were treated as identical during this phase; during early sessions a time limit of 15 minutes was given). This phase ended when the mouse was able to collect the corresponding reward upon its first entry inside a yet unexplored arm (generally after 3-6 sessions). The last phase of habituation consisted of a single trial where the mouse was positioned inside the "start arm" (defined by its position with respect to the environmental visual cues) and had to collect a reward food-pellet in each of the two "end arms", with a time limit of 1 minute. If the time condition was not met, the trial was repeated after an interval of at least 1 hour and, possibly, on the following days. For animals submitted to electrophysiological recordings, habituation was stopped two days prior to the beginning of the DSA protocol to allow unbiased electrophysiological recordings of baseline brain activity during sleep.

Task. The DSA task consists of 10 trials in which the left and right end arms are alternately baited with a rewarding food-pellet. During the first trial, the choice is forced toward the baited arm, setting the pattern of alternation (i.e. the reward zone of the un-baited arm is made inaccessible by positioning a PVC slide at the entrance of the proximal portion of the arm or by closing the sliding door; each consecutive session alternately starts with a forced right or left choice). The 9 following trials rely on the mouse's free choice of one of the two arms. A single trial can be repeated up to 6 times (each repetition individually called a "run") if the mouse makes consecutive mistakes, the sixth of which consists of a forced run in the direction of the baited arm. Once the mouse has reached the reward zone of the chosen arm, access to the reward zone of the unchosen one is restricted and the mouse is allowed to return spontaneously to the distal portion of the starting arm. Here, the mouse is collected and put back in its home-cage, and a delay of 30 seconds is respected before the mouse is allowed to explore the maze again (to either repeat the trial in case of a wrong choice or to pass to the next trial). For mice submitted to electrophysiological recordings during the task, once the distal portion of the starting arm has been reached the sliding door is closed, confining the mouse inside the distal portion (serving as a "start box") where it is left during the 30-second delay. During this delay period, the maze is cleaned with ethanol (4% concentration). Behavioural training was administered between 8 a.m.
and 1 p.m.; for each individual mouse, priority was given to a consistent timing for the end of behavioural testing on each day (rather than for its start), to maximise the homogeneity of electrophysiological recordings of rest periods.

Analysis. Mice were scored during behaviour for direction (right-turn vs left-turn), reward collection (success vs error) and display of Vicarious Trial and Error (VTE) behaviour. All sessions were recorded, and the videos were later watched to confirm the behavioural scoring and for manual annotation of the frames at which automated doors were closed or opened. This annotation was used to align the videos more precisely with the recorded electrophysiological trace, as automatic door closing and opening generated a TTL signal that was captured by the OpenEphys recording system and stored as a timestamp in seconds (a sketch of this alignment is given at the end of this subsection). VTE behaviours were assessed manually and recognized within a range of possible behaviours at the choice point (i.e. the centre of the maze), from full stops to rapid head turning in both directions. In the literature, VTE is defined as a perceivable hesitant behaviour that entrains the rapid replay of neurons involved in all possible future trajectories. Head turning is particularly important for this behaviour, as heading toward a particular direction is associated with replay of the neuronal pattern associated with that future trajectory (Redish, 2016).

In vivo electrophysiological recording

Electrophysiological recordings were realized by plugging a headstage (INTAN) containing 16 unity-gain operational amplifiers into each connector. Recordings were realised through the OpenEphys recording system. On the behavioural apparatus for the DSA task, cables are suspended through a turning collector (Imetronic Systems) which also provides a weight-compensation system based on loose elastics and a pulley. This apparatus allows the mouse to autonomously explore the whole length of the maze without getting entangled, while minimizing the weight of the electrophysiology recording system. Recording of rest periods was realized in the same room as the behavioural task, inside a closed box (50x35cm) presenting only a very small circular hole on the top to allow cable suspension and a larger square window on the side to allow positioning of the infrared lamp required by the camera (Basler USB camera - ac1920-155um - Noldus). Even though dim light was still on in the room, the mouse was almost completely in the dark during rest periods, due to the conformation of the box. Mice were put in this apparatus inside their unaltered home-cage and were given access to the food ration for the day and a sufficient amount of water. Given the restricted size of the home cage, two elastics were sufficient to serve as a compensation system for the weight of the electrophysiology recording system. Mice were plugged in and recorded all along the habituation period to get used to navigating the maze with the weight of the recording system. On the two days preceding the start of the proper behavioural task, one rest period of three hours per day was recorded inside the rest apparatus, to establish baseline activity. 18 wild type mice were recorded throughout the behavioural task, being plugged in at the beginning of each session and unplugged at its end. Recording of rest periods started on average 10 minutes after the end of the last training session of the day. All rest recordings started between 11 a.m. and 1 p.m., aiming for high consistency for each individual mouse.
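Returning to the video-electrophysiology alignment described in the Analysis paragraph above, a minimal Python sketch of the underlying idea is given below. This is an illustration only: the actual alignment was done with custom scripts, and the function name, the example values and the linear-fit approach are assumptions, not the original implementation.

```python
import numpy as np

def frames_to_ephys_time(door_frames, door_ttl_s):
    """Build a frame -> ephys-time mapping from annotated door events.

    door_frames: video frame indices at which door closings/openings
    were manually annotated on the video.
    door_ttl_s: timestamps (in seconds) of the matching door TTLs
    captured by the recording system, in the same order.
    Returns a function converting any frame index to ephys time.
    """
    f = np.asarray(door_frames, dtype=float)
    t = np.asarray(door_ttl_s, dtype=float)
    # Least-squares fit of t = a*frame + b; the slope a should be close
    # to 1/fps if the camera clock does not drift
    a, b = np.polyfit(f, t, deg=1)
    return lambda frame: a * np.asarray(frame, dtype=float) + b

# Hypothetical usage: three annotated door events, then conversion of a
# behavioural event scored on video into electrophysiology time
# to_time = frames_to_ephys_time([120, 3480, 7012], [4.8, 139.2, 280.5])
# run_start_s = to_time(2210)
```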
8 AP-GluA2 KI mice were not recorded during the task and were submitted to rest recording only on the day prior to the behavioural protocol (to establish the baseline) and at the end of the first day of the behavioural protocol. The electrophysiological output consists of a continuous data file for each recorded channel, sampled at a rate of 30kHz to allow collection of both Local Field Potential and single spikes. During task sessions, the recording was started by an input TTL signal sent from the software operating the maze at the start of camera recording, to ensure synchronization between the video and the electrophysiological trace. No such precise synchronisation was possible for recordings of rest periods.

Perfusion and Histology

Mice were anaesthetized with a mix of Ketamine (0.1mg/g) and Xylazine (0.01mg/g) diluted in NaCl; 10µl of solution per gram of the animal's weight were administered via intra-peritoneal injection. Perfusion with Paraformaldehyde (PFA, concentration: 4%) was realised on the anaesthetized mouse and brains were initially collected without being extracted from the skull. After 48h of storage in PFA at 4°C, implants and skull were removed and the brain was washed three times in PBS solution (concentration: 1%) and then stored in PBS for 24-72 hours at 4°C. Slicing was performed with a vibratome (Leica VT1200s). Coronal slices of 60µm thickness were collected at a speed of 30-50µm/s from the regions of interest and stored in PBS for 24h before being mounted on slides and covered with Fluoromount-G (complemented with DAPI for cellular nuclei staining, Thermofisher Scientific). Image acquisition of slides was performed with an epifluorescence microscope (Nikon Eclipse NI-U) coupled to an illumination system (Intensilight C-HGFI, Nikon) and a camera (Zyla sCMOS, Andor Technology, Oxford Instruments).

Analysis of electrophysiological data

LFP processing and Sharp Wave-Ripple detection

Electrophysiological data in the continuous data format were imported into Matlab and down-sampled to 1kHz for storage and analysis speed convenience. Ripple detection was performed in Matlab using scripts originally written by Cyril Dejean and modified to adjust to the specificities of the data and the type of information collected. The scripts first subtract the mean of all hippocampal channels' oscillations from each individual channel, in order to eliminate noise oscillations common to all channels; a notch filter at 50Hz is also applied to eliminate signal in the band of mains electricity. A band-pass filter in the Ripple frequency band (100-250Hz) is then applied, and on this filtered trace the peak of a single Ripple event is detected as the point of maximal amplitude of each event, defined by the following criteria:
- an amplitude higher than 5 standard deviations from the mean amplitude of oscillations on the band-passed trace
- the event must be at least 30ms long
- two individual peaks must be separated by an interval of at least 45ms
Ripple characteristics are then computed, creating an Excel output containing the following information for each Ripple event: timestamp of the peak, intrinsic frequency, number of oscillations, mean amplitude (both on the filtered and the integrated trace), area under the integrated curve, duration (total and of each part preceding and following the peak), half prominence.
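As an illustration of this detection logic, a minimal Python sketch is given below. The original scripts were written in Matlab; the function name, the filter orders and the use of a rectified trace as an amplitude envelope are assumptions made for illustration, not the original code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def detect_ripples(lfp, fs=1000, band=(100, 250),
                   amp_sd=5, min_dur_ms=30, min_sep_ms=45):
    """Return peak sample indices of putative SPW-R events.

    lfp: one hippocampal LFP channel (already referenced to the mean
    of all hippocampal channels), sampled at fs Hz.
    """
    # 50 Hz notch filter against mains interference
    bn, an = iirnotch(50, 30, fs=fs)
    x = filtfilt(bn, an, lfp)

    # Band-pass filter in the ripple band (100-250 Hz)
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filt = filtfilt(b, a, x)

    # Rectified trace as a crude amplitude envelope; threshold at
    # mean + 5 SD of the band-passed amplitude
    env = np.abs(filt)
    thr = env.mean() + amp_sd * env.std()

    above = np.flatnonzero(env > thr)
    if above.size == 0:
        return np.array([], dtype=int)

    # Group supra-threshold samples into contiguous candidate events
    splits = np.flatnonzero(np.diff(above) > 1) + 1
    events = np.split(above, splits)

    peaks = []
    for ev in events:
        if (ev[-1] - ev[0]) * 1000 / fs < min_dur_ms:
            continue                               # shorter than 30 ms: reject
        peaks.append(int(ev[np.argmax(env[ev])]))  # point of maximal amplitude

    # Enforce a minimum separation of 45 ms between consecutive peaks
    kept = []
    for p in peaks:
        if not kept or (p - kept[-1]) * 1000 / fs >= min_sep_ms:
            kept.append(p)
    return np.asarray(kept, dtype=int)
```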
Single unit sorting and processing

Electrophysiological data in the continuous data format were converted to binary files (one file containing all of the recorded channels for a single session) via a Python script provided by Stephane Valerio (Aquineuro). Binary files were later concatenated through another Python script to pool multiple sessions together and strengthen the power of single-neuron follow-up across days and sessions. Because of computing capacity limits, not all sessions from a single mouse could be concatenated together, thus files were pooled as follows: 1) rest of D-1 (second baseline), behavioural sessions of D1 (S1-4) and rest of D1; 2) behavioural sessions of D2 (S5-6) and rest of D2; 3) behavioural sessions of D3 (S7-8) and rest of D3. Binary files were uploaded to the Kilosort software (https://github.com/cortex-lab/KiloSort) for unit sorting, then results were visualized with the Phy GUI (https://github.com/cortex-lab/phy) for quality check and refining. Extracellular voltage traces were preprocessed with common average referencing. Each set of events was detected by a template and then manually curated in Phy. If the events corresponding to a unit were judged to correspond to noise (near-zero amplitude, non-physiological waveform and events detected on every channel), the unit was discarded. Both single-units (considered to represent the activity of one single neuron) and multi-units (considered to represent the activity of multiple neurons whose waveforms cannot be disentangled) were downloaded from the software and uploaded into Matlab for further processing, but multi-unit activity was not included in further analysis. From this analysis pipeline, we could easily perform longitudinal analysis on pooled sessions but not between pools; therefore, for longitudinal analysis we focused on the baseline rest on D-1, the rest on D1 and the first 4 sessions of the behavioural task.

Analysis of sleep phases

Video recordings of rest phases were analysed with a Matlab script [START_REF] Lanore | Cerebellar granule cell axons support high-dimensional representations[END_REF]. The script computes an index of motion by comparing the pixel composition of each individual frame with the one immediately following it. Motion index data and LFP data in the form of Matlab matrices were processed with "The State Editor" Matlab toolbox (created by Andres Grosmark at Gyuri Buzsaki's laboratory, 2012) to compute sleep phases. The toolbox's interface displays three heatmap representations of the power spectrogram of the LFP trace of three channels belonging to the hippocampal region and a scatterplot representation of the motion index value as a function of time. Slow Wave Sleep (SWS) phases were visually identified as having relatively high power in the delta and spindle bands and relatively lower theta power; Rapid Eye Movement (REM) sleep phases were visually identified as having very high power in the theta band and low power in the delta and low-gamma bands. SWS and REM phases were only identified within prolonged periods of immobility, meaning periods during which the motion index value was low (<0.04) for at least 60s (this does not equate to either SWS or REM periods being at least 60s long, as brief waking episodes can be identified from the power spectrum which do not necessarily result in significant mouse movement).
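The motion index computation and the immobility criterion can be sketched in Python as follows. This is a simplified re-implementation for illustration only: the original is a Matlab script, and the exact pixel-difference normalization and function names here are assumptions.

```python
import numpy as np

def motion_index(frames):
    """Motion index per frame: mean absolute pixel change between each
    grayscale frame and the one immediately following it.

    frames: uint8 array of shape (n_frames, height, width).
    Returns an array of length n_frames - 1, roughly normalized to [0, 1].
    """
    diff = np.abs(np.diff(frames.astype(float), axis=0))
    return diff.mean(axis=(1, 2)) / 255.0

def immobility_periods(mi, frame_rate, thresh=0.04, min_dur_s=60):
    """Return (start, stop) frame indices of prolonged immobility,
    i.e. motion index below threshold for at least min_dur_s seconds.
    Candidate SWS/REM epochs are then scored only within these periods.
    """
    low = np.flatnonzero(mi < thresh)
    if low.size == 0:
        return []
    splits = np.flatnonzero(np.diff(low) > 1) + 1
    segments = np.split(low, splits)
    min_len = int(min_dur_s * frame_rate)
    return [(int(s[0]), int(s[-1])) for s in segments if s.size >= min_len]
```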
Analysis of single neurons within the neocortex

For the analysis of reactivation around Sharp Wave-Ripple (SPW-R) peaks, the firing rate of single-units in a time window of 1000ms around the peak of SPW-Rs (500ms preceding and following the peak) was binned into bins of 10ms each and z-scored to a reference average computed for each single-unit individually by calculating the average bin count of the first 20 bins (200ms; as SPW-Rs events have a duration of 50-150ms, this part of the considered window was reasonably far from the peak and could be considered a good sample of the activity of the single-unit outside of the SPW-Rs' event window). Different populations of single-units were computed by calculating the mean of the z-scored firing rate in the bins surrounding the peak of SPW-Rs (50ms: 25ms before and 25ms after the SPW-Rs' peak) and distinguishing three populations based on the value of the obtained mean z-scored firing rate: single-units displaying a value >1.96 were classified as "significantly activated", <-1.96 as "significantly inhibited" and all the other single-units as "non-modulated". For DSA sessions, single units with a firing rate <0.2Hz during the analysed session were excluded in order to avoid confounding results deriving from very sporadic firing (less than once per run). The firing rate of single-units in a time window of 40s (30s preceding and 10s following the timestamp of the beginning of each run; these intervals were chosen as runs lasted around 10s on average and, prior to each run, mice were retained for 30s within the starting box, representing an inter-trial interval during which exploration was very limited) was binned into bins of 1s and z-scored to a reference average computed for each single-unit individually by calculating the average bin count from bins representing the middle of the inter-trial interval (bins 10-20, 11s). Single-units were classified as "activated in the maze" if the z-score of at least one of the bins representing the run (bins 31-40) was >2SD from the reference average and "inhibited in the maze" if at least one value was <-2SD from the reference average. For the analysis of "behaviourally relevant" single-units during DSA sessions, the maze was subdivided into spatial bins of 2cm and the firing rate at each spatial bin was computed taking into consideration occupancy time, with the help of functions from the FMA toolbox in Matlab (Michaël Zugaro, General Public Licence). Preference for a specific firing location within the maze ("maze preference criterion") was computed by calculating the average firing rate across all bins and comparing the firing rate value of each individual bin to this reference: if the value was >2SD from the reference for at least one bin belonging to a spatial sub-location of the maze, the single-unit was classified as having a firing preference for the whole sub-location and being significantly activated within it (sub-locations: start box, bins 1-7; start arm, bins 8-19; centre, bins 20-25; end arm, bins 26-37; end box, bins 38-45); similarly, a single unit was considered significantly inhibited at a location if its firing rate was <-2SD from the reference average in at least one of the bins composing it. The same firing frequency maps were divided into clusters based on paired behavioural attributes for the computation of the "behavioural attribute criterion" (pairs of features: success vs error runs; right-turn vs left-turn runs; VTE vs non-VTE runs).
Clusters were uploaded to GraphPad Prism and a two-way ANOVA was conducted for each single-unit in order to produce multiple comparisons among the clusters. When a single unit showed a significant difference between the firing rate maps of two paired clusters, the unit was classified as being significantly modulated by that behavioural attribute.

Statistics

All statistical analysis was performed in the Prism environment (GraphPad). Detailed statistics are described for each figure. Significance levels were defined as p < 0.05.

Results

Not all mice trained for the DSA task reached the learning criterion. Mice were trained on the DSA task, during which they were asked to collect a food reward at the end of one arm of a Y maze following a pattern of alternation between left and right arm choices (Fig. 1B-C). Mice performed 4 sessions (S1-4) on the first day and two sessions (S5-6 and S7-8) on each of the two following days (Fig. 1A). Acquisition of the rule for efficient reward collection (i.e. spatial alternation between the two end arms) takes place during the first day of the behavioural protocol, while the two following days are used as tests for the acquisition and consolidation of said rule. We observed that not all mice were able to reach the required learning criterion (mean number of errors per trial < 0.5) at the end of the fourth session. We therefore decided to divide the mice into three groups: Learners (n = 10), No Learners (n = 3) and Slow Learners (n = 5), the latter including mice that did not reach the learning criterion on S4 but did reach it by S8 (mean number of errors per trial: p = 0.522 Learners vs Slow Learners, p = 0.00673 Slow Learners vs No Learners; Fig. 1D-E). Poor behavioural performance is explained by a higher number of repeated errors. The protocol allows mice to make up to 5 consecutive errors in each trial before being forced into a run in the correct direction. Zhang et al. classified series of 3 or more consecutive errors as high-rank errors and observed that they are indicative of a compulsive behaviour which characterizes mPFC-impaired mice, while it is rarely shown by control animals (Zhang et al. 2016). Mice from the Learners group show a marked reduction of high-rank errors from S2 (percentage of high-rank error runs per session: 22 ± 6 % S1; 3 ± 0.7 % S2-8), while the reduction in Slow Learners is progressive along the training days (percentage of high-rank error runs per session: 22 ± 9 % S1; 11 ± 2 % S2-4; 8 ± 2 % S5-6; 4 ± 1 % S7-8) and never takes place for mice belonging to the No Learners group (percentage of high-rank error runs per session: 21 ± 2 % S1-8; Fig. 1F). We noticed that some mice exhibited a turning preference for one of the two arms of the maze, most of them displaying a bias toward right-arm choices, and we hypothesized that a turning bias might be an explanation for the high number of high-rank errors. Indeed, mice from the No Learners group exhibit a preference for right-arm choices in all sessions with the exception of S1 (Fig. 1H, mean right-arm preference across all sessions: 63 ± 13 %) and in 5 sessions this preference is significantly higher than that of the Learners group (multiple t-tests: p = 0.0391 S2; p = 0.00039 S3; p = 0.01716 S4; p = 0.02464 S7; p = 0.02169 S8).
However, the other two groups never display a clear turning bias (mean right-arm preference across all sessions: 51 ± 6 % Learners; 56 ± 4 % Slow Learners) and for none of the groups does the percentage of right-arm choices during a given session significantly correlate with the success rate of the same session (data not shown), thus excluding the possibility that a turning bias might be the sole explanation for poor learning performance. We also hypothesized that the difference in behavioural outcome might reflect different levels of engagement in completing the task. Average speed in completing each run progressively increased from S1 to S8 for all three groups, without significant differences among them (Supp Fig. 1), suggesting similar levels of familiarization with the task and general motivation. However, speed is not a convincing enough proxy for engagement, and we tested whether other behavioural parameters might highlight differences among groups. Vicarious Trial and Error (VTE) is a typical mouse behaviour consisting of a stop at the choice point accompanied by rapid head movements toward the different possible future directions. VTE has been associated with compressed replay of neuronal firing patterns representing all the possible alternative routes in the hippocampus and is considered a common mark of cognitive engagement in a spatial task (Redish, 2016). During the first session of the task, all three groups exhibited a high proportion of VTE-runs (percentage of VTE runs over total runs: 65 ± 6.7 % Learners, 69 ± 6.7 % Slow Learners, 71 ± 15.1 % No Learners; Fig. 1G), which progressively declined to ~20 % of total runs being VTE-runs in S8 for the Learners and No Learners groups (21 ± 6.2 % Learners, 21 ± 9.3 % No Learners), suggesting that mice engage in a procedural behavioural response, which no longer demands constant cognitive engagement. Interestingly, the percentage of VTE-runs of the Slow Learners group did not decline as sharply, even though it is significantly different from the Learners group only on S7 (percentage of VTE runs over total runs in S7: 28 ± 6.6 % Learners, 50 ± 10.6 % Slow Learners, 27 ± 12.1 % No Learners; t-test: p = 0.04089 Learners vs Slow Learners, p = 0.15214 Slow Learners vs No Learners). A possible explanation is that, contrary to mice from the No Learners group, mice from the Slow Learners group do not lose cognitive engagement early on but keep trying to optimize their strategy for reward. We took advantage of this classification based on behavioural performance to have an internal negative control for learning, helping us to determine whether there were any changes in hippocampo-neocortical coupling during the learning process.

Hippocampal Sharp Wave-Ripples are stable across days.

First, we focused on the characterization of the functional properties of hippocampal Sharp Wave-Ripple (SPW-R) events detected during Slow Wave Sleep (SWS), which are known to be associated with memory consolidation. We detected SPW-Rs events by filtering the Local Field Potential (LFP) trace of dHPC channels in the 100-250 Hz oscillatory band and identified peaks of activity based on the amplitude of the oscillation (see Materials and Methods). We then restricted our analysis to SPW-Rs falling within periods of SWS, identified from the combination of the mice's immobility (measured through a motion index) and a low theta/delta ratio in the power spectrogram of dHPC LFP channels (see Materials and Methods and Fig. 2A).
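The restriction of detected SPW-R peaks to SWS periods amounts to a simple interval filter, sketched below in Python under stated assumptions (the function name and interval representation are hypothetical; the original analysis was performed in Matlab):

```python
import numpy as np

def ripples_in_sws(ripple_peaks_s, sws_epochs):
    """Keep only SPW-R peaks falling within SWS periods.

    ripple_peaks_s: SPW-R peak times in seconds.
    sws_epochs: list of (start_s, stop_s) SWS intervals, e.g. derived
    from the combination of the motion index and the theta/delta ratio.
    """
    peaks = np.asarray(ripple_peaks_s, dtype=float)
    mask = np.zeros(peaks.size, dtype=bool)
    for start, stop in sws_epochs:
        mask |= (peaks >= start) & (peaks <= stop)
    return peaks[mask]

# Hypothetical usage with two SWS epochs:
# sws_peaks = ripples_in_sws(all_peaks, [(120.0, 480.5), (690.2, 1500.0)])
```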
We first tested the stability of sleep quality across days. Total time spent in SWS was mostly comparable across days and across groups, with two noticeable exceptions: 1) total time spent in SWS on the first baseline (D-2) by the No Learners group was statistically lower than the time spent by mice from the Slow Learners group, but not from the Learners group; 2) total time spent in SWS on the second day of behavioural training (D2) by the No Learners group was statistically higher than the time spent in SWS by the same group on both baselines (D-2 and D-1; Fig. 2B). Overall, a slight tendency to increase the total time spent in SWS following behavioural training compared to baseline was observed in all groups, but it was not statistically significant in any other case. The mean length of SWS periods was stable across all days and groups, as was the total time spent in REM sleep (Fig. 2B). Overall, our data showed that the structure of the rest periods, at least during the 3h we recorded immediately following the DSA task, was not affected. We then checked the intrinsic functional properties of SPW-Rs. Both the Learners and Slow Learners groups exhibited comparable measures of intrinsic frequency, mean oscillatory amplitude and event duration, stable across days (Fig. 2C). The No Learners group's SPW-R characteristics were also stable across days, but both mean oscillatory amplitude and event duration were significantly lower than the respective measures for the Learners and Slow Learners groups (t-test: No Learners vs Learners: p = 0.0084 D-2, p = 0.01005 D-1, p = 0.01881 D1, p = 0.01216 D2, p = 0.02165 D3; No Learners vs Slow Learners: p = 0.00142 D-2, p = 0.00212 D-1; p = 0.00086 D2; p = 0.00078 D3), while intrinsic frequency was comparable to that of the other two groups. This difference was already in place during the baseline, thus it cannot be a consequence of failed learning. Last, we examined the occurrence rate of SPW-Rs during SWS periods. All groups exhibited comparable measures of occurrence rate, which were stable across days (SPW-Rs occurrence rate ± standard deviation for the Learners group: 0.46 ± 0.20 Hz D-2, 0.47 ± 0.18 Hz D-1, 0.52 ± 0.21 Hz D1, 0.46 ± 0.19 Hz D2, 0.48 ± 0.21 Hz D3, multiple comparisons of one-way ANOVA: p > 0.99 for all pairs; Slow Learners group: 0.58 ± 0.18 Hz D-2, 0.57 ± 0.20 Hz D-1, 0.65 ± 0.14 Hz D1, 0.57 ± 0.15 D2, 0.61 ± 0.16 D3, multiple comparisons of one-way ANOVA: p > 0.99 for all pairs; No Learners group: 0.43 ± 0.09 Hz D-2, 0.36 ± 0.17 Hz D-1, 0.41 ± 0.16 Hz D1, 0.40 ± 0.10 Hz D2, 0.42 ± 0.12 Hz D3, multiple comparisons of one-way ANOVA: p > 0.99 for all pairs; paired t-tests between groups all yielded p > 0.05; Fig. 2D). These data are consistent with results obtained in previous experiments (El Oussini et al., 2023, Annexes). However, they contrast with previous literature highlighting an increase in SPW-R occurrence rate following a spatial learning paradigm [START_REF] Eschenko | Sustained increase in hippocampal sharp-wave ripple activity during slow-wave sleep after learning[END_REF]. To confirm the accuracy of our detection, we plotted the z-scored firing frequency of single hippocampal neurons identified within the dHPC and observed that their peak of activity perfectly coincides with the peak of SPW-R events (Fig. 2E). Despite differences in the No Learners group, our data show that SPW-R properties are stable across days of learning and recordings.

Hippocampo-neocortical coupling is increased after learning in the mPFC but not in the PPC.
Hippocampal SPW-Rs events have already been associated with the replay of neocortical neurons during SWS following behavioural training, suggesting the relevance of hippocampo-neocortical connections for system consolidation (Peyrache et al., 2009). The impact of SPW-Rs on neocortical spiking might also be seen as a proxy for the synaptic plasticity mechanisms at work during the consolidation process. Therefore, we tested whether we could detect enhanced neuronal firing within the mPFC and PPC concurrent with SPW-Rs events detected in the dHPC, restricting our investigation to events detected during periods of SWS. The mean z-scored firing frequency of the mPFC neuronal population of the Learners group showed an increase in firing discharge starting approximately 200ms before the peak of SPW-Rs and peaking concurrently with the SPW-Rs' peak, before sharply returning to baseline. This pattern of progressive increase in the level of excitation of the population was observed with similar features on all recorded days, from the baseline on D-1 to D3 of the behavioural protocol (Fig. 3A), and was similarly observed in the mPFC neuronal populations of the Slow Learners and No Learners groups (Supp. Fig. 2). The increase in mean z-scored firing rate did not reflect the firing pattern of the totality of neurons composing the population. In fact, we could distinguish 2 sub-populations: one significantly activated around the peak of SPW-Rs (mean z-score in a window of 25ms before and after the SPW-Rs' peak >1.96) and one which showed no modulation around SPW-Rs' peaks. Both populations represented roughly half of the total population in all groups across all days (significantly activated neuron percentages: Learners: 49 % D-1, 46 % D1, 60 % D2, 60 % D3; Slow Learners: 47 % D-1, 54 % D1, 42 % D2, 54 % D3; No Learners: 57 % D-1, 59 % D1, 34 % D2, 60 % D3); neurons significantly inhibited around SPW-Rs' peaks were too sporadic to allow any type of analysis (fewer than 1 neuron per group per session). Increased firing around the SPW-Rs' peak suggests positive modulation by the hippocampal excitatory drive provided by SPW-Rs. The mean z-scored firing rate around the peak of SPW-Rs events of the whole population of mPFC neurons of the Learners group was not different across days, but its cumulative distribution revealed a statistically significant shift toward the right for rest periods recorded after behavioural training compared to the baseline, with the exception of D2 (Kolmogorov-Smirnov test: p = 0.0022 D-1 vs D1, p = 0.1370 D-1 vs D2, p < 0.0001 D-1 vs D3; Fig. 3B). Furthermore, the z-scored firing frequency of mPFC neurons around the peak of SPW-Rs events significantly increased on D1 compared to D-1 (Wilcoxon test: p = 0.0001; Fig. 3D). The mPFC neuronal populations recorded in mice from the Slow Learners and No Learners groups did not display any statistically significant change in their z-scored firing rate around SPW-Rs' peaks between D-1 and D1 (Supp. Fig. 2). On D3, the z-scored firing frequency of mPFC neurons of the Learners group around SPW-Rs' peaks was significantly higher than the same measure for the No Learners group (Mann-Whitney test: p = 0.0451; for the other days: p = 0.9892 D-1, p = 0.0957 D1, p = 0.9921 D2; Fig. 3C). All of these results suggest that the positive modulation provided by the hippocampus is enhanced after learning of the rule governing efficient reward collection in the DSA task, but not after the same behavioural task has failed to induce learning.
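The per-unit modulation measure used throughout this section (peri-SPW-R binning, z-scoring against the far edge of the window and the ±1.96 classification) can be summarized by the following Python sketch; this is an illustrative re-implementation only, as the original analysis was done in Matlab, and all names here are hypothetical:

```python
import numpy as np

def peri_ripple_modulation(spike_times, ripple_peaks, win_s=0.5, bin_s=0.01):
    """Z-scored peri-SPW-R firing and modulation class for one unit.

    spike_times, ripple_peaks: event times in seconds.
    Spikes are binned in a +/-500 ms window around each SPW-R peak
    (10 ms bins) and z-scored against the first 20 bins, i.e. the
    200 ms of the window farthest from the peak.
    """
    edges = np.arange(-win_s, win_s + bin_s / 2, bin_s)
    counts = np.zeros(edges.size - 1)
    for peak in ripple_peaks:
        counts += np.histogram(spike_times - peak, bins=edges)[0]
    counts /= len(ripple_peaks)              # mean count per bin per event

    baseline = counts[:20]                   # reference: first 200 ms
    sd = baseline.std()
    z = (counts - baseline.mean()) / sd if sd > 0 else np.zeros_like(counts)

    # Mean z-score over the ~50 ms surrounding the peak (about +/-25 ms)
    centre = (edges[:-1] >= -0.025) & (edges[:-1] < 0.025)
    mid = z[centre].mean()
    if mid > 1.96:
        label = "significantly activated"
    elif mid < -1.96:
        label = "significantly inhibited"
    else:
        label = "non-modulated"
    return z, label
```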
The mean z-scored firing frequency of the PPC neuronal population of the Learners group showed a similar pattern of augmentation, starting slightly later (around 150ms before the peak of SPW-Rs), peaking at the SPW-Rs' peak and coming back to baseline levels less sharply. As in the mPFC, similar patterns were common across days (Fig. 3E) and across groups (Supp. Fig. 2). Roughly half of the PPC neuronal population was significantly activated around the peak of SPW-Rs across days and groups (significantly activated neuron percentages: Learners: 43 % D-1, 50 % D1, 59 % D2, 49 % D3; Slow Learners: 42 % D-1, 41 % D1, 51 % D2, 14 % D3; No Learners: 50 % D-1, 64 % D1, 51 % D2, 30 % D3), with the noticeable exception of D3 for Slow Learners and, to a lesser extent, No Learners, which displayed only 14 % and 30 % of significantly activated neurons, respectively. The mean z-scored firing rate around the peak of SPW-Rs events of the whole population of PPC neurons of the Learners group was not different across days, but its cumulative distribution revealed a statistically significant shift toward the right of values recorded on D3 compared to the baseline recorded on D-1 (Kolmogorov-Smirnov test: p = 0.0030 D-1 vs D1, p = 0.0010 D-1 vs D3, p < 0.0060; Fig. 3G). However, when comparing the firing of the same neurons identified on both D-1 and D1, the z-scored firing frequency of PPC neurons around the peak of SPW-Rs events was not significantly different between D-1 and D1 (Wilcoxon test: p = 0.6117; Fig. 3I). The same analysis for the Slow Learners and No Learners groups showed comparable results (Supp. Fig. 2). Finally, the z-scored firing rate around the peak of SPW-Rs was never significantly different between the Learners and No Learners groups (Mann-Whitney test: p > 0.99 for all days; Fig. 3G). Overall, these data suggest that the hippocampus does exert a positive modulation on the PPC, but that this modulation is not increased after learning of the rule governing efficient reward collection in the DSA task and, on the contrary, might weaken after a few days of repetition of the same behavioural protocol.

Neocortical neurons that display a behaviourally relevant firing pattern during the task are preferentially reactivated in the mPFC but not in the PPC.

Even though the proportion of neocortical neurons significantly activated at the peak of SPW-Rs did not change across days, we wanted to test whether behavioural training induced a change in the composition of this population. First, we identified neurons displaying a behaviourally relevant firing pattern during the 4 sessions of the first day of behavioural training. We defined the behavioural relevance of the firing pattern of a given neuron based on two alternative criteria (see Materials and Methods): 1) the maze preference criterion, based on the identification of a significant peak of activity at a specific sub-location within the maze (start box, start arm, centre, end arm or end box); 2) the behavioural attribute criterion, based on the identification of a significant divergence in the firing map of a given neuron when runs were pooled based on pairs of behavioural attributes (success vs error runs; right-turn vs left-turn runs; VTE vs no-VTE runs). Neurons that met either of the two criteria were considered to display a behaviourally relevant firing pattern. Due to the relatively low number of neurons and the apparently comparable results across groups, we pooled all neurons from all groups together for this analysis.
56 % of investigated neurons in the mPFC met either or both criteria for behavioural relevance. In particular, 24 % of mPFC neurons were modulated according to the behavioural attribute criterion, with neurons divided quite evenly among the three comparative pairs (9 % responded differently to success vs error runs, 6 % to VTE vs no-VTE runs and 9 % to right-turn vs left-turn runs), while 46 % were modulated according to the maze criterion, but with proportions clearly skewed toward neurons firing in either the start box (23 %) or the end box (7 % significantly activated and 13 % significantly inhibited), while populations for the other locations were almost nonexistent. Interestingly, among neurons that were significantly more active within the start or the end boxes, 38 % were overall significantly inhibited during runs compared to the inter-trial interval, suggesting their role might not be strictly related to navigation. 52 % of investigated neurons in the PPC met either or both criteria for behavioural relevance. A low proportion, 14 % of PPC neurons, was modulated according to the behavioural attribute criterion, with a clear preference for the right-turn vs left-turn pair comparison (8 %, while only 2 % responded differently to success vs error runs and 3 % to VTE vs no-VTE runs). 50 % of PPC neurons, instead, were modulated according to the maze criterion, showing a distribution of preferred firing locations more even than that of the mPFC neuronal population but still quite biased toward a preference for the start box (start box: 22 %; start arm: 9 %; centre: 2 %; end arm: 4 %; end box: 10 % significantly activated and 1 % significantly inhibited). 57 % of neurons displaying their firing peak in either the start or end box were overall significantly inhibited during runs compared to the inter-trial interval. We then investigated whether behaviourally relevant neurons were more likely to undergo activation around the peak of SPW-Rs during the rest period following learning. The proportions of behaviourally relevant neurons from either of the two neocortical regions significantly active around the peak of SPW-Rs were not significantly different across the two days (mPFC: 76 % on D-1, 84 % on D1; PPC: 39 % on D-1, 41 % on D1). However, while the proportion of positively modulated behaviourally relevant PPC neurons reflected that of the general population, a significantly greater proportion of mPFC behaviourally relevant neurons than of the general population was activated around the peak of SPW-Rs. We then asked whether activation around the peak of SPW-Rs of behaviourally relevant neurons was qualitatively different from reactivation of neurons that were not specifically engaged during the behavioural task. Within the mPFC, among neurons positively modulated around SPW-Rs' peaks, both behaviourally relevant and not behaviourally relevant neurons were significantly more activated around SPW-Rs' peaks on D1 compared to D-1 (Mann-Whitney test: p = 0.0023 for behaviourally relevant neurons, p = 0.0244 for other neurons); the magnitude of the increase was not statistically different between the two groups (Fig. 4C). The same results were not observed in the PPC, where neurons positively modulated around SPW-Rs' peaks on D-1 showed no significant increase nor decrease in z-scored firing rate on D1 (Fig. 4F).

Injection of AMPA receptors' mobility blockers in the mPFC at the end of the first day of behavioural training had no effect on memory consolidation.
The system consolidation theory hypothesizes that the excitatory drive provided by the hippocampus to neocortical areas during SPW-Rs induces synaptic plasticity within the neocortical engram. To test this hypothesis, we used an AMPA receptors-blocking strategy based on intracranial injection of tetravalent neutravidin in a knock-in mouse strain expressing the AP-tagged GluA2 AMPA receptor subunit and the enzyme BirA ligase (see Materials and Methods). At the end of the first day of behavioural training, we injected mice through guide cannulas chronically implanted in the mPFC with either neutravidin (NA) or saline as a control, to test whether the increase in firing frequency observed on D1 around SPW-Rs during SWS (Fig. 3D) could be explained by the induction of long-term synaptic plasticity. Unexpectedly, mice from the NA-injected group did not display any sign of memory impairment during the second day of the behavioural protocol, suggesting that AMPA receptors' immobilization within the mPFC during the resting period did not impact memory consolidation (Fig. 5A). Two mice from this batch were also implanted with electrodes in both dHPC and mPFC, allowing a preliminary investigation of the impact of AMPA receptors' blocking on single neurons within the mPFC, especially around the peak of hippocampal SPW-Rs. Having just one mouse per treatment, I will limit myself to describing preliminary results without any attempt at quantification. We checked that basic parameters such as time spent in SWS and SPW-Rs' occurrence rate were constant and comparable to wild-type measures (data not shown). We then investigated the relationship between SPW-Rs' peaks and neuronal firing within the two neocortical areas. The z-scored firing frequency described a curve similar to that of wild-type mice in both regions, even though the increased firing at the peak of SPW-Rs events was greatly attenuated in the mPFC compared to wild-type animals (data not shown). No apparent increase or decrease in z-scored firing rate around the peak of SPW-Rs between D-1 and D1 was observed in either of the two regions (Fig. 5C-D). However, the proportion of neurons significantly activated around the SPW-Rs' peak on D1 dropped only for mPFC neurons and only in the NA-treated mouse (46 % on D-1, 24 % on D1).

(Disclaimer: this study was first produced in the form of a paper during the redaction of this PhD dissertation; given that it presents limitations related to the advancement of analyses and experiments that are entirely due to the PhD's time constraints, all of the technical aspects will be discussed in the general discussion of this thesis and not mentioned in the discussion of the paper, even when they are of relevance to the paper itself.)

Discussion

We investigated system consolidation of a semantic-like and spatial working memory-based rule governing efficient reward collection in a Y-maze Delayed Spatial Alternation task. Behavioural training solicited both neocortical areas of interest: in fact, around 50 % of the recorded neuronal population in both mPFC and PPC showed a behaviourally relevant firing pattern during navigation for behavioural training, tuning in either to a specific location within the maze, to an attribute of the run (success vs error, right-turn vs left-turn, VTE vs no-VTE), or to both. However, the information encoded in each of the two areas was qualitatively different.
PPC neurons tuned in to spatial and navigation-related features: neurons preferentially firing at all sub-locations of the maze could be found in this region, while the percentage of neurons changing their firing pattern as a function of behavioural attributes of the run was low (14 %) and dominated by neurons tuning in to the right-turn vs left-turn attribute (58 %). mPFC neurons, instead, mostly had a peak of activity in either the start or end boxes, which by task design also carry other inherent meanings beside the spatial information, which cannot be disentangled: the end box as the site of reward collection and the start box as the site where the mouse was retained during the inter-trial interval. Furthermore, a higher proportion of neurons (24 %) was tuned in to behavioural attributes, and weights were more evenly distributed across the three pairs of attributes. For these reasons, we hypothesized a more cognitive role for the mPFC. These results are not surprising, as they fall squarely within the range of roles attributed to these two regions in the literature (Le Merre et al., 2021; Lyamzin and Benucci, 2019) and were expected when the mPFC and PPC were chosen for this investigation, even though mPFC activity has also been associated with spatial navigation in other types of tasks (Fujisawa et al., 2008). Coordination between hippocampal SPW-Rs and spiking activity in the neocortex was taken as a proxy for system consolidation, based on the evidence in the literature that projections from the hippocampus propagate the excitatory drive of SPW-Rs to neocortical areas, inducing coherence in the pattern of oscillations and neuronal replay (Binder et al., 2019; Khodagholy et al., 2017; Peyrache et al., 2009; Wilber et al., 2017). Indeed, in both neocortical areas we observed that the firing rate of 40-60 % of neurons increased around the peak of SPW-Rs, resulting in a net increase in the mean firing rate of the whole population. We restricted our investigation to neurons that we could reliably detect on two consecutive days and observed that mPFC neurons from mice of the Learners group increased their firing rate around the SPW-Rs' peak after completion of the behavioural protocol on D1 compared to the baseline recorded the day before. No such increase was observed for mPFC neurons from mice of the Slow Learners or No Learners groups. This learning-dependent positive modulation confirms that acquisition of the rule governing efficient reward collection in the DSA task induces hippocampal-driven memory consolidation in the mPFC. This conclusion is strengthened by the fact that the majority (70-80 %) of behaviourally relevant neurons were positively modulated at the peak of SPW-Rs. Interestingly, these neurons were already part of the pool of positively modulated neurons during baseline recording, prior to any learning protocol; they were not specifically recruited by SPW-Rs following their activation during the task.
Evidence collected in the hippocampus showed that hippocampal neurons which participate in SPW-Rs' sequence activations during a baseline recorded right before the learning protocol are more likely to participate in trajectory sequences during the behavioural experience, suggesting that newly learned trajectories emerge from a set of prefabricated neuronal sequences rather than being created de novo during exploration (Dragoi and Tonegawa, 2011). This perspective is efficient from a computational point of view and implies that encoding strengthens the connections and activation drive of neurons that are already highly permissive for sequential activation. Stretching this concept, a similar situation might also occur within the mPFC, where neurons that receive stronger hippocampal input would somehow be primed to participate in neuronal assemblies and new neocortical engrams. Surprisingly, PPC neurons from mice of any group did not show any sign of firing modulation following completion of the behavioural protocol compared to their baseline. As we showed, PPC neurons are activated during the task in ways that are qualitatively different from but quantitatively comparable to mPFC neurons; however, we could not find any evidence of increased communication between the hippocampus and this neocortical area. It is possible but extremely unlikely that memory consolidation within the PPC is regulated by hippocampus-independent mechanisms, also because experiments conducted using other model tasks showed an increased positive modulatory effect from the hippocampus on this neocortical area (Wilber et al., 2017). The conclusion most likely to be correct is therefore that the network involved in consolidation of the rule governing efficient reward collection in the DSA task does not include the PPC. PPC activity during the task might be due to the participation of this region in processes such as attention, egocentric navigation or working memory, but none of these components would be part of the semantic-like rule consolidated during the following rest period. Certainly, a loss-of-function study would be needed to determine the dependence of the DSA task on an intact PPC, to rule out the possibility that the patterns of activation we found during the task were purely correlative or that the PPC only serves a supportive and redundant role in navigation during this task. We hypothesized that consolidation of the rule governing efficient reward collection in the DSA task would be dependent on synaptic plasticity within the hippocampo-medial prefrontal network. A recent work from the laboratory demonstrated that preventing AMPA receptors' mobility within the hippocampus led to disruption of the physiology of SPW-Rs and to memory impairment upon retrieval, suggesting a lack of consolidation (El Oussini et al., 2023, Annexes). Prior to that, blockade of AMPA receptors' mobility was shown to prevent LTP at excitatory synapses in the CA1 region (Penn et al., 2017). We decided to use a recently developed AMPA receptors' mobility-blocking strategy (Getz et al., 2022) to prevent LTP within the neocortex.
The strategy is based on a strain of KI mice expressing an AP-tagged GluA2 subunit for biotinylation of AMPA receptors in the presence of the enzyme BirA ligase, whose expression was induced in the mPFC via injection of a viral vector under the promoter pSyn, to ensure expression in all neuronal types. AMPA receptors' crosslinking to prevent mobility is induced via acute injection of tetravalent neutravidin. Our main hypothesis was that, by preventing AMPA receptors' mobility, we would prevent LTP at hippocampus-to-mPFC projections, disrupting consolidation. However, our experimental design does not allow any control over the affected synapses, as the enzyme BirA ligase is expressed in mPFC neurons irrespective of their input pattern; thus, an equally valid hypothesis would be that LTP suppression affects local excitatory synapses between neurons of the mPFC engram. Disentangling the two possibilities is very difficult with our approach. Surprisingly, injection of neutravidin at the end of the first day of the protocol did not affect memory retrieval upon testing on the second day of the protocol. Indeed, recent research showed that preventing structural LTP within the anterior cingulate cortex 24 hours, but not 8 hours, after the learning paradigm disrupted behavioural performance in an inhibitory avoidance task by affecting memory consolidation (Goto et al.). Likewise, AMPA receptors-dependent functional LTP might sustain memory consolidation within the prefrontal cortex 24 hours, but not immediately, after the acquisition of the rule governing efficient reward collection in the DSA task; testing the effect of neutravidin injection later in the protocol would therefore definitely be interesting. Alternatively, AMPA receptors-dependent functional synaptic plasticity might play no role beyond LTP expression during synaptic tagging at the moment of encoding. This is unlikely, because AMPA receptors' addition at the potentiated synapse is hypothesised to take place also during synaptic capture, as new slots are allocated to AMPA receptors within the reorganized cytoskeleton at the post-synaptic density, and because of the extensive effects on memory consolidation of AMPA receptors' crosslinking in the hippocampus found by El Oussini et al. However, it is possible that memory consolidation is regulated by different mechanisms in the hippocampus and in the mPFC, or that AMPA receptors' crosslinking affects SPW-Rs' physiology in general but not specifically tagged synapses of engram neurons. This question might be partly answered by analysing the effects of AMPA receptors' crosslinking on the modulation of mPFC neurons around SPW-Rs' peaks. Indeed, physiological effects induced by crosslinking might not translate to a behavioural phenotype, as is often the case with silent neocortical engrams during consolidation of recent memories (Kitamura et al., 2017). Unfortunately, we could not collect enough electrophysiological data to draw any conclusion upon, even if we noted a promising tendency toward a decrease in firing rate on D1 compared to the baseline on D-1 in mPFC neurons of the neutravidin group. An increase in the number of animals is certainly needed.
Conclusions

Spatial working memory-guided navigation solicits both the mPFC and PPC, but only the former is positively modulated by hippocampal SPW-Rs during the rest periods following rule acquisition, excluding the PPC from a functional network for consolidation of this type of memory. Memory consolidation is not dependent on AMPA receptors' mobility within the mPFC, at least during the rest period immediately following rule acquisition, while further experiments will be needed to determine the mPFC's involvement at later phases of consolidation and the effects of AMPA receptors' crosslinking on the modulation of mPFC neurons by hippocampal SPW-Rs, possibly acting on a still-silent neocortical engram.

Discussion

We wanted to investigate the role of hippocampal sharp wave-ripples in modulating neocortical areas within a network involved in memory consolidation. We chose a behavioural task (the Delayed Spatial Alternation task) that easily allowed us to discriminate between a progressive but rapid phase of learning, which could be included in a single day of behavioural training and to which we attributed the encoding phase of memory formation, and an extended resting period allowing memory consolidation, before testing for memory acquisition on the following day. We recorded Local Field Potentials with single-wire electrodes from the dorsal hippocampus to collect information on SPW-Rs activity in area CA1, and from the medial prefrontal cortex and posterior parietal cortex to collect spiking activity both during behavioural training and during resting periods prior to and following behaviour. Unexpectedly, even though we used 18 wild-type mice without any particular condition or treatment, only 55 % were able to readily learn the task within the allocated 1-day time frame, while 45 % displayed either delayed learning (28 %) or no learning at all throughout the three days of the protocol (17 %). This is in contrast with previous results obtained within the laboratory with the same protocol, where control mice were pooled without distinction (Zhang et al.; El Oussini et al., 2023, Annexes). The learning impairment seems to be caused by different mechanisms in the two groups: mice from the No Learners group showed a natural bias toward right-turn choices which impaired their behavioural performance in both mean error-per-trial rate and number of high-rank consecutive errors; furthermore, they seemed to disengage quite quickly from the cognitive part of the task by reducing the number of runs during which they displayed VTE (vicarious trial and error) behaviour, which is a sign of choice-making under cognitive control (Redish, 2016). Their rate of decrease of VTE runs was comparable to that of mice from the Learners group; such a decrease is generally interpreted as a loss of cognitive control in favour of procedural choices guided by stereotypical behaviour once the most efficient strategy for collecting reward has been implemented. In the absence of acquisition of this optimized strategy by mice from the No Learners group, we interpreted their loss of VTE as cognitive disengagement from the task and, possibly, proceduralization of a random pattern of exploration or of a low-rewarding strategy.
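For illustration, VTE behaviour is classically quantified through IdPhi, the integrated absolute angular head velocity over a run (Redish, 2016); a minimal sketch is given below, computed from position-derived heading as an approximation of head orientation (the VTE threshold choice is illustrative):

```python
import numpy as np

def idphi(x, y):
    # Integrated absolute change in heading over one run, a standard proxy
    # for VTE; x and y are position samples at a constant tracking rate.
    dx, dy = np.gradient(x), np.gradient(y)
    phi = np.unwrap(np.arctan2(dy, dx))   # heading angle, in radians
    return np.abs(np.diff(phi)).sum()

# Runs whose IdPhi exceeds a session-defined threshold (for instance the
# upper quartile of the session's IdPhi distribution) would be labelled VTE.
```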
On the other hand, mice from the Slow Learners group did not display any turning bias that could explain their poor behavioural performance, but retained a higher proportion of VTE runs throughout the behavioural protocol. This result suggests that, in the absence of an efficient strategy to collect reward and of confounding factors (i.e. internal or external biases), mice retain cognitive control over choices during the DSA task for longer, possibly as long as needed to figure out the best strategy. Overall, the poor behavioural performance of a significant proportion of animals might be explained by the logistics of combining maze exploration and electrophysiological recordings. The compensatory system used was not well calibrated to relieve the mouse of the weight of the recording cables and to allow effortless pulling up to the distal edge of each arm, making navigation physically more demanding than for mice that were not recorded for electrophysiology during the task. Sharp wave-ripples are at the core of investigations of memory consolidation, both within the hippocampal circuit and from a systemic point of view (Buzsáki, 2015); therefore, we naturally addressed their characterization and modulation during our behavioural protocol. Multiple papers pointed out that the occurrence rate of SPW-Rs events increases during slow wave sleep following novelty exploration or execution of a behavioural task, especially after acquisition of the learning criterion (Eschenko et al., 2008). We did not observe this increase in SPW-Rs, as we did not in our previous research employing the same protocol (El Oussini et al., 2023, Annexes). This result is rather puzzling, as the DSA task is a hippocampal-dependent spatial navigation task which relies on an intact hippocampus to be executed, as we verified by injecting muscimol within the hippocampus, resulting in learning disruption (El Oussini et al., supplementary figures). One would therefore hypothesize that hippocampal SPW-Rs are needed during consolidation, at least for coordinating the local circuit. Indeed, El Oussini et al. showed that memory consolidation was disrupted by altering SPW-Rs' physiology through injection of an AMPA receptor-blocking agent; thus, even though SPW-Rs' occurrence rate was not enhanced following learning, they still played a pivotal role in consolidation of the cognitive rule governing the task. A simple explanation for the discrepancy with the literature might be that most, if not all, research in this field has been conducted in rats, and mice's SPW-Rs' physiology might be slightly different, or the difference in occurrence rate smaller and therefore more difficult to detect. Other SPW-Rs' features used to characterize their physiology were stable across days and across groups, with the exception of SPW-Rs' amplitude and duration for mice of the No Learners group. This difference is striking and does not rely on a difference in recording site, as shown by histological slices: it is not an effect of lack of learning, as it is already present in the recorded baselines, and might highlight an underlying issue shaping mice's poor behavioural performance from a physiological point of view.
Indeed, longer SPW-Rs during wake consummatory behaviours in the inter-trial interval of a spatial alternation task have been associated with a higher content of neurons participating in behavioural trajectories, and prolongation of SPW-Rs was effective in increasing the accuracy of behavioural performance (Fernández-Ruiz et al., 2019). However, the two measures, even though comparatively low, are not outside physiological bounds, and SPW-Rs activity seems normal in every other aspect, from occurrence rate to correlation with neocortical neurons; hence it might be due to a random coincidence related to the low number of mice composing the No Learners group. The sleep pattern of mice, in terms of time spent in SWS and REM phases or length of SWS intervals, was consistent across days and groups, even though a non-significant tendency toward an increase in time spent in SWS was detectable between rest following the behavioural protocol and baselines, most likely due to the more demanding and experience-filled nature of days involving behavioural training. A possible mechanism for system consolidation relies on the modulation of the firing pattern of neocortical neurons by hippocampal SPW-Rs. In particular, much as within the local circuit of the hippocampus, SPW-Rs are hypothesized to entrain bursts of compressed neocortical neuronal replay of primed and tagged neurons to promote synaptic capture, and a few studies have indeed found correlations between SPW-Rs and neocortical firing (Peyrache et al., 2009; Wang et al., 2016; Wilber et al., 2017). Our results point toward the conclusion that the hippocampus coordinates with the medial prefrontal cortex for consolidation of the cognitive rule determining the most efficient strategy for reward collection in the DSA task, but that this functional network does not include the posterior parietal cortex, nor does the hippocampus show any particular positive modulatory activity toward this region during consolidation of this rule. In fact, even though roughly 50 % of PPC neurons showed a significantly higher z-scored firing rate around the peak of hippocampal SPW-Rs, the mean population activity was still quite low, and single neurons did not show any significant change, neither positive nor negative, in their firing pattern around SPW-Rs' peaks throughout the days of the protocol. On the contrary, on D3 we observed an abrupt drop in the proportion of PPC neurons significantly more active around the peak of SPW-Rs events than outside of the SPW-Rs' event window, at least for the Slow Learners and No Learners groups. Strengthening of coherence between dHPC and PPC in the high-frequency range following spatial learning has been shown (Khodagholy et al., 2017); however, while the DSA task certainly relies on spatial navigation during the behavioural performance, the nature of the rule to be encoded, i.e. alternating between left and right arm choices, seems to be exquisitely semantic, and its consolidation would not require engagement of the PPC, which has rather been implicated in navigation and sensory association (Lyamzin and Benucci, 2019).
Progressive semanticization of the strategy to efficiently solve the DSA task, marked by stereotyped and procedural choices, would further reduce the spatial and navigational components of the task, possibly explaining the decrease in the proportion of neurons significantly activated around the peak of hippocampal SPW-Rs on the last day of the protocol. The mPFC, on the other hand, shows a higher mean population activity around the peak of SPW-Rs, even though in this region too roughly 50 % of all detected neurons displayed a z-scored firing rate significantly higher around the peak of SPW-Rs events than outside of the SPW-Rs' event window, a proportion that remained constant across days and groups. Single mPFC neurons displayed a significantly higher z-scored firing rate around the SPW-Rs' peak on D1 compared to the baseline on D-1, and almost all neurons that were already significantly positively modulated around the SPW-Rs' peak on D-1 significantly enhanced their firing rate around SPW-Rs on D1. The mPFC is the main neocortical subject of investigation for memory consolidation, because of its role in cognitive control (Euston et al., 2012) and accumulating evidence potentially suggesting a pivotal role in long-term memory storage, similar to that played by the hippocampus on a shorter time frame (Tonegawa et al., 2018). It is not surprising that there seems to be a stronger and functionally relevant connection between mPFC and dHPC that was not detected for the PPC, also because, unlike this latter region, the mPFC receives dense monosynaptic projections at least from the ventral hippocampus, while projections from the dorsal hippocampus are scarcer. Furthermore, the mPFC is part of the memory system regulating encoding and consolidation of semantic memories (Henke, 2010); therefore, its stronger correlation with hippocampal activity during SWS suggests that the rule governing efficient execution of the DSA task that has to be consolidated is indeed semantic. Characterization of the interplay between the dHPC and neocortical regions would greatly benefit from an analysis of the coherence between these areas in different oscillatory frequency bands, both during the task and during subsequent rest periods. Indeed, it would be interesting to reproduce observations of theta-range coherence between dHPC and mPFC during navigation (Benchenane et al., 2010) as a marker of coordinated participation in the same network for task execution, and to inspect whether a similar coherence pattern is shared by dHPC and PPC. Furthermore, analysis of LFP oscillations might give a deeper insight into the participation of the PPC in the network for execution of the DSA task, even in the absence of loss-of-function studies that might resolve the question in a more direct way. We would look specifically at the coherence between the two neocortical regions, as a sign of the coordinated interaction that would be expected during execution of a working memory task (Yamamoto et al., 2014).
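For illustration, the proposed coherence analysis could start from a sketch like the following (assuming two simultaneously recorded, equally sampled LFP traces; band limits and window length are illustrative):

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(lfp_a, lfp_b, fs, band):
    # Magnitude-squared coherence between two LFP channels, averaged over a
    # frequency band of interest (e.g. theta, 6-10 Hz).
    f, cxy = coherence(lfp_a, lfp_b, fs=fs, nperseg=int(4 * fs))
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()

# e.g. theta coherence between dHPC and mPFC during navigation epochs:
# theta_coh = band_coherence(dhpc_lfp, mpfc_lfp, fs=1000, band=(6, 10))
```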
Evidence, collected primarily within the hippocampus, indicates that neurons implicated in SPW-Rs-induced replay of activity are those most activated during the experience encountered during the preceding waking period (Dupret et al., 2010). We tackled this question by considering the behavioural relevance of neurons active during the DSA task, as we specifically focused on neurons that displayed a distinct and significantly different firing pattern either at a specific location within the maze or in runs characterized by one out of a pair of behavioural attributes (left-turn vs right-turn, success vs error, VTE vs no-VTE). We included PPC neurons in the analysis because, even though we excluded this region from the functional network for memory consolidation, it did undergo activation during the task, and we considered that a deeper insight into the activity of this neocortical region during the behavioural task might further explain the lack of correlation with hippocampal SPW-Rs activity during SWS. Indeed, PPC neurons displayed activity patterns during the DSA task that could be attributed to navigation. Only 14 % of neurons were positively modulated according to the behavioural attribute criterion; among them, neurons modulated by left-turn vs right-turn choices, which could either be viewed as a cognitive feature from the choice-making perspective or as yet another spatial feature, were predominant. Furthermore, most of these neurons actually showed a double modulation, being significantly more active also at a specific location within the maze. Location-specific neurons represented 50 % of all PPC neurons, and all locations were represented, even though not equally. The most represented location was the start box. The start box and end box cannot strictly be considered purely spatial and navigational locations, as the first represents the space where mice are retained during the inter-trial interval, while the second is where reward is collected; they therefore have a strong cognitive connotation. However, this high proportion of PPC neurons firing at locations not entirely related to space and navigation might be explained by an analysis bias: to compute the temporal edges of single runs, we took the two timestamps at which mice crossed, respectively, the border of the start box and the border of the end box, and added 2 seconds to each extreme. Therefore, a certain amount of time not strictly spent in exploration was included in each run. The hypothesis of a contamination of data from the inter-trial or foraging intervals seems to be confirmed by the fact that 57 % of neurons positively modulated in either the start box or the end box were overall inhibited during runs compared to the inter-trial interval. mPFC neurons, instead, were more strongly modulated according to the behavioural attribute criterion (23 %), and neurons differentially modulated in success vs error runs were as numerous as neurons differentially modulated in right-turn vs left-turn runs. 46 % of mPFC neurons were modulated at a specific location within the maze, but only one of them outside of either the start or end boxes. Contamination from neurons activated during the inter-trial interval was lower than for the PPC, affecting 38 % of the mPFC neurons modulated in either one of the two boxes.
Interestingly, neurons modulated within the end box were equally distributed between significantly activated and significantly inhibited, suggesting that processing of reward, or of its absence, might require activation of certain neurons and inhibition of others. Indeed, half of the neurons differentially modulated in success vs error runs were also significantly modulated (either activated or inhibited) in the end box. Furthermore, most neurons inhibited in the end box were detected in mice from the Learners group; therefore, inhibition at the reward location might be heavily involved in correct reward evaluation. Overall, these results confirm our hypothesis that, in the DSA task, the PPC plays a role which is primarily related to navigation, while the mPFC is more cognitively engaged. We restricted the analysis to the first day of the behavioural protocol because of choices made at the moment of unit sorting, when we were able to concatenate data from the baseline on D-1 and the totality of the first day of behaviour (4 behavioural sessions and the rest period) into a single file that was sorted as a unique input, ensuring consistent detection of a particular neuron across different electrophysiological sessions. Data from the two following days of the behavioural protocol were excluded from this joint sorting; therefore, identification of single neurons that could be diachronically followed across all days of the protocol requires further careful refinement steps that have not yet been performed. It would certainly be interesting to assess whether, and in which proportion, neurons activated during the task undergo a drift in activity saliency, as suggested by other studies (Driscoll et al., 2017; Schoonover et al., 2021), or whether the behavioural associations are fixed and stable across the whole protocol. What is also lacking from the current analysis is an assessment of the evolution of the firing pattern of single neurons within the first day of the protocol: whether, for example, behaviourally relevant neurons reveal their preferential behavioural association from the beginning or whether it is refined as runs progress and, in this latter case, whether it correlates with the progression in acquisition of the behavioural rule. Finally, it would be interesting to assess whether repetitive activation of the same behaviourally relevant neurons on consecutive days would augment their enrichment among SPW-Rs-modulated neurons, which might be interpreted as a sign of reconsolidation, or decrease it, possibly indicating that, once the main phase of consolidation of the behavioural rule has taken place, neurons that represent consolidated behavioural trajectories are no longer preferred for SPW-Rs-modulated activity. All of these are intriguing results; however, the low number of neurons accessible for this analysis makes it very difficult to draw any clear-cut conclusion, or even to properly differentiate among the different patterns of activity of the three behavioural groups.
Detection of a low number of single neurons is indeed a weakness of the project design, as 16 single-wire electrodes do not have the throughput of silicon probes bearing hundreds of recording sites; on the other hand, methods for chronic implantation of this type of probe for recordings in freely moving animals are not easy to implement, and a boost in this direction has been made only in very recent years (Steinmetz et al., 2021), forcing a compromise. Furthermore, we can still extract more information from the dataset we have already recorded. In fact, for time-constraint reasons, only the analysis of single units was included in the present manuscript, while multi-units detected simultaneously were put aside. The Kilosort software sorts as multi-units all spiking activity that is considered to be the result of two or more neurons firing in close proximity, in a way that does not allow disentanglement of the individual firing patterns, resulting in patterns of activity that do not display any physiological inter-spike interval. Multi-units can be considered clusters of neurons functionally connected together; thus, their analysis might yield some interesting results. We detected almost as many multi-units as single units during our sorting; thus, including them in our investigation might significantly increase the size of the dataset. Furthermore, a good proportion of multi-units was found in the hippocampus, where, instead, the single-unit detection rate was low, due to the project design (only four electrodes were implanted in this region) and to the densely packed nature of this region, which makes disentanglement of the activity of individual neurons more difficult. Analysis of multi-unit activity might even allow us to detect multi-units whose firing is tuned to behavioural trajectories in the hippocampus and to analyse the participation of behaviourally relevant hippocampal neurons in SPW-Rs. Based on our results, we hypothesized that consolidation of the DSA task relies on a network which includes the hippocampus and the medial prefrontal cortex. Previous experiments conducted within the laboratory had already established that preventing AMPA receptors' mobility induces SPW-Rs disruption and abolishes consolidation of the rule governing efficient reward collection in the DSA task (El Oussini et al., 2023, Annexes). This time, we wanted to block AMPA receptors' mobility in the mPFC to test whether, by preventing AMPA receptors-dependent synaptic plasticity in the neocortex, we could still disrupt memory consolidation. We hypothesized either that neocortical neuronal reactivation leads to new NMDA receptor-dependent LTP induction and expression, or that, following synaptic capture in tagged synapses of primed neurons, AMPA receptors' mobility would be needed to occupy the new receptor slots created at the post-synaptic density as a consequence of late structural LTP. As a model for the experiment, we used knock-in mice expressing both the AP-tagged GluA2 subunit and the BirA ligase enzyme for biotin fixation, chronically and bilaterally implanted with guide injection cannulas in the mPFC. Unexpectedly, injection of neutravidin at the end of the first day of the protocol did not induce an impairment in behavioural performance on the following day, suggesting that the AMPA receptors' mobility-blocking agent had no effect on memory consolidation.
A possible explanation is that, even if the mPFC is engaged early on during encoding and engram neurons are already detectable, it does not yet play an active role in memory consolidation, as suggested by the two-stage system consolidation theory. Indeed, prevention of structural synaptic plasticity within the mPFC was effective in disrupting memory consolidation in an inhibitory avoidance task 24 h, but not 8 h, after conditioning (Goto et al., 2021). We could collect only a very limited amount of electrophysiological data but, even though we could not find any significant difference in single mPFC neurons' z-scored firing rate around the peak of SPW-Rs between the baseline recording and the rest period following the first day of the behavioural protocol and neutravidin injection, we observed that the proportion of mPFC neurons significantly activated around the peak of SPW-Rs dropped by half following neutravidin injection. A similar drop was not detected in PPC neurons, which we kept recording as a control neocortical region. This result is certainly preliminary and partial, but it is also very intriguing as, if confirmed, it would mean that AMPA receptors' crosslinking does affect the physiology of hippocampo-neocortical communication, possibly preventing synaptic plasticity at hippocampus-to-neocortical synapses. However, this disruption has no effect on recent memory retrieval. It would be interesting to explore whether this single manipulation would be effective at preventing memory retrieval at a later point, when hippocampal engrams are generally dismissed in favour of neocortical engrams for remote memory, which would suggest that early neocortical plasticity is indeed needed for long-term memory consolidation, or whether physiological activation during rest periods at the end of the following days of the protocol would be sufficient to induce consolidation of long-term memory, suggesting a model where memories are reconsolidated after each reactivation and the different consolidation phases are not regulated by any particular hierarchy. We are aware that gain or loss of modulation is an indirect measure of synaptic plasticity, especially when the phenomena observed are mostly correlative and our project design did not allow us to perform any manipulation of specific synaptic connections. For this reason, for the future progression of the project, we started testing optogenetic methods to exert manipulation or to directly test properties of individual engram neurons, and we took a special interest in the CaMPARI and Cal-Light approaches. CaMPARI is a photo-convertible molecule which switches from emitting green light to emitting red light after concurrent illumination with UV light and a rise in calcium concentration, resulting in a measurable red/green ratio of infected neurons which is proportional to the firing rate of a given neuron during the illumination period (Fosque et al., 2015; Trojanowski et al.).
We performed preliminary experiments where we simply illuminated the mPFC during each run (from the moment the mouse exited the start box to the moment it entered the end box) and we obtained the photo-conversion of a

Introduction

The importance of activity-dependent synaptic plasticity (SP) in the memorization process is generally accepted. This statement is supported by multiple studies reporting the behavioural impact of pharmacological treatments targeting SP-related molecular mechanisms, of SP-related genetic inactivation, or, more recently, of molecular approaches specifically affecting long-term potentiation 1-3. The examination of the physiological consequences of these inactivation strategies in living animals generally points to alterations of behavioural performance at testing time 2,3, an impact that however depends on the type of memory and the extent of molecular manipulations in one or more implicated brain regions 2. Recent reports have also challenged the link existing between the neurobiological consequences of synaptic plasticity blockade and animal performance. For example, Kaganovsky and colleagues recently proposed that blocking synaptic plasticity in the CA1 region of the hippocampus did not impact animal performance in a variety of behavioural tests, even though some cellular and network proxies for learning were affected in the hippocampus 2. This may result from functional redundancy between brain regions to achieve cognitive robustness, but it also certainly complicates the interpretation of the impact of these SP interference methods on memory. One intriguing possibility would be that the process of memory, which includes multiple steps - encoding, consolidation and retrieval - corresponds to sequential steps of synaptic plasticity formation and maintenance 4,5. There, Hebbian synaptic tagging would be the immediate response to coincident neuronal activities, supporting rapid adaptation of animal behaviour in response to new situations. Then synaptic capture, necessary for the maintenance of the plasticity, would occur during quiescent - awake or sleeping - states, allowing consolidation of a memory that can be retrieved afterwards. In this line, the importance of hippocampal ripples appears central. Ripples intrinsically contain both prerequisites for synaptic capture: the capacity to replay behaviourally relevant spatial sequences encoded during the awake state 6, and the ability to broadcast this information, enabling the expression of plasticity-related molecules important for synaptic capture 4. Recent findings confirmed that ripple content depends on recently acquired memories 7, reactivating neuronal ensembles in cortices, such as those implicated in the running of specific rules 8. Physiologically, hippocampal ripples are short network oscillations at 180-250 Hz corresponding to synchronized neuronal activation generating synaptic waves that can be evidenced when recording in the dendritic fields - i.e. the stratum radiatum - of the hippocampal CA3 and CA1 regions 6. Interestingly, ripples can reach cortical regions - through direct or indirect projections - where they synchronize with other sleep-related oscillations, such as spindles and delta waves 6, a process that is reinforced by newly encoded learning 9.
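For illustration, a minimal ripple-detection sketch following this definition (band-pass filtering in the ripple band, envelope thresholding; all parameter values are illustrative and not those of the Methods):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_ripples(lfp, fs, band=(180, 250), thresh_sd=3.0, min_dur=0.02):
    # Band-pass the LFP in the ripple band, take the Hilbert envelope, and
    # keep epochs where the z-scored envelope stays above `thresh_sd`
    # standard deviations for at least `min_dur` seconds.
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, lfp)))
    z = (envelope - envelope.mean()) / envelope.std()
    above = z > thresh_sd
    edges = np.flatnonzero(np.diff(above.astype(int)))
    if above[0]:                      # epoch already open at trace start
        edges = np.r_[0, edges]
    if above[-1]:                     # epoch still open at trace end
        edges = np.r_[edges, above.size - 1]
    starts, stops = edges[::2], edges[1::2]
    keep = (stops - starts) / fs >= min_dur
    return np.c_[starts[keep], stops[keep]] / fs   # (start, stop) in seconds
```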
In the hippocampus, ripples are generated in structures - such as the hippocampal CA3 region - rich in recurrent connectivity, and depend on both excitatory and inhibitory local inputs that constitute the feedback loops necessary for building fast oscillations 10. As such, impairing 11 or prolonging 7 ripples can be achieved by manipulating specific interneuron populations. Even if in situ preparations do allow spontaneous oscillations that share numerous characteristics with ripples recorded in vivo 10,12,13, the relation between synaptic plasticity and ripple physiology has not yet been explored specifically in situ using methods that avoid confounding factors such as effects on basal glutamatergic transmission 1. We here used two alternative strategies to abolish synaptic plasticity depending on AMPAR mobility (AMPARM): the first, based on multivalent IgGs directed against the GluA2 subunit of AMPAR, has been proved efficient in blocking LTP at Schaffer collaterals to CA1 synapses 14. Similarly, blocking AMPARM using biotinylation of the GluA2 subunit in the presence of tetravalent neutravidin in the external medium did not affect basal transmission but led to a complete absence of LTP 15. We thus used these strategies in the dorsal hippocampus to explore the link between synaptic plasticity, ripple physiology and learning and memory processing.

Results

AMPAR immobilization in the dorsal hippocampus impairs memory consolidation

Based on our previous reports showing that AMPAR immobilization at the neuronal surface efficiently blocks post-synaptic expression of hippocampal long-term potentiation, we used intra-hippocampal infusions of AMPAR cross-linkers to test for the implication of synaptic plasticity in the memorization process of a spatial task. For this, mice were cannulated bilaterally above the dorsal hippocampus and trained for a working memory-based delayed spatial alternation task (DSA 16). In this task, food-deprived mice are trained to find food rewards in a Y-maze according to a simple rule: reward location alternates between the right/left ending arms (Figure 1a). A delay of 30 seconds between consecutive runs is imposed, forcing the animals to remember the previous location before engaging in the following run. In control conditions, a training day of 4 sessions - about 40 trials or reward positions - is sufficient for the animals to decrease their number of errors and reach their maximal performance, which is maintained on the following days (Figure 1d). To mediate AMPAR immobilization in the dorsal hippocampus, we performed bilateral, intra-cerebral injections of AMPAR cross-linkers (anti-GluA2 IgGs) or their controls (anti-GluA2 monovalent Fabs) at key timings of the learning process (Figure 1b-c): immediately before the first learning session of day 1 (pre-learning), immediately after the end of the first training day (pre-rest), and immediately before the first session of day 2 (pre-test). Our aim was to test the importance of hippocampal AMPARM-dependent plasticity in the encoding, the consolidation and the recall of the DSA rule, respectively. Collectively, the results pointed to an impact of AMPAR cross-linking on memory consolidation: indeed, pre-learning injections of AMPAR cross-linkers did not impact animal performance on day 1 (Figure 1d and 1e left), but rather on the following day, choices returning to random level (Figure 1d left and 1e right).
A similar effect was observed when injections were performed immediately after session #4 (Figure 1d middle and 1f), but not if done before the test on day 2 (before session #5, Figure 1d right and 1g). This last experiment indicates that memory retrieval was not impacted by AMPAR cross-linking but, more importantly, that a crucial AMPARM-dependent process supporting memory consolidation occurs during the resting period. An important question to address is the origin of the lack of performance observed on day 2 in pre-learning and pre-rest injected animals. Indeed, an increase in the number of errors can have various origins, such as disorientation, disengagement, bad animal state, up to complete forgetting of the alternation rule. To further dissociate between these options, we thoroughly analysed animal behaviour along DSA rule acquisition (Figure 2). As other groups using T and/or Y mazes to test mice's cognitive abilities 16,17, we noticed that runs can be separated into two groups: those in which the animals run through the maze at an almost constant speed, and those in which hesitation can be observed at the crossing point, with significant changes in head orientation and speed, called Vicarious Trial and Error runs or VTE runs (Figure 2a), which are predictive of good choice accuracy, as testified by the difference in error rates between VTE and non-VTE runs (Figure 2a right). Interestingly, probably because no rule was clear initially, animals exhibited more no-VTE runs at the beginning of the learning day (Figure 2b). Next, to get insights into the origin of the performance loss of pre-learning treated animals, we examined the occurrence and choice quality of VTE and no-VTE runs in session #5 (Figure 2b-e). Surprisingly, the lack of performance of pre-learning IgG-injected animals was associated with reinstatement of initial values, suggesting an apparent DSA rule amnesia (Figure 2c-e): i) in IgG-treated mice, the occurrence of no-VTE runs in session #5 was similar to its initial session #1 value (Figure 2b,c), as if the animal had to re-establish a rule, rather than being unable to apply the previously encoded one; ii) the error rate of VTE runs in session #5 also increased, returning to the level observed in session #1 (Figure 2d), indicating that, when attempting to apply the rule, animals' performance was as poor as when first encountering it (Figure 2e bottom); and iii) whereas with training the performance of no-VTE runs improved and progressively diverged from random choices, it also returned to initial values in session #5 of IgG-injected mice (Figure 2e top). Importantly, on day 1, all these parameters evolved similarly between IgG- and control-injected animals, suggesting that the DSA rule encoding process proceeds normally in the absence of AMPARM (Figure 2b-e and Supplementary Figure 1). Thus, we propose that a hippocampal AMPARM-dependent mechanism is at play during post-training resting periods that supports memory consolidation, and that dHPC AMPAR immobilization leads to a total forgetting of the acquired DSA rule rather than to an incorrectly encoded or played DSA rule.

AMPAR immobilization in the dorsal hippocampus impairs ripple physiology during slow wave sleep.

Hippocampal ripples are fast oscillations that develop during the slow wave sleep (SWS) phase and that are considered offline replays of immediately preceding experiences 6,8,18. They are generated in the CA2/CA3 regions of the hippocampus and propagate to CA1 before broadcasting to cortical regions 9,19.
Interestingly, their interplay with immediately preceding synaptic tagging is unknown, even if specific downscaling and NMDAR-dependent synapse refinement have been reported in in situ preparations 11. Thus, we wanted to examine the impact of IgG treatment and DSA learning on dHPC ripples (Figure 3). To achieve that, animals were implanted bilaterally with wire bundles medial to the injection cannulas (Supplemental Figure 2a). dHPC Local Field Potentials (LFPs) were recorded for 3 hours immediately following Y-maze habituation ("habituation" on Day -1 or D-1) or testing sessions ("DSA" on Day 1 or D1, Figure 3). At first, we separated awake and resting/sleeping states in the home cage using animal tracking (mobility, Figure 3 top and Supplemental Figure 2c). Then, Slow Wave Sleep (SWS) and Rapid Eye Movement (REM) sleeping phases were separated using a theta/delta ratio defined on hippocampal LFP spectra (Figure 3 middle and Supplemental Figure 2c), REM periods hosting robust theta oscillations absent in SWS periods, which are characterized by pronounced delta waves (for a typical example see Supplemental Figure 2c). Importantly, as expected, SWS periods correlated nicely with the occurrence of hippocampal ripples (see Methods, Figure 3 bottom and Supplemental Figure 2c). Then we tested whether the DSA protocol and AMPAR cross-linking led to alterations of ripple frequency and amplitude (Figure 3c-f). At first, we tested whether our recordings were stable over time: indeed, and not surprisingly, neither the amplitude nor the frequency of detected ripples differed between two consecutive basal days (control D-2 and D-1 recordings, Figure 3c). Some reports described that spatial learning or retrieval led to increases in dHPC ripple frequency 20. However, no noticeable changes in recorded ripples were observed in animals submitted to DSA learning and injected with control constructs or non-injected (see specific mentions in Figure 3d). We next tested whether the blockade of AMPARM in the dHPC perturbed ripple physiology in DSA-trained animals (Figure 3e) or non-trained mice (Figure 3f). Importantly, in both cases, as for control injections, IgG injections were done several hours before the recorded resting periods, the only difference being that non-trained animals were solely positioned in the maze for a similar time duration, but with no specific rule to learn (see Methods). Surprisingly, we detected a significant impact of IgG injections on ripple amplitude and frequency, which were significantly decreased, but only when learning was present (compare Figures 3e and 3f). Because ripple inactivation during SWS has been proved to impair spatial memory consolidation 21, this decrease in ripple content may explain the lack of consolidation observed in pre-learning IgG-injected animals (Figure 1d). Importantly, we did not detect any change in the total time of SWS between groups on D1 within the first three hours of the resting period (data not shown). Notably, this early SWS phase has been shown to elicit ripples that are of particular importance for recent memory consolidation 18. Thus, our data support that, in the dHPC, DSA consolidation mobilizes AMPARM-dependent plasticity events that support the genesis and strength of hippocampal ripples.

AMPARM-dependent plasticity at CA3 recurrent synapses supports ripple activity in situ.
AMPARM-dependent plasticity at CA3 recurrent synapses supports ripple activity in situ. Most of what we know about the cellular and synaptic contributions to ripple physiology comes from acute in situ preparations in which ripple-like oscillations are spontaneously generated 10,12,13. Although some aspects of in vivo ripples are obviously lacking, such as their cognitive content 6, these preparations recapitulate most in vivo ripple properties and retain some link with in vivo experience 11. We set up and used in situ hippocampal preparations exhibiting spontaneous ripples and combined them with synaptic "tagging" by inducing LTP at CA3→CA1, CA3→CA3 and DG→CA3 synaptic contacts in the presence or absence of AMPAR cross-linkers (Figures 4 and 5). Among the questions addressed is the interplay between AMPARM-dependent plasticity and ripple physiology, to gain insight into our in vivo results. In optimized in situ preparations 13, ripple-like activities - here called SPW-Rs - can be stably and robustly recorded using field recording pipettes positioned in the CA3 and CA1 regions (Figure 4a-e). To be included, SPW-R recordings had to show a stable occurrence frequency, co-detected CA3 and CA1 events, a constant delay between CA3 (first) and CA1 (delayed) responses (Figure 4c middle), and good amplitude matching between both signals (Figure 4b and 4c right). Additional criteria were verified when applicable: i) the signal polarity in the CA1 region depended on the recording location - positive in the stratum pyramidale and negative in the stratum radiatum - confirming that incoming CA3 activities generated a significant synaptic field response in CA1 (Supplementary Figure 3a); ii) both evoked and spontaneous SPW-Rs sometimes engaged CA3 unitary activities (Supplementary Figure 3b); iii) when tested, stimulations in the CA1 stratum radiatum that generated SPW-Rs interfered with spontaneous SPW-Rs, generating a refractory period (Supplementary Figure 3c). Importantly, after a 20-minute period in the recording chamber, all parameters were stable for more than an hour, allowing the combination of SPW-R recordings with HFS application and/or pharmacological manipulations (Figures 4 and 5). We previously showed that AMPAR immobilisation at the neuronal surface in the CA1 region impaired LTP expression at CA3→CA1 synapses 14. We first wanted to reproduce and extend this finding to other synapses eliciting post-synaptic LTP expression. Interestingly, in the CA3 region, pyramidal neurons receive two major excitatory afferents that are intrinsically different: mossy fibres originating in the dentate gyrus form "detonating" synapses expressing a strong rate-dependent facilitation that can be prolonged by a sustained potentiation of presynaptic origin (an increase in presynaptic release probability 22). In contrast, recurrent synapses made by distant or neighbouring CA3 pyramidal cells are classical Hebbian synapses, expressing post-synaptic LTP 22. We thus tested the impact of AMPAR cross-linking on LTP in our in situ "SPW-R" preparation (Supplementary Figure 4). Not surprisingly, we observed that AMPAR X-linking led to an absence of LTP at synapses postsynaptic to CA3 axons (CA3→CA1 and CA3→CA3) but not at DG→CA3 projections, which were solely affected by PKA blockade using Rp-cAMP preincubation (Supplementary Figure 4). In the absence of learning-associated synaptic tagging, AMPAR immobilization did not lead to changes in ripple frequency or amplitude (Figure 3f).
Acute slice preparations from naive mice are often used as models to study the molecular and cellular mechanisms of LTP induction and expression at hippocampal synapses 23. It is therefore often considered that in naive mice no significant synaptic tagging - endogenously triggered LTP - is present. We thus tested the effect of AMPAR cross-linking in SPW-R-containing naive preparations by locally infusing anti-GluA2 IgGs in the CA1 or CA3 strata radiata (Figure 4f-i). Importantly, the efficacy of this injection procedure on LTP expression was previously validated in CA1 14, and reiterated here in the CA1 and CA3 regions (Supplementary Figure 4). As compared to basal conditions, these injections had no effect on SPW-R frequency or amplitude (Figure 4f-i), the local effect of IgG injection on amplitude being attributable to the one/two pipette(s) procedure (Supplementary Figure 5). Thus, in good agreement with the lack of effect of AMPAR immobilization on basal synaptic transmission 14, and with the in vivo results obtained in naive mice, our in situ results suggest that basal SPW-Rs do not depend on AMPARM in the absence of specific synaptic tagging. Next, we wanted to test the effects of synaptic tagging - here generated by HFS applications enabling LTP induction (Supplementary Figure 4) - on SPW-R frequency (Figure 5). Interestingly, CA1 HFS stimulations did not impact SPW-R frequency or amplitude (Figure 5a,c,d), indicating that synaptic strength at CA3→CA1 synapses may not be a prominent determinant of SPW-R frequency. In contrast, the same procedure applied at CA3→CA3 recurrent synapses led to a strong increase in SPW-R frequency (Figure 5b,c), particularly prominent in cases of low basal SPW-R frequency (Figure 5d). Furthermore, the effects of HFS on synaptic strength and on SPW-R frequency appear temporally disconnected: the increase in evoked EPSP amplitude was detectable as early as the 0-5 min post-tetanic period, whereas the effect on SPW-R frequency was not yet detectable (Figure 5d bottom). This suggests that the reinforcement of CA3→CA3 recurrent synapses progressively increases the excitability of the CA3 region, making it more prone to generate ripples. Finally, we tested the impact of AMPAR immobilization on the modulation of SPW-Rs by CA3 HFS, to assess whether the two phenomena are independent. We applied CA3 HFS in SPW-R-expressing slices in which local CA3 infusions of anti-GluA2 IgGs had been performed (Figure 5e-g). Under AMPAR immobilization, as found for LTP at CA3→CA3 synapses, the HFS-associated effect on SPW-R frequency was absent (Figure 5e-g), suggesting that SPW-R physiology depends on AMPARM-dependent CA3 recurrent synaptic strength. Importantly, a contribution of synaptic potentiation at DG→CA3 inputs is unlikely, as we systematically tested for 1 Hz frequency facilitation in our evoked synaptic responses (data not shown), and because these synapses harbour a form of plasticity that is insensitive to AMPAR cross-linking (Supplementary Figure 4). Thus, we propose that AMPARM-dependent LTP at CA3 recurrent synapses positively controls ripple activity in situ. To confirm that the lack of consolidation is associated with an impairment of ripple physiology, we combined this cross-linking method (the CA3-targeted BiRA/neutravidin approach described below, Figure 6) with dHPC recordings, using the same recording methodology as for the IgG experiments (Figure 7). Ripples occurring during SWS were extracted and counted after habituation (D-1) and after DSA (D1) sessions (Figure 7). As this last recording started immediately after drug delivery, it was important to control for unspecific drug effects: importantly, NA application on GFP-only dHPCs, and saline or mSA delivery on BiRA-expressing dHPCs, did not lead to changes in ripple frequency, mirroring the lack of effect on DSA consolidation (Figure 6).
However, NA delivery in BiRA-expressing CA3 areas was associated with a pronounced decrease in ripple frequency, reminiscent of the effect observed upon pre-learning and pre-rest IgG injections (Figure 3). Thus, by two different approaches, we demonstrated that AMPARM in the CA3 region is necessary for memory consolidation and supports ripple physiology during slow wave sleep. Discussion. In order to understand the link between synaptic plasticity and learning and memory, it is essential to analyse separately the various phases of memory encoding, consolidation and retrieval, and to use specific tools that perturb plasticity without affecting basal synaptic transmission. With this in mind, we developed molecular strategies that can be used in vivo to address these issues. We tested two different methods to impair postsynaptic long-term potentiation in the dorsal hippocampus, and uncovered that, during the process of acquiring a spatially guided rule to obtain food rewards, AMPAR mobility was necessary during the consolidation phase and constituted an important physiological mechanism supporting the ripple activity consecutive to new rule encoding. From our in situ experiments, an interplay between AMPARM, LTP at CA3→CA3 recurrent synapses and ripple physiology emerged, suggesting a model (Figure 8) in which rule encoding, possibly through synaptic tagging, conditions and organizes the subsequent ripple activity that will develop during rest to consolidate memory. This would require the occurrence of AMPARM-dependent LTP at CA3 recurrent synapses, as the immobilization of AMPAR in CA3 in vivo during consolidation leads to a learning-dependent loss of ripple activity and to complete forgetting of the acquired rule. Thus, we put forward a new mechanism by which synaptic plasticity contributes to learning and memory. Controls for AMPAR X-linking strategies. Some antibodies against GluA2 subunits have been reported to modify AMPAR composition within several hours 24, a time window that may correspond to the consolidation phase of the DSA rule in the case of pre-learning injections. Thus, one could attribute the observed effects on consolidation to major changes in hippocampal connectivity/activity due to the replacement of GluA2-containing heteromers by GluA1 homomers. However, we observed a very similar effect on animal performance when intracerebral IgG injections were performed before or after DSA rule encoding (Figure 1d-f), a time at which the IgG-dependent changes in AMPAR composition might not yet have occurred. The same reasoning applies to the ripple recordings in Figure 3: the application of IgG before learning, in the absence of DSA training, did not impact the network's capacity to generate ripples. In fact, the impact of both AMPARM-blocking strategies appears to depend specifically on the actual cross-linking capacity (see the various control conditions for both the antibody- and neutravidin-based strategies) at the time of offline rule consolidation. A methodological issue regarding in vivo strategies and ripple detection is that local drug delivery affects the electrophysiological recordings because of the injected volume, an issue that was identified in our in situ experiments (Supplementary Figure 5). If occurring in vivo, a global decrease in LFP amplitude caused by tissue movements could potentially have affected our capacity to detect ripples after injection. First, we tried to minimize this effect by decreasing the injection speed (see Methods) and by temporally separating the injection from the recordings (pre-learning injections).
We also designed a number of control conditions to account for this effect: we injected animals with either saline or various monovalent or divalent antibodies, but also with IgG without submitting the animals to DSA learning (Figure 3). In none of these conditions was a change in ripple properties observed. Comparison between in vivo and in situ recordings. Another issue concerns the comparison of SWRs recorded in vivo and in vitro 6. Indeed, in our in vivo recordings we essentially characterized changes in the frequency and amplitude of oscillations recorded in the CA1 region after 150-250 Hz band-pass filtering (Figures 3 and 7), whereas in situ we focused on the occurrence frequency of CA3/CA1 sharp waves (SPW-Rs, Figures 4 and 5). Sharp waves are proposed to reflect the dendritic depolarization evoked by the synchronous activity of subgroups of excitatory afferents from the CA3 region, whereas the ripple oscillations are thought to be generated in the CA1 region in response to the sharp-wave-associated excitatory inputs 12. In the retained in situ recordings, the vast majority of SPW-Rs were co-detected in the CA3 and CA1 regions with a significant and stable delay (Figure 4). In CA1, 150-250 Hz band-pass filtering of SPW-Rs sometimes gave rise to a ripple-like oscillation (Supplementary Figure 3a), but this appeared highly unstable. Another issue is that in some CA3 stratum pyramidale recordings, 150-250 Hz ripples were contaminated by recorded spikes (Supplementary Figure 3c). Thus, the use of 150-250 Hz ripples in situ appears less reliable than the associated co-detected waves. In vivo, dendritic responses - waves - were present in some but not all recordings, as the implanted wires are separated by 200 µm along the dorso-ventral axis, thus radially to the CA1 region (Supplementary Figure 2a). However, because waves and ripples reflect the same intrinsic network events, we believe that the effects observed on in vivo ripple and in situ sharp-wave frequencies can be compared, especially since they share the same lack of sensitivity to AMPAR X-linking in basal conditions, while being impaired by the same treatment after DSA learning (in vivo, Figures 3 and 7) or LTP induction (in situ, Figure 5). Synaptic plasticity in memory phases. To our surprise, the effect of AMPAR X-linking on ripple activity is present only if the DSA rule has been encoded (Figure 3). This suggests that ripple activity differs when salient cognitive events have to be consolidated. One intriguing possibility is that the learning process directly or indirectly "conditions" or "imprints" the hippocampal network, especially the CA3→CA3 recurrent synapses, so that DSA-related ripples are efficiently generated offline (Figure 8). A likely mechanism by which this could happen is the occurrence of synaptic "tagging" - activity-dependent synaptic plasticity events - during online rule acquisition, which would then have to be "captured" during the offline consolidation phase of memory 4,5. It has previously been suggested that in vivo synaptic modifications protect these synaptic contacts from ripple-mediated synaptic downscaling during sleep, allowing cognitive map refinement 11. Several aspects of our findings need to be discussed in light of this conceptual framework, especially the fact that memory encoding seems to occur even when synaptic plasticity is blocked during rule acquisition.
Synaptic tagging is thought to rely on cellular and molecular mechanisms associated with synaptic plasticity induction and expression, which include AMPAR trafficking at the plasma membrane 1. However, we previously reported that AMPAR cross-linking does not impact LTP induction 14, as NMDA receptors remain functional. Thus, coincident neuronal activations may have activated the transduction cascades necessary for synaptic tagging even if no LTP was expressed. It remains quite surprising that we did not observe an impact of hippocampal AMPAR cross-linking on DSA rule encoding (Figure 1d left). This would suggest that the synaptic tagging associated with rule acquisition does not depend on activity-dependent changes in synaptic strength in the hippocampus, a structure that is assumed to bind all components of experience into one unique episodic memory 25. A first hypothesis is that rule encoding is independent of AMPARM-dependent Hebbian plasticity. Some reports have indeed described that spatial map reorganisation upon rule acquisition can be independent of NMDAR-dependent plasticity: Dupret and colleagues 26 injected rats with an NMDAR antagonist in order to interfere with their spatial memory. Learning performance was unaffected, but the animals failed to remember the newly learnt locations, suggesting that the newly acquired representations of goal locations - which, when replayed during sleep, predicted memory performance 18 - did not stabilize. A second explanation is that our cross-linking strategy did not perturb some hippocampal synaptic contacts crucial for rule encoding. Our two strategies target GluA2-containing AMPARs. Therefore, a number of excitatory synapses may have escaped the effect of cross-linking, such as excitatory inputs onto interneurons, which can be GluA2-independent 27. This is noteworthy because changes in pyramidal cell-interneuron coupling have been reported during spatial rule encoding 28. Thus, to obtain a complete and comprehensive view of the impact of AMPARM on learning-dependent ripple physiology, further experiments using cross-linking strategies targeting GluA1-containing AMPARs, or specific cell types - for example using conditional expression of BiRA under interneuron (IN) or pyramidal neuron (PN) promoters - will be necessary. Along the same line, we observed that DG→CA3 LTP was preserved in the presence of anti-GluA2 IgG, which could possibly support rule encoding and relay synaptic tagging broadly within the CA3 region. Further in vivo experiments using specific blockers of DG→CA3 plasticity, such as Rp-cAMP (Supplementary Figure 4), would allow deciphering the role of these particular connections in the encoding of the DSA rule in the hippocampus. Another possible explanation for this unexpected disconnection between learning ability and dHPC LTP blockade could reside in the resilience of the system. A recent study using another strategy to block postsynaptic LTP showed that even though CA1 plasticity was largely absent, and some of the learning-induced cellular rearrangements were lost, animals were still able to perform correctly 2. Alternatively, the rule may first be encoded in a brain region other than the dHPC, for example the mPFC. Peyrache and colleagues showed that, under very similar conditions of new rule learning in a Y maze, ripple activity was directed toward the reactivation of rule-related neuronal ensembles in the mPFC 8, opening the possibility that synaptic tagging was generated there.
Indeed, the replay of firing patterns in hippocampal neuron ensembles during sleep is thought to cause the gradual formation of stable representations in extra-hippocampal networks by enhancing connectivity between their elements 25. Thus, we can anticipate that pre-learning AMPAR X-linking experiments in the mPFC, by interfering with LTP-dependent synaptic tagging during encoding, would impair animal performance. Similar results would be expected with pre-rest injections, through the blockade of ripple-mediated generation of DSA rule representations 8. In our hands, complete pre-learning inactivation of the dHPC or mPFC through bilateral pre-learning injections of muscimol led to a prominent - above chance level - number of errors linked to stereotyped choices (data not shown). This suggests that both structures are necessary to build the cognitive representation of the DSA rule. Importantly, this phenotype was not observed upon IgG-based pre-learning AMPAR cross-linking in the dHPC, supporting the idea that no such major effect on dHPC physiology was associated with the procedure. Synaptic plasticity and memory maintenance. One interesting observation emerging from our data is that, in the absence of correct consolidation, the DSA rule is apparently completely forgotten, as if the animal were completely naive to the task (Figure 2). So far, experiments impairing ripple activity during sleep have been reported to degrade animal performance on the following days, but rather by slowing down behavioural improvement than by causing a complete reset 28. An intriguing possibility is that, in the absence of synaptic capture, synaptic tagging fades away, returning the neuronal network to the exact same "naive" state as on the previous day, as previously suggested by Norimoto and colleagues 11. Interestingly, this would open a time window during which a newly encoded memory would be amenable to erasure if one could act specifically on ripple physiology. Conclusion. This study brings a first piece of evidence that the consolidation of recently acquired memory depends on AMPARM in the hippocampus. Our results point to the importance of the CA3 region in this process. They fit within the more global framework of the synaptic tagging and capture hypothesis, which is increasingly discussed in terms of memory encoding/consolidation and of the awake/sleeping state of the animals. The importance of the cortico-hippocampal dialogue in this process is of fundamental interest, and the deciphering of its intimate mechanisms will certainly profit from the development and use of in vivo applicable molecular strategies interfering with plasticity with good temporal and spatial control. f: The error rate in VTE runs was analysed and reported for sessions #1 (left) and #5 (right) for the control and cross-link groups. Note that the rates were identical in session#1 but diverged significantly in session#5, suggesting memory impairment in the absence of AMPARM. ±2.65; DV -2.5/-2.0/-1.5). Injection of 250 nl of the product was performed manually by applying low but constant pressure on the syringe. The pipette was maintained in position within the target region for 5 minutes after the end of the injection to allow local diffusion of the injected product, then retracted at a slow speed. When combined with other surgical procedures, stereotaxic injection always preceded stereotaxic implantation. iii-Stereotaxic implantation. Stereotaxic implantation was performed on mice aged 6 to 10 weeks.
Various types of implants were used, all of which are detailed in the dedicated "Implanted materials" section, but the surgical procedures were common to all. Prior to implantation proper, the skull was prepared by briefly - 3 to 5 seconds - applying Peroxidase RED ACTIVATOR (Super-Bond, Sun Medical Co) to remove the periosteum. Single guide cannulas were manually lowered into the target region at a speed of approximately 20 µm/s (coordinates CA1: AP -1.95; ML ±2.25; DV -0.55; angle 30°; coordinates CA3: AP -2.35; ML ±2.65; DV -1.2). Electrodes were lowered into the target region using a micromanipulator at 1 µm/s for the last third of the descent, to reduce the tissue damage caused by the implantation (coordinates: AP -2.35; ML ±2.3; DV -1.7; glued guide cannula coordinates: AP -2.35; ML ±2.65; DV -1.2). Guide cannulas and electrodes were fixed with dental cement (Super-Bond, Sun Medical Co). After implantation, mice were housed alone to prevent damage to the implants. Implanted materials. i-Guide cannulas. We used stainless steel guide cannulas (Bilaney, 26 gauge, 1.5 mm length; Plastics One). Prior to surgery, guide cannulas were kept in alcohol to minimize the risk of bacterial contamination, and plugs were maintained on them at all times to avoid penetration of external material. When intracranial injections had to be combined with extracellular field recording, guides were glued directly onto the electrode connector and were obstructed with a metallic dummy cannula to avoid penetration of external material. Intracranial injection was performed on awake mice, either loosely held in the experimenter's hands (for short injections) or free to move inside their home cage. Injections were performed through injection cannulas (Internal Cannula FIS 2.5 mm guide, Bilaney; 0.5 mm projection) and via an automatic pump (Legato 101, Kd Scientific Inc.) that applied a constant pressure on two 1 µl Hamilton syringes (7101 KH), allowing the regulation of injection speed (antibodies: 100 nl/min; neutravidin: 50 nl/min). Pre-learning and pre-test injections were performed 1 hour before the beginning of the behavioural protocol; pre-rest injections were performed immediately after the last session of the behavioural protocol. -Antibody against GFP (2.9 mg/ml). This antibody is a divalent IgG-κ from murine clones 7.1 and 13.1 (11814450001, Roche). As murine neurons do not physiologically synthesize GFP, this antibody was used as a control for nonspecific antibody binding. iii-Others. -Neutravidin or NA. Texas Red-conjugated neutravidin (8.33 µM; Invitrogen, A2665) was used to cross-link AMPA receptors in the AP-GluA2 KI mouse model. 500 nl of neutravidin solution per injection site were administered via intracranial injection in the awake, freely moving mouse. -Monomeric streptavidin or mSA was produced and conjugated to STAR 635P (Abberior, ST635P) using N-hydroxysuccinimide ester-activated fluorophore coupling as previously described. 500 nl of mSA solution (concentration: 8.33 µM) per injection site were administered via intracranial injection in the awake, freely moving mouse. -Saline physiological solution. Saline physiological solution was used as a control for cross-linking in the AP-GluA2 KI mouse model. 500 nl of saline solution per injection site were administered via intracranial injection in the awake, freely moving mouse. Behavioural protocol. Delayed Spatial Alternation (DSA) task. The DSA task is a delayed non-matching-to-place task used to assess spatial navigation and cognitive functions in rodents 16.
i-Food restriction. Food restriction was required to ensure the mice's motivation. Mice were weighed right before food withdrawal, and this weight was used to calculate the 80%-of-initial-weight limit that was fixed for protocol termination. On the first day of restriction, mice were fed with Perles pasta of the same type as that used to bait the maze during the behavioural task, in order to habituate them to the new food. On subsequent days, mice were fed at the end of all behavioural manipulations with 2-3 g of powdered nutrient-enriched food, in order to maintain them at about 85% of their initial weight. ii-Materials. A custom-made, semi-transparent white PVC Y maze was used for the task. All three arms are identical (40 cm length, 8 cm width, 15 cm high walls), except for an additional closable rectangular chamber (15 cm x 25 cm) bridged to the "starting arm". Arms are spaced by a 120° angle. A small opaque container was positioned at the end of each "goal arm" to serve as a food well for reward delivery. Environmental cues are positioned on the room walls surrounding the maze. In situ electrophysiology. Stimulations were delivered through a stimulation electrode (FHC, Phymep, France) and an external A.M.P.I. Iso-flex stimulator (Scop Pro, France). Synaptic field responses were recorded in the strata radiata (sr) of CA1 and CA3 to measure evoked fEPSPs and to test for the presence of propagated SPW-Rs. Depending on the configuration, stimulation electrodes were placed in the stratum radiatum of CA1 or CA3 to stimulate CA3→CA1 and CA3→CA3 axons, respectively, and to deliver basal (0.1 Hz) or high-frequency stimulation (HFS, 100 Hz, 1 s trains repeated 3 times every 30 s) to induce long-term potentiation (LTP). When stimulating CA3 sr, a train of 10 stimulations at 1 Hz was first applied to test for possible contamination by DG mossy fibres, which display frequency facilitation. Perfusion and histology. Mice were anaesthetized with a mix of ketamine and xylazine (100 mg/20 mg per kg) diluted in NaCl; 10 µl of solution per gram of body weight were administered via intraperitoneal injection. Perfusion with paraformaldehyde (PFA, concentration: 4%) was performed on the anaesthetized mouse, and brains were initially collected without being removed from the skull. After 48 h of storage in PFA at 4°C, implants and the skull were removed and the brain was washed three times in PBS solution (concentration: 1%) and then stored in PBS for 24-72 hours at 4°C. Slicing was performed with a vibratome (Leica VT1200s). Coronal slices of 60 µm thickness were collected at a speed of 30-50 mm/s from the regions of interest and stored in PBS for 24 h before being mounted on slides and covered with Fluoromount-G (complemented with DAPI for cell nuclei staining, Thermofisher Scientific). Image acquisition of slides was performed with an epifluorescence microscope (Nikon Eclipse NI-U) coupled to an illumination system (Intensilight C-HGFI, Nikon) and a camera (Zyla sCMOS, Andor Technology, Oxford Instruments). Title: Study of the Link between Synaptic Plasticity and Behavioural Adaptation. Abstract: Memory can be defined as the ability to store information within the brain in a way that allows retrieval of such information and promotes behavioural adaptation. At the biological level, memorization is a complex mechanism which demands coordinated brain-wide interactions and is subject to progressive decline upon natural aging. It is a shared idea that understanding the mechanisms regulating the formation and storage of memories could help in developing targeted approaches for memory deficits.
Different theories have been formulated to describe memory consolidation in an integrated system: 1) the engram theory postulates the existence of neuronal assemblies persistently modified by experience and memory consolidation; 2) the synaptic plasticity in memory theory postulates that these persistent modifications rely on synaptic plasticity mechanisms; 3) the system consolidation theory postulates that engrams are, in fact, distributed across an interconnected network encompassing multiple brain areas. In each of these theories, memory consolidation is a multi-step process, and the precise identification of whether, where, when and how a specific modification is produced inside this complex system is a hot topic in research. For each of these questions, the evidence collected has directed the focus toward specific lines of research. The hippocampus is considered to be the main site of recent memory consolidation, and the prefrontal cortex has progressively emerged as an important site for remote consolidation, making the communication between these two areas a main centre of interest. During sleep, the lack of additional experiences favours memory consolidation, and the coordinated reactivation of neuronal assemblies of both the hippocampus and neocortical areas during fast-oscillatory hippocampal events called Sharp Wave-Ripples (SPW-Rs) is now taken as the principal hallmark of memory consolidation. At the molecular level, long-term potentiation (LTP) is the most studied mechanism for the formation of long-lasting memories. I studied the interplay between the dorsal hippocampus (dHPC) and two neocortical areas in the context of the consolidation of the rule for successful completion of a Delayed Spatial Alternation (DSA) task. We aimed to answer the following questions: 1) Do the medial prefrontal cortex (mPFC) and the posterior parietal cortex (PPC) participate in the engram sustaining the acquisition of this rule? Is their participation equal, or are they quantitatively or qualitatively differentially solicited? 2) Both areas display signs of hippocampal modulation, in the form of coherent oscillations and coordinated neuronal firing patterns during hippocampal SPW-Rs; how is this modulation modified by the consolidation of this rule? 3) Does preventing LTP within the neocortex affect the consolidation of this rule? To answer these questions, we used mice implanted with single electrodes in the dHPC, mPFC and PPC to record Local Field Potential (LFP) and single-neuron activity both during the task and during a three-hour-long rest period at the end of each behavioural training day. We observed that, during behavioural training, PPC neurons were mostly engaged in navigation, while mPFC neurons engaged with the more cognitive features of the task. Surprisingly, mPFC neurons, but not PPC neurons, exhibited an increase in positive modulation during hippocampal SPW-Rs following learning, with a high proportion of the mPFC neurons active during the behavioural protocol being positively modulated around SPW-R peaks during sleep. Lastly, preliminary data showed that the prevention of LTP within the mPFC during the sleep period allocated to memory consolidation does not affect the behavioural performance on the following day, but might affect the modulation of mPFC neurons around the SPW-R peak.
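As an illustration of the modulation analysis mentioned in this abstract, the peri-event z-score of a single unit's firing around SPW-R peaks can be sketched as below. This is a schematic Python example, assuming spike times and ripple-peak times in seconds; the ±500 ms window and the ±25 ms comparison window follow the figure legends of this dissertation, but the bin size and the baseline definition are illustrative assumptions rather than the exact analysis choices used here.

import numpy as np

def peri_event_zscore(spike_times, event_times, window=0.5, bin_s=0.01):
    # Peri-event time histogram of one unit around SPW-R peaks,
    # z-scored against the mean and SD across the whole window.
    edges = np.arange(-window, window + bin_s, bin_s)
    counts = np.zeros(edges.size - 1)
    for t0 in event_times:
        counts += np.histogram(spike_times - t0, bins=edges)[0]
    rate = counts / (len(event_times) * bin_s)   # firing rate per bin (Hz)
    z = (rate - rate.mean()) / max(rate.std(), 1e-12)
    centers = edges[:-1] + bin_s / 2.0
    return centers, z

def peak_modulation(centers, z, half_width=0.025):
    # Mean z-scored rate within +/- 25 ms of the ripple peak, i.e. the
    # per-neuron quantity plotted in the figures.
    return z[np.abs(centers) <= half_width].mean()

# Toy usage: a unit firing preferentially at ripple peaks should yield
# a clearly positive modulation value.
rng = np.random.default_rng(1)
events = np.sort(rng.uniform(0.0, 600.0, 200))       # 200 ripple peaks
spikes = np.sort(np.concatenate([
    rng.uniform(0.0, 600.0, 2000),                   # background firing
    events + rng.normal(0.0, 0.01, events.size),     # ripple-locked spikes
]))
c, z = peri_event_zscore(spikes, events)
print(round(peak_modulation(c, z), 2))

Comparing this peak value against a shuffled or baseline distribution is one common way to obtain populations such as the activated, inhibited and unmodulated groups described in the results.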
3.2.1 Post-synaptic Long-Term Potentiation at Excitatory Synapses
3.2.2 AMPA Receptor Trafficking at Synapses and LTP Expression
3.2.3 Synaptic Plasticity in the Neocortex
3.3 Mechanisms for System Consolidation: Tagging and Capture
STDP - Spike Timing-Dependent Plasticity
SWS - Slow Wave Sleep
TeA - Temporal Associative cortex
Figure 1. Long-term memory systems. A. Classification of memory systems proposed by Schacter and Tulving, 1994. B. Classification of memory systems proposed by Henke, 2010 (adapted from Henke, 2010).
Figure 2. The Delayed Spatial Alternation (DSA) task. A. Scheme representing the protocol of the DSA task by showing the principle of alternation in the first two trials. B. Picture of the Y-shaped maze and surrounding visual cues used for the DSA task.
Figure 3. Memory engrams. A. Schematic representation of the hypothesised network encompassing the hippocampus, the medial prefrontal cortex and the posterior parietal cortex. B. Schematic representation of the engram (red dots); red lines signify stronger network communication; two unspecified regions that might relay the excitatory drive are also represented.
The mPFC is involved in numerous brain functions, including emotion, reward, motivation and cognition. It is anatomically subdivided into 4 regions, from dorsal-most to ventral-most: medial precentral cortex (PrCm), anterior cingulate cortex (ACC), prelimbic cortex (PL) and infralimbic cortex (IL) (Van Eden and Uylings).
Figure 7. Post-synaptic long-term potentiation at excitatory synapses. A. NMDA receptor opening results in an increase in post-synaptic calcium concentration. B. CaMKII translocation to the post-synaptic site and its binding to calcium initiate the transduction cascade leading to the late phases of LTP expression. C. AMPA receptor trafficking from the vesicular pool to the cell membrane, and laterally from the peri-synaptic space to the post-synaptic density, increases, resulting in an increase in the number of AMPA receptors available at the post-synaptic density. D. Rearrangement of the local cytoskeleton results in a spine-size increase as a result of late LTP expression.
At the time of the start of my PhD, the laboratory was completing the experiments described in El Oussini et al. (Annex to this dissertation) and my work was developed as a follow-up to those results, to which I contributed in their final stages. The paper focuses on the manipulation of AMPA receptor mobility within the hippocampus, exploiting a strategy that had already been shown to affect LTP expression, and describes its effects on the behavioural performance of mice trained in the Delayed Spatial Alternation (DSA) task and on Sharp Wave-Ripple physiology.
To summarize, injection of an antibody (IgG) directed against the GluA2 subunit of AMPA receptors prevented memory consolidation both when injected 1 hour before the beginning of the first day of the behavioural protocol and when injected at the end of it, immediately preceding a rest period whose first 3 hours were electrophysiologically recorded; in both cases this resulted in impaired behavioural performance during the second day of the protocol. Injection of the IgG did not affect the behavioural performance on the day of injection, whether injected 1 hour before the beginning of the first day of the protocol or 1 hour before the beginning of the second day. Electrophysiological recordings of mice injected with the IgG 1 hour before the beginning of the first day of the protocol revealed a decrease in the occurrence rate of SPW-Rs recorded in the CA1 region of the hippocampus during Slow Wave Sleep (SWS) phases of the rest period of the same day. Similar results were obtained with a different strategy involving a knock-in (KI) mouse strain expressing an AP-tagged GluA2 subunit of AMPA receptors, by injecting tetravalent neutravidin in the CA3 region of the hippocampus at the end of the first day of the behavioural protocol. All procedures were validated by the ethical committee for animal experimentation of the Bordeaux Universities (CE50; PIV-EXPE APAFIS 18507-201901118522837; A1 APAFIS 4552 2016031019009163A; Magendie and PIV-EXPE 13515-2018021314415739). ) for odour saturation, to prevent navigation following the trace of body odour from previous choices. On the first day of training, 4 sessions were realized, spaced by 30 minutes (S1-S2 and S3-S4) or 1 hour (S2-S3); on the 2 following days, 2 sessions per day were realized, spaced by 30 minutes (see Fig.). Mice trained in the Delayed Spatial Alternation task reached the learning criterion within the first day of the protocol. 18 C57BL6/J wild-type mice were unilaterally implanted with electrode bundles in three regions - dorsal hippocampus (dHPC), medial prefrontal cortex (mPFC) and posterior parietal cortex (PPC) - for electrophysiological recordings both during the task and during a total of five 3-hour-long rest periods, one at the end of each day of the behavioural protocol and two baselines on the two preceding days (see Materials and Methods). During the three days of the behavioural protocol, mice underwent training in the Delayed Spatial Alternation (DSA) task as previously described (El Oussini et al., 2023, Annexes). S4: Learners 0.3 ± 0.05, Slow Learners 1.2 ± 0.2, No Learners 1.3 ± 0.3, t-test: p = 0.00012 Learners vs No Learners, p = 0.00002 Learners vs Slow Learners, p = 0.69854 Slow Learners vs No Learners; S8: Learners 0.4 ± 0.09, Slow Learners 0.5 ± 0.1, No Learners 1.5 ± 0.4, t-test: p = 0.00005 Learners vs No Learners. Figure 1. Mice trained in the DSA task segregate into 3 groups based on behavioural performance. A. Timeline of the behavioural sessions over days. B. Schema of the DSA task principle: at the beginning of each session the mouse faces a forced trial in a given direction; on subsequent trials the mouse faces free choices and has to alternate between left and right choices in order to collect the reward; all runs are separated by a 30 s interval, characterized by retention within the starting arm; error runs are repeated a maximum of 5 times. C. Example of score charts for S1 (left) and S4 (right) for a mouse in the Learners group. D. Behavioural performance plotted as the mean number of errors per trial within each session. E.
Column plots representing the mean number of errors per trial in S1, S4 and S8 for the Learners group. F. Percentage distribution of first correct choices and of the different ranks of errors for the Learners (left), Slow Learners (centre) and No Learners (right) groups. G. Average percentage of VTE per run within each session for the three groups. H. Percentage of right-arm choices within each session for the three groups. Figure 2. Sleep and SPW-R properties are stable across days and groups. A. Pictures of fixed tissue showing the wire bundle implantation in the mPFC and PPC. Example of the data analysis for the identification of SWS periods and SPW-R detection. The upper band of the first panel shows periods of wake (black), SWS (light blue) and REM (red) identified from the combination of the three other bands, showing respectively, from top to bottom: a heatmap of the power spectrogram of the LFP trace for one dHPC channel, plotted between 0 and 22 Hz oscillatory frequency; the ratio of the theta/delta frequency bands of the power spectrogram; and the motion index computed from the video recording of the resting session. The second panel shows two examples of the unreferenced LFP trace of the same hippocampal channel (upper trace, black) during a portion of the time window captured in the power spectrogram, accompanied by the filtered trace of the same channel in the SPW-R band (100-250 Hz, centre trace, blue), with SPW-Rs highlighted by black arrowheads. B. Sleep quality evaluation across days and groups: total time spent in SWS (left), mean length of SWS periods (centre) and total time spent in REM sleep (right). C. SPW-R characterization across days and groups: mean intrinsic frequency (left), mean amplitude (centre) and mean event duration of SPW-Rs. D. SPW-R occurrence rate across days and groups. E. Z-scored mean firing rate of dHPC neurons around the SPW-R peak. Figure 3. Neocortical single-unit activity coordinates with hippocampal SPW-R peaks in mice from the Learners group. A. From left to right, heatmap (upper panel) and mean z-scored firing frequency (lower panel) of single mPFC neurons around SPW-R peaks during SWS, recorded during rest periods from D-1 to D3 for the Learners group; event window: 500 ms before and after the peak of SPW-R events (D-1: 82, D1: 102, D2: 93 and D3: 77 single units). B. Individual plot points of the z-scored firing frequency of mPFC neurons around the peak of SPW-Rs (25 ms before and after the peak, left panel) and cumulative distribution of the same values (right panel). C. Mean z-scored firing frequency of mPFC neurons around the peak of SPW-Rs (25 ms before and after the peak), divided into the three identified populations of significantly activated neurons (red), significantly inhibited neurons (blue) and neurons not significantly different from baseline (grey), across days and groups. D. Individual plot points of the z-scored firing frequency around the peak of SPW-Rs (25 ms before and after the peak) of mPFC neurons detected on both D-1 and D1, and scatterplot of the same values. E-H. Same analysis performed for PPC neurons (D-1: 46, D1: 44, D2: 32 and D3: 35 single units). Figure 4. Neocortical neurons are modulated by task features during DSA training. A-B. Example of an mPFC neuron modulated by the behavioural attribute "success vs error". A. 2D firing map of the neuron's spiking within the maze during run-time. B.
Plot of the linearized firing map for the same neuron, highlighting the different firing patterns on success vs error runs. C. Individual plot points of the z-scored firing frequency around the SPW-R peak (25 ms before and after the peak of SPW-Rs) of behaviourally relevant mPFC neurons detected on both D-1 and D1. D-E. Example of a PPC neuron modulated by the behavioural attribute "right-turn vs left-turn" and displaying a firing preference for the end arm. D. 2D firing map of the neuron's spiking within the maze during run-time. E. Plot of the linearized firing map for the same neuron, highlighting the different firing patterns on right-turn vs left-turn runs. F. Individual plot points of the z-scored firing frequency around the SPW-R peak (25 ms before and after the peak of SPW-Rs) of behaviourally relevant PPC neurons detected on both D-1 and D1. Figure 5. (Preliminary) Blockade of LTP in the mPFC during the resting period does not affect mice's behavioural performance on the next day. A. Column plots representing the mean number of errors per trial in S1, S4 and S5 for neutravidin-treated mice (left) and saline-treated mice (right). B. Left panel: coordinates of cannula implantation on the Paxinos Mouse Brain Atlas; right panel: histology of BirA-GFP infection (green) and neutravidin injection (red). C. Individual plot points (left) and cumulative distribution (right) of the z-scored firing frequency around the SPW-R peak (25 ms before and after the peak of SPW-Rs) of mPFC neurons from the neutravidin-treated group detected on both D-1 and D1. D. Individual plot points (left) and cumulative distribution (right) of the z-scored firing frequency around the SPW-R peak (25 ms before and after the peak of SPW-Rs) of mPFC neurons from the saline-treated group detected on both D-1 and D1. Figure 1: AMPAR surface mobility in the dorsal hippocampus is necessary for consolidation of a delayed spatial alternation rule. a: Schematic of the DSA (Delayed Spatial Alternation) rule that animals have to learn. In each session, after a first forced choice, nine food-rewarded positions are set, alternately in the right or left ending arm. Up to five error runs are permitted per reward position before the animal is forced to enter the rewarded arm. In between runs, the animal is placed in its home cage for 30 seconds. After 10 training sessions (session#1 to session#10) distributed within a week, animals alternate almost perfectly (right panels). Figure 2: Immobilization of AMPAR in the dorsal hippocampus led to complete forgetting of the acquired DSA rule. a: DSA runs can be separated into two groups according to animal hesitation in the middle of the maze. As defined by 17, Vicarious Trial and Error behaviour indicates the cognitive engagement of the mouse in the task. b: For the pre-learning injection cohorts, the numbers of VTE and no-VTE runs were analysed along the DSA sessions. Note that upon IgG injections, the number of no-VTE runs in session#5 was similar to that in session#1. c: Cumulative single-animal data for no-VTE run numbers in session#1 and session#5. Note that the curves are superimposable following pre-learning IgG injections. d: Choice acuteness during VTE and no-VTE runs was analysed along the DSA sessions in the pre-learning injection cohorts.
The lack of animal performance in session#5 (see Figure 1d) is associated with a strong decrease in VTE run accuracy, which returned to its initial value. Figure 3: Immobilization of AMPAR in the dorsal hippocampus led to learning-dependent impairment of ripple activities. a: During resting periods in the home cage, animal tracking and dHPC electrophysiological LFP recordings were performed to temporally define three animal states: the awake state is defined by animal mobility, whereas resting states were separated into REM (Rapid Eye Movement) and SWS (Slow Wave Sleep) according to the Delta/Theta ratio of dHPC LFPs. LFP signals were also filtered at 150-250 Hz to extract ripples, the frequency of which is closely correlated with SWS periods 6. b: Bilateral dHPC LFPs were recorded during three-hour resting periods before (habituation, D-1) or after DSA encoding (after session#4, D1). Typical examples of ripple frequency in pre-learning Fab- (top) and IgG-injected (middle) animals, or in IgG-injected animals that were not subjected to DSA learning (bottom). Note the decrease in SWS-ripple frequency on D1 in the IgG-injected DSA-trained animal. Figure 4: Immobilization of AMPAR in the dorsal hippocampus did not affect spontaneous ripple activities in naive in situ preparations. a: Spontaneous sharp-wave events (SPW-Rs) are recorded in fresh in situ hippocampal preparations (see Methods) using extracellular field electrodes. b: Examples of recorded events. c: Simultaneously recorded events showed a significant delay, and correlated amplitudes, denoting their propagation from CA3 to the CA1 area. Figure 5: Interplay between plasticity induction, spontaneous SPW-Rs and AMPAR mobility in the hippocampus in situ. a: Spontaneous sharp wave-ripples were recorded in hippocampal acute slices. After stabilisation, HFS was applied in the stratum radiatum of CA1 (CA1-sr) to induce LTP at CA3→CA1 synapses (see Supplementary Figure 4). No major impact of CA1-HFS on SPW-R frequency or amplitude was observed. Grey zone: time period for baseline calculation; yellow zone: time after HFS application. Figure 6: An alternative AMPAR X-linking strategy allowing better targeting of the CA3 area also induced complete forgetting of the DSA rule. a: We recently developed a new strategy for AMPAR X-linking. In knock-in mice expressing endogenous AP-tagged GluA2 AMPAR subunits, the subunits can be biotinylated in the presence of BiRA ER and, once exported to the cell surface, can be immobilized in the presence of external neutravidin (NA, cross-linking condition). b: In vivo pharmacological experiments similar to those in Figure 1d were performed, combining early stereotaxic dHPC injections of AAV-BiRA-GFP or AAV-GFP with pre-rest injections of saline, mSA or NA. c: Histological controls for the mSA and NA staining on top of the AAV-GFP expression. The combination of both injections better restricts AMPAR immobilization to the CA3 area. d: Mean error rates were compared between session#1 and session#5 to evaluate the retention of the DSA rule upon the various pharmacological treatments (as indicated by the colour coding). e: As in Figure 2, we report the number of no-VTE runs observed in sessions #1 and #5. The number of no-VTE runs was similar between sessions #1 and #5 in the KI/BiRA/NA condition (right), pointing to memory forgetting, whereas it differed in the control group (left).
Figure 7: An alternative AMPAR X-linking strategy allowing better targeting of the CA3 area affected SWS ripple activity. a: Same presentation as in Figure 3b. Bilateral dHPC LFPs were recorded during three-hour resting periods before (habituation, D-1) or after DSA encoding (after session#4, D1). Typical examples of ripple frequency in control (top and bottom) and X-linking (middle) conditions. Note the decrease in SWS-ripple frequency on D1 in the KI-BiRA-NA DSA-trained animal. Figure 8: Working model. Model proposed for the action of AMPAR X-linking on DSA memory consolidation. DSA rule encoding leads to synaptic tagging in the cortical areas, including the mPFC. Hippocampal remapping, which possibly occurs, would remain insensitive to GluA2-dependent AMPAR immobilization (EX-IN CA1→CA1 synapses or DG→CA3 synapses). During SWS, the ripples necessary for synaptic capture in the cortical areas (through replay-dependent reactivation of neuronal ensembles) would be impaired, as plasticity at CA3→CA3 recurrents is impaired.
Table of contents
List of Abbreviations
Premise
Introduction
Chapter 1: Memory
1.1 Phases of Memory
1.2 Types of Memory
1.3 Memory Investigation in Rodents
1.3.1 The Delayed Spatial Alternation Task
Chapter 2: Anatomical Substrates for Memory
2.1 The Concept of Memory Engram
2.2 The Hippocampus
2.2.1 Anatomy of the Hippocampus
2.2.1.a General Organisation
2.2.1.b Neuronal Population
2.2.1.c Internal Connectivity
2.2.1.d Main Afferent and Efferent Pathways
2.2.2 Hippocampus and Memory
2.3 The Medial Prefrontal Cortex
2.3.1 Anatomy and Connectivity of the Medial Prefrontal Cortex
2.3.2 The Medial Prefrontal Cortex and Memory
Acknowledgements
RESULTS
Supplementary Figures
Supplementary Figure 1
DISCUSSION
Preliminary CaMPARI experiments photo-converted a reasonable number of neurons after illumination during all four sessions performed on day 1 of the protocol, even though we have not yet performed any quantification. We plan to test the membrane properties of photo-converted and non-photo-converted neurons at rest and in response to stimulation from hippocampal or local projections, to collect evidence of strengthened coupling between the hippocampus and the mPFC and within the neocortical engram. This approach is still limited by the survival time of the photo-conversion (24 h) and by its proportionality to the firing rate of the neuron, which would mask the contribution of slow-spiking neurons. Cal-Light is a photo- and calcium-sensitive cocktail of proteins for the induction of transcription in the presence of blue light and calcium transients (Lee et al., "A calcium- and light-gated switch to induce gene expression in activated neurons"). Again, we only performed preliminary experiments to check whether our illumination pattern was effective in inducing reporter gene expression, but we plan to use this strategy to induce the expression of opsins in neocortical engram neurons and to manipulate their firing pattern to either strengthen or weaken connections within the local engram network. Conclusions. The results presented here are mostly correlative but indirectly support a network model for memory consolidation in which the hippocampus acts as the master modulator of other brain areas through the excitatory drive provided by sharp wave-ripples. This view is widely shared by the scientific community.
We could not demonstrate the implication of synaptic plasticity mechanisms in the modulation of the activity of neocortical neurons, but we will modify the conditions of the experiment in order to test other hypotheses, notably whether consolidation takes place on a delayed time scale in the neocortex. We expected the posterior parietal cortex to play a more prominent role in the task; instead, our results relegate its role to navigation, with little to no hint of an engagement in cognitive processing. Other, more associative types of tasks will probably yield more interesting results. Future experiments will focus on assessing and manipulating synaptic plasticity in neocortical engram neurons both in vivo, with the AP-GluA2 KI strategy, and in vitro, through the CaMPARI and Cal-Light optogenetic strategies. ANNEXES. CA3 hippocampal synaptic plasticity supports ripple physiology during memory consolidation. Hajer El Oussini 1#, Chun-Lei Zhang 1#, Urielle François 1#, Cecilia Castelli 1, Aurélie Lampin-Saint-Amaux. Corresponding author: [email protected]. Abstract: Consolidation of recent memory depends on hippocampal activities during the resting periods that immediately follow memory encoding. There, slow wave sleep phases appear as privileged periods for memory consolidation, as they host ripple activities, fast oscillations generated within the hippocampus whose inactivation leads to memory impairment. While a strong correlation exists between these replays of recent experience and the persistence of behavioural adaptations, the mobilisation, localization and importance of synaptic plasticity events in this process are largely unknown. To address this issue, we used cell-surface AMPAR immobilisation to block post-synaptic LTP within the hippocampal region at various steps of the memory process. 1- Our results show that hippocampal synaptic plasticity is engaged during consolidation but is dispensable during encoding or recall. AMPAR mobility in the CA3 area is necessary for memory consolidation. Based on our in situ data, we aimed to restrict AMPAR cross-linking to the CA3 area, to evaluate whether impairing plasticity only in this region would be sufficient to impair memory consolidation and ripple physiology. However, the specific antibody-based AMPAR cross-linking strategy lacks spatial and temporal resolution: indeed, to maximize efficiency, multiple in vivo injections were performed along a dorso-ventral axis to cover most of the dHPC 14. In addition, secondary, unwanted effects of anti-GluA2 antibodies on AMPAR composition 24 may be present that can lead to misinterpretation of the data (see Discussion). Thus, we used a recently developed approach to cross-link endogenous GluA2-containing AMPARs using biotin/streptavidin complexes 15 (Figure 6a).
In knock-in mice expressing AP-tagged GluA2 subunits, the presence of an exogenous enzyme - BiRA ER, brought by viral infection - allows the biotinylation of GluA2-containing AMPARs, which can then be cross-linked by tetravalent streptavidin added to the extracellular space (Figure 6a). This cross-linking approach has been validated in vitro and in vivo 15 and, among other advantages, provides improved spatial resolution, as it combines viral expression with drug delivery through an intracerebral cannula (Figure 6b). We tested our capacity to target the CA3 area by infusing NA-TexasRed (red-tagged tetravalent neutravidin) through cannulas implanted above the CA3 regions of BiRA-expressing mice (Figure 6c). Indeed, red labelling was almost completely restricted to the CA3 region, within a subpart of the green-expressing region (Figure 6c). We then tested the capacity of mice to retain the DSA rule under various control and CA3 cross-linking conditions. Because the time course of NA action is not yet known, we favoured pre-rest injections (Figure 6b), and, as GluA2 KI animals are slow to establish alternating behaviour, we pooled sessions #1-2 and #5-6 to obtain more robust behavioural outcomes. When compared to sessions #1-2 - a time point at which no learning is achieved - the error rate of control animals was significantly lower in sessions #5-6 (Figure 6d), indicating that encoding and consolidation had been successfully achieved. In contrast, error rates at these two time points remained close to random values in X-linking conditions (Figure 6d), a phenotype again accompanied by an apparent forgetting of the DSA rule, as the accuracy of VTE runs, which improved in control mice, did not show any evolution in the X-linking conditions (Figure 6e). b: Top: same presentation as in a, for HFS application in CA3-sr. Note the increase in SPW-R frequency and amplitude after HFS application. Bottom left: typical example of the effect of CA3 HFS on evoked CA3 responses and on the frequency of spontaneous SPW-Rs. A significant delay exists between synaptic potentiation (evoked fEPSCs, middle; SPW-R amplitude, right) and the SPW-R frequency increase, which can be better appreciated in the time courses expressed in minutes. Supplementary figures. Sup. Figure 1: dHPC AMPAR X-linking preserves DSA rule encoding. Sup. Figure 5: The AMPAR X-linking effect on SWR amplitude is due to the injection procedure. We compared the local effects of anti-GluA2 antibodies on SWR amplitude in experiments in which the antibody was introduced via the recording pipette (one-pipette configuration) or via a pipette different from the recording one (two-pipette configuration). As can be seen, the decrease in SWR amplitude was due to the positive pressure applied in the pipette, which probably locally displaced the tissue. Thus, we conclude, as previously observed 14, that AMPAR X-linking does not affect basal transmission and thus leaves spontaneous SWRs unaffected. Same presentation as
Materials and Methods. Biological models. Experiments in this manuscript were conducted on 1.5- to 4-month-old male mice belonging to two strains: C57BL6/J wild-type and C57BL6/J transgenic AP-GluA2 knock-in (KI, maintained on a C57BL6/J background) strains. Mice were kept on a 12-hour light/dark cycle and provided with ad libitum food and water, except for the food restriction associated with behavioural testing (see below). Mice were housed with 3-5 littermates except when the protocol demanded otherwise.
The experimental design and all procedures were in accordance with the European Guide for the care and use of laboratory animals. The AP-GluA2 KI strain was developed and validated as a mouse model for AMPA receptor mobility in 15. AP-GluA2 KI mice are similar to wild type C57BL6/J mice in terms of weight, size, growth and fertility, but also for tested cognitive abilities 15. At the genetic level, this strain presents a substitution of the endogenous GluA2 subunit of the AMPA receptor by a genetically modified one bearing an AP (Acceptor Peptide) tag on the extracellular domain of the subunit. In the presence of the BirA ligase enzyme, which is not endogenously expressed, the AP can bind biotin, which is naturally present in the murine brain. Thus, expression of AMPA receptors bearing biotinylated GluA2 subunits is restricted to neurons in which BirA ligase has been introduced by viral transfection, allowing targeted AMPAR cross-linking. Indeed, the presence of extracellular tetrameric neutravidin following intracranial administration leads to immobilization of AMPA receptors in the synaptic and peri-synaptic space (see 15).

Surgery

Various surgery protocols were performed depending on the aim of the procedure, divided into two major subgroups: stereotaxic injections and stereotaxic implantations. They shared some common steps, listed below. Surgery protocols were applied identically to both mouse strains.

i- Common surgery procedures

Mice were anaesthetised through exposure to the anaesthetic gas Isoflurane (4% mixed with air) for 4 minutes, and anaesthesia was maintained throughout the surgery with isoflurane 2% mixed with air. Mice were positioned in the stereotaxic apparatus (David Kopf Instruments) on a heating pad and received a subcutaneous injection of Buprenorphine (100μl, 0.1mg/kg) and a local injection of Lidocaine (100μl, 0.4mg/kg) for analgesia. The scalp was rinsed with Betadine to prevent infections. After incision and opening of the scalp, the Bregma and Lambda points were identified in order to locate the region of interest using atlas coordinates (Paxinos). Finally, sutures were applied to close the incision, and mice were subcutaneously injected with an analgesic agent (Carprofen, 100µl, 4mg/kg), fed with powdered nutrient-enriched food and left to recover in a recovery cage positioned on a heating pad for 30 to 120 minutes. A post-surgery care routine was observed for 2-6 days following the surgery, during which the weight and general condition of the mice were monitored and analgesic drugs were administered if needed.

ii- Stereotaxic injection

ii- Recording Electrodes

For hippocampal ripple recordings, bundles of Nichrome wires (diameter: 13µm, Sandvik Kanthal) connected to an 18-position male connector (nano 18 positions 2 guides, ISC-DISTREL SA, Omnetics) were passed through a guide cannula (see Supplementary Figure 2) to protect them from damage and spreading while entering the brain.

Chemicals

i- Viral vectors

All viral vectors used for the experiments described in the Results section are adeno-associated viruses (AAV), and their engineering is detailed in Getz et al. 2021. Production was assured either by the viral core facility of the Bordeaux Neurocampus IMN or by Charité Universitätsmedizin Berlin, or viral vectors were ordered from Addgene. All viruses were stored at -80°C for long-term storage, kept at 4°C during surgery preparation and injected at room temperature.
250nl of viral vector solution per injection site were administered by stereotaxic injection during surgery.

- pAAV9a-pSyn-BirA-ER-IRES-eGFP (5.6 x 10^13 gcp/ml, IMN). The pSyn promoter drives expression of the BirA enzyme in all neuronal types without distinction. BirA ligase expression promotes biotinylation of the extracellular portion of the GluA2 subunit of AMPA receptors, thus enabling AMPA receptor cross-linking in the presence of neutravidin. eGFP is used as a tag to identify neurons expressing the enzyme.

- pAAV9a-pSyn-IRES-eGFP (1.8 x 10^13 gcp/ml, IMN). The lack of the BirA ligase coding sequence makes this viral vector a control for un-catalysed biotin binding to GluA2 subunits bearing the AP tag.

ii- Antibodies

The production and conservation of the antibodies used for the experiments detailed in the Results section of this manuscript are detailed in 14. All antibodies were stored at -80°C for long-term storage, kept at 4°C for a maximum of 1 week before injection and injected at room temperature. 500nl of antibody solution per injection site were administered via intracranial injection in the awake, freely moving mouse.

- Antibody against the GluA2 subunit of AMPA receptors (clone: 15F1) (2.9mg/ml). This antibody is a monoclonal divalent IgG-κ directed against the extracellular domain of the GluA2 subunit of AMPA receptors. Its divalent nature allows binding of 2 target GluA2 subunits at the same time, therefore promoting AMPA receptor cross-linking. In vitro, a washout time of 8 hours due to internalization of clustered receptors has been observed.

- Fragment Antigen-Binding (Fab) (2.9mg/ml). The antigen-binding portion of the antibody directed against GluA2 subunits was isolated and used as a monovalent control for cross-linking.

Recording was realized through an infrared camera (Basler USB camera, ac1920-155um, Noldus) positioned on the ceiling above the centre of the maze. The DSA tests were conducted under dim light.

iii- Behavioural paradigm

- Habituation. Habituation lasted 5-8 days, depending on the individual mouse, and was divided into 3 phases. The first phase, starting before food restriction, consisted of 2-3 days of handling in order to habituate the mouse to being manipulated, especially for injection cannula insertion and/or electrode plug-in. Proper habituation to the task started on the second day of food restriction and consisted of multiple sessions of free exploration of the maze. The sessions were stopped when the mouse had eaten a food pellet in all three baited arms. The last phase of habituation consisted of a single trial in which the mouse was positioned inside the "starting arm" (defined by its position with respect to environmental visual cues) and had to collect a reward food pellet in each of the two "goal arms", with a time limit of 1 minute.

- Task. The DSA task consists of 10 trials in which the left and right goal arms are alternately baited with a rewarding chocolate pellet. During the first trial, the choice is forced toward the baited arm, setting the pattern of alternation (i.e. the reward zone of the un-baited arm is made inaccessible by positioning a PVC slide at the entrance of the proximal portion of the arm; each consecutive session alternately starts with a forced right or left choice). The 9 following trials rely on the mouse's free choice of one of the two arms. A single trial can be repeated up to 6 times ("runs") if the mouse makes consecutive mistakes, the sixth of which consists of a forced run toward the baited arm.
Once the mouse has reached the reward zone of the chosen arm, access to the reward zone of the unchosen one is restricted and the mouse is allowed to spontaneously return to the distal portion of the starting arm. Here, the mouse is collected and put back in its home cage, and a delay of 30 seconds is observed before the mouse is allowed to explore the maze again. During this delay period, the maze is cleaned with ethanol to prevent odour-based navigation. On the first day of training, 4 sessions are conducted, spaced by 30-60 minutes; on the 2 following days, 2 sessions per day are conducted, spaced by 30 minutes (see Figure 1). Behavioural training was conducted between 8 a.m. and 1 p.m.

ii- LFP processing and Sharp Wave-Ripple detection

Electrophysiological data were imported into Matlab and down-sampled to 1kHz for storage and analysis speed. Ripple detection was performed with Matlab scripts originally developed by Cyril Dejean. First, referencing and filtering at 50 Hz eliminate noise oscillations common to all channels. Then, 100-250Hz band-pass filtering is used to detect ripple events, which are selected if they respect the following criteria: 1) the amplitude is higher than 5 standard deviations of the mean band-passed trace; 2) the event is at least 30ms long; and 3) two ripples are separated by an interval of at least 45ms. Ripple characteristics are then computed, including: timestamp of the peak, intrinsic frequency, number of oscillations, mean amplitude (both on the filtered and the integrated trace), area under the integrated curve, duration (total and of each part preceding and following the peak), and half prominence.

In situ slice recordings

i- Preparation of hippocampal slices

Mice are male WT C57BL6/J mice aged 4 to 9 weeks. The extracellular ACSF (Artificial Cerebro-Spinal Fluid) solution is composed of: 119mM NaCl; 2.5mM KCl; 1.3mM MgCl2; 2.5mM CaCl2; 10mM glucose; 1mM NaH2PO4; 26mM NaHCO3. The cutting solution is an ice-cold sucrose solution (1-4°C) composed of 2mM KCl; 2.6mM NaHCO3; 1.15mM NaH2PO4; 10mM glucose; 120mM sucrose; 0.2mM CaCl2 and 6mM MgCl2. Both solutions are oxygenated with carbogen (95% O2, 5% CO2, pH 7.4 at 37°C, 290-310 mOsm/L). For brain removal, mice are anesthetized with 5% isoflurane for 2 min before decapitation. The head is immersed in the iced sucrose solution. The removed brain is immersed for 4 minutes in iced oxygenated sucrose solution and then placed on a cellulose nitrate membrane to separate and position the hemispheres in the vibratome (Leica VT1200s) to obtain 400µm horizontal slices (cutting speed of 0.1mm/s). Once produced, slices are semi-immersed in a dedicated oxygenated incubation chamber and maintained at 35°C in a water bath for at least 2 hours before starting the recordings.

ii- In vitro electrophysiological recordings

Recordings are made in an S-shaped recording chamber, maximizing oxygenation while preventing slice movement caused by the 3.5mL/min perfusion flow. Field recordings are obtained using glass micropipettes pulled with a PC-10 (Narishige, Japan) and broken at the tip to decrease the resistance (<0.5 MΩ). Depending on the experimental configuration, the pipette is filled either with ACSF or ACSF supplemented with IgG α-GluA2, 15F1 (IgG) or IgG Fab (Fab). Electrophysiological recordings are obtained with a MultiClamp 700B (Molecular Devices, Foster City, CA) using Clampfit software (Molecular Devices, Foster City, CA). Electrical stimulation is provided by a CBCSE75 concentric bipolar
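For illustration, the ripple selection criteria listed above can be sketched in a few lines of Python. This is only a hedged approximation of the Matlab pipeline described in the text: the parameter values follow the stated criteria, while the function shape, the envelope definition and the merging rule are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_ripples(lfp, fs=1000.0, band=(100.0, 250.0),
                   n_sd=5.0, min_dur_ms=30.0, min_gap_ms=45.0):
    """Return (start, end) sample indices of candidate ripple events."""
    # Criterion: 100-250 Hz band-pass (3rd-order Butterworth, zero-phase).
    nyq = fs / 2.0
    b, a = butter(3, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, np.asarray(lfp, dtype=float))

    # Amplitude criterion: > 5 SD above the mean of the band-passed trace
    # (computed here on the rectified trace; one possible interpretation).
    envelope = np.abs(filtered)
    threshold = envelope.mean() + n_sd * envelope.std()
    above = (envelope > threshold).astype(int)

    # Locate supra-threshold segments from the mask's rising/falling edges.
    starts = np.flatnonzero(np.diff(above) == 1) + 1
    ends = np.flatnonzero(np.diff(above) == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]

    events = []
    for s, e in zip(starts, ends):
        if (e - s) * 1000.0 / fs < min_dur_ms:
            continue  # criterion 2: events must be at least 30 ms long
        if events and (s - events[-1][1]) * 1000.0 / fs < min_gap_ms:
            events[-1] = (events[-1][0], e)  # criterion 3: merge events < 45 ms apart
        else:
            events.append((s, e))
    return events
```

On a 1kHz down-sampled LFP trace, such a function would return candidate events from which the ripple characteristics (peak timestamp, intrinsic frequency, amplitude, duration, etc.) could then be computed as described above.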
03927207
en
[ "spi.other" ]
2024/03/04 16:41:26
2022
https://hal.science/tel-03927207v2/file/TH_2022ECDL0017.pdf
Several people contributed to the smooth running of this thesis, whether through their encouragement or their advice, be it technical, organizational, or personal. First of all, I thank my supervisors, who demonstrated a wide range of mentoring skills. Stefan Duffner, my director, guided me along the technical avenues of my thesis. His experience in computer vision brought me a great deal. Thanks also to Romain Vuillemot, co-supervisor of the thesis, who set up and organized the Neptune project and put me in contact with the FFN. His advice on organization and planning made for a thesis rich in encounters and discussions. I also salute the members of the FFN (Fédération Française de Natation), and particularly the performance division. Meeting them, as well as the opportunities to film national-level competitions, greatly improved the thesis. I very much enjoyed these events. Thanks also to the SICAL team, which allowed me to write my thesis in 6 months instead of 5 while developing my chess skills. The members of these two teams brought me, each in their own way, the motivation and technical knowledge that allowed the thesis to run smoothly. Finally, a general thank you to my family and friends, who supported me and tolerated my laziness in their presence. I promise, it was because I was working. Lastly, I thank the surprise guest of this thesis, covid 19. It reduced my carbon footprint over these three years by sparing me plane trips to conferences.

ABSTRACT

In top-level sport, where all participants have exceptional physical and technical skills as well as deep theoretical knowledge of their field, the gap between the best results is minimal. The winner is determined by small details that may seem insignificant to the untrained eye but are actually fundamental to gaining ground on the others. In swimming in particular, many finals of important competitions end with a difference of less than a tenth of a second between the leaders. The details bringing victory can be very varied, as they concern the individual physique of the swimmers, their mental and physical preparation, their understanding of the swimming style of their competitors, and many other things. Understanding them is crucial to winning: this is the role of the sports coaches. They study in fine detail what can allow their swimmer to be the most efficient. The first step in the analysis of training and races is information extraction. In this thesis, we are particularly interested in swimming competitions. Our goal is to generate an automatic race report. This would free up an invaluable amount of time for coaches and would also allow for extensive analysis of competitions. Such technology would also improve the detection of potential talent through the systematic analysis of all amateur competitions. We focus here on video analysis, as sensors and other intrusive acquisition systems cannot be used during championships. This imposes important constraints related to the recording conditions of the videos: our methods must be robust and general. Computer vision methods will be explored to get the best out of the videos. We will also explore image analysis in a less data-dependent way than usual. Indeed, this field has progressed enormously over the last decade thanks to the development of deep learning, but it depends a lot on the quality and quantity of data.
Our general problem will therefore concern the extraction of information from swimming race videos using small amounts of data. This task will be divided into three parts, each studying a specific type of information. All results, models, and resulting datasets have been published online, accessible to all. We will start by focusing on the detection of swimmers in images. This task is the most obvious to start with, because to study a swimmer during a race, we must be able to locate them. This chapter will therefore introduce a detection method specifically adapted to swimmers, as well as a dataset related to the task. Detecting swimmers in an image is a first step, but it does not give positional information in the pool. For that, we need to register the image, that is, to map each point of the image to a zone of the pool. A particularly fast and very efficient method will be explained to address this task, and another dataset will be presented. The third part of this thesis will concern the measurement of swimming cycles. The repetition of the movement being omnipresent during a race, its study is one of the most useful ways to perceive swimming quality. It is an excellent basis for measuring a swimmer's fatigue, efficiency, or technique. A general method to count cycles in a video will be presented. Specifically for swimming, we will also describe a way to locate the ends of cycles, in order to measure their individual duration with precision.
At the 2022 French Elite Swimming Championships in Limoges, after Maxime Grousset and Florent Manaudou finished first and second respectively in the 50m butterfly, a sports media outlet asked them for their feedback on the race. Maxime Grousset explained how he started strongly with a great first 15m, then how his finish was sub-optimal. Such an analysis, performed intuitively by a top-level swimmer (silver medallist in the 100m freestyle at the 2022 World Championships), is extremely valuable. It allows him and his coach to know what is mastered and what the areas of improvement are. Grousset explained his personal feelings about the race, but he could not have provided quantitative evidence showing how far ahead he was initially and how much he slowed down at the end. He also could not always know exactly what led him to slow down near the end, such as a bad swimming movement several seconds earlier. A quantitative analysis of this sort can only be performed by studying the entire video footage of the race. Coaches can do it manually using specific tools, but they tend to focus on the few metrics they prefer. Further, not all swimmers have either a coach to do it for them or the time and tools at their disposal. Therefore, being able to automatically generate such a race summary with quantitative metrics and pixel-wise precision can be of broad interest to the swimming community. If every swimmer had access to a summary sheet showing how they performed during a race, it could help them progress rapidly. This is the goal of this thesis: to create an automatic swimming race analysis method. It was carried out in collaboration with the Fédération Française de Natation (French Swimming Federation) (FFN) through the Neptune project, which started in January 2020, just three months after the start of this thesis.

Choice of Using Videos

We chose videos because coaches heavily rely on them. Cameras are transportable and can be installed next to a pool with no major requirement other than a dry area. They are also easy to manipulate, even by non-experts, due to their presence in our daily life. Videos are good communication tools to explain results to coaches, swimmers, or teams.
Other data can be integrated directly onto them and visualized, either using tracking methods or simply by putting abstract data in its spatial or temporal context. Apart from videos, it is also possible to analyze a race using other data streams (GPS, inertial sensors, etc.), but they are constraining. They require swimmers either to wear intrusive sensors or to perform extensive prior calibration phases. Body sensors, for instance, are much more constraining, although they give a precise body position. Finally, intrusive gear is not allowed during competitions, contrary to filming material.

Manual Analysis VS Automatic Analysis

The FFN's performance division uses videos closely zoomed on the swimmers to fill in summary sheets, as shown in Figure 1.1. Their workflow is as follows: an expert records the swimmers' position over time by noting when they cross each visible landmark. Their strokes are then counted over one pool length. This enables the automatic computation of the other metrics in the summary sheet. Despite all being inferred from the same two values, these metrics offer different perspectives on the race to coaches and swimmers. Although these data contain a lot of valuable information, we argue that Computer Vision (CV) methods can enhance them. Also, this sheet is only created for a few races (especially finals and semi-finals) or for swimmers with access to the performance division. With an automatic method, one could fill them in for each participant. Finally, the bottleneck of these analyses is the human time they require. If our goal is achieved, we hope to save time for the performance team over their manual annotation process and to analyze a wider range of swimmers (e.g. foreign races, local league competitions, etc.).

Computer Vision Tasks

To perform a task using CV, one must first understand the prior knowledge humans have regarding swimming and the spatial structure of a pool, as illustrated in Figure 1.2. One must also realize that parallax deformation is easily compensated by our brain, but that an image processing system has no such prerequisites. Finally, to count the swimming cycles of a swimmer, one must first know where the swimmer is in the pool. As CV algorithms do not have such prior knowledge, they cannot study a race the same way humans do. An automatic swimming analysis must then perform:

• swimmer detection: locates the swimmers in the image.
• video spatial calibration: maps a position in the image to a position in the pool (i.e. removes the perspective).
• periodicity estimation: counts the number of swimming strokes a swimmer makes during a pool length, which is the coaches' main analysis metric.
• camera temporal synchronization: if multiple cameras are used, they must be adjusted so that the race starts at the same time in each.
• swimmer identification: maps a swimming lane to a swimmer.
• swimming phase segmentation: separates diving, normal swimming, and u-turns. Each has different properties and must be analyzed individually.

In this thesis, we focus on the first three tasks (detection, calibration, and periodicity estimation); the others are left to future work, although leads are proposed in Future Works (section 6.3.7). We also aim at working with unconstrained swimming videos to be more general and less dependent on acquisition conditions.
The models must thus adapt to videos that can be panning to follow the swimmers or static to continuously film the same part of the pool, far away from or close to the water, etc. The swimming race analysis method we propose is illustrated in Figure 1.3. It starts by detecting the swimmers, arguably its most important element. Once this is done, only half of the swimmer positioning problem is solved: depending on the camera position, the framing, and possible camera movements, mapping pixels from the image to positions in the pool is not trivial. A camera calibration step is necessary to resolve this. Combining detection and calibration gives the position of the swimmers in the pool. Finally, with each swimmer detected, one can crop a frame around the swimmers in the video and study the periodicity of these sub-videos.

New Challenges We Must Tackle

CV works directly applied to swimming are rare and no pre-existing model can be used as an off-the-shelf solution. A new model specifically addressing these elements is thus necessary. A robust approach we use in this thesis is Deep Learning (DL). This type of algorithm relies on dedicated datasets, but none exists for swimming, either for detection, registration, or stroke counting. Further, although a swimmer is a human and the human class appears in many detection datasets [START_REF] Lin | Microsoft COCO: Common Objects in Context[END_REF][START_REF] Everingham | The PASCAL Visual Object Classes Challenge[END_REF], these datasets do not contain examples of humans in the water. As a result, even powerful object detection models [START_REF] Redmon | YOLOv3: An Incremental Improvement[END_REF][START_REF] Girshick | Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation[END_REF][START_REF] Zhang | DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection[END_REF], able to detect any pedestrian, runner, or sitting person, are unable to robustly detect swimmers.

CV results also depend significantly on the video capture conditions. Difficult lighting, occlusion, motion blur, and every other perturbation can cause important performance drops. Such problems are exacerbated in swimming, as the examples in Figure 1.4 showcase. These conditions are extremely frequent, due to the very nature of this sport: the background is a reflective surface close to giant bay windows, waves and bubbles can randomly hide any region of interest, the swimmers are close to the surface, causing diffraction problems, etc. Camera placement is another problem, as cameras can be placed by the coaches on the poolside at different spots depending on the presence of a public in the stands, the availability of a filming cell, etc. Depending on the induced Point of View (POV), the video quality can vary significantly. A camera that is centered, positioned a few meters above the ground, and at a good distance from the pool gives the best point of view. These capture conditions must be best suited to the later analysis. Static cameras are easy to calibrate but are not zoomed in on swimmers. The analysis is therefore more robust but limited by the digital zoom. On the other hand, a video following the swimmers (either all of them or just a small group) is closer to the action and can thus allow a better understanding of the race, but this is limited by the calibration quality, significantly harder to achieve than for static cameras (see Chapter 4). As the situation evolved during the project, the preferred setting changed.
This results in different kinds of captured videos that cannot all be studied in exactly the same way due to their very nature (either static or following a group of swimmers). This thesis proposes solutions for both.

Contributions

Three articles have been produced during this PhD. The first, described in chapter 3, addresses swimmer detection [START_REF] Jacquelin | Detecting Swimmers in Unconstrained Videos with Few Training Data[END_REF]. The proposed method handles the constraints of swimming, with data augmentation and a model architecture that are specifically adapted to the task at hand.

A paper tackling the task of camera calibration, presented in chapter 4, has also been produced [START_REF] Jacquelin | Efficient One-Shot Sports Field Image Registration with Arbitrary Keypoint Segmentation[END_REF]. Its purpose is to perform registration, that is, mapping the input video to a virtual top view with a known coordinate system. This gives a direct correspondence between a position (in pixels) in the image and a position (in meters) in the pool. Nicolas Jacquelin, Romain Vuillemot, Stefan Duffner. Efficient One-Shot Sports Field Image Registration with Arbitrary Keypoint Segmentation. IEEE International Conference on Image Processing, Oct 2022, Bordeaux, France 〈hal-03738153〉.

A third paper, described in chapter 5, addresses stroke counting [START_REF] Jacquelin | Periodicity Counting in Videos with Unsupervised Learning of Cyclic Embeddings[END_REF]. It introduces a general periodicity estimation method for videos and complex time series such as 4D videos (MRI scans, CT scans, etc.) or more abstract signals. It returns the number of periods in a signal in an unsupervised fashion. Nicolas Jacquelin, Romain Vuillemot, Stefan Duffner. Periodicity Counting in Videos with Unsupervised Learning of Cyclic Embeddings. Pattern Recognition Letters, Elsevier, 2022, (hal-03738161).

In addition to these papers, two datasets have been introduced alongside them, namely Swimm 400, published in [START_REF] Jacquelin | Detecting Swimmers in Unconstrained Videos with Few Training Data[END_REF], a swimmer detection dataset, and RegiSwim 500, published in [START_REF] Jacquelin | Efficient One-Shot Sports Field Image Registration with Arbitrary Keypoint Segmentation[END_REF], a swimming pool registration dataset. We also published a benchmark dataset grouping those tasks along with non-addressed ones at the 2022 MediaEval challenge (detailed in section 6.3.8).

Computer Vision (CV) is the research field we contribute to in order to address automatic swimming video analysis. This field aims at developing methods to extract knowledge from input images. It has a long history, from signal processing to data analysis, and has recently been revolutionized by deep learning methods. This chapter presents the general background of the domain, explaining different types of CV algorithms. The next chapters will use them to explain in further detail the State of the Art (SOTA) in their respective fields (detection, registration, and periodicity counting).

The domains of application of CV are extremely varied and tend to expand with increasing computation power, data accessibility, and new model ideas. The automation of vision-based tasks is tackled with CV (e.g. pedestrian counting, action recognition) and frequently paired with online cameras (e.g. video surveillance, plant monitoring, etc.). Monitoring tasks are indeed often addressed using CV as it gets ever more reliable and cheap.
Apart from that, experts can be assisted by CV models on complex tasks to enhance their decision-making. This is especially frequent in medical imaging, where small cancerous tissues are easy to miss for a human eye watching the whole scan [START_REF] Hitchman | From AI-assisted imaging to AI-assisted diagnosis[END_REF]. Sports analysis, as in this thesis, can be seen both as monitoring and as expert assistance. Indeed, analysing swimmers during a race is a monitoring task: once the video is created, one can wait for the analysis to be over to get a race summary. However, CV can also help coaches identify anomalies during a race or training. For instance, if the stroke rate suddenly changes for a short period, a coach might analyze the corresponding video timecode and find an improvement area for the swimmer.

Here is an overview of common CV tasks related to our work, each illustrated in Figure 2.1.

• image classification: the content of the image must be identified among a list of potential classes. It can either concern objects in the image or the general definition of the scene.
• object detection: objects in the image must be identified and located. Multiple objects can be present in the same image. The problem is formulated as the placement and shaping of bounding boxes framing the objects of interest.
• instance segmentation: this task estimates the probability of each pixel belonging to a given set of classes. It can be understood as pixel-wise classification.
• object tracking: at the start of a video, an object is selected. Tracking consists of detecting it in each subsequent frame, even if it moves or changes. This task uses temporal information from the video to extract information from each frame.

A task is chosen because it addresses a problem, but said problem usually comes with external constraints. The speed of execution constraint is present in a great number of CV applications. Sometimes, when only a few images are to be analyzed, this is not a problem, but in real-time video analysis, for example, the inference speed is critical. In such conditions, trade-offs such as speed vs precision must be addressed. Depending on the use case, one is more important than the other. The classic, older algorithms described in the next section are generally faster, as they are simpler than the newest ones, even on the slower hardware available at the time. Over time, hardware and software evolved in parallel, with heavier new models running on faster new machines. As a consequence, new Deep Neural Network (DNN) models, orders of magnitude more computationally expensive than older algorithms, can nowadays run in real time under the right conditions. Also, classic CV methods do not rely on as much data as the more recent ones. This is usually seen as the most important difference and the reason why recent methods work better. This data dependency also has drawbacks that older approaches do not have.

Previously in Computer Vision

CV techniques have evolved a lot throughout the years. Understanding methods predating Deep Learning (DL) is required to understand the newest models. It is also necessary to understand prior work on automatic swimming race analysis [START_REF] Benarab | Optimized swimmer tracking system based on a novel multi-related-targets approach[END_REF][START_REF] Napoleon | Variabilité de la technique de nage : adaptabilité aux contraintes et performances en natation[END_REF], which is based on these algorithms.
Classic Algorithms

In this section, we call classic algorithms any CV algorithm where the user chooses the different parameters and where the task definition is extremely specific (e.g. line detection). Overall, CV was then much more limited than today, and as soon as an application fell outside classic calibrated tasks, it was impossible to manage the exceptions. By contrast, the majority of the modelling process of recent methods comes from the data, which gives them better generalisation abilities. The "classic" methods appeared mostly before Machine Learning (ML), between the 70's and the early 2000's. In this section, we describe a few of them in detail, which were milestones when they were introduced. We only focus on the ones that were used or considered for our application.

Convolution Filters

The first family of these algorithms, coming from the signal processing domain, is the 2D convolution filter. It relies on one important property of images, their spatial coherence: the pixel distribution in local regions contains more information than individual pixels. Therefore, processing an image as a group of spatially organized values instead of simply a group of disjointed pixels gives richer and more meaningful results. Further, as objects can be placed anywhere in the image, translation-invariant operations are often required for better analysis. Convolution filters implement both the idea of spatial coherence and translation invariance. A sliding window (called a filter) convolves over the whole image, which results in a new image of the same dimensions (if we ignore padding issues). The values of the filter are key, as they define the nature of the output (i.e. the new resulting image). A widely used filter is an edge detector named the Laplacian [START_REF] Marr | Theory of Edge Detection[END_REF], illustrated in Figure 2.2. In the result, extreme pixel values (i.e. far from 0) represent positions with sharp edges. Such a filter is intuitive and natural to interpret and understand.

Figure 2.2 (bottom) shows an application of the filter on a swimming race image: the lane buoys are mostly well detected, as they are made of simple features with little texture; the swimmers and waves, however, are more complex and the filter cannot isolate them.

Using the same idea, one could create a detector for other specific shapes. Increasing the filter size allows the search for more complex patterns covering a larger region. With 2D filters, it is possible to look for any visual pattern, but if the pattern is too complex, it becomes scale-dependent. To alleviate that, one can create the same filter at different scales and window sizes to have better chances of finding a result. However, increasing the kernel size of a filter linearly creates a quadratic computation increase, due to the 2D nature of filters (a 3 × 3 filter has 9 elements, but a 5 × 5 filter has 25). Therefore, being scale-exhaustive is extremely slow, due to the computation time big filters require. Filters are also orientation-dependent. It is possible to rotate their values in the matrix to look for the same pattern with some rotation, but being exhaustive requires a considerable amount of filters. It is rarely possible to be exhaustive with these filters, so one usually uses only a few well-crafted filters. Edge detection does not have these problems: edges are not scale-dependent, so small kernels generally give acceptable results, and the Laplacian can detect lines of any orientation.
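To make this concrete, here is a minimal sketch of such a Laplacian edge detector with NumPy/SciPy. The threshold value is an arbitrary placeholder; as discussed next, it is highly image-dependent.

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Laplacian kernel: strong responses where intensity changes sharply.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def detect_edges(gray_image, threshold=30.0):
    """Convolve a 2D grayscale array with the Laplacian and binarize
    the response. The threshold is arbitrary and image-dependent."""
    response = convolve(gray_image.astype(float), LAPLACIAN, mode="nearest")
    return np.abs(response) > threshold
```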
After a convolution, one can apply a threshold to the output to binarize the result: either a pixel represents a pattern (an edge for instance) or it does not. This threshold can be tricky to determine, yet it is extremely important. It depends heavily on the specific image being analysed, and it is frequent to need a different threshold for different images. Further, these filters are highly noise-sensitive, and patterns can sometimes be detected for no good reason (a sharp shadow on a wall, a faulty pixel area in the camera sensor...). Some highly textured regions cannot properly be analysed by such a filter, such as the waves shown in Figure 2.2. Indeed, this method only processes local pixel regions; it does not further interpret their meaning. For these reasons, 2D filters lack generalisation power. One must adapt the filter's size, orientation, and threshold depending on the context (i.e. the scene) being analysed. Despite all these problems, convolution filters are still used nowadays for simple applications, especially for edge detection or blurring (with a Gaussian filter). Edge detection has been improved with automatic cleaning algorithms, such as the Canny filter [START_REF] Canny | A Computational Approach to Edge Detection[END_REF]. Despite being generally more robust, it still has problems similar to the others, with unintelligible results on highly textured areas and a need for thresholds.

Hough Transform

Although convolution filters can identify local patterns, the result is still difficult to grasp for a computer. For instance, in Figure 2.2, the lines are not perfectly continuous due to noise and threshold imperfection. Also, even if some pixels are identified as edges, corners, or similar local marks, one cannot identify important wider shapes. An idea that emerged in the early days of computer science (around 1960) was to convert certain aspects of an image into equations, much easier to manipulate than pixels. The Hough transform [START_REF] Richard | Use of the Hough transformation to detect lines and curves in pictures[END_REF] was originally a method to detect straight lines on a "skeletonized" image, i.e. an image of edges, usually obtained with a Laplacian filter and a threshold. This method can then return an equation for any line in the image, even highly noisy or partly occluded ones. An illustration of this method is given in Figure 2.3. The Hough transform converts pixels in the (x, y) image space into curves in the (θ, ρ) Hough parametric space. For this operation, each pixel in the original image is converted into a sinusoid curve, following the method explained in Figure 2.3, top. If a line L of parameters (θ_L, ρ_L) contains N_L points in the source image, the position (θ_L, ρ_L) of the parametric space will have a value of N_L. One can threshold the Hough space by N to keep only the lines with N points or more. An application of the Hough transform to swimming is given in Figure 2.3, bottom.

The Hough transform is easily parallelizable, but this is rarely necessary as it is generally very fast. It is, however, difficult to set up: one must first extract the edges in the image and threshold the result, with all the problems and thresholds implied. Accidental lines can appear (red line in Figure 2.3), especially in highly textured areas: if an edge detector outputs noise, many pixels can, by chance, be aligned.
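For illustration, here is a hedged sketch of this edge-detection + Hough pipeline using OpenCV. The input file name and all threshold values are placeholders; each of them is context-dependent, as discussed below.

```python
import cv2
import numpy as np

image = cv2.imread("pool_frame.png")              # hypothetical input frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                  # skeletonized edge image

# Accumulator resolution: 1 pixel for rho, 1 degree for theta.
# The threshold is N, the minimum number of aligned edge pixels for a line.
lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=200)

for rho, theta in (lines[:, 0] if lines is not None else []):
    # Convert the (theta, rho) parameters back to two drawable points.
    a, b = np.cos(theta), np.sin(theta)
    x0, y0 = a * rho, b * rho
    p1 = (int(x0 - 2000 * b), int(y0 + 2000 * a))
    p2 = (int(x0 + 2000 * b), int(y0 - 2000 * a))
    cv2.line(image, p1, p2, (0, 0, 255), 2)
```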
Indeed, the Hough transform does not differentiate a correct line from points across the whole image that happen to be aligned. Further, one must choose a threshold corresponding to the minimal length of a line to be considered (i.e. the threshold in the Hough space). This threshold largely depends on the targeted content and its scale in the image.

Figure 2.3: Colours are preserved in the example to map elements of one space to the other. In the point example, any line crossing the (x, y) position is represented in the Hough space using the angle (θ) and radius (ρ). A single point thus results in a sinusoid curve. In the line example, the curves from each point of the line intersect in a single point corresponding to the line parameters, which are not directly obtainable from the image. Bottom: Hough line detection applied to a pool (after edge detection and thresholding) to detect its buoy lines. The red line does not represent a line in the image and appears solely because of noise.

Finally, as a line has a width, it is possible to draw multiple mathematical lines from it, with similar parameters. It is thus common to have multiple highlighted points in the Hough space that come from the same line. As a result, multiple close (θ, ρ) pairs may exceed the detection threshold. The Hough transform has problems similar to convolution filters: a lack of overall generalisation ability due to many context-dependent thresholds. In the past, when one had to detect lines, engineers were very careful about their image capture conditions: they avoided unwanted shadows creating lines, put everything at a calibrated distance to avoid scale problems, and were extra careful about orientation. More recent methods have gained significant robustness. This is especially needed with videos, where anything can get closer, change its orientation, or cast shadows. For this, new methods were necessary.

Feature Matching

The two previous methods extract low-level information from the image. Such features inform on the spatial properties of the objects present in the image, but they cannot provide more complex results, such as object identification. Further, these techniques have 3 main problems:

• scale-sensitivity: the same object zoomed in can have different representations
• light-sensitivity: depending on the lighting conditions, these methods can behave in very different manners
• orientation-sensitivity: the object's orientation is crucial to any local pattern description

In the early 2000's, feature matching methods were developed to overcome such problems and enable deeper image understanding. Their core idea is to (i) extract points of interest in images and (ii) give meaning to these points using a semantic vector. If two vectors are similar, they both represent similar patterns. If one can match enough such vectors between different images, the images likely represent a similar object. A set of vectors describing an image thus represents high-level semantic information. In general, transforming an image into a vectorial representation with semantic information is called an embedding. To perform embeddings, several methods were created (e.g.
SIFT [START_REF] David G Lowe | Object recognition from local scale-invariant features[END_REF], SURF [START_REF] Bay | SURF: Speeded Up Robust Features[END_REF], ORB [START_REF] Rublee | ORB: An efficient alternative to SIFT or SURF[END_REF] or BRIEF [START_REF] Calonder | BRIEF: Binary Robust Independent Elementary Features[END_REF]), each with different properties. The most used is called Scale Invariant Feature Transform (SIFT). It performs discriminative keypoint detection and feature extraction. The first step is to detect interest points, also called landmarks. They are special areas in the image containing valuable information, with a chance of being unique and discriminative compared to other areas; in practice, highly textured regions. SIFT applies Gaussian blurs of different sizes to the image and computes the difference between the results: this is the Difference of Gaussians (DoG). A landmark is an extremum of the DoG. The image is downscaled to multiple resolutions and the process is repeated for each, giving scale-robustness to the landmark detection. Pixel neighbourhoods around the landmarks are isolated to study the region's gradient orientation, as in Figure 2.4, left and center.

After the landmark detection, SIFT computes their embedding vector, which can be considered a canonical representation of the area. This is summarized in Figure 2.4. SIFT extracts a region of 16 × 16 pixels around the point coordinates, rotated so that the keypoint's gradient orientation always faces up (to be orientation-invariant). It isolates 16 (4 × 4) grids in it and creates a histogram of gradient directions for each of them. The gradient's direction is quantized into 8 possible angles to normalize the possible outcomes. SIFT also ignores the magnitude of the gradients, as it is sensitive to lighting effects. As a result, SIFT obtains 16 histograms of 8 values. They are concatenated to form a 128-dimension vector describing the landmark's local area. Although it is not completely context-invariant, it has the main properties missing from the previous methods, as it is robust to changes in:

• rotation: the main orientation always faces up
• size: the image is scaled to several resolutions during landmark detection
• light: the gradient magnitude is ignored during the embedding vector creation

SIFT outputs detailed local pattern descriptions, but this is still not enough to describe an entire object or scene. As mentioned before, one must study different descriptors in images to be sure they represent a given concept. One uses a set of varied images representing an object and generates SIFT descriptors for each of them. The most recurring vectors in these images are saved in a list of "words" representing the object. This is called the "Bag of Words" technique [START_REF] Van Gemert | Visual Word Ambiguity[END_REF]. The more varied the images in the set, the more robust the Bag of Words, as it contains many orientations, positions, contexts, and general variations of the object it describes. One creates Bags of Words for different types of objects. Each time one wants to analyse the content of an image, one extracts its SIFT descriptors and compares them with the different existing bags. If one bag is close enough, the image likely contains the corresponding object. Not all the words have to be present to make a match, as not every part of the object can be visible in the image at the same time. This combination of descriptors and Bags of Words was the SOTA in CV until the early 2010's.
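As an illustrative sketch of SIFT description and matching with OpenCV (the image names are placeholders, and the clustering step that would turn raw descriptors into a proper Bag of Words vocabulary, typically k-means, is omitted):

```python
import cv2

img1 = cv2.imread("swimmer_a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical images
img2 = cv2.imread("swimmer_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, desc1 = sift.detectAndCompute(img1, None)  # keypoints + 128-D descriptors
kp2, desc2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to keep unambiguous pairs.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(desc1, desc2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} reliable matches out of {len(matches)}")
```

Many good matches between two images suggest they contain similar visual content, which is the intuition the Bag of Words technique builds on.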
The descriptors are robust to many variations, and the Bag of Words adds robustness to occlusion and provides detection. With this technique, one relies on data to create the description of an image. This idea of aggregating information from a wide source of examples has proven very effective in the domain of CV. Although it had long been used with histogram analysis or pixel intensity thresholds, data-oriented algorithms gained popularity in the mid 2000's with this method and others (Support Vector Machines (SVM) for instance). This is called Machine Learning, and it is the main focus of the current methods in the domain.

Going Further with Machine Learning

ML can be defined as the set of algorithms which, given a set of inputs and outputs, choose the parameters of a model to map them. With ML, an important part of the intelligence can be found directly in the data. A human could at most create heuristics biased towards what they focus on, which is often less comprehensive than data-driven algorithms. Contrast and shapes being easier to describe than texture [START_REF] Mangini | Making the ineffable explicit: Estimating the information employed for face classifications[END_REF], SIFT was conceived to detect these elements. If one generates SIFT descriptors for a face and studies what they represent [START_REF] Liu | Bag-of-Words Vector Quantization Based Face Identification[END_REF], the ones centered around the eyes, the mouth, and other highly textured regions will be kept, as what they describe seems important to human vision. However, doing so misses important descriptors on the cheeks and the forehead, because our eye is less focused on these regions as they lack texture [START_REF] Vondrick | Learning visual biases from human imagination[END_REF]. Using varied object representations and ML can alleviate these problems. However, despite having fewer human biases and being more powerful than many previous algorithms, ML has its own problems and biases too [START_REF] Fabbrizzi | A Survey on Bias in Visual Datasets[END_REF]. Further, it depends on data (quantity and quality) to function accurately. Finally, the nature and complexity of the model itself determine the quality of the results. In this section, we detail these aspects, applied to a specific approach of ML, namely the Neural Network (NN).

Training Neural Networks

In the mid 50's, Rosenblatt conceptualized the perceptron [START_REF] Rosenblatt | The perceptron -A perceiving and recognizing automaton[END_REF], represented in Figure 2.5 left, as an elementary processing unit, only performing an addition, a multiplication, and a non-linear operation. Combining many of them, organized in layers, results in an MLP architecture, as shown in Figure 2.5 right. The output is computed layer after layer, each taking as input the output of the previous one, following Equation 2.1:

$$\mathrm{out}(X) = z_n(X) = z_n(z_{n-1}(\dots z_0(X)\dots)), \qquad z_i(X) = \sigma(z_{i-1}(X) \cdot \theta_i), \tag{2.1}$$

with $z_i$ and $\theta_i$ respectively the output and the weights of the $i$-th layer, and $\sigma$ the activation function. Such a model is theoretically able to approximate any function. The more layers (i.e. the deeper the network), the higher the complexity of the problems it can solve. This is the core idea of ML, as it revolves around a key concept in the domain: the difference between a mathematical model and a heuristic. The former integrates as many influencing elements as possible, using prior knowledge of the environment.
This results in an explicit formula with highly interpretable parameters. If the system is not chaotic, estimating each parameter makes it highly predictable. The number of parameters depends only on the system and the equations used to model it. However, a formula is not trivial to find, especially with high-level notions: mapping an image to an object class is far from being understood. On the other hand, a heuristic does not result in explicit and limited parameters. Instead, it gives an approximation on a specific range and can be made of any number of parameters, often significantly more than the formula. Most importantly, it can be obtained using learning and data, which we will detail in this chapter.

Gradient Descent [START_REF] Cauchy | Methode generale pour la resolution des systemes d'equations simultanees[END_REF] is the algorithm used to fit the weights of a model to the data. Given a set of inputs X, outputs Y, and a NN f, the problem is to find the NN parameters θ that best map X and Y. With gradient descent, one computes the error E between Y and the prediction Ỹ and changes the weights following the error's gradient. This results in a new, less incorrect model, and the operation is repeated until the error is low enough. There are many existing error functions, which must (i) be differentiable (as must be f) and (ii) decrease as Y and Ỹ get closer. Formally, this follows:

    f_θ(X) = Ỹ ≠ Y ,    E(Y, Ỹ) → 0 as Ỹ → Y ,    f_θ ← f_θ - α × ∂E/∂θ ,    (2.2)

with α the learning rate, a coefficient (1e-3, 1e-4...) defining how much the weights will change from their original value in the gradient direction. Its value must be carefully chosen. If it is too large, the model might never converge towards a good solution, the optimum being closer than one step of its size. On the other hand, a too small learning rate can trap the algorithm in a local minimum. Several variations of the gradient descent algorithm have been proposed, such as the Stochastic Gradient Descent (SGD) [START_REF] Kiefer | Stochastic Estimation of the Maximum of a Regression Function[END_REF] or Adam [START_REF] Diederik | Adam: A Method for Stochastic Optimization[END_REF]. SGD provides a faster gradient computation for little precision loss. Adam adds gradient direction momentum throughout the steps to increase convergence speed.

Despite being suitable for many ML optimisation problems, gradient descent is not an ad hoc solution for NNs. Although it is straightforward to compute the error for the output layer, there is no direct way to know how changing the weights of an intermediate layer affects the final output. The solution to alleviate that is called back-propagation. It was developed in the late 80's [START_REF] Rumelhart | Learning representations by back-propagating errors[END_REF] and substantially improved during the next decade [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF][START_REF] Yann | Efficient BackProp[END_REF]. Back-propagation computes the error's gradient through the layers using the chain rule, following Equation 2.3 for the layer l ∈ [1, n-1]:

    ∂E/∂θ_l = ∂E/∂z_n · ( ∏_{k=l}^{n-1} ∂z_{k+1}/∂z_k ) · ∂z_l/∂θ_l ,    (2.3)

    ∂z_{k+1}/∂z_k = ∂z_{k+1}/∂(z_k · θ_{k+1}) · ∂(z_k · θ_{k+1})/∂z_k = σ′(z_k · θ_{k+1}) · θ_{k+1} ,

    ∂z_l/∂θ_l = ∂σ(z_{l-1} · θ_l)/∂θ_l = σ′(z_{l-1} · θ_l) · z_{l-1} .

The intermediate layers' weights θ_i are optimized using the gradients of layers i+1 through n. This explains the name "back-propagation", as the original error gradient is propagated to each layer from end to start.
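To make Equations 2.1 to 2.3 concrete, here is a minimal NumPy sketch of a two-layer perceptron trained by explicit back-propagation and gradient descent; the toy data, layer sizes, and learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: 64 two-dimensional inputs, binary target.
X = rng.standard_normal((64, 2))
Y = (X[:, :1] * X[:, 1:] > 0).astype(float)   # XOR-like problem

# Two layers, as in Equation 2.1: z1 = sigma(X.theta0), out = sigma(z1.theta1).
theta0 = rng.standard_normal((2, 8)) * 0.5
theta1 = rng.standard_normal((8, 1)) * 0.5
alpha = 0.5                                    # learning rate

for step in range(2000):
    # Forward pass, layer after layer.
    z1 = sigmoid(X @ theta0)
    out = sigmoid(z1 @ theta1)
    E = np.mean((out - Y) ** 2)                # error function

    # Back-propagation (chain rule, Equation 2.3): the output error
    # gradient is propagated back from the last layer to the first.
    d_out = 2 * (out - Y) / len(X) * out * (1 - out)
    grad1 = z1.T @ d_out
    d_z1 = (d_out @ theta1.T) * z1 * (1 - z1)
    grad0 = X.T @ d_z1

    # Gradient descent update (Equation 2.2).
    theta1 -= alpha * grad1
    theta0 -= alpha * grad0

    if step % 500 == 0:
        print(f"step {step:4d}  error {E:.4f}")
```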
Batch Training

Before updating the weights, the gradient of multiple input/output pairs is computed, to parallelize the back-propagation. However, back-propagating the error of the entire dataset requires a lot of memory and is not very efficient, as it allows only one update of the weights for the whole dataset. To use data more optimally, the back-propagation is computed by batches (i.e. subsets of the data) before updating the model. This allows a more frequent update of the weights, thus faster convergence. Further, using batches reduces the risk that the individual gradients have opposed values, which would result in very small or null adjustments. When each batch of the dataset has been used, one epoch is complete.

The choice of the batch size can be critical according to [START_REF] Kandel | The effect of batch size on the generalizability of the convolutional neural networks on a histopathology dataset[END_REF][START_REF]How to Control the Stability of Training Neural Networks With the Batch Size[END_REF]. On one side, [START_REF] Kandel | The effect of batch size on the generalizability of the convolutional neural networks on a histopathology dataset[END_REF] shows the relationship between batch size and learning rate, proving their inter-dependency. It concludes by stating that with large learning rates, smaller batch sizes are the best to obtain the best model. It argues that for a given problem, it is preferable to start with a low batch size (e.g. 32) and a small learning rate (< 10^-3), and to try increasing the batch size until performance decreases. On the other hand, [START_REF]How to Control the Stability of Training Neural Networks With the Batch Size[END_REF] showcases how batch size influences a training's speed and stability. In general, the bigger the batch size, the more stable the training: the more elements there are in a batch, the less it is subject to data noise. As variance is reduced with bigger batches, the model is also more stable between epochs by the end of training, contrary to training with small batch sizes. However, a small batch size allows significantly faster training and fewer epochs to reach optimal performance. Both works further explain an important aspect of batched training: it creates a trade-off between the model's specificity and generality.

Figure 2.6 - Illustration of the data domain and of dataset biases on swimmer images. For a model trained on this data, the farther an image is from the training domain, the less likely it will be identified as a swimmer. For instance, Superman in swim briefs will hardly be identified properly if the domain only contains classic images of swimmers. The domain thus needs to be as wide as possible for a given class. Further, data biases appear when the classes are unequally represented: here, there are more examples of males than of females, which causes problems for the future model's representation.

A too specific model can be obtained with too small batch sizes, because the gradient will correspond to only a sub-part of the dataset. Each batch will change the model in too different ways, adapting each time to too different data. As a result, the model may never converge to stable optimal weights. A too general gradient adaptation often results in no strong decision by the model (i.e. all the outputs have the same probability). Indeed, if the gradient of each input/output pair is calculated and averaged, it might result in a very small vector, as many elements may have opposed gradients.
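In practice, batched training is what DL frameworks implement by default. A minimal PyTorch sketch, with dummy data and illustrative hyper-parameters:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Dummy tensors standing in for a labelled image dataset.
images = torch.randn(1024, 3 * 32 * 32)
labels = torch.randint(0, 10, (1024,))
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):                  # one epoch = one pass over all batches
    for batch_x, batch_y in loader:     # one weight update per batch
        optimizer.zero_grad()
        loss = criterion(model(batch_x), batch_y)
        loss.backward()                 # back-propagates the batch-averaged error
        optimizer.step()
```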
After a pass over the whole dataset, one epoch is complete. It is usual to do several of them (hundreds, thousands...) to use every bit of information in the data, even if most of the information is learned during the first few epochs. The earliest iterations fit the model to the nature of the data; the model's problem-solving is learned afterwards, with small variations of the weights. Intuitively, this is because a model needs to understand what an image is before telling what it contains.

Data: the Solution and the Problem

The weights of a NN are adjusted to fit the data (e.g. an image associated with the swimmers' position). Therefore, instead of human vision biases, the models are based on data biases. The problem's different possibilities and variations must, therefore, be present in the data. If one wants a human detector and feeds only images of men to train a neural network, they cannot expect the model to detect women [START_REF] Wang | Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations[END_REF]. Such a bias is obvious, but this is not the case for all of them. Still for the same task, one must find images of persons of every age, in every posture, under every lighting condition, etc. Similarly, it is also important to have a variety of non-human objects that look like one, such as statues, photos, and monkeys, in every variation. This task is not feasible, as there are infinite variations and possibilities. This results in two consequences: data biases and domain definition, illustrated in Figure 2.6. The data domain can only represent part of the entire reality. Although models have generalisation capacities, anything outside of the domain may not fit the end model, giving unpredictable results. This limit is very important to understand why NN sometimes work very well in experimental conditions, but not in real life: their data is not representative enough of reality. Biases, on the other hand, exist because not all data elements can have the same representation in the dataset [START_REF] Mehrabi | A Survey on Bias and Fairness in Machine Learning[END_REF], as in Figure 2.6, right, with an over-representation of male images. In consequence, the data misrepresents female characteristics (size, posture, hair length, clothes...). During training, the average gradient will be pushed (i.e. biased) towards male attributes, as there are statistically many more of them in the batches. The resulting model will be more imprecise with images of females. False correlations might also appear in the data. In [START_REF] Tulio Ribeiro | Explaining the Predictions of Any Classifier[END_REF], Ribeiro et al. trained a model to classify huskies and wolves. Figure 2.7 shows how the model considers the task: it only pays attention to the background, and if snow is visible, it considers the image to be a wolf. After observation, they understood that in the training dataset, each wolf image contains a snowy background. This correlation in the training dataset has no meaning in reality. This is the "shortcut-learning" problem [START_REF] Geirhos | Shortcut learning in deep neural networks[END_REF]: if there is an easy-to-detect feature in the training set (usually low-level features, such as textures and colours), the model does not train further. The gradient tweaks the weights to refine the precision on this specific feature instead.
Again, if getting all possible contexts of each and every class were feasible, this would not happen, as the snow/wolf correlation would not appear in the dataset. But it is not currently possible. Data is often considered the most critical part of ML. It always has biases, whether easy to explain or not, and its domain cannot represent the entire reality. Effective methods exist to alleviate these problems with domain-specific data augmentation [START_REF] Yang | Image Data Augmentation for Deep Learning: A Survey[END_REF], but the problem cannot be completely ignored. In the end, it is always important to know the dataset's limitations, as they often define the final model's. The datasets created during this thesis have limitations that will be discussed in their corresponding chapters (chapter 3 for Swimm 400 and chapter 4 for RegiSwim 500).

Convolutional Neural Networks

Images are spatially structured in a way that a model analysing them must be translation invariant. An MLP that inputs an image, or the same image shifted by a few pixels, will output different results. This is a problem, as both represent the same general content. In 1989, inspired by the pioneering work of Fukushima [START_REF] Miyake | Self-Organizing Neural Networks with the Mechanism of Feedback Information Processing[END_REF][START_REF] Fukushima | Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position[END_REF][START_REF] Fukushima | A Hierarchical Neural Network Model for Selective Attention[END_REF], LeCun et al. [START_REF] Lecun | Backpropagation Applied to Handwritten Zip Code Recognition[END_REF] proposed to merge 2D filters with NN learning algorithms to automatically learn the coefficients of a convolution kernel. This was the first of a whole new type of NN, extremely well fitted for image analysis, called the Convolutional Neural Network (CNN). An example is illustrated in Figure 2.10. As seen in Section 2.1.1, 2D filters extract features from an image, resulting in what is called a feature map. For a CNN, a layer is composed of stacked 2D filters inputting and outputting different feature maps. These maps represent the manifestation of the different kernels, at the same spatial position in the image, as shown in Figure 2.8. The first layers are very similar to handcrafted 2D convolution filters: they detect low-level features in the image, such as colours, edges, corners, etc. As the features combine through the layers, more and more abstract visual characteristics, such as complex textures and shapes, are extracted. Around the last layers of a CNN, the expressed features are often understandable by a human, as they react to the elements composing the object they were trained to understand. For instance, with human detection, the last feature maps can describe concepts close to faces, legs, hands, or clothes. This section explains in more detail the use of deep models applied to CV. This domain requires specific elements and architectures of networks to perform optimally. One can select from a toolbox of multiple elements to construct a model, but they need to be correctly managed to obtain good results. Depth is also critical: before recent improvements, it was seen as impossible to go "too deep", and the use of shallow networks was the norm. This section also provides explanations of how this was alleviated.
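A convolution layer with learned kernels can be sketched in a few lines of PyTorch; the shapes below are illustrative:

```python
import torch
from torch import nn

# One convolution layer: 16 stacked (3x3) kernels, each producing one
# feature map from the 3 input channels of an RGB image.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

image = torch.randn(1, 3, 224, 224)     # batch of one RGB image
feature_maps = conv(image)
print(feature_maps.shape)               # torch.Size([1, 16, 224, 224])
```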
Deep Learning

Looking at [START_REF] Lecun | Backpropagation Applied to Handwritten Zip Code Recognition[END_REF], one of the earliest CNN architectures, there are only 3 hidden layers. In [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF], the LeNet-6 architecture has 6. However, 2016's ResNet-152 [START_REF] He | Deep Residual Learning for Image Recognition[END_REF] counts 152 layers. This section explains the different challenges regarding depth and NN size; between data, processing power, and architecture, this question concerns many parts of the domain.

Better Abstraction

Understanding how CNNs convert images into concise, abstract, and meaningful data is the key to grasping the interest of deep models. Convolution kernels describe local patterns, so the abstract visual area squared in red in Figure 2.8, left, is represented by the (quantitative) red vector in the layer output. Suppose the filter #i corresponds to a vertical edge detector and that the value of channel #i at position (x, y) is high in the layer output. In this case, the model understands there is a vertical edge at position (x, y). Each of the other kernels of the layer represents a feature, whose presence can be quantified by looking at the corresponding index in the vector. This is abstraction: transforming abstract local features of the image into concrete numbers in a vector. The red vector is an abstract representation of the image's red square area. Going further, suppose that at another position in the layer output, no dimension corresponding to edge detectors contains a high value. On the next layer, a filter can pay attention (i.e. give a high weighting) to these dimensions and be activated (i.e. output a high value): this would be a "low contrast detector", which is slightly higher level than edge detectors. This behaviour propagates throughout the layers: low-level features are combined to compute higher-level ones. As such, each layer inputs the features collected by the previous layer and extracts more complex and meaningful features. The more layers there are, the higher the level of the features. Although the first layers are often texture-oriented and the highest revolve around almost understandable concepts, it is difficult to explain exactly what happens in the intermediate layers. Visualisation tools [START_REF] Olah | Feature Visualization[END_REF] can give an intuitive understanding of the features, but it is not clear yet how depth improves abstraction. Further, the notion of abstraction is not quantifiable and thus troublesome to grasp. Another answer is brought by [START_REF] Raj | Theory of Deep Learning: Role of Depth[END_REF], explaining ML models as Fourier function approximators. The Fourier transform of any complex function contains high frequencies, and they prove that approximating rapidly oscillating sinusoids requires more and more layers as the frequency increases. Therefore, approximating complex functions requires some depth to be precise enough.

Previous Limitations

As deep CNNs can powerfully abstract images into understandable information for computers, one can wonder why such models were not used before. The general idea of adding layers to increase the representation has been present since at least 1965 with [START_REF] Ivakhnenko | In: Cybernetic predicting devices[END_REF], but DL started several decades later. The answer involves data, implementation, and processing power.
Despite having only 3 layers, the first CNN was very slow, as shown by this early digit identifier model from 1993 [START_REF] Lecun | Convolutional Network Demo from 1993[END_REF]. Although the computer was state of the art and the model very well implemented, several seconds were required to analyse a handful of numbers. This was due to hardware limitations, as a NN requires powerful processing units to input an image and output a result. This was already slow and difficult to set up, so no one had the resources to train significantly deeper models in the 90's. More powerful machines offered a first solution to this problem, but the bigger revolution came with the implementation of CNN inference on the Graphics Processing Unit (GPU) in [START_REF] Ho | Parallelization of Cellular Neural Networks on GPU[END_REF], which claimed accelerations of 8 to 17 times. Indeed, a CNN can benefit from a GPU due to its parallelization power: as the filters of a layer are convolved across the image with no interaction with each other, they can all be processed individually in the different cores of a GPU. This enabled drastic speed-ups of training and inference, and it is still used and improved upon today.

In principle, the deeper a network, the higher its abstraction power. However, if shallow models can be completely trained with few examples, deep ones require significantly more, with as many labels. As we will see in Section 2.3.2, ways were eventually found to alleviate this; but when NN were not as advanced as today, this was a problem. One needed not only to assemble a big set of images, but also to give each a label (i.e. an output value) to train a model on them. Even the MNIST dataset [START_REF] Deng | The mnist database of handwritten digit images for machine learning research[END_REF] (1998, 70,000 images) and the Letter Dataset [START_REF] Frey | Letter Recognition Using Holland-Style Adaptive Classifiers[END_REF] (1991, 20,000 images), which provided tens of thousands of images each, were not big enough for deep models by current standards.

Figure 2.9 - Sigmoid (left), hyperbolic tangent (tanh) (center) and Rectified Linear Unit (ReLU) (right) activation functions. The first two saturate when moving away from the origin, thus resulting in a weak gradient as their absolute value gets bigger. ReLU has a higher derivative for any positive value, enabling a better gradient back-propagation. Scheme extracted from [START_REF] Teng | Structural Damage Detection Based on Real-Time Vibration Signal and Convolutional Neural Network[END_REF].

The situation unlocked in 2009 with the release of Imagenet [START_REF] Deng | Imagenet: A large-scale hierarchical image database[END_REF] and its 1.28 million images (back then) and thousands of classes. The amount of training data and the speed of GPUs are well-known early limitations of DL. However, one last element has to be considered to finally enable the effective training of the models used today: the ReLU activation function. Before 2011, the activation functions (the non-linear functions in between layers) were either the sigmoid or the tanh, illustrated in Figure 2.9 (left and center). Both these functions simulate biological neurons, which transmit a signal only if a certain threshold is reached. Their problem is that, except around the origin, their asymptotic behaviour results in a very small gradient. Back-propagating it throughout the layers results in quick gradient vanishing: layers too far from the output could not be trained efficiently.
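This saturation can be observed numerically; a short sketch of the sigmoid's derivative, which shrinks quickly away from the origin:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1 - s)

for x in [0.0, 2.0, 5.0, 10.0]:
    print(f"x = {x:5.1f}  sigmoid'(x) = {sigmoid_derivative(x):.2e}")
# x =   0.0  sigmoid'(x) = 2.50e-01
# x =   2.0  sigmoid'(x) = 1.05e-01
# x =   5.0  sigmoid'(x) = 6.65e-03
# x =  10.0  sigmoid'(x) = 4.54e-05   (almost nothing left to back-propagate)
```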
ReLU [START_REF] Nair | Rectified Linear Units Improve Restricted Boltzmann Machines[END_REF], introduced in 2011 and illustrated in Figure 2.9 (right), proposed a much better solution to this problem. As the positive part is completely linear, the gradient is meaningful and proportional to the error. The negative part is null, which behaves similarly to biological neurons, which do not transmit a signal under a certain threshold. The function is not differentiable at 0, which could create problems to compute the gradient; but in practice, if the input is precisely 0 (very unlikely in float32 precision), one can decide to apply either the positive or the negative behaviour of the function. In the end, it was the convergence of the GPU implementation (2006), the Imagenet release (2009), and the use of the ReLU activation function (2011) which enabled the implementation of AlexNet [START_REF] Krizhevsky | ImageNet Classification with Deep Convolutional Neural Networks[END_REF] and its 60 million parameters over 8 layers (2011-2012). Further evolutions of deep models will be discussed in the next sections, but the main technical advances allowing the emergence of DL were achieved with these techniques.

CNN Components

Elements with various roles compose a CNN, and understanding them is crucial to see how deep architectures are suited for image feature extraction. This section explains these elements, detailing what exactly composes a CNN. Figure 2.10 provides an example showing these different elements.

Receptive Fields

For computation purposes, the 2D filters present in a given CNN layer all have the same size. This size is called the receptive field of the layer, and it is represented in blue in Figure 2.10. Having a big receptive field enables the computation of features over a big part of the input, which therefore contain more information. Following this logic, LeNet has (5 × 5) receptive fields for all its layers, and AlexNet has (11 × 11), (5 × 5) and (3 × 3), to cite only these. This is especially true around the beginning of the network, where the features are not complex yet: the bigger the filter, the more context it can give to a region. In 2014 however, Simonyan et al. suggested in [START_REF] Simonyan | Very Deep Convolutional Networks for Large-Scale Image Recognition[END_REF] to limit the receptive fields to (3 × 3) convolutions, which is the smallest size capturing the notions of left/right, up/down, and center. They argue that combining 2 successive (3 × 3) convolutions results in a (5 × 5) overall receptive field: one can obtain any receptive field size just using (3 × 3) convolutions. As this involves more layers, it also involves more ReLU activations, which increase the complexity of the representation with non-linear operations. Moreover, the number of parameters is significantly reduced: 3 successive (3 × 3) convolutions contain 27 × C parameters for a receptive field of (7 × 7), while a single (7 × 7) contains 49 × C (C being the number of channels of the layers). For a given amount of data, the fewer parameters to train, the more each can be optimized (without considering over- or underfitting). And the more layers there are, the more abstraction the network can make. Therefore, increasing the number of (3 × 3) convolutions has two very important benefits. Szegedy et al.
[START_REF] Szegedy | Rethinking the Inception Architecture for Computer Vision[END_REF] even suggest going further, replacing one (3 × 3) convolution by a (3 × 1) and a (1 × 3), but it did not appear to significantly change the result, and the idea has not been broadly used. It is also possible to use (1 × 1) convolutions, but they do not provide spatial understanding. Instead, they are used to linearly combine local features (followed by the activation), as in [START_REF] He | Deep Residual Learning for Image Recognition[END_REF]. They offer a computation-wise cheap way to increase the network's complexity, having only (1 × C) parameters. In practice, they are used around the end of the architecture, once spatial features have been extracted and all that remains is to combine them for the task at hand. Another use of these (1 × 1) convolutions, in [START_REF] Szegedy | Going Deeper with Convolutions[END_REF][START_REF] Sandler | MobileNetV2: Inverted Residuals and Linear Bottlenecks[END_REF], is to reduce the number of channels, thus reducing the number of parameters. In the paper, they compare it to "feature distillation", where only the most important feature manifold of the previous features is kept. This is known as linear bottlenecks, as illustrated in Figure 2.13 and further explained in Section 2.2.3. The most efficient existing architectures used today [START_REF] He | Deep Residual Learning for Image Recognition[END_REF][START_REF] Simonyan | Very Deep Convolutional Networks for Large-Scale Image Recognition[END_REF][START_REF] Szegedy | Rethinking the Inception Architecture for Computer Vision[END_REF][START_REF] Ronneberger | U-Net: Convolutional Networks for Biomedical Image Segmentation[END_REF] follow this rule of thumb: a big receptive field for the earliest layers (i.e. (5 × 5)), then several classic (3 × 3) to complexify the features and bring spatial information to the representation, then finally (1 × 1) filters to assemble these features so that they are suited for the given task. In Section 2.2.3 this will be nuanced, but the core idea will remain.

Spatial Sampling and Abstraction Increase

In Figure 2.10, in-between the convolution layers occurs a downsampling operation: the pooling. It extracts only one value per region (usually the maximum value, sometimes the average) for each channel. Also, the successive layers have an increasing number of channels, as in Figure 2.10, where there are 32 channels for the first layer and 64 for the second. These two mechanisms (spatial downsampling and abstraction increase) act together towards the same goal: to only keep the interesting features of the input. As the network computes deeper and deeper information throughout its layers, it is interesting to get as many high-level features as possible for a more pertinent representation of the input. However, this also increases the amount of data to keep in memory [START_REF]GPU Performance Background User's Guide[END_REF]. It is not rare to have hundreds or thousands of channels. With a (224 × 224) pixels input, 1024 output channels, and a computation in float32 (32 bits per value), the tensor size is 32 × 224 × 224 × 1024 bits, i.e. more than 1.6 × 10^9 bits (about 200 MB) per image. Although recent GPUs can handle such data, it would be very resource-intensive, especially when each intermediate layer output has to be kept in memory for gradient estimation. Furthermore, even with parallel computing, such a layer takes a long time to be processed, forbidding real-time analyses [START_REF]Memory-Limited Layers User's Guide[END_REF].
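The same arithmetic in a short sketch, using the shapes of the example above:

```python
# Memory footprint of one intermediate feature map in float32 (4 bytes/value).
h, w, channels = 224, 224, 1024
per_image = h * w * channels * 4                              # bytes
print(f"{per_image / 2**20:.0f} MiB per image")               # ~196 MiB
print(f"{32 * per_image / 2**30:.1f} GiB for a batch of 32")  # ~6.1 GiB
```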
Finally, most of the data in such a tensor is in fact either useless (around unimportant areas of the input) or redundant (each neighbouring pixel encoding almost the same information). Max Pooling downsizes the tensor by keeping, for each channel, only the most activated value of a small region. This prevents the previously mentioned problems by only keeping the most relevant information. In a small region (usually (2 × 2) pixels), knowing whether a channel is activated or not matters more than knowing which exact pixel activated it [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF]. The same article explains how spatial organization is important for CV models: the features' relative positions with respect to each other matter most. Further, for global tasks looking for one result in the image (classification mostly), one does not care about where the elements are, only whether they are present. For spatially precise tasks (detection, segmentation...), the channels tend to encode the spatial information, so reducing the tensor size does not change the result in too significant ways, up to a certain limit. Furthermore, pooling enables a natural increase of the filters' receptive field [START_REF] Simonyan | Very Deep Convolutional Networks for Large-Scale Image Recognition[END_REF]: with a (3 × 3) convolution applied just before a (2 × 2) downsampling, the surface described by each spatial element of the tensor is doubled. If, in the end, the output is (1 × 1 × C), each channel encodes something about the whole image, which is very powerful. Combining pooling and channel increase converts local pixel distributions into global semantic meaning. Instead of the Max Pooling operation, it is also possible to use convolutions with a stride of N > 1. The stride is the step between each convolution operation, so a stride of 2, for instance, means only one of every 2 pixels will ever be the center of a convolution. This too reduces the output size. This method has some advantages, as the filters will learn to handle the downsampling by back-propagating through them. Further, it is faster, as only part of the input is processed, whereas with pooling part of the computation is discarded afterwards. Pooling is significantly more frequent, though, and one can argue it is more powerful, as it compares the output of each position, contrary to strides larger than 1.
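Both downsampling options in a minimal PyTorch sketch, with illustrative shapes:

```python
import torch
from torch import nn

x = torch.randn(1, 64, 56, 56)

# Max pooling: keeps, per channel, the maximum of every (2x2) region.
pool = nn.MaxPool2d(kernel_size=2)
print(pool(x).shape)                    # torch.Size([1, 64, 28, 28])

# Same downsampling obtained with a strided convolution instead.
strided = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1)
print(strided(x).shape)                 # torch.Size([1, 64, 28, 28])
```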
CNN Architectures

CNN components can be combined in different orders, with different parameters, forming an architecture. Depending on their nature, they can accomplish different objectives with different performances. To complete a task, one must choose between them all and possibly adapt them to fit a specific problem. In this section, the main architectures used in this thesis will be described. We also note that Vision Transformer architectures [START_REF] Dosovitskiy | An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale[END_REF] will not be explained here. They were not used during the thesis due to their data requirements and the fact that we aimed at reducing our data needs. More details on the subject are provided in the perspectives of this manuscript (see section 6.3.6).

Figure 2.11 - An encoder-decoder architecture: the encoder compresses the input into an embedding vector, which the decoder expands back.

Encoder-Decoder Architectures

The encoder-decoder architecture, illustrated in Figure 2.11, can have different objectives, mostly related to image-to-image translation (out of scope for this thesis) or unsupervised training with autoencoders. The first part of an encoder-decoder is the encoder, which has the most basic use of a CNN: encoding the information, i.e. transforming pixel distributions into a vector with semantic meaning, of smaller size than the input. The resulting dimension reduction can be used in a broad variety of contexts, as encoders are usually only the first part of the network. For a classification task, fully-connected layers or (1 × 1) convolutions are added at the encoder's end to output a vector the size of the number of classes. For detection tasks, one adds detection layers on top of the encoder. Note that encoders usually reduce the data width and height, but this is not always the case, as in [START_REF] Girshick | Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation[END_REF], where almost no pooling is applied to preserve as much spatial information as possible. Encoders convert images into semantic vectors, and decoders do the opposite. They are the architecture of choice to generate images or pixel distributions with CNNs. They input a semantic vector that is converted into an image. The values present in the vector entirely define the output image, just as the output vector of an encoder is defined by the input image. The elements composing decoders are similar to the ones in encoders, but they use upsampling instead of downsampling. This operation can be achieved in two main ways: either with statistical interpolation algorithms (bilinear, nearest neighbour, etc.) or with transposed convolution layers. Regular convolution operations input an area of multiple values and output only one; they can be expressed as a matrix multiplication to speed up the process. Transposed convolutions do the opposite and can be expressed as the matrix multiplication of the input with a transposed convolution matrix.

An autoencoder [START_REF] Scholz | Nonlinear Principal Component Analysis: Neural Network Models and Applications[END_REF][START_REF] Hinton | Learning Internal Representations by Error Propagation[END_REF][START_REF] Hinton | A Fast Learning Algorithm for Deep Belief Nets[END_REF] is an encoder-decoder architecture where the target output is equal to the input, as illustrated in Figure 2.11. The model is trained to reconstruct the original image after a data compression [START_REF] Hinton | Reducing the Dimensionality of Data with Neural Networks[END_REF]: the bottleneck (i.e. the junction of the encoder and the decoder) contains less data than the input due to the pooling layers. Different reconstruction losses can be used, mainly the L_1 and the Mean Squared Error (MSE):

    MSE = (1/n) ∑_i^n (ϕ(X_i) - X_i)^2 ,    L_1 = (1/n) ∑_i^n |ϕ(X_i) - X_i| ,    (2.4)

X_i being an element of a batch of size n and ϕ the function representing the autoencoder, hence ϕ(X_i) is the reconstructed image. These losses are complementary [START_REF] Hanley | Visualizing the Median as the Minimum-Deviation Location[END_REF]. The MSE converges faster at first, because quadratic functions penalize larger errors more, but it gives less weight to errors smaller than 1. The L_1 has the opposite behaviour, and both can be used at once: the MSE to quickly reduce important errors, the L_1 to make smaller adjustments. Such losses are not indicators of the quality of an image: a reconstructed image can have a low MSE compared to the original yet be blurry.
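As a concrete illustration of these reconstruction losses, a minimal autoencoder sketch in PyTorch; the layer sizes and input resolution are illustrative:

```python
import torch
from torch import nn
import torch.nn.functional as F

# Encoder: two strided convolutions halve the resolution twice (64 -> 16).
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
)
# Decoder: transposed convolutions upsample back to the input size.
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid(),
)

x = torch.rand(8, 3, 64, 64)            # dummy batch of images
x_hat = decoder(encoder(x))             # reconstruction phi(X)

loss = F.mse_loss(x_hat, x) + F.l1_loss(x_hat, x)   # both terms of Equation 2.4
loss.backward()
```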
The perceptual loss [START_REF] Grund Pihlgren | Improving Image Autoencoder Embeddings with Perceptual Loss[END_REF] addresses the problem by comparing high-level features instead of pixels. It relies on a trained model with frozen weights ψ, which outputs an embedding vector. One compares (usually with the cosine distance) the embedding vectors of the original image and of the reconstructed image. Formally, this follows:

    Perceptual Loss = (1/n) ∑_i^n cos(ψ(X_i), ψ(ϕ(X_i))) ,    (2.5)

As it is not possible to reconstruct lost data, the model focuses on encoding and reconstructing the visual features and colours that are the most represented in the dataset. As such, an autoencoder model trained on faces will perform poorly if fed pools, because the specific visual features they contain are not present in the original dataset. However, even well-reconstructed features can be too broad for a specific task. Indeed, an autoencoder trained on swimming races will be likely to reconstruct swimmers, but also spectators, stands, or the poolside, as they are a significant part of the training images, even if they are not interesting for race analysis purposes. Further, with distinct subgroups of images (e.g. X images of pool A, Y images of pool B, etc.), the autoencoder will separate regions in the latent space (a distinct region per pool), as in Figure 2.12, left. If the latent space is sparse and not continuous, the features extracted from new pools will poorly describe them. A method to circumvent this problem is to use a Variational Autoencoder (VAE) [START_REF] Diederik | Auto-Encoding Variational Bayes[END_REF]. Such a model forces the continuity of the latent space by adding a regularization term on the distribution of the latent vectors: the Kullback-Leibler Divergence (KL div). This function measures the difference between 2 distributions. It can be used as a loss function to force the distribution of the latent vectors at the bottleneck to be close to a multi-variate normal distribution. The exact implementation of a VAE is out of the scope of this section, but the result is a dense and continuous manifold at the bottleneck (as illustrated in Figure 2.12, right), hence a better encoder generalization.

ResNets

One inconvenience of the back-propagation algorithm is that the gradient is less and less significant after each layer: the output layer has a very precise gradient, but the input layer's is diluted and very indirect. As a result, the deeper an architecture, the less its first layers are trained. This is called vanishing gradient, and it has two major drawbacks: (i) it slows down training by requiring more iterations to update the first layers enough, and (ii) it increases the risks of overfitting, as shortcuts can be found to overcome the slow learning. To reduce it, it is necessary to add more and more data, but the needs for data increase too quickly, and very deep architectures are not feasible this way. The VGG-19 [START_REF] Simonyan | Very Deep Convolutional Networks for Large-Scale Image Recognition[END_REF] architecture, with 19 layers, was considered very deep when it was introduced, and the authors mentioned the extensive experiments they had to make to reach convergence. Increasing the amount of data is thus not a scalable way to increase the depth of an architecture. A solution was proposed by He et al. [START_REF] He | Deep Residual Learning for Image Recognition[END_REF]. They introduced the idea of one layer connected to multiple parts of the network, at multiple depths.
Due to these "skip connections", part of the gradient is now directly propagated from one layer to any other. In Figure 2.13, left, the gradient back-propagates from the output to the input in two ways. First, the long way, through L_1 and L_2: the gradient has started fading away when arriving at the block input. However, with the skip connection, the output's gradient also back-propagates directly to the input with no fading. Training is therefore more efficient, as even the first layers have a significant gradient. This technique enabled very deep architectures. The most efficient way to build them is using successive feature extraction blocks, as in Figure 2.13. One can stack several of them depending on different trade-offs, in particular accuracy/speed, as deeper architectures are more accurate but slower. Variations of the ResNet architecture, with 18 up to 152 layers [START_REF] He | Deep Residual Learning for Image Recognition[END_REF], and all the other architectures that followed [START_REF] Szegedy | Rethinking the Inception Architecture for Computer Vision[END_REF][START_REF] Szegedy | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning[END_REF][START_REF] Howard | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications[END_REF][START_REF] Sandler | MobileNetV2: Inverted Residuals and Linear Bottlenecks[END_REF][START_REF] Tan | EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks[END_REF], are made of these blocks. Such models make better use of data and processing power, as showcased in [START_REF] Canziani | An Analysis of Deep Neural Network Models for Practical Applications[END_REF]. For the deeper ones (>50 layers), linear bottlenecks are used to reduce the number of parameters by locally reducing the number of channels, as explained in Figure 2.13, right.
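A minimal PyTorch sketch of such a feature extraction block, without the batch normalization and bottleneck variants of the original paper:

```python
import torch
from torch import nn

class ResidualBlock(nn.Module):
    """Basic block: the input is added to the block output, so the
    gradient also flows directly through the skip connection."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)       # skip connection

block = ResidualBlock(64)
x = torch.randn(1, 64, 56, 56)
print(block(x).shape)                   # torch.Size([1, 64, 56, 56])
```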
U-Net

U-Net [START_REF] Ronneberger | U-Net: Convolutional Networks for Biomedical Image Segmentation[END_REF] is the combination of the ResNet and encoder-decoder architectures. It is composed of an encoder-decoder with a residual connection between blocks of the same depth on both sides of the network, as shown in Figure 2.14. The difference with plain encoder-decoders is that the residual connections enable a direct propagation of the image's spatial content to the output. The architecture presented in Figure 2.14 is the original U-Net but, as for ResNets, it is possible to create variants by adding deeper blocks, linear bottlenecks, changing the number of channels in the layers, etc. As long as it is an encoder-decoder with skip connections, it can be considered a U-Net-like architecture. The encoder extracts deep features describing the image. As the deep tensors are upsampled in the decoder, they are stacked with slightly lower-level features of higher spatial precision. As a result, U-Net is extremely powerful to perform segmentation tasks, where both semantic meaning and pixel precision are required. Such a fully convolutional architecture can be trained with small amounts of data due to the multitude of skip connections. Indeed, usually, the farther a layer is from the output, the lower its gradient. With U-Net though, the earliest convolution blocks are directly linked to the latest ones, symmetrically. Therefore, this architecture has a gradient flow which enables fast gradient back-propagation through each layer. U-Net was originally developed for biomedical image segmentation, where data is often lacking. It is a standard benchmark for the segmentation of organs or tumours. Due to its interesting training properties, this architecture will be used in chapters 3 and 4.

Data and Supervision

Data is necessary to train a ML model, but its nature and amount depend on the available resources. Although it is hard to quantify, studies show that data wrangling in general represents an enormous share of the time of a ML model development (50-60% in [START_REF]Data Engineering, Preparation, and Labeling for AI[END_REF][START_REF]Data Scientists Spend Most of Their Time Cleaning Data[END_REF]). To obtain the best model out of the available data, different algorithms have been proposed, each tackling a specific configuration of data. In this section, different issues around data and training will be discussed, as these two important aspects of ML are entangled: data serves the training algorithm, but the training algorithm must fit the nature of the data.

Computer Vision Datasets

Data has already been introduced as one of the key elements in the training of DL models. We already mentioned its possible biases and the limits of its domain. These paragraphs explore in more detail what data is, how it is obtained, and exactly how annotation is considered before the training process.

Data for Computer Vision

In the context of CV, a dataset is a collection of images that will be input into a model to train it. Depending on the context, each image can be associated with a label. The vast majority of labels are either a set of classes [START_REF] Deng | Imagenet: A large-scale hierarchical image database[END_REF][START_REF] Deng | The mnist database of handwritten digit images for machine learning research[END_REF][START_REF] Krizhevsky | Learning Multiple Layers of Features from Tiny Images[END_REF], bounding boxes [START_REF] Lin | Microsoft COCO: Common Objects in Context[END_REF][START_REF] Everingham | The PASCAL Visual Object Classes Challenge[END_REF], or segmentation maps [START_REF] Sergi Caelles | The 2019 DAVIS Challenge on VOS: Unsupervised Multi-Object Segmentation[END_REF][START_REF] Geiger | Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite[END_REF][START_REF] Cordts | The Cityscapes Dataset for Semantic Urban Scene Understanding[END_REF], because they represent the 3 main challenges in CV. The first one is a vector the size of the number of classes: each class present is set to 1, the others to 0 (although specific variations can be made [START_REF] Xu | End-to-End Semi-Supervised Object Detection with Soft Teacher[END_REF][START_REF] Gou | Knowledge Distillation: A Survey[END_REF]). Bounding boxes can be encoded in several ways depending on the method itself; they will be presented in detail in Section 3.2.1. Segmentation maps, finally, are matrices of the shape of the image, where each pixel belonging to an object of interest in the original image is set to 1. There can be several classes, represented by as many different 2D matrices, which can overlap (i.e. different matrices can have the same pixel set to 1) or not, depending on the task. Creating these target outputs is a manual, time-consuming process: classes are the fastest to annotate, bounding boxes come second, and segmentation is the slowest. It is complicated to estimate the annotation duration of each, as it depends on the subject, but it is a different order of magnitude each time. For instance, to create Figure 2.1, the author needed 5 seconds to create the classes, around 30 seconds to create the boxes, and 3 minutes for the segmentation.
Although it is indicative and not representative, it showcases how different the annotation time is for each type of label, even on the same image. In practice, creating a detailed segmentation map is very long. The widely used COCO segmentation dataset [START_REF] Lin | Microsoft COCO: Common Objects in Context[END_REF] does not contain pixel-perfect annotations, but short straight segments defining the edges of the objects. Such an approximation reduces the labelling time enormously without changing the data significantly.

Data Gathering

The labelling process is very long, but gathering images can be too. With the Internet [START_REF]Welcome to the Internet[END_REF], it is now easier than ever to create these collections of data. It is not always straightforward, due to many problems (copyrights, modified images, small resolutions, etc.), but it enables fast image collection gathering. In our context of swimming, this was an issue: although many swimming races (mostly from TV streams) exist online, they are subject to copyright. Further, TV's way of filming is extremely standardized, with only a few different shot angles, and only part of them exploitable for deep analysis. In Figure 2.15, the shots are from the same race, where the camera angles change regularly in significant manners, making the continuous analysis of a race very difficult. These angles are made for TV, with a huge emphasis on dynamism and individual swimmers (with close-ups), instead of constrained angles with a wide view, adapted to analysis. As a result, gathering data from TV streams is not an optimal solution in our case. In this thesis, we preferred to use videos from an online database of swimming race videos and race analyses: https://www.dartfish.tv/ChannelHome?CR=p153270. The majority of these videos are private, and access was kindly permitted by the Fédération Française de Natation (French Swimming Federation) (FFN). The races were filmed in varied conditions and using several camera positions (due to the pools' constraints at the time of the competition). All the swimming styles are present and both genders are represented equally. From these hundreds of videos, we selected a dozen representing the obvious classes (gender and style) in similar proportions, to avoid class biases.

Data Cleaning

Once raw data is gathered, it is important to "clean" it. Data cleaning means removing every element that is not suited for training or testing data [START_REF] Ihab | Data cleaning[END_REF]. Unclean data can comprise multiple occurrences of the same image, modified data (photomontages or images with marks, for instance, are omnipresent in the Pascal-VOC dataset [START_REF] Everingham | The PASCAL Visual Object Classes Challenge[END_REF]), too small images, unusual ratios (there is a (500 × 32) pixels image in Imagenet [START_REF] Deng | Imagenet: A large-scale hierarchical image database[END_REF]), etc. This process is essential to obtain the best model in the end, as unclean data can reduce the performance of a model by a significant margin [START_REF] Geiger | Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite[END_REF]. Indeed, such data can present unique features towards which the computed gradient will be biased during training. If said features are not representative of the final use case, this part of the gradient will only create divergence, resulting in a less efficient model. Multiple occurrences of the same image in a dataset also cause biases.
An image presented N times will have N times more weight than the others during training, and the resulting model will be biased towards its specific features. If said image is rare and contains valuable information, a specific weighting can be given to it; however, this is rare, and such an operation is done after data cleaning, with a good understanding of the usable data at hand. Finally, data cleaning is also done after annotation, to make sure the labels correspond to the data. This can be done with visualisation tools [START_REF] Chae | Visualization for Classification in Deep Neural Networks[END_REF] or crowd-sourcing methods [START_REF] Deng | Imagenet: A large-scale hierarchical image database[END_REF] for big datasets, or manually for small ones.

Orders of Magnitude of Computer Vision Datasets

The ability to digest huge amounts of data has not reached the limit of recent architectures [START_REF] Vaswani | Attention is All you Need[END_REF][START_REF] Dosovitskiy | An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale[END_REF] (see the different test sample numbers in [START_REF] Tom | Language Models are Few-Shot Learners[END_REF]). Training with thousands, tens of thousands, or even millions of images is frequent, and it seems to be generally beneficial to the deeper models. Indeed, before the explosion of deep models, one of the most used detection datasets was Pascal-VOC [START_REF] Everingham | The PASCAL Visual Object Classes Challenge[END_REF], which counts 20,000 images in its 2012 version, with 20 different classes. To train deep models, this is undersized. It can still be used as a benchmark nowadays, but it is less considered than other datasets of higher orders of magnitude, because a significant proportion of current methods cannot work with that amount of data. One of the most used datasets in CV is Imagenet [START_REF] Deng | Imagenet: A large-scale hierarchical image database[END_REF], which has 14 million images, but not all are used for every case. For instance, the most highly used subset distribution is the 2012-2017 ILSVRC classification and localization dataset, which contains 1.5 million images "only". It is made of 1000 classes, splittable into 20,000 sub-categories. COCO [START_REF] Lin | Microsoft COCO: Common Objects in Context[END_REF] is also a massively used dataset. It counts 200,000 labelled images with bounding boxes and segmentation masks of 80 classes (91 for COCO-stuff [START_REF] Caesar | COCO-Stuff: Thing and Stuff Classes in Context[END_REF], which uses the same images). As an image can show multiple elements, the total number of annotated objects is 1.5 million. COCO also contains more than 100,000 images with no labels. These two datasets are representative of nowadays' orders of magnitude. State-of-the-art models depend on their size to function and often could not work with less data. In our case, no dataset of the sort exists in swimming, and it was not possible to create a comparable set during this thesis (it could be done, but this would be outside of the scope). As such, this thesis will focus on making better use of few data elements rather than on over-sized models that assume abundant data.

Training: Different Levels of Supervision

Training a deep CNN can be done in several ways, which give significantly different results. Classic methods map the input/output pairs in the data, but such pairs do not always exist. Further, straightforward methods can be improved with prior or posterior training on other data.
For a given task, the optimal training algorithm depends on the exact nature of the available data and of the expected output; due to external constraints (mostly data availability), the algorithm actually usable can differ from the optimal one. The following section describes the challenges associated with NN training in general and how they relate to data. A scheme illustrating the relationship between data and training algorithm is shown in Figure 2.16.

Supervised Learning

Fully-supervised training is the basic level of supervision. It means the desired outputs are present in the massively available data. In this condition, learning is straightforward and no other method or trick is required to optimize the model.

Transfer learning [START_REF] Caruana | Learning Many Related Tasks at the Same Time with Backpropagation[END_REF][START_REF] Bengio | Deep Learners Benefit More from Out-of-Distribution Examples[END_REF][START_REF] Bengio | Deep Learning of Representations for Unsupervised and Transfer Learning[END_REF] is frequently used when one is limited to a small dataset that does not have enough samples to get acceptable results. With this approach, one relies on a big dataset to train a first model. This dataset needs to share similar visual features with the final task at hand. Once the first model is trained, one freezes the feature extraction layers and uses a smaller specialized dataset to train only the last layers on it. This is illustrated in Figure 2.20, Phase 2. For instance, if one wants to classify the swimming style in images, they can gather a few images of each style. However, this small dataset can be too limited to train a CNN on. One can use a model pre-trained on another task involving swimmers, such as detection, keep only the encoder of the model, freeze its weights, and only train classification layers added on top of it. This works because the features extracted on the initial domain are similar to what the new task needs. The limit of this method is that the original and end dataset domains must be close enough: if the features extracted by the encoder are too different from what the end model needs, it will work poorly. Sadly, a swimming pool is a very specific environment, with very peculiar features related to how water and light interact. Our preliminary tests showed that transferring knowledge from a daily-life dataset (Imagenet, COCO, etc.) brings limited priors and that retraining all the layers is necessary.
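A minimal sketch of this freeze-and-retrain scheme, assuming a torchvision ResNet-18 backbone and a hypothetical 4-class swimming-style task:

```python
import torch
from torch import nn
from torchvision import models

# Start from a backbone pre-trained on a large generic dataset.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the feature extraction layers.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head; only these weights are trained on
# the small specialized dataset (here, 4 hypothetical swimming styles).
model.fc = nn.Linear(model.fc.in_features, 4)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
```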
Few-shot learning tackles an even more extreme case of this "annotated data lacking" problem, when one only has a handful of images per class (< 10). In this case, transfer learning is very limited, because this amount of data is not enough to train the end layers of a model. Few-shot learning uses completely different techniques from transfer learning [START_REF] Fan | Few-Shot Object Detection with Attention-RPN and Multi-Relation Detector[END_REF][START_REF] Zhang | Meta-DETR: Image-Level Few-Shot Object Detection with Inter-Class Correlation Exploitation[END_REF][START_REF] Finn | Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks[END_REF][START_REF] Yan | Meta r-cnn: Towards general solver for instance-level low-shot learning[END_REF]. A more formal description: how to create a model given N classes of objects with K samples each, when K is small. Although data is lacking, this is still considered supervised training, as one has a direct mapping of desired inputs and outputs. Indeed, with more efficient learning algorithms, this would not be different from regular supervised learning. For instance, one can desire to identify the swimmers in a race: they can crop images framing only the individual swimmers and train a model to identify them. If a new swimmer S, never seen before, enters a competition, one wants to find them in other races. However, the swimmer S is not in the dataset, and there are only a few images of them. In this case, a suitable solution is few-shot learning. To address few-shot learning, one must create a model with priors on a general domain. Then, the few elements of data will add knowledge to this prior before accomplishing the task. This definition is extremely generic, though, because the existing methods addressing few-shot learning are very diverse. A widely used few-shot classification method [START_REF] Vinyals | Matching Networks for One Shot Learning[END_REF] consists in training a model with a sufficient amount of data on a wide range of classes similar to the class one is interested in. This creates a model producing embedding vectors that describe the new class accurately. The few available images of this class are fed to the model and the output vectors are kept. Then, one compares the output of new images with these vectors: if they are close enough (according to a metric and threshold defined by the user), the image likely features the class of interest. In [START_REF] Fan | Few-Shot Object Detection with Attention-RPN and Multi-Relation Detector[END_REF], the authors compare the results obtained with a dataset they introduce, containing 1000 classes but few images (hundreds per class), with the results from a model trained on COCO and its 80 classes (each with tens of thousands of instances). The results (Figure 8 of [START_REF] Fan | Few-Shot Object Detection with Attention-RPN and Multi-Relation Detector[END_REF]) show that the more classes there are in the base dataset used to train the embedding model, the better the model generalizes to new classes. For our task of identifying swimmers, one can gather images of many of them to train a model in a fully-supervised fashion. When a new swimmer S, who has never been seen before, appears, the model can output a vector for one of their images. If the base dataset contains enough swimmer variations, the resulting vector will describe S in a discriminant manner: the distance to other swimmers' embedding vectors will be large, while the distance to the known embedding vectors of S will be small. This idea is explained with Figure 2.17.

Figure 2.17 - An illustration of one-shot learning (i.e. the most extreme case of few-shot learning) applied to swimmer identification, inspired by [START_REF] Vinyals | Matching Networks for One Shot Learning[END_REF]. The image of the new swimmer is embedded by the model. Afterwards, each new swimmer image is compared to the embedding vectors of the different swimmers, including the new one.

Few-shot learning is also addressed by other approaches, such as meta-learning [START_REF] Finn | Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks[END_REF][START_REF] Yan | Meta r-cnn: Towards general solver for instance-level low-shot learning[END_REF], which proposes to retrain a small model for each new class. This model outputs a vector that weights the output feature map of a bigger model. In the resulting feature map, the characteristics of the new class will be highlighted, so that the last layers of the main model identify the class correctly.
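The comparison step of the embedding-based approach can be sketched as follows; the gallery, the 128-dimension embeddings, and the 0.8 similarity threshold are hypothetical:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
# Hypothetical embedding vectors output by a frozen encoder: one reference
# per known swimmer, including the single available image of the new one, S.
gallery = {name: torch.randn(128) for name in ["A", "B", "S"]}

def identify(query, threshold=0.8):
    """Return the closest known swimmer, or None if none is close enough."""
    best, best_sim = None, threshold
    for name, ref in gallery.items():
        sim = F.cosine_similarity(query, ref, dim=0).item()
        if sim > best_sim:
            best, best_sim = name, sim
    return best

# A slightly perturbed embedding of S is still matched to S.
print(identify(gallery["S"] + 0.05 * torch.randn(128)))
```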
Few-shot learning is also addressed by other approaches, such as meta-learning [START_REF] Finn | Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks[END_REF][START_REF] Yan | Meta r-cnn: Towards general solver for instance-level low-shot learning[END_REF], which proposes to retrain a small model for each new class. This model outputs a vector that weights the output feature map of a bigger model. In the resulting feature map, the characteristics of the new class are highlighted so that the last layers of the main model identify the class correctly.

Weakly Supervised Learning

The classic tasks of CV (classification, detection, segmentation) can be ranked by order of complexity, each lower level being a subpart of the higher one. This complexity can also be measured by the time required to annotate data accordingly. The challenge of weakly supervised training is to use one level of annotation during training and to output a higher level [START_REF] Li | Tell Me Where to Look: Guided Attention Inference Network[END_REF][START_REF] Ren | Instance-Aware, Context-Focused, and Memory-Efficient Weakly Supervised Object Detection[END_REF][START_REF] Choe | Attention-Based Dropout Layer for Weakly Supervised Single Object Localization and Semantic Segmentation[END_REF][START_REF] Zhou | Learning Deep Features for Discriminative Localization[END_REF]. The main interest of this is annotation time: labelling a classification dataset takes only a fraction of the time required to label a detection dataset, and likewise for detection with respect to segmentation. It can also be used for uncertain data. For instance, rare forms of pathologies exist where doctors are only sure an organ is malfunctioning, but do not know which cells are responsible for it. In such conditions, labelling the sick part is not possible but classifying sickness is trivial (as the patient is sick), so weakly supervised learning is a solution [START_REF] Xu | Weakly supervised histopathology cancer image segmentation and classification[END_REF][START_REF] Xu | Camel: A weakly supervised learning framework for histopathology image segmentation[END_REF].

A huge variety of methods address this challenge. We will explain one of them which is massively used: Class Activation Mapping (CAM). CAM, introduced in [START_REF] Zhou | Learning Deep Features for Discriminative Localization[END_REF], proposes detection or segmentation based on image-level annotation (i.e. class). This method relies on the fact that the features responsible for a classification result are localized in the image, thus in a feature map too. A CNN is trained on a classification task, with a global average pooling layer at the end to spatially reduce the feature map size, followed by a fully connected classification layer. Once training is complete, the architecture is modified at inference time. The feature maps before the average pooling are kept and weighted by the coefficient associating them to a given class in the classification layer. The mean of the resulting heatmaps highlights the regions responsible for the classification. In the absence of strong biases, this corresponds to a segmentation of the class's instances in the image. This is illustrated in Figure 2.18. It is recommended to have few pooling layers before the final global pooling, as they reduce the precision of the final segmentation heatmaps.

Figure 2.18 -CAM applied to swimmers (following [START_REF] Zhou | Learning Deep Features for Discriminative Localization[END_REF]). The objective is to detect swimmers using a model trained to classify the swimming style. Such a proxy task works because the swimming style can only be correctly identified by focusing on the swimmers. Each feature map of the last convolution layer's output is weighted by the coefficient it assigns to the freestyle class (w_1 to w_n). In our example, the weights of the 2nd (red) and n-th (green) feature maps are low compared to the 1st one's (blue), which roughly segments the swimmers.

Improvements have been made on the CAM-based algorithms.
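The core CAM computation is compact enough to be sketched directly. The snippet below assumes a classifier made of a convolutional backbone, a global average pooling and a single fully-connected layer, as described above; tensor names and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

# `features`: (C, H, W) output of the last convolution for one image;
# `fc_weights`: (num_classes, C) weight matrix of the classification layer.
def class_activation_map(features, fc_weights, class_idx, out_size):
    weights = fc_weights[class_idx]                     # (C,) one weight per map
    cam = torch.einsum("c,chw->hw", weights, features)  # weighted sum of maps
    cam = F.relu(cam)                                   # keep positive evidence
    cam = cam / (cam.max() + 1e-8)                      # normalize to [0, 1]
    # Upsample the coarse map to the input image resolution.
    return F.interpolate(cam[None, None], size=out_size,
                         mode="bilinear", align_corners=False)[0, 0]
```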
An NVIDIA team showcased limitations of the method in [START_REF] Ren | Instance-Aware, Context-Focused, and Memory-Efficient Weakly Supervised Object Detection[END_REF], together with solutions to circumvent them. First, these algorithms do not separate close instances of the same class, which are merged into one big bounding box. The proposed solution is to use Multiple Instance Learning (MIL) to divide a region of interest into multiple sub-regions if necessary. In the article, the authors also highlight the fact that CAM only highlights the discriminant parts of a class (the head of a dog instead of its entire body, for instance). They address this problem following [START_REF] Choe | Attention-Based Dropout Layer for Weakly Supervised Single Object Localization and Semantic Segmentation[END_REF], which proposes "attention-based dropout". This method detects the regions responsible for the classification and replaces them with grey patches to force the model to find other discriminant areas.

Metric learning is also a way to create an encoder model with weak labels. Although it does not directly give the expected end result (detection, for instance, if the weak labels are classes), it can generate robust encoders suited to the data. It relies on a distance loss between embedding vectors. Several losses of the sort exist, such as the contrastive loss [START_REF] Hadsell | Dimensionality Reduction by Learning an Invariant Mapping[END_REF] or the magnet loss [START_REF] Rippel | Metric Learning with Adaptive Density Discrimination[END_REF]. In this thesis, we focus on the Triplet Loss, defined as follows:

$$\mathcal{L}_{\text{triplet}}(A, P, N) = \max\big(0,\; d(A, P) - d(A, N) + \alpha\big)\,, \qquad (2.6)$$

where $\alpha \in \mathbb{R}$ is the margin, $d$ is a distance function (traditionally Euclidean or cosine), $A$ is the anchor, $P$ is the positive and $N$ is the negative. The purpose of the triplet loss is to make the distance between the embeddings of A and N larger than the distance between the embeddings of A and P, up to a minimum margin defined by α. The model only learns to position the input in the embedding space with respect to the other available inputs [START_REF] Kilian | Distance metric learning for large margin nearest neighbor classification[END_REF]. The other losses of the same nature also learn a relative position of the data in the parametric space, hence the general name "metric learning". With classification-labelled data, one extracts 2 images of any class (the positive and the anchor) plus an image of another class (the negative). The model groups images of similar classes in the latent space and isolates these groups [START_REF] Zhai | Classification is a strong baseline for deep metric learning[END_REF].
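Equation (2.6) translates almost line for line into code. The sketch below uses a Euclidean distance over batches of embedding vectors of shape (B, d); the margin value of 0.2 is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

# Direct transcription of Eq. (2.6): anchor, positive and negative are
# batches of embedding vectors of shape (B, d).
def triplet_loss(anchor, positive, negative, alpha=0.2):
    d_ap = F.pairwise_distance(anchor, positive)   # d(A, P)
    d_an = F.pairwise_distance(anchor, negative)   # d(A, N)
    return torch.clamp(d_ap - d_an + alpha, min=0).mean()

# PyTorch also ships an equivalent built-in:
# loss_fn = torch.nn.TripletMarginLoss(margin=0.2)
```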
As shown in Figure 2.19, training a model on a classification task with a softmax activation [START_REF] Bridle | Training Stochastic Model Recognition Algorithms as Networks can Lead to Maximum Mutual Information Estimation of Parameters[END_REF] at the end creates similar groups. However, the end distribution is very different: metric learning creates a dense manifold with smooth frontiers between classes, whereas the latent space of the other is significantly sparser. Being less specific, the former is better suited for feature encoding of new, similar images.

Figure 2.19 -Comparison of the latent spaces obtained with metric learning and with softmax-based classification. Extracted from [START_REF] Horiguchi | Significance of Softmax-Based Features in Comparison to Distance Metric Learning-Based Features[END_REF].

Weakly-supervised learning applied to swimmers detection has been experimented with during this thesis, using CAM. We used swimming style classification, which is fast to label as all the frames from a race belong to said class. The results were promising but quickly became obsolete once we created a labelled detection dataset (see Section 2.3.2).

Unsupervised Learning

When no labelled data exist, it is still possible to create generic embedding vectors of input images. To achieve this, unsupervised methods are required. Contrary to the other levels of supervision, where the model fits a task, unsupervised models are adapted to the input data itself. This means a CNN model trained in an unsupervised fashion will represent the image in general, without focusing on task-specific properties. Any information present in a significant part of the image dataset will be encoded in an embedding model. In practice, this method has important limits, as it does not directly output information such as a class, object position, or segmentation maps. However, it is often combined with other methods (transfer learning, clustering, ...) to finally achieve this. The following describes two methods of unsupervised learning: autoencoders and representation learning [START_REF] Chen | A Simple Framework for Contrastive Learning of Visual Representations[END_REF][START_REF] Chen | Big Self-Supervised Models are Strong Semi-Supervised Learners[END_REF]. Generative Adversarial Networks (GAN) are also a powerful method of the domain, but they have not been studied further during this thesis.

By definition, an autoencoder (introduced in section 2.2.3) is an unsupervised learning model. Its main interest is data reduction and abstraction at its bottleneck. The encoder part can be used as an embedding model for the type of images it was trained on [START_REF] Garrison | Extracting features from faces using compression networks: Face, identity, emotion, and gender recognition using holons[END_REF][START_REF] Xing | Stacked denoise autoencoder based feature extraction and classification for hyperspectral images[END_REF][START_REF] Meng | Relational autoencoder for feature extraction[END_REF]. The model can output a feature map with spatial data, which can be converted into an embedding vector with a global pooling layer. Autoencoders can be trained with a noisy input and asked to reconstruct the denoised image [START_REF] Vincent | Extracting and Composing Robust Features with Denoising Autoencoders[END_REF]. This helps the model learn a better representation of the content featured in the input instead of just reciting the content of an image. The added artificial noise can be varied: blur, (small) grey patches, colour changes, salt and pepper, etc. One must however be careful with this noise, as the model may learn to discard information relevant to the end task. For instance, with detection, small regions can be interesting to keep. If the autoencoder is trained to undo too strong a blur, it may dismiss small regions and only focus on the general aspect of an area. On the other hand, specific augmentations can be used to push the training in the intended direction, such as zooming out to learn the importance of small pixel regions.
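A single training step of such a denoising autoencoder can be sketched as follows. The `autoencoder` model and the Gaussian-noise amplitude of 0.1 are illustrative assumptions; any of the corruptions listed above could replace the noise line.

```python
import torch

# One denoising-autoencoder step: the model receives a corrupted image
# but is asked to reconstruct the clean one.
def denoising_step(autoencoder, images, optimizer):
    noisy = images + 0.1 * torch.randn_like(images)    # additive Gaussian noise
    noisy = noisy.clamp(0.0, 1.0)
    reconstruction = autoencoder(noisy)
    loss = torch.nn.functional.mse_loss(reconstruction, images)  # target: clean
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```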
Moreover, as VAEs have a more densely uniform latent space, they are generally preferred for unsupervised feature extraction [START_REF] Jimenez Rezende | Stochastic Backpropagation and Approximate Inference in Deep Generative Models[END_REF][START_REF] Durk P Kingma | Semi-supervised Learning with Deep Generative Models[END_REF]. They offer more general and adaptable features to further end-tasks.

A specific use-case of metric learning, called representation learning, can also be applied to unsupervised learning despite the lack of weak labels. To do so, one creates an anchor-positive pair using data augmentation on a single image [START_REF] Chen | A Simple Framework for Contrastive Learning of Visual Representations[END_REF][START_REF] Chen | Big Self-Supervised Models are Strong Semi-Supervised Learners[END_REF]. If said data augmentation does not change the content of the image in a discriminant manner (for the final end task), both the non-augmented and augmented images have the same content. The negative can be any other image. This method forces the model to learn what similarity is in the dataset and how to undo the data augmentation transformations. Therefore, the resulting encoder is robust to noise and learns a good representation of the dataset. As Figure 2.19, left, shows, the embedding space is well covered: there are no major "holes" in it, contrary to the right image where most of the space does not represent an image. This means that although pertinent features are extracted from the images, no distinct prior classes have been defined: the result can be used for extremely diverse end tasks.
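Reusing the `triplet_loss` function sketched earlier, one step of this augmentation-based representation learning might look as follows. The `augment` function, the `encoder` and the choice of negatives (here, simply the other images of the batch) are illustrative assumptions, a simplification of the contrastive schemes cited above.

```python
import torch

# Unsupervised representation learning: the positive is an augmented view of
# the anchor image; the negative is any other image of the batch.
def representation_step(encoder, augment, batch):
    anchor = encoder(batch)                    # original images
    positive = encoder(augment(batch))         # augmented views, same content
    negative = anchor.roll(shifts=1, dims=0)   # embeddings of other images
    return triplet_loss(anchor, positive, negative)
```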
Semi-Supervised Learning

Semi-supervised learning [START_REF] Dumitru Erhan | Why Does Unsupervised Pre-training Help Deep Learning?[END_REF] is the combination of supervised learning and unsupervised learning, illustrated in Figure 2.20. It uses both labelled and unlabelled data, the former in a significantly smaller amount than the latter. If the task is too complex for the few available labels, raw supervised training is not sufficient. Semi-supervised learning relies on a smoothness assumption [START_REF] Chapelle | Semi-Supervised Learning[END_REF] stating that if two elements are in the same cluster in the latent space, their outputs should be close in the end-task (although the notion of output proximity is complex to estimate for the detection task). There is also a manifold criterion, stating that the information of high-dimensional data lies in a lower-dimensional manifold. Using unsupervised learning, one can create an encoder to reduce the input data dimension and still find the required information in the output. This output being of lower dimension, fewer parameters need to be trained to perform the end-task starting from it. Therefore, less data is required. One can train an autoencoder on a given dataset and add layers on top of this encoder to perform the end task. The latent space of VAEs being more convex, it guarantees a better smoothness assumption than raw autoencoders [START_REF] Doersch | Tutorial on Variational Autoencoders[END_REF][START_REF] Diederik | Auto-Encoding Variational Bayes[END_REF].

It is possible to separate the unsupervised and supervised learning [START_REF] Jimenez Rezende | Stochastic Backpropagation and Approximate Inference in Deep Generative Models[END_REF]. Doing so, the training is in two distinct steps: (i) train the autoencoder without the labelled data and (ii) use the (few) labelled data to train additional layers for the end-task in a fully supervised fashion. One can also merge these two steps, using both labelled and unlabelled data at once [START_REF] Durk P Kingma | Semi-supervised Learning with Deep Generative Models[END_REF]. The model trains as a normal VAE with unlabelled data, but with labelled inputs, the learning combines the reconstruction loss with the end-task loss. It is also possible to fine-tune the entire network once the layers have satisfying weights, to get a more specific encoder. These methods using VAE have direct equivalents with GAN instead [START_REF] Mehralian | RDCGAN: Unsupervised Representation Learning With Regularized Deep Convolutional Generative Adversarial Networks[END_REF][START_REF] Odena | Semi-Supervised Learning with Generative Adversarial Networks[END_REF].

Another frequently used approach, named self-training, relies on pseudo-labels [START_REF] Fralick | Learning to recognize patterns without a teacher[END_REF]. A model is initially trained in a fully-supervised fashion using the available data. One then runs it on the unlabelled data and keeps some of the predictions as pseudo-labels. A new model is retrained using these pseudo-labels, and so on, several times. If one retrains a model after the first iteration using all the pseudo-labels (without dismissing the uncertain ones), this results in a model of the same precision as the original one [START_REF] Chapelle | Semi-supervised learning[END_REF]. Instead, it was proposed to keep a fixed proportion of the best new elements [START_REF] Cascante-Bonilla | Curriculum Labeling: Revisiting Pseudo-Labeling for Semi-Supervised Learning[END_REF] to improve results each time. Recently, curriculum learning (learning strategies going from easier to harder data) principles were used to improve on the idea [START_REF] Zou | Unsupervised Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training[END_REF][START_REF] Zhang | FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling[END_REF]. Pseudo-labels are also often used in other domains, such as weakly-supervised learning, as they do not require extra annotations [START_REF] Chen | A Simple Framework for Contrastive Learning of Visual Representations[END_REF][START_REF] Chen | Big Self-Supervised Models are Strong Semi-Supervised Learners[END_REF].
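The self-training loop just described can be summarized in a few lines. In this sketch, `train` and `predict` are hypothetical helpers (full supervised training, and per-sample prediction with a confidence score), and the 20% keep-ratio and 5 rounds are illustrative assumptions in the spirit of the fixed-proportion strategy cited above.

```python
# Minimal self-training with a fixed proportion of pseudo-labels kept per round.
def self_training(labelled, unlabelled, rounds=5, keep_ratio=0.2):
    model = train(labelled)                            # initial supervised model
    for _ in range(rounds):
        preds = [(x, *predict(model, x)) for x in unlabelled]  # (x, label, conf)
        preds.sort(key=lambda p: p[2], reverse=True)           # most confident first
        kept = [(x, y) for x, y, _ in preds[: int(keep_ratio * len(preds))]]
        model = train(labelled + kept)                 # retrain with pseudo-labels
    return model
```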
Semi-supervised learning has limitations. In [START_REF] Grund Pihlgren | Improving Image Autoencoder Embeddings with Perceptual Loss[END_REF], it is shown how small spots of rare colours are never reconstructed by autoencoders trained with only a reconstruction loss. Their bottleneck capacity being limited, they prioritize the likeliest distributions. This directly translates to our swimming context, where a swimmer is small in a pool and made of a significantly different colour distribution than water. An autoencoder prioritizes the reconstruction of the water and waves, as they represent a larger area of the image and thus allow a larger reduction of the reconstruction loss. During training, less gradient is devoted to the swimmers than to the uninteresting water. The resulting encoder is therefore very imperfect for detecting swimmers, as it never learnt to focus on them. Further, training a generative model to use its encoder as a basis for semi-supervised learning is not easy to optimize, as the reconstruction and the end-task require different feature learning. Indeed, it was shown that a good semi-supervised learning encoder actually needs a bad generator [START_REF] Dai | Good Semi-Supervised Learning That Requires a Bad GAN[END_REF].

Limits

Although the presented methods circumvent the lack of data, they present important limitations. First, the majority of works about non-fully supervised learning address classification, as it is significantly easier to train a model for this task than for detection. In the case of metric learning and few-shot learning in particular, the resulting embedding vector describes the image globally. Similarly to CAM, it is still possible to end the architecture with a global average pooling layer during training and to remove it afterwards. However, there is no guarantee that the resulting model will highlight the elements one is interested in for the end task. This is the gamble of transfer learning: despite the A and B domains looking similar to the human eye, a CNN model trained on domain A might not detect similar features in domain B.

Further, the performance of supervised learning is significantly better than all the other forms of learning. For instance, the fully-supervised SOTA on the COCO benchmark reaches an AP-50 of 80.8 [START_REF] Zhang | DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection[END_REF]. The weakly supervised learning SOTA [START_REF] Ren | Instance-Aware, Context-Focused, and Memory-Efficient Weakly Supervised Object Detection[END_REF] reaches only 24.8. With few-shot learning, the 1-shot SOTA is only 12.5 and the 30-shot SOTA reaches 35.0 [START_REF] Zhang | Meta-DETR: Image-Level Few-Shot Object Detection with Inter-Class Correlation Exploitation[END_REF]. Finally, whereas the fully supervised mean Average Precision (mAP) SOTA reaches 63.1 (the AP-50 was not available in the paper), the semi-supervised SOTA [START_REF] Zhang | Semi-supervised Object Detection with Adaptive Class-Rebalancing Self-Training[END_REF] reaches 26.1 with 1% of labelled data and up to 34.9 with 10% of labelled data. These SOTA methods are also computationally more expensive to train than basic supervised learning algorithms such as Faster R-CNN [START_REF] Shaoqing Ren | Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks[END_REF]. Their goal is not to compete against supervised learning, but their scores show that having more labelled data currently gives better results than any other method.

Chapter abstract

This chapter tackles the problem of detecting swimmers in an image. It aims at performing accurate detection while relying on few data with unusual features. It thus does not rely on transfer learning methods, but on a new model trained from scratch for our purpose. Its results will be evaluated and important design choices will be explained through different ablation studies. A dataset for swimmers detection will also be presented. It enables the training and evaluation of our detection model. The amount and nature of the finally acquired data will be discussed, as they shape the overall detection model's properties.

Introduction

Swimmer detection is the process of localizing the visible parts of a swimmer's body in a picture. Its role in the general pipeline is defined in Figure 3.1.
Combined with registration (see chapter 4), it provides the position of the swimmers in the pool, thus giving meaningful analysis information. Detecting a swimmer in an image -and by extension in a video -may seem like a relatively easy task, as state-of-the-art methods reach excellent results for human detection. However, the visible features in the environment of a pool are very different from those of daily-life walking and standing persons. A swimmer is mostly under a surface full of reflections and diffraction, affected by unpredictable waves creating many local deformations in the image. The light on the water tends to saturate the camera sensor, or at least obfuscate the swimmers underneath. Their accurate detection is thus harder than for a pedestrian. An entirely different model has to be created to detect swimmers during competitions and training. Contrary to daily-life objects, a swimmer does not have well-defined edges. The extent of this problem depends on the swimming style, but as shown in Figure 3.2, it is rarely possible to know a swimmer's perfectly fitted bounding box. In this chapter, we show an architecture and a training method that limit the impact of this problem.

Apart from that, even recent deep learning methods [START_REF] Redmon | YOLOv3: An Incremental Improvement[END_REF][START_REF] Girshick | Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation[END_REF][START_REF] Liu | SSD: Single Shot MultiBox Detector[END_REF] usually require a large amount of carefully labelled images. Many datasets of the sort exist [START_REF] Lin | Microsoft COCO: Common Objects in Context[END_REF][START_REF] Deng | Imagenet: A large-scale hierarchical image database[END_REF], but in our context, they show major drawbacks. First, training a model on such data collections is computationally expensive and time-consuming, which we want to avoid. This is one of the main emphases of this thesis: the feasibility and accessibility of our method. Second, no public dataset contains a "swimmer" class. Although the class "person" is very well represented, from a Computer Vision (CV) perspective the two look rather different, so it cannot simply be reused for swimmer detection. Third, although fine-tuning a robust generic model may enable the training of a new well-performing model with few data, it is not suited here. The same applies to more recent approaches of domain adaptation and few-shot learning. They work well if the image distribution is not too different from the daily-life context they were trained on, but this is not the case for swimming. Due to the many local deformations the pool environment creates, common existing models perform poorly. Regions Of Interest (ROI) detection and image embedding models are not suited for this particular situation. Therefore, the use of small, specialized, well-crafted datasets gets more and more attention in many applications, especially when they allow the creation of good detection models. Our solution gives excellent results and is usable in many different environments (inside/outside, pool/free water) and for a large variety of acquisition conditions (see Section 3.4). It can also easily be applied to other swimming-based sports like water polo (see Figure 3.10).
Our main contributions in this chapter are the following:
• a model for automatic and robust swimmer detection in competition videos,
• an annotated swimmer detection dataset,
• a method to easily train the model which reaches high performance with few data.

State Of The Art

This section starts with an overview of modern CV object detection techniques and then presents recent works applying them to swimmer detection.

General Object Detection

Recent object detection approaches can be divided into two groups: single-shot and multi-shot. Generally, the former is faster and the latter is more accurate. This section will illustrate them by explaining representative architectures of the domain. It is also possible to use segmentation as a proxy for detection, which will be addressed in section 3.3.2.

Multi-Shot Object Detection with R-CNN

Multi-shot object detection algorithms input a ROI (i.e. a small patch of the image) several times. This refinement over an initial detection increases the precision. Searching at multiple scales and creating several individually analysed queries is the foundation of multi-shot object detection. We present in detail what is arguably its most influential architecture: Faster Region-Based Convolutional Networks (R-CNN).

The R-CNN family consists of 3 main elements, illustrated in Figure 3.3, left: (i) the backbone (i.e. the feature extractor), (ii) the Region Proposal Network (RPN), and (iii) the classifier. The objective of the RPN, illustrated in Figure 3.3, right, is to find ROIs and (if it does) adjust their coordinates around the object they frame. The RPN inputs the features from the entire image (extracted by the backbone) and outputs a set of heatmaps, each associated with an anchor box of a given size and aspect ratio. Once the RPN is trained, the heatmaps activate at the barycenter of the different objects. More precisely, only the heatmap whose anchor best matches the area of the object is activated. This gives a first rough estimation of the object's position and dimensions. The features corresponding to these positions are fed to a refinement layer, regressing more precise spatial information to fit the found object of interest. These are the ROIs. Finally, the patches of the feature map corresponding to the ROIs are extracted. They are separately fed to the classifier, which outputs a probability of presence for each class inside the ROI. If none is above a defined threshold, the box is a false positive and is discarded. If not, we obtain the exact position of a bounding box and its corresponding class.

Faster R-CNN is a very powerful object detection model. Its small anchors make it great at detecting small objects, a common problem in object detection. However, it is very slow, 3-4 Frames per Second (FPS) on average, because of the many ROI extractions and their analysis by the classifier. The backbone inference is also heavy: it is better to use big images (600 × 1000 pixels in the paper) to have a higher output resolution for the RPN and thus be more precise.

Single-Shot Object Detection with YOLO

Single-shot detectors input the image once, with no sub-patch division or multiple refinements over the same area.
The first two major algorithms make this explicit in their names: YOLO ("You Only Look Once") [START_REF] Redmon | You Only Look Once: Unified, Real-Time Object Detection[END_REF][START_REF] Redmon | YOLO9000: Better, Faster, Stronger[END_REF][START_REF] Redmon | YOLOv3: An Incremental Improvement[END_REF] and SSD (Single-Shot Detector) [START_REF] Liu | SSD: Single Shot MultiBox Detector[END_REF]. They both rely on the same underlying technique: a feature extraction backbone, directly connected to regression and classification layers. Their main difference lies in the architecture, SSD stacking feature maps of different scales to get more precise results, but at the cost of time. YOLO will be explained in this section, as it is an extremely popular algorithm. It performs detection and classification at once using anchor box adjustment.

Its behaviour, illustrated in Figure 3.4, is straightforward. A backbone extracts features from the image. One can see each tensor cell (i.e. spatial position in the tensor) as a region's deep representation. For each cell, YOLO regresses k bounding boxes, each associated with an objectness score (i.e. the probability that an object the size of the anchor is present), a class vector, and positional information (to place the box exactly around the object). This results in (too many) boxes one can position on the image, as illustrated in Figure 3.4, "generated bounding boxes". The anchors with a high enough objectness score are kept. A greedy Non-Maximum Suppression (NMS) is applied to filter the boxes of the same class that overlap too much, as they likely represent the same instance. This results in the boxes of interest, as in Figure 3.4, "final result".

YOLO is extremely popular and simple to use. However, its precision is lower than Faster R-CNN and other slow models. The algorithm is also limited in the density of its detections, as there is a fixed number of anchors per cell.

Recent Advances on Swimmer Detection

First, we need to mention the work of Benarab et al. [START_REF] Benarab | Optimized swimmer tracking system based on a novel multi-related-targets approach[END_REF][START_REF] Benarab | Optimized swimmer tracking system by a dynamic fusion of correlation and color histogram techniques[END_REF][START_REF] Benarab | A novel multitracking system for the evaluation of high-level swimmers performances[END_REF], who recently proposed several techniques pursuing the same detection goal as ours. Their computer vision algorithms are based on low-level techniques, in particular Joint Transform Correlation. They transform the pool image and swimmer head reference images into a 2D complex spectrum and use 2D convolution to find correlations between them. The reference images' variety and choice are key: they must be representative of the swimmer's head domain to work. They repeat this process at different scales and orientations to be more robust. Although this allows fast inference with low computation needs, this choice leads to handcrafted, pool-specific thresholds. In the end, they do not provide a model or metric to compare results. This is surely one of the best solutions without Deep Learning (DL), but it also illustrates the limitations of these classic techniques. It can, however, serve as inspiration for posterity.

Woinoski et al. [START_REF] Woinoski | Swimmer Stroke Rate Estimation From Overhead Race Video[END_REF] proposed a method based on a swimmer dataset annotated by themselves on a selection of 35 race videos, collecting about 25 000 images in the process.
Sadly, it was not released before the end of our work, so we could not use it. However, creating such a data collection is a great milestone for the community once it is made public. They adapted a Yolo-V3 [START_REF] Redmon | YOLOv3: An Incremental Improvement[END_REF] model, where classes describe the swimmer state (normal swimming, diving, u-turning...). Their model reaches an AP25 (the Average Precision (AP) is defined in Section 3.4) of 0.7.

Hall et al. [START_REF] Hall | The detection, tracking, and temporal action localisation of swimmers for automated analysis[END_REF] propose a similar work, with an even larger dataset containing 327 000 images from 249 different races. They used static videos and labelled each frame. This is tracking data, which includes temporal information; the task is therefore different even if the objective is similar. Using a temporally dense 25 FPS annotation of the swimmers' heads, they can use information from previous frames to make a prediction on the next one. The model they use is a 2-step detection framework, similar to the R-CNN family. The first step is a rough head detection over the whole image. The second is fed a crop around this detection and refines it with a regression model. The tracking part of their method occurs only then, once detection has been performed. Again, they did not release their dataset or model, so a comparison is impossible.

Proposed Approach

The first step in detecting swimmers is to create a dataset for the task. This section describes the data acquisition and labelling process and explains its properties. We wanted to focus on a detection model trained from scratch using few data. This did not allow us to use classic methods such as YOLO [START_REF] Redmon | You Only Look Once: Unified, Real-Time Object Detection[END_REF] or Faster-RCNN [START_REF] Girshick | Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation[END_REF]. Instead, we drew our inspiration from medical imaging, where data is always missing: the images come from costly medical imaging instruments and each label is made by a specialist. Models have been developed to address this lack of data. They are designed to be deep enough to learn complex feature hierarchies and patterns of high variability, but not so deep that they overfit on a small dataset. Moreover, they have interesting spatial properties this section will explain.

Dataset Creation

We selected 12 international-level competitions with openly available race videos. Each had a different camera position relative to the pool, which gave the dataset a great range of angle and size variation. One frame was saved every 3 seconds, resulting in 403 different images with a total of 3121 bounding boxes (7.7 per image on average). Some competitions were inside, others outside, with varying lighting conditions. For each competition, races were selected to represent the 4 main swimming styles and the 2 genders (49% ♂, 51% ♀). The freestyle style is a bit more represented (30%) and butterfly a bit less (18%). Breaststroke and backstroke are equally represented (respectively 27 and 25%). The data from 3 of the 12 pools was used as test data and the 9 remaining as training data. The resulting dataset, called Swimm 400 , is composed of 80 test images and 323 train images.

Detection Through Segmentation

The segmentation and detection tasks have different objectives, but this is not their only difference. Their training data is also distinct by nature.
However, it is possible to use heuristics to convert one data type into the other.

Bounding box regression vs. segmentation

Bounding box regression-based approaches [START_REF] Redmon | YOLOv3: An Incremental Improvement[END_REF][START_REF] Girshick | Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation[END_REF][START_REF] Liu | SSD: Single Shot MultiBox Detector[END_REF] transform a part of the image into a semantic vector, from which the model computes the probability of presence and the position of the objects in the image. This is a very complex task which requires large amounts of labelled training images to become stable. On the other hand, fully-convolutional models like U-Net [START_REF] Ronneberger | U-Net: Convolutional Networks for Biomedical Image Segmentation[END_REF] transform each pixel into a "1" or "0" response according to the objective. This relatively simpler task brings two main advantages. First, it requires fewer data because the overall task is not a regression (of the bounding boxes), but a binary classification (i.e. thresholding) extended to the whole image. Second, the conversion of a pixel into a presence probability amounts to segmentation, which is a task of higher level than a simple bounding box regression. Indeed, depending on the object's position and orientation, much of the space contained inside a bounding box can be background, whereas most of a segmentation area designates the searched instance. Therefore, a segmentation model provides an alternative, more precise description of the regions of interest, as it excludes the parts of the surrounding background.

Tiny-U-Net

We propose a variant of the well-known U-Net architecture [START_REF] Ronneberger | U-Net: Convolutional Networks for Biomedical Image Segmentation[END_REF] for our swimmer detection model. The original model is a residual autoencoder with blocks of 3 convolution layers with the same number of filters before each downsampling (in the encoder) or upsampling (in the decoder). The following modifications have been performed: instead of 3 convolution layers between each sampling operation, only one is performed. The filters are also smaller, increasing from 8 up to 128 instead of 64 to 1024 for the original U-Net. A side-by-side scheme comparison of both is given in Figure 3.5. We will designate our model by tiny-U-Net. Due to its shallow architecture and low filter count, it runs at 260 FPS on a GTX 1080 NVIDIA GPU with (256 × 256) pixel images, whereas U-Net runs at 50 FPS in the same conditions. Tiny-U-Net is therefore faster than a real-time detector for 25, 50 or 60 FPS videos, which are standards in the camera industry.

To convert the model's output heatmap into bounding boxes, a threshold is applied to said heatmap, and the remaining areas are extracted. A bounding box is created by finding the circumscribed rectangle around each of them. This further allows for a fair comparison with the benchmark methods in Section 3.4, each of them creating bounding boxes.

Box-to-Segmentation-Map Transformation: the U-Net model requires segmentation heatmaps for training. To convert the bounding boxes from Swimm 400 into segmented data, an image with a black background is created and "filled" with white pixels inside the labelled boxes. Therefore, a pixel is 1 or 0 depending on whether or not there is a swimmer at that pixel. Both conversions are sketched in the code below.
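A possible implementation of these two conversions using OpenCV is given below; the function names are ours, and the ellipse variant shown for the training masks anticipates the ablation of the next subsection. The 0.45 threshold corresponds to the optimum reported in Section 3.4.

```python
import cv2
import numpy as np

# Training side: convert labelled boxes into a segmentation target by filling
# the ellipse inscribed in each (x0, y0, x1, y1) box.
def boxes_to_mask(boxes, height, width):
    mask = np.zeros((height, width), dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        center = ((x0 + x1) // 2, (y0 + y1) // 2)
        axes = ((x1 - x0) // 2, (y1 - y0) // 2)
        cv2.ellipse(mask, center, axes, 0, 0, 360, color=1, thickness=-1)
    return mask

# Inference side: threshold the output heatmap and take the circumscribed
# rectangle of each remaining connected blob.
def heatmap_to_boxes(heatmap, threshold=0.45):
    binary = (heatmap >= threshold).astype(np.uint8)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]  # (x, y, w, h) per blob
```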
Multiple variants of this approach have been tested: whitening inside the full box or only inside the inscribed ellipse; using binary masks or Gaussian values (approaching zero as the pixel gets farther from the center). As shown in Table 3.1, the option giving the best result was to use the inscribed ellipse with hard edges. As the boxes are reduced to approximate masks, we noticed that the model can be successfully trained even with mediocre and partly inconsistent annotation. This allows for a much quicker and less costly annotation process.

Data Augmentation

Training Tiny-U-Net is direct as it is a simple forward model without hyperparameters. To allow batch training, the images were all resized to (256 × 256) pixels.

Zoom-in / zoom-out: the main augmentation was the zoom in and out (Figure 3.6, right column). Image crops are performed during training, such that the subjects occupy more space. Inversely, a neutral colour can be put around the reduced image, so that the swimmers look smaller. This helped the model generalize its representation of swimmers independently of their size. Swimm 400 originally contains swimmers at different distances from the camera, but this data augmentation increased this benefit even more.

Side-switch: another important augmentation was the side-switch, seen in Figure 3.6, bottom row, third column. This transform is extremely useful to avoid central overfitting: most images present in Swimm 400 tend to center the swimmer. The side-switch puts them on the side, preventing the model from only detecting instances at the center. For fully-convolutional networks, this is less of a problem as they are mostly translation invariant.

Others: apart from these two transforms, other more common methods were used to train the model. The random left-right flip generalizes the swimmer's direction to the model, by giving them the same chance to face each side. The colour change (in HSV format, the hue is rotated by at most 45° so that the water can have any blue shade plus some green ones) generalizes to many skin, pool and water colours. The random contrast and brightness variations adapt the model to the many lighting conditions that can occur during different competitions. Finally, Gaussian blur increases the overall robustness. Of course, all these augmentations do not require any further annotation, as they are automatically generated during training. The probability of triggering them is 50% each, except for the colour variation (30%) and the side-switch (10%), as they both are stronger changes and thus might make the model diverge if used too much. These trigger probabilities work well for our study case, but may need to be slightly varied to adapt them to other detection problems.

Experimental Results

This section shows how we found the best parameters to train our model. We first compare different variations of our method to find the optimal solution on our Swimm 400 dataset, then we compare it to another existing method.

Metrics

The comparison will be made using the AP and Average Recall (AR) at a 25% IOU threshold, defined as:

$$\mathrm{AP}_{25} = \frac{1}{N}\sum_i \frac{\#\text{Good Detections}_i}{\#\text{Positives}_i}\,, \qquad \mathrm{AR}_{25} = \frac{1}{N}\sum_i \frac{\#\text{Good Detections}_i}{\#\text{True Positives}_i}\,, \qquad (3.1)$$

on image $i$, $\#\text{Good Detections}_i$ being the number of detected bounding boxes with an Intersection Over Union (IOU) of more than 25% with the true box, $\#\text{Positives}_i$ being the total number of boxes detected, and $\#\text{True Positives}_i$ being the total number of boxes labelled. $N$ is the number of images in the benchmark set.
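These metrics are simple enough to compute directly from Eq. (3.1). The sketch below is a simplified version (it ignores the subtleties of one ground-truth box being matched by several detections, and uses a convention of 1.0 recall on images without ground truth); boxes are (x0, y0, x1, y1) tuples.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two (x0, y0, x1, y1) boxes."""
    x0, y0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x1, y1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x1 - x0) * max(0, y1 - y0)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box_a) + area(box_b) - inter + 1e-8)

def ap_ar_25(detections, ground_truths):
    """Eq. (3.1): detections / ground_truths are per-image lists of boxes."""
    ap, ar = [], []
    for det, gt in zip(detections, ground_truths):
        good = sum(any(iou(d, g) > 0.25 for g in gt) for d in det)
        ap.append(good / len(det) if det else 0.0)
        ar.append(good / len(gt) if gt else 1.0)
    return np.mean(ap), np.mean(ar)
```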
To get a better idea of what these metrics represent, consider the different detection scenarios presented in Figure 3.7. A perfect model would return the results from image A, with the correct number of boxes well enough placed, and no other boxes. Such a model would have an AP and AR equal to 1. Note that, as we use the AP/AR 25 metric, the boxes can be imperfectly fitted around the swimmer as long as their IOU with the annotated ground truth is greater than 0.25. On image B, all the objects are correctly detected, thus giving an AR of 1, but many background objects are found as well. As a result, the AP is greatly decreased. On image C, there is no false positive, resulting in an AP of 1. However, only 1 object among 3 is detected, which gives a low AR. Finally, image D shows a catastrophic model, detecting several background objects but none of the swimmers; therefore both metrics are null.

Using these examples, one can conclude that both metrics have their own interest, depending on the conditions. A good AP means there are not too many false positives, and a good AR means most of the objects are detected. For our task, the AP is the more suitable of the two and will be prioritized over the AR. Indeed, as in this work there is no discrimination process to make sure that a detection corresponds to a swimmer, we want to reduce the false positives as much as possible. Therefore, both metrics will be used, but the AP will be considered the most important.

Although both metrics were once widely used, they are no longer the detection standards: these are now the mean Average Precision (mAP) and mean Average Recall (mAR) [START_REF] Coco | COCO detection metric[END_REF], defined as follows:

$$\mathrm{mAP}(\text{dataset}) = \frac{1}{10}\sum_{X \in \{0.50,\,0.55,\,\dots,\,0.95\}} \mathrm{AP}_X(\text{dataset})\,, \qquad (3.2)$$

which corresponds to the mean of all AP with IOU thresholds between 0.5 and 0.95 (with a step of 0.05). There is a similar formula for the mAR. This metric aims for pixel-perfect precision, which is not adapted for swimmers with blurry edges under the water. AP50 is the standard metric and threshold for object detection. It considers valid a box at the correct position and with the correct general size and proportion. In our case, though, the width and height of a bounding box are fuzzy and imprecise by nature, even during annotation. For this reason, we chose the smaller threshold of 25. AP25 and AR25 are closer to what real-world applications seek, such as extracting a sub-region around swimmers, where estimating the boxes' barycenter and general size is enough.

Ablation Study

First, we trained our tiny-U-Net model on different variants of the box-to-segmentation map strategy. The results are shown in Table 3.1. This table clearly shows the superiority of shapes with hard edges. Smoothed ones tend to reduce the model convergence during training. Finally, filling an ellipse shape is better than filling the whole rectangle bounding box. An intuitive explanation could be that corner regions are less likely to contain pixels from the instance. The ellipse mask contains almost only the swimmer, and the remaining pixels can be understood by the model as regularisation. As the edges of a swimmer are fuzzy anyway, it probably does not differ much from a precisely-labelled pixel-perfect mask.

To compare the results of tiny-U-Net with current state-of-the-art methods, we trained two variants of Yolo on Swimm 400 .
The first version is YoloV3 [START_REF] Redmon | YOLOv3: An Incremental Improvement[END_REF], a deep model with a 2048-unit fully-connected layer after the convolutions. The second is Yolo-tiny, which is shallower. Moreover, it replaces the fully connected layer with a 1 × 1 convolution layer with 56 filters, to drastically reduce the number of parameters to train. The 3 models are trained with the Swimm 400 dataset. The Adam optimizer is selected and starts with a learning rate of $10^{-3}$, decreased by a factor of 0.1 if the test loss plateaus for more than 10 epochs. As the dataset is quite small, a batch size of 16 is chosen, and the loss is the Mean-Squared Error (MSE) in each case. For the Yolo-based models, the λ confidence training trick described in [START_REF] Redmon | You Only Look Once: Unified, Real-Time Object Detection[END_REF] Section 2.2 is followed.

From Table 3.3, it appears that our tiny-U-Net model outperforms the two others by far. Indeed, Yolo is a great model as long as enough data is available, because of its conversion from feature vectors to output tensors. On the other hand, tiny-U-Net does not require such a transformation. This result is confirmed by testing the 3 models on the same videos: tiny-U-Net gives the best results. Moreover, on a video, tiny-U-Net appears much more stable from one frame to the next.

We further explored different domains to get a better knowledge of this model's abilities and limits. The first experiment we did was to measure its speed. It is significantly impacted by the image size, which also affects performance. Table 3.2 displays this trade-off. It shows a peak in the AP at (256 × 256) pixels, and in the AR at (512 × 512) pixels. Indeed, despite extensive size-focused data augmentation, smaller images do not have enough pixels per swimmer to get a good detection. On the other hand, too big images tend to contain many false positives, even if more swimmers are detected.

We also studied the impact of the threshold on performances, which is often underestimated. Indeed, if one observes a very narrow performance peak around one optimal threshold, it does not mean the model is optimized for the task, but mostly for the test dataset. Such a peak is a bad thing for generalization, especially with small datasets such as Swimm 400 . In Table 3.2 (rightmost part), we observe that our optimum is fairly flat between 0.3 and 0.6, which proves the model's stability. The optimum value is 0.45.

Comparative Results

Also shown in Table 3.3, the Yolo model trained on 25,000 images by Woinoski et al. [START_REF] Woinoski | Swimmer Stroke Rate Estimation From Overhead Race Video[END_REF] gives results comparable to the tiny-U-Net trained on Swimm 400 . It is not measured on the same benchmark though, thus we cannot ascertain exactly which model is superior. However, ours seems comparable to theirs with only a fraction of their amount of data and model size. Our model also performs some segmentation. Depending on the intended task, both the segmentation heatmap and the bounding boxes can be used, which is another advantage compared to the Yolo-based models. Finally, Tiny-U-Net is extremely efficient in terms of scalability. Being designed to be trained with small datasets, it is quite shallow and can run extremely fast (see Table 3.2) with not-so-recent GPUs (GTX 1080 NVIDIA GPU). The experts we interviewed can determine the position of a swimmer in real-time, but only one swimmer at a time.
With our detection model, we know the position of each swimmer in a race faster than in real-time, so faster than an expert by a huge margin.

Table 3.3 -Performance comparison for the different detection models. They are all trained with the same data, except for the first line. In bold, the best of a category.

We observe that the original U-Net architecture gives significantly worse results compared to tiny-U-Net, as it overfits on the few data.

Visual and Qualitative results

We also provide a few visual examples of the results. These are not cherry-picked: they have been selected because they are representative of the global behaviour of our model.

Swimming Races

First, it is important to study the model in its normal context. Figure 3.8 displays a few examples, illustrating different behaviours of the model. The top-left image displays the segmentation on a classic swimming segment. The overall detection quality is very good: each swimmer is well detected and separated from the others. Although one could not call it precise segmentation, the bounding boxes we can infer from the segmentation heatmap are of great quality.

The top-right image shows both one limit and one unexpected performance. The limit is the lack of detection of swimmers about to dive, even if a few examples of such cases are present in the training dataset. This is not really problematic, though, considering the important swimming phases to detect are inside the pool, where detection works correctly. The unexpected result this image shows is the accurate detection of people in the water who are not swimming. In our dataset, there is no occurrence of such "objects", so the network has apparently learnt to generalize enough to detect them. This is a great thing, as it means the model is not strictly limited to professional swimmers in pools (see the water-polo and lake examples below).

The bottom-left image highlights an important problem: segmented blobs which are too small and too close tend to merge. This is especially common when swimmers are far from a low camera. Individually, each "sub-blob" is correct, but they should be separated. This could be addressed by isolating each lane, as explained in a later chapter (see section 6.2.1). One could also segment the buoy lines and mask them out of the heatmap to divide the merged blobs. Finally, the bottom-right image shows the other side of the problem with swimmers too far from a too low camera: the lack of detection (i.e. false negatives). Usually, this is caused by a threshold that is too high, but lowering it would create false positives. Again, this will be directly addressed in a later chapter (section 6.2.1): as the heatmap of each swimming lane is isolated, if no detection is found, one could reduce the detection threshold until something is finally detected. However, this is not optimal and it would be preferable to manage these results during training.

Table 3.2 displays the importance of the input size on performance. Further, one can see that the AP and AR optima do not occur for the same size. Figures 3.9, 3.10 and 3.11 all highlight this phenomenon in their own domain. The general observation is that as the input grows, swimmer segmentation improves. In Figure 3.9, for instance, the blobs shrink as the size increases, but they contain less water and a bigger part of the swimmer. From detection, the model almost achieves segmentation.
This is especially interesting considering the original data: ellipses inscribed inside bounding boxes. This proves that this shape was a good heuristic, nicely highlighting the athlete. In this case, we can consider it a case of weakly supervised learning, with initial data of lower level than the output. One could arguably follow this lead to generate good swimmer segmentation with only bounding box annotation. Increasing the size, though, does come with compromises. Indeed, as the AR increases, the AP reduces, due to many false positives appearing between the swimmers. Red markers on swimming buoys, especially, tend to be confused with humans and are sometimes detected when they occupy a big enough part of the image. The opposite scenario arises with smaller images: no false detection, but many swimmers missed (half, here). Depending on the exact use case, one could prefer one extreme or the other. The intermediate size, (256 × 256) pixels, seems to be a good compromise for general-purpose swimmer detection.

Other Swimming-Based Activities

Our model performs great swimmer detection, but it can also be used in slightly out-of-domain contexts, such as water-polo player detection. Figure 3.10 shows the results on such an image, again with increasing input sizes. In all cases, the detection is generally good, at least of the same level as with swimmers. Note that this image is a bit zoomed-in and taken from a high enough point of view, which is one of the best video capture situations for detection. In this case, the model performs well despite the players generally not being in a swimming position: they are more vertical and have a big part of their torso out of the water. This is very encouraging regarding the generalisation power of our model. Moreover, the segmentation is here even more precise than with splashing swimmers during a race. The (512 × 512) pixels image, in particular, shows great segmentation of the player with the ball (at the bottom), with the whole arm segmented. Underwater limbs are detected for several players. One step can be considered missing: blob splitting. Indeed, contrary to classic swimming races with separated lanes, here the players can get very close to each other, which causes detection troubles. Someone focusing on water-polo could think of a solution or a heuristic to alleviate this problem, but as this thesis is mainly about swimming races, we did not elaborate further on this.

Finally, Figure 3.11 shows an example result completely out of the training domain, as the persons are not swimming and not in a pool. This background and environment are completely new and different from what the model was trained on. Although the results are generally less precise here, both in low and high resolution, interesting observations can be made. With low resolution, first, the group of close-enough persons is essentially well detected, with imprecise edges and some background water (i.e. false positives) segmentation. With higher resolution, though, results are more refined, and the different blobs follow the persons' shapes decently. Further, even the farthest group gets detected. This means such a model could be used for swimmer monitoring on public beaches if we favour a high recall. Indeed, the background town is detected a bit; there are some false positives. However, for such monitoring tools, it is always better to have false detections leading to a waste of time (or simply visual checking) than false negatives, i.e. failing to detect a drowning person.
These qualitative results showed different aspects of the model which were not expected from the AP and AR studies. The first one is the better segmentation precision when size increases. This can be very interesting for close-up analyses and swimming posture extraction, which is not yet resolved. The second point is the great generalisation performed by the model, which can detect persons in the water in general, as opposed to simply swimmers in a race. The applications of such a model are beyond race analyses and could be used to save lives.

Discussions and Perspectives

The detection method described in this chapter is functional, but a few things can still be improved or modified to increase the overall performance. Also, despite having been proposed for swimming analyses, it can be generalized to other sports with low cost and small annotation time.

Improvements and Future Works

The swimmers' coordinates output by the model can either be read as a segmentation area (the raw heatmap) or as a bounding box (after the heatmap processing). The model is currently able to detect the general position of a swimmer, but it does not detect any body part in particular. This results in an accurate but not precise position. The barycenter of a blob is always somewhere on the athlete, but one could not predict exactly where. If the barycenter changes too much, from the shoulders to the hips, for instance, the local speed cannot be precisely measured. Depending on the intended application, this can cause issues, for instance to measure a swimmer's inter-cycle speed, which only lasts a second. To alleviate this problem, one could create another small dataset with only the swimmer's head annotated. By training a model to detect the head only, we would obtain higher robustness. This might be incorporated into a 2-stage detector similar to Faster-RCNN [START_REF] Girshick | Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation[END_REF]: the first stage would be the raw detector described in this paper, the second a head detection on a crop around the extracted swimmer position. Having this second stage would also potentially remove false positives. However, it would also significantly reduce inference speed, due to image crop resizing and a new model inference. To implement a faster version, one could have a U-Net-like model with 2 decoder branches, following ProstAttention-Net ([START_REF] Duran | ProstAttention-Net: A deep attention model for prostate cancer segmentation by aggressiveness in MRI scans[END_REF]). The first one would be the one described in this chapter, the resulting heatmap serving as an attention map for the other, detecting the head.

One recurrent problem, as seen in Figure 3.8, is the merging of different blobs. Model-agnostic ideas have been suggested to solve the problem, but it would be better to address it directly in the model. One simple idea, yet to be tested, would be the addition of a refinement step. Another model would input only a crop around the blobs and would have to segment them following a Gaussian distribution (instead of the current flat distribution). By only detecting the local maxima in this second step, instead of a threshold-and-blobs approach, we could differentiate the swimmers. As previously mentioned, Gaussian masks do not perform as well as binary masks for full-image detection. However, this refinement task is different, as there are only a few swimmers to segment in a small space.
As seen in the previous section, the model also has trouble detecting the smallest swimmers (i.e. the farthest from the camera). Extensive scale-oriented data augmentation could be used to somewhat reduce this problem, but it is also an inherent part of Tiny-U-Net's nature. The network is shallow and simple by design, which inevitably creates this limit. However, this complexity / speed trade-off will be addressed in a later chapter (section 6.2.1), with all the required CV elements established. Overall, the performances could be improved with the creation of a bigger dataset (and then the use of a bigger model), but the main point of this chapter is to prove that powerful specific analysis tasks can be achieved at a low cost, without large computational and human resources. Generalization to Other Sports The detection process is quite general and easy to handle. For sports with atypical objects that are not present in usual datasets, our annotation and detection process can be reused and adapted. Indeed, a low quantity of data is enough for most detection tasks if the background does not change too much (a pool, a sports field, etc.), as the model will more easily understand what actually matters. Data augmentation has to be adapted to each case, but knowing which augmentations are pertinent is straightforward for a domain expert. Further, as we explained, this model can directly be applied to water-polo, despite the absence of buoy lanes. A small fine-tuning could obviously improve the precision, as swimmers can be in very different settings than in typical races (grouped, under one another, chest out of the water...). Conclusion This chapter proposes a method to detect swimmers in a frame with very few constraints. Its main advantage is its simplicity: it is easy to recreate for other sports, as it does not require a lot of data or big pre-trained neural networks. Peculiar objects (balls, weights, javelins...) can be quickly detected with a small dataset, following our methods. Moreover, other swimming-based competitions (water polo for instance) can directly benefit from the present detection model. In the end, the detection is done in a fully automated fashion, possibly freeing an enormous amount of time for coaches, who used to perform this task mostly manually. It allows them to focus only on the athletic part of their role in a swimmer's training. Chapter abstract This chapter presents the problem of pool registration, also named camera calibration. This task aims at projecting a given image, taken with unknown camera position and orientation parameters, to a known 3D coordinate system. It is complementary to detection in order to obtain higher-level information like the position and speed of swimmers. Existing methods usually first create a rough projection estimate and then use a refinement algorithm to iteratively get closer to the desired calibration. These different methods will be discussed, highlighting their strengths and weaknesses. They are usually only compared in terms of precision on a standard benchmark, without considering other metrics. In particular, speed is important, mainly in the context of live TV broadcast and sports analysis. A new automatic field registration method is introduced in this chapter, achieving robust performance on the WorldCup Soccer benchmark while depending neither on specific visible landmarks nor on refinement steps, resulting in a very high execution speed and a good generalization.
Finally, to complement the widely used soccer benchmark, we introduce a new swimming pool registration benchmark which is more challenging for the task at hand. This chapter is mainly based upon this contribution: Nicolas Jacquelin, Romain Vuillemot, Stefan Duffner. Efficient One-Shot Sports Field Image Registration with Arbitrary Keypoint Segmentation. IEEE International Conference on Image Processing, Oct 2022, Bordeaux, France 〈hal-03738153〉. Introduction Field registration designates the common method to align the visible field in a frame with a known coordinate system. Its role in the automatic analysis pipeline is shown in Figure 4.1. As sports fields are planar and we consider lens distortions negligible, the registration is performed using a linear projection called a homography. Manual calibration is slow (at least a dozen seconds per frame, even with training) and thus costly, because most video streams come from moving cameras and would require frame-by-frame annotation. Although this is theoretically possible, in practice it takes far too long, so an efficient solution for automatic field registration is crucial. Automatic methods [START_REF] Rahul | Automated Top View Registration of Broadcast Football Videos[END_REF][START_REF] Chen | Sports Camera Calibration via Synthetic Data[END_REF][START_REF] Sha | End-to-End Camera Calibration for Broadcast Videos[END_REF][START_REF] Nie | A Robust and Efficient Framework for Sports-Field Registration[END_REF][START_REF] Jiang | Optimizing Through Learned Errors for Accurate Sports Field Registration[END_REF] tend to decompose the task into a two-stage process: first getting an initial rough projection, then several refinement steps to get a more precise result. Both steps are necessary to perform well. The refinement process is similar in many aspects to gradient descent, but the search space is far from convex: without a good initial estimation, no refinement step can find a good solution. However, although this rough initialisation gives a general idea of the registration solution, it is always improvable, hence the refinement steps. This second stage takes much longer: 96% of the total processing time according to [START_REF] Nie | A Robust and Efficient Framework for Sports-Field Registration[END_REF]. This chapter proposes an automatic field registration method which does not need this costly refinement step to give accurate results (see results in Section 4.5.2). A model segments the input image into a map that highlights a specific (grid-like) pattern corresponding to points on the 3D field plane (see Figure 4.4, template). Our approach can be applied to any type of 2D sports field with TV streams or side stadium views. While maintaining high precision on the WorldCup Soccer benchmark [START_REF] Homayounfar | Sports Field Localization via Deep Structured Models[END_REF], it achieves an inference speed of around 50 Frames per Second (FPS) on rather modest hardware (see Figure 4.3). This matters for our application: the field must be calibrated in real time if we want athletes to get quick feedback on their performance shortly after a race (the other tasks also have to be ready quickly).
WorldCup Soccer benchmark [START_REF] Homayounfar | Sports Field Localization via Deep Structured Models[END_REF] is the only public dataset that has been widely used in the literature, although some private datasets have been introduced for registration [START_REF] Sha | End-to-End Camera Calibration for Broadcast Videos[END_REF][START_REF] Nie | A Robust and Efficient Framework for Sports-Field Registration[END_REF]. However, a soccer field is relatively simple in appearance, as it contains a bi-axial symmetry with many unique local visual patterns. Thus we introduce a more challenging benchmark for Olympic swimming pool registration. Indeed, a swimming pool contains many repetitive patterns at different places in the pool (see Figure 4.3), leading to ambiguities in the image and making the registration difficult. We hope this will push forward the research on generic and robust sports field registration methods. In summary, our contributions presented in this chapter are: • a new benchmark for swimming pool registration with new spatial and textural challenges, • a new efficient sports field registration method that can be applied to any type of sport and reaches high execution speed and State of the Art (SOTA) precision. State Of The Art This section describes how registration works. It first focuses on the creation of the homography matrix in practice, then presents manual approaches, and finally describes the SOTA for sports field registration.

Figure 4.2: The DLT operation uses known matches of positions (X_i with Y_i) to compute the homography matrix: DLT({X_i, Y_i} ∀i ∈ [1, 4]) → Homography Matrix. This matrix can then be used to map a position from one coordinate system to the other; the bounding box detected in the image can therefore be positioned in the pool with this method.

Registration Background In the context of registration, a pair of points is defined by a point on the source image (X) and a point in the absolute 3D coordinate system (Y), linked by the equation HX = Y, where H is a homography matrix. Combining 4 unaligned pairs (i.e. forming a quadrilateral on both the source image and the other coordinate system) allows computing the homography matrix using the DLT [START_REF] Hartley | Multiple View Geometry in Computer Vision[END_REF], as illustrated in Figure 4.2. However, automatic methods based on pair identification can mismatch them (the elements of the pair do not correspond), which results in a completely false homography matrix. To alleviate that, most methods presented here identify more than 4 pairs and use a consensus algorithm like RANSAC [START_REF] Fischler | Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography[END_REF] to determine the most likely output.
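The DLT itself reduces to a small linear-algebra routine: each pair contributes two linear constraints on the 9 entries of H, and the solution is the null-space vector of the stacked system. The following NumPy sketch is our own illustration of the principle; in practice, a library routine would typically be used.

```python
import numpy as np

def dlt_homography(X, Y):
    """Estimate H such that H @ [x, y, 1] is proportional to [u, v, 1],
    from 4 (or more) point correspondences X -> Y."""
    A = []
    for (x, y), (u, v) in zip(X, Y):
        # two rows per pair, derived from u*(h3.p) = h1.p and v*(h3.p) = h2.p
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)  # null-space vector holds the homography entries
    return H / H[2, 2]        # fix the arbitrary scale
```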
In the case of homography matrix generation from many pairs, among which some are erroneous, RANSAC behaves as follows. It randomly selects 4 unaligned pairs in the whole pair set and computes the corresponding homography matrix H. All the other pairs are used to compute a loss function measuring the distance between the projected source point (Ỹ = HX) and the target point (Y). The average loss is associated with this matrix. If it is bigger than a given threshold, the matrix is rejected and 4 other points are selected to do the same thing again. If not, the matrix serves as a basis for the rest. A new (randomly chosen) element is added to the initial set of pairs, and DLT is used again, this time with 5 elements, to compute the matrix. The result is again evaluated on the remaining points. If the average distance is smaller than previously, the refined matrix is kept. If not, RANSAC keeps the previous matrix. A new randomly chosen point is again added to compute the matrix, then the new loss, and so on. This matrix refinement process is repeated either for a fixed number of iterations or until the resulting average distance is lower than a defined threshold. Despite the presence of thresholds and the omnipresence of randomness, RANSAC is not very sensitive to critical threshold values and gives robust and reliable results. Indeed, as it is very fast, its number of repetitions is often over-dimensioned (thousands), which increases the robustness of stochastic algorithms, as explained in [START_REF] Blum | Approximation Methods which Converge with Probability one[END_REF]. Semi-Manual Approaches Although associating arbitrary pairs of points is a challenging task, associating points between two similar views is common, using the Scale Invariant Feature Transform (SIFT) [START_REF] David G Lowe | Object recognition from local scale-invariant features[END_REF] and other algorithms of the same nature. Applied to the successive frames of a race video, one can create temporal pairs of points, i.e. points at the same position in space but (probably) different coordinates in the image. Using RANSAC, it is possible to estimate the homography matrix between frames. Thus, one can register a frame with the previous one, projecting it to the coordinate system of the other. It is therefore possible to perform "relative registration" of a race video. We call it "relative" because although the frames are all in the same reference frame, they are not associated with any absolute 3D coordinate system. With this consideration, one can think of a semi-manual registration approach. This idea was developed in [START_REF] Dubrofsky | Combining Line and Point Correspondences for Homography Estimation[END_REF][START_REF] Gupta | Using Line and Ellipse Features for Rectification of Broadcast Hockey Video[END_REF] in a similar way. They relied on sparse human video annotation (e.g. one frame per second of video) and used SIFT to determine the camera shift between calibrated frames and the others. Using only one annotation (e.g. on the first frame only) would not suffice, as SIFT does not create perfect spatial matches and the homography between frames is not perfect. This results in temporal drifting, with each new registration slightly more erroneous than the previous one. Regular manual calibrations are thus necessary throughout the video to reduce this problem by resetting the error frequently. The aforementioned methods do not use deep learning, but they would likely benefit from it. Instead of relying on SIFT and RANSAC, newer methods [START_REF] Daniel Detone | Deep Image Homography Estimation[END_REF][START_REF] Le | Deep Homography Estimation for Dynamic Scenes[END_REF] input two subsequent frames and directly regress the matrix. The framework SuperPoint [START_REF] Sarlin | SuperGlue: Learning Feature Matching with Graph Neural Networks[END_REF] also proposes an improvement on SIFT, using deep learning to improve general landmark detection and matching. These more robust methods would reduce temporal drifting, but they would probably not remove it entirely, due to sampling approximation.
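This relative-registration idea is straightforward to sketch with standard tools. The following illustrative snippet (our own, using OpenCV) estimates the frame-to-frame homography from SIFT matches filtered by Lowe's ratio test and RANSAC.

```python
import cv2
import numpy as np

def frame_to_frame_homography(prev_frame, curr_frame, ratio=0.75):
    """Homography mapping prev_frame coordinates to curr_frame coordinates."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_frame, None)
    kp2, des2 = sift.detectAndCompute(curr_frame, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    return H
```

Chaining such matrices over a video accumulates small errors, which is precisely the temporal drifting mentioned above.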
To our knowledge, no sports field registration method implements these techniques. We also note that it is not possible to input a frame from a race and a generic top-view image of a pool to directly regress the homography matrix between them using [START_REF] Daniel Detone | Deep Image Homography Estimation[END_REF][START_REF] Le | Deep Homography Estimation for Dynamic Scenes[END_REF]: as they are too different, no matching features appear, resulting in an unusable output. Recent Advances in Sport Field Registration The first sports field registration methods [START_REF] Wang | Fast Arc Detection Algorithm for Play Field Registration in Soccer Video Mining[END_REF][START_REF] Kim | Soccer video mosaicing using selfcalibration and line tracking[END_REF] relied on line and circle detection using Hough Transforms [START_REF] Richard | Use of the Hough transformation to detect lines and curves in pictures[END_REF]. The detected patterns were used as keypoints and, combined with RANSAC [START_REF] Fischler | Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography[END_REF], made it possible to compute a homography giving the absolute position of the camera view on a soccer field. With more recent deep learning approaches, fully automated robust methods appeared. Homayounfar et al. [START_REF] Homayounfar | Sports Field Localization via Deep Structured Models[END_REF] created a segmentation map and used a Markov Random Field and an SVM to compute the parameters of the cameras, which determine the homography. Other works [START_REF] Rahul | Automated Top View Registration of Broadcast Football Videos[END_REF][START_REF] Chen | Sports Camera Calibration via Synthetic Data[END_REF][START_REF] Sha | End-to-End Camera Calibration for Broadcast Videos[END_REF] used a similar deep segmentation model approach with synthetic datasets. They generated a set of synthetic field views with varying camera angles, extracted features from them, and associated them with their homography (easy to obtain in a synthetic environment). At inference time, they generated similar features from real images, which they compared to their database, giving a good initial homography. Then they adjusted this homography by comparing their input image to their dataset template. The idea of refining an initial result is present in all recent works of the domain, with different methods for the initialisation. For instance, Jiang et al. [START_REF] Jiang | Optimizing Through Learned Errors for Accurate Sports Field Registration[END_REF] used a model to directly estimate the image homography. They then used another one to refine the matrix by comparing the image and a template projected to the same point of view. Other approaches are based on field keypoint detection. Citraro et al. [START_REF] Citraro | Real-Time Camera Pose Estimation for Sports Fields[END_REF] used visual landmarks on the field (mostly line intersections). The main limitation of using visible elements is that the image may not show enough visual keypoints. Nie et al. [START_REF] Nie | A Robust and Efficient Framework for Sports-Field Registration[END_REF] directly address this problem, creating a generic template made of equally distributed points across the whole field, which is similar to our proposed approach. The key difference is that in [START_REF] Nie | A Robust and Efficient Framework for Sports-Field Registration[END_REF] each point is disconnected from the others, despite spatial regularities.
A More Challenging Benchmark Compared to a soccer field, a swimming pool contains patterns that are harder to correctly identify and associate with a position on the pool. In Figure 4.3, both fields are shown side by side with their distinct features highlighted. Both contain a bi-axial symmetry (not represented on the pool for clarity), but the main differences are the visual landmarks. For soccer, each landmark is unique because none is repeated throughout the field. Some of them represent the same things and occur in 4 instances (like the ones circled in red), but even in this case, they are distinct (with 4 different angles here). Further, the Soccer WorldCup dataset contains very few zoom variations and the camera is always placed close to the edge center. As such, it is always possible to distinguish enough landmarks to know, without ambiguity, the correct projection of the image onto the field. A pool, however, contains many more challenges (see Figure 4.3, right), namely positioning along the Y axis (A, B), positioning along the X axis (C, D) -both due to landmark repetitions -and an unstable background (e.g. ripples, reflections, light saturation, etc.). Finally, swimmers occlude part of the landmarks. On a soccer field, that would not be too important, as the landmarks are part of bigger patterns (a corner can be inferred by only seeing the two lines creating it, even if their intersection is hidden). In a pool, the majority of the markers are buoys coloured in red instead of yellow or blue. Although there is one of these markers on each line, their size, the waves, and the possibility for a swimmer to hide one make their detection difficult. To capture these challenges, we introduce the RegiSwim 500 dataset, a swimming pool registration benchmark containing 503 manually annotated images of international events, each associated with a corresponding homography matrix. The source videos are captured by the Fédération Française de Natation (French Swimming Federation) (FFN) from the stands, and their purpose is to frame the swimmers. They are included in the dataset to enable the use of temporal information. Numerical details of the dataset are summarized in Table 4.1. In the dataset, the level of zoom and the distance from the pool also change a lot depending on the competition. This introduces a new challenge in field registration benchmarks, as the notion of scale is not present in Soccer WorldCup due to its general lack of zoom variation. There are two train sets: standard and sequential. The first one has been created in a way similar to WorldCup Soccer and aims to be generic: it contains frames separated by 3 seconds, taken from different competitions. As such, a model tackling it inputs only one frame and outputs one homography matrix. The second set has temporally dense annotations (5 annotated frames per second), which can be used to train models with temporal aspects, inputting information from several successive frames (to temporally stabilize the homography output, for instance). These two can be merged to create a bigger, temporally heterogeneous dataset. Finally, the test set is also densely annotated: this makes no difference from a standard benchmark perspective, but it also allows sequential model evaluation. The dataset is available at https://github.com/njacquelin/sports_field_registration. Registration Method To find the homography transform from a camera view to a standard top view, our method uses pairs of points with RANSAC. The overall pipeline is explained in Figure 4.4.
The main emphases of this work are computational efficiency and generalization. Other methods [START_REF] Sha | End-to-End Camera Calibration for Broadcast Videos[END_REF] claim a fast inference speed but require powerful hardware which may not be accessible in practice. Our method uses a much smaller one-shot model (i.e. without iterative refinement) such that real-time registration is possible with modest hardware (a GTX 1080 with 8 GB). Regarding generalization, it comes from the arbitrary points that are detected on the image: they do not necessarily need to correspond to visual elements on the field, although it helps. This is especially visible with soccer field images, where most landmarks detected by our model are unremarkable, meaningless grass areas. Template Heatmap This work proposes a model that, given a (W × H) input image of a sports field, outputs a (W × H × D) heatmap of keypoints, D being the keypoint encoding dimension. The keypoints do not necessarily represent a visual landmark on the field: they are spread regularly, creating a grid (Figure 4.4, "Grid Template"). One unique aspect of this method is the way it encodes the points. The depth vector is composed of two subsets: X_t and Y_t. They are one-hot vectors whose maxima indices (x_t, y_t) each encode one line/column along a grid axis: a combination of any value of x_t and y_t gives a node position in the top-view frame (Figure 4.4, "Depth"). Compared to having C channels for the C keypoints in the template, as in [START_REF] Nie | A Robust and Efficient Framework for Sports-Field Registration[END_REF], this method has speed benefits: it avoids the depth increasing geometrically with the number of keypoints. A pair of one-hot vectors only increases the output depth linearly, for the same level of encoding. This improves the speed and scaling of the solution. For instance, a grid of (15 × 7) requires 105 channels in [START_REF] Nie | A Robust and Efficient Framework for Sports-Field Registration[END_REF] but only 22 in ours. In addition, as each channel does not represent only one point, but one line/column in the field, its semantic meaning is more interesting and enables a better scene understanding. Data Generation and Model Training Once the top-view template is created, the data generation can start, using a dataset that contains images with their corresponding homography matrix. The matrix is used to project the template into the point of view of its image (Figure 4.4, "Projected Template"). With such a projection, only semantic information has to be inferred. Our approach relies on a U-Net architecture [START_REF] Ronneberger | U-Net: Convolutional Networks for Biomedical Image Segmentation[END_REF], which is widely used for image segmentation. The cross-entropy loss is used to train the pixel-wise one-hot keypoint classification. As there is no "background" class (which would be overrepresented in the data), this loss is only applied at the ground-truth keypoint locations, using a mask. To ensure that the keypoints are in the correct place, the binary cross-entropy loss (BCE) is used. To do so, the ground truth (Truth) and output (Out) heatmaps are flattened with a depth-wise MAX operation. The resulting 2D heatmaps are compared, in order to align the estimated "blobs" with the expected ones. Formally:

L_class^axis = CrossEntropy(Out, Truth) · Mask_truth_keypoints ,
L_pos = BCE(Max_depth(Out), Max_depth(Truth)) ,
L_total = L_class^x + L_class^y + λ · L_pos ,

with λ ∈ ℝ being a weighting coefficient.
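A minimal PyTorch sketch of these losses, to make the masking and the depth-wise flattening explicit. The shapes, tensor names and the sigmoid on the logits are our assumptions, not the exact implementation.

```python
import torch
import torch.nn.functional as F

def registration_loss(out, truth, kp_mask, n_x, lam=2.0):
    """out, truth: (B, n_x + n_y, H, W) heatmaps; kp_mask: (B, H, W) float,
    1 at ground-truth keypoint pixels, 0 elsewhere."""
    tgt_x = truth[:, :n_x].argmax(dim=1)  # per-pixel line class
    tgt_y = truth[:, n_x:].argmax(dim=1)  # per-pixel column class
    ce_x = F.cross_entropy(out[:, :n_x], tgt_x, reduction="none")
    ce_y = F.cross_entropy(out[:, n_x:], tgt_y, reduction="none")
    n_kp = kp_mask.sum().clamp(min=1)
    l_x = (ce_x * kp_mask).sum() / n_kp  # classification only at keypoints
    l_y = (ce_y * kp_mask).sum() / n_kp
    # position loss: depth-wise MAX flattening, then BCE on the 2D maps
    l_pos = F.binary_cross_entropy(torch.sigmoid(out.amax(dim=1)),
                                   truth.amax(dim=1))
    return l_x + l_y + lam * l_pos
```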
Matrix Estimation To extract the keypoints' absolute positions from the heatmap, one could study each pair of (X, Y) channels to verify whether each (x, y) point is represented. This results in a X_G × Y_G × K complexity (X_G and Y_G being the template grid resolution, and K the number of keypoints to be found). We propose a much faster algorithm whose complexity is in (X_G + Y_G) × K (the K operations being parallelizable). A depth-wise MAX operation is applied to Out, the whole output, resulting in Out_flat, a 2D heatmap (the MAX operation is extremely well optimized in processors and insignificant compared to the rest). Its M local maxima are identified and, if they exceed a certain threshold, their (x_m, y_m) positions are kept. On Out, the depth vectors at these (x_m, y_m) positions are isolated. Their one-hot vectors return the indices of their most activated dimensions, (x_m_t, y_m_t), the position on the top-view template. Based on these ((x_m, y_m), (x_m_t, y_m_t)) pairs, RANSAC [START_REF] Fischler | Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography[END_REF] can be used to compute the homography matrix. This is formally described in Algorithm 4.1.

Algorithm 4.1 Fast identification of keypoints on a heatmap. Det returns the positions of the local maxima in the heatmap. The correspondence table Tab associates each channel with an absolute position in the field template.
Require: Model Output Out, Threshold T, maxima detector Det, Correspondence Table Tab
  Pairs ← ∅
  Out_flat ← Max_depth(Out)
  Max_List ← Det(Out_flat)
  for (x_m, y_m) in Max_List do
    if Out_flat[x_m, y_m] < T: SKIP
    depth_vector ← Out[x_m, y_m]
    X_t, Y_t ← depth_vector
    x_m_t ← Tab(argmax(X_t))
    y_m_t ← Tab(argmax(Y_t))
    Pairs ← Pairs ∪ ((x_m, y_m), (x_m_t, y_m_t))
  end for
  Homography Matrix ← RANSAC(Pairs)
  return Homography Matrix

Post-Processing For individual images, the method can be applied directly, but when registering an entire video, no temporal constraint is applied. The method not being perfect, the registration is inconsistent throughout the video: with our model, the projected top view of a full race video appears to shake. Stabilization methods can be applied to improve the registration smoothness on videos. A first straightforward method is temporal averaging of each individual coefficient of the matrix. Each can be taken and plotted through time, as shown in Figure 4.5. A simple approach is to use a sliding window to smooth the matrices through time. Outliers (i.e. completely wrong matrix estimations) can easily be identified when a given value is too far from its neighbourhood; they are removed before the averaging and replaced by the median of a time window around them. Such smoothing is shown in Figure 4.5. It would also be possible to refine the smoothing process using the 2/3 Power Law [START_REF] Lacquaniti | The law relating the kinematic and figural aspects of drawing movements[END_REF], which describes the human motor system's acceleration parameters: one could fit such a curve to the matrix elements' temporal signal, and use the result instead of the original matrices.
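The coefficient-wise smoothing just described can be sketched as follows. This is our own illustration: the window size and the MAD-based outlier test are assumptions, not the thesis' exact parameters.

```python
import numpy as np

def smooth_homographies(H_seq, win=15, k=3.0):
    """H_seq: (T, 3, 3) per-frame homographies. Replaces outlier coefficients
    by the local median, then applies a sliding-mean filter."""
    coeffs = H_seq.reshape(len(H_seq), 9).astype(float).copy()
    for j in range(9):
        s = coeffs[:, j]
        med = np.array([np.median(s[max(0, t - win): t + win + 1])
                        for t in range(len(s))])  # local medians
        mad = np.median(np.abs(s - med)) + 1e-9   # robust scale estimate
        outliers = np.abs(s - med) > k * mad
        s[outliers] = med[outliers]               # reject outliers first
        coeffs[:, j] = np.convolve(s, np.ones(win) / win, mode="same")
    return coeffs.reshape(-1, 3, 3)
```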
Instead of smoothing the resulting matrix elements, it is also possible to smooth the positions of the points detected on the different frames through time. A point corresponding to a given coordinate in the pool should not move too much between frames. Further, if several distant points are (wrongly) classified as the same keypoint, the corresponding points in neighbouring frames can vote for the one with the smallest distance to them. This method results in a smaller selection of points for RANSAC, but they are of better quality and give more temporally consistent results. However, such an algorithm is much more complex to create and would require a chapter of its own. As registration benchmarks do not handle temporal data, a quantitative evaluation of the post-processing methods is not possible. However, a qualitative appreciation of them on different registered videos is possible; visualizations are presented in Figure 4.5 to showcase the interest of this post-processing. Imprecision in the homography estimation can be understood as noise on the parameters' values through time. In consequence, the stabilized result is the smoothed signal, with much less visible noise. Results The model was trained for 150 epochs with the Adam optimizer [START_REF] Diederik | Adam: A Method for Stochastic Optimization[END_REF]. The learning rate started at 1e-3 for 50 epochs and was then set to 1e-4 for the remaining 100 epochs, with a batch size of 16. In the literature, the standard metric is the Intersection Over Union (IOU) between binary masks of the ground-truth top view and the estimated homography. This is either done with only the visible field (IOU_part) or using the whole field (IOU_whole). The average and median of these metrics are computed on the test dataset. Results are shown in Table 4.3. Parameter Study The coefficient λ, weighting the importance of landmark classification with respect to landmark position, is an important parameter of this method. We compared different orders of magnitude of its value to estimate its importance. We did not extensively search for precise hyperparameter values, as this would only fit the solution to the studied datasets, with no proof of generalization to other contexts. This parameter is not trivial to set, as it represents the balance between the markers' classification loss and position loss. As the marker mask (serving the position loss) is made of the maximum of each channel, it pushes all the channels towards 1 at the places of interest. On the contrary, a unique channel must be activated for a low classification loss. Antagonistic behaviours naturally emerge from them, so it is important to know how to balance them. With a value of λ = 2, the results are the best by a significant margin. With lower values, the results are similar yet less precise, but it seems that with a λ too big, here 5, there is an important drop: the model gives too much importance to the mask precision and neglects the point classification, although the latter intuitively seems the most important one, being responsible for the mapping of the pairs of points. Comparing to State of the Art Although our approach does not quite reach the top results from the literature, it is still among the best ones, as shown in Table 4.3. This is remarkable, considering it contains no refinement process while all the other methods do. However, this impacts the IOU_whole metric, where the slightest shift on the visible side of the field has big repercussions on the other side. Nonetheless, this second metric can be considered less interesting for real-world applications, such as placing the players on a field, as they must be visible on the image to be detected in the first place. These results might be improved using methods such as self-training on unlabelled data.
Regarding speed, our model is one of the only two exceeding real-time (> 25 FPS), although it has been tested on the least powerful hardware according to benchmarks [161,[START_REF]Deep Learning GPU Benchmarks[END_REF]]. Looking at the details, one can even argue that our model would be faster than Sha et al. [START_REF] Sha | End-to-End Camera Calibration for Broadcast Videos[END_REF] on the same hardware. Indeed, our architecture is a subset of theirs, to which they add 2 more deep models, a Spatial Transformer Network, and an exhaustive search among field templates. All these additional steps have a significant time cost, and our method might be faster by up to this amount. The model's speed could be increased even more using distillation [START_REF] Hinton | Distilling the Knowledge in a Neural Network[END_REF] to train a more condensed, shallower and faster version of U-Net. However, registration is far from being the current speed bottleneck of the pipeline, so such an optimization is not necessary. Naturally, for our more challenging RegiSwim 500 dataset, the performance is lower. Our model correctly handles Y-axis challenges (A and B in Figure 4.3) and lighting problems, mostly because of the grid density and distribution, which prevents focusing on a single part of the image. The big difference between the mean and median results is due to multiple left-right inversions. In this failure case, the IOU score can drop down to 0, reducing the mean but not the median, as such cases are a minority. These inversions are quite difficult to prevent in a pool (challenges C and D in Figure 4.3). This first baseline clearly shows the challenges and limitations raised by this new benchmark. Calibrating a pool, especially with different levels of zoom and multiple camera placements, is much more difficult than the standard Soccer WorldCup benchmark. Failure Cases In various situations, our trained model did not perform well. This can be observed by projecting a video in top view (without temporal smoothing), where misprojected frames appear obvious. One can also observe this phenomenon on the benchmarks, by displaying the images with the lowest scores. Wrong Classification First, there are frames where a majority of the detected landmarks are not correctly classified. They are rarely placed at a random position, so it is usually a classification error more than a segmentation (i.e. landmark placement) problem. Beyond a certain proportion of wrong points, RANSAC can no longer compute a coherent homography matrix. The result of an image projected in such conditions is not exploitable at all. An example is shown in Figure 4.6, center. Mirror Field A similar failure case is when part of the landmarks are wrong in a coherent manner. For instance, all the 15m markers are falsely classified as 35m markers, as in Figure 4.6, right. Here, the output is a plausible result, but with a left-right inversion error. This failure is harder to detect automatically than the previous one, either with a human eye or with a model trained to classify images as possible results or not, as it appears correct without the original image. It can be the cause of important errors. General Geometric Misunderstanding Both of these failure cases show an important limitation of our method: the model does not have an understanding of the pool's spatial disposition.
If 5m and 25m markers are detected with high confidence (and they tend to be, being easy visual markers), the model should not give a high probability that a 35m marker lies between them. This is not spatially coherent. However, the model fails to do this logical operation. An idea to solve this problem might involve Generative Adversarial Networks (GAN). Indeed, one can easily differentiate a labelled heatmap from a generated heatmap solely using this geometry criterion, so adding a discriminator's loss could force a spatially logical output. However, GANs are never simple to train, and we leave that to posterity. Important Zoom The last type of common failure is when not enough landmarks are detected. In the majority of cases, this happens when the level of zoom is too high and it is impossible to see enough markers or to find a scale reference. In these cases, even a human could not register the image: the space between swimming lanes gives the camera angle from the water surface, and the scale of the different elements gives the level of zoom, but it is not sufficient. One can only tell what the camera does not frame. To circumvent such a problem, one must not zoom too much on the pool, to keep enough spatial context. Temporal information can also help, as in [START_REF] Nie | A Robust and Efficient Framework for Sports-Field Registration[END_REF], but a tracking-based registration method is outside of the scope of this chapter. Discussion on the One-Shot Approach The advantages of a fast method may seem obvious, especially with good results as in our case, but one must consider the application first, before judging it. In Computer Vision (CV), the speed vs precision trade-off is ubiquitous, and sports field registration is no exception. Before developing a method, one should think about where they want it on this trade-off spectrum in the context of their application. Further, the speed we announce is in fact entirely dependent on the U-Net model we chose (here, the original one in [START_REF] Ronneberger | U-Net: Convolutional Networks for Biomedical Image Segmentation[END_REF]). To our knowledge, no extensive studies have been done on the precision loss as a function of model size in the case of registration. If speed were a higher constraint, one could reduce the model's complexity by removing a layer or reducing the number of filters. On the other hand, it is also possible to increase the U-Net model's size to improve the results, if that were more critical than speed. One could even use a few refinement steps for that purpose, as long as a target inference time is not exceeded. For all the results presented in Table 4.3, an arbitrary refinement limit is chosen; practically, it corresponds to a point where the improvement is too small to be considered worth the time spent. To create and optimize a tool, one must consider its actual needs in precision and speed before choosing a registration method. The one presented in this chapter is fully compatible with any other refinement method, as it can serve as an initialisation model. Depending on the final use of this work, one could use any of the many refinements proposed in the related works. Conclusion This work introduces an efficient and precise method for automatic sports field registration, which reaches very good performance and real-time inference speed. It is very well suited to online practices, such as live-stream broadcast analysis, or post-race performance review.
The RegiSwim 500 dataset has been introduced and made publicly available in order to push forward research on this registration challenge. Future works will include ways to further optimize the model's inference speed, and new methods to increase its precision. The model, however, is not perfect and cannot handle all the possible video cases. Several limitations have been listed, with propositions to address them. Further, it was shown that a simple temporal sliding mean can smooth the results and remove many anomalies, provided the majority of the video is calibrated correctly enough. Finally, the importance of speed in the context of such a method was discussed. In CV, speed and performance are both important aspects to consider; when developing a tool, one must always select which one to prioritize over the other. Chapter abstract Knowing the position of swimmers in a pool allows one to study them specifically, and in particular to count their swimming strokes. To that extent, this chapter introduces a context-agnostic unsupervised method to count periodicities in videos. Current methods are limited to a specific type of application (e.g. repetitive human motion). We propose a novel approach that provides a powerful generalisation ability since it is not biased towards specific visual features. It is thus applicable to a range of diverse domains that require no adaptation, by relying on a Neural Network (NN) model that is trained completely unsupervised. More specifically, it is trained to transform the periodic temporal data into a lower-dimensional latent encoding in such a way that it forms a cyclic path in this latent space. We also introduce a novel algorithm that reliably detects and counts periods in complex time series. Despite being unsupervised and facing supervised methods with complex architectures, our experimental results demonstrate that our approach reaches State of the Art (SOTA) performance for periodicity counting on the challenging QUVA video benchmark. Its remaining limits will be addressed by an additional method based on supervised training and an annotated dataset. This chapter is based on the work from: Nicolas Jacquelin, Romain Vuillemot, Stefan Duffner. Periodicity Counting in Videos with Unsupervised Learning of Cyclic Embeddings. Pattern Recognition Letters, Elsevier, 2022, (hal-03738161). Introduction We define periodicity as any phenomenon that happens multiple times in a similar way over time. Periodicity is ubiquitous in real-world scenes and occurs at multiple scales. In elite sports, the tracking of the athletes' motion is a key issue, and this motion is highly repetitive. In swimming, in particular, the stroke rate (defined in Section 5.5) is one of the most important metrics to assess race quality and infer other statistics (e.g. stroke amplitude). Combined with the swimmers' position in the pool, it provides important analytical data to a coach. In the automatic swimming race analysis pipeline, this task occurs after the detection, as shown in Figure 5.1. It is challenging for many reasons. First, two successive repetitions may significantly differ (e.g. stroke rate and amplitude change during a race). Second, the precise beginning and end of a cycle are ambiguous. Finally, there exist other artefacts, such as the different sub-cycles that may be mistakenly detected as cycles (e.g. the left and right arm strokes in freestyle).
Furthermore, the notion of periodicity is context-dependent: the same event in two different sequences might be periodic or not, depending on whether it is repeated. Therefore, the signal must be studied globally and not frame-wise. Estimating periodicity is particularly challenging with videos recorded under unconstrained conditions. Any spatial shift, background noise or viewpoint change results in important variations in the captured signal, which often makes it hard to automatically detect the dominant cycle. Although these problems can be alleviated with recent Machine Learning (ML) methods based on CNN, those models often require large amounts of training data [START_REF] Deng | Imagenet: A large-scale hierarchical image database[END_REF][START_REF] Lin | Microsoft COCO: Common Objects in Context[END_REF][START_REF] Hestness | Deep Learning Scaling is Predictable, Empirically[END_REF]. In the case of swimming periodicity counting, such a dataset does not exist. Being able to adapt to any domain also prevents the use of pre-training methods or context-specific datasets such as Kinetics [START_REF] Kay | The Kinetics Human Action Video Dataset[END_REF], as many other techniques of the SOTA do [START_REF] Dwibedi | Counting Out Time: Class Agnostic Video Repetition Counting in the Wild[END_REF][START_REF] Zhang | Contextaware and Scale-insensitive Temporal Repetition Counting[END_REF][START_REF] Zhang | Repetitive Activity Counting by Sight and Sound[END_REF][START_REF] Yin | Energy-Based Periodicity Mining With Deep Features for Action Repetition Counting in Unconstrained Videos[END_REF]. In particular, a NN trained on daily-life videos does not transfer well to the new domain of a swimming pool. Moreover, not all periodicity problems concern videos of regular human activities: there are other types of complex time series, like multi-source sensors monitoring air quality or biophysical activities [START_REF] Ubbo Van Baardewijk | Early Detection of Exposure to Toxic Chemicals Using Continuously Recorded Multi-Sensor Physiology[END_REF][START_REF] Vavrinsky | Application of Modern Multi-Sensor Holter in Diagnosis and Treatment[END_REF][START_REF] Kolumban-Antal | A Secure and Portable Multi-Sensor Module for Distributed Air Pollution Monitoring[END_REF], or 4D MRI videos (i.e. 3D images through time) of breathing lungs [START_REF] Hugo | Data from 4D Lung Imaging of NSCLC Patients[END_REF], active brains [START_REF] Bakas | Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge[END_REF] or beating hearts [START_REF] Tobon-Gomez | Benchmark for Algorithms Segmenting the Left Atrium From 3D CT and MRI Datasets[END_REF]. For these reasons, it is important to have a domain-agnostic method. This chapter presents a new domain-agnostic training method suited for temporal periodic data in general. With an adapted NN architecture, it could even be used outside of the video domain to study other types of multi-variate time series. Our approach is summarized in Figure 5.2. The model is trained directly on the test video itself, in an unsupervised manner. Once trained, it reduces the video to a periodic 1D signal and counts its repetitive patterns using a novel peak detection algorithm based on various signal processing techniques. This counting process is performed in a single step: it does not require testing different time scales or sliding a window through the whole signal to process it completely.
The computational cost is therefore greatly reduced compared to other methods based on transformer architectures [START_REF] Dwibedi | Counting Out Time: Class Agnostic Video Repetition Counting in the Wild[END_REF] or multimodal fusion models [START_REF] Zhang | Repetitive Activity Counting by Sight and Sound[END_REF]. Further, during this thesis, we kicked off the creation of a labelled dataset for supervised learning of swimming periodicity. This chapter will explain what composes it and how to train a model from it. Qualitative results will be shown and analyzed. Our main contributions in this chapter are the following: • an unsupervised method to train a NN with the triplet loss to encode any kind of video showing a periodic phenomenon (section 5.3.1), • a framework for automatic periodicity counting in videos, based on the analysis of a learnt embedding (section 5.3.2), • a swimming periodicity dataset and a method to train a model on it (section 5.5). Related Work To specifically address periodicity counting in daily-life videos, Levy and Wolf [START_REF] Levy | Live Repetition Counting[END_REF] proposed a 3D CNN architecture: the input is composed of 20 chronologically ordered images, each separated by N frames in the timeline. In this way, the temporal information is integrated into the input. They trained the model in a supervised way on synthetic data to separate the sequences along their temporal dimension. This feature-oriented method is robust to colour and lighting variations, but one needs to test several timescales (i.e. many different values of N) to obtain good results. Also, as with all supervised models, the performance directly depends on the dataset size and quality. Similar to our method, other works aim to reduce a video to a one-dimensional signal. Polana and Nelson [START_REF] Polana | Detection and Recognition of Periodic, Nonrigid Motion[END_REF] detected the pixels responsible for motion and considered them as temporal signals varying throughout the video. They extracted a signal period by detecting the peaks of its Fourier Transform. Yang, Zhang, and Peng [START_REF] Yang | Time-domain period detection in short-duration videos[END_REF] used a method based on pixel-wise joint entropy to estimate the similarity between a reference image and the other ones, resulting in a 1D temporal function. Runia et al. [START_REF] Runia | Real-World Repetition Estimation by Div, Grad and Curl[END_REF] introduced another method to convert a video into a 1D signal. They studied the main direction of the foreground's optical flow to create multiple 1D signals from its directional gradient components through a wavelet transform. Their paper also introduced the QUVA benchmark dataset for periodicity counting in everyday videos. More recently, Dwibedi et al. [START_REF] Dwibedi | Counting Out Time: Class Agnostic Video Repetition Counting in the Wild[END_REF] proposed a complex architecture mixing CNN and transformers [START_REF] Vaswani | Attention is All you Need[END_REF], trained in a fully-supervised fashion on the Countix dataset, which they introduced. In their experiments, they also trained their model on a considerable amount of synthetic data, obtaining impressive results, but unfortunately they did not publish this dataset. This method achieves good results on public benchmarks, but it is by far the most computationally expensive and data-dependent. Using the Countix dataset, Zhang et al.
[START_REF] Zhang | Repetitive Activity Counting by Sight and Sound[END_REF] proposed a multi-modal approach relying on sound and sight to improve the SOTA on the Countix benchmark. However, they did not evaluate it on QUVA. The work of Yin et al. [START_REF] Yin | Energy-Based Periodicity Mining With Deep Features for Action Repetition Counting in Unconstrained Videos[END_REF] shares some similarities with our work, as it also extracts periodic features from a video with a learning-based method, reduces it to a 1D signal, and counts the repetitions with an algorithm relying on the Fourier transform. However, their approach is not generalizable to other types of data since it uses a NN that is pre-trained on Kinetics [START_REF] Kay | The Kinetics Human Action Video Dataset[END_REF], a large annotated video dataset, in a supervised way. As such, they can only analyze conventional videos of 2D images, and the learnt visual features are domain-dependent, which may not give satisfactory results on other types of videos. In addition, the signal processing part of their method is quite different from ours. To detect the dominant frequency, it uses a specific multi-threshold filter in the frequency domain, with several empirically determined thresholds, and then detects the peaks in the signal reconstructed with the inverse Fourier transform. Our model is trained unsupervised and end-to-end, and our robust peak detection algorithm operates on the original 1D signal obtained from PCA. Zhang et al. [START_REF] Zhang | Contextaware and Scale-insensitive Temporal Repetition Counting[END_REF] proposed an approach based on a context-aware model. However, it is not designed to generalize to unseen domains: the method uses the Kinetics dataset [START_REF] Kay | The Kinetics Human Action Video Dataset[END_REF], where a separate model is trained for each sports type, resulting in excellent overall scores on public benchmarks. Finally, the work of Ferreira et al. [START_REF] Ferreira | Deep learning approaches for workout repetition counting and validation[END_REF] is also context-specific: it uses human pose classification to count repetitions of workout routines. This approach is suited but limited to the context of human motion repetition counting. As most of these methods are trained on a human motion video dataset (Countix being built on top of Kinetics), they are well adapted to human gestures and actions. However, this makes them (i) specific to videos and not any other type of input data and (ii) biased towards human motion. On the contrary, we designed our method to be applicable to any type of periodic data.

Figure 5.3: The anchor is at the center, the positive is on the smaller circle (not necessarily the same size each time), and the negative is outside of the bigger circle. a) The anchor is ϕ(t − 1), so ϕ(t) and ϕ(t + 1) are separated. b) The anchor is ϕ(t), so ϕ(t) and ϕ(t + 1) are drawn together. When the training starts, the negative can be on the other side of the big circle compared to the positive. But this situation is no longer possible when the constraint is applied to all the successive frames after convergence, as shown in c): a pseudo-linear path is naturally formed, as it is the only way to respect both the attraction and repulsion constraints imposed by the loss.

Unsupervised Periodicity Counting We introduce a novel unsupervised learning process, illustrated in Figure 5.2, Part 1, to encode a video in a way that highlights its periodic features.
For that purpose, a CNN is trained directly on the video to be analyzed. The resulting video embedding is a periodic signal that is processed by a novel algorithm to count its cycles. This new method does not follow the classical training/validation/test protocol. The different steps of the pipeline are described in detail in this section. Latent Representation Learning Before the model can be trained, one needs to group successive frames of the video. The frame at time index t is grouped with the frames t + 1 and t − 1, forming a triplet. Each frame belongs to 3 different groups (triplets) where it plays the 3 roles t − 1, t and t + 1, except for the first and last frames (because there is respectively no frame before it to be t − 1 and no frame after it to be t + 1). With T frames in the video, there are T − 2 triplets in the end. The output vector of the image at time index t is called ϕ(t). The images need to be embedded by the CNN in such a way that, in chronological order, they form a repetitive pattern in the latent space, i.e. a loop. This is achieved by using a continuity criterion and a periodicity criterion. The first forces the images' successive embeddings to be temporally ordered along a pseudo-linear path. The latter forces this path to contain repetitive patterns. To guarantee the continuity criterion, the triplet loss is used as the objective function:

L(A, P, N) = max(0, |ϕ(A) − ϕ(P)| − |ϕ(A) − ϕ(N)| + α) ,   (5.1)

where α ∈ ℝ is the margin, A is the anchor, P is the positive and N is the negative image. The purpose of the triplet loss is to make the distance between the embeddings of A and N larger than the distance between those of A and P, up to a minimum distance defined by α. Our approach defines the image at time index t − 1 as the anchor, t as the positive and t + 1 as the negative. The overall consequence of applying this training method to each value of t in the video is that each ϕ(t) is "pulled towards" its direct neighbors (ϕ(t − 1) and ϕ(t + 1)), and "pushed away" from its 2nd-degree neighbors (ϕ(t − 2) and ϕ(t + 2)). Therefore, the positive embedding is "placed" between the anchor and the negative one, with a tolerance of α, as explained in Figure 5.3. This forces the creation of a pseudo-linear path chronologically aligning the embeddings in the latent space. To guarantee the periodicity criterion, we rely on the property of CNN that two similar inputs will have similar outputs unless explicitly trained otherwise [START_REF] Geirhos | Shortcut learning in deep neural networks[END_REF]. With periodic videos, if one cycle has a period T, then the images at time indexes t and t + T have the same phase in the cycle and look alike. Therefore, each image has an embedding close to those of the other images corresponding to the same phase in the cycle. This cyclic behavior is illustrated in Figure 5.2, images 1a) and 1b). The resulting model closely fits the data it was trained on. Therefore, to get the most adapted latent space representation for a video, a model needs to be specifically trained on it (and no other videos). This requires some training time but, as explained in section 5.4.1, it is not too expensive. The training process has been presented using frames as a temporal unit, but it can be enriched with other information. In section 5.4.2, we show that adding the optical flow to a frame gives better results (i.e. frame t is enriched with the optical flow between frames t and t + 1). In this case, we concatenate the 3 image channels (RGB) with the 2 optical flow channels (direction & magnitude), resulting in 5 × W × H temporal unit tensors (W and H being the width and height of the video). This section presented a way to fit a video into a latent space, but it also works for other complex time series. Similarly to adding the optical flow, which is the variation of a frame with respect to the next one, one could add the gradient between successive temporal vectors to augment the information encoded by the model.
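A minimal PyTorch sketch of this training loop, assuming a model that maps a batch of temporal units to embedding vectors. For brevity, it processes all T − 2 triplets of the video in one step, instead of the mini-batches of 16 used in practice.

```python
import torch
import torch.nn.functional as F

def train_on_video(model, frames, alpha=0.2, epochs=30, lr=1e-3):
    """frames: (T, C, H, W) tensor of the (possibly flow-enriched) video."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        emb = model(frames)                      # (T, D) embeddings phi(t)
        a, p, n = emb[:-2], emb[1:-1], emb[2:]   # anchor t-1, positive t, negative t+1
        loss = F.triplet_margin_loss(a, p, n, margin=alpha)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```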
Cycle Counting After training, the images of the video are embedded in the latent space in such a way that they form a cyclic pattern. The next step, illustrated in Figure 5.2, Part 2, is to count these cycles. To effectively work in the frequency domain and apply common signal processing techniques, the model's output vectors have to be transformed into a one-dimensional signal. To do so, the embedding vectors of the M images are chronologically "stacked" to form a matrix, as in Figure 5.2, 2a). That is, if the latent space has D dimensions, the resulting matrix is of size D × M. A PCA projection is applied to the matrix to keep the most important combination of features. By only keeping the first component of the PCA, we obtain a 1 × M temporal signal S with periodic information, i.e. a recurring pattern as in Figure 5.4, corresponding to a repetition in the video. The subsequent algorithm uses the Fourier Transform to detect the signal's F main frequencies. These candidate frequencies are all tested by our proposed algorithm, named Max Detector, explained in the following. The main goal of Max Detector is to detect the maximum of each cycle in S and to save their time indices in a list named MaxList. These maxima are used to distinguish and count the cycles. We name f_i the currently analyzed frequency (one of the F detected by the Fourier transform), MaxList_i its corresponding maxima list, and T_i its corresponding period. Max Detector starts by finding the time index of the global maximum of the signal S, which is added to MaxList_i. We assume that neighbouring cycle maxima are approximately one period away from each other. Therefore, to find the next maximum, one creates a time window by shifting by T_i ± 10% from the current maximum. In this window, the local maximum is located and its time index is added to the list MaxList_i. This operation is performed again from this new local maximum, until reaching the signal's edge. This procedure is repeated twice, each time starting from the global maximum: once forward towards the end, and once backwards to the beginning of the signal. This is graphically explained in Figure 5.5 and formally explained in Algorithm 5.1. Once the F different frequencies have been processed, there are F different candidate lists MaxList_i. Each list is evaluated individually and the best solution is retained. To evaluate a MaxList_i, each of its local maxima is compared to its local region according to Equation 5.2. This score computes the proportion of elements in MaxList_i that correspond to the local maximum in half a period centered on them:

Score_i = (1 / L_i) · ∑_{k ∈ MaxList_i} [ S[k] = max(S[k − T_i/4 : k + T_i/4]) ] ,   (5.2)

L_i being the number of elements in MaxList_i (i.e. its length), k representing the different local maxima indices, and [·] denoting the indicator function. As a result, a list that contains each local maximum of the signal, separated by approximately T_i, has a score of 1. On the contrary, the more incorrect maxima a list contains, the lower its score. This is a simple heuristic, but it proved to be efficient. The list with the highest score is kept; its number of elements represents the number of cycles in the signal and therefore the number of repetitions in the video.
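The reduction from embeddings to the 1D signal S and its candidate frequencies can be sketched as follows. This is our own NumPy illustration of the PCA and Fourier steps described above; names and the frequency-selection details are assumptions.

```python
import numpy as np

def video_to_signal(embeddings, n_freqs=4):
    """embeddings: (M, D) chronological latent vectors.
    Returns the 1D signal S and its F candidate frequencies."""
    X = embeddings - embeddings.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    S = X @ Vt[0]                        # scores on the 1st principal component
    spectrum = np.abs(np.fft.rfft(S))
    spectrum[0] = 0                      # ignore the DC component
    freqs = np.fft.rfftfreq(len(S))      # in cycles per frame
    candidates = freqs[np.argsort(spectrum)[::-1][:n_freqs]]
    return S, candidates
```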
Experiments and Results

To compare our method with the current state of the art, we used the QUVA [START_REF] Runia | Real-World Repetition Estimation by Div, Grad and Curl[END_REF] and Countix [START_REF] Dwibedi | Counting Out Time: Class Agnostic Video Repetition Counting in the Wild[END_REF] benchmarks. Two metrics are used to evaluate the predictions, the Mean Absolute Error (MAE) and the Off-By-One Accuracy (OBOA):

MAE = (1/N) ∑_i |c_i − ĉ_i| / c_i
OBOA = (1/N) ∑_i [ |c_i − ĉ_i| ≤ 1 ] ,

where c_i is the true count, ĉ_i is our model's estimation on the same video i, and N is the number of videos in the dataset. The OBOA, introduced in [START_REF] Runia | Real-World Repetition Estimation by Div, Grad and Curl[END_REF], counts the proportion of correct predictions with a tolerance of 1. This margin serves to reduce the importance of rounding mistakes, as ambiguous cycle cut-offs can happen at both ends of the video.

Each model was trained independently on one video at a time. This means that for a dataset of 100 videos like QUVA, 100 different models have been trained and evaluated for each experiment (unless stated otherwise). The following sections describe the experiments performed on the two benchmarks and the results obtained with the two metrics.

CNN Architecture

During our test phase, we did not notice a significant performance difference between CNN architectures (we tried VGG19 and VGG11 [START_REF] Simonyan | Very Deep Convolutional Networks for Large-Scale Image Recognition[END_REF]; results are shown in Table 5.1). We also designed a straightforward CNN model with fewer layers than VGG11, as it trains better on the few images of the video clips. Our custom model is composed of 6 layers of 3 × 3 convolutions with ReLU activation [START_REF] Nair | Rectified Linear Units Improve Restricted Boltzmann Machines[END_REF], each layer doubling the number of filters (starting at 4, finishing at 128), with 2 × 2 max pooling [START_REF] Scherer | Evaluation of Pooling Operations in Convolutional Architectures for Object Recognition[END_REF] after each layer, and a final global average pooling giving a 32-dimension output vector. For each study, we trained a model for 30 epochs with a batch size of 16, a learning rate of 10⁻³ and the Adam optimizer [START_REF] Diederik | Adam: A Method for Stochastic Optimization[END_REF]. Under these conditions, the training took about 1.1 times the total duration of a video using an NVIDIA GTX 1080 GPU.
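A PyTorch sketch of this custom model is given below. The thesis text specifies the convolutional part precisely but leaves the reduction from the last layer's 128 channels to the 32-dimension embedding implicit; the final linear projection used here is therefore our assumption.

```python
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),                       # 2x2 max pooling after each layer
    )

class ShallowCNN(nn.Module):
    def __init__(self, in_channels=5, embed_dim=32):   # 5 = RGB + optical flow
        super().__init__()
        widths = [4, 8, 16, 32, 64, 128]       # filters double at each layer
        blocks, c = [], in_channels
        for w in widths:
            blocks.append(conv_block(c, w))
            c = w
        self.features = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveAvgPool2d(1)    # global average pooling
        self.project = nn.Linear(widths[-1], embed_dim)  # assumed 128 -> 32

    def forward(self, x):
        h = self.pool(self.features(x)).flatten(1)
        return self.project(h)
```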
Ablation Study

Our initial baseline CNN model takes just one image as input (variation "1 img" of Table 5.1). To improve performance, we enriched the input with the optical flow between two consecutive frames, similarly to Zhou et al. [START_REF] Zhou | Does computer vision matter for action?[END_REF], as mentioned in section 5.3.1. The new input is therefore made of an image concatenated with the optical flow from this image to the next one. This variation is named "flow" in Table 5.1.

To show the importance of our training policy, we used common CNN models trained on Imagenet [START_REF] Deng | Imagenet: A large-scale hierarchical image database[END_REF] to do the embedding, with only one image as input, as required by these architectures (they were not retrained on the cyclic videos' images). The obtained embeddings did not give easily exploitable cyclic curves, resulting in bad performance. With our training policy, however, the different CNN architectures all reached comparable results, our shallow model being better than the deeper ones. For all lines in Table 5.1 not stating a specific architecture, we used our custom shallow CNN.

In the Max Detector algorithm, we compare F different frequencies. As shown in Table 5.1, we studied the performance obtained for different values of F. The QUVA benchmark does not provide a specific evaluation protocol, so we used cross-validation on QUVA with 50/30/20 splits (i.e. random splits with said sizes were created to evaluate different values of the parameter F without changing anything else, especially the temporal input signal). The results were the same for the different splits: between 4 and 7, F seems to have little impact on the result, F = 4 being the optimum. On the other hand, Countix has a training dataset, which we used to compute the best value for F. The results were again similar between 2 and 7, with an optimum for F = 2.

Finally, we measured the importance of Max Detector by replacing it with another automatic peak detection algorithm, described by Scholkmann et al. in [START_REF] Scholkmann | An Efficient Algorithm for Automatic Peak Detection in Noisy Periodic and Quasi-Periodic Signals[END_REF]. It counts the cycles of the same signal as our Max Detector but performs significantly worse. This shows the effectiveness of our algorithm and the importance of a more specialized algorithm for periodicity counting.

Quantitative Results

Table 5.2 shows the results compared to other supervised and unsupervised methods. On QUVA, our model has the best MAE and OBOA of all the unsupervised methods. This is achieved with no prior bias or complex model, which demonstrates the efficiency of our framework. Moreover, even compared to supervised models, it is outperformed by only one model, by a small margin.

Regarding Countix, we would like to highlight a few major weaknesses of the dataset. First, many clips with only 2 repetitions cut out parts of the periodic actions (at the start or the end of the video), resulting in no fully repeated movement. Moreover, the shortest video is 0.2s, which corresponds to 6 frames at 30 fps. In our opinion, such video clips are too short to contain distinct repetitions. In addition, the choice to keep the same train/validation/test splits as originally in Kinetics seems questionable, each action category being represented in both the train/validation and test sets. To create a more context-agnostic dataset, it would be preferable to have specific test categories missing from the train/validation split, to challenge the generalisation of the method.

On Countix, our unsupervised method gives an OBOA better than Zhang et al. [START_REF] Zhang | Repetitive Activity Counting by Sight and Sound[END_REF]. In addition, we observed a recurring behavior in most of the "OBOA failures", i.e. the cases where |c_i − ĉ_i| ≥ 2.
Our Max Detector sometimes counts 2 repetitions instead of 1 for each cyclic pattern, thereby doubling the prediction compared to the ground truth. Indeed, a lot of ambiguity exists in the cycle count, the most usual being the "double action" that can be counted as either one or two periods. For instance, on a freestyle swimming clip, the annotated ground truth cycle can either be one "left and right arm movement" or only one "arm movement", depending on the labeller. Such ambiguity can often not be managed by context-agnostic methods, which will "guess" the answer between N and 2 × N cycles when it occurs. This partly explains the difference between our score and supervised methods' scores, which are specifically trained to choose correctly in these ambiguous contexts.

This problem artificially increases the MAE in an "unsymmetrical" way. If the truth is 10 repetitions but our model gives 5, MAE = 0.5; in the opposite case (truth 5, prediction 10), MAE = 1. We could use the Normed MAE (NMAE) as a new metric, as it does not suffer from this "unsymmetrical" issue:

NMAE = (1/N) ∑_i |c_i − ĉ_i| / max(c_i, ĉ_i)

On QUVA and Countix, the NMAE of our method is respectively 0.158 and 0.345.

Application to 4D videos

Many applications in medical imaging deeply rely on 4D videos (i.e. 3D images through time), acquired with Magnetic Resonance Imaging (MRI) for instance. However, SOTA periodicity counting methods cannot analyze them, as their models can only input regular videos made of 2D images. They could circumvent the problem by individually processing each 2D slice of the 3D images, but in doing so contextual data is lost and many model inferences would be required. In the end, one count per slice would be obtained and further post-processing methods would be needed to determine the final result.

On the other hand, our method can perform 4D video analysis with no loss of context, as the model is created with the data itself. Adapting the CNN architecture is straightforward in this case: the 2D convolutions are replaced by 3D convolutions. The rest of the training method is unchanged and the results obtained by our approach are as good as for conventional videos. Figure 5.6 gives an example of a 4D MRI video, from the results of [START_REF] Schnell | Improved Semiautomated 4D Flow MRI Analysis in the Aorta in Patients With Congenital Aortic Valve Anomalies Versus Tricuspid Aortic Valves[END_REF], showing a beating heart. The 1D signal obtained by our method is extremely smooth and easy to interpret. Although further quantitative evaluation would need to be done, these promising results represent a proof of concept that the method generalizes well to other types of data.
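As a sketch of how little the adaptation requires, the 2D building block of the embedding model only needs its operators swapped for their 3D counterparts (a possible PyTorch variant, assuming the block structure used earlier):

```python
import torch.nn as nn

def conv_block_3d(c_in, c_out):
    # Same block as in the 2D model, with 3D convolution and pooling so the
    # input can be a (C, depth, height, width) volume instead of an image.
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool3d(2),
    )
```

The triplet-based training loop itself is unchanged: only the shape of the temporal units differs.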
Going Further with Supervision

In our first discussions with the Fédération Française de Natation (French Swimming Federation, FFN), race summary sheets were used as examples of what they expected, where the number of cycles per 50m was a metric. This metric enabled the coaches to grasp the general quality of swimming. It allowed studying the periodicity evolution during a long race (e.g. 1500m) or comparing swimmers. However, coaches also want the exact beginning and end of each cycle. This allows them to measure how the distance per cycle evolves through a pool length, to identify possible local problems. Our periodicity counting method was not tailor-made for this finer-grained metric.

Coaches define the extremities of a cycle using a specific position of the swimmer, which depends on the style. For breaststroke, it is when the head is at its highest; for the other styles, it is when the right arm enters the water (or both arms for butterfly). These are conventions, which cannot be found in the raw data. As such, they require some sort of supervision.

Supervised Swimmer Strokes Detection

The proposed supervised method relies on the same principles as the previous one: the input is a cropped video of one swimmer, and the output is a 1D temporal signal. It relies on a dataset composed of swimmer crops associated with their "instantaneous phase". To compute this phase, we label each image where a cycle ends and associate it with the value 1. The intermediate frames are associated with the values of a sinusoid between successive cycle ends, as explained in Figure 5.7. The trained model inputs a frame and learns to output the associated value. The exact input can either be a single frame, two successive frames, or one frame and the optical flow, as for the unsupervised method. The resulting sinusoid is then analysed using the Max Detector algorithm.
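The following is a small NumPy sketch of how the per-frame regression target could be generated from the labelled cycle ends; the helper name is ours, frames outside the annotated cycles are simply left at 0, and the values are directly rescaled to [0, 1] (which, as noted in the qualitative results below, has no impact on the result).

```python
import numpy as np

def phase_targets(num_frames, cycle_ends):
    """cycle_ends: sorted frame indices labelled as cycle extremities."""
    y = np.zeros(num_frames)
    for start, end in zip(cycle_ends[:-1], cycle_ends[1:]):
        t = np.linspace(0.0, 2.0 * np.pi, end - start + 1)
        # Sinusoid worth 1 at both extremities, dipping to 0 mid-cycle.
        y[start:end + 1] = (np.cos(t) + 1.0) / 2.0
    return y
```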
However, as the new model focuses on swimmers, heuristics can be used. Cycles are bounded by biophysical limitations and have a duration of around one second. In our dataset, the extreme values are 0.92 Hz and 1.09 Hz. Therefore, we limit the search for a frequency maximum to a minimum of 0.85 Hz and a maximum of 1.15 Hz to give some margin. This prevents the algorithm from finding out-of-bounds false positives.

The sources of this dataset are competitions filmed by members of the Neptune project. Coaches analyzed the filmed races and we were able to synchronize their data with our videos. It was therefore possible to create a labelled swimming strokes dataset with no extra annotation. Furthermore, as long as the FFN keeps analyzing videos filmed by our team, we can incorporate their data and the dataset keeps growing. However, the number of championships filmed is fairly small (6 during the writing of this manuscript). Although dozens of races of each swimming style can be added to the dataset, they are not very varied, which is a critical flaw limiting the generalization ability of the model. The camera point of view and the lighting conditions are also very similar in the videos, reducing the domain representation. The videos have varied frame rates, between 25 and 60 fps.

In all cases, the proportion of frames associated with the value 1 is under-represented, despite arguably being the most important. Indeed, the final goal is to detect the extremities of the cycles, where the value is equal to 1. As such, confusing intermediate values is less problematic than confusing an extremity with an intermediate. This problem was addressed by over-representing extremities in the dataset: a third of the training images are extremities, and the other two-thirds are regularly spread intermediate images.

Qualitative results

As this work started at the end of the thesis, only a proof of concept has been realized. The periodicity regression model is made of 8 sequential convolution layers with 8 to 64 filters. This simple model was chosen to increase the training speed on the one hand, and because the available training data was limited due to time constraints during the implementation of the idea on the other. This model is also close to the one used with the unsupervised method (a few straightforward convolution layers). We trained with videos from only 1 competition and tested on a race from another one, in another pool. The result is shown in Figure 5.8. Note that we rescaled the value range from v ∈ [−1, 1] to v ∈ [0, 1], which has no impact on the result. The curves are not perfect sinusoids, but a clear periodic pattern emerges with distinct peaks at 1. The first detected peak corresponds to a diving phase, where no cycle has started yet; this is a false positive one can threshold out by only starting the analysis after the 15m mark, where the swimmer must have started swimming. Visual examination shows that the frames identified by Max Detector correspond to cycle edges. Quantitative metrics have not been used, as the number of examples is too limited and this is just a proof of concept. The results of this preliminary work are encouraging and suggest we are heading in a good direction.

Discussion and Perspectives

The methods proposed in this chapter show promising results for our swimming application, but also for general periodicity counting in videos.

Good Unsupervised Results

Despite being unsupervised and based on a shallow model, our method gives results comparable to SOTA supervised techniques with complex architectures. Due to its nature, it can work on any kind of video, even ones that differ considerably from daily life (aeronautics, medical, astrophysics, etc.). Moreover, with an appropriate NN architecture, it can also perform well on other temporal data, such as 4D videos, biological sensors, and audio.

Promising Supervised Results

The supervised method also shows promising results for separating cycles when the swimmers are in precise, given positions. From a more applied point of view, this is encouraging for the automatic swimming race analysis tool. The in-progress creation of the supervised periodicity dataset is also encouraging, as it will lead to better and better results.

Important Limitation to Address

Periodicity counting in the case of swimming, however, has an important limitation that cannot be addressed by either of the two described methods: the dependency on swimmer detection. If the detection is imprecise, another level of difficulty is added to the task. A shaky detection might introduce periodicity in the placement of the swimmers inside the cropped videos, periodicity that would not depend on their actual movement. The supervised method could address it by applying extensive shift-based data augmentation, but by artificially increasing the problem's difficulty, a bigger model may be needed. Also, a detection system centered on the backside of swimmers might give crops with undefinable style and periodicity. Likewise, the detection may be too wide, to the point of framing multiple swimmers at once, in which case our periodicity counting method would be lost.

The detection system responsible for crop extraction must be particularly smooth. This might mean not always centering the same point (the head), which can be subject to rapid local translations. As such, the detection method for crop extraction must be a bit different from the one responsible for measuring the instant position of swimmers in the image. The first might be derived from the other using heuristics such as temporal smoothing to improve the method's results on the task.

Another limitation of this solution is the need to extract crops from the video. With 8 swimmers in the race, 8 sub-videos must be generated. Cropping and resizing are two computationally long tasks: they are an order of magnitude slower than a U-Net model inference. As such, periodicity counting based on crop extraction is the temporal bottleneck of the pipeline.
Possible Solution

All the methods referenced in this chapter, both ours and the ones from the related works, make the same strong hypothesis: there is only one periodic element in the video. This causes the two limitations previously mentioned (important dependency on detection and slow cropping process). A method directly taking the uncropped swimming race video as input and outputting periodicity information for each swimmer would therefore address the two limitations at once. Although we did not work on this idea, one could imagine a solution based on (yet another) U-Net-based model, where each area containing a swimmer would output the swimming phase. The output could either be a value between 0 and 1 (phase regression), or there could be multiple channels, each associated with a range of phases (phase classification). We do not have the data to train a model based on this method, but if the supervised periodicity dataset develops enough, that might become the case.

Conclusion

We introduced a framework to count repetitions in periodic videos. This method is outside of the usual training set / validation set / testing set paradigm, as the training is directly done on the test data. We believe that such an unsupervised approach may be of increasing importance in the future for different applications, to reduce the need for big datasets and complex architectures.

Chapter abstract

This chapter sums up the different contributions we presented in this manuscript. They have their individual limitations, which we discuss here. We also describe important future work that aims at improving the methods and reducing their limits by combining them all: a proposed strategy for fully automating the video analysis. We also address more general perspectives and limitations of the thesis. Unaddressed challenges are mentioned, with leads to tackle them.

Summary of the Contributions

This thesis introduced multiple contributions for the automatic analysis of swimming race videos. They brought different domains of Computer Vision (CV) together, like object detection / segmentation, registration, and unsupervised learning. Two datasets were also introduced to help the community move forward and create new methods.

Swimmer Detection from Unconstrained Videos

The first contribution we made tackled swimmer detection in [START_REF] Jacquelin | Detecting Swimmers in Unconstrained Videos with Few Training Data[END_REF]. The Swimm 400 dataset, also presented in the paper, contains 400 labelled images of swimmers with around 3100 bounding boxes. To the best of our knowledge, no publicly available swimmer detection dataset existed when Swimm 400 was released, and we hope it will push the sports analysis community forward. The dataset is varied, containing several environments, pools, viewpoints, angles, etc. It can be used for swimmer detection without external constraints like camera positioning or type of pool. Swimm 400 is well-crafted, but it is orders of magnitude smaller than the usual detection datasets [START_REF] Lin | Microsoft COCO: Common Objects in Context[END_REF][START_REF] Deng | Imagenet: A large-scale hierarchical image database[END_REF]. Consequently, classic object detection techniques [START_REF] Redmon | YOLOv3: An Incremental Improvement[END_REF][START_REF] Shaoqing Ren | Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks[END_REF] do not perform well on it.
Swimmer detection is thus addressed using a variation of the classic segmentation architecture U-Net. Our tiny-U-Net model is smaller and shallower, in order to manage the training set size. To train it, we converted the bounding boxes into heatmaps by creating ellipses inscribed in the boxes. This heuristic worked efficiently to detect swimmers with good precision. This is weakly supervised learning (bounding boxes are a lower level of annotation than segmentation masks), and it was shown that a rough segmentation of swimmers is possible with a high enough input image resolution.

Real-Time Pool Registration

Our second contribution addresses automatic sports field registration techniques, in [START_REF] Jacquelin | Efficient One-Shot Sports Field Image Registration with Arbitrary Keypoint Segmentation[END_REF]. This task is also fundamental for swimming analysis: once combined with swimmer detection, it allows the positioning of swimmers in the pool. Automatically addressing this task gives more adaptability to our analyses, as we do not have to rely exclusively on static videos that must be manually calibrated. As such, videos filmed before this thesis can be analyzed, whereas they could not before due to their panning, zooming, or moving camera.

As the main registration benchmark, Soccer WorldCup, is not about swimming pools, we again introduced a dataset, named RegiSwim 500. It contains 500 images of pools and their corresponding homography matrices to perform a top-view projection of the pool. Contrary to the other benchmark, it contains very different levels of zoom and camera angles, making it more challenging. Thus, it enables the training of more adaptable models, which do not have to rely on specific filming properties.

The model relies on the U-Net architecture to position landmarks on the image. Each is associated with a position in the pool, forming a pair of points. Detecting enough of them (dozens per image in practice) allows us to use a consensus algorithm to estimate the homography matrix. This consensus-based algorithm gives some robustness to the model, which, contrary to other sports field registration methods, does not rely on refinement. As such, it is both precise and fast, even without a state-of-the-art Graphics Processing Unit (GPU).

Periodicity Estimation on Cropped Videos

In this thesis, we also studied periodicity counting and stroke-end detection, in chapter 5 and in [START_REF] Jacquelin | Periodicity Counting in Videos with Unsupervised Learning of Cyclic Embeddings[END_REF]. This method relies on unsupervised training to create a model that is able to embed a video so that it forms a cyclic path in a latent space. Using other signal processing techniques, we can count the number of repetitions in a video. The idea is to train a model on the test data, without labels. Such an approach is original and enables the creation of models specifically fitting a given type of input (videos, sequences...) and domain (swimmers, daily-life activities...).

Specifically for swimmers, we also trained a model for detecting the end of swimming cycles. It inputs a crop around a swimmer and outputs a value between 0 and 1 describing the swimmer's phase. For our specific application, such a model is very important, as it enables the coaches to evaluate their swimmers based on their stroke rate.

Limitations and Proposed Solutions

The methods we presented in this thesis could be improved, as explained in their respective chapters. The way the videos are filmed also presents problems for the later analysis.
In this section, we address their limitations and present ideas for solutions.

Benefits from Combining the Models

Some performance limitations of our models could be directly addressed by putting them together. By integrating them inside a unified system, we could maximize their positive interactions in order to increase precision. Figure 6.1 illustrates an idea of such a unification (note that it changed compared to the previous chapters).

Registration on a Race Video

The current registration model is not accurate enough to give usable results if the cameras are panning and following the swimmers. Indeed, it needs spatial context, which can only be obtained by filming at least half of the pool at all times. For this purpose, the Neptune project uses two static cameras, each filming half of the pool. However, even then, the model gives different results depending on the input frame, as shown in Figure 6.2. This is likely due to small variations from the waves and the reflections that change the local distribution of pixels. This creates important coherence issues, as the same position on two frames from a static camera would be mapped to different places in the pool.

We could circumvent the problem by finding a single matrix that is used for the entire video. To accelerate the process, we could apply our registration model on only a few frames. The challenge is to find the best homography matrix among the several estimated by the model. We could use DBSCAN [START_REF] Ester | A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise[END_REF] to cluster them (a sketch of this consensus step is given at the end of this subsection). We can assume that although many images can have a wrong registration, they tend to fail in different ways (see the bottom line of Figure 6.2). However, as there is only one way to be correct, we could find the optimal solution by simply selecting the most numerous cluster of homography matrices. We would thus obtain one group of similar homography matrices (good regressions) and several significantly smaller groups of very different matrices (failure cases). This would alleviate the weakness of the registration model. Within the selected group, each parameter could be averaged to get the optimal homography matrix.

An extra module could also be added to judge the quality of the registration. If it is not satisfying enough according to its criterion, the module could warn a human user and ask them to perform manual registration. It could be a simple model fed with top-view videos, trained to output 1 or 0 depending on whether the registration is precise or not. If the result on a video were below a threshold, this would trigger a warning addressed to the user. The threshold could be manually defined by the user depending on the level of precision they want.
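The following scikit-learn sketch illustrates this consensus idea; the function name, the eps value, and the naive parameter-wise averaging of the winning cluster are our assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def consensus_homography(homographies, eps=0.05):
    """homographies: 3x3 matrices estimated by the model on sampled frames."""
    # Normalise each matrix (H[2, 2] = 1) so their parameters are comparable.
    X = np.stack([(H / H[2, 2]).ravel() for H in homographies])
    labels = DBSCAN(eps=eps, min_samples=2).fit_predict(X)
    valid = labels[labels >= 0]
    if valid.size == 0:
        raise ValueError("no consensus; fall back to manual registration")
    # Failures scatter while successes agree: keep the most populated cluster.
    best = np.bincount(valid).argmax()
    H = X[labels == best].mean(axis=0).reshape(3, 3)
    return H / H[2, 2]
```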
Detection Aided by Registration

We could use a new detection pipeline, as illustrated in Figure 6.3, relying on the top-view to detect the blobs on the heatmap. With this idea, we would first reconstruct the entire pool using the two cameras' registration, and isolate the different swimming lanes. In each lane, we could locate the blob of highest probability. This would alleviate the weak detection on the top (i.e. farthest) lanes, where a threshold of 0.45 (optimal, as shown in chapter 3) is too high for this specific area. The problem of swimmers' blobs merging would also be addressed by the lanes' separate analysis.

A second-stage swimmer detector could be added to confirm that a given blob actually contains a swimmer. It could be used to select the best blob of a lane as an alternative to the highest-probability heuristic. However, this would increase computation time significantly, as each blob's area would have to be extracted, resized, and then fed to the model.

A post-processing step could finally be added to refine the swimmers' positioning. With a unique swimmer detected on each frame in each lane, their position through time becomes obtainable. It could be smoothed out and outliers could be removed using simple signal processing methods. This would result in a monotonically increasing curve, corresponding to the position of the swimmer starting at 0m and finishing at 50m.

Stabler Crops for Periodicity Estimation

A major limitation of periodicity counting is its need for robust swimmer crop extraction. We could use the smoothed positioning coming from the combination of detection and registration to extract stable, non-shaking sub-videos around each swimmer. The periodicity model would thus have almost no external perturbation and could function optimally.

Increasing the Acquisition Speed

Before analyzing a video, one must have access to it, which is slow with the current way the races are filmed during competitions. It could be changed to address real-time (or near-real-time) constraints, and allow for faster feedback to the swimmers.

Towards a Real-Time Video Analysis Acquisition Process

Currently, cameras film different races and their memory cards are regularly switched with empty ones. The full cards are moved to the FFN performance division cell and their content is uploaded to a computer, where it can finally be analyzed. In any case, there is a delay between a race and the moment an analysis can be completed. This filming process was initially copied from the performance division, which also films and analyzes videos. However, it is not adapted to our needs for quick feedback. A better way could be setting up live streams from the cameras to the processing computer. The analysis could thus start during the race and be over soon after it, as the method can operate fast. It would allow for direct feedback from the analysis system to the coaches and swimmers, but also from them to us, to improve our tool. Indeed, such faster availability of results would encourage both sides to interact more, which is mutually beneficial.

Acquisition Resolution

Races are shot in 4K. However, both the time to load an image into memory and the time to resize it increase with the input size. Filming at a lower resolution, such as 720p, would substantially decrease this time. As the images fed to the models are resized to (256 × 256) pixels anyway (as shown in Table 3.2 in chapter 3), it would not change the analysis accuracy. For periodicity counting, videos cropped around a swimmer do not require 4K to perform well. If, in the future, other models needing a close-up of the swimmer (e.g. for swimmer pose estimation) are used, higher quality videos may be required. For now though, it is not the case.

Perspectives and New Challenges

This work proposed different new methods, ideas, models, and datasets. As such, it unlocked some domains of swimming analysis that can be explored in future works. There are also domains we could have explored but ended up not addressing for lack of time. This section describes what these works could be and what they would have brought.
Guided Annotation Tool

Creating an extensive dataset is time-consuming, even with a good labelling tool. To alleviate this problem, we worked on a tool of our own to help swimmer annotation, illustrated in Figure 6.4. This tool relies on an already trained detection model, like the one presented in chapter 3. It works like a regular annotation tool, but bounding boxes are already placed by the model. The user can quickly adjust these boxes if they are almost correct, delete false positives, and create new boxes. During the conception of this tool, our priority was labelling speed, as it is the most cumbersome problem of annotation. Other than that, the tool's key feature concerned the threshold of the model, which has a different optimum depending on the level of zoom and other similar variations. With distant swimmers, it is generally beneficial to have a low threshold. Therefore, while annotating a race distant from the camera, the user could set the threshold to a low value to have better suggestions, and adapt it again for the next race.

However, this tool was very incomplete: a lot of interesting features that could greatly improve the labelling speed were never implemented. For instance, the user had to fine-tune the detection model with the new bounding boxes themselves, even though it would have been better to do it automatically during the labelling process: the model would get better and better with each new annotated image. Likewise, the image order is also an important factor if one wants to continuously train the model. Rules based on image variety (distance, lighting, angle...) would have been interesting to explore and create. Finally, a model could also predict the most interesting threshold depending on the input image, still enabling the user to adjust it as they wish.

Temporal Data for a Better Context Understanding

In this thesis, detection and registration were based on models performing frame-by-frame operations, but no temporal data were used. One could enhance them using either past or future frames to detect the swimmers. Such an approach would probably give significantly better results, as it is not rare that waves and diffraction temporarily occlude swimmers, making them almost impossible to detect using only one frame.

A temporal approach could also be used for registration. Some points of view lack information to allow for robust and confident homography estimation. However, using the landmarks visible in the previous frames would circumvent this problem. Freeing us from the constraint of static videos would significantly increase the generalisation of the model.

This could be implemented by feeding the models several successive images stacked together instead of only one. The first layers of the model could be replaced with 3D convolutions to better handle this new input. No extra annotation would be required, as the output could still concern only one frame. However, we would have to find the frames preceding the annotated ones in the videos that served to create our datasets.

More Data for Better Models

Additional data can be created by labelling videos as they are captured in competition with static cameras. New data are always appreciated by the community and can substantially increase the performance on different tasks. In particular, u-turns and underwater swimmers could be better detected than they currently are, as they are not very frequent in Swimm 400.
Therefore, the most interesting images to label are those with angles more similar to our use context and with "rare" swimmer positions.

Weakly Supervised Learning for a Multitask Model

The model fed with crops of swimmers giving information on their phase could be extended using weakly supervised learning algorithms. Instead of crops, whole images could be the new input, and the model would give phase information only about the areas with swimmers. If a model like this were created, it would unify swimmer detection with periodicity analysis. In fact, the three models could be merged into a single general U-Net architecture outputting registration, detection, and periodicity information all at once. This would increase speed by merging the different models and removing the costly crop operations.

Swimmers Pose Estimation

Swimmer pose estimation was not addressed at all during this thesis, as many intermediate works were required before it. However, based on our existing swimmer detection method and on progress in the domain, it could be feasible in the near future. Nevertheless, one must be realistic about the limitations of underwater joint detection in uncontrolled filming conditions. There likely are optimal camera points of view for this task, but as we barely control the camera placement during swimming events, this will be a strong constraint.

Vision Transformers

Vision Transformer architectures [START_REF] Dosovitskiy | An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale[END_REF] might give better results for registration tasks. One could try using them instead of Convolutional Neural Networks (CNNs). Indeed, this task relies on landmarks that must be identified in context with the entire image. As transformer models are considered better than CNNs at studying images globally, this could lead to better results. One could also use transformers to study the videos as sequences and introduce continuity in the analysis. However, this architecture currently needs significantly more data than CNNs [START_REF] Stéphane D'ascoli | ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases[END_REF], even though recent advances reduce this limitation [START_REF] Touvron | Training data-efficient image transformers amp; distillation through attention[END_REF] (at heavy computational costs). As the positioning of our work concerns small datasets and computationally light methods, transformers are incompatible with it for now.

Unaddressed Challenges

There are tasks mentioned in the Introduction that were never directly tackled during the thesis, despite being important for race analysis. Although most could be addressed without too much additional work, they are not currently part of our automatic analysis method. These tasks are swimming phase segmentation, swimmer identification, and camera temporal synchronization. They are developed in this section.

Swimming Phase Segmentation

Phase segmentation consists in knowing what specific action the swimmer is performing (diving, normal swimming, u-turning, etc.). Currently, when a swimmer is detected, we only extract crops to study their periodicity. The model does not inform whether the swimmer is breathing or touching the edge of the pool, although these are very important analytics. We could create a new model that would be fed crops around the swimmers and output the swimming phase. Transfer learning based on the swimming periodicity model can be used to train it with little data.
This would be a classification task where the states (i.e. classes) can comprise diving, breathing, touching the edge, the swimming style, and other aspects of a swimmer that can be extracted from a crop.

Swimmer Identification

Identifying a swimmer during a race is very challenging, especially with videos that are not particularly zoomed in on them. Face recognition is irrelevant, as the swimmer's head is underwater most of the time. One could, however, use classification techniques that would take the entire body as input. Swimmer identification would thus not rely only on the face, but on the body shape and general swimming appearance. This method was illustrated in chapter 2 with Figure 2.17.

Cameras Temporal Synchronization

As we propose to film the pool with two cameras, they must be temporally synchronized to create consistent analyses. To that end, we could rely on the audio: the soundtracks from the cameras could be cross-correlated with each other, the highest correlation peak corresponding to the time shift between the two recordings.
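A minimal SciPy sketch of this idea follows; it assumes both tracks were resampled to a common sample rate, and the function name is ours.

```python
import numpy as np
from scipy.signal import correlate

def time_shift(audio_a, audio_b, sample_rate):
    """Estimate the shift (in seconds) between two mono soundtracks."""
    corr = correlate(audio_a, audio_b, mode="full")
    lag = int(np.argmax(corr)) - (len(audio_b) - 1)
    # A positive lag means the common events occur later in audio_a.
    return lag / sample_rate
```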
MediaEval Challenge

Finally, we wanted to call on the CV community to help with the models. Combining the ideas from multiple sources, each improving on the others, is a great improvement perspective. To that extent, we published a swimming image analysis challenge at the 2022 MediaEval Workshop. It introduces a dataset created by merging the FFN analysis data with videos we recorded, manually calibrated, and synchronized. Coaches thus provide analytics in the pool coordinate system and we project them back into our videos to create rich annotations. This allows us to create temporal data for swimmer detection and periodicity counting. We also have access to the official times and rankings of the races. The resulting challenge addresses these tasks, listed as follows:

• Swimmer Tracking: the objective is to find swimmers in an image. Contrary to the Swimm 400 dataset, here the head is labelled and there are temporal data, so a tracking model can be used.
• Strokes Detection: the input is a video cropped around a swimmer and the task is to tell at which frames the cycles end.
• Pool Registration: using the previously introduced RegiSwim 500.
• Score Boards Reading: images of score boards are provided. The purpose is to read them, that is, to output the name, ranking, and time corresponding to each line.
• Start Buzzer Detection: the audio from different races has been recorded and the challenge is to detect the start buzzer, which is labelled with a 1/100th of a second precision.

Although we already addressed the first three tasks, the new source, nature, and amount of data offer new opportunities for technical innovation. More information is available at https://multimediaeval.github.io/editions/2022/tasks/swimtrack/.

Conclusion

This thesis tackled the challenge of automatically analyzing swimming races using videos. The created methods have to be generic enough to be used in competition situations, where one cannot place sensors on the swimmers and where even the camera's position is constrained. To do so, we chose to focus on CV techniques, which seemed the most adapted. The general method was divided into 3 tasks: detection, registration, and periodicity counting. This resulted in different models, each with its strengths and weaknesses. We hope this work will be extended and used by swimmers and coaches.
In the line example, the curves from each point of the line intersect in a single point corresponding to the line parameters, which are not directly obtainable from the image. Bottom: Hough line detection applied to a pool (after edge detection and thresholding) to detect its buoy lines. The red line does not represent a line in the image and appears solely because of noise. . . . . . . . . . . . . . Figure 2 . 4 24 Figure 2.4 Scale Invariant Feature Transform (SIFT) descriptors creation. The local gradient is computed on a 16 × 16 region around the landmark's position. Their orientation distribution (among 8 possible angles) is computed and the dominant one is retained. Then the process is repeated for smaller 4 × 4 regions inside the area. These local orientation distributions are rotated accordingly to the main orientation as angle normalization. This results in 4 × 4 × 8 = 128 values, i.e. the SIFT embedding vector. . . . . . . . . . . . . Figure 2 . 5 25 Figure 2.5 The perceptron and a Multilayer Perceptron (MLP) architecture with 2 hidden layers. . . . . . . . . . . . . . . . . . . . Figure 2 . 6 26 Figure 2.6Illustration of domain definition and data bias. The swimmer domain in the data (right) only represents a small portion of the entire swimmer domain (center). For a model trained on this data, the farther an image is from the training domain, the less likely it will be identified as a swimmer.For instance, Superman in swim briefs represented here will hardly be identified properly if the domain only contains classic images of swimmers. The domain thus needs to be as wide as possible for a given class. Further, data biases appear when the classes are unequally represented.In this example, there are more examples of males than of females, which causes problems for the future model's representation. . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 2 . 8 28 Figure 2.8Stacked convolutional filters forming one layer of a CNN.The "⊛" symbol represents convolution between the image and the filters. The red square contains visual features, not easy to understand for a computer. The corresponding red vector, on the layer output, contains these information in a more understandable form for machines. . . . . . . . . . . 21 Figure 2 . 14 31 Figure 2 . 15 21431215 Figure 2.14 The U-Net architecture, from [61]. . . . . . . . . . . . . . . . 31 Figure 2.15 Different views from a classic TV stream. Apart from the leftmost, they are very different from what coaches are used to. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 Figure 2 . 16 216 Figure 2.16The different levels of supervision. Datasets domains are represented each with a specific colour. There are different levels of label, here represented by the cylinders' edge. Although each type of supervision has a different complexity of training data, they all aim at performing the detection task. Note that for transfer learning, the "big" domain is distinct but close to the target domain. . . . . . . . . . . . . Figure 2 . 19 2D 219 Figure 2.19 2D PCA of embedding vectors from encoders trained on the MNIST dataset. The colours represent the different classes. Left: the model is trained using metric learning. Right: the model is trained with an additional classification layer with softmax activation. Extracted from [111]. . . . . . . . . . . . Figure 2 . 20 A 220 Figure 2.20 A semi-supervised pipeline applied to detection. 
Phase 1: an autoencoder is trained on a big unlabelled image dataset to create a feature extractor. Phase 2 (transfer learning): the encoder's weights are frozen and layers are added on top of it and trained on a small labelled detection dataset. . . . Chapter 3: swimmer detection Figure 3 . 1 31 Figure 3.1The swimming race analysis method. The detection part is tackled in this chapter. It is arguably the most critical bloc, as both swimmers placement in the pool and periodicity analysis depend on it. . . . . . . . . . . . . . . . . . . . . . . Figure 3 . 2 47 Figure 3 . 3 49 Figure 3 . 4 50 Figure 3 . 5 53 Figure 3 . 6 54 Figure 3 . 7 56 Figure 3 . 8 32473349345035533654375638 Figure 3.2Representative examples of the edge fuzziness problem with the different styles and the diving. Although for breaststroke the framing is generally well-defined, for other contexts it is not. The swimmers create waves and splashes keeping an observer from knowing the exact boxing of their body. Even with an unexcited water surface, diffraction deforms the observations and shifts the visible position of a swimmer from their actual position. . . . . . . . . . . . . 47 Figure 3.3 The Faster Region-Based Convolutional Networks (R-CNN) algorithm. Left: the entire architecture. The extracted features are used by the Region Proposal Network (RPN) and by the classifier. Right: detail of the RPN. It uses the features both to classify each area as Regions Of Interest (ROI) or background and to refine its detections. Anchors refinement generates 4k values (i.e. 4 values per anchor) that correspond to (width, height, x shift, y shift) × k anchor boxes. Figure adapted from [134]. . . . . . . . . . . . . . . . 49 Figure 3.4 An illustration of You Only Look Once (YOLO). The output tensor is only (3 × 3) for simplicity. A box colour represents its class. Objectness score is represented by box thickness. The model outputs an encoding for each anchor of each sub-region. Only the ones with an objectness score superior to a threshold are kept. Non-Maximum Suppression (NMS) is applied to filter out the redundancies of boxes framing the same instance. As there are multiple anchor sizes, multiple objects can be detected in each sub-region. . . . . 50 Figure 3.5 Comparison of the classic U-Net architecture and our tiny-U-Net. The latter is much more compact, each level having both fewer convolution layers and fewer filters per convolution. It is thus significantly faster (5×). Despite this complexity reduction, Tiny-U-Net is still complex enough to isolate swimmers. . . . . . . . . . . . . . . . . . . . . . . 53 Figure 3.6 The data augmentations used to train the model. From left to right, top to bottom: original image, blur, contrast and brightness change, crop, horizontal flip, hue change, side switch, zoom out. . . . . . . . . . . . . . . . . . . . . . . . . 54 Figure 3.7 Different detection scenarios on the same image. In green and continuous: boxes with more than 25% IOU with a true box (i.e. true positives). In red and dashed lines: incorrect detections (i.e.: false positives). . . . . . . . . . . . . . . . . 56 Figure 3 . 9 39 Figure 3.9 Classic use-case image overlapping with the segmentation heatmap output by the model with different input sizes. From left to right, the input sides are 128 pixels, 256 pixels and 512 pixels. . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 3 . 10 310 Figure 3.10 Model Segmentation of a water-polo image, slightly out of the training domain. 
From left to right, the input sides are 128 pixels, 256 pixels and 512 pixels. . . . . . . . . . . . . . Figure 3 . 11 311 Figure 3.11 Segmentation results of persons in a lake, which is significantly out of the training domain. From left to right, the input sides are 128 pixels, 256 pixels and 512 pixels. . . . . Chapter 4: pool registration Figure 4 . 1 41 Figure 4.1The swimming race analysis method. The registration part is tackled in this chapter. It does not depend on any other method as it directly inputs the raw video frames. Used with swimmer detection (in the image), it gives their position in the pool and all the data coming from it (speed, acceleration, etc.). . . . . . . . . . . . . . . . . . . . . . . . . Figure 4 . 2 42 Figure 4.2 An illustration of the registration. The Dynamic Linear Transform (DLT) operation uses known matches of positions (X i with Y i ) to compute the homography matrix. This matrix can then be used to match a position from one coordinate system to the other. Therefore, the bounding box from the image can be positioned in the pool with this method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 4 . 3 43 Figure 4.3 Local appearance ambiguities comparison of a soccer field and a swimming pool. A 1,2 : 4 identical lines at different places. B: 3 identical lines in the middle. Both A and B create line mismatch problems. C: the 15m and 35m markers are identical. D: an example of 2 different camera view projections on the pool that display the exact same content, despite being at two completely separate places of the pool. Best viewed in colour. . . . . . . . . . . . . . . . . Figure 4 . 4 73 Figure 4 . 5 75 Figure 4 . 6 79 Chapter 5 : periodicity 81 Figure 5 . 1 82 Figure 5 . 2 447345754679581518252 Figure 4.4 Top: data preparation. A generic template with regularly spaced keypoints is created. The template's depth encodes the keypoints' position in the top-view frame. For each image in the dataset, a corresponding projection of the template is created. As a result, the landmarks spatially refer to their position in the image and semantically encode their position in the top-view coordinates system. Bottom: inference. The model generates a heatmap of keypoints. Using RANSAC, the homography matrix can be estimated, giving the final projection of the input image in the topview frame. Best viewed in colour. . . . . . . . . . . . . . . 73 Figure 1 . 1 - 11 Figure 1.1 -A summary sheet filled-in by the FFN performance division. Frequency (min -1 ) = #strokes/time, Amplitude (m) = 50m/#strokes, Tempo (s) = 60s/Frequency. 2 . 2 The Neptune project (Natation et Paranatation: Tous Unis pour Nos Elites) brings academics and swimming professionals together to prepare for the 2024 Paris Olympics. Members of the FFN explained what lacked from their current analyses and the researchers focused on these topics. Figure 1 . 2 - 12 Figure 1.2 -How humans understand a pool and use their prior knowledge of the situation to infer data out of the video. These implicit challenges must explicitly tackled by CV techniques. Figure 1 . 3 - 13 Figure 1.3 -Description of the automatic analysis method we propose. Each Neural Network (NN) model represents a different challenge of this thesis. The final objectives are to estimate the swimmers' position through time and periodicity. Figure 1 . 4 - 14 Figure 1.4 -Examples of difficult conditions in swimming race videos. Underwater swimmers are sometimes barely visible (A). 
Low camera angles give little view of the farthest swimming lanes (B). Swimmers of a previous race still in the pool and referees standing by the start create a difficult subjects of interest choice (C). Lighting conditions, like outdoor with bright sun (A), or indoor with back-lighting (B, C) can obfuscate swimmers. 2 B 2 Figure 2 . 1 - 21 Figure 2.1 -An illustration of the different CV tasks applied to swimming race automatic analysis. Figure 2 . 2 - 22 Figure 2.2 -Illustration of a 2D convolution edge-detection filter.The "*" symbol represents the convolution, the dot represents the pixel-wise matrix multiplication (i.e. the local behaviour of convolution), and the "•" represents element-wise multiplication. The 3 × 3 Laplacian filter is displayed with the result of its convolution on the input. At the top, a toy example displays the filter's behaviour without texture and with an edge texture. At the bottom, an application of the filter on a swimming race image. The line buoys are mostly well detected, as they are made of simple features with little texture. The swimmers and waves, however, are more complex and the filter cannot isolate them. Figure 2 . 3 - 23 Figure 2.3 -Hough transform illustration. Top: toy examples with 1 point (left) and 1 line (right) with their respective Hough transform results.Colours are preserved in the example to map elements of on space to the other. In the point example, any line crossing the (x, y) position is represented in the Hough space using the angle (θ) and radius (ρ) as in the example. A single point thus results in a sinusoid curve. In the line example, the curves from each point of the line intersect in a single point corresponding to the line parameters, which are not directly obtainable from the image. Bottom: Hough line detection applied to a pool (after edge detection and thresholding) to detect its buoy lines. The red line does not represent a line in the image and appears solely because of noise. Figure 2 . 5 - 25 Figure 2.5 -The perceptron and a Multilayer Perceptron (MLP) architecture with 2 hidden layers. Figure 2 . 6 - 26 Figure 2.6 -Illustration of domain definition and data bias. The swimmer domain in the data (right) only represents a small portion of the entire swimmer domain (center).For a model trained on this data, the farther an image is from the training domain, the less likely it will be identified as a swimmer. For instance, Superman in swim briefs represented here will hardly be identified properly if the domain only contains classic images of swimmers. The domain thus needs to be as wide as possible for a given class. Further, data biases appear when the classes are unequally represented. In this example, there are more examples of males than of females, which causes problems for the future model's representation. Figure 2 . 7 - 27 Figure2.7 -Raw data and explanation of a bad model's prediction in the "Husky vs Wolf " task. From[START_REF] Tulio Ribeiro | Explaining the Predictions of Any Classifier[END_REF] Figure 2 . 8 - 28 Figure 2.8 -Stacked convolutional filters forming one layer of a CNN.The "⊛" symbol represents convolution between the image and the filters. The red square contains visual features, not easy to understand for a computer. The corresponding red vector, on the layer output, contains these information in a more understandable form for machines. Figure 2 . 10 - 210 Figure 2.10 -A CNN architecture with 2 convolutions, each followed by a Max Pooling operation, completed by a classification layer at the end. 
Figure 2.11 -An encoder-decoder architecture. As the input and output are the same, it is an autoencoder. The length of a block represents its number of channels. Remarkably, the bottleneck of a linear autoencoder converges into the Principal Component Analysis (PCA) representation of the data.
Figure 2.12 -2D PCA visualisation of different autoencoders trained on MNIST. Colours represent classes. Left: an autoencoder with a non-continuous manifold in the latent space. Right: a VAE with a dense continuous manifold. Figure from [72].
Figure 2.13 -Elementary residual blocks. Variations can be further applied to them, but the core feature is the skip connection, back-propagating the gradient directly from the block output to its input. On the right, an illustration of the linear bottleneck, useful for deeper networks. The (1 × 1) convolutions create depth compression (256-d to 64) and expansion (64-d to 256). In between, a regular feature abstraction with (3 × 3) convolution is done with only 64 channels instead of 256.
Figure 2.14 -The U-Net architecture, from [61].
Figure 2.15 -Different views from a classic TV stream. Apart from the leftmost, they are very different from what coaches are used to.
Figure 2.16 -The different levels of supervision. Dataset domains are each represented with a specific colour. There are different levels of label, here represented by the cylinders' edge. Although each type of supervision has a different complexity of training data, they all aim at performing the detection task. Note that for transfer learning, the "big" domain is distinct but close to the target domain.
Figure 2.18 -The CAM pipeline explained (figure inspired by [START_REF] Zhou | Learning Deep Features for Discriminative Localization[END_REF]). The objective is to detect swimmers using a model trained to classify the swimming style. Such a proxy task works because the swimming style can only be correctly identified by focusing on the swimmers. Each feature map of the last convolution layer's output is weighted by the coefficient it assigns to the freestyle class (w_1 to w_n). In our example, the weights of the 2nd (red) and nth (green) feature maps are low compared to the 1st one's (blue), which roughly segments the swimmers.
Figure 2.19 -2D PCA of embedding vectors from encoders trained on the MNIST dataset. The colours represent the different classes. Left: the model is trained using metric learning. Right: the model is trained with an additional classification layer with softmax activation. Extracted from [START_REF] Horiguchi | Significance of Softmax-Based Features in Comparison to Distance Metric Learning-Based Features[END_REF].
Figure 2.20 -A semi-supervised pipeline applied to detection. Phase 1: an autoencoder is trained on a big unlabelled image dataset to create a feature extractor. Phase 2 (transfer learning): the encoder's weights are frozen and layers are added on top of it and trained on a small labelled detection dataset.
Chapter 3: Swimmer Detection
3.1 Introduction
3.2 State Of The Art
3.2.1 General Object Detection
3.2.2 Recent Advances on Swimmer Detection
3.3 Proposed Approach
3.3.1 Dataset Creation
3.3.2 Detection Through Segmentation
3.3.3 Data Augmentation
3.4 Experimental Results
3.4.1 Metrics
3.4.2 Ablation Study
3.4.3 Comparative Results
3.5 Visual and Qualitative Results
3.5.1 Swimming Races
3.5.2 Other Swimming-Based Activities
3.6 Discussions and Perspectives
3.6.1 Improvements and Future Works
3.6.2 Generalization to Other Sports
3.7 Conclusion
Figure 3.1 -The swimming race analysis method. The detection part is tackled in this chapter. It is arguably the most critical block, as both swimmer placement in the pool and periodicity analysis depend on it.
Figure 3.2 -Representative examples of the edge fuzziness problem with the different styles and the diving. Although for breaststroke the framing is generally well-defined, for other contexts it is not. The swimmers create waves and splashes keeping an observer from knowing the exact boxing of their body. Even with an unexcited water surface, diffraction deforms the observations and shifts the visible position of a swimmer from their actual position.
Figure 3.3 -The Faster R-CNN algorithm. Left: the entire architecture. The extracted features are used by the RPN and by the classifier. Right: detail of the RPN. It uses the features both to classify each area as ROI or background and to refine its detections. Anchor refinement generates 4k values (i.e. 4 values per anchor) that correspond to (width, height, x shift, y shift) × k anchor boxes. Figure adapted from [134].
Figure 3.4 -An illustration of You Only Look Once (YOLO). The output tensor is only (3 × 3) for simplicity. A box colour represents its class. The objectness score is represented by box thickness. The model outputs an encoding for each anchor of each sub-region. Only the ones with an objectness score superior to a threshold are kept. Non-Maximum Suppression (NMS) is applied to filter out the redundancies of boxes framing the same instance. As there are multiple anchor sizes, multiple objects can be detected in each sub-region.
Figure 3.5 -Comparison of the classic U-Net architecture and our tiny-U-Net. The latter is much more compact, each level having both fewer convolution layers and fewer filters per convolution. It is thus significantly faster (5×). Despite this complexity reduction, Tiny-U-Net is still complex enough to isolate swimmers.
Figure 3.6 -The data augmentations used to train the model. From left to right, top to bottom: original image, blur, contrast and brightness change, crop, horizontal flip, hue change, side switch, zoom out.
Figure 3.7 -Different detection scenarios on the same image. In green and continuous: boxes with more than 25% IOU with a true box (i.e. true positives). In red and dashed lines: incorrect detections (i.e. false positives).
Figure 3.8 -The thresholded segmentation output overlaid on the input image with a size of (256 × 256) pixels. Circled in red are mistakes to focus on.
Figure 3.9 -Classic use-case image overlapping with the segmentation heatmap output by the model with different input sizes. From left to right, the input sides are 128 pixels, 256 pixels and 512 pixels.
Figure 3.10 -Model segmentation of a water-polo image, slightly out of the training domain. From left to right, the input sides are 128 pixels, 256 pixels and 512 pixels.
Figure 3.11 -Segmentation results of persons in a lake, which is significantly out of the training domain. From left to right, the input sides are 128 pixels, 256 pixels and 512 pixels.
Chapter 4
4.1 Introduction
4.2 State Of The Art
4.2.1 Registration Background
4.2.2 Semi-Manual Approaches
4.2.3 Recent Advances in Sport Field Registration
4.3 A More Challenging Benchmark
4.4 Registration Method
4.4.1 Template Heatmap
4.4.2 Data Generation and Model Training
4.4.3 Matrix Estimation
4.4.4 Post-Processing
4.5 Results
4.5.1 Parameter Study
4.5.2 Comparing to State of the Art
4.5.3 Failure Cases
4.6 Discussion on the One-Shot Approach
4.7 Conclusion
Figure 4.1 -The swimming race analysis method. The registration part is tackled in this chapter. It does not depend on any other method as it directly inputs the raw video frames. Used with swimmer detection (in the image), it gives their position in the pool and all the data coming from it (speed, acceleration, etc.).
Figure 4.2 -An illustration of the registration. The Direct Linear Transform (DLT) operation uses known matches of positions (X_i with Y_i) to compute the homography matrix. This matrix can then be used to match a position from one coordinate system to the other. Therefore, the bounding box from the image can be positioned in the pool with this method.
Figure 4.3 -Local appearance ambiguities comparison of a soccer field and a swimming pool. A_1,2: 4 identical lines at different places. B: 3 identical lines in the middle. Both A and B create line mismatch problems. C: the 15m and 35m markers are identical. D: an example of 2 different camera view projections on the pool that display the exact same content, despite being at two completely separate places of the pool. Best viewed in colour.
Figure 4.4 -Top: data preparation. A generic template with regularly spaced keypoints is created. The template's depth encodes the keypoints' position in the top-view frame. For each image in the dataset, a corresponding projection of the template is created. As a result, the landmarks spatially refer to their position in the image and semantically encode their position in the top-view coordinates system. Bottom: inference. The model generates a heatmap of keypoints. Using RANSAC, the homography matrix can be estimated, giving the final projection of the input image in the top-view frame. Best viewed in colour.
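The registration principle of Figures 4.2 and 4.4 (keypoint matches, then a RANSAC-estimated homography) can be illustrated with OpenCV as follows; the keypoint coordinates are made-up values, not real pool landmarks.

import numpy as np
import cv2

# Hypothetical matches: landmark positions detected in the image and
# their known coordinates in the top-view pool frame (in meters).
img_pts = np.array([[120, 80], [400, 95], [150, 300], [430, 310],
                    [280, 90], [290, 305]], dtype=np.float32)
pool_pts = np.array([[0, 0], [25, 0], [0, 10], [25, 10],
                     [12.5, 0], [12.5, 10]], dtype=np.float32)

# Robust homography estimation; RANSAC discards outlier matches.
H, inlier_mask = cv2.findHomography(img_pts, pool_pts, cv2.RANSAC, 3.0)

# Project an image position (e.g. a detected swimmer's box center) into the pool.
center = np.array([[[270.0, 200.0]]], dtype=np.float32)
pool_xy = cv2.perspectiveTransform(center, H)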
Figure 4.5 -Smoothing of the 8 parameters of the homography matrix estimated for a race (the parameter at position (3,3) is always equal to 1). The values are represented through time. The outliers being orders of magnitude different from the truth, they have been cut out of the figure's frame. Blue: the original signal. Orange: the smoothed signal.
Figure 4.6 -Examples of failure cases. The results were obtained from two models trained under different conditions. The red squares represent the detected landmarks. Despite the mirror failure looking consistent, its IOU with the ground truth is 0.
Chapter 5: Periodicity
5.1 Introduction
5.2 Related Work
5.3 Unsupervised Periodicity Counting
5.3.1 Latent Representation Learning
5.3.2 Cycle Counting
5.4 Experiments and Results
5.4.1 CNN Architecture
5.4.2 Ablation Study
5.4.3 Quantitative Results
5.4.4 Application to 4D Videos
5.5 Going Further with Supervision
5.5.1 Supervised Swimmer Strokes Detection
5.5.2 Qualitative Results
5.6 Discussion and Perspectives
5.7 Conclusion
Figure 5.1 -The swimming race analysis pipeline. The periodicity counting part is tackled in this chapter. This part is built on top of swimmer detection because it relies on sub-video crops around swimmers. It can output the number of cycles per pool length or the duration of cycles. Both metrics are used by coaches.
Figure 5.2 -The periodicity counting framework. In Part 1, a Convolutional Neural Network (CNN) is first trained in an unsupervised fashion on the test data, as described in section 5.3.1. Then, it is used to extract an embedding for each image of a video. (1a) shows an example of the 2D PCA projection of these embeddings. The last 50 embeddings are linked chronologically (in red), revealing the cyclic path. (1b) shows the input images whose embeddings correspond to the highlighted points. They belong to different swimming cycles but correspond to the same phase, therefore the points are close in the latent space. In Part 2, we chronologically stack the embeddings obtained from the trained network to form a multi-variate time signal. The PCA's first component of the signal reduces it into a uni-variate signal. Finally, our Max Detector algorithm is used to count the cycles on the signal, which corresponds to the number of cycles on the video.
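The training constraint described in the captions of Figures 5.2 and 5.3 can be sketched with a standard triplet margin loss; the toy encoder phi and the exact triplet pairing below are our assumptions for illustration, not the thesis' exact recipe.

import torch
import torch.nn as nn
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Pull the positive towards the anchor, push the negative beyond the margin.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()

# Toy encoder standing in for the thesis' CNN (hypothetical architecture).
phi = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))

# Batches of crops at frames t-1, t and t+1 (random stand-ins).
f_tm1, f_t, f_tp1 = (torch.randn(8, 3, 64, 64) for _ in range(3))

# a) anchor phi(t-1): keep phi(t) closer than phi(t+1);
# b) anchor phi(t): keep phi(t+1) close while pushing phi(t-1) away.
loss = (triplet_loss(phi(f_tm1), phi(f_t), phi(f_tp1))
        + triplet_loss(phi(f_t), phi(f_tp1), phi(f_tm1)))
loss.backward()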
Figure 5.3 -Unsupervised learning of the pseudo-linear path using the triplet loss. The anchor is at the center, the positive is on the smaller circle (not necessarily the same size each time), and the negative is outside of the bigger circle. a) The anchor is ϕ(t-1), so ϕ(t) and ϕ(t+1) are separated. b) The anchor is ϕ(t), so ϕ(t) and ϕ(t+1) are drawn together. When the training starts, the negative can be on the other side of the big circle compared to the positive. But this situation is no longer possible when the constraint is applied to all the successive frames after convergence, as shown in c): a pseudo-linear path is naturally formed, as it is the only way to respect both the attraction and repulsion constraints imposed by the loss.
Figure 5.4 -Examples of 1D PCA projections of embeddings chronologically stacked. The A, B, and C curves show the embedding results for different cycle durations, from 8 to 50 frames per cycle (on average). D shows a more complex pattern containing 2 distinct local maxima. In such cases, our Max Detector might count 2 cycles per pattern, resulting in a false result, as mentioned in section 5.4.3.
Figure 5.5 -Illustration of Max Detector. Starting from the global maximum's index t_max, the algorithm shifts by one period T and finds the maximum's index t_{+1} in a window of 10% of T (in red, exaggerated for a better understanding). This window makes Max Detector robust to period variations. Starting from t_{+1}, this is repeated to find t_{+2}, t_{+3} and so on until reaching the signal's end. A first iteration goes from t_max to the end of the signal and a second from t_max to t = 0.
Figure 5.6 -4D MRI video analyzed by our method. This is a proof of concept of the method's generalisation to different input types. Left: 2D slices of 3D input images (for display purposes) at different moments. The blood pulses through the artery. Right: the 1D PCA (blue) and peak detection of our model (red). As MRI contain very little noise, the periodic pattern is perfectly smooth. Better seen in colour.
Figure 5.7 -Value associated to each frame according to its phase in the cycle. The arrows point at the value v ∈ [-1, 1] taken by the frame on the sinusoid. Both extremity frames are associated to 1.
Figure 5.8 -Crops of a race around a swimmer, analyzed by our periodicity regression model. Top: the raw output signal. Bottom: maxima detection by the Max Detector algorithm. As maxima correspond to cycle extremities, the algorithm outputs the cycle extremities' timecodes.
QUVA is composed of 100 videos showing between 4 and 63 repetitions. The videos are very diverse and recorded in real-life situations, often with camera motion and background variation. Countix contains a similar visual variety. It is the first large video repetition dataset, containing more than 8000 clips showing 2 to 73 repetitions. The metrics used for performance comparison are the Mean Absolute Error (MAE) and the Off-By-One Accuracy (OBOA), defined as the average over the test videos of |ĉ − c| / c and of 1[|ĉ − c| ≤ 1] respectively, where c is the ground-truth repetition count of a video and ĉ the predicted one.
Algorithm 5.1 Max Detector: creation of candidate lists MaxList_m
Require: signal S, candidate periods f_m, m ∈ (1, ..., F)
MaxList ← ∅
for m in (1, ..., F) do
    t_i ← index of the global maximum of S
    MaxList_m ← {t_i}
    while t_i + f_m lies inside S do
        t_i ← index of the maximum of S within a window of 10% of f_m centred on t_i + f_m
        MaxList_m ← MaxList_m ∪ {t_i}
    end while
    (the same loop is run backwards, from the global maximum towards t = 0)
    MaxList ← MaxList ∪ MaxList_m
end for
return MaxList
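A compact NumPy version of the Max Detector procedure (a sketch of Algorithm 5.1 and Figure 5.5; the toy signal and candidate periods are ours):

import numpy as np

def max_detector(signal, period, win_ratio=0.10):
    """Collect maxima spaced roughly one period apart, forward then backward."""
    n = len(signal)
    t_max = int(np.argmax(signal))          # global maximum of the signal
    found = [t_max]
    for direction in (+1, -1):
        cur = t_max
        while 0 <= cur + direction * period < n:
            center = cur + direction * period
            half = max(1, int(win_ratio * period))   # 10% tolerance window
            lo, hi = max(0, center - half), min(n, center + half + 1)
            cur = lo + int(np.argmax(signal[lo:hi]))
            found.append(cur)
    return sorted(set(found))

# Toy 1D signal standing in for the 1D PCA of the embeddings.
t = np.linspace(0, 20 * np.pi, 1000)
sig = np.sin(t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

# One candidate list per tested period f_m, m = 1..F.
candidates = [max_detector(sig, f_m) for f_m in (50, 100, 150, 200)]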
Figure 6.1 -A final illustration of our swimming models unified in a single system. The dotted arrow is a connection that would no longer exist with the proposed architecture. Following this idea, the registration result would be part of the detection process, and the smoothing / merging block would improve on the crops extraction.
Figure 6.2 -Top-views obtained using the same model from a static camera filming a race. Different types of visible errors exist. There are also many small shifts, almost indistinguishable from the ground truth, causing flickering to the video. Both the raw projection and the cluster-based projection (using a unique matrix) of the video are available at: https://drive.google.com/drive/folders/1oXKgDIzy3vTd0UHE9TaFjyozOeiR9gTt?usp=sharing.
Figure 6.3 -The detection pipeline proposed in this section. After the model inference on the right and left videos, there are 4 steps. A: top-view projection and fusion of the raw heatmaps. B: by-lane threshold, resulting in a position for each swimmer for a given frame. C: temporal aggregation of the positions through the entire race. D: position smoothing and gap-filling giving the trajectory of each swimmer through time. Note that although the probability maximum of the topmost swimmer's blob is under 0.45, our by-lane threshold can detect it. The detection false positives are transparently discarded thanks to the use of registration.
Figure 6.4 -An illustration of the annotation tool we built, with bounding boxes already generated by the model (in red) and bounding boxes created from scratch by the user (in white). On the left, different views of the detection. The threshold slider is at the bottom.
Nicolas Jacquelin, Théo Jaunet, Romain Vuillemot, Stefan Duffner. SwimTrack: Swimmers and Stroke Rate Detection in Elite Race Videos. Working Notes Proceedings of the MediaEval 2022 Workshop.
List of Tables
Table 3.1 Detection performance of our model trained with different swimmer heuristic shapes on the target heatmaps.
Table 3.2 Speed and performance results as a function of input size (left), and AP25/AR25 as a function of the heatmap threshold (right).
Table 4.1 Statistics of the RegiSwim 500 dataset.
Table 4.2 Ablation study on the training parameter λ.
Table 4.3 Quantitative results on Soccer World Cup and RegiSwim 500 datasets.
Table 5.1 Results of different variations of our approach on the QUVA dataset.
Table 5.2 Results for different periodicity counting methods.
Chapter 1: Introduction
1.1 Choice of Using Videos
1.2 Manual Analysis VS Automatic Analysis
1.3 Computer Vision Tasks
1.4 New Challenges We Must Tackle
1.5 Contributions
Abbreviations: DoG Difference of Gaussians; FFN Fédération Française de Natation (French Swimming Federation); FPS Frames per Second; GAN Generative Adversarial Networks.
Nicolas Jacquelin, Romain Vuillemot, Stefan Duffner. Detecting Swimmers in Unconstrained Videos with Few Training Data. Machine Learning and Data Mining for Sports Analytics, Sep 2021, Ghent, Belgium. ⟨hal-03358375⟩.
Chapter 2: Background in Computer Vision
2.1 Previously in Computer Vision
2.1.1 Classic Algorithms
2.1.2 Going Further with Machine Learning
2.2 Convolutional Neural Networks
2.2.1 Deep Learning
2.2.2 CNNs Components
2.2.3 CNN Architectures
2.3 Data and Supervision
2.3.1 Computer Vision Datasets
2.3.2 Training: Different Levels of Supervision
Figure 2.4 -SIFT descriptors creation. The local gradient is computed on a 16 × 16 region around the landmark's position. Their orientation distribution (among 8 possible angles) is computed and the dominant one is retained. Then the process is repeated for smaller 4 × 4 regions inside the area. These local orientation distributions are rotated according to the main orientation as angle normalization. This results in 4 × 4 × 8 = 128 values, i.e. the SIFT embedding vector.
Table 3.1 -Detection performance of our model trained with different swimmer heuristic shapes on the target heatmaps. The input image size is (256 × 256) pixels. *72 in the original paper, improved since.

Training Data             AP25   AR25
Ellipse binary mask       76*    60
Rectangle binary mask     60     28
Ellipse Gaussian mask     21     5
Rectangle Gaussian mask   -      -

Table 3.2 -Speed and performance results as a function of input size (left), and AP25/AR25 as a function of the heatmap threshold (right). We tested the speed with the same set of 1700 images. The different AP25/AR25 results were evaluated on the Swimm 400 test set.

Size     FPS   AP25/AR25
128²     490   46/13
256²     260   76/60
512²      80   55/67
1024²     20   28/63

Threshold   0.30    0.45    0.60    0.75    0.90
AP25/AR25   69/59   76/60   69/54   65/50   64/50

Few swimmers are missed. The (256 × 256) pixels inputs are chosen for further experiments as they offer the best AP vs AR trade-off, and excellent inference speed.
Table 4.1 -Statistics of the RegiSwim 500 dataset. The races contain important lighting, textural, and spatial variations.

Split              #images   #races   images/s
Train Standard     226       6        1/3
Train Sequential   150       4        5
Train Merge        329       6        5 & 1/3
Test               174       3        5

Table 4.2 -Ablation study on the training parameter λ. Best in bold.

λ value   IOU avg part   IOU med part   IOU avg whole   IOU med whole
0.5       83.6           89.54          67.6            82.8
1         81.3           84.4           66.0            85.0
2         83.3           94.7           72.6            91.5
5         75.0           80.4           63.9            81.1

Table 4.3 -Quantitative results on Soccer World Cup and RegiSwim 500 datasets. Best in bold. Real-time methods underlined in the FPS column.

Method                 Benchmark      IOU avg part   IOU med part   IOU avg whole   IOU med whole   FPS    Memory - GPU
Citraro et al. [159]   WorldCup       93.9           95.5           -               -               9      NA - Titan RTX
Sha et al. [145]       WorldCup       94.2           95.4           83.2            84.6            250    48GB - Titan RTX
Chen et al. [144]      WorldCup       94.5           96.1           89.4            93.8            2      16GB - NA
Jiang et al. [147]     WorldCup       95.1           96.7           89.8            92.9            0.74   8GB - 1080 GTX
Nie et al. [146]       WorldCup       95.9           97.1           91.6            93.4            2      8GB - 1080 GTX
Ours, soccer field     WorldCup       94.6           95.9           81.2            86.0            50     8GB - 1080 GTX
Ours, swimming pool    RegiSwim 500   83.3           94.7           72.6            91.5            50     8GB - 1080 GTX

Table 5.1 -Results of different variations of our approach on the QUVA dataset. Pretrained models did not perform well at embedding the images in a cyclic manner.
The same architectures, trained using our method, give much better results. Different architectures do not significantly change the results.

Variations                        MAE±σ ↓         OBOA ↑
1 img + F=4                       0.388 ± 0.512   0.43
VGG19 (pretrained) + F=4          0.758 ± 0.812   0.21
VGG11 (pretrained) + F=4          0.783 ± 0.761   0.17
VGG19 + flow + F=4                0.252 ± 0.400   0.60
VGG11 + flow + F=4                0.241 ± 0.367   0.62
flow + F=2                        0.291 ± 0.445   0.59
flow + F=5                        0.239 ± 0.335   0.62
flow + F=7                        0.244 ± 0.328   0.61
flow + F=10                       0.378 ± 0.710   0.57
flow + Scholkmann et al. [182]    0.307 ± 0.408   0.51
flow + F=4                        0.231 ± 0.326   0.64

Table 5.2 -Results for different periodicity counting methods. Bold: the best result of a category. Underlined: the second best. Our unsupervised method reaches comparable performances to the best fully-supervised models. This proves the overall interest of our method. Q for QUVA benchmark, C for Countix.

Method                 Unsupervised   Q: MAE±σ ↓      Q: OBOA ↑   C: MAE±σ ↓      C: OBOA ↑
Levy and Wolf [176]                   0.482 ± 0.615   0.45        -               -
Yin et al. [169]                      0.199 ± 0.335   -           -               -
Dwibedi et al. [166]                  0.322           0.66        0.364           0.697
Zhang et al. [168]                    -               -           0.307           0.511
Pogalin et al. [184]   ✓              0.389 ± 0.376   0.49        -               -
Runia et al. [179]     ✓              0.232 ± 0.344   0.62        -               -
Our method, F=4        ✓              0.231 ± 0.326   0.64        0.495 ± 0.769   0.517
Our method, F=2        ✓              0.291 ± 0.445   0.59        0.419 ± 0.496   0.545

Our method is only outperformed by Dwibedi et al. [166]. The MAE is slightly worse than the supervised methods, but not by a big margin. In fact, the difference between our score and Dwibedi et al.'s equals the difference between them and Zhang et al.
Chapter 6: Conclusion and Perspectives
6.1 Summary of the Contributions
6.2 Limitations and Proposed Solutions
6.2.1 Benefits from Combining the Models
6.2.2 Increasing the Acquisition Speed
6.3 Perspectives and New Challenges
6.3.1 Guided Annotation Tool
6.3.2 Temporal Data for a Better Context Understanding
6.3.3 More Data for Better Models
6.3.4 Weakly Supervised Learning for a Multitask Model
6.3.5 Swimmers Pose Estimation
6.3.6 Vision Transformers
6.3.7 Unaddressed Challenges
6.3.8 MediaEval Challenge
6.4 Conclusion
00411704
en
[ "info.info-ti", "info.info-it", "math.math-it", "info.info-cr", "spi.signal", "info.info-ts" ]
2024/03/04 16:41:26
2009
https://hal-lirmm.ccsd.cnrs.fr/lirmm-00411704/file/IWDW2009_CHAUMONT.pdf
Marc Chaumont
email: [email protected]

A Fast Embedding Technique For Dirty Paper Trellis Watermarking

This paper deals with the improvement of the Dirty Paper Trellis Code (DPTC) watermarking algorithm. This watermarking algorithm is known to be one of the best among the high-rate watermarking schemes. Nevertheless, recent research has revealed its security weakness. Previously, we proposed to reinforce its security by using a secret space before the embedding. This secret space requires computing projections onto secret carriers. When dealing with high-rate watermarking, the CPU cost of those projections is dramatically high. After introducing the watermarking scheme, we propose two Space Division Multiplexing (SDM) approaches which reduce this complexity. Evaluations, carried out with four different attacks, show that our proposal gives better robustness results with the SDM approaches.

Introduction
Dirty Paper Trellis Codes (DPTC) [START_REF] Miller | Applying Informed Coding and Informed Embedding to Design a Robust, High Capacity Watermark[END_REF] watermarking is one of the most efficient high-rate schemes. Nevertheless, it suffers from two major drawbacks: its CPU computational complexity for the embedding part and its security weakness. In this paper we build on the work proposed in [START_REF] Chaumont | A Novel Embedding Technic for Dirty Paper Trellis Watermarking[END_REF], which gives a nice way to improve on those two drawbacks while preserving a good robustness. The recent work of Bas and Doërr [START_REF] Bas | Evaluation of an Optimal Watermark Tampering Attack Against Dirty Paper Trellis Schemes[END_REF] about the security of DPTC [START_REF] Miller | Applying Informed Coding and Informed Embedding to Design a Robust, High Capacity Watermark[END_REF] shows that in the Kerckhoffs framework [START_REF] Kerckhoffs | La Cryptographie Militaire[END_REF], i.e. when the embedding and extracting algorithms are known by an attacker, the trellis codebook may be retrieved by observing a large number of watermarked images. Those conclusions are drawn based on a simplified version of the DPTC algorithm (non-random ordering of DCT coefficients) but show a certain security weakness of DPTC [START_REF] Miller | Applying Informed Coding and Informed Embedding to Design a Robust, High Capacity Watermark[END_REF]. In [START_REF] Chaumont | A Novel Embedding Technic for Dirty Paper Trellis Watermarking[END_REF], we proposed to use a private embedding space in order to better hide the structure of the trellis. Moreover, we provided a fast embedding strategy. The private space is obtained by vector projections. If computed directly, the vector projections have a quadratic CPU complexity. In this paper, we propose two different Space Division Multiplexing (SDM) approaches in order to reduce the quadratic complexity to a linear one. In section 2, we briefly present the embedding space and the embedding approach already presented in [START_REF] Chaumont | A Novel Embedding Technic for Dirty Paper Trellis Watermarking[END_REF]. In section 3, we present two SDM approaches in order to reduce the projection complexity. Finally, in section 4 we evaluate the schemes and conclude on the good behavior of the SDM approaches.

New embedding approach
In this section, we recall the embedding space and the embedding approach proposed in [START_REF] Chaumont | A Novel Embedding Technic for Dirty Paper Trellis Watermarking[END_REF].
Embedding space
The embedding space is obtained by, first, a wavelet transform of the image and, second, a projection of the host signal x of dimension N_wlt (x is the concatenation of the sub-bands' coefficients, except the LL sub-band's coefficients) onto N_sec carriers of the same dimension. Carriers are denoted u_i with i ∈ [0, N_sec − 1]. Note that a projection is just a scalar product. Figure 1 illustrates the construction of the host signal x and the host vector (secret space) v_x. The obtained vector v_x may then be used for the informed-coding and informed-embedding (see Section 2.2). The carriers u_i are built from normalized bipolar pseudo-random sequences. For computational complexity reasons, carriers are neither orthonormalized nor drawn from a Gaussian distribution. This is not a weakness since, in high dimension, carriers are orthogonal and the Gaussian property is not essential. Nevertheless, the computational complexity is still high since computing the N_sec coefficients of the secure space requires N_wlt × N_sec multiplications (resp. sums). Knowing that N_wlt = N × (1 − 1/2^(2l)) and N_sec = N × payload × N_arc, this gives N² × payload × N_arc × (1 − 1/2^(2l)) multiplications (resp. sums). The computational complexity is thus quadratic as a function of the image size N. As an illustration, with a 256 × 256 image, l = 3 levels, payload = 1/64 bpp, and N_arc = 12 coefficients, there are 792 723 456 multiplications (resp. sums). Let us remark that it is impossible to reduce the number of multiplications and additions (thus it is impossible to reduce the complexity), and hence it is not useful to use a particular matrix multiplication routine.
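To fix ideas, the full-dimension projection and its quadratic cost can be sketched in NumPy as follows (the dimensions follow the 256 × 256 example above, but they are scaled down so that the dense carrier matrix fits in memory; the random seed stands in for the secret key):

import numpy as np

# Full-size values from the text: N_wlt = 64 512 and N_sec = 12 288,
# i.e. 792 723 456 multiplications. A downscaled toy size is used here.
N_wlt, N_sec = 4096, 768

rng = np.random.default_rng(42)            # the seed plays the role of the key
x = rng.standard_normal(N_wlt)             # host signal (wavelet coefficients)

# Normalized bipolar pseudo-random carriers, one row per secret coefficient.
U = rng.choice([-1.0, 1.0], size=(N_sec, N_wlt)) / np.sqrt(N_wlt)

# Secret space: N_sec scalar products of size N_wlt each,
# hence N_sec x N_wlt multiplications -- quadratic in the image size.
v_x = U @ x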
Informed-coding and informed-embedding
After the projection of the host signal x onto the carriers u_i, i ∈ [0, N_sec − 1], we obtain the host vector v_x. We then run the informed-coding, which is the same as the original one [START_REF] Miller | Applying Informed Coding and Informed Embedding to Design a Robust, High Capacity Watermark[END_REF]. The informed-coding takes as input the host vector v_x and the message m to be embedded and returns a codeword c*. This vector c* (of size N_sec) is the closest one to v_x among the vectors coming from the codebook C and representing the message m. For more details see [START_REF] Miller | Applying Informed Coding and Informed Embedding to Design a Robust, High Capacity Watermark[END_REF] or [START_REF] Chaumont | A Novel Embedding Technic for Dirty Paper Trellis Watermarking[END_REF]. The objective of the informed-embedding is to push the host vector v_x into the Voronoï region of c* in order to obtain the watermarked vector v_y. Many solutions exist, which are either too CPU-consuming [START_REF] Miller | Applying Informed Coding and Informed Embedding to Design a Robust, High Capacity Watermark[END_REF], or too sub-optimal considering the robustness-distortion tradeoff [START_REF] Wang | Toward a Better Understanding of Dirty Paper Trellis Codes[END_REF], [START_REF] Lin | An Efficient Algorithm for Informed Embedding of Dirty Paper Trellis Codes for Watermarking[END_REF]. In [START_REF] Miller | Applying Informed Coding and Informed Embedding to Design a Robust, High Capacity Watermark[END_REF], a Monte-Carlo approach is used which requires many iterations of the Viterbi decoder. On a Pentium 3 GHz, for a 256 × 256 image and a message size of 1024 bits, watermarking takes from half an hour to two hours depending on the robustness threshold. In [START_REF] Wang | Toward a Better Understanding of Dirty Paper Trellis Codes[END_REF] and [START_REF] Lin | An Efficient Algorithm for Informed Embedding of Dirty Paper Trellis Codes for Watermarking[END_REF], the Viterbi decoder is only used once or twice. On a Pentium 3 GHz, for a 256 × 256 image and a message size of 1024 bits, watermarking then takes less than one minute. Nevertheless, those two last approaches degrade the image quality and are thus not fully satisfying.
Our previous approach [START_REF] Chaumont | A Novel Embedding Technic for Dirty Paper Trellis Watermarking[END_REF] is a good compromise between complexity and robustness. It is illustrated in Figure 2 in the plane defined by v_x and c* (those two vectors are noted v_x^2D and c*^2D). This plane is usually named the Miller, Cox and Bloom plane (abbr. MCB plane). Our approach consists in dichotomously reducing the angle between the host vector v_x and the codeword c* until obtaining the smallest angle (noted θ_f) with respect to all the other angles. Then, one penetrates inside the Voronoï region with a given angle θ_R. Our informed embedding is thus a rotation of v_x with an oriented angle equal to max(θ_f + θ_R, ∠(v_x, c*)). This rotation gives the marked vector v_y. We then compute the watermark vector v_w = v_y − v_x, back-project it onto the carriers in order to obtain the watermark signal w, and then compute the watermarked signal y = x + w. The inverse wavelet transform of y gives the watermarked image. At the extraction, we project the wavelet coefficients onto the secret carriers and then retrieve the closest codeword (and thus the message) from the codebook C.
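The rotation-based embedding can be made concrete with a small NumPy sketch: v_x is rotated in the plane it spans with c* so that its final angle with c* equals a target value (theta_target stands for θ_f + θ_R; the dimensions and values are illustrative):

import numpy as np

def rotate_towards(v_x, c_star, theta_target):
    """Rotate v_x inside the MCB plane span(v_x, c_star) so that the
    angle between the result and c_star equals theta_target."""
    e1 = c_star / np.linalg.norm(c_star)
    w = v_x - (v_x @ e1) * e1              # component orthogonal to c_star
    e2 = w / np.linalg.norm(w)
    a, b = v_x @ e1, v_x @ e2              # 2D coordinates of v_x in the plane
    r = np.hypot(a, b)                     # the norm of v_x is preserved
    return r * (np.cos(theta_target) * e1 + np.sin(theta_target) * e2)

rng = np.random.default_rng(0)
v_x = rng.standard_normal(12288)
c_star = rng.standard_normal(12288)
v_y = rotate_towards(v_x, c_star, theta_target=0.3)
v_w = v_y - v_x    # watermark vector, to be back-projected onto the carriers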
N sec = N × payload × N arc , After the projection of the host signal x onto carriers u i (carriers are built with this Space Division Multiplexing approach), we obtain the host vector v x . In order to better balance the influence distribution of the v x vector coefficients, we re-arrange it. Indeed, the first coefficients of v x are related to the low frequency wavelet sub-bands 1, 2 and 3, the next coefficients are related to higher frequency sub-bands 4, 5, and 6 etc. Thus the vector v x is re-arranged such that in each consecutive group of N arc coefficients, the probability distribution of influence is the same (see in Figure 4, the distribution influence re-arrangement). Note that each block of N arc should respect this distribution but coefficients are again pseudo-randomly arranged in order to keep a good security level. Random SDM With the random SDM approach, we compute regions with no overlap, that fully cover the host vector x and whose sizes are integer and close to s = N wlt /N sec . We talk of quasi-equal regions sizes (see Figure 5). There are N sec regions. A region r i , with i ∈ [0, N sec -1], is thus a set of contiguous wavelet coefficients, such that: r i = {x[i]|i ∈ [ i.s , (i + 1).s -1]}, (4) where x[i], with i ∈ [0, N wlt -1] , is a wavelet coefficient of the host vector x. Note that before proceeding to the projection, the host signal x is pseudorandomly shuffled in order to: break spatial dependencies, keep a good security level, and improve the robustness and the psycho-visual impact. The shuffled host signal x is then projected onto N sec carriers using SDM with quasi-equal regions' sizes (see Equation 4for regions definition). The host vector v x is the result of this projection. Results Tests were carried on the first 100 images of the BOWS2 data-base 5 with images resized to 256 × 256 6 . Those images are 8-bits grey-level images and are personal photos. The trellis structure has 128 states with 128 arcs per states. Outputs arcs labels are drawn from a Gaussian distribution and there are N arc = 12 coefficients by output arc. The used payload is payload = 1 bit for 64 coefficients which is the same as the original DPTC algorithm [START_REF] Miller | Applying Informed Coding and Informed Embedding to Design a Robust, High Capacity Watermark[END_REF]. The number of embedded bits is thus 1024 bits. Wavelet transform is a 9/7 Daubechies with l = 3 decompositions levels. Except the LL sub-band, all the other sub-bands are used to form the host signal x. With 256 × 256 images, the wavelet space size is thus N wlt = 64 512 coefficients. Knowing that the payload is payload = 1/64 bits per pixel and that the number of outputs coefficients for an arc is N arc = 12 coefficients, private space size is thus N sec = 1024 × 12 = 12 288 coefficients. Four kinds of robustness attacks have been applied: Gaussian noise attack, filtering attack, valumetric attack and jpeg attack. The Bit Error Rate (BER) is the number of erroneous extracted bits divided by the total number of embedded bits. The BER is computed for each attack. Three algorithms compete with a mean distortion close to 42.4 dB: -the algorithm detailed in [START_REF] Chaumont | A Novel Embedding Technic for Dirty Paper Trellis Watermarking[END_REF], having carriers of high dimension and whose projection complexity is quadratic. For this method the mean embedding PSNR is 42.42 dB and the inside angle penetration is θ R = 0.1 radian; -the structured SDM algorithm (see Section 3.1). 
Results
Tests were carried out on the first 100 images of the BOWS2 database, with images resized to 256 × 256. Those images are 8-bit grey-level images and are personal photos. The trellis structure has 128 states with 128 arcs per state. Output arc labels are drawn from a Gaussian distribution and there are N_arc = 12 coefficients per output arc. The used payload is payload = 1 bit for 64 coefficients, which is the same as in the original DPTC algorithm [START_REF] Miller | Applying Informed Coding and Informed Embedding to Design a Robust, High Capacity Watermark[END_REF]. The number of embedded bits is thus 1024 bits. The wavelet transform is a 9/7 Daubechies with l = 3 decomposition levels. Except for the LL sub-band, all the other sub-bands are used to form the host signal x. With 256 × 256 images, the wavelet space size is thus N_wlt = 64 512 coefficients. Knowing that the payload is payload = 1/64 bits per pixel and that the number of output coefficients for an arc is N_arc = 12 coefficients, the private space size is thus N_sec = 1024 × 12 = 12 288 coefficients. Four kinds of robustness attacks have been applied: Gaussian noise attack, filtering attack, valumetric attack and jpeg attack. The Bit Error Rate (BER) is the number of erroneous extracted bits divided by the total number of embedded bits. The BER is computed for each attack. Three algorithms compete with a mean distortion close to 42.4 dB:
- the algorithm detailed in [START_REF] Chaumont | A Novel Embedding Technic for Dirty Paper Trellis Watermarking[END_REF], having carriers of high dimension and whose projection complexity is quadratic. For this method the mean embedding PSNR is 42.42 dB and the inside penetration angle is θ_R = 0.1 radian;
- the structured SDM algorithm (see Section 3.1). For this method the mean embedding PSNR is 42.23 dB and the inside penetration angle is θ_R = 0.05 radian;
- and the random SDM algorithm (see Section 3.2). For this method, the mean embedding PSNR is 42.15 dB and the inside penetration angle is θ_R = 0.11 radian.
In Figures 6, 7, 8 and 9, we observe that the two SDM approaches achieve equal or even better results than the high dimension carriers algorithm [START_REF] Chaumont | A Novel Embedding Technic for Dirty Paper Trellis Watermarking[END_REF]. Results are similar for the Gaussian and the jpeg attacks, but for the filtering and the scaling attacks, the SDM approaches are better. This is a very interesting result since the high dimension carriers approach [START_REF] Chaumont | A Novel Embedding Technic for Dirty Paper Trellis Watermarking[END_REF] is more complex (quadratic complexity) than the two SDM approaches. The high dimension carriers approach [START_REF] Chaumont | A Novel Embedding Technic for Dirty Paper Trellis Watermarking[END_REF] may then be replaced with a faster (linear complexity) SDM approach. If we compare the structured SDM approach with the random SDM approach, for the filtering and the scaling attacks, we observe that under 10% BER, the random SDM (i.e. the least complex approach) achieves the best results. We conclude that, in order to achieve the projection onto carriers, one should use random SDM since it is linear in complexity and it gives better robustness results than the structured SDM approach and the high dimension carriers approach [START_REF] Chaumont | A Novel Embedding Technic for Dirty Paper Trellis Watermarking[END_REF]. On a Pentium 3 GHz, for a 256 × 256 image and a message size of 1024 bits, watermarking takes less than one minute for the two SDM approaches and from half an hour to two hours for the original Miller et al. algorithm [START_REF] Miller | Applying Informed Coding and Informed Embedding to Design a Robust, High Capacity Watermark[END_REF]. In [START_REF] Chaumont | A Novel Embedding Technic for Dirty Paper Trellis Watermarking[END_REF], we show that our general scheme (using a secret space and a rotation-based embedding) has good robustness performance (except against the jpeg attack) compared to the original algorithm [START_REF] Miller | Applying Informed Coding and Informed Embedding to Design a Robust, High Capacity Watermark[END_REF] or the Lin et al. approach [START_REF] Lin | An Efficient Algorithm for Informed Embedding of Dirty Paper Trellis Codes for Watermarking[END_REF]. We conclude that our scheme [START_REF] Chaumont | A Novel Embedding Technic for Dirty Paper Trellis Watermarking[END_REF], enriched with the SDM technique, provides a good compromise between distortion, payload, robustness and complexity. Moreover, we believe that it is at least as difficult for an attacker to retrieve the codebook with our random SDM approach as with the Miller et al. one [START_REF] Miller | Applying Informed Coding and Informed Embedding to Design a Robust, High Capacity Watermark[END_REF]. Indeed, the original approach only shuffles a subset of the DCT host coefficients, whereas our approach shuffles and projects onto random carriers almost all the wavelet host coefficients.

Conclusion
In this paper, we introduce a new Dirty Paper Trellis Code (DPTC) algorithm having a security space built by projecting the wavelet coefficients onto secret carriers.
In comparison to the original DPTC algorithm [START_REF] Miller | Applying Informed Coding and Informed Embedding to Design a Robust, High Capacity Watermark[END_REF], our scheme is at least as secure and the visual degradation is better adapted to the human psychovisual system. After introducing the general problem of projections, we have proposed two Space Division Multiplexing (SDM) algorithms in order to decrease the projection complexity to a more reasonable linear computational complexity. We evaluated robustness with and without the SDM approaches and observed that projecting with the SDM approaches gives more robust results than projecting with high dimension carriers. We finally observe that the random SDM approach, which is the least complex approach, is the most robust.

Fig. 1. Construction scheme of the secret embedding space.
Fig. 2. Rotation-based embedding in the Miller, Cox and Bloom plane.
Fig. 3. General Space Division Multiplexing principle.
Fig. 4. Structured SDM: Space Division Multiplexing and re-arrangements for the projection onto carriers.
Fig. 5. Random SDM.
Fig. 6. Gaussian attack: BER for attack on the high dimension carriers algorithm [START_REF] Chaumont | A Novel Embedding Technic for Dirty Paper Trellis Watermarking[END_REF], on the structured SDM algorithm and on the random SDM algorithm.
Fig. 7. Filtering attack: BER for attack on the high dimension carriers algorithm [2], on the structured SDM algorithm and on the random SDM algorithm.
Fig. 8. Valumetric attack: BER for attack on the high dimension carriers algorithm [2], on the structured SDM algorithm and on the random SDM algorithm.
Fig. 9. Jpeg attack: BER for attack on the high dimension carriers algorithm [2], on the structured SDM algorithm and on the random SDM algorithm.

Notes: l is the number of wavelet decompositions, payload is the number of embedded bits per pixel, and N_arc is the number of output coefficients labeling an arc of the trellis. Paper [START_REF] Wang | Toward a Better Understanding of Dirty Paper Trellis Codes[END_REF]'s purpose is not informed-embedding, but it uses a simple embedding solution. With a 3-level wavelet decomposition, payload = 1/64 and N_arc = 12, the retained solution is s_789 = 6, s_456 = 4, s_123 = 3. With a 3-level wavelet decomposition, payload = 1/64 and N_arc = 12, the retained solution is n_789 = 8, n_456 = 3, n_123 = 1. The BOWS2 database is located at http://bows2.gipsa-lab.inpg.fr/. The images' sub-sampling has been achieved with the xnview program using Lanczos interpolation.

Acknowledgements
This investigation was supported by the VOODDO project, which is a French national project of the ANR (Agence Nationale de la Recherche) "Contenu et Interaction". We would also like to thank the Languedoc-Roussillon Region.
04117292
en
[ "spi.other" ]
2024/03/04 16:41:26
2023
https://theses.hal.science/tel-04117292/file/these__Chahine.pdf
θ Crack tip opening angle (CTOA)
δ Crack Tip Opening Displacement (CTOD)
dδ Variation of the CTOD

Lastly, Chapter III will address a panel of fatigue crack growth problems with different levels of complexity to show the potential of the proposed approaches to efficiently conduct the three kinds of uncertainty propagation analysis. An example application with correlated uncertain parameters will be studied. A second example will deal with fracture in a ductile material, where the crack driving forces of interest are obtained from a computationally demanding incremental finite element analysis. The third application example will deal with a large number of uncertain parameters resulting from the representation of a random field of spatially varying material properties.

Introduction
Fatigue of materials, considered as the cumulative damage under cyclic loads, even below their elastic limits, is identified as a major engineering problem and the most common source of failure of mechanical structures and components. 80-90% of identified structural failures have been attributed to fatigue, which has been the cause of several catastrophic accidents such as bridge collapses and aircraft failures. A first level of complexity that engineers have to deal with lies in the difficulty of understanding fatigue phenomena, which occur without any visible warning signs. Prediction is thus almost impossible and, consequently, material and human damage can be severe. Many researchers have been interested in understanding the physics of the fatigue phenomenon in order to predict the service life of materials and structures subjected to cyclic loads. Their studies are mainly devoted to the modeling and monitoring of the evolution of stresses in the vicinity of micro- or macroscopic cracks. The fatigue damage process is decomposed into two phases. The embrittlement of the material, particularly in the vicinity of stress concentration zones, leads, in a first phase, to the initiation of cracks which, in a second phase, propagate under the effect of the loading until the sudden failure of the material. Fatigue life, measured in number of loading cycles, is generally taken as the sum of the time taken for a crack to initiate and the time taken for it to propagate to a critical length, or to failure. The duration of each of these two phases depends mainly on the type of material and its initial defects. For most structures subjected to fatigue, the contribution of the crack initiation phase to the service life is generally low relative to that of the propagation phase (about 10% of the total service life). The fatigue life is therefore mainly represented by the duration of the propagation phase, which is estimated by integrating models relating the evolution of the crack size to the number of load cycles, such as the well-known Paris-Erdogan law.
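As a minimal numerical illustration of this integration, the propagation life under the Paris-Erdogan law da/dN = C(ΔK)^m, with ΔK = Δσ·Y·√(πa), can be computed as follows; all parameter values are placeholder choices, not data from this work:

import numpy as np
from scipy.integrate import quad

# Placeholder parameters for an illustrative through-crack problem.
C, m = 3.0e-11, 3.0        # Paris constants (units consistent with MPa, m)
d_sigma = 100.0            # stress range [MPa]
Y = 1.12                   # geometry factor, assumed constant here
a0, ac = 1.0e-3, 2.0e-2    # initial and critical crack sizes [m]

def growth_rate(a):
    dK = d_sigma * Y * np.sqrt(np.pi * a)   # stress intensity factor range
    return C * dK**m                        # da/dN (Paris-Erdogan law)

# Propagation life: N = integral from a0 to ac of da / (da/dN).
N_life, _ = quad(lambda a: 1.0 / growth_rate(a), a0, ac)
print(f"Predicted propagation life: {N_life:.3e} cycles")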
Within this framework and under simplifying assumptions, such as linear elastic fracture mechanics, the service life can be adequately calculated using stress intensity factors and comparing them to the material toughness. These parameters, which define the driving forces that govern the crack during its propagation, depend on the geometry of the structure and the applied loading. For simple structures, analytical expressions are available, but for more complex structures, with complex geometries and/or a mixed mode of crack propagation following a curvilinear path, such expressions do not exist and numerical simulations, with heavy computational costs, are necessary. A second level of complexity, to be dealt with mainly by researchers this time, lies in the intrinsically random nature of the fatigue phenomenon. Deterministic approaches, based exclusively on the principles of fracture mechanics, provide conservative predictions, and numerous studies have shown the high dispersion of the crack propagation rates recorded during experimental tests. Accurate predictions are thus only possible through the coupling of fracture mechanics with probability theory. Different approaches can be considered to integrate the stochastic character of the crack growth process into the fatigue life estimation. Approaches based either on Markov theory or on the weighting of the equation governing the crack growth rate by a random process are very often purely statistical. In addition to requiring costly experimental work to determine their parameters and substantial analytical development, these models are only applicable to simple academic cases in which the crack propagates in the opening mode. In real engineering problems, however, failure occurs in mixed mode, and the use of these models is largely questionable. The problem is therefore to propose an approach that guarantees the best compromise between the representation of the real behavior of the fatigue crack and the consideration of the different sources of uncertainty. From the point of view of the reliability specialist, uncertainty propagation through a mechanical model is the best alternative. Different sources of uncertainty exist, mainly associated with material properties, structure geometry and loading conditions. Uncertainties in these input parameters must be incorporated into the modeling of the crack propagation process in order to characterize their effects on the mechanical response and to provide a robust prediction of the service life. To this end, the effect of uncertainties on the mechanical response is quantified using a mechano-probabilistic coupling strategy. This quantification can have three distinct objectives and purposes: (1) evaluate the variability of the mechanical response by computing statistical moments and constructing the probability density; (2) measure the contribution of the variability of each uncertain parameter to the variability of the mechanical response by a sensitivity analysis based on a variance decomposition; (3) evaluate the probability of failure with respect to one or more failure scenarios by a reliability analysis. Mathematically speaking, for the three purposes above, the treatment of the problem relies on several multidimensional integral calculations.
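To make these three purposes concrete before the methodological developments of the next chapters, the following minimal Python sketch illustrates purposes (1) and (3) by crude Monte-Carlo simulation on a hypothetical, purely illustrative explicit life model; the model, its parameter names and the distributions are assumptions introduced here for illustration only, not data from this thesis. Purpose (2), variance-based sensitivity, requires dedicated estimators (e.g., Sobol indices) and is deferred to Chapter II.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical explicit life model, standing in for a mechanical solver:
# N = 1 / (C * dS**3), with an uncertain Paris-type coefficient C and an
# uncertain stress range dS (all values are illustrative assumptions).
C = rng.lognormal(mean=np.log(1e-10), sigma=0.1, size=n)
dS = rng.normal(loc=100.0, scale=10.0, size=n)
N = 1.0 / (C * dS**3)

# Purpose (1): variability of the response (statistical moments).
print(f"mean = {N.mean():.3e}, std = {N.std():.3e}")

# Purpose (3): reliability with respect to a failure scenario N < N_target.
N_target = 7.5e3
print(f"Pf = {(N < N_target).mean():.4f}")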
In fatigue cracking problems, where mechanical models are often available only in an implicit and computationally expensive form, the evaluation of these integrals is not trivial. With the exception of Monte-Carlo simulation (MCS), whose application is in practice restricted to simple problems for which an explicit, inexpensive formulation of the mechanical model is available, few probabilistic methods are able to address all three purposes and, moreover, most of them become inefficient as the stochastic dimension increases. The proposal of an uncertainty propagation approach covering the three purposes above, based on efficient multidimensional integration schemes and a robust approximation of complex mechanical problems, is therefore highly relevant. On this last aspect, which concerns the accurate approximation of the response of an implicit model, promising probabilistic computation methods based on response surfaces, surrogate models or meta-models have been developed. Their common principle is simple: it consists in building an explicit representation of the original implicit mechanical model, by simulating the latter at a set of points called a design of experiments. Once this explicit representation is obtained, the three purposes of the probabilistic calculation can easily be approximated by performing an MCS on it. Considering the previous statements and arguments, computational approaches for addressing uncertainty propagation analysis through engineering problems are developed to satisfy the following objectives:
• To achieve methodological advances in the field of uncertainty propagation through mechanical models representing complex physical phenomena with high probabilistic dimensionality. The combination of efficient cubature formulae and metamodeling techniques will be investigated.
• The computational approaches must be sufficiently generic, on the one hand, to solve a large class of engineering problems, particularly those dealing with fatigue crack growth, and on the other hand, to handle all three kinds of uncertainty propagation analysis, namely statistical moments and distributions, sensitivity, and reliability analysis.
• The computational approaches must be efficient, even for problems with a high probabilistic dimension; fewer than a few hundred evaluations of the primary mechanical model would be appreciated.
• To avoid additional computational cost when switching from one kind of uncertainty propagation analysis to another.
• The quantities of interest, i.e., statistical moments, probability distributions, sensitivity indices and probability of failure, must be straightforwardly derivable.
To reach these objectives step by step, the manuscript is structured in three chapters whose respective contents are detailed in what follows. Chapter I is a bibliographic summary presenting the general notions of the fatigue phenomenon. In this chapter, a state of the art of the fatigue phenomenon and of the deterministic and probabilistic approaches to deal with uncertainties will be presented. We will first present the basics of fracture mechanics, which underlie all the laws used for the fatigue phenomenon, and then focus on understanding the fatigue crack growth phenomenon. In addition, fatigue crack growth models will be studied, and their predictions will be compared to experimental data from the literature.
Stochastic models for fatigue crack propagation will also be presented, since deterministic approaches cannot capture the random nature of the fatigue phenomenon. Chapter II will first introduce the general principles of uncertainty propagation analysis through physical models, to make the reader aware of the related mathematical framework. The focus will be on the mathematical formulations of the quantities of interest related to the three possible kinds of uncertainty propagation analysis, namely statistical moments and distributions, sensitivity, and reliability analysis. After a presentation of classical methods for computing the multidimensional integrals involved in uncertainty propagation analysis, and a critical review of their advantages and limitations, six efficient cubature formulae taken from the literature will be introduced. Their ability to conduct uncertainty propagation analysis will be evaluated on various academic and engineering problems, ranging from a simple explicit mathematical model to a computationally demanding implicit model with high probabilistic dimensionality. The analysis of the advantages and disadvantages of each of these cubature formulae will orient our choice towards a class of methods based on the concept of polynomial chaos. Chapter III will introduce the well-established metamodeling technique named Polynomial Chaos Expansion (PCE), which emerged in the early 1990s to conduct uncertainty propagation analysis through mechanical models. It consists in representing the random responses due to uncertainty on the input parameters of a mechanical model as a series expansion on a multivariate polynomial basis, called a metamodel. The mathematical formalism related to the construction of PCE-based metamodels will be recalled, with special emphasis on the computation of the unknown PCE coefficients using projection and regression techniques. The number of PCE coefficients increases with the probabilistic dimensionality of the mechanical model and with the degree of the PCE used to ensure the accuracy of the metamodel, resulting in a computational effort that is impossible to meet using conventional approaches, such as Monte-Carlo simulation and full tensor-product integration schemes, for the estimation of the PCE coefficients. To circumvent this problem, two strategies for constructing PCE-based metamodels will be introduced. The first one, belonging to the projection techniques, aims to reduce the number of evaluations of the mechanical model involved in the computation of the multidimensional integrals defining the PCE coefficients. The second one, belonging to the regression techniques, aims to use smart truncation schemes favoring the PCE components with the largest contributions to the variability of the model responses of interest, thus reducing the number of PCE coefficients and the computational cost required for their estimation. Then, the derivation of the quantities of interest, corresponding to moments and distributions, sensitivity, and reliability analysis, based on post-processing of the PCE coefficients, will be presented.
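As a foretaste of Chapter III, the sketch below builds a one-dimensional PCE-based metamodel by regression, using probabilists' Hermite polynomials, for a toy explicit model standing in for an expensive mechanical solver; the model g, the design size and the PCE degree are illustrative assumptions. The mean and variance are then post-processed directly from the PCE coefficients using the orthogonality of the basis.

import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

rng = np.random.default_rng(1)

# Toy model Y = g(X), X ~ N(0, 1); stands in for an expensive solver.
g = lambda x: np.exp(0.3 * x) + 0.1 * x**2

# Design of experiments: a few model evaluations only.
x = rng.standard_normal(50)
y = g(x)

# Regression on probabilists' Hermite polynomials He_0 .. He_p.
p = 5
A = hermevander(x, p)                  # design matrix, shape (50, p+1)
c, *_ = np.linalg.lstsq(A, y, rcond=None)

# Post-processing: mean and variance follow from the orthogonality
# relation E[He_j He_k] = k! * delta_jk for X ~ N(0, 1).
mean = c[0]
var = sum(c[k]**2 * factorial(k) for k in range(1, p + 1))
print(mean, var)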
Chapter I: State of art on probabilistic modelling of fatigue crack propagation

Introduction

Fatigue of material, defined as the alteration of mechanical properties under the effect of a cyclic load, has been identified as a major technical problem leading to the failure of structures and mechanical components. The embrittlement of the material, particularly in the vicinity of stress concentration zones, leads to the initiation of cracks which propagate under the effect of the loading until the sudden rupture or failure of the material. The fatigue life, measured in terms of number of loading cycles, is the sum of the time necessary for the appearance of a macroscopic crack in the material and the time taken by the crack to propagate until it reaches a critical size beyond which the component can no longer perform its service function. In most cases, the fatigue life is represented only by the duration of the propagation phase, which is estimated by integrating empirical models relating the evolution of the crack size to the number of cycles. These models are mainly deterministic and therefore unable to describe the stochastic character of crack growth, which is induced by uncertainties in the mechanical properties of the materials, the loading, and the geometry of the structure. Indeed, different sources of dispersion must be taken into consideration in each situation in order to describe the real state of the structures. Ignoring these uncertainties leads to an under-dimensioning of the structure, hence a higher risk of failure and a much higher repair cost. To overcome this problem, stochastic models have been developed; these models have been the target of several criticisms because they are purely statistical and thus unable to describe complex phenomena such as mixed-mode propagation or crack growth retardation under the effect of overloads. Thus, the main objective of this thesis is to develop a probabilistic approach capable of modeling the physical phenomena associated with the fatigue crack propagation process while accounting for its stochastic character. More precisely, the main objective is to solve problems with a high stochastic dimension (i.e., a high number of uncertain parameters) in applications dealing with fatigue crack growth. To this end, an efficient uncertainty propagation approach will be presented, taking into account the unstable crack growth rate and enhancing the fatigue crack growth mechanical model using a finite element model, to better represent the statistical dependence between uncertain parameters. In order to highlight the various scientific obstacles related to the subject of the thesis, we start the bibliographical review with the presentation of the fatigue phenomenon, followed by a description of the fatigue life prediction approaches, developing the one based on fracture mechanics. We will then present the basics of fracture mechanics, which underlie all the laws used for the fatigue phenomenon, and focus on understanding the fatigue crack growth phenomenon. Moreover, this chapter will present the interaction between the notion of dispersion and fatigue failure. Finally, particular attention will be given to crack propagation, since it constitutes a major part of the structure's life; probabilistic models of fatigue crack propagation are therefore presented, because deterministic approaches cannot handle the random nature of the fatigue phenomenon, in particular the retardation effect induced by the application of an overload due to the randomness of the loading.
Fatigue phenomenon

Basic concepts and physical aspects

Since the 19th century, the fatigue of materials (Sobczyk et al., 1992) has been identified as a major technical problem leading to the failure of structures and mechanical components. Indeed, fatigue failure is widely observed in most engineering structures (Ahmadzadeh et al.), (Mareau et al., 2019), (Kamal et al., 2018). For instance, a classification study (Schütz, 1996) of the failure modes in military aircraft has shown that failures induced by fatigue of materials are the most frequently observed: between 80% and 90% of total failures were attributed to fatigue. A material is subjected to fatigue loading when the applied loading varies with time and can modify its local properties until failure of the structure. In general, one or more tiny cracks start in the material and grow until complete failure occurs. These cracks can exist in the structure from manufacturing, or they can start early in the service life. Thus, the phenomenon of fatigue can be described as the progressive damage of a structure subjected to cyclic stresses, and the prevention of fatigue fracture is a vital aspect of the design of structures subjected to cyclic loading. Experience has shown that fatigue failure can usually be divided into three stages. First, a stage where microcracks start from the surface of components, called the crack initiation stage. Then follows a stage where these microcracks grow progressively while the cyclic loading continues, called the stable crack growth stage. Finally, when the remaining transverse section of the component is too small to support the applied load, a sudden breakdown is observed, called the unstable crack growth stage. The fatigue lifetime is usually defined as the number of loading cycles leading to failure. In every stage of the fatigue lifetime, complex physical phenomena can be observed. Furthermore, the respective fractions of these stages within the lifetime of the structure can be significantly different; they mainly depend on the nature of the material and on whether it contains initial defects or not (Schijve, 2009). More precisely, factors such as the quality of material processing (size and distribution of inclusions, voids, ...) and the procedure of material processing (annealed, quenched, tempered, ...) will affect the fatigue lifetime. It is worth mentioning that distinguishing the initiation and propagation stages is of great importance in the prediction of the fatigue lifetime, since each stage has its own influencing parameters. That is, some parameters which have a significant effect on the initiation stage may have a weak effect on the propagation stage, and vice versa. For instance, the surface finish or roughness of the material affects only the initiation stage, while a corrosive environment affects both the initiation and the propagation stages.

Fatigue design approaches

In this context, three design concepts can be used. The first one is called the Safe-Life concept, where the design is performed under a limited fatigue lifetime assumption.
In other words, it is supposed that the fatigue lifetime is defined as the duration of the initiation stage, which means that the structure can safely reach retirement without fatigue failure being observed. This concept is mainly used in the automotive industry, and the safe fatigue lifetime is defined by weighting a target fatigue lifetime (usually the mean fatigue lifetime) by a safety coefficient. The second one, called the Fail-Safe concept, is widely used in the aeronautic industry. It assumes that defects can be tolerated without affecting the structural integrity, i.e. failure does not occur before a critical crack is detected and repaired, which consequently supposes that periodic inspections are scheduled over the lifetime of the structure. The third one, called the Damage Tolerance concept, has the same backbone as Fail-Safe, but is based on fracture mechanics: the safe fatigue lifetime is defined as the number of loading cycles able to grow the crack to a critical length. This concept is mainly applied for high-toughness materials where the crack growth is expected to be slow. The difference between these concepts is the consequence of the different criteria required in fatigue design. Indeed, for each type of structure and each field of application, criteria are dictated by the applied loading and the types of materials; the choice of one concept or another must take these parameters into consideration. In this thesis the Damage Tolerance concept is applied: for critical structures, in order to guarantee a high level of reliability, this concept seems best suited. However, for purely economic reasons, structures are in reality often designed following a Safe-Life concept. In practical engineering, fatigue fracture of a structure may cause enormous economic losses and disastrous accidents; the Damage Tolerance concept and life prediction approaches are therefore important for safe-life design and reliability assessment (Li et al., 2018), (Zheng et al., 2020), (Song et al., 2020). Fatigue design methods are based on the calculation of the service life; however, each design method from the previous paragraph considers one of the two periods (initiation or propagation) which constitute the fatigue damage process. Thus, a variety of approaches exist to predict the fatigue life. These approaches are based on a relationship between a load-related parameter, such as the stress/strain amplitude or the stress intensity factor (SIF) range ΔK (described later in section 3.1), and the fatigue life N in terms of number of cycles. In general, three major approaches can be distinguished for the prediction of the fatigue life of a structure: the approach based on the Wöhler curve (i.e. the S-N curve), the approach based on local deformation, and the approach based on the theory of fracture mechanics.

Approach based on the Wöhler curve

This approach is widely used in the design of structures likely to be subjected to fatigue damage during their lifetime. Its particularity lies in its simple formulation, which relates the number of cycles N to the variation of the nominal stress Δσ, or to the stress amplitude defined as half the difference between the maximum and minimum stress values (σ_a = (σ_max − σ_min)/2). This relationship is known as the Wöhler curve or S-N curve, mathematically written (Eurocode 3, 1996):

\[
\Delta\sigma = \max\left[\left(\frac{N}{C_{SN}}\right)^{-1/m_{SN}};\ \Delta\sigma_D\right] \qquad (I.1a)
\]
or, in logarithmic form,

\[
\log\Delta\sigma = \max\left[-\frac{1}{m_{SN}}\log N + \frac{\log C_{SN}}{m_{SN}};\ \log\Delta\sigma_D\right] \qquad (I.1b)
\]

where C_SN is a constant of the Wöhler curve, 1/m_SN is the slope of the Wöhler curve and Δσ_D is the endurance limit of the material, defined as the horizontal asymptote of the Wöhler curve. This curve distinguishes two parts:
o The first for a low number of fatigue cycles, or oligo-cyclic (low-cycle) fatigue, characterized by severe loads and corresponding to materials exhibiting non-negligible plasticity.
o The second for a high number of fatigue cycles, or polycyclic (high-cycle) fatigue, where the material behavior is characterized by zero macroscopic plasticity.
Low-cycle fatigue is caused by a relatively small number of cycles, say tens, hundreds, or thousands, and is generally accompanied by significant amounts of plastic deformation. The transition between the two areas depends on the material under stress; it generally occurs around 10^5 cycles, as shown in figure I.5 (Schijve, 2009). Let us recall that in this thesis we are interested in the intermediate zone, as shown in figure I.6: since we are not interested in the total lifetime but only in the propagation phase, the number of cycles required for the initiation phase is removed, leaving a moderate number of cycles. The fatigue life, expressed in terms of number of cycles N to failure, can be determined directly from the Wöhler curve and calculated from equations (I.1a) or (I.1b) for a constant amplitude loading Δσ. In the case of variable amplitude loading, equations (I.1a) or (I.1b) cannot be used directly; the evaluation of the fatigue life then requires a whole procedure which consists, first, in representing the applied loading sequence as a histogram by means of a cycle counting method such as the Rainflow method, then in calculating the damage induced by this loading history using a damage accumulation law such as Miner's rule, and, in a last step, in determining the number of histograms needed to reach failure (a numerical sketch of this procedure is given below). Indeed, over the past decades, fatigue damage accumulation rules and life prediction approaches have been investigated to prevent disastrous fatigue failure accidents (Wang et al., 2021), (Gao et al., 2020), (Li et al., 2019). The fatigue life is then deduced by converting the number of histograms into a number of loading cycles. However, the Wöhler curve approach is criticized for its conservatism in fatigue life prediction and because it does not consider the interaction effects observed in the case of variable amplitude loads.
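A minimal sketch of this variable-amplitude procedure is given below, assuming the Rainflow counting has already produced a load histogram; the S-N constants and the histogram values are hypothetical, chosen only to illustrate the inversion of (I.1a) and Miner's linear accumulation.

import math

# Illustrative S-N parameters (hypothetical values, not the thesis data).
C_SN, m_SN, dsigma_D = 2.0e12, 3.0, 50.0   # stresses in MPa

def cycles_to_failure(dsigma):
    """Invert the Woehler relation (I.1a): N = C_SN * dsigma**(-m_SN),
    with an infinite life below the endurance limit dsigma_D."""
    if dsigma <= dsigma_D:
        return math.inf
    return C_SN * dsigma ** (-m_SN)

# Load histogram assumed to come from a Rainflow count of the sequence:
# list of (stress range in MPa, number of cycles in one histogram block).
histogram = [(120.0, 50), (90.0, 400), (60.0, 5000), (40.0, 20000)]

# Miner's linear damage accumulation over one histogram block.
D_block = sum(n_i / cycles_to_failure(ds_i) for ds_i, n_i in histogram)

# Number of blocks to failure (D = 1), to be converted back to cycles.
blocks = 1.0 / D_block
print(f"damage per block = {D_block:.3e}, blocks to failure = {blocks:.1f}")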
Approach based on local deformation

Local deformation is used to predict the fatigue life of structural components with notches. It is based on the concept of the ε-N curve, which relates the local deformation ε to the number of loading cycles N required to initiate a crack. This approach studies the plastic deformation that may arise in confined areas where fatigue cracks initiate. Thus, this method addresses fatigue situations where local yielding is involved, which is often the case for ductile metals at short lives. The major difference with the S-N curve concerns the consideration of plasticity. This approach can only be used to predict the duration of the initiation period, similarly to the case of the S-N curve, where the propagation time of the fatigue tests is negligible.

Figure I.7. Elastic, plastic, and total strain versus life curves (Landgraf, 1970)

A wide variety of relationships between the parameters ε and N are available in the literature (Basquin, 1910), (Coffin, 1954), (Manson, 1954). The total strain amplitude can be divided into elastic and plastic parts (figure I.7):

\[
\Delta\varepsilon = \Delta\varepsilon_{el} + \Delta\varepsilon_{pl} \qquad (I.2)
\]

where the elastic strain amplitude is related to the stress amplitude through Hooke's law:

\[
\Delta\varepsilon_{el} = \frac{\Delta\sigma}{E} \qquad (I.3)
\]

where E is the Young elastic modulus. The plastic strain amplitude is a measure of the half-width of the stress-strain hysteresis loop, given by (Coffin, 1954), (Manson, 1954) as:

\[
\frac{\Delta\varepsilon_{pl}}{2} = \varepsilon_f'(2N)^c \qquad (I.4)
\]

where ε'_f is the fatigue ductility coefficient and c is a constant of order −0.5. Thus, based on Hooke's law (i.e., Δε_el = Δσ/E), where the variation of the nominal stress Δσ is derived from the Wöhler curve (i.e., Δσ/2 = σ'_f (2N)^b), the variation of the total strain Δε is given by:

\[
\frac{\Delta\varepsilon}{2} = \frac{\sigma_f'}{E}(2N)^b + \varepsilon_f'(2N)^c \qquad (I.5)
\]

where σ'_f is the fatigue strength coefficient, b is a constant defining the slope on a log ε − log N plot and E is the Young modulus. Thus, both low-cycle and high-cycle fatigue can be described using equation (I.5). The application of the approach based on the local ε-N curve is not feasible for the propagation stage: when the crack propagates, the strain field at the crack tip changes constantly, which makes the calculation of Δε extremely complicated. In the case of constant amplitude loading, the fatigue life can be obtained directly from the ε-N curve or by using one of the relations from the literature; note that equation (I.5) is implicit in N and must be solved numerically, as sketched below. In the case of variable amplitude loading, a three-step procedure similar to that adopted in the Wöhler curve approach should be used. The specificity of the ε-N approach is that it is based on a simple formulation and provides an estimate of the initiation period, while taking into account a number of parameters considered to influence the fatigue life, such as the mean stress. A similarity between the S-N and ε-N approaches is that neither considers the analysis of crack growth, unlike the fracture mechanics approach discussed in section 1.2.3.
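Since the strain-life relation (I.5) is implicit in N, a numerical inversion is needed in practice; the following sketch solves it by bisection in log space, with hypothetical material constants (all values are illustrative assumptions, not the thesis data).

# Illustrative strain-life constants (hypothetical, typical of a steel).
E, sigma_f, b = 200_000.0, 1000.0, -0.09   # E and sigma_f' in MPa
eps_f, c = 0.6, -0.55

def strain_amplitude(N):
    """Right-hand side of the strain-life relation (I.5)."""
    return (sigma_f / E) * (2 * N) ** b + eps_f * (2 * N) ** c

def solve_life(eps_a, lo=1.0, hi=1e9, tol=1e-6):
    """Bisection on N: strain_amplitude decreases monotonically with N."""
    for _ in range(200):
        mid = (lo * hi) ** 0.5            # bisect in log space
        if strain_amplitude(mid) > eps_a:
            lo = mid
        else:
            hi = mid
        if hi / lo < 1 + tol:
            break
    return (lo * hi) ** 0.5

print(f"N at eps_a = 0.005: {solve_life(0.005):.3e} cycles")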
Approach based on damage tolerance

This approach is based on fracture mechanics. It rests on the idea of tolerating the presence of cracks without catastrophic consequences on the integrity of the structure, and assumes that every material contains crack-type defects, which may be pre-existing in the material or formed during the service life. Here, the fatigue life is defined as the loading time capable of propagating these cracks up to a critical length. Generally, this critical value is determined from the toughness of the material, and the service life is calculated based on linear elastic fracture mechanics. This approach is applicable to the propagation period and is founded on a relationship between the crack length and the number of loading cycles through the stress intensity factor range ΔK (influenced by the loading and geometry conditions). The fatigue life is obtained directly by integrating:

\[
N = \int_{a_0}^{a_c} f\left[\Delta K(\Delta\sigma, a)\right]\, da \qquad (I.6)
\]

where a is the crack length, a_0 and a_c are respectively the initial and critical crack lengths, N is the number of loading cycles, ΔK is the variation of the stress intensity factor and Δσ is the variation of the applied loading. This approach is very helpful for estimating the residual life following an inspection of structures, because the fatigue life is linked to a measurable parameter (the crack length). The parameter ΔK depends on the loading, on the crack length and on its orientation. In practical problems, its explicit formulation is not always available, and the use of a finite element model is required. Moreover, the evaluation of the integral defined by equation (I.6) is difficult and can only be done numerically. Despite this difficulty, the use of the SIF provides great flexibility to the fracture mechanics approach: its application to variable amplitude loadings does not require great efforts in the formulation, and it accounts for the interaction effect as well as the non-linear aspect of fatigue crack propagation. The choice among the approaches described previously for fatigue life prediction depends on the design strategy adopted, which itself depends on the domain of application. The approach based on fracture mechanics concepts is very useful in practical cases where the propagation period constitutes a major part of the fatigue life. For this reason, this work focuses on the study of the propagation period and thus on the fracture mechanics approach to fatigue life prediction.

Deterministic fatigue crack growth

As already mentioned, fracture is the propagation of a macro-crack as a consequence of damage. It is characterized by the irreversible separation of a continuous medium into two parts on either side of an interface. This separation is called a crack and changes the displacement, strain and stress fields (figure I.8). To describe the fracture of the material, new stress measures are introduced, called fracture driving forces. When plastic strains are confined to the vicinity of the crack tip, these fracture driving forces are the Stress Intensity Factors (SIF) proposed by (Irwin, 1957). These factors quantify the intensity of the stress singularity. For static loading, they are used to determine the intensity of the singularity in terms of both stress and displacement.

Figure I.9. Fracture modes

As depicted in figure I.9, three fracture modes can be distinguished in the case of 3D problems: the opening mode (mode I) for loadings applied along y, the in-plane shear mode (mode II) along x, and the out-of-plane shear mode (mode III) along z. Consequently, three SIF K_I, K_II and K_III are to be computed, each corresponding to one fracture mode. Although modes II and III are generally less dangerous than mode I, which is mainly responsible for crack growth, the general case is studied in this work: in real-life problems, the displacements of the crack edges are often a combination of these three fracture modes (mixed mode) and the cracks follow curved paths during their propagation. Many methods are proposed in the literature to compute the SIF of fractured bodies; the most used are the energetic method and the kinematic method.

Computation of stress intensity factor

To measure the SIF, it is possible to use a global approach based on the dissipated energy, or a local approach based on the kinematic method.

Energetic method

It is the divergence of the stress field at the crack tip that motivated (Griffith, 1921) to introduce an energetic approach to fracture mechanics. His approach is based on the computation of the strain energy release rate G.
It is defined as the amount of energy able to create new crack surfaces when the crack grows by an increment da, as shown in figure I.10. From the thermodynamic equilibrium of the structure containing a crack of length a, the conservation of the total energy dW_tot is written as follows:

\[
dW_{tot} = dW_{ela} + dW_{cin} + dW_{ext} + dW_{dis} \qquad (I.7)
\]

where dW_ela is the variation of the elastic deformation energy, dW_cin is the variation of the kinetic energy, dW_ext is the variation of the potential energy of the external forces, and dW_dis is the energy dissipated during the separation of the two lips of the crack, with dW_dis = 2γ da, where γ is the surface energy of decohesion. The variation of the potential energy which accompanies the growth of the crack by an increment da is written as follows:

\[
dE_p = dW_{ela} + dW_{ext} \qquad (I.8)
\]

The behavior of cracks is thus characterized by the transfer of the potential energy E_p of the structure into decohesion energy in the vicinity of the tip:

\[
G = -\frac{dE_p}{da} \qquad (I.9)
\]

If dW_cin > 0 then G > 2γ, which results in an unstable crack propagation. Indeed, the surface decohesion energy dW_dis = 2γ da is used to break the molecular bonds in the material, and the excess (G − 2γ) da is transformed into kinetic energy, which induces an unstable propagation of the crack. The strain energy release rate can also be directly linked to the stress intensity factors K_I, K_II and K_III, as follows (a small numerical illustration is given at the end of this subsection):

\[
G = \frac{K_I^2 + K_{II}^2}{E'} + \frac{K_{III}^2}{2\mu} \qquad (I.10)
\]

where E' = E for plane stress and E' = E/(1 − ν²) for plane strain, E and μ are the Young's and shear moduli of the material, and ν is the Poisson's ratio. Contour integrals are tools which make it possible to characterize the singularity of the stress field in the vicinity of the crack tip. These tools are obtained by a development based on the conservation of energy. They have the particularity of being equivalent to the strain energy release rate G, and they are independent of the integration contour. Indeed, the strain energy release rate can be computed using the path-independent integral introduced by (Rice, 1968). The J-integral is defined as a contour integral along a path Γ around the crack tip,

\[
J = \int_{\Gamma}\left(w_e\, dy - \sigma_{ij} n_j \frac{\partial u_i}{\partial x}\, ds\right) \qquad (I.11)
\]

where w_e = ∫ σ_ij dε_ij is the strain energy density, σ_ij and u_i are the stress and displacement fields respectively, and n_j is the outward normal to the contour (the crack opening displacement is defined as the total separation distance between the upper and lower crack surfaces at the tip due to the singularity). The J-integral has been adapted to evaluate the strain energy release rate for nonlinear materials, whose behavior is then considered to be nonlinear elastic. In fact, Rice's idea was to consider the material not as elastoplastic but as nonlinear elastic: he considered that the cause of the energy dissipation is not only the separation of the crack lips, as in the elastic case, but also the phenomenon of plasticity. As can be seen in figure I.11, both types of behavior are identical if no unloading is applied. Hence, the J-integral expression remains valid and independent of the path Γ if there is no unloading. When the crack propagates, the no-unloading hypothesis is not verified behind the tip; the hypothesis remains reasonable, but the path independence is then not guaranteed.
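The promised numerical illustration of equation (I.10) follows: a small helper that returns G from the three SIF under either plane-stress or plane-strain assumptions (the input values below are arbitrary illustrative numbers).

def energy_release_rate(KI, KII, KIII, E, nu, plane_strain=True):
    """Strain energy release rate from the SIF, equation (I.10):
    G = (KI**2 + KII**2) / E' + KIII**2 / (2 * mu)."""
    E_prime = E / (1.0 - nu**2) if plane_strain else E
    mu = E / (2.0 * (1.0 + nu))           # shear modulus
    return (KI**2 + KII**2) / E_prime + KIII**2 / (2.0 * mu)

# Example: pure mode I with steel-like constants (illustrative units).
print(energy_release_rate(KI=30.0, KII=0.0, KIII=0.0, E=210_000.0, nu=0.3))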
The strain energy release rate alone constitutes a criterion of crack propagation but does not allow the determination of the direction of crack propagation. To determine this direction, more information is needed, and the crack loading can provide it through the SIF, for example.

Kinematic method

This approach (Zhang, 1992) aims to compute the SIF from the relative displacement of the crack surfaces, as depicted in figure I.12. Here, the SIF are proportional to the displacements u_x and u_y of the crack surfaces. For a two-dimensional problem, the SIF K_I and K_II, corresponding to the opening and in-plane shear fracture modes respectively, can be written:

\[
K_I = \frac{2\mu}{\kappa + 1}\sqrt{\frac{2\pi}{r}}\, u_y \qquad (I.12)
\]
\[
K_{II} = \frac{2\mu}{\kappa + 1}\sqrt{\frac{2\pi}{r}}\, u_x \qquad (I.13)
\]

where r is the distance behind the crack tip at which the crack-face displacements are evaluated and κ is the Kolosov constant. This approach is very simple to implement in a numerical solver, but its accuracy is closely related to the number of points used to interpolate the displacement field: the more interpolation points are used, the greater the precision. Hence, in the case of the finite element method, this approach is not always suitable, since the resulting model can be time consuming.

Comparative study

In order to compare the two methods of computation of the SIF discussed above, and to choose which one is the most adequate for the following developments, we propose to confront them on an example. Let us consider a Compact Tension (CT) specimen, as usually used in fatigue crack growth testing, subjected to a tension loading ΔP = 2.5 kips applied at the top and bottom pins. The required geometry parameters are given in figure I.13. For this simple geometry and loading conditions, an analytical solution (Tada et al., 1973) is available for the SIF K_I associated with the opening fracture mode (a short script evaluating it is given at the end of this subsection):

\[
K_I = \frac{\Delta P}{B\sqrt{w}}\,\frac{2 + \frac{a}{w}}{\left(1 - \frac{a}{w}\right)^{3/2}}\left[0.886 + 4.64\frac{a}{w} - 13.32\left(\frac{a}{w}\right)^2 + 14.72\left(\frac{a}{w}\right)^3 - 5.6\left(\frac{a}{w}\right)^4\right] \qquad (I.14)
\]

where B is the specimen thickness, w its width and a the crack length. As can be seen from the finite element results, the linear elastic fracture behavior hypothesis is verified, since the yielding (plastic strain) zone is confined to the crack tip. In addition, based on the deformed shape of the crack edges, it is clear that the crack propagates in the opening fracture mode. Table I.1 compares the numerical estimates of the SIF K_I given by the energetic and kinematic methods with the analytical reference solution given by equation (I.14). As stated before, the accuracy of the energetic method is not very sensitive to the integration path, defined as the number of layers of finite elements around the crack tip used in the evaluation of the integral (I.9). The accuracy of the kinematic method depends on the mesh refinement level around the crack tip: the smaller the mesh size, the better the accuracy of the estimate of the stress intensity factor K_I. Both methods give accurate results, since the relative error does not exceed 0.9% in the worst case, but the energetic method is more accurate than the kinematic method. We therefore choose the energetic method for the following studies.
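For reference, the analytical solution (I.14) is easy to script; the sketch below evaluates it for the CT specimen, assuming the standard Tada/ASTM polynomial form reconstructed above, and with assumed values for the thickness B and width w, since figure I.13 is not reproduced here.

import math

def K_I_compact_tension(P, B, w, a):
    """Mode-I SIF of a CT specimen, standard Tada/ASTM expression
    (assumed here to be the form behind equation (I.14))."""
    alpha = a / w
    f = ((2 + alpha) / (1 - alpha) ** 1.5) * (
        0.886 + 4.64 * alpha - 13.32 * alpha**2
        + 14.72 * alpha**3 - 5.6 * alpha**4
    )
    return P / (B * math.sqrt(w)) * f

# Hypothetical geometry, load as in the comparative study (kips, inch).
print(K_I_compact_tension(P=2.5, B=0.5, w=2.0, a=0.8))  # ksi * sqrt(inch)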
Crack bifurcation criteria

In the case of crack propagation in a pure mode (mode I, II or III) taken separately, failure occurs when the value of the SIF reaches a critical value representing an intrinsic characteristic of the material, and the direction of propagation is perpendicular to the direction of the applied load. This scenario is rarely encountered in practice, where cracks, influenced by the geometric conditions and by the loading, tend to propagate in mixed mode; in other words, cracks follow curved paths during their propagation. Consequently, it is necessary to determine the condition of failure initiation and the direction taken by the crack within each increment of its propagation. To simulate crack propagation based on fracture mechanics, a bifurcation criterion must be selected to define the direction of the crack when propagating under mixed-mode loading, depending on the loading conditions and the type of fracture. To define the bifurcation angle of the crack during propagation, there exist mainly three criteria: the maximum circumferential stress, the maximum energy release rate and the maximum strain energy density.

Maximum circumferential stress

The crack path can be simulated based on the criterion initially proposed by (Erdogan et al., 1963): a simple and intuitive criterion based on the maximum circumferential stress, also known as the tangential stress σ_θθ (Chang et al., 2006). This criterion postulates that the crack tends to propagate in the direction that maximizes the mode I contribution; thus, a crack propagates in the direction where the tangential stress ahead of the crack tip is maximum, as shown in figure I.15. It is established on the idea that crack propagation occurs in the plane of maximum normal stress. Thus, in the polar system (r, θ) centered at the crack tip, the propagation occurs in the direction θ_0 for which σ_θθ is maximum at fixed radius. The components of the near-tip stress field can be expressed in the polar system (r, θ) as:

\[
\sigma_{rr} = \frac{1}{2\sqrt{2\pi r}}\left[K_I(3 - \cos\theta)\cos\frac{\theta}{2} + K_{II}(3\cos\theta - 1)\sin\frac{\theta}{2}\right]
\]
\[
\sigma_{\theta\theta} = \frac{1}{2\sqrt{2\pi r}}\cos\frac{\theta}{2}\left[K_I(1 + \cos\theta) - 3K_{II}\sin\theta\right]
\]
\[
\sigma_{r\theta} = \frac{1}{2\sqrt{2\pi r}}\cos\frac{\theta}{2}\left[K_I\sin\theta + K_{II}(3\cos\theta - 1)\right]
\]

where r is the distance from the crack tip to the given point P. If the maximum circumferential stress is a principal stress, the shear stress σ_rθ is zero; therefore, θ_0 can also be obtained as the solution of the equation σ_rθ(θ_0) = 0, i.e. K_I sin θ_0 + K_II(3 cos θ_0 − 1) = 0 (a closed-form implementation of this condition is sketched at the end of this subsection). Some authors have shown good agreement with this criterion (Gdoutos, 1984), (Chambers et al., 1991); others have found it insufficient (Smith et al., 1985), (Royer, 1986), (Hourlier et al., 1978). Nevertheless, this criterion, known since 1978, is the most efficient and the most used in the literature (Bathias et al., 1997).

Maximum energy release rate

The maximum energy release rate criterion, also proposed by (Erdogan et al., 1963), states that the crack propagates in the direction that maximizes the strain energy release rate G.

Maximum strain energy density

A criterion based on the local density of the total energy S at the crack tip (Westergaard, 1939) was proposed by (Sih, 1974), (Sih, 1991): the crack propagates in the direction that minimizes the total strain energy density. As shown in figure I.17, for a crack making an angle β with the axis of the load and then propagating at an angle θ, the solution proposed by (Tanaka, 1974) is the root θ_0 of:

\[
2(1 - 2\nu)\sin(\theta_0 - 2\beta) - 2\sin\left(2(\theta_0 - \beta)\right) - 2\sin 2\theta_0 = 0 \qquad (I.22)
\]

(Bathias et al., 1997) give a comparison of the three bifurcation criteria and conclude that the best agreement with experiments is obtained with the maximum circumferential stress criterion.
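For the maximum circumferential stress criterion, the condition σ_rθ(θ_0) = 0 admits a closed-form root; the sketch below implements it. This is the classical MTS kink-angle formula, given here as an illustration rather than as the thesis implementation.

import math

def mts_bifurcation_angle(KI, KII):
    """Crack kink angle (radians) from the maximum circumferential stress
    criterion: root of K_I*sin(t0) + K_II*(3*cos(t0) - 1) = 0, written
    with t = tan(t0/2)."""
    if KII == 0.0:
        return 0.0                        # pure mode I: straight growth
    t = (KI - math.sqrt(KI**2 + 8.0 * KII**2)) / (4.0 * KII)
    return 2.0 * math.atan(t)

# Pure mode II yields the classical kink angle of about -70.5 degrees.
print(math.degrees(mts_bifurcation_angle(0.0, 1.0)))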
The most widely used fatigue crack growth model is the Paris-Erdogan law (Paris and Erdogan, 1963). It assumes a linear relationship, in logarithmic scale, between the fatigue crack growth rate (FCGR) da/dN and the range of the SIF, ΔK:

\[
\frac{da}{dN} = C(\Delta K)^m \qquad (I.23)
\]

where a is the crack length, N is the number of loading cycles, C and m are two material-dependent parameters and ΔK is the variation of the stress intensity factor. Since this model considers only region II of figure I.18 and does not take high toughness into consideration, modifications of this law were proposed to account for additional parameters such as the load ratio R, the closure phenomenon (Elber, 1971) or the maximum stress during a cycle. The reader may refer to (Beden et al., 2009) for an exhaustive list of the proposed models. For ΔK slightly greater than a threshold ΔK_th, the crack starts to grow, and for a medium SIF range ΔK the fatigue crack growth behavior can be described by a power law such as the Paris-Erdogan law given in equation (I.23). In 1970, the Paris-Erdogan model was slightly modified (Walker, 1970) in order to take into account the strong effect of the stress ratio R = σ_min/σ_max. The modified model is:

\[
\frac{da}{dN} = \frac{C_1(\Delta K)^{m_1}}{(1 - R)^{m_1(1 - \gamma)}} \qquad (I.24)
\]

In region III, the FCGR is faster than indicated by the Paris-Erdogan law: the fatigue crack growth rate increases rapidly towards infinity. Note that, in this region, the fracture toughness K_Ic of the material has a significant effect on the FCGR, in addition to the stress range. To take it into account, the Paris-Erdogan model was enhanced by (Forman et al., 1967); the modified model can be written as follows:

\[
\frac{da}{dN} = \frac{C_2(\Delta K)^{m_2}}{(1 - R)K_{Ic} - \Delta K} \qquad (I.25)
\]

The constants C_1, m_1, γ, C_2 and m_2 presented above are derived empirically from experimental data. Note that the previously presented fatigue crack growth models do not take into account many other parameters, such as the load frequency, environmental factors such as relative humidity and temperature, and load history effects. Linear fracture mechanics defines a stress intensity factor K at the tip of a crack as a function of the applied stress σ and the dimension of the crack:

\[
K = \alpha\sigma\sqrt{\pi a} \qquad (I.28)
\]

where α is a coefficient depending on the geometry of the structure and the crack length. The failure criterion is then a stress criterion, and failure occurs when the SIF reaches a critical value K_Ic. The fatigue crack growth lifetime is obtained by integrating the reciprocal of the FCGR over the crack length range, as in the life integral (I.27). The computation of this integral is not trivial, especially when the integrand is not available in analytical form, which is often the case for real-life crack growth problems, where the fracture behavior is represented by time-consuming implicit models such as finite-element-based ones. Hence, numerical schemes such as cubature rules are usually suitable. (Dowling, 2007) has suggested using the well-known Simpson integration rule: the area under the curve y = [da/dN]^{-1} (i.e. the inverse of the FCGR) between the two points of abscissa a_j and a_{j+2} can be evaluated by assuming that a second-order curve (a parabola) passes through the three points (a_j, y_j), (a_{j+1}, y_{j+1}) and (a_{j+2}, y_{j+2}). If these points are equally spaced by an increment Δa, the corresponding area can be estimated as follows:

\[
\int_{a_j}^{a_{j+2}} y\, da = \frac{\Delta a}{3}\left[y_j + 4y_{j+1} + y_{j+2}\right] \qquad (I.29)
\]

The fatigue crack growth lifetime N_f is then the sum of the contributions of all areas obtained by applying equation (I.29) for each j = 0, 2, 4, ..., (M − 2), where M is even. In practice, M is taken as large as possible to keep Δa reasonably small, in order to obtain an accurate estimate of N_f. Note that Simpson's rule can also be used when the integration points are not equally spaced (i.e., Δa is not constant during the whole integration process), through a small modification of equation (I.29). The fatigue lifetime is then obtained by:

\[
N_f = \sum_{j = 0, 2, \ldots}^{M-2} \frac{\Delta a_j\,(r + 1)}{6r}\left[y_j\, r(2 - r) + y_{j+1}(1 + r)^2 + y_{j+2}(2r - 1)\right] \qquad (I.30)
\]

where Δa_j = a_{j+1} − a_j and r = (a_{j+2} − a_{j+1})/(a_{j+1} − a_j) is the ratio of two successive crack increments (for r = 1, equation (I.30) reduces to (I.29)).
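The following sketch assembles these pieces: a Paris-Erdogan integrand with the simple geometry factor α = 1 in (I.28), integrated by the composite Simpson rule (I.29) over equally spaced crack increments (all numerical values are illustrative assumptions).

import math

# Illustrative parameters (hypothetical; ksi and inch units as in the text).
C, m = 1.0e-9, 3.0          # Paris-Erdogan constants
dsigma = 10.0               # stress range
a0, ac = 0.1, 1.0           # initial and critical crack lengths
M = 200                     # even number of increments

def y(a):
    """Integrand of the life integral: 1 / (da/dN) = 1 / (C * dK**m),
    here with the simple geometry factor alpha = 1 in (I.28)."""
    dK = dsigma * math.sqrt(math.pi * a)
    return 1.0 / (C * dK**m)

# Composite Simpson rule (I.29) over equally spaced crack increments.
da = (ac - a0) / M
Nf = sum(
    da / 3.0 * (y(a0 + j * da) + 4.0 * y(a0 + (j + 1) * da)
                + y(a0 + (j + 2) * da))
    for j in range(0, M, 2)
)
print(f"fatigue crack growth lifetime: {Nf:.3e} cycles")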
A retardation phenomenon due to overload

The application of one or more overloads during constant amplitude loading is characterized by a retardation, or even an arrest, of the crack propagation after returning to the initial loading conditions (Schijve et al., 2004), (Manjunatha et al., 2004), (Daneshpour et al., 2012), (Dirik et al., 2018), (Hemnesi et al., 2022). The retardation effect depends on several factors such as geometry, temperature, environment, and material properties. Although this phenomenon was discovered many years ago by (Schijve, 1962), its effects are not fully understood and described, especially in terms of modelling. It is known that overloads induce large plastic deformations ahead of the crack tip and decrease the crack propagation rate. This retardation is usually measured in terms of cycles, and thus increases the lifetime from N_1 to N_2, as can be seen in figure I.20. The retardation effect depends on the overload ratio, on the value of the baseline loading before the application of the overload, on the number of cycles before the overload, and on the ratio of the baseline loading. Indeed, it is noticed that for different load ratios the maximum recorded crack growth rate differs, and this difference has an impact on the size of the plastic zone created by the application of the overload, and consequently on the number of retardation cycles (Zhang, 2019), (Alabd Alhafez, 2018). However, the retardation effect was found to decrease with an increasing number of overload cycles. Several researchers have sought to determine the residual stresses at the fatigue crack tip after the application of an overload, such as (Lin et al., 2017), (Rice, 1967), (Matsuoka et al., 1976), (Taira et al., 1979), (Fuhring et al., 1979), (Bush et al., 1988). They concluded that, as the distance from the point of application of the overload increases, the residual stresses tend to decrease gradually. Note that most of these studies were based on the model of (Dugdale, 1960), which considers the material as rigid, perfectly plastic, with the plastic zone confined to the crack tip (see figure I.21). The retardation effect can also be viewed as a consequence of the closure concept, which results from the existence of compressive residual stresses at the crack tip. It was first shown by (Elber, 1971) on an aluminum alloy, who explained that the fatigue crack can close even before the tensile stresses become zero. However, damage occurs only when the crack tip is completely open; this phenomenon can thus be invoked to explain the retardation effects (Lieurade, 1988) and the influence of certain important fatigue crack growth parameters, such as the maximum stress intensity, the load ratio, the thickness of the specimen, and the overloads. Crack closure is far more important near the edge of the specimen, thus reducing the crack growth rate there (Taleba et al., 2016). It has also been observed by many researchers that a short acceleration of the crack growth rate occurs just after the application of the overload: overloads can produce a very short acceleration of the crack growth (Vasudevan et al., 1995) before the significant retardation occurs (see figure I.22).
However, this acceleration is observable only for a high overload ratio and at a short distance from the application point of the overload (a short crack length compared to that of the retardation effect); it can therefore be neglected, being too small to be taken into account. For instance, (Wheatley et al., 1999) have shown that the crack length over which the acceleration is observable is about 300 µm, while the crack length affected by retardation is of the order of 10 mm.

Figure I.22. Acceleration and retardation after the application of an overload

The crack growth rate after applying the overload goes through four stages: first, an increase of the crack growth rate; second, the rate increases very rapidly and reaches a peak; then a rapid drop of the rate is observed, down to a minimum value; and finally the rate increases gradually within the plastic zone created by the application of the overload, until it returns almost to its initial value (before the application of the overload). In our studies, we choose not to consider the acceleration phenomenon occurring just after the overload, shown in figure I.20, because it is negligible in comparison with the retardation induced by the overload.

Two fatigue plastic zones due to overload

The retardation persists until the crack has propagated out of the monotonic plastic zone of the overload; therefore, the number of retardation cycles depends on the size of this monotonic overload plastic zone (see figure I.23). Moreover, the retardation effect depends on the thickness of the specimen, since the size of the plastic zone differs between plane-stress conditions (used for thin specimens, where the out-of-plane stresses are assumed to be zero) and plane-strain conditions (used for thick specimens, where the out-of-plane strains are assumed to be zero). The retardation effects are more important under plane-stress conditions (Lang et al., 1999), because the stress intensity factor is affected by the distance from the center of the crack: at the center of the specimen, where plane-strain conditions apply, the constraint is high, while at the surface of the specimen, where the plane-stress assumption holds, the absence of out-of-plane stress results in a loss of crack-tip constraint and the stress intensity factor is lower. In general, fatigue crack growth can be controlled by the plastic zone. During the loading, two plastic zones are created: the cyclic plastic zone and the monotonic plastic zone, related to the loading of the structure. These plastic zones are assimilated to circles characterized by their radius r_y, as shown in figure I.23. When a structure is subjected to cyclic loading, a monotonic plastic zone is formed at the crack tip. Then a compressive stress develops in the plastic zone when applying an overload, leading to the creation of the cyclic plastic zone in the areas where the maximum compressive stress exceeds the yield strength (Saxena et al., 1996). The size of the plastic zone at the crack tip is one of the important parameters describing the retardation effects, since it is directly related to the crack length affected by the overload. This size is a significant characteristic of the crack behavior and can be observed directly during experiments.
It depends on the mechanical properties of the material, on the stress conditions (applied stress and yield stress of the material), and on the distribution of stress and strain within the plastic zone. In fact, materials with a high yield stress normally have a small cyclic plastic zone size (Dowling, 2013), (Ralph et al., 2001). In real materials, the theoretically very high elastic stresses in the vicinity of a crack tip exceed the yield strength of the material; thus, plastic yielding occurs.

Figure I.23. Illustration of the plastic zone at the crack tip

Irwin presented a simple method to determine the plastic zone at the crack tip, assuming the material to be elastic. He found that the created cyclic plastic zone affects the geometry when it is longer than its physical size, and he estimated the size of the cyclic plastic zone, which is approximately one quarter of the size of the monotonic plastic zone, as (Irwin, 1960):

\[
r_y = \frac{1}{8\pi}\left(\frac{\Delta K}{\sigma_y}\right)^2 \quad \text{for plane stress} \qquad (I.31a)
\]
\[
r_y = \frac{1}{24\pi}\left(\frac{\Delta K}{\sigma_y}\right)^2 \quad \text{for plane strain} \qquad (I.31b)
\]

Case study

In this section, the effect of the stress ratio R is studied and the crack propagation laws are compared using experimental fatigue crack growth data (Hudson's data) obtained on specimens loaded with a mean stress σ_m and an alternating stress σ_a. The cycle count was recorded as the crack propagated through the specimen, with a crack length increment ranging from Δa = 0.1 to 0.2 inch. It should be noted that tests were performed for several stress ratio levels R, with 1 to 5 tests per stress ratio level, based on different combinations of the mean stress σ_m and the alternating stress σ_a. In the following, only experimental data with R ≥ 0 are analyzed, since Hudson found that, for R < 0, the same crack growth rate is observed as for the specimens loaded with R = 0.

Finite elements model

A finite elements model of the CCP specimen is developed using the software cast3m (CASTEM, 1997). As can be seen from the fitted crack growth curves, Walker's law fits the experimental data very well, since the goodness-of-fit parameter R²_LRGF is close to 1. It allows the effect of the stress ratio to be taken into account through a linear correction, compared to the Paris-Erdogan law.

Deterministic analysis of fatigue crack growth

In this section, a fatigue crack growth lifetime analysis is performed based on Walker's and Forman's laws. The Paris-Erdogan law is not used, since it does not consider the effect of the stress level. The number of loading cycles to failure N_f is obtained through an incremental integration scheme based on the modified Simpson formula presented in equation (I.30). Note that the integration is performed over the crack length range from a_0 to a_c, the initial and critical crack lengths, respectively. The latter depends on the fracture toughness of the material, K_Ic = 72 ksi√inch, and is computed by solving the following nonlinear equation:

\[
a_c = \frac{1}{\pi}\left(\frac{K_{Ic}}{\dfrac{1 + 0.5\left(\frac{a_c}{w}\right) + 0.326\left(\frac{a_c}{w}\right)^2}{\sqrt{1 - \frac{a_c}{w}}}\;\sigma_{max}}\right)^2 \qquad (I.32)
\]

where σ_max is the maximum stress of the loading cycle and w is the CCP specimen half-width.
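Equation (I.32) is implicit in a_c and can be solved, for instance, by fixed-point iteration, as sketched below; K_Ic is the value quoted above, while σ_max and w are assumed illustrative values (the actual case-study values come from Hudson's tests and from figure/table data not reproduced here).

import math

# K_Ic as quoted above; sigma_max and w are assumed illustrative values.
K_Ic, sigma_max, w = 72.0, 40.0, 2.25     # ksi, ksi, inch

def alpha(ratio):
    """Finite-width correction of the CCP specimen appearing in (I.32)."""
    return (1.0 + 0.5 * ratio + 0.326 * ratio**2) / math.sqrt(1.0 - ratio)

# Fixed-point iteration on the implicit equation (I.32).
a_c = 0.5                                  # starting guess (inch)
for _ in range(100):
    a_new = (K_Ic / (alpha(a_c / w) * sigma_max)) ** 2 / math.pi
    if abs(a_new - a_c) < 1e-10:
        break
    a_c = a_new
print(f"critical crack length a_c = {a_c:.4f} inch")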
Table I.5 compares experimental and numerical results for the fatigue crack growth lifetimes N_f and the critical crack lengths a_c under different loading conditions. As can be seen, the numerical estimates of the critical crack length are twice as large as the experimental results. This discrepancy is mainly due, on the one hand, to a possibly overestimated value of the crack toughness of the material K_Ic and, on the other hand, to the coarse crack growth increment Δa used in the experimental process, which is not able to capture accurately the crack length at which fracture occurs. Indeed, according to the ASTM standard, the crack increment should be around 0.05 inch, while the one used for Hudson's data ranged from 0.1 inch to 0.2 inch. A discrepancy is also observed between the predictions given by Walker's and Forman's laws and the experimental data. This discrepancy is most significant for the stress ratio R = 0, where it is around 26% and 21% for Walker's and Forman's laws, respectively. More accurate predictions are obtained for the three other loading conditions, since the relative error does not exceed 16% in the worst case. This discrepancy should not be fully attributed to the accuracy of the mathematical formulations of the FCG laws, but also to the limited amount of experimental data available, since only two tests were performed for each loading condition and only averaged results were provided in Hudson's paper. Indeed, as shown first by (Virkler et al., 1979), and later by (Ghonem et al., 1987) and (Wu et al., 2004), from fatigue crack growth tests performed on 68, 60 and 30 specimens respectively, a large variability in growth rates is observed for the same material, geometry and loading condition. This issue will be discussed in depth in the following section.

Probabilistic fatigue crack growth

Interaction between uncertainties and fatigue failure

Despite the huge amount of research devoted to fatigue of materials, which has certainly contributed to a better understanding of the physics related to this failure mode, many phenomena are still misunderstood and must be studied in depth, in particular the close relationship between uncertainties and fatigue of materials, which affects the safety and reliability of structures (Niu et al., 2021), (Li et al., 2020), (Liao et al. (1), 2020). Understanding the physics of the fatigue failure phenomenon and developing predictive models with a certain level of realism remain challenging tasks, despite all the efforts and work done so far. Unfortunately, the difficulty of developing realistic predictive models is amplified if we want to take into account the uncertainties, which are inherent characteristics of fatigue properties observed experimentally (Liu et al., 2022), (Ciavarella et al., 2018), (Romano et al., 2018). Statistical information concerning fatigue properties is most often collected from the results of tests carried out in the laboratory, and not from experiments carried out under service conditions. Thus, different sources of dispersion must be taken into account in each situation, since observations made in the laboratory may not be valid in practice. For example, the dispersions observed in the laboratory are obtained from fatigue tests under constant amplitude loading, whereas, in practice, the loadings that occur are of variable amplitude, and even random. Moreover, as already explained, the fatigue life is made up of an initiation period and a propagation period, and in each of these two periods different damage mechanisms can occur.
Consequently, the sources of dispersion are different, which makes their control complex. In cases where initiation takes time to appear, the dispersion observed in the fatigue life is necessarily related to uncertainties in the parameters contributing to crack initiation, such as the surface condition. On the other hand, if the propagation period constitutes the major part of the fatigue life, the observed dispersion is a consequence of the uncertainties in the parameters dominating the increase of the crack length. This second situation is frequently encountered in fatigue tests carried out on notched specimens, and it also seems to be the case for real structures, for which the presence of defects is almost inevitable. Moreover, during their propagation, cracks are faced with different types of metallurgical structures and imperfections due to the inhomogeneity of the materials, so that both the rate and the direction of propagation of the crack are variable. For many years now, several research works have pointed out this issue. Several studies explaining the probabilistic nature of fatigue crack growth have discussed the necessity of considering multiple sources of uncertainty in fatigue reliability analyses, such as (Fei and al, 2020). However, the variability of fatigue is not induced only by the heterogeneity of the material; it can also be the result of uncertainties in the geometry parameters and the loading conditions. Thus, the uncertainties depend on the structure, in terms of material, geometry and loading, but also on the crack, through its size, shape and position. It is clear that to obtain a safe design, these sources of uncertainty should be taken into account (Oden and al, 2003). In engineering problems, we are most often required to develop computationally time-consuming numerical models to simulate the behavior of real structures. In addition, the sources of uncertainty are multiple, and since we do not have prior information on each of them, we are forced to take them all into account, which generates a high number of uncertain parameters. The deterministic models, presented in the above sections and used in fatigue life prediction, are unable to take these uncertainties in the fatigue crack propagation phenomenon into consideration. Probabilistic approaches could address this issue, but unfortunately they still suffer from some limitations in solving real-life engineering problems. Indeed, some of them are purely statistical (Bogdanoff and al, 1985) and subject to criticism, since they are not able to describe the physical phenomena related to fatigue crack growth. Other approaches, mainly based on the "probabilization" of the fatigue crack growth rate (Wu and al, 2004), are time-consuming, because their efficiency is affected when the probabilistic dimension (i.e., the number of random variables representing the uncertain parameters) of the problem is high, and/or the mechanical model representing the fatigue crack growth is itself time-consuming. The present work aims to find a response to these problems, that is, to develop a robust, efficient and accurate probabilistic approach allowing us to perform uncertainty propagation through mechanical models dealing with fatigue crack growth problems. In addition, this approach must be able to take into account various sources of uncertainty, related to the geometry, the material and the loading, and to evaluate their effect on the lifetime and the reliability of structures subject to fatigue crack growth.
At the same time, different types of uncertainty propagation analysis will be addressed, namely statistical moments analysis and sensitivity analysis.

Probabilistic models of fatigue crack propagation

As already mentioned, it is well known that the crack propagation process contains various uncertainties (He and al, 2015). As a result, even in repeated tests, the fatigue crack growth (FCG) process shows considerable uncertainty (Zhu and al, 2020). Thus, probabilistic FCG modeling is vital for fatigue reliability and durability analyses of engineering components. To take into consideration the scatter observed in the data, many authors have been interested in probabilistic models to describe the evolution of crack propagation in fatigue. Probabilistic models thus offer an appropriate framework for modelling and predicting crack propagation. (Song and al, 2019) proposed a probabilistic framework for low-cycle fatigue life assessment based on the wavelet neural network regression method, and to evaluate the probability distribution of fatigue life, (Long and al, 2019) developed an uncertainty propagation approach based on the principle of the fatigue crack growth criterion. Indeed, this context enables the introduction of certain variabilities into the typical deterministic laws to describe FCG under constant or variable amplitude fatigue loading; see for instance (Righiniotis and al, 2003), (Sankararaman and al, 2011), (Xiang and al, 2011), (Wu and al, 2003). From the methodological point of view, these probabilistic models may be separated into two types: one is the physical model, derived from a randomization of the Paris-Erdogan crack propagation law, and the other is the non-physical model.

Physical model based on Paris-Erdogan's law

One approach to probabilistic modeling of fatigue crack growth is to randomize the coefficients of a deterministic model to represent the material inhomogeneity. The model proposed by (Tsurui and al, 1986), (Ishikawa and al, 1984) presents a physical probabilistic modelling of fatigue crack damage in metallic materials found in structures. In this model, it is assumed that the stress at the crack tip is described only by the SIF and is independent of other factors such as the mean stress or the stress ratio. In addition, it is assumed that there is no retardation effect caused by overloads and that, even if it exists, it is negligibly small. Under these conditions, a probabilistic description of the crack growth process is derived. This model therefore seems to be very reasonable for the analysis of crack propagation; however, in their work, there is no application to crack propagation. Another example of the physical model is the one proposed by (Ray and al, 1997) to enhance the computational efficiency of lifetime prediction. This model proposes an algorithm for real-time estimation of the crack damage based on the underlying principle of extended Kalman filtering. In this approach, the first two moments of the probabilistic damage are computed by constructing the probabilistic differential equations in the Wiener form as opposed to the Itô form. Then, the lognormally distributed crack length (LDCL) model was proposed as an improvement of the (Ray and al, 1997) model. The crack growth rate in this model is guaranteed to be non-negative, and it is based on the Karhunen-Loève expansion of the crack length process. This approach provides an additional parameter for tuning the probability distribution function.
The nonlinear characteristics of the eigenfunctions in the Karhunen-Loève expansion provide better accuracy than the linear representation. This approach allows the model to capture certain nonlinear features of the crack growth statistics. Consequently, model predictions are more accurate.

Non-physical model

a) Model based on Markov theory

Markov processes have been proposed to address the probabilistic modelling of fatigue crack growth. The basic idea of this model is to describe the evolution of the crack size during its propagation by a discrete Markov process over time. This model is built on several initial assumptions. It is assumed that the damage increment at the end of each damage cycle depends only on the amount of damage present at the beginning of this damage cycle, regardless of the damage accumulated before the cycle. Thus, the model proposed by (Kozin and al, 1985), known as the B-model, is nothing else than a stationary Markov process, discrete in time and having a finite number of states. This B-model can be described in the following terms. Let us consider a random variable D_0, representing the damage present in the structure at time t = 0, and let us define the initial statistical distribution of the different levels of damage by the vector p_0, as follows:

\boldsymbol{p_0} = \{\pi_1, \pi_2, \dots, \pi_b\}, \quad \pi_j = P\{D_0 = j\} \geq 0, \; \forall j = 1, \dots, b

which verifies the condition \sum_{j=1}^{b} \pi_j = 1, where b is the number of damage levels. Let us now consider a random variable D_t, representing the damage present in the structure at time t. The distribution of each level of damage is described by the following vector:

\boldsymbol{p_t} = \{p_t(1), p_t(2), \dots, p_t(b)\}, \quad p_t(j) = P\{D_t = j\} \geq 0, \; j = 1, \dots, b \qquad (I.33)

Referring to Markov theory, the vector p_t can be easily computed using:

\boldsymbol{p_t} = \boldsymbol{p_0}[\boldsymbol{P}]^t = \boldsymbol{p_{t-1}}[\boldsymbol{P}]

where p_{t-1} corresponds to the damage distribution at the end of the previous damage cycle and [P] is the transition probability matrix that describes the degree of severity of each damage cycle. The cumulative distribution function of failure is defined by:

F_W(t; b) = p_t(b) \qquad (I.34)

By considering j the level of damage at t = 0 and F_W(t; j, b) the cumulative distribution function of the time required to reach damage level b, we write:

F_W(t; b) = \sum_{j=1}^{b-1} \pi_j\, F_W(t; j, b) \qquad (I.35)

In the context of the fatigue crack propagation problem, the damage is interpreted as the crack length and the damage cycle consists of several loading cycles. Thus, based on Markov theory, we can obtain the cumulative distribution function of the number of cycles needed to reach a given crack length. In addition, the reliability and the failure rate can be easily determined. The B-model has been used in different applications dealing with the problem of fatigue crack propagation (Bea and al, 1999), (Lassen and al, 2002).
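A few lines suffice to exercise the B-model. In the sketch below, the transition matrix is purely illustrative (each damage cycle either keeps the damage level or advances it by one), and none of its values come from the document:

```python
import numpy as np

# Minimal sketch of the B-model: a stationary discrete-time Markov chain
# with b damage levels, where level b is absorbing (failure).
b = 5
p = 0.3                                     # per-cycle probability of growing
P = np.zeros((b, b))
for j in range(b - 1):
    P[j, j], P[j, j + 1] = 1.0 - p, p       # stay, or jump one damage level
P[b - 1, b - 1] = 1.0                       # absorbing failure state

p0 = np.zeros(b)
p0[0] = 1.0                                 # all specimens start undamaged

# F_W(t; b) = p_t(b): cumulative distribution of the time to reach level b.
pt = p0.copy()
for t in range(1, 51):
    pt = pt @ P                             # p_t = p_{t-1} [P]
    if t % 10 == 0:
        print(f"t = {t:2d}, F_W(t; b) = {pt[-1]:.4f}")
```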
Among the Markov processes suitable for crack modelling, one may also consider the class of piecewise-deterministic Markov processes (PDMPs), frequently employed in safety and reliability analyses, which can handle both discrete events and the continuous evolution of physical phenomena. (Chiquet and al, 2009) were the first authors to use PDMPs to model fatigue crack growth as a degradation mechanism that continuously evolves in time, with the growth rate changing at random times. PDMPs are able to model crack propagation so as to handle two problems: the first one is to capture the transition time between two regimes of propagation, and the second one is to predict the behavior of a crack until the exit of the linear Paris regime, using the first experimental points of its propagation as conditional events. PDMPs are described by two variables: a usual Euclidean state representing the physical system and a discrete variable reflecting the region of propagation. PDMPs are suitable for modelling and predicting degradation processes induced by the presence of cracks in structural components. Although the Markov chain model has been used in a wide variety of applications, it has been the target of criticism. Indeed, the formulation of the model is based on purely statistical foundations and lacks the physical consistency to describe the actual mechanism of fatigue crack propagation.

b) Yang and Manning model

The simple probabilistic model developed by Yang and Manning (Yang and al, 1996) aims at obtaining two important probability distributions: the crack growth rate distribution and the service life distribution for a specified crack length. This model was developed in order to overcome the difficulties presented by the Markov-theory model described above. In this model (Yang and al, 1990), a deterministic fatigue crack growth equation is developed based on the stress intensity factors describing the failure mechanism and the orientation of the crack trajectory. The deterministic equation, used to obtain a more realistic representation of the fatigue phenomenon, is randomized by assuming that the crack growth rate follows the lognormal distribution. The least squares method is used to estimate the unknown parameters, and a second order approximation is used to derive the two probability distributions. To take into consideration the variability of the fatigue crack propagation process, Yang and Manning weighted the crack propagation law by a random factor. Therefore, the crack propagation law is expressed as follows:

\frac{da(t)}{dt} = X(t)\, f(\Delta K, a, \dots)

where f is a deterministic, positive function given by the Paris law or other crack propagation laws; consequently, the crack propagation law is transformed into a probabilistic differential equation. After investigation of experimental data dealing with crack propagation in aircraft under random loading, Yang and Manning (Yang and al, 1996) suggested writing the equation in a simpler form:

\frac{da(t)}{dt} = X(t)\, Q\, [a(t)]^b \qquad (I.36)

where Q and b are two constants determined from the experimental data and t is an independent variable that can be interpreted as the number of loading cycles. This model has been used especially in the aeronautics industry to conduct damage tolerance and durability analyses (Yang and al, 1996). The random factor X(t) is modelled through a lognormal process Z(t) = ln X(t), which should have zero mean and a standard deviation \sigma_Z = \sqrt{\ln(1 + \sigma_X^2)}.
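The following crude Monte Carlo sketch simulates equation (I.36). As a simplifying assumption, the lognormal process X(t) is reduced to a single lognormal draw per specimen (i.e., fully correlated in time), and Q, b and σ_X are hypothetical values, not fitted constants:

```python
import numpy as np

rng = np.random.default_rng(0)

# Crude Monte Carlo sketch of the Yang-Manning model da(t)/dt = X(t) Q a^b.
Q, b, sigma_X = 5.0e-3, 1.2, 0.3
a0, a_c, dt = 0.1, 1.0, 1.0          # initial/critical crack length, time step

sigma_Z = np.sqrt(np.log(1.0 + sigma_X**2))   # std of Z(t) = ln X(t)
lives = []
for _ in range(2000):
    X = np.exp(rng.normal(0.0, sigma_Z))      # Z has zero mean, per the text
    a, t = a0, 0.0
    while a < a_c:
        a += X * Q * a**b * dt                # explicit Euler step
        t += dt
    lives.append(t)

lives = np.asarray(lives)
print(f"median life = {np.median(lives):.0f} cycles, "
      f"COV = {lives.std() / lives.mean():.2f}")
```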
Although the probabilistic model of Yang and Manning offers a better compromise between physical realism and simplicity, in cases where the propagation process is more complex, such as when the loading is random, the model is unable to provide good predictions. Indeed, if the crack propagation process is complex, the deterministic law of crack growth will have a complex mathematical formulation; therefore, the complex mathematical law weighted by a random factor in the Yang and Manning model will be very difficult to handle computationally. For this reason, a polynomial representation of the crack growth has been proposed.

c) Polynomial model

To obtain a compromise between the realism of the physical meaning of the propagation process and the simplicity of the calculation, the polynomial model (Ni, 2002) has been proposed. The basic idea of the polynomial model is to replace the deterministic propagation law, which is complex if the crack growth process is complex, by a polynomial approximation. By considering a polynomial approximation of second order, the polynomial probabilistic model is written:

\frac{da(t)}{dt} = X(t)\left[p + q\, a(t) + r\, a(t)^2\right] \qquad (I.37)

where p, q and r are the coefficients of the polynomial, determined from the experimental data, which depend on the characteristics of the material as well as on the applied load. X(t) is a lognormal random process like the one used in the probabilistic model of Yang and Manning. Following the same procedure as that of the Yang and Manning model, the quantities of interest can be expressed in an analytical form.

d) Sobczyk model

This model (Sobczyk and al, 1989), (Sobczyk and al, 1991), (Sobczyk and al, 1995) is based on the representation of the propagation of fatigue cracks by a discontinuous probabilistic process, in which the trajectories followed by the crack during its propagation are discretized into a random number of increments, each having a random amplitude. (Frondelius and al, 2022) used the numerical solution schemes of this model and proposed an approach for high-cycle fatigue. Sobczyk's model assumes that, among all the cracks that can coexist, there is one that dominates, and the growth of this crack leads to the failure of the structure. Furthermore, it is assumed that the configuration of this dominant crack is defined through a single parameter, its length A(t), which depends on time. In the case of mixed mode propagation, a relationship must be provided between the length of the crack A(t) and the bifurcation angle θ(t) that defines the orientation of the crack. Let A(t, γ) be the length of the dominant crack at time t, where γ is an elementary event belonging to the space Γ of all possible events on which the probability is defined. For γ ∈ Γ, A(t, γ) represents a possible realization of the random crack propagation process. The probabilistic process A(t, γ) can be represented by a random sum of increments having random amplitudes:

A(t, \gamma) = A_0 + \sum_{i=1}^{N(t)} Y_i(\gamma), \quad Y_i(\gamma) = \Delta A_i \qquad (I.38)

where A_0 is the initial crack size, which can be considered as a deterministic or statistical parameter, \{Y_i(\gamma)\}_{i=1}^{N(t)} is a series of random variables characterizing the magnitude of the increments of the crack length during its propagation, and N(t) is a probabilistic counting process defining the number of increments in the time interval [0, t]. In order to simplify the implementation of the probabilistic model proposed by Sobczyk, the random variables Y_i(γ) are assumed to be independent, identically distributed and positive. The main objective of the probabilistic model of fatigue crack propagation is to construct the crack length distribution and the life distribution.
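The random-sum structure of (I.38) is straightforward to simulate. The sketch below assumes a homogeneous Poisson counting process and exponential increments; both choices, and all numerical values, are illustrative assumptions not imposed by the model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal sketch of Sobczyk's model (I.38): A(t) = A_0 + sum_{i<=N(t)} Y_i,
# with N(t) a homogeneous Poisson process and Y_i i.i.d. positive increments.
A0, a_c = 0.1, 0.6              # initial and critical crack lengths (inch)
lam, mean_Y = 0.02, 0.01        # increment rate per cycle, mean amplitude
t = 2000.0                      # service time (loading cycles)

n_mc = 100_000
N_t = rng.poisson(lam * t, size=n_mc)               # random number of jumps
A_t = A0 + np.array([rng.exponential(mean_Y, n).sum() for n in N_t])

print(f"mean A(t) = {A_t.mean():.3f} inch "
      f"(theory: {A0 + lam * t * mean_Y:.3f})")
print(f"P[A(t) >= a_c] = {(A_t >= a_c).mean():.4f}")
```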
The major advantage of Sobczyk's probabilistic model is that it provides an analytical formulation of the statistical characteristics of the quantities of interest. However, the determination of the parameters of this model, such as the magnitude of the increment of the crack length and the critical size of the crack, requires specific experimental data.

e) Castillo model

The Castillo model (Castillo and al, 2008) works with the dimensionless crack length a/a_c and the dimensionless lifetime N/N_0, where a_c is the critical length of the crack for which failure occurs and N_0 is a number of loading cycles taken as reference. The purpose of this model is to derive the function h(·) that expresses the crack size for a given life as a function of the initial crack length (for N = 0 cycles the crack size is a_0):

\frac{a}{a_c} = h\left(\frac{a_0}{a_c}, \frac{N}{N_0}\right) \qquad (I.39)

The second step is to write the distribution of the current crack length in terms of the distribution of the initial crack length by applying the function h(·). For a structure loaded during N_1 + N_2 cycles, the final crack length can be directly obtained:

\frac{a_N}{a_c} = h\left(\frac{a_0}{a_c}, \frac{N_1 + N_2}{N_0}\right) \qquad (I.40)

Furthermore, the crack length can also be determined in another way, by decomposing the loading time N. It is assumed that the structure containing an initial crack is loaded during a time N_1, leading to the crack length h(a_0/a_c, N_1/N_0). Following this step, it is considered that the resulting structure, in which this crack is present, is loaded during a further time N_2. Thus, the value of the final crack length a_N/a_c can be determined using:

\frac{a_N}{a_c} = h\left(h\left(\frac{a_0}{a_c}, \frac{N_1}{N_0}\right), \frac{N_2}{N_0}\right) \qquad (I.41)

Thus h(a_0/a_c, (N_1 + N_2)/N_0) = h(h(a_0/a_c, N_1/N_0), N_2/N_0) is the translation equation having the function h as unknown, and its solution is:

\frac{a_N}{a_c} = h\left(\frac{a_0}{a_c}, \frac{N}{N_0}\right) = \phi\left(\phi^{-1}\left(\frac{a_0}{a_c}\right) + \frac{N}{N_0}\right) \qquad (I.42)

where φ is an arbitrary invertible function, from which the distribution of the current crack length can be directly obtained from that of the initial crack length:

f_{a/a_c}\left(\frac{a}{a_c}\right) = f_{a_0/a_c}\left(\phi\left(\phi^{-1}\left(\frac{a_N}{a_c}\right) - \frac{N}{N_0}\right)\right) \frac{\phi'\left(\phi^{-1}\left(\frac{a_N}{a_c}\right) - \frac{N}{N_0}\right)}{\phi'\left(\frac{a}{a_c}\right)} \qquad (I.43)

This result justifies the simplicity of the model as well as its flexibility.

f) Madsen model

The dispersion of the fatigue life in the Castillo model is assumed to be induced only by the variability of the initial length of the crack, which is far from reality. In the Madsen model, the damage function is instead defined as a stochastic integral; the construction of the crack length distribution is thus more involved. The basic idea of the model proposed by (Madsen and al, 1986) is to transform the equation of the crack growth rate into a probabilistic differential equation. Using Paris's law defined earlier in equation (I.23), and under a constant loading amplitude, the crack growth rate according to Madsen can be written as:

\frac{da}{dN} = C\, Y(a)^m\, \Delta\sigma^m \left(\sqrt{\pi a}\right)^m \qquad (I.44)

where C and m are parameters depending on the material, Δσ is the amplitude of the loading and Y(a) is a geometric correction function. By integration of equation (I.44), after separation of the variables, the solution of this differential equation is given by:

\Psi(a) = C\, \Delta\sigma^m\, N \qquad (I.45)

where Ψ(a) is an increasing function representing the evolution of the crack length; it is defined by:

\Psi(a) = \int_{a_0}^{a} \frac{dx}{Y(x)^m \left(\sqrt{\pi x}\right)^m} \qquad (I.46)

Contrary to the usual practice, in which both parameters C and m representing the material are treated as random variables, in this probabilistic model the parameter C is assumed to be constant and identical for all specimens, while the parameter m is considered as a random quantity. The distribution of the crack length after a given service life appears complicated to derive analytically. It is thus easier to construct a sample of realizations of the crack propagation curve, by simulating the random process and computing the integral.
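That simulation strategy fits in a few lines. In the sketch below, C is fixed, the exponent m is random, and Y(a) = 1 (infinite plate) is assumed for simplicity; Ψ(a) is computed on a grid and inverted by interpolation, with all numerical values being illustrative only:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

rng = np.random.default_rng(2)

# Sketch of Madsen's model (I.44)-(I.46) with Y(a) = 1.
C, dsig = 1.0e-10, 100.0             # material constant, stress range (MPa)
a0, a_max = 1.0e-3, 0.05             # crack lengths (m)
a_grid = np.linspace(a0, a_max, 2000)

for _ in range(3):                   # three random realizations of m
    m = rng.normal(3.0, 0.1)
    psi = cumulative_trapezoid((np.sqrt(np.pi * a_grid))**(-m), a_grid,
                               initial=0.0)          # equation (I.46)
    N = psi / (C * dsig**m)                          # inverted from (I.45)
    a_at_N = np.interp(5.0e4, N, a_grid)             # crack length at N = 5e4
    print(f"m = {m:.3f}: a(5e4 cycles) = {a_at_N * 1e3:.2f} mm")
```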
This model has been applied to several probabilistic problems dealing with fatigue crack propagation, particularly in the context of reliability analyses. (Casciati and al, 2007) used this model to analyze experimental data obtained from crack propagation tests on compact tension specimens. They observed that the dispersion of the experimental data is low.

Conclusion about the probabilistic models of fatigue crack propagation

The dispersion can have different sources, such as the uncertainties that affect the parameters defining the geometry of the structure, the properties of the material and the applied loading. In this section, we have presented models dealing with the propagation of fatigue cracks in a probabilistic context. These models can be divided into two categories. The basic idea of the first category consists in weighting the deterministic laws governing the crack growth, such as Paris's law, by a random process, which makes it possible to take into account the dispersion related to the properties of the material. The difficulty with this type of model is that it requires relatively significant experimental work to determine the various parameters involved. Besides, if the propagation process is complex, which implies a difficult deterministic mathematical model, the formulation of the probabilistic model will also be difficult to handle. The second class of models consists in representing the probabilistic character of the crack propagation by a Markov chain process. Even if the formulation of these models has been extended to the case of mixed mode propagation, they are strongly criticized because their basis is purely statistical and does not represent the real physics that accompanies the process of crack propagation. Even if these models have been adopted in several applications, especially in failure mode I, where the crack follows the same direction during its propagation, they are far from representing a realistic case, since cracks follow curvilinear and irregular paths. Also, these models do not take into consideration the effect of the retardation induced by the application of overloads. In fact, many engineering structures are frequently subjected to constant amplitude loading with occasional high peak loads, which are called overloads. For instance, due to the constant air current and occasional turbulence during flight, aircraft are always under the influence of this phenomenon. Curiously, as confirmed by many experimental studies (Schijve, 1962), (Schijve and al, 2004), (Manjunatha and al, 2004), (Daneshpour and al, 2012), (Dirik and al, 2018), (Hemnesi and al, 2022), these overloads have a beneficial effect on the fracture behavior, since they retard the fatigue crack growth and can consequently enhance the fatigue lifetime of the structure. This retardation effect is mainly attributed to a secondary plastic strain created around the crack tip. For this reason, it is necessary to study in depth the effect of these overloads on the crack growth behavior. In this context, the probabilistic models presented in this section do not consider the retardation effect.

Conclusion

Fatigue is the most important form of component failure due to cyclic loading.
For the safe and efficient design and evaluation of engineering materials that experience dynamic cyclic loading in service, it is essential to find a prediction method that is efficient in terms of computation time while providing good precision of the computed results. To this end, we have first presented the basics of fracture mechanics, which underlie all the laws used for the fatigue phenomenon, and then focused on understanding the fatigue crack growth phenomenon. Methods used to compute the fracture driving forces, such as the stress intensity factor in linear elastic fracture mechanics, have been presented. Through a comparative study dealing with fatigue crack growth in a CT specimen, we have found that the energetic method based on the path-independent integral is more accurate than the kinematic method. Fatigue crack growth models have also been studied, and their predictions have been compared to experimental data taken from the literature. We have shown that Walker's and Forman's laws are more suitable than the Paris-Erdogan law to fit experimental data. Finally, stochastic models for fatigue crack propagation have been presented, because deterministic approaches cannot capture the random nature of the fatigue phenomenon. The fatigue phenomenon can have different sources of uncertainty that affect the parameters defining the structural geometry, the material properties and the applied loading. Particular attention has been paid to the presentation of models dealing with fatigue crack propagation in a probabilistic context. As mentioned earlier in this chapter, these models can be classified into two categories. In the first, the basic idea consists in weighting the deterministic fatigue crack growth laws, such as the law of Paris, by a random process, which makes it possible to consider the uncertainty of the material properties. The difficulty with this type of model is that it requires relatively substantial experimental work to determine the statistical characteristics of the parameters involved in the probabilistic modelling. Moreover, if the propagation process is complex, which implies a deterministic mathematical model that is difficult to control, the formulation of the probabilistic model will be a non-trivial task. The second class of models consists in representing the stochastic character of crack propagation by a Markov chain based process. Even though the formulation of these models has been extended to the case of mixed-mode propagation, they are highly criticized because their basis is purely statistical and does not represent the true physics of crack propagation. Although most of these models have been adopted in several applications, most deal with failure in mode I, for which the crack follows the same direction during its propagation, whereas in practice cracks follow curvilinear and irregular paths. As an alternative to these probabilistic fatigue crack propagation models, we can mention other, more general approaches called probabilistic finite element methods. These approaches, developed during the last two decades, aim to take uncertainties into account in mechanical computations, and more precisely in finite element modeling. Unfortunately, they are inefficient when the probabilistic dimension is high and the mechanical model itself is computationally demanding, as is the case for problems dealing with fatigue crack growth. This is why developing enhanced alternatives to these approaches will be the subject of this thesis.
Chapter II: Identification of efficient cubature schemes for the computation of multidimensional integrals

Introduction

In the previous chapter, we have highlighted that fatigue crack growth is a random process, mainly due to the uncertainties observed in the mechanical properties of the materials, in the applied loading, as well as in the parameters defining the geometry of the structure. As we have seen, a wide variety of models have been proposed in the literature to consider these various sources of uncertainty and to assess their effect on the fatigue crack growth life, but unfortunately they encounter some difficulties, especially when dealing with engineering problems. To overcome these limitations, uncertainty propagation methods seem to be the best alternative. These methods have been developed over the past forty years and have been successfully applied to various problems in the fields of mechanical and civil engineering. They can be classified into two categories. The first one is the intrusive methods, represented by any scheme that adapts the governing equations of the deterministic model to propagate the effect of the uncertainties on the mechanical responses. They can only be applied to a limited number of problems, where the governing equations are mathematically very simple and the variability of the uncertain parameters is low, which is unfortunately not the case for fatigue crack growth problems. The second category is the non-intrusive methods, where the probabilistic and the mechanical computations are dissociated. Indeed, the variability of the mechanical responses induced by the uncertainty of the input parameters is assessed through a series of runs of the deterministic mechanical model at some points of the random space. The main advantage of the non-intrusive methods is the fact that the mechanical model is considered as a black box. This allows us to benefit from the advanced modeling capacity of numerical tools, such as commercial finite element codes, to deal with a large number of complex mechanical problems, such as fatigue crack growth problems. However, non-intrusive methods still suffer from some limitations, mainly their inefficiency when the number of uncertain parameters is very large. This problem is often referred to as the curse of dimensionality. Despite this limitation, the non-intrusive methods remain a serious candidate to tackle complex mechanical problems such as the fatigue crack growth one. In this context, this thesis aims to develop an accurate and efficient uncertainty propagation method to deal with a large class of fatigue crack growth problems. Hence, a first attempt to reach this objective will be conducted in this chapter by exploring efficient monomial cubature schemes. This chapter contains four main sections. Section 2 reviews the principle of the uncertainty propagation problem through a mechanical model. The mathematical formulation of the three finalities of the uncertainty propagation problem, namely statistical moments analysis, reliability analysis and sensitivity analysis, will also be recalled. Section 3 will be devoted to the presentation of standard methods for the computation of the integral quantities derived from uncertainty propagation analysis. Section 4 reviews the mathematical formulation of some efficient cubature schemes available in the literature.
Section 4 reviews the mathematical formulation of some efficient In engineering problems, the components of the 𝑁-dimensional random variable 𝑿 may have different probability distributions and may also be correlated with each other. Consequently, carrying out probabilistic computations in the physical random space couldn't be a trivial task. For this purpose, we prefer to recast the uncertainty propagation problem in the standard random space, where the 𝑁- dimensional random variable 𝑿 is transformed into a 𝑁-dimensional normal variable 𝑼 = { 𝑈 1 , … , 𝑈 𝑁 } 𝑇 ∈ ℝ 𝑁 with independent components 𝑈 i , 𝑖 ∈ {1, … , 𝑁} following a standard normal distribution 𝜑 𝑈 i (𝑢 i ), 𝑖 ∈ {1, … , 𝑁} with zero mean and unit standard deviation. This can be easily achieved using an isoprobabilistic transform 𝑿 = 𝑇(𝑼), such as the Nataf (Nataf, 1962) or the Rosenblatt transformation (Rosenblatt, 1952). Therefore, the deterministic mapping 𝑓 representing the mechanical model, reads in the standard random space: 𝑦 = 𝑓 ⃘ 𝑇(𝒖) ≡ ℎ(𝒖) (𝐼𝐼. 2) Figure II. 2. Illustration of the isoprobabilistic transformation for the 2-dimensional case Once the uncertainty propagation through the mechanical model is carried out, three kinds of analyses can be performed, as illustrated by figure II.3. Figure II. 3. Classification of uncertainty propagation analysis The first one, called response variability analysis, aims to compute the few first statistical moments 𝑚 𝑌 𝑙 and to construct the probability density function 𝑝 𝑌 (𝑦) of the mechanical response 𝑦. Here, focus is mainly on the neighborhood of the mean value of the random variable 𝑌. The second one, called sensitivity analysis, aims at quantifying the contribution of each uncertain input parameter on the variability of the mechanical response. Here, sensitivity indices derived from partial variances 𝑉 𝑖 1 ,…𝑖 𝑠 are computed. Finally, the third one called, reliability analysis, aims at computing the probability 𝑃 𝑓 that the mechanical system under consideration fails with respect to one or more failure criteria. Here, the tails of the mechanical response distribution 𝑝 𝑌 (𝑦) are of particular interest. Statistical moments analysis The uncertainty of the input parameters 𝒙 = { 𝑥 1 , … , 𝑥 𝑁 } 𝑇 of the mechanical model 𝑓 is represented by an 𝑁dimensional random variable 𝑿 = { 𝑋 1 , … , 𝑋 𝑁 } 𝑇 with prescribed 𝑝 𝑿 (𝒙). Due to the uncertainty propagation, the mechanical response 𝑦 becomes an uncertain quantity. As this later is considered as scalar, for the sake of simplicity as stated previously, the variability of the mechanical response 𝑦 can be described by a random variable 𝑌. To characterize the probabilistic content of 𝑌, it is necessary to compute its statistical moments and construct its probability density function 𝑝 𝑌 (𝑦). The 𝑙 𝑡ℎ statistical moment of the random variable 𝑌, i.e. 𝑚 𝑌 𝑙 , is defined as: 𝑚 𝑌 𝑙 = 𝔼[𝑌 𝑙 ] = ∫ 𝑦 𝑙 𝑝 𝑌 (𝑦) 𝑑𝑦 𝔇 𝑌 = ∫ [𝑓(𝒙)] 𝑙 𝑝 𝑿 (𝒙) 𝑑𝒙 𝔇 𝓧 = ∫ [𝑓 ⃘ 𝑇(𝒖)] 𝑙 𝜑 𝑼 (𝒖)𝑑𝒖 ℝ 𝑁 = ∫ [ℎ(𝒖)] 𝑙 𝜑 𝑼 (𝒖)𝑑𝒖 ℝ 𝑁 (𝐼𝐼. 3) where 𝔼[. ] denotes the mathematical expectation. It is clear from equation (II.3) that the estimation of an 𝑙 𝑡ℎ order statistical moment 𝑚 𝑌 𝑙 requires solving a tough mathematical problem, which is the computation of 𝑁-dimensional integrals. 
Indeed, for engineering problems, it is difficult to obtain a closed-form solution of these integrals, because the mechanical model is often given by a time-consuming implicit representation, which requires the use of numerical integration schemes; these unfortunately suffer from inefficiency when dealing with high-dimensional problems (i.e., when the number N of uncertain parameters is very high). Once the statistical moments are available, the probability density function p_Y(y) can be approximated by a parametric distribution, for instance:

p_Y(y) \approx \frac{\alpha}{y - \gamma} \exp\left[-\frac{1}{r\beta^r}\left|\ln\left(\frac{y - \gamma}{\Delta}\right)\right|^r\right], \quad \gamma < y < \infty \qquad (II.8)

where \alpha = 1 / \left(r^{1/r}\, \beta\, \Gamma(1 + 1/r)\right), with Γ(.) the gamma function, β and r shape parameters, and γ and Δ location and scale parameters, respectively. The probability density function p_Y(y) can be constructed in another way, using a kernel smoothing technique (Wand and Jones, 1995), through the following approximation, which requires a sample set of the mechanical response {y_i}_{i=1}^{N}:

p_Y(y) \approx \frac{1}{N \Delta_K} \sum_{i=1}^{N} K\left(\frac{y - y_i}{\Delta_K}\right) \qquad (II.9)

where K(.) is a positive function named the kernel and Δ_K is the bandwidth parameter.
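A direct transcription of (II.9) with a Gaussian kernel is shown below; Silverman's rule for the bandwidth and the lognormal toy sample are assumptions of this sketch, since the document does not fix Δ_K:

```python
import numpy as np

rng = np.random.default_rng(4)

# Kernel estimator (II.9) with a Gaussian kernel K and Silverman bandwidth.
y_i = rng.lognormal(mean=0.0, sigma=0.4, size=500)       # toy sample {y_i}
dK = 1.06 * y_i.std() * len(y_i)**(-1 / 5)               # bandwidth Delta_K

def p_Y(y):
    K = np.exp(-0.5 * ((y[:, None] - y_i[None, :]) / dK)**2) / np.sqrt(2 * np.pi)
    return K.sum(axis=1) / (len(y_i) * dK)

for y0 in (0.5, 1.0, 1.5, 2.0):
    print(f"p_Y({y0}) ~= {p_Y(np.array([y0]))[0]:.3f}")
```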
Sensitivity analysis

Sensitivity analysis of a mechanical model provides a ranking of the uncertain input parameters with respect to their significance for the variability of the response. This information is of great importance for the analyst, since it allows him to focus on the most significant parameters, while insignificant ones may be considered as deterministic quantities and fixed at their respective nominal values. This ranking of the uncertain input parameters can be obtained either by local or by global sensitivity measurements. For that reason, sensitivity analysis methods are divided into two categories: Local Sensitivity (LS) methods and Global Sensitivity (GS) methods. The first ones, the LS methods, concentrate on the measurement of the local impact of the input parameters on the response of the model. In other words, they allow studying how a small change of an input parameter in the vicinity of a specific value (e.g., mean value, most probable failure point, …) can influence the response of the model. The sensitivity measurements given by most LS methods are based on the computation of the gradient of the mechanical response with respect to each of its uncertain input parameters around a given value. Many numerical techniques are available to perform such a computation efficiently, including finite-difference and adjoint differentiation schemes. GS methods provide more complete information than LS methods, as they have the advantage of considering the overall impact of the input parameters and their mutual interactions on the mechanical response, not only in the vicinity of a specific point but over the whole uncertain domain, defined as the variation space of the input parameters due to their uncertainties. Among the most widely used global sensitivity measurements are the FAST indices (Cukier and al, 1978) and the Sobol indices (Sobol, 1993). In the following, the focus will be solely on the presentation of the variance decomposition problem and the derivation of the global sensitivity measurements of Sobol. Interested readers can find in (Saltelli and al, 2000) an extended state of the art of the available sensitivity analysis methods. Let us suppose that the mapping f representing the mechanical model is square-integrable with respect to the probability measure associated with the probability density function p_X(x) of the N-dimensional random variable X = {X_1, …, X_N}^T, where the components X_i, i ∈ {1, …, N} are independent. Under these assumptions, p_X(x) can be obtained as the product of the marginal probability density functions p_{X_i}(x_i), i ∈ {1, …, N} of each input parameter, and the mapping f can be represented by a finite hierarchical expansion known as the Sobol decomposition (Sobol, 1993):

Y = f(\boldsymbol{X}) = f_0 + \sum_{i=1}^{N} f_i(X_i) + \sum_{i_1=1}^{N-1} \sum_{i_2=i_1+1}^{N} f_{i_1,i_2}(X_{i_1}, X_{i_2}) + \dots + f_{1,2,\dots,N}(X_1, X_2, \dots, X_N) \qquad (II.10)

where f_0 is a constant, f_i(X_i) is a univariate component function representing the main effect of the uncertain input parameter x_i acting alone, f_{i_1,i_2}(X_{i_1}, X_{i_2}) is a bivariate component function describing the effect of the interaction between the uncertain input parameters x_{i_1} and x_{i_2}, and so on. The last component in equation (II.10) represents the effect of the interaction of all uncertain input parameters on the variability of the mechanical response y. The uniqueness of the representation given by equation (II.10) is ensured by choosing summands that satisfy the following conditions (Sobol, 1993): the constant term is the mean of the model response,

f_0 = \int_{\mathfrak{D}_X} f(\boldsymbol{x})\, p_X(\boldsymbol{x})\, d\boldsymbol{x}

and the remaining summands have zero mean and are mutually orthogonal. Thanks to these properties, the variance of the mechanical response can be expanded as:

V_Y = \mathbb{V}[Y] = \mathbb{V}[f(\boldsymbol{X})] = \sum_{i=1}^{N} V_i + \sum_{i_1=1}^{N-1} \sum_{i_2=i_1+1}^{N} V_{i_1,i_2} + \dots + V_{1,2,\dots,N} \qquad (II.16)

where 𝕍[.] denotes the mathematical variance, and the components V_{i_1,…,i_s} appearing in the above expansion are referred to as s-th order partial variances, defined by:

V_{i_1,\dots,i_s} = \mathbb{V}[f_{i_1,\dots,i_s}(x_{i_1}, \dots, x_{i_s})], \quad s \in \{1, \dots, N\} \qquad (II.17)

The ratio between the s-th order partial variance V_{i_1,…,i_s} and the total variance V_Y given by (II.16) provides a normalized sensitivity measurement S_{i_1,…,i_s}, called Sobol's sensitivity index (Sobol, 1993), describing the sensitivity of the mechanical response Y to the interaction between the uncertainties related to the input parameters (x_{i_1}, …, x_{i_s}). It is defined by:

S_{i_1,\dots,i_s} = \frac{V_{i_1,\dots,i_s}}{V_Y} \qquad (II.18)

Moreover, the Sobol total sensitivity indices S_i^T, i ∈ {1, …, N} can be easily derived in the same way. They are introduced to evaluate the total effect of the uncertain input parameters:

S_i^T = \frac{V_i^T}{V_Y} \qquad (II.19)

where V_i^T is the total variance accounting for the main effect of the uncertain input parameter x_i (i.e., when the uncertainty related to x_i acts alone on the mechanical response) and the higher order effects resulting from the interaction with the uncertainties of the other input parameters. The total variance can be defined by:

V_i^T = \mathbb{E}_{\sim X_i}\left[\mathbb{V}_{X_i}[Y]\right] = \mathbb{E}_{\sim X_i}\left[\mathbb{V}_{X_i}[f(\boldsymbol{X})]\right] \qquad (II.20)

where the inner term 𝕍_{X_i}[.] is the variance of Y due only to the uncertainty of the input parameter x_i, and the outer term 𝔼_{∼X_i}[.] is the expectation due to the uncertainties related to all input parameters ∼x_i except x_i. After performing some algebraic operations, the total variance can be rewritten as follows:

V_i^T = \mathbb{E}_{\sim X_i}\left[\mathbb{E}_{X_i}[f(\boldsymbol{X})^2] - \left(\mathbb{E}_{X_i}[f(\boldsymbol{X})]\right)^2\right] = \mathbb{E}[f(\boldsymbol{X})^2] - \mathbb{E}_{\sim X_i}\left[\left(\mathbb{E}_{X_i}[f(\boldsymbol{X})]\right)^2\right] \qquad (II.21)

The first term in (II.21) is the second order statistical moment m_Y^2, which can be derived from (II.3) by setting l equal to 2. However, the computation of the second term is more complex, since it involves the evaluation of two integrals:

\mathbb{E}_{\sim X_i}\left[\left(\mathbb{E}_{X_i}[f(\boldsymbol{X})]\right)^2\right] = \int_{\sim X_i} \left(\int_{X_i} f(\boldsymbol{x})\, p_{X_i}(x_i)\, dx_i\right)^2 p_{\sim X_i}(\sim\!\boldsymbol{x}_i)\, d\!\sim\!\boldsymbol{x}_i \qquad (II.22)

The inner integral is one-dimensional and can be easily computed by numerical integration schemes, or analytically if the integrand is available in a closed-form explicit solution. However, the outer integral is multidimensional, and its evaluation is not trivial with conventional integration schemes, especially when the integrand is only available in a time-consuming implicit form and the integration dimension is very high. As shown in subsection 2.2, sensitivity analysis presents the same difficulties as statistical moments analysis, since, mathematically speaking, the problem in both cases is to compute multidimensional integrals. For consistency throughout the manuscript of this thesis, equation (II.22) is rewritten in the standard random space. It now reads:

\mathbb{E}_{\sim X_i}\left[\left(\mathbb{E}_{X_i}[f(\boldsymbol{X})]\right)^2\right] = \mathbb{E}_{\sim U_i}\left[\left(\mathbb{E}_{U_i}[h(\boldsymbol{U})]\right)^2\right] = \int_{\sim U_i} \left(\int_{U_i} h(\boldsymbol{u})\, \varphi_{U_i}(u_i)\, du_i\right)^2 \varphi_{\sim U_i}(\sim\!\boldsymbol{u}_i)\, d\!\sim\!\boldsymbol{u}_i \qquad (II.23)
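In practice, first-order and total Sobol indices are often estimated by a pick-freeze Monte Carlo scheme rather than by computing (II.22) directly. The sketch below uses the classical Ishigami test function with independent uniform inputs, which is a standard benchmark and not one of the FCG models of this document:

```python
import numpy as np

rng = np.random.default_rng(5)

# Pick-freeze Monte Carlo estimates of Sobol first-order (II.18) and total
# (II.19) indices for the Ishigami function (reference values are roughly
# S1 ~ 0.31, S2 ~ 0.44, S3 ~ 0.00).
def f(x):
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2
            + 0.1 * x[:, 2]**4 * np.sin(x[:, 0]))

n, N = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, N))
B = rng.uniform(-np.pi, np.pi, (n, N))
yA, yB = f(A), f(B)
V = np.var(np.concatenate([yA, yB]))

for i in range(N):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                      # "freeze" all columns except i
    yABi = f(ABi)
    S_i = np.mean(yB * (yABi - yA)) / V      # first-order estimator
    ST_i = 0.5 * np.mean((yA - yABi)**2) / V # total-effect estimator (Jansen)
    print(f"S_{i + 1} = {S_i:.3f}, S^T_{i + 1} = {ST_i:.3f}")
```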
Reliability analysis

Reliability analysis aims to assess the safety level of an engineering system or structure against a prescribed failure criterion. Commonly, the failure criterion concept can be defined as the gap between two fundamental quantities named the Demand and the Capacity. In mechanical engineering, as well as in civil engineering, the Demand can be defined as the system response, such as a stress intensity factor or a crack length, induced by the loading conditions. The Capacity, on the other hand, represents a threshold that the Demand should not exceed, such as the fracture toughness of the material. The gap between these two quantities is commonly measured by a limit state function G, defined such that the failure domain is Ω_F ≡ {x : G(x) ≤ 0}, and the probability of failure is given by:

P_f = \int_{\Omega_F} p_X(\boldsymbol{x})\, d\boldsymbol{x} = \int_{\mathbb{R}^N} \mathbb{I}_{\Omega_F}(\boldsymbol{x})\, p_X(\boldsymbol{x})\, d\boldsymbol{x} \qquad (II.24)

where p_X is the joint probability density of the random vector X, and 𝕀_{Ω_F} is the indicator function of Ω_F, which is equal to 1 if G(x) ≤ 0 and 0 otherwise. As can be seen, the estimation of the probability of failure P_f is nothing other than the computation of a multidimensional integral, which brings us back to the same computational problem encountered in subsections 2.2 and 2.3, presenting statistical moments and sensitivity analysis. As the limit state function G is often not available in explicit form, especially when dealing with engineering problems, the integral in the above equation cannot be computed analytically. Instead, numerical methods are often employed. The efficiency of the numerical scheme mainly depends on the complexity of the limit state function and on the problem dimensionality N. To avoid the computation of multidimensional integrals in the estimation of the probability of failure, an approximation technique called the First Order Reliability Method (FORM) was developed in the early seventies (Hasofer and Lind, 1974). To apply FORM, the first step is to rewrite the reliability problem in the standard random space, using a probabilistic transformation as shown in subsection 2.1.
Consequently, the probability of failure defined earlier by equation (II.24) is now given by the following expression:

P_f = \int_{\mathbb{R}^N} \mathbb{I}_{\Omega_F}(T(\boldsymbol{u}))\, \varphi_U(\boldsymbol{u})\, d\boldsymbol{u} \qquad (II.25)

The second step consists in finding the most probable failure point (MPFP) P*, defined as the point of the failure domain closest to the origin of the standard random space:

\boldsymbol{u}^* = \arg\min \left\{ \|\boldsymbol{u}\| : G \circ T(\boldsymbol{u}) \leq 0 \right\} \qquad (II.26)

Once the coordinates u* = {u_1, u_2, …, u_N}^T of P* are found, the Hasofer-Lind reliability index β_HL = ‖u*‖ is computed, and the first order approximation of the probability of failure reads:

P_f \approx P_{f,FORM} = \Phi(-\beta_{HL}) \qquad (II.27)

where Φ is the cumulative distribution function of a standard normal variable. The FORM approximation of the probability of failure is often satisfactory, especially for high values of the reliability index, provided that the MPFP is well identified. It is clear from figure II.5 that the FORM approximation is exact only when the true limit state function is linear. Unfortunately, this situation is rarely encountered in real-life problems, where the corresponding limit state function can be highly nonlinear. For this reason, the Second Order Reliability Method (SORM) has been developed. As depicted in figure II.5, it uses a quadratic surface to better fit the true failure domain. Based on Breitung's approximation (Breitung, 1984), the SORM estimation of the probability of failure writes:

P_f \approx P_{f,SORM} = \Phi(-\beta_{HL}) \prod_{i=1}^{N-1} \frac{1}{\sqrt{1 - \beta_{HL}\, \kappa_i}} = P_{f,FORM} \prod_{i=1}^{N-1} \frac{1}{\sqrt{1 - \beta_{HL}\, \kappa_i}} \qquad (II.28)

where κ_i, i ∈ {1, …, (N-1)} are the principal curvatures of the limit state function at the MPFP. As can be seen from (II.28), SORM simply improves the estimation of the failure probability given by FORM through a ponderation by a correction factor \prod_{i=1}^{N-1} 1/\sqrt{1 - \beta_{HL}\, \kappa_i} including information about the curvature of the limit state function. Note that these curvatures are taken as positive quantities for a convex limit state function. Breitung (Breitung, 1984) has shown that the approximation given by SORM is accurate for high values of the reliability index, since it tends toward the exact value of the failure probability when the reliability index is infinite. However, SORM becomes inefficient when the dimensionality N of the reliability problem is high. This is due to the computation of the (N-1) curvatures, which requires the evaluation of the second order derivatives of the limit state function with respect to the uncertain input parameters. This could lead to an unaffordable computational cost, especially for time-consuming implicit limit state functions, where the second order derivatives are computed using a finite difference scheme.
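To make the FORM procedure concrete, here is a minimal HL-RF iteration on a toy limit state expressed directly in the standard space; the limit state and its coefficients are invented for illustration, and the gradient is obtained by finite differences as it would be for an implicit model:

```python
import numpy as np
from scipy.stats import norm

# HL-RF iteration sketch for FORM on the toy limit state
# G(u) = 3 - u1 - 0.4*u2 + 0.1*u2**2 (not from this document).
def G(u):
    return 3.0 - u[0] - 0.4 * u[1] + 0.1 * u[1]**2

def grad(fun, u, h=1e-6):            # central finite-difference gradient
    g = np.zeros_like(u)
    for i in range(u.size):
        e = np.zeros_like(u)
        e[i] = h
        g[i] = (fun(u + e) - fun(u - e)) / (2.0 * h)
    return g

u = np.zeros(2)
for _ in range(100):
    dG = grad(G, u)
    u_new = (dG @ u - G(u)) / (dG @ dG) * dG    # HL-RF update rule
    if np.linalg.norm(u_new - u) < 1e-10:
        break
    u = u_new

beta_HL = np.linalg.norm(u)                     # distance origin -> MPFP
print(f"beta_HL = {beta_HL:.4f}, P_f (FORM) = {norm.cdf(-beta_HL):.3e}")
```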
Classical methods for multidimensional integration

3.1. Mathematical problem statement

Performing uncertainty propagation analysis involves the computation of multidimensional integrals, as shown in the previous section if we refer to equations (II.3), (II.23) and (II.25), respectively, for the evaluation of the statistical moments, the partial variances and the probability of failure. This task is rather difficult, especially when the integrand is only available in an implicit form and the integrals have a high dimensionality. In the following, we will focus on classical methods commonly used to compute integrals of the generic form:

I[\mathcal{f}] = \int_{\mathbb{R}^N} \mathcal{f}(\boldsymbol{u})\, \varphi_U(\boldsymbol{u})\, d\boldsymbol{u} \qquad (II.29)

where 𝒻 denotes the integrand and φ_U the N-dimensional standard normal density defining the space where the integration is to be performed.

Monte-Carlo Simulation

According to Monte-Carlo Simulation (MCS), the integral (II.29) is approximated by the empirical mean of the integrand computed over a sample {u_1, …, u_M} drawn from the density φ_U:

I[\mathcal{f}] \approx \frac{1}{M} \sum_{k=1}^{M} \mathcal{f}(\boldsymbol{u}_k) \qquad (II.30)

and the associated integration error can be estimated by:

\epsilon_{MCS} = \frac{\tilde{\sigma}_{\mathcal{f}}(\boldsymbol{u})}{\sqrt{M}} \qquad (II.31)

where σ̃_𝒻(u) is the standard deviation of the sample {𝒻(u_1), …, 𝒻(u_M)}. The main advantage of MCS is that it is very straightforward to implement. In addition, it is highly robust, since it can deal with integrands of a high level of complexity, such as in cases where the integrand represents a mechanical model exhibiting nonlinear behavior. Also, its efficiency is weakly affected by the dimensionality of the integral, and it can be applied to mechanical models having a high number of uncertain input parameters. As can be seen in equation (II.31), the error ε_MCS decreases as 1/√M, which explains the main drawback of MCS, namely its low convergence rate. Convergence is even slower when small probabilities of failure have to be computed for the purpose of reliability analysis. For instance, to estimate probabilities of failure of magnitude 10^-6 with an error ε_MCS of 5%, more than 4×10^8 evaluations of the integrand should be performed. Furthermore, it should be noticed that if statistical moments analysis is addressed, the convergence of MCS is also affected when high order statistical moments, such as the skewness and the kurtosis, have to be evaluated. The poor convergence rate of MCS is mainly due to the use of pseudo-random number generators, for which the obtained sample points {u_1, …, u_M} are not uniformly distributed in the random space. To overcome this problem, other sampling schemes can be used, such as Latin hypercube sampling (Mckay and al, 1979) and quasi-random numbers (Niederreiter, 1992). It has been shown in the literature (Owen, 1992) that Latin hypercube sampling gives more accurate results than pseudo-random numbers, since the associated error is lower than \sqrt{M/(M-1)}\, \epsilon_{MCS}. Various integration schemes based on quasi-random numbers are available. Indeed, such samples can be built from different quasi-random sequences, such as those developed by Faure (Faure, 1982), Halton (Halton, 1960), Hammersley (Hammersley and Handscomb, 1964) and Sobol (Sobol, 1998). Quasi-random numbers are more efficient for performing high-dimensional integration (Schlier, 2004), especially when derived from the Sobol sequence, since the convergence rate is in (ln M)^N / M, which is much faster than the convergence rate 1/√M obtained with pseudo-random numbers. It is also known (Wang and al, 2004) that quasi-random numbers built from the Hammersley sequence give a good balance between efficiency and accuracy. It is also worth mentioning that quasi-random numbers ensure good performance when the integrand is very sharp or even discontinuous (Schürer, 2003).
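A small experiment contrasting pseudo-random and Sobol' quasi-random points on an integral with known exact value is sketched below; it assumes scipy's stats.qmc module is available (scipy ≥ 1.7):

```python
import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(6)

# Pseudo-random MCS versus a scrambled Sobol' sequence for the
# Gaussian-weighted integral I = E[sum(U_i^2)] = N.
N, M = 8, 2**12
f = lambda u: (u**2).sum(axis=1)

u_mc = rng.standard_normal((M, N))
u_qmc = norm.ppf(qmc.Sobol(d=N, scramble=True, seed=6).random(M))

print(f"exact  = {N}")
print(f"MCS    = {f(u_mc).mean():.4f}")
print(f"Sobol' = {f(u_qmc).mean():.4f}")
```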
Tensor-product cubature methods

Cubature methods (Stroud, 1971; Cools and Rabinowitz, 1993; Cools, 2002; Lu and Darmofal, 2004) are an alternative tool to MCS for the computation of multidimensional integrals. In this section, we focus on the full tensor-product method (Brezin and Zhidkov, 1965) and the sparse grid method (Smolyak, 1963; Gerstner and Griebel, 1998), which are efficient to deal, respectively, with low and moderate dimensionality integration problems.

Full tensor-product cubature method

Full tensor-product cubature schemes are probably among the most used techniques to perform numerical integration. They have been extensively used in the field of uncertainty propagation for engineering problems, such as in (Baldeweck, 1999) to compute the first four statistical moments of mechanical responses, and in (Ghanem, 1999) for stochastic finite elements computations. For the sake of simplicity, let us first present the one-dimensional case, i.e. when the mechanical model has only one uncertain input parameter. Thus, the dimensionality N is now equal to 1, and the random vector U representing the uncertain input parameters in the standard random space becomes a scalar quantity denoted U, which is nothing other than a standard normal variable with probability density function φ_U(u). Under these conditions, the integral (II.29) reads:

I[\mathcal{f}] = \int_{\mathbb{R}} \mathcal{f}(u)\, \varphi_U(u)\, du = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \mathcal{f}(u) \exp\left[-\frac{u^2}{2}\right] du \qquad (II.32)

According to the cubature method, and considering that the integrand 𝒻(u) is square-integrable, the integral (II.32) can be approximated, as in the case of MCS, by a weighted summation of the integrand as follows:

I[\mathcal{f}] = \int_{\mathbb{R}} \mathcal{f}(u)\, \varphi_U(u)\, du \approx \sum_{k=1}^{M} w_k\, \mathcal{f}(u_k) \qquad (II.33)

where {u_1, …, u_M} and {w_1, …, w_M} are the integration points and weights, and M is the level of the quadrature scheme. The integration points and weights depend on the integration domain and on the weight function φ_U(u), but not on the integrand 𝒻(u). As we work in the standard random space, so that the weight function in the integral (II.33) is exp[-u²/2], the scheme is called the Gauss-Hermite integration scheme, and the integration points are the roots of the M-th order Hermite polynomial. In the literature, for the one-dimensional Gauss-Hermite integration scheme, the term quadrature is used instead of cubature. In the multidimensional case, the integral (II.29) can be approximated using the so-called full tensor-product cubature scheme given below:

I[\mathcal{f}] = \int_{\mathbb{R}^N} \mathcal{f}(\boldsymbol{u})\, \varphi_U(\boldsymbol{u})\, d\boldsymbol{u} \approx I_1^{M_1} \otimes \dots \otimes I_N^{M_N}[\mathcal{f}] = \sum_{k_1=1}^{M_1} \dots \sum_{k_N=1}^{M_N} w_{k_1} \dots w_{k_N}\, \mathcal{f}(u_{k_1}, \dots, u_{k_N}) \qquad (II.34)

The accuracy of the approximation given by equation (II.34) can be measured by the degree of the polynomial below which the cubature gives the exact value. In the case of an isotropic cubature scheme, which means that the same number of integration points M_1 = … = M_N = M is used in each direction of the integration domain, equation (II.34) allows one to exactly integrate a multidimensional polynomial of degree not greater than 2M - 1. Consequently, the required number of evaluations of the integrand is M^N, which clearly grows exponentially with the dimensionality N of the integral and leads to intractable computations when the mechanical model is itself time-consuming. This is the main drawback of full tensor-product cubature schemes.
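The following sketch builds the isotropic full tensor-product Gauss-Hermite rule of equation (II.34) from the one-dimensional probabilists' rule and checks the degree 2M - 1 exactness on a toy polynomial; the M^N cost is visible in the loop:

```python
import numpy as np
from itertools import product

# Full tensor-product Gauss-Hermite rule (II.34) for the N(0,1) weight.
M, N = 3, 4                                   # points per direction, dimension
t, w = np.polynomial.hermite_e.hermegauss(M)  # 1-D rule, weight exp(-u^2/2)
w = w / np.sqrt(2.0 * np.pi)                  # normalize to the N(0,1) pdf

f = lambda u: np.prod(u**2) + u[0]**4         # E[f] = 1 + 3 = 4 exactly

I = 0.0
for idx in product(range(M), repeat=N):       # all M**N tensor-product points
    u = t[list(idx)]
    I += np.prod(w[list(idx)]) * f(u)
print(f"I = {I:.6f} (exact 4), points used = {M**N}")
```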
Sparse grid method

An alternative that avoids the curse of dimensionality of full tensor-product cubature schemes is the use of sparse grid integration, also called Smolyak's cubature scheme (Smolyak, 1963; Gerstner and Griebel, 2003; Nobile and al, 2006; Ganapathysubramanian and Zabaras, 2007). The key idea is to use linear combinations of tensor-products of one-dimensional cubature formulae I_i^{M_i}, i ∈ {1, …, N} of level M_i, each one able to integrate exactly any one-dimensional polynomial of degree up to 2M_i - 1. To enhance its efficiency, only tensor-product combinations with a relatively small number of integration points are used, and higher degree combinations are excluded. The linear combination is performed in such a way that the suitable interpolation property of the one-dimensional case is preserved for higher dimensionality (Novak and Ritter, 1999). Accordingly, the Smolyak formula of level M is given by:

I[\mathcal{f}] = \int_{\mathbb{R}^N} \mathcal{f}(\boldsymbol{u})\, \varphi_U(\boldsymbol{u})\, d\boldsymbol{u} \approx \sum_{M \leq |\boldsymbol{i}| \leq M+N-1} (-1)^{M+N-1-|\boldsymbol{i}|}\, C_{N-1}^{|\boldsymbol{i}|-M} \left(I_1^{i_1} \otimes \dots \otimes I_N^{i_N}\right)[\mathcal{f}] \qquad (II.35)

Smolyak's cubature formula is more efficient, since the corresponding grid uses fewer integration points than the grid obtained by full tensor-product. Indeed, in (Novak and Ritter, 1999) it is proven that the number of evaluations of the integrand in Smolyak's cubature formula (II.35) of degree M is 2^{M-1} N^{M-1} / (M-1)!, which increases polynomially with the dimensionality N of the integral to be computed. According to (Fichtl and Prinja, 2011), the use of Smolyak's cubature formula of degree d+1 or d+2 requires fewer integrand evaluations than the full tensor-product cubature formula of degree d, especially for higher values of N. The efficiency of Smolyak's cubature formula can be further enhanced by using nested sparse grids, which prevent extra computations. Let us consider two sparse grids G_l(u) and G_{l+1}(u) of level l and l+1, respectively. These are nested if the points of the grid G_l(u) align with those of the grid of higher level G_{l+1}(u). Unfortunately, Gauss-Hermite integration points have poor nesting, since only the center point belongs to all possible grids. Other types of integration points with bounded support, such as Gauss-Legendre, Gauss-Patterson and Clenshaw-Curtis points (Liu and al, 2011), have better nesting and can be used to avoid this problem. This only requires performing the integration in an appropriate standard random space defined by standard uniform random variables rather than standard normal random variables. Despite the various improvements of Smolyak's cubature formula, it remains inefficient for higher dimensionality integration problems. In addition, it has another critical drawback, since large negative weights can be produced during the combination of the tensor-products, which leads to ill-conditioning in the computation of integrals involving non-polynomial integrands. Indeed, the results obtained by (Bernardo, 2015) from a benchmark study conducted on a large family of integrand functions with dimensionality up to 24 have shown that Smolyak's cubature formula is not reliable, as it produces large errors, mainly due to large negative weights.

Efficient cubature methods

In section 3 we focused on the classical methods used to perform multidimensional integration. The main objective was to introduce the mathematical issue behind multidimensional integration and to present the general principle of cubature integration in a gentle manner. In this new section, we focus on the most efficient cubature formulae available in the literature on integration methods (Stroud, 1971; Cools, 2003). These formulae are said to be efficient here because they use few integration points (i.e., only a few tens of points are needed for moderate dimensionality and a few hundred points are required for high dimensionality); therefore, a limited number of integrand evaluations is needed to achieve a good accuracy. They are similar to MCS in that only one weighted summation is needed to approximate a multidimensional integral, unlike the tensor-product-based formulae, where a weighted summation is performed in each direction of the integration domain, since the integration points are smartly selected to reduce the computational effort. The main difference between these efficient cubature formulae lies in the way the integration points are selected. Indeed, to further improve the efficiency of the cubature scheme, the idea is to use symmetric integration points, such as those developed first by (Genz, 1986) to perform integration on the hypercube and extended later by (Genz and Keister, 1996) to infinite integration domains. The obtained formulae are called complete-symmetric cubature formulae. Based on invariant theory and orthogonal arrays, (Victoir, 2004) developed quasi-symmetric cubature formulae, also called thinned cubature formulae (Bernardo, 2015), which use only part of the integration points of the same symmetric set, in contrast to the full-symmetric cubature formulae, which use all integration points of a symmetric set. These efficient cubature formulae have been extensively studied, but only applied to solve purely mathematical problems. Indeed, until the works conducted by (Lu and al, 2004; Victoir, 2004), they were unknown in the engineering fields, in particular the thinned cubature formulae. In the last decade, they have received growing attention, and some applications to engineering problems have appeared (Wei and al, 2008; Xu and al, 2012; Xu and Lu, 2017; Xiao and Lu, 2018; Xu and Dang, 2019; Ding and Xu, 2021). As part of this thesis work, a first attempt will be conducted in this chapter to extend the application of these efficient cubature schemes to more complex mechanical problems involving a high number of uncertain parameters. But first, let us identify the main efficient cubature schemes available in the literature and their mathematical formulation. These formulae, all of degree 5, are useful to compute Gaussian weighted integrals, particularly the first four statistical moments (Xu and Lu, 2017), as they allow a good accuracy while the number of evaluations of the integrand grows quadratically with the dimension of the integral.

Formula I

This formula of degree 5, given by (Stroud, 1971), is only valid for a moderate range of dimensions 2 ≤ N ≤ 7 and requires N² + N + 2 integration points, as depicted in figure II.9 for the two-dimensional case. According to this formula, the integral (II.29) can be approximated by the following weighted sum:

I[\mathcal{f}] \approx A\left[f(\sqrt{2}\eta, \dots, \sqrt{2}\eta) + f(-\sqrt{2}\eta, \dots, -\sqrt{2}\eta)\right] + B \sum_{perm}\left[f(\sqrt{2}\lambda, \sqrt{2}\xi, \dots, \sqrt{2}\xi) + f(-\sqrt{2}\lambda, -\sqrt{2}\xi, \dots, -\sqrt{2}\xi)\right] + C \sum_{perm}\left[f(\sqrt{2}\mu, \sqrt{2}\mu, \sqrt{2}\gamma, \dots, \sqrt{2}\gamma) + f(-\sqrt{2}\mu, -\sqrt{2}\mu, -\sqrt{2}\gamma, \dots, -\sqrt{2}\gamma)\right] \qquad (II.37)

where the terms appearing in the summations are built from all possible distinct permutations of the input variables (see figure II.9, left). The constants A, B, C, λ, ξ, μ, γ and η are given by (Stroud, 1967; Stroud, 1971). They can have multiple real solutions, and some of the derived integration points can take on complex values. For instance, the constants μ, γ and η are obtained by solving a set of nonlinear moment equations, whose solutions become complex for N > 7. Formula I is the most efficient among the known integration formulae of degree 5 for N ≥ 4, since it requires just one more point than the theoretical minimum number of integration points, which is N² + N + 1, as can be seen from the plot (red line) of figure II.9.

Formula II

This is another fifth-degree formula, derived by (Mysovskikh, 1980), which requires N² + 3N + 2 integration points, as shown in figure II.10 for the two-dimensional case. Based on this formula, the approximation of the integral (II.29) reads:

I[\mathcal{f}] \approx \frac{2}{N+2} f(\boldsymbol{0}) + \frac{N^2(7-N)}{2(N+1)^2(N+2)^2} \sum_{j=1}^{N+1}\left[f\left(\sqrt{N+2}\, \boldsymbol{a}_j\right) + f\left(-\sqrt{N+2}\, \boldsymbol{a}_j\right)\right] + \frac{2(N-1)^2}{(N+1)^2(N+2)^2} \sum_{j=1}^{N(N+1)/2}\left[f\left(\sqrt{N+2}\, \boldsymbol{b}_j\right) + f\left(-\sqrt{N+2}\, \boldsymbol{b}_j\right)\right] \qquad (II.39)
Efficient cubature methods

In section 3 we focused on the classical methods used to perform multidimensional integration. The main objective was to introduce the mathematical issue behind multidimensional integration and to present the general principle of cubature integration in a soft manner. In this new section, we focus on the most efficient cubature formulae available in the literature of integration methods (Stroud, 1971; Cools, 2003). These formulae are said to be efficient here because they use few integration points (i.e., only a few tens of points are needed for moderate dimensionality and a few hundreds of points are required for high dimensionality); therefore, a limited number of integrand evaluations is needed to achieve a good accuracy. They are similar to MCS, in that only one weighted summation is needed to approximate a multidimensional integral, unlike the tensor-product-based formulae where a weighted summation is performed in each direction of the integration domain, since the integration points are smartly selected to reduce computational efforts. The main difference between these efficient cubature formulae is the way in which the integration points are selected. Indeed, to further improve the efficiency of the cubature scheme, the idea is to use symmetric integration points, such as those developed first by (Genz, 1986) to perform integration on the hypercube and extended later by (Genz and Keister, 1996) to infinite integration domains. The obtained formulae are called complete-symmetric cubature formulae. Based on invariant theory and orthogonal arrays, (Victoir, 2004) has developed quasi-symmetric cubature formulae, also called thinned cubature formulae (Bernardo, 2015), which use only part of the integration points of a symmetric set, in contrast to the full-symmetric cubature formulae which use all integration points of a symmetric set. These efficient cubature formulae have been extensively studied, but only applied to solve purely mathematical problems. Indeed, until the works conducted by (Lu et al., 2004; Victoir, 2004), they were unknown in the engineering fields, in particular the thinned cubature formulae. In the last decade, they have received growing attention and some applications to solve engineering problems have been noticed (Wei et al., 2008; Xu et al., 2012; Xu and Lu, 2017; Xiao and Lu, 2018; Xu and Dang, 2019; Ding and Xu, 2021). As part of this thesis work, a first attempt is conducted in this chapter to extend the application of these efficient cubature formulae to more complex mechanical problems involving a high number of uncertain parameters. But first, let us identify the main efficient cubature schemes available in the literature and their mathematical formulations. These formulae, all of fifth degree, are useful to compute Gaussian weighted integrals, particularly the first four statistical moments (Xu and Lu, 2017), as they allow a good accuracy while the number of evaluations of the integrand grows quadratically with the dimension of the integral.

Formula I

This formula of degree 5, given by (Stroud, 1971), is only valid for a moderate range of dimensions 2 ≤ 𝑁 ≤ 7 and requires 𝑁² + 𝑁 + 2 integration points, as depicted in figure II.9 for the two-dimensional case. According to this formula, the integral (II.29) can be approximated by the following weighted sum:

$$ I[f] \approx A\left[ f(\sqrt{2}\eta, \ldots, \sqrt{2}\eta) + f(-\sqrt{2}\eta, \ldots, -\sqrt{2}\eta) \right] + B \sum_{\text{permutation}} \left[ f(\sqrt{2}\lambda, \sqrt{2}\xi, \ldots, \sqrt{2}\xi) + f(-\sqrt{2}\lambda, -\sqrt{2}\xi, \ldots, -\sqrt{2}\xi) \right] + C \sum_{\text{permutation}} \left[ f(\sqrt{2}\mu, \sqrt{2}\mu, \sqrt{2}\gamma, \ldots, \sqrt{2}\gamma) + f(-\sqrt{2}\mu, -\sqrt{2}\mu, -\sqrt{2}\gamma, \ldots, -\sqrt{2}\gamma) \right] \quad (II.37) $$

where the terms appearing in the summations are built from all possible distinct permutations of the input variables (see figure II.9, left). The constants 𝐴, 𝐵, 𝐶, 𝜇, 𝛾, 𝜂 and 𝜉 are given by (Stroud, 1967; Stroud, 1971). They can have multiple real solutions and some of the derived integration points can take on complex values. For instance, the constants 𝜇, 𝛾 and 𝜂 are obtained by solving a set of nonlinear equations given in (Stroud, 1967; Stroud, 1971), from which complex solutions occur for 𝑁 > 7. Formula I is the most efficient among the known integration formulae of degree 5 for 𝑁 ≥ 4, since it requires just one more point than the theoretical lower bound on the number of integration points, which is 𝑁² + 𝑁 + 1, as can be seen from the plot (red line) of figure II.9.

Formula II

We give here another fifth-degree formula, derived by (Mysovskikh, 1980), which requires 𝑁² + 3𝑁 + 2 integration points, as shown in figure II.10 for the two-dimensional case. Based on this formula, the approximation given for the integral (II.29) reads:

$$ I[f] \approx \frac{2}{N+2} f(\boldsymbol{0}) + \frac{N^2 (7-N)}{2(N+1)^2 (N+2)^2} \sum_{j=1}^{N+1} \left[ f\!\left(\sqrt{N+2}\, \boldsymbol{a}_j\right) + f\!\left(-\sqrt{N+2}\, \boldsymbol{a}_j\right) \right] + \frac{2(N-1)^2}{(N+1)^2 (N+2)^2} \sum_{j=1}^{N(N+1)/2} \left[ f\!\left(\sqrt{N+2}\, \boldsymbol{b}_j\right) + f\!\left(-\sqrt{N+2}\, \boldsymbol{b}_j\right) \right] \quad (II.39) $$
where 𝟎, 𝒂_𝑗 and 𝒃_𝑗 are the integration points representing, respectively, the center point of the integration domain, the vertices of a regular simplex, and the midpoints of the vertices of a regular simplex projected onto the surface of the sphere \(S_N \equiv \{\boldsymbol{x} \in \mathbb{R}^N : x_1^2 + x_2^2 + \cdots + x_N^2 = 1\}\). The explicit expressions of the point sets 𝒂_𝑗 and 𝒃_𝑗, allowing to compute Gaussian weighted integrals, are given in (Mysovskikh, 1980). Formula II is somewhat less efficient than formula I, but it is valid for a wide range of dimensions 𝑁 > 3. Note that when 𝑁 < 7, the integration weights are all positive, but for 𝑁 > 7, negative integration weights appear, which is mathematically reasonable, but may lead to unacceptably large errors or even physically meaningless results. Therefore, higher dimensional integration should be handled with more care. As depicted in figure II.10, the number of required integration points grows quadratically with the dimension of the integral to be computed. Note that for higher dimensions, formula II remains efficient since the number of evaluations of the integrand stays close to the theoretical lower bound.

Formula III

This formula, first given in (Stroud and Secrest, 1963) and later found in (Stroud, 1971), is very similar to formula II. Indeed, formula III is derived from an integration formula on the surface of the unit N-sphere. Moreover, for radially symmetric functions 𝑓(|𝒙|), formulae II and III lead to exactly the same results. The main difference between them lies in the choice of the integration weights and points: the integration points used in formula III are built on all distinct reflections and permutations of the input variables. Referring to formula III, the integral (II.29) can be approximated by the following series of summations:

$$ I[f] \approx \frac{2}{N+2} f(\boldsymbol{0}) + \frac{4-N}{2(N+2)^2} \sum_{\text{full sym.}} f\!\left(\sqrt{N+2}, 0, \ldots, 0\right) + \frac{1}{(N+2)^2} \sum_{\text{full sym.}} f\!\left(\sqrt{\tfrac{N+2}{2}}, \sqrt{\tfrac{N+2}{2}}, 0, \ldots, 0\right) \quad (II.43) $$

Figure II.11. Integration points given by formula III (left) and comparison of the number of integration points with the theoretical minimum bound of formulae of degree 5 (right)

This formula is also of algebraic degree 5 and requires 2𝑁² + 1 integration points. Although the number of integration points grows quadratically with the dimension of integration, formula III is less efficient than the two previous formulae, especially for higher dimensions, as depicted in figure II.11. In addition, the space-filling of the integration points of formula III is different from that of formula II, while the two formulae have some similarities.

Formula IV

This formula, developed for the first time by (McNamee and Stenger, 1967), has the same efficiency as formula III since 2𝑁² + 1 integration points are also required to approximate a multidimensional integral. The main difference between them is that, for formula IV, the position of the integration points is independent of the dimension 𝑁, as shown in equation (II.44). Indeed, from a geometrical point of view, the integration points of formula IV lie on the surface of a sphere of constant radius, whereas for formula III, the integration points fill the surface of a sphere whose radius increases with the dimension 𝑁. This fact could lead to a significant gap in the accuracy of the estimates given by formulae III and IV, and it will be interesting to investigate it in the following. It is important to note that for the two-dimensional case, formula IV is identical to the one constructed by the full tensor-product scheme. As can be seen in figure II.12, the integration points of formula IV have the same locations as in the Gauss-Hermite grid of level 3.
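Before moving to formula V, the fully specified weights and points of formula III (equation II.43) can be transcribed directly. The sketch below, assuming numpy, illustrates the 2𝑁² + 1 point structure and the fifth-degree exactness on a polynomial test integrand.

```python
# Formula III (equation II.43) sketch: 2*N**2 + 1 points for E[f(U)].
import itertools
import numpy as np

def stroud_formula_III(f, N):
    total = (2.0 / (N + 2)) * f(np.zeros(N))        # center point
    w1 = (4.0 - N) / (2.0 * (N + 2) ** 2)           # 2N axial points
    for j in range(N):
        for s in (+1.0, -1.0):
            p = np.zeros(N); p[j] = s * np.sqrt(N + 2.0)
            total += w1 * f(p)
    w2 = 1.0 / (N + 2) ** 2                          # 2N(N-1) pair points
    for j, k in itertools.combinations(range(N), 2):
        for sj, sk in itertools.product((+1.0, -1.0), repeat=2):
            p = np.zeros(N)
            p[j] = sj * np.sqrt((N + 2.0) / 2.0)
            p[k] = sk * np.sqrt((N + 2.0) / 2.0)
            total += w2 * f(p)
    return total

f = lambda x: x[0] ** 4 + x[0] ** 2 * x[1] ** 2   # exact value 3 + 1 = 4
print(stroud_formula_III(f, N=5))                 # -> 4.0, with 51 points
```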
Formula V

The integration points of this formula are built by starting with an equal weight cubature formula of degree 𝑑, having a convolutional structure such as in a tensor-product formula, and then a large portion of these integration points are removed using an orthogonal array that preserves the degree 𝑑 of accuracy. This procedure of integration point removal is called thinning, and, for this reason, formula V is also known as the thinned cubature formula. It produces all positive integration weights and interior integration points, which is appropriate from a physical point of view and provides good robustness when dealing with higher dimensions. For a Gaussian integration domain, two classes of fifth-degree formula V are available. The first one is of the following form and is valid when the dimension 𝑁 is of the form 𝑁 = 3𝑘 − 2:

$$ I[f] \approx \frac{2}{N+2} f(\boldsymbol{0}) + \frac{N}{N+2} \sum_{\text{permutation}} f\!\left(h\sqrt{3}, \ldots, h\sqrt{3}, 0, \ldots, 0\right) \quad (II.45) $$

where ℎ is the permutation of ±1 and 𝑘 is the number of ℎ√3 entries in the integration point (ℎ√3, …, ℎ√3, 0, …, 0). The second class of formula V is valid for 𝑁 ≥ 3 and approximates the integral (II.29) by the following expression:

$$ I[f] \approx \frac{8N}{(N+2)^2} \sum_{\text{permutation}} f\!\left(h\sqrt{\tfrac{N+2}{2}}, 0, \ldots, 0\right) + \frac{(N-2)^2}{(N+2)^2} \sum_{\text{permutation}} f\!\left(h\sqrt{\tfrac{N+2}{N-2}}, \ldots, h\sqrt{\tfrac{N+2}{N-2}}\right) \quad (II.46) $$

where ℎ is also the permutation of ±1.

Figure II.13. Integration points given by formula V (left) and comparison of the number of integration points with the theoretical minimum bound of formulae of degree 5 (right)

The integration points given by formula V of the second class, in the case of a three-dimensional Gaussian integration domain, are plotted in figure II.13. The efficiency of formula V is remarkable for higher dimensions, especially when the degree of accuracy of the cubature formula is relatively low. Indeed, the number of integration points required is less than 40 for formula V of degree 3 and 1000 for formula V of degree 5. This formula is efficient to accurately compute integrals with dimension up to 24. Despite this, the use of formula V in engineering is limited to a few applications. The main drawback of formula V is that the construction of the orthogonal arrays, especially for higher dimensions, is not a trivial task. In this work, the proposed formula V allows us to investigate the computation of multidimensional integrals with dimension up to 16.

Formula VI

The mathematical formulation of this formula is very similar to that of formula II. Based on a high-order unscented transformation (Zhang et al., 2014), it was first developed for the purpose of non-linear estimation in Kalman filtering. After that, it was used by (Xiao and Lu, 2018) to perform reliability analysis on some academic engineering problems. The results obtained show that formula VI can accurately estimate the high-order statistical moments of limit-state functions. The integration points of this formula can be divided into three types.
The first type is represented by one integration point 𝒖₀ located at the origin (0, …, 0) of the standard random space, with weight 𝑤₀:

$$ \boldsymbol{u}_0 = (0, \ldots, 0), \qquad w_0 = \frac{-2N^2 + (4-2N)\Delta^2 + 4(\Delta+1)N}{(N+\Delta)^2 (4-N)} \quad (II.47) $$

The second type is represented by 2𝑁 points, equidistant from the origin, located on the axes of the integration domain, and sharing the same weight 𝑤₁:

$$ \boldsymbol{u}_{j_1} = \sqrt{\frac{(4-N)(N+\Delta)}{\Delta+2-N}}\, \vec{\boldsymbol{e}}_{j_1}, \qquad \boldsymbol{u}_{j_1+N} = -\sqrt{\frac{(4-N)(N+\Delta)}{\Delta+2-N}}\, \vec{\boldsymbol{e}}_{j_1}, \qquad j_1 = 1, \ldots, N, \qquad w_1 = \frac{(\Delta+2-N)^2}{2(N+\Delta)^2 (4-N)} \quad (II.48) $$

where \(\vec{\boldsymbol{e}}_{j_1} = [0, \ldots, 0, 1, 0, \ldots, 0]^T\). The third type contains 2𝑁(𝑁 − 1) integration points lying on the diagonals of the planes defined by two coordinate axes and having the same weight 𝑤₂:

$$ \boldsymbol{u}_{j_2} = \sqrt{N+\Delta}\, \vec{\boldsymbol{e}}_{j_2}^{\,+}, \quad \boldsymbol{u}_{j_2+\frac{N(N-1)}{2}} = -\sqrt{N+\Delta}\, \vec{\boldsymbol{e}}_{j_2}^{\,+}, \quad \boldsymbol{u}_{j_2+N(N-1)} = \sqrt{N+\Delta}\, \vec{\boldsymbol{e}}_{j_2}^{\,-}, \quad \boldsymbol{u}_{j_2+\frac{3N(N-1)}{2}} = -\sqrt{N+\Delta}\, \vec{\boldsymbol{e}}_{j_2}^{\,-}, \quad j_2 = 1, \ldots, \frac{N(N-1)}{2}, \qquad w_2 = \frac{1}{(N+\Delta)^2} \quad (II.49) $$

Formula VI has the same efficiency as formulae III and IV since it requires 2𝑁² + 1 integration points. But, as can be seen in the above equations, it has a small particularity: a free parameter ∆ intervenes in the computation of the integration weights and points, which gives flexibility to formula VI, and some values can contribute to enhance its accuracy. Indeed, as shown in (Zhang et al., 2014) for the two-dimensional case, formula VI can theoretically capture the first six statistical moments of the input random variables when ∆ = 0.835 or ∆ = 19.165. Note that, when ∆ = 4, formula VI is nothing else than a third-degree Gauss-Hermite integration scheme.

Figure II.14. Integration points given by formula VI (left) and comparison of the number of integration points with the theoretical minimum bound of formulae of degree 5 (right)

In figure II.14, we can clearly see the impact that the parameter ∆ can have on the location of the integration points in the integration domain. When ∆ > 4, the distance to the origin of some integration points of formula VI is much larger than that of the corresponding integration points of the Gauss-Hermite scheme. As depicted in figure II.14, for ∆ = 19 some of the integration points are located around ±3 standard deviations from the mean of the input random variables. This can be very useful in reliability analysis, as it can help to capture the probabilistic content of the tails of the distributions of the input random variables. Note that there is no optimal value of the parameter ∆, except for the two-dimensional and three-dimensional cases, where ∆ is set to 0.835 and 1.417, respectively, to capture the first four statistical moments of the input random variables. For the other cases, (Zhang et al., 2014) have demonstrated that when ∆ is set to 2, formula VI provides a good stabilization of the numerical computation.

Numerical examples

In this section, two sets of numerical problems encountered when propagating uncertainty through models are addressed to conduct a comparative study between the cubature formulae presented in the previous section. The main objective is to illustrate their ability to provide probabilistic characteristics of model responses, including tail distributions, reliability, and sensitivity indices. The first set of problems, related to explicit models, concerns elementary analytical models that represent either a purely mathematical function or a problem related to mechanical analysis.
The second set, related to implicit models, includes computationally intensive mechanical models whose responses are available through numerical calculations. Where the exact solution does not exist, estimates of the quantities of interest given either by direct Monte-Carlo Simulation (MCS) or by a full tensor-product Gauss-Hermite Integration (GHI) scheme have been used to assess the accuracy and efficiency of cubature formulae I, II, III, IV, V and VI.

Explicit models

Purely mathematical integration problem

Let us consider a simple integrand function that is widely used in the literature to conduct benchmark studies (Xu and Rahman, 2004) on integration methods:

$$ f(\boldsymbol{x}) = \sqrt{1 + \boldsymbol{x}^T \boldsymbol{x} / 2} \quad (II.50) $$

where 𝑥_𝑖, 𝑖 = 1, …, 𝑁 are uncertain parameters represented by identically normally distributed random variables 𝑋_𝑖, 𝑖 = 1, …, 𝑁 with mean 0 and standard deviation 𝜎. This function has a bell shape around the origin which becomes more pronounced as the dimension 𝑁 increases, making it more difficult to integrate for higher dimensions. Thus, we first want to study the effect of the dimension 𝑁 on the accuracy of the results given by the six cubature formulae presented previously. The expected value, that is, the first order statistical moment of the function (II.50), is computed for increasing dimension 𝑁, and by assigning the standard deviation 𝜎 of the input random variables the value √2/2. The estimates given by 10⁵ MCS and the Gauss-Hermite (GH3) cubature scheme of level 3, taken here as the reference solutions, are reported in Table II.1. As can be seen, both methods give accurate results and, as expected, the convergence of MCS is slow but less impacted by the dimension of integration. Conversely, the efficiency of the Gauss-Hermite cubature scheme is closely related to the dimension of integration. Indeed, the number of required evaluations of the integrand grows exponentially with the dimension of integration. For instance, to compute an integral of dimension 10 with a good level of accuracy, 59049 evaluations of the integrand are required. The relative errors, taken as an indicator of accuracy and computed with respect to the estimates given by MCS, are plotted in figure II.15. As can be seen, all formulae give accurate estimates of the expected value of the function (II.50) for integration dimensions up to 10, since the relative error does not exceed 6% in the worst case, recorded for formula IV and 𝑁 = 10. Except for formula IV, where the relative error grows exponentially, the accuracy of the other formulae seems to be less affected by the integration dimension 𝑁, especially for formula V which gives the lowest relative error.

Figure II.15. Evaluation of the accuracy and the efficiency of cubature formulae I to VI

In order to push our comparative analysis a little further, an efficiency index, noted 𝐸𝑓, is introduced. It is defined as a function of the number of evaluations of the integrand 𝑀 and the relative error 𝜖 computed previously, and reads as follows:

$$ Ef = \frac{1}{M_\epsilon} = \frac{1}{M \times \epsilon} \quad (II.51) $$

As can be seen from equation (II.51), the smaller the quantity 𝑀 × 𝜖, the better the efficiency of the considered integration scheme. In addition, it can be noted that the number of evaluations of the integrand 𝑀 is commonly used to evaluate the efficiency of a cubature scheme.
Here, a weighted or effective one, denoted 𝑀_𝜖, is used. The idea behind this is to include in the efficiency measurement the effect of a possible loss of accuracy when the dimension of integration increases. Moreover, this efficiency index can be very useful in identifying the cubature scheme that offers the best balance between accuracy and efficiency. The efficiency index 𝐸𝑓 is plotted in figure II.15. As can be seen, except for formulae I and V which exhibit some particular behavior, the efficiency of all the other cubature formulae decreases with the dimension of integration. As expected, this loss of efficiency is more significant in the case of the Gauss-Hermite integration scheme. Formula V gives the best balance between accuracy and efficiency when dealing with higher dimension integration problems. The highest efficiency of formula V is observed for 𝑁 = 9, followed by a significant decrease for 𝑁 = 10. Indeed, although the level of accuracy remains relatively the same for 𝑁 = 9 and 𝑁 = 10, the number of evaluations of the integrand increases suddenly from 146 for 𝑁 = 9 to 276 for 𝑁 = 10. This significant growth of the number of integrand evaluations is explained in section 4.5 and can be clearly seen in figure II.13. Compared to the Gauss-Hermite integration scheme, formula V is 1666 and 690 times more efficient for 𝑁 = 9 and 𝑁 = 10, respectively. Unfortunately, formula V is only able to deal with integration problems of moderate dimension. As explained in section 4.5, this is mainly due to the fact that the construction of the orthogonal arrays, used to derive the integration points, is not a trivial task. Formula I also gives a good balance between accuracy and efficiency, even for lower dimensions where the other cubature formulae fail to give better results than the Gauss-Hermite integration scheme. Unfortunately, formula I is only able to compute integrals with dimension up to 7. As a conclusion for this first example, dealing with a purely mathematical integration problem, we have shown that cubature formulae I-VI are by far more efficient than traditional integration methods such as MCS and full tensor-product GHI. In addition, their efficiency is less affected by the dimension of integration. Hence, these cubature formulae could be serious candidates for computing high-dimensional integrals such as those encountered in uncertainty propagation problems.
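A minimal sketch in the spirit of this first benchmark is given below: it reuses the stroud_formula_III function sketched earlier to estimate the expected value of the function (II.50), compares it to a crude MCS reference, and forms the efficiency index of equation (II.51); the sample sizes and the choice of formula III are illustrative.

```python
# Benchmark sketch: E[sqrt(1 + x'x/2)], x_i ~ N(0, sigma), sigma = sqrt(2)/2.
import numpy as np

rng = np.random.default_rng(0)
N, sigma = 5, np.sqrt(2.0) / 2.0
f = lambda x: np.sqrt(1.0 + 0.5 * np.dot(x, x))

# Reference by crude MCS (10**5 model evaluations)
samples = rng.normal(0.0, sigma, size=(10**5, N))
ref = np.mean([f(x) for x in samples])

# Formula III works in the standard space, so rescale: x = sigma * u
est = stroud_formula_III(lambda u: f(sigma * u), N)   # 2N**2 + 1 = 51 runs

eps = abs(est - ref) / ref                            # relative error
M = 2 * N**2 + 1
print(f"ref={ref:.5f} est={est:.5f} eps={eps:.2e} Ef={1.0/(M*eps):.1f}")
```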
Analytical mechanical models with mixed random variables

In the first example, the uncertain parameters were represented by identically normally distributed random variables having the same statistical characteristics. This situation is rarely encountered in real-life problems since the uncertain parameters often follow different kinds of distributions. In addition, the use of the normal distribution in uncertainty modeling could be viewed as a special case due to its mathematical properties, which could simplify the probabilistic computations, and consequently may give a truncated picture of the ability of the uncertainty propagation method. In this section, an analysis is conducted, through uncertainty propagation problems involving uncertain parameters whose variability is modeled with a mixture of normal and non-normal random variables, to evaluate the accuracy and the efficiency of the cubature formulae I-VI when dealing with such problems. Three analytical models taken from the literature are considered. The first one, 𝐺₁(𝒙), obtained from (Melchers and Ahammad, 2004), represents a purely mathematical function. The second, 𝐺₂(𝒙), and the third, 𝐺₃(𝒙), are obtained from (Hong and Lind, 1996) and (Penmetsa and Grandhi, 2003), respectively, and both represent performance functions related to structural reliability problems. The mathematical formulations of these models are given by equations (II.52) to (II.54). For each of the performance functions 𝐺_𝑖(𝒙), 𝑖 = 1, 2, 3, the objective is to compute its first four statistical moments using the cubature formulae I-VI. To evaluate their accuracy, the estimates obtained are compared to those obtained by 10⁵ MCS, taken here as a reference solution, since no closed-form solution is available for the three problems. The MCS results for the mean 𝜇, the standard deviation 𝜎, the skewness 𝛾 and the kurtosis 𝜅 of the three performance functions are listed in table II.3. In figure II.16, the ratios of the estimates of the first four statistical moments obtained by cubature formulae I-VI to the reference solution given by MCS are plotted. These ratios are considered here as an indicator of accuracy. Indeed, the closer the value of this ratio is to 1, the more accurate the prediction given by cubature formulae I-VI. As can be seen, for all three performance functions, all cubature formulae give accurate results for the mean and the standard deviation, since the corresponding ratios 𝜇̂/𝜇_MCS and 𝜎̂/𝜎_MCS are very close to 1. In addition, we can clearly observe that the use of a mixture of different types of random variables has a weak effect on the accuracy of the first two statistical moments of the performance functions. For higher order statistical moments such as the kurtosis and the skewness, a divergence is observed between the estimates given by the cubature formulae I-VI and the reference solution obtained by MCS. This discrepancy is related to the order of the statistical moment to be computed and to the use of a mixture of normal and non-normal random variables to model the uncertain parameters. The mixture of random variables used in the performance function 𝐺₁(𝒙) appears to have the most significant impact on the accuracy of the statistical moment estimates. Indeed, as can be seen in table II.2, the variability of the performance function 𝐺₁(𝒙) is induced by the most heterogeneous combination of random variables compared to the other performance functions 𝐺₂(𝒙) and 𝐺₃(𝒙). It is important to notice that the probabilistic computations are performed in the standard random space rather than the physical random space, which means that the real distributions are rewritten as functions of standard normal distributions. This may mitigate the effect that using a mixture of different kinds of random variables might have on the accuracy.
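The mapping to the standard random space mentioned above can be sketched as follows, assuming scipy is available; the lognormal and Gumbel marginals are illustrative placeholders, not the actual distributions of 𝐺₁, 𝐺₂ and 𝐺₃.

```python
# Isoprobabilistic transformation sketch: x_i = F_i^{-1}(Phi(u_i)).
import numpy as np
from scipy import stats

def to_physical(u, marginals):
    """Map a standard normal point u to the physical random space."""
    phi = stats.norm.cdf(u)                      # Phi(u_i), component-wise
    return np.array([m.ppf(p) for m, p in zip(marginals, phi)])

marginals = [stats.lognorm(s=0.1, scale=np.exp(5.0)),   # lognormal X1 (assumed)
             stats.gumbel_r(loc=100.0, scale=12.0)]     # Gumbel X2 (assumed)
u = np.array([0.5, -1.0])                 # a point in the standard space
print(to_physical(u, marginals))
```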
In the next step, we assess the ability of the proposed approach to perform a reliability analysis, where the objective is to compute the probability of failure, or the reliability index, related to a performance function. Usually, these quantities of interest are derived from probabilistic information provided by the tails of the distribution of the random variable representing the variability of the performance function. Hence, the PDF should be accurate enough in the vicinity of the tails to ensure reliable probabilistic information. The CDFs built from the estimated statistical moments are plotted in a logarithmic scale on the vertical axis to emphasize the behavior around the tails. As can be seen, for the performance functions 𝐺₂(𝒙) and 𝐺₃(𝒙), the CDFs built from the moments-based technique agree well with the reference CDFs obtained from 10⁵ MCS, but with a small deviation around the distribution tail. In addition, the CDFs constructed from the statistical moments obtained from cubature formulae I-VI are in good agreement with each other, since the gap between them is not significant. They can provide, in the cases of the performance functions 𝐺₂(𝒙) and 𝐺₃(𝒙), accurate estimates of failure probabilities of an order of magnitude of 10⁻³. However, for the performance function 𝐺₁(𝒙), a large deviation is observed between the CDFs built from the moments-based technique and the one obtained from 10⁵ MCS, over almost the entire range of the distribution. This is mainly due to the inaccurate estimates of the higher order statistical moments. The discrepancy between the CDFs derived from the statistical moments given by cubature formulae I-VI is also significant. We can clearly observe that formula II gives the best results, whereas formula III gives the worst ones. In addition to the CDF curves, figure II.18 shows the ratios of the estimates of the reliability index given by the proposed method, denoted by 𝛽̂, to the reliability index given by MCS, denoted by 𝛽_MCS. These reliability indices are computed from the failure probabilities estimated from the corresponding CDFs constructed previously, using the following equations:

$$ \hat{\beta} = -\Phi^{-1}(\hat{P}_f), \qquad \beta_{MCS} = -\Phi^{-1}(P_{f,MCS}) \quad (II.55) $$

where Φ denotes the standard normal cumulative distribution function. This example clearly shows the ability of the proposed method to accurately perform either statistical moments or reliability analysis on explicit physical models related to real-life engineering problems involving different kinds of mixtures of normal and non-normal random variables. This accuracy is reached at a very low computational effort compared to classical uncertainty propagation strategies such as those based on Gauss-Hermite quadrature, MCS and FORM. It may also be retained that problems involving a highly heterogeneous mixture of random variables should be treated with care, and particular attention should be paid to the choice of the cubature formula used for the computation of the integral quantities.

Implicit models

Deflection of a truss structure

The following example deals with a planar truss structure, as shown in figure II.19. This problem was first introduced by (Lee and Kwak, 2006) to conduct reliability analysis based on the response surface method. After that, it has been widely used by other authors (Blatman, 2009; Konakli and Sudret, 2016; Xu and Kong, 2018) to conduct different kinds of probabilistic analysis. The reference values of the statistical moments are obtained by 10⁵ crude MCS directly performed on the Finite Element Model (FEM) of the truss structure.
Figure II.20 displays the convergence of MCS for the estimates of the first four statistical moments of the mechanical response. As can be seen, convergence is well achieved since the estimates of the quantities of interest no longer vary after 10⁵ runs of the FEM. The PDF of the mid-span deflection is also plotted and compared to some standard distributions. As we can observe, the PDF of the mid-span deflection is accurately approximated by the lognormal distribution. This is a remarkable result since an analytical formulation of the PDF is now available and the probability of occurrence of each possible event (i.e., each possible value of the mid-span deflection) can be easily derived. Moreover, the CDF of the mechanical response is built by integrating the PDF previously obtained by MCS. This CDF can be directly used to perform a serviceability reliability analysis of the truss structure, defined with respect to the exceedance of a maximum allowable deterministic deflection.

Figure II.20. Truss structure: convergence of MCS

As can be seen in table II.5, all cubature formulae give accurate estimates of the first two statistical moments of the mechanical response, since in the worst case the relative error is about 0.05% and 0.18% for the mean and the standard deviation, respectively. Formula II provides the best balance between accuracy and efficiency. The reliability analysis results are listed in table II.6 and compared to the reference solutions given by Importance Sampling (IS). For comparison purposes, the reliability analysis is also performed by FORM, whose results are also reported in table II.6. The relative error, denoted by 𝜖_𝛽, is defined as the difference between the reliability index estimates given by either the proposed method or FORM, and those provided by IS. As can be seen from the results reported in table II.6, the proposed method, based on cubature formula VI, gives accurate results since the relative error on the reliability index estimation does not exceed 2.51% in the worst case, recorded for a threshold deflection of 0.1 m. Moreover, as we can observe, this accuracy is less affected by the magnitude of the failure probability compared to the FORM method, where the relative error increases with the magnitude of the failure probability. In the following, we conduct a sensitivity analysis of the truss structure with respect to the mid-span deflection. The main purpose is to evaluate the contribution of each uncertain parameter to the variability of the mechanical response. Hence, the quantities of interest are the first-order and total Sobol sensitivity indices, denoted by 𝑆₁ and 𝑆_𝑇, respectively. The reference solution, listed in table II.7, is obtained by crude MCS based on 10⁶ samples for each Sobol index. These preliminary results show that the effect of interactions between uncertain parameters is small compared to the effect of each uncertain parameter considered separately, since the values of the total and first-order Sobol indices are very close. Note that for uncertain parameters with a non-significant effect, the total Sobol indices are smaller than the corresponding first-order Sobol indices. This result is contradictory since the total indices should be larger than the first-order ones; it is a consequence of the small bias of the MCS estimator, as explained in (Owen, 2013).
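For reference, a minimal sketch of a crude MCS estimation of first-order and total Sobol indices is given below. It uses a Saltelli/Jansen pick-freeze scheme, which is an assumption since the text does not name the estimator employed, and a toy model g in place of the FEM.

```python
# Pick-freeze MCS sketch for first-order (S1) and total (ST) Sobol indices.
import numpy as np

def sobol_mc(g, N, n, rng):
    A = rng.standard_normal((n, N))
    B = rng.standard_normal((n, N))
    fA, fB = g(A), g(B)
    V = np.var(np.concatenate([fA, fB]))        # total variance of the response
    S1, ST = np.empty(N), np.empty(N)
    for i in range(N):
        ABi = A.copy(); ABi[:, i] = B[:, i]     # freeze all columns but i
        fABi = g(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / V          # Saltelli-type estimator
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / V    # Jansen-type estimator
    return S1, ST

g = lambda X: X[:, 0] + 2.0 * X[:, 1] + X[:, 0] * X[:, 1]   # toy model
S1, ST = sobol_mc(g, N=2, n=10**5, rng=np.random.default_rng(1))
print(S1, ST)   # analytically S1 = (1/6, 4/6) and ST = (2/6, 5/6)
```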
As shown in section 2.3, the computation of the total Sobol indices involves the evaluation of a multidimensional integral composed of two parts (see equation II.22): the inner part is a one-dimensional integral which is computed by the Gauss-Hermite quadrature scheme of level 3 (GH3), and the outer part, corresponding to an (N−1)-dimensional integral, is computed by one of the cubature formulae II-VI. The first-order Sobol indices also involve the evaluation of a multidimensional integral composed of two parts. But conversely, the inner part corresponds to an (N−1)-dimensional integral that is computed by one of the cubature formulae II-VI, and the outer part is a one-dimensional integral computed by the Gauss-Hermite quadrature scheme of level 3. As can be seen, the proposed method, independently of the cubature formula used, ranks the uncertain parameters in the right order of importance, as the reference method does. This information is quite important since it allows us to distinguish parameters with a significant effect on the mechanical response from those having a weak effect. For the uncertain parameters with dominant contributions, which are the cross-section areas 𝑆₁ and the Young's modulus 𝐸₁ of the horizontal bars, the first-order indices obtained by the proposed method are in good agreement with the reference ones. Indeed, the relative error varies in the ranges [0.1%, 1.8%] and [0.13%, 1.74%], respectively, for the first-order indices corresponding to the uncertain parameters 𝑆₁ and 𝐸₁. We can also observe that the proposed method based on cubature formula V provides the most accurate results over all the first-order indices, compared to the other cubature formulae, since the maximum relative error is around 5%, recorded on the estimate of the first-order index related to the uncertain parameter with the weakest main effect, 𝑃₁. For the total indices, observations quite similar to those made for the first-order indices can be made. Indeed, for the uncertain parameters having a significant effect on the variability of the mechanical response, which are also the cross-section areas 𝑆₁ and the Young's modulus 𝐸₁ of the horizontal bars, the proposed method, independently of the cubature formula used for the evaluation of the integral quantities, gives accurate estimates of the total indices since the relative error varies in the ranges [0.1%, 1.8%] and [0.16%, 2.11%], respectively, for the total indices corresponding to the uncertain parameters 𝑆₁ and 𝐸₁. Among the proposed cubature formulae, the most accurate estimates are given by formula VI, since the corresponding relative error is in the range [0.1%, 6.25%]. Note that the computation of the first-order or the total Sobol indices requires the same number of FEM calls when the same cubature formula is used to evaluate the integrals involved in each case. Table II.8 lists the computation costs of the proposed method based on cubature formulae II-VI, compared to those required by the Full Tensor-product Gauss-Hermite method of level 3 (FTGH3) and MCS. As can be observed, the proposed method is by far the most efficient for computing the Sobol sensitivity indices. The cubature formulae IV and V, which provide the most accurate results, require 5092 and 4657 runs of the FEM, respectively.
Through this example, we have demonstrated the ability of the proposed method, based on different efficient cubature formulae, to perform an uncertainty propagation analysis on an implicit mechanical model involving moderate probabilistic dimensionality (i.e., number of random variables representing the uncertain parameters). It has been shown that the three possible types of uncertainty propagation analysis, which are statistical moments and distribution analysis, reliability analysis and sensitivity analysis, can be addressed with a low computational cost. It may be retained that all cubature formulae give accurate estimates of the mean and the standard deviation of the quantities of interest. However, for higher order statistical moments, cubature formulae III and VI seem to be the most accurate. This high level of accuracy on the estimates of the first four statistical moments makes possible the use of moment-based techniques to build PDFs that are particularly accurate in the vicinity of the distribution tails, which are useful for reliability analysis. In the case of sensitivity analysis, cubature formulae VI and V give the closest estimates to the reference solutions, for the total and the first-order Sobol indices, respectively.

Heat conduction in a square plate

Let us consider a two-dimensional stationary heat-conduction problem in a square plate defined on the spatial domain Ω = (−0.5 m, 0.5 m) × (−0.5 m, 0.5 m), as shown in figure II.24. The thermal conductivity 𝑘(𝒛, 𝜔) is modeled by a random field written as a function of a standard normal random field 𝑢(𝒛, 𝜔) with zero mean and unit standard deviation, where 𝜔 is a parameter which allows to emphasize the random nature of 𝑘(𝒛, 𝜔). The field 𝑢(𝒛, 𝜔) is governed by the following square exponential autocorrelation function:

$$ \rho(\boldsymbol{z}_1, \boldsymbol{z}_2) = \exp\!\left[ -\frac{\| \boldsymbol{z}_1 - \boldsymbol{z}_2 \|^2}{l_c^2} \right] \quad (II.60) $$

where 𝑙_𝑐 denotes the correlation length, which is set equal to 0.2 m. The standard normal field 𝑢(𝒛, 𝜔) is discretized using the Expansion Optimal Linear Estimation (EOLE) method (Li and Der Kiureghian, 1993), and its 𝑀th order approximation, denoted by 𝑢̃(𝒛, 𝜔), reads:

$$ \tilde{u}(\boldsymbol{z}, \omega) = \sum_{i=1}^{M} \frac{\xi_i(\omega)}{\sqrt{\lambda_i}}\, \boldsymbol{\phi}_i^T \boldsymbol{C}_{\boldsymbol{z}\boldsymbol{\zeta}}(\boldsymbol{z}) \quad (II.61) $$

where 𝜉_𝑖(𝜔), 𝑖 = 1, …, 𝑀 are independent standard normal random variables, (𝜆_𝑖, 𝝓_𝑖), 𝑖 = 1, …, 𝑀 are the eigenvalues and eigenvectors of the autocorrelation matrix evaluated at the grid nodes, and 𝑪_{𝒛𝜻}(𝒛) is the vector of covariances between the field value at point 𝒛 and its values at the grid nodes. According to the EOLE method, and in the case of a square exponential autocorrelation function, the element size of the mesh used to discretize the random field must be in the range [𝑙_𝑐/3, 𝑙_𝑐/2] (Sudret and Der Kiureghian, 2000). Based on this rule, we use a square uniform mesh containing 169 elements of size 0.08 m. Furthermore, to obtain an accurate EOLE approximation of the random field 𝑢(𝒛, 𝜔), the truncation order 𝑀 in equation (II.61) is set to 53, according to a criterion requiring the relative variance error of the EOLE approximation to remain below a prescribed tolerance.
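A minimal sketch of the EOLE discretization just described is given below, assuming numpy. The grid follows the 0.08 m element size and the 0.2 m correlation length quoted above, while the 99% variance threshold used for the truncation is an assumed value.

```python
# EOLE sketch for the standard normal field u(z, w) on the square plate.
import numpy as np

lc = 0.2                                    # correlation length (m)
x = np.linspace(-0.5, 0.5, 14)              # 13x13 elements, ~0.08 m size
zz = np.array([(a, b) for a in x for b in x])
d2 = np.sum((zz[:, None, :] - zz[None, :, :]) ** 2, axis=-1)
C = np.exp(-d2 / lc**2)                     # autocorrelation matrix (II.60)

lam, phi = np.linalg.eigh(C)                # eigenpairs, ascending order
lam, phi = lam[::-1], phi[:, ::-1]          # reorder to descending
M = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99) + 1   # assumed 99%
print("EOLE truncation order:", M)

# One realization of u_hat(z) = sum_i xi_i / sqrt(lam_i) * phi_i' C(z, .)
rng = np.random.default_rng(2)
xi = rng.standard_normal(M)
u_hat = C @ (phi[:, :M] / np.sqrt(lam[:M])) @ xi    # field at the grid nodes
```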
Because of the spatially varying uncertainty of the thermal conductivity, the temperature within the square plate is also an uncertain, spatially varying parameter that can be conveniently modeled by a random field. The temperature field 𝑇(𝒛, 𝜔) is computed from the FEM implemented in the Cast3M software. As shown in figure II.28, the convergence of the first four statistical moments is well achieved with 10⁵ samples of the model response and, as expected, the convergence of the higher order statistical moments is slower than the convergence of the mean and the standard deviation. In addition, the PDF of the model response is constructed and compared to some standard distributions. The comparison shows that the PDF fits a lognormal distribution. The CDF is also simply deduced from the integration of the corresponding PDF, allowing, if desired, a reliability analysis to be performed without spending additional computational effort (i.e., no additional runs of the FEM are required).

Figure II.28. Heat conduction in a square plate: convergence of MCS

The results depicted in table II.9 show that all the cubature formulae employed in this example give accurate estimates of the first two statistical moments of the model response 𝑇̃_Ω2, since the maximum error is around 0.032% and 0.2% for the mean and the standard deviation, respectively. Cubature formula II appears to be the most economical integration scheme since it requires only 2971 runs of the FEM to achieve the best recorded accuracy compared to the other cubature formulae. For the skewness and the kurtosis, cubature formulae II and VI give the closest estimates to the reference solution. The CDFs built from these statistical moments are accurate for tail probabilities of an order of magnitude of 10⁻², which can be useful to conduct reliability analysis. Indeed, let us consider the following performance function:

$$ G(\boldsymbol{x}) = T_{threshold} - \tilde{T}_{\Omega 2}(\boldsymbol{x}) \quad (II.63) $$

where 𝑇_threshold is a deterministic parameter that indicates a threshold temperature that should not be exceeded in order to ensure the integrity of the square plate with respect to the loading conditions. A parametric study is conducted in the following, where the threshold temperature varies from 6 °C to 7.5 °C. The failure probabilities 𝑃̂_𝑓 and the corresponding generalized reliability indices 𝛽̂ are computed and listed in table II.10. Note that the failure probabilities are directly obtained from the CDFs corresponding to cubature formulae II, IV and VI, which give the most accurate results, as demonstrated previously. For threshold temperatures 𝑇_threshold ∈ {6, 6.5} °C, which lead to failure probabilities of magnitude 10⁻², the proposed method, regardless of the cubature formula used to compute the multidimensional integrals, gives accurate estimates of the reliability indices since the relative error varies in the range [0.2%, 1.84%], which can be explained by the higher accuracy achieved for tail probabilities of magnitude 10⁻². The estimates of the reliability indices corresponding to failure probabilities of magnitude 10⁻³ are also in agreement with the reference solution since the maximum relative error is less than 5%, except for the threshold temperature 𝑇_threshold = 7.5 °C, where the relative error is about 6.32% for cubature formula VI. For higher threshold temperatures, however, a relatively significant discrepancy is observed between the estimates of the reliability indices given by the proposed method and those of MCS, since the relative error is in the range [6.06%, 9.01%]. Despite this, these results can be considered reasonable preliminary estimates of low failure probabilities since they require an affordable computational effort. Indeed, due to the high dimensionality of this problem, FORM and SORM approaches are inefficient, even though they are usually able to provide inexpensive estimates of the quantities of interest required in reliability analysis. The above analysis clearly demonstrates the ability of the proposed method to effectively handle the propagation of uncertainty through a complex and time-consuming model with high probabilistic dimensionality. With accurate estimates of the first four statistical moments, the PDF of the model response is also accurately built across the entire range of the distribution, including the tails, simply using a moments-based technique. This then allows us to perform reliability analysis for various demand thresholds (see section 2.4) of a prescribed performance function describing the serviceability of the square plate subjected to heat conduction.
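The parametric reliability study above can be mimicked with the following sketch; the response samples are synthetic lognormal placeholders, since the actual samples come from the Cast3M model, and the generalized reliability index follows 𝛽 = −Φ⁻¹(𝑃_𝑓) as in equation (II.55).

```python
# Reliability sketch: Pf and beta for several threshold temperatures.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
T = rng.lognormal(mean=1.6, sigma=0.12, size=10**5)   # placeholder response

for T_threshold in (6.0, 6.5, 7.0, 7.5):
    Pf = np.mean(T > T_threshold)          # P[G(x) = T_threshold - T < 0]
    beta = -norm.ppf(Pf)                   # generalized reliability index
    print(f"T_th={T_threshold:.1f} C  Pf={Pf:.3e}  beta={beta:.3f}")
```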
Conclusion

In this chapter, we have briefly recalled the general principles of the uncertainty propagation problem through models representing physical phenomena. Depending on the expected results, three types of analysis can be performed: statistical moments and distribution analysis, sensitivity analysis, or reliability analysis. For these three types of analysis, the mathematical formulations of the quantities of interest have been presented. It has been shown that the main issue is always the same, namely, to handle multidimensional integrals. Unfortunately, classical integration methods may lead to intractable computations, especially when these integrals have high dimensionality and the integrand is only available through an implicit model that requires computational time, which is often the case when dealing with engineering problems. A first attempt has been made in this chapter to overcome this difficulty, which consists in using efficient cubature schemes, where a limited number of integrand evaluations is required to obtain accurate estimates, instead of the classical greedy integration methods such as Monte-Carlo simulation and full tensor-product rules. Six cubature formulae have been identified in the literature, whose efficiency and accuracy have been assessed across several academic and engineering problems, ranging from a simple explicit mathematical model to an implicit model that is computationally demanding and involves a large number of uncertain parameters. The analysis performed in the first example reveals that, except for cubature formula IV, for which the relative error grows exponentially but which remains by far more efficient than Monte-Carlo simulation and Gauss-Hermite quadrature, the accuracy of the other formulae seems to be less affected by the integration dimension 𝑁, especially for formula V which gives the best balance between accuracy and efficiency. With the second example, which deals with uncertainty propagation through explicit mechanical models involving mixtures of different types of random variables, it has been proven that the proposed method is able to accurately carry out either statistical moments or reliability analysis, without requiring a huge computational effort. But it has been noticed that problems involving a highly heterogeneous mixture of random variables should be handled with care. The third example, which involves a time-consuming implicit mechanical model with a moderate probabilistic dimensionality (i.e., 10 scalar uncertain parameters), has revealed that the proposed method is able to tackle the three kinds of uncertainty propagation analysis with a high efficiency. Indeed, all cubature formulae give accurate estimates of the first two statistical moments of the quantities of interest. However, cubature formulae III and VI are the best candidates for the computation of higher order statistical moments. This high level of accuracy on the estimates of the first four statistical moments makes possible the use of moment-based techniques to build PDFs that are especially accurate in the vicinity of the distribution tails, which makes it possible to conduct reliability analysis without additional computational effort.
In the case of sensitivity analysis, it has been shown that the proposed method, regardless of the cubature formula used to handle the integral quantities, ranks the uncertain parameters in the right order of importance, as the reference method does. It appears that cubature formulae VI and V give the closest estimates to the reference solutions, respectively, for the total and the first-order Sobol indices. The fourth example has a high probabilistic dimensionality, since a 53rd order EOLE representation is needed to accurately model the space-dependent randomness of the thermal conductivity of the square plate. It has been shown that only cubature formulae II, III, IV and VI can be used to face such a high probabilistic dimensionality, and all of them give accurate estimates of the mean and the standard deviation. It appears that cubature formula II gives the best balance between accuracy and efficiency, and formula IV provides the closest estimates of the first four statistical moments to those given by MCS. Afterwards, the reliability indices corresponding to failure probabilities of magnitude up to 10⁻³ for a given performance function were obtained without additional runs of the FEM and were able to guarantee a relative error below 5%. The analysis performed in this chapter clearly demonstrates the strong potential of the proposed method to conduct various types of uncertainty propagation analysis with a high level of accuracy. We highlight the remarkable computational cost savings provided by the efficient cubature formulae used to handle the integral quantities required in probabilistic analysis, compared to most classical integration schemes such as MCS and its variants, the full tensor-product quadrature and the sparse grid method. Despite these enhancements, we stress at the same time the need for further studies to establish the efficiency and the accuracy of the proposed approach to handle uncertainty propagation problems with a much higher level of complexity. Indeed, the number of evaluations of the physical model varies from a few hundred to a few thousand to handle uncertainty propagation problems with moderate and high probabilistic dimensionality, respectively. This computational cost would be unaffordable for time-consuming deterministic physical models such as those encountered in fatigue fracture mechanics, for instance. Naturally, the question we have to ask ourselves is: is there a way to further improve the efficiency of the proposed method? We try to answer this question in the next chapter of the thesis.

Chapter III: Unified approach for uncertainty propagation analysis

Let 𝑌 = 𝑓(𝑿) denote the random model response, a quantity of interest taken for the sake of simplicity as a scalar. Let us also assume that the random variable 𝑌, with probability density function 𝑝_𝑌(𝑦), representing the variability of the model response 𝑦 induced by the randomness of the input parameters, has a finite variance, and that the components of the 𝑁-dimensional random variable 𝑿 = {𝑋₁, …, 𝑋_𝑁}^𝑇 are statistically independent. The PCE-based metamodel of 𝑌 = 𝑓(𝑿) thus reads (Xiu and Karniadakis, 2002):

$$ Y = f(\boldsymbol{X}) \approx f_{PCE}(\boldsymbol{X}) = \sum_{k=0}^{P-1} a_k\, \boldsymbol{\Psi}_{\boldsymbol{\alpha}_k}(\boldsymbol{X}) \quad (III.1) $$
where 𝑃 denotes the number of terms in the PCE, 𝜶_𝑘 = (𝛼_{𝑘,1}, …, 𝛼_{𝑘,𝑁}), 𝑘 = 0, …, 𝑃 − 1 is a set of multi-indices, also called N-tuples of integers (i.e., 𝜶_𝑘 ∈ ℕ^𝑁), 𝜳_{𝜶_𝑘}, 𝑘 = 0, …, 𝑃 − 1 is a set of multivariate polynomials orthonormal with respect to 𝑝_𝑿(𝒙), whose total degree is |𝜶_𝑘| = 𝛼_{𝑘,1} + ⋯ + 𝛼_{𝑘,𝑁}, and 𝑎_𝑘, 𝑘 = 0, …, 𝑃 − 1 is a set of real valued deterministic coefficients to be determined. The above expansion is referred to as the full-PCE metamodel. It is shown in (Ghanem and Soize, 2004) that the latter converges to the true model response, in the sense of the ℒ²-norm, when the number of terms 𝑃 → +∞, that is:

$$ \lim_{P \to +\infty} \left\| Y - \sum_{k=0}^{P-1} a_k \boldsymbol{\Psi}_{\boldsymbol{\alpha}_k}(\boldsymbol{X}) \right\|_{\mathcal{L}^2}^2 = \lim_{P \to +\infty} \mathbb{E}\!\left[ \left( Y - \sum_{k=0}^{P-1} a_k \boldsymbol{\Psi}_{\boldsymbol{\alpha}_k}(\boldsymbol{X}) \right)^{\!2}\, \right] = 0 \quad (III.2) $$

where 𝔼[.] denotes the mathematical expectation. The size of the PCE-based metamodel given by equation (III.1), that is, the number of terms 𝑃 retained in the summation, can be determined by following one of the truncation schemes available in the PCE literature (Blatman, 2009). The most used truncation scheme consists in retaining the terms corresponding to the multivariate polynomials 𝜳_{𝜶_𝑘}, 𝑘 = 0, …, 𝑃 − 1 whose total degrees |𝜶_𝑘| = 𝛼_{𝑘,1} + ⋯ + 𝛼_{𝑘,𝑁} do not exceed a prescribed degree 𝑝, chosen to ensure a better accuracy of the metamodel. Based on this rule, the number of terms 𝑃 in the truncated PCE is given by:

$$ P = \frac{(p+N)!}{p!\, N!} \quad (III.3) $$

Equation (III.3) clearly shows that the number of terms in the PCE grows exponentially with 𝑁, which could induce an unaffordable computational cost for the determination of the unknown coefficients when dealing with uncertainty propagation problems of high probabilistic dimensionality, especially when the corresponding physical model is itself computationally time-demanding. Once the truncation degree 𝑝 has been chosen, the procedure used for setting up the PCE-based metamodel requires, first, an algorithm (Sudret and Der Kiureghian, 2000) allowing to generate the set of multi-indices 𝜶_𝑘, 𝑘 = 0, …, 𝑃 − 1 corresponding to the 𝑃 multivariate polynomials 𝜳_{𝜶_𝑘}, 𝑘 = 0, …, 𝑃 − 1 of respective degrees not greater than 𝑝, and, second, sets of univariate orthonormal polynomials 𝛹_{𝛼_{𝑘,𝑖}}, 𝑖 = 1, …, 𝑁, chosen with respect to each marginal distribution 𝑝_{𝑋_𝑖}(𝑥_𝑖), 𝑖 = 1, …, 𝑁 of the random variables 𝑋_𝑖, 𝑖 = 1, …, 𝑁. The unknown coefficients can then be computed by so-called non-intrusive approaches, which only require evaluations of the mechanical model at selected realizations of the uncertain parameters in the random space, without any need for adaptation of the governing equations related to the mechanical model. Non-intrusive approaches are themselves composed of two categories, namely projection and regression methods, which will be detailed in the next subsections.
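The truncation rule of equation (III.3) and the multi-index generation mentioned above admit a direct sketch; the function name multi_indices is illustrative.

```python
# Sketch of the truncated PCE basis size (III.3) and multi-index generation.
import itertools
from math import comb

def multi_indices(N, p):
    """All alpha in N^N with total degree |alpha| <= p."""
    return [a for a in itertools.product(range(p + 1), repeat=N)
            if sum(a) <= p]

N, p = 3, 4
alphas = multi_indices(N, p)
P = comb(p + N, N)                    # (p + N)! / (p! N!)
assert len(alphas) == P               # 35 terms for N = 3, p = 4
print(P, alphas[:5])
```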
Projection methods

a) Principle

Since the PCE-based metamodel is built in an 𝑁-dimensional standard random space, the polynomial chaos basis consists of 𝑁-variate Hermite polynomials. The orthonormality condition between the components of the polynomial chaos basis reads:

$$ \langle \boldsymbol{H}_{\boldsymbol{\alpha}_k}(\boldsymbol{U}), \boldsymbol{H}_{\boldsymbol{\alpha}_l}(\boldsymbol{U}) \rangle_{\mathcal{L}^2} = \mathbb{E}\!\left[ \boldsymbol{H}_{\boldsymbol{\alpha}_k}(\boldsymbol{U}) \cdot \boldsymbol{H}_{\boldsymbol{\alpha}_l}(\boldsymbol{U}) \right] = \delta_{\boldsymbol{\alpha}_k, \boldsymbol{\alpha}_l} \quad \forall \boldsymbol{\alpha}_k, \boldsymbol{\alpha}_l \in \mathbb{N}^N \quad (III.8) $$

where 𝛿_{𝜶_𝑘,𝜶_𝑙} denotes the Kronecker symbol, equal to 1 when 𝜶_𝑘 = 𝜶_𝑙 and 0 otherwise. The projection methods take advantage of the orthonormality of the truncated polynomial chaos basis 𝑯_{𝜶_𝑘}(𝑼), 𝑘 = 0, …, 𝑃 − 1, to compute the unknown coefficients 𝑎_𝑘, 𝑘 = 0, …, 𝑃 − 1. Indeed, referring to the orthonormality condition above, the projection of the PCE-based metamodel ℎ_PCE(𝑼) given by equation (III.6) onto the polynomial chaos basis 𝑯_{𝜶_𝑘}(𝑼), 𝑘 = 0, …, 𝑃 − 1 allows us to compute the unknown coefficients 𝑎_𝑘, 𝑘 = 0, …, 𝑃 − 1 using the following expression:

$$ \langle \boldsymbol{H}_{\boldsymbol{\alpha}_l}(\boldsymbol{U}), h_{PCE}(\boldsymbol{U}) \rangle_{\mathcal{L}^2} = \mathbb{E}\!\left[ \boldsymbol{H}_{\boldsymbol{\alpha}_l}(\boldsymbol{U}) \cdot \sum_{k=0}^{P-1} a_k \boldsymbol{H}_{\boldsymbol{\alpha}_k}(\boldsymbol{U}) \right] = \sum_{k=0}^{P-1} a_k\, \underbrace{\mathbb{E}\!\left[ \boldsymbol{H}_{\boldsymbol{\alpha}_l}(\boldsymbol{U}) \cdot \boldsymbol{H}_{\boldsymbol{\alpha}_k}(\boldsymbol{U}) \right]}_{=\, \delta_{\boldsymbol{\alpha}_k, \boldsymbol{\alpha}_l} \ \text{by (III.8)}} = a_l \quad (III.9) $$

It is clear from equation (III.9) that the coefficient 𝑎_𝑙 associated with the 𝑁-variate Hermite polynomial 𝑯_{𝜶_𝑙}(𝑼) is equal to the expected value of the weighted expansion 𝑯_{𝜶_𝑙}(𝑼)·ℎ_PCE(𝑼) of the approximation ℎ_PCE(𝑼) of the random model response 𝑌. Mathematically speaking, the expected value of a continuous 𝑁-variate random function is defined as an 𝑁-dimensional integral, so equation (III.9) can be reformulated as follows:

$$ a_l = \mathbb{E}\!\left[ \boldsymbol{H}_{\boldsymbol{\alpha}_l}(\boldsymbol{U}) \cdot h_{PCE}(\boldsymbol{U}) \right] = \int_{\mathbb{R}^N} \boldsymbol{H}_{\boldsymbol{\alpha}_l}(\boldsymbol{u}) \cdot h_{PCE}(\boldsymbol{u})\, \varphi_{\boldsymbol{U}}(\boldsymbol{u})\, d\boldsymbol{u} \quad (III.10) $$

Thanks to the above equation, where 𝜑_𝑼(𝒖) denotes the PDF of the 𝑁-dimensional normal variable 𝑼, the computation of the PCE coefficients is nothing else than the evaluation of a set of 𝑁-dimensional integrals, which can be performed by means of numerical integration schemes, such as those studied in Chapter II, whose key ingredient consists in approximating an integral by a weighted sum. For more details on the mathematical framework related to these integration schemes, the reader can refer to Chapter II of the thesis manuscript.

b) Monte-Carlo Simulation and variants

The simplest way to compute the 𝑁-dimensional integral defined by equation (III.10) is the use of simulation methods such as MCS and its variants. The basic idea is to generate a set of 𝑁-dimensional integration points 𝒖_𝑗 = (𝑢₁^𝑗, …, 𝑢_𝑁^𝑗), 𝑗 = 1, …, 𝑀, following a random sampling scheme, namely a pseudo-random number generator, with respect to the distributions of the uncertain parameters. Since the mathematical formulations of the quantities of interest (here, the 𝑁-dimensional integrals representing the coefficients of the PCE) are derived in the standard random space, the integration points are sampled with respect to an 𝑁-dimensional normal distribution 𝜑_𝑼(𝒖). Furthermore, since all points belonging to the random space have the same probability of being sampled, 𝑤_𝑗 = 1/𝑀, 𝑗 = 1, …, 𝑀, the coefficient 𝑎_𝑙 related to the 𝑁-variate Hermite polynomial 𝑯_{𝜶_𝑙}(𝑼) can be estimated by the following weighted sum:

$$ a_l = \int_{\mathbb{R}^N} \boldsymbol{H}_{\boldsymbol{\alpha}_l}(\boldsymbol{u}) \cdot h_{PCE}(\boldsymbol{u})\, \varphi_{\boldsymbol{U}}(\boldsymbol{u})\, d\boldsymbol{u} \approx \sum_{j=1}^{M} w_j\, \boldsymbol{H}_{\boldsymbol{\alpha}_l}(\boldsymbol{u}_j) \cdot f \circ T(\boldsymbol{u}_j) = \frac{1}{M} \sum_{j=1}^{M} \boldsymbol{H}_{\boldsymbol{\alpha}_l}(\boldsymbol{u}_j) \cdot f \circ T(\boldsymbol{u}_j) \quad (III.11) $$

where 𝑤_𝑗, 𝑗 = 1, …, 𝑀 are the respective weights of the integration points 𝒖_𝑗, 𝑗 = 1, …, 𝑀, 𝑇 is the isoprobabilistic transformation allowing to transform the integration points, first sampled in the standard random space, into a set of points 𝒙_𝑗 = 𝑇(𝒖_𝑗), 𝑗 = 1, …, 𝑀 belonging to the original physical random space, and 𝑓 is the mathematical mapping representing the mechanical model. MCS is robust and converges for any ℒ²-function. The associated error 𝜖_MCS, used to assess the accuracy of the estimates provided by MCS, reads:

$$ \epsilon_{MCS} = \sqrt{ \frac{ \mathbb{V}\!\left[ \boldsymbol{H}_{\boldsymbol{\alpha}_l}(\boldsymbol{u}) \cdot f \circ T(\boldsymbol{u}) \right] }{M} } \quad (III.12) $$

where 𝕍[.] denotes the variance operator. Although the convergence of MCS is less affected by the dimensionality and the mathematical rank (i.e., in statistical moments analysis, for instance, the mean and the central variance are respectively represented by 1st and 2nd order ranked integrals) of the integral to be estimated, it is clear from equation (III.12) that the error 𝜖_MCS decreases as 1/√𝑀, which reveals the major drawback of MCS, that is, its slow convergence, which makes its application impossible when the evaluation of the integrand is computationally time-demanding. To enhance the convergence of MCS, advanced sampling schemes such as Latin hypercube sampling (McKay et al., 1979) and quasi-random numbers (Niederreiter, 1992) can be used, which provide integration points with a better filling of the random space than pseudo-random number generators. Unfortunately, this enhancement is not yet sufficient to allow the use of MCS to handle very greedy mechanical models, such as those dealing with fatigue fracture problems.
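A minimal sketch of the MCS estimate of equation (III.11) for a single coefficient is given below; the model f∘T is a toy polynomial defined directly in the standard space (identity transformation), an illustrative assumption.

```python
# MCS sketch of one PCE coefficient a_l, per equation (III.11).
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def H_alpha(u, alpha):
    """Normalized multivariate Hermite polynomial H_alpha(u)."""
    val = 1.0
    for ui, k in zip(u, alpha):
        c = np.zeros(k + 1); c[k] = 1.0            # select He_k
        val *= hermeval(ui, c) / math.sqrt(math.factorial(k))
    return val

f_T = lambda u: u[0] ** 2 + u[0] * u[1]            # toy model, T = identity
alpha = (1, 1)                                     # coefficient of He_1*He_1

rng = np.random.default_rng(4)
U = rng.standard_normal((10**5, 2))
a_l = np.mean([H_alpha(u, alpha) * f_T(u) for u in U])
print(a_l)     # ~1.0, since u1*u2 = He_1(u1)*He_1(u2)
```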
c) Full tensor-product cubature

An alternative to MCS for the computation of the PCE coefficients is the use of full tensor-product integration schemes. Accordingly, the coefficient 𝑎_𝑙 can be estimated by the following 𝑀th order isotropic Gauss-Hermite full tensor-product integration formula, since we work in the standard random space:

$$ a_l = \int_{\mathbb{R}^N} \boldsymbol{H}_{\boldsymbol{\alpha}_l}(\boldsymbol{u}) \cdot h_{PCE}(\boldsymbol{u})\, \varphi_{\boldsymbol{U}}(\boldsymbol{u})\, d\boldsymbol{u} \approx \sum_{j_1=1}^{M} \cdots \sum_{j_N=1}^{M} w_1^{j_1} \cdots w_N^{j_N}\, \boldsymbol{H}_{\boldsymbol{\alpha}_l}(u_1^{j_1}, \ldots, u_N^{j_N}) \cdot f \circ T(u_1^{j_1}, \ldots, u_N^{j_N}) \quad (III.13) $$

where 𝑢_𝑖^{𝑗_𝑖}, 𝑗_𝑖 = 1, …, 𝑀 are the selected integration points in the 𝑖th direction of the standard random space, defined as the roots of a univariate Hermite polynomial 𝐻_𝑀(𝑢) of degree 𝑀, and 𝑤_𝑖^{𝑗_𝑖}, 𝑗_𝑖 = 1, …, 𝑀 are the corresponding weights. When a PCE-based metamodel ℎ_PCE of degree 𝑝 is used to approximate the random model response of interest, the integrand 𝑯_{𝜶_𝑙}(𝒖)·ℎ_PCE(𝒖) in equation (III.10) is consequently a polynomial of degree 𝑝 + |𝜶_𝑙| ≤ 2𝑝 (with |𝜶_𝑙| = 𝛼_{𝑙,1} + ⋯ + 𝛼_{𝑙,𝑁}). It follows that an 𝑁-dimensional isotropic Gauss-Hermite formula of degree 𝑀 = 𝑝 + 1 is able to provide the exact value of the coefficient 𝑎_𝑙, which requires (𝑝 + 1)^𝑁 evaluations of the integrand. Thus, we can immediately see the main limit of full tensor-product integration schemes to handle high dimensional integrals, especially when the integrand itself is obtained by a heavy numerical procedure.

d) Sparse grid cubature

To reduce the computational effort required by full tensor-product integration schemes, the Smolyak method (Smolyak, 1963), based on sparse integration grids, is an interesting alternative. The basic idea is also to perform a tensor product to build the integration formula, with the particularity that this tensor product is defined as a linear combination of one-dimensional cubature formulae having high degrees in some directions and much lower degrees in the remaining ones. Let us consider one-dimensional cubature formulae of degree 𝑙 ≥ 1, each one based on 𝑀_𝑙 integration points and weights, able to compute the exact value of the integral of a one-dimensional polynomial of degree equal to or less than 2𝑀_𝑙 − 1. Now, by performing a linear combination of products of the latter one-dimensional cubature formulae, the estimate of the coefficient 𝑎_𝑙 according to the Smolyak integration scheme of degree 𝑙 ≤ 𝑁 reads:

$$ a_l \approx \sum_{l \le |\boldsymbol{k}| \le l+N-1} (-1)^{l+N-1-|\boldsymbol{k}|}\, C_{N-1}^{|\boldsymbol{k}|-l} \sum_{j_1=1}^{k_1} \cdots \sum_{j_N=1}^{k_N} w_1^{j_1} \cdots w_N^{j_N}\, \boldsymbol{H}_{\boldsymbol{\alpha}_l}(u_1^{j_1}, \ldots, u_N^{j_N}) \cdot f \circ T(u_1^{j_1}, \ldots, u_N^{j_N}) \quad (III.14) $$
where |k| = k_1 + ⋯ + k_N is the sum of the components of the multi-index k = (k_1, …, k_N) ∈ ℕ^N, k_j, j = 1, …, N, is the degree of the one-dimensional cubature formula used in the jth direction of the standard random space, and \binom{N-1}{|k|-l} denotes the combination operator. With a Smolyak cubature formula of degree l + p, we can compute the exact value of the integral of a polynomial of degree 2p + 1 (Novak and Ritter, 1999). The same formula therefore yields the exact value of the coefficient a_l, since the degree of the polynomial integrand H_{α_l}(u) h_PCE(u) in equation (III.10) does not exceed 2p. The corresponding computational cost tends asymptotically to (2^p / p!) N^p integrand evaluations for integrals of high dimensionality, which clearly demonstrates the efficiency of the Smolyak integration scheme over full tensor-product schemes. Indeed, the computational cost increases polynomially with the dimension of the integral, far below the exponential growth (p + 1)^N of full tensor-product integration schemes. Despite the efficiency of sparse grid-based integration schemes in high dimensions, the number of integrand evaluations remains too large for computationally demanding mechanical models, especially when a high level of accuracy, i.e., a high truncation degree p, is required for the construction of the PCE-based metamodels.

e) Efficient cubature formulae

Instead of using integration schemes constructed from either a full tensorization or a linear combination of one-dimensional cubature formulae, a serious alternative for dealing with the curse of dimensionality is the use of the efficient cubature formulae studied in Chapter II. Recall that these fifth-degree cubature formulae can efficiently compute the expected value of a random quantity, as was well established through the first application example dealing with the integration of an explicit, purely N-dimensional mathematical function studied in section 5.1.1 of Chapter II. Since the PCE coefficients are defined as expected values of polynomial functions, as can be seen in equation (III.10), their computation can be efficiently carried out by one of the cubature formulae I-VI. Accordingly, the estimate of the coefficient a_l reads:

a_l = \int_{\mathbb{R}^N} H_{\alpha_l}(u)\, h_{PCE}(u)\, \varphi_U(u)\, du \approx \sum_{j=1}^{M_K} w_K^j\, H_{\alpha_l}(u_K^j)\, f \circ T(u_K^j) \quad (III.15)

where the capital index K = I, II, …, VI denotes the type of cubature formula to be used and M_K is the number of integration points related to cubature formula K. It is clear from the above expression that the estimate provided by cubature formulae I-VI has the same form as that given by MCS, since only a single sum is required, but a much smaller number of integration points is needed to ensure a good level of accuracy. Indeed, instead of relying on pseudo-random number generators as in MCS, the integration points are selected in a smart way that guarantees a better filling of the random space. Since the cubature formulae are all fifth-degree integration schemes, as recalled previously, they can efficiently estimate the expected value of a PCE-based metamodel of moderate polynomial degree, which is more than sufficient to provide an accurate approximation of the random model response of interest.
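As an illustration of the projection route, the sketch below implements the full tensor-product Gauss-Hermite rule of equation (III.13) for an assumed two-dimensional toy integrand; the (p + 1)^N cost is visible in the nested loop over points. The integrand is again a placeholder.

```python
# Sketch of the full tensor-product Gauss-Hermite rule of eq. (III.13),
# here for N = 2 with M points per direction. The toy integrand combines a
# placeholder model with H_(1,0)(u) = u_1 (an assumption for illustration).
import numpy as np
from itertools import product
from numpy.polynomial.hermite_e import hermegauss  # probabilists' rule

def gauss_hermite_tensor(func, N, M):
    """Approximate E[func(U)] for U ~ N(0, I_N) with M**N tensor points."""
    x, w = hermegauss(M)
    w = w / np.sqrt(2.0 * np.pi)               # normalize to the N(0,1) density
    est = 0.0
    for idx in product(range(M), repeat=N):    # (p + 1)^N evaluations
        pt = np.array([x[i] for i in idx])
        est += np.prod([w[i] for i in idx]) * func(pt)
    return est

g = lambda u: u[0] * (np.exp(0.3 * u[0]) + u[1] ** 2)  # H_(1,0)(u) * f(u)
print(gauss_hermite_tensor(g, N=2, M=4))
```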
Figure III.2 compares the number of integrand evaluations required by full tensor-product integration schemes for various truncation degrees p with that required by cubature formulae I-VI, as a function of the dimension N of the integral to be computed. As can be seen, using the cubature formulae I-VI to compute the PCE coefficients provides significant computational cost savings. In addition, the number of integrand evaluations, and consequently the corresponding number of mechanical model evaluations, does not depend on the truncation degree p of the PCE, in contrast to the full tensor-product and Smolyak integration schemes. This first strategy, based on PCE metamodels whose coefficients are computed by the efficient cubature formulae I-VI, is developed to tackle, on the one hand, the curse of dimensionality and, on the other hand, the additional computational cost observed in Chapter II when switching from one type of uncertainty propagation analysis to another while applying the cubature formulae in a crude manner (i.e., directly on the primary mechanical model). It will be called in the following the full-PCE approach, and constitutes a first response to the question which closed the previous chapter of the thesis. Note that the denotation full is used because all the PCE coefficients associated with the complete polynomial chaos basis are retained.

Regression methods

a) Principle

Regression methods were used first by (Isukapalli, 1999) and later by (Berveiller, 2005) to compute the unknown coefficients of the PCE. Unlike projection methods, where the PCE coefficients are computed one by one by evaluating multidimensional integrals, regression methods estimate all the coefficients at once by solving a minimization problem in the least-squares sense, which can considerably reduce the computational effort. Indeed, the number of evaluations of the primary mechanical model varies in the range [2P, 3P], which is much lower, for instance, than that associated with the Smolyak integration scheme, asymptotically equal to 2^p P, where P denotes the number of coefficients to be computed for a given truncation degree p. Let a = {a_{α_0}, a_{α_1}, …, a_{α_{P-1}}}^T and H(U) = {H_{α_0}(U), H_{α_1}(U), …, H_{α_{P-1}}(U)}^T be two vectors gathering, respectively, the unknown coefficients of the PCE and the polynomial chaos basis made of multivariate Hermite polynomials. The regression technique consists in finding the vector of coefficients a that minimizes the mean square error E[(a^T H(U) - f∘T(U))²], that is:

\hat{a} = \arg\min_{a \in \mathbb{R}^P} \Big\{ A(a) \equiv \mathbb{E}\big[ \big( a^T \mathcal{H}(U) - f \circ T(U) \big)^2 \big] \Big\} \quad (III.16)

The optimality condition ∂A/∂a = 0 leads to the so-called normal equations:

\mathbb{E}\big[ \mathcal{H}(U)\, \mathcal{H}(U)^T \big]\, \hat{a} = \mathbb{E}\big[ \mathcal{H}(U)\, f \circ T(U) \big] \quad (III.17)

The quantity H(U)H(U)^T represents the covariance matrix of the P-dimensional random vector H(U), whose P components are statistically independent since the PCE-based metamodel is built in the standard random space, whose directions are represented by independent standard normal variables. Consequently, the mathematical expectation E[H(U)H(U)^T] of the covariance matrix reduces to the P × P unit matrix I_{P×P}, and equation (III.17) can be rewritten as follows:

\hat{a} = \mathbb{E}\big[ \mathcal{H}(U)\, f \circ T(U) \big] \quad (III.18)

In practice, the minimization problem defined by equation (III.16) is discretized on the basis of a set of sample points U = {u^j = (u_1^j, …, u_N^j), j = 1, …, M}, also called the experimental design, in order to replace the expectation operator E[·] by its empirical estimate.
Thus, the minimization problem reads:

\hat{a} = \arg\min_{a \in \mathbb{R}^P} \Big\{ \frac{1}{M} \sum_{j=1}^{M} \big( h_{PCE}(u^j) - f \circ T(u^j) \big)^2 \Big\} = \arg\min_{a \in \mathbb{R}^P} \Big\{ \frac{1}{M} \sum_{j=1}^{M} \Big( \sum_{k=0}^{P-1} a_k H_{\alpha_k}(u^j) - f \circ T(u^j) \Big)^2 \Big\} \quad (III.19)

where h_PCE(u^j) and f∘T(u^j) are, respectively, the responses of the PCE-based metamodel and of the primary mechanical model at the point u^j. The solution of this discretized least-squares problem is classically written in matrix form as:

\hat{a} = \big( \Phi^T \Phi \big)^{-1} \Phi^T y \quad (III.20)

where Φ is the M × P matrix whose entries read Φ_{jk} = H_{α_k}(u^j) and y = {f∘T(u^1), …, f∘T(u^M)}^T is the vector of model evaluations. The computational effort can be further reduced by decreasing the number of unknown PCE coefficients to be computed, i.e., by discarding those that have an insignificant contribution to the quantities of interest, without, of course, any loss of accuracy of the obtained PCE-based metamodels. In this way, only a small experimental design is required to solve the regression problem.

b) Truncation scheme based on low-order interactions

With reference to the previous developments, the PCE-based metamodel is built using a complete polynomial chaos basis, that is, all the terms corresponding to multivariate polynomials whose total degrees do not exceed a given degree p are retained. For problems of high dimensionality N, a major part of the PCE coefficients represents interactions between uncertain parameters, even for a moderate truncation degree p. Fortunately, experience with engineering problems has shown that high-order interactions often have an insignificant effect, which means that the corresponding PCE coefficients are close to 0. Thus, the size of the polynomial chaos basis can be reduced by retaining only the terms representing main effects and low-order interaction effects. Let H_p = {H_{α_k}, α_k ∈ ℕ^N such that Σ_{i=1}^N α_k^i ≤ p} be a complete polynomial chaos basis for a given truncation degree p, and H_{p,q} = {H_{α_k}, α_k ∈ ℕ^N such that Σ_{i=1}^N α_k^i ≤ p and Σ_{i=1}^N 1_{{α_k^i ≠ 0}} ≤ q} an incomplete, also called sparse, polynomial chaos basis for a given truncation degree p and interaction order q < p, i.e., only q-variate polynomials whose total degrees do not exceed p are retained. If the allowed maximum interaction order q is not high, the cardinality of the sparse polynomial chaos basis H_{p,q} is much lower than that of the complete polynomial chaos basis H_p. The efficiency of the truncation scheme based on a sparse polynomial basis can be assessed by the economy E_{p,q}, defined by the following ratio:

\mathcal{E}_{p,q} = \frac{ \mathrm{card}(\mathcal{H}_p) - \mathrm{card}(\mathcal{H}_{p,q}) }{ \mathrm{card}(\mathcal{H}_p) } \times 100 \quad (III.21)

where card(H_p) and card(H_{p,q}) are the cardinalities of the complete basis H_p and of the sparse basis H_{p,q}, respectively. As can be seen in figure III.4, for a PCE-based metamodel of degree p = 5, which is sufficient to represent the response of interest of the primary mechanical model, a truncation scheme based on a sparse polynomial chaos basis with a maximum interaction order q = 3 is far more efficient than one based on a complete polynomial chaos basis, especially in high dimensionality. For instance, when N = 10 the economy E_{5,3} is around 43%, which reduces the size of the experimental design used in the computation of the PCE coefficients and consequently the computational effort required to carry out the uncertainty propagation analysis of interest.
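The following sketch reproduces the basis cardinalities behind the ~43% economy quoted above for N = 10, p = 5, q = 3, and solves a small instance of the regression problem (III.19)-(III.20); the experimental design and the model are placeholders.

```python
# Sketch of the sparse truncation (III.21) and the regression solution (III.20).
# The cardinalities reproduce the ~43% economy quoted above for N = 10, p = 5,
# q = 3; the small regression demo uses a placeholder design and model.
import numpy as np
from math import comb, factorial
from itertools import product
from numpy.polynomial.hermite_e import hermeval

N, p, q = 10, 5, 3
card_full = comb(N + p, p)                                        # card(H_p) = 3003
card_sparse = sum(comb(N, r) * comb(p, r) for r in range(q + 1))  # 1701
print(100.0 * (card_full - card_sparse) / card_full)              # economy ~ 43%

def H_multi(alpha, u):
    """N-variate orthonormal Hermite polynomial evaluated at u of shape (M, n)."""
    out = np.ones(u.shape[0])
    for i, k in enumerate(alpha):
        c = np.zeros(k + 1); c[k] = 1.0
        out *= hermeval(u[:, i], c) / np.sqrt(factorial(k))
    return out

n, deg = 3, 3                                                # small demo case
A = [a for a in product(range(deg + 1), repeat=n) if sum(a) <= deg]
M = 3 * len(A)                                               # M in [2P, 3P]
U = np.random.default_rng(1).standard_normal((M, n))
y = np.exp(0.3 * U[:, 0]) + U[:, 1] ** 2                     # placeholder f(T(u))
Phi = np.column_stack([H_multi(a, U) for a in A])            # design matrix
a_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)              # eq. (III.20)
```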
The maximum interaction order q can be chosen either by following a step-by-step scheme in which the value of q is gradually increased until a target level of accuracy on the estimates of the PCE coefficients is reached, or by performing a preliminary screening analysis (Morris, 1991), presented in the previous section, in order to enhance the sparsity of the polynomial chaos basis H_{p,q,σ²} and thus achieve greater efficiency in computing the PCE coefficients. This approach to building efficient PCE-based metamodels is denoted in the following by sparse-PCE, and constitutes our second response to the question which closed Chapter II of the thesis.

Figure III.5. Computational flowchart of the sparse-PCE approach

It is important to recall here that the idea behind the two approaches called full-PCE and sparse-PCE is to avoid the additional computational effort observed when the cubature formulae I-VI are used directly on the mechanical model and one wishes to change the type of uncertainty propagation analysis. For instance, a statistical moments analysis can first be carried out with the crude cubature formulae I-VI, which provides a target estimate of the variance of the model response, and the sparse-PCE approach is then used to construct a metamodel on which either a sensitivity or a reliability analysis can be performed. The accuracy of the sparse-PCE approach can be improved when the stopping criterion of the stepwise algorithm is based on higher-order statistical moments, such as the skewness and the kurtosis, instead of the variance, provided that the mechanical model evaluations already available are sufficient to obtain a well-conditioned regression problem.

Post-processing of the PCE-based metamodels

Once the PCE-based metamodel of the model response of interest is built, either by the full-PCE or by the sparse-PCE approach, any kind of uncertainty propagation analysis can be carried out efficiently. Two alternatives are available: performing MCS on the PCE-based metamodel, or post-processing its coefficients. The latter alternative is addressed in the following.

Computation of the statistical moments

The statistical moments of the random variable Y representing the uncertainty of the model response y of interest can be easily derived from the coefficients a_k, k = 0, …, P-1. Thanks to the orthonormality of the polynomial chaos basis, the estimates of the first four statistical moments for a given degree p read:

\hat{\mu}_{Y,p} = \hat{a}_0 \quad (III.22)

\hat{\sigma}^2_{Y,p} = \sum_{k=1}^{P-1} \hat{a}_k^2 \quad (III.23)

\hat{\delta}_{Y,p} = \frac{1}{\hat{\sigma}^3_{Y,p}} \sum_{k_1=1}^{P-1} \sum_{k_2=1}^{P-1} \sum_{k_3=1}^{P-1} \mathbb{E}\big[ H_{\alpha_{k_1}}(U)\, H_{\alpha_{k_2}}(U)\, H_{\alpha_{k_3}}(U) \big]\, \hat{a}_{k_1} \hat{a}_{k_2} \hat{a}_{k_3} \quad (III.24)

\hat{\kappa}_{Y,p} = \frac{1}{\hat{\sigma}^4_{Y,p}} \sum_{k_1=1}^{P-1} \sum_{k_2=1}^{P-1} \sum_{k_3=1}^{P-1} \sum_{k_4=1}^{P-1} \mathbb{E}\big[ H_{\alpha_{k_1}}(U)\, H_{\alpha_{k_2}}(U)\, H_{\alpha_{k_3}}(U)\, H_{\alpha_{k_4}}(U) \big]\, \hat{a}_{k_1} \hat{a}_{k_2} \hat{a}_{k_3} \hat{a}_{k_4} \quad (III.25)

Since the PCE-based metamodel is written in the standard random space and the polynomial chaos basis is built from multivariate Hermite polynomials, the expectations in equations (III.24) and (III.25) can be computed analytically. Once the estimates of the mean μ̂_{Y,p}, the variance σ̂²_{Y,p}, the skewness δ̂_{Y,p} and the kurtosis κ̂_{Y,p} are obtained, a moment-based technique can be used to construct the PDF f̂_{Y,p}(y).
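A short sketch of this post-processing follows, continuing the variable names of the regression sketch above (A, a_hat, H_multi, n); it evaluates equations (III.22)-(III.23) from the coefficients and, as discussed later in this chapter, estimates the higher moments by cheap MCS on the metamodel instead of the analytical triple and quadruple sums. SciPy is assumed to be available.

```python
# Sketch of eqs. (III.22)-(III.23), with MCS on the metamodel for the higher
# moments; continues the regression sketch above (A, a_hat, H_multi, n).
import numpy as np
from scipy.stats import skew, kurtosis

mean_Y = a_hat[A.index((0,) * n)]                         # eq. (III.22)
var_Y = sum(c * c for c, al in zip(a_hat, A) if any(al))  # eq. (III.23)

Us = np.random.default_rng(2).standard_normal((100_000, n))
h = np.column_stack([H_multi(al, Us) for al in A]) @ a_hat  # metamodel samples
print(mean_Y, var_Y, skew(h), kurtosis(h, fisher=False))
```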
Computation of Sobol sensitivity indices

In addition to the statistical moments, Sobol sensitivity indices can be derived from appropriate combinations of the PCE coefficients. We focus here on the first-order index Ŝ_{i,p} and on the total index Ŝ^T_{i,p}, whose estimates read:

\hat{S}_{i,p} = \frac{1}{\hat{\sigma}^2_{Y,p}} \sum_{\alpha_k \in \mathcal{A}_i^1} \hat{a}_k^2 \quad (III.26)

\hat{S}^T_{i,p} = \frac{1}{\hat{\sigma}^2_{Y,p}} \sum_{\alpha_k \in \mathcal{A}_i^T} \hat{a}_k^2 \quad (III.27)

where A_i^1 = {α_k ∈ ℕ^N such that α_k^i ≠ 0, α_k^{j≠i} = 0} is the set of multi-indices whose components are all zero except the ith one, and A_i^T = {α_k ∈ ℕ^N such that α_k^i ≠ 0} is the set of multi-indices with a non-zero ith component. Note that the higher-order Sobol sensitivity indices S_{i_1,…,i_s}, s = 2, …, N, which measure the effect of the interaction between the uncertain parameters (x_{i_1}, …, x_{i_s}), can be obtained in the same way as the first-order and total indices. For more details on this issue, the reader can refer to (Sudret, 2008).

Computation of the failure probability

As already stated in section 2.4 of Chapter II, reliability analysis aims to compute the probability of failure of an engineering system with respect to a prescribed serviceability criterion. From a mathematical point of view, the serviceability criterion is defined by the so-called limit state, or performance, function, often denoted by G(x) or H(u) = G∘T(u) in the physical and standard random spaces, respectively. Typically, in engineering problems, the serviceability criterion states that the model response of interest y(x), which is a random quantity with probabilistic model Y since it depends on uncertain parameters x with probabilistic model X, remains below an admissible threshold y_threshold, which can be a deterministic or a random quantity. Consequently, the performance function reads:

G(x) = y_{threshold} - y(x) = y_{threshold} - y \circ T(u) = H(u) \quad (III.28)

When the response of interest y∘T(u) is provided by an implicit computational model, it can be replaced by its PCE-based metamodel h_PCE(u) to obtain an analytical formulation of the performance function, given by:

H(u) = y_{threshold} - y \circ T(u) \approx y_{threshold} - h_{PCE}(u) \quad (III.29)

Crack growth in a CCP specimen

The first application example deals with fatigue crack growth in a center-cracked plate (CCP) specimen, the uncertain parameters being those of the Walker crack growth law, whose statistical characteristics and correlation structure are identified from Hudson's experimental data. As can be seen, strong negative correlation is observed between A_1 and A_2 and between A_1 and B, which can induce some computational instability when dealing with uncertainty propagation analysis. Fortunately, this problem is anticipated since, as stated in Chapter II of the thesis, the quantities of interest (i.e., statistical moments, probability of failure or sensitivity indices) required by the uncertainty propagation analysis are written in the standard normal random space, which allows us to mitigate the effect of strong negative correlations.

Statistical moments and distributions analysis

Next, we assess the effect of the uncertain parameters on the variability of the model response, taken here as the fatigue crack growth life. The latter, denoted by N_f, is computed through the integration of the Walker model based on Simpson's rule (see section 2.3 of Chapter I), assuming that the crack propagates from an initial length a_0 to a critical length a_c:

N_f = \int_{a_0}^{a_c} \frac{ (1-R)^{m_1 (1-\gamma)} }{ C_1\, \Delta K_I(a)^{m_1} }\, da \quad (III.32)

where R is the stress ratio, ΔK_I is the SIF range computed by a FEM, and C_1, m_1 and γ are uncertain parameters representing the constitutive material of the CCP specimen, whose statistical characteristics can easily be derived from the results given in the corresponding table.
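A minimal numerical sketch of equation (III.32) follows; ΔK_I(a) below is an illustrative closed form standing in for the FEM-computed SIF range, and all parameter values (and their units) are placeholders rather than the identified Walker constants of the thesis.

```python
# Sketch of eq. (III.32): fatigue life by Simpson integration of the inverse
# Walker law. DeltaK(a) is an illustrative closed form (a FEM-based SIF in the
# thesis); C1, m1, gamma, R and the crack lengths are placeholder values.
import numpy as np
from scipy.integrate import simpson

C1, m1, gamma, R = 4e-10, 3.0, 0.5, 0.1          # placeholder Walker constants
a = np.linspace(2.0, 12.0, 201)                  # a_0 to a_c, placeholder units
dK = 200.0 * np.sqrt(np.pi * a / 1000.0)         # placeholder Delta K_I(a)

integrand = (1.0 - R) ** (m1 * (1.0 - gamma)) / (C1 * dK ** m1)
N_f = simpson(integrand, x=a)                    # cycles to grow a_0 -> a_c
print(f"N_f = {N_f:.0f} cycles")
```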
A crude MCS of the fatigue crack growth life is first performed at each intermediate crack length a_i, the correlation structure of the Walker parameters being taken into account. For each crack increment, the mean μ̂ and the standard deviation σ̂ of the fatigue crack growth life are computed, which allows us to obtain the mean crack growth curve together with the lower and upper bounds defined as the crack growth curves at μ̂ - 3σ̂ and μ̂ + 3σ̂, respectively. These curves further illustrate the variability of both the loading cycles and the crack length. The crack growth curve corresponding to Hudson's experimental data for the same loading condition falls within three standard deviations of the mean and is very close to the mean curve. To investigate the effect of the correlation between the uncertain parameters on the variability of the fatigue crack growth lifetime, another MCS is performed by considering the Walker model parameters C_1, m_1 and γ as statistically independent, i.e., the off-diagonal correlation coefficients ρ_ij, i ≠ j, are set to 0. The estimates of the mean and the standard deviation of the model response are, respectively, μ̂ = 8633 cycles and σ̂ = 1854 cycles, which implies a coefficient of variation of around 21.5%, much larger than that obtained previously for correlated uncertain parameters, and which would indicate a significant variability of the fatigue crack growth lifetime that is not observed in reality. This result calls into question the hypothesis, often made when performing probabilistic analysis of fatigue crack growth problems, that the uncertain parameters, especially those of the crack growth model, are statistically independent. We thus clearly emphasize the importance of properly selecting the probabilistic model (i.e., the type of distribution) used to represent the uncertain parameters, as well as its statistical characteristics (i.e., statistical moments and correlation coefficients), when carrying out uncertainty propagation analysis.

Then, the first four statistical moments of the fatigue crack growth life can be obtained, either directly from the coefficients of the PCE or by performing MCS on the metamodel. Table III.3 lists the estimates of the first four statistical moments of the fatigue crack growth life derived from the coefficients of a PCE of degree p = 2. The ratio between these estimates and those given by 10^5 crude MCS is taken here as an accuracy indicator and plotted in figure III.10. As the results in table III.3 show, independently of the cubature formula used in the computation of the unknown PCE coefficients, the proposed method works well for the prediction of the statistical moments of the model response, since the corresponding estimates are in good agreement with the reference solution, especially for the mean and the standard deviation, where the accuracy indicators μ̂/μ_MCS and σ̂/σ_MCS are close to 1. This high accuracy is achieved at low computational cost, since in the worst case only 21 evaluations of the FEM are required. For the higher-order statistical moments, and more particularly for the skewness, the ratio γ̂/γ_MCS is around 0.85, which could be interpreted as a lack of accuracy of the proposed method. Fortunately, this is not really the case: this poor value of the accuracy indicator γ̂/γ_MCS is not entirely due to a significant discrepancy between the estimate given by the proposed method and the reference solution, but also to the small magnitude of the skewness, which tends to inflate the relative error.
However, for the kurtosis, whose accuracy is much more difficult to achieve than that of the skewness since it is of higher statistical order, the ratio κ̂/κ_MCS is very close to 1. Note that the accuracy of the proposed method can be enhanced by increasing the truncation degree of the PCE. In figure III.10 we compare the accuracy of the statistical moments estimates derived from the PCE coefficients with those obtained from 10^5 MCS applied to the corresponding PCE metamodel. As can be observed, the latter approach can be viewed as a means to enhance the accuracy of the proposed method. Indeed, for the skewness the ratio γ̂/γ_MCS is now around 0.97, obtained when cubature formula VI is used to compute the unknown PCE coefficients. Moreover, the metamodel associated with a PCE of degree p = 2 is sufficient to accurately reproduce the real behavior of the FEM representing the CCP specimen. Indeed, the metamodel fits very well the response of the FEM evaluated at a new set of points, different from the integration points used previously in the computation of the unknown PCE coefficients. We can observe that the fatigue crack growth lifetime exhibits a nonlinear behavior with respect to the uncertain parameters B = log(C_1) and A_1 = m_1, and varies linearly with A_2 = -m_1(1-γ). In figure III.11 we also compare the PDFs of the fatigue crack growth life, constructed either by a moment-based technique using the estimates of the first four statistical moments derived from the PCE coefficients, or by performing MCS on the PCE metamodel. As can be seen, these PDFs are in good agreement with the one built from 10^5 crude MCS.

Sensitivity analysis

In the following, we conduct a sensitivity analysis to evaluate the contribution of each uncertain parameter to the variability of the model response. Hence, the first-order and total Sobol sensitivity indices, denoted by S_1 and S_T respectively, are the quantities of interest, whose estimates are directly derived from the coefficients of the PCE of degree p = 2 used previously in the statistical moments analysis. Thus, no additional evaluations of the FEM are needed. The results are listed in table III.4. It appears that the parameter A_1 = m_1 has a moderate effect on the variability of the model response, whereas the parameters B = log(C_1) and A_2 = -m_1(1-γ) have a significant effect. This means that the variability of the fatigue crack growth lifetime of the CCP specimen is driven by the uncertainty in the parameters C_1 and γ of the Walker model, with C_1 having the dominant contribution. Furthermore, the total indices are nearly equal to the first-order indices, indicating that the interaction effects between the uncertain parameters are negligible. It is important to note that although the uncertain parameters are statistically dependent, the Sobol indices remain computable, but their interpretation becomes a difficult task. Indeed, referring to the total variance decomposition given by equation (2.16) presented in section 2.3 of Chapter II, it is difficult to know whether the contribution of a given uncertain parameter to the variability of the model response is due to its importance in the model structure or to its correlation with other influential parameters.
To overcome this problem, the Sobol sensitivity indices must be derived from partial variances related to an ANCOVA (ANalysis of COVAriance) decomposition (Li and Rabitz, 2010; Chastaing et al., 2012), instead of partial variances associated with an ANOVA decomposition (see section 2.3 of Chapter II). The genuine idea behind the ANCOVA decomposition is to split each partial variance into a variance part, which measures the contribution of an uncertain parameter due to its importance in the model structure, and a covariance part, which measures the contribution of an uncertain parameter due to a possible correlation with other parameters. Hence, these two contributions are no longer merged as in the ANOVA decomposition. We recall here that the main purpose of the present sensitivity analysis is not to separate the sources of contribution but to identify the parameters that are important and unimportant for the variability of the model response. Global sensitivity analysis based on the ANCOVA decomposition has gained increasing popularity in the last few years and will obviously be a priority area of research for us.

Discussion

It can be noted from this example that the proposed full-PCE approach is able to perform moments and sensitivity analysis at a very low computational cost. Indeed, depending on the cubature formula used to handle the multidimensional integrals defining the unknown coefficients of the PCE, the required number of evaluations of the FEM is in the range [14, 21] for a PCE of degree p = 2, for which the convergence of the quantities of interest is achieved. In addition, although a full PCE is used to represent the model response, which means that all coefficients related to a PCE of a given degree p are retained, the computational cost increases only slightly with the probabilistic dimension and is independent of the chosen degree of the PCE. Furthermore, when passing from a moments analysis to a sensitivity analysis or the opposite, no additional computational cost is required, since the evaluations of the FEM are only needed to build the PCE; any kind of probabilistic analysis can then be performed, either by post-processing the PCE coefficients or by performing MCS on the metamodel of the mechanical response. This clearly allows us to overcome the inefficiency of the crude cubature formulae pointed out in Chapter II, where the required set of evaluations of the mechanical model depends not only on the integration points related to the chosen cubature formula but also on the type of probabilistic analysis to be addressed. Indeed, the comparison of equations (II.3) and (II.23) (see sections 2.2 and 2.3 of Chapter II) shows that the integrands to be evaluated for the computation of statistical moments and of partial variances are not the same. It also appears that performing MCS on the metamodel can enhance the estimates of the statistical moments, especially the higher-order ones such as the skewness and the kurtosis, when a lack of accuracy is observed in the estimates provided by the post-processing of the PCE coefficients.
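The reuse of the PCE coefficients for sensitivity analysis mentioned above, at no extra FEM cost, can be sketched as follows (equations (III.26)-(III.27)); it continues the regression sketch and its variable names (a_hat, A, n).

```python
# Sketch of eqs. (III.26)-(III.27): first-order and total Sobol indices by
# post-processing PCE coefficients; continues the regression sketch (A, a_hat, n).
def sobol_from_pce(a_hat, A, n):
    var_Y = sum(c * c for c, al in zip(a_hat, A) if any(al))
    S1, ST = [], []
    for i in range(n):
        only_i = sum(c * c for c, al in zip(a_hat, A)
                     if al[i] != 0 and sum(al) == al[i])   # set A_i^1
        any_i = sum(c * c for c, al in zip(a_hat, A) if al[i] != 0)  # set A_i^T
        S1.append(only_i / var_Y)
        ST.append(any_i / var_Y)
    return S1, ST

S1, ST = sobol_from_pce(a_hat, A, n)   # no extra FEM runs required
```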
Finally, thanks to the present example, we have shown how important it is to identify the right probabilistic models and their statistical characteristics when performing uncertainty propagation analysis, especially through mechanical models dealing with crack growth under fatigue loading, where some uncertain parameters exhibit statistical dependency. Indeed, when the correlation between the constants of the Walker law, which is evidenced by statistical analysis of experimental data, is not taken into account, the uncertainty propagation analysis provides erroneous results for the variability of the fatigue crack growth lifetime of the CCP specimen. Consequently, wrong decisions could be taken by decision-makers, for instance at the design stage or in maintenance scheduling for existing structures.

Nonlinear cracked pipe

Problem statement

The present problem deals with an axisymmetrically cracked pipe, as depicted in figure III.13. Such a component is extensively used in nuclear plants, where it is often subjected to large variations of thermal and mechanical loads which can lead to crack initiation and growth toward a critical length inducing its failure. This problem was first introduced by (Pendola et al., 2000) to assess the reliability of the cracked pipe with respect to accidental loads, using an approach, original at that time, based on the combination of finite element computations and a quadratic response surface. The same problem was later used by (Riahi et al., 2012). We are interested in ductile fracture, which concerns materials where crack growth involves plasticity; we must therefore take into account the effect of this plasticity on the crack driving forces. The elastoplastic behavior of the constitutive material of the cracked pipe is described by the well-known Ramberg-Osgood (Anderson, 1995) stress-strain relationship given by:

\frac{\varepsilon}{\varepsilon_y} = \frac{\sigma}{\sigma_y} + \alpha \left( \frac{\sigma}{\sigma_y} \right)^n \quad (III.33)

where ε_y = σ_y/E denotes the yield strain.

Figure III.13. Axisymmetrically cracked pipe (after Pendola et al., 2000), finite element mesh (lower left), and evolution of J_Rice with respect to the FEA increments (right)

The uncertain inputs include the four parameters of the Ramberg-Osgood behavior law, namely the Young's modulus E, the yield strength σ_y, the coefficient α and the strain hardening exponent n, whose distributions and associated statistical characteristics are listed in table III.5.
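A small sketch of the Ramberg-Osgood relation reconstructed above follows; the parameter values are placeholders of the order of a typical steel, not those of table III.5.

```python
# Sketch of the Ramberg-Osgood relation (III.33); parameter values are
# placeholders of the order of a typical steel, not those of table III.5.
import numpy as np

E, sig_y, alpha, n = 2.0e5, 300.0, 1.15, 5.0     # MPa, MPa, -, - (assumed)
eps_y = sig_y / E
sig = np.linspace(0.0, 1.2 * sig_y, 50)
eps = eps_y * (sig / sig_y + alpha * (sig / sig_y) ** n)  # total strain
```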
Statistical moments and distributions analysis

First, a statistical moments analysis is performed to assess the effect of the uncertain parameters on the variability of the model response, defined as the Rice's integral J_Rice. Since all cubature formulae I-VI provide correct results, only cubature formula VI is used here, on the one hand to avoid redundancy in the presentation of the results, and on the other hand because this cubature formula has a free parameter Δ (see section 4.6 of Chapter II) which proves very useful for constructing suitable experimental designs when the sparse-PCE approach is used to carry out the uncertainty propagation analysis. For comparison purposes, the first four statistical moments of the model response of interest J_Rice are computed using the two proposed approaches, namely the full-PCE and the sparse-PCE, where the polynomial degree p is set to 2. For the full-PCE approach, the unknown coefficients are computed either by the full tensor-product integration scheme or by cubature formula VI. With respect to the reference solution given by crude MCS, the relative error is equal to 0.01%, 0.19%, 3.67% and 1.84% for the mean, standard deviation, skewness and kurtosis, respectively. This good accuracy is achieved at low computational cost, since only 33 runs of the FEM are required by both the full-PCE and sparse-PCE approaches when cubature formula VI is used. Due to the proposed truncation scheme based on second moment information, the polynomial chaos basis H_{p,q,σ²} used in the sparse-PCE approach contains only 10 components, instead of 15 for the full polynomial chaos basis H_p, which means that 5 components have an insignificant effect on the model response and can be discarded from the PCE-based metamodel. As a result, the economy

\mathcal{E}_{p,q,\sigma^2} = 100 \times \frac{ \mathrm{card}(\mathcal{H}_p) - \mathrm{card}(\mathcal{H}_{p,q,\sigma^2}) }{ \mathrm{card}(\mathcal{H}_p) }

(defined in the same way as in equation (III.21)) of the sparse polynomial chaos basis H_{p,q,σ²} is around 33%, showing a significant decrease in the computational effort required to estimate the PCE coefficients by regression. As an illustration, figure III.16 shows the convergence of the statistical moments obtained by crude MCS applied directly to the FEM. As can be seen, convergence is well achieved with a sample size of 10^5, but at a huge computational cost since, as stated previously, the FEM is itself time-consuming due to the incremental FEA required to solve the nonlinear fracture problem. The PDF of the Rice's integral J_Rice is also constructed and compared to conventional distributions, showing that the lognormal distribution gives the best fit. In addition, the CDF is obtained by direct integration of the PDF and can be used to perform a reliability analysis if necessary.

Reliability analysis

In the following, a reliability analysis is performed to study the effect of the axial tension magnitude σ_t on the integrity of the cracked pipe. Pipe failure is observed when the Rice's integral J_Rice exceeds the fracture toughness J_Ic of the constitutive material. Therefore, the performance function reads:

G(x) = J_{Ic} - J_{Rice}(x) \quad (III.35)

The fracture toughness J_Ic is taken as an uncertain parameter, in addition to the uncertain parameters gathered in the vector x = {E, σ_y, n, α} considered in the statistical moments and sensitivity analyses conducted previously. It is assumed to follow a lognormal distribution with mean 52 MPa·mm and standard deviation 9.5 MPa·mm. The axial tension σ_t is taken as a deterministic parameter ranging from 140 MPa up to 200 MPa. It represents the effect of an accidental load increase, the nominal value being about 140 MPa, that could occur during the pipe lifetime. The main purpose of the reliability analysis is to obtain the probability of failure as a function of the magnitude of the axial tension, in order to be able to make the right decision as to whether or not repair operations should be performed on the pipe. Indeed, knowing the cumulated damage, i.e., the crack length, and the corresponding failure probability, we can decide whether the repair of the pipe must be done urgently or can still wait. Conversely, if a threshold level of reliability must be guaranteed, for instance by reference to design code recommendations, the curve representing the evolution of the probability of failure with respect to the magnitude of the axial tension gives the allowable load that the pipe can support. To build this curve, the probability of failure is computed for several values of the axial tension varying in the range [140 MPa, 200 MPa], based on the computational method presented in section 2.3.3 of this chapter.
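One inexpensive way to evaluate each point of such a curve, sketched below under placeholder numbers, is to estimate the failure probability of the performance function (III.35) by MCS on a metamodel of J_Rice; h_pce below is a stand-in quadratic metamodel, while the J_Ic statistics follow the lognormal assumption stated above.

```python
# Sketch: failure probability of G = J_Ic - J_Rice (III.35) by MCS on a
# metamodel. h_pce is a placeholder standing in for the fitted PCE of J_Rice;
# J_Ic follows the lognormal model above (mean 52, std 9.5 MPa mm).
import numpy as np

rng = np.random.default_rng(5)
M = 1_000_000

s2 = np.log(1.0 + (9.5 / 52.0) ** 2)        # lognormal parameters matching
mu = np.log(52.0) - 0.5 * s2                # mean 52 and std 9.5
J_Ic = rng.lognormal(mu, np.sqrt(s2), M)

U = rng.standard_normal((M, 4))             # standard-space inputs E, sig_y, n, alpha
h_pce = 30.0 + 4.0 * U[:, 0] + 2.5 * U[:, 1] + 0.8 * U[:, 0] * U[:, 1]  # placeholder
P_f = np.mean(J_Ic - h_pce <= 0.0)          # MCS estimate of P(G <= 0)
print(P_f)
```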
For further computational cost savings, the results obtained at each step of the incremental FEA required to compute the Rice's integral are stored in a database, a step size of 5 MPa being taken to cover the entire range of variation of the axial tension. Since both the fracture toughness J_Ic and the Rice's integral J_Rice follow lognormal distributions, the failure event G(x) ≤ 0 can be recast as a difference of two normal variables:

G'(x) = \ln(J_{Ic}) - \ln(J_{Rice}(x)) \quad (III.36)

The failure probability is formally defined by the integral:

P_f = \int_{G(x) \le 0} f_X(x)\, dx \quad (III.37)

and, for this simple R-L reliability problem, it admits the following closed-form solution:

\beta = \frac{ \mu_{\ln(J_{Ic})} - \mu_{\ln(J_{Rice})} }{ \sqrt{ \sigma^2_{\ln(J_{Ic})} + \sigma^2_{\ln(J_{Rice})} } } \quad (III.38)

P_f = \Phi(-\beta) \quad (III.39)

where μ_G and σ_G are, respectively, the mean and the standard deviation of the performance function G, μ_ln(J_Ic) and σ_ln(J_Ic) are, respectively, the mean and the standard deviation of the normal variable ln(J_Ic), μ_ln(J_Rice) and σ_ln(J_Rice) are, respectively, the mean and the standard deviation of the normal variable ln(J_Rice), and ln denotes the natural logarithm. It is important to note that the estimation of the failure probability by evaluating the integral (III.37) is not a difficult task, since the integrand is now available in analytical form. However, keeping in mind the goal of providing simple computational tools for engineers to assess the reliability of their designs, the use of equations (III.38) and (III.39) seems to be the best solution, since it does not require extensive knowledge of mathematics and probability theory. The estimates of the reliability indices and the corresponding failure probabilities obtained by the proposed approaches show that the magnitude of the failure probability, which is driven by the magnitude of the axial tension σ_t, has a weak effect on the accuracy of the estimates given by either the full-PCE or the sparse-PCE approach. For the highest magnitude of the axial tension, namely σ_t = 200 MPa, we obtain the largest relative error on the estimate of the reliability index, but it remains at an acceptable level. This increase of the relative error is probably due to the high nonlinearity of the mechanical behavior of the constitutive material, especially around the crack tip, when σ_t = 200 MPa. Indeed, such a magnitude of the axial tension induces stresses in the cracked pipe close to the yield stress of the constitutive material. In such a situation, the relative error can be reduced by using more steps in the incremental FEA required to compute the Rice's integral.

Both proposed approaches thus provide accurate estimates of the quantities of interest. Furthermore, the same set of evaluations of the FEM is used to address all three kinds of analysis, with no additional computational cost when changing from one kind of analysis to another, in contrast with what was observed in Chapter II when the crude cubature formulae I-VI are used to compute the multidimensional integrals representing the quantities of interest of the uncertainty propagation analysis to be conducted. In this case, the computational cost gain factor is equal to 3 (i.e., 33 evaluations of the FEM instead of the 99 required by the crude cubature formula VI). In particular, the truncation scheme based on second moment information, used to build the polynomial chaos basis H_{p,q,σ²} in the sparse-PCE approach, reduces the computational effort when the unknown PCE coefficients are obtained as the solution of a least-squares regression problem. Indeed, the sparsity compared to a full polynomial chaos basis H_p is about 33%, which means that about one-third of the components of the PCE-based metamodel have an insignificant effect on the model response, defined as the Rice's integral of the cracked pipe. Finally, the reliability analysis has shown that the proposed approaches are in overall good agreement with each other and with the estimates of the reliability indices and corresponding failure probabilities obtained by FORM, which was taken as the reference method.
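A compact sketch of the closed-form route (III.38)-(III.39) follows; the J_Ic statistics match the lognormal model given above, while the J_Rice mean and standard deviation are placeholders standing in for the PCE-based estimates at a given σ_t.

```python
# Sketch of eqs. (III.38)-(III.39): R-L reliability index for two lognormal
# variables. J_Ic statistics follow the text; the J_Rice mean/std are
# placeholders for the PCE-based estimates at a given axial tension.
import numpy as np
from scipy.stats import norm

def ln_params(mean, std):
    """Parameters (mu, sigma) of ln(X) for a lognormal X."""
    s2 = np.log(1.0 + (std / mean) ** 2)
    return np.log(mean) - 0.5 * s2, np.sqrt(s2)

mu_R, s_R = ln_params(52.0, 9.5)            # fracture toughness J_Ic
mu_S, s_S = ln_params(30.0, 6.0)            # placeholder J_Rice statistics

beta = (mu_R - mu_S) / np.hypot(s_R, s_S)   # eq. (III.38)
P_f = norm.cdf(-beta)                       # eq. (III.39)
print(beta, P_f)
```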
Since a closed-form representation is obtained for the PDF of the Rice's integral J_Rice, which fits a lognormal distribution, the failure probability is computed by solving a simple R-L reliability problem, which avoids handling integral quantities and provides a suitable tool for engineering practice. Through the parametric reliability analysis carried out as a function of the axial tension σ_t, we have shown that the accuracy of the proposed approaches is weakly affected by the magnitude of the failure probability to be computed.

Spatially varying uncertainty in an inclined edge cracked plate

In this third example, the Young's modulus is modeled as a random field E(z, ω), expressed in terms of a standard normal random field through equation (III.40), where ω is a parameter underlining the randomness of E(z, ω) and u(z, ω) is a standard normal random field of zero mean and unit standard deviation, governed by the following exponential autocorrelation function:

\rho(z_1, z_2) = \exp\left[ -\left( \frac{|x_1 - x_2|}{l_{cx}} + \frac{|y_1 - y_2|}{l_{cy}} \right) \right] \quad (III.41)

In the above equation, z_1 = (x_1, y_1) and z_2 = (x_2, y_2) are two points of the spatial domain representing the cracked plate, l_cx = 0.5 unit and l_cy = 1.5 units are the correlation lengths in the horizontal and vertical directions, respectively, and |·| denotes the absolute value. The standard normal random field u(z, ω) is discretized using the Karhunen-Loève (KL) method (Ghanem and Spanos, 1991), instead of the EOLE method used in section 5.2.2 of Chapter II. Indeed, as first shown by (Li and Der Kiureghian, 1993), and later confirmed by (Sudret and Der Kiureghian, 2000) through a benchmark study, for a given truncation order M the KL method is more accurate than the EOLE method in the case of an exponential autocorrelation function, since it provides the lowest variance error, especially within the variation domain Ω of the random field to be represented. However, special attention must be paid at the boundaries of Ω, where EOLE may exhibit more accurate results than the KL method.

The estimates of the first two statistical moments, i.e., the mean and the standard deviation, obtained with a PCE of degree p = 2, are listed in Table III.8 and compared to the estimates given by crude cubature formula II and by 10^5 crude MCS. As can be seen, the results given by all the proposed approaches are in complete agreement. The discrepancy with respect to the reference estimates given by 10^5 MCS is insignificant for all the mechanical responses of interest. It appears that the uncertainty on the Young's modulus, i.e., a 10% deviation from its mean value, has a moderate effect on the variability of the crack driving forces, since the coefficients of variation corresponding to the opening fracture mode SIF K_I, the in-plane shear fracture mode SIF K_II, the bifurcation angle θ and the effective SIF K_eff are equal to 2.75%, 3.97%, 1.55% and 2.93%, respectively. It is important to notice that the PCE-based metamodels corresponding to the four mechanical responses of interest are built from the same set of 651 evaluations of the FEM. Thus, handling non-scalar random responses does not affect the efficiency of the full-PCE and sparse-PCE approaches in any way.
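This sharing of one experimental design across the four responses can be sketched with a matrix right-hand side in the least-squares solve; all arrays below are placeholders for the actual design matrix and FEM outputs.

```python
# Sketch: one experimental design serving the four responses (K_I, K_II,
# theta, K_eff). np.linalg.lstsq accepts a matrix right-hand side, so the
# four PCE coefficient vectors come from the same 651 runs. Placeholder data.
import numpy as np

rng = np.random.default_rng(4)
Phi = rng.standard_normal((651, 25))        # placeholder design matrix
Y = rng.standard_normal((651, 4))           # placeholder K_I, K_II, theta, K_eff
A_hat, *_ = np.linalg.lstsq(Phi, Y, rcond=None)   # one coefficient column each
print(A_hat.shape)                          # (25, 4)
```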
The truncation of the polynomial chaos basis based on second moment information significantly reduces the computational effort devoted to solving the least-squares regression problem used in the sparse-PCE approach to estimate the PCE coefficients. Indeed, only 25 of the 325 components of the full polynomial chaos basis H_p have significant contributions to the model responses. The corresponding economy index

\mathcal{E}_{p,q,\sigma^2} = 100 \times \frac{ \mathrm{card}(\mathcal{H}_p) - \mathrm{card}(\mathcal{H}_{p,q,\sigma^2}) }{ \mathrm{card}(\mathcal{H}_p) }

is about 92%, which shows a high sparsity of the truncated polynomial chaos basis H_{p,q,σ²}. Let us now turn our attention to the respective distributions of the mechanical responses. The PDFs of the four crack driving forces are built by a moment-based technique using only the statistical moments estimates given by the full-PCE approach, in order to avoid redundancy since, as stated previously, only a small discrepancy is observed between the results given by the full-PCE and sparse-PCE approaches. As can be observed in figure III.26, the obtained PDFs are in good agreement with the reference ones constructed from 10^5 crude MCS. Furthermore, it appears that the lognormal distribution is the best candidate for fitting the PDFs of the four crack driving forces of interest. This finding is of great interest since closed-form solutions are now available for the PDFs, which is more convenient for designers. In addition, as demonstrated in the previous application example, having an analytical representation of the PDFs avoids the evaluation of multidimensional integrals, since the estimation of the failure probability can be performed by solving a simple R-L reliability problem.

Sensitivity analysis

Next, a sensitivity analysis is conducted to assess the contribution of the uncertain parameters u_i(ω), i ∈ {1, …, 24}, resulting from the representation of the random field E(z, ω) by a 24th order KL expansion, to the variability of the effective SIF K_eff. It is important to recall that this effective crack driving force, which is derived from the opening fracture mode SIF K_I, the in-plane shear fracture mode SIF K_II and the bifurcation angle θ, can be considered, from a physical point of view, as an opening fracture mode SIF in the direction defined by the bifurcation angle θ. This parameter is of great importance when dealing with mixed-mode fracture problems, since it is used in the computation of the fatigue crack growth life instead of K_I and K_II. Moreover, when a reliability analysis is to be performed with respect to a serviceability criterion involving the fracture toughness of the constitutive material, the effective SIF K_eff should also be used. It is important to notice that the sensitivity analysis conducted here did not require any additional runs of the FEM, since the Sobol sensitivity indices are derived from the coefficients of the metamodels already built in the statistical moments and distribution analysis conducted earlier.
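Since the 24 uncertain parameters are the KL modes of the field u(z, ω), a minimal grid-based sketch of such a discretization is given below. The exact KL modes of the exponential kernel (III.41) solve transcendental equations, so a simple eigendecomposition of the correlation matrix on a coarse grid is used here as an approximation; the grid bounds are placeholders, not the actual plate dimensions.

```python
# Grid-based sketch of a truncated KL-type expansion of the standard normal
# field u(z, w) with the separable exponential autocorrelation (III.41).
# The grid bounds are placeholders; the eigendecomposition of the discrete
# correlation matrix approximates the continuous KL modes.
import numpy as np

lcx, lcy = 0.5, 1.5
xs, ys = np.linspace(0, 1, 15), np.linspace(0, 2, 15)
Z = np.array([(x, y) for x in xs for y in ys])          # grid over Omega

D = np.abs(Z[:, None, :] - Z[None, :, :])               # pairwise |dx|, |dy|
R = np.exp(-(D[:, :, 0] / lcx + D[:, :, 1] / lcy))      # eq. (III.41)

lam, phi = np.linalg.eigh(R)
order = np.argsort(lam)[::-1]
lam, phi = lam[order], phi[:, order]

Mkl = 24                                                # truncation order
xi = np.random.default_rng(3).standard_normal(Mkl)      # modes u_i(w)
u_field = phi[:, :Mkl] @ (np.sqrt(lam[:Mkl]) * xi)      # one field realization
print(np.cumsum(lam[:Mkl]) / lam.sum())                 # explained variance
```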
Discussion

We clearly demonstrate the efficiency of the full-PCE and sparse-PCE approaches for conducting different types of uncertainty propagation analysis through a computationally demanding implicit mechanical model. Indeed, if the crude cubature formula II were used to conduct the same sensitivity analysis, we would have to perform at least 651 evaluations of the FEM to obtain the estimates of the Sobol sensitivity indices, in addition to the 651 evaluations already required by the statistical moments analysis. Thus, both proposed approaches reduce the computational effort by at least a factor of two, and possibly a factor of three if a reliability analysis is carried out later. It is worth noting that the computational cost of the sparse-PCE approach is dominated by the derivation of the second moment information needed to build the sparse polynomial chaos basis, rather than by the estimation of the PCE coefficients. If prior second moment information is already available, the computational cost gain should be even more noticeable. As demonstrated previously, the effective probabilistic dimension, defined as the number of eigenmodes required in the KL expansion to explain a given percentile of the total variance of the model response of interest, is lower than the nominal probabilistic dimension. For instance, the first 10 and 15 eigenmodes explain 90% and 95% of the total variance of the effective SIF K_eff, respectively. Consequently, many components of the full polynomial chaos basis should vanish, since they have zero sensitivity indices, implying fewer PCE coefficients to be computed and a better sparsity of the polynomial chaos basis.

Conclusion

Response surface methods rely on the construction of suitable approximations, called metamodels, of the uncertain responses of an implicit mechanical model. In this chapter, we have focused on the well-known PCE method, which provides metamodels obtained by expanding the model responses of interest on a multivariate orthonormal polynomial basis. The mathematical formalism related to the construction of the PCE method has been recalled. The standard random space has been preferred for the construction of the PCE-based metamodels, in order to provide a generalized representation capable of handling statistically independent, as well as dependent, uncertain parameters. The computation of the unknown coefficients of the PCE-based metamodels can be performed either by projection or by regression techniques. For high-dimensional uncertainty propagation problems, it has been shown that the projection technique can lead to a high computational cost when classical integration schemes, such as the Gauss-Hermite full tensor-product scheme, are used to evaluate the multidimensional integrals involved in the computation of the PCE coefficients. The regression technique is also inefficient in such a situation, especially when a full polynomial chaos basis is used to build the metamodels. Two alternative approaches have been developed to circumvent this inefficiency. The first approach, called full-PCE, is derived from projection techniques, where the efficient cubature formulae I-VI are used to compute the multidimensional integrals defining the PCE coefficients; compared to the Gauss-Hermite full tensor-product integration scheme, their efficiency is less affected by the degree of the polynomial chaos basis chosen to construct the PCE-based metamodels. The second approach, called sparse-PCE, is derived from regression techniques, where an efficient truncation scheme uses prior available second statistical moment information to identify the components of the polynomial chaos basis that are most important for the model responses of interest. In this way, the PCE coefficients corresponding to the components with weak effects are discarded, and the computational effort devoted to solving the regression problem is significantly reduced. In this context, an economy index has been introduced in the form of a ratio between the respective cardinalities of the sparse and full polynomial chaos bases, which allows us to objectively assess the computational cost saving obtained by the proposed truncation scheme based on second moment information. Regardless of whether the full-PCE or the sparse-PCE approach is used, two methods have been proposed to carry out the uncertainty propagation analysis: post-processing the PCE coefficients, or performing MCS on the obtained PCE-based metamodels.
The accuracy and efficiency of the so-called full-PCE and sparse-PCE approaches have been carefully investigated in this chapter through three mechanical problems dealing with fatigue fracture. These three application examples have validated the ability of the proposed approaches to perform different types of uncertainty propagation analysis through time-consuming implicit mechanical models with high probabilistic dimensionality. Through the first example, dealing with crack growth in a CCP specimen and involving correlated uncertain parameters, it has been shown that the full-PCE approach is able to efficiently conduct statistical moments and sensitivity analysis, since the number of FEM runs required to achieve the target accuracy on the estimates of the quantities of interest varies between 14 and 21, depending on the cubature formula used in the computation of the PCE coefficients. It appears that the accuracy of the statistical moments estimates obtained by MCS on the PCE-based metamodel of the fatigue crack growth life is slightly better than that given by the post-processing of the PCE coefficients, especially for the skewness and the kurtosis. It has also been pointed out that great care should be taken when choosing the probabilistic model used to represent the variability of the uncertain parameters. Indeed, as has been shown, omitting the statistical dependence between the parameters of the Walker fatigue crack growth law induces erroneous results in the uncertainty propagation analysis: in such a case the coefficient of variation of the fatigue crack growth life is equal to 21.5%, while it should be about 3.14%. The second example involves a computationally greedy mechanical model, since an incremental FEA is required to assess the Rice's integral used as the fracture driving force in ductile fracture problems. In this application, it has been shown that both the full-PCE and sparse-PCE approaches can perform statistical moments and distributions, sensitivity, and reliability analysis with high efficiency. Unlike the crude cubature formulae studied in Chapter II, where additional evaluations of the primary implicit mechanical model are required each time one switches from one type of uncertainty propagation analysis to another, the two proposed approaches allow all three types of uncertainty propagation analysis to be addressed at the same time, based on the same set of evaluations of the FEM needed to compute the Rice's integral. It is found from the statistical moments analysis that the proposed approaches give more accurate estimates of the statistical moments than the crude cubature formula VI, since the maximum relative error with respect to the reference solutions given by 10^5 crude MCS does not exceed 3.7%. This high accuracy allows the PDF of the mechanical response to be constructed directly using a simple moment-based technique. It has been shown that the full-PCE and sparse-PCE approaches based on cubature formula VI require the same computational cost, i.e., 33 FEM runs, and are more efficient than the classical PCE approach based on the Gauss-Hermite full tensor-product integration scheme, which requires 81 FEM evaluations. Compared to the crude cubature formula VI, a noticeable lower-bound computational gain factor of 3 is obtained. Moreover, the use of prior second moment information reduces the computational effort spent on solving the least-squares problem when the sparse-PCE approach is used.
Indeed, it appears that only 10 of the 15 components of a full polynomial chaos basis have a significant effect on the model response, resulting in an economy of 33% in the computation of the PCE coefficients by regression. Through the sensitivity analysis, it has been shown that interactions between the uncertain parameters have an insignificant effect on the variability of the Rice's integral, since the total Sobol indices take the same values as the respective first-order indices. In addition, physically meaningful sensitivity indices have been obtained. For low values of the axial tension applied to the cracked pipe, the Young's modulus appears to be the most important uncertain parameter for the variability of the Rice's integral. However, for high values of the axial tension, where the fracture of the constitutive material is accompanied by significant plastic strains in the vicinity of the crack tip, the yield strength becomes the most significant uncertain parameter. It can be retained that the coefficient α and the strain hardening exponent n of the Ramberg-Osgood behavior law have a weak contribution to the variability of the Rice's integral. They can therefore be considered as deterministic parameters and set to their respective mean values, thus reducing the probabilistic dimension of the problem. Since accurate PDFs are available for the Rice's integral and the lognormal distribution is in good agreement with them, we have been able to estimate the failure probability by solving a simple R-L reliability problem. Through a parametric study as a function of the axial tension applied to the cracked pipe, it has been shown that the proposed approaches give accurate estimates of the reliability index, since the relative error with respect to the estimates given by FORM is less than 2%. Example 3 has a high probabilistic dimensionality, since a 24th order KL representation is used to model the spatial randomness of the Young's modulus of a plate containing an inclined crack. It has been shown that the full-PCE and sparse-PCE approaches are capable of efficiently conducting different kinds of uncertainty propagation analysis through a mechanical model with high probabilistic dimension. It appears that considering vector-valued model responses does not affect the efficiency of the proposed approaches. From the statistical moments and distributions analysis, it can be retained that the spatial variability of the Young's modulus has a moderate effect on the variability of the crack driving forces of interest, since the corresponding coefficients of variation vary in the range [1.55%, 3.97%]. In addition, it has been observed that the lognormal distribution fits the PDFs of all four crack driving forces very well. Through the sensitivity analysis, it has been shown that the uncertain parameters related to the eigenmodes of the KL expansion act separately on the variability of the effective SIF K_eff, since the contributions of the interaction effects are small compared to those of the main effects. It appears that the effective probabilistic dimension is low compared to the nominal one, which allows us to discard the PCE coefficients that have an insignificant effect on the model responses, and thus improves the efficiency of the proposed approaches. The various analyses carried out in this chapter have allowed us to demonstrate the good accuracy and efficiency of the two proposed approaches, called full-PCE and sparse-PCE.
By using the well-established polynomial chaos expansion method, analytical representations, often called metamodels in the uncertainty propagation literature, have been built for scalar as well as vector-valued model responses initially provided by a time-consuming implicit mechanical model. Thus, it is no longer necessary to run additional cycles of the primary implicit mechanical model when one wishes to switch from one type of uncertainty propagation analysis to another, as was the case when using the crude cubature formulae I-VI. Clearly, this chapter provides a consistent response to the question asked at the end of Chapter II.

Conclusion

The work carried out in this thesis was intended to develop unified approaches able to efficiently perform the three possible kinds of uncertainty propagation analysis, i.e., statistical moments and distributions, sensitivity, and reliability analysis, through computationally greedy mechanical models. Particular interest was given to fatigue fracture problems. The challenge was to merge different well-established mathematical methods into robust probabilistic computational strategies whose efficiency is weakly affected by the probabilistic dimension and by the complexity (i.e., the order of the statistical moment, the order of the sensitivity index, or the magnitude of the failure probability) of the quantities of interest corresponding to the uncertainty propagation analysis to be performed. This goal appears to have been achieved through the developments carried out in Chapters II and III, where good results were obtained for a large panel of application examples. After a reminder of the general framework of mechanical fatigue, and particularly of the fatigue crack growth phenomenon, several existing probabilistic models allowing the evaluation of the effect of uncertainties on fracture driving forces were reviewed. Two main categories of probabilistic models were distinguished. The models belonging to the first category are based on Markov chains theory to take into account the sources of uncertainty in the fatigue crack growth process. These purely statistical models, as considered in the literature, have been criticized for their inconsistency with the physics of the fatigue crack growth phenomenon, although they can handle mixed-mode fracture problems. The second category contains models that are more consistent with the physics, obtained by randomizing traditional deterministic crack growth laws, such as the well-known Paris-Erdogan law, through the introduction of random variables or processes. Most of these probabilistic models suffer from inefficiency when the fracture driving forces of interest (i.e., stress intensity factors, fatigue lifetime, …) are derived from time-consuming mechanical models. This inefficiency is even more visible when the probabilistic dimensionality is high. Moreover, only scalar-type variabilities are treated by these models. The spatial randomness of the material properties, which requires the use of advanced probabilistic models called random fields and generally leads to a significant increase of the probabilistic dimensionality, is omitted in probabilistic studies of fatigue fracture. This is why three original approaches to uncertainty propagation were developed in this work.
The first approach, developed in Chapter II, uses six distinct cubature formulae, taken from a broad literature review, to compute the multidimensional integrals representing the quantities of interest related to the type of uncertainty propagation analysis to be performed. As with the well-known MCS, these cubature formulae approximate a multidimensional integral by a single weighted sum of integrand evaluations over a set of smartly sampled integration points in the standard random space (see the sketch below). Thus, the computational cost savings are significant compared to full tensor-product integration schemes, where a summation in each direction of the random space is required to compute a multidimensional integral. After taking a general look at the principle of propagation through models representing physical phenomena at the beginning of Chapter II, the mathematical formulations of the quantities of interest, i.e., the first four statistical moments, the sensitivity indices and the failure probability, were established. It was shown that a common issue is the handling of multidimensional integrals. Through a benchmark study conducted on various application examples, ranging from a simple mathematical equation to a time-consuming implicit model involving spatially varying uncertain parameters, we have demonstrated that, with the exception of cubature formula IV whose computational cost grows exponentially with the probabilistic dimension, all remaining cubature formulae are able to perform efficiently any kind of uncertainty propagation analysis. For high probabilistic dimensions, it appears that cubature formula II provides the best balance between efficiency and accuracy. It has been shown that problems involving a highly heterogeneous mixture of random variables should be handled with great care, since a loss of accuracy was observed on the results given by some cubature formulae. Despite the various advantages offered by cubature formulae I-VI over classical integration schemes, the associated computational cost can explode for high probabilistic dimensions, especially when one wishes to switch from one kind of uncertainty propagation analysis to another. Indeed, additional evaluations of the primary mechanical model are required, since the multidimensional integrals representing the quantities of interest corresponding to statistical moments, sensitivity or reliability analysis involve different integrands. Remedies to overcome the inefficiency of the crude cubature formulae I-VI, i.e., when applied directly to the primary mechanical model, were proposed in Chapter III. Two approaches, called full-PCE and sparse-PCE, were devised based on the well-known polynomial chaos expansion. The key ingredient was to build approximations, called metamodels, by projecting the model responses of interest onto a suitable multivariate orthonormal polynomial basis. Once these metamodels are obtained, any type of uncertainty propagation analysis can be addressed, either by performing MCS or by post-processing the PCE coefficients. The standard random space was preferred to build the PCE-based metamodels, on the one hand, to take advantage of the suitable mathematical properties of the Hermite polynomial basis that simplify the derivation of some quantities of interest such as the statistical moments and, on the other hand, to obtain a generalized representation of the metamodels able to consider uncorrelated as well as correlated uncertain parameters.
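The sketch below illustrates the cubature principle on a Gaussian-weighted integral I[f] = E[f(U)], U ~ N(0, I_N). The rule shown is a classical degree-3 Stroud-type formula with 2N points, chosen purely for illustration; it is not claimed to be one of the thesis's formulae I-VI, which reach higher degrees of exactness.

```python
# Illustration of the cubature idea: a 2N-point degree-3 rule versus crude
# MCS for a Gaussian-weighted integral. Points at +/- sqrt(N) e_i with equal
# weights 1/(2N) integrate all polynomials up to degree 3 exactly.
import numpy as np

def cubature_deg3(f, N):
    """2N points at +/- sqrt(N) e_i, equal weights 1/(2N)."""
    pts = np.vstack([np.sqrt(N) * np.eye(N), -np.sqrt(N) * np.eye(N)])
    return np.mean([f(u) for u in pts])   # equal weights -> plain mean

N = 10
f = lambda u: np.sum(u ** 2) + 2.0 * np.sum(u)    # exact expectation: E[f] = N
rng = np.random.default_rng(2)
mc = np.mean([f(u) for u in rng.standard_normal((10 ** 5, N))])
print("degree-3 cubature, 2N = 20 model runs:", cubature_deg3(f, N))  # 10.0
print("crude MCS, 1e5 model runs:            ", round(mc, 2))         # ~10
```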
In the full-PCE approach, a full polynomial chaos basis was used to construct the metamodels, where the PCE coefficients were computed by projection based on the efficient cubature formulae I-VI. It appears that the computational effort devoted to the estimation of the PCE coefficients is less affected by the probabilistic dimensionality and the polynomial degree chosen to build the metamodels, in contrast to the Gauss-Hermite full tensor-product integration scheme, where the number of mechanical model evaluations grows exponentially in such a situation. In the sparse-PCE approach, an incomplete polynomial chaos basis is used to build the metamodels, where the PCE coefficients were obtained by solving a least-squares regression problem. The sparse polynomial chaos basis was obtained by an original truncation scheme based on prior available second-moment information, where the important components are automatically identified when a significant change is observed in the variance of the model responses of interest (these ingredients are sketched below). The full-PCE and sparse-PCE approaches were applied to three typical fatigue fracture problems. The first one deals with crack growth in a CCP specimen where the considered uncertain parameters are statistically dependent. The second example deals with the ductile fracture of a cracked pipe where the mechanical response of interest is the Rice's integral, available through an incremental FEA with a high computational cost. The third problem, with a high probabilistic dimension, additionally addresses the effect of the spatial variability of the Young's modulus on the mixed-mode fracture driving forces. Both approaches were found to be efficient in deriving, based on the same set of evaluations of the primary implicit mechanical model, statistical moments and distributions, sensitivity indices and failure probabilities. Indeed, accurate estimates of all these quantities are obtained using only 651 FEM runs for the problem with the highest probabilistic dimensionality, which is equal to 24. It was pointed out that the accuracy of higher order statistical moments, such as the skewness and the kurtosis, was better when the estimates were obtained by performing MCS on the metamodels rather than by post-processing the PCE coefficients. Due to the high accuracy of the obtained PDFs of the mechanical responses, it was shown that the failure probability can be computed by solving a simple R-L reliability problem, instead of computing a multidimensional integral. Furthermore, the accuracy of the estimates, which are in good agreement with those provided by the first-order reliability method, is less affected by the magnitude of the target failure probability. Although both proposed approaches require the same number of mechanical model evaluations, the sparse-PCE reduces the computational effort devoted to the post-processing of the PCE coefficients. It appeared that the higher the sparsity of the polynomial chaos basis, measured by a sparsity index called economy, the lower the computational cost. Furthermore, it was shown that a major part of the computational cost of the sparse-PCE approach is due to the computation of the variance of the model response of interest required by the truncation scheme used to identify the important components of the polynomial chaos basis. Thus, if prior information on the variance is available, the sparse-PCE approach should be noticeably more efficient than the full-PCE approach.
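The following minimal sketch gathers the sparse-PCE ingredients on a toy two-dimensional model: an orthonormal Hermite basis, least-squares regression for the coefficients, a variance-based truncation discarding negligible terms, and the post-processing of the kept coefficients into the economy index and first-order Sobol indices. The truncation tolerance and the toy model are assumptions; here the target variance is estimated from the same regression, standing in for the cubature-based estimate used in the thesis.

```python
# Minimal sketch of the sparse-PCE workflow on a toy 2-D model.
import numpy as np
from itertools import product
from math import factorial

def he(n, u):                      # orthonormal probabilists' Hermite polynomial
    H = [np.ones_like(u), u]
    for k in range(2, n + 1):
        H.append(u * H[k - 1] - (k - 1) * H[k - 2])
    return H[n] / np.sqrt(factorial(n))

p, N = 3, 2
alphas = [a for a in product(range(p + 1), repeat=N) if sum(a) <= p]

rng = np.random.default_rng(3)
U = rng.standard_normal((200, N))                 # experimental design
y = np.exp(0.3 * U[:, 0]) + 0.1 * U[:, 1]         # toy "mechanical model"

Psi = np.column_stack([he(a[0], U[:, 0]) * he(a[1], U[:, 1]) for a in alphas])
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)    # regression step

var_tot = np.sum(coef[1:] ** 2)                   # PCE variance (mean excluded)
keep = [i for i in range(1, len(alphas)) if coef[i] ** 2 / var_tot > 1e-3]
print(f"economy: {1 - len(keep) / (len(alphas) - 1):.0%} of the basis discarded")
for j in range(N):                                # first-order Sobol indices
    Sj = sum(coef[i] ** 2 for i in keep
             if alphas[i][j] > 0 and sum(alphas[i]) == alphas[i][j]) / var_tot
    print(f"S_{j + 1} ~ {Sj:.2f}")
```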
In the first application example, the importance of choosing the right probabilistic model to represent the true variability of the uncertain parameters was emphasized. Omitting the correlation between the parameters of the fatigue crack growth law, which is a bad practice yet often observed in probabilistic studies, leads to erroneous results, since the variability of the fatigue crack growth lifetime is abnormally high compared to the real situation where these parameters are naturally correlated. In the second application example, sensitivity analysis provided physically meaningful results. It was shown that the relative contributions of the uncertain parameters depend on the magnitude of the plastic strains surrounding the crack tip. When the axial tension applied to the cracked pipe is at its nominal magnitude, the plastic strains are confined to the crack tip, resulting in brittle fracture, and it was shown that the Young's modulus had the largest contribution to the variability of the Rice's integral. For accidentally high magnitudes of the axial tension, it was observed that the yield strength was the most significant uncertain parameter, since the constitutive material exhibits a highly plastic behavior. Statistical moments and distributions analysis performed in the third application example showed a moderate effect of the spatial randomness of the Young's modulus on the variability of the mixed-mode fracture driving forces. It was noticed that the effective probabilistic dimension is small compared to the nominal one. Referring to the total Sobol indices, only 10 out of 24 random variables, corresponding to the 10 most important eigenmodes of the KL expansion, explain 90% of the total variance of the effective SIF. Finally, it was shown that the full-PCE and sparse-PCE remain efficient even when vector-valued model responses are considered. The three approaches that have been developed, namely crude cubature formulae, full-PCE and sparse-PCE, fulfill the objectives that motivated this thesis. In addition to the interesting results obtained, either concerning the computational enhancement of the uncertainty propagation approaches or the understanding of the close relationship between uncertainties and fatigue fracture, the undertaken work allowed us to identify various tracks for further improvements at different levels. The first level concerns the enhancement of deterministic mechanical models by integrating complex physical phenomena observed during fatigue crack growth to provide more realistic crack driving forces. One such phenomenon is the crack retardation due to overloads, which can occur either accidentally during constant amplitude loading or naturally during variable amplitude loading. It is well-known that a single overload induces a decrease of the crack growth rate, leading to an increase of the fatigue lifetime. Such a free increase of the fatigue lifetime is of great interest to managers of mechanical components and civil engineering structures, as it can be a way to optimize maintenance operations and, consequently, reduce the overall cost spent during the service life. Many well-established models are available in the literature to consider the retardation due to overloads based on plasticity theory. The key ingredient consists in weighting the classical crack growth laws by a correction function whose parameters are derived from experimental data (see the sketch below). These retardation models can be straightforwardly integrated into explicit or implicit mechanical models.
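One classical example of such a correction function is the Wheeler model, sketched below with an Irwin plane-stress plastic zone: after an overload, the Paris rate is weighted by a retardation factor phi < 1 as long as the current plastic zone remains inside the overload plastic zone. Every numerical value and the single-overload scenario are illustrative assumptions, not the thesis's calibrated parameters.

```python
# Minimal sketch of a retardation-corrected crack growth law (Wheeler-type
# correction weighting the Paris rate, Irwin plane-stress plastic zone).
import numpy as np

C, m, gamma = 1e-10, 2.9, 1.3           # toy Paris constants, Wheeler exponent
sig_y, dsigma, R = 350.0, 48.0, 0.0     # yield strength, stress range [MPa]
rp = lambda K: (K / sig_y) ** 2 / (2.0 * np.pi)   # plastic zone radius [m]

a, ncyc, a_ol, rp_ol = 9e-3, 0, None, 0.0
while a < 30e-3:
    Kmax = dsigma / (1.0 - R) * np.sqrt(np.pi * a)
    if ncyc == 20000:                   # a single 80% overload cycle
        a_ol, rp_ol = a, rp(1.8 * Kmax)
    phi = 1.0                           # Wheeler retardation factor
    if a_ol is not None and a + rp(Kmax) < a_ol + rp_ol:
        phi = (rp(Kmax) / (a_ol + rp_ol - a)) ** gamma
    a += phi * C * (dsigma * np.sqrt(np.pi * a)) ** m   # da over one cycle
    ncyc += 1
print(f"cycles to grow from 9 mm to 30 mm: {ncyc}")
```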
The efficiency of the overloads, which can be measured by the induced increase in fatigue lifetime, depends on many parameters, such as their amplitudes, the time interval between them, as well as their periodicity. Thus, it seems very interesting to investigate whether there is an optimal combination of these parameters that maximizes the increase of the fatigue lifetime. The second level concerns the improvement of the efficiency of the proposed uncertainty propagation approaches to deal with problems of higher probabilistic dimensionality. A first track for such an improvement is to use more efficient cubature formulae. In Chapter II of the thesis, it has been shown that cubature formula V is a very promising candidate. The issue is to find a straightforward way to build orthogonal arrays for high dimensions. In this context, it will be relevant to investigate the well-established mathematical tools called Galois fields, which are extensively used in information coding and computer cryptography. As shown in the third application example, the effective probabilistic dimension is much smaller than the nominal dimension for problems with random fields. Thus, finding a way to compute the effective probabilistic dimension before performing the uncertainty propagation analysis should mitigate the effect of the probabilistic dimension on the computational cost. For this purpose, screening approaches such as the Morris method, which is very efficient since the corresponding sensitivity indices can be computed either for separated or gathered uncertain parameters, may be used (a minimal sketch is given below). Finally, the investigation of suitable metamodeling techniques, including High Dimensional Model Representation (HDMR), will be of great interest.
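The sketch below illustrates the Morris screening idea: r trajectories of one-at-a-time moves cost only r(N+1) model runs and rank the inputs by the mean absolute elementary effect mu*, which can hint at the effective probabilistic dimension before any heavy propagation analysis. The toy model and sample sizes are assumptions for illustration only.

```python
# Minimal sketch of Morris screening (elementary effects).
import numpy as np

def morris(f, N, r=50, delta=0.1, seed=4):
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, N))
    for t in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=N)   # trajectory start in [0,1]^N
        fx = f(x)
        for i in rng.permutation(N):                # one-at-a-time moves
            x2 = x.copy()
            x2[i] += delta
            fx2 = f(x2)
            ee[t, i] = (fx2 - fx) / delta           # elementary effect of x_i
            x, fx = x2, fx2
    return np.abs(ee).mean(axis=0), ee.std(axis=0)  # mu* (importance), sigma

# toy model: only 3 of the 8 inputs really matter
f = lambda x: 5.0 * x[0] + 3.0 * x[1] ** 2 + x[0] * x[2] + 0.01 * np.sum(x[3:])
mu_star, sigma = morris(f, N=8)
print("mu*:", np.round(mu_star, 3))   # large mu* flags the important inputs
```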
Sensitivity analysisNext, we conduct a sensitivity analysis to assess the contribution of each uncertain parameters on the variability of the crack driving force 𝐽 𝑅𝑖𝑐𝑒 . The first-order and total Sobol indices are computed by the proposed approaches by post-processing the coefficients of the corresponding PCE-based metamodels and using equations III.26 and III.27, respectively. The obtained results are reported in figure III.17. As can be seen, the estimates given by the full-PCE and sparse-PCE approaches are almost identical. Since MCS is not practical in this problem to compute the Sobol sensitivity indices due to the relatively expensive cost of a single run of the incremental FEA required to compute the Rice's integral, the full-PCE approach where the coefficients are obtained by a full tensor-product Gauss-Hermite integration scheme is used as the reference method. Referring to the first-order or the total Sobol indices, it appears that the variability of the model response 𝐽 𝑅𝑖𝑐𝑒 is mainly due to the uncertainty of the Young's modulus 𝐸, while the coefficient 𝛼 and the strain hardening exponent 𝑛 of the Ramberg-Osgood behavior law, have insignificant effects and can be considered as deterministic quantities, thus set to their respective mean values. The effect of interaction between uncertain parameters is also negligible since the total indices have roughly the same values as the first-order indices. Figure Figure III. 17. Nonlinear cracked pipe: convergence of the first-order and total Sobol indices 3.2.4.Reliability analysis assessing the following unidimensional integral:𝑃 𝑓 = Prob[𝐻(𝒖) ≤ 0] ≈ 𝑃 ̂𝑓,𝑝,𝜎 𝑡 = Prob[𝐽 𝐼𝑐 -ℎ 𝜎 𝑡 𝑃𝐶𝐸 (𝒖) ≤ 0] = ∫ 𝑓 ̂𝐽𝑅𝑖𝑐𝑒 ,𝑝,𝜎 𝑡 (𝐽) 𝐹 𝐽 𝐼𝑐 (𝐽)where 𝑓 ̂𝐽𝑅𝑖𝑐𝑒 ,𝑝,𝜎 𝑡 (𝐽) is the approximation of the PDF of Rice's integral 𝐽 𝑅𝑖𝑐𝑒 and 𝐹 𝐽 𝐼𝑐 (𝐽) is the CDF of the fracture toughness 𝐽 𝐼𝑐 , which are already available under analytical forms. Figure Figure III. 18. Nonlinear cracked pipe: Resistance-Loading reliability problem (left), PDF of the performance function and definition of the reliability index of Rjanitzyne-Cornell If we look at equations III.35, III.36 and III.37, we clearly find the well-known elementary reliability problem, referred to in the literature as either by the Capacity-Demand (C-D) or the Resistance-Loading (R-L) problem, the basic principle of which is illustrated in figure III.18 (see also section 2.4 of Chapter IIfor more details on this issue).By analogy, the fracture toughness 𝐽 𝐼𝑐 represents the Resistance part, whereas the Rice's integral 𝐽 𝑅𝑖𝑐𝑒 represents the Loading part. Since the fracture toughness 𝐽 𝐼𝑐 follows a lognormal distribution, and the Rice's integral also tends to follow a lognormal distribution as shown in the statistical moments analysis conducted previously, the evaluation of the integral III.37 can be avoided, and the failure probability can be approximated as follows: Figure Figure III. 19. Nonlinear cracked pipe: Resistance-Loading reliability problem for 𝜎 𝑡 = 180 𝑀𝑃𝑎 (left), comparison of the reliability analysis results given by the proposed methods and FORM (right) Before closing discussion on this application example, an interesting, even obvious question that we should ask ourselves is: is there is any changes in the importance order of contributions of the uncertain parameters on the variability of the Rice's integral due to the increase of the axial tension 𝜎 𝑡 ? 
To find a clear response, let us plot the total Sobol sensitivity indices with respect to the magnitude of the axial tension when varying in the range [140 𝑀𝑃𝑎, 200 𝑀𝑃𝑎] as depicted in figure III.20. As expected, the picture does not the same for all values of the axial tension, which has a physical meaning. Indeed, when the axial increases the constitutive material exhibits a high plastic behavior, especially in the vicinity of the crack, and thus whey the yield strength 𝜎 𝑦 becomes the most contributor uncertain parameter on the variability of Rice's integral for the highest value of the axial tension, that is 𝜎 𝑡 = 200 𝑀𝑃𝑎. However, no significant change in the contributions of the coefficient 𝛼 and the strain hardening exponent 𝑛 of the Ramberg-Osgood behavior law, which remain the uncertain parameters with the weakest effects. Figure Figure III. 20. Nonlinear cracked pipe: evolution of the total Sobol indices with respect to the magnitude of the axial tension 𝜎 𝑡3.2.5. DiscussionThrough this example, we have demonstrated that the proposed approaches, named full-PCE and sparse-PCE, are able to carry out statistical moments, sensitivity, and reliability analysis at very low computational cost, since only 33 runs of the FEM are required to obtain a good accuracy on the corresponding quantities Consider the rectangular plate of height 2𝐿 = 2 𝑢𝑛𝑖𝑡𝑠 and width 𝑊 = 1 𝑢𝑛𝑖𝑡 visualized in figure III.21. It is subjected to tensile load 𝜎 = 1 𝑢𝑛𝑖𝑡 on its bottom and top edges and has an open inclined crack with dimensions 𝑎 = 𝑧 = 0.5 𝑢𝑛𝑖𝑡. This problem has been first introduced by (Long and al, 2016) to perform a local sensitivity analysis on the fracture driving forces, using the stochastic scaled boundary finite element method. Later, this problem was addressed in (Chahine and al, 2021) to assess the reliability of the inclined cracked plate considering the two-dimensional spatial variability of the mechanical properties of the constitutive material.Due to the orientation of the initial crack with respect to the applied load, this later naturally tends to propagate in a mixed fracture mode, instead of a simple opening fracture mode. Thus, a FEM is developed in the software (cast3m, 2021) to compute the fracture driving forces, namely the opening fracture mode SIF 𝐾 𝐼 , the in-plane shear fracture mode SIF 𝐾 𝐼𝐼 and the bifurcation angle 𝜃. The finite element mesh, consisting of 976 6-node triangular elements and 2155 nodes, is extremely refined around the crack tip, as shown in figure III.21, to ensure good accuracy of the fracture driving forces estimates. However, a coarse mesh is used in vicinity of the outer plate edges, to reduce the number of Degrees Of Freedom Chapter III: Unified approach for uncertainty propagation analysis S.Chahine | Efficient uncertainty propagation approaches to solve a large class of fatigue crack-growth problems 129 (DOF) in the overall finite element mesh, and therefore optimize the computational cost required by a single FEM run. Figure Figure III. 21. Inclined edge-cracked plate: geometry and applied loads (left), finite element mesh (left) The Young's modulus 𝐸(𝒛, 𝜔) of the constitutive material of the cracked plate is considered as an uncertain parameter whose variability varies along both the horizontal and the vertical directions denoted by 𝑥 and 𝑦, respectively, and gathered in the vector 𝒛 = (𝑥, 𝑦). 
It is modeled by a two-dimensional lognormal random field, with mean value 𝜇 𝐸 = 20.7 10 6 𝑢𝑛𝑖𝑡𝑠 and standard deviation 𝜎 𝐸 = 2.07 10 6 𝑢𝑛𝑖𝑡𝑠, which can be defined simply as the exponential of a normal random field 𝑣(𝒛, 𝜔) = Ln(𝐸(𝒛, 𝜔)) with mean 𝜇 𝑣 = ln(𝜇 𝐸 ) -1 2 ln(1 + 𝜎 𝐸 2 𝜇 𝐸 2 ⁄ ) and standard deviation 𝜎 𝑣 = √ln(1 + 𝜎 𝐸 2 𝜇 𝐸 2 ⁄ ): facts are clearly illustrated in figure III.22, which shows a comparison of the variance error provided by the EOLE and KL methods of degree 𝑀 = 10, used for the representation of a standard normal random field S.Chahine | Efficient uncertainty propagation approaches to solve a large class of fatigue crack-growth problems 130 governed by an exponential autocorrelation function with correlation length 𝑙 𝑐 = 5 in the one-dimensional domain Ω = [0, 10]. Figure Figure III. 22. Inclined edge-cracked plate: comparison of the variance error provided by the EOLE and KL methodsReferring to the KL method, the 𝑀 𝑡ℎ order approximation of the lognormal random field 𝐸(𝒛, 𝜔) reads: Figure Figure III. 23. Inclined edge-cracked plate: example of 10 realizations of the Young's modulus field 𝐸(𝒛, 𝜔) with mean 𝜇 𝐸 = 20.7 10 6 𝑢𝑛𝑖𝑡𝑠 and standard deviation 𝜎 𝐸 = 2.07 10 6 𝑢𝑛𝑖𝑡𝑠 Due to the spatial variability of the Young's modulus, the components of the displacement field induced by the load applied to the cracked plate are also spatially varying uncertain parameters that can be conveniently represented by random fields. Of interest are the horizontal and the vertical displacements given by the FEM at the 2155 nodes of the finite element mesh depicted in figure III.21. The corresponding random fields are denoted by 𝑑 𝑥 (𝒛, 𝜔) and 𝑑 𝑦 (𝒛, 𝜔), respectively. Figure III.24 shows 10 realizations of the random field of the equivalent displacement 𝑑(𝒛, 𝜔) = √𝑑 𝑥 2 (𝒛, 𝜔) + 𝑑 𝑦 2 (𝒛, 𝜔), associated respectively with the realizations of the Young's modulus random field presented in figure III.23. Figure Figure III. 24. Inclined edge-cracked plate: example of 10 realizations of the equivalent displacement field 𝑑(𝒛, 𝜔) As can be seen, the equivalent displacement 𝑑(𝒛, 𝜔) is indeed a spatially varying quantity, which means that at each node of coordinate 𝒛 𝑘 = (𝑥 𝑘 , 𝑦 𝑘 ), 𝑘 = 1, … ,2155 in the cracked plate, the variability of the corresponding equivalent displacement 𝑑(𝒛 𝑘 , 𝜔) can be simply represented by a random variable where the related statistical characteristics are to be determined by statistical analysis on an available sample of realizations. Figure III.25 shows the spatial variation of the mean 𝜇 𝑑 (𝒛, 𝜔) and standard deviation 𝜎 𝑑 (𝒛, 𝜔) of the equivalent displacement with respect to the coordinates 𝒛 𝑘 = (𝑥 𝑘 , 𝑦 𝑘 ), 𝑘 = 1, … ,2155 of the nodes of the finite element mesh, computed based on 10 5 crude MCS. As can be observed, these statistical parameters 𝜇 𝑑 (𝒛, 𝜔) and 𝜎 𝑑 (𝒛, 𝜔) are also random fields. The PDFs of the equivalent displacement recorded Figure Figure III. 25. Inclined edge-cracked plate: equivalent displacement mean (upper left), standard deviation (upper right), PDF at node of coordinate 𝒛 1 = (0, 0.49) (lower left) and PDF at node of coordinate 𝒛 2 = (-0.29, -0.05) (lower right)3.3.2. Statistical moments and distributions analysisThe fracture driving forces of interest are the opening fracture mode SIF 𝐾 𝐼 , the in-plane shear fracture mode SIF 𝐾 𝐼𝐼 , the bifurcation angle 𝜃 and the effective SIF 𝐾 𝑒𝑓𝑓 , gathered in the model responses vector 𝑦 = {𝐾 𝐼 , 𝐾 𝐼𝐼 , 𝜃, 𝐾 𝑒𝑓𝑓 } 𝑇 . Figure Figure III. 26. 
Inclined edge-cracked plate: comparison of the PDFs of the crack driving forces 𝐾 𝐼 , 𝐾 𝐼𝐼 , 𝜃 and 𝐾 𝑒𝑓𝑓 Figure Figure III. 27. Inclined edge-cracked plate: comparison of the estimates of the first-order Sobol indices Due to the high probabilistic dimension of the problem, the evaluation of Sobol indices by MCS or crude cubature formula II is impractical. Therefore, the following sensitivity analysis relies only on the full-PCE and sparse-PCE approaches. Figure III.27 compares the estimates of the first-order Sobol indices obtained by post-processing the PCE coefficients of the metamodels given by the full-PCE and sparse-PCEapproaches. As can be seen, the first-order sensitivity indices given by both the full-PCE and sparse-PCE approaches are practically identical. This fact can be considered as an indicator of convergence for the obtained estimates and they can therefore represent the reference solution. A very fast decay of the main Figure Figure III. 28. Inclined edge-cracked plate: comparison of the estimates of the total Sobol indices formulae I-VI studied in Chapter II are used instead of the classical integration schemes to compute the PCE coefficients. It has been shown that these fifth-degree cubature formulae require a limited number of evaluations of the integrand of the multidimensional integrals representing the PCE coefficients. Unlike the Chapter III: Unified approach for uncertainty propagation analysis S.Chahine | Efficient uncertainty propagation approaches to solve a large class of fatigue crack-growth problems 137 | Efficient uncertainty propagation approaches to solve a large class of fatigue crack-growth problems 140 Conclusion S.Chahine | Efficient uncertainty propagation approaches to solve a large class of fatigue crack-growth problems 141 Table I . I 1. Accuracy analysis of the numerical results obtained by the energetic and kinematic methods 20 Table I. 2. Paris-Erdogan's law parameters ...................................................................................... 32 Table I. 3. Walker's law parameters ................................................................................................ 32 Table I . 
I Size of the plastic zone produced by the overload at 𝑎 𝑂𝐿 𝑟 𝑝,𝑖Size the plastic zone produced at the current crack length 𝑎 𝑖 Nomenclature ∆𝐾 𝑡ℎ Threshold of the stress intensity factor 𝜑 𝑼 𝒏 Probability density of 𝑢 𝑛 𝑎 𝑖 Current crack length 𝐾 𝐼𝑐 𝑤 𝑘 𝑛 𝑟 𝑝,𝑂𝐿 Facture toughness Gauss-Hermite weights ∆𝜎 ∆𝐾 𝑒𝑞 𝑢 𝑘 𝑛 Variation of the nominal stress (amplitude of the loading) Equivalent stress intensity factor Gauss-Hermite integration points 𝑁 𝑁 𝑓 𝐻 𝑛 (𝑥) 𝑚 𝑤ℎ𝑒𝑒𝑙𝑒𝑟 Number of loading cycles Number of loading cycles to failure Hermite polynomials Shaping exponent 𝐶 𝑆𝑁 𝑟 𝒖 𝑖 𝜎 𝑦𝑙𝑑 Constant of the Wöhler curve Distance from the crack tip to a given point Integration points Yielding strength of the material 𝑚 𝑆𝑁 ∆𝐾 𝑒𝑓𝑓 𝑤 𝑖 𝐾 𝑟𝑒𝑑 Slope of the Wöhler curve Effective stress intensity factor range Integration weights Residual stress intensity factor ∆𝜎 𝐷 ∆𝑎 𝑑 𝑇 𝐾 𝑚𝑎𝑥,𝑒𝑓𝑓,𝑖 Endurance limit of the material Crack growth increment Highest order of the function: smallest integer Maximum effective stress intensity factor ∆𝜀 𝐷 0 𝜎 𝑢 2 𝐾 𝑚𝑎𝑥,𝑖 Total strain amplitude Random variable Partial variance Maximum apparent stress intensity factor (under constant amplitude) ∆𝜀 𝑒𝑙 𝒑 𝟎 𝜎 2 𝐾 𝑚𝑖𝑛,𝑒𝑓𝑓,𝑖 Elastic strain amplitude Vector of the initial distribution of the different level of damage Total variance Minimum effective stress intensity factor ∆𝜀 𝑝𝑙 𝒑 𝒕 ℱ 𝐾 𝑚𝑖𝑛,𝑖 Plastic strain amplitude Vector of the distribution of each level of damage Random set Minimum apparent stress intensity factor ′ 𝐹 𝑊 (𝑡; 𝑏) 𝐻(𝑥, 𝜃) ∆𝐾 𝑒𝑓𝑓,𝑖 𝜀 𝑓 𝜎 𝑓 ′ 𝐴(𝑡, 𝛾) 𝜌(𝑥, 𝑥 ′ ) 𝐾 𝑚𝑎𝑥,𝑂𝐿 𝐸 𝑓(𝑎) 𝐶 𝐻𝐻 (𝑥, 𝑥 ′ ) ∆𝑎 𝑎 𝜎 𝑚 𝐻 ̂(𝑥, 𝜃) 𝛽 Fatigue ductility coefficient Cumulative distribution function Random field Effective stress intensity factor range Probabilistic process Correlation function Stress intensity factor of the overload cycle Fatigue strength coefficient Probability density function Autocovariance function Crack growth length since the overload cycle Young modulus of the material Crack length Minimum stress amplitude Approximation field Plastic zone size factor ∆𝐾 𝜎 𝑀 𝛹 𝜶 𝑅 𝑒𝑓𝑓 Variation of the stress intensity factor Maximum stress amplitude Multivariate Hermite polynomials Effective stress ratio Ф 𝑎 𝜶 𝜙 𝑎 𝑎 0 𝑎 𝑐 𝑌(𝑎) 𝜶 𝐾 𝑜𝑝 𝛹(𝑎) 𝑀 ̂ ∆𝐾 𝑡ℎ 𝐾 𝐼 𝑋 𝔼[. 
] ∆𝐾 0 𝐾 𝐼𝐼 𝑥 𝜑 𝑁 𝑁 𝑓 𝑅𝐴𝐿 𝐾 𝐼𝐼𝐼 𝑦 𝑞 𝑁 𝑓 𝐸𝐶𝐴𝐿 𝐸 𝑝 𝐺 𝜇 𝑤 𝑒 𝛿 𝑥𝑗 𝜎 𝑖𝑗 𝑢 𝑖 𝛥𝑃 𝜎 𝜃 𝜃 𝜎 𝑟𝜃 𝑊 𝑡𝑜𝑡 𝛿 𝑑𝑎 𝑑𝑁 𝑃 ̂𝑓 𝑦 𝑠 ̃ 𝑓 𝑘 (𝑥 𝑘 ) 1 𝛺 𝐹 𝑃 𝑓 𝐾 𝑒𝑞 𝜎 𝑌 𝐺(𝑿) 𝜇 𝑌 𝜎 𝐸 0 𝜑 𝑢 (𝑢) 𝜇 𝐸 0 𝑝 𝑥 (𝑥) 𝜙 𝑖 𝑚 𝑌 𝑙 𝜆 𝑖 𝑝 𝑦 (𝑦) 𝐸(𝒙, 𝜃) ℎ ≡ 𝑓 ⃘ 𝑇 𝑙 𝑦 𝑙 𝑥 𝑈 𝑌 𝑇 𝑌 𝛺 𝑝 𝒯 𝑛 ∆𝜎 𝑒𝑞 𝑝 ∆𝜎 𝑖 Initial crack length Arbitrary invertible function Deterministic unknown coefficients Acceleration factor Geometric correction function Multi-index SIF of crack closure level Critical crack length Stress intensity factor mode I Increasing function of the evolution of the crack length Polynomial approximation Threshold stress intensity factor range Stress intensity factor mode II Random vector of variables Mathematical expectation Threshold value of the SIF range for 𝑅 = 0 Stress intensity factor mode III Vector of real values associated to the randomness Probability density Fatigue lifetime associated to the true random amplitude loading Potential energy Mechanical model response Quadrature order Fatigue lifetime associated to the equivalent constant amplitude loading Strain energy release rate Estimation of the probability of failure Approximation of the mechanical response Fatigue Crack Growth Rate Indicator function in 𝛺 𝐹 One-dimensional function of the individual contribution of the parameter 𝑥 𝑘 Small crack Probability of failure Equivalent stress intensity factor Total energy Standard deviation Limit state function representing a failure scenario Shear stress Mean Standard deviation of the normal random field Bifurcation angle Joint probability density of an N-dimensional standard Gaussian random variable Mean of the normal random field Circumferential stress Joint probability density of the random variable 𝑋 Eigenvectors of the correlation matrix Tension loading Statistical moment Values of the correlation matrix Displacement field Random field Probability density Stress field Function representing the physical model in standard random space Correlation lengths in the 𝑦 direction Crack opening displacement Standard Gaussian random variable Correlation lengths in the 𝑥 direction Strain energy density Iso-probabilistic transformation Mechanical response Shear modulus of the material Probabilistic model Space of random events Degree of polynomial chaos expansion Equivalent stress amplitude Set of index associated with a polynomial chaos expansion of degree 𝑝 Random loading amplitude 𝐶 𝑛 𝑝 𝑟 𝑦 Material parameters Quadrature order Size of the cyclic plastic zone 𝑚 𝐼[. ] 𝜙 𝑅 Material parameters Integration designing a linear function Retardation parameter R 𝑢 𝑛 𝑎 𝑂𝐿 Load ratio Random variable in the ℝ space Crack length at which the overload is applied Table I . I 1. Accuracy analysis of the numerical results obtained by the energetic and kinematic methods Energetic method Kinematic method Analytical 𝐾 𝐼 Integration path 𝐾 𝐼 |𝐸𝑟𝑟𝑜𝑟| (%) Mesh size 𝐾 𝐼 |𝐸𝑟𝑟𝑜𝑟| (%) 17.659 0.541 17.914 0.895 17.682 0.415 17.875 0.677 17.755 17.685 0.396 17.850 0.535 17.686 0.391 17.832 0.434 Based on the maximization of 𝜎 𝜃 with respect to the crack orientation angle 𝜃, the bifurcation angle 𝜃 0 is solution of the following equation: 𝑡𝑎𝑛 ( 𝜃 0 2 ) = 1 4 ( 𝐾 𝐼 𝐾 𝐼𝐼 ) ± 1 4 √ ( 𝐾 𝐼 𝐾 𝐼𝐼 ) 2 + 8 (𝐼. 17) 1)𝑠𝑖𝑛 𝜃 2 ] (𝐼. 15𝑎) 𝜎 𝜃𝜃 = 2 √2𝜋𝑟 [𝐾 𝐼 (1 + 𝑐𝑜𝑠𝜃)𝑐𝑜𝑠 𝜃 2 -3𝐾 𝐼𝐼 𝑠𝑖𝑛𝜃 𝑐𝑜𝑠 𝜃 2 ] (𝐼. 15𝑏) 𝜏 𝑟𝜃 = 2 √2𝜋𝑟 [𝐾 𝐼 𝑠𝑖𝑛𝜃 𝑐𝑜𝑠 𝜃 2 + 𝐾 𝐼𝐼 (3𝑐𝑜𝑠𝜃 -1) 𝑐𝑜𝑠 𝜃 2 ] (𝐼. 15𝑐) Thus, in mode I and II, to obtain 𝜃 0 the location of the maximum stress, we should resolve: 𝜕𝜎 𝜃 𝜕𝜃 = 0 𝑎𝑛𝑑 𝜕²𝜎 𝜃 𝜕𝜃² < 0 (𝐼. 
16) S.Chahine | Efficient uncertainty propagation approaches to solve a large class of fatigue crack-growth problems 22 is based on the theory introduced by Griffith, which they assume is valid in the case of crack growth. This criterion studied the possibility of propagation in the direction that maximizes the strain energy release rate 𝐺. 𝜕𝐺 𝜕𝜃 = 0 𝑎𝑛𝑑 𝜕²𝐺 𝜕𝜃² < 0 (𝐼. 18) The calculation of 𝐺 for a small crack 𝛿 advance oriented at 𝜃 angle, as we can see in figure I.16, is given by the following relation with the respect of the existing crack (Hussain and al, 1974): 𝐺(𝜃) = 4 𝐸 * ( 1 3 + 𝑐𝑜𝑠²𝜃 ) 2 ( 1 -1 + 𝜃 𝜋 𝜋 𝜃 ) 𝜃 𝜋 × [(1 + 𝑐𝑜𝑠²𝜃)𝐾 𝐼 2 + 8𝑠𝑖𝑛𝜃𝑐𝑜𝑠𝜃𝐾 𝐼 𝐾 𝐼𝐼 + (9 -𝑐𝑜𝑠 2 𝜃)𝐾 𝐼𝐼 2 ] (𝐼. 19) where 𝐸 * = { 𝐸 1-𝜈 2 𝐸 for plane strain for plane stress Finally, we have to integrate and solve 𝑎 11 𝑘 𝐼 2 + 2𝑎 12 𝑘 𝐼 𝑘 𝐼𝐼 + 𝑎 22 𝑘 𝐼𝐼 Chapter I: State of art on probabilistic modelling of fatigue crack propagation 2 (𝐼. 20) where 𝑘 𝑖 = 𝐾 𝑖 √𝜋 with𝑖 ∈ {𝐼, 𝐼𝐼, 𝐼𝐼𝐼} S.Chahine | Efficient uncertainty propagation approaches to solve a large class of fatigue crack-growth problems 23 , or in the direction where the dilatation energy density is maximum. Crack propagation begins when S= 𝑆 𝑐 : 𝜕𝑆 𝜕𝜃 = 0 𝑎𝑛𝑑 𝑆 = 𝑎 11 𝑘 𝐼 2 + 2𝑎 12 𝑘 𝐼 𝑘 𝐼𝐼 + 𝑎 22 𝑘 𝐼𝐼 2 = 𝑆 𝑐 (𝐼. 21) Table I I 𝑅 𝑚 𝐶 𝑅 𝐿𝑅𝐺𝐹 2 0 3.70 1.14 10 -9 0.973 0.2 4.22 2.80 10 -10 0.979 0.33 3.59 3.45 10 -9 0.977 0.5 3.72 4.02 10 -9 0.969 0.67 3.98 4.22 10 -9 0.981 0.7 4.01 4.53 10 -9 0.973 0.8 4.52 3.00 10 -9 0.938 Table I. 3. Walker's law parameters 𝑚 1 𝛾 𝐶 1 𝑅 𝐿𝑅𝐺𝐹 2 3.82 0.56 8.19 10 -9 0.975 . 2. Paris-Erdogan's law parameters b) Walker's law The Walker's law can fit all experimental data for different values of stress ratio 𝑅 by a single curve when plotting the FCGR 𝑑𝑎 𝑑𝑁 ⁄ versus an effective SIF range ∆𝐾 𝑒𝑓𝑓 = ∆𝐾 (1 -𝑅) (1-𝛾) ⁄ in equation (I.24). The parameters of the Walker's law are obtained through multiple linear regression analysis applied to 𝑑𝑎 𝑑𝑁 ⁄ and ∆𝐾 is computed previously for the Paris-Erdogan's law, after transforming them to log-log space. The estimates of parameters of the Walker's law, and the fitted model are given in figure I.27 and table I.3, respectively. Table I I . 4. Forman's law parameters 𝑚 2 𝐶 2 𝑅 𝐿𝑅𝐺𝐹 2 3.11 3.32 10 -7 0.980 Table I I 𝑅 𝑎 𝑐 (𝑖𝑛𝑐ℎ) Experimental Numerical Experimental 𝑁 𝑓 (𝑐𝑦𝑐𝑙𝑒𝑠) Walker Forman 0.00 0.80 1.6828 (110%) 3050 2268 (26%) 2398(21%) 0.20 1.40 2.2425 (60%) 8420 7519 (11%) 7733(8%) 0.50 1.40 2.9906 (114%) 42500 49473 (16%) 47809(12%) 0.67 1.80 3.3380 (85%) 154000 178817 (16%) 165688(8%) . 5. Comparison of experimental and numerical results for 𝑁 𝑓 and 𝑎 𝑐 is based on the same relationships as in the Wöhler curves (i.e. 𝑆 -𝑁 curves). Indeed, this model is constructed by establishing a relationship between the distributions of the initial crack length and the number of loading cycles at failure determined from the 𝑆 -𝑁 curves. The main advantage of this model is that it uses as less random expectations as possible.Let 𝑎 0 be the random size of the dominant crack present in the structure and 𝑓 0 (𝑎 0 ) the probability density function corresponding to it. This model considers a fatigue test conducted on a structure that is solicited by a cyclic loading with a constant amplitude varying in the interval defined by the minimum 𝜎 𝑚 and maximum value 𝜎 𝑀 . 
It determines the cumulative distribution function of the random lifetime 𝑁 as a Weibull or Gumbel family of models able to reproduce not only the whole Wöhler field, but any combination of minimum and maximum stresses.For more precision in the model formulation, the initial crack size 𝑎 0 and the number of loading cycles 𝑁 are replaced respectively by normalized parameters 𝑎 0 𝑎 𝑐 and 𝑁 𝑁 0 Finally, section 5 addresses various numerical examples to show the potential of monomial cubature schemes in the computation of multidimensional integrals involved in uncertainty where 𝑦 ∈ 𝔇 𝓎 is the quantity of interest or the response of the mechanical model (e.g., a crack length, a propagation analysis. 2. Uncertainty propagation framework 2.1. General principle Let us consider a mechanical model (e.g., a fatigue cracked component) having 𝑁 uncertain parameters gathered in the vector 𝒙 = { 𝑥 1 , … , 𝑥 𝑁 } 𝑇 ∈ 𝔇 𝔁 . Mathematically speaking, this model can be represented by the following deterministic mapping 𝑓: 𝑦 = 𝑓(𝒙) (𝐼𝐼. 1) S.Chahine | Efficient uncertainty propagation approaches to solve a large class of fatigue crack-growth problems 50 monomial cubature schemes. stress intensity factor, a fatigue life, etc.) obtained either by explicit (i.e., analytical closed formula) or implicit (i.e., finite elements model) representation of the function 𝑓. In the sequel, and without a loss of generality, only models having a single mechanical response are presented, which means that 𝑦 is a scalar quantity. Indeed, all the derivations hold component-wise in case of vector-valued models 𝒚 = { 𝑦 1 , … , 𝑦 𝑀 } 𝑇 ∈ 𝔇 𝔂 . 𝑝 𝑿 (𝒙) 𝑑𝒙where 𝔇 𝓧 is the support of the 𝑁-dimensional random variable 𝑿 = { 𝑋 1 , … , 𝑋 𝑁 } 𝑇 , 𝔇 𝒳 𝒊 and 𝑝 𝑋 𝑖 (𝑥 𝑖 ) are the support and the marginal probability density function of the random variable 𝑋 𝑖 .It follows from (II.11) and (II.12) that 𝑓 0 is the mean value of the function 𝑓, all summands are orthogonal, and the expectation of any summand vanish. Consequently, a recursive construction of the components functions of (II.10) can be obtained: 𝑋 𝑖 1 , 𝑋 𝑖 2 ) = 𝔼[𝑓(𝑿)|𝑋 𝑖 1 , 𝑋 𝑖 2 ] -𝑓 𝑖 1 (𝑋 𝑖 1 ) -𝑓 𝑖 2 (𝑋 𝑖 2 ) -𝑓 0 (𝐼𝐼. 11) 𝔇 𝓧 ∫ 𝑓 𝑖 1 ,…,𝑖 𝑠 (𝑥 𝑖 1 , … , 𝑥 𝑖 𝑠 ) 𝑝 𝑋 𝑖 (𝑥 𝑖 ) 𝑑𝑥 𝑖 𝔇 𝒳 𝒊 = 0, ∀ 𝑖 ∈ {𝑖 1 , … , 𝑖 𝑠 } (𝐼𝐼. 12) 𝑓 0 = 𝔼[𝑓(𝑿)] (𝐼𝐼. 13) 𝑓 𝑖 (𝑋 𝑖 ) = 𝔼[𝑓(𝑿)|𝑋 𝑖 ] -𝑓 0 (𝐼𝐼. 14) 𝑓 𝑖 1 ,𝑖 2 ( (𝐼𝐼. 15) and so on, where 𝔼[. |. ] denotes the mathematical conditional expectation. Now by squaring the Sobol decomposition (II.10) and integrating over 𝔇 𝓧 , the total variance of random variable 𝑌 representing the variability of the mechanical response 𝑦, can be obtained as follows: Now, let us come back to the multidimensional case. Since the uncertainty propagation problem is written in the standard random space, the random vector 𝑼 = { 𝑈 1 , … , 𝑈 𝑁 } 𝑇 is represented by 𝑁 independent standard normal variables and the corresponding joint probability density function 𝜑 𝑼 (𝒖) can be obtained simply as the product of the probability density functions 𝜑 𝑈 i (𝑢 i ), 𝑖 ∈ {1, … , 𝑁} of the set of random variables𝑈 i , 𝑖 ∈ {1, … , 𝑁}.Accordingly, one-dimensional cubature formulae 𝐼 𝑖 𝑀 𝑖 , 𝑖 ∈ {1, … , 𝑁} of level 𝑀 𝑖 , 𝑖 ∈ {1, … , 𝑁}, as the one used in equation (II.33), can be derived from each probability density function 𝜑 𝑈 i (𝑢 i ), and the multidimensional integral (II.29 Table II II . 1. 
Expectation value of the integrand (II.50) given by 10 5 MCS and GH3 for various values of the dimension of integration 𝑁 3 4 5 6 7 8 9 10 MCS 1.3044 1.3934 1.4778 1.5585 1.6349 1.7088 1.7793 1.8477 GH3 1.3023 (27,0.16%) 1.3913 (81,0.14%) 1.4757 (243,0.14%) 1.5561 (729,0.15%) 1.6328 (2187,0.12%) 1.7064 (6561,0.13%) 1.7772 (19683,0.11%) 1.8454 (59049,0.12%) The probability distributions and the statistical characteristics (i.e., mean, and standard deviation) of the random variables 𝑿 used to represent the variability of the uncertain parameters 𝒙 related to the above models are given in table II. 2.Table II. 2. Probability distributions and statistical characteristics of the random variables related to the performancefunctions 𝐺 1 (𝒙), 𝐺 2 (𝒙) and 𝐺 3(𝒙) Chapter II: Identification of efficient quadrature scheme for the computation of multidimensional integrals Performance function Random variable Distribution 𝜇 𝜎 𝑋 1 Weibull 4 0.1 𝑋 2 Lognormal 25000 2000 𝐺 1 (𝒙) 𝑋 3 𝑋 4 Gumbel Uniform 0.875 20 0.1 1 𝑋 5 Exponential 100 100 𝑋 6 Normal 150 10 𝑋 1 Normal 1.01 0.0606 𝑋 2 Lognormal 400 40 𝐺 2 (𝒙) 𝑋 3 Normal 20 3.6 𝑋 4 Normal 95.87 10 -3 9.587 10 -3 𝑋 5 Gumbel 67.11 10 -3 6.711 10 -3 𝑋 1 Weibull 0.9377 0.0459 𝑋 2 Normal 220000 5000 𝐺 3 (𝒙) 𝑋 3 𝑋 4 Normal Uniform 21000 0.29/385.82 1000 0.0058/385.82 𝑋 5 Normal 24 0.5 𝑋 6 Normal 8 0.3 𝑥 1 𝑥 2 𝑥 3 𝑥 4 - 2 𝑥 5 𝑥 6 8 (𝐼𝐼. 52) 𝐺 2 (𝒙) = 7.645 × 10 -4 𝑥 1 𝑥 2 (1 -7.217 × 10 -3 𝑥 2 𝑥 3 ) -𝑥 4 -𝑥 5 (𝐼𝐼. 53) 𝐺 3 (𝒙) = √ 3𝑥 1 𝑥 2 (𝑥 5 -𝑥 6 ) ( 𝜋𝑥 3 30 ) 2 ( 𝑥 5 3 -𝑥 6 3 ) -0.37473 (𝐼𝐼. 54) S.Chahine | Efficient uncertainty propagation approaches to solve a large class of fatigue crack-growth problems 75 Table II II 𝜇 𝜎 𝛾 𝜅 𝐺 1 (𝒙) 1466105.83 389291.79 -0.58371 5.02757 𝐺 2 (𝒙) 0.09925 0.03425 0.02989 3.17895 𝐺 3 (𝒙) 0.07863 0.02675 0.17250 3.14157 . 3. Results of the first four statistical moments of the performance functions 𝐺 1 (𝒙), 𝐺 2 (𝒙) and 𝐺 3 (𝒙) 2 , 𝑆 1 , 𝑆 2 , 𝑃 1 , 𝑃 2 , 𝑃 3 , 𝑃 4 , 𝑃 5 , 𝑃 6 } 𝑇 , whose distribution type, mean 𝜇 and standard deviation 𝜎 are listed in tableII.4. results are given in table II. 5. Note that cubature formula I is not used here since it is only valid for integration dimension up to 7. Table II. 5. Truss structure: statistical moments of the mid-span deflection Statistical moments II III Cubature formula IV V VI MCS 𝜇 0.07940 0.07942 0.07940 0.07940 0.07942 0.07938 𝜎 0.01108 0.01109 0.01107 0.01107 0.01109 0.01107 𝛾 0.46478 0.48255 0.41967 0.45553 0.48538 0.49200 𝜅 3.24564 3.48523 2.67713 3.19121 3.45394 3.44554 Number of FEM runs 133 201 201 276 201 10 5 Table II. 4. Truss structure: probability distributions and statistical characteristics of the random variables Parameter Distribution 𝜇 𝜎 𝐸 1 , 𝐸 2 Lognormal 210000 MPa 21000 MPa 𝑆 1 Lognormal 0.002 m 2 0.0002 m 2 𝑆 2 Lognormal 0.001 m 2 0.0001 m 2 𝑃 1 , … , 𝑃 6 Gumbel 50 kN 7.5 kN First, we conduct statistical moments analysis to assess the effect of the uncertain parameters on the variability of the mechanical response of interest. The involved multidimensional integrals for the first four statistical moments of the mid-span deflection 𝜈(𝑿) are computed using cubature formulae II-VI. The Chapter II: Identification of efficient quadrature scheme for the computation of multidimensional integrals S.Chahine | Efficient uncertainty propagation approaches to solve a large class of fatigue crack-growth problems 80 obtained TableII. 6. 
Truss structure: comparison of the reliability analysis results given by the proposed method, IS and FORM 𝜈 𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑 𝑃 ̂𝑓 IS 𝛽 ̂ 𝑃 ̂𝑓 FORM 𝛽 ̂ 𝜖 𝛽 (%) 𝑃 ̂𝑓 Formula VI 𝛽 ̂ 𝜖 𝛽 (%) 0.10 m 4.09 10 -2 1.75 2.80 10 -2 1.91 9.14 4.40 10 -2 1.7060 2.51 0.11 m 9.90 10 -3 2.38 5.04 10 -3 2.57 7.98 9.22 10 -3 2.3565 0.98 0.12 m 1.35 10 -3 3.17 7.62 10 -4 3.17 6.73 1.66 10 -3 2.9368 1.12 0.13 m 2.16 10 -4 3.71 2.64 10 -4 3.71 6.00 2.64 10 -4 3.4662 0.96 0.14 m 3.44 10 -5 4.21 3.98 10 -5 4.21 5.78 3.98 10 -5 3.9453 0.87 Table II II 𝑆 ̂1𝑀𝐶𝑆 𝐸 1 0.3662 𝐸 2 0.0137 𝑆 1 0.3664 𝑆 2 0.0138 𝑃 1 0.0060 𝑃 2 0.0383 𝑃 3 0.0777 𝑃 4 0.0770 𝑃 5 0.0380 𝑃 6 0.0059 𝑆 ̂𝑇 𝑀𝐶𝑆 0.3696 0.0126 0.3712 0.0126 0.0047 0.0374 0.0776 0.0773 0.0374 0.0048 . 7. Truss structure: first-order and total Sobol sensitivity indices obtained by MCS TableII. 8. Truss structure: comparison of the computational costs of the proposed method, FTGH3 and MCS Method II III Cubature formula IV V VI FTGH3 MCS Number of FEM runs 3464 5092 5092 4657 5092 10x59049 10x10 6 i.e. this mesh is only used to discretize the random field and is different from the one used in the finite element computations), 𝜆 𝑖 and 𝝓 𝑖 𝑇 are respectively eigenvalues and eigenvectors of the correlation matrix 𝑪 𝔃,𝔃 with components 𝑪 𝔃,𝔃 𝑘,𝑙 = 𝜌(𝔃 𝑘 , 𝔃 𝑙 ), 𝑘, 𝑙 ∈ {1, … , 𝑚}. 𝑀 𝑖=1 √𝜆 𝑖 𝔃 𝒊 𝑢 𝑖 (𝜔) (𝐼𝐼. 61) In this approximation, 𝑢 𝑖 (𝜔), 𝑖 ∈ {1, … , 𝑀} are independent standard normal variables, 𝑪 𝔃,𝔃 𝒊 is a vector with components 𝑪 𝔃,𝔃 𝒊 𝑗 = 𝜌(𝔃, 𝔃 𝑗 ), 𝑗 ∈ {1, … , 𝑚}, where 𝔃 𝑗 , 𝑗 ∈ {1, … , 𝑚} are the nodes of an appropriate defined mesh Chapter II: Identification of efficient quadrature scheme for the computation of multidimensional integrals S.Chahine | Efficient uncertainty propagation approaches to solve a large class of fatigue crack-growth problems 86 ( Table II . II 9. Heat conduction in square plate: statistical moments of the average temperature 𝑇 ̃𝛺2 Statistical moments II Cubature formula III IV VI MCS 𝜇 4.5678 4.5678 4.5681 4.5678 4.5666 𝜎 0.7880 0.7877 0.7876 0.7878 0.7891 𝛾 0.5388 0.5543 0.46112 0.5530 0.4787 𝜅 3.7140 4.5166 3.6658 3.9075 3.3649 Number of FEM runs 2971 5619 5619 5619 10 5 Figure II.29 compares the PDFs and CDFs obtained from moments-based technique to those derived from MCS. As can be seen, the PDFs and CDFs derived from statistical moments computed by cubature formulae II, III and VI are in good agreement with those of the reference solution and allow to estimate accurately Chapter II: Identification of efficient quadrature scheme for the computation of multidimensional integrals S.Chahine | Efficient uncertainty propagation approaches to solve a large class of fatigue crack-growth problems 89 Table II . II 10. Heat conduction in square plate: comparison of the reliability analysis results given by the proposed method and MCS 𝑇 𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑 𝑃 ̂𝑓 MCS 𝛽 ̂ 𝑃 ̂𝑓 II 𝛽 ̂ Cubature formula IV 𝑃 ̂𝑓 𝛽 ̂ 𝑃 ̂𝑓 VI 𝛽 ̂ 6.0 °C 4.578 10 -2 1.6872 4.628 10 -2 1.6820 4.493 10 -2 1.6962 4.612 10 -2 1.6837 6.5 °C 1.542 10 -2 2.1591 1.653 10 -2 2.1314 1.569 10 -2 2.1523 1.704 10 -2 2.1192 7.0 °C 4.800 10 -3 2.5899 5.551 10 -3 2.5395 5.162 10 -3 2.5648 6.054 10 -3 2.5090 7.5 °C 1.130 10 -3 3.0538 1.792 10 -3 2.9126 1.640 10 -3 2.9401 2.114 10 -3 2.8607 8.0 °C 2.400 10 -4 3.4917 5.732 10 -4 3.2519 5.191 10 -4 3.2800 7.440 10 -4 3.1770 TableIII. 2. Crack growth in CCP specimen: statistical characteristics of the transformed Walker model |𝜌 𝑖𝑗 |, {𝑖 ≠ 𝑗} of the correlation matrix [𝝆] are relatively close to 1. 
This find is in line with the observations made earlier by several researchers based on statistical analysis performed on Virkler's fatigue crack growth data (Virkler and al, 1979), as pointed out in Chapter I of the thesis. Note that, positive values of the coefficient of correlation 𝜌 𝑖𝑗 , {𝑖 ≠ 𝑗} indicate variances (i.e., the variances corresponding to the uncertain parameters 𝑥 𝑖 and 𝑥 𝑗 ) change in a similar direction, while negative values imply variances change in inverse directions. 𝜇 𝜎 2 [𝝆] 𝐴 1 3.8681 2.2911 10 -3 1 -0.7305 -0.9756 𝐴 2 -1.7052 3.5716 10 -3 [𝝆] = [ -0.7305 1 0.8315 ] 𝐵 -21.0737 26.4442 10 -3 -0.9756 0.8315 1 A strong statistical dependence between the parameters is observed since the off-diagonal components Table III. 3. Crack growth in CCP specimen: statistical moments of the fatigue crack growth life Statistical moments I Full-PCE (2 nd order PCE and cubature formula i, i={I,II,…,VI}) II III IV V VI MCS 𝜇 8446.392 8446.392 8446.392 8446.392 8446.392 8446.392 8446.602 𝜎 265.490 265.490 265.490 265.490 265.490 265.490 264.649 𝛾 0.09872 0.09871 0.09870 0.09871 0.09884 0.09875 0.11725 𝜅 3.01314 3.01314 3.01314 3.01314 3.01318 3.01315 3.02850 Number of FEM runs 14 21 19 19 14 19 10 5 Table III. 4. Crack growth in CCP specimen: first-order and total Sobol indices obtained by PCE of degree 𝑝 = 2 𝑆 ̂1 𝐴 1 0.15126 𝐴 2 0.37210 𝐵 0.47633 𝑆 ̂𝑇 0.15138 0.37233 0.47658 Table III . III 5. Nonlinear cracked pipe: probability distributions and statistical characteristics of the random variables Parameter Distribution 𝜇 𝜎 𝐸 (𝑀𝑃𝑎) Lognormal 175500 10000 𝜎 𝑦 (𝑀𝑃𝑎) Lognormal 259.5 10 𝑛 Normal 3.5 0.1 𝛼 Normal 1.15 0.15 Table III. 6. Nonlinear cracked pipe: statistical moments of the Rice's integral 𝐽 𝑅𝑖𝑐𝑒 Statistical moments GH * Full-PCE Formula VI Sparse-PCE Crude formula VI MCS 𝜇 16.7348 16.7347 16.7347 16.7347 16.7322 𝜎 0.98589 0.98639 0.98620 0.98643 0.98779 𝛾 0.18773 0.19806 0.19606 0.19781 0.19104 𝜅 3.04983 3.05487 3.05322 3.00158 3.05801 Number of FEM runs 81 33 33 33 10 5 * GH: Gauss-Hermite 𝜎 𝑡 . For a given value of the axial tension 𝜎 𝑡 , a PCE-based metamodel ℎ 𝜎 𝑡 𝑃𝐶𝐸 (𝒖) is built in the standard random Knowing the PDF of the fracture toughness 𝐽 𝐼𝑐 , and the PDF of the Rice's integral 𝐽 𝑅𝑖𝑐𝑒 which can be easily built either by performing MCS on the metamodel ℎ 𝜎 𝑡 𝑃𝐶𝐸 (𝒖), or by a moment-based technique using the statistical moments estimates obtained by post-processing the coefficients of the metamodel ℎ 𝜎 𝑡 𝑃𝐶𝐸 (𝒖), an approximation of the failure probability 𝑃 𝑓 , for a given value of 𝜎 𝑡 and PCE degree 𝑝, can be provided by 𝐻(𝒖) = 𝐽 𝐼𝑐 -𝐽 𝑅𝑖𝑐𝑒 ∘ 𝑇(𝒖) ≈ 𝐽 𝐼𝑐 -ℎ 𝜎 𝑡 𝑃𝐶𝐸 (𝒖) (𝐼𝐼𝐼. 36) space, either by the full-PCE or the sparse-PCE approach. Thus, the performance function 𝐺(𝒙) defined by equation III.35 can be approximated as follows: .7. The same results are also shown in figure III.19. As can be seen, the proposed approaches are in good agreement overall with the reference solutions provided by FORM, since the relative error on the estimate of the reliability index varies in the range [0.01%, 1.97%]. The magnitude of the failure probability, which Table III . III 7. Nonlinear cracked pipe: comparison of the reliability analysis results given by the proposed methods and FORMAs an illustration, figure III.19 shows the PDFs of the Rice's integral 𝐽 𝑅𝑖𝑐𝑒 and fracture toughness 𝐽 𝐼𝑐 for 𝜎 𝑡 = 180 𝑀𝑃𝑎. As can be observed, the lognormal distribution fits both PDFs very well. 
Thus, the previously method presented, which consists in solving an elementary R-L reliability problem, can be unambiguously applied to provide a suitable estimate of the failure probability. 𝜎 𝑡 (𝑀𝑃𝑎) 𝑃 ̂𝑓 FORM 𝛽 ̂ 𝑃 ̂𝑓 GH 𝛽 ̂ Full-PCE 𝜖 𝛽 (%) 𝑃 ̂𝑓 Formula VI 𝛽 ̂ 𝜖 𝛽 (%) 𝑃 ̂𝑓 Sparse-PCE 𝛽 ̂ 𝜖 𝛽 (%) 140 2.139 10 -9 5.873 2.129 10 -9 5.874 0.014 2.132 10 -9 5.874 0.010 2.130 10 -9 5.874 0.011 160 2.942 10 -6 4.531 2.959 10 -6 4.529 0.028 2.985 10 -6 4.527 0.068 2.982 10 -6 4.527 0.064 180 2.139 10 -3 3.039 1.261 10 -3 3.021 0.608 1.261 10 -3 3.021 0.610 1.261 10 -3 3.021 0.608 200 1.186 10 -1 1.198 1.200 10 -1 1.175 1.970 1.201 10 -1 1.175 1.992 1.198 10 -1 1.175 1.862 Table III. 8. Inclined edge-cracked plate: statistical moments of the crack driving forces 𝐾 𝐼 , 𝐾 𝐼𝐼 , 𝜃 and 𝐾 𝑒𝑓𝑓 Statistical moments Full-PCE Sparse-PCE Crude formula II MCS 𝐾 𝐼 𝜇 𝜎 2.8253 0.0779 2.8253 0.0781 2.8253 0.0781 2.8255 0.0778 𝐾 𝐼𝐼 𝜇 𝜎 1.2061 0.0460 1.2061 0.0460 1.2061 0.0460 1.2061 0.0479 𝜃 𝜇 𝜎 36.776 0.5570 36.776 0.5570 36.776 0.5570 36.774 0.5713 𝐾 𝑒𝑓𝑓 𝜇 𝜎 6.8846 0.1992 6.8846 0.1992 6.8846 0.1992 6.8849 0.2022 Number of FEM runs 651 651 651 10 5 (Alabd Alhafez, 2018) I. Alabd Alhafez, C. J. Ruestes, H. M. Urbassek,Size of the Plastic Zone Produced by Nanoscratching , Tribol Lett 66, 2018. (Baldeweck, 1999) H. Baldeweck. Méthodes des éléments finis stochastiques -Application à la géotechnique et à la mécanique de la rupture. Thèse de doctorat, Université d'Evry-Val d'Essonne, France, 1999. (Basquin, 1910) O.H. Basquin, The exponential law of endurance tests. In: Proceedings of the American Society for Testing and Materials. 625-623, 1910. (Bathias and al, 1997) C. Bathias, J. Bailon, La fatigue des matériaux et des structures, édition revue et argumentée. Éditions HERMES, Paris, 1997. (Bea and al, 1999) J.A. Bea, M. Doblaré, L. Gracia, Evaluation of the probability distribution of crack propagation life in metal fatigue by means of probabilistic finite element method and B models. Engng Fract Mech; 63 6:675-711, 1999. (Beden and al, 2009) S. Beden, S. Abdullah, A.K. Ariffin, Review of Fatigue Crack Propagation Models for Metallic Components. European Journal of Scientific Research, vol. 28, no 3, 2009. (Bernardo, 2015) Performance of cubature formulae in probabilistic model analysis and optimization, J. Comput. Appl. Math., 280 110-124, 2015. (Berveiller, 2005) M. Berveiller, Eléments finis stochastiques : approches intrusive et non intrusive pour des analyses de fiabilité, Ph. D. thesis, Université Blaise Pascal, Clermont-Ferrand, 2005. (Blatman, 2009) G. Blatman, Adaptive sparse polynomial chaos expansions for uncertainty propagation and sensitivity analysis. Ph. D. thesis, Université Blaise Pascal -Clermont II, 2009. (Blatman and Sudret, 2011) G. Blatman, B. Sudret, Adaptive sparse polynomial chaos expansions based on least angle regression. J Comput Phys, 230 (6), 2345 -67, 2011 . (Bogdanoff and al, 1985) ) J.L. Bogdanoff, F. Kozin, Probabilistic models of cumulative damage. New York: Wiley, 1985. (Breitung, 1984) K. Breitung, Asymptotic approximation for multinormal integrals, J. Eng. Mech, ASCE. 110, (3), 357-366, 1984. (Brezin and Zhidkov, 1965) I. Berezin, N. Zhidkov, Computing Methods. Addison-Wesley, Reading, MA, 1965. (Bush and al, 1988) M.L. Bush, J.L. Lebrun, X-Ray diffraction study of stress distributions following a single overload, Fatigue Crack under variable amplitude loading, Edited by J. Petit, D.L. Davidson, S. Suresh, P. Rabbe, Elsevier Applied Science, pp. 76-86, 1988. 
(Casciati and al, 2007) F. Casciati, P. Colombi, L. Faravelli, Inherent variability of an experimental crack growth curve. Struct Saf, 29:66-76, 2007. (Cast3m, 2021) Castem. http://www-cast3m.cea.fr, 2021. (Castillo and al, 2008) E. Castillo, A. Fernandez-Canteli, H. Pinto, M.L. Ruiz-Ripoll, A statistical model for crack growth based on tension and compression Wöhler fields. Engng Fract Mech, 75:4439-4449, 2008. (Castillo and al, 2005) E. Castillo, A. Iglesias, R. Ruiz-Cobo, Functional equations in applied sciences Mathematical in Science Engineering, vol 199, Amesterdam: Elsevier B.V, 2005. (Chahine and al, 2021) S. Chahine, H. Riahi, D. Bigaud. Propagation d'incertitudes par combinaison de chaos polynômial creux et quadrature efficace: application à la fissuration par fatigue. JFMS 2020, 11ème Journées Fiabilité des Matériaux et des Structures, Clermont-Ferrand, France, Mar 2020. (Chambers and al, 1991) A. Chambers, T.Hyde, Mixed mode fatigue crack growth at 550 degrees c under plane stress conditions in jethete m152. Engineering Fracture Mechanics, 39(3), 1991. (Chang and al, 2006) J. Chang, JQ. Xu, J. Mutoh, A general mixed-mode brittle fracture criterion for cracked materials. Engng Fract Mech. 73, 1249-1263, 2006. (Chastaing and al., 2012) Chastaing, G., F. Gamboa, and C. Prieur, Generalized Hoeffding-Sobol decomposition for dependent variables -application to sensitivity analysis. Technical Report hal-00649404, GdR MASCOT-NUM (Méthodes d'Analyse Stochastique des Codes et Traitements Numériques), 2012. (Chiquet and al, 2009) J. Chiquet, N. Limnios, M. Eid, Piecewise deterministic Markov processes applied to fatigue crack growth modelling. Journal of Statistical Planning and Inference; 139(5):1657 -1667, 2009. (Choi and al, 2012) S.K. Choi, R.V. Grandhi, R.A. Canfield, C.L. Pettit Polynomial chaos expansion with latin hypercube sampling for estimating response variability. AIAA J, 42 (6), 1191AIAA J, 42 (6), -8, 2012. . (Ciavarella and al, 2018) M. Ciavarella and A. Papangelo, On the distribution and scatter of fatigue lives obtained by integration of crack growth curves: Does initial crack size distribution matter ?, Eng Fract Mech, 191, pp.111 -124, 2018. (Coffin, 1954) LF. Coffin, A study of the effects of cyclic thermal stresses on a ductile metal. Transaction of the American Society of Mechanical Engineers.76; 931-950, 1954 931-950, . (Cools, 2002) ) R. Cools, Advances in multidimensional integration, J. Comput. Appl. Math., 149, 1-12, 2002. (Cools, 2003) R. Cools, An encyclopaedia of cubature formulas, J. Complexity, 19, (3),445-453, http://www.cs.kuleuven.ac.be/~nines/research/ecf/ecf.html, 2003. (Cools and Rabinowitz, 1993) R. Cools, P. Rabinowitz, Monomial cubature rules since Stroud: a compilation, J. Comput. Appl. Math., 48, 309-326, 1993. (Cukier and al, 1978) H. Cukier, R. Levine, K. Shuler, Nonlinear sensitivity analysis of multiparameter model systems. J. Comput. Phys, 26, 1-42, 1978. (Daneshpour and al, 2012) S. Daneshpour, J. Dyck, V. Ventzke, and N. Huber, Crack retardation mechanism due to overload in base material and laser welds of Al alloys, International Journal of Fatigue, vol. 42, pp. 95-103, 2012. (Ding and Xu, 2021) C. Ding, J. Xu, An improved adaptive bivariate dimension-reduction method for efficient statistical moment and reliability evaluations, Mech. Syst. Signal Process., 149, 2021. (Dirik and al, 2018) H. Dirik and T. Yalçinkaya, Crack path and life prediction under mixed mode cyclic variable amplitude loading through XFEM, 2018. (Dowling, 2013) E.N. 
Dowling, Mechanical behavior of materials. 2013. (Dowling, 2007) E.N. Dowling, Mechanical Behavior of Materials: Engineering Methods for Deformation, Fracture, and Fatigue, 3rd ed. Pearson Education, Inc., Upper Saddle River, NJ, 2007. (Dubourg, 2011) V. Dubourg, Adaptive surrogate models for reliability analysis and reliabilitybased design optimization. Ph. D. thesis, Université Blaise Pascal -Clermont II, 2011. (Dugdale, 1960) D.S. Dugdale, J. Mech. and Physi. Solids, vol. 8, pp.100-104, 1960. (Elber, 1971) W. Elber, The significance of fatigue crack closure. ASTM Special Technical Publication, vol. 486, 1971. (Erdogan and al, 1963) F. Erdogan, G. Sih, On the crack extension in plates under plane loading and transverse shear. Journal of Basic Engineering, 85:519 -525, 1963. (Eurocode 3, 1996) Norme ENV 1993-1-9. Eurocode 3 : Calcul des structures en acier -Partie 1-9 : Règles générales -Règles supplémentaires pour la résistance à la fatigue des structures en acier. Paris: AFNOR; 1996 . (Faure, 1982) ) H. Faure, Discrépance de suite associées à un système de numeration (en dimension s), Acta Artithmetica, 41, 337-351, 1982. (Fei and al, 2020) C.W. Fei, H. Li, H.T. Liu, C. Lu, B. Keshtegar, L.Q. An, Multilevel nested reliabilitybased design optimization with hybrid intelligent regression for operating assembly relationship, Aerosp. Sci. Technol. 103, 2020. (Fichtl and Prinja, 2011) E.D. Fichtl, A.K. Prinja. The stochastic collocation method for radiation transport in random media, J Quant Spect Rad Trans, 112, 646-659, 2011. (Forman and al, 1967) R.G. Forman, V.E. Kearney, R.M. Engle, Numerical analysis of crack propagation in cyclic-loaded structures. J Basic Eng; 89:459-464, 1967. (Frondelius and al, 2022) T. Frondelius, T. Kaarakka, R. Kouhia, J. Makinen, H. Orelma and J. Vaara, Stochastic continuum approach to high-cycle fatigue: Modelling stress history as a stochastic process, 2022. (Fuhring and al, 1979) H. Fuhring, T. Seeger: Dugdale crack closure analysis of fatigue cracks under constant amplitude loading. Engng. Fract. Mech., vol. 11, pp. 99-122, 1979. (Ganapathysubramanian and Zabaras, 2007) B. Ganapathysubramanian, N. Zabaras. Sparse grid collocation schemes for stochastic natural convection problems. J. Comput. Phys. 225, 652-685, 2007. (Gao and al, 2020) H.F. Gao, E. Zio, J.J. Guo, G.C. Bai and C.W. Fei, Dynamic probabilistic-based LCF damage assessment of turbine blades regarding time-varying multi-physical field loads, Eng. Fail. Anal., 108, Article 107193, 2020. (Gdoutos, 1984) E. Gdoutos, Problems of mixed mode crack propagation. Martinus Nijho, 1984. (Genz, 1986) A. Genz, Fully symmetric interpolatory rules for multiple integrals. SIAM J Numer Anal 23, 1273-1283, 1986. (Genz and Keister, 1996) A. Genz, B. Keister, Fully symmetric interpolatory rules for multiple integrals over infinite regions with Gaussian weight. J Comput Appl Math, 71, 299-309, 1996. (Gerstner and Griebel, 2003) T. Gerstner, M. Griebel. Dimension-adaptive tensor-product quadrature. Computing, 71, 65-87, 2003. (Gerstner and Griebel, 1998) T. Gerstner, M. Griebel, Numerical integration using sparse grids, Numer. Algorithms., 18, 209-232, 1998. (Ghanem, 1999) R. Ghanem, Ingredients for a general purpose stochastic finite elements implementation, Comput. Methods Appl. Mech. Engrg. 168, 19-34, 1999. (Ghanem and Spanos, 1991) R.G. Ghanem, S.D. Spanos. Stochastic finite elements: a spectral approach. Berlin: Springer; 1991. (Ghonem and al, 1987) H. Ghonem, S. 
Dore, Experimental study of the constant probability crack growth curves under constant amplitude loading. Engng Fract Mech; 27:1-25, 1987. (Griffith, 1921) A.A. Griffith, The phenomena of rupture and flow in solids. Philosophical transactions of the royal society of london. Series A, containing papers of a mathematical or physical character, vol. 221, 1921. (Halton, 1960) J.H. Halton. On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals. Numer. Math, 2, 84-90, 1960. (Hammersley and Handscomb, 1964) J.M. Hammersley and D.C. Handscomb, Monte Carlo methods, Wiley, 1964. (Hasofer and Lind, 1974) A.M. Hasofer, N.C. Lind, Exact and invariant second moment code format. J. Eng. Mech, ASCE, 100, (1), 111-121, 1974. (He and al, 2020) J.C. He, S.P. Zhu, D. Liao, X.P. Niu, Probabilistic fatigue assessment of notched components under size effect using critical distance theory, Eng. Fract. Mech. 235, 2020. (He and al, 2015) W. He, J. Liu, D. Xie, Probabilistic life assessment on fatigue crack growth in mixedmode by coupling of Kriging model and finite element analysis, Eng. Fract. Mech. 139, 2015. (Hemnesi and al, 2022) K. Hemnesi, F. Ellmer, M. Farajian, I. Varfolomeev and M. Luke, On the evaluation of overload effects on the fatigue strength of metallic materials, published by Elsevier B.V., 2021. (Hong and Lind, 1996) H.P. Hong, N.C. Lind, Approximate reliability analysis using normal polynomial and simulation results, Struct. Saf., 18, 329-339, 1996. Technology, 5(1):154-158, 1978. (Hourlier and al, 1978) F.Hourlier, D.McLean and A. Pineau, Fatigue crack growth behaviour of ti-5al-2• 5sn alloy under complex stress (mode I+ steady mode III). Metals (Hudson, 1969) C. M. Hudson, Effect of Stress Ratio on Fatigue-Crack Growth in 7075-T6 and 2024-T3 Aluminum-Alloy Specimens. NASA Technical Note NASA TN D-5390, Langley Research Center, Langley Station, Hampton, VA, August, 1969. (Hussain and al, 1974) M.Hussain, S.PU, J.Underwood, Strain energy release rate for a crack under combined mode I and mode II. Fracture Analysis: Proceedings of the 1973 National Symposium on Fracture Mechanics, Part II ASTM International, 1974. (Irwin, 1960) G.R. Irwin, Plastic zone near a crack and fracture toughness, in Proceedings of the 7th Sagabore research conference on mechanics & metals behavior of sheet material , vol. 4, New York, pp. 463-78, 1960. (Irwin, 1957) G.R. Irwin, Analysis of stresses and strains near the end of a crack traversing a plate, Journal of Applied Mechanics, 24. , 1957. (Ishikawa and al, 1984) H. Ishikawa, A. Tsurui and A. Utsumi, Proc. Fatigue 84 I, 511, 1984. (Isukapalli, 1999) S.S. Isukapalli. Uncertainty analysis of transport-transformation models. PhD thesis, The State University of New Jersey, 1999. (Kamal and al, 2018) M. Kamal and M. Rahman, Advances in fatigue life modeling; a review, Renew Sustain Energy Rev, 82, pp. 940-949, 2018. (Konakli and Sudret, 2016) K. Konakli, B. Sudret Global sensitivity analysis using low-rank tensor approximations, Reliab Eng Syst Saf, 156, 64-83, 2016. (Krige, 1951) D. Krige, A statistical approach to some basic mine valuation problems on the Witwatersrand. J. of the Chem., Metal. and Mining Soc. of South Africa, 52(6), 119-139, 1951. (Kuperberg, 2006) G. Kuperberg Numerical cubature using error-correcting codes SIAM J. Numer. Anal., 44 (3), 897-907, 2006. (Laird, 1967) Laird, C. The influence of metallurgical structure on the mechanisms of fatigue crack propagation. In Fatigue crack propagation. 
ASTM International, 1967. (Landgraf, 1970) R. W. Landgraf, The Resistance of Metals to Cyclic Deformation, Achievement of High Fatigue Resistance in Metals and Alloys, ASTM STP 467, Am. Soc. for Testing and Materials,West Conshohocken, PA, pp. 3-36, 1970. (Lang and al, 1999) M. Lang, G. Marci, The influence of single and multiple overloads on fatigue crack propagation, Fatigue and Fracture of Engineering Materials and Structures, 22, pp. 257-271, 1999. (Lassen and al, 2002) T. Lassen, JD. Sorensen, A probabilistic damage tolerance concept for welded joints. Part 1: data base and stochastic modeling. Marine Structures; 15:599-613, 2002. (Lee and Kwak, 2006) S. Lee, B. Kwak, Response surface augmented moment method for efficient reliability analysis. Structural safety 28, 261-272, 2006. (Li and al, 2021) X.Y. Li, Z. Tao, J.P. Wu, W. Zhang, Uncertainty theory based reliability modeling gor fatigue, Eng. Fail. Anal., 119, Article 104931, 2021. (Li and al, 2020) Y.Z. Li, S.P. Zhu, D. Liao and X.P. Niu, Probabilistic modeling of fatigue crack growth and experimental verification, Eng. Failure Analysis 118, Article 104862, 2020. (Li and al, 2019) N.P. Li, G. Nagi, Y.G. Lei, L.K. Bian, X.S. Si, Remaining usefel life prediction of machinery under time-varying operating conditions based on a two-factor state-space model, Reliab. Eng. Syst. Saf;, 186, pp. 88-100, 2019. (Li and al, 2018) H. Li, H.Z. Huang, Y.F. Li, J. Zhou and J. Mi, Physics of failure-based reliability prediction of turbine blades using multi-source information fusion, Appl. Soft Comput., 72, pp. 624-635, 2018. (Li and al, 2002) G. Li, S.W. Wang, H. Rabitz. Practical approaches to construct RS-HDMR component functions. Journal of Physical Chemistry. 106: 8722-8733, 2002 (Li and Der Kiureghian, 1993) C.C. Li, A. Der Kiureghian, Optimal discretization of random fields, J Eng Mech, 119 (6), 1136-1154, 1993. (Li and Rabitz, 2010) G. Li, H. Rabitz, Global Sensitivity Analysis for Systems with Independent and/or Correlated Inputs. J. Phys. Chem., 114, 6022-6032, 2010. (Liao and al (1), 2020) D. Liao, S.P. Zhu, J.A.F.O. Correia, A.M.P. De Jesus and F. Berto, Recent advances on notch effects in metal fatigue: A review, Fatigue Fract. Eng. Mater. Struct. 43, pp. 637 -659, 2020. (Liao and al (2), 2020) D. Liao, S.P. Zhu, B. Keshtegar, G. Qian, Q. Wang, Probabilistic framework for fatigue life assessment of notched components under size effects, Int. J. Mech. Sci. 181, 2020. (Lieurade, 1988) H.P. Lieurade, Effet des contraintes résiduelles sur le comportement à la fatigue des pièces et des structures industrielles, Revue Traitement Thermique -218-, pp.15-28, 1988. (Lin and al, 2017) L. Zhu, M.P. Jia, A new approach for the influence of residual stress on fatigue crack propagation, Elsevier, Results in Physics, Vol. 7, pp. 2204-2212, 2017. (Lin and al, 2016) Y.C. Lin, C.Y. Zhao, M.S. Chen, D.D. Chen, A novel constitutive model for hot deformation behaviors of Ti-6Al-4V alloy based on probabilistic method, Appl. Phys. A Mater. Sci. Process. 122 (8), 2016. (Liu and al, 2022) X. Liu, J. Liu, H. Wang and X. Yang, Prediction and evaluation of fatigue life considering material parameters distribution characteristic, Int J Struct Integr, 13(2), pp. 309 -326, 2022. (Liu and al, 2020) X.T. Liu, F.C. Kan, H.J. Wang, X.F. Xin, Z.Q. Wand and H. Huang, Fatigue life prediction of clutch sleeve based on abrasion mathematical model in service period, Fatigue Fract. Eng. Mater. Struct., 43, p.p. 488 -501, 2020. (Liu and al, 2011) M. Liu, Z. Gao, J.S. Hesthaven. 
(Long and al, 2019) X.Y. Long, K. Liu, C. Jiang, Y. Xiao and S.C. Wu, Uncertainty propagation method for probabilistic fatigue crack growth life prediction, Theor. Appl. Fract. Mech. 103, Article 102268, 2019.
(Long and al, 2016) X.Y. Long, C. Jiang, C. Yang, X. Han, W. Gao, J. Liu, A stochastic scaled boundary finite element method, Comput Methods Appl Mech Eng, 308, 23-46, 2016.
(Low, 2013) Y.M. Low, A new distribution for fitting four moments and its applications to reliability analysis, Structural Safety, 42, 12-25, 2013.
(Lu and al (1), 2020) C. Lu, Y.W. Feng, C.W. Fei, S.Q. Bu, Improved decomposed-coordinated kriging modeling strategy for dynamic probabilistic analysis of multicomponent structures, IEEE Trans. Reliab. 69 (2), 2020.
(Lu and al (2), 2020) C. Lu, and al., Moving extremum surrogate modeling strategy for dynamic reliability estimation of turbine blisk with multi-physics fields, Aerosp. Sci. Technol. 106, 2020.
(Lu and al, 2004) J. Lu, D. Darmofal, Higher-dimensional integration with Gaussian weight for applications in probabilistic design. Soc Ind Appl Math 26, 613-624, 2004.
(Lu and Darmofal, 2004) J. Lu, D.L. Darmofal, Higher-dimensional integration with Gaussian weight for applications in probabilistic design, SIAM J. Sci. Comput., 26, 613-624, 2004.
(Madsen and al, 1986) H.O. Madsen, S. Krenk, N.C. Lind, Methods of structural safety. Prentice-Hall, Englewood Cliffs: New Jersey; 1986.
(Manjunatha and al, 2004) C.M. Manjunatha and B.K. Parida, Prediction of fatigue crack growth after single overload in an aluminum alloy, AIAA Journal, vol. 42, no. 8, pp. 1536-1542, 2004.
(Manson, 1954) S.S. Manson, Behaviour of Materials under conditions of thermal stress. National Advisory Commission on Aeronautics: Report 1170, Cleveland: Lewis Flight Propulsion Laboratory, 1954.
(Mareau and al, 2019) C. Mareau and F. Morel, A continuum damage mechanics-based approach for the high cycle fatigue behavior of metallic polycrystals, Int J Damage Mech, 28, pp. 838-856, 2019.
(Matherson, 1962) G. Matheron, Traité de géostatistique appliquée. Editions Technip, 1962.
(Matsuoka and al, 1976) S. Matsuoka, K. Tanaka, M. Kawahara, The retardation phenomenon of fatigue crack growth in HT80 steel, Engng. Fract. Mech., vol. 8, pp. 507-523, 1976.
(Mckay and al, 1979) M.R. Mckay, R.J. Beckman, W.K. Conover, A comparison of three methods for selecting values of input variables in the analysis of output from a computer code, Technometrics, 2, 239-245, 1979.
(McNamee and Stenger, 1967) J. McNamee, F. Stenger, Construction of fully symmetric numerical integration formulas, Numer. Math., 10, 327-344, 1967.
(Melchers and Ahammad, 2004) R.E. Melchers, M. Ahammed, A fast approximate method for parameter sensitivity estimation in Monte Carlo structural reliability, Comput Struct, 82 (1), 55-61, 2004.
(Metropolis and Ulam, 1949) N. Metropolis, S. Ulam, The Monte Carlo method, J. Am. Stat. Assoc, 44, 335-341, 1949.
(Morris, 1991) M.D. Morris, Factorial sampling plans for preliminary computational experiments, Technometrics, 33: 161-174, 1991.
(Mysovskikh, 1980) I.P. Mysovskikh, The approximation of multiple integrals by using interpolatory cubature formulae, in Quantitative Approximation, R.A. DeVore and K. Scherer, eds., Academic Press, New York, 217-243, 1980.
(Nataf, 1962) A. Nataf, Détermination des distributions dont les marges sont données, Comptes Rendus de l'Académie des Sciences, 225: 42-43, 1962.
(Niederreiter, 1992) H. Niederreiter, Random number generation and quasi-Monte Carlo methods, SIAM, Philadelphia, PA, USA, 1992.
(Niu and al, 2020) X.P. Niu, and al., Probabilistic modeling of uncertainties in fatigue reliability analysis of turbine bladed disks, Int. J. Fatigue, 2020.
(Ni, 2002) C.C. Ni, Study of stochastic fatigue crack growth models and their experimental verification. PhD thesis, National Taiwan University, 2002.
(Nobile and al, 2006) F. Nobile, R. Tempone, C. Webster, A sparse grid stochastic collocation method for elliptic partial differential equations with random input data. Technical Report MOX Report 85, Politecnico di Milano, 2006.
(Nouy, 2010) A. Nouy, Proper generalized decompositions and separated representations for the numerical solution of high dimensional stochastic problems, Arch Comput Methods Eng, 17, 403-434, 2010.
(Novak and Ritter, 1999) E. Novak, K. Ritter, Simple cubature formulas with high polynomial exactness. Constructive Approximation, 15: 499-522, 1999.
(Oden and al, 2003) J.T. Oden, T. Belytschko, V. Babuska, T.J.R. Hughes, Research directions in computational mechanics, Compt. Method. Appl. Mech. Eng. 192, 913-922, 2003.
(Owen, 1998) A. Owen, Detecting near linearity in high dimensions. Technical report, Stanford University, Department of Statistics, 1998.
(Owen, 1992) A. Owen, A central limit theorem for Latin hypercube sampling, J. Royal. Stat. Soc, Series B54, 541-551, 1992.
(Panmetsa and Grandhi, 2003) R.C. Penmetsa, R.V. Grandhi, Adaptation of fast Fourier transformations to estimate structural failure probability, Finite Elem Anal Des, 39 (5-6), 473-485, 2003.
(Paris and al, 1963) P. Paris, F. Erdogan, A critical analysis of crack propagation laws. J Basic Eng; 85: 528-534, 1963.
(Pearson and Tukey, 1965) E.S. Pearson, M. Tukey, Distributions whose first four moments are known. Biometrika, 6, 126-132, 1965.
(Pelloux, 1969) R.M.N. Pelloux, Mechanisms of formation of ductile fatigue striations. ASM Transactions Quarterly, 62(1), 1969.
(Pendola and al., 2000) M. Pendola, A. Mohamed, M. Lemaire, P. Hornet, Combination of finite element and reliability methods in nonlinear fracture mechanics. Reliability Engineering & System Safety, 70, 15-27, 2000.
(Phillips, 1980) G.M. Phillips, A survey of one-dimensional and multidimensional numerical integration, Comput. Phys. Comm., 20, 17-27, 1980.
(Rabitz and Aliş, 1999) H. Rabitz, Ö.F. Aliş, General foundations of high-dimensional model representations, Journal of Mathematical Chemistry, 25: 197-233, 1999.
(Rahman, 2008) S. Rahman, A polynomial dimensional decomposition for stochastic computing. International Journal for Numerical Methods in Engineering, 76: 2091-2116, 2008.
(Ralph and al, 2001) I.S. Ralph, A. Fatemi, R.R. Stephens, and O.H. Fuchs, Metal fatigue in engineering, 2001.
(Rasmussen and Williams, 2006) C. Rasmussen, C. Williams, Gaussian processes for machine learning. Adaptive Computation and Machine Learning. MIT Press, Cambridge, Massachusetts, Internet edition, 2006.
(Ray and al, 1997) A. Ray, S. Tangirala, A nonlinear stochastic model of fatigue crack dynamics. Probabilistic Engineering Mechanics 12 (1), 33-40, 1997.
(Riahi, 2013) H. Riahi, Analyse de structures à dimension stochastique élevée: application aux toitures bois sous sollicitation sismique. Ph.D. thesis, Université Blaise Pascal - Clermont II, 2013.
(Riahi and al., 2012) H. Riahi, Ph. Bressolette, A. Chateauneuf, Reliability assessment using combination of polynomial chaos and simulations: application to nonlinear fracture mechanics. 10th World Congress on Computational Mechanics, São Paulo, Brazil, 8-13 July 2012.
(Rice, 1968) J.R. Rice, A path independent integral and the approximate analysis of strain concentration by notches and cracks. Journal of Applied Mechanics, 35: 379-386, 1968.
(Rice, 1967) J.C. Rice, Mechanics of crack tip deformation and extension by fatigue, ASTM STP 415, pp. 247-308, 1967.
(Righiniotis and al, 2003) T.D. Righiniotis, M.K. Chryssanthopoulos, Probabilistic fatigue analysis under constant amplitude loading. Journal of Constructional Steel Research; 59(7): 867-886, 2003.
(Romano and al, 2018) S. Romano, A. Bruckner-Foit, A. Brandao, J. Gumpinger, T. Ghidini and S. Beretta, Fatigue properties of AlSi10Mg obtained by additive manufacturing: Defect-based modelling and prediction of fatigue strength, Eng Fract Mech, 187, pp. 165-189, 2018.
(Rosenblatt, 1952) M. Rosenblatt, Remarks on a multivariate transformation, Ann Math Statist, 23, 470-472, 1952.
(Royer, 1986) J. Royer, A specimen geometry for plane mixed modes. Engineering Fracture Mechanics, 23(4): 763-775, 1986.
(Sankararaman and al, 2011) S. Sankararaman, Y. Ling, S. Mahadevan, Uncertainty quantification and model validation of fatigue crack growth prediction. Engineering Fracture Mechanics; 78(7): 1487-1504, 2011.
(Saltelli and al, 2000) A. Saltelli, K. Chan, E. Scott, Sensitivity analysis. J. Wiley & Sons, 2000.
(Saltelli and Sobol 1995) A. Saltelli, I. Sobol, About the use of rank transformation in sensitivity of model output. Reliab. Eng. Sys. Safety, 50, 225-239, 1995.
(Santner and al, 2003) T. Santner, B. Williams, W. Notz, The design and analysis of computer experiments. Springer Series in Statistics. Springer, 2003.
(Saxena and al, 1996) A. Saxena, C.L. Muhlstein, Fatigue Crack Growth Testing; ASM International, Member/Customer Service Center: Materials Park, OH, USA, Volume 19, pp. 410-412, 1996.
(Schütz, 1996) W. Schütz, A history of fatigue. Eng Fract Mech; 54: 263-300, 1996.
(Schijve, 2009) J. Schijve, Fatigue of structures and materials. Second edition: Springer; 2009.
(Schijve and al, 2004) J. Schijve, M. Skorupa, A. Skorupa, T. Machniewicz, and P. Gruszczynski, Fatigue crack growth in the aluminium alloy D16 under constant and variable amplitude loading, International Journal of Fatigue, vol. 26, no. 1, pp. 1-15, 2004.
(Schijve, 1979) J. Schijve, Eng. Fract. Mech., vol. 11, pp. 167-221, 1979.
(Schijve, 1962) J. Schijve, Fatigue crack propagation in light alloy sheet material and structure, Advances in Aeronautical Sciences, Oxford: Pergamon Press, 3, pp. 387-408, 1962.
(Schlier, 2004) C. Schlier, Error trends in Quasi-Monte Carlo integration, Computer Physics Communications, 159 (2), 93-105, 2004.
(Schürer, 2003) R. Schürer, A comparison between (quasi-)Monte Carlo and cubature rule-based methods for solving high-dimensional integration problems, Mathematics and Computers in Simulation, 62 (3-6), 509-517, 2003.
(Sih, 1991) G. Sih, Mechanics of fracture initiation and propagation: surface and volume energy density applied as failure criterion, 1991.
(Sih, 1974) G. Sih, Strain energy density factor applied to mixed mode crack problems. International Journal of Fracture, 10(3): 305-321, 1974.
(Smith and al, 1985) E. Smith, K. Pascoe, Fatigue crack initiation and growth in a high strength ductile steel subject to in phase biaxial loading. American Society for Testing and Materials STP 853, pages 111-134, 1985.
(Smola and Schölkopf, 2006) A. Smola, B. Schölkopf, A tutorial on support vector regression. Stat. Comput. 14, 199-222, 2006.
(Smolyak, 1963) S. Smolyak, Quadrature and interpolation formulas for tensor products of certain classes of functions. Soviet Math. Dokl. 4, 240-243, 1963.
(Sobczyk and al, 1995) K. Sobczyk, J. Trebicki, B.F. Spencer, Modeling of curvilinear random fatigue crack growth. Engng Fract Mech; 52 (4): 703-715, 1995.
(Sobczyk and al, 1992) K. Sobczyk, B.F. Spencer, Random fatigue: from data to theory. Boston: Academic Press; 1992.
(Sobczyk and al, 1991) K. Sobczyk, J. Trebicki, Cumulative jump-correlated model for random fatigue. Engng Fract Mech; 40: 201-210, 1991.
(Sobczyk and al, 1989) K. Sobczyk, J. Trebicki, Modeling of random fatigue by cumulative jump process. Engng Fract Mech; 34 (2): 477-493, 1989.
(Sobol, 1998) I. Sobol, On quasi-Monte Carlo integrations, Mathematics and Computers in Simulation, 47, 103-112, 1998.
(Sobol, 1993) I. Sobol, Sensitivity estimates for nonlinear mathematical models, Math. Modeling & Comp. Exp, 1, 407-414, 1993.
(Soize and Ghanem, 2004) C. Soize and R. Ghanem, Physical systems with random uncertainties: chaos representations with arbitrary probability measure. SIAM J. Sci. Comput. 26 (2), 395-410, 2004.
(Song and al, 2020) L.K. Song, G.C. Bai and X.Q. Li, A novel metamodeling approach for probabilistic LCF estimation of turbine disk, Eng. Fail. Anal., 120, Article 105074, 2020.
(Song and al, 2019) L.K. Song, G.C. Bai and C.W. Fei, Probabilistic LCF life assessment for turbine discs with DC strategy-based wavelet neural network regression, Int. J. Fatigue 119, p. 204, 2019.
(Stroud, 1971) A.H. Stroud, Approximate Calculation of Multiple Integrals, Prentice-Hall, Englewood Cliffs, NJ, USA, 1971.
(Stroud, 1967) A.H. Stroud, Some fifth degree integration formulas for symmetric regions II, Numer Math, 9, 460-468, 1967.
(Stroud and Secrest, 1963) A.H. Stroud and D. Secrest, Approximate integration formulas for certain spherically symmetric regions, Math. Comput., 17, 105-135, 1963.
(Sudret, 2008) B. Sudret, Global sensitivity analysis using polynomial chaos expansions. Reliab. Eng. Syst. Saf, 93: 964-979, 2008.
(Sudret and Der Kiureghian, 2000) B. Sudret, A. Der Kiureghian, Stochastic finite element methods and reliability: a state-of-the-art report. Department of Civil and Environmental Engineering, University of California; 2000.
(Tada and al, 1973) H. Tada, P. Paris, and G.R. Irwin, The Stress Analysis of Cracks Handbook, 3rd ed. Del Research Corp., Hellertown, PA, ch. 2, pp. 101-305, 1973.
(Taira and al, 1979) S. Taira, K. Tanaka, Local residual stress near fatigue crack tip, Transactions ISIJ, vol. 19, pp. 411-418, 1979.
(Taleba and al, 2016) W. Taleba, C. Gardina, C. Sarrazin-Baudoux, Plasticity induced crack closure during fatigue crack propagation: numerical prediction of the crack front shape in 304L stainless steel, 2016.
(Tanaka, 1974) K. Tanaka, Fatigue crack propagation from a crack inclined to the cyclic tensile axis. Engineering Fracture Mechanics, 6(3): 493-507, 1974.
(Tang and al, 2016) K. Tang, P.M. Congedo, R. Abgrall, Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation, J. Comput. Phys., 314, 557-589, 2016.
(Tsurui and al, 1986) A. Tsurui and H. Ishikawa, Structural Safety 4, 15, 1986.
(Vapnik and al, 1997) V. Vapnik, S. Golowich, and A. Smola, Advances in Neural Information Processing Systems 9, Chapter: Support vector method for function approximation, regression estimation, and signal processing. MIT Press, 1997.
(Vasudevan and al, 1995) A.K. Vasudevan, K. Sadananda, Classification of fatigue crack growth behaviour, Metallurgical and Materials Transactions A, 26A, pp. 1221-1234, 1995.
(Victoir, 2004) N. Victoir, Asymmetric cubature formulae with few points in high dimension for symmetric measures, SIAM J. Numer. Anal., 42, 209-227, 2004.
(Virkler and al, 1979) D.A. Virkler, B.M. Hillberry, P.K. Goel, The statistical nature of fatigue crack propagation. J. Eng. Mat. Tech., ASME; 101: 148-153, 1979.
(Walker, 1970) E.K. Walker, The Effect of Stress Ratio During Crack Propagation and Fatigue for 2024-T3 and 7075-T6 Aluminium, ASTM STP 462, American Society for Testing and Materials, Philadelphia, 1970.
(Wand and Jones, 1995) M. Wand, M. Jones, Kernel smoothing, Chapman and Hall, 1995.
(Wang and al, 2021) B.W. Wang, L.Y. Xie, J.X. Song, B.F. Zhao, C. Li and Z.Q. Zhao, Curved fatigue crack growth prediction under variable amplitude loading by artificial neural network, Int. J. Fatigue, 142, Article 105886, 2021.
(Wang and al, 2004) R. Wang, U. Diwekar, C.E.G. Padró, Efficient sampling techniques for uncertainties in risk analysis, Environ. Prog., 23 (2), pp. 141-157, 2004.
(Wei and al, 2008) D.L. Wei, Z.S. Cui, and J. Chen, Uncertainty quantification using polynomial chaos expansion with points of monomial cubature rules, Comput. Struct. 86, 2008.
(Weiner, 1983) N. Weiner, The homogeneous chaos. American Journal of Mathematics, 60 (4): 897-936, 1983.
(Wessel and al, 1972) E.T. Wessel, W.G. Clark, Pryle, Fracture mechanics technology applied to heavy section steel fracture, 2nd International Conference on Fracture, Brighton, p. 72, 13-18 April 1972.
(Westergaard, 1939) H. Westergaard, Bearing pressures and cracks. Journal of Applied Mechanics, 61: 49-53, 1939.
(Wheatley and al, 1999) G. Wheatley, X.Z. Hu, Y. Estrin, Effects of a single tensile overload on fatigue crack growth in a 316L steel, Fatigue and Fracture of Engineering Materials and Structures, 22, pp. 1041-1051, 1999.
(Wu and al, 2004) W.F. Wu, C.C. Ni, Probabilistic models of fatigue crack propagation and their experimental verification. Probab Engng Mech; 19: 247-257, 2004.
(Wu and al, 2003) W.F. Wu, C.C. Ni, A study of stochastic fatigue crack growth modeling through experimental data. Probabilistic Engineering Mechanics; 18: 107-118, 2003.
(Xiang and al, 2011) Y. Xiang, Y. Liu, Application of inverse first-order reliability method for probabilistic fatigue life prediction. Probabilistic Engineering Mechanics; 26(2): 148-156, 2011.
(Xiao and Lu, 2018) S. Xiao, Z. Lu, Reliability Analysis by Combining Higher-Order Unscented Transformation and Fourth-Moment Method, ASCE-ASME J. Risk Uncertainty in Eng. Syst, Part A: Civ. Eng, 4 (1), 2018.
(Xiu and Karniadakis, 2002) D. Xiu, G.E. Karniadakis, The Wiener-Askey polynomial chaos for stochastic differential equations. SIAM J Sci Comput. 24(2): 619-644, 2002.
(Xu and Lu, 2017) J. Xu, Z.H. Lu, Evaluation of moments of performance functions based on efficient cubature formulation, J. Eng. Mech., 143, 2017.
(Xu and al, 2012) J. Xu, J.B. Chen and J. Li, Probability density evolution analysis of engineering structures via cubature points, Computational Mechanics, 50 (1), 135-156, 2012.
(Xu and Dang, 2019) J. Xu, C. Dang, A new bivariate dimension reduction method for efficient structural reliability analysis, Mech. Syst. Signal Process., 115, 281-300, 2019.
(Xu and Kong, 2018) J. Xu, F. Kong, A cubature collocation based sparse polynomial chaos expansion for efficient structural reliability analysis, Struct Saf, 74, 24-31, 2018.
(Xu and Rahman, 2004) H. Xu, S. Rahman, A generalized dimension-reduction method for multidimensional integration in stochastic mechanics. International Journal for Numerical Methods in Engineering, 61: 1992-2019, 2004.
(Yang and al, 1996) J.N. Yang, S.D. Manning, A simple second order approximation for stochastic crack growth analysis, Eng. Fract. Mech. 53 (5), 677-686, 1996.
(Yang and al, 1990) J.N. Yang, S.D. Manning, Stochastic crack growth analysis methodologies for metallic structures. Engng Fract Mech; 37: 1105-1124, 1990.
(Yuan and al (1), 2019) R. Yuan, and al., Simulation-based design and optimization and fatigue characteristics for high-speed backplane connector, Adv. Mech. Eng. 11 (6), 1-10, 2019.
(Yuan and al (2), 2019) R. Yuan, and al., A Reliability Analysis Method of Accelerated Performance Degradation Based on Bayesian Strategy, IEEE Access 7, 2019.
(Zhang, 2019) J.J. Zhang, Chapter 4 - Basic rock fracture mechanics, Applied Petroleum Geomechanics, pages 133-161, 2019.
(Zhang, 1992) X.B. Zhang, Etude numérique de la propagation de fissures par la mécanique de la rupture. Thèse de l'Université Blaise Pascal, France, 139, 1992.
(Zhang and al, 2019) C.Y. Zhang, J.S. Wei, H.Z. Jing, C.W. Fei and W.Z. Tang, Reliability-based low fatigue life analysis of turbine blisk with generalized regression extreme neural network methods, Materials 12, 2019.
(Zhang and al, 2014) Y.G. Zhang, Y.L. Huang, Z.M. Wu, N. Li, A high order unscented Kalman filtering method, Acta Automatica Sinica, 40 (5), 838-848, 2014.
(Zheng and al, 2020) D.F. Zeng, T. Xu, J. Wang, L.T. Lu, W. Meng, B. Jiang and Q. Zou, Investigation of the crack initiation of subsurface rolling contact fatigue in railway wheels, Int. J. Fatigue, 130, Article 105281, 2020.
(Zhu and al, 2020) S.P. Zhu, Y.Z. Hao, D. Liao, Probabilistic modeling and simulation of multiple surface crack propagation and coalescence, Appl. Math. Model. 78, 2020.
(Zhu and al (1), 2018) S.P. Zhu, Q. Liu, W. Peng, X.C. Zhang, Computational-experimental approaches for fatigue reliability assessment of turbine bladed disks, Int. J. Mech. Sci. 142-143, 2018.
(Zhu and al (2), 2018) S.P. Zhu, Q. Liu, Q. Lei, Q. Wang, Probabilistic fatigue life prediction and reliability assessment of a high-pressure turbine disc considering load variations, Int. J. Damage Mech. 27 (10), 2018.

Chapter III: Unified approaches for uncertainty propagation analysis

Introduction

In the previous chapter, an original approach based on efficient cubature formulae has been developed to carry out uncertainty propagation through time-consuming physical models with moderate to high probabilistic dimensionality. We have clearly demonstrated that this approach is by far more efficient than the existing methods, based for instance on MCS or tensor-product cubature schemes, for dealing with the multidimensional integrals relative to the computation of the quantities of interest, such as statistical moments, sensitivity indices and probability of failure. Unfortunately, we have also pointed out that in some situations this efficiency could be affected.
Indeed, the set of physical model evaluations required to compute these multidimensional integrals depends not only on the integration points related to the cubature formula used, but also on the type of uncertainty propagation analysis to be addressed. The comparison of equations (II.3), (II.23) and (II.25) (see Section 2 of Chapter II), which represent the multidimensional integrals to be evaluated when computing, respectively, the $l^{th}$ order statistical moment, the partial variance, and the probability of failure, clearly shows that the integrand is not the same for these three cases. Obviously, for a given problem these integrands are built on the same physical model, but for each computation case the associated integrand is a function of a different set of random variables representing the uncertain parameters. Therefore, a different set of runs of the physical model is needed. Hence, evaluating the physical model, which is itself time-consuming, each time when moving from one type of uncertainty propagation analysis to another will probably lead to an unaffordable computation cost. For problems of crack growth in mechanical components or in civil engineering structures under fatigue loading, the related physical models are, in some situations, computationally demanding. For instance, when dealing with variable amplitude fatigue loading, the quantity of interest, which is the fatigue lifetime, is often computed using an implicit physical model and following a cycle-by-cycle integration scheme, since the amount of loading-induced damage is different in each cycle. This inevitably leads to a huge computational effort for one run of the physical model, especially when the High Cycle Fatigue (HCF) region is of interest. It is clear here that if we find a way to substitute the implicit physical model with an explicit formulation, the computational effort required in the uncertainty propagation analysis could be significantly reduced. The key idea is to build an accurate mathematical approximation of the model response based on a limited set of evaluations of the primary implicit model. Such an approximation is referred to as a response surface, surrogate model or metamodel. Many techniques for building metamodels are available in the literature; among them we can find Support Vector Regression (SVR) (Vapnik and al, 1997; Smola and Schölkopf, 2006), Kriging based on random processes (Krige, 1951; Matherson, 1962; Santner and al, 2003; Rasmussen and Williams, 2006; Dubourg, 2011), High-Dimensional Model Representation (HDMR) (Rabitz and Aliş, 1999; Li and al, 2002; Xu and Rahman, 2004; Rahman, 2008; Riahi, 2013; Tang and al, 2016; Ding and Xu, 2021) and Polynomial Chaos Expansion (PCE) (Ghanem and Spanos, 1991; Soize and Ghanem, 2004; Berveiller, 2005; Blatman, 2009). The latter technique, denoted by PCE, consists in expanding the response of an implicit physical model over appropriate finite polynomial chaos bases whose components are orthonormal to each other with respect to the joint probability density representing the variability of the uncertain input parameters.
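To make the cost of such cycle-by-cycle schemes concrete, the short sketch below integrates a Paris-type crack growth law one cycle at a time. All numerical values, the constant geometry factor Y and the crack configuration are illustrative assumptions, not data from the specimens studied in this work.

```python
import numpy as np

def cycles_to_failure(a0, a_crit, C, m, delta_sigma, Y=1.0):
    """Cycle-by-cycle integration of the Paris law da/dN = C * (dK)^m.

    a0, a_crit  : initial and critical crack lengths [m] (assumed values)
    C, m        : Paris law coefficients (assumed values)
    delta_sigma : applied stress range [MPa]
    Y           : geometry correction factor, taken constant here
    """
    a, N = a0, 0
    while a < a_crit:
        dK = Y * delta_sigma * np.sqrt(np.pi * a)  # SIF range [MPa*sqrt(m)]
        a += C * dK**m                             # crack increment of one cycle
        N += 1
    return N

# About one million loop iterations for this illustrative parameter set: a
# single lifetime evaluation is already expensive, which is precisely why
# replacing the implicit model by a metamodel pays off.
print(cycles_to_failure(a0=1e-3, a_crit=25e-3, C=5e-12, m=3.0, delta_sigma=120.0))
```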
Since its appearance in the early 1990's thanks to the work of (Ghanem and Spanos, 1991), PCE has been extensively applied to mechanical and civil engineering problems with uncertain parameters and, at the same time, has been gradually enhanced, mainly to cope with the curse of dimensionality encountered when the number of uncertain parameters is high. In the literature, two ways of enhancement are distinguished. The first way aims to use efficient techniques (Blatman and Sudret, 2011; Choi and al, 2012; Riahi, 2013; Ahlfeld, 2016; Camacho and al, 2017; Zhang and Qiu, 2020; Cao and al 2022), ranging from integration schemes to regression algorithms, to compute the unknown coefficients of the PCE. The second way aims to use suitable truncation schemes (Blatman and Sudret, 2009; Blatman and Sudret, 2010; Hu and Youn, 2011; Peng and al, 2013; Hawchar and al, 2017; Abraham and al, 2017) to discard the unknown coefficients with a weak contribution to the PCE, thus significantly reducing the computational cost. The present chapter aims to develop a unified method of uncertainty propagation (i.e., able to handle all three types of probabilistic computation analyses efficiently) by combining PCE and the cubature formulae studied in Chapter II. Two alternatives will be investigated: the first one, called full-PCE, uses a full polynomial expansion to build a metamodel of the model response, where the unknown coefficients are computed by projection based on cubature formulae I-VI; the second alternative, called sparse-PCE, uses a suitable truncation algorithm based on second moments prior information, able to adaptively discard the insignificant coefficients of the PCE in the metamodel construction process, while the remaining significant coefficients are computed by regression. This chapter is organized into two main sections. In section 2, we first recall the mathematical framework of PCE and the key ingredients for setting up PCE metamodels to represent the responses of implicit physical models. The two alternatives, named full-PCE and sparse-PCE, developed to enhance the efficiency of PCE metamodels, are given special attention. In section 3, four application examples dealing with fatigue fracture problems with different levels of complexity are addressed to validate the proposed approaches and to demonstrate their ability to deal with various types of uncertainty propagation analysis at an affordable computational cost.

2. Polynomial Chaos Expansion (PCE)

2.1. Construction of PCE-based metamodels

Let us consider a computational model $f$ describing the behavior of an engineering system, whose input parameters are represented by an $N$-dimensional random variable $\boldsymbol{X} = \{X_1, \dots, X_N\}^T$ with independent components $X_i$. The model response can be expanded over a polynomial chaos basis as:

$$Y = f(\boldsymbol{X}) \approx \sum_{k=0}^{P-1} a_k \, \Psi_{\boldsymbol{\alpha}_k}(\boldsymbol{X}) \qquad (III.1)$$

where the multivariate polynomials $\Psi_{\boldsymbol{\alpha}_k}$ are built by tensor products of univariate polynomials:

$$\Psi_{\boldsymbol{\alpha}_k}(\boldsymbol{X}) = \prod_{i=1}^{N} \Psi_{\alpha_k^i}(X_i) \qquad (III.4)$$

In the above equation, the set of univariate polynomials $\Psi_{\alpha_k^i}$, $i = 1, \dots, N$, are orthonormal with respect to the marginal distributions $p_{X_i}(x_i)$, $i = 1, \dots, N$. This orthonormality condition reads:

$$\mathbb{E}\left[\Psi_{\alpha_k^i}(X_i)\,\Psi_{\alpha_l^i}(X_i)\right] = \int \Psi_{\alpha_k^i}(x_i)\,\Psi_{\alpha_l^i}(x_i)\,p_{X_i}(x_i)\,dx_i = \delta_{k,l}$$

where $\delta_{k,l}$ denotes the Kronecker symbol, equal to 1 when $k = l$ and 0 otherwise. In the original version of the PCE proposed by (Weiner, 1983), the polynomial chaos basis is made of Hermite polynomials to represent random processes from a set of normal variables. The resulting metamodel, called Wiener-Hermite PCE, can be used to represent model responses that depend only on normal distributions. Fortunately, an extended version, referred to as generalized PCE, has been developed by (Xiu and Karniadakis, 2002) to deal with non-normal random variables based on the Askey family of hypergeometric polynomials.
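As a numerical illustration of the orthonormality condition above, the following sketch evaluates normalized probabilists' Hermite polynomials ($\psi_k = He_k/\sqrt{k!}$, which are orthonormal under the standard normal weight) and checks their inner products by Gauss-Hermite quadrature; the helper names are ours, not those of a PCE library.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def psi(k, x):
    """k-th probabilists' Hermite polynomial, normalized to unit variance
    under the standard normal weight: psi_k = He_k / sqrt(k!)."""
    coeffs = np.zeros(k + 1); coeffs[k] = 1.0
    return He.hermeval(x, coeffs) / sqrt(factorial(k))

# Gauss-Hermite-e quadrature integrates against exp(-x^2/2); dividing by
# sqrt(2*pi) turns the sum into an expectation under N(0, 1).
x, w = He.hermegauss(30)
for j in range(4):
    for k in range(4):
        inner = np.sum(w * psi(j, x) * psi(k, x)) / sqrt(2 * pi)
        print(f"E[psi_{j} psi_{k}] = {inner:+.6f}")
# The printed 4x4 matrix is (numerically) the identity: the basis is orthonormal.
```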
In table III.1 are listed the polynomial families associated with the most popular continuous and discrete distributions.

Table III.1. Correspondence between distributions of random variables and orthogonal polynomials

In engineering problems, the components of the $N$-dimensional random variable $\boldsymbol{X} = \{X_1, \dots, X_N\}^T$ may have different distributions not belonging to the families given in table III.1, and they may also be correlated with each other. This general case can be easily addressed by using isoprobabilistic transformations $\boldsymbol{X} = T(\boldsymbol{U})$ (see section 2.1 of Chapter II for more details). Then, the PCE-based metamodel represented by equation (III.1) can be naturally rewritten in the standard random space as follows:

$$Y \approx \sum_{k=0}^{P-1} a_k \, \boldsymbol{H}_{\boldsymbol{\alpha}_k}(\boldsymbol{U})$$

where $\boldsymbol{U} = \{U_1, \dots, U_N\}^T$ is an $N$-dimensional normal variable with independent components $U_i$, $i \in \{1, \dots, N\}$, following a standard normal distribution $\varphi_{U_i}(u_i)$, $i \in \{1, \dots, N\}$, with zero mean and unit standard deviation. In the following, the mathematical formulation of the quantities of interest will be set up in the standard random space. Thus, the polynomial chaos basis used to construct the PCE-based metamodels will comprise multivariate Hermite polynomials obtained by tensor products of univariate orthonormal Hermite polynomials.

2.2. Computation of the PCE coefficients

Once the truncated polynomial chaos basis has been built up, the unknown coefficients $a_k$, $k = 0, \dots, P-1$, have to be determined to completely construct the PCE-based metamodel of interest. In the literature, two families of approaches can be distinguished to solve this issue. Intrusive approaches, introduced since the appearance of the PCE in the early 1990's (Ghanem and Spanos, 1991), aim at computing the PCE coefficients by minimizing the following residual under the constraint that it is orthogonal to the selected polynomial chaos basis:

$$\varepsilon = f(\boldsymbol{X}) - \sum_{k=0}^{P-1} a_k \, \Psi_{\boldsymbol{\alpha}_k}(\boldsymbol{X})$$

The solution of the above minimization problem is obtained by a Galerkin projection scheme, which requires adaptations of the governing equations related to the considered mechanical model, which explains why these approaches are called intrusive. Unfortunately, when these governing equations involve nonlinearities, performing these adaptations can be a challenging task. If, in addition, the probabilistic dimension is high, the Galerkin projection scheme often leads to a large system of coupled equations which requires a considerable computational cost. To face this inefficiency of intrusive approaches, alternative methods have been developed in the last few decades, called non-intrusive approaches. In these approaches, the mechanical model is considered as a black box and the coefficients of the PCE are simply computed from a finite set of evaluations of the mechanical model on an appropriate finite set of points, called an experimental design. Now, by performing some algebra, the least-square estimates of the PCE coefficients $\hat{\boldsymbol{a}}$ are obtained as follows:

$$\hat{\boldsymbol{a}} = (\boldsymbol{\mathcal{H}}^T \boldsymbol{\mathcal{H}})^{-1} \boldsymbol{\mathcal{H}}^T \boldsymbol{\mathcal{Y}} \qquad (III.20)$$

where $\boldsymbol{\mathcal{Y}} = \{y_j = f \circ T(\boldsymbol{u}_j),\ j = 1, \dots, M\}$ is a sample set of points representing the respective responses of the mechanical model at the points $\boldsymbol{\mathcal{U}} = \{\boldsymbol{u}_j = (u_1^j, \dots, u_N^j),\ j = 1, \dots, M\}$ of the experimental design, and $\boldsymbol{\mathcal{H}}$ is an $M \times P$ matrix, called the information matrix, whose $(j,k)^{th}$ entry $\mathcal{H}_{jk}$, $j = 1, \dots, M$, $k = 1, \dots, P$, is defined as the response of the $k^{th}$ $N$-variate Hermite polynomial $\boldsymbol{H}_{\boldsymbol{\alpha}_k}$ of total degree $\boldsymbol{\alpha}_k$ at the sampling point $\boldsymbol{u}_j$.
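A minimal sketch of the non-intrusive regression step (III.20) in one dimension: a toy function stands in for the expensive mechanical model, the information matrix is filled with normalized Hermite polynomials, and the coefficients are obtained by least squares, which is numerically equivalent to, but better conditioned than, forming $(\mathcal{H}^T\mathcal{H})^{-1}\mathcal{H}^T\mathcal{Y}$ explicitly. All names and values below are illustrative.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt

def information_matrix(U, P):
    """M x P matrix H with H[j, k] = psi_k(u_j), psi_k = He_k / sqrt(k!).
    One input dimension is used here to keep the sketch short."""
    H = np.empty((len(U), P))
    for k in range(P):
        c = np.zeros(k + 1); c[k] = 1.0
        H[:, k] = He.hermeval(U, c) / sqrt(factorial(k))
    return H

# Toy "mechanical model" standing in for an expensive simulation (assumption).
model = lambda u: np.exp(0.3 * u) + 0.1 * u**3

rng = np.random.default_rng(0)
U = rng.standard_normal(40)          # experimental design, M = 40 points
Y = model(U)                         # M runs of the model
H = information_matrix(U, P=6)

# Least-squares estimate of the PCE coefficients, eq. (III.20).
a_hat, *_ = np.linalg.lstsq(H, Y, rcond=None)
print("PCE coefficients:", np.round(a_hat, 4))
# With an orthonormal basis, moments come for free from the coefficients:
print("mean ~ a_0 =", a_hat[0], " variance ~ sum_k a_k^2 =", np.sum(a_hat[1:]**2))
```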
The choice of a suitable experimental design $\boldsymbol{\mathcal{U}} = \{\boldsymbol{u}_j = (u_1^j, \dots, u_N^j),\ j = 1, \dots, M\}$ is of great importance, especially its size $M$, to obtain a well-conditioned regression problem and consequently accurate estimates of the PCE coefficients. Indeed, if $M$ is just slightly greater than the number $P$ of unknown coefficients to be computed, this may lead to an ill-conditioned information matrix $\boldsymbol{\mathcal{H}}$ and consequently to an intractable regression problem. On the other hand, if $M$ is very high, this may induce an unaffordable computational cost in case the mechanical model itself is computationally demanding, since the corresponding number of evaluations of the mechanical model will be high. In the literature, the value of $M$ is typically chosen as a small multiple (e.g., two to three times) of the number $P$ of unknown coefficients. Another way to reduce the computational effort required to estimate the PCE coefficients using regression methods is to build experimental designs from a set of unevenly weighted point samples, as suggested by (Isukapalli, 1999), instead of a sample set of equally weighted point samples like those obtained by classical sampling techniques. Based on this idea, experimental designs built from suitable combinations of Gauss-Hermite integration points have been introduced by (Berveiller, 2005). The key ingredients to build such an experimental design consist of first generating a sample set of $N$-tuples representing all possible combinations of $p+1$ one-dimensional Gauss-Hermite integration points; the experimental design is then built from the first $2P+1$ combinations after sorting them in ascending order based on the distance of each point from the origin. Note that sparse grids related to the Smolyak integration scheme and the integration points of cubature formulae I-VI can also be used to build experimental designs. The latter alternative will be investigated in the following to improve the efficiency of regression methods for the estimation of the PCE coefficients. The key idea is to find a smart truncation scheme able to reduce the number of terms retained in the polynomial chaos basis. A preliminary screening analysis can be carried out, which allows, based on local sensitivity indices, to split the uncertain parameters into three categories: those with a weak main effect, those with linear and additive effects, and those with nonlinear or interaction effects. Note that screening analyses are not computationally demanding, thus the loss of efficiency over the whole computational process is very limited. Thus, if prior information about the estimate of the second order statistical moments is already available, the latter could be a useful tool to identify the most significant terms for the quantities of interest, when a step-by-step algorithm is used to build the polynomial chaos basis. Indeed, at each iteration $k$ of this algorithm, the polynomial chaos basis — denoted here by $\boldsymbol{\mathcal{H}}_{p,q,\sigma^2}$, where the subscript $\sigma^2$ recalls that the truncation is driven by the second moment prior information — is enriched only with the candidate terms that contribute significantly to the estimate of the quantities of interest. Note that the values of $\varepsilon_1$, used in the criterion of enrichment of the polynomial chaos basis, and $\varepsilon_2$, used in the stopping condition of the step-by-step algorithm, are set respectively to $10^{-6}$ and $10^{-3}$, which allow us, on the one hand, to avoid an ill-conditioned information matrix, and thus an intractable regression problem, and, on the other hand, to ensure a good accuracy of the estimates of the quantities of interest. Of course, other values can be chosen depending on the complexity of the problem of interest and the accuracy to be achieved.
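Returning to the Gauss-Hermite experimental design introduced above, it can be sketched in a few lines: enumerate all $(p+1)^N$ combinations of the one-dimensional integration points, sort them by Euclidean distance from the origin, and keep the first $2P+1$. The function name is ours.

```python
import numpy as np
from itertools import product
from numpy.polynomial import hermite_e as He

def gh_experimental_design(N, p, P):
    """Experimental design built from combinations of one-dimensional
    Gauss-Hermite points: enumerate all (p+1)^N N-tuples, sort them by
    distance to the origin, keep the first 2P+1 (Berveiller-type design)."""
    roots, _ = He.hermegauss(p + 1)                 # p+1 1-D integration points
    candidates = np.array(list(product(roots, repeat=N)))
    order = np.argsort(np.linalg.norm(candidates, axis=1))
    return candidates[order[: 2 * P + 1]]

# Example: N = 3 inputs, degree-3 basis, P = 20 unknown coefficients
design = gh_experimental_design(N=3, p=3, P=20)
print(design.shape)   # (41, 3): 2P+1 model evaluations instead of 4^3 = 64
```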
The main steps of this truncation scheme based on second moment information are summarized in the flowchart depicted in figure III.5. As can be seen, the proposed truncation scheme based on second moment prior information can be combined with the truncation scheme based on low-order interactions. Once the PCE-based metamodel is built, the probability of failure with respect to an admissible threshold can be post-processed as:

$$P_f = \int_{-\infty}^{+\infty} \hat{f}_{Y,p}(y)\, F_{y_{threshold}}(y)\, dy$$

where $F_{y_{threshold}}(y)$ denotes the CDF of the random variable representing the uncertainty on the admissible threshold $y_{threshold}$, and $\hat{f}_{Y,p}(y)$ the PDF of the metamodel response. The above integral can be evaluated analytically, if a suitable formulation is available for the integrand $\hat{f}_{Y,p}(y)\, F_{y_{threshold}}(y)$, or numerically otherwise, without any additional runs of the primary implicit mechanical model.

3. Application to fatigue fracture problems

In this section, four fatigue crack growth problems are addressed to investigate the approaches proposed above. Various kinds of uncertainty propagation analysis, namely statistical moments and distribution analysis, reliability analysis and sensitivity analysis, are carried out. The first problem deals with crack growth in a CCP specimen subjected to constant amplitude fatigue loading, a problem for which, fortunately, testing data are available in the literature. They are used to identify the probabilistic models of the uncertain parameters. In addition to assessing the efficiency of the proposed approaches, this first example is also used to study the effect of the distributions of the uncertain parameters and of their statistical characteristics on the results of the probabilistic computations. The second example considers a nonlinear cracked pipe where the fracture driving force, defined as Rice's integral, is computed through a computationally expensive FEM. The structural integrity of the cracked pipe, defined as the risk that the fracture driving force exceeds the fracture toughness of the constitutive material, is evaluated based on the statistical moments given by the proposed PCE-based methods and a Demand-Capacity reliability approach (see section 2.4 of Chapter II). The third example deals with a mixed mode crack growth problem. The studied structure represents a rectangular plate containing an inclined crack at the edge. The effect of the spatially varying uncertainty of the Young's modulus of the constitutive material on the variability of the mechanical responses — defined as the opening fracture mode SIF $K_I$, the in-plane shear fracture mode SIF $K_{II}$ and the bifurcation angle $\theta$ — is studied.

3.1. Crack growth in CCP specimen

Problem statement

The first example deals with the crack growth in a CCP specimen, which was previously studied in section 2.4 of Chapter I of the thesis, but only from a deterministic point of view. Now, we want to push our analysis much further, and we are interested in assessing the effect of the uncertainty of some material properties on the fatigue lifetime. First, a statistical analysis is carried out on the fatigue crack growth data provided by the experimental tests performed on CCP specimens (Hudson, 1969), on the one hand to point out the probabilistic character of the fatigue crack growth process, and, on the other hand, to identify the probabilistic models capable of accurately representing the variability of the uncertain parameters. The crack growth data are fitted with a log-linear regression model (consistent with the Walker crack growth law):

$$y = B + A_1 x_1 + A_2 x_2$$

where $y = \log_{10}(da/dN)$, $B = \log_{10}(C_1)$, $A_1 = m_1$, $A_2 = -m_1(1-\gamma)$, $x_1 = \log_{10}(\Delta K)$, and $x_2 = \log_{10}(1-R)$.
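The regression just described amounts to an ordinary least-squares fit in the $(x_1, x_2)$ plane. The sketch below performs it on synthetic crack growth triplets $(\Delta K, R, da/dN)$ standing in for measured data; the "true" parameter values and the scatter level are purely illustrative.

```python
import numpy as np

# Synthetic (dK [MPa sqrt(m)], R, da/dN [m/cycle]) triplets standing in for
# measured crack growth data; all values are illustrative only.
rng = np.random.default_rng(1)
dK = rng.uniform(8.0, 40.0, 60)
R = rng.uniform(0.0, 0.7, 60)
C1, m1, gamma = 4e-11, 3.2, 0.6                   # assumed "true" parameters
dadN = C1 * (dK / (1 - R) ** (1 - gamma)) ** m1   # Walker crack growth law
dadN *= 10 ** rng.normal(0.0, 0.05, 60)           # lognormal scatter

# Log-linear regression y = B + A1*x1 + A2*x2
y = np.log10(dadN)
X = np.column_stack([np.ones_like(dK), np.log10(dK), np.log10(1 - R)])
(B, A1, A2), *_ = np.linalg.lstsq(X, y, rcond=None)

# Recover the physical parameters: C1 = 10^B, m1 = A1, gamma = 1 + A2/A1
print("C1 =", 10**B, " m1 =", A1, " gamma =", 1 + A2 / A1)
```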
The statistical moments (i.e., the mean value $\mu$ and the variance $\sigma^2$) and the correlation matrix of the fitted regression parameters are then identified from the test data. For the second example, the elastoplastic behavior of the cracked pipe material is described by a Ramberg-Osgood law:

$$\epsilon = \frac{\sigma}{E} + \alpha\, \frac{\sigma_y}{E} \left(\frac{\sigma}{\sigma_y}\right)^{n}$$

where $\sigma$ is the stress, $\epsilon$ the strain, $E$ the Young's modulus, $\sigma_y$ the yield strength, $\alpha$ a dimensionless material parameter and $n$ the strain hardening exponent. The stress-strain curve representing the Ramberg-Osgood behavior law, which will be used later in the computation of the fracture parameters of interest, is shown in figure III.13.

ABSTRACT

This work presents a hybrid approach to perform uncertainty propagation. It is based on Stochastic Response Surfaces (SRS) for the construction of analytical representations of implicit mechanical model responses. The coefficients of the SRS, defined by multidimensional integrals, are calculated by efficient quadrature schemes that reduce the number of evaluations of the implicit mechanical model, particularly when the number of uncertain parameters is high. The accuracy and effectiveness of this approach have been demonstrated through the treatment of a wide variety of fatigue cracking problems. The complexity that engineers have to deal with lies in the difficulty of understanding fatigue phenomena and in their highly random nature. Thus, the problem is to propose an approach that guarantees the best compromise between representing the real behavior of the fatigue crack and accounting for the different sources of uncertainty. The two main objectives of this work are to compute multidimensional integrals with an approach that balances efficiency and accuracy, and to develop unified approaches able to perform efficiently the three kinds of uncertainty propagation analysis.
04117301
en
[ "sdv" ]
2024/03/04 16:41:26
2023
https://hal.inrae.fr/hal-04117301/file/Lyon_30_05_2023_BLYSTONE_Shannan.pdf
The linear relationship between water quantity and signal
• Spectrum of wood density between functional types
• The Persian walnut, J. regia, has the densest wood amongst the species, and it has the thickest bark zone (varying water content)
Effects of species and functional type
04117366
en
[ "spi.other" ]
2024/03/04 16:41:26
2022
https://hal.science/hal-04117366/file/Panasiewicz_29413.pdf
Jognes Panasiewicz (email: [email protected]), Nisrine Arab (email: [email protected]), Fabien Destic (email: [email protected]), Gefeson M. Pacheco (email: [email protected]), Angélique Rissons (email: [email protected])

All-Digital Optical Phase-Locked Loop for satellite communications under Turbulence Effects

Keywords: atmospheric turbulence, coherent detection, FSO communications, optical digital phase-locked loop

This study proposes a new architecture for the phase detector in an all-digital Optical Phase-Locked Loop (OPLL) to demodulate a digitally modulated optical signal from a long-distance Free Space Optics (FSO) communications system. The performance of the proposed architecture is evaluated and compared with an OPLL using an analog phase detector. In addition, a non-negligible delay is considered in the system analyses. Finally, the impact of wind speed on communication quality, through the Bit Error Rate (BER) obtained from the recovered data, is studied under three different atmospheric turbulence scenarios.

I. INTRODUCTION

FSO has recently emerged as a promising communications technology. The need to shift to optical carriers has emerged due to a significant increase in data and multimedia service demands and the congestion of the radiofrequency (RF) spectrum [Kaushal, Optical Communication in Space: Challenges and Mitigation Techniques]. Furthermore, FSO has numerous advantages over RF communication, including large modulation bandwidth, narrow beam divergence, high security, high directivity and no licensing requirement. However, optical signal propagation is affected by absorption, scattering, and changes in the atmospheric refractive index or turbulence, which cause optical signal attenuation and deteriorate the link quality [Andrews, Laser Beam Propagation through Random Media]. For long-distance FSO, such as satellite-to-ground links, as shown in Fig. 1, coherent optical communication is advantageous since it allows higher-order modulation formats and provides better signal sensitivity [Winzer, Fiber-optic transmission and networking: the previous 20 and the next 20 years]. Optical phase-locked loop (OPLL) and digital signal processing (DSP) are the two techniques used to recover the data in a coherent optical system. OPLL is generally used in inter-satellite links, while adaptive optics (AO) coupled with DSP techniques can be employed for satellite-to-ground links [Carrasco-Casado, Space Optical Links for Communication Networks]. Despite using digital control loops, the OPLLs used in inter-satellite links rely on analog phase detectors to provide the phase error to the control loop [Yue, Homodyne coherent optical receiver for intersatellite communication], [Liu, Study of multistage composite loop control based on optical phase-locked loop technology]. However, the analog detector is unsuitable for systems subject to fading, as its gain is related to the signal level and could render the OPLL unstable. Therefore, taking advantage of OPLL and DSP techniques, such as real-time data recovery and flexibility of parameter reconfiguration, this study proposes a fully digital OPLL to demodulate a digitally modulated optical signal. Additionally, the optical signal propagating along the LEO slant path shown in Fig. 1 will be considered. Thus, the OPLL will use a digital phase detector to handle a fading signal resulting from the effects of atmospheric turbulence.
The system, supported by simulations using VPIphotonics under a co-simulation technique, will be evaluated under three scenarios of atmospheric turbulence and several wind speed values.

II. COHERENT OPTICAL RECEIVER

A. LEO slant path

Fig. 1 shows a general overview of the implemented system and the power level diagram for the satellite downlink. First, the transmitter section performs quadrature phase-shift keying (QPSK) with a bit rate of 20 Gbps. Next, as the data signal employs non-return-to-zero coding, an optical band-pass filter (OBPF) filters the undesired side lobes, thereby limiting the bandwidth of the transmitted modulated signal. The signal is then amplified by an Erbium-Doped Fiber Amplifier (EDFA) to reach the required optical power. After that, the optical signal propagating along an LEO satellite slant path with a minimum elevation of 20° and an altitude of 700 km is considered. Under these assumptions, the received power is approximately -16 dBm, giving a margin of 4 dB over the required receiver input power. The required power is based on the coherent detector characteristics, such as a photodetector responsivity of 0.04 A/W. Finally, the transmitted data are demodulated at the Optical Ground Station (OGS) through the proposed fully digital OPLL. Its performance is evaluated under different scenarios of atmospheric turbulence. The term $a_{scint}$ in the power level diagram corresponds to the attenuation induced by turbulence.

B. Turbulence model

The refractive index fluctuations caused by atmospheric turbulence induce harmful effects on an optical signal propagating through the atmosphere. Due to the non-linear turbulence characteristics, the refractive index structure parameter $C_n^2$ is used to characterize its effects [Andrews, Laser Beam Propagation through Random Media]. The parameter $C_n^2$ can be considered a constant when dealing with propagation along a horizontal path. However, when studying vertical links, $C_n^2$ varies with altitude $h$. Several profile models of $C_n^2(h)$ exist to describe the turbulence profile for ground-to-space and space-to-ground applications. Among these models, one commonly used in the literature is the Hufnagel-Valley (H-V) model [Andrews, Laser Beam Propagation through Random Media]. For this model, the nominal value of $C_n^2(0)$ at ground level will be considered with three different values of $1\times10^{-13}$ m$^{-2/3}$, $5\times10^{-14}$ m$^{-2/3}$, and $1\times10^{-15}$ m$^{-2/3}$, corresponding respectively to strong, moderate and weak turbulence. In order to calculate the corresponding ground wind speed, the International Telecommunication Union ITU-R P.1621-1 recommendation is used through the expression $v_{rms} = \sqrt{v_g^2 + 33.11\, v_g + 360.31}$, where $v_g$ is the ground wind speed. The simulated rms wind speeds $v_{rms}$ correspond to ground wind speeds ranging from zero to 43 m/s (zero to 154.8 km/h). For the rest of this article, we use the term "wind speed" to mean "rms wind speed". Choosing this range of rms wind speeds allowed us to simulate the optical link under different weather conditions (zero m/s during a calm summer day, 154 km/h during a storm). Fig. 2 shows the profile of $C_n^2(h)$ for different wind speeds under moderate turbulence. For different wind speed values, the $C_n^2(h)$ profile is modified only for altitudes between 5000 m and 20000 m, according to the Bufton model [Andrews, Laser Beam Propagation through Random Media]. Note that the same behavior has also been obtained for weak and strong turbulence.
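A short sketch of this turbulence profile, combining the Hufnagel-Valley model in its standard form (as given, e.g., by Andrews and Phillips) with the rms wind speed expression quoted above; the exact constants of the paper's simulator may differ.

```python
import numpy as np

def hufnagel_valley(h, v_rms, Cn2_ground):
    """Hufnagel-Valley C_n^2(h) profile [m^(-2/3)], h in meters, in the
    standard Andrews & Phillips form; Cn2_ground = C_n^2(0)."""
    return (0.00594 * (v_rms / 27.0) ** 2 * (1e-5 * h) ** 10 * np.exp(-h / 1000.0)
            + 2.7e-16 * np.exp(-h / 1500.0)
            + Cn2_ground * np.exp(-h / 100.0))

def v_rms_from_ground(v_g):
    """Rms wind speed from ground wind speed, with the ITU-R P.1621-style
    coefficients quoted in the text above."""
    return np.sqrt(v_g**2 + 33.11 * v_g + 360.31)

# Moderate turbulence (C_n^2(0) = 5e-14 m^(-2/3)) for three ground winds
h = np.linspace(0.0, 20000.0, 5)
for v_g in (0.0, 20.0, 43.0):
    print(f"v_g = {v_g:4.1f} m/s ->", hufnagel_valley(h, v_rms_from_ground(v_g), 5e-14))
```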
C. All-Digital OPLL description

The block diagram of the implemented coherent optical receiver used in the OGS of Fig. 1 is shown in Fig. 3. The trans-impedance amplifier (TIA) is used in the coherent receiver for loss compensation. The laser, the dual-parallel Mach-Zehnder modulator (DPMZM), and the VCO form an optical voltage-controlled oscillator (OVCO), enabling the optical signal to be fine-tuned [Ferrero, Optical Phase Locking techniques: An overview and a novel method based on Single Side Sub-Carrier modulation]. The analog-to-digital converter (ADC) provides the interface from a continuous-time system to a discrete-time one, and a digital-to-analog converter (DAC) carries out the opposite conversion. The OPLL loop filter (LF) is based on a first-order active filter transfer function that results in a second-order system when combined with the VCO integrator. The LF transfer function is given by $F(z) = K_1 + K_2/(z-1)$, where $K_1$ and $K_2$ are the loop filter gains. In addition to the LF, a signal-to-noise ratio (SNR) estimator was also implemented to determine the noise variations on the demodulated signal and the OPLL. The SNR algorithm is based on the second- and fourth-order moments estimator, M2M4 [Pauluzzi, A comparison of SNR estimation techniques for the AWGN channel]. The green dashed block drawn in Fig. 3 shows the phase detector's block diagram. It provides phase error information from a tracked carrier signal. However, a digitally modulated signal does not have a carrier, since modulation suppresses it. Therefore, according to the modulation scheme, a different circuit is employed to remove the modulation, resulting in an unmodulated carrier to track. In the QPSK format, for example, a multiply-by-four circuit is used to generate an error signal proportional to $K_D \sin(4\theta)$ [Schaefer, Coherent receiver design based on digital signal processing in optical high-speed intersatellite links with M-phase-shift keying]. A circuit consisting of logic gates could achieve this multiplication, thus obtaining $K_D = R_{TIA}^4 R^4 P_s^2 P_{LO}^2 / 4$, where $R_{TIA}$ is the trans-impedance gain, $R$ is the responsivity, $P_s$ is the modulated signal power, and $P_{LO}$ is the power of the local oscillator (LO). In a digital phase detector, the imaginary part of the fourth power (for QPSK) of the sampled I/Q signals has the same gain and error signal as the multiplying circuit, where $\mathrm{Im}[X_k^M] = \mathrm{Im}[(I + jQ)^M]/M = K_D \sin(M\theta)$. $M$ represents the modulation order; for QPSK, $M = 2^b = 4$, where $b$ is the number of bits per symbol. Previously, an all-digital OPLL was demonstrated using a lookup table to implement the fourth power [Lu, Digital-analog hybrid optical phase-lock loop for optical quadrature phase-shift keying]. However, the gain $K_D$ is related to the input signal level $P_s$, which changes the designed control loop behavior, rendering the OPLL unstable. Additionally, it uses the approximation $\sin(\theta) \approx \theta$, for small $\theta$, as the phase error signal. Therefore, a Moving Average Filter (MAF) followed by the four-quadrant arc-tangent function is applied to the power moment of the sampled signals. The estimated phase error is then:

$$\hat{\Theta}_k = \frac{1}{M} \arg\left\{ \sum_{n=k+1}^{k+L} [X_n]^M \right\}$$

thereby obtaining the phase error without approximation. This technique is based on the Viterbi-Viterbi algorithm used in DSPs for coherent optical systems. However, it was modified to correct the phase sample-by-sample, according to the OPLL sampling rate, instead of on data blocks as in DSPs. The MAF is a particular example of a finite impulse response filter where all coefficients are equal to $1/L$, $L$ being the number of averaged samples. Employing it is essential, since the drawback of the power moment is the presence of harmonics at high frequencies at its output, which increases the OPLL phase error variance.
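The proposed discriminator can be sketched in a few lines: raise the complex I/Q samples to the power $M$, average over $L$ samples, and take $\arg(\cdot)/M$. Variable names are ours, and the snippet ignores quantization and loop delay.

```python
import numpy as np

def digital_phase_detector(I, Q, M=4, L=10):
    """Viterbi-Viterbi-style phase error estimate, one value per sample:
    Theta_k = arg( moving sum of L values of (I + jQ)^M ) / M.
    Raising to the M-th power removes M-PSK modulation, collapsing the
    symbol phases onto a single carrier phase."""
    x = (I + 1j * Q) ** M                       # modulation removal (QPSK: M = 4)
    kernel = np.ones(L) / L                     # L-tap moving average filter
    s = np.convolve(x, kernel, mode="same")     # smooth the power moment
    return np.angle(s) / M                      # estimated phase error [rad]

# Noisy QPSK samples with a 0.2 rad carrier phase offset (illustrative)
rng = np.random.default_rng(0)
symbols = np.exp(1j * (np.pi / 2 * rng.integers(0, 4, 2000) + 0.2))
noisy = symbols + 0.05 * (rng.standard_normal(2000) + 1j * rng.standard_normal(2000))
est = digital_phase_detector(noisy.real, noisy.imag)
print("mean estimated phase error:", est.mean())   # close to 0.2 rad
```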
III. SYSTEM IMPLEMENTATION AND ANALYSIS

A. OPLL

The OPLL was developed and evaluated through a co-simulation technique. The photonics circuit was implemented in the VPIphotonics Design Suite, while the OPLL algorithm and SNR estimation were implemented in Python. The coherent detector parameters are based on a commercial component with responsivity R = 0.04 A/W, and the $R_{TIA}$ gain was set to 9 kΩ. The $P_{LO}$ optical power was 1 dBm, while $P_s$ changed according to the turbulence effects. For all analyses, the Tx and Rx lasers have a linewidth of 1 kHz, and the VCO has a central frequency of 20 GHz with a gain of 500 MHz/V. The propagation delay was also considered, since it can affect the OPLL, rendering it unstable. The MAF represents a delay in the control loop, where L taps introduce a delay of L samples. Additionally, for the ADCs and DAC, a delay of 12 ns was considered based on the commercial components [Chen, Z-domain modeling methodology for homodyne digital optical phase-locked loop], while for the atan2, a delay of 15 ns is based on techniques for hardware implementation [De Dinechin, Hardware Implementations of Fixed-Point Atan2]. The OPLL sampling rate is 10 Gsamples/s (sps), which is motivated by the availability of ADCs, DACs, and even DSPs that can operate at 10 GHz, allowing the OPLL to collaborate with the DSP in a correction system. As the MAF has 10 taps, it introduces a delay of 1 ns. The LF parameters were determined by optimizing the $B_n$ parameter (OPLL noise bandwidth) for a total loop delay of 30 ns, due to the previously discussed system characteristics. Thus, several values of the $B_n$ parameter were tested, starting from Norimatsu's relation for OPLLs with delay [Norimatsu, PLL propagation delay-time influence on linewidth requirements of optical PSK homodyne detection]; a power penalty of 0.42 dB and a standard deviation of the phase error of 2.6° were found for $B_n$ = 2 MHz. By using the bilinear transformation, the LF parameters were determined through $B_n T_s = 2\times10^{-4}$ for 10 Gsps, where $T_s$ is the sample time.
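One common way to turn a target normalized noise bandwidth $B_n T_s$ into the gains $K_1$, $K_2$ of $F(z) = K_1 + K_2/(z-1)$ is the bilinear-transform design used in digital-PLL literature (e.g., GNSS tracking loops). The mapping below and the unit loop-gain normalization are assumptions for illustration, not the authors' exact sizing procedure.

```python
import numpy as np

def loop_filter_gains(Bn_Ts, zeta=0.707, K_loop=1.0):
    """Proportional/integral gains K1, K2 of F(z) = K1 + K2/(z-1) for a
    second-order digital PLL, from a target normalized noise bandwidth
    Bn*Ts, via the bilinear transform. K_loop lumps the discriminator and
    NCO/VCO gains (assumed to be 1 here)."""
    wn_Ts = 8.0 * zeta * Bn_Ts / (4.0 * zeta**2 + 1.0)   # natural frequency * Ts
    d = 4.0 + 4.0 * zeta * wn_Ts + wn_Ts**2
    K1 = (8.0 * zeta * wn_Ts / d) / K_loop
    K2 = (4.0 * wn_Ts**2 / d) / K_loop
    return K1, K2

# Bn*Ts = 2e-4 as in the text (Bn = 2 MHz at 10 Gsps), unit loop gain assumed
K1, K2 = loop_filter_gains(2e-4)
print(f"K1 = {K1:.3e}, K2 = {K2:.3e}")
```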
B. Results

The OPLL was evaluated through variations of the turbulence channel. The proposed digital discriminator was compared with the imaginary part of the fourth-power moment in Fig. 5 for a strong turbulence regime and a wind speed of 60 m/s. Each color corresponds to the measured SNR, as shown in Fig. 4 for strong turbulence, representing the fade values for which the system is analyzed over thirty different values, or thirty iterations. Fig. 5a and 5b show the VCO output frequency behavior using the digital discriminator (atan2) and the imaginary part (sine), respectively. The OPLL using the atan2 discriminator was analyzed under a delay of 30 ns, while the sine phase detector was under a total delay of 6 ns, since it became unstable for delays longer than 6 ns. Compared to the atan2, the sine discriminator exhibits a more oscillatory characteristic and a higher standard deviation (STD) of the controlled frequency. Additionally, the OPLL loses lock during deeper fading and is unable to recover the tracked signal, as shown by the deviation of the central frequency. The STD variation is related to the sine discriminator gain since, as shown previously, it depends on the system parameters. As the input power level changes due to the turbulence, the gain also changes, which is reflected in the OPLL phase error variance and in the controlled frequency STD. The proposed digital discriminator used the same gain of 1 for all turbulence scenarios. The atan2 has the advantage of being independent of the system parameters, since the computation of the phase error relates only to the I/Q signals. Consequently, it was possible to analyze the system under different turbulence scenarios without changing the OPLL design. Additionally, it shows a low STD of the controlled frequency, as seen in Fig. 5a. One of the most notable features of the atan2 discriminator is its ability to keep tracking the signal under longer feedback loop delays than the sinusoidal discriminator. Therefore, the atan2 performance under noise and its tracking ability make it an alternative for digital OPLLs. Both discriminators were also compared for other wind speed values and turbulence scenarios. The sinusoidal discriminator always loses lock during fading regimes, while the proposed discriminator keeps the system locked. The only condition where the sinusoidal discriminator maintained lock was when the system delay was equal to zero, i.e., an unrealistic condition. Due to the tracking capability of the proposed phase detector under fading regimes, the impact of wind speed on system performance was also studied. Fig. 6 shows the BER variation according to wind speed in three different turbulence scenarios (weak, moderate, and strong). Each point in Fig. 6 corresponds to the mean BER calculated over thirty different events. It was verified that the results are representative when simulating only thirty events; therefore, the error due to the number of simulated events can be neglected. The thirty events were selected as a compromise between result quality and simulation time. We notice that for wind speeds lower than 20 m/s, turbulence intensity contributes to the degradation of the transmitted data. However, the influence of turbulence decreases for wind speeds higher than 20 m/s. As seen in Fig. 6, when the wind speed is 60 m/s, the BER is mainly impacted by wind speed for all the turbulence conditions. In addition to the quality of the transmitted data, we notice that the stability of this quality degrades with wind speed.

IV. CONCLUSIONS

An all-digital OPLL was presented using a new phase discriminator method, which can track a received optical signal subject to turbulence effects. The phase discriminator is of a digital type that proved capable of tracking a signal under longer feedback loop delays and fading signals. Additionally, it showed a low standard deviation of the controlled frequency and low phase noise, making the digital phase discriminator an alternative for digital OPLLs. Thanks to its characteristics, it was possible to analyze the OPLL performance in several scenarios of atmospheric turbulence and wind speed values. It has been seen that the system deteriorates with higher wind speed and strong turbulence. For wind speeds higher than 30 m/s, wind is the leading cause of system degradation. During the simulations, it was observed that the scintillation index behavior is comparable to the obtained BER profiles.
Hence, the observed behavior is not caused by system limitations. Further studies will need to be conducted for a better understanding. Future work is underway to implement the proposed OPLL in hardware.

Fig. 1. Power level diagram for the FSO downlink. G_T and G_R are the transmitter and receiver telescope gains, respectively, and EIRP stands for Effective Isotropic Radiated Power.
Fig. 2. C_n^2 profile according to altitude.
Fig. 3. Block diagram of the implemented all-digital OPLL.
Fig. 4. SNR level variations for wind speeds of 5 m/s and 60 m/s under strong turbulence.
Fig. 5. VCO output frequency behavior under strong turbulence and a wind speed of 60 m/s: a) OPLL using the proposed digital phase detector, b) OPLL using the sinusoidal discriminator.
Fig. 6. BER variation (mean of 30 events) as a function of wind speed.

ACKNOWLEDGMENT
This work was supported by the National Institute for Space Research (INPE/Brazil), Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq/Brazil), and the French Innovation Defense Agency (AID) through the EPLO ("Emulateur de Propagation Libre Optique").
04117430
en
[ "phys", "spi" ]
2024/03/04 16:41:26
2022
https://hal.science/tel-04117430/file/DTIS22223.1666612934.pdf
Space manipulators make it possible to respond to a variety of problems in future space exploitation and exploration, such as on-orbit deployment, active debris removal or servicing operations. However, a difficulty in autonomously controlling space manipulator systems arises with large and light structures presenting flexible behavior. Flexible dynamics remain a challenging topic: their modeling is a first difficulty, while their couplings with the manipulator may deteriorate the control quality. This thesis addresses design and control problems related to an autonomous space manipulator equipped with kinetic moment exchange devices for spacecraft rotation control when dealing with system internal disturbances, model uncertainties and measurement errors. The modeling of the rigid-flexible dynamics of a multi-body system remains a challenging task, and a first contribution of this work is a generic modeling tool to derive the kinematics and dynamics of a rotation-free-floating Space Manipulator System (SMS) with flexible appendages. This analysis led to the main contribution of this thesis, namely the design and implementation of such a control scheme for On-Orbit Servicing operations. Thanks to the model, the proposed control includes the non-measurable states (i.e., flexibility) in the system decoupling and linearization, and the steering laws established are based on the Nonlinear Dynamic Inversion (NDI) framework, where observers are introduced to improve the quality of the linearization. In a first implementation, an Extended State Observer (ESO) has been involved to estimate the flexible dynamics. In a second step, the modeling uncertainties and measurement errors have been handled by the addition of a Nonlinear Disturbance Observer (NDO). Inter-dependencies between the observers and the control dynamics have motivated a simultaneous computation of their gains to improve system stability and control performance. This point has been achieved by the resolution of Linear Matrix Inequalities (LMIs) to guarantee stability with an appropriate Lyapunov function. In order to highlight the interest of the proposed scheme and validate our approach in a realistic environment, extensive tests of an on-orbit space telescope assembly use-case have been performed on a high-fidelity simulator.

I would like to start by giving thanks to my thesis supervisors, Mathieu Rognant and Yves Brière, with whom it was a great pleasure to work and from whom I learned a lot. The time you dedicated to me, your pedagogy and your kindness really helped me grow professionally. I really appreciated working among you in this motivating environment. I would also like to particularly thank David Henry and Benoit Clément for the time they devoted to re-reading this thesis and for the relevant comments they made to improve its content. I would also like to thank Paolo Gasbarri for having chaired the thesis jury, and Jurek Sasiadek, Hélène Evain and Vincent Dubanchet for their interest in my work and for the productive discussions.
I am also grateful to Jean-Marc Biannic for the time he took, his advice and his contribution to improving this work. I thank Daniel Alazard and Christophe Louembet for their general advice and for the time they took to participate in my thesis committee.
I am also grateful to Christelle Cumer for her great kindness and for the help she provided. As unbelievable as it may sound, there were also moments of fun and relaxation during the thesis, and I would like to give heartfelt thanks to the people who shared those moments with me. At ISAE-Supaero, I really enjoyed the game breaks even if I rarely won any of those games. So thanks to Alice, Félix, Florent, Guilia, Ilyes, Jason, Louis, Thomas, Valérian and everyone else who joined. On the ONERA side, I also enjoyed my time there, where I met hardworking and interesting people such as Arthur, Clément, Félix, Gustav, Julio, Lucien and Pierre-Julien, with whom I worked and shared a laugh, and a special thanks to the people with whom I had the chance to share the office at some point: David, Franca, Guido, Milo, Sovanna, Waly, William. Finally, I want to thank my family for their support over the years and my brother for his help in reviewing my English.

Introduction

Robotic systems are predicted a bright future in a variety of applications in space exploitation and exploration. Notably, space manipulator systems provide advantageous solutions to reduce crewed flight missions. Refueling or, more generally, servicing a satellite, on-orbit deployment and active debris removal are a few examples of applications that could benefit from the use of space manipulators. However, the need to develop autonomous control of these robotic systems arises as one of the main challenges. In space exploitation and exploration, structures too large to be self-deployed, such as space stations and telescopes, are more and more common. The ISS is the principal example of the key role of robotic systems in the expansion of a space structure. The Canadian Space Agency (CSA) has widely contributed to the improvement of robotic technologies through the years. In terms of in-space assembly, the first version of the Canadarm on the Space Shuttle helped with the installation of the docking module of the MIR space station in 1995 [START_REF] Hiltz | Canadarm: 20 years of mission success through adaptation[END_REF], and the second version of the Canadarm is still used for assembly purposes and has extended the scope of robotic utilizations. Besides assembly, assisting astronauts in extra-vehicular activities as well as inspecting, maintaining and repairing are the current applications of the Canadarm 2. Moreover, the increasing number of space objects, active or not, has raised concern as it threatens future launches. Two approaches are considered to improve the current situation. The first is the de-orbiting of space debris, for which manipulator systems provide the safest and most sustainable solution among the different methods considered (e.g., harpoon capture, drag augmentation, propulsion-based de-orbiting) [START_REF] Zhao | Survey on research and development of on-orbit active debris removal methods[END_REF]; [START_REF] Biesbroek | The clearspace-1 mission: Esa and clearspace team up to remove debris[END_REF].
The second approach aims at extending the lifespan of a defective satellite. The Hubble Space Telescope is the most iconic servicing mission, in which robotic manipulators assisted astronauts in four successful missions in 1993, 1997, 1999 and 2002 [Tat98]. Through the aforementioned examples of SMS use, one main advantage offered by space robots is their versatility. Nevertheless, space manipulators have been mostly tele-operated, either from the ground, the Space Shuttle or the space station. However, time-delayed communications represent one major problem for safely and precisely operating a manipulator in deployment, servicing or capture scenarios [START_REF] Luis F Penin | Teleoperation with time delay-a survey and its use in space robotics[END_REF]. Early studies have started to develop autonomous control technologies to perform challenging operations. The first demonstration mission without a crew was the Engineering Test Satellite No. 7 (ETS-VII) in 1997, which made it possible to experimentally validate technologies for both tele-operated and autonomous manipulator tasks [START_REF] Oda | Experiences and lessons learned from the ETS-VII robot satellite[END_REF]. Given these promising results, improving autonomous control technologies has remained an open research subject to fulfill the full potential of space robotics in space exploitation. The main remaining difficulties identified to improve autonomous technologies are the following [START_REF] Flores-Abad | A review of space robotics technologies for on-orbit servicing[END_REF]:

• The safe and reliable docking with a target presenting unknown physical properties (i.e., mass, inertia, velocities). Concerning ADR, tumbling objects are to be captured, and means to first estimate the target's states in the approach phase are one area to be improved [START_REF] Petit | Tracking complex targets for space rendezvous and debris removal applications[END_REF]. Secondly, stabilizing an uncontrolled malfunctioning satellite offers considerable challenges to safely absorb the grasping contact forces, as illustrated by the studies developed for the ENVISAT spacecraft de-orbiting effort [START_REF] Estable | Capturing and deorbiting Envisat with an Airbus Spacetug. Results from the ESA e. Deorbit consolidation phase study[END_REF]. The methods developed to reduce the undesired motions from the capture impact are mainly based on path-planning strategies, MPC or impedance control. Rybus et al. [START_REF] Rybus | Trajectory optimization of space manipulator with non-zero angular momentum during orbital capture maneuver[END_REF] established a control scheme for the manipulator system based on path-planning that optimizes the use of actuators while restraining the end-effector velocity in the capture phase. This preliminary work has later been extended with the introduction of a nonlinear MPC [START_REF] Rybus | Control system for free-floating space manipulator based on nonlinear model predictive control (NMPC)[END_REF]. Such methods rely on the modeling of the SMS and a relative knowledge of the target's states and physical properties, which makes them challenging to develop.

• To make space more sustainable, extension of mission lifespan can be provided not only by OOS missions but also by a reduced use of fuel-consuming actuators.
The use of electrical actuators is now mostly considered and could offer new control possibilities for SMS applications [START_REF] Wilde | Equations of motion of free-floating spacecraft-manipulator systems: an engineer's tutorial[END_REF]. One difficulty in the control of a robotic manipulator installed on a mobile base is to take all the couplings into account. Actively controlling the base during a manipulator task is essential to ensure the precision of its motions as well as to maintain a fixed spacecraft orientation for communication purposes. Colmenarejo et al. [START_REF] Colmenarejo | Methods and outcomes of the COMRADE project-Design of robust Combined control for robotic spacecraft and manipulator in servicing missions: comparison between between Hinf and nonlinear Lyapunovbased approaches[END_REF] established a robust control scheme, based on an H∞ synthesis for separate manipulator and base-thruster control, in order to cancel undesired couplings in the system.

• An additional difficulty is related to large and light structures, notably for in-space assembled spacecraft. With elements such as antennas, solar arrays or sun-shields, flexible behaviors are nearly unavoidable. Meng et al. [START_REF] Meng | Space robots with flexible appendages: dynamic modeling, coupling measurement, and vibration suppression[END_REF] studied the coupling between the end-effector motions and the flexible appendages after developing a model to describe the equations of motion of an SMS with flexible solar arrays. They established a coupling factor to adapt path-planning and control strategies. In a following study, these coupling factors are considered to reduce the influence of the end-effector on the vibrations during the pre-capture phase [START_REF] Meng | Vibration suppression control of free-floating space robots with flexible appendages for autonomous target capturing[END_REF]. However, these studies carry strong hypotheses to include the flexible dynamics. For this reason, the consideration of flexible appendages in an SMS has been limited to simple applications. Hirano et al. [START_REF] Hirano | Vibration suppression control of a space robot with flexible appendage based on simple dynamic model[END_REF] developed a simple dynamics model to design active vibration control. Such methods are viable for small satellites but are difficult to extend to current spacecraft too large to be self-deployed. To compensate for the challenging modeling, robust control strategies have been considered. Wu et al. [START_REF] Wu | Robust Anti-Disturbance Coordination Control for Space Manipulator Systems with Multiple Disturbances[END_REF] established an H∞ synthesis for a control coupled with a nonlinear observer in order to deal with unknown external disturbances and evaluate their influence on the manipulator.

• Another area of improvement concerns the possibility of performing different tasks during an SMS mission. The concept of the space-tug has raised interest as it fully benefits from robotic systems to fulfill the different needs of OOS [START_REF] Cresto | Reusable space tug concept and mission[END_REF]. Nonetheless, performing significantly different tasks will require robust control strategies to account for system variations. Rotation-free-floating SMS, which correspond to robotic manipulator systems with a base actuated by moment exchange devices, have yet to be widely studied to develop their potentially interesting features.
Moreover, with light elements constituting large space structures, modeling tools are needed to describe the equations of motion of a rotation-free-floating SMS with flexible appendages. In addition, high-fidelity simulation tools are required to develop and validate the proposed control laws as well as to analyze robotic systems in depth. In that matter, this study aims at providing the necessary tools for the control of a rotation-free-floating SMS with flexible appendages performing OOS operations. In order to develop the control schemes, the methodology followed is summarized as:

1. Chapter 1 provides an overview of the use of SMS through the years and of the control challenges, with a literature review. This chapter aims at establishing the scope of this study and identifying the areas of improvement.

2. Prior to introducing control schemes, chapter 2 establishes kinematic and dynamic models to describe the equations of motion of a rotation-free-floating SMS with flexible appendages. Two contributions result from this chapter: first, a formalism to integrate the flexible dynamics of appendages and, secondly, a Matlab-Simulink simulation tool that allows analyses and time-domain simulations of a studied robot.

3. In chapter 3, a common control of the base and manipulator is discussed. With novel kinematic indices, it becomes possible to quantify the influence of the different actuators in performing a given task. Besides allowing the pre-design of an SMS or the development of path-planning methods, such indices motivate and justify the interest of introducing a common base-manipulator control. Then, for OOS operations, a steering law for an SMS with flexible appendages is developed based on an NDI scheme. To ensure high performance, an ESO is additionally designed such that unmeasured states are included in the system linearization. A simultaneous control and observer gain synthesis is proposed through an LMI resolution to satisfy a precise velocity control and ensure stability of the closed-loop system.

4. Building on the promising results obtained in chapter 3, the following chapter extends the proposed control scheme with robustness criteria, such as variations of the system's mass distribution, modeling uncertainties and measurement errors. Thus, the joint-space control law developed in chapter 4 is based on a similar NDI-ESO structure with an additional NDO to take modeling and measurement errors into account. A new common observer and control gain synthesis is proposed and performed with an LMI resolution to satisfy the robustness of the method. The control is then validated through extensive simulations performed on an actual in-orbit deployment scenario of a space telescope.

Chapter 1

Space manipulator systems (SMS) refer in this chapter to one or several robotic arm(s) with multiple DoF that operate from a spacecraft base to perform a wide range of orbital applications. After providing a general review of the evolution of SMS over the last forty years, modeling and control methods are enumerated to highlight current limitations and sources of improvement. This chapter aims at introducing the motivations of this research study and its contributions to future SMS applications.

Literature overview

Overview of Space Manipulator Systems through the years

Background

Since November 13th, 1981 and the second mission of the Space Shuttle Columbia, the Shuttle Remote Manipulator System, also known as Canadarm, has inaugurated forty years of space manipulator missions.
This 6-DoF robotic manipulator, with a deployed length of 15.2 m, was developed by the Canadian Space Agency (CSA) and fulfilled ninety missions over its thirty years of exploitation. Among the most famous are the servicing of the Hubble Space Telescope [START_REF] Lyn | Computer vision systems for robotic servicing of the Hubble Space Telescope[END_REF] and the installation of the docking module of the MIR space station in 1995 [START_REF] Hiltz | Canadarm: 20 years of mission success through adaptation[END_REF]. Canada has notably contributed over the years to the development of technologies that diversify the scope of space manipulators' possibilities. Besides the Space Shuttle, Canadian manipulators have been built to perform satellite rendezvous and servicing missions [START_REF] Sallaberger | Robotic technologies for space exploration at MDA[END_REF], but their main contribution is the Space Station Mobile Servicing System (MSS), which provides the assets to assemble, transport, inspect and repair payloads in orbit. The MSS' main elements are the Space Station Remote Manipulator System (SSRMS), the Mobile Base System (MBS) and the Special Purpose Dexterous Manipulator (SPDM). The SSRMS is a longer and more capable version of the Canadarm, entitling it Canadarm 2, and has been present on the ISS since 2001. Thanks to the mobile platform, it can be deployed along the entirety of the ISS, and with the SPDM a number of tasks requiring precision are made possible. Improvements in mobility are obtained with the different grasping points that permit a relocation of the manipulator. Both Canadarms are perfect illustrations of the usefulness of SMS to improve and facilitate space exploitation and exploration. The human risk factor is one of the first reasons for the development of SMS. Since the tragic loss of Columbia in 2003, the Canadarm has allowed risks to be reduced by performing inspections of the Space Shuttle before re-entry. From a general standpoint, the Canadarms have assisted astronauts in their extra-vehicular missions and also reduced their number by performing the repetitive maintenance tasks of the ISS. Likewise, the Canadarms have largely contributed to the in-space assembly of space stations. On-orbit assembly is a major aspect of space exploitation for which SMS have proven their efficiency and feasibility. Another asset of SMS is the capture of spacecraft or payloads. For instance, Canadarm 2 is regularly used for the berthing of the different transfer vehicles. The interest in SMS has gone beyond Canadian frontiers, in particular for the ISS expansion. The Japanese space agency, JAXA, and the European one, ESA, have added their own manipulator systems to the space station. The European Robotic Arm (ERA) has allowed the assembly of the Russian module and remains in use for inspection and maintenance operations, in addition to assisting astronauts in extra-vehicular missions [START_REF] Didot | The era system: Control architecture and performance results[END_REF]. Regarding the Japanese Experiment Module Remote Manipulator System (JEMRMS), its first purpose was to assist during experiments on the Japanese module, Kibo [START_REF] Sato | JEMRMS design features and topics from testing[END_REF]. To further the purpose of reducing human risks and crewed flight, robotic systems have been studied and considered to perform tasks instead of or alongside astronauts. In that respect, humanoid robots have been built, such as DLR's Justin or NASA-DARPA's Robonaut I and II.
They aim at assisting astronauts during extra-vehicular operations and, more generally, at performing highly repetitive tasks in their place. In that regard, they are equipped with two manipulators mounted on their torso [Blu+03]; [START_REF] Ma Diftler | Robonaut 2-Initial activities on-board the ISS[END_REF]. The various uses of the robotic systems gathered on the ISS illustrate three major fields of space exploitation and exploration, namely capture of payloads, on-orbit assembly, and maintenance or servicing [START_REF] Acquatella | Development of automation & robotics in space exploration[END_REF]. To highlight the importance and usefulness of SMS in these fields, further examples are presented within this section.

Active Debris Removal

Besides capturing payloads or transfer vehicles, SMS have widely been considered to answer the space debris mitigation objectives. Since 1957 and the beginning of space exploitation with the launch of Sputnik, space debris have become more numerous than active satellites. In 1978, Donald J. Kessler raised an alarm bell with the "Kessler syndrome" about the risk of an exponential rise of space debris in LEO caused by a chain reaction of collisions [START_REF] Donald | Collision frequency of artificial satellites: The creation of a debris belt[END_REF]. To give an idea of the current situation, the following figures may help: in roughly sixty years of space activity, more than 5,300 launches have produced some 42,000 orbiting objects, half of which are regularly tracked by the US Space Surveillance Network, and only 1,200 of which correspond to active satellites. The size of those objects varies from millimeters to the size of a bus, as for instance the ENVISAT spacecraft. According to NASA, removing 1% of the space debris, or five moderate-risk debris per year, would be sufficient to stabilize their numbers in LEO [START_REF] Hugh | Synergy of debris mitigation and removal[END_REF]. Different solutions have arisen, such as tether-based methods, drag sails or laser-based methods [START_REF] Marco | Active space debris removal: A preliminary mission analysis and design[END_REF]; [START_REF] Braun | Active debris removal of multiple priority targets[END_REF]; [START_REF] Priyant | Review of active space debris removal methods[END_REF]; [START_REF] Shan | Review and comparison of active space debris capturing and removal methods[END_REF]; however, space manipulators present the safest solution [START_REF] Biesbroek | The clearspace-1 mission: Esa and clearspace team up to remove debris[END_REF] and versatility [START_REF] Bonnal | Space debris mitigation & remediation: a general update[END_REF]. Early studies and experiments have shown the feasibility of capturing cooperative targets, initiating the development of active debris removal technologies. In 1997, the Engineering Test Satellite VII (ETS-VII) was launched by JAXA, formerly the National Space Development Agency of Japan (NASDA), in order to perform demonstration experiments of docking maneuvers with a robotic manipulator [START_REF] Oda | Experiences and lessons learned from the ETS-VII robot satellite[END_REF]. Those experiments aimed at evaluating the improvements made since the SRMS as well as studying different tele-operation solutions and the possibility of autonomous capture. These seminal works have been followed by studies on the management of tumbling targets.
A space debris large enough to be de-orbited may originate from a loss of actuator control or from a collision leading to tumbling. An experimental mission planned for 2025, entitled Active Grabbing & Orbital Removal for Ariane (AGORA) [START_REF] Kumar | Agora: Mission to demonstrate technologies to actively remove Ariane rocket bodies[END_REF] and aiming at demonstrating technologies for autonomously removing Ariane rocket bodies, will allow an important step in space manipulator technology to be reached. Indeed, the grasping of non-cooperative debris requires high-performing vision technologies to safely proceed with the capture phase, as not only are no grasping features available, but an estimation of the relative dynamics is also needed. Adapted stabilization strategies can then be based on adapting to the target dynamics to ensure stability and safety [START_REF] Jankovic | Robotic System for Active Debris Removal: Requirements, State-of-the-art and Concept Architecture of the Rendezvous and Capture (RVC) Control System[END_REF]. Moreover, the robotic manipulator and the servicer's base control need to be complementary for the capture of a target whose dynamics may differ from what is expected in the mission analysis [START_REF] Ellery | Tutorial review on space manipulators for space debris mitigation[END_REF]. The Control and Management of Robotics Active Debris Removal (COMRADE) project has been developed by ESA with, in particular, the functionality of stabilizing both the servicer and the malfunctioning satellite to safely realize ADR [START_REF] Colmenarejo | Results of the COMRADE project: combined control for robotic spacecraft and manipulator in servicing missions: active debris removal and re-fuelling[END_REF].

On-Orbit Servicing

Before reaching the necessity of removing spacecraft from orbits of interest, maintenance and repair solutions have been proposed. Indeed, satellites may see their lifespan shortened simply because of a malfunctioning device or an obsolete payload. The concept of the space-tug, a spacecraft that performs orbit transfers and servicing tasks with the help of an SMS, has appeared as an interesting alternative to the different ADR solutions, as it offers the possibility of servicing or repairing a satellite instead of de-orbiting it [START_REF] Bonnal | Space debris mitigation & remediation: a general update[END_REF]. The DLR has been developing such a concept to deal with near-end-of-life satellites and has planned a demonstration mission, the Deutsche Orbitale Servicing Mission (DEOS). It will first perform the capture of a non-cooperative satellite with the servicer's manipulator; then, after a servicing application, the satellite will finally be de-orbited [START_REF] Reintsema | DEOS-the in-flight technology demonstration of german's robotics approach to dispose malfunctioned satellites[END_REF]. The de-orbiting of ENVISAT has been studied by Airbus and their partners with a similar concept. When dealing with large and tumbling satellites, the de-tumbling phase of the client satellite remains challenging, and the space-tug concept allows the servicer's actuators to be used to carry out such maneuvers [START_REF] Estable | Capturing and deorbiting Envisat with an Airbus Spacetug. Results from the ESA e. Deorbit consolidation phase study[END_REF]. Space robots have a key role to play in the lifespan extension and enhancement of on-orbit space structures.
After the five servicing missions of the Hubble Space Telescope to extend its lifespan, in which SMS were used to assist astronauts [START_REF] Joseph N Tatarewicz | The hubble space telescope servicing mission[END_REF], the experimental tests of capturing cooperative targets in the ETS-VII mission, and the assembly of the ISS, studies have been pursued to improve capture and servicing technologies [START_REF] Li | On-orbit service (OOS) of spacecraft: A review of engineering developments[END_REF]. In 2007, the Orbital Express mission established by DARPA and NASA aimed at developing a safe and cost-effective approach to autonomously servicing satellites in orbit. This demonstration mission illustrated the progress made in autonomously capturing a client satellite with the help of vision-based technologies [START_REF] Robert | Orbital express program summary and mission overview[END_REF]. Predefined tasks were defined and supervised from the ground to complete the refueling mission. DARPA is also contributing to the improvement of servicing technologies, for instance with the Spacecraft for the Universal Modification of Orbits (SUMO), a low-cost flight demonstration to help demonstrate autonomous servicing operations with a robotic manipulator [START_REF] Albert B Bosse | SUMO: spacecraft for the universal modification of orbits[END_REF]. Another DARPA project is the Robotic Servicing of Geosynchronous Satellites (RSGS) program, which is expected to demonstrate the reliability, feasibility and usefulness of robotic servicing in GEO over several years. As illustrated with DEXTRE, SMS with appropriate end-effectors offer sufficient versatility to successfully perform different missions and adapt to the servicing operations required. On-orbit servicing also includes refueling missions. For that purpose, the ISS manipulator has been updated for refueling missions to expand servicing possibilities [START_REF] Gefke | Advances in robotic servicing technology development[END_REF]. This Robotic Refueling Mission has tested different hardware technologies for different kinds of satellites, with some success in the first two phases, while the third phase encountered a loss of methane. Currently, NASA is preparing the launch of OSAM-1 (On-orbit Servicing, Assembly, and Manufacturing), previously known as Restore-L, with the ambition of autonomously refueling a US government satellite. The success of such a mission would be a giant leap for domestic servicing industries. Another side of the servicing aspect is the inspection of space structures. Besides the Canadarms and the safety inspection procedures before re-entry [START_REF] Gillett | A hybrid range imaging system solution for in-flight space shuttle inspection[END_REF], mobile manipulators have been considered [START_REF] Xu | Control system of the self-mobile space manipulator[END_REF]. For instance, the Skyworker prototype aimed at performing inspections for structure assembly [START_REF] Peter | Skyworker: a robot for assembly, inspection and maintenance of large scale orbital facilities[END_REF]. Servicing missions are varied, which requires versatile solutions; this highlights the growing interest in SMS to efficiently and durably pursue space exploitation and exploration [START_REF] Li | On-orbit service (OOS) of spacecraft: A review of engineering developments[END_REF].

On-Orbit Deployment

The recent past and future of space exploitation are being written with large infrastructures and spacecraft.
In the past, space stations have benefited from the use of SMS to constantly expand and upgrade on-board technologies [START_REF] Flores-Abad | A review of space robotics technologies for on-orbit servicing[END_REF]. Now, space structures could also benefit from SMS to reduce launch constraints. The difference between on-orbit assembly/deployment and on-orbit servicing can be summed up in the mission's purpose: the objective of assembly/deployment missions is to expand the space structure, while servicing operations aim at increasing a satellite's lifespan or modifying its use. Different SMS strategies are considered to deploy space structures on-orbit, using either an embedded manipulator or the one of a servicer satellite. However, the generalization of robotic methods to deploy or expand space infrastructures resides in the standardization of SMS features [START_REF] Piskorz | On-orbit assembly of space assets: A path to affordable and adaptable space infrastructure[END_REF]. The DLR is developing the Compliant Assistance and Exploration SpAce Robot (CAESAR) in that direction; it is a manipulator system that provides the necessary flexibility to perform different manufacturing and human-assistance missions [START_REF] Beyer | Caesar: Space robotics technology for assembly, maintenance, and repair[END_REF]. Besides space stations, space telescopes are perfect examples of future structures in need of means of deployment in their working orbit. For instance, the PULSAR telescope (https://www.h2020-pulsar.eu) illustrates the benefit of using a robotic manipulator to deploy a mirror too large to be self-deployed [START_REF] Rognant | Autonomous assembly of large structures in space: a technology review[END_REF]. The deployment of space antennas with the help of the satellite's manipulator system has likewise been studied [START_REF] Li | Assembly dynamics of a large space modular satellite antenna[END_REF], as it offers more versatility in the launch configuration options. With the ambition of facilitating and diversifying launch packing options, the Archinaut in-space manufacturing technology has been proposed; it will allow on-orbit structure deployment possibilities to be extended. Archinaut aims at deploying a wide range of space structures in order to simplify on-orbit assemblies [START_REF] Kugler | Applications for the archinaut in space manufacturing and assembly capability[END_REF]. Through the DARPA Phoenix program, cellularized satellite architectural units, or satlets, have been put in place in order to improve the degree of modularity in space structure assembly [START_REF] Melroy | DARPA phoenix satlets: Progress towards satellite cellularization[END_REF]. With the help of SMS, those modular systems can provide low-cost solutions for satellite construction and deployment, as well as versatility for redefining mission objectives [START_REF] Carl | The darpa phoenix spacecraft servicing program: Overview and plans for risk reduction[END_REF].

Next steps of improvements

Interest in space manipulators has not ceased to increase through the years, as they offer versatility and allow the current turnovers in space exploitation and exploration to be answered. However, SMS technologies still need to be improved in order to be viable solutions. One major improvement needed is the autonomous control of systems to perform capture and servicing tasks.
Most of the SMS presented have been tele-operated from ground stations or remotely controlled from the Space Shuttle or the ISS. The main issue is the communication delays that can be observed, as illustrated by the Robot Technology Experiment (ROTEX) that flew in 1993 with the Shuttle Columbia. ROTEX consisted in experimentally demonstrating tele-operation technologies from both the Shuttle and ground stations [START_REF] Hirzinger | ROTEX-the first remotely controlled robot in space[END_REF]. With regard to ground control, delay compensations of up to 7 seconds were key to the success of the predefined demonstration tasks [START_REF] Hirzinger | Telerobotics with large time delays-the ROTEX experience[END_REF]. In most cases for LEO applications, delays between five and ten seconds have to be taken into account [START_REF] Luis F Penin | Teleoperation with time delay-a survey and its use in space robotics[END_REF], which may be critical for capture missions or for manipulating payloads through restrained manipulator workspaces. Furthermore, with large and light structures, internal flexible vibrations may occur when manipulating a payload in a servicing or deployment mission. These undesired disturbances negatively impact the control performance of the manipulator. To ensure stability and mission success, improvements in autonomous control are mandatory. A first requirement for developing such control strategies is an accurate model of the SMS describing the behavior of robots evolving on-orbit. As further expanded upon in the following sections, the modeling difficulties lie in the integration of the flexible elements, which is subject to numerous hypotheses. Another point of consideration is the coordinated control of both the satellite's base and the manipulator to perform on-orbit tasks. It could bring efficiency in capture or deployment scenarios as well as ensure safe operations [START_REF] Li | On-orbit service (OOS) of spacecraft: A review of engineering developments[END_REF].

Modeling of Space Manipulator Systems

Space manipulators evolve in free-fall, which means that, contrary to a ground-based manipulator, every motion has an immediate opposite reaction on the floating satellite base. In this section, we review the different modeling methods that allow the kinematics and dynamics of such systems to be obtained as a function of their control mode. In space robotics, the kinematics refer to velocities, while the dynamics refer to the system's accelerations.

Control classification

Umetani et al. [START_REF] Umetani | Resolved motion rate control of space manipulators with generalized Jacobian matrix[END_REF] and Dubowsky et al. [START_REF] Dubowsky | The kinematics, dynamics, and control of free-flying and free-floating space robotic systems[END_REF] have proposed to distinguish space manipulator control according to whether the satellite base is controlled or not. This choice is justified by the consideration of the linear and angular momentum of the system. When the satellite's base is actively controlled with reaction jets, the SMS control mode is referred to as free-flying. It corresponds to a control mode in which the momentum is, by definition, not preserved, as the base actuators induce an external torque/force on the overall system.
On the other hand, if the base is left free of any control, the system is considered free-floating, which defines a control mode that ensures momentum conservation provided that, in addition, no external forces/torques are applied to the system. A deeper analysis of the control modes has been proposed by Wilde et al. [Wil+18] according to the choice of actuators used to control the spacecraft base's motions. The discussion, based on the momentum, separates actuators that generate external forces/torques (typically reaction-jet thrusters) from those that do not (control-moment gyroscopes, reaction wheels).

Conserved momentum: When space manipulators are composed of rigid links and are subject to neither external forces nor torques, the total momentum remains conserved. By decomposing the kinematic momentum into a linear and an angular contribution, two sub-modes of free-floating control can be derived [START_REF] Wilde | Equations of motion of free-floating spacecraft-manipulator systems: an engineer's tutorial[END_REF]: the floating and the rotation-free-floating. The floating mode defines an under-actuated control mode in which the 6 DoFs of the base spacecraft are left free of any control and only the manipulator's DoFs are actuated. In that case, each manipulator motion has a direct impact on the base motions. It is a direct interpretation of the conservation of the overall system momentum. For some applications it constitutes the main difficulty of the manipulator's control, and in some cases it is the basis of strategies to adjust the base attitude/pose by controlling the manipulator's joints. Spacecraft orientations are modified or impacted in the presence of torque-driven actuators in the manipulator, which often correspond to revolute joints. The three linear DoFs of the spacecraft may be affected through a manipulator's prismatic joints, which modify the system's mass distribution in consequence of the changes obtained with the revolute joints. The second sub-mode is the rotation-free-floating. In this mode, the spacecraft base's rotations are controlled with actuators providing internal torques. Typically, using momentum exchange devices such as reaction wheels or control-moment gyroscopes maintains momentum conservation by only affecting the angular kinematic moment. The three linear DoFs remain uncontrolled in this sub-mode. In the continuation of this document, free-floating and floating modes will be comparable, while rotation-free-floating and free-floating will designate the same control mode. Likewise, no distinction between the kinematic momentum and the system's momentum is made, as for such a system its expression is obtained as a function of the system's velocities.

Unconserved momentum: Momentum conservation holds as long as no external forces/torques are applied to the system. Similarly to the floating mode, a division of the free-flying control mode can be made according to the actuators that are present and whether they affect the angular and/or linear momentum of the system. One can define as rotation-free-flying, or rotation-flying, a space manipulator that is actively controlled and whose base has its three angular DoFs controlled by external torques, while the three linear DoFs are left free of any control. The control of rotations by external torques can be obtained with reaction-jet thrusters that produce a total null force. In that control mode, only the linear kinematic moment is conserved.
When the linear DoFs of the spacecraft are, in addition to the manipulator, actuated by external forces, the SMS is said to be translation-free-flying controlled. Reaction-jet thrusters may be used on the condition that they provide a total null torque. Moreover, if momentum exchange devices are used to control the angular spacecraft DoFs, the system remains translation-free-flying, or translation-flying. This control sub-mode thus refers to a system for which only the angular kinematic moment is conserved. A last sub-mode can be defined as flying, referring to an SMS with a base fully actuated by external forces and torques. In this last mode, the system momentum is time-varying.

Remark on external disturbances: As those considerations are made for rigid SMS subject to no external forces/torques, it is worth noting that the control mode classification holds under the usual approximations developed in the space manipulator control literature. For a space manipulator evolving in a free-fall environment, it will be assumed that the gravitational attraction, the environmental forces (solar radiation pressure) and atmospheric actions are neglected, allowing the system to be studied as an isolated one. This discussion allows an oriented frame centered at the CoM of the spacecraft to be defined with respect to an inertial coordinate system [START_REF] Wilde | Equations of motion of free-floating spacecraft-manipulator systems: an engineer's tutorial[END_REF].

Modeling of a rigid Space Manipulator System

The modeling of an SMS consists in developing the dynamics and kinematics of a system composed of one or multiple kinematic chains sharing a common moving base. In terms of space robotics, the dynamic model of the system defines the equations of motion expressed as functions of the accelerations, forces and torques. In addition, a kinematic model refers to the relation between the manipulator joint state and the end-effector expressed in Cartesian space, which corresponds to the satellite frame (R_sat). Moreover, evaluating the system from the accelerations and forces/torques is defined as the forward kinematics/dynamics and, reciprocally, the evaluation of the system inertia and momentum is referred to as the inverse kinematics/dynamics. Contrary to Earth manipulators, the end-effector position in the inertial frame (R_ine) depends not only on the manipulator's geometry and the joints' configurations but also on the system's inertia distribution. This results in the inverse and forward kinematic problems also being dynamic ones [START_REF] Flores-Abad | A review of space robotics technologies for on-orbit servicing[END_REF]. In a general way, the recursive Newton-Euler and the Lagrangian methods are the two main solutions to derive the equations of motion of a multi-body system [START_REF] Wittenburg | Dynamics of multibody systems[END_REF]; [START_REF] Xu | Space robotics: dynamics and control[END_REF]. The Newton-Euler formulation is derived from an interpretation of Newton's second law of motion. With a recursive approach, all the forces and moments acting on each body are enumerated to obtain the equations of motion. As elaborated by Lindberg et al. [START_REF] Robert E Lindberg | Kinematic and dynamic properties of an elbow manipulator mounted on a satellite[END_REF], the expression of the dynamic model is obtained as a function of the joints' forces/torques and displacements by eliminating the constraint forces acting between adjacent bodies using geometric properties.
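Whichever formulation is chosen, for a rigid free-floating SMS both approaches lead to equations of motion that can be written in a partitioned form. The following is only a hedged sketch with generic notation (H_b, H_m and H_bm denote the base, manipulator and base-manipulator coupling inertia matrices, v_b the base twist, q the joint coordinates, c_b and c_m the nonlinear velocity-dependent terms, and τ the joint torques; the exact symbols vary between the cited references):

\[
\begin{bmatrix} H_b & H_{bm} \\ H_{bm}^{T} & H_m \end{bmatrix}
\begin{bmatrix} \dot{v}_b \\ \ddot{q} \end{bmatrix}
+
\begin{bmatrix} c_b \\ c_m \end{bmatrix}
=
\begin{bmatrix} 0 \\ \tau \end{bmatrix},
\]

where the zero block on the right-hand side reflects the absence of base actuation and external wrenches in the free-floating mode; in the flying sub-modes, it is replaced by the wrench applied to the base.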
The Lagrangian approach is based on a work and energy description of the system's bodies using a set of generalized coordinates. It presents the advantage of providing a simpler and more systematic computation method. Moreover, the resulting dynamic model expression is established in a more compact form, as work-less forces and constraint forces are not taken into account: only kinetic and potential energies are [START_REF] Bruno | Robotics: modelling, planning and control[END_REF]. The dynamic model is nevertheless similarly expressed as a function of the manipulator's joint torques/forces and displacements. In comparison, the Newton-Euler method presents a more complex way of deriving the equations of motion; however, its main advantage lies in the recursivity of its approach. By expressing, for each body, all the applied forces/torques, in particular the ones between adjacent bodies, the Newton-Euler formulation makes it possible to elaborate plug-in methods to derive the forward dynamic model of a multi-body system [START_REF] Alazard | Linear dynamic modeling of spacecraft with various flexible appendages and on-board angular momentums[END_REF]. Likewise, this method is well suited, when all the forces/torques are known, to describe the system's motions in both an inertial frame and a Cartesian coordinate system. Contrary to ground-based manipulators, an SMS presents the particularity of developing dynamic singularities in addition to kinematic ones. A dynamic singularity occurs when the base is deprived of an attitude control system (i.e., a free-floating SMS): the lack of a means to compensate for the manipulator's motions leads to the impossibility of moving the end-effector in some directions [START_REF] Papadopoulos | Dynamic singularities in freefloating space manipulators[END_REF]. This common free-floating constraint is a direct consequence of the conservation of momentum from which the system dynamics are obtained. Both the linear and angular momentum equations are expressed as functions of the system's velocities; however, only the linear one yields a holonomic constraint. Such a constraint can be described by an equation relating the coordinates (and time) of the system. As the linear momentum equation corresponds to the motion of the CoM of the system, it can also be represented with the CoM's positions instead of its velocities; this implies its integrability and thus yields a holonomic constraint. On the other hand, the angular momentum cannot be represented by an integrable equation, leading to a non-holonomic constraint [START_REF] Nakamura | Nonholonomic path planning of space robots via bi-directional approach[END_REF]. Those constraints transpose the fact that, for free-floating manipulators, the dynamic couplings between the manipulator and the base make the end-effector pose dependent on the manipulator's trajectory and likewise on its velocities. An overview of methods to apprehend and take into account the dynamic coupling between the orientation and position of the manipulator's end-effector and the spacecraft's base of a free-floating SMS has been detailed by Flores-Abad et al. [START_REF] Flores-Abad | A review of space robotics technologies for on-orbit servicing[END_REF]. Additionally, as developed in Siciliano et al. [START_REF] Siciliano | Mobile robots[END_REF], a manipulator placed on a mobile platform can be modeled as a terrestrial robotic manipulator, which has motivated early studies. In the pioneering work of Longman et al.
[START_REF] Richard W Longman | The kinetics and workspace of a satellite-mounted robot[END_REF], methods to solve the kinematics and dynamics problem have been proposed. From the time history of the joints for a fixed base, the end-effector's position relative to the spacecraft is first established. Then, using the angular momentum conservation, the attitude and inertial position of the satellite are determined. With this method, they were able to obtain a feasible solution to the inverse kinematics problem for both the end-effector and the spacecraft's base attitude. It also provided an important analysis result: the manipulator's workspace is a sphere whose radius decreases monotonically with the manipulator's mass. Beside this work, other methods have later been developed to tackle the inverse kinematics/dynamics problem, whose difficulty resides in the additional resolution of inertial parameters. Among the different existing methods developed to obtain the equations of motion of a free-floating manipulator, the focus is placed on three major ones:

• The first one, proposed by Vafa and Dubowsky [START_REF] Vafa | On the dynamics of manipulators in space using the virtual manipulator approach[END_REF], is the Virtual Manipulator method. It consists of an idealized mass-less kinematic chain whose base is fixed in the inertial frame and whose end-effector coincides with that of the studied manipulator. When no external forces/torques apply to the system, the CoM is fixed in the inertial frame and the VM and the space manipulator share the same kinematic and dynamic properties. Thus, the VM is employed to develop the kinematics/dynamics model of the space manipulator without describing the equations of motion of the three DoFs of the base. This idealized mass-less model allows the system to be analyzed in simulation but cannot be physically built.

• To overcome the impossibility of experimentally building an equivalent ground-based manipulator, Liang et al. [START_REF] Liang | Dynamically equivalent manipulator for space manipulator system. 1[END_REF] introduced the DEM approach. Similarly to the VM, the first joint is a passive revolute one, the orientation of each joint is similar to that of the studied manipulator, and the links have the same lengths. In addition to modeling or solving the inverse kinematics/dynamics problem, the VM and DEM approaches are employed for workspace analysis [START_REF] Flores-Abad | A review of space robotics technologies for on-orbit servicing[END_REF].

• As the study of the kinematic model without any external forces/torques corresponds to the study of the end-effector motions in an inertial frame, the Jacobian matrix is used to describe the translation and rotation of the manipulator as a function of the joint angular positions. Umetani et al. [START_REF] Umetani | Resolved motion rate control of space manipulators with generalized Jacobian matrix[END_REF] have then proposed a GJM to solve the inverse kinematic problem. In addition to the joints' angles, the GJM includes the inertia parameters and therefore depends not only on the manipulator kinematics but likewise on the dynamics.
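Reusing the partitioned notation sketched earlier, the GJM construction can be summarized as follows (again only a hedged sketch, with J_b and J_m the base and manipulator Jacobians of the end-effector twist, and a zero initial momentum assumed):

\[
\dot{x}_e = J_b v_b + J_m \dot{q}, \qquad
H_b v_b + H_{bm}\,\dot{q} = 0
\quad\Longrightarrow\quad
\dot{x}_e = \big(J_m - J_b H_b^{-1} H_{bm}\big)\,\dot{q} = J^{*}\dot{q},
\]

so that the generalized Jacobian J* mixes kinematic (Jacobian) and dynamic (inertia) quantities; configurations where J* loses rank are precisely the dynamic singularities mentioned above.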
The GJM makes it possible to consider the dynamic singularities of a free-floating space manipulator, which makes this approach interesting for non-holonomic path-planning [START_REF] Nakamura | Nonholonomic path planning of space robots via bi-directional approach[END_REF] or to develop control strategies when the coupling between base and manipulator is a concern [START_REF] Dragomir | Reaction null-space control of flexible structure mounted manipulator systems[END_REF].
Extension with Multi-kinematic chains: The presence of a second manipulator provides more flexibility in the control and stabilization of the global SMS. Besides the classical Lagrangian and Newton-Euler approaches, two modeling methods have been developed to synthesize the kinematic model of a multi-arm SMS [START_REF] Flores-Abad | A review of space robotics technologies for on-orbit servicing[END_REF]. These two methods are the Barycentric Vector Approach and the Direct Path Method, whose implementations have been compared by Moosavian and Papadopoulos [START_REF] Ali | On the kinematics of multiple manipulator space free-flyers and their computation[END_REF]. The Barycentric Vector Approach is based on the study of the overall system's CoM and a set of body-fixed vectors. The CoM is used as a representative point for the system's linear motions, and the vectors are used to obtain the geometric configuration of the system and its mass distribution. This approach leads to the decoupling of the linear and angular motions from the dynamics equations when no external forces/torques apply to the system. The Direct Path Method likewise considers a representative point for the system's linear motions; however, this point can be chosen differently from the CoM. The Direct Path Method is better suited for modeling multi-manipulator systems in the presence of external forces/torques (i.e. free-flying SMS). Compared with the Barycentric Vector Approach, it provides simpler equation terms and requires less computation [START_REF] Ali | On the kinematics of multiple manipulator space free-flyers and their computation[END_REF].

Modeling of flexible appendages

Flexible appendages are and will be present in nearly all spacecraft and satellite structures. Solar arrays, sun-shields or even antennas are sources of flexible behaviors and, more generally, all light and large elements may exhibit flexible dynamics. An appropriate modeling of these dynamics is required to develop control strategies, as illustrated by the Hubble telescope attitude control, which suffered from a poor flexible modeling [START_REF] Carlton | Solar-array-induced disturbance of the Hubble space telescope pointing system[END_REF]. For space robotic systems and, from a general standpoint, the dynamic couplings between the different elements composing the spacecraft need to be considered in a control strategy such that stability is ensured. If there is no means to cancel the vibrations resulting from a maneuver of the manipulator or from environmental disturbances [START_REF] Zhong | Attitude control for flexible spacecraft with disturbance rejection[END_REF], then the loss of system controllability may be unavoidable, as the problem is an under-actuated one. For this reason, flexible dynamics modeling is a key to SMS control improvements and remains a challenging task given the complexity of space structures. For manipulator control purposes, first modeling approaches considered simple models.
As a first approach, the flexible beam model was considered in a planar problem [START_REF] Zarafshan | Manipulation control of a space robot with flexible solar panels[END_REF] and later developed to integrate the coupling with the rest of the system through Hamilton's principle [START_REF] Hirano | Vibration suppression control of a space robot with flexible appendage based on simple dynamic model[END_REF]; [START_REF] Liu | Modeling and observer-based vibration control of a flexible spacecraft with external disturbances[END_REF]. These models are suitable for small satellites and appendages in basic control applications. Moreover, as in most flexible modeling methods, the appendage dynamics are based on generalized damped spring/mass systems. These formulations are written as a second-order state representation involving the mass, damping and stiffness matrices of the coupled rigid-flexible system [START_REF] Henry | Model-based FDIR and fault accommodation for a rendezvous mission around the Mars planet: the Mars sample return case[END_REF]; [START_REF] Henry | A Class of Unknown Input Observers Under H ∞ Performance for Fault Diagnosis: Application to the Mars Sample Return Mission[END_REF]. In order to consider several flexible appendages connected to a rigid hub, multi-body modeling approaches have been developed. Those approaches offer the possibility of developing an independent model of each body and then connecting it to the rest of the structure. Recursive formulations were introduced early, based on variational and vector calculus [START_REF] Bae | A recursive formulation for constrained mechanical system dynamics: Part i. open loop systems[END_REF]; [START_REF] Kim | A recursive formulation for flexible multibody dynamics, Part I: open-loop systems[END_REF]. Deformation modes are employed to represent the relative elastic deformation of each body; then, adding the positions, velocities and accelerations of each joint, the equations of motion are recursively obtained from the system's graph definition. In most cases, the flexible properties of a body are obtained from a Finite Element Method (FEM) [START_REF] Peter | Finite element appendage equations for hybrid coordinate dynamic analysis[END_REF]. A first finite-element-based approach to establish multi-body models is the Finite Element-Transfer Matrix method [START_REF] Leckie | Transfer-matrix fundamentals[END_REF]. By associating with each body of the system a transfer function that describes its state at both connection points, the system model is obtained by multiplying these transfer functions. This approach is limited to the modeling of tree-like structures [START_REF] Rui | Automatic deduction theorem of overall transfer equation of multibody system[END_REF]. One drawback of FEM, however, is the large number of flexible DoFs it introduces, for which reduction methods have been developed [START_REF] Ma Dokainish | A new approach for plate vibrations: combination of transfer matrix and finite-element technique[END_REF]. Moreover, under restrictive assumptions on the spacecraft rotations or on the amplitude of the vibrations, a reduction of the overall number of flexible DoFs can be obtained.
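To fix ideas, the generalized damped spring/mass formulations mentioned above can be summarized by the following generic display (an illustrative form with hypothetical symbols, not the exact notation of the cited works). For a vector $\eta$ of modal amplitudes of an appendage cantilevered on the hub,

$$M_f\,\ddot\eta + \mathrm{diag}(2\xi_k\omega_k)\,\dot\eta + \mathrm{diag}(\omega_k^2)\,\eta = -L^{T}\,\ddot x_{base}$$

where $\omega_k$ and $\xi_k$ are the frequency and damping ratio of the $k$-th flexible mode, $L$ is a modal participation matrix and $\ddot x_{base}$ gathers the hub accelerations; a reciprocal $L\,\ddot\eta$ term feeds the flexible motion back into the rigid equations, which is precisely the rigid-flexible coupling discussed in this section.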
With a Rayleigh-Ritz discretization, the system's impedance matrix can be evaluated [START_REF] Sylla | Dynamics of a rotating flexible and symmetric spacecraft using impedance matrix in terms of the flexible appendages cantilever modes[END_REF], and according to the considered deformations, suitable discretization methods apply [START_REF] Ji | A novel nonlinear finite element method for structural dynamic modeling of spacecraft under large deformation[END_REF]. A second finite element modeling method is the Component Modes Synthesis [START_REF] Morris | Analytical methods in component modal synthesis[END_REF], which provides the advantage of condensing the matrix representation of each body [START_REF] Yu | Element-by-element model updating of large-scale structures based on component mode synthesis method[END_REF]. Similarly, from the effective mass/inertia matrix [START_REF] Imbert | Analyse des structures par éléments finis[END_REF] or impedance matrix [START_REF] Pascal | Dynamics analysis of a system of hinge-connected flexible bodies[END_REF] modeling methods, a spacecraft model can be obtained from individual body properties and parameters and constructed body-by-body [START_REF] Alazard | Linear dynamic modeling of spacecraft with various flexible appendages and on-board angular momentums[END_REF]. With a similar modeling technique, Guy et al. [START_REF] Guy | Dynamic modeling and analysis of spacecraft with variable tilt of flexible appendages[END_REF] proposed a generic and systematic multi-body modeling method to obtain, with a Linear Fractional Transformation, a spacecraft model that considers system uncertainties and is suitable for simulation and control validation. Likewise, with the dual quaternion formalism, a Linear Fractional Transformation of the COMRADE satellite with its flexible solar arrays and the presence of propellant sloshing has been developed for similar control purposes [START_REF] Henry | A 6-DOF sliding mode fault tolerant control solution for in-orbit autonomous rendezvous[END_REF]. An extension of these works to consider complex and multiple interconnections between the bodies composing the structure has been developed [START_REF] Sanfedino | Finite element based N-Port model for preliminary design of multibody systems[END_REF]. In a context of ADR, a recursive multi-body modeling approach has the advantage of including the target dynamics as a new manipulator link with respect to the existing topology [START_REF] Xu | Dynamics modeling and analysis of a flexible-base space robot for capturing large flexible spacecraft[END_REF]. In addition to the rigid couplings existing between the different SMS elements, studying the influence of flexible appendages may benefit the control strategies. A coupling index has been proposed by Meng et al. [Men+17] to relate the mutual and indirect effects between the end-effector and the flexible appendages. Adapted manipulator motions can then be defined to reduce vibrations before the capture of flexible targets [START_REF] Meng | Vibration suppression control of free-floating space robots with flexible appendages for autonomous target capturing[END_REF]. Studying the influence of a flexible target to be serviced by an SMS that likewise has flexible appendages allows pre-sizing of the mission spacecraft.
Under restrictive motion assumptions, the Lagrangian formalism can be developed to model the servicer and the towed satellite with their respective flexible elements, such that criteria on the target's mass allow the servicer to be dimensioned [START_REF] Vladimir | Dynamics, analytical solutions and choice of parameters for towed space debris with flexible appendages[END_REF].

Control methods of Space Manipulator Systems

Early studies

With the diversity of OOS (on-orbit servicing) missions, SMS control requires overcoming several challenges to perform autonomous tasks. A first control difficulty resides in the effects of the coupling between the manipulator's end-effector and its base. Early control methods were developed to move the manipulator while minimizing its impact on the base motions. For a free-floating satellite, the objective is to maintain the system's controllability, while for a free-flying robot it allows an overuse of the base actuators to be avoided. Longman et al. [START_REF] Richard W Longman | Satellitemounted robot manipulators-New kinematics and reaction moment compensation[END_REF] introduced SMS kinematics to develop joint adjustments accounting for base motions in the manipulator's control. A trajectory planning method was introduced by Dubowsky and Torres [START_REF] Dubowsky | Path planning for space manipulators to minimize spacecraft attitude disturbances[END_REF], based on an Enhanced Disturbance Map that provides manipulator motions with relatively low spacecraft disturbances. With the GJM introduced by Umetani and Yoshida and based on the conservation of angular momentum [START_REF] Umetani | Continuous path control of space manipulators mounted on OMV[END_REF], resolved motion rate and acceleration control based methods have been developed to take into account the dynamic interaction between the manipulator and the satellite base [START_REF] Umetani | Resolved motion rate control of space manipulators with generalized Jacobian matrix[END_REF]; [START_REF] Parlaktuna | Jacobian control for space manipulator[END_REF]. When the momentum conservation hypotheses are not applicable, the GJM approach has been extended to compute a manipulator trajectory to capture a tumbling satellite [START_REF] Seweryn | Optimization of the trajectory of a general free-flying manipulator during the rendezvous maneuver[END_REF]. From the manipulator Jacobian expression, Nenchev et al. [START_REF] Dn Nenchev | Reaction null-space based control of flexible structure mounted manipulator systems[END_REF] proposed a reaction null-space control method to decouple a free-floating manipulator from its base dynamics, and later extended the proposed method by decoupling the control tasks to simplify the dynamics expressions in the control law [START_REF] Dragomir | Reaction null-space control of flexible structure mounted manipulator systems[END_REF]. An adaptive version of reaction null-space control has been studied to include dynamic modeling uncertainties for a free-floating SMS [START_REF] Xu | Adaptive reactionless motion control for free-floating space manipulators with uncertain kinematics and dynamics[END_REF]. Likewise, with the fixed-base model of the manipulator, the DEM, adaptive control approaches can be assessed to perform joint-space control [START_REF] Parlaktuna | Adaptive control of free-floating space manipulators using dynamically equivalent manipulator model[END_REF].
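Since the reaction null-space idea recurs throughout this section, its core relation is recalled below in a standard compact form (a hedged sketch with generic symbols, not the exact notation of the cited works). With $H_{bm}$ the base-manipulator coupling inertia introduced earlier, any joint velocity of the form

$$\dot q_m = \big(E_{n_q} - H_{bm}^{+} H_{bm}\big)\,\zeta,\qquad \zeta \ \text{arbitrary},$$

where $E_{n_q}$ is the identity matrix and $H_{bm}^{+}$ a pseudo-inverse, satisfies $H_{bm}\,\dot q_m = 0$ and therefore transfers no momentum to the base: the manipulator moves while the base remains undisturbed.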
Control methods to reduce manipulator coupling disturbances on the base

Later studies have pursued the development of control laws to reduce the undesired impacts of the manipulator on the base of a servicer operating on-orbit. In particular for ADR applications, the impact caused by the capture of a target creates a disturbance torque on the manipulator with a direct influence on the servicer base. In that way, ADR studies have improved manipulator control methods, lowering the influence of the end-effector contact dynamics and minimizing the target momentum. An extension of the reaction null-space control method has been proposed by Jin et al. [START_REF] Jin | Reaction torque control of redundant free-floating space robot[END_REF] with the introduction of a reaction torque to create a base reaction-less and dynamic-singularity-free control of a free-floating manipulator.
Momentum accumulation: Most of the proposed Jacobian-based approaches assume a null initial system linear and angular momentum. If the overall momentum is initially non-null, it accumulates during the servicing mission and affects the manipulator's dynamics as a Centrifugal/Coriolis force. The direct consequence of this additional force/torque is that the end-effector cannot remain at a given position. A reaction null-space control method with a bias momentum approach was put in place by Dimitrov and Yoshida [START_REF] Nikolaev | Momentum distribution in a space manipulator for facilitating the post-impact control[END_REF] to take advantage of the initial angular momentum in the pre-impact capture phase. Moreover, for multi-arm SMS, a momentum redistribution approach offers efficient momentum reduction during the capture of a target presenting an unknown angular momentum [START_REF] Zhang | Coordinated stabilization for space robot after capturing a noncooperative target with large inertia[END_REF]. Another approach, proposed by Nanos et al. [START_REF] Nanos | On the dynamics and control of free-floating space manipulator systems in the presence of angular momentum[END_REF], consists in treating the non-zero angular momentum of a free-floating spacecraft as for a fixed-base terrestrial robot requiring gravitational compensation. However, these approaches constrain the manipulator workspace. A workspace adjustment has been proposed by Nanos and Papadopoulos [START_REF] Nanos | On the use of free-floating space robots in the presence of angular momentum[END_REF] to immunize the end-effector against the accumulated angular momentum of the free-floating SMS. Likewise, impedance control can include the momentum in the feedback linearization, similarly to an external force [START_REF] Nakanishi | Impedance control for free-flying space robots-basic equations and applications[END_REF]. Nevertheless, to avoid the exact feedback linearization of a torque control law of a free-floating SMS, an appropriate coordinate transformation allows the disturbances induced by the momentum to be removed [START_REF] Giordano | Dynamics and control of a free-floating space robot in presence of nonzero linear and angular momenta[END_REF]. Additionally, to avoid computing the Jacobian and globally solving the inverse kinematics problem, a dynamic formulation of the SMS has been proposed by Zhou et al. [START_REF] Zhou | Dynamic coupling analysis of multi-arm space robot[END_REF] in which the end-effector is considered as a virtual base.
The Lagrange multipliers method is employed to analytically obtain the joints' control torques such that the end-effector follows a desired trajectory while minimizing the base perturbations. Free-flying robots benefit from a minimization of base disturbances, as their rejection is performed with fuel-consuming actuators, which consequently impacts the mission's lifespan. For that purpose, fuel-efficient control methods have been proposed. An early study established an optimal control algorithm based on Pontryagin's maximum principle to minimize the fuel consumption of a free-flying spacecraft equipped with a jet thrust mechanism [START_REF] Sakawa | Trajectory planning of a free-flying robot by using the optimal control[END_REF]. Giordano et al. [START_REF] Giordano | Momentum dumping for space robots[END_REF] proposed extracting the accumulated momentum with the base actuators by decomposing external and internal forces, so that the end-effector deals only with internal ones. The momentum of a grasped object causes a shift of the manipulator's workspace, which can be tackled by controlling the system CoM, which corresponds to the center of the maximum reachable workspace. Giordano et al. proposed simultaneous control methods of the system CoM and of the end-effector to use the thrusters efficiently [START_REF] Giordano | Workspace fixation for free-floating space robot operations[END_REF]; [START_REF] Massimo | Coordinated Control of Spacecraft's Attitude and End-Effector for Space Robots[END_REF].
Detumbling strategies: The docking maneuver remains a challenging control task, in particular when the manipulator and the target present a non-null differential energy. Considering tumbling, or non-cooperative, targets has raised interest in ADR applications. A first strategy developed to reduce the impact of grasping is the impedance control of the end-effector. Nakanishi et al. [START_REF] Nakanishi | Impedance control for free-flying space robots-basic equations and applications[END_REF] proposed such a method to limit the impact of the contact force on the free-floating manipulator and to prevent the two spacecraft from pushing each other away. Impedance control is equivalent to a mass-damper-spring system fixed at a point in space. Yoshida et al. [START_REF] Yoshida | Dynamics, control and impedance matching for robotic capture of a non-cooperative satellite[END_REF] emphasized the dynamic conditions to satisfy a safe grasping, which depend on the inertial properties of the chaser spacecraft or on the impedance control. The main objective of an impedance control is to obtain smaller inertial characteristics of the servicer compared to the target's. As a result, the chaser will not significantly deflect the target. However, impedance control may present two main difficulties. Firstly, the drift of the chaser is relatively fast, and switching from base control to manipulator control may be frequent [START_REF] Giordano | Workspace fixation for free-floating space robot operations[END_REF]. In consequence, approaches considering the accumulated momentum may be more appropriate [START_REF] Giordano | Momentum dumping for space robots[END_REF]. Secondly, these approaches require estimations of the target's dynamic properties to be properly developed, as established in [START_REF] Alexander D Crain | Compliant Spacecraft Capture via a Nonlinear Disturbance Observer-based Impedance Controller[END_REF]; [START_REF] Flores-Abad | Compliant Force Sensor-less Capture of an Object in Orbit[END_REF].
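The mass-damper-spring analogy mentioned above corresponds to the classical target impedance relation, recalled here in a generic hedged form (the symbols are illustrative, not those of the cited papers):

$$M_d\,\ddot{\tilde x} + D_d\,\dot{\tilde x} + K_d\,\tilde x = F_{ext},\qquad \tilde x = x_e - x_d,$$

where $x_e$ is the end-effector pose, $x_d$ a virtual equilibrium pose fixed in space, $F_{ext}$ the contact wrench, and $M_d$, $D_d$, $K_d$ the desired apparent inertia, damping and stiffness. The design guideline discussed above amounts to choosing $M_d$ small with respect to the target's inertia, so that the contact deflects the chaser rather than the target.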
Adaptive control may present a solution to these estimation difficulties. As proposed by Nguyen et al. [START_REF] Nguyen-Huynh | Adaptive reactionless motion for space manipulator when capturing an unknown tumbling target[END_REF], during and after the capture of an unknown target, an adaptive algorithm may provide reaction-less motion of the chaser's base. Without information on the target dynamics, their proposed method, based on a recursive least-squares algorithm, allows the unknown parameters to be adapted so as to safely capture the target. The detumbling of a non-cooperative object, i.e. the operation of bringing it to a state of rest, has been studied by Aghili [Agh09b], who suggested a first optimal control aiming at minimizing the detumbling time while keeping the interaction torque between the manipulator and the target under a defined value. Based on Pontryagin's principle, an optimal path-planning problem is solved to ensure the condition on the interaction torque. The detumbling, during the post-grasping phase, is obtained with a coordination control of the combined system formed by the space robot and the target satellite. It aims at dumping the initial velocity of the tumbling satellite. In a more recent study by the same author, the optimal detumbling control strategy has been extended to achieve momentum dissipation of the grasped non-cooperative target in a time- and/or energy-efficient way while considering force/torque limitations [START_REF] Aghili | Optimal Trajectories and Robot Control for Detumbling a Non-Cooperative Satellite[END_REF]. It was also improved by Dubanchet et al. [START_REF] Dubanchet | Motion planning and control of a space robot to capture a tumbling debris[END_REF], who proposed an H ∞ synthesis to compute manipulator and base controllers suitable for on-board processors. Moreover, to deal with the model uncertainties of the target, Gangapersaud et al. [START_REF] Rabindra | Detumbling a non-cooperative space target with model uncertainties using a space manipulator[END_REF] preferred the use of force control to detumble the target without requiring the target's inertial parameters. Wang et al. [START_REF] Wang | Detumbling strategy and coordination control of kinematically redundant space robot after capturing a tumbling target[END_REF] developed the dynamics of the target and of the SMS so that a coordination controller could be put forward. Optimal detumbling and path-planning are obtained with quartic Bézier curves and an adaptive differential evolution algorithm under detumbling time constraints. As proposed by Aghili [START_REF] Aghili | Coordination control of a free-flying manipulator and its base attitude to capture and detumble a noncooperative satellite[END_REF], a coordination control allows the manipulator to track a defined path while dumping the initial velocity of the target and synchronously controlling the chaser's base attitude. Likewise, in the debris removal mission e.Deorbit, the chaser's rotational and linear DoFs and the manipulator are simultaneously controlled with a robust H ∞ control solution to safely detumble a large debris [START_REF] Colmenarejo | Results of the COMRADE project: combined control for robotic spacecraft and manipulator in servicing missions: active debris removal and re-fuelling[END_REF]; [START_REF] Fauré | AH ∞ Control Solution for Space Debris Removal Missions Using Robotic Arms: The ESA e. Deorbit Case[END_REF].
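The torque-bounded, time-optimal detumbling problem recurring in the works above can be stated schematically as follows (a hedged generic statement, not the exact formulation of any cited paper):

$$\min_{\tau(\cdot)}\ t_f \quad \text{subject to}\quad \dot x = f(x,\tau),\quad \|\tau_{int}(t)\|\le \tau_{max},\quad \omega_{tgt}(t_f)=0,$$

where $x$ gathers the coupled chaser-target states after grasping, $\tau_{int}$ is the interaction torque transmitted through the end-effector, and $\omega_{tgt}$ is the target angular velocity; applying Pontryagin's maximum principle typically drives the solution onto the torque-constraint boundary.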
Path-planning approaches: Without considering a specific SMS application or OOS task, path-planning approaches allow, in a general way, methods to be developed for reducing a manipulator's influence on its base, optimizing the workspace with collision-avoidance considerations, and using the actuators efficiently. Papadopoulos et al. [START_REF] Evangelos | Path Planning For Space Manipulators Exhibiting Nonholonomic Behavior[END_REF] studied the non-holonomic property of free-floating manipulators to obtain singularity-free paths, as the dynamic singularities are path-dependent. A reachable workspace is defined such that a Cartesian-space planning method provides manipulator motions that control the base orientation. This work was later extended to develop a path-planning method that simultaneously provides end-effector and spacecraft attitude control using only the manipulator's joint controls [START_REF] Papadopoulos | Smooth planning for free-floating space robots using polynomials[END_REF]. Path-planning methods aiming at reducing the manipulator's impact on the rest of the SMS have been studied for various applications. In capture scenarios, adaptive reaction null-space methods have been widely developed for free-floating manipulators, as they avoid the use of base actuators [START_REF] Ye | Research on Adaptive Reaction Null Space Planning and Control Strategy Based on VFF-RLS and SSADE-ELM Algorithm for Free-Floating Space Robot[END_REF]. More generally, methods based on the GJM have allowed the free-floating base disturbances to be reduced during the capture contact. As proposed by Hu et al. [START_REF] Hu | Minimum base attitude disturbance planning for a space robot during target capture[END_REF], the impacts are considered in their generalized mass Jacobian matrix. It then allows a base attitude disturbance ellipsoid to be introduced, used to define the impact direction and to minimize the base attitude disturbances in the control strategy. Moreover, developing path-planning methods allows the impact of the grasping to be reduced. As established by Zhang et al. [Zha+20], the manipulator's path is obtained with a particle swarm optimization so that the contact has a low influence on the base. Similarly, Yang et al. [START_REF] Yang | A robust and adaptive control method for flexible-joint manipulator capturing a tumbling satellite[END_REF] developed a strategy to synchronize the servicer manipulator's end-effector and the target, reducing their relative motions. Likewise, optimizations with multiple constraints have been developed to include the reduction of the coupling influences, the actuator limits, or the system's velocities. These optimizations result in appropriate manipulator trajectories that can be adapted according to the servicing application [START_REF] Lu | Trajectory Planning of Satellite Base Attitude Disturbance Optimization for Space Robot[END_REF]. Trajectory optimizations also provide the advantage of a reduced use of the available actuators, as developed in [START_REF] Rybus | Control system for free-floating space manipulator based on nonlinear model predictive control (NMPC)[END_REF]. Furthermore, momentum accumulation or external forces may impose an unwanted end-effector motion. A solution of the inverse kinematics may therefore be required when the manipulator is in a near-singularity configuration, as it could saturate its actuators. In order to avoid dynamic singularities, Nanos et al.
[START_REF] Nanos | Avoiding dynamic singularities in Cartesian motions of free-floating manipulators[END_REF] studied the Cartesian trajectory planning of free-floating SMS such that, for a given end-effector trajectory, an initial manipulator configuration is defined that results in non-singular configurations during the end-effector motion. The proposed method advantageously allows an initial angular momentum to be considered. Wang et al. [START_REF] Wang | Optimal trajectory planning of free-floating space manipulator using differential evolution algorithm[END_REF] benefit from the manipulator's redundancy, which offers infinitely many solutions to the inverse kinematics problem, to optimize trajectories. With null-space vectors, they use Bézier curves to represent the joints' trajectories while minimizing their range and rate, or to consider singularity-free trajectories. Likewise, additional constraints such as base-disturbance minimization and collision-avoidance criteria can be included. A more recent study proposed by Shao [START_REF] Shao | Nonsingular terminal sliding mode control for free-floating space manipulator with disturbance[END_REF] tackles the singularity-avoidance problem by introducing a nonsingular terminal sliding surface with a terminal sliding mode controller. It also provides a quick convergence time. Considering free-flying robots, Seddaoui et al. [START_REF] Seddaoui | Combined nonlinear H ∞ controller for a controlled-floating space robot[END_REF] proposed a common trajectory planning for collision-free and singularity-free paths with fuel-consumption optimization constraints. Fuel consumption, which also goes hand in hand with the mission lifespan of a free-flying SMS, can be tackled by adding constraints to the path-planning optimization problem, as proposed by Breger et al. [START_REF] Breger | Safe trajectories for autonomous rendezvous of spacecraft[END_REF]. Their solution is to solve a non-convex problem formulation to halve the fuel consumption of the satellite compared with a convex formulation of the problem. Similarly, adding constraints on the actuator capacities and on the end-effector in the optimization allows manipulator trajectories to be computed such that the impact with the target to be captured occurs in an energy-efficient way and without collision [START_REF] Lampariello | Motion planning for the on-orbit grasping of a noncooperative target satellite with collision avoidance[END_REF].

Base disturbances control

In-space robots are affected by multiple disturbances that mainly impact the spacecraft base attitude control. Two categories of perturbations can be identified: the external ones, such as atmospheric drag, gravitational fields and solar radiation, which induce external torques and forces on the global system; and the internal ones, such as flexible vibrations and perturbations from couplings within the system. They either extend the modeling formulation or transpose physical constraints such as the kinetic momentum expression. In this section, the emphasis is placed on the internal disturbances, which are a rising concern with the current and future space structures, which mostly exhibit flexible behaviors. Moreover, the study is restricted to flexible elements outside the manipulator's kinematic chain.
Active control: As illustrated by the degradation of the Hubble space telescope attitude control due to its flexible solar arrays [START_REF] Carlton | Solar-array-induced disturbance of the Hubble space telescope pointing system[END_REF], early studies showed interest in the control of flexible disturbances. When the application enables active vibration suppression, different actuators have been studied and proposed. A first approach is the use of an active joint between the flexible appendage and the rigid hub. Hirano et al. [START_REF] Hirano | Vibration suppression control of a space robot with flexible appendage based on simple dynamic model[END_REF] proposed a virtual joint model to actively control the flexible appendage and reject the vibrations due to the coupling with the manipulator. As flexible deformations are generally not measurable, an estimation of the flexible states is included in the control law using a force/torque sensor between the hub and the appendage. Likewise, Ataei et al. [START_REF] Mahdi | Boundary control design for vibration suppression and attitude control of flexible satellites with multi-section appendages[END_REF] studied the use of an elastic connection between the flexible appendage and the rigid satellite base. Boundary controllers are then designed to ensure asymptotic stability of the satellite attitude control without accurate system modeling. In a second study, in which no manipulator is considered, permanent magnet synchronous motors are used to actively reduce the flexible modes during the assembly of solar arrays [START_REF] Guo | Active control technology for flexible solar array disturbance suppression[END_REF]. Another method to actively reject flexible disturbances is the use of piezoelectric actuators positioned on the appendage [START_REF] Hu | Vibration suppression of flexible spacecraft during attitude maneuvers[END_REF]. Such methods assume that sensors are available to measure the deformations. When the flexible modes are not measurable, however, state observers are included in the control law structure [START_REF] Cao | Flexible satellite attitude maneuver via constrained torque distribution and active vibration suppression[END_REF]. Otherwise, the placement of piezoelectric actuators and sensors has been studied by Angeletti [Ang+21] in order to perform precise pointing of the base attitude without reaching the flexible deformation thresholds. Different strategies have considered the use of control moment gyroscopes distributed over the flexible structures. Hu and Zhang [START_REF] Hu | Attitude control and vibration suppression for flexible spacecraft using control moment gyroscopes[END_REF] developed an adaptive controller for vibration suppression in addition to a nonlinear controller that provides the desired control inputs for large-angle three-axis attitude maneuvers. Likewise, Jia et al. [START_REF] Jia | Maneuver and active vibration suppression of free-flying space robot[END_REF] proposed a two-controller strategy for a flexible manipulator with control moment gyroscopes. They first decouple the system dynamics into a slow and a fast subsystem, such that the flexible displacements correspond to the fast subsystem and the manipulator's rigid motion to the slow one.
Moreover, decomposing the flexible dynamics into a fast subsystem also aims at reducing the non-measurable states, as developed in the singular perturbation method detailed in [START_REF] Yu | Active control of a 6-DOF space robot with flexible panels using singular perturbation method[END_REF].
Passive control: Active control offers answers for simple systems exhibiting flexible behaviors; however, it remains hardly implementable for the large structures of future SMS applications. As a first approach to avoid exciting the flexible modes, the input control torques can be filtered. Chu et al. [START_REF] Chu | Vibration control of maneuvering spacecraft with flexible manipulator using adaptive disturbance rejection filter and command shaping technology[END_REF] proposed an adaptive disturbance rejection filter combined with an optimal input-shaping control law to identify and adapt the disturbance rejection. An adaptive notch filter is additionally developed with the closed-loop system dynamics. As the main difficulty in controlling free-floating robots lies in the couplings, different studies have developed control strategies based on the quantification of the undesired system couplings. For instance, the relation between the manipulator and the sloshing of a fuel tank has been studied by Rackl et al. [RGL18], while Meng et al. [START_REF] Meng | Space robots with flexible appendages: dynamic modeling, coupling measurement, and vibration suppression[END_REF] proposed coupling indices between flexible appendages and manipulator motions. With these coupling indices, a hybrid control method for vibration rejection was formulated by the same authors [START_REF] Meng | Vibration suppression control of free-floating space robots with flexible appendages for autonomous target capturing[END_REF]. Furthermore, other disturbances may act on an SMS, requiring different control strategies. As discussed in [START_REF] Cao | Thermal alternation induced vibration analysis of spacecraft with lateral solar arrays in orbit[END_REF], external disturbances also impact the system, making the system modeling too difficult for active control methods to be deployed. Besides the unknown external perturbations, the modeling of the SMS, with or without flexible appendages, may be challenging. For those reasons, robust control methods have been employed to perform accurate manipulator and base control. Studies have tackled model uncertainties with H ∞ controllers. As established in [START_REF] De Fpa Taveira | Adaptive nonlinear H ∞ controllers applied to a free-floating space manipulator[END_REF], in addition to the nonlinear H ∞ controller, a complementary neural network has been developed to compensate for model inaccuracies as well as to reject external disturbances. Zhongyi et al. [START_REF] Zhongyi | Disturbance observer-based robust control of free-floating space manipulators[END_REF] developed a disturbance observer in the system decoupling. The observer is developed with the VM approach in order to include model uncertainties, which are considered as lumped disturbances in the joint space. Likewise, Qiao et al. [START_REF] Qiao | High-precision attitude tracking control of space manipulator system under multiple disturbances[END_REF] proposed the use of a disturbance observer to deal with the vibrations of the flexible appendages, while an H ∞ controller is designed to compensate for the SMS's modeling uncertainties. Moreover, H ∞ synthesis allows parametric variations to be considered with an LFT (Linear Fractional Transformation) representation. Colmenarejo et al.
[Col+20] tackled flexible uncertainties with such methods in order to perform capture tasks in the presence of flexible appendages. Zhang et al. [START_REF] Zhang | Robust adaptive control of a free-floating space robot system in Cartesian space[END_REF] developed an adaptive trajectory tracking control for free-floating SMS. The proposed control method allows both parametric and non-parametric uncertainties to be considered, as well as unknown bounded external torques. Chu et al. [START_REF] Chu | Fuzzy adaptive disturbance-observerbased robust tracking control of electrically driven free-floating space manipulator[END_REF] extended the combined DO (Disturbance Observer) and feedback control method with a fuzzy logic system that tunes parameters online to improve perturbation rejection. Zhou et al. [START_REF] Zhou | Robust prescribed performance tracking control for free-floating space manipulators with kinematic and dynamic uncertainty[END_REF] incorporated a linear switching surface in the controller design to deal with the dynamic uncertainties in their proposed tracking control method for free-floating robots.
Use of base actuators: The lifespan of a mission is one criterion for future spacecraft, and for that purpose alternatives to jet thrusters are studied. Electrical actuators are then favored for the spacecraft attitude control. Giordano et al. [START_REF] Giordano | Coordination of thrusters, reaction wheels, and arm in orbital robots[END_REF] proposed a common control of thrusters and reaction-wheels such that only the critical moments of the capture of a non-cooperative target are handled with the thrusters. Reaction-wheels are also considered as electrical kinetic-moment exchange devices allowing the spacecraft base to be controlled during manipulator motions without affecting the mission lifespan. Shi et al. [START_REF] Shi | A robust attitude controller for a spacecraft equipped with a robotic manipulator[END_REF] developed the dynamic model of an SMS with reaction-wheels to propose a controller robust to model uncertainties in order to maintain the spacecraft attitude. To overcome reaction-wheel torque limitations, Li et al. [START_REF] Li | Motion planning and coordination control of space robot using methods of calculated momentum[END_REF] discussed the interest of single-gimbal control moment gyroscopes during manipulator motions, as they provide higher control torques than reaction-wheels. Likewise, Wu et al. [START_REF] Wu | Attitude control for on-orbit servicing spacecraft using hybrid actuator[END_REF] considered the combination of reaction-wheels and control moment gyroscopes to keep the satellite platform fixed during manipulator operations. Developing a null-motion between the actuators allows the saturation of the reaction-wheels and the inherent geometric singularity of the gyroscopes to be dealt with. Antonello et al. [START_REF] Antonello | Dynamics and control of spacecraft manipulators with thrusters and momentum exchange devices[END_REF] discussed the different coordinated control methods to control both the manipulator and its base. With the recursive Newton-Euler approach, control moment gyroscopes are incorporated in the SMS dynamics, and discussions are made on the additional use, or not, of thrusters.

Summary and identified areas of improvement

SMS have attracted an increasing interest in space exploitation and exploration during these last decades; however, substantial improvements are still required for them to be considered as serious solutions.
A first improvement consists in developing autonomous space robots to perform highly repetitive tasks, allowing manned missions to be reduced. However, difficulties in developing control laws arise with the complex modeling of space structures. Indeed, flexible appendages are frequent in satellites and space structures and need to be taken into account in SMS studies. As space manipulators operate in free-falling environments, the different couplings between the elements that compose the system have to be considered in the control applications. Besides the coupling between the manipulator and its base, which is either controlled or left free to rotate and drift, the manipulator's motions induce flexible vibrations for which it is difficult to implement active control. From the literature review, we identified the following axes of improvement:
• Development of analysis and simulation tools in order to represent and study the behavior of SMS with flexible appendages.
• Definition of SMS control laws such that a manipulator can precisely perform on-orbit servicing tasks in the presence of disturbances induced by flexible appendages. Inclusion of base actuators to attenuate the base disturbances due to the different system couplings and external perturbations.
• Consideration of a first robustness criterion due to modeling uncertainties and sensor measurement errors.
• Consideration of a second robustness criterion on the inertia distribution changes. SMS are meant to carry out different operations, which leads to system variations. These variations are mostly inertia ones and, depending on the system's velocities, the couplings between the different SMS elements may need to be considered.

Chapter 2
Development and validation of analysis and simulation tools

Developing analysis and simulation tools for SMS with flexible appendages

Problem statement and challenges

Problematic of the study

The objective of the study is to develop steering laws for space manipulator systems performing OOS operations. An SMS defines a system composed of at least one satellite base and one robotic manipulator. The study focuses on rotation-free-floating SMS with a unique manipulator. Such robotic systems have shown a growing interest to perform multiple and various tasks. As defined in the previous chapter, a rotation-free-floating, or free-floating, system refers to an SMS whose satellite's three rotational DoFs are actively actuated by electrical actuators. In this work, reaction-wheels are selected. Moreover, with the objective of not impacting the mission's lifespan, the three linear DoFs of the base, often controlled with reaction-jet actuators, will not be actively actuated in this study. Therefore, adapting a multi-body modeling formalism to integrate and describe the dynamics of space manipulators with reaction-wheels in the satellite base is a first concern of the study. As identified in the previous literature overview, flexible appendages will be systematically present in applications involving SMS. Flexible modeling is a second focus of the study, as it remains challenging. As active control is difficult to conceive, the flexible modeling could bring significant advantages in the control design process. Thus, deriving the dynamic model of a rotation-free-floating SMS with flexible appendages would provide new control approaches adapted to internal disturbances. An SMS evolves in a free-falling environment, leading to multiple couplings within the overall system.
Studying and evaluating those couplings is necessary to ensure the stability and feasibility of manipulator motions to perform a given task. Developing means to analyze the couplings and, more generally, to evaluate and study the feasible motions and configurations that an SMS can possibly achieve is another focus of development in this work. Besides, the tools are introduced such that control laws can be developed and validated with time-domain simulations. To summarize, analysis and simulation tools for rotation-free-floating SMS with flexible appendages have been elaborated and are described in this chapter, so that the control and analysis problems can be introduced.

Modeling of rigid SMS

SMS modeling refers to the derivation of both a dynamic and a kinematic model of the robot. A kinematic model gives the relations between the manipulator's joint states and the end-effector in the satellite frame. A dynamic model provides the system's equations of motion in terms of accelerations, forces and torques. A particularity of space robots is that the inverse/forward kinematics problem is likewise a dynamic one [START_REF] Flores-Abad | A review of space robotics technologies for on-orbit servicing[END_REF]. In that matter, different modeling strategies have been developed depending on the SMS application. Early studies focused on performing manipulator maneuvers without moving the satellite base. In addition to that objective, path-planning approaches have considered singularity-free motions. Another particularity of space robots is the possibility of reaching dynamic singularities. A dynamic singularity corresponds to a configuration where the system controllability is lost. Therefore, the initial modeling approaches were established with a focus on developing feasible manipulator motions resulting in a low impact on the base rotations. The VM and DEM modeling methods, respectively introduced by Vafa and Dubowsky [START_REF] Vafa | On the dynamics of manipulators in space using the virtual manipulator approach[END_REF] and Liang et al. [START_REF] Liang | Dynamically equivalent manipulator for space manipulator system. 1[END_REF], provided modeling methods inspired by ground-based manipulators. In both approaches, a fixed satellite base is considered to develop a kinematic/dynamic model for control purposes and workspace analysis. A third approach, introduced by Umetani [UY89], is the GJM. Based on the conservation of an initially null kinetic momentum, the GJM allows the relation between the manipulator's end-effector and the joints to be expressed in the Cartesian space. Such expressions are employed for workspace analysis, path-planning and end-effector control of free-floating SMS. Nevertheless, these first approaches are advantageous for simple manipulator systems and are less adapted to provide solutions for actual robots, for which multiple kinematic chains may have to be considered. Two main modeling methods are then developed to obtain a kinematic/dynamic model of an SMS: the Newton-Euler and the Lagrangian formalisms. The Newton-Euler approach, which is a direct interpretation of Newton's second law, formulates the dynamic model by considering, for each body of the system, the applied forces and moments. The recursive nature of the method is a first advantage for its implementation [START_REF] Virgili-Llop | Spacecraft robotics toolkit: an open-source simulator for spacecraft robotic arm dynamic modeling and control[END_REF].
Moreover, the independent consideration of each body allows more flexibility in the modeling process and leads to the possibility of developing plug-in approaches [START_REF] Alazard | Linear dynamic modeling of spacecraft with various flexible appendages and on-board angular momentums[END_REF]. Thus, for instance, closed kinematic chains and additional elements connected to the end-effector can be studied. The Lagrangian formalism provides a simpler and systematic modeling method based on the energies and works present in the system. Regarding the energies of a space robot, only the kinetic ones are considered, which makes the Lagrangian approach an easier method to derive a dynamic model [START_REF] Wilde | Equations of motion of free-floating spacecraft-manipulator systems: an engineer's tutorial[END_REF]. It also provides a discussion of the system momentum, which may be used as a starting point for control development. With rigid systems, and assuming no external interactions, the momentum conservation is a means of state-parameter estimation to retrieve the physical properties of the SMS [START_REF] Christidi-Loumpasefski | On parameter estimation of space manipulator systems using the angular momentum conservation[END_REF]. In this way, it is a common strategy to model a rigid multi-body system evolving in a free-fall environment by initiating a Lagrangian formalism. In this chapter, the integration of base actuators is based on the consideration of additional kinematic chains. In that matter, the energetic approach of the Lagrangian formalism provides a simple method to include additional elements in the overall SMS.

Modeling of flexible appendages

To take an analogy with ground-based robotic manipulators, usual control methods are based on an accurate modeling to linearize and decouple the actuators from each other. In that philosophy, integrating the flexible dynamics onto the rigid ones developed with a Lagrangian approach would be a notable starting point to develop efficient SMS steering laws. Nonetheless, flexible modeling represents a challenging task considering the complexity of space structures. From a finite element method, the flexible displacement DoFs can be described with a second-order transfer function corresponding to a generalized damped spring/mass system. This representation often involves a large number of DoFs, which is a primary concern when modeling a spacecraft with flexible elements. Multiple methods have been established to reduce the number of flexible DoFs [START_REF] Leckie | Transfer-matrix fundamentals[END_REF]; [Dok72]; [SA08]; [JL21]. With a reduced flexible model of the appendages, the integration into the overall rigid SMS is the main challenge. Studies have considered simplistic models, based on a flexible beam with a mass at its end, to capture the couplings between the appendage and the satellite base [START_REF] Hirano | Vibration suppression control of a space robot with flexible appendage based on simple dynamic model[END_REF]; [START_REF] Liu | Modeling and observer-based vibration control of a flexible spacecraft with external disturbances[END_REF]. Newton-Euler approaches are suited to integrating flexible dynamics, as a force/moment balance is listed for each body, so that the connection between two bodies is made by summing the forces and torques at the junction.
Such methods have been put forward by different authors [START_REF] Rui | Automatic deduction theorem of overall transfer equation of multibody system[END_REF]; [START_REF] Sanfedino | Finite element based N-Port model for preliminary design of multibody systems[END_REF]. Nevertheless, a drawback of the methods developed in the literature is the use of linearized models. Such modeling allows control laws to be elaborated; however, it may restrict analysis and simulation applications, as the linearizations mainly assume base velocity limitations.

Modeling objectives

Regarding the new needs in SMS uses, different improvements of the existing modeling tools are required. A first area of improvement is the need for new control methods of SMS to perform various and longer operations during their exploitation time. In this study, the focus is placed on rotation-free-floating spacecraft whose base rotations are controlled with reaction-wheels. With a preferred Lagrangian formalism, these actuators are included in a multi-body, multi-kinematic-chain rigid system to describe the kinematics/dynamics of rotation-free-floating SMS. Furthermore, integrating flexible bodies raises the main challenges. For control purposes, linear models have been used to synthesize control gains. However, such modeling techniques lead to the development of robust control strategies, as hypotheses and uncertainties may be numerous. To limit the modeling constraints and hypotheses, in the present work a Lagrangian approach is developed to include the dynamics of the SMS's flexible appendages onto the multi-body rigid ones. The developed modeling tools should firstly give the possibility of performing fine analyses of a rotation-free-floating SMS with flexible appendages. For this matter, the formalism to derive a kinematic/dynamic model is first detailed, based on a Lagrangian approach and with the help of the DH (Denavit-Hartenberg) parameters. At the conclusion of this first non-trivial contribution, tools to analyze and study the behavior of flying and floating SMS with flexible appendages are introduced. Developed in Matlab-Simulink, kinematic and dynamic functions are established to obtain the respective direct and inverse models. With the Simulink models, time-domain simulations are possible. Discussions on the uses and performances of these tools are provided in the following chapter. The chapter concludes with an illustration of the possible analyses that can be performed with the present tools.

Modeling of a rigid multi-body free-floating SMS

In this section, the rigid multi-body kinematics and dynamics are developed with the objective of describing the behavior of free-floating and free-flying space robots. As previously defined, a kinematic model is the expression of a manipulator's end-effector angular and linear velocities as functions of its joints' poses expressed in the robot's frame. The differential kinematics, or system's dynamic model, is the expression of the relations between the accelerations, forces and torques of the system's states. As the Lagrangian formalism provides a simpler and systematic method to derive an SMS dynamic model, in this section the study of a rigid multi-body system with multiple kinematic chains is developed with this approach. An extension of the Lagrangian formalism is then developed to integrate the flexible dynamics of the SMS appendages.
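Before detailing the formalism, it is worth recalling the generic structure that the Lagrangian approach yields for a rigid system in free fall (a standard hedged form; the precise matrices are derived in the following sections). With no gravitational potential, the Lagrangian reduces to the kinetic energy and the Euler-Lagrange equations give:

$$\mathcal{L}=T=\tfrac{1}{2}\,\dot q^{T} H(q)\,\dot q,\qquad \frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot q}-\frac{\partial \mathcal{L}}{\partial q}=\tau \;\;\Rightarrow\;\; H(q)\,\ddot q + C(q,\dot q)\,\dot q = \tau,$$

where $H(q)$ is the generalized inertia matrix, $C(q,\dot q)$ gathers the Centrifugal/Coriolis terms and $\tau$ the generalized forces. For a free-floating SMS, the rows associated with the unactuated base DoFs have $\tau = 0$, which is another way of stating the momentum conservation used throughout this chapter.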
First, a general formalism is detailed, based on the Denavit-Hartenberg convention [START_REF] Khalil | A new geometric notation for open and closed-loop robots[END_REF], to obtain a recursive computation method to derive the dynamics of a multi-kinematic-chain robot with a common base. As established by Rocha et al. [START_REF] Cr Rocha | A comparison between the Denavit-Hartenberg and the screw-based methods used in kinematic modeling of robot manipulators[END_REF], one of the advantages of the DH convention is the use of a minimal number of kinematic-chain parameters to fully describe its kinematic model. From the obtained general kinematic/dynamic modeling method, a discussion is developed to adapt it to rotation-free-floating SMS, more particularly to a one-manipulator robot with reaction-wheels to control the spacecraft base rotations.

Reference frames and coordinate transformations

As illustrated in Figure 2.1, a kinematic chain is a series of rigid bodies interconnected by joints with one DoF or less. For that reason, the DH convention is well suited to develop a kinematic model with a reduced number of four parameters. These parameters are defined for each body with a specific frame associated with it, as visualized in Figure 2.1. Therefore, a recursive formulation of the kinematic model can be established. Before introducing the identification method of the DH parameters, the construction of the multi-kinematic-chain system is defined with the following notations. The base is defined as the first rigid solid of the system and is referred to as $S_0$. From the robot's base, $n_q$ more solids (or links) are considered, with $n_q \in \mathbb{N}$. It is assumed that the last solid of a single-kinematic-chain system is the $n_{EE}$-th one. In particular, for a single kinematic chain, $n_q = n_{EE}$. To ease the reading, these notations are loosely kept for multiple kinematic chains with more than one link: the subscript $EE$ will refer to the last solid of a kinematic chain with more than one DoF. The notation $S_i$ is used to refer to the $i$-th solid, with $i \in [1, n_q]$. Between the bodies $S_{i-1}$ and $S_i$, a connecting joint, denoted $A_i$, is considered either fixed, prismatic or revolute.
Introduction of Denavit-Hartenberg notations

Moreover, for each joint and solid, a Cartesian body frame, $R_{i-1} = (O_{i-1}, x_{i-1}, y_{i-1}, z_{i-1})$, is associated according to the following steps:
• The axis $z_{i-1}$ is defined along the axis of joint $A_i$
• The origin $O_{i-1}$ corresponds to the intersection between $z_{i-1}$ and the common normal to $z_{i-2}$ and $z_{i-1}$ that carries $x_{i-1}$
• The vector $y_{i-1}$ is deduced from the cross product $y_{i-1} = z_{i-1} \times x_{i-1}$
• The first Cartesian frame $R_0 = (O_0, x_0, y_0, z_0) = R_{sat}$ is defined such that $z_0$ is along the axis of $A_1$; $x_0$ and $y_0$ are arbitrarily chosen
• The last frame $R_{n_{EE}} = (O_{n_{EE}}, x_{n_{EE}}, y_{n_{EE}}, z_{n_{EE}})$ is defined such that $x_{n_{EE}}$ is normal to the axis of $A_{n_{EE}}$; $z_{n_{EE}}$ and $y_{n_{EE}}$ are arbitrarily chosen
Then the four DH parameters are defined with these reference frames as:
• $d_i$ is the distance between $x_{i-1}$ and $x_i$ along axis $z_{i-1}$; it corresponds to a prismatic DoF
• $\theta_i$ is the rotation around $z_{i-1}$ between $x_{i-1}$ and $x_i$; it corresponds to a rotational DoF
• $a_i$ is the distance between $z_{i-1}$ and $z_i$ along axis $x_i$; it corresponds to the common normal of $z_i$ and $z_{i-1}$
• $\alpha_i$ is the rotation around $x_i$ between $z_{i-1}$ and $z_i$
Moreover, the geometry of the SMS with respect to an inertial frame, $R_{ine}$, can be described with the position vector, $r_{A_i} \in \mathbb{R}^{3\times1}$, of each joint $A_i$ and the position vector, $r_{S_i} \in \mathbb{R}^{3\times1}$, of the CoM of each link $S_i$ ($i \in [0, n_q]$). One will note that the choice of considering the CoMs simplifies the derivation of the kinematic model. With the DH parameters, homogeneous transformations can be defined to recursively express the pose and orientation of the $i$-th link or joint. For that purpose, the matrix $T_{A_{i-1},A_i}$ is introduced as the homogeneous transformation matrix between the frames $R_{i-1}$ and $R_i$, which applies a rotation and a translation from the first frame to the second. Denoting $T^{x}_{trans}(d)$ a translation matrix along axis $x$ of a distance $d$ and $T^{x}_{rot}(\theta)$ a rotation matrix around axis $x$ of an angle $\theta$, the matrix $T_{A_{i-1},A_i}$ is then defined with the DH parameters as:

$$T_{A_{i-1},A_i} = T^{z_{i-1}}_{trans}(d_i)\,T^{z_{i-1}}_{rot}(\theta_i)\,T^{x_i}_{trans}(a_i)\,T^{x_i}_{rot}(\alpha_i) \quad (2.1)$$

with the details of the translation and rotation matrices given by:

$$T^{z_{i-1}}_{trans}(d_i) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (2.2a)$$

$$T^{z_{i-1}}_{rot}(\theta_i) = \begin{bmatrix} \cos\theta_i & -\sin\theta_i & 0 & 0 \\ \sin\theta_i & \cos\theta_i & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (2.2b)$$

$$T^{x_i}_{trans}(a_i) = \begin{bmatrix} 1 & 0 & 0 & a_i \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (2.2c)$$

$$T^{x_i}_{rot}(\alpha_i) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha_i & -\sin\alpha_i & 0 \\ 0 & \sin\alpha_i & \cos\alpha_i & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (2.2d)$$

Thus, the homogeneous transformation matrix between $A_{i-1}$ and $A_i$ is given by:

$$T_{A_{i-1},A_i} = \begin{bmatrix} \cos\theta_i & -\cos\alpha_i\sin\theta_i & \sin\alpha_i\sin\theta_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} R_{A_i} & r_{A_i} - r_{A_{i-1}} \\ 0_{1\times3} & 1 \end{bmatrix} \quad (2.3)$$

where one can distinguish the rotation and the translation. The translation is defined by the position, $r_{A_i}$, of the joint $A_i$ in $R_{ine}$, and the rotation is given by the DCM (Direction Cosine Matrix), $R_{A_i} \in \mathbb{R}^{3\times3}$, between $A_{i-1}$ and $A_i$, which gives the direction of the rotation of $A_i$ with respect to the previous joint. A similar operation can be defined to describe the homogeneous transformation between the links $S_{i-1}$ and $S_i$.
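Before turning to the link-frame transformations, the closed form (2.3) can be checked numerically. The following Python/NumPy sketch is purely illustrative (it is not part of the Matlab-Simulink tools developed in this work, and the numerical DH values are arbitrary): it builds $T_{A_{i-1},A_i}$ both from the factored form (2.1) and from the closed form (2.3), and verifies that they coincide.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Closed form (2.3) of T_{A_{i-1},A_i} from the four DH parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -ca * st,  sa * st, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def _trans(axis, dist):
    """Homogeneous translation of 'dist' along axis 0 (x) or 2 (z)."""
    T = np.eye(4)
    T[axis, 3] = dist
    return T

def _rot_z(th):
    T = np.eye(4)
    T[:2, :2] = [[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]]
    return T

def _rot_x(al):
    T = np.eye(4)
    T[1:3, 1:3] = [[np.cos(al), -np.sin(al)], [np.sin(al), np.cos(al)]]
    return T

# Arbitrary example values for (theta_i, d_i, a_i, alpha_i)
theta, d, a, alpha = 0.3, 0.1, 0.5, -0.2
# Factored form (2.1): Trans_z(d) . Rot_z(theta) . Trans_x(a) . Rot_x(alpha)
T_factored = _trans(2, d) @ _rot_z(theta) @ _trans(0, a) @ _rot_x(alpha)
assert np.allclose(dh_transform(theta, d, a, alpha), T_factored)
```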
A similar operation can be defined to describe the homogeneous transformation between the links $S_{i-1}$ and $S_i$. The frame $R_{S_{i-1}}$ attached to the solid $S_{i-1}$ is defined such that its origin $O_{S_{i-1}}$ corresponds to the CoM of $S_{i-1}$, $r_{S_{i-1}}$, and its axes are collinear with the ones defining the frame $R_{A_i}$. In particular, the origin of the Cartesian base frame $R_{sat}$ (the satellite frame) is chosen such that $r_{S_0} = r_{A_0}$. Therefore, the homogeneous transformation matrix between the links $S_{i-1}$ and $S_i$ is given with the DH parameters and (2.3) as:

$$T_{S_{i-1},S_i} = \begin{pmatrix} R_{S_i} & r_{S_i} - r_{S_{i-1}}\\ 0_{1\times3} & 1 \end{pmatrix} = \begin{pmatrix} R_{A_{i+1}} & r_{S_i} - r_{S_{i-1}}\\ 0_{1\times3} & 1 \end{pmatrix} \quad (2.4)$$

The rotations of the link $S_i$ and of the joint $A_{i+1}$ are identical, as visualized with Figure 2.1, which leads to the equality $R_{S_i} = R_{A_{i+1}}$. Finally, a homogeneous transformation matrix from $S_i$ to $A_{i+1}$, $T_{S_i,A_{i+1}}$, is given by:

$$T_{S_i,A_{i+1}} = \begin{pmatrix} R_{A_{i+1}} & r_{A_{i+1}} - r_{S_i}\\ 0_{1\times3} & 1 \end{pmatrix} \quad (2.5)$$

Moreover, to express in decreasing recursive order the homogeneous transformation from $A_i$ to $A_{i-1}$, the following relation holds:

$$T_{A_i,A_{i-1}} = T_{A_{i-1},A_i}^{-1} = \begin{pmatrix} R_{A_i}^T & -R_{A_i}^T\left(r_{A_i} - r_{A_{i-1}}\right)\\ 0_{1\times3} & 1 \end{pmatrix} \quad (2.6)$$

For some control purposes, it may be interesting to express the base DCM as a function of the Euler angles $(\phi\;\;\theta\;\;\psi)^T$ (roll $\phi$, pitch $\theta$, yaw $\psi$). With the rotation sequence $z$-$y$-$x$, the base DCM is given as:

$$R_{S_0} = R_{A_0} = \begin{pmatrix}\cos\psi & -\sin\psi & 0\\ \sin\psi & \cos\psi & 0\\ 0 & 0 & 1\end{pmatrix} \begin{pmatrix}\cos\theta & 0 & \sin\theta\\ 0 & 1 & 0\\ -\sin\theta & 0 & \cos\theta\end{pmatrix} \begin{pmatrix}1 & 0 & 0\\ 0 & \cos\phi & -\sin\phi\\ 0 & \sin\phi & \cos\phi\end{pmatrix}$$
$$= \begin{pmatrix}\cos\psi\cos\theta & \cos\psi\sin\theta\sin\phi - \sin\psi\cos\phi & \sin\psi\sin\phi + \cos\psi\sin\theta\cos\phi\\ \sin\psi\cos\theta & \sin\psi\sin\theta\sin\phi + \cos\psi\cos\phi & -\cos\psi\sin\phi + \sin\psi\sin\theta\cos\phi\\ -\sin\theta & \cos\theta\sin\phi & \cos\theta\cos\phi\end{pmatrix} \quad (2.7)$$

Moreover, the interest of using the DH formalism lies in the recursivity used to express a body state. Denoting $T_{A_i,R_{ine}}$ and $T_{S_i,R_{ine}}$ respectively the transformations of joint $A_i$ and body $S_i$ expressed in $R_{ine}$, a recursive overall transformation is obtained with the DCMs as:

$$\begin{cases} T_{A_0,R_{ine}} = T_{S_0,R_{ine}} & (2.8a)\\ T_{A_1,R_{ine}} = T_{S_0,R_{ine}}\,T_{S_0,A_1} = T_{A_0,R_{ine}}\,T_{A_0,A_1} & (2.8b)\\ T_{A_i,R_{ine}} = T_{A_{i-1},R_{ine}}\,T_{A_{i-1},A_i} = T_{A_1,R_{ine}}\prod_{j=2}^{i} T_{A_{j-1},A_j},\;\;\forall i \in [2, n_q] & (2.8c)\\ T_{S_i,R_{ine}} = T_{A_i,R_{ine}}\,T_{A_i,S_i},\;\;\forall i \in [2, n_q] & (2.8d) \end{cases}$$

One will note that the transformation $T_{A_i,R_{ine}}$ is composed of the overall DCM of the joint $A_i$ expressed in $R_{ine}$, $R_{A_i,R_{ine}}$, and of its position given in $R_{ine}$. This DCM is used to derive the direction of the joint rotation, $k_i$, in the inertial frame as:

$$k_i = R_{A_i,R_{ine}}\begin{pmatrix}0\\0\\1\end{pmatrix} = R_{A_0}\prod_{j=1}^{i} R_{A_j}\begin{pmatrix}0\\0\\1\end{pmatrix} \quad (2.9)$$

To produce the kinematic model, a mass $m_i$ ($i \in [0, n_q]$) and an associated inertia $I_{S_i}$ ($i \in [0, n_q]$), expressed in the corresponding body frame, are defined for each body. With these DCMs, the inertias can be expressed in $R_{ine}$, denoted $I_i$, such that:

$$I_i = R_{S_i,R_{ine}}\, I_{S_i}\, R_{S_i,R_{ine}}^T \quad (2.10)$$

Denoting $m_{tot}$ the total mass of the system, the CoM position in $R_{ine}$ is expressed as a function of each body's CoM pose, $r_{S_i}$, as:

$$r_{CoM} = \frac{\sum_{i=0}^{n_q} m_i r_{S_i}}{\sum_{i=0}^{n_q} m_i} = \frac{\sum_{i=0}^{n_q} m_i r_{S_i}}{m_{tot}} \quad (2.11)$$

Additionally, a recursive expression of the links' CoM can be developed as:

$$r_{S_i} = r_{S_0} + \sum_{j=1}^{i}\left(r_{S_j} - r_{S_{j-1}}\right),\;\;\forall i \in [1, n_q] \quad (2.12)$$

With the DH formalism introduced in this section, the expressions of the pose and orientation of each element composing a multi-body system with one or more kinematic chains can be obtained recursively.
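As an illustration of the recursion (2.8)-(2.9), the following sketch chains the elementary transforms to express each joint frame in $R_{ine}$ and extract the joint axes $k_i$. It reuses the dh_transform helper above and assumes a known base pose $(R_{A_0}, r_{A_0})$; names are ours:

```matlab
function [T, k] = forward_kinematics(R_A0, r_A0, dh)
% FORWARD_KINEMATICS  Recursion of eq. (2.8): pose of each joint in R_ine.
% dh is an nq-by-4 array of rows [d, theta, a, alpha] holding the current
% joint values substituted into the DH parameters.
    nq = size(dh, 1);
    T = cell(nq+1, 1);
    T{1} = [R_A0, r_A0; 0 0 0 1];            % T_{A_0,R_ine}, eq. (2.8a)
    k = zeros(3, nq);
    for i = 1:nq
        Ti = dh_transform(dh(i,1), dh(i,2), dh(i,3), dh(i,4));
        T{i+1} = T{i} * Ti;                   % eqs. (2.8b)-(2.8c)
        k(:,i) = T{i+1}(1:3,1:3) * [0;0;1];   % joint axis in R_ine, eq. (2.9)
    end
end
```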
The homogeneous transformations between two solids/joints are given as functions of the system's DoFs and, more generally, of the nature of the junctions.

Velocities and accelerations

In order to describe the system velocities, twist vectors are introduced. The twist vector $t_i$ of the $i$-th body gathers the angular and linear velocities, both expressed in $R_{ine}$ and respectively denoted $\omega_i \in \mathbb{R}^{3\times1}$ and $\dot{r}_{S_i} \in \mathbb{R}^{3\times1}$. Therefore, the twist of $S_i$ is expressed as:

$$t_i = \begin{pmatrix}\omega_i\\ \dot{r}_{S_i}\end{pmatrix},\;\;\forall i \in [0, n_q] \quad (2.13)$$

To develop the kinematic model, the twist expression is given as a function of the joint-space velocities. In that matter, a generalized joint rate-variables vector $\dot{q} = \begin{pmatrix}\dot{q}_0^T & \dots & \dot{q}_{n_q}^T\end{pmatrix}^T \in \mathbb{R}^{(6+n_q)\times1}$ is introduced, such that $\dot{q}_0 \in \mathbb{R}^{6\times1}$ is the base linear and angular velocity joint state and, $\forall i \in [1, n_q]$, $\dot{q}_i \in \mathbb{R}$ is the velocity state of the $i$-th DoF in the kinematic chain(s). In addition, with the DH parameters, one can define, $\forall i \in [1, n_q]$:

• $\dot{q}_i = \dot{\theta}_i$ if $A_i$ is a rotational joint
• $\dot{q}_i = \dot{d}_i$ if $A_i$ is a prismatic joint

The joint space is defined by the nature of the junctions and thus by the system's DoFs. One will note, motivated by control purposes, that the spacecraft angular velocity is preferably expressed in its body frame $R_{sat}$, defining $\dot{q}_0$ such that $\dot{q}_0 = \begin{pmatrix}\omega_0^{sat\,T} & \dot{r}_{S_0}^T\end{pmatrix}^T$, with the angular velocity expressed in $R_{sat}$ and the linear velocity in $R_{ine}$. This expression allows the base twist $t_0$, expressed in $R_{ine}$, to be obtained by operating a base transformation with the matrix $P_0$ as:

$$t_0 = \begin{pmatrix}\omega_0\\ \dot{r}_{S_0}\end{pmatrix} = \begin{pmatrix} R_{S_0} & 0_3\\ 0_3 & I_3\end{pmatrix}\begin{pmatrix}\omega_0^{sat}\\ \dot{r}_{S_0}\end{pmatrix} = P_0\dot{q}_0 \quad (2.14)$$

One will note that, if not specified, all velocities and poses are expressed in $R_{ine}$. Furthermore, to distinguish a DoF of a kinematic chain from the base's, the joint rate-variables are gathered in another joint rate-variables vector $\dot{q}_m \in \mathbb{R}^{n_q\times1}$ such that $\dot{q}_m = \begin{pmatrix}\dot{q}_1 & \dots & \dot{q}_{n_q}\end{pmatrix}^T$. Defining the generalized twist vector $\mathbf{t} = \begin{pmatrix}t_0^T & \dots & t_{n_q}^T\end{pmatrix}^T \in \mathbb{R}^{6(n_q+1)\times1}$, a velocity transformation between the task space and the joint space is defined as [START_REF] Angeles | The formulation of dynamical equations of holonomic mechanical systems using a natural orthogonal complement[END_REF]:

$$\mathbf{t} = \begin{pmatrix} P_0 & 0_{6\times1} & \cdots & 0_{6\times1}\\ B_{10}P_0 & p_{m_1} & & 0_{6\times1}\\ \vdots & & \ddots & \\ B_{n_q0}P_0 & B_{n_q1}p_{m_1} & \cdots & p_{m_{n_q}}\end{pmatrix}\dot{q} = N_B N_p\,\dot{q} = N(q)\,\dot{q} \quad (2.15)$$

with:

• $B_{kj} = \begin{pmatrix} I_3 & 0_3\\ \left(r_{S_j} - r_{S_k}\right)^\times & I_3\end{pmatrix} \in \mathbb{R}^{6\times6}$, if $S_k$ and $S_j$ are in the same kinematic chain
• $B_{kj} = 0_6$, if $S_k$ and $S_j$ are in different kinematic chains
• $p_{m_k} = \begin{pmatrix} k_k\\ k_k \times \left(r_{S_k} - r_{A_k}\right)\end{pmatrix} \in \mathbb{R}^{6\times1}$, if $A_k$ is a revolute joint
• $p_{m_k} = \begin{pmatrix} 0_{3\times1}\\ k_k\end{pmatrix} \in \mathbb{R}^{6\times1}$, if $A_k$ is a prismatic joint
• $p_{m_k} = 0_{6\times1}$, if $A_k$ is a fixed joint

where $x^\times$ refers to the skew-symmetric matrix of the vector $x \in \mathbb{R}^{3\times1}$, i.e. $x^\times = \begin{pmatrix}0 & -x_3 & x_2\\ x_3 & 0 & -x_1\\ -x_2 & x_1 & 0\end{pmatrix}$, $p_{m_k}$ is the twist-propagation vector, $B_{kj}$ is the twist-propagation matrix and $N$ is the velocity transformation matrix, whose expression is a function of the geometry and configuration of the system. The matrix $N$ can be decoupled into a lower and an upper triangular block matrix, $N_B$ and $N_p$ [START_REF] Virgili-Llop | Spacecraft robotics toolkit: an open-source simulator for spacecraft robotic arm dynamic modeling and control[END_REF]. Moreover, one will note that the proposed formalism allows the consideration of multiple kinematic chains with a common base thanks to the vectors $p_{m_k}$ and the matrices $B_{kj}$.
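The building blocks of (2.15) translate directly into code. A minimal Matlab sketch consistent with the definitions above (helper names are ours; each function goes in its own file, or as local functions):

```matlab
function S = skew(x)
% SKEW  Skew-symmetric matrix x^x such that skew(a)*b = cross(a,b).
    S = [    0, -x(3),  x(2);
          x(3),     0, -x(1);
         -x(2),  x(1),     0];
end

function B = twist_prop_matrix(r_Sk, r_Sj)
% TWIST_PROP_MATRIX  B_kj of eq. (2.15): propagates the twist of S_j to S_k
% when both bodies are in the same chain (otherwise B_kj = zeros(6)).
    B = [eye(3),            zeros(3);
         skew(r_Sj - r_Sk), eye(3)  ];
end

function p = twist_prop_vector(k_k, r_Sk, r_Ak, joint_type)
% TWIST_PROP_VECTOR  p_mk of eq. (2.15) for joint A_k with axis k_k in R_ine.
    switch joint_type
        case 'revolute',  p = [k_k; cross(k_k, r_Sk - r_Ak)];
        case 'prismatic', p = [zeros(3,1); k_k];
        otherwise,        p = zeros(6,1);   % fixed joint
    end
end
```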
Furthermore, to benefit from the recursivity offered by the formalism, from (2.15) the twist expression can be written as:

$$\begin{cases} t_0 = P_0\dot{q}_0 & (2.16a)\\ t_i = B_{i0}P_0\dot{q}_0 + \sum_{k=1}^{i-1}\left(B_{ik}p_{m_k}\dot{q}_k\right) + p_{m_i}\dot{q}_i & (2.16b)\end{cases}$$

In space robotics, and more largely for robots that involve multi-body kinematic chains, the term Jacobian is employed to refer to the transformation matrix between a body twist, or the twist of a point in the considered kinematic chain, and the generalized joint rate-variables vector. With the rows of $N$, the twist of the $i$-th body in the kinematic chain is given as a function of the Jacobian matrix $J_i \in \mathbb{R}^{6\times(6+n_q)}$ by:

$$t_i = \begin{pmatrix} B_{i0}P_0 & B_{i1}p_{m_1} & \dots & B_{i(i-1)}p_{m_{i-1}} & p_{m_i}\end{pmatrix}\dot{q} = J_i\dot{q} \quad (2.17)$$

Likewise, $J_i$ can be decomposed into a Jacobian $J_{0_i} \in \mathbb{R}^{6\times6}$ for the base contribution to $t_i$ and a second Jacobian $J_{m_i} \in \mathbb{R}^{6\times n_q}$ for the contributions of the kinematic chain(s) DoFs. This allows (2.17) to be rewritten, $\forall i \in [1, n_q-1]$:

$$t_i = B_{i0}P_0\dot{q}_0 + \begin{pmatrix} B_{i1}p_{m_1} & \dots & B_{i(i-1)}p_{m_{i-1}} & p_{m_i} & 0_{6\times(n_q-i)}\end{pmatrix}\begin{pmatrix}\dot{q}_1\\ \vdots\\ \dot{q}_{n_q}\end{pmatrix} = J_{0_i}\dot{q}_0 + J_{m_i}\dot{q}_m \quad (2.18)$$

and, for $i = n_q$, the Jacobian is $J_{m_{n_q}} = \begin{pmatrix} B_{n_q1}p_{m_1} & \dots & B_{n_q(n_q-1)}p_{m_{n_q-1}} & p_{m_{n_q}}\end{pmatrix}$. For the particular case of a single kinematic chain, the twist $t_{EE}$ associated with the end-effector, i.e. the last body of the chain, can be expressed as a function of the base and actuator states by adapting (2.18):

$$t_{EE} = \begin{pmatrix}\omega_{EE}\\ \dot{r}_{EE}\end{pmatrix} = B_{n_{EE}0}P_0\dot{q}_0 + \begin{pmatrix} B_{n_{EE}1}p_{m_1} & \dots & B_{n_{EE}(n_{EE}-1)}p_{m_{n_{EE}-1}} & p_{m_{n_{EE}}}\end{pmatrix}\begin{pmatrix}\dot{q}_1\\ \vdots\\ \dot{q}_{n_{EE}}\end{pmatrix} = J_{0_{EE}}\dot{q}_0 + J_{m_{EE}}\dot{q}_m \quad (2.19)$$

The accelerations of the system are recursively obtained with the time-derivative of the twist vector (2.18), such that $\forall i \in [1, n_q]$:

$$\dot{t}_i = \dot{J}_{0_i}\dot{q}_0 + J_{0_i}\ddot{q}_0 + \dot{J}_{m_i}\dot{q}_m + J_{m_i}\ddot{q}_m \quad (2.20)$$

where:

$$\dot{J}_{0_i} = \begin{pmatrix} I_3 & 0_3\\ \left(r_{S_0}-r_{S_i}\right)^\times & I_3\end{pmatrix}\Omega_0 P_0 + \dot{B}_{i0}P_0 \quad (2.21a)$$
$$\Omega_0 = \begin{pmatrix}\omega_0^\times & 0_3\\ 0_3 & 0_3\end{pmatrix} \quad (2.21b)$$
$$\dot{B}_{kj} = \begin{pmatrix} 0_3 & 0_3\\ \left(\dot{r}_{S_j}-\dot{r}_{S_k}\right)^\times & 0_3\end{pmatrix} \quad (2.21c)$$
$$\dot{J}_{m_i} = \begin{pmatrix}\dot{J}_{m_{i,1}} & \dots & \dot{J}_{m_{i,k}} & \dots & \dot{J}_{m_{i,n_q}}\end{pmatrix} \quad (2.21d)$$
$$\dot{J}_{m_{i,k}} = \begin{pmatrix} I_3 & 0_3\\ \left(r_{S_k}-r_{S_i}\right)^\times & I_3\end{pmatrix}\Omega_k p_{m_k} + \dot{B}_{ki}p_{m_k}, \qquad \Omega_k = \begin{pmatrix}\omega_k^\times & 0_3\\ 0_3 & \omega_k^\times\end{pmatrix} \quad (2.21e)$$

Multi-body system dynamics

After expressing the system's velocities and accelerations in the previous section with a recursive formalism developed thanks to the DH parameters, a dynamic model is derived in this section. With a Lagrange approach, recursivity is taken advantage of by associating an energy to each body and then deriving the equations of motion of a rigid multi-body system with multiple kinematic chains and a common base evolving on-orbit. The Lagrangian, $\mathcal{L}$, expresses the difference between the system's kinetic energy, $\mathcal{T}$, and its potential energy, $\mathcal{V}$, as:

$$\mathcal{L} = \mathcal{T} - \mathcal{V} \quad (2.22)$$

In this study, and commonly in such a context [START_REF] Wilde | Equations of motion of free-floating spacecraft-manipulator systems: an engineer's tutorial[END_REF], it is assumed that the effects of the Earth (gravity gradient and free-fall environment) on the robotic system are neglected, which leads to no potential energy (i.e. $\mathcal{V} = 0$). Thus the evaluation of the Lagrangian (2.22) reduces to the evaluation of the system's kinetic energy.
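Evaluating the kinetic energy numerically only requires the body twists at the current state. A sketch of the recursion (2.16), reusing the helpers above, exploits the composition property $B_{ik} = B_{i,i-1}B_{i-1,k}$ of the propagation matrices (single-chain form; for multiple chains, each chain is propagated from the base):

```matlab
function t = body_twists(P0, q0dot, qmdot, r_S, r_A, k, joint_type)
% BODY_TWISTS  Recursion of eq. (2.16): twists t_i of all bodies in R_ine.
% r_S: 3x(nq+1) CoM positions (column 1 is the base S_0); r_A, k: 3xnq;
% joint_type: cell array of 'revolute'/'prismatic'/'fixed'.
    nq = numel(qmdot);
    t = zeros(6, nq+1);
    t(:,1) = P0 * q0dot;                           % t_0, eq. (2.16a)
    for i = 1:nq
        Bi  = twist_prop_matrix(r_S(:,i+1), r_S(:,i));
        p_i = twist_prop_vector(k(:,i), r_S(:,i+1), r_A(:,i), joint_type{i});
        % chained form, equivalent to eq. (2.16b) since B_ik = B_{i,i-1}B_{i-1,k}
        t(:,i+1) = Bi * t(:,i) + p_i * qmdot(i);
    end
end
```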
For each body, the kinetic energy expressed in $R_{ine}$ is the sum of a rotational and a translational term, such that:

$$\mathcal{L} = \mathcal{T} = \frac{1}{2}\sum_{i=0}^{n_q}\left(\omega_i^T I_i\omega_i + m_i\dot{r}_{S_i}^T\dot{r}_{S_i}\right) = \frac{1}{2}t_0^T\begin{pmatrix} I_0 & 0_3\\ 0_3 & m_0I_3\end{pmatrix}t_0 + \frac{1}{2}\sum_{i=1}^{n_q}t_i^T\begin{pmatrix} I_i & 0_3\\ 0_3 & m_iI_3\end{pmatrix}t_i \quad (2.23)$$

With the twist expression (2.18), (2.23) is rewritten as:

$$\mathcal{L} = \frac{1}{2}t_0^T\left[\begin{pmatrix} I_0 & 0_3\\ 0_3 & m_0I_3\end{pmatrix} + \sum_{i=1}^{n_q}B_{i0}^T\begin{pmatrix} I_i & 0_3\\ 0_3 & m_iI_3\end{pmatrix}B_{i0}\right]t_0 + \frac{1}{2}t_0^T\left[\sum_{i=1}^{n_q}B_{i0}^T\begin{pmatrix} I_i & 0_3\\ 0_3 & m_iI_3\end{pmatrix}J_{m_i}\right]\dot{q}_m$$
$$+ \frac{1}{2}\dot{q}_m^T\left[\sum_{i=1}^{n_q}J_{m_i}^T\begin{pmatrix} I_i & 0_3\\ 0_3 & m_iI_3\end{pmatrix}B_{i0}\right]t_0 + \frac{1}{2}\dot{q}_m^T\left[\sum_{i=1}^{n_q}J_{m_i}^T\begin{pmatrix} I_i & 0_3\\ 0_3 & m_iI_3\end{pmatrix}J_{m_i}\right]\dot{q}_m \quad (2.24)$$

which can be expressed in a compact form as:

$$\mathcal{L} = \frac{1}{2}\begin{pmatrix} t_0^T & \dot{q}_m^T\end{pmatrix}\begin{pmatrix} M_0 & M_{0m}\\ M_{0m}^T & M_m\end{pmatrix}\begin{pmatrix} t_0\\ \dot{q}_m\end{pmatrix} = \frac{1}{2}\begin{pmatrix} t_0^T & \dot{q}_m^T\end{pmatrix}M(x_0, q_m)\begin{pmatrix} t_0\\ \dot{q}_m\end{pmatrix} \quad (2.25)$$

with $x_0$ the state vector composed of the base rotation DoFs, $\theta_0$, and of the base position in the inertial frame, such that $x_0 = \begin{pmatrix}\theta_0^T & r_{S_0}^T\end{pmatrix}^T$. The detail of the matrices composing $M$ is given as:

$$\begin{cases} M_0 = \begin{pmatrix} I_0 & 0_3\\ 0_3 & m_0I_3\end{pmatrix} + \sum_{i=1}^{n_q}B_{i0}^T\begin{pmatrix} I_i & 0_3\\ 0_3 & m_iI_3\end{pmatrix}B_{i0} & (2.26a)\\ M_{0m} = \sum_{i=1}^{n_q}B_{i0}^T\begin{pmatrix} I_i & 0_3\\ 0_3 & m_iI_3\end{pmatrix}J_{m_i} & (2.26b)\\ M_m = \sum_{i=1}^{n_q}J_{m_i}^T\begin{pmatrix} I_i & 0_3\\ 0_3 & m_iI_3\end{pmatrix}J_{m_i} & (2.26c)\end{cases}$$

The matrix $M(x_0, q_m)$ is the system's inertia matrix expressed in $R_{ine}$, which is by construction symmetric positive definite. Its evaluation depends on the configuration of the system as well as on the base orientation and position in $R_{ine}$. It is composed of the base inertia $M_0 \in \mathbb{R}^{6\times6}$, the links' inertia $M_m \in \mathbb{R}^{n_q\times n_q}$ and the coupling inertia matrix between the kinematic chain(s) and the base, $M_{0m} \in \mathbb{R}^{6\times n_q}$. From a general standpoint, one can introduce for the 6-DoF base a control force/torque vector, $\tau_0 \in \mathbb{R}^{6\times1}$, as well as an external force/torque vector, $\tau_{ext_0} \in \mathbb{R}^{6\times1}$, applied to the base. Similarly, for the $n_q$ joints, a control force/torque vector, $\tau_{q_m} \in \mathbb{R}^{n_q\times1}$, and an external force/torque vector, $\tau_{ext_m} \in \mathbb{R}^{n_q\times1}$, can be introduced.
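Before deriving the equations of motion, the inertia blocks of (2.26) can be assembled numerically from the propagation terms already computed. A minimal sketch consistent with (2.26), reusing the helpers above (names are ours):

```matlab
function [M0, M0m, Mm] = inertia_blocks(I, m, r_S, J_m)
% INERTIA_BLOCKS  Blocks M_0, M_0m, M_m of eq. (2.26).
% I{i}: 3x3 inertia of body i-1 in R_ine (I{1} is the base); m(i): masses;
% r_S: 3x(nq+1) CoM positions; J_m{i}: 6xnq manipulator Jacobian J_mi.
    nq  = numel(J_m);
    M0  = blkdiag(I{1}, m(1)*eye(3));        % base contribution (body S_0)
    M0m = zeros(6, nq);
    Mm  = zeros(nq);
    for i = 1:nq
        Mi  = blkdiag(I{i+1}, m(i+1)*eye(3));
        Bi0 = twist_prop_matrix(r_S(:,i+1), r_S(:,1));
        M0  = M0  + Bi0' * Mi * Bi0;         % eq. (2.26a)
        M0m = M0m + Bi0' * Mi * J_m{i};      % eq. (2.26b)
        Mm  = Mm  + J_m{i}' * Mi * J_m{i};   % eq. (2.26c)
    end
end
```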
The system equation of motions are obtained by evaluating the Euler-Lagrange equation on each DoF as:          d dt ∂T ∂t 0 - ∂T ∂x 0 = τ 0 + τ ext 0 d dt ∂T ∂ qm - ∂T ∂q m = τ qm + τ extm (2.27a) (2.27b) With (2.25) and the symmetric properties of M(x 0 , q m ), the partial derivative expressions are given as:          ∂T ∂t 0 = 1 2 M T 0 + M 0 t 0 + M 0m qm = M 0 t 0 + M 0m qm ∂T ∂ qm = 1 2 M T m + M m qm + 1 2 t T 0 M 0m T + 1 2 M T 0m t 0 = M m qm + M T 0m t 0 (2.28a) (2.28b) Then the time-derivation of (2.28) gives:          d dt ∂T ∂t 0 = Ṁ0 t 0 + Ṁ0m qm + M 0 ṫ0 + M 0m qm d dt ∂T ∂ qm = Ṁm qm + ṀT 0m t 0 + M m qm + M T 0m ṫ0 (2.29a) (2.29b) with the expression of the time-derivative inertia matrices detailed with (2.21) as:                                                              Ṁ0 = Ω 0 I 0 0 3 0 3 m 0 I 3 + nq i=1 ḂT i0 I i 0 3 0 3 m i I 3 B i0 + nq i=1 B T i0 İi 0 3 0 3 m i I 3 B i0 + nq i=1 B T i0 I i 0 3 0 3 m i I 3 Ḃi0 Ṁ0m = nq i=1 ḂT i0 I i 0 3 0 3 m i I 3 J m i + nq i=1 B T i0 İi 0 3 0 3 m i I 3 J m i + nq i=1 B T i0 I i 0 3 0 3 m i I 3 Jm i Ṁm = nq i=1 JT m i I i 0 3 0 3 m i I 3 J m i + nq i=1 J T m i İi 0 3 0 3 m i I 3 J m i + nq i=1 J T m i I i 0 3 0 3 m i I 3 Jm i (2.30a) (2.30b) (2.30c) Additionally, the time-derivative of the inertia I i (2.10) in the R ine , is given by: İi = ω × i I S i R T S i ,R ine + R S i ,R ine I S i ω × T i (2.31) The derivation of the Lagrangian (2.25) in function of x 0 and q m to evaluate the Euler-Lagrange equation (2.27) is detailed in appendix A. In that purpose, one introduces c 0 ∈ R 6×6 , c m0 ∈ R nm×6 ,c 0m ∈ R 6×nm and c m ∈ R nm×nm as:                              c 0 = - 1 2 ∂ ∂x 0 t T 0 M 0 + qm M T 0m c 0m = - 1 2 ∂ ∂x 0 qT m M m + t T 0 M 0m c m0 = - 1 2 ∂ ∂q m t T 0 M 0 + qm M T 0m c m = - 1 2 ∂ ∂q m qT m M m + t T 0 M 0m (2.32a) (2.32b) (2.32c) (2.32d) The final evaluation of the Lagrangian (2.25) with (2.27) allows to derive the dynamic model for a rigid multi-body and multi-kinematic chains system as [START_REF] Wilde | Equations of motion of free-floating spacecraft-manipulator systems: an engineer's tutorial[END_REF]: M 0 M 0m M T 0m M m M(x 0 ,qm) ṫ0 qm + Ṁ0 + c 0 Ṁ0m + c 0m ṀT 0m + c m0 Ṁm + c m D(x 0 ,qm, q0 , qm) t 0 qm = τ 0 τ qm + τ ext 0 τ extm (2.33) The matrix M(x 0 , q m ) is an inertia matrix evaluated from the spacecraft configuration and position in the R ine (i.e. q). The matrix D(x 0 , q m , q0 , qm ) is a convective matrix corresponding to centrifugal and Coriolis terms. Its evaluation depends on the system configuration and velocities such that it traduces the influence of each body's motions on the rest of the system. As for control purposes, the base rotations are preferably expressed in the body frame R sat , the general formulation of the dynamic model (2.33) is modified to express the base angular dynamics in R sat . With the transformation (2.14), the equation of motions (2.33) are modified as: H 0 H 0m H T 0m H m H(x 0 ,qm) q0 qm +   P T 0 Ṁ0 + c 0 P 0 P T 0 Ṁ0m + c 0m ṀT 0m + c m0 P 0 Ṁm + c m   C(x 0 ,qm, q0 , qm) q0 qm = P T 0 (τ 0 + τ ext 0 ) τ qm + τ extm (2.34) One will observe that the base's transformation does not change the properties of the inertia matrix H(x 0 , q m ), and thus of the convective matrix C(x 0 , q m , q0 , qm ). 
where:

$$H_0 = P_0^T M_0 P_0 \quad (2.35a)$$
$$H_{0m} = P_0^T M_{0m} = \sum_{i=1}^{n_q}\begin{pmatrix} R_{S_0}^T I_i & -m_i R_{S_0}^T\left(r_{S_0}-r_{S_i}\right)^\times\\ 0_3 & m_iI_3\end{pmatrix}J_{m_i}, \qquad H_m = M_m \quad (2.35b)$$

With this modeling effort, based on the DH formalism to develop a kinematic model used to derive a dynamic model with a Lagrangian approach, the equations of motion for an on-orbit robot with multiple kinematic chains and a common base are given as (2.34). The general formalism can later be modified according to the SMS studied. One will note that this formalism, as written in (2.34), corresponds to a flying robot to which external forces/torques (i.e. $\tau_{ext_0}$ and $\tau_{ext_m}$) are applied, whereas a floating robot corresponds to a null base control wrench $\tau_0$. The present study puts the focus on rotation-free-floating manipulators with reaction-wheels to control the base orientation.

Dynamics of a rotation-free-floating SMS

Capitalizing on the general and recursive formalism previously developed to derive the equations of motion of a space robot as (2.34), an adaptation is presented in this section for a free-floating SMS with reaction-wheels. The multi-kinematic-chain formalism is here put to use such that each reaction-wheel, $A_{r_i}$, corresponds to one kinematic chain, $S_{r_i}$, composed of a one-DoF joint of axis $k_{r_i}$, as illustrated with Figure 2.2. This advantageously allows the distinction of the different contributions of each actuator to the system. As one can observe with (2.15), there are no direct interactions between two kinematic chains: the influence of one chain on another occurs through the base. To distinguish the reaction-wheels from the manipulator's actuators, the subscripts $m$ and $r$ are respectively used to indicate manipulator and reaction-wheel quantities. Thus, the rotation-free-floating SMS considered has $n_r$ reaction-wheels and an $n_m$-DoF manipulator.

To emphasize the reaction-wheels' integration and impact on the system, one will first consider the twist vector, adapting the notation from (2.15), as:

$$t_{r_i} = J_{0_{r_i}}\dot{q}_0 + J_{r_i}\dot{q}_{r_i},\;\;\forall i \in [1, n_r] \quad (2.36)$$

where the Jacobian matrices are expressed as:

$$\begin{cases} J_{r_i} = p_{m_i} = \begin{pmatrix} k_i\\ k_i\times\left(r_{S_i}-r_{A_i}\right)\end{pmatrix} = \begin{pmatrix} k_i\\ 0_{3\times1}\end{pmatrix} & (2.37a)\\ J_{0_{r_i}} = \begin{pmatrix} I_3 & 0_3\\ \left(r_{S_0}-r_{S_i}\right)^\times & I_3\end{pmatrix}\begin{pmatrix} R_{S_0} & 0_3\\ 0_3 & I_3\end{pmatrix} = B_{i0}P_0 & (2.37b)\end{cases}$$

the cross product in (2.37a) vanishing since the CoM of a reaction-wheel lies on its rotation axis ($r_{S_i}-r_{A_i}$ collinear with $k_i$). Considering these expressions, one can highlight that the reaction-wheels affect the angular velocity of the system, expressed in $R_{sat}$, and the base displacement in $R_{ine}$. Similarly, adapting (2.15) and (2.16), the twist of a manipulator joint is expressed as:

$$t_{m_i} = B_{i0}P_0\dot{q}_0 + \sum_{k=1}^{i-1}\left(B_{ik}p_{m_k}\dot{q}_k\right) + p_{m_i}\dot{q}_i = J_{0_{m_i}}\dot{q}_0 + J_{m_i}\dot{q}_m,\;\;\forall i \in [1, n_m] \quad (2.38)$$

where the expressions of the Jacobian matrices, given for a single kinematic chain, are obtained with:

$$\begin{cases} B_{kj} = \begin{pmatrix} I_3 & 0_3\\ \left(r_{S_j}-r_{S_k}\right)^\times & I_3\end{pmatrix} & (2.39a)\\ p_{m_k} = \begin{pmatrix} k_k\\ k_k\times\left(r_{S_k}-r_{A_k}\right)\end{pmatrix} & (2.39b)\end{cases}$$

This highlights that the manipulator's joints also impact the linear dynamics of the overall system, in addition to its rotations. From the twist expressions, a kinematic model can be defined as developed in the previous section. Adding reaction-wheels to the system only modifies the kinetic energy expression, without contributing any potential energy.
Therefore, the Lagrangian expression (2.23) is adapted to distinguish the contribution of base and manipulator actuators in the system energy as: L = T = 1 2 t T 0 I 0 0 3 0 3 m 0 I 3 t 0 + 1 2 nm i=1 t T m i I i 0 3 0 3 m i I 3 t m i + 1 2 nr i=1 t T r i I i 0 3 0 3 m i I 3 t r i (2.40) Then introducing respectively the reaction-wheels and manipulator joint rate-variables generalized vector qr = qT L = 1 2 t T 0 qT r qT m    M 0 M 0r M 0m M T 0r M r 0 nr×nm M T 0m 0 nm×nr M m       t 0 qr qm    = 1 2 t T 0 qT r qT m M(x 0 , q m , q r )    t 0 qm qr    (2.41) where the inertia matrix M(x 0 , q m , q r ) is detailed as:                                                  M 0 = I 0 0 3 0 3 m 0 I 3 + nq i=1 B T i0 I i 0 3 0 3 m i I 3 B i0 M 0m = nm i=1 B T i0 I i 0 3 0 3 m i I 3 J m i M 0r = nr i=1 B T i0 I i 0 3 0 3 m i I 3 J r i M m = nm i=1 J T m i I i 0 3 0 3 m i I 3 J m i M r = nr i=1 J T r i I i 0 3 0 3 m i I 3 J r i (2.42a) (2.42b) (2.42c) (2.42d) (2.42e) The inertia matrix M(x 0 , q m , q r ) remains symmetric and defined strictly positive. One will note that developing the expression of inertia matrix M r allows to highlight that reactionwheels only affect the angular dynamics of the overall SMS. In order to alleviate further notations and expressions, reaction wheels and manipulator's actuators state are gathered into the vector q ∈ R nq×1 (with n q = n r +n m ) as q = q T r q T m T and subscript q denotes the association of reaction-wheels and manipulator quantities, such that M 0q = M 0r M 0m and M q = M r 0 0 M m . Thus, adapting notations of the previous section and observing a similar base transformation (i.e. with 2.14) to express the spacecraft base angular dynamics in R sat , the equations of motions of a rigid rotation-free-floating SMS are obtained as [START_REF] Wilde | Equations of motion of free-floating spacecraft-manipulator systems: an engineer's tutorial[END_REF]: H 0 H 0q H T 0q H q H(x 0 ,q) q0 q +   P T 0 Ṁ0 + c 0 P 0 P T 0 Ṁ0q + c 0q ṀT 0q + c q0 P 0 Ṁq + c q   C(x 0 ,q, q0 , q) q0 q = P T 0 (τ 0 + τ ext 0 ) τ q + τ extq (2.43) with τ q = τ T r τ T m T ∈ R nq×1 the actuators control torque composed of the reaction-wheels' control torques, τ r , and manipulator's control torques, τ m , and τ extq ∈ R nq×1 the vector of the external torques/forces applying on the system's actuators equal to zeros with the previous assumptions of not disturbances. One will emphasize on the rotation-free-floating nature of the SMS considered and on the hypothesis of no external perturbations which leads to the torques/forces applying and controlling the base equal to zeros (i.e. τ 0 = τ ext 0 = 0 6×1 ). In this study, a flexible element is attached to the spacecraft's base, or on a body fixed on the base, with a unique rigid junction. The case of flexible element in a kinematic chain is not developed here. The i th flexible appendage is linked to the rest of the satellite with a fixed joint which pose in R ine is denoted r P f i as illustrated with figure 2.3. As proposed in the literature [START_REF] Peter | Finite element appendage equations for hybrid coordinate dynamic analysis[END_REF], a flexible modeling of appendages is obtained in two steps, first a finite element study is developed to obtain the flexible DoF and then a modal reduction method is employed [START_REF] Ma Dokainish | A new approach for plate vibrations: combination of transfer matrix and finite-element technique[END_REF]. 
The present work does not detail the finite element approach, but the DoF reduction is developed with the formalism used in Girard et al. [START_REF] Girard | Structural dynamics in industry[END_REF] and Sanfedino et al. [START_REF] Sanfedino | Finite element based N-Port model for preliminary design of multibody systems[END_REF]. The modal approach consists of a first system studied without excitation sources, corresponding to the finite element approach, to obtain the equations of motion and the normal modes; a second step is then the superposition of the flexible modes to reduce their number.

Modeling of flexible rotation-free-floating SMS

2.3.1 Modeling hypothesis

From a finite element analysis, each node is associated with a modal linear and angular displacement denoted $u$. The distinction between an internal mode (or DoF) and the ones on the unique junction is made using respectively the subscripts $i$ and $j$. Adopting the Lagrangian formalism from the finite element method, the equations of motion of the total set of DoFs are of second order, involving mass, damping and stiffness matrices, as in [START_REF] Sanfedino | Finite element based N-Port model for preliminary design of multibody systems[END_REF]:

$$\underbrace{\begin{pmatrix} M_{ii} & M_{ij}\\ M_{ij}^T & M_{jj}\end{pmatrix}}_{M_{flex}}\begin{pmatrix}\ddot{u}_i\\ \ddot{u}_j\end{pmatrix} + \underbrace{\begin{pmatrix} C_{ii} & C_{ij}\\ C_{ij}^T & C_{jj}\end{pmatrix}}_{C_{flex}}\begin{pmatrix}\dot{u}_i\\ \dot{u}_j\end{pmatrix} + \underbrace{\begin{pmatrix} K_{ii} & K_{ij}\\ K_{ij}^T & K_{jj}\end{pmatrix}}_{K_{flex}}\begin{pmatrix} u_i\\ u_j\end{pmatrix} = \begin{pmatrix} F_i\\ F_j\end{pmatrix} \quad (2.44)$$

where $M_{flex}$, $C_{flex}$ and $K_{flex}$ are constant symmetric matrices, $F_i$ gathers the forces/torques applying on the internal DoFs and $F_j$ the reaction forces/torques imposed by the junction on the flexible appendage. This equation expresses that, for any external forces/torques corresponding to an excitation, the internal DoFs will present a linear and/or angular displacement with a given velocity and acceleration and, reciprocally, for any mode displacement a force/torque is applied on the junction. Moreover, this equation may present an important number of DoFs, some of which may be neglected considering their physical properties and the application. In order to reduce the number of flexible DoFs, the normal modes are considered. By definition, they correspond to the solutions of the equations of motion without any excitation (i.e. $F_i = 0_{n_i\times1}$ and $u_j = 0_{n_j\times1}$). Thus, from (2.44) and neglecting the damping terms $C_{flex}$, the normal modes are the solutions of [START_REF] Sanfedino | Finite element based N-Port model for preliminary design of multibody systems[END_REF]:

$$M_{ii}\ddot{u}_i + K_{ii}u_i = 0_{n_i\times1} \quad (2.45)$$

These solutions are the $n_i$ eigenvalues $\lambda_k$ (with $k \in [1, n_i]$), associated with the $k$-th eigenvectors $\Phi_{i_k}$, given by [Cra00]:

$$\left(-\lambda_k^2 M_{ii} + K_{ii}\right)\Phi_{i_k} = 0_{n_i\times1} \quad (2.46)$$

The DoF reduction is then obtained by solving (2.44) with a projection of the system's physical properties onto the base composed of the eigenvectors $\Phi_{i_k}$ and the junction mode matrix $\Psi$. $\Psi$ is computed by successively imposing a unit displacement $u_j$ while blocking the other junction displacements. In the case of a single junction, $\Psi$ is defined as [START_REF] Girard | Structural dynamics in industry[END_REF]:

$$\Psi_{jj} = I_j \quad (2.47)$$

and the terms coupled with the internal DoFs are obtained as:

$$K_{ii}\Psi_{ij} + K_{ij} = 0_{n_i\times n_j} \;\Rightarrow\; \Psi_{ij} = -K_{ii}^{-1}K_{ij} \quad (2.48)$$

The projection onto the normal mode base allows the use of the Craig-Bampton transformation [START_REF] Craig | Coupling of substructures for dynamic analyses-an overview[END_REF]:

$$\begin{pmatrix} u_i\\ u_j\end{pmatrix} = \begin{pmatrix}\Phi_{i_k} & \Psi_{ij}\\ 0_{n_j\times n_k} & I_j\end{pmatrix}\begin{pmatrix}\eta_k\\ u_j\end{pmatrix} \quad (2.49)$$

with $\eta_k \in \mathbb{R}^{n_k\times1}$ the modal displacement vector.
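A minimal Matlab sketch of this reduction, assuming the finite-element matrices $M_{ii}$, $M_{ij}$, $K_{ii}$, $K_{ij}$ are available (names are ours; note the sign of $\Psi_{ij}$ following (2.48)):

```matlab
function [Phi, Psi, mkk, kkk, Lkj] = craig_bampton(Mii, Mij, Kii, Kij, nk)
% CRAIG_BAMPTON  Modal reduction of eqs. (2.46)-(2.50), keeping nk modes.
    % Normal modes of the clamped-junction problem, eq. (2.46)
    [V, D] = eig(Kii, Mii);
    [~, idx] = sort(diag(D));               % eigenvalues are lambda_k^2
    Phi = V(:, idx(1:nk));                  % kept mode shapes Phi_ik
    % Junction (static) modes, eq. (2.48): K_ii*Psi_ij + K_ij = 0
    Psi = -Kii \ Kij;
    % Reduced matrices entering eq. (2.50)
    mkk = Phi' * Mii * Phi;                 % generalized masses (diagonal)
    kkk = Phi' * Kii * Phi;                 % diag(m_k * lambda_k^2)
    Lkj = Phi' * (Mii * Psi + Mij);         % participation factors L_kj
end
```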
From this transformation, (2.44) is rewritten as [START_REF] Sanfedino | Finite element based N-Port model for preliminary design of multibody systems[END_REF]:

$$\begin{pmatrix} m_{kk} & L_{kj}\\ L_{kj}^T & \tilde{M}_{jj}\end{pmatrix}\begin{pmatrix}\ddot{\eta}_k\\ \ddot{u}_j\end{pmatrix} + \begin{pmatrix} c_{kk} & 0_{n_k\times n_j}\\ 0_{n_j\times n_k} & 0_{n_j}\end{pmatrix}\begin{pmatrix}\dot{\eta}_k\\ \dot{u}_j\end{pmatrix} + \begin{pmatrix} k_{kk} & 0_{n_k\times n_j}\\ 0_{n_j\times n_k} & \tilde{K}_{jj}\end{pmatrix}\begin{pmatrix}\eta_k\\ u_j\end{pmatrix} = \begin{pmatrix}\Phi_{ki}F_i\\ \Psi_{ji}F_i + F_j\end{pmatrix} \quad (2.50)$$

where:

• $\eta_k$ is the modal displacement vector
• $u_j$ gathers the junction pose and orientation states
• $m_{kk} = \Phi_{i_k}^T M_{ii}\Phi_{i_k}$ is a diagonal equivalent mass matrix, which can be expressed in a normalized modal base as $m_{kk} = I_k$
• $c_{kk} = \Phi_{i_k}^T C_{ii}\Phi_{i_k}$ is an equivalent damping matrix. Neglecting the damping terms between two modes, its expression reduces to a diagonal matrix $c_{kk} = diag(2\zeta_k\omega_k m_k)$, where $\zeta_k$, $\omega_k$ and $m_k$ are respectively the damping ratio, the natural frequency and the generalized mass of the $k$-th mode
• $k_{kk} = \Phi_{i_k}^T K_{ii}\Phi_{i_k} = diag(m_k\lambda_k^2)$ is the equivalent stiffness matrix
• $L_{kj} = \Phi_{i_k}^T\left(M_{ii}\Psi_{ij} + M_{ij}\right)$ is the matrix of the participation factors
• $\tilde{M}_{jj} = \Psi_{ji}M_{ii}\Psi_{ij} + \Psi_{ji}M_{ij} + M_{ji}\Psi_{ij} + M_{jj}$ is the condensed mass matrix. With a rigid junction, $\tilde{M}_{jj}$ is equal to the rigid-body mass matrix; it includes the information about the mass, inertia and CoM of the structure with respect to the junction
• $\tilde{K}_{jj} = K_{jj} - K_{ji}K_{ii}^{-1}K_{ij}$ is the condensed stiffness matrix, equal to $0_{n_j}$ for a rigid junction
• $F_\eta = \Phi_{ki}F_i$ is the forces/torques imposed on the internal normal modes
• $F_P = \Psi_{ji}F_i + F_j$ is the forces/torques imposed on the appendage by the junction

An adaptation of the Craig-Bampton transformation with a rigid junction, for which $\Psi_{ij} = 0_{n_i\times n_j}$, is considered in the following.

Modeling of a rigid-flexible multi-body system

Modeling of a rigid hub and n_p flexible appendages

Similarly to the method formulated in section 2.2.4, a Lagrangian approach is adopted to include the flexible appendage dynamics into the rigid dynamics of the base. First, the twist vector of the $i$-th junction is expressed by adapting (2.15) with a rigid junction as:

$$t_{P_{f_i}} = \begin{pmatrix}\omega_{P_{f_i}}\\ \dot{r}_{P_{f_i}}\end{pmatrix} = B_{P_{f_i}0}\,t_0 = B_{P_{f_i}0}P_0\dot{q}_0 = J_{0_{P_{f_i}}}\dot{q}_0, \qquad B_{P_{f_i}0} = \begin{pmatrix} I_3 & 0_3\\ \left(r_{S_0}-r_{P_{f_i}}\right)^\times & I_3\end{pmatrix} \quad (2.51)$$

and its time-derivative is given as:

$$\dot{t}_{P_{f_i}} = J_{0_{P_{f_i}}}\ddot{q}_0 + \dot{J}_{0_{P_{f_i}}}\dot{q}_0 \quad (2.52)$$

with the time-derivative of the Jacobian matrix $J_{0_{P_{f_i}}}$ developed as:

$$\dot{J}_{0_{P_{f_i}}} = \begin{pmatrix} I_3 & 0_3\\ \left(r_{S_0}-r_{P_{f_i}}\right)^\times & I_3\end{pmatrix}\Omega_0 P_0 + \begin{pmatrix} 0_3 & 0_3\\ \left(\dot{r}_{S_0}-\dot{r}_{P_{f_i}}\right)^\times & 0_3\end{pmatrix}P_0 \quad (2.53)$$

Associating the modal displacement vector $\eta_i \in \mathbb{R}^{n_{\eta_i}\times1}$, composed of the $n_{\eta_i}$ flexible DoFs, to the $i$-th appendage, the motion equations (2.50) are adapted with an appropriate flexible base change and expressed in $R_{ine}$ as:

$$\begin{pmatrix} I_{n_{\eta_i}} & L_{P_{f_i}}B_{P_{f_i}0}\\ B_{P_{f_i}0}^T L_{P_{f_i}}^T & B_{P_{f_i}0}^T\begin{pmatrix} I_i & 0_3\\ 0_3 & m_iI_3\end{pmatrix}B_{P_{f_i}0}\end{pmatrix}\begin{pmatrix}\ddot{\eta}_i\\ \dot{t}_0\end{pmatrix} + \begin{pmatrix} c_{\eta_i} & 0_{n_{\eta_i}\times6}\\ 0_{6\times n_{\eta_i}} & 0_6\end{pmatrix}\begin{pmatrix}\dot{\eta}_i\\ t_0\end{pmatrix} + \begin{pmatrix} k_{\eta_i}\eta_i\\ 0_{6\times1}\end{pmatrix} = \begin{pmatrix}\tau_{ext_{\eta_i}}\\ \tau_0 + \tau_{ext_0}\end{pmatrix},\;\;\forall i \in [1, n_p] \quad (2.54)$$

with $\tau_{ext_{\eta_i}} \in \mathbb{R}^{n_{\eta_i}\times1}$ the external torques/forces applying on the $i$-th flexible appendage, $L_{P_{f_i}}$ expressed in $R_{sat}$, $c_{\eta_i} = diag(2\zeta_j\omega_j)$ and $k_{\eta_i} = diag(\omega_j^2)$ (with $j \in [1, n_{\eta_i}]$). A drawback of the dynamics as expressed in (2.54) is the neglect of existing coupling factors between $\ddot{\eta}_i$ and $\dot{t}_0$. In order to derive those factors, $c_{0\eta_i}$ and $c_{\eta_i0}$, such that no restriction or hypothesis is required on the base velocities, the Lagrangian approach is chosen.
From (2.54) the appendages kinetic energy, T appendage , the hub composed of the base and payloads kinetic energy, T hub , the flexible potential energy, V η , and dissipative forces, F η , are expressed for a satellite with only the i th appendage rigidly attached as:                                  T appendage = 1 2 ηT i t T 0    I nη i L P f i B P f i 0 B T P f i 0 L T P f i B T P f i 0 I i 0 3 0 3 m i I 3 B P f i 0    ηi t 0 T hub = 1 2 t T 0 I 0 0 3 0 3 m 0 I 3 t 0 V η = - 1 2 η T i k η i η i F η = -c η i ηi (2.55a) (2.55b) (2.55c) (2.55d) Then (2.55) is generalized to the n p appendages with the introduction of the generalized modal vector η = η T 1 . . . η T np T ∈ R nη×1 and n η = np i=1 n η i , the generalized forces/torques applying on appendages τ extη = τ T extη 1 . . . τ T extη np T , the matrix of participation factor L ηp = L P f 1 B P f 1 0 . . . L P fn p B P fn p 0 and C η = diag(c η j ), K η = diag(k η j ) with j ∈ [1, n p ]. Thus, the listing of energies and forces present in the system composed of n p flexible ap-pendages rigidly attached to the spacecraft's base is:                                  T appendages = 1 2 ηT t T 0    I nη L ηp L T ηp np i=1 B T P f i 0 I i 0 3 0 3 m i I 3 B P f i 0    η t 0 T hub = 1 2 t T 0 I 0 0 3 0 3 m 0 I 3 t 0 V η = - 1 2 η T K η η F η = -C η η (2.56a) (2.56b) (2.56c) (2.56d) Flexible appendages induce a variation of the potential energy V η , the Lagrangian expression (2.23) then becomes: L f lex = T hub + T appendages -V η (2.57) The dynamics of such systems are derived from the evaluation of (2.57) with the following Euler-Lagrange equations:          d dt ∂L f lex ∂ η - ∂L f lex ∂η = τ extη + F η d dt ∂L f lex ∂t 0 - ∂L f lex ∂x 0 = τ 0 + τ ext 0 (2.58a) (2.58b) Developing expressions in (2.58) allows to express the system dynamics under a compact form as: H 0 H 0η H T 0η H η q0 η + C 0 C 0η C η0 C η q0 η + 0 6×nη K η η = 0 6×1 0 nη×1 (2.59) where inertia matrices are projected in the base such that the angular dynamics of the base are expressed in R sat as:                    H η = I η H 0η = P T 0 L T ηp M 0 = I 0 0 3 0 3 m 0 I 3 + np i=1 B T P f i 0 I i 0 3 0 3 m i I 3 B P f i 0 H 0 = P T 0 M 0 P 0 (2.60a) (2.60b) (2.60c) (2.60d) convective terms as:                C 0 = P T 0 Ṁ0 - 1 2 ∂ ∂x 0 ηT L ηp + t T 0 M 0 P 0 C 0η = P T 0 LT ηp - 1 2 ∂ ∂x 0 t T 0 L T ηp C η0 = Lηp P 0 (2.61a) (2.61b) (2.61c) where the differentiation of L ηp according to x 0 , as detailed in appendix A, is the differentiation of matrices B P f i 0 : ∂ ∂x 0 ηT L ηp = ηT L P f 1 ∂ ∂x 0 B P f 1 0 . . . L P fn p ∂ ∂x 0 B P fn p 0 (2.62) and similarly for the time-derivative: Lηp = ηT L P f 1 ḂP f 1 0 . . . L P fn p ḂP fn p 0 (2.63) Integration of flexible appendages onto a rotation-free-floating dynamics In order to study rotation-free-floating SMS with flexible appendages, illustrated with figure 2.4, the equations of motions are derived from the previous sections capitalizing on the established formalism. It is assumed that flexible bodies are rigidly attached to the base or on the end of a kinematic chain that has no DoF. Then the rigid multi-body dynamics described by (2.43) can be extended onto a coupled flexible-rigid dynamics to include those flexible elements. 
Based on a Lagrangian formalism, as the actuated rigid kinematic chains only modifies the overall kinetic energy, the Lagrangian expression (2.57) can be extended with the actuators' kinetic energies. As developed in section 2.2.4 for the reaction-wheels integration, with the proposed modeling formalism, separating the kinematic chains allows to naturally decouple the different elements of the system and converge all of each element impacts on the common satellite base. Therefore, the Lagrangian expression (2.57) is augmented with the n q system's actuators kinetic energy, T q , as: L = L f lex + T q = L f lex + 1 2 nq i=1 t T i I i 0 3 0 3 m i I 3 t i (2.64) The Euler-Lagrange equations are given in respect to the base and actuator DoFs (i.e. q and x 0 ), introduced in (2.27), and the flexible DoFs (i.e. η) detailed in (2.58). Remaining as general as possible, the Euler-Lagrange equations are given for a flying SMS with flexible appendages and subject to external forces/torques as:                    d dt ∂L ∂ η - ∂L ∂η = τ extη + F η d dt ∂L ∂t 0 - ∂L ∂x 0 = τ 0 + τ ext 0 d dt ∂L ∂ q - ∂L ∂q = τ q + τ ext (2.65a) (2.65b) (2.65c) By evaluating the Lagrangian expression (2.64) with these Euler-Lagrange equations, the equations of motions for a flying SMS with flexible appendages and subject to external forces/torques are detailed in its most generic form as:    M 0 M 0q M 0η M T 0q M q 0 nq×nη M T 0η 0 nη×nq M η    M(x 0 ,q)    ṫ0 q η    +    D 0 D 0q D 0η D q0 D q 0 nq×nη D η0 0 nη×nq D η    D(q 0 ,q, q0 , q)    t 0 q η    +    0 6×nη 0 nq×nη K η η    =    τ 0 + τ ext 0 τ q + τ extq τ extη    (2.66) with the expressions of the inertia terms detailed as:                                      M 0 = I 0 0 3 0 3 m 0 I 3 + nq i=1 B T i0 I i 0 3 0 3 m i I 3 B i0 + np i=1 B T P f i 0 I i 0 3 0 3 m i I 3 B P f i 0 M 0q = nq i=1 B T i0 I i 0 3 0 3 m i I 3 J m i M q = nq i=1 J T m i I i 0 3 0 3 m i I 3 J m i M η = I η M 0η = L T ηp (2.67a) (2.67b) (2.67c) (2.67d) (2.67e) and the convective terms expressions given as:                                            D 0 = Ṁ0 - 1 2 ∂ ∂x 0 t T 0 M 0 + qM T 0q + ηM T 0η D 0q = Ṁ0q - 1 2 ∂ ∂x 0 qT M q + t T 0 M 0q D q0 = ṀT 0q - 1 2 ∂ ∂q t T 0 M 0 + qM T 0q D q = Ṁq - 1 2 ∂ ∂q qT M q + t T 0 M 0q D 0η = LT ηp - 1 2 ∂ ∂x 0 t T 0 L T ηp D η0 = Lηp (2.68a) (2.68b) (2.68c) (2.68d) (2.68e) (2.68f) The expressions of the time-derivative matrices are given in (2.30) and differentiations according to x 0 and q are detailed in appendix A. The addition of the flexible dynamics leads to a second order dynamic model describing the behavior of a flying SMS with flexible appendages. One will note that the actuators have an impact only through the motions of the base and reciprocally the flexible vibrations induce disturbances on the rest of the system by affecting first the base. With such model, no assumption on the angular velocities of the base are made. The only hypothesis is made on the finite element method to obtain the flexible DoFs and the following base reduction to consider a smaller number of DoFs. Moreover, to preserve the expression of matrices of the participation factors, the junction between a flexible body and the rigid hub should be punctual. To preferably expressed the dynamic model (2.66) with the base angular dynamics expressed in their body frame (i.e. 
$\dot{q}_0 = \begin{pmatrix}\omega_0^{sat\,T} & \dot{r}_{S_0}^T\end{pmatrix}^T$), the base transformation (2.14) is once again applied here. The reaction-wheels' velocities, $\dot{q}_r$, and the manipulator joints' ones, $\dot{q}_m$, are gathered in a generalized vector $\dot{q} = \begin{pmatrix}\dot{q}_r^T & \dot{q}_m^T\end{pmatrix}^T$. Furthermore, to express the equations of motion for a rotation-free-floating SMS with flexible appendages, only the actuators' control torques are considered and the external torques and forces are neglected. The overall dynamics are then given by adapting (2.66) with the appropriate changes as:

$$\underbrace{\begin{pmatrix} H_0 & H_{0q} & H_{0\eta}\\ H_{0q}^T & H_q & 0_{n_q\times n_\eta}\\ H_{0\eta}^T & 0_{n_\eta\times n_q} & H_\eta\end{pmatrix}}_{H(x_0,q)}\begin{pmatrix}\ddot{q}_0\\ \ddot{q}\\ \ddot{\eta}\end{pmatrix} + \underbrace{\begin{pmatrix} C_0 & C_{0q} & C_{0\eta}\\ C_{q0} & C_q & 0_{n_q\times n_\eta}\\ C_{\eta0} & 0_{n_\eta\times n_q} & C_\eta\end{pmatrix}}_{C(q_0,q,\dot{q}_0,\dot{q})}\begin{pmatrix}\dot{q}_0\\ \dot{q}\\ \dot{\eta}\end{pmatrix} + \begin{pmatrix} 0_{6\times1}\\ 0_{n_q\times1}\\ K_\eta\eta\end{pmatrix} = \begin{pmatrix} 0_{6\times1}\\ \tau_q\\ 0_{n_\eta\times1}\end{pmatrix} \quad (2.69)$$

with the inertia matrix $H(x_0, q)$ detailed, such that the base angular velocities are expressed in the base body frame $R_{sat}$, as:

$$\begin{cases} H_0 = P_0^T M_0 P_0 & (2.70a)\\ H_{0q} = P_0^T M_{0q} & (2.70b)\\ H_q = M_q & (2.70c)\\ H_\eta = M_\eta & (2.70d)\\ H_{0\eta} = P_0^T L_{\eta_p}^T & (2.70e)\end{cases}$$

and the convective matrix $C(q_0, q, \dot{q}_0, \dot{q})$ detailed as:

$$\begin{cases} C_0 = P_0^T D_0 P_0 & (2.71a)\\ C_{0q} = P_0^T D_{0q} & (2.71b)\\ C_{q0} = D_{q0}P_0 & (2.71c)\\ C_q = D_q & (2.71d)\\ C_{0\eta} = P_0^T D_{0\eta} & (2.71e)\\ C_{\eta0} = D_{\eta0}P_0 & (2.71f)\end{cases}$$

In the free-floating mode, the base control torques/forces $\tau_0$ are null; only the controlled reaction-wheels' torques, $\tau_r$, and the manipulator joints' control torques apply efforts on the base.

Matlab-Simulink simulation and analysis tools

Deriving the dynamics of a rigid multi-body system with different kinematic chains and flexible elements rigidly fixed to the system's base has been a first contribution of this work. Capitalizing on this modeling effort, which provides a recursive computation of the kinematics/dynamics of such systems, a Matlab-Simulink tool is developed. This section details the functionalities and potential uses of the tools. SPART (Spacecraft Robotics Toolkit) is an open-source software toolkit for the kinematic and dynamic modeling of a 6-DoF actuated spacecraft with different rigid kinematic chains; both flying and floating SMS can thus be modeled and simulated. A robot is described in a standardized XML (eXtensible Markup Language) description, which is an input of the simulation and analysis tool. Each element is individually described (dimension, inertia, mass) with its parent and child joints, as detailed in appendix B.

Simulator functionalities

From the descriptive functions of SPART that provide a tree mapping of the SMS, the kinematics/dynamics functions are adapted with the proposed Lagrangian approach developed in sections 2.2 and 2.3 to obtain the dynamics of a rotation-free-floating SMS with flexible appendages, as given by equations (2.66) and (2.69). Moreover, to guarantee computation efficiency, the rigid dynamics has been derived with a Newton-Euler modeling approach [START_REF] Virgili-Llop | SPART: an open-source modeling and control toolkit for mobile-base robotic multibody systems with kinematic tree topologies[END_REF].
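For the flexible extension, the coupling blocks of (2.69)-(2.70) are stacked on top of the rigid inertia matrix. A minimal sketch, assuming a normalized modal base ($H_\eta = I$) and with illustrative names:

```matlab
function [H, K, Ceta] = flexible_blocks(H_rigid, P0, L_etap, omega, zeta)
% FLEXIBLE_BLOCKS  Augments the rigid inertia matrix with the flexible
% couplings of eqs. (2.69)-(2.70); omega, zeta: modal pulsations/dampings.
    neta  = numel(omega);
    nrig  = size(H_rigid, 1);
    H0eta = P0' * L_etap';                   % H_0eta = P0^T * L_etap^T, (2.70e)
    H = blkdiag(H_rigid, eye(neta));         % H_eta = I in the normalized base
    H(1:6, nrig+1:end) = H0eta;
    H(nrig+1:end, 1:6) = H0eta';
    K    = blkdiag(zeros(nrig), diag(omega(:).^2));  % stiffness block K_eta
    Ceta = diag(2*zeta(:).*omega(:));                % modal damping block C_eta
end
```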
As a first input of the elaborated Matlab-Simulink toolbox, the studied robot is thus described link by link in an XML description with the following required information:

• Physical properties: mass, inertia, size for a rigid body-link and, additionally for a flexible appendage, the number of flexible modes, their natural frequencies, dampings and the matrix of the participation factors
• Position in the kinematic chains: the attached links (one parent-link and, for an element other than an end-effector, a child-link), and the initial position in the reference frame of the parent-link
• Joint nature: either fixed, prismatic or revolute

One will note that the actuators' dynamics and effort limits are not considered; they can, however, easily be added in a Simulink model. Thanks to the overall functions established to compute both the direct and inverse kinematics/dynamics, time-domain simulations are obtained with Simulink models. For a given spacecraft configuration (joint positions/orientations, base attitude and position in $R_{ine}$) and joint/link velocities, the inverse kinematics/dynamics functions allow the computation of the inertia and convective matrices. Respectively, the direct kinematics/dynamics functions provide the spacecraft configuration and system velocities as functions of the actuators' torques, the base actuating torques and the system's external forces/torques applied to the robot. Thus, from a general standpoint, one can introduce the state vector $x$ composed of the 6-DoF base states, the actuator states and the flexible ones. Then, for a flying SMS with flexible appendages, the equations of motion are given by the general second-order equation (2.72). Such a system can be studied and analyzed with these tools for a given configuration or a manipulator motion.

$$H\ddot{x} + C\dot{x} + Kx = \tau_{base} + \tau_{actuators} + \tau_{ext} \quad (2.72)$$

A description of what is later referred to as the main Simulink function is given in Figure 2.5. In order to obtain the direct dynamics of a system whose behavior is described by (2.72), an initialization step is first established. In this initial step, the XML description is converted into a Matlab structure taking into account the tree topology of the SMS, and an initial pose is given. Then, for every time step, the overall velocities and Jacobians are computed from the measurement of the current state, so that equation (2.15) can be evaluated. The inertia, convective and stiffness matrices, as developed in (2.69), are then computed from the joint states and the physical properties given in the XML description. Finally, for the considered forces/torques (i.e. the base control forces/torques $\tau_{base}$, the actuator control forces/torques $\tau_{actuators}$ or any external forces/torques $\tau_{ext}$), the new state of the SMS is obtained by integration of the system's accelerations.

Furthermore, as illustrated in Figure 2.6, visualization tools are available. Besides providing a visual rendering of a manipulator motion, they allow one to verify, in a first step, the proper construction of the XML description and, likewise, to visually check the feasibility of a motion. In Figure 2.6, two SMS examples are provided, for which the base is represented as a red cylinder. In both cases, the gray cylinders inside the base correspond to the reaction-wheels, and the smaller red cylinders correspond to the different junction/actuator positions. The SMS on the right side of Figure 2.6 likewise illustrates that flexible elements can be positioned on a body rigidly fixed to the base.
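Coming back to the main function of Figure 2.5, its loop can be summarized by the following runnable sketch. Here the matrices are frozen stand-ins for (2.72), kept constant for readability; in the tool they are recomputed at every step from the robot structure, together with the velocities and Jacobians of (2.15):

```matlab
% Minimal time-stepping loop mirroring the main function of Figure 2.5
% (illustrative only; constant H, C, K as toy stand-ins for eq. (2.72)).
n  = 9;                                  % toy size: 6 base DoFs + 3 joints
H  = eye(n);  C = 0.01*eye(n);  K = zeros(n);
x  = zeros(n,1);  xdot = zeros(n,1);     % initial pose and velocities
dt = 0.01;  T_end = 10;                  % fixed step, as in the Simulink model
for t = 0:dt:T_end
    tau   = zeros(n,1);                  % controller output (none here)
    xddot = H \ (tau - C*xdot - K*x);    % accelerations from eq. (2.72)
    xdot  = xdot + dt*xddot;             % explicit Euler update
    x     = x + dt*xdot;                 % (ode45 is used in practice)
end
```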
In the latter example of Figure 2.6, four flexible beams are attached to the first payload. To put it concisely: from a robot description, a pre-analysis can first be developed to identify the manipulator's feasible motions and configurations that do not conflict with the rest of the system's elements. Secondly, time-domain simulations can be obtained with Simulink models such that, for forces/torques and a given robot structure provided as inputs, an updated SMS state can be computed at any time.

Validation and performances of the developed simulation tools

The modeling formalism has been adapted to the pre-existing SPART, first to integrate reaction-wheels and secondly to extend the kinematics/dynamics with flexible behaviors. A validation step is therefore essential to catch the numerous potential coding errors and to highlight the limits due to the choice of solver, sampling and other errors induced by the numerical integration. The validation process presented in this section is divided into two steps. First, the correct integration of the base's actuators is assessed with a system momentum discussion similar to the one proposed in [START_REF] Wilde | Equations of motion of free-floating spacecraft-manipulator systems: an engineer's tutorial[END_REF] for a rigid SMS. Secondly, the flexible integration is verified against existing tools that derive kinematic/dynamic models with a Newton-Euler approach [ACT08].

Simulation of rigid systems

For given input torques, $\tau_{in}$, the system's accelerations are computed with (2.43). Thus, the simulator's performances are evaluated with a threshold, $\lambda$, considering the error defined from (2.43) as:

$$\tilde{\tau} = \tau_{in} - \left(H\ddot{x} + C\dot{x}\right) < \lambda \quad (2.73)$$

The value of $\lambda$ quantifies the numerical errors, or noise, and allows one to conclude on the consistency of the output data.

Simulator validation

As discussed in [START_REF] Wilde | Equations of motion of free-floating spacecraft-manipulator systems: an engineer's tutorial[END_REF], a rigid free-floating SMS that is not subject to external torques/forces keeps its CoM and momenta unchanged during a given manipulator maneuver. This property is thus considered in the validation process of the rigid multi-body model. Considering an SMS whose motions are described by (2.43), for an initially null linear and angular momentum, the momentum residual is bounded in $R_{ine}$ by [START_REF] Wilde | Equations of motion of free-floating spacecraft-manipulator systems: an engineer's tutorial[END_REF]:

$$\left\| M_0 t_0 + M_{0m}\dot{q}_m + M_{0r}\dot{q}_r\right\| < \lambda \quad (2.74)$$

with the reaction-wheels only affecting the evolution of the angular momentum. In order to perform a stable and feasible SMS maneuver (i.e. without reaching singular configurations) and verify the momentum conservation, a simple control law is established. Based on classical robotic control applications, an NDI (nonlinear dynamic inversion) is introduced to decouple and linearize the system, combined with a simple proportional control of the joint velocities. From (2.43), a control torque achieving an NDI for a rotation-free-floating SMS is given as:

$$\tau_{q_{in}} = H_m K\left(\dot{q}_d - \dot{q}\right) + C_m\dot{q}_d + H_{0m}^T\ddot{q}_0 + C_{m0}\dot{q}_0 \quad (2.75)$$

with $K$ a diagonal matrix corresponding to a linear proportional control gain and $\dot{q}_d$ the desired joint velocities.

Discussion with time-domain simulations

To obtain consistent time-domain results, the Simulink models are given the following solver parameters:

• Solver: ode45
• Fixed step: 0.01 s

and simulations are run on a standard computer with an Intel i7 CPU. Firstly, the simulator performances are evaluated in order to properly comment the simulation results.
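In practice, both the momentum residual (2.74) and the NDI torque (2.75) are evaluated at each step from the current dynamic matrices. A minimal sketch (hypothetical helper; all matrices are passed in from the dynamics functions):

```matlab
function [h_res, tau_q] = validation_signals(M0, M0m, M0r, t0, qm_dot, ...
    qr_dot, Hm, Cm, H0m, Cm0, Kp, qd_dot, q0_dot, q0_ddot)
% VALIDATION_SIGNALS  Momentum residual of eq. (2.74) and NDI torque of (2.75).
    h_res = M0*t0 + M0m*qm_dot + M0r*qr_dot;   % should stay below lambda
    tau_q = Hm*(Kp*(qd_dot - qm_dot)) + Cm*qd_dot ...
          + H0m'*q0_ddot + Cm0*q0_dot;         % decoupling + P velocity loop
end
```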
As visualized with the evolution of the torque error signal (2.73) plotted in Figure 2.8, corresponding to the input torques represented in the upper subplot of Figure 2.9, data with values lower than λ = 10 -13 are considered as under the simulator precision, or in another words corresponds to zero values. Secondly, the developed kinematic/dynamic functions are validated considering the evolution of the system's momentum for a manipulator's/reaction-wheels' motion. With the control torque computation being as in (2.75), first the reaction-wheels are used then the manipulator's joints as one can notice with Figure 2.9. This allows to observe the impact of the SMS's actuators as decomposed in the three first rows of subplot in Figure 2.10. As expected, reaction-wheels only modify angular momentum while the manipulator motions affect both linear and angular. Last row of subplot in Figure 2.10, allows to conclude on the momentum conservation and consequently on the accurate implementation of the modeling functions to study the behavior of a rotation-free-floating SMS. Such conclusion is made considering the values of the angular and linear momentum that remains at the precision λ. Moreover, one will note that the numerical error is due to a coupling transfer between the angular and linear spacecraft's dynamics when the manipulator is moving. Furthermore, interpretations of the momentum distribution can be developed based on its conservation. As visualized with Figure 2.10, for each manipulator's motions a counterreaction of the base will compensate with a linear motion the accumulated momentum. Similarly, each manipulator's motions inducing angular momentum are compensated with both reaction-wheels and base motions. Similar and reciprocal behaviors occur with the use of reaction-wheels. Validation of the flexible dynamics integration Tools validation Flexible dynamics induces dissipative forces and consequently there is no kinetic momentum conservation. For this reason validation cannot be based on a similar conservatism approach as developed in the section 2.4.2.1 and we will therefore use existing and confirmed modeling tools. The toolbox that came by Alazard and al. [START_REF] Alazard | Linear dynamic modeling of spacecraft with various flexible appendages and on-board angular momentums[END_REF], SDT 16 , allows to develop linear models of multi-body systems for space applications. Based on a Newton-Euler modeling method, for each body a transfer function between the forces/torques and accelerations is associated. In particular, for a spacecraft with flexible appendages attached with a cantilever or revolute joint to the main hub, a hybrid-cantilever model is used to obtain the dynamic model for each appendage. Then, in respect to the topology of the system and through combination of the different transfer function a linearized dynamic model of the overall system is computed. The verification process consists in a comparison of linear models computed from our tools and the ones obtained with SDT. The following steps are to conclude on the accuracy of functions' implementation integrating the flexible dynamics onto the rigid ones of a flying SMS: • Comparison between the rigid behaviors obtained with the SDT and our tools by observing the static gains of SDT's direct linear models and the ones linearized from our tools , allows to compute the system's accelerations, ẍ = ẍT 0 qT ηT T , with (2.69). 
Then, the torque error signal, used to consider simulator numerical precision is defined as: • τ = τ in -   Hẍ + C ẋ +    0 6×nη 0 nq×nη K η η       < λ (2.76) The solver parameters detailed in the previous paragraph are preserved, nevertheless some adjustments may be required with the flexible parameters to accurately compute the flexible dynamics. Time-domain simulations are run on the SMS detailed in appendix B. Evaluation of τ is obtained for a basic SMS motion given with the input torques, τ q , illustrated with the upper subplot in Figure 2.13. The simulator's limitations are quantified with the torque error signal, τ (2.76), represented in the lower subplot in Figure 2.13. One can then conclude that data presenting amplitudes lower than λ = 10 -13 correspond to null values. This consideration may lead to discussions on the physical properties of flexible modes considered. Example of possible analysis with modeling tools SMS couplings studies are primordial prior to developing control strategies. In the literature for rigid systems, the influence between each SMS's body has been based on a momentum conservation hypothesis. However, it is ambitious to develop similar assumptions in presence of flexible appendages. With the modeling effort developed and with the associated tools, an energetic based analysis can be introduced to illustrate and evaluate participations of each elements on the overall system. In order to analyze flexible dynamics evolution according to manipulator motions, one can consider the comparison between spontaneous powers and works of rigid and flexible elements. To distinguish flexible participations from the rigid ones, (2.69) is rewritten as: H 0 H 0q H T 0q H q q0 q F rig ine + C 0 C 0q C q0 C q q0 q F rig conv = 0 6×1 τ q F rig ext - H 0η C 0η 0 nq×nη 0 nq×nη η η F f lex→rig ext (2.77) with F rig ine an inertial force, F rig conv a convective force, F rig ext an external control torque and F f lex→rig ext an external flexible force affecting the rigid dynamics. Similarly from (2.69), forces/torques can be defined for the flexible dynamics as: H η η F f lex ine + C η η F f lex conv + K η η F f lex pot = -H T 0η C η0 q0 q0 F rig→f lex ext (2.78) with respectively F f lex ine , F f lex conv and F f lex pot an inertial, a convective and a potential flexible force. F rig→f lex ext an external force corresponding to the influence of the rigid elements motions inducing flexible vibrations. Before introducing the work of each force, the powers are defined with (2.77) and with (2.78) as:                                              P rig ine = qT 0 qT F rig ine P rig conv = qT 0 qT F rig conv P rig ext = qT 0 qT F rig ext P f lex→rig ext = qT 0 qT F f lex→rig ext P f lex ine = ηF rig ine P f lex conv = ηF f lex conv P f lex pot = ηF f lex pot P rig→f lex ext = ηF rig→f lex ext (2.79a) (2.79b) (2.79c) (2.79d) (2.79e) (2.79f) (2.79g) (2.79h) and respectively, for each power the work associated is defined as its integral such that: ∀T > 0, W = T 0 Pdt (2.80) If the multi-body system is entirely rigid, one can retrieve the kinetic momentum conservation. 
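In practice, the powers (2.79) and works (2.80) are evaluated directly from the simulation histories; a minimal sketch (generic helper, names are ours):

```matlab
function [P, W] = power_and_work(F, v, t)
% POWER_AND_WORK  Spontaneous power (2.79) and work (2.80) of one force
% channel. F, v: n-by-N histories of a force and its conjugate velocity;
% t: 1-by-N time vector.
    P = sum(F .* v, 1);          % P(k) = v(:,k)' * F(:,k)
    W = cumtrapz(t, P);          % W(T) = integral of P over [0, T]
end
```

For instance, the external rigid power of (2.79c) is obtained by feeding this helper with the stacked generalized forces $\begin{pmatrix}0_{6\times1}\\ \tau_q\end{pmatrix}$ and the corresponding velocity history $\begin{pmatrix}\dot{q}_0\\ \dot{q}\end{pmatrix}$.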
Considering either the involved powers or the works, the system's conservatism is verified with:

$$\begin{cases} \mathcal{P}^{rig}_{ine} + \mathcal{P}^{rig}_{conv} = \mathcal{P}^{rig}_{ext} & (2.81a)\\ \mathcal{W}^{rig}_{ine} + \mathcal{W}^{rig}_{conv} = \mathcal{W}^{rig}_{ext} & (2.81b)\end{cases}$$

Extending this observation to the coupled rigid-flexible case, a conservatism of powers and works can be verified to analyze the solicitations of the system's elements and their impact on the overall SMS. From both equations (2.77) and (2.78), the flexible-rigid rotation-free-floating SMS verifies:

$$\begin{cases} \mathcal{P}^{rig}_{ine} + \mathcal{P}^{rig}_{conv} = \mathcal{P}^{rig}_{ext} + \mathcal{P}^{flex\to rig}_{ext} & (2.82a)\\ \mathcal{P}^{flex}_{ine} + \mathcal{P}^{flex}_{conv} + \mathcal{P}^{flex}_{pot} = \mathcal{P}^{rig\to flex}_{ext} & (2.82b)\\ \mathcal{W}^{rig}_{ine} + \mathcal{W}^{rig}_{conv} = \mathcal{W}^{rig}_{ext} + \mathcal{W}^{flex\to rig}_{ext} & (2.82c)\\ \mathcal{W}^{flex}_{ine} + \mathcal{W}^{flex}_{conv} + \mathcal{W}^{flex}_{pot} = \mathcal{W}^{rig\to flex}_{ext} & (2.82d)\end{cases}$$

This allows the quantification of the work and power lost when flexible vibrations occur due to the system couplings. With (2.82), the respective losses of work and power are defined as:

$$\begin{cases} \mathcal{W}_l = \mathcal{W}^{flex\to rig}_{ext} - \mathcal{W}^{rig\to flex}_{ext} & (2.83a)\\ \mathcal{P}_l = \mathcal{P}^{flex\to rig}_{ext} - \mathcal{P}^{rig\to flex}_{ext} & (2.83b)\end{cases}$$

An illustration of the evolution of the works in the system is developed with Figures 2.14 and 2.15 to analyze the influence of the flexible appendages in an SMS. Considering the SMS motions obtained with the input control torques represented in Figure 2.13, a comparison between the rigid and flexible elements' works allows the evaluation of the actuators' influence on the energetic loss.

• First, the impact of the manipulator is mainly on the variation of the system's inertia, since the mass distribution changes with the configuration. This is visualized in the left subplots of Figure 2.14, in which the rigid works are detailed.
• Secondly, the reaction-wheels mostly affect the evolution of the convective terms. The base actuators modify the spacecraft's angular velocities without changing the mass distribution in the overall system. Given the typically low manipulator joint velocities in comparison to the reaction-wheels', as illustrated in the second row of subplots in Figure 2.14, the base actuators have a larger impact on the convective terms. This is explained by the convective terms corresponding to cross products of the system's DoF velocities: large reaction-wheel velocities thus lead to large convective terms.

A second observation and analysis can be made based on the direct couplings between the reaction-wheels, the base angular velocities and the flexible mode vibrations.

• A first note can be made on the influence of the reaction-wheels' use on the flexible appendages' vibration: as visualized with the flexible modes' evolutions represented in Figure 2.16, vibrations are mainly excited when the base actuators are used. This is easily explained by considering the convective terms and the system's velocities.
• Considering the works derived from the flexible dynamics, visualized in the right subplots of Figure 2.14, the manipulator poorly affects the base's rotations for the given SMS motion. Therefore, a flexible dissipative work, quantified with (2.83), appears when the reaction-wheels are used, as illustrated with Figure 2.15.

Chapter conclusions

To summarize the contribution of this chapter, a modeling formalism has been elaborated to establish simulation and analysis tools for a floating SMS with flexible appendages. It has been identified from the literature review developed in the previous chapter that integrating flexible dynamics with those of a floating SMS remained challenging and of interest for the study of current applications.
The methods proposed in the literature either carry numerous restrictive hypotheses or apply to simple space manipulators. Therefore, after establishing a rigid model of a floating SMS with the DH formalism and a Lagrangian approach, the flexible dynamics are included by adapting the Lagrangian approach. The choice of a Lagrangian approach over a Newton-Euler one is motivated by its generality and by the simpler, more systematic modeling formalism it offers. The modeling formalism provides a recursive description of the system kinematics, which has allowed the extension of the existing simulation tools, SPART, with the integration of reaction-wheels and flexible dynamics. The new tools allow different analyses of the system to be produced, as introduced in this chapter with the energetic discussions. Pre-design of the SMS, workspace analyses, path-planning studies and the evaluation of coupling influences are a few examples of the potential uses of the analysis tools. With the simulation tools, one can validate control laws or even study the stability of rigid-flexible systems. In the following chapters, they will be used to develop new control strategies.

In the previous chapter, a modeling formalism has been introduced such that analysis and simulation tools have been established. At the conclusion of that chapter, the developed tools allow both the study of, and time-domain simulations on, flying (and floating) SMS with flexible appendages. In this chapter, steering laws are synthesized and validated with the developed tools. First, from a short literature review, areas of improvement in control strategies for SMS with flexible appendages are listed. After illustrating the limits of a classical control approach of SMS, a control that takes advantage of all the available actuators to improve the control performances is proposed and discussed. From the conclusions on a common control of the base and the manipulator's joints, a steering law is introduced to deal with flexible appendage vibrations, also referred to here as the system's internal disturbances.

Areas of improvements

Challenges of developing steering laws for SMS with flexible appendages

With the large variety of SMS applications come numerous control challenges [START_REF] Flores-Abad | A review of space robotics technologies for on-orbit servicing[END_REF]; [START_REF] Li | On-orbit service (OOS) of spacecraft: A review of engineering developments[END_REF]. Firstly, the space environment in which the manipulator evolves leads to different couplings between the bodies composing the SMS. This complexifies the control of the system, and different solutions have been developed through the years to attenuate the negative effects of these couplings. A second difficulty lies in the presence of flexible appendages. As developed in the previous chapter, for a base motion, caused either by the use of reaction-wheels or by the manipulator's motions, vibrations may destabilize the SMS. In addition to the couplings between the manipulator's end-effector and the appendages, studied by Meng et al. [START_REF] Meng | Space robots with flexible appendages: dynamic modeling, coupling measurement, and vibration suppression[END_REF], vibrations of the flexible elements may be caused by unpredictable environmental disturbances, as developed by Cao et al. [START_REF] Cao | Thermal alternation induced vibration analysis of spacecraft with lateral solar arrays in orbit[END_REF]. For those reasons, improving the autonomous control of SMS remains a challenging task.
System couplings: In the present study, the focus is on rotation-free-floating SMS with reaction-wheels as the dedicated base actuators. With such manipulators, the global kinetic momentum remains constant, which may induce undesired motions caused by different couplings. The main concern is the influence of the manipulator on the base, which should keep a fixed attitude to ensure the quality of the power supply (orientation of the solar panels) and of the communication with the ground (orientation of the antenna). Capitalizing on the momentum conservation, studies have considered path-planning approaches in that matter [START_REF] Papadopoulos | Smooth planning for free-floating space robots using polynomials[END_REF]; [START_REF] Li | Assembly dynamics of a large space modular satellite antenna[END_REF] or adapted the motions of the manipulator with kinematic/dynamic constraints based on the GJM (Generalized Jacobian Matrix) expression [START_REF] Hu | Minimum base attitude disturbance planning for a space robot during target capture[END_REF]. Moreover, path-planning methods have been established in capture applications such that, by reducing the relative motions between the target and the servicer's end-effector, the influences of both the impact and the manipulator maneuvers are lowered [START_REF] Rybus | Control system for free-floating space manipulator based on nonlinear model predictive control (NMPC)[END_REF]; [START_REF] Yang | A robust and adaptive control method for flexible-joint manipulator capturing a tumbling satellite[END_REF]; [START_REF] Lu | Trajectory Planning of Satellite Base Attitude Disturbance Optimization for Space Robot[END_REF]. Similarly, impedance control strategies lead to comparable results [START_REF] Nakanishi | Impedance control for free-flying space robots-basic equations and applications[END_REF]. Furthermore, the use of null-space projectors, originally introduced by Nenchev et al. [START_REF] Dragomir | Reaction null-space control of flexible structure mounted manipulator systems[END_REF], has allowed control strategies to be developed in which the manipulator's motions do not affect the base motions [START_REF] Pisculli | A minimum state multibody/FEM approach for modeling flexible orbiting space systems[END_REF]; [START_REF] Ye | Research on Adaptive Reaction Null Space Planning and Control Strategy Based on VFF-RLS and SSADE-ELM Algorithm for Free-Floating Space Robot[END_REF]. Nevertheless, these methods are mainly based on the momentum properties of free-floating systems or established for simple manipulators. As discussed in the previous chapter, flexible elements generate a dissipative force when they start vibrating. Such a force prevents one from considering any momentum conservation hypothesis.

Active control: To overcome the difficulty of establishing control laws without momentum conservation, studies have considered active control. Early strategies were based on simple flexible models and were limited to small satellites. Two main active control methods can be distinguished.
First, the control of the DoF (degree of freedom) between the appendage and the spacecraft's base to reject the vibration effects on the base [START_REF] Hirano | Vibration suppression control of a space robot with flexible appendage based on simple dynamic model[END_REF]; [START_REF] Hirano | Simultaneous control for end-point motion and vibration suppression of a space robot based on simple dynamic model[END_REF]; [START_REF] Hu | Semi-active vibration control of two flexible plates using an innovative joint mechanism[END_REF]; [START_REF] Guo | Active control technology for flexible solar array disturbance suppression[END_REF]. Secondly, the rejection of the flexible deformations with piezoelectric actuators positioned on the appendage [START_REF] Hu | Vibration suppression of flexible spacecraft during attitude maneuvers[END_REF]; [START_REF] Zarafshan | Adaptive hybrid suppression control of space free-flying robots with flexible appendages[END_REF]. The placement and number of such actuators matter to guarantee a maximal deformation threshold of the appendage while accurately controlling the spacecraft base attitude [START_REF] Angeletti | End-to-end design of a robust attitude control and vibration suppression system for large space smart structures[END_REF]. Moreover, the use of control moment gyroscopes between two flexible bodies has been studied by identifying fast and slow sub-system dynamics [START_REF] Jia | Maneuver and active vibration suppression of free-flying space robot[END_REF]. Control moment gyroscopes have also been considered for placement on the flexible structure for active vibration suppression [START_REF] Hu | Attitude control and vibration suppression for flexible spacecraft using control moment gyroscopes[END_REF]. However, in future space exploitation and exploration missions [START_REF] Li | On-orbit service (OOS) of spacecraft: A review of engineering developments[END_REF], the use of dedicated actuators is quite difficult to set up, and from an energetic point of view it may be contradictory with the purpose of increasing the lifespan of the missions. Another difficulty of active control is that a proper modeling of the flexible dynamics should be established, which remains challenging depending on the studied space structure. To address this concern, boundary controllers have been designed to ensure the asymptotic stability of the satellite attitude control without accurate system modeling [START_REF] Mahdi | Boundary control design for vibration suppression and attitude control of flexible satellites with multi-section appendages[END_REF].

Passive control: Different alternatives to active control have been developed to reject both internal and external system disturbances. With a limited knowledge of the perturbation properties, adaptive disturbance rejection filters have been developed to avoid modal excitation of the flexible elements when controlling the manipulator [START_REF] Chu | Vibration control of maneuvering spacecraft with flexible manipulator using adaptive disturbance rejection filter and command shaping technology[END_REF]. Quantifying the mutual influences between a source of internal disturbance and the manipulator motions has also been used to introduce an SMS control strategy. For instance, Rackl et al.
[START_REF] Rackl | Analysis of liquid fuel sloshing on free-floating robot dynamics under low-gravity condition[END_REF] adapted the manipulator control according to the reciprocal influence between the robotic arm and the sloshing of the fuel tank. Moreover, Meng et al. [START_REF] Meng | Space robots with flexible appendages: dynamic modeling, coupling measurement, and vibration suppression[END_REF] proposed to derive the flexible dynamics of the appendages such that a vibration rejection strategy can be established [START_REF] Meng | Vibration suppression control of free-floating space robots with flexible appendages for autonomous target capturing[END_REF]. Nevertheless, the difficulty of deriving the flexible dynamics motivates the development of robust control strategies. With $H_\infty$ controllers, both modeling uncertainties and external disturbances can be tackled [START_REF] De Fpa Taveira | Adaptive nonlinear H ∞ controllers applied to a free-floating space manipulator[END_REF]; [START_REF] Colmenarejo | Results of the COMRADE project: combined control for robotic spacecraft and manipulator in servicing missions: active debris removal and re-fuelling[END_REF]; [START_REF] Qiao | High-precision attitude tracking control of space manipulator system under multiple disturbances[END_REF]. Furthermore, flexible vibrations are usually not measurable, and estimation techniques have been developed. Disturbance observers are adopted to decouple the system [START_REF] Zhongyi | Disturbance observer-based robust control of free-floating space manipulators[END_REF] or to improve the perturbation rejection [START_REF] Chu | Fuzzy adaptive disturbance-observerbased robust tracking control of electrically driven free-floating space manipulator[END_REF]; [START_REF] Qiao | High-precision attitude tracking control of space manipulator system under multiple disturbances[END_REF].

Current solutions

An important aspect of future space missions is their lifespan. For that purpose, control strategies have focused on a reduced use of the base actuators, especially when reaction jets are involved. Giordano et al. [START_REF] Giordano | Workspace fixation for free-floating space robot operations[END_REF] first proposed a workspace adjustment strategy to compensate for manipulator motions with an efficient use of thrusters. A later approach developed by Giordano et al. [START_REF] Massimo | Coordinated Control of Spacecraft's Attitude and End-Effector for Space Robots[END_REF] is based on the simultaneous control of the SMS global center of mass. Besides propellant-efficient strategies, interest in electrical kinetic moment exchange devices has risen. The combined use of reaction-wheels and thrusters has been studied by Giordano et al. [START_REF] Giordano | Coordination of thrusters, reaction wheels, and arm in orbital robots[END_REF] such that only the critical moments of the capture of a non-cooperative target are handled with the thrusters. Furthermore, in most applications, manipulator and base control are performed separately. This can be illustrated by Wu et al. [START_REF] Wu | Attitude control for on-orbit servicing spacecraft using hybrid actuator[END_REF], who considered the combination of reaction-wheels and control moment gyroscopes to maintain the satellite platform fixed during manipulator operations. This strategy aimed at dealing with the low reaction-wheel saturation thresholds.
Additionally, recently implemented control strategies also illustrate the choice of a separate base and manipulator control [START_REF] Giordano | Workspace fixation for free-floating space robot operations[END_REF]; [START_REF] Cumer | MODELLING AND ATTITUDE CONTROL DESIGN FOR AUTONOMOUS IN-ORBIT ASSEMBLY[END_REF]; [START_REF] Colmenarejo | Results of the COMRADE project: combined control for robotic spacecraft and manipulator in servicing missions: active debris removal and re-fuelling[END_REF]. This presents some benefits to ensure a precise control of the robotic arm on one side and of the base on the other. However, in this chapter, the interest of a common base/manipulator actuator control is put forward.

Illustration of the limitations of a classical control approach

Before developing any further discussion on a common reaction-wheel and manipulator control, a short observation of the limits of a separated control strategy is detailed here on a simple example. A first control law is introduced based on a simplified version of the strategies proposed by Cumer et al. [START_REF] Cumer | MODELLING AND ATTITUDE CONTROL DESIGN FOR AUTONOMOUS IN-ORBIT ASSEMBLY[END_REF]. As illustrated by Figure 3.1, the proposed methods separate the control of the base actuators from that of the manipulator's joints. Moreover, the control torques are computed from a single system linearization, and constant control gains are synthesized with robust control approaches. In order to simply illustrate the potential limits of such methods, a simplification of the control proposed in the literature is considered without developing the $H_\infty$ synthesis.

Figure 3.1 - Block diagram of the classical control approach in the literature

From the modeling of a rigid system (2.43), developed in the previous chapter, a separate control of the reaction-wheels and of the manipulator's joints is introduced, inspired from current control methods found in the literature. On one side, the manipulator is considered as a rigid multi-body kinematic chain on a fixed base; thus its equations of motion are adapted from (2.43) by neglecting the base dynamics and couplings, such as:

$$H^0_m \ddot{q}_m + C^0_m \dot{q}_m = \tau_m \quad (3.1)$$

where $H^0_m$ and $C^0_m$ are nominal matrices obtained from a system linearization. Such an expression of the manipulator dynamics considers a fixed spacecraft base. On the other side, the base rotational dynamics is expressed with only the reaction-wheels as a means of control. Thus, isolating the reaction-wheels and base dynamics from (2.43) by neglecting the coupling with the manipulator, one first expresses the overall base dynamics as:

$$\begin{bmatrix} H^0_\omega & H^0_{\omega r} \\ H^{0\,T}_{\omega r} & H^0_r \end{bmatrix}\begin{bmatrix} \dot{\omega}^{sat}_0 \\ \ddot{q}_r \end{bmatrix} + \begin{bmatrix} C^0_\omega & C^0_{\omega r} \\ C^0_{r\omega} & C^0_r \end{bmatrix}\begin{bmatrix} \omega^{sat}_0 \\ \dot{q}_r \end{bmatrix} = \begin{bmatrix} 0_{3\times1} \\ \tau_r \end{bmatrix} \quad (3.2)$$

with the superscript 0 used to indicate nominal values of the matrices (i.e. from a linearized model). One will note that $H^0_r = H_r$ and that, for a system CoM (center of mass) fixed in the inertial frame $\mathcal{R}_{ine}$, $H^0_{\omega r} = H_{\omega r}$. The base rotations (3.2) can be expressed without the reaction-wheels' accelerations, as measurements of them are not necessarily available. Combining both lines of (3.2), the base dynamics are given by:

$$\left(H^{0\,T}_{\omega r} - H_r H^{0\,+}_{\omega r} H^0_\omega\right)\dot{\omega}^{sat}_0 + \left(C^0_{r\omega} - H^0_r H^{0\,+}_{\omega r} C^0_\omega\right)\omega^{sat}_0 + \left(C^0_r - H^0_r H^{0\,+}_{\omega r} C^0_{\omega r}\right)\dot{q}_r = \tau_r \quad (3.3)$$

This allows a naturally decoupled base/manipulator system to be obtained.
Then, introducing the constant linear control gains $K_0 \in \mathbb{R}^{3\times3}$ and $K_m \in \mathbb{R}^{n_m\times n_m}$, the respective manipulator and reaction-wheel computed torques are given as:

$$\tau_{mc} = H^0_m K_m\left(\dot{q}_{m_d} - \dot{q}_m\right) + C^0_m \dot{q}_m \quad (3.4a)$$
$$\tau_{rc} = \left(H^{0\,T}_{\omega r} - H^0_r H^{0\,+}_{\omega r} H^0_\omega\right)K_0\left(\omega^{sat}_{0_d} - \omega^{sat}_0\right) + \left(C^0_{r\omega} - H^0_r H^{0\,+}_{\omega r} C^0_\omega\right)\omega^{sat}_0 + \left(C^0_r - H^0_r H^{0\,+}_{\omega r} C^0_{\omega r}\right)\dot{q}_r \quad (3.4b)$$

Thus, each set of actuators is decoupled: $\tau_{mc}$ decouples the manipulator's joints and $\tau_{rc}$ the reaction-wheels. Moreover, $H^0_{\omega r}$ is a constant matrix depending on the choice of the reaction-wheels' orientations in the spacecraft base, and thus its pseudo-inverse is perfectly defined. In order to illustrate the performances of this control law, the tools developed in the previous chapter are used. A simple SMS is considered with a 5-DoF manipulator and four reaction-wheels. Additionally, ten flexible modes are considered, whose physical properties, as well as those of the overall spacecraft, are given in Appendix B. The desired SMS motion consists of moving a 150 kg mass with the manipulator, as illustrated in Figure 3.2, while maintaining the base at a fixed attitude. The control gains are computed with the nominal dynamics (3.1) and (3.3) to guarantee a low closed-loop response time. This can be obtained by imposing the control gains as diagonal matrices whose coefficients correspond to the inverse of the desired time response.

With Figure 3.3, in which the joint velocities are plotted, one can conclude on the poor performances of the velocity control. One will observe that, from fifty seconds on, the velocities of joints 2 and 5 exhibit an increasing tracking error until the end of the manipulator's motion, while the three other joints successfully follow the desired velocities. This can be explained by the linearization choice made to develop the computed torques (3.4). With a linearization performed for the nominal SMS configuration, the manipulator is adequately controlled until fifty seconds of motion; then the system linearization is too far from the actual state to provide appropriate control performances. This is why robust control synthesis and/or adaptive gains are required to perform even simple motions such as the present one. The responsibility for the poor control performances indeed lies with the linearization, as illustrated by the base being successfully stabilized at a null angular velocity: as shown in the second subplot of Figure 3.4, the angular rates of the base remain null during the manipulator's motions. Moreover, one will note that imposing fast control dynamics leads to control gains $K_0$ sufficiently large to be robust to the influence of the floating system's couplings (i.e. between the motions of the manipulator's end-effector and the base). This allows one to conclude that a separated base/manipulator control can achieve good performances on the condition that the system's variations are taken into account for the manipulator control. Furthermore, successfully maintaining the base at a fixed attitude avoids large flexible vibrations and even reduces them with the available reaction-wheels, as illustrated with both the first and last subplots of Figure 3.4. The four reaction-wheels provide an overactuated system that ensures the base angular velocity remains low. As discussed in the previous chapter, considering the convective terms allows one to conclude on the minimized dissipative work lost in the flexible modes, as illustrated by Figure 3.5. For reference, a sketch of this separated control law is given below.
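As an illustration of (3.4), a minimal MATLAB sketch of the separated computed torques is given here. The structure lin, gathering the nominal matrices, and the function name are illustrative assumptions; the matrices are supposed to be evaluated once at the linearization configuration, and this is a sketch rather than the exact implementation used for the simulations.

% Separated base/manipulator computed torques (3.4); all matrices in
% the struct 'lin' are nominal values from a single linearization.
function [tau_mc, tau_rc] = separated_control(Km, K0, qmd_dot, qm_dot, w0_d, w0, qr_dot, lin)
    Hp = pinv(lin.H0wr);                         % pseudo-inverse of H0wr
    % Manipulator torque (3.4a): fixed-base decoupled joint dynamics
    tau_mc = lin.H0m*(Km*(qmd_dot - qm_dot)) + lin.C0m*qm_dot;
    % Reaction-wheel torque (3.4b): decoupled base rotational dynamics
    tau_rc = (lin.H0wr' - lin.H0r*Hp*lin.H0w)*(K0*(w0_d - w0)) ...
           + (lin.C0rw  - lin.H0r*Hp*lin.C0w)*w0 ...
           + (lin.C0r   - lin.H0r*Hp*lin.C0wr)*qr_dot;
end

Since all matrices are frozen at the nominal configuration, the degradation observed from fifty seconds on directly reflects the distance between the actual state and the linearization point.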
From this short illustrative example of a classical control approach of both the base and the manipulator, some conclusions and possible improvements can be drawn.

• Firstly, a separated control provides the possibility of adapting the control performances of both the base and the manipulator. However, given the couplings of an SMS, developing a robust control synthesis is a necessity.

• Secondly, it has shown the limitation of using a unique linearization and constant control gains to decouple the system.

• Thirdly, maintaining the base at a constant orientation reduces the potential impact of flexible vibrations on the manipulator and, more generally, on the rest of the system.

In that matter, investigating a common base and manipulator control approach could lead to a simple control law that provides precise performances. The chapter is then organized as follows: the interest of a common base and manipulator control is first discussed through a quantitative method; secondly, a control law is developed such that high-precision performances are guaranteed despite the presence of the flexible appendages.

Interest of a common base and manipulator control

In this section, quantitative tools for the analysis and control of an SMS are introduced, extending our published work in [START_REF] Mathieu Rognant | Kinematic Indices of rotation-floating space robots for on-orbit servicing[END_REF]. The presence of kinetic moment exchange actuators in the base of an SMS offers more flexibility in the possible manipulator operations [START_REF] Rognant | Autonomous assembly of large structures in space: a technology review[END_REF]; [START_REF] Wilde | Equations of motion of free-floating spacecraft-manipulator systems: an engineer's tutorial[END_REF]. The objective of the section is to introduce a means to evaluate the interest of using the base actuators simultaneously with the manipulator control. To do so, a rigid rotation-free-floating SMS is considered, as it allows momentum-based discussions. Such system behaviors are described by the equations of motion (2.43) detailed in the previous chapter. First, the definitions of usual SMS analysis notions from the literature, such as workspace, singularity, redundancy and manipulability, are provided. This allows tools to be developed to observe the limits and feasible motions of an SMS. Secondly, kinematic indices are introduced to quantify the interest of a common base and manipulator control.

SMS workspace

The manipulator workspace defines the set of reachable positions of the end-effector without singular configurations. Its evaluation is obtained by studying the GJM, which also gives information on singular configurations. The GJM is obtained with the kinetic moment conservation, hence the restriction to the study of a rigid SMS. For an initially null system momentum and without any external source of disturbance, the momentum conservation gives the base twist as a function of the manipulator's joint velocities, $\dot q_m$, and of the reaction-wheels' velocities, $\dot q_r$, as [START_REF] Wilde | Equations of motion of free-floating spacecraft-manipulator systems: an engineer's tutorial[END_REF]:

$$\dot q_0 = \begin{bmatrix}\omega^{sat}_0\\ \dot r_0\end{bmatrix} = -H_0^{-1}H_{0m}\dot q_m - H_0^{-1}H_{0r}\dot q_r = J^m_0\dot q_m + J^r_0\dot q_r \quad (3.5)$$

with $J^m_0$ and $J^r_0$ the Jacobians mapping respectively the manipulator and reaction-wheel velocities to the base velocity; a numerical sketch of their evaluation is given below.
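As a minimal numerical sketch of (3.5) in MATLAB, assuming the inertia blocks H0, H0m and H0r (illustrative names) have been evaluated for the current configuration, e.g. with the extended SPART tools, the generalized base Jacobians reduce to two linear solves:

% Generalized base Jacobians of (3.5); H0 is 6x6 SPD, H0m 6xnm, H0r 6xnr
Jm0 = -H0 \ H0m;                  % base twist per unit joint velocity
Jr0 = -H0 \ H0r;                  % base twist per unit wheel velocity
q0_dot = Jm0*qm_dot + Jr0*qr_dot; % resulting base twist [w0; r0_dot]

The backslash operator avoids forming $H_0^{-1}$ explicitly, which is numerically preferable since $H_0$ is symmetric positive definite.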
Then, developing the expression of the inertia matrix $H_0$ of a rigid floating SMS (2.43), for which the base angular velocity is expressed in the satellite body frame $\mathcal{R}_{sat}$, allows its rotational and linear parts to be isolated as:

$$H_0 = P_0^T M_0 P_0 = \begin{bmatrix} I_{S_0} + R^T_{S_0}\left(\displaystyle\sum_{i=1}^{n_q} I_i - m_i\,(r_{S_0}-r_{S_i})^\times(r_{S_0}-r_{S_i})^\times\right)R_{S_0} & -m_{tot}\,(r_{S_0}-r_{CoM})^\times\\ m_{tot}\,(r_{S_0}-r_{CoM})^\times & m_{tot}\,I_3 \end{bmatrix} = \begin{bmatrix} H_\omega & -m_{tot}\,(r_{S_0}-r_{CoM})^\times\\ m_{tot}\,(r_{S_0}-r_{CoM})^\times & m_{tot}\,I_3 \end{bmatrix} \quad (3.6)$$

As $H_0$ is an inertia matrix, it is positive definite and symmetric. By applying the Banachiewicz inversion [START_REF] Jerzy | Generalized inverses of partitioned matrices in Banachiewicz-Schur form[END_REF], $H_0$ is inverted as:

$$H_0^{-1} = \begin{bmatrix} S_U^{-1} & S_U^{-1}(r_{S_0}-r_{CoM})^\times\\ -(r_{S_0}-r_{CoM})^\times S_U^{-1} & \dfrac{1}{m_{tot}}I_3 - (r_{S_0}-r_{CoM})^\times S_U^{-1}(r_{S_0}-r_{CoM})^\times \end{bmatrix} \quad (3.7)$$

with $S_U = H_\omega + m_{tot}(r_{S_0}-r_{CoM})^\times(r_{S_0}-r_{CoM})^\times$. Thus, the generalized base Jacobian matrices $J^m_0$ and $J^r_0$ can later be developed. Secondly, the evaluation of the workspace is obtained with the consideration of the end-effector's reachable positions. With a similar approach, generalized Jacobian matrices are introduced for the end-effector such that its twist is evaluated from the actuators' velocities only. Injecting the base twist $\dot q_0$ (3.5) in the end-effector's twist (2.19), one can define $J^r_{EE}$ and $J^m_{EE}$, respectively the generalized Jacobian matrices of the end-effector for the reaction-wheels and the manipulator's actuators, as:

$$t_{EE} = J^0_{EE} t_0 + J_{m_{EE}}\dot q_m = \left(J^0_{EE}J^m_0 + J_{m_{EE}}\right)\dot q_m + J^0_{EE}J^r_0\,\dot q_r = J^m_{EE}\dot q_m + J^r_{EE}\dot q_r \quad (3.8)$$

This relation allows workspace discussions to be established, as well as the actuators' use rate to perform a given motion. In the literature, the GJM refers only to $J^m_{EE}$ which, if expressed in $\mathcal{R}_{sat}$, only depends on the manipulator configuration.

Redundancy of SMS: The non-holonomic constraint of free-floating SMS is due to the non-integrability of the angular momentum equation [START_REF] Nakamura | Nonholonomic path planning of space robots via bi-directional approach[END_REF]:

$$M_\omega\,\omega_0 + M_{\omega L}\,\dot r_{S_0} + M_{\omega r}\,\dot q_r + M_{\omega m}\,\dot q_m = 0_{3\times1} \quad (3.9)$$

It reflects the fact that the dynamic couplings between the manipulator and the base lead to an end-effector pose that depends on both the manipulator's trajectory and velocities. The redundancy of a manipulator corresponds to the plurality of possible motions (i.e. multiple sets of $q_m$ and $q_r$) to reach a given base and end-effector rotation. One will note that, in the case of a control of the base position, the end-effector can likewise be controlled in position, whether or not the manipulator reaches a singularity. The degree of redundancy is the difference between the manipulator's DoFs and the number of end-effector tasks [START_REF] Nenchev | Analysis of a redundant free-flying spacecraft/manipulator system[END_REF]. One can distinguish the main task, the following of a trajectory by the end-effector by controlling its pose and orientation, and secondary tasks, either a trajectory tracking while maintaining a fixed orientation of the base, or the modification of the base orientation while maintaining the pose and/or orientation of the end-effector.
Denoting by $n_{t_{EE}}$ the number of end-effector tasks, by $n_{t_b}$ the number of base tasks and by $n_m$ the manipulator DoFs, three redundancy cases can be considered for a free-flying SMS:

• if $n_m = n_{t_{EE}} + n_{t_b}$: the redundancy of the manipulator is only sufficient to control both the end-effector and the base;

• if $n_m > n_{t_{EE}} + n_{t_b}$: the redundancy of the manipulator is sufficient to perform the base and end-effector tasks, and additional constraints like actuator saturation, singularity or obstacle avoidance can be considered;

• if $n_m < n_{t_{EE}} + n_{t_b}$: the redundancy degree is not sufficient to perform both the base and end-effector tasks, and a hierarchy of tasks is required. If $n_m < n_{t_{EE}}$, the end-effector tasks can be chosen to be performed first, and if $n_m < n_{t_b}$, the base control can be privileged.

For rotation-free-floating SMS, the redundancy degree becomes $n_m + 3$.

Singularity and workspace analysis: A particularity of free-floating SMS is the possibility of reaching a dynamic singularity in addition to kinematic ones. Kinematic singularities correspond to geometrical constraints of the manipulator and joint rotations, while a dynamic one occurs when the lack of a means to compensate for the manipulator motions leads to the impossibility of moving the end-effector in some directions [START_REF] Papadopoulos | Dynamic singularities in freefloating space manipulators[END_REF]. These singularities are consequences of the non-holonomic constraints of a free-floating manipulator and also depend on its physical properties (mass/inertia). The dynamic singularities are studied with the GJM, $J^m_{EE}$. A dynamic singularity corresponds to a non-redundant manipulator when $J^m_{EE}$ is rank deficient. Thus, a dynamically singular configuration is detected through the non-invertibility of $J^m_{EE}J^{m\,T}_{EE}$ as:

$$\det\left(J^m_{EE}J^{m\,T}_{EE}\right) = 0 \quad (3.10)$$

Moreover, manipulator path-planning benefits from the study of dynamic singularities, as the manipulator joint rates become more important near singular configurations. Such path-planning approaches optimize the value of $\det\left(J^m_{EE}J^{m\,T}_{EE}\right)$ to ensure configurations far from singular ones. However, the main difficulty in establishing a manipulator path resides in the non-holonomic constraints of a free-floating SMS. Papadopoulos and Dubowsky [START_REF] Papadopoulos | Dynamic singularities in freefloating space manipulators[END_REF] proposed a method decomposing workspaces into path-dependent and path-independent subsets of the complete workspace, where dynamic singularities may respectively be encountered or not depending on the path taken. Nanos and Papadopoulos [START_REF] Nanos | Avoiding dynamic singularities in Cartesian motions of free-floating manipulators[END_REF] developed an analytical method to find the initial base attitude and joint configuration such that the manipulator can move in a workspace without reaching a dynamic singularity. Nevertheless, these approaches only apply to planar manipulators. For spatial cases, Calzolari et al. [START_REF] Calzolari | Singularity maps of space robots and their application to gradient-based trajectory planning[END_REF] presented a numerical technique to obtain singularity maps for a 6-DoF spacecraft in order to develop path-planning methods with singularity avoidance.

Manipulability: The notion of manipulability for redundant manipulators has been introduced by Yoshikawa [START_REF] Yoshikawa | Analysys and control of robot manipulators with redundancy. Robotic Research[END_REF] to establish and adapt path-planning methods.
The manipulability is a quantitative measure of the manipulator's ability to reach an end-effector pose and orientation considering the manipulator's kinematics. It can be seen as a means to quantify the proximity to singular configurations. The end-effector manipulability, $\mu\left(J^m_{EE}\right)$, is defined as:

$$\mu\left(J^m_{EE}\right) = \sqrt{\det\left(J^m_{EE}J^{m\,T}_{EE}\right)} \quad (3.11)$$

Thus, in path-planning approaches, the problem of singularity avoidance can be replaced by locally maximizing the manipulability $\mu$ at each point or by satisfying the joint velocity constraints [START_REF] Calzolari | Singularity maps of space robots and their application to gradient-based trajectory planning[END_REF]. Furthermore, as accelerations and joint torques do not impact the manipulability evaluation, Yoshikawa [START_REF] Yoshikawa | Dynamic manipulability of robot manipulators[END_REF] proposed a dynamic manipulability to likewise quantify the ability of the manipulator to perform end-effector motions while considering the system's dynamics. Its evaluation is the measure of the volume of the dynamic manipulability ellipsoid formed by the set of all realizable accelerations of the end-effector under joint constraints. For the end-effector, it is defined as the set of all end-effector accelerations that the joint driving forces can achieve. It is expressed as a function of the normalized joint torques, $\tilde\tau_m$, defined with a weight matrix $W_\tau$ such that $\tilde\tau_m = W_\tau\tau_m$ verifies $|\tilde\tau_m^T\tilde\tau_m|\le1$. Then, expressing the joint dynamics as:

$$H_m\ddot q_m + C_m\dot q_m = \tau_m \quad (3.12)$$

and the time-derivative of the end-effector tasks for a fixed base as:

$$\dot t_{EE} = J_{m_{EE}}\ddot q_m + \dot J_{m_{EE}}\dot q_m \quad (3.13)$$

the dynamic manipulability, $\tilde\mu$, corresponds to the evaluation of the invertibility of the Jacobian matrix $\tilde J_{m_{EE}} = J_{m_{EE}}H_m^{-1}W_\tau$, obtained by combining both equations (3.12) and (3.13). It thus leads to the evaluation of the determinant associated with the scaled GJM $\tilde J_{m_{EE}}$ [START_REF] Koeppe | Dynamic manipulability analysis of compliant motion[END_REF]; [START_REF] Xu | Kinematic and dynamic manipulability analysis for free-floating space robots with closed chain constraints[END_REF]:

$$\tilde\mu\left(J^m_{EE}\right) = \sqrt{\det\left(\tilde J_{m_{EE}}\tilde J_{m_{EE}}^{\,T}\right)}, \qquad \tilde J_{m_{EE}} = J_{m_{EE}}H_m^{-1}W_\tau \quad (3.14)$$
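As a short numerical illustration, the two manipulability measures (3.11) and (3.14) can be sketched as follows, where JmEE, Hm and Wtau are illustrative names assumed to be evaluated at the current configuration:

mu = sqrt(det(JmEE*JmEE'));          % kinematic manipulability (3.11)
Jtilde = (JmEE/Hm)*Wtau;             % scaled GJM, JmEE*inv(Hm)*Wtau
mu_dyn = sqrt(det(Jtilde*Jtilde'));  % dynamic manipulability (3.14)

A value of mu close to zero flags the proximity of a dynamic singularity, which the path-planning approaches mentioned above seek to avoid.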
Distribution of system efforts

Besides system singularities, the evaluation of the actuators' usage can be interesting in path-planning applications or for pre-sizing an SMS to satisfy mission requirements. For that purpose, kinematic sensitivity indices are first introduced, as another contribution of this work, for rigid rotation-free-floating robots based on the momentum conservation. Then, simulation examples illustrating the kinematic indices are shown with the physical parameters of a microsatellite from the Myriade series equipped with a robotic arm.

Kinematic indices

With the expressions of the generalized Jacobians (3.5) and (3.8), kinematic indices are introduced to evaluate the participation of the actuators in the base and end-effector motions.

Base kinematic indices: The base indices are obtained by detailing the linear and angular parts of the Jacobians $J^m_0$ and $J^r_0$ with (3.7) as:

$$J^m_0 = \begin{bmatrix}J^{\omega_0}_{m_0}\\ J^{\dot r_0}_{m_0}\end{bmatrix} = -H_0^{-1}\begin{bmatrix}H_{\omega m}\\ H_{Lm}\end{bmatrix} = \begin{bmatrix} -S_U^{-1}\left(H_{\omega m} + (r_{S_0}-r_{CoM})^\times H_{Lm}\right)\\ (r_{S_0}-r_{CoM})^\times S_U^{-1}H_{\omega m} - \left(\dfrac{1}{m_{tot}}I_3 - (r_{S_0}-r_{CoM})^\times S_U^{-1}(r_{S_0}-r_{CoM})^\times\right)H_{Lm} \end{bmatrix}$$

$$J^r_0 = \begin{bmatrix}J^{\omega_0}_{r_0}\\ J^{\dot r_0}_{r_0}\end{bmatrix} = -H_0^{-1}\begin{bmatrix}H_{\omega r}\\ H_{Lr}\end{bmatrix} \quad (3.15)$$

As detailed in the previous chapter, the reaction-wheels do not affect the linear dynamics of the base (i.e. $H_{Lr} = 0_{3\times n_r}$), which leads to:

$$J^r_0 = \begin{bmatrix}-S_U^{-1}H_{\omega r}\\ (r_{S_0}-r_{CoM})^\times S_U^{-1}H_{\omega r}\end{bmatrix} \quad (3.16)$$

As proposed by Xu et al. [START_REF] Xu | Hybrid modeling and analysis method for dynamic coupling of space robots[END_REF], this decomposition of the translation and rotation allows joint-to-base coupling factors to be computed. In order to deal with the range variation of each actuator, the following scaled Jacobians will be used for the analysis:

$$J^{scal}_{\omega_0} = \begin{bmatrix}J^{\omega_0}_{m_0}Q_{max} & J^{\omega_0}_{r_0}\Omega_{max}\end{bmatrix} \quad (3.17a)$$
$$J^{scal}_{\dot r_0} = \begin{bmatrix}J^{\dot r_0}_{m_0}Q_{max} & J^{\dot r_0}_{r_0}\Omega_{max}\end{bmatrix} \quad (3.17b)$$

where $\Omega_{max}$ and $Q_{max}$ are diagonal matrices of, respectively, the maximal reaction-wheel and manipulator joint velocities. By considering the expressions of the Jacobians $J^{\omega_0}_{m_0}$ and $J^{\omega_0}_{r_0}$, one can notice that both are factorized by $S_U^{-1}$. That implies that the relative contributions of the joints and of the reaction-wheels to the base angular motions do not depend on the base inertia but only on the relative angular momentum capabilities of the manipulator and base actuators. This highlights that the distribution of the angular momentum is a critical point to ensure the controllability of the system, and dedicated control algorithms are required to deal with a non-null momentum [START_REF] Oki | Time-optimal manipulator control for management of angular momentum distribution during the capture of a tumbling target[END_REF]. A dedicated analysis of the components of the matrix $S_UJ^{scal}_{\omega_0}$ in a design phase could address this issue. The sum of the reaction-wheels' angular momentum is bounded by $\left\|S_UJ^{\omega_0}_{r_0}\Omega_{max}\right\|_F$. Thus, in the angular momentum space, this sum belongs to the invariant polyhedron whose faces are normal to the reaction-wheels' axes. For the manipulator, its angular momentum is bounded by $\left\|S_UJ^{\omega_0}_{m_0}Q_{max}\right\|_F$, which is strongly sensitive to the joint configuration (variation of the gravity center position and of the manipulator inertia). Thus, to separate the joint and reaction-wheel contributions to the base motion according to the joint configuration, we introduce the indices $K^m_{\omega_0}$ and $K^r_{\omega_0}$ defined as follows:

$$K^m_{\omega_0} = \dfrac{\left\|S_UJ^{\omega_0}_{m_0}Q_{max}\right\|_F}{\left\|S_UJ^{scal}_{\omega_0}\right\|_F} \quad (3.18a)$$
$$K^r_{\omega_0} = \dfrac{\left\|S_UJ^{\omega_0}_{r_0}\Omega_{max}\right\|_F}{\left\|S_UJ^{scal}_{\omega_0}\right\|_F} \quad (3.18b)$$

where $\|X\|_F$ is the Frobenius norm of $X\in\mathbb{R}^{n\times m}$, defined as:

$$\|X\|_F = \sqrt{tr\left(X^*X\right)} = \sqrt{tr\left(XX^*\right)} = \sqrt{\sum_{1\le i\le n,\,1\le j\le m}\left|X_{ij}\right|^2} \quad (3.19)$$

Separating the base and manipulator actuators with the relation $\left\|S_UJ^{scal}_{\omega_0}\right\|_F^2 = \left\|S_UJ^{\omega_0}_{m_0}Q_{max}\right\|_F^2 + \left\|S_UJ^{\omega_0}_{r_0}\Omega_{max}\right\|_F^2$ and analytically evaluating this norm, the kinematic indicators are interesting to characterize the base controllability at pre-design stages or to select preferential joint configurations for control and path-planning purposes. One will note that this separation makes $K^m_{\omega_0}$ and $K^r_{\omega_0}$ complementary; a sketch of their numerical evaluation is given below.
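A minimal sketch of the evaluation of (3.18), assuming SU, the angular sub-Jacobians Jw0m and Jw0r, and the diagonal velocity bounds Qmax and Wmax (illustrative names) are available for the current joint configuration:

Nm = norm(SU*Jw0m*Qmax, 'fro');   % manipulator contribution
Nr = norm(SU*Jw0r*Wmax, 'fro');   % reaction-wheel contribution
Ntot = sqrt(Nm^2 + Nr^2);         % = ||SU*Jscal_w0||_F
Km_w0 = Nm/Ntot;                  % joint index (3.18a)
Kr_w0 = Nr/Ntot;                  % wheel index (3.18b); Km_w0^2 + Kr_w0^2 = 1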
For instance, in an arbitrary manner, joint configurations whose index $K^m_{\omega_0}$ respects the constraint $K^m_{\omega_0} < 0.5$ are more suitable to achieve motions without changing the base attitude.

End-effector indices: In SMS task prioritization, the highest-priority task is the end-effector motion; however, with additional tasks for the base motions and with the dynamic couplings of free-floating systems, the manipulator's capability to perform the end-effector task may be significantly affected. To analyze this problem, end-effector kinematic indices are introduced. Similarly to the base indices, the generalized Jacobians are decomposed into translation and rotation sub-Jacobians as:

$$J^m_{EE} = \begin{bmatrix}J^{\omega_{EE}}_{m_{EE}}\\ J^{\dot r_{EE}}_{m_{EE}}\end{bmatrix} = -J^0_{EE}H_0^{-1}\begin{bmatrix}H_{\omega m}\\ H_{Lm}\end{bmatrix} + J_{m_{EE}}, \qquad J^r_{EE} = \begin{bmatrix}J^{\omega_{EE}}_{r_{EE}}\\ J^{\dot r_{EE}}_{r_{EE}}\end{bmatrix} = -J^0_{EE}H_0^{-1}\begin{bmatrix}H_{\omega r}\\ 0\end{bmatrix} \quad (3.20)$$

With (2.15), (2.19) and the expressions of $J^m_0$ and $J^r_0$, the end-effector generalized Jacobians are detailed as:

$$J^m_{EE} = \begin{bmatrix}R_{S_0}J^{\omega_0}_{m_0}\\ (r_{S_0}-r_{EE})^\times R_{S_0}J^{\omega_0}_{m_0} + J^{\dot r_0}_{m_0}\end{bmatrix} + J_{m_{EE}} \quad (3.21)$$

and

$$J^r_{EE} = \begin{bmatrix}-R_{S_0}S_U^{-1}H_{\omega r}\\ -(r_{S_0}-r_{EE})^\times R_{S_0}S_U^{-1}H_{\omega r} + (r_{S_0}-r_{CoM})^\times S_U^{-1}H_{\omega r}\end{bmatrix} \quad (3.22)$$

To account for the actuators' capacities, similarly to (3.17), the following scaled Jacobians are introduced:

$$J^{scal}_{\omega_{EE}} = \begin{bmatrix}J^{\omega_{EE}}_{m_{EE}}Q_{max} & J^{\omega_{EE}}_{r_{EE}}\Omega_{max}\end{bmatrix}, \qquad J^{scal}_{\dot r_{EE}} = \begin{bmatrix}J^{\dot r_{EE}}_{m_{EE}}Q_{max} & J^{\dot r_{EE}}_{r_{EE}}\Omega_{max}\end{bmatrix} \quad (3.23)$$

As a main control difficulty of free-floating SMS is the coupling between the manipulator and the base, a null-motion projector can be used to restrict the redundant manipulator's joint motions to a sub-space which ensures zero base attitude disturbance [START_REF] Dn Nenchev | Reaction null-space based control of flexible structure mounted manipulator systems[END_REF]. The proposed control method can be adapted to a rotation-free-floating SMS for a common control of the reaction-wheels and of the manipulator's joints by introducing the null-motion projector:

$$N_{\omega_0} = I - J^+_{\omega_0}J_{\omega_0} \quad (3.24)$$

with $J^+_{\omega_0}$ the pseudo-inverse of $J_{\omega_0}$. Thus, with this projector, the manipulability of the end-effector without base motions can be evaluated with the indices:

$$\mu^N_{\omega_{EE}} = \mu\left(J^{scal}_{\omega_{EE}}N_{\omega_0}\right) \quad (3.25a)$$
$$\mu^N_{\dot r_{EE}} = \mu\left(J^{scal}_{\dot r_{EE}}N_{\omega_0}\right) \quad (3.25b)$$

Simulation example

The interest of the introduced kinematic indices is illustrated with numerical simulations developed on an SMS whose base exhibits the physical parameters of the first microsatellite of the Myriade series, Demeter [START_REF] Pittet | Demeter: A benchmark for robust analysis and control of the attitude of flexible micro satellites[END_REF]. Demeter is equipped with three reaction-wheels, which nominally present an inertia of $4\times10^{-4}$ kg.m² on their main axis and a maximal angular speed of 2800 rpm. The considered manipulator has 3 DoFs, with the physical parameters given in Table 3.1. Moreover, as current space manipulators such as Canadarm2 and JEMRMS do not exceed joint velocities of 0.7 rpm [START_REF] Patten | International space station robotics: a comparative study of ERA, JEMRMS and MSS[END_REF], we assume that, for this smaller manipulator, the maximal joint angular speeds are less than 0.5 rpm. The choice of the manipulator and base is made such that the inertia ratios highlight significant kinetic momentum exchanges between the different SMS parts.

Table 3.1 - Physical parameters of the considered SMS

Focusing on the three linear DoFs, two sets of reaction-wheels are considered to compare, with the suggested indices, the impact that the manipulator may have on the base.
The first configuration is the nominal configuration described above; the second configuration is modified with an increased capacity of Demeter ($H_r$ replaced by $2H_r$). To achieve this analysis, the indices have been evaluated for the two configurations in 6859 joint-space configurations uniformly distributed in the half-workspace of the manipulator, the system being symmetric about the XZ plane. In Figures 3.6, 3.7 and 3.8, iso-surfaces of these indices are plotted, and the colors reflect their value for each end-effector pose. In Figures 3.7 and 3.8, the manipulability indices are normalized by the manipulator's manipulability on a fixed base. Figure 3.6 shows that increasing the reaction-wheels' inertia decreases the contribution of the joints and raises that of the reaction-wheels on the base attitude motion, as $K^m_{\omega_0}$ and $K^r_{\omega_0}$ are complementary. One can also note that the joint configurations that impact the base motions the most are those where the end-effector is the farthest from the system's center of mass. In these critical configurations, doubling the inertia of the reaction-wheels reduces the joints' impact on the base from 0.65 to 0.4 and ensures a stable attitude without using thrusters. However, the manipulability of $\dot r_{EE}$, represented in Figure 3.7, increases by less than 0.1 if the base is able to move during the control of the end-effector. This is due to the small ratio between the maximal spacecraft angular speed and the joint angular speeds considered in our example. In this case, reducing the joints' impact on the base with a bigger reaction-wheel inertia does not affect the manipulability of $\dot r_{EE}$. Nevertheless, when null-motion is used, Figure 3.8 highlights an improvement of the manipulability of $\dot r_{EE}$ with more angular momentum in the reaction-wheels. As underlined by Figure 3.6, the second configuration presents a better momentum repartition for this type of control. With this example, one of the potential uses of the kinematic indices is illustrated. The interest of those indices is not only to pre-size an SMS by optimizing the couplings between the manipulator and the base, but likewise to adapt the control strategy. More specifically, when the base is required to maintain a fixed attitude, a suitable set of base actuators balances the impact of the manipulator on the base due to the transfer of kinetic moment in the SMS. An analysis based on the kinematic indices can thus be used to justify the interest of a simultaneous control of the base and manipulator actuators.

An SMS control strategy is developed in this section such that the internal perturbations due to the flexible appendages do not impact the control performances. As illustrated with Figure 3.9, a nonlinear dynamic inversion is performed to both linearize and decouple the system. Thanks to the modeling of a rotation-free-floating SMS with flexible appendages developed in the previous chapter, the internal disturbances are included in the system feedback linearization such that the actuators' decoupling is accurately performed. Moreover, by including the disturbance forces/torques in the control torques, the strategy aims at rejecting the perturbations with an adapted use of the base actuators. With an ESO (Extended State Observer), the unmeasured states (i.e. the flexible modes' displacements/velocities and the base linear velocities) are included in the NDI (Nonlinear Dynamic Inversion).
This leads to an inter-dependency of the observer and control dynamics and performances, which motivates a simultaneous synthesis of the respective gains based on a Lyapunov stability analysis. Then, an LMI (Linear Matrix Inequality) resolution allows linear observer and control gains to be obtained such that the control performances and stability are ensured.

Joint-space control of an SMS with flexible appendages

First, the state observer and control dynamics are developed, capitalizing on the modeling effort detailed in the previous chapter. In a second phase, the system closed-loop dynamics are detailed, highlighting the coupling between the observer and control performances, such that a gain synthesis method is proposed to perform a given motion in an SMS workspace. Finally, an illustration of the efficiency and interest of the proposed method is obtained on a simple SMS example thanks to the developed Matlab simulation tools.

Observer structure

As mentioned, an observer is established aiming at including, in the NDI, the un-measurable states of the SMS, whose behavior is described by the equations of motion (2.69). It is recalled that the flexible vibrations, $\eta$ and $\dot\eta$, as well as the linear dynamics of the spacecraft, $\dot r_{S_0}$, are the states requiring an estimation. Likewise, to remain as general as possible, measurements of the actuators' accelerations are assumed to be unavailable. Therefore, the observer is designed such that:

• the control torque, $\tau_q$, and the actuators' velocities, $\dot q$, are used as observer inputs;

• the base angular velocity, $\omega^{sat}_0$, is used as an observer input;

• the states to be estimated are the flexible displacements and velocities, respectively $\eta$ and $\dot\eta$, and the base linear velocity, $\dot r_{S_0}$.

Prior to establishing the observer dynamics, a re-writing effort of the overall system equations of motion (2.69) is necessary to design the observer without the actuators' accelerations, $\ddot q$. Thus, from (2.69), injecting the actuators' dynamics:

$$\ddot q = -H_q^{-1}\left(H_{0q}^T\ddot q_0 + C_{q0}\dot q_0 + C_q\dot q - \tau_q\right) \quad (3.26)$$

into the base and flexible dynamics:

$$\begin{bmatrix}H_0 & H_{0\eta}\\ H_{0\eta}^T & H_\eta\end{bmatrix}\begin{bmatrix}\ddot q_0\\ \ddot\eta\end{bmatrix} + \begin{bmatrix}H_{0q}\\ 0\end{bmatrix}\ddot q + \begin{bmatrix}C_0 & C_{0\eta}\\ C_{\eta0} & C_\eta\end{bmatrix}\begin{bmatrix}\dot q_0\\ \dot\eta\end{bmatrix} + \begin{bmatrix}C_{0q}\\ 0\end{bmatrix}\dot q + \begin{bmatrix}0\\ K_\eta\eta\end{bmatrix} = \begin{bmatrix}0_{6\times1}\\ 0_{n_\eta\times1}\end{bmatrix} \quad (3.27)$$

the flexible and base dynamics can be expressed without the actuators' accelerations as:

$$M_{0\eta}(x_0,q)\begin{bmatrix}\ddot q_0\\ \ddot\eta\end{bmatrix} + D_{\eta0\eta}(x_0,q,\dot q_0,\dot q)\begin{bmatrix}\eta\\ \dot q_0\\ \dot\eta\end{bmatrix} + D_q(x_0,q,\dot q_0,\dot q)\,\dot q = J_q(x_0,q)\,\tau_q \quad (3.28)$$

with:

$$M_{0\eta} = \begin{bmatrix}H_0 & H_{0\eta}\\ H_{0\eta}^T & H_\eta\end{bmatrix} - \begin{bmatrix}H_{0q}\\ 0\end{bmatrix}H_q^{-1}\begin{bmatrix}H_{0q}^T & 0\end{bmatrix} \quad (3.29a)$$
$$D_{\eta0\eta} = \begin{bmatrix}0 & C_0 & C_{0\eta}\\ K_\eta & C_{\eta0} & C_\eta\end{bmatrix} - \begin{bmatrix}H_{0q}\\ 0\end{bmatrix}H_q^{-1}\begin{bmatrix}0 & C_{q0} & 0\end{bmatrix} \quad (3.29b)$$
$$D_q = \begin{bmatrix}C_{0q}\\ 0\end{bmatrix} - \begin{bmatrix}H_{0q}\\ 0\end{bmatrix}H_q^{-1}C_q \quad (3.29c)$$
$$J_q = -\begin{bmatrix}H_{0q}\\ 0\end{bmatrix}H_q^{-1} \quad (3.29d)$$

The equivalent inertia matrix $M_{0\eta}(x_0,q)$ is obtained from the SMS configuration (i.e. the actuators' states, $q$, and the pose/orientation of the base in $\mathcal{R}_{ine}$, $x_0 = \begin{bmatrix}\theta_0^T & r_{S_0}^T\end{bmatrix}^T$). It holds the mathematical properties of inertia matrices; in particular, it is positive definite and invertible. The equivalent convective matrices $D_{\eta0\eta}(x_0,q,\dot q_0,\dot q)$ and $D_q(x_0,q,\dot q_0,\dot q)$ depend on the SMS configuration but also on the system's velocities. In order to alleviate further dependency notations, quantities evaluated in the inverse kinematic/dynamic model from the measurements of both $x_0$ and $q$ will be reduced to $q$. This simplification is justified by the fact that $x_0$ provides the pose and orientation of $\mathcal{R}_{sat}$ in $\mathcal{R}_{ine}$ but does not affect the mass distribution.
Then, introducing the state vector to be estimated, $x = \begin{bmatrix}\eta^T & \dot q_0^T & \dot\eta^T\end{bmatrix}^T$, with (3.28) the state-space model adapted to the estimation of $x$ is written as:

$$\dot x = \begin{bmatrix}\dot\eta\\ \ddot q_0\\ \ddot\eta\end{bmatrix} = \begin{bmatrix}0_{n_\eta} & 0_{n_\eta\times6} & I_{n_\eta}\\ & -M_{0\eta}^{-1}D_{\eta0\eta} & \end{bmatrix}x + \begin{bmatrix}0_{n_\eta\times n_q} & 0_{n_\eta\times n_q}\\ -M_{0\eta}^{-1}D_q & M_{0\eta}^{-1}J_q\end{bmatrix}\begin{bmatrix}\dot q\\ \tau_q\end{bmatrix} = A_e(q,\dot q,\dot q_0)\,x + B_q(q,\dot q,\dot q_0)\,u \quad (3.30a)$$
$$y = \omega^{sat}_0 = \begin{bmatrix}0_{3\times n_\eta} & I_3 & 0_3 & 0_{3\times n_\eta}\end{bmatrix}x = C_e x \quad (3.30b)$$

By introducing the linear estimation gain $L$, the state vector $x$ is estimated as $x_e$ such that:

$$\dot x_e = A_e(q,\dot q,\dot q_0)\,x_e + B_q(q,\dot q,\dot q_0)\,u + L\left(y - C_e x_e\right) \quad (3.31)$$

Then, denoting by $\epsilon_e$ the observation error signal, defined as $\epsilon_e = x - x_e$, with (3.30) and (3.31) the observer error dynamics are expressed as:

$$\dot\epsilon_e = \left(A_e(q,\dot q,\dot q_0) - LC_e\right)\epsilon_e \quad (3.32)$$
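For illustration, one continuous-time evaluation of the ESO dynamics (3.31) can be sketched as follows in MATLAB; get_AeBq is an assumed helper evaluating $A_e$ and $B_q$ from the measured configuration and velocities, and the state derivative is meant to be fed to any ODE integrator:

% One evaluation of the observer dynamics (3.31)
[Ae, Bq] = get_AeBq(q, q_dot, q0_dot);   % assumed inverse-model helper
y = w0;                                  % measured base angular velocity
xe_dot = Ae*xe + Bq*[q_dot; tau_q] + L*(y - Ce*xe);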
Control law structure

To obtain the control law structure, a re-writing effort of (2.69) is first made, and secondly the NDI structure is established.

Open-loop dynamics

With a similar re-arrangement of (2.69) to the one operated to design the observer, the actuators' dynamics are re-written from (2.69) without the flexible and base accelerations (i.e. $\begin{bmatrix}\ddot q_0^T & \ddot\eta^T\end{bmatrix}^T$). Therefore, injecting (3.27) in (3.26), the actuators' dynamics are re-written as:

$$M'(q)\,\ddot q + D'(q,\dot q,\dot q_0)\,\dot q + D_x(q,\dot q,\dot q_0)\,x = \tau_q \quad (3.33)$$

One will notice that the disturbance torque $D_x(q,\dot q,\dot q_0)\,x$ affects the actuators' dynamics. This torque has the particularity of including, besides the base angular velocity, only states that can be estimated through the proposed observer (3.30). Furthermore, it corresponds to the overall internal perturbations that require rejection for a precise control. In order to detail the equivalent inertia matrix $M'(q)$ and the equivalent convective matrices $D'(q,\dot q,\dot q_0)$ and $D_x(q,\dot q,\dot q_0)$, one will first note that $\begin{bmatrix}H_0 & H_{0\eta}\\ H_{0\eta}^T & H_\eta\end{bmatrix}$ is an inertia matrix whose inverse is given by:

$$\begin{bmatrix}H_0 & H_{0\eta}\\ H_{0\eta}^T & H_\eta\end{bmatrix}^{-1} = \begin{bmatrix}\left(H_0 - H_{0\eta}H_{0\eta}^T\right)^{-1} & -\left(H_0 - H_{0\eta}H_{0\eta}^T\right)^{-1}H_{0\eta}\\ -H_{0\eta}^T\left(H_0 - H_{0\eta}H_{0\eta}^T\right)^{-1} & I_{n_\eta} + H_{0\eta}^T\left(H_0 - H_{0\eta}H_{0\eta}^T\right)^{-1}H_{0\eta}\end{bmatrix} \quad (3.34)$$

with, according to the flexible modeling assumptions detailed in the previous chapter, $H_\eta = I_{n_\eta}$. The inertial term $\left(H_0 - H_{0\eta}H_{0\eta}^T\right)$ corresponds to a scaled inertia of the base from which the flexible inertias have been subtracted. The matrices in (3.33) are then detailed as:

$$M' = H_q - H_{0q}^T\left(H_0 - H_{0\eta}H_{0\eta}^T\right)^{-1}H_{0q} \quad (3.35a)$$
$$D' = C_q - H_{0q}^T\left(H_0 - H_{0\eta}H_{0\eta}^T\right)^{-1}C_{0q} \quad (3.35b)$$
$$D_x = \begin{bmatrix}0 & C_{q0} & 0\end{bmatrix} - H_{0q}^T\begin{bmatrix}\left(H_0 - H_{0\eta}H_{0\eta}^T\right)^{-1} & -\left(H_0 - H_{0\eta}H_{0\eta}^T\right)^{-1}H_{0\eta}\end{bmatrix}\begin{bmatrix}0 & C_0 & C_{0\eta}\\ K_\eta & C_{\eta0} & C_\eta\end{bmatrix} \quad (3.35c)$$

Detailing the expression of the matrix $M'(q)$ allows the effect of the flexible modes' dynamics on the actuators' ones to be highlighted. With the scaled inertia term $H_{0q}^T\left(H_0 - H_{0\eta}H_{0\eta}^T\right)^{-1}$, one observes that the flexible vibrations affect the actuators through their impact on the spacecraft base.

Closed-loop dynamics

In order to decouple and linearize the system, an NDI is developed, capitalizing on the observer to likewise include the disturbance torques. By introducing the subscript d to designate a desired quantity, the control tracking error, $\epsilon_c$, is defined as:

$$\epsilon_c = (q_d - q) \quad (3.36)$$

and, with a linear control gain $K$, the desired dynamics, $v$, is imposed as:

$$v = \ddot q_d + K\begin{bmatrix}\epsilon_c\\ \dot\epsilon_c\end{bmatrix} \quad (3.37)$$

Thus, a control torque, $\tau_{qc}$, that linearizes the system and decouples the actuators is given by:

$$\tau_{qc} = M'(q)\,v + D'(q,\dot q,\dot q_0)\,\dot q + D_x(q,\dot q,\dot q_0)\,x_e \quad (3.38)$$

Injecting (3.38) in (3.33), the closed-loop control dynamics is expressed as a function of the observer error dynamics (3.32) as:

$$\ddot\epsilon_c = -K\begin{bmatrix}\epsilon_c\\ \dot\epsilon_c\end{bmatrix} + M'^{-1}D_x\,\epsilon_e \quad (3.39)$$

Introducing the state vector $z = \begin{bmatrix}\epsilon_c^T & \dot\epsilon_c^T\end{bmatrix}^T$, one can re-write (3.39) as:

$$\dot z = \left(\begin{bmatrix}0 & I\\ 0 & 0\end{bmatrix} + \begin{bmatrix}0\\ -I\end{bmatrix}K\right)z + \begin{bmatrix}0\\ M'^{-1}D_x\end{bmatrix}\epsilon_e = (A_z + B_zK)\,z + B'(q,\dot q,\dot q_0)\,\epsilon_e \quad (3.40a)$$
$$\epsilon_c = \begin{bmatrix}I & 0\end{bmatrix}z = C_z z \quad (3.40b)$$

With this last expression, one will note that the control performances depend on the estimation quality and its convergence. The challenge is then to ensure simultaneously the stability of the observer and of the controller.

Simultaneous synthesis

Simple synthesis

To take into account and clearly highlight the dependency between the observer and controller performances, the extended state vector $w = \begin{bmatrix}z^T & \epsilon_e^T\end{bmatrix}^T$ is introduced such that a compact version of (3.32) and (3.40) is obtained as:

$$\dot w = \begin{bmatrix}A_z + B_zK & B'\\ 0 & A_e - LC_e\end{bmatrix}w \quad (3.41a)$$
$$\epsilon_c = \begin{bmatrix}C_z & 0\end{bmatrix}w \quad (3.41b)$$

With the separation principle, the observer and controller gains could be evaluated separately; however, in this section, a method to synthesize the gains $K$ and $L$ is proposed such that the system remains stable and a guarantee of similar control performances is ensured for a considered servicing operation inducing variations of the matrices $B'$ and $A_e$. Thus, considering a system linearization allowing the matrices in (3.41) to be evaluated, the control and observer gains are synthesized simultaneously with the following proposition, based on a Lyapunov stability analysis. The gains $K$ and $L$ are then finally obtained from an LMI resolution.

Proposition: Assume the initial estimation error verifies $\epsilon_{e_0}^T E\,\epsilon_{e_0} \le 1$ for a given positive definite matrix $E$, where $\epsilon_{e_0}$ is the initial condition of the observer error. If there exist symmetric positive definite matrices $Q_z$, $P$ and matrices $W_z$, $W_\epsilon$ such that, for a given scalar $\gamma > 0$:

$$\Omega = \begin{bmatrix}\left(A_zQ_z + B_zW_z\right)^s & B' & Q_zC_z^T\\ * & \left(PA_e - W_\epsilon C_e\right)^s & 0\\ * & * & -\gamma^2I\end{bmatrix} < 0 \quad (3.42a)$$
$$P < E \quad (3.42b)$$

where $X^s = X + X^T$ and $\Omega$ is a symmetric matrix, then the system (3.41) is stable and the outputs verify:

$$\int_0^\infty \epsilon_c^T\epsilon_c\,dt < \gamma^2 \quad (3.43)$$

for any conditions $z(0) = 0$ and $\epsilon_{e_0} \in \left\{\epsilon\ |\ \epsilon^TE\epsilon \le 1\right\}$. Moreover, the gains are obtained as:

$$K = W_zQ_z^{-1}, \qquad L = P^{-1}W_\epsilon \quad (3.44)$$

Proof: Consider the Lyapunov function $V = z^TP_zz + \epsilon_e^TP\epsilon_e$, with $P_z = Q_z^{-1}$, such that, for a given $\gamma > 0$:

$$\dot V + \gamma^{-2}\epsilon_c^T\epsilon_c < 0 \quad (3.45)$$

By integration:

$$\forall T_f > 0,\ \int_0^{T_f}\left(\dot V + \gamma^{-2}\epsilon_c^T\epsilon_c\right)dt < 0\ \Rightarrow\ \int_0^{T_f}\epsilon_c^T\epsilon_c\,dt < \gamma^2\left(V(0) - V(T_f)\right)\ \Rightarrow\ \int_0^\infty\epsilon_c^T\epsilon_c\,dt < \gamma^2V(0) \quad (3.46)$$

with $V(0) = z(0)^TP_zz(0) + \epsilon_{e_0}^TP\epsilon_{e_0}$. For $z(0) = 0$, $V(0) = \epsilon_{e_0}^TP\epsilon_{e_0}$, and if $\epsilon_{e_0}^TP\epsilon_{e_0} \le \epsilon_{e_0}^TE\epsilon_{e_0} \le 1$ then (3.43) is verified. This condition is enforced by $P \le E$. The condition (3.45) is equivalent to:

$$\begin{bmatrix}\left(P_z(A_z + B_zK)\right)^s + \gamma^{-2}C_z^TC_z & P_zB'\\ * & \left(P(A_e - LC_e)\right)^s\end{bmatrix} < 0 \quad (3.47)$$

Applying the Schur complement on (3.47) gives:

$$\begin{bmatrix}\left(P_z(A_z + B_zK)\right)^s & P_zB' & C_z^T\\ * & \left(P(A_e - LC_e)\right)^s & 0\\ * & * & -\gamma^2I\end{bmatrix} < 0 \quad (3.48)$$

Pre- and post-multiplying the above matrix by $diag(Q_z, I, I) = diag(P_z^{-1}, I, I)$ and introducing the variable changes $W_\epsilon = PL$ and $W_z = KQ_z$, one obtains (3.42a), which concludes the proof.
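As an illustration of this proposition, a minimal YALMIP sketch of the synthesis (3.42) is given below for a single linearization; Az, Bz, Bp (standing for $B'$), Ae, Ce, Cz and the bound E are assumed to be evaluated beforehand, and the strict inequalities are enforced with a small margin:

nz = size(Az,1); ne = size(Ae,1); nc = size(Cz,1);
Qz = sdpvar(nz,nz);  P = sdpvar(ne,ne);        % symmetric LMI variables
Wz = sdpvar(size(Bz,2), nz, 'full');
We = sdpvar(ne, size(Ce,1), 'full');
g2 = sdpvar(1);                                % stands for gamma^2
s  = @(X) X + X';                              % notation X^s = X + X^T
Omega = [s(Az*Qz + Bz*Wz),  Bp,               Qz*Cz';
         Bp',               s(P*Ae - We*Ce),  zeros(ne,nc);
         Cz*Qz,             zeros(nc,ne),     -g2*eye(nc)];
F = [Omega <= -1e-8*eye(nz+ne+nc), P <= E, ...
     Qz >= 1e-8*eye(nz), P >= 1e-8*eye(ne), g2 >= 0];
optimize(F, g2, sdpsettings('solver','mosek'));
K = value(Wz)/value(Qz);                       % K = Wz*Qz^-1 (3.44)
L = value(P)\value(We);                        % L = P^-1*We  (3.44)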
The resolution of the LMI (3.42a), and thus the synthesis of $K$ and $L$, is obtained for a given system linearization. Evaluating the matrices $A_e$ and $B'$ from that linearization and defining the matrices $Q_z$, $P$, $W_z$ and $W_\epsilon$ as LMI variables, it allows the control and observer gains to be computed by minimizing the positive LMI variable $\gamma$. Additional LMI constraints such as (3.42a) can be added for different system linearizations, with the same LMI variables for each constraint.

Gains synthesis for a given SMS task

As illustrated with the example of the control law inspired by the literature developed in section 3.1.2, the control gains should take into account the system's inertia variations to ensure similar control performances during a given task operated by the SMS. In that matter, a single system linearization to synthesize the gains as in proposition 3.3.3.1 may not be sufficient. This leads to the study of multiple equilibrium points, each corresponding to a new LMI constraint such as (3.42a) for each linearization. In order to avoid such considerations, an approach introducing relaxation terms in the LMI (3.42a) is developed here. We consider a typical task consisting in the manipulator moving a given mass from one point to another while the base is maintained at a fixed orientation. It can then easily be assumed that the manipulator trajectory is restrained to a reduced workspace and that the state vector $\begin{bmatrix}\omega^{sat\,T}_0 & \dot r_{S_0}^T & \dot q^T & \dot\eta^T\end{bmatrix}^T$ is bounded during the manipulator operation. This allows a bound on the inertia and convective matrices (i.e. $H$ and $C$ in (2.69)), and consequently on the matrices in (3.41), to be obtained. Thus, from one linearization, let us introduce the variations of $B'$ and $A_e$, respectively $\Delta B'$ and $\Delta A_e$, such that (3.40) and (3.32) are rewritten as:

$$\dot z = (A_z + B_zK)\,z + (B' + \Delta B')\,\epsilon_e \quad (3.49a)$$
$$\dot\epsilon_e = \left((A_e + \Delta A_e) - LC_e\right)\epsilon_e \quad (3.49b)$$

The LMI synthesis is modified with the condition (3.42a) that becomes:

$$\Omega = \begin{bmatrix}\left(A_zQ_z + B_zW_z\right)^s & B' + \Delta B' & Q_zC_z^T\\ * & \left(P(A_e + \Delta A_e) - W_\epsilon C_e\right)^s & 0\\ * & * & -\gamma^2I\end{bmatrix} < 0 \quad (3.50)$$

Introducing the positive relaxation terms $\rho_1$ and $\rho_2$, bounding the maximal singular values, $\bar\sigma$, of the variations of respectively $A_e$ and $B'$ such that:

$$2\bar\sigma\left(\Delta A_e\right) < \rho_1 \quad (3.51a)$$
$$\bar\sigma\left(\Delta B'\right) < \rho_2 \quad (3.51b)$$

the LMI constraint (3.50) is then satisfied if:

$$\begin{bmatrix}\left(A_zQ_z + B_zW_z\right)^s + \rho_2I & B' & Q_zC_z^T\\ * & \left(PA_e - W_\epsilon C_e\right)^s + (\rho_1 + \rho_2)I & 0\\ * & * & -\gamma^2I\end{bmatrix} < 0 \quad (3.52)$$

where $\rho_1$ absorbs the term $\Delta A_e$ and $\rho_2$ the term $\Delta B'$. This is verified by observing the following lemma.

Lemma:

$$\begin{bmatrix}\Psi_1 + \rho I & B\\ B^T & \Psi_2 + \rho I\end{bmatrix} < 0\ \Rightarrow\ \begin{bmatrix}\Psi_1 & B + \Delta\\ B^T + \Delta^T & \Psi_2\end{bmatrix} < 0 \qquad \forall\,\Delta\ |\ \Delta^T\Delta < \rho^2I$$

Proof:

$$\begin{bmatrix}\Psi_1 & B + \Delta\\ B^T + \Delta^T & \Psi_2\end{bmatrix} = \begin{bmatrix}\Psi_1 + \rho I & B\\ B^T & \Psi_2 + \rho I\end{bmatrix} + \begin{bmatrix}-\rho I & \Delta\\ \Delta^T & -\rho I\end{bmatrix}$$

where the first term is negative definite by assumption and the second verifies $\begin{bmatrix}-\rho I & \Delta\\ \Delta^T & -\rho I\end{bmatrix} < 0 \Longleftrightarrow \rho^2I > \Delta^T\Delta$.

In that respect, the relaxation terms allow variations and uncertainties of the system during the space-robot motions to be dealt with. For a given manipulator motion (i.e. a trajectory and actuator velocity limits), the variations of the matrices $A_e$ and $B'$ are considered in the LMI constraint of proposition 3.3.3.1. Additionally, relatively small parameter uncertainties present in the system modeling used to evaluate the inverse kinematic/dynamic model are compensated for in the gain synthesis by the relaxation terms. To summarize the use of the relaxation terms: first a system linearization is performed to evaluate the matrices in the LMI constraint (3.42a); then the relaxation terms $\rho_1$ and $\rho_2$ are evaluated with a manipulator motion analysis as in (3.51).
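A sketch of the evaluation of the relaxation terms along a sequenced maneuver is given below; eval_AeB is an assumed helper returning $A_e$ and $B'$ for a sampled state, and (Ae0, Bp0) denote the values at the chosen linearization point:

rho1 = 0; rho2 = 0;
for k = 1:n_seq                     % n_seq sub-motions of the task
    [Ae_k, Bp_k] = eval_AeB(q_traj(:,k), qdot_traj(:,k), q0dot_traj(:,k));
    rho1 = max(rho1, 2*max(svd(Ae_k - Ae0)));   % bound for (3.51a)
    rho2 = max(rho2, max(svd(Bp_k - Bp0)));     % bound for (3.51b)
end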
Then, defining the LMI variables, the synthesis consists of minimizing $\gamma$ such that (3.52) holds with proposition 3.3.3.1.

Base and manipulator control of an SMS with flexible appendages

In order to better illustrate the full interest of the control method proposed in section 3.3, the control scheme is adapted for a common base and manipulator control. The NDI with an ESO structure is conserved such that the un-measured states are included in the linearization. In that way, the use of the base actuators can be illustrated in the disturbance rejection process. The hypotheses on the un-measurable states developed in the previous section (i.e. actuators' accelerations, flexible modes and base linear dynamics) are conserved here. Moreover, the same observer is used for the estimation of the state vector $x = \begin{bmatrix}\eta^T & \dot q_0^T & \dot\eta^T\end{bmatrix}^T$. However, in order to distinguish the base rotations from the linear displacements, the subscripts $\omega$ and $L$ are used to designate respectively a rotational quantity and a linear one. Thus, the overall dynamics of the considered free-floating SMS (2.69) is re-written as:

$$\begin{bmatrix}H_\omega & H_{\omega L} & H_{\omega q} & H_{\omega\eta}\\ H_{L\omega} & H_L & H_{Lq} & H_{L\eta}\\ H_{\omega q}^T & H_{Lq}^T & H_q & 0\\ H_{\omega\eta}^T & H_{L\eta}^T & 0 & H_\eta\end{bmatrix}\begin{bmatrix}\dot\omega^{sat}_0\\ \ddot r_{S_0}\\ \ddot q\\ \ddot\eta\end{bmatrix} + \begin{bmatrix}C_\omega & C_{\omega L} & C_{\omega q} & C_{\omega\eta}\\ C_{L\omega} & C_L & C_{Lq} & C_{L\eta}\\ C_{q\omega} & C_{qL} & C_q & 0\\ C_{\eta\omega} & C_{\eta L} & 0 & C_\eta\end{bmatrix}\begin{bmatrix}\omega^{sat}_0\\ \dot r_{S_0}\\ \dot q\\ \dot\eta\end{bmatrix} + \begin{bmatrix}0_{3\times1}\\ 0_{3\times1}\\ 0_{n_q\times1}\\ K_\eta\eta\end{bmatrix} = \begin{bmatrix}0_{3\times1}\\ 0_{3\times1}\\ \tau_q\\ 0_{n_\eta\times1}\end{bmatrix} \quad (3.54)$$

Moreover, arranging the actuators such that $q = \begin{bmatrix}q_r^T & q_m^T\end{bmatrix}^T$, the convective term $C_q$ is detailed as:

$$C_q = \begin{bmatrix}C_r & C_{rm}\\ C_{mr} & C_m\end{bmatrix} \quad (3.55)$$

and, more generally, for a matrix $X_q$:

$$X_q = \begin{bmatrix}X_r & X_m\end{bmatrix} \quad (3.56)$$

Open-loop dynamics

As the actuators' accelerations are assumed to be unavailable, in order to express the open-loop dynamics, a re-writing effort of (3.54) is mandatory to obtain an expression as a function of measurable or estimable quantities. Thus, with (3.54), the accelerations $\begin{bmatrix}\ddot r_{S_0}^T & \ddot q_r^T & \ddot\eta^T\end{bmatrix}^T$ are given by:

$$\begin{bmatrix}\ddot r_{S_0}\\ \ddot q_r\\ \ddot\eta\end{bmatrix} = -\begin{bmatrix}H_L & H_{Lr} & H_{L\eta}\\ H_{Lr}^T & H_r & 0\\ H_{L\eta}^T & 0 & H_\eta\end{bmatrix}^{-1}\left(\begin{bmatrix}H_{L\omega} & H_{Lm}\\ H_{\omega r}^T & 0\\ H_{\omega\eta}^T & 0\end{bmatrix}\begin{bmatrix}\dot\omega_0\\ \ddot q_m\end{bmatrix} + \begin{bmatrix}C_{L\omega} & C_{Lm}\\ C_{r\omega} & C_{rm}\\ C_{\eta\omega} & 0\end{bmatrix}\begin{bmatrix}\omega_0\\ \dot q_m\end{bmatrix} + \begin{bmatrix}0 & 0 & C_L & C_{L\eta}\\ 0 & 0 & C_{rL} & 0\\ K_\eta & 0 & C_{\eta L} & C_\eta\end{bmatrix}x + \begin{bmatrix}C_{Lr}\\ C_r\\ 0\end{bmatrix}\dot q_r - \begin{bmatrix}0\\ \tau_r\\ 0\end{bmatrix}\right) \quad (3.57)$$

and, similarly, from (3.54), the accelerations $\begin{bmatrix}\dot\omega_0^T & \ddot q_m^T\end{bmatrix}^T$ are expressed as:

$$\begin{bmatrix}\dot\omega_0\\ \ddot q_m\end{bmatrix} = -\begin{bmatrix}H_\omega & H_{\omega m}\\ H_{\omega m}^T & H_m\end{bmatrix}^{-1}\left(\begin{bmatrix}C_\omega & C_{\omega m}\\ C_{m\omega} & C_m\end{bmatrix}\begin{bmatrix}\omega_0\\ \dot q_m\end{bmatrix} + \begin{bmatrix}H_{\omega L} & H_{\omega r} & H_{\omega\eta}\\ H_{Lm}^T & 0 & 0\end{bmatrix}\begin{bmatrix}\ddot r_{S_0}\\ \ddot q_r\\ \ddot\eta\end{bmatrix} + \begin{bmatrix}0 & 0 & C_{\omega L} & C_{\omega\eta}\\ 0 & 0 & C_{mL} & 0\end{bmatrix}x + \begin{bmatrix}C_{\omega r}\\ C_{mr}\end{bmatrix}\dot q_r - \begin{bmatrix}0\\ \tau_m\end{bmatrix}\right) \quad (3.58)$$

Combining (3.57) and (3.58), the open-loop dynamics are obtained as:

$$M_{\omega m}(q)\begin{bmatrix}\dot\omega_0\\ \ddot q_m\end{bmatrix} + D_{\omega m}(q,\dot q,\dot q_0)\begin{bmatrix}\omega_0\\ \dot q_m\end{bmatrix} + D_x(q,\dot q,\dot q_0)\,x + D_r(q,\dot q,\dot q_0)\,\dot q_r = J_\tau(q)\,\tau_q \quad (3.59)$$

with:

$$\mathcal{H}_{Lr\eta} = \begin{bmatrix}H_{\omega L} & H_{\omega r} & H_{\omega\eta}\\ H_{Lm}^T & 0 & 0\end{bmatrix}\begin{bmatrix}H_L & H_{Lr} & H_{L\eta}\\ H_{Lr}^T & H_r & 0\\ H_{L\eta}^T & 0 & H_\eta\end{bmatrix}^{-1} = \begin{bmatrix}\bar H_L & \bar H_r & \bar H_\eta\end{bmatrix} \quad (3.60a)$$
$$M_{\omega m} = \begin{bmatrix}H_\omega & H_{\omega m}\\ H_{\omega m}^T & H_m\end{bmatrix} - \mathcal{H}_{Lr\eta}\begin{bmatrix}H_{L\omega} & H_{Lm}\\ H_{\omega r}^T & 0\\ H_{\omega\eta}^T & 0\end{bmatrix} \quad (3.60b)$$
$$D_{\omega m} = \begin{bmatrix}C_\omega & C_{\omega m}\\ C_{m\omega} & C_m\end{bmatrix} - \mathcal{H}_{Lr\eta}\begin{bmatrix}C_{L\omega} & C_{Lm}\\ C_{r\omega} & C_{rm}\\ C_{\eta\omega} & 0\end{bmatrix} \quad (3.60c)$$
$$D_r = \begin{bmatrix}C_{\omega r}\\ C_{mr}\end{bmatrix} - \mathcal{H}_{Lr\eta}\begin{bmatrix}C_{Lr}\\ C_r\\ 0\end{bmatrix} \quad (3.60d)$$
$$D_x = \begin{bmatrix}0 & 0 & C_{\omega L} & C_{\omega\eta}\\ 0 & 0 & C_{mL} & 0\end{bmatrix} - \mathcal{H}_{Lr\eta}\begin{bmatrix}0 & 0 & C_L & C_{L\eta}\\ 0 & 0 & C_{rL} & 0\\ K_\eta & 0 & C_{\eta L} & C_\eta\end{bmatrix} \quad (3.60e)$$
$$J_\tau\tau_q = \begin{bmatrix}0\\ \tau_m\end{bmatrix} - \mathcal{H}_{Lr\eta}\begin{bmatrix}0\\ \tau_r\\ 0\end{bmatrix} = \begin{bmatrix}-\bar H_r & \begin{bmatrix}0\\ I\end{bmatrix}\end{bmatrix}\tau_q \quad (3.60f)$$

with $\bar H_L\in\mathbb{R}^{(3+n_m)\times3}$, $\bar H_r\in\mathbb{R}^{(3+n_m)\times n_r}$ and $\bar H_\eta\in\mathbb{R}^{(3+n_m)\times n_\eta}$.
Moreover, one will note that $D_r$ is the combination of an equivalent stiffness and a convective matrix. The term $D_xx$ in (3.59) represents an internal disturbance torque which impacts the manipulator and base dynamics, while $D_r\dot q_r$ corresponds to the impact of the reaction-wheels on the rest of the system.

Closed-loop dynamics

Similarly to the computation of the closed loop for the joint-space control, developed in section 3.3.2.2, the system is first linearized and the closed loop is expressed in an equivalent formulation. In order to decouple and linearize the system, an NDI is developed including those disturbance torques. The control tracking error, $\epsilon_c$, is now defined as:

$$\epsilon_c = \begin{bmatrix}\theta_0\\ q_m\end{bmatrix}_d - \begin{bmatrix}\theta_0\\ q_m\end{bmatrix} \quad (3.61)$$

and, with a linear control gain $K$, the desired dynamics, $v$, is defined as:

$$v = \begin{bmatrix}\dot\omega_0\\ \ddot q_m\end{bmatrix}_d + K\begin{bmatrix}\epsilon_c\\ \dot\epsilon_c\end{bmatrix} \quad (3.62)$$

Thus, a control torque, $\tau_{qc}$, that linearizes the system and decouples the actuators is given by:

$$\tau_{qc} = J_\tau^+\left(M_{\omega m}(q)\,v + D_{\omega m}(q,\dot q,\dot q_0)\begin{bmatrix}\omega_0\\ \dot q_m\end{bmatrix} + D_x(q,\dot q,\dot q_0)\,x_e + D_r(q,\dot q,\dot q_0)\,\dot q_r\right) \quad (3.63)$$

with $J_\tau^+$ the pseudo-inverse of $J_\tau$. One will note that $J_\tau^+$ is always well-conditioned, as the reaction-wheels' inertia matrix $H_r$ remains constant. Injecting (3.63) in (3.59), the closed-loop control dynamics is expressed as a function of the observer error dynamics (3.32) as:

$$\ddot\epsilon_c = -K\begin{bmatrix}\epsilon_c\\ \dot\epsilon_c\end{bmatrix} + M_{\omega m}^{-1}D_x\,\epsilon_e \quad (3.64)$$

Introducing the state vector $z = \begin{bmatrix}\epsilon_c^T & \dot\epsilon_c^T\end{bmatrix}^T$, one can re-write (3.64) as:

$$\dot z = \left(\begin{bmatrix}0 & I\\ 0 & 0\end{bmatrix} + \begin{bmatrix}0\\ -I\end{bmatrix}K\right)z + \begin{bmatrix}0\\ M_{\omega m}^{-1}D_x\end{bmatrix}\epsilon_e = (A_z + B_zK)\,z + B'(q,\dot q,\dot q_0)\,\epsilon_e \quad (3.65a)$$
$$\epsilon_c = \begin{bmatrix}I & 0\end{bmatrix}z = C_zz \quad (3.65b)$$

Therefore, the closed loop of the base/manipulator-controlled system is expressed as the joint-space closed-loop system (3.40). Advantageously, the formalism used to obtain the closed-loop system allows the same gain synthesis established in 3.3.3 to be re-used, where only the matrix $B'(q,\dot q,\dot q_0)$ differs.

Illustration of the proposed methods

In order to illustrate the interest of the proposed method and its utilization, a simple SMS is used in this section. To consider a realistic illustrative example, a reduced version of the manipulator used in the PULSAR telescope deployment (https://www.h2020-pulsar.eu) is used. It consists of a 5-DoF manipulator with a first prismatic joint and four revolute joints. The base is actuated by four reaction-wheels with similar capacities. Two identical solar arrays are disposed on both sides, such that ten flexible modes in total are responsible for vibrations. The inertias, sizes and other physical parameters are detailed in Appendix B, and the significant parameters are gathered in Table 3.3.

Table 3.3 - Flexible modes' physical properties

This section is then divided in two parts. First, an analysis and a method for the gain synthesis, as well as the interest of the control structure in which an ESO is included in the NDI, are presented for the joint control approach (i.e. section 3.3). Secondly, the benefits of such a structure and its adaptability to the "task"-space control introduced in section 3.4 are discussed with time-domain simulations.

Joint control results

Gains synthesis

The gain synthesis is developed such that the closed-loop system remains stable during a complete task. While the manipulator operates, the evolution of the inertia and convective terms may be significant enough to impose multiple constraints such as (3.42a) in the LMI resolution (3.3.3.1). In order to avoid a discussion on the choice of equilibrium points for additional LMI constraints, relaxation terms have been employed as in (3.51).
After establishing a manipulator maneuver with path-planning methods and singularity-free approaches that are not developed in this work, the relaxation terms can be evaluated. Their evaluation requires joint position and velocities to develop the direct dynamics of the SMS. To tackle the variations during the maneuver, a sequencing in n seq sub-motions of the complete task is made. Thus, according to the manipulator's configuration and velocities the gains ensure the dealing with the largest variations. The detail of the design procedure is given by the following algorithm: Considering the task illustrated with Figure 3.11, in the pre-analysis process one can visualize the variation during the system's motions with Figure 3.12. As the relaxation term ρ 1 reaches significant values, one can first highlight the interest of considering relaxation terms to avoid multiple LMI constraints. Furthermore, ρ 2 corresponds to the level of control degradation from the observer error. With the lower subplot of Figure 3.12, one will note that regarding the system's evolution, the impact of low observer error should not affect the control performances. .12 -Evolution of system variation for a given task, upper subplot represents the evolution of the relaxation term ρ 1 and the lower subplot represents ρ 2 's evolution In order to solve the proposition in 3.3.3.1 and the LMI constraints, the YALMIP toolbox [START_REF] Lofberg | YALMIP: A toolbox for modeling and optimization in MAT-LAB[END_REF] is used with the MOSEK solver. This allows to reduce the computation time when multiple large LMI dimension constraints are considered. In the present case, LMI variables Q z is a 18 × 18 square matrix and P a 26 × 26 square matrix. Moreover, with actuators decoupling, the control gains can be decomposed as K = K p K v with K p and K v two diagonal matrices. Moreover, a discussion on the different actuators' dynamics can be developed. Reactionwheels present slower dynamics than manipulator's joints. The proposed gains synthesis as detailed in algorithm 1 does not consider these difference of dynamics or saturation thresholds. In order to introduce such considerations, after a pre-analysis of matrices A e and B , additional constraints on the control gains K p and K v can be defined. Such constraints are developed on the LMI variable Q z which can be imposed as a diagonal matrix thanks to the decoupled system. Furthermore, distinguishing reaction-wheels to manipulator's joints in Q z , an upper-bound limit can be defined to avoid actuator's saturation thresholds. Such addi-tional constraints are proper to the studied SMS and are tuned with trial and error approach, it remains difficult to develop a general formalism. Simulation results From path-planning methods, manipulator motions (i.e. joint angular pose and velocities) are provided with the PULSAR application (https://www.h2020-pulsar.eu) to perform a given task. As it is usual to maintain the spacecraft base with a fixed attitude, reaction-wheels velocities are deduced from the manipulator's with a momentum conservation approach. One will note that this hypothesis is not verified with flexible elements but allows to define the desired velocities as: qr d = -H + 0r (H 0m qm d + H 0 q0 ) = -H + 0r H 0m qm d (3.66) with initially non-null reaction-wheels' velocities such that no base rotation occurs during manipulator motions. Figures 3.13 and 3.14 illustrate control performances. With upper subplots in both figure, the measured velocities are represented. 
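As a side note, the desired wheel rates of (3.66) reduce to a single pseudo-inverse evaluation along the planned manipulator trajectory. A minimal sketch (with illustrative names, H_0r and H_0m being the base/wheel and base/manipulator coupling blocks) is:

% Desired reaction-wheel rates (3.66) under momentum conservation, for a
% fixed base attitude; H_0r and H_0m are configuration dependent and are
% re-evaluated at each sample in practice.
dq_r_des = zeros(nr, numel(t_grid));
for k = 1:numel(t_grid)
    dq_r_des(:, k) = -pinv(H_0r) * (H_0m * dq_m_des(:, k));
end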
In order to visualize and quantify the control performances, a relative tracking error signal is plotted for each actuator in the respective lower subplots of the manipulator and reaction-wheel figures. This signal is defined as:

ε̇_tc(q̇_i) = (q̇_d_i − q̇_i) / max(q̇_i) (3.67)

Since this relative error signal remains below 10^-3, one can conclude on the efficiency of the proposed method. Considering the detail of the computed torque (3.38) given in Figure 3.15, the interest of the control method can be illustrated. The control torque can be decomposed into three signals. The first one is the feedback control torque (Mv), visualized in the upper subplot of Figure 3.15. The second, (Dq̇), defined as the feedback linearizing torque, is visualized in the second subplot of Figure 3.15. The last one, (D_x x_e), is the internal disturbance torque that appears in the lower subplot of Figure 3.15. The internal disturbance torque is composed of unmeasured states and is included in the system linearization thanks to the observer developed in section 3.3.1. By comparing the amplitudes of the three signals composing the computed torque, one can conclude on their respective contributions to the NDI. Given the low velocities involved in this illustrative example and the reaction-wheels being used to maintain the base fixed, a comparison of the feedback linearizing torques with the internal disturbance ones is relevant to assess the control strategy. As these two signals present similar amplitudes, including an estimation of the unmeasured states (i.e. flexible dynamics and linear base velocities) in the computed torque improves the decoupling quality. However, as the physical properties of the flexible modes may differ significantly from one appendage to another, the vibrations can be difficult to estimate properly. More precisely, the LMI resolution does not easily allow for a frequency-domain analysis, which is why the observation quality is not the focus of this control approach. Nevertheless, the synthesis ensures stability for a bounded observer error, which is itself guaranteed by the gains synthesis for a given motion. Thus, estimating the amplitudes of the fastest unmeasured states is sufficient to adequately linearize and decouple the considered system.

A second interest of the proposed method is illustrated in Figure 3.16, in which the base angular velocities are visualized in the upper subplot and the flexible mode velocities in the lower one. As the reaction-wheels are employed to maintain a null base angular velocity, the flexible modes are only excited by the indirect couplings between the manipulator motions and the flexible dynamics through the base motions. As one can notice in the lower subplot of Figure 3.16, when the manipulator moves at constant velocities, the flexible mode vibrations are damped. Using the reaction-wheels to maintain a fixed base, and including the internal disturbance in the computed torque, allows the reaction-wheel usage to be adapted to reject these un-controllable and un-measurable states.

Interest of a base-manipulator common control strategy

With a base-manipulator control, besides providing a larger manipulability, the rejection of disturbances can be better illustrated with the NDI/ESO control structure. Considering the same manipulator motion as used to illustrate the joint-space control, a comparison of the results obtained with similar control objectives is developed in this section.
From a first comparison of the manipulator's control performances visualized in Figures 3.13 and 3.17, one can note similar results. This allows the focus to be placed on the base and flexible dynamics to better illustrate the disturbance rejection obtained with the computed torque (3.63). With the second subplot of Figure 3.18, one easily verifies that the base is successfully maintained at a fixed attitude during the manipulator maneuver. This corresponds to reaction-wheel velocities relatively similar to the desired ones in the previous joint-space control. As the base rotates less, the flexible appendages are less affected by the manipulator, as illustrated by the lower subplot of Figure 3.18, but they are still excited when the joints' velocities change. Capitalizing on the reaction-wheels' constant inertia matrices (H_0r and H_r), detailing the computed torque (3.63) controlling the four reaction-wheels provides information on the linearization/decoupling of the system. One defines the four following signals: the feedback control torque (J_τ^+ M_ωm v), the feedback linearizing torque (J_τ^+ D_ωm [ω_0; q̇_m]), the measured reaction-wheels linearizing torque (J_τ^+ D_r q̇_r) and the internal disturbance torque (J_τ^+ D_x x_e). In Figure 3.19, each signal is detailed to highlight its role in the reaction-wheels control maintaining the base at a null angular velocity. Regarding the amplitudes of each signal, one can note that the internal disturbance torques, visualized in the third subplot of Figure 3.19, have nearly the same importance as the feedback linearizing torques, visualized in the second subplot of Figure 3.19. Thus, including an estimation of the unmeasured states in (2.69) allows the system to be properly decoupled and linearized, as well as using the reaction-wheels as a means of rejecting base disturbances. Additionally, maintaining the base fixed limits the dissipation in the overall system due to the flexible appendages, as highlighted by comparing Figure 3.20 with Figure 3.5.

Chapter conclusions

In this chapter, the modeling developed in chapter 2 and the analysis/simulation tools have been used to introduce novel control strategies. After a first review of the literature to identify possible areas of improvement for SMS control, a main concern was raised with the presence of flexible appendages. Considering rotation-free-floating SMS, the benefits of simultaneously controlling the base actuators and the manipulator have been discussed. With the introduction of new kinetic indices, a quantitative analysis can be developed to both justify the common control and design adapted motions. In addition, these indices are suitable for designing an SMS and justifying the placement and number of actuators. Motivated by the conclusions resulting from the use of the kinetic indices, a common base and manipulator joint control law has been introduced to perform precise SMS control in the presence of internal disturbances mainly due to the flexible vibrations and the spacecraft's drift. This control strategy is based on an NDI that capitalizes on the modeling effort developed in the previous chapter. Thanks to the modeling of the flexible dynamics, an ESO is established to improve the NDI quality and thus the decoupling of the actuators. Both joint-space and base-manipulator controls have been developed under a similar formalism to illustrate the advantages of such a control scheme.
Another contribution lies in the gains synthesis, which is performed simultaneously through an LMI resolution problem. Based on a Lyapunov stability analysis, the resolution ensures control performances for a given task. It specifically requires a manipulator path with joint velocities to evaluate the relaxation terms introduced to reduce the number of LMI constraints. With time-domain simulations performed on a simple SMS, the interest of such a control scheme has been demonstrated. Therefore, this chapter introduces a formalism to establish the control of an SMS in the presence of flexible appendages. However, improvements are required before an implementation on actual systems. For that purpose, the next chapter focuses on presenting a robust control strategy based on the one presented in this chapter. Modeling uncertainties, measurement errors and notable system variations are considered in order to perform various tasks precisely and safely.

The control schemes and the gains synthesis presented in this chapter have been extended from the works published in [START_REF] Kraïem | Control of rotation-floating space robots with flexible appendages for on-orbit servicing[END_REF] and [START_REF] Kraïem | Robust control of rotation-floating space robots with flexible appendages for on-orbit servicing[END_REF]. One will find in [START_REF] Kraïem | Control of rotation-floating space robots with flexible appendages for on-orbit servicing[END_REF] the base-manipulator control scheme with a similar ESO and control gains synthesis. Then, in [START_REF] Kraïem | Robust control of rotation-floating space robots with flexible appendages for on-orbit servicing[END_REF], one will find an extension of the control framework for a joint-space control of the SMS, where additional robust criteria are introduced in the simultaneous gains synthesis. This last work introduces the remaining control difficulties and a first approach to handle significant system variations.

Chapter 4

Robust joint-space control for a space manipulator system in presence of flexible appendages

In the previous chapter, the motivations for performing a common base and manipulator control of rotation-free-floating SMS with flexible appendages have been introduced and its benefits have been widely discussed. However, to face actual control difficulties, robust criteria must be added to the control scheme. With the modeling derivation detailed in chapter 2, uncertainties on the physical parameters may be taken into account in the gains synthesis. Thus, for modeling and measurement errors, an NDO is introduced and included in the system linearization. A gains synthesis similar to the one proposed in chapter 3 is developed based on a stability analysis and performed through an LMI resolution. Moreover, one advantage offered by an SMS is the possibility of performing different OOS missions. An additional objective in the gains synthesis therefore focuses on the system variations corresponding to the different OOS tasks.

To summarize this chapter, the control strategy introduced in the previous one is extended to answer current OOS problems with an additional observer and a new gains synthesis. The approach is then validated in a realistic environment with extensive tests on an on-orbit space telescope assembly use-case performed with the simulation tools detailed in chapter 2.
Robust control challenges

Area of improvement

As the first difficulty of controlling an SMS resides in the couplings between the bodies, robust control strategies have focused on rejecting the coupling effects. Such approaches have mainly been developed for capturing targets, as these may exert large and unknown disturbances on the manipulator's end-effector and thus cause an important momentum variation. Besides developing an impedance control strategy [START_REF] Yoshida | Dynamics, control and impedance matching for robotic capture of a non-cooperative satellite[END_REF]; [START_REF] Nakanishi | Impedance control for free-flying space robots-basic equations and applications[END_REF] or focusing on a path-planning method [START_REF] Hu | Minimum base attitude disturbance planning for a space robot during target capture[END_REF]; [START_REF] Zhang | Pre-impact Trajectory Planning of Nonredundant Free-Floating Space Manipulator[END_REF] to reduce the capture impact, robust control schemes have been proposed to deal with the impact and the post-capture of a target. Dong et al. [DC14] established a robust adaptive compound control algorithm to suppress the motions that destabilize the robotic system after the capture. However, their method, based on momentum conservation, requires strong assumptions to validate the modeling they proposed for the system composed of the free-floating SMS and the target. Capture scenarios also highlight the problems caused by an incorrect or poor modeling. Adaptive control methods have provided interesting features to deal with the capture of an unknown target. Nguyen et al. [START_REF] Nguyen-Huynh | Adaptive reactionless motion for space manipulator when capturing an unknown tumbling target[END_REF] proposed an adaptive control to stabilize a captured target with a reaction-less motion of the chaser's base. Furthermore, the estimation of the system's physical properties (i.e. mass, dimensions, CoM) remains challenging. Model uncertainties have been one motivation for introducing robust strategies for SMS control. Luo et al. [START_REF] Luo | Robust inertia-free attitude takeover control of postcapture combined spacecraft with guaranteed prescribed performance[END_REF] proposed a robust inertia-free attitude takeover control to detumble a spacecraft without inertial properties and subject to external disturbances. Similarly, in the detumbling procedure, Gangapersaud et al. [START_REF] Rabindra | Detumbling a non-cooperative space target with model uncertainties using a space manipulator[END_REF] considered a force control to avoid identifying the target's inertia parameters. Aghili et al. [START_REF] Aghili | Optimal Trajectories and Robot Control for Detumbling a Non-Cooperative Satellite[END_REF] developed an optimal detumbling control strategy to cancel the momentum dissipation of the grasped non-cooperative target under force/torque limitations. Dubanchet et al. [START_REF] Dubanchet | Motion planning and control of a space robot to capture a tumbling debris[END_REF] established a fixed-structure H∞ synthesis to separately control the manipulator and the base, extending the work of Aghili et al. [START_REF] Aghili | Optimal control of a space manipulator for detumbling of a target satellite[END_REF] to optimally capture a tumbling satellite. The presence of external disturbances is another source of motivation for robust controls.
The different external perturbations are affecting either directly the manipulator and/or the spacecraft's base or will excite flexible vibrations of the appendages [START_REF] Cao | Thermal alternation induced vibration analysis of spacecraft with lateral solar arrays in orbit[END_REF]. H ∞ controllers have been established to deal with external and internal perturbations [START_REF] De Fpa Taveira | Adaptive nonlinear H ∞ controllers applied to a free-floating space manipulator[END_REF]; [START_REF] Seddaoui | Combined nonlinear H ∞ controller for a controlled-floating space robot[END_REF]; [START_REF] Qiao | High-precision attitude tracking control of space manipulator system under multiple disturbances[END_REF]. These methods allow to answer different problems or difficulties in SMS control. First the modeling uncertainties of the system [START_REF] Zhang | Robust adaptive control of a free-floating space robot system in Cartesian space[END_REF] and likewise it avoids the difficult modeling of a rigid and flexible multi-body system [START_REF] Colmenarejo | Methods and outcomes of the COMRADE project-Design of robust Combined control for robotic spacecraft and manipulator in servicing missions: comparison between between Hinf and nonlinear Lyapunovbased approaches[END_REF]. Furthermore, in the previous chapter the established control law highlights some benefits by controlling both the manipulator and the base simultaneously. The promising results obtained motivate to pursue the effort of improving such design to answer the requirements of current and future OOS operations. Likewise, with the possible complete modeling of rigid-flexible systems, one capitalizes on this to obtain adapted control solutions and efficient use of actuators such that the mission lifespan can be optimized. Limits of the control introduced in chapter 3 The proposed control scheme established in the previous chapter aimed at introducing the interest of a common base and manipulator control in presence of perturbations inside the system. Nevertheless, one can identify the following necessary improvements to answer actual OOS requirements: • A primary interest of using SMS to proceed to OOS missions is the possibility of performing a multi-task mission [START_REF] Flores-Abad | A review of space robotics technologies for on-orbit servicing[END_REF]. In that purpose the proposed gains synthesis requires adjustments to consider different mass distribution and end-effector motions. • With a similar objective, the actual synthesis is based on a well-known manipulator task. Such synthesis is restrictive to a particular task, requiring a precise path and motions' velocities. Therefore, extending the synthesis to a given workspace could be a significant improvement of the method. • A second weakness of the proposed method is on the modeling assumptions to perform the system linearization and designing of the observer. Developing accurate kinematic/dynamic models remains challenging especially when integrating the flexible behaviors onto the overall system equations of motions. Thus, in order to deal with modeling uncertainties the control scheme requires other adaptations. • Additionally to model errors, the quality of measured states should also be discussed. Typically, besides the actuators velocities, the base orientations' measures incorporate some measurement noises. In this section, a novel control scheme is introduced to bring the different improvements previously enumerated. 
As illustrated with Figure 4.1, the control law is based on a system dynamic inversion (with an NDI6 ) in which the unmeasured states (i.e. the flexible and linear spacecraft dynamics) are included with an ESO 7 as well as an estimation of a disturbance torque induced by the modeling error with an NDO. Control strategy The contribution of this strategy is the introduction of a disturbance torque, τ d , due to both modeling uncertainties and measurement errors. As concluded with the previous control scheme detailed in chapter 3, including un-measured states in the system decoupling allows to both improve its quality and adapt the use of actuators. With the same ambition, an NDO is developed to estimate an additional perturbation torque introduced with the evaluation of the inverse uncertain kinematic/dynamic model. Both observers are designed thanks to the modeling effort developed in chapter 2. Moreover, the same assumptions as in chapter 3 on the available measurements are made to establish the new control scheme. The actuators' accelerations, q, the flexible dynamics, η, η and η, and the SMS base's linear dynamics, ṙS 0 and rS 0 are presumed not measured. Therefore, the ESO is designed by slightly adapting the observer proposed in section 3.3.1 to include the modeling/measurement errors. The NDO is similarly developed with the use of actuators' velocities. Before developing the open-loop dynamics, notations are introduced in order to consider errors in evaluation of inverse kinematics/dynamics due to modeling uncertainties and measurement errors. A quantity X obtained or evaluated from measurement is indicated as X such that X = X + ∆X with ∆X representing the difference between the actual value and the measured one. Thus, the system's dynamics (2.69) obtained from the modeled spacecraft can be expressed as: Ĥ(q)    q0 q η    + Ĉ(q, q, q0 )    q0 q η    + K    q 0 q η    =    0 6×1 τ q 0 nη×1    -    τ ∆ 1 (q, q, q0 ) τ ∆ 2 (q, q, q0 ) τ ∆ 3 (q, q, q0 )    (4.1) with the following hypothesis: • the terms ∆x∆y are neglected, • the base linear velocity is estimated by the state observer detailed in section 4.2.2 such that q0 = ωsat T 0 ṙT 0e T , • measurements of actuators position and velocities are considered good enough to assume q = q and q = q Modeling and measurement errors are then gathered in a disturbance torque defined as: τ ∆ (q, q, q0 ) =    τ ∆ 1 (q, q, q0 ) τ ∆ 2 (q, q, q0 ) τ ∆ 3 (q, q, q0 )    = ∆H    q0 q η    + ∆C    q0 q η    + ∆K    q 0 q η    -Ĥ(q)∆    q0 q η    -Ĉ(q, q, q0 )∆    q0 q η    -K∆    q 0 q η    (4.2) Joint open-loop behavior In order to establish a joint-space control, a rewriting effort of the full system dynamic followed by the modeled manipulator (4.1) is required. This effort aims at expressing the joints' dynamics in function of either measurable states or quantities that can be estimated with the observers detailed in sections 4.2.2 and 4.2.3. 
First, with (4.1) actuators' dynamics are expressed as: Ĥq q + Ĉq q = -ĤT 0q 0 q0 η -0 Ĉq0 0    η q0 η    + τ ∆ 2 + τ q (4.3) Secondly, from (4.1), the unmeasured states are isolated as: Ĥ0 Ĥ0η ĤT 0η Ĥη q0 η + Ĥ0q 0 q + 0 Ĉ0 Ĉ0η Kη Ĉη0 Ĉη    η q0 η    + Ĉ0q 0 q = τ ∆ 1 τ ∆ 3 (4.4) One can note that the matrix Ĥ0 Ĥ0η ĤT 0η Ĥη is an inertia matrix which by definition has an inverse given by: Ĥ0 Ĥ0η ĤT 0η Ĥη -1 = ( Ĥ0 -Ĥ0η ĤT 0η ) -1 -( Ĥ0 -Ĥ0η ĤT 0η ) -1 Ĥ0η -ĤT 0η ( Ĥ0 -Ĥ0η ĤT 0η ) -1 I nη + ĤT 0η ( Ĥ0 -Ĥ0η ĤT 0η ) -1 Ĥ0η (4.5) with Ĥη = I nη . The inertial term ( Ĥ0 -Ĥ0η ĤT 0η ) corresponds to a scaled inertia of the base for which the flexible inertia have been subtracted of the spacecraft base. Then by injecting (4.4) in (4.3) and by introducing the state vector x = η T qT 0 ηT T , actuators' dynamics are obtained as: M (q)q + D (q, q, q0 ) q + D x (q, q, q0 )x = τ q + J ∆ (q)τ ∆ (4.6) with:                              M = Ĥq -ĤT 0q ( Ĥ0 -Ĥ0η ĤT 0η ) -1 Ĥ0q D = Ĉq -ĤT 0q ( Ĥ0 -Ĥ0η ĤT 0η ) -1 Ĉ0q D x = 0 Ĉq0 0 -ĤT 0q ( Ĥ0 -Ĥ0η ĤT 0η ) -1 -( Ĥ0 -Ĥ0η ĤT 0η ) -1 Ĥ0η 0 Ĉη Ĉ0η Kη Ĉη0 Ĉη J ∆ = -ĤT 0q ( Ĥ0 -Ĥ0η ĤT 0η ) -1 I nq ĤT 0q ( Ĥ0 -Ĥ0η ĤT 0η ) -1 Ĥ0η (4.7a) (4.7b) (4.7c) (4.7d) Thus, the joints' dynamics depends on an un-measurable state vector x and a disturbance torque τ d = J ∆ (q)τ ∆ . The state vector x includes spacecraft linear drift and flexible dynamics while the disturbance torque gathers modeling and measurement errors. Splitting the sources of disturbances applying on actuators presents the advantage of clearly identifying the perturbations with different dynamics properties such that a rejection strategy can be developed. Moreover, one can note that matrix D x gathers an equivalent stiffness and a convective term multiplied by a scaled inertia. This scaling corresponds to the impact of flexible dynamics onto the actuators' ones. In order to linearize the system a state observer and a nonlinear disturbance observer are detailed in the following sections. State Observer model An extended state observer is developed such that the flexible and the spacecraft's linear dynamics can be included in the system linearization detailed in section 4.2.4. The ESO is established using the control torque and the measure of actuators' velocities as input and the spacecraft's angular velocities as output measurements. As the actuators' accelerations are not available, by injecting (4.3) in (4.4), the dynamic of the un-measured state can be written as: H 0η q0 η + C η0η    η q0 η    + C q q = J ∆ τ ∆ + J q τ q (4.8) with:                                                H 0η = Ĥ0 Ĥ0η ĤT 0η Ĥη -Ĥ0q 0 Ĥ-1 q ĤT 0q 0 C η0η = 0 Ĉ0 Ĉ0η Kη Ĉη0 Ĉη -Ĥ0q 0 Ĥ-1 q 0 Ĉq0 0 C q = Ĉ0q 0 -Ĥ0q 0 Ĥ-1 q Ĉq J ∆ = I -Ĥ0q Ĥ-1 q 0 0 0 I J q = -Ĥ0q 0 Ĥ-1 q (4.9a) (4.9b) (4.9c) (4.9d) (4.9e) Introducing the state vector to be estimated, x = η T qT 0 ηT T , one can re-write (4.8) as:                                ẋ =    η q0 η    = 0 nη 0 nη×nq I nη -H -1 0η C η0η x + 0 nη×nq 0 nη×nq -H -1 0η C q H -1 0η J q q τ q + 0 nη×(6+nq+nη) H -1 0η J ∆ τ ∆ = A e (q, q, q0 )x + B q (q, q, q0 )u + B ∆ (q, q, q0 )τ ∆ y = ω sat 0 = 0 I 0 0 x = C e x (4.10a) (4.10b) The ESO dynamics includes the disturbance torque which requires to be estimated in order to linearize the system as well as insuring an accurate state estimation. 
By introducing the linear estimation gain L x and τ d the estimation of τ d , the state vector x is estimated as x e such that: ẋe = A e (q, q, q0 )x e + B q (q, q, q0 )u + B ∆ (q, q, q0 )J + ∆ (q)τ d + L x (y -C e x e ) (4.11) Nonlinear Disturbance Observer model Similarly to the state observer, an estimation of the disturbance torques in the system linearization aims at improving the control performances. In particular, the present disturbance torque includes both modeling error and measurement errors which allows to compensate different uncertainties of the manipulator model and maintain high control performances. The disturbance observer is developed capitalizing on the structure of the joint's dynamics (4.6) by introducing a gain L d , the disturbance observer is given by [START_REF] Mohammadi | Nonlinear disturbance observer design for robotic manipulators[END_REF]: τ d = -L d τ d + L d (M q + D q + D x x e -τ q ) (4.12) However, actuators' accelerations are usually not available. To overcome this drawback, a new variable w is introduced, such that [START_REF] Chen | A nonlinear disturbance observer for robotic manipulators[END_REF]: w = τ d -p(q, q) (4.13) where the vector p(q, q) can be computed from the nonlinear gain L d (q, q) as [Moh+13]: d dt p(q, q) = L d (q, q)M (q)q (4.14) With (4.6), (4.12) and (4.14), one can describe the nonlinear disturbance observer with the time-derivative of (4.13): ẇ = τ d - d dt p(q, q) = -L d (q, q) + L d (M (q)q + D (q, q, q0 ) q + D x (q, q, q0 )x e -τ q ) = -L d (q, q)τ d + L d (q, q) D (q, q, q0 ) q + D x (q, q, q0 )xτ q + L d (q, q)D x (q, q, q0 ) (x e -x) = -L d (q, q)w + L d (q, q) D (q, q, q0 ) q + D x (q, q, q0 )xτ q -p(q, q) + L d (q, q)D x (q, q, q0 ) (x e -x) (4.15) Hence the nonlinear disturbance with only measurable or estimated states is defined as:              ẇ = -L d (q, q)w + L d (q, q) D (q, q, q0 ) q + D x (q, q, q0 )x e -τ q -p(q, q) τ d = w + p(q, q) d dt p(q, q) = L d (q, q)M (q)q (4.16a) (4.16b) (4.16c) Closed-loop dynamics After establishing the different observer models and the open-loop control behavior, one can note the different inter-dependencies between each dynamics. In order to insure stability of the closed-loop system, the tracking control error signal and the observers' error signals are considered in this section to highlight the notable dependencies between the dynamics. The tracking error control signal is defined as c = (q d -q), with q d the desired joint space trajectory, the state observer error signal e = (xx e ) and the nonlinear disturbance error signal d = (τ d -τ d ). The state observer error signal dynamics can be obtained with (4.10) and (4.11) such that: ˙ e = ẋ -ẋe = A e x + B q u + B ∆ J + ∆ τ d -A e x + B q u + B ∆ J + ∆ τ d + L x (y -C e x e ) = (A e -L x C e ) e + B ∆ J + ∆ d (4.17) In order to obtain the nonlinear disturbance observer error dynamics, one can make a first assumption on the relative convergence rate of the error signals compared with the evolution of the torque disturbance. By construction, the disturbance torque τ d present slow dynamics as the ∆ matrices have relatively low amplitudes. 
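Before proceeding with the convergence analysis, the interplay of the ESO (4.11) and the NDO (4.16) can be made concrete with a minimal discrete-time sketch. It assumes the gain structure L_d = P_d^-1 M^-1 retained in the synthesis of section 4.2.5, for which p(q, q̇) = P_d^-1 q̇ satisfies (4.16c) without requiring accelerations, and it anticipates the control torque (4.21) defined below; all names are illustrative and the forward-Euler integration is only one possible choice.

% One control period: NDO update (4.16), ESO update (4.11), torque (4.21).
% tau_q is the torque applied over the previous period.
p      = Pd \ dq;                        % p(q,dq), with L_d = inv(Pd)*inv(M)
tau_dh = w + p;                          % disturbance estimate (4.16b)
Ld     = Pd \ inv(M);                    % NDO gain
dw     = -Ld*tau_dh + Ld*(Dmat*dq + Dx*x_e - tau_q);         % (4.16a)
dx_e   = Ae*x_e + Bq*[dq; tau_q] ...                         % u = [dq; tau_q]
       + Bdel*pinv(Jdel)*tau_dh + Lx*(w0_meas - Ce*x_e);     % (4.11)
w   = w + dt*dw;    x_e = x_e + dt*dx_e;                     % Euler integration
v      = K * [q_des - q; dq_des - dq];                       % (4.20)
tau_q  = M*v + Dmat*dq + Dx*x_e - tau_dh;                    % (4.21)

With these estimates in place, the convergence of the coupled error dynamics can now be analyzed.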
One can then assume that τ d ≈ 0 [START_REF] Chen | A nonlinear disturbance observer for robotic manipulators[END_REF] and in the case of fast disturbance dynamics, one can adapt the gain synthesis detailed in section 4.2.5 such that the convergence rate of d is exponential when τ d is bounded [START_REF] Mohammadi | Nonlinear disturbance observer design for robotic manipulators[END_REF]. Thus, the respective error dynamics is approximated by: ˙ d = τ d -τ d ≈ -τ d = -ẇ - d dt p (4.18) Then with (4.16) and (4.18), the error signal of the NDO in function of the state observer error signal can be detailed as: ˙ d = L d w -L d (D q + D x x -τ q -p) + L d D x e - d dt p = L d (w + p) -L d (M q + D q + D x x -τ q ) + L d D x e = -L d d + L d D x e (4.19) Before establishing the tracking control error dynamics, a control torque that allows to linearize and decouple the system is introduced. For a desired actuator dynamics, v, such that: v = K q d -q qd -q = K c ˙ c (4.20) where K is a linear control gain, a control torque that realizes this objectives is given by: τ qc = M K c ˙ c + D q + D x x e -τ d (4.21) By injecting (4.21) in (4.6), the actuators closed-loop dynamics is given as: M q + D q + D x x -τ d = M K c ˙ c + D q + D x x e -τ d (4.22) matrix M is defined positive and symmetric as it an inertia matrix, thus by rewriting the close-loop dynamics, the tracking control error dynamics is obtained as: ¨ c = -K c ˙ c + M -1 D x e -M -1 d (4.23) Let's introduce the state vector z = T c ˙ T c T , then (4.23) can be re-written as a state representation:                ż = 0 I 0 0 + 0 -I K z + 0 M -1 D x e + 0 -M -1 d = (A z + B z K) z + B ze e + B z d d c = I 0 z = C z z (4.24a) (4.24b) Thus, by considering (4.17), (4.19) and (4.24) one can note that the system's stability depends on the convergence of coupled observers dynamics. A new hypothesis on the convergence rates can be made to reduce the coupling of the observers dynamics and enforce the hypothesis of τ d ≈ 0. The disturbance observer error present lower amplitudes than x as it mainly corresponds to residual modeling errors. In consequence, the emphasis on an accurate estimation of x e should be made over τ d estimation. Imposing a faster convergence time on the state estimation than on the disturbances allows to simplify the coupling dependency in (4.17 =    (A z + B z K) B ze B z d 0 (A e -L x C e ) 0 0 L d D x -L d    X = A(q, q, q0 )X (4.26) Simultaneous gains synthesis With the closed-loop expression as expressed in (4.26), the separation principle could allow to obtained the different controller and observer gains imposing negative eigen values of (A z + B z K), (A e -L x C e ) and -L d to insure stability of the system. However, a simultaneous synthesis is proposed to insure disturbances rejections with observers with slower dynamics than the controller one. The synthesis is firstly developed for a linearized system and then extended to deal with workspace considerations. Secondly, the synthesis method is adapted to deal with large system evolutions mainly due to the mass distribution variations. Robustness to modeling and measurement errors With the following proposition, the gains synthesis allows to insure control performances in presence of modeling uncertainties and measurement errors for a given linearized system. 
Proposition: If there exist symmetric definite matrices Q z , Q d and P e and matrices W z and W e of appropriate dimensions such that for a given scalar γ > 0 the following LMI constraint holds: Θ =      (A z Q z + B z W z ) s B ze B z d Q d Q z C T z * (P e A e -W e C e ) s D T x 0 * * (-Q d + D ) s 0 * * * -γ 2 I      < 0 (4.27) with X s = X + X T , Then system is quadratically stabilized with K = W z Q -1 z , L x = P -1 W and L d = P -1 d M -1 . Moreover, the outputs c verify: such that V + γ -2 T c c ≤ 0 for a given γ > 0. ∞ 0 c (t) T c (t)dt < γ 2 Then by integration of this constraint: Then the constraint V + γ -2 T c c ≤ 0 is equivalent to: This concludes the proof. ∀ T > 0, T 0 V + γ -2 T c c dt < 0 ⇒ γ 2 T 0 T c c dt < γ 2 (V 0 -V T ) ⇒ γ 2 T 0 T c c dt < γ 2 V 0 = γ 2 T    (P z (A z + B z K)) s + γ -2 As such, for a linearized system the matrices A e , D x , D , B z d and B ze are evaluated. Then defining the LMI variables Q z , Q d , P e , W z and W e the LMI (4.27) is resolved by minimizing the parameter γ. This allows to obtain the control and observers gains suitable to insure stability for a given motions for which the linearization is sufficient. For large variations the LMI (4.27) is evaluated for different system linearizations. Robustness to system variations During an on-orbit servicing operation, the distribution of mass is led to face significant changes through the different manipulations. Thus the resolution of the LMI (4.27), which is verified around a system's equilibrium state, needs to be slightly changed in order to consider those system variations without considering multiple equilibrium points which would prohibitively increase the number of LMI constraints in the design process. In this section, under mild assumptions, it is highlighted that system velocities can be neglected in a variations preliminary analysis to focus on a workspace analysis. In other words, in the inequality (4.27), convective terms have much less influence than inertial terms. A first modification of the gains synthesis can be observed by considering a previous assumption on the convergence rate of the disturbance torque estimation. Compared to both control and state observer error signals, the dynamics of the disturbance torque estimator are slow enough to be neglected. Consequently, the tracking error dynamics (4.24) can be approximated as: ż = (A z + B z K) z + B ze e (4.38) With these considerations, the LMI (4.27) including system variations during OOS operations is modified as follows (with the Deltas blocs and simplification of the coupling dynamics): Θ =      (A z Q z + B z K) s B ze + ∆B ze 0 Q z C T z * (P e A e -W e C e ) s D T x + ∆D T x 0 * * (-Q d + D ) s + ∆D + ∆D T 0 * * * -γ 2 I      < 0 (4.39) Let us now introduce bounds ρ 1 , ρ 2 and ρ 3 on the maximum singular values of the interest variations:          σ (∆B ze ) < ρ 1 σ (∆D x ) < ρ 2 σ ∆D + ∆D T < ρ 3 (4.40a) (4.40b) (4.40c) Using a Schur-complement based argument, it is readily checked that the inequalities (4.39) are enforced by: Θ =      (A z Q z + B z K) s + ρ 1 I B ze * (P e A e -W e C e ) s + (ρ 1 + ρ 2 )I * * * * 0 Q z C T z D T x 0 (-Q d + D ) s + (ρ 2 + ρ 3 )I 0 * -γ 2 I      < 0 (4.41) Let us now evaluate the bounds introduced in (4.40). From equations (4.7) and (4.24), one can identify the matrix terms that are dependent on the workspace and those that also require system velocities to be evaluated. 
Decomposing D x with a stiffness and a convective equivalent term such that D x (q, q) = K η (q) D t,η (q, q) defined as:            K η = ĤT 0q ( Ĥ0 -Ĥ0η ĤT 0η ) -1 Ĥ0η Kη D t,η = Ĉq0 -ĤT 0q ( Ĥ0 -Ĥ0η ĤT 0η ) -1 Ĉη + ĤT 0q ( Ĥ0 -Ĥ0η ĤT 0η ) -1 Ĥ0η Ĉη0 -ĤT 0q ( Ĥ0 -Ĥ0η ĤT 0η ) -1 Ĉ0η + ĤT 0q ( Ĥ0 -Ĥ0η ĤT 0η ) -1 Ĥ0η Ĉη (4.42a) (4.42b) one can consider matrices M (q) and K η (q) on one side and velocity dependent term matrices D and D t,η on an other side. Firstly, the common factor term ĤT 0q Ĥ0 -Ĥ0η ĤT 0η -1 , which corresponds to a scaled inertia can be constrained as: σ ĤT 0q Ĥ0 -Ĥ0η ĤT 0η -1 ≤ σ ĤT 0q σ H 0 -H 0η H T 0η = β (4.43) where σ(A) denotes the minimum singular value of matrix A. Thus, matrices M (q) and K η (q) can be bounded as:    σ (M ) I ≤ M ≤ σ (M ) I σ K η ≤ β σ Ĥ0η Kη (4.44a) (4.44b) Then, the velocity dependent terms can be simplified thanks to specific assumptions in space robotic applications. In order to maintain system stability, actuators velocities remain low enough to develop slow manipulator motions such that flexible modes excitation are limited. Moreover, system variations correspond to the different mass distributions which are not affected by the reaction-wheels velocities. The study of matrices D t,η and D results in identifying the impact of the manipulator motions in the considered workspace. The rewriting effort made to obtain the joint-space dynamics (4.6) has allowed to define D x as an equivalent Coriolis matrix. This convective term can be bounded as a function of the SMS configuration and the actuators' capacity such that [MM07]: σ (D (q, q)) ≤ σ (D (q, q)) ≤ λ q 2 max (4.45) where the parameter λ is defined as a function of the studied workspace [START_REF] Mulero-Martinez | Uniform bounds of the coriolis/centripetal matrix of serial robot manipulators[END_REF]: λ = 3 2 sup nq i=1 ∂M (q) ∂q i (4.46) Furthermore, with the assumption of slow manipulator motions, the evaluation of D x can be reduced to the variations of K η . By definition, convective matrices correspond to the cross product terms between q, η and ω 0 . For slow manipulator and base motions, flexible modes vibration amplitudes are limited. These general assumptions allow to neglect convective term variations compared with inertia/mass ones. Bounding D x as follows σ (D x ) ≤ σ K η + σ D t,                ρ 1 = σ K η σ (M ) ρ 2 = σ K η ρ 3 = 2λ q 2 max (4.50a) (4.50b) (4.50c) with σ K η corresponding to the greatest singular value of matrix K η and σ (M ) lowest singular value of matrix (M ). Proposed design methodology The design procedure is now generalized to a multi-task on-orbit servicing scenario in which from one task to another the mass distribution may significantly differ as well as the size of the workspace. To optimally consider the system changes through the different tasks and obtain optimal constant control and observer gains, let us define n tasks, denoted t 1≤i≤n ∈ T , where T denotes the entire set of tasks corresponding to the total servicing scenario. The details of the design procedure are given by the following algorithm: = Q -1 d Thereby, algorithm 2 provides controller and observer gains suitable for a space manipulator system to perform different tasks while preserving similar control performances for each task. 
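The pre-analysis and resolution steps of algorithm 2 can be sketched as follows, where model_terms is a hypothetical helper returning M(q) and the stiffness term K̄_η(q) of (4.42) at a sampled configuration, lambda is evaluated from (4.46) over the workspace, and Theta_bar is a hypothetical helper building the constraint (4.41) from the decision variables. This is an illustrative sketch of the bound evaluation (4.50) and of the γ-minimization, not the exact design code.

% Evaluation of the relaxation bounds (4.50) over a gridded workspace w_i.
rho1 = 0;  rho2 = 0;
for k = 1:size(Q_grid, 2)                     % Q_grid: sampled configurations
    [M, Keta] = model_terms(Q_grid(:, k));    % hypothetical model evaluation
    rho1 = max(rho1, max(svd(Keta)) / min(svd(M)));    % (4.50a)
    rho2 = max(rho2, max(svd(Keta)));                  % (4.50b)
end
rho3 = 2 * lambda * dq_max^2;                          % (4.50c)
% The bounds of each retained task then enter a single constraint (4.41),
% and the gamma-minimization is solved with YALMIP/MOSEK
% (nq = 14 and nx = 50 in the PULSAR application below):
Qz   = sdpvar(2*nq, 2*nq);  Qd = sdpvar(nq, nq);  Pe = sdpvar(nx, nx);
Wz   = sdpvar(nq, 2*nq, 'full');  We = sdpvar(nx, 3, 'full');  gam2 = sdpvar(1);
Cons = [Qz >= 1e-6*eye(2*nq), Theta_bar(Qz, Qd, Pe, Wz, We, gam2) <= 0];
optimize(Cons, gam2, sdpsettings('solver', 'mosek'));
K = value(Wz) / value(Qz);                             % K = Wz*inv(Qz)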
The SMT container (7) with the main mast (8): dimensions XYZ (m): 6 × 1.42541 × 1.2344; mass (kg): 1132; CoM XYZ (m): [1.5 0 0]. Its function is to store the 36 segmented mirror tiles at the beginning of the assembly scenario. Its configuration consists of a first row of 6 tiles at the bottom, followed by 6 consecutive rows of 5 tiles. This allows the rail of the robotic assembly system to pass between the tiles, enabling the robotic arm to access all the tiles. The pre-assembly site (9) features a unique standard interface which is used for attaching SMTs to form a pre-assembly.

The manipulator has been adapted from the Compliant Assistance and Exploration Space Robot (CAESAR). The CAESAR robotic arm is a 7-DoF, modular manipulator designed for space applications. Its original total length is ∼3 m. To increase the original workspace, its length was increased by ∼1 m (50 cm between joints 2-3 and 50 cm between joints 4-5) to allow for the complete assembly process. Figure 4.3 illustrates the latest CAESAR configuration as integrated on the PULSAR telescope, with an additional prismatic DoF at its origin corresponding to the rail. In addition, 6 reaction-wheels, positioned inside the spacecraft's base ((1) in Figure 4.2), actively control the 3 rotational DoFs of the space telescope. Moreover, with the two solar arrays and the sun-shield, represented by four flexible beams, 22 flexible modes are integrated into the equations of motion as detailed in section 2.3. The total spacecraft mass is 6892 kg. The base weighs 1960 kg while the robotic arm is limited to 60 kg.

The scenario of a complete assembly can be divided into 372 sub-tasks, t_1≤i≤372. The sequencing allows complex actions to be broken down into simpler ones, among which inspection tasks and tile displacements. Limited by the manipulator workspace, the mirror assembly is decomposed into a first pre-assembly of bundles of five tiles, during which the manipulator unstacks tiles from the container and frees space in the bottom part of the assembly site. The manipulator therefore moves between one and four tiles before establishing a pre-assembly bundle. An illustration of the scenario is given with the following snapshots. The next operation consists in building the first pre-assembly and placing it on the assembly site. The same set of operations is performed to build the second pre-assembly and place it at the correct location on the assembly site (as visualized in Figure 4.5). Finally, the last four pre-assembly/assembly operations are executed consecutively: five tiles are first assembled on the pre-assembly site, and the pre-assembly is then positioned at the correct location on the final assembly system.

Simulation results

Gains synthesis solver

The LMI resolution involves high-dimensional matrices and a large set of constraints (according to the considered scenario), which may lead to significant computational times. In the present application, each constraint (4.27) corresponds to a 106 × 106 matrix inequality where the decision variables Q_z, Q_d and P_e are respectively 28 × 28, 14 × 14 and 50 × 50 symmetric matrices. In order to better control the resolution time, the YALMIP toolbox [START_REF] Lofberg | YALMIP: A toolbox for modeling and optimization in MAT-LAB[END_REF] is interfaced here with the MOSEK solver. To assess the computational cost, Table 4.3 gathers, for different sizes of the set of tasks T, the resolution times obtained on a standard computer equipped with an Intel i7 processor.
As expected, the CPU time increases considerably with the size of T. However, the number of needed iterations is not much affected. It should also be emphasized that, without major degradation, the controller gains can be restricted to diagonal matrices, thus slightly reducing the number of decision variables. If the resolution time becomes an issue, as it directly depends on the number of decision variables and LMI constraints, the use of further relaxations can be considered at the cost of possibly increased conservatism.

Analysis of system variations

During the on-orbit assembly of the PULSAR telescope, important system variations occur due to mass distribution changes. A preliminary analysis of the impact of such variations, through the evaluation of the bounds detailed in (4.40) for each sub-task of the deployment scenario, is useful to reduce the size of T in the design process. The sub-tasks presenting the lowest variations correspond to verification maneuvers or small adjustments of the end-effector rotations. Consequently, such tasks share similar physical properties with the following longer manipulation task, which justifies reducing the number of tasks considered in the gains synthesis.

Figure 4.9 illustrates the variations of the three bounds for each task t ∈ T with unused reaction-wheels (i.e. q̇_r = 0), as they do not affect the mass distribution in the workspace. For each task, the norms are evaluated for the maximal joint velocities and uniformly distributed manipulator joint configurations in the corresponding workspace w_i. As a first observation, it can be noted that a workspace approach is a suitable method to evaluate the bounds ρ_i. Observing, from the upper subplot of Figure 4.9, the similar evolutions of σ(M^-1 D_x) and σ(M^-1 K̄_η), and then comparing σ(D_x) with σ(K̄_η) on the central subplot, one can validate the approximations of section 4.2.5.2 from which the bounds (4.50) are derived. Considering the actuators' velocities and the variations of ρ_3 in the lower subplot of Figure 4.9, in comparison with the evolutions of ρ_1 and ρ_2, the study of system variations through a workspace analysis is justified. A second observation can be made on the large system variations for each task by considering the evolution of σ(M^-1 K̄_η), whose maximum (red full line) and minimum (blue dashed line) values are visualized in the upper subplot of Figure 4.9. These variations motivate the use of bounds incorporated as a single constraint, instead of considering multiple equilibrium points for each task, which would generate numerous constraints and lead to an unsolvable optimization problem. Moreover, according to the considered scenario, some successive tasks may exhibit similar workspaces and mass distributions. A reduced set of tasks can then be used in the design procedure. In our context, further simplifications could be achieved by eliminating short tasks or those whose impact on the system variations remains small enough.
Similarly, the second set, T_3T, gathers six tasks for which similar motions move 3 tiles, and the last set, T_4T, gathers three tasks where 4 tiles are moved. Choosing similar tasks throughout the scenario aims at evaluating the robustness to system variations by comparing the control tracking performances. Indeed, first by comparing the displacement of a same mass at the end-effector but with different overall mass distributions, and secondly by comparing performances for a significantly different mass at the end-effector and a different overall mass distribution, one will be able to conclude on the robustness of the control. The system mass or inertia variations can be evaluated with the relaxation term ρ_1 from (4.50), whose evolution for each task of the scenario is plotted in the upper subplot of Figure 4.9; likewise, the values for the considered simulated tasks are given in Table 4.4. Furthermore, one will note that the chosen tasks present significant variations with respect to the complete deployment.

Regarding modeling errors, which impact the evaluation of the inverse kinematics/dynamics, a degraded space manipulator model is considered. From the nominal model of the PULSAR telescope, the following levels of uncertainties are considered: ±3% on the inertia I_S_i, +10% on the flexible parameters and ±10% on the position of the CoM. Likewise, a noise bias corresponding to a ±10% error is included on the measure of ω_0, such that the state observer efficiency can be tested. In the studied application, manipulator trajectories and velocities are provided by singularity-free path-planning strategies also taking into account the actuators' capacities (https://www.h2020-pulsar.eu). The desired reaction-wheel velocities are deduced from those of the manipulator joints assuming kinetic momentum conservation and by imposing a fixed base, such that:

q̇_rd = −H_0r^+ (H_0m q̇_md + H_0 t_0) = −H_0r^+ H_0m q̇_md (4.51)

with an initial momentum considered null. The choice of imposing a fixed spacecraft attitude is justified by the reduction of the impact of the indirect coupling effects between the manipulator and the flexible appendages, as developed in the previous chapter.

The validation process is divided into two main discussions. First, the control performances are evaluated by considering, for each set T_2T, T_3T and T_4T, the tracking error of both the manipulator's joints and the reaction-wheels' velocities. This aims at concluding on the robustness to system variations, modeling and measurement errors. Secondly, with a decomposition of the computed torque (4.21), the contribution of including the estimations of the un-measured states and compensating for the modeling uncertainties is discussed.

Control robustness: For each set T_2T, T_3T and T_4T, similar motions are executed by the manipulator moving respectively 2, 3 and 4 tiles. To first illustrate the precision of the manipulator's joint velocity control, Figures 4.10, 4.11 and 4.12 respectively represent the measured joint velocities for T_2T, T_3T and T_4T. The dotted lines correspond to the desired joint velocities, q̇_m_d, and the full lines represent the velocity measurements. Considering each of the three Figures 4.10, 4.11 and 4.12 individually, one will first note that the tasks t_i of each set present similar control performances (the full lines are overlapping). Obtaining the same closed-loop response for a given set allows one to conclude on the robustness to system variations.
As the tasks are sorted into different sets, for each set the same number of tiles and similar motions are considered. Within a given set, for each task the SMS exhibits different inertia properties due to the overall mass distribution. To emphasize the system variations, one can analyze the difference between the reaction-wheel velocities ensuring a fixed attitude of the base for each task of a given set. For that purpose, Figures 4.13, 4.14 and 4.15 respectively represent the six reaction-wheel velocities as a function of the tasks in T_2T, T_3T and T_4T. Considering for instance Figure 4.13, one will note that for each q_r_i, different desired velocities (in dotted lines) are required for each task t_i ∈ T_2T. This reflects the fact that the mass, initially near the base, is moved to the mirror position through the scenario, which corresponds to modifications of the system's inertia matrix. As the reaction-wheels' inertia remains constant, the velocities vary to satisfy (4.51). Similar observations can then be made for Figures 4.14 and 4.15.

Then, to guarantee that the system provides satisfying performances, one will consider the relative tracking errors defined for each joint as in (3.67) (i.e. ε̇_tc(q̇_i) = (q̇_d_i − q̇_i)/max(q̇_i)). As the measured velocities of the manipulator's joints are similar, the relative errors for all joints are plotted in Figure 4.16 for each task t_i ∈ T_2T on the left, for t_i ∈ T_3T in the middle and for t_i ∈ T_4T on the right. One will note that this signal reaches its largest amplitudes in the transient state. However, the overall relative error amplitudes are similar to those obtained in the previous chapter (see Figure 3.13). Considering the reaction-wheels, a similar relative tracking error is plotted for each set: T_2T, visualized in the left subplot of Figure 4.17, T_3T in the middle subplot of Figure 4.17 and T_4T in the right subplot of Figure 4.17. Similarly to the manipulator, the relative error reaches its maximal values during the transient state, and these are equivalent to those obtained in the previous chapter (see Figure 3.14). These observations first highlight that the global control performances are satisfying with regard to the large system variations and to the ratio of mass moved by the manipulator. Secondly, they show a first limit of the proposed gains synthesis. As it is difficult to impose a closed-loop dynamics for each actuator, the synthesis only ensuring the convergence of the system, the transient states provide the worst performances. Moreover, no consideration of the control torque limitations is included in the synthesis, which allows large control gains responsible for the observed transient states.

Figure 4.16 - Illustration of the manipulator control performances; for each manipulator joint, the relative tracking error ε̇_tc(q̇_i) of each t_i ∈ T_2T on the left, of each t_i ∈ T_3T in the middle, and of each t_i ∈ T_4T on the right

Figure 4.17 - Illustration of the reaction-wheels control performances; for each reaction-wheel, the relative tracking error ε̇_tc(q̇_i) of each t_i ∈ T_2T on the left, of each t_i ∈ T_3T in the middle, and of each t_i ∈ T_4T on the right

Thus, from these observations one can conclude on the robustness to system variations and, in particular, emphasize the possibility of performing different tasks with significant masses to move. Moreover, it highlights the efficiency of the gains synthesis, as it was made for motions with end-effector loads ranging from nothing to 220 kg (corresponding to 4 tiles).
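For reference, the relative error curves of Figures 4.16 and 4.17 follow directly from the logged rate histories; a short sketch (illustrative names, MATLAB implicit expansion assumed) is:

% Relative tracking error per joint, following the normalization of (3.67),
% from logged rate histories dq_des, dq_meas of size n_joints x n_samples.
eps_rel = (dq_des - dq_meas) ./ max(abs(dq_meas), [], 2);  % peak-rate normalization
plot(t_grid, eps_rel);   % one curve per joint, overlaid task by task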
Interest of the control scheme: In order to illustrate the usefulness of the disturbance and state observers to properly linearize and decouple the system, the detailed components of the control torques (4.21) are visualized for the tasks in T_2T in Figures 4.18 to 4.26; the similarity between the tasks can be interpreted as an identical estimation quality that persists through the scenario progression. Then, to evaluate the influence of each torque on the decoupling quality, one will simultaneously consider all the simulated tasks. This allows different behaviors to be compared as a function of the system variations. Thus, the following discussions can be developed from the overall comparisons.

• First, the estimations provide useful information on the system linearization and decoupling.

• Secondly, as developed in the previous chapter, maintaining the base at a fixed attitude reduces the impact of the internal disturbances, and thus the amplitudes of the disturbance torques are minimized.

• A last observation can be made on the influence of the measurement errors on the base angular velocities. As a null base angular velocity is imposed with the use of the reaction-wheels, one will note that, for the considered sensors, the noise has little effect on the disturbance torques, as their amplitudes remain lower than those of the linearization torques.

To put the emphasis on the generalized disturbance torque amplitudes, one can consider the evolution of the system's internal disturbances on one side and then discuss the quality of the estimation. First, considering the evolution of the flexible mode velocities η̇ visualized in Figures 4.27, 4.28 and 4.29 respectively for the tasks in T_2T, T_3T and T_4T, one can observe small and decreasing vibration amplitudes. This can be attributed to the reaction-wheel control that successfully maintains the base at a null angular velocity. Moreover, similarly to the gains synthesis developed in section 3.3.3, for the one proposed in section 4.2.5 the NDI does not require a perfect estimation to adequately control the actuators. In addition to the difficulty of estimating the fastest flexible modes, the NDO also presents difficulties in properly estimating the disturbance torques, as modeling errors are not quantifiable. For this reason, the synthesis offers flexibility on the tolerable amplitudes of the observer error signals ε_e (4.17) and ε_d (4.19). It is sufficient for these signals to be bounded to ensure the stability and convergence of the closed-loop system.

Chapter conclusions

In this chapter, a novel robust joint-space control scheme has been introduced. Inspired by the control method developed in chapter 3, robust criteria have been considered to improve the joint-space control scheme. The focus has been put on modeling uncertainties, measurement errors, system variations and the execution of various tasks. The NDI structure has been conserved, with the inclusion of un-measured states thanks to an ESO. Additionally, the modeling and measurement errors have been tackled by expressing the equations of motion of a rotation-free-floating SMS with an external disturbance torque. With objectives similar to those motivating the use of an ESO, an NDO is designed to include an estimation of the disturbance torques induced by modeling and measurement errors. Once again, the modeling work detailed in chapter 2 has been advantageously used in the design of the observers and the NDI.
After developing the overall design, a second contribution of this chapter is the common synthesis of the control gains and of the two observers' gains, to satisfy precise control and disturbance rejection. The synthesis is based on a Lyapunov stability analysis and is performed through an LMI resolution. An advantage of this synthesis is that it is developed for different workspaces, without the restrictive velocity considerations observed in chapter 3. Furthermore, it is suitable for a wide range of tasks, in the sense that significant mass variations are considered both on the base and at the end-effector. The proposed method has been validated on a realistic on-orbit deployment scenario involving significant system variations, allowing to assess the robustness and interest of the control method. Therefore, this chapter provides a joint-space control scheme, together with a gains synthesis, that addresses current OOS problematics. Extensive simulations have provided conclusive results. The significant modeling uncertainties have been successfully handled thanks to the NDO. Some limits have been observed regarding the difficulty of imposing closed-loop dynamics through the LMI resolution. However, the overall control performance shows the effectiveness of the method and the benefits of a common base and manipulator control in OOS operations.
Conclusion and perspectives
Contribution of the thesis: Robotic systems have a key role to play in space exploitation and exploration. An SMS presents multiple benefits in OOS applications, such as the versatility to perform various tasks. Space manipulators provide the safest and most reliable solutions to deal with the increasing number of debris, with structures too large to be self-deployed, or with satellite life-extension missions, as illustrated by the Canadarms and their use to build the MIR space station and the ISS, their assistance to astronauts during extra-vehicular activities for the various Hubble Space Telescope servicing missions, or their recent use to inspect parts of the ISS. The complexity of these operations leads to the necessity of improving the autonomous control of such systems for future space missions. To that end, this thesis aims at providing autonomous control methods for OOS applications. Over the last thirty years, notable interest and advancements have improved space robotics technologies, but challenges remain to address the new requirements that robotics is facing. From the literature review developed in chapter 1, the recent requirements for SMS are identified. The main improvements identified are, first, the need to handle system couplings in order to perform precise manipulator control. Then, for most spacecraft, the presence of flexible appendages, such as solar arrays, sun-shields or antennas, has led to new challenges in the improvement of control technologies; high-accuracy modeling and simulation tools of a rigid-flexible SMS are required to design and validate new control strategies. Moreover, dealing with variations of the system's physical properties has raised interest not only for ADR, in which the manipulator is required to deal with external and additional forces/torques, but also to perform multiple tasks such as in-space assembly or deployment. In addition, the use of kinetic momentum exchange devices to actively control the SMS base has raised interest to increase the range of feasible SMS motions.
In this thesis, we aimed at developing modeling tools and control algorithms suitable for increasing the autonomy of SMS. In order to reach the high control precision required for on-orbit manipulator operations, a generic approach to derive an accurate nonlinear model of the motion of a free-floating SMS with flexible appendages has been developed. This contribution is detailed in chapter 2, together with its implementation as a Matlab toolbox. The modeling method, based on an adapted DH formalism and a Lagrangian approach, allows recursively deriving the kinematic and dynamic models of a multi-body system with multiple kinematic chains and flexible elements attached to the common base. The relevance of this approach and the precision of the tools have been illustrated through extensive studies with (and without) flexible appendages.
As mentioned in the literature review, few studies consider the potential contribution of the kinetic momentum exchange actuators available on the satellite base. Thus, in chapter 3, kinetic indices have first been introduced to assess the interest of controlling such base actuators while the manipulator performs its tasks. They allow developing quantitative analyses of the couplings existing between the manipulator's end-effector and the base motions. Since these couplings are the principal difficulty in developing autonomous control strategies for an SMS, pre-designing an SMS with the help of these indices could benefit future control applications. Moreover, the system analysis detailed in chapter 3, based on these indices and performed with the tools developed in chapter 2, led to the conclusion that a common base and manipulator control is more efficient to achieve the expected tasks. As SMS motions can induce vibrations of the flexible appendages, developing a suitable control scheme ensuring system stability was another challenge of this thesis. This point is handled in the second part of chapter 3 with the design of steering laws and joint controllers for the base and the manipulator. The proposed control framework is based on an NDI in which an ESO is introduced to perform an accurate system linearization. Its design is made possible by the non-trivial modeling developed in chapter 2. In addition to the control scheme, the second contribution is the synthesis of both control and observer gains, based on a Lyapunov stability proof and obtained from an LMI resolution. The simulations of the resulting controllers highlight the interest of reaction-wheels to efficiently reject undesired flexible vibrations while the manipulator operates. However, in order to apply the proposed solutions to real systems, the robustness of the designed controller must first be considered in a realistic scenario, especially as the efficiency of an NDI controller relies on the accuracy of its design model. Therefore, the control framework has been extended in chapter 4 to deal with modeling uncertainties, which are represented by a disturbance torque added to the overall SMS dynamic model. With the ambition of improving the decoupling quality and rejecting the modeling and measurement errors, an NDO has been designed to include a disturbance torque estimation in the NDI, in addition to the flexibility estimation performed by the ESO. Finally, the relevance of SMS resides in their ability to perform different complex tasks within a same mission.
Thus, in chapter 4, the gains synthesis process has also been adapted to guarantee closed-loop stability for a set of tasks. This synthesis is likewise based on a Lyapunov stability constraint leading to an LMI resolution. It includes robustness to system variations, such that the SMS can perform different tasks during which its physical properties vary. Therefore, the contribution of chapter 4 is an overall control framework and design method to autonomously control an SMS facing the current and future challenges of space exploitation and exploration. Extensive tests were performed in a realistic environment on an on-orbit space telescope assembly use-case, highlighting the efficiency of the proposed method.
Perspectives:
• After developing a rigid-flexible model of an SMS with a rigid manipulator, one could consider a flexible manipulator and extend the Lagrangian approach developed in chapter 2, such that junctions between a flexible element and another one (rigid or not) could have one DoF. Flexible manipulators provide a means to absorb the contact when capturing a target and thus bring new solutions for ADR [Stolfi et al., A parametric analysis of a controlled deployable space manipulator for capturing a non-cooperative flexible satellite].
• Capitalizing on the robust joint-space control established in chapter 4, one could initiate task-space control strategies. With the internal disturbances rejected, the manipulator's end-effector and the base could be controlled under assumptions on the behavior of the inner joint-space control loop. Such a method would benefit capture scenarios as well as improve the precision of manipulator operations [Giordano et al., Compliant floating-base control of space robots]; [Papadopoulos et al., Robotic manipulation and capture in space: A survey].
• Some applications consider multi-manipulator robotic systems to benefit from the possibility of stabilizing the system with the unused manipulators while one manipulator operates [Stolfi et al., A two-arm flexible space manipulator system for post-grasping manipulation operations of a passive target object]. Further analysis of the contributions of the manipulators' joints in the presence of reaction-wheels could be developed. Likewise, the consideration of closed kinematic chains could provide interesting preliminary information to establish detumbling strategies.
• Concerning robust criteria, a drawback of the LMI resolution is the difficulty of taking the actuator dynamics into account and of adapting the closed-loop performances to a tight requirement list. For implementation purposes, further improvements of the approach could be investigated or compared with other synthesis methods, such as an H∞ synthesis. Moreover, another interesting robustness criterion would be to deal with external forces/torques applied to the end-effector, in order to initiate the control for capture applications.
Stacking the six rows associated with $x_{0_l} \in \{\theta_{0x}, \theta_{0y}, \theta_{0z}, r_{0x}, r_{0y}, r_{0z}\}$ ($l \in [1,6]$):

$$d_0 = -\frac{1}{2}\left[\, \dot t_0^T \frac{\partial M_0}{\partial x_{0_l}} + \dot q\, \frac{\partial M_{0q}^T}{\partial x_{0_l}} + \dot\eta\, \frac{\partial M_{0\eta}^T}{\partial x_{0_l}} \,\right]_{l=1,\dots,6} \quad (A.2a)$$

$$d_{0q} = -\frac{1}{2}\left[\, \dot q^T \frac{\partial M_q}{\partial x_{0_l}} + \dot t_0^T \frac{\partial M_{0q}}{\partial x_{0_l}} \,\right]_{l=1,\dots,6} \quad (A.2b)$$

$$d_{0\eta} = -\frac{1}{2}\left[\, \dot t_0^T \frac{\partial L_{\eta p}^T}{\partial x_{0_l}} \,\right]_{l=1,\dots,6} \quad (A.2c)$$

With the expression of the inertia matrices (2.67), the derivatives with respect to $x_{0_l}$ are given by:

$$\frac{\partial M_0}{\partial x_{0_l}} = \begin{bmatrix} \frac{\partial I_0}{\partial x_{0_l}} & 0_3 \\ 0_3 & 0_3 \end{bmatrix} \quad (A.3a)$$

$$\frac{\partial M_q}{\partial x_{0_l}} = \sum_{i=1}^{n_q} \frac{\partial J_{m_i}^T}{\partial x_{0_l}} \begin{bmatrix} I_i & 0_3 \\ 0_3 & m_i I_3 \end{bmatrix} J_{m_i} + \sum_{i=1}^{n_q} J_{m_i}^T \begin{bmatrix} \frac{\partial I_i}{\partial x_{0_l}} & 0_3 \\ 0_3 & 0_3 \end{bmatrix} J_{m_i} + \sum_{i=1}^{n_q} J_{m_i}^T \begin{bmatrix} I_i & 0_3 \\ 0_3 & m_i I_3 \end{bmatrix} \frac{\partial J_{m_i}}{\partial x_{0_l}} \quad (A.3b)$$

and the derivative of $M_{0q}$ (A.3c) follows from the same product rule.

Inertia derivatives expressions: The inertia matrices $I_0$ and $I_i$ are expressed in the inertial frame $R_{ine}$ as given by (2.10). With (2.8) and (2.9), one can detail the recursive expressions as:

$$I_0 = R_{A_0} I_{S_0} R_{A_0}^T \quad (A.4a)$$
$$I_i = R_{A_0} \Big(\prod_{j=1}^{i} R_{A_j}\Big) I_{S_i} \Big(R_{A_0} \prod_{j=1}^{i} R_{A_j}\Big)^T \quad (A.4b)$$

This allows expressing the following derivatives:

$$\frac{\partial I_0}{\partial x_{0_i}} = \frac{\partial R_{A_0}}{\partial x_{0_i}} I_{S_0} R_{A_0}^T + R_{A_0} I_{S_0} \frac{\partial R_{A_0}^T}{\partial x_{0_i}} \quad (A.5a)$$
$$\frac{\partial I_i}{\partial x_{0_i}} = \frac{\partial R_{A_0}}{\partial x_{0_i}} \Big(\prod_{j=1}^{i} R_{A_j}\Big) I_{S_i} \Big(\prod_{j=1}^{i} R_{A_j}\Big)^T R_{A_0}^T + R_{A_0} \Big(\prod_{j=1}^{i} R_{A_j}\Big) I_{S_i} \Big(\prod_{j=1}^{i} R_{A_j}\Big)^T \frac{\partial R_{A_0}^T}{\partial x_{0_i}} \quad (A.5b)$$

Therefore, the inertia derivative with respect to a base state reduces to the derivative of the rotation matrix $R_{A_0}$. This leads to $\partial R_{A_0}/\partial r_{0x} = \partial R_{A_0}/\partial r_{0y} = \partial R_{A_0}/\partial r_{0z} = 0_3$ and, with the Euler angle rotation order given in (2.7), the derivatives with respect to the base attitude are:

$$\frac{\partial R_{A_0}}{\partial \theta_{0x}} = \begin{bmatrix} 0 & c(\theta_{0z})s(\theta_{0y})c(\theta_{0x}) + s(\theta_{0z})s(\theta_{0x}) & -c(\theta_{0z})s(\theta_{0y})s(\theta_{0x}) + s(\theta_{0z})c(\theta_{0x}) \\ 0 & s(\theta_{0z})s(\theta_{0y})c(\theta_{0x}) - c(\theta_{0z})s(\theta_{0x}) & -s(\theta_{0z})s(\theta_{0y})s(\theta_{0x}) - c(\theta_{0z})c(\theta_{0x}) \\ 0 & c(\theta_{0y})c(\theta_{0x}) & -c(\theta_{0y})s(\theta_{0x}) \end{bmatrix}$$

$$\frac{\partial R_{A_0}}{\partial \theta_{0y}} = \begin{bmatrix} -c(\theta_{0z})s(\theta_{0y}) & c(\theta_{0z})c(\theta_{0y})s(\theta_{0x}) & c(\theta_{0z})c(\theta_{0y})c(\theta_{0x}) \\ -s(\theta_{0z})s(\theta_{0y}) & s(\theta_{0z})c(\theta_{0y})s(\theta_{0x}) & s(\theta_{0z})c(\theta_{0y})c(\theta_{0x}) \\ -c(\theta_{0y}) & -s(\theta_{0y})s(\theta_{0x}) & -s(\theta_{0y})c(\theta_{0x}) \end{bmatrix}$$

$$\frac{\partial R_{A_0}}{\partial \theta_{0z}} = \begin{bmatrix} -s(\theta_{0z})c(\theta_{0y}) & -s(\theta_{0z})s(\theta_{0y})s(\theta_{0x}) - c(\theta_{0z})c(\theta_{0x}) & -s(\theta_{0z})s(\theta_{0y})c(\theta_{0x}) + c(\theta_{0z})s(\theta_{0x}) \\ c(\theta_{0z})c(\theta_{0y}) & c(\theta_{0z})s(\theta_{0y})s(\theta_{0x}) - s(\theta_{0z})c(\theta_{0x}) & c(\theta_{0z})s(\theta_{0y})c(\theta_{0x}) + s(\theta_{0z})s(\theta_{0x}) \\ 0 & 0 & 0 \end{bmatrix}$$

with $c(x)$ and $s(x)$ respectively corresponding to $\cos(x)$ and $\sin(x)$. The derivatives of $B_{i0}$ with respect to the base position are:

$$\frac{\partial B_{i0}}{\partial r_{0x}} = \begin{bmatrix} 0_3 & 0_3 \\ \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix} & 0_3 \end{bmatrix} \qquad \frac{\partial B_{i0}}{\partial r_{0y}} = \begin{bmatrix} 0_3 & 0_3 \\ \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix} & 0_3 \end{bmatrix} \qquad \frac{\partial B_{i0}}{\partial r_{0z}} = \begin{bmatrix} 0_3 & 0_3 \\ \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} & 0_3 \end{bmatrix}$$

$J_{m_i}$ derivative expression: With (2.18), the expression of the Jacobian matrix $J_{m_i}$ is given $\forall i \in [1, n_q]$ by:

$$J_{m_i} = \begin{bmatrix} B_{i1} p_{m_1} & \dots & B_{i(i-1)} p_{m_{i-1}} & p_{m_i} & 0_{6\times(n_q-i)} \end{bmatrix} \quad (A.8)$$

As only $p_{m_l}$ ($l \in [1, i]$) depends on the base state, $\partial B_{il}/\partial x_{0_i} = 0_6$ and thus:

$$\frac{\partial J_{m_i}}{\partial x_{0_i}} = \begin{bmatrix} B_{i1} \frac{\partial p_{m_1}}{\partial x_{0_i}} & \dots & B_{i(i-1)} \frac{\partial p_{m_{i-1}}}{\partial x_{0_i}} & \frac{\partial p_{m_i}}{\partial x_{0_i}} & 0_{6\times(n_q-i)} \end{bmatrix} \quad (A.9)$$

Then two cases are considered: either $S_l$ and $S_i$ are in the same kinematic chain, or not.
If they are, then:

$$\frac{\partial p_{m_l}}{\partial x_{0_i}} = \begin{bmatrix} \frac{\partial k_l}{\partial x_{0_i}} \\ \frac{\partial}{\partial x_{0_i}}\big(k_l^{\times}(r_{S_l} - r_{A_l})\big) \end{bmatrix} \quad (A.10)$$

and if not:

$$\frac{\partial p_{m_l}}{\partial x_{0_i}} = \begin{bmatrix} 0_{3\times1} \\ \frac{\partial k_l}{\partial x_{0_i}} \end{bmatrix} \quad (A.11)$$

With the recursive expression of $k_l$ (2.9), its derivative is given by:

$$\frac{\partial k_l}{\partial x_{0_i}} = \frac{\partial R_{A_0}}{\partial x_{0_i}} \Big(\prod_{j=1}^{l} R_{A_j}\Big) \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \quad (A.12)$$

with the detail of $\partial R_{A_0}/\partial x_{0_i}$ given in the previous section.

A.2 Joint-derivative of inertia matrices
In this section the following terms are detailed:

$$d_{q0} = -\frac{1}{2} \frac{\partial}{\partial q}\big( \dot t_0^T M_0 + \dot q\, M_{0q}^T \big) \qquad d_q = -\frac{1}{2} \frac{\partial}{\partial q}\big( \dot q^T M_q + \dot t_0^T M_{0q} \big)$$

In order to maintain generality, the transformation matrix $T_{A_i,R_{ine}}$ is differentiated with the recursive relation (2.8), rewritten with (2.3) $\forall l, i \in [1, n_q]$ as:

$$T_{A_i,R_{ine}} = T_{A_{l-1},R_{ine}} \begin{bmatrix} c(\theta_l) & -c(\alpha_l)s(\theta_l) & s(\alpha_l)s(\theta_l) & a_l c(\theta_l) \\ s(\theta_l) & c(\theta_l)c(\alpha_l) & -c(\theta_l)s(\alpha_l) & a_l s(\theta_l) \\ 0 & s(\alpha_l) & c(\alpha_l) & d_l \\ 0 & 0 & 0 & 1 \end{bmatrix} \prod_{j=l+1}^{i} T_{A_{j-1},A_j} \quad (A.16)$$

According to the nature of the joints, the differentiation of $T_{A_i,R_{ine}}$ with respect to $q_l$, $\forall l \in [1, i]$ and $\forall i \in [1, n_q]$, is given by:
• If the l-th joint is prismatic, then:

$$\frac{\partial T_{A_i,R_{ine}}}{\partial q_l} = T_{A_{l-1},R_{ine}} \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix} \prod_{j=l+1}^{i} T_{A_{j-1},A_j} \quad (A.17)$$

• If the l-th joint is revolute, then $\forall l \in [1, i]$:

$$\frac{\partial T_{A_i,R_{ine}}}{\partial q_l} = T_{A_{l-1},R_{ine}} \begin{bmatrix} -s(q_l) & -c(\alpha_l)c(q_l) & s(\alpha_l)c(q_l) & -a_l s(q_l) \\ c(q_l) & -s(q_l)c(\alpha_l) & s(q_l)s(\alpha_l) & a_l c(q_l) \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \prod_{j=l+1}^{i} T_{A_{j-1},A_j} \quad (A.18)$$

where:

$$\frac{\partial T_{A_i,S_i}}{\partial q_l} = \frac{\partial}{\partial q_l} \begin{bmatrix} R_{A_i} & (r_{S_i} - r_{A_i}) \\ 0_{1\times3} & 1 \end{bmatrix} = \begin{bmatrix} \frac{\partial R_{A_i}}{\partial q_l} & \frac{\partial (r_{S_i} - r_{A_i})}{\partial q_l} \\ 0_{1\times3} & 0 \end{bmatrix} \quad (A.22)$$

Without detailing, the differentiation of the matrix $T_{A_i,S_i}$ depends on the nature of the joints and can be expressed as:

$$\frac{\partial T_{S_i,R_{ine}}}{\partial q_l} = \begin{bmatrix} \frac{\partial R_{S_i,R_{ine}}}{\partial q_l} & \frac{\partial r_{S_i}}{\partial q_l} \\ 0_{1\times3} & 0 \end{bmatrix} \quad (A.23)$$

Inertia derivatives expressions: With the recursive expression of the inertias expressed in $R_{ine}$ (A.4), the derivatives with respect to $q_l$, $\forall l \in [1, n_q]$, are given by:

$$\frac{\partial I_i}{\partial q_l} = R_{A_0} \frac{\partial}{\partial q_l}\Big(\prod_{j=1}^{i} R_{A_j}\Big) I_{S_i} \Big(\prod_{j=1}^{i} R_{A_j}\Big)^T R_{A_0}^T + R_{A_0} \Big(\prod_{j=1}^{i} R_{A_j}\Big) I_{S_i} \frac{\partial}{\partial q_l}\Big(\Big(\prod_{j=1}^{i} R_{A_j}\Big)^T\Big) R_{A_0}^T \quad (A.24)$$

with the differentiation of $\prod_{j=1}^{i} R_{A_j}$ previously detailed to obtain (A.20).

$B_{i0}$ derivative expression:

$$\frac{\partial B_{i0}}{\partial q_l} = \begin{bmatrix} 0_3 & 0_3 \\ \Big(\frac{\partial (r_{S_i} - r_{S_0})}{\partial q_l}\Big)^{\times} & 0_{3\times1} \end{bmatrix} = \begin{bmatrix} 0_3 & 0_3 \\ \Big(\frac{\partial r_{S_i}}{\partial q_l}\Big)^{\times} & 0_{3\times1} \end{bmatrix} \quad (A.25)$$

with the differentiation of $r_{S_i}$ previously detailed to obtain (A.23).
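The revolute case (A.18) amounts to differentiating the elementary DH transform of (A.16) with respect to its joint angle. The following Python/numpy sketch, purely illustrative and with arbitrary DH parameters, verifies the closed-form derivative against a central finite difference.

import numpy as np

def dh_T(theta, alpha, a, d):
    """Elementary DH homogeneous transform as in (A.16)."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -ca*st,  sa*st, a*ct],
                     [st,  ct*ca, -ct*sa, a*st],
                     [0.,  sa,     ca,    d  ],
                     [0.,  0.,     0.,    1. ]])

def dh_dT_dtheta(theta, alpha, a, d):
    """Closed-form derivative of the elementary transform w.r.t. a revolute joint."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[-st, -ca*ct, sa*ct, -a*st],
                     [ ct, -st*ca, st*sa,  a*ct],
                     [0., 0., 0., 0.],
                     [0., 0., 0., 0.]])

theta, alpha, a, d = 0.4, 0.1, 0.25, 0.05   # arbitrary test values
h = 1e-6
fd = (dh_T(theta + h, alpha, a, d) - dh_T(theta - h, alpha, a, d)) / (2*h)
assert np.allclose(fd, dh_dT_dtheta(theta, alpha, a, d), atol=1e-6)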
$J_{m_i}$ derivatives expression: The Jacobian matrix $J_{m_i}$ (A.8) is differentiated $\forall l, i \in [1, n_q]$ column-wise as:

$$\frac{\partial J_{m_i}}{\partial q_l} = \begin{bmatrix} \frac{\partial (B_{i1} p_{m_1})}{\partial q_l} & \dots & \frac{\partial (B_{i(i-1)} p_{m_{i-1}})}{\partial q_l} & \frac{\partial p_{m_i}}{\partial q_l} & 0_{6\times(n_q-i)} \end{bmatrix} \quad (A.26)$$

where, with (2.9):

$$\frac{\partial k_k}{\partial q_l} = \frac{\partial R_{A_k,R_{ine}}}{\partial q_l} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \quad (A.27)$$

According to the nature of the joint $A_k$, with $\forall k \in [1, i]$:
• If $A_k$ is prismatic:

$$\frac{\partial (B_{ik} p_{m_k})}{\partial q_l} = \begin{bmatrix} 0_{3\times1} \\ \frac{\partial k_k}{\partial q_l} \end{bmatrix} \quad (A.28)$$

• If $A_k$ is revolute:

$$\frac{\partial (B_{ik} p_{m_k})}{\partial q_l} = \begin{bmatrix} \frac{\partial k_k}{\partial q_l} \\ \Big(\frac{\partial (r_{S_i} - r_{S_k})}{\partial q_l}\Big)^{\times} k_k + (r_{S_i} - r_{S_k})^{\times} \frac{\partial k_k}{\partial q_l} + \frac{\partial k_k^{\times}}{\partial q_l}(r_{S_i} - r_{A_k}) + k_k^{\times} \frac{\partial (r_{S_i} - r_{A_k})}{\partial q_l} \end{bmatrix} \quad (A.29)$$

<!--Joint to Payload -->
<joint name="Spacecraft_Payload" type="fixed">
  <parent link="Spacecraft"/>
  <child link="Payload"/>
  <origin xyz="1.1 0 0" rpy="0 0 0"/>
  <axis xyz="1 0 0"/>
</joint>
<!--Payload-->
<link name="Payload">
  <inertial>
    <origin xyz="1.45 0 0"/>
    <mass value="1440"/>
    <inertia ixx="2458" ixy="0" ixz="0" iyy="1499" iyz="0" izz="1499"/>
  </inertial>
  <visual>
    <origin xyz="0.75 0 0" rpy="0 -1.57075 0"/>
    <geometry>
      <cylinder length="1.5" radius="1.6"/>
    </geometry>
    <material name="Blue"/>
  </visual>
  <stiffness name="rigid"/>
</link>

For a flexible element, all the physical properties (dampings, stiffness and natural frequency) are defined, in addition to the matrix of the participation factors (L in the XML).

<!--joint between link_1 and link_2 | 1_T_2 -->
<joint name="arm_joint_2" type="revolute">
  <parent link="arm_link_1"/>
  <child link="arm_link_2"/>
  <origin rpy="-1.57079632679 0 0" xyz="0 0 0.112"/>
  <axis xyz="0 0 1"/>
</joint>
<link name="arm_link_2">
  <inertial>
    <origin rpy="0 0 0" xyz="0 -1.6810 0"/>
    <mass value="27"/>
    <inertia ixx="51.3" ixy="0" ixz="0" iyy="0.15" iyz="0" izz="51.3"/>
  </inertial>
  <visual>
    <origin rpy="1.57079632679 0 -1.57079632679" xyz="0 0 0"/>
    <geometry>
      <mesh filename="meshes_dISAS/arm/link_2.dae"/>
    </geometry>
    <material name="Grey"/>
  </visual>
  <stiffness name="rigid"/>
</link>
<!--joint between link_3 and link_4 | 3_T_4 -->
<joint name="arm_joint_4" type="revolute">
  <parent link="arm_link_3"/>
  <child link="arm_link_4"/>
  <origin rpy="1.57079632679 0 0" xyz="0 0.1125 0"/>
  <axis xyz="0 0 1"/>
</joint>
<link name="arm_link_4">
  <inertial>
    <origin rpy="0 0 0" xyz="-1.1910 0 -0.2"/>
    <mass value="18"/>
    <inertia ixx="0.14" ixy="0" ixz="0" iyy="15.7" iyz="0" izz="15.7"/>
  </inertial>
  <visual>
    <origin rpy="0 0 3.14159265359" xyz="0 0 0"/>
    <geometry>
      <mesh filename="meshes_dISAS/arm/link_4.dae"/>
    </geometry>
    <material name="Grey"/>
  </visual>
  <stiffness name="rigid"/>
</link>
<!--End-effector Mass -->
<joint name="dlsaffe_joint_ee_tuile" type="fixed">
  <parent link="dlsaffe_link_ee"/>
  <child link="dlsaffe_link_ee_tuile"/>
  <origin xyz="0 0 0" rpy="0 0 0"/>
</joint>
<link name="dlsaffe_link_ee_tuile">
  <inertial>
    <origin xyz="0 0 0" rpy="0 0 0"/>
    <mass value="150"/>
    <inertia ixx="6" ixy="0" ixz="0" iyy="6" iyz="0" izz="6"/>
  </inertial>
  <visual>
    <origin xyz="0 0 0" rpy="0 -1.570795*2 0"/>
    <geometry>
      <mesh filename="tile.stl"/>
    </geometry>
    <material name="Red"/>
  </visual>
  <stiffness name="rigid"/>
</link>
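Since <stiffness> is a toolbox-specific extension of the URDF format, reading it back requires a small custom parser. The sketch below, using Python's standard xml.etree.ElementTree, is a hypothetical reader (the toolbox itself is implemented in Matlab); the attribute names (pulse, damp, L) follow the flexible-element listings of this appendix, and expressions such as "0.16*2*pi" are assumed to be evaluable with pi bound.

import math
import xml.etree.ElementTree as ET

def parse_flexible_modes(urdf_path):
    """Collect the flexible modes declared in <stiffness name="flexible"> blocks."""
    modes = {}
    root = ET.parse(urdf_path).getroot()
    for link in root.iter("link"):
        stiff = link.find("stiffness")
        if stiff is None or stiff.get("name") != "flexible":
            continue
        link_modes = []
        for child in stiff:
            if not child.tag.startswith("mode_") or child.tag == "mode_number":
                continue
            link_modes.append({
                # eval is acceptable here only because the file is trusted input
                "pulse": eval(child.get("pulse"), {"pi": math.pi}),
                "damp": float(child.get("damp")),
                # participation factors (the matrix L of the modal model)
                "L": [float(v) for v in child.get("L").split()],
            })
        modes[link.get("name")] = link_modes
    return modes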
Figure 1 – Engineering Test Satellite VII "KIKU-7" (ETS-VII) (credit: JAXA)

1.1 Overview of Space Manipulator Systems through the years
  1.1.1 Background
  1.1.2 Active Debris Removal
  1.1.3 On-Orbit Servicing
  1.1.4 On-Orbit Deployment
  1.1.5 Next steps of improvements
1.2 Modeling of Space Manipulator System
  1.2.1 Control classification
  1.2.2 Modeling of a rigid Space Manipulator System
  1.2.3 Modeling of flexible appendages
1.3 Control methods of Space Manipulator Systems
  1.3.1 Early studies
  1.3.2 Control method to reduce manipulator coupling disturbances on the base
  1.3.3 Base disturbances control
1.4 Summary and identified area of improvements

Figure 1.1 – Astronaut Thomas Pesquet during a solar array installation spacewalk (credits: NASA)
Figure 1.5 – Robonaut 1A and 1B (credits: NASA)
Figure 1.7 – Replica of the first artificial Earth satellite, Sputnik 1, exposed in Washington, D.C. (credit: NASA)
Figure 1.9 – Illustration of the ETS-VII spacecraft (credit: NASDA)
Figure 1.11 – Illustration of the Orbital Express mission
Figure 1.14 – Distinction between on-orbit assembly and servicing paradigms [PJ18]
Figure 1.15 – Two-link manipulator and its end-effector VM [VD87]

Multiple manipulators have the particularity of possibly presenting both closed and open kinematic chains. The computation of the dynamics may change from the open to the closed kinematic chain, and reciprocally. With such considerations, Nakamura and Yamane [NY00] have proposed a general algorithm to compute the dynamics of both closed and open kinematic chains without switching modeling methods. In addition, Newton-Euler approaches easily allow connecting and closing kinematic chains, since such modeling methods express, for each body, the applied forces/torques [ACT08].

Figure 2.1 – Denavit-Hartenberg parameters
Figure 2.2 – Illustration of an SMS with n_r reaction-wheels and an n_m-DoFs manipulator
Figure 2.3 – Example of a flexible satellite
Figure 2.4 – Illustration of a rotation-free-floating SMS equipped with n_r reaction-wheels, an n_m-DoFs manipulator and n_p flexible appendages
Figure 2.5 – Description of the Simulink model
Figure 2.6 – Examples of SMS modeled with the proposed Matlab tools: on the left, an SMS composed of a 3-DoFs manipulator, 3 reaction-wheels and 2 flexible solar arrays; on the right, a modeled version of the PULSAR telescope use-case, https://www.h2020-pulsar.eu/
Figure 2.7 – Visual of the illustrative SMS used for tool validations
Figure 2.8 – Upper subplots represent the input torques, τ_r_in (left) and τ_m_in (right); the lower subplot represents the evolution of the torque error signal τ̃ (2.73)
Figure 2.9 – Upper figures represent the input torques τ_q_in (τ_r_in on the left and τ_m_in on the right); lower figures correspond to the control performances, with the desired velocities in the red full line and the measured velocities in the blue dashed line
Figure 2.10 – Left subplots represent the system's angular momentum and right subplots the linear momentum.
Sub-figures in the first row respectively represent the base momentum, in the second row the reaction-wheels momentum, in the third row the manipulator momentum, and in the fourth row the total momentum.

• Comparison of the singular values of linear inverse dynamic models of the SMS
• Comparison of the eigenvalues of linear inverse dynamic models of the SMS
This validation process is developed on the PULSAR space telescope (https://www.h2020-pulsar.eu), in which different flexible appendages are rigidly attached to a fixed element on the base, as visualized with the right spacecraft representation in Figure 2.6. A first reduced model of PULSAR, composed of the base, the payload and a sun-shield beam, was useful to assess the computation of the generalized inertia matrix, the convective inertia matrix and the generalized stiffness matrix [Cum+21]. The equality of the rigid behaviors (the DC gain of the two direct dynamic models of the global spacecraft), the correspondence of the eigenvalues of the inverse dynamic models of the global satellite system (base + payload + the 4 sun-shield beams + the 2 solar panels), and the superposition of the singular values of their frequency responses are means to assess the developments of the flexible functions (see Figures 2.11 and 2.12).

Figure 2.11 – Comparison between singular values of direct linear models: on the left, SDT models; on the right, models linearized from our tools [Cum+21]
Figure 2.12 – Comparison between eigenvalues of direct linear models: on the left, SDT models; on the right, models linearized from our tools, https://www.h2020-pulsar.eu
Figure 2.13 – Upper subplot represents the control torques τ_q_in; lower subplot illustrates the evolution of the torque error signal τ̃ (2.76)
Figure 2.14 – Listing of the system's works evolutions (2.82); left-side subplots represent, from top to bottom, the works derived from the inertial and convective forces and the external control torque powers; right-side subplots represent, from top to bottom, the works derived from the inertial, convective and potential forces
Figure 2.15 – Lost work (2.83) for the SMS motions obtained with the input control torques represented in Figure 2.13
Figure 2.16 – Flexible mode velocities η̇ for the input torques represented in Figure 2.13

3.1 … of improvements
  3.1.1 Challenges of developing steering laws for SMS with flexible appendages
  3.1.2 Illustration of a classical control approach limitations
3.2 Interest of a common base and manipulator control
  3.2.1 SMS workspace
  3.2.2 Distribution of system efforts
3.3 Joint-space control of an SMS with flexible appendages
  3.3.1 Observer structure
  3.3.2 Control law structure
  3.3.3 Simultaneous synthesis
3.4 Base and manipulator control of an SMS with flexible appendages
  3.4.1 Open-loop dynamics
  3.4.2 Closed-loop dynamics
3.5 Illustration of the proposed methods
  3.5.1 Joint control results
. . . . . . . . . . . . . . 104 3.5.2 Interest of a base-manipulator common control strategy . . . . . . . . . . 111 3.6 Chapter conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 Figure 3 . 2 - 32 Figure 3.2 -Illustration of the SMS desired motion. Left to right the initial, the midconfiguration and the final configurations Figure 3 3 Figure 3.3 -Manipulator's joints control performances, measured velocities (full-line) and desired velocities (dashed-line) Figure 3 3 Figure 3.5 -Dissipated work (2.83) during the complete motion with a fixed base attitude Figure 3 . 6 - 36 Figure 3.6 -Variation of the joint contribution to the base motion (K m ω 0 ) plotted in R sat for the initial reaction-wheels (left) and the second actuators configuration (right) Figure 3 3 Figure 3.7 -Evolution of the end-effector normalized manipulability indices, µ N ṙEE , plotted in R sat for the initial reaction-wheels (left) and the second actuators configuration (right) Figure 3 3 Figure 3.9 -Block diagram of the proposed joint control method Let us introduce the Lyapunov function: V(w) = z T P z z + T e P e (3.45) Figure 3 . 3 Figure 3.10 -Block diagram of the proposed base-manipulator control method Algorithm 1 1 Figure 3.11 -Illustration of the SMS task: the initial configuration (left), the mid-task configuration (middle) and the final configuration (right) Figure3.12 -Evolution of system variation for a given task, upper subplot represents the evolution of the relaxation term ρ 1 and the lower subplot represents ρ 2 's evolution Figure 3 Figure 3 Figure 3 333 Figure 3.13 -Manipulator's joints measured velocities and control performances; upper subplot is the measured manipulator's joint velocities and lower subplot is the relative tracking error Figure 3 Figure 3 33 Figure3.17 -Manipulator's joints measured velocities and control performances; upper subplot is the measured manipulator's joint velocities and lower subplot is the relative tracking error Sommaire 4 . 1 4. 2 412 Robust control challenges . . . . . . . . . . . . . . . . . . . . . . . . . . 118 4.1.1 Area of improvement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 4.1.2 Limits of the control introduced in chapter 3 . . . . . . . . . . . . . . . . 119 Control strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 4.2.1 Joint open-loop behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 4.2.2 State Observer model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 4.2.3 Nonlinear Disturbance Observer model . . . . . . . . . . . . . . . . . . . . 124 4.2.4 Closed-loop dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 4.2.5 Simultaneous gains synthesis . . . . . . . . . . . . . . . . . . . . . . . . . 127 4.3 Illustration of the proposed method . . . . . . . . . . . . . . . . . . . . 133 4.3.1 Study case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 4.3.2 Simulations results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 4.4 Chapter conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164 Figure 4 . 1 - 41 Figure 4.1 -Block diagram of the proposed joint space control method ) such that: ˙ e = (A e -L x C e ) e (4.25) With this last hypothesis, (4.19) and (4.24), a compact version of the closed-loop dynamics is obtained by introducing X = z any conditions z(0) = 0 and e 0 , d 0 ∈ { | T E ≤ 1}. 
Proof: As the matrix M is positive definite, one can choose the following Lyapunov function:
$$V(X) = z^T P_z z + \epsilon_e^T P_e\, \epsilon_e + \epsilon_d^T P_d^T M P_d\, \epsilon_d \ge 0$$
The time derivative of the proposed Lyapunov function is then given by:
$$\dot V(X) = X^T \begin{bmatrix} (P_z(A_z + B_z K))^s & P_z B_{z\epsilon} & P_z B_{zd} \\ * & (P_e(A_e - L_x C_e))^s & (L_d D_x^T)^T (P_d^T M P_d)^T \\ * & * & (-P_d^T M P_d)^s + P_d^T \dot M P_d \end{bmatrix} X$$
where the derivative of the equivalent inertia matrix can be written as a function of the convective matrix [Moh+13]:
$$\dot M = D + D^T \quad (4.32)$$
As proposed by Mohammadi et al. [Moh+13], a candidate nonlinear observer gain $L_d$ can be chosen by linking (4.16) to the auxiliary variable:
$$\frac{d}{dt} p = P_d^{-1} \ddot q \quad (4.34)$$
Simplifications can then be made in the lower-right blocks of the above matrix. With the variable changes $W_z = K Q_z$ and $W_e = P_e^{-1} L_x$, and pre- and post-multiplying the above matrix by $\mathrm{diag}(Q_z, I, Q_d, I) = \mathrm{diag}(P_z^{-1}, I, P_d^{-1}, I)$, one obtains the LMI constraint (4.27).

Dimensions XYZ (m): 0.2 × 1.42541 × 1.2344 (diameter 1.42541); Mass (kg): 22; CoM XYZ (m): [0 0 0]
The assembly site (10), around which the primary mirror will be assembled:
Dimensions XYZ (m): 0.2 × 1.42541 × 1.2344 (diameter 1.42541); Mass (kg): 28; CoM XYZ (m): [0 0 0]
The optics system (11), which only simulates the physics (mass and collisions), not the optical part:
Dimensions XYZ (m): 0.994646 × 0.781024 × 0.835381; Mass (kg): 425; CoM XYZ (m): [0 0 0]
Table 4.2 – Spacecraft sub-components, https://www.h2020-pulsar.eu

Figure 4.3 – Stowed view (left) and end-effector closeup (right) of the CAESAR arm, https://www.h2020-pulsar.eu
Figure 4.4 – Snapshot after the assembly of the upper inner ring, https://www.h2020-pulsar.eu
Figure 4.5 – Snapshots after the two first pre-assembly operations, https://www.h2020-pulsar.eu
Figure 4.6 – Snapshot after the lower inner ring assembly, https://www.h2020-pulsar.eu
Figure 4.7 – Snapshots of the four consecutive pre-assembly operations, https://www.h2020-pulsar.eu
Figure 4.8 shows the final view of the space telescope after completion of all the different robotic tasks.
Figure 4.8 – Final view of PULSAR, https://www.h2020-pulsar.eu
Figure 4.9 – Evolution of relaxation terms for each task of the PULSAR scenario
Figure 4.10 – Evolutions of all the manipulator's joints for each t_i ∈ T_2T; the dotted line represents the desired velocity q̇_m_d and the full lines correspond to each q̇_m for each task
Figure 4.13 – Evolutions of all the reaction-wheels' joints for each t_i ∈ T_2T; the dotted lines represent the desired velocities q̇_r_d and the full lines correspond to each q̇_r for each task

The components are shown for tasks in T_2T in Figures 4.18, 4.19 and 4.20, for tasks in T_3T in Figures 4.21, 4.22 and 4.23, and for tasks in T_4T in Figures 4.24, 4.25 and 4.26. The control torques (4.21) can be decomposed into three signals. The first one (M v), visualized in Figures 4.18, 4.21 and 4.24, corresponds to the feedback control torque. The second one (D̂ q̇) is a linearizing torque, visualized in Figures 4.19, 4.22 and 4.25. The last one (D_x x̂_e + τ̂_d) can be interpreted as a generalized disturbance torque; it appears in Figures 4.20, 4.23 and 4.26. Since the joint velocities and the mass to move differ significantly between the considered tasks, it can be noted that the disturbances impact the control performances differently.
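To reproduce this kind of analysis, the three components of the control torque (4.21) can be logged separately at each time step. The following Python sketch is purely illustrative: the matrices M, D̂ and D_x and the observer outputs (x̂_e, τ̂_d) are assumed to be provided by the model and observers described in this chapter.

import numpy as np

def decompose_control_torque(M, D_hat, Dx, v, q_dot, xe_hat, tau_d_hat):
    """Split the control torque of (4.21) into its three logged components:
    feedback (M v), linearizing (D_hat q_dot) and generalized disturbance
    (Dx xe_hat + tau_d_hat). All inputs are evaluated at the current step."""
    tau_fb = M @ v
    tau_lin = D_hat @ q_dot
    tau_dist = Dx @ xe_hat + tau_d_hat
    return tau_fb, tau_lin, tau_dist

def amplitude_ratio(a, b):
    """Peak-amplitude ratio used to compare two logged torque signals."""
    return np.max(np.abs(a)) / np.max(np.abs(b))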
Thus, an amplitude comparison between these signals allows evaluating their influence on the quality of the system linearization. First, considering each set of tasks T_2T, T_3T and T_4T individually, together with the different torques composing (4.21), one can note that similar control torques are developed for each task. More precisely, considering Figure 4.18, one notes similar torque amplitudes for each task in T_2T. The same observations can be made for the linearization torques with Figure 4.19 and for the disturbance torques represented in Figure 4.20. Similarly for T_3T and T_4T, the torques correspond for each task.

Figure 4.18 – Evolution of the feedback control torques (M v) for t_i ∈ T_2T
Figure 4.19 – Evolution of the linearizing torques (D̂ q̇) for t_i ∈ T_2T
Figure 4.20 – Evolution of the estimated disturbance torques (D_x x̂_e + τ̂_d) for T_2T
Figure 4.21 – Evolution of the feedback control torques (M v) for t_i ∈ T_3T
Figure 4.22 – Evolution of the linearizing torques (D̂ q̇) for t_i ∈ T_3T
Figure 4.27 – Evolution of the flexible mode velocities η̇ for T_2T

$B_{i0}$ derivative expression: Considering $r_{S_i}$ with $i \in [1, n_q]$, the matrix $B_{i0}$ (2.15) only depends on the position of the solid $S_i$ in the inertial frame. Thus, $\partial B_{i0}/\partial \theta_{0x} = \partial B_{i0}/\partial \theta_{0y} = \partial B_{i0}/\partial \theta_{0z} = 0_3$.
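Closed-form attitude derivatives such as the expressions of ∂R_A0/∂θ given in appendix A.1 are easy to get wrong, so a cheap numerical cross-check is useful. The sketch below (Python/numpy, illustrative only) builds a Z-Y-X Euler rotation — assumed here to match the convention of (2.7) — and compares the analytic partial with respect to θ_0x against a central finite difference.

import numpy as np

def Rx(a): c, s = np.cos(a), np.sin(a); return np.array([[1,0,0],[0,c,-s],[0,s,c]])
def Ry(a): c, s = np.cos(a), np.sin(a); return np.array([[c,0,s],[0,1,0],[-s,0,c]])
def Rz(a): c, s = np.cos(a), np.sin(a); return np.array([[c,-s,0],[s,c,0],[0,0,1]])

def R_A0(th):   # Z-Y-X Euler sequence assumed here
    return Rz(th[2]) @ Ry(th[1]) @ Rx(th[0])

def dR_dth_fd(th, k, h=1e-6):
    """Central finite difference of R_A0 with respect to theta_k."""
    tp, tm = th.copy(), th.copy()
    tp[k] += h; tm[k] -= h
    return (R_A0(tp) - R_A0(tm)) / (2*h)

th = np.array([0.3, -0.2, 0.7])
# Analytic partial w.r.t. theta_0x: only the Rx factor depends on it
dRx = lambda a: np.array([[0,0,0],[0,-np.sin(a),-np.cos(a)],[0,np.cos(a),-np.sin(a)]])
analytic = Rz(th[2]) @ Ry(th[1]) @ dRx(th[0])
assert np.allclose(analytic, dR_dth_fd(th, 0), atol=1e-6)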
Algorithm 2 – Design procedure for a multi-task on-orbit servicing scenario
  Define the set of tasks T, each task t_{1≤i≤n} with an associated workspace w_{1≤i≤n}
  Define the different LMI variables, Q_z, Q_d, P_e, W_z and W_e, as in the proposition of section 4.2.5.1
  for i ≤ n do
    Define an equilibrium point to evaluate the matrices A_e, D_x, D, B_zd and B_zε of the LMI constraint (4.27) from (4.1)
    Minimize the LMI variable γ > 0 such that Θ_{1≤i≤n} < 0
    Evaluate the relaxation terms ρ_1, ρ_2 and ρ_3 as in (4.50)
    Define Θ_i as in (4.41)
  end for
  Return K, L_x, P_d

Table 4.3 – LMI resolution times according to the size of T
size of T:               10    20    30    40    50    80   100   154
LMI resolution time (s): 35   130   122   186   245   426   550   833
Iterations:              27    53    33    38    39    38    39    43
(A schematic formulation of this multi-task resolution is sketched below, after the torque discussion.)

Concerning the feedback torques visualized in Figures 4.18, 4.21 and 4.24, this is explained by similar desired velocities and a common linear control gain, K. Moreover, for the same motions, the linearization torques, represented in Figures 4.19, 4.22 and 4.25, provide comparable amplitudes. Additionally, the generalized disturbance torques plotted in Figures 4.20, 4.23 and 4.26 also exhibit comparable amplitudes across tasks.
• Regarding the linearized torques and the feedback ones, a first observation on similar amplitudes can be made. For both sets T_2T and T_3T, the linearized torques reach about 10% of the feedback torque amplitudes, as illustrated by comparing Figures 4.18 with 4.19 and Figures 4.21 with 4.22. For T_4T, the amplitudes are equivalent (i.e. 0.2 N·m for the feedback torques and 0.1 N·m for the linearized ones), as visualized with Figures 4.24 and 4.25. This first note simply highlights the interest of developing an NDI control scheme.
• Comparing the linearized torques with the generalized disturbance ones allows concluding on the interest of developing observers for the purpose of improving the system linearization and decoupling. Respectively comparing Figures 4.19 with 4.20, Figures 4.22 with 4.23 and Figures 4.25 with 4.26, one notes that the disturbance torques reach approximately 10% of the linearization torques. From this observation, two remarks can be developed.
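Schematically, the multi-task procedure of Algorithm 2 stacks one LMI block per task over a shared set of decision variables and minimizes a common bound γ. The toy sketch below uses cvxpy with the classical variable change W = K Q (Q = P⁻¹); the helper make_system is hypothetical and stands in for the evaluation of the closed-loop matrices at each task's equilibrium point — it is not the full constraint (4.27).

import cvxpy as cp
import numpy as np

def make_system(delta):
    # Hypothetical per-task model: a perturbed (A, B) pair standing in for
    # the matrices evaluated at a task's equilibrium point.
    A = np.array([[0.0, 1.0], [2.0 + delta, -1.0]])
    B = np.array([[0.0], [1.0]])
    return A, B

n, m = 2, 1
Q = cp.Variable((n, n), symmetric=True)   # Q = P^{-1}
W = cp.Variable((m, n))                   # W = K Q (variable change)
gamma = cp.Variable()                     # common bound to minimize

constraints = [Q >> 1e-6 * np.eye(n), Q << np.eye(n)]
for delta in [0.0, 0.5, 1.0]:             # one LMI block Theta_i per task
    A, B = make_system(delta)
    theta = A @ Q + B @ W + (A @ Q + B @ W).T
    constraints.append(theta << gamma * np.eye(n))

prob = cp.Problem(cp.Minimize(gamma), constraints)
prob.solve(solver=cp.SCS)

if gamma.value < 0:                       # Theta_i < 0 holds for every task
    K = W.value @ np.linalg.inv(Q.value)  # single gain valid for all tasks
    print(K)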
If l > i:
$$\frac{\partial T_{S_i,R_{ine}}}{\partial q_l} = \frac{\partial T_{A_i,R_{ine}}}{\partial q_l} = 0_4 \quad (A.19)$$
Thus, $\forall l \in [1, n_q]$:
$$\frac{\partial T_{A_i,R_{ine}}}{\partial q_l} = \begin{bmatrix} \frac{\partial R_{A_i,R_{ine}}}{\partial q_l} & \frac{\partial r_{A_i}}{\partial q_l} \\ 0_{1\times3} & 0 \end{bmatrix} \quad (A.20)$$
With (2.8), the differentiation of $T_{S_i,R_{ine}}$ is given by:
$$\frac{\partial T_{S_i,R_{ine}}}{\partial q_l} = \frac{\partial T_{A_i,R_{ine}}}{\partial q_l}\, T_{A_i,S_i} + T_{A_i,R_{ine}}\, \frac{\partial T_{A_i,S_i}}{\partial q_l} \quad (A.21)$$

Figure 4.24 – Evolution of the feedback control torques (M v) for t_i ∈ T_4T
Figure 4.28 – Evolution of the flexible mode velocities η̇ for T_3T

Acknowledgments
I would like to start by giving thanks to my thesis supervisors, Mathieu Rognant and Yves Brière, with whom it was a great pleasure to work and learn from. The time you dedicated to me, your pedagogy and your kindness really helped me grow professionally. I really enjoyed working among you in this supportive and motivating environment. I would also like to give special thanks to David Henry and Benoit Clément for the time taken to review this thesis and provide relevant feedback to improve its content. Likewise, I would like to thank Paolo Gasbarri for presiding over the thesis jury, and Jurek Sasiadek, Hélène Evain and Vincent Dubanchet for their interest in this work and the productive discussions.

Detail of Convective matrix evaluation
In this appendix, state-derivative expressions are given to detail the convective terms in (2.69). To preserve a general formalism, the generalized joint-state variable $q \in \mathbb{R}^{n_q\times1}$, defined as $q = [q_1 \dots q_{n_q}]^T$, corresponds to the kinematic chain(s) DoFs, while $x_0 \in \mathbb{R}^{6\times1}$, defined as $x_0 = [\theta_{0x}\ \theta_{0y}\ \theta_{0z}\ r_{0x}\ r_{0y}\ r_{0z}]^T$, corresponds to the 6 DoFs of the base.
A.1 Base-derivative of inertia matrices
In this section, the convective terms $d_0$, $d_{0q}$ and $d_{0\eta}$ are detailed and developed as given in (A.2).

Simple SMS study case
The SMS used to validate the proposed tools and control methods is a reduced version of the PULSAR telescope with fewer DoFs. The spacecraft base is a cylinder exhibiting a 1.6 m radius and a 2 m length, for a total mass of 1960 kg.
<!--Spacecraft-->
<link name="Spacecraft">
  <inertial>
    <origin rpy="0 0 0" xyz="0 0 0"/>
    <mass value="1960"/>
    <inertia ixx="3345" ixy="0" ixz="0" iyy="2202" iyz="0" izz="2202"/>
  </inertial>
  <visual>
    <origin xyz="0.1 0 0" rpy="0 -1.57075 0"/>
    <geometry>
      <cylinder length="2" radius="1.6"/>
    </geometry>
    <material name="Orange"/>
  </visual>
  <stiffness name="rigid"/>
</link>

For each body, a parent and a child body are defined with the joint formalism.

<!--Joint to Left Solar Panel -->
<joint name="Spacecraft_Left_Panel" type="fixed">
  <parent link="Spacecraft"/>
  <child link="Left_Panel"/>
  <origin xyz="-0.9 0 -1.6" rpy="3.1416 0 0"/>
  <axis xyz="1 0 0"/>
</joint>
<!--Left Solar Panel-->
<link name="Left_Panel">
  <inertial>
    <origin rpy="0 0 0" xyz="0.001 0.001 3.8447"/>
    <mass value="61"/>
    <inertia ixx="17" ixy="0" ixz="0" iyy="1250" iyz="0" izz="1233"/>
  </inertial>
  <visual>
    <origin rpy="0 0 0" xyz="0 0 2.75"/>
    <geometry>
      <box size="0.25 2 5.5"/>
    </geometry>
    <material name="Blue"/>
  </visual>
  <stiffness name="flexible">
    <mode_number value="5"/>
    <mode_1 pulse="0.16*2*pi" L="-6.4 0 0 0 -35 0" damp="0.005"/>
    <mode_2 pulse="0.70*2*pi" L="0 -6.7 0 35.4 0 0" damp="0.005"/>
    <mode_3 pulse="1.08*2*pi" L="-0.1 -0.1 0 0.3 0 3.8" damp="0.005"/>
    <mode_4 pulse="1.21*2*pi" L="-3.2 0 0 0 -3 -0.01" damp="0.005"/>
    <mode_5 pulse="3.05*2*pi" L="2.3 0 -0.3 0 1.3 0" damp="0.005"/>
  </stiffness>
</link>

And similarly for the right panel. Each reaction-wheel is similar; only the rotation axis differs.

<!--joint Spacecraft to Reaction Wheel 1 -->
<joint name="Spacecraft_to_RW1" type="revolute">
  <parent link="Spacecraft"/>
  <child link="RW1"/>
  <origin xyz="-0.4 -0.5 0.5" rpy="35.2644*pi/180 pi/4 0"/>
  <axis xyz="0 0 1"/>
  <limit effort="1000.0" lower="0" upper="0" velocity="0"/>
</joint>
<!--Reaction Wheel 1 -->
<link name="RW1">
  <inertial>
    <origin rpy="0 0 0" xyz="0 0 0.0"/>
    <mass value="4"/>
    <inertia ixx="0.065" ixy="0" ixz="0" iyy="0.065" iyz="0" izz="0.1322"/>
  </inertial>
  <visual>
    <origin rpy="0 0 0" xyz="0 0 0"/>
    <geometry>
      <cylinder length="0.16" radius="0.31"/>
    </geometry>
    <material name="Grey"/>
  </visual>
  <stiffness name="rigid"/>
</link>

Then the detail of the manipulator is given body by body as follows:

<!--joint between {parent} and link_0-->
<joint name="Spacecraft_rail_joint" type="fixed">
  <origin rpy="-1.57075 0 0" xyz="3.5 0 0"/>
  <parent link="Spacecraft"/>
  <child link="rail_link_0"/>
</joint>
<link name="rail_link_0">
  <inertial>
    <origin rpy="0 0 0" xyz="2.65 0 0"/>
    <mass value="234.09894105"/>
    <inertia ixx="0.001" ixy="0" ixz="0" iyy="0.001" iyz="0" izz="0.001"/>
  </inertial>
  <visual>
    <origin rpy="0 0 0" xyz="0 0 0"/>
    <geometry>
      <mesh filename="meshes_dISAS/rail/rail_base.dae"/>
    </geometry>
    <material name="Grey"/>
  </visual>
  <stiffness name="rigid"/>
</link>
<!--joint between link_0 and link_1 -->
<joint name="rail_joint_1" type="prismatic">
  <parent link="rail_link_0"/>
  <child link="rail_link_1"/>
  <origin rpy="0 0 0" xyz="0.15 0 0.15"/>
  <axis xyz="1 0 0"/>
</joint>
<link name="rail_link_1">
  <inertial>
    <origin rpy="0 0 0" xyz="0 0 0"/>
    <mass value="15.90105894"/>
    <inertia ixx="0.001" ixy="0" ixz="0" iyy="0.001" iyz="0" izz="0.001"/>
  </inertial>
  <visual>
    <origin rpy="0 0 0" xyz="0 0 0"/>
    <geometry>
      <mesh filename="meshes_dISAS/rail/rail_slot.dae"/>
    </geometry>
    <material name="Grey"/>
  </visual>
  <stiffness name="rigid"/>
</link>
<!--joint between {parent} and link_0-->
<joint name="rail_link_1_arm_joint" type="fixed">
  <origin rpy="0 0 0" xyz="0 0 0.05"/>
link="rail_link_1"/> <child link="arm_link_0"/> </joint> <link name="arm_link_0"> <inertial> <origin rpy="0 0 0" xyz="0 0 0.0318"/> <mass value="7"/> <inertia ixx="0.01" ixy="0" ixz="0" iyy="0.01" iyz="0" izz="0.08"/> </inertial> <visual> <origin rpy="0 0 0" xyz="0 0 0"/> <geometry> <mesh filename="meshes_dISAS/arm/link_0.dae"/> </geometry> <material name="Grey"/> </visual> <stiffness name="rigid"/> </link> <!--joint between link_0 and link_1 --> <joint name="arm_joint_1" type="revolute"> <parent link="arm_link_0"/> <child link="arm_link_1"/> <origin rpy="0 0 0" xyz="0 0 0.0635"/> <axis xyz="0 0 1"/> </joint> <link name="arm_link_1"> <inertial> <origin rpy="0 0 0" xyz="0 0 0.0888"/> <mass value="7"/> <inertia ixx="0.021" ixy="0" ixz="0" iyy="0.021" iyz="0" izz="0.01"/> </inertial> <visual> <origin rpy="0 0 0" xyz="0 0 0"/> <geometry> <mesh filename="meshes_dISAS/arm/link_1.dae"/> </geometry> <material name="Grey"/> </visual> <stiffness name="rigid"/> </link> <!--joint between link_2 and link_3 | 2_T_3 --> <joint name="arm_joint_3" type="revolute"> <parent link="arm_link_2"/> <child link="arm_link_3"/> <origin rpy="0 0 3.14159265359" xyz="0 -3.36198 0"/> <axis xyz="0 0 1"/> </joint> <link name="arm_link_3"> <inertial> <origin rpy="0 0 0" xyz="0 0.0237 0"/> <mass value="7"/> <inertia ixx="0.021" ixy="0" ixz="0" iyy="0.01" iyz="0" izz="0.021"/> </inertial> <visual> <origin rpy="1.57079632679 0 0" xyz="0 0 0"/> <geometry> <mesh filename="meshes_dISAS/arm/link_3.dae"/> </geometry> <material name="Grey"/> </visual> <stiffness name="rigid"/> </link> <!--last joint --> <joint name="dlsaffe_joint_ee" type="fixed"> <parent link="arm_link_4"/> <child link="dlsaffe_link_ee"/> <origin xyz="-2.5 0 -0.13" rpy="0 0 0"/> </joint> <link name="dlsaffe_link_ee"> <inertial> <origin xyz="0 0 0" rpy="0 0 0"/> <mass value="0"/> <inertia ixx="0" ixy="0" ixz="0" iyy="0" iyz="0" izz="0" /> </inertial> <visual> <origin xyz="0.05 0.05 0.05" rpy="0 0 0"/> <geometry> <box size="0.1 0.1 0.1"/> </geometry> <material name="Grey"/> </visual> <stiffness name="rigid"/> </link>
04117480
en
[ "info.eiah", "info.info-cv", "info.info-gr" ]
2024/03/04 16:41:26
2021
https://theses.hal.science/tel-04117480/file/2021ISAR0020_DEWEZ_Diane_TheseDEF.pdf
Olfa... Nils

ACKNOWLEDGMENTS
A PhD is three years of work, but also three years of a beautiful human adventure. I had the chance to meet many extraordinary people who all contributed in some way to the making of this thesis. First of all, I would like to thank the jury for assessing and attending my PhD defence. Thank you Indira Thouvenin for presiding over my defence remotely, and thank you Takuji Narumi for attending despite the time difference. Many thanks to Martin Hachet and Niels Henze for taking the time to read my manuscript and giving constructive and nice feedback. I would like to warmly thank my great thesis supervisors, without whom all this work would not have been possible. Thank you, first, Anatole, for accepting me as a PhD student and for encouraging me until the end to stay in research, even if I am choosing another path for now! A big thank you to Ferran and Ludovic: thank you for your insight, for your kindness, and for your help with development and paper corrections. I was able to evolve and learn enormously during these three years of PhD thanks to this incredible supervision. You guided me so that, little by little, I gained confidence and could have my own ideas. I would also like to thank the two members of my thesis committee, Marie Babel and Nicolas Mollet, who boosted me with their encouragement during the first years of my thesis. Thanks to Elise Bannier, who was my mentor at the beginning of my thesis and gave me precious advice. Thanks to Inria, through the Avatar Défi, for funding my thesis; the meetings of this project were great moments of scientific exchange. I also thank the research department of INSA Rennes, mainly Justine Gromaine and Aurore Gouin, for their help and their answers to my questions. I would probably never have done a PhD if I had not discovered, during my internship, incredible teams full of fascinating and very different people. Thank you Hybrid, thank you for the "Mardis Jeux" (Game Tuesdays), the Among Us evenings, the movie foodies, the coffee breaks. Every person I met in this team contributed a little to this thesis; I will mention only some of them. A very special thanks to Rebecca and Etienne, my great former office mates; it became really sad and empty when you left! I obviously thank Victor/Rodras for his moral support in all circumstances, my psychologist neighbour, my supplier of plush toys (the little axolotl, Lady Gagaviota...) and of cat videos. Thanks to Thomas H. a.k.a. Wouf for our discussions, the visits with Rakia, the very useful proofreading of my papers, the motivation pills. Thank you Adrien for the moral support and for organizing great online evenings that saved the team's lockdown. Thanks to all the rest of the gang: Gwendal a.k.a. the Otter or "le stajièr", Thierry, Alexandre, Hakim, Martin, Tiffany

Introduction
Avatars and Interaction
immersion, in terms of body perception, performance or behaviour. As there are already several studies and applications relative to sports in VR, we use the example of a virtual tennis game in this introduction. Each direct interrelation between the different entities is illustrated by a quote from an imaginary user. Quote 1: "This is just a fake body, it is not my real body!"
First, it is necessary to investigate the relation between users and their avatar, to evaluate whether users identify with this virtual body as much as they identify with their real body. VR has clear advantages over desktop video games, as it completely immerses users in terms of sensory information. It offers new sensory stimulation that can help users feel like they actually are the virtual character representing them. This is possible not only through visual information, but also through additional stimulation such as haptic feedback (tactile and force feedback). Inspired by the rubber-hand illusion (RHI) [Botvinick], in which synchronous tactile stimulation applied to a rubber hand and to the hidden real hand creates an illusion of owning the rubber hand, the idea emerged to use such visuotactile stimulation in VR [Slater]. This can give the feeling of being the avatar, i.e. a sense of embodiment, even when the avatar has a different age [Tajadura-Jiménez] or a different gender [Slater]. If this sense of embodiment is high, users believe their virtual representation is their real body, and they can then act through this avatar. However, it is still not clear whether interacting with this avatar increases or decreases the sense of embodiment.

Quote 2: "It is difficult to accurately control the avatar, I missed the ball because of that."

Here we focus on the relation between avatar perception and the interaction technique, starting with the impact of interaction input. Transposing an activity like tennis into VR is not an easy task. One of the challenges is, for example, to correctly track users' movements. Interaction techniques can rely on various methods to track users (e.g. controllers, cameras, inertial sensors), which all have their own advantages and drawbacks. Most of them are cumbersome, or still cause tracking losses. Also, will the user hold a real racket, or just a simple controller? All interaction input design choices might affect users' body perception. Indeed, when performing actions, users act through their virtual avatar, and using the avatar to perform actions might change the relationship they have with it. Acting with an avatar relies on visuomotor stimulation, and studies showed that this type of stimulation can strongly contribute to the impression of embodying the avatar [Kokkinara]. In the nineties, some researchers held the opposite view: interacting with input devices could create a sense of disembodiment, because these devices make users focus on the devices instead of on their own body [Davies]. In their installation "Osmose", the authors therefore favoured creating a link between users' breath and the VE, enabling users to influence the environment through their breathing. This gave them a feeling of connecting to this new environment.
While this view is understandable considering the technical possibilities at that time, the situation has evolved, and input devices that directly use users' bodies are becoming increasingly common (e.g. detecting gestures using controllers and gloves for object manipulation [Hayatpur], or using a Kinect to track users' bodies for navigation [Dam]). In the current context, it is possible to imagine forms of interaction with the environment that directly use the real body as an input, thus avoiding disembodiment and, on the contrary, creating a sense of embodiment towards the virtual body used for interaction.

[...] quickly [Pan and Steed 2019]. While being embodied in a novel body, people can perceive themselves differently, or even show different behaviours. This is called the "Proteus effect" [Yee], and it is starting to be investigated in relation to interaction. For example, to keep the example of the tennis game, it was found that users can move more if their avatar looks thinner than their opponent's during a tennis game on Nintendo Wii [Peña et al. 2016]. As another example, if the avatar resembles a stereotype of someone who can play the djembe well, then users' actual performance can be better than when playing with an avatar in a suit, and this performance is correlated with the level of embodiment [Kilteni]. But sometimes the avatar does not seem to impact interaction; for instance, seeing a virtual representation of your hands does not improve your performance in a motor-skill task [Ricca]. Even though avatars offer a spatial reference which helps to perceive the environment [Draper; Mohler], they also sometimes hinder interaction, for example by creating occlusions [Tran et al. 2017]. Avatars are also essential as an important clue of what is possible in the VE: they indicate to users what they can and cannot perform. For example, users embodied in a bat [Andreasen] can potentially fly, but they probably cannot grab a 3D object. Considering the environment affordances (the actions offered by an environment) and the avatar's characteristics (e.g. anthropomorphism, realism, shape, degrees of freedom), users should be able to quickly identify possible actions. Interaction in VR should therefore be designed by thinking about users' virtual representation, so that the appearance reveals the possible actions. Avatars can thus have various and complex effects on interaction, depending on their characteristics, which are still not fully understood.

Motivation and Research Axes

The interest of investigating avatar-based interaction is twofold.
On the one hand, it is linked to this influence of the avatar on interaction, as studies are starting to demonstrate that users would benefit from better interaction possibilities with a full-body avatar [Kilteni; Banakou]. This impact of having a novel body on users' behaviour can have great applications in several domains such as education or psychotherapy. For instance, in an application against fear of heights, users could be represented by a flying superhero or a bird avatar. If users manage to easily and efficiently control their avatar and fly with it, this might help them fight their fear. On the other hand, it is also linked to the potential contributing effect of interaction on avatar perception. Having interaction that is compatible with full-body avatars may help to increase embodiment. While there is therefore a great interest in investigating avatars and interaction together, the knowledge on avatar embodiment and on interaction tends to evolve in parallel. There are not many studies investigating the apparent interrelations between both, which could influence the level of embodiment or the way users interact. Nevertheless, the motivations and current challenges to better understand these interrelations are numerous. This thesis aims to extend current knowledge linking the user, the avatar, and the interaction by investigating the interrelations between them. Our work focuses on three main research axes (RA).

RA1: Influence of the user's characteristics on the sense of embodiment

This axis aims to deepen the understanding of the link between the user and the avatar, i.e. the first interrelation presented with Quote 1. One motivation of this thesis is to better understand how the sense of embodiment is elicited. Even though there is a growing number of studies on the sense of embodiment, it is still a recent research topic in VR and we lack knowledge on how it is elicited. The factors investigated concern mostly the avatar's characteristics, the type of sensory stimulation or the tracking apparatus. Most studies therefore consider the sample of participants as a whole, without considering inter-individual differences. Yet, body perception is a very personal matter, and studies often show high standard errors in participants' level of embodiment. Researchers do not know yet why some people easily embody virtual bodies, while others seem refractory to the sense of embodiment. As individual characteristics have already been studied in VR for other aspects of user experience such as the sense of presence, it seems interesting to study such individual differences in the context of embodiment. This question will be investigated in Chapter 2 through the evaluation of whether the user's personality traits can affect embodiment. While this question investigates the unknown effect of the user's characteristics on avatar perception, it does not take the interaction into account.

RA2: Influence of the interaction technique on the sense of embodiment

This axis corresponds to Quote 2 (for interaction input) and Quote 3 (for interaction feedback), and aims to explore one aspect of the interrelations between interaction and avatar embodiment. Studies on interaction techniques often use interaction-based criteria such as performance or usability.
Few studies investigate the sense of embodiment. Nevertheless, in the real world, exploring the environment by interacting with it is a way to know and appropriate our bodies. One component of embodiment is related to the control of the virtual body, which is deeply linked to the possibility of performing actions. When an interaction task is used in a study, it is rarely used as an experimental condition. Therefore, little is known about how the body is perceived depending on the way users interact. Several parameters can change when interacting, such as the input characteristics or the action feedback. The influence of such factors on embodiment needs to be investigated more deeply. This question will be explored in Chapter 3, where we will investigate the impact of locomotion techniques on embodiment, which had not been investigated before. Chapter 4 will focus on manipulation techniques and investigate the impact of a novel type of feedback on embodiment for the specific case of anisomorphic manipulation techniques. Also, contrary to locomotion techniques, there are already several studies on the influence of manipulation on the sense of embodiment. These works can be used to propose new methods of developing techniques more compatible with avatars. In Chapter 5, we will therefore propose design guidelines for manipulation techniques that take avatar embodiment into account.

Figure 1 - Illustration of the different research axes, linking the interaction, the user, the avatar, and the sense of embodiment (SoE).

RA3: Influence of the avatar on the interaction

We discussed in the previous section that avatars can sometimes help (e.g. by increasing spatial awareness) or hinder interaction (e.g. by creating occlusions), which was illustrated with Quote 4. The studies investigating such influence of avatars on interaction are recent and still rare, as simpler virtual representations were used when most of the interaction techniques were invented. As a third research axis, we decided to investigate this possible influence of the avatar on interaction more deeply. While Chapter 3 will study the impact of embodying a full-body avatar when navigating, Chapter 4 will focus on manipulation techniques, more specifically anisomorphic manipulation, and how different appearances could impact performance.

Scope and Outline

In this thesis, we investigate two domains relative to VR: interaction and avatar embodiment. We focus on fully immersive VR applications used in Head-Mounted Displays (HMD). We study avatars defined as full bodies or body parts representing users in the VE. The notion of avatar embodiment studied in this thesis is based on the definition by Kilteni et al. [2012a], which may differ from other definitions of embodiment used, for example, in the Human-Computer Interaction community, where it relates more to the link between interaction and the user's real body. Our work focuses on avatars viewed from a first-person perspective, but related work using a third-person perspective will be mentioned. The outline of this thesis will be as follows:

Chapter 1: Related Work on Interrelations Between Avatars and Interaction

We will first introduce related work on avatar embodiment, 3D interaction, and the studies investigating the interrelations between these two domains.
While a first part will mostly present the context of this thesis and important aspects of VR, the following sections will develop the notion of embodiment, the impact of avatars on interaction, and the impact of interaction on avatar perception.

Chapter 2: Understanding Individual Differences in the Sense of Embodiment

The second chapter will focus on the sense of embodiment and tackle the question of individual differences in the levels of embodiment. We will present a user study on the influence of personality traits (Big Five traits and locus of control) and body awareness on the sense of embodiment. 123 participants took part in this study. A two-minute visuomotor task was used to elicit a sense of embodiment towards the avatar. At the end of the experiment, a virtual threat was induced to observe participants' reaction.

Chapter 3: Studying the Interrelation Between Locomotion Techniques and Embodiment

In this chapter, we will explore the impact of interaction on embodiment by focusing on locomotion. We will detail a user study that we conducted exploring the influence of locomotion techniques on the sense of embodiment. Three locomotion techniques were tested: real walking, walking-in-place and head steering. Each participant used only one technique and did the experiment twice, once with a full virtual body, and once with only 3D models of the controllers.

Chapter 4: Investigating Dual Body Representations During Anisomorphic 3D Manipulation

This chapter will focus on another type of interaction, manipulation, and more specifically on anisomorphic manipulation. It aims to investigate the use of a dual virtual representation when distorting users' motion for object manipulation. Two experiments were conducted, respectively investigating two types of motion distortion (amplified motion or decreased motion). Dual body representations with different visual appearances were compared to single body representations.

Chapter 5: Designing Avatar-Friendly 3D Manipulation Techniques: Practical Guidelines

In Chapter 5, we will leverage the knowledge on manipulation techniques and avatars to propose guidelines and leads for future research. We will present these guidelines for designing "avatar-friendly" manipulation techniques compatible with avatars, based on an analysis of existing literature (presented in Chapter 1). These guidelines are classified into three categories: Input Devices, Control and Feedback. They could help VR developers design manipulation techniques that take avatars into account.

Chapter 6: Conclusion

Finally, a general conclusion will recall the different contributions as well as draw future research perspectives.

"What I know is all quicksand"
Giant Rooks

Chapter 1

RELATED WORK: INTERRELATIONS BETWEEN AVATARS AND INTERACTION

This chapter presents the related work on the sense of embodiment and interaction in virtual reality. It first introduces the main concepts of this thesis, namely virtual reality, 3D interaction and user experience in virtual environments. Then, it focuses on the sense of embodiment and how it is defined, elicited and measured in virtual reality. Two sections develop the existing interrelations between avatars and interaction. The first one studies the impact of avatars on interaction, while the second one focuses on the impact of interaction design choices on self-avatar perception.

General Concepts

Virtual Reality

Virtual Reality (VR) is a concept that emerged during the second half of the twentieth century.
The term designates a reality that is "virtual", i.e. "created by computer technology and appearing to exist but not existing in the physical world". This term became popular in the late eighties, but the concept of simulated environments started earlier, for example with the Sensorama simulator created by Morton Heilig [Heilig], providing various sensory stimulations to simulate a motorcycle ride. Since then, VR hardware has evolved. The concept of VR has been placed at one end of Milgram's Reality-Virtuality Continuum [Milgram and Kishino 1994]. At the other end of this continuum, there is the real world. VR is the opposite of the real world, as everything in it is simulated, which creates a whole new artificial world. Between both ends, there is what is commonly called mixed reality, as it is composed of a certain ratio of real elements and virtual/simulated elements. This continuum has been revisited recently [Skarbez et al. 2021b], as it was mostly focusing on visual displays while VEs can stimulate all of users' senses. The new version of the continuum stipulates that perfect VR is currently an unachievable goal, and that most current VR applications are actually part of mixed reality (see Figure 1.1).

Several definitions of VR have been proposed. For example, Steuer [Steuer] proposed a definition after analysing existing ones: "A virtual reality is defined as a real or simulated environment in which a perceiver experiences telepresence". Here we present another, technical definition, proposed in 2003:

Virtual reality is a scientific and technical field that uses computer science and behavioural interfaces to simulate the behaviour of 3D entities in a virtual world, which interact in real time with each other and with one or more users in pseudo-natural immersion through sensory-motor channels.
Technical definition of Virtual Reality [Arnaldi et al. 2003]

Figure 1.1 - Revisited Milgram's Reality-Virtuality Continuum by Skarbez et al. [2021b].

VR immerses users in a brand new world, a virtual world. All the components of this world can be controlled by a computer: what users see, hear, touch or smell can be controlled. This opens infinite possibilities, to experience the real world in a safe and customisable manner or to experience unrealistic worlds that stimulate our imagination. To immerse users, different types of apparatus can be used. In this thesis we will focus on Head-Mounted Displays (HMD). Ivan Sutherland and his team created what is considered the first HMD in 1968 [Sutherland]. Nowadays, HMDs are less cumbersome and can be wireless, letting users move freely. Other types of hardware can also be used, such as the Cave Automatic Virtual Environment (CAVE), an immersive room made of screens on which the VE is projected [Cruz-Neira].

Interacting with Virtual Environments

In the virtual world, users have the possibility of acting on and influencing the VE, i.e. interacting with the environment. The user performs actions and the environment responds with feedback. The actions that can be performed are in theory all the actions people perform in the real world, like walking, jumping, or grabbing, catching and pinching objects.
Many techniques build on different types of real-world knowledge, which can be classified using the Reality-Based Interaction framework [Jacob]. This framework is composed of four categories of real-world knowledge: Naive Physics, Body Awareness & Skills, Environment Awareness & Skills, and Social Awareness & Skills. All techniques using this knowledge form a new generation of interaction techniques, after the generation of desktop interfaces. We can also say that this type of interaction offers a high fidelity [McMahan 2011] to the real world, i.e. it imitates the way people interact in the real world. But in VR, people are not limited by real-life physical constraints, so they can have "superpowers", like reaching remote objects or passing through walls. These superpowers can replace realistic interaction or extend it and offer new functionalities in the virtual world.

Interaction in a VE can be modelled using an action-perception loop (see Figure 1.2). Users' input can be of various types, e.g. motion, voice, or button presses. It is transmitted to the application via input devices and processed by the environment, which gives feedback through output devices. The feedback can be coherent with the input: for example, a user motion can be directly mapped to a similar motion performed by the user's virtual representation. In this case, we talk about one-to-one mapping, or isomorphic interaction. But sometimes a transformation is applied to the input to obtain a different feedback, thus creating distorted motion. In this case, we talk about anisomorphic interaction. The gain applied to the input is called a Control-Display gain (CD gain), as the display (feedback) differs from the user control (input). It is the gain applied to the input to compute the position of the displayed representation, and it has been studied in both 2D [Casiez] and 3D contexts [Argelaguet]. This gain can be constant or adaptive (varying depending on some conditions). A CD gain lower than one means the displayed motion is smaller than the real motion; a CD gain greater than one means that users' motion is amplified.
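To make this mapping concrete, the following minimal sketch (in Python; the function and parameter names are our own, not taken from any cited system) shows how a CD gain turns a tracked displacement into a displayed one:

```python
import numpy as np

def apply_cd_gain(prev_virtual_pos, prev_real_pos, new_real_pos, cd_gain=1.0):
    """Map a tracked (real) displacement to a displayed (virtual) one.

    cd_gain == 1.0 -> isomorphic, one-to-one mapping;
    cd_gain  < 1.0 -> displayed motion smaller than the real motion;
    cd_gain  > 1.0 -> displayed motion amplified.
    """
    real_delta = np.asarray(new_real_pos) - np.asarray(prev_real_pos)
    return np.asarray(prev_virtual_pos) + cd_gain * real_delta

# A 10 cm real hand movement displayed as a 25 cm virtual movement:
print(apply_cd_gain([0, 0, 0], [0, 0, 0], [0.10, 0, 0], cd_gain=2.5))
# -> [0.25 0.   0.  ]
```

An adaptive gain would simply make `cd_gain` a function of the current conditions (e.g. hand velocity) instead of a constant.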
Figure 1.2 - Action-perception loop: the user can control the avatar thanks to the input devices, so as to interact with the VE. In response to the user's actions, the VE provides feedback perceived by the user thanks to the output devices.

After having processed the input, the VE gives feedback to users. Various types of feedback can be given, related to each of the five human senses: visual, auditory, haptic and, more rarely, olfactory and gustatory. The first three types are the most common to give feedback after an action. They can either confirm the consequences of users' actions, or simply inform about the current state of the application. Without this feedback, users cannot efficiently interact with the VE [Wingrave].

Interaction with the VE can take many forms. One simple and established taxonomy, by Bowman and colleagues [Bowman], classifies interaction into navigation, selection, manipulation and system input. In the case of navigation, users' motions result in camera translation and rotation. Selection and manipulation let users choose objects and apply transformations to these objects (translate, rotate or scale them). System input enables users to modify parameters in an application, to access a menu or to write text. In this section, we will present previous work on selection and manipulation, as well as on navigation. We will not focus on system input, as most techniques used for system input are similar to the ones used for selection and manipulation.

Selection and Manipulation

Selection and manipulation are either separated or considered as one global interaction category. Selection is the action of choosing the object the user will interact with. It is composed of two phases: first, users position the selection tool on the desired target; second, they confirm the selection, by clicking on a button for example. Selection techniques vary in terms of selection tools, the most used ones being a virtual hand and a ray. They can also be classified using other criteria, for example the provided degrees of freedom [Argelaguet]. Manipulation tasks encompass all the modifications that can be done on the selected virtual object. Manipulation is often summed up as four canonical tasks: positioning, rotation, scaling, as well as selection if we consider it a subcomponent of manipulation [Poupyrev; Bowman]. Numerous manipulation techniques have been proposed in order to cope with the complexity of 3D manipulation [Mendes et al. 2019]. Selection and manipulation can be anisomorphic. The CD gain can be lower than one to provide finer control for precise positioning [Frees]. It can also be greater than one to provide a higher manipulation speed when precision is not crucial, or to reach remote objects [Poupyrev]. Creating distorted motion by modifying the CD gain can also be used to provide haptic feedback, by leading the user to a physical prop [Kohli; Azmandian]. This technique applies an offset to the virtual hand so that the user reaches the real tangible object.
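The Go-Go technique cited above is a classic example of such an anisomorphic mapping: within a threshold distance the virtual hand follows the real hand one-to-one, and beyond it the virtual arm grows non-linearly. A minimal sketch of this idea is given below; the threshold and coefficient values are illustrative assumptions, not the calibrated values of the original paper:

```python
def gogo_arm_length(real_dist, threshold=0.45, k=1/6):
    """Go-Go-style non-linear arm extension.

    Below `threshold` (metres from the body), the mapping is isomorphic,
    keeping nearby manipulation precise; beyond it, the virtual arm length
    grows quadratically so that remote objects become reachable.
    """
    if real_dist <= threshold:
        return real_dist
    return real_dist + k * (real_dist - threshold) ** 2

# Reaching 0.8 m with the real hand extends the virtual hand slightly further:
print(gogo_arm_length(0.8))  # ~0.82 m
```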
Locomotion

Locomotion techniques enable users to translate and rotate in the VE. Numerous studies talk about navigation, which refers more specifically to tasks involving wayfinding. Locomotion in VR has been widely studied [Zayer], and many techniques have been created to tackle the problem of limited physical space, like walking-in-place [Slater] or teleportation [Bozgeyikli]. Some of these techniques use virtual movements (i.e. virtual steering) or physical motions with lower interaction fidelity (e.g. walking-in-place), i.e. motions that are not similar to the way people move in real life [McMahan et al. 2012]. As for manipulation, navigation can be anisomorphic. A CD gain greater than one can be used to travel a longer distance [Interrante] or to increase rotation so as to reorient the user towards a certain direction [Steinicke].
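Such rotation gains are typically applied per frame to the user's real head rotation, as in the sketch below (illustrative Python; redirected-walking systems keep the gain within perceptual detection thresholds so that the manipulation goes unnoticed, and the gain value here is an assumption to calibrate per system):

```python
def redirected_yaw(virtual_yaw_deg, real_yaw_delta_deg, rotation_gain=1.0):
    """Apply a rotation gain to the user's real head rotation (per frame).

    rotation_gain > 1 rotates the virtual viewpoint faster than the head,
    rotation_gain < 1 slower; a gain of 1 is isomorphic.
    """
    return (virtual_yaw_deg + rotation_gain * real_yaw_delta_deg) % 360.0

# A 10-degree real head turn rendered as a 12-degree virtual turn:
print(redirected_yaw(0.0, 10.0, rotation_gain=1.2))  # 12.0
```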
Several taxonomies have been proposed to classify all the proposed locomotion techniques. The first ones mainly focused on implementation aspects, like velocity selection or target selection [Bowman]. With the growing mastery of navigation algorithms, the taxonomies have focused more on the paradigms used, with a distinction being made between physical and virtual navigation [Arns]. Another taxonomy has been proposed more recently, separating locomotion techniques into four categories: room-scale-based, motion-based, teleport-based and controller-based [Boletsis 2017]. Nowadays, the tendency is to focus more on users and their perception of locomotion [Albert].

Immersion and Presence

There are various user experience criteria in VR, such as enjoyment or comfort. In this section we focus on two aspects that have been extensively explored in the VR literature: immersion and presence. Immersion is the objective level of realistic sensory modalities provided to users [Slater]. It highly depends on the different output devices used, as well as on the software. A highly immersive application provides rich sensory stimulation such as high-quality graphics, sound and touch feedback. As VR immerses users in an alternative world, it can give them the impression of being elsewhere and not in the real world anymore. This feeling of "being there" has been called the sense of presence. Several studies investigated factors influencing the sense of presence; for example, having a virtual body can increase it [Slater and Usoh 1993b]. Presence differs from immersion, as it does not solely depend on the apparatus but is a psychological construct in users' minds, which can vary depending on each individual's mental representations. It is a reaction to immersion [Slater] and it makes users react realistically to the virtual world. Slater [2009] introduced two components to substitute the sense of presence: Plausibility (Psi) and Place Illusion (PI). Place Illusion roughly corresponds to the earlier sense of presence, which gives users a sensation similar to being in a real place. Plausibility refers to the credibility of the scenario: it is high if users think the events in the virtual world are really occurring.

This thesis focuses on VR, a simulated world in which users can feel like they are really situated and where they can interact. An important element that makes presence higher and makes interaction possible is the new virtual body that is given to users.

Embodying an Avatar

In VR, users have the possibility to be represented by an avatar, i.e. virtual body parts or a full body. They can feel a sense of embodiment towards this avatar, i.e. feel like this avatar is their real body [Kilteni et al. 2012a]. In this section, we will present related work on the sense of embodiment, from its emergence to its measures, its impact on users, and the factors influencing it.

Emergence of Avatars in Computer-Mediated Technologies

With the emergence of computer-mediated technologies and the internet, people started to need ways of expressing themselves to other people online. Because communication through text was highly limited, they quickly needed a way to show a representation of themselves to other users, to express their identity. It started with profile pictures, usually called avatars, a term that first appeared mostly in video games or novels. This term has since been extended to all types of users' representations in a VE. From video games or chat platforms to VR, these avatars are users' virtual identity. They can be customised and serve as users' proxy in the virtual world. The definition of the word avatar sometimes varies between communities and researchers. A review by Nowak and Fox [Nowak and Fox 2018] of the different uses of the word avatar led to this definition:

An avatar is a digital representation of a human user that facilitates interaction with other users, entities, or the environment.
Definition of an avatar [Nowak and Fox 2018]

An avatar is therefore the means used to interact, with other users but also with the VE. With other users, it lets users communicate, through gestures for example. It is essential for social interaction to be able to see other users and know what they are doing. For interaction with the VE, it influences users' capabilities (see Section 1.3.1.1). For example, thanks to their avatar, users can know their position in the virtual world. The avatar also represents how users want to expose themselves to others in the case of multi-user applications. The chosen appearance can depend on the application context; it reveals both how users want to present themselves to others and how they perceive themselves [Vasalou and Joinson 2009]. There is therefore a strong relation between users and their avatar, which has been extensively studied in several communities. Indeed, the question of the relation between users and their avatar is an interdisciplinary one, involving the fields of HCI (with the question of embodied interaction) as much as art or psychology [Hamilton].

The Evolving Concept of the Sense of Embodiment

In VR, this relation between users and their avatar is even stronger. When using HMDs, users' visual sense is reduced to what is visible in the headset.
They cannot see their real body anymore, and are thus left with only a virtual representation of themselves. As their real body is hidden, people can have the illusion that their avatar becomes their real body. This illusion has been studied in VR since the nineties. It was first called Illusory Virtual Body Ownership (IVBO); researchers investigating this question now more often use the term sense of embodiment, defined by Kilteni et al. [2012a]. This definition decomposed the sense of embodiment into three subcomponents: the sense of self-location, the sense of agency and the sense of ownership. This proposed structure has the advantage of clearly defining three aspects of embodiment, which represent different criteria for experiencing a body as our own.

The sense of ownership was the component already studied in experiments on IVBO. VR is particularly efficient at creating ownership, as other types of media have more difficulty in creating such an illusion that the body is part of our own. This component implies a link between the user and the avatar, both affective and sensory: this body is a part of the user's body, and every stimulation is received through it.

The sense of agency is related to control. It has been extensively studied in domains outside VR [Pacherie 2007]. A strong sense of agency implies both identifying yourself as the cause of an action, and experiencing a logical feedback relative to the performed action. Jeunet et al. [Jeunet] differentiated these two aspects by calling them the feeling and the judgment of agency, based on previous studies in psychology.

Finally, the sense of self-location refers to the spatial location of the avatar relative to the user's actual position. The main factor influencing it is the perspective from which the avatar is seen: the sense of self-location is high when the avatar is seen from a first-person perspective (1PP), and low when seen from a third-person perspective (3PP) [Blanke]. It can also be impacted by tactile stimulations to some extent [Lenggenhager et al. 2009]. This notion of self-location comes from original real-life experiments like the rubber-hand illusion (RHI), where the fake hand and the real hand were not at the same location. In VR, it is easier to have a co-located avatar when the tracking is correct; this component has therefore attracted less interest in the VR community recently. There are still interesting applications in VR that can disrupt this sense, for example out-of-body experiences where the user is placed outside the virtual body on purpose [Bourdin].

While this structure has established a common reference for all researchers in the domain, the knowledge on the sense of embodiment in VR is still limited and definitions may need to evolve in the future.

Measuring Embodiment in Virtual Reality

The research on subjective measures of the sense of embodiment has highlighted the complex interrelations between all the aspects covering embodiment, and the difficulty of clearly separating them [Roth; Peck and Gonzalez-Franco 2021]. These subjective measures often take the form of a questionnaire.
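In practice, such questionnaires are usually aggregated into one score per subcomponent, by averaging Likert items and reverse-scoring control items. The sketch below illustrates this common practice; the item names and the 7-point scale are invented for illustration and do not correspond to any specific validated instrument:

```python
def score_embodiment(responses, scale_max=7):
    """Aggregate Likert ratings into subcomponent scores.

    `responses` maps item ids to ratings in [1, scale_max]; items ending
    in '_r' are reverse-scored control items. Item names are illustrative.
    """
    subscales = {
        "ownership": ["own_1", "own_2", "own_ctrl_r"],
        "agency": ["age_1", "age_2"],
        "self_location": ["loc_1", "loc_2"],
    }
    scores = {}
    for name, items in subscales.items():
        values = []
        for item in items:
            rating = responses[item]
            if item.endswith("_r"):           # reverse-score control items
                rating = (scale_max + 1) - rating
            values.append(rating)
        scores[name] = sum(values) / len(values)
    return scores

example = {"own_1": 6, "own_2": 5, "own_ctrl_r": 2,
           "age_1": 7, "age_2": 6, "loc_1": 5, "loc_2": 6}
print(score_embodiment(example))
# {'ownership': 5.67, 'agency': 6.5, 'self_location': 5.5} (approximately)
```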
Items were first inspired by the questions asked for the RHI. They have evolved as experiments became more and more complex, introducing more factors and types of feedback; nowadays there are, for example, questions on tactile sensations [Peck and Gonzalez-Franco 2021]. Researchers have also tried to identify objective measures that could be correlated with the sense of embodiment. As in the RHI [Ehrsson], a threat introduced to the fake body can create a reaction if the user feels embodied. The reaction can be evaluated with physiological measures such as the skin conductance response [Armel]. This reaction is a behaviour that was found to correlate with the sense of embodiment [Yuan; Zhang]. Other behavioural responses can be used as additional clues of embodiment, for example voice frequency [Tajadura-Jiménez] or movements [Kilteni]. Behaviour can show how much users identify with the character representing them.

Behavioural Impact of the Sense of Embodiment

These changes in behaviour and reactions are a consequence of embodying a different body. Embodiment is a powerful illusion that can have surprising cognitive effects. For example, embodying an avatar with a different skin colour can reduce implicit racial bias [Banakou]. The avatar characteristics can influence users' thoughts and behaviour: they can start acting as someone with their avatar's characteristics would. This is called the Proteus effect [Yee]. People embodied in a child avatar can start talking with a higher pitch [Banakou]. Embodiment can also influence users' cognitive performance when they embody an avatar looking like Einstein, and this effect is even stronger for people with low self-esteem [Banakou]. Another study found contradictory results, with no effect of being Einstein on cognitive performance, but an impact of the appearance of another user in the VE [Kocur et al. 2020b]: if the other user looks like Einstein, there is an increase in cognitive performance, which may be due to a competitive behaviour. This is not a simple process and might depend on the context. Contrary to the study on cognitive performance, another study found that an avatar that looks like an artist decreases creativity compared to a look-alike avatar [Rooij].
Embodying an avatar is not only about experiencing a novel appearance; it can sometimes have an impact on users' cognition.

Impact on Body Image and Body Schema

The illusion created by the sense of embodiment is related to users' own body perception. Everyone has their own mental self-representation. One common theory, the dyadic taxonomy, says that people have at least two mental body representations, generally called body image and body schema [Gallagher]. The body schema "consists in sensorimotor representations of the body that guide actions" [Vignemont]. Contrary to the body schema, which does not exist apart from the environment, the body image is "a conscious image or representation [...] differentiated from its environment" [Gallagher]. De Vignemont [Vignemont] stated that the body image "groups all the other representations about the body that are not used for action, whether they are perceptual, conceptual or emotional". Embodying an avatar challenges users' mental representations of themselves. For Biocca [Biocca], a duel takes place in the virtual world between the physical body and the virtual body, to alter the body schema. The avatar appearance can alter mental representations, but so can acting through this avatar. Similarly to the way tools can impact the body schema [Maravita and Iriki 2004], interacting through an avatar that has a different morphology, like a three-arm avatar for example [Laha], can alter the body schema (see Figure 1.3).

Proprioceptive drift is one possible alteration of the body schema. This drift happens for example in the context of the RHI, where the perceived position of the real hand drifts towards the fake hand. In VR, such a drift has also been observed, for example with a long-arm illusion [Kilteni et al. 2012b]. When displaying a long virtual arm, participants experience a shape modification of their mental representation and feel like their real hand shifts towards the remote virtual hand. However, in this study, proprioceptive drift and ownership were not correlated, suggesting possibly different underlying processes. It was also found that a remote virtual hand alters the body schema especially when participants feel a sense of agency towards this virtual hand [D'Angelo]. This sense of agency appears when the virtual hand movements are synchronous with the participants' ones.

Figure 1.3 - Control of an avatar with a different morphology, for example an avatar with a third arm. Image from [Laha].
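Proprioceptive drift, as discussed above, is typically quantified by asking participants to point, without vision, to where they feel their hand is, before and after stimulation. A minimal sketch of the measure follows (a generic formulation of ours, not the exact protocol of the cited studies):

```python
import numpy as np

def proprioceptive_drift(pointed_before, pointed_after, virtual_hand_pos):
    """Signed drift of the felt hand position towards the virtual hand.

    Positive values mean the participant's estimate moved towards the
    virtual (or rubber) hand between the two pointing measurements.
    """
    before = np.asarray(pointed_before, dtype=float)
    after = np.asarray(pointed_after, dtype=float)
    target = np.asarray(virtual_hand_pos, dtype=float)
    return float(np.linalg.norm(before - target) - np.linalg.norm(after - target))

# Estimate moved 4 cm closer to the virtual hand -> drift of ~0.04 m:
print(proprioceptive_drift([0.00, 0, 0], [0.04, 0, 0], [0.30, 0, 0]))
```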
Factors Influencing the Sense of Embodiment

VR enables the full control of a wide number of experimental factors, from the stimulation protocol to the appearance and morphology of users' avatars, which has resulted in a notable body of literature examining the impact of such factors on the perceived sense of embodiment. These factors can be top-down factors, i.e. depending on users' mental representations and cognitive processes that can moderate the influence of stimulations, or bottom-up factors, i.e. depending on the provided stimulations detected by the sensory organs [Kilteni et al. 2012a]. Here we prefer a classification into external factors, depending on the VE and the avatar, and internal factors, depending on the user [Fribourg]. This classification clearly separates controllable factors from factors solely depending on users.

External Factors

At the origin of virtual embodiment, there are real-world embodiment illusions such as the RHI [Botvinick]. In this experiment, a touch visually applied on a rubber hand, synchronous with a touch experienced on the real participant's hand, can give the illusion of owning the rubber hand. The central idea in this illusion is to deceive the brain by creating a plausible stimulation: the result of an event (a motion, a touch) must be logical in reaction to the initial event. Progressively, this illusion was transposed to VR. First, IJsselsteijn et al. [2006] reproduced the RHI by projecting either the real hand, or both the hand and the stimulation (with a paintbrush). A few years later, an experiment created an illusion of owning a virtual hand, with results as strong as the RHI [Slater]. In this experiment, the authors used a hand-held "Wand" device with a ball attached to it, while a similar virtual ball touched the virtual hand (see Figure 1.4).

Figure 1.4 - First experiment in VR using visuotactile stimulation to elicit a sense of embodiment towards a virtual arm. Image from [Slater].

In parallel, there were also studies focused not solely on the hand but on the full body. These studies started with the experiment of Petkova and Ehrsson [Petkova], inducing the illusion of owning a manikin body seen from a first-person perspective in a video feed. This experiment used a visuotactile paintbrush stimulation like in the RHI. A similar experiment was conducted in which subjects could see a virtual projection of either their real body or a fake body from a third-person perspective [Lenggenhager et al. 2007]. Most of these studies therefore used a visuotactile stimulation, i.e. they used an object (paintbrush, ball) to touch users' real body at the same time as the object's virtual counterpart touched the virtual body.

But this is not the only way to induce a sense of embodiment; the second main means is visuomotor stimulation. In the real world, the RHI was conducted using a moving rubber hand following participants' real hand movements [Kalckert] and with a projection of the real moving hand [Tsakiris et al. 2006]. The same method can be used in VR, by making the avatar reproduce users' movements [Kokkinara]. While visuotactile and visuomotor stimulations are the two main means of creating an embodiment illusion, some experiments also reported the illusion using only visuoproprioceptive stimulation, i.e.
by only showing a virtual body where users' real body is located. If the virtual body is co-located with the real body and has a realistic skin tone, this is enough for the illusion to appear [Maselli and Slater 2013].

Visuomotor stimulation is currently the most used type of stimulation. It was found to be stronger than visuotactile stimulation [Kokkinara]. This is to our advantage, because the goal of the avatar is not solely to be observed by users, but also to be used to perform actions in the VE. If users perform movements in the VE and their avatar follows their movements, it is therefore possible to create a sense of embodiment. A recent study also found that visuomotor stimulation provokes disembodiment faster than visuotactile stimulation [Lesur et al. 2020]. Tracking must therefore work well, as mismatches might decrease embodiment more quickly when using visuomotor stimulation. Spatial and temporal synchrony are extremely important, especially for the sense of agency [Jeunet]. While users can still feel in control when the offset between the action and the feedback is slightly manipulated, the sense of agency is lost when the visual feedback becomes too obviously asynchronous [Debarba].
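Experiments on this temporal synchrony often inject a controlled latency between the tracked motion and the avatar's motion. A minimal way to do this is a fixed-size ring buffer of poses; the sketch below is our own illustration (the class name and the 90 Hz example are assumptions, not a specific experimental setup):

```python
from collections import deque

class DelayedAvatarPose:
    """Delay tracked poses by a fixed number of frames.

    At 90 Hz, delay_frames=27 corresponds to roughly 300 ms of added
    visuomotor latency. Until the buffer fills, the oldest available
    pose is returned, so the full delay builds up over the first frames.
    """
    def __init__(self, delay_frames):
        self._buffer = deque(maxlen=delay_frames + 1)

    def update(self, tracked_pose):
        self._buffer.append(tracked_pose)
        return self._buffer[0]  # oldest stored pose drives the avatar

delayed = DelayedAvatarPose(delay_frames=2)
for frame, pose in enumerate([(0.0,), (0.1,), (0.2,), (0.3,)]):
    print(frame, delayed.update(pose))
# From frame 2 onwards, the avatar shows the pose from two frames earlier.
```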
In addition to external factors concerning the type of stimulation, other factors were also found to affect the sense of embodiment, such as avatar appearance. Despite the fact that it is possible to feel some ownership when seeing only virtual hands and feet [Kondo], the avatar appearance has been shown to be important to maximise the ownership illusion. For example, the illusion is usually stronger with an avatar with coherent clothes or skin tone [Maselli and Slater 2013], and potentially even higher with a personalised avatar or a self-representation [Waltemate et al. 2018; Gorisse]. The sense of agency is less sensitive to appearance: it can be elicited even with point-line avatars [Wellerdiek] and with virtual limbs in implausible positions [Tsakiris et al. 2006].

Internal Factors

External factors focus on different experimental conditions (e.g. stimulation, appearance, perspective), yet they rarely consider individual differences. These individual differences can be due to internal factors depending on users. While there are not many studies investigating the influence of such factors on embodiment, their influence has been explored more in studies on the sense of presence [Slater and Usoh 1993a]. In particular, personality models with different dimensions, like the OCEAN model, have been used in order to characterise inter-personal differences. The OCEAN model, also known as the "Big Five" personality traits, is a taxonomy of personality traits that uses common language descriptors in order to identify five personality dimensions: Openness to experience, Conscientiousness, Extraversion, Agreeableness and Neuroticism. For example, it was found that agreeableness was positively associated with spatial presence [Sacau]. Regarding the influence of extraversion, it was found to be either positively [Laarni] or negatively [Jurnet] correlated with presence. In addition to the "Big Five", other personality traits that have been investigated are absorption (the disposition for having episodes of total attention that fully engage one's representational resources [Tellegen]) and dissociation (the lack of normal integration of thoughts, feelings, and experiences into the stream of consciousness and memory [Bernstein]). Their influence on presence was studied: sometimes both were found positively correlated with presence [Sacau], sometimes only dissociation was associated with presence [Murray], and sometimes neither of them was correlated [Phillips]. It was also found that absorption was a good predictor of presence, no matter which presence questionnaire was used [Kober]. Moreover, empathy is another trait which has been studied in the past and demonstrated to be related to feeling a higher sense of presence [Nicovich; Sas; Ling et al. 2013]. Finally, the locus of control was also demonstrated to have an influence on the sense of presence. People with an internal locus believe they are in control of events, while people with an external locus believe in external influences (e.g. fate). Contradictory results were found, namely that either an external [Murray] or an internal [Wallach et al. 2010] locus of control improved presence, depending on the study. For the sense of embodiment, some internal factors such as body awareness [David] and personality traits [Jeunet] have been studied.
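Methodologically, the links reported above are usually quantified by correlating per-participant trait scores with presence (or embodiment) scores. A minimal sketch of such an analysis (illustrative only; real studies also report significance tests and corrections for multiple comparisons):

```python
import numpy as np

def trait_embodiment_correlation(trait_scores, embodiment_scores):
    """Pearson correlation between a trait score and an embodiment score,
    computed across participants."""
    return float(np.corrcoef(trait_scores, embodiment_scores)[0, 1])

# One value per participant (made-up numbers for illustration):
extraversion = [2.5, 3.0, 4.5, 3.5, 4.0]
ownership = [4.0, 4.5, 6.0, 5.0, 5.5]
print(trait_embodiment_correlation(extraversion, ownership))
# 1.0 here, because the toy data are perfectly linear by construction.
```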
However, the majority of the works addressing such internal factors have mainly focused on the RHI in the physical world. The influence of body awareness, a cognitive ability that makes us aware of our body processes, has been investigated, but no correlation was found with the strength of the RHI [David]. Regarding personality and the RHI, it has been found that the illusion is stronger for empathic people [Asai; Seiryte]. The sense of ownership in the RHI was also found to correlate with traits like the Novelty Seeking trait (from the TCI-R questionnaire) or Psychoticism (from the SCL-90-R questionnaire) [Kállai]. Also, higher responses to the RHI have been reported for people suffering from personality or psychotic disorders: the dissociative subtype of post-traumatic stress disorder (PTSD) [Rabellino], schizophrenia [Peled et al. 2000; Thakkar] and schizotypal personality disorder [Asai; Van Doorn et al. 2018]. Finally, recent works have started to focus on the potential role of personality traits in virtual embodiment, one example being the work of Jeunet et al. [Jeunet], which showed that the feeling of agency is linked to an internal locus of control.

The literature shows both a clear interest and important results regarding the influence of personality on users' sense of embodiment in the physical world. Some more recent work also revealed an influence of the locus of control, a personality trait, on the sense of embodiment in VR. This last result highlighted the potential role of individual differences in the elicitation of the sense of embodiment in VR and thereby motivated a deeper exploration of their possible link with it. For this reason, a part of this thesis (see Chapter 2) will explore this matter by investigating the potential effect of personality traits and body awareness, two factors varying from one person to another that can be linked to embodiment.

The Impact of Having a Virtual Body During Interaction

In real life, the body is the means of performing actions through our motor system, and of receiving feedback through our sensory system. In VR, the avatar serves as the intermediary between the real body and the VE. It shows the actions performed by users and can visually receive feedback, as the user would in the real world. As said before, various avatars are possible, and the avatar chosen in an application can impact the way users interact.
First, an avatar affects users' perception of their capabilities and of the environment. It can also affect their perception of their own movements, even when these movements are distorted in the VE. These effects also highly depend on the coherency between the avatar and the input modality used. Finally, the avatar chosen can also impact performance, depending on the task.

Impact on Perception of Interaction Capabilities and Environment

1.3.1.1 Affordances and Environment Perception

In real life, people progressively learn which tasks they are able to do or not. From the youngest age, people learn that their hands can grab objects, but also where an object should be grabbed depending on its shape and centre of gravity. Legs let humans walk, or step over obstacles. Arms have a limited length that does not let people grab remote objects. People use the perception of their own body to know what is feasible. In 1979, Gibson's ecological approach introduced this idea of a strong link between the environment and the human perception system [START_REF] Gibson | The Ecological Approach to Visual Perception[END_REF]]. People perceive what actions the environment offers by considering their own body abilities. In the VE, users also perceive the affordances of the environment, i.e. "the use or purpose that a thing can have, that people notice as part of the way they see or experience it", by taking their body into account. Some affordances carry over from the real world, when users can interact with objects as they would in real life. There are also unrealistic affordances, such as superpowers offered by VR like the World-in-Miniature technique [START_REF] Stoakley | Virtual reality on a WIM: interactive worlds in miniature[END_REF], a miniaturised version of the environment that lets users move objects in this environment. Developers must be careful about affordances and design choices, as they must guide users and show them what actions are feasible or not. Users' avatars can give clues as to what is possible or not [Seinfeld et al. 2020a]. For example, the degrees of freedom offered by the avatar can limit the interaction. If users are represented by a sphere, they know they can probably translate this representation, and maybe select objects by clicking on a button, but they will not be able to perform precise hand poses to grab objects. Also, depending on the hand appearance, people can have the impression of having better tactile sensitivity [Schwind et al. 2018a]. The avatar, in relation to the affordances offered by the environment, tells users what they can probably do or not. This is similar to 2D desktop applications, where a text cursor indicates that it can be placed somewhere to add or delete characters, or a hand cursor implies that users can drag and drop. Avatars can also be used to show extended abilities. For example, an avatar with an extendable arm can imply that users can reach remote objects [START_REF] Feuchtner | Extending the Body for Interaction with Reality[END_REF]. Also, similarly to real life [START_REF] Hall | The hidden dimension[END_REF]], the virtual body serves as a reference frame to interact with the environment. It is even more essential in VR, as users tend to show a size and distance perception bias, usually underestimating distances and object size [START_REF] Plumert | Distance Perception in Real and Virtual Environments[END_REF].
It has been demonstrated that displaying an avatar can improve these perceptions [START_REF] Mohler | The Effect of Viewing a Self-Avatar on Distance Judgments in an HMD-Based Virtual Environment[END_REF]]. In a multi-user application, the virtual body helps to understand the VE scale, but the other users' size can have more impact, especially if there are many users [START_REF] Langbehn | Scale matters! Analysis of dominant scale estimation in the presence of conflicting cues in multi-scale collaborative virtual environments[END_REF]. McManus et al. [2011] did not find an effect of a self-avatar on distance estimation, contrary to previous works. They hypothesise that this is due to participants seeing their avatar in a mirror, which potentially decreased their sense of embodiment (because of visual differences), thus lowering the impact of cues provided by the avatar. The avatar can also help users decide whether they can perform an action. Lin et al. [2012] showed that the presence of an avatar changed users' perception of being able to step over a pole, but had no significant influence on their decision to pass through a doorway, as recently confirmed by [START_REF] Bhargava | Comparative Evaluation of Viewing and Self-Representation on Passability Affordances to a Realistic Sliding Doorway in Real and Immersive Virtual Environments[END_REF]. However, the choice of rotating the shoulders to pass through an aperture and avoid collisions was found to be influenced by the presence of an avatar [Mestre et al. 2016]. Similarly, the avatar's gender can also have an influence on affordances, in particular in collaborative VEs. [START_REF] Buck | Interpersonal Affordances and Social Dynamics in Collaborative Immersive Virtual Environments: Passing Together Through Apertures[END_REF] studied two-person joint actions while passing through apertures, and found an effect of the avatar's gender on participants' behaviour. However, these behaviours are somewhat different from and more complex than the ones observed in real life. For manipulation tasks, it was found that hands with a personalised size improve the correct estimation of object size [START_REF] Jung | Over My Hand: Using a Personalized Hand in VR to Improve Object Size Estimation, Body Ownership, and Presence[END_REF], and that this effect can also depend on the realism of the avatar's appearance [START_REF] Ogawa | Object Size Perception in Immersive Virtual Reality: Avatar Realism Affects the Way We Perceive[END_REF]. This is consistent with early results showing individual differences during manipulation of virtual objects and the importance of calibrating the experience for each individual [Wang et al. 1997]. Similarly, virtual foot size can influence affordance and environment perception. Having enlarged virtual feet affects both the judgment of stepping over a virtual gap and the estimation of its width [START_REF] Jun | Big foot: Using the size of a virtual foot to scale gap width[END_REF]. For tasks involving full-body motions, the avatar also helps to estimate users' height compared to different types of obstacles. For example, having a full virtual body helps users know whether they can step off a ledge or pass under a pole [START_REF] Bodenheimer | The Effect of Avatar Model in Stepping off a Ledge in an Immersive Virtual Environment[END_REF]. In this between-subjects study, three virtual representations were compared: no avatar, a line avatar and a full-body gender-matched avatar.
They found a significant difference between the no-avatar condition and both the line and full-body avatars, with people having more difficulty estimating their abilities without an avatar. The fact that there was no significant difference between the line and full-body avatars suggests that avatar realism is not very important for affordance judgment, at least in the case of stepping off a ledge.

Visual Occlusions

Occlusions often serve as a depth cue, helping people compare depth between objects [Ono et al. 1988]. However, the avatar can occlude virtual objects and therefore hinder interaction. The more body parts are represented, the higher the risk of this occurring. This can prove critical in applications providing only visual feedback, contrary to real life where other types of feedback (haptic, auditory) can prevent people from being bothered by occlusions. For example, a study found that users perform better with a virtual hand than with a whole virtual arm [Tran et al. 2017]. The authors hypothesised that this is due to the whole arm hiding more of the scene and creating occlusions. This is also consistent with the study by [START_REF] Argelaguet | The role of interaction in virtual embodiment: Effects of the virtual hand representation[END_REF], where users performed better with skeleton hands than with realistic hands, which generate more occlusions (see Figure 1.5). In order to minimise occlusions, some potential solutions have been considered. For example, the avatar's hands can become transparent [START_REF] Zhai | The "Silk Cursor": Investigating Transparency for 3D Target Acquisition[END_REF][START_REF] Buchmann | Interaction with Partially Transparent Hands and Objects[END_REF] when users approach a virtual object. This maintains an anthropomorphic representation of the user while keeping the virtual object visible through this representation. Providing additional feedback of the object hidden by the avatar (e.g. with a supplementary view from another perspective) [START_REF] Bichlmeier | The Virtual Mirror: A New Interaction Paradigm for Augmented Reality Environments[END_REF] is also an alternative. The latter has the advantage of not changing the appearance of the avatar, at the expense of higher task complexity.

Impact on Perception of Distorted Motion

Thanks to the avatar, it is possible that users do not notice motion distortion. Users' visual perception dominates proprioception [START_REF] Burns | The hand is slower than the eye: a quantitative exploration of visual dominance over proprioception[END_REF], making this illusion possible. For example, even if users perform a small movement in real life, if their avatar moves a lot, they will trust the visual feedback more and think that they performed a larger movement. Several studies investigated the thresholds beyond which users notice this modification of their movements. It was found that displaying a realistic hand instead of a spherical cursor increases the detection thresholds during remapped movements by more than 30% [Ogawa et al. 2020a]. When reaching for an object, participants did not detect the horizontal shift applied to their movement as much in the realistic hand condition as in the spherical cursor condition (see illustration in Figure 1.6).

Figure 1.6 -The participant is represented by either a realistic hand (on the left image) or a spherical pointer (on the right). The transparent hand represents the user's real hand, not displayed in the VE. Image from [Ogawa et al. 2020a].
Another study found that subjects who felt a sense of ownership towards their avatar tended to detect distorted motion less [START_REF] Burin | Body ownership increases the interference between observed and executed movements[END_REF]. A study by [START_REF] Bourdin | Altered visual feedback from an embodied avatar unconsciously influences movement amplitude and muscle activity[END_REF] found that it is possible to impact muscular activity by altering the avatar's motion, without users noticing. The task used was flexing the elbow to an angle of 90°, with an elastic band. The virtual arm was flexed by either the same angle, 75° or 105°. The senses of ownership and agency were not impacted by the distortion. While the avatar seems to help increase detection thresholds for manipulation tasks, it was found that displaying feet during redirected walking does not impact sensitivity to translation gains [START_REF] Kruse | I can see on my feet while walking: Sensitivity to translation gains with visible feet[END_REF]. Similarly, no significant result was found by [START_REF] Reimer | The Influence of Full-Body Representation on Translation and Curvature Gain[END_REF] when using a full-body representation. There was also no significant effect of virtual representation on the detection threshold in a study on redirected jumping [Li et al. 2021]. The presence of an avatar therefore seems to have less impact on whole-body movements.

Coherency between Avatars and Input Devices Can Influence Interaction

There is a strong relationship between the chosen user representation and the input device used [Seinfeld et al. 2020a]; e.g. it is rare to use hand tracking to control a simple sphere cursor. The user representation can also impact the way users hold the input device, how they manipulate objects [START_REF] Kadri | The visual appearance of user's avatar can influence the manipulation of both real devices and virtual objects[END_REF], as well as their preferred input device. For instance, previous work demonstrated that users prefer hand tracking to controllers when represented by realistic virtual hands, because it is more realistic and fun [START_REF] Moehring | Effective manipulation of virtual objects within arm's reach[END_REF]Lin et al. 2019]. However, designers choose controllers in most applications, probably because they are more reliable, accurate [START_REF] Moehring | Effective manipulation of virtual objects within arm's reach[END_REF] and commonplace. Various types of controllers exist, differing in terms of shape, buttons and other available inputs, weight, or haptic feedback provided. It is questionable whether the way they are displayed in the VE should be generic or not. Using controllers therefore raises several questions: when using such controllers for interaction, what user representation should be chosen? And how does this choice impact interaction performance? With most controllers, a virtual hand can be used as a generic representation that directly manipulates virtual objects. In 1999, [START_REF] Bowman | Interaction techniques for common tasks in immersive virtual environments: design, evaluation, and application[END_REF] recommended the use of virtual hands rather than virtual tools like raycasting to ensure efficient positioning and rotation of virtual objects. In this case, the different discrete inputs (buttons on the controllers) can trigger actions such as grabbing virtual objects, a solution often used in current VR applications.
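To make this design concrete, the following is a minimal sketch of such button-triggered grabbing with a virtual hand. The engine types and method names (Controller, pose, trigger_pressed, attach_to) are illustrative assumptions, not the API of any specific engine:

```python
import math

GRAB_RADIUS = 0.08  # metres; an assumed reach within which an object can be grabbed

class VirtualHand:
    """A virtual hand that follows a tracked controller and grabs on button press."""

    def __init__(self, controller):
        self.controller = controller  # assumed tracked-controller object
        self.held_object = None

    def update(self, scene_objects):
        # The hand pose follows the tracked controller one-to-one.
        self.pose = self.controller.pose()

        if self.controller.trigger_pressed() and self.held_object is None:
            # On trigger press, grab the nearest object within reach.
            in_reach = [o for o in scene_objects
                        if math.dist(o.position, self.pose.position) < GRAB_RADIUS]
            if in_reach:
                self.held_object = min(
                    in_reach,
                    key=lambda o: math.dist(o.position, self.pose.position))
                self.held_object.attach_to(self.pose)
        elif not self.controller.trigger_pressed() and self.held_object is not None:
            # On trigger release, drop the object.
            self.held_object.detach()
            self.held_object = None
```

A grab animation of the virtual hand would typically be played on top of this logic, since the button press itself carries no finger information.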
But sometimes the controller used is inspired by a real tool and specific to an application, especially in training applications. In this case, it might be preferred to display the 3D model of this tool in the avatar's hand, to provide realism and coherency with the haptic feedback provided by the input device [START_REF] Insko | Passive haptics significantly enhances virtual environments[END_REF]]. Using virtual tools in the same manner as we use real tools has the advantage of being straightforward. Moreover, using virtual tools was found to activate the same regions in the brain as real tools [START_REF] Rallis | Getting a handle on virtual tools: An examination of the neuronal activity associated with virtual tool use[END_REF], which highlights that virtual tools are treated behaviourally by the brain in a similar manner to physical tools. Displaying virtual tools instead of hands can also be an explicit cue for possible actions in the VE [Seinfeld et al. 2020a]. Choosing a virtual hand or a virtual tool in the avatar's hand as the representation of an input device can therefore change the way users interact.

Impact on Performance

As seen in Section 1.3.1.1, the avatar helps users during interaction by providing spatial awareness [START_REF] Draper | Exploring the Influence of a Virtual Body on Spatial Awareness[END_REF]] and by helping them perceive their capabilities correctly. This can increase performance. When navigating, a realistic self-avatar can help users follow smoother trajectories and avoid collisions [Medeiros et al. 2018]. McManus et al. [2011] investigated the influence of a self-avatar on two types of interaction: grabbing and precisely placing a stamp tool, and a stepping-stone task. They found that the avatar helped to increase performance, but not more than seeing an animated character perform the task. Additionally, avatar appearance can vary in terms of shape, size or texture, which can influence interaction. The more complex the representation, the more information users can obtain to potentially improve their performance. For instance, people perform better in a bow-shooting task with a full body, and tend to prefer this representation over other simpler representations (only controllers or floating hands) [START_REF] Gao | The Effects of Avatar Visibility on Behavioral Response with or without Mirror-Visual Feedback in Virtual Environments[END_REF]. It was also found that people can solve a puzzle more quickly with an avatar than without, but there was no difference between the conditions with and without feet tracking [Pan and Steed 2019]. Another study found a user preference for performing a tool-based pick-and-place task with a visual hand representation, although performance was similar to performing the same task without one [START_REF] Ricca | Influence of hand visualization on tool-based motor skills training in an immersive VR simulator[END_REF]. Similarly, after a two-week training, there was still no significant difference in motor task performance between a group with a virtual hand representation and one without [Ricca et al. 2021]. The importance of viewing a full virtual body also depends on the task. Pastel et al. [2020] found that the amount of visible body parts had no influence on performance during a grasping task, while seeing a full body led to higher accuracy in a throwing task. In terms of realism, a visually faithful avatar does not outperform a generic avatar in a cognitive manipulation task [Lok et al.
2003] or a robot avatar in a pointing task [Schwind et al. 2018b], suggesting a potentially low impact of human-likeness on performance. Apart from the avatar shape itself (arm length, number of limbs, anthropomorphic or abstract), the appearance can also, consciously perceived or not, imply users' capabilities. For example, stereotypes can impact the way users interact, because they will unconsciously want to interact the way their avatar would, due to the Proteus effect (see Section 1.2.4). For instance, a user plays drums better when represented by a dark-skinned avatar with casual clothes than by a light-skinned avatar with fancy clothes [START_REF] Kilteni | Drumming in immersive virtual reality: the body shapes the way we play[END_REF]] (see illustration in Figure 1.7).

Figure 1.7 -Illustration from the experiment investigating the impact of avatar appearance on the way users drum in VR. Participants are represented either by flat shaded white hands (A), a dark-skinned avatar with casual clothes (B), or a light-skinned avatar with fancy clothes (C). Image from [START_REF] Kilteni | Drumming in immersive virtual reality: the body shapes the way we play[END_REF].

This effect can also appear when users compare their own appearance with others' appearance. For instance, when playing virtual tennis on a Wii console, users tend to decrease their physical activity when they perceive their avatar as more obese than the opponent [Peña et al. 2016]. A self-avatar's musculature can also impact performance, as it was found that embodying an avatar with a muscular appearance decreases perceived exertion during an isometric force task [Kocur et al. 2020a].

The Impact of Interaction Design on the Sense of Embodiment

The avatar can have an impact on interaction, as seen in the previous section, but the reverse is also true. There is a strong link between the interaction technique and the avatar. First, the avatar appearance is linked to the technique, as not all appearances can be controlled by every type of input device, but also because the appearance shows the possible interactions (see Section 1.3.1.1). Second, the avatar is the proxy through which the user can interact: the user controls the avatar, which then interacts with the environment. To do so, users make the avatar move through their own actions. Their actions and the avatar's must be coherent. As we saw previously (see Section 1.2.6.1), visuomotor stimulation is one of the main ways to elicit a sense of embodiment towards a virtual body. As soon as users perform motions that are reflected on their virtual representation, they can feel that this representation is their own body. It therefore seems logical that performing different types of interaction in the VE can affect embodiment. Nevertheless, this impact has not been deeply explored. Most studies on embodiment involving visuomotor stimulation used simple actions, such as moving in front of a mirror [Waltemate et al. 2018]. Several studies used interaction tasks: navigation tasks [Medeiros et al. 2018], manipulation tasks [Lin et al. 2019] or a mix of both [Pan and Steed 2019]; but these studies have not deeply investigated the impact of the task itself on embodiment. Recent studies started to explore factors influencing the sense of embodiment depending on the task performed [Fribourg et al. 2020a], which showed that the factors having the highest influence on embodiment seem to depend on the type of task.
For example, avatar appearance was found to be more important for users in a punching task than in other tasks less focused on the upper limbs, like walking or kicking a soccer ball. In this section, we review the current work on embodiment and interaction. The analysis is organised around the three main design choices for an interaction technique: the input characteristics, the control mechanism and the feedback. The focus is mostly on manipulation techniques, as most studies explored this type of technique, but we will also mention works on other interaction tasks.

Impact of Input Characteristics

Numerous input devices exist, varying for instance in shape, Degrees of Freedom (DoF), input modality and input interfaces (e.g. buttons, joysticks). If the user has a virtual representation, the input device must support both all possible interactions with the VE and the control of this virtual representation. While the input device is usually the same for both avatar control and interaction, as in the case of the virtual hand metaphor, hybrid methods are beginning to emerge using different input devices for avatar control and object manipulation (e.g. hand tracking for hand control and a tablet for manipulation [START_REF] Surale | Tablet-InVR: Exploring the Design Space for Using a Multi-Touch Tablet in Virtual Reality[END_REF]).

Shape

Controllers have various shapes, but most of them are held in a power grip. When the shape resembles a real tool, this provides high interaction fidelity. Input devices adapted to the task can give better performance [START_REF] Pham | Is the Pen Mightier than the Controller? A Comparison of Input Devices for Selection in Virtual and Augmented Reality[END_REF][START_REF] Bhargava | Evaluating Multiple Levels of an Interaction Fidelity Continuum on Performance and Learning in Near-Field Training Simulations[END_REF], as the way the user holds them is more coherent with the task to perform [START_REF] Batmaz | Precision vs. Power Grip: A Comparison of Pen Grip Styles for Selection in Virtual Reality[END_REF]. Little work exists regarding the influence of device shape on embodiment, probably due to the variety of controllers and the potentially low impact of shape. However, shape could have an impact on embodiment, especially depending on the associated virtual model. The way users hold a device can change the way they control their representation and manipulate objects, which can make them feel more or less in control. In particular, this may impact their perception of their avatar if the manipulation is not intuitive. More importantly, depending on the shape, it is crucial to choose an adequate mapping between the controller and the actions performed in the VE.

DoF

For most manipulation techniques, the virtual hands are the tool used to interact with virtual objects, except when an intermediary element is used, like the ray in raycasting techniques. When using a virtual hand, the DoF provided by the input device are mostly used to control this hand. Intuitively, we can think that the more DoF provided for hand control, the higher the level of control and the sense of embodiment. Supporting this theory, hand tracking was found to elicit a higher sense of ownership than controllers, probably because it provides finer control of the user's virtual hand [Lin et al. 2019] (see Figure 1.8).
However, this effect can also depend on the DoF provided by the user representation compared to the DoF provided by the input device. For example, while Argelaguet et al. [2016] found that agency was better with unrealistic hands (sphere and skeleton hand, so with limited DoF) using a Leap Motion, Lougiakis et al. [2020] found no difference between different hand appearances with a similar protocol using controllers. A potential explanation could be that controllers offer fewer DoF and thus reduce expectations in terms of hand control. The more DoF are provided by the input device, the more control users want during manipulation, especially when realistic representations are used [START_REF] Argelaguet | The role of interaction in virtual embodiment: Effects of the virtual hand representation[END_REF]]. These results suggest that more research should be done on combinations of different input devices and avatars during 3D manipulation.

Input Modality and Interaction Fidelity

There are many ways to track users' input. Some devices are external to users, like camera-based tracking, while others must be attached to users, or held. For tracking users' hands, two main input modalities can be considered: full-hand tracking and controllers. While controllers are efficient and commonly used, manipulating objects directly with the hands remains the most realistic way both of interacting and of controlling the avatar. It allows fine manipulation as well as direct control of the virtual hand. This is possible thanks to optical devices (Leap Motion, Kinect) or wearables (mostly gloves). However, it implies having precise and robust finger tracking. As previously discussed, in theory hand tracking offers more DoF and therefore better control and probably higher embodiment [Lin et al. 2019]. However, imperfect tracking can cause mismatches between the visual display and the users' real actions. When the hand movements are constrained by the controllers, there are fewer chances for users to notice mismatches with their real hands. Experiments in which finger tracking provides a good sense of embodiment are usually constrained experiments, for example requiring participants to keep their palm facing down [START_REF] Hoyet | Wow! I Have Six Fingers!" Would You Accept Structural Changes of Your Hand in VR?[END_REF]. Therefore, constraining movements could be one possible solution. Recent input devices (e.g. Oculus Touch, Valve Index) use capacitive sensors for the fingers, providing a good trade-off between the efficient tracking of a controller and more precise finger movements for manipulating objects. It would also be interesting to study the impact of hybrid input [START_REF] Huang | Evaluation of a Hybrid of Hand Gesture and Controller Inputs in Virtual Reality[END_REF]] (using both a controller and hand tracking) on the sense of embodiment. In addition to reliability, controllers often offer buttons as input, which can be useful to trigger actions in the environment and feel in control, instead of, for example, launching an automatic grabbing animation when the hand is near an object. Using buttons offers low-fidelity interaction, but this can be better in terms of performance and acceptance than moderate-fidelity solutions [McMahan et al. 2016]. The level of interaction fidelity does not always impact embodiment.
For example, when performing a transition between two virtual scenes, techniques with higher fidelity, like turning around or putting on a virtual HMD, were not found to elicit a higher sense of ownership than a simulated blink metaphor [START_REF] Oberdörfer | Effects of VE Transition Techniques on Presence, Illusion of Virtual Body Ownership, Efficiency, and Naturalness[END_REF]]. This study investigated the case of a teleportation from one VE to another, but continuous navigation has not been investigated yet. To our knowledge, no study has investigated the impact of locomotion techniques on the sense of embodiment. This question will be explored in one axis of this thesis (see Chapter 3). The best scenario generally remains the high-fidelity case, where the user performs the entire action as in the real world, since complete control of the avatar elicits a higher sense of embodiment than just triggering actions [Fribourg et al. 2020a]. An alternative form of input is brain signals. Work on Brain-Computer Interfaces (BCI) for precise control is still at an early stage. For the moment, it has mostly been studied for controlling walking [Luu et al. 2016;[START_REF] Alchalabi | A multi-modal modified feedback self-paced BCI to control the gait of an avatar[END_REF]. It was already found that a BCI can elicit a higher sense of embodiment than finger movements when making an avatar walk [START_REF] Cohen | Controlling an avatar by thought using real-time fMRI[END_REF]. More important than the input modality itself, what matters is the mapping applied to this input.

Impact of Control and Mapping

A large part of interaction techniques do not use a one-to-one mapping between the input motion and the resulting virtual motion. For example, modifications of the control-display (CD) gain or constrained motions are commonplace in order to increase the precision of freehand manipulation, or to redirect users in the VE. However, such control mechanisms can introduce mismatches between the actions performed by the user and the motions of their avatar. As visuomotor coherency is one of the major inducers of the sense of embodiment [START_REF] Kokkinara | Measuring the Effects through Time of the Influence of Visuomotor and Visuotactile Synchronous Stimulation on a Virtual Body Ownership Illusion[END_REF], such changes in control could have a negative impact on the overall sense of embodiment.

Motion and Interaction Constraints

Constraints can be used to interact more quickly or to respect real-world physics. Most constraints applied during interaction are meant to avoid collisions, or to reduce DoF (for example, limiting the interaction to a particular surface or axis). Constraining the interaction means that the avatar's motion is also constrained, which can create a visual-proprioceptive mismatch and disturb the senses of agency and self-location [START_REF] Pritchard | Non-hierarchical Influence of Visual Form, Touch, and Position Cues on Embodiment, Agency, and Presence in Virtual Reality[END_REF]. First, several studies decided to handle collisions, as it might seem a good idea to create a realistic environment that respects the laws of physics. For instance, when moving an object on a table, it is possible to apply semantic constraints based on real-life physics, as long as the motion of the user is coherent with this constraint (e.g. a horizontal motion on the surface of a table).
However, especially for wide and fast motions, blocking the avatar outside objects can create large offsets between the real hand and its virtual counterpart. This could cause a break in the sense of self-location but also in ownership, which is sensitive to spatial location [START_REF] Ratcliffe | The Effect of Visual, Spatial and Temporal Manipulations on Embodiment and Action[END_REF]]. An alternative is not to handle collisions for this type of movement. If users experience a high sense of presence (i.e. the sense of being in the VE), it was found that they can intuitively avoid collisions, especially when using a realistic avatar [START_REF] Fribourg | Virtual co-embodiment: evaluation of the sense of agency while sharing the control of a virtual body among two individuals[END_REF]]. For finer motion, it was found that handling collisions is preferred by users (see Section 1.4.2.4). Second, constraints separating DoF (for example to move an object precisely in one direction) can also create large mismatches between the user's position and the avatar's. In such a case, it is possible to use explicit constraints. They can be explicitly defined by the users, so they know what motion they can perform or not [START_REF] Hayatpur | Plane, Ray, and Point: Enabling Precise Spatial Manipulations with Shape Constraints[END_REF]. It is also possible to use gestures to determine customised constraints [START_REF] Gloumeau | Pin-NPivot: Object Manipulation Using Pins in Immersive Virtual Environments[END_REF]], which has the advantage of making users move their avatar and potentially appropriate it. To reduce DoF, the classical use of widgets (see an example in Figure 1.9) explicitly constrains the manipulation, which can be more predictable and have less impact on the sense of agency. Widgets can be seen as virtual objects that the user can use to indirectly manipulate other objects. Common constraints on the manipulation still apply to the widgets, though; for example, the grabbing of the widget tool must seem realistic (see Section 1.4.2.4). While widgets are intuitive, they do not provide high-fidelity interaction using the avatar directly. New DoF-separation techniques adapted to the avatar could be invented. Another way to handle constraints could be to dissociate the avatar, which provides a realistic representation of the user, from a second form of representation that is not affected by the constraints. It can be the tool used, a ghost [START_REF] Yang | Implementation and Evaluation of "Just Follow Me": An Immersive, VR-Based, Motion-Training System[END_REF] or a skeleton hand [START_REF] Argelaguet | The role of interaction in virtual embodiment: Effects of the virtual hand representation[END_REF].

Remapped Motion: Modification of CD gain

Avatars provide direct visual feedback of the user's position and motion in the environment. If the CD gain differs from one, the relation between the user's motion and the avatar's is altered, which can impact the sense of embodiment. Typically, interaction methods employ either constant or adaptive gains.

Constant gains

The further the CD gain is from one, the larger the mismatch between the avatar's hand and the user's real hand. Although visual feedback dominates proprioception [START_REF] Burns | The hand is slower than the eye: a quantitative exploration of visual dominance over proprioception[END_REF], and people do not easily notice motion distortion (i.e.
when their motion is amplified or decreased while achieving a task) [START_REF] Debarba | Self-attribution of distorted reaching movements in immersive virtual reality[END_REF] and can even adapt their motion to this distortion [START_REF] Bourdin | Altered visual feedback from an embodied avatar unconsciously influences movement amplitude and muscle activity[END_REF], if the mismatch is too large it can frustrate the user and break the sense of embodiment. A study found that while the senses of agency and ownership are quite resistant to visual-proprioceptive mismatches (induced by a constant offset between the real and virtual hand), the sense of self-location is very sensitive to them [START_REF] Pritchard | Non-hierarchical Influence of Visual Form, Touch, and Position Cues on Embodiment, Agency, and Presence in Virtual Reality[END_REF]. Moreover, [START_REF] Kokkinara | The Effects of Visuomotor Calibration to the Perceived Space and Body, through Embodiment in Immersive Virtual Reality[END_REF] found that when users' movements are amplified, this affects their perception of the VE and agency (contrary to the study by [START_REF] Pritchard | Non-hierarchical Influence of Visual Form, Touch, and Position Cues on Embodiment, Agency, and Presence in Virtual Reality[END_REF]), but not ownership. When the user's motion is distorted and this creates a noticeable mismatch, the user can therefore feel out of control and the sense of agency can be affected. To avoid this, it is better not to apply a gain, especially on fast and wide movements, because it would quickly create a large mismatch between the real and virtual hand. As discussed in Section 1.3.2, realistic hands make remapped motion less noticeable [Ogawa et al. 2020a]. Furthermore, instead of modifying the virtual body's motion, the distortion can also be applied to the virtual world [START_REF] Azmandian | Haptic Retargeting: Dynamic Repurposing of Passive Haptics for Enhanced Virtual Reality Experiences[END_REF]. This is the same principle as redirected walking: the world is translated when users move their head. This is only suited to manipulation tasks involving head motions. A hybrid solution mixing world and body motion distortion can avoid the drawbacks of both techniques. Other tricks can be used, especially to correct the offset after the distortion, such as correcting it while the user's eyes are closed [START_REF] Zenner | Blink-Suppressed Hand Redirection[END_REF].

Adaptive gains

The gain applied could also be adaptive, i.e. its value can vary depending on the conditions. For example, the gain can depend on the distance of the object to the user, closer manipulation necessitating a lower gain to be precise, and further manipulation a higher gain [Ouramdane et al. 2006]. It was found that the acceptance of distorted motion increases when the distortion is gradually introduced [START_REF] Porssut | Reconciling Being in-Control vs. Being Helped for the Execution of Complex Movements in VR[END_REF]. Altering the motion gain slowly in the context of physically tiring tasks still elicits a sense of ownership [Feuchtner and Müller 2018;[START_REF] Wentzel | Improving virtual reality ergonomics through reach-bounded non-linear input amplification[END_REF]. When distortion must be used in an application, it is better to adapt it depending on the situation.
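To make the notion of CD gain concrete, the following is a minimal sketch of constant and distance-adaptive gains applied to hand positions; the function names, the anchor point and the interpolation range are illustrative assumptions rather than the exact formulations of the cited techniques:

```python
import math

def remap_constant(real_pos, anchor, gain):
    """Constant CD gain: the virtual hand's offset from an anchor point
    (e.g. the shoulder, or where the gesture started) is the real offset
    scaled by a fixed factor; gain = 1.0 is a one-to-one mapping."""
    return [anchor[i] + gain * (real_pos[i] - anchor[i]) for i in range(3)]

def remap_adaptive(real_pos, anchor, target_pos,
                   near_gain=1.0, far_gain=2.5, max_range=1.0):
    """Adaptive CD gain: interpolates between a low gain for close, precise
    manipulation and a higher gain for distant targets (max_range is the
    assumed distance, in metres, at which the gain reaches far_gain)."""
    t = min(math.dist(anchor, target_pos) / max_range, 1.0)
    gain = near_gain + t * (far_gain - near_gain)
    return remap_constant(real_pos, anchor, gain)
```

With gains above one, the offset between the real and virtual hand grows with every centimetre of real motion, which is why the studies above recommend keeping gains close to one for fast, wide movements.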
Different gains could also be applied depending on the direction of the movement, as users notice offsets on the horizontal axis more easily than in other dimensions [START_REF] Esmaeili | Detection of Scaled Hand Interactions in Virtual Reality: The Effects of Motion Direction and Task Complexity[END_REF]]. An alternative to avoid the impact of distorted motion on embodiment could be to decouple the tool from the body. An isomorphic control would be kept for the avatar, while the manipulation would be anisomorphic [START_REF] Gloumeau | Pin-NPivot: Object Manipulation Using Pins in Immersive Virtual Environments[END_REF]]. The gains would only be applied to the tool, thus not affecting the avatar itself, for example in out-of-reach interaction. In this thesis, we propose another alternative, which is to use a dual body representation, made of an interactive representation showing the distorted motion and a co-located representation showing the real motion. This alternative is presented in Chapter 4.

Out-of-reach Manipulation

A special type of manipulation that can create a strong mismatch and potentially impact embodiment is out-of-reach manipulation. In the space in which we can reach objects, i.e. our peripersonal space, the challenge is mostly focused on the range of motion, the hand poses, the feedback of actions, or the display of the input device. However, outside the peripersonal space, it is necessary to choose how to represent the action, and how it impacts the avatar. Common out-of-reach manipulation techniques are the Go-Go [START_REF] Poupyrev | The Go-Go Interaction Technique: Non-Linear Mapping for Direct Manipulation in VR[END_REF]] and raycasting techniques [START_REF] Bolt | Put-That-There": Voice and Gesture at the Graphics Interface[END_REF]]. The Go-Go technique applies a non-linear CD gain to the user's input: the more users extend their arm, the further the virtual hand goes. If the upper bound of the CD gain is too high, the mismatch between the user's real hand position and its virtual counterpart can be large. This could potentially impact all the components of the sense of embodiment. Several choices of feedback can be considered for this problem [START_REF] Feuchtner | Extending the Body for Interaction with Reality[END_REF], but all of them can highly impact the sense of embodiment. In the case of full-body avatars, the hand could be detached from the virtual body. However, it has been found that body discontinuity reduces body ownership but does not necessarily impact motor performance [Seinfeld and Müller 2020]. A hand attached to the arm by a rigid wire disturbs embodiment but not skin conductance in reaction to a threat [Tieri et al. 2015]. A potential solution could be to have an extensible arm, which can maintain a correct sense of embodiment until it reaches a certain length [Kilteni et al. 2012b]. Other feedback can be imagined, like a robotic extensible arm, a ghost clone of the hand going towards the object (as already used in training [START_REF] Yang | Implementation and Evaluation of "Just Follow Me": An Immersive, VR-Based, Motion-Training System[END_REF]), or outside assistance like a drone going to the object instead of the user's body. The choice depends on the context of the application and whether realism must be maintained or not.
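As an illustration of the non-linear mapping used by Go-Go, the following is a minimal sketch of its published formulation, in which the virtual hand distance grows quadratically beyond a threshold; the threshold and coefficient values here are illustrative assumptions (the original work places the threshold at roughly two-thirds of the arm's length):

```python
import math

def gogo_distance(real_dist, threshold=0.45, k=0.6):
    """Go-Go mapping: within the threshold (distance in metres from the
    user's chest), the virtual hand follows the real hand one-to-one;
    beyond it, the virtual distance grows quadratically, so a small extra
    arm extension reaches far into the scene. k controls how aggressive
    the non-linear part is."""
    if real_dist < threshold:
        return real_dist
    return real_dist + k * (real_dist - threshold) ** 2

def gogo_hand_position(chest, real_hand):
    """Places the virtual hand along the chest-to-hand direction, at the
    remapped distance computed by gogo_distance."""
    real_dist = math.dist(chest, real_hand)
    if real_dist == 0.0:
        return list(chest)
    scale = gogo_distance(real_dist) / real_dist
    return [chest[i] + scale * (real_hand[i] - chest[i]) for i in range(3)]
```

Beyond the threshold, the offset between the real and virtual hand grows rapidly, which is precisely the mismatch discussed above.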
The raycasting technique originally comes from the pointing technique [START_REF] Bolt | Put-That-There": Voice and Gesture at the Graphics Interface[END_REF]], with an additional ray providing visual feedback of selected objects [Mine 1995]. It has the advantage of being body-referenced but not part of the avatar. However, depending on the way it is represented, it may impact ownership by providing unrealistic feedback. If we consider basic raycasting, in which the ray is linked to the hand, we can display the ray coming from a finger, probably the index, because it is the finger associated with pointing. We could also display a laser pointer in the avatar's hand instead, to maintain realism. The choice can also be impacted by the input device: with hand tracking we might prefer the first option, while the second could be better when a controller is used, or even a tracked stylus [START_REF] Teather | Pointing at 3D targets in a stereo head-tracked virtual environment[END_REF]. Other raycasting techniques exist, referenced to other parts of the body, for example the head. In this case, a virtual headlamp could be used as more coherent feedback. Both techniques have their pros and cons. The Go-Go technique uses direct manipulation, which is more intuitive and makes users move their avatar, helping them appropriate it. The raycasting technique might be safer for self-location and agency, as it does not modify the CD gain and the virtual body does not have to be modified or reshaped to use it. More variants of feedback for out-of-reach techniques and their impact on embodiment should be investigated. The ideal would be to invent a technique that is as intuitive as the Go-Go technique but does not have to modify the avatar.

Control of Dexterous Hand Interactions

Grabbing virtual objects is probably one of the hardest challenges in manipulation. Because of the lack of haptic feedback in VEs, if a one-to-one mapping is applied to hand movements, visual artifacts can appear: unrealistic hand poses or fingers passing through objects. This can affect embodiment. However, it is known that humans are more sensitive to visual interpenetration than to proprioceptive offsets [START_REF] Burns | The hand is slower than the eye: a quantitative exploration of visual dominance over proprioception[END_REF]. They tend to believe what they see more than what their proprioception tells them. Designers therefore often choose to handle collisions and to define hand poses on the virtual objects, so the user is not disturbed by interpenetrations. This solution is usually preferred by users, even though it can decrease performance [Prachyabrued and Borst 2012b]. But if users feel too constrained by the collisions, this may impact the senses of agency and self-location. This also applies to defined hand poses: if they are too different from users' real poses, this could affect embodiment. People usually tend to adapt to the visual feedback provided, and can be disturbed, for example, when the input action required is different from the feedback. This was noticed in the experiment by Lok et al. [2003], in which participants had to do a pinch movement but observed a grasping animation: several participants started performing a grasp movement. Such required adaptation might potentially influence the sense of embodiment. The preferred feedback during the grasping of a ball has been studied, which confirms that handling collisions for fine manipulation is preferred [START_REF] Canales | Virtual Grasping Feedback and Virtual Hand Ownership[END_REF]. Participants preferred visualising their hand outside the object, even though their real fingers were actually entering the sphere. This condition preserved the sense of ownership.
Providing complete control of the virtual hand does not always seem to be the best choice. Assistance in grabbing virtual objects is often used to make the manipulation more intuitive and realistic. This can be done without affecting agency and ownership [START_REF] Porssut | Reconciling Being in-Control vs. Being Helped for the Execution of Complex Movements in VR[END_REF]]. However, if too few hand poses are provided, users may feel limited in their actions, which could impact the sense of agency. And if users can grab an object even with unrealistic hand poses, the application will lack realism. Since defining many hand poses in a VR application can be time-consuming, it would be interesting to study their influence on the sense of embodiment, especially agency.

One-to-One Mapping or Animation

With tracking solutions improving, it is tempting to track users' motions exactly and map them directly to the avatar. This is not always possible, as most tracking systems are still cumbersome or expensive to deploy extensively. Moreover, when the tracking is imperfect, this can create breaks in embodiment, as users can notice the discrepancies [START_REF] Koilias | The Effects of Motion Artifacts on Self-Avatar Agency[END_REF][START_REF] Kokkinara | The Effects of Visuomotor Calibration to the Perceived Space and Body, through Embodiment in Immersive Virtual Reality[END_REF]]. Some papers therefore compare tracking solutions such as Inverse Kinematics (IK) to animations or hybrid solutions. For example, a locomotion technique recently studied in relation to the sense of embodiment is Walking-in-Place (WIP). A first preliminary study investigated two different types of feedback during WIP: an avatar stepping like the user, or an avatar walking [Park and Jang 2019]. Participants reported a preference for the natural walking animation, although it would sometimes display unintended steps. Another study compared a fixed body, a pre-recorded animation and an animation synchronised with users' leg movements using a deep neural network [Lee et al. 2020]. It was found that the synchronised animation was better for ownership. A second experiment in the same study showed that this effect emerges whether users look directly at the legs or at the avatar's shadow. In line with these results, a study investigating prioritised factors to elicit a global sense of embodiment found that when people can progressively improve several factors, they prioritise control over appearance [Fribourg et al. 2020a]. The choice of controlling their avatar with IK instead of having an automatically launched animation was made before improving avatar appearance, and this was particularly the case for a soccer task, where it was chosen before selecting a 1PP. While people like having full control over their avatar, a perfect one-to-one mapping is not always necessary. Movements can be distorted to obtain a visually realistic interaction while still feeling in control [START_REF] Porssut | Reconciling Being in-Control vs. Being Helped for the Execution of Complex Movements in VR[END_REF]]. Movement assistance is a good trade-off to ensure both the sense of agency and reliability.

Impact of Feedback

The feedback provided to the user while performing an action can also impact embodiment. First, the avatar itself is part of the feedback and can impact embodiment, for example if it is modified during manipulation. But the type and characteristics of the feedback provided during an action can also affect embodiment.
Avatar Appearance During Interaction

The avatar is the main source of feedback during motion in the VE. To increase usability or performance, its appearance can be modified during manipulation (see Section 1.3.1.2). However, these changes must be made carefully. Even though avatar appearance is not the most important factor in the sense of embodiment [Fribourg et al. 2020a], several studies showed its importance in inducing a good sense of ownership [Waltemate et al. 2018]. Designers must be aware that modifying it can impact how much users feel like the avatar is part of their real body. For example, the avatar's hands can be made transparent to avoid occlusions. But if the hands are too transparent, this might impact the sense of ownership [Martini et al. 2015] and agency [START_REF] Buchmann | Interaction with Partially Transparent Hands and Objects[END_REF]. It is also possible to use skeleton hands [START_REF] Argelaguet | The role of interaction in virtual embodiment: Effects of the virtual hand representation[END_REF], which provide a good sense of agency but a lower sense of ownership than realistic hands. To ensure a good sense of ownership, it seems important to keep a realistic avatar. When a tool is necessary for manipulation, or when a controller is used as an input device, the feedback at the hand level can be either the hand (controlled by the input device, for example by mapping buttons to the movements of the hand) or the tool held in the hand. The feedback chosen and its coherency with the input device used might impact embodiment. While some studies have started investigating this question [START_REF] Alzayat | Quantitative Measurement of Tool Embodiment for Virtual Reality Input Alternatives[END_REF], this topic needs further research.

Perspective

Perspective is the user's point of view on the VE. It is important for performing an action precisely, especially when the user can only rely on visual feedback. In applications using an avatar, various camera positions can be used, each of them providing a different amount of visual information. Perspective is generally described as either a third-person perspective (the viewpoint is outside the user representation) or a first-person perspective (the user representation is co-located with the real body) (see Figure 1.10). A third-person perspective (3PP) is usually advised for navigation because it provides better spatial awareness, but some recent work found that a first-person perspective (1PP) is actually preferred [Medeiros et al. 2018]. Similarly, [START_REF] Kokkinara | First person perspective of seated participants over a walking virtual body leads to illusory agency over the walking[END_REF] found that a 1PP elicited higher senses of ownership and agency when seated users looked at their virtual avatar walking. For manipulation, a 1PP is usually preferred and more precise [START_REF] Salamin | The Benefits of Third-Person Perspective in Virtual and Augmented Reality?[END_REF][START_REF] Gorisse | First-and Third-Person Perspectives in Immersive Virtual Environments: Presence and Performance Analysis of Embodied Users[END_REF], as the user can directly manipulate objects as they would in real life. Moreover, a 3PP requires additional cognitive load, as users must consider both the transformation they want to apply to a virtual object and the offset between their vision and their avatar. In the literature on the role of perspective in the sense of embodiment, a 1PP was found preferable to elicit embodiment.
It was found that subjects experience a higher sense of ownership from a 1PP, even though it can be experienced from a 3PP to some extent with synchronous motor feedback [START_REF] Galvan Debarba | Characterizing first and third person viewpoints and their alternation for embodied interaction in virtual reality[END_REF][START_REF] Gorisse | First-and Third-Person Perspectives in Immersive Virtual Environments: Presence and Performance Analysis of Embodied Users[END_REF]]. The sense of agency seems less impacted by the perspective, and can be felt from either a 1PP or a 3PP as long as the movements are synchronised. Regarding the sense of self-location, studies have shown that a 3PP usually provokes a drop in the sense of self-location [Maselli and Slater 2014;[START_REF] Gorisse | First-and Third-Person Perspectives in Immersive Virtual Environments: Presence and Performance Analysis of Embodied Users[END_REF]. A potential solution, more complicated to implement, would be to provide both perspectives [Salamin et al. 2008]. In conclusion, it is better to use a 1PP to provide both efficient interaction and a high level of embodiment.

Haptic Feedback

Visual feedback is the main feedback provided in VR. However, in real life, people receive other types of feedback that help during manipulation. Additional feedback like haptic feedback [START_REF] Hirota | Providing force feedback in virtual environments[END_REF] can also be provided in VR and can impact manipulation performance [START_REF] Swapp | Interaction with co-located haptic feedback in virtual reality[END_REF]]. Haptic feedback can be provided through tactile feedback (touch, to feel the texture of an object for example) or force feedback (e.g. users feel a resistance when they collide with an object) [START_REF] Burdea | Force and touch feedback for virtual reality[END_REF]. Nowadays, there are output devices worn directly on the body [Spanlang et al. 2010] that provide vibrotactile feedback and can make users more aware of their virtual body. Synchronous visuotactile feedback provided by an experimenter was one of the first methods used to elicit embodiment towards a virtual body [START_REF] Slater | Towards a digital body: the virtual arm illusion[END_REF]]. If users feel a touch and at the same time observe a stimulation on the virtual body that could cause this touch, the brain interprets the virtual body as their own. A study found that haptic feedback provided by wearable devices can increase the sense of embodiment [START_REF] Fröhner | Can Wearable Haptic Devices Foster the Embodiment of Virtual Limbs?[END_REF], especially force feedback. A recent study also supports these results, finding that interaction force feedback had a positive effect on the sense of agency, but no main effect on the sense of ownership [START_REF] Akselrod | Contribution of interaction force to the sense of hand ownership and the sense of hand agency[END_REF]. With mid-air interfaces, audio and haptic feedback can be more efficient than visual feedback at increasing the sense of agency [START_REF] Martinez | Agency in mid-air interfaces[END_REF]. In addition to the information it provides during manipulation, haptic feedback is therefore also a way of increasing the sense of embodiment and should be used when possible. It can also be beneficial during locomotion.
For example, it was found that providing proprioceptive feedback using tendon vibration while looking at the avatar walking can enhance the sense of embodiment [Leonardis et al. 2014]. When it is not possible to provide haptic feedback, pseudo-haptic feedback can be used. This technique can create an illusion of weight by modulating the gain applied to users' motion [START_REF] Jauregui | Toward "Pseudo-Haptic Avatars": Modifying the Visual Animation of Self-Avatar Can Simulate the Perception of Weight Lifting[END_REF]]. However, this should be used with caution when using avatars, as it can create a mismatch between users and their avatar (see Section 1.4.2.2).

Auditory Feedback

Another type of feedback that can be provided to users is auditory feedback. Auditory feedback provides important information when manipulating objects in real life. Like haptic feedback, it can inform us of collisions, and it can also help in perceiving textures [START_REF] Kyung | Multi-sensory Perception of Roughness: Empirical Study on Effects of Vibrotactile Feedback and Auditory Feedback in Texture Perception[END_REF]. However, its relation to the sense of embodiment has been less studied than that of other types of feedback. The major study on auditory feedback and virtual embodiment is by Tajadura et al. [2017], who found that embodiment of a child's virtual body can be enhanced when the avatar's voice and appearance are coherent. One study investigated the role of auditory cues in the RHI: synchronous auditory feedback seems to strengthen the illusion [START_REF] Radziun | Auditory cues influence the rubberhand illusion[END_REF]. With a hand-clapping task in a non-immersive VE, no significant effect of auditory feedback on virtual body ownership was found, suggesting a potentially low impact of this type of feedback on the ownership illusion [START_REF] Roth | A simplified inverse kinematic approach for embodied VR applications[END_REF]. Several studies exist on the link between body perception and sound [START_REF] Azañón | Multimodal contributions to body representation[END_REF]. For example, when the sound heard as a small hammer hits a person's hand is the sound of a hammer hitting a piece of marble, it can give the person the illusion that the hand is made of marble [START_REF] Senna | The Marble-Hand Illusion[END_REF]. Because sound informs the user of a contact with an object, it can also be used to create a long-arm illusion [Tajadura-Jiménez et al. 2015b]. During locomotion, audio frequency can impact body weight perception [Tajadura-Jiménez et al. 2015a]. However, to our knowledge, no study has been done on the impact of auditory feedback during interaction on the sense of embodiment.

Make the Avatar Central

To have interaction techniques compatible with an avatar, the idea is to make the avatar central to the interaction. Techniques that are based on the body itself (using it as input or as a spatial reference) are inherently more compatible with an avatar. Techniques invented to use proprioception, for example, use the body as a reference [Slater and Usoh 1994;Mine et al. 1997]. This also has the advantage of making users process information more rapidly, as information placed on or near the body improves information detection in some tasks [Seinfeld et al. 2020b]. There are therefore interrelations between the sense of embodiment and interaction techniques.
From the knowledge of these interrelations, it is possible to adapt existing techniques to make them more compatible with avatars, and to invent new techniques that will be inherently compatible with avatars. The state of the art shows more existing papers on manipulation techniques and virtual embodiment than on other types of interaction techniques. From the analysis of the existing literature, we proposed several preliminary guidelines for manipulation techniques compatible with avatars, which will be presented in Chapter 5.

Conclusion of the State of the Art

VR immerses users in a whole new world. In this alternative world, avatars are useful to interact with the environment, i.e. to manipulate objects or navigate in this environment. To fully experience having a new body, it is better to feel like it is our real body. The sense of embodiment in VR is a recent and ongoing topic of research, inspired by real-world illusions such as the RHI. It is still unclear how it is elicited and why its level differs between individuals. Moreover, while being able to act through a body is an efficient way to appropriate it, the impact of interaction on embodiment has been little studied. While some studies are starting to show interrelations between the sense of embodiment and interaction in VR, especially for manipulation techniques, we lack knowledge on the impact of virtual representations on existing interaction techniques. We also do not yet fully understand how interaction design choices can affect the different components of embodiment. Interaction tasks in VR are varied, from object manipulation to locomotion, as could be their interrelations with embodiment. There is therefore a need to better understand these interrelations in order to design new techniques that are more compatible with high-fidelity avatars. The goal is to have rich interaction and customised full-body avatars in future VR applications, to fully experience all the advantages of VR. As a first step, the next chapter will focus on the sense of embodiment and why it may be elicited differently between users.

"I need to be myself, I can't be no one else" Oasis

Chapter 2

UNDERSTANDING INDIVIDUAL DIFFERENCES IN THE SENSE OF EMBODIMENT

Abstract: This chapter reports an exploratory study aiming at identifying user internal factors (personality traits and body awareness) that might cause either a resistance or a predisposition to feel a sense of embodiment towards a virtual avatar. To this purpose, we conducted an experiment (n=123) in which participants were immersed in a virtual environment and embodied in a gender-matched generic virtual avatar through a head-mounted display. After an exposure phase in which they had to perform a number of visuomotor tasks (during 2 minutes), a virtual character entered the virtual scene and stabbed the participants' virtual hand with a knife. The participants' sense of embodiment was measured, as well as several personality traits (Big Five traits and locus of control) and body awareness, to evaluate the influence of participants' personality on the acceptance of the virtual body.

Introduction

The sense of embodiment is the central aspect linking users to their avatar. In the related work, we presented many studies that investigated external factors influencing the sense of embodiment. These factors are linked to the avatar itself (e.g., appearance [Banakou et al.; Peck et al. 2013], realism [Maselli and Slater 2013]) or to the apparatus (e.g., degree of control [Debarba et al.], nature of the stimulation [Kokkinara and Slater]). However, they are "external factors", i.e., mostly linked to the avatar itself or to the experiment protocol, and not "internal factors", i.e. linked to the user's characteristics (e.g. personality or background experience). Indeed, while most studies are able to show general trends of the influence of such "external" factors, the inter-user variability remains non-negligible. In practice, we can observe that some people easily believe in the illusion, while others are on the contrary totally refractory. This observation led us to investigate how individual differences could influence the sense of embodiment in VR. Such individual differences have been found for the sense of presence [Sacau et al.; Murray et al.] and for the RHI [Asai et al.; Kállai et al.]. Better understanding the emergence of different embodiment scores between individuals could be a key to better understanding how embodiment is elicited, and to better controlling participant samples in embodiment experiments. The aim of this chapter is to study such potential individual factors influencing the sense of embodiment, to better understand this phenomenon. We therefore conducted an exploratory experiment in which we investigated the link between "internal" factors (personality traits and body awareness) and the sense of embodiment. This experiment was divided into three phases: adaptation, induction and threat. In the adaptation phase, participants were able to freely explore the environment and their avatar; in the induction phase, participants had to reproduce a series of visuomotor tasks (see Figure 2.1, left); and in the threat phase, a virtual character appeared in the environment and threatened the avatar's hand with a knife (see Figure 2.1, right).

Experiment Design

Participants

One hundred and twenty-three participants (age min=18, max=60, avg=30.3±9.0, 58 women and 65 men) took part in our experiment. The majority of them were students and staff from our research center. All participants freely volunteered for the experiment and most of them were curious about VR. They received neither course credits nor financial compensation. They were all naive to the purpose of the experiment and had normal or corrected-to-normal vision. People wearing glasses could keep them if they did not cause any discomfort. All participants gave written and informed consent. The study conformed to the declaration of Helsinki, and was approved by the local ethical committee. Seventy-six participants reported having no previous experience in VR, twenty-five having some previous experience in VR, and twenty-two being experts.
Experimental Protocol

Before the experiment: Upon their arrival, participants were first briefed about the experiment, then read and signed the consent form. They were then equipped with the HMD and the controllers, were asked to sit on a chair in front of the table where the experiment would be conducted, and a calibration phase was performed to adapt the avatar to their dimensions. More precisely, the global scale of the avatar was first adapted to match the height of the participant. Then, participants were asked to take a seated T-pose (arms extended to the sides) in order to measure their arm span using both controllers. The distance between the two controllers was used to adjust the avatar's arm length, while the headset position was used to scale the avatar's spine so that the avatar's head position matched the user's. Participants were then asked to freely discover the environment. We did not impose a fixed duration for the adaptation phase, yet all participants were encouraged to explore the scene and their avatar, and to look into the mirror. When they were ready, they could start the task.

Experimental task: Participants sat in front of a real table and saw a similar co-located virtual table, while being immersed in the VE from a first-person perspective. They were asked to put their hands on the virtual table (on two white spots), thereby receiving passive haptic feedback from the physical table. They held in their hands the real controllers, which were also represented in the virtual world for consistency. A virtual screen was positioned in front of them, on the table, and a virtual mirror was located on their left (Figure 2.1). We chose to use a mirror as it is supposed to induce a greater sense of ownership [González-Franco et al.; Jenkinson and Preston]. Also, we decided to induce the sense of ownership using visuomotor feedback, since it has been shown to be stronger than visuotactile synchronisation for inducing body ownership [Kokkinara and Slater]. 2D trajectories were displayed on the screen (see Figure 2.2), which participants were instructed to reproduce in front of them, using either their right or left hand according to the instruction provided. The trajectories presented were chosen to be relatively simple (figure of eight, circle, triangle, etc.) to avoid a high cognitive load, which could have distracted participants from their avatar and the environment. After each drawing, they had to put both hands back on the white spots on the table. The task lasted two minutes, during which participants saw their avatar moving synchronously with their movements. After completion of the task, a virtual character entered the room and stabbed the virtual hand with a knife. We measured the reaction to the threat (hand motion), in order to inspect its potential correlation with the sense of ownership.

After the experiment: Participants were asked to fill in a number of questionnaires. First they filled in a demographic questionnaire, then they answered questions about embodiment, as well as questions about presence.
They also filled in several personality questionnaires and a body awareness questionnaire. The collected data are presented in more detail in Section 2.2.4.

Apparatus

The experiment was developed using Unity 2018.1.6f1. Participants saw the VE through an HTC Vive PRO HMD, while their hand movements were tracked using the Vive controllers. The FinalIK plugin was used to animate the participants' avatar with inverse kinematics and to provide visuomotor feedback, based on the participants' head (HMD) and hand (controllers) movements. During the experiment, two avatar models were used to match the participant's gender (see the male and female avatars in Figure 2.3). Because appearance is a major contributor to the sense of embodiment, using personalised avatars might lead to high embodiment ratings for most participants [Waltemate et al. 2018], which would prevent us from exploring the influence of individual traits on embodiment. We therefore decided to use gender-matched generic avatars to obtain a higher variability in embodiment ratings, i.e., to obtain both low and high embodiment ratings across participants. Also, the animation of the virtual character threatening the participant's hand at the end of the task was recorded using an Xsens motion capture system prior to the experiment, and mapped onto another virtual character (see Figure 2.1, right).

Collected Data

Embodiment and Presence Questionnaires

To measure embodiment, participants were asked to fill in the subjective questionnaire proposed by Gonzalez-Franco and Peck [2018]. It is composed of questions divided into several categories: body ownership, agency, tactile sensations (not used in our experiment), location, external appearance and response to external stimuli (referred to as threat perception in this chapter). Participants therefore answered 19 questions on a 7-point Likert scale. Given the shortness (6 questions) of the Slater-Usoh-Steed (SUS) presence questionnaire [Usoh et al. 2000] typically used in such experiments in the past, we also decided to include this questionnaire. People rated each question on a 7-point Likert scale. Our goal was to confirm previous results linking personality to presence, despite the already substantial number of questionnaires in our study. In particular, assessing how similar our results are to previous work on the relation between personality and presence would also be of value to further validate potential results on the sense of embodiment.

Behavioural Response

In addition to the embodiment questionnaire, we recorded participants' hand movements during the threat (character stabbing the participant's virtual hand at the end of the experiment), in order to evaluate how much participants considered the avatar to be their own body. In this situation, typically used in previous studies to provide another measure of the sense of embodiment, participants who feel embodied in their avatar are more likely to remove their hand [Ehrsson et al.], suggesting that they consider the virtual body to be their own.

Psychological Variables

As we wished to explore the effect of several aspects of personality and ability on embodiment, we selected a number of questionnaires which participants filled in after the experiment.
These questionnaires were chosen to explore aspects we believed could influence embodiment, while ensuring that the total duration for answering them remained reasonable. Our choice was also based on the few studies conducted on internal factors, which used for example body awareness and locus of control as independent variables. Four questionnaires were therefore selected (BFI, TIPI, IPC, BAQ), which are presented below with our corresponding exploratory questions of interest. Research questions were preferred over precise hypotheses because of the lack of literature and the wide range of traits evaluated in our experiment. On average, participants took 20 minutes to answer all the questionnaires. It is also important to mention that the experiment was conducted on a French campus, and that we therefore used the French validated translations of these international questionnaires. All the Likert scales used are the ones proposed in each validated questionnaire, going from the lower bound (strong disagreement) to the upper bound (strong agreement).

1) Big Five. The personality trait taxonomy is a common way of describing one's personality, even though not the only one that exists. In this model, called the Big Five or OCEAN model [Goldberg], personality is typically described by five dimensions: Openness to experience, Conscientiousness, Extraversion, Agreeableness and Neuroticism. While several questionnaires of various complexity exist to assess these personality dimensions (e.g., the 240-item NEO PI-R, the 44-item BFI), we chose two questionnaires for our experiment. First, we used the 44-item Big Five Inventory (BFI) [John et al.], adapted to French by [Plaisant et al.], where each item is rated on a 5-point Likert scale. This is a relatively short questionnaire, compared for example to the 240-item NEO PI-R, but still quite complete [John and Srivastava], and used in research [Jacques et al.; Bélisle and Bodur]. It therefore seemed well suited to an experiment involving several questionnaires. In this regard, a first question of interest was to study whether some personality traits were correlated with the different components of embodiment (Q1). However, despite the popularity of the BFI questionnaire and its relatively short length (44 items), being able to quickly evaluate how a user's personality traits would affect embodiment prior to a VR experience would be greatly facilitated if shorter questionnaires could be used.
Therefore, we decided to include a second personality questionnaire, namely the Ten Item Personality Inventory (TIPI) [Gosling et al.], in its French version [Storme et al.], where each item is rated on a 7-point Likert scale. In particular, our goal was to study the extent to which the TIPI questionnaire would enable us to explain embodiment felt in VR compared to the more complete BFI questionnaire (Q2). Our research questions related to the influence of Big Five personality traits on the sense of embodiment were therefore:

Q1: Are some of the users' Big Five traits correlated with their sense of embodiment in VR?

Q2: Do the TIPI and BFI questionnaires show similar personality trait correlations with the sense of embodiment in VR?

2) Locus of Control (LoC), i.e., the degree to which people believe that they have control over the outcome of events in their lives as opposed to external forces beyond their control, is another set of personality traits, which was demonstrated to have an influence on the sense of presence [Murray et al.; Wallach et al. 2010] and on the sense of agency [Jeunet et al.]. We therefore included a questionnaire to measure one's LoC, and used the common 24-item IPC scale [Levenson 1981], translated into French by Loas et al. [1994], using a 6-point Likert scale. This questionnaire determines LoC according to three dimensions: Internal, Powerful others and Chance. Typically, someone with an external LoC will tend to think that everything happens because of fate (chance type of locus) or powerful people (powerful others type of locus), while someone with an internal LoC will tend to think that he/she can change events through his/her own will and actions. As a previous study [Jeunet et al.] showed that an internal LoC is positively correlated with the sense of agency, we expected to find the same result in our study (Q3). Moreover, while this seems in agreement with the fact that the LoC is directly related to action, there is no information about the possible influence of LoC on the sense of ownership. Therefore, we investigated whether ownership could also be correlated with an internal LoC (Q4), since some studies found that the sense of ownership and the sense of agency are based on similar processes and can strengthen each other [Kalckert and Ehrsson; Dummer et al.]. Our research questions related to the influence of internal and external LoC on the sense of embodiment were therefore:

Q3: Is an internal LoC positively correlated with the sense of agency, as previously found [Jeunet et al.]?

Q4: Is the sense of ownership also correlated with an internal LoC?
3) Body Awareness is a cognitive ability that makes us aware of our body processes. Because it can change the way we perceive our real body, it could possibly also influence the perception of our virtual body. While body awareness was not found to influence the RHI in the physical world [David et al.], another study showed that it could be disturbed by the body ownership illusion [Tsakiris et al. 2006]. Furthermore, to our knowledge, no study has investigated its influence on the sense of embodiment in VR. We therefore decided to include this personal ability in our study, and used the 18-item Body Awareness Questionnaire (BAQ) [Shields et al.], where each item is rated on a 7-point Likert scale, translated into French by [Dumont]. This questionnaire is a self-report assessment of body awareness, estimating the attention to and consciousness of our body processes, often used in research because of its high reliability and validity compared to other self-report instruments [Mehling et al. 2009]. Our research question related to the influence of body awareness on the sense of embodiment was therefore:

Q5: Is body awareness correlated with the sense of embodiment?

Results

In order to analyse the link between internal factors and the sense of embodiment, Section 2.3.1 first explores the relationship between the embodiment scores (ownership, agency, self-location, external appearance and threat perception) and the Big Five, IPC and body awareness data. The data from the TIPI questionnaire are not discussed as there were no significant results (Q2 is therefore answered negatively). Then, Section 2.3.3 analyses the behavioural responses and Section 2.3.4 the presence results.

Embodiment Questionnaire (Version 1) Analysis

The analysis in this section was performed using the embodiment questionnaire filled in by the participants, i.e. the version from 2018 [González-Franco and Peck 2018]. An additional analysis inspired by the second version of the questionnaire is presented in the following section. Before conducting the following analyses, we analysed the potential effects of gender and experience in VR on the embodiment questionnaires. Regarding gender, we performed Mann-Whitney tests for each question to detect significant differences due to such potential confounding factors. The analysis only showed two significant differences, for body ownership related questions (O2 and O5), between male and female participants, while no significant differences were found for the other questions and factors. A summary of the embodiment answers is presented in Table 2.1, separated into men's and women's answers when relevant. In order to prevent such differences from adding noise to the rest of the analysis, the population was split into two groups (men and women) for the body ownership analysis. Regarding experience in VR and video games, we used Pearson correlations to explore a potential influence on embodiment. We only found a positive correlation between agency and experience in video games. For this reason, experience in video games is only reported in the section on Agency.
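As an illustration, this confound screening can be reproduced with a few lines of code. The following is a minimal sketch, not the exact script used for the study; the file and column names are hypothetical:

```python
# Minimal sketch of the gender and experience screening
# (one participant per row; names are hypothetical).
import pandas as pd
from scipy.stats import mannwhitneyu, pearsonr

df = pd.read_csv("embodiment_responses.csv")
questions = ["O1", "O2", "O3", "O4", "O5", "A1", "A2", "A3", "A4"]

# Gender effect: one Mann-Whitney U test per question.
for q in questions:
    men = df.loc[df["gender"] == "M", q]
    women = df.loc[df["gender"] == "F", q]
    u, p = mannwhitneyu(men, women, alternative="two-sided")
    if p < 0.05:  # in our data, only O2 and O5 differed
        print(f"{q}: U={u:.1f}, p={p:.3f} -> analyse men and women separately")

# Experience effect: Pearson correlation with the agency score.
r, p = pearsonr(df["agency_score"], df["videogame_experience"])
print(f"agency vs. video game experience: r={r:.3f}, p={p:.3f}")
```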
As we ran the same analysis for each aspect of embodiment, we summarise the procedure here for clarity. More precisely, we ran a separate Polychoric PCA for each aspect of embodiment. Pearson correlations were then computed (see summary in Table 2.5) to explore potential links between the different components of embodiment and the results of the internal factors questionnaires. As we did not find results for the sense of self-location, this part was removed from the analysis. A multiple linear regression was performed when correlations were found for a given component, as in the other cases it would be difficult to find a good model with variables that are not correlated with the studied component. We used a backward stepwise method to select the best predictors, using the Akaike Information Criterion (AIC), i.e. we started with all the variables and progressively removed them so as to minimise the AIC value. We chose the AIC over the adjusted R² as it also accounts for the complexity of the model [Faraway]. All the multiple linear regression models computed are summarised in Table 2.3.

Table 2.1 - Statistical summary of the embodiment questionnaire responses. For each question we report the median and the first and third quartiles. If there was a significant difference between the men's and women's answers, we report the summary for each group. O: body ownership, A: agency, L: self-location, EA: external appearance, T: threat perception.

Body Ownership

As answers to the body ownership questions were significantly different for men and women, we performed two separate analyses for men and women participants.

Men. Two components were selected (O_PC1,M and O_PC2,M), which explained 65% of the variance. O_PC1,M was mainly influenced by questions O1, O2, O4 and O5, while O_PC2,M was mostly influenced by O3. We only found a positive correlation between O_PC1,M and the chance type of LoC (r = 0.248, p = 0.047). As we did not find correlations between O_PC2,M and any of the variables, we do not consider it further. We performed a multiple linear regression for O_PC1,M using the different psychological variables. We obtained a model with internal LoC, chance LoC and body awareness (adjusted R² = 0.169, p = 0.003).

Women. Two components were selected (O_PC1,F and O_PC2,F), which explain 63% of the variance. O_PC1,F was mainly influenced by questions O1, O2 and O3, while O_PC2,F was mostly influenced by O4 and O5. We found a negative correlation between openness and O_PC1,F (r = -0.293, p = 0.026), as well as positive correlations between O_PC2,F and both the chance type of LoC (r = 0.366, p = 0.005) and the powerful others LoC (r = 0.427, p < 0.001). The linear regression for O_PC1,F gave us a model with openness, conscientiousness, internal LoC, powerful others LoC, chance LoC and body awareness (adjusted R² = 0.239, p = 0.002).

Agency

One component was selected (A_PC1), which explains 48% of the variance. It was mainly influenced by questions A1, A2 and A4. We found a positive correlation between A_PC1 and the internal LoC (r = 0.248, p = 0.006). The linear model found for A_PC1 was composed of agreeableness, internal LoC and body awareness (R² = 0.079, p = 0.005). We also found a positive correlation between A_PC1 and the level of experience in video games (r = 0.184, p = 0.04).
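To make the selection procedure summarised at the beginning of this section concrete, the sketch below shows a backward stepwise selection driven by the AIC, written with the statsmodels library. It is an illustration under assumed variable names, not the exact analysis script:

```python
# Backward stepwise selection by AIC: start from the full model and
# greedily drop the predictor whose removal lowers the AIC the most.
import statsmodels.api as sm

def backward_aic(y, X):
    predictors = list(X.columns)
    best_aic = sm.OLS(y, sm.add_constant(X[predictors])).fit().aic
    improved = True
    while improved and len(predictors) > 1:
        improved = False
        for p in predictors:
            trial = [q for q in predictors if q != p]
            aic = sm.OLS(y, sm.add_constant(X[trial])).fit().aic
            if aic < best_aic:
                best_aic, best_set, improved = aic, trial, True
        if improved:
            predictors = best_set
    return sm.OLS(y, sm.add_constant(X[predictors])).fit()

# Example call (hypothetical columns):
# model = backward_aic(df["O_PC1"],
#                      df[["openness", "internal", "powerful", "chance",
#                          "body_awareness"]])
```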
External Appearance

One component was selected (EA_PC1), which explains 51% of the variance and was influenced positively by all the questions on appearance (EA1 to EA4). We only found correlations with the internal LoC (r = 0.195, p = 0.031) and the chance LoC (r = 0.201, p = 0.026). However, as it seemed surprising that external appearance was simultaneously influenced by opposite (i.e., internal and external) types of LoC, we performed further Pearson correlations separately on the male and female populations, and found that external appearance was more strongly related to the internal LoC for men (r = 0.306, p = 0.013) and to the chance LoC for women (r = 0.311, p = 0.017). We performed two multiple linear regressions, separating the male and female populations. For men, the optimised model was composed of neuroticism, internal LoC and chance LoC (adjusted R² = 0.169, p = 0.002). For women, the optimised model was only composed of chance LoC (adjusted R² = 0.081, p = 0.017).

Threat Perception

One component was selected (T_PC1), which explains 84% of the variance. All the questions about threat perception (T1 to T4) contributed positively to this component. A positive correlation was found with neuroticism (r = 0.258, p = 0.004). The linear regression gave us a model with agreeableness and neuroticism (adjusted R² = 0.088, p = 0.001).

Additional Analysis: Embodiment Questionnaire (Version 2)

We performed an additional analysis with the updated version of the embodiment questionnaire [Peck and Gonzalez-Franco 2021]. This version has 16 questions (instead of 25 in the first version) that are classified into four components obtained from a PCA. It was published after we performed the first analysis, but we wanted to support a posteriori the validity of our results. Because we did not include several of its questions in our study (question R2 about proprioceptive drift because the avatar was co-located, question R7 because we did not need a customised question, and questions R14 to R16 because we had no tactile stimulation), we could not perform the analysis recommended in the paper. An analysis with a PCA, similar to the one performed to obtain the new version of the embodiment questionnaire, was therefore performed instead. The goal was to see whether the components obtained were similar to the ones found in the second version of the questionnaire, and whether the results for our study remained similar. In this study, we used 19 out of the 24 questions from the first version of the questionnaire. First, the Pearson correlations between the scores of the different questions were computed. Questions correlated (r > 0.3) with fewer than three other questions, or too highly correlated with other questions (r > 0.9), were removed. At this step, questions A2, A4, L1 and EA3 were removed because they were correlated with fewer than three other questions. Then, the individual Kaiser-Meyer-Olkin (KMO) value for each remaining question was computed. Question O5 had the lowest KMO value (0.56) and was correlated (r > 0.3) with only three other questions, so we decided to remove it. In the end, 14 questions remained, on which a Polychoric PCA was conducted. Using the empirical Kaiser criterion (the eigenvalue must be greater than or equal to one), three principal components were kept. The fourth component had an eigenvalue equal to 0.98, which is close to 1. To explain more than 60% of the variance, we decided to keep four components.
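The item screening and component retention just described can be sketched as follows. This is a simplified illustration: `responses` is a hypothetical DataFrame of the remaining items, and Pearson correlations stand in for the polychoric ones, which require a dedicated package (e.g. R's psych):

```python
# KMO screening, PCA of the correlation matrix, Kaiser-based retention
# and oblimin rotation of the loadings.
import numpy as np
from factor_analyzer import Rotator, calculate_kmo

R = responses.corr().to_numpy()
kmo_per_item, kmo_total = calculate_kmo(responses)   # e.g. O5 scored 0.56 and was dropped

eigvals, eigvecs = np.linalg.eigh(R)                 # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_keep = 4   # Kaiser criterion gave 3; the 4th (eigenvalue 0.98) was also kept
explained = eigvals[:n_keep].sum() / eigvals.sum()   # > 60% of the variance

loadings = eigvecs[:, :n_keep] * np.sqrt(eigvals[:n_keep])
rotated = Rotator(method="oblimin").fit_transform(loadings)
kept = np.abs(rotated) > 0.4   # questions assigned to each component
```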
We then performed an oblimin rotation and kept questions having a loading greater than |0.4|. Following Peck and Gonzalez-Franco [2021], we computed the score for each component with equal weight for each question, which gave the following formulas:

-TC1: T1 + T2 + T3 + T4 (questions about perception of threat)
-TC2: O1 - O2 + O4 + EA4 (questions about ownership and appearance)
-TC3: O3 + A3 + L2 + EA1 + EA2 (questions mostly about feeling out of the body or losing control)
-TC4: -O2 + A1 (one question about ownership and one about agency)

These components explain 65.4% of the variance. We then computed the correlations between these components and the different psychological variables (Big Five traits, LoC and body awareness). We found a positive correlation between TC1 and neuroticism (r = 0.258, p < 0.01), which is in line with the results from the first version of the questionnaire. We also found positive correlations between TC2 (asking globally about ownership) and both the internal (r = 0.239, p < 0.01) and chance LoC (r = 0.252, p < 0.01). This is similar to correlations already found in the previous analysis, where external appearance was correlated with both the internal and chance LoC. TC3 was positively correlated with both neuroticism (r = 0.181, p < 0.05) and chance LoC (r = 0.186, p < 0.05). It is difficult to compare this result to the previous ones, as this component mixes several categories of questions. Finally, TC4 (asking about being in control and the avatar not being someone else) was positively correlated with the internal LoC (r = 0.287, p < 0.01), which is also in line with the previous results.

Threat Response

In order to evaluate participants' response to the threat in a more objective manner, we also computed their accumulated right hand motion (the stabbed hand) during the threat period, to determine whether they reacted to this threat. More precisely, we computed the accumulated right hand motion between the moment when the knife was above the hand (approximately 0.5s before the stab) and the moment when the character removed the knife (approximately 1.5s after the stab). Six participants were removed from the analysis because of missing data (controller positions were not saved), one because he/she removed his/her hand without holding the controller, and one because he/she removed his/her hand before the stab. Across participants, the average accumulated hand motion was 9.15 ± 19.7 cm (median=1.93cm; min=1.05cm; max=114cm), which was positively correlated with the threat perception score (r = 0.561, p < 0.001). In addition, we computed Pearson correlations between the participants' accumulated hand motion and their psychological variables, to determine whether their personality traits or abilities influenced the degree to which they reacted to the threat, but we did not find significant correlations. However, as the threat response can also be considered as a binary variable (whether participants reacted or not), we performed a further analysis by computing a multiple logistic regression model on whether participants reacted. In particular, we considered that participants reacted to the threat if their accumulated hand motion was greater than 5cm (a threshold experimentally identified from the experimenter's records of whether participants actually reacted). With this criterion, 30 participants were considered to have reacted to the stab out of the 115 participants kept for this analysis.
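As an illustration, the motion measure and the binary reaction criterion can be computed as follows. This is a minimal sketch under assumed names: the arrays are hypothetical per-frame logs, with positions in metres:

```python
# Accumulated right-hand motion over the threat window, then binarised
# with the 5 cm reaction threshold.
import numpy as np

def accumulated_motion(positions, timestamps, t_stab, before=0.5, after=1.5):
    """Sum of frame-to-frame displacements of the right controller
    between 0.5 s before and 1.5 s after the stab."""
    mask = (timestamps >= t_stab - before) & (timestamps <= t_stab + after)
    window = positions[mask]                      # shape (n_frames, 3)
    steps = np.linalg.norm(np.diff(window, axis=0), axis=1)
    return steps.sum()                            # metres

motion_cm = accumulated_motion(positions, timestamps, t_stab) * 100.0
reacted = motion_cm > 5.0   # threshold identified from the experimenter's records
```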
We then used the AIC to select the variables to remove from the multiple logistic regression model, and found a model composed only of neuroticism (β = 0.027, p = 0.022) and chance LoC (β = -0.062, p = 0.140).

Personality Influence on Presence

As previously mentioned, we also asked participants to answer presence questionnaires, to assess how similar our results on the relation between personality and presence were to previous work, as well as to strengthen potential results on the sense of embodiment. While we used a Polychoric PCA to analyse the embodiment results, previous work on presence commonly used a simple mean score over the SUS questions. In order to compare our results to previous work, we therefore followed the same procedure. As for the sense of embodiment, we studied Pearson correlations to identify potential links between presence and personality traits. We found positive correlations with agreeableness (r = 0.227, p = 0.012) and with the internal LoC (r = 0.203, p = 0.024) (see Table 2.4). We performed a multiple linear regression, which gave us a model composed of agreeableness, neuroticism, internal LoC and chance LoC (adjusted R² = 0.134, p < 0.001).

Additional Results

In order to get a clearer understanding of the potential relations between the different aspects of embodiment, we also computed Pearson correlations between all the components related to the sense of embodiment. Across men and women participants, we first found a positive correlation between external appearance (EA_PC1) and threat perception (T_PC1) (r = 0.327, p < 0.001), showing that participants tended to be more sensitive to the threat on their avatar when they also gave higher ratings to the questions on external appearance. As the body ownership analyses were performed separately on men and women, because of significant differences in the answers to some of the questions, we also looked at correlations separately in this context. First, it is interesting to note that we found a positive correlation between external appearance (EA_PC1) and all the men's and women's ownership components (O_PC1,M: r = 0.345, p = 0.005; O_PC2,M: r = 0.287, p = 0.020; O_PC1,F: r = 0.564, p < 0.001; O_PC2,F: r = 0.593, p < 0.001). For both men and women, we also found a correlation between threat perception (T_PC1) and their first ownership component (O_PC1,M: r = 0.355, p = 0.004; O_PC1,F: r = 0.289, p = 0.028).

Discussion

This experiment on the sense of embodiment, in which 123 participants took part, is to our knowledge the first VR experiment measuring embodiment together with several personality traits and body awareness. Our aim was to explore how internal factors (individual differences) could modulate virtual embodiment experiences. In this section, we discuss the results obtained for each aspect of embodiment, as well as future work.

Sense of Embodiment

For each aspect, we first discuss the global scores obtained. While we used PCA for the analysis, here we mention scores for the different aspects using the summation equations provided by [González-Franco and Peck 2018], in order to provide a simpler interpretation for discussion. Then we discuss the results concerning the Big Five traits and the locus of control, and finally other noteworthy results, such as differences between men and women or interesting correlations with other variables.

Body Ownership

Overall, body ownership scores were average (M = 4.3; SD = 1.1), with a high variability, i.e. people presenting either low or high levels of body ownership. This could be explained by the individual differences between participants and/or by the use of a generic avatar. Our results also show that the question related to the co-located virtual body (O1) was rated higher than the question related to the avatar visible in the mirror (O4), which seems to suggest that the use of a mirror could be detrimental in some cases, as it might emphasise differences in appearance. Regarding the influence of personality traits, our results first demonstrated that body ownership is to some extent correlated with the external dimensions of the locus of control, both for male and female participants. This result answers Q4, but not as expected. Since the senses of ownership and agency usually strengthen each other, we expected ownership to be correlated with an internal locus of control, as is the case for agency. However, our results suggest that body ownership is actually more influenced by the external dimensions of the locus of control. Typically, people with an external locus of control tend to think that things happening to them depend mostly on the influence of other people or chance. Therefore, our results suggest that people with an external locus of control might feel embodied in a virtual representation more easily than people with an internal locus of control. In conducting this study, we expected some of the Big Five personality traits to influence ownership (Q1). However, our results did not show any evidence of such an influence. Similarly, we also measured body awareness, i.e., the cognitive ability of being aware of body processes, which we supposed could also influence embodiment in general (Q5). Even though body ownership scores tended to be low for people with high body awareness, the results were not significant. Likewise, experience in VR and video games, which could have influenced body ownership, did not show any influence, suggesting that experience in virtual-type applications does not affect how one accepts a virtual body as one's own. Finally, we also noticed that women gave higher scores to question O5, which means that they had a stronger feeling that the avatar in the mirror was someone else. It is difficult to assess what might explain this result, but a possible assumption is that the avatars were not personalised. More precisely, it is possible that the male avatar had more average physical characteristics with respect to the population of the experiment than the female avatar (brown hair and average build for the male avatar, compared to blond hair and a skinny body for the female avatar). While this result shows that differences can appear between different user groups, as previously found in other studies [Schwind et al.], and that the visual resemblance of the avatar might also influence these results, further studies would be necessary to better understand these influences.

Agency

On average, agency scores were high (M = 5.1; SD = 0.7), showing that participants felt in control of the avatar's movements. First, we found that the sense of agency is correlated with the internal dimension of the locus of control, which positively answers our question Q3 and is in line with previous findings [Jeunet et al.].
It therefore seems that people who feel a higher control over the events that happen to them also tend to experience a higher control over their virtual body, and might therefore feel more responsible for the avatar's movements. However, we did not find correlations with the other personality traits from the Big Five. Interestingly, we also found a positive correlation with the level of experience in video games, showing that the more experience people have with video games, the more they feel in control of their avatar. This result is also supported by the feedback of participants with high gaming experience, who reported that they felt in control both because the avatar moved well according to their movements and because they felt no latency in the displayed movements.

External Appearance

External appearance scores were overall below average (M = 3.0; SD = 1.3), meaning that participants did not really have the feeling that the avatar looked like them. Our results showed that the acceptance of the avatar's external appearance was positively correlated with the internal and the chance dimensions of the locus of control. This result is particularly surprising, as it shows that external appearance is simultaneously influenced by opposite (internal and external) types of locus of control. However, further exploration showed that these effects were due to external appearance being more correlated with an internal locus of control for men, but with a chance locus for women. These results cannot be interpreted in terms of differences between men's and women's personality traits and can only be interpreted separately. Women with a chance locus of control tend to have higher external appearance scores, i.e., they tend to think that the self-avatar is a look-alike avatar. This result is similar to the one obtained for ownership, which was also correlated for women with an external locus of control, and could be explained by the same reasons evoked previously. In contrast, men with an internal locus of control tended to have higher external appearance scores. This means that men who think they can control their own life tend to believe more readily that their avatar is similar to them. Although these interesting results also highlight differences between population groups, deeper studies would be required to clarify these effects.

Threat Perception

Threat perception scores were particularly low (M = 2.9; SD = 2.0), which is in accordance with the number of people who actually reacted to the stab (30 out of the 115 whose reactions to the threat were recorded). This is supported by the feedback from several participants who did not react to the threat and reported that the VE seemed "safe", so that they did not feel threatened. Moreover, we found that threat perception was correlated with neuroticism. Since people with a high degree of neuroticism tend to be anxious, it is understandable that these same people were more impacted by the introduction of the threat. The fear of a threat is also commonly considered as an expression of the sense of ownership in studies exploring the sense of embodiment. While we did observe in our results that threat perception was also correlated with one component of ownership for both men and women, this is however not enough to establish a link between neuroticism and the sense of ownership. In addition to assessing threat through questionnaires, we also measured the right hand motion in reaction to the stab.
The model that best distinguished people who reacted from those who did not was also influenced by neuroticism, which confirms the influence of neuroticism on the response to threat.

Presence

Our goal in investigating whether we found effects of personality traits on presence similar to those of previous studies was to validate that our experimental setup provided a comparable basis, which would simultaneously strengthen the value of any results found for the influence of personality traits on the sense of embodiment. As expected, we found correlations similar to those of previous studies, namely a correlation between presence and agreeableness [Sacau et al.], as well as between presence and an internal locus of control [Wallach et al. 2010].

Conclusion

In this chapter, we presented an exploratory study on the impact of personality traits and body awareness on the sense of embodiment. The major finding of the experiment is that the locus of control is linked to several components of embodiment: the sense of agency is positively correlated with an internal locus of control, and the sense of body ownership is positively correlated with an external locus of control. Interestingly, the two components are not influenced by the same traits, which confirms that they can appear independently. Taken together, our results suggest that the locus of control could be a good predictor of the sense of embodiment when the user embodies an avatar with a similar physical appearance. Given the number of personality and cognitive models and questionnaires in the literature, it was not possible to be exhaustive, and we decided to focus on some of the most common models. As future work, other personality traits could be explored. While this experiment included a visuomotor task, we did not study the impact of this task on embodiment. Nevertheless, interacting with the VE requires using the avatar and can therefore impact the sense of embodiment. In the following chapters, we will explore embodiment during locomotion and manipulation tasks. The next chapter will focus on the influence of the locomotion technique on embodiment, as it has not yet been studied in the community.

"This ride is a journey to, run boy run, the secret inside of you" Woodkid

Chapter 3

STUDYING THE INTERRELATIONS BETWEEN LOCOMOTION TECHNIQUES AND EMBODIMENT

Abstract: This chapter explores the potential interrelations between locomotion and embodiment by focusing on the two following questions: Does the locomotion technique have an impact on the user's sense of embodiment? Does embodying an avatar have an impact on the user's preference and performance depending on the locomotion technique? To address these questions, we conducted a user study (N = 60) exploring the relationship between the locomotion technique and the user's sense of embodiment over a virtual avatar seen from a first-person perspective. Three widely used locomotion techniques were evaluated: real walking, walking-in-place and virtual steering. All participants performed four different tasks involving a different awareness of their virtual avatar. Participants also performed the same tasks without being embodied in an avatar.

Introduction

Locomotion is an essential task in VEs, as it lets users move from one point to another, to explore the VE or to move towards an object.
While several studies had already investigated the link between manipulation techniques and embodiment [Lin et al. 2019], no study had investigated the potential interrelations between locomotion techniques and virtual embodiment. Yet, locomotion techniques were found to influence the sense of presence, and the level of motion involvement could influence the way users perceive their virtual body. This chapter therefore investigates how locomotion techniques can impact the sense of embodiment, and how embodying an avatar can influence the way users navigate in the VE (in terms of performance and preference). To explore these effects, we conducted a mixed-design experiment in VR. We used three widely used locomotion techniques as a between-subjects factor: real walking, walking-in-place and head steering. We chose these techniques because they are all continuous locomotion techniques, with different levels of physical engagement and interaction fidelity. Participants performed four tasks in the VE, with two conditions on their virtual appearance (within-subject factor): a full-body animated avatar or only 3D models of the controllers.

Experiment

Participants

Sixty-five participants volunteered to take part in our experiment. The majority of them were students and staff from our research center. Five participants were removed from the study: two because of problems in data recording, two because of tracking issues and one because of motion sickness during the first block. In the end, sixty participants (age min=20, max=56, avg=28.9±8.4, 25 women and 35 men) took part in our study. Participants did not receive any compensation. They were all naive as to the purpose of the experiment, and gave written and informed consent. The study conformed to the declaration of Helsinki. Twenty-five participants reported having no or little previous experience in VR (score 1 or 2 out of 7), twenty-four some previous experience in VR (score between 3 and 5) and eleven reported being experts (score 6 or 7 out of 7).

Apparatus

The experiment was developed using Unity (version 2019.1.0f2). Users were tracked using an HTC Vive PRO HMD (equipped with the HTC wireless adapter so that participants were not hindered by the cable when navigating), two HTC Vive controllers, two HTC Vive trackers positioned on the ankles to track the feet, and two HTC Vive trackers positioned on a backpack to track the shoulders. We used four HTC Vive lighthouses to track an 8m × 8m area. The VE consisted of a flat grassy ground surrounded by fences, matching the dimensions of the physical space, with trees and hills in the background (see Figure 3.2). The positions of the headset, controllers and ankle trackers were used to animate the avatar, using inverse kinematics (FinalIK plugin). Participants were represented by gender-matched avatars (visible in Figure 3.1).

Figure 3.2 - Left: Setup in the physical environment. The HTC Vive with a wireless adapter was used to enable participants to physically walk in an 8m × 8m area. The tracking of the user's motions was enabled by the two hand controllers and two HTC Vive trackers attached to their ankles. Two additional HTC Vive trackers were positioned on a backpack to track the participants' shoulders. Tracking was done using four HTC Vive lighthouses. Right: Overview of the virtual environment: the navigation area was constrained by the virtual fences, which matched the limits of the physical space.
During calibration, participants were asked to keep their feet and legs straight to adjust the avatar's feet. They also had to stand straight to adjust the avatar's global scale based on their height.

Locomotion Techniques

To better understand the influence of the presence of an avatar depending on the locomotion technique, we selected different locomotion techniques based on Boletsis's taxonomy [Boletsis]: room-scale-based, motion-based, controller-based and teleportation-based. In particular, we were interested in techniques that are commonly used in VR, and that involve a similar motion in the VE but different levels of physical movement. Therefore, we chose the following techniques: Real Walking (RW; room-scale-based), Walking-In-Place (WIP; motion-based) and Steering (S; controller-based). Despite its inclusion in Boletsis's taxonomy, we did not include teleportation in our study, as it creates a discontinuous movement unlike the three selected techniques. The different techniques are detailed below:

-Real Walking (RW): Participants could physically walk in an 8m × 8m area. A one-to-one control mapping was kept between the physical and the virtual motions.

-Walking-In-Place (WIP): we implemented a WIP algorithm which detects walking patterns based on the position of the ankles [Kim et al.; Bruno et al.], which were tracked using Vive trackers. Locomotion was initiated whenever a first step was detected, i.e., when the two ankles' positions had consecutively passed above a given threshold (initial tracker position + 5cm). It was stopped whenever no step had been detected in the last 0.8s. Because participants tend to take lower steps over time, the threshold was decreased to a value equal to the initial tracker position + 2cm after they started stepping. The thresholds were determined empirically through iterative design and several user tests. The direction was determined by the participant's head orientation (a simplified sketch of this detection loop is given below).

-Steering (S): in this technique, participants navigated in the VE at a constant navigation speed while keeping the Vive controller button held down, which is a common approach in both VR and video games. We chose a head-steering implementation (i.e., the navigation direction was determined by the participant's head orientation), as it is commonly used, easy to get familiar with, and because the tasks in our experiment did not require looking in a direction other than the one in which people were headed.

While previous work identified that people usually walk in real situations at a comfortable speed of approximately 1.3m/s [Bohannon], several studies demonstrated that participants tend to walk slower in an immersive VE [Mohler et al.; Agethen et al.; Berton et al.], especially when wearing an HMD.
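The WIP detection described above can be summarised by the following per-frame sketch. It is illustrative Python rather than the actual Unity/C# code; in particular, the real detector required the two ankles to pass the threshold consecutively before locomotion started, which is simplified here:

```python
# Walking-in-place detection: 5 cm ankle-lift threshold for the first
# step, 2 cm once walking, stop after 0.8 s without a step, constant
# 1.0 m/s translation along the head direction.
import numpy as np

START_OFFSET, SUSTAIN_OFFSET = 0.05, 0.02   # metres above the calibrated rest height
STEP_TIMEOUT, SPEED = 0.8, 1.0              # seconds, metres per second

class WipDetector:
    def __init__(self, rest_left, rest_right):
        self.rest_left, self.rest_right = rest_left, rest_right
        self.walking, self.last_step = False, -np.inf

    def update(self, t, left, right, head_dir, pos, dt):
        """Advance the 2D virtual position 'pos' by one frame."""
        offset = SUSTAIN_OFFSET if self.walking else START_OFFSET
        if left > self.rest_left + offset or right > self.rest_right + offset:
            self.last_step, self.walking = t, True
        if t - self.last_step > STEP_TIMEOUT:   # no step for 0.8 s: stop
            self.walking = False
        if self.walking:
            pos = pos + SPEED * dt * head_dir   # head yaw gives the direction
        return pos
```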
We therefore set the navigation speed of both the Steering and Walking-In-Place techniques to 1.0m/s, which should approximately match the speed observed while walking with an HMD [Agethen et al.; Berton et al.]. At this point, it also seems important to point out that for all three techniques in our experiment, the avatar's movements were always driven by the participants' actual movements when they were embodied in a virtual avatar. This means that a) for the real walking technique, the avatar walked as the participant physically walked; b) for the walking-in-place technique, the avatar performed steps on the spot like the participant, while being translated forward in the direction in which the participant was looking; c) for virtual steering, since participants did not move forward in the real world, the avatar stood like the participant, moved its feet like the participant when they turned on the spot, or "slid" along the ground in the direction participants were looking towards.

Experimental Tasks

The experiment was divided into four tasks (see Figure 3.1), which were designed to ensure that participants were aware of their virtual body, i.e. that parts of the avatar were directly visible from the participant's perspective. The main design principle for defining the tasks was to ensure that participants were able to see different parts of their virtual body during the experiment in a consistent way.

-Training task: The first task was a training task, whose goal was to familiarise participants with the VE and the locomotion technique. More precisely, participants were instructed to navigate in the VE in order to pick mushrooms on the ground by touching them with either hand. This task lasted one minute.

-Corridor task: The aim of this task was to enable participants to see their entire avatar moving while navigating in a VE. They were therefore instructed to walk back and forth in a corridor with mirrors at both ends. A line in front of each mirror indicated where participants were to turn around, so as not to go through the mirror. This task lasted two minutes.

-Path-following task: The aim of this task was to make participants look at their avatar, in particular at their feet, while navigating. They were therefore instructed to follow a target moving along an eight-shaped trajectory, both displayed at ground level. The target was always 50cm ahead of them, so that participants had to tilt their head, thus ensuring that the avatar's legs were inside their field of view. This task lasted two minutes.

-Columns task: The aim of this task was to study people's behaviour while navigating around virtual obstacles. They were instructed to navigate in a VE filled with 2-metre high columns and to pick mushrooms as in the training task. In this task, people could therefore mostly see their hands and arms. The mushrooms were always placed at the same positions. They were displayed in groups of ten, with new mushrooms appearing once all those of a series were picked.
The distance between columns was 1.45 times the shoulder width of the scaled avatar, to ensure that any participant could move through without necessarily turning their shoulders (shoulder rotation is typically required for aperture-to-shoulder ratios lower than 1.4 [START_REF] Warren | Visual Guidance of Walking Through Apertures: Body-Scaled Information for Affordances[END_REF]Mestre et al. 2016]). This task lasted two minutes.

The tasks were always presented in the same order: training, corridor, path-following and columns. As we were interested in evaluating participants' overall subjective experience, we considered that keeping the same order would reduce the variability in their subjective assessments. Furthermore, we did not compare the tasks with one another.

Experimental Protocol and Design

First, participants signed the consent form. Then they were equipped with the HMD, the controllers and the trackers. We had two independent variables: Technique and Appearance. For the Technique, we used a between-subjects design, so each participant used only one of the three locomotion techniques presented in Section 3.2.3. There were twenty participants per group. For the Appearance condition, a within-subject design was chosen, with two levels: Full-Body Avatar (FBA) and No Avatar (NA). In the FBA condition, participants were represented by a full-body gender-matched avatar holding a 3D model of an HTC Vive controller in each hand, while in the NA condition, only the 3D models of the Vive controllers were displayed. The experiment was therefore divided into two blocks (FBA and NA). In total, participants performed 2 × 4 tasks. To minimize potential ordering effects, the blocks were counterbalanced and there was a 5-minute break between them. Participants therefore either performed all four tasks (in the order presented above) in the FBA condition and then all four tasks in the NA condition, or the opposite (see Table 3.1). The whole session lasted about 40 minutes, including the participant's welcome, performing the experiment, and answering questionnaires. As dependent variables, we used both objective (performance) and subjective (user preference, embodiment) criteria.

Objective Data

Objective data were measured in only two tasks (the path-following and columns tasks). No objective data were recorded during the training task, as its only aim was to make users familiar with the given locomotion technique. Similarly, there was no relevant performance criterion in the corridor task.

-Path-following task: The mean distance between the path to follow and the user's position was computed as an objective measure of performance.

-Columns task: We measured both the number of mushrooms picked during the columns task and the number of collisions with the columns as objective measures of performance. Because the number of collisions can also depend on the distance covered by participants, we normalized the number of collisions by the distance covered (in metres). We counted collisions between the participants' shoulders (tracked by the additional Vive trackers) and the columns, hereafter referred to as shoulder collisions.

Subjective Data

Subjective questionnaires were also used to capture users' perception of the experience. Embodiment questionnaire: We used the ownership, agency and self-location questions from the embodiment questionnaire proposed by Gonzalez-Franco and Peck [2018] (see Table 4.4).
These questions were evaluated on a 7-point Likert scale from 1 (Strongly disagree) to 7 (Strongly agree). The questionnaire was administered only after the FBA block.

Representation questionnaire: After each block, participants were asked questions related to the virtual representation (R questions in Table 3.3). These questions concerned the virtual representation seen both indirectly in the mirror and from the first-person perspective in the other tasks. The goals of these questions were mainly to detect whether people were disturbed by their representation (R_MirrorLogical and R_OtherLogical) and whether they appreciated it (R_MirrorPleasing and R_OtherPleasing). Each item was measured on a 7-point semantic differential scale. Participants answered these questions once after each block.

Comparison questionnaire: Participants were also asked to answer several additional questions after the last block, related to whether they preferred the presence of an avatar or not (see Table 3.4). Questions C_preferBody, C_pathEasier and C_columnsEasier were rated on a 7-point semantic differential scale, while question C_behaviourDifferent was rated on a 7-point Likert scale. Participants could also associate an adjective with each Appearance condition, explaining why they preferred having an avatar or not, as well as make free comments about the experiment.

SSQ: Additionally, the Simulator Sickness Questionnaire (SSQ) [START_REF] Kennedy | Simulator sickness questionnaire: An enhanced method for quantifying simulator sickness[END_REF]] was administered to participants before and after the experiment, and we computed the difference between pre-experiment and post-experiment scores to verify that there were no differences in terms of sickness between the conditions.

Hypotheses

Previous studies have shown the impact of physical involvement on the sense of embodiment [START_REF] Synofzik | I move, therefore I am: A new theoretical framework to investigate agency and ownership[END_REF]. Moving in the VE seems to let the brain compare the movement intention with the feedback of the action, and a match between the two is one of the ways to increase the sense of agency [START_REF] Synofzik | The experience of agency: an interplay between prediction and postdiction[END_REF]. This sense of agency can in turn strengthen the sense of ownership [START_REF] Kalckert | Moving a Rubber Hand that Feels Like Your Own: A Dissociation of Ownership and Agency[END_REF]]. Thus, we expected that the higher the match between proprioception and sensory feedback, the higher the felt embodiment would be, leading to the following hypothesis:

H1 The sense of embodiment is influenced by the locomotion technique. In particular, we expected the sense of embodiment to be higher for techniques with a higher interaction fidelity (i.e., RW > WIP > S).

In addition, because an avatar provides better spatial awareness [START_REF] Draper | Exploring the Influence of a Virtual Body on Spatial Awareness[END_REF]], we expected that participants would avoid the columns more in the columns task in the FBA condition. Participants could also see the columns as a threat to their avatar. Thus our second main hypothesis was:

H2 The presence of an avatar leads to fewer collisions with the virtual environment.
Furthermore, we also included secondary hypotheses related to the other collected variables:

H3 People find having a full-body avatar while navigating in a VE more pleasing and logical (R questions).

H4 People think that their representation is more disturbing when using WIP and S than RW (R questions).

Analysis

Mixed two-way ANOVA analyses were used, taking into account both the Technique (between-subjects) and the Appearance (within-subjects) factors. We tested the normality assumption using the Shapiro-Wilk normality test. When the data were not normal, we applied an Aligned Rank Transformation (ART) to the data before performing the ANOVA analysis. Tukey HSD post-hoc tests (α = .05) were conducted to check significance for pairwise comparisons. Kruskal-Wallis tests were conducted instead when analysing results depending only on the Technique (between-subjects), as the Shapiro-Wilk tests significantly rejected the null hypothesis that the data were normally distributed. Dunn's post-hoc tests were conducted to perform pairwise comparisons.

Effect of the Locomotion Technique on Embodiment

First of all, we used a Mann-Whitney test to identify potential ordering effects between the FBA and NA conditions. As we did not find any significant effect, order will not be considered in the remainder of the analysis. We performed Kruskal-Wallis tests to explore differences in embodiment ratings between the techniques, using the questions about the senses of ownership, agency and self-location. We only found a significant effect of the Technique on the self-location question SL_located (χ² = 9.623, p < 0.01). Dunn's test revealed that the RW condition obtained significantly higher scores than the WIP and S conditions (see values in Table 4.4). We did not find any significant difference between the three techniques for any of the other questions. These results therefore do not support H1. Medians and quartiles for each question are available in Table 4.4. Although not significantly different, the scores for the questions O_someoneElse and O_someoneElseMirror were higher in the RW condition. We also computed the overall ownership, agency and self-location ratings using the recommendations from Gonzalez-Franco and Peck [2018], and did not find any significant difference. Overall, the agency ratings (M = 5.00, SD = 0.81) as well as the self-location ratings (M = 5.43, SD = 1.09) were high. For ownership, the scores were slightly lower (M = 4.60, SD = 1.21).

Table 3.4 (excerpt) - Comparison questions, median [IQR]:
- C_pathEasier: "I felt that performing the path-following task was easier without an avatar (1) vs. with an avatar (7)", median 4 [3;6]
- C_columnsEasier: "I felt that performing the columns task was easier without an avatar (1) vs. with an avatar (7)", median 4 [4;5]
- C_behaviourDifferent: "I have the impression that my behaviour was different with an avatar vs. without an avatar", median 5 [4;6]

Users' Performance and Collision Avoidance

We performed a mixed two-way ANOVA to study the potential differences between the techniques (between-subjects factor) and the avatar conditions (within-subjects factor) on the objective measures of the path-following task (mean distance to trajectory) and the columns task (shoulder collisions, number of collected mushrooms). We did not find any main effect of the Appearance or interaction effects on these objective measures (see Table 3.5). In particular, this result does not support H2 (no influence on the number of collisions).
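As an illustration of the between-subjects pipeline described in the Analysis subsection above, here is a minimal sketch with hypothetical ratings; it assumes the scipy and scikit-posthocs packages are available, while the ART-based mixed ANOVA itself is usually run with dedicated tooling (e.g., the ARTool R package) and is not reproduced here:

```python
import numpy as np
from scipy import stats
import scikit_posthocs as sp

rng = np.random.default_rng(0)
# Hypothetical self-location ratings (1-7) for the three technique groups.
rw, wip, steering = (rng.integers(1, 8, 20) for _ in range(3))

# Normality check per group; non-normal Likert data motivate Kruskal-Wallis.
for name, ratings in [("RW", rw), ("WIP", wip), ("S", steering)]:
    w, p = stats.shapiro(ratings)
    print(f"Shapiro-Wilk {name}: W={w:.3f}, p={p:.3f}")

h, p = stats.kruskal(rw, wip, steering)
print(f"Kruskal-Wallis: H={h:.3f}, p={p:.3f}")

if p < 0.05:
    # Dunn's post-hoc pairwise comparisons between the three techniques.
    print(sp.posthoc_dunn([rw, wip, steering]))
```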
The performance results were also confirmed by participants' answers to the subjective questions about the ease of the tasks (questions C_pathEasier and C_columnsEasier in Table 3.4, whose results are detailed in Section 3.3.3.2). Participants did not find the tasks easier in either Appearance condition, which is coherent with the objective data.

Table 3.5 - Performance results: mean distance to trajectory (in cm), ratio of shoulder collisions, and number of picked mushrooms.

Users' Preferences

We also performed a mixed two-way ANOVA to explore the differences in the subjective answers between the conditions (Technique and Appearance). Some questions were meant to indirectly compare the two blocks, while others directly asked participants at the end of the experiment which condition they preferred.

Representation questionnaire

For the questions about the virtual representation, we found a main effect of Appearance for the question R_MirrorLogical (F(1,57) = 4.48, p < 0.05). Post-hoc tests showed that participants felt that being embodied in an avatar was more logical (Mdn = 5, IQR = 3–6) than having no avatar (Mdn = 4, IQR = 3–6). We also found an effect of Appearance for the question R_OtherPleasing (F(1,57) = 9.341, p < 0.01). Only the results of R_MirrorLogical and R_OtherPleasing support H3. Also, we observed that participants provided lower ratings for the R_MirrorLogical and R_MirrorPleasing questions compared to R_OtherLogical and R_OtherPleasing, respectively. Being in front of the mirror seemed to have a negative impact on the participants' answers. No effect of the locomotion Technique was found on the questions about representation, which does not support H4.

Comparison questionnaire

Regarding the questions directly asking participants to compare conditions, the results show a slight preference for the condition with an avatar, with all medians ≥ 4 for C_preferBody. Yet, we observed a polarization for S (see Figure 3.3), with some participants not at all preferring having a virtual body (≤ 2) and others showing a high preference (= 7). This polarization was not visible for the WIP and RW conditions, in which participants provided more consistent answers with a slight preference for the avatar. Regarding the questions asking to compare the level of ease between the two blocks (see Table 3.4, C_pathEasier and C_columnsEasier), participants did not find the path-following and columns tasks easier in one condition or the other (answers around 4), independently of the Technique.

Additional Analyses

While we did not find that the presence of an avatar affected the number of collisions with the environment, previous studies have shown that people embodied in an avatar usually tend to avoid obstacles more in the VE [Mestre et al. 2016;Pan and Steed 2019]. Therefore, we decided to explore this effect in more detail by analyzing whether the number of collisions could be influenced by the level of embodiment reported by participants. To investigate this effect, we computed Spearman correlations between the embodiment questionnaire and the number of collisions in the FBA condition (as we did not ask embodiment questions in the NA condition). Regarding the self-location questions, we found a significant negative correlation between the question SL_located and the ratio of shoulder collisions (r = -0.31, p < 0.05).
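A minimal sketch of this exploratory correlation, with hypothetical per-participant values (SciPy's spearmanr):

```python
from scipy.stats import spearmanr

# Hypothetical per-participant data in the FBA condition.
sl_located = [6, 5, 7, 4, 6, 3, 5]  # SL_located ratings (1-7)
shoulder_collisions = [0.02, 0.05, 0.00, 0.09, 0.03, 0.12, 0.06]  # per metre

rho, p = spearmanr(sl_located, shoulder_collisions)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # the study reports r = -0.31, p < .05
```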
No similar correlations were found for the questions about ownership or agency. This result suggests that people who felt co-located with their virtual body collided less with the environment.

Discussion

This study explored the potential inter-effects between the use of locomotion techniques and avatars. The first goal was to study the effect of the locomotion technique on embodiment, while the second goal was to explore the impact of the presence of an avatar during navigation depending on the technique. In this section, we discuss the results on the sense of embodiment, collision avoidance, and finally user preference.

Embodiment

Our first main hypothesis was that the choice of a locomotion technique would influence the sense of embodiment. In particular, we expected high-fidelity locomotion techniques, requiring more physical movement, to elicit a higher sense of embodiment, similarly to previous results on the sense of presence [START_REF] Slater | Taking steps: the influence of a walking technique on presence in virtual reality[END_REF]Usoh et al. 1999]. Contrary to our expectations, there was no significant difference in the level of embodiment between the locomotion techniques, suggesting that the choice of a technique does not influence the participants' sense of embodiment. Two potential reasons may explain this result.

A first potential explanation is that participants had full control over their avatar's movements, no matter what technique they used, which has been shown to play a primary role in the sense of embodiment [Fribourg et al. 2020a]. For all tasks, when participants moved, the avatar moved accordingly; therefore there was always a coherence between participants' movements and the feedback. Even with the steering technique, participants had to reorient their body to perform the tasks and were aware of their upper-body motions (e.g. holding the controller in front of them, picking mushrooms). Even though participants were mostly stationary when using the walking-in-place and steering techniques, they still felt a high sense of embodiment, which is consistent with studies in which people embodied avatars even though they were not moving [START_REF] Galvan Debarba | Characterizing first and third person viewpoints and their alternation for embodied interaction in virtual reality[END_REF][START_REF] Kokkinara | First person perspective of seated participants over a walking virtual body leads to illusory agency over the walking[END_REF]]. However, contrary to related studies [START_REF] Kokkinara | First person perspective of seated participants over a walking virtual body leads to illusory agency over the walking[END_REF]], the avatar was not necessarily walking in these conditions in our study (especially with the steering technique). Having an avatar sliding on the ground did not seem to affect participants' sense of embodiment.

A second explanation could be that visual incongruities due to animation quality might have slightly disturbed participants. Even though the difference was not significant, we noticed slightly lower ownership scores in the walking condition, which would be in contradiction with our hypothesis. Because it is the most realistic and high-fidelity locomotion technique, people may have higher expectations regarding the match between the visual feedback and their proprioception.
It is known that the match between visual and motor signals plays an important role in inducing a sense of ownership [START_REF] Kokkinara | Measuring the Effects through Time of the Influence of Visuomotor and Visuotactile Synchronous Stimulation on a Virtual Body Ownership Illusion[END_REF], and that motion artifacts can affect the sense of agency while walking [START_REF] Koilias | The Effects of Motion Artifacts on Self-Avatar Agency[END_REF]. In summary, our experiment suggests that full control of avatar movements might contribute more to embodiment than the coherence between avatar movements and camera motion, although artifacts or differences between participants' real movements and the avatar's movements can modulate their sense of embodiment. Further studies will be required to better assess the contributions of upper-body and lower-body control, as well as the impact of animation artifacts on the sense of embodiment.

Collision Avoidance

Our second main hypothesis was that people would collide less with the columns when embodied in a full-body avatar; however, it was not supported by the results. In general, we observed different behaviours among participants: some avoided obstacles while others went through them. Participants who ignored the columns reported that they did so to pick the mushrooms faster, and because, contrary to the people who avoided the columns, they were not disturbed by going through them. This result differs from other studies that found that people tended to avoid obstacles when embodied in a realistic avatar, but those studies used obstacles on the trajectory of the hand [START_REF] Argelaguet | The role of interaction in virtual embodiment: Effects of the virtual hand representation[END_REF] or at a lower height [Pan and Steed 2019]. We believe that the task and the obstacles could explain the difference in our results, as the avatar may not have contributed to avoiding the columns. First, the task required participants to focus on finding and picking the mushrooms, which could have diverted their attention. Second, the obstacles might not have been close enough to compel participants to use a body reference to avoid collisions.

Preference

Logical and pleasing aspects

We expected that people would find the condition with a full-body avatar more logical and pleasing. For the logical aspect, this was the case for the task involving a mirror. However, some participants reported that the avatar's movements were sometimes "disturbing", for example that the knees would bend too much compared to what they were doing. The absence of an avatar was still perceived as less logical, with several participants commenting negatively on seeing only controllers in the mirror condition. Participants did not find it significantly more logical to have a full-body avatar in the other tasks. This may be due to the fact that most VR applications currently use only floating hands or controllers as virtual representations, so participants were not surprised by this simpler representation (controllers only). For the pleasing aspect, the results were significant only for the tasks not involving the mirror. Although participants appreciated having a body visible from a first-person perspective, they did not find it more pleasing to see the avatar in the mirror.
A potential explanation could be that several participants reported being disturbed by the difference in external appearance (clothes, haircut, morphology) of the avatar compared to their own, yielding lower scores on this question.

Preferred condition

Several questions asked participants whether they preferred the condition with or without an avatar. In particular, we expected that participants would prefer the condition with an avatar and would find the navigation tasks easier to perform in that case. The high variance in the answers to the question "I preferred having a body" reveals different categories of people. Some participants were really disturbed by the absence of a body and preferred having an avatar. Such participants used adjectives like "realistic" or "immersive" to describe the condition with an avatar. Other participants did not prefer either condition, or tended to prefer the condition without an avatar, as the avatar could be seen as "an obstacle" or was so different from them that it was more disturbing than helpful. This could potentially be explained by these participants being more task-oriented. They described this condition with adjectives like "easy", "transparent", "unconstrained" or "less disturbing". The most frequently used words for the condition with an avatar were: funny (9), easy (8), disturbing (7), realistic (6), immersive (5) and interesting (5). For the condition without an avatar, they were: easy (9), simple (6), interesting (5) and disturbing (5). While the adjectives for the condition without an avatar were more focused on task achievement (easy, simple), it is interesting to note that positive adjectives related to immersion and realism only appeared for the condition with an avatar. However, while our results show a slight preference for having an avatar in the navigation tasks of our experiment, it is possible that this preference is not directly related to the task but common to virtual experiences in general. It would therefore be interesting to explore potential effects of the task on this preference in future studies.

Ease of tasks

Similarly to the work by Lugrin et al. [2018], the presence of a virtual body to perform the tasks did not seem to impact users in terms of performance or preference. As in their study using an action-based game, our experiment contained goal-oriented tasks, especially the columns task. In particular, some participants of our experiment reported that they only needed the controllers to perform the tasks. They therefore seemed to have been more focused on achieving the task, possibly reducing their awareness of the presence of the avatar. For tasks where they had to notice the avatar (mostly the mirror task and the path-following task), some participants considered the avatar as an obstacle or an annoyance diverting them from the task. Our results need to be considered relative to the tasks, and other tasks could be tested. For example, tasks which need a body reference and a finer control of the avatar, like the stepping task in the work by McManus et al. [2011], could be investigated.

Limitations

A limitation of this study is related to the degree of realism of the avatars and their animation. For instance, the calibration only adjusted the global scale of the avatar, while other papers studying avatars and collisions used more precise calibrations [Mestre et al. 2016].
Several solutions are now starting to appear to easily and precisely calibrate an avatar to the participant's morphology [START_REF] Pujades | The Virtual Caliper: Rapid Creation of Metrically Accurate Avatars from 3D Measurements[END_REF], which could help in the future to more accurately measure collisions with the environment. In terms of animation, several participants were also disturbed by the leg movements, especially when physically walking in front of the mirror. It was previously found that offset joint rotations can affect embodiment [START_REF] Koilias | The Effects of Motion Artifacts on Self-Avatar Agency[END_REF]. It could therefore be interesting to run a similar study with higher-quality motion capture systems to evaluate whether differences in terms of embodiment could be influenced by the animation quality in such cases. Still, animating avatars with Inverse Kinematics as done in this study is a common procedure in embodiment studies [START_REF] Roth | A simplified inverse kinematic approach for embodied VR applications[END_REF][START_REF] González-Franco | The contribution of real-time mirror reflections of motor actions on virtual body ownership in an immersive virtual environment[END_REF]. Even though people felt a sense of embodiment while using the walking-in-place and steering techniques, some participants noticed that the feedback was strange, especially in front of the mirror. It would be interesting to study different types of visual feedback for these techniques. As an example, several studies have started to compare different feedbacks for the walking-in-place technique [Park and Jang 2019;Lee et al. 2020], such as a walking animation, synchronous stepping or a running animation.

Conclusion

The study presented in this chapter explored the interrelations between avatar embodiment and locomotion techniques. It considered three locomotion techniques requiring different levels of user involvement (walking, stepping in place, limited physical movement) and three main tasks involving a different awareness of the virtual avatar, over which participants had full control. Overall, we found that people experienced similar levels of embodiment with the three locomotion techniques and that the coherence of the avatar's motion with respect to the motion of the VE did not create any disruption in the sense of embodiment. Our results represent a first attempt to qualify the interrelations between locomotion and virtual embodiment, and suggest that the locomotion technique used has little influence on the user's sense of embodiment in VR when users have full control of their avatar's movements. In this experiment, users could see exactly the movements they were performing, but this is not always the case. We mentioned that, contrary to locomotion, manipulation techniques have been more widely studied, but this is less the case for techniques that distort users' motion. In the next chapter, we will therefore focus on manipulation, and more specifically on anisomorphic manipulation. This type of interaction can impact embodiment as it creates a mismatch between real and virtual motions.

"Time is running out, a ghost keeping me alive"
Skip the Use

Chapter 4

INVESTIGATING DUAL BODY REPRESENTATIONS DURING ANISOMORPHIC 3D MANIPULATION

Abstract: In virtual reality, several manipulation techniques distort users' motions, for example to reach remote objects or increase precision.
These techniques can become problematic when used with avatars, as they create a mismatch between the action actually performed and the corresponding displayed action, which can negatively impact the sense of embodiment. In this chapter, we propose to use a dual representation during anisomorphic interaction. A co-located representation serves as a spatial reference and reproduces the exact users' motion, while an interactive representation is used for the distorted interaction. We conducted two experiments investigating the use of dual representations with amplified motion (Go-Go technique) and decreased motion (PRISM technique). Two visual appearances for the interactive representation and the co-located one were explored.

Introduction

Many techniques have been invented for VR that increase user performance but distort user motion. While this is not necessarily a problem when using a simple user representation (e.g. a sphere representing the user's hand), it might cause strange feedback when embodied in a full-body avatar. Nevertheless, representing users with high-fidelity avatars is common as they offer several advantages. Avatars were found to increase spatial awareness [START_REF] Draper | Exploring the Influence of a Virtual Body on Spatial Awareness[END_REF], or even to impact perceived effort during a physical task [Kocur et al. 2020a]. There is also an increasing number of applications using full-body avatars, especially social VR applications. To maximize the sense of embodiment towards avatars, spatially and temporally congruent visuomotor stimulation is an efficient method [START_REF] Kokkinara | Measuring the Effects through Time of the Influence of Visuomotor and Visuotactile Synchronous Stimulation on a Virtual Body Ownership Illusion[END_REF].

Figure: On the left image, a ghost representation enables remote manipulation while a realistic co-located representation provides feedback with respect to the real user's position. On the right image, a ghost representation provides feedback with respect to the user's real position while a realistic representation enables precise manipulation with the environment thanks to slowed motion.

However, distorting movements during interaction could be detrimental to the sense of embodiment, as spatial congruency would no longer be respected. Also, if the virtual body movements are distorted, users lose information about their real body position, which might impact their understanding of the interaction technique. To keep the advantages of both anisomorphic interaction and the spatial reference offered by the avatar, we propose to provide visual feedback of both distorted and real motions by displaying two avatars simultaneously. This type of feedback is referred to hereinafter as a dual representation, as opposed to single representations where only one avatar shows one type of feedback (real or distorted). The aim of this dual representation is to offer feedback of both the real users' motions and the distorted motions during anisomorphic manipulation. In the context of this study, we define the dual representation as made of an interactive representation, used for interaction and displaying altered movements, and a co-located representation displaying real movements.
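As an illustration of this idea, here is a minimal sketch in which a single tracked hand position drives both representations; the structure and names are hypothetical, not the actual Unity implementation:

```python
from typing import Callable, Tuple

Vec3 = Tuple[float, float, float]

class DualRepresentation:
    """One tracked input drives two avatar hands."""

    def __init__(self, distort: Callable[[Vec3], Vec3]):
        self.distort = distort  # anisomorphic mapping, e.g. Go-Go or PRISM

    def update(self, real_hand: Vec3) -> Tuple[Vec3, Vec3]:
        colocated = real_hand                   # co-located avatar: raw motion
        interactive = self.distort(real_hand)   # interactive avatar: distorted motion
        return interactive, colocated

# With an identity mapping the two hands overlap, so a single avatar suffices;
# the representations only dissociate while a distortion is active.
rig = DualRepresentation(distort=lambda p: p)
```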
This type of feedback has already been explored in the context of finger collisions with objects, for example [Prachyabrued and Borst 2014;[START_REF] Canales | Virtual Grasping Feedback and Virtual Hand Ownership[END_REF]], and we believe that it could be valuable for all interaction techniques involving motion distortions, to support both the sense of embodiment and interaction capabilities. In order to evaluate the impact of dual representations on the sense of embodiment and interaction performance, we conducted two within-subject experiments to compare different user representations (both single and dual) during anisomorphic manipulation. To this end, two common interaction methods were considered: remote manipulation with a gain that increases movement [START_REF] Poupyrev | The Go-Go Interaction Technique: Non-Linear Mapping for Direct Manipulation in VR[END_REF], and precise manipulation with a gain that decreases movement [START_REF] Frees | PRISM interaction for enhancing control in immersive virtual environments[END_REF]]. Both manipulation techniques create offsets between the virtual and real hands, by either amplifying or decreasing users' movements. The interactive and co-located representations had either a realistic visual appearance or a transparent/ghost appearance. This chapter provides insight into which user representation to choose during 3D anisomorphic manipulation.

Extended and Multiple Body Representations

In the related work of this thesis, we presented common techniques involving motion alterations and the impact they can have on the sense of embodiment (see Section 1.4.2.2). While the precise impact of motion alteration on embodiment is still unclear, several studies showed that a mismatch between users' motion and their virtual avatar was detrimental to embodiment, especially to the senses of agency and self-location. Most of these studies focused on cases where users saw only one virtual representation of their movements. In this chapter, we ask the question: what would happen if they could see multiple representations of their movements during interaction? Few studies have investigated virtual representations with additional body parts or full bodies. In this section, we present these studies, which investigated either extended or multiple body representations, a promising lead for reconciling embodiment and motion alteration.

Extended Body Representations

Avatar extensions have been explored, such as human tails [START_REF] Steptoe | Human Tails: Ownership and Control of Extended Humanoid Avatars[END_REF]] or third arms [START_REF] Laha | Evaluating Control Schemes for the Third Arm of an Avatar[END_REF]]. These studies used a material reminiscent of a ghost to distinguish the additional part from the rest of the body. They found that people could experience a sense of embodiment towards these additional limbs. Another study investigated the use of an extended arm in augmented reality, while keeping a representation of the real arm [START_REF] Feuchtner | Extending the Body for Interaction with Reality[END_REF]. This feedback was less appreciated than an extended arm alone, without the real arm. More recently, an appended limb has been used as a spatial reference to extend proprioception and improve target selection [Tian et al. 2020].
This study found that people could feel a sense of ownership towards this limb, especially when they could control and move it prior to the selection task. The authors also tested different transparency levels for the additional limb, and found no impact on selection performance.

Multiple Body Representations

A study investigated the experience of having two bodies in VR that were neither spatially co-located with the user nor interacting with the VE [START_REF] Heydrich | Visual capture and the experience of having two bodies-evidence from two different virtual reality techniques[END_REF]]. Similar studies with additional body parts were conducted in real life with the rubber hand illusion [START_REF] Ehrsson | How many arms make a pair? Perceptual illusion of having an additional limb[END_REF]], using video streams [START_REF] Chen | Body ownership and the four-hand illusion[END_REF][START_REF] Guterstam | Duplication of the bodily self: a perceptual illusion of dual full-body ownership and dual self-location[END_REF] or augmented reality [START_REF] Rosa | The supernumerary hand illusion in augmented reality[END_REF]. More recently, a study investigated "distributed embodiment", i.e. embodiment towards up to four bodies [Miura et al. 2021]. In this study, participants had split views over four first-person perspectives. The results tended to show that subjects switched attention between bodies, but kept a global, parallel sense of embodiment towards all bodies. Another study explored the use of additional hands for interaction, to decrease object selection time, but without investigating embodiment [START_REF] Schjerlund | Ninja Hands: Using Many Hands to Improve Target Selection in VR[END_REF]. Some VR applications display two superimposed representations, but the second one is usually used for guidance [START_REF] Yang | Implementation and Evaluation of "Just Follow Me": An Immersive, VR-Based, Motion-Training System[END_REF]] and is therefore not controlled by the user. While studies using several controlled bodies often used similar textures for all representations, these guidance applications used a "ghost" metaphor [START_REF] Yang | Implementation and Evaluation of "Just Follow Me": An Immersive, VR-Based, Motion-Training System[END_REF], where the second representation used for guidance is more transparent than the main one [START_REF] Han | Ar-arm: Augmented visualization for guiding arm movement in the firstperson perspective[END_REF][START_REF] Chinthammit | Ghostman: augmented reality application for telerehabilitation and remote instruction of a novel motor skill[END_REF]. While people have replicated the RHI with two arms or bodies, or used a second representation to guide users, to our knowledge no study has investigated the use of a second representation showing real movements during anisomorphic manipulation. For this reason, we conducted two experiments to explore the embodiment of a dual representation with two different anisomorphic interaction techniques, involving either amplified or decreased movements.

Context and Experiments Overview

In this study, we were interested in the potential use of dual representations for interaction in VR. More precisely, the context of this chapter is the use of dual body
In this context, we consider that dual representations are made of an interactive representation used for interaction with the environment, and a co-located representation showing users' real movements. Therefore, users have control over two avatars simultaneously, but the mapping and visual appearance differ. More precisely, when a motion distortion is applied to a part of the avatar, the hand in our experiments, the two avatars will dissociate, providing visual feedback of both the co-located and distorted motions. When no distortion is applied, a single default representation can be used (a realistic full-body avatar) to maximise embodiment [Martini et al. 2015;[START_REF] Latoschik | The Effect of Avatar Realism in Immersive Social Virtual Realities[END_REF]] and avoid a visual overlap between the two representations. In this paper, when talking about our different experimental conditions, we will always indicate the visual appearance of the interactive representation first (letter on the left), then the co-located representation one (letter on the right). The context is close to the ghost metaphor [START_REF] Yang | Implementation and Evaluation of "Just Follow Me": An Immersive, VR-Based, Motion-Training System[END_REF] in which the trainer's movements are superimposed to users' one. However, in our context, users are in control of both representations at the same time. Dual body representations could be adapted to other situations, but in this study we focus on anisomorphic manipulation techniques. These techniques usually increase or decrease users' motion. In this chapter, we therefore conducted two experiments investigating either amplified motions or decreased motions. The goal was to have two main types of motion alteration. We chose two very well-known manipulation techniques: the Go-Go technique which amplifies movements to reach remote objects [START_REF] Poupyrev | The Go-Go Interaction Technique: Non-Linear Mapping for Direct Manipulation in VR[END_REF], and the PRISM technique which decreases movements to gain precision [START_REF] Frees | PRISM interaction for enhancing control in immersive virtual environments[END_REF]]. Experiment 1: Dual Representations for Increased Motions During Out-of-Reach Manipulation The first within-subject experiment investigated the use of a dual representation when interacting with remote objects using the well-known Go-Go technique [START_REF] Poupyrev | The Go-Go Interaction Technique: Non-Linear Mapping for Direct Manipulation in VR[END_REF]. When using this technique, users' movements above a certain threshold are amplified, enabling both close and remote object manipulation. Participants and Apparatus Twenty-four participants (age min=20, max=36, avg=25.0±4.1, 11 women and 13 men) took part in this experiment. On a 7-point Likert scale, 7 of them were VR experts (score equal to or greater than 6), 6 were knowledgeable (score between 3 and 5) and 11 were beginners (score equal to or lower than 2). The majority of participants were students and staff recruited on our campus. All participants gave written and informed consent. The study conformed to the declaration of Helsinki, and was approved by the local ethical committee. Participants did not receive any compensation. They all had a normal or corrected vision. Participants were equipped with a Valve Index Head-Mounted Display and Knuckles enabling finger tracking, tracked by two base stations. The experiment was developed using Unity 2019.4.12f1. 
We used gender-matched avatars from the Rocketbox library. They were animated using the RootMotion Final IK plugin. For object interaction (selection and hand poses), we used the SteamVR interaction system.

Go-Go Technique Implementation

Our implementation is based on the original Go-Go implementation [START_REF] Poupyrev | The Go-Go Interaction Technique: Non-Linear Mapping for Direct Manipulation in VR[END_REF]. A one-to-one mapping is applied until the real hand position R_r reaches a threshold value D; beyond that, the mapping becomes non-linear. The virtual hand position R_v is determined as follows:

$$R_v = \begin{cases} R_r & \text{if } R_r < D \\ R_r + k(R_r - D)^2 & \text{otherwise} \end{cases}$$

The threshold D was 2/3 of the real arm length. The factor k was computed to ensure that the maximum virtual arm length was four times the real arm length. Therefore, when users had their real arm fully extended, the virtual arm was four times as long as their real arm. This k value enabled users to perform the task by reaching all the targets, while maintaining a relatively accurate interaction. To extend the arm, the direction of the user's motion needed to be known. There are several ways of determining it, depending on whether the head, shoulder or controller positions are used to compute the direction vector. Different options were tested, and finally the normalised vector linking the shoulder position to the controller position was kept, similarly to the implementation by [START_REF] Feuchtner | Extending the Body for Interaction with Reality[END_REF]. This was the most stable solution, as it did not depend on the head orientation. The Go-Go technique uses an adaptive gain, enabling both close and remote interaction.

Task

An ecological task involving interaction in both the peripersonal (close to the body) and the extrapersonal (far from the body) space was necessary. We chose the ecological task of picking apples. Participants were facing a virtual tree containing 20 apples (see Figure 4.1). They had to extend their arm to reach the apples placed in their extrapersonal space and pick them. Then came a sorting task: participants had to bring the apple back into their peripersonal space to inspect it. They needed to look at each apple to see whether it was flawless or rotten, i.e. with stains on it. The goal was to make participants bring the apples back to them, so that they would interact closer to their body and be aware of their dual representation. After looking at the apples, they had to put them in the appropriate basket situated just outside their peripersonal space beside the tree (flawless apples in the left basket, rotten apples in the right basket). To pick/release the apples, subjects had to grab/release the Knuckles handle. Grabbing the handle snapped the apple to the virtual hand and triggered a hand pose.
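A minimal sketch of the mapping above, assuming positions given as NumPy arrays; the names are illustrative, and k is derived from the constraint that a fully extended real arm yields a virtual arm four times as long:

```python
import numpy as np

def gogo_virtual_hand(shoulder, controller, arm_length):
    """Non-linear Go-Go mapping along the shoulder-to-controller direction."""
    d = (2.0 / 3.0) * arm_length  # one-to-one threshold D
    # At full extension (R_r = L): L + k(L - 2L/3)^2 = 4L  =>  k = 27 / L
    k = 27.0 / arm_length

    offset = np.asarray(controller, dtype=float) - np.asarray(shoulder, dtype=float)
    r_real = float(np.linalg.norm(offset))
    direction = offset / r_real  # stable direction, independent of head pose

    if r_real < d:
        r_virtual = r_real                           # linear zone
    else:
        r_virtual = r_real + k * (r_real - d) ** 2   # amplified zone

    return np.asarray(shoulder, dtype=float) + r_virtual * direction
```

With these values, the mapping is one-to-one up to D, and at full extension the virtual hand reaches four arm lengths, consistent with the constraint stated above.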
Additional Task: Maximum Reach Estimation Task

Interacting with remote objects can alter the mental representation of the arm and give the sensation that the real arm is elongated [START_REF] Feuchtner | Extending the Body for Interaction with Reality[END_REF]. We therefore decided to add a task to estimate whether the perceived maximum reach distance was influenced by the Representation. For this additional task, participants were immersed in an empty scene with only the Unity default skybox. They had no virtual body during this task and only saw a red virtual sphere in front of them. They were instructed to move this virtual sphere using the Knuckles controllers' pad and place it at the distance at which they thought they could reach it with their arm fully extended. They had to keep their arms at their sides during the measurement. This measurement was repeated six times, alternating three times with the sphere starting far away (at a distance of twice the user's real arm length) and three times with it starting near the participant (at a distance of 0.3 times the real arm length).

Experimental Design and Protocol

This experiment had a within-subject design with Representation as the independent variable. For each level of Representation, distorted (interaction) motions and real motions could be represented either by a realistic body (R), a ghost body (G) or nothing (Ø). For this experiment we chose three levels of Representation, defined as: R -Ø, R -G and G -R (see Figure 4.2). In R -Ø, only distorted motions were represented, with a realistic elongated arm. In R -G, a realistic elongated arm was used for interaction, while real motions were represented by a ghost transparent arm. G -R corresponded to the opposite combination. We did not use the other possible combinations, as the goal was to always have a different visual appearance for the interactive and co-located representations (so, for example, the R -R combination was excluded), and to use only a realistic appearance for single representations (therefore no single ghost extended arm). Also, when using the Go-Go technique, the user must see where the virtual hand is to be able to interact during motion distortion; therefore the combination Ø -R could not be used in this experiment. The order of the different Representations was fully counterbalanced.

Participants were first welcomed, the experiment was explained to them and they had to read and sign the consent form. Before starting the main experiment, users were first immersed in the experimental scene without any task to perform. They were represented by an avatar whose global scale was calibrated to match their height. They were immersed for 45 seconds to look at the scene and their avatar. The goal was to give them a first perception of the virtual scene, with a normally calibrated avatar without extendable arms. After the 45 seconds, they performed the maximum reach estimation task. After this first exposure, the main experiment began, divided into three blocks, one for each Representation. Before the experimental task began, participants had 20 seconds of free exploration of their avatar and the environment. Then, they had to press the controller's trigger to start a training phase with only five apples. After pressing the trigger again, they could start the main task, with 20 apples. The apples were placed similarly in all conditions so that results could be compared between the different representations. At the end of each block, participants were placed in another scene to perform the maximum reach estimation task. In total, the experiment session lasted around 50 minutes.

Experimental Data

Subjective Measures

Participants' sense of embodiment was measured using the questionnaires from Peck and Gonzalez-Franco [2021] (referred to as the PGF questionnaire hereafter) and from [START_REF] Roth | Construction of the Virtual Embodiment Questionnaire (VEQ)[END_REF] (referred to as the RL questionnaire). We decided to use both questionnaires because they have complementary questions.
The questionnaire items were evaluated on a 7-point Likert scale from 1 (Strongly disagree) to 7 (Strongly agree). For the PGF questionnaire, we customised question R7 as recommended in their paper, using the question "I felt as if my real arm were becoming longer". For question R8, we asked "I felt a realistic sensation in my body when I saw my virtual body", as recommended. For R14 to R16, we used apples as the source of touch. We thought this would be clearer for participants than asking about the touch of the ground, as recommended in the questionnaire paper. It is possible that the passive haptic feedback provided by the Knuckles grip elicited a sensation of touch, even though its shape was different from that of an apple. In addition, six questions specific to this study were added. For the two conditions with a dual representation, questions related to the senses of ownership and agency (from the RL questionnaire) were added for both virtual representations, the realistic (opaque) and the ghost (transparent) one (see Table 4.1). We used the words opaque and transparent as neutral words describing the representations, so as not to bias participants' responses. Five questions on the global opinion regarding the virtual representation, as well as usability, were also asked (see Table 4.2). Subjects were additionally asked to rank the conditions in order of preference after the experiment, and to explain why they chose this order.

Table 4.1 - Additional embodiment questions asked after conditions involving a dual representation (in R -G and G -R).

Objective Measures

Because previous studies showed a possible impact of the virtual representation on performance [Tran et al. 2017], we included a performance measure in this experiment by measuring the time needed to pick the 20 apples. Similarly, related work showed that having a virtual extended arm can impact arm length perception, because it alters the body schema [Kilteni et al. 2012b]. We therefore included the maximum reach estimation task described in Section 4.4.3.2 after each main task, to evaluate whether the virtual representation would impact maximum reach perception.

Hypotheses

Previous studies showed that beyond a certain length (four times the real arm length), an extended arm does not feel like the user's own arm anymore [Kilteni et al. 2012b]. Reminding users of their real arm might help to create a global sense of embodiment towards their representation, which could be higher than with only one visible arm. Also, having constant feedback of their own arm might benefit the sense of agency, as users can always see congruent feedback of their real actions, which is an important factor in eliciting a sense of agency [START_REF] David | The "sense of agency" and its underlying cognitive and neural mechanisms[END_REF]]. We therefore propose as a first hypothesis that dual body representations influence the sense of embodiment, especially the sense of agency (H1.1).

While some studies suggested that additional visible information can impact performance [Tran et al. 2017], we hypothesised that seeing real movements could help users understand the transformation applied during anisomorphic manipulation. This should not be enough to increase performance in the case of the Go-Go technique, as the technique is simple and the task we used does not require precision. We hypothesised that we would not observe differences in performance between the conditions (H1.2), as dual representations should not hinder the task. When using a single representation, an offset is created between the remote virtual hand interacting with objects and the real hand. Because of this mismatch between real and virtual movements, users might get frustrated. On the contrary, a dual representation lets users see both their real and virtual movements, so we hypothesised that this type of representation would be preferred. Also, it could be perceived as fun to embody two different virtual representations. Our third hypothesis was therefore that people prefer having a dual representation (H1.3).

Results

Considering the ordinal nature and the non-normal distribution of the subjective questionnaire scores, we performed Friedman non-parametric tests (with α = 0.05) on the different embodiment components and the user experience questions, followed by post-hoc pairwise Wilcoxon signed-rank tests with Bonferroni correction. For objective measures, we used a one-way repeated measures ANOVA and Tukey's post-hoc tests when the normality assumption was verified by the Shapiro-Wilk test (for the estimated maximum reach distance), and Friedman tests followed by Wilcoxon tests when the normality assumption was not verified (for the completion time). For the additional questions on the realistic and ghost representations in G -R and R -G, we performed Wilcoxon signed-rank tests.

Sense of Embodiment and Preference

Embodiment questionnaires

For the RL questionnaire, we computed the global scores for the three components (Ownership, Agency and Change) as recommended. Globally, the Ownership (Mdn = 5, IQR = 3.44–5.81) and Agency (Mdn = 6, IQR = 5.75–6.5) scores were high in all conditions, while the Change scores were very low (Mdn = 2.25, IQR = 1.25–3.75). For all components, we did not find any significant difference. For the PGF questionnaire, we also computed global scores for each component (Ownership, Appearance, Multi-Sensory and Response), as well as an Agency sub-score computed from the two questions directly related to the sense of agency. Component scores were globally low (Mdn = 3.75, IQR = 3–4.67 for Ownership; Mdn = 2.63, IQR = 1.75–3.41 for Appearance; Mdn = 3.42, IQR = 2.67–4.17 for Multi-Sensory; Mdn = 2.75, IQR = 2–3.83 for Response; and Mdn = 4, IQR = 3.5–5 for the Agency sub-component), with common questions obtaining relatively high scores (such as "I felt like I could control the virtual body as if it was my own body" or "I felt as if my body was located where I saw the virtual body"), while questions not adapted to the experimental protocol obtained very low scores (such as the questions on tactile sensations). For the Response component, there was a main effect of Representation (χ² = 7.525, p < 0.05). The Kendall's W value indicated a small effect (W = 0.16). Pairwise tests showed that the score for R -Ø was significantly higher than for G -R (p = 0.01). This seemed mostly influenced by the question "I felt a realistic sensation in my body when I saw my virtual hand". For the Multi-Sensory component, Friedman's test showed a main effect of Representation (χ² = 6.1, p < 0.05), but the pairwise tests did not reveal any significant difference between the conditions. The results of these two components are visible in Figure 4.3.
Additional questions

For the additional ownership and agency questions about the realistic and ghost representations (G -R and R -G only), there were significant differences in the agency question scores between R -G and G -R (see Figure 4.5). For the question controllingOpaque, the scores were significantly higher in R -G than in G -R (p < 0.05): participants felt more in control of the realistic arm when it was used for interaction than when it was displaying real motions. For the question controllingTransparent, the scores were significantly higher in G -R than in R -G (p < 0.01); similarly to the realistic arm, control was higher when interacting with the ghost arm. For the question causingTransparent, the scores were significantly higher in G -R than in R -G (p < 0.01). No significant difference was found for the questions on ownership. We also compared scores for the same type of representation (interactive or co-located) depending on the visual appearance. The scores for the interactive representations were not significantly different between the two appearances, nor were those for the co-located representations. For each dual representation condition, we also looked at potential differences in the scores between the interactive and the co-located representation. For the controlling and causing questions, scores were significantly higher for the realistic extended arm than for the ghost short arm in R -G (p < 0.05).

Preference

R -Ø was the preferred condition, with 10 participants ranking it as their most preferred condition (see Figure 4.4). Its average rank was 1.67. R -G and G -R had the same average rank, equal to 2.17. There was no significant difference in the user experience questions. The scores were average for the question liked (Mdn = 4, IQR = 2.75–6), low for disturbing (Mdn = 2, IQR = 1–5), and high for the questions easy (Mdn = 6, IQR = 5.75–7), clear (Mdn = 6, IQR = 6–7) and exciting (Mdn = 6, IQR = 6–7).

Objective Measures

For the mean estimated maximum reach distance, corresponding to the average of the six measures from the maximum reach estimation task, sphericity was violated, so a Greenhouse-Geisser correction was applied. We found a main effect of Representation (F(2.37, 54.52) = 10.48, p < 0.001, η²p = 0.313). Post-hoc tests showed that the baseline (the measure prior to the experiment) was significantly lower than the measures after the different levels of Representation (all pairwise comparisons involving the baseline had p < 0.001). However, there were no differences between the three levels of Representation. In the baseline, the estimated maximum reach was equal to 83.4 ± 15.2% of the real arm length. After the different conditions, it was equal to 91.7 ± 16.2% of the arm length after R -Ø, 89.9 ± 16.9% after R -G and 90.6 ± 16.9% after G -R, respectively. We did not observe any order effect. There was also no significant difference in completion time between the blocks. People took on average 105.7 ± 40.7 seconds in condition R -Ø, 96.6 ± 25.3 seconds in R -G and 99.3 ± 22.8 seconds in G -R.

Discussion

The main goal of this experiment was to explore the impact of dual representations on the sense of embodiment and user performance.
Except for a difference in the Response component between R -Ø and G -R, embodiment scores did not highlight differences between the conditions, which does not support H1.1. Additional questions showed that, for both visual appearances, people had a higher sense of agency towards the arm when they were interacting with it (even if it was sometimes extended) than when the arm was showing their real movements. This is interesting as it encourages introducing more interaction tasks in future embodiment studies, to study their impact on embodiment and potentially increase embodiment towards deformed avatars. Also, the agency scores for the ghost arm were significantly lower than for the realistic arm in R -G. The opposite was not true in G -R. This means that participants could feel control over a ghost extended arm because they were interacting with it, but they felt less control over a co-located ghost arm while interacting with a realistic extended arm. Additionally, the maximum reach estimation task revealed an increased perceived maximum reach distance after all the conditions, compared to the measure prior to the experiment. This shows that interacting with an extended arm changed users' mental representations of themselves, and they perceived their real arm to be longer. This is similar to results found in other studies [Feuchtner and Müller 2017; Lin et al. 2020]. Regarding user performance, we did not observe any significant difference in task completion time, which does not contradict H1.2. Participants performed similarly under all conditions. This is probably due to the fact that the co-located representation did not interfere during the task, as it was only additional information not located where the manipulation was happening. Almost half of the participants preferred using only one representation, which does not support H1.3. They found it more "playful", or considered that knowing only where the interactive hand was located was "the most important". Several participants also reported it as more "realistic" because they only had one arm, even though this arm was extended. This is in line with the study by Feuchtner and Müller [2017], which also considered an extendable arm in augmented reality, in which 67% of participants preferred not seeing their real arm, and could embody the extended arm when interacting with it. Still, some participants preferred having a dual representation, to have information about their real arm position. Some of them preferred their real body to be represented by a realistic body. These participants considered the ghost arm as an "extension", a superpower they had in the VE. Other people preferred interacting with a realistic arm, as they felt they were not able to pick apples with an "intangible" arm, and were more precise with a realistic arm. These two opposite opinions explain the similar mean ranking for R -G and G -R. Even though the single representation was preferred, there seem to be some individual differences. The results could also be greatly influenced by the type of mapping. For this reason, we conducted a second study investigating another type of motion distortion.

Experiment 2: Dual Representations for Decreased Motions During Precise Manipulation

The goal of the second within-subject experiment was to investigate the use of a dual representation when forced to be always in sight during a task in the peripersonal space.
We also wanted to test a technique with a different type of gain, i.e. one decreasing users' movements. The PRISM technique was chosen as it scales down users' movements to increase precision.

Participants and Apparatus

In this second experiment, we also had 24 participants (age min = 22, max = 36, avg = 26.8 ± 3.3; 12 women and 12 men). None of them had taken part in the first experiment. Four of them were VR experts, 13 were knowledgeable and 7 were beginners. The apparatus was the same as in the first experiment.

PRISM Technique Implementation

The PRISM technique adjusts the virtual hand motion (translation and rotation) depending on users' hand speed. For translation, we scaled users' motion depending on the speed measured in m/s. For rotation, the scale depended on the angular speed in degrees/s. When implementing PRISM, the gains can be applied to the global translation/rotation or independently to each axis. In our experiment, we used global scaling as it was found more intuitive after several tests. The different cases determining the gains applied are described below (a minimal code sketch of this mapping is given below). The threshold values MinV, SC and MaxV were empirically adjusted to our task and its difficulty.

- HandSpeed < MinV. In this case, the motion is considered as noise rather than intentional motion. We used MinV = 0.01 m/s for translation and MinV = 5 degrees/s for rotation.
- MinV < HandSpeed < SC. Users are performing a slow motion in order to be precise. The function determining the gain applied is linear (going from 0 at MinV to 1 at SC). We used SC = 0.1 m/s for translation, and SC = 50 degrees/s for rotation.
- SC < HandSpeed < MaxV. In this case we use a one-to-one mapping. We chose MaxV = 0.2 m/s for translation and MaxV = 60 degrees/s for rotation.
- HandSpeed > MaxV. This case shows an intention to remove the current offset created between the virtual hand and the real hand. When the speed is greater than MaxV, the offset is instantly recovered.

Task

The task was similar to the 6-DOF task considered by Frees et al. Subjects were instructed to grab a 3D object and place it at a given target position. The object was a cylinder, chosen to afford a simple grabbing hand pose, with an antenna on top of it to constrain its rotation on the three axes. The target consisted of the same object but transparent and red (see Figure 4.1). When the object was correctly placed, it turned green. The object was considered well placed when its centre was within 3 millimetres of the target's centre and its three axes of rotation were aligned with those of the target (with a tolerance of 2.25 degrees per axis). The objects were always shown in the same order across the different conditions.

Experimental Protocol and Design

Similarly to Experiment 1, the experiment followed a within-subject design with Representation as the independent variable. This variable had four levels (see Figure 4.6). We added a fourth condition, Ø -R, because the PRISM technique is compatible with displaying only the real movements and slowing down the object itself, contrary to the Go-Go technique. The order of the different Representation conditions was counterbalanced using a 4 × 4 Latin square design. Participants were first welcomed, the experiment was explained to them and they had to read and sign the consent form. Then they were immersed in the VE, where they first had 20 seconds to observe their virtual body and the environment.
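Before continuing with the protocol, the following minimal sketch illustrates the PRISM mapping referenced above, assuming hand displacements sampled once per frame. The function and constant names are illustrative assumptions, not those of our actual implementation; the threshold values are the ones reported above, and rotation is handled analogously with the angular thresholds.

```python
MIN_V, SC, MAX_V = 0.01, 0.1, 0.2  # translation thresholds in m/s

def prism_translation_step(real_delta, speed, offset):
    """One frame of PRISM translation. real_delta: real-hand displacement
    this frame (float or 3-vector, e.g. a numpy array), speed: hand speed
    in m/s, offset: accumulated real-virtual offset.
    Returns (virtual_delta, new_offset)."""
    if speed < MIN_V:                # noise: the virtual hand does not move
        gain = 0.0
    elif speed < SC:                 # precision zone: linear ramp from 0 to 1
        gain = (speed - MIN_V) / (SC - MIN_V)
    elif speed < MAX_V:              # normal motion: one-to-one mapping
        gain = 1.0
    else:                            # fast motion: instantly recover the offset
        return real_delta + offset, offset * 0.0
    virtual_delta = gain * real_delta
    return virtual_delta, offset + (real_delta - virtual_delta)
```

With the global scaling we used, real_delta is the full 3D displacement vector; per-axis scaling would instead apply the mapping to each component separately.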
Then, they had to press the controller's trigger to start a training phase with only two objects to place. They had a maximum of 30 seconds to place each object, after which they could start the main experiment by pressing the trigger again. They had three minutes to place as many objects as possible. We fixed the total exposure time in order to keep it the same for each condition, because we expected stronger user variability in this task than in the first experiment's task. The next object (and the associated target) would appear when the previous one was correctly placed. In total, the experiment session lasted about an hour.

Experimental Data

We used the same subjective questionnaires as in Experiment 1. Only the following questions were changed, in the PGF questionnaire:
- R7: I felt as if my body had changed
- R14: It seemed as if I felt the touch of the virtual objects in the location where I saw the virtual body touched
- R15: It seemed as if the touch I felt was caused by the virtual objects touching the virtual body
- R16: It seemed as if my real body was touching the virtual objects

As objective data, the number of completed placements was counted. The times when each object was displayed, picked, released and correctly placed were logged. The hand velocity (in m/s) as well as the offset between the virtual and the real hands were also saved at each frame, for both the left and right hands.

Hypotheses

We globally had the same hypotheses as in the first experiment. This experiment was slightly different from the first one, both because the technique decreased users' movements and because the interaction happened in users' peripersonal space. The PRISM technique alters users' movements, which creates an offset between the real and the virtual hands [Farrer et al.]. Yet, compared to the Go-Go technique, the offset introduced will tend to be smaller and will also increase progressively during the manipulation process. On the one hand, as the offset increases progressively and tends to be small, the chances of noticing it are reduced. On the other hand, as the offset depends on the motion speed, the user might perceive the virtual hand motion as unpredictable. The dual representation could help decrease this feeling of losing control. However, having two bodies with different mappings in the peripersonal space could be disturbing for users. One hypothesis was that dual representations influence the sense of embodiment, especially the sense of agency (H2.1). We were still expecting similar performance between the different conditions. Because in all the dual representation conditions one representation is semi-transparent, we hypothesised that it would not add so much visual information as to affect performance, so we should not observe differences in terms of performance between the conditions (H2.2). Finally, because with dual representations there would be two virtual hands very close to each other, we were expecting a less pronounced preference for dual representations compared to the Go-Go technique, but still a preference, because it would not result in any loss of information about users' movements. The third hypothesis was that people would slightly prefer having a dual representation (H2.3).

Results

We performed a similar analysis to Experiment 1.
Subjective data were analysed using Friedman's tests, and objective measures using either a one-way repeated measures ANOVA or Friedman's test, depending on the significance of the Shapiro-Wilk test.

Sense of Embodiment and Preference

Embodiment questionnaires

No significant effect of Representation on the components from the RL questionnaire was found. Ownership (Mdn = 4.625, IQR = 3-5.5) and Agency (Mdn = 5.125, IQR = 4-6.062) were above average. The Change component scores were low (Mdn = 2, IQR = 1-3.062). Interestingly, the question It felt like the virtual body was my body was influenced by Representation (p < 0.05), with single representations having better scores than dual representations, and the pairwise tests showing that Ø -R was significantly higher than G -R. Other questions had similar results between Representations, resulting in no significant difference in the Ownership component. For the PGF questionnaire, the components for which we found a main effect of Representation are shown in Figure 4.7. Component scores were globally average (Mdn = 3.83, IQR = 2.67-4.67 for Ownership, Mdn = 3.13, IQR = 2.13-3.5 for Appearance, Mdn = 3.5, IQR = 2.96-4.38 for Multi-Sensory, Mdn = 3, IQR = 2-3.67 for Response and Mdn = 4, IQR = 3-5 for Agency). For both the Response (χ² = 8.85, p < 0.05) and the Ownership (χ² = 8.38, p < 0.05) components, Friedman's test showed a main effect of Representation. R -Ø had higher scores, but the pairwise tests did not reveal any significant difference between the conditions. However, the question I felt as if my body was located where I saw the virtual body scored significantly higher in R -Ø compared to R -G and G -R. Even though there was an offset with their real hand, participants felt co-located with their avatar. There was also an effect of Representation on the question I felt that my own body could be affected by the virtual world, with R -Ø having significantly higher scores than R -G. Friedman's test showed a main effect of Representation on the global embodiment score (χ² = 9.63, p < 0.05). The embodiment score in R -Ø was significantly higher than in R -G (p < 0.05). There was no other significant difference.

Additional questions

For the additional questions, we did not find any significant difference (see Figure 4.8). However, we also compared the scores between each pair of similar questions (for example, controllingOpaque and controllingTransparent). The scores for the question on controlling the interactive representation (i.e. showing distorted motions) were higher with the realistic appearance (in R -G) than with the ghost appearance (in G -R) (p < 0.05). The scores for the question wasMyBody for the interactive representation were also higher with the realistic texture (in R -G) than with the ghost texture (in G -R) (p < 0.05). In G -R, the scores for wasMyBody and controlling were significantly higher for the co-located realistic arm than for the ghost interactive arm (p < 0.01).

Preference

R -Ø was the preferred condition, with 13 participants ranking it as their most preferred condition (see Figure 4.9). Its average rank was 1.92. R -G came second with an average rank of 2.38, followed by Ø -R (2.58) and G -R (3.13) as the least preferred.

Objective Measures

The number of completions was slightly higher in R -Ø, but no significant difference was found between the four conditions (F(2.76, 60.63) = 2.43, p = 0.079, η²p = 0.100).
People correctly placed 15.7 ± 4.9 objects on average in R -Ø, 13.4 ± 6.0 in G -R, 13.5 ± 5.1 in R -G and 13.3 ± 5.3 in Ø -R. We also counted the number of adjustment releases (when objects were released without being correctly placed), for which there was a significant effect of Representation (p < 0.01). Pairwise tests showed that the number of adjustment releases was significantly higher in Ø -R than in R -Ø. The average number of adjustment releases was equal to 15.8 ± 7.0 in R -Ø, 16.4 ± 9.5 in G -R, 15.3 ± 7.8 in R -G and 19.7 ± 7.4 in Ø -R. Also, neither the offsets created nor the hand velocity were significantly different between the conditions. The average offset was 3.39 ± 0.97 cm.

Discussion

The goal of this second experiment was to study the influence of dual representations on embodiment and performance when the interaction took place in the peripersonal space. We expected embodiment scores to be higher with the dual representations. On the contrary, there was a tendency towards higher scores in R -Ø, but pairwise tests could not show significant results. Only the global embodiment score showed that embodiment in R -Ø was higher than in R -G. Indeed, participants reported being disturbed by the presence of two hands, which could explain this result. Between the two dual representations, the results on the sense of embodiment tended to show that, in this experiment, the hand appearance (realistic versus transparent) had an impact. The scores for the questions about control and ownership were higher with the realistic hand appearance than with the ghost one. We did not find a significant difference between the conditions for the performance measure (number of completions), which does not contradict H2.2. However, performance tended to be higher for R -Ø, and equivalent for the dual representations and the single representation Ø -R. We also reported that the number of adjustment releases was high in Ø -R. It was indeed observed during the experiment that in this condition, where the hand would cross the object whenever there was an offset, people would tend to release and grab the object again to remove the offset. This is in line with papers showing that users are disturbed by interpenetrations [Canales et al.]. Also, the preference for condition R -Ø was slightly clearer than in the first experiment. Interestingly though, R -G was often ranked second, showing a good acceptance of the ghost hand to show real movements. Some participants reported that it was easier to understand their virtual movements when they had two representations. The dual representation also enables keeping contact between a virtual hand and the virtual object. This is different from the original PRISM implementation [Frees et al.], where the virtual representation moves away from the object (like Ø -R in our experiment) and the offset is represented by either a line between the hand and the object for translations, or two sets of 3D axes for rotations. Our visual feedback is more compatible with avatars, as it leverages the avatar itself to represent the offset.

General Discussion

This section provides a global discussion of the two experiments, with some leads for future studies.
Impact of Dual Body Representations on Embodiment and Performance

Embodiment scores for dual representations were globally similar to the scores for single representations, especially when using the Go-Go technique. This tends to show that having two bodies still elicited a good global sense of embodiment, which is in line with previous studies [Chen et al.; Miura et al. 2021]. While we expected that showing real movements would increase the sense of agency, it was not higher compared to conditions where people could only see their distorted movements. This is in accordance with other studies which showed that people are not disturbed by small offsets [Porssut et al.; Kokkinara et al.] or can embody an extended arm up to a certain length [Kilteni et al. 2012b; Feuchtner and Müller 2017]. People even felt more in control of the extended arm than of the arm showing their real movements. However, in the second experiment, when the interaction happened in the peripersonal space, the avatar's visual appearance seemed to have a greater impact on agency. Both virtual representations were visible and closer together during object manipulation, which may have increased the comparison between the two hands and the difference in embodiment scores. This may also be due to people focusing more on the objects than on their representation, and not being disturbed by the offset, which was never large enough to be distracting. We also compared the embodiment scores of the two experiments using a Student's t-test. For all conditions, the agency component score from the RL questionnaire was higher in the first experiment than in the second (p < 0.01 for G -R and R -G, and p < 0.05 for R -Ø). People therefore felt a lower sense of agency using the PRISM technique than using the Go-Go technique. The other component scores were similar in both experiments. As expected, we could not find a significant impact of dual representations on performance. Contrary to papers showing that additional visual information can impact performance [Tran et al. 2017], in our study dual representations were neither beneficial nor detrimental to the task. It therefore seems possible to use dual representations in such anisomorphic manipulation tasks.

User Preference and Recommendations

The two experiments showed that people globally preferred having a single representation over a dual representation for manipulation. Still, the results showed a good acceptance of all representations, dual or single. The questions on user experience had relatively high scores, demonstrating that people liked the different representations and found the interaction easy and clear with them. However, while the general preference was for displaying only the interactive representation, some people found dual representations useful. We hypothesise that individual differences might exist regarding the perception of such a dual representation. One potential explanation could be related to people's ability to estimate their real body position, or to whether they are task-oriented. Depending on users' preferences, they could choose the representation that suits them best.
The results from both experiments give us some insight into how to choose the best virtual representation during 3D anisomorphic manipulation. In general, we suggest using a single realistic arm as the interactive representation, and giving users the possibility to activate a ghost co-located arm. However, when using a technique in the peripersonal space with small offsets, keeping only a single realistic interactive representation seems sufficient and preferred.

Measuring Embodiment

One of the challenges that we faced in both experiments was how to measure embodiment. While the most common method is the use of subjective questionnaires, it has several limitations as it relies on the participants' understanding of the questions. Moreover, existing questionnaires have not been designed to assess dual representations. To cope with these limitations, we added specific questions to address dual representations. Furthermore, we considered it a good opportunity to use two questionnaires [Roth et al.; Peck and Gonzalez-Franco 2021], namely PGF and RL, which could be more sensitive to different aspects of embodiment in such a situation. The two questionnaires have different components, but some comparisons could still be made. The RL Ownership and Agency scores were high, and the Change scores low, while the PGF questionnaire scores all tended to be below average. The RL Ownership and Agency questions are commonly used in the literature and tended to score high in all conditions, while the Change component questions were found irrelevant by the participants. They reported that they had difficulties answering these questions, suggesting that this component may be used in experiments investigating morphological differences, but probably not in experiments with calibrated avatars. For the PGF questionnaire, all component scores tended to be below average, as some questions (e.g. about tactile sensations) not adapted to the experiment seemed to have lowered global scores and added noise to the results. The questions in this questionnaire are more varied, suggesting a better adaptability to experiments with rich sensory stimulation, as there are questions asking specifically about it. Overall, the two questionnaires provided different and complementary results. More questionnaires could be designed, as VR setups and experiments are varied, and having only one standardised measure of embodiment seems like a difficult goal [Skarbez et al. 2021a].

Limitations

Our experiments were designed to study two common types of motion alteration, either increasing or decreasing motion. However, other types of distortion could be influenced by the representation. For instance, another common type of distortion is collision handling. Dual representations of the virtual hand have been studied for fine manipulation involving collision handling [Canales et al.], where a dual representation provided a good trade-off between performance and preference. It would be interesting to explore the use of dual representations when a collision happens after a larger arm movement, and not only after finger movements. While not studied here, dual representations could be appropriate in this case, as collisions can create large offsets between the virtual and the real hand. Moreover, we decided to display the co-located representation as soon as there was a gain applied.
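An alternative would be to display the co-located representation only once the offset exceeds a certain visibility threshold. A hypothetical sketch of such a scheme is given below; the threshold values and function name are illustrative assumptions (ideally they would be derived from detection-threshold studies), and the hysteresis simply avoids the ghost arm flickering when the offset oscillates around the threshold.

```python
SHOW_AT = 0.05  # offset magnitude (m) above which the co-located arm appears
HIDE_AT = 0.02  # offset magnitude (m) below which it disappears again

def colocated_arm_visible(offset_magnitude, currently_visible):
    """Toggle the co-located representation based on the real-virtual offset,
    with hysteresis (two thresholds) to prevent flickering."""
    if currently_visible:
        return offset_magnitude > HIDE_AT
    return offset_magnitude > SHOW_AT
```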
Such a threshold-based display would avoid adding information during precise manipulation like PRISM, when the offset is not disturbing for users, and would only display the real hand when users can notice the mismatch and feel a loss of control.

Conclusion

In this chapter, we investigated the potential use of dual representations during anisomorphic manipulation, to see whether they could ensure both embodiment and performance. We used two different manipulation techniques, namely the Go-Go technique, which amplifies motion, and the PRISM technique, which decreases motion. With both methods, participants managed to feel a sense of embodiment towards a dual representation. Additionally, no impact was found on performance, which shows that such representations could be used in VR applications. While interacting seemed more important than showing exact movements for agency during out-of-reach manipulation, people felt more in control of the realistic arm during close manipulation. Overall, participants preferred having a single representation, but some people still reported the benefit of having a representation showing their real movements to understand the distortion. Investigating such types of feedback during interaction is crucial, as we do not yet know the optimal visual feedback for each type of technique that would ensure both performance and embodiment. While this study investigated a specific feedback in the context of anisomorphic manipulation, there are already several studies on avatars and manipulation, detailed in Section 1.3 and Section 1.4. Based on the analysis of these studies, we will propose in the next chapter the theoretical concept of "avatar-friendly techniques", i.e. techniques that take the avatar into account.

Chapter 5

DESIGNING AVATAR-FRIENDLY TECHNIQUES: PRACTICAL GUIDELINES

Abstract: This chapter was born out of an analysis of the state of the art on avatars and manipulation. As the related work showed (see Chapter 1), the literature on avatars and manipulation techniques is more extensive than for other interaction tasks. This chapter therefore proposes a set of fourteen guidelines to help VR developers create manipulation techniques compatible with avatars, which we call avatar-friendly techniques. It also suggests several topics that could be investigated further regarding manipulation techniques and avatars.

Introduction

VR stakeholders usually design applications based on their instinct or empirical experience, because of a lack of simple design advice, in the form of guidelines or claims for example. However, proposing and validating design guidelines for VR is already a challenge in itself [Wingrave and LaViola]. There are too many factors impacting interaction, and too many individual differences, which makes generalisation from experiments very challenging. A set of guidelines for designing VR applications has been proposed by Gabbard, already addressing user representation. For instance, these guidelines recommend using an efficient embodiment (i.e. providing enough sensory information) and letting users control their appearance in the VE. However, these guidelines are general and not precise enough to design manipulation techniques.
Guidelines for designing 3D manipulation techniques have also been proposed by Bowman et al.; however, they do not consider user representation. Other guidelines, heuristics and claims [Sutcliffe and Gault; Tanriverdi and Jacob], more or less general, have been proposed since, but the avatar is usually forgotten. Guidelines specific to manipulation, considering the avatar, would help VR developers and encourage the use of avatars. We propose in this chapter design guidelines, justified by the existing literature, that can be used in the early stages of a design pipeline. These guidelines focus on design choices for manipulation techniques involving avatars; we therefore refer the reader to the guidelines of Gabbard concerning the remainder of the VR application pipeline (e.g. to restrict latency). Our guidelines are principally aimed at helping designers elicit embodiment during manipulation tasks, while also providing insights when a trade-off between embodiment and performance is required. We call this type of techniques "avatar-friendly techniques":

Techniques which take the user's avatar into account in the design process to preserve both user performance and sense of embodiment.

Avatar-Friendly Techniques

The guidelines (see Table 5.1) are classified into three parts: input devices (ID), control and mapping (CM) and feedback (FB).

Guidelines

Input Devices

ID1: For specific applications or training, use an input device adapted to the task. For applications training the user in the use of a specific tool, use an adapted device resembling the real tool. Place the 3D model of the device in the avatar's hand (see guideline FB5). Using an adequate device for a task yields better performance [Pham and Stuerzlinger]. This is essential, for example, when learning technical gestures; otherwise generic controllers can be used.

ID2: Prioritise full-hand tracking when possible. Use hand tracking technologies (Leap Motion, gloves) instead of controllers when the tracking works well. In case of tracking issues, consider the use of 6-DoF controllers, especially controllers with capacitive sensors for the fingers. Using gestures and direct manipulation is preferred to ensure embodiment, realism and enjoyment, but only as long as the tracking is sufficient [Lin et al. 2019; Moehring and Froehlich]. Six-DoF controllers still yield good levels of embodiment in most experiments.

Control and Mapping

CM1: For wide and fast motions, do not apply constraints. When wide and fast movements are made by the user, do not apply constraints (such as avoiding collisions or maintaining a hand pose on an object). Constraints might create a noticeable mismatch, decreasing the sense of self-location and potentially the sense of agency [Pritchard et al.; Kokkinara et al.].
CM2: For wide and fast motions, choose a CD gain equal to 1. For wide motions, it is best to keep a gain equal to 1. A gain lower than 1 could be perceived as latency. A higher gain can quickly create large offsets between the real hand and the virtual hand, affecting the sense of embodiment [Kokkinara et al.].

CM3: For small motions and precise manipulation, provide assistance to the user's movements. Use CD gains lower than 1 to precisely approach an object and/or automatic realistic poses to assist the user in their precise movements. It is possible to use subtle modifications of the motion without disturbing the sense of embodiment [Debarba et al.]. This must be used as assistance, to provide a visually realistic hand pose to the user during manipulation.

CM4: For precise manipulation, handle collisions. Constrain the avatar's hand and fingers to stay outside of virtual objects during manipulation. Users are more sensitive to collisions than to a small mismatch between the position of their virtual hand and that of their real one. Therefore, they prefer seeing their virtual hand outside of objects [Canales et al.].

CM5: Progressively increase the distortion gain to make it more acceptable. The CD gain does not have to be constant; it can evolve during the application. Decrease it slowly to help precise manipulation. Apply a gain to avoid tiring motion, and then set it back to 1. It was found that distorted motions are better accepted when the distortion is gradually introduced [Porssut et al.; Feuchtner and Müller 2018]. In this case, the sense of embodiment is less impacted.

Feedback

FB1: Provide a well-calibrated avatar. The avatar's body measurements (height, general scale, arm length) must match those of the user. Eye height is important to estimate object size [Leyrer et al. 2011]. Having a personalised hand size was found better for manipulating objects and estimating their size [Jung et al.; Wang et al. 1997].

FB2: Use first-person perspective. The viewpoint should be co-located with that of the avatar. First-person perspective is better both for accuracy in manipulation tasks and for the sense of ownership [Gorisse et al.]. This was already recommended by Gabbard to increase the sense of presence.

FB3: Use slightly transparent hands to prevent occlusions. Make the hand subtly transparent to avoid occlusions while still providing depth cues. It can become transparent when close to an object. Gabbard already proposed this guideline for selection (see guideline Select7).
Here we want to highlight that a slightly transparent hand, in addition to being beneficial for performance [Van Veldhuizen and Yang 2021], can also still elicit a sense of agency [Buchmann et al.] and ownership [Martini et al. 2015], which we confirmed in our study in Chapter 4, with the ghost representation eliciting a good sense of embodiment, especially with the Go-Go technique.

FB4: Preserve body continuity as much as possible. If anisomorphic motion is used for out-of-reach manipulation (Go-Go technique), use feedback such as a long arm illusion or a robotic arm, but do not break body continuity. Avoid too large an offset that would make the arm too long. Breaking body continuity during interaction can result in a loss of embodiment [Seinfeld and Müller 2020]. However, if the chosen feedback is a long arm, the embodiment can break when the arm reaches twice the length of the normal arm [Kilteni et al. 2012b]. In this case, other types of feedback could be used, like a "ghost" arm [Yang and Kim] that manipulates the remote object instead of the real arm (see Chapter 4). Raycasting can also be a good alternative since it is more efficient, albeit less intuitive [Poupyrev et al.].

FB5: Maximise multisensory coherency. If a controller is used, display its 3D model in the avatar's hand. It is best if the real hand pose is similar to the avatar's hand pose. It also provides coherency between visual and passive haptic feedback [Insko]. If various virtual tools are used in the application, new input devices have been invented to propose passive haptic feedback adapted to the virtual tool [Zenner and Krüger].

FB6: Provide haptic feedback. Use devices that provide haptic feedback during manipulation. Haptic feedback can increase the sense of embodiment [Fröhner et al.]. It can also help to improve performance when hand tracking is used [Moehring and Froehlich].

FB7: Use realistic avatars when using distorted motion. Use realistic avatars to make distorted motion less noticeable by the user. One condition for this is to have good hand tracking. Realistic avatars have several advantages, as they make remapped movements less noticeable [Ogawa et al. 2020a] and increase the sense of ownership [Waltemate et al. 2018]. In Chapter 4, we also showed that people preferred to interact with a realistic arm rather than a ghost arm during distorted manipulation. However, realistic avatars increase users' expectations in terms of control [Argelaguet et al.], therefore they should mostly be used when the tracking works well.

Future Research Topics

The current state of the art has let us identify several guidelines for designing manipulation techniques.
However, there is such a high variety of input and output devices, manipulation techniques and possible virtual representations that our knowledge remains partial. More studies are needed to refine the guidelines, and perhaps make them evolve as VR hardware evolves. This section presents the current gaps in the literature on manipulation techniques and avatars, and proposes a number of research topics that should be explored in the future. Most of these research topics arise from the fact that the design of manipulation techniques rarely takes the avatar into account in the design process. This completes the lines of research on user representation proposed by Seinfeld et al. [2020a].

Study the perception of motion alterations using different user representations. Ogawa et al. [2020a] compared the detection thresholds for a spherical cursor and a realistic hand, and found that distorted motion was less noticeable with a realistic hand. More studies of this kind are needed to investigate the role of avatars in the detection of distorted motion. It is necessary to understand how to safely modify motion without the user noticing it, or, on the contrary, how to make the alteration more understandable so that it is better accepted (e.g. by displaying a ghost hand showing real movements, as we proposed in Chapter 4). In addition to detection thresholds, motion alterations can influence how users perceive the task. For example, several works have shown that changes in the CD gain could elicit illusions of weight when lifting virtual objects [Jauregui et al.].

Study different possibilities of feedback and mappings for controller-based input when an avatar is involved. The perceived sense of embodiment towards the virtual representation highly depends on the mapping and feedback used, based on the information provided by the input device. While there are several studies on hand tracking, there is a lack of investigation into the visual feedback of controller-based input [Lougiakis et al. 2020]. Different mappings for a chosen input device should also be tested.

Compare different input devices to control the avatar during manipulation. There is an increasing number of proposed controllers that should be compared with one another. For one user representation, several devices can be used to control it [Alzayat et al.]. The type of input device was found to affect both interaction efficiency and embodiment [Lin et al. 2019]. It would be interesting to compare controllers allowing finger movement (e.g. Valve Index controllers) to controllers that need to be grabbed (e.g. Vive controllers).

Propose new techniques to separate axes during manipulation while maintaining embodiment. Constraining manipulation is sometimes needed for the sake of accuracy. For example, the user might want to move an object on a plane, or along an axis. Inspired by 3D desktop interfaces using widgets, similar techniques have been invented for immersive VEs [Mendes et al. 2016]. If the user interacts with these widgets via an avatar, additional occlusion and clutching problems can appear. If the avatar does not correctly hold the widget, the feedback can seem unrealistic and disturb embodiment.
If the movement is only constrained along a certain axis, users might feel a loss of agency, so this does not seem like a good solution either. It could be interesting to imagine new techniques to manipulate objects independently along different axes without affecting embodiment.

Compare different visual feedback for common techniques. Different visual feedback can be provided for one chosen technique. For unrealistic interaction, it may be better to provide virtual tools that suggest the use of the technique, for example, putting a laser pointer in the avatar's hand to use raycasting. In some cases, the avatar can be adapted to the technique (a long arm for the Go-Go), but this modifies the avatar's appearance and may have a greater impact on the sense of embodiment. The impact of visual feedback has been investigated for the Go-Go technique [Feuchtner and Müller 2017], and we have proposed in Chapter 4 a first study exploring the use of dual representations with anisomorphic techniques. Yet, there is a need for more similar studies testing different feedback and their impact on performance and embodiment. Another solution could be to decouple the avatar from the tool used for manipulation. When using anisomorphic manipulation, the transformations could be applied only to the tool, not the avatar. This type of feedback would not affect the avatar itself, thus potentially preserving embodiment.

Study the influence of visual feedback during grasping (collisions, hand poses) on embodiment, especially on the sense of agency. Nowadays, frameworks (e.g. SteamVR) make it easy to define hand poses for object manipulation, to provide a more realistic visual feedback. Users can be guided to defined poses with subtle assistance, with no impact on embodiment [Porssut et al.]. They are biased towards self-attributing assisted movements [Debarba et al.]. However, overly simplistic mappings during grasping might make the user feel less in control. New solutions are starting to be proposed to automatically compute hand poses in real time [Tian et al.], which could potentially improve embodiment. The moment users release the hand pose might also be crucial for feeling in control, especially when no haptic feedback is used, yet it has rarely been investigated so far [Prachyabrued and Borst 2012a].

Explore the impact of multimodal sensory feedback during manipulation on the sense of embodiment. While there are studies showing the importance of synchronous multimodal feedback on the sense of embodiment, there is a lack of literature on the importance of haptic and auditory feedback, especially during interaction tasks. Yet, such feedback has been found to impact body perception [Fröhner et al.; Tajadura-Jiménez et al. 2015b]. Manipulating objects offers a source of potential feedback that might increase the sense of embodiment.

Conclusion

In this chapter, we proposed several guidelines to design techniques compatible with avatars. These guidelines concern input devices, control and feedback. We also proposed several research topics that still need investigation.
We hope that in the future, thanks to deeper research on this topic, we will be able to reach good trade-offs ensuring both embodiment and efficient manipulation. Finally, the avatar-friendly concept can be extended to other types of interaction (navigation, system control) to provide vibrant 3D experiences to users, as will be discussed in the next chapter among other perspectives.

"The end has no end"
The Strokes

Chapter 6

CONCLUSION

The goal of this thesis, entitled "Towards Avatar-Based Interaction in Virtual Reality", was to better understand the interrelations between the user, the avatar and the interaction in VR, in order to enable rich interaction in the future while maintaining good avatar embodiment. Three research axes (RA), presented in the introduction, guided this research. In this chapter, the different contributions are summarised and some perspectives for future research are presented.

Contributions

First, it was important to better understand the relation between the user and the avatar, to know how virtual embodiment is elicited. While several works focused on external factors, i.e. factors depending on the virtual environment or the avatar, this thesis explored internal factors depending on the user (RA1: Influence of the user's characteristics on the sense of embodiment). In Chapter 2, we presented our first contribution, investigating the potential impact of body awareness and several personality traits (Big Five traits and locus of control) on the sense of embodiment. While there was no apparent effect of the Big Five traits and body awareness, positive correlations were found between the sense of embodiment and locus of control. In particular, an internal locus of control seems to be correlated with the sense of agency, while an external locus of control seems to affect the sense of ownership. These results can encourage the community to dig deeper into the influence of internal factors, to better understand individual differences in embodiment scores. Another factor that had not been investigated deeply was the effect of interacting with the VE. To ensure embodiment while interacting, it is essential to know which techniques are better for virtual embodiment, and how they can affect it. While there was an increasing number of works on the impact of manipulation on embodiment, locomotion techniques had rarely been investigated and their impact on embodiment was unknown (RA2: Influence of the interaction technique on the sense of embodiment). Moreover, most papers on locomotion did not use avatars, thus not taking their potential impact on locomotion into account (RA3: Influence of the avatar on the interaction). Therefore, we decided to investigate the interrelations between locomotion techniques and avatars. An experiment was conducted, presented in Chapter 3, using three different locomotion techniques involving different levels of physical motion: real walking, walking-in-place and head steering. In all conditions, the avatar's movements reproduced the users' movements. The user representation was used as a within-subject factor, with two levels: only 3D controllers or a full-body avatar. No significant difference between the locomotion techniques was found for the sense of embodiment, showing that having an avatar reproducing users' movements may be more important than the technique's interaction fidelity. However, participants were particularly disturbed by the leg artifacts caused by inverse kinematics, especially in the walking condition.
More investigation on the impact of such artifacts could lead to a deeper understanding of the interrelations between embodiment and locomotion techniques. In addition to locomotion, another essential task in VR is object manipulation. Several techniques, called anisomorphic techniques, distort users' motion to facilitate interaction, e.g. to reach remote objects or to increase precision. These motion distortions can impact the sense of embodiment. Therefore, in Chapter 4, we presented two experiments that investigated the use of dual body representations with amplified motion (using the Go-Go technique) and decreased motion (using the PRISM technique). Dual body representations displayed both distorted and real movements. This first study investigating dual representations in this context showed that it was possible to feel a global sense of embodiment towards such representations (RA2), and that they had no impact on performance (RA3). While interacting seemed more important than showing exact movements for agency during out-of-reach manipulation, people felt more in control of the realistic arm during close manipulation. We also found that people globally preferred having a single representation, but opinions diverged, especially for the Go-Go technique. As we mentioned in the related work, the literature on manipulation techniques and avatars is more extensive than on other types of techniques. We analysed previous work on this topic to propose fourteen guidelines, presented in Chapter 5, to help VR developers design avatar-friendly manipulation techniques, i.e. techniques which take avatar embodiment into account during the design process (RA2).

Perspectives

While this thesis is a new step towards avatar-friendly interaction techniques, our studies have some limitations and there are still many potential leads to explore in order to achieve rich interaction with avatars in VR.

Improve Embodiment Measures

To better understand how the sense of embodiment is elicited, we need to measure it, which is not trivial. For this reason, most studies use subjective questionnaires to measure embodiment. In this thesis, we used such questionnaires in all of our user studies. These questionnaires vary a lot between experiments, as they highly depend on the experimental conditions. There have been several attempts to create a standardised validated questionnaire [Peck and Gonzalez-Franco 2021; Roth et al.]. Roth et al. [2017] constructed a novel embodiment questionnaire by relying on factor analysis, offering new leads towards standardised questionnaires. Interestingly, their work brought out a novel dimension for evaluating embodiment, named Change (in the perceived body schema), which was not present in the subcomponents proposed by Kilteni et al. [2012a]. This suggests that novel dimensions, possibly more related to interaction, might emerge from the formal construction of novel questionnaires. It would be interesting to gain more insight into current questionnaires, to know when to use them, and maybe to develop new ones. Having only one standardised questionnaire working for all embodiment experiments seems like a complex task to achieve [Skarbez et al.
2021a]; therefore, it might be interesting to have numerous small questionnaires adapted to specific cases. Moreover, subjective questionnaires are not always adaptable to every experiment, and they still highly rely on participants' understanding of the questions. Studies therefore often add behavioural responses to substantiate the results. While these responses are often used as an additional measure, and several works show positive correlations between behavioural responses and the sense of embodiment [Yuan and Steed; Zhang et al.; Kilteni et al.], it is complicated to know whether they rely on neurological processes similar to those underlying virtual embodiment [Ma and Hommel 2013]. The ideal measure would be to have access to users' own mental representations of themselves at any time, but this is not feasible yet. A few cognitive models have been proposed that try to explain the brain processes eliciting the sense of embodiment [Braun et al.]. One main hope for the future is to be able to use brain signals, captured by EEG or MRI, to identify characteristic signatures of avatar embodiment [Jeunet et al.; Alchalabi et al. 2019].

Offer Customised Parameters to Ensure Embodiment

This thesis highlighted some potential individual factors influencing the sense of embodiment (see Chapter 2). For the first study on this topic, we chose a number of personality questionnaires to explore the potential influence of individual traits. Given the number of personality questionnaires and cognitive models in the literature, it was not possible to be exhaustive, and we decided to focus on some of the most common models (i.e. Big Five, Locus of Control, Body Awareness). Further studies exploring the influence of other traits and inter-personal aspects could be interesting to improve our understanding. For example, other interesting traits that could be studied would be empathy, which was already shown to have an effect on the RHI and presence [Asai et al.; Seiryte and Rusconi; Sas and O'Hare], or absorption, which has been found to be linked to presence [Kober and Neuper]. Other personal information could also be investigated, such as cultural differences or racial information. More knowledge will also come from studies using machine learning to enable the identification of different user profiles that react differently to VR and embodiment. Some studies already go in the direction of customised experiences, by using reinforcement learning to adapt the experience to each user [Porssut et al.; Llobera et al. 2021].
The ultimate goal would be to be able to predict embodiment levels and provide a customised experience optimising embodiment for each user. A pre-experiment questionnaire, which identifies important individual factors, could be proposed so as to predict a user's level of embodiment prior to the immersion in VR. Being able to create such a questionnaire could prove a valuable tool in the future to adapt the virtual experience to users in order to maximise their sense of embodiment. However, this would also require additional knowledge about which adaptations are better suited to some categories of users than to others.

Study the Importance of Task and User Focus

In VR, an infinity of tasks is possible, and different tasks could have a different influence on embodiment. In our studies, we always had to choose which tasks to perform, and this choice was often made so as to ensure that people could see the avatar. However, other tasks could be explored. In Chapter 4, we decided to focus on two types of anisomorphic manipulation techniques. As future work, it could be interesting to explore other tasks creating larger offsets between the real and the virtual hand. For example, interacting with surfaces that handle collisions could create larger offsets. We also only used visuomotor tasks in our studies, but it would be possible to replicate these studies with visuotactile stimulation, for example, as it requires subjects to look at the virtual body. While we wanted to make users interact with the VE, and thus used visuomotor stimulation, as it was the focus of this thesis, results might differ a little when using different sensory stimulation. A recurrent problem in embodiment studies is that participants do not always focus on their avatar, thus making subjective embodiment assessment complicated and unreliable. In our studies, participants sometimes reported that they tended to forget the avatar while doing the task. For example, in the studies of Chapter 4, a few participants reported difficulty in ranking the different virtual representations, as they focused on the task and not the avatar during the interaction. As in real life, the virtual body is not always necessary to perform a task [Ricca et al.]. Therefore, exploring tasks that involve the user's virtual representation more would also be interesting. In the future, it would be better to always know where users direct their attention during an experiment, to determine whether it has an impact on the results, or whether having the avatar in the peripheral vision is enough to estimate embodiment. To this end, eye tracking could be an interesting solution to use in embodiment studies to explore the effect of paying attention to the virtual body.

Provide High-Quality Motion Capture

In this thesis, we used simple animation techniques (i.e. Inverse Kinematics) to animate avatars during user studies. While Inverse Kinematics is often used in embodiment studies, and is quite efficient for animating upper limbs, it showed its limitations mostly in the experiment on locomotion in Chapter 3, in which some participants reported being disturbed by artifacts in the leg movements. As previous work showed a user preference for high-quality motion capture over Inverse Kinematics [Fribourg et al.
2020a], it could be interesting to run a similar study with higher-quality motion capture systems to evaluate whether differences in terms of embodiment could be influenced by the animation quality in such cases. These artifacts were also due to the fact that, while we used global calibration to match the avatar's height to the participant's, the avatar's morphology was still different. Several solutions are now starting to appear to easily and precisely calibrate an avatar to the participant's morphology [Pujades et al.]. In the future, it will be possible to have perfectly calibrated avatars and good motion tracking (with external sensors such as cameras, or wearables like inertial sensors).

Extend Design Guidelines to All Interaction Techniques

In Chapter 5, we proposed several guidelines for designing avatar-friendly manipulation techniques. Of course, this concept does not need to be limited to manipulation techniques, but can be extended to all types of interaction techniques. For instance, avatars are starting to be considered in the design of navigation [Medeiros et al. 2019] or system input techniques [Grubert et al.; Knierim et al.]. When the related work on avatars and other types of techniques is more developed, it will be possible to propose more guidelines. For locomotion techniques, we tried to go in this direction in Chapter 3 by investigating avatar embodiment when using different locomotion techniques. However, more studies are needed to propose guidelines for designing an optimised locomotion technique that ensures embodiment, while ideally also preserving ease of navigation and the sense of presence, and limiting motion sickness. For system input, it would also be interesting to have design guidelines. Most of them would be similar to the ones for manipulation techniques, but other specific guidelines could be added, such as using more body-centred widgets, an avatar-friendly design already used in most VR applications.

Rethink Evaluation Methods

In Chapter 5, we proposed guidelines for early design phases. As avatars can impact interaction (as detailed in Section 1.3; e.g., influencing users' speed [Tran et al. 2017]), we believe that it is becoming important to also consider evaluating the influence of the avatar throughout the whole design process, as well as to consider novel evaluation methods assessing the compatibility between an avatar and an interaction technique. Developers and researchers should therefore use embodiment as a novel criterion for assessing the quality of interaction. During iterations of the design process, different types of evaluation are typically performed, where designers use several criteria and heuristics to identify flaws in a current implementation (formative evaluation) or to compare a new final implementation to benchmarks or older implementations (summative evaluation). These criteria are mostly performance-oriented (e.g. accuracy, speed) or user-oriented (e.g. ease of use, predictability).
Given the literature now available with regard to the sense of embodiment, we believe that it should progressively become a new criterion for measuring the quality of interaction, just as breaks in presence were already proposed as new criteria to measure usability [Steed et al.]. In the future, the goal would therefore be to design efficient interaction techniques that do not disrupt embodiment, in which case questions related to the sense of embodiment [Gonzalez-Franco and Peck; Roth and Latoschik] should be added to formative and summative evaluations to complete them during the iterative design process.

Extend On-Body Interaction

Body-centred interaction is inherently avatar-friendly, as it focuses the interaction on the body, therefore making the virtual body useful and helping its appropriation. While it is already common to have body-centred information and widgets in VR applications, this could go further, for example by using the skin as input, which has not been deeply explored. Self-touch was already found to be beneficial for embodiment [Bovet et al.; Hara et al.]. It would be interesting to use more on-skin interaction [Weigel et al.; Meier et al. 2021] and to see its impact on embodiment. Making the avatar central during interaction and interacting directly with the skin, with congruent visuotactile stimulation, could make users regularly focus on the body and could elicit high levels of embodiment.

Embody Multiple Bodies

As highlighted in Chapter 4, multiple bodies have not yet been investigated much in VR studies, but they might be explored more in the future. It would be interesting to propose a definition and a taxonomy of such multiple representations. Several differentiating factors already stand out in such representations: visual appearance, perspective (first-person versus third-person), control (isomorphic versus anisomorphic; synchronised versus alternate), and location (co-located bodies or not). Dual representations could designate co-located bodies seen from a first-person perspective, while multiple bodies seen from different perspectives, as in the study by Miura et al. [2021], could be called duplicated representations. Multiple body representations could steer new psychological studies on embodiment, to understand whether embodiment switches between several representations or whether a "parallel" embodiment emerges, global to all representations [Miura et al. 2021]. In collaborative applications, it would even be possible to create shared embodiment, where multiple users could share the control not only of one avatar [Fribourg et al.] but of multiple avatars. This could offer the user multiple points of view, shared with those of other users, which might be a way to stimulate empathy and collaboration, as it would develop a common perception of the virtual world.
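A minimal sketch of how such shared control could be implemented is given below, assuming each user's tracked pose is blended per joint into one shared avatar pose. The 50/50 weight, the joint values and the use of normalised linear interpolation (nlerp) instead of a proper slerp are simplifying assumptions, not the method of the studies cited above.

    import numpy as np

    def blend_positions(p_a, p_b, w):
        """Weighted blend of two users' tracked joint positions (w = user A's share)."""
        return w * np.asarray(p_a, float) + (1.0 - w) * np.asarray(p_b, float)

    def blend_rotations(q_a, q_b, w):
        """Normalised linear interpolation (nlerp) of two unit quaternions (x, y, z, w).
        A cheap stand-in for slerp, adequate when the two orientations are close."""
        q_a, q_b = np.asarray(q_a, float), np.asarray(q_b, float)
        if np.dot(q_a, q_b) < 0.0:  # take the shorter arc
            q_b = -q_b
        q = w * q_a + (1.0 - w) * q_b
        return q / np.linalg.norm(q)

    # Hypothetical per-joint update for a shared avatar, with 50/50 control:
    joint_pos = blend_positions([0.1, 1.2, 0.3], [0.2, 1.1, 0.35], w=0.5)
    joint_rot = blend_rotations([0, 0, 0, 1], [0, 0.1, 0, 0.995], w=0.5)

Varying the weight per joint or over time would allow different sharing schemes, for instance giving each user full control of one arm.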
To conclude, we hope this thesis will steer new studies on the interrelations between avatars and interaction and open more questions on this topic, to invent incredible avatar-friendly interaction techniques.

LIST OF FIGURES

1.2 Action-perception loop: the user can control the avatar thanks to the input devices, so as to interact with the VE. In response to the user's actions, the VE provides feedback perceived by the user thanks to the output devices.
2.1 From left to right: an example of a trajectory to draw during the experimental task; a view of the scene from behind; another virtual character stabbing the participants' virtual hand at the end of the experiment to measure their response to the threat on their virtual body.

External Factors Influencing Embodiment

Two main methods exist to elicit a sense of embodiment. The first is visuotactile stimulation, in which an object (a paintbrush, a ball) is used to touch the user's real body at the same time as the equivalent virtual object touches the virtual body [Slater et al. 2008]. The second main method is visuomotor stimulation, which requires the avatar to reproduce the user's movements [Kokkinara and Slater]. This method has proven more effective than visuotactile stimulation [Kokkinara and Slater]. Spatial and temporal synchronisation is extremely important, notably for the sense of agency [Jeunet et al. 2018]. Beyond these external factors concerning the type of stimulation, other factors have also been identified, such as the avatar's appearance [Maselli and Slater 2013; Waltemate et al. 2018; Gorisse et al.].

Internal Factors Influencing Embodiment

Some personality traits could be related to the sense of embodiment. Links have been found between the strength of the rubber hand illusion and certain personality traits [David et al.; Asai et al.]. Recent work has started to look at the potential role of personality traits in virtual embodiment. One example is the work of Jeunet et al. [2018], which showed that the sense of agency is related to an internal locus of control, i.e. people who feel they are at the origin of the events in their lives also feel more in control of their avatar. In Chapter 2 of this thesis, we present a user study conducted with 123 participants on the potential impact of personality on the sense of embodiment.
Several traits were studied: the "Big Five" traits [John et al.; Plaisant et al.] (Openness to experience, Conscientiousness, Extraversion, Agreeableness, Neuroticism), the locus of control [Levenson 1981; Loas et al. 1994] and body awareness [Shields et al.; Dumont et al.]. A subjective questionnaire was used to measure the sense of embodiment. In addition to this questionnaire, a threat was introduced at the end of the experiment to observe the user's reaction. This method is common in embodiment studies, since the response to the threat is usually positively correlated with the intensity of the sense of embodiment [Yuan and Steed]. Overall, we found that the locus of control seemed to be related to two components of embodiment. The sense of ownership appears to be influenced rather by a so-called external locus (the belief that what happens to us is due to outside elements such as fate), whereas the sense of agency is positively correlated with a so-called internal locus (the belief that what happens to us is due to our own actions). We also found that the reaction to the threat and its perception were positively correlated with neuroticism.

In Chapter 4, we study another type of interaction technique, object manipulation. More precisely, we focus on anisomorphic manipulation, which modifies the user's movements, for example amplifying them to reach distant objects, or slowing them down to gain precision. This type of technique can have an impact on the sense of embodiment, because the avatar's movements differ from the user's movements. The senses of agency and co-location are potentially the most affected, since there is an offset between real and virtual movements [Kokkinara and Slater; Pritchard et al.]. We therefore conducted a study to explore a potential solution: the use of a dual representation, in which one representation shows the distorted movements supporting the interaction, while another, co-located representation shows the user's real movements (a minimal sketch of such a mapping is given below).

Conclusion

The various contributions of this thesis aim to support the development of avatar-compatible interaction techniques, that is, techniques that are effective but also maintain a good sense of embodiment towards the avatar. By studying both how embodiment arises depending on user profiles and how different interaction techniques can influence this embodiment, this thesis is as much about action as about perception.
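The sketch below illustrates the amplified-motion case, with the well-known Go-Go mapping [Poupyrev et al. 1996] driving the interactive hand while a co-located ghost simply mirrors the tracked hand. The threshold distance and amplification coefficient are illustrative values, not calibrated parameters.

    import numpy as np

    def gogo_position(hand_pos, chest_pos, d_threshold=0.4, k=1.0 / 6.0):
        """Go-Go non-linear mapping: isomorphic within arm's reach, amplified
        beyond the threshold distance d_threshold (metres)."""
        offset = np.asarray(hand_pos, float) - np.asarray(chest_pos, float)
        d = np.linalg.norm(offset)
        if d < d_threshold:
            return np.asarray(hand_pos, float)        # 1:1 near the body
        d_virtual = d + k * (d - d_threshold) ** 2    # amplified reach
        return np.asarray(chest_pos, float) + offset / d * d_virtual

    # Dual representation: the interactive hand follows the amplified position,
    # while a co-located ghost mirrors the tracked (real) hand position.
    real_hand = [0.0, 1.3, 0.7]
    chest = [0.0, 1.3, 0.0]
    interactive_hand = gogo_position(real_hand, chest)  # drives manipulation
    colocated_ghost = np.asarray(real_hand)             # shows the real movements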
In the future, it would be interesting to deepen our knowledge of embodiment by devising better ways to measure it and to maximise it for each user. It would also be interesting to explore new interaction methods such as on-skin interaction, and to extend the design guidelines to other interaction techniques beyond object manipulation. In the long term, the possibility of rich interactions through an avatar will, for example, allow users to try out and learn new skills in VR.

Abstract: Virtual reality enables users to be immersed in an alternative world. In this virtual world, they can move around or interact with objects through their virtual body, also called an avatar, which tells them where they are and what they are doing. Users can even feel a sense of embodiment towards this avatar and have the impression that it is their real body. This illusion offers an interesting user experience and can have surprising consequences such as increasing a person's cognitive abilities. However, it is not clear yet how this illusion is elicited. Furthermore, current interaction techniques are sometimes not compatible with realistic avatars, especially because they sometimes distort the user's movements, and thus potentially the avatar's movements and body, which can influence embodiment. This thesis proposes several studies to better understand the relationship between user, avatar and interaction. It focuses on the sense of embodiment, investigating both individual factors and the impact of different forms of feedback during interaction tasks (locomotion and object manipulation). Design guidelines for avatar-compatible manipulation techniques are also provided based on existing literature, as well as leads for future research. The ultimate goal is to have effective interaction that provides a high sense of embodiment.

Figure 1.5 - Examples of different possible hand appearances for a manipulation task. Image from [Argelaguet et al. 2016].
Figure 1.8 - Two different input modalities: hand tracking versus controllers. Hand tracking is preferred and better for the sense of embodiment. Image from [Lin et al. 2019].
Figure 1.9 - Example of a widget to constrain manipulation. Image from [Mendes et al. 2016].
Figure 1.10 - First-person perspective vs. third-person perspective. Image from [Galvan Debarba et al. 2017].
Figure 2.1 - From left to right: an example of a trajectory to draw during the experimental task; a view of the scene from behind; another virtual character stabbing the participants' virtual hand at the end of the experiment to measure their response to the threat on their virtual body.
Figure 2.2 - Examples of four trajectories that participants were instructed to perform during the experiment with either their left or their right hand.
Figure 2.3 - The two avatar models used in our experiments, which were matched to the gender of the participant.
Figure 2.4 - Contributions (i.e. weights) of the embodiment questions to the different components (O_PC1,M and O_PC2,M (for men), O_PC1,F and O_PC2,F (for women), A_PC1, EA_PC1, T_PC1).
Figure 3.1 - Illustration of the four tasks performed by the participants in our user study, all in the full-body avatar condition.
The same tasks were also performed without an avatar. From left to right: Training task, Corridor task, Path-following task and Columns task.

Table (fragment) - Corridor task measures per condition (M, SD):
…14, SD = 2.96 | M = 8.50, SD = 2.91
Ratio shoulder collisions: M = 0.35, SD = 0.41 | M = 0.37, SD = 0.38
Nb mushrooms: M = 22.7, SD = 7.13 | M = 23.45, SD = 6.41

Figure 3.3 - Scores by Technique for the question C_preferBody; no significant difference between the techniques but a high dispersion for the S condition.
Figure 4.1 - Illustration of two types of dual body representations studied in this chapter. On the left image, a ghost representation enables remote manipulation while a realistic co-located representation provides feedback with respect to the real user's position. On the right image, a ghost representation provides feedback with respect to the user's real position while a realistic representation enables precise manipulation with the environment thanks to slowed motion.

4.4 Experiment 1: Dual Representations for Increased Motions During Out-of-Reach Manipulation

Figure 4.2 - The three levels of Representation in the first experiment with amplified motion. The interactive arm is either realistic (in R-Ø or R-G) or transparent (in G-R). When a co-located arm is displayed, a realistic representation (in G-R) or a ghost one (in R-G) is used.
Figure 4.3 - Results for the two components Response and Multi-Sensory from the PGF questionnaire in Experiment 1, for which a main effect of Representation was found. Only the pairwise test on the Response component showed a significant difference between R-Ø and G-R (p < 0.05). There was no main effect for other components.
Figure 4.4 - Ranking for the different conditions in the Go-Go experiment. R-Ø was preferred with 10 participants ranking it as first, followed by R-G and finally G-R.
Figure 4.5 - Results of additional questions on the senses of ownership and agency for G-R and R-G in the experiment with the Go-Go technique. * p < 0.05, ** p < 0.01
Figure 4.6 - Different levels of Representation in the second experiment with amplified motion. The interactive arm is either realistic (in R-Ø or R-G), transparent (in G-R) or not displayed (in Ø-R). A co-located arm is displayed, either a realistic one (in G-R and Ø-R) or a transparent one (in R-G).
Figure 4.7 - Results for the two components Response and Ownership, as well as the global Embodiment score (from the PGF questionnaire) in Experiment 2, for which a main effect of Representation was found. Only the pairwise test on Embodiment scores showed a significant difference between R-Ø and R-G (p < 0.05). There was no main effect for other components.
Figure 4.8 - Results of additional questions on the senses of ownership and agency for G-R and R-G in the experiment with the PRISM technique. * p < 0.05, ** p < 0.01
Figure 4.9 - Ranking for the different conditions in the PRISM experiment. R-Ø was preferred with 13 participants ranking it as first.

1 Illustration of the different research axes, linking the interaction, the user, the avatar, and the sense of embodiment (SoE).
1.1 Revisited Milgram's Reality-Virtuality Continuum by Skarbez et al. [2021b].
1.3 Control of an avatar with a different morphology, for example an avatar with a third arm. Image from [Laha et al.].
1.4 First experiment in VR using visuotactile stimulation to elicit a sense of embodiment towards a virtual arm. Image from [Slater et al. 2008].
1.5 Examples of different possible hand appearances for a manipulation task. Image from [Argelaguet et al. 2016].
1.6 The participant is represented by either a realistic hand (on the left image) or a spherical pointer (on the right). The transparent hand represents the user's real hand, not displayed in the VE. Image from [Ogawa et al. 2020a].
1.7 Illustration from the experiment investigating the impact of avatar appearance on the way users drum in VR. Participants are represented either by flat shaded white hands (A), a dark-skinned avatar with casual clothes (B), or a light-skinned avatar with fancy clothes (C). Image from [Kilteni et al. 2013].
1.8 Two different input modalities: hand tracking versus controllers. Hand tracking is preferred and better for the sense of embodiment. Image from [Lin et al. 2019].
1.9 Example of a widget to constrain manipulation. Image from [Mendes et al. 2016].
1.10 First-person perspective vs. third-person perspective. Image from [Galvan Debarba et al. 2017].
2.2 Examples of four trajectories that participants were instructed to perform during the experiment with either their left or their right hand.
2.3 The two avatar models used in our experiments, which were matched to the gender of the participant.
2.4 Contributions (i.e. weights) of the embodiment questions to the different components (O_PC1,M and O_PC2,M (for men), O_PC1,F and O_PC2,F (for women), A_PC1, EA_PC1, T_PC1).
3.1 Illustration of the four tasks performed by the participants in our user study, all in the full-body avatar condition. The same tasks were also performed without an avatar. From left to right: Training task, Corridor task, Path-following task and Columns task.
3.2 Left: Setup in the physical environment. The HTC Vive with a wireless adapter was used to enable participants to physically walk in an 8m × 8m area. The tracking of the user's motions was enabled by the two hand controllers and two HTC Vive Trackers attached to their ankles. Two additional HTC Vive trackers were also positioned on a backpack to track participants' shoulders. Tracking was done using four HTC Vive lighthouses. Right: Overview of the virtual environment: the navigation area was constrained by the virtual fences which matched the limits of the physical space.
3.3 Scores by Technique for the question C_preferBody; no significant difference between the techniques but a high dispersion for the S condition.
4.1 Illustration of two types of dual body representations studied in this chapter. On the left image, a ghost representation enables remote manipulation while a realistic co-located representation provides feedback with respect to the real user's position.
On the right image, a ghost representation provides feedback with respect to the user's real position while a realistic representation enables precise manipulation with the environment thanks to slowed motion.
4.2 The three levels of Representation in the first experiment with amplified motion. The interactive arm is either realistic (in R-Ø or R-G) or transparent (in G-R). When a co-located arm is displayed, a realistic representation (in G-R) or a ghost one (in R-G) is used.
4.3 Results for the two components Response and Multi-Sensory from the PGF questionnaire in Experiment 1, for which a main effect of Representation was found. Only the pairwise test on the Response component showed a significant difference between R-Ø and G-R (p < 0.05). There was no main effect for other components.
4.4 Ranking for the different conditions in the Go-Go experiment. R-Ø was preferred with 10 participants ranking it as first, followed by R-G and finally G-R.
4.5 Results of additional questions on the senses of ownership and agency for G-R and R-G in the experiment with the Go-Go technique. * p < 0.05, ** p < 0.01
4.6 Different levels of Representation in the second experiment with amplified motion. The interactive arm is either realistic (in R-Ø or R-G), transparent (in G-R) or not displayed (in Ø-R). A co-located arm is displayed, either a realistic one (in G-R and Ø-R) or a transparent one (in R-G).
4.7 Results for the two components Response and Ownership, as well as the global Embodiment score (from the PGF questionnaire) in Experiment 2, for which a main effect of Representation was found. Only the pairwise test on Embodiment scores showed a significant difference between R-Ø and R-G (p < 0.05). There was no main effect for other components.
4.8 Results of additional questions on the senses of ownership and agency for G-R and R-G in the experiment with the PRISM technique. * p < 0.05, ** p < 0.01
4.9 Ranking for the different conditions in the PRISM experiment. R-Ø was preferred with 13 participants ranking it as first.
6.1 Illustration of the different research axes, linking the user, the interaction, the avatar and the sense of embodiment (SoE).
6.2 Action-perception loop: the user can control an avatar thanks to input devices, in order to interact with the virtual environment. In response to the user's actions, the virtual environment returns feedback that is perceived by the user via output devices.
6.3 First experiment in virtual reality using visuotactile stimulation to create a sense of embodiment towards a virtual arm. Image from [Slater et al. 2008].
6.4 From left to right: an example of a trajectory to draw during the experiment; a view of the virtual scene from behind; another virtual character stabbing the participant's virtual hand at the end of the experiment to measure their response to the threat on their virtual body.
6.5 Illustration of the four tasks performed by the participants during our user study, all in the full-body avatar condition. The same tasks were also performed without an avatar. From left to right: training task, corridor task, path-following task and columns task.
6.6 Illustration of two types of dual representations studied in this chapter. On the left image, a ghost representation enables remote manipulation while a co-located realistic representation provides feedback with respect to the user's real position. On the right image, a ghost representation provides feedback with respect to the user's real position while a realistic representation enables precise manipulation with the environment thanks to slowed motion.

Ogawa, Nami, Takuji Narumi, Hideaki Kuzuoka, and Michitaka Hirose (2020b), « Do You Feel Like Passing Through Walls?: Effect of Self-Avatar Appearance on Facilitating Realistic Behavior in Virtual Environments », in: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, Honolulu, HI, USA, pp. 1-14.
Ono, Hiroshi, Brian J Rogers, Masao Ohmi, and Mika E Ono (1988), « Dynamic Occlusion and Motion Parallax in Depth Perception », in: Perception 17.2, PMID: 3226867, pp. 255-266.
Ouramdane, Nassima, Samir Otmane, Frédéric Davesne, and Malik Mallem (2006), « FOLLOW-ME: a new 3D interaction technique based on virtual guides and granularity of interaction », in: Proceedings of the 2006 ACM international conference on Virtual reality continuum and its applications, pp. 137-144.
Pacherie, Elisabeth (2007), « The sense of control and the sense of agency », in: Psyche 13.1, pp. 1-30.
Pan, Ye and Anthony Steed (2019), « How Foot Tracking Matters: The Impact of an Animated Self-Avatar on Interaction, Embodiment and Presence in Shared Virtual Environments », in: Frontiers in Robotics and AI 6, p. 104.
Park, Chanho and Kyungho Jang (2019), « Investigation of Visual Self-Representation for a Walking-in-Place Navigation System in Virtual Reality », in: 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), IEEE, pp. 1114-1115.
Pastel, Stefan, Chien-Hsi Chen, Katharina Petri, and Kerstin Witte (2020), « Effects of body visualization on performance in head-mounted display virtual reality », in: Plos one 15.9, e0239226.
Peck, Tabitha C. and Mar Gonzalez-Franco (2021), « Avatar embodiment. a standardized questionnaire », in: Frontiers in Virtual Reality 1, p. 44.
Peck, Tabitha C., Sofia Seinfeld, Salvatore M. Aglioti, and Mel Slater (2013), « Putting yourself in the skin of a black avatar reduces implicit racial bias », in: Consciousness and Cognition 22.3, pp. 779-787.
Peled, Avi, Michael Ritsner, Shmuel Hirschmann, Amir B Geva, and Ilan Modai (2000), « Touch feel illusion in schizophrenic patients », in: Biological Psychiatry 48.11, pp. 1105-1108.
Peña, Jorge, Subuhi Khan, and Cassandra Alexopoulos (2016), « I am what I see: How avatar and opponent agent body size affects physical activity among men playing exergames », in: Journal of Computer-Mediated Communication 21.3, pp. 195-209.

Figure 6.3 - First experiment in virtual reality using visuotactile stimulation to create a sense of embodiment towards a virtual arm. Image from [Slater et al. 2008].

Few studies link the two fields.
Yet there are interrelations between embodiment and interaction that need to be better understood. In the following sections, we present our various contributions towards a better understanding of the links between avatars and interaction, in order to create techniques that are more compatible with avatars.

Figure 6.4 - From left to right: an example of a trajectory to draw during the experiment; a view of the virtual scene from behind; another virtual character stabbing the participant's virtual hand at the end of the experiment to measure their response to the threat on their virtual body.
Figure 6.5 - Illustration of the four tasks performed by the participants during our user study, all in the full-body avatar condition. The same tasks were also performed without an avatar. From left to right: training task, corridor task, path-following task and columns task.
Figure 6.6 - Illustration of two types of dual representations studied in this chapter. On the left image, a ghost representation enables remote manipulation while a co-located realistic representation provides feedback with respect to the user's real position. On the right image, a ghost representation provides feedback with respect to the user's real position while a realistic representation enables precise manipulation with the environment thanks to slowed motion.

Title: Interaction Basée Avatar en Réalité Virtuelle (Avatar-Based Interaction in Virtual Reality)
Keywords: avatars, sense of embodiment, interaction, virtual reality
Abstract: Virtual reality immerses users in an alternative world. In this virtual world, they can move around or interact with objects through their virtual body, also called an avatar, which tells them where they are and what they are doing. Users can even feel embodied in this avatar and thus have the impression that it is their real body. This illusion offers an interesting user experience and can have surprising consequences, such as increasing a person's cognitive abilities. However, it is not yet clear how this illusion is created. Moreover, current interaction techniques are sometimes poorly compatible with realistic avatars, notably because they sometimes distort the user's movements, and thus potentially the avatar's body and movements, which can influence the sense of embodiment. This thesis proposes several studies to better understand the relationships between user, avatar and interaction. It focuses on the sense of embodiment, studying both individual factors and the impact of different forms of feedback during interaction tasks (locomotion and object manipulation). Design guidelines for avatar-compatible interaction techniques are also proposed, as well as research leads for the future. The long-term goal is to have effective interactions that ensure a high sense of embodiment.
Title: Towards Avatar-Based Interaction in Virtual Reality
Keywords: Avatars, sense of embodiment, interaction, Virtual Reality

2.3. Results

We ran a Polychoric Principal Components Analysis (Polychoric PCA) for each aspect of embodiment on the different questions, as Polychoric PCA takes into account the ordinal nature of Likert scales. This type of PCA has already been used in similar studies [Slater et al.]. As mentioned previously, a separate Polychoric PCA was run on men and women data in the case of ownership. As proposed in Gonzalez-Franco and Peck's questionnaire [2018], we used the empirical Kaiser criterion to automatically select the number of principal components explaining sufficient amounts of variance, then performed a PCA with this number of components using an oblimin rotation, enabling us to interpret the selected components (see the summary of the obtained components in Figure 2.4, and exact values in Table 2.2).

Table 2.2 - Contributions (i.e. weights) of the different questions to the men ownership (O_M), women ownership (O_W), agency (A), external appearance (EA), and threat perception (T) components.
A_PC1: A1 0.722 | A2 0.827 | A3 -0.443 | A4 -0.713
O_PC1,M / O_PC2,M: O1 0.740 | O2 0.102 / 0.966 | O3 -0.475 / 0.623 | O4 0.722 | O5 -0.756
O_PC1,F / O_PC2,F: O1 0.471 / 0.272 | O2 -0.846 | O3 0.829 | O4 0.864 | O5 -0.822
EA_PC1: EA1 0.716 | EA2 0.719 | EA3 0.727 | EA4 0.705
T_PC1: T1 0.919 | T2 0.913 | T3 0.915 | T4 0.917

Questionnaire items (Median[Q1,Q3]; ownership items report separate medians for men and women):
O1 | I felt as if the virtual body was my body | Men 4[3,5]; Women 3[2,4]
O2 | I felt as if the virtual body I saw was someone else | 4[3,5]
O3 | It seemed as if I might have more than one body | 2[1,4]
O4 | I felt as if the virtual body I saw when looking in the mirror was my own body | 3[2,5]; 4[2,5]
O5 | I felt as if the virtual body I saw when looking at myself in the mirror was another person | 5[3,6]
A1 | It felt like I could control the virtual body as if it was my own body | 6[5,6]
A2 | The movements of the virtual body were caused by my movements | 7[6,7]
A3 | I felt as if the movements of the virtual body were influencing my own movements | 2[1,4]
A4 | I felt as if the virtual body was moving by itself | 1[1,2]
L1 | I felt as if my body was located where I saw the virtual body | 6[4,6.5]
L2 | I felt out of my body | 2[1,4]
EA1 | It felt as if my (real) body were turning into an "avatar" body | 3[1,5]
EA2 | At some point it felt as if my real body was starting to take on the posture or shape of the virtual body that I saw | 2[1,5]
EA3 | At some point it felt that the virtual body resembled my own real body, in terms of shape, skin tone or other visual features | 2[1,4]
EA4 | I felt like I was wearing different clothes from when I came to the experience | 3[1,5]
T1 | I felt that my own hand could be affected by the knife | 2[1,5]
T2 | I felt fear when I saw the knife | 2[1,5]
T3 | When the knife appeared above my hand, I felt the instinct to remove my hand from the table | 1[1,5]
T4 | I had the feeling that I might be harmed by the knife | 2[1,4]

Table 2.4 - Multiple linear regression models for presence (*: p < 0.05; **: p < 0.01).
Variable | β | Pr(>|t|)
Agreeableness | 0.018 | 0.008 **
Neuroticism | 0.008 | 0.095
Internal | 0.061 | 0.001 **
Chance | 0.046 | 0.010 *
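A minimal sketch of how a model such as the one in Table 2.4 could be fitted is shown below, using ordinary least squares over the four predictors retained there. The trait scores and presence ratings are placeholder numbers, not the study's data.

    import numpy as np

    # Hypothetical trait scores per participant (columns: Agreeableness,
    # Neuroticism, Internal locus, Chance locus) and their presence scores.
    X = np.array([[30.0, 22.0, 38.0, 18.0],
                  [41.0, 30.0, 33.0, 25.0],
                  [35.0, 18.0, 40.0, 12.0],
                  [28.0, 27.0, 29.0, 21.0],
                  [44.0, 24.0, 36.0, 16.0]])
    y = np.array([4.2, 5.1, 4.8, 3.9, 5.0])

    # Ordinary least squares with an intercept column, as in a multiple
    # linear regression of presence on the four trait predictors.
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    predicted_presence = A @ beta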
Table 2.5 - Pearson correlations for ownership (men and women), agency, external appearance, response to threat and presence (*: p < 0.05; **: p < 0.01). Columns: O_PC1,M, O_PC1,F, O_PC2,F, A_PC1, EA_PC1, T_PC1, Presence.
Openness: -0.293*
Conscientiousness: -
Extraversion: -
Agreeableness: 0.227*
Neuroticism: 0.258**
Internal: 0.248**, 0.195*, 0.203*
Powerful others: 0.427**
Chance: 0.248*, 0.366**, 0.201*
Body awareness: -

Table 3.1 - The two possible orderings of the experiment based on the avatar factor. While the representation questionnaire was administered after each block, the embodiment questionnaire was only administered after the FBA condition.
1) SSQ | Full-Body Avatar | Rep. + Embodiment | No Avatar | SSQ | Rep. | Comparison
2) SSQ | No Avatar | Rep. | Full-Body Avatar | SSQ | Rep. + Embodiment | Comparison

Table 3.2 - Summary for body ownership (O), agency (A) and self-location (SL) questionnaires. This table reports the median and interquartile range for each question. When significant differences (Kruskal-Wallis test) were found between techniques (NW, WIP, S), the descriptive statistics for each technique are provided.

Table 3.3 - Summary for subjective questions on the virtual representation (R). This table reports the median and interquartile range for each question. When main effects of the appearance factor or interaction effects were found, the descriptive statistics for each appearance are provided (* indicates that an interaction effect was found).
ID | Question | FBA | NA (Median[Q1;Q3])
R_MirrorLogical | When facing the mirror, my virtual representation when navigating in the VE was confusing (1) / logical (7) | 5[3;6] | 4[3;6]
R_OtherLogical | In other tasks, my virtual representation when navigating in the VE was confusing (1) / logical (7) | 6[4;6.25]
R_MirrorPleasing | When facing the mirror, my virtual representation when navigating in the VE was disturbing (1) / pleasing (7) | 4[3;5]
R_OtherPleasing | In other tasks, my virtual representation when navigating in the VE was disturbing (1) / pleasing (7) | 5[4;6] | 4[4;5.25]

Table 3.4 - Summary for the comparison questionnaire. This table reports the median and interquartile range for each question. There was no significant effect (Kruskal-Wallis test) of the Technique factor (NW, WIP, S).
C_preferBody | I preferred not having a body (1) vs. having a body (7) | 5[3;6.25]

Table 4.2 - Questions on user experience asked after all the conditions.
liked | I liked my virtual representation(s).
disturbing | My virtual representation(s) was/were disturbing.
easy | It was easy to interact.
clear | It was clear how I could interact with the environment.
exciting | It was exciting to interact with the environment.
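The comparisons reported in the tables above and below rely on non-parametric tests (Kruskal-Wallis across techniques, Wilcoxon signed-rank between paired conditions). A minimal sketch of how such tests could be run with SciPy is given below; the ratings are placeholder numbers, not the study's data.

    import numpy as np
    from scipy import stats

    # Placeholder Likert ratings (1-7) for three locomotion techniques.
    nw = np.array([5, 6, 4, 5, 7, 5, 6, 4])
    wip = np.array([4, 5, 5, 3, 6, 4, 5, 5])
    s = np.array([3, 5, 2, 6, 4, 3, 5, 2])

    # Omnibus comparison across the three technique groups.
    h, p = stats.kruskal(nw, wip, s)

    # Pairwise comparison between two conditions rated by the same
    # participants (alpha = .05), as in the embodiment tables.
    w, p_pair = stats.wilcoxon(nw, wip)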
Table 4.4 - Summary of all embodiment questions for the second experiment with the PRISM technique. This table reports the median and interquartile range (Median[Q1;Q3]) for each question and condition (R-Ø | R-G | G-R | Ø-R). Conditions sharing a superscript (in parentheses) were not significantly different (Wilcoxon signed-rank test with α = .05).
RL questionnaire:
wasMyBody | 5[3;6] (1,2) | 3.5[2;5.25] (1,2) | 3[2;5] (1) | 5[3;6] (2)
bodyParts | 5[3;6] | 4[2.75;6] | 4.5[2.75;5.25] | 5[3;6]
human | 5[4;6] | 5[2.75;6] | 5[3;6] | 4[3;6]
belonged | 5[2.75;6] | 4.5[3;6] | 4[2.75;6] | 4.5[3;6]
myMovements | 5[3;6] | 5[3;6] | 5[3;6] | 4.5[3;6]
controlling | 6[4.75;7] | 6[5;6] | 5.5[5;6] | 5[4;7]
causing | 6[5;7] | 6.5[4.75;7] | 6[5;6.25] | 6[5;7]
inSync | 5[4;6] | 4.5[3;6] | 5[4;6] | 4[3.75;6]
form | 2[1;5] | 2.5[1;4] | 2[1;4] | 2[1;4]
weight | 1[1;4] | 1.5[1;3.25] | 1[1;3.25] | 1[1;3.25]
height | 1[1;4] | 1[1;4] | 1[1;4] | 1[1;2.25]
width | 1[1;2.5] | 1[1;2] | 1[1;3] | 1[1;2.25]
PGF questionnaire:
outOfBody | 2[1;5] | 2[1;3] | 2[1;4] | 2.5[1;5]
drifting | 3.5[1.75;5] | 3[2;4] | 2.5[2;4] | 2[1;4]
influencing | 2.5[2;4] | 3[2;5] | 3[1.75;4.25] | 3.5[2;5]
avatarBody | 3[1;5] | 3[2;4.25] | 2[1;4] | 2[1;4.25]
posture | 2.5[2;4] | 2[2;3] | 2[2;3] | 2[1.75;4.25]
clothes | 1.5[1;5] | 1[1;3] | 2[1;5] | 1.5[1;3.25]
hadChanged | 1[1;3.25] | 1[1;2.25] | 2[1;3] | 1[1;3]
realisticSensation | 4.5[3;5.25] | 3[2;5] | 3[2;5.25] | 3.5[2;5]
affected | 4.5[2.75;6] (1) | 3[2;5] (2) | 2.5[1;5] (1,2) | 5[2;5] (1,2)
wasMyBody | 5[3;6] (1,2) | 3.5[2;5.25] (1,2) | 3[2;5] (1) | 5[3;6] (2)
resembled | 3[1;5] | 3[2;4] | 2[1;4.25] | 3[1.75;5]
located | 6[5;6.25] (1) | 4.5[3;6] (2) | 4.5[3;6] (2) | 5[4.75;6] (1,2)
control | 5[3;6] | 4.5[2.75;5.25] | 5[2.75;6] | 5[3;5.25]
touchLocated | 2.5[2;5] | 3[2;4.25] | 2[1.75;3.5] | 2.5[1;3.25]
touchCaused | 3[1;5] | 2[2;4] | 2[1.75;3.25] | 2[1;4]
touching | 3[2;5] | 3[2;3.25] | 3[2;4] | 2.5[1;5]

Table 5.1 - Table recapitulating the proposed guidelines and the corresponding sections that justify these guidelines.
Input Devices:
ID1 | For specific applications or training, use an input device adapted to the task | Section 1.4.1.1
ID2 | Prioritise full-hand tracking when possible | Section 1.4.1.3
Control:
CM1 | For wide and fast motions, do not apply constraints | Section 1.4.2.1
CM2 | For wide and fast motions, choose a CD gain equal to 1 | Section 1.4.2.2
CM3 | For small motions and precise manipulation, provide assistance to the user's movements | Section 1.4.2.4
CM4 | For precise manipulation, handle collisions | Section 1.4.2.4
CM5 | Progressively increase the distortion gain to make it more acceptable | Section 1.4.2.2
Feedback:
FB1 | Provide a well-calibrated avatar | Section 1.3.1.1
FB2 | Use first-person perspective | Section 1.4.3.2
FB3 | Use slightly transparent hands to prevent occlusions | Sections 1.3.1.2 and 1.4.3.1
FB4 | Preserve body continuity as much as possible | Section 1.4.2.3
FB5 | Maximize multisensory coherency | Section 1.3.3
FB6 | Provide haptic feedback | Section 1.4.3.3
FB7 | Use realistic avatars when using distorted motion | Section 1.3.2

Table 6.1 - Table recapitulating the proposed design guidelines and the corresponding sections that justify them.

Cambridge University Press (2021a), « Affordance (n.d.) », in: Cambridge dictionary, url: https://dictionary.cambridge.org/dictionary/english/affordance [visited on 03/08/2021].

We wanted to see the potential impact of this type of virtual representation on the sense of embodiment and on performance. We conducted two experiments, one with amplified movements (using the Go-Go technique [Poupyrev et al. 1996]), the other with slowed movements (using the PRISM technique [Frees et al. 2007]). For each experiment, the dual representations were compared with single representations using one avatar.
In each condition, the distorted and real movements could be represented by a realistic avatar, a transparent ghost avatar, or no representation at all. With both manipulation techniques, participants managed to feel a sense of embodiment towards the different dual representations. Moreover, no impact on performance was found, which shows that such representations could be used in VR applications. While the interaction seemed more important than the display of the real movements for agency with the Go-Go technique, people felt more in control of the realistic arm during manipulation with PRISM. Overall, participants preferred having a single representation, but some still reported the benefit of having a representation showing their real movements in order to understand the distortion created by the interaction technique.

Software and Technology, VRST '17, Gothenburg, Sweden: Association for Computing Machinery.
Lee, Juyoung, Myungho Lee, Gerard Jounghyun Kim, and Jae-In Hwang (2020), « Effects of Synchronized Leg Motion in Walk-in-Place Utilizing Deep Neural Networks for Enhanced Body Ownership and Sense of Presence in VR », in: 26th ACM Symposium on Virtual Reality Software and Technology, pp. 1-10.
Lenggenhager, Bigna, Michael Mouthon, and Olaf Blanke (2009), « Spatial aspects of bodily self-consciousness », in: Consciousness and cognition 18.1, pp. 110-117.
Lenggenhager, Bigna, Tej Tadi, Thomas Metzinger, and Olaf Blanke (2007), « Video Ergo Sum: Manipulating Bodily Self-Consciousness », in: Science 317.5841, pp. 1096-1099.
Leonardis, Daniele, Antonio Frisoli, Michele Barsotti, Marcello Carrozzino, and Massimo Bergamasco (2014), « Multisensory feedback can enhance embodiment within an enriched virtual walking scenario », in: Presence 23.3, pp. 253-266.
Lesur, Marte Roel, Marieke Lieve Weijs, Colin Simon, Oliver Alan Kannape, and Bigna Lenggenhager (2020), « Psychometrics of disembodiment and its differential modulation by visuomotor and visuotactile mismatches », in: IScience 23.3, p. 100901.
Levenson, H. (1981), « Differentiating among internality, powerful others and chance. », en, in: Research with the locus of control construct, ed. by H. M. Lefcourt, vol. 1, New York: Academic Press, pp. 15-63.
Leyrer, Markus, Sally A. Linkenauger, Heinrich H. Bülthoff, Uwe Kloos, and Betty Mohler (2011), « The Influence of Eye Height and Avatars on Egocentric Distance Estimates in Immersive Virtual Environments », in: Proceedings of the ACM SIGGRAPH Symposium on Applied Perception in Graphics and Visualization, APGV '11, Toulouse, France, pp. 67-74.
Li, Yi-Jun, Miao Wang, De-Rong Jin, Frank Steinicke, Shi-Min Hu, and Qinping Zhao (2021), « Effects of Virtual Environments and Self-representations on Redirected Jumping », in: 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), IEEE, pp. 464-465.
Lin, Lisa PY, Neil M McLatchie, and Sally A Linkenauger (2020), « The influence of perceptual-motor variability on the perception of action boundaries for reaching. », in: Journal of Experimental Psychology: Human Perception and Performance 46.5, p. 474.
Lin, Lorraine, Aline Normoyle, Alexandra Adkins, Yu Sun, Andrew Robb, Yuting Ye, Massimiliano Di Luca, and Sophie Jörg (2019), « The effect of hand size and interaction modality on the virtual hand illusion », in: 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), IEEE, pp. 510-518.
Lin, Qiufeng, John Rieser, and Bobby Bodenheimer (2012), « Stepping over and Ducking Under: The Influence of an Avatar on Locomotion in an HMD-based Immersive Virtual Environment », in: Proceedings of the ACM Symposium on Applied Perception, SAP '12, pp. 7-10.
Ling, Yun, Harold T. Nefs, Willem-Paul Brinkman, Chao Qu, and Ingrid Heynderickx (2013), « The relationship between individual characteristics and experienced presence », in: Computers in Human Behavior 29.4, pp. 1519-1530.
Llobera, Joan, Alejandro Beacco, Ramon Oliva, Gizem Şenel, Domna Banakou, and Mel Slater (2021), « Evaluating participant responses to a virtual reality experience using reinforcement learning », in: Royal Society Open Science 8.9, p. 210537.
Loas, G., R. Dardennes, P. Dhee-Perot, V. Leclerc, and D. Fremaux (1994), « Opérationnalisation du concept de "lieu de contrôle": traduction et première étude de validation de l'échelle de contrôle de Levenson (IPC: the internal powerful others and chance scale). », in: Annales médico-psychologiques 152.7, pp. 466-9.
Lok, Benjamin, Samir Naik, Mary Whitton, and Frederick P. Brooks (2003), « Effects of Handling Real Objects and Self-Avatar Fidelity on Cognitive Task Performance and Sense of Presence in Virtual Environments », in: Presence 12.6, pp. 615-628.
Lougiakis, Christos, Akrivi Katifori, Maria Roussou, and Ioannis-Panagiotis Ioannidis (2020), « Effects of virtual hand representation on interaction and embodiment in HMD-based virtual environments using controllers », in: 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 510-518.
Lugrin, Jean-Luc, Maximilian Ertl, Philipp Krop, Richard Klüpfel, Sebastian Stierstorfer, Bianka Weisz, Maximilian Rück, Johann Schmitt, Nina Schmidt, and Marc Erich Latoschik (2018), « Any "Body" There? Avatar Visibility Effects in a Virtual Reality
-(2014), « Sliding perspectives: dissociating ownership from self-location during full body illusions in virtual reality », in: Frontiers in Human Neuroscience 8, p. 693. McMahan, Ryan P. (2011), « Exploring the effects of higher-fidelity display and interaction for virtual reality games », PhD thesis, Virginia Tech. McMahan, Ryan P., Doug A. Bowman, David J. Zielinski, and Rachael B. Brady (2012), « Evaluating Display Fidelity and Interaction Fidelity in a Virtual Reality Game », in: IEEE Transactions on Visualization and Computer Graphics 18.4, pp. 626-633. McMahan, Ryan P., Chengyuan Lai, and Swaroop K. Pal (2016), « Interaction Fidelity: The Uncanny Valley of Virtual Reality Interactions », in: Virtual, Augmented and Mixed Reality, ed. by Stephanie Lackey and Randall Shumaker, Cham: Springer International Publishing, pp. 59-70. McManus, Erin A., Bobby Bodenheimer, Stephan Streuber, Stephan de la Rosa, Heinrich H. Bülthoff, and Betty J. Mohler (2011), « The Influence of Avatar (Self and Character) Animations on Distance Estimation, Object Interaction and Locomotion 164 in Immersive Virtual Environments », in: Proceedings of the ACM SIGGRAPH Symposium on Applied Perception in Graphics and Visualization, APGV '11, Toulouse, France: ACM, pp. 37-44. Medeiros, Daniel, Rafael K. dos Anjos, Daniel Mendes, João Madeiras Pereira, Alberto Raposo, and Joaquim Jorge (2018), « Keep My Head on My Shoulders! Why Third-Person is Bad for Navigation in VR », in: Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, VRST '18, Tokyo, Japan: Association for Computing Machinery. Medeiros, Daniel, Mauricio Sousa, Alberto Raposo, and Joaquim Jorge (2019), « Magic carpet: Interaction fidelity for flying in vr », in: IEEE transactions on visualization and computer graphics 26.9, pp. 2793-2804. Mehling, Wolf E., Viranjini Gopisetty, Jennifer Daubenmier, Cynthia J. Price, Frederick M. Hecht, and Anita Stewart (2009), « Body Awareness: Construct and Self-Report Measures », in: PLOS ONE 4.5, pp. 1-18. Meier, Manuel, Paul Streli, Andreas Fender, and Christian Holz (2021), « TaplD: Rapid Touch Interaction in Virtual Reality using Wearable Sensing », in: 2021 IEEE Virtual Reality and 3D User Interfaces (VR), IEEE, pp. 519-528. Mendes, Daniel, Fabio M. Caputo, Andrea Giachetti, Alfredo Ferreira, and Joaquim A. P. Jorge (2019), « A Survey on 3D Virtual Object Manipulation: From the Desktop to Immersive Virtual Environments », in: Computer Graphics Forum 38.1, pp. 21-45. Mendes, Daniel, Filipe Relvas, Alfredo Ferreira, and Joaquim Jorge (2016), « The Benefits of DOF Separation in Mid-Air 3D Object Manipulation », in: Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology, VRST '16, pp. 261-268. Mestre, Daniel R., Céphise Louison, and Fabien Ferlay (2016), « The Contribution of a Virtual Self and Vibrotactile Feedback to Walking Through Virtual Apertures », in: Human-Computer Interaction. Interaction Platforms and Techniques, ed. by Masaaki Kurosu, Springer International Publishing, pp. 222-232. Milgram, Paul and Fumio Kishino (1994), « A taxonomy of mixed reality visual displays », in: IEICE TRANSACTIONS on Information and Systems 77.12, pp. 1321-1329. Mine, Mark R. (1995), Virtual Environment Interaction Techniques, tech. rep., USA. Mine, Mark R., Frederick P. Brooks Jr, and Carlo H. 
Sequin (1997), « Moving objects in space: exploiting proprioception in virtual-environment interaction », in: Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pp. 19-26. RÉSUMÉ LONG EN FRANÇAIS Cette thèse intitulée "Interaction Basée Avatar en Réalité Virtuelle" étudie les interrelations entre les avatars et l'interaction en réalité virtuelle (RV). L'objectif est de mieux comprendre comment différentes techniques d'interaction peuvent avoir un impact sur la perception de l'avatar, ainsi que la façon dont le choix d'un avatar peut affecter l'interaction. En particulier, nous nous intéressons à la façon dont ces interrelations peuvent affecter le sentiment d'incarnation. Les contributions de cette thèse aideront les développeurs à concevoir des techniques d'interaction compatibles avec des avatars réalistes au corps entier. Cette thèse tente de répondre à trois axes de recherche : -RA1: l'influence des caractéristiques de l'utilisateur sur le sentiment d'incarnation -RA2: l'influence de la technique d'interaction sur le sentiment d'incarnation -RA3: l'influence de l'avatar sur l'interaction L'interaction avec l'environnement virtuel peut être classée en quatre catégories [Bowman 1999]: navigation, sélection, manipulation et entrée système. Nous ne nous concentrerons pas sur la saisie système, car la plupart des techniques utilisées pour la saisie système sont similaires à celles utilisées pour la sélection et la manipulation. La sélection est l'action de choisir l'objet avec lequel l'utilisateur va interagir. Les tâches de manipulation englobent toutes les modifications qui peuvent être effectuées sur l'objet virtuel sélectionné. La sélection et la manipulation peuvent être anisomorphiques. Le mouvement peut être diminué pour offrir un contrôle plus élevé pour un positionnement précis [START_REF] Frees | Precise and rapid interaction through scaled manipulation in immersive virtual environments[END_REF]. Il peut également être amplifié pour atteindre des objets éloignés [START_REF] Poupyrev | The Go-Go Interaction Technique: Non-Linear Mapping for Direct Manipulation in VR[END_REF]]. Les techniques de locomotion permettent aux utilisateurs de se déplacer et de pivoter dans l'environnement virtuel. Plusieurs techniques ont été inventées pour répondre aux contraintes d'espace limité, comme la téléportation ou la marche sur place. Conseils pour le Design de Techniques d'Interaction Compatibles avec l'Avatar Pour terminer cette thèse, nous proposons dans le chapitre 5 des directives de conception pour créer des techniques de manipulation compatibles avec les avatars. Ces directives sont classées en trois catégories: entrée, contrôle et feedback. Elles sont basées sur une analyse de l'état de l'art actuel sur les techniques de manipulation et les avatars. Nous avons étudié à la fois l'impact des avatars sur la manipulation et l'impact des choix de design sur le sentiment d'incarnation (voir le détail dans le chapitre 1). En plus des directives de design, nous proposons également des pistes de recherche pour la suite, toujours focalisées sur les techniques de manipulation.
04117555
en
[ "phys", "spi" ]
2024/03/04 16:41:26
2022
https://hal.science/tel-04117555/file/DTIS22225.1665577688.pdf
Pierre-Julien Chaine, Marc Boyer, Claire Pagetti

"The pessimist sees difficulty in every opportunity. The optimist sees opportunity in every difficulty." Winston Churchill

I would first like to sincerely thank my thesis director, Associate Professor Claire Pagetti, without whom this work could not have come to fruition. Your unfailing support, both technical and moral, allowed me to get through this ordeal; a thesis is indeed as much an incredible technical endeavour as a moral challenge. I will long remember our exchanges, where I always ended up telling you that "once again you were right", and you would simply answer "that's experience". Thank you, Claire.

I would then like to thank, just as sincerely, my thesis co-director, Associate Professor Marc Boyer. Your experience with industrial embedded networks and your knowledge of the local and international communities of this field contributed greatly to the relevance and success of my work. It is thanks to you, who proposed the initial subject to Airbus, that this thesis could come into existence. I will remember above all our long Monday-afternoon discussions, sometimes a little philosophical, sometimes more technical, on the understanding of the TSN standards or of networks in general. Working with you was a great pleasure, and I am delighted that we will keep collaborating after the thesis. Thank you, Marc.

Finally, I would like to warmly thank my industrial co-director, Franck Wartel. We met during my end-of-studies internship in the spring and summer of 2018. At the end of that internship, we decided to keep working together, and you agreed to supervise my work. You have been a mentor to me, sharing both your technical knowledge and your knowledge of the industrial world.

My thanks then go to the members of the jury for the interest they showed in my work. In particular, I would like to thank Professor Sébastien Pillement and Professor Ye-Qiong Song for their thorough reading of the manuscript and for the relevant remarks in their reports and during the defense. I would also like to thank Professor Alhem Mifdaoui, Professor Jean-Luc Scharbarg and Research Director Liliana Cucu-Grosjean for their enriching interventions during the defense.

I also wish to thank the company Airbus Defence and Space SAS as well as the ANRT (Association Nationale de la Recherche et de la Technologie) for having approved my CIFRE thesis project and agreed to fund it. I have a special thought for Dominique Pibrac and Antoine Certain, who contributed greatly to the successful setting-up of this project.

I cannot forget to thank my colleagues at Airbus, as well as my fellow doctors and doctors-to-be at ONERA. It is thanks to your daily support that I was able to get through, not without a certain pleasure, these three years of thesis. A thought for my "break" partners at ONERA: Arthur, Damien, Felix, Hedwinn, Iryna, Iban, Lucien; I already miss our brief moments together. A thought also for Damien, Matthieu, Jérôme, Thomas, Philippe, Sylvain, Marie, Isabelle, Maxime, Valentin and Anthony for their day-to-day friendliness.

Finally, I would like to thank my relatives, in particular Hugo and Romain, for having supported me, in every sense of the word, through this difficult endeavour.
Of course, a huge thank-you to my parents and my dear brother for having always believed in me and for having supported me in my various projects, whatever their nature. Thank you.

Abstract

The spacecraft industry is facing a new challenge: offering new capabilities and new missions around Earth, in the solar system and beyond. This will not be achievable without providing more performance on board satellites. In particular, the current satellite network technologies will not be able to handle this increasing demand for long. This leads the spacecraft industry to consider an upgrade of their on-board networks. Such an upgrade is also an opportunity to envision a disruptive evolution of the current communication architecture and move from a network system relying on MIL-STD-1553 for real-time traffic and SpaceWire for high-throughput traffic to a unified network relying on a single technology supporting both types of traffic. One technology is raising the interest of satellite manufacturers: IEEE Time Sensitive Networking (TSN), the state-of-the-art Ethernet technology, deemed capable of supporting the needs of next-generation satellites. The goal of the PhD was thus to assess the suitability of TSN with respect to space systems requirements.

In order to address this problem, we first identified a set of technologies - Ethernet, ARINC 664, TTEthernet, TSN and SpaceFibre - potentially able to answer the future application demand. We made a qualitative comparison of these technologies with respect to what is foreseen as the expected requirements. The comparison was based on three properties dealing respectively with network performance, time management and fault tolerance. This led us to select three suitable candidates, including TSN. While the two other candidates had already been studied, and had even started to be included in satellite network designs at the time of writing this manuscript, TSN was completely unknown to the spacecraft industry.

After this preliminary step, we studied in depth the set of more than twenty standards that TSN encompasses, each composed of several mechanisms and each mechanism composed of several parameters. We identified the IEEE 802.1Qbv Time Aware Shaper standard as the core TSN standard capable of satisfying the network performance requirements. In addition to Qbv, we discussed the interest of other TSN standards (i.e. IEEE 802.1Qci, 802.1CB, 802.1AS, 802.1Qbu), which are now being fitted into a TSN aerospace profile in an IEEE/SAE joint effort. The next step was then to define a strategy to automatically compute suitable configurations for the standards in this profile. By configuration, we mean a set of assigned values for all the parameters of all the used mechanisms of the considered standards. Due to the huge number of parameter values to set, automatic strategies are a real game-changer to pave the way for large-scale industrial use of TSN. In our approach, we focused on the configuration of the TSN Qbv standard alone, so that the network copes with the satellite performance requirements. At that stage, we considered that fault tolerance capabilities could be provided either by the network or by the applications running over the network. While automatic configuration strategies relying on a network-wide schedule of frame transmissions are the dominant approach in the literature, we proposed a brand new configuration strategy called Egress TT.
In practice, Egress TT configurations consist of scheduled frame transmissions on the last-hop port in the path of any flow. What happens between the source and the last-hop port may be variable, as it depends on the time at which the message is emitted by the source and on the delays encountered in the previous hops. Nevertheless, the variability of this delay is absorbed by a correctly chosen release instant in the last-hop port. This novel configuration strategy improves scalability compared to configuration strategies from the state of the art and reduces the development effort necessary to upgrade legacy application software towards a next-generation on-board satellite network system.

Résumé Français

The aerospace industry faces a new challenge: offering new capabilities and new missions around the Earth, in the solar system and beyond. These novelties will not come without improved performance on board satellites, in particular at the level of the communication architecture. This is why the aerospace industry is considering a radical change of its on-board networks, moving from the MIL-STD-1553 bus for real-time traffic and SpaceWire for high-throughput traffic to a "unified" network relying on a single technology capable of carrying both types of traffic. At the beginning of the thesis, IEEE Time Sensitive Networking (TSN), the state-of-the-art Ethernet technology, started to draw the attention of various space actors. Hence, the goal of this thesis was to establish the adequacy of TSN with respect to the requirements of the aerospace industry.

To address this problem, we first identified a set of technologies - Ethernet, ARINC 664, TTEthernet, Time Sensitive Networking and SpaceFibre - a priori capable of answering the needs of future missions. We then proposed a qualitative comparison of these technologies based on their compatibility with the future requirements of satellites. This comparison was organized around two themes: quality of service (i.e. network performance and fault tolerance) and time management. It led us to select three candidates: TTEthernet, SpaceFibre and TSN. While TTEthernet and SpaceFibre were already known, and even starting to be integrated into satellite on-board network architectures at the time of writing this document, Time Sensitive Networking was completely new to the aerospace industry. Thus, after this preliminary step, we studied in depth the numerous TSN standards. We identified IEEE 802.1Qbv, known as the Time Aware Shaper, as the TSN standard indispensable to meet the network performance requirements of future satellites. We also discussed the interest of other TSN standards (i.e. IEEE 802.1Qci, 802.1CB, 802.1AS, 802.1Qbu) which, together with Qbv, are on their way to being included in a TSN profile dedicated to the aerospace industry. In order to validate the compatibility of TSN, we looked into the generation of TSN configurations. This task is not easy, since each configuration requires instantiating a very large number of parameters; hence, these configurations are almost always generated automatically. This automation is a true lever for the industrialization of TSN, both in satellites and in other application domains.
We therefore focused on the automatic configuration of the Qbv standard to address the performance needs, considering that fault tolerance functions could be relegated to the application level. While automatic strategies relying on transmissions scheduled at fixed dates in every device of the network were widespread in the state of the art, we proposed a new configuration strategy entitled Egress TT. In practice, Egress TT configurations rely on transmissions scheduled at fixed dates only in the last device on the path of any flow. The delay of a message between its source and the last device on its path may be variable: it depends on the instant at which the message was emitted at its source and on the potential slowdowns it may encounter in the network. Nevertheless, this variable delay is absorbed by a well-chosen transmission schedule at the last hop. This new strategy scales better than existing ones. It also reduces the development effort required to migrate application software to the next-generation on-board network architecture.

Chapter 1

Introduction

This PhD project was funded by Airbus Defence and Space and the ANRT (Association Nationale de la Recherche et de la Technologie) and is included in the Airbus TANIA-DP roadmap (Technological Assessment for New Instruments and Avionics - Data Processing), which, among other goals, aims at providing new communication standards for satellite on-board networks that would benefit future missions, be it in terms of performance, cost, compatibility with ground-segment standards or system operations.

Context

In accordance with the ever-expanding volume of data generated and handled by ground-system equipment (telephones, cars, scientific instruments, etc.), satellites must be capable of producing and transmitting massive amounts of data in order to meet their users' requirements. While improving the performance of data production will not raise any major difficulty, since instruments with such capabilities are already available on the market (e.g. COTS multi-gigabit cameras), improving the performance of the on-board networks carrying that data remains complex. That is the reason why this document focuses on next-generation satellite embedded networks.

Satellite on-board networks serve two different purposes. First, they are in charge of conveying the information necessary for the nominal behaviour of the satellite (e.g. control of the thrusters, of the communication subsystem, of the solar panels, etc.). Second, they convey the data acquired from the instruments (such as telescopes, weather sensors, etc.) towards the antennas of the satellite, which forward it to ground stations. These networks are, most of the time, supported by two technologies: MIL-STD-1553 (1553 for short) and SpaceWire.

Reason for a Change

While the current satellite architecture has demonstrated its strength and maturity over the past 15 years, it is starting to show its limits, be it on data rate, on development and update costs, on availability of COTS components, or even on potential synergies with other industrial domains or academia. Therefore, the spacecraft industry is considering an upgrade of its on-board networks.
The next-generation network is foreseen as a "unified" network, meaning that a single technology would be used to support the needs of current and future missions for both satellite nominal behaviour management and instrument data transfer. This technology should not only offer better performance than 1553 and SpaceWire, but also be easy to analyse and configure, ease the development and integration process, and help reduce the overall cost of the satellite. Several technologies have been preselected by the industrial partner for this next-generation "unified" network: Ethernet, ARINC 664, TTEthernet, Time Sensitive Networking and SpaceFibre. Among all these technologies, one opportunity appeared at the beginning of the PhD: Time Sensitive Networking (TSN for short), the state-of-the-art IEEE Ethernet technology, said to be capable of supporting both the real-time and the high-bandwidth traffic of a satellite. Therefore, the industrial partner proposed the subject of this PhD: "Suitability of Time Sensitive Networking for the Spacecraft Industry". This work is novel in the space domain, since TSN had never been identified or considered as a potential candidate for a next-generation on-board network. This context was published in [START_REF] Chaine | TSN Support for Quality of Service in Space[END_REF].

PhD Approach

In order to evaluate the suitability of Time Sensitive Networking with respect to the foreseen needs of next-generation satellites, we propose a two-step approach. In a first step, we aim at making sure that TSN is indeed a good candidate for the replacement of the current on-board networks by applying a qualitative assessment. To do so, we define a set of high-level properties representative of the needs of future satellite missions. Inspired by existing state-of-the-art comparisons, we refine these properties into criteria. Finally, we discuss the suitability of the set of technologies preselected by our industrial partner (i.e. Ethernet, ARINC 664, TTEthernet, SpaceFibre and Time Sensitive Networking) with respect to these criteria and, with these elements, compare the technologies with one another.

Then, in a second step, we go further into the assessment of the suitability of TSN by refining and formalizing the quality-of-service requirements of next-generation satellites. We show that Time Sensitive Networking is suitable by generating one or several network configurations; if no configuration fulfilling these requirements can be computed, TSN is not suitable for future missions. To do so, we draw on existing configuration approaches from the state of the art, based on frame schedules and constraint programming, to propose our own configuration approach. TSN is composed of numerous standards controlled by many parameters, which makes its configuration difficult. Therefore, in order to reduce this effort, we chose to restrict the set of considered TSN standards to IEEE 802.1Qbv only. This two-step approach is illustrated in Fig. 1.1.

Organization of the Manuscript

The manuscript is organised as follows. In a first part, we give elements of context, written to provide the reader with the necessary background to understand the concepts and the vocabulary of the manuscript: Chapter 2 introduces key satellite concepts, Chapter 3 introduces the considered networking technologies, and Chapter 4 focuses on Time Sensitive Networking.

In Part II, we present the first contribution: the qualitative selection of high-throughput technologies for next-generation satellite networks.
Chapter 5 presents a first problem statement, related to the identification of candidate technologies for a satellite network upgrade. We introduce the list of preselected technologies as well as high-level satellite requirements. In Chapter 6, we present our proposed methodology for the technology selection, supported by related work. Finally, Chapter 7 presents the application of the methodology to the list of preselected technologies, as well as an analysis of the results of the comparison.

In Part III, we present the second contribution: the computation of TSN network configurations for next-generation satellite networks. In Chapter 8, we present the second problem statement, along with a more precise description of the satellite network quality-of-service requirements and industrial considerations. Chapter 9 proposes insights on the configuration of the TSN standard IEEE 802.1Qbv: we describe the parameters of the configurations and the existing configuration strategies available in the state of the art. In Chapter 10, we propose a novel configuration approach for networks with very-low-jitter requirements. We first describe the concept, interest and suitability of this novel approach, then apply it to the configuration of TSN networks, proposing two strategies, or implementations, for the generation of these configurations. Finally, in Chapter 11, we evaluate the performance of the two implementations introduced in the previous chapter against an implementation of the state-of-the-art configuration approach.

List of Publications

International Conferences

• [START_REF] Chaine | TSN Support for Quality of Service in Space[END_REF] Pierre-Julien Chaine, Marc Boyer, Claire Pagetti, and Franck Wartel. TSN Support for Quality of Service in Space. In 10th European Congress on Embedded Real Time Software and Systems (ERTS 2020), Toulouse, France, January 2020. This paper introduces the interest of the spacecraft industry toward TSN, as well as a very high-level qualitative analysis of its suitability with respect to satellite requirements.

• [START_REF] Chaine | Comparative study of ethernet technologies for next-generation satellite on-board networks[END_REF] Pierre-Julien Chaine, Marc Boyer, Claire Pagetti, and Franck Wartel. Comparative study of Ethernet technologies for next-generation satellite on-board networks. In Proc. of the 40th Digital Avionics Systems Conference (DASC), 2021. This paper introduces the methodology and results of the qualitative comparison of Ethernet, ARINC 664, TTEthernet and TSN for a next-generation on-board network.

• [Submitted] Pierre-Julien Chaine, Marc Boyer, Claire Pagetti, and Franck Wartel. Egress TT configurations for TSN networks. In 30th International Conference on Real-Time Networks and Systems (RTNS), 2022. This paper presents the concept of Egress TT configurations and the two implementations for TSN networks that are detailed in the second contribution of this manuscript.

Communications in International Congresses

• [START_REF] Chaine | Suitability of time sensitive networking for space ? TSN A Conference[END_REF] Pierre-Julien Chaine, Marc Boyer, Claire Pagetti, and Franck Wartel. Suitability of Time Sensitive Networking for Space. In TSN/A Conference (TSN/A 2019), Bad Homburg, Germany, October 2019. This presentation introduced the interest of the spacecraft industry towards TSN to TSN component and TSN-related software providers.
It also proposed a glimpse of the first TSN profile for aerospace.

• [START_REF] Chaine | TSN Support for Quality of Service in Space ?[END_REF] Pierre-Julien Chaine, Marc Boyer, Claire Pagetti, and Franck Wartel. TSN Support for Quality of Service in Space?. In Junior Researcher Workshop on Real-Time Computing (JWRTC 2019), Toulouse, France, November 2019. This poster illustrated the need for a next-generation unified network and summarized the high-level qualitative analysis of [START_REF] Chaine | TSN Support for Quality of Service in Space[END_REF].

• [START_REF] Chaine | Comparative study of high throughput technologies for next-generation satellite on-board networks[END_REF] Pierre-Julien Chaine, Marc Boyer, Claire Pagetti, and Franck Wartel. Comparative study of high throughput technologies for next-generation satellite on-board networks. In Data Systems In Aerospace (DASIA), 2021. This paper extended [START_REF] Chaine | Comparative study of ethernet technologies for next-generation satellite on-board networks[END_REF] by adding SpaceFibre to the set of compared technologies.

Part I
Context

Chapter 2

Introduction of Key Satellite Concepts

This first chapter starts with an introduction to, or a reminder of, satellite generalities. It details the different types of satellites and the multiple missions they can be used for. It then introduces the Platform & Payload concept, a fundamental concept when designing satellites. The chapter continues with the presentation of the SAVOIR Reference Architecture, an ESA initiative for a generic satellite on-board functional architecture. Finally, the chapter zooms in on the devices and networks on board a satellite that will be mentioned or used in the next chapters; it explains what the devices are and how they are linked together using a platform network and a payload network.

Generalities

Satellites and their Missions

A satellite is a man-made "electronic device that is sent into space and moves around the earth or another planet" (Collins 2021). It is mainly used around the Earth for telecommunication applications (e.g. telephone with Inmarsat, television with Eutelsat, internet with OneWeb), Earth observation applications (military applications, pictures as a service with Pleiades Neo, ocean study with Sentinel-6B) and positioning applications. In the solar system, satellites are used for scientific applications (Sun study with Solar Orbiter, Mercury study with BepiColombo, study of Jupiter and its moons with Juice). The format (i.e. size, weight), the orbit and the mission duration of a satellite vary: while satellites in geostationary orbit are large vehicles with at least fifteen years of operations (see Fig. 2.1a), low Earth orbit satellites are smaller objects with shorter missions of up to about five years (see Fig. 2.1b).

Platform & Payload

A satellite is generally composed of two parts: payload and platform (see Fig. 2.2). On the one hand, the payload (PL) is the purpose of the satellite: it is the part of the satellite that generates added value for its owner, and it is specific to each and every mission. Generally, the payload is composed of instruments, such as antennas or transponders for a telecommunication satellite, telescopes or cameras for an Earth observation satellite, or any form of scientific instruments for a scientific mission (radiation sensors, magneto-sensors, etc.). On the other hand, the platform (PF) is the infrastructure that allows the satellite to achieve its mission.
If this part of the satellite does not work properly, the satellite might become useless or even be lost. The platform is composed of all the systems and subsystems that ensure the nominal behaviour of the satellite. Some notable systems are the power supply subsystem, the Attitude and Orbit Control Subsystem (AOCS), the control and monitoring of the payload status, the management of telecommunications with ground stations, etc.

SAVOIR Reference Architecture, a Standardization at European Level

For a long time, agencies and space companies, at prime and supplier levels, have raised the need to increase the level of reuse and standardization in spacecraft avionics systems in order to improve efficiency and reduce development time and costs. This has led to studies and initiatives which are now federated in the Space Avionics Open Interface Architecture (SAVOIR) framework [START_REF]SAVOIR Functional Reference Architecture[END_REF] through different working groups. SAVOIR is, in a way, a reference on which any satellite manufacturer can rely when designing a satellite. Moreover, it does not contain any industrial proprietary information and is therefore a good support for discussions and publications dealing with satellite architectures. The SAVOIR vocabulary and reference architecture are the terminology used in this document.

The SAVOIR framework defines the SAVOIR Functional Reference Architecture based on the needs of all kinds of missions (scientific, telecommunication, Earth observation, etc.). The SAVOIR Functional Reference Architecture suggests a physical implementation for the electronic units of the platform, the On-Board Computer (OBC) architecture and the payload units. It aims at defining standard building blocks and their associated functions. It focuses on data management and communication means, but also considers ECSS (European Cooperation for Space Standardization) compliant interfaces interconnecting the blocks. In order to be relevant to a sizeable range of missions, generic specifications, considered as a common core of avionics specifications, have been gathered in several SAVOIR documents or handbooks, such as the FDIR (Fault Detection, Isolation and Recovery) handbook. The Functional Reference Architecture is illustrated in Fig. 2.3. This architecture is an important reference when designing a satellite on-board architecture. The network architectures, as well as the performance and safety requirements (see Section 8.2) of this document, are compliant with SAVOIR.

Legacy Network System

Generic Architecture Overview

This section presents a reference architecture for this research. It is compliant with the SAVOIR Functional Reference Architecture and is shown in Fig. 2.4. It contains several devices corresponding to different functions, interconnected with communication links. It is considered generic enough and representative of future satellites. In the next section, we describe the devices and, in the following one, the associated network.

Devices Overview

The most important device in the satellite architecture is the On-Board Computer, or OBC. The OBC is the master of the satellite, i.e. it manages all the other devices of the system. On the platform side, it is connected to sensors (e.g. star trackers, thermal sensors, magnetometers, etc.) and actuators (e.g. thrusters, reaction wheels, etc.), possibly gathered in a Remote Interface Unit (RIU). The power control and distribution unit and the solar panel control unit are also connected to the OBC.
The OBC also hosts, among others, the AOCS (Attitude and Orbit Control Subsystem, also known as GNC, Guidance and Navigation Control) functions. To do so, it gathers information from several sensors, then processes and exploits it in order to control the propulsion system of the satellite. On the payload side, the OBC is connected to a data storage system (Mass Memory, MM), usually a solid-state mass memory, mainly used for storing the payload data generated by the instruments. Finally, the OBC is generally in charge of routing Telecommands (TC) and Telemetry (TM), received from the ground through the communication subsystem, towards the instruments. For a vast majority of missions, all the devices in the platform part of the satellite are duplicated. They work in cold redundancy, meaning that only one device is active at a time (see Def. 17). If a device ends up in a faulty state, the other device is turned on while the fault is investigated by ground operators.

Network Overview

As for the satellite system, the on-board network is composed of two interconnected networks: the platform and payload networks. Each of these networks fulfils diverging and sometimes contrasting needs. On the one hand, the platform network, featured in red and purple in Fig. 2.4, is in charge of conveying all the information used to guarantee the nominal behaviour of the satellite. It transmits data from sensors (position, magnetic field, temperature, etc.) as well as, among others, flight control commands. This kind of traffic, often described as time-critical traffic, requires bounded latency and low-jitter communications. However, due to the small size and small volume of messages, a low data rate is enough to meet the platform needs. In general, the platform network is implemented using a MIL-STD-1553 bus [33] or a CAN bus [START_REF]Road vehicles -Controller area network (CAN)[END_REF].

On the other hand, the payload network, featured in green and orange in Fig. 2.4, requires a very high data rate in order to convey the large amount of raw data generated by the payload instruments, such as pictures from telescopes, telemetry from weather sensors or IoT (Internet of Things) data. The constraints are less stringent for a payload network: a delay in the packet communication path will most likely not impact the nominal behaviour of the satellite. The payload network is generally based on SpaceWire [42]. The communication links can be duplicated two or four times, depending on the mission, and are used in a cold redundancy scheme.

Conclusion

This chapter has proposed a snapshot view of what a satellite is, the Platform & Payload concept and the SAVOIR Reference Architecture. It then focused on the on-board architecture, centralized around the On-Board Computer, as well as on the on-board communication networks, relying mostly on MIL-STD-1553 and SpaceWire. Although these networks have served their purpose for the past fifteen years, they are starting to reach their limits, in particular in terms of available bandwidth; therefore, they need to be upgraded. In the next chapter, the existing candidate technologies for this upgraded network are introduced.

Chapter 3

Introduction to Networking Technologies

This chapter introduces all the building blocks necessary to define and design a network system, followed by a brief presentation of the technologies considered during the PhD.
After a reminder of the OSI model, the concept of a network is defined and the existing design paradigms for L2 networks are identified. Then, the different quality-of-service properties offered to a communication over a specific network are introduced. Finally, two legacy technologies, MIL-STD-1553 and SpaceWire, and five candidates for the next-generation satellite network, identified by the industrial partner at the beginning of the PhD, i.e. SpaceFibre, Full Duplex Switched Ethernet, ARINC 664, TTEthernet and Time Sensitive Networking, are overviewed.

Reminder: the OSI Model

The OSI model, or Open System Interconnection model [START_REF]1994 Information Technology -Open Systems Interconnection -Basic Reference Model: The Basic Model[END_REF], is a conceptual framework used to describe the functions of a networking system without any assumption on the technologies used to implement it. As suggested by Guy Pujolle: "(translated) Even though this model is not used anymore, it still serves to define the [network] vocabulary and to be a reference for defining network functions"; we will therefore use the OSI model to introduce some network-related vocabulary. Fig. 3.1 proposes a representation of this model. The OSI model describes seven layers (denoted L[layer number]), from the physical layer (L1) up to the application layer (L7). Three objects are associated with each layer: a service, a protocol and a service access point.

Definition 1 (Service of level N) The service of level N describes the set of actions (including primitives, events, etc.) that is provided by layer N to the upper layer N+1.

Definition 2 (Protocol of level N) The protocol of level N describes the set of rules required to provide the service of level N, as well as the communication between two entities of level N through an N-PDU, or Protocol Data Unit of level N.

Definition 3 (Service Access Point of level N) A service access point of level N, or N-SAP, is located at the edge between layer N+1 and layer N. The service of level N is provided to layer N+1 through an N-SAP.

Let us briefly describe the physical medium layer and the first four layers of the OSI model.

Physical Medium. The physical medium, sometimes called Layer 0 (L0), is in charge of conveying a physical signal, encoding binary elements of information, from one point to another, towards its destination. Several physical media exist, such as copper wires, fibre optics, radio signals, etc.

OSI L1 - Physical Layer. The Physical Layer, or Layer 1 (L1), contains the set of rules and procedures required so that the binary information can be conveyed on the physical medium. For instance, a physical layer can define an encoding strategy, e.g. Manchester [START_REF] Pujolle | Les Reseaux[END_REF], for emitting/receiving binary information on the physical medium.

OSI L2 - Data Link Layer. The Data Link Layer, or Layer 2 (L2), handles the data links on the physical medium by grouping binary information into frames (see Def. 4). The frame is the entity in which a certain number of bytes are conveyed simultaneously over the physical medium. The Data Link Layer is composed of two sub-layers: Media Access Control (MAC) and Logical Link Control (LLC). The MAC sub-layer contains the set of rules necessary to share the same physical medium between several stations. An example of such a rule is CSMA/CD, Carrier Sense Multiple Access with Collision Detection [START_REF][END_REF], whose backoff rule is sketched below.
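As an illustration, the following minimal Python sketch models the truncated binary exponential backoff rule applied by CSMA/CD after a collision. The constants follow the classical 10/100 Mbit/s Ethernet parameters; the function name is ours.

```python
import random

SLOT_TIME_BITS = 512          # one contention slot = 512 bit times at 10/100 Mbit/s
MAX_BACKOFF_EXP = 10          # cap of the truncated binary exponential backoff
MAX_ATTEMPTS = 16             # the frame is dropped after 16 collisions

def backoff_slots(attempt: int) -> int:
    """Number of slot times to wait after the attempt-th collision (1-based)."""
    if attempt > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame aborted")
    k = min(attempt, MAX_BACKOFF_EXP)
    return random.randint(0, 2**k - 1)   # uniform draw in [0, 2^k - 1]

# Example: delays drawn after the first three collisions of one frame.
for attempt in (1, 2, 3):
    print(attempt, backoff_slots(attempt) * SLOT_TIME_BITS, "bit times")
```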
The Logical Link Control sub-layer is in charge of increasing the reliability of the MAC sub-layer by adding optional flow control and error control services. Ethernet (see 3.6.1) is the most common Data Link Layer protocol; in Ethernet, the LLC sub-layer exists, but neither its flow control nor its error control services are used.

OSI L3 - Network Layer. The Network Layer, or Layer 3 (L3), is in charge of conveying a packet (see Def. 5) from its source to its destination. The packet contains the address of the data recipient, or information to guide the packet across the network towards its recipient. The L3 layer also provides flow control and error control services. The most notable L3 protocol is IP [START_REF]Internet Protocol[END_REF].

OSI L4 - Transport Layer. The Transport Layer, or Layer 4 (L4), is in charge of the end-to-end transfer of user data gathered in messages. These messages are transported from the emitter to the receiver. The transport layer supports a communication between two users located in different systems, independently of the characteristics of the underlying network on which the communication occurs, in a seamless manner, while guaranteeing the quality of service required by the users. On the Internet, TCP (Transmission Control Protocol [START_REF]Transmission Control Protocol[END_REF]) and UDP (User Datagram Protocol [START_REF]User Datagram Protocol[END_REF]) are the most common. In the remainder of this document, we focus on technologies providing L2 protocols.

Network Design Paradigms

Two elements have to be decided when designing an L2 networking system: the choice of physical topology and the communication paradigm. Before detailing these two elements, let us define the basic network vocabulary on which the definitions below can rely. These definitions have several external sources: "Les Réseaux" [START_REF] Pujolle | Les Reseaux[END_REF], OSRA-NET (ESA's network specification [START_REF]OSRA Communication Network Specification[END_REF]), IEEE 802.1Q [START_REF]IEEE Standard for Local and Metropolitan Area Networks-Bridges and Bridged Networks[END_REF] and the IEEE TSN standards [START_REF]Time Sensitive Networking Task Group[END_REF]. Some of them also come from our own understanding of networking concepts.

Definitions

The terms frame, packet, network node, end-point/end-system/end-station, switch and physical topology are defined hereafter.

Definition 4 (Frame) In the OSI model, a frame is the support of communication of Layer 2, the Data Link Layer. In this document, unless explicitly stated, the word frame refers to Ethernet frames, compliant with the IEEE 802.3 standard [START_REF][END_REF].

Definition 5 (Packet) In the OSI model, a packet is the support of communication of Layer 3, the Network Layer. In this document, unless explicitly stated, the word packet refers to IP packets, compliant with the IETF RFC 791 standard [START_REF]Internet Protocol[END_REF].

Definition 6 (Network node) A component of a network physical topology. In our study, a network node can be understood as an end-station (see Def. 7) or as a switch (see Def. 8).

Definition 7 (End-point, End-system, End-station) A component of the network that needs to transmit or receive data. It is called an end-point because this component will not forward any frame it receives to another network node (end-point or switch), contrary to a switch. It is also called End-System (ES) in the Airbus terminology and End-Station in the IEEE terminology. We use any of these words indifferently in this document.
Definition 8 (Switch) A component of the network that connects other network nodes together. It forwards the frames it receives from one of its input ports to one of its output ports, leading the frames to their destination.

Definition 9 (Physical Topology) The physical organization of all the network nodes of a system, as well as of the links connecting them together.

Shared vs. Point-to-Point Medium

The choice of a physical topology lies upon a dilemma: shall the medium be shared between all network nodes, or shall each medium only connect two nodes together, point to point? Depending on the answer to that question, the available topologies differ. We briefly describe them in the next paragraphs.

Shared Medium Topologies

In a shared medium topology, there is only one physical medium, shared by all network nodes. When a message is emitted by an end-station, any network node is able to see that message and to decide whether it should retrieve it (being a recipient) or not. Examples of such topologies are bus, ring, etc. (see Fig. 3.2).

Point-to-Point Medium Topologies

In point-to-point medium topologies, there are several physical media, each medium connecting only two network nodes to one another. In order for two end-stations of the network to communicate with each other, if they are not already connected together, their frames must go through one or several intermediary network nodes, e.g. switches. In these topologies, only the recipient end-station sees and receives the frames destined to it. Examples of such topologies are star, mesh, etc. (see Fig. 3.3). In this document, bus topologies (shared medium) and star or mesh topologies (point-to-point medium) will be encountered.

Event-Triggered vs. Time-Triggered Networks

Once the physical topology of the network has been defined, the way communication happens on top of that physical topology has to be decided. It can happen either in an event-triggered or in a time-triggered fashion. We briefly describe these two approaches in the following paragraphs; an extensive comparison of event-triggered vs. time-triggered systems is available in [START_REF] Kopetz | Event-triggered versus time-triggered real-time systems[END_REF].

Event-Triggered. In an event-triggered network, the system reacts asynchronously to the occurrence of an event. Examples of such events are interrupts (e.g. when a frame is received, an interrupt can be used to signal it to the application level) or the release of a shared resource (e.g. release of a mutex, end of a DMA transfer, etc.). In event-triggered networks, emissions of frames are asynchronous.

Time-Triggered. In a time-triggered network, frames are emitted in a synchronous manner, i.e. they are sent at a date known by the whole network. The emission of every frame has been planned a priori, off-line. To do so, every emission is associated with a slot, i.e. a time interval in which the frame is emitted. The slot schedule repeats in a periodic manner. This type of network relies on a TDMA (Time Division Multiple Access [START_REF] Chan | Time-Division Multiple Access[END_REF]) strategy, sketched below. Time-triggered networks require the existence of clocks in a certain number of nodes of the network, as well as a synchronization strategy for these clocks, to properly apply the slot schedule across the network.

Remark 1 Some technologies have the ability to implement the event-triggered approach, the time-triggered approach, or both.

After the design choices have been made, a network system can be configured to provide quality of service.
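Before turning to quality of service, the following minimal Python sketch makes the time-triggered approach concrete: it models a hypothetical TDMA cycle in which each slot is reserved for exactly one station. The cycle length, slot durations and station names are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Slot:
    offset_us: int      # start of the slot, relative to the cycle start
    length_us: int
    sender: str         # the only station allowed to emit in this slot

# Hypothetical 1 ms TDMA cycle shared by three end-stations.
CYCLE_US = 1000
SCHEDULE = [Slot(0, 200, "ES_A"), Slot(200, 200, "ES_B"), Slot(400, 300, "ES_C")]

def allowed_sender(t_us: int) -> Optional[str]:
    """Who may transmit at global time t (None = idle gap in the cycle)."""
    phase = t_us % CYCLE_US
    for s in SCHEDULE:
        if s.offset_us <= phase < s.offset_us + s.length_us:
            return s.sender
    return None

assert allowed_sender(1250) == "ES_B"   # 1250 % 1000 = 250, inside ES_B's slot
```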
Quality of service is defined in the next section.

Quality of Service

Definition

Let us now define the concept of quality of service and provide some examples of performance and fault tolerance metrics. The last paragraph defines the expression Traffic Category.

Definition 10 (Quality of Service (QoS)) The quality of service of a data exchange (i.e. a group of frames transmitted from a source to its destination(s)) is a set of properties that can be guaranteed during the exchange. For instance, for a given data exchange, the quality of service can be a maximum jitter of 1 microsecond for all frames of the exchange. Quality of service may equally refer to network performance metrics (see 3.3.2) or network fault tolerance metrics (see 3.3.3).

Performance Metrics and Properties

The network performance metrics relate to the temporal behaviour of the data exchange. In this manuscript, let us consider two metrics: latency and jitter.

Definition 11 (Latency, End-to-End Latency) The duration necessary for a frame to go through a network, from emitter to receiver. It can be qualified as network or application end-to-end (E2E for short) latency; the two definitions, illustrated in Fig. 3.4, are the following.

Network E2E Latency: the considered duration is the time between the instant at which the frame is emitted on the medium and the instant at which the frame is received at the physical layer of the receiving ES.

Application E2E Latency: the considered duration is the time between the instant at which the frame is passed to the network stack of the emitting ES and the instant at which the network stack of the receiving ES makes it available to the receiving application.

Definition 12 (Jitter) Two flavours of jitter are distinguished:

1. Scheduling Jitter: the length, in time, of a time window around an expected time of action within which the action is actually performed. It represents the variation of the instant at which the action is performed.

2. Transmission/Transit Jitter: in the networking community, jitter is understood as the variability of the network E2E latency.

Using these two metrics, properties such as bounded latency (see Def. 13) or low jitter (see Def. 14) can be defined.

Remark 2 (Determinism, Bounded Latency, Low Jitter) Among performance quality-of-service properties, determinism is often used. We find that term quite ambiguous, even if a definition is proposed in [START_REF] Daigmorte | Analyse des interactions entre flux synchrones et flux asynchrones dans les réseaux temps réel[END_REF]. Therefore, we propose to use more precise properties, like bounded latency or low jitter, instead of determinism.

Definition 13 (Bounded Latency) A data exchange satisfies the bounded latency property if the latency of all its frames is bounded and inferior to a user-specified bound.

Definition 14 (Low Jitter) A data exchange satisfies the low jitter property if the jitter of the data exchange, i.e. the variability of the latency of all its frames, is low, bounded and inferior to a user-specified bound.

Fault Tolerance Metrics and Properties

The network fault tolerance metrics deal with the ability of the data exchange to occur in a faulty context. Five metrics can be introduced: reliability, availability, maintainability, safety and security [START_REF] Laprie | Guide de la sûreté de fonctionnement[END_REF]. In this study, we focus on availability and especially on the tolerance to the loss of a frame.
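Before moving on to redundancy, note that Definitions 11 to 14 translate directly into a few lines of code. The following sketch, using hypothetical timestamps, computes the end-to-end latencies of a data exchange, the worst observed latency (to compare against a bounded-latency requirement), and the transit jitter as the max-min variability of the latency.

```python
def e2e_latencies(tx_times, rx_times):
    """End-to-end latency of each frame, from emission to reception."""
    return [rx - tx for tx, rx in zip(tx_times, rx_times)]

def transit_jitter(latencies):
    """Transit jitter as used here: variability of the end-to-end latency."""
    return max(latencies) - min(latencies)

# Hypothetical timestamps (in microseconds) of four frames of one exchange.
tx = [0, 1000, 2000, 3000]
rx = [120, 1135, 2118, 3122]
lat = e2e_latencies(tx, rx)          # [120, 135, 118, 122]
print(max(lat))                      # worst-case observed latency: 135 us
print(transit_jitter(lat))           # 135 - 118 = 17 us
```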
In order to increase the availability of a networking system, one example of a strategy that can be implemented is redundancy. To implement redundancy in a networking system, network nodes or links can be duplicated (or n-plicated), so that when an equipment becomes faulty, another one can take its place while minimizing the downtime of the system. There are several modes of redundancy, namely cold, warm and hot:

• In a cold redundancy scheme, the duplicated devices are off and are turned on only when the nominal one is deemed faulty.

• In a warm redundancy scheme, the duplicated devices are idle and do not participate in the network nominal activities until the nominal device is deemed faulty.

• In a hot redundancy scheme, all duplicated devices are on and take part in the network nominal activities. An arbitration strategy is needed to decide how to handle faulty devices.

Fig. 3.5 illustrates cold, warm and hot link redundancy between two end-stations.

Definition of Traffic Categories

Let us introduce the concept of Traffic Categories. Now that the basic network vocabulary has been properly defined, let us use it to introduce the different networking technologies considered in this document. The presentation of each technology is organised as follows: a general introduction, followed by a discussion on medium access; then, the performance and the fault tolerance capabilities of the technology are briefly discussed; the presentation finishes with a description of the links between the technology and the spacecraft industry.

MIL-STD-1553

General Information

MIL-STD-1553B [33] is the specification of a databus originally developed for the US military in the 1970s. It has been used in space as the platform bus of a majority of satellites since 1983. It is a single-medium, time-triggered technology.

Medium Access

The 1553 bus connects one bus controller (BC) to up to thirty-one remote terminals (RT). Communication on the bus relies on a command/response principle: the BC periodically sends, in a time-triggered fashion, a command to one RT, and the RT shall respond to that command. The bus controller can also broadcast a message to all remote terminals. No RT shall speak unless asked first by the BC. By doing so, the bus controller is in fact in charge of handling the access to the single medium; a minimal sketch of this command/response pattern is given below.

Performance

The 1553 data bus is a low data rate bus (about 1 Mbit/s). The communication relies on messages composed of 1 to 32 "words", each of which can bear up to two bytes of user data (in a data word). The communication pattern is simple: an exchange is initiated by a command word and acknowledged with a status word, optionally accompanied by data words. The different communication patterns are standardized and can be found in [33].

Fault Tolerance

A 1553 bus is bi-directional and dual redundant. It is used in a cold redundancy scheme (see Def. 17), meaning that the communication happens on one channel (primary or nominal) and, in the presence of a failure, messages are sent on the other (redundant) channel of the same bus. In space, 1553 is used either with a single or with a dual bus.

Relation with the Spacecraft Industry

MIL-STD-1553B is widely used in the spacecraft industry because of its simplicity at both the physical (OSI L1) and protocol (OSI L2) layers, and its high bit-error reliability. It is used on board roughly 90% of Airbus satellites.
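To make the command/response principle concrete, here is a minimal, hypothetical Python model. The class and function names are ours, the word layout is simplified, and real 1553 words additionally carry sync and parity fields that are not modelled here.

```python
def command_word(rt_addr, tr_bit, subaddr, word_count):
    """Pack a simplified 1553 command word: 5-bit RT address, T/R bit,
    5-bit subaddress, 5-bit word count (0 conventionally encodes 32 words)."""
    assert 0 <= rt_addr < 32 and 0 <= subaddr < 32 and 0 <= word_count < 32
    return (rt_addr << 11) | (tr_bit << 10) | (subaddr << 5) | word_count

class RemoteTerminal:
    def __init__(self, addr):
        self.addr, self.memory = addr, {}

    def respond(self, cmd):
        """An RT only ever speaks when commanded: it answers with a status
        word (reduced here to its own address) plus data if asked to transmit."""
        rt, tr = cmd >> 11, (cmd >> 10) & 1
        sa, wc = (cmd >> 5) & 0x1F, cmd & 0x1F
        if rt != self.addr:
            return None                        # command is not for this RT
        status = self.addr << 11               # simplified status word
        n = wc or 32
        data = self.memory.get(sa, [0] * n)[:n] if tr == 1 else []
        return status, data

# The Bus Controller polls RT 3, subaddress 1, asking it to transmit 2 words.
rt3 = RemoteTerminal(3)
rt3.memory[1] = [0xCAFE, 0xBEEF]
print(rt3.respond(command_word(3, 1, 1, 2)))
```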
SpaceWire & SpaceFibre

SpaceWire

General Information

SpaceWire [42] is a point-to-point medium, event-triggered technology specifically designed for the spacecraft industry. Based on IEEE 1355, it was developed and then standardized [42, [START_REF]Spacewire -Protocol Identification[END_REF]] in the early 2000s under the impulsion of ESA and other major agencies (NASA, JAXA, RosCosmos). SpaceWire was first mostly used for point-to-point connections, but it is now also used as a networking technology, i.e. connecting SpaceWire nodes using SpaceWire routing switches.

Medium Access

The SpaceWire standard covers the first three layers of the OSI model. It relies on LVDS (Low Voltage Differential Signalling), used for data transmission over the double twisted pair cable, at the physical layer; on the character, the basic unit of a packet; and on wormhole routing to exchange packets between nodes, i.e. SpaceWire-capable equipments. The routing strategy is not further discussed in this document; it is explained in detail in [108].

Performance

The SpaceWire link is full-duplex and bi-directional. Its data rate ranges between 2 Mbit/s and 200 Mbit/s. Due to the packet format, the theoretical number of bytes per transfer is unlimited. In space, it generally reaches 100 Mbit/s with packets of 2 to 4 KiB. Higher-level protocols are also standardized to work with SpaceWire, for instance RMAP (Remote Memory Access Protocol) [START_REF]Spacewire -Remote Memory Access Protocol[END_REF], the CCSDS Packet Transfer Protocol [START_REF]Spacewire -CCSDS Packet Transfer Protocol[END_REF], etc.

Fault Tolerance

A SpaceWire character has a parity bit to detect errors in the character, and the standard specifies a link failure detection and signalling mechanism. The standard does not provide other fault tolerance capabilities.

Relation with the Spacecraft Industry

SpaceWire is widely used for connecting devices in the payload part of the satellite, e.g. instruments (or instrument processing units) to mass memories. Its success is due to its robustness by design (no memorization is required by the standard, therefore avoiding the risk of SEU in buffers). In addition, its relatively high data rate, the low power consumption of its devices, its protocol simplicity and its architectural flexibility make it ideal for many missions.

SpaceFibre

General Information

SpaceFibre, or ECSS-E-ST-50-11C [START_REF]SpaceFibre -Very high-speed serial link[END_REF], is a multi-gigabit networking technology designed for space applications. It runs over electrical or fibre-optic cables. It is a point-to-point medium, time-triggered technology. It complements the capabilities of SpaceWire by improving the data rate, reducing the cable mass, and providing quality of service in terms of both performance and fault tolerance. As SpaceWire, SpaceFibre covers the first three layers of the OSI model (L1 to L3).

Medium Access

SpaceFibre proposes an improvement of SpaceWire wormhole routing as medium access. There are now up to 32 VCs (Virtual Channels) per output port. These VCs work like FIFOs: the first frame to come in is the first one to go out. When several VCs have a data frame ready for emission, it is necessary to specify a medium access strategy; a sketch of the resulting arbitration is given at the end of this section.

SpF Scheduler. The SpF Scheduler offers the possibility of dividing time into 64 time-slots (of configurable, fixed duration). In each time-slot, a list of VCs is allowed to try to access the medium. In Fig. 3.7, the SpF Scheduler is represented by the rectangle on the right. In addition, we materialized it on each VC, in a similar manner to TSN Gate Control Lists (cf. Fig.
7 in [START_REF] Chaine | TSN Support for Quality of Service in Space[END_REF]). When the scheduler block of a VC states Open, it means that this VC is authorized to communicate in the time-slot; when the block states Closed, the VC is not configured in that slot. In the figure, VCs #31 and #30 are configured in the represented time-slot.

VC Arbitration & Precedence. Then, in order to arbitrate between VCs authorized to communicate in the same time-slot, SpaceFibre defines VC Arbitration, a rule similar to Ethernet static priority, based on a value entitled precedence, computed per virtual channel at instant t. The precedence of a VC is computed from two values: the VC Priority and the VC Bandwidth Credit. The highest-precedence VC is granted access to the medium first. The VC Arbitration is represented by the block at the bottom of Fig. 3.7.

VC Priority. The VC Priority is fixed a priori, during configuration, for each VC. In Fig. 3.7, the VC Priority is represented at the top, under the name of the VC. For instance, in this representation, VCs #31 and #30 have the same, and highest, priority.

VC Bandwidth Credit. The value of the VC Bandwidth Credit evolves over time: it is updated for every VC of a port each time a frame is emitted by one of the VCs of that port. The bandwidth credit is represented, per VC, in the SpF token-bucket-like block.

SpF Token-Bucket. The evolution of the VC Bandwidth Credit is dictated by a mechanism similar to a token bucket (cf. [START_REF] Pujolles | Les Reseaux[END_REF]). This token-bucket-like mechanism (let us call it the SpF Token-Bucket) allows a portion of the available bandwidth to be reserved for each VC. Examples of VC Bandwidth Credit computation are presented in [START_REF]What is spacefibre ?[END_REF]. Note that this section (including Fig. 3.7) only represents the user data channel, and ignores broadcast channels and management channels.

Performance

SpaceFibre links are bi-directional and full duplex. A link can bear up to several gigabits per second; in fact, the protocol allows several lanes to be used within the same link to drastically increase the data rate of a single link (in a similar fashion to PCIe). A SpaceFibre data frame (L2) can bear up to 256 bytes of user data. The medium access rules allow SpaceFibre networks to provide bounded latency and controlled jitter.

Fault Tolerance

SpaceFibre provides fault tolerance capabilities at the physical level; they are not detailed in this document. It also provides such capabilities at the MAC level: errors in a frame can be detected with a CRC, and the non-respect of a bandwidth reservation contract by a VC can be monitored. SpaceFibre allows faults to be isolated within a specific VC, preventing them from affecting other VCs. SpaceFibre does not provide redundancy mechanisms at the MAC level.

Relation with the Spacecraft Industry

SpaceFibre development and standardization at the ESA ECSS level is complete. It is very likely to be used in future space missions as either a platform, a payload, or a platform & payload bus. In particular, it is explored by ESA in the ADHA (Advanced Data Handling Architecture) initiative.
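To summarize the medium access rules above, here is an illustrative Python model of the election of a VC in an output port. It is a simplification of ours, not the normative SpaceFibre rule: the exact precedence computation, the priority ordering convention (here, a lower value means a higher priority) and the credit update are defined by the standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VirtualChannel:
    name: str
    priority: int            # fixed at configuration time (lower = higher here)
    credit: float            # SpF token-bucket-like bandwidth credit, run-time
    ready: bool              # has a data frame waiting for emission
    open_in_slot: bool       # authorized by the SpF Scheduler in this time-slot

def elect(vcs) -> Optional[VirtualChannel]:
    """Grant medium access: among the VCs that are ready and whose scheduler
    gate is open, pick the highest precedence, i.e. the best (priority,
    bandwidth credit) pair."""
    candidates = [vc for vc in vcs if vc.ready and vc.open_in_slot]
    if not candidates:
        return None
    return max(candidates, key=lambda vc: (-vc.priority, vc.credit))

vcs = [VirtualChannel("VC31", 0, 0.2, True, True),
       VirtualChannel("VC30", 0, 0.7, True, True),
       VirtualChannel("VC29", 1, 0.9, True, False)]   # closed in this slot
print(elect(vcs).name)   # VC30: same priority as VC31, larger remaining credit
```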
The Ethernet Family

Ethernet

General Information

Full Duplex Switched Ethernet, or simply Ethernet, is an OSI L2 technology based on IEEE 802.3 [START_REF][END_REF] and 802.1Q [START_REF]IEEE Standard for Local and Metropolitan Area Networks-Bridges and Bridged Networks[END_REF]. It is a point-to-point medium, event-triggered network. In this document, we name Ethernet the technology defined in 802.1Q-2008. Ethernet is spread worldwide, as it is the standard networking technology used at home and in ISP core networks.

Medium Access

In Full Duplex Switched Ethernet, no more than one device (end-station or switch) can be connected to a port of a switch; therefore, there is no issue of medium access for end-stations. The medium access in switch output ports is handled using FIFO + priority. In the switches, frames heading in the same direction through the same Ethernet output port are placed into queues, depending on their priority. The first rule applied is FIFO, meaning that, within a queue, the first frame to get in is the first one to get out. In order to arbitrate between queues, the priority field of the 802.1Q optional tag is used: the frame having the highest priority is emitted first. Fig. 3.8 illustrates an Ethernet switch with 4 ports and one level of priority (i.e. one queue) per output port.

Performance

An Ethernet network is composed of switches and end-stations that exchange Ethernet frames (format defined in [START_REF][END_REF]), which can bear up to 1500 bytes of user data (and even more with jumbo frames [60]). An Ethernet frame is represented in Fig. 3.9. The Ethernet link is full-duplex and bi-directional, and its data rate ranges from 100 Mbit/s to several gigabits per second.

Fault Tolerance

One mechanism is defined in Ethernet for fault tolerance: the FCS (Frame Check Sequence) field. This field contains a 32-bit CRC (Cyclic Redundancy Check), which serves to detect data corruption in a frame.

Relation with the Spacecraft Industry

Ethernet is just starting to be used on board spacecraft, for either payload links or dedicated point-to-point links.

ARINC 664 (AFDX)

General Information

ARINC 664P7 [3] defines a 100 Mbit/s avionic bus with a "deterministic" behaviour; in ARINC 664, deterministic means bounded latency, no buffer overflow and respect of the traffic contracts.

Medium Access

The medium access in ARINC 664 is identical to Ethernet. In addition, a traffic contract is applied: the traffic of a Virtual Link (VL) must respect its BAG (Bandwidth Allocation Gap), i.e. the minimum time between two consecutive frames of the VL. A sketch of this traffic contract is given at the end of this section.

Fault Tolerance

Regarding fault tolerance, in addition to the Ethernet CRC, ARINC 664 offers the possibility to duplicate frames in the emitting end-station and to reassemble them in the receiving end-station, in a hot redundancy fashion. In order to identify duplicates, it defines a sequence number: two duplicates always share the same sequence number. A first function, entitled Integrity Checking, eliminates invalid frames based on their sequence number; Integrity Checking is also used to detect the loss of a frame. A second function, called Redundancy Management, is in charge of the elimination of redundant frames (i.e. duplicates). In addition, the BAG protects the network against an application that would be emitting more than it is allowed or configured to.

Relation with the Spacecraft Industry

To the best of our knowledge, ARINC 664 is not used in the spacecraft industry.
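As an illustration of the traffic contract, the following sketch models a simplified per-VL policer. The class is hypothetical, and the real standard additionally tolerates a bounded jitter around the BAG, which is not modelled here.

```python
class VirtualLinkPolicer:
    """Simplified check of an ARINC 664 traffic contract: at most one frame
    per BAG, no larger than the contracted maximum frame size."""
    def __init__(self, bag_ms: float, max_frame_bytes: int):
        self.bag_ms = bag_ms
        self.max_frame_bytes = max_frame_bytes
        self.last_accept_ms = None

    def accept(self, t_ms: float, frame_bytes: int) -> bool:
        if frame_bytes > self.max_frame_bytes:
            return False                                   # oversized frame
        if (self.last_accept_ms is not None
                and t_ms - self.last_accept_ms < self.bag_ms):
            return False                                   # BAG violated
        self.last_accept_ms = t_ms
        return True

# Hypothetical VL with a 2 ms BAG and 200-byte frames.
vl = VirtualLinkPolicer(bag_ms=2.0, max_frame_bytes=200)
print([vl.accept(t, 100) for t in (0.0, 1.0, 2.5)])        # [True, False, True]
```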
ARINC 664 VL traffic), the TTEthernet standard defines a third type of traffic: time-triggered traffic, or TT traffic. This traffic relies on a global schedule, computed a priori, that defines the emission instant of all frames on all hops of the network. As explained in Section 3.2.3, this time triggered behaviour requires the use of a synchronization protocol. Performance TTEthernet links are bi-directional and full duplex. They can operate at up to 1 Gbit/s. The communication relies on Ethernet frames. In terms of Quality of Service, the TT traffic class is able to provide bounded or even fixed latency and low to ultra-low jitter. The RC traffic class has the same performance as ARINC 664 traffic. The BE traffic class has the same performance as Ethernet. Fault Tolerance Regarding fault tolerance, in addition to the Ethernet CRC and similarly to ARINC 664, TTEthernet offers the possibility to do hot redundancy, i.e. replicating frames in the emitting end-station, sending them on disjoint paths (3 for TT traffic and 2 for RC traffic) and eliminating the duplicates in the receiving end-station, in order to tolerate faults. As for ARINC 664, TTEthernet implements a bus guardian to make sure both the bandwidth reservation and the TT schedule are respected. Finally, TTEthernet provides fault tolerance capabilities for its synchronization mechanism (not detailed in this document). Relation with the Spacecraft Industry The use of TTEthernet is growing within the spacecraft industry. It is already used in some launchers and will be used in future lunar missions (e.g. the Lunar Gateway [122]). Time Sensitive Networking Time Sensitive Networking is presented in the next chapter. Conclusion This chapter has presented key concepts of the OSI model. It has then proposed a set of definitions to agree on the vocabulary that will be used in this document. Finally, this chapter has presented all the technologies, apart from Time Sensitive Networking, on which most of this study will rely. The next chapter will focus on introducing Time Sensitive Networking and explaining in detail one of its addenda dedicated to network performance quality of service, i.e. IEEE 802.1Qbv. Chapter 4 Focus on Time Sensitive Networking Overview of Time-Sensitive Networking Time Sensitive Networking (TSN for short) is a new IEEE technology that claims to be capable of supporting both real-time and high-throughput traffic. It is currently being developed by an IEEE task group, named the TSN Task Group (TG). This Task Group has continued the work of the former AVB (for Audio Video Bridging) Task Group, renamed in 2012. The TSN Task Group has published more than a dozen amendments and has added a few new standards to the IEEE 802.1 standard family in order to ensure a behaviour that is simultaneously real-time, adaptive and flexible, mixing synchronous and asynchronous approaches. The TSN standards, amendments and on-going projects can be organized in five main families, i.e. Synchronization, Reliability, Latency, Resource Management and Zero Congestion Loss. General Overview Synchronization: The first family offers a network level synchronization service; it groups 802.1AS and 802.1AS-rev. 802.1AS defines synchronization and time distribution protocols for a TSN network. 802.1AS-rev defines upgrades to 802.1AS, mainly specifying a redundancy protocol for the synchronization service. Reliability: The second family concerns Reliability (802.1CB, 802.1Qca, 802.1Qci, 802.1AS-rev).
It aims at preventing, as much as possible, the loss of a frame at application level by duplicating frames and/or by controlling that bandwidth reservation is respected by all streams. 802.1CB is a protocol used to support seamless network redundancy. 802.1Qca defines an explicit path control and bandwidth and resource reservation protocol. 802.1Qci defines ingress policing strategies for TSN switches and end-points. Bounded Low Latency: The third family is Bounded Low Latency (802.1Qav, 802.1Qbu, 802.1Qbv, 802.1Qch, 802.1Qcr). It aims at providing protocols with a bounded end-to-end (network) latency for specific streams in the TSN network. This is made possible through bandwidth reservation, traffic scheduling (synchronous and asynchronous) and preemption strategies. 802.1Qav defines traffic shaping strategies for TSN switches and end-points. 802.1Qbu defines a preemption protocol at ISO Layer 2 (Ethernet frame level). 802.1Qbv refines and upgrades 802.1Qav. 802.1Qch is a combination of other TSN protocols, aiming at building a TSN network with fixed latency and jitter. 802.1Qcr defines an asynchronous traffic shaping strategy for TSN switches and end-points. Dedicated Resources and API: The fourth family is Dedicated Resources and API (802.1Qat, 802.1Qcc, 802.1Qcp). It defines resource management protocols as well as configuration strategies for a TSN network. 802.1Qat defines a resource reservation protocol for TSN; this can be done statically or dynamically. 802.1Qcc refines and upgrades 802.1Qat; it also defines a configuration protocol for TSN. 802.1Qcp defines a standardized model (YANG model) used to describe a TSN network, the capabilities of its devices, and potentially its configuration. Zero Congestion Loss: The fifth family gathers the mechanisms aiming at guaranteeing that no frame is lost because of congestion (i.e. buffer overflow) in the network. Reminder on 802.1 Switches and End-Stations In order to understand Time Sensitive Networking, the functional forwarding process of 802.1 switches and end-stations must first be reminded/introduced. 802.1Q switch functional forwarding process An 802.1Q switch is a network device that forwards packets from input ports to output ports according to a certain number of rules. One can distinguish three steps in the forwarding process: Ingress, Switching and Egress. The Ingress part of the process has the ability to filter frames and prepare the switching; it is located in the input ports of the switch. The Switching part of the process actually switches the frames between input and output ports; it is located "between" input and output ports. The Egress part of the process has the ability to apply post-switching filtering and traffic shaping; it is located in the output ports. 802.1Q end-station functional forwarding process This paragraph introduces our understanding of the end-station (ES) behaviour. In fact, up to this day, there is no real specification for end-stations in TSN, as IEEE Std 802.1Q-2018 focuses only on switches. There is currently a project for specifying the behaviour of end-stations within the TSN working group, but nothing has been published so far. Our understanding of TSN leads us to the following architecture for end-stations in TSN (see Fig. 4.3). The dashed rectangles represent the part of our understanding which is unsure, i.e. the definition of the service access point above the data link layer. Basically, in the output direction, it matches the egress part of the switch architecture whereas, in the input direction, it matches the ingress part of the switch architecture.
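To fix ideas, the following minimal Python sketch models this three-step forwarding process with a strict priority egress stage, which applies to both switches and (in the output direction) end-stations. It is only an illustration of the concepts above, not an implementation of the standard: the class names, the forwarding table and the direct mapping "higher queue index = higher priority" are our own simplifying assumptions.

```python
from collections import deque

class Frame:
    def __init__(self, dst, priority, payload):
        self.dst, self.priority, self.payload = dst, priority, payload

class OutputPort:
    """Egress side of an output port: one FIFO queue per priority level."""
    def __init__(self, levels=8):
        self.queues = [deque() for _ in range(levels)]  # index = priority

    def enqueue(self, frame):
        self.queues[frame.priority].append(frame)

    def select(self):
        """Transmission selection: highest non-empty priority queue wins,
        FIFO order within a queue."""
        for queue in reversed(self.queues):
            if queue:
                return queue.popleft()
        return None

class Switch:
    """Minimal Ingress -> Switching -> Egress pipeline (ingress filtering omitted)."""
    def __init__(self, fwd_table):
        self.fwd_table = fwd_table                       # dst -> output port id
        self.ports = {p: OutputPort() for p in set(fwd_table.values())}

    def receive(self, frame):
        out = self.fwd_table[frame.dst]                  # Switching step
        self.ports[out].enqueue(frame)                   # Egress queuing

sw = Switch({"ES1": 1, "ES2": 2})
sw.receive(Frame("ES1", priority=0, payload="best effort"))
sw.receive(Frame("ES1", priority=7, payload="command"))
print(sw.ports[1].select().payload)  # -> command (highest priority first)
print(sw.ports[1].select().payload)  # -> best effort
```

In this toy model, the 802.1Q priority of a frame directly selects its egress queue; a real switch would map priorities to traffic classes through a configurable table.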
The interfaces (Service Access Points, or SAP) between ISO Layer 2 and the upper layers in a TSN end-system are still not specified in the standard. Nevertheless, we imagine, as represented in Fig. 4.3a, that some buffers will be necessary (close to the Service Access Point) in order to store frames while they wait to be processed by the upper layers. Local and system-wide TSN features As TSN relies on IEEE 802.1Q, the structure described above for the switches and end-stations is identical in TSN devices. There are also some added mechanisms through TSN features. The effects of all the TSN features that we have studied/considered can be categorized into two types of features: • Switch related features: they have a direct impact on the switch forwarding architecture and can be configured differently from one switch to another; • System wide features: they have an impact on the whole network and their configuration is at system level more than at per-device level. The TSN features are sorted into these two types in Fig. 4.4. In the previous paragraphs (regarding switches and end-stations), we only talked about switch related features. We assume that the system wide features are available in both switches and end-stations. Their use, in the real system, will depend on the device capabilities and configuration. Vocabulary The purpose of this section is to introduce the necessary vocabulary for the understanding of the TSN standards. The most common definitions are grouped here; however, some existing and new definitions will also be introduced along the document in dedicated sections. The definitions below have several external sources: IEEE 802.1Q [START_REF]IEEE Standard for Local and Metropolitan Area Networks-Bridges and Bridged Networks[END_REF] and the IEEE TSN features documents [START_REF]Time Sensitive Networking Task Group[END_REF]. Definition 22 (Feature) A feature refers to a TSN standard, amendment or on-going project. For instance, 802.1Qbv is a feature. Definition 23 (Mechanism) A mechanism is a part of a feature. Thus a feature is a union of mechanisms. Definition 24 (Feature Configuration) A feature configuration consists of a configuration of all the mechanisms of that feature. Definition 25 (TSN Network Configuration) A TSN network configuration consists of a selection of a subset of TSN features and a configuration of these features. Definition 26 (Streams) A stream is a multicast data transmission, i.e. a unidirectional data transmission between one sender and one or several receivers. A stream is in fact a sequence of frames. It may also be called a flow. In the Time Sensitive Networking and Audio Video Bridging terminology, the sender of a stream can be named Talker and the receivers Listeners. Configuration Challenge As Time Sensitive Networking is composed of many features, and as each feature is composed of several mechanisms, each including several parameters, there is a significant number of parameters to tune in order to configure a TSN network. It gets more and more complicated and challenging whenever a new feature is added to the list of amendments. That is the reason why the TSN Task Group has started to promote a common and open way to describe configured TSN networks. This will help interconnect tools for configuring and analysing TSN networks. The configuration description model is based on YANG models (XML descriptions of the network and its parameters) and is being standardized into several TSN features (IEEE 802.1Qcp, IEEE 802.1ABcu, IEEE P802.1Qcw, P802.1CBcv, etc.).
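The vocabulary above can be summarized with a small data model. The sketch below is a deliberately naive Python rendering of Definitions 22 to 25 (a feature as a union of mechanisms, a network configuration as a selection of configured features); the field names and example values are ours and do not come from the standardized YANG models.

```python
from dataclasses import dataclass, field

@dataclass
class Mechanism:
    name: str          # e.g. "Transmission Gate" (Definition 23)
    parameters: dict   # parameter name -> chosen value

@dataclass
class Feature:
    name: str                                       # e.g. "802.1Qbv" (Definition 22)
    mechanisms: list = field(default_factory=list)  # a feature = union of mechanisms

@dataclass
class TSNNetworkConfiguration:
    """Selection of a subset of TSN features and their configuration (Def. 25)."""
    features: list = field(default_factory=list)

# A feature configuration gives a value to every mechanism parameter (Def. 24).
qbv = Feature("802.1Qbv", mechanisms=[
    Mechanism("Transmission Selection Algorithm", {"queue #6": "CBS"}),
    Mechanism("Transmission Gate", {"CycleTime": 1_000_000}),  # illustrative value
])
config = TSNNetworkConfiguration(features=[qbv])
print(len(config.features), "feature selected:", config.features[0].name)
```

The standardized YANG models play an analogous role, at a much finer grain and with a normative schema.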
For now, only a few standards are included in the YANG models, but the complete set of models is on its way. YANG is in fact a formal and common representation of the network and its configuration. The configuration process itself is also standardized, in TSN feature IEEE 802.1Qcc -TSN configuration. The configuration process will not be described in this document. IEEE Time Sensitive Networking is a wide technology: there are now more than 20 published amendments plus many on-going projects. Some industry verticals, such as automotive, industrial automation or 5G, have felt the need, for clarity and identity purposes, to reduce the number of standards in order to provide a smaller scope of TSN to the hardware/software manufacturers. These scopes are named profiles and are also standardized by IEEE. They are often co-approved or co-written by other standardization institutions such as IEC or SAE. However, as well described as the configuration features may be, this does not at all help the network architect configure the TSN network. For now, the architect has to manually decide which TSN features to select and manually determine a value for the parameters of the mechanisms of these TSN features w.r.t. certain network performance and safety objectives. In this manuscript, we propose a novel automatic configuration generation approach in Chapter 10. Network Performance with 802.1Qbv -Enhancements for Scheduled Traffic The TSN standard that needs to be introduced for this manuscript is IEEE 802.1Qbv -Enhancements for Scheduled Traffic. As the name suggests, this feature provides enhancements to the output queues of switches and end-systems in order to support Scheduled Traffic, i.e. the scheduling of frames from different traffic categories so as to respect their Quality of Service requirements. In this section we describe how the enhanced switch output port works. Introduction In this enhanced output port, there are now up to 8 internal queues (i.e. traffic classes) to which frames can be assigned. Parameter 1 The mandatory number of queues really implemented in a device is not stated in the standard; let us call it #TC, where #TC ∈ [1; 8] (maximum value in the standard). In this document, unless explicitly stated, the figures will be represented using #TC = 8 queues. Each of these queues is associated with a Transmission Selection Algorithm (TSA) as well as a Transmission Gate (TG). Both the TSA and the TG rule when frames within a specific queue are allowed to try to access the medium. Fig. 4.7 summarizes the different mechanisms that one can see when zooming within the box (for #TC = 8). These mechanisms are: the transmission queues, the transmission selection algorithm and the transmission gate blocks. When frames arrive in the "queuing frames" part of the block architecture of the switch (Fig. 4.6), the frames are held in transmission queues. These transmission queues are the data structures that host the frames while they wait to be granted access to the medium. The queues are numbered and ordered from #7 to #0 and are often called traffic classes or just classes (cf. Def. 27). Each traffic class has a certain behaviour that is defined by its transmission selection algorithm and its transmission gate. Definition 29 (Available for transmission) In order for a frame to be allowed to try to access the medium, it must be "Available for transmission".
This state requires the following conditions: 1. The frame shall be at the head of a queue (i.e. traffic class); 2. The Transmission Selection Algorithm (of the queue) shall allow the transmission of that frame, marking it as "ready" (by extension, the queue will also be considered as ready); 3. The Transmission Gate (of the queue) shall allow the transmission of the frame. As several queues exist in a single switch output port, several frames can be "available for transmission" in different traffic classes, hence an arbitration strategy must be defined. The transmission selection mechanism selects a frame among all frames "available for transmission" based on a static priority rule. The frame selected by transmission selection is emitted first. The priority attribution implemented in TSN in the Transmission Selection block is the following: the higher the traffic class number, the higher the priority, i.e. traffic class #7 has a higher priority than #0. In the end, this means that if frames of #i and #j are allowed to access the medium at the same time, with i > j, the frame of #i will be emitted first. The allocation of frames (in reality streams) to traffic classes is discussed in Appendix A.2. Let us now focus on the Transmission Selection Algorithm and the Transmission Gate mechanisms. Transmission Selection Algorithms The transmission selection algorithm is one of the two mechanisms that define whether a frame within a queue is allowed to try to access the medium. There are several transmission selection algorithms, or shapers, introduced in the standard, namely: Credit Based Shaper, Enhanced Transmission Selection or even user-defined (ad-hoc) ones. However, the use of a TSA is not mandatory. Let us introduce these shapers in the following subsections. No TSA The use of a TSA is not mandatory in TSN. This means that one, several or all queues may choose not to use any transmission selection algorithm. In this situation, the frames at the head of the different traffic classes are immediately ready. The responsibility of selecting which frame goes out on the medium, and in which order, is left to the transmission gates and the transmission selection. Credit Based Shaper The Credit Based Shaper, or CBS, is probably the most famous one. It defines a rule to share the bandwidth between queues based on a credit that evolves when frames are enqueued or dequeued. CBS was introduced in AVB in order to avoid starvation. Indeed, with no TSA, if the high priority queues have a lot of messages to emit, the lower priority queues will not get a chance to emit their frames. With CBS, the bandwidth shall be more equally shared between high and low priority queues. As we will see later on in this document, the behaviour of CBS is slightly changed when combined with Transmission Gates as well as with TSN feature 802.1Qbu -Frame Preemption. In the rest of this subsection, we will introduce CBS as it was defined in AVB. A queue using the Credit Based Shaper Transmission Selection Algorithm is characterized by two parameters and a counter: • send slope, which is set by configuration, represents the part of the bandwidth given to the traffic class that uses this CBS; • idle slope, which is computed from send slope and other variables; • credit, a counter. These parameters are used to determine, at a specific instant, whether the queue is allowed to emit or not. The credit evolves according to several rules.
(Figure: CBS example — frame arrivals, credit of traffic class #2 and resulting output, for traffic classes #1, #2 and #3.) 1. When the queue is not empty and its credit is non-negative, then the message at the head of the queue is selected for emission. 2. When a message is emitted, the queue's credit decreases at send slope rate. 3. When the queue is not empty but cannot emit (because someone else is using the medium), its credit increases at idle slope rate. 4. When the queue is empty and its credit is positive, the credit is reset to zero. 5. When the queue is empty and its credit is negative, the credit increases at idle slope rate until it reaches zero, then stops increasing. As one can see in the figure, when the first frame of #2 is enqueued, #2 was empty, hence its credit was zero. When the first frame arrives, as the credit is equal to zero (considered positive) and no one is using the medium, this frame is emitted and the credit decreases at send slope. When the frame is emitted, a new frame is enqueued in #2. However, its credit is negative: the message cannot be emitted, it has to wait for the credit to increase (at idle slope) and become positive to be emitted. Again, once the frame is emitted, the credit decreases at send slope. While the credit was negative, a frame of #2 and one of #3 arrived. As the credit of #2 is negative, the #3 frame is emitted, even if frames of #3 have a lower priority than #2. As there is a frame enqueued for #2 and it cannot be emitted (medium occupied by the #3 frame), the credit increases. When the medium is freed (the #3 frame is gone), the #2 frame could be granted access to the medium; however, a frame from a higher priority traffic class (#1) has arrived in the meantime and is granted access to the medium first. The credit hence continues to increase. When the medium is freed, the #2 frame is granted access to the medium and the credit decreases. Once the frame is emitted, as the queue is empty and the credit is strictly positive, it is reset to zero. Enhanced Transmission Selection The Enhanced Transmission Selection Algorithm is also introduced in 802.1Qbv. Its goal is to ensure a fair sharing of the available bandwidth between the traffic classes that implement it. No implementation is imposed in the standard but still, one is suggested: Weighted Round Robin (WRR). Each queue implementing the WRR TSA is assigned a weight. This weight determines how much the queue will be served w.r.t. the other queues. Several implementations of weighted round robin exist; we will introduce one. The WRR server follows the following rules (as described by Algorithm 1, cf. [START_REF] Bouillard | Deterministic Network Calculus, from Theory to Practical Implementation[END_REF]): 1. When a queue is served, if its weight is positive, a frame of this queue can be emitted and its weight is reduced by one. 2. As long as the weight of the queue is positive and the queue contains frames to emit, the frames are emitted and the weight is decreased by one for each. 3. When the weight of the queue reaches zero, the next queue is served. 4. If a queue has a positive weight but no frames to emit, then the next queue is served. 5. When the queue has been served, its weight is reset to its configured weight. The remaining weight that may exist, because the queue did not have enough frames to emit during the previous round, is lost. Example 2 Fig. 4.9 instantiates these rules on a simple example with 3 traffic classes.
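Before turning to the figure, these rules can be made concrete with a minimal Python sketch of one WRR service round. This is our own illustrative rendering, not the pseudo-code of Algorithm 1; the queue contents and weights are arbitrary.

```python
from collections import deque

def wrr_round(queues, weights):
    """One full round of the WRR rules above.
    queues:  list of deques of frames, visited in order
    weights: configured weight per queue
    Returns the frames emitted during the round, in emission order."""
    emitted = []
    for q, w in zip(queues, weights):
        credit = w                   # rule 5: the weight is restored each round
        while credit > 0 and q:      # rules 1-2: emit while weight remains
            emitted.append(q.popleft())
            credit -= 1
        # rules 3-4: weight exhausted or queue empty -> serve the next queue;
        # any remaining credit is lost (it is not carried over).
    return emitted

q1 = deque(["M1-1", "M1-2"])
q2 = deque(["M2-1", "M2-2", "M2-3"])
q3 = deque(["M3-1", "M3-2"])
print(wrr_round([q1, q2, q3], weights=[1, 2, 1]))
# -> ['M1-1', 'M2-1', 'M2-2', 'M3-1']
```

With weights [1, 2, 1], the second queue is served twice as often as the others, which is the intuition behind the "fair sharing" goal of ETS.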
(Fig. 4.9: WRR example — frame arrivals, tokens and resulting output for traffic classes #1, #2 and #3.) Vendor Specific We have introduced in the previous subsections common TSAs. But the standard also leaves the door open for vendor specific transmission selection algorithms. The only constraint that all transmission selection algorithms must respect is named Ordering requirements in the standard. All TSAs must respect the ordering requirements of 802.1Q, i.e. the order of frames received on the same Bridge Port shall be preserved for: • Unicast frames with a given VLAN identifier (VID), frame priority, stream identifier, MAC destination address and MAC source address combination; • Multicast frames with a given VLAN identifier, frame priority, stream identifier, and MAC destination address. These elements (VID, frame priority, stream identifier, MAC destination and source address) will be introduced later. Transmission Gates The Transmission Gate mechanism is the second element that comes into play for deciding when frames in traffic classes are allowed to try to access the medium, i.e. the Time Aware Shaper (TAS). The schedule of the transmission gates of an output port is described by the following elements:
{ Tick: base clock,
  TimeInterval ∈ N+,
  OperControlListLength ∈ N+,
  CycleTime ∈ N+, with OperControlListLength * TimeInterval ≤ CycleTime,
  Schedule: V = <v_e>, e ∈ [1, OperControlListLength],
  where each v_e = <l_e^k>, k ∈ [1, #TC], l_e^k ∈ {o, C},
  e.g. v_0 = [o, C, C, C, C, C, C, C] } (4.3)
The expression "a port uses the transmission gate mechanism" will be understood as "this port has a scheduled closing and opening sequence for its transmission gates". The expression "a port does not use the transmission gate mechanism (or does not use the Time Aware Shaper)" will be understood as "the transmission gates of this port are always open". In other words, the default behaviour of the Time Aware Shaper for port j is to have all its gates open all the time, i.e.:
for e ∈ [1, OperControlListLength], for k ∈ [1, #TC], l_e^k = o (4.4)
Although (4.3) allows a schedule with more than one element (e > 1), when, for each schedule element, all the transmission gates are open, OperControlListLength could be set to 1 when the port does not use the transmission gate mechanism. Configuring the Time Aware Shaper on a transmission port is equivalent to finding a schedule and its duration, or number of states, for the transmission gates of all the queues of this port, i.e.:
TimeInterval = ?, OperControlListLength = ?, CycleTime = ?, ∀e ∈ [1, OperControlListLength], v_e = ? (4.5)
Remark 5 (TimeInterval) Contrary to the above explanation, we recently discovered that, in the standard, TimeInterval is not constant but variable for each gate event. While it does not change the general principle of the 802.1Qbv gate control list that we have just explained, we felt it was important to correct our understanding with this remark. Remark 6 (Tick implementation interoperability) Although it seems rather clever to have such an abstraction using a Tick for defining the length, in time, of the events in the GCL, it might create some limitations in the choice of devices, or interoperability limitations between manufacturers, if the minimum supported value of Tick differs from one manufacturer to another. There is no standardized minimum value of Tick in 802.1Qbv: a value is suggested (1 nanosecond) but it is not mandatory to support it. The network architect shall be aware of this issue when selecting switches and end-stations.
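To make the gate control list notation concrete, here is a minimal Python sketch of a GCL and of a gate-state lookup. It follows the simplified, constant-TimeInterval reading given above (in the standard, the duration may vary per gate event, cf. Remark 5), and the mapping of vector index to traffic class is an arbitrary choice of ours.

```python
# Gate states: 'o' = open, 'C' = closed; one symbol per traffic class.
GCL = {
    "Tick": 1e-9,              # base clock: 1 ns (the value suggested by 802.1Qbv)
    "TimeInterval": 1000,      # duration of one schedule element, in Ticks
    "OperControlListLength": 2,
    "Schedule": [              # v_e: one gate-state vector per schedule element
        ['o', 'C', 'C', 'C', 'C', 'C', 'C', 'C'],
        ['C', 'o', 'o', 'o', 'o', 'o', 'o', 'o'],
    ],
}
GCL["CycleTime"] = GCL["OperControlListLength"] * GCL["TimeInterval"]

def gate_is_open(gcl, traffic_class, t_ticks):
    """True if the transmission gate of `traffic_class` is open at time
    t_ticks (expressed in Ticks); the schedule repeats every CycleTime."""
    element = (t_ticks % gcl["CycleTime"]) // gcl["TimeInterval"]
    return gcl["Schedule"][element][traffic_class] == 'o'

print(gate_is_open(GCL, traffic_class=0, t_ticks=500))   # True: first element
print(gate_is_open(GCL, traffic_class=0, t_ticks=1500))  # False: second element
```

In this example, the first schedule element gives one traffic class exclusive access to the medium while all the other gates are closed, a pattern that will reappear in the "exclusive gating" architectures below.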
Remark 7 (Impact of TAS on CBS) The use of the Transmission Gate mechanism has an impact on the rules defined for the credit based shaper. In fact, the standard defines rules for CBS regarding the evolution of the credit when the transmission gate is closed, or what happens to it when the gate is open but about to close. The analysis and modelling in network calculus of this CBS behaviour when combined with TAS was done in [START_REF] Daigmore | Impact on credit freeze before gate closing in CBS and GCL integration into TSN[END_REF]. The new credit evolution rules for CBS with TAS are the following (cf. [START_REF] Daigmorte | Analyse des interactions entre flux synchrones et flux asynchrones dans les réseaux temps réels[END_REF] for the in-detail analysis): 1. When the queue is not empty, its credit is non-negative, its gate is open and there is enough time before the next gate closing event for transmitting the frame at the head of the queue, then the queue is considered ready, and the message at the head is selected for emission. 2. When a message is emitted, the queue's credit decreases at send slope rate. This does not apply to the overhead induced by frame preemption (see Appendix A.1). 3. When the queue is not empty but cannot emit (because someone else is using the medium, or because of the overhead due to preemption), its credit increases at idle slope rate. 4. When the queue is empty and its credit is positive, the credit is reset to zero. 5. When the queue is empty and its credit is negative, then, if its transmission gate is open, its credit increases at idle slope rate until it reaches zero, then stops increasing. 6. When the transmission gate of a queue using CBS is closed, this queue cannot emit frames and its credit remains constant. Configuration Problem for 802.1Qbv The field of possibilities for the configuration of 802.1Qbv is huge and imagining a random configuration is straightforward. However, finding a configuration, i.e. a selection of parameters' values for the Transmission Selection Algorithms, the Transmission Gates and the number of queues, with the goal of satisfying Quality of Service requirements, is not an easy problem. 802.1Qbv Architectures of Interest In this section, we introduce several architectures of interest for an 802.1Qbv port that can be found in the literature. For all the architectures below, we assume, without loss of generality, that all ports use #TC = 8 queues. Static priority This subsection describes the behaviour of an 802.1Qbv output port when there is no TSA configured and the Transmission Gate mechanism is not configured either. In this situation, the output port looks like Fig. 4.13. In the absence of Transmission Selection Algorithms and Gate Control Lists, the queues are directly "connected" to the Transmission Selection block. When a frame is enqueued in one of the queues, it is immediately available for transmission. If several queues have some frames available for transmission at the same time, the Transmission Selection does its job: it arbitrates between queues with a static priority rule. In this architecture, the Scheduling Type is "Static Priority". Fig. 4.14 illustrates the behaviour of such a port on a simple example. We suppose that all the queues have a frame ready for transmission at the exact same time.
Applying the static priority of Transmission Selection, the frame from #7 is emitted first (the priority order is decreasing: the lower the queue's number, the lower the frame's priority), followed by the #6 frame, followed by the #5 frame, [...], ending with the #0 frame. AVB -Audio Video Bridging This subsection describes the behaviour of an 802.1Qbv output port configured as in AVB, i.e. with the highest priority queues using the Credit Based Shaper and the remaining queues serving best effort traffic. Exclusive Gating Another architecture of interest consists in giving one queue, say #a, exclusive access to the medium through the Transmission Gate mechanism: when the gate of #a is open, the gates of all the other queues are closed. One issue remains: a frame from another queue may start its emission just before the gate of #a opens, delaying the emission of #a traffic. TSN features 802.1Qbu and 802.3br propose two integration methods in order to deal with this issue. It will be detailed in Appendix A.1. The exclusive gating concept, applied above on #a, can be generalized to be applied to 2 (see Fig. 4.19), 3 and up to 8 queues. In the case where each queue is in exclusive gating with the other queues, the architecture of the output port is equivalent to an End-to-End TT port in which each queue is given a timeslot to emit its frames without any interaction from the other queues. It is not mandatory that the timeslots given to each queue are equal. This End-to-End TT architecture is represented in Fig. 4.20. (Fig. 4.20: End-to-End TT port — gate schedule with OperControlListLength = 9, showing when the gate of each queue is open or closed; for instance, #1 has exclusive access to the medium during 2 TimeIntervals.) TT-CBS-BE This subsection describes the behaviour of an 802.1Qbv output port when the highest priority queue uses no TSA and a TG in exclusive gating with the other queues. The next two highest priority queues use CBS and the rest use either nothing, ETS or vendor specific algorithms; they will be named BE. With such an organization, an output port looks like Fig. 4.21. We will not develop further the behaviour and performance of TT-CBS-BE as we have already deeply explained the other architectures of interest and all the concepts before. An in-depth analysis and modelling of this architecture was realized in [START_REF] Daigmorte | Analyse des interactions entre flux synchrones et flux asynchrones dans les réseaux temps réels[END_REF]. That is not all! In the previous subsections, we have introduced several architectures of interest regarding the configuration of TSN feature 802.1Qbv. But these are not the only ones: the field of possibilities is nearly infinite, as one can choose, for each traffic class of a single port, a different Transmission Selection Algorithm and a different Gate Schedule, as well as a different number of queues. Summary To summarize, TSN feature 802.1Qbv -Enhancements for Scheduled Traffic- adds new bandwidth sharing mechanisms, through the Transmission Selection Algorithms (Credit Based Shaper, Weighted Round Robin, Static Priority, etc.), as well as time triggering capabilities, through the Time Aware Shaper, to the output ports of TSN switches. The same elements are added to the output ports of the end-stations. Fig. 4.12 summarizes the configurable mechanisms of 802.1Qbv. With this set of mechanisms, TSN is able to achieve bounded or even fixed latency as well as very low reception jitter. Conclusion This chapter has introduced some generalities on IEEE Time Sensitive Networking as well as the TSN addenda for network performance quality of service and fault tolerance quality of service. In Part III, we will aim at finding a configuration of the mechanisms of 802.1Qbv. Before doing so, we propose, in the next part, to make sure that TSN is indeed an interesting technology for a next-generation satellite network.
Part: Problem Statement 1 In accordance with the ever-expanding volume of data generated and handled by ground-level equipment (telephones, cars, scientific instruments, etc.), satellites must be capable of producing and transmitting massive amounts of data in order to meet their users' requirements. While improving the performance of data production is straightforward, since instruments with such capabilities are already available on the market (e.g. COTS multi-gigabit cameras, etc.), improving the performance of the on-board networks carrying that data remains complex. The current networking technologies (mostly 1553 and SpaceWire) will not be able to support that trend for long. Therefore, the spacecraft industry has started to consider new technologies so as to keep up with the demand. The technology supporting the next generation satellite network is expected to create a "unified" network, meaning that Platform and Payload traffic will co-exist in the same network, while proposing improved quality of service (performance and fault tolerance) at a reasonable cost. There is not one but several technologies currently considered by the industry as prospective solutions for next-generation satellite networks. The first section of this chapter presents the list of pre-selected technologies, including a rationale for their pre-selection. The goal of this first problem will be to identify one or several candidates capable of fulfilling the satellite requirements. To do so, the expectations on the technologies supporting the communication network have to be refined, meaning that it is necessary to further formalize the behaviour of space applications and the legacy design paradigm. For that purpose, we present an abstract model of the equipment (or devices) and applications, from which we derive a first set of requirements that we call Application Level Properties. Preselected Technologies Although the current architecture works perfectly fine, it has started to show its limits: new instruments, and more generally new equipment, are capable of generating gigabits of data that the network cannot handle in its current version, i.e. 100 Mbit/s on a SpaceWire network. Using a gigabit-capable network could allow satellite users to access this huge amount of raw data. Moreover, adding more mechanisms at network level (ISO Level 2) could ease the integration of more and more equipment on-board and reduce the development effort to be done at application level. At European Space Agency level, SpaceFibre is the high-throughput networking technology successor of SpaceWire; it is hence naturally considered in this study. Nevertheless, SpaceFibre is only used in the spacecraft industry, thus its development and updates are quite expensive, in particular in terms of non-recurring costs. Using a technology based on COTS -Commercial-Off-The-Shelf- components, or IP Cores -Semiconductor Intellectual Property Cores (instantiated into specific space oriented hardware)-, shared, for some parts or as a whole, with other industrial sectors (automotive, industrial automation, aeronautics, etc.), could help lower the overall cost of the satellite network. Having a wide-spread technology could also facilitate the interaction between the spacecraft industry and the academic world. This is the reason why the focus is steered towards Ethernet-based technologies. In fact, Ethernet has been considered/used in industrial systems since the early 2000s.
In the spacecraft industry, Ethernet is being used on board the International Space Station (ISS) and is starting to be embedded in satellites to convey payload traffic between an instrument and its associated computing device in a point-to-point fashion. In the aircraft industry, an enhanced version of Ethernet, i.e. ARINC 664, is used to convey flight control traffic (with stringent quality of service requirements). It is an on-board networking technology which offers real-time and fault tolerance guarantees. ARINC 664 is widely used in Airbus planes; it is therefore logical to try to re-use it in Airbus satellites. That is why this technology is preselected for our study. In the launcher industry, TTEthernet will be used on board the Ariane 6 rocket as its on-board network (with quality of service requirements equivalent to our platform side in the satellite). It is also planned to be used in space for, among others, the space gateway of the ARTEMIS mission towards the Moon [122]. TTEthernet also offers real-time and fault tolerance guarantees. In addition, there are already space hardened components available for TTEthernet. That is why TTEthernet is preselected for our study. Finally, an opportunity has appeared with Time Sensitive Networking, the state of the art IEEE Ethernet technology. It is being considered by several industry verticals, e.g. the automotive industry [START_REF] Migge | Insights on the Performance and Configuration of AVB and TSN in Automotive Ethernet Networks[END_REF], the aircraft industry [START_REF] Steiner | Recent ieee 802 developments and their relevance for the avionics industry[END_REF], industrial automation [START_REF] Pop | Enabling fog computing for industrial automation through time-sensitive networking (tsn)[END_REF], [START_REF] Lo | A perspective on ieee time-sensitive networking for industrial communication and automation systems[END_REF], 5G [START_REF] Larrañaga | Analysis of 5g-tsn integration to support industry 4.0[END_REF], the train industry [START_REF] Jakovljevic | Next-Gen Train Control / Management (TCMS) Architectures: "Drive-By-Data" System Integration Approach[END_REF], the spacecraft & launcher industry [START_REF] Sanchez-Garrido | Implementation of a time-sensitive networking (tsn) ethernet bus for microlaunchers[END_REF], etc. Hoping that the development effort could be shared between industries and that the spacecraft industry could benefit from a scale effect due to the other industries buying the same components on a much larger scale, Time Sensitive Networking is preselected for our study. To summarize, five technologies are pre-selected for this study: SpaceFibre and four technologies from the Ethernet family: Ethernet, ARINC 664, TTEthernet and Time Sensitive Networking. These technologies are introduced in Chapters 3 and 4. Satellite On-Board Applications Modelling To support the definition of Application Level Requirements, it is necessary to detail the behaviour of the applications and the temporal parameters associated with it. Let us first model the end-stations and applications of the satellite system introduced in Chapter 2. The definitions and models of this chapter have been retrieved from [START_REF] Chaine | Formal Specification of Satellite On-Board Networks Requirements[END_REF]. Devices In the model, we define end-stations as devices. Definition 32 (D, Devices) Let D denote the set of devices.
The devices (Dev ∈ D) can be part of four different families: • Platform Computing: the platform computing devices process data from sensing devices and generate commands sent to actuating devices. • Payload Computing: the payload computing devices gather, process and produce data for the payload part of the satellite. • Sensing: the sensing devices collect data through analog or digital sensors. • Actuating: the actuating devices execute commands received from a computing device so as to control the attitude and the orbit of the satellite. Motivating Example All along the problem statement, we will use a motivating example to illustrate the given definitions, constraints, etc. Every now and then, we will use this example or add more details to it so as to ease the understanding of the concepts of this document. This example, for now, is composed of three devices: • One Platform Computing device that we will name OBC; • One Sensing device that we will call High Performance Sensor; • One Actuating device that we will call High Performance Actuator, which is, in reality, connected through a RIU -Remote Interface Unit. Applications Applications (e.g. Command & Control, Vision Based Navigation, Payload Processing, etc.) run on the computing devices. On Platform Computing devices, applications gather, process and produce data that serves for the control of the satellite, for instance the control of gyroscopic actuators to orient a satellite in order to take a picture of a specific spot on Earth, or the use of on-board cameras to detect any incoming obstacle and control the thrusters of the satellite to avoid the threat. On Payload Computing devices, applications also gather, process and produce data, but for the payload part of the satellite. For instance, they can process the raw stream of data retrieved from the camera sensor of the satellite and build images or videos that will then be retransmitted to Earth via the communication subsystem of the satellite connected to ground stations. Several time constants are used to define the behaviour of these applications. Definition 34 (MIF Cycle, P_MIF) Each application follows a pattern, named a cycle, shown in Fig. 5.2. The duration of this cycle is constant per satellite and is called the MIF -MInor Frame- Period, denoted P_MIF. During this cycle, a reserved time quantum of application time is dedicated to gathering input data coming from sensing devices through the on-board network; then another quantum of its time is dedicated to processing this data and one last quantum is dedicated to sending output data to actuating devices, according to the output of the processing, via the on-board network. This definition is illustrated in Fig. 5.2. In the figure, the duration P_MIF is represented by the distance between the two upward-facing arrows. Then, the green rectangle represents the part where the application potentially gathers input, the blue part represents the application processing time and the orange one represents the part where the application potentially emits its messages. Remark 8 Although it may appear constant in the figures of this document, the duration of the Input, Processing and Output phases is not required to be constant in the satellite system. Rule 1 All values for periods in this document will be expressed in milliseconds. Definition 35 (MAF Cycle, P_MAF) In the satellite, the MIF period represents the duration of one applicative cycle. As several applications co-exist in the satellite, a system-wide period, or hyperperiod, is defined.
This period is called a MAF -MAjor Frame- Period (or MAF Cycle) and its duration is denoted P_MAF. The MIF cycle of duration P_MIF is repeated k (∈ N*) times during a MAF Cycle, so that: P_MAF = k * P_MIF (5.1) Hypothesis 1 In the system of our industrial partner, the usual value of P_MAF was 1000 ms (e.g. P_MIF = 125 ms with k = 8). Application Level Properties Because of the gathering of input at the beginning of the MIF cycle and the output at the end of the MIF cycle, multiple messages from several applications must share the same medium at the same time. Those applications probably have different requirements on network performance quality of service. As a consequence, the on-board network shall be able to satisfy these performance QoS requirements. In addition, in order for applications scattered all across the network to work properly, they all require to have the same understanding of time (for instance by being synchronized). Finally, since resources are shared between applications, in order to prevent a fault coming from one application from having an impact on the nominal behaviour of the other applications, a fault tolerance quality of service requirement is added. Therefore, at this stage, we identify three requirements, or application level properties, that the on-board network shall fulfil: Mixed Traffic Types, Time Management and Fault Tolerant Operations. Mixed Traffic Types Application-Level Property 1 (Mixed Traffic Types) Capability of the network system to convey, at the same time, several flows with different characteristics (for instance, low data rate traffic with low jitter and high data rate traffic), while preventing traffic of different criticalities from affecting each other's performance. For instance, a network satisfying Application Level Property 1 shall be able to convey, with the same equipment, low data rate traffic with network performance requirements (e.g. messages for the command & control of a thruster, requiring very low reception jitter) and high data rate traffic with no network performance requirements (e.g. data from a Payload Computing device sent to the communication sub-system for its transmission towards Earth, using most of a 1 Gbit/s link). Time Management Application-Level Property 2 (Time Management) Capability of the network system to manage time, i.e. ensuring either a global common clock for all network elements or at least applicative time distribution. Remark 9 While the industrial partner would be ready to consider that part of the time management remains at application level, in this PhD we took the hypothesis and requirement that Time Management should be done at MAC level (ISO Level 2). Fault Tolerant Operations Before defining the third application level property, let us first explain that we consider a faulty behaviour to be either incorrect, lost, out of time constraints or out of traffic contracts. Example 7 (Examples of Faulty Behaviours) An example of incorrect faulty behaviour would be a message sent by an application and received by another whose content has been altered (due for instance to a bit-flip, which is quite common in the space environment). An example of lost faulty behaviour would be a message sent by an application that never reaches its destination application (due for instance to the message being discarded after a CRC error).
An example of out of time constraints faulty behaviour would be a message sent by an application, expected to be received within a specific time window, and received outside of this time window (due for instance to a synchronization error between the sending and receiving devices). Finally, an out of traffic contract faulty behaviour would be a message sent by an application while said application has already used all the resources that were allocated to it (due for instance to a device turning into a babbling idiot). Application-Level Property 3 (Fault Tolerant Operations) Capability of the network system to operate in a faulty context by preventing faults; by detecting, isolating and recovering from certain faults; and by generating failure reports/indicators for higher level fault management in case a fault cannot be dealt with locally, in a seamless manner. Remark 10 The above paragraph only addresses a selection of all the types of faulty behaviour that could occur in the system. Examples of other such faulty behaviours are impersonation (i.e. an application emitting a message in place of another application), routing errors (i.e. a message not taking the correct path), etc. Contribution Overview The applications running over the devices require a network to communicate with each other. This network can be implemented with several technologies. Therefore, we wondered: Definition 36 (Problem 1) Given the set of technologies preselected by the industrial partner, is it possible to identify one or several technologies that would be good candidates for a next-generation satellite network, based on their compliance with the application level properties, in a qualitative approach? To address this first problem, we proposed to compare the preselected technologies in a qualitative manner. To do so, we defined a set of criteria per application level property that would serve as metrics for the comparison. The criteria, the comparison and its results have been published in [START_REF] Chaine | Comparative study of ethernet technologies for next-generation satellite on-board networks[END_REF] and in [START_REF] Chaine | Comparative study of high throughput technologies for next-generation satellite on-board networks[END_REF]. The first paper only discusses the suitability of Ethernet technologies (i.e. Ethernet, ARINC 664, TTEthernet and Time Sensitive Networking) while [START_REF] Chaine | Comparative study of high throughput technologies for next-generation satellite on-board networks[END_REF] complements the previous comparison by adding SpaceFibre. Part II will detail the identification of the criteria for the comparison, followed by the comparison itself, criterion by criterion, on a qualitative basis. It will conclude with the identification of three suitable candidates for the next generation satellite network. Chapter 6 Methodology: Qualitative Comparison Relying On Criteria Regrouped Per Properties In order to select one or several technologies suitable for a unified Platform & Payload satellite on-board network, we propose to compare the preselected technologies (presented in Chapters 3 and 4) in a qualitative fashion. Willing to reuse as much as possible the bibliography, we propose in the first section a state of the art regarding the selection of a next-generation network. The state of the art focuses on two aspects: the definition of criteria to compare technologies and the existing comparisons of technologies. The second part of this chapter is the presentation of the criteria we will use for our comparison.
Related Works: Selection of a Next Generation On-Board Network This state of the art deals, in the first section, with the space sector, mostly in the satellite domain, and then opens the focus, in the second section, to other industrial sectors such as automotive, industrial automation, etc. We will reuse the methodology of the state of the art for the comparison. In fact, authors define both a set of technologies and a set of criteria representative of the requirements of the system they consider (e.g. a satellite, a car, a train, etc.). Then, for each criterion, they evaluate how a specific technology performs w.r.t. said criterion. Then, they analyse the "value" of every criterion w.r.t. their system requirements. Example 8 For instance, with a criterion Maximum Data Rate and a set of technologies like {Ethernet, Spacewire}, Ethernet will be able to perform at 1 Gbit/s or more, while Spacewire will be able to perform at about 200 Mbit/s. Considering a system requirement at 800 Mbit/s, the analysis of the value of the criterion Maximum Data Rate for the set of technologies {Ethernet, Spacewire} leads to the conclusion that Ethernet is suitable and Spacewire is not, for this particular criterion. Eventually, a summary of the analysis of all the defined criteria is provided so as to identify the best choice(s) for the considered system. In the Space Domain Spacewire and MIL-STD-1553 are the most common technologies used as on-board bus/network in the spacecraft industry today. However, their limits in terms of performance, scalability, flexibility, etc. have left space industrials wondering about new technologies. In [START_REF] Chaine | TSN Support for Quality of Service in Space[END_REF], we have introduced the need for an upgrade of the satellite network and identified several challenges on the use of TSN for space applications. Closest works The two closest works to our first contribution are [START_REF] Gwaltney | Comparison of Communication Architectures for Spacecraft Modular Avionics Systems[END_REF] and [START_REF] Pruvost | Ethernet for space: an enabler for next-generation avionics[END_REF]. The criteria used in [START_REF] Gwaltney | Comparison of Communication Architectures for Spacecraft Modular Avionics Systems[END_REF] (e.g. "Collision Avoidance", etc.) include most of ours. The others are either a subdivision of the criteria we introduce in the next chapter or out of the scope of the coming qualitative comparison (for instance physical-layer related criteria). For man-rated applications, the technology they selected was TTP/C. We do not reuse their conclusion as the requirements of our satellite system differ from their spacecraft's requirements (i.e. we consider high data rate traffic in addition to low jitter traffic). [START_REF] Pruvost | Ethernet for space: an enabler for next-generation avionics[END_REF] proposed in 2016 a first high level analysis comparing ARINC 664, TTEthernet and Ethernet for Space (a tailoring of ARINC 664 for space applications), without really relying on clearly defined criteria. They identified the opportunity of an Ethernet-based network as a next-generation satellite network without really selecting one specific technology. We add more technologies to the comparison they started, namely Ethernet and TSN, and provide a more in-depth analysis of the application level properties for each technology.
Switched Ethernet and ARINC 664 In 2002, [START_REF] Webb | Ethernet for space flight applications[END_REF] was most likely the first work to address a detailed implementation of a full-duplex switched Ethernet for space applications. At the time, it was already considered as both a Platform & Payload network candidate and the authors compared it to 1553. The authors concluded that Ethernet was indeed interesting but that further studies were required to precisely quantify the bounded latency capability of Ethernet and the possible implementations of redundancy protocols over Ethernet. [START_REF] Notebaert | Towards spacewire-2: Space robotics needs: Spacewire missions and applications[END_REF] introduced several requirements oriented towards space robotics needs and provided a high level analysis of the capabilities of SpaceWire-2/SpaceFibre w.r.t. said requirements. Their analysis comprised most of the criteria for Application Level Properties 1, 2 and 3 that we consider, as well as some additional ones (e.g. considerations on the configuration process of a SpaceFibre network). The authors concluded that SpaceWire-2/SpaceFibre could be a potential candidate for the upgrade of the satellite network. [START_REF] Suthaputchakun | Performance analysis of afdx switch for space onboard data networks[END_REF] proposed a detailed analysis of ARINC 664 for "Space On-board Data Networks". In the analysis, the authors only considered part of Application Level Property 1, as they only discussed the latency and jitter criteria. Typical values for latency and jitter were assessed via simulation on a network working at 100 Mbit/s. They concluded that, on a first approach, ARINC 664 could be considered as a suitable candidate. While we agree with the conclusion proposed in the paper for the use case it presents, in our contribution we disagree with the authors and declare ARINC 664 not suitable for a next-generation satellite network, based on the analysis of Application Level Properties 2 and 3 as well as on more stringent requirements on jitter in our case. TTEthernet and Time Sensitive Networking In [START_REF] Steiner | Ethernet for Space Applications: TTEthernet[END_REF], the authors described TTEthernet and showcased its interest for the spacecraft industry by identifying several interconnection possibilities between TTEthernet and Spacewire for a satellite backbone network. More recently, [START_REF] Kulau | Towards modular and scalable on-board computer architecture[END_REF] discussed the interest of Ethernet based networks, and in particular of TSN based networks, for next-generation on-board computer and data handling architectures. The discussion was high level but went through the same application level properties as those of our contribution, where we address them in detail. The launcher (i.e. rocket) industry has not been spared by the network evolution trend of the space domain. In Europe, the interest of ARINC 664, TTEthernet and Time Sensitive Networking has been discussed for the newest launchers, and some relying on TTEthernet and Time Sensitive Networking will lift off in 2022. In [START_REF] Clavier | TTEthernet, a Promising Candidate for Ariane 6[END_REF], the authors proposed a discussion on the network for the ARIANE 6 launcher. In the paper, they selected TTEthernet as a suitable candidate among several technologies including Spacewire, Flexray, TTP, Ethernet, ARINC 664, TTEthernet and Fibre Channel.
While the comparison process and results are not available in the paper, the criteria used by the authors are the following: Cost, RAMS, AIT-OPS (Assembly Integration & Test, Operations), Environment withstanding, Flexibility/Adaptability to design modification or obsolescence, and Maturity (described by a TRL level). We assume that performance (Application Level Property 1) was also addressed but not listed. Application Level Property 2 does not seem to have been addressed. Two candidates were actually identified: ARINC 664 and TTEthernet. However, TTEthernet was favoured because it was tailored for their applicative needs, and because of its flexibility and its simplicity for software validation. With the in-detail comparison we propose in the following chapters, we will also identify TTEthernet as a suitable candidate for a next-generation satellite network. Later, authors from the same group started to consider TSN in place of TTEthernet. In paper [START_REF] Polonowski | Ariane launchers digital engineering: stakes and challenges[END_REF], the authors mentioned, among other things, the interest of the launcher industry in going towards an Ethernet-based technology relying on COTS components so as to reduce costs. The technology they chose was TSN (instead of TTEthernet). However, they raised the reader's awareness on the additional studies required to assess the behaviour of COTS components in space. We share the authors' view on COTS components and we actually detail it in the last chapter of this first contribution. Finally, [START_REF] Sanchez-Garrido | Implementation of a time-sensitive networking (tsn) ethernet bus for microlaunchers[END_REF] presented the TSN implementation used for the network of the MIURA 1 microlauncher that will go to space in the coming years. The networking technologies considered by the authors are really close to ours, the only missing technology being Spacefibre. Again, as for the previous paper, the authors identified the interest of the use of COTS components for launcher networks. TSN has been selected for its ability to handle critical data (Application Level Property 1) and redundancy (Application Level Property 3), as well as for the expected availability of TSN COTS components. In the coming comparison, we agree with the authors on the interest of TSN and COTS components. There is therefore, in the space domain (satellites and launchers), a real interest in the selection of a next generation on-board network. The trend seems to indicate that the technologies that were pre-selected for our contribution would be good candidates. In the coming contribution, we will reduce this set of potential candidates with the help of the qualitative comparison. Outside the Space Domain Outside of the space domain, the race for evolving from an industry specific legacy bus towards a faster, better and more common technology is also raging, in particular in the automotive and factory automation domains, but also in the avionics industry. A good illustration of this "race" is probably [START_REF] Navet | In-vehicle communication networks -a historical perspective and review[END_REF], where the authors propose a historical list of every in-car embedded network, used or emerging at the time (2013). This list does include Ethernet and TTEthernet, but lacks TSN, which did not exist at the time, and SpaceFibre, which is out of the scope of the automotive industry.
Most of the works presented hereafter use simulation to obtain latency and jitter values for several network configurations. Moreover, in the following papers, the definition of performance quality of service requirements is not identical to ours. While most related works chose their technologies based on the best performance the technologies are able to provide (i.e. very low jitter and the smallest network latency possible), our comparison is motivated by a satellite use case where jitter shall be kept low but latency constraints are not strong. In fact, in our industrial use case, latencies can be as large as necessary as long as no deadlines are missed.
Automotive Industry
[START_REF] Cummings | Exploring use of ethernet for in-vehicle control applications: Afdx, ttethernet, ethercat, and avb[END_REF] proposed a comparison similar to ours, but applied to the automotive industry. Their analysis targeted the evaluation of the use of AFDX (i.e. ARINC 664), TTEthernet, EtherCAT and AVB in an automotive context. They considered Application Level Properties 1 and 3 but also addressed physical layer, system start-up and cost aspects. We consider these aspects out of scope of our comparison since, at this stage, we only wish to discuss the protocols' theoretical capabilities and not the performance of one implementation. With the qualitative comparison of Chapter 7, we complete their work with a comparison on Application Level Property 2 and additional analysis on Application Level Property 3 applied to a space use case.
Still in an automotive context, [START_REF] Migge | Insights on the Performance and Configuration of AVB and TSN in Automotive Ethernet Networks[END_REF] proposed a comparison of Ethernet, AVB and TSN. The comparison only focused on Application Level Property 1, and more specifically on latencies and their bounds in an automotive use case. Latencies were obtained via simulation and their bounds via worst-case schedulability analysis. The authors concluded by identifying that, among their candidates, TSN was the most suited to the automotive use case requirements. However, the authors rightfully claimed that the configuration and timing validation process of a TSN network is complex and that tools are needed for it. TSN, the successor of AVB, can reproduce, via configuration, the behaviour of AVB. Hence, the results observed for AVB are transposable to TSN when configured in AVB mode.
[START_REF] Schneele | Comparison of ieee avb and afdx[END_REF] proposed a comparison between AFDX and AVB in an avionic context. The authors, based on network calculus, declared that the latency bounds observed in an AVB network were suitable for aircraft avionics' network requirements. Their paper also stated that the new mass market controllers and switches developed for AVB (now TSN) could be a cost-effective alternative for COTS-based low criticality systems. We share their view on the performance and cost analysis. In particular, the mass market of COTS components could benefit satellite production in a "New Space" context (business analysis pending). However, we do not discuss AVB in our comparison since, for our specific application, it does not match our needs in terms of jitter.
[109] proposed a competitive performance evaluation of AVB and TTEthernet. The evaluation was performed on an automotive use case via simulation in OMNET++. Results of the evaluation showed comparable performances for both technologies in terms of latencies (Application Level Property 1).
However, the authors identified that background traffic (without deadline and jitter constraints) had a significant impact on AVB and planned further evaluations with background traffic in order to quantify its impact on performance. [5] also performed a simulative assessment of the performances of AVB and TTEthernet in an automotive context, which led them to the same conclusions as [START_REF] Steinbach | Tomorrow's in-car interconnect? a competitive evaluation of ieee 802.1 avb and time-triggered ethernet (as6802)[END_REF]. [79] compared the performances of AVB and Ethernet in an automotive use case at 100Mbit/s. In our analysis, we compare the performances of TSN and Ethernet at 1Gbit/s. [11] compared the performance in terms of latency and jitter (Application Level Property 1) of AVB and "AVB-ST", for Scheduled Traffic (which would become TSN configured in TT-CBS-BE mode, cf. Section 4.2.5), on an automotive use case via simulation using OMNET++. The authors concluded that the performance of both technologies seemed suitable for the automotive use case, but that the introduction of a scheduled traffic class in AVB-ST would, in combination with other mechanisms, provide low and bounded latencies to critical traffic even with high bandwidth traffic using the same resources. Again, this already confirmed, at the time, the interest of the automotive industry in TSN.
Railway Industry
[START_REF] Jakovljevic | Next-Gen Train Control / Management (TCMS) Architectures: "Drive-By-Data" System Integration Approach[END_REF] addressed the use of ARINC 664, TTEthernet and Time Sensitive Networking for deterministic (understood as bounded latency and low jitter) data delivery and briefly compared them, without really choosing a winning technology at the end of the comparison. This paper came from a railway context, and the networking technology was discussed as a building block of a Train Control/Management System (TCMS). ARINC 664, TTEthernet and Time Sensitive Networking were compared on Application Level Properties 1 and 2. The criteria for the comparison included the ones we use in the coming comparison, with the exception of one, i.e. whether the synchronization capabilities of TTEthernet and Time Sensitive Networking are available at ISO L2 or not. We discuss it in the next chapter.
Aircraft Industry
[START_REF] Zhao | Comparison of time sensitive networking (tsn) and ttethernet[END_REF] proposed a comparison between TTEthernet and TSN. The comparison was very thorough. Several properties, similar to ours, were used for the comparison, i.e. synchronization, bandwidth allocation, traffic shaping and traffic scheduling, and redundancy. The authors computed worst case delays in specific TSN configurations based on network calculus. We redo the qualitative protocol comparison with our specific performance requirements and our use case, and add a more detailed redundancy analysis.
Helicopter Industry
Most recently, [START_REF] Mauclair | Do we really need TSN in Next-Generation Helicopters ? Insights from a Case-Study[END_REF] proposed an evaluation of the suitability of TSN for a helicopter network. The performance requirements were slightly different: while we consider deadline and jitter requirements, the helicopter network traffic is only subject to deadline requirements. Therefore, based on a coarse-grained analysis using the RTAW Pegase commercial tool [START_REF]RealTime-at-Work.
RTaW-Pegase: Modeling, Simulation and automated Configuration of real-time communication architectures[END_REF], they declared that TSN might not be the number one solution for a next-generation helicopter network.
Conclusion
The previous sections have presented a set of papers discussing the interest of certain technologies for next-generation on-board networks in different fields of application (e.g. satellites, launchers, cars, etc.). Several papers already compared some of our preselected technologies together, but the comparison was either too light or not compatible with our system model. For instance, the definition of jitter presented in Section 8.2 describes the end-to-end jitter as latency variability, as is the case in the common jitter definitions in the state of the art. However, our latency concept differs: we consider the duration between the reception instant and a reference instant for any frame (see Chapter 8.1) instead of the difference between reception and emission instants. Therefore, while we do not contradict any of the papers in the state of the art, our conclusions might still differ. As a consequence, we reuse the criteria or the approach of certain papers but apply them to the specific requirements of a satellite next-generation on-board network. In fact, the comparison of the following chapter has never been done before on the full set of technologies presented in Chapters 3 and 4, as well as on the full set of considered criteria, with the goal of discussing the suitability of a technology for a next-generation network supporting both Platform and Payload requirements.
Definition of Our Criteria
In order to select, based on a qualitative comparison, one or several candidates for a next-generation satellite network, it is necessary to define the extensive list of criteria on which the comparison will be based. In the literature, the Federal Aviation Administration proposed in a report a list of criteria that could be used for "assessing the design of existing and new data networks for their applicability to safety-critical aviation digital electronics systems" [2]. The criteria presented hereafter are inspired by this report and driven by the requirements of the satellite system. Therefore, in this section, we list and define our criteria for the comparison, regrouped per application level property. Section 6.2.1 introduces four criteria related to the Mixed Traffic Types Application Level Property. Section 6.2.2 introduces three criteria related to the Time Management Application Level Property. Section 6.2.3 introduces four criteria related to the Fault Tolerant Operation Application Level Property.
Criteria for Application Level Property 1
To evaluate the suitability of the preselected technologies, i.e. Ethernet, ARINC 664, TTEthernet, Time Sensitive Networking and SpaceFibre, presented in Chapters 3 and 4, with Application Level Property 1 -Mixed Traffic Types, four criteria are identified: High Data Rate, Bounded Latency, Very Low Jitter and Mixed Criticality.
Remark 11 One very important remark regarding this comparison and its associated criteria is that the criteria evaluate what it is possible to achieve using the protocol at L2 level, not what could be achieved by a clever implementation of the protocol in the whole system.
Criteria 1 (High Data Rate) Is the considered networking technology capable of handling 1Gbit/s or more? This criterion is used to eliminate potential candidates that would be too slow for next-generation requirements.
Note that neither SpaceWire nor 1553 satisfies this criterion.
Criteria 2 (Bounded Latency) Is the considered networking technology capable of providing bounded latency for one or several data exchanges? This criterion is used to identify potential candidates that would not be able to provide performance quality of service to data exchanges.
Criteria 3 (Very Low Jitter) Is the considered networking technology capable of ensuring less than 1µs reception jitter for one or several data exchanges? This criterion is used to identify potential candidates that would not be able to handle the low jitter command and control traffic of the platform network.
Criteria 4 (Mixed Criticality) Is the considered networking technology capable of handling in the same medium platform traffic (with bounded latency and very low jitter constraints) and payload traffic (with high throughput demand) without these two types of traffic affecting one another?
Criteria for Application Level Property 2
Three criteria are identified for Property 2 -Time Management: Time Synchronization at MAC Level, Time Management Algorithm Robustness and Interaction with higher layer capabilities.
Criteria 5 (Time Synchronization at MAC Level) Does the technology provide a synchronization protocol at MAC level? This criterion is used to specify that the network shall be in charge of time synchronization in the satellite system.
Criteria 6 (Time Management Algorithm Robustness) Does the Time Synchronization/Management Algorithm come with any robustness mechanism? This criterion anticipates the high criticality of the time synchronization function. If most network operations are driven by timing information known across the whole network, the mechanism in charge of distributing and maintaining such information shall be fail-safe.
Criteria 7 (Interaction with higher layer capabilities) Does the Time Synchronization/Management Algorithm come with a standardized way to interact with applications, in both L2-to-L7 and L7-to-L2 directions? This criterion is related to the first criterion for Application Level Property 2: if the network is in charge of time synchronization in the satellite system (and synchronization happens at L2 level), it is desirable to have a standardized way to share this timing information with applications running over the network. For instance, this timing information could be used to trigger actions at application level.
Remark 12 We take the hypothesis in this contribution that Time Management should be done at MAC level (ISO level 2). Indeed, a higher level management of time could also be considered in the next-generation network.
Criteria for Application Level Property 3
Before defining the four criteria associated with Application Level Property 3 -Fault Tolerant Operations, i.e. Error Detection Capabilities, Error Reporting Capabilities, Redundancy Capabilities and Fault Containment Capabilities, let us first remind the reader that we consider a faulty behaviour as either incorrect, lost, out of time constraints or out of traffic contract.
Criteria 8 (Error Detection Capabilities) Is the technology able to detect the following errors:
• Incorrect Message?
• Lost Message?
• Out of Time Constraints Message?
• Out of Traffic Contract Message?
This criterion serves to identify the capability of candidate technologies to detect incorrect messages (e.g. messages with incorrect checksums), lost messages, messages arriving too early or too late (e.g.
a flow emitting messages out of its allocated time slot) or messages out of traffic contract (e.g. a flow emitting messages that exceed its bandwidth reservation).
Criteria 9 (Error Reporting Capabilities) Is the technology able to report the errors that it has previously identified, in either a direct or an indirect way? The next-generation network is expected to be able to report the faults that it has detected, either in a direct manner, with for instance interruptions, or in an indirect manner, with for instance statistics counters periodically read by a fault management entity.
Criteria 10 (Redundancy Capabilities) Does the technology provide a redundancy mechanism? Redundancy was identified by the industrial partner as the way to satisfy the safety requirement, i.e. tolerance to the loss of a message. It is therefore expected that the next-generation technology provides such a capability.
Criteria 11 (Fault Containment Capabilities) Is the technology able to isolate/contain and even eliminate the errors it has detected? This behaviour is classical in industrial real-time networks. The goal is to prevent a fault from propagating into the network and affecting the nominal behaviour of other devices/applications using the network. For instance, a switch connected to a babbling-idiot device (i.e. a device constantly sending messages out of its traffic contract) should eliminate the faulty messages (or all the messages) coming from that device to prevent them from propagating into the network and overcrowding the links and the buffers.
Conclusion
In this chapter, we have first presented the different initiatives for the selection of a next-generation network from the state of the art, both in the space domain and outside the space domain. It is a hot topic growing in importance with the emergence of Time Sensitive Networking. Inspired by several papers, we have conceptualized several criteria for each of the application level properties defined in Chapter 8. These criteria could have been more detailed, as was done by the authors of [2]. However, they were deemed sufficient with respect to the satellite use case we were dealing with. These criteria will be used in the comparison of the next chapter in order to select one or several technologies for the next-generation satellite network.
Chapter 7
Qualitative Comparison of the Pre-Selected Technologies with respect to the Identified Criteria
This chapter is the first contribution. We apply the methodology so as to compare the preselected candidates and then detail the output resulting from this methodology. In fact, we propose a qualitative comparison based on a set of high level criteria representative of the requirements of future satellite systems. In the previous chapter, we have defined our set of criteria for each Application-Level Property (cf. Chapter 5). Therefore, we can now proceed with the comparison. In fact, the goal of this comparison is to assess the theoretical capabilities of the technologies, i.e. what is provided in their definition documents by their mechanisms, and not what would be achievable with a clever use of these mechanisms. This chapter starts by discussing the suitability of Ethernet, ARINC 664, TTEthernet, Time Sensitive Networking and SpaceFibre with respect to space requirements. Then, in a second section, suitable candidates are identified. Finally, a last section discusses third-party arguments for the selection of a next-generation satellite network among the suitable candidates.
Technologies Capabilities w.r.t.
space requirements
In the following paragraphs, the symbol ✔ signifies that the technology is compatible with the considered criterion, ✖ that it is not compatible and (✔) that it is only partially compatible.
Ethernet
Let us now discuss the compatibility of Ethernet with the three application-level properties of Section 5.3.
Application-Level Property 1 -Mixed Traffic Types
Criteria 1 -High Data Rate ✔ Ethernet is widespread both at home and in ISP core networks. Several physical media are available (e.g. twisted pair, optical, . . . ) and allow data rates of 1Gbit/s and beyond to be achieved, therefore satisfying this criterion.
Criteria 2 -Bounded Latency ✖ At MAC level, Ethernet only provides one quality of service mechanism: Static Priority, carried by the 802.1Q tag (cf. Fig. 3.9). This tag contains a 3 bit field called Priority, that can be used to define, at MAC level, a priority between frames on 8 (2^3) levels. This priority is used to decide, in case of medium access conflict, which Ethernet frame shall be emitted first (i.e. the one with the highest priority).
Property 2 (Static Priority and delay guarantees) Static priority does not provide any guarantee on delays as long as there are no traffic contracts in place [START_REF] Bouillard | Tight performance bounds in the worstcase analysis of feed-forward networks[END_REF].
Since Ethernet does not provide any traffic contract by itself (i.e. at MAC level), in order to provide guarantees, traffic contracts are left to be dealt with at application level. Therefore, at MAC level, Ethernet alone cannot provide any guarantees on latencies, hence it does not satisfy the Bounded Latency criterion.
Criteria 3 -Very Low Jitter ✖ Jitter is understood as latency variability. Let us roughly analyse the different parts that affect latency to see which of them will be taken into account in jitter. Any constant element in the latency computation can be ignored since it will, by definition, not vary between two frames. The only elements that need to be taken into account are the variable elements.

$$Lat_{f_l} = AppEmOffset + \sum_{link \in Path(f)} \Delta_{link} \qquad \text{where } \Delta_{link} = HPB_{link} + SPB_{link} + LPB_{link} + \tau_{emit} + \tau_{propag} \tag{7.1}$$

Where:
• AppEmOffset represents the duration between the reference date of f_l and its deposit in the queue of the emitting end-system, i.e. TSAP(f_l) − Ref(f_l).
• HPB_link, SPB_link and LPB_link represent the delays induced per hop, in the path of f_l, by higher, same and lower priority traffic.
• τ_emit and τ_propag represent the durations of emission and propagation of f_l on a link.
Hereafter, we focus on the variability of the delay at emission only to explain why Ethernet is not able to provide very low reception jitter, but this issue may appear on every hop within an Ethernet network, not just on the first hop. With several frames of different priorities competing for access to the medium, some additional and variable delay can occur.
Example 9 (Jitter issue with Ethernet) Let us illustrate the variability with an example represented in Fig. 7.1. A frame f_3 of medium priority can be delayed by a frame g_1 of lower priority finishing being emitted, for a duration LPB_link, then by one or several frames (h_1, h_2) of higher priority also waiting to be emitted, for a duration HPB_link, and finally by frames of same priority (f_1, f_2) that arrived before f_3, for a duration SPB_link. Depending on the periodicity of all these frames, the waiting delay for the emission of f_3 can be variable. Let us now assume that this frame has low jitter requirements. In the worst case, the lower priority blocking alone adds up to an additional 12,304µs delay (see Table 7.1), way above the 1 microsecond requirement of a low jitter flow.
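As a quick cross-check of the figures used in this example, the emission durations of Table 7.1 and the worst-case lower-priority blocking can be recomputed with a few lines of Python (a minimal sketch; the frame and overhead sizes are those of the table, everything else is illustrative):

```python
# Worst-case lower-priority blocking on one Ethernet link: the time needed to
# finish emitting one maximum-size frame that is already on the wire.

PREAMBLE_SFD_IFG = 20   # bytes of per-frame overhead (preamble + SFD + IFG)
MAX_FRAME = 1518        # bytes, maximum-size Ethernet frame
MIN_FRAME = 64          # bytes, minimum-size Ethernet frame

def emission_time_us(frame_bytes: int, rate_bps: float) -> float:
    """Emission duration of one frame (frame + overhead) in microseconds."""
    return (frame_bytes + PREAMBLE_SFD_IFG) * 8 / rate_bps * 1e6

for rate_bps, label in [(100e6, "100Mbit/s"), (1e9, "1Gbit/s")]:
    print(label,
          f"max-size: {emission_time_us(MAX_FRAME, rate_bps):7.3f}us,",
          f"min-size: {emission_time_us(MIN_FRAME, rate_bps):5.3f}us")

# At 1Gbit/s, the lower-priority blocking alone can thus reach ~12.304us,
# i.e. more than twelve times the 1us jitter budget of platform traffic.
```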
Therefore, only one priority blocking at emission, happening for one message and not for the next one of the same flow, is enough to make this flow miss its jitter requirement. And this blocking (plus the other ones) could happen on all hops! Therefore, Ethernet does not appear to be suitable on the Very Low Jitter criterion.
Figure 7.1: Illustration of lower (LPB_link), higher (HPB_link) and same (SPB_link) priority blocking delaying the emission of the medium-priority frame f_3.
Remark 13 One might object that the low jitter flow should simply be assigned the highest priority. However, doing so is not always a good idea either. Indeed, if there is enough low jitter traffic in the highest priority queue, it could prevent the lower priority traffic from being emitted (i.e. starvation) and hence prevent it from satisfying the bounded latency requirements.

Table 7.1: Usual frame emission delay at 100Mbit/s and 1Gbit/s

                                      100Mbit/s    1Gbit/s
Max. size frame (1518 + 20 bytes)     123,04µs     12,304µs
Min. size frame (64 + 20 bytes)       6,72µs       0,672µs

Criteria 4 -Mixed Criticality ✖ Regarding Mixed Criticality, Ethernet proposes, with static priority, a way to differentiate the importance of certain flows from others on eight levels. In order for platform and payload traffic to coexist, some priorities could be given to platform flows and the rest to payload flows. However, as already explained above, these priorities shall be cleverly allocated so as to avoid the medium being monopolized by the highest priority level. For instance, since payload traffic requires a lot of bandwidth, it might not be a good idea to give it too high a priority, otherwise the platform traffic could suffer from starvation. Nevertheless, as explained in the Very Low Jitter paragraph, it would only take one low priority payload frame to disturb the transmission delay of a higher priority low jitter frame. Therefore, Ethernet does not satisfy the Mixed Criticality criterion.
To conclude on Application-Level Property 1, Ethernet is not deemed suitable.
Application-Level Property 2 -Time Management
Criteria 5 -Time Synchronization at MAC level ✖ At ISO level 2, Ethernet alone does not provide any mechanism for time management. However, the use of higher level protocols over Ethernet for time distribution and/or time synchronization is very common. These protocols often require lower layer support, at either MAC or PHY level. Hardware supporting these protocols shall be able to timestamp frames in emission and reception and retrieve this timing information at higher layers for these protocols to use. The most mainstream synchronization protocol over Ethernet (but not at level 2) is PTP -Precision Time Protocol [START_REF]IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems[END_REF]. However, this is out of scope of this study. Therefore, Ethernet does not satisfy this criterion.
Criteria 6 -Time Management Algorithm Robustness ✖ There is no Time Management Algorithm in Ethernet, therefore discussing its robustness is not possible. Hence Ethernet does not satisfy this second criterion.
Criteria 7 -Interaction with Higher Layer ✖ There is no Time Management Algorithm in Ethernet, therefore discussing its interactions with higher layers is not possible. Hence Ethernet does not satisfy this third criterion.
Based on the suitability with the above criteria, we conclude that Ethernet does not satisfy Application-Level Property 2.
Application-Level Property 3 -Fault Tolerant Operations
Criteria 8 -Error Detection ✖ Regarding error detection, Ethernet only offers one tool: the Ethernet frame CRC, i.e. the FCS -Frame Check Sequence. This 32 bit field allows checking the integrity of the content of the Ethernet frame. A CRC error usually leads to the erroneous frame being dropped, which coincides with the fault containment criterion. This CRC will help detect and prevent the incorrect faulty behaviour. There is no way of detecting the other types of faults (i.e. lost message, out of time constraints and out of traffic contract).
Criteria 9 -Error Reporting ✔ In addition to the Ethernet frame CRC, Ethernet devices usually hold counters, called MIB -Management Information Base -counters, that describe the behaviour of the device through the number of received frames, number of emitted frames, number of CRC errors, etc. These MIBs are, for the most part, standardized by IEEE (e.g. the MIB sections in IEEE 802.3 [START_REF][END_REF] and IEEE 802.1Q [START_REF]IEEE Standard for Local and Metropolitan Area Networks-Bridges and Bridged Networks[END_REF]) or IETF (e.g. RFC 3635). The information included in these MIBs, updated in every device, could be gathered by a higher layer entity (with the help of the SNMP protocol for instance) and serve as an error detection mechanism. MIBs will help higher layer fault management entities detect the incorrect, lost and out of traffic contract faulty behaviours. Although Ethernet has some error detection mechanisms, there is no real means to prevent faults, in particular the lost faulty behaviour (e.g. single points of failure). Moreover, there are no mechanisms available to detect and/or prevent the out of time constraints faulty behaviour.
Criteria 10 -Redundancy ✖ In Ethernet, at ISO level 2, there are absolutely no mechanisms for redundancy. However, Ethernet is widespread and hence is used as the MAC layer for several higher layer protocols like, for instance, UDP/IP. Although Ethernet does not provide any redundancy or further error handling mechanisms, such mechanisms could be provided by higher level protocols. Nevertheless, this option is out of scope of this comparison.
Criteria 11 -Fault Containment ✖ Regarding fault containment capabilities, Ethernet provides weak traffic segregation through Static Priority. In fact, if a device sends too many messages (i.e. out of traffic contract faulty behaviour, e.g. babbling idiot), it will definitely affect the available bandwidth of the other flows and might lead to buffer overflows. Moreover, this faulty behaviour will propagate downstream.
According to the capabilities introduced above, Ethernet technology is not effective enough w.r.t. Application-Level Property 3.
ARINC 664
Let us now conduct the same discussion for ARINC 664.
Application-Level Property 1 -Mixed Traffic Types
Criteria 1 -High Data Rate ✔ ARINC 664 was first implemented at 100Mbit/s. It was later improved to run at 1Gbit/s. Therefore, ARINC 664 is compatible with the first criterion of Property 1.
Criteria 2 -Bounded Latency ✔ Ethernet did not provide any guarantees on latencies by itself. Nevertheless, ARINC 664 extends Ethernet with bounded latency capabilities. In fact, in addition to Static Priority (reduced to only two levels and only present in switches), the concept of VL -Virtual Link -is introduced in ARINC 664. It is indeed a reserved bandwidth for a specific traffic on a static route.
This traffic contract is characterized by two parameters: the maximum frame size and the so-called BAG -Bandwidth Allocation Gap -i.e. the minimum time between the emission of two frames of the same VL (a sketch of this per-VL contract check is given at the end of this section). By applying Property 2, there can be guarantees on latencies in ARINC 664 since it uses Static Priority and has traffic contracts (i.e. VLs). In fact, latencies can be determined, with more or less pessimism, through, for instance, network calculus. Typical latencies are in the range of 1 to 10 milliseconds [START_REF] Boyer | Experimental assessment of timing verification techniques for AFDX[END_REF]. Therefore, ARINC 664 satisfies the Bounded Latency criterion.
Criteria 3 -Very Low Jitter ✖ While Ethernet relied on static priority for medium access in the whole network, ARINC 664 provides a scheduler to arbitrate between VLs competing to access the medium in any emitting end-station. This scheduler can introduce emission jitter, as represented in Fig. 7.2. This jitter is bounded and its maximum value is known as the maximal admissible jitter in the standard [3]. This upper bound of the jitter is 500µs, but the standard indicates that a typical value for this jitter is 40µs. Virtual links were designed for guaranteed latencies and controlled jitter. However, they were not designed to provide very low jitter (1µs, i.e. one or two orders of magnitude lower than the bounds mentioned in the standard). Therefore, ARINC 664 does not satisfy the Very Low Jitter criterion.
Criteria 4 -Mixed Criticality ✖ While the above paragraphs have discussed the handling of platform traffic with deadline and jitter constraints, nothing has been said regarding the capability to also handle high throughput traffic. In fact, in ARINC 664 the payload traffic (e.g. a video traffic coming from the navigation camera) could either be fitted in a specific VL or be standard Ethernet traffic [START_REF] Hotescu | Vers la convergence de réseaux dans l'avionique[END_REF]. Nevertheless, fitting it in one or several VLs could still affect the nominal behaviour of low jitter platform traffic, as explained in the previous criterion. Therefore, as is, ARINC 664 is not compatible with the Mixed Criticality criterion.
Figure 7.2: Emission jitter of successive frames f_l, f_l+1, f_l+2 of a VL: each frame is emitted within its BAG, with a jitter comprised between 0 and the maximal admissible jitter.
So far, ARINC 664 does not seem suitable with respect to Application-Level Property 1.
Application-Level Property 2 -Time Management
Regarding synchronization, ARINC 664 is identical to Ethernet and our conclusion is unchanged. ARINC 664 is not suitable with respect to Application-Level Property 2.
Criteria 5 -Time Synchronization at MAC level ✖ There is no Time Synchronization mechanism defined in the ARINC 664 standard.
Criteria 6 -Time Management Algorithm Robustness ✖ There is no Time Synchronization mechanism defined in the ARINC 664 standard. However, if such an algorithm existed, the synchronization traffic could be given a specific VL. It would hence be provided with a guaranteed reserved bandwidth, and ARINC 664 fault tolerant operation mechanisms would prevent the synchronization traffic from impacting user traffic latencies and vice versa. Nevertheless, ARINC 664 does not satisfy the Time Management Algorithm Robustness criterion.
Criteria 7 -Interaction with Higher Layers ✖ There is no Time Synchronization mechanism defined in the ARINC 664 standard, therefore we do not discuss this criterion.
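To make the VL traffic contract concrete, here is the minimal sketch announced above of a per-VL check (values are illustrative and the logic is simplified: real AFDX policers also account for a jitter tolerance and are implemented in hardware):

```python
from dataclasses import dataclass

@dataclass
class VirtualLink:
    bag_us: float                 # Bandwidth Allocation Gap: min inter-frame time
    max_frame: int                # maximum frame size (bytes) of the contract
    last_frame_us: float = float("-inf")

def check_frame(vl: VirtualLink, arrival_us: float, size: int) -> bool:
    """Return True if the frame respects the VL contract, False if it is dropped."""
    if size > vl.max_frame:
        return False              # out of contract: frame too large
    if arrival_us - vl.last_frame_us < vl.bag_us:
        return False              # out of contract: BAG violated
    vl.last_frame_us = arrival_us
    return True

vl = VirtualLink(bag_us=2000.0, max_frame=1518)   # 2ms BAG, illustrative
print(check_frame(vl, 0.0, 1000))      # True
print(check_frame(vl, 500.0, 1000))    # False: second frame inside the BAG
print(check_frame(vl, 2500.0, 1000))   # True
```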
Application-Level Property 3 -Fault Tolerant Operations
ARINC 664 offers another major improvement over Ethernet: it enhances Ethernet with fault tolerance and traffic policing capabilities.
Criteria 8 -Error Detection ✖ In addition to the CRC, ARINC 664 switches have a traffic policing feature. It allows the switch to verify that every VL respects its traffic contract. If not, frames of flows that exceed their contract are dropped. This will help prevent the out of traffic contract faulty behaviour, but will also serve fault containment purposes. In fact, if a device is emitting more than it should, or emitting with incorrect addressing (wrong destination address), its traffic will immediately be eliminated at the next hop and will not impact the nominal behaviour of other VLs. Although traffic policing offers a good handling of the out of traffic contract faulty behaviour, it neither detects nor prevents the out of time constraints faulty behaviour. Therefore, ARINC 664 is not fully compliant with this first criterion.
Criteria 9 -Error Reporting ✔ The new mechanisms (i.e. switch traffic policing and redundancy) introduced in ARINC 664 are also represented in the MIB counters. With respect to Error Reporting, ARINC 664 is deemed suitable.
Criteria 10 -Redundancy ✔ ARINC 664 uses an end-to-end redundancy protocol at level 2. This redundancy protocol will help prevent the lost faulty behaviour. It can be seamless, meaning that the application level is not aware that a redundancy protocol is running at level 2.
Example 10 (AFDX Redundancy Example) In the emitting device, a frame is associated with a sequence number and then duplicated: one duplicate is sent on the upper path and the other on the lower path. When a frame reaches the input port of a switch, it is checked by the Integrity Checking function, which verifies the integrity of the frame (CRC, length, etc.); the frame is then passed to the switching core. Finally, when frames reach the receiving devices, they are first checked by the Integrity Checking function before being handled by the Redundancy Management function. When the first duplicate is received, if its sequence number is correct (in sequence with the previous one received), it is passed to the application. When the second duplicate is received, its sequence number is checked and the frame is dropped since it has already been passed to the application. In the case where one of the duplicates is not received, the other is still passed to the application and the message is not lost. After a time, the missing frame will be marked in the MIB counters and reported to higher layer fault management.
As a conclusion, ARINC 664 is suitable with respect to this criterion (a sketch of this "first valid duplicate wins" logic is given below).
Criteria 11 -Fault Containment ✖ Regarding fault containment, VLs and traffic policing offer a traffic segregation of all the different flows. In addition, the Integrity Checking and Redundancy Management functions will also prevent the propagation of faults into the network (e.g. elimination of unwanted duplicates, elimination of messages with wrong sequence numbers, elimination of faulty frames, etc.). Any fault occurring in one VL will not affect the nominal behaviour of the other VLs.
According to the previous mechanisms, although ARINC 664 improves Ethernet with fault tolerance capabilities, it is still deemed unsuitable w.r.t. Application-Level Property 3.
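A minimal sketch of the receive-side duplicate elimination described in Example 10 (the sequence-number width and wrap-around are illustrative; the ARINC 664 standard specifies the exact arithmetic and acceptance window):

```python
class RedundancyManager:
    """Per-VL 'first valid duplicate wins' elimination based on sequence numbers."""

    def __init__(self):
        self.expected_sn = 0                    # next sequence number to accept

    def receive(self, sn: int, payload: bytes) -> bytes | None:
        """Return the payload to deliver, or None if the frame is discarded."""
        if sn == self.expected_sn:
            self.expected_sn = (sn + 1) % 256   # 8-bit wrap-around, illustrative
            return payload                      # first duplicate: deliver
        return None                             # second (or stale) duplicate: drop

rm = RedundancyManager()
print(rm.receive(0, b"cmd-A"))   # b'cmd-A': the path A duplicate arrived first
print(rm.receive(0, b"cmd-A"))   # None: path B duplicate, already delivered
print(rm.receive(1, b"cmd-B"))   # b'cmd-B': delivered even if path A lost it
```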
TTEthernet
Let us now shift our focus to TTEthernet.
Application-Level Property 1 -Mixed Traffic Types
Criteria 1 -High Data Rate ✔ TTEthernet proposes data rates ranging from 100Mbit/s to 1+Gbit/s, therefore satisfying this first criterion.
Criteria 2 -Bounded Latency ✔ TTEthernet proposes the same types of traffic as ARINC 664, i.e. BE -Best Effort (standard Ethernet) and RC -Rate Constrained (ARINC 664) traffic, hence it benefits from all the properties that we described for ARINC 664, in particular the bounded latency capability for the RC traffic. Moreover, TTEthernet extends ARINC 664 even further. In addition to BE and RC traffic, TTEthernet introduces a third traffic type entitled TT -Time Triggered. Time Triggered traffic is sent in a time triggered manner. Each TTEthernet device (i.e. end-stations and switches) has a transmit schedule per flow. This schedule allows flows to achieve constant communication latency. Therefore, TTEthernet is compliant with the Bounded Latency criterion.
Criteria 3 -Very Low Jitter ✔ TT traffic in TTEthernet solves the jitter issue that Ethernet and ARINC 664 faced. In fact, since TT traffic achieves constant communication latency and the schedule (especially in switches) ensures there is no blocking from other frames, equation (7.1) leads to the conclusion that the low jitter on reception problem reduces to a low jitter on emission problem (i.e. AppEmOffset, which is variable). However, thanks to our definition of latency (i.e. defined between a reference date and the reception date), the per-flow schedule solves the problem. It does not matter how variable the deposit date of a frame is, as long as it happens before the frame's scheduled emission. We illustrate how the time-triggered schedule cancels application emission jitter in Fig. 7.4 (a small numeric sketch is also given at the end of this section). In fact, any variability in the deposit date will be compensated by the constant emission date, and then the constant communication latency will ensure a very low jitter for that frame. The only elements taking part in the jitter in this situation are the clock precision and the TTEthernet constraint stating that, in any schedule, there shall be space to fit a PCF -Protocol Control Frame. The size of this PCF is 84 bytes (including SFD and IFG), which means that it can lead to a jitter of at most 672 nanoseconds (cf. Table 7.1). This jitter is compatible with the 1 microsecond requirement of platform traffic, hence TTEthernet satisfies the Very Low Jitter criterion.
Criteria 4 -Mixed Criticality ✔ In order to mix platform and payload traffic in TTEthernet, two solutions exist. In both solutions, the platform traffic is put into TT traffic. Then, the payload traffic can either be fitted into RC (VL) traffic or TT traffic. In both cases, the use of the time-triggered schedule isolates the platform traffic from the payload traffic, so that both can meet their quality of service requirements; TTEthernet therefore satisfies this criterion.
To conclude, TTEthernet is deemed suitable with respect to Application-Level Property 1.
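Before moving on to time management, here is the small numeric sketch of the compensation effect illustrated in Fig. 7.4 (values are purely illustrative): however variable the deposit date of a frame, its reception date only depends on the fixed per-flow schedule.

```python
import math

# Time-triggered emission absorbs application deposit jitter: a frame deposited
# anywhere before its slot leaves exactly at the slot date.

SLOT_PERIOD_US = 1000.0      # per-flow emission slot every 1ms (illustrative)
NETWORK_LATENCY_US = 50.0    # constant network traversal time of TT traffic

def reception_date(deposit_us: float) -> float:
    """Reception date of a frame deposited at deposit_us, before its slot."""
    slot_us = math.ceil(deposit_us / SLOT_PERIOD_US) * SLOT_PERIOD_US
    return slot_us + NETWORK_LATENCY_US

# Three successive frames with very different deposit offsets...
for deposit_us in (10.0, 1450.0, 2999.0):
    print(f"deposit {deposit_us:7.1f}us -> reception {reception_date(deposit_us):7.1f}us")
# ...are all received exactly NETWORK_LATENCY_US after their slot: the
# reception jitter is zero, up to clock precision and the PCF reservation.
```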
Application-Level Property 2 -Time Management
Criteria 5 -Time Synchronization at MAC level ✔ TTEthernet proposes a protocol to establish and maintain a global time throughout the network. It is realized by the synchronization of local clocks within the devices (i.e. end-points and switches). It works in a similar manner to IEEE 1588 (also known as PTP). In fact, time master devices (entitled synchronization masters) distribute time with broadcast messages (entitled PCF frames in the TTEthernet standard). Devices in the system gather these PCF frames and use their information to correct their local clock. In order for PCFs to be correctly received, every slot (for the TT traffic) is configured so that both the frame of this slot and a PCF frame can be emitted. Hence, there is a resource reservation of 84 bytes (frame size + SFD + IFG) per slot for time synchronization. Therefore, TTEthernet satisfies the Time Synchronization at MAC level criterion.
Criteria 6 -Time Management Algorithm Robustness ✔ In addition, TTEthernet introduces a new mechanism targeted at fault tolerance, i.e. clock redundancy. Clock redundancy is illustrated in Fig. 7.5. In fact, in TTEthernet, there can be several synchronization masters (instead of one in PTP). These masters send their time through PCF frames in the network. Specific devices entitled compression masters gather the PCFs coming from the masters and vote for the correct timing information ("Step 1" in Fig. 7.5). This correct information is then distributed through a new PCF towards the network devices for synchronization. The PCF is also sent back to the synchronization masters for fault detection ("Step 2" in Fig. 7.5). This makes it possible to tolerate the loss or the incorrect behaviour of one or several synchronization masters. This mechanism is really important since synchronization is critical in TTEthernet. Without synchronization, the proper temporal properties obtained for TT flows (in time slots), i.e. very low jitter and constant network latency, cannot be guaranteed anymore. With the clock redundancy mechanism, TTEthernet satisfies this criterion.
Criteria 7 -Interaction with higher layers ✖ There is no interface for interaction with higher layers specified in the TTEthernet standard. Hence, TTEthernet does not match this last criterion regarding Property 2.
To conclude, although there are no interfaces specified in the standard, some interfaces exist in the implementations. Since TTEthernet has a specified synchronization algorithm and implementations propose such interaction capabilities, we ignore this last criterion for our conclusion. Therefore, we consider TTEthernet relevant w.r.t. Application-Level Property 2.
Application-Level Property 3 -Fault Tolerant Operations
TTEthernet takes over the redundancy management and the integrity checking of ARINC 664 for TT and RC traffic. TTEthernet switches can also implement a central bus guardian function. It basically provides a policing mechanism to check that traffic contracts (of RC traffic) and schedules (of TT traffic) are respected. This function serves the identification, elimination and containment of the out of traffic contract and out of time constraints faulty behaviours. In fact, it can for instance prevent the propagation of faults coming from a babbling idiot device into the network. Therefore, TTEthernet is able to detect, report and contain all types of faults identified in the criteria for Application-Level Property 3.
We conclude that TTEthernet is suitable w.r.t. Application-Level Property 3.
Time Sensitive Networking
Let us introduce Time Sensitive Networking capabilities.
Application-Level Property 1 -Mixed Traffic Types
Criteria 1 -High Data Rate ✔ Time Sensitive Networking, still under development by the TSN Task Group, is the successor of AVB. The available implementations of TSN range from 100Mbit/s to 10Gbit/s, and higher data rates will probably become available as TSN gets more mature. Therefore, TSN meets the High Data Rate criterion.
Criteria 2, 3 -Bounded Latency and Very Low Jitter ✔ Time Sensitive Networking extends Ethernet with traffic shaping capabilities and medium access enhancements. First of all, TSN reuses the static priority mechanism of Ethernet. However, TSN introduces a new mechanism entitled Frame Preemption, introduced in [59].
Frame Preemption helps, depending on configuration, to solve the Static Priority jitter introduced by lower priority blocking. In fact, 802.1Qbu and 802.3br allow purposely tagged frames (express) to suspend the transmission of other frames (preemptable) for their own transmission on a point-to-point link, defining a frame fragmentation similar to IP fragmentation. Using Frame Preemption, the lower priority blocking jitter is reduced to the time necessary to transmit, in the worst case, a 143 bytes long frame (see [START_REF] Thiele | Formal worst-case performance analysis of time-sensitive ethernet with frame preemption[END_REF]). At 1Gbit/s, this leads to a 1.144µs jitter, which is nearly compatible with the Very Low Jitter criterion. In addition, TSN proposes the Time Aware Shaper mechanism, which allows defining time windows in which frames can be emitted, in a manner almost similar to TTEthernet. However, contrary to TTEthernet, the time schedule is not applied to flows (named Streams in TSN) but to emission queues. This means that the good temporal properties obtained in TTEthernet are not immediately achievable here. In particular, in the network, there may be more than 8 streams, meaning that several streams would have to share the same queue, hence risking additional communication delay from non-exclusive resources. One solution would be to reproduce TTEthernet with the TSN Time Aware Shaper mechanism. Examples of such solutions are presented in [START_REF] Silviu | Smt-based task-and network-level static schedule generation for time-triggered networked systems[END_REF], [START_REF] Silviu | Scheduling real-time communication in ieee 802.1qbv time sensitive networks[END_REF] and [START_REF] Silviu | Combined task-and network-level scheduling for distributed time-triggered systems[END_REF]. It helps achieve a per-flow schedule in a TSN network. We will propose another solution in our second contribution. In conclusion, Time Sensitive Networking satisfies the Bounded Latency and Very Low Jitter criteria.
Criteria 4 -Mixed Criticality ✔ After the discussions on bounded latency and very low jitter, let us focus on mixed criticality. As for ARINC 664 and TTEthernet, TSN proposes a way to share bandwidth between flows and prevent any starvation issue related to the use of Static Priority. In fact, TSN inherits the CBS mechanism introduced in AVB. CBS, or Credit Based Shaper, defines a rule to allocate a bandwidth to a queue (e.g. 4Mbit/s for queue 1) based on a credit that evolves when frames are enqueued or dequeued. With this credit, a maximum bandwidth can be defined for a set of flows sharing a queue (a sketch of the credit evolution is given at the end of this section). In addition, the TSN Enhanced Transmission Selection mechanism adds further opportunities for sharing the available bandwidth between the queues of an output port. The algorithm for ETS can for instance be Round Robin or Weighted Round Robin. Using the Time Aware Shaper for platform traffic and CBS or ETS for payload traffic should be a good way to have both types of traffic coexisting and meeting their quality of service requirements, therefore satisfying this criterion.
We hence declare TSN suitable w.r.t. Application-Level Property 1.
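A toy sketch of the Credit Based Shaper rule mentioned above (simplified from 802.1Qav: the credit grows at idleSlope while frames wait, decreases at sendSlope during transmission, and the queue may only start a transmission when its credit is non-negative; all numeric values are illustrative):

```python
# Simplified Credit Based Shaper bookkeeping for one queue.

LINK_RATE = 1e9                    # 1Gbit/s link
IDLE_SLOPE = 100e6                 # 100Mbit/s reserved for this queue
SEND_SLOPE = IDLE_SLOPE - LINK_RATE

credit = 0.0                       # in bits

def can_transmit() -> bool:
    return credit >= 0.0

def after_transmission(frame_bits: int) -> None:
    """Credit decreases at SEND_SLOPE for the duration of the transmission."""
    global credit
    credit += SEND_SLOPE * (frame_bits / LINK_RATE)

def after_waiting(seconds: float) -> None:
    """Credit replenishes at IDLE_SLOPE while the queue waits."""
    global credit
    credit += IDLE_SLOPE * seconds

print(can_transmit())                 # True: credit starts at 0
after_transmission(12304)             # send one max-size frame
print(can_transmit(), round(credit))  # False: credit is now negative
after_waiting(-credit / IDLE_SLOPE)   # wait until the credit is back to 0
print(can_transmit())                 # True: ~100Mbit/s average is enforced
```

Note that the real shaper additionally resets a positive credit to zero when the queue empties; this detail is omitted from the sketch.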
Application-Level Property 2 -Time Management
Criteria 5 -Time Synchronization at MAC level (✔) Inherited from Ethernet, Time Sensitive Networking supports several synchronization protocols. In particular, it supports IEEE 1588 (PTP) but also IEEE 802.1AS [START_REF]Timing and Synchronization for Time-Sensitive Applications in Bridged Local Area Networks[END_REF], also called gPTP. However, PTP and gPTP are not MAC level only protocols. Therefore TSN does not entirely comply with this first criterion. PTP (i.e. IEEE 1588) is quite a simple protocol: one time master device (entitled grandmaster) distributes time with broadcast messages. Devices in the system measure propagation delays with their peers or with the time master and use this information to correct the time received from the master. If several potential grandmasters exist in the network, IEEE 1588 introduces the Best Master Clock Algorithm -BMCA, which chooses which device among the potential masters is going to be grandmaster. This algorithm allows for fast recovery since the BMCA is run permanently and is triggered if the current grandmaster is not functioning anymore, in which case a new master can be automatically elected. gPTP is, for the most part, fairly similar or identical to PTP. It is in fact a profile of IEEE 1588 for use in TSN networks. We will not explain how the synchronization is established. Instead, let us focus on the two main novelties of gPTP: the possibility for a network to have several synchronization masters, and the specification of interfaces and primitives for "time sensitive applications". In gPTP, like in PTP and in TTEthernet, the synchronization traffic travels in the same network as user data. However, it does not require overly stringent resource reservations: [START_REF] Lim | Ieee 802.1as time synchronization in a switched ethernet based in-car network[END_REF] shows that the synchronization process based on 802.1AS is not affected by high network load, even without a dedicated queue or a very high priority.
Criteria 6 -Time Management Algorithm Robustness (✔) Like TTEthernet synchronization, the TSN synchronization protocol offers the possibility to have several masters in the network for availability purposes. However, it does not work quite like TTEthernet. In fact, the 802.1AS standard does not provide any consolidation strategy for several grandmasters running concurrently (realized with compression masters in TTEthernet); it is left to be developed at application level. gPTP does not provide any fault tolerance mechanism apart from the BMCA (cf. §7.2.4.3 of [START_REF]Timing and Synchronization for Time-Sensitive Applications in Bridged Local Area Networks[END_REF]), which covers a grandmaster not working. As stated in the standard: "Techniques for identifying other types of failures, and the appropriate correction necessary, are not specified in this standard. However, if other techniques or standards are used for detection and correction of these (...) failures, this standard provides the means to recover from these errors". It will be the responsibility of the system engineers designing the higher layers to provide mechanisms for fault detection, isolation and recovery in the synchronization system. Therefore, again, TSN does not entirely comply with this second criterion.
Criteria 7 -Higher Layer Interactions ✔ gPTP also introduces application interfaces (cf. Clause 9 of [START_REF]Timing and Synchronization for Time-Sensitive Applications in Bridged Local Area Networks[END_REF]) for use with time sensitive applications. These interfaces are "model of behaviour and not application program interfaces".
Five interfaces are described in the standard, but it is stated that others can exist. Hopefully, this will help facilitate the porting of time sensitive applications to different TSN products from potentially different manufacturers. Hence, TSN is deemed suitable with respect to this third criterion.
To conclude, we consider TSN relevant w.r.t. Application-Level Property 2.
Application-Level Property 3 -Fault Tolerant Operations
Criteria 8, 9 -Error Detection and Error Reporting ✔ Time Sensitive Networking offers a wide range of mechanisms for Fault Tolerant Operations. Since it extends Ethernet, all the elements introduced for Ethernet (CRC and MIBs) still hold for TSN. In terms of traffic policing, IEEE 802.1Qci -Per Stream Filtering and Policing [START_REF]IEEE Standard for Local and metropolitan area networks-Bridges and Bridged Networks-Amendment 28: Per-Stream Filtering and Policing[END_REF] offers multiple mechanisms. One serves to detect any temporal error in the reception of frames; this is particularly useful when TSN is configured with time triggered traffic (a sketch of such a timing-window check is given at the end of this section). Another serves to ensure, per flow or per queue, the compliance of the traffic with the traffic contracts in place. Indeed, Per Stream Filtering and Policing will help detect and prevent the out of traffic contract and out of time constraints faulty behaviours. Therefore TSN complies with these first two criteria.
Criteria 10 -Redundancy ✔ Redundancy in TSN is fairly similar to that of ARINC 664. Frames are duplicated and then reassembled using a sequence number. In TSN, this sequence number is located in an optional Ethernet field (or tag). Several protocols are available for redundancy, such as R-TAG, PRP or HSR tags. TSN can deliver frames out of order, whereas ARINC 664 provides an in-order delivery guarantee. However, TSN improves on ARINC 664 redundancy by adding the opportunity to have more than two duplicates. It also offers the ability to specify the path of every duplicated flow, meaning that nominal and redundant flows do not have to travel on the same path using two different channels. In addition, in TSN it is possible to apply redundancy on only a fragment of the path of the flow instead of only end-to-end. As for ARINC 664 and TTEthernet, redundancy in TSN will help detect and prevent the incorrect and lost faulty behaviours. Therefore TSN complies with the Redundancy criterion.
Criteria 11 -Fault Containment ✔ TSN provides levels of fault containment similar to TTEthernet on the faulty behaviours we have previously identified: faulty frames will be dropped by CRC checking, the lost faulty behaviour will be prevented by redundancy, and out of traffic contract and out of time constraints traffic will be detected and removed by Per Stream Filtering and Policing, therefore, in all cases, preventing the propagation of a fault further into the network. However, there is a sort of challenge with fault containment in TSN. In fact, while TSN traffic shaping behaves like Ethernet, i.e. the segregation unit is a queue (through Static Priority), the segregation unit for traffic policing is not a queue but a flow, since the traffic policing functions are applied per stream (in Per Stream Filtering and Policing). This difference of granularity between traffic shaping and traffic policing might increase the complexity of the segregation analysis for fault containment purposes. In any case, TSN complies with this criterion.
In consideration of the above analysis, we declare TSN suitable w.r.t. Application-Level Property 3.
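A minimal sketch of the per-stream timing check announced under Criteria 8, 9 (the gate windows, cycle and stream identifiers are illustrative; 802.1Qci specifies stream filters, stream gates and flow meters in much more detail):

```python
# Per-stream temporal policing: a frame of a given stream is only accepted
# if it arrives inside one of the gate windows configured for that stream.

CYCLE_US = 1000.0
GATE_WINDOWS = {                      # stream_id -> (open_us, close_us) windows
    "platform_cmd": [(0.0, 100.0)],   # illustrative schedule within the cycle
    "payload_video": [(100.0, 900.0)],
}

def accept(stream_id: str, arrival_us: float) -> bool:
    """Drop frames arriving outside their stream's gate (out of time fault)."""
    phase = arrival_us % CYCLE_US
    return any(lo <= phase < hi for lo, hi in GATE_WINDOWS.get(stream_id, []))

print(accept("platform_cmd", 1050.0))   # True : 50us into the second cycle
print(accept("platform_cmd", 1500.0))   # False: outside its window, dropped
print(accept("payload_video", 1500.0))  # True
```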
SpaceFibre
Finally, let us discuss the compatibility of SpaceFibre with respect to the three previously introduced application-level properties.
Application-Level Property 1 -Mixed Traffic Types
Criteria 1 -High Data Rate ✔ SpaceFibre probably offers the highest data rate among all our technologies of interest. It can reach more than 40Gbit/s [START_REF]What is spacefibre ?[END_REF]. To do so, the SpaceFibre standard offers the possibility to serialize the communication over up to 16 so-called lanes in the same cable (either optical or twisted pairs). This serialization, detailed in [START_REF] Ferrer Florit | SpaceFibre multi-lane: SpaceFibre, long paper[END_REF], is similar to PCI Express serialization over multiple lanes [4].
Criteria 2 -Bounded Latency ✔ To be able to provide guaranteed latencies, a system must provide both traffic contracts and an arbitration policy (cf. Property 2). In a SpaceFibre network, the SpF Token-Bucket ensures such a traffic limitation. Nevertheless, to get accurate bounds on latencies, a good worst-case model of the SpF Scheduler must exist. Since SpF Scheduling combines a per-VC FIFO strategy, 16 levels of priority and a credit-based algorithm that looks similar to CBS and ATS, the analysis methods developed for TSN [START_REF] Maile | Network calculus results for tsn: An introduction[END_REF] may certainly be adapted to SpaceFibre.
Criteria 3 -Very Low Jitter ✔ The SpF Scheduler is very likely to achieve very low jitter (< 1µs). By configuring the time-slots so that only one VC is allowed to access the medium at any time, one might expect that no traffic from other VCs would interfere and induce unwanted jitter due to non-preemption (see [START_REF] Chaine | Comparative study of ethernet technologies for next-generation satellite on-board networks[END_REF] for an illustration of non-preemption jitter). However, this last assertion is not always true. In fact, the SpaceFibre standard does not define any guard band mechanism for the scheduler. This means that if a VC starts to emit a frame just before the end of its time-slot, that emission will end during the time-slot of the next VC, thereby delaying the next scheduled emission. The maximum induced jitter in this situation is the transmission duration of a frame (256 bytes). If the SpaceFibre network operates at 1Gbit/s, the induced jitter value is 2µs per hop, leading to the very low jitter constraint not being satisfied. Nevertheless, if the data rate increases, it will directly reduce the induced jitter value, and the jitter constraint is very likely to be satisfied even without guard bands.
Criteria 4 -Mixed Criticality ✔ SpaceFibre will be able to achieve mixed criticality by assigning different slots to platform and payload traffic. Traffic will coexist and should not have an impact on each other's quality of service. Therefore, it satisfies this criterion provided the data rate is high enough (for Very Low Jitter compliance).
To conclude, provided the data rate is high enough, SpaceFibre is deemed suitable with respect to Application-Level Property 1.
Application-Level Property 2 -Time Management
Criteria 5 -Time Synchronization at MAC level ✔ Regarding synchronization and time distribution, SpaceFibre offers the possibility to rely on SpaceWire time-codes to distribute time information across the network. To do so, the time-code packet is sent to all network devices with the help of broadcast data frames.
Criteria 6 -Time Management Algorithm Robustness ✖ There is no dedicated robustness mechanism specified for this time distribution method. However, the broadcast frames travel in a separate channel (the broadcast channel) from user data (the virtual channels). As explained in the next section, this provides space and time isolation between broadcast frames (including the ones bearing time-codes) and user data. In addition, broadcast messages also rely on the same fault tolerance mechanisms as data frames (see next section).
Criteria 7 -Higher Layer Interaction ✔ Finally, in order to support network-application synchronization, the SpaceFibre Service Access Points at L2 and L3 can forward a broadcast message indication to the upper OSI layers, which they can use to synchronize themselves with the network time.
Although the time distribution mechanism lacks robustness, SpaceFibre is deemed almost suitable with respect to Application-Level Property 2.
Application-Level Property 3 -Fault Tolerant Operations
Criteria 8 -Error Detection ✔ Regarding error detection, the SpaceFibre standard provides three elements. First, a CRC per frame, which allows detecting errors within a frame. Then, a sequence number, similar to the one of ARINC 664, allows detecting the loss of frames (missing sequence number) or errors in the emitter (several frames with the same sequence number). Finally, the VC Bandwidth Credit has a minimum and a maximum value. If the credit reaches either of these bounds, an alert is raised to signify that either the VC uses more bandwidth than reserved, or the VC does not get enough bandwidth to meet its traffic contract (a toy sketch of this credit monitoring is given below). The only type of error that SpaceFibre cannot detect is the out of time error, e.g. when a frame belonging to a VC that is not scheduled in the current time-slot is emitted.
Criteria 9 -Error Reporting ✔ All the errors detected by SpaceFibre devices can be reported in both synchronous and asynchronous ways. When an error is detected, the data link layer triggers an indication that is passed to the upper layer (in particular the application layer), giving a real-time warning of the error. In addition, when an error occurs, a dedicated MIB -Management Information Base -counter is updated and can be monitored later on by a fault management entity. This MIB is very similar to Ethernet MIBs.
Criteria 10 -Redundancy ✔ SpaceFibre does not offer any redundancy mechanism at Data Link Layer. There is no packet duplication that could travel on disjoint paths whatsoever. However, there is a possibility of having a warm redundancy on the physical medium on a point-to-point basis. In fact, instead of using the 16 (or fewer) lanes within a link to increase the link speed, a lane could be kept in hot standby. In the event that one of the used lanes becomes faulty, it would be swapped with the hot standby lane. This would provide a sort of redundancy mechanism at physical level. Although there is no redundancy, the data link layer in SpaceFibre works in connected mode, meaning that the reception of any frame is acknowledged. In case of erroneous reception, the faulty frame(s) can be retransmitted with the help of a retry mechanism. This behaviour is similar to acknowledgements and retries in the TCP protocol [START_REF]Transmission Control Protocol[END_REF]. However, this retry mechanism would require further studies, as it may affect the performances discussed in Property 1 (e.g. if a frame is retransmitted after the end of the time-slot associated with its VC).
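Before turning to fault containment, here is the toy sketch of the VC Bandwidth Credit monitoring announced under Criteria 8 (rates and bounds are illustrative; the SpaceFibre standard specifies the exact credit accounting):

```python
# VC bandwidth-credit monitor: the credit grows at the reserved rate and is
# consumed by actual emissions; reaching either bound raises an alert.

RESERVED_RATE = 100e6                          # bit/s reserved for this VC
CREDIT_MIN, CREDIT_MAX = -50_000.0, 50_000.0   # bits, illustrative bounds

credit = 0.0

def elapse(seconds: float) -> None:
    """The VC accumulates credit at its reserved rate."""
    global credit
    credit += RESERVED_RATE * seconds

def emit(frame_bits: int) -> None:
    """Emitting a frame consumes credit."""
    global credit
    credit -= frame_bits

def check() -> str:
    if credit <= CREDIT_MIN:
        return "ALERT: VC uses more bandwidth than reserved"
    if credit >= CREDIT_MAX:
        return "ALERT: VC does not get its reserved bandwidth"
    return "OK"

elapse(1e-4)
emit(2048)
print(check())        # OK: the emission matches the reservation
for _ in range(40):
    emit(2048)        # burst far above the reservation
print(check())        # ALERT: over-consuming VC
```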
Criteria 11 - Fault Containment ✔ Finally, SpaceFibre proposes solutions to contain the errors detected by a SpaceFibre device. When a frame is received with a wrong CRC or a wrong sequence number, that frame is deleted, therefore preventing it from spreading in the network. That frame can then benefit from the retransmission mechanism. In addition, the token-bucket-like mechanism ensures temporal and spatial isolation between VCs, i.e. an out-of-traffic-contract error in one VC, as for instance babbling idiot traffic, will not affect the performances of other VCs. However, unlike TSN, where out-of-traffic-contract frames are deleted at ingress, out-of-traffic-contract frames in SpaceFibre are not deleted, but reshaped in the next output port so that they fit the resource reservation of the VC. This can therefore induce an unwanted increase in the switching fabric workload, and also potentially lead to buffer overflows. Even with the lack of a proper frame replication mechanism, SpaceFibre is deemed suitable with respect to Application-Level Property 3.

Analysis

Now that the suitability of each technology with respect to Application-Level Properties 1, 2 and 3 has been properly discussed, we summarize the output of the previous sections in the tables below (cf. Tables 7.2, 7.3 and 7.4).

Summary of the Comparison

[Table: summary of the fault tolerance criteria per technology - Error Detection: ✖ ✔ ✔ ✔; Error Reporting: ✔ ✔ ✔ ✔ ✔; Redundancy: ✖ ✔ ✔ ✔ ✔; Fault Containment: ✖ ✖ ✔ ✔ ✔; Suitability: ✖ ✖ ✔ ✔ ✔]

Third-party Arguments for the Selection of an Upgrade Candidate

According to the previous analysis, it seems that three technologies would be good candidates for a future unified satellite on-board network: SpaceFibre, TTEthernet and Time Sensitive Networking. However, apart from Application-Level Properties 1, 2 and 3, several arguments shall also be taken into consideration when choosing between one candidate or another. First, satellite components have stringent hardware/software constraints, not in certification like in aerospace, but rather in radiation, temperature, SEU (Single Event Upset) tolerance, etc. This means that the satellite network manufacturer has to either buy end-points and switches designed for space, or buy IPs that would be instantiated into space-hardened components. On the one hand, TTEthernet (through TTTech) and SpaceFibre have already been, or are being, implemented in several space projects both in Europe and in the USA, and are standardized for space use by ESA in an ECSS (European Cooperation for Space Standardization) standard. It would hence be possible to obtain space-oriented TTEthernet and SpaceFibre components. On the other hand, TSN has, over the past years, gained increasing interest from the automotive and automation industries. The TSN devices available on the market would not completely fulfil the space requirements, especially in terms of radiation tolerance. It would however be possible to either buy IPs and instantiate them into space-hardened components, or buy entire COTS devices and run a radiation tolerance evaluation campaign. Then, the space community is hoping that the use of COTS components from a widespread technology, shared with other industry verticals, would help reduce the overall cost of design, purchase of devices and software development. One drawback of using TTEthernet or SpaceFibre instead of TSN is that each is produced and maintained by a very limited number of manufacturers, whereas TSN already has dozens of manufacturers working on it.
Nevertheless, the products currently advertised by TSN automotive manufacturers might not exactly fit the space needs in terms of performance or environment tolerance, and might require further work before being used in space systems, which in the end might lead to an increase in costs. However, the impact on non-recurring costs might be significant enough to make the use of TSN worthwhile. That is why the definition of a profile (like the TSN Automotive Profile, but for space) would be a very good starting point to give the space and aerospace domains an identity towards TSN component manufacturers. On the validation and certification side, there is a certain advantage in using TTEthernet or Time Sensitive Networking instead of SpaceFibre. In fact, both TTEthernet and Time Sensitive Networking are based on Ethernet, on which numerous research efforts on validation/certification have been led during the past 15 years. Furthermore, since these two technologies receive a lot of attention in multiple industrial sectors, they have also been getting more attention from researchers than SpaceFibre. There are now tools available to simulate, configure and validate TTEthernet networks, and Time Sensitive Networking tools are getting more mature every day. SpaceFibre has been modelled in the OMNeT++ simulator [START_REF]SpaceWire / SpaceFibre Network Model[END_REF] but further research will be required in order to validate the real-time behaviour offered by its medium access strategy.

Conclusion

To conclude on this first contribution, the qualitative comparison, based on the criteria that we identified in the previous chapter, led us to select three suitable candidates for a next-generation satellite network: TTEthernet, SpaceFibre and Time Sensitive Networking. This comparative study has been published in [START_REF] Chaine | Comparative study of ethernet technologies for next-generation satellite on-board networks[END_REF], where only Ethernet technologies are considered, and in [START_REF] Chaine | Comparative study of high throughput technologies for next-generation satellite on-board networks[END_REF], where all the above technologies are considered. Further studies are necessary to decide which of these technologies (if any) will be the next-generation satellite network technology. As the reader may have glimpsed in Section 7.2.2, each technology has its advantages and drawbacks, and the road is still long before the decision is made for future satellites. The most plausible situations that we envision are that either one technology is selected for all satellite missions, or rather several technologies are selected, each being adapted to a specific type of mission. For instance, Time Sensitive Networking could be used for satellite constellations in a New Space context, where the number of components needed for the entire constellation will be high and benefit from the TSN mass market, whereas TTEthernet or SpaceFibre could be used for more specific mono-satellite missions where space-mature (e.g. rad-hard) components are really needed. In the following part, since Time Sensitive Networking was totally new to the spacecraft industry, we further analyse its suitability by generating network configurations that satisfy the quality of service requirements of next-generation satellites.
Part III

Contribution: Computation of TSN Network Configurations for Next-Generation Satellite Networks

Chapter 8

Problem Statement 2

Among the three candidate technologies identified by our first contribution, this PhD focuses on Time Sensitive Networking for the following reason: while the two other candidates had already been studied, and had even started to be included in satellite network designs at the time of writing this manuscript, Time Sensitive Networking was completely unknown to the spacecraft industry. It had never been addressed before in the scope of a next-generation unified Platform & Payload network [START_REF] Chaine | TSN Support for Quality of Service in Space[END_REF]. Thus, our industrial partner was really willing to get to know the technology and unleash its full potential. Time Sensitive Networking (cf. Chapter 4) is composed of roughly 20 standards. Gaining expertise on this novel technology and mastering all its mechanisms so as to exploit them correctly takes a lot of time. In addition, all standards (i.e. all mechanisms) might not be necessary to satisfy the satellite requirements. Therefore, the goal of this second problem is to reduce the effort on the network system designer side by identifying a subset of standards able to satisfy the system needs, and by automatically computing network configurations based on these standards and requirements. To do so, we refine the modelling of the system started in the previous chapter by detailing the model of the flows travelling across the network. A formal definition of all the quality of service requirements (i.e. performance and fault tolerance requirements) that could be encountered in the use cases to come is proposed. Some definitions and models of this chapter have been retrieved from [START_REF] Chaine | Formal Specification of Satellite On-Board Networks Requirements[END_REF]. Finally, once the model and the requirements have been introduced, the last section of this chapter formulates the problem and showcases our contribution.

Flow Modelling

Definitions

The applications running on the computing devices communicate with the sensing and actuating devices through the on-board network with messages. These messages are gathered under the concept of flows.

Definition 38 (Flow, f, F) A flow is a unidirectional sequence of messages from one sender to one or several receivers. Let us denote f a flow, and F the set of flows of the system. A flow is characterized by the following tuple:

∀f ∈ F, f = <Src_f, LDests_f, Size_f, r_f>   (8.1)

where:
• Src_f ∈ D is the device from which the messages are generated and emitted,
• LDests_f ⊆ D ∧ Src_f ∉ LDests_f is the set of receiver end-stations,
• Size_f is the constant size in bytes of one message,
• r_f expresses the periodicity of the flow (see Eqn. 8.3), r_f ∈ Z.

Hypothesis 2 In this model, messages are embedded in Ethernet frames and we consider that:

∀f ∈ F, Size_f ≤ MTU_Ethernet   (8.2)

This means that one applicative message of a flow will lead to one frame in the network. The definitions, properties and constraints in the rest of the document rely on this hypothesis. If this hypothesis were to be relaxed, the definitions, properties and constraints of the document would have to be slightly redefined (see Appendix C.1).

Hypothesis 3 In this study, we agreed with the industrial partner to reduce the problem by only considering unicast flows. Therefore there is only one receiver per flow. Let Dest_f denote this single receiver.
The period P_f of flow f is linked to its ratio r_f by the following equation:

∀f ∈ F:
  r_f ≤ -1  ⟹  P_f = |r_f| × P_MIF
  r_f > 1   ⟹  r_f messages are sent per P_MIF, i.e. P_f is a period during which r_f messages are sent
  r_f = 0   ⟹  P_f = NA (f is non-periodic)   (8.3)

∀f ∈ F, r_f < -1  ⟹  r_f | k (⟺ lcm(|r_f|, k) = k)   (8.4)

Hypothesis 4 In this document, we only consider periodic flows that send at least one message every P_MAF and at most one message per P_MIF, i.e.:

∀f ∈ F, r_f ∈ [-k, -1]   (8.5)

In our motivating example (cf. 5.2.2), the applications running on the devices communicate with three flows. Starting now, the convention used for naming flows is the following:

Rule 2 (Naming Flows) A flow originating from Src_f and going to LDests_f will be named:

f^{ID}_{Src_f Dest_f}   (8.6)

where ID is a user-defined identifier used to distinguish several flows with the same senders and receivers. The three flows of the motivating example, represented in Fig. 8.2, follow this naming convention.

Restriction to Flow Level Requirements

In the original model [START_REF] Chaine | Formal Specification of Satellite On-Board Networks Requirements[END_REF], the specification and the constraints of the system were expressed at application level. Indeed, the system is composed of several applications running on the end-stations and communicating through flows (or streams in the TSN vocabulary). After detailing how applications deposit and receive their messages, we show that the problem can be restricted to the flow level. At emission, the implementation is done as follows: when an application produces a message, it is put into a mailbox (ISO L7) and then placed in the appropriate queue at MAC level (ISO L2).

Definition 41 (Deposit / Emission instants) Let m be a message. We define the deposit instant T_SAP(m) as the instant at which m is deposited in the L2 service access point. We define the emission instant T_e(m) as the instant at which the first bit of m is emitted on the medium.

Definition 42 (Reception / Delivery instants) Let m be a message. We define the reception instant T_r(m) as the instant at which the last bit of m is received at the receiver end-station physical level. We define the delivery instant T_d(m) as the instant at which m is provided to the receiver application.

Hypothesis 5 (Restricting the problem at flow level) Since we consider real-time equipment, we assume that the time between the production at applicative level and the placement into a queue at MAC level is bounded by a known value (b_L7→L2). In the same way, we consider the delay between the reception of a message and its delivery (i.e. availability) at applicative level to be bounded (b_L1→L7). This is illustrated in Fig. 8.3. As a consequence, the configuration problem can be defined and solved solely at the network level.

Quality of Service Requirements

In the following sections, we propose to further detail the requirements of the system by deriving them from Application-Level Properties 1, 2 and 3 introduced in Chapter 5.

Choice 1 In this document, we focus on the requirements of Application-Level Property 1 and a subset of Application-Level Property 3. The priority was to assess, via configuration, the network performance quality of service of TSN, before addressing its fault tolerance quality of service requirements.

Reference Instants

Before presenting the requirements, let us define the concept of reference instants.

Definition 43 (Ref(f^l)) We define the reference instant of f^l as Ref(f^l) = l × P_f. For any message f^l of any periodic flow, f^l will be enqueued during the interval T_SAP(f^l) ∈ [Ref(f^l), Ref(f^{l+1})[.
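As a small worked example of Eqn. 8.3 and Definition 43 (the numerical values below are ours, purely illustrative, and not taken from the use cases):

```latex
% Assume P_MIF = 1 ms and k = 32 (illustrative values).
% A flow f with r_f = -1 and a flow g with r_g = -4
% (|r_g| divides k, consistent with Eqn. 8.4) give:
P_f = |r_f| \cdot P_{MIF} = 1\,\text{ms}, \qquad
P_g = |r_g| \cdot P_{MIF} = 4\,\text{ms}
% and, by Definition 43, the reference instants of g are:
\mathrm{Ref}(g^l) = l \times P_g = 4l\,\text{ms}, \quad l \in \mathbb{N}
```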
[Figure: reference instants Ref(f^l) and Ref(f^{l+1}), periods P_f, and deposit instants T_SAP(f^l) and T_SAP(f^{l+1})]

Mixed Traffic Types Requirements

The following paragraphs introduce the different performance constraints identified for the flows. For most constraints, an example illustrates the constraint being satisfied. In these figures, the colors representing the Input/Processing/Output parts have been removed for the sake of readability of the flow constraints.

Performance Requirement 1 (Deadline) Let f ∈ F be a flow; it comes with a deadline constraint such that T_r(f^l) ≤ f^l.deadline and f^l.deadline ≤ Ref(f^l) + P_f. This entails that the reception of f^l must be completed before the beginning of the emission window of f^{l+1}. In the case where ∀l ∈ N, f^l.deadline = Ref(f^l) + P_f, the flow is said to have implicit deadlines [START_REF] Goossens | Multiprocessor algorithms for uniprocessor feasibility analysis[END_REF]. Deadlines for several messages of two flows are illustrated in Fig. 8.5.

Definition 44 (Reception Jitter) The reception jitter [START_REF] Oliver | Ieee 802.1qbv gate control list synthesis using array theory encoding[END_REF], or jitter between two frames f^l and f^m, is defined as the variability of their reception dates. It is denoted Jit_f(l,m) such that:

∀f ∈ F, ∀l, m ∈ N, Jit_f(l,m) = |(T_r(f^l) - Ref(f^l)) - (T_r(f^m) - Ref(f^m))|

The overall jitter of a flow is denoted Jit_f such that ∀f ∈ F, Jit_f = max_{l,m} Jit_f(l,m).

Remark 16 The latency of a message f^l can be defined in this model as ∀f ∈ F, ∀l ∈ N, Lat_{f^l} = T_r(f^l) - Ref(f^l).

Performance Requirement 2 (Jitter) A flow f also has a jitter constraint defined as f.jitter ∈ N ∪ {NA}, where NA stands for not applicable (thus no jitter constraint); otherwise f.jitter is the maximum accepted jitter. The jitter of f^m with respect to f^l is illustrated in Fig. 8.6.

[Figure 8.6: jitter of f^m with respect to f^l - reference instants Ref(f^l) and Ref(f^m), reception instants T_r(f^l) and T_r(f^m), latencies Lat_{f^l} = T_d(f^l) - Ref(f^l) and Lat_{f^m} = T_d(f^m) - Ref(f^m), and the resulting Jit_f(l,m)]

Fault Tolerance Requirement

In addition to performance, safety requirements are often required. In particular, ARINC 664 [3] or TTEthernet [102] networks offer a fault isolation mechanism. Among the faults supported by those networks, we restrict ourselves to message loss.

Definition 46 (Message loss independence) A system is considered message loss independent if, for all flows f, the loss of messages of f has no negative impact on the performance (deadline/jitter) of the other flows.

Safety Requirement 1 Any configuration of the network should fulfil the message loss independence requirement.

Motivating Example: Adding Flow Constraints

Three flows have been identified in the motivating example (cf. 8.1.2). Let us now assign them flow constraints according to the previously defined constraints. We denote Υ(f) the function that, given a flow, returns its flow constraints.

Industrial Considerations

Production Contract and Release Instant

Applications come with a set of flow contracts, where each flow contract consists of a temporal window for message production (see Fig. 8.7), so that applications meet their performance, safety and development requirements (see Section 8.2). Such a contract is bargained off-line between applications and platform providers.
It is expected that applications always respect their contracts, and the on-board network then ensures the quality of service of each application as long as they do.

[Figure 8.7: Ref(f^l), T_SAP(f^l) and Prod(f^l) - the production window starts at Ref(f^l) + B^-_{f^l} and ends at Release(f^l) = Ref(f^l) + B^+_{f^l}]

Definition 47 (Application Flow Contract) Let f^l be the l-th message of f. The production contract associated to f^l is the interval Prod(f^l) = [Ref(f^l) + B^-_{f^l}, Ref(f^l) + B^+_{f^l}] ⊆ [Ref(f^l), Ref(f^{l+1})[, where B^-_{f^l} (resp. B^+_{f^l}) is the offset of the beginning (resp. end) of the production window, with Release(f^l) = Ref(f^l) + B^+_{f^l}. This definition entails that ∀f ∈ F, ∀l ∈ N, T_SAP(f^l) ∈ Prod(f^l), i.e.:

Ref(f^l) + B^-_{f^l} ≤ T_SAP(f^l) ≤ Release(f^l)   (8.10)

Application Emission Scheme

In the current version of the satellite system, the communication on the platform side relies most of the time on the 1553 bus. Application emission is based on precomputed, scheduled accesses to a list of descriptors that compose a frame ahead of time, and not on a queue as it would be on Ethernet for instance. Therefore, the application can deposit its messages whenever they are ready: there is no constraint on data production as long as the data is deposited before the scheduled register read. The software running on the satellite relies on this application emission scheme. Changing the emission scheme would therefore have a significant impact on the amount of code that might have to be redeveloped. Hence, the configured TSN network shall aim at minimizing the amount of redevelopment and, more generally, the impact on on-board software. This is translated into the following optimization objective:

Development Effort Requirement 1 Any configuration should maximize the length of Prod(f) (see Def. 47). In particular, B^-_{f^l} = 0 allows the scheduler to start executing any ready task at the beginning of a MIF period, and maximizing B^+_{f^l} (ideally B^+_{f^l} = P_f) gives more time to execute tasks during a MIF period.

The idea is to maximize the duration during which the application can emit its messages, and thus mimic the 1553 behaviour. The network will be in charge of correctly delivering the message while respecting the performance and quality of service requirements.

Cost of a Network Upgrade

In addition to software, the hardware cost of the next-generation network is also important. In fact, switching from a legacy 1553 + SpaceWire network towards an Ethernet/TSN network might have two impacts. First, changing all the devices to TSN devices might have a considerable cost that the industrial partner is not willing to bear. In addition, TSN is composed of more than 20 standards: the more standards are implemented, the more complex the fault tolerance analysis will be. Second, in the current architecture, the receiving devices are extremely simple, and the On-Board Computer (computing device) is essentially the only complex device in the architecture. Again, increasing the complexity of additional devices might require further analysis of the performance, the fault tolerance, etc. of the system. As a consequence, the configured network shall try to minimize the hardware cost by controlling the use of TSN, in number of devices or number of implemented standards, but also by keeping the actuating and sensing devices as simple as possible. One other solution would have been to relax this "simplicity of the receivers" paradigm and distribute intelligence across the network, but it was out of the scope of the study w.r.t. the industrial partner requirements.
Contribution Overview

In order to demonstrate the suitability of Time Sensitive Networking w.r.t. the requirements of the spacecraft industry, we propose to find network configurations that satisfy these requirements. Therefore, we wondered:

Definition 48 (Problem 2 - Step 1) Given the performance and safety quality of service requirements of the spacecraft industry presented in Section 8.2, what is the smallest subset of TSN standards required for a next-generation satellite network?

In fact, Time Sensitive Networking (cf. Chapter 4) is not one but a set of roughly 20 standards. Therefore, the design and configuration of such networks is a difficult task. In order to help the network architect design a satellite unified network based on TSN, it is desirable to define a small subset of standards that would be sufficient to fulfil the coming network performance and fault tolerance requirements.

Choice 2 (Choice of TSN standards) We first presented a glimpse of such a subset, or profile in the IEEE vocabulary, in [START_REF] Chaine | Suitability of time sensitive networking for space ? TSN A Conference[END_REF] and then in [START_REF] Chaine | TSN Support for Quality of Service in Space[END_REF], and it is still under consolidation in a joint effort between SAE and IEEE (see IEEE/SAE P802.1DP TSN Profile for Aerospace [65]). This profile contains a certain number of standards. In this study, we reduce it to the TSN standard 802.1Qbv only, so as to cope with the requirements presented in Section 8.2.

This choice is motivated by two reasons. First, at the time of writing this document, there were two standards available among TSN standards to handle low jitter traffic: 802.1Qbv and 802.1Qcr - Asynchronous Traffic Shaper (ATS [START_REF]IEEE Standard for Local and Metropolitan Area Networks-Bridges and Bridged Networks -Amendment 34:Asynchronous Traffic Shaping[END_REF]) - and most of the papers in the state of the art relied on 802.1Qbv for low jitter traffic. Second, there was no existing TSN hardware embedding ATS at that time, while 802.1Qbv was largely represented.

Remark 17 In this manuscript, we consider that the 802.1Qbv standard includes all transmission selection algorithms, meaning that, for instance, the Time Aware Shaper and the Credit Based Shaper are included in the 802.1Qbv standard and may be configured. This is not actually the case: 802.1Qbv and 802.1Qav (CBS) are two different amendments included in the same 802.1Q standard. However, 802.1Q contains more mechanisms than the ones we have selected for this study. Therefore, it was easier to describe these mechanisms as part of 802.1Qbv.

Definition 49 (Problem 2 - Step 2) Given the quality of service requirements, is it possible to compute valid configurations of the TSN 802.1Qbv standard?

Definition 50 (Valid configuration) A valid network configuration should respect all the quality of service requirements of all the flows.

There are certainly a lot of valid network configurations. However, we want to find a valid one that is also acceptable in terms of industrial applicability, i.e. with a reduced cost/impact not only on hardware implementation but also on the software and the integration process. To confront this second problem, we propose a way to automatically generate valid TSN configurations, doing it by hand being neither a scalable nor an industrial method. Based on existing methods from the state of the art, we generate TSN configurations thanks to constraint programming. This second contribution is organized in three chapters.
In the first chapter, we detail a configuration model for the 802.1Qbv standard. Then, we introduce in the related work section the state of the art methodology for the computation of TSN network configurations. Finally, we introduce the industrial use cases that we use in Chapter 11. In the second chapter, the novel concept of Egress TT configurations is introduced. Its advantages and drawbacks are discussed, and two strategies (or implementations of Egress TT) for the generation of TSN network configurations are proposed, namely Exclusive Queue Allocation and Size Based Isolation. In particular, we detail how these configurations are computed with the previously introduced configuration model and methodology. In the last chapter, the performance of these two implementations of Egress TT for TSN networks is compared with the state of the art configuration approach (entitled End-to-End TT) on several use cases (including real-life systems) so as to underline potential improvements in scalability and computation effort.

Chapter 9

Insights on the Configuration of IEEE 802.1Qbv

Time Sensitive Networking has been selected in the qualitative study, along with TTEthernet and SpaceFibre, as one suitable candidate for supporting the on-board network of next-generation satellites. In order to further validate the suitability of Time Sensitive Networking, we wish to consolidate network configurations that fulfil the quality of service requirements (performance and fault tolerance). This configuration challenge is not easy: TSN is composed of several standards, each composed of several mechanisms with several parameters that need to be instantiated for any configuration. In Chapter 8, we have reduced TSN to one standard - IEEE 802.1Qbv - identified in Choice 2 as the necessary standard to support the quality of service requirements of next-generation satellites (cf. Section 8.2). Therefore, in this chapter, we first propose an insight into the configuration of a network relying on IEEE 802.1Qbv by introducing a formal configuration model. Then, we introduce End-to-End TT and its derivatives, a family of configurations designed to support flows with very low jitter requirements. Finally, we discuss the major advantages and drawbacks of these configurations with respect to Problem 2 Step 2 (cf. Def. 49).

Configuration Model

We detail hereafter the configuration model for a TSN network relying on IEEE 802.1Qbv. This network is composed of several TSN devices and multiple links. These devices (end-stations or switches) are composed of a certain number of output ports. A system configuration entails a network configuration, i.e. the configuration of every port in every device, as well as a configuration of the flows travelling through the network. We now formalize the port configuration.

802.1Qbv Port Configuration

An output port is composed of up to eight internal queues, also known as traffic classes. These queues have priorities, and come with several mechanisms for traffic shaping, bandwidth sharing, etc. These concepts have been extensively described in Section 4.2. We now propose to formalize them so that they can be translated into a model which will later be used for constraint programming.

Definition 51 (Output port) Let P denote the set of output ports in the network. An output port p = (q_0, ..., q_7, TS) is composed of eight internal queues q_j and a Transmission Selection (TS).
Each queue q = (TSA_q, TG_q) is associated with a Transmission Selection Algorithm (TSA) as well as a Transmission Gate (TG) (cf. Chapter 4). We summarize the output port model in Fig. 9.1. Both the internal queues and the TS rule when frames access the medium.

Transmission Selection Algorithm (TSA). The TSA belongs to a list of available algorithms (cf. Section 4.2.2) implemented by the hardware device. Examples of such algorithms are CBS (Credit Based Shaper) and none, when no restriction on the head of the queue is added.

Definition 52 (Ready(m, TSA_q)) Let p = (q_0, ..., q_7, TS) ∈ P be a port, and let m be a message stored in queue q_j, j ∈ [0, 7]. To allow the transmission of message m from queue q = (TSA_q, TG_q), the transmission selection algorithm of that queue, if any, shall mark message m as ready. Let Ready(m, TSA_q) denote that message m is ready.

Remark 18 In the case where a queue has no configured TSA, a message m will be ready as soon as it becomes head of the queue.

Transmission Gates (TG). This mechanism, also referred to as the Time Aware Shaper (cf. Section 4.2.3), adds the possibility for internal queues, in both switches and end-stations, to be regulated according to time-driven rules. In effect, there is a gate TG_q associated to any internal queue q, which can be opened or closed. The schedule switching from open to closed and back is pre-computed off-line per port, is periodic, and is called a Gate Control List (GCL).

Definition 53 (Gate Control List) Let p be a port. Its associated gate control list, denoted GCL(p), is defined by the list [e_0, ..., e_{u-1}] of u events e_i = <s_i, t_i, d_i> where:
• s_i = <s_{i,0}, ..., s_{i,7}> is the status of the gates, with s_{i,j} ∈ {o, C} where o stands for open and C stands for closed,
• t_i ∈ N is the time offset from the start of e_0 at which event e_i starts,
• d_i = t_{i+1} - t_i is the duration during which the schedule s_i will hold.

In particular, the period of repetition of the pattern is Σ_{i∈[0,u-1]} d_i, and gcd_{i∈[0,u-1]}(d_i) is called the gate granularity. These parameters are illustrated in Fig. 9.1.

Remark 19 (Link between TG_q and GCL(p)) The gate control list describes, per port, the status over time of all the gates of all the queues in that port. The variable TG_q denotes the gate of queue q (in port p).

Hypothesis 6 (Gate Control List period) Since the system we consider is periodic, it is sufficient to compute the gate schedules on the hyper-period of all its flows. Therefore, Σ_{i∈[0,u-1]} d_i = P_MAF.

Remark 20 Ethernet-capable devices do not have transmission gates, which is equivalent to the gates being open all the time.

Transmission Selection (TS)

A frame is emitted when it is available for transmission (cf. Def. 54) and has the highest priority among frames available for transmission (cf. Def. 55).

Definition 54 (Frame available for transmission) A frame (or message) m in queue q of output port p is "available for transmission" at instant t when:
1. the frame is the head of q,
2. the transmission selection algorithm of q has marked m as ready,
3. TG_q is open at instant t,
4. TG_q remains open long enough to transmit the frame.

Formally:

head(q) = m ∧ Ready(m, TSA_q) ∧ ∃j, k ∈ N s.t. ∀e_i ∈ GCL(p), i ∈ [j, j+k], s_{i%u,q} = o, with t_j ≤ t ≤ t_j + Σ_{i=j}^{j+k} d_i and (t_j + Σ_{i=j}^{j+k} d_i) - t ≥ Size_m / r

where r denotes the link speed.

Remark 21 The sum element in the above equation accounts for potential consecutive gate open events.

Remark 22 (Frame Preemption) We do not consider the TSN standard for frame preemption [START_REF] Thiele | Formal worst-case performance analysis of time-sensitive ethernet with frame preemption[END_REF] in this configuration model. Such an evolution could be done with inspiration from [START_REF] Thiele | Formal worst-case performance analysis of time-sensitive ethernet with frame preemption[END_REF] by slightly redefining the equation of Def. 54. The standard is presented in Appendix A.1.
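To make Definitions 53 and 54 concrete, here is a minimal Python sketch of a GCL and of the gate-related availability conditions (conditions 3 and 4 of Def. 54). The data layout, names and numerical values are our own illustrative assumptions, not part of the standard:

```python
from dataclasses import dataclass

@dataclass
class GateEvent:
    start: int        # t_i: offset in ns from the start of e_0
    duration: int     # d_i = t_{i+1} - t_i, in ns
    open_queues: set  # indices j such that s_{i,j} = o

def available_gate_wise(gcl, t, queue, frame_bits, link_bps):
    """Conditions 3 and 4 of Def. 54: the gate of `queue` is open at t and
    stays open long enough to emit frame_bits at link_bps."""
    period = sum(e.duration for e in gcl)      # = P_MAF (cf. Hypothesis 6)
    tau = frame_bits / link_bps * 1e9          # transmission time in ns
    t = t % period                             # the GCL repeats periodically
    i = next(k for k, e in enumerate(gcl)
             if e.start <= t < e.start + e.duration)
    if queue not in gcl[i].open_queues:        # condition 3: gate open at t
        return False
    # Accumulate consecutive open events (the "sum" of Remark 21).
    end = gcl[i].start + gcl[i].duration
    k = (i + 1) % len(gcl)
    while k != i and queue in gcl[k].open_queues:
        end += gcl[k].duration
        k = (k + 1) % len(gcl)
    return end - t >= tau                      # condition 4: open long enough

# A 100 us GCL alternating between queue 7 and queue 0 every 50 us:
gcl = [GateEvent(0, 50_000, {7}), GateEvent(50_000, 50_000, {0})]
print(available_gate_wise(gcl, t=10_000, queue=7,
                          frame_bits=256 * 8, link_bps=1e9))  # True
```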
Definition 55 (Transmission Selection (TS)) The priority attribution is the following: the higher the traffic class number, the higher the priority. For instance, traffic class #7 has a higher priority than #0.

Several classic configurations, also known as architectures of interest, exist for a port, depending on the choice of TSA and the gate configuration. Some architectures of interest are described in Section 4.2.5.

System Configuration

Now that the notion of port configuration has been formalized, we formulate what the notion of configuration means at system level. In fact, a system configuration is composed of a configuration of all flows and a network-level configuration.

Definition 57 (Flow configuration) The configuration of a flow f is Config(f) = [(p_1, FtQM_{p_1}), ..., (p_l, FtQM_{p_l})] where:
• Path_f = p_1, ..., p_l is the path followed by f, that is, the sequence of output ports that are crossed;
• FtQM_{p_j} is the associated Flow to Queue Mapping on each port p_j. In particular, since a port is defined by p = (q_0, ..., q_7, TS), FtQM_p(f) ∈ {q_0, ..., q_7}.

Definition 58 (Config(Net)) A network-level configuration Config(Net) consists in finding a configuration for all the output ports, i.e. Config(Net) = {Config(p), ∀p ∈ P}.

In summary, computing a system configuration consists in determining:

  ∀f ∈ F, Path_f
  ∀f ∈ F, ∀p ∈ Path_f, FtQM_p(f)
  ∀p ∈ P, GCL(p)
  ∀p = (q_0, ..., q_7, TS) ∈ P, ∀q ∈ {q_0, ..., q_7}, TSA_q   (9.1)

Related Works: Existing System Configurations for Low Jitter Requirements Support

Let us discuss existing system configurations in the state of the art that could be suitable for the constraints identified in Sections 8.2 and 8.3. One can distinguish two families of configurations, aiming at satisfying the two performance requirements of the on-board network: the first family is designed for flows with deadline/latency requirements, whereas the second family is designed for supporting traffic with low jitter requirements.
While the first family has been extensively discussed in the state of the art (e.g. priority assignment [START_REF] Hamza | Priority assignment on an avionics switched Ethernet network (QoS AFDX)[END_REF], AVB/CBS [START_REF] Diemer | Exploring the worst-case timing of ethernet avb for industrial applications[END_REF][START_REF] Diemer | Formal worst-case timing analysis of ethernet topologies with strict-priority and avb switching[END_REF][START_REF] Migge | Insights on the performance and configuration of avb and tsn in automotive ethernet networks[END_REF], preshaping [START_REF] Li | Existing offset assignments are near optimal for an industrial AFDX network[END_REF][START_REF] Navet | Pre-Shaping Bursty Transmissions under IEEE802.1Q as a Simple and Efficient QoS Mechanism[END_REF], cyclic queueing and forwarding [START_REF] Nasrallah | Tsn algorithms for large scale networks: A survey and conceptual comparison[END_REF][START_REF] Yan | Injection time planning: Making cqf practical in time-sensitive networking[END_REF], TT-CBS-BE [START_REF] Sune Mølgaard Laursen | Routing optimization of avb streams in tsn networks[END_REF][START_REF] Daigmorte | Modelling in network calculus a TSN architecture mixing Time-Triggered, Credit Based Shaper and Best-Effort queues[END_REF], ETS [START_REF] Soni | Efficient configuration of a QoSaware AFDX network with Deficit Round Robin[END_REF][START_REF] Soni | Quantum assignment for QoS-aware AFDX network with Deficit Round Robin[END_REF]), the computation of configurations from the second family is still a hot topic in the networking community (considered an NP problem [START_REF] Silviu | Smt-based task-and network-level static schedule generation for time-triggered networked systems[END_REF]). That is why this state of the art focuses on configurations designed for low jitter requirements. For low jitter requirements support, nearly all the papers in the state of the art rely on transmission gates, and transmission gates only (i.e. the TSA is set to none for all queues). In addition, they consider that the routing of flows is known and fixed a priori. This means that, unless explicitly stated, the system configuration (cf. Equation 9.1) is simplified to:

  ∀f ∈ F, ∀p ∈ Path_f, FtQM_p(f)
  ∀p ∈ P, GCL(p)   (9.2)

After a short introduction on how constraint programming is used to compute these configurations, we detail these GCL-based configurations hereafter.

Methodology: Configuration Generation with Constraint Programming

In our coming contribution, we adopt the most common methodology used to generate valid configurations for the Time Aware Shaper. This methodology is based on constraint programming [START_REF] Apt | Principles of constraint programming[END_REF][START_REF] Rossi | Handbook of constraint programming[END_REF]. In fact, the configuration of the TSN Time Aware Shaper mechanism can be translated into a constraint programming problem. It can then be tackled either with constraint programming solvers such as CPLEX [START_REF]CPLEX User's Manual. Ibm ilog cplex optimization studio[END_REF] or with Satisfiability Modulo Theory/Optimization Modulo Theory (SMT/OMT) solvers such as Z3 [START_REF] De | Z3: an efficient smt solver[END_REF]. Three elements have to be defined for the constraint programming problem: the model, the decision variables and the constraints. The model describes the input on which the constraints will be evaluated (representing the system). The constraints form a system of equations: they are a mathematical formulation of the requirements of the system. The decision variables are variables in the mathematical formulation to which a value must be assigned. In the case of network configuration, the model represents the network topology and its characteristics as well as the flows' definition (cf. Section 8.1). The constraints represent both the quality of service requirements (e.g. Section 8.2) and technology/system related constraints (e.g. maximum size of frames, maximum link capability, maximum number of frames on a link at any time, etc.). The decision variables are the parameters of the mechanism that the network designer is aiming to configure (cf. Section 9.2). In that sense, in Chapter 10, we will reuse part of the model and constraints from the state of the art. In addition, we will define the new constraints and decision variables of our novel configuration family.

End-to-End TT configuration and its derivatives

End-to-End TT configurations concept

In the state of the art, the most common configuration supporting low jitter traffic requirements is the so-called End-to-End TT configuration (cf. Section 4.2.5). The figure below illustrates an End-to-End TT configuration with one emitter, one receiver and two switches (SWA and SWB). The principle of these configurations is simple: by fixing the transmission instant of all frames on all hops, the latency and jitter of the flows are controlled and nothing unexpected can occur.

Definition 59 (End-to-End TT configurations) An End-to-End TT configuration consists of a schedule per frame (or frame offset) in a time-triggered fashion, for either jitter traffic or all traffic, on all ports in their path. This schedule is computed a priori, offline, based on the requirements of the system. In this configuration, the transmission instant of any frame at any node is known during the whole life cycle of the system, or until a new configuration is computed.

[Figure: End-to-End TT configuration - the emission instant T_e(f^l) at the source, the transmission instants at the SWA and SWB outputs, and the reception instant T_r(f^l) at Dest_f are all fixed]

End-to-End TT roots in TTEthernet networks

In Ethernet networks, End-to-End TT configurations root back to TTEthernet networks. The pioneering paper is [START_REF] Steiner | An evaluation of smt-based schedule synthesis for time-triggered multi-hop networks[END_REF]. The authors introduce a formal TTEthernet network model and an associated set of constraints for SMT schedule generation, some of which inspired the previously presented system model as well as the coming decision variables and constraints used to compute our novel configurations. Their methodology allowed frame offsets to be computed; these were directly applicable to TTEthernet, since TTEthernet provides a per-frame scheduling capability. The configuration of the network was therefore immediate. At that stage, the authors had already made it clear that the computation of such schedules was expensive (cf. Section 9.3.2), and introduced an incremental strategy for configuration generation to reduce the computation effort. Exploiting the constraints of this paper, [START_REF] Silviu | Smt-based task-and network-level static schedule generation for time-triggered networked systems[END_REF] (short version) and [START_REF] Silviu | Combined task-and network-level scheduling for distributed time-triggered systems[END_REF] (long version) propose to create a schedule for both the applications running on end-stations and the underlying TTEthernet network.
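To give a concrete flavour of this SMT-based schedule synthesis, here is a minimal sketch using the Z3 solver. The topology (two frames on a single shared link), the constraint set and all numerical values are our own simplifying assumptions; a real generator such as those cited above handles many frames, many hops and many more constraints:

```python
from z3 import Ints, Solver, Or, sat

P = 1_000_000                  # hyper-period in ns (assumed value)
tau_f, tau_g = 2_048, 4_096    # frame transmission durations in ns (assumed)
o_f, o_g = Ints("o_f o_g")     # frame offsets: the decision variables

s = Solver()
s.add(0 <= o_f, o_f + tau_f <= P)   # each frame fits within the hyper-period
s.add(0 <= o_g, o_g + tau_g <= P)
s.add(Or(o_f + tau_f <= o_g,        # frames must not overlap on the link
         o_g + tau_g <= o_f))
s.add(o_f + tau_f <= 500_000)       # an illustrative deadline constraint

if s.check() == sat:
    m = s.model()
    print("o_f =", m[o_f], ", o_g =", m[o_g])
```

A complete generator would add, per hop, precedence constraints (a frame is scheduled on hop n+1 only after its reception on hop n) as well as the isolation constraints discussed below.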
The decisions variables are variables in the mathematical formula to which a value must be decided. In the case of network configuration, the model represents the network topology and its characteristics as well as the flows' definition (cf. Section 8.1). The constraints represent both the quality of service requirements (e.g. Section 8.2) as well as technology/system related constraints (e.g. maximum size of frames, maximum link capability, maximum number of frame on a link at any time, etc.). The decision variables are the parameters of the mechanism that the network designer is aiming at configuring (cf. Section 9.2). In that sense, in Chapter 10, we will reuse part of the model and constraints from the state of the art. In addition, we will define new constraints and decision variables of our novel configuration family. End-to-End TT configuration and its derivatives End-to-End TT configurations concept In the state of the art, the most common configuration supporting low jitter traffic requirements is the so called End-to-End TT configuration (cf. Section 4.2.5). .1 illustrates an End-to-End TT configuration with one emitter, one receiver and two switches (SWA and SWB). The principle of these configurations is simple: by fixing the transmission instant of all frames in all hops, the latency and jitter of the flows are controlled and nothing unexpected can occur. Definition 59 (End-to-End TT configurations) End-to-End TT configuration consists of a schedule per frame (or frame offset) in a time triggered fashion for either jitter traffic or all traffic on all ports in their path. This schedule is computed a-priori offline based on the requirements of the system. In this configuration, the transmission instant of any frame at any node is known during the whole life cycle of the system or until a new configuration is computed. SWA output SWB output (L1) Dest f Ref (f l ) f l T e (f l ) -Fixed f l Fixed f l Fixed f l T r (f l ) -Fixed End-to-End TT roots in TTEthernet networks In Ethernet networks, End-to-End TT configurations root back to TTEthernet networks. The pioneering paper is [START_REF] Steiner | An evaluation of smt-based schedule synthesis for time-triggered multi-hop networks[END_REF]. The authors introduce a formal TTEthernet network model and an associated set of constraints for SMT schedule generation, some of which we inspired from for the previously presented system model and the coming decision variables and constraints used to computed our novel configurations. Their methodology allowed to compute frame offsets, that were directly applicable into TTEthernet since TTEthernet provides a per frame scheduling capability. The configuration of the network was therefore immediate. At that stage the authors had already made it clear that the computation of such schedules was expensive (cf. Section 9.3.2) and introduced an incremental strategy for configuration generation to reduce the computation effort. Exploiting the constraints of the paper [START_REF] Steiner | An evaluation of smt-based schedule synthesis for time-triggered multi-hop networks[END_REF], [START_REF] Silviu | Smt-based task-and network-level static schedule generation for time-triggered networked systems[END_REF] (short version) and [START_REF] Silviu | Combined task-and network-level scheduling for distributed time-triggered systems[END_REF] (long version) propose to create a schedule for both applications running on end-stations and the underlying TTEthernet network. 
Link between Frame Schedules and GCL configuration Then, authors have started to consider network based on Time Sensitive Networking instead of TTEthernet where the scheduling of frame is slightly different. In fact, in order to meet the very low jitter traffic requirements, TSN relies on schedules per queue instantiated through the time aware shaper mechanism. This is genuinely different from TTEthernet which proposes a per frame scheduling. Instead of consisting of an emission schedule of all frames on all hops of the network, TSN schedule consists in serving one or several queues during a configured duration in a scheduled fashion. These queues will have the possibility to emit frames according to the transmission selection rules defined in Section 4.2. Nevertheless, these authors tried to configure TSN in such a way that would be equivalent to a TTE per frame schedule. There is a slight misalignment between the TSN configuration model and the most common configuration in the state of the art. In fact, on the one hand, as proposed in Equation 9.2, in order to configure the system, the parameters of the gate control list of any port (i.e. GCL(p)) have to be computed. On the other hand, the End-to-End TT approach defines frame schedules i.e. instants in which frames are transmitted on all hops. Naturally, one can wonder how to link these two concepts. In fact, in order to transform the per-queue scheduling capability of TSN into a frame scheduling capability, the authors in the state of the art take the problem the other way around. They compute frame offsets and from these offsets deduce gate opening and closing events. For instance, a frame f l in a port p with a single queue q which offset has been determined to be at instant t will lead to a gate event e = ⟨⟨o⟩ , t, τ f l ⟩. If a frame follows f l at instant t + τ f l , then it will lead to a similar opening event, otherwise, a gate closed event will be created until the next frame transmission. In addition, in port with multiple queues, in order to avoid the medium being busy when the transmission of frame f l must happen, an exclusive gating [START_REF] Daigmore | Impact on credit freeze before gate closing in CBS and GCL integration into TSN[END_REF] pattern is proposed (cf. Section 31). Therefore, in addition to opening the gate of queue q at instant t, the gate event will also close the gates of all others queues before or at instant t. We illustrate this transformation in Fig. 9.4. In this figure, we have represented two frames f l and g m in port p = (q 1 , q 2 ), which have Resulting gates in q 2 f l Open g m Open Closed Closed Closed Offset of f l Offset of g m τ f l τ g m Closed Closed Open Open Open Figure 9.4: Offsets to GCL transformation been configured to be placed into q 1 (i.e. FtQM p (f ) = FtQM p (g) = q 1 ). Queue q 1 and q 2 have been configured in exclusive gating. Need for Flow/Frame Isolation for End-to-End TT configurations of TSN networks The per-queue schedule may generate non deterministic behaviour at message level in the presence of failure. This concept is illustrated with two figures: in Fig. 9.5a, the nominal expected behaviour (so as to cope with jitter requirements for g m ) is shown. Fig. 9.5b presents a scenario where message f l is lost, leading g m to be sent in place of f l , creating an unwanted jitter. 
In order to cope with the potential non-determinism induced by the loss of a frame, [START_REF] Silviu | Scheduling real-time communication in ieee 802.1qbv time sensitive networks[END_REF] adapts the constraints of [START_REF] Steiner | An evaluation of smt-based schedule synthesis for time-triggered multi-hop networks[END_REF] and introduces two new constraints namely Flow Isolation and Frame Isolation. • flow isolation: a queue is dedicated to a flow from its first to its last message in an hyperperiod. Therefore, at each instant, only messages from a single flow can be present in the queue and interleaving of frames from different flows is not allowed; • frame isolation: a queue can be shared by several flows, but at each instant, only messages from a single flow can be present in the queue. All cases prevent messages from different flows to be in the queue at the same time. Thus, a message loss cannot affect the behaviour of messages of other flows. From any of these two constraints, the flow to queue mapping (FtQM p (f ), see Def. 57) i.e. the choice of a queue, per port, for a flow is immediately computed. Remark 23 The non-determinism induced by the loss of a frame is not compatible with safety requirement 1, therefore our configuration strategy will also have to include a constraint to eliminate this issue. (Re)ordering End-to-End TT schedules for no jitter traffic support The previously quoted papers create schedules for jitter traffic (see Def. 45) without any consideration on the remaining traffic in the network. Therefore, in order to improve the performance of no jitter traffic (i.e. deadline requirements) [START_REF] Steiner | Synthesis of static communication schedules for mixed-criticality systems[END_REF], [START_REF] Dürr | No-wait packet scheduling for ieee time-sensitive networks (tsn)[END_REF] and [START_REF] Houtan | Synthesising schedules to improve qos of best-effort traffic in tsn networks[END_REF] introduce strategies, a priori or a posteriori, to modify the jitter frame schedule by either spacing the frame offsets or gathering them. [START_REF] Pozo | Schedule reparability: Enhancing time-triggered network recovery upon link failures[END_REF] also proposes to add space between any two frame offsets, not in a no jitter performance consideration, but rather to leave time for potential retransmission of lost jitter frames. Since our configuration only focuses on jitter traffic, we will not further detail these approaches. Nevertheless, the (re)ordering philosophy could be integrated into our configurations in a future work. Group-of-frames scheduling Most recently, a second family of configuration strategies has appeared. It computes configurations based on a schedule per group of frames instead of a schedule per frame, motivated by TSN Transmission Gates per queue scheduling capability. Therefore, instead of computing frame schedules and then converting them into gate control list configurations, the methodology now directly computes gate events. [26] applies its TTEthernet schedule generation methodology [START_REF] Silviu | Combined task-and network-level scheduling for distributed time-triggered systems[END_REF] to TSN networks. The authors introduce new sets of constraints adapted for group of frames schedules as well as Stream Isolation, a fusion of Frame isolation and Flow isolation to again cover the loss of a frame. 
In [START_REF] Oliver | Ieee 802.1qbv gate control list synthesis using array theory encoding[END_REF], the same authors use their new constraints to implement a configuration generator and compare their two approaches (single frame offset v.s. group of frames offsets). It seems that the group of frames scheduling is a great improvement in terms of computation effort. Recently, in [START_REF] Reusch | Window-based schedule synthesis for industrial ieee 802.1qbv tsn networks[END_REF], the authors propose a group of frames configuration while relaxing the exclusive gating constraint. Based on previous constraints from [START_REF] Oliver | Ieee 802.1qbv gate control list synthesis using array theory encoding[END_REF] and new ones, the configurations they computed satisfy temporal constraints for jitter flows and no jitter flows using schedule porosity (i.e. (re)ordering) in an incremental approach based on the constraint programming methodology. The performance of group of frame configurations, in terms of computation effort, has to be slightly detailed: in fact, when the jitter constraint is not very low (i.e. above 10µs), the experiments in paper [START_REF] Oliver | Ieee 802.1qbv gate control list synthesis using array theory encoding[END_REF] show indeed that the computation time is smaller than End-to-End TT configurations. Nevertheless, once the jitter constraint gets smaller (i.e. around 1µs like it is the case for us), the experiments show that computation effort required for configuration generation is high and even greater that End-to-End TT configuration. That is why our coming configuration will not rely on group of frame scheduling. Adding more variables to the configuration problem Another group of papers have chosen to take more decision variables into account for configuring TSN networks. In particular, up to now, as all the previously cited paper in this section considered a fixed static routing for flows in their system, several papers (e.g. [START_REF] Gavrilut | Avb-aware routing and scheduling of timetriggered traffic for tsn[END_REF][START_REF] Gavrilut | Fault-tolerant topology and routing synthesis for ieee time-sensitive networking[END_REF][START_REF] Sune Mølgaard Laursen | Routing optimization of avb streams in tsn networks[END_REF][START_REF] Pahlevan | Genetic algorithm for scheduling time-triggered traffic in time-sensitive networks[END_REF][START_REF] Pahlevan | Heuristic list scheduler for time triggered traffic in time sensitive networks[END_REF]) have proposed to relax this hypothesis with joint routing and scheduling End-to-End TT configuration generators. This increases the solution space of frame schedules by allowing the route of flows to be modified. To compute these configurations, the authors not only rely on constraint programming methodology CHAPTER 9. INSIGHTS ON THE CONFIGURATION OF IEEE 801.QBV but also on heuristics. [START_REF] Vlk | Large-scale periodic scheduling in time-sensitive networks[END_REF], one recent addition to the state of the art, claims that with their heuristic based approach, they have solved the scalability issue of very large network configuration generation. They can configure networks with 2000 nodes and 10000 flows. We do not detail further these papers since our contribution is based on constraint programming methodology (that works well on our problem) and fixed route for all flows. Limitations of State of the Art Strategies w.r.t. 
Problem 2 While the existing End-to-End TT configurations and its extensions seem to be capable of handling the low jitter traffic requirements of next-generation satellite networks, the interest of the space industrials towards these configurations is limited for three reasons: their impact on development effort, their consequent computation cost and their upgrade costs. Application Impact Issues End-to-End TT configurations can provide guarantees for flows with very low jitter requirements. In order to be compliant with safety objective 1, i.e. be message loss independent, End-to-End TT configurations in the state of the art rely on Frame Isolation or on Flow Isolation. This comes at a cost: the impact on application of such configurations is substantial. Nevertheless, it has never been considered in the state of the art. Let us explain this application impact with the help of the example of Fig. 9.6. In this example, consider three messages f l , g m and h k belonging to the same time Src f -L2 Src f -PHY Ref (f l ) Ref (f l+1 ) Prod(f l ) B - f l = 0 Prod(g m ) B - g m ̸ = 0 Prod(h k ) B - h k ̸ = 0 f l Fixed g m Fixed h k Fixed Figure 9 .6: End-to-End TT issues with application impact queue in a last hop port, that are scheduled to be emitted between Ref (f l ) and Ref (f l+1 ), in the order and at the instants represented in the figure. In order to be emitted at their schedule, these messages must be produced before their emission date. In addition, since they share the same queue, they have to be enqueued in the same order that they will be emitted in. Therefore, the production contract of g m starts after the end of the production contract of f l . Identically, the production contract of h k starts after the end of the production contract of g m . Moreover, in systems where message loss independence (see Def. [START_REF] Gavrilut | Avb-aware routing and scheduling of timetriggered traffic for tsn[END_REF]) is required, one message must not be emitted in the place of another. In our example, since several messages share the queue, this implies that the production contract of g m (resp. h k ) must start after the emission of f l (resp. h k ). We represented the modified production contracts in Fig 9 .7. In this situation, it will be impossible for a message to take the slot of its predecessor since it will only arrive in the queue when after the gate closes (following an opening event for the previous message). In the formalism of LIMITATIONS OF STATE OF THE ART STRATEGIES W.R.T. PROBLEM 2 129 time Src f -L2 Src f -PHY Ref (f l ) Ref (f l+1 ) Prod(f l ) B - f l = 0 Prod(g m ) B - g m ̸ = 0 Prod(h k ) B - h k ̸ = 0 f l Fixed g m Fixed h k Fixed Figure 9 .7: End-to-End TT issues with application impact (2) Section 8.3.2, this entails that, for a majority of messages, B - f l ̸ = 0 which, according to Development Requirement 1, has an impact on applications. In fact, tasks have to wait until the beginning of the production contract to produce the message instead of starting at the beginning of the period. This is a situation that the industrial is aiming at avoiding. Software teams working on applications do no wish to handle constraints induced by the configuration of the underlying network. That is the reason why the existing End-to-End TT configurations do not comply with Development Requirement 1. Scalability Limitations As already identified in the state of the art, the computation of End-to-End TT configuration is expensive. 
In fact, a schedule must be computed for all flows on all hop in their path. In big network configurations with large path and/or large number of flows, scalability is a real issue. Considerations on Upgrade Costs In order to work nominally, End-to-End TT configurations require, among other hypotheses, that all devices implement TSN and uses the Time Aware Shaper mechanism. This could hinter the evolution of existing network towards TSN in space. In fact, there is no devices embedding TSN on-board a satellite today, therefore, all the devices would have to be replaced for the upgrade to happen which would represent a considerable cost (design, development, financial, . . . ). To conclude, the state of the art provides configurations suitable with the deadline/latency requirements of a next generation satellite network that could be deployed without any concern. The state of the art also proposes configurations compliant with very low jitter requirements, but these configurations come at a cost, that might hinder the introduction of TSN in a satellite context. First, existing End-to-End TT configurations have a significant impact on application development and second, the computation of such configurations is expensive. Finally, End-to-End TT configurations require that all devices in the satellite network implement TSN. This creates a huge gap between network with legacy network devices and a next-generation TSN network. Therefore, in the following chapters, we will focus on the creation of configurations suitable with the jitter requirements while requiring less computation effort, having a lesser impact on application development and potentially reducing the gap between legacy and next-generation networks. Spacecraft Industry Use Cases Before discussing our contribution with respect to the limitations identified in the previous section, let us introduce two industrial case studies compliant with the system and flow model of Chapter 5 and 8. They will be used in some experiments in this document. The first use case is an Airbus Generic Next-Generation Satellite Use Case that was created during the PhD ( [START_REF] Chaine | Formal Specification of Satellite On-Board Networks Requirements[END_REF]) and the second one is ORION Crew Exploration Vehicle (CEV) Use Case retrieved from the state of the art [START_REF] Zhao | Comparison of time sensitive networking (tsn) and ttethernet[END_REF]. Airbus Generic Next-Generation Satellite Use Case This first use case was consolidated during the PhD and formalized in [START_REF] Chaine | Formal Specification of Satellite On-Board Networks Requirements[END_REF]. It describes a generic satellite architecture with a unified network interconnecting both platform and payload devices. The number of devices has been chosen to be representative of several satellite missions (e.g. agile low earth orbit satellite, telecommunication geostationary orbit satellite, scientific mission, etc.). Platform Payload Network topology The system is composed of: These devices are connected together through a set of links and two switches, according to the topology of Fig. 9.8. SPACECRAFT INDUSTRY USE CASES 131 Applications There are three applications: • Command & Control (C & C) • Vision Choice 3 In the rest of this document, we will only experiment with the C&C application use case. The C&C application use case is the only application in which flows have real-time requirements i.e. are critical for the nominal behaviour of the satellite. 
Therefore, the choice was made to focus first on this application before adding the other applications.

Flow constraints
The flows and their constraints are listed in Appendix B.1 because the list is too long and too verbose to be detailed here. There are roughly 120 unicast flows in the C&C use case.

ORION Crew Exploration Vehicle Use Case
This second use case is adapted from the Orion Crew Exploration Vehicle (CEV) use case described in [START_REF] Zhao | Comparison of time sensitive networking (tsn) and ttethernet[END_REF]. The network topology is shown in Fig. 9.9.

Network topology
The system we consider is a network composed of 31 devices of undisclosed type (i.e. computing, sensing, actuating or payload) and 15 switches. These devices are connected together through a set of links and switches, according to the topology of Fig. 9.9.

Application
We did not have enough information about the applications running over the use case to provide the same level of detail as for the Airbus Generic Satellite Use Case. Nevertheless, we have slightly adapted the use case to make it fit into our system model. It is then characterized by k = 1200 and P_MIF = 625 µs.

Remark 24
In this use case, P_MAF = k × P_MIF = 750 ms ≠ 1 ms.

Flows & flow constraints
The flows and their constraints for this use case are listed in Appendix B.2. There are 186 flows, among which 100 jitter flows (i.e. with jitter and deadline constraints) and 86 no jitter flows (i.e. with deadline constraints only). In our modelling, multicast flows are duplicated into several unicast flows. Thus, the use case is composed of 168 unicast jitter flows and 147 unicast no jitter flows.

Chapter 10
Reduction of the Application Impact and Computation Effort of State of the Art Configuration Generation with Egress TT

In the previous chapter, we have presented End-to-End TT, the most common configuration approach in the state of the art for the configuration of TSN GCL-based networks. These configurations rely on a per-frame schedule on all hops which is translated into a gate control list configuration. In Section 9.3, we have identified the limitations of these configurations, in terms of application impact, scalability and upgrade costs, with respect to the problem we are dealing with. Therefore, in this chapter, we introduce Egress TT, a new strategy for network configuration inspired from End-to-End TT where a schedule is only computed for the last port in the path of any flow. Egress TT is designed to reduce the limitations of state of the art End-to-End TT configurations, i.e. to provide a lighter application impact and a reduced computation effort for systems with strong jitter requirements but not too stringent latency constraints. In addition, Egress TT is also designed to ease the transition between legacy and next generation networks by requiring fewer time-triggered-capable devices. We propose two implementations of Egress TT configuration generation for TSN GCL-based networks, for which we compute configurations with the constraint programming methodology presented in Section 9.2.1. The first one is entitled Exclusive Queue Allocation and the second one, aiming at increasing the schedulability of Exclusive Queue Allocation, is entitled Size Based Isolation. Finally, in the last section, we discuss the limitations of Egress TT and the two proposed implementations.

Egress TT, a Novel Approach for the Generation of Configurations Suited for Low Jitter Traffic

10.1.1 What is Egress TT?
In End-to-End TT configurations, the support for low jitter traffic is achieved with a network-wide schedule for all frames on all hops, i.e. the reception and transmission instants of any frame of any flow in all input/output ports are fixed and known a priori (see Def. 59). By doing so, the latency and jitter of all the flows are controlled and nothing unexpected can occur. Fig. 10.1 illustrates an End-to-End TT configuration with one emitter, one receiver and two switches (SWA and SWB).

One issue with this type of configuration is the high computation effort required to determine the parameters of one valid configuration, since a lot of schedules have to be generated for a single configuration. In this context, we introduce Egress TT, a new configuration approach. Our idea is the following: in order to satisfy the requirements of very low jitter flows, a time-triggered scheduling must be done at some point, but instead of using a schedule on all hops in the network, we propose to schedule frames only on the last port in their path so as to reduce the computation effort.

Definition 60 (Egress TT Configurations)
In Egress TT configurations, jitter flows are only scheduled in the last hop port in their path. In all other output ports, medium access is not specified. A network designer could choose any medium access strategy as long as a bound on frame latencies can be computed.

Remark 25
In the next section, we will apply this approach to the configuration of an Ethernet/TSN network but it could be adapted to any networking technology capable of handling frames in a time triggered way (for the last hop ports).

Remark 26
The Egress TT approach (called LETT by [START_REF] Baron | LETT: An execution model for distributed real-time systems[END_REF]) is inspired from the LET - Logical Execution Time - approach [START_REF] Henzinger | Giotto: a time-triggered language for embedded programming[END_REF] where receiving applications can absorb the communication jitter by buffering the data up to a nominal fixed release instant. Nevertheless, Egress TT is designed for systems where the network designer does not wish to handle jitter at destination. For instance, in the spacecraft industry, the existing receiving applications do not have the capability to absorb the network jitter. Therefore, in order to avoid redesigning these applications, this capability is relegated to the core of the network, whose devices are going to be upgraded anyway.

How does it work?
Let us describe how Egress TT configurations work by studying how a message from a jitter flow f is handled in such a configuration. Per Definition 47, message f_l can be emitted at any time during the interval Prod(f_l). The network traversal delay of f_l can be bounded; let NetLatBound(f_l) be such a bound. We illustrate it in Fig. 10.2 (A represents the best traversal delay). The purpose of these configurations is that whenever f_l is emitted by the application, it will be delivered to the destination end-station at a fixed time.

Definition 61 (NetLatBound)
An upper bound on the worst case duration, from deposit (cf. Def.
41) to deposit in the last hop output queue, is denoted ∀f ∈ F, ∀l ∈ N, NetLatBound(f_l).

Practically, there are no requirements on traffic shaping before the last hop on the path of f as long as a NetLatBound bound can be computed. This entails that a message can for instance encounter classical delays due to blocking by other flows (e.g. non-exclusive medium access). The last hop will be in charge of absorbing the upstream network delay variability (i.e. jitter) and delivering the message with low jitter, whatever happened to that message before (be it delayed in the network or waiting in the last hop). To ensure a low jitter reception, it is sufficient that f_l:
• be received in the correct queue of its last hop port before its schedule,
• be at the head of that queue at its schedule,
• not be emitted to the destination before its schedule.
In fine, whenever f_l is emitted by the application, it will arrive at the destination end-station at a fixed time, hence satisfying the very low reception jitter requirements.

What makes Egress TT interesting?
Our novel approach, Egress TT, is interesting for several reasons.

Computation effort
First, it limits the number of schedules to be computed for a single configuration. In Chapter 11, we will assess the computation effort for Egress TT configurations on several use cases. We will observe that this indeed reduces the computation effort of any configuration.

Application Impact
Second, the concept of Egress TT configuration, where frames are scheduled on the last hop port, offers more flexibility for flows' production contracts. In fact, instead of inducing small production windows (like End-to-End TT) due to the strict organisation of messages across the network, Egress TT proposes larger production windows, defined between the frame's reference date and an upper bound (whose computation will be detailed later), allowing more flexibility in frame emissions and therefore a lighter application impact.

Transition from legacy to next-generation
Finally, Egress TT shall ease the transition from legacy to next generation networks. In fact, only the last hop switch (or device in general) in the path of any flow requires time-triggered scheduling capabilities. Therefore, if the legacy core network is already compatible with the other requirements of the industrial, only the last hop device needs to be upgraded. For instance, Ethernet frames could be sent over an existing Spacewire network and only the last hop device would have to be modified. Therefore, this reduces the gap between legacy and next-generation networks compared to End-to-End TT approaches where all devices need to be upgraded.

Suitability of Egress TT
The Egress TT approach is well suited for systems where very low reception jitter is a strong requirement and latency constraints are less stringent. While Egress TT configurations offer some freedom in the core of the network, latency is traded for jitter. In fact, if a message is emitted at the beginning of its production contract and benefits from a favourable situation in the core network (between the emitter and the last hop device), it will have to wait a "long" time in its last hop port before being delivered to the receiving end-system and application. Therefore, very low reception jitter is achieved at the cost of a greater latency. While this is compatible with the requirements of the spacecraft industry where latency constraints are large, Egress TT configurations might not be suitable for systems where latencies shall be minimal.
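The toy computation below illustrates this latency-for-jitter trade numerically. All durations are assumed values (in macroticks), not taken from any use case: whatever delay a message actually experiences upstream, its reception instant is fixed by the last hop schedule.

```python
# Toy illustration of jitter absorption by the last hop (assumed values).
SCHED_LH = 900        # fixed last-hop emission instant of f_l
TAU = 12              # transmission duration of f_l
NET_LAT_BOUND = 400   # bound on the deposit-to-last-hop traversal delay

for actual_delay in (50, 180, 399):          # all below NetLatBound
    arrival_in_queue = 0 + actual_delay      # f_l emitted at t = 0
    waiting = SCHED_LH - arrival_in_queue    # absorbed in the last hop
    reception = SCHED_LH + TAU               # independent of actual_delay
    print(f"upstream delay {actual_delay:3} -> waits {waiting:3}, "
          f"delivered at t = {reception}")
# The reception instant is 912 in all three runs: zero reception jitter,
# paid for by a latency larger than the best-case traversal delay.
```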
In the next section, we will show how the Egress TT approach can be applied to TSN networks with a first implementation of Egress TT configuration generation: Exclusive Queue Allocation.

A First Implementation of Egress TT in IEEE 802.1Qbv Networks: Exclusive Queue Allocation
This first implementation of Egress TT is based on the Exclusive Queue Allocation constraint, which specifies that no two flows with low jitter requirements can share the same queue. We describe it formally hereafter.

Adaptation of Egress TT Configurations to TSN Networks
Let us now formalize how to compute valid Egress TT configurations for TSN networks according to the methodology of Section 9.2.1.

Hypothesis 7 (Synchronization)
We assume that emitters and last hop devices are synchronized and that the synchronization error is negligible with respect to the order of magnitude of the requirements presented hereafter.

Remark 27
The above choices entail that no TSA will be used in the configuration process.

We now formally define the concept of last hop ports.

Last hop port
We distinguish the output ports which are the last hop of some flows from the others.

Definition 63 (Last Hop Ports)
For a flow f following the path p_1, ..., p_l, we denote by LH_f = p_l the last hop port. The set of last hop ports is LH = {p ∈ P | ∃f ∈ F, LH_f = p} and the set of last jitter ports is LH_j = {p ∈ P | ∃f ∈ F_j, LH_f = p}.

Port configuration
The configuration of the ports P \ LH_j is equivalent to an Ethernet-capable port configuration. When a port is not a last hop, there is nothing much else to do, hence we must now focus on the configuration of last hop ports. In fact, in any last hop, that is in a port p = (q_0, ..., q_7) ∈ LH_j, gate schedules follow an exclusive gating pattern [START_REF] Daigmore | Impact on credit freeze before gate closing in CBS and GCL integration into TSN[END_REF]:
• jitter flows and no jitter flows are placed in different queues;

Decision variables
The decision variables, i.e. the variables for which we are trying to find a value, should be those of equation 9. The reception instant of a message then satisfies:
T_r(f_l) = SchedLH[f_l] + τ_{f_l}
Therefore:
Lat_{f_l} = T_r(f_l) − Ref(f_l) = SchedLH[f_l] + τ_{f_l} − Ref(f_l)
From the variables SchedLH, it is possible to reconstruct GCL. Indeed, let us consider a jitter flow f and its last hop LH_f = (q_0, ..., q_7) ∈ LH. f will produce P_MAF/P_f events: every time a frame of f is supposed to be transmitted, the gate should be open. More practically, for each f_l, there is an event e = ⟨s, SchedLH[f_l], τ_{f_l}⟩ where s = ⟨s_0, ..., s_7⟩ with s_{FtQM_p(f)} = O and s_j = C for j ≠ FtQM_p(f). Thus GCL is the union of all events associated to all jitter messages f_l where LH_f = p. This union is completed with gate openings of the queues not allocated to jitter flows on the remaining time (when the jitter associated queue gates are closed). The gate events are generated by a post processing procedure.

Remark 29
Since the system we consider is periodic, it is only necessary to compute SchedLH[f_l] on one period of the system (P_MAF).
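This post processing can be sketched as follows. It is a minimal illustration, not the tool developed during the PhD, and it assumes that the jitter schedules on the port do not overlap (which the link occupation constraint introduced below guarantees).

```python
# Minimal sketch of the GCL reconstruction for one last-hop port: one
# exclusive-gating event per jitter message, completed with "filler"
# events that open the non-jitter queues in the remaining time.

def build_gcl(jitter_msgs, p_maf, jitter_queues):
    """jitter_msgs: (queue, sched, tau) triples, one per jitter message;
    all instants in macroticks; assumes non-overlapping schedules."""
    events, t = [], 0
    filler = ["C" if q in jitter_queues else "O" for q in range(8)]
    for queue, sched, tau in sorted(jitter_msgs, key=lambda m: m[1]):
        if sched > t:                  # open non-jitter queues until sched
            events.append((filler, t, sched - t))
        states = ["C"] * 8
        states[queue] = "O"            # exclusive gating of the jitter queue
        events.append((states, sched, tau))
        t = sched + tau
    if t < p_maf:                      # trailing filler up to P_MAF
        events.append((filler, t, p_maf - t))
    return events

# Two jitter messages mapped to queues 6 and 7 of the same port:
for s, start, dur in build_gcl([(6, 100, 2), (7, 500, 2)],
                               p_maf=1000, jitter_queues={6, 7}):
    print("".join(s), start, dur)
```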
Finally, the decision variables for the problem become:
∀f ∈ F, ∀p ∈ Path_f : FtQM_p(f)
∀f ∈ F_j, ∀l < P_MAF/P_f : SchedLH[f_l]    (10.1)

Exclusive Queue Allocation Concept
In order to satisfy the safety requirement in Egress TT configurations for TSN networks, instead of reusing existing constraints (i.e. Flow/Frame Isolation) from the state of the art which lead to configurations with a high application impact, we introduce our own "isolation" constraint: Exclusive Queue Allocation.

Definition 65 (Exclusive Queue Allocation)
With Exclusive Queue Allocation, each jitter flow is paired with one dedicated queue in its last hop port. No other flow can use that queue.

Being alone in the queue removes the possible non-determinism induced by the TSN Time Aware Shaper mechanism (see Section 9.2.2). Indeed, if a jitter message is lost in (or before) the last hop port, since the medium access for a jitter queue is done via exclusive gating, no messages from other queues will take its place (and suffer from unwanted jitter as illustrated in Fig. 9.5b). Within the same queue, it could still lead to unwanted jitter but this is compatible with safety requirement 1, since in a specific queue, there can only be frames from the same flow.

Constraints Formalization for Exclusive Queue Allocation
Across the network, the Flow to Queue Mapping rule will not always be the same. In all ports except last hop ones, jitter and no jitter flows are allowed to share the same queue. However, Exclusive Queue Allocation is applied in last hop ports for jitter flows. The other flows can share their last hop ports.

Remark 30 (Macrotick)
To simplify the formulation of the equations in the rest of the paper, the instants and durations will be written in macroticks (like [START_REF] Silviu | Scheduling real-time communication in ieee 802.1qbv time sensitive networks[END_REF]). For instance, if the macrotick is the necessary duration to transmit a frame of 64 bytes, then Size_f = 128 ⟹ τ_{f_l} = 2 (reminder: τ_{f_l} is the transmission duration of f_l, defined in Def. [START_REF]Spacewire -Protocol Identification[END_REF]).

Constraint 1 (Exclusive Queue Allocation)
Each jitter flow is associated with one dedicated queue:
∀f ≠ g ∈ F_j, LH_f = LH_g ⟹ FtQM_{LH_f}(f) ≠ FtQM_{LH_f}(g)

Links are modelled for the solver as two unidirectional links with opposite directions.

Constraint 2 (Link Occupation)
A link can only send one message at a time in one direction, i.e. ∀f_l, g_m ∈ F_j s.t. LH_f = LH_g:
SchedLH[f_l] + τ_{f_l} < SchedLH[g_m] or SchedLH[g_m] + τ_{g_m} < SchedLH[f_l]

Performance Constraints
All jitter flows are subject to deadline and jitter constraints.

Constraint 3 (Ordered Delivery)
For any jitter flow, the i-th message shall be delivered before the (i+k)-th message of that flow, i.e. ∀f ∈ F_j, ∀l, m ∈ N, l < m, T_r(f_l) < T_r(f_m). This is translated as ∀f ∈ F_j, ∀l, m ∈ N, l < m:
SchedLH[f_l] < SchedLH[f_m]

Constraint 4 (Deadline)
The delivery instant of a flow is bounded, indeed ∀f ∈ F, ∀l ∈ N, Ref(f_l) ≤ T_r(f_l) ≤ f_l.deadline. This is translated as:
∀f ∈ F_j, ∀l ∈ N, Ref(f_l) ≤ SchedLH[f_l] + τ_{f_l} ≤ f_l.deadline

Constraint 5 (Jitter)
For any jitter flow, the difference of latency of any two messages is bounded by the flow's jitter constraint, i.e. ∀f ∈ F_j, ∀i ≠ j ∈ N, |Lat_{f_i} − Lat_{f_j}| < f.jitter. This is translated as ∀f ∈ F_j, ∀i ≠ j ∈ N:
|SchedLH[f_i] − Ref(f_i) − (SchedLH[f_j] − Ref(f_j))| < f.jitter
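To make the shape of these constraints concrete, here is a toy encoding of Constraints 3 to 5 for a single flow with two messages, written with the z3 SMT solver. This is an illustrative sketch only: the actual model of this work is written in OPL and solved with CPLEX, and the numeric values below are hypothetical.

```python
# Toy encoding of Constraints 3-5 (one jitter flow, two messages).
from z3 import Int, Solver, And, sat

REF = [0, 1000]           # Ref(f_0), Ref(f_1): a period of 1000 macroticks
DEADLINE = [1000, 2000]   # implicit deadlines (end of each period)
TAU = 2                   # transmission duration, in macroticks
JITTER = 1                # jitter bound of the flow, in macroticks

sched = [Int("sched_f0"), Int("sched_f1")]
lat = [sched[l] + TAU - REF[l] for l in range(2)]   # Lat of each message

s = Solver()
s.add(sched[0] < sched[1])                          # Constraint 3 (order)
for l in range(2):                                  # Constraint 4 (deadline)
    s.add(And(REF[l] <= sched[l] + TAU, sched[l] + TAU <= DEADLINE[l]))
diff = lat[0] - lat[1]                              # Constraint 5 (jitter),
s.add(And(diff < JITTER, -diff < JITTER))           # |diff| unfolded

if s.check() == sat:
    print(s.model())      # e.g. sched_f0 = 0, sched_f1 = 1000
```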
Last Hop associated Constraints
In order to compute the last hop schedule, it is necessary to have an upper bound NetLatBound on the traversal time of flows until their last hop port.

Remark 31
NetLatBound could be estimated with classical worst case traversal time methods such as Response Time Analysis [START_REF] Maxim | Delay analysis of avb traffic in time-sensitive networks (tsn)[END_REF] or Network Calculus [START_REF] Zhao | Worst-case latency analysis for ieee 802.1qbv time sensitive networks using network calculus[END_REF], or any other method as long as it can be integrated in the constraint programming solver. In this manuscript, a bound NetLatBound is estimated with Response Time Analysis [START_REF] Sanjoy K Baruah | Response-time analysis for mixed criticality systems[END_REF] because this method was directly implementable as a constraint in the solver. In fact, the solver needs to compute NetLatBound with every new configuration since NetLatBound depends on decision variables. As mentioned earlier, any other method could be used to estimate that bound as long as it can be either integrated in constraints or coupled with the solver.

Constraint 6 (Traversal Time Constraint)
The release instant of any message of a jitter flow shall be within the flow's period. This is expressed as ∀f ∈ F_j, ∀l ∈ N:
Ref(f_l) ≤ SchedLH[f_l] − NetLatBound(f_l) < Ref(f_{l+1})

Figure 10.3: Release instants for jitter flows

Consider a jitter flow f and its l-th message f_l. With Egress TT configurations, in order to ensure that f_l arrives in FtQM_{LH_f}(f) before its schedule, the message must be sent after its release and before some bound. We now explain how NetLatBound is computed.

Definition 66 (Bound on worst case latency NetLatBound)
A bound on the worst case deposit to last hop emission latency is computed as ∀f ∈ F, ∀l ∈ N:
NetLatBound(f_l) = Σ_{p ∈ Path_f, p ≠ LH_f} ∆(f_l, p) + τ_{f_l}
where ∆(f_l, p) is a bound on the worst case duration for f_l at output port p.

In any port, f_l can be delayed, in the worst case, by several other messages. First, f_l can be delayed by all messages with the same or higher priority than f_l, but also by one lower priority frame which arrived in the port before f_l. These delays are known as higher priority blocking HPB(f_l, p), same priority blocking SPB(f_l, p) and lower priority blocking LPB(f_l, p). This delay model is inspired from [START_REF] Thiele | Improving formal timing analysis of switched ethernet by exploiting traffic stream correlations[END_REF][START_REF] Axer | Exploiting shaper context to improve performance bounds of ethernet avb networks[END_REF][START_REF] Thiele | Formal worst-case performance analysis of time-sensitive ethernet with frame preemption[END_REF]. We illustrate the notion of higher, same and lower priority blocking for a frame f_l in Fig. 10.4.

Figure 10.4: Higher, same and lower priority blocking for a frame f_l

∆(f_l, p) is defined as ∀f ∈ F, ∀l ∈ N, ∀p ∈ Path_f, p ≠ LH_f:
∆(f_l, p) = (HPB(f_l, p) + SPB(f_l, p) + LPB(f_l, p)) / r

It is now necessary to determine which messages will be accounted for in HPB, SPB and LPB. In our system, all messages have a deadline smaller than or equal to the end of their period (implicit deadlines). Therefore, a finite number of instances (i.e.
frames) of each flow may be considered interfering with any message of a defined flow (cf. [START_REF] Steiner | Synthesis of static communication schedules for mixed-criticality systems[END_REF][START_REF] Bauer | Worst-case end-to-end delay analysis of an avionics afdx network[END_REF]).

Remark 32
The considerations of Prop. 4 may seem overly pessimistic with respect to a fine-grained worst case delay analysis. However, we believe they are sufficient to demonstrate the concept of Egress TT. Improving this bound would of course lead to an improvement of the performances of our approach (in particular for the integration of no jitter traffic).

Remark 33 (Impact of common period start)
In our model, there is no offset between periods, i.e. any period starts at the beginning of a MIF cycle. Therefore, the number of interfering instances is reduced to ⌈P_f/P_g⌉.

Definition 69 (Blocking durations)
We formulate the priority blocking durations as follows: ∀f ∈ F, ∀l ∈ N, ∀p ∈ Path_f, p ≠ LH_f:
HPB(f_l, p) = Σ_{g ∈ FlowPort(p) | FtQM_p(g) > FtQM_p(f)} ⌈P_f/P_g⌉ × Size_g
SPB(f_l, p) = Σ_{g ∈ FlowPort(p) | g ≠ f, FtQM_p(g) = FtQM_p(f)} ⌈P_f/P_g⌉ × Size_g
LPB(f_l, p) = max_{g ∈ FlowPort(p), FtQM_p(g) < FtQM_p(f)} Size_g    (10.2)

Focus on the Computation of Release Instants

Optimization criteria
The problem has been encoded as a set of decision variables and a set of constraints to be solved by a constraint solver. Egress TT has been designed to provide a lighter application impact than End-to-End TT. The development effort requirement is ensured with an optimization criterion: maximize the production windows of the configurations (i.e. minimize the application impact). We encode this criterion in equation 10.3:
maximize Σ_{f ∈ F s.t. f.jitter ≠ NA} (Release(f_l) − Ref(f_l))²    (10.3)

Choice 6
We chose a quadratic cost function to restrict the solutions to homogeneous solutions only, where no flow is compensating for another flow (e.g. one flow has a tiny production window and another flow compensates with a huge one), as requested by our industrial use case. This function could be modified depending on specific system requirements.

Therefore, this requires computing the release instants for both jitter and no jitter flows.

Definition 70 (Release(f_l) for jitter flows)
SchedLH[f_l] occurs exactly NetLatBound(f_l) after Release(f_l). Therefore:
∀f ∈ F_j, ∀l ∈ N, Release(f_l) = SchedLH[f_l] − NetLatBound(f_l)

Definition 71 (Release(f_l) for no jitter flows)
Release(f_l) is computed a posteriori via a post processing. Once the last hop emission instants for jitter flows have been decided, the scheduling instants of no jitter flows are decided with the remaining port capacity (i.e. when gates for jitter flows are closed).

Remark 34
Because the release instant for no jitter flows is computed a posteriori, it is necessary to check the correctness of that release instant, that is ∀f ∈ F\F_j, ∀l ∈ N, Release(f_l) ≥ Ref(f_l).

The last hop gate of no jitter flows is always open (except when some jitter message is being emitted and the output port is its last hop). Moreover, several no jitter messages may be in the same queue at the same time. Thus, the release instant is defined as ∀f ∈ F\F_j, ∀l ∈ N:
Release(f_l) = f_l.deadline − NetLatBound(f_l) − ∆^{WC+closed}_{LH_f}(f_l)
where ∆^{WC+closed}_{LH_f}(f_l) denotes the worst case duration needed to transmit, in port LH_f, in queue FtQM_{LH_f}(f), the no jitter message f_l, including the time during which the gate of FtQM_{LH_f}(f) is closed.
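Stepping back to Definitions 66 and 69, the blocking terms translate almost literally into code; this is the NetLatBound that appears in the release instants above. The sketch below is simplified and illustrative: the flow description format is assumed, and the division of ∆ by the link rate r follows the reconstruction given earlier.

```python
import math

def delta(f, port_flows, ftqm, r):
    """Bound on the blocking of flow f at one output port (Def. 69).
    port_flows: flows crossing the port; ftqm: flow name -> queue priority
    on this port; r: link rate in bytes per macrotick."""
    pf, qf = f["period"], ftqm[f["name"]]
    hpb = sum(math.ceil(pf / g["period"]) * g["size"]
              for g in port_flows if ftqm[g["name"]] > qf)
    spb = sum(math.ceil(pf / g["period"]) * g["size"]
              for g in port_flows
              if g["name"] != f["name"] and ftqm[g["name"]] == qf)
    lpb = max((g["size"] for g in port_flows if ftqm[g["name"]] < qf),
              default=0)
    return (hpb + spb + lpb) / r

def net_lat_bound(f, path_ports, ftqm_per_port, r):
    """Def. 66: sum of the per-port blockings over all ports except the
    last hop, plus the transmission duration of the frame itself."""
    tau = f["size"] / r
    return sum(delta(f, flows, ftqm, r)
               for flows, ftqm in zip(path_ports[:-1], ftqm_per_port[:-1])) + tau
```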
In order to cope with the different sizes of messages in the queue, a gate will be considered open if and only if it can fit more than the biggest message assigned to that queue.

Definition 72 (e_n.IsOpen(t))
Considering an event e_n = ⟨s_n, t_n, d_n⟩ from GCL(p) and a date t, e_n.IsOpen(t) returns true if, between t and the beginning of e_n, there is enough time to transmit a maximum size frame from the set of frames travelling through p. Mathematically:
e_n.IsOpen(t) = ((t − t_n) > max_{g ∈ FlowPort(p)} (Size_g))

A bound on the worst case duration ∆^{WC+closed}_{LH_f}(f_l) is determined with an algorithm not disclosed in this paper:
∆^{WC+closed}_{LH_f}(f_l) = Compute(∆(f_l, LH_f) × r, f_l.deadline, FtQM_{LH_f}(f), max_{g ∈ FlowPort(LH_f)}(Size_g))
where Compute denotes the undisclosed algorithm mentioned above.

Wrap-up
The computation of release instants concludes the network configuration with our first Egress TT implementation, Exclusive Queue Allocation. We can generate schedules for frames (i.e. SchedLH[f_l]), that are transformed into GCL events (i.e. GCL(p)), as well as an assignment of flows into queues (i.e. FtQM_{LH_f}(f)), which were the required elements for the configuration of the network. In addition, we have provided the network designer with information on the available production contract for all flows in the network (i.e. Release(f_l)) which he can use to design the applications running over that network.

Exclusive Queue Allocation comes with one strong limitation: a last hop port can only receive up to 8 different jitter flows (or 7 jitter flows with additional no jitter traffic). While this might be sufficient for small networks, it is not scalable enough for the configuration of large networks such as Orion CEV (cf. Section 9.4.2). Therefore, in the next section, we introduce Size Based Isolation, a second implementation of Egress TT for 802.1Qbv networks aiming at reducing the limitation on the number of flows of Exclusive Queue Allocation.

A Second Implementation of Egress TT with Improvement of the Schedulability of Exclusive Queue Allocation: Size Based Isolation

10.3.1 Size Based Isolation Concept
In the Exclusive Queue Allocation constraint, each jitter flow was paired with one dedicated queue in its last hop port and no other flow could use that queue. Being alone in the queue removed the possible non-determinism induced by the TSN Time Aware Shaper mechanism (see Section 9.2.2). However, respecting Exclusive Queue Allocation came at a cost: an end-station cannot receive more than eight jitter flows (or seven if it also receives no jitter traffic). With Size Based Isolation, we relax that constraint so that several messages from different flows are allowed to exist in the queue at the same time. However, it is necessary to manage the behaviour of messages to satisfy the safety objective.

Definition 74 (Size Based Isolation)
All frames sharing the same queue on a last hop port shall be enqueued in increasing frame size order. The required sizes may be achieved with the help of padding.

By ensuring that frames are enqueued in increasing size order, if a frame is lost, the following frame will not be emitted in the slot of the lost frame since its size is bigger than the opening of the gate. Instead, the frame will be, as expected, emitted in its allocated slot.
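A greedy sketch of this padding step is given below. In the real model, the paddings are decision variables chosen by the solver together with the schedule (see the decision variable introduced later in this section); the message names and sizes here are hypothetical.

```python
# Greedy sketch: pad messages that share a last-hop queue so that their
# sizes strictly increase with the scheduled emission order.
MTU = 1500   # bound on the padded frame size, in bytes

def pad_for_increasing_sizes(msgs):
    """msgs: (name, size) pairs in scheduled emission order."""
    out, prev = [], 0
    for name, size in msgs:
        padding = max(0, prev + 1 - size)   # enforce size + padding > prev
        assert size + padding < MTU, f"{name}: padding would exceed the MTU"
        out.append((name, size, padding))
        prev = size + padding
    return out

# g_m, h_l, f_l, j_k scheduled in this order in the same queue:
print(pad_for_increasing_sizes(
    [("g_m", 100), ("h_l", 80), ("f_l", 300), ("j_k", 64)]))
# [('g_m', 100, 0), ('h_l', 80, 21), ('f_l', 300, 0), ('j_k', 64, 237)]
```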
This concept is illustrated in Fig. 10.7: we show the nominal situation in Fig. 10.7a and the behaviour in case of message loss in Fig. 10.7b. Even when f_l is lost, j_k is not sent in place of f_l.

Being unable to impose an order between messages coming from different sources in the last hop port without a negative impact on the application development, we impose that flows sharing a queue in a last hop port shall:
• come from the same emitter,
• and share the same route.
In this situation, the application impact will be slightly increased: in addition to the traffic contract, the emitter will have to ensure an emission order.

Constraints Formalization for Size Based Isolation
Let us formulate the constraints for Egress TT with Size Based Isolation.

Hypothesis 9
Let us assume that when two flows have the same source and the same destination, they share the same route, i.e. ∀f, g ∈ F, Src_f = Src_g and Dest_f = Dest_g ⟹ Path(f) = Path(g).

Rationale 1
This hypothesis is representative of the Airbus generic satellite system presented in Section 9.4.1.

Remark 35
If this hypothesis is relaxed, the model could be modified so that the order of deposit in the last hop queue required for Size Based Isolation is maintained, probably with the help of additional constraints.

Additional Decision Variable
We add an additional decision variable Padd which represents the additional padding used to increase the size of frames.

Definition 75 (Padd_{f_l})
Let f_l be a message of f; we define Padd_{f_l} as an additional amount of bytes that is used to increase the size of f_l. In particular, we have:
∀f ∈ F, ∀l ∈ N, Size_f ≤ Size_f + Padd_{f_l} < MTU_Ethernet

Remark 36
The above formulation is generic and allows taking into account systems where periods are not harmonic. In our model, since periods are harmonic, it would be possible to only compute one padding per flow instead of one padding per frame.

Constraints
We now extend the definition of the transmission duration of f_l, i.e. τ_{f_l}:
∀f ∈ F, ∀l ∈ N, τ_{f_l} = (Size_f + Padd_{f_l}) / r
Then, we reuse all the constraints from the first approach (i.e. 2, 3, 4, 5 and 6) except Constraint 1. In addition, we define two new constraints: Size Based Isolation and Queue Per Emitter. The interference relation ♯ between messages satisfies:
f_l ♯ g_m ⟹ max(Ref(f_l), Ref(g_m)) < min(f_l.deadline, g_m.deadline)

Constraint 7 (Size Based Isolation)
All interfering messages in a last hop port shall be enqueued and transmitted in increasing message size order on the last hop, i.e. ∀f, g ∈ Queue_p(i), ∀f_l, g_m s.t. f_l ♯ g_m:
SchedLH[f_l] < SchedLH[g_m] ⟹ τ_{f_l} < τ_{g_m}

Constraint 8 (Queue Per Emitter)
Any two jitter flows having different sources and the same destination will be placed into different queues, i.e. ∀f, g ∈ F_j s.t. LH_f = LH_g:
Src_f ≠ Src_g ⟹ FtQM_{LH_f}(f) ≠ FtQM_{LH_f}(g)

The newly computed configurations allow a greater number of jitter flows per port, but not without a cost. In addition to the release date, the emitting applications must follow an order constraint on emission so that their messages arrive in the correct order in the last hop port.

Limitations of Exclusive Queue Allocation, Size Based Isolation and Egress TT
First, Egress TT with Exclusive Queue Allocation will always fail to find configurations when a device is supposed to receive more than 8 jitter flows. Indeed, this comes from the Exclusive Queue Allocation constraint since a queue is dedicated to one jitter flow. Then, Egress TT with Size Based Isolation will always fail when a device is set to receive flows coming from more than eight different sources. Again, this is due to the Size Based Isolation constraint and our objective to keep the application impact relatively low.
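These two structural limits can be checked before even launching the solver. The sketch below uses an assumed flow description format, for illustration only: it counts, per last hop port, the jitter flows (limit for Exclusive Queue Allocation) and the distinct sources (limit for Size Based Isolation).

```python
from collections import defaultdict

def precheck(flows, n_queues=8):
    """flows: dicts with 'name', 'src', 'last_hop' and 'jitter' fields."""
    jitter_per_port = defaultdict(set)
    sources_per_port = defaultdict(set)
    for f in flows:
        if f["jitter"]:
            jitter_per_port[f["last_hop"]].add(f["name"])
            sources_per_port[f["last_hop"]].add(f["src"])
    eqa_ok = all(len(v) <= n_queues for v in jitter_per_port.values())
    sbi_ok = all(len(v) <= n_queues for v in sources_per_port.values())
    return eqa_ok, sbi_ok  # Exclusive Queue Allocation / Size Based Isolation
```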
In addition, the number of low-jitter flows per queue with Size Based Isolation will be limited by the gate granularity, i.e. the smallest duration of a gate event. The maximum number of jitter frames that can be held in a queue at the same time, χ_p(i), is computed with the following formula: ∀p ∈ LH_j, ∀i ∈ [0, 7], if ∃f ∈ F_j ∈ Queue_p(i):
χ_p(i) = ⌈ MaxSize / gcd_{j ∈ [0, u−1]}(d_j) ⌉
where gcd_{j ∈ [0, u−1]}(d_j) is the gate granularity (see Def. 53). For instance, with a granularity of 1 µs, the smallest open event will be able to transmit 125 bytes (i.e. 1 µs × r / 8 with r = 1 Gbit/s). Therefore, considering that the maximum frame size is 1518 bytes, this means that a queue can hold up to 13 (i.e. ⌈1538/125⌉) low-jitter frames.

Egress TT will fail when the post processing on no jitter flows fails (i.e. deadlines of no jitter flows cannot be met). Finally, Egress TT will fail when the computation of NetLatBound(f_l) becomes too pessimistic: over-reservation of resources (Egress TT) is always less scalable than exact allocation (End-to-End TT). In the above situations, among others, End-to-End TT will always be a better approach. Nevertheless, we believe that the expected improved scalability, in particular the shorter configuration time, as well as the by-design lower application impact will still attract industrials towards Egress TT.

Conclusion
In this chapter, we presented Egress TT, a new scheduling strategy for systems with very low reception jitter requirements and medium latency constraints. Then we proposed Exclusive Queue Allocation and Size Based Isolation, two implementations of Egress TT for TSN 802.1Qbv networks. We first detailed the concept of Egress TT, and then applied the methodology introduced in the state of the art by defining a set of constraints for a constraint programming solver. We expect from Egress TT configurations better scalability, a lighter application impact and an easier transition from legacy to next-generation. In order to assess the performance (compared to what we expected in Section 10.1.3), we evaluate in the next chapter our two implementations of Egress TT, namely Exclusive Queue Allocation and Size Based Isolation, as well as an implementation of state of the art End-to-End TT on several use cases.

Chapter 11
Experimentations: Performance Comparison of Egress TT and End-to-End TT Configurations

In this chapter, we propose to assess the performance of the Egress TT configurations introduced previously, compared to our own implementation of the state of the art End-to-End TT approach. In the first section, we give an overview of the software architecture to compute TSN configurations that was developed during this PhD. Then, in the second section, we introduce two comparison axes, i.e. scalability and latencies, and evaluate them on small use cases. We compare the obtained metrics between End-to-End TT and Egress TT. Finally, in the last section, we compare the performance of Egress TT and End-to-End TT on the "larger" use cases presented in Sections 9.4.1 and 9.4.2.

Software Architecture

Data Conversion
The Data Conversion part is in charge of transposing the system that needs to be configured into the model defined in Section 9.2.1.
Practically, the network designer provides the NetworkDescriptor.csv file, which is our standardized input used to define the system. It includes the description of the devices, of the links and of how they connect the devices with one another, the flows and their constraints. As in classical constraint generation software, we rely on .dat and .mod files to describe the system and its associated constraints. In our case, the NetworkDescriptor.csv file is converted with the ConvertToDat function, encoded in Python, into a .dat file usable in the constraint programming solver. A GenerateNetworkModel function, encoded via the ILOG-CPLEX scripting language [START_REF]CPLEX User's Manual. Ibm ilog cplex optimization studio[END_REF], generates, based on the .dat file, all the input .mod files describing the model and the constraints required for the configuration generation.

Configuration
Then, the Configuration part is the core function in this software architecture. It is in charge of effectively generating gate schedules via the constraint programming methodology. The GenerateSchedule function takes as input the .dat and .mod files describing the model and the constraints. It produces one or several files proposing a valid network configuration. The function was implemented in OPL and executed with the help of the IBM CPLEX solver. This network configuration is processed/refined with the GenerateGCL function which converts the network configuration files into TSN Gate Control List configuration files that will be used in the Simulator.

Simulation
The Simulation part is in charge of providing a simulation capability for the configurations generated in the Configuration part. The NetworkSimulator function is used to verify that the proposed configuration is indeed valid with respect to the quality of service requirements. It is a Python simulator developed during the PhD because it was more easily modifiable compared to existing off-the-shelf tools.

Validation
Once the system has been simulated during one hyper-period (i.e. P_MAF), the ConstraintChecker function checks that every constraint is satisfied.

Remark 37
This software also has scripts to interface with the RTAW Pegase [START_REF]RealTime-at-Work. RTaW-Pegase: Modeling, Simulation and automated Configuration of real-time communication architectures[END_REF] configuration and simulation tool, so that the configurations proposed by the RTAW tool can be simulated and/or verified. In particular, this interface has been used for an internship (master level) during the PhD.

Latency and Scalability Comparison
The purpose of this section is to evaluate our approach and also to compare it with the state of the art. The comparison will be based on two criteria: scalability and network latency. The software architecture for configuration generation, simulation and validation is briefly introduced in Section 11.1. Both approaches were implemented in OPL and the computation was done using CPLEX v12.9.0 running on a Ubuntu computer embedding an Intel Xeon E5-2600 v3 @ 2.6GHz and 62 GiBytes of memory.

Remark 38
For all the experiments of this document, the network devices and links are supposed to work at 1 Gbit/s.

Scalability
In this section, we use the sets of constraints without the optimization criterion for Egress TT and our implementation of End-to-End TT with Frame Isolation. We compare the results provided by the solver for both approaches.
To assess the scalability, we increase the number of switches on the paths and the number of receivers, and we monitor two metrics:
• the number of constraints necessary to generate a configuration,
• the duration of the computation to find a configuration.

Path size increase
In this first set of experiments, we consider a simple topology with one emitting end-station and one receiving end-station connected with a set of switches, from 1 to 10 switches (cf. Fig. 11.2). This allows quantifying the computation cost when adding a switch in the path. Fig. 11.3 presents the number of constraints and the duration for the computation of one configuration. The number of constraints for both Egress TT implementations appears constant whereas it increases with End-to-End TT. This result was expected since, in End-to-End TT configurations, an emission instant has to be constructed for all frames in all hops of the network. In Egress TT, the size or shape of the path of any flow is taken into account in NetLatBound. A change in the path of any flow in Egress TT will only change the value of NetLatBound and not add any additional constraint. Thus the resolution time for Egress TT remains almost constant and thus much faster than End-to-End TT. Between Exclusive Queue Allocation and Size Based Isolation, Fig. 11.4 shows that the number of constraints in Size Based Isolation is slightly greater than in Exclusive Queue Allocation and therefore the computation time is also slightly greater, while remaining significantly faster than End-to-End TT configurations. This result was also expected: in Size Based Isolation, in addition to gate events, the additional padding for all the flows in the network has to be computed.

Remark 39
In the experiment with 6 switches, the measured duration for End-to-End TT does not follow the trend of the other experiments. We have no justification for this deviation and will investigate it in future works.

Number of receivers increase
In the second set of experiments, we consider the same topology and increase the number of receiving end-stations from 1 to 6 (cf. Fig. 11.5). This helps quantify the computation cost of adding an end-station. Each additional end-station will be receiving 15 flows with characteristics identical to those of Table 11.1, all emitted from "Sender".

The next experiment consists in running both End-to-End TT and Egress TT algorithms on the same use case (depicted in Fig. 11.8) while using the optimization criterion and comparing the resulting network latencies for both configurations. However, since our optimization criterion (brought by Development Effort Requirement 1) is not tackled by the literature, we replaced it by an optimization criterion aiming at minimizing network latency for End-to-End TT. We consider a set of 4 data paths detailed in Table 11.2. On each data path, the source sends a set of 15 flows to the destination. Flows have the same characteristics as the ones of Table 11.1. On the following graphs, we will only depict the average latency from data paths ① and ④.

Table 11.2: Data paths
① : path SW A - SW B - SW D, destination ES E, flows f_1 ... f_15
② : source ES B, path SW A - SW B, destination ES D, flows g_1 ... g_15
③ : source ES C, path SW C - SW A - SW B - SW D, destination ES F, flows h_1 ... h_15
④ : source ES D, path SW B - SW D, destination ES E, flows i_1 ... i_15

As a reminder, the latency per frame (see Remark.
[START_REF] Chaine | TSN Support for Quality of Service in Space ?[END_REF]) is:
• in End-to-End TT: Lat_{f_l} = T_r(f_l) − Ref(f_l)
• in Egress TT: Lat_{f_l} = SchedLH[f_l] + τ_{f_l} − Ref(f_l)

Due to the intrinsic limitation of Exclusive Queue Allocation (maximum of 7/8 jitter flows per last hop port), we have compared Exclusive Queue Allocation with state of the art End-to-End TT only using data paths ①, ② and ③. We have chosen to represent two concepts of latencies in these figures. On the one hand, Fig. 11.9 represents the concept of latencies described in our model (cf. Chapter 8). We call these latencies Application Latencies. On the other hand, Fig. 11.10 represents the classical concept of latencies in the networking community (hereafter called Network Latencies). Unless stated otherwise, we will use the word latency to refer to Application Latency. As a reminder:
Our model: Lat_{f_l} = T_r(f_l) − Ref(f_l)
State of the art: Lat_{f_l} = T_r(f_l) − T_e(f_l)

Experiments show that Egress TT configurations will lead to greater network latencies than End-to-End TT configurations. This increase of network latency is due to the definition of Egress TT configurations: a message is delayed by a bound on its worst case latency so that it can always be delivered at the same time and meet its jitter requirements. In addition, Egress TT also leads to greater application latencies. However, one element is worth mentioning when looking at the charts of Fig. 11.9. In fact, by design, Egress TT configurations will induce application latencies (i.e. the definition of our model) as large as possible whereas End-to-End TT will induce network latencies as small as possible. Therefore, the fact that End-to-End TT and Egress TT latencies are close in the graph does not mean that Egress TT performs as well as End-to-End TT. Instead, it just means that for that particular flow, Egress TT did not perform as well as for other messages, i.e. that the production contract of that message is tighter than others.

In summary, according to the experimental results above, Egress TT configurations reduce the computation effort (and computation time) compared to End-to-End TT configurations at the cost of a greater network latency while having, by design, a lower impact on applications. It is important to insist on the meaning of these results: this chapter compares the scalability and latency of two approaches (End-to-End TT and Egress TT) that were not designed for the same purpose. While End-to-End TT aimed at satisfying jitter requirements while minimizing network latency, Egress TT configurations aimed at satisfying jitter requirements while minimizing application impact and reducing the computation effort, based on the assumption that minimizing latency is not always required in implicit deadline systems and might over-constrain the system. Therefore, one solution is not better than the other. Rather, a network system designer will have the ability to choose, according to his needs, between one approach or the other.

Experimentation on Larger Use Cases

Airbus Generic Next-Generation Satellite Use Case
This first larger scale experiment relies on the Airbus Generic Next-Generation Satellite Use Case described in Section 9.4.1. This use case is composed of 120 flows: 18 jitter flows and 102 no jitter flows described in Appendix B.1. We generated configurations for this system with the two implementations from our second contribution, i.e.
Exclusive Queue Allocation and Size Based Isolation, both using the optimization criterion aiming at maximizing the production contract (see Section 10.2.4).

Exclusive Queue Allocation
In Fig. 11.11 (resp. Fig. 11.12), we present the computed application (resp. network) latencies for a configuration of the Airbus Generic Satellite generated with Exclusive Queue Allocation and a configuration generated with our implementation of End-to-End TT. Because of the limitation of Egress TT configurations with Exclusive Queue Allocation (i.e. a last hop port can only receive up to 8 jitter flows without no jitter traffic, or 7 jitter flows plus additional no jitter traffic), we only considered 13 jitter flows and 102 no jitter flows in this figure. It appears clearly in Fig. 11.11a that Exclusive Queue Allocation generates configurations with greater latencies than End-to-End TT. On average, latencies from the Exclusive Queue Allocation configuration are 8500 times greater than latencies from the End-to-End TT configuration. Nevertheless, the generation with Exclusive Queue Allocation took roughly 9 seconds while the generation with our End-to-End TT implementation lasted about 10 minutes. In addition, we observed that application latencies, for most flows, can take up to 99% of their period with Exclusive Queue Allocation. This is an expected result. In fact, let us recall the optimization criterion:
maximize Σ_{f ∈ F s.t. f.jitter ≠ NA} (Release(f_l) − Ref(f_l))²
Let us write Release(f_l) as a function of SchedLH[f_l]:
maximize Σ_{f ∈ F s.t. f.jitter ≠ NA} (SchedLH[f_l] − NetLatBound(f_l) − Ref(f_l))²
Now, we replace SchedLH[f_l] as a function of Lat_{f_l}:
maximize Σ_{f ∈ F s.t. f.jitter ≠ NA} (Lat_{f_l} − τ_{f_l} − NetLatBound(f_l))²
Therefore, since τ_{f_l} is constant, the optimization criterion leads to maximizing latency. Observing great latencies means that we provided large production contracts and therefore a lesser development effort.

Remark 40
We remind the reader that latency here is not the common network latency but the latency of Remark [START_REF] Chaine | TSN Support for Quality of Service in Space ?[END_REF].

Size Based Isolation
In Fig. 11.12a (resp. Fig. 11.12b), we present the computed application (resp. network) latencies for a configuration generated with Size Based Isolation and a configuration generated with our implementation of End-to-End TT. The configuration is realised on the full set of 120 flows. There are several elements that need commenting in this second experiment. First of all, with our implementation of Size Based Isolation, we were able to configure the network for all the jitter flows of the generic satellite system. However, we were not able to configure the network for one no jitter flow. This flow is represented in neither Fig. 11.12a nor Fig. 11.12b. Let us detail the reasons why we believe this flow could not be configured. In fact, this flow is a no jitter flow. Therefore, its release instant is computed a posteriori, once all the jitter flows have been computed (see Section 10.2.4). In order to determine the release instant for the messages of that flow, per message, starting from the message's deadline, we go back in time until we find an open gate event large enough to fit that message. According to the algorithm provided in Section 10.2.4, a gate event is considered large enough if and only if it is larger than the largest frame transmission duration going through that port.
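The backward search just described can be sketched as follows. This is not the algorithm used in this work (which is not disclosed); it is a naive illustration with assumed inputs, where gate events are given as open intervals of the considered queue and durations are in macroticks.

```python
def latest_open_event(open_events, deadline, max_size_on_port, ref):
    """Walk back from the deadline and return the start of the latest
    event that is 'large enough' (larger than the biggest frame of the
    port), or None when the search fails."""
    for start, duration in sorted(open_events, reverse=True):
        if start + duration <= deadline and duration > max_size_on_port:
            return start if start >= ref else None  # before Ref(f_l): impossible
    return None

# A tiny (10, 2) event is discarded even for a tiny message, because
# 2 < max_size_on_port; only the (40, 30) event qualifies.
assert latest_open_event([(10, 2), (40, 30)], deadline=100,
                         max_size_on_port=12, ref=0) == 40
```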
As a consequence, even if the message under consideration is really small, if some large no jitter frames go through the same port, a small gate event that could have been large enough for the small message is considered closed. Therefore, we keep going back in time until we find an open event large enough for the message. For the particular flow that we could not configure, our algorithm proposed, for several of its messages, a gate event before the reference date of these messages. This would imply that the message would have to be produced before the beginning of its period, which is impossible. Nevertheless, this does not mean that the Size Based Isolation implementation is not correct, but instead that it can be improved. We think of two potential solutions: adding additional hypotheses (an order between no jitter messages for instance) that would help use the small gate events that we discarded; or relying on existing approaches from the state of the art such as incremental approaches [START_REF] Steiner | An evaluation of smt-based schedule synthesis for time-triggered multi-hop networks[END_REF] for the configuration of no jitter flows. Besides, there is a strange result that we are not able to explain: computed latencies in Size Based Isolation are sometimes, for no apparent reason, smaller than latencies from the End-to-End TT implementation. In addition, we are surprised by the difference in latencies between End-to-End TT on the reduced and the full set of flows: by adding only 5 low jitter flows, latencies are completely modified. One possible explanation might be that we did not execute the configuration generator long enough for our optimization criterion to do its job. We will investigate these reasons in future works.

ORION CEV Use Case
Finally, in one last experiment, we evaluate Egress TT on a use case adapted from the Orion Crew Exploration Vehicle (CEV) use case (described in Section 9.4.2). It is composed of 100 jitter flows and 86 no jitter flows. Although the model described in Section 2 supports multicast flows, the implementations of End-to-End TT and Egress TT that we have used for this paper do not. Therefore, we have duplicated multicast flows into several unicast flows. Thus, the use case is composed of 168 unicast jitter flows and 147 unicast no jitter flows.

Remark 41 (Multicast Support)
There is an adverse effect in doing so: the number of messages in the network is unnecessarily increased, which might reduce the possible configurations. However, it is sufficient to demonstrate the concept of Egress TT configurations. Improving the implementation for multicast support is relatively simple and will be proposed in future works.

Size Based Isolation offers the possibility to put in the same queue several flows with the same source and the same destination. However, in this use case, there are no two flows with the same source and the same destination. Therefore, we will only experiment with Exclusive Queue Allocation. In addition, because of the limitation of Egress TT configurations with Exclusive Queue Allocation, we can only consider 157 unicast jitter flows and 147 unicast no jitter flows. We compare this reduced use case with End-to-End TT. Unfortunately, our implementation of End-to-End TT (state of the art), maybe too naive, did not allow us to compute a configuration on the full set of flows of the Orion CEV use case like [START_REF] Zhao | Worst-case latency analysis for ieee 802.1qbv time sensitive networks using network calculus[END_REF] did in their paper.
Therefore, we were only able to obtain an End-to-End TT configuration (without optimization criterion) on a reduced set of 60 flows. The generation of the configuration lasted about 4 hours in End-to-End TT and 18 s with Exclusive Queue Allocation. The Egress TT configuration generation for the full size use case was successful and lasted 4 minutes.

This experiment confirms our previous observations on scalability and latency. On the reduced use case, observed network latencies are, on average, 30 times greater in Egress TT than in End-to-End TT configurations.

Remark 42
If network latency is an important requirement for the system, the optimization criterion could be modified to take it into account when generating a configuration. We did not do it in these experiments since the optimization criterion that we use was provided along with the Airbus Generic Satellite use case.

Part IV
Conclusions & Perspectives

Chapter 12
Conclusions & Perspectives

Conclusion
In light of the rapid increase in performance demand in the satellite industry, the goal of this PhD was to discuss the suitability of an emerging set of standards, namely Time Sensitive Networking (TSN for short), with respect to the foreseen requirements of next generation satellite on-board networks. This study was included in the TANIA-DP roadmap (Technological Assessment for New Instruments and Avionics - Data Processing) at Airbus Defence and Space, which, among others, aims at providing new on-board communication standards for satellite Platform & Payload networks which would benefit future missions, be it on performance, on cost, on compatibility with ground segment standards or on system operation sides.

The TSN standards
Time Sensitive Networking is the state of the art IEEE Ethernet technology. Initiated after the previous activities on IEEE AVB (Audio Video Bridging), the IEEE TSN Working Group has introduced more than twenty standards aimed at providing real-time and high-throughput capabilities, fault tolerance capabilities, time-management capabilities, and configuration management capabilities to an Ethernet network. In 2017/2018, the popularity of Time Sensitive Networking began to expand with growing interest coming from three industrial sectors, i.e. the factory automation industry, the automotive industry and the 5G industry. Soon, it aroused the interest of other industries including the aerospace industry. The set of standards is still growing today but the first TSN components have already made it to the market, implementing only a subset of TSN standards. Components will be getting more mature in the years to come and products embedding TSN will definitely appear in the next five years. As the set of standards in TSN is getting bigger and bigger, industrial working groups have started to propose TSN Profiles defining a reduced set of standards and/or parameters for the use of TSN in a specific context (e.g. Automotive Profile [START_REF]Draft Standard for Local and Metropolitan Area Networks -Time-Sensitive Networking Profile for Automotive In-Vehicle Ethernet Communications[END_REF]). In that sense, we were the first to promote, in the TSN\A conference [START_REF] Chaine | Suitability of time sensitive networking for space ?
TSN A Conference[END_REF] and in the ERTS Congress [START_REF] Chaine | TSN Support for Quality of Service in Space[END_REF], the interest of the spacecraft industry towards TSN for next-generation satellite networks as well as a glimpse of an aerospace profile. This aerospace profile is now being refined in a joint effort of IEEE and SAE ([65]).

Contributions
In order to discuss the suitability of Time Sensitive Networking for the spacecraft industry, we described in this manuscript the two-step approach that we put in place during this 3-year PhD program. First, we assessed in a qualitative approach that TSN was indeed a suitable candidate for a next-generation on-board network. Second, we assessed, in a quantitative way, that TSN was capable of satisfying the performance quality of service requirements of future satellite missions. Let us summarize the process and the results we obtained in both steps.

Selection of candidate technologies compliant with the quality of service requirements of a next-generation satellite network
In order to validate that TSN was a suitable candidate for a next-generation on-board network, we proposed to realise a qualitative comparison of several technologies, including TSN, inspired from existing comparisons in the state of the art. The list of technologies (i.e. Ethernet, ARINC 664, TTEthernet, Spacefibre and Time Sensitive Networking) was pre-selected by our industrial partner at the beginning of this PhD. To effectively compare the technologies, we defined a set of three properties (Mixed Traffic Type, Time Management and Fault Tolerant Operations) that described a high level view of the foreseen quality of service requirements of future missions. We then refined these properties into criteria that served as a basis for the comparison. The output of the comparison was a set of tables, available in Tab. 7.2, 7.3 and 7.4, which represent, per criterion and per technology, our evaluation of the compatibility of the technology with the corresponding criterion. We identified, through this qualitative comparison, three suitable candidates for a next generation satellite network: TTEthernet, Spacefibre and TSN. Ethernet and ARINC 664 were discarded since they did not provide, among others, a very low reception jitter capability which was critical for current and future satellite missions. We then confronted the three remaining technologies on third party arguments, i.e. arguments not related to quality of service, so as to reduce even more the list of candidates. Nevertheless, this discussion did not reduce the list of candidates. On the one hand, while TTEthernet and Spacefibre have components that are designed to work in a space environment, the existing TSN components currently on the market do not have this kind of capability. Therefore, the industrial partner would have to make its own hardware and software so as to embed TSN in space. On the other hand, TSN is (or will be) a widespread technology, used in diverse industrial sectors with many use cases, therefore there will definitely be numerous COTS IP Cores (Commercial Off-The-Shelf Intellectual Property Cores) from different manufacturers that the industrial could easily instantiate in its own space-hardened hardware. This is actually already considered in the roadmap of several component providers which try to create synergies between their space and their automotive/industrial automation business units working on the same TSN topics.
By relying on COTS components, the spacecraft industry would benefit from the TSN mass market and obtain attractive costs for the use of TSN. This might not be the case with TTEthernet (resp. Spacefibre), which is a mono-manufacturer proprietary (resp. space-specific) technology. The comparison and the analysis of Part II have been published in [Chaine et al., "Comparative study of Ethernet technologies for next-generation satellite on-board networks"] and [Chaine et al., "Comparative study of high throughput technologies for next-generation satellite on-board networks"]. TTEthernet had already been studied in previous works and Spacefibre will be studied in future activities; therefore, we proceeded with a more precise evaluation of the suitability of TSN in the second step of our approach.

Computation of TSN network configurations for next-generation satellite networks

In order to further assess the suitability of Time Sensitive Networking for next-generation satellite networks, we proposed in this second step to highlight its compatibility by computing valid network configurations, i.e. configurations that would satisfy the performance and fault tolerance quality of service requirements of next-generation on-board networks. This is not a simple task since TSN is composed of multiple standards, themselves composed of several mechanisms which are configured via numerous parameters. In order to simplify the problem at hand, we proposed to reduce our scope to only one TSN standard, i.e. IEEE 802.1Qbv - Enhancement for Scheduled Traffic, and to discuss its compatibility with performance quality of service requirements. The configuration challenge of TSN was and still is a hot topic in the networking community, and configuration strategies are already available in the state of the art. Nevertheless, the existing methodologies in the state of the art for the configuration of 802.1Qbv had multiple drawbacks. First, they were computationally expensive. In fact, these configurations relied on a frame schedule, on all hops and for all frames in the network, that had to be computed for every configuration. As a consequence, the computation of a single configuration took time and resources, and the effort drastically increased as the networks to configure got bigger and bigger. Then, these configurations induced a certain effort on application development. In fact, in these configurations, applications running over the network had to emit their messages in somewhat tight emission windows that the application designer had to take into account during system design. Finally, these configurations required that all devices in the network embedded TSN features (i.e. TSN standard 802.1Qbv). This created a huge gap between legacy networks (relying mostly on MIL-STD-1553 and SpaceWire) and next-generation TSN networks, as all network devices and links had to be replaced. As a consequence, state-of-the-art configuration strategies received moderate interest from our industrial partner. In order to overcome the limitations of existing configurations in the state of the art, we proposed Egress TT, a novel configuration approach where a frame schedule is only computed in the very last device in the path of any flow. The travel time of a message across the network is bounded and any variability in this delay is compensated by a fixed emission date in the last-hop port. In other words, latency is traded off for jitter, i.e.
Egress TT provides very low reception jitter but induces greater latency than existing state-of-the-art configurations. While Egress TT might not be suitable for all types of systems, it was of great interest in the satellite environment, where latency constraints are large. In fact, the Egress TT approach seemed to reduce the computation effort since the number of schedules to compute was considerably lower than in state-of-the-art configurations. It also seemed to reduce the legacy-to-next-generation gap since only the very last device in the network requires frame scheduling capabilities. In this manuscript, we presented two versions of our approach for Ethernet/Time Sensitive Networking networks, entitled Exclusive Queue Allocation and Size Based Isolation. In order to implement our Egress TT approach, we first created a formal model of the network and of its foreseen requirements. Then, we reused an existing configuration computation method from the state of the art: define the configuration problem as a constraint programming problem that can be solved by a constraint programming solver or by SMT/OMT solvers. We adapted the existing 802.1Qbv configuration model so that it took into account the specific satellite requirements that we had previously formalized. The first implementation, Exclusive Queue Allocation, relied on an isolation constraint that we designed so that the existing message-loss independence property of satellite networks, i.e. the loss of a frame in a flow does not affect the nominal behaviour of other flows, remained satisfied. However, this came at a cost: an output port in a last-hop device (i.e. a device at the last hop in the path of a flow) could not receive more than 8 different flows (i.e. space isolation of different flows). While this could have been good enough for small networks, bigger satellite use cases could not be computed with this method. Therefore, in an attempt at increasing the schedulability of Exclusive Queue Allocation while keeping the good expected properties of Egress TT, we implemented Size Based Isolation. This second implementation relied on a modified version of the previous isolation constraints where messages from different flows are not separated in space anymore (i.e. they share the same resources) but, instead, are isolated with a clever use of message sizes. We evaluated, in the last chapter, the performance of our two versions of Egress TT on several space use cases, compared them to our own implementation of a state-of-the-art configuration strategy, and confirmed the reduction in computation effort and application impact.

Perspectives on the Egress TT approach

There are several perspectives that we envision as future work on the Egress TT approach.

New features in the model

New features could be added to the constraint programming model without any considerable effort. First of all, multicast could be integrated so as to support a wider variety of systems. As explained in Chapter 11, we had to adapt the Orion use case to our model since it had multicast flows, and therefore the configurations that we ended up with were not applicable to the original use case. Another improvement would be to allow greater message sizes at application level. This means that one applicative message would be divided into several Ethernet frames. We discuss the modifications of the model for this specific improvement in Appendix C.1.
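To give a concrete flavour of this kind of constraint programming formulation, the sketch below encodes a heavily simplified, single-output-port version of the Egress TT idea with the z3 SMT solver (one possible SMT/OMT backend). One integer decision variable per flow represents its fixed emission date in the last-hop port; it must lie after the worst-case arrival of the frame, the transmission must end within the flow period, and emission windows must not overlap. All flow names and numeric values are illustrative assumptions, not taken from the use cases of the manuscript.

```python
# Minimal sketch of an Egress TT-style schedule for ONE last-hop output port,
# solved with the z3 SMT solver (pip install z3-solver). Values are illustrative.
from z3 import Int, Solver, Or, sat

# (name, period_us, worst_case_arrival_us, transmission_time_us) -- hypothetical
flows = [("f1", 1000, 120, 10),
         ("f2", 1000, 300, 10),
         ("f3", 1000, 250, 10)]

s = Solver()
release = {name: Int("release_" + name) for name, _, _, _ in flows}

for name, period, wc_arrival, tx in flows:
    # The frame must already be queued at its (fixed) emission date...
    s.add(release[name] >= wc_arrival)
    # ...and must be fully transmitted before the end of the flow period.
    s.add(release[name] + tx <= period)

# Emission windows on the same port must not overlap (isolation constraint).
for i in range(len(flows)):
    for j in range(i + 1, len(flows)):
        (ni, _, _, ti), (nj, _, _, tj) = flows[i], flows[j]
        s.add(Or(release[ni] + ti <= release[nj],
                 release[nj] + tj <= release[ni]))

if s.check() == sat:
    m = s.model()
    for name, _, _, _ in flows:
        print(name, "emitted at", m[release[name]], "us")  # jitter-free dates
```

Because only the last-hop emission dates are decision variables here, such a model stays small compared to an End-to-End TT formulation, which would need one variable per frame per hop.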
In addition, one simple modification of the model would be to include frame preemption for no jitter traffic. This would lead to a modification of the delay computation formula that we provided. TSN frame preemption standards are detailed in Appendix A.1. Talking about delay computation, the delay computation formula that we proposed in this manuscript is somewhat naive and could be improved. Nevertheless, it was sufficient for the considered use cases. One way of improving the delay computation would be to couple the constraint programming solver with a network calculus tool that could compute better bounds on the delay at each iteration of the solver. This being said, we raise the reader's awareness of the fact that this is currently not possible with the constraint programming solver that we used and might not be time efficient at all. Finally, one could propose to integrate the Credit Based Shaper for no jitter traffic, or even introduce a third traffic type in the model whose medium access is handled via CBS. To do so, several elements would have to be refined. First, new decision variables corresponding to the idle slope and send slope parameters of the used CBS would have to be introduced so that the CBS configuration can be computed. Then, the delay computation formula would have to be modified to take into account the CBS queues. Finally, the exclusive gating philosophy currently applied in Egress TT configurations would have to be challenged to decide whether CBS queues are open when no jitter (static priority) queues are open or not. In any case, this will definitely affect the computation of Release instants for no jitter flows.

New standards

Apart from new features related to the 802.1Qbv standard (and frame preemption), other standards could be included in the model for better support of real-life systems. First, TSN standard IEEE 802.1CB Frame Replication and Elimination for Reliability could be added to our model. This would lead to a traffic increase in the network and therefore might have an impact on the delay analysis. If, for instance, only a subset of flows benefit from redundancy, or if redundancy is only applied in a subset of the network, the delay of flows sharing the same resources as those that are duplicated will be affected. Another standard that could be integrated into the model is 802.1AS-2020. This standard is dedicated to network synchronization and based on a specific PTP profile. Therefore, there are messages exchanged in the network that share the same resources as user data. These messages shall be taken into account in the configuration and resource reservation (for instance flow-to-queue mapping) since they will have an impact on the delay. In addition, depending on the expected synchronization quality, the number of messages might differ, and therefore the configurations might evolve with the number of synchronization messages.

Future works on Industrial Use of TSN in Space

Besides the new features and new standards additions to the Egress TT approach, there are still challenges remaining on the industrial side towards the selection of the next-generation satellite on-board network technology. This deals not only with our assessment of TSN but also with the assessment of the other technology candidates (e.g. TTEthernet and Spacefibre) that were identified throughout the qualitative comparison. In fact, in the second contribution of this PhD, we only addressed performance quality of service requirements.
Therefore, in order to further assess the suitability of TSN (and also of the other technology candidates), it is necessary to work on the definition of a clear FDIR model for these next-generation networks. For instance, will the switches be used in a hot redundancy or cold redundancy scheme? Will the links be duplicated, or triplicated? Will redundancy be applied to the whole network? What are the reliability and availability values expected for future missions? How can they be evaluated on a TSN/TTEthernet/Spacefibre network? Another aspect that has not been discussed at all during this PhD is the physical medium, as we focused on ISO Layer 2 features only. Therefore, we did not analyse the availability of physical mediums capable of handling the high data rate traffic of next-generation satellites, nor their compatibility with the space environment. Shall it be a fibre optic cable, a twisted-pair cable, a custom cable? Shall we try to adapt a Spacefibre physical layer for a TSN MAC layer device? The preliminary activities in which the industry is currently involved tend to suggest that the medium for the next-generation network would be from the fibre optic family, but more studies are needed to fully design this physical layer.

For the fragments following the First Fragment, a short frame header is used. It bears, within the Start Frame Delimiter, the information that this new frame is either an Intermediate Fragment or a Last Fragment, as well as a modulo-4 counter that counts the number of fragments.

[Figure: frame header fields - Preamble, SFD, MAC-DA, MAC-SA, Ethertype.]

If the transmission of the frame is completed (i.e. the First Fragment was sent and the rest of the transmission happened in one frame), the second fragment is called the Last Fragment. However, if, during the transmission of the fragment, an express frame becomes available for transmission, this new fragment, called an Intermediate Fragment, is preempted (under the same conditions as those listed above), an mCRC is generated and the process starts again (i.e. waiting for the medium to be free and trying to send the rest of the frame). For every Intermediate Fragment, the FCS used is the mCRC, i.e., as a reminder, the four least significant bytes of the CRC of the current fragment. For the Last Fragment, the FCS used is the 32-bit CRC of the whole preemptable frame. This allows detecting that the fragment received is the Last Fragment (i.e. FCS ≠ mCRC), as well as performing integrity checking on the entire preemptable frame. The formats of the First Fragment, Intermediate Fragment(s) and Last Fragment are summarized in the corresponding figure.

A.1.4 Preemption Limitation

Frame preemption is not allowed to add padding to fragments. As a consequence, not all frames can be preempted! When a frame's length is below 143 bytes, it is considered not preemptable: splitting such a frame in two would violate the Ethernet frame minimum length of 84 bytes (including IFG, Preamble and SFD) for at least one of the fragments. This minimum length for preemptable frames is demonstrated in [Thiele et al., "Formal worst-case performance analysis of time-sensitive Ethernet with frame preemption"].

A.1.5 Configuring Frame Preemption

TSN feature IEEE 802.1Qbu (& IEEE 802.3br) Frame Preemption has only one mechanism: itself. There are only two possible configurations: configured or not configured. When configured, one parameter, namely addFragSize, allows tuning the size of the fragments.
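As a quick numeric illustration of this limitation, the helper below (our own naming, not part of any standard API) applies the bound discussed above to decide whether a preemptable-tagged frame may actually be fragmented:

```python
# Illustration of the preemptability limit discussed above. The 143-byte bound
# follows the cited analysis; the helper and its names are ours (illustrative).
WIRE_MIN_BYTES = 84           # minimal Ethernet transmission incl. IFG, Preamble, SFD
MIN_PREEMPTABLE_BYTES = 143   # below this on-wire length, fragmenting is impossible

def can_be_fragmented(on_wire_length: int) -> bool:
    """True if a preemptable frame of this on-wire length (frame + IFG +
    Preamble + SFD, in bytes) is long enough to be split into valid fragments."""
    return on_wire_length >= MIN_PREEMPTABLE_BYTES

for length in (84, 142, 143, 1538):
    status = "preemptable" if can_be_fragmented(length) else "not preemptable"
    print(f"{length:5d} bytes -> {status}")
```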
When Frame Preemption is configured, it is also assumed that traffic classes have been classified as express or preemptable beforehand.

A.1.6 Behaviour of Frame Preemption when Combined with Transmission Gates in Exclusive Gating

Combining the TSN features Frame Preemption and Enhancement for Scheduled Traffic (cf. Chapter 4.2) configured in exclusive gating may be of interest when trying to maximize the bandwidth of preemptable traffic. In fact, let us imagine a TSN output port TDMA configuration (see 4.2.5) with M = 8 and queue #7 in exclusive gating with the other queues. All frames from #7 will be tagged express and frames from the other queues are tagged preemptable. As #7 and the other queues are in exclusive gating, when the gate of #7 is open, the gates of all the other queues are closed and, respectively, when #7 is closed, the gates of all the other queues are open. In this situation, frames from traffic classes other than #7 cannot interfere with #7 frames. However, when the gate of #7 is about to open and all the other gates are about to close, the behaviour of the port changes depending on the configuration of the Frame Preemption feature. If Frame Preemption is not configured, the behaviour will be qualified as Preventive, and if Frame Preemption is configured, the behaviour will be qualified as Preemptive. In the preventive scenario, as well as in the preemptive one, the schedule of closing and opening sequences of all traffic classes of the port is known. As a consequence, it is possible for the port to know the remaining time before the opening of the gate bearing express traffic (#7 in our example). Hence, when a preemptable frame is available for transmission, the port will check, before emitting it, that there is enough time to transmit the whole frame. If so, the frame is transmitted; if not, the frame is put on hold: it waits until all the express traffic available for transmission when its gate opens has been sent, and then the preemptable frame is sent. This prevents any interference between preemptable and express traffic, i.e. between #7 and all other traffic classes. However, the time during which the preemptable frame is on hold, because it cannot be emitted, is unused, i.e. bandwidth is lost.

[Figure: preventive vs. preemptive handling of preemptable and express frames.]

In the preemptive scenario, i.e. when Frame Preemption is configured, the preemptable frame currently being emitted is preempted (if possible) when an express frame becomes available for transmission. If the frame cannot be preempted (e.g. it is too short to be fragmented), it is put on hold until all the express traffic available for transmission has been sent. Configuring Frame Preemption seems interesting in terms of bandwidth available for preemptable traffic because, instead of having the whole preemptable frame on hold, some of it can already be sent. In an exclusive gating configuration, using Frame Preemption thus makes better use of the bandwidth left to preemptable traffic.

A stream is a multicast unidirectional flow of data going from a Talker End-Station to one or several Listener End-Stations. A stream is composed of frames going from an emitting application to one or several receiving applications. A stream, at system level, has an identifier that allows differentiating one stream from another. However, this identifier is not a field of the Ethernet MAC header and is not written anywhere else in the frame. The identifier only exists within network devices (switches and End-Systems) and is called the stream handle.
As this stream handle is not part of the frame, a method must be created and added to all the network devices in order to be able to distinguish between frames from different streams. This function, in TSN, is called the Stream Identification Function. We will describe this function in the next subsection.

A.2.3 Stream Identification Function

The Stream Identification Function is the function that allows network elements to recognise to which stream a frame belongs. As the stream handle is not actually part of the frame, a stream is identified through a combination of actual fields of the frame being analysed. The table of Fig. A.7 lists all the available combinations of fields that could be used to identify a stream. As this list of combinations is rather short, an amendment is currently being written in order to offer an extension of the stream identification function (cf. IEEE 802.1CBdb). For the record, the Stream Identification Function is not part of 802.1Qci. It was introduced in 802.1CB - Frame Replication and Elimination for Reliability (see Chapter A.3), but for the purpose of understanding 802.1Qci, we believe it was necessary to introduce it here. Basically, stream identification functions analyse part of the MAC header (destination/source MAC addresses and VLAN identifier) and sometimes higher-level elements (IP and above). One of the functions is qualified as active because it can, when the stream handle is computed, change the value of the MAC destination address, VLAN identifier and priority of the frame.

A.2.4 Qci ingress port organization

With 802.1Qci, 4 mechanisms have been added to the switch ingress port: 1. Stream Identification Function, 2. Stream Filters, 3. Stream Gates, 4. Stream Meters. Let us describe mechanisms 2, 3 and 4; mechanism 1 has already been introduced in Section A.2.3.

Stream Filters

Once the stream handle has been recovered thanks to the Stream Identification Function in the ingress port of a switch, streams are passed to stream filters according to their stream handle. In fact, there are several stream filters available in the same ingress port. One stream filter instance can be associated with one stream, or with one subset of a stream in the case where the stream handle is combined with the frame priority for stream filter instance attribution. We envision that there will be more streams than stream filters due to the memory size limitation in switches; that is why one of the stream filters generally has a wild-card, signifying that all streams that were not specifically associated with a stream filter will be handled by a "default" filter. All the frames of a stream that pass through the same stream filter will go through the same filters, the same stream gate and the same stream meters. A stream filter has several parameters:
• Stream filter id, that identifies the stream filter,
• Stream handle, that identifies the associated stream,
• Priority, for further associated stream identification (optional),
• Zero or more Filter specifications, that define a set of filters (e.g. max SDU size, flow meter identifier, etc.). If the frame of a stream does not match the filter conditions, it may be discarded,
• Frame counters (total number of frames, number of frames dropped due to the SDU filter, number of frames dropped due to flow metering, etc.), for MIB counters and statistics,
• a StreamBlockedDueToOversizeFrameEnable function that can be enabled or not. If enabled, the parameter StreamBlockedDueToOversizeFrame can be true or false.
If it becomes true, this means that a stream that went through this stream filter had an oversize frame. In that case, all frames of the stream passing through this stream filter are blocked and dropped. This function is optional.
• Stream gate id, used to identify to which stream gate the streams will be passed,
• Stream meter id, used to identify to which stream meter the streams will be passed.

With this first mechanism, TSN protocol 802.1Qci offers a way to filter streams and discard their frames if they do not match previously specified format requirements, in particular the size of the frame. This looks very appealing for security as well as safety purposes. In particular, the ability to block and drop certain streams that do not match some previously specified system/network requirements is very interesting compared to standard Ethernet, where this type of filtering would be left to quality of service mechanisms at higher ISO levels.

Stream Gates

Among the parameters of the stream filter, one parameter identifies the stream gate instance to which the stream filter sends its streams. Contrary to the one-to-one (stream; stream filter id) association, a stream gate can be associated with several stream filters. This means that in a single stream gate, frames from different streams may coexist (in space or time). Again, a stream gate has several parameters:
• Stream Gate instance ID, which identifies the stream gate instance,
• Gate status, which can be "Open" or "Closed",
• Internal Priority Value, which is used to assign the 802.1Qbv traffic class to streams,
• the GateClosedDueToInvalidRx function and parameter, used to detect and drop frames of streams that arrive at a time instant when they were not supposed to arrive. This will come in handy in time-triggered situations in order to ensure that frames only arrive in the correct time windows,
• the GateClosedDueToOctetExceeded function and parameter, used to check that, during a time window, a certain amount of bandwidth has not been exceeded. Again, this will come in handy in order to ensure a correct sharing of the bandwidth between streams,
• the Stream Gate Control List, with the same definition as the 802.1Qbv Gate Control List, except that instead of controlling the behaviour of the 802.1Qbv gates, it controls the state of the 802.1Qci stream gates.

The Internal Priority Value, or IPV, is rather interesting because this field defines into which 802.1Qbv traffic class (i.e. queue) a stream will go. This means that the association between streams and traffic classes is not constant: it may evolve during the journey of the stream's frames in the network. This opportunity of having the traffic class changed during the journey of the stream's frames is exploited by another TSN addendum (only an informative addendum and not a normative standard) referenced IEEE 802.1Qch - Cyclic Queueing and Forwarding, which will not be detailed in this document. The Stream Gate Control List (SGCL) is, to us, a good way to protect the network from a schedule failure of one of the devices. If the SGCL is identical to the 802.1Qbv GCL modulo a transmission time, then the switch could be aware of the schedule of the upstream end-station or switch. If the schedule is known, then time windows where the traffic of a given stream is not supposed to arrive can be defined. Using these time windows, it would be possible not only to detect schedule problems and notify the user, but also to protect these streams' frames from potentially messing up the schedule later on in the network.
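To make the stream filtering behaviour described earlier in this section more tangible, here is a deliberately naive Python model of a stream filter with a max SDU check and the oversize-blocking option; the class and its attribute names are ours and do not reproduce the managed-object model of the standard.

```python
# Naive model of an 802.1Qci-style stream filter: frames of a stream are
# dropped when they exceed the configured max SDU size and, if the blocking
# option is enabled, the whole stream is blocked afterwards. Illustrative only.
class StreamFilter:
    def __init__(self, max_sdu_size: int, block_on_oversize: bool = False):
        self.max_sdu_size = max_sdu_size
        self.block_on_oversize = block_on_oversize
        self.blocked = False
        self.passed = self.dropped = 0   # counters, as in the standard's MIBs

    def accept(self, sdu_size: int) -> bool:
        """True if the frame is passed on, False if dropped."""
        if self.blocked:
            self.dropped += 1
            return False
        if sdu_size > self.max_sdu_size:
            self.dropped += 1
            if self.block_on_oversize:
                self.blocked = True       # StreamBlockedDueToOversizeFrame
            return False
        self.passed += 1
        return True

f = StreamFilter(max_sdu_size=1500, block_on_oversize=True)
print([f.accept(s) for s in (100, 1600, 100)])   # -> [True, False, False]
```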
Again, the policing functions provided by the stream gates are interesting for security and safety purposes. However, one thing is more tricky: as several streams may be associated with the same gate, if the gate is closed because one of the streams did not respect its bandwidth constraints or its behaviour over time (schedule), the "bad" stream will be blocked and its frames will be dropped, but the other streams using this gate will also be blocked and suffer the same punishment. This will have to be kept in mind when associating streams with stream gates.

A.3.1 Introduction

Standard Ethernet, contrary to TTEthernet or ARINC 664, does not provide any support for redundancy (see Sections 3.6.1, 3.6.2, 3.6.3). This means that if any redundancy strategy had to be defined in an Ethernet network, up to now, it would have to be handled at application layer (ISO Layer 7). With IEEE 802.1CB - Frame Replication and Elimination for Reliability, the TSN working group has introduced a way to handle redundancy, for reliability purposes, at ISO Layer 2. This standard is very often called FRER instead of 802.1CB. In our document, the reader is very likely to find both names. The overall goal of 802.1CB is to provide a protocol to avoid frame loss due to equipment failure. It is realized by duplicating the frames of a stream and sending them in the network along paths as disjoint as possible to their destination, where duplicates are eliminated, in a seamless redundancy (transparent to the application).

A.3.2 Frame Format for Redundancy Support

In order to support redundancy, it is necessary to define a way to identify duplicated frames and eliminate said duplicates. In ARINC 664, a sequence number is added to any duplicated frame. Two frames from the same flow sharing the same sequence number are thereby identified as duplicates. In 802.1CB, a similar mechanism is proposed. In fact, a field is added to any frame in order to identify it as a duplicate. This field is the sequence number. There are three sequence number implementations proposed in the standard:
• HSR, High-availability Seamless Redundancy,
• PRP, Parallel Redundancy Protocol,
• RTAG, Redundancy TAG.

While HSR and PRP are designed for ring and parallel network topologies, RTAG is designed for switched networks. Let us briefly describe the RTAG sequence number. The RTAG field is pretty basic. It is a 6-byte optional tag of the MAC header with an EtherType equal to 0xF1C1. The sequence number is encoded on two bytes. The format of the RTAG and how it is included in an Ethernet frame are shown in Figure A.11. The RTAG field is the closest implementation to the existing sequence number implementations in TTEthernet and ARINC 664.

A.3.3 Difference with ARINC 664 Redundancy

There are two major differences between ARINC 664 redundancy and 802.1CB + RTAG redundancy. First, in TSN, it is possible to have more than two duplicates, whereas ARINC 664 only supports two. Second, in ARINC 664, redundancy is realised end-to-end, meaning that frame duplication is done in the source end-system and Redundancy Management & Integrity Checking is done in the receiving end-system. In TSN, it is possible to realise redundancy in a specific part of the network. For instance, one can imagine that frames are only duplicated in the core network and that duplicate elimination is realised in the last switch in the path of any flow. Therefore, the end stations only emit and receive one frame (and no duplicates).
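To illustrate the elimination side of the mechanism just described, here is a minimal match-discard recovery sketch driven by RTAG sequence numbers. It is a simplification of what 802.1CB actually specifies (no sequence-number wrap-around, no history window, no recovery timeout); the function names are ours.

```python
# Simplified duplicate elimination for one stream, driven by RTAG sequence
# numbers. Real 802.1CB recovery functions also handle wrap-around, history
# windows and timeouts; this sketch only keeps the core match-discard idea.
def make_eliminator():
    seen = set()
    def deliver(seq_num: int) -> bool:
        """Return True if the frame is forwarded, False if discarded."""
        if seq_num in seen:
            return False        # duplicate: eliminate
        seen.add(seq_num)
        return True             # first copy received: forward
    return deliver

elim = make_eliminator()
arrivals = [1, 2, 2, 3, 1, 4]           # frames arriving from redundant paths
print([elim(s) for s in arrivals])      # -> [True, True, False, True, False, True]
```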
Another use case for this capability would be when a part of the network is located in an environment leading to a lot of perturbations on the physical medium; then, locally, the number of duplicates could be increased to compensate for these perturbations.

A.3.4 Summary

With 802.1CB, TSN proposes a way to have seamless redundancy at the cost of 6 bytes in the Ethernet frame. It allows the TSN stack to duplicate (or n-plicate) frames and send them on disjoint paths to avoid single points of failure.

Introduction

This thesis was funded by Airbus Defence and Space (referred to as the industrial partner in this document) and the ANRT (Association Nationale pour la Recherche Technologique). It is part of the Airbus TANIA-DP roadmap (Technological Assessment for New Instruments and Avionics - Data Processing), which aims, among other things, at providing new communication standards for satellite on-board networks in order to improve performance, costs, compatibility with ground segment standards, or the operation of future missions in space.

Context

Mirroring the increase in the volume of data generated and handled by our equipment on Earth (phones, cars, scientific measurement instruments, etc.), satellites must also be able to produce and transmit ever larger amounts of data in order to meet the needs of the companies operating them. High-capacity devices, such as multi-gigabit cameras, can already be found on the market, so the technological transition of on-board instruments towards higher performance is not a problem in itself. However, increasing the transmission capacity for these data is a problem. This is why this document focuses on the modernization of on-board networks, which must address two different needs. On the one hand, they are in charge of carrying the information necessary for the proper operation of the satellite (e.g. thruster control, communication system control, solar panel control, etc.). On the other hand, they transport the data acquired by the instruments embedded in the satellite (for instance telescopes, meteorological sensors, etc.) towards the satellite antennas, which retransmit them to the ground stations. Most of the time, the on-board network architecture relies on two technologies: MIL-STD-1553 and SpaceWire.

A need for renewal

The current satellite on-board network architecture has demonstrated its strengths and its maturity over 15 years of use. However, it is now starting to show its limits, be it in terms of throughput, development and maintenance costs, availability of Components-Off-The-Shelf (COTS), or potential for interaction with other industries or with the scientific community. This is why the aerospace industry is considering renewing its on-board networks for the future generations of satellites. To date, the future architecture would be based on a "unified" network, i.e. a single technology would be used to meet the needs of future missions, both for the management of the nominal operation of the satellite and for the transfer of instrument data.
This technology will not only have to offer better performance than 1553 (short for MIL-STD-1553) and SpaceWire, but also be easy to analyse and configure, and facilitate satellite development and integration. Moreover, it will have to contribute to the reduction of the overall cost of the satellite. Several technologies were pre-selected by the industrial partner to build this next-generation "unified" network: Ethernet, ARINC 664, TTEthernet, Time Sensitive Networking and Spacefibre. Among these technologies, an opportunity recently appeared, at the beginning of the thesis, with Time Sensitive Networking (TSN for short), the latest IEEE Ethernet standard. This standard is claimed to be capable of meeting both the real-time and the high-throughput needs of on-board exchanges. This is the reason why this thesis topic, entitled "Suitability of Time Sensitive Networking for the requirements of the aerospace industry", was created. This activity is completely new in the space domain since TSN had never before been identified or considered as a potential candidate for the renewal of satellite on-board networks.

Our Approach

In order to evaluate the potential of Time Sensitive Networking with respect to the needs foreseen for the new generations of satellites, we propose a two-step reasoning. In a first step, we want to make sure, through a qualitative analysis, that it is indeed reasonable to consider TSN as a candidate for the next-generation unified network. To do so, we define a set of high-level properties representative of the requirements of future missions. Following similar comparison methodologies from the state of the art, we then refine these properties into criteria. Finally, we discuss the suitability of the technologies pre-selected by the industrial partner (i.e. Ethernet, ARINC 664, TTEthernet, Time Sensitive Networking and Spacefibre) with respect to the previously defined criteria. In a second step, we analyse in depth the compatibility of TSN by refining and then formalizing the quality of service needs of the next-generation unified network. We highlight the compatibility of this technology by generating one or several network configurations for representative systems. If it were not possible to generate configurations that meet the quality of service requirements, it would mean that TSN is not a good choice for future satellites. In order to generate these configurations, we draw inspiration from the configuration methodologies existing in the state of the art, based on scheduled message emissions and constraint programming, to propose our own configuration method. This is no easy task. Indeed, TSN is composed of numerous standards, each including very many parameters to take into account when generating a configuration. With the objective of reducing the configuration effort in this thesis, we limited ourselves to the configuration of the TSN standard IEEE 802.1Qbv only. This reasoning is illustrated in Fig. 1.

Outline of the summary

This document summarizes, in French in the original manuscript, some elements of context as well as the main contributions presented in the thesis manuscript. Chapter 2 presents generalities about satellites and their on-board networks, as well as the TSN standard 802.1Qbv.
Then, the third chapter formally defines the first problem, corresponding to the first step of our reasoning. It then summarizes the main elements of the first contribution of the manuscript, which consists in a qualitative selection of technologies compatible with the needs of next-generation satellites. In Chapter 4, the second problem, corresponding to the second step of our reasoning, is presented. In the rest of that chapter, we summarize the second contribution, i.e. the generation of configurations of a TSN network suited to the requirements of future missions. For this purpose, we present our novel configuration methodology, entitled Egress TT, and apply it to TSN networks in two different ways, entitled Exclusive Queue Allocation and Size Based Isolation.

Chapter 2 - Context

This chapter presents elements of context necessary for understanding the problem statement and the contributions. It summarizes Part 1 of the manuscript. In the first section, we introduce generalities about satellites and about the networks embedded on board these satellites. In the second section, we present some generalities about the Time Sensitive Networking technology, then we detail some elements of the operation of the IEEE 802.1Qbv standard.

Generalities about satellites

Although these networks have met the requirements of the aerospace industry over the past 15 years, they are starting to show their limits, in particular in terms of bandwidth. This is why they must be renewed. To do so, we study a list of technologies previously selected by our industrial partner in order to determine the potential candidates to replace the current technologies. Among these technologies, presented in Chapter 3 of the manuscript, is Time Sensitive Networking, which we introduce in more detail in the next section.

Time Sensitive Networking, the IEEE state of the art for Ethernet

Time Sensitive Networking (TSN for short) is the very latest technology of the IEEE Ethernet family, which claims to be able to carry both real-time and high-throughput traffic. The standards, amendments and ongoing projects, which we will all call standards hereafter for ease of writing, can be organized into five families: Synchronization, Reliability, Latency, Resource Management and "No congestion losses". A brief summary of the standards of each family is available in the manuscript. If only one message should be remembered from this overview of TSN, it would be the following: TSN is not a completely new technology; quite the contrary, it builds on the classical operation of Full Duplex Switched Ethernet (see Section 3.6 of the manuscript) described in IEEE 802.1Q, adding protocols that amend or improve the existing mode of operation.

Introduction

Output ports enhanced with 802.1Qbv now have 8 internal queues (also called traffic classes, or transmission queues in the standard) in which messages (or frames in Ethernet vocabulary) can be placed while waiting for their emission. Each of these queues is associated both with a Transmission Selection Algorithm (TSA) and with a Transmission Gate (TG).
It is the combination of these two mechanisms that decides, within a queue, when a frame is allowed to attempt to access the physical medium. This enhanced port architecture is presented in Fig. 2. These queues are the data structures that store frames while waiting for the physical link to become available. The queues are numbered and ordered by priority, from queue #7 down to queue #0. Each queue has a specific behaviour defined by its TSA and its TG.

Definition 1 (Available for transmission). For a frame to be allowed to attempt to access the physical link, it must be marked "Available for transmission". Several conditions are required for it to be marked as such: 1. the frame must be at the head of its queue, 2. the TSA of the queue must have authorized the transmission of the frame, 3. the TG of the queue must have authorized the transmission of the frame.

Since a port has several queues, it is possible that several frames, held in different queues, are marked "available for transmission" at the same time. Hence, it is necessary to define an arbitration strategy between the queues of a given port. The strategy implemented in TSN is Static Priority, i.e. the frame in the queue with the highest priority is emitted first, followed by the other frames in decreasing priority order. Let us now focus on Transmission Gates. In this thesis, we did not consider transmission selection algorithms (TSA), so they are not presented in this summary. They are nevertheless detailed in the manuscript.

Transmission Gates

The Transmission Gate mechanism is the second element involved in the process of marking frames as "available for transmission". It is often called the Time Aware Shaper, or TAS. Indeed, each queue is associated with a gate that can be opened or closed at configurable instants and for configurable durations. This makes it possible to decide when, and for how long, each queue of a port has the opportunity to emit frames. Let us detail the operation of these gates a bit further. A transmission gate can be in one of two states: Open (o) or Closed (C). When the gate of a queue is:
• Closed → even if the TSA of the queue authorizes the emission of frames, no frame of this queue may attempt to access the physical link,
• Open → the frame at the head of the queue will be marked "Available for transmission" if the TSA of the queue allows it and if there remains enough time to emit this frame before the gate closes.

It becomes clear from the explanation above that the state of the gates must evolve over time. Indeed, if a gate remains permanently closed, the traffic placed in the queue associated with this gate will never be emitted. In this context, a port enhanced with 802.1Qbv has a structure called the Gate Control List (GCL) that describes the evolution over time of the gate status of each queue of the port. The parameters presented in the standard for the configuration of GCLs are detailed in the manuscript.

Configuring the 802.1Qbv standard

The diversity of the transmission selection algorithms and of the queue schedules defined by the GCLs creates an immense configuration space for the 802.1Qbv standard.
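As a concrete picture of what a GCL manipulates, here is a minimal, hypothetical representation of a cyclic Gate Control List together with a function deciding whether the gate of a given queue is open at a given instant; the structure and values are ours, chosen to mimic an exclusive gating pattern between queue #7 and the others.

```python
# Minimal representation of a cyclic Gate Control List: each entry gives a
# duration (in microseconds here) and the set of queues whose gate is open
# during that interval. Values are illustrative.
GCL = [(200, {7}),                     # 200 us: only queue #7 open
       (800, {0, 1, 2, 3, 4, 5, 6})]   # 800 us: all the other queues open
CYCLE = sum(duration for duration, _ in GCL)

def gate_open(queue: int, t_us: float) -> bool:
    """True if the gate of `queue` is open at absolute time t_us."""
    t = t_us % CYCLE
    for duration, open_queues in GCL:
        if t < duration:
            return queue in open_queues
        t -= duration
    return False  # unreachable for a well-formed GCL

print(gate_open(7, 100), gate_open(7, 500))   # -> True False
```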
While imagining a configuration of the standard seems possible, finding a configuration (i.e. a selection of parameter values for the TSAs and the GCLs) that satisfies the quality of service requirements is not simple.

Common configurations of 802.1Qbv

In the rest of this summary, we present one common configuration of the 802.1Qbv standard. In the manuscript, we introduce several others, all coming from the state of the art. These configurations, or Architectures of Interest, are essentially usage modes of Qbv where a choice of TSA, and of whether or not to use TGs, has been made.

End-to-End TT. In this so-called End-to-End TT configuration, all ports use transmission gates (governed by GCLs) but do not use TSAs. In the absence of a TSA, a frame at the head of a queue is automatically marked "Available for transmission" if the gate of its queue is open. The evolution of the gate status over time thus describes the way queues are served and, as a consequence, the instants at which frames attempt to access the physical link. Among these End-to-End TT configurations, one sub-family is very well represented in the state of the art: so-called exclusive gating configurations.

Definition 2 (Exclusive Gating). In these configurations, the GCLs are designed in a very specific way: when the gate of a queue is open, the gates of all the other queues are closed, and vice versa. This mutual exclusion avoids conflicts between frames marked "Available for transmission" for access to the medium.

We depicted in Fig. 2.8 a visualization of the GCL of a given port implementing the mutual exclusion strategy for two queues. When the gate of one of these two queues is open, the gates of all the other queues are closed, and conversely. On the other hand, when the gates of these two queues are closed, the gates of all the other queues are open.

In summary

In summary, the TSN standard 802.1Qbv - Enhancement for Scheduled Traffic adds, in all ports implementing it, not only new mechanisms for sharing the bandwidth through TSAs, but also frame emission scheduling capabilities through the transmission gates. Based on these new mechanisms, TSN will be able to guarantee bounded network traversal delays as well as a low jitter on the reception date of frames.

Chapter 3 - First contribution: Selection of technologies compatible with the quality of service needs of next-generation satellite networks

This chapter presents the first problem considered in the thesis and summarizes the main elements of our first contribution (Chapters 6 and 7 of the manuscript). In the first section, we state the problem. In the second section, we detail the methodology used for the selection of technologies. We then list the high-level properties representative of the requirements of the next-generation network and the criteria derived from them. In the third section, we present the conclusions of the qualitative comparison and the identification of three candidate technologies.

Problem 1

The first question that naturally arises when seeking to replace the on-board network is the selection of one or several candidate technologies.
Pre-selected technologies

Five technologies were pre-selected by the industrial partner at the beginning of the thesis: Ethernet, ARINC 664, TTEthernet, Time Sensitive Networking and Spacefibre. They are presented in detail in Chapters 3 and 4 of the manuscript. All these technologies offer numerous advantages, detailed in Chapter 5 of the manuscript.

High-Level Properties

Given the diversity of the applications communicating through the network, several messages coming from different applications with different levels of importance (or criticality) may have to share the same equipment or the same link. These applications probably also have different network performance quality of service requirements. The future network will thus have to be able to satisfy these requirements. Moreover, for applications to communicate correctly, they need to share a common notion or base of time (for instance by being synchronized). The future network will thus have to be able to satisfy this requirement. Finally, since resources will be shared between applications, the future network will have to be able to satisfy fault tolerance requirements. In the manuscript, we defined 3 high-level properties: Mixed Traffic Types, Time Management and Fault Tolerance.

Mixed Traffic Types

High-Level Property 1 (Mixed Traffic Types). Ability of the network system to carry, with the same resources, several flows with different characteristics, while guaranteeing that flows of different criticalities will not interfere with each other.

For example, a network compatible with Property 1 must be able to carry, within the same equipment, low-throughput (platform) traffic with quality of service requirements (e.g. latency, jitter) as well as high-throughput (payload) traffic without particular constraints.

Time Management

High-Level Property 2 (Time Management). Ability of the network system to manage time, either by providing a common global clock for all devices or by providing time distribution at application level.

Remark 2. Although the industrial partner may consider in future studies that part of the time management is performed at application level, in this thesis we assumed that time management is done at MAC level (Layer 2 of the OSI model).

Fault Tolerance

Before defining the third property, we must define the notion of fault. Several situations can be distinguished: message losses, erroneous messages, messages not respecting their traffic contracts, or messages arriving early/late. Others exist, but we did not consider them in this study. An erroneous-message situation could, for example, happen to a message sent by one application to another whose content has been altered (e.g. because of a bit flip, a relatively common phenomenon in the space environment).

High-Level Property 3 (Fault Tolerance). Ability of the network system to keep operating in the presence of faults, by detecting, isolating and then repairing certain faults when possible; or by generating error reports (to be used by higher-level fault management entities) when the fault cannot be repaired locally, all of this transparently with respect to the applications.
Overview of the Contribution

The first selection step is expressed by the following problem:

Definition 3 (Problem 1). Given the list of pre-selected technologies, the list of expected properties and the results from the state of the art, is it possible to identify, in a qualitative approach, one or several suitable technologies that could replace the current networks?

In order to address this problem, we defined a set of criteria, per property, to serve as points of comparison. The criteria and the results of the comparison were published in [Chaine et al., "TSN Support for Quality of Service in Space"] and in [Chaine et al., "Comparative study of Ethernet technologies for next-generation satellite on-board networks"]. In the next section, we detail the criteria and the results of the comparison.

Criteria for the comparison

In order to evaluate whether a technology can fulfil a property or not, this property must be refined into more precise criteria. In the literature, the Federal Aviation Administration has proposed a list of criteria that can be used to evaluate the relevance of technologies for current and future avionics embedded networks. The criteria we present below are largely inspired by them. They were designed to be representative of the satellite system.

Criteria for Property 1

Property 1, Mixed Traffic Types, translates into four criteria: High Throughput, Bounded Latency, Very Low Jitter and Mixed Criticality.

Criterion 1 (High Throughput). Is the considered technology able to operate at a data rate greater than or equal to 1 Gbit/s?

This criterion is used to eliminate technologies that are too "slow" with respect to future needs. For instance, even though they were not pre-selected anyway, 1553 and SpaceWire do not satisfy this criterion.

Criterion 2 (Bounded Latency). Is the considered technology able to guarantee bounds on the latencies of one or several communications through the network?

This criterion and the next one are used to eliminate technologies that would not be able to provide the level of network performance quality of service sought for future satellites.

Criterion 3 (Very Low Jitter). Is the considered technology able to guarantee a reception jitter lower than or equal to 1 µs for one or several communications through the network?

Criterion 4 (Mixed Criticality). Is the considered technology able to carry, on the same link, platform traffic (with bounded latency and low jitter requirements) and payload traffic (high throughput) without one of the traffics affecting the performance of the other?

Criteria for Property 2

Three criteria are identified for Property 2 - Time Management: MAC-level Synchronization, Robustness of the synchronization algorithms, and Interfaces with the application layer.

Criterion 5 (MAC-level Synchronization). Does the technology provide a synchronization protocol at MAC level?

Criterion 6 (Robustness of the synchronization algorithms). Does the time management/synchronization algorithm have robustness mechanisms?

This criterion anticipates the high importance of the synchronization/time distribution function in the satellite.
If all operations are carried out on the basis of time information coming from the network, the mechanism in charge of distributing and maintaining this information must not fail.

Criterion 7 (Interfaces with the application layer). Is the time management/synchronization algorithm associated with a standardized way of interacting to and from the applications?

This criterion is linked to the first criterion of Property 2: if the network is in charge of time distribution in the satellite and synchronization takes place at MAC level, it would be desirable to have a standardized way of sharing time information to and from the applications running on top of the network. Indeed, if the application-network interface varied from one supplier to another, it would generate an additional cost to adapt the applications to the underlying equipment.

Criteria for Property 3

Four criteria are defined for Property 3 - Fault Tolerance: Fault detection capability, Fault reporting capability, Redundancy capability and Fault containment capability.

Criterion 8 (Fault detection capability). Is the technology able to detect faults corresponding to the following situations:
• Incorrect message?
• Lost message?
• Message not respecting its traffic contract?
• Message arriving early/late?

This criterion is used to identify the capability of the candidate technologies to detect incorrect messages (e.g. incorrect checksums), lost messages, messages arriving too early or too late (e.g. a flow emitting traffic outside a time window that may have been allocated to it) or messages not respecting their traffic contract (e.g. a flow emitting more data than allowed by its resource reservation).

Criterion 9 (Fault reporting capability). Is the technology able to report the errors it has previously identified, in a direct or indirect way?

The next-generation network must be able to report errors directly, e.g. with interrupts, or indirectly, e.g. with statistics periodically consulted by software dedicated to error management.

Criterion 10 (Redundancy capability). Does the technology have a mechanism for redundancy?

Redundancy has indeed been identified by the industrial partner as a way to tolerate the loss of a message. This is why the technology of the future network must provide such a mechanism.

Criterion 11 (Fault containment capability). Is the technology able to isolate/contain or even eliminate the errors it has detected?

This behaviour is classical in an industrial real-time network. The objective is to prevent the error from propagating, i.e. from disturbing the nominal behaviour of other systems using the network. For instance, a switch connected to a so-called babbling-idiot device (i.e. a device that permanently emits messages, outside of its traffic contract) must be able to eliminate the surplus messages so that they do not propagate through the network and cause potential queue overflows. We will now evaluate the relevance of each pre-selected technology and compare them with each other on the basis of the analysis of each of the previously defined criteria.
Results and Analysis

To carry out the comparison, we evaluated the compatibility of each technology with each criterion.

Remark 3. An important point to mention is that this comparison, and the criteria associated with it, evaluate what can be achieved using the considered technology at MAC level, and not what could be done through a clever implementation or usage of the technology coupled with other elements in the system.

In this summary, we do not detail the decision process for the compatibility of each technology with each criterion. These elements are available in Chapter 7 of the manuscript. That being said, we provide below three summary tables of the comparison results.

[Tables: per-criterion compatibility of each pre-selected technology.]

First, the components embedded in satellites have strong constraints at both hardware and software levels. These constraints, contrary to the aeronautics domain, are not linked to a certification process but rather to an ability to operate in an environment with very specific temperatures, electromagnetic fields and radiation levels. This means that the industrial partner in charge of the satellite network must either buy end-stations and switches designed for this specific space environment (i.e. space-hardened) or invest in software IP (Intellectual Property) cores to be instantiated in the space-hardened components at its disposal. TTEthernet (via TTTech) and Spacefibre have already been, or will soon be, used in various missions in space, in Europe as well as in the United States. Their use in a space context is standardized by the European Space Agency (ESA) in what is called an ECSS standard. It would therefore be possible to find space-hardened components supporting TTEthernet or Spacefibre on the market. Concerning TSN, the components currently available are not space-hardened, but the technology is gaining so much interest, in particular in the automotive industry and in the industrial automation sector, that it would be quite conceivable to buy TSN IP cores that would then be instantiated in the industrial partner's space-hardened hardware.

Second, the players of the space industry hope that the use of components off-the-shelf (COTS) coming from a technology widely spread in other industrial sectors could contribute to reducing the overall cost of the satellite by benefiting from reduced design, equipment purchase and software development costs. In that sense, the fact that TTEthernet and Spacefibre are produced and maintained by only a very limited number of suppliers would be an obstacle to their use for the future on-board network. Conversely, the growing number of TSN component suppliers could attract the industry towards this technology. However, the TSN products currently available on the market might not be compatible with the space environment or the quality of service requirements. This would mean that an effort would be needed to adapt these products to the satellite, which could increase costs. Nevertheless, the expected reduction in non-recurring costs linked to the use of TSN could still attract some industrial players.
C'est pourquoi définir un profil spatial (comme celui défini pour l'industrie automobile) serait un bon point de départ pour donner une identité au secteur aérospatial vis-à-vis des fournisseurs de composants TSN. Troisièmement, concernant les aspects validation et certification, il y a un certain avantage à utiliser TTEthernet ou bien Time Sensitive Networking plutôt que Spacefibre. En effet, TTEthernet et Time Sensitive Networking sont tous deux basés sur Ethernet, pour lequel de très nombreuses recherches sur la validation/certification sont disponibles dans l'état de l'art. De plus, puisque ces deux technologies suscitent l'intérêt des industriels de divers secteurs, elles ont aussi suscité l'intérêt de plus d'acteurs du monde académique que Spacefibre. Il y a déjà des outils disponibles pour simuler, configurer et valider des réseaux TTEthernet et les outils similaires pour TSN sont en train de gagner en maturité. A l'inverse, SpaceFibre a été modélisé dans le simulateur OMNET++ mais de plus amples analyses sont nécessaires pour valider les performances temps réel proposées par sa politique d'accès au médium. Il n'y a donc pas de grand gagnant parmi ces trois technologies finalistes mais plutôt des compromis à préciser et à discuter pour choisir quelle(s) technologie(s) sera(ont) utilisée(s) dans le futur.
Conclusion sur la première contribution
Pour conclure sur cette première contribution, la comparaison qualitative, basée sur un ensemble de critères représentatifs des propriétés attendues, nous a amenés à identifier trois candidats pertinents pour remplacer les réseaux embarqués actuels. En dehors des aspects performances du réseau, de nombreuses considérations doivent être faites quant à l'usage de ces technologies dans l'espace. Des études plus approfondies sont nécessaires afin de décider sur quelle(s) technologie(s) le réseau "unifié" nouvelle génération reposerait. Nous avons proposé dans le paragraphe précédent un aperçu des avantages et des inconvénients de chaque technologie et de certains compromis à prendre en compte dans le choix. La route est encore longue avant qu'une décision ne soit prise sur le futur réseau embarqué. Une des pistes les plus plausibles est que non pas une mais plusieurs technologies seront sélectionnées, chacune adaptée à un type spécifique de mission. Par exemple, Time Sensitive Networking pourrait être utilisé pour des constellations de satellites dans un contexte New Space où le nombre de composants pour la constellation entière est relativement élevé. L'industriel pourrait ainsi bénéficier du marché TSN existant. TTEthernet ou Spacefibre pourraient, quant à eux, être utilisés pour des missions spécifiques mono-satellite où des composants conçus pour l'environnement spatial sont vraiment nécessaires. Puisque Time Sensitive Networking était totalement nouveau dans l'industrie aérospatiale, nous continuons notre analyse de la compatibilité de TSN en creusant davantage la capacité de la technologie à répondre aux besoins.
Chapitre 4
Seconde contribution : Génération de configuration d'un réseau TSN pour des satellites nouvelle génération
Ce chapitre présente les principaux éléments de notre seconde contribution. Il résume les chapitres 9, 10 et 11 du manuscrit. Dans la première section, nous posons le second problème. Puis, nous présentons dans la section suivante notre méthodologie novatrice pour la génération de configuration réseau.
Nous appliquons, dans la troisième section, cette méthodologie aux réseaux TSN selon deux "implémentations" (comprendre, de deux manières différentes) : Exclusive Queue Allocation et Size Based Isolation.
Problème 2
TSN est composé d'une vingtaine de standards et ce nombre est encore en train de croître. Monter en compétence et développer une expertise sur cette technologie et tous les mécanismes qu'elle propose, afin de les exploiter à leur plein potentiel, est une activité ardue. Néanmoins, tous les standards (et leurs mécanismes) ne sont peut-être pas nécessaires pour satisfaire les exigences des futures missions. C'est pourquoi l'objectif de ce deuxième problème est de réduire l'effort d'expertise de la technologie et de conception du réseau, d'une part en identifiant un sous-ensemble de standards suffisant pour répondre aux besoins de l'industriel et d'autre part en proposant une méthode de configuration automatique du sous-ensemble de standards prenant en compte ces besoins. Pour traiter ce deuxième problème, nous proposons une modélisation formelle du réseau satellite (notamment des flux qui le traversent). De même, nous précisons puis formalisons les exigences en qualité de service qui pourraient être rencontrées dans différents systèmes spatiaux. Enfin, une fois que le modèle et les exigences ont été proprement définis, nous présentons à la fin de cette section le problème et un aperçu de notre seconde contribution.
Modélisation des applications
Dans notre modèle, chaque flux f est caractérisé, entre autres, par un ratio r_f qui relie sa période P_f à la période MIF du système, P_MIF, selon l'équation (4.2) :
∀f ∈ F :
  r_f ≤ -1 ⟹ P_f = |r_f| × P_MIF (un message est émis toutes les |r_f| périodes MIF) ;
  r_f > 1 ⟹ r_f messages par P_MIF, i.e. P_f est la période pendant laquelle r_f messages sont envoyés ;
  r_f = 0 ⟹ P_f = NA (f est non périodique).    (4.2)
Dans la thèse, nous nous sommes limités à des flux à ratio négatif. Ainsi, les flux considérés sont périodiques à l'échelle des applications, c'est-à-dire qu'exactement un message est produit à chaque occurrence de l'application qui lui est associée. Cependant, les émissions de messages ne sont pas strictement périodiques, i.e. espacées exactement d'une période du flux. Au contraire, les instants d'émission peuvent varier du moment qu'ils respectent la période de l'application. Nous avons représenté dans la figure Fig. 4.2 le concept de ratio pour deux flux, en vert avec r_f = 2 et en bleu avec r_f = -4 ; les flèches représentent les instants d'émission de messages dans le réseau.
[Figure 4.2 -Concept de ratio pour deux flux (r_f1 = 2, r_f2 = -4) et k = 8]
Restriction du modèle au niveau flux
Dans le modèle complet publié dans [START_REF] Chaine | TSN Support for Quality of Service in Space ?[END_REF], les spécifications et les contraintes du système étaient exprimées au niveau des applications. Dans le manuscrit, nous détaillons la manière dont une application émet et reçoit des messages puis nous proposons une restriction de notre modèle au niveau des flux afin de le simplifier dans une première approche.
Exigences de qualité de service
Dans les prochains paragraphes, nous proposons un aperçu des types d'exigences en qualité de service existant dans les systèmes de l'industriel. Ils sont dérivés des propriétés 1, 2 et 3 de la première contribution.
Choix 1. Dans notre étude, nous nous sommes limités aux exigences de la propriété 1 et à une exigence issue de la propriété 3. En effet, la priorité était d'abord de s'assurer qu'il était possible de générer des configurations répondant aux besoins en performances avant de s'intéresser à la tolérance aux fautes.
Date de référence
Avant de présenter les exigences en qualité de service, nous devons définir la notion de date de référence.
Définition 8 (Ref(f_l)). La date de référence d'un message f_l est définie par la formule suivante : Ref(f_l) = l × P_f. Elle correspond au début de la l-ième période applicative, i.e. la période dans laquelle f_l doit être émis. Il devra être émis au plus tard avant la date de référence du message qui le suit (ici f_l+1). Nous illustrons ce concept avec la figure Fig. 4.3.
[Figure 4.3 -Dates de référence de deux messages f_l et f_l+1 avec r = -2]
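Pour fixer les idées, voici une traduction directe de l'équation (4.2) et de la définition 8 en Python. Cette esquisse est purement illustrative : les noms de fonctions et la valeur de P_MIF sont des hypothèses de présentation, pas des éléments du modèle du manuscrit.

```python
P_MIF = 0.1  # hypothèse illustrative : période MIF de 100 ms

def periode(r_f, p_mif=P_MIF):
    """Période P_f d'un flux de ratio r_f, d'après l'équation (4.2)."""
    if r_f <= -1:
        return abs(r_f) * p_mif  # un message émis toutes les |r_f| MIF
    if r_f > 1:
        return p_mif             # r_f messages émis pendant P_f = P_MIF
    return None                  # r_f = 0 : flux non périodique

def ref(l, p_f):
    """Date de référence du l-ième message : Ref(f_l) = l * P_f (Définition 8)."""
    return l * p_f

p_f = periode(-4)          # flux à ratio négatif : un message toutes les 4 MIF
print(p_f, ref(3, p_f))    # P_f = 0.4 s ; Ref(f_3) = 1.2 s
```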
Exigences de type Mixité des trafics
Dans ce paragraphe, nous proposons un aperçu des contraintes de qualité de service en performance identifiées pour les flux. Nous restons volontairement haut niveau dans la formalisation pour ce résumé. Les équations détaillées du modèle sont disponibles dans le manuscrit.
Exigence de Performance 1 (Deadline). Soit un flux f ∈ F, ce flux a obligatoirement une contrainte de deadline. Cette contrainte implique que la réception des messages du flux f doit avoir lieu avant une certaine date à chaque période de f. En particulier, cela signifie que, au plus tard, le message f_l doit être reçu avant le début de la fenêtre d'émission du message f_l+1. Dans le cas où la deadline est égale à la période du flux, le flux est dit "à échéance sur requête". Nous avons représenté dans la figure Fig. 4.4 des exemples de contraintes de deadline pour deux flux à échéance sur requête.
[Figure 4.4 -Deadlines pour deux flux (r_f1 = 2 × P_MIF, r_f2 = 4 × P_MIF) et k = 8]
Exigence de Performance 2 (Jitter). Un flux f peut avoir une exigence de gigue, définie par f.jitter ∈ N ∪ {NA}. NA signifie qu'il n'y a pas de contrainte ; autrement, f.jitter est la valeur maximale acceptée pour la gigue pour le flux f.
[Figure 4.5 -Gigue entre les messages f_l et f_m : latence Lat(f_l) = T_d(f_l) - Ref(f_l), gigue Jit(f_l, f_m) et fenêtre d'acceptation de la gigue]
Considérations Industrielles
Contrat de Production et Date de Libération
Dans les systèmes de l'industriel, les applications sont soumises à des contrats de production. Cela signifie que chaque flux devra émettre ses messages dans une fenêtre temporelle, définie par contrat (voir Fig. 4.6). Si les émissions de messages ont bien lieu dans cette fenêtre, les exigences de qualité de service et de tolérance aux fautes de ces messages seront automatiquement satisfaites. Ces contrats sont discutés entre les architectes du satellite pendant les phases de design. Nous considérons dans notre modèle que les applications respectent toujours leurs contrats et que, de fait, le réseau assure le respect des exigences de qualité de service des applications.
[Figure 4.6 -Ref(f_l), TSAP(f_l) et Prod(f_l)]
Définition 12 (Contrat de production). Soit f_l le l-ième message de f. Le contrat de production associé à f_l est l'intervalle Prod(f_l) = [Ref(f_l) + B-(f_l), Ref(f_l) + B+(f_l)] ⊆ [Ref(f_l), Ref(f_l+1)[, où B-(f_l) (resp. B+(f_l)) est le décalage de production au plus tôt (resp. au plus tard). La borne supérieure de Prod(f_l) est appelée date de libération et notée Release(f_l) = Ref(f_l) + B+(f_l). Le contrat de production d'un flux f, noté Prod(f), est défini comme suit : Prod(f) = ∪_{l∈N} Prod(f_l).
Schémas d'émission
Dans la version courante du système satellite, les communications sur le réseau plateforme sont réalisées la plupart du temps sur un bus 1553. Les émissions des applications sont gérées à l'aide d'accès programmés pré-calculés à une liste de descripteurs qui "prépareront" la trame avant son émission, et non à l'aide de files d'attente comme ce serait le cas pour un réseau Ethernet par exemple. Grâce à ce schéma d'émission, les applications peuvent déposer leurs messages quand elles le souhaitent. Il n'y a pas de contraintes particulières du moment que le message est déposé avant la lecture du descripteur. Les logiciels applicatifs exécutés à bord du satellite s'appuient sur ce schéma d'émission. Ainsi, changer ce schéma aurait un impact significatif sur le volume de code qui devrait être redéveloppé. De fait, les configurations du réseau TSN que nous allons générer doivent minimiser l'effort de redéveloppement et, de manière générale, l'impact sur les applications.
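Les notions de latence, de gigue et de contrat de production introduites ci-dessus peuvent se résumer par l'esquisse Python suivante. Elle est purement illustrative : la gigue y est calculée comme l'étendue des latences d'un même flux, une simplification de la définition par paires de messages du manuscrit, et les valeurs numériques sont arbitraires.

```python
def latence(t_reception, l, p_f):
    """Latence applicative du l-ième message : Lat(f_l) = T_d(f_l) - Ref(f_l)."""
    return t_reception - l * p_f

def gigue(latences):
    """Gigue observée sur un flux : étendue des latences de ses messages."""
    return max(latences) - min(latences)

def dans_contrat(t_emission, l, p_f, b_moins, b_plus):
    """f_l respecte-t-il Prod(f_l) = [Ref(f_l)+B-, Ref(f_l)+B+] (Définition 12) ?"""
    ref = l * p_f
    return ref + b_moins <= t_emission <= ref + b_plus

# Quatre réceptions successives d'un flux de période 0.4 s
lats = [latence(t, l, 0.4) for l, t in enumerate([0.05, 0.46, 0.84, 1.25])]
print(lats, gigue(lats))                      # latences par message, puis gigue
print(dans_contrat(0.46, 1, 0.4, 0.0, 0.4))   # contrat maximal : B- = 0, B+ = P_f
```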
L'idée est la suivante : en maximisant la fenêtre pendant laquelle une application peut émettre ses messages, nous nous rapprochons au plus près du schéma d'émission du 1553. Le réseau sera alors en charge de transporter convenablement le message pour qu'il atteigne ses exigences de qualité de service. Pour ce faire, nous proposons la contrainte suivante sur les configurations :
Exigence sur l'Effort de Développement 1. Toute configuration doit maximiser la longueur de Prod(f) (voir Déf. 12).
En particulier, les configurations où B-(f_l) = 0 permettent aux applications de commencer à exécuter des tâches au début de la période MIF et la maximisation de B+(f_l) (idéalement B+(f_l) = P_f) laisse plus de temps disponible pour exécuter des tâches durant le cycle MIF.
Discussions sur le coût de la modernisation du réseau
En plus de l'impact applicatif, la modernisation du réseau a aussi un coût important sur les composants qui constituent le réseau. En effet, passer de l'architecture 1553+SpaceWire à une architecture Ethernet/TSN a deux impacts notables. Premièrement, le remplacement de tous les composants actuels par des composants supportant le TSN représente un coût considérable que l'industriel n'est pas prêt à accepter. Deuxièmement, nous l'avons déjà dit, TSN est composé de plus de vingt standards. Plus il y aura de standards implémentés dans les composants, plus l'analyse de la défaillance de ces composants (et du logiciel associé) sera complexe. Dans l'architecture actuelle, les stations terminales qui reçoivent des messages (par exemple des capteurs) sont extrêmement simples et c'est l'ordinateur central du satellite (OBC - On-Board Computer) qui concentre toute l'intelligence. De fait, les configurations du réseau doivent tenter de minimiser le coût de la modernisation en limitant l'utilisation des mécanismes TSN et le nombre d'équipements implémentant ces mécanismes. Une autre solution aurait peut-être été de remettre en cause la simplicité des receveurs et de distribuer l'intelligence à travers le réseau, mais cela n'a pas été réalisé pendant cette thèse.
Aperçu de la contribution
Afin de démontrer la compatibilité de Time Sensitive Networking avec les exigences de l'industrie aérospatiale, nous proposons de générer des configurations du réseau qui satisferaient ces exigences. Nous nous sommes d'abord demandé :
Définition 13 (Problème 2 -Étape 1). Sachant les exigences de qualité de service des futures missions, quel est le plus petit sous-ensemble de standards TSN nécessaire pour satisfaire ces exigences ?
En effet, la configuration des standards TSN n'est pas aisée, de par leur nombre et leur complexité. Afin de réduire l'effort de l'architecte réseau dans sa conception d'un réseau TSN pour les futures générations de satellites, nous trouvons pertinent de réduire au maximum la liste des standards à utiliser.
Choix 2 (Choix des standards TSN). Nous avons présenté un bref aperçu d'un sous-ensemble, que nous pourrions appeler Profil dans le vocabulaire IEEE, dans [START_REF] Boyer | Experimental assessment of timing verification techniques for AFDX[END_REF] puis dans [START_REF] Chaine | Formal Specification of Satellite On-Board Networks Requirements[END_REF]. Ce profil est toujours en train d'être consolidé dans un groupe de travail commun entre l'IEEE et la SAE (voir IEEE/SAE P802.1DP TSN Profile for Aerospace). Ce profil contient un certain nombre de standards mais, dans cette étude, nous l'avons réduit au standard 802.1Qbv uniquement afin de traiter les exigences de qualité de service du réseau nouvelle génération.
Ce choix est motivé par trois raisons. D'abord, il n'existait parmi les standards TSN, au moment d'écrire ce manuscrit, que deux standards capables d'assurer une faible gigue en réception : 802.1Qbv et 802.1Qcr. Ensuite, la majorité des travaux de l'état de l'art étaient basés sur 802.1Qbv pour la gestion de trafic avec gigue. Enfin, à l'inverse de 802.1Qbv, il n'y avait aucun composant implémentant le standard 802.1Qcr sur le marché. Une fois le sous-ensemble de standards choisi, nous nous sommes demandé :
Définition 14 (Problème 2 -Étape 2). Sachant les exigences de qualité de service à bord et sachant les méthodologies existantes dans l'état de l'art, est-il possible de générer des configurations valides du standard 802.1Qbv ?
Définition 15 (Configuration valide). Une configuration est dite "valide" si elle permet de satisfaire toutes les exigences de qualité de service de tous les flux.
Il est en fait relativement facile, à l'aide des méthodologies de l'état de l'art, de générer des configurations valides. Cependant, nous souhaitons générer des configurations valides qui soient plus acceptables sur le plan industriel, c'est-à-dire avec un impact/coût modéré sur les composants, les logiciels et le processus d'intégration du satellite. Afin de traiter ce second problème, nous proposons de mettre en place une manière automatisée de générer des configurations valides. En nous basant sur les méthodologies de l'état de l'art, nous allons générer des configurations à l'aide de la programmation par contraintes. Dans le manuscrit, nous présentons dans le chapitre 9 les méthodologies de l'état de l'art, un modèle de configuration du standard 802.1Qbv et des cas d'usage industriels sur lesquels nous allons appliquer notre processus de configuration automatique. Nous ne les détaillerons pas dans ce résumé. Dans la prochaine section, nous introduisons le concept novateur de configurations dites "Egress TT". Nous discutons brièvement de ses avantages et de ses inconvénients puis nous présentons deux adaptations (ou implémentations) de ce concept pour des réseaux TSN.
Egress TT, une nouvelle approche pour la génération de configuration de réseaux à faible gigue
Dans le chapitre 9 du manuscrit, nous avons présenté End-to-End TT, l'approche de configuration la plus répandue dans l'état de l'art. Elle consiste en la génération de configurations à l'aide de la programmation par contraintes. Les configurations consistent en un ordonnancement statique des émissions de toutes les trames sur tous les équipements du réseau. Cet ordonnancement est ensuite converti en une configuration du mécanisme Time Aware Shaper de TSN. Dans la section 9.3 du manuscrit, nous avons identifié les limitations en termes d'impact sur les applications, de passage à l'échelle et de coût de mise à niveau de l'approche End-to-End TT vis-à-vis du problème que nous considérons. De fait, nous présentons dans cette section une nouvelle approche de configuration intitulée Egress TT, où un ordonnancement des émissions n'est calculé que sur le dernier commutateur du réseau dans le chemin de n'importe quel message. Les configurations de l'approche Egress TT, dites configurations Egress TT, sont générées à l'aide de la programmation par contraintes, à l'image de l'état de l'art.
Définition 16 (Configurations Egress TT). Dans une configuration Egress TT, les dates d'émission des flux avec gigue sont pré-calculées dans le dernier commutateur de leur chemin. Dans tous les autres équipements, la stratégie d'accès au médium n'est pas spécifiée : un architecte réseau pourra ainsi choisir la stratégie d'accès qu'il souhaite, à partir du moment où une borne sur la latence des trames pourra être calculée.
L'approche Egress TT a été conçue afin de fournir une méthodologie plus compatible/plus industrialisable que celles existantes dans l'état de l'art vis-à-vis du système satellite.
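L'un des bénéfices attendus de cette approche est la réduction du nombre de dates d'émission à calculer. L'esquisse suivante en donne un ordre de grandeur, en ne comptant qu'une trame par flux ; il s'agit d'une simplification de présentation, pas d'une métrique du manuscrit.

```python
def nb_dates_a_calculer(chemins, egress_tt=False):
    """End-to-End TT planifie chaque trame sur chaque équipement traversé ;
    Egress TT ne planifie que sur le dernier commutateur du chemin."""
    if egress_tt:
        return len(chemins)                        # une date par flux
    return sum(len(chemin) for chemin in chemins)  # une date par saut

# Deux flux traversant respectivement 3 et 2 équipements
chemins = [["ES1", "SW1", "SW2"], ["ES2", "SW2"]]
print(nb_dates_a_calculer(chemins))                  # 5 dates (End-to-End TT)
print(nb_dates_a_calculer(chemins, egress_tt=True))  # 2 dates (Egress TT)
```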
Nous présenterons dans la prochaine section une application de cette approche à des réseaux basés Ethernet et TSN, mais elle peut, en principe, être appliquée à n'importe quelle technologie capable de véhiculer des trames de manière dirigée par le temps.
Remarque 8. Cette approche Egress TT (appelée LETT dans [START_REF] Baron | LETT: An execution model for distributed real-time systems[END_REF]) est inspirée des approches LET (pour Logical Execution Time) où des applications peuvent absorber la variabilité du délai de communication (et donc des dates de réception) en bufférisant les messages avant de les délivrer à leurs destinataires à des instants fixes prédéfinis. Néanmoins, Egress TT se distingue par le fait qu'elle est adaptée aux systèmes dans lesquels l'architecte ne souhaite pas avoir à gérer la gigue des trames dans l'équipement de destination. Par exemple, dans le système de l'industriel, les équipements receveurs n'ont pas ce genre de capacité pour le moment. Afin d'éviter de devoir concevoir de nouveau ces équipements, Egress TT propose de déléguer au réseau (dont les équipements et les logiciels seront obligatoirement remplacés) cette gestion des gigues.
Comment fonctionne Egress TT ?
Nous décrivons ci-après le principe de fonctionnement des configurations Egress TT en étudiant comment un message f_l d'un flux f avec gigue est géré dans ces configurations. Par définition, le message f_l doit être émis dans une fenêtre définie par son contrat de trafic et l'émission peut avoir lieu à n'importe quel instant durant cet intervalle de temps. Par ailleurs, nous supposons que le délai de traversée du réseau pour le message f_l peut être borné et nommons NetLatBound(f_l) une borne. Nous illustrons cette situation dans la figure Fig. 4.8. Dans la situation la plus défavorable, le message f_l arrivera dans le dernier commutateur de son trajet après NetLatBound(f_l).
Définition 17 (NetLatBound). Une borne supérieure sur le délai de traversée du réseau est dénotée, ∀f ∈ F, ∀l ∈ N, NetLatBound(f_l).
En pratique, il n'y a pas d'exigences particulières sur la politique d'accès au médium avant le dernier commutateur sur le chemin de f à partir du moment où une borne NetLatBound peut être calculée. Cela implique notamment que nous autorisons un message à être retardé par un conflit d'accès au médium (par exemple avec Static Priority). Le commutateur du dernier saut est en charge d'absorber la variabilité du délai (c'est-à-dire la gigue) et de fournir le message à l'application en respect de ses contraintes de gigue. En particulier, si le message est arrivé en avance, il sera bufférisé afin d'être émis à une date fixe. Pour s'assurer que le message sera reçu en respectant les contraintes de gigue, il suffit que :
• le message soit reçu dans le dernier commutateur avant de devoir être envoyé (à la date fixe pré-calculée) ;
• le message ne soit pas envoyé du dernier commutateur vers le destinataire avant sa date d'émission prévue.
Ainsi, indépendamment de l'instant où le message est émis par l'application, il sera envoyé à son destinataire à une date fixe, ce qui permet de satisfaire les exigences de gigue.
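Une manière simple, parmi d'autres et purement illustrative, de choisir la date d'émission fixe consiste à prendre la date de libération du contrat de production augmentée de la borne NetLatBound : le message est alors certain d'être arrivé dans le dernier commutateur avant cette date. L'esquisse ci-dessous simule ce comportement ; la durée de transmission du dernier saut est ignorée pour simplifier.

```python
def date_emission_fixe(ref, b_plus, net_lat_bound):
    """Date programmée sur le dernier commutateur : Release(f_l) + NetLatBound."""
    return ref + b_plus + net_lat_bound

def simule(t_emission, delai_reel, t_fixe):
    """Le dernier saut bufférise le message puis l'émet à la date fixe :
    la variabilité du délai du coeur de réseau est absorbée."""
    arrivee_dernier_saut = t_emission + delai_reel
    assert arrivee_dernier_saut <= t_fixe, "borne NetLatBound violée"
    return t_fixe  # date de réception chez le destinataire

t_fixe = date_emission_fixe(ref=0.0, b_plus=0.4, net_lat_bound=0.002)
# Deux émissions et deux délais différents conduisent à la même date de sortie,
# donc à une gigue nulle en réception.
print(simule(0.05, 0.0004, t_fixe), simule(0.39, 0.0018, t_fixe))
```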
Quel est l'intérêt de Egress TT ?
L'approche Egress TT est intéressante pour plusieurs raisons.
Coût en temps et en ressources. Tout d'abord, elle limite le nombre de dates d'émission à calculer pour chaque nouvelle configuration. Ce résultat est illustré dans la section 4.3.4 des expérimentations.
Impact sur les applications. De plus, ce concept de configurations Egress TT, où les émissions de messages sont programmées uniquement dans les derniers sauts, offre plus de flexibilité pour les contrats de production. Ces contrats sont plus grands que ceux offerts par End-to-End TT, seule solution à l'heure actuelle à permettre une faible gigue. En ce sens, Egress TT répond à nos critères.
Transition vers des réseaux nouvelle génération. Enfin, les configurations Egress TT pourraient faciliter la transition des architectures de réseaux embarqués actuelles vers de nouvelles technologies. En effet, seul le dernier commutateur dans le chemin de n'importe quel flux nécessite véritablement d'être remplacé. De fait, si le coeur de réseau existant est déjà compatible avec les autres exigences du système considéré (notamment la bande passante), seuls quelques équipements auraient à être remplacés, conduisant à un coût nettement moins important que le remplacement de l'intégralité du réseau imposé par des configurations End-to-End TT.
Applicabilité de Egress TT
L'approche Egress TT convient bien à des systèmes où la gigue en réception est une contrainte très forte et où la latence l'est moins. Afin de ne pas sur-contraindre le coeur de réseau, la faible gigue est obtenue en "sacrifiant" la latence. En effet, si un message est émis au début de son contrat de production et bénéficie d'une situation favorable pour traverser le réseau, il devra attendre une certaine durée dans le dernier commutateur avant de pouvoir être effectivement envoyé au receveur. Cette situation va résulter en une latence bien supérieure à celle que l'on pourrait observer dans des configurations End-to-End TT. Bien que ce compromis latence/gigue soit compatible avec les exigences de l'industrie aérospatiale, les configurations Egress TT pourraient ne pas être adaptées à des systèmes où non seulement les gigues mais aussi les latences doivent être minimales. Dans les prochains paragraphes, nous décrivons brièvement deux implémentations de cette approche pour des réseaux Ethernet/TSN.
Exclusive Queue Allocation et Size Based Isolation, deux implémentations de Egress TT pour les réseaux TSN
Intéressons-nous maintenant à l'application du concept de configuration Egress TT aux réseaux TSN basés sur le standard 802.1Qbv. Nous proposons dans le manuscrit deux implémentations (voir définition ci-après) intitulées Exclusive Queue Allocation et Size Based Isolation. Pour obtenir des paramètres de configuration de ces deux implémentations, comme détaillé dans le chapitre 9 du manuscrit, nous nous appuyons sur la technique récurrente dans l'état de l'art, c'est-à-dire une modélisation du système dans un problème de programmation par contraintes. Il faut donc définir un modèle formel du réseau ainsi qu'un ensemble de contraintes décrivant les exigences du système.
Définition 18 (Implémentation). Dans ce document et dans le manuscrit, nous utilisons le mot implémentation pour décrire un ensemble de contraintes utilisées pour la génération des configurations réseau. De fait, une implémentation de Egress TT appliquée aux réseaux TSN est un ensemble de contraintes liées au modèle de configuration de TSN (défini dans le chapitre 9 du manuscrit) qui permettra de générer, à travers un solveur de contraintes, les configurations Egress TT du réseau.
Le terme implémentation ne sera pas utilisé pour décrire la configuration d'équipements réels sur un démonstrateur sur la base des paramètres de configuration que nous générons.
Une première implémentation de Egress TT pour les réseaux 802.1Qbv : Exclusive Queue Allocation
Dans cette première implémentation de Egress TT, nous nous appuyons sur la contrainte d'Exclusive Queue Allocation (détaillée ci-après) pour satisfaire l'exigence d'indépendance à la perte d'un message du système. Une explication détaillée de l'origine et de la nécessité de cette contrainte d'isolation dans les réseaux TSN est disponible dans la section Related Works du chapitre 9.
Hypothèses pour l'adaptation de Egress TT aux réseaux TSN
Nous considérons deux hypothèses sur notre modèle pour pouvoir générer ces configurations.
Hypothèse 3 (Synchronisation). Nous prenons l'hypothèse que les émetteurs et les commutateurs du dernier saut sont synchronisés et que l'erreur de synchronisation est négligeable vis-à-vis des exigences du système.
Hypothèse 4 (Chemin fixe). De manière similaire à l'état de l'art, nous prenons l'hypothèse que le chemin des flux dans le réseau est connu a priori et n'évolue pas durant la vie du système (hors mise à jour).
Choix 3 (Ethernet + Static Priority). Dans le reste de ce document et dans le manuscrit, nous avons choisi Static Priority, disponible dans Ethernet, comme politique d'accès au médium dans le coeur du réseau.
Choix 4 (TSN Time Aware Shaper). Dans le reste de ce document et dans le manuscrit, nous avons choisi le mécanisme TSN Time Aware Shaper comme mécanisme d'ordonnancement des émissions dans les commutateurs des derniers sauts.
Choix 5 (Politique d'accès au médium dans les ports du dernier saut). Dans les ports du dernier saut, nous définissons la politique d'accès au médium suivante, dérivée du principe d'exclusion mutuelle de file : quand la porte (i.e. la transmission gate) d'une file contenant des messages de flux avec gigue est ouverte, toutes les autres portes du port sont fermées. Cela signifie en particulier que les portes des files contenant uniquement du trafic provenant de flux sans gigue peuvent être ouvertes en même temps.
Choix 6 (Indépendance à la perte d'un message). Nous considérons que la propriété d'indépendance à la perte d'un message ne s'applique qu'aux flux avec gigue. Cela implique que si la perte d'un message provenant d'un flux sans gigue affecte un autre flux sans gigue, la configuration sera tout de même valide.
Nous pouvons maintenant définir la contrainte d'Exclusive Queue Allocation.
Concept d'Exclusive Queue Allocation
Afin de satisfaire l'exigence de tolérance aux fautes dans les configurations Egress TT pour les réseaux TSN, au lieu de réutiliser les contraintes d'isolation de l'état de l'art qui amènent à un impact applicatif fort, nous introduisons notre propre contrainte : Exclusive Queue Allocation.
Définition 19 (Exclusive Queue Allocation). Dans un système respectant la contrainte d'Exclusive Queue Allocation, chaque flux avec gigue est associé à une file spécifique dans son port du dernier saut. Aucun autre flux ne peut utiliser cette file.
En assurant une isolation spatiale entre les flux (puisqu'il ne peut y avoir que des messages d'un seul flux dans une file), le non-déterminisme introduit par le mécanisme Time Aware Shaper est éliminé et la contrainte d'indépendance à la perte de message est satisfaite. En effet, si un message d'un flux avec gigue est perdu dans (ou avant) le port du dernier saut, puisque la stratégie d'accès au médium sur le dernier saut est basée sur de l'exclusion mutuelle de file, aucun message provenant d'une autre file ne peut prendre la place du message perdu. Si cela arrivait, cela aurait pour effet de créer de la gigue pour ce message. Nous l'illustrons dans la figure Fig. 4.9.
[Figure 4.9 -Gigue introduite par la perte d'un message]
Effectivement, au sein d'une même file, la perte d'un message pourrait introduire de la gigue pour les messages suivants mais, puisqu'une file ne contient que des messages d'un même flux, ce cas n'est pas couvert par l'exigence d'indépendance à la perte d'un message.
Remarque 9. Puisqu'un port TSN peut, au plus, avoir huit files pour stocker des trames, cette contrainte implique qu'il ne peut y avoir plus de 8 flux avec gigue dans un même port du dernier saut, c'est-à-dire allant vers la même destination. Pire, s'il y a aussi des flux sans gigue dans ce port, le nombre de flux avec gigue acceptable décroît encore plus (selon le nombre de files occupées).
La première implémentation du concept Egress TT appliquée à des réseaux TSN est intéressante. Néanmoins, elle souffre d'une limitation majeure : un port du dernier saut ne peut recevoir plus de 8 flux avec gigue différents, comme l'illustre l'esquisse ci-après. Bien que cela puisse suffire à certains réseaux embarqués, cela ne convient pas à tous les systèmes industriels considérés dans la thèse (voir chapitre 9 du manuscrit). C'est pourquoi nous avons également proposé une seconde implémentation, avec pour objectif d'augmenter le nombre de flux avec gigue par port de réception.
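À titre d'esquisse, et non comme l'implémentation OPL du manuscrit, la contrainte d'Exclusive Queue Allocation peut se traduire dans un solveur de programmation par contraintes (ici CP-SAT d'OR-Tools, un choix d'illustration) par une affectation de files deux à deux différentes au sein de chaque port du dernier saut. Les noms de fonctions et la structure des données d'entrée sont hypothétiques.

```python
from ortools.sat.python import cp_model

def alloue_files(flux_gigue_par_port):
    """Affecte à chaque flux avec gigue une file (0..7) exclusive
    dans son port du dernier saut (Définition 19)."""
    model = cp_model.CpModel()
    files = {}
    for port, flux in flux_gigue_par_port.items():
        vars_port = []
        for f in flux:
            v = model.NewIntVar(0, 7, f"file_{port}_{f}")
            files[(port, f)] = v
            vars_port.append(v)
        # Exclusivité : deux flux du même port ne partagent jamais une file
        model.AddAllDifferent(vars_port)
    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return {k: solver.Value(v) for k, v in files.items()}
    return None  # plus de 8 flux avec gigue sur un port : infaisable (Remarque 9)

print(alloue_files({"port_OBC": ["gyroscope", "star_tracker", "gps"]}))
```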
Hypothèses additionnelles
Afin de pouvoir définir la contrainte de Size Based Isolation, nous devons ajouter une hypothèse supplémentaire sur notre modèle.
Hypothèse 5 (Chemin des flux). Nous prenons l'hypothèse que deux flux ayant la même source et la même destination prennent le même chemin.
Cette hypothèse servira à réduire l'impact applicatif induit par les configurations obtenues avec Size Based Isolation.
Concept de Size Based Isolation
Avec la contrainte d'Exclusive Queue Allocation, une isolation spatiale était mise en place entre les flux avec gigue dans les ports du dernier saut afin de répondre à l'exigence d'indépendance à la perte d'un message. De fait, une file était réservée pour un flux et aucun autre flux ne pouvait l'utiliser. Avec Size Based Isolation, nous relâchons cette isolation spatiale en autorisant, sous conditions, le partage d'une même file entre plusieurs flux avec gigue.
Définition 20 (Size Based Isolation). Dans un système respectant la contrainte de Size Based Isolation, toutes les trames partageant une même file doivent être placées dans cette file par ordre de taille croissant.
En s'assurant que les trames respectent cet ordre de taille croissant et en générant des ouvertures de transmission gates de durée spécifique à chaque message, si une trame est perdue, la trame suivante ne prendra pas sa place puisque sa taille sera supérieure à la durée courante d'ouverture de la porte. Le message ne sera donc pas émis dans le slot libéré par le message perdu mais bien à la date qui avait été configurée au départ. Nous illustrons ce concept dans la figure Fig. 4.10 : nous montrons la situation nominale dans 4.10a et le comportement attendu en cas de perte d'un message dans 4.10b. Grâce à la contrainte de Size Based Isolation, même si le message f_l est perdu, le message j_k n'est pas perturbé. Cette propriété de taille croissante peut par exemple être obtenue en ajoutant du padding aux trames au moment de leur émission dans le réseau. De cette manière, il est possible d'augmenter le nombre de flux avec gigue par port du dernier saut. Cependant, cette solution a elle aussi un coût : il faut garantir que l'ordre croissant des tailles sera bien respecté. Puisqu'il nous est impossible de garantir un ordre entre des messages venant de différentes sources au sein d'un même port du dernier saut sans impact négatif sur les applications, nous imposons que des flux partageant la même file doivent venir du même émetteur par le même chemin (ce qui correspond justement à notre précédente hypothèse). Dans cette situation, l'impact applicatif sera légèrement dégradé par rapport à Exclusive Queue Allocation : en plus du contrat de trafic, les émetteurs devront respecter une contrainte d'ordre à l'émission.
Limitations de Exclusive Queue Allocation, Size Based Isolation et Egress TT
Maintenant que nous avons résumé brièvement le principe des configurations Egress TT et de ses deux implémentations, nous présentons dans le prochain paragraphe les limitations de ces configurations. Tout d'abord, les implémentations avec Exclusive Queue Allocation ne fonctionneront jamais dans des situations où un équipement est censé recevoir plus de 8 flux avec gigue. Ensuite, les implémentations avec Size Based Isolation ne fonctionneront jamais dans les situations où un équipement est censé recevoir des flux venant de plus de 8 sources différentes. De plus, le nombre de flux avec gigue par file dans les implémentations avec Size Based Isolation est limité par la granularité, i.e. la plus petite durée d'une ouverture de porte dans le mécanisme Time Aware Shaper. Par exemple, avec une granularité de 1 µs, la plus petite ouverture de porte sera capable de transmettre 125 octets (à 1 Gbit/s). Alors, en considérant que la taille maximale des trames est de 1518 octets, une file dans ce port ne pourra pas accepter plus de 13 flux avec gigue différents ; ce calcul est repris dans l'esquisse ci-après et détaillé dans le manuscrit. Enfin, d'autres limitations intrinsèques au concept de configuration Egress TT existent ; nous les détaillons dans le manuscrit. Dans toutes les situations mentionnées précédemment, End-to-End TT sera toujours une meilleure approche puisqu'elle permettra de générer des configurations valides. Néanmoins, nous sommes convaincus que l'amélioration du passage à l'échelle, en particulier le coût en temps et en ressources réduit, et l'impact applicatif léger peuvent intéresser certains industriels pour leurs réseaux temps réel.
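L'esquisse suivante reproduit ce calcul de borne : chaque flux d'une file doit disposer d'une durée d'ouverture (donc d'une taille de trame) distincte, multiple de la granularité du Time Aware Shaper.

```python
import math

def flux_max_par_file(granularite_s=1e-6, debit_bps=1e9, taille_max_octets=1518):
    """Nombre maximal de flux avec gigue par file sous Size Based Isolation."""
    # Octets transmissibles pendant la plus petite ouverture de porte
    pas = granularite_s * debit_bps / 8        # 125 octets à 1 Gbit/s
    # Nombre de durées d'ouverture distinctes jusqu'à la taille maximale
    return math.ceil(taille_max_octets / pas)  # 13 tailles distinctes possibles

print(flux_max_par_file())  # 13, comme indiqué dans le manuscrit
```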
Évaluation expérimentale de la performance de Egress TT par rapport à l'état de l'art
Dans les prochains paragraphes, nous résumons quelques résultats d'expérimentations obtenus en configurant différents cas d'usage avec notre méthode Egress TT et l'approche End-to-End TT de l'état de l'art.
Coût en temps et en ressources d'une configuration
Dans cette première expérience, nous avons souhaité comparer le coût en temps et en ressources informatiques de la génération d'une configuration Egress TT et d'une configuration End-to-End TT. Nous avons proposé une topologie de réseau relativement simple ainsi qu'un ensemble de flux, puis nous avons fait évoluer deux paramètres : la taille du réseau en nombre d'équipements et le nombre de flux qui y transitent. La comparaison est basée sur deux métriques : le nombre de contraintes nécessaires à la description du problème dans le solveur et le temps nécessaire à la génération d'une configuration. Toutes les configurations ont été implémentées dans le langage OPL et résolues à l'aide d'un solveur de programmation par contraintes. Les résultats montrent que notre approche Egress TT est considérablement moins coûteuse en temps et en ressources informatiques que l'approche End-to-End TT ; l'implémentation avec Size Based Isolation apparaît un peu plus coûteuse que celle avec Exclusive Queue Allocation, mais cette augmentation reste négligeable par rapport au coût d'une configuration End-to-End TT.
Les résultats expérimentaux présentés dans la figure Fig. 4.18 montrent par ailleurs que les configurations Egress TT génèrent de plus grandes latences que les configurations End-to-End TT. Encore une fois, ce résultat était attendu : afin d'obtenir de faibles gigues dans Egress TT, la latence est sacrifiée. Les deux expériences présentées ci-dessus montrent que l'approche Egress TT réduit bien l'effort nécessaire pour générer une configuration d'un réseau Time Sensitive Networking, au coût d'une latence plus grande que celles qui seraient obtenues avec l'approche End-to-End TT de l'état de l'art, plus coûteuse en temps et en ressources.
Configurations de systèmes industriels
Après ces expériences sur des systèmes relativement petits, nous avons souhaité évaluer l'approche Egress TT sur des systèmes du domaine spatial existant dans l'état de l'art. Nous en avons considéré deux : le système satellite générique d'Airbus (disponible dans [START_REF] Chaine | TSN Support for Quality of Service in Space ?[END_REF]) et le système du Crew Exploration Vehicle (Orion CEV) de la NASA.
Conclusion sur la seconde contribution
Dans cette seconde contribution, nous avons mis en place une démarche pour configurer des réseaux temps réel avec une approche nouvelle, Egress TT. Cette approche, basée sur des émissions planifiées de messages dans le dernier équipement du chemin de n'importe quel flux, permet de réduire l'effort nécessaire pour générer une configuration par rapport aux approches End-to-End TT de l'état de l'art. Cependant, afin d'obtenir de faibles gigues, l'approche Egress TT augmente les latences des messages dans le réseau. Elle est donc applicable tant que la contrainte de latence peut être assurée à l'aide des mécanismes asynchrones implémentés dans le coeur de réseau. Nous avons évalué, sur des petits systèmes puis sur des systèmes industriels du domaine, notre approche Egress TT et deux implémentations, Exclusive Queue Allocation et Size Based Isolation, pour des réseaux TSN. Ces expérimentations ont confirmé l'intérêt et les limites de l'approche Egress TT.
Chapitre 5
Conclusion
A la lumière de l'augmentation du besoin en performance dans l'industrie aérospatiale, l'objectif de cette thèse était de discuter de l'adaptabilité d'une technologie émergente nommée Time Sensitive Networking (TSN) vis-à-vis des exigences des prochaines générations de satellites. Cette étude est incluse dans la feuille de route stratégique TANIA-DP (Technological Assessment for New Instruments and Avionics - Data Processing) d'Airbus, ayant pour objectif, entre autres, de fournir de nouveaux standards de communication pour les réseaux Plateforme & Charge Utile afin d'améliorer les performances, les coûts, la compatibilité avec les standards du segment sol, ou encore la gestion de futures missions dans l'espace.
Les Standards TSN
Time Sensitive Networking est le dernier standard Ethernet de l'IEEE. Créé à la fin des activités du groupe de travail IEEE sur AVB (Audio Video Bridging), le groupe de travail TSN a proposé plus d'une vingtaine de standards dans l'objectif de fournir, à un réseau Ethernet, des capacités de gestion de trafic temps réel et de trafic haut débit, de gestion des fautes, de synchronisation et de distribution du temps ainsi que des capacités de gestion de configurations. En 2017/2018, la popularité de TSN a commencé à grandir, en particulier dans trois domaines industriels, i.e. l'automobile, l'automatisation industrielle et la 5G. Très vite, d'autres industriels ont manifesté leur intérêt pour TSN, notamment le secteur aérospatial. La liste des standards de TSN n'est toujours pas fixée : de nouveaux documents continuent à être produits par le groupe de travail au moment où nous écrivons ce manuscrit. Avec cette augmentation du nombre de standards, les industriels s'intéressant à TSN ont proposé de définir des profils décrivant un sous-ensemble de standards et/ou de paramètres pour l'usage du TSN dans leurs contextes spécifiques (ex. profil pour l'automobile [START_REF]IEEE Standard for Local and Metropolitan Area Networks-Bridges and Bridged Networks -Amendment 34:Asynchronous Traffic Shaping[END_REF]). Dans ce sens, nous fûmes les premiers à mettre en avant à la fois l'intérêt de l'industrie aérospatiale pour la technologie TSN et un premier aperçu d'un profil pour l'usage du TSN dans l'espace.
Ce profil est maintenant en train d'être développé dans un effort partagé entre l'IEEE et la SAE.
Contributions
Afin d'évaluer l'adaptabilité de Time Sensitive Networking à l'industrie aérospatiale, nous avons décrit dans le manuscrit la démarche en deux étapes mise en place durant la thèse. D'abord, nous nous sommes assurés, dans une approche qualitative, que TSN était bien un candidat pertinent pour le réseau unifié nouvelle génération. Ensuite, dans une approche quantitative, nous avons évalué la capacité de TSN à satisfaire les besoins en qualité de service des futures missions.
Perspectives sur la méthodologie Egress TT
Nous envisageons plusieurs perspectives pour de futures activités sur l'approche Egress TT.
Nouvelles fonctionnalités dans le modèle
Tout d'abord, de nouvelles fonctionnalités pourraient être ajoutées au modèle et donc être traduites dans le problème de programmation par contraintes sans véritable effort. Nous en détaillons quelques-unes ci-après. D'abord, le modèle pourrait être modifié pour supporter des flux multicast (i.e. avec une source mais plusieurs destinations) afin de pouvoir appliquer l'approche à une plus grande variété de systèmes. Comme expliqué dans le chapitre 11 du manuscrit, nous avons dû adapter le cas d'usage Orion à notre modèle puisqu'il y avait originellement des flux multicast. De fait, les configurations que nous avons générées ne sont pas tout à fait applicables au système d'origine. De même, le modèle pourrait être modifié pour supporter des tailles de messages applicatifs plus grandes. Cela signifierait qu'un message applicatif serait maintenant divisé en plusieurs trames Ethernet. Nous proposons une intuition sur la modification du modèle pour cette nouvelle capacité dans l'Annexe C.1 du manuscrit.
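À titre d'illustration de cette perspective, l'esquisse suivante calcule le nombre de trames Ethernet nécessaires pour transporter un message applicatif de grande taille ; la charge utile maximale de 1500 octets par trame est une hypothèse usuelle, pas une valeur issue du modèle du manuscrit.

```python
import math

def nb_trames(taille_message_octets, charge_utile_max=1500):
    """Nombre de trames Ethernet nécessaires pour un message applicatif."""
    return math.ceil(taille_message_octets / charge_utile_max)

print(nb_trames(4096))  # un message de 4 Ko serait découpé en 3 trames
```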
En restant sur la thématique des trames, une autre modification pourrait être apportée au modèle : l'introduction de la frame preemption (préemption de trames), décrite par les standards TSN IEEE 802.1Qbu et IEEE 802.3br. Ces standards sont présentés dans l'Annexe A.1 du manuscrit. Pour intégrer la préemption de trames dans le modèle, il serait notamment nécessaire de modifier la formule du calcul de la borne sur le délai de traversée du réseau. D'ailleurs, le calcul de la borne mentionnée précédemment pourrait lui aussi être amélioré. En effet, la formule encodée dans le manuscrit est peut-être un peu naïve. Ceci étant dit, elle était suffisante pour nos cas d'usage. Une des pistes d'amélioration à envisager serait de coupler le solveur de contraintes avec un outil de calcul réseau, ce dernier offrant de meilleures bornes. Il faut noter cependant que cela n'est, à l'heure actuelle, pas possible avec le solveur de contraintes que nous avons utilisé. De plus, cela pourrait potentiellement détériorer le temps réduit de configuration permis par Egress TT. La dernière modification des fonctionnalités que nous proposons serait d'intégrer le Credit Based Shaper (contrôle de flux basé crédit) comme algorithme de gestion de flux, soit pour certains flux sans contraintes de gigue, soit pour une troisième classe de trafic. Pour ce faire, plusieurs éléments devraient être modifiés dans le solveur. Premièrement, de nouvelles variables de décision devraient être ajoutées afin de pouvoir générer les paramètres (idle slope et send slope) liés à l'usage du CBS. Deuxièmement, la formule de calcul du délai devrait être adaptée pour prendre en compte le trafic CBS. Troisièmement, la philosophie dite "exclusive gating" avec laquelle nous configurons actuellement les ports du dernier saut devrait être repensée.
Nouveaux standards
En plus des nouvelles fonctionnalités liées au standard 802.1Qbv (et à la préemption de trames), de nouveaux standards pourraient aussi être intégrés au modèle pour se rapprocher encore plus des systèmes réels. En premier, le standard TSN 802.1CB Frame Replication and Elimination for Reliability pourrait être ajouté au modèle. Cela provoquerait une augmentation du trafic dans le réseau qui aurait peut-être un impact sur le calcul du délai. Un autre standard TSN pourrait aussi être intégré au modèle : le standard 802.1AS-2020. Ce standard est en charge de gérer la synchronisation du réseau. Encore une fois, cela aura pour conséquence d'introduire plus de trafic (avec peut-être des exigences en qualité de service différentes) dans le modèle. De plus, selon la qualité de la synchronisation recherchée, le nombre de messages pourrait varier, ce qui signifie que les configurations générées par l'approche Egress TT pourraient être différentes.
Prochaines étapes vis-à-vis de l'usage du TSN dans l'industrie aérospatiale
En parallèle des nouvelles fonctionnalités et des nouveaux standards à ajouter à l'approche Egress TT, il reste des activités à mener, d'un point de vue industriel, pour terminer le processus de sélection de la technologie du réseau embarqué satellite de nouvelle génération. Cela concerne non seulement l'étude présentée dans ce manuscrit autour du TSN mais aussi les autres technologies finalistes (i.e. TTEthernet et Spacefibre). En effet, dans la seconde contribution de la thèse, nous nous sommes uniquement intéressés à la qualité de service en performance de TSN. Il serait nécessaire de formaliser un modèle de tolérance aux fautes (FDIR) clair pour ces réseaux de nouvelle génération. Les commutateurs seraient-ils utilisés en redondance chaude, en redondance tiède ou bien en redondance froide ? Les liens seraient-ils dupliqués, ou tripliqués ? La redondance serait-elle appliquée à l'intégralité du réseau ou seulement à un sous-ensemble de commutateurs et de liens ? Quelles sont les valeurs attendues pour la disponibilité et la fiabilité du système ? Comment ces paramètres peuvent-ils être évalués sur un réseau TSN/TTE/Spacefibre ? De plus, dans cette thèse, nous avons uniquement regardé les aspects couche 2 du modèle OSI. Dans une logique d'industrialisation du TSN (et des autres technologies), il serait nécessaire de s'intéresser à la disponibilité de médias physiques spatialisables capables de supporter les hauts volumes de données échangés à bord. Ce médium sera-t-il basé sur une fibre optique ? Sur une paire torsadée ? Ou encore sur un câble spécifiquement développé par l'industriel ? Serait-il raisonnable d'adapter une couche physique SpaceFibre à une couche MAC TSN ? Les premières activités en cours chez l'industriel à ce sujet laisseraient penser que le médium physique du réseau nouvelle génération pourrait être basé sur une fibre optique, mais de nouvelles études seront nécessaires pour concevoir cette couche physique.
1 : 29331 Figure 3.1: OSI Layer Model 3.2). Figure 3 . 2 : 32 Figure 3.2: Shared Medium Topologies Examples Figure 3 . 3 : 33 Figure 3.3: Point-to-Point Medium Topologies Examples Figure 3 . 4 : 34 Figure 3.4: Application and Network E2E Latencies Figure 3 . 5 : 35 Figure 3.5: Link Redundancy Modes Example Figure 3 . 6 : 36 Figure 3.6: 1553 bus with one bus controller and six remote terminals Figure 3 . 7 : 37 Figure 3.7: SpaceFibre Output Port Functional View Figure 3.8: Ethernet switch Figure 4 . 1 : 41 Figure 4.1: Status of TSN standards and projects in Nov. 2021 Fig 4.2 summarizes this paragraph. Figure 4 . 2 : 42 Figure 4.2: 802.1Q switch functional forwarding process overview Figure 4 . 3 : 43 Figure 4.3: TSN End-Station Architecture Figure 4 . 4 : 44 Figure 4.4: Types of TSN features Figure 4 . 5 : 45 Figure 4.5: Status of TSN features, configuration and profiles Figure 4 . 5 , 45 Figure 4.5, also from J. Farkas, gives a summary and status of the TSN base features, the TSN profiles (both introduced in 4.1.1) and the TSN configuration features in late 2019. Figure 4 . 6 : 46 Figure 4.6: 802.1Qbv perimeter in a 802.1Q switch Figure 4 . 7 : 47 Figure 4.7: 802.1Qbv switch port Figure 4 . 8 : 48 Figure 4.8: Example of CBS behaviour for traffic class #2 2 3 currentW eight ← w i ; 4 while 34 for i = 0 to #T C -1 do (not empty(#i)) and currentW eight > 0 do 5 MarkAsReady(head(#i)) ; 6 currentW eight ← currentW eight -1 ; 3. If the weight reaches zero, then the next queue (in the list of queues implementing the algorithm) is served (with the same rule w.r.t. the queue's weight). Figure 4 . 9 : 49 Figure 4.9: Example of WRR behaviour on #1-#2-#3 traffic classes Fig. 4 .Figure 4 . 11 : 4411 Fig. 4.11, illustrates these rules on a simple example. Figure 4 . 13 : 413 Figure 4.13: Static priority on 8 queues Figure 4 . 14 :Example 4 4144 Figure 4.14: Example of static priority on 8 queues Figure 4 . 4 Figure 4.17: End-to-End TT port architecture Figure 4 . 18 : 418 Figure 4.18: Random TAS configuration Figure 4 . 19 : 419 Figure 4.19: TAS Configuration with 2 queues applying the exclusive gating principle CHAPTER 4 . 4 FOCUS ON TIME SENSITIVE NETWORKING OperControlListLenght = 13 Gate of #8 is closed Gate of #8 is open Figure 4 . 4 Figure 4.20: End-to-End TT architecture Figure 4 . 4 Figure 4.21: TT-CBS-BE port architecture Figure 5 . 2 : 52 Figure 5.1: Motivating Example with three devices Figure 7 . 1 : 71 Figure 7.1: LPB link , SPB link and HPB link example Figure 7 . 2 : 72 Figure 7.2: BAG concept in AFDX and jitter illustration[3] Figure 7 . 3 : 73 Figure 7.3: AFDX Redundancy Example Figure 7 . 5 : 75 Figure 7.5: TTEthernet two-step synchronization (cf. [119]) Criteria 8 , 9 , 89 10, 11 -Error Detection, Error Reporting, Redundancy and Fault Containment ✔ Regarding fault tolerance, TTEthernet provides both redundancy and policing capabilities inherited from ARINC 664 (cf. Annex D of SAE AS6802 [102] 1 ) 109 110 CHAPTER 8 . 11108 PROBLEM STATEMENT 2 3 )Figure 8 . 1 : 8 Property 3 ( 38183 Figure 8.1: Ratio concept with two flows (r f 1 = 2, r f 2 = -4) and k = 8 5 ) 39 ( 111 Definition 40 (Remark 14 8 . 1 . 2 5391114014812 Definition Flow -restriction) Therefore in this document a flow f ∈ F is characterized by the tuple ⟨Src f , Dest f , Size f , P f ⟩ where P f is the period of the flow.8.1. FLOW MODELLING f l , τ fl ) A flow is a sequence of messages (or frames). Let f l , l ∈ N, denote the l-th message of f and τ f l the transmission duration of f l . 
Since the message size per flow is constant, the transmission duration per flow is constant too i.e. ∀i, j ∈ N, τ fi = τ fj Motivating Example: Adding Flows HPActuator 1 :Figure 8 . 2 : 182 Figure 8.2: Adding three flows to the motivating example Figure 8 . 3 : 83 Figure 8.3: Frame behaviour on the network Figure 8 . 4 : 4 . 15 84415 Figure 8.4: Reference date of message f l and f l+1 with r = -2 Figure 8 . 5 : 85 Figure 8.5: Deadlines for 2 flows (r f 1 = 2 * P MIF , r f 2 = 4 * P MIF ) and k = 8 Figure 8 . 6 : 86 Figure 8.6: Jitter visualisation with two messages f l and f m is the earliest (resp. latest) production offset. The upper bound of Prod(f l ) is called Release instant and denoted Release(f l )= Ref(f l ) + B + f l . The production traffic contract for a flow f , denoted Prod(f ), is defined by Prod(f ) = ∪ l∈N Prod(f l ). the time, i.e. ∀p, GCL(p) = [e 0 ] = [⟨⟨o, . . . , o⟩ , 0, P MAF ⟩]. 121 Figure 9 . 1 : 12191 Figure 9.1: Scheduled Traffic Parameters 124 CHAPTER 9 .Figure 9 . 2 : 124992 Figure 9.2: Topology illustration Figure 9 . 3 : 125 Fig. 10 9312510 Figure 9.3: End-to-End TT Configuration Figure 9 . 5 : 95 Figure 9.5: Need for Queue/Flow/Frame Isolation Figure 9 . 8 : 98 Figure 9.8: Airbus Generic Satellite Use Case Topology CHAPTER 9 .Figure 9 . 9 : 999 Figure9.9: Orion Network Topology[START_REF] Zhao | Comparison of time sensitive networking (tsn) and ttethernet[END_REF] Figure 10 . 1 : 101 Figure 10.1: End-to-End TT Configuration Figure 10 . 2 : 102 Figure 10.2: Egress TT Configuration e. their gates are always open. Medium access is handled with Static Priority. ∀p ∈ P \ LH j , GCL(p) = ⟨⟨o, o, o, o, o, o, o, o⟩ , 0, P MAF ⟩ Figure 10 . 4 : 104 Figure 10.4: LPB link , SPB link and HPB link example Definition 68 (Property 4 (Proof 1 Figure 10 . 5 : 6841105 Figure 10.5: Contributing instances of g for the delay of f l Figure 10 . 6 : 106 Figure 10.6: Release instant for no jitter flows Figure 10 . 7 : 107 Figure 10.7: Isolation by Message Size Definition 76 ( 76 Queue p (i)) Let Queue p (i) define the set of flows sharing the same last hop port and the same queue in that port i.e. ∀p ∈ P, ∀i ∈ [0, 7], Queue p (i) = {f ∈ F|LH f = p and FtQM p (f ) = i} . Definition 77 (f l ♯g m ) Let f l ♯g m denote that f l and g m can interfere with one another i.e. that they exist in the same queue at the same time. Therefore, Fig. 11 . 11 Fig. 11.1 presents the overall architecture of all the software elements that we developed for the configuration process. It is organized in four parts: Data Conversion, Configuration, Simulation and Validation. Let us describe briefly the purpose of each part. Figure 11 . 1 : 111 Figure 11.1: Software Architecture Figure 11 . 2 : 112 Figure 11.2: Path size increase use case Figure 11 . 3 : 113 Figure 11.3: Path Size Increase: End-to-End TT vs. Egress TT Figure 11 . 4 : 114 Figure 11.4: Path Size Increase: Exclusive Queue Allocation vs. Size Based Isolation Figure 11 . 5 :Figure 11 . 6 :Figure 11 . 7 :Figure 11 . 8 : 115116117118 Figure 11.5: Number of receivers increase use case Fig. 11 . 5 5 1155 Figure 11.9: Applicative Latency -End-to-End TT vs. Egress TT Figure 11 . 11 : 1111 Figure 11.11: Airbus Generic Satellite: End-to-End TT vs. Exclusive Queue Allocation Figure 11 . 12 : 1112 Figure 11.12: Airbus Generic Satellite: End-to-End TT vs. Size Based Isolation Figure 11 . 13 : 1113 Figure 11.13: Orion CEV: End-to-End TT vs. Exclusive Queue Allocation (Application Latency) Figure 11 . 14 : 1114 Figure 11.14: Orion CEV: End-to-End TT vs. 
Exclusive Queue Allocation (Network Latency) 12 Figure A. 2 : 122 Figure A.2: SFD for preemptable/express traffic identification Fig 12 Figure A. 3 : 123 Figure A.3: Preemptable frame fragments formats Figure A. 4 : 4 Figure A.4: Integration methods for Time Aware Shaper Figure A. 6 : 6 Figure A.6: 802.1Qci perimeter in switches Figure A.7: Stream Identification Functions Summary Figure A. 8 : 8 Figure A.8: 802.1Qci switch ingress port organization Figure A. 10 10 Figure A.10: FRER illustration Figure 1 . 1 - 11 Figure 1.1 -Processus de sélection d'une technologie pour le réseau unifié nouvelle génération 2. 1 . 1 11 Les satellites et leurs missionsUn satellite est un équipement électronique envoyé par l'Homme dans l'espace. En orbite autour de la terre, les satellites sont majoritairement utilisés pour des applications de télécommunications (e.x. téléphone avec Inmarsat, télévision avec Eutelsat, internet avec OneWeb), des applications d'observations de la Terre (militaires, services d'imageries avec Pleiade Neo, étude des océans avec Sentinel-6B ) et des applications de positionnement (pour les GPS notamment). Dans le système solaire, les satellites sont utilisés à des fins scientifiques (étude du Soleil avec Solar Orbiter, étude de Mercure avec BepiColombo, étude de Jupiter et ses lunes avec Juice). Le format du satellite (poids, taille), son orbite et sa mission varient : tandis que des satellites en orbite géostationnaire sont d'énormes véhicules (cf. Fig. 2.1a) conçus pour une durée de vie opérationnelle d'au moins quinze ans ; les satellites en orbites basses (cf. Fig. 2.1b) sont eux beaucoup plus petits et ont des durées de vies plus courtes (jusqu'à cinq ans). Figure 2 . 1 -Différents formats de satellites 8 CHAPITRE 2 . CONTEXTE 2 . 1 . 2 Figure 2 . 2 -Figure 2 . 3 -Figure 2 . 4 - 2182212222324 Figure 2.1 -Différents formats de satellites Figure 2 . 5 - 25 Figure 2.5 -État des standards TSN en Nov. 2021 Figure 2 . 6 - 26 Figure 2.6 -Port 802.1Qbv d'un switch Fig. 2 .Figure 2 . 7 - 227 Figure 2.7 -Mécanismes configurables dans 802.1Qbv Gate of #8 is openGate of #8 is closed #1 has exclusive access to the medium during 2 TimeIntervals Figure 2 . 8 - 28 Figure 2.8 -GCL d'un port appliquant le principe d'exclusion mutuelle de porte pour 2 files 25 4. 1 . 1 2511 Modélisation des applications Fig. 4 . 2 , 42 le concept de ratio pour deux flux, en vert avec r f = 2 et en bleu avec r f = -4. Les flèches vertes et bleues représentent les instants d'émissions de messages dans le réseau. Figure 4 . 2 - 8 Définition 7 4287 Figure 4.2 -Concept de ratio pour deux flux (r f 1 = 2, r f 2 = -4) and k = 8 Figure 4 . 3 - 2 Remarque 5 . 4325 Figure 4.3 -Dates de référence de deux messages f l et f l+1 avec r = -2 Exigence de Performance 1 ( 1 Deadline). Soit un flux f ∈ F, ce flux a obligatoirement une contrainte de deadline. Cette contrainte implique que la réception des messages du flux f doit avoir lieu avant une certaine date à chaque période de f . En particulier, cela signifie que, au plus tard, le message f l doit être reçu avant le début de la fenêtre d'émission du message f l+1 . Dans le cas où la deadline est égale à la période du flux, le flux est dit "à échéance sur requête". Nous avons représenté dans la figure Fig.4.4 des exemples de contraintes de deadlines pour deux flux à échéance sur requête. Figure 4 . 4 - 8 Définition 9 (Remarque 6 . 44896 Figure 4.4 -Deadlines pour deux flux (r f 1 = 2 * P MIF , r f 2 = 4 * P MIF ) et k = 8 Figure 4 . 
5 - 45 Figure 4.5 -Gigue entre les messages f l et f m 4. 2 . 1 FixedFigure 4 . 7 -NetLatBound(f l ) f l Fixed f l FixedFigure 4 . 8 - 2147l48 Figure 4.7 -Configuration End-to-End TT Un problème majeur avec ce type de configuration est le coût en temps et en ressources informatique nécessaires pour déterminer les paramètres d'une configuration valide. En effet, un grand nombre de date d'émission doit être calculé pour chaque configuration. Dans ce contexte, nous proposons Egress TT, un nouveau type de configuration où l'idée est la suivante : au lieu d'ordonnancer toutes les émissions sur tous les équipements, nous proposons plutôt de programmer uniquement les émissions des trames sur le dernier commutateur dans le chemin de n'importe quel message. Définition 16 (Configurations Egress TT). Dans une configuration Egress TT, les dates d'émission des flux avec gigue sont pré-calculées dans le dernier commutateur dans leur chemin. Dans tous les autres équipements, la stratégie d'accès au médium n'est pas spécifiée. Un architecte réseau pourra ainsi choisir la stratégie d'accès qu'il souhaite, à partir du moment où une borne sur la latence des trames pourra être calculée. 4. 3 . 1 31 Une première implémentation de Egress TT pour les réseaux 802.1Qbv : Exclusive Queue Allocation Dans cette première implémentation de Egress TT, nous nous appuyons sur la contrainte d'Exclusive Queue Allocation (détaillée ci-après) pour satisfaire l'exigence d'indépendance à la perte d'un message du système. Une explication détaillée de l'origine et la nécessité de cette contrainte d'isolation dans les réseaux TSN est disponible dans la section Related Works du chapitre 9.Hypothèses pour l'adaptation de Egress TT aux réseaux TSN Nous considérons deux hypothèses sur notre modèle pour pouvoir générer ces configurations.Hypothèse 3 (Synchronisation). Nous prenons l'hypothèse que les émetteurs et les commutateurs du dernier saut 1 sont synchronisés et que l'erreur de synchronisation est négligeable vis-à-vis des exigences du système.Hypothèse 4 (Chemin fixe). De manière similaire à l'état de l'art, nous prenons l'hypothèse que le chemin des flux dans le réseau est connu a priori et n'évolue pas durant la vie du système (hors mise à jour). Choix 3 (Ethernet + Static Priority). Dans le reste de ce document et dans le manuscrit, nous avons choisi Static Priority, disponible dans Ethernet, comme politique d'accès au médium dans le coeur du réseau. Choix 4 (TSN Time Aware Shaper). Dans le reste de ce document et dans le manuscrit, nous avons choisi le mécanisme TSN Time Aware Shaper comme mécanisme d'ordonnancement des émissions dans les commutateurs des derniers sauts. Choix 5 (Politique d'accès au médium dans les ports du derniers saut). Dans les ports du derniers saut, nous définissons la politique d'accès au médium, dérivée du principe d'exclusion mutuelle de file, suivante : quand la porte (i.e. la transmission gate) d'une file contenant des messages de flux avec gigue est ouverte, toutes les autres portes du port sont fermées. Cela signifie en particulier que les portes des files contenant uniquement du trafic provenant de flux sans gigue peuvent être ouvertes en même temps. Choix 6 (Indépendance à la perte d'un message). Nous considérons que la propriété d'indépendance à la perte d'un message ne s'applique qu'aux flux avec gigue. Cela implique que si la perte d'un message provenant d'un flux sans gigue affecte un autre flux sans gigue, la configuration sera tout de même valide. 
Définition 19 ( 19 Exclusive Queue Allocation). Dans un système respectant la contrainte d'Exclusive Queue Allocation, chaque flux avec gigue est associé à une file spécifique dans son port du dernier saut. Aucun autre flux ne peut utiliser cette file. Gigue additionnelle pour g m (b) Perte d'un message Figure 4 . 9 - 49 Figure 4.9 -Gigue introduite par la perte d'un message Effectivement, au sein d'une même file, la perte d'un message pourrait introduire de la gigue pour les messages suivants mais puisqu'une file ne contient que des messages d'un même flux, ce cas n'est pas couvert dans l'exigence d'indépendance à la perte d'un message. 4. 3 . 2 32 Une seconde implémentation pour l'amélioration du passage à l'échelle d'Exclusive Queue Allocation : Size Based Isolation Hypothèse 5 ( 5 Chemin des flux). Nous prenons l'hypothèse que deux flux ayant la même source et la même destination prennent le même chemin. CHAPITRE 4 . 4 SECONDE CONTRIBUTIONCette hypothèse servira à réduire l'impact applicatif induit par les configurations obtenues avec Size Based Isolation.Concept de Size Based IsolationAvec la contrainte d'Exclusive Queue Allocation, une isolation spatiale était mise en place entre les flux avec gigue dans les ports du dernier saut afin de répondre à l'exigence d'indépendance à la perte d'un message. De fait, une file était réservée pour un flux et aucun autre flux ne pouvait l'utiliser. Avec Size Based Isolation, nous relâchons cette isolation spatiale en autorisant, sous conditions, le partage d'une même file à plusieurs flux avec gigue. Définition 20 (Size Based Isolation). Dans un système respectant la contrainte de Size Based Isolation, toutes les trames partageant une même file doivent être placées dans cette file par ordre de taille croissant. Figure 4 . 10 - 410 Figure 4.10 -Size Based Isolation 4. 3 . 4 34 Évaluation Expérimentale de la performance de Egress TT par rapport à l'état de l'art Dans le chapitre 11 du manuscrit, nous avons proposé une évaluation de la performance des configurations Egress TT et de leurs implémentations pour un réseau TSN par rapport à une implémentation de configuration End-to-End TT pour ce même réseau. Nous résumons ci-après quelques résultats expérimentaux.Coût en temps et en ressources d'une configurationDans cette première expérience, nous avons souhaité comparer le coût en temps et en ressources informatiques pour la génération d'une configuration Egress TT et d'une configuration End-to-End TT. Nous avons proposé une topologie de réseau relativement simple ainsi qu'un ensemble de flux. Nous avons fait évoluer deux paramètres sur ce réseau : sa taille en terme de nombre d'équipements et le nombre de flux qui transitent dans le réseau. L'évolution de ces deux paramètres est représentée dans les figures Fig.4.11 et Fig.4.12. La liste des flux, leurs caractéristiques et leurs routes dans le réseau sont disponibles dans le manuscrit.La comparaison des performances entre les deux configurations sera basée sur deux métriques : le nombres de contraintes nécessaires pour la description du problème dans le solveur de contraintes et le temps nécessaire pour la génération d'une configuration avec le solveur. Toutes les configurations ont été implémentées dans le langage OPL et la résolution du problème de programmation par contrainte Figure 4 . 14 - 414 Figure 4.14 -Augmentation de la taille des chemins : Exclusive Queue Allocation vs. Size Based Isolation Figure 4 . 15 - 415 Figure 4.15 -Augmentation du nombre de receveurs : End-to-End TT vs. 
4.3.4 Experimental evaluation of the performance of Egress TT against the state of the art

In Chapter 11 of the manuscript, we proposed an evaluation of the performance of Egress TT configurations and of their implementations for a TSN network, compared to an implementation of End-to-End TT configurations for the same network. We summarize a few experimental results below.

Time and resource cost of a configuration. In this first experiment, we wanted to compare the time and computing resources needed to generate an Egress TT configuration and an End-to-End TT configuration. We proposed a relatively simple network topology together with a set of flows, and varied two parameters of this network: its size, in number of devices, and the number of flows crossing the network. The evolution of these two parameters is represented in Fig. 4.11 and Fig. 4.12. The list of flows, their characteristics and their routes in the network are available in the manuscript. The performance comparison between the two configurations is based on two metrics: the number of constraints needed to describe the problem in the constraint solver, and the time needed to generate a configuration with the solver. All configurations were implemented in the OPL language, and the constraint programming problem was solved with a constraint solver.

Figure 4.14: Increasing path length - Exclusive Queue Allocation vs. Size Based Isolation
Figure 4.15: Increasing the number of receivers - End-to-End TT vs. Egress TT

Evolution of path length. For this first experiment, the experimental results presented in Fig. 4.13a, Fig. 4.14a, Fig. 4.15a and Fig. 4.16a show that our Egress TT approach is considerably less costly in time and computing resources than the End-to-End TT approach. Moreover, the Size Based Isolation implementation appears slightly more costly than the Exclusive Queue Allocation implementation, but this cost increase remains negligible compared to the cost of an End-to-End TT configuration. This result was expected, since Egress TT was designed precisely with the aim of reducing the cost of a configuration. A more detailed analysis of these results is available in Chapter 11 of the manuscript.

Figure 4.16: Increasing the number of receivers - Exclusive Queue Allocation vs. Size Based Isolation
Figure 4.17: Topologies for the evaluation of the impact on latencies
Figure 4.18: End-to-End TT vs. Exclusive Queue Allocation
Figure 4.19: Generic Airbus satellite - End-to-End TT vs. Exclusive Queue Allocation
Figure 4.20: Generic Airbus satellite - End-to-End TT vs. Size Based Isolation
Figure 4.21: Orion CEV - End-to-End TT vs. Exclusive Queue Allocation

In Chapter 2, we introduce key satellite concepts, i.e. what a satellite is, what its missions are, what the satellite networking technologies are, etc. Then, in Chapter 3, we propose an introduction to networking technologies, starting from generic network concepts and then focusing on the different technologies that will be mentioned or used in this manuscript. Finally, in Chapter 4, we give a focus on Time Sensitive Networking, the IEEE state-of-the-art Ethernet technology, around which this PhD thesis was focused. The manuscript is organized around two contributions presented in Part II and Part III respectively. Instead of a single related works chapter, we propose a problem statement and an associated related work for each contribution.

ARINC 664 extends the Ethernet protocol. It is a point-to-point medium, event-triggered network. It is used, among others, at Boeing and at Airbus under the name AFDX (Avionics Full DupleX Switched Ethernet). It extends Ethernet (802.1Q-2008) with bounded latency and fault tolerance capabilities. The ARINC 664 standard defines VLs, or Virtual Links. A VL is a traffic contract composed of a maximum frame size and a BAG (Bandwidth Allocation Gap), i.e. a minimum time duration between two frames belonging to the same VL. The concept of BAG is illustrated in Fig. 3.10. There is one traffic type in an ARINC 664 network: VL traffic.

Figure 3.10: BAG concept in ARINC 664

Performance. ARINC 664 links are bi-directional and full duplex. They can operate at up to 100 Mbit/s. The communication relies on Ethernet frames. With virtual links, an ARINC 664 network is able to provide bounded latency.
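A minimal sketch of the VL traffic contract just described, i.e. a maximum frame size plus a BAG as the minimum time between two frames of the same VL; the timestamps, sizes and contract values below are hypothetical:

def respects_vl_contract(frames, bag_ms, max_size):
    # frames: (emission_time_ms, size_bytes) tuples, sorted by time
    sizes_ok = all(size <= max_size for _, size in frames)
    gaps_ok = all(t2 - t1 >= bag_ms
                  for (t1, _), (t2, _) in zip(frames, frames[1:]))
    return sizes_ok and gaps_ok

frames = [(0, 200), (2, 200), (4, 200)]
print(respects_vl_contract(frames, bag_ms=2, max_size=1518))  # True
print(respects_vl_contract(frames, bag_ms=4, max_size=1518))  # False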
Figure 4.1, originally presented by Janos Farkas, chairman of the TSN Task Group, during the TSN/A Conference in Oct. 2019, and upgraded to this version in Nov. 2021, summarized the published standards and amendments as well as the on-going projects conducted by the TSN Task Group at the time. The list of standards is still growing at the time of writing this manuscript.

Figure 4.1: TSN components (tools of the TSN toolset) - High availability / ultra reliability: Frame Replication and Elimination [802.1CB], Path Control and Reservation [802.1Qca], Per-Stream Filtering and Policing [802.1Qci], Reliability for Time Sync [802.1AS-2020]. Bounded low latency: Credit Based Shaper [802.1Qav], Frame Preemption [802.1Qbu & 802.3br], Scheduled Traffic [802.1Qbv], Cyclic Queuing and Forwarding [802.1Qch], Asynchronous Traffic Shaping [802.1Qcr], Shaper Parameter Settings [P802.1Qdq], QoS Provisions [P802.1DC].

Definition 21 (IEEE Standard/Amendment/On-going Project). For the record, an IEEE standard is a document that bears the knowledge on a process, a technology, or anything else, considered as a norm and a common knowledge base to which anyone can refer. This document is declared a standard when it has been approved by an IEEE committee. While the document is being written/completed, once its theme has been approved by an IEEE committee, it is called an on-going project. A standard is generally named with a sequence of numbers (possibly separated with dots) followed by upper-case letters (e.g. the 802.1Q standard). An amendment is a standard, but it uses the reference of the amended standard plus one or several lower-case letters (e.g. the 802.1Qbv amendment). This reference represents the classification of the document among all the available IEEE standards. An on-going project has the same naming convention; however, the sequence of numbers is prefixed with a P, for Project (e.g. P802.1AS-Rev).

The last family is Zero Congestion Loss (802.1Qav, 802.1Qbu, 802.1Qbv, 802.1Qch, 802.1Qcr, 802.1Qat, 802.1Qcc, 802.1Qcp). It aims at guaranteeing that no frames are lost due to congestion, i.e. buffer overflow in switches. We do not give more details on this family, as its protocols have already been introduced in other families. A message to remember is that Time Sensitive Networking is not yet another new technology: it is based on the classic behaviour of Full Duplex Switched Ethernet (see 3.6.1) networks described in the IEEE 802.1Q standard. It only adds protocols that amend or enhance the existing behaviours.

System-wide features: 802.1AS & 802.1AS-Rev (synchronization), 802.1Qcc (TSN configuration), 802.1Qat (Stream Reservation Protocol), 802.1CB (Frame Replication and Elimination for Reliability), YANG features (...).

Bridge-related features: 802.1Q, 802.1Qbv (Enhancements for Scheduled Traffic), 802.1Qch (Cyclic Queueing and Forwarding), 802.1Qcr (Asynchronous Traffic Shaping), 802.1Qav (Credit Based Shaper), 802.1Qci (Per-Stream Policing and Filtering), 802.1Qbu (Frame Preemption).

Definition 27 (Traffic Class). A traffic class is a data structure introduced in TSN feature 802.1Qbv. There can be up to 8 traffic classes per output port of any 802.1Qbv-compliant network device. These traffic classes have their own characteristics regarding the quality of service they can provide.
Traffic types are associated with traffic classes at system/network design.

Definition 28 (Architecture of Interest). An architecture of interest is a way to configure a TSN network that is a common case in the literature or in industrial use. It is often named/described as a combination of scheduling types; TT-CBS-BE is an example of an architecture of interest. Other architectures are introduced in 4.2.5.

Fig. 4.6, extracted from IEEE 802.1Q-2018, represents the functional blocks associated to a switch. The part in the box is what is amended by TSN feature 802.1Qbv. Numbers in the picture are the chapter and section numbers associated to each box in the 802.1Q-2018 standard [56].

Remark 4. Although 802.1Qbv is now included in 802.1Q-2018, in this manuscript we will continue using the acronym "802.1Qbv" to describe it, so as not to confuse the reading: 802.1Q-2018 includes more than just 802.1Qbv features and is therefore not precise enough to talk about 802.1Qbv features alone.

Algorithm 1 (WRR algorithm). Inputs: the number of queues, #TC ∈ [1; 8], and the queue weights, w_1, ..., w_#TC ∈ N; its body (a "while True" loop) cycles through the queues, serving each according to its weight (only the algorithm header survives extraction).

Definition 30 (Gate Control List). The purpose of the Gate Control List, or GCL, is to provide the state of the gates of each traffic class of a specific transmission port. It is also often called Time Aware Shaper (TAS).

Each of the #TC TSN queues is associated with a transmission gate. This transmission gate may prevent a frame from a specific traffic class from accessing the medium even if its transmission selection algorithm allows it. Let us explain how these transmission gates work. A transmission gate can have two states: Open (o) or Closed (C). When the gate of traffic class #a is:
• Closed → even if the TSA allows it, #a frames cannot try to access the medium;
• Open → if the queue is ready w.r.t. its TSA, a #a frame can try to access the medium, i.e. it is declared "available for transmission".

It appears clearly that the gate state must evolve over time; otherwise, if a gate is closed and stays closed, the frames of the associated traffic class would never be emitted and would end up being lost. To this extent, a Qbv switch output port, composed of #TC queues, has a data structure called a Gate Control List. More precisely, we propose to describe a GCL as follows:

GCL_port = ⟨Tick, TimeInterval, OperControlListLength, CycleTime, Schedule⟩    (4.1)
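As a minimal sketch of the GCL tuple of equation (4.1); the field semantics below are simplified assumptions, not the full 802.1Q state machine:

from dataclasses import dataclass

OPEN, CLOSED = "o", "C"

@dataclass
class GCL:
    tick: int                       # time granularity of the schedule
    time_interval: int              # duration of one schedule entry, in ticks
    oper_control_list_length: int   # number of entries in the schedule
    cycle_time: int                 # ticks after which the schedule repeats
    schedule: list                  # one tuple of 8 gate states per entry

    def gate_states(self, t):
        # gate states of traffic classes #0..#7 at time t (in ticks)
        entry = (t % self.cycle_time) // self.time_interval
        return self.schedule[entry % self.oper_control_list_length]

# Two entries: first only class #7 is open, then only classes #0-#6 are open.
gcl = GCL(tick=1, time_interval=50, oper_control_list_length=2,
          cycle_time=100,
          schedule=[(CLOSED,) * 7 + (OPEN,),
                    (OPEN,) * 7 + (CLOSED,)])
print(gcl.gate_states(30))  # only class #7 open
print(gcl.gate_states(60))  # classes #0-#6 open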
Figure 4.12: Summary of configurable mechanisms in 802.1Qbv - per queue, a Transmission Selection Algorithm (Credit Based Shaper, Weighted Round Robin, ad hoc, or none) and a Transmission Gate (always open, or ad hoc scheduled opening/closing); across the 8 traffic classes, Transmission Selection (Static Priority).

Figure 4.15: AVB port architecture (CBS-BE)

The 2 most important queues use CBS and the others use either nothing, the Enhanced Transmission Selection (ETS) algorithm or Vendor Specific (VS) algorithms, and will be named BE (for Best Effort). Interactions between queues are arbitrated by Transmission Selection. This architecture is called AVB (Audio Video Bridging) and is widespread in the audio-video world (concert halls, television sets, etc.). Fig. 4.15 introduces an AVB output port architecture.

Example 5. In this example, illustrated by Fig. 4.16 (extracted from [28]), we consider that queues #A and #B are CBS queues and #C is a best-effort queue with no transmission selection algorithm. As a reminder, the interactions between queues #A, #B and #C are arbitrated using strict priority, where #A has a higher priority than #B, and #B has a higher priority than #C. In the first part of the drawing, frames from #B arrive and get sent, according to the evolution of B's credit. When a #C frame arrives, #B does not have enough credit to send its frame, hence the #C frame is selected and sent. At that time, a #A frame has arrived, and we assume that it has enough credit to emit its frame. As #A has a higher priority than #B, the #A frame is sent before #B-3. Finally, frame #B-3 is sent.

Figure 4.16: AVB example - arrivals (C-1, B-1, B-2, B-3, A-1), transmissions (B-1, B-2, C-1, A-1, B-3) and the evolution of class B credit over time.

End-to-End TT. This subsection describes the behaviour of an 802.1Qbv output port where all queues use a Transmission Gate but no Transmission Selection Algorithm. In this architecture, the output port looks like Fig. 4.17. As there are no Transmission Selection Algorithms in any of the queues, all the frames are considered ready all the time. The transmission gates, with their schedule, decide which frames will be ready for transmission.

Figure 4.17: End-to-End TT port architecture - one transmission gate per traffic class queue, arbitrated by Transmission Selection.

Example 6. Fig. 4.18 introduces a "random" configuration of the transmission gates (i.e. a random schedule). In this configuration, several gates are open at the same time. If frames are ready in different queues whose gate is open, i.e. are available for transmission, Transmission Selection decides which frame gets emitted first. If only one queue has its gate open, then that queue immediately gets to emit on the medium.

A very common implementation of the Time Aware Shaper is the exclusive gating principle.

Definition 31 (Exclusive Gating). In this implementation, all queues have a transmission gate managed in a Gate Control List. When the gate of one specific traffic class #a is open, the gates of all the other traffic classes, named #[other], are closed. When the gate of the specific traffic class is closed, the gates of all the other traffic classes are open. This prevents traffic from #[other] from interfering with #a traffic. One issue remains: what to do when the gates of the #[other] traffic classes are about to close? In the very likely case where a frame is being emitted when the gate has to close, it would interfere with the #a traffic that the gating is meant to protect.
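A minimal sketch of the exclusive gating principle of Definition 31; the entry layout is our assumption, and only the open/closed complementarity comes from the definition:

OPEN, CLOSED = "o", "C"

def exclusive_gating_entry(num_classes, protected_class, protected_open):
    # One Gate Control List entry: when the protected class #a is open,
    # every other gate is closed, and vice versa (Definition 31).
    if protected_open:
        return tuple(OPEN if c == protected_class else CLOSED
                     for c in range(num_classes))
    return tuple(CLOSED if c == protected_class else OPEN
                 for c in range(num_classes))

print(exclusive_gating_entry(8, protected_class=7, protected_open=True))
print(exclusive_gating_entry(8, protected_class=7, protected_open=False))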
Part II - Contribution: Selection of Candidate Technologies Compliant with the Quality of Service Requirements of a Next-Generation Satellite Network

Chapter 5

• Payload Computing: the payload computing devices collect useful data for the mission, for example antennas and transponders for telecommunication satellites, telescopes and cameras for Earth/Space observation satellites, or basically any type of scientific instrument for scientific missions. They also process this data before storing it in an on-board mass memory and sending it to Earth.

Definition 33 (Remote Interface Unit, a data concentrator). Some of the sensing or actuating devices are analog devices and hence are connected to a RIU, for Remote Interface Unit, that acts as a controller for these devices. This RIU will also be considered as an Actuating or a Sensing node.

Property 1 (Master/Slave communication paradigm). Both Actuating and Sensing devices act as slaves of the Platform Computing devices. This entails that Actuating and Sensing devices can only use the communication system if they were previously asked to by a Computing device.

Table 5.1 gives usual values of k and corresponding usual values of P_MIF with P_MAF = 1000 ms. This hypothesis is illustrated in Fig. 5.3.

Table 5.1: Usual values of k and P_MIF
k     P_MIF
8     125 ms
16    62.5 ms
32    31.25 ms

Figure 5.3: Link P_MIF - P_MAF with k = 8 (P_MAF = 8 * P_MIF)

A surveyed work proposed the very first comparison of a huge set of technologies for spacecraft avionics systems, but only for the platform network. The goal was to provide a comparison of these technologies for man-rated spacecraft with low-jitter and low-data-rate traffic. The authors considered 11 technologies: MIL-STD-1553, SAFEbus, Time-Triggered Communication Protocol (TTP), FlexRay, Time-Triggered Controller Area Network (TT-CAN), IEEE 1394b, SpaceWire, Ethernet 10/100 Base-T, Avionics Full-Duplex Switched Ethernet (AFDX, i.e. ARINC 664), Fibre Channel and Gigabit Ethernet. Apart from an extensive presentation of each technology, the major contribution was the table provided in appendix, a comparison matrix of all technologies on 45 criteria, such as maximum data rate, latency, jitter, clock synchronization, fault containment or babbling idiot protection.

Ethernet supports data rates from 10 Mbit/s to 100+ Gbit/s. Therefore, Ethernet, on the criterion of high data rate, seems suitable. To provide some sort of quality of service (maybe bounded latencies?), Ethernet is equipped with only one mechanism: Static Priority. Definition 37 (Static Priority): Static Priority relies on the 802.1Q optional tag of the Ethernet frame. On Criteria 2 (Bounded Latency), however, Ethernet is marked ✖.

Table 7.2: Compliance to Application-Level Property 1 - Mixed QoS
Criteria       Ethernet   ARINC 664   TTEthernet   TSN   SpF
Data Rate      ✔          ✔           ✔            ✔     ✔
Latency        ✖          ✔           ✔            ✔     ✔
Jitter         ✖          ✖           ✔            ✔     ✔
Suitability    ✖          ✖           ✔            ✔     ✔

Table 7.3: Compliance to Application-Level Property 2 - Time Management
Criteria       Ethernet   ARINC 664   TTEthernet   TSN   SpF
At Layer 2     ✖          ✖           ✔            ✔     ✔
Robustness     ✖          ✖           ✔            ✔     ✖
Interaction    ✖          ✖           ✖            ✔     ✔
Suitability    ✖          ✖           ✔            ✔     ✔

Table 7.4: Compliance to Application-Level Property 3 - Fault Tolerant Operations
Criteria            Ethernet   ARINC 664   TTEthernet   TSN   SpF
Error Detection     ✖          ✖           ✔            ✔     ✔
Error Signalling    ✔          ✔           ✔            ✔     ✔
Redundancy          ✖          ✔           ✔            ✔     ✔
Fault Containment   ✖
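To illustrate how these qualitative tables combine, a minimal sketch using only the Suitability rows of Tables 7.2 and 7.3 (Table 7.4 being truncated here); the dictionary encoding is ours:

suitable_property1 = {"Ethernet": False, "ARINC 664": False,
                      "TTEthernet": True, "TSN": True, "SpF": True}
suitable_property2 = {"Ethernet": False, "ARINC 664": False,
                      "TTEthernet": True, "TSN": True, "SpF": True}
# shortlist the technologies suitable on every listed property
shortlist = [t for t in suitable_property1
             if suitable_property1[t] and suitable_property2[t]]
print(shortlist)  # ['TTEthernet', 'TSN', 'SpF']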
Definition 56 (Config(p)). The configuration Config(p) of a port p ∈ P consists in configuring the Gate Control List of the port, if any, as well as selecting and configuring the Transmission Selection Algorithms for each queue, if any. In summary:

Config(p) = ⟨GCL(p), {TSA_q : q ∈ {q_1, ..., q_8}}⟩

Hypothesis 8 (Fixed Path). As in the state of the art, we assume that the routing of the flows in the network is fixed and static.

Choice 4 (Ethernet Static Priority). In the rest of this manuscript, we chose Static Priority, conveniently available in standard Ethernet (cf. Section 3.6.1), as the medium access strategy in non-last-hop ports.

Choice 5 (TSN Time Aware Shaper). In the rest of this manuscript, we chose the TSN Time Aware Shaper as the scheduling mechanism for last-hop ports.

• At any time, either exactly one jitter-associated queue gate is open, or several no-jitter-associated queue gates are open;
• if q_i is allocated to a jitter flow f: the gate is closed almost all the time. It is opened when a message f_l is scheduled and remains open during the message transmission duration (τ_{f_l});
• if q_i is allocated to no-jitter flow(s): the gate remains always open, except when one jitter-associated queue is open.

Definition 64 (SchedLH). Let f_l ∈ F_j be a jitter message; SchedLH[f_l] denotes the instant at which the gate FtQM_LH(f) shall be opened.

Remark 28 (Link between SchedLH[f_l], T_r(f_l) and Lat_{f_l}). With this intermediate decision variable, SchedLH[f_l] is linked to T_r(f_l) by a formula given in the manuscript. Thanks to Hypothesis 8 (fixed flow routing) and Remark 27 (no TSA), we simplify equation 9.1 into equation 9.2. Hence the variables are the flow-to-queue mapping and the gate control list schedule for all output ports. Instead of computing GCL (see Section 9.1.1) directly, as in the state of the art, we introduce an intermediate decision variable, SchedLH.

Compute(Remaining, Start, Queue, UnitSize) is a recursive function. The first argument is a number of bytes, Remaining; the second argument is an instant, Start; the third one is a queue, Queue; and the last one, UnitSize, is the maximum size of a frame going into that queue. The principle of the function is quite simple: starting at Start, the function goes back in time until it has gathered enough open slots to send ⌊Remaining / UnitSize⌋ frames in queue Queue. Since the order of frames received in Queue is unknown a priori, instead of considering the exact size of all the frames, the Compute function considers that all frames are UnitSize long.
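One possible reading of Compute, written iteratively for clarity (the manuscript describes it recursively); we assume, hypothetically, that each open slot of the queue fits exactly one UnitSize-long frame and that the function returns the instant of the earliest slot reserved:

def compute(remaining, start, open_slots, unit_size):
    # open_slots: sorted instants at which the queue's gate is open
    needed = remaining // unit_size  # floor, as in the text above
    if needed == 0:
        return start
    usable = [t for t in open_slots if t <= start]
    if len(usable) < needed:
        raise ValueError("not enough open slots before Start")
    # go back in time until `needed` slots have been gathered
    return usable[-needed]

slots = [10, 20, 30, 40, 50]
print(compute(remaining=3000, start=45, open_slots=slots, unit_size=1500))
# -> 30: the two slots at 30 and 40 are reserved before Start = 45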
Table 11.1: Set of flows F
Name        Period    f.jitter   Size_f   Bandwidth
f_1         125 ms    NA         64       4 Mbit/s
f_2         125 ms    NA         512      32 Mbit/s
f_3         250 ms    NA         64       2 Mbit/s
f_4         500 ms    NA         1500     24 Mbit/s
f_5         125 ms    NA         128      8 Mbit/s
f_6         125 ms    NA         512      32 Mbit/s
f_7         250 ms    NA         512      16 Mbit/s
f_8         125 ms    NA         128      4 Mbit/s
f_9-f_13    125 ms    1 µs       64       4 Mbit/s
f_14        125 ms    500 µs     256      16 Mbit/s
f_15        125 ms    500 µs     512      32 Mbit/s

Table 11.2: Set of flows F (number, source, path, destination, flows); only the header and the beginning of the first row, "① ES A", survive extraction.

B.2 Orion CEV Use Case

Tables B.8-B.16: Flow Definition for the Orion CEV use case (Parts 1-9): for each flow f_vl…, its source, destination, maximum data size (bytes) and period, expressed as a multiple of P_MIF.

B.2.2 Flow Constraints

All flows have implicit deadlines and are subject to the safety requirement. In addition, the flows of Tables B.17, B.18, B.19, B.20 and B.21 have jitter constraints.

Roadmap figure (chapter numbers refer to the manuscript): introduction to satellite systems and networks (Chap. 2); formal specification of satellite performance and QoS requirements (Chap. 8); qualitative selection among Ethernet, ARINC 664, TTEthernet, TSN and SpaceFibre, leading to TTEthernet, TSN and SpaceFibre (Chap. 5, 6, 7); focus on the 802.1Qbv standard (Chap. 3); configuration of a TSN-based satellite network with Egress TT (Chap. 10), with its two implementations, Exclusive Queue Allocation and Size Based Isolation.
Table 3.1: Compliance with Property 1 - Mixed traffic QoS
Criterion          Ethernet   ARINC 664   TTEthernet   TSN   SpF
High data rate     ✔          ✔           ✔            ✔     ✔
Bounded latency    ✖          ✔           ✔            ✔     ✔
Very low jitter    ✖          ✖           ✔            ✔     ✔
Compatibility      ✖          ✖           ✔            ✔     ✔

Table 3.2: Compliance with Property 2 - Time management
Criterion                                   Ethernet   ARINC 664   TTEthernet   TSN   SpF
MAC-level synchronization                   ✖          ✖           ✔            ✔     ✔
Robustness of synchronization algorithms    ✖          ✖           ✔            ✔     ✖
Interfaces with the application layer       ✖          ✖           ✖            ✔     ✔
Compatibility                               ✖          ✖           ✔            ✔     ✔

Table 3.3: Compliance with Property 3 - Fault tolerance
Criterion                       Ethernet   ARINC 664   TTEthernet   TSN   SpF
Fault detection capability      ✖          ✖           ✔            ✔     ✔
Fault signalling capability     ✔          ✔           ✔            ✔     ✔
Redundancy capability           ✖          ✔           ✔            ✔     ✔
Fault containment capability    ✖

The analysis of the comparison results summarized in the three tables above seems to indicate that three technologies, SpaceFibre, TTEthernet and Time Sensitive Networking, could, from a qualitative standpoint, meet the requirements of future missions. The two other technologies, Ethernet and ARINC 664, were ruled out because, among other reasons, they would not allow the targeted levels of quality of service to be reached. We take advantage of this paragraph to detail other arguments, not related to quality of service or to network operation, that must be taken into account in the selection process.

In the satellite, applications communicate with each other through the network. These exchanges materialize as flows of messages. Before modelling these flows in the network, we introduce below the behaviour of these applications. To this end, we define two quantities, P_MIF and P_MAF.

Definition 4 (P_MIF, MIF cycle). The executions of on-board applications (e.g. vision-based navigation, attitude and orbit control, payload data processing, etc.) follow a cyclic pattern. This cycle, called the MIF cycle or MIF period, has a duration P_MIF that is constant per satellite.

Definition 5 (P_MAF, MAF cycle). The MIF period represents the duration of one applicative cycle. Since several applications, with potentially different periods, coexist in the satellite, a system period can be defined as the hyper-period of all the applications. This period, called the MAF period or MAF cycle, has duration P_MAF. Within one MAF cycle there are k MIF cycles, so that P_MAF = k * P_MIF, k ∈ N*. The MIF and MAF cycles are illustrated in Fig. 4.1.

Remark 4. In the manufacturer's satellites, a usual value for P_MAF was 1000 ms. From this value, usual values of k and the associated P_MIF durations are listed in Table 4.1.

Table 4.1: Usual values of k and P_MIF
k     P_MIF
8     125 ms
16    62.5 ms
32    31.25 ms

Figure 4.1: Link between P_MIF and P_MAF with k = 8 (P_MAF = 8 * P_MIF)
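A one-line check of the relation P_MAF = k * P_MIF behind Table 4.1:

P_MAF = 1000.0  # ms, the usual value mentioned above
for k in (8, 16, 32):
    print(k, P_MAF / k, "ms")  # 125.0, 62.5, 31.25 - matches Table 4.1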
4.1.2 Flow modelling

Definitions. The applications introduced in the previous paragraph communicate with each other by means of messages sent over the network. These messages are grouped into flows.

Definition 6 (Flow, f, F). A flow f is a unidirectional sequence of messages from a source to a destination. The set of flows of the system is denoted F. Each flow is characterized by the following tuple:

∀f ∈ F, f = ⟨Src_f, Dest_f, Size_f, r_f⟩

where:
• Src_f ∈ D is the end-station where the message is generated and then emitted;
• Dest_f is the recipient of the message;
• Size_f is the constant size of a message;
• r_f is our way of expressing the period of the flow.

Hypothesis 1. In this study, we consider that applicative messages do not exceed the maximum size of an Ethernet frame. The words message and frame may therefore be used interchangeably to describe both the applicative message and the frame in the network.

The period P_f of a flow f is related to its ratio r_f by an equation given in the manuscript. Each message f_l has an earliest (resp. latest) production date. The upper bound of Prod(f_l) is called the release date; we denote it Release(f_l).
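A minimal sketch of this flow model; the mapping from r_f to the period P_f shown here (P_f = r_f * P_MIF) is purely an illustrative assumption, since the manuscript's exact formula did not survive extraction:

from dataclasses import dataclass

@dataclass
class Flow:
    src: str    # Src_f, emitting end-station
    dest: str   # Dest_f, recipient
    size: int   # Size_f, constant message size in bytes
    r: int      # r_f, period ratio

    def period_ms(self, p_mif_ms):
        # hypothetical convention, for illustration only
        return self.r * p_mif_ms

f = Flow(src="OBC", dest="RIU", size=64, r=1)
print(f.period_ms(p_mif_ms=125))  # 125 ms, i.e. one message per MIF cycle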
Notes: FDIR stands for Fault Detection, Isolation and Recovery; FIFO for First-In-First-Out; PCIe for Peripheral Component Interconnect Express; ECSS for European Cooperation for Space Standardization (https://ecss.nl/); OBC for On-Board Computer. The 20 bytes correspond to the sizes of the Preamble, Frame Delimiter and Inter-Frame Gap, commonly denoted SFD + IFG. Additional requirements, in particular fault-tolerance quality of service requirements derived from Application-Level Properties 2 and 3 (cf. Sections 5.3.3 & 5.3.2), were taken into account in the definition of the profile, hence leading to the selection of additional TSN standards. Without loss of generality, we assumed a fixed number of queues. Reminder: the Asynchronous Traffic Shaper is considered out of scope of this study. This is a simplification of the IEEE 802.1Q standard, where OperControlList is the only considered parameter. As a reminder, F_j represents the set of jitter flows, i.e. flows with very low jitter constraints. Note that this figure is identical to Fig. 7.1; we put it again in this section for easier readability. The chapters indicated in the roadmap figure are those of the manuscript.

Table B.21: Flow QoS for Orion CEV Use Case - Part 5

Conclusion

Acknowledgements

Part V - Appendices

Appendix A: Introduction to Time Sensitive Networking

A.1 802.1Qbu & 802.3br - Frame Preemption

A.1.1 Introduction

In order to reduce the interaction between frames of different traffic classes, the Time Sensitive Networking Task Group has introduced one feature dedicated to preemption, namely IEEE 802.1Qbu - Frame Preemption. It is associated with IEEE 802.3br - Interspersing Express Traffic, for lower ISO layer support. Together, 802.1Qbu and 802.3br allow purposely tagged frames (express) to suspend the transmission of other frames (preemptable) for their own transmission on a point-to-point link.

Definition 78 (Express/Preemptable Tag). IEEE 802.1Qbu offers the possibility to tag each packet, according to its priority, per port, as express or preemptable via a configuration table. This hence defines two types of traffic:
• Express: this traffic, tagged express, is considered to have a higher priority than preemptable traffic. As a consequence, it can preempt any preemptable traffic.
• Preemptable: this traffic, tagged preemptable, is considered less important (i.e. lower priority) than express traffic. As a consequence, it can be preempted by any express traffic and cannot preempt any traffic.

A.1.2 Frame format for Frame Preemption support

A.1.3 How frame preemption works

Frame preemption occurs when an express frame becomes available for transmission while a preemptable frame is being emitted. The emission of the preemptable frame shall be stopped so that the express frame can be emitted. In order to be able to do so, at least 64 * (1 + addFragSize) - 4 bytes of the preemptable frame must have already been sent; the variable addFragSize allows the user to configure the minimum fragment size. Once this number of bytes has been reached, an FCS (short for Frame Check Sequence), corresponding to the 4 least significant bytes of the CRC (Cyclic Redundancy Check) of the already transmitted bytes, is computed, appended to the frame and sent on the medium. It is then called mCRC, and this first frame is called the First Fragment. The rest of the preempted frame is on hold until the express frame has been emitted. When the medium is free, i.e. all express traffic that was available for transmission has been sent, the emission of the preemptable frame can start again. There cannot be any transmission of preemptable traffic other than the frame that was preempted, because this would mean that a preemptable frame is preempting another preemptable frame, which is, by definition, impossible. When the emission of the frame starts again, the MAC Ethernet headers are not repeated.

Preemption allows to increase the bandwidth of preemptable traffic at the cost of a small overhead of 12 bytes per fragment (excluding the Inter-Frame Gap), which is a rather small overhead.

A.1.7 Summary

TSN protocol Frame Preemption proposes a way to reduce the interference between express and preemptable traffic while maximizing the useful bandwidth for the preemptable traffic when combined with 802.1Qbv.
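A minimal sketch of the two quantities stated above: the minimum number of bytes that must already be on the wire before a preemptable frame may be cut, and the per-fragment overhead.

def min_bytes_before_preemption(add_frag_size):
    # at least 64 * (1 + addFragSize) - 4 bytes must already be sent
    return 64 * (1 + add_frag_size) - 4

def preemption_overhead_bytes(num_fragments):
    # 12 bytes per fragment, excluding the Inter-Frame Gap
    return 12 * num_fragments

print(min_bytes_before_preemption(0))  # 60
print(preemption_overhead_bytes(3))    # 36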
A.2 802.1Qci - Per-Stream Policing and Filtering

A.2.1 Introduction

With TSN protocol 802.1Qbv, the bandwidth is shared between traffic classes. In order to ensure that this share is respected, and in order to add security features to the switch, the TSN Task Group introduced IEEE 802.1Qci - Per-Stream Policing and Filtering. The overall goal of this TSN protocol is to protect the downstream switches and network devices from both device failures and security attacks through impersonation or denial of service. It works by filtering and policing incoming traffic in the ingress port of a switch.

A.2.2 Concept of Streams

With Qci, a new concept must be introduced: streams. This stream concept is a new level of abstraction above frames. All the policing and filtering functions introduced with 802.1Qci are applied to streams, not to frames. These stream-level decisions have to be taken into account when designing the system, especially regarding time and space partitioning.

Stream Meters

Last but not least, the last mechanism provided by 802.1Qci is the stream meter mechanism. It is the equivalent of a token bucket that makes filtering and policing decisions to ensure that the streams respect their bandwidth specifications. In fact, it really resembles two "linked together" token buckets. The first token bucket is used to check that the regular bandwidth reserved or attributed to a stream or a group of streams is respected. If the stream(s) overtake this bandwidth limit, their frames go to a second token bucket, which can be filled with tokens in order to admit some burst. In that case, the frames are marked yellow and passed to the switch. If there are not enough tokens available, the frame is marked red and dropped. This colour mapping can come in handy if yellow frames are automatically dropped in case of high network overload. A link between the two token buckets is also provided, but is not mandatory: if used, when the nominal traffic token bucket overflows (because of an absence of traffic), the overflowing tokens go to the burst bucket instead of being lost. The behaviour of this token bucket is linked to counters that can be retrieved by the user for fault detection and/or monitoring purposes. This meter mechanism is again very handy: it protects the network from both the security and safety standpoints. Even if a device produces more data than it has been specified and allowed to, the rest of the network will not be impacted and the user will be notified. Again, the only drawback is that a meter can be shared between several streams; the time and space partitioning between streams really needs to be assessed.
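A minimal sketch of the two-bucket stream meter just described; the token counts, refill policy and green/yellow/red marking below are illustrative assumptions:

class StreamMeter:
    def __init__(self, nominal_tokens, burst_tokens, link_buckets=True):
        self.nominal = nominal_tokens
        self.burst = burst_tokens
        self.link_buckets = link_buckets  # optional overflow link

    def refill_nominal(self, tokens, capacity):
        overflow = max(0, self.nominal + tokens - capacity)
        self.nominal = min(capacity, self.nominal + tokens)
        if self.link_buckets:
            # unused nominal tokens spill into the burst bucket
            self.burst += overflow

    def submit(self, frame_size):
        if self.nominal >= frame_size:
            self.nominal -= frame_size
            return "green"   # within the reserved bandwidth
        if self.burst >= frame_size:
            self.burst -= frame_size
            return "yellow"  # admitted as burst, may be dropped on overload
        return "red"         # dropped

meter = StreamMeter(nominal_tokens=1000, burst_tokens=500)
print([meter.submit(s) for s in (600, 500, 600)])  # ['green', 'yellow', 'red']
meter.refill_nominal(tokens=800, capacity=1000)
print(meter.nominal, meter.burst)  # 1000 200: the 200 overflow tokens spilled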
A.2.5 Summary

TSN protocol 802.1Qci provides a good way to handle streams in a TSN network and to apply ingress policing and filtering functions for security and safety purposes. Its only drawback is that the network designer will have to design the system knowing that quality of service is applied at the level of a stream or group of streams instead of at frame level. Some of the time and space partitioning usually acquired when working with Virtual Links (TTEthernet, AFDX) will have to be rediscussed in TSN if Qci is used.

A.3 Fault Tolerance with 802.1CB - Frame Replication and Elimination for Reliability

B.1.2 Flow Constraints

Now that the flows of our system have been identified, let us assign them some flow constraints. Again, for easier readability, the flow constraints are gathered in Tables B.6 and B.7.

Name (Suffix)     Υ(f)
f* (all flows)    Deadlines("implicit")
f* (all flows)    SafetyReq1()

Appendix C: A Step Further - Relaxing Hypotheses

In this manuscript, we have chosen through Hypothesis 2 that the maximum size of any message of any flow could not be greater than the Ethernet MTU. In fact, this allows us to state that one applicative message corresponds to one Ethernet frame. Relaxing Hyp. 2 leads to one applicative message being split (or fragmented) into several Ethernet frames. In that case, the concepts of production instant and delivery instant of a message shall be redefined.

Definition 79 (Production Date). Let f be a flow (∈ F) that does not satisfy Hyp. 2, and let f_l be the l-th message of f. Let us denote Frames(f_l) the set of Ethernet frames corresponding to f_l, and First(Frames(f_l)) (resp. Last(Frames(f_l))) the first (resp. last) frame corresponding to f_l. The production instant of f_l, i.e. T_p(f_l), corresponds to the production instant (at applicative level) of the first bit of First(Frames(f_l)).

Definition 80 (Delivery Date). Let f be a flow (∈ F) that does not satisfy Hyp. 2, and let f_l be the l-th message of f. Let us denote Frames(f_l) the set of Ethernet frames corresponding to f_l, and First(Frames(f_l)) (resp. Last(Frames(f_l))) the first (resp. last) frame corresponding to f_l. The delivery date of f_l, i.e. T_r(f_l), corresponds to the delivery instant (at applicative level) of the last bit of Last(Frames(f_l)).

With these newly adapted definitions, Hyp. 2 can be relaxed. The constraints introduced in the manuscript are based on the concepts of production instant and delivery instant; since these have just been redefined, the size of an applicative message is no longer limited to the Ethernet MTU.

Appendix D: Résumé Long Français
04117564
en
[ "sdv" ]
2024/03/04 16:41:26
2023
https://hal.science/hal-04117564/file/Norton%20J%20-%20SPPE%202023%20-%20submission%20HAL.pdf
Joanna Norton (email: [email protected]), Catherine Gandubert, Isabelle Chaudieu, Sonia Pellissier, Sydney Gaultier

Association between uncertainty regarding right-to-stay and mental health in unaccompanied and separated migrant children (UASC) reaching adulthood: findings from France

Keywords: unaccompanied and separated children, transition to adulthood, depression, anxiety, trauma, vulnerability

Purpose: There is substantial evidence suggesting high levels of mental health problems in unaccompanied and separated children (UASC). However, there is less focus on the first years of adulthood, characterised by increased vulnerability and fear of deportation. We aim to describe the mental health of UASC on reaching adulthood, and how this is affected by uncertainty regarding their right-to-stay in France.

Methods: 110 youth aged 18-22 were recruited via child protection reception centres. We administered the Patient Health Questionnaire somatic (PHQ-15), anxiety (GAD-7) and depression (PHQ-9) modules, the Post-Traumatic Stress Disorder Checklist (PCL-5) and the Connor-Davidson Resilience Scale (CD-RISC-10). Logistic regression analysis was performed with the dependent variable, a secure (versus uncertain) situation, defined as (i) holding a residence permit and being in school, apprenticeship or a salaried job, or (ii) waiting for a residence permit whilst occupying a salaried job.

Results: Of the sample, 18.4% reached criteria (score ≥ 10) for probable somatic disorder, 17.6% for anxiety and 28.7% for depression; 41.8% were in an uncertain situation. Uncertainty was associated with higher anxiety (OR per interquartile range (95% CI): 1.77 (1.05-2.98)) and post-traumatic stress symptoms (2.05 (1.06-4.00)), lower resilience (0.50 (0.27-0.91)), and participants rating their anxiety (p=0.02) and depressive symptoms (p=0.003) as more severe since reaching adulthood.

Conclusions: Our findings suggest uncertainty regarding right-to-stay is associated with increased mental health symptoms, specifically anxiety and trauma-induced stress, thereby highlighting the vulnerability of UASC in their first years of adulthood. This calls for greater support during this transition period, with regular symptom monitoring for timely psychological interventions.

Introduction

Wars, political conflict, violence and lack of opportunities contribute to the continuous arrival of migrants at the borders of European countries. In 2020, minors represented almost one-third of all first-time asylum-seekers and close to 50% arrived unaccompanied or separated [START_REF]Asylum statistics[END_REF]. Their country of origin, migration journey and experiences vary substantially from one host country to another [START_REF]Refugee and Migrant Children in Europe, Accompanied, Unaccompanied and Separated[END_REF]. Most unaccompanied and separated children (UASC) migrating from Sub-Saharan African countries follow the Central Mediterranean migration route, which is marked by high levels of abuse, trafficking and exploitation [START_REF]Harrowing Journeys: children and Youth on the move across the Mediterranean Sea, at risk of trafficking and exploitation[END_REF]. UASC, when recognised as minors, fall under the United Nations Convention on the Rights of the Child, with all signatory countries legally bound to offer them protection, care and access to education [START_REF]United Nation's Convention on the Rights of the Child[END_REF]. This has led to the opening of specific reception and care facilities.
The planning and preparation of the transition to adulthood, when protection under this law ends, is of crucial importance [START_REF] Allsopp | Best interests, durable solutions and belonging: policy discourses shaping the futures of unaccompanied migrant and refugee minors coming of age in Europe[END_REF][START_REF]Unaccompanied and separated asylum-seeking and refugee children turning eighteen: what to celebrate?[END_REF][START_REF] Wade | Preparation and transition planning for unaccompanied asylum-seeking and refugee young people: A review of evidence in England[END_REF]. With the administrative importance of the age of 18, UASC experience an accelerated and often early transition to adulthood. This transition is of two types: a life transition involving complex developmental changes, alongside a legal transition to a new and often precarious status, as highlighted in the UNHCR report entitled 'Unaccompanied and separated asylum-seeking and refugee children turning eighteen: what to celebrate?' [START_REF]Unaccompanied and separated asylum-seeking and refugee children turning eighteen: what to celebrate?[END_REF]. Country-specific and individual situations mean that whilst some youth may rapidly gain independence and right-to-stay, many become trapped in endless administrative procedures with protracted periods of waiting and uncertainty. This can lead to 'appeal rights exhaustion', with youths disengaging from all statutory services and living illegally to avoid forced return to their country of origin. Yet, despite the vulnerability of youth in this crucial period, it is one of the least studied periods in the life course of migrants [START_REF] Chase | Transitions, capabilities and wellbeing: how Afghan unaccompanied young people experience becoming 'adult' in the UK and beyond[END_REF]. UASC face many adversities before, during and after their migration journey [START_REF] Smid | Late-onset PTSD in unaccompanied refugee minors: exploring the predictive utility of depression and anxiety symptoms[END_REF]. There is now evidence highlighting the high levels of psychological distress, in particular post-traumatic stress symptoms (PTSS) or disorder (PTSD), experienced by UASC [START_REF] Kien | Prevalence of mental disorders in young refugees and asylum seekers in European Countries: a systematic review[END_REF][START_REF] Monpierre | Global health of unaccompanied refugee minors in Gironde (France) between 2011 and 2013[END_REF][START_REF] Von Werthern | The mental health and wellbeing of Unaccompanied Refugee Minors (URMs)[END_REF][START_REF] Hohne | Prevalences of mental distress and its associated factors in unaccompanied refugee minors in Germany[END_REF]. A systematic review of 47 studies reported considerably higher prevalence rates for a range of mental disorders in migrant minors compared to the general population, albeit with wide variations between studies. Up to a third were affected by depression (10.3% to 32.8%) or anxiety (8.7% to 31.6%), and up to half (19.0% to 52.7%) by PTSD. Rates were particularly high when restricted to unaccompanied minors [START_REF] Kien | Prevalence of mental disorders in young refugees and asylum seekers in European Countries: a systematic review[END_REF].
These figures are to be compared with worldwide childhood and adolescent prevalence rates estimated at 2.6% (95%CI: 1.7-3.9) for depression and 6.5% (95%CI: 4.7-9.1) for anxiety [START_REF] Polanczyk | Annual research review: A meta-analysis of the worldwide prevalence of mental disorders in children and adolescents[END_REF], and with a 7.8% prevalence rate for PTSD by age 18 [START_REF] Lewis | The epidemiology of trauma and post-traumatic stress disorder in a representative cohort of young people in England and Wales[END_REF]. Rates in UASC in their first years of adulthood are seldom reported, as studies either focus on minors or include wider adult age groups [START_REF] Gimeno-Monterde | Unaccompanied young people and transition to adulthood: Challenges ofr child care services[END_REF][START_REF] Hoell | Prevalence of depressive symptoms and symptoms of post-traumatic stress disorder among newly arrived refugees and asylum seekers in Germany: systematic review and meta-analysis[END_REF][START_REF] Oppedal | The asylum-process, bicultural identity and depression among unaccompanied young refugees[END_REF]. Several longitudinal studies have examined change in the mental health of UASC reaching adulthood according to right-to-stay in the host country and the asylum process [START_REF] Bean | Course and predictors of mental health of unaccompanied refugee minors in the Netherlands: one year follow-up[END_REF][START_REF] Jakobsen | The impact of the asylum process on mental health: a longitudinal study of unaccompanied refugee minors in Norway[END_REF][START_REF] Jensen | Development of mental health problems -a follow-up study of unaccompanied refugee minors[END_REF][START_REF] Muller | 1-year follow-up of the mental health and stress factors in asylum-seeking children and adolescents resettled in Germany[END_REF][START_REF] Vervliet | Longitudinal follow-up of the mental health of unaccompanied refugee minors[END_REF]. Evidence from these studies conducted in Norway [START_REF] Jakobsen | The impact of the asylum process on mental health: a longitudinal study of unaccompanied refugee minors in Norway[END_REF][START_REF] Jensen | Development of mental health problems -a follow-up study of unaccompanied refugee minors[END_REF], Holland [START_REF] Bean | Course and predictors of mental health of unaccompanied refugee minors in the Netherlands: one year follow-up[END_REF], Belgium [START_REF] Vervliet | Longitudinal follow-up of the mental health of unaccompanied refugee minors[END_REF] and Germany [START_REF] Muller | 1-year follow-up of the mental health and stress factors in asylum-seeking children and adolescents resettled in Germany[END_REF], with UASC originating mainly from Asia [START_REF] Jakobsen | The impact of the asylum process on mental health: a longitudinal study of unaccompanied refugee minors in Norway[END_REF][START_REF] Jensen | Development of mental health problems -a follow-up study of unaccompanied refugee minors[END_REF][START_REF] Muller | 1-year follow-up of the mental health and stress factors in asylum-seeking children and adolescents resettled in Germany[END_REF][START_REF] Vervliet | Longitudinal follow-up of the mental health of unaccompanied refugee minors[END_REF], suggests that overall the high level of psychological distress on arrival stays relatively unchanged over the short term (2 years).
However, a higher age at arrival [START_REF] Jensen | Long-term mental health in unaccompanied refugee minors: pre-and post-flight predictors[END_REF], low social support and poor living arrangements [START_REF] Jakobsen | The impact of the asylum process on mental health: a longitudinal study of unaccompanied refugee minors in Norway[END_REF][START_REF] Mitra | Prevention of psychological distress and promotion of resilience amongst unaccompanied refugee minors in resettlement countries[END_REF], and above all rejection of asylum status [START_REF] Jakobsen | The impact of the asylum process on mental health: a longitudinal study of unaccompanied refugee minors in Norway[END_REF][START_REF] Muller | 1-year follow-up of the mental health and stress factors in asylum-seeking children and adolescents resettled in Germany[END_REF] have been found to negatively affect their mental health trajectory. In addition, an increase in daily stressors is reported over time; new stressors such as discrimination, missing family, the impact of age assessment and homelessness are concentrated around the pivotal age of 18 [START_REF] Vervliet | Longitudinal follow-up of the mental health of unaccompanied refugee minors[END_REF]. This is in keeping with the dose-response relationship established between cumulative trauma and PTSS [START_REF] Fazel | Mental health of displaced and refugee children resettled in high-income countries: risk and protective factors[END_REF][START_REF] Hodes | Risk and resilience for psychological distress amongst unaccompanied asylum seeking adolescents[END_REF]. In a review of the impact of traumatic refugee experiences on the mental health of UASC, adolescence and being female were found to considerably increase the risk of psychiatric disorders, namely depression, anxiety and PTSD [START_REF] Von Werthern | The mental health and wellbeing of Unaccompanied Refugee Minors (URMs)[END_REF]. Furthermore, a recent study of adolescent refugees and asylum-seekers (age 14-21) in Germany found stressful life events, female gender and fear of deportation to be associated with increased mental distress [START_REF] Hohne | Prevalences of mental distress and its associated factors in unaccompanied refugee minors in Germany[END_REF].

Despite evidence showing the increased vulnerability of UASC on reaching adulthood, with a deleterious effect on their mental health, this has seldom been explored for specific mental disorders using standardised and validated research instruments. The impact of reaching adulthood for UASC varies from one European country to another: whereas in some countries there are no significant changes on reaching 18 within the asylum-seeking process, in others, such as France, there is 'little to celebrate' [START_REF]Unaccompanied and separated asylum-seeking and refugee children turning eighteen: what to celebrate?[END_REF]. In France, UASC fall under the responsibility of the local Child Protection Services (CPS), which make institutional care arrangements through a network of child protection reception centres providing accommodation and access to education, legal help and health care [START_REF]InfoMIE Centre de ressources sur les mineurs isolés étrangers[END_REF][START_REF]Circulaire du 31 mai 2013 relative aux modalités de prise en charge des jeunes isolés étrangers :dispositif national de mise à l'abri[END_REF]. UASC usually apply for a temporary residence permit (valid up to 4 years) rather than asylum.
At age 18, child protection ends and youth fall under state law. However, some youth obtain a 'Young Adult Contract' (YA-contract), a form of 'bridging support' which provides an 'integration pathway' into education and the labour market and thereby extends child protection rights. There appear to be no clearly defined criteria for obtaining these contracts, but their renewal (possible up until age 21) often depends on educational success and/or employer appreciation [START_REF]Unaccompanied and separated asylum-seeking and refugee children turning eighteen: what to celebrate?[END_REF]. Furthermore, YA-contracts [START_REF]Mineurs isolés étrangers: comment mieux les protéger[END_REF] as well as care arrangements [START_REF] Sturm | Mental health care for unaccompanied minors in France: towards a comprehensive approach to the needs of a vulnerable minority[END_REF] are financed at the local level, and there can be significant disparities between areas.

Research on UASC in France [START_REF] Monpierre | Global health of unaccompanied refugee minors in Gironde (France) between 2011 and 2013[END_REF] is very limited. France differs from many other European countries in attracting UASC from French-speaking West African countries travelling the highly dangerous Central Mediterranean migration route [START_REF]Harrowing Journeys: children and Youth on the move across the Mediterranean Sea, at risk of trafficking and exploitation[END_REF]. Yet most research carried out to date has been in host countries with different reception and resettlement schemes, with UASC of Asian, East African and European origin. Moreover, few studies have focused specifically on the first years of adulthood, examining individual mental health disorders using standardised research instruments.

In the present cross-sectional study including 110 youth aged 18 to 22, we aimed to describe the mental health, trauma and resilience of unaccompanied migrant youth in their first years of adulthood, using standardised and validated research instruments. In addition, we sought to investigate the association between uncertainty regarding right-to-stay and mental health. We hypothesised that youth in an uncertain situation would be more likely to present mental health symptoms.

Materials and methods

Study setting and participants

The study was carried out in three French cities: Chambery, Montpellier and La Rochelle. Participants were recruited between September 2019 and September 2020. We included youth aged between 18 and 22 who no longer benefited from a YA-contract, who had arrived in France as UASC, were registered with the CPSs and spoke sufficiently good French to participate. Youth meeting these criteria and for whom a contact number was still available were identified and contacted by carers at the child reception centres (psychologists, social workers, volunteer helpers, etc.). They were given a brief description of the study and invited to contact the study coordinator by phone. Upon acceptance and prior to the interview, participants read the study information sheet and signed an informed consent form. The study was approved by the ethics committee of the University of Savoie-Mont Blanc (ref. CEREUS 20118-22). Fewer than five youth refused to participate once briefed about the study. The sample size was 110 participants.

Measures

All questionnaires and scales were administered at a single time-point by eight research assistants trained in interview techniques.
After questions on socio-demographics, migration pathway, previous living conditions in the CPS centre, current situation, support network and difficulties encountered during the transition to adulthood, the following validated scales were administered:

- The Patient Health Questionnaire 15-item somatoform module (PHQ-15), 7-item anxiety module (GAD-7), and 9-item depression module (PHQ-9) [START_REF] Spitzer | Validation and utility of a self-report version of PRIME-MD: the PHQ primary care study. Primary Care Evaluation of Mental Disorders. Patient Health Questionnaire[END_REF].
- The Life Events Checklist for DSM-5 (LEC-5), which screens for 15 lifetime potentially traumatic events [START_REF] Gray | Psychometric properties of the life events checklist[END_REF].
- The 20-item PTSD Checklist for DSM-5 (PCL-5), which assesses PTSS or PTSD experienced in the past month [START_REF] Blevins | The Posttraumatic Stress Disorder Checklist for DSM-5 (PCL-5): Development and Initial Psychometric Evaluation[END_REF].
- The 10-item Connor-Davidson Resilience Scale (CD-RISC-10), exploring resilience over the past month [START_REF] Campbell-Sills | Demographic and childhood environmental predictors of resilience in a community sample[END_REF][START_REF] Scali | Measuring resilience in adult women using the 10-items Connor-Davidson Resilience Scale (CD-RISC). Role of trauma exposure and anxiety disorders[END_REF].

On completion of each of the PHQ modules and the PCL-5, participants were asked in a single question to rate the severity of their symptoms since reaching adulthood (no change, more severe, less severe).

Study variables

For the three PHQ modules, categorical variables were constructed applying the widely used 10+ cut-off threshold for probable symptomatology [START_REF] Kroenke | The PHQ-9: validity of a brief depression severity measure[END_REF][START_REF] Kroenke | The PHQ-15: validity of a new measure for evaluating the severity of somatic symptoms[END_REF][START_REF] Kroenke | Anxiety disorders in primary care: prevalence, impairment, comorbidity, and detection[END_REF]. DSM-5 major depression was also reported. Given that the PHQ-9 has failed to show measurement invariance in asylum-seeking populations, we also considered the scores as continuous variables [START_REF] Grupp | Is depression comparable between asylum seekers and native Germans? An investigation of measurement invariance of the PHQ-9[END_REF]. The dependent variable, a secure (versus uncertain) administrative situation, was defined as (i) having obtained a residence permit and being in school, an apprenticeship or a salaried job, or (ii) having applied for a residence permit whilst occupying a salaried job.

Statistical analysis

The sample was described using percentages for categorical variables and the median (range) for continuous variables. The Chi² test for categorical variables and the Mann-Whitney test for continuous variables were used to study associations between socio-demographic variables and uncertainty. For the associations between mental health variables and uncertainty, we used logistic regression models adjusted for age and study centre. Linearity of continuous variables with the outcome was verified. Significant associations were further adjusted for the following five variables associated with uncertainty (p<0.10) and not entering into its definition: country of origin, reason for leaving (family conflict/abuse), contact with family, visit to a psychologist during child protection, and having a partner. Adjustment variables were entered one by one due to the small sample size. Results were expressed as odds ratios (OR) with 95% confidence intervals (95%CI), per interquartile range increase for continuous variables, for comparability purposes. The threshold for statistical significance was set at p<0.05. Analyses were performed using SAS EG 7.15 (SAS Institute, Inc., Cary, North Carolina).
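As a brief worked illustration of this per-IQR scaling (back-calculated here from the anxiety figures reported in Table 2, for illustration only), the odds ratio for an interquartile-range increase follows from the per-unit logistic coefficient β:

$$\mathrm{OR}_{\mathrm{IQR}} = \exp(\beta \times \mathrm{IQR})$$

With the reported $\mathrm{OR}_{\mathrm{IQR}} = 1.77$ and a total-sample anxiety-score IQR of 5.0 points, $\beta = \ln(1.77)/5.0 \approx 0.11$, i.e. each additional anxiety-score point multiplies the odds of an uncertain situation by roughly $e^{0.11} \approx 1.12$.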
Results

Socio-demographics

The median age at interview was 19.7 years (range: 18.1-22.8). The majority (92.7%) were male and originated from Africa (80%), mainly Guinea (36.4%), Ivory Coast (19.3%), Mali (15.9%) and Congo (13.6%). Two-thirds (68.2%) of the participants had temporary residence status (Table 1).

Migration background and child protection

Approximately half of the participants (51.8%) decided to leave their home country alone without their family knowing, 17.3% warned their family and 12.7% obtained their approval; for the remaining 18.2%, their family had decided for them. The main reasons for leaving were family conflict or abuse (43.6%), personal choice (36.4%), persecution or war (16.4%), economic problems (14.5%), abandonment by parent(s) (13.6%), and death of parent(s) (7.3%) (percentages do not add up as 41% of participants gave more than one reason). One third (32.7%) declared that they sometimes or often regretted leaving. Contact with family members ranged from never (15.4%) to rarely (15.6%), occasionally (24.6%), and regularly, with weekly contacts (45.6%). As UASC, 85.5% remained in the child protection centre where they were initially placed. During their stay, 57.3% saw a psychologist at least once, of whom 79% were offered psychological care. The types of difficulties reported were: uncertainty about the future (18.6%), separation from family (15.1%), feeling lonely (14.3%), pressure to succeed (12.3%), not being able to speak French (10.4%), living conditions (9.3%), not being able to express feelings (8.2%), cultural differences (7.7%), and a feeling of injustice and of an insufficient welcome by the host country (4.1%) (percentages do not add up as more than one type of difficulty could be reported). Globally, the participants felt that their carers at the child protection centre understood their difficulties (80.9%) and were satisfied with the care they received (75.5%).

Transition to adulthood and mental health

When asked, at the time of the interview, how they had felt during the first month after leaving child protection care, 61.5% of the participants declared they had felt less well than before, whereas only 17.7% rated their current wellbeing as worse than when they were minors. At the time of the interview, 18.4% reached criteria for a probable somatic disorder, 17.6% for anxiety and 28.7% for depression (Table 2). The most frequently (>25%) reported personally experienced stressful life-events were: physical assault (66.1%), witnessing the sudden and unexpected death of a close family member or friend (63.3%), assault with a weapon (47.7%), captivity (45.9%), transportation accident (35.5%), witnessing a violent death (34.9%), combat or exposure to a war zone (29.6%), and life-threatening illness or injury (27.5%). For those experiencing a traumatic event, the median PTSS score was 21 (IQR: 18) and 26.4% met criteria for DSM-5 PTSD. With respect to symptom change since reaching adulthood, 66% reported feeling less severe PTSS, and 30%, 40% and 37.3% reported less severe somatic, anxiety and depressive symptoms, respectively (Figure 1).
Uncertain situation and mental health

Of the sample, 41.8% were in an uncertain situation, as defined in the study. They were younger (p=0.01), more often of African origin (p=0.01), less likely to have benefited from a YA-contract (p=0.003), with lower income (p<0.0001) and living in more precarious housing (p<0.001) (Table 1). They were also more likely to have no or little contact with their family (28.3% versus 6.3% in a stable situation, p=0.02) and to have seen a psychologist during child protection (69.6% versus 48.4%, p=0.03). Although borderline significant, a higher proportion of those in an uncertain situation had migrated because of family conflict/abuse (54.5% versus 35.9%, p=0.06) and a lower proportion had a partner (26.9% versus 42.2%, p=0.08).

Anxiety symptoms increased the odds of being in an uncertain situation (OR per IQR increase (95%CI): 1.77 (1.05-2.98)). This was also the case for the PTSS total score (2.05 (1.06-4.00)). Similar magnitudes of association were found for the different PTSS sub-scales but were borderline or not significant. The odds of uncertainty decreased by a factor of two in youth showing higher resilience (0.50 (0.27-0.91)). Youth affected in their daily functioning by PHQ mental health symptoms were more likely to be in an uncertain situation (p=0.04) (Table 2). Significant associations were further adjusted for country of origin, migrating because of family conflict/abuse, contact with family, visit to a psychologist during child protection, and having a partner, each entered separately into the models. Findings were unchanged for anxiety and resilience. However, for PTSS, the association no longer reached significance when adjusted for contact with family and became borderline significant when adjusted for seeing a psychologist (Table 3). Regarding uncertainty and subjective change in symptom severity, we observed a significant trend for anxiety (p=0.02) and depressive symptoms (p=0.003), with participants in an uncertain situation being more likely to consider their symptoms as more severe since reaching adulthood (Table 4).

Discussion

Frequency of mental health symptoms

In agreement with findings elsewhere [START_REF] Kien | Prevalence of mental disorders in young refugees and asylum seekers in European Countries: a systematic review[END_REF][START_REF] Von Werthern | The mental health and wellbeing of Unaccompanied Refugee Minors (URMs)[END_REF], our results highlight the high frequency of mental health symptoms in youth who arrived as UASC. For instance, the proportion with a somatic symptom score ≥10 was approximately twice as high as in a normative population of men aged 18-39 [START_REF] Hinz | Frequency of somatic symptoms in the general population: Normative values for the Patient Health Questionnaire-15 (PHQ-15)[END_REF]. For probable PHQ anxiety and depression, rates were more than four times as high as in normative samples of men aged 18-39 for anxiety [START_REF] Hinz | Psychometric evaluation of the Generalized Anxiety Disorder Screener GAD-7, based on a large German general population sample[END_REF] and 14-24 for depression [START_REF] Kocalevent | Standardization of the depression screener patient health questionnaire (PHQ-9) in the general population[END_REF].
The PTSS score in our sample was considerably higher than in samples of undergraduate students, whether self-evaluated as having experienced a "very stressful life-event" (mean 15.4 (sd 14.7)) [START_REF] Blevins | The Posttraumatic Stress Disorder Checklist for DSM-5 (PCL-5): Development and Initial Psychometric Evaluation[END_REF] or when referring specifically to their most distressing experience (29.9 (17.7) and 20.4 (16.7) in the English and French sub-samples, respectively) [START_REF] Ashbaugh | Psychometric Validation of the English and French Versions of the Posttraumatic Stress Disorder Checklist for DSM-5 (PCL-5)[END_REF].

Comparison with other studies of UASC in the first years of adulthood is limited, as few have focused specifically on the 18-20 age group (extended to 22 in our study to take the YA-contract into account). Among those that span both adolescence and early adulthood [START_REF] Gimeno-Monterde | Unaccompanied young people and transition to adulthood: Challenges ofr child care services[END_REF][START_REF] Hohne | Prevalences of mental distress and its associated factors in unaccompanied refugee minors in Germany[END_REF][START_REF] Jakobsen | The impact of the asylum process on mental health: a longitudinal study of unaccompanied refugee minors in Norway[END_REF][START_REF] Muller | 1-year follow-up of the mental health and stress factors in asylum-seeking children and adolescents resettled in Germany[END_REF][START_REF] Oppedal | The asylum-process, bicultural identity and depression among unaccompanied young refugees[END_REF], mental health problems are seldom reported by age group. Furthermore, the quality of evidence is considered low due to small convenience samples [START_REF] Kien | Prevalence of mental disorders in young refugees and asylum seekers in European Countries: a systematic review[END_REF]. In one study, 81.4% of highly selected UASC around age 18 reached criteria for provisional depression [START_REF] Hohne | Prevalences of mental distress and its associated factors in unaccompanied refugee minors in Germany[END_REF]. On the other hand, recent aggregated data from surveys of general-population refugees and asylum-seekers in Germany yielded an overall prevalence of PHQ-9 depression of 30.7%, comparable to the prevalence of 28.7% in our study [START_REF] Hoell | Prevalence of depressive symptoms and symptoms of post-traumatic stress disorder among newly arrived refugees and asylum seekers in Germany: systematic review and meta-analysis[END_REF].

Despite some controversy, with one study reporting a reduction in psychological distress over a one-year follow-up of both accompanied and unaccompanied migrant minors [START_REF] Muller | 1-year follow-up of the mental health and stress factors in asylum-seeking children and adolescents resettled in Germany[END_REF], most studies report no significant change as measured with different versions of the Hopkins Symptom Checklist [START_REF] Bean | Course and predictors of mental health of unaccompanied refugee minors in the Netherlands: one year follow-up[END_REF][START_REF] Jakobsen | The impact of the asylum process on mental health: a longitudinal study of unaccompanied refugee minors in Norway[END_REF][START_REF] Jensen | Development of mental health problems -a follow-up study of unaccompanied refugee minors[END_REF][START_REF] Vervliet | Longitudinal follow-up of the mental health of unaccompanied refugee minors[END_REF]. This suggests that mental health problems in UASC are likely to be chronic and persistent in the short-term.
In a longitudinal study of 47 UASC examining changes in depression, anxiety, externalising symptoms and PTSS, no changes were observed in the short-term (two years post-arrival, mean age 16.5 (sd 1.6)) or, except for depression, in the long-term, five years after arrival (mean age 20.0 (sd 1.6)) [START_REF] Jensen | Long-term mental health in unaccompanied refugee minors: pre-and post-flight predictors[END_REF]. Although we have no comparable data before and after reaching adulthood, a majority of the participants in our study did not report a worsening of symptoms. On the other hand, compared to before, nearly two-thirds reported feeling less well during the first month after reaching age 18, whereas only a small minority reported feeling less well at the time of the study.

Reaching age 18 may be more of a milestone in some countries than in others, depending on the change in legal status. Although this is delayed in the case of a YA-contract, in France at age 18 legal responsibility for UASC is transferred from the local level to the state. Once this age is reached, a residence permit is more difficult to obtain whilst being a prerequisite for accessing work and staying in the country. This may explain the increased vulnerability of UASC in the first month of adulthood, as their stay in the child protection centre comes to an end, leading also to separation from their main carers and trusted persons. However, recall bias could contribute to these findings, as youth were asked to remember how they had felt as far as four years back (for the first month of adulthood) and more (as a minor). Improvement in subjective well-being since reaching adulthood could be overestimated, especially in youth in a secure situation. Indeed, recalling (i) how they had felt as minors brings them back to their migration journey and arrival in the host country, and (ii) how they had felt in the first month of their majority, to the end of child protection.

Uncertainty and mental health

Our findings suggest that uncertainty regarding right-to-stay is associated with anxiety and PTSS, but not with depressive or somatic symptoms. In migrant minors, the link between right-to-stay, social support and psychological well-being, especially in those unaccompanied, has been previously reported [START_REF] Fazel | Mental health of displaced and refugee children resettled in high-income countries: risk and protective factors[END_REF]. Jakobsen et al. examined levels of psychological distress, assessed using the HSCL-25 and the Harvard Trauma Questionnaire, in a 2.5-year follow-up study of unaccompanied migrant youth spanning adolescence and early adulthood. Although psychological distress did not change over time, it was found to be negatively affected by low support and refusal of asylum [START_REF] Jakobsen | The impact of the asylum process on mental health: a longitudinal study of unaccompanied refugee minors in Norway[END_REF]. The link between the decision on asylum status and psychological distress has since been confirmed [START_REF] Muller | 1-year follow-up of the mental health and stress factors in asylum-seeking children and adolescents resettled in Germany[END_REF]. In a cross-sectional study of unaccompanied asylum-seeking and refugee adolescents (mean age 18.6 (1.51)), fear of deportation was associated with increased mental distress [START_REF] Hohne | Prevalences of mental distress and its associated factors in unaccompanied refugee minors in Germany[END_REF].
However, this is to be distinguished from an objective measure of uncertainty regarding right-to-stay and can be confounded by anxiety symptoms. So far, few studies have investigated the association between asylum status and specific psychiatric disorders or symptom dimensions such as anxiety and depression. A recent retrospective study found no association between the length of the asylum process, as a proxy for accumulated stress, and depression; however, the sample was restricted to refugees who had been granted protection [START_REF] Oppedal | The asylum-process, bicultural identity and depression among unaccompanied young refugees[END_REF]. On the other hand, the authors reported a healing effect of time over the long-term (10 years), with residence time negatively associated with depression. Regarding trauma, a correlation has been reported between the age of UASC and PTSS [START_REF] Hodes | Risk and resilience for psychological distress amongst unaccompanied asylum seeking adolescents[END_REF], suggesting that approaching the age of 18, with potential worry about the future, may have an impact on PTSS. A further study evidenced an increase in daily stressors in UASC since their arrival in the host country, in particular experiences related to discrimination but also dissatisfaction with the educational situation, missing family, the impact of age assessment, homelessness and being forcibly and repeatedly moved, all of which contributed to the increase in PTSS [START_REF] Vervliet | Longitudinal follow-up of the mental health of unaccompanied refugee minors[END_REF]. Indeed, the transition to adulthood, with the paradox of protection as an UASC followed in many countries by a form of 'rejection' on reaching adulthood, can in itself constitute a new type of trauma [START_REF] Fazel | Mental health of displaced and refugee children resettled in high-income countries: risk and protective factors[END_REF].

Our findings suggest that being in an uncertain situation may exacerbate symptoms that might otherwise have remained stable or receded. Our hypothesis was that youth in an uncertain situation would be more likely to present mental health symptoms. However, this association can be explained bi-directionally: uncertainty may trigger mental health problems, but chronic underlying anxiety and PTSS could also prevent youth from reaching a secure situation. Uncertainty would in turn exacerbate symptoms, both as measured by validated research instruments (for anxiety and PTSS) and as self-evaluated in terms of subjective change over time (for anxiety and depression). The direction of the association is particularly difficult to disentangle for resilience: uncertainty may have hindered the development or maintenance of resilience, just as a lack of resilience may have triggered a succession of events leading to an uncertain situation. Research on resilience in UASC and adult migrants is scarce and, to our knowledge, no studies have measured this concept using standardised research instruments. Interestingly, the level of resilience in our study (mean (sd): 29 (6.2)) is very similar to that reported for the 18-24 year age group in the general-population validation study (mean (sd): 29.5 (5.4)) [START_REF] Campbell-Sills | Demographic and childhood environmental predictors of resilience in a community sample[END_REF].

Limitations

The main limitation was the representativeness of our sample. Regarding the choice of the three cities, the profiles of UASC under the responsibility of the three CPSs are unlikely to differ.
Indeed, legislation ensures that all CPSs participate in the reception of UASC, meaning that UASC seldom remain at their point of arrival in the country [START_REF]Circulaire du 31 mai 2013 relative aux modalités de prise en charge des jeunes isolés étrangers :dispositif national de mise à l'abri[END_REF]. On the other hand, participants were selected indirectly through their former carers, which may have introduced bias into the estimation of mental health problems. Those in a secure situation may have been easier to reach, leading to an underestimation of prevalence rates; conversely, the carers felt that those who kept in touch were often in acute need of support and guidance, with a strong emotional attachment. The main variable, uncertainty, lacks precision with respect to the duration of residence status and the length of the waiting period for pending cases. Another important limitation of our study was its cross-sectional design, which leaves ambiguity as to the nature (temporary versus enduring) of mental health symptoms and the direction of the associations between uncertainty and mental health. Recall bias is a further limitation, as participants were asked to rate the change in their symptoms since reaching adulthood, as well as to recall how they felt in the first month after leaving child protection, compared to as far as four years earlier.

Strengths

A strength of this study is the use of standardised and validated research instruments. The choice of the widely used PHQ allowed us to compare our frequency rates with those established for normative populations. Given the limited knowledge of written French of some participants, a research assistant administered the self-report questionnaires during a 1.5-hour interview; this contributed to the good quality of our data, with very few missing values. A further strength is the sample size, which can be considered relatively large given the difficulty of reaching young adult UASC no longer under child protection [START_REF]United Nation's Convention on the Rights of the Child[END_REF]. Our study is also original in that it is one of the few emanating from France and includes a majority of UASC from West Africa.

To conclude, this study is one of the first to report on the mental health of unaccompanied migrant youth in France, focusing on the critical two-year period after reaching adulthood and on the association between uncertainty regarding right-to-stay and specific mental health symptoms. Our findings confirm the high levels of mental health symptoms in this population and suggest that uncertainty is associated with higher anxiety and PTSS, lower resilience and a worsening of subjective anxiety and depressive symptoms. UASC need to be better prepared and accompanied during this transition period, with regular assessments of mental health symptoms in order to apply timely psychosocial interventions [START_REF] Demazure | Dealing with difference: a scoping review of psychotherapeutic interventions with unaccompanied refugee minors[END_REF]. This represents a particular challenge given that many UASC reach their host country close to age 18.

Tables and figures

Figure 1. Subjective change in symptom severity since reaching adulthood (18+).
Table 1. Socio-demographic characteristics of the sample, overall and according to uncertain situation.

| | Total (N=110), % (n) | Uncertain situation: No (N=64), % (n) | Yes (N=46), % (n) | P |

Notes: continuous variables given as median [min-max]; housing categories include boarding school, student housing…; Mann-Whitney test for continuous variables and chi² or Fisher test for categorical variables.

Table 2. Uncertain situation according to mental health disorders, trauma and resilience.

| | N | Total | No | Yes | OR (95% CI) (2) | P (3) |
| Somatic symptom score, median [IQR] | 109 | 5.0 [6.0] | 4.0 [6.0] | 5.5 [6.0] | 1.75 (0.96-3.19) | 0.07 |
| Anxiety score, median [IQR] | 108 | 4.0 [5.0] | 3.0 [4.0] | 5.0 [7.0] | 1.77 (1.05-2.98) | 0.03 |
| Depression score, median [IQR] | 108 | 6.0 [7.5] | 5.5 [8.0] | 6.0 [8.0] | 1.28 (0.73-2.23) | 0.40 |
| Somatic score >=10 (%) | 109 | 19.3 | 17.5 | 21.7 | 2.37 (0.79-7.07) | 0.12 |
| Anxiety score >=10 (%) | 108 | 17.6 | 14.5 | 21.7 | 1.62 (0.58-4.58) | 0.36 |
| Depression score >=10 (%) | 108 | 28.7 | 27.4 | 30.4 | 1.25 (0.52-3.00) | 0.60 |
| Major depression (%) | 108 | 14.8 | 14.5 | 15.2 | 1.15 (0.38-3.51) | 0.81 |
| Effect on daily functioning (%) | 100 | | | | | 0.04 |
|   No effect | | 51.0 | 59.6 | 39.5 | 1 | |
|   Made things difficult | | 33.0 | 31.6 | 34.9 | 2.08 (0.80-5.42) | |
|   Made things very difficult | | 16.0 | 8.8 | 25.6 | 5.58 (1.48-20.9) | |
| N° of stressful life events (1), median [IQR] | 110 | 5.0 [4.0] | 5.0 [4.0] | 5.0 [3.0] | 1.04 (0.90-1.20) | 0.62 |
| PTSD (DSM-5) (%) | 110 | 26.4 | 23.4 | 30.4 | 1.85 (0.74-4.60) | 0.19 |
| PTSS | 98 | | | | | |
|   Total score | | 21.0 [18.0] | 19.0 [18.0] | 23.0 [15.0] | 2.05 (1.06-4.00) | 0.03 |
|   Re-experiencing score | | 6.0 [7.0] | 6.0 [6.0] | 7.0 [5.0] | 1.84 (0.89-3.79) | 0.09 |
|   Avoidance score | | | | | | |

(1) At least one stressful life-event considered to be traumatic (LEC-5 checklist). (2) Odds ratios per interquartile range increase, except for number of stressful life events. (3) Adjusted for study centre and age.

Table 3. Uncertain situation according to mental health disorders, trauma and resilience, adjusted for covariates entered separately into the models.

| Adjustment variable | Odds ratio (2) per IQR increase in anxiety score, OR (95% CI), p | PTSS score, OR (95% CI), p | Resilience score, OR (95% CI), p |
| Country of origin: Africa | 1.76 [1.04-2.99], 0.04 | 2.00 [1.01-3.93], 0.05 | 0.50 [0.27-0.91], 0.02 |
| Reason for leaving: family conflict/abuse | 1.81 [1.06-3.09], 0.03 | 1.97 [1.00-3.87], 0.05 | 0.51 [0.28-0.93], 0.03 |
| Contact with family | 1.86 [1.06-3.27], 0.03 | 1.71 [0.84-3.48], 0.14 | 0.45 [0.23-0.86], 0.02 |
| Consultation with psychologist (1) | 1.72 [1.01-2.92], 0.05 | 1.88 [0.95-3.70], 0.07 | 0.52 [0.28-0.95], 0.03 |
| Having a partner | 1.69 [1.00-2.87], 0.05 | 2.23 [1.11-4.48], 0.02 | 0.51 [0.28-0.95], 0.03 |

(1) During child protection. (2) Odds ratios per interquartile range increase, except for number of stressful life events, further adjusted for study centre and age.

Table 4. Uncertain situation according to subjective change in symptom severity since reaching adulthood.

| | Uncertain situation: No (N=64) | Yes (N=46) |

Note: adjusted for study centre and age.

Acknowledgments

We are very grateful to the staff of the child protection services and child reception centres who helped us recruit the participants. We thank the participants for the time they gave to the study.
Funding

None.

Author contributions

SG designed the study. JN and CG provided methodological expertise. CG coordinated data collection in two of the study centres and managed the data. JN analysed the data and drafted the manuscript. SP and IC provided scientific expertise on trauma, stress and resilience. All authors interpreted the data, participated in the critical revision of the manuscript and approved the final version.

Conflict of interest

The authors declare that they have no conflict of interest.
https://theses.hal.science/tel-04117588/file/2022MULH5986_these_SAAD.pdf
List of Abbreviations

ABP: Average Blood Pressure
BT: Body Temperature
CH: Cluster Head
CRDM: Clinical Response Decision-Making
DTW: Dynamic Time Warping
ES: Exponential Smoothing
EWS: Early Warning Score
GA: Genetic Algorithm
HDFS: Hadoop Distributed File System
HR: Heart Rate
IoHT: Internet of Healthcare Things
IoT: Internet of Things
LSA: Least Squares Approximation
MA: Moving Average
MIMIC: Multiple Intelligent Monitoring in Intensive Care
MLED: Modified Local Emergency Detection
P2D: Patient-to-Doctor
PDCB: Patients Distribution-based Criticality Balanced
PED: Patient Emergency Detection
PMA: Patient Monitoring Algorithm
PNS: Patient-Nurse Scheduling
PRA: Patient Records Archive
PSG: Patients Sorting-based Grouping
PSO: Particle Swarm Optimization
PULSE: Pulse
SFDT: Sensing Frequency Decision Table
SPO2: Oxygen Saturation
WBSN: Wireless Body Sensor Network

Keywords: Wireless Body Sensor Network, Data Reduction, Patient Classification, Patient-Nurse Scheduling, Emergency Detection, Energy Saving

Abstract

Today, diseases and illnesses are becoming the most dangerous enemy of humans. The number of patients is increasing day after day, accompanied by the emergence of new types of viruses and diseases. Recently, the wireless body sensor network (WBSN) has been considered an efficient technology for real-time health-monitoring applications. It provides a low-cost solution for hospitals, relieves staff, and allows nurses and doctors to track patients remotely. However, the huge amount of data collected by the sensors produces two major challenges for WBSN: the quick depletion of the available sensor energy and the complexity of decision making for the doctor. These challenges depend heavily on the enormous amount of redundant data collected and transmitted in such networks. Therefore, data reduction and prediction techniques play an important role in preserving sensor energy and prolonging the network lifetime. In addition, most hospitals suffer from a shortage of qualified staff needed to continuously monitor patients and act when an urgent situation is detected. Thus, designing a nurse-patient scheduling scheme is essential to organise and balance the workload of the medical staff.

In this thesis, we consider a WBSN architecture consisting of a set of sensors, where each sensor monitors a specific vital sign during a period of time and then sends the collected data to a coordinator which, in turn, forwards them to the sink. We then propose data reduction, classification, data management and nurse scheduling techniques that aim to overcome the mentioned challenges. Fundamentally, the proposed techniques operate at the sensor and sink levels. At the sensor level, we propose emergency detection and adaptive sensing frequency techniques that aim, respectively, to reduce the collected and transmitted redundant data according to the patient's situation. At the sink level, we propose data analysis and management techniques. First, a framework based on distributed systems for ingestion, processing, storage, prediction and visualisation, using several Hadoop tools, in which a prediction technique based on the Prophet method forecasts the patient's future situation in order to provide the necessary treatment. Second, a patient classification model that classifies patients according to their vital signs compared with the patient archive, using the Dynamic Time Warping algorithm. Finally, an efficient nurse-patient scheduling technique consisting of two steps.
In the first step, we propose three distribution mechanisms that allocate to each nurse the group of patients she should follow: two are clustering-based and the third is based on a hybrid of a genetic algorithm and particle swarm optimization. In the second step, a scheduling algorithm based on patient priority determines the optimal order in which patients are examined. We applied our simulations to real health sensor data; the obtained results show the effectiveness of the proposed techniques in terms of extending the network lifetime, processing and storage speed, and regeneration of missing data, and confirm that the proposed scheduling approach provides the best possible organisation of the nurses' routing to patient appointments.

Dedication

I dedicate this work to the memory of my father, Hassan SAAD. I miss him every day. He always encouraged me to learn and to develop my abilities, and he is credited with reaching the completion of this thesis.

Moreover, I would like to thank the president and the members of the jury committee, Prof. Abdallah MAKHOUL, Prof. Ye-Qiong SONG, Prof. Laetitia JOURDAN and Prof. Jaafar GABER, for their time, interest, and helpful comments. It is an honor to have my work examined and assessed by professional experts such as yourselves. I would like to express my warm appreciation to my family, who supported me along the way: my mother Nidal, who gave me patience, effort, and psychological and moral support, and who encouraged me, and still does, on the path of science and success; my brothers Ali and Mohamad; my sisters Mariam, Soha and Rawan, and in particular my sister Soha, who supported me in my work; and all my family, may God reward you with all the best.

Introduction

General Introduction

At home, at work, and in schools, our bodies are the target of hundreds of diseases and viruses. On the one hand, our daily food, along with human-human or human-animal contact, is among the main causes of disease; on the other hand, climate pollution and wars are indirect causes of the propagation of severe viruses. This leads to a significant increase in the number of patients as well as in healthcare costs for governments and societies. Therefore, in order to deal with the increasing number of patients, integrating new technologies into healthcare monitoring and assessment, so as to reduce the cost of medical staff and ensure close patient-doctor interaction, has become essential for hospitals nowadays. Among other technologies, researchers have focused on the wireless body sensor network (WBSN) [START_REF] Ghatole | Survey on wireless body area network for healthcare applications[END_REF] as an efficient and low-cost monitoring system for various healthcare applications, either in-hospital or at home. Indeed, a WBSN consists of a group of sensors located in and on the patient's body, where each sensor collects data for one vital sign (heart rate, systolic blood pressure, body temperature, oxygen ratio, respiration rate, etc.). The collected data are then sent periodically to a coordinator located on or beside the patient's body, and this coordinator sends them to the sink. Finally, the doctors are responsible for checking and analysing the data in order to make the ultimate decision.

The huge amount of data collected in a WBSN, along with the growing number of patients, poses several challenges for both hospitals and doctors. First, the sensors are equipped with autonomous power batteries but with limited capacities.
Optimizing their energy consumption is a fundamental operation for increasing the network lifetime and ensuring long-term patient monitoring. Several works have proposed solutions for managing energy consumption, based on aggregating the data sent or on reducing the sending frequency by transmitting only relevant data following a change of state. Second, studying the evolution of the patient's behaviour and predicting his future situation, based on his current situation, is one of the important objectives of healthcare: the patient must be taken care of so that he receives adequate treatment before entering a critical situation. Finally, a scheduling technique that allows each nurse to determine her group of patients and the best conditions for their care is one of the basic operations needed to balance the workload of the medical staff, especially during epidemic periods.

Main Contributions of this Thesis

The main contributions of this thesis concentrate on designing energy-efficient data collection and transmission techniques at the sensor nodes, on designing a nurse-patient scheduling algorithm and data analysis models at the sink node, and on building a patient classification model and a Hadoop-based framework spanning the sensor and sink nodes.

At the Sensor Level

In sensing-based applications, the basic unit of the WBSN is the sensor node, which continuously monitors the physiological status of patients by collecting, exchanging and then sending analysed patient data remotely to a processing centre in order to make suitable decisions. However, this collection model suffers from the big amount of redundant data collected by the sensors. This big data produces two main challenges in WBSN. First, it quickly depletes the sensor's small battery, which is not always replaceable or rechargeable. Second, it makes the decision-making process very complex for experts because of the high level of redundancy among the data. In this thesis, we propose several techniques based on the data transmission process and on sensing frequency adaptation, in order to reduce the redundancy of the collected and transmitted data and thereby save sensor power.

Data transmission techniques

Indeed, energy consumption is highly dependent on the data transmission operation. Therefore, in this thesis we propose several techniques that reduce the amount of transmitted data in order to minimise energy consumption. First, we propose an emergency detection algorithm that directly alerts the medical staff to any abnormal situation of the patient and reduces the amount of periodic data transmitted from each sensor to the coordinator, thereby extending the network lifetime while enabling the needed treatment. Second, we design the first phase of the patient classification model, which calculates and sends the criticality level of the monitored vital sign instead of sending all the collected readings to the coordinator. Third, we propose a multi-hop routing protocol that reduces the redundant collected data depending on the patient's condition, and improves data transmission by finding the optimal path between sensors according to several parameters: remaining energy, available bandwidth, transmission efficiency, and number of hops to the sink.

Data collection techniques

Obviously, the redundancy level among the collected data depends on two factors: the stability of the patient's condition and the short time between collected measurements. In this thesis, we propose an adaptive sensing frequency algorithm that reduces the amount of collected data according to the criticality level of the patient.
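To make these two sensor-side ideas concrete, the following minimal Python sketch combines an emergency alert, a send-on-change filter and a criticality-driven sampling rate. It is an illustration only, not the thesis's exact algorithms: the heart-rate thresholds, the change tolerance epsilon and the rate adaptation rule are hypothetical choices.

```python
# Illustrative sensor-side loop: emergency detection plus adaptive
# sensing frequency. Thresholds, scoring and rates are hypothetical.

def criticality(reading, low=60, high=100):
    """Return 0 for a normal reading, growing with the deviation."""
    if low <= reading <= high:
        return 0.0
    bound = low if reading < low else high
    return abs(reading - bound) / bound

def next_rate(level, base=1.0, minimum=0.1):
    """Adapt the sampling rate (Hz): a critical patient keeps the full
    rate, a stable patient is sampled less often to save energy."""
    return base if level > 0 else max(minimum, base * 0.25)

def run_period(readings, epsilon=2.0):
    """Process one period: alert immediately on abnormal readings and
    keep only measurements that differ from the last sent one."""
    to_send, last = [], None
    for r in readings:
        if criticality(r) > 0:
            print(f"ALERT: abnormal reading {r}")   # immediate emergency path
        if last is None or abs(r - last) > epsilon:  # send-on-change filter
            to_send.append(r)
            last = r
    level = max(criticality(r) for r in readings)
    return to_send, next_rate(level)

sent, rate = run_period([72, 73, 73, 74, 120, 72])
print(sent, rate)   # only changed readings are forwarded; rate adapts
```

On the sample period above, only three of six readings would be forwarded, and the abnormal value 120 raises an immediate alert while keeping the sampling rate at its full value for the next period.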
At the Sink Level

In sensing-based applications, the sink node is responsible for storing and analysing the data collected from the sensor network, and it can impart knowledge to the medical staff. In this thesis, we propose several techniques that help the medical staff make suitable decisions. First, we design a framework based on Hadoop tools: the sink receives the data collected from the sensors and forwards them to the Hadoop cluster, which preprocesses and stores them for later analysis. Second, we propose a patient records archive technique that regenerates the readings of each patient during his stay in the hospital, in order to store and archive them for later data analysis. Third, we propose a patient situation prediction technique that predicts the variation of a patient's situation over the next periods, in order to enable a quick response and prevent the patient from entering a critical situation. Fourth, we design the second phase of the patient classification model, which determines the criticality level of the patient according to the levels received for all vital signs of the patient's body. Finally, we propose a patient-nurse scheduling technique that balances the workload of the medical personnel. This technique starts by distributing the patients into several groups, each assigned to a specific nurse. The distribution ensures similarity between groups in the number of patients and in their criticality levels, while ensuring diversity of patient levels within the same group. We propose three mechanisms: two based on a grouping operation, and a third based on combining the genetic algorithm with particle swarm optimization. After the distribution of patients, we build a scheduling algorithm that finds the best route a nurse should follow in order to serve all her assigned patients. The algorithm is based on a priority computed for each patient from criticality and age parameters.
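As an illustration of this priority-driven ordering, the short sketch below scores each patient from criticality and age and sorts a nurse's group accordingly. The weighting (0.8 for criticality, 0.2 for age) is a hypothetical choice for the example, not the thesis's exact formula.

```python
# Illustrative patient-priority scheduling: order a nurse's patients by
# a score combining criticality and age. The weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    criticality: float   # e.g. 0 (stable) .. 1 (critical)
    age: int

def priority(p, w_crit=0.8, w_age=0.2, max_age=100):
    """Higher score = examined earlier; age is normalised to [0, 1]."""
    return w_crit * p.criticality + w_age * min(p.age / max_age, 1.0)

def schedule(patients):
    """Return the visiting order for one nurse, highest priority first."""
    return sorted(patients, key=priority, reverse=True)

ward = [Patient("A", 0.2, 30), Patient("B", 0.9, 45), Patient("C", 0.4, 80)]
for p in schedule(ward):
    print(p.name, round(priority(p), 2))   # -> B 0.81, C 0.48, A 0.22
```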
Thesis Structure

The thesis is structured as follows.

Chapter 1: An Overview About Wireless Body Sensor Networks. This chapter presents a general review of wireless body sensor networks (WBSNs). It introduces the architecture of WBSN and shows the various types of biomedical sensors. It then demonstrates the integration of the Internet of Things in the healthcare field. In addition, it describes the challenges facing the implementation of WBSNs as the number of patients grows, focusing on energy consumption, emergency detection and patient-nurse scheduling. Finally, it presents a state of the art for overcoming the highlighted challenges, in which the proposed techniques are divided into several categories: patient classification, nurse scheduling, energy-efficient data transmission, real-time patient monitoring, and analysis of large data volumes for decision making.

Chapter 2: An Adaptive Intra-WBSN Communication Approach. This chapter presents a comparative study of recent advances in Internet-of-healthcare monitoring and introduces a novel approach based on merging an energy-efficient routing protocol with the data transmission process. This approach reduces data acquisition redundancy and finds the optimum path for data transmission according to the patient's situation. At the sensor level, the approach proposes a multi-hop routing protocol that selects the next hop based on several parameters (residual energy, transmission efficiency, available bandwidth and number of hops to reach the sink) in order to reduce energy consumption and ensure efficient transmission. In addition, the approach integrates a data transmission technique that sends a measurement to the next hop only if the patient's situation has changed, in order to reduce redundant data and therefore extend the network lifetime.

Chapter 3: P2D: An Efficient Patient-to-Doctor Framework for Real-Time Health Monitoring and Decision Making. This chapter introduces an efficient framework called P2D for health monitoring and decision making. P2D works on two levels, sensors and sink, and covers the entire life cycle of the data, including collection, ingestion, preprocessing, storage and visualisation. The proposed framework relies on Hadoop ecosystem tools. At the sensor level, P2D proposes an emergency detection algorithm that directly detects any abnormal situation of the patient, and an adaptive sensing frequency algorithm, based on the patient's criticality level, that saves sensor energy. At the sink level, P2D proposes a patient records archive technique that stores an archive for each patient using the method of least squares approximation, introduces two fault detection algorithms (moving average and exponential smoothing) to preprocess data before storage, and proposes a prediction technique based on the Prophet method that predicts the patient's situation over the next periods of time so that the doctors can make a suitable decision.

Chapter 4: A Sensing-Based Patient Classification Framework for Efficient Patient-Nurse Scheduling. This chapter introduces a novel and efficient framework for nurse-patient task organisation that treats the scheduling problem. Typically, our framework consists of two phases: patient classification and nurse scheduling. The first phase introduces an efficient classification model that groups patients according to the severity level of their vital signs. The second phase starts by proposing three distribution mechanisms, based on the classification results, in order to balance the workload of the stressed medical personnel; a scheduling algorithm based on priority metrics is then proposed to find the best assignment of nurses to patients.
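Since the classification model above rests on Dynamic Time Warping, a minimal, self-contained DTW sketch is given here for orientation. It is illustrative only: the thesis's actual feature extraction and archive format are not reproduced, and the example series and labels are hypothetical.

```python
# Minimal Dynamic Time Warping (DTW) distance between two vital-sign
# series, and nearest-archive classification. Illustrative only.
import math

def dtw(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference cost."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(series, archive):
    """Label a new series with the class of its closest archived record."""
    return min(archive, key=lambda rec: dtw(series, rec[1]))[0]

archive = [("stable", [70, 71, 72, 71]), ("critical", [70, 95, 120, 118])]
print(classify([69, 72, 73, 72], archive))   # -> "stable"
```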
1 An Overview About Wireless Body Sensor Networks

Introduction

The essential role of primary health care is to ensure continuous and comprehensive care to patients; the healthcare system therefore absorbs a significant and growing share of resources. The wireless body sensor network (WBSN) [START_REF] Ghatole | Survey on wireless body area network for healthcare applications[END_REF] is a well-known solution used to help nurses monitor patients at home or in hospital. A WBSN is a network of sensors implanted in and/or on the patient's body that constantly monitors their health status in real time. Each sensor periodically collects data and sends them to the medical centre so that a suitable decision can be made and, therefore, any unwanted situation can be prevented through the necessary medical intervention. Indeed, WBSNs suffer from the big data collected by the medical sensors, and collecting and transmitting data are the major causes of the exhaustion of their limited energy. Research therefore focuses on handling, storing and analysing the huge amount of collected data. In addition, hospitals rely mainly on nurses, who have several duties such as serving patients, contacting doctors, giving medicine, detecting emergencies and monitoring vital signs [START_REF] Yoshioka-Maeda | Preparing for complex emergencies while combating covid-19: The role of public health nurses in japan[END_REF]. WBSN is one of the main applications that helps the medical staff in electronic health surveillance with early detection of critical physiological symptoms. Moreover, the rapid growth in the number of patients puts pressure on hospitals and increases the nurses' duties; hence, patient-nurse scheduling algorithms have attracted considerable attention from researchers seeking to reduce the workload of nurses.

Various Types of Biomedical Sensors

In WBSN, we distinguish three main categories of sensors producing different data types: numerical, image, and video. Numerical biosensors convert various forms of stimuli into electrical signals and then into numerical records; they capture vital signs such as pressure, temperature, oxygen saturation, respiration rate, etc. Image biosensors take images of various patient organs, as in X-ray and dental imaging, radiography, mammography, cardiology, etc. The last category, video biosensors, records data during various patient surgery operations, such as cardiology, invasive surgery, ocular surgery and observation, and artificial retinas. Biomedical sensors are worn on or implanted inside the body [START_REF] Kim | Wearable biosensors for healthcare monitoring[END_REF]. Figure 1.1 shows the most used types of biosensors in WBSN, which can be classified as follows.

Epidermal Biosensors

These sensors are usually attached to the patient's body and monitor glucose, lactate, sodium, potassium and temperature (Figure 1.1(a)), or interleukin-6 and cortisol (Figure 1.1(b)). Mostly, they are self-powered electronic skins that use piezoelectric-enzymatic reaction coupling.

Saliva-based Biosensors

These biosensors are usually mounted onto a tooth with integrated wireless electronics. Saliva biomarkers offer meaningful diagnostic information, such as sodium intake during hypertension management (Figure 1.1(c)), or the detection of sugars, alcohol, salinity, pH and temperature in foods and fluids during ingestion (Figure 1.1(d)).

Tear-based Biosensors

These sensors rely on contact lenses, use transparent and soft materials, and incorporate wireless electronics. They detect glucose concentrations via a resistance-based enzymatic mechanism and allow a quick response time for continuous measurements (Figure 1.1(e,f,g)).
WBSN in Internet of Things (IoT)

Recently, Internet of Things (IoT) systems have been introduced in the field of healthcare, playing a major role in patient monitoring and in ensuring good communication between healthcare providers [START_REF] Kodali | An implementation of iot for healthcare[END_REF]. IoT describes the network of physical objects that are embedded with sensors, software and other technologies for the purpose of connecting and exchanging data with other devices and systems over the Internet. In fact, wireless body sensor network (WBSN) technologies are among the essential enablers of the IoT healthcare paradigm: in a WBSN, every patient is monitored through a group of small, low-power and lightweight sensor nodes [START_REF] Alkhayyat | Wbsn in iot health-based application: toward delay and energy consumption minimization[END_REF]. IoT-based healthcare, or IoHT [START_REF] Asif-Ur-Rahman | Toward a heterogeneous mist, fog, and cloud-based framework for the internet of healthcare things[END_REF], allows monitoring the physiological status of patients by collecting and exchanging their data. These systems help reduce the average workload of the medical staff in hospitals, and remote monitoring from home reduces hospital visits, especially for elderly people. Moreover, adding an analysis component to IoT healthcare systems contributes significantly to decision making by the doctors. Furthermore, IoHT systems improve medical work by reducing expenses in human, medical and financial resources.

WBSN 3-Tier Architecture

The WBSN system is the key component of the patient monitoring system. The topology of a WBSN is composed of three tiers, as shown in Figure 1.2. Typically, the first tier consists of several sensors distributed inside and on the patient's body in order to monitor and measure the different vital signs. These measurements are sent to a coordinator located at the centre of the patient's body. The coordinator sends the data to the sink in the second tier; this sink could be a smartphone, a personal computer or another smart electronic device, and such devices can play the role of a gateway. Finally, specific information is transmitted from the gateway to tier 3 via the Internet. This third tier offers several medical services, such as medical information storage, retrieval and analysis; from tier 3, medical information can be directed to a physician or an emergency team for the necessary treatment. Sensor nodes are powered by micro-batteries, and each node is responsible for collecting one or more types of physiological data, including heart rate (HR), respiratory rate (RESP), oxygen saturation (SPO2), pulse (PULSE) and blood pressure (ABP). In almost all WBSN applications, continuous monitoring of the patient's situation is required; we are therefore interested in the periodic data collection model. In our system, each sensor collects one data measurement every second and, at the end of the period, sends the collected data to the coordinator which, in turn, forwards them to the sink for later data analysis and decision.
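The following minimal sketch illustrates this periodic model. It is illustrative only: the period length is an assumption, and the transport between sensor, coordinator and sink is simulated with plain function calls rather than a radio stack.

```python
# Illustrative periodic collection: a sensor samples once per second and
# ships the whole batch to the coordinator at the end of each period.
import random, time

PERIOD = 10          # assumed period length, in samples (1 sample/s)

def read_vital():
    """Stand-in for a real sensor driver returning one heart-rate value."""
    return random.gauss(75, 3)

def coordinator_forward(batch):
    """Stand-in for the coordinator relaying a batch to the sink."""
    print(f"sink <- {len(batch)} readings (mean {sum(batch)/len(batch):.1f})")

def sensor_loop(periods=2, dt=0.0):
    for _ in range(periods):
        batch = []
        for _ in range(PERIOD):
            batch.append(read_vital())
            time.sleep(dt)           # 1 s on a real node; 0 here for the demo
        coordinator_forward(batch)   # one transmission per period

sensor_loop()
```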
Intra-WBSN Communication Types

In this section, we introduce the three types of sensor-coordinator and sensor-sensor communication: broadcast, single-hop and multi-hop. In broadcast communication, every sensor communicates with all the others; this cluttered communication is considered highly energy consuming. In the single-hop type (Figure 1.3(a)), the sensor sends the data directly to the coordinator in one hop; the signal strength must therefore be significantly increased, which increases the energy consumption. In multi-hop communication (Figure 1.3(b)), the data follow an adaptive route between the sensors, which minimizes the required signal strength and therefore reduces the energy consumption.

WBSN Challenges

WBSN is one of the main technologies used in healthcare monitoring applications. The life cycle in a WBSN begins with the acquisition of a patient's data and ends with its analysis by the medical profession. However, this process faces several types of challenges. Some essential challenges are related to the sensors' limited energy, others to data management (big data collection, transmission, storage and processing), and others to patient-nurse scheduling. These challenges concern both hospitals and doctors and can be summarized as follows:

Reducing Energy Consumption

As sensors run on autonomous batteries, minimizing energy consumption is a fundamental operation in WBSN in order to ensure long-term patient monitoring and reduce energy cost. Furthermore, data transmission is a very costly operation in terms of energy consumption. Hence, adaptive sensing frequency techniques have been proposed as an efficient approach for reducing data collection and transmission at the sensors.

Rapid Emergency Detection

The situation of a patient can change from a low to a high criticality level. Thus, when an emergency is detected, an alert must be sent directly to the medical staff and the doctor so that they can evaluate the patient's situation and administer the needed treatment. Rapid emergency detection is therefore a crucial task for the medical team, since a late detection can negatively affect the life of a patient.

Predicting Progress of Patient Situation

Obviously, monitoring the patient and knowing their current situation is not always sufficient for the medical staff. For instance, a patient may enter the hospital in a critical state, and a rapid variation of their situation can lead to an unpredictable death. Hence, studying the progress of the patient's behavior and predicting their future situation from the current one is one of the important goals in healthcare. By doing so, the patient can be given the adequate treatment before entering a critical situation.

Big Data Processing

The acquisition of various data for a large number of patients leads to the problem of mega-data (big data) at the final sink [START_REF] Ullah | Future of big data and deep learning for wireless body area networks[END_REF]. On the one hand, this huge amount of data complicates the data analysis mission of the medical team, especially in emergencies. On the other hand, the richness of these data is a source of opportunity for big-data analysts to refine them, to propose models matching the different states of patients, to validate hypotheses and, of course, to make decisions. Similarly, reducing the amount of redundant information transmitted by the nodes reduces the energy consumption of the system. Among the techniques proposed to reduce the transmission of redundant information are data aggregation, compression and prediction; a simple instance of such a reduction filter is sketched below.
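As an illustration of the data-reduction idea, the following minimal sketch drops a new reading when it differs from the last transmitted value by less than a threshold; the threshold value and the sample stream are illustrative assumptions, not a technique taken from a specific cited work.

```python
THRESHOLD = 2.0  # maximum tolerated deviation before a new value must be sent

def reduce_stream(readings, threshold=THRESHOLD):
    """Return only the readings worth transmitting (send-on-change filter)."""
    sent = []
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) >= threshold:
            sent.append(value)  # transmit: significant change
            last_sent = value   # the sink reuses last_sent in between
    return sent

stream = [70, 70.5, 71, 74, 74.2, 80, 79.8, 79.9]
print(reduce_stream(stream))  # [70, 74, 80] -> 3 transmissions instead of 8
```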
One of the main contributions of this thesis consists in developing data reduction/compression/prediction techniques and methods for adapting the sensing frequency according to the application domain, in order to save energy and reduce the data size.

Nurse Scheduling

Indeed, one of the fundamental challenges in the hospital is to balance the workload of the medical personnel, especially in epidemic times. A scheduling protocol aims to assign to each nurse a group of patients and then to determine the optimal path to serve this group. Such a scheduling algorithm normally depends on several parameters such as the severity level and age of the patients, the capacities of the nurses, the occupancy rate of the resuscitation rooms and medical equipment, etc.

Patient Monitoring and Risk Assessment: A Background

In the last decade, the healthcare sector has received great attention from both industry and the research community. On the one hand, industry tries to integrate new technologies into the medical process, while governments aim to ensure secure public health for the people. On the other hand, researchers aim to propose efficient patient monitoring and assessment techniques, and try to overcome the challenges raised by WBSN. The authors of [START_REF] Ghatole | Survey on wireless body area network for healthcare applications[END_REF] give an overview of the sensing devices fabricated by industry and dedicated to healthcare applications. In [START_REF] Majumder | Wearable sensors for remote health monitoring[END_REF][START_REF] Majumder | Smart homes for elderly healthcare-recent advances and research challenges[END_REF], the authors present and compare the remote health monitoring systems existing in the literature for elderly healthcare and well-being. They also survey various textile-based sensors that can potentially be used in wearable systems, as well as the communication technologies dedicated to elderly ubiquitous healthcare [START_REF] Deen | Information and communications technologies for elderly ubiquitous healthcare in a smart home[END_REF]. Meanwhile, the authors of [START_REF] Khan | The state-of-the-art wireless body area sensor networks: A survey[END_REF][START_REF] Syed | Data science algorithms and techniques for smart healthcare using iot and big data analytics[END_REF] summarize the data analysis algorithms proposed by the research community for sensor-based healthcare applications.

Energy Efficient Data Transmission Techniques

Some works in WBSN are dedicated to saving the sensors' energy and reducing data transmission in the network. The authors of [START_REF] Zhang | Energy-efficient and reliable sleep scheduling algorithms in wbsns[END_REF] propose an energy-efficient mechanism for WBSN based on a sleep scheduling strategy and a dominating set method. After constructing the dominating graph, the sink selects, based on two approximation algorithms and a polymatroid function, a subset of nodes to collect the data (the active nodes) while switching the other nodes to sleep mode, in order to reduce the number of active sensors and therefore the energy consumption. Through a comprehensive simulation analysis, the authors demonstrated that this mechanism extends the network lifetime. However, it does not take the integrity of the information into account: the sensors in sleep mode do not collect data for a period of time, and this sleep scheduling creates an information shortage for the medical staff.
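The dominating-set idea behind such sleep scheduling can be illustrated with the classic greedy approximation below, which repeatedly activates the node covering the most not-yet-dominated nodes. This is a generic sketch of the concept, not the two approximation algorithms or the polymatroid function used in the cited work.

```python
def greedy_dominating_set(adjacency):
    """Greedy approximation of a dominating set.

    adjacency: dict mapping each node to the set of its neighbors.
    Returns a set of 'active' nodes such that every node is active
    or has an active neighbor; the remaining nodes could sleep.
    """
    uncovered = set(adjacency)
    active = set()
    while uncovered:
        # Pick the node that dominates the most still-uncovered nodes.
        best = max(adjacency,
                   key=lambda n: len(({n} | adjacency[n]) & uncovered))
        active.add(best)
        uncovered -= {best} | adjacency[best]
    return active

graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
print(greedy_dominating_set(graph))  # {3, 4} for this graph
```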
In [START_REF] Rabby | A priority based energy harvesting scheme for charging embedded sensor nodes in wireless body area networks[END_REF], a priority-based energy harvesting scheme for charging embedded sensor nodes in WBSN has been proposed. The proposed scheme uses the CSMA/CA protocol to switch power from the primary unit to the secondary unit, thereby preserving the sensor voltage level and reducing transmission losses. The limitation of this scheme is that the data transmission operation consumes more time, whereas reducing the transmission delay is important in critical situations.

The authors of [START_REF] Khan | An energy efficient routing protocol for wireless body area sensor networks[END_REF] propose a transmission protocol for WBSN where the nodes are divided into two categories: critical nodes and forwarder nodes. The forwarder nodes are responsible for collecting data from the sensors, aggregating them and sending them to the sink; they are selected according to two criteria: having the maximum residual energy and the minimum distance to the sink. The critical nodes use single-hop transmission and send critical records directly to the sink. The simulations carried out show better results than other solutions. However, link quality, i.e. signal strength, is another significant parameter for selecting the forwarder node; relying only on distance and energy level is not sufficient, because a sensor node with a high energy level gives no guarantee of a better signal strength in the network.

The authors of [START_REF] Park | An energy efficient enhanced dual-fuzzy logic routing protocol for monitoring activities of the elderly using body sensor networks[END_REF] propose an energy-efficient system for wireless body sensor networks, called LEACH-dual fuzzy logic (ELEACH-DFL), which aims to extend the network lifetime. The system operates in two stages. First, the cluster head (CH) is selected using a fuzzy logic algorithm, according to parameters such as the remaining energy and the distance to the neighbors. Second, a cluster configuration technique generates the cluster of each CH. The efficiency of ELEACH-DFL has been comprehensively evaluated through several studies and simulations, which have shown that it significantly extends the lifespan of the entire network. However, the selection and generation of the clusters consume extra processing time, which increases the transmission time.

The authors of [START_REF] Dhakal | A novel solution for a wireless body sensor network: Telehealth elderly people monitoring[END_REF] propose a tele-monitoring technique for elderly people using WBSN that sends emergency physiological data through event-based data transmission, in order to reduce data redundancy and improve the end-to-end delay. The data are transmitted in different time slots using Time Division Multiple Access (TDMA). The solution was evaluated through experiments and showed promising results in terms of reducing delay and energy consumption. However, the lack of security aspects seems to be its main shortcoming, as patient privacy has received great attention from researchers in the healthcare domain.

In [START_REF] Awan | A priority-based congestion-avoidance routing protocol using iot-based heterogeneous medical sensors for energy efficiency in healthcare wireless body area networks[END_REF], a multi-hop priority-based congestion-avoidance routing protocol in WBSN is proposed.
The goal is to establish a routing protocol between the sensor nodes. In this work, the data are categorized into two classes: normal and critical. For normal data, the next hop is selected based on three parameters: the remaining power, the congestion on the forwarder node, and the signal-to-noise ratio of the path between the source and the forwarder node. The proposed scheme uses data aggregation and filtering techniques in order to reduce the network traffic and therefore save energy, and it prioritizes the critical vital data for emergency detection in order to reduce the delay. This mechanism has shown promising results in terms of energy consumption, traffic load, delay and lifetime. However, the lack of security aspects seems to be its main shortcoming.

In [START_REF] Javaid | im-simple: improved stable increased-throughput multi-hop link efficient routing protocol for wireless body area networks[END_REF], the authors propose an improved stable increased-throughput multi-hop link-efficient routing protocol for wireless body area networks. This protocol uses a cost function to select the next node as a forwarder node, which collects the data from the other nodes and transmits them to the sink, thereby reducing the sensors' energy consumption. The selection of the forwarder node depends on the remaining energy and the distance to the sink: selecting the forwarder hop with the maximum residual energy balances the energy consumption among the sensor nodes, while the least distance reduces the path loss and hence improves the packet delivery ratio of the network. The simulation results presented in the paper, obtained using an integer linear program, seem promising in terms of network lifetime and network throughput. However, this approach does not take other parameters, such as the available bandwidth of the nodes, into account.

The authors of [START_REF] Li | An energy-balanced routing protocol for a wireless sensor network[END_REF] propose an energy-balanced routing protocol (EBRP) in order to prolong the network lifetime. They use the K-means++ algorithm to divide the sensor network into clusters; a cluster head is selected with a fuzzy logic system to collect the data from the whole cluster and send them to the sink, and the fuzzy rules are derived using a genetic algorithm (GA). The article conducted a series of experiments comparing the results with existing solutions, and the efficiency of this work has been verified in terms of extending the network lifetime. However, its main limitation is that the GA consumes extra processing time when selecting the best rule list.

In order to balance the energy consumption in the wireless body area network, the authors of [START_REF] Sahndhu | Bec: A novel routing protocol for balanced energy consumption in wireless body area networks[END_REF] propose a multi-hop routing protocol in which each sensor sends its values to the node with the largest residual energy. However, this protocol suffers from a high packet loss rate and a large delay.

In [START_REF] Khan | Energy-aware peering routing protocol for indoor hospital body area network communication[END_REF], a data routing protocol called energy-efficient peering routing (ERP) is proposed for indoor WBAN applications. This protocol estimates the communication cost of the neighboring sensor nodes and stores this information in its routing table.
The choice of the downstream sensor node depends on the residual energy and the geographical position of the nodes. ERP reduces hello-packet transmission using a control mechanism, and its architecture pursues three essential objectives: first, controlling the generation of hello packets; second, building routing tables to keep the positions and locations of the neighboring sensor nodes up to date; finally, maintaining the node's own updated routing table. However, this building operation adds extra time to the data routing mechanism, even though time is one of the most important factors for real-time data transmission.

The authors of [START_REF] Alwan | Mqosr: A multiobjective qos routing protocol for wireless sensor networks[END_REF] proposed a Multi-objective QoS Routing (MQoSR) protocol for WBANs. This protocol selects the next node among the neighbors based on two parameters: the location, determined using GPS services, and the link information, identified through the residual energy of the sensor nodes and the reliability of the sink node. MQoSR uses a fault-tolerance approach to build multiple node-disjoint paths and is based on on-demand routing protocols. Furthermore, this work focuses on data transmission, end-to-end delay and network lifetime, and the routing policies depend on the QoS requirements available at the source and sink nodes. However, WBANs involve several types of QoS requirements, and achieving all of them requires more computational processing, which increases the network overhead.

Table 1.2 shows a comparative analysis of the mentioned techniques in terms of the methods used, the performance evaluation metrics, and the limitations. Note that the different limitations mentioned are collected from different state-of-the-art references.

Analyzing Large Amounts of Data for Decision Making Techniques

Other works in WBSN are dedicated to analyzing the large amount of data collected in the network and to making suitable decisions. In [START_REF] Alameen | Clustering and classification based real time analysis of health monitoring and risk assessment in wireless body sensor networks[END_REF], the authors propose a deep learning mechanism based on the fractional cat-based swarm algorithm for assessing the patient's situation and making decisions. First, the nodes are organized into clusters, and a cluster head (CH) is selected for each cluster based on the harmony search algorithm and particle swarm optimization. The CH then receives the records from the nodes and classifies them with a deep belief network (DBN), a multi-level neural network, in order to detect emergency situations and enable better treatment. The results show that the proposed mechanism outperforms state-of-the-art approaches in terms of energy usage, accuracy and throughput. However, this classification process needs extra computational processing time, whereas time is the key factor for real-time health applications.

The authors of [START_REF] Raja | Modern framework for distributed healthcare data analytics based on hadoop[END_REF] propose a modern platform for healthcare information systems consisting of three layers. The first layer is composed of various health data sources such as sensors, clinical reports, medication, etc. The second layer processes and stores the data using various Hadoop tools, including Sqoop, HDFS, HBase, MapReduce and Hive.
The last layer is responsible for applying business intelligence (BI) solutions over the stored data, using the SpagoBI open-source BI suite. The simulation results show efficient performance in terms of data storage. However, a data warehousing process based on Hadoop and Hive suffers from two challenges: first, Hadoop is unable to handle online analytical processing (OLAP) operations; second, the HiveQL language of Apache Hive cannot handle several standard SQL queries.

In [START_REF] Abiodun | Reducing power consumption in wireless body area networks: a novel data segregation and classification technique[END_REF], a routing architecture is proposed based on WBAN data-management techniques, namely data segregation and classification, which play an important role in handling the continuous transmission of the large amount of data generated in the healthcare field. The authors propose a classification technique based on a defined threshold, where the sensor readings are classified into three types: urgent (above the threshold), semi-urgent (close to the threshold) and non-urgent (below the threshold). Furthermore, the authors introduce a routing protocol for medical sensors that keeps transmitting packets during a gateway failure. This work improves data reliability and power consumption: in case of a failure of the gateway equipment, the segregation algorithm transmits only the urgent packets directly to the gateway, drops the non-urgent packets, and buffers the semi-urgent packets.

The authors of [START_REF] Qiu | Body sensor networkbased gait quality assessment for clinical decision-support via multi-sensor fusion[END_REF] propose a multi-sensor fusion and decision-making mechanism for patient monitoring through WBSN. The objective of the proposed system is to detect gait abnormality in subjects with neurological disorders based on gait features (especially spatio-temporal correlation, gait asymmetry and regularity) and a machine learning approach. Movement disorders of the patient are monitored and analyzed continuously in order to improve the diagnosis rate. The authors identify the disorders by analyzing the health data using machine learning and layered data fusion, and verify the correlation with clinical validations. However, this research binds the sensor to the ankle, so users have to wear sneakers, and it is sometimes difficult for users to accept attaching the sensor to a fixed position.

In [START_REF] Liu | A novel cloud-based framework for the elderly healthcare services using digital twin[END_REF], the authors present a new paradigm called CloudDTH, combining digital twins and healthcare, particularly dedicated to monitoring the elderly in their homes. Through precise analyses, the digital twin provides fast analysis and real-time decisions. The objective of CloudDTH is to improve medical services such as remote monitoring, diagnosis and prediction of individual health aspects, in terms of accuracy and speed, by integrating the medical physical and virtual spaces. However, this work suffers from several shortcomings. First, the effectiveness of digital twins in the data analytics process requires a connected and well-thought-out IT infrastructure. Second, digital twins require a stable, noise-free data flow to perform well. Third, the large amount of data used affects the privacy and security aspects of digital twins.
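To illustrate the threshold-based segregation idea summarized above, the sketch below classifies readings as urgent, semi-urgent or non-urgent; the concrete threshold and tolerance band are illustrative assumptions rather than values from the cited work.

```python
THRESHOLD = 100.0  # e.g. an upper limit on heart rate (bpm); illustrative only
BAND = 5.0         # tolerance band defining "close to the threshold"

def classify(reading):
    """Map one sensor reading to an urgency class."""
    if reading > THRESHOLD:
        return "urgent"       # forwarded immediately, even on gateway failure
    if reading >= THRESHOLD - BAND:
        return "semi-urgent"  # buffered when the gateway is down
    return "non-urgent"       # dropped when the gateway is down

for r in (112.0, 97.5, 82.0):
    print(r, "->", classify(r))
# 112.0 -> urgent, 97.5 -> semi-urgent, 82.0 -> non-urgent
```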
In [START_REF] Xu | Federated learning for healthcare informatics[END_REF], the authors exploit the advances of federated learning (FL) in the field of healthcare informatics. FL is a learning paradigm that seeks to address the problem of data governance and privacy by training algorithms collaboratively without exchanging the data itself. Towards this goal, federated learning is a promising direction that trains algorithms across decentralized edge devices (e.g., individual mobile phones) or servers hosting different local samples (e.g., data owned by different institutions). Data samples are neither shared nor centralized, and only the trained models are communicated, which can improve the security and privacy of patient data, for example for drug-disease outcome validation in drug repurposing, while letting data owners retain fine-grained control over the data they have gathered. The performance of the proposed model is evaluated in simulations, and the results obtained demonstrate significant improvement over other solutions.

In [START_REF] Sousa | Decision-making based on big data analytics for people management in healthcare organizations[END_REF], the authors use big data analytics to address the decision-making process for people management in healthcare organizations; the techniques are compared using metrics such as accuracy, recall and specificity.

In [START_REF] Sadineni | Developing a model to enhance the quality of health informatics using big data[END_REF], the authors investigate the performance of integrating big data analytics with machine learning techniques such as decision trees, support vector machines (SVM) and K-nearest neighbors, in order to enhance the quality of healthcare services such as heart disease examination. The simulations presented in this work compare the techniques in terms of accuracy, recall and specificity, computed from the confusion matrix. The obtained results show that SVM performs better than the decision tree and KNN techniques and gives the best output for the dataset used.

The authors of [START_REF] Rehman | Secured big data analytics for decision-oriented medical system using internet of things[END_REF] build a model based on an edge-cloud architecture for secured big data analytics, in order to ensure a timely decision-making process in the healthcare system. This model provides a private and secure healthcare monitoring system through collaboration between the mobile edge devices and the cloud level, in order to increase the reliability of the connected biosensors. In addition, the authors use greedy heuristics to calculate a cost function that decreases the data retrieval rate at the mobile edges, thereby enhancing the handling of big data analytics, and they reduce the network overhead by implementing the privacy algorithms on the sink and the mobile edges. The performance of the model is evaluated through many experimental tests. However, on the security side, the model suffers from some potential network vulnerabilities; another limitation concerns the optimization of the scalability factor for mobile users connected to several clouds.

Table 1.1 shows a comparative analysis of the mentioned techniques in terms of the methods used, the performance evaluation metrics, and the limitations. Note that the different limitations mentioned are collected from different state-of-the-art references.
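The core of the federated learning idea discussed above, communicating model updates instead of raw patient data, can be sketched in a few lines. This is a minimal federated-averaging illustration with synthetic weight vectors, not the training setup of the cited survey.

```python
def local_update(weights, local_gradient, lr=0.1):
    # Each client trains on its own data; only updated weights leave the device.
    return [w - lr * g for w, g in zip(weights, local_gradient)]

def federated_average(client_weights):
    # The server averages the clients' models without ever seeing their data.
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

global_model = [0.0, 0.0]
# Synthetic per-client gradients standing in for local training on private data.
gradients = [[1.0, -2.0], [3.0, 0.0], [2.0, -1.0]]
updates = [local_update(global_model, g) for g in gradients]
global_model = federated_average(updates)
print(global_model)  # approximately [-0.2, 0.1], up to floating-point rounding
```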
Real-Time Patient Monitoring and Assessment Techniques

Recently, the authors of [START_REF] Koussaifi | Real-time stress evaluation using wireless body sensor networks[END_REF][START_REF] Habib | Self-adaptive data collection and fusion for health monitoring based on body sensor networks[END_REF][START_REF] Habibe | Health risk assessment and decisionmaking for patient monitoring and decision-support using wireless body sensor networks[END_REF][START_REF] Cappon | An integrated mobile platform for automated data collection and real-time patient monitoring in diabetes clinical trials[END_REF][START_REF] Ajerla | A real-time patient monitoring framework for fall detection[END_REF][START_REF] Harb | A sensor-based data analytics for patient monitoring in connected healthcare applications[END_REF][START_REF] Schick | Wireless body sensor network for monitoring and evaluating physical activity[END_REF][START_REF] Harbouche | Model driven flexible design of a wireless body sensor network for health monitoring[END_REF][START_REF] Aiello | An agent-based signal processing in-node environment for real-time human activity monitoring based on wireless body sensor networks[END_REF][START_REF] Tambe | Cluster-based real-time analysis of mobile healthcare application for prediction of physiological data[END_REF] opened a new trend in sensing-based healthcare by proposing several frameworks for real-time patient monitoring and assessment. In [START_REF] Koussaifi | Real-time stress evaluation using wireless body sensor networks[END_REF], a framework for stress detection and evaluation is proposed. The framework first detects stress signals from the skin conductance parameter; the stress level is then evaluated through a fuzzy inference system based on the patient's vital signs, particularly the heart rate, the respiration rate and the average blood pressure. The framework was evaluated through experiments and showed promising results in terms of predicting the stress level. However, it lacks security and privacy mechanisms.

In an earlier work [START_REF] Habib | Self-adaptive data collection and fusion for health monitoring based on body sensor networks[END_REF], the authors propose a data management framework for data collection and decision making in sensing-based healthcare. The framework relies on three algorithms: first, an emergency detection algorithm sends critical records directly to the coordinator; second, an adaptive sampling rate algorithm based on ANOVA and the Fisher test allows each sensor to adapt its sampling to the variation of the patient's situation; third, a data fusion and decision-making model based on a decision matrix and fuzzy set theory is proposed for the coordinator. Despite its great advantages for patient monitoring and assessment, the proposed framework suffers from several disadvantages: 1) for a patient with low criticality, none of the data is archived in the hospital, so doctors cannot revise the patient's archive to check their progress; 2) doctors cannot predict the progress of the patient's situation over the next periods of time.

In [START_REF] Cappon | An integrated mobile platform for automated data collection and real-time patient monitoring in diabetes clinical trials[END_REF], an example of an integrated mobile platform is proposed. The system was originally created for patients with diabetes and was modified to meet the specific needs of clinical trials on PBH patients.
The platform consists of three components: first, a mobile app for patients that integrates manual data input (drugs, symptoms, meals, etc.) with data from the sensors and from the activity tracker; second, a web interface through which researchers and healthcare professionals follow the patient's health situation in real time on a browser dashboard; third, a cloud database that ensures a secure environment for data collection and real-time monitoring. However, the mobile platform does not support all mobile operating systems, such as Android.

The work of [START_REF] Ajerla | A real-time patient monitoring framework for fall detection[END_REF] describes the use of machine learning to improve fall detection devices. The authors propose an effective fall detection system that uses edge computing to send data from cheap wearable devices, Apache Flink to analyze the health data stream through a data analytics pipeline, and a long short-term memory technique to detect and classify the fall status. Through simulations, the efficiency of this approach is evaluated in terms of detecting falls in real time. However, the work supports neither all types of sensors nor parallel data-processing pipelines.

In [START_REF] Harb | A sensor-based data analytics for patient monitoring in connected healthcare applications[END_REF], the authors propose an efficient sensor-based data analytics approach for real-time patient monitoring and assessment that achieves three goals. First, the framework enables emergency detection by sending abnormal health conditions to the medical staff, ensuring continuous monitoring of the patient's situation. Second, the authors design an algorithm that adapts the sensing frequency to the criticality of the patient's situation, thereby reducing the amount of transmitted data. Finally, a prediction technique based on long short-term memory (LSTM), an advanced recurrent neural network, helps the medical staff intervene at the right time in order to avoid any deterioration of the patient's situation. The authors applied this approach to real sensor data, and the obtained results show its relevance in terms of reducing data transmission, predicting the patient's situation and extending the network lifetime. However, the sensing frequency adaptation algorithm does not take the correlation between neighboring nodes into account, which increases the rate of repeated collisions.

The authors of [START_REF] Habibe | Health risk assessment and decisionmaking for patient monitoring and decision-support using wireless body sensor networks[END_REF] proposed a generalized multi-sensor fusion technique, called health risk assessment and decision-making (Health-RAD), to assess the situation of patients monitored with WBSN. Health-RAD scores the vital signs using a fuzzy inference system and early warning score (EWS) systems, then derives from these scores the patient's severity level, represented by a risk variable between 0 and 1: the higher the risk variable, the more critical the health situation. Through several simulations, the obtained results prove its efficiency in reducing the energy consumed for data processing and extending the network lifetime. However, the technique still needs a careful effectiveness study in real-case scenarios.
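The score-then-normalize pattern used by Health-RAD can be illustrated as follows; the scoring bands and the vital signs chosen are simplified, early-warning-style assumptions, not the fuzzy inference system of the cited paper.

```python
def score_heart_rate(hr):
    # Simplified early-warning-style banding (illustrative thresholds).
    if hr <= 40 or hr >= 131:
        return 3
    if 111 <= hr <= 130:
        return 2
    if 41 <= hr <= 50 or 91 <= hr <= 110:
        return 1
    return 0  # 51-90 bpm considered normal

def score_spo2(spo2):
    if spo2 <= 91:
        return 3
    if spo2 <= 93:
        return 2
    if spo2 <= 95:
        return 1
    return 0

def risk(hr, spo2):
    """Aggregate the per-sign scores into a risk value in [0, 1]."""
    total = score_heart_rate(hr) + score_spo2(spo2)
    return total / 6.0  # 6 = maximum reachable score for these two signs

print(risk(hr=120, spo2=92))  # ~0.67 -> critical; would trigger an alert
```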
The authors of [START_REF] Schick | Wireless body sensor network for monitoring and evaluating physical activity[END_REF] created a ubiquitous computing environment for monitoring the patient's health conditions in real time. The proposed system consists of three components that accumulate physiological data and provide indicators supporting public policies promoting physical activity. However, this work does not implement the WBSN module to be deployed within the Ubiquitous Computing Environment for Monitoring and Evaluating Physical Activity (UCEMEPA) in order to evaluate the physical activity sessions.

In [START_REF] Harbouche | Model driven flexible design of a wireless body sensor network for health monitoring[END_REF], the authors designed a preventive healthcare system that allows monitoring a patient's health records. The architecture of this system consists of heterogeneous nodes ensuring continuous monitoring and specific controls. In addition, the authors use a model checking approach based on model transformation to validate the WBSN behavior for health monitoring. The solution has been evaluated by experiments and has shown promising results in terms of handling the WBSN system, but it does not consider scalability to very large WBSN systems.

The authors of [START_REF] Aiello | An agent-based signal processing in-node environment for real-time human activity monitoring based on wireless body sensor networks[END_REF] designed a framework, named Mobile Agent Platform for SunSPOTs (MAPS), for monitoring human activities by building wireless sensor networks on the Java-programmable SunSPOT sensor platform. The agent-oriented programming gives a detailed description of healthcare that helps the medical staff monitor patients continuously. The WBSN architecture consists of a coordinator and two sensors; the coordinator is based on a JADE (Java Agent DEvelopment framework) based enhancement tool, which handles configuring the sensors, receiving data and recognizing the human activities. The method efficiently manages low-level sensor node functions and can provide high-level services to agents, but it is not capable of processing on deployed computers.

In [START_REF] Tambe | Cluster-based real-time analysis of mobile healthcare application for prediction of physiological data[END_REF], a data mining algorithm, called Online Distribution Resource Aware (ODRA), is developed to predict the health risk of patients in real time based on previous health records. ODRA was verified in a real-time patient monitoring system for monitoring patients' physiological data. This algorithm ensures an accurate status of patients and notifies them of their health status so that they can get immediate attention when needed. However, the method is not able to detect the risk-level alerts in the diagnostic system.

Table 1.3 shows a comparative analysis of the mentioned techniques in terms of the methods used, the performance evaluation metrics, and the limitations. Note that the different limitations mentioned are collected from different state-of-the-art references.
Patient Classification and Risk Assessment Techniques

Other works in sensing-based healthcare aim to build techniques for patient classification and risk assessment [START_REF] Harb | A hadoop-based platform for patient classification and disease diagnosis in healthcare applications[END_REF][START_REF] Alameen | Optimization driven deep learning approach for health monitoring and risk assessment in wireless body sensor networks[END_REF][START_REF] Lanata | A new smart-fabric based body area sensor network for work risk assessment[END_REF][START_REF] Maniruzzaman | Classification and prediction of diabetes disease using machine learning paradigm[END_REF][START_REF] Deniz | Transfer learning based histopathologic image classification for breast cancer detection[END_REF]. In [START_REF] Harb | A hadoop-based platform for patient classification and disease diagnosis in healthcare applications[END_REF], the authors propose an efficient and robust big data analytics platform for real-time sensing-based healthcare applications. The proposed framework relies on the Hadoop ecosystem, uses data analytics techniques for data analysis and disease diagnosis, and is composed of four layers: a layer to monitor the patient, a layer for data storage and decision making, a layer for patient classification, and a layer for data visualization. The efficiency of this platform has been comprehensively evaluated through several studies and simulations based on the Hadoop ecosystem, which have shown that the platform significantly improves healthcare applications in terms of efficiently performing patient classification and disease diagnosis. However, the authors did not consider real scenarios when evaluating the performance.

In [START_REF] Alameen | Optimization driven deep learning approach for health monitoring and risk assessment in wireless body sensor networks[END_REF], the authors built a Fractional Cat-based Salp Swarm Algorithm (FCSSA) for WBSN: the biomedical nodes collect the data and transmit them to an aggregator, which is selected using a hybrid of the Harmony Search Algorithm and Particle Swarm Optimization (hybrid HSA-PSO). Then, a deep belief network is trained to classify the vital conditions for health risk assessment. The algorithm was evaluated through several simulations and showed promising results in terms of accuracy, energy and throughput compared to other existing models. However, the classification operation requires extra computational processing time.

In [START_REF] Lanata | A new smart-fabric based body area sensor network for work risk assessment[END_REF], a new smart-fabric-based body area sensor network for work risk assessment is proposed. The network combines a smartphone, an artificial intelligence algorithm that determines the criticality level of the psychological and physiological workload, and a set of sensors integrated into a textile substrate. The solution was evaluated through experiments and showed promising results in terms of physiological signal and physical activity detection. However, this framework still needs an accurate validation on a large dataset collected from workers in real settings.

The authors of [START_REF] Maniruzzaman | Classification and prediction of diabetes disease using machine learning paradigm[END_REF] proposed a machine-learning-based system for classifying diabetic patients.
The framework uses logistic regression to determine the features that characterize the diabetes disease, relying on the p-value and odds ratio. In addition, the authors apply four models for predicting diabetic patients: naïve Bayes, decision tree, AdaBoost and random forest, using US-based National Health and Nutrition Examination Survey data on diabetic and non-diabetic individuals. The obtained results show the accuracy of combining logistic regression and random forest for diabetes patient classification. However, the framework requires careful validation on classification tasks with other kinds of medical data.

The authors of [START_REF] Deniz | Transfer learning based histopathologic image classification for breast cancer detection[END_REF] developed a classifier for breast cancer detection on histopathologic images. They used a pre-trained VGG-16 model and a fine-tuned AlexNet to extract features, which were then classified using a support vector machine (SVM). Preliminary and encouraging results are shown in terms of accuracy. However, the model must be evaluated on a large dataset to verify its accuracy.

Table 1.4 shows a comparative analysis of the mentioned techniques in terms of the methods used, the performance evaluation metrics, and the limitations. Note that the different limitations mentioned are collected from different state-of-the-art references.

Nurse Scheduling Techniques

Patient scheduling plays an important role in organizing time and allocating resources such as nurses and machines to perform operations and provide services efficiently [START_REF] Avram | Scheduling method for non-emergency medical cases[END_REF][START_REF] Alizadeh | A modified genetic algorithm for nonemergency outpatient appointment scheduling with highly demanded medical services considering patient priorities[END_REF][START_REF] Lee | Improving emergency department efficiency by patient scheduling using deep reinforcement learning[END_REF][START_REF] Daldoul | Scheduling patients in emergency department: A case study[END_REF][START_REF] Ariyani | An optimization model of nurse scheduling using goal programming method: a case study[END_REF][START_REF] Kim | A strategy to improve performance of genetic algorithm for nurse scheduling problem[END_REF][START_REF] Jafari | Maximizing the nurses' preferences in nurse scheduling problem: mathematical modeling and a meta-heuristic algorithm[END_REF][START_REF] Punnakitikashem | An optimization-based prototype for nurse assignment[END_REF][START_REF] Zhong | A two-stage heuristic algorithm for the nurse scheduling problem with fairness objective on weekend workload under different shift designs[END_REF][START_REF] Steege | A macroergonomic perspective on fatigue and coping in the hospital nurse work system[END_REF][START_REF] Amindoust | A hybrid genetic algorithm for nurse scheduling problem considering the fatigue factor[END_REF].

The authors of [START_REF] Avram | Scheduling method for non-emergency medical cases[END_REF] proposed a model based on stochastic Petri nets that specifies the workload in order to fix the daily number of available appointments, and minimizes the patients' waiting time by scheduling them with a genetic algorithm (a toy version of such a GA-based ordering is sketched below). Numerical results show that the patients' waiting time decreased using the proposed model. However, the model does not take into account the nurses' preferences for choosing their days off.
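As a rough illustration of how a genetic algorithm can order patients to reduce waiting time, the toy sketch below evolves patient permutations; the service times, priority weights and GA parameters are illustrative assumptions, and the sketch is far simpler than the cited model.

```python
import random

# (service_time_minutes, priority_weight) per patient; illustrative values.
PATIENTS = [(30, 3), (10, 1), (20, 5), (40, 2), (15, 4)]

def weighted_waiting_time(order):
    """Sum of each patient's waiting time, weighted by priority (lower is better)."""
    elapsed, total = 0, 0
    for idx in order:
        service, weight = PATIENTS[idx]
        total += weight * elapsed  # this patient waited `elapsed` minutes
        elapsed += service
    return total

def crossover(p1, p2, rng):
    # Cut-and-fill: keep a prefix of p1, append p2's genes in p2's order.
    cut = rng.randrange(1, len(p1))
    head = p1[:cut]
    return head + [g for g in p2 if g not in head]

def mutate(order, rng):
    i, j = rng.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]

def genetic_schedule(generations=200, pop_size=30, seed=0):
    rng = random.Random(seed)
    pop = [rng.sample(range(len(PATIENTS)), len(PATIENTS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=weighted_waiting_time)
        parents = pop[: pop_size // 2]  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            child = crossover(*rng.sample(parents, 2), rng)
            if rng.random() < 0.2:
                mutate(child, rng)
            children.append(child)
        pop = parents + children
    best = min(pop, key=weighted_waiting_time)
    return best, weighted_waiting_time(best)

print(genetic_schedule())  # an order close to "high priority / short job first"
```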
In [START_REF] Alizadeh | A modified genetic algorithm for nonemergency outpatient appointment scheduling with highly demanded medical services considering patient priorities[END_REF], the authors propose an algorithm for non-emergency outpatient appointment scheduling, which reduces costs through the effective use of expensive human resources and medical devices and improves the hospitals' waiting-time management. This framework determines the duration of every single appointment and takes the priorities of the patients into account. To optimize this model, a genetic algorithm is designed whose procedure is based on crossover and mutation operators. The efficiency of the proposed model and GA was evaluated through several numerical experiments, which showed promising computational results in terms of solution quality and computational time in most cases, compared with MILP. However, the framework does not take into account stochastic events such as patient no-shows and machine breakdowns, which are inevitable aspects of an appointment scheduling system.

In [START_REF] Lee | Improving emergency department efficiency by patient scheduling using deep reinforcement learning[END_REF], the authors propose an efficient model for patient scheduling in the emergency department, formulating it as a Markov decision process and designing a deep reinforcement learning algorithm based on deep Q-networks. Their objective is to minimize hospital crowding and the weighted waiting time of the patients. Several experiments were carried out to investigate the performance of the proposed model. The computational results confirm that the presented deep RL can outperform the dispatching rules in terms of minimizing the weighted waiting time of the patients and the penalty of emergent patients in the suggested scenarios. However, this study did not address multi-agent RL with other factors such as operating rooms.

The authors of [START_REF] Daldoul | Scheduling patients in emergency department: A case study[END_REF] aim to improve the efficiency of the emergency department by proposing a patient scheduling algorithm. They built a mixed integer linear program (MILP) that reduces the patients' waiting time and distributes the patients into four classes, solved with ILOG CPLEX Optimization Studio. The program was applied to a real case study, and the proposed approach is shown to outperform the current configuration in terms of decreasing the patients' waiting time. However, the numerical results show that the model is unable to find a solution when the number of patients exceeds 17.

Genetic algorithms have been applied in [START_REF] Kim | A strategy to improve performance of genetic algorithm for nurse scheduling problem[END_REF] to ameliorate the nurse-scheduling problem-solving time, in an effort to reduce the cost of allocating nurses to inferior skill levels. Through a comprehensive simulation analysis, the authors demonstrated that the suggested method creates a nurse schedule faster and with better quality than the traditional genetic algorithm. However, the method needs careful validation with real hospital data involving more constraints and a greater diversity of requirements.
The authors of [START_REF] Jafari | Maximizing the nurses' preferences in nurse scheduling problem: mathematical modeling and a meta-heuristic algorithm[END_REF] proposed a mathematical programming model and a meta-heuristic algorithm based on simulated annealing that maximizes the nurses' preferences for work shifts and weekends, while respecting labor laws and regulations as well as hospital policy. This algorithm produces superior outcomes compared to the head nurse's manual programs. However, the authors do not take into account the different skill levels of the nurses, nor the priorities of senior nurses.

On the other hand, many techniques have been proposed in the literature to optimize the nurse workloads. The authors of [START_REF] Punnakitikashem | An optimization-based prototype for nurse assignment[END_REF] provide a decision-making system for optimizing the nurse-patient assignment in order to reduce work overload. The problem is modeled as a linear program solved with CPLEX; a stochastic programming model is then used to handle it, and the authors show that it can save up to 273 hours of nurse overload per year in each medical/surgical unit. However, this model does not take into account the nurses' preferences for choosing their days off.

An optimization model is proposed in [START_REF] Ariyani | An optimization model of nurse scheduling using goal programming method: a case study[END_REF] that relies on the goal programming method to generate the ideal schedule by means of the Lingo 11.0 software. This model meets all the scheduling conditions that were previously unmet with the manual scheduling method, such as the minimum number of male or senior nurses required per shift and the balancing of the working days of all nurses. The efficiency of this model has been comprehensively evaluated through several studies and simulations, which have shown that it provides significantly better solutions for regular nurses and better assignment patterns for all nurses. However, this has not been achieved for senior nurses, due to the difficulty of meeting the senior-nurse requirements in each shift; in addition, the model does not take into account the nurses' preferences for choosing their days off.

The authors of [START_REF] Zhong | A two-stage heuristic algorithm for the nurse scheduling problem with fairness objective on weekend workload under different shift designs[END_REF] designed a two-stage heuristic algorithm for generating a nurse schedule with a balanced weekend workload. The algorithm takes into account the individual nurses' preferences, the patient volume variability, the required patient-to-nurse ratio, and other work and rest rules. The paper conducted a series of experiments using measurement data from a US hospital, and the obtained results verify the importance of investigating uncertain impacts; they also show that a good shift design can lead to revenue savings and fairness. However, the authors do not take the nurses' different skill levels into account.

In [60], interviews were performed and evaluated applying the qualitative content analysis method of the Patient Safety System Engineering Initiative model. The findings demonstrate what nurses consider to be a source of fatigue, as well as the factors that are helpful and harmful in coping with fatigue in their work environment. Nurses are working under extreme duress owing to the COVID-19 pandemic, and they are doing so benevolently all around the planet.
Nurses who are responsible for caring for COVID-19 patients 24 hours a day may become even more exhausted as a result of this emergency circumstance. Nurse scheduling should therefore be adjusted to account for the new scenario. In [START_REF] Amindoust | A hybrid genetic algorithm for nurse scheduling problem considering the fatigue factor[END_REF], a new mathematical model for the Nurse Scheduling Problem (NSP) considering the fatigue factor is proposed. The authors developed a hybrid genetic algorithm (GA) to generate a nurse schedule for all three shifts of a day. They conducted a series of experiments in a real case study, comparing the results with the manual schedule, in order to show the applicability of the proposed approach. The numerical results verify that this approach outperforms the current manual scheduling both in the fatigue factor and in the time needed to find a schedule. However, it imposes some additional expense on the department.

A nurse's schedule must adhere to a number of constraints. The scheduling must respect each nurse's working time, taking into account the maximum working hours, the minimum break duration and the maximum number of consecutive working days. Limits can be established for each nurse, such as workload, days off, and so on; the schedule must also take into account the nurses' preferences for teams and vacations [START_REF] Jung | A development of nurse scheduling model based on q-learning algorithm[END_REF][START_REF] Alade | Solving nurse scheduling problem using constraint programming technique[END_REF][START_REF] Chahyadi | Hospital nurse scheduling optimization using simulated annealing and probabilistic cooling scheme[END_REF].

The authors of [START_REF] Jung | A development of nurse scheduling model based on q-learning algorithm[END_REF] concentrate on constructing a socially acceptable nursing schedule. The latter should mainly consider the following: three shifts, suitable placement of experienced personnel, fairness of job assignment, and legal work standards. As perplexing as nurse scheduling is, the nurses in charge strive relentlessly to set the schedule manually while ensuring fairness and legitimacy. In contrast, an automated nurse schedule requires considerably less time and effort while respecting the legal work standards and allocating a fair distribution of shifts. The automated system relies on an I/O Q-learning algorithm implemented in Python with a web application. However, it imposes some additional costs on the hospital.

To ensure the effectiveness of hospital operation, an efficient and practical method of drafting nurse schedules is required. The constraint programming technique was utilized to solve the nurse scheduling problem in [START_REF] Alade | Solving nurse scheduling problem using constraint programming technique[END_REF]. The results obtained after numerous tests on various numbers of nurses demonstrate the technique's capacity to discover an optimal Nurse Scheduling Problem (NSP) solution, at the price of a higher computational footprint. The performance of the proposed approach depends on the input size and the available computational resources, taking into account both hard and soft constraints such as hospital requirements and nurses' preferences. However, it imposes a higher cost on the hospital.

The goal of the study in [START_REF] Chahyadi | Hospital nurse scheduling optimization using simulated annealing and probabilistic cooling scheme[END_REF] is to create a system that can be used to organize a nurse's schedule.
The obtained working schedule is examined in light of the imposed constraints. Simulated annealing was utilized in conjunction with the Probabilistic Cooling Scheme (PCS) to check the value of the constraint violations, and transition rules make use of a cost matrix to create a new, more efficient state. The results show that, while the PCS cooling method combined with the cost-matrix transition rules imposes a slightly higher cost on the department, it outperforms the exponential and logarithmic cooling methods, generating better objective function values for new solutions and doing so faster. In addition, this system generates a work schedule of better quality than the schedules generated manually by the head of the room.

Table 1.5 shows a comparative analysis of the mentioned techniques in terms of the methods used, the performance evaluation metrics, and the limitations. Note that the different limitations mentioned are collected from different state-of-the-art references.

IoT in Healthcare Applications

The Internet of Healthcare Monitoring Things describes the connected infrastructure of hardware, software, medical devices and services that collects and analyzes data, thereby supporting decision making by healthcare professionals. Consequently, researchers have paid great attention to improving the integration of IoT into the healthcare sector [START_REF] Baker | Internet of things for smart healthcare: Technologies, challenges, and opportunities[END_REF], [START_REF] Qi | Advanced internet of things for personalised healthcare systems: A survey[END_REF], [START_REF] Ramesh | A survey on healthcare systems using internet of things[END_REF], [START_REF] Abdullah | Energyefficient remote healthcare monitoring using iot: A review of trends and challenges[END_REF]. Indeed, many IoHT systems have proven their positive effect on low-cost remote diagnosis [START_REF] Kadhim | An overview of patient's health status monitoring system based on internet of things (iot)[END_REF]. The authors of [START_REF] Kaur | An internet of healthcare things (ioht)-based healthcare monitoring system[END_REF] give an overview of the Internet of Healthcare Things (IoHT), providing an accurate service for tracking the patient's condition in order to prevent any critical situation. In [START_REF] Javaid | Internet of things (iot) enabled healthcare helps to take the challenges of covid-19 pandemic[END_REF], the authors explain the important role of IoT systems during the COVID-19 pandemic by validating the efficient transmission of patients' data without any human interaction.

Conclusion

In this chapter, we presented a general overview of wireless body sensor network technology, including the network architecture and the different types of biomedical sensors. We then demonstrated the importance of integrating the Internet of Things into healthcare monitoring applications and described the challenges raised by wireless body sensor networks. Finally, we presented a background of several techniques addressing these challenges, discussing their advantages and weaknesses. In the next chapter, we will present a comparative study of the recent advances in Internet of healthcare monitoring, using Friedman's statistical test. In addition, an optimized intra-WBSN communication (OIC) approach is proposed in order to ensure the efficient transmission of critical data and to extend the network lifetime.
2 An Adaptive Intra-WBSN Communication Approach

Introduction

Healthcare requires the cooperation of many administrative units and medical specialties. The Internet of Things (IoT) is involved in the healthcare field and plays an extremely important role by providing healthcare services. In the Internet of Healthcare Things (IoHT), several challenges appear, such as limited battery life, long processing times, large amounts of collected data, packet overhead on the sink, etc. Many research studies have been conducted to improve IoT-based healthcare applications. However, the proposed systems each focus on some specific purpose without providing an effective solution to all problems. In this chapter, we pursue two main objectives. First, we conduct a comparative study of recent data collection and communication methods in IoHT. This study is based on the following parameters:

• Energy consumption: reducing the energy consumption is one of the important challenges in WBSN systems; it extends the network lifetime and therefore the patient monitoring operation.
• Redundant data: reducing the redundant data transmitted from the sensors to the sink is essential to improve the decision-making operation.
• Overhead on the sink: this parameter depends on the number of packets received by the sink from the sensors. It is necessary to reduce this overhead to facilitate decision making.
• Execution time: this parameter describes the time required to transmit the data in one period for each studied approach.

This study aims to identify the shortcomings of each method: all these methods fail to overcome all the above-mentioned challenges. Second, we propose an optimized intra-WBSN communication (OIC) approach based on merging a multi-hop routing protocol with the data transmission process of the WBSN. The objective of our approach is to ensure the efficient transmission of critical data and to reduce the energy consumption of the embedded medical devices by determining the optimal route toward the coordinator according to the patient's situation.

The rest of this chapter is organized as follows. Section 2.2 presents the comparative study of four recent state-of-the-art methods in IoHT. Section 2.3 details our proposed transmission approach. The simulation results are evaluated in Section 2.4. Finally, Section 2.5 summarizes this chapter.

Comparative study of recent advances in data transmission in IoT

This section presents a comparative study of four state-of-the-art IoHT systems. The comparison is based on the following parameters: the average lifetime of the embedded sensors, the average transmission time, the average overhead rate on the sink, and the average data redundancy reduction. The articles were selected based on specific criteria: they are recent works, published in well-known journals, and they deal with challenges similar to ours.

An Energy-Efficient Routing Protocol (EERP) for Reliable Data Transmission in WBSN [1]

The authors aim to ensure reliable and efficient routing of the data and to balance the energy consumption, prolonging the lifetime of the WBSN. The WBSN architecture used in this framework is composed of three layers. The first layer is composed of several biosensors that collect different vital signs from the patient's body. The second layer is composed of several electronic devices that collect the data from the sensors.
The third layer consists of several remote servers that analyze the data received from the second layer for the decision-making process. The authors propose a multi-hop routing protocol whose steps are described in Algorithm 1. For each embedded sensor, the status of each captured measurement is first determined as normal or abnormal. Then, the benefit function M_i is calculated for each neighbor sensor n_i according to equation 2.1:

M_i = α × Y_i1 + β × Y_i2 + θ × Y_i3 + γ × (1 - Y_i4)   (2.1)

where Y_i1 depends on the initial energy and the remaining energy of the sensor, Y_i2 is the transmission efficiency, which depends on the number of received packets, Y_i3 depends on the available bandwidth, and Y_i4 depends on the number of hops needed to reach the sink. α, β, θ and γ are the weights of Y_i1, Y_i2, Y_i3 and Y_i4, respectively. Finally, the sensor n_i selects the next node as the one with the maximum value of the benefit function M_i and sends the measurement to that node. The weights of the benefit function are split into two sets, fixed according to the patient's health status:

• If the status is normal: α takes the largest value, thereby reducing the energy consumption.
• If the status is abnormal: β, θ and γ take the largest values, ensuring an efficient and fast transmission.

Algorithm 1 - Next-hop selection in EERP
1: if sensor n_i has a measurement to send then
2:   if level is normal then
3:     Select the weights for the normal level
4:     for each neighbor node n_j in NT do
5:       Calculate the benefit function M_j
6:     end for
7:     Select the node with the maximum M
8:   else if level is abnormal then
9:     Select the weights for the abnormal level
10:    for each neighbor node n_j in NT do
11:      Calculate the benefit function M_j
12:    end for
13:    Select the node with the maximum M
14:  end if
15:  Send the measurement to the selected node
16: end if

The paper conducted a series of experiments on the MATLAB platform and compared the results with existing solutions from the literature. The obtained results showed that the proposed algorithm significantly improves the routing performance in terms of network throughput, network lifetime, energy efficiency and reliable transmission of emergency data. However, the values of the weights α, β, θ and γ in the benefit function are fixed. In addition, this multi-hop routing handles neither the overhead on the sink nor the large amount of redundant data collected by the biosensors. A small sketch of this next-hop selection is given below.
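The following minimal Python sketch shows how a sensor could rank its neighbors with the benefit function of equation 2.1. The neighbor attributes and the weight values are illustrative assumptions, not values taken from [1]; the metrics Y1-Y4 are assumed to be already normalized to [0, 1].

# Minimal sketch of EERP next-hop selection (equation 2.1).
def benefit(neighbor, weights):
    a, b, t, g = weights
    return (a * neighbor["Y1"]            # residual energy ratio
            + b * neighbor["Y2"]          # transmission efficiency
            + t * neighbor["Y3"]          # available bandwidth
            + g * (1 - neighbor["Y4"]))   # fewer hops to the sink is better

def select_next_hop(neighbors, level):
    # Hypothetical weight settings: energy dominates in the normal case,
    # transmission-related terms dominate in the abnormal case.
    weights = (0.7, 0.1, 0.1, 0.1) if level == "normal" else (0.1, 0.3, 0.3, 0.3)
    return max(neighbors, key=lambda n: benefit(n, weights))

neighbors = [
    {"id": 1, "Y1": 0.9, "Y2": 0.6, "Y3": 0.5, "Y4": 0.4},
    {"id": 2, "Y1": 0.5, "Y2": 0.9, "Y3": 0.8, "Y4": 0.2},
]
print(select_next_hop(neighbors, "normal")["id"])    # favors neighbor 1 (energy)
print(select_next_hop(neighbors, "abnormal")["id"])  # favors neighbor 2 (throughput)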
RK-Energy Efficient (RKEE) Routing Protocol for WBSN [2]

The authors propose a routing protocol that extends the lifetime of the WBSN; its steps are described in Algorithm 2. Similarly to the EERP approach [START_REF] Qu | An energy-efficient routing protocol for reliable data transmission in wireless body area networks[END_REF], this protocol first determines whether the measurement is normal or abnormal. If it is abnormal, the sensor transmits the measurement directly to the sink in a single hop. If it is normal, the sensor transmits the measurement to a forwarder node, which is responsible for collecting the normal data from the sensors and transmitting them to the sink. The forwarder node n_i is selected among all the sensors, on each round, according to the value CF_i:

CF_i = d_i / RE_i   (2.2)

where d_i is the distance between the sensor and the sink and RE_i is the remaining energy of this sensor. The sensor with the minimum CF_i is selected as the forwarder node; in other words, the sensor with the minimum distance to the sink and the maximum residual energy has the highest chance of becoming the forwarder. This balances the remaining energy of all the sensors in the WBSN and therefore extends the network lifetime.

Algorithm 2 - RKEE forwarder selection and routing
1: for each round do
2:   for each sensor n_i in the WBSN do
3:     Calculate CF_i
4:   end for
5:   Select the n_i with the minimum CF_i as the forwarder node
6:   for each sensor n_i that has a packet to send do
7:     if level is normal then
8:       Send the data to the forwarder node
9:     else if level is abnormal then
10:      Send the data to the sink
11:    end if
12:  end for
13: end for

The solution has been evaluated through experiments and has shown promising results in terms of energy consumption. However, the proposed algorithm does not take into account the huge amount of redundant data collected from the patient.

Self-Adaptive Data Collection and Fusion (SADCF) for Health Monitoring in WBSN [3]

The authors aim to optimize the data transmission and to estimate the sensing frequency in real time in order to reduce the energy consumption. First, they propose a Modified Local Emergency Detection that reduces the data sent by each sensor to the sink, thereby saving energy: the sensor periodically sends a measurement to the coordinator only when the criticality level of the patient changes, as described in Algorithm 3 (lines 2-15). Second, a distributed Adaptive Sampling (AS) algorithm is proposed (lines 16-21). This algorithm changes the sampling frequency on each round in order to control the sensing and therefore minimize the energy consumption. To this end, the authors apply an ANOVA model with a Fisher test in order to determine the level of variation in the sensed data; the sampling rate is then balanced according to this variation and to the patient's criticality.

Algorithm 3 - SADCF emergency detection and adaptive sampling (excerpt)
8:  Get measurement r_i at slot i
9:  s_i = score of r_i
10: if s_i != s then
11:   Send measurement r_i
12:   s = s_i
13: end if
...
20:   Balance the sampling rate to the maximum
21: end if
22: end for

A minimal sketch of this sampling adaptation is given below.
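As an illustration of the adaptive-sampling idea, the sketch below applies a one-way ANOVA (Fisher test) over the readings of recent periods and lowers the sampling rate when no significant variation is detected. The rate values, the halving rule and the significance threshold are assumptions of this example, not the exact SADCF rule.

import numpy as np
from scipy.stats import f_oneway

def adapt_sampling_rate(periods, max_rate, min_rate, alpha=0.05):
    # periods: list of 1-D arrays of readings, one per recent period.
    # A significant Fisher test means the readings vary between periods,
    # so the sensor keeps sensing at the maximum rate; otherwise the
    # rate is halved, never dropping below min_rate.
    _, p_value = f_oneway(*periods)
    if p_value < alpha:
        return max_rate
    return max(min_rate, max_rate // 2)

rng = np.random.default_rng(0)
stable = [80 + rng.normal(0, 1, 50) for _ in range(4)]  # a stable patient
print(adapt_sampling_rate(stable, max_rate=100, min_rate=20))  # -> 50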
Sleep Scheduling in Energy Harvesting (SSEH) WBSN [4]

The authors present a three-tiered sleep scheduling approach that combines sensor sleep scheduling with energy harvesting, opening a new path to further reduce energy consumption in EH-WBSNs and to maximize the lifetime of a low-latency network.

• First level: Sensor Node Scheduling aims to activate the minimum number of sensors in order to minimize the network's communication. The authors develop a global and a local method.
  - Global method: on each round, the sensor with the largest number of neighbors is first added to the initial active sensor group. Then, for each sensor without any neighbor in the active group, a counter called the unscheduled number is incremented by one. After that, the sensor with the maximum gradient in reducing the overall unscheduled number is elected and added to the active group. This is repeated until every remaining sensor, apart from the active ones, has at least one neighbor in the active sensor group.
  - Local method: on each round, each sensor calculates the number of its unscheduled neighbors. The sensors with a larger number than their one-hop neighbors are elected and added to the active sensor group. Each sensor selected to be active sends an "active" message to its neighbors; conversely, sensors receiving an "active" message from one of their neighbors can switch to sleep. In the current round, the active sensors and their neighbors are then pruned, while the remaining unscheduled sensors are scheduled in the next round. This is repeated until all sensors are scheduled, i.e. until the status of each sensor is either active or asleep.
• Second level: Active Sensor Group Discovery aims to find the maximum number of active sensor groups. First, the original network is converted into a virtual network in which each sensor is represented by several virtual sensors, whose number is defined according to the sensor's remaining energy. The global or local method is then used to build a minimum active sensor group, after which all the elected virtual sensors are removed from the virtual network. Finally, all the obtained virtual networks are converted back into original networks.
• Third level: Active Sensor Group Scheduling aims to arrange these active groups in a specific order so that one active group runs in each round. The authors propose three scheduling mechanisms:
  - Retentive scheduling: one active group works frame by frame until its minimum energy falls below a threshold; the mechanism then rotates among all active sensor groups until the minimum energy of every group is smaller than the threshold.
  - Round-robin scheduling: each active sensor group runs in turn, in circular order, until all active sensor groups exhaust their batteries.
  - Heuristic scheduling: the minimum energy of each active sensor group is computed, and the group with the highest minimum energy is enabled for the frame; this continues until no active sensor group remains able to work.

Simulations are provided to show the effectiveness of the approach. Based on the obtained results, heuristic scheduling is the mechanism that ensures the longest system lifetime; a sketch of this rule is shown below.
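The heuristic rule can be summarized in a few lines. In the sketch below, the group energies, the per-frame cost and the stopping threshold are invented placeholders, and energy harvesting is abstracted away; it only demonstrates the "highest minimum energy first" selection.

# Sketch of heuristic scheduling: each frame, enable the active sensor
# group whose minimum residual energy is the largest.
def heuristic_schedule(groups, frame_cost, threshold):
    # groups: dict mapping a group id to the residual energies of its sensors.
    schedule = []
    while True:
        alive = {g: min(e) for g, e in groups.items() if min(e) > threshold}
        if not alive:
            break                              # no group can still work
        best = max(alive, key=alive.get)       # highest minimum energy
        schedule.append(best)
        groups[best] = [e - frame_cost for e in groups[best]]
    return schedule

groups = {"G1": [5.0, 4.0], "G2": [3.0, 6.0]}
print(heuristic_schedule(groups, frame_cost=1.0, threshold=0.5))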
Optimised Intra-WBSN Communication Approach

In this section, we present the overall communication/routing process (Figure 2.1). A multi-hop routing protocol is used in our system in order to save the energy consumed by sensor-coordinator and sensor-sensor communications. First, the routing protocol proposed in the EERP approach is optimized by controlling the data transmission according to the patient's status. Second, an adaptive model is adopted at the next-hop selection level, where the model weights are fine-tuned according to the patient's status. Our proposed routing protocol is divided into four stages.

Network Initialization

First, the system is initialized by exchanging hello messages between all sensors. This message includes the following information: the remaining energy, the number of hops toward the sink, and the transmission bandwidth.

Emergency Detection

At this stage, each sensor collects data periodically and calculates the score of each captured measurement using an Early Warning Score (EWS). Figure 2.2 shows one of the most widely used EWS guides, the National EWS (NEWS), developed in the UK and distributed worldwide. An EWS is a guide built on the vital signs and used by the medical staff within a hospital in order to track the criticality level of a patient. For each vital sign, the collected measurement is compared with a normal range in order to compute a score between 0 and 3; 0 indicates a normal measurement, while the other values indicate an abnormal situation whose severity increases with the score [START_REF]National early warning score (news)[END_REF]. After score calculation, and in order to reduce the transmission load, the sensor forwards a measurement only when the patient's situation varies.

Next Hop Node Selection Based on the Benefit Function (M)

This stage aims to find the optimal path by determining the next node for each sensor. The next node is the one that will receive the captured measurement instead of it being sent directly to the coordinator. The optimal path is therefore the route that saves the most energy, speeds up the transmission and maintains the quality of the communication. The energy consumption depends strongly on the data transmission, which in turn depends on the distance to the destination: the greater the distance, the greater the power consumption. Therefore, a sensor that is close to the coordinator sends its measurement directly using single-hop communication. In order to find the optimal path, we calculate the benefit function M_ij of each neighbor node n_j of the current sensor node n_i. This function includes the following parameters (equation 2.3):

M_ij = α × X_j + β × Y_j + θ × W_j + γ × (1 - Z_j)   (2.3)

X_j depends on the initial energy E_j and the remaining energy Er_j of the neighbor node n_j, and on a minimum threshold E_min (equation 2.4):

X_j = (Er_j - E_min) / (E_j - E_min)   (2.4)

Y_j, computed by equation 2.6, depends on the transmission efficiency of node n_j. t_j (equation 2.5) depends on the number of packets Pf_j successfully forwarded by the neighbor node n_j out of the received packets Pr_j; t_min and t_max are the minimum and maximum numbers of packets successfully transmitted:

t_j = Pf_j / Pr_j   (2.5)

Y_j = (t_j - t_min) / (t_max - t_min)   (2.6)

W_j is the normalized parameter of the available bandwidth Bav_j of the neighbor node n_j. It is calculated according to equation 2.7, where Bmin and Bmax are the minimum and maximum bandwidths in the WBSN. The higher the bandwidth of the sensor, the more efficient the transmission:

W_j = (Bav_j - Bmin) / (Bmax - Bmin)   (2.7)

Z_j is the normalized parameter of the number of hops H_j needed by node n_j to reach the coordinator (equation 2.8), where Hmax denotes the maximum number of hops from a candidate next hop n_j to the coordinator:

Z_j = H_j / Hmax   (2.8)

α, β, θ and γ are the weights of X_j, Y_j, W_j and (1 - Z_j), respectively. In order to calculate the benefit function, the optimal values of these four weights must be determined according to the patient's status, which is divided into two cases: normal and abnormal. If the case is abnormal, the focus is on ensuring the efficient transmission of data,
so the weight θ, which depends on the available bandwidth, is maximized. Otherwise, if the case is normal, the focus is on reducing the energy consumption, so the weight α, which depends on the remaining energy, is maximized; increasing α raises the chance of selecting the sensors with the highest remaining energy.

Algorithm 4 - Adaptation of the benefit-function weights
1: for each captured measurement do
2:   s = score of the measurement (EWS table)
3:   if level is normal then
4:     α = 1 / (2 × (1 + e^s))
5:     E = 1 - α
6:     θ = a × E
7:     β = b × E
8:     γ = c × E
9:   else if level is abnormal then
10:    θ = 1 / (2 × (1 + e^(-s)))
11:    E = 1 - θ
12:    α = a × E
13:    β = b × E
14:    γ = c × E
15:  end if
16: end for

The weight-optimization steps are described in Algorithm 4, under the initial condition that the weights sum to 1. First, the score s of the captured measurement is calculated using the EWS table (lines 1-2). The status of the patient, normal or abnormal, is determined by the doctor; for example, a cancer patient may have a normal status, while a patient who has just undergone surgery has an abnormal status. If the status is normal, α is calculated by the equation at line 4: when s decreases, α increases, and vice versa. E is the result of subtracting α from 1 (line 5); the largest share (a) of E goes to θ (for the bandwidth, line 6), and the other shares (b and c) go to the remaining weights β and γ. If the status is abnormal, θ is calculated by the equation at line 10: when s increases, θ increases, and vice versa. E is then distributed as in lines 12-14: the largest share a goes to α (for the remaining energy) and the other shares (b and c) go to the remaining weights β and γ. Finally, the next node is the neighbor with the maximum M.

Data Transmission

The sensor n_i sends the measurement to the selected next node. Then, n_i sends a control message to its neighbors containing its new remaining energy. A sketch combining the weight adaptation with the next-hop selection is given below.
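The following Python sketch combines the weight adaptation of Algorithm 4 with the benefit function of equation 2.3. The constants a, b, c, the normal-case sigmoid (mirroring the abnormal-case formula of line 10) and the neighbor values are assumptions chosen only to make the example runnable.

import math

def adapt_weights(score, status, a=0.5, b=0.3, c=0.2):
    # Returns (alpha, beta, theta, gamma), summing to 1 since a+b+c = 1.
    if status == "normal":
        alpha = 1 / (2 * (1 + math.exp(score)))   # grows as the score drops
        E = 1 - alpha
        theta, beta, gamma = a * E, b * E, c * E  # largest share to theta
    else:
        theta = 1 / (2 * (1 + math.exp(-score)))  # grows with the score
        E = 1 - theta
        alpha, beta, gamma = a * E, b * E, c * E  # largest share to alpha
    return alpha, beta, theta, gamma

def benefit(n, w):
    # Equation 2.3 over the normalized metrics X, Y, W, Z of a neighbor.
    alpha, beta, theta, gamma = w
    return alpha * n["X"] + beta * n["Y"] + theta * n["W"] + gamma * (1 - n["Z"])

neighbors = [{"id": 1, "X": 0.9, "Y": 0.5, "W": 0.4, "Z": 0.6},
             {"id": 2, "X": 0.4, "Y": 0.8, "W": 0.9, "Z": 0.2}]
w = adapt_weights(score=3, status="abnormal")
print(max(neighbors, key=lambda n: benefit(n, w))["id"])  # high-bandwidth node 2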
Simulation Results

In order to evaluate the performance of our approach, several parameters are calculated for the four approaches: the network lifetime, the data transmission time, the overhead on the sink and the data redundancy reduction. The approaches are implemented in Java. We used real health data collected from the Multiple Intelligent Monitoring in Intensive Care (MIMIC) database of PhysioNet [START_REF]Mimic database on physionet[END_REF]. MIMIC contains records for about 72 patients covering several vital signs, including Heart Rate (HR), Ambulatory Blood Pressure (ABP), Respiration Rate (RESP), pulse (PULSE) and Oxygen Saturation (SPO2). Every second, the biosensor takes a new reading of each vital sign and sends it to the coordinator for archiving; the coordinator then forwards the reading to the sink. In this simulation, we used health data collected from 30 patients. Each WBSN consists of 10 sensors. All sensors start with an initial energy of 0.0005 Joules (J), the maximum wireless transmission distance is 2 m, the size of the transmitted frame for each value is fixed at 64 bits, and two sensors are considered neighbors if the distance between them is less than 0.4 m. The network lifetime is divided into rounds and each round into 10 periods; in each period the sensor collects 500 measurements. The simulation tests are repeated 10 times, and all resulting parameter values are averaged and presented for the 30 patients with different criticality levels (normal and abnormal), as determined by the medical team.

Comparison of the Studied Approaches

Table 2.1 presents the values of the different parameters for the four approaches, for both patient statuses (normal and abnormal). It also reports the algorithmic complexity of each approach, which strongly affects the data transmission time. We used the non-parametric Friedman test to assess the differences between the four parameters across the four simulated approaches. The procedure ranks the values within each row and then analyzes the ranks by column. The data table has eight rows and four columns: the rows represent the parameters (lifetime, transmission time, overhead on the sink and data redundancy reduction) for both statuses (normal and abnormal), while the columns represent the four approaches to be compared. Friedman's test involves two statistical hypotheses:

• The null hypothesis (H0): there is no difference between the simulated approaches; they behave similarly.
• The alternative hypothesis (H1): at least one approach differs from the others.

Table 2.1 is treated as a matrix with b rows (the parameters) and k columns (the approaches). The ranks R(X_ij) are calculated within each row, and the sum of ranks is calculated for each column (approach). The Friedman test is then applied as follows:

• Step 1: compute A:

A = Σ Σ R(X_ij)² = 251.5   (2.9)

where R(X_ij) is the rank of approach j for parameter i.

• Step 2: compute B:

B = (1/b) × Σ_j (RX_j)² = 221.75   (2.10)

where b is the number of parameters and RX_j is the rank sum of column j.

• Step 3: compute T:

T = (b - 1) × [B - b k (k + 1)² / 4] / (A - B) = 5.1176   (2.11)

• Step 4: determine the critical value FS from the Fisher distribution table, taking the smallest value α = 0.01 for a stricter analysis:

FS(1 - α; k - 1, (b - 1)(k - 1)) = 3.4   (2.12)

We obtain T > FS, so Friedman's hypothesis H0 is rejected: at least one approach performs better than the others. In order to determine the best approach, a bilateral (pairwise) test is used. This test checks the difference between two approaches against a reference critical value C (equation 2.13):

C = t(1 - α/2) × [2b(A - B) / ((b - 1)(k - 1))]^(1/2) = 9.593   (2.13)

where t is the quantile from the Student distribution table corresponding to α, b and k. The bilateral test is applied to all approaches. In this study, the SADCF approach demonstrates its superiority; however, it suffers from important limitations, such as a long transmission time and a high overhead on the sink. A sketch of the rank computations of equations 2.9-2.11 is given below.
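The rank computations of equations 2.9-2.11 can be reproduced in a few lines of Python. The 8x4 matrix below is random, invented data used only to show the mechanics; the thesis values come from Table 2.1.

import numpy as np
from scipy import stats

def friedman_T(data):
    # data: b rows (parameters) x k columns (approaches).
    b, k = data.shape
    R = np.apply_along_axis(stats.rankdata, 1, data)  # rank within each row
    A = np.sum(R ** 2)                                # equation 2.9
    B = np.sum(R.sum(axis=0) ** 2) / b                # equation 2.10
    return (b - 1) * (B - b * k * (k + 1) ** 2 / 4) / (A - B)  # equation 2.11

rng = np.random.default_rng(1)
data = rng.random((8, 4))   # placeholder values, not the Table 2.1 entries
print(friedman_T(data))     # compare against the Fisher critical value FS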
Comparison with our Optimised Intra-WBSN Communication Approach

In this section, the performance of our optimised intra-WBSN communication approach is compared with the four studied approaches. Table 2.2 shows a comparative analysis of the four state-of-the-art techniques in terms of the addressed challenges, the methods used, and the performance evaluation metrics. Unfortunately, each of the proposed systems focuses on a specific purpose without providing an effective solution to all of the problems, namely limited battery life, long processing time, large amounts of collected data and packet overhead on the sink.

Figure 2.3 shows the network lifetime for each approach. The obtained results show that the network lifetime is extended by more than 50 % with our approach compared to SADCF, and by more than 90 % compared to the other approaches, for both patient situations (normal and abnormal). This result stems from merging the data transmission technique, which reduces the number of data sent, with the multi-hop routing protocol, which reduces the transmission distance; together, these reductions lower the energy consumption. The energy consumed by a sensor node to transmit n data sets to another node at distance D is computed in our thesis as follows:

E_TX = E_elec × n × τ × 64 + β_amp × n × τ × 64 × D²   (2.14)

where τ is the period size, 64 is the number of bits used to represent each value, E_elec is the energy consumed by a sensor node in its electronic circuitry (typically E_elec = 50 nJ/bit), and β_amp is the energy consumed by the RF amplifiers to compensate for propagation loss (typically β_amp = 100 pJ/bit).

Figure 2.4 shows the decrease of the residual energy for the first node to die. We can clearly observe that our approach (OIC) efficiently increases the battery life of the sensors. This improvement has two causes: first, a sensor sends a measurement only when the patient's situation varies; second, the sensors that are far away from the coordinator use multi-hop communication to send their measurements, saving even more energy.

Figure 2.5 shows the transmission time required in one period (ms) for each approach. The obtained graphs show that the SSEH approach and our approach (OIC) require less time than the others, for both patient situations (normal and abnormal).

Figure 2.6 presents the comparison of the overhead on the sink for all studied approaches. This parameter describes how many packets are received by the sink in each round. The results show that our approach reduces the overhead on the sink by up to 83 %, 69 % and 59 % compared to the EERP, RKEE and SADCF approaches, respectively, in the normal situation, and by up to 79 %, 82 % and 63 %, respectively, in the abnormal situation. This improvement is due to the low number of sensors sending data directly to the sink. The SSEH approach reduces the overhead on the sink more than our proposal; this result is explained by its sleep scheduling mechanism, which reduces the number of active sensors in each round and therefore the number of packets sent by the active sensors toward the sink.

Reducing the redundant transmitted data is an important challenge in WBSN, since it increases the accuracy of the decision-making process. Figure 2.7 shows the percentage of data reduction achieved by the compared approaches. The obtained results show that both SADCF and our approach (OIC) reduce the transmission of useless data by up to 64 % compared to the SSEH approach, while the EERP and RKEE approaches do not address this challenge: in those networks, the sensors send all the collected data to the coordinator.

Conclusion

In the healthcare field, the IoT is typically employed to increase patient safety and to optimize the operations of the medical staff; IoHT systems monitor a person's vital signs in a non-intrusive and effective way. In this chapter, four approaches for patient monitoring in the context of IoHT were studied, and an adaptive intra-WBSN communication approach (OIC) was proposed. Our approach optimizes the routing protocol used in the data transmission process. By comparing the simulation results on real biosensor data, the efficiency of our approach was verified in terms of extending the system lifetime, adapting the transmission time, reducing the overhead on the sink, and eliminating redundancy in the data transmission. In the next chapter, we will present an efficient framework called P2D for real-time health-monitoring applications.
This solution aims to face the challenges raised by big data in WBSN. Nowadays, the fastest growth in big data applications is occurring within wireless body sensor networks (WBSN). Besides the sheer size of the data, researchers face two other main challenges. First, the velocity: data are collected at high speed, sometimes reaching terabytes, and storing such data on traditional servers is prone to hardware failures and data loss. Second, the collected data are unstructured and come in several formats, such as numerical values, images and videos, which makes relational database systems unsuitable for big data applications. To overcome these challenges, the Hadoop ecosystem has been proposed as an efficient platform for handling big data applications.

A WBSN used in healthcare applications makes it possible to monitor the vital signs of a patient (heart rate, oxygen saturation, blood pressure, etc., i.e. numerical data), to take images of the patient's organs (brain, lungs, etc.), and to support medical operations (surgery, appendectomy, etc.). The periodic and multivariate data collection, together with the dense deployment of the sensors, makes WBSN one of the biggest data contributors of this era. In this chapter, we aim to bring patients' situations closer to their doctors by proposing an efficient framework for health monitoring and decision making. Our framework, called Patient-to-Doctors (P2D), works at both the sensor and sink levels and covers the pipeline from the collection phase to the visualization phase. It relies on the Hadoop ecosystem and consists of five layers: collection, which is dedicated to the sensor level, and ingestion, processing, storage and visualization, which are dedicated to the sink level. The first layer (data collection) collects the data from the WBSN sensors and sends them over the Internet to the second layer (data ingestion), which receives the data and forwards them to the third layer (processing), which in turn recovers missing data before storing them in the fourth layer (storage). The last layer allows the end users to visualize, analyze and understand the stored data. Moreover, our framework uses various Hadoop tools in each layer in order to ensure fast data processing and reliable storage. Our contributions in this chapter can be summarized as follows:

• At the sensor level: P2D proposes two algorithms, emergency detection and adaptive sensing frequency. The first alerts the medical team whenever critical records are detected so that a rapid decision can be taken; the sensor transmits its collected data to the coordinator only when the patient's situation has changed, and the coordinator then forwards the received data to the sink. The second algorithm reduces the amount of periodic data sent by the sensor by eliminating data redundancy, thereby saving its energy.
• At the sink level: P2D maintains an archive for each patient based on the least squares approximation method and on two fault-recovery algorithms (moving average and exponential smoothing), and it allows the medical team to predict the evolution of the patient's situation over the next periods using the Prophet method. The medical team can thus take preventive actions in order to reduce the risk to the patient.

The rest of this chapter is organized as follows. In Section 3.2, we present our WBSN platform.
Sections 3.3 and 3.4 detail the algorithms proposed at the sensor and sink levels, respectively. In Section 3.5, we discuss the obtained results. Finally, Section 3.6 concludes this chapter.

WBSN Framework

In this section, we present the architecture of the framework we propose for the WBSN. The sensors constitute the data sources of our framework: they collect various types of data (mainly numerical values, images and videos) and send them toward the sink node, which in turn forwards the data to the Hadoop cluster through the Internet. The choice of a Hadoop cluster on the end-user side is motivated by the high scalability and fast storage required by the WBSN. The architecture of the framework consists of five layers (collection, ingestion, processing, storage and visualization), where each layer has its own set of tools and, in some cases, algorithms. Figure 3.1 presents the architecture of our framework with all its layers, which can be briefly introduced as follows:

• Data collection layer: two algorithms are applied to the data collected by the sensors, namely emergency detection and adaptive sensing frequency; they are presented in detail in Section 3.3 (Patient Monitoring Model).
• Data ingestion layer: this layer consists of ingestion tools (Flume and Spark Streaming) that receive data from the sensors (the data sources) and forward them to the next layer (the processing layer).
• Data processing layer: once the data are delivered by Spark Streaming, the processing layer uses Spark's machine learning library to recover faults in the data before sending them to the storage layer. First, we apply the patient records archive algorithm, based on the LSA method, in order to regenerate the data; then we use two algorithms (moving average and exponential smoothing) to recover the data missing in the WBSN after the sensing-rate adaptation process.
• Data storage layer: after preprocessing, the storage layer uses HDFS (Hadoop Distributed File System) to store the data across the Hadoop cluster nodes in a distributed manner.
• Data visualization and prediction layer: this layer allows end users and experts to retrieve the data stored in HDFS, predict the patient's situation using the Prophet method, and perform real-time analysis in order to understand the situation of the monitored zone and make suitable decisions. In this layer, we use two main tools: Grafana, a graphical user interface, and Matplotlib, available in Python.

We consider a WBSN in which each sensor n ∈ N monitors a vital sign vs ∈ VS. For analysis purposes, we assume that the lifetime of each sensor is divided into a set of rounds U = [U_1, U_2, ...], where each round u ∈ U consists of a set of P periods. In each period p ∈ u, each sensor n collects a set of T readings and forms a vector pR_u^vs = [r_1, r_2, ..., r_T] that is sent to the medical team (Figure 3.2).

Patient Monitoring Model

In this section, we introduce the model proposed at the sensor node level and applied in the collection layer. It aims to ensure efficient monitoring of a patient staying at the hospital or at home (avoiding unnecessary hospitalization and reducing general healthcare costs). The patient monitoring model proposed in our platform consists of two major algorithms: emergency detection and adaptive sensing frequency.
Sometimes, the sensed data can be highly critical and require fast action to save the person's life. The first algorithm, emergency detection, therefore alerts the medical team whenever abnormal records are detected, while the adaptive sensing frequency algorithm adapts the number of data collected by the sensor in order to save its energy and ensure a long patient-monitoring period. In the following, we detail each of the proposed algorithms.

Patient Emergency Detection

In WBSN, the variation of the patient's situation must be monitored continuously. However, the huge amount of data collected and sent by a sensor quickly depletes its available energy. Thus, in order to ensure long-term patient monitoring, the amount of data transmitted by each sensor should be reduced without losing the information that indicates a variation of the patient's situation, especially in emergency cases. In [START_REF] Habib | Self-adaptive data collection and fusion for health monitoring based on body sensor networks[END_REF], the authors proposed a modified local emergency detection (MLED) algorithm for WBSN that only sends non-sequentially-similar records to the sink. Although MLED can greatly reduce the data transmission and save the sensor's energy, it behaves poorly when the patient's situation is stable, whether at a low or a high criticality level: in a stable situation, MLED never sends any record from the sensor to the sink, which raises two problems. First, the variation of the patient's situation is not noticed by the medical team; second, the archive of the patient's records remains incomplete. In this section, we propose an efficient patient emergency detection algorithm that overcomes these challenges while saving the sensor's energy. In order to detect abnormal patient situations, we use the same EWS guide [START_REF]National early warning score (news)[END_REF] presented in Chapter 2.

Based on the EWS guide, our patient emergency detection (PED) algorithm has three goals: first, it reduces the amount of data periodically sent by the sensor; second, it keeps the information about the criticality of the patient's situation up to date; third, it enables an efficient patient archive. The intuition behind PED is to search for similarity between successive collected records, based on their scores, and to send a record with its score to the sink only when a variation of the patient's situation is detected. Furthermore, at the end of each period, PED fits a correlation model to the records collected during the period and sends it to the sink, which can then regenerate and store the collected data for later analysis and study. More formally, PED works according to the following steps, illustrated by the example in Figure 3.3:

• During each period p, the sensor n collects a set of records about the vital sign vs and forms the set pR_u^vs. For instance, assume n collects heart-rate (HR) data and forms a set of 12 readings during p.
• The sensor n calculates the score of each reading in pR_u^vs according to the EWS and forms the score set pS_u^vs; here, pS_u^HR = [0, 0, 1, 1, 1, 2, 2, 3, 3, 2, 2, 2].
• The sensor n sends the first reading collected in p and every subsequent reading whose score differs from that of the previously sent one.
In Figure 3.3, the first two readings are similar, with the same score 0, so n sends only the first reading (90); the next readings (91, 99 and 100) share the score 1, so n only sends the first of them (91), and so on. Hence, a subset of pR_u^HR, denoted S_pR_u^HR, is sent to the sink instead of the whole set. In this way, the number of readings sent by the sensor is reduced while the medical team stays up to date about the variation of the patient's situation.

• Finally, at the end of each period, PED studies the correlation between the readings collected in p and generates a prediction model that is sent to the sink, allowing it to regenerate the readings not transmitted during p and thus to maintain the patient archive at the hospital. In our work, we rely on the Least Squares Approximation (LSA) method (explained in the next paragraph), one of the most standard approaches used in the literature for statistical analysis. LSA allows the sensor n to find the coefficient set of the polynomial of degree k, denoted pC_u^HR, that best fits the readings collected during p. The sensor n sends pC_u^HR to the coordinator at the end of p, which in turn forwards it to the sink so that the patient's situation can be tracked throughout his stay in the hospital.

Recall of the LSA Method

The least squares approximation (LSA) method [START_REF] Bretscher | Linear algebra with applications[END_REF] is one of the most standard approaches used in statistical analysis. It determines the curve that best describes the relationship between expected and observed data sets by minimizing the sum of the squared deviations between observed and expected values; it thus limits the distance between a function and the data points that the function is trying to explain. LSA has many applications in various domains such as healthcare [START_REF] Combes | Predicting hospital length of stay using regression models: Application to emergency department[END_REF], business [START_REF] Hair | Partial least squares structural equation modeling (pls-sem) an emerging tool in business research[END_REF] and finance [START_REF] Gilli | Numerical methods and optimization in finance[END_REF]. Mathematically, LSA can be defined as follows:

• Definition 1 (LSA function). Given a set of q data points {(x_1, y_1), (x_2, y_2), ..., (x_q, y_q)}, the polynomial of degree k, y = a_0 + a_1 x + a_2 x² + ... + a_k x^k, that best fits all the points (x_i, y_i), i ∈ [1, q], is obtained by solving the normal equations:

q a_0 + a_1 Σ x_i + ... + a_k Σ x_i^k = Σ y_i
a_0 Σ x_i + a_1 Σ x_i² + ... + a_k Σ x_i^(k+1) = Σ x_i y_i
...
a_0 Σ x_i^k + a_1 Σ x_i^(k+1) + ... + a_k Σ x_i^(2k) = Σ x_i^k y_i

These (k + 1) equations contain (k + 1) unknowns a_0, a_1, ..., a_k, which are computed to obtain the coefficients of the polynomial that best fits the data points. Such a fit can be computed directly, as in the sketch below.
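In practice, the LSA coefficients can be obtained with a standard polynomial fit, which solves exactly the normal equations above. The readings in the sketch below are hypothetical values, loosely following the Figure 3.3 example.

import numpy as np

# One period of (hypothetical) heart-rate readings.
readings = np.array([90, 90, 91, 99, 100, 105, 110, 123, 121, 108, 103, 102])
slots = np.arange(1, len(readings) + 1)

k = 2
coeffs = np.polyfit(slots, readings, k)   # least-squares fit of degree k
print(coeffs)                             # [a_k, ..., a_1, a_0]

# The sink can regenerate the period from the coefficients alone.
print(np.round(np.polyval(coeffs, slots), 1))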
Indeed, the processing time needed to compute the LSA polynomial of degree k can be large, especially when the period size T is high. Hence, in order to reduce the time complexity, we propose to reduce the number of records used when computing the LSA polynomial by selecting a subset of records from pR_u^vs instead of the whole set. Our algorithm works in three steps (see also Figure 3.3):

• We divide the whole set pR_u^vs into a number of D divisions, where each division D_i contains d % of the records in pR_u^vs, i.e. d % of T. For instance, if d = 25 %, then pR_u^HR is divided into 4 equal divisions, as shown in Figure 3.3.
• For each division D_i, we compute the mean value of the records in the division, thus obtaining a reduced set of records from pR_u^vs.
• We apply the LSA method to the division means in order to find the polynomial of degree k, and we send the set of equation coefficients pC_u^HR to the sink at the end of the period. For instance, in Figure 3.3, we find the polynomial of degree k = 2, y = a_0 + a_1 x + a_2 x², and we send the coefficient set pC_u^HR = [77.92, 11.77, -0.7219] to the sink.

Algorithm 5 describes the PED process applied at the sensor level at the end of each period. PED sends the first collected record to the sink after computing its score, which becomes the current score (lines 1-3). It then computes the score of each subsequent record of the period and compares it with the current score: if the scores are equal, the new record is not sent; otherwise, the record is sent and its score becomes the new current score (lines 4-10). At the end of the period, PED divides the records collected in p into equal divisions, computes the mean of each division and, finally, derives the set of coefficients to be sent to the sink using the LSA method (lines 11-14). A compact sketch of this sensor-side behavior follows.
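Putting the pieces together, the sketch below mimics PED at the sensor side: transmission on score change, followed by the period-end coefficient computation. The scoring function is a placeholder, not the official NEWS thresholds, and the readings are the hypothetical values used above.

import numpy as np

def ews_score(hr):
    # Placeholder heart-rate scoring; the real thresholds come from the
    # NEWS guide (Figure 2.2), which is not reproduced here.
    if hr <= 50: return 3
    if hr <= 90: return 0
    if hr <= 110: return 1
    if hr <= 130: return 2
    return 3

def ped_period(readings, k=2, d=0.25):
    sent, current = [], None
    for r in readings:
        s = ews_score(r)
        if s != current:          # situation changed: transmit (lines 4-10)
            sent.append((r, s))
            current = s
    # Period end: mean of each division, then least-squares fit (lines 11-14).
    n_div = round(1 / d)
    means = [float(np.mean(c)) for c in np.array_split(np.array(readings), n_div)]
    coeffs = np.polyfit(np.arange(1, n_div + 1), means, k)
    return sent, coeffs

readings = [90, 90, 91, 99, 100, 105, 110, 123, 121, 108, 103, 102]
sent, coeffs = ped_period(readings)
print(len(sent), "of", len(readings), "readings transmitted")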
Adapting Sensing Frequency

In WBSN, the vital signs of the patient must be collected continuously in order to understand his situation and make suitable decisions. This leads to a high level of redundancy among the collected records, for two reasons: first, the stability of the patient's situation, especially in low- and high-criticality cases; second, the short time slots between readings. This redundancy quickly consumes the sensor's available energy and complicates the medical team's task by burying useful information in a huge amount of data. Adapting the sensing frequency to the dynamics of the patient's situation has therefore become essential in WBSN in order to reduce the amount of data collection and send only useful information to the sink. In this section, we propose an efficient algorithm that studies the variation of the patient's situation during a set of successive periods (a round) and then determines the best sensing frequency for the sensor according to the patient's criticality level. As before, a round u consists of P periods, u = [P_1, P_2, ..., P_P]; for each period p ∈ u, the sensor collects the set of readings pR_u^vs, so that during u the collected sets form:

R_u^vs = [1R_u^vs, 2R_u^vs, ..., PR_u^vs]

Given the sets of readings collected during a round u, we define the weight of the score s_i, i ∈ [0, 1, 2, 3], as the frequency of occurrence of the score i during the round, as calculated by the following equation:

wgt(s_i) = (Σ_{k=1}^{P} Σ_{j=1}^{T} 1_{score(r_j)=i}) / (P × T)   (3.1)

Based on the score weight function, we define the criticality level C of a patient during the round u as:

C = (Σ_{i=0}^{3} s_i × wgt(s_i)) / (Σ_{i=0}^{3} s_i)   (3.2)

The criticality level of the patient can then be interpreted as shown in Table 3.1:

Table 3.1 Description of the patient criticality level.
Criticality level C | Description
0 ≤ C ≤ 0.3        | low criticality level
0.3 < C ≤ 0.6      | medium criticality level
0.6 < C ≤ 1        | high criticality level

In addition to the criticality level, we take a second parameter into account when adapting the sensing frequency of each sensor: the risk level of the patient, which is set by the medical team when the patient enters the hospital. In our platform, we define two risk levels R, depending on the patient's situation:

• Low risk: the patient enters the hospital in a stable situation and the collected vital-sign data are almost normal. In this case, the patient requires minimal observation from the medical staff.
• High risk: the patient enters the hospital in an unstable situation and most of the collected records fall outside the normal score range. In this case, the patient requires continuous monitoring by the medical team in order to evaluate the progress of his situation.

We therefore adapt the sensing frequency of a sensor according to both parameters: the criticality level C and the risk level R of the patient. The sensing frequency decision table (SFDT, Table 3.2) gives the sensing frequency that the sensor should adopt in the next round in order to reduce the amount of collected data, without losing the integrity of the information, and thus save energy. SFDT highlights two facts: 1) the new sensing frequency is calculated from the original period size T; for instance, if T is set to 100 records and (C, R) is (low, high), then the sensor should adapt its sensing frequency to 40 records in the next round, i.e. (40 %/100) × T; 2) the sensing frequency of the sensor increases with C and/or R.

Patient Monitoring Algorithm

In this section, we integrate the patient emergency detection (PED) and the adaptive sensing frequency proposed at the sensor level into a single algorithm, called the patient monitoring algorithm (PMA). Algorithm 6 shows the process of PMA, which takes the parameters of PED in addition to the risk level and the SFDT table. First, PMA applies the PED algorithm to the set of readings collected during each period of the round u (lines 1-3). Then, it computes the weight of each score observed in the round, based on equation 3.1, derives the criticality level from equation 3.2, and finally looks up the new sensing frequency in the SFDT. A sketch of this round-level adaptation is shown below.
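A compact sketch of the round-level adaptation follows. Only the (low, high) = 40 % entry follows the example given in the text; the other SFDT percentages are invented placeholders, since Table 3.2 defines the actual values.

def criticality(scores, P, T):
    # Score weights (equation 3.1) and criticality level (equation 3.2).
    wgt = {i: scores.count(i) / (P * T) for i in range(4)}
    return sum(i * wgt[i] for i in range(4)) / sum(range(4))

def next_frequency(C, risk, T):
    # Hypothetical SFDT rows: fraction of the original period size T.
    if C <= 0.3:
        row = {"low": 0.2, "high": 0.4}
    elif C <= 0.6:
        row = {"low": 0.5, "high": 0.7}
    else:
        row = {"low": 0.8, "high": 1.0}
    return int(row[risk] * T)

scores = [0] * 80 + [1] * 15 + [2] * 5        # the P*T = 100 scores of a round
C = criticality(scores, P=10, T=10)
print(round(C, 3), next_frequency(C, risk="high", T=100))  # low criticality -> 40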
Medical Team Decision-Making Model

In this section, we introduce the second level of our P2D platform, which is dedicated to the medical team at the hospital. One of the main challenges facing the medical team is analyzing the huge amount of data generated in WBSN while making fast and correct decisions. Predicting the progress of the patient's situation is another crucial task, as it allows pre-treatment and helps avoid undesired outcomes, especially death. In addition, maintaining an archive of each patient is mandatory in most hospitals, notably for disease diagnosis and for improving the quality of healthcare. In order to meet these challenges, P2D provides two main services to the hospital and the medical team: patient archiving and prediction of the patient's situation progress. In the next sections, we detail the proposed algorithms, which are applied from the ingestion layer up to the visualization and prediction layer on top of the Hadoop framework (see Appendix A), in order to handle the storage and processing of big data and to ensure fault tolerance.

Data Ingestion Layer

This layer is the entry point for the data coming from the WBSN sensors; the data are prioritized and categorized here so that they flow smoothly through the other layers. We use two Hadoop tools in this layer: Flume and Spark Streaming.

Flume. Flume is a distributed and reliable tool that efficiently collects, aggregates and transfers large amounts of data to the Hadoop cluster [START_REF] Severance | The apache software foundation: Brian behlendorf[END_REF]. It moves streaming data flows in batch mode and offers several recovery mechanisms to ensure fault tolerance. In our framework, Flume receives the periodic data collected by the sensors and delivers them to the data collector tool (Spark Streaming). Among the available ingestion tools, Flume was chosen for its strong ability to handle data of various types, which is precisely the case in WBSN.

Spark Streaming. Spark Streaming leverages the fast scheduling capability of Spark Core to perform streaming analytics, programmed in Scala [START_REF] Spark | Apache spark[END_REF]. It receives the ingested data from Flume and processes them in batches before passing them to the rest of the data pipeline. In our framework, Spark Streaming was selected over other tools for three characteristics required in WBSN: first, it supports both batch and stream processing, which is necessary to apply the various data analysis algorithms (explained in the next layer); second, it guarantees lower latency than tools such as MapReduce, which is strongly required in WBSN, especially in critical applications; third, it scales to any number of cluster nodes required by the WBSN application. We therefore deployed Spark on the master node in order to receive the data stream from Flume, perform the processing (next layer), and send the data to HDFS for storage.

Data Processing Layer

This layer is responsible for preprocessing the data before they are stored in the next layer. Data sent by the sensors are prone to loss for several reasons: 1) the long distance between the sensor and the sink; 2) bad conditions in the monitored environment; 3) congestion in a densely deployed network; 4) failure of the sensor itself. In such cases, the missing data must be recovered before any processing in order to ensure the accuracy of the decisions taken. In this chapter, we focus on the numerical missing data generated by WBSN. Hence, in this layer, we introduce the patient records archive algorithm, implemented with the machine learning module of Apache Spark, in order to regenerate the data from the coefficients of the LSA method.
In addition, after the data generation process, we apply two algorithms, Moving Average (MA) and Exponential Smoothing (ES), in order to recover the data that are missing after the adaptive sensing frequency algorithm has reduced the sampling.

Patient Records Archive Algorithm

Archiving patient data is a key operation in hospitals. However, most of the data reduction techniques proposed in the literature for WBSN aim to minimize the amount of data sent by the sensor, in order to save its energy, without considering that the patient records must be regenerated for archiving purposes [START_REF] Abiodun | Reducing power consumption in wireless body area networks: a novel data segregation and classification technique[END_REF][START_REF] He | Lightweight and confidential data discovery and dissemination for wireless body area networks[END_REF][START_REF] Koussaifi | Real-time stress evaluation using wireless body sensor networks[END_REF][START_REF] Habib | Self-adaptive data collection and fusion for health monitoring based on body sensor networks[END_REF][START_REF] Habib | Real-time sampling rate adaptation based on continuous risk level evaluation in wireless body sensor networks[END_REF][START_REF] Habibe | Health risk assessment and decisionmaking for patient monitoring and decision-support using wireless body sensor networks[END_REF]. In addition, the data collected by the sensors are vulnerable to loss before reaching the sink for several reasons: 1) long distances; 2) congestion due to network overload in case of dense biosensor deployment; 3) obstacles; 4) failure of the biosensor itself. In such cases, the medical team can neither make the right decision about the patient's situation nor store the missing records for later analysis. Thus, to deal with missing records, the data must be preprocessed before any decision or storage step.

Algorithm 7 shows the patient records archive (PRA) process proposed in our P2D platform to archive the patient's history during his stay in the hospital. PRA takes advantage of the sensor records and of the coefficient set of the LSA equation received at each period. For each slot time in the period, if the sink has received the corresponding record from the sensor, meaning that the patient's situation varied, the record is stored directly in the patient archive; otherwise, i.e. when the record was not sent by the sensor because it was similar to the previous one, the sink regenerates the record of the current slot from the LSA coefficients and adds it to the patient archive. A sketch of this reconstruction is given below.
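The archive reconstruction at the sink can be sketched as follows. Here, received is a hypothetical mapping from slot to the record actually transmitted by the sensor, and, for simplicity, the polynomial is evaluated directly on slot indices (the thesis fits it on division means, so the slots below index the divisions).

import numpy as np

def build_archive(received, coeffs, T):
    # received: {slot: record} for the slots actually transmitted; the
    # other slots are regenerated from the LSA coefficients (Algorithm 7).
    archive = []
    for slot in range(1, T + 1):
        if slot in received:
            archive.append(float(received[slot]))          # real record
        else:
            archive.append(float(np.polyval(coeffs, slot)))  # regenerated
    return archive

coeffs = [-0.7219, 11.77, 77.92]   # a_2, a_1, a_0 from the Figure 3.3 example
received = {1: 90, 3: 91}          # hypothetical transmitted records
print(np.round(build_archive(received, coeffs, T=4), 1))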
Moving Average (MA) Algorithm

Moving average is a well-known statistical method for estimating missing values in time series: if the value at slot time y is missing, the average of the previous ws slot values is computed and assigned to it. MA is thus based on the concept of a shifting window of size ws; in particular, if a value is missing from the window itself, the first available future value is assigned to it. Moving averages are mostly used to study trend directions, whether in business, weather, networking, etc. [START_REF] Anderson | Forecasting with prediction intervals for periodic autoregressive moving average models[END_REF][START_REF] Chen | Functional coefficient moving average model with applications to forecasting chinese cpi[END_REF]. Assume the sink receives a vector of T readings from a sensor n_i during T time slots, R_i = {(y_i, r_i) such that i ∈ [1, T]}, and that MA defines a window of ws readings. If a reading r_j at slot time j is not received by the sink, it is computed according to the following equation:

r_j = (Σ_{i=j-ws}^{j-1} r_i) / ws   (3.3)

Figure 3.4 shows an illustrative example of the moving average algorithm for a set of 8 readings collected by a sensor, 3 of which are missing, at slots y_4, y_5 and y_7. The window size is 3. To recover the missing reading at slot y_4, we compute the average of the readings within the window, (10 + 11 + 11)/3, and assign the result, 10.66, to slot y_4. We then recover the missing reading at slot y_5 by shifting the window to slots y_2 through y_4 and averaging the readings in the window. The same process is applied to the reading at slot y_7, using the window at slots y_4 through y_6.

Exponential Smoothing (ES) Algorithm

Like MA, exponential smoothing (ES) is a time-series analysis technique based on the window concept. Whereas MA weights the previous observations equally, ES assigns exponentially decreasing weights to the observations over the time slots. Given the set of readings R_i = {(y_i, r_i) such that i ∈ [1, T]} collected by a sensor, the missing reading r_j at slot y_j is computed with the ES formula as follows:

Est_1 = r_1
Est_j = ζ × r_{j-1} + (1 - ζ) × Est_{j-1}   (3.4)

where:
• Est_1 is the estimated reading at slot time y_1, initially equal to r_1;
• Est_j is the estimated value at slot time y_j;
• ζ is the smoothing factor, with 0 < ζ < 1.

Both estimators take only a few lines of code, as shown in the sketch below.
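The sketch below reproduces the Figure 3.4 data for MA and applies ES with an assumed smoothing factor ζ = 0.5.

def moving_average_fill(readings, ws):
    # readings: list with None for missing values (equation 3.3).
    filled = list(readings)
    for j, r in enumerate(filled):
        if r is None:
            window = filled[max(0, j - ws):j]   # previous ws values
            filled[j] = sum(window) / len(window)
    return filled

def exponential_smoothing(readings, zeta):
    # Estimates per equation 3.4; readings are assumed complete here.
    est = [readings[0]]
    for j in range(1, len(readings)):
        est.append(zeta * readings[j - 1] + (1 - zeta) * est[-1])
    return est

data = [10, 11, 11, None, None, 12, None, 12]   # the Figure 3.4 example
print(moving_average_fill(data, ws=3))          # y4 -> 10.66, etc.
print(exponential_smoothing([10, 11, 11, 12], zeta=0.5))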
Data Storage Layer

This layer stores the data sent by the WBSN after the fault-recovery algorithms (MA or ES) have been applied. It is based on HDFS, which splits the received data into small blocks and distributes them across the cluster nodes. HDFS is characterized by two properties: first, it ensures the fault tolerance of the stored data thanks to the replication of the data across nodes (at least three); second, the data are processed in parallel, which decreases the latency when a decision must be taken. In this work, we deployed our framework on a cluster of three nodes, each running HDFS.

Data Visualization and Prediction Layer

In the last layer of our framework, we focus on providing efficient data access and retrieval so that the end user can visualize and analyze the stored data. In addition, we implemented a prediction technique based on the Prophet method in order to forecast the patient's situation. We propose two visualization tools, Grafana and Matplotlib: the first offers a graphical user interface to access the data, while the second is based on Python programming. The goal of both is to produce graphical representations of the retrieved data using statistical graphics and plots. The following paragraphs highlight each tool used in our framework.

Grafana. Grafana is a multi-platform open-source analytics tool that helps users visualize and understand trends within vast amounts of data [START_REF]Grafana labs[END_REF]. It runs on top of different data stores (Hadoop in our framework) and allows end users to create and modify dashboards for their applications. Grafana supports a large number of charts and graphs for analyzing the stored data, and it ships with a built-in alerting engine that lets end users attach conditional rules to a dashboard and receive a notification, by email or Slack, when an alert is triggered. With Grafana, our framework offers the following services to the WBSN end users: 1) understanding the data collected by the sensors and stored in the Hadoop cluster through the various Grafana plots; 2) creating alerts that notify the end users of any abnormal situation; 3) visualizing the metrics of the Hadoop cluster (CPU, memory, disk, etc.) and scaling the cluster when necessary.

Matplotlib. Matplotlib is the most widely used plotting library in Python, together with its NumPy extension [START_REF] Hunter | The matplotlib user's guide[END_REF]. In our framework, we created a Python script that reads the periodic WBSN data sent to the Hadoop storage, visualizes them in real time for the end users using Matplotlib, and predicts the patient's situation using the Prophet method. An interesting future extension of our framework would be to push the plotted graphs to the end users through a mobile application. A minimal version of such a visualization script is shown below.
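In the sketch below, the input file, the refresh interval and the direct file read (instead of an HDFS read-back) are placeholders.

import matplotlib.pyplot as plt
import numpy as np

# Minimal periodic visualization of a stored vital-sign series.
readings = np.loadtxt("hr_period.csv", delimiter=",")   # hypothetical file

plt.ion()                        # interactive mode for live updates
fig, ax = plt.subplots()
line, = ax.plot([], [], label="Heart rate")
ax.set_xlabel("slot")
ax.set_ylabel("bpm")
ax.legend()

for t in range(1, len(readings) + 1):
    line.set_data(np.arange(t), readings[:t])   # append the latest slot
    ax.relim()
    ax.autoscale_view()
    plt.pause(0.1)               # placeholder refresh interval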
Prediction of the Patient Situation Progress

In healthcare applications, the ability of the medical team to ensure continuous monitoring and to predict the progress of the patient situation is crucial for patient safety. Hence, healthcare forecasting has received great attention from researchers in WBSN [START_REF] Alharthi | Healthcare predictive analytics: An overview with a focus on saudi arabia[END_REF][START_REF] Soyiri | An overview of health forecasting[END_REF]. The main objective of healthcare forecasting is to provide not only estimates of the current situation of the patient but also forecasts of its future situation. This helps the medical team know when and where the patient is at risk and, through this understanding, take preventive actions to reduce the effect of illness and avoid death. In this section, we propose a prediction algorithm that allows the medical staff to forecast the progress of the patient situation and take the appropriate clinical response. Indeed, one can find many prediction techniques in the literature, such as linear and logistic regression, decision trees, random forests and neural networks. These techniques have been introduced in various domains such as stock market prediction, scientific studies, sport monitoring, the financial sector and psychology. In this chapter, we focus on the Prophet method, which was recently proposed and has not yet been adapted to WBSN. We first recall the Prophet prediction method, then we adapt it to the WBSN case.

• Recall of Prophet: Prophet [START_REF] Medina | Prophet, a web-based tool for class prediction using microarray data[END_REF] is an open source tool released by Facebook in 2017 that is implemented in R and Python. Basically, Prophet is a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly and daily seasonality, plus holiday effects. In addition, Prophet takes advantage of the correlation between the data in order to handle missing data and outliers. Recently, Prophet has been used in various domains such as the streamflow of rivers [START_REF] Tyralis | Large-scale assessment of prophet for multi-step ahead forecasting of monthly streamflow[END_REF], data warehouses [START_REF] Chen | Prophet: Precise qos prediction on non-preemptive accelerators to improve utilization in warehouse-scale computers[END_REF] and business [START_REF] Yenidoğan | Bitcoin forecasting using arima and prophet[END_REF][START_REF] Navratil | Decomposition and forecasting time series in business economy using prophet forecasting model[END_REF], and it allows data scientists and analysts to provide efficient forecasting models for a variety of problems in a business scenario. Mathematically, Prophet uses an additive regression model with three main components (trend, seasonality and holidays), which are combined in the following equation:

y(t) = g(t) + s(t) + h(t) (3.5)

where:
- Trend, g(t): the overall increase or decrease of the series over time. Prophet automatically detects changes in trend by selecting change points from the data.
- Seasonality, s(t): the data component that undergoes regular and predictable changes over a fixed period of time. It is modeled using Fourier series (for the yearly seasonality) and dummy variables (for the weekly seasonality).
- Holidays, h(t): an optional component that models the effect of holidays or major events, which is determined by the users.

• Integrating Prophet into WBSN: Although Prophet has been adopted in a large number of domains, to the best of our knowledge this is the first time it is integrated into body sensor networks. Our objective is to study the variation between the records collected about a patient during periods of time, and then to forecast the progress of its future situation. As mentioned in Section IV, we assume that the lifetime of a sensor is divided into a set of rounds, where each round u contains a number P of periods. We divide the periods of each round into two parts:
- Training data periods: in which the periodic datasets collected by the sensor are received by the sink. Assume that, at a given period of the round, the sink has received σ datasets from the sensor, which act as training and historical data for Prophet. The larger the training data, the more accurate the forecasting given by Prophet becomes.
- Predicted data periods: in which the sink has to predict, based on Prophet, the datasets that will be sent by the sensor in the remaining P − σ periods. For every new dataset received by the sink, the training data size increases by one and the prediction process becomes more accurate.

Figure 3.6 shows an illustrative example of how to apply Prophet forecasting in our platform with a round set to 5 periods. When the sink receives the dataset of the first period, i.e. σ = 1, it applies Prophet in order to estimate the datasets of the remaining 4 (i.e. 5 − σ) periods. Then, when the second dataset is received (i.e. σ = 2), it applies Prophet to forecast the remaining 3 periods, and so on. Indeed, in our platform, we omitted the holidays component from equation 3.5 and we considered the trend as the variation within each training dataset, while the seasonality represents the variation between the training periods.
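As an illustration of this training/forecasting split, the sketch below fits Prophet on the datasets already received and forecasts one additional period; the toy series and the one-reading-per-second slot frequency are assumptions of the example, and the package is named prophet in recent releases (fbprophet in older ones):

import pandas as pd
from prophet import Prophet  # pip install prophet

# Training data: the sigma datasets already received by the sink, one row per
# time slot; the column names 'ds' and 'y' are required by the Prophet API.
train = pd.DataFrame({
    "ds": pd.date_range("2023-01-01", periods=900, freq="s"),
    "y":  [80 + (i % 60) * 0.1 for i in range(900)],  # toy heart-rate series
})

model = Prophet(yearly_seasonality=False, weekly_seasonality=False)
model.fit(train)

# Forecast the next period of 900 slots (one of the remaining P - sigma periods).
future = model.make_future_dataframe(periods=900, freq="s")
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())

Each time a new dataset arrives, the sink would append it to train and refit, which is exactly why the forecast accuracy improves as σ grows.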
Clinical Response Decision-Making Algorithm

Clinical decision making is the process by which the medical team follows the progress of the patients' situations and then determines who needs what, and when. In order to make our platform more efficient, we propose a clinical response decision-making algorithm that allows the medical team to determine the number of observations needed to check the patient situation during a period of time. The proposed algorithm takes advantage of the results of the Prophet forecasting and of the criticality level of the patient, and works according to the following steps:

• Assume a period p during a round u; we estimate the predicted datasets for the remaining P − σ periods based on the Prophet forecasting.
• We calculate the estimated criticality level C_i, based on equation 3.2 and for all remaining periods, for each vital sign VS_i of the patient.
• We calculate the aggregated score, ScoreU, over all the criticality levels of the vital signs as follows:

ScoreU = ∑_{i=1}^{|VS|} C_i (3.6)

• The medical team makes the clinical response according to a predefined clinical response decision-making (CRDM) table (see Table 3.3). Typically, the CRDM table contains a set of m decisions, where each decision is taken according to a range of ScoreU, e.g. [b_i, b_j], together with the observation time needed for the patient. Indeed, the CRDM table is defined by the experts and the doctors.
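These steps can be sketched in a few lines of Python; the score ranges, decisions and observation times in the table below are illustrative placeholders, since the actual CRDM table is defined by the experts:

# Illustrative CRDM table: (lower bound, upper bound, decision, observation time in minutes).
CRDM = [
    (0,  4,  "routine monitoring",     240),
    (5,  9,  "increase observation",    60),
    (10, 14, "urgent clinical review",  15),
    (15, 99, "emergency response",       0),
]

def score_u(criticality_levels):
    """Aggregate the criticality levels C_i of all vital signs (equation 3.6)."""
    return sum(criticality_levels)

def clinical_response(criticality_levels):
    s = score_u(criticality_levels)
    for low, high, decision, obs_time in CRDM:
        if low <= s <= high:
            return s, decision, obs_time
    raise ValueError("score outside of the CRDM table")

# Example: estimated criticality levels of |VS| = 4 vital signs.
print(clinical_response([3, 2, 1, 3]))  # -> (9, 'increase observation', 60)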
Simulation Results

In order to evaluate the performance of our platform, we used the same real health data collected from the Multiple Intelligent Monitoring in Intensive Care (MIMIC) database of PhysioNet [START_REF]Mimic database on physionet[END_REF]. In our simulation, we used a file that includes a log of about 100000 readings for each patient. We assume that each biosensor reads the data from its corresponding file for a period of time and sends them, after applying our mechanism, to the sink placed 50 meters away. We implemented the algorithms used in our platform in a Java-based simulator, and we compared the obtained results to those obtained with the technique proposed in [START_REF] Habib | Self-adaptive data collection and fusion for health monitoring based on body sensor networks[END_REF], i.e. the modified local emergency detection (MLED). The parameters used in our simulations are shown in Table 3.4.

Patient Situation Study

In our simulations, we selected the heart rate data of three patients having different situation levels (low, medium and high) in order to evaluate the performance of our framework. The calculated scores of the readings collected for each patient are shown in Figure 3.7. The heart rate readings collected from the first patient (Figure 3.7(a)) are totally in the normal range (score = 0). The readings of the second one are almost all abnormal, with a criticality score equal to 2 (Figure 3.7(b)). The third patient shows a severe criticality of its situation, where the scores vary between 2 and 3 (Figure 3.7(c)). Therefore, in the remainder of the simulations, we evaluate each metric according to the variation of the patient situations.

Performance Study of PRA Algorithm

In this section, we study the accuracy of the PRA algorithm proposed in our framework in terms of regenerating the raw data collected by the sensors. Indeed, the accuracy of PRA is highly dependent on two parameters (the LSA degree and the division size), which are studied in the following sections.

The Effect of LSA Degree (k)

At the end of each period, PED makes the sensor send the non-sequential similar readings, along with the set of LSA coefficients, to the sink, which tries to regenerate the raw data of the sensor in that period based on the coefficient set. Obviously, the higher the degree k of the LSA polynomial, the more accurate the regenerated data. Figure 3.8 shows the difference between the raw data and the data regenerated using the PRA algorithm when varying the LSA degree k to 3, 5 and 7 respectively. On the one hand, the obtained results show a high accuracy of the data regenerated using the LSA method (Figures 3.8(a), 3.8(b) and 3.8(c)). On the other hand, increasing the value of k decreases the differences between the raw data and the regenerated ones (Figures 3.8(a) to 3.8(c)). Therefore, PRA ensures a high quality of data for both the medical team and the patient archiving process.

The Effect of Division Size (d)

In order to reduce its processing overhead, PED selects a subset of readings, based on the division size d, to calculate the LSA polynomial, instead of using all the readings collected during the period. Obviously, the larger the division size, the smaller the number of selected readings. Figure 3.9 shows the data regenerated at the sink using PRA, compared to the raw data, when varying the value of d to 5 % and 10 % respectively. The obtained results allow two observations: 1) the accuracy of the data regenerated using PRA is high in both cases; 2) the accuracy of PRA increases when the division size decreases. This is because the number of readings selected by PED increases when the division size decreases, which increases the accuracy of the LSA polynomial and therefore the performance of PRA.
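For intuition about the role of k, the sketch below emulates the LSA-based regeneration with NumPy's least-squares polynomial fit; it is an illustration on a synthetic period, not the framework's PED/PRA implementation:

import numpy as np

# Toy period of T readings; PED would transmit only the LSA coefficients
# (plus the non-similar readings), and PRA regenerates the rest at the sink.
T = 60
slots = np.arange(T)
raw = 80 + 10 * np.sin(slots / 9.0) + np.random.default_rng(0).normal(0, 0.5, T)

for k in (3, 5, 7):
    coeffs = np.polyfit(slots, raw, deg=k)    # least-squares approximation of degree k
    regenerated = np.polyval(coeffs, slots)   # PRA-style regeneration from the coefficients
    mae = np.mean(np.abs(raw - regenerated))
    print(f"k = {k}: mean absolute error = {mae:.3f}")  # the error shrinks as k grows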
Performance Study of PMA Algorithm

In our framework, the PMA algorithm allows each sensor to adapt, at the end of each period, its sensing frequency according to the sensing frequency decision table (SFDT). Figure 3.10 shows how each sensor is able to adapt its sensing frequency after each period for the three selected patients with different criticality situations. We fixed the period size to 3600 readings and we compared the results of PMA to those obtained with MLED and with the naïve method, i.e. the one where all the readings collected by the sensor are sent to the sink. The obtained results show that both algorithms allow the sensor to dynamically adapt its sampling frequency according to the variation of the patient situation. However, PMA outperforms MLED in terms of reducing the sensing frequency in the cases of low and medium patient situations (Figures 3.10(a) and 3.10(b)), while the sensing frequency is better adapted using MLED in the case of a high patient situation. This reveals one of the drawbacks of the MLED algorithm, in which a sensor should collect more data in an emergency case in order to continuously update the medical team about the patient situation. We can also observe that the sensing frequency of the sensor, using our algorithm, increases with the criticality of the patient situation (low, medium and high respectively). For instance, PMA adapts the sensing frequency of the sensor to its minimum in the case of a low patient situation (Figure 3.10(a)), while it increases it in the high patient situation case (Figure 3.10(c)). This confirms the behavior of our algorithm in terms of providing an accurate transmission of the patient situation to the medical team. On the other hand, Figure 3.11 shows the percentage of the sensed data using the PMA, MLED and naïve algorithms as a function of the period size. The obtained results are strongly related to those shown in Figure 3.10. Consequently, both algorithms (PMA and MLED) reduce the data collected by the sensor compared to the naïve approach, with the amount of sensed data reduced by up to 76 % and 54 % using PMA and MLED respectively. Moreover, we observe that PMA reduces the sensed data by 27 % to 66 % compared to MLED in the low and medium patient situations, while MLED reduces the sensed data by up to 52 % compared to PMA in the high patient situation.

We also notice that, using PMA, the sensing frequency of the sensor is not greatly affected by the variation of the period size, while, using MLED, it decreases with the period size. This is because the sensing frequency using PMA is dynamically adapted according to the patient situation level, and thus depends on the patient rather than on the period size. MLED, on the other hand, uses the Fisher test to adapt the sensing frequency, which gives better results when the variation between the sensed data is small, i.e. for small period sizes.

Data Transmission Study

In Figure 3.12, we show the amount of data transmitted after applying PED, MLED and the naïve approach, as a function of the period size. Indeed, PED makes the sensor send the sequential readings with different scores, accompanied by the LSA coefficient set, to the sink in order to reduce the amount of data transmission. MLED, in contrast, limits the sensor data transmission to the first reading of the period in addition to the critical readings, i.e. those having a score > 0. The obtained results lead to two observations. For the low patient situation, both algorithms send a small amount of data, up to 2 %, to the sink compared to the naïve approach. This is due to the non-critical readings collected about the patient. For the medium and high patient situations, PED reduces the data transmitted from the sensor by up to 95 % compared to MLED. This is because the data transmission using PED is limited to the non-sequential similar readings, while MLED sends all the critical readings to the sink.

Performance Study of MA

In Figure 3.13, we show the performance of the moving average method in terms of recovering the missing readings, for a portion of about 1300 HR (Figure 3.13(a)) and RESP (Figure 3.13(b)) readings, compared to the original raw sensor data. We varied the window size (ws) of MA to 3 and 6 respectively. The obtained results show the following observations: 1) MA ensures a high accuracy for both abnormal and normal conditions; 2) the smaller the window size, the higher the accuracy of MA; this is because a missing reading is more similar to its near previous readings than to the far ones.

Performance Study of Processing Speed

In this section, we evaluate the performance of our framework in terms of processing (using MA and ES) and storing (using HDFS) the whole WBSN data described in the Simulation Results section. We calculated the processing speed, i.e. the execution time in the simulation, after regenerating the missing readings and storing the whole WBSN data. The obtained results are shown in Figure 3.15. We first observe that MA outperforms ES in terms of regenerating the missing values with a faster execution. We can also notice that the execution time of MA decreases with the window size, while the processing speed of ES is almost constant, independently of the ζ threshold.

Performance Study of Prophet Method

In this section, we show the efficiency of the Prophet method integrated into the WBSN in terms of predicting the progress of the patient situation. In our simulations, we assumed a round of 15 periods, where each period is set to 900 readings. Thus, at a given period i ∈ [1, 15], the sink has to predict the readings of the remaining 15 − i periods according to the Prophet method.
Figure 3.16 shows a comparison between the raw data and the predicted data generated using Prophet after periods 1, 3, 8 and 12 of the round; the blue curve in each figure shows the readings received by the sink, i.e. the training data, on which it has to train in order to forecast the criticality level of the patient during the next periods, i.e. the orange curve. On the one hand, the obtained results show that Prophet ensures a good level of data accuracy regarding the forecasting of the patient readings. Hence, the medical staff will have accurate information about the progress of the patient situation for the next periods; thus, they can prevent the patient from entering a critical situation, and the nurses can determine the appropriate observations needed for that patient. On the other hand, we observe that the data predicted using Prophet becomes more accurate as the period number increases (Figures 3.16(a) to 3.16(d)). This is because the size of the training data increases, which allows Prophet to better understand the variation of the patient situation and thus increases the accuracy of the forecast of its progress.

Conclusion

Public health will remain one of the main concerns of governments and industries, due to the population growth and the increasing number of aging and elderly people. Thus, WBSN will receive more attention in patient monitoring as an efficient and low-cost solution for hospitals. In this chapter, we have proposed a robust Patient-to-Doctor (P2D) framework for real-time health monitoring and decision making, which works on two levels (sensor and sink). This framework covers the entire life-cycle of data science, including data collection, data ingestion, data preprocessing, data storage and visualization, and it relies on the Hadoop ecosystem. P2D aims to reduce the energy consumption of the sensor and to send any abnormal situation to the coordinator. In addition, P2D allows building an archive for each patient and helps the medical team make decisions regarding the progress of the patient situation. Through simulations on real health data, we demonstrated the efficiency of our platform compared to other existing systems. The next chapter emphasizes the nurse scheduling issue and presents an efficient framework for nurse-patient organization. We will also present detailed simulations of this framework to show its effectiveness in balancing the workload of the medical personnel.

Nurses are directly responsible for monitoring and evaluating patients and performing immediate interventions to reduce risks or prevent medical complications [START_REF] Ghatole | Survey on wireless body area network for healthcare applications[END_REF]. A treating nurse even helps educate patients and family members about post-hospital care before the hospital discharge. Furthermore, nurse expenses represent more than half of the hospital's expenditure. Moreover, nurses face two major challenges when using WBSN: analyzing the large amount of data provided by the sensors, and organizing their monitoring programs and shifts. In this chapter, we present a novel and efficient framework for nurse-patient intelligent task organization that treats the scheduling problem in two phases. The contributions of this framework can be summarized as follows:

• Patient classification: One of the basic tasks assigned to the nurses is to classify the patients according to their critical states. The critical state of a patient is based on the vital signs sensed during a specific period. In this framework, we introduce a novel machine learning based method that aims to classify the criticality degree of each patient as low, medium or high, based on the criticality of all its vital signs.
• Patients assignment to nurses: A grouping method that aims to distribute a group of patients into several clusters is proposed within this framework. Each cluster is assigned to one nurse. We introduce a strategy that provides a fair distribution of the patients over the active nurses, in a way that normalizes the effort exerted by each nurse. This fair distribution takes into consideration the number and the criticality level of the patients.

• Patients scheduling: A routing protocol is built within this framework in order to dynamically organize the nurse-patient schedule. This protocol allows specifying the best route that a nurse should follow in order to serve all her assigned patients. This routing algorithm depends on several parameters, such as the patients' severity levels and their ages. Moreover, this protocol controls which patient should be examined and estimates the examination duration.

The rest of this chapter is organized as follows: sections 4.2 and 4.3 detail the two phases proposed in our framework. The simulation results with the corresponding discussion are shown in section 4.4. Finally, we conclude in section 4.5.

Patient Classification Phase

In this section, we present the patient classification method that classifies the patients according to the severity of their vital signs. Subsequently, an intelligent classification model is built through a training phase and a testing phase (Figure 4.1). This model allows the nurses to periodically identify the criticality level of each patient as either low, medium or high. These classes let the nurses follow up on the patients' situations and prevent any health deterioration through an appropriate medical intervention.

Training Phase

This phase consists of two steps: shapelet generation and shapelet selection. A shapelet SH with length l is a subsequence selected from one of the data series D of the archive, of length L. SH represents D and replaces it, where L ≫ l.

Shapelet Generation

We generate several shapelets from the patient history, where each shapelet SH_i(D, vs, level, l) is specific to one vital sign vs. In addition, a shapelet inherits its level from the level of the data series D from which it is selected; SH_i denotes the i-th shapelet. In order to select the optimal shapelet that represents the data series D, the Dynamic Time Warping (DTW) algorithm is applied [START_REF] Amjad | Robust energy efficiency optimization algorithm for health monitoring system with wireless body area networks[END_REF]. DTW allows calculating the distance between two sequences of different lengths. In Figure 4.2, we give an example of a heart rate series D with a high level and L = 8, where the goal is to select a shapelet SH of size 4 from D, using a shift window of size 1 to obtain all the candidate shapelets. Then, we calculate the distance between each shapelet SH_i and the data series D based on equations 4.1 and 4.2:

DTW(D, SH_i) = min ∑ d_{ij}, W = (w_1, w_2, ..., w_k) (4.1)

The relationship between D and SH is represented by a time warping path W, where w_k is the element that indicates the alignment and matching relationship between each two readings of the two series, with max(L, l) ≤ k ≤ L + l + 1, and d_{ij} = d(r_i, r_j) is the distance between each two readings r of the two series D and SH. Since any distance measurement can be used, we apply the Euclidean distance in our framework. Finally, the optimum shapelet, i.e. the one having the minimum distance with the original D, is elected. Therefore, for each vital sign, we obtain many shapelets that cover the whole level space (low, medium and high).
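A compact dynamic-programming sketch of the DTW distance and of the shapelet election described above (written for clarity rather than speed; the toy series and the window length are assumptions of the example):

def dtw(a, b):
    """Dynamic Time Warping distance between two series of possibly different
    lengths, with the pointwise distance d(r_i, r_j) = |r_i - r_j|."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # the warping path may move right, down, or diagonally
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Electing the optimal shapelet: slide a window of length l over D and keep
# the candidate subsequence with the minimum DTW distance to D itself.
series = [90, 92, 95, 99, 110, 120, 123, 131]   # toy high-level HR series, L = 8
l = 4
candidates = [series[i:i + l] for i in range(len(series) - l + 1)]
best = min(candidates, key=lambda sh: dtw(series, sh))
print(best)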
Shapelet Selection

In order to eliminate the redundant shapelets, and therefore to reduce the time consumption of the testing process, a shapelet selection mechanism is applied after the generation step. This mechanism removes some shapelets among the similar ones. After the generation step, three groups of shapelets are obtained for each vital sign: a group of low shapelets, a group of medium shapelets, and a group of high shapelets. The sink processes each group according to the following steps:
• Calculate the percentage of similarity between all the shapelets using DTW (see above).
• Pair the two shapelets having the maximum correlation (minimum DTW).
• Randomly remove one of them.
• Repeat until a sufficient number of shapelets remains.

Testing Phase

After shapelet generation, each set of shapelets for a specific vital sign is saved in the corresponding sensor that collects this type of data. The testing phase aims to determine the criticality level of a patient at each period according to the levels of all the vital signs. It operates on two levels: the sensors and the sink.

At Sensor Level

At the end of each period, the sensor searches, among all the saved shapelets, for the shapelet most similar to the data collected during the period, according to the following steps:
• Divide the set of collected data into l equal divisions (where l is the shapelet length).
• Find the mean of each division.
• Compare the series of means with each shapelet by calculating the DTW.
• Select the shapelet having the minimum DTW with the series of means.
• Send the level of the selected similar shapelet to the sink.

At Sink Level

The sink receives from one patient the levels of all the vital signs: heart rate level, oxygen saturation level, etc. Each vital sign vs_i is assigned a score s_i according to its level.

Nurse Scheduling Phase

For an accurate regulation of the hospital, patient scheduling is one of the most important challenges discussed by many researchers. Patient scheduling is the research problem of finding the optimal way to assign nurses to shifts, usually under a set of hard constraints. This part of our framework is dedicated to the sink, which is connected to all the patients' biosensors. Our technique consists of the following two steps.

Patients Assignment to Nurses Algorithm

In order to better organize the work of the nurses, this step starts, after the patient classification phase, by distributing the patients at each period, taking into consideration two parameters: the level and the number of patients. This step aims to divide the patients among the nurses of the same floor and to help the medical staff take charge of all the patients in an organized manner. Two clustering-based mechanisms, called patients distribution-based criticality balanced (PDCB) and patients sorting-based grouping (PSG), are proposed. In addition, we propose a hybrid optimization algorithm, called GA-PSO, that combines the genetic algorithm (GA) with particle swarm optimization (PSO). These techniques aim to ensure the similarity between clusters in terms of number of patients and criticality levels, while ensuring the diversity of patient levels within the same cluster. In the following sections, we describe the three techniques.
Patients Distribution-based Criticality Balanced (PDCB) Mechanism

Algorithm 8 explains the process of the PDCB mechanism. It aims to allocate to each nurse the group of patients whom she should follow up during a period p. At the end of p, the sink owns the list Q of the criticality rates of all the patients, where each C_i in Q indicates the criticality rate of patient i. The algorithm takes the following parameters: the number of patients (NP), the number of nurses (NN), and Q. First, the mechanism forms the clusters based on the number of nurses, where G_i is the i-th cluster, assigned to the i-th nurse (lines 1-5). Then, we select the first NN patients with the highest criticality from Q and put each of them in a cluster. After that, the criticality of each cluster is calculated using the equation in line 9. Furthermore, we balance the clusters that are not equal according to the criticality of the first cluster (line 14), by electing a patient from Q as defined in the equation in line 15:

14: if C_{G1} ≥ C_{Gj} then
15:   select C_k from Q such that C_{Gj} + C_k ≥ C_{G1} + ε
16:   G_j ← G_j ∪ {Patient k}

Finally, we repeat until Q becomes empty.

Patients Sorting-based Grouping (PSG) Mechanism

Algorithm 10 shows the process of the PSG mechanism, which takes the same parameters as Algorithm 8. It starts by sorting the criticality list Q of all the patients in decreasing order (line 1) and by forming NN clusters (lines 2-6). Then, it iteratively selects the first and the last patients of the list Q and puts each such pair in a cluster G_i; it then calculates the criticality of the cluster by adding the criticality rates of the patients in the cluster (lines 7-14). This step is repeated until Q becomes empty, or until the number of remaining patients is smaller than or equal to the number of clusters. After that, the algorithm distributes the remaining patients in a way that balances the criticality of all the clusters; this is done by selecting the cluster having the minimum total criticality C_{Gi} and assigning to it the patient having the maximum criticality rate (lines 15-20).
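The balancing idea shared by PDCB and PSG can be sketched with a simplified greedy variant (this is an illustration in the spirit of the two mechanisms, not the exact Algorithms 8 and 10):

def group_patients(criticalities, nn):
    """Greedy grouping: sort the patients by decreasing criticality and always
    assign the next patient to the currently least-loaded cluster."""
    order = sorted(range(len(criticalities)), key=lambda i: -criticalities[i])
    clusters = [[] for _ in range(nn)]
    loads = [0.0] * nn
    for i in order:
        k = loads.index(min(loads))      # cluster with minimum total criticality
        clusters[k].append(i)
        loads[k] += criticalities[i]
    return clusters, loads

# 8 patients, 3 nurses: criticality rates C_1..C_8.
Q = [9, 7, 7, 5, 4, 3, 2, 1]
clusters, loads = group_patients(Q, nn=3)
print(clusters)  # patient indices per nurse
print(loads)     # near-equal accumulated criticality per cluster, e.g. [13, 12, 13]

Placing the most critical patients first and always filling the lightest cluster is what keeps the high-level patients dispersed over the groups while equalizing the cumulated criticality.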
Genetic Algorithm combined with Particle Swarm Optimization (GA-PSO)

Meta-heuristics are problem-independent strategies that allow us to explore the solution space more completely and, ideally, arrive at a good solution that is sometimes the same as the global optimum. In GA-PSO, nodes are encoded as genes during the GA phase. A chromosome encodes a path from the source to the destination, and the initial population is the set of all potential paths from source to destination. The fitness function must be stated in terms of the scheduling parameters that are necessary, and it is used to evaluate the fitness value of each chromosome. The PSO phase then begins, which consists of selecting parents from the population. During the GA phase, the population is initialized and the fitness function is evaluated; later, in the PSO phase, particle positions and velocities are generated, then the velocity update and finally the position update are performed. This leads to the discovery of the best solutions of the solution space, which are used as parents in GA's crossover and mutation operations. When the termination requirement is reached, the new population is presented as a solution set with optimal paths that satisfy the relevant schedule metrics.

• GA-PSO Algorithm: Algorithm 1 shows the process of GA-PSO, which selects the best feature combination in this mechanism. The algorithm takes as input the criticality rate of each patient, along with the configuration parameters required by GA, and returns as output the best nurse scheduling solution B, for the purpose of serving the patients in a timely manner after the collection of the vital signs. First, the algorithm randomly generates an initial population ω of combined nurse-patient repartitions (line 1). Then, each combined solution set is evaluated according to the fitness function (line 3), and the set having the maximum fitness value is selected as the current best solution (lines 10-13), in order to produce the new chromosomes (set of children) by crossover with each other chromosome of ω (line 5). Each child chromosome then undergoes the mutation operation (line 6). After that, a loop is started in which the GA coefficient is enhanced (line 9) and the combined solutions are updated (line 15). Then, a new list of combined schedule sets is randomly recalculated, and the current best solutions (local best position pbest, global best position gbest) are updated whenever a better one is detected (lines 10-14). The loop continues until the maximum number of iterations is reached or the convergence criterion is met.

• Initial Population: First, many potential solutions are randomly generated to form an initial population ω. The population consists of S individuals, i.e. S chromosomes (solutions). The initial population represents the set of all potential paths.

• Fitness Function: The fitness function is always problem dependent and measures the quality of the depicted solution; the performance of an individual should be reflected in its fitness. The fitness value is used to evaluate each solution of the population and to indicate how close it came to the optimum. We define the fitness function f as:

f = min ∑_{k∈SN} ∑_{i∈SP} ∑_{j∈SP, j≠i} x_{ijk} d_{ij} + X_i (4.4)

where the settings and notations of the proposed model are described as follows:
- x_{ijk}: binary variable representing whether nurse k visits patient node j after visiting patient node i
- d_{ij}: Euclidean distance between patient node i and patient node j
- SP: set of patient nodes
- SN: set of nurses
- C: nurse shift
- Q: load capacity of a nurse
- t_f: service time at node i, i ∈ SP
- X: position of the solution

Based on the above notations, we define the following constraints:

∑_{i∈SP, i≠j} x_{ijn} = 1, ∀j ∈ SP, ∀n ∈ SN (4.5)
∑_{i∈SP} C_i ≤ Q, ∀i ∈ SP (4.6)
∑_{i∈SP} ∑_{j∈SP, j≠i} x_{ijn} ≤ |SP|, ∀n ∈ SN (4.7)

• Selection of Parents: The best individuals are chosen from among the solutions based on the PSO model. In PSO, each solution is referred to as a "particle" and is represented by a "bird" in the search space. All particles have their fitness values evaluated by a fitness function, as well as velocities that direct the particles' flight; these values are optimized to be as accurate as possible. The particles navigate through the search space by following the current optimal particles over the generations. Figure 4.3 illustrates that each particle i, at each time t, consists of a position x_i^t (a candidate solution), a velocity v_i^t, and the best location it has ever visited, pbest_i^t, with respect to a fitness function. The particles of the swarm share information about the search space via the global best position gbest^t, which is the best position among all the particles. Equations (4.8) and (4.9) control the movement of a particle:

V_i = V_i + c_1 × r_1 × (pbest_i − X_i) + c_2 × r_2 × (gbest − X_i) (4.8)
X_i = X_i + V_i (4.9)

where i is the index of the particle and t is the iteration index; c_1 and c_2 are constants called learning factors, which control the effect of the personal memory of a particle and of the shared information of the swarm, respectively; r_1(t) and r_2(t) are generated randomly in [0, 1]. The process is repeated in discrete time steps until a stopping criterion is met. By modifying its parameters, it is possible to control the swarm's trade-off between exploration, i.e. visiting new locations and learning more about the search space, and exploitation, i.e. staying close to the best position found so far.
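A literal transcription of the two update rules in Python, on a toy one-dimensional search problem (the learning factors c_1 = c_2 = 2 and the toy objective, minimizing |x|, are assumptions of the example):

import random

def pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0):
    """One PSO iteration per particle: velocity update (4.8) then position update (4.9)."""
    new_x, new_v = [], []
    for xi, vi, pi in zip(x, v, pbest):
        r1, r2 = random.random(), random.random()
        vi = vi + c1 * r1 * (pi - xi) + c2 * r2 * (gbest - xi)  # equation (4.8)
        xi = xi + vi                                            # equation (4.9)
        new_x.append(xi)
        new_v.append(vi)
    return new_x, new_v

# Toy swarm of 4 particles searching for the (known) optimum at 0.
x = [5.0, -3.0, 8.0, 1.0]
v = [0.0] * 4
pbest, gbest = list(x), min(x, key=abs)
for _ in range(20):
    x, v = pso_step(x, v, pbest, gbest)
    pbest = [min(p, xi, key=abs) for p, xi in zip(pbest, x)]  # personal memory
    gbest = min(pbest, key=abs)                               # shared swarm memory
print(round(gbest, 3))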
• Crossover Operation: A child is created after a crossover operation is performed between the two chosen paths. Crossover is the process of combining multiple parent solutions to create a child solution. A pair of chromosomes is chosen as the parents, according to the PSO selection model, to produce a single offspring. The crossover procedure consists of four steps:
- Step 1: select a segment group randomly from the chromosomes (Figure 4.4).
- Step 3: in offspring 1, keep Group A of chromosome 1, but append the genes of chromosome 2 that chromosome 1 does not include after Group A, in their original sequence (Figure 4.6).

• Mutation: Following the crossover operation, the new offspring is subjected to a mutation operation. The operator chooses two different locations to relocate, and each operation is capable of producing five new solutions. The mutation is used to broaden the scale of the search, and the mutation probability can control the search trend (Figure 4.8). As a result, a new population is formed. The fitness evaluation is applied once more to the newly generated offspring. The newly generated efficient unicast multiple paths can then be accepted if they provide the best solution; otherwise, they are chosen for reproduction once more. As a result, the obtained paths provide optimized multiple unicast paths from the source to the destination.

Patient-Nurse Scheduling (PNS) Algorithm

Appointment scheduling is especially important in medical centers, where a long list of patients should be examined within a certain period of time with limited equipment, rooms and human resources. After the distribution of the patients, each nurse becomes responsible for a specific group of patients. Here begins the stage of finding the best path that the nurse has to follow in order to move from one patient to another for the examination, and to give medicine if needed. During each period, the nurse should control the patients of high and medium levels, while it is not necessary to follow the low-level patients during this period. The principal steps of the PNS algorithm (Algorithm 11) are summarized as follows:

• We first calculate the duration of every single appointment based on the patients' particular therapy requirements. The calculation consists in dividing the period time among the patients according to equations (4.10a) and (4.10b), where ht is the time required for controlling a high patient and mt the time for a medium patient, while nhp is the number of high patients and nmp the number of medium patients (line 2). Equation (4.10a) represents the duration of the period with a decrease δ of time.
This threshold accounts for the subjectivity of the nurse, who can increase or decrease it according to the patient's needs. In addition, this threshold allows the nurse to take into account any emergency or sudden event concerning the low-level patients, or simply her own convenience. Equation (4.10b) states that the time for a high patient should be double the time for a medium patient:

nhp × ht + nmp × mt = τ − δ (4.10a)
ht = 2 × mt (4.10b)

• In order to determine the order of the patients for the regular follow-up, a priority is calculated for each patient based on two criteria: the criticality rate (C) and the age (A) of the patient. On the one hand, the higher the patient's level of risk, the higher the priority of his examination. On the other hand, a patient in the youth stage can bear more than others. Therefore, we propose two equations, (4.11a) and (4.11b), to calculate the priority of a patient:

PRI = λ × C + ψ × A, if A > 18 years (4.11a)
PRI = λ × C + ψ × (φ − A), if A ≤ 18 years, where φ ≫ A (4.11b)

In these equations, λ represents the weight of the patient criticality and ψ the weight of the patient age, where the sum of the two weights must be equal to 1; we consider that λ should be greater than ψ, because the criticality is more important than the age when selecting the patient to follow. Equation (4.11b) is dedicated to child patients, who are considered as critical as elderly patients; φ is a parameter that increases the priority of child patients, where φ is much greater than the patient age (lines 3-11).

• We finally sort the patients by decreasing order of their priorities (line 12). We thus obtain the queue that contains the patients, arranged with their assigned times calculated in the first step. Consequently, the nurse will follow the patient queue, for a better and faster service and for the good reputation and humanity of the hospital.
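Both steps translate directly into code; the sketch below solves equations (4.10a)-(4.10b) and sorts a few hypothetical patients by the priority of equations (4.11a)-(4.11b), reusing the weights of the illustrative example that follows:

def appointment_times(nhp, nmp, tau, delta):
    """Solve equations (4.10a)-(4.10b): nhp*ht + nmp*mt = tau - delta, with ht = 2*mt."""
    mt = (tau - delta) / (2 * nhp + nmp)
    return 2 * mt, mt                      # (ht, mt)

def priority(c, age, lam=0.8, psi=0.2, phi=90):
    """Equations (4.11a)-(4.11b): children (age <= 18) get an increased age term."""
    return lam * c + psi * (age if age > 18 else phi - age)

# Three high and two medium patients, one period of 60 min with delta = 10 min.
ht, mt = appointment_times(nhp=3, nmp=2, tau=60, delta=10)
print(ht, mt)                              # -> 12.5 6.25

patients = [("P1", 80, 70), ("P2", 80, 8), ("P3", 55, 40)]  # (id, criticality, age)
queue = sorted(patients, key=lambda p: -priority(p[1], p[2]))
print([p[0] for p in queue])               # the child P2 outranks P1 at equal criticality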
Illustrative example of PNS: Consider a set of six patients (Patient 1 to Patient 6) assigned to a nurse, with their criticality rates (C_1 to C_6 respectively), their levels and their ages, during one period with T = 60 minutes. We assume λ = 0.8, ψ = 0.2, φ = 90, and δ = 10 min (Figure 4.9). Consequently, the sink calculates the time necessary to follow up the high patients, ht = 12.5 min, and the medium patients, mt = 6.25 min, given that there are three high patients and two medium patients, with the possibility to extend or reduce the duration of the examination thanks to the reserved margin δ = 10 min. Then, the sink computes the priority of each patient.

Research Results

In this section, we show the relevance of our framework, which is implemented in Java. We used real health data published in the Multiple Intelligent Monitoring in Intensive Care (MIMIC) database of PhysioNet [START_REF] Awan | A priority-based congestion-avoidance routing protocol using iot-based heterogeneous medical sensors for energy efficiency in healthcare wireless body area networks[END_REF]. In our simulation, we are interested in three vital signs: heart rate (HR), respiration rate (RESP), and oxygen saturation (SPO2). We run our framework over 10 periods, where the simulation environment is set according to the following table.

Performance Study of Shapelet Generation Method

Figure 4.10 shows the results of the shapelet generation step for the oxygen saturation (SPO2) vital sign, according to the first step of the training phase of the classification model. These shapelets are selected from the archives of several patients with diverse criticality levels, in order to cover all kinds of situations of new patients; our purpose is to increase the effectiveness of the model based on the machine learning approach. The obtained results are divided into 19 shapelets of low-level patients (Figure 4.10(a)), 7 shapelets of medium-level patients (Figure 4.10(b)), and 15 shapelets of high-level patients (Figure 4.10(c)).

Performance Study of Finding Similar Shapelet

In order to study the accuracy of the patient classification system, the elected shapelets are shown to determine the level of each vital sign. The green curve in each figure shows the readings detected by the sensor during a period, and the other curve shows the similar shapelet obtained by the model: red means that the level of the shapelet is high, orange that it is medium, and blue that it is low. Figure 4.11 shows the results for Patient 1: its heart rate level is high because the most similar shapelet has a high level (Figure 4.11(a)); Figure 4.11(d) shows the score frequencies of each vital sign and confirms the high criticality level of the heart rate, since the heart rate readings collected from this patient are totally in the high range (scores 2 and 3). The shapelet selected for the respiration rate (RESP) also has a high level (Figure 4.11(b)), which is confirmed in Figure 4.11(d), where most scores of the respiration data are of a high level; in contrast, all the oxygen saturation (SPO2) readings are in the normal range (Figure 4.11(d)), corresponding to a most similar shapelet of low level (Figure 4.11(c)). The same holds for Patient 9, whose three selected shapelets are illustrated in Figure 4.12, with levels low, medium and high for the vital signs HR, RESP and SPO2 respectively. The results for Patient 3 are shown in Figure 4.13, where the criticality level leans toward the low.

Performance of Patient Classification Model

This section presents the classification results for two patients, Patient 1 and Patient 9 (Figure 4.14 and Figure 4.15 respectively), obtained during the first round (10 periods). This technique depends on the levels of the three vital signs (HR, RESP, SPO2). The results are represented in Figure 4.14 with the following rule: a level equal to 1 means low, 2 means medium, and 3 means high. For the first period, Figure 4.14(a) shows the levels of the vital signs, where HR is high, RESP is high and SPO2 is low, which gives a high level of total criticality in Figure 4.14(b). In the second period, HR remains high and likewise SPO2 remains low, but RESP goes from high to medium; consequently, the criticality decreases from the high level to the medium level, and so on.

Patients Assignment to Nurses Study

This framework allows each nurse to control a specific group of patients during a period, thanks to the patients distribution method that follows their criticality levels. Therefore, this framework prevents a nurse from getting a group of patients who are all in a critical condition and whose health requires time to serve them. Figure 4.16(a) shows the results of the first mechanism (1): the seven high-level patients (in red color) are distributed three for Nurse 1, two for Nurse 2, and two for Nurse 3. The nine medium-level patients (in orange color) are distributed two for Nurse 1, three for Nurse 2, and four for Nurse 3. The two low-level patients (in blue color) join the group of Nurse 2. Note that Nurse 1, who has the most high-level patients (three), has two medium patients, while the others have three and four medium patients. Figure 4.16(b) shows the results of the second mechanism (2): the seven high patients and the nine medium patients are distributed in the same way as with the first mechanism, but the two low patients are distributed between the first nurse (Nurse 1) and the second one (Nurse 2).
These figures verify the good performance of the two methods in terms of equal distribution and of diversity of the criticality levels of the patients in each cluster.

Performance of PDCB and PSG

In order to evaluate the accuracy of these two grouping mechanisms, the number of patients obtained for each nurse and the distribution of the levels (Figure 4.17) are studied:

• Number of patients in each cluster: The equality of the number of patients and of their levels for each nurse is one of the principal goals that determine the efficiency of the distribution. This is confirmed in Figures 4.17(a) and 4.17(b) (mechanism 1 and mechanism 2 respectively), where with the first mechanism the distribution of the patients takes the following form: five for the first nurse (Nurse 1), seven for the second (Nurse 2), and six for the third nurse (Nurse 3). With the second mechanism, the distribution is equal among the nurses, i.e. six patients in each cluster.

• Accumulated criticality: For a fair distribution between the nurses, the volume of criticality in each cluster should be almost equal. Both mechanisms achieve this by dispersing the high and medium critical patients over the clusters, and by joining the low critical patients to the appropriate cluster in terms of number and total criticality rate.

Performance of GA-PSO

We tested GA-PSO with two sets of criticality parameters: ST_1 and ST_2. In ST_1, the nurse assigns 17.5 and 8.5 minutes to serve the patients with high and medium criticality respectively, while a patient with a low criticality level is not assigned any serving time. In ST_2, the nurse assigns 10, 6 and 1 minutes to serve the patients with high, medium and low levels respectively. Furthermore, our GA-PSO mechanism presents, for each nurse, a schedule of the patients to follow up, together with the necessary serving time that the nurse should respect. Tables 4.4 and 4.5 both verify the performance of our GA-PSO mechanism in terms of diversity of the criticality levels of the patients assigned to each nurse. In order to evaluate the fairness of our mechanism, the number of patients obtained for each nurse and the distribution of the levels are studied:

• Number of patients for each nurse: A fair equality of the number of patients and of their criticality levels for each nurse is one of the principal goals that determine the efficiency of the distribution. It is confirmed in Tables 4.4 and 4.5 that, when comparing the served patients with the number of available nurses, the distribution is fair. For instance, in the first approach ST_1, the distribution of fifteen patients between three nurses takes the following form: five for Nurse 1, four for Nurse 2, and six for Nurse 3. As for the second approach ST_2, the distribution is four for Nurse 1, five for Nurse 2, and six for Nurse 3.

• Accumulated criticality: For a fair distribution between the nurses, the volume of criticality for each nurse should be almost equal. Tables 4.4 and 4.5 show that our mechanism gives excellent results with different parameters, by distributing the highly critical patients over the nurses, not by grouping them into one group but by dispersing them over the groups, by doing the same for the moderate patients, and by assigning the low critical patients to the appropriate nurse in terms of number and total criticality.

• Performance evaluation of GA-PSO as a function of the criticality: The volume of criticality should be nearly equal for a fair distribution among the nurses, by distributing the high-risk patients over the nurses rather than in a single group, and likewise by distributing the medium and low patients in the proper groups.
Our GA-PSO mechanism, in two simulations, produced reasonable results by ensuring fairness in the distribution among the nurses while taking into consideration the criticality of each patient. For instance, with NN = 3 and NP = 15, we observe that in ST_1 the distribution of the patients takes the following form: five for the first nurse, four for the second nurse, and six for the third nurse. As for ST_2, the distribution is better than in ST_1: four patients are allocated to the first nurse, five to the second nurse, and six to the third nurse (Table 4.5). Consequently, we observe that the criticality values affect the optimization: balancing the three factors with appropriate numbers having a lower gap, such as 1, 6, 10, leads to a better performance than 0, 8.5, 17.5.

• Performance evaluation of GA-PSO as a function of the dataset: This section explores the effect of the dataset size on the performance of our mechanism. GA-PSO has shown to perform better on datasets of larger size with respect to the number of nurses and patients. We observe that, with a low number of nurses and patients, the optimization process is less accurate, which means that the distribution is not as fair as possible. However, when these parameters increase, the optimization improves thanks to a better combination of the patients' chromosomes. Performance-wise, increasing the parameters leads to a slight additional delay, but this can be tweaked by decreasing the number of iterations. On the other hand, any random combination of patients selected as gbest shows an equal distribution of these patients over any specified number of nurses, considering the facts that every patient has a certain priority, that the nurses must serve the patients in order, and that the patients are distributed inside the chromosome per nurse. To confirm this condition, we can sum up the number of patients per nurse and verify it against the criticality values, by adding them once the patients are categorized as high, medium or low. For instance, in the tables shown above, we can determine the percentage of service of each nurse, with the patients distributed equally over the dataset and, for each scenario, the designated set of criticality measures. We also note that the number of patients per nurse ranges between two numbers close to each other, taking into consideration the priority and the number of patients per nurse. For example, in both Tables 4.4 and 4.5, with NN = 15 and NP = 50, it ranges from 2 to 5, depending on how critical the patients are and on the schedule of each nurse. As we can see, the gap between the nurses is around 10 % in terms of distribution for 3 nurses, and it is even lower for 8 nurses and lower again for 15 nurses. We observe that this gap decreases when the number of nurses and patients increases, and the performance of the algorithm improves accordingly. If we consider the other set of criticalities, the distribution is observed to be better, which leads to a low percentage of difference; thus, depending on the gap between the criticality measures, the performance increases when this gap decreases, and the value 0 for "Low" is not the best practice, as it adds nothing to the nurse's schedule. Not only is the distribution fair over the number of patients, but also over the criticality categories: nurses go on different missions with low, medium and high categories, which also depend on the number of patients per nurse. For example, with the latter set of criticalities, 6 low patients are considered equal to one medium patient, and a high patient is a combination of both.
Finally, the service time is properly managed, with a low range of gaps between the nurses: a nurse might have a total time of 26, another of 34, and so on, and this gap stays almost equal when changing parameters such as the number of patients and nurses.

Performance of Patients Scheduling

This section verifies the efficiency of the final step of this framework, called patients scheduling. This step aims to organize and order the patients of each nurse, in order to follow and serve them during each period. Figure 4.19 shows the priorities of the patients of the three nursing groups (Nurse 1, Nurse 2, Nurse 3), computed according to the criticality and the age of each patient, with the weight of the criticality, λ = 0.8, greater than the weight of the age, ψ = 0.2. The pink bar in each figure presents the detected patient criticality, the blue bar presents the age of the patient, and the light yellow bar their calculated priority. It can be noted that, if the criticalities of the vital signs of two patients are equal, the priority goes to the patient having the more critical age.

Conclusion

In public health, patient classification based on wireless body sensor networks and appointment scheduling techniques can significantly reduce the workload of the medical staff. In this chapter, we have proposed an efficient framework for intelligent nurse-patient scheduling that consists of two phases: patient classification and nurse scheduling. Our framework aims to classify the patients according to their criticalities and to introduce an efficient nurse scheduling. Through simulations on real health data, we demonstrated the effectiveness of our framework in providing balanced nurse workloads, where each nurse serves, during each period of one hour, an average of 5 patients with a corresponding cumulative criticality of 62 %.

Conclusion and Perspectives

Conclusion

Healthcare applications rely on WBSN, which is the main technology playing an important role in monitoring the healthcare status. In addition, researchers have gained insights into the important role of nurses in hospitals. Indeed, WBSN is becoming a staple for helping nurses in several tasks such as emergency detection, control of the vital signs, data analysis and decision making. Thus, in this work, we focused on proposing several techniques that improve these tasks while taking into account the conservation of the limited sensor energy. At the sensor level, we proposed several techniques that aim to reduce the energy consumption, in order to extend the network lifetime. These techniques are based on reducing the amount of data collected and transmitted by the sensors. First, an emergency detection algorithm was proposed, which aims to send data only if the patient's condition has varied, in order to reduce the redundant transmitted data. Second, we presented an adaptive sensing frequency that aims to eliminate the redundant collected data, based on the patient's situation. Third, we built the first phase of the patient classification model, which compares the vital sign series of the patient with the vital sign series provided by the archive using the DTW method, in order to select and send the criticality level of the vital sign to the sink. Finally, we proposed a multi-hop protocol that minimizes the signal strength required by the sensor in the transmission process, thereby reducing the energy consumption.
At the sink level, we proposed several techniques that allow the medical staff to make the appropriate decisions. First, we built an efficient framework based on Hadoop tools for big data collection, processing and storage in wireless body sensor networks. The proposed framework aims to ingest data in real time (using Flume and Spark), then to preprocess them and store the big data for the analysis process (using HDFS), and finally to allow the end user to analyze the data (using Grafana and Matplotlib). Second, we proposed a patient records archiving technique in order to regenerate the health data of each patient for analysis purposes. Third, we proposed a prediction technique that allows predicting any abnormal situation, using the Prophet method, in order to obtain the right medical intervention on time. Fourth, we built a patient classification model that classifies the patient according to its criticality, which is calculated from the criticality levels received from all the sensors. Finally, we proposed a patient-nurse scheduling that starts with grouping mechanisms aiming to assign patients to nurses in a fair way: we proposed two mechanisms that assign the patients according to two parameters, the patient's criticality level and the number of patients, and a third mechanism that is a hybrid of the genetic algorithm and particle swarm optimization. We then proposed a scheduling algorithm that aims to find the path that the nurse should follow in order to examine her group of patients; this algorithm is based on the patient's priority, calculated according to its criticality level and age.

Perspectives

In this section, we present two directions of perspectives in order to improve the frameworks of patient monitoring and assessment presented in this thesis: short to mid term perspectives, related to our proposed mechanisms, and long term perspectives.

Short to Mid Term Perspectives

As future work, we propose some perspectives in order to enhance the mechanisms presented in this thesis, which can be summarized as follows:

• Our approach in chapter 2 can be extended by adapting the sensing frequency of the sensor according to the patient's criticality. Indeed, the sampling rate of the sensors could be adapted, thereby reducing the energy consumption.

• We have two main directions to extend our framework in chapter 3. First, we plan to use aggregation techniques at the processing stage in order to eliminate the redundancy among the collected data and thus reduce the amount of data to store. We want to search for the similarity between the collected vital signs using several aggregation functions, such as the Jaccard and cosine similarities, or several distance functions, such as the Euclidean distance. Second, we seek to enhance the performance of our framework by testing other Hadoop tools, such as Hive and Kafka.

• We have three main directions to enhance our platform in chapter 3. First, we plan to test our platform in real-case scenarios in order to validate its performance. Second, we seek to adapt our platform to take into account various types of patient data, such as images of organs, videos of operations, etc. The purpose of adding more information (images and videos) to the digital data is to support the decision making. Finally, we plan to develop a mobile application in order to help clinicians closely and remotely monitor critical patients.

• Our platform in chapter 4 can be enhanced in several ways.
First, we seek to integrate a mobile application for the medical staff in order to provide each nurse with her scheduled appointments. Second, we plan to improve the performance of our framework by testing, in the patients scheduling algorithm, other parameters in addition to the criticality rate and the age of the patients, such as the capability of each nurse.

• We have three future directions to enhance the PNS mechanism presented in chapter 4. First, we seek to take into account more criteria when assigning nurses to patients, such as the nurse skills and the subjectivity of the decisions. Second, we plan to set the service time of each patient in a dynamic way rather than a static one. Third, we seek to increase the performance of PNS, either by enhancing the GA-PSO process or by testing other heuristic algorithms.

Long Term Perspectives

Despite the great attention paid by researchers to improving healthcare applications, the field is still open to research. Subsequently, several issues need to be explored further, related to data management, energy conservation, decision making, and patient-nurse scheduling. In this section, we raise some of these issues in order to attract the attention of researchers. The first issue concerns the data collection techniques in WBSN: integrating the Federated Learning (FL) technique into the data collection and transmission process would prolong the network lifetime and reduce the complexity of the decision making. The second issue is related to the patient-nurse scheduling technique. Indeed, in a hospital, nurses are distinguished from one another in activity, intelligence and experience; therefore, in addition to identifying the group of patients, assigning the right nurse to this group would improve the provided health services. Finally, time synchronization is one of the issues that are not yet largely explored. Any loss or delay causes a change in the time synchronization at the sink and may therefore lead to a mistreatment. Subsequently, more effort should be made to enhance the time synchronization in WBSN, in order to prevent any undesired patient situation.

Figure 1.1 Various types of biomedical sensors.
Figure 1.2 WBSN architecture.
Figure 1.3 Intra-WBSN Communication Types.
Algorithm 3 SADCF Algorithm.
Algorithm 4 Adapting Weights Algorithm.
Figure 2.3 Analysis of network lifetime.
Figure 2.4 Analysis of energy consumption.
Figure 2.5 Analysis of transmission time in one period.
Figure 2.6 Analysis of overhead on sink.
Figure 2.7 Analysis of data redundancy reduction.
Figure 3.1 Overview of the proposed WBSN architecture.

For instance, the readings ᵖR_HR_u = [90, 90, 91, 99, 110, 120, 123, 131, 132, 125, 120, 122] are collected, as shown in Figure 3.3.

Figure 3.3 Illustrative example of PED.

Algorithm 7 PRA Algorithm.
Require: period size T; readings collected during p: ᵖR_vs_u = [r_1, r_2, ..., r_T]; LSA coefficient set ᵖC_vs_u.
Ensure: void.
1: StoredSet ← ∅
2: for i = 1 to T do
3:   if r_i was sent by the sensor then
4:     StoredSet = StoredSet ∪ {r_i}
5:   else
6:     estimate r_i based on ᵖC_vs_u
7:     StoredSet = StoredSet ∪ {r_i}
8:   end if
9: end for
10: store StoredSet at the sink

Figure 3.4 Illustrative example of MA algorithm.

Figure 3.5 shows the same set of readings collected by the sensor as in Figure 3.4, but the ES algorithm is applied with a fixed value of ζ equal to 0.5. According to equation 3.4, the first predicted reading at slot time y_1 is identical to the collected reading, e.g. 10. Then, the predicted reading Est_2 at slot y_2 is calculated as 0.5 × 10 + (1 − 0.5) × 10 = 10. To recover the missed reading at slot time y_4, r_4 is set to Est_4, calculated as 0.5 × 11 + (1 − 0.5) × 10.5 = 10.75. The same process is performed for the other missing readings.

Figure 3.5 Illustrative example of ES algorithm.
Figure 3.6 Illustrative example of applying Prophet.

Figure 3.7 illustrates three patient situations. The heart rate readings collected for the first patient (Figure 3.7(a)) lie entirely in the normal range (score = 0). The readings of the second one are almost abnormal, with criticality score equal to 2 (Figure 3.7(b)), whilst the third patient shows a severe criticality, with scores varying between 2 and 3 (Figure 3.7(c)).

Figure 3.7 Patient situation according to the percentage of reading scores.
Figure 3.8 The quality of regenerated data after applying PRA as a function of k (panels: (a) LSA with k = 3, (b) LSA with k = 5, (c) LSA with k = 7); patient situation is low.
Figure 3.10 The variation of the sensing frequency of the sensor after applying the PMA as a function of the patient's situation level, T = 3600.
Figure 3.11 The percentage of sensed data after applying the PMA algorithm as a function of the patient's situation.
Figure 3.12 Percentage of data transmission from the sensor to the sink, k = 5.
Figure 3.13 Raw data vs regenerated data with MA.
Figure 3.14 Raw data vs regenerated data with ES.
Figure 3.15 Processing speed using MA and ES.

The prediction accuracy improves from Figure 3.16(a) to 3.16(d): as the size of the training data increases, Prophet better understands the variation of the patient situation, which increases the accuracy of the predicted progression.

Figure 3.16 The predicted data using Prophet compared to the raw data, T = 900, P = 15, patient situation is medium.
Figure 4.1 Patient classification system. (4.1)
Figure 4.2 Illustrative example of shapelet generation.
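To make the exponential smoothing (ES) regeneration concrete, here is a minimal sketch matching the worked example above (ζ = 0.5, readings lost at slots y_4, y_5 and y_7); the function name and the use of None to mark missing slots are our own illustrative choices, not from the thesis:

```python
def regenerate_es(readings, zeta=0.5):
    """Fill missing readings (None) with exponential-smoothing estimates:
    Est_1 = r_1; Est_i = zeta * r_{i-1} + (1 - zeta) * Est_{i-1};
    a missing r_i is replaced by Est_i (the recursion of equation 3.4)."""
    filled = list(readings)
    est = filled[0]                   # the first estimate equals the first reading
    for i in range(1, len(filled)):
        est = zeta * filled[i - 1] + (1 - zeta) * est
        if filled[i] is None:         # reading lost or never sensed
            filled[i] = est
    return filled

# Worked example from the text (slots y1..y8, zeta = 0.5):
print(regenerate_es([10, 11, 11, None, None, 11, None, 12]))
# -> [10, 11, 11, 10.75, 10.75, 11, 10.875, 12]
```

Note that a regenerated value is reused as the "previous reading" for the next estimate, which is why consecutive missing slots y_4 and y_5 both receive 10.75, as in the figure.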
Algorithm 8 PCDB Algorithm.
Require: period size T; number of patients NP; number of nurses NN; a balancing threshold ε; the list of patient criticalities during one period: Q = [C_1, C_2, ..., C_NP].
Ensure: a set of patient clusters, one per nurse.
1: for i = 1 to NN do (initialise each group G_i) ...
7: for j = 1 to NN do
8:   select the patient j with the highest criticality score C_j
9:   G_j ← G_j ∪ {j}; C_{G_j} += C_j
...
13: for j = 2 to NN do ...
21: until no patient remains in Q

Patients Sorting-based Grouping (PSG) Mechanism

We use the following notation:
- x_ijn: binary variable indicating whether nurse n visits patient node j after visiting patient node i
- d_ij: Euclidean distance between patient node i and patient node j
- SP: set of patient nodes
- SN: set of nurses
- C: nurse shift
- Q: load capacity of a nurse
- t_i: service time at node i, i ∈ SP
- X: position of the solution

Based on the above notation, we define the following constraints:

Σ_{i∈SP, i≠j} x_ijn = 1, ∀ j ∈ SP, ∀ n ∈ SN    (4.5)
Σ_{i∈SP} C_i ≤ Q, ∀ i ∈ SP    (4.6)
Σ_{i∈SP} Σ_{j∈SP, j≠i} x_ijn ≤ |SP|, ∀ n ∈ SN    (4.7)

Figure 4.3 Illustrative example of PSO exploration space search.
Figure 4.4 Segment selection from the chromosomes.
Figure 4.6 Combining genes.
Figure 4.7 Adding zeros.
Figure 4.8 Offspring after mutation operation.

Algorithm 10 PSG Algorithm.
Require: number of patients NP; number of nurses NN; list of patient criticalities during period p: Q_p = [C_1, C_2, ..., C_NP].
Ensure: a set of patient clusters, one per nurse.
1: sort Q in descending order
2: for i = 1 to NN do (initialise each cluster) ...
7: while NP > 2NN do
8:   for j = 1 to NN do
9:     G_j ← G_j ∪ {patient j, patient NP}
10:    C_{G_j} += C_j + C_NP
11:    remove C_j and C_NP from Q
...
14: end while
15: for i = 1 to NP do
16:   select the cluster G_k having minimum criticality ...

Figure 4.9 Illustrative example of PNS algorithm.

Figure 4.10 shows the results of the shapelet generation step for the oxygen saturation (SPO2) vital sign, according to the first step of the training phase of the classification model. These shapelets are selected from the archives of several patients with diverse criticality levels, in order to cover all kinds of situations of new patients; our purpose is to increase the effectiveness of the machine-learning-based model. The obtained results consist of 19 shapelets from low-level patients (Figure 4.10(a)), 7 shapelets from medium-level patients (Figure 4.10(b)) and 15 shapelets from high-level patients (Figure 4.10(c)).

Figure 4.11 shows the results for Patient 1. The heart rate level is high because the most similar shapelet obtained has a high level (Figure 4.11(a)); Figure 4.11(d), which shows the score frequencies for each vital sign, confirms the high criticality of the heart rate, since the heart rate readings of this patient are almost entirely in the high range (scores 2 and 3). The shapelet selected for the respiration rate (RESP) also has a high level (Figure 4.11(b)), which Figure 4.11(d) confirms, as most respiration scores are high. All the oxygen saturation (SPO2) readings are in the normal range (Figure 4.11(d)), which matches the selected similar shapelet of low level (Figure 4.11(c)). The same holds for Patient 9, whose three selected shapelets, illustrated in Figure 4.12, have levels low, medium and high for the vital signs HR, RESP and SPO2 respectively. The results of Patient 3 are shown in Figure 4.13, where the criticality level leans towards low.

Figure 4.10 The shapelets generated for SPO2.
Figure 4.11 Similar shapelet elected for every vital sign of the high Patient 1.
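The shapelet-based classification just described can be sketched as follows; this is a simplified reading in which similarity is the minimal Euclidean distance of a shapelet slid along the series. The thesis may use a different similarity measure, and all names here are illustrative:

```python
import numpy as np

def best_shapelet_level(series, shapelets):
    """Return the criticality level of the most similar shapelet.
    `shapelets` is a list of (level, values) pairs; similarity here is
    the minimal Euclidean distance over all alignments of the shapelet
    inside the series (an assumption, not the thesis' exact measure)."""
    series = np.asarray(series, dtype=float)

    def dist(shp):
        shp = np.asarray(shp, dtype=float)
        L = len(shp)
        return min(np.linalg.norm(series[i:i + L] - shp)
                   for i in range(len(series) - L + 1))

    level, _ = min(((lvl, dist(vals)) for lvl, vals in shapelets),
                   key=lambda t: t[1])
    return level

# Illustrative use: three candidate heart-rate shapelets, one per level.
demo = [("low", [60, 61, 60]), ("medium", [95, 96, 97]), ("high", [130, 133, 135])]
print(best_shapelet_level([128, 131, 134, 133, 130], demo))   # -> "high"
```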
Figure 4.16 shows the slight difference between the two clustering-based mechanisms proposed in this work. The number of nurses is fixed to three and the number of patients to 18, so the goal is to distribute the 18 patients among the three nurses; the results shown are obtained after the first period. In Figure 4.16(a), for the first mechanism, there are seven high-level patients (represented in red).

Figure 4.12 Similar shapelet elected for every vital sign of the medium Patient 9.
Figure 4.13 Similar shapelet elected for every vital sign of the low Patient 3.
Figure 4.14 The criticality levels of Patient 1 during 10 periods.
Figure 4.15 The criticality levels of Patient 9 during 10 periods.
Figure 4.17 The variation of the patient number in each cluster.
Figure 4.18 Accumulated criticality in each cluster.

Figure 4.19(a) shows the different priorities for the patients assigned to Nurse 1. Patient 12 is ranked first in terms of examination priority, since it has the maximum criticality rate and an age of 11 years. Patients 18 and 2 have the same criticality, but priority goes to Patient 18, whose age of 87 years is more critical than the age of Patient 2 (7 years). In Figure 4.19(b), the criticality rates of Patients 4 and 8 are equal; priority is given to Patient 4, as his age of 16 years is more critical than the age of Patient 8 (30 years). Figure 4.19(c) shows the priorities assigned to the third nurse, where the patients are ordered as follows: Patient 16, Patient 1, Patient 5, Patient 9, Patient 10, ...

Figure 4.19 The priority level of patients obtained as a function of age and criticality rate.

Figure 4.20 presents, for each nurse, the schedule established after the first period for following up the patients during the second period. For each patient, it shows the appropriate time the nurse should follow in order to treat and serve them during one period (60 min), while respecting the nurse's responsibility: the nurse can extend the permissible time when necessary, thanks to an additional time slot (δ = 10 min) shown in the figure as a break.

Figure 4.20 The scheduling results obtained.
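The priority rule illustrated in Figure 4.19 can be sketched as below. Equation (4.11b) itself is not legible in the extraction, so the branch at 18 years is inferred from the worked examples (Patient 4: PRI = 0.8 × 70 + 0.2 × (90 − 13) = 71.4; Patient 18 at 87 years outranking Patient 2 at 7 years); treat the age term as an assumption:

```python
LAMBDA, PSI = 0.8, 0.2    # weights of criticality and age (Table 4.3)

def priority(criticality_rate, age):
    """Sketch of the priority rule of equation (4.11b): children are
    treated as more critical the younger they are (90 - age for
    age < 18), adults as more critical the older they are."""
    age_term = (90 - age) if age < 18 else age
    return LAMBDA * criticality_rate + PSI * age_term

print(priority(70, 13))                     # Patient 4: 0.8*70 + 0.2*77 = 71.4
print(priority(50, 87) > priority(50, 7))   # Patient 18 outranks Patient 2: True
```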
Table 1.1 Energy consumption in sensor networks: a state of the art of techniques (references [17]-[27], 2012-2020; techniques include sleep scheduling, dominating sets, CSMA/CA, cost functions, multi-hop routing, fuzzy logic, clustering, tele-monitoring, TDMA, data aggregation and filtering, fault tolerance and on-demand routing; evaluated on network lifetime, energy consumption, delay, throughput and emergency detection, with limitations such as processing time, lack of security and unhandled sensor mobility).

The obtained results of the proposed technique show better performance compared to the other state-of-the-art approaches in terms of energy usage, data reliability, PDR and packet transmission rate, while avoiding network congestion. However, the data classification technique at the sensor node level increases the overhead at sensor nodes, and this work does not support handling several sensor nodes. In addition, three different processes (classification, scheduling, and vertical handover decision) were included in the monitoring of the telehealth system.

Table 1.2 Analyzing large amounts of data in sensor networks: a state of the art of techniques (references [28]-[34], 2014-2021; techniques include harmony search, particle swarm optimization, deep belief networks, Hadoop tools (Sqoop, HDFS, HBase, MapReduce, Hive), business intelligence solutions, data segregation and classification, machine learning, layered data fusion, digital twins and edge-cloud architectures; evaluated on emergency detection, data storage and reliability, accuracy and network overhead, with limitations related to processing time, privacy, security and scalability).

Table 1.3 Real-time patient monitoring in sensor networks: a state of the art of techniques (references [3], [37]-[45], 2011-2021; techniques include fuzzy inference systems, decision matrices, mobile and web interfaces, cloud databases, edge computing with Apache Flink, adaptive sensing frequency, LSTM, early warning score systems and data mining; evaluated on emergency detection, adaptive sampling, energy saving and real-time monitoring, with open issues in security, privacy and scalability).
Table 1.5 Nurse scheduling in sensor networks: a state of the art of techniques.

Algorithm 2 RKEE Algorithm. Require: set of sensor nodes N = {n_1, n_2, ...}. 1: for each round do ...

Figure 2.1 Flow chart of the proposed protocol: after network initialisation, when node i has a packet to send and its EWS score has changed, the node determines the weights of each parameter, calculates M for each neighbour node, sends a control message to each neighbour, selects the node with maximum M, and forwards the packet to the selected next node.

The physiological parameter bands of the Early Warning Score system (Figure 2.2) are:
Respiration rate: ≤ 8 → 3; 9-11 → 1; 12-20 → 0; 21-24 → 2; ≥ 25 → 3.
Oxygen saturations: ≤ 91 → 3; 92-93 → 2; 94-95 → 1; ≥ 96 → 0.
Any supplemental oxygen: yes → 2; no → 0.
Temperature: ≤ 35.0 → 3; 35.1-36.0 → 1; 36.1-38.0 → 0; 38.1-39.0 → 1; ≥ 39.1 → 2.
Systolic BP: ≤ 90 → 3; 91-100 → 2; 101-110 → 1; 111-219 → 0; ≥ 220 → 3.
Heart rate: ≤ 40 → 3; 41-50 → 1; 51-90 → 0; 91-110 → 1; 111-130 → 2; ≥ 131 → 3.
Level of consciousness: A → 0; V, P, or U → 3.

To limit the transmission of redundant data, the sensor sends a measurement to the next hop only if the patient's situation has changed, i.e. if two successive scores are different. Consequently, the energy consumption is reduced and the decision-making operation becomes simpler.

Figure 2.2 Early Warning Score (EWS) System.
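A minimal sketch of the send-on-change rule above, with the heart-rate bands taken from the EWS table (only one vital sign is encoded; all names are illustrative):

```python
# Heart-rate bands from the EWS table above: (upper bound, score).
HR_BANDS = [(40, 3), (50, 1), (90, 0), (110, 1), (130, 2), (float("inf"), 3)]

def ews_score_hr(value):
    """Map a heart-rate reading to its EWS score."""
    for upper, score in HR_BANDS:
        if value <= upper:
            return score

def readings_to_send(readings):
    """Send-on-change rule: a reading is transmitted to the next hop
    only when its EWS score differs from the last transmitted score."""
    sent, last = [], None
    for r in readings:
        s = ews_score_hr(r)
        if s != last:
            sent.append(r)
            last = s
    return sent

print(readings_to_send([85, 88, 95, 97, 120, 86]))   # -> [85, 95, 120, 86]
```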
Figure 3.2 Round-based sensor lifetime with periodic collection model (each round consists of periods p_1, ..., p_P; during each period the sensor collects ᵖR_vs_u = [r_1, r_2, ..., r_T]).

Algorithm 5 PED Algorithm.
Require: a sensor n; a vital sign vs; a period p; period size T; readings collected during p: ᵖR_vs_u = [r_1, r_2, ..., r_T]; division size d.
Ensure: set of values sent to the sink.
1: calculate score s_1 of r_1
2: send s_1 to the sink
3: current_score = s_1
4: for each record r_i ∈ ᵖR_vs_u with i > 1 do
5:   calculate score s_i of r_i according to EWS
6:   if s_i ≠ current_score then
7:     send r_i
8:     current_score = s_i
9:   end if
10: end for
11: divide ᵖR_vs_u into 100/d equal divisions
12: calculate the mean of each division
13: find ᵖC_vs_u based on Definition 1
14: send ᵖC_vs_u to the sink

Over a round, the sensor thus collects the sets of readings [¹R_vs_u, ²R_vs_u, ..., ᴾR_vs_u]. Moreover, we assign a set of scores ᵖS_vs_u to each set of readings ᵖR_vs_u. Hence, we define the weight of a score s_i, where i ∈ {0, 1, 2, 3}, as follows:
• Definition 2 (the weight of a score s_i, wgt(s_i)): ...

Table 3.2 Sensing frequency decision table (SFDT):
C_R →     normal      abnormal
low       20 % of T   40 % of T
medium    40 % of T   60 % of T
high      60 % of T   T

Table 3.3 Clinical response decision-making table (CRDM):
Decision   Score_U       Patient observation
D_1        [0, b_1]      one time every 6 hours
D_2        ]b_1, b_2]    one time every 2 hours
...        ...           ...
D_m        ≥ b_l         one time every 10 minutes

Table 3.4 Simulation environment:
Parameter   Description                Value
T           period size                900, 1600 and 3600
P           round size                 15 periods
k           degree of LSA polynomial   3, 5 and 7
d           division size              5 % and 10 %
vs          vital sign                 heart rate

As given in Table 4.1, if the level of vs_i is low its score is 0; if it is medium, the score is 1; otherwise, the score is 2. Then, the sink assigns the criticality level of this patient based on these scores over all vital signs. Equation 4.3 calculates the rate of criticality C, and Table 4.2 describes the relation between the criticality rate and the final level of severity of a patient:

C = (Σ_{i=1}^{NS} s_i) / (2 × NS), where NS is the number of sensors.    (4.3)

Table 4.1 Level and corresponding score of vs:
shapelet level of vs_i   low   medium   high
score of vs_i, s_i       0     1        2

Table 4.2 Description of patient severity level:
Criticality rate (C)   Criticality level
0 ≤ C ≤ 0.3            low
0.3 < C ≤ 0.6          medium
0.6 < C ≤ 1            high
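Equation (4.3) and Table 4.2 combine into a few lines of code; a minimal sketch (the example scores are illustrative):

```python
def criticality_rate(scores):
    """Equation (4.3): C = sum(s_i) / (2 * NS), with s_i in {0, 1, 2}."""
    return sum(scores) / (2 * len(scores))

def criticality_level(c):
    """Severity level according to Table 4.2."""
    if c <= 0.3:
        return "low"
    if c <= 0.6:
        return "medium"
    return "high"

# E.g. HR high (2), RESP high (2), SPO2 low (0), as for Patient 1 above:
c = criticality_rate([2, 2, 0])
print(round(c, 2), criticality_level(c))   # -> 0.67 high
```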
Algorithm 9 GA-PSO Algorithm.
Require: period p; period size τ; patient number NP; nurse number NN; list of patient criticalities during p: Q = [C_1, C_2, ..., C_NP].
Ensure: best solution BS.
1: generate the initial random population ω
2: repeat
3:   calculate the fitness value f of each individual based on equation (4.4)
4:   select the parent individuals for reproduction
5:   produce a set of children by the crossover operation
6:   perform the mutation operations
7:   evaluate the fitness value of each child
8:   select the best individual of the current generation:
9:   for each individual i do
10:    if f(i) > f(pbest) then pbest ← i end if
11:    if f(i) > f(gbest) then gbest ← i end if
12:    update the position and velocity of each individual according to equations (4.8) and (4.9)
13:  end for
14: until the convergence criterion is satisfied

Table 4.3 Simulation environment:
Parameter       Description                          Value
T               period size                          1000
NS              number of sensors                    3
δ               subjectivity threshold               10
NN              number of nurses                     3, 8, 15
NP              number of patients                   15, 18, 30, 50
vs              vital sign                           HR, RESP, SPO2
s               score of vital sign                  0, 1 or 2
λ               weight of the patient criticality    0.8
ψ               weight of the patient age            0.2
ST_1 and ST_2   serving times                        {0, 8.15, 17.5} and {1, 6, 10}
ω               population size                      50
CV              cross-validation constant            0.9
MC              mutation constant                    1/20
r_1, r_2        random numbers                       [0, 1]
c_1, c_2        learning factors                     2, 2

Figure 4.18 shows that the two mechanisms give excellent results: the highly critical patients are not grouped together into one cluster but dispersed over the groups, and the same holds for the medium-level patients.

Figure panels (raw data vs. similar shapelet): (a) HR readings of Patient 3, (b) RESP readings of Patient 5, (c) SPO2 readings of Patient 3, (d) frequency of scores per vital sign.

Tables 4.4 and 4.5 report the GA-PSO results (served patients per nurse with their bracketed breakdown, service time, and patient distribution) as a function of the number of patients; reasonable accuracy is also observed for the GA-PSO mechanism, particularly for the larger datasets.

NN = 3, NP = 15:
Nurse     Served patients   Service time (min)   Patient distribution (%)
Nurse 1   4: {2, 1, 1}      27                   31.76
Nurse 2   5: {2, 2, 1}      33                   38.82
Nurse 3   6: {1, 2, 3}      25                   29.41
Total     15                85                   100

NN = 8, NP = 30:
Nurse     Served patients   Service time (min)   Patient distribution (%)
Nurse 1   3: {1, 1, 1}      17                   9.88
Nurse 2   3: {1, 2, 0}      22                   12.79
Nurse 3   3: {1, 2, 0}      22                   12.79
Nurse 4   4: {1, 2, 1}      23                   13.37
Nurse 5   5: {1, 2, 2}      24                   13.95
Nurse 6   5: {1, 2, 2}      24                   13.95
Nurse 7   4: {1, 2, 1}      23                   13.37
Nurse 8   3: {1, 1, 1}      17                   9.88
Total     30                172                  100

NN = 15, NP = 50:
Nurse      Served patients   Service time (min)   Patient distribution (%)
Nurse 1    3: {1, 1, 1}      17                   5.84
Nurse 2    4: {2, 0, 2}      22                   7.56
Nurse 3    2: {2, 0, 0}      20                   6.87
Nurse 4    4: {1, 1, 2}      18                   6.19
Nurse 5    3: {1, 0, 2}      12                   4.12
Nurse 6    4: {2, 0, 2}      22                   7.56
Nurse 7    3: {1, 0, 2}      12                   4.12
Nurse 8    3: {2, 0, 1}      21                   7.22
Nurse 9    3: {2, 0, 1}      21                   7.22
Nurse 10   3: {1, 0, 2}      12                   4.12
Nurse 11   3: {2, 0, 1}      21                   7.22
Nurse 12   4: {2, 1, 1}      27                   9.28
Nurse 13   5: {1, 2, 2}      24                   8.25
Nurse 14   3: {2, 0, 1}      21                   7.22
Nurse 15   3: {2, 0, 1}      21                   7.22
Total      50                291                  100
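The position/velocity update invoked at line 12 of Algorithm 9 refers to equations (4.8)-(4.9), which are not legible in the extraction; the sketch below uses the standard PSO update, consistent with the learning factors c_1 = c_2 = 2 of Table 4.3 (the thesis' exact form may add an inertia weight):

```python
import random

C1, C2 = 2.0, 2.0    # learning factors c1, c2 from Table 4.3

def pso_step(x, v, pbest, gbest):
    """One position/velocity update per dimension in the standard PSO
    form; equations (4.8)-(4.9) of the thesis may differ slightly."""
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        r1, r2 = random.random(), random.random()
        vi = vi + C1 * r1 * (pi - xi) + C2 * r2 * (gi - xi)
        new_v.append(vi)
        new_x.append(xi + vi)
    return new_x, new_v
```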
Acknowledgements

First and foremost, I thank and praise Allah (Subhanahu Wa Ta'ala); there is no power but from God. Thanks to God who gave me strength, I completed this productive work. And I thank all the people who followed and contributed with me to produce this work. I would like to express my sincere gratitude to my supervisors, Prof. Abdelhafid ABOUAISSA and Prof. Lhassane IDOUMGHAR at the Université de Haute-Alsace, and Dr. Hassan HARB and Dr. Nour CHARARA at the American University of Culture and Education (AUCE), for their guidance during my research. They gave me strength, knowledge and advice, and they supported and encouraged me continuously. I am very grateful to them for their contributions in ideas and time that made my Ph.D. stronger and better.

Each patient's priority PRI_j is calculated based on equation (4.11b), and the list Q is sorted in descending order of priority. For a patient younger than 18, e.g. Patient 4 with criticality 70 and age 13, the priority is calculated as PRI_4 = 0.8 × 70 + 0.2 × (90 − 13) = 71.4, and so on. Finally, we order the patients according to their priorities while setting the time for each one.

Appendix A Hadoop Implementation

The main objective of using Hadoop is to perform parallel data storage and processing; it reduces the effect of data loss during node failures and decreases the processing latency. We therefore used Hadoop tools in Chapter 3 to handle the big data generated from the patients.

A.1 Starting Hadoop

To show the effectiveness of our framework in storing and processing big data, with its diversity of information, real-time arrival and generation speed, we implemented the system on Hadoop under Ubuntu.

A.2 Results on Data Collection and Storage

The tool responsible for collecting data in our system is Flume.
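As an illustration of the ingestion stage described in Appendix A (Flume forwarding readings to Spark, which persists them to HDFS for later analysis with Grafana or Matplotlib), here is a hedged sketch; the socket host/port, the CSV schema and the HDFS paths are assumptions, not taken from the thesis:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import split, col

spark = SparkSession.builder.appName("wbsn-ingest").getOrCreate()

# Readings forwarded by a Flume agent over a socket, one CSV line each:
#   patient_id,vital_sign,value,timestamp   (illustrative schema)
lines = (spark.readStream.format("socket")
         .option("host", "localhost").option("port", 9999).load())

readings = lines.select(
    split(col("value"), ",").getItem(0).alias("patient_id"),
    split(col("value"), ",").getItem(1).alias("vital_sign"),
    split(col("value"), ",").getItem(2).cast("double").alias("reading"),
    split(col("value"), ",").getItem(3).alias("ts"),
)

# Persist to HDFS so the analysis layer can read the archive.
query = (readings.writeStream.format("parquet")
         .option("path", "hdfs:///wbsn/readings")
         .option("checkpointLocation", "hdfs:///wbsn/checkpoints")
         .start())
query.awaitTermination()
```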
04117601
en
[ "shs.eco" ]
2024/03/04 16:41:26
2011
https://uca.hal.science/hal-04117601/file/a3136e97cd71e45cddd3c6c089b223ac.pdf
Nicolas Laroche, Alexandre Cabagnols, Pascale Hénaut and Pierre-Charles Romond

Chinese Returnee Entrepreneurs: The Essential Human Capital for a Chinese Innovative State

Nicholas Laroche 1,4, Alexandre Cabagnols 2, Pascale Hénaut 3 and P. Romond 1
1 Clermont Université, Université d'Auvergne, Clermont-Ferrand, France
2 Clermont Université, Université Blaise Pascal, Clermont-Ferrand, France
3 Soluscience, Biopole Clermont-Limagne, Saint-Beauzire, France
4 CNRS, Clermont-Ferrand, France

Abstract: Do Chinese entrepreneurs who have gained academic and/or professional experience abroad develop different commercial or technology strategies compared to their competitors? Are they a real enrichment to the Chinese industrial sector? Within a single country (China) we can distinguish two different populations of biotech enterprises: one set up and led by returnees (the majority of whom have been to the USA), the other by "mainland" Chinese entrepreneurs. Returnees are defined as "migrant entrepreneurs" who have left their country of origin for a certain period and then returned to it. They have been exposed to an occidental culture (mainly North American and European) and have benefited from an enrichment of their human capital by means of research visits or research fellowships at the best universities worldwide. In our study we compare the technological and product positioning of the two populations of enterprises. We distinguish between the enterprises positioned in mature market segments (which exploit already existing commercial opportunities) and those positioned in emerging market segments, which explore new opportunities. For the present paper we collected data on the Chinese biotech sector, studying 19 returnee-led companies and 23 "mainland Chinese" companies. Based on this sample, we test the influence of the head manager's status (returnee or not) on the intensity of their companies' innovative behaviour. The results show that the firms managed by returnees are more strongly positioned in emerging markets compared to the firms managed by mainland Chinese. We conclude that returnees in the Chinese biotech sector contribute to the diversification of the markets in which the Chinese industry is positioned and to its technological catching-up towards US standards.

Keywords: innovativeness / human capital / China / entrepreneurship / biotechnology / international mobility
04117659
en
[ "math.math-ap" ]
2024/03/04 16:41:26
2022
https://hal.science/hal-04117659/file/AC_proceedings_HYP2022.pdf
Debora Amadori email: [email protected]
Cleopatra Christoforou email: [email protected]

Weak solutions with bounded support to an Euler-type flocking model

We report on the existence and time-asymptotic flocking of weak solutions to a hydrodynamic model of flocking type with all-to-all interaction kernel in one space dimension. An appropriate notion of entropy weak solutions with bounded support is described, to capture the behavior of solutions to the Cauchy problem with any BV initial data that has finite total mass confined in a bounded interval and initial density uniformly positive therein. In addition, a suitable condition on the initial data is provided that allows us to show time-asymptotic flocking for such solutions.

Introduction

This article serves as an overview of the results obtained in [START_REF] Amadori | BV solutions for a hydrodynamic model of flocking type with all-to-all interaction kernel[END_REF] on weak solutions to the hydrodynamic model of flocking type in one space dimension that takes the form

∂_t ρ + ∂_x (ρv) = 0,
∂_t (ρv) + ∂_x (ρv² + p(ρ)) = ∫_R K(x, x′) ρ(x, t) ρ(x′, t) (v(x′, t) − v(x, t)) dx′    (1)

where (x, t) ∈ R × [0, +∞). Here ρ ≥ 0 stands for the mass variable, v for the velocity, p for the pressure and K = K(x, x′) ≥ 0 for a symmetric interaction kernel.

Self-organization is an area that has received a lot of attention in the research community, especially for systems such as flocks of birds, swarms of bacteria or schools of fish, and its study gives rise to new mathematical challenges. In an effort to understand the emergence of flocking behavior, many mathematical models have been introduced, arising from the inspiring work of Cucker and Smale [START_REF] Cucker | Emergent behavior in flocks[END_REF], which led to many subsequent studies; cf. [START_REF] Carrillo | Particle, kinetic, and hydrodynamic models of swarming, In Mathematical modeling of collective behavior in socio-economic and life sciences[END_REF][START_REF] Shvydkoy | Dynamics and Analysis of Alignment Models of Collective Behavior[END_REF] and the references therein. Most results on flocking models concern the particle level, the corresponding kinetic equation or its hydrodynamic formulation (cf. [START_REF] Ha | From particle to kinetic and hydrodynamic descriptions of flocking[END_REF][START_REF] Ha | A simple proof of the Cucker-Smale flocking dynamics and mean-field limit[END_REF][START_REF] Motsch | A new model for self-organized dynamics and its flocking behavior[END_REF][START_REF] Karper | Existence of weak solutions to kinetic flocking models[END_REF][START_REF] Ha | A global unique solvability of entropic weak solution to the one-dimensional pressureless Euler system with a flocking dissipation[END_REF][START_REF] Ha | A hydrodynamic model for the interaction of Cucker-Smale particles and incompressible fluid[END_REF][START_REF] Ha | Emergent dynamics for the hydrodynamic Cucker-Smale system in a moving domain[END_REF][START_REF] Karper | Hydrodynamic limit of the kinetic Cucker-Smale model[END_REF]) and, so far, this subject has been investigated mainly in the context of solutions with no discontinuities.

The Euler-type flocking system (1) with pressure

p(ρ) = α² ρ,  α > 0,    (2)

is rigorously derived as the hydrodynamic limit of the kinetic Cucker-Smale flocking model on (x, t, ω) ∈ (0, T) × R^d × R^d by Karper, Mellet and Trivisa [START_REF] Karper | Hydrodynamic limit of the kinetic Cucker-Smale model[END_REF].
In particular, they study the singular limit corresponding to strong noise and strong local alignment, with the alignment operator being the usual Cucker-Smale operator while a diffusion term is present in the kinetic equation, and show the convergence of weak solutions of the kinetic equation to strong (suitably smooth) solutions of the Euler system with pressure (2). We remark that, in the literature, hydrodynamic models for flocking are often described by a pressureless Euler system, obtained from a microscopic description of the particles' motion without a stochastic forcing. As a consequence, the system with pressure has received less attention than the pressureless one, in particular for weak solutions, which is the class of solutions we aim for. A result on smooth, space-periodic solutions to this model with pressure is established in [START_REF] Choi | The global Cauchy problem for compressible Euler equations with a nonlocal dissipation[END_REF].

In [START_REF] Amadori | BV solutions for a hydrodynamic model of flocking type with all-to-all interaction kernel[END_REF], we assume that the communication rate is of all-to-all type, that is

K(x, x′) ≡ 1.    (3)

Although this assumption, K = 1, simplifies the system and the nonlocal term turns into a local term, this special case still poses many obstacles from the analysis point of view when establishing the existence of solutions and time-asymptotic flocking. In addition, work on this special case indicates how the mechanism behind the dissipative behavior of the solutions works, and we expect this to be crucial in extending the analysis to general kernels.

We consider the Cauchy problem and assign initial data

(ρ, m)(x, 0) = (ρ_0(x), m_0(x)),  x ∈ R,    (4)

for the density ρ = ρ(x, t) and the momentum m = ρv. To capture the emergent behavior of self-organized systems, we assume that the initial total mass is confined in a bounded interval, the density is uniformly positive there, and the initial momentum m_0 vanishes outside that region, i.e. there exist a_0 < b_0 such that

supp{(ρ_0, m_0)} ⊂ I_0 = [a_0, b_0],  ess inf_{I_0} ρ_0 > 0,    (5)

while we use v_0 = m_0/ρ_0 only in I_0. To study weak solutions in the context of flocking, we need to introduce a proper notion of entropy weak solution for which the support at any given time t > 0 is bounded between the extremal particle paths; that is, there exist two absolutely continuous curves t → a(t), b(t), t ∈ [0, +∞), with

a(0) = a_0,  b(0) = b_0;  a(t) < b(t) for all t > 0    (6)

and

a′(t) = v(a(t)+, t),  b′(t) = v(b(t)−, t)  for a.e. t > 0,    (7)

such that

supp{(ρ, m)(•, t)} ⊂ I(t) = [a(t), b(t)],  t > 0.    (8)

The following notion of entropy weak solutions with concentration is motivated by the ad-hoc boundary condition: the vacuum region is connected with the non-vacuum one by a shock discontinuity. This choice is made in order to allow a sharp front with finite speed to arise, as expected in flocking. In this way, we exclude the case of a rarefaction connecting a vacuum region with a non-vacuum one, since in such a case, due to the pressure law (2), the front would not have a proper interpretation in terms of flocking because of the unbounded maximal speed.¹ We assume that

∬ {ρφ_t + mφ_x} dx dt = 0    (9)

holds true for all φ ∈ C^∞_0(R × (0, ∞)).

¹ For instance, if a rarefaction of the first family connects the two states (ρ̄, v̄) and (ρ̃, ṽ(ρ̃)), then using the pressure term (2) one has

ṽ(ρ̃) = v̄ − ∫_{ρ̄}^{ρ̃} (1/s) √(p′(s)) ds = v̄ − α ln(ρ̃/ρ̄),  0 < ρ̃ ≤ ρ̄.

As ρ̃ → 0+, then ṽ(ρ̃) → ∞.
Therefore, the Rankine-Hugoniot condition [m] = ẋ[ρ] must hold along discontinuities x(t), in particular along the free boundaries a(t) and b(t), which is consistent with (7). As a consequence, conservation of mass holds:

M(t) = ∫_R ρ(x, t) dx = ∫_{I(t)} ρ(x, t) dx = ∫_R ρ_0(x) dx =: M,  ∀ t ≥ 0.    (10)

The appropriate definition now follows:

Definition 1 Given the initial data (ρ_0, m_0) ∈ BV(R) satisfying (5), let (ρ, m) : R × [0, +∞) → R² be a function with the following properties:
• the map t → (ρ, m)(•, t) ∈ L¹_loc(R) ∩ BV(R) is continuous in L¹_loc;
• lim_{t→0+} (ρ, m)(•, t) = (ρ_0, m_0) in L¹_loc(R);
• there exist two locally Lipschitz curves t → a(t), b(t), t ∈ [0, +∞), and a value ρ_inf > 0 such that (6), (7), (8) hold and

ess inf_{I(t)} ρ(•, t) ≥ ρ_inf > 0,  ∀ t > 0.    (11)

Then (ρ, m) is an entropy weak solution with concentration along a(t) and b(t) of the problem (1), (4) with (2) and (3) if

(a) the integral identities (9) and

∬ { m φ_t + (m²/ρ + p(ρ)) φ_x } dx dt − ∬ [m M − ρ M_1(t)] φ dx dt − ∫_0^∞ [p(ρ(b(t)−, t)) φ(b(t), t) − p(ρ(a(t)+, t)) φ(a(t), t)] dt = 0    (12)

hold true on R × (0, ∞) for all test functions φ ∈ C^∞_0(R × (0, ∞)), with M the conserved total mass given at (10),

M_1(t) = ∫_R m(x, t) dx + P_b(t) − P_a(t),    (13)

and

P_b(t) := ∫_0^t e^{−M(t−s)} p(ρ(b(s)−, s)) ds,   P_a(t) := ∫_0^t e^{−M(t−s)} p(ρ(a(s)+, s)) ds;    (14)

(b) the solution is entropy weak for every pair of convex entropy-entropy flux functions (η, q) for the system (1), i.e. the inequality

∂_t η(ρ, m) + ∂_x q(ρ, m) ≤ η_m {ρ M_1(t) − m M}

holds in the sense of distributions on the open set Ω = {(x, t); t > 0, x ∈ (a(t), b(t))} ⊂ R × (0, +∞).

As a consequence of the boundary condition above, and in conjunction with the above definition of weak solutions, we deduce a conservation of momentum property. More precisely, we introduce a quantity that we call the total momentum m̄, with two delta shocks supported on the free boundaries a(t) and b(t); that is, the distribution

m̄(•, t) := m(•, t) + δ_{b(t)} P_b(t) − δ_{a(t)} P_a(t),  t > 0,    (15)

where δ_a denotes the Dirac delta distribution on R with mass at x = a. This new singularity of the total momentum m̄ along the free boundaries a(t) and b(t) is known as a delta shock; references can be found in Dafermos [START_REF] Dafermos | Hyperbolic Conservation Laws in Continuum Physics, Fourth Edition[END_REF], Chapter 9. In what follows, we use the standard notation ⟨•, •⟩:

⟨m̄(•, t), φ(•, t)⟩ := ∫_{I(t)} m(x, t) φ(x, t) dx + P_b(t) φ(b(t), t) − P_a(t) φ(a(t), t),  t > 0,

as the value of the functional m̄ over C^∞_0, for all test functions φ ∈ C^∞_0(R × R_+), noting here that m = 0 for x ∉ I(t). Now, by definition (14), we observe that

P_b′(t) + M P_b(t) = p(ρ(b(t)−, t)),

and therefore

∫_0^∞ ⟨δ_{b(t)} P_b(t), φ_t(•, t) − Mφ⟩ dt = − ∫_0^∞ p(ρ(b(t)−, t)) φ(b(t), t) dt.
Thus, we are led to the identity:

∫_0^∞ ⟨m̄(•, t), φ_t(•, t) − Mφ⟩ dt = ∬_Ω m (φ_t − Mφ) dx dt − ∫_0^∞ [p(ρ(b(t)−, t)) φ(b(t), t) − p(ρ(a(t)+, t)) φ(a(t), t)] dt.

Hence, the integral identity (12) reduces to:

∫_0^∞ ⟨m̄(•, t), φ_t(•, t)⟩ dt + ∬ (m²/ρ + p(ρ)) φ_x dx dt + ∬ ρ M_1(t) φ dx dt − M ∫_0^∞ ⟨m̄(•, t), φ⟩ dt = 0.

If we choose the test function φ(x, t) = φ_1(x) ψ(t), with φ_1(x) = 1 for all x ∈ ∪_{t∈[T_1,T_2]} I(t) and ψ(t) = 0 for t ∉ [T_1, T_2], with 0 < T_1 < T_2, and notice that M_1(t) = ⟨m̄(•, t), φ_1⟩, we get M_1(T_2) − M_1(T_1) = 0. Since this holds true for arbitrary times 0 < T_1 < T_2, we deduce the conservation of the (extended) momentum M_1(t). By the time continuity of ∫ m(•, t) dx and the definition (14), we conclude that M_1(t) = M_1(0+), that is,

∫_{I(t)} m(x, t) dx + P_b(t) − P_a(t) = ∫_R m_0(x) dx =: M_1,  ∀ t ≥ 0.    (16)

Theorems

In this section, we state the main results of [START_REF] Amadori | BV solutions for a hydrodynamic model of flocking type with all-to-all interaction kernel[END_REF]. More precisely:

- Theorem 1 in Section 2.1 concerns the global-in-time existence of solutions obeying the notion introduced in Definition 1, without any restriction on the size of the total variation of the initial data, which conserve mass and momentum while the non-vacuum region Ω is separated from the vacuum ones by the two free boundaries, alongside the delta shocks present in the total momentum;
- Theorem 2 in Section 2.2 states that, under an appropriate condition on the initial data, the solution admits time-asymptotic flocking.

Global Existence

First, we state the result on the global-in-time existence of entropy weak solutions with concentration to (1) with bounded support.

Theorem 1 Assume that the initial data (ρ_0, m_0) ∈ BV(R) satisfy (5), with pressure (2). Then the Cauchy problem (1), (4) with (3) admits an entropy weak solution with concentration (ρ, m) in the sense of Definition 1. Moreover, conservation of mass (10) and of momentum (16) hold true.

From the definition of the solution (ρ, m), we clarify that we use the variable v only in the support I(t) for all t > 0, where v = m/ρ is well defined, while only the variables ρ and m are used in the complement of I(t). Now, by conservation of mass and momentum, we have the values M and M_1 in (10) and (16) respectively, and the average velocity v̄ defined by

v̄ = M_1/M.    (17)

By means of (10) and (16), system (1) rewrites as

∂_t ρ + ∂_x (ρv) = 0,
∂_t (ρv) + ∂_x (ρv² + p(ρ)) = −Mρ(v − v̄).    (18)

Thus, to prove Theorem 1 it is equivalent to establish an entropy weak solution with concentration to (18), for the initial data assumed in the theorem, that conserves mass and momentum.
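To make the reduced system (18) concrete, here is a minimal numerical sketch (our addition, not the front-tracking scheme of [1]): a Lax-Friedrichs discretization on a periodic domain with uniformly positive density, so that the free boundaries play no role; one observes the decay of the velocity oscillation driven by the relaxation term:

```python
import numpy as np

# Lax-Friedrichs sketch for (18):
#   rho_t + m_x = 0,  m_t + (m^2/rho + a^2 rho)_x = -M (m - rho*vbar).
# Grid, data and boundary conditions are illustrative assumptions.
a = 1.0                                  # sound speed alpha in p(rho) = alpha^2 rho
N, L, T = 400, 1.0, 0.5
dx = L / N
x = np.linspace(0.0, L, N, endpoint=False)

rho = 1.0 + 0.2 * np.sin(2 * np.pi * x)  # uniformly positive initial density
m = rho * (0.3 * np.cos(2 * np.pi * x))  # initial momentum

M = np.sum(rho) * dx                     # conserved total mass, cf. (10)
vbar = np.sum(m) * dx / M                # average velocity, cf. (17)

t = 0.0
while t < T:
    dt = 0.4 * dx / (np.max(np.abs(m / rho)) + a)   # CFL bound
    f1, f2 = m, m**2 / rho + a**2 * rho
    # Lax-Friedrichs update with periodic neighbours (illustrative b.c.)
    rho = 0.5 * (np.roll(rho, 1) + np.roll(rho, -1)) \
        - 0.5 * dt / dx * (np.roll(f1, -1) - np.roll(f1, 1))
    m = 0.5 * (np.roll(m, 1) + np.roll(m, -1)) \
        - 0.5 * dt / dx * (np.roll(f2, -1) - np.roll(f2, 1))
    m += dt * (-M * (m - rho * vbar))    # relaxation source term of (18)
    t += dt

print("velocity oscillation:", np.ptp(m / rho))   # decays in time
```

The free-boundary problem itself requires the Lagrangian transformation described below; the periodic setting here only illustrates the dissipative role of the right-hand side.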
Now, system (18) belongs to the class of systems of balance laws, and the Cauchy problem for strictly hyperbolic systems of balance laws has been studied in [START_REF] Dafermos | A system of hyperbolic conservation laws with frictional damping[END_REF][START_REF] Dafermos | Hyperbolic systems of balance laws with weak dissipation[END_REF] under appropriate conditions. However, the previous results do not apply to system (18) because of the large data and the loss of strict hyperbolicity due to the vacuum present in R \ [a_0, b_0]. For the notion of the physical vacuum boundary for (18), we refer to [START_REF] Liu | Compressible flow with damping and vacuum[END_REF][START_REF] Liu | Compressible flow with vacuum and physical singularity[END_REF]. Concerning the asymptotic behavior of the solutions, it is conjectured to obey the porous media equation; for the case of the pressure (2) we refer to [START_REF] Huang | Asymptotic behavior of the solutions to the damped compressible Euler equations with vacuum[END_REF], where, however, the initial density ρ_0 tends to a positive value as x → ±∞, which is not our case. Other works [START_REF] Huang | Convergence to the Barenblatt solution for the compressible Euler equations with damping and vacuum[END_REF][START_REF] Huang | L 1 convergence to the Barenblatt solution for compressible Euler equations with damping[END_REF] study the system with the pressure p(ρ) = ρ^γ, γ > 1, and therefore their analysis cannot be applied here either.

One of the main challenges in constructing a convergent approximate sequence for system (18), (4) is the loss of strict hyperbolicity. To overcome this, we transform the problem into Lagrangian coordinates in the spirit of Wagner [START_REF] Wagner | Equivalence of the Euler and Lagrangian equations of gas dynamics for weak solutions[END_REF]. Recasting system (18) from the Eulerian variables (ρ(x, t), v(x, t)) into the Lagrangian variables (u(y, t), v(y, t)), we get the equations

∂_τ u − ∂_y v = 0,
∂_τ v + ∂_y (α²/u) = −M(v − v̄)    (19)

on the domain {(y, t); t ≥ 0, y ∈ (0, M)}. However, the equivalence of weak solutions between the Eulerian and Lagrangian formulations does not follow from [START_REF] Wagner | Equivalence of the Euler and Lagrangian equations of gas dynamics for weak solutions[END_REF] because of the finite mass condition (5) in our case. Actually, because of this, the problem in Lagrangian coordinates is no longer a Cauchy problem but a boundary value problem, and further difficulties arise. Now, system (19) has been studied, but mostly in the context of the Cauchy problem or with different boundary conditions; cf.
[START_REF] Nishida | Global solution for an initial boundary value problem of a quasilinear hyperbolic system[END_REF][START_REF] Dafermos | A system of hyperbolic conservation laws with frictional damping[END_REF][START_REF] Luo | Global BV solutions to a p-system with relaxation[END_REF][START_REF] Amadori | Global BV solutions and relaxation limit for a system of conservation laws[END_REF][START_REF] Frid | Initial-boundary value problems for conservation laws[END_REF]. Our strategy to prove Theorem 1 is as follows. First, using the Riemann solution to (18) around the vacuum that is admissible for the flocking model, together with the definition of weak solution with concentration, we recast the problem (18), (4) in Lagrangian variables as (19) with non-reflecting boundary conditions at y = 0, M. Actually, the boundary conditions are expressed by the fact that, when a wave-front reaches the boundary, there is no resulting emitted wave. This is the natural counterpart of the behavior of the free boundaries that, in Eulerian variables, delimit the non-vacuum region. Next, we construct approximate solutions (u^ν, v^ν) to (19), (4) using the front tracking algorithm (cf. Bressan [START_REF] Bressan | Hyperbolic Systems of Conservation Laws -The one-dimensional Cauchy problem[END_REF] and Holden-Risebro [START_REF] Holden | Front Tracking for Hyperbolic Conservation Laws[END_REF]), define appropriate Lyapunov functionals and show that the total variation in space and time of the approximate sequence remains bounded. Combining these results, we prove convergence to an entropy weak solution of the system in Lagrangian coordinates. The last step is to transfer this analysis to problem (18) in Eulerian coordinates. We work at the level of the approximate solutions to show the equivalence between Eulerian and Lagrangian variables, in the spirit of [START_REF] Wagner | Equivalence of the Euler and Lagrangian equations of gas dynamics for weak solutions[END_REF], within the domain I(t) = [a(t), b(t)] where no vacuum is present; see Fig. 1.

Fig. 1 On the left, the domain in Lagrangian variables for (u^ν, v^ν); on the right, the domain in Eulerian variables for (ρ^ν, v^ν).

In this way, we construct the approximate solutions (ρ^ν, v^ν) to (18), which inherit the convergence property from the change of coordinates and are appropriately extended to the half-plane. Moreover, this coordinate transformation, together with the approximation scheme, allows us to pass from the non-reflecting boundary conditions at y = 0, M to the free boundaries a(t) < b(t) in Eulerian coordinates. Finally, it is shown that, in the limit, (ρ^ν, v^ν) converges to an entropy weak solution with concentration that conserves mass and (extended) momentum.

Time-Asymptotic Flocking

Another important issue we address, besides the global-in-time existence, is the long-time behavior of the entropy weak solution with concentration to (1), (4) with (2), which is interesting in the context of self-organization. The terminology "flocking" corresponds to the phenomenon in which self-organized individuals, using only limited environmental information and simple rules, organize into an ordered motion.
In the spirit of [START_REF] Ha | From particle to kinetic and hydrodynamic descriptions of flocking[END_REF][START_REF] Ha | A simple proof of the Cucker-Smale flocking dynamics and mean-field limit[END_REF], this behavior is captured in the following definition:

Definition 2 We say that the entropy weak solution with concentration (ρ, m)(x, t) to system (1) admits time-asymptotic flocking if:

1. the support I(t) of the solution remains bounded for all times, i.e.

sup_{0≤t<∞} {b(t) − a(t)} < ∞;    (20)

2. the velocity satisfies

lim_{t→∞} ess sup_{x_1, x_2 ∈ I(t)} |v(x_1, t) − v(x_2, t)| = 0.    (21)

Indeed, condition (20) assures that the support of the solution is uniformly bounded, thus defining the "flock", while condition (21) yields that alignment occurs, i.e. the diameter of the set of velocity states within the support I(t) goes to zero time-asymptotically. We observe that solutions obtained in Theorem 1 satisfy condition (20) immediately, by combining (8), (11) and (10). Therefore, to establish the time-asymptotic flocking property it suffices to show (21), or equivalently that

lim_{t→∞} ess sup_{x ∈ I(t)} |v(x, t) − v̄| = 0,

where v̄ is defined at (17). Assuming that the initial data satisfy

e^{2q} M² < α max{ρ_0(a_0+), ρ_0(b_0−)},    (22)

where q stands for the initial bulk:

q := (1/2) TV{ln(ρ_0)} + (1/(2α)) TV{v_0},    (23)

with TV the total variation over the support I_0, we show that flocking occurs. In fact, if the initial bulk q is controlled by the initial density at the endpoints a_0 and b_0 according to (22), then the velocity decays to v̄ at a rate exponential in time. The result is:

Theorem 2 Let (ρ, m) be the entropy weak solution with concentration to (1), (2), (4) with (3), as obtained in Theorem 1, with initial data (ρ_0, m_0) ∈ BV(R) satisfying (5) and with q > 0 given in (23). Suppose that (22) holds true; then the solution (ρ, m) admits time-asymptotic flocking. More precisely, the oscillation of the velocity decays exponentially fast, i.e. there exists t_0 > 0 such that

ess sup_{x_1, x_2 ∈ I(t)} |v(x_1, t) − v(x_2, t)| ≤ C_2 e^{−C_1 t},  ∀ t ≥ t_0,    (24)

for some positive constants C_1, C_2.

Our strategy to prove Theorem 2 involves a careful analysis of the long-time behavior of the approximate solutions constructed in the existence part, using the results about wave strength dissipation for the system of isothermal flow, that is, the homogeneous version of (1)-(2), as obtained in [START_REF] Amadori | Global BV solutions and relaxation limit for a system of conservation laws[END_REF][START_REF] Amadori | On a model of multiphase flow[END_REF][START_REF] Amadori | Global weak solutions for a model of two-phase flow with a single interface[END_REF]. These properties are used to provide uniform bounds on the vertical traces of the approximate solutions and for the time-asymptotic analysis.
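As an aside (our addition), the hypothesis (22) of Theorem 2 is straightforward to evaluate on sampled initial data; a small sketch, in which the total variation is approximated by first differences on a uniform grid:

```python
import numpy as np

def flocking_hypothesis(rho0, v0, alpha, dx):
    """Evaluate the bulk q of (23) and the condition (22) for initial
    data sampled on a uniform grid; rho0[0] and rho0[-1] stand for
    rho_0(a_0+) and rho_0(b_0-)."""
    q = 0.5 * np.sum(np.abs(np.diff(np.log(rho0)))) \
        + np.sum(np.abs(np.diff(v0))) / (2.0 * alpha)
    M = np.sum(rho0) * dx
    holds = np.exp(2.0 * q) * M**2 < alpha * max(rho0[0], rho0[-1])
    return q, holds      # holds == True means Theorem 2 applies
```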
We also introduce a new functional, the total variation weighted by generation order,

V(t) = Σ_{k≥1} ξ^k F_k(t),

for ξ ≥ 1 taking values in an appropriate interval, F_k(t) being the total variation functional of generation k. This allows us to detect a geometric decay property in terms of the number of wave reflections for the homogeneous system, and it turns out that, under assumption (22), we can control the possible increase of the functional due to the reflections produced by the damping term, leading to the time-exponential decay established in (24). In a work in progress, we are going to relax the sufficient condition (22) to allow for time-asymptotic flocking with any (ρ_0, m_0) ∈ BV(R) that satisfies (5).

Acknowledgements The research of paper [1] was partially supported by the 2020 INdAM-GNAMPA Project "Buona positura, regolarità e controllo per alcune equazioni di evoluzione". The first author kindly acknowledges the hospitality of the University of Cyprus, where this work was started.
00411770
en
[ "phys.hphe", "phys.hthe" ]
2024/03/04 16:41:26
2009
https://hal.science/hal-00411770/file/freeAbelian.pdf
M L Walker email: [email protected]

MeV Mass Gluonic Colour Singlets in QCD

Keywords: QCD, Cho-Faddeev-Niemi-Shabanov decomposition, monopole condensate
PACS: 12.38.-t, 12.38.Aw, 12.39.Mk, 14.70.Dj

Introduction

Identifying the internal Abelian directions has been of interest to studies of the QCD vacuum since Savvidy's landmark paper [START_REF] Savvidy | Infrared instability of the vacuum state of gauge theories and asymptotic freedom[END_REF] demonstrating the energetic favourability of a magnetic condensate. This led to a long-running controversy surrounding the condensate's stability [START_REF] Nielsen | An unstable yang-mills field mode[END_REF][START_REF] Kondo | Non-abelian stokes theorem and quark confinement in su(n) yang-mills gauge theory[END_REF][START_REF] Honerkamp | The question of invariant renormalizability of the massless yang-mills theory in a manifest covariant approach[END_REF][START_REF] Volker Schanbacher | Gluon propagator and effective lagrangian in QCD[END_REF][START_REF] Dittrich | Effective qcd lagrangian with zeta function regularization[END_REF][START_REF] Flory | Covariant constant chromomagnetic fields and elimination of the one loop instabilities[END_REF], with recent papers [START_REF] Flory | Covariant constant chromomagnetic fields and elimination of the one loop instabilities[END_REF][START_REF] Cho | Monopole condensation in su(2) qcd[END_REF][START_REF] Cho | Monopole condensation and confinement of color in su(2) qcd[END_REF][START_REF] Cho | Stability of monopole condensation in su(2) qcd[END_REF][START_REF] Kondo | Magnetic condensation, abelian dominance and instability of savvidy vacuum[END_REF][START_REF] Kay | Savvidy vacuum in su(2) yangmills theory[END_REF] concluding in the positive. What concerns this work is the manner in which the necessarily Abelian internal direction(s) of the condensate were identified. Two-colour studies typically assigned the Abelian direction to ê_3 [START_REF] Savvidy | Infrared instability of the vacuum state of gauge theories and asymptotic freedom[END_REF][START_REF] Nielsen | An unstable yang-mills field mode[END_REF][START_REF] Honerkamp | The question of invariant renormalizability of the massless yang-mills theory in a manifest covariant approach[END_REF][START_REF] Volker Schanbacher | Gluon propagator and effective lagrangian in QCD[END_REF][START_REF] Hooft | Topology of the gauge condition and new confinement phases in nonabelian gauge theories[END_REF], in a blatant violation of gauge invariance that always left doubts that the calculated effects might be gauge artifacts. A further defect was that these papers were unable to prove that the magnetic background is due to monopoles. These problems are avoided by the Cho-Faddeev-Niemi-Shabanov decomposition [START_REF] Cho | Colored monopoles[END_REF][START_REF] Faddeev | Decomposing the yang-mills field[END_REF][START_REF] Shabanov | An effective action for monopoles and knot solitons in yang-mills theory[END_REF], which identifies the Abelian directions without choosing a special gauge. It does this by introducing the Cho connection, a topologically generated contribution to the gluon field which represents [START_REF] Cho | A restricted gauge theory[END_REF] a monopole potential. Thus the problems of gauge invariance and of demonstrating the magnetic condensate to be of monopole origin are solved simultaneously.
Identifying the Abelian degrees of freedom in a gauge-invariant manner allows one to consider them as physical entities, and not as gauge artifacts. Furthermore, these particular physical entities are colour-neutral, and the primary claim of this paper is that they are not confined. We therefore refer to them as Free Abelian Gluons (FAGs). Section 2 presents the CFNS decomposition for general SU(N) gauge groups. Section 3 justifies the claim that the Abelian generators are colourless and unconfined, uses the condensate coupling and dimensional arguments to estimate the FAG's mass, and then goes on to discuss other properties such as stability and decay modes. Some experimental signatures are proposed. Since the FAG mass is found to be of the same order as the pion mass, section 4 discusses the ramifications for the internucleon potential. It is suggested that several qualitative features of this potential are easier to understand in terms of a FAG contribution. The paper concludes with a discussion in section 5.

Specifying Abelian Directions

The CFNS decomposition was first presented by Cho [START_REF] Cho | A restricted gauge theory[END_REF], and later by Faddeev and Niemi [START_REF] Faddeev | Decomposing the yang-mills field[END_REF] and by Shabanov [START_REF] Shabanov | An effective action for monopoles and knot solitons in yang-mills theory[END_REF], as a gauge-invariant means of specifying the Abelian dynamics of two-colour QCD. These authors [START_REF] Cho | Colored monopoles[END_REF][START_REF] Faddeev | Decomposing the yang-mills field[END_REF] also applied it to three-colour QCD. In this section we adapt it to general SU(N), although we are not the first to do so [START_REF] Faddeev | Partial duality in su(n) yang-mills theory[END_REF][START_REF] Li | Decomposition of su(n) connection and effective theory of su(n) qcd[END_REF], and establish our notation.

The Lie group SU(N) for N-colour QCD has N² − 1 generators λ^(i), of which N − 1 are Abelian generators Λ^(i). For simplicity, we specify the gauge-transformed Abelian directions with n̂_i = U† Λ^(i) U. Fluctuations in the n̂_i directions are described by c^(i)_µ. The gauge field of the covariant derivative which leaves the n̂_i invariant is implicitly defined by

g V_µ × n̂_i = −∂_µ n̂_i,    (1)

for which the general form is

V_µ = c^(i)_µ n̂_i + B_µ,   B_µ = g^(−1) ∂_µ n̂_i × n̂_i,    (2)

where summation is implied over i. We define the covariant derivative

D̂_µ = ∂_µ + g V_µ ×.    (3)

It is easily shown that the monopole field strength

H_µν = ∂_µ B_ν − ∂_ν B_µ + g B_µ × B_ν    (4)

has only n̂_i components, i.e.

H^(i)_µν n̂_i = H_µν,    (5)

where H^(i)_µν has the eigenvalue H^(i). Since we are only concerned with magnetic backgrounds, H^(i) is considered the magnitude of a background magnetic field H^(i). The field X_µ contains the dynamical degrees of freedom (DOF) perpendicular to n̂_i, so if A_µ is the gluon field then

A_µ = V_µ + X_µ = c^(i)_µ n̂_i + B_µ + X_µ,    (6)

where

X_µ ⊥ n̂_i.    (7)

This appears to leave the gluon field with additional DOF due to n̂_i, B_µ, but detailed analyses can be found in [START_REF] Cho | Monopole condensation in su(2) qcd[END_REF][START_REF] Kondo | Magnetic condensation, abelian dominance and instability of savvidy vacuum[END_REF][START_REF] Bae | Qcd versus skyrme-faddeev theory[END_REF][START_REF] Kondo | Brst symmetry of su(2) yang-mills theory in cho-faddeev-niemi decomposition[END_REF] demonstrating that these fields are not fundamental, but a compound of dynamic fields.
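As a quick consistency check (our addition, for the SU(2) case where there is a single Abelian direction n̂), one can verify that the restricted potential (2) indeed satisfies (1):

```latex
% SU(2) check that V_mu of eq. (2) satisfies eq. (1), i.e. \hat D_\mu \hat n = 0:
\begin{align*}
g V_\mu \times \hat n
  &= g\,c_\mu\,(\hat n \times \hat n)
   + (\partial_\mu \hat n \times \hat n) \times \hat n \\
  &= (\partial_\mu \hat n \cdot \hat n)\,\hat n
   - (\hat n \cdot \hat n)\,\partial_\mu \hat n
   \;=\; -\,\partial_\mu \hat n ,
\end{align*}
% using (a \times b) \times c = (a \cdot c)\,b - (b \cdot c)\,a together with
% \hat n \cdot \hat n = 1, which implies \hat n \cdot \partial_\mu \hat n = 0.
% Hence \hat D_\mu \hat n = \partial_\mu \hat n + g V_\mu \times \hat n = 0.
```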
Hence $\hat{n}_i$, $\mathbf{B}_\mu$ are dynamic but do not constitute extra DOFs. Substituting the CFN decomposition into the QCD field strength tensor gives
\[
\begin{aligned}
\mathbf{F}^2 ={}& (\partial_\mu c^{(i)}_\nu - \partial_\nu c^{(i)}_\mu)^2 + (\partial_\mu \mathbf{B}_\nu - \partial_\nu \mathbf{B}_\mu + g \mathbf{B}_\mu \times \mathbf{B}_\nu)^2 \\
&+ 2 (\partial_\mu c^{(i)}_\nu - \partial_\nu c^{(i)}_\mu)\, \hat{n}_i \cdot (\partial_\mu \mathbf{B}_\nu - \partial_\nu \mathbf{B}_\mu + g \mathbf{B}_\mu \times \mathbf{B}_\nu) + (\hat{D}_\mu \mathbf{X}_\nu - \hat{D}_\nu \mathbf{X}_\mu)^2 \\
&+ 2g \big( (\partial_\mu c^{(i)}_\nu - \partial_\nu c^{(i)}_\mu)\hat{n}_i + \partial_\mu \mathbf{B}_\nu - \partial_\nu \mathbf{B}_\mu + g \mathbf{B}_\mu \times \mathbf{B}_\nu \big) \cdot (\mathbf{X}_\mu \times \mathbf{X}_\nu) \\
&+ g^2 (\mathbf{X}_\mu \times \mathbf{X}_\nu)^2 + 2g (\hat{D}_\mu \mathbf{X}_\nu - \hat{D}_\nu \mathbf{X}_\mu) \cdot (\mathbf{X}_\mu \times \mathbf{X}_\nu). 
\end{aligned} \tag{8}
\]
This expression holds for all $N$-colour QCD except $N = 2$, where the last term vanishes. The kinetic terms for $c^{(i)}_\mu$ are unmistakably those of Abelian fields. Eq. (8) has its analogue in studies [START_REF] Savvidy | Infrared instability of the vacuum state of gauge theories and asymptotic freedom[END_REF][START_REF] Nielsen | An unstable yang-mills field mode[END_REF][START_REF] Flory | Covariant constant chromomagnetic fields and elimination of the one loop instabilities[END_REF][START_REF] Flyvbjerg | Improved qcd vacuum for gauge groups su(3) and su(4)[END_REF] utilising the maximal Abelian gauge (MAG). However, dependence on a particular gauge casts a shadow on any analysis and makes it impossible to consider the corresponding DOFs as physically significant. The CFN decomposition also introduces additional gauge DOFs, which a proper application must fix. This is done by imposing the gauge condition
\[ \hat{D}_\mu \mathbf{X}^\mu = 0, \tag{9} \]
[START_REF] Flory | Covariant constant chromomagnetic fields and elimination of the one loop instabilities[END_REF][START_REF] Cho | Monopole condensation and confinement of color in su(2) qcd[END_REF] which also reduces the number of gauge degrees of freedom to that of conventional QCD [START_REF] Kondo | Brst symmetry of su(2) yang-mills theory in cho-faddeev-niemi decomposition[END_REF]. These analyses were performed in two-colour QCD, but their application to $N$ colours is straightforward [START_REF] Walker | Stability of the magnetic monopole condensate in three-and four-colour qcd[END_REF]. The most important advantage of the CFNS decomposition for the purpose of this paper is that the Abelian dynamics can be specified in a gauge-invariant, well-defined manner that makes it physically meaningful to say that the fields $c^{(i)}_\mu$ describe the Abelian component of the gluon field. This is in contrast to the MAG, which is not gauge invariant and in which the physical meaning of the Abelian direction is not well-defined; the MAG depends on confinement to hide the gauge artifact, whereas the CFNS decomposition has no gauge artifact.

The Properties of FAGs

An Abelian gluon has no colour charge, just as a photon has no electric charge. It therefore feels no confining potential, unlike quarks and the valence gluons $\mathbf{X}_\mu$, which are coloured. I shall now argue on dimensional grounds that FAGs must be massive, which renders their effects short-range. The propagation of FAGs outside of a hadron, through the monopole condensate, is like that of photons in a conventional superconductor, which are well known to gain an effective mass from the same Cooper pairs which restrict magnetic fields to flux tubes. Indeed, eq. (8) contains the term
\[ 2 (\partial_\mu c^{(i)}_\nu - \partial_\nu c^{(i)}_\mu)\, \hat{n}_i \cdot (\partial_\mu \mathbf{B}_\nu - \partial_\nu \mathbf{B}_\mu + g \mathbf{B}_\mu \times \mathbf{B}_\nu), \tag{10} \]
clearly indicating that the monopole condensate does indeed act as a sink/source for Abelian gluons.
Significantly, there is no corresponding sink/source term for the valence gluons, although it has been argued [START_REF] Kondo | Magnetic condensation, abelian dominance and instability of savvidy vacuum[END_REF][START_REF] Walker | Stability of the magnetic monopole condensate in three-and four-colour qcd[END_REF] that their mass-gap term is generated from the interaction term
\[ (\partial_\mu \mathbf{B}_\nu - \partial_\nu \mathbf{B}_\mu + g \mathbf{B}_\mu \times \mathbf{B}_\nu) \cdot (\mathbf{X}_\mu \times \mathbf{X}_\nu). \tag{11} \]
The corresponding mass gap can be estimated on dimensional grounds. The monopole condensate neutralises the magnetic component of a FAG unless it oscillates faster than the characteristic time of the condensate. This characteristic time would of course go to infinity when the condensate vanishes at deconfinement. Hence the lower limit on a FAG's period can be estimated from the QCD critical temperature using a suitable combination of dimensional constants. Recent lattice calculations ([START_REF] Aoki | The QCD transition temperature: results with physical masses in the continuum limit II[END_REF] and references therein) of $T_{\mathrm{QCD}}$ vary between 151 and 195 MeV. This should be compared to the $\pi^0$ mass of 135 MeV and the $\pi^\pm$ mass of 140 MeV [START_REF] Amsler | Review of particle physics[END_REF]. Hence an Abelian gluon can propagate outside of a hadron only if it has sufficient energy to overcome the mass gap imposed by the QCD vacuum. It would then have properties very similar to the $Z^0$, except that it couples only to quarks. This coupling to quarks is a source of instability. Any gluon can couple to a quark-antiquark pair, and from there to a photon and an $e^+e^-$ pair. Another decay mode is into a $\pi^0$ with a photon to conserve angular momentum. A more interesting process is the interception of a FAG by a virtual pion emitted by a hadron. The virtual pion could absorb the FAG and use its energy to become real while emitting a photon. Note that both neutral and charged pions can participate in this reaction, so a proton could stimulate a FAG to become a $\pi^0$ and emit a photon, or to become a $\pi^+$ (and emit a photon) while turning itself into a neutron. While the energy of a FAG is certainly accessible, hadron collisions at this energy are dominated by jets. Indeed, the mass was derived from the deconfinement temperature, so one should not expect the necessary energy to be available in a stable hadron. One possible source is a quark-gluon plasma. It must have sufficient energy by definition, so a quark-gluon plasma, surrounded by normal space, could lower its temperature by emitting FAGs.

The internucleon potential

Since hadrons are colour neutral, the inter-hadron effect of FAGs is a massive, i.e. exponentially decaying, van der Waals interaction. At the simplest level of understanding, mesons are genuine dipoles in this respect, being quark-antiquark pairs, while baryons are more complicated objects. Nonetheless their colour polarization can be crudely modelled by observing that the combination of any two colours is the opposite of the remaining colour. Hence the baryon contains three axes of polarization, of which only two are linearly independent. We therefore expect that two baryons in close proximity to each other will attempt to align their polarization axes, as shown schematically in figure 1. This can lead to complicated behaviour for several reasons, apart from the inherent difficulty of nonperturbative systems.
One is that the exchange of valence gluons between quarks will alter a hadron's colour polarizations by swapping colours around. This suggests an additional mechanism for the loss of nucleon mass in the atomic nucleus, in addition to the sharing of virtual pions. An energetically favourable configuration of nucleonic colour polarizations will suppress the valence gluon exchanges which would otherwise disturb it, thus inhibiting a significant contribution to the baryons' mass. Another source of complicated behaviour is the difficulty of aligning multiple, coupled polarization axes when more than two hadrons are in close proximity. This would obviously contribute to the three-body effects of the internucleon force. While further analysis is needed, colour polarization may explain this naturally.

Discussion

I have made a case for MeV-mass gluonic colour singlets. It is based on the observation that two of the eight gluon generators in three-colour QCD are without colour charge, and that it is colour which is confined. The arguments require the CFNS decomposition to identify the Abelian directions in a gauge-invariant way. Without this it is impossible to claim that the Abelian degrees of freedom have real physical meaning. It has been noted that the analysis is not sensitive to the number of colours, with one Abelian degree of freedom in the two-colour case and $N - 1$ of them for $N$-colour QCD. An attractive feature of the CFNS decomposition, which makes it useful in dual-Meissner-effect studies, is that it unambiguously identifies the gluon's monopole degrees of freedom [START_REF] Faddeev | Decomposing the yang-mills field[END_REF][START_REF] Shabanov | An effective action for monopoles and knot solitons in yang-mills theory[END_REF][START_REF] Cho | A restricted gauge theory[END_REF]. It is furthermore easily shown, at least to one-loop order, that the corresponding monopole condensate is non-zero [START_REF] Savvidy | Infrared instability of the vacuum state of gauge theories and asymptotic freedom[END_REF][START_REF] Flyvbjerg | Improved qcd vacuum for gauge groups su(3) and su(4)[END_REF]. Especially important for this work, a term describing the condensate acting as a sink/source appears. This not only allows the condensate to restrict the chromoelectric flux to flux tubes as required by the dual Meissner effect, but also provides a mass gap for unconfined gluon fluctuations. A generic dimensional analysis then predicts a mass of the same order as, but slightly higher than, that of the pion. Thus one expects Abelian gluon exchange to also contribute to the internucleon potential. The resulting potential is essentially a massive van der Waals interaction, although the form of its repulsive component at short distances can only be guessed at. Such a potential implies a mutual colour polarisation which is expected to be affected by the addition of a third nucleon to the immediate vicinity, an effect which has long been observed in the internucleon potential. While it is to be expected that the pion-nucleon coupling would be complicated, the possibility remains that a FAG contribution to the internucleon potential might simplify its analysis. While their deconfinement-scale mass gap makes FAG production by hadron scattering unlikely, a quark-gluon plasma of sufficient size could well employ FAG emission as a means of lowering its temperature. The key signature marking the existence of FAGs is the catalysis of pion production by hadrons in the vicinity of a quark-gluon plasma.
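For concreteness, the dimensional estimate of section 3 can be written out explicitly. The identification of the FAG mass gap with the deconfinement temperature is an order-of-magnitude assumption rather than a derived relation:

```latex
% Back-of-envelope form of the dimensional argument (order of magnitude only).
% In natural units (hbar = c = k_B = 1) the condensate's characteristic time is
% tau ~ 1/T_QCD, so a FAG escapes neutralisation only if its frequency, and
% hence its mass, exceeds this scale:
\[
  m_{\mathrm{FAG}} \sim T_{\mathrm{QCD}} \approx 151\text{--}195~\mathrm{MeV}
  \;\gtrsim\; m_{\pi^0} = 135~\mathrm{MeV}, \quad m_{\pi^\pm} = 140~\mathrm{MeV}.
\]
```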
Figure 1: Simplistic representation of induced colour polarization generated by FAG exchange between two hadrons. Note that even if the colours are changed or mixed by a gauge transformation, the degree of polarization is unaffected.

Acknowledgements

The author wishes to thank S. Narison, G. Moultaka and V. Zakharov for helpful discussions. He thanks, for generous hospitality, the physics department at the University of Montpellier (UM2), the physics department of the University of New South Wales, and the College of Quantum Science at Nihon University.
04117752
en
[ "info.info-cv" ]
2024/03/04 16:41:26
2021
https://hal.science/tel-04117752/file/hdr.pdf
Edmond Boyer, Marc Pollefeys, Federica Bogo, Bugra, François Le Clerc, Martin Hachet, Vagia Tsiminaki, Jean-Sébastien Franco, Benjamin Allain, Jinlong Yang, Franck Hétroy-Wheeler, Stefanie Wuhrer, Adnane Boukhayma, Chun-Hao Huang, Nassir Navab, Slobodan Ilic, Vincent Leroy, Tony Tung
Multi-view object segmentation with calibrated camera networks
Keywords: acquisition; rigid structure detection; collaboration with OMG/Vicon, BBC, HHI, Artefacto

ABSTRACT

Acquiring and representing 3D shapes in motion from one or multiple views, coined 4D modeling, has been a topic of interest to computer vision and computer graphics for several decades, for 3D content production, virtual reality, telepresence and human motion analysis applications. In this time frame, the topic has gone from a theoretical research subject to partial industrialisation, with nowadays many acquisition studios at major companies worldwide, and a successful Inria startup with 15 years of activity. To achieve this, a set of hard problems had to be addressed, with advances in what can be classified as the three subfields of shape acquisition and representation, motion retrieval and modeling, and appearance and texture estimation. This document summarizes my journey and contributions toward these goals with the students I was privileged to work with, with a particular focus on PhDs defended in the last five years. First I will discuss our work on multi-view stereo and monocular 3D shape estimation. Second, I will present how weak but general priors on surface or volumetric rigidity, and the prior of an underlying human shape space, can be leveraged for surface tracking and alignment of human subjects in tight or loose clothing. Third, I will discuss the space-time statistical and representational models of appearance we proposed to estimate and store color map information for acquired subjects.
CHAPITRE 1
Introduction

RESEARCH PROBLEM: 4D MODELING

The past decades have seen an ever-increasing interest in automated 3D dynamic content creation, for various applications such as 3D content production, advertising, entertainment, virtual reality and telepresence. It is also becoming a popular research topic in the learning era, where works increasingly show that 3D constraints and inference can be built on deep learning methods to increase result quality and scene understanding. This trend has been fueled by the increased availability of multi-camera systems, such as our 68-camera capture platform Kinovis [kin], sometimes comprised of dozens or hundreds of cameras, that can be used for performance capture (e.g. [SH07, dAST+08, LDX10, CCS+15, JLT+15]). Such systems can readily produce video streams of the same subject from multiple viewpoints, allowing indirect and passive access to the 3D geometry, motion, and appearance of filmed subjects. This technology offers numerous promises with respect to rugged but sparse previous-generation motion capture techniques, where only a predefined set of points and no colorimetry was observed, and which rely on instrumentation of the captured subjects with active or passive markers. Thanks to the spatially and temporally dense color, and sometimes depth, observations now offered by multi-view platforms, a much richer set of possibilities arises toward the estimation of full shapes in motion, with detailed surface reconstruction, motion tracking, and access to appearance through the acquired images. This all comes with the advantage of passive, non-invasive protocols, where the subject or actor can come in full clothing and expect to be captured with all his apparel, props, and ultimately interaction with other people or with the set and scenery. But with this richer data comes a vastly increased complexity, and extracting this information is still a very active research field to this day. This document is centered on the problem of 4D modeling, which is the process of producing this 3D content and appearance representations from temporal sequences obtained from a set of cameras. In most works on this topic, access to a controlled setup with pre-calibrated cameras is assumed, such that the focus is on obtaining the 3D model geometry, its motion and its appearance as a texture map, as opposed to general scene structure. 4D modeling differs from simple, one-shot reconstructions over each frame of the sequence by the main property that it exploits time continuity and redundancy to enhance the quality of the models, or produces models that are aligned, e.g. with identical surface topology and connectivity but deforming shape, by means of varying model parameters or vertex positions.

CHALLENGES

Our general goal is to produce the highest quality and easiest to use models possible, and to design the best 4D modeling algorithms to this end.
This is a generally very challenging topic because, in its most general form, we are observing a moving shape or set of moving shapes through very indirect raw data, namely sets of image sequences $\mathcal{I} = \{I^t_i\}$, $i \in \{1,\dots,n\}$, $t \in \{1,\dots,m\}$, obtained from a set of $n$ calibrated cameras over $m$ temporal frames, from which the shape, motion and appearance of the single underlying objects must be extracted: in some sense, a large-scale, multi-variate and interdependent regression problem. Some of the main obstacles to achieving these goals are the following.

Dimensionality. The key information is buried in several high-frequency video streams that quickly comprise terabytes of data. On a typical platform such as Kinovis, acquisitions produce data at a rate of 40 Gb/s. The dimensionality of the output is typically also very large: even if it exploits and removes key input redundancies, it still comprises large representations, e.g. meshes and textures, with full or partial updates that need to be periodically refreshed.

Representation. Finding the best intermediate and output representations for the 4D model is still to this day an open problem. Exactly what granularity should be stored for the geometry, temporal evolution and appearance can lead to compact results, but the desirable properties of the representations may not be the same at different stages of the pipeline. Should we use surface meshes, or volumetric primitives for the inference? How exactly should the color information be stored: as a separate per-frame texture map, or using a temporally refactored representation? What is the best way of co-storing shape, geometry, motion and appearance? These are some of the questions examined in this document.

Choosing Priors. The image set we observe is a noisy, partial observation of the quantities we wish to estimate, and our problem is an inverse, ill-posed problem. Choosing the correct priors, and how to embed them in our method, is therefore important, and we have in fact explored many different directions. First, the assumptions inherent to the model itself involve relying on regularizing behaviors, which in their simplest form express geometric, motion and appearance continuity over the full shape. But choosing the correct prior is non-trivial. Whether to build in the prior that the motion comes from a human subject is a key tradeoff we examine in this work: at one end of the spectrum lie very weak motion hypotheses, at the other a strongly constraining human model. The former favors generality; the latter is more stable but may not account for out-of-model situations such as object or multi-person interaction, or loose clothing.

STRUCTURE AND CONTRIBUTIONS

As the previous challenges hint at, the 4D modeling problem requires examining a collection of subproblems together to target a single goal. To break down this complexity, some kind of stratification is usually chosen. Throughout our work over the years, the stratification we opted for has been relatively uniform: first extracting the shape at individual time frames, then performing motion analysis for tracking and alignment on the individual reconstructed shapes directly in 3D, and finally extracting appearance information. This is of course not the only possible stratification, but it remains to this day a very popular way to break down the problem, with variants in how geometric and temporal tasks are connected, e.g. [CCS+15, MVK+20].
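To make the data flow of this stratification explicit, here is a toy skeleton of such a pipeline; the function names are illustrative placeholders, not actual team code:

```python
# Illustrative stratified 4D-modeling pipeline: per-frame reconstruction,
# then temporal alignment/tracking, then appearance estimation.
# All stage implementations are placeholders (hypothetical names).

def reconstruct_frame(frame_images, calibration):
    """Stage 1: one independent 3D shape (e.g. a mesh) from the n views at time t."""
    raise NotImplementedError

def track_sequence(shapes, template):
    """Stage 2: per-frame motion parameters aligning a single template to all shapes."""
    raise NotImplementedError

def estimate_appearance(image_sequences, calibration, template, motions):
    """Stage 3: texture/appearance maps for the aligned, fixed-topology template."""
    raise NotImplementedError

def model_4d(image_sequences, calibration, template):
    # image_sequences[t] holds the n camera images of time frame t
    shapes = [reconstruct_frame(frames, calibration) for frames in image_sequences]
    motions = track_sequence(shapes, template)
    appearance = estimate_appearance(image_sequences, calibration, template, motions)
    return template, motions, appearance
```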
This manuscript follows these three main stages in their logical order of application in the pipeline, which in particular is quite different from the chronological order in which these works were actually executed.

In Chapter 2, I describe how we approached the problem of 3D shape estimation with one frame or a small group of frames at a time:
- I first present two of our contributions toward designing a customized multi-view stereo pipeline particularly suitable for the context of performance capture and 4D modeling tasks. A careful analysis of our needs and acquisition scenarios led us to build in key features and exploit local temporal redundancy to increase precision; we also examine how deep learning can be used to optimize the feature extraction and reconstruction decisions.
- Second, I present a method whereby we used end-to-end deep learning to extract a full 3D shape from one viewpoint, with the idea of testing the limits of the monocular reconstruction setup, and as a preliminary step toward possible multi-camera applications as well.

In Chapter 3, I present several works focused on the temporal analysis and alignment of the previously extracted 3D models:
- I first summarize the works achieved on a series of models based on weak but quite general, quasi-rigid patch-based priors, using adaptive versions of patch-based constraints either at the surface or the volumetric level.
- Then I examine how a stronger human shape-space model can be used to constrain the estimation of shapes in motion, and to estimate the underlying body shape of acquired subjects under clothing.

In Chapter 4, the discussion is on two appearance estimation works we did to retrieve an appearance map measuring the dense radiance information observed:
- First, we present a principled method by which the appearance estimation of a 3D model under small motions can be treated as a specialized 3D version of the well-known video superresolution problem.
- Second, I discuss how this work can be extended to longer sequences by storing long-term appearance variations as a linear combination of 3D-mapped Eigen textures.

Before getting to the technical part of the presentation, I will describe some contextual information, in particular about the people and contexts I worked with.

PEOPLE

This manuscript is intended to give an overview of my research work as an associate professor ("maître de conférences") in computer science at Grenoble INP - Ensimag (part of the Université Grenoble Alpes, France), with a focus on PhDs defended in the last 5 years. During my career, I have had the pleasure and privilege to work with a wide range of colleagues and students on the aforementioned topics. I will give a brief overview of these encounters in the following paragraphs. In 2007, I was appointed to my first associate professor position at Université Bordeaux 1 with Pascal Guitton and the IPARLA team, Inria Bordeaux Sud-Ouest. I co-supervised the PhDs of Yann Savoye and Robin Skowronski with Pascal Guitton, working respectively on non-rigid human tracking and calibration for drones. I also collaborated in the supervision of PhD student Benjamin Petit, with whom I interacted a lot for the Dalia ANR project on telepresence between the two multi-camera platforms Hemicyclia and GrImage.
I then moved to the Ensimag (School of Computer Science and Applied Mathematics, Grenoble INP) as associate professor of Computer Science, and as a researcher at Inria Grenoble Rhône-Alpes, France, with the Morpheo team in 2011, and have been co-supervising many PhD students since then: Abdelaziz Djelouah on multi-view segmentation, with Patrick Perez and Francois LeClerc at Technicolor; Vagia Tsiminaki and Adnane Boukhayma on multi-view appearance estimation and superresolution; Benjamin Allain on shape tracking and temporal surface alignment; and Vincent Leroy on probabilistic multi-view stereo, with Edmond Boyer. I have also had an unofficial supervisory role throughout Jinlong Yang's PhD on model-based human body tracking under clothing, a collaboration with Morpheo team members Stefanie Wuhrer and Franck Hétroy. Of the master's students I supervised, too numerous to enumerate here, one stands out recently, as it opened a collaboration cycle with Inria Thoth team members Gregory Rogez and Cordelia Schmid, through the supervision of Valentin Gabeur. This collaboration would yield a major conference paper at ICCV 2019 and paved the way to several Morpheo-Naver Labs Europe collaborations after Gregory Rogez and Vincent Leroy joined Naver. I am currently co-supervising Mathieu Armando on mesh and appearance superresolution, and Boyao Zhou on using deep learning for space-time human shape reconstruction, with Federica Bogo and Edmond Boyer, as part of a Microsoft grant. I am also co-supervising Abdullah Haroon Rasheed on learning-based cloth physics estimation, with Florence Bertails and Stefanie Wuhrer, and Mathieu Marsot on learning space-time human shape and motion generative models, with Stefanie Wuhrer and Anne-Hélène Olivier from Inria Rennes, who is an ANR project collaborator. These have all been exciting projects that are follow-ups of the research projects described in this document, and they have benefited from these previous experiences.

Manuscript Focus

It will be apparent to the reader at this point that this document is in no way meant as an exhaustive review of works and student projects. The fact that I left some works out of this manuscript says nothing about those works, and everything about having to keep this document contained, in the interest of time and of focusing on key emblematic stages and developments in our understanding of the problems over the last five years. I have in particular left out the excellent work of Li Guan [GSFP06, GFP07, GFP08a, GFP08b, GFBP10, GFP10], the first PhD student I co-supervised with Marc Pollefeys during my post-doc, and of Abdelaziz Djelouah [DFB+12, DFB+13, DFB+15, DFB+16], which have in common the fact that they focus on earlier silhouette-based extraction efforts. I have also taken the stance of leaving out works that were not yet defended as PhDs, leaving this discussion to the legacy stage at the end of this work; this includes the excellent work of Mathieu Armando [AFB19, AFB20], Boyao Zhou [ZFB+20], and Abdullah Haroon Rasheed (CVPR 2020 oral paper [RRBD+20]). I did include Valentin Gabeur's work on monocular reconstruction with deep learning [GFM+19], defended as a master's project, since it so well represents the deep learning leap and transition we took in these last years, in continuity with the work of Vincent Leroy.
CHAPITRE 2
3D Shape Estimation

INTRODUCTION

Among the first problems we examine in this document, and one of the first problems to solve in stratified 4D modeling, is that of 3D shape estimation from images. This is in fact one of the fundamental problems of computer vision and has often been treated in generic form in the geometric era of computer vision [START_REF] Hartley | Multiple view geometry in computer vision[END_REF]. Using a preconfigured platform camera rig, we have naturally been strongly inclined to pursue research in methods that assume calibration is available. This was notably the case during my thesis and post-doc years, where I pursued research in silhouette-based methods [START_REF] Franco | Exact polyhedral visual hulls[END_REF][START_REF] Franco | A distributed approach for real-time 3d modeling[END_REF][START_REF]Fusion of Multi-View Silhouette Cues Using a Space Occupancy Grid[END_REF][START_REF]Efficient polyhedral modeling from silhouettes[END_REF][START_REF]3D Object Reconstruction with Heterogeneous Sensor Data[END_REF][START_REF] Guan | Probabilistic 3d occupancy flow with latent silhouette cues[END_REF]. While this push to enhance silhouette-based methods yielded quite useful results and has been the basis of many other works discussed in the other chapters, during the last five years we also pursued research on enhancing 3D modeling, first in the direction of significantly improving model quality given the abundant input data acquired through our 68-camera Kinovis platform, and second toward gaining a better understanding of how deep learning can contribute to improving 3D modeling methods, in particular toward the common objectives of increasing model quality, precision, and robustness to input corruption and to the lack of texture detail routinely encountered in everyday, casual clothing. Dealing with fast motions has also been a desirable target, due to the presence of motion blur when subjects execute fast movements typical of dancing or sports. This drive opened new research avenues in the team, which have proven largely successful. To illustrate the push in these directions, I will discuss in this chapter the two most relevant efforts we pursued: first toward improving multi-view stereo in the context of performance capture applications, and second toward the problem of monocular 3D shape estimation, where deep learning has proven instrumental for both.

MULTI-VIEW STEREO EFFORTS

Innovation on the quality of the shape result was a major drive throughout the last decade in the team, and multi-view stereo had the potential to offer these benefits. But multi-view stereo is both a widely studied and notoriously difficult subject to work with on a technical level, which has stumped a number of students we attempted to tackle this problem with. A testament to this is the endurance of several widely respected benchmarks that have survived as beacon references for the community for the better part of the last decades, such as the Middlebury, DTU and, more recently, Tanks and Temples datasets [SCD+06, SvHG+08, JDV+14, SSG+17, KPZK17], which have all registered small-step but continuous improvements in the domain over large time frames.
Also, the technical variety of the frameworks and representations devised to tackle this problem is quite astounding, spanning several decades: level sets [START_REF] Faugeras | Variational principles, surface evolution, pde's, level set methods, and the stereo problem[END_REF], voxel carving [START_REF] Kutulakos | A Theory of Shape by Space Carving[END_REF], depth map sweeping [GFM+07], Delaunay decomposition and graph-cut optimization [START_REF] Labatut | Efficient multi-view reconstruction of large-scale scenes using interest points, delaunay triangulation and graph cuts[END_REF], sparse points from image features [START_REF] Furukawa | Accurate, dense, and robust multiview stereopsis[END_REF], spatiotemporal integration [START_REF] Goldlücke | Space-time isosurface evolution for temporally coherent 3D reconstruction[END_REF][START_REF] Mustafa | Temporally coherent 4d reconstruction of complex dynamic scenes[END_REF], model-based integration [SH07, dAST+08], convex grid optimization [START_REF] Cremers | Multiview stereo and silhouette consistency via convex functionals over convex domains[END_REF] or probabilistic inference [START_REF] Osman Ulusoy | Towards probabilistic volumetric reconstruction using ray potentials, 3D Vision[END_REF], to name only a few. It thus takes a special type of character to confront this problem, with perseverance, heterogeneous literature comprehension, and great programming and technical proficiency, which we found with PhD student Vincent Leroy. In this journey, we implemented two state-of-the-art MVS pipelines with a classic plane-sweeping articulation but several key practical improvements, then substituted the photoconsistency core with a deep learning replacement that considers local photoconsistency volumes to compute a depth indicator function.

DAISY-Based Plane Sweep Stereo Pipeline with Local Temporal Integration

Our first effort in this direction was to implement a classic pipeline based on what stood out in the literature as desirable characteristics for our capture scenarios. First and foremost, we wanted to be able to enhance the quality of the surface thanks to temporal smoothing and refinement, which has been a longstanding goal [GM04, APSK07, MKGH16] seldom achieved at centimetric detail in performance capture scenarios. Of particular inspiration, both for the remarkable detail accumulation in dynamic capture environments and for the popularization of Truncated Signed Distance Functions (TSDF) as a depth map volumetric fusion representation, were methods of the KinectFusion and DynamicFusion family [NIH+11, NFS15, IZN+16, DKD+16], a key difference with our goals being that they process RGB-D streams instead of multi-RGB inputs. Second, depth-map plane sweep methods have been notable for one very desirable feature: their spatially dense, per-pixel monotonic depth parameterization of visible surface geometry for every viewpoint, with first-point visibility built into the parameterization [Col96, MAW+07]. This yields very efficient depth map extraction methods, cast as extracting the first visible point at each pixel, gracefully dealing with the visibility problem. Third, among the various classic photoconsistency characterizations based on normalized window correlation (e.g. ZNCC, SSD, SHD, etc.) or gradient feature correlation [Low04, BTG06, MS03], DAISY features [START_REF] Tola | DAISY : an efficient dense descriptor applied to wide-baseline stereo[END_REF] were the state of the art at the time for dense, wide-baseline stereo and computational efficiency, and as such a natural contender for our photoconsistency function and its evaluation.

Method and Contribution

We proposed a pipeline, illustrated in Fig. 2.1, implementing these key characteristics together, assuming images I_i, silhouettes Ω_i, and calibration matrices π_i are given.
- First, we compute per-view depth maps based on line searches that maximize a DAISY-based photoconsistency criterion, filtering correct matches using a visual hull of the subject obtained from the silhouettes.
- Second, we provide an initial estimate of a multi-view merged shape based on the fused multiple depth maps obtained at a single frame. Because of the detail density, classic TSDF on regular grids is particularly inefficient and memory heavy, so we proposed an implicit TSDF form which can be computed with sparse storage.
- Third, using this initial per-frame shape estimate for a time window of neighboring frames, we construct sparse surface feature matches to neighboring frame shapes using MeshHOG features [START_REF] Zaharescu | Keypoints and local descriptors of scalar functions on 2d manifolds[END_REF], which we densify by propagating them on the surface.
- We use these matches to fold the shape contributions onto the center frame of each temporal window using a simple locally rigid deformation model.
- These folded shapes bring new sparse implicit TSDF constraints to the central frame, which can then be seamlessly integrated. The process is then iterated between alternate steps of matching and reconstructing, leading to a refined final shape.
- The final surface can be extracted from the implicit TSDF using Centroidal Voronoi Tessellations [DFG99, LWL+09] of a carefully selected set of point samples in the vicinity of the final surface, as hinted by the TSDF-induced occupancy function, and clipping the CVT tessellation to the zero level set of the implicit TSDF function to extract the surface polygon geometry, as illustrated in Fig. 2.2.

Our results showed that the quality of the reconstructions, measured with the widely adopted Chamfer-based metrics of surface accuracy and completeness [SCD+06, JDV+14], was significantly improved by the various components of our proposed approach. The approach also demonstrated measurable improvements with respect to state-of-the-art methods in the context of performance capture. Details of the method and results can be found in the ICCV 2017 publication [START_REF] Leroy | Multi-View Dynamic Shape Refinement Using Local Temporal Integration[END_REF].

Volume Sweeping: Learned Photoconsistency

During the time frame of Vincent Leroy's thesis, it became quite evident that the benefits of deep learning could reach beyond the 2D vision problems which were the main focus in the early DNN years [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF]. As my personal and our team's expertise in the subject was initially limited, our approach has been largely progressive and at first geared toward testing the contribution of learning techniques in well-identified stages of time-honored vision pipelines. In this respect, the MVS work with Vincent was an ideal candidate. As illustrated in Fig. 2.3,
ZNCC, SSD, SHD, etc) or gradient feature correlation [Low04, BTG06, MS03], DAISY features [START_REF] Tola | DAISY : an efficient dense descriptor applied to wide-baseline stereo[END_REF] was the state of the art at the time for dense, wide-baseline stereo and computational efficiency, and as such was a natural contender for our photoconsistency function and evaluation. Method and Contribution We proposed a pipeline illustrated in Fig. 2.1 implementing together these key characteristics, assuming images I i , silhouettes Ω i , and calibration matrices π i are given. -First we compute per-view depth maps based on line searches that maximize a DAISY-based photoconsistency criterion, filtering correct matches using a visual hull of the subject obtained from the silhouettes -Second we provide an initial estimate of a multi-view merged shape based on the fused multiple depth maps obtained at a single frame. Because of the detail density, classic TSDF on regular grids is particularly inefficient and memory heavy, so we proposed an implicit TSDF form which can be computed with sparse storage. -Third, using this initial per-frame shape estimate for a time-window of neighboring frames, we construct sparse surface feature matches to neighboring frame shapes using MeshHOG features [START_REF] Zaharescu | Keypoints and local descriptors of scalar functions on 2d manifolds[END_REF], which we densify by propagating them on the surface. -We use these matches to fold the shape contributions to the center frame in each temporal window using a simple locally rigid deformation model. -These folded shapes bring new sparse implicit TSDF constraints to the central frame, which can then be seamlessly integrated. The process is then iterated between alternate steps of matching and reconstructing, leading to a refined final shape. -The final surface can be extracted from the implicit TSDF using Centroid Voronoi Tesselations [DFG99, LWL + 09] of a carefully selected set of points samples in the vicinity of the final surface as hinted by the TSDF induced occupancy function, and clipping the CVT tesselation to the zero level set of the implicit TSDF function to extract surface polygon geometry, as illustrated in Fig. 2.2 Our results showed that the quality of the reconstructions, measured with the widely adopted Chamfer-based metrics of surface accuracy and completeness [SCD + 06, JDV + 14], was significantly improved using the various components of our proposed approach. The approach also demonstrated measurable improvements with respect to state of the art methods in the context of performance capture. Details of the method and results can be found in the ICCV 2017 publication [START_REF] Leroy | Multi-View Dynamic Shape Refinement Using Local Temporal Integration[END_REF]. Volume Sweeping : Learned Photoconsistency During the time frame of Vincent Leroy's thesis, it became quite evident that the benefits of Deep Learning could reach beyond 2D vision problems which were the main focus in the early DNN years vision [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF]. As my personal and our team expertise in the subject was initially limited, our approach has been largely progressive and at first geared toward testing the contribution of learning techniques in well identified stages of time-honored vision pipelines. In this respect, the MVS work with Vincent was an ideal candidate. As illustrated in Fig. 2. 
3 showing how we modified the classic MVS pipeline, the photoconsistency function ρ used to characterize surface hits for every depth candidate along each pixel's viewing line surface depths has a well identified local support in the images at the projection of the candidate, which can be described with a small size network. The surface hit decision is typically a per pixel decision in plane sweep algorithms with optional image-domain regularization of the depth maps based on local per-pixel depth data terms. The photoconsistency function was thus ripe for this kind of trial as it also clas-FIGURE 2.3 -Volume sweeping method overview. Depth maps, for all input image I i , are obtained by maximizing, along viewing lines, a learned function ρ that measures photoconsistency at a given depth d along the viewing line of a given pixel p. Depth maps are then fused into an implicit form from which the zero set surface is extracted. sically relies on a hand-crafted measure and empirically selected image features. This is typically where deep learning can contribute by automatically extracting relevant features from example data. In fact at the time several works had started pushing in the direction of shortbaseline MVS with symmetric combination of 2D learned features [HGH + 17], or wide baseline sparse capture scenarios [HLC + 18, GVCH18] that followed learning research on short baseline stereo [vL16, LSU16, ZK15, UZU + 17]. Method and Contribution While some learning methods for MVS were starting to emerge, at the time little or none of them were particularly geared or tested for the specifics of the multi-camera performance capture scenario, where in particular the attention is on moving subjects that cover only a minority fraction of all frame pixels, which we informally referred to as the "mid-range scenario". This use case is in particular very different to the case where objects are fully framed in every view, for close static objects or full-views of architectural scenes which are typical used as benchmark cases [SCD + 06, JDV + 14, SvHG + 08]. Our feeling as well was that, in our case with known calibration, we would be under-using the input data if we only used 2D patches to characterize matches in the learned photoconsistency function, as several methods were proposing [vL16, LSU16, ZK15, UZU + 17]. Several methods were also starting to propose 3D inference in the volume for these kind of reasons [CXG + 16, JGZ + 17, KHM17]. Reasoning only on 2D patches deprives the network from information on relative orientation of views and the local geometric context that can be additionally useful to build a decision. This intuition turned out to be correct and verified in our results, where we compared 2D versus 3D volumetric support of the learning function. To these goals, we proposed to formulate the inference on a small projective volume surrounding each query point of interest along a viewing line. Each voxel of this volume is assigned two R,G,B triplets from the reference view whose depth map we are computing, and another view used to check photoconsistency. The support region of the network, and its architecture is illustrated in Fig. 2.4. 
The architecture is intentionally simple and classically inspired [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF], with the main characteristic that the contribution of all pairs of views needs to be merged using a symmetric function computing a result independent of the number of views considered, with the mean giving the best empirical results in practice. The network was trained on a large set of samples built from the standard DTU dataset, which for this compact network can provide thousands of training samples per set of multi-view input frames given the local support of the learned function ; we also notably used no view from our Kinovis acquisition platform, a testament to the generalization ability of the method. We obtained impressive results that, quite frankly, exceeded our expectations by a large margin. As Fig. 2.5 illustrates, the method was able to infer more details and exhibited less errors and more completeness even in color-ambiguous, partially reflective regions (e.g. thigh, boots) than the classic version of our pipeline. More unexpected, the FIGURE 2.5 -Challenging scene captured with a passive RGB multi-camera setup [kin]. (left) one input image, (center) reconstructions obtained with classical 2D features [START_REF] Leroy | Multi-View Dynamic Shape Refinement Using Local Temporal Integration[END_REF], (right) proposed solution. Our results validate the key improvement of a CNN-learned disparity to MVS for performance capture scenarios. Results particularly improve in noisy, very low contrast and low textured regions such as the arm, the leg or even the black skirt folds. method recovered fold details of the black dress in portions of the image with almost zero contrast. The results were shown to outperform several off-the-shelf available methods, including classic methods [FP10, TSF12, CVHC08] and recently published learning based methods [YLL + 18, JGZ + 17]. Interestingly, the method also recovers results of quality comparable to [CCS + 15] who use two modalities RGB and infrared pattern projections, despite the fact that we were using only the RGB inputs of their datasets for our comparison. Details are available in the ECCV 2018 publication [START_REF] Leroy | Shape reconstruction using volume sweeping and learned photoconsistency[END_REF] but the more thorough comparisons and comments are published in IJCV 2021 [START_REF] Leroy | Volume Sweeping : Learning Photoconsistency for Multi-View Shape Reconstruction[END_REF], which is the one I am providing in the appendix A.8. We refer to the supplemental video for more results 1 MOULDING HUMANS : LEARNING 3D SHAPE ESTIMATION FROM A SINGLE IMAGE The success of deep learning on a multi-view stereo pipeline naturally brought us to another research question : can we transfer these benefits to the less data rich case of monocular 3D shape estimation ? The question of how much can be recovered from a single view has intrigued researchers for quite a while. From a purely geometric standpoint, the fundamental FIGURE 2.6 -Our non-parametric representation for human 3D shape : given a single image, we estimate the "visible" and the "hidden" depth maps from the camera point of view. The two depth maps can be seen as the two halves of a virtual "mould". We show this representation for one of the images of our new dataset. 
projection ambiguity prohibits recovering absolute depths and scales in that situation [START_REF] Hartley | Multiple view geometry in computer vision[END_REF]. It has been however identified quite early that data-driven priors may provide the possibility of finding a solution, e.g. for the problem of recovering a human 3D [START_REF] Agarwal | Recovering 3d human pose from monocular images[END_REF]. In fact, estimating 3D poses is one of the key problems by which deep learning was thrown at monocular 3D estimation [CWL + 16, RWS19] along with denser monocular 3D estimation applications such as dense surface correspondance [WHC + 16, GNK18]. [RWS19] happens to be a Thoth team contribution down our corridor at Inria Rhone-Alpes with Gregory Rogez and Cordelia Schmid. The Thoth team was getting more and more interested in addressing 3D problems with human subjects, as evidenced by various other collaborative publications to estimate 3D shapes from images [VRM + 17, VCR + 18]. So we came to join forces with Gregory and Cordelia and co-supervized a master student together, Valentin Gabeur. One of the issues with using DNNs for human 3D data is that straightforwardly extending CNNs from the 2D to the 3D domain, while relying on well defined extensions of the usual CNNs convolution stages in the 3D domain such as with [VCR + 18], are subject to the curse of dimensionality as this creates a large memory and computational burden in the training process. This limits the amount of training data which can be processed in a single batch and also means a large number of training parameters, with potential for overfitting. For this reason, we set our main quest as one of finding a lean representation that would allow us to retrieve human 3D models of comparable or better quality at a fraction of the cost. In fact the lower parameter dimensionality would probably help us achieve the quality goal. The idea emerged in the discussions of performing depth inference in the image domain, but to output a surface, we need to regress not one but two depth maps, one for the front view and one for the back view of the human subject, a representation we informally coined a "mould", for the reason apparent in Fig. 2.6. MOULDING HUMANS : LEARNING 3D SHAPE ESTIMATION FROM A SINGLE IMAGE 19 Both depth maps assembled together create a point cloud which can yield a surface reconstruction using standard techniques [START_REF] Kazhdan | Screened poisson surface reconstruction[END_REF]. Although technically only having two depth maps encoding doesn't cover all possible situations (such as self-occlusion which leads to four or more surface crossing and depth values along a given viewing ray), this first dual-depth map encoding, with the advantage of a fixed output inference representation, provided surprisingly good results already. The inference can then be expressed in terms of an image-wide regression task, for which we proposed a stacked hourglass architecture [START_REF] Newell | Stacked hourglass networks for human pose estimation[END_REF], previously shown successful for human pose regression -but not full surfaces. While generally successful, we did encounter one significant issue in that the succession of reduced latent space at the bottlenecks of the hourglass networks were not sufficiently regularizing for the network to fully encode the "humanness" prior underlying the training set, yielding humans with additional limbs or no limbs for example. 
For this reason, we additionally introduced a discriminator network, trained in adversarial fashion, to enforce this humanness prior. The proposed pipeline is illustrated in Fig. 2.7.

FIGURE 2.7 - Overview. Given a single image, we estimate the "visible" and the "hidden" depth maps. The 3D point clouds of these 2 depth maps are combined to form a full-body 3D point cloud, as if lining up the 2 halves of a "mould". The 3D shape is then reconstructed using Poisson reconstruction [START_REF] Kazhdan | Screened poisson surface reconstruction[END_REF]. An adversarial training with a discriminator is employed to increase the humanness of the estimation.

We devised a training dataset, 3D HUMANS, with a large synthetic portion from [VRM+17], complemented by real captures from the Kinovis platform to capture some of the clothing and shape variability absent from the synthetic data. We obtained very good to excellent results, improving over state-of-the-art approaches, with shorter training and inference times. This includes test datasets unrelated to the training data, such as the DeepFashion dataset, with images of women in completely different clothing and postures, as evidenced in Figs. 2.8 and 2.9. The contribution of the GAN can also be appreciated in these figures under severe input frame occlusion. More details and results can be found in the attached paper (Appendix A.7), which was published at ICCV 2019 [GFM+19] with the master's student, and in the supplemental video (2. https://hal.inria.fr/hal-02242795/file/Moulding-Humans-ICCV2019.mp4).

FIGURE 2.9 - Comparison between HMR [KBJM18] (left), Bodynet [VCR+18] (middle) and our method (right). Unlike [KBJM18, VCR+18], we do not train on in-the-wild images but estimate 3D shapes of clothed subjects.

CHAPITRE 3
3D Motion Estimation

INTRODUCTION

In this chapter, we examine the problem of tracking shapes and estimating surface alignments over complete temporal sequences of shapes seen from multiple calibrated cameras. When estimating 3D shapes per frame, one is left with a representation that has no temporal consistency, meaning the geometric representations have no primitive correspondence. In our case, with the Kinovis platform's multi-RGB video streams and algorithms of the type described in Chapter 2, this means we retrieve a set of meshes per time frame that are all independent and possess completely different, unmatched sets of vertices and triangles.
3D MOTION ESTIMATION More generally, all temporal-related tasks requiring surface tracks or speed or a motion representation, such as motion editing and motion-driven interactive setups, are not possible with this inadequate frame-to-frame representation. Fundamentally, throwing away the previous reconstructed shape information is sub-optimal : in typical acquisition scenarios involving human or non-human subjects, the underlying shape observed in motion has a fixed physical envelope and topology, or a set of discrete geometric topologies, depending on whether self-contacts are considered as merges of the geometic envelope of the object [START_REF] Letouzey | Progressive shape models, Computer Vision and Pattern Recognition[END_REF]. Using the hypothesis of a common underlying surface means recovering the sequence as a moving shape, whose topology is at least temporarily stationary. In concrete terms one can then store a single shape topology or sparse set of topologies for the whole sequence, often referred as template, and store frame updates as a set of relative or absolutes motions, expressed as raw vertex displacements, local piecewise transforms using a shape decomposition such as patches [CBI10b], or updates in a parametric or kinematic representation [START_REF] De Aguiar | Marker-less deformable mesh tracking for human shape and motion capture[END_REF]. This provides a wide set of benefits, notably reducing the memory and bandwidth requirements of the representation, since only the motion updates need to be transferred per frame. This is especially true when one attaches additional surface attribute information, such as a appearance, to the fixed-topology templates rather than each time frame [CCS + 15], thus avoiding needless duplication. From a broader standpoint, subject and human motion analysis then become possible, such as monitoring specific attributes of motion, e.g. trajectories of specific landmarks, building motion statistics, using the model to compute dynamic collisions with virtual objects in a immersive digital world, with a wide range of applications for sports, medical, virtual reality, entertainment and interactive systems. This type of representation is also a pre-requisite to make the motion sequence editable [START_REF] Stoll | A volumetric approach to interactive shape editing[END_REF][START_REF] Matsuyama | Generation, visualization, and editing of 3d video[END_REF], to use it in a content production pipeline where artists or designers wish to use real-life full shape captures as the starting point for 3D asset creation, in which modification and stylisation of the motion are targeted. Alignment Problem The benefits are clear, but the problem itself is challenging due to various aspects. Looking at it from a global perspective, it can be seen as retrieving from a set of image sequences I = {I t i } t∈{1..m} i∈{1..n} a single, time-independent set of parameters of a shape model S of the subject on one hand, and a set of (most often per-frame t) motion parameters of a motion model Θ = {Θ t } t∈{1..m} on the other hand, such that {S , Θ} optimally explain the full set of image sequences I. The shape representation S will contain geometric attributes of the shape, describing its surface, volume, pose, etc., and may also include non-geometric attributes such as appearance, or other features. 
Although some methods do treat the problem this way as a single, global sequence retrieval problem [START_REF] Goldlücke | Space-time isosurface evolution for temporally coherent 3D reconstruction[END_REF], in particular in some recent publications [START_REF] Niemeyer | Occupancy flow : 4d reconstruction by learning particle dynamics[END_REF], it is very complex and technically challenging for a number of reasons. First the size of the image sequence data I, and shape and motion parameter dimensionality, usually prohibits global optimization update steps relating directly all images I to all motion parameters Θ and the shape S . Second because in several ways this is a chicken and egg problem : updates to the shape require updates to the motion parameters, and vice versa. Matching shape surface points to identified 2D projection in the viewpoints in the different temporal frames requires updating the motion model, and vice versa. Third it is intrinsically difficult to formulate spatio-temporal surfaces directly in terms of multi-view and temporal image content variation, which this view of the problem implies. Interestingly some formulations close to the latter standpoint exist, e.g. [START_REF] Goldlücke | Space-time isosurface evolution for temporally coherent 3D reconstruction[END_REF] where the moving shape is described as an spatiotemporal isosurface optimized from image variations. While quite elegant, as in many methods, the formulation however needs to make practical compromises, e.g. it forgoes describing the single underlying shape S , and in practice makes partial updates in frame batches that are slow and susceptible to local minima. Because of these difficulties, the vast majority of estimation methods in the literature propose stratified approaches, breaking the problem down into more tractable sub-problems, and this is still the case today with state of the art approaches, e.g. [BHKH13, MVK + 20]. This can be done along various dimensions, e.g. Simon et al. [START_REF] Simon | Separable spatiotemporal priors for convex reconstruction of time-varying 3d point clouds[END_REF] consider spatiotemporal priors on point trajectories, putting more emphasis on temporal connections than shape connectivity. But a majority of techniques first pre-process whole shapes in individual frames, yielding a set of independent shapes {S t } t∈{1,••• ,m} substituted for inputs of the 3D temporal alignment stage, which is the strategy we follow. This is also a natural path to tackle the problem by stepping up from our expertise in 3D reconstruction techniques from a calibrated set of images. Model-Based Approaches In the stratified view of the problem relying on per-frame reconstructions, we must first select and conceive shape and motion models. Various strategies exist to this end, but in this document and in the work described in this chapter we took interest in so-called model-based approaches. These adopt a further simplification to the general paradigm described earlier in §3.1.1, by restricting the shape S either to a family of parametric shapes [ASK + 05, HAR + 10, NH13, PWH + 17] or to a single template shape [SH03, BC08, VBMP08, GSA + 09, LGS + 13], to name only a few. In our work, we explored both possibilities. This simplification thus trades the complexity of matching of spatio-temporal 4D representations to input images, for the problem of fitting a model by optimizing its deformation parameters Θ t such that it best explains per-time step 3D reconstructions. 
Examining the literature, one can see that most such approaches share a common general canvas: once the motion model is chosen, they all require defining a 3D matching cost or loss providing a metric that measures the disparity between the deformation estimate and the 3D reconstructions used as input of the fitting. This loss can either be directly minimized for certain forms of the loss function, e.g. the Chamfer distance, which aggregates nearest-neighbor surface-to-surface distances, or the minimization can be interleaved with an association step, where shape primitives are matched to the primitives of per-frame reconstructions in ICP fashion [Besl and McKay], [Granger and Pennec], [Mündermann et al.]. I will here discuss various template-based approaches we explored, based on motions expressed through detected rigid parts over a surface or volume patch decomposition of a single template shape to track (§3.2), and an approach based on the human shape space S-SCAPE, which is equipped with a multi-linear deformation framework for identity and kinematic motion (§3.3), all of which illustrate particularly well our research interests and progress on this subject.

SURFACE AND VOLUME PATCH-BASED SHAPE TEMPLATES

With the first model, our goal was to explore different improvements of a fixed template object $S$ that allow more robust and more precise fitting. A very popular strategy, which we follow in this work, is to use a particular capture or scan of the exact subject to be tracked as the template shape, popularized by various works in the 2000s [dAST+08]. With this simplifying advantage of a fixed shape $S$, the essence of the alignment approach is to define a deformation model and its motion parametrization $\Theta^t$. Many such parametrizations exist in prior works. One is to rig a generic human model using a kinematic chain which can be pre-fitted to the template [BC08, VBMP08, GSA+09, LGS+13] or in some cases automatically extracted [Baran et al.]. Another is to relax this strong kinematic prior by using local surface rigidity constraints, with the idea that looser skin or clothing can then also be fitted by the model. Typically, the idea is to preserve local intrinsic surface properties, e.g. isometric deformations [MS04, BBK06, OMMG10, SY10, BPC13], conformal deformations [BGC+15], inextensibility of the template mesh [Salzmann et al.], or general neighborhood preservation through functional warps [Del Bue et al.]. Others use properties related to local rigidities, e.g. as-rigid-as-possible deformations or elasticity minimization in [WSSC11, ZWG+13, BGC+15]. Among the methods allowing to express rigid cohesion of the template surface, and a particular topic of interest to us, patch-based methods [CBI10a, CBI10b, BHH11] offer an interesting compromise as they decorrelate the surface support of the deformation model from the geometry resolution, by formulating elasticity-like constraints between a set of pre-computed surface patches.
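To make the association step interleaved with minimization concrete, here is a minimal sketch of one ICP iteration in the spirit of [Besl and McKay]: nearest-neighbor association followed by a closed-form rigid update (Kabsch). This is a generic textbook step, not the implementation of any of the cited methods.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_update(src, dst):
    """Closed-form least-squares rigid transform (Kabsch) aligning src to dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # fix reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp_step(template, observed):
    """One ICP iteration: associate each observed point with its nearest
    template point, then re-estimate the rigid motion in closed form."""
    idx = cKDTree(template).query(observed)[1]
    R, t = rigid_update(template[idx], observed)
    return template @ R.T + t
```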
To summarize, on one hand we had at the time a large set of methods with very strong priors on human articulation, which could not deal with non-rigid surface aspects such as clothing or objects held by the subject; on the other hand we had a set of generic surface fitting methods using weak rigidity constraints, which can deal with many kinds of surfaces but are completely agnostic to the underlying characteristics of the shape, assuming instead uniformly distributed local cohesion properties. This raised my curiosity and sparked a research effort aimed at finding a middle ground between the two families, i.e. a deformation model that could be applied to most situations, e.g. for humans with or without loose clothing or even holding objects, able to model both rigid and non-rigid aspects in a single method. One question this raised for me was whether an alignment inference method could become automatically aware of the distribution of rigid regions on the template surface.

FIGURE 3.1 - Illustration of the convergence of the algorithm of [Franco and Boyer 2011] for a single frame of the LOCK sequence, courtesy [Starck and Hilton]. Each rigid patch assignment is given by a different color.

Preliminary Effort on Rigidity Learning

My first attempt toward this goal [Franco and Boyer 2011] was to devise a method that introduced rigid segmentation parameters $K_v \in \{\emptyset, 1..k\}$ associated to every vertex $v$ on the template surface. Each of the $k$ rigid components was assigned a set of rigid motion parameters $\Theta_k^t$ per time step $t$ and rigid component $k$. An additional robustness outlier class label $\emptyset$ was included for every $K_v$ to allow a vertex to also be drawn from a free uniform motion component unexplained by the $k$ rigid components. Each input observed surface point $o$ at time $t$ was also given an association variable $V_o^t$, basically attaching a matched template vertex $v$ to every observed point of the input reconstructions at time $t$. A joint probability distribution $p(K_v, V_o^t, \Theta^t)$ was then expressed on the full set of statistical variables $K_v$, $V_o^t$ and $\Theta^t$ to allow a Maximum A Posteriori (MAP) estimate to be computed, using Gaussian priors for the distribution of labels on the template mesh, and Gaussian noise distributions on the difference between observed surface points $o, t$ and their deformed template associated vertex given by $V_o^t$. By treating $K_v$ and $V_o^t$ as hidden variable sets in the inference, one could then write a formal Expectation Maximization algorithm akin to a specialized GMM, that basically extracted a locally optimal point estimate of the rigid-component transforms, a set of per-vertex probability tables over $K_v$ that amounted to a fuzzy and automatic rigidity-guided patch-segmentation extension of [CBI10a], and sparse probability tables over $V_o^t$ for each observed vertex $o$ at time $t$, giving probabilistic associations to the data points at each time frame, which essentially builds EM-ICP into the method. This was published at CVPR 2011 [Franco and Boyer 2011].
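For intuition, the E-step of such a specialized GMM can be sketched as follows, computing per-vertex soft assignments to $k$ rigid motions plus a uniform outlier class. The association of each vertex to an observed point is assumed precomputed here, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def rigid_responsibilities(verts, obs, transforms, sigma=0.02,
                           p_outlier=0.05, outlier_density=1.0):
    """E-step sketch: soft-assign each template vertex to one of k rigid
    motions (or the outlier class) from the fit of its transformed position
    to the matched observation, under an isotropic Gaussian noise model.
    Gaussian normalization constants are folded into outlier_density."""
    k = len(transforms)
    lik = np.empty((len(verts), k + 1))
    for j, (R, t) in enumerate(transforms):
        r2 = np.sum((obs - (verts @ R.T + t)) ** 2, axis=1)
        lik[:, j] = (1 - p_outlier) / k * np.exp(-0.5 * r2 / sigma**2)
    lik[:, k] = p_outlier * outlier_density        # free/uniform component
    return lik / lik.sum(axis=1, keepdims=True)    # rows: p(K_v | data)
```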
The method had some excellent qualities, in simultaneously exhibiting a data-driven solution to rigid segmentation of the template, rigid motion and closest-point assignments with closed-form updates resembling those of a standard GMM, essentially giving a principled solution to the template-based multi-rigid EM-ICP problem. However, this initial attempt was also undermined by possible losses of tracking on our practical data, with complete stretching and disconnection of the rigid patches. This is because it had not built strong inter-vertex cohesion constraints into the method, and the general surface cohesion provided by the rigid components was too weak to prevent a rigid component from going astray.

Detecting Rigidities on a Patch-Based Surface Template

We were still convinced that an analysis better suited to performance capture was possible and that it would yield improvements in result quality with respect to purely agnostic models. So we included this as a PhD topic in the European project React, which was aimed at creating new vision-driven workflows for 3D digital content creation, and which sparked a 4-year effort on this topic with PhD student Benjamin Allain. The first work was also motivated by discussions with Tony Tung, at Kyoto University at the time, and the paper became a collaboration. Our first effort was a better attempt at a surface-based template with patch-based deformation. To this goal, we opted for a much simpler model than previously. First we computed a fixed patch decomposition of the template surface into $k$ uniformly distributed patches, and introduced a set of binary rigidity variables, with each $C_{k,l} \in \{0, 1\}$ for every pair $(k, l)$ in the set $N$ of immediately adjacent patches on the fixed template geometry. $C_{k,l}$ is meant as a time-independent inference variable describing how rigidly correlated two neighboring patches are. In some sense this can be seen as a simpler way to implement the initial idea of §3.2.1 [Franco and Boyer 2011]: first because the rigidity variables $C_{k,l}$ can together express arbitrary rigid aggregations of the initial, smaller "primordial" $k$ patches. Second, while the number of rigid components was fixed in the previous effort [Franco and Boyer 2011], the arbitrary rigid aggregations afforded by the $C_{k,l}$ variables allow support for any composed set of $k'$-component patch aggregates by this method, as long as $k' \leq k$. Third, the strong surface cohesion previously missing is now built into the inter-patch distance prior, in the expression of a surface prior term which basically quadratically penalizes neighboring patches that come apart. When $C_{k,l} = 0$, that is when patches $k$ and $l$ are non-rigidly correlated, the surface term is chosen to allow neighboring patches to loosely hinge. When $C_{k,l} = 1$, that is when patches $k$ and $l$ belong to the same rigid group, the surface intrinsic penalty term penalizes any non-rigid movement between the two patches. The probability assigned to each $C_{k,l}$ is then estimated based on the data at inference time thanks to these prior constraints. An additional contribution of this model is in how it computes inter-patch distances. We had noticed a bias in how Cagniart computed inter-patch penalties [CBI10a] that basically favored estimates close to the default template pose.
For this reason, we included a mean pose $\bar{\Theta}$ as a point estimate to be retrieved during the optimization for a full sequence, which allows the model to significantly deviate from the default template pose, and also acts as a regularizer stabilizing patch pose estimates in the sequence. Similarly to §3.2.1, each (now constant) patch $k$ is again given a set of rigid transform parameters $\Theta_k^t$ at each time frame $t$ of the processed sequence of reconstructions, and each observed point $o$ on the input 3D meshes at each time step $t$ is again given a template vertex assignment index $V_o^t$. The inference can then again be expressed as a MAP over the joint probability $p(C, V, \Theta, \bar{\Theta})$, treating $C_{k,l}$ as hidden. Using Expectation Maximization, one then alternates between refining point estimates of the patch transforms $\Theta^t$ and mean patch pose $\bar{\Theta}$ in the maximization step, and computing, as part of the E-step, probabilities over each patch pair $(k, l)$'s rigidity variable $C_{k,l}$ on one hand, and EM-ICP-like template vertex assignment probabilities $V_o^t$ on the other hand. The results obtained with this simple idea confirmed the expected regularizing behavior of sharing a common inter-patch rigidity property over all frames of a sequence (see Fig. 3.2 and Fig. 3.4), leading to better temporal alignment and more robustness, quite apparent in Fig. 3.3. The features we built into the method also acted as a form of damping and stabilization in the inference when applied in sliding-window fashion across longer sequences. Detailed results are available in the original ECCV 2014 publication [Allain et al. 2014] (available as Appendix A.2) and the supplemental video.

Volume-Patch Rigidity-Enforced Template

Although encouraging, it appeared from previous results that one of the remaining limitations of the previous method was in fact a common limitation of the surface-based family of methods: the method exhibits some stretching, squeezing and generally rubbery-looking artifacts in non-rigid zones. As with LBS, this was prominent at kinematic joints, where a non-smooth split between two neighboring rigid or quasi-rigid parts is expected; however the intrinsic surface priors we chose, when weighted by an intermediate, undecided probability of rigid correlation in the neighboring patches, tend to smoothen the energy required to follow the limb joint across limbs rather than concentrating all the deformation at the joint location (see the results figures, which highlight these cases). Our hope was in some sense that the rigidity inference and regularizing effect would at least partially mitigate some aspects of the problem, which turned out to be correct, but we also sensed that this could be better enforced. One idea could have been to use a different norm as support for the surface tension expression, which in fact has later been explored at the time of this writing, e.g. [GXW+15] in the case of depth camera streams. Another possible path, which we ended up following, was to enforce rigidity constraints at the volumetric level, where volume preservation could then be built into the model to alleviate rubber and squeezing effects. A source of inspiration was to observe what had been done in the graphics community to express volumetric deformations and to mitigate problems of the LBS model [ACOL00, ZHS+05, BPWG07], which in fact had already been used for performance capture applications [dATSS07, dAST+08, BH10]. As illustrated in Fig. 3.5(a),
those approaches are however based on tessellations of the surface points of the input reconstructions and do not introduce any inner vertices. This allows some volumetric constraints to be taken into account in the deformation energy, but prevents a full volumetric treatment of the alignment problem. Other tessellations of the volume are possible (Fig. 3.5(b) and (c)), such as a regular voxel grid, but it is non-isotropic and biased along the axis directions. Using Voronoi cells of points randomly drawn inside the volume is insufficient as it yields irregular inner cells. Instead we propose a fully dense and regular volumetric treatment, using a CVT of the volume (Fig. 3.5(d)). Informally, CVTs are a particular type of Voronoi tessellation where the samples are iteratively repositioned to coincide with the center of mass of their cell (a minimal Lloyd-iteration sketch is given at the end of this subsection), which achieves the desired properties [DFG99]: isotropy, rotational invariance, uniform cells of compact and regular form factor, regular intersection of boundary cells and surface, independent cardinality and practical computation. One can also choose the number of final cells desired, thus adapting the complexity to the desired problem granularity. The tracking methodology and deformation model is then strictly analogous to §3.2.2, the main initialization difference being that both the inputs and the template shape are now tessellated into a set of CVT cells, on which the whole algorithm operates. Namely, the initial patch decomposition is now expressed on volumetric cell groupings instead of surface vertices, and observation-to-template matches are also expressed from input volumetric cells to template volumetric cells. We keep the rigidity inference on $C_{k,l}$ variables based on pairs of adjacent volumetric patches in the template, as a complementary regularizer of rigid behavior over each inferred sequence, as in the previous method. We observed significant qualitative and quantitative improvements on Kinovis and BBC React acquired datasets, as measured with both the silhouette reprojection error for template-fitted sequences and the sparse ground-truth marker RMS error for some sequences which had been simultaneously captured with multi-RGB and sparse markers [LGS+13], as illustrated in Fig. 3.6. One can notably observe the more natural folds at kinematic joints, and the increased robustness even with truly nightmarish data: e.g. GOALKEEPER has some frames where the subject gets up after leaning on the floor, with a very messy visual hull reconstruction, and the volumetric approach does not lose the main anatomic features. This paper, provided in Appendix A.3, was presented as an oral at CVPR 2015 [Allain et al. 2015]. More results can be seen in the supplemental video.
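As announced above, here is a minimal Lloyd-iteration sketch of a discrete CVT, approximated over a dense sampling of the shape's volume; it is a simplified stand-in for the exact CVT construction of [DFG99], not the implementation used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def cvt_lloyd(volume_pts, n_cells, iters=50, seed=0):
    """Minimal Lloyd iteration for a centroidal Voronoi tessellation:
    sites are repeatedly moved to the centroid of their Voronoi cell,
    here approximated over a dense sampling of the shape's volume."""
    rng = np.random.default_rng(seed)
    sites = volume_pts[rng.choice(len(volume_pts), n_cells, replace=False)]
    for _ in range(iters):
        owner = cKDTree(sites).query(volume_pts)[1]    # cell of each sample
        for c in range(n_cells):
            members = volume_pts[owner == c]
            if len(members):
                sites[c] = members.mean(axis=0)        # move to cell centroid
    return sites, owner
```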
Volume-Based Tracking-by-Detection

For the record, I will briefly mention here an interesting ramification of this work demonstrating the usefulness of a CVT-based volume approach. Previously, the deformation model was the focus of our discussion and work, and associating observations to the template was performed with EM-ICP type terms. We showed, in a collaboration with Chun-Hao Huang, Federico Tombari, Slobodan Ilic and Nassir Navab, that it is also possible to use the volumetric CVT template decomposition as the basis for the association component of the loop. For this, we directly took inspiration from work on the Vitruvian Manifold [Taylor et al.], where the depth-anchored, learned data-driven association model based on Random Forests [SFC+11] was generalized to continuous regression of depth points to a generic template surface, then applied as a one-shot association component to drive a non-rigid human body model fitting scheme. We proposed in [HAF+16, HAB+18] to substitute CVT-based representations into a Vitruvian-like pipeline for the volumetric feature extraction and regression stages with Random Forests, this time driving the volume-based deformation model of volumetric templates described in the previous section, with of course some technical adjustments. The results showed drastically improved stability and better trainability for a CVT decomposition of the shape versus training based on a regular-grid volumetric feature support. We show results where the alignment is recovered even in truly challenging situations where the subject is performing a full cartwheel, without any tracking loss. The PAMI version of this article, which explains the original methods as well as additional temporal stability improvements, is provided as Appendix A.6, with more results in the supplemental video.

FIGURE 3.7 - Overview of the proposed pipeline. From left to right: input frame, automatically computed landmarks using [Zuffi et al.], result after estimation of initial identity and posture, final result, and overlay of input and final result.

INFERRING SHAPE UNDER CLOTHING USING SHAPE SPACES

In the work discussed above, we have used priors over rigidity but not complete priors of observing humans, using only the template shape but no constraint on human kinematics. This has both advantages and disadvantages. On one hand, the generic nature of the deformation and fitting model allows uniform treatment of a wide variety of scenarios, such as people in clothing, people holding objects, and some hair detail, as long as the geometry of these features is pre-observed as part of the template used for tracking the sequence. Those added features would elude methods purely based on a generic human shape template with no clothing, for example. On the other hand, one can argue that, since we are observing humans, it only makes sense to use that prior in some way. Various methods have gone in the direction of explicitly including a clothing model, for example a specific skirt or type of clothing [RKP+08]. More recently, more and more layered models have been proposed that use the underlying body and kinematic structure as a shape and motion prior guiding an outer cloth layer [Zhang et al. 2017]. Before these latest works, in 2016, we worked on a model that estimates shape under clothing, that is, by only using the outer envelope observed over a full sequence as a guide that constrains the underlying human shape possibilities and narrows them down to the most plausible shape given the indirect observations of it. But this only works if we are given a low-dimensional representation describing the full set of plausible human body shapes. This is why we rely on S-SCAPE,
a multilinear variant of SCAPE that is easier to manipulate and optimize. This was an ideal topic to start a collaboration and PhD co-supervision of student Jinlong Yang with then newly arrived Inria researcher Stefanie Wuhrer, who joined the team in 2015 and had prior expertise in this field as a co-author of the S-SCAPE model. The well-known SCAPE family of methods was introduced in the 2000s as a means to encode body shapes with two sets of parameters: a set of kinematic parameters governing the observed body pose, and a set of intrinsic body shape parameters. The kinematic set is the same controlling set as in other kinematic model-based methods, but the body shape parameters are essentially a set of coordinates in a PCA shape basis, which is learned from a database of human shapes with different body characteristics, such as height, gender and corpulence. The immense advantage of such a model is its ability to describe a specific body shape instance, also called "identity", with a few dozen parameters, and a description of shape and pose together spanning only around a hundred parameters. We cast the fitting problem as a minimization procedure over the set of identity parameters $\beta$, which basically encodes the estimated intrinsic shape of the subject, and the set of $m$ per-frame poses $\{\Theta^t\}_{t \in \{1,\dots,m\}}$, which in this case are the joint parameters of the S-SCAPE LBS-based kinematic model. We define a loss to minimize over the set of observed input 3D shapes over time $\{S^t\}_{t \in \{1,\dots,m\}}$, with

$$E_{sequence}(\beta, \{\Theta^t\}) = \sum_{t=1}^{m} \left[ \omega_{lnd}\, E_{lnd}(\beta, \Theta^t, S^t) + \omega_{data}\, E_{data}(\beta, \Theta^t, S^t) + \omega_{cloth}\, E_{cloth}(\beta, \Theta^t, S^t) \right], \quad (3.1)$$

where $E_{lnd}$, $E_{data}$ and $E_{cloth}$ are energy terms weighted by scalars $\omega_{lnd}$, $\omega_{data}$ and $\omega_{cloth}$. The landmark term $E_{lnd}(\beta, \Theta^t, S^t)$ measures the distance between a set of automatically computed landmarks on $S^t$ and their corresponding anatomical points on $c(\beta, \Theta^t)$. This energy allows to obtain a rough estimate of the body shape and posture at each frame. The data term $E_{data}(\beta, \Theta^t, S^t)$ measures the distance of the points of $c(\beta, \Theta^t)$ to their nearest neighbors on the scan $S^t$, and serves to pull the estimate towards the observed scan surface. This term allows to obtain a good estimate of the identity $\beta$. The clothing term $E_{cloth}(\beta, \Theta^t, S^t)$ accounts for the loose clothing by encouraging $c(\beta, \Theta^t)$ to be located inside the observation $S^t$. Since the cloth term is applied to all frames, it allows to take advantage of the motion cues observed throughout the sequence; as the clothing moves close to localized regions of the body in different frames, it restricts the underlying shape encoded by $\beta$ to essentially lie inside the observed cloth for all frames. To acquire our solution over a full sequence, which may span hundreds of frames, as illustrated in Fig. 3.7, we proceed in four steps. First, we extract the landmarks matched between $S^t$ and $c(\beta, \Theta^t)$. Second, we initialize the identity parameters $\beta$ by minimizing (3.1) for a small subsequence at the beginning of the full sequence. Third, we solve (3.1) for the pose parameters $\Theta^t$ sequentially over the full sequence, while fixing $\beta$. Fourth, we refine the identity parameters $\beta$.
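A schematic rendition of the sequence energy (3.1) follows; `model(beta, theta)` stands in for the S-SCAPE surface $c(\beta, \Theta^t)$, scans are given as (points, normals) pairs, and landmark indices are assumed precomputed and fixed across frames, all of which are simplifying assumptions of this sketch rather than details of the published method.

```python
import numpy as np
from scipy.spatial import cKDTree

def e_sequence(beta, thetas, scans, model, lnd_body, lnd_scan,
               w_lnd=1.0, w_data=1.0, w_cloth=1.0):
    """Schematic evaluation of Eq. (3.1): landmark, data and clothing
    terms summed over all frames. `model(beta, theta)` stands in for
    the posed S-SCAPE surface c(beta, theta); `scans` is a list of
    (points, outward_normals) pairs, one per frame."""
    total = 0.0
    for theta, (pts, nrm) in zip(thetas, scans):
        body = model(beta, theta)                     # posed body vertices
        e_lnd = np.sum((body[lnd_body] - pts[lnd_scan]) ** 2)
        d, idx = cKDTree(pts).query(body)             # nearest scan points
        e_data = np.sum(d ** 2)
        # cloth term: penalize body vertices protruding *outside* the
        # observed envelope, i.e. positive offset along the scan normal
        s = np.einsum('ij,ij->i', body - pts[idx], nrm[idx])
        e_cloth = np.sum(np.maximum(s, 0.0) ** 2)
        total += w_lnd * e_lnd + w_data * e_data + w_cloth * e_cloth
    return total
```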
To evaluate the proposed framework, we captured a database of six subjects (three male, three female), each performing three different motions and wearing three clothing styles, using the Kinovis platform. This database has since become popular with the community (e.g. [Zhang et al. 2017]), as has the conference paper [Yang et al. 2016]. We make the paper available as Appendix A.4 and refer to the supplemental video for more results (https://hal.inria.fr/hal-01344795/file/supplementaryVideo.mp4). Figure 3.8 shows six representative frames of the database. Figure 3.9 evaluates the posture estimation on manually placed markers for walking sequences captured in tight clothing. Note that the use of S-SCAPE as a statistical prior prevents drift. Figure 3.10 shows further qualitative results of our method. Due to these very interesting results and the originality of the work, the paper became quite popular, inspiring for instance several great follow-ups in other teams, e.g. [Zhang et al. 2017]. We also had a follow-up collaboration with Jinlong Yang and Stefanie Wuhrer examining the statistics of the outer shape as a vertex displacement layer, which I am not discussing here in detail in the interest of space [Yang et al.].

CHAPITRE 4

Appearance Estimation and Refinements

INTRODUCTION

Multi-camera capture platforms such as the 68-camera Kinovis produce a lot of observations of the same scene and the same surfaces. Thanks to the sequence alignments described in the previous chapter, we have access to an additional source of observation redundancy. A natural research goal, which has been the focus of our attention in the last decade, is how to best exploit this redundancy to enhance the quality of the models produced. This is a particularly important aspect of our work since, as was previously discussed, the subjects we acquire cover only a fraction of the input image pixels, limiting the actual amount of appearance data present in each input frame. We discussed several of our works toward this goal concerning geometry acquisition in chapter 2, but another way to enhance the perceived realism and appeal of our output models is to acquire the fine-grain color appearance and texture of those subjects. To this goal we can leverage the multiple RGB streams relating to the same underlying surface. We here discuss two families of works we carried out in this direction to acquire appearance data: first multi-view super-resolution (§4.2), and second how to efficiently describe and estimate a temporally evolving appearance representation (§4.3).

HIGH RESOLUTION 3D SHAPE TEXTURE FROM MULTIPLE VIDEOS

Retrieving appearance information from all views of the subject raises interesting questions. In the case of a single video, a vast literature on 2D super-resolution methods exists, which by analogy raises the possibility of retrieving appearance detail from the multi-view streams, with a quality beyond that of any given input frame in isolation.
When we took on this project with PhD student Vagia Tsiminaki, co-supervised with Edmond Boyer for the React European project on content production, the literature on the subject mostly looked at multi-view texture estimation without exploiting temporal redundancy: by aligning the various contributing images onto a single texture to avoid ghosting [TAL+07, LHS01], or by building a texture patchwork of single-view contributions optimized with graph cuts [Lempitsky et al.], extended to the temporal domain in one of the rare methods addressing time redundancy in the multi-view case at the time [Janko et al.]. Only a handful of particularly relevant works had started to examine this explicitly as a multi-view super-resolution problem, but without considering temporal frames [Goldluecke and Cremers; Goldluecke et al.]. The literature in the monocular video case is however abundant. An interesting thing we learned from reviewing those works is that a well-identified generative model of low-resolution image formation had emerged, as a geometric warping, blurring and sub-sampling process of the initial high-resolution image [Baker et al.]. Of particular interest to us is that this model can be represented by a stack of linear transforms, and that Bayesian noise models have been developed to make explicit the noise dependencies and the statistical priors over the image and warps to estimate [Fransens et al.], some using L1-norm based priors and the total variation (TV) minimization popular for image restoration tasks [Liu et al.]. Notably, super-resolving multiple videos of a moving subject had been examined in a performance capture context, but only for the input viewpoints [Tung et al.].

Proposed Methodology and Contributions

Essentially no multi-view super-resolution technique existed that used the time-tested elements of 2D super-resolution generative models to retrieve a common appearance map for a 3D model, nor was the temporal aspect really examined in this context either. Yet the building blocks for that were there, including for example papers such as [EDM+08], which hinted at how geometry-induced warps could be substituted for the 2D optical-flow-driven warps of 2D super-resolution pipelines. We looked at the problem as one of inferring a single appearance map $T$, attached to a reconstructed geometry pre-aligned with [Allain et al. 2014], from all input frames $\mathcal{I} = \{I_i^t\}_{i \in \{1,\dots,n\}}^{t \in \{1,\dots,m\}}$. We proposed to generalize the linear generative pipeline of the 2D super-resolution technique [Liu et al.], which explains the low-resolution input views as warped, blurred, subsampled versions of an underlying high-resolution (HR) image, by filling the remaining gap: explaining the high-resolution images themselves as projected versions of the common high-resolution texture. This framework is illustrated in Fig. 4.1.
To generate an input image from each HR image $H_i^t$, the HR image is first warped according to the different sources of variability apparent in the image (calibration error, distortion, model geometry error) using a dense warp field $W_i^t$. This warp results in a linear operator over the HR image, which we note $F_{W_i^t}$. The image then traverses the optical system, where it is blurred and captured by the CCD, which performs light integration at every photosite. Following the 2D super-resolution literature [Baker et al.; Fransens et al.], this is generally modeled using a Point Spread Function (PSF) with the form of a Gaussian blur kernel, followed by an image subsampling stage. Both operations can be written as linear operators, the image-wide blur operator $K$ and subsampling operator $S$, which are applied to the HR image to obtain a view's observed image $I_i^t = SKH_i^t$. Remarkably, in its noiseless form, the full image formation model can thus be noted as a single, sparse linear operator $A_i^t = SKF_{W_i^t}P_i^t$ for each view $\{i, t\}$, with $3 \times w_{I_i^t} \times h_{I_i^t}$ rows and $3 \times w_T \times h_T$ columns, such that $I_i^t = A_i^t T$ for each view $\{i, t\}$. This linear model is then used as the foundation to describe the noisy dependencies in the model and ultimately obtain a Bayesian generative model, whose MAP we estimate by alternating the minimization over the common appearance map $T$ and over the warps $W_i^t$, in a fashion analogous to the original 2D algorithm.

Results

We compared a static version of our algorithm to the then state-of-the-art static multi-view super-resolution technique of [Goldluecke et al.], with significantly improved results, showing less noise and fewer artifacts, as illustrated in Fig. 4.2. The method was also successfully tested on several datasets acquired with the Kinovis platform and with the BBC React platform, shown in Fig. 4.3, on small temporal windows, in particular comparing the results obtained with 1, 2 or 3 frames, with a measurable improvement each time. A more detailed version of the framework, which was published at CVPR 2014 [Tsiminaki et al.], and additional results and comments are available as Appendix A.1 and in the supplemental video.
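To fix ideas on the stacked-operator view, the sketch below assembles simplified versions of two of the sparse factors with scipy.sparse: a box-average subsampling $S$ and a 1D Gaussian blur band for $K$. The real pipeline uses a 2D PSF, a projection operator $P$ and a warp operator $F_W$ built in the same one-row-per-output-pixel fashion; none of this code comes from the original implementation.

```python
import numpy as np
import scipy.sparse as sp

def box_subsample(h, w, f):
    """S: average f*f pixel blocks of an h*w image (flattened, row-major)."""
    hs, ws = h // f, w // f
    r, c = [], []
    for y in range(hs):
        for x in range(ws):
            for dy in range(f):
                for dx in range(f):
                    r.append(y * ws + x)
                    c.append((y * f + dy) * w + x * f + dx)
    return sp.csr_matrix((np.full(len(r), 1.0 / f**2), (r, c)),
                         shape=(hs * ws, h * w))

def gaussian_blur_1d(n, sigma=1.0, rad=2):
    """K (1D band shown): banded Gaussian convolution as a sparse matrix.
    Border rows lose a little mass; a full implementation renormalizes."""
    offs = list(range(-rad, rad + 1))
    k = np.array([np.exp(-0.5 * (o / sigma) ** 2) for o in offs])
    k /= k.sum()
    return sp.diags(k, offs, shape=(n, n), format="csr")

# In the noiseless model each input view chains sparse operators applied
# to the flattened texture T:  I = S @ K @ F_W @ P @ T.
```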
EIGEN APPEARANCE MAPS

In the latter work, we were able to retrieve high-quality textures by performing a principled fusion of the appearance data present in the different views and the different time frames. This however can only be valid over a relatively short temporal window, before the perceived appearance changes. This is because the images provide a reading of the radiance of the surface patches each pixel sees, and not an intrinsic value; this radiance changes with the surface motion and orientation, and with the significant changes in lighting conditions that may occur over a larger temporal window. Appearance changes may also encode desirable appearance variations, such as changes in lighting or in the personal expression of the subject, that have a negligible impact on geometry and could not be encoded at that level. One strategy is therefore to store estimated textures for as long as the texture does not significantly change, e.g. as in [CCS+15]. But how to represent this information for longer sequences is still an open problem, and needs to account for all these different sources of variability. With PhD students Vagia Tsiminaki and Adnane Boukhayma, and given the encouraging results described in the previous section, it came as a natural extension of their work to tackle this challenge and examine how to retrieve and represent appearance maps over full acquired 4D sequences. We proposed to advance this aspect by providing a view-independent appearance representation and estimation algorithm, to encode the appearance variability of a dynamic subject observed over one or several temporal sequences. Compactly representing image data from all frames and viewpoints of the subject can be seen as a non-linear dimensionality reduction problem in image space, where the main non-linearities are due to the underlying scene geometry. Our strategy is to remove these non-linearities with state-of-the-art geometric and image-space alignment techniques, so as to reduce the problem to a single texture space, where the remaining image variabilities can be straightforwardly identified with PCA and thus encoded as Eigen texture combinations. To this goal, we identify two geometric alignment steps. First, we coarsely register the geometric shape models of all time frames to a single shape template, for which we pre-computed a single reference surface-to-texture unwrapping; we use the techniques of chapter 3 for this task. Second, to cope with remaining fine-scale misalignments due to registration errors, we estimate realignment warps in the texture domain. Because they encode low-magnitude, residual geometric variations, they are also advantageously decomposed using PCA, yielding Eigen warps. The full appearance information of all subject sequences can then be compactly stored as linear combinations of Eigen textures and Eigen warps. Our strategy can be seen as a generalization of the popular work of Nishino et al., which introduced Eigen textures to encode appearance variations of a static object under varying viewing conditions, to the case of fully dynamic subjects with several viewpoints and motions. The pipeline is shown to yield effective estimation performance. In addition, the learned texture and warp manifolds allow for efficient generalizations, such as texture interpolations to generate new unobserved content from blended input sequences, or completions to cope with missing observations due to e.g. occlusions.

FIGURE 4.4 - Overview: time-consistent shape modeling provides datasets of appearance maps. Our proposed method exploits the manifold structure of this appearance information through PCA decomposition to generate the Eigen appearance maps relative to a shape.

Method

To eliminate the main geometric non-linearity, we first align the sequence geometries to a single template shape and extract the texture maps of a subject over different motion sequences in a common texture space, using the method previously described in §4.2 [Tsiminaki et al.]. From these subject-specific textures, the Eigen textures and Eigen warps that span the appearance space are estimated. The main steps of the method, depicted in Fig. 4.5, are as follows.

1. Texture deformation fields that map input textures to, and from, their aligned versions are estimated using optical flow. Given the deformation fields, Poisson reconstruction is used to warp the textures.
2. PCA is applied to the aligned maps and to the texture warps to generate the Eigen textures and the Eigen warps that encode the appearance variations due to, respectively, viewpoint, illumination, and geometric inaccuracies in the reference model.

Hence, the main modes of variation of the aligned textures and deformation fields, namely the Eigen textures and Eigen warps respectively, span the appearance space in our representation. Note that due to texture space discretization, the warps between textures are not one-to-one and, in practice, two separate sets of warps are estimated: forward warps map the original texture maps to the reference map, and backward warps map the aligned texture maps back to the corresponding input textures (see Fig. 4.5). Given the Eigen textures and the Eigen warps, and as shown in Fig. 4.6, a texture can be generated by first creating an aligned texture as a linear combination of Eigen textures, and second de-aligning this new texture using another linear combination of the Eigen warps.

Results

We evaluated the method on a number of Kinovis-acquired datasets and measured the quality using the SSIM metric [Wang et al.], which is more tolerant to small shifts, both in image space after reprojection and in texture space before reprojection, by comparison to the short-term maps estimated with §4.2 [Tsiminaki et al.]. The sequences are first tracked using the method from §3.2.3 with a single template that is used as support for the common texture space. The results show that our strategy successfully encodes 2048x2048 datasets of 200 or 300 frames with virtually no error (0.98 SSIM) using 50 PCA components. We also show that, using an equivalent number of parameters, a simple baseline PCA strategy without Eigen warps achieves much lower performance, substantiating that the Eigen warps successfully correct small geometry-induced texture slippage. We also show the applicability of our method to two tasks: interpolation between two template poses, by interpolating in both Eigen spaces (warp and texture), and texture completion to correct artifacts in estimated textures, e.g. due to poor visibility, by projecting the textures onto a pre-established space of our Eigen representation; in both cases with encouraging results, as illustrated in the corresponding figures.
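A minimal sketch of texture synthesis from this representation follows: blend Eigen textures, then de-align with a blended backward Eigen warp. The nearest-pixel resampling and all array layouts are simplifying assumptions of this illustration, not the paper's implementation.

```python
import numpy as np

def synthesize(tex_mean, tex_basis, warp_mean, warp_basis, a, b, h, w):
    """Generate a texture from Eigen appearance maps: blend Eigen textures
    with coefficients a, then de-align the result with a backward warp
    blended from Eigen warps with coefficients b (nearest-pixel lookup).
    tex_* are flattened (h*w*3) arrays/bases, warp_* flattened (h*w*2)."""
    aligned = tex_mean + tex_basis @ a                   # aligned texture
    warp = (warp_mean + warp_basis @ b).reshape(h, w, 2) # per-texel (dx, dy)
    img = aligned.reshape(h, w, 3)
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs + warp[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + warp[..., 1]).astype(int), 0, h - 1)
    return img[sy, sx]                                   # de-aligned texture
```

The two PCA bases would typically be obtained by stacking the aligned textures (resp. estimated warps) as columns, centering, and keeping the leading singular vectors.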
CHAPITRE 5

Conclusion

In this chapter, we give a short recap of the contributions, discuss some insights, and mention some of the interesting follow-ups related to this work that are happening in the team or that are envisioned as future work.

SUMMARY AND INSIGHTS

This manuscript presents some of the prominent works with my students, for PhDs defended in the last five years. With the general goal of advancing 4D modeling to digitize captured subjects, in the form of high-quality time-evolving representations estimated from multi-viewpoint video, we advanced the field in the following directions.

In chapter 2, we first examined estimating 3D shapes in the static case, with several simultaneous image inputs from one or several cameras. We crafted a classic MVS pipeline tuned for performance capture applications, but also successful with more general data, then demonstrated the benefit of learning the photoconsistency function and substituting it into this pipeline. We then extended our experience of learning-based approaches to end-to-end monocular 3D model estimation, which is currently the basis for new works in the team on monocular sequence analysis and new multi-view reconstruction algorithms. All contributions presented became state of the art in reconstruction quality at the time of their publication and to this day serve as input to subsequent methods explored in the team. Some main limitations are the relatively slow training and processing times of e.g. Vincent Leroy's algorithms, due to the very fine level of detail targeted and the particular training procedure; this also drives our future research to improve efficiency.

In chapter 3, we presented several principled motion models for 3D sequence tracking and alignment, using surface and volume rigidity priors, and volume support for improved data-driven shape matching, with state-of-the-art results at the time of publication. We also presented human-prior-driven tracking under wide clothing. They are all grounded in the per-time-step extraction of 3D models discussed in the previous chapter. While these have been great achievements and have definitely contributed to advancing the understanding of motion models, improving precision and robustness, in particular breaking the centimetric error barrier in broad acquisition situations, remains a prominent challenge. Even better cooperation between the reconstruction, 3D sequence alignment and appearance modeling layers, over longer terms, is probably a key to breaking this deadlock, as is better accounting for clothing in our models.

In chapter 4, we presented a principled approach to estimate appearance maps from multi-view sequences over small time windows, treating it as a generalized multi-camera super-resolution problem. We further showed how this estimation can be used as a building block to build an Eigen space of varying textures to encode the appearance of a 4D sequence. These contributions were made possible by the advances in surface reconstruction and alignment previously presented. One of the challenges facing us is to make the super-resolution model more practical, as it is currently very compute-intensive. We believe finer detail is to be accessed with better cooperation between the geometric and appearance stages as well. This has largely been the basis for new work with PhD student Matthieu Armando on new encodings for appearance on the shape [Armando et al.], and on fine detail correction and restoration on the shape surface [Armando et al.], for which we are obtaining very encouraging results.

FUTURE DIRECTIONS

The work has brought me to very different fields and was rich with new discoveries, which helped forge some intuitions.

Breaking stratified assumptions and representations. Mostly one stratification approach was examined in this document, but others are possible. For example, some works examine the 3D motion problem and its priors in trajectory space [Akhter et al.; Simon et al.]. I think there are powerful priors to be learned by examining the problem in such orthogonal directions, not necessarily abandoning the existing geometric priors in fact, but using those in complementary fashion.
Learning offers new paths for considering input data and for jointly representing output shape, motion and appearance. Graph-convolution networks [MBM+17, VBV18] offer a promising avenue of research that we have begun exploiting in the team, e.g. [Armando et al.]. Also of great interest to us are methods that redefine shape representation based on implicit networks [CAPM20], e.g. the recent occupancy flow works [Niemeyer et al.], which echo some of our own works toward a similar goal [Guan et al.], or NeRF [MST+20]. This is driving some of our current explorations, e.g. with PhD students Mathieu Marsot and Boyao Zhou [ZFB+20].

Better priors for motion and clothing. Modeling clothing is definitely becoming a hot topic, because our research has hit the precision limits associated with body-only models. New models increasingly account for the actually observed surface layer which sits on top of the body and flesh layers. We have started examining this to some extent with Jinlong Yang's work, as presented in this document, with one of the team's follow-ups focused on predicting a vertex deformation layer of a template based on the underlying human motion parameters [Yang et al.]. We have personally made a lower-level effort, in collaboration with Stefanie Wuhrer and physical and structural mechanics specialists Florence Bertails and Arnaud Lazarus, in co-supervising PhD student Abdullah Haroon Rasheed, to better understand how cloth simulations could be used in vision systems to build new priors [RRBD+20]. One can measure the growing popularity of the topic by the number of papers and citations surrounding this work, and by seeing interesting new work picking up where we left off with Jinlong Yang's thesis, e.g. [Lassner et al.; Pons-Moll et al.]. And I believe much more can be done in this direction.

From old make new. It would be quite lame and self-evident to state that Deep Learning will continue reshaping our field, 4D modeling being no exception. But our experience, and the literature, show that one of the key limits of Deep Learning models, explainability, can be at least partially mitigated by focusing the learning method on well-understood sub-problems of time-tested classical pipelines, with a well-identified support domain, which has guided some of our earlier works, notably with the MVS photoconsistency function. While more and more methods are able to look at the whole problem end-to-end, I think this is still a valid approach to gain understanding of new ideas and new problems. Many times we can use the classic pipeline as a basis for a data-driven method to build on. This is in some sense what we were doing with our volumetric tracking-by-detection method in §3.2.4 [HAF+16], by alternating a data-driven discriminative detection step with a classic generative model-based error minimization. Interestingly, during her post-doc after her thesis with us in Morpheo, Vagia Tsiminaki demonstrated another way to achieve this, by using a network to compute the fine-detail residual of appearance maps with respect to the generative appearance map model presented here [RCO+19]. These are all useful alternative ways to look at our problems. Another aspect I am noticing, and increasingly interested in integrating in my future learning approaches, is that some time-tested pipeline structuring ideas that have proven essential for classic vision algorithms to work in practice and achieve top performance, such as coarse-to-fine processing, are finding their way into the structure of deep learning pipelines. It is not a coincidence that coarse-to-fine pyramid networks are top scorers in today's benchmarks, such as PWC-Net for optical flow [Sun et al.] or its counterparts for the MVS problem [Yang et al.]. This is exciting as it offers new perspectives for structuring our future pipelines.
Interestingly, during her post-doc after her thesis with us in Morpheo, Vagia Tsiminaki demonstrated another way to achieve this, by using a network to compute the fine detail residual of appearance maps with respect to the generative appearance map model presented here [RCO + 19]. These are all useful alternative ways to look at our problems. Another aspect I am noticing and increasingly interested in integrating in my future learning approaches, is that some time time-tested pipeline structuring ideas that have proven to be essential for classic vision algorithms to work in practice and achieve top performance, such as coarse-to-fine processing, are finding their way in the way deep learning pipelines are structured. It is not a coincidence that coarse-to-fine pyramid networks are top scorers in today's benchmarks, such as PWC-net for optical flow [START_REF] Sun | Cnns for optical flow using pyramid, warping, and cost[END_REF] or for the MVS problem [START_REF] Yang | Cost volume pyramid based depth inference for multi-view stereo[END_REF]. This is exciting as it offers new Introduction Gathering appearance information of objects through multi-camera observations is a challenging problem, of particular interest for multi-view capture systems. In such systems, typically, a geometric model is reconstructed, tracked or refined to be as close as possible to reality. Adding an appearance or texture layer to this geometric information plays an essential part in the realism of the result, and is often more important than geometric detail to convey the object's visual aspect. Applications of this acquisition pipeline, such as broadcast, special effects or entertainment, among others, are very highly demanding in terms of quality. Yet, even with state of the art multi-camera studio equipment, simply reprojecting texture from any one of the high resolution video streams used in the acquisition process is not enough to guarantee good texture coverage and high quality renderings or close-ups. Because several such input video streams are available in this context, and in order to take advantage of all the information they carry, we naturally turn to the various sources of data redundancy to boost texture quality. Following 2D monocular superresolution techniques that successfully regain details from low resolution images, we consider here a similar framework for multi-viewpoint videos. Such a framework is however significantly different from 2D super-resolution. First, dealing with multiple video streams is a different problem than using only one, where little parallax is usually assumed to occur. In a multi-view scenario, the intrinsic appearance of a single 3D object is only partially visible in each view, and observed only after being perspective projected, distorted by 3D geometry, and self-occluded. The 3D geometry itself is subject to reconstruction error and thus uncertain. Seamlessly blending and super-resolving the different input contributions into one single coherent texture space, while accounting for all such sources of variability is thus quite challenging. In fact it has only recently started to be addressed as such [10] for static objects. But to fully exploit data redundancy, temporal accumulation of all views also needs to be examined. Not only is it an additional source of data, but interestingly temporal accumulation might make it possible to obtain high quality results with a sparser set of viewpoints than in the static case. This is not without its own source of difficulties. 
More often than not, the subjects of interest are of arbitrarily deformable nature, such as human actors. This means that consistent temporal accumulation of texture data can only be done by realigning the relevant parts of the texture from one temporal frame to another, and accounting for sources of geometric variability. Fortunately, recent progress in non-rigid surface tracking methods [3] offers a path to resolve such issues, which we open with this work.

Overview. Generalizing existing multi-view appearance super-resolution work to the temporal domain requires a robust model of variability. As the appearance of subjects may drastically change over the long run, we focus in this paper on small non-rigid motions of the subject around a stable pose and observed appearance. We propose to deal with the largest non-rigid motion component using a surface tracking method [3], and to compensate for any remaining geometry perturbations with a per-view, per-time-frame warp. This per-view registration, popularized in various rendering techniques [20, 8], has the large advantage of uniformly dealing with all sources of error (calibration, reconstruction, temporal misalignments and ghosting) for our texture super-resolution, and is one of the major contributions of the paper. This paper is also the first, to our knowledge, to use both multiple viewpoints and multiple temporal frames to build one common super-resolved texture, as opposed to [21], which enhances the input views directly, and [10], which only deals with the static multi-view aspect. Warping is done on an intermediate, high-resolution projected proxy of the model texture, where variability can be appropriately and densely compensated (§3.1). We also expose a straightforward model and algorithm for this task, illustrated in Fig. 2. We notably show that some linear models [17] of image formation can be generalized to the multi-view, multi-frame case (§3), as can the monocular noise models (§4). We exhibit a two-stage iterative algorithm (§5), whose convergence is illustrated in experiments (§6). Our validation protocol also includes a favorable comparison with the closest state-of-the-art method [10], at the intersection of the validity domains of the two methods (the static, multi-view texture super-resolution case). Furthermore, we quantitatively demonstrate the convergence and the temporal improvement of our method over using the same number of views in the static case.

Related Work

View-Dependent Texturing. Various strategies exist to retrieve and render the appearance of objects from input views and a given viewpoint, a geometric reconstruction being assumed available in general. One of the first proposed is to reproject and blend view contributions according to visibility and viewpoint-to-surface angle [7]. View-dependent techniques have been generalized to model and approximate the plenoptic function of the scene object, capturing view-dependent shading effects [2], but this requires many dense views. Imperfect proxies and other geometric errors create rendering misalignments (ghosting), which various techniques correct with an additional image-space registration step [8], by building a local basis of appearance variability [5], or by refining the geometry proxy [19]. By nature, these methods are not targeted at capturing intrinsic, view-independent texture properties and generally do not exploit viewpoint redundancy to super-resolve visual quality, nor do they easily extend to the time domain for deformable objects as proposed.
Multi-View Texture Estimation. To store intrinsic details of the acquired object and later render them, numerous methods build an image-based texture atlas to store appearance information, where each texel blends contributions from each view. Realignment is often proposed, again to avoid ghosting [20, 15], but a second strategy exists which instead builds the texture as a mosaic of single-view contributions, whose seam locations are optimized to minimize appearance change between fragments [14]. Interestingly, this strategy was extended to the temporal domain [13]. Only a handful of particularly relevant works examine how to super-resolve fine appearance detail from viewpoint redundancy at a single time frame [12, 10]. We propose an improved, unified model to deal with geometric variability due to reconstruction error and small deformations across time for multi-view super-resolution.

Video Super-Resolution. While very few works exist concerning super-resolution techniques applied in a multi-view context, the problem has been extensively studied in the monocular case. The image formation model is well identified, as a geometric warping, blurring and subsampling process of the initial high-resolution image [1]. Two features of particular interest to us are that this model can be represented by a stack of linear transforms, and that Bayesian models have been developed to make explicit the noise dependencies and the priors over the target image and estimated warps [9, 17]. L1-norm based priors and total variation (TV-)minimization are increasingly popular [17] for their image restoration qualities. Notably, super-resolving multiple videos of a moving subject was examined in a performance capture context, but only for the input viewpoints [21]. Our model proposes temporal and multi-view super-resolution, yet super-resolves a single, intrinsic appearance map which can be re-used to render new viewpoints.

Image Formation Model

Our goal is to estimate an appearance map $T$ of an object of interest from a set of input color images $\{I_i^t\}$, where $i \in \{1, \dots, n_i\}$ is the camera number and $t \in \{1, \dots, n_t\}$ the time. We assume a temporally coherent mesh model of the object, i.e. whose connectivity is time-independent but of varying pose $\{M^t\}$, obtained by surface tracking [3].

High Resolution Projection

We project the texture to a high-resolution (HR) image $\{H_i^t\}$ for each viewpoint $\{i, t\}$. Before reaching $H_i^t$, the texels of $T$ undergo two geometric transformations.

Texture Mapping. For the appearance to be mapped to the mesh, a geometric mapping function must map each texel of $T$ to the mesh surface. Thanks to the fixed connectivity of the mesh across time, only one such function $\tau$ needs to be defined. Conformal mappings are preferred, because the preservation of angles ensures low distortion during the transfer, such that the texel density of $T$ is kept homogeneous on the 3D surface. Note that due to potential cuts and the non-zero genus topology of the objects of interest, $\tau$ may not be continuous and may have a support region with several connected components (or charts) in the texture domain. To obtain $\tau$, we use [18], which yields large charts with relatively few components, a useful feature for regularization and to avoid continuity artifacts.

Projection to High Resolution Image. We assume the projections $\{\pi_i^t\}$ are known for each view $i$ and time $t$. A texel at texture location $x$ is mapped to a geometric point $\tau(x)$ on model $M^t$; this point is then projected in view $\{i, t\}$ at point $\pi_i^t \circ \tau(x)$.
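The composition $\pi_i^t \circ \tau$ can be sketched per texel as a barycentric interpolation on the texel's triangle followed by a pinhole projection; the 3x4 camera-matrix formulation below is an assumption of this illustration, not necessarily the calibration model used in the paper.

```python
import numpy as np

def texel_to_pixel(bary, tri_verts, P):
    """Map a texel to the image: tau(x) interpolates the texel's 3D position
    from its triangle's three vertices with fixed barycentric weights, then
    the 3x4 camera matrix P projects it (pinhole, homogeneous divide)."""
    X = bary @ tri_verts                 # tau(x): 3D point on the mesh surface
    x = P @ np.append(X, 1.0)            # pi(tau(x)) in homogeneous pixels
    return x[:2] / x[2]
```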
This projection model is intended to provide a high resolution image space in which to precisely compute a correction warp, which remaps the texture contributions to the matching content of the input images I_i^t. In particular we do not model any optical blur here; rather, for each HR pixel q we collect all texel contributions projecting within it. Because calibration and 3D models are available, we can use GPU z-rendering to filter out non-visible texels [7]. Occasionally the density of projected texels is insufficient (e.g. in high-curvature regions of the surface) for a pixel to receive any texture samples. In this situation we assume the underlying surface appearance perceived by this pixel is an interpolation of neighboring texels. For a uniform, continuous treatment of both cases, we combine all texel contributions falling in the vicinity of q with a spatial Gaussian weight of small variance σ_p², and normalize to one the sum of texel contribution weights for a pixel q. The continuity of this scheme ensures that no artificial discontinuity is created as a result of a discrepancy in the treatment of these cases. It also ensures that samples present at the pixel contribute overwhelmingly when present at the center of the pixel, and that the pixel is otherwise computed as a weighted sum of texels further away.

Note that, with this formulation, HR pixels are a linear combination of texels of T. Let P_i^t be the resulting sparse projection operator such that H_i^t = P_i^t T, appropriately collecting the weights previously discussed after being mapped and projected in view {i, t}. Each P_i^t can be stored as a sparse matrix with w_{H_i^t} × h_{H_i^t} rows and w_T × h_T columns, respectively the HR image resolution and the chosen texture resolution.

Inputs as Warped, Downsampled HR Images

To generate an input image from each HR image H_i^t, the HR image is first warped according to the different apparent sources of variability impacting the input image (calibration error, distortion, model geometry error) using a dense warp field W_i^t. This warp results in a linear operator over the HR image, which we note F_{W_i^t}. The image then traverses the optical system, where it is blurred and captured by the CCD, which performs light integration at every photosite. Following the 2D super-resolution literature [1,9], this is generally modeled using a Point Spread Function (PSF) in the form of a Gaussian blur kernel, followed by an image subsampling stage. Both operations can be written as linear operators, the image-wide blur operator K and the subsampling operator S, which are applied to the HR image to obtain a view's observed image I_i^t = S K H_i^t. Remarkably, in its noiseless form, the full image formation model can thus be noted as a single, sparse linear operator A_i^t = S K F_{W_i^t} P_i^t for each view {i, t}, with w_{I_i^t} × h_{I_i^t} rows and w_T × h_T columns, such that I_i^t = A_i^t T for each view {i, t}. This elegantly generalizes the linear formation models used in various 2D super-resolution models [17] to the 3D+t case.
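To illustrate this composition, the sketch below assembles a toy version of the operator A_i^t with SciPy sparse matrices. The warp operator F_W and the projection P are assumed precomputed as sparse matrices, images are vectorized in row-major order, and all names are illustrative assumptions (the original implementation was in MATLAB).

    import numpy as np
    import scipy.sparse as sp

    def gaussian_band(n, sigma, radius=2):
        """1D Gaussian convolution as a sparse banded (n x n) matrix."""
        offs = list(range(-radius, radius + 1))
        g = np.array([np.exp(-o**2 / (2 * sigma**2)) for o in offs])
        g /= g.sum()
        return sp.diags([np.full(n - abs(o), g[i]) for i, o in enumerate(offs)],
                        offs, format="csr")

    def formation_operator(P, F_W, h, w, f, sigma_k):
        """Compose A = S K F_W P for one view (a sketch)."""
        # Separable Gaussian blur K over an h-by-w image
        K = sp.kron(gaussian_band(h, sigma_k), sp.identity(w)) @ \
            sp.kron(sp.identity(h), gaussian_band(w, sigma_k))
        # Subsampling S: keep one pixel out of f in each direction
        keep = [(r * f) * w + (c * f) for r in range(h // f)
                                      for c in range(w // f)]
        S = sp.csr_matrix((np.ones(len(keep)),
                           (np.arange(len(keep)), keep)),
                          shape=(len(keep), h * w))
        return S @ K @ F_W @ P   # so that I = A @ T (vectorized texture)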
Bayesian Generative Model

The linear model previously discussed describes how input pixels are obtained through warping and blending of texels in a noiseless fashion. As in the 2D case, inverting the problem to estimate T and the warps W_i^t from I_i^t is ill-posed, non-convex, and noise-ridden [1]. We thus introduce a noise model and priors for better problem conditioning, formulating the solution as a MAP estimation over T and the warps {W_i^t} for all views and temporal frames {i, t}:

  {T̂, {Ŵ_i^t}} = arg max_{T, {W_i^t}} p(T, {W_i^t} | {I_i^t}),   (1)

where the posterior is a product of prior and likelihood terms:

  p(T, {W_i^t} | {I_i^t}) ∝ p(T) ∏_{i,t} p(W_i^t) ∏_{i,t} p(I_i^t | W_i^t, T).   (2)

Prior Terms. To ensure sparsity of the variations of the estimated texture and warps, we impose minimal Total Variation (TV) constraints on the appearance image T and each W_i^t:

  p(T) = (1 / Z_T(λ)) e^{−λ ‖∇T‖},   (3)

  p(W_i^t) = (1 / Z_W(γ)) e^{−ν (‖∇u_i^t‖ + ‖∇v_i^t‖)},   (4)

where ∇ is the gradient operator and ‖∇T‖ = Σ_q ‖∇T(q)‖ = Σ_q (‖T_x(q)‖ + ‖T_y(q)‖), the sum over pixel index q of the L1-norm of the spatial image derivatives of T(q). The same definition holds for u_i^t and v_i^t, the x- and y-components of the warp W_i^t. Z_T(λ) and Z_W(γ) denote the normalization constants of both distributions. The TV constraint ensures that T is treated as a natural image to be restored with sparse and preserved edges. However, a discontinuity between some neighboring object surface points can be created by necessary cuts in the mesh unwrapping algorithm, leading some mapped texels to appear in different charts despite their proximity on the surface [13]. For such texels, we carefully compute gradients by computing the transform of axis directions as reprojected in the chart where the surface neighbors were mapped [10,11]. This minimizes discontinuities in treatment across chart boundaries in the estimation.

Data Term. Under the assumption that the noise is independent per pixel given the information about the texture, model and cameras, we impose a Gaussian noise model for each frame {i, t}:

  p(I_i^t | W_i^t, T) = (1 / Z(D_i^t)) e^{−(I_i^t − A_i^t T)^⊤ D_i^t (I_i^t − A_i^t T)},   (5)

where D_i^t is a diagonal covariance matrix introduced to allow different noise characteristics per pixel q, and Z(D_i^t) a normalization function of D_i^t. In 2D super-resolution models, a single variance per input image is usually used, with the i.i.d. noise assumption [9]. However, when acquiring appearance in the 3D case, it is well known that contributions need to be modulated according to the angle θ_q between the viewing vector and the local surface normal [7]. This can be purposely identified in the generative model, where each diagonal element d(θ_q) of D_i^t materializes the breadth of the underlying Gaussian predictive model and thus the confidence in the pixel. We set this value as a robust, conservative function of θ_q given in §6, which we assume fixed for the purpose of estimation, under small warp perturbations. Note that this is a valid assumption since visibility and grazing angles are generally stable, as we assume the full poses of the model M^t are given for all frames.

Inference

Directly maximizing (1) over all variables is intrinsically hard and seldom done in the literature. We opt for a coordinate descent scheme, alternating between T and the W_i^t.

Appearance Map. We maximize (1) by minimizing its negative log, dropping all terms independent of T:

  T̂ = arg min_T Σ_{i,t} (I_i^t − A_i^t T)^⊤ D_i^t (I_i^t − A_i^t T) + λ ‖∇T‖,   (6)

where the data term develops into a weighted sum of per-pixel L2-norms. Although we do not use a specifically robust data-term norm here, unlike some works, we nevertheless obtain excellent results by enforcing robustness through the constant covariance matrix D_i^t, as will be shown.
Optimizing an L2 data term with a TV-regularizer has been specifically studied [4], yielding a family of forward-backward splitting solvers whose implementations are available off-the-shelf [6]. Let us denote by f_d(T) and f_TV(T) the data term and the TV term. Forward-backward splitting is an iterative algorithm for estimating T, alternating between a gradient update step and a projection prox_{γ, f_TV}, which computes an implicit subgradient descent step for the TV-norm:

  T^{n+1} = prox_{γ, f_TV}(T^n − γ ∇f_d(T^n)),   (7)

where γ is a step-size parameter. Our re-weighted functional (6) only implies a modification of the gradient update with respect to the standard case, with ∇f_d(T^n) = 2 Σ_{i,t} (A_i^t)^⊤ D_i^t (A_i^t T^n − I_i^t).
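The following is a minimal sketch of this iteration for a single-channel texture. In place of the paper's UNLocBOX-based MATLAB code, it assumes the proxTV Python package is available for the TV proximal step; all other names and layouts are illustrative assumptions.

    import numpy as np
    import prox_tv  # proxTV package, assumed to provide tv1_2d for the prox step

    def super_resolve(T0, A_list, D_list, I_list, lam, gamma, n_iter=50):
        """Forward-backward splitting for the functional (6), a sketch.

        T0     : (hT, wT) initial texture estimate
        A_list : sparse formation operators A_i^t, one per view/frame
        D_list : per-pixel confidence weights d(theta_q), one vector per view
        I_list : vectorized input images I_i^t
        """
        T, shape = T0.ravel().copy(), T0.shape
        for _ in range(n_iter):
            grad = np.zeros_like(T)
            for A, d, I in zip(A_list, D_list, I_list):
                r = A @ T - I
                grad += 2.0 * (A.T @ (d * r))   # 2 A^T D (A T - I), per view
            T = T - gamma * grad                # explicit gradient step
            # Implicit step: proximal operator of gamma * lambda * TV
            T = prox_tv.tv1_2d(T.reshape(shape), gamma * lam).ravel()
        return T.reshape(shape)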
Warp Estimates. We independently estimate each W_i^t for an input view {i, t}. Minimizing the negative log of (1) and dropping all terms independent of W_i^t yields:

  Ŵ_i^t = arg min_{W_i^t} ν (‖∇u_i^t‖ + ‖∇v_i^t‖) + (I_i^t − S K F_{W_i^t} P_i^t T)^⊤ D_i^t (I_i^t − S K F_{W_i^t} P_i^t T),   (8)

which can be interpreted as a modified optical flow equation with a TV-regularizer, where the data term is re-weighted by D_i^t. The intuition here is that the minimization favors the TV prior of sparse variation over the data term for untrustworthy pixels according to D_i^t, and puts more relative importance on following the data for reliable pixels. We opt for a strategy similar to [17] for solving this equation, and initialize the estimation of W_i^t with the result of a standard optical flow method [16], applied between H_i^t and an upsampled I_i^t at each iteration.

Experiments

We exhibit results with a MATLAB prototype implementation, and run experiments on a 16-core 2.4 GHz PC with 32GB RAM (see video results at http://hal.inria.fr/hal-00977755). Our current implementation is mainly mono-threaded, with the exception of the optical flow, which we launch in 10 separate threads. To initialize the algorithm, we first use a small C++/OpenGL program to render visibility maps from texture to image space, then initialize the texture map with a simple weighted average of the visible inputs. The visibility maps are also used to generate each projection matrix operator P_i^t. We use the Optical Flow package from Liu et al. [16] for per-iteration optical flow initialization, and the UNLocBOX package [6] for the texture re-estimation in the loop. We use a threshold on the relative norm of the objective function (6) as stopping criterion, and observe convergence in 30 to 70 iterations for a given λ. The execution time of the algorithm is in the range of 30 minutes to an hour per iteration depending on the dataset, number of views and number of frames. These figures are not a good indication of the final achievable performance, as many enhancements are possible, including making the flow and image update estimations massively parallel on a GPU, better inter-time flow bootstraps as suggested by [17], more compact data structures, and a C++ inner loop.

Parameter values. We set the Gaussian variances to σ_p = 0.25 and σ_k = 0.1, respectively for the projection weight and the PSF kernel K, for all datasets. Although these could be optimized alongside the other parameters, we observe low sensitivity to these parameters when set in the [0.1, 1] range. Higher values introduce over-blurring, while lower values tend to reveal the underlying discretization of the texture map (σ_p < 0.1) or of the input image (σ_k < 0.1). We also fix the convergence parameters to γ = 0.05 and λ = 5·10⁻⁴ for all experiments, using a second and third round of iterations with λ = 5·10⁻⁵ and λ = 5·10⁻⁶ to down-weight the TV-regularization and thus reveal higher-frequency detail. We set d(θ_q) = (1/C) e^{−s tan θ_q} as a faster approximation of a normal distribution over the angles of the perceived surface, and use C to normalize these weight contributions to 1 among all pixels that see a common texel x, to obtain homogeneous weights among pixels in the data term: Σ_{i, q = π_i^t ∘ τ(x)} d(θ_q) = 1. We use s = 7π/16 over all experiments. This weight is more conservative than the cos θ_q weight usually used for blending in multi-view texturing techniques [7], and yields improved results in our experiments, as it downgrades unreliable contributions from surface points at grazing angles.
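As a concrete reading of this weighting scheme, the sketch below computes the normalized per-pixel confidences for one texel; the function name, the clamp near π/2 and the sample angles are illustrative assumptions.

    import numpy as np

    def view_confidence(theta, s=7 * np.pi / 16):
        """Unnormalized confidence d(theta) = exp(-s * tan(theta)) for the
        angle between viewing direction and surface normal (a sketch)."""
        theta = np.clip(theta, 0.0, np.pi / 2 - 1e-3)  # guard against tan blow-up
        return np.exp(-s * np.tan(theta))

    # For one texel x, normalize over all pixels {i, q} observing it so the
    # weights sum to one, as required by the data term:
    thetas = np.deg2rad([10.0, 35.0, 70.0])   # hypothetical viewing angles
    w = view_confidence(thetas)
    w /= w.sum()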
Static Multi-View Comparison

We compare our model with the latest state-of-the-art multi-view texture super-resolution technique of Goldlücke et al. [10]. As the latter does not deal with temporal sequences, the comparison is performed on the common applicability domain, i.e. static images, as shown in Fig. 3. The authors provide a public dataset for three objects, BEETHOVEN, BUNNY and BIRD, and kindly provided additional data on request, so we could reproduce the experiment in the closest possible setup. This included a high resolution output of their algorithm for the viewpoint originally reported per dataset in [10], to which we compare our high resolution output. We use the same super-resolution ratio of 3× the input resolution for the texture and high resolution image domains. Respectively 108, 52 and 52 calibrated viewpoints were originally used at resolution 768×576. We have used identical views, and also use identical 3D models except for the BIRD dataset, for which we observed large reconstruction and silhouette reprojection artifacts on the model provided. In fairness we thus only provide crops in regions where the 3D model geometry is not significantly different. It can generally be observed that our outputs exhibit lower noise levels and fewer artifacts. This is particularly visible in the BUNNY dataset, in the ear region and the shadow region around the left eye. The BEETHOVEN dataset exhibits some visibility difficulties due to the face geometry and the presence of concave regions around the nose and hair, which generate artifacts for [10]. In contrast, our method is able to deal with these situations efficiently. A single texture domain cut is present on the nose, but the discontinuity is barely visible thanks to the inter-chart terms we introduced. More accurate details and sharper pattern borders can be observed on the BIRD wing and tail, notably in the feather textures.

Temporal Super-Resolution Validation

To evaluate our approach on the temporal aspect, we introduce three synchronized datasets, GOALKEEPER, BACKPACK and ACTOR, in Fig. 4. The datasets were acquired with three different setups and camera models so as to maximize testing variability. All 3D models were obtained using silhouette-based reconstruction techniques and thus yield largely imperfect models. GOALKEEPER consists of 21 calibrated viewpoints at 1024 × 1024, which we downsample to 512 × 512 for the purpose of evaluation. BACKPACK consists of 15 viewpoints of a person, with resolution 1624 × 1224. ACTOR consists of 11 viewpoints at resolution 1920 × 1080. The ACTOR dataset is arguably the most difficult one, with fewer views and higher noise levels both in the images and the reconstruction. We focus on small motions of the three subjects, and test the method for 2 to 7 frames. Significant improvements can be seen in the figure through temporal accumulation.

There are several difficulties in designing an experiment to quantify this improvement, such as the absence of ground truth data in texture space for real datasets. Synthetic datasets are less than ideal for image restoration and super-resolution problems: a significant conclusion can only be reached if the different sources of variability are correctly introduced and simulated: sensor noise, calibration error, local reconstruction errors, specularities, temporal misalignments. Instead, we focus here on showing the temporal improvement by running our algorithm on a 2× downsampled version of the GOALKEEPER dataset, and comparing our reprojected result with the higher resolution inputs using the mean squared error (MSE) metric. Fig. 5 shows the result of this experiment, with convergence curves from one frame (the static case) to three frames, and MSEs evaluated on the 21 input views. Several observations can be made from these curves. First, they illustrate the convergence of the iterations toward the high resolution ground truth. Second, the temporal improvement leveraged by our algorithm is validated in two forms: acceleration of the rate of convergence when using more temporal frames, and improvement of the final result quality over using only one temporal frame.

Discussion

We have presented a novel method to retrieve a single, coherent texture from several viewpoints and temporal frames of a deformable subject. The noiseless formation model introduced is linear from texture space to image space, and noise and regularization are handled within a Bayesian framework. We have demonstrated the usefulness of this approach with respect to the state of the art, and quantified the convergence and temporal improvement. The method opens several interesting research possibilities. First, more of the parameters and variability could be automatically learned, such as the projection parameters and the regularization weight. The framework proposed enables this, with adapted convergence algorithms. Second, the trade-off between using more views or more temporal frames could be further explored to understand how each modality contributes to the result. Third, longer-term resilience could be explored as an extension of this model.

On Mean Pose and Variability of 3D Deformable Models

Recent years have seen the emergence of many solutions for the capture of dynamic scenes, where a scene observed by several calibrated cameras is fully reconstructed from the acquired videos using multi-view stereo algorithms [24,12,1,20]. These techniques have many applications in media content production, interactive systems [2] and scene analysis [28], since they allow the recovery of both geometric and photometric information of objects' surfaces, as well as their shape and evolution over time. Since these temporal evolutions were initially reconstructed as a sequence of topologically inconsistent 3D models, significant research work has been done on full 4D modeling and analysis of geometrically time-consistent 3D sequences. In particular, several techniques propose to deform and match a template to either image data, or to intermediate 3D representations of the surface [25,17,9,26]. These methods allow the recovery of both shape and motion information. However, they usually do not consider the intrinsic dynamic properties of a surface.
These are either assumed, for instance through a kinematic structure (rigging) or through surface tension parameters, or are simply ignored. Hence, there is a large interest in better understanding the rigidity and motion properties of shapes, with the prospect of improving dynamic models, extracting more useful information, and better automation. In this work, we take the estimation a step further and investigate how to infer dynamic or statistical properties of shapes given temporal sequences. Recovering this information is still a largely open research topic, with only a few exploratory representations proposed for the dynamic characteristics of surfaces, e.g. [11,29].

We propose a novel inference framework for the analysis of complex object shapes in motion that learns local surface rigidity probabilities (i.e., deformations) and estimates a mean pose over a temporal sequence. Based on recent advances in surface tracking techniques, we formulate a generative model of 3D temporal sequences using a probabilistic framework, which conditions shape fitting over all frames on a simple set of intrinsic surface rigidity properties. Surface tracking and rigidity variables can then be obtained iteratively using Expectation-Maximization inference, by alternately minimizing two nested fixed-point iterations. Thus, our main contribution is a framework that allows the simultaneous tracking and inference of dynamic properties of object surfaces given temporal observations. We show how these properties contribute to a better understanding of surface motion and how they can be used for the dynamic analysis of 3D surface shapes through mean pose estimation and rigidity-based segmentation, while achieving competitive surface tracking.

The remainder of the paper is organized as follows. The next section discusses related work. Details on the mean pose inference model are given in Sect. 3. Section 4 presents various applications and experimental results. Section 5 concludes with a discussion of our contributions.

Related Work

The analysis of deformable surfaces captured by multi-video systems has gained a lot of interest during the last decade due to the rapid progression of computing and image sensing technologies. We focus here on works that relate to the dynamic properties of shapes.

Kinematic structures. Many popular tracking methods propose to rigidly constrain a model using an articulated structure, for instance a skeleton or a cage, which must be scaled and rigged to a 3D template, and optimally positioned through a sequence of models representing the observed subjects [4,30,17,19]. The template is usually deformed using a skinning technique, according to the optimized structure across the sequence [5]. Such kinematic structures provide intrinsic information on the associated shapes through their parameter evolutions (e.g. their averages can define a mean pose). These approaches require a priori knowledge of the observed shapes, such as the topology and the rigid parts, and cannot be applied to arbitrary object shapes. Moreover, global template deformation across time is subject to the loss of local details such as cloth wrinkles and folds.

Locally rigid structures. The literature also contains several methods that relax the constraint on the shape structure using looser rigidity priors. A body of work considers deformations that preserve local intrinsic surface properties, e.g. isometric deformations [21,8,22,23].
Such properties relate to local rigidities; for instance, in [31,32] local surface distortions are constrained, but they are usually known priors. While efficient for registering or matching surfaces, intrinsic surface properties are not necessarily sufficient to track complex shapes such as human bodies. In that case, several approaches introduce local deformation models to drive surface evolutions. For instance, in [9], the observed surface is treated as a piecewise-rigid body with locally rigid motions. We consider a similar model to represent surface deformations, which we use to learn local rigidities as well as mean poses along with the tracking. Interestingly, recent approaches in this category were also proposed to characterize local surface deformations. In [11], the authors propose a probabilistic framework for rigid tracking and segmentation of dynamic surfaces where the rigid kinematic structure is learned along time sequences. Our framework does not assume such a structure but instead learns local rigidities and mean poses. In [29], the authors model complex local deformation dynamics using linear dynamical systems by observing local curvature variations, using a shape index, and perform rigidity-based surface patch classification. The latter approach assumes that the surface alignment is given, in contrast to our proposed generative model, which simultaneously performs surface tracking and local rigidity estimation.

Shape Spaces. Following the work of Kendall [18], a number of works consider shape spaces that characterize the configurations of a given set of points, for instance the vertices of a mesh. This has been used in medical imaging to estimate mean shapes through Procrustes analysis, e.g. [16]. In this case, the shape of the object is the geometrical information that remains when the pose (i.e., similarities) is filtered out. Thus Procrustes distances can be used to measure shape similarities and to estimate shape averages with Fréchet means. We follow here a different strategy, where a shape space represents the poses of a single shape and where we estimate a mean pose instead of a mean shape. This relates to other works in this category that also consider shape spaces to model shape poses with mesh representations. They can either be learned, e.g. [3,15], or defined a priori, e.g. [27], and are used to constrain mesh deformations when creating realistic animations [3,27] or when estimating shape and poses from images [15]. While sharing similarities in the deformation model we consider, our objective is not only to recover meaningful shape poses but also to measure pose similarities and intrinsic shape properties. Unlike [3,15], we do not need a pose or shape database, nor the associated hypothesis of its representativeness. Moreover, our methodology specifically addresses robust temporal window integration.

We assume given a temporal sequence of 3D reconstructions, temporally incoherent meshes or point clouds, obtained using a multi-view reconstruction approach, e.g. [12,1,20]. We also assume that a template mesh model of the scene is available, e.g. a particular instance within the reconstructed sequence under consideration. The problem of local surface rigidity and mean pose analysis is then tackled through the simultaneous tracking and intrinsic parameter estimation of the template model. We embed intrinsic motion parameters (e.g. rigidities) in the model, which control the motion behavior of the object surface.
This implies that the estimation algorithm is necessarily performed over a sub-sequence of frames, as opposed to most existing surface tracking methods, which in effect implement tracking through iterated single-frame pose estimation. We first describe in detail the geometric model (§3.1), illustrated in Fig. 1, and its associated average deformation parameterization for the observed surface (§3.2). Second, we describe how this surface generates noisy measurements with an appropriate Bayesian generative model (§3.3). We then show how to perform estimation over the sequence through Expectation-Maximization (§3.4).

Mean Pose Inference Model

Shape Space Parameterization

To express the non-rigid deformability of shapes, while de-correlating the resolution of the deformation parameters from the mesh resolution, we opt for a patch-based parameterization of the surface similar to [9]. The reference mesh is partitioned into an overlapping set of patches, pre-computed by geodesic clustering of vertices. Each patch P_k is associated with a rigid transformation T_k^t ∈ SE(3) at every time t. The position x_{k,v} of a mesh vertex v as predicted by the transform of P_k can then be computed from its template position x⁰_v as follows:

  x_{k,v} = T_k(x⁰_v).   (1)

We thus define a pose of the shape space as the set of patch transforms T = {T_k}_{k∈K} that express a given mesh deformation. Note here that a pose in the shape space does not necessarily correspond to a proper geometric realization of the reference mesh; in practice, patch deformations are merged on the template to preserve mesh consistency.

Mean Pose

To retrieve the mean pose of a given sequence, we provide a definition suitable for the analysis of complex temporal mesh sequences. Following Fréchet's definition of a mean [13], we introduce the mean pose T̄ of a given set of poses {T^t}_{t∈T} over the time sequence T as the pose minimizing the sum of squared distances to all poses in the set:

  T̄ = arg min_T Σ_{t∈T} d²(T, T^t),   (2)

where d(·) is a distance that measures the similarity of two poses. This distance should evaluate the non-rigidity of the transformation between two poses of a shape and hence should be independent of any global pose. Such a distance is not easily defined in the non-Euclidean shape space spanned by the rigid motion parameters of the patches. However, using the Euclidean embedding provided by the mesh representation, we can define a proper metric based on the vertex positions. Inspired by the deformation energy proposed by Botsch et al. [7], our distance is expressed as an internal deformation energy between two poses. Let T^i and T^j be two poses of the model; the distance can be written as a sum of per-patch-pair squared distances:

  d²(T^i, T^j) = Σ_{(P_k, P_l) ∈ N} d²_kl(T^i, T^j),   (3)

with

  d²_kl(T^i, T^j) = Σ_{v ∈ P_k ∪ P_l} ‖T^i_{k−l}(x⁰_v) − T^j_{k−l}(x⁰_v)‖²,   (4)

where T^i_{k−l} = (T^i_l)⁻¹ ∘ T^i_k is the relative transformation between patches P_k and P_l for pose i, and N is the set of neighboring patch pairs on the surface. The distance sums, for every pair of neighboring patches of the deformable model, its rigid deviation from pose i to pose j. This deviation is given by the sum, over each vertex v belonging to the patch pair, of the discrepancy of the relative positions of the vertex as displaced by P_k and by P_l. It can be verified that d defines a distance, as it inherits this property from the L2 norm used between vertices.
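The following sketch evaluates this patch-pair distance for 4×4 homogeneous rigid transforms. The data layout (dicts of transforms, homogeneous vertex array) and all function names are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def rel_transform(T_k, T_l):
        """T_{k-l} = T_l^{-1} o T_k for 4x4 homogeneous rigid transforms."""
        return np.linalg.inv(T_l) @ T_k

    def pose_distance_sq(Ti, Tj, patches, neighbors, X0):
        """d^2(T^i, T^j) of Eqs. (3)-(4), summed per patch pair (a sketch).

        Ti, Tj    : dicts patch id -> 4x4 transform
        patches   : dict patch id -> array of vertex indices
        neighbors : iterable of neighboring patch-id pairs N
        X0        : (V, 4) template vertex positions, homogeneous
        """
        d2 = 0.0
        for k, l in neighbors:
            verts = np.union1d(patches[k], patches[l])
            Ri = rel_transform(Ti[k], Ti[l])     # relative transform, pose i
            Rj = rel_transform(Tj[k], Tj[l])     # relative transform, pose j
            diff = (X0[verts] @ Ri.T) - (X0[verts] @ Rj.T)
            d2 += (diff[:, :3] ** 2).sum()       # squared deviation per vertex
        return d2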
Generative Model

Expression (2) is useful to characterize the mean over a set of poses that is already known. Our goal, however, is to estimate this mean in the context where such poses are indirectly observed through a set of noisy and sparse 3D point clouds of the surface. Thus we cast the problem as the joint estimation of the mean pose and the fitting of the model to each set of observations. For our purposes, we assume the set of poses {T^t}_{t∈T} is defined for a set T corresponding to observations in a temporal sequence. The observed point clouds are noted Y = {Y^t}_{t∈T}, where Y^t = {y^t_o}_{o∈O^t} is the set of point coordinates y^t_o for an observation o among the set of observations O^t at time t. Note that this set O^t is in general different from V, as it is obtained from a 3D reconstruction or depth camera, without any direct correspondence to the deformable shape surface model defined earlier.

To express the noisy predictions of observations, we follow the principle of EM-ICP [14] by introducing a set of assignment variables k^t_o indicating, for each observation o, which patch this observation is assigned to. We are also interested in retrieving information about the variations of the rigid deformation with respect to the mean shape. To keep this information in its simplest form, we express in the generative model that each pair of patches (k, l) ∈ N is assigned a binary rigidity variable c_kl ∈ {0, 1}, which conditions the patch pair to be rigid or flexible accordingly. This variable is an intrinsic parameter attached to the original deformable model and is thus time-independent. We note the full set of rigidity variables C = {c_kl}_{(k,l)∈N}. This in turn will allow, during inference, the estimation of a rigid coupling probability for each patch pair (k, l). We express the generative model through the following joint probability distribution:

  p(T̄, T, Y, C, K, σ) = p(T̄) ∏_{t∈T} [ p(T^t | T̄, C) ∏_{o∈O^t} p(y^t_o | k^t_o, T^t, σ^t) ],   (5)

  p(y^t_o | k^t_o, T^t, σ^t) = N(y^t_o | T^t_{k^t_o}(x⁰_v), σ^t).   (6)

Pose constraining model. We constrain the fitted poses to be close to the mean pose, using the distance defined earlier (3). We embed the influence of the rigidity variables in this term, by computing two versions of the distance, biased by the rigidity variables C:

  p(T^t | T̄, C) ∝ exp( − Σ_{(k,l)∈N} d²_kl(T̄, T^t, c_kl) ),   (7)

where

  d²_kl(T̄, T^t, c_kl) = Σ_{v ∈ P_k ∪ P_l} β_kl(v, c_kl) ‖T̄_{k−l}(x⁰_v) − T^t_{k−l}(x⁰_v)‖²,   (8)

with β_kl(v, c_kl) a uniform function over all vertices of the patch pair if c_kl = 1, which encourages a common rigid behavior of the two patches, and a non-uniform function encouraging more elasticity when c_kl = 0:

  β_kl(v, 0) ∝ exp( − b_kl(v) / (η D) ),   (9)

where b_kl(v) is the distance between the vertex v and the border between P_k and P_l on the template, D is the average patch diameter and η is a global coefficient controlling the flexibility. β_kl(·, 0) has larger values on the border between the patches, which allows more flexibility while enforcing continuity between the patches. The coefficients β_kl(v, 0) are normalized such that Σ_{P_k ∪ P_l} β_kl(v, 0) = Σ_{P_k ∪ P_l} β_kl(v, 1), in order to make both modes equally competitive.

Mean model prior. In the absence of any prior, the mean pose is unconstrained and could theoretically have completely loose patches unrelated to each other.
To avoid this and give the mean pose a plausible deformation, we consider the following prior, which expresses that the intrinsic mean pose should not significantly deviate from the original reference pose (represented by the identity transform Id):

  p(T̄) ∝ exp(−d²(T̄, Id)) ∝ exp( − Σ_{(P_k, P_l)∈N} Σ_{v ∈ P_k ∪ P_l} ‖T̄_k(x⁰_v) − T̄_l(x⁰_v)‖² ).   (10)

Expectation-Maximization Inference

We apply Expectation-Maximization [10] to compute Maximum A Posteriori (MAP) estimates of the tracking and average shape parameters given noisy 3D measurements, using the joint probability described in (5), as described in [6]. The assignment variables K and the rigidity coupling variables C are treated as latent variables, which we group under the name Z = {K, C}. For the purpose of clarity, let us also rename all parameters to estimate as Θ = {T̄, T, σ}. Expectation-Maximization consists in iteratively maximizing the following auxiliary function Q given the knowledge of the previous parameter estimate Θ^m:

  Θ^{m+1} = arg max_Θ Q(Θ | Θ^m) = arg max_Θ Σ_Z p(Z | Y, Θ^m) ln p(Y, Z | Θ).   (11)

The E-step consists in computing the posterior distribution p(Z | Y, Θ^m) of the latent variables given the observations and the previous estimate. It can be noted, given the form of (5), that all latent variables are individually independent under this posterior according to the D-separation criterion [6], such that the posterior factorizes as:

  p(Z | Y, Θ^m) = ∏_{(k,l)∈N} p(c_kl | Y, Θ^m) ∏_{t∈T} ∏_{o∈O^t} p(k^t_o | Y, Θ^m),   (12)

where

  p(c_kl | Y, Θ^m) = a · exp( − Σ_{t∈T} d²_kl(T̄^m, T^{t,m}, c_kl) )   (13)

and

  p(k^t_o | Y, Θ^m) = b · N(y^t_o | T^{t,m}_{k^t_o}(x⁰_v), σ^{t,m}),   (14)

where a, b are normalization constants ensuring the respective distributions sum to 1, and v is the closest vertex on patch k. Equations (13) and (14) are the E-step updates that need to be computed at every iteration for every latent variable. (13) corresponds to a re-evaluation of the probabilities of rigid coupling between patches, based on the previous m-th estimates of the temporal and mean poses. (14) corresponds to the probability assignment table of time t's observation o to each patch in the model. This corresponds to the soft matching term commonly found in EM-ICP methods [14].

The M-step maximizes expression (11), which can be shown to factorize similarly to (5) and (12), into a sum of three independently maximizable groups of terms, leading to the following updates:

  T^{t,m+1} = arg min_{T^t} Σ_{(k,l)∈N} Σ_{c_kl} p(c_kl | Y, Θ^m) d²_kl(T̄^m, T^t, c_kl) + Σ_{o∈O^t} Σ_{k^t_o} p(k^t_o | Y, Θ^m) ‖y^t_o − T^t_{k^t_o}(x⁰_v)‖²,   (15)

  (σ^{t,m+1})² = (1/3) · [ Σ_{o∈O^t} Σ_{k^t_o} p(k^t_o | Y, Θ^m) ‖y^t_o − T^{t,m+1}_{k^t_o}(x⁰_v)‖² ] / [ Σ_{o∈O^t} Σ_{k^t_o} p(k^t_o | Y, Θ^m) ],   (16)

  T̄^{m+1} = arg min_{T̄} d²(T̄, Id) + Σ_{t∈T} Σ_{(k,l)∈N} Σ_{c_kl} p(c_kl | Y, Θ^m) d²_kl(T̄, T^{t,m+1}, c_kl).   (17)

Expression (15) corresponds to simultaneous updates of all patch transformations for a given time t, weighed by the E-step probabilities. (16) updates the per-time-frame noise parameter with an E-step-weighed contribution of each observation. (17) computes the mean pose, accounting for all poses in the time sequence. Note that, for ease of resolution, we decouple the estimation of T^{t,m+1} and T̄^{m+1}, which is why (17) uses the result T^{t,m+1}. We solve both systems with Gauss-Newton iterations, using a parametrization of the rigid transforms as a rotation matrix and a translation.
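To make the soft-assignment E-step (14) and the noise update (16) concrete, here is a minimal NumPy sketch of the EM-ICP component with the poses held fixed. The candidate-point layout, the outlier density and all names are illustrative assumptions.

    import numpy as np

    def em_icp_step(Y, centers, sigma, outlier_density=1e-4):
        """One E-step plus the noise M-step of Eqs. (14) and (16), a sketch.

        Y       : (O, 3) observed points at time t
        centers : (P, 3) predicted candidate positions T^t_k(x0_v), one per patch
        sigma   : current isotropic noise standard deviation
        """
        # E-step, Eq. (14): soft assignment of each observation to each patch
        d2 = ((Y[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (O, P)
        lik = np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2) ** 1.5
        lik = np.hstack([lik, np.full((len(Y), 1), outlier_density)])
        p = lik / lik.sum(axis=1, keepdims=True)   # normalization constant b
        p = p[:, :-1]   # drop the outlier class; outliers now weigh less
        # M-step, Eq. (16): re-estimate the isotropic noise variance
        sigma2 = (p * d2).sum() / (3.0 * p.sum())
        return p, np.sqrt(sigma2)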
Experiments

We evaluate the proposed generative model using 3D sequences reconstructed from real human performances captured by multi-view videos. We propose two datasets, GoalKeeper and Dancer, which provide two different actions and clothing situations with high resolution inputs. These were processed by extracting visual hull reconstructions, and two neutral-topology frames were selected to provide the template models after smoothing and simplifying the obtained meshes down to 5k vertices. Additionally, we also validate using two public datasets made available by the community. The Free [25] dataset consists of a photo-coherent mesh sequence of a dancer with approximately 135k vertices per frame, exhibiting particularly fast and difficult dancing motion. The Marker dataset [19] provides another type of challenging situation, with a two-person sequence of reconstructions with martial arts motions. It also provides markers on one of the persons, which we will use for quantitative evaluation. For both these public sequences, we use the provided templates, downsampled to 5k vertices.

In all visualizations, we render mesh poses by computing the vertex position x^t_v at time t as a linear blend of the per-patch positions x^t_{k,v} of expression (1), weighed by a set of Gaussian weights α_k(v) materializing the region of influence of patch P_k on the mesh. These weights are maximal at the center of mass of P_k, and their sum over all non-zero patch influences is normalized to 1 for a given vertex v:

  x^t_v = Σ_k α_k(v) x^t_{k,v}.   (18)
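A direct transcription of this blending, assuming the per-patch vertex predictions are stacked in an array (the layout and names are illustrative):

    import numpy as np

    def blend_vertices(X_patch, alpha):
        """Eq. (18): blend per-patch vertex predictions x^t_{k,v} with the
        normalized Gaussian influence weights alpha_k(v) (a sketch).

        X_patch : (K, V, 3) position of each vertex as predicted by each patch
        alpha   : (K, V) weights with alpha.sum(axis=0) == 1 per vertex
        """
        return np.einsum('kv,kvd->vd', alpha, X_patch)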
We visualize the rigidity coupling probabilities over the surface with heat-colored probabilities, by diffusing each probability over the vertices of influence of the patch pair to obtain a smooth rendering. We provide a supplemental video with the processed results for these datasets.

Tracking Evaluation

We first evaluate the tracking performance of the algorithm. Full sequences may be processed, but because of the motion of the subjects in the sequence, all poses of the sequence cannot be initialized with a single static pose, as this would surely be susceptible to local minima. We thus process the four datasets using a sliding window strategy for T, where processing starts with a single pose, and additional poses are then introduced in the time window after the previous window converges. We provide tracking results with sliding window size 20, which corresponds to approximately one second of video. We show the resulting poses estimated by our algorithm on the four datasets in Fig. 4, Fig. 5a and Fig. 5b. Runtime is approximately 15 seconds per time step on a recent workstation and can be further improved.

We also provide a comparison with the state-of-the-art method of Liu et al. [19] and with a purely patch-based strategy [9] on the Marker dataset. We reproduce the results of [9] by neutralizing the mean updates and rigid coupling updates in our method, which corresponds to removing these terms from the energy and closely mimics [9]. Note that [19] is a kinematic tracking strategy, where both subjects are rigged to a kinematic skeleton providing a strong, fixed and dataset-specific rigidity prior. On the other hand, [9] only uses patch rigidity and inter-patch elasticity priors, which are weaker than those of [19] and our method. The Marker dataset provides sparse marker positions, at which we estimate the geometric positional error with respect to the surface. To this purpose, we match the closest vertex on the template model provided, and follow it with the different methods, computing geometric errors in position with respect to the corresponding marker's position in these frames. The average errors are shown in Table 1.

We also provide a temporal error graph for our method and [9] in Fig. 2. Table 1 shows that our method achieves tracking performance comparable to state-of-the-art surface tracking techniques. The slightly higher error with respect to [19] is not unexpected, given that they use a stronger kinematic skeleton prior. Regarding [9], the graph and table show a small advantage in error for our method along the sequence, as well as a smaller variance of the error, showing the better constraining provided by our framework. The graph also shows significantly higher error values with [9] than with our method around frames 60, 250, 325, 390 and 460. These error peaks are attributable to difficult segments of the input sequence where [9] loses track of limbs (see Fig. 3a and Fig. 3b) while our method does not. The high error values around frame 390 are due to ambiguous input meshes where the head of the second character (not seen in Fig. 3b) is out of the field of view. Around this frame, our method still outperforms [9], which misaligns an arm (see Fig. 3b). These results substantiate the stronger robustness of our method over [9]. Regarding limitations, the model may fall into local minima when the noise level of the inputs is too high, as with all patch-based methods, but this was not a strong limitation on these datasets. As the model favours rigidity and isometric surface deformations, the surface sometimes overfolds in non-rigid sections (as occasionally seen in the video), which we will address in future work.

Fig. 3. Input mesh (left), tracked mesh with [9] (middle) and with our method (right).

Mean Pose and Rigidity Estimation

Fig. 5a shows tracking results with color-coded rigidity coupling probabilities, with sliding window size 20. The method accurately reports instantaneous rigidity deviation, such as when the subject folds his elbows or shoulders. Blue regions correspond to regions of the mesh that have no non-rigid distortion with respect to the estimated mean pose. Fig. 5b shows estimates of mean poses for full sequences, colored with the estimated rigidity coupling probabilities over the full sequence (no sliding window). It can be noted that the method accurately reports where the most common deviations occur. The supplemental video shows mean pose sequences for several sliding window sizes. We observe a temporal smoothing of the initial deformation: fast deformation is filtered out. This effect is stronger with wide windows. We interpret this phenomenon as follows: when the temporal window slides along the sequence, it produces a mean pose sequence analogous to the convolution of the estimated pose sequence with a gate function of the same size as the window. This process can be seen as a low-pass filtering of the sequence poses. We also observe that the mean pose is not affected by the global rigid motion of the shape (noticeable with the Dancer dataset). This is an expected consequence of using a pose distance in (2) that is invariant under global rigid transforms.

Conclusions

We present a novel methodology for the analysis of complex object shapes in motion observed by multiple cameras. In particular, we propose a generative model of 3D temporal sequences using a probabilistic framework that simultaneously learns local surface rigidity probabilities and estimates a mean pose over a temporal sequence.
Hence, rigidity-based surface segmentation can be achieved using local deformation properties, while motion synthesis or surface alignment for compression or morphing applications can be achieved using a mean pose as a sequence keyframe or a cluster prototype. Our model can also perform surface tracking with state-of-the-art performance, and requires neither an a priori rigid (kinematic) structure nor prior model learning from a database. Surface tracking and rigidity variable probabilities are obtained by solving an Expectation-Maximization inference problem which alternately minimizes two nested fixed-point iterations. To our knowledge, this is the first model that achieves simultaneous estimation of mean pose, local rigidity, and surface tracking. Experimental results on real datasets show the numerous potential applications of the proposed framework for the complex shape analysis of 3D sequences.

A.3

Edmond Boyer

Abstract

Recovering 3D shape motion using visual information is an important problem with many applications in computer vision and computer graphics, among other domains. Most existing approaches rely on surface-based strategies, where surface models are fit to visual surface observations. While numerically plausible, this paradigm ignores the fact that the observed surfaces often delimit volumetric shapes, whose deformations are constrained by the volume inside the shape. Consequently, surface-based strategies can fail when the observations define several feasible surfaces, whereas volumetric considerations are more restrictive with respect to the admissible solutions. In this work, we investigate a novel volumetric shape parametrization to track shapes over temporal sequences. In contrast to Eulerian grid discretizations of the observation space, such as voxels, we consider general shape tessellations yielding more convenient cell decompositions, in particular the Centroidal Voronoi Tessellation. With this shape representation, we devise a tracking method that exploits volumetric information, both for the data term evaluating observation conformity, and for expressing deformation constraints that enforce prior assumptions on motion. Experiments on several datasets demonstrate similar or improved precisions over state-of-the-art methods, as well as improved robustness, a critical issue when tracking sequentially over time frames.

Introduction

The capture of shapes and their evolutions has been a very active research topic for the last decade, motivated by many applications for which dynamic shape models are useful. This ability is of interest for several fields of activity, such as computer-assisted design, virtual reality, entertainment, medical imaging, and gesture and sports analysis. Ever since the initial promises of free-viewpoint video [9], many models of shape capture have been explored. Initially examined as a per-time reconstruction problem, e.g. [24,14], temporal integration and tracking of the shape in the time domain were then considered, e.g. [11,3]. In all cases, however, surface-based models, such as meshes, have been largely dominant for representing and tracking shapes. This is due to several factors, primarily to the fact that visual observations generally lie on the shape surface, but also to the popularity of surface-based representations in the vision and graphics communities and the availability of efficient tools to manipulate them.
Yet it has been observed that certain forms of volume-preserving deformations may be beneficial to model shape deformations in graphics applications such as [1,5], or to enforce volumetric constraints, nevertheless based on surface tessellations, in dynamic shape capture [10]. While the idea has led to interesting preliminary results, a full volumetric treatment of dynamic shape capture is still to be investigated and its benefits evaluated. Among the expected advantages of this approach are its ability to express volume conservation as well as its ability to enforce local volumetric deformation constraints. In this paper, we address this problem with a twofold contribution: first, we propose a dedicated volumetric deformation model based on Centroidal Voronoi Tessellations (CVT) [13], which integrates the latest advances of recent tracking models; second, we propose an evaluation of the method based on a hybrid multi-camera and marker-based capture dataset [21].

Previous Work

A large set of techniques exists to capture moving shapes as a time-independent sequence of meshes representing the object's surface [24,14]. For this process, many volumetric parameterizations have also been devised, based on regular or hierarchical Eulerian grid discretizations [30,20], although they are mainly dedicated to single-time occupancy representation. Some approaches have taken these representations a step further, by examining short-term motion characteristics of the shape using regular volume grids [33,17,32], yet they do not retrieve long-term motion information of the sequence, nor do they embed specific motion models in the volume.

Various methods attempt to leverage time consistency to retrieve temporally coherent shape models, in the vast majority of cases manipulating a surface model. While in some cases this process is purely data-driven, by aligning surfaces across frames using sparse matching and stereo refinement [29], in most cases a deformation prior is used to drive the method toward the solution within a plausible state space. In its weakest form and without further assumptions, pure spatio-temporal continuity of the observed surface can be used [16]. At the other end of the spectrum, a full kinematic rigging of a template model can be assumed, where the surface is expressed from kinematic parameters using e.g. the linear blend skinning deformation framework [23], popularized for 3D animation in the computer graphics community. These parameters can then be estimated to best fit the model reprojections to image and silhouette data [34,3,18,21]. For tracking more general subjects and situations, more generic surface deformation frameworks have been explored to bypass the rigging stage and allow for more general non-rigid motion components. Central to these methods is the idea of enforcing a cohesive behavior of the surface, such as locally rigid behavior [15], Laplacian deformation [11,10,6], inextensibility [25], or elasticity between piecewise-rigid surface patches [7,6].

Among the existing surface capture methods, only a handful use volumetric representations. Some methods have proposed reparameterizing temporally aligned sequences using a volumetric cage embedding [26,31], inspired by the animation community [27,19]. However, no volume deformation model strong enough to solve the full tracking problem has yet emerged from these works.
Among the methods successfully using volume-preserving constraints, most use a Delaunay tetrahedrization of reconstructed template surface points [11,10,6] to enforce as-rigid-as-possible or Laplacian deformation constraints common to 3D animation techniques [1,28]. It can be noted that the proposed decomposition is not fully volumetric, as it only involves tessellating surfaces. In contrast, we propose a fully volumetric treatment of the problem, using an intrinsically volumetric tessellation, deformation model and data terms rewarding volume alignment.

Approach Overview

We formulate the tracking problem as the MAP estimation of multiple poses of a given geometric template model, non-rigidly adjusted to a set of temporally inconsistent shape measurements. In multi-view camera systems, these measurements typically take the form of time-independent 3D mesh reconstructions obtained from a visual hull or multi-view stereo method, which is what we assume here. To efficiently make use of volumetric information, we need to express volume conservation and overlapping constraints from the template to the observed shape volumes. For representational and computational efficiency, we thus need a proper discretization of the interior of the shape.

While uniformly located in the volume, regular grids are inherently anisotropic and biased toward the axes of the template basis. Furthermore, their intersection with the object surface yields boundary voxels of irregular shape (Fig. 1(a)). On the other hand, the Constrained Delaunay tetrahedrization of the boundary vertices, previously used in [11,10,6], yields a set of highly non-uniform tetrahedra spanning the whole interior of the volume, whose cardinality is not controlled but imposed by the surface discretization (Fig. 1(b)). Taking instead the Voronoi diagram of a uniform set of samples of the interior volume decorrelates the cardinality of the decomposition from the geometry, but still yields cells of irregular shape (Fig. 1(c)). Rejection sampling may statistically impose additional regularity, but this would only come with asymptotic guarantees attainable at high computational cost. We therefore propose to use a CVT (Fig. 1(d)), informally a Voronoi tessellation where the samples are iteratively repositioned to coincide with the centers of mass of their cells, which achieves the desired properties [13]: isotropy, rotational invariance, uniform cells of compact and regular form factor, regular intersection of boundary cells and surface, independent cardinality and practical computation. After introducing how to define and compute CVTs in the context of our approach (§2), we show how this discretization can be used to define adequate volumetric deformation (§3) and observation (§4) models in the form of Bayesian priors and likelihoods. The MAP estimation proposed on this basis in §5 is evaluated in §6.

Centroidal Voronoi Tessellation (CVT)

Definitions. To tessellate the shape, we manipulate Voronoi diagrams that are restricted, or clipped, to its inner volume. More formally, let S be a set of 3D point samples of a volumetric domain Ω, either the template to be fitted or the observed shape for our purposes. The Clipped Voronoi diagram of S in Ω is defined as the intersection of the Voronoi diagram of S in R³ with the domain Ω.
Thus the Voronoi cell Ω_s of a sample s is the set of points of Ω that are closer to s than to any other sample:

  Ω_s = { x ∈ Ω | ∀s′ ∈ S∖{s}, ‖x − x_s‖ < ‖x − x_{s′}‖ },   (1)

where the cells Ω_s are mutually exclusive and define a partition of Ω:

  ∪_{s∈S} Ω̄_s = Ω̄,   (2)

where Ω̄_s and Ω̄ denote topological set closures. If the border ∂Ω of Ω is a polyhedral surface, then each cell also has a polyhedral border. A Centroidal Voronoi Tessellation is a clipped Voronoi tessellation of Ω for which each sample s is the center of mass of its (clipped) Voronoi cell Ω_s. CVT cells are of regular size and shape, and also define a regular connectivity of the sample set, two samples being connected if and only if their respective CVT cells share a face. This connectivity thus encodes the shape volume and topology, a property we will use in the following sections.

Computing a CVT. It has been proven [13] that local minima of the energy

  E(S) = Σ_{s∈S} ∫_{x∈Ω_s} ‖x − x_s‖² dV   (3)

define CVTs on Ω. Thus a CVT can be obtained by iteratively estimating the sample locations that minimize (3), with a quasi-Newton method such as the L-BFGS algorithm [22], for a sample population of desired size and uniform initial position.
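The paper minimizes (3) with L-BFGS; the sketch below instead uses the classical Lloyd fixed-point iteration on a Monte-Carlo discretization of Ω, which converges toward the same kind of critical points. The dense interior sampling and all names are illustrative assumptions.

    import numpy as np
    from scipy.spatial import cKDTree

    def cvt_lloyd(volume_pts, n_sites, n_iter=50, rng=None):
        """Lloyd relaxation toward a CVT of a volume (a sketch).

        volume_pts : (M, 3) dense interior samples of the domain Omega,
                     standing in for the exact clipped-cell integrals of Eq. (3)
        n_sites    : number of Voronoi sites |S|
        """
        rng = rng if rng is not None else np.random.default_rng(0)
        sites = volume_pts[rng.choice(len(volume_pts), n_sites, replace=False)]
        for _ in range(n_iter):
            owner = cKDTree(sites).query(volume_pts)[1]  # nearest site per point
            for s in range(n_sites):                     # move sites to centroids
                members = volume_pts[owner == s]
                if len(members):
                    sites[s] = members.mean(axis=0)
        return sites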
Deformation Model

Principle

Once a regular, isotropic volumetric decomposition of the template shape is obtained, we can use it as a fundamental building block to build a volumetric deformation model of the shape, which will constrain the estimation. Botsch et al. [5] show that a non-linear elastic deformation energy can be devised using small volumetric deformations, typically voxels. While such a reasoning could be advantageously transposed to the proposed CVT discretization, eliminating the grid orientation bias, doing so comes at a high computational cost. Cagniart et al. [7] show that the complexity of the deformation model is best decorrelated from the geometry itself, in their case by using rigid surface patches in lieu of the original surface vertices. Recent works have shown a way to improve quality and temporal stability using a similar surface decomposition [2], by inferring a mean pose and sequence rigidity behavior. We extend the latter ideas to the case of a complete volumetric treatment of the deformation problem. In so doing, we cluster together groups of CVT cells in rigid volume patches P_k using a k-medoids algorithm. Note that such patches can lie either on the surface or completely inside the template shape's volume, which is of particular interest to express non-rigid deformation of the model while preserving the local volume and averting over-compression or dilation. We associate to each patch a rigid transform T_k^t ∈ SE(3) at every time t. Each mesh vertex or inner sample is indiscriminately labeled as a point q, and its position x_{k,q} can be written as a transformed version of its template position x⁰_q, once the rigid transform of its patch is applied:

  x_{k,q} = T_k(x⁰_q).   (4)

This makes it possible to define a pose of the shape as the set of patch transforms T = {T_k}_{k∈K}, which expresses a given volumetric shape deformation.

Formulation

To prevent the patch poses of the shape from being arbitrary, we constrain the shape to be close to a sequence rest pose T̄ and to follow constant rigidity characteristics C over the sequence. These rigidity characteristics are defined for neighboring patch pairs in the volume (P_k, P_l), as a binary-valued property c_kl whose value in {0, 1} reflects whether the relative motion between patches P_k and P_l is respectively articulated or rigid. To define the rest pose T̄, we rely on the following measure [5,2] of the relative deformation energy between two arbitrary poses T^i and T^j of the template shape, given a rigidity configuration C:

  E(T^i, T^j | C) = Σ_{(P_k, P_l)∈N} E_kl(T^i, T^j | c_kl),   (5)

with

  E_kl(T^i, T^j | c_kl) = Σ_{q ∈ P_k ∪ P_l} β_kl(q, c_kl) ‖T^i_{k−l}(x⁰_q) − T^j_{k−l}(x⁰_q)‖²,

where T^i_{k−l} = (T^i_l)⁻¹ ∘ T^i_k is the relative transformation between patches P_k and P_l for pose i, and N is the set of neighboring patch pairs. The energy measures the rigid deviation from pose i to pose j of every neighboring patch pair, as the sum over each sample q of the pair of the discrepancy in the relative positions of the sample as displaced by P_k on one hand, and by P_l on the other. If the two patches are rigidly linked (c_kl = 1), then the discrepancy of all samples of the pair should be equally penalized; therefore β_kl(q, 1) is chosen to be constant over all samples q of the pair. On the other hand, if the patch pair is articulated (c_kl = 0), only samples that lie near the boundary between the two patch volumes should be penalized for deviating relative positions: those samples materialize the locus of the inter-patch articulation, whereas samples that are not close to the inter-patch boundary can move more freely. We express this using β_kl(q, 0) ∝ exp(−b_kl(q) / (η D)), where b_kl(q) is the distance between the sample q and the boundary between P_k and P_l on the template, D is the average patch diameter and η is a global coefficient controlling the flexibility.

Resulting Pose Likelihoods

The relative pose energy described in (5) makes it possible to express the expected behavior of the estimated models as a prior and likelihoods over the poses:

  p(T̄) ∝ exp(−E(T̄, Id)),   (6)

  p(T^t | T̄, C) ∝ exp(−E(T^t, T̄ | C)).   (7)

p(T̄) is the prior over the rest pose, which should minimize the relative displacement energy to the default template pose (transformed by the identity Id). This term ensures a minimal cohesion of the volume patches of the rest pose model, as it enforces mutual patch elasticity. p(T^t | T̄, C) is the likelihood of a given tracked pose at time t, which should minimize the relative displacement energy with respect to the sequence rest pose T̄ given a current rigidity state C. This ensures the inter-patch cohesion of pose T^t as well as a general proximity to the rest pose, which stabilizes the resulting pose estimates. In turn, the rest pose will be simultaneously estimated as the pose which minimizes the relative deformation energy to all poses in the sequence.

Observation Model

Probabilistic Shape Fitting

The observed shape Ω^t at time t is described by the point cloud Y^t = {y^t_o}_{o∈O^t}. To describe how a deformed template can explain the observed shape, we propose a generative data term following EM-ICP, expressing how a given deformed template point predicts the position of an observed shape point o. A set of cluster association variables k^t_o is therefore instantiated for every observed point in time, indicating which cluster generates this point. For simplicity, each observation o is associated to its cluster k^t_o via the best candidate q of cluster k^t_o.
The best candidate is chosen as the closest compatible sample in the cluster during the iterative resolution. We consider that each cluster $P_k$ generates observations perturbed by a Gaussian noise with isotropic variance $\sigma^2$:

$$p(y_o^t \mid k_o^t, T_k^t, \sigma) = \mathcal{N}(y_o^t \mid T_k^t(x_q^0), \sigma). \tag{8}$$

Note that $o$ indiscriminately refers to surface or volume sample points of the observed shape, as the principles we describe here apply to both, with the restriction that observed surface points only associate to surface template points, and volume samples are associated to volume samples of the template. As often proposed in ICP methods, we additionally filter associations using a compatibility test, described in the following sections. The compatibility test is specific to the nature (surface or volume) of the observed point and is detailed in the next paragraphs. If there is no compatible candidate in the cluster, then we set the conditional probability density (8) to zero. We deal with outliers by introducing an outlier class among the values of $k$, which generates observations with a uniform probability density over the scene.

Compatibility Tests

Compatibility tests are useful for pruning the association graph of obvious mismatches that would perturb and otherwise slow down convergence. We use two compatibility tests, respectively designed for surface fitting and volumetric fitting.

Surface Observations. While surface points may be matched based on position only, obvious orientation incompatibilities can be filtered by detecting large discrepancies between the normal of the deformed template candidate point $v$ and the normal of surface observation vertex $o$:

$$\tilde{n}_o^t \cdot R_k^t(\tilde{n}_v^0) \geq \cos(\theta_{\max}), \tag{9}$$

where $\tilde{n}_o^t$ is the surface normal of observation $o$, $\tilde{n}_v^0$ is the surface normal of the template at vertex $v$, $R_k^t$ is the rotation component of $T_k^t$, and $\theta_{\max}$ is an arbitrary threshold.

Volume Observations. We introduce a compatibility test specific to volumetric fitting, by assuming that the distance of inner shape points to the shape's surface remains approximately constant under deformation. Let us define the distance between an inner shape point $x$ and the shape's surface by:

$$d(x, \partial\Omega) = \min_{p \in \partial\Omega} d(x, p). \tag{10}$$

In our observation model, this hypothesis can be leveraged by the following compatibility test: a volumetric observation $o$ can be associated to a template point $s$ only if

$$d(x_s^0, \partial\Omega^0) = d(y_o^t, \partial\Omega^t). \tag{11}$$

To account for small deviations from this assumption, which might occur under e.g. slight compression or dilation of the perceived shape, we relax the equality constraint up to a precision $\epsilon$, where $\epsilon$ accounts for the distance-to-surface inconsistency caused by the discrete sampling of the template. Using the triangular inequality, it can be shown that this error is bounded by the maximum cell radius over the set of the template's CVT cells. This leads to the following compatibility test:

$$d(y_o^t, \partial\Omega^t) - \epsilon \;\leq\; d(x_s^0, \partial\Omega^0) \;\leq\; d(y_o^t, \partial\Omega^t) + \epsilon. \tag{12}$$

For the particular case of silhouette-based observed shapes, it can be noted that reconstruction algorithms based on the visual hull inherently provide inflated estimates of the true shape. This phenomenon results in an overestimation of the distance to the surface when computed on the reconstructed shape. Hence, we only impose a volumetric inclusion constraint instead of a complete depth correspondence, i.e. we only keep the right inequality from expression (12) in this case:

$$d(x_s^0, \partial\Omega^0) \;\leq\; d(y_o^t, \partial\Omega^t) + \epsilon. \tag{13}$$
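The two compatibility tests reduce to a few lines of code; the sketch below assumes unit normals, precomputed distances to the surface, and an arbitrary value for $\theta_{\max}$, mirroring Eqs. (9), (12) and (13).

```python
import numpy as np

def surface_compatible(n_obs, R_k, n_tpl, theta_max_deg=45.0):
    """Normal test of Eq. (9): reject pairs whose normals disagree too much.

    n_obs, n_tpl: unit normals of the observation and template vertex;
    R_k: rotation of the patch transform.  theta_max_deg is an assumption.
    """
    return float(n_obs @ (R_k @ n_tpl)) >= np.cos(np.radians(theta_max_deg))

def volume_compatible(d_tpl, d_obs, eps, silhouette_based=True):
    """Distance-to-surface band of Eq. (12), or its one-sided variant (13).

    d_tpl, d_obs: distances of the template point and observation to their
    respective shape surfaces; eps bounds the CVT discretization error.
    """
    if silhouette_based:   # visual-hull inputs are inflated: keep one side
        return d_tpl <= d_obs + eps
    return d_obs - eps <= d_tpl <= d_obs + eps
```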
Contrary to the surface compatibility test, this test does not depend on the pose parameters $\mathcal{T}$; consequently it is robust to convergence failures of the inference algorithms.

Inference

The model proposed with (6), (7) and (8) defines a joint likelihood over the rest pose, the rigidity configuration, all temporal poses, the observed points and their selection variables, and the prediction noise $\sigma$:

$$p(\bar{\mathcal{T}}) \prod_{t \in T} \Big( p(\mathcal{T}^t \mid \bar{\mathcal{T}}, C) \prod_{o \in O^t} p(y_o^t \mid k_o^t, \mathcal{T}^t, \sigma_t) \Big). \tag{14}$$

It can be shown that this likelihood can be maximized using an Expectation-Maximization algorithm [2,12,4], yielding maximum a posteriori estimates of the pose parameters $\mathcal{T}$, $\bar{\mathcal{T}}$ and prediction noise $\sigma$. This results in an algorithm iterating between two steps. Intuitively, the E-step computes all observation cluster assignment probabilities over $K$, based on the distance to the predicted template positions under the currently estimated poses. Compatibility rules are applied at this stage. Probabilities over inter-cluster rigid links $C$ are also estimated, based on the current deformation energy of the poses. The M-step updates the rest pose $\bar{\mathcal{T}}$, all poses $\mathcal{T}$, and the prediction noise $\sigma$, using the assignment and rigid link probabilities to weigh individual observation contributions to each cluster transform estimate.

Experiments

Datasets

We validate our framework using four synchronized and calibrated multiple-camera datasets, labeled GOALKEEPER-13, DANCER [2], MARKER [21], and the newly proposed BALLET, whose contents reflect a wide variety of shape tracking situations. DANCER is a long sequence (1362 frames, 2048x2048 resolution, 48 viewpoints) showing slow and medium speed dance moves, and thus offers a good opportunity to verify tracking stability. GOALKEEPER-13 (2048x2048, 150 frames, 48 viewpoints) illustrates a specific soccer goalie plunging move, of particular interest when the goalie is on the floor, where the reconstruction data is of poor quality due to grazing camera angles and challenges the tracking performance. Both previous sequences otherwise have very high quality and detailed reconstructions. BALLET is a more challenging full HD (1920x1080) sequence we have acquired with fewer cameras (9 viewpoints and 500 frames) and thus coarser reconstructions, consisting of a number of ballet moves with various levels of difficulty, including fast moves, spinning and crossing legs. MARKER (1296x972, 500 frames, 12 viewpoints) is a sequence with two actors performing karate moves, illustrating the robustness to several subjects, which was captured simultaneously with a set of sparse markers offering a reference and comparison basis with [21]. The reconstructions are of coarser quality due to relatively noisy inputs and occasional jumps where actor heads get clipped.

[Table 1: standard deviation of the tracked volume (L) on MARKER and BALLET, comparing Cagniart et al. 2010 [8] and other methods against ours; the numerical values are not recoverable here.]

Experimental Protocol

We first select a template among the better frames with correct reconstruction topology, then compute a CVT as described above with 5'000 samples per person (10'000 for the two-person MARKER sequence) and 250 clusters per person (500 for MARKER), as illustrated in Fig. 3. Each shape reconstruction is obtained from a silhouette-based reconstruction algorithm [14] and CVTs are also extracted (1 minute/frame). We then apply the algorithm described using a sliding window strategy over a 10-frame window, where the rest position is computed for each time slice to locally stabilize the estimation.
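Going back to the Inference paragraph above, the E-step responsibilities for the Gaussian-plus-uniform-outlier mixture of Eq. (8) can be sketched as follows. Here `pred` holds the best candidate point of each cluster under the current poses, incompatible pairs are assumed to have been masked beforehand, and the outlier weight and scene volume are illustrative assumptions.

```python
import numpy as np

def e_step(Y, pred, sigma, p_out=0.05, vol_scene=1.0):
    """Soft assignment probabilities of observations to clusters (E-step).

    Y: (O,3) observed points; pred: (K,3) best-candidate template points per
    cluster, already deformed by the current poses; rows of `pred` set to
    +inf encode incompatible clusters (their likelihood vanishes).
    """
    d2 = ((Y[:, None, :] - pred[None, :, :]) ** 2).sum(-1)          # (O,K)
    lik = np.exp(-0.5 * d2 / sigma**2) / (2*np.pi*sigma**2)**1.5    # Eq. (8)
    # Extra column: uniform outlier class over the scene volume.
    lik = np.concatenate([lik, np.full((len(Y), 1), p_out / vol_scene)], 1)
    return lik / lik.sum(axis=1, keepdims=True)                     # responsibilities
```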
The sequences are initially solved for frames in the vicinity of the template pose, then the solution is classically propagated to future windows as initialization. Convergence has been achieved for all frames, typically in a few hundred iterations, with a convergence time per frame on the order of a minute to a few minutes. The provided supplemental video illustrates the results obtained with these sequences.

Quantitative Analysis

Volume Stability. We verify here the assertion that the volumetric parameterization of the tracking produces poses with stable volumes. As we use silhouette-based reconstructions, it is not relevant to compare the estimated volumes with the observed shape volumes. Instead, we compute the standard deviation of this volume in Table 1 and provide a comparison of these results with the best runs of two state-of-the-art methods [8,2]. This comparison supports the initial intuition of volumetric stability in the sequence, as the standard deviation of the estimated volumes is significantly lower for our method.

Silhouette reprojection error. We evaluate the silhouette reprojection error as the symmetric pixel difference between the reprojected model and the silhouette projection of the reconstructions used as observed input shapes. We then express this value as a percentage of the area of the silhouette region in each view. Table 2 shows favorable comparisons to state-of-the-art methods [8,2]. In particular, the mean error and maximum error achieved by our method over the sequences are significantly lower, and exhibit lower variance. Additionally, we test the influence of the volumetric data term by comparing the results with a run where it is disabled (surface data term only), all other parameters being equal. Interestingly, the method still achieves a better mean error than the state of the art, but with less stable behavior.

Marker reference error. We use the MARKER sequence provided by Liu et al. [21] to sparsely compare the output quality of our method against state-of-the-art methods. This comparison is illustrated in Table 3 and plotted against time in the sequence in Fig. 2. Again we illustrate turning off the surface data term, in which case the estimation is slightly worse. The proposed method performs consistently better than comparable surface-based state of the art. Note that Liu et al. fully rig a skeleton to the template, which provides slightly better mean results than ours thanks to this stronger assumption. On the other hand, our method is generic and can be applied to arbitrary objects.

Qualitative Assessment

To illustrate the benefits of the approach, in particular where the improvements can be seen, we provide excerpts of the datasets (see the supplemental video for further examples). Fig. 4 shows the improvements of the method over surface-based methods [8,2] in poses of strong contortion, such as an elbow or knee bending gesture. Because of their use of an elastic energy on the surface, these methods tend to dilute error compensation over a smooth and extended neighborhood of the folds, yielding curvy elbow and knee shapes in the tracking. A usual side effect here is the local decrease of the shape volume in the vicinity of the fold. In contrast, our method being volumetrically constrained, it penalizes such local volume changes and prefers to focus the bending energy on fewer volumetric patch articulations, yielding more consistent and accurate pose estimates.
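For reference, the silhouette reprojection metric described above is straightforward to reproduce; this sketch assumes binary masks of equal size for one view.

```python
import numpy as np

def silhouette_error(rendered_mask, observed_mask):
    """Symmetric silhouette pixel difference, expressed as a percentage of
    the observed silhouette area in one view (boolean arrays assumed)."""
    diff = np.logical_xor(rendered_mask, observed_mask).sum()
    return 100.0 * diff / observed_mask.sum()
```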
The GOALKEEPER-13 dataset illustrates the increased robustness in the presence of highly perturbed reconstructions thanks to the volumetric constraints, where other methods yield random results. The reconstructed visual hull input is very ambiguous on the shown frame because of the presence of concavities, and the strong topology mismatch creates errors for surface-based methods.

Discussion

We have presented a novel volumetric approach to shape tracking based on CVT volume decomposition. The approach leverages the desirable properties of CVTs to build suitable volumetric deformation constraints, while formulating a discrete volume assignment scheme as the data term through the uniform cell centroid coverage of the volume. Currently, the volumetric clustering proposed for volumes yields uniform sizes over the entire template shape, which can be a limitation for parts that are thinner than the cluster size, such as arms. We will address this in future work with adaptive cluster densities, ensuring the volumetric prior is equally efficient regardless of thickness. Numerical analysis nevertheless shows significant improvement over state-of-the-art tracking methods, both in terms of tracking error over the surface and silhouette reprojection. The framework is also shown to conform to the initial intuition in being more stable in terms of the errors and volume measures of the fitted template shapes. We believe the approach paves the way for the proper use of volumetric priors in any shape tracking framework.

Inria Grenoble Rhône-Alpes, France
Laboratoire Jean Kuntzmann, Université Grenoble Alpes, France
{jinlong.yang,jean-sebastien.franco,franck.hetroy,stefanie.wuhrer}@inria.fr

Abstract. Estimating 3D human body shape in motion from a sequence of unstructured oriented 3D point clouds is important for many applications. We propose the first automatic method to solve this problem that works in the presence of loose clothing. The problem is formulated as an optimization problem that solves for identity and posture parameters in a shape space capturing likely body shape variations. The automation is achieved by leveraging a recent robust pose detection method [1]. To account for clothing, we take advantage of motion cues by encouraging the estimated body shape to be inside the observations. The method is evaluated on a new benchmark containing different subjects, motions, and clothing styles that allows to quantitatively measure the accuracy of body shape estimates. Furthermore, we compare our results to existing methods that require manual input and demonstrate that results of similar visual quality can be obtained.

Introduction

Estimating 3D human body shape in motion is important for applications ranging from virtual change rooms to security. While it is currently possible to effectively track the surface of the clothing of dressed humans in motion [2] or to accurately track body shape and posture of humans dressed in tight clothing [3], it remains impossible to automatically estimate the 3D body shape in motion for humans captured in loose clothing. Given an input motion sequence of raw 3D meshes or oriented point clouds (with unknown correspondence information) showing a dressed person, the goal of this work is to estimate the body shape and motion of this person. Existing techniques to solve this problem are either not designed to work in the presence of loose clothing [4,5] or require manual initialization for the pose [6,7], which limits their use in general scenarios.
The reason is that wide clothing leads to strong variations of the acquired surface that are challenging to handle automatically. We propose an automatic framework that allows to estimate the human body shape and motion and that is robust to the presence of loose clothing. Existing methods that estimate human body shape based on an input motion sequence of 3D meshes or oriented point clouds use as prior a shape space that models human body shape variations caused by different identities and postures. Such a prior allows to reduce the search space to likely body shapes and postures. Prior works fall into two lines of work. On the one hand, there are human body shape estimation methods specifically designed to work in the presence of loose clothing [6,7]. These techniques take advantage of the fact that observations of a dressed human in motion provide important cues about the underlying body shape, as different parts of the clothing are close to the body shape in different frames. However, these methods require manually placed markers to initialize the posture. On the other hand, there are human body shape estimation methods designed to robustly and automatically compute the shape and posture estimates over time [4,5]. However, these methods use strong priors of the true human body shape to track the posture over time and to fit the shape to the input point cloud, and may therefore fail in the presence of loose clothing. In this work, we combine the advantages of these two lines of work by proposing an automatic framework that is designed for body shape estimation under loose clothing. Like previous works, our method restricts the shape estimate to likely body shapes and postures, as defined by a shape space. We use a shape space that models variations caused by different identities and variations caused by different postures as linear factors [8]. This simple model allows for the development of an efficient fitting approach. To develop an automatic method, we employ a robust pose detection method that accounts for different identities [1] and use the detected pose to guide our model fitting. To account for clothing, we take advantage of motion cues by encouraging the estimated body shape to be located inside the acquired observation at each frame. This constraint, which is expressed as a simple energy that is optimized over all input frames jointly, allows to account for clothing without the need to explicitly detect skin regions on all frames, as is the case for previous methods [7,9]. To the best of our knowledge, existing datasets in this research area do not provide 3D sequences of both body shape as ground truth and dressed scans for estimation. Therefore, visual quality is the only evaluation choice. To quantitatively evaluate our framework and allow for future comparisons, we propose the first dataset consisting of synchronized acquisitions of dense unstructured geometric motion data and sparse motion capture data of 6 subjects with 3 clothing styles (tight, layered, wide) under 3 representative motions, where the capture in tight clothing serves as ground truth body shape. The main contributions of this work are the following.
- An automatic approach to estimate 3D human body shape in motion in the presence of loose clothing.
- A new benchmark consisting of 6 subjects captured in 3 motions and 3 clothing styles each that allows to quantitatively compare human body shape estimates.
Related work

Many works estimate human posture without aiming to estimate body shape, or track a known body shape over time. As our goal is to simultaneously estimate body shape and motion automatically and in the presence of loose clothing, we will focus our discussion on this scenario.

Statistical shape spaces. To model human body shape variations caused by different identities, postures, and motions, statistical shape spaces are commonly used. These shape spaces represent a single frame of a motion sequence using a low-dimensional parameter space that typically models shape variations caused by different identities and caused by different postures using separate sets of parameters. Such shape spaces can be used as prior when the goal is to predict a likely body shape under loose clothing. Anguelov et al. [10] proposed a statistical shape space called SCAPE that combines an identity model computed by performing principal component analysis (PCA) on a population of 3D models in standard posture with a posture model computed by analyzing near-rigid body parts corresponding to bones. This model performs statistics on triangle transformations, which allows to model non-rigid deformations caused by posture changes. Achieving this accuracy requires solving an optimization problem to reconstruct a 3D mesh from its representation in shape space. To improve the accuracy of the SCAPE space, Chen et al. [11] propose to combine the SCAPE model with localized multilinear models for each body part. To model the correlation of the shape changes caused by identity and posture changes, Hasler et al. [12] perform PCA on a rotation-invariant encoding of the model's triangles. These models may be used as priors when estimating human body shape in motion, but none of them allow to efficiently reconstruct a 3D human model from the shape space. To speed up the reconstruction time from the SCAPE representation, Jain et al. [13] propose a simplified SCAPE model, denoted by S-SCAPE in the following, that computes the body shape by performing PCA on the vertex coordinates of a training set in standard posture and combines this with linear blend skinning (LBS) to model posture changes. Any posture variations present in the training data cause posture variation to be modeled in identity space, which is known to cause counter-intuitive deformations [8]. To remedy this, recently proposed shape spaces start by normalizing the posture of the training data before performing statistics, and model shape changes caused by different factors such as identity and posture as multilinear factors [14,8,15]. We use the normalized S-SCAPE model [8] in this work; however, any of these shape spaces could be used within our framework. Recently, Pons-Moll et al. [16] proposed a statistical model that captures fine-scale dynamic shape variation of the naked body shape. We do not model dynamic geometry in this work, as detailed shape changes are typically not observable under loose clothing.

Estimation of static body shape under clothing. To estimate human body shape based on a static acquisition in loose clothing and in arbitrary posture, the following two approaches have been proposed. Balan et al. [9] use a SCAPE model to estimate the body shape under clothing based on a set of calibrated multi-view images. This work is evaluated on a static dataset of different subjects captured in different postures and clothing styles.
Our evaluation on 3D motion sequences of different subjects captured in different motions and clothing styles is inspired by this work. Hasler et al. [17] use a rotation-invariant encoding to estimate the body shape under clothing based on a 3D input scan. While this method leads to accurate results, it cannot easily be extended to motion sequences, as identity and posture parameters are not separated in this encoding. Both of these methods require manual input for posture initialization. In this work, we propose an automatic method to estimate body shape in motion.

Estimation of body shape in motion. The static techniques have been extended to motion sequences with the help of shape spaces that separate shape changes caused by identity and posture. Several methods have been proposed to fit a SCAPE or S-SCAPE model to Kinect data by fixing the parameters controlling identity over the sequence [4,5]. These methods are not designed to work with clothing, and it is assumed that only tight clothing is present. Two more recent methods are designed to account for the presence of clothing. The key idea of these methods is to take advantage of temporal motion cues to obtain a better identity estimate than would be possible based on a single frame. Our method also takes advantage of motion cues. Wuhrer et al. [6] use a shape space that learns local information around each vertex to estimate human body shape for a 3D motion sequence. The final identity estimate is obtained by averaging the identity estimates over all frames. While this shape space leads to results of high quality, the fitting is computationally expensive, as the reconstruction of a 3D model from shape space requires solving an optimization problem. Our method uses a simpler shape space while preserving a similar level of accuracy by using an S-SCAPE model that prenormalizes the training shapes with the help of localized information. Neophytou and Hilton [7] propose a faster method based on a shape space that models identity and posture as linear factors and learns shape variations on a posture-normalized training database. To constrain the estimate to reliable regions, the method detects areas that are close to the body surface. In contrast, our method constrains the estimate to be located inside the observed clothing at every input frame, which results in an optimization problem that does not require such a detection. Both of these methods require manual input for posture initialization on the first frame. Additionally, a temporal alignment is required by Neophytou and Hilton. Computing temporal alignments is a difficult problem, and manual annotation is tedious when considering larger sets of motion sequences. In contrast, our method is fully automatic and addresses both aspects.

S-SCAPE model

In this work, we use the S-SCAPE model as prior for human body shape changes caused by different identities and postures. While we choose this shape space, any shape space that models identity and posture as multilinear factors could be used [14,15]. Although such a simple shape space does not accurately model correlated shape changes, such as muscle bulging, it allows to effectively separate the different variations and can be fitted efficiently to input scans. This section briefly reviews the S-SCAPE model introduced by Jain et al. [13] that allows to separate the influence of parameters controlling identity and parameters controlling posture of a human body shape.
In the following, we denote by $\beta$ and $\Theta$ the parameter vectors that influence shape changes caused by identity and posture changes, respectively. In this work, we use the publicly available posture-normalized S-SCAPE model [8], where each training shape was normalized with the help of localized coordinates [18]. In the following, let $N_v$ denote the number of vertices of the S-SCAPE model, let $s(\beta, \Theta) \in \mathbb{R}^{3N_v}$ denote the vector containing the vertex coordinates of identity $\beta$ in posture $\Theta$, and let $\tilde{s}(\beta, \Theta) \in \mathbb{R}^{4N_v}$ denote the vector containing the corresponding homogeneous vertex coordinates. For the fixed posture $\Theta_0$ that was used to train the identity space, S-SCAPE models the shape change caused by identity using a PCA model as

$$\tilde{s}(\beta, \Theta_0) = \tilde{A}\beta + \tilde{\mu}, \tag{1}$$

where $\tilde{\mu} \in \mathbb{R}^{4N_v}$ contains the homogeneous coordinates of the mean body shape, $\tilde{A} \in \mathbb{R}^{4N_v \times d_{id}}$ is the matrix found by PCA, and $d_{id}$ is the dimensionality of the identity shape space. For a fixed identity $\beta_0$, S-SCAPE models the shape change caused by posture using LBS as

$$s_i(\beta_0, \Theta) = \sum_{j=1}^{N_b} \omega_{ij}\, T_j(\Theta)\, \tilde{s}_i(\beta_0, \Theta_0), \tag{2}$$

where $s_i$ and $\tilde{s}_i$ denote the standard and homogeneous coordinate vectors of the $i$-th vertex of $s$, $N_b$ denotes the number of bones used for LBS, $T_j(\Theta) \in \mathbb{R}^{3\times 4}$ denotes the transformation matrix applied to the $j$-th bone, and $\omega_{ij}$ denotes the rigging weight binding the $i$-th vertex to the $j$-th bone. Combining Eq. 1 and 2 in matrix notation leads to

$$s(\beta, \Theta) = T(\Theta)\tilde{A}\beta + T(\Theta)\tilde{\mu}, \tag{3}$$

where $T(\Theta) \in \mathbb{R}^{3N_v \times 4N_v}$ is a sparse matrix containing the per-vertex transformations. Using this notation, it is easy to see that S-SCAPE is linear in both $\beta$ and $T(\Theta)$, which allows for a simple optimization w.r.t. $\beta$ and $\Theta$.

Estimating model parameters for a motion sequence

We start by providing an overview of the proposed method. Fig. 1 shows the different parts of the algorithm visually. Given as input a trained S-SCAPE model and a motion sequence consisting of $N_f$ frames $F_i$ represented by triangle meshes with unknown correspondence, we aim to compute a single parameter vector $\beta$ controlling the shape of the identity (as the identity of the person is fixed during motion) along with $N_f$ parameter vectors $\Theta_i$ controlling the postures in each frame, such that $s(\beta, \Theta_i)$ is close to $F_i$. To fit the S-SCAPE model to a single frame $F$, we aim to minimize

$$E(F, \beta, \Theta) = \omega_{lnd} E_{lnd}(F, \beta, \Theta) + \omega_{data} E_{data}(F, \beta, \Theta) + \omega_{cloth} E_{cloth}(F, \beta, \Theta) \tag{4}$$

w.r.t. $\beta$ and $\Theta$, subject to constraints that keep $\beta$ in the learned probability distribution of parameter values. Here, $\omega_{lnd}$, $\omega_{data}$, and $\omega_{cloth}$ are weights that trade off the influence of the different energy terms. The energy $E_{lnd}$ measures the distance between a sparse set of provided landmarks, which correspond to distinctive positions on the human body, and their corresponding locations on $s(\beta, \Theta)$. The provided landmarks are computed automatically in the following. The energy $E_{data}$ measures the distance between $s(\beta, \Theta)$ and $F$ using a nearest neighbor cost. The energy $E_{cloth}$ is designed to account for loose clothing by encouraging $s(\beta, \Theta)$ to be located inside the observation $F$. For a motion sequence of $N_f$ frames, our goal is then to minimize

$$E\big(F_{1:N_f}, \beta, \Theta_{1:N_f}\big) = \sum_{i=1}^{N_f} E(F_i, \beta, \Theta_i) \tag{5}$$

w.r.t. $\beta$ and $\Theta_{1:N_f}$, subject to constraints that keep $\beta$ in the learned probability distribution of parameter values. Here, $F_{1:N_f} = \{F_1, \ldots, F_{N_f}\}$ is the set of frames and $\Theta_{1:N_f} = \{\Theta_1, \ldots, \Theta_{N_f}\}$ is the set of posture parameters.
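Since Eq. (3) is linear in both $\beta$ and $T(\Theta)$, evaluating the model reduces to a sparse matrix product. The sketch below is a minimal illustration with assumed array shapes; `assemble_T` builds the block-diagonal per-vertex transforms from the bone transforms and rigging weights, and is not the authors' code.

```python
import numpy as np
from scipy.sparse import lil_matrix

def assemble_T(bone_T, weights, Nv):
    """Build the sparse T(Theta) of Eq. (3) from per-bone 3x4 transforms
    T_j(Theta) and rigging weights w_ij (shapes assumed: bone_T list of
    (3,4) arrays, weights (Nv, Nb))."""
    T = lil_matrix((3 * Nv, 4 * Nv))
    for i in range(Nv):
        # per-vertex blended transform, as in linear blend skinning (Eq. 2)
        Ti = sum(weights[i, j] * bone_T[j] for j in range(weights.shape[1]))
        T[3*i:3*i+3, 4*i:4*i+4] = Ti
    return T.tocsr()

def s_scape(beta, T_theta, A_tilde, mu_tilde):
    """Evaluate Eq. (3): s(beta, Theta) = T(Theta) (A~ beta + mu~).

    A_tilde: (4*Nv, d_id) PCA basis in homogeneous coordinates;
    mu_tilde: (4*Nv,) mean shape; T_theta: sparse (3*Nv, 4*Nv)."""
    return T_theta @ (A_tilde @ beta + mu_tilde)
```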
The energy $E_{cloth}$ allows to take advantage of motion cues in this formulation, as it encourages the body shape to lie inside all observed frames. In the following sections, we detail the prior that is used to constrain $\beta$ as well as the different energy terms. Optimizing Eq. 5 w.r.t. all parameters jointly results in a high-dimensional optimization problem that is inefficient to solve and prone to get stuck in undesirable local minima. After introducing all energy terms, we discuss how this problem can be divided into smaller problems that can be solved in order, thereby allowing to find a good minimum in practice.

Prior model for $\beta$

A prior model is used to ensure that the body shape stays within the learned shape space that represents plausible human shapes. The identity shape space is learned using PCA, and has zero mean and standard deviation $\sigma_i$ along the $i$-th principal component. Similarly to previous work [9], we do not penalize values of $\beta$ that stay within $3\sigma_i$ of the mean, to avoid introducing a bias towards the mean shape. However, rather than penalizing a larger distance from the mean, we constrain the solution to lie inside the hyperbox $\pm 3\sigma_i$ using a constrained optimization framework. This constraint can be handled by standard constrained optimizers since the hyperbox is axis-aligned, and using this hard constraint removes the need to appropriately weigh a prior energy w.r.t. the other energy terms.

Landmark energy

The landmark energy helps to guide the solution towards the desired local minimum with the help of distinctive anatomical landmarks. This energy is especially important during the early stages of the optimization, as it allows to find a good initialization for the identity and posture parameters. In the following, we consider the use of $N_{lnd}$ landmarks and assume without loss of generality that the vertices corresponding to landmarks are the first $N_{lnd}$ vertices of $s$. The landmark term is defined as

$$E_{lnd}(F, \beta, \Theta) = \sum_{i=1}^{N_{lnd}} \| s_i(\beta, \Theta) - l_i(F) \|^2, \tag{6}$$

where $l_i(F)$ denotes the $i$-th landmark of frame $F$, $s_i(\beta, \Theta)$ denotes the vertex corresponding to the $i$-th landmark of $s(\beta, \Theta)$, and $\|\cdot\|$ denotes the $\ell_2$ norm. The landmarks $l_i(F)$ are computed automatically with the help of the state-of-the-art Stitched Puppet [1], which allows to robustly fit a human body model to a single scan using a particle-based optimization. Specifically, we manually select once a set of vertex indices to be used as landmarks on the Stitched Puppet model, which is then fixed for all experiments. To fit the Stitched Puppet to a single frame, randomly distributed particles are used to avoid getting stuck in undesirable local minima. We fit the Stitched Puppet model to frame $F$, and report the 3D positions of the pre-selected indices after fitting as landmarks $l_i(F)$. While the Stitched Puppet aims to fit the body shape and posture of $F$, only the coordinates $l_i(F)$ are used by our framework. Note that our method does not require accurate $l_i(F)$, since the $l_i(F)$ are only used to initialize the optimization. Using many particles on each frame of a motion sequence is inefficient. Furthermore, since the Stitched Puppet is trained on a database of minimally dressed subjects, using many particles to fit to a frame in wide clothing may lead to overfitting problems. This is illustrated in Fig. 2. To remedy this, we choose to use a relatively small number of particles, which is set to 30.
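A hedged sketch of the constrained fit described above: the landmark energy (6) is minimized over $\beta$ inside the $\pm 3\sigma_i$ hyperbox with a box-constrained quasi-Newton solver (the paper uses Matlab L-BFGS-B with analytic gradients; scipy with numeric gradients is used here for brevity). `s_of_beta` is an assumed wrapper around Eq. (3) for a fixed posture.

```python
import numpy as np
from scipy.optimize import minimize

def fit_identity(landmarks, lnd_idx, s_of_beta, sigmas, beta0):
    """Minimize the landmark energy (6) over beta inside the +/-3 sigma_i
    hyperbox.  s_of_beta(beta) -> (Nv,3) vertices for the fixed current
    posture (an illustrative wrapper, not part of the published model)."""
    def energy(beta):
        V = s_of_beta(beta)
        return ((V[lnd_idx] - landmarks) ** 2).sum()
    # Axis-aligned hyperbox: a hard constraint instead of a prior energy.
    box = [(-3 * s, 3 * s) for s in sigmas]
    res = minimize(energy, beta0, method="L-BFGS-B", bounds=box)
    return res.x
```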
Starting at the second frame, we initialize the particle optimization to the result of the previous frame to guide the optimization towards the desired optimum.

Data energy

The data energy pulls the S-SCAPE model towards the observation $F$ using a nearest neighbor term. This energy, which unlike the landmark energy considers all vertices of $s$, is crucial to fit the identity and posture of $s$ to the input $F$, as

$$E_{data}(F, \beta, \Theta) = \sum_{i=1}^{N_v} \delta_{NN} \, \| s_i(\beta, \Theta) - NN(s_i(\beta, \Theta), F) \|^2, \tag{7}$$

where $N_v$ denotes the number of vertices of $s$ and $NN(s_i(\beta, \Theta), F)$ denotes the nearest neighbour of vertex $s_i(\beta, \Theta)$ on $F$. To remove the influence of outliers and reduce the possibility of nearest neighbour mismatching, we use a binary weight $\delta_{NN}$ that is set to one if the distance between $s_i$ and its nearest neighbor on $F$ is below 200mm and the angle between their outer normal vectors is below 60°, and to zero otherwise.

Clothing energy

The clothing energy is designed to encourage the predicted body shape $s$ to be located entirely inside the observation $F$. This energy is particularly important when considering motion sequences acquired with loose clothing. In such cases, merely using $E_{lnd}$ and $E_{data}$ leads to results that overestimate the circumferences of the body shape, because $\beta$ is estimated to fit to $F$ rather than to fit inside of $F$, see Fig. 3. To remedy this, we define the clothing energy as

$$E_{cloth}(F, \beta, \Theta) = \sum_{i=1}^{N_v} \delta_{out}\, \delta_{NN}\, \| s_i(\beta, \Theta) - NN(s_i(\beta, \Theta), F) \|^2 + \omega_r \|\beta - \beta_0\|^2, \tag{8}$$

where $\delta_{out}$ is used to identify vertices of $s$ located outside of $F$. This is achieved by setting $\delta_{out}$ to one if the angle between the outer normal of $NN(s_i(\beta, \Theta), F)$ and the vector $s_i(\beta, \Theta) - NN(s_i(\beta, \Theta), F)$ is below 90°, and to zero otherwise. Furthermore, $\omega_r$ is a weight used for the regularization term, and $\beta_0$ is an initialization of the identity parameters used to constrain $\beta$. When observing a human body dressed in loose clothing in motion, different frames can provide valuable cues about the true body shape. The energy $E_{cloth}$ is designed to exploit motion cues when optimizing it w.r.t. all available observations $F_i$. This allows to account for clothing using a simple optimization, without the need to find skin and non-skin regions as in previous work [9,19,7]. The regularization $\|\beta - \beta_0\|^2$ used in Eq. 8 is required to avoid excessive thinning of the limbs due to small misalignments in posture. Fig. 3 shows the influence of $E_{cloth}$ on the result of a walking sequence in layered clothing. The left side shows overlays of the input and the result for $\omega_{cloth} = 0$ and $\omega_{cloth} = 1$. Note that while circumferences are overestimated when $\omega_{cloth} = 0$, a body shape located inside the input frame is found for $\omega_{cloth} = 1$. The comparison to the ground truth body shape computed as discussed in Sec. 6 is visualized in the middle and on the right of Fig. 3, and shows that $E_{cloth}$ leads to a significant improvement of the accuracy of $\beta$.

Optimization schedule

Minimizing $E(F_{1:N_f}, \beta, \Theta_{1:N_f})$ defined in Eq. 5 over all $N_f$ frames w.r.t. $\beta$ and $\Theta_i$ jointly is not feasible when considering motion sequences containing hundreds of frames, as this is a high-dimensional optimization problem. To solve this problem without getting stuck in undesirable local minima, we optimize three smaller problems in order.

Initial identity estimation. We start by computing an initial estimate $\beta_0$ based on the first $N_k$ frames of the sequence by optimizing $E(F_{1:N_k}, \beta, \Theta_{1:N_k})$ w.r.t. $\beta$ and $\Theta_i$.
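The per-vertex filters $\delta_{NN}$ and $\delta_{out}$ of Eqs. (7) and (8) can be implemented with a k-d tree; the sketch below assumes model and frame points in millimetres with unit outer normals, and is an illustration rather than the authors' code.

```python
import numpy as np
from scipy.spatial import cKDTree

def data_and_cloth_terms(V, Vn, F, Fn, d_max=200.0, ang_max=60.0):
    """Per-vertex contributions to E_data (7) and E_cloth (8), without the
    regularizer.

    V, Vn: model vertices and outer unit normals; F, Fn: observed frame
    points and normals (distances in mm, matching the 200mm threshold)."""
    d, j = cKDTree(F).query(V)
    cos_n = (Vn * Fn[j]).sum(1)
    # delta_NN: close enough and normals within ang_max degrees
    delta_nn = (d < d_max) & (cos_n > np.cos(np.radians(ang_max)))
    # delta_out: the vector from NN to the vertex points along the observed
    # outer normal, i.e. the angle between them is below 90 degrees
    to_v = V - F[j]
    delta_out = (to_v * Fn[j]).sum(1) > 0.0
    e_data = (delta_nn * d**2).sum()
    e_cloth = ((delta_out & delta_nn) * d**2).sum()
    return e_data, e_cloth
```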
For increased efficiency, we start by computing optimal $\beta_i$ and $\Theta_i$ for each frame using Eq. 4, by alternating the optimization of $\Theta_i$ for fixed $\beta_i$ with the optimization of $\beta_i$ for fixed $\Theta_i$. This is repeated for $N_{it}$ iterations. Temporal consistency is achieved by initializing $\Theta_{i+1}$ as $\Theta_i$ and $\beta_{i+1}$ as $\beta_i$, starting at the second frame. As it suffices for the identity parameters to roughly estimate the true body shape at this stage, we set $\omega_{cloth} = 0$. In the first iterations, $E_{lnd}$ is essential to guide the fitting towards the correct local optimum, while in later iterations $E_{data}$ gains in importance. We therefore set $\omega_{data} = 1 - \omega_{lnd}$ and initialize $\omega_{lnd}$ to one. We linearly reduce $\omega_{lnd}$ to zero in the last two iterations. We then initialize the posture parameters to the computed $\Theta_i$, and the identity parameters to the mean of the computed $\beta_i$, and iteratively minimize $E(F_{1:N_k}, \beta, \Theta_{1:N_k})$ w.r.t. $\Theta_{1:N_k}$ and $\beta$. This leads to stable estimates for $\Theta_{1:N_k}$ and an initial estimate of the identity parameters, which we denote by $\beta_0$ in the following.

Posture estimation. During the next stage of our framework, we compute the posture parameters $\Theta_{N_k+1:N_f}$ for all remaining frames by sequentially minimizing Eq. 4 w.r.t. $\Theta_i$. As before, $\Theta_{i+1}$ is initialized to the result of $\Theta_i$. As the identity parameters are not accurate at this stage, we set $\omega_{cloth} = 0$. For each frame, the energy is optimized $N_{it}$ times while reducing the influence of $\omega_{lnd}$ in each iteration, using the same weight schedule as before. This results in posture parameters $\Theta_i$ for each frame.

Identity refinement. In a final step, we refine the identity parameters to be located inside all observed frames $F_{1:N_f}$. To this end, we initialize the identity parameters to $\beta_0$, fix all posture parameters to the computed $\Theta_i$, and minimize $E(F_{1:N_f}, \beta, \Theta_{1:N_f})$ w.r.t. $\beta$. As the landmarks and observations are already fitted adequately, we set $\omega_{lnd} = \omega_{data} = 0$ at this stage of the optimization.

Implementation details

The S-SCAPE model used in this work consists of $N_v = 6449$ vertices, and uses $d_{id} = 100$ parameters to control identity and $d_{pose} = 30$ parameters to control posture by rotating the $N_b = 15$ bones. The bones, posture parameters, and rigging weights are set as in the published model [8]. For the Stitched Puppet, we use 60 particles for the first frame, and 30 particles for subsequent frames. We use a total of $N_{lnd} = 14$ landmarks that have been shown sufficient for the initialization of posture fitting [6], and that are located at the forehead, shoulders, elbows, wrists, knees, toes, heels, and abdomen. Fig. 1 shows the chosen landmarks on the Stitched Puppet model. During the optimization, we set $N_{it} = 6$ and $N_k = 25$. The optimization w.r.t. $\beta$ uses analytic gradients, and we use Matlab L-BFGS-B to optimize the energy. The setting of the regularization weight $\omega_r$ depends on the clothing style. The looser the clothing, the smaller $\omega_r$, as this allows for more corrections of the identity parameters. In our experiments, we use $\omega_r = 1$ for all the sequences with layered and wide clothing in our dataset.

Evaluation

Dataset

This section introduces the new dataset we acquired to allow quantitative evaluation of human body shape estimation from dynamic data. The dataset consists of synchronized acquisitions of dense unstructured geometric motion data and sparse motion capture (MoCap) data of 6 subjects (3 female and 3 male) captured in 3 different motions and 3 clothing styles each.
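The three-stage schedule can be summarized as a short driver loop. This is a structural sketch only: `fit_frame` and `refine_identity` are assumed wrappers around the energies (4) and (5) and are expected to handle initialization, and the exact $\omega_{lnd}$ decay (one, then linearly to zero over the last two iterations) is only approximated.

```python
import numpy as np

def track_sequence(frames, fit_frame, refine_identity, N_k=25, N_it=6):
    """Illustrative skeleton of the three-stage optimization schedule.

    fit_frame(F, beta, theta, w_lnd, w_data, w_cloth) -> (beta, theta) and
    refine_identity(frames, thetas, beta0) -> beta are assumed wrappers."""
    beta, theta = None, None          # wrappers assumed to handle None init
    betas, thetas = [], []
    for F in frames[:N_k]:            # 1. initial identity estimation
        for it in range(N_it):
            # w_lnd starts at 1 and reaches 0 for the last two iterations
            w_lnd = max(0.0, 1.0 - it / (N_it - 2))
            beta, theta = fit_frame(F, beta, theta, w_lnd, 1 - w_lnd, 0.0)
        betas.append(beta); thetas.append(theta)
    beta0 = np.mean(betas, axis=0)    # initial identity estimate
    for F in frames[N_k:]:            # 2. posture estimation, identity fixed
        for it in range(N_it):
            w_lnd = max(0.0, 1.0 - it / (N_it - 2))
            _, theta = fit_frame(F, beta0, theta, w_lnd, 1 - w_lnd, 0.0)
        thetas.append(theta)
    # 3. identity refinement: w_lnd = w_data = 0, clothing term only
    return refine_identity(frames, thetas, beta0), thetas
```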
The geometric motion data are sequences of meshes obtained by applying a visual hull reconstruction to a 68-color-camera (4M pixels) system at 30FPS. The basic motions that were captured are walking, rotating the body, and pulling the knees up. The captured clothing styles are very tight, layered (long-sleeved layered clothing on the upper body), and wide (wide pants for men and a dress for women). The body shapes of the 6 subjects vary significantly. Fig. 4 shows some frames of the database. To evaluate algorithms using this dataset, we can compare the body shapes estimated under loose clothing with the tight clothing baseline. The comparison is done per vertex on the two body shapes under the same normalized posture. Cumulative plots are used to show the results.

Evaluation of posture and shape fitting

We applied our method to all sequences in the database. For one sequence of a female subject captured while rotating the body in wide clothing, the Stitched Puppet fails to find the correct posture, which leads to a failure case of our method (see Fig. 2). We exclude this sequence from the following evaluation. To evaluate the accuracy of the posture parameters $\Theta$, we compare the 3D locations of a sparse set of landmarks captured using a MoCap system with the corresponding model vertices of our estimate. This evaluation is performed in very tight clothing, as no accurate MoCap markers are available for the remaining clothing styles. Fig. 5 summarizes the per-marker errors over the walking sequences of all subjects. The results show that most of the estimated landmarks are within 35mm of the ground truth and that our method does not suffer from drift for long sequences. As the markers on the Stitched Puppet and the MoCap markers were placed by non-experts, the landmark placement is not fully repeatable, and errors of up to 35mm are considered fairly accurate. To evaluate the accuracy of the identity parameters $\beta$, we use for each subject the walking sequence captured in very tight clothing to establish a ground truth identity $\beta_0$ by applying our shape estimation method. Applying our method to sequences in looser clothing styles of the same subject leads to identity parameters $\beta$, whose accuracy can be evaluated by comparing the 3D geometry of $s(\beta_0, \Theta_0)$ and $s(\beta, \Theta_0)$ for a standard posture $\Theta_0$. Fig. 6 summarizes the per-vertex errors over all motion sequences captured in layered and wide clothing, respectively. The left side shows the cumulative errors, and the right side shows the color-coded mean per-vertex error. The color coding is visualized on the mean identity of the training data. The result shows that our method is robust to loose clothing, with more than 50% of all vertices having less than 10mm error for both layered and wide clothing. The right side shows that, as expected, larger errors occur in areas where the shape variability across different identities is high.

[Fig. 7: overlay of input data and our result, shown for the knees-up, rotate-body, and walk motions in layered and wide clothing.]

Fig. 7 shows some qualitative results for all three types of motions and two clothing styles. Note that accurate body shape estimates are obtained for all frames. Consider the frame that shows a female subject performing a rotating motion in layered clothing. Computing a posture or shape estimate based on this frame is extremely challenging, as the geometry of the layered cloth locally resembles the geometry of an arm, and as large portions of the body shape are occluded.
Our method successfully leverages temporal consistency and motion cues to find reliable posture and body shape estimates.

Comparative evaluation

As we do not have results on motion sequences with ground truth for existing methods, this section presents visual comparisons, shown in Fig. 8. We compare to Wuhrer et al. [6] on the dancer sequence [20] presented in their work. Note that unlike the results of Wuhrer et al., our shape estimate does not suffer from unrealistic bending at the legs, even in the presence of wide clothing. Furthermore, we compare to Neophytou and Hilton [7] on the swing sequence [21] presented in their work. Note that we obtain results of similar visual quality without the need for manual initializations and pre-aligned motion sequences. In summary, we present the first fully automatic method for body shape and motion estimation, and show that this method achieves state-of-the-art results.

[Fig. 8: comparisons to Wuhrer et al. [6] and Neophytou and Hilton [7]; per comparison, from left to right: input, result of prior work, our result.]

Conclusion

We presented an approach to automatically estimate the human body shape under motion based on a 3D input sequence showing a dressed person in possibly loose clothing. The accuracy of our method was evaluated on a newly developed benchmark containing 6 different subjects performing 3 motions in 3 different styles each. We have shown that, although being fully automatic, our posture and shape estimation achieves state-of-the-art performance. In the future, the body shape and motion estimated by our algorithm have the potential to aid in a variety of tasks including virtual change rooms and security applications.

Introduction

The last decade has seen the emergence of 3D dynamic shape models of moving objects, in particular humans, acquired from multiple videos. These spatiotemporal models comprise geometric and appearance information extracted from images, and they allow for subject motions to be recorded and reused. This is of interest for applications that require real 3D contents for analysis, free viewpoint and animation purposes, and also for interactive experiences made possible with new virtual reality devices. This ability to now record datasets of subject motions bolsters the need for shape and appearance representations that make optimal use of the massive amount of image information usually produced. While dynamic shape representations have been extensively studied, from temporally coherent representations over a single sequence, to shape spaces that can encode both pose and subject variabilities over multiple sequences and multiple subjects, appearance representations have received less attention in this context. In this paper, we investigate this issue. Currently, appearance information is still most often estimated and stored once per frame, e.g. a texture map associated to a 3D model [1], and the leap to an efficient temporal appearance representation is still a largely open problem. This is despite the obvious redundancy with which the appearance of subjects is observed, across temporal frames, different viewpoints of the same scene, and often several sequences of the same subject performing different actions or motions.
At the opposite end of the spectrum, and given registered geometries, one can store only one texture for a sequence, or even for a subject in several sequences, hence dramatically reducing sizes, but in so doing one would drop the ability to represent desirable appearance variations, such as changes in lighting or personal expression of the subject. In this paper, we advance this aspect by providing a view-independent appearance representation and estimation algorithm, to encode the appearance variability of a dynamic subject observed over one or several temporal sequences. Compactly representing image data from all frames and viewpoints of the subject can be seen as a non-linear dimensionality reduction problem in image space, where the main non-linearities are due to the underlying scene geometry. Our strategy is to remove these non-linearities with state-of-the-art geometric and image-space alignment techniques, so as to reduce the problem to a single texture space, where the remaining image variabilities can be straightforwardly identified with PCA and thus encoded as Eigen texture combinations. To this goal, we identify two geometric alignment steps. First, we coarsely register the geometric shape models of all time frames to a single shape template, for which we pre-computed a single reference surface-to-texture unwrapping. Second, to cope with remaining fine-scale misalignments due to registration errors, we estimate realignment warps in the texture domain. Because they encode low-magnitude, residual geometric variations, they are also advantageously decomposed using PCA, yielding Eigen warps. The full appearance information of all subject sequences can then be compactly stored as linear combinations of Eigen textures and Eigen warps. Our strategy can be seen as a generalization of the popular work of Nishino et al. [2], which introduces Eigen textures to encode appearance variations of a static object under varying viewing conditions, to the case of fully dynamic subjects with several viewpoints and motions. The pipeline is shown to yield effective estimation performance. In addition, the learned texture and warp manifolds allow for efficient generalizations, such as texture interpolations to generate new unobserved content from blended input sequences, or completions to cope with missing observations due to e.g. occlusions. To summarize, our main contribution is to propose and evaluate a new appearance model that specifically addresses dynamic scene modeling by accounting for both appearance changes and local geometric inaccuracies.

Related work

Obtaining the appearance of 3D models from images was first tackled from static images for inanimate objects, e.g. [3,2], a case largely explored since, e.g. [4,5]. The task also gained interest for the case of subjects in motion, e.g. for human faces [6]. With the advent of full body capture and 3D interaction systems [7,1], the task of recovering appearance has become a key issue, as the appearance vastly enhances the quality of restitution of acquired 3D models.

[Fig. 1: Overview: time-consistent shape modeling provides datasets of appearance maps. Our proposed method exploits the manifold structure of this appearance information through PCA decomposition to generate the Eigen appearance maps relative to a shape.]

A central aspect of the problem is how to represent appearance, while achieving a proper trade-off between storage size and quality. 3D capture traditionally generates full 3D reconstructions, albeit of inconsistent topology across time.
In this context the natural solution is to build a representation per time frame which uses, or maps to, that instant's 3D model. Such per-instant representations come in two main forms. View-dependent texturing stores and resamples from each initial video frame [8], eventually with additional alignments to avoid ghosting effects [9]. This strategy creates high quality restitutions managing visibility issues on the fly, but is memory costly as it requires storing all images from all viewpoints. On the other hand, one can compute a single appearance texture map from the input views in an offline process [1], reducing storage but potentially introducing sampling artifacts. These involve evaluating camera visibility and surface viewing angles to patch and blend the view contributions in a single common mapping space. To overcome the resolution and sampling limitations, 3D superresolution techniques have been devised that leverage the viewpoint multiplicity to build such maps with enhanced density and quality [10-12]. In recent years, a leap has been made in the representation of captured 3D surfaces, as they can now be estimated as a deformed surface of time-coherent topology [13,14]. This in turn allows any surface unwrapping and mapping to be consistently propagated in time; however, in practice existing methods have only started leveraging this aspect. Tsiminaki et al. [11] examine small temporal segments for single texture resolution enhancement. Volino et al. [15] use a view-based multi-layer texture map representation to favour view-dependent dynamic appearance, using some adjacent neighbouring frames. Collet et al. [1] use tracked surfaces over small segments to improve compression rates of mesh and texture sequences. Methods are intrinsically limited in considering longer segments because significant temporal variability then appears due to light changes and movement. While global geometry consistency has been studied [16-18], most such works were primarily aimed at animation synthesis using mesh data, and do not propose a global appearance model for sequences. In contrast, we propose an analysis and representation spanning full sequences and multiple sequences of a subject. For this purpose, we build an Eigen texture and appearance representation that extends concepts initially explored for faces and static objects [19,6,20,2]. Eigenfaces [19] were initially used to represent the face variability of a population for recognition purposes. The concept was broadened to build a 3D generative model of human faces in both the geometry and texture domains, using the fact that the appearance and geometry of faces are well suited to learning their variability as linear subspaces [6]. Cootes et al. [20] perform linear PCA analysis of appearance and geometry landmarks jointly in their active appearance model. Nishino et al. [2] instead use such linear subspaces to encode the appearance variability of static objects under light and viewpoint changes at the polygon level. We use linear subspaces for full body appearance and over multiple sequences. Because the linear assumption does not hold for whole body pose variation, we use state-of-the-art tracking techniques [21] to remove the non-linear pose component by aligning a single subject-specific template to all the subject's sequences. This in turn allows to model the appearance in a single mapping space associated to the subject template, where small geometric variations and appearance changes can then be linearly modeled.
Method

To eliminate the main geometric non-linearity, we first align sequence geometries to a single template shape and extract the texture maps of a subject over different motion sequences in a common texture space using a state-of-the-art method [11]. Other per-frame texture extractions may be considered. From these subject-specific textures, Eigen textures and Eigen warps that span the appearance space are estimated. The main steps of the method below are depicted in Figure 2 and detailed in the following sections.
1. Texture deformation fields that map input textures to, and from, their aligned versions are estimated using optical flows. Given the deformation fields, Poisson reconstruction is used to warp the textures.
2. PCA is applied to the aligned maps and to the texture warps to generate the Eigen textures and the Eigen warps that encode the appearance variations due to, respectively, viewpoint, illumination, and geometric inaccuracies in the reference model.
Hence, the main modes of variation of the aligned textures and deformation fields, namely the Eigen textures and Eigen warps respectively, span the appearance space in our representation. Note that due to texture space discretization, the warps between textures are not one-to-one and, in practice, two separate sets of warps are estimated. Forward warps map the original texture maps to the reference map. Backward warps map the aligned texture maps back to the corresponding input textures (see Figure 2).

Aligning texture maps

Appearance variations that are due to viewpoint and illumination changes are captured through PCA under a linearity assumption for these variations. To this purpose, textures are first aligned in order to reduce the geometric errors resulting from calibration, reconstruction and tracking imprecisions. Such alignment is performed using optical flow, as described below, and with respect to a reference map taken from the input textures. An exhaustive search for the best reference map, with the least total alignment error over all input textures, is prohibitive since it requires N^2 alignments given N input textures. We follow instead a medoid shift strategy over the alignment errors. The alignment algorithm (see Algorithm 1) first initializes the reference map as one texture from the input set. All texture maps are then aligned to this reference map, and the alignment error is computed as the cumulative sum of squared pixel differences between the reference and the aligned texture maps. The medoid over the aligned texture maps, with respect to alignment error, then identifies the new reference map. These two steps, alignment and medoid shift, are iterated until the total alignment error stops decreasing.

Data: texture maps $\{I_k\}_{k \in [1..N]}$. Result: reference map $A_{ref}$, aligned textures $\{A_k\}$.
Initialize $A_{ref}$ with one texture from the input set; repeat: align all maps to $A_{ref}$, yielding $\{A_k\}$; update $A_{ref} = A_{k_0}$ with $k_0 = \arg\min_k \sum_l \|A_k - A_l\|^2$; until the total alignment error stops decreasing.
Algorithm 1: Texture alignment with iterative reference map selection.

Dense texture correspondence with optical flow

The warps $\{w_k\}$ in the alignment algorithm, both forward and backward in practice, are estimated as dense pixel correspondences with an optical flow method [22]. We mention here that the optical flow assumptions (brightness consistency, spatial coherency and temporal persistence) are not necessarily verified by the input textures. In particular, brightness consistency does not hold if we assume appearance variations with respect to viewpoint and illumination changes.
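Algorithm 1 can be paraphrased in a few lines of Python; `align` stands for the optical-flow estimation plus Poisson warping described next, and the squared-difference error is the one used for the medoid shift. This is a hedged paraphrase, not the authors' implementation.

```python
import numpy as np

def select_reference(textures, align, max_rounds=10):
    """Iterative reference selection of Algorithm 1.

    textures: list of equally-shaped float arrays; align(I, ref) -> aligned
    texture (flow + Poisson warping, abstracted here as a callable)."""
    ref = textures[0]              # initialize with one input texture
    prev_err = np.inf
    for _ in range(max_rounds):
        aligned = [align(I, ref) for I in textures]
        err = sum(((A - ref) ** 2).sum() for A in aligned)
        if err >= prev_err:        # total alignment error stopped decreasing
            break
        prev_err = err
        # medoid shift: the new reference minimizes the total distance
        # to all the other aligned maps
        k0 = np.argmin([sum(((A - B) ** 2).sum() for B in aligned)
                        for A in aligned])
        ref = aligned[k0]
    return ref, aligned
```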
To cope with this in the flow estimation, we use histogram equalization as a preprocessing step, which presents the benefit of enhancing contrast and edges within images. Additionally, local changes in intensities are reduced using bilateral filtering, which smooths low spatial-frequency details while preserving edges.

Texture warping

Optical flows give dense correspondences $\{w\}$ between the reference map and the input textures. To estimate the aligned textures $\{A\}$, we cast the problem as an optimization that seeks the texture map which, once moved according to $w$, best aligns with the considered input texture both in the color and gradient domains. Our experiments show that solving over both color and gradient domains significantly improves results, as it tends to better preserve edges than with colors only. This is also demonstrated in works that use Poisson editing for image composition, e.g. [23,24], or interpolation, e.g. [25,26]. We follow here a similar strategy. We are given an input texture map $I$, a dense flow $w$ from $A_{ref}$ to $I$, and the gradient image $\nabla I$. The aligned texture $A$ of $I$ with respect to $A_{ref}$ is then the map that minimizes the following term:

$$E(A) = \sum_x \left\| \nabla^2 A(x) - \vec{\nabla}\!\cdot\!\nabla I(x + w) \right\|^2 + \lambda\, \left\| A(x) - I(x + w) \right\|^2, \tag{1}$$

where $\nabla^2$ is the Laplacian operator, $\vec{\nabla}\cdot$ the divergence operator, and $x$ denotes pixel locations in texture maps. The weight $\lambda$ balances the influence of color and gradient information. In our experiments, we found that the value 0.02 gives the best results with our datasets. Using a vector image representation, the above energy can be minimized by solving, in the least-squares sense, the overdetermined $2N \times N$ system below, where $N$ is the active region size of the texture maps:

$$\begin{bmatrix} L \\ \Lambda \end{bmatrix} A = \begin{bmatrix} \vec{\nabla}\!\cdot\!\nabla I(x + w) \\ \Lambda I(x + w) \end{bmatrix}, \tag{2}$$

where $L$ is the linear Laplacian operator and $\Lambda = \mathrm{diag}_N(\lambda)$. A solution for $A$ is easily found by solving the associated normal equations:

$$\left( L^T L + \Lambda^2 \right) A = L^T\, \vec{\nabla}\!\cdot\!\nabla I(x + w) + \Lambda^2 I(x + w). \tag{3}$$

Figure 3 shows an example where a texture map is warped, given a warp field, using both direct pixel remapping and Poisson warping. The latter strategy achieves visually more compelling and edge-preserving results.

Eigen Textures and Eigen Warps

Once the aligned textures and the warps are estimated, we can proceed with the statistical analysis of appearances. Given the true geometry of shapes and their motions, texture map pixels could be considered as shape appearance samples over time, and PCA applied directly to the textures would then capture the appearance variability. In practice, incorrect geometry causes distortions in the texture space, and textures must first be aligned before any statistical analysis. In turn, the de-alignment must also be estimated to map the aligned textures back to their associated input textures (see Figure 2). And these backward warps must be part of the appearance model to enable appearance reconstruction. In the following, warps denote the backward warps. Also, we consider vector representations of the aligned texture maps and of the warps. These representations include only pixels that fall inside active regions within texture maps. We perform Principal Component Analysis on the textures and on the warp data separately, to find the orthonormal bases that encode the main modes of variation in the texture space and in the warp space independently. We refer to vectors spanning the texture space as Eigen textures, and to vectors spanning the warp space as Eigen warps. Let us consider first texture maps.
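A minimal single-channel sketch of the Poisson warping solve of Eqs. (1)-(3): a 5-point Laplacian is assembled as a sparse matrix and the normal equations are solved directly. The nearest-neighbour remapping of $I(x+w)$, the approximation of $\vec{\nabla}\cdot\nabla I(x+w)$ by the Laplacian of the warped image, the border handling, and $\lambda = 0.02$ are simplifying assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def poisson_warp(I, w, lam=0.02):
    """Solve the normal equations (3) for the aligned texture A.

    I: (H,W) float texture; w: (H,W,2) flow with w[...,0] horizontal and
    w[...,1] vertical displacements (a layout assumption)."""
    H, W = I.shape
    ys, xs = np.mgrid[0:H, 0:W]
    yw = np.clip((ys + w[..., 1]).round().astype(int), 0, H - 1)
    xw = np.clip((xs + w[..., 0]).round().astype(int), 0, W - 1)
    Iw = I[yw, xw]                                  # I(x + w), NN remapped
    # 5-point Laplacian L (N x N), N = H*W, zeroing wrap-around entries
    N = H * W
    main = -4.0 * np.ones(N)
    off1 = np.ones(N - 1)
    off1[np.arange(1, N) % W == 0] = 0              # no coupling across rows
    offW = np.ones(N - W)
    L = sp.diags([main, off1, off1, offW, offW],
                 [0, 1, -1, W, -W], format="csr")
    b = L @ Iw.ravel()                              # approx. div grad I(x+w)
    A = spsolve((L.T @ L + lam**2 * sp.eye(N)).tocsc(),
                L.T @ b + lam**2 * Iw.ravel())      # Eq. (3)
    return A.reshape(H, W)
```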
Eigen Textures and Eigen Warps
Once the aligned textures and the warps are estimated, we can proceed with the statistical analysis of appearances. Given the true geometry of shapes and their motions, texture map pixels could be considered as shape appearance samples over time, and PCA applied directly to the textures would then capture appearance variability. In practice, incorrect geometry causes distortions in the texture space, and textures must first be aligned before any statistical analysis. In turn, the de-alignment must also be estimated to map the aligned textures back to their associated input textures (see Figure 2), and these backward warps must be part of the appearance model to enable appearance reconstruction. In the following, warps denote the backward warps. Also, we consider vector representations of the aligned texture maps and of the warps. These representations include only pixels that fall inside active regions within texture maps. We perform Principal Component Analysis on the textures and on the warp data separately to find the orthonormal bases that encode the main modes of variation in the texture space and in the warp space independently. We refer to vectors spanning the texture space as Eigen textures, and to vectors spanning the warp space as Eigen warps.
Let us consider first texture maps. Assume N is the dimension of the vectorized representation of active texture elements, and F the total number of frames available for the subject under consideration. To give orders of magnitude for our datasets, N = 22438995 and F = 207 for the Tomas dataset, and N = 25966476 and F = 290 for the Caty dataset that will be presented in the next section. We start by computing the mean image Ā and the centered data matrix M from the aligned texture maps {A_i}, i ∈ [1..F]:

Ā = (1/F) Σ_k A_k,   M = [A_1 − Ā, ..., A_F − Ā].  (4)

Traditionally, the PCA basis for this data is formed by the eigenvectors of the covariance matrix M Mᵀ, of size N × N, but finding such vectors can easily become prohibitive as a consequence of the texture dimensions. However, the non-zero eigenvalues of M Mᵀ are equal to the non-zero eigenvalues of Mᵀ M, of size F × F this time, and there are at most min(F, N) − 1 of them. Based on this observation, and since F ≪ N in our experiments, we solve the characteristic equation det(M Mᵀ − α I_N) = 0 by performing Singular Value Decomposition on the matrix Mᵀ M, as explained in [27]:

Mᵀ M = D Σ Dᵀ,   D = [V_1, ..., V_F],  (5)

where D contains the (F − 1) orthonormal eigenvectors {V_i} of Mᵀ M, and Σ = diag(α_i) contains the eigenvalues {α_i}, 1 ≤ i ≤ F − 1. We can then write:

Mᵀ M V_i = α_i V_i,   i ∈ [1..F − 1],  (6)

and hence:

M Mᵀ (M V_i) = α_i (M V_i),   i ∈ [1..F − 1],  (7)

where the vectors T_i = M V_i are the eigenvectors of M Mᵀ and therefore form, after normalization, the orthonormal basis of the aligned texture space, namely the Eigen textures. In a similar way, we obtain the mean warp w̄ and the orthonormal basis of the warp space {W_i}, 1 ≤ i ≤ F − 1, the Eigen warps.
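Since N is in the tens of millions while F is a few hundred, this Gram-matrix trick is what makes the analysis tractable. A minimal Python sketch of Eqs. 4-7 follows; the same routine applied to the stacked warp vectors yields the Eigen warps.

```python
import numpy as np

def eigen_basis(data):
    """PCA of an N x F data matrix with N >> F, via the F x F Gram matrix.

    Returns the mean, the (F-1) orthonormal eigenvectors of M M^T
    (Eqs. 4-7) and the associated eigenvalues, in decreasing order.
    """
    mean = data.mean(axis=1, keepdims=True)
    M = data - mean                        # centered data matrix (Eq. 4)
    alphas, V = np.linalg.eigh(M.T @ M)    # eigen-decomposition of M^T M (Eq. 5)
    order = np.argsort(alphas)[::-1][:-1]  # keep the F-1 non-zero modes
    alphas, V = alphas[order], V[:, order]
    T = M @ V                              # T_i = M V_i  (Eq. 7)
    T /= np.linalg.norm(T, axis=0)         # normalize to an orthonormal basis
    return mean.ravel(), T, alphas
```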
Texture generation
Given the Eigen textures and the Eigen warps, and as shown in Figure 4, a texture can be generated by first creating an aligned texture as a linear combination of Eigen textures, and second de-aligning this new texture using another linear combination of the Eigen warps.
Performance Evaluation
To validate the estimation quality of our method, we apply our estimation pipeline to several datasets, project and warp input data using the built eigenspaces, then evaluate the reconstruction error. To distinguish the different error sources, we evaluate this error both in texture space before projection, and in the image domain by projecting into the input views, comparing against the original views of the object and against the texture before any reconstruction in texture space, as estimated in our pipeline using [11]. For the image error measurement, we use the 3D model fitted to the sequence, tracked to fit the selected test frames [21], and render the model textured with our reconstructed appearance map, using a standard graphics pipeline. In both cases, we use the structural similarity index (SSIM) [28] as the metric of comparison with the original. All of our SSIM estimates are computed in the active regions of the texture and image domains, that is, on the set of texels actually mapped to the 3D model in the texture domain, and only among actual silhouette pixels in the image domain. We study in particular the compactness and generalization abilities of our method, by examining the error response as a function of the number of Eigen components kept after constructing the linear subspaces, and of the number of training images selected. For all these evaluations, we also provide the results of a naive PCA strategy, where only a set of Eigen appearance maps is built in texture space and used to project and reconstruct textures, to show the performance contribution of including the Eigen warps. For validation, we used two multi-sequence datasets: (1) the Tomas dataset, which consists of 4 different sequences (left, right, run and walk) with 207 frames in total and 68 input views per frame, each captured at a resolution of 2048x2048 pixels; and (2) the Caty dataset: low, close, high and far jumping sequences with 290 frames in total and 68 input views per frame, each captured at a resolution of 2048x2048 pixels.
Estimation Quality and Compactness
We study the quality and compactness of the estimated representation by plotting the SSIM errors of reconstructed texture and image estimates of our method against naive PCA, for the two multi-sequence datasets (Figure 5). Note that all texture domain variability could be trivially represented by retaining as many Eigen textures as there are input images; we therefore examine in particular how the quality degrades with the fraction of Eigen components kept. In the case of image domain evaluations, we plot the average SSIM among all viewpoints. Our method outperforms naive PCA in the image and texture domains on both datasets, achieving higher quality with a lower number of Eigen components, and only marginally lower quality as the number of components grows, where the method would anyway be less useful. A higher number of Eigen components marginally favors naive PCA, because naive PCA converges to the input textures, by construction, as the number of retained Eigen textures increases, whereas our method hits a quality plateau due to small errors introduced by texture warp estimation and decomposition. For both datasets, virtually no error (0.98 SSIM) is introduced by our method in the texture domain with as few as 50 components, a substantially low fraction compared to the number of input frames (207 and 290). This illustrates the validity of the linear variability hypothesis in the texture domain. The error is markedly higher in the image domain (bounded by 0.7) for both our method and naive PCA, because measurements are then subject to fixed upstream errors due to geometric alignments, projections and image discretizations. Nevertheless, visually indistinguishable results are achieved with 50 Eigen components (images and warps), with a significant compactness gain.
Generalization ability
In the previous paragraph, we examined the performance of the method by constructing an Eigen space with all input frames. We here evaluate the ability of the model to generalize, i.e. how well the method reconstructs textures from input frames given a reduced number of examples that do not span the whole input set. For this purpose, we perform an experiment using a training set of varying size, and a test set of frames not in the training set. We use a training set comprised of randomly selected frames spanning 0% to 60% of the total number of frames, among all sequences and frames of all datasets, and plot the error of projecting the complement frames on the corresponding Eigen space (Figure 6). The experiment shows that our representation generalizes better than naive PCA, i.e. fewer training frames are needed to reconstruct textures and reprojections of equivalent quality. For the Tomas dataset, one can observe that fewer than half the training images are needed to achieve similar performance in texture space, and about a quarter fewer with the Caty dataset.
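The evaluation loop used throughout this section is simple: each held-out texture is projected on the truncated basis, reconstructed, and compared with SSIM over the active region. A minimal sketch, assuming scikit-image is available and grayscale maps:

```python
import numpy as np
from skimage.metrics import structural_similarity

def reconstruct(tex, mean, basis, n_comp):
    """Project a vectorized texture on the first n_comp Eigen components
    and reconstruct it from the projection coefficients."""
    B = basis[:, :n_comp]
    return mean + B @ (B.T @ (tex - mean))

def masked_ssim(ref, est, active):
    """SSIM restricted to the active texel region (`active` is a boolean mask)."""
    _, s_map = structural_similarity(ref, est, full=True,
                                     data_range=ref.max() - ref.min())
    return s_map[active].mean()
```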
Applications
We investigate below two applications of the appearance representation we propose: first, the interpolation between frames at different time instants and, second, the completion of appearance maps at frames where some appearance information is lacking due to occlusions or missing observations during the acquisition. Results are shown in the following section and in the supplementary video.
Interpolation
In our framework, appearance interpolation benefits from the pre-computed warps and the low dimensionality of our representation to efficiently synthesize compelling new appearances with reduced ghosting artefacts. It also easily enables the extension of appearance interpolation from pairwise to multiple frames. Assume that shapes between two given frames are interpolated using a standard non-linear shape interpolation, for instance [29]. Consider then the associated aligned textures and associated warps at the given frames. We perform a linear interpolation in the Eigen texture and Eigen warp spaces respectively by blending the projection coefficients of the input appearance maps. Poisson warping, as introduced in section 3.1, is used to build the de-aligned interpolated texture with the interpolated backward warp; a sketch follows below. Figure 7 compares interpolation using our pipeline to a standard linear interpolation for 4 examples with the Caty and Tomas datasets. Note that our method is also linear but benefits from the alignment performed in the texture space to reduce interpolation artefacts, as well as from the simplified computational aspects, since interpolation applies to projection coefficients only.
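The following sketch assumes the Eigen bases and per-frame projection coefficients from the previous sections, and a `poisson_warp` helper implementing the de-alignment of section 3.1; extending it to more than two frames amounts to using more than two convex blending weights.

```python
import numpy as np

def interpolate_appearance(t, coeff_tex, coeff_warp, E_tex, E_warp,
                           mean_tex, mean_warp, poisson_warp):
    """Blend the projection coefficients of two frames at t in [0, 1],
    then de-align the result with the interpolated backward warp."""
    c_tex = (1 - t) * coeff_tex[0] + t * coeff_tex[1]
    c_warp = (1 - t) * coeff_warp[0] + t * coeff_warp[1]
    aligned = mean_tex + E_tex @ c_tex       # interpolated aligned texture
    warp = mean_warp + E_warp @ c_warp       # interpolated backward warp
    return poisson_warp(aligned, warp)       # de-aligned output texture
```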
Completion
As mentioned earlier, appearance maps can be incomplete due to acquisition issues. For instance, as shown in Figure 8, during the running sequence the actor Tomas bends his knees in such a way that the upper parts of his left and right shins become momentarily hidden to the acquisition system. This results in missing information for those body parts in the texture maps over a few frames. Such an issue can be solved with our texture representation by omitting the incomplete frames when building our appearance representation, then projecting these incomplete appearance maps into the Eigen spaces and reconstructing them using the projection coefficients and Poisson texture warping. Figure 8 shows two examples of this principle with occluded regions. Note however that, while effectively filling gaps in the appearance map, this completion might still lose appearance details in regions of the incomplete map where information is not duplicated in the training set.
Conclusion
We have presented a novel framework to efficiently represent the appearance of a subject observed from multiple viewpoints and in different motions. We propose a straightforward representation which builds on PCA and decomposes into Eigen textures and Eigen warps that encode, respectively, the appearance variations due to viewpoint and illumination changes and those due to geometric modeling imprecisions. The framework was evaluated on 2 datasets and with respect to: (i) its ability to accurately reproduce appearances with compact representations; (ii) its ability to resolve appearance interpolation and completion tasks. In both cases, the interest of a global appearance model for a given subject was demonstrated. Among the limitations, the representation performance depends on the underlying geometries. Future strategies that combine both shape and appearance information would thus be of particular interest. The proposed model could also be extended to global representations over populations of subjects.

3D shape tracking is the process of recovering the temporal evolution of a template shape using visual information, such as images or 3D points. It finds applications in several domains including computer vision, graphics and medical imaging. In particular, it has recently demonstrated good success in marker-less human motion capture (mocap). Numerous approaches assume a user-specific reference surface, with the objective of recovering the skeletal poses [1], surface shapes [2], or both simultaneously [3]. A standard tracking process consists in an alternation of the following two steps. First, finding associations between the observed data, e.g. 3D points of the reconstructed visual hull, and the corresponding 3D template shape, typically based on proximity in Euclidean space or in a feature space. Second, given such associations, recovering the pose of the template under the constraint of a deformation model, typically based on the kinematic skeleton [1], [4], [5], [6], or on a piecewise-rigid surface parameterization [2], among others. Most of these model-based methods can be viewed as extensions of the Iterative-Closest-Point (ICP) framework [7], [8] to deformable shapes, which attempts to explain newly observed data using the previous outcomes. As long as the initialization is close to the optimum solution, it is able to produce outstanding results. However, these methods also suffer from the inherent weaknesses of generative strategies, e.g. slow convergence. Moreover, when large deformations or many outliers occur, discovering associations becomes particularly difficult. Unreliable correspondences result in ambiguous situations that yield erroneous numerical solutions. Recently, a number of alternatives and enhancements have been explored for both the association and deformation stages independently. On the one hand, improvements have been proposed for the association problem by discovering correspondences discriminatively [6], [9], [16]. This in turn opens the possibility of 3D tracking techniques that are robust to failure. In contrast to generative ICP variants, these discriminative approaches that 'detect' rather than track models have shown better robustness over the past decade, for instance in human pose estimation with 2.5D data from Kinect [6], [10]. These approaches usually consider foreground human subjects to be pre-segmented, which is not a favorable assumption for full 3D data that generally contains a substantial amount of outliers, as in Fig. 1(b). Including non-human objects in the reference shape so that more points are explained, i.e. fewer outliers, is one workaround adopted by many existing multi-view methods [17], [18], with the downside that further post-processing is required to analyze only the humans' movements. There is a growing need to facilitate robust frame-wise observation-model associations for reconstructed complete 3D human shapes. Although surface-based features are commonly used for this purpose in the context of shape matching [9], volumetric features have also proven to be a promising direction for 3D shape description with surface-based templates [11].
On the other hand, progress has also been made in the deformation stage by introducing volumetric deformation models instead of purely surface-based ones, mainly motivated by the observation that human movements are largely volume-preserving. This has shown significantly improved robustness in various tracking situations, such as shape folding and volume bias of the observed shapes [12]. As volumetric deformation models are gradually adopted for capturing actors' motions due to their inherent local volume-preserving properties, facilitating volumetric discriminative correspondences becomes desirable. We investigate this direction and make the following two contributions in this paper. First, two volumetric features are designed for human shape correspondence detection, operating respectively on surface and volumetric meshes. Inspired by Taylor et al. [6], we apply regression forests to improve the associations, with two learning strategies devised for the different shape parameterizations. In the case of surface mesh representations, we convert shapes to a volumetric Truncated Signed Distance Field (TSDF) [13], where each surface vertex is fed into user-specific forests to predict correspondences in one shot. Meanwhile, we also tessellate both the observed and template shapes as a set of uniform and anisotropic cells (see Fig. 2) obtained from a Centroidal Voronoi Tessellation (CVT) [14] and again leverage similar distance-transform representations to predict volumetric correspondences for all CVT cells. Second, by integrating these one-shot associations into the respective deformation models, we further present a discriminative human mocap framework, as depicted in Fig. 1, termed tracking-by-detection of 3D human shapes. In contrast to ICP-like methods [2], [3], [4], it does not require close initializations from a nearby frame to estimate correspondences and thus better handles large deformations. Experiments demonstrate that, when combined with a generative tracking approach, this hybrid framework leads to better or comparable results than purely generative ones, e.g. [2], [15], reducing error accumulation and hence increasing stability. The regression entropy is also augmented with a classification one to identify outliers. Very little prior work addresses the tracking or matching situation where the input consists mainly of irrelevant outliers. Notably, in the case of CVT, our method is a unified volumetric pipeline where the shape representation, deformation model, feature description and point association are all built on a single CVT representation that brings benefits at all stages of the pipeline. This fully volumetric tracking-by-detection method shows improved accuracy and memory performance compared to its surface-based counterpart [11].
Fig. 2. Centroidal Voronoi tessellation yields volumetric cells of uniform shape and connectivity with controllable complexity. The cells of the observed shape are matched discriminatively to those of the template.
RELATED WORK
Among the vast literature on human motion analysis [19], we focus on top-down approaches that assume a 3D template and deform it according to input data, either directly with pixels [4], [20], or with computed 3D points [2], [3], [15]. These methods typically decompose into two major steps: (1) data association, where observations are associated to the model, and (2) the deformation stage, where motion parameters are estimated given the associations.
As our primary objective in this paper is to improve the first part, existing approaches are discussed accordingly below.
Generative approaches
Methods of this category follow the association strategy of ICP while extending the motion model to more general deformations than in the original method [7], [8]. Correspondences are obtained by searching for closest points, with various distance measures such as point-to-point [2], point-to-plane [21], or Mahalanobis distances [20]. This strategy heavily relies on the fact that observations in consecutive frames are in vicinity. Klaudiny et al. [22], Huang et al. [3] and Collet et al. [17] generalize the idea from the previous frame to a certain key-frame in the considered sequences, finding the best non-sequential order to track, but the proximity assumption remains. On the other hand, since 3D data such as reconstructed point clouds often contain spurious fake geometries, another challenge consists in identifying irrelevant observations online and dynamically, without any prior knowledge. Liu et al. [4] establish 3D-2D correspondences by considering both texture in images and contours in silhouettes, and further include image segmentation information to differentiate multiple interacting subjects. Huang et al. [2], [3] relax the hard correspondence constraint to soft assignments and introduce an additional outlier class to reject noisy observations. Data is explained by Gaussian Mixture Models (GMM) in an Expectation-Maximization (EM) manner [23]. In [24], both source and target points are similarly modeled as GMMs and the registration problem is cast as minimizing the distance between the two mixture models. Collet et al. [17] fuse information from various modalities attentively to generate high-quality textured meshes. Yet, to yield a temporally coherent mesh tessellation, the underlying tracking component is still ICP-based [25]. All these generative methods are highly likely to fail under large deformations. Furthermore, they are prone to error accumulation and, as a result of matching several successive frames wrongly (whether sequentially or not), they are prone to drift.
Discriminative approaches and 3D descriptors
Recently, discriminative approaches have demonstrated their strengths in estimating human [6], [26] and hand [27] poses from depth images. With the initial intention to substitute ICP-based optimization, Taylor et al. [6] propose a frame-wise strategy that yields decent dense correspondences without iterative refinements. The method replaces the proximity search step of ICP-based tracking methods by learning the mapping from input 3D points from depth sensors to the human template surface domain, termed the Vitruvian manifold. Later, Pons-Moll et al. [5] train forests with a new objective on surface manifolds, and increase the precision by finishing convergence with an ICP-based loop after the discriminative association stage. Both approaches operate frame-independently and are generally drift-free. Following the same weak pair-wise features and random forest framework, Dou et al. [18] learn to match two successive depth frames to avoid depending on a specific template. More informative descriptors and matching strategies have long been studied for shape recognition or retrieval with meshes [28] and point clouds [29].
The well-known heat kernel signatures (HKS) [30] and wave kernel signatures (WKS) [31] exploit the Laplace-Beltrami operator, the extension of the Laplacian operator to surface embeddings. Rodola et al. [9] later apply forests to learn the parameters of WKS during training. These features are nonetheless known for their lack of resilience to significant topology changes, an artifact frequently seen in noisy surface acquisitions. Mesh-HoG [32] and SHOT [33] attach a local coordinate frame at each point to achieve invariant representations and reach better performance on noisy surfaces. To enforce consistent matches over the whole shape, Chen and Koltun [34] and Starck et al. [35] formulate the matching problem as inference in a Markov random field (MRF). Besides hand-crafted features, there is a recent trend of applying Convolutional Neural Networks (CNN) [36] to discover deep representations of non-rigid human shapes. Wei et al. [16] render depth images from several viewpoints, where the CNN feature transformation takes place, and average the descriptors from multiple views. Boscaini et al. [37] stay in 3D space but define the convolution function in the intrinsic manifold domain. While showing encouraging results in handling missing data, these methods do not consider matching human shapes in the presence of a large amount of outliers, e.g. un-subtracted furniture in the background, and thus do not fit our 'detection' purpose. Another common trait of the aforementioned approaches is that the computation involves only surface points. We showed in our early work [11] that surface features can be built based on local coordinate frames in a regular-grid volume. In this paper, we not only improve this feature but also propose a new one to address the need for fully volumetric correspondences. Both features, implicitly or explicitly, leverage distance-transform volumes to describe 3D geometry. Taking only surface vertices into account, the existing approaches rely on heterogeneous shape representations, deformation models, target primitives and feature spaces. Instead, our CVT-based tracking-by-detection proposal builds a unified framework for all these purposes and takes advantage of volumetric tracking strategies.
OVERVIEW
We implement discriminative associations using two different volumetric representations. In the first case, we convert the triangular surface meshes to a Truncated Signed Distance Field (TSDF) constructed on a regular 3D volumetric grid. In the second case, we use a CVT representation which is not bound to regular grids. As in Fig. 2, the interior space of a triangular surface is tessellated into a set of cells of uniform anisotropic shape whose seed locations coincide with their centers of mass. Such an optimal discretization yields a lower memory footprint than regular-grid volumes, in turn accommodating more training meshes. Moreover, we also associate CVT cells discriminatively and present volumetric correspondences. Formally, a humanoid shape describes a continuous volumetric domain in 3D, Ω ⊂ R³, whose border ∂Ω defines a 2-manifold surface. The discretized mesh representation M contains a set of 3D points M and their connectivity T, i.e. M = (M, T), where M is drawn from the surface (M ⊂ ∂Ω) or from the whole volume (M ⊂ Ω). The goal of 3D shape tracking is to register a source reference mesh X = (X, T_X) to the observed target mesh Y = (Y, T_Y), such as fitting the shape in Fig. 1(a) to the one in Fig. 1(b).
Our method starts with surface meshes reconstructed by a shape-from-silhouette method [38]. We refer only to points on surfaces as vertices v ∈ V, where V is the set of their indices. Suppose the reference surface X and the input visual hull Y are located at X = {x_v}, v ∈ V_X, and Y = {y_i}, i ∈ V_Y, respectively; the registration then typically boils down to two steps: (1) association: matching each point in Y with those in X to build the correspondence set C = {(i, v)} ⊂ V_Y × V_X; and (2) deformation: estimating the deformation parameters Θ of the template given C. (Several terms are used interchangeably in this paper: reference and template; correspondences and associations; point and primitive. The observations are always indexed by i regardless of the parameterization.)
To discover the correspondences C discriminatively, we adapt the Vitruvian strategy [6] from matching 2.5D against 3D to matching 3D against 3D. This amounts to warping the input mesh Y to the reference one X, denoted as Ỹ = (Ỹ, T_Y) = (r(Y), T_Y), where r is the warping function. A good r shall lead to a clean warp Ỹ as in Fig. 3; incorrectly warped points can still be told apart by their huge edges. Specifically, this R³ → R³ mapping r is learned by a regression forest [39]. We convert each surface into an implicit representation, a distance field, which is usually defined volumetrically. As stated above, we investigate two ways to define the volumetric elements s. The first one is a voxel from a regular axis-aligned volume, i.e. s ∈ N³, while the second one is a cell from a volumetric mesh, i.e. s ∈ S, where S is a group of CVT cells that tessellate only the surface interior. Depending on the choice of s, our volumetric feature f is hence also realized in two different forms. Taking the feature f as input, multiple binary decision trees are trained with previously observed meshes. In the online testing phase, an input point obtains a prediction ỹ_i = r(y_i) that indicates the location of potential matches, since the warp Ỹ is learned to resemble X. Thus, C can be built swiftly by doing a nearest-neighbor search between Ỹ and X just once, and the deformation parameters Θ that encode the shape pose of the template are estimated accordingly. Notably, in the case of CVT, since the cells comprise a volumetric mesh, the whole pipeline (discovering C and estimating Θ) can instead be conducted in a fully volumetric fashion; a sketch of this detect-then-associate step is given below. Fig. 3 illustrates this correspondence detection process. The details of the training, prediction and deformation models are provided in § 5.
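Assuming a trained forest exposing a `predict` routine that returns the regressed rest-pose locations r(y_i) (the routine name is an assumption of this sketch), the one-shot association reduces to a single nearest-neighbor query:

```python
import numpy as np
from scipy.spatial import cKDTree

def detect_correspondences(Y, forest, X0):
    """One-shot association: regress observed points into the template
    rest pose, then match each prediction to its closest template primitive."""
    Y_tilde = forest.predict(Y)          # warped points r(Y), one pass, no ICP loop
    tree = cKDTree(X0)                   # rest-pose vertices or CVT cell centroids
    _, p = tree.query(Y_tilde)           # nearest template primitive per point
    return list(zip(range(len(Y)), p))   # correspondence set C = {(i, p)}
```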
VOLUMETRIC FEATURES
The two volumetric features are introduced in this section. Although both take a volumetric point s as input, the first one actually aims to match surface vertices v, denoted as f(v) := f(s_v), while the second one matches s directly, i.e. f(s). Both are designed to be incorporated into forest training and prediction. A great advantage of decision trees is to learn the most discerning attributes among a large feature bank. One does not have to prepare the whole high-dimensional vector f to draw predictions, because only a few learned attributes κ are needed to traverse the trees. As a result, features can be computed on the fly during testing. To make use of this property, the calculation of each f_κ is assumed to be independent. We hence avoid histogram-based descriptors that require normalization, such as MeshHOG [32] or SHOT [33], and resort to the offset comparison features used in [40] for f(s_v) and Haar features in [41] for f(s).
Regular-voxel-based features
Our first approach to discriminative associations considers regular-grid volumes (upper row in Fig. 3, s ∈ N³). The warping function r is modeled as a composite one, r: R³ → N³ → R³, where the former step is voxelization and the regression trees account for only the latter. We first cast each mesh M into a volumetric scalar field D: N³ ⊂ R³ → R.
Truncated signed distance transform (TSDT)
Voxelizing a surface in general comprises two parts: (1) determining which voxel s every vertex v maps to, and (2) testing the overlap between triangles and voxels. The first part can be viewed as a quantization mapping from Euclidean space to a discretized space, s: R³ → N³. The size of the volume is large enough to include all possible pose variations, and its center is aligned with the barycenter of the surfaces. The voxel size is chosen to be close to the average edge length of the meshes, so that a single voxel is not mapped to by too many vertices. To check the intersection of triangles with voxels, we apply the separating axis theorem, which is known to be efficient for collision detection [42]. Voxels occupied by the surface are referred to as s_suf. We further identify voxels located inside and outside the surface, denoted respectively as s_in and s_out. Together they define a directional truncated signed distance transform:

D(s) = +min(d(s, M), ρ)  if s ∈ s_out ∪ s_suf,
D(s) = −min(d(s, M), ρ)  if s ∈ s_in,  (1)

where d(s, M) denotes the shortest Euclidean distance from the voxel center to the mesh, which can be computed efficiently via AABB trees using the CGAL library, and ρ is a truncation threshold: if the distance is bigger than ρ, we store only ±ρ to indicate the inside/outside information. It is empirically set to three times the physical length of the voxel diagonal. In the earlier version of this work [11], we stored averaged surface normals at each s_suf. However, such representations yield a high memory footprint and thus limit the amount of training meshes we can incorporate later in § 5. The TSDT representation naturally encodes the spatial occupancy of a mesh and the required memory footprint is only one-third of the former (each voxel now stores just a scalar, not a vector). It shares a similar spirit with implicit surface representations, e.g. level sets, and has been widely employed in RGBD-based tracking or reconstruction [43], [44].
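A sketch of the field construction follows, assuming the unsigned voxel-to-mesh distances and the inside/outside labels have already been computed (e.g. with CGAL's AABB trees, as mentioned above):

```python
import numpy as np

def truncated_sdt(dist, inside, rho):
    """Truncated signed distance transform over the voxel grid (Eq. 1).

    dist   : unsigned distances d(s, M) from voxel centers to the mesh
    inside : boolean mask of voxels lying inside the surface
    rho    : truncation band (about 3x the voxel diagonal)
    """
    D = np.minimum(dist, rho)   # beyond the band, only +/- rho is stored
    D[inside] *= -1.0           # negative inside, positive outside
    return D
```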
Pair-wise offset features
Next, we present the features f for describing the TSDT, which are later used to train the forests. Since we are interested in predicting correspondences for vertices instead of triangles, from now on we concentrate only on those surface voxels s_suf occupied by mesh vertices v, denoted as s_v. The feature is thus defined as a function of s_v, i.e. f(v) := f(s_v). As depicted in Fig. 4, for each surface voxel s_v (blue), we shoot two offsets (red vectors) ψ = (o_1, o_2) ∈ N³ × N³, reaching two neighboring voxels (green). To describe the local geometry, we take the TSDT values within a cuboid around the two respective voxels (yellow squares), perform element-wise subtractions and sum them up. Let ε denote this sum-of-difference operation. By definition, ε from different offsets ψ can be evaluated independently and is thus fully parallelizable, which is a useful trait since this computation will be carried out multiple times during training, with thousands of randomly generated ψ for the same s_v. The feature vector f consists of the ε values resulting from many offset pairs ψ. More precisely, it is a function of s_v but takes an offset pair ψ, a binary variable η (whether to use a Local Coordinate Frame (LCF) or not), and a rotation matrix R ∈ SO(3) (the orientation of the LCF) as parameters. Every possible combination of offset pairs ψ and binary variables η results in one independent feature attribute κ; in notation, f_κ(s_v) = ε(s_v; R_η(ψ)). The dimensionality of f is virtually infinite. The binary variable η determines the alignment of the offset ψ with respect to an LCF, whose transformation is specified by R. The intuition behind this adjustment is to make the features f invariant to poses, cf. Fig. 4(b) and (c). Without re-orientation, ψ might land on different types of voxel pairs, cf. Fig. 4(a) and (b), and hence cause different feature responses ε, despite the fact that the current voxels are located at the same position on the body. Both offset pairs ψ and binary variables η are learned during forest training, while the rotation matrix R is characterized by an LCF obtained as follows.
Local coordinate frame
Defining local coordinate frames for 3D primitives (voxels, vertices, points) has long been studied and usually comes with their 3D descriptor counterparts; see [45] for a comprehensive review. An ideal LCF is supposed to follow whatever transformations the meshes undergo, namely to be as co-variant as possible, such that the consequent feature representations are as invariant as possible. Constructing an LCF boils down to defining three orthonormal vectors as the [x, y, z] axes. To do so, the state-of-the-art methods in the field of LCFs for rigid matching of 3D meshes and point clouds mainly rely on the neighboring points within a local support [33], [46], [47], [48]. The way they leverage spatial distributions can in general be classified into two categories: (1) EigenValue-Decomposition (EVD) [33], [47], [49], and (2) signed distance (SignDist.) [46], [48]. Since it is impractical to repeat the EVD process for all surface voxels s_v, in the following we propose an adaptation of the SignDist. approach to our volumetric representations [50]. This conclusion is drawn after an extensive study and comparison of three LCF approaches presented in our early work [50]. Specifically, for each s_v, we consider its surface normal n_v as the z axis, and obtain the y axis as z × x. The task left is to identify a repeatable x axis. To this end, the class of SignDist. approaches looks for a discerning point within the support (yellow voxel in Fig. 5(b)). We first open a local cuboid support (pink) around each s_v (green), as visualized in Fig. 5(a). The search involves only the peripheral voxels s (cyan) lying on the intersection of the support borders and the surface. The discernibility is defined as the maximum signed distance to the tangent plane [46]:

ŝ = argmax_{s ∈ S̄} (s − s_v) · n_v,  (2)

where S̄ is the intersection of the support borders and the surface. The x axis is the projection of the vector directed from s_v towards ŝ. Fig. 5(b) illustrates the full procedure. Note that there is no guarantee that the discerning point ŝ from Eq.
2 is always repeatable: in particular, if different directions yield similar values of the signed distance, the x axis will be ambiguous, hence the resulting LCFs could rotate about the z axis. Therefore, as shown in Fig. 5(c), this approach produces LCFs quasi-covariant to pose changes and, as a result, only quasi-pose-invariant features f. We leave such noise for the forests to take care of during learning.
CVT-based features
The feature f(s_v) above describes surface geometries in volumes but is devised to match only surface vertices v. A more intriguing question is: can one match these points s directly? In other words, instead of playing an auxiliary role in matching surfaces, can they also be associated to the template discriminatively and even participate in shape deformations (bottom row of Fig. 3)? We investigate this direction with a volumetric representation based on centroidal Voronoi tessellations, which have shown some recent success in various applications [51], [52]; i.e., s is a CVT cell. We use it to sample a distance field where every cell s stores the Euclidean distance from its centroid to the surface ∂Ω: d(x_s, ∂Ω) = min_{p∈∂Ω} d(x_s, p), yielding a distance-transform-like representation similar to the TSDT above.
Haar-like spherical feature
The offset feature f(s_v) above is nevertheless not applicable here since it relies on regular grids. We propose a new feature f(s) with the following principles in mind. It should be able to characterize the local neighborhood of any point of the volumetric shape. This rules out descriptors that rely on surface normals, such as MeshHOG [32] and SHOT [33]. To be able to match any deformed pose with the template, we would like our feature to be pose-invariant. Therefore, we build it on the distance transform, because it naturally encodes the relative location with respect to the surface and it is invariant to rotations and translations and quasi-invariant to pose changes. Finally, our feature needs to be robust to the topological noise present in the input data. Given a distance field sampled by CVT cells S, our feature is similar in spirit to the Haar features in the Viola-Jones face detector [41], except that the rectangular neighborhood is replaced with a sphere. As depicted in Fig. 6, we open an L-layer spherical support region in Euclidean space around each cell. An L-dimensional vector u is defined accordingly, where each element u_l is the sum of the distances of all cells falling within layer l. The feature value is a linear combination of all u_l, with coefficients c_l chosen from the set Υ = {−1, 0, 1}. Formally, suppose c are L-dimensional vectors whose elements are bootstrap samples of Υ, and let c_κ denote one particular instance of c, i.e. c_κ ∈ Υ^L. The feature value is then expressed as an inner product, uᵀc_κ, corresponding to one feature attribute κ. We consider all possible c_κ and also take the distance d itself into account. f is hence a vector of (3^L + 1) dimensions, where 3^L is the cardinality of Υ^L, and each element f_κ is defined as:

f_κ = uᵀc_κ = Σ_l c_κl u_l,  if κ < 3^L, with c_κl ∈ {−1, 0, 1},
f_κ = d(x_s, ∂Ω),  if κ = 3^L.  (3)

Since each dimension f_κ is computation-wise independent, f is suitable for decision forests, which select feature channels κ randomly to split the data during training. Being derived from d(x_s, ∂Ω), f inherits the invariance to rigid-body motions.
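A sketch of the feature computation for one cell is given below, assuming precomputed cell centroids and the per-cell distance field; `radii` (an assumed parameter of this sketch) holds the outer radius of each of the L layers.

```python
import numpy as np

def layer_sums(centroids, dists, s, radii):
    """Per-layer sums u_l over the L concentric spherical layers around cell s."""
    r = np.linalg.norm(centroids - centroids[s], axis=1)
    layer = np.searchsorted(radii, r)   # index of the layer each cell falls in
    # cells beyond the last radius get index L and are naturally excluded
    u = np.array([dists[layer == l].sum() for l in range(len(radii))])
    return u

def haar_attribute(u, c):
    """One attribute f_k = u . c_k of Eq. 3, with c_k in {-1, 0, 1}^L."""
    return float(u @ c)
```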
As opposed to the early version of this work [53], we normalize the distances with respect to the averaged edge length of the cells, achieving invariance to body size to a certain extent. However, f is not invariant to pose changes, as the cells contained in each layer vary with poses. Although considering geodesic spherical supports instead of Euclidean ones would overcome this issue and yield quasi-invariance to pose changes, the resulting feature would be highly sensitive to topological noise. Thus, we keep the Euclidean supports and let the forests take care of the variations caused by pose changes during learning.
CORRESPONDENCES INFERENCE
Now that the features for both surface and volumetric associations, f(v) and f(s), are defined, we proceed to use them to train a regression forest, an ensemble of T binary decision trees, to learn the mapping r: R³ → R³ from the observation domain to the template domain. During training, each tree learns the split functions that best separate the data recursively at branch nodes, while during testing the input point is routed through each tree, reaching T leaves that store statistics as predictions. We discuss in § 5.1 a generic learning framework that applies to both shape parameterizations. A CVT-specific multi-template strategy is presented in § 5.2 to generalize the Vitruvian framework from a single mesh connectivity to multiple ones.
Training and prediction
Broadly speaking, training a regression forest amounts to determining the following components: sample-label pairs, split functions, learning objectives and leaf-node statistical models. Readers are referred to [39] for a comprehensive analysis of the different choices of these components.
Training data and split functions
First we elaborate the training scenario for surface representations. Since forests aim to map an observed 3D vertex back to the template domain ∂Ω_X, usually chosen to be in the rest (T or A) pose, they require meshes in various poses but with the same connectivity for training. To incorporate abundant training variations, we animate the template X_0 = {x⁰_v} ⊂ ∂Ω_X to a variety of poses with a method similar to [Sumner and Popović]. After voxelizing all animated meshes, we associate each surface voxel with its location in the rest pose, obtaining a pool of sample-label pairs D = {(s_v, x⁰_v)}. Each tree is trained with a randomly bootstrapped subset of D. While the split function may be arbitrarily complex, a typical choice is a stump where one single dimension κ is compared to a threshold τ, i.e. axis-aligned thresholding. Our splitting candidate φ is hence the pair of testing channel κ and threshold τ, φ = (κ, τ), where κ is represented by the offset pairs ψ and binary variables η of § 4.1. Let D_N denote the samples arriving at a certain branch node. The training process partitions D_N recursively into two subsets D_L and D_R, based on randomly generated φ:

D_L(φ) = {s_v ∈ D_N | f_κ(s_v) = ε(s_v; R_η(ψ)) ≥ τ},  (4a)
D_R(φ) = {s_v ∈ D_N | f_κ(s_v) = ε(s_v; R_η(ψ)) < τ}.  (4b)

Similarly, given a set of CVTs corresponding to the template volume Ω_X deformed in various poses, we associate each cell s ∈ S_X with its location in the rest pose, denoted as x⁰_s ∈ X_0 ⊂ Ω_X, forming a pool of sample-label pairs D = {(s, x⁰_s)} as the dataset. The split candidate φ is again a pair of threshold and feature attribute, φ = (κ, τ), where the features are instead computed according to Eq.
3, but the thresholding criteria of Eqs. 4a and 4b follow.
Learning objectives and leaf predictions
At branch nodes, many candidates φ are randomly generated and the one that maximizes the information gain I, φ* = argmax_φ I(φ), is stored for later use in prediction. We follow the classic definition of information gain:

I(φ) = H(D_N) − Σ_{i∈{L,R}} (|D_i(φ)| / |D_N|) H(D_i(φ)),  (5)

where H is the entropy, measured as the variance in Euclidean space, i.e. H = σ², for both parameterizations. The tree recursively splits samples and grows until one of the following stopping criteria is met: (1) it reaches the maximum depth, or (2) the number of samples |D_N| is too small. A Mean-Shift clustering [55] is performed in each leaf node to represent the distribution of x⁰ as a set of confidence-weighted modes H = {(h, ω)}, where h ∈ R³ is the mode location and ω a scalar weight. In the prediction phase, a 3D input point i ∈ V_Y or i ∈ S_Y traverses down the trees and lands on T leaves containing different collections of modes {H_1, ..., H_T}. The final regression output r_i is the cluster centroid with the largest weight obtained by performing Mean-Shift [55] on them. Each observed point then gets a closest point p in the reference shape X_0, either on surfaces, p = argmin_{v∈V_X} ‖r_i − x⁰_v‖₂, or in CVTs, p = argmin_{s∈S_X} ‖r_i − x⁰_s‖₂. The correspondence pair (i, p) serves as input to the subsequent deformation framework described in § 6. Outliers such as false geometries or un-removed background elements often exist in 3D data, drastically deteriorating tracking results. If their models are available, we also include them in the training process, so that forests can identify and reject them online. In this case, the goodness of a split φ is evaluated in terms of both classification and regression. We follow Fanelli et al. and extend the entropy to be:

H(D) = −Σ_c p(c|D) log p(c|D) + (1 − e^{−δ/α}) σ²(D),  (6)

where p(c|D) is the class probability of being foreground or background. The entropy is thus a weighted sum of the aforementioned regression measure σ² and the classification entropy measure. Forests trained with Eq. 6 are often referred to as Hough forests. During training, they learn simultaneously (1) how to distinguish between valid and invalid samples (outliers) and (2) how to match valid samples to the template. The regression part receives increasing emphasis as the current depth δ gets larger (i.e. the tree grows deeper), the steepness being controlled by the parameter α. A sketch of the split selection follows below.
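The sketch uses the variance of the rest-pose labels as the regression entropy of Eq. 5; extending `entropy` with the class term of Eq. 6 would yield the Hough-forest variant.

```python
import numpy as np

def entropy(labels):
    """Regression entropy H = sigma^2: variance of the rest-pose labels."""
    return labels.var(axis=0).sum() if len(labels) else 0.0

def best_split(features, labels, candidates):
    """Select phi* = argmax_phi I(phi) among candidates (kappa, tau)  (Eq. 5)."""
    H_N, n = entropy(labels), len(labels)
    best, best_gain = None, -np.inf
    for kappa, tau in candidates:
        left = features[:, kappa] >= tau           # split of Eqs. 4a and 4b
        gain = H_N - (left.sum() / n) * entropy(labels[left]) \
                   - ((~left).sum() / n) * entropy(labels[~left])
        if gain > best_gain:
            best, best_gain = (kappa, tau), gain
    return best
```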
Learning across multiple volumetric templates
So far we know how to utilize the Vitruvian-based learning framework to match surface or volumetric data against the template. For training purposes, one has to deform the reference mesh into various poses such that all meshes share a consistent topology T_X, and one can easily assign each sample a continuous label, namely its rest-pose position X_0. In this regard, the trained forest applies only to one mesh connectivity T_X. Nevertheless, the amount of training data for one single template is often limited. To avoid over-fitting, the rule of thumb is to incorporate as much variation as possible into training. This motivates us to devise an alternative that learns across different template connectivities T_X. Due to the high memory footprint of regular voxel grids, this strategy is unfortunately less practical for the surface feature f(v) of § 4.1 and we implement it only with CVTs. We are given U distinct CVT templates {S_µ}, µ = 1..U (the template suffix X is dropped to keep notations uncluttered), whose temporal evolutions are recovered with the method in [51], resulting in a collection of different templates deformed in various poses, {{X_t^1}, ..., {X_t^U}}, as our dataset. To include all of them in training, we take one generic template Ŝ as the reference. Intuitively, if there exists a mapping g that brings each cell s ∈ S_µ to a new cell g(s) = ŝ ∈ Ŝ, one only needs to change the template-specific labels x⁰_s to the corresponding x⁰_ŝ, which are common to all templates, and the training process above can again be applied. In other words, we align topologies by matching every template S_µ to Ŝ. Fig. 7 depicts this multi-template learning scheme. Although various approaches for matching surface vertices exist, only a handful of works discuss matching voxels/cells. Taking skinning weights [Lewis et al.] as an example, we demonstrate in the following how to adapt a surface descriptor to CVTs. Note that our goal is not to propose a robust local 3D descriptor; with proper modifications, other descriptors could be used as well for shape matching.
Generalized skinning weights
Skinning weights are originally used for skeleton-based animations, aiming to blend the transformations of body parts (bones). Usually coming as a side product of the skeleton-rigging process [Baran and Popović], a skinning weight is a vector w of B dimensions, each corresponding to a human bone b, where B is the number of bones. The non-negative weight w_b indicates the dependency on that part and is normalized so that the weights sum up to one, i.e. Σ_b w_b = 1. As such, a skinning weight vector w is actually a probability mass function over body parts, offering rich information about vertex locations. To extend it from surface vertices to CVT cells, we first relax the unity-summation constraint, as w is not used here to average transformations of bones but only as a descriptor. The intuition behind the adaptation is that a CVT cell should have bone dependencies similar to those of the closest surface point. Therefore, for a cell whose normalized distance to the surface is d, its skinning weight is simply the one of its closest surface point, scaled by e^d. We handle scale changes by normalizing the distance field with the averaged edge length of the cells in the shape. Since the shortest distance usually hits a triangle rather than a single vertex, we use barycentric coordinates as the coefficients to linearly combine the skinning weights of the three vertices. Note that this does not violate the unity-summation constraint for surface vertices, as their distance d is still zero. We illustrate this concept in Fig. 8(a). The mapping g is then determined by searching for the nearest neighbor in the skinning-weight space:

g(s) = argmin_{ŝ∈Ŝ} ‖w_ŝ − w_s‖².
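A sketch of this matching step, assuming the generalized weight vectors have been stacked row-wise for both the subject template and the common one:

```python
import numpy as np
from scipy.spatial import cKDTree

def cell_weight(tri_weights, bary, d):
    """Generalized skinning weight of a cell: barycentric blend of the
    closest triangle's vertex weights, scaled by e^d (d: normalized depth)."""
    return np.exp(d) * (bary @ tri_weights)   # tri_weights: 3 x B matrix

def match_templates(W_subject, W_common):
    """g(s) = argmin over cells of ||w_hat - w_s||^2: nearest neighbor
    in skinning-weight space, one index per subject-template cell."""
    _, g = cKDTree(W_common).query(W_subject)
    return g
```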
In practice, we use Pinocchio [Baran and Popović] to compute the skinning weights, extend them from surface vertices to CVT cells, and match all cells to those of the common template Ŝ. The resulting skeletons are not used in our method. Fig. 8(b) visualizes one example of the matching results. Our approach yields reasonable matches, regardless of the difference in body sizes. Due to the descriptiveness of skinning weights, symmetric limbs are not confused. Note that this computation is performed off-line, only once, between the user-specific templates S_µ and the generic one Ŝ. Input data S_Y cannot be matched this way, because rigging a skeleton for shapes in arbitrary poses remains a challenging task.
TRACKING
Recall that our goal is not only to detect the associations C but eventually to estimate the deformation parameters Θ via Θ̂ = argmin_Θ E(Θ; C), such that the resulting X(Θ̂) best explains Y. The choice of Θ could be raw point positions [Sorkine et al.], [Zollhöfer et al.], skeletal kinematic chains [4], [Ye and Yang], or a cage [Duveau et al.]. We opt for a patch-based deformation framework [2] for surfaces and a CVT cluster-based method [51] for volumetric meshes, respectively. Both group the 3D points into a higher-level structure, where shape deformations are represented as the ensemble of their rigid-body motions θ. We briefly explain here the basic principles and how to apply the correspondences predicted in § 5 to track a sequence of temporally inconsistent observations.
Surface-based deformation
In [2], the reference surface is decomposed into several patches k. This decomposition serves as an intermediate deformation structure between vertex positions and anatomical skeletons. Without any prior knowledge of the motion, patches are preferably distributed uniformly over X. Given the correspondences C from above, a data term is formulated as:

E_data(Θ; C) = Σ_{(i,p)∈C} w_ip ‖y_i − x_p(Θ)‖²₂,  (7)

which is a standard sum of weighted squared distances. Since evolving a surface with discrete observations (even with a good C) is by nature ambiguous, regularization terms are usually introduced to exert soft constraints. Given a vertex v, the rigidity constraint enforces the predicted positions x_v(θ_k) and x_v(θ_l) from two adjacent patches P_k and P_l ∈ N_k to be consistent:

E_r(Θ) = Σ_{k=1}^{K} Σ_{P_l∈N_k} Σ_{v∈P_k∪P_l} w_kl ‖x_v(θ_k) − x_v(θ_l)‖²₂,  (8)

where Θ is implicitly encoded in x_v(θ_k) and x_v(θ_l). Given a fixed input Y, the regression forest returns a fixed response Ỹ, and in turn a fixed C. We therefore apply the standard Gauss-Newton method directly to find the minimizer of the final energy E(Θ; C) = λ E_data(Θ; C) + E_r(Θ), where λ defines the softness of the template and is empirically set to 10 throughout our experiments. A sketch of the energy assembly is given below.
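As an illustration only, the energy can be assembled as a stacked residual vector and handed to a generic non-linear least-squares solver. The sketch assumes C holds triplets (i, p, k) linking observation i to template vertex p in patch k, a helper `x_of_patch(theta, k, v)` returning the position of vertex v under the rigid motion of patch k (both are assumptions of this sketch, not the paper's interfaces), and unit per-pair weights for brevity; SciPy's trust-region solver stands in for the Gauss-Newton iterations.

```python
import numpy as np
from scipy.optimize import least_squares

def make_residuals(C, Y, rigid_pairs, x_of_patch, lam=10.0):
    """Residuals of E = lam * E_data + E_r  (Eqs. 7 and 8)."""
    def residuals(theta):
        # data term: matched observations against deformed template points
        r_data = [np.sqrt(lam) * (Y[i] - x_of_patch(theta, k, p))
                  for i, p, k in C]
        # rigidity term: adjacent patches must predict consistent positions
        r_rig = [x_of_patch(theta, k, v) - x_of_patch(theta, l, v)
                 for k, l, v in rigid_pairs]
        return np.concatenate(r_data + r_rig)
    return residuals

# theta_hat = least_squares(make_residuals(C, Y, pairs, x_of_patch), theta0).x
```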
Note anyway that refining C as non-rigid ICP does is always possible. In this case, our method provides better initializations than using the last frame's results, reducing the number of ICP iterations needed.
Volumetric deformation
On the other hand, a similar deformation framework can be formulated for CVTs as well, except that surface patches k are replaced by clusters of cells. We follow [51], which is essentially a non-rigid ICP method. As opposed to an extensive correspondence search, we again directly use the association pairs (i, p) detected by the forest as initializations. This results in a faster pose estimation.
EXPERIMENTS
The presented method is evaluated extensively in this section. We verify the merits of the discriminative associations as well as of the complete 3D tracking-by-detection pipeline, in both the surface and CVT parameterizations. As summarized in Table 1 in the supplemental material, 15 datasets in total are considered for the various evaluation purposes. Due to the availability of ground truths, the input in § 7.1 consists of non-rigid registrations, whereas in § 7.2 it is the reconstructed visual hull from [38] or the raw tessellated CVT from [Wang et al.].
Discriminative associations
Recall that the goal of discriminative correspondences is to guide the shape deformation, not to match non-rigid 3D shapes accurately. We aim only to show that (1) the presented features are more, or at least equally, informative for matching humanoid surfaces than existing state-of-the-art 3D descriptors, e.g. the Heat Kernel Signature (HKS) [30], [Bronstein and Kokkinos] or the Wave Kernel Signature (WKS) [31], and (2) CVT-based associations are more reliable than their surface-based counterparts. We describe every vertex with HKS, WKS, and our pair-wise offset features f(v) of § 4.1. CVT cells are, on the other hand, described by the Haar-like spherical features f(s) of § 4.2. The forests learn to match these 3D primitives against their own learning template, either a generic reference surface (FAUST) or a subject-specific CVT template (Goalkeeper, Ballet and Thomas).
Surface-based correspondences
Surface correspondences are validated on the publicly available dataset FAUST [Bogo et al.]. Following [16], [34], we use only the training set because of the availability of ground-truth vertex indices. It comprises 100 static 3D scans of 10 subjects in 10 poses. The accuracy on FAUST indicates how well the proposed method deals with human shape variations. Specifically, half the registrations (50 meshes) are chosen to train T = 50 trees and the other half are left out for testing. At branch nodes, 5000 splitting candidates φ are randomly generated and the best one is stored. The error measure is the geodesic distance between predicted vertices and ground truths. If the distance is smaller than a certain threshold, we consider the point correctly matched. The percentage of correct matches at varying thresholds characterizes the performance of an algorithm and is commonly used in many matching papers [16], [34].
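The metric itself is a one-liner; a sketch, with distances already normalized by the template's average edge length:

```python
import numpy as np

def correct_match_curve(errors, thresholds):
    """Fraction of correctly matched points as a function of the threshold."""
    errors = np.asarray(errors)
    return [(errors < t).mean() for t in thresholds]
```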
The results are shown in Fig. 10, where the x-axis is normalized by the averaged edge length of the template. We partition the 100 meshes in two ways to test the generalization to unseen shapes or poses. The keyword pose means that the forest is trained with meshes of all 10 subjects but in only 5 poses, whereas shape represents the opposite. To compare fairly with other existing methods, we keep the Vitruvian-manifold label space unchanged (i.e. the same learning template) while replacing the voxel-based features with 30-dimensional scale-invariant HKS or WKS feature vectors. The proposed TSDT-forest combination yields the overall best accuracy in Fig. 10, suggesting that the voxel-based TSDT feature is indeed more informative than H/WKS in the chosen parameter range. Comparing the blue solid curve to the dashed one, we notice that our approach handles unseen shapes better than unseen poses. This is due to the fact that our feature relies mainly on 3D geometry, in which pose variations cause more significant changes than shape variations. Although this phenomenon is not observed in the curves of H/WKS, because they exploit the spectral domain for better pose invariance, these descriptors suffer from confusion between symmetric parts, as visualized in Fig. 9. We further visualize in Fig. 11 the predicted associations on noisy reconstructed visual hulls with outliers, where no ground truths are available. Black indicates that the predicted correspondences are either rejected by a simple normal compatibility check [2], like those on the body, or rejected because they are recognized as the chair. In this experiment, we include chair meshes in the training data and follow Eq. 6 as the entropy measure to grow the trees. As a result, we can identify observations on the chair online and remove them, so that they do not affect the subsequent tracking stage. The task of the trees here is to discard the points of known outlier classes while predicting correspondences for the remaining points. As one can see, our approach is capable of predicting reasonable associations for noisy visual hulls while rejecting outliers. This is of importance since these are the real input data of the final tracking-by-detection pipeline. HKS and WKS are known for their sensitivity to topological noise, e.g. the merging of arms and torso. We would however like to remark that, as opposed to our feature vector f(v), whose dimensionality is virtually larger than 5000 due to the randomly generated splitting candidates at each branch node, HKS and WKS are only 30-dimensional in our experiment. Fully concluding that the presented voxel-based feature is certainly better than HKS or WKS would require a fairer and more thorough comparison, which is not the main goal of this paper.
Volumetric correspondences
The discriminative CVT-based correspondence of § 4.2 is validated with 6 sequences from 3 subjects: Goalkeeper, Ballet and Thomas.
Volumetric correspondences
The discriminative CVT-based correspondence in § 4.2 is validated with 6 sequences from 3 subjects: Goalkeeper, Ballet and Thomas. We register each template to the corresponding raw CVT sequences using an EM-ICP based method [51] to recover temporally coherent volumetric deformations (tracked CVTs). For each subject, up to 250 tracked CVTs are randomly chosen from the first sequence to form the training set, while the second sequences are completely left out for testing. We use L = 8 sphere layers for the feature computation. Each tree is grown up to depth 20 with 30% bootstrap samples randomly chosen from the dataset.

The contribution of CVTs to improving the correspondence detection is evaluated in two respects. First, we keep using the Vitruvian manifold ∂Ω as the regression domain but replace the voxel-based features f(v) with the spherical feature f(s), denoted CVTfeature. Next, we further change the label space from surfaces ∂Ω to volumes Ω, termed fullCVT. We test on the tracked CVTs and report the results on all frames of both training sequences (Tr) and testing ones (Te). The drop between them indicates the ability to generalize. The same error measure as in the previous subsection is applied, only with geodesic distances replaced by Euclidean ones. To yield a fair comparison with [11], here the forests are subject-specific and consist of T = 20 trees. Fig. 13 shows the percentage of correct matches at varying thresholds for Thomas and Ballet (legend: training and testing curves for [12], with and without re-orientation; panel (b) is Ballet). Since CVTfeature and [11] regress to surfaces whereas fullCVT regresses to volumes, we normalize the x-axis by the average edge length of the templates to yield fair comparisons. While the results of CVTfeature are comparable to [11] (green vs. red or orange), fullCVT attains improved accuracies (blue vs. red or green), demonstrating the advantages of our fully volumetric framework. Some visual results of the fullCVT approach on raw CVT input are shown in Fig. 12.

It is worth a closer analysis to highlight the advantages of the CVT-based feature f(s) over the voxel-based one f(v). Our early work [11] applied f(v), which takes 150³ voxels to describe a human shape, while a CVT needs only 5k cells (note that [11] stores a 3D vector in each voxel, whereas we store a scalar in each CVT cell, so the ratio is 3 × 150³ to 5k). Consequently, [11] is not able to include a sufficient amount of training shapes, leading to a major drawback that forests are limited to one single subject. To further decrease the needed number of training meshes, [11] exploits skeletal poses to cancel the global orientation. This in turn makes every mesh in the training dataset face the same direction, so that merely pose variations are learned. It follows that during tracking the input data has to be re-oriented likewise, using the estimated skeletal poses from the last frame. The CVT-based feature f(s), on the other hand, considers distance fields of cells, which are naturally invariant to rotations, and hence does not require re-orientation. We nevertheless compare to [11] in both settings. The orange curves in Fig. 13 show the results with re-orientation, which are better than the proposed strategy on Ballet. Nonetheless, without re-orienting the data, the accuracy drops substantially during testing (compare red to orange). The efficiency in memory and the invariance of our features are two determining reasons why the presented method is better than [11]. With the multi-template learning strategy in § 5.2, it takes just one forest for different subjects in the tracking-by-detection experiment in § 7.2.
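For intuition, a Haar-like response over concentric sphere layers of a distance field can be pictured as a difference of region means. A minimal sketch; how the two regions are chosen here is purely illustrative, the paper's Eq. 3 defines the actual feature:

```python
import numpy as np

def haar_sphere_feature(layers, sel_a, sel_b):
    """Haar-like response on concentric sphere layers of a distance field.

    layers: list of 1-D arrays, one per sphere layer (L = 8 in the text),
    holding the distance-field values sampled on that layer.
    sel_a, sel_b: (layer_index, sample_indices) pairs picking the two
    regions whose mean values are differenced (illustrative assumption).
    """
    (la, ia), (lb, ib) = sel_a, sel_b
    return layers[la][ia].mean() - layers[lb][ib].mean()
```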
Next, we use the sequences of Goalkeeper to verify the merits of this multi-template learning framework, which is unfeasible for the voxel-based feature f(v) due to its high memory footprint. It is a particularly difficult dataset because motions in the testing sequence UpJump have little overlap with those in the training sequence SideJump. We report in Fig. 14 the correctness of correspondences for UpJump (unseen during training) in the fullCVT setting. Three situations are taken into account: training with the tracked CVTs of all three subjects (red), training only with those from Goalkeeper (blue) and training without Goalkeeper (green). For the red and green curves, we choose the Goalkeeper template as the common one Ŝ and follow the strategy in § 5.2 to align distinct CVT tessellations, as sketched below. Comparing the red curve to the blue one confirms the advantage of this cross-template approach, leading to a forest that applies to all three subjects without trading off much accuracy. Nonetheless, training data of the testing subject is still indispensable, as the accuracy drops when there are no tracked CVTs of Goalkeeper (green vs. red or blue), even though the forest of the green curve is trained with twice the amount of CVTs compared to the blue curve. This is partially due to the fact that the template of Goalkeeper has a much smaller size than the other two, and it suggests that the proposed Haar-like feature in Eq. 3 captures more shape than pose information.
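The cross-template alignment onto a common template Ŝ can be pictured as a nearest-neighbor label transfer between CVT cell sites. A minimal sketch under that assumption; the paper's actual alignment procedure in § 5.2 may differ:

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_labels(common_sites, subject_sites):
    # Map each cell of a subject-specific CVT template to its nearest cell
    # on the common template S-hat, so that forests trained across subjects
    # can share one label space.
    # common_sites: (K, 3) cell sites of the common template.
    # subject_sites: (M, 3) cell sites of a subject template.
    _, idx = cKDTree(common_sites).query(subject_sites)
    return idx  # idx[m] = label on S-hat assigned to subject cell m
```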
Tracking-by-detection
Now we move on to evaluate the full tracking-by-detection pipeline. The predicted associations C of the two parameterizations are fed into their respective shape deformation frameworks in § 6 and the tracking is carried out on a frame-by-frame basis. The fidelity of the estimated shapes is verified by the widely-used silhouette overlap error.

Surface-based
An individual forest is trained for each subject with up to 200 meshes, depending on the number of vertices in the template. For Baran and Vlasic, we train a standard regression forest; for Lionel and Ben we apply the adaptation in Eq. 6 (α = 5) due to the improperly segmented chairs and tables in the input data. Growing T = 20 trees to depth 25 with 5000 testing offset pairs ψ takes about 3 hours. Although it is not the aim of this paper, we additionally augment the energies in § 6.1 with the skeleton energy in [2] and validate the estimated human poses in 2D. For sequences without outliers, we compare with surface-based ICP (surICP) [2] and articulated ICP (artICP) [15], both of which explain data with a GMM using the Expectation-Maximization algorithm. We run an additional ICP step to reduce the errors (ours + ICP) for all testing sequences. The averaged overlap errors are shown in Fig. 15(a-d). In general, our method performs much better than artICP and attains comparable results with surICP. However, ICP-based methods often fail when large deformations occur between consecutive frames, which is usually the case in videos of low frame rates. We simulate this by tracking only every third frame. As reported in Table 1, surICP now yields higher errors because local proximity search fails to estimate correspondences properly, while our approach is able to handle large jumps between successive inputs.

Four of our testing sequences, Cutting, WalkChair1, HammerTable, and WalkChair2, contain tables or chairs in the observations, which act as static outliers. We compare with other outlier rejection strategies such as a fixed outlier proportion (fixOL) [2], removing outliers by body-part classification with an SVM (bpSVM) [Huang et al.], and modeling the outlier likelihood dynamically by aggregating over all patches (patchedOL) [3]. As shown in Fig. 15(e-h), the conventional outlier strategy fixOL drifts quickly and soon fails to track (green curves). ICP with robust outlier treatment, patchedOL, is able to sustain noisy input to a certain extent; once it starts drifting, however, the error only gets higher due to its ICP nature (yellow curves). When subjects and outliers are separate components in the visual hulls, we cast them into separate TSDTs and feed them into the joint classification-regression forest. If they are connected to each other, forests inevitably associate some outliers to the humanoid template, leading to undesirable deformations, as suggested by the spike in the blue curves in Fig. 15(f). Nonetheless, since we rely less on previous frames for data associations, the results always recover when they become separate again. On average, we still yield low errors throughout the whole sequences. We remark that such an ability to recover is the essence of our discriminative approach, which is its biggest advantage over existing generative methods. The recovered shapes and poses, superimposed on the original images, are also presented in Fig. 2(c) in the supplementary material.

Fully volumetric tracking-by-detection
After evaluating the surface-based tracking-by-detection framework, we now turn to the volumetric one. We compare with two quantitative metrics against the whole pipeline of [11], which is the early version of our surface-based tracking-by-detection approach. Unlike the matching experiment in the previous subsection, here we apply the multi-template strategy in § 5.2 to train one universal regression forest, with Goalkeeper chosen as the common template Ŝ. Training T = 50 trees up to depth 20, where each one is grown with around 200 CVTs (approximately one million samples), takes about 15 hours on a 24-core Intel Xeon CPU machine. For each subject, we track the testing sequence, which is not part of the training set. Tracking inputs are raw CVTs that have no temporal coherence. The number of clusters K is 250 for Ballet and Goalkeeper and 150 for Thomas.

We evaluate our tracking approach with two different metrics. On the one hand, evaluation with marker-based motion capture assesses the correctness of the surface pose, but only for a sparse set of surface points. On the other hand, the silhouette overlap error evaluates the shape estimate but not the estimated pose. Hence these metrics are complementary. Some visual results are shown in Fig. 3 in the supplemental material and in the video. Our approach is able to discover volumetric associations even in the challenging poses found in Thomas and to deform the templates successfully. As shown in Tables 2-4, we evaluate the results by computing the overlap error between the ground-truth silhouette and the projection of the estimated surface. The metric we use is the pixel error (the number of pixels that differ).
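Concretely, this pixel error is the count of pixels on which the two binary silhouettes disagree. A minimal sketch, assuming the ground-truth silhouette and the rendered projection of the estimated surface are available as binary masks:

```python
import numpy as np

def silhouette_overlap_error(gt_mask, proj_mask):
    # Number of pixels where the ground-truth silhouette and the projected
    # estimated surface disagree (XOR), per frame and per camera.
    gt = np.asarray(gt_mask, dtype=bool)
    pr = np.asarray(proj_mask, dtype=bool)
    return int(np.logical_xor(gt, pr).sum())
```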
Statistics are computed on all frames of all cameras. The Ballet/Seq2 sequence has marker-based motion capture data: fifty markers were attached to the body of the subject, providing a sparse ground truth for surface tracking. First, each marker is associated to a vertex of the template surface. Then, for each marker, we measure the distance between its location and the estimated vertex location. Statistics on these distances are reported in Table 5. We observe that our approach attains slightly better performance than a state-of-the-art ICP-based approach [51] and outperforms the surface-based tracking-by-detection [11], which mostly fails to correctly register the legs of the subject.

Discussion
Last but not least, we make a short comparison between the two presented features. As discussed above, the voxel-based volume in § 4.1 has the downside of a high memory footprint, which limits the allowed training variation. Aligning the orientations is one way to reduce the training variation such that forests only need to learn the pose variations of one single subject; one then has to do the same during the testing phase. In [11], we rely on the skeletal poses of previous frames for this purpose, and thus the forest predictions are not fully frame-independent, exposing the tracked subject to the potential risk of drifting. To facilitate a fully 3D tracking-by-detection framework, the information of previous frames should preferably not participate in the discriminative correspondence estimation. The spherical feature presented in § 4.2, on the other hand, attempts to incorporate rotational, pose, and even shape variations during training, yielding completely frame-wise forest predictions. As reported in Fig. 13, without aligning rotations, the accuracy of the correspondences drops substantially on the testing sequences for the method in [11]. This means that the voxel-based framework and the corresponding features do not generalize well to unseen rotations. When deployed in tracking applications, such unreliable associations eventually result in failure. In particular, one can observe in Table 4 that [11] attains a high silhouette overlap discrepancy, most likely due to the fact that the subject rotates himself into many orientations and thus confuses the forest. From these observations, we conclude that the CVT-based Haar-like feature and the derived fully volumetric tracking-by-detection framework are better than their voxel-based counterparts.

CONCLUSION
In this paper, we present two features for surface and CVT shape parameterizations respectively, both making use of volumetric distance fields to describe 3D geometry. Aiming at integration with random forests, each feature attribute is computationally independent and can be obtained on the fly at test time. They facilitate surface-based and CVT-based discriminative associations and in turn lead to the corresponding tracking-by-detection frameworks for 3D human shapes. While the CVT-based approach is more robust than its surface counterpart, we show that both yield more stability compared with the respective generative ICP extensions. The reliability of the proposed method is confirmed by experiments on numerous public sequences. Future directions include alleviating problems of topological changes and incorporating photometric information.
Nassir Navab is a professor of computer science and founding director of the Computer Aided Medical Procedures and Augmented Reality (CAMP) laboratories at Technische Universität München (TUM) and Johns Hopkins University (JHU). He also has secondary faculty appointments at the TUM and JHU medical schools. He received the Ph.D. degree from INRIA and the University of Paris XI, France, and enjoyed two years of postdoctoral fellowship at the MIT Media Laboratory before joining Siemens Corporate Research (SCR) in 1994. At SCR, he was a distinguished member and received the Siemens Inventor of the Year Award in 2001. In 2012, he was elected a fellow of the MICCAI society. He received the 10-year Lasting Impact Award of IEEE ISMAR in 2015. He holds 45 granted US patents and over 40 European ones. He has served on the program and organizational committees of over 80 international conferences including CVPR, ICCV, ECCV, MICCAI, IPCAI, and ISMAR. He has also co-authored more than 20 awarded papers at international conferences. His current research interests include robotic imaging, computer-aided surgery, computer vision and augmented reality.

Abstract
In this paper, we tackle the problem of 3D human shape estimation from single RGB images. While the recent progress in convolutional neural networks has allowed impressive results for 3D human pose estimation, estimating the full 3D shape of a person is still an open issue. Model-based approaches can output precise meshes of naked under-cloth human bodies but fail to estimate details and un-modelled elements such as hair or clothing. On the other hand, non-parametric volumetric approaches can potentially estimate complete shapes but, in practice, they are limited by the resolution of the output grid and cannot produce detailed estimates. In this work, we propose a non-parametric approach that employs a double depth map to represent the 3D shape of a person: a visible depth map and a "hidden" depth map are estimated and combined to reconstruct the human 3D shape as done with a "mould". This representation through 2D depth maps allows a higher resolution output with a much lower dimension than voxel-based volumetric representations. Additionally, our fully derivable depth-based model allows us to efficiently incorporate a discriminator in an adversarial fashion to improve the accuracy and "humanness" of the 3D output. We train and quantitatively validate our approach on SURREAL and on 3D-HUMANS, a new photorealistic dataset made of semi-synthetic in-house images annotated with 3D ground-truth surfaces.

Introduction
Recent works have shown the success of deep network architectures for the problem of retrieving 3D features such as kinematic joints [4,33] or surface characterizations [43] from single images, with extremely encouraging results. Such successes, sometimes achieved with simple, standard network architectures [30], have naturally motivated the applicability of these methodologies to the more challenging problem of end-to-end full 3D human shape retrieval [2,18].

Figure 1. Our non-parametric representation for human 3D shape: given a single image, we estimate the "visible" and the "hidden" depth maps from the camera point of view. The two depth maps can be seen as the two halves of a virtual "mould". We show this representation for one of the images of our new dataset.
The ability to retrieve such information from single images or videos is relevant to a broad number of applications, from self-driving cars, where spatial understanding of surrounding obstacles and pedestrians plays a key role, to animation or augmented reality applications such as virtual changing rooms that can offer the e-commerce industry a virtual fitting solution for clothing or bodywear. Designing a deep architecture that produces full 3D shapes of humans observed in an input image or a sequence of input images raises several key challenges. First, there is a representational issue. While the comfort zone of CNNs is in dealing with regular 2D input and output grids, the gap must be bridged between the 2D nature of the inputs and the 3D essence of the desired outputs. One solution is to follow a parametric method and estimate the deformation parameters of a predefined human 3D model [2,18]. These methods are limited to the level of detail covered by the model. In contrast, non-parametric approaches can potentially account for shape surface details but are prone to produce physically-impossible body shapes. This is the case of the recent volumetric approach proposed in [38] that encodes the human body as a voxel grid whose dimensionality directly impacts the precision of the estimation.

Figure 2. Overview. Given a single image, we estimate the "visible" and the "hidden" depth maps. The 3D point clouds of these 2 depth maps are combined to form a full-body 3D point cloud, as if lining up the 2 halves of a "mould". The 3D shape is then reconstructed using Poisson reconstruction [19]. An adversarial training with a discriminator is employed to increase the humanness of the estimation.

This highlights a second challenge: the dimensionality of the problem is considerably higher than what existing networks have been shown to handle, because the parametrisation sought is no longer restricted to a subset of the variability, e.g. the kinematic pose of humans or body shape parameters, but is an intrinsically finer description of the body. Finally, the training data for this problem, yet to be produced, requires a particularly demanding definition and acquisition effort. The large data variability of 3D problems has motivated some initial efforts to produce fully synthetic training sets [39], where such variability can be partially scripted. Yet recent successful methods underscore the necessity of realistic data, both for the general applicability of the estimation and to keep the underlying network architecture simple, as devoid as possible of any domain-specific adaptations. In order to overcome these difficulties, we propose a non-parametric approach that employs a double depth map representation to encode the 3D shape of a person: a "visible" depth map capturing the observable human shape and a "hidden" depth map representing the occluded surface are estimated and combined to reconstruct the full human 3D shape. In this encoding of the 3D surface, the two depth maps can be seen as the two halves of a virtual "mould", see Figure 1. This representation allows a higher resolution output, potentially the same as the image input, with a much lower dimension than voxel-based volumetric representations, i.e. O(N²) vs O(N³). We designed an encoder-decoder architecture that takes as input a single image and simultaneously produces an estimate for both depth maps.
These depth maps are then combined to obtain a point cloud of the full 3D surface which can be readily reconstructed using Poisson reconstruction [19]. Importantly, our fully differentiable depth-based model allows us to efficiently incorporate a discriminator in an adversarial fashion to improve the accuracy and "humanness" of the 3D output, especially in the case of strong occlusions. See Figure 2. To train and quantitatively evaluate our network in near real-world conditions, we captured a large-scale dataset of textured 3D meshes that we augment with realistic backgrounds. To account for the large variability in human appearance, we took special care in capturing data with enough variability in movements, clothing and activities. Compared to parametric methods, our method can estimate detailed 3D human shapes including hair, clothing and manipulated objects. After reviewing the related work in Section 2, we present our two-fold contribution: our new non-parametric 3D surface estimation method is explained in Section 3, while our large-scale dataset of real humans with ground-truth 3D data is detailed in Section 4. Experiments are presented in Section 5 and conclusions drawn in Section 6.

Related Work
3D object from single images. Various representations have been adopted for 3D object shape estimation. Voxel-based representations [5] consist in representing the 3D shape as an occupancy map defined on a fixed-resolution voxel grid. Octree methods [36] improve the computability of volumetric results by reducing the memory requirements. Point clouds are another widely employed representation for 3D shapes. In [7], Fan et al. estimate sets of 1024 points from single images. Jiang et al. [15] build on this idea and incorporate a geometric adversarial loss (GAL) to improve the realism of the estimations. AtlasNet [10] directly estimates a collection of parametric surface elements to represent a 3D shape. Our representation combines two complementary depth maps aligned with the image, similar in spirit to the two halves of a "mould", and shares the resolution of the input image, capturing finer details while keeping the output dimensionality reasonable. Similarly to the work of Tatarchenko et al. [35] on reconstructing vehicle images from different viewpoints, we combine the estimation of several depth maps to obtain a 3D shape. For human shape estimation, however, we work on a deformable object. Also, we focus on the visible and hidden depth maps rather than any others because of their direct correspondence with the input image. Our two depth maps being aligned with the image, details as well as contextual image information are directly exploited by the skip connections to estimate the depth values. Multi-views [35] do not necessarily have pixel-to-pixel correspondences with the image, making depth prediction less straightforward.

3D human body shape from images. Most existing methods for body shape estimation from single images rely on a parametric model of the human body whose pose and shape parameters are optimized to match image evidence [2,11,20,29]. This optimization process is usually initialised with an estimate of the human pose supplied by the user [11] or automatically obtained through a detector [2,20,29] or inertial sensors [42]. Instead of optimizing mesh and skeleton parameters, recent approaches proposed to train neural networks that directly predict 3D shape and skeleton configurations given a monocular RGB video [37], multiple silhouettes [6] or a single image [18,25,28].
Recently, BodyNet [38] was proposed to infer the volumetric body shape through the generation of likelihoods on the 3D occupancy grid of a person from a single image. A large body of work exists to extract human representations from multiple input views or sensors, of which some recently use deep learning to extract 3D human representations [8,13,21]. While they intrinsically aren't designed to deal with monocular input as proposed, multi-view methods usually yield more complete and higher precision results as soon as several viewpoints are available, a useful feature we leverage for creating the 3D HUMANS dataset. More similar to ours are the methods that estimate projections of the human body: in [39], an encoder-decoder architecture predicts a quantized depth map of the human body, while in DensePose [12] a mapping is established between the image and the 3D surface. Our method also makes predictions aligned with the input image, but the combination of two complementary "visible" and "hidden" depth maps leads to the reconstruction of a full 3D volume. In [24], the authors complete the 3D point cloud built from the front-facing depth map of a person in a canonical pose by estimating a second depth map of the opposite viewpoint. We instead predict both depth maps simultaneously from a single RGB image and consider a much wider range of body poses and camera views. All these methods rely on a parametric 3D model [2,18,20] or on training data annotated [12] or synthesised [39] using such a model. These models of humans, built from thousands of scans of naked people, such as the SMPL model [23], lack realism in terms of appearance. We instead propose to tackle real-world situations, modeling and estimating the detailed 3D body shape including clothes, hair and manipulated objects.

3D human datasets. Current approaches for human 3D pose estimation are built on deep architectures trained and evaluated on large datasets acquired in controlled environments with motion capture systems [1,14,34]. However, while the typology of human poses in these datasets captures the space of human motions very well, the visual appearance of the corresponding images is not representative of the scenarios one may find in unconstrained real-world images. There has been a recent effort to generate in-the-wild data with ground-truth pose annotations [26,32]. All these datasets provide accurate 3D annotation for a small set of body keypoints and ignore the 3D surface, with the exception of [20] and [41], who annotate the SMPL parameters in real-world images manually or using IMUs. Although the resulting dataset can be employed to evaluate under-cloth 3D body shapes, its annotations are not detailed enough and, importantly, its size is not sufficient to train deep networks. To compensate for the lack of large-scale training data required to train CNNs, recent work has proposed to generate synthetic images of humans with associated ground-truth 3D data [4,31,39]. In particular, the SURREAL dataset [39], produced by animating and rendering the SMPL model [23] on real background images, has proven useful to train CNN architectures for body-part parsing and 2.5D depth prediction [39], 3D pose estimation [31,33], or 3D shape inference [38]. However, because it is based on the SMPL model, this dataset is not realistic in terms of clothing, hair or interactions with objects and cannot be used to train architectures that target the estimation of a detailed 3D human shape.
We propose to bridge this gap by leveraging multi-camera shape data capture techniques [3,40], introducing the first large-scale dataset of images showing humans in realistic scenes, i.e. wearing real clothes and manipulating real objects, dedicated to training with full 3D mesh and pose ground-truth data. Most similar to ours are the CMU Panoptic dataset [17], which focuses on social interactions, and the data of [45], which contains dense unstructured geometric motion data for several dressed subjects.

Methodology
In this section, we present our new non-parametric 3D human shape representation and detail the architecture that we designed to estimate such a 3D shape from a single image.

"Mould" representation
We propose to encode the 3D shape of a person through a double 2.5D depth map representation: a "visible" depth map that depicts the elements of the surface that are directly observable in the image, and a "hidden" depth map that characterises the occluded 3D surface. These two depth maps can be estimated and combined to reconstruct the complete human 3D shape as done when lining up the two halves of a "mould". See the example in Figure 2. Given a 3D mesh, obtained by animating a 3D human model or by reconstructing a real person from multiple views, and given a camera hypothesis, i.e. location and parameters, we define our two 2D depth maps $z_{vis}$ and $z_{hid}$ by ray-tracing. Specifically, we cast a ray from the camera origin in the direction of each image pixel location $(u,v)$ and find the closest intersecting point on the mesh surface:

$$z_{vis}[u,v] = \min_{k \in Ray(u,v)} \|p_k\|_2 \quad (1)$$

for the visible map, and the furthest one for the hidden map:

$$z_{hid}[u,v] = \max_{k \in Ray(u,v)} \|p_k\|_2, \quad (2)$$

where the 3D points $\{p_i\} = \{(p_{x,i}, p_{y,i}, p_{z,i})\}$ are expressed in the camera coordinate system and the L2-norm $\|\cdot\|_2$ is the distance to the camera center. $Ray(u,v)$ denotes the set of points $p_i$ on the ray passing through pixel $(u,v)$, obtained by hidden surface removal and visible surface determination. To be independent of the distance of the person to the camera, we center the depth values on the center of mass of the mesh, i.e. $z_{orig}$: $z_{vis}[u,v]' = z_{vis}[u,v] - z_{orig}$ for all $u,v$, and similarly for $z_{hid}[u,v]$. Since they are defined with respect to the same origin, the two depth maps $z_{vis}[u,v]$ and $z_{hid}[u,v]$ can be readily combined in 3D space by merging their respective 3D point clouds into a global one:

$$\{p_i\} = \{p_i\}_{vis} \cup \{p_i\}_{hid} \quad (3)$$

An example of such a point cloud is depicted in Figure 2, where points corresponding to $z_{vis}[u,v]$ and $z_{hid}[u,v]$ are respectively colored in red and blue. In practice, to keep the depth values within a reasonable range and estimate them more accurately, we place a flat background at a distance L behind the subject to define all pixel values in the depth maps in the range $[-z_{orig} \ldots L]$. Points $p_i$ of the point clouds are then selected as belonging to the human surface if $p_{z,i} \le L - \epsilon$. Like the volumetric representation through a voxel grid, our method also encodes 3D surfaces and point clouds of diverse sizes into a fixed-size representation, making a 3D surface easier to consider as a deep network target. However, in our case, we can work at the image resolution with a much lower output dimensionality, $O(N^2)$, than voxel-based volumetric representations, $O(N^3)$, N being the size of the bounding box framing the human in the input image.
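A minimal sketch of this mould construction (Eqs. 1-3), assuming the mesh is expressed in camera coordinates, the rays start at the camera center, and trimesh is used for the ray casting; this is an illustration, not the authors' implementation:

```python
import numpy as np
import trimesh

def mould_depth_maps(mesh, origins, dirs, H, W, L=1.5):
    # One ray per pixel; the visible map keeps the nearest hit (Eq. 1),
    # the hidden map the farthest (Eq. 2); misses get the background L.
    locs, ray_idx, _ = mesh.ray.intersects_location(origins, dirs)
    z_orig = np.linalg.norm(mesh.center_mass)       # centering depth
    depth = np.linalg.norm(locs, axis=1) - z_orig   # ||p_k||_2 - z_orig
    z_vis = np.full(H * W, L)
    z_hid = np.full(H * W, L)
    hit = np.zeros(H * W, dtype=bool)
    for r, d in zip(ray_idx, depth):
        if not hit[r]:
            z_vis[r] = z_hid[r] = d
            hit[r] = True
        else:
            z_vis[r] = min(z_vis[r], d)             # closest hit
            z_hid[r] = max(z_hid[r], d)             # farthest hit
    return z_vis.reshape(H, W), z_hid.reshape(H, W)
```

Merging the two resulting 3D point clouds, as in Eq. 3, then yields the full-body point cloud fed to the Poisson reconstruction.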
We numerically validated the benefit of our representation compared to a voxel grid approach by encoding a random set of 100 meshes (picked from our 3D HUMANS dataset presented in Section 4) at different resolutions and computing the 3D reconstruction error (average Chamfer Figure 3. Reconstruction error for voxel grid and our "mould" when augmenting the dimensionality D of the representation, D=N 3 for voxels grid and D=2N 2 for ours. distance) between ground-truth vertices and the resulting point clouds. This comparison is shown in Figure 3. The error obtained with our mould-representation decreases and converges to a minimum value that corresponds to surface details that cannot be correctly encoded even with high resolution depth maps, i.e. when some rays intersect more than twice with the human surface for particular poses. In practice, we show in Section 5 that this can be solved by employing a Poisson reconstruction to obtain a smooth 3D surface, including those areas. We can extrapolate from Figure 3 that voxel grids can reach perfect results with an infinity of voxels, but for manageable sizes, our representation allows to capture more details. Architecture We formulate the 3D shape estimation problem as a pixel-wise depth prediction task for both visible and hidden surfaces. Our framework builds on the stacked hourglass network proposed by Newell et al. [27] that consists of a sequence of modules shaped like an hourglass, each taking as input the prediction from the previous module. Each of these modules has a set of convolutional and pooling layers that process features down to a low resolution and then upsample them until reaching the final output resolution. This process, combined with intermediate supervision through skip connections, implicitly captures the entire context of the image. Originally introduced for the task of 2D pose estimation and employed later for part segmentation and depth prediction [39], this network is an appropriate choice as it predicts a dense pixel-wise output while capturing spatial relationships associated with the entire human body. We designed a 2-stack hourglass architecture that takes as input an RGB image I cropped around the human and outputs the 2 depths maps z vis and z hid aligned with I. We use a L L1 loss function defined on all pixels of both depth maps. The loss function to be minimized is thus the average distance between the ground truth z p and the estimation b z p : L L1 = 1 P P X p=1 |z p b z p |, (4) with P being the number of pixels in the batch and b z p the network output for pixel p, including pixels in both z vis [u, v] and z hid [u, v] maps. We also experimented with an L L2 loss but found that it overly penalizes outliers, i.e. pixels incorrectly assigned to background and vice versa, and therefore focuses only on that task. By using the L L1 norm, we force the network to not only segment the image correctly, i.e discriminate the subject from the background, but also provide an accurate estimation of the depth at each pixel. Adversarial training As observed with other non-parametric methods [38] but also with approaches relying on a model [18], our network can sometimes produce implausible shapes that do not look human, especially when a limb is entirely occluded by other parts of the body. To improve the accuracy and the "humanness" of our prediction, we follow an adversarial training procedure inspired by the Generative Adversarial Networks (GAN) [9]. 
Adversarial training
As observed with other non-parametric methods [38] but also with approaches relying on a model [18], our network can sometimes produce implausible shapes that do not look human, especially when a limb is entirely occluded by other parts of the body. To improve the accuracy and the "humanness" of our prediction, we follow an adversarial training procedure inspired by Generative Adversarial Networks (GAN) [9]. Our fully derivable depth-based model allows us to efficiently incorporate a discriminator in an adversarial fashion, i.e., the goal of the discriminator is to correctly distinguish ground-truth depth maps from generated ones. The generator objective, on the other hand, is two-fold: fitting the training set distribution through the minimization of the $L_{L1}$ loss (Equation 4) and tricking the discriminator into classifying the generated depth maps as ground-truth depth maps through the minimization of the $L_{GAN}$ loss:

$$L_{GAN}(G, D) = E_{I,z}[\log D(I, z)] + E_I[\log(1 - D(I, G(I)))]. \quad (5)$$

Our discriminator D is trained to maximize the $L_{GAN}$ loss by estimating 1 when provided with ground-truth depth maps z and estimating 0 when provided with generated depth maps G(I). In order to weigh the contribution of each loss, we use a factor $\lambda$, our full objective being modeled as a minimax game:

$$(G^*, D^*) = \arg\min_G \max_D \left( L_{GAN}(G, D) + \lambda L_{L1}(G) \right). \quad (6)$$

The $L_{L1}$ loss is used to learn the training set distribution by retrieving the low-frequency coefficients, while the $L_{GAN}$ loss entices the generator into predicting realistic and precise depth maps. It is important to note that the discriminator is only used to guide the generator during learning; it is not used at test time. The architecture employed as our discriminator is a 4-stack CNN. Each stack is composed of a convolutional layer (kernel size 3, stride 1), a group normalization layer (32 groups), a ReLU activation function and a 2x2 MaxPool operation. There are 64 channels for the first convolution and the number of channels is multiplied by 2 at each stack until reaching 512 for the 4th and last stack convolution. We then connect our 8x8x512 final feature map to 2 fully-connected layers of 1024 and 512 neurons and then to our final output neuron, on which we apply a binary cross-entropy loss. We jointly trained our generator and discriminator on 50,000 images for 40 epochs. Training is performed on batches of size 8 with the Adam optimizer. Given our small training batch size, we found the use of group norm [44] to be a good alternative to batch norm, which was producing training instabilities. The learning rate is kept constant at 1e-4 during the first 20 epochs and is then decreased linearly to zero during the following 20 epochs. In practice, since our $L_{L1}$ loss is much smaller than the $L_{GAN}$ loss, we multiply the $L_{L1}$ loss by a factor equal to 1e4. With this adversarial training, we observed that the results are sharper and more realistic. In cases of a deformed or missing limb, e.g. the legs in Figure 7 right, the use of a discriminator forces the generator to produce a better prediction.
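A minimal PyTorch sketch of this alternating objective (Eqs. 5-6). It assumes the discriminator D ends in a sigmoid, takes an (image, depth-map) pair, and uses the common non-saturating form for the generator's adversarial term (maximizing log D(I, G(I)) instead of minimizing log(1 - D(I, G(I)))); the weighting follows the factor 1e4 stated in the text:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, img, z_real, z_fake):
    # D maximizes Eq. (5): ground-truth maps -> 1, generated maps -> 0.
    real = D(img, z_real)
    fake = D(img, z_fake.detach())   # do not backprop into the generator
    return (F.binary_cross_entropy(real, torch.ones_like(real)) +
            F.binary_cross_entropy(fake, torch.zeros_like(fake)))

def generator_loss(D, img, z_fake, z_real, lam=1e4):
    # G minimizes the weighted sum of Eq. (6): fool D and stay close in L1.
    fake = D(img, z_fake)
    adv = F.binary_cross_entropy(fake, torch.ones_like(fake))
    return adv + lam * (z_fake - z_real).abs().mean()
```

At each iteration the two losses are minimized in turn, on the discriminator's and generator's parameters respectively.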
Dataset generation
We introduce 3D HUMANS (HUman Motion, Activities aNd Shape), a realistic large-scale dataset of humans in action with ground-truth 3D data (shape and pose). It consists of semi-synthetic videos with 3D pose and 3D body shape annotations, as well as detailed 3D surfaces including clothes and manipulated objects. First, we captured 3D meshes of humans in real-life situations using a multi-camera platform. We then rendered these models on real-world background scenes. See examples in Figure 4a.

Capture. We employed state-of-the-art 3D capture equipment with 68 color cameras to produce highly detailed shape and appearance information with 3D textured meshes. The meshes are reconstructed frame by frame independently; they are not temporally aligned and do not share any common topology. We divided the capture into 2 different subsets: in the first one, 13 subjects (6 male and 7 female) were captured with 4 different types of garments (bathing suit/tight clothing, short/skirt/dress, wide clothes and jacket) while performing basic movements, e.g., walk, run, bend, squat, knees-up, spinning. In the second subset, 6 subjects, 4 male and 2 female, were captured while performing 4 different activities (talking on the phone, taking pictures, cleaning a window, mopping the floor) in 2 different ways: standing/sitting for talking on the phone, standing/kneeling for taking pictures, etc. More than 150k meshes were reconstructed. The dataset was collected at Inria from consenting and informed participants.

Rendering. We rendered all our videos at a 320x240 resolution using a camera of sensor size 32mm and focal length 60mm. Our videos are 100 frames in length and start with the subject at the center of the frame. For the first frame of the sequence, the subject is positioned at a distance of 8 meters from the camera, with a standard deviation of 1 meter. We used the images of the LSUN dataset [46] for the backgrounds.

Annotations. We augment our dataset with ground-truth SMPL pose and body parameters. To do so, we use the Human3.6M [14] environment as a "virtual MoCap room": we render the 3D meshes for which we want to estimate the 3D pose within that environment, generate 4 views using camera parameters and background images from the dataset, and estimate the 2D/3D poses by running LCR-Net++, an off-the-shelf 3D pose detector particularly efficient on Human3.6M. An optimal 3D pose is then computed using multi-view 3D reconstruction and used as initialization to fit the SMPL model, estimating pose and shape parameters that better match each mesh. The SMPL model is fitted to the point clouds both for naked and dressed bodies. Keeping the body parameters fixed (obtained from fits in minimal clothing) resulted in a lower performance of the baseline when evaluated against ground-truth dressed bodies.

Experiments
We analyse quantitatively and compare our approach to the state of the art on two datasets. First, the SURREAL dataset [39], a synthetic dataset obtained by animating textured human models using MoCap data and rendering them on real background images, and second, our 3D HUMANS dataset introduced in this paper. While SURREAL covers a wider range of movements, since it has been rendered using thousands of sequences from [1], our data better covers shape details such as hair and clothing. In the following experiments, both training and test images are tightly cropped around the person using the subject's segmentation. The smallest dimension of the image is extended to obtain a square image that is then resized to 256x256 pixels to serve as input to our network. Performance is computed on both 128x128 output depth maps as the distance between each ground-truth foreground pixel and its corresponding pixel in the predicted depth map. The background depth L is set at 1.5m.

SURREAL
Recent methods [38,39] evaluate their performance on this dataset. First, we evaluated the performance of our architecture when estimating quantized depth values (19+1 for background) through classification as in [39] versus our proposed regression method: with a maximum distance to ground truth of 30mm, the quantity of pixels with a correct depth estimation increases by 5% when using regression instead of classification.
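This per-pixel accuracy at a tolerance is straightforward to compute. A minimal sketch, assuming depth maps in meters and a boolean foreground mask (the 30mm value matches the text; other thresholds like @50mm are used later):

```python
import numpy as np

def depth_accuracy(pred, gt, fg_mask, thresh=0.030):
    # Fraction of foreground pixels whose predicted depth lies within
    # `thresh` meters of the ground truth (e.g. @30mm).
    err = np.abs(pred[fg_mask] - gt[fg_mask])
    return float((err <= thresh).mean())
```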
Then, we compare in Figure 5a performance against the recent BodyNet voxel-grid-based architecture from [38], which also reported numerical performance on SURREAL. Although good 3D performances are reported in that paper, we can see that when evaluating in the image domain, i.e., comparing depth maps, the performance of BodyNet drops. Our method makes 3D estimations aligned with the image and better recovers details, outperforming BodyNet quite substantially. The GAN helps increase the "humanness" of the predictions.

3D HUMANS
We consider 14 subjects (8 male, 6 female) for training and the remaining 5 subjects (2 male, 3 female) for testing. An interesting aspect of synthetic datasets is that they offer an almost unlimited amount of training data. In our case, the data generation relies on a capture process with a non-negligible acquisition effort. It is therefore interesting to analyze how adding more training data impacts the performance. Our results in Figure 5b show that training our architecture on 50,000 images is sufficient and that using more training images does not improve the performance much. The appearance of our images being quite different from SURREAL data, we first compare the performance of our method when considering different training strategies: training on SURREAL, training on 3D HUMANS, or training on a mix of both datasets. In Figure 6a, we can see that the best performances are obtained when SURREAL images are not used.
We carried out a dedicated experiment where we artificially generated such occlusions in train/test images to quantify improvements. We obtain a 7% chamfer distance error drop with adversarial training and a clear qualitative improvement which we illustrate in Figure 7. We highlight the differences by showing an error heat-map over a Poisson reconstruction of the point cloud for better visualization. The quantitative gain is limited due to the network sometimes hallucinating plausible limbs far from groundtruth (red hand in the left Figure 7), resulting in higher error than a network without GAN that does not estimate any limb at all. This is because the metric does not evaluate the overall plausibility of the produced estimation. Generalisation In order to quantitatively measure its generalisation capability, we have evaluated our network on an additional dataset: instead of static background images, we have rendered the meshes in realistic 3D environments obtained on the internet (examples in Figure 4b). The results (Figure 6d) show that a mix training on both SURREAL and 3D HU-MANS is ideal for generalisation. We suspect that jointly rendering the subject and the 3D background at the same time creates a more realistic image where the subject is more complicated to segment, hence the need for more variability in the training data. We also generated qualitative results for LSP images [16], depicted in Figure 8, and for the DeepFashion dataset [22], shown in Figure 9 where we compare our approach with HMR [18] and BodyNet [38]. We can observe that our approach captures more details, including hair, shirt and the belly of the pregnant woman (up), hair, skirt and body pose (middle) and dress (bottom). [18] (left), Bodynet [38] (middle) and our method (right). Unlike [18,38], we do not train on in-the-wild images but estimate 3D shapes of clothed subjects. Conclusion We have proposed a new non-parametric approach to encode the 3D shape of a person through a double 2.5D depth map representation: a "visible" depth map depicts the elements of the surface that are directly observable in the image while a "hidden" depth map characterises the occluded 3D surface. We have designed an architecture that takes as input a single image and simultaneously produces an estimate for both depth maps resulting, once combined, in a point cloud of the full 3D surface. Our method can recover detailed surfaces while keeping the output to a reasonable size. This makes the learning stage more efficient. Our architecture can also efficiently incorporate a discriminator in an adversarial fashion to improve the accuracy and "humanness" of the output. To train and evaluate our network, we have captured a large-scale dataset of textured 3D meshes that we rendered on real background images. This dataset will be extended and released to spur further research. The main difference between our generator architecture and the stacked hourglass by Newell et al. [27] is the output dimension. Newell et al. estimate a 64x64 resolution heatmap for each body joint. In our case, we estimate 2 depth maps and aim at a higher 128x128 resolution. Our hourglass output dimension is 128x128x2. Because of this difference in output resolution, we apply the following modifications to the stacked hourglass [27] architecture: We do not use a maxpooling operation after layer1, we increase the depth of the hourglasses from 4 to 5 skipped connections, we project the hourglass result on 2 channels (one for each depth map). 
Also, we use 2 stacked hourglasses and we replace batch normalization by group normalization [44], which performs better on small training batches. See architecture details in Table 1.

Discriminator
For our discriminator, we employed a 4-stack CNN. It takes as input a set of 2 depth maps at resolution 128x128 and outputs a scalar: close to 1.0 if it believes they are sampled from the ground-truth depth maps and close to 0 if it believes they have been generated by the generator. See Table 2 for details.

Introduction
In this paper, we examine the problem of multi-view shape reconstruction of production-realistic performance capture sequences. Such sequences may contain arbitrary casual clothing and motions, and have specific capture set assumptions due to the particular lighting and camera positioning of these setups. Multi-view 3D reconstruction is a popular and mature field, with numerous applications involving the recording and replay of captured 3D scenes, such as 3D content creation for broadcast and mobile applications, or the increasingly popular virtual and augmented reality applications with 3D user avatars. An essential and still improvable aspect in this matter, in particular with performance capture setups, is the fidelity and quality of the recovered shapes, our goal in this work. Multi-view stereo (MVS) based methods have attained a good level of quality with pipelines that typically comprise feature extraction, matching stages and 3D shape extraction. Interestingly, very recent works have re-examined stereo and MVS by introducing features and similarity functions automatically inferred using deep learning. The main promise of this methodological shift is to include better data-driven priors, either in 2D [1,2,3,4] as an improvement over classic 2D features, or in 3D to account for relative view placement and local or global shape priors [5,6,7]. These novel MVS methods have been shown to outperform classic learning-free methods on static scene benchmarks [8]. Our main goal is to examine whether these data-driven improvements transfer to the more complex case of live performance capture, where a diverse set of additional difficulties arises with respect to typical MVS setups. Typical challenges in these capture situations include smaller visual projection areas of objects of interest, due to the wider fields of view necessary for capturing motion; occlusion and self-occlusion of several subjects interacting together; lack of texture content typical of real-life subject appearance and clothing; or motion blur with fast-moving subjects such as in sport action scenes (see Figure 14).

Fig. 1 Challenging scene captured with a passive RGB multi-camera setup [9]. (left) one input image, (center) reconstructions obtained with classical 2D features [10], (right) proposed solution. Our results validate the key improvement of a CNN-learned disparity for MVS in performance capture scenarios. Results particularly improve in noisy, very low contrast and low-textured regions such as the arm, the leg or even the black skirt folds, which can be better seen in a brightened version of the picture in Figure 17.

To the best of our knowledge, existing learning-based MVS schemes report results on static datasets such as DTU [11] or ShapeNet [12] but have not yet been demonstrated on performance capture data with the aforementioned typical issues.
We present a detailed framework for this purpose, which casts the problem as a fusion of per-view depth maps, as inspired by recent fusion methods [13], each depth map extracted using a learned multi-view photoconsistency function. Our approach performs multi-view matching within local volumetric units of inference. Contrary to previous methods, our volumetric unit is defined in a given view's own reference frame, so as to capture camera-inherent 3D dependencies, specifically for the purpose of per-view decisions. Instead of inferring occupancies, we infer disparity scores to ease training and to focus the method more on photometric configurations than on local shape patterns. We sweep viewing rays with this volumetric receptive field, a process we coin volume sweeping, and embed the algorithm in a multi-view depth-map extraction and fusion pipeline followed by a geometric surface reconstruction. With this strategy, we validate that CNN-based MVS outperforms classical MVS approaches in performance capture scenarios. In particular, we obtain high-precision geometric results on complex sequences, outperforming both existing CNN-based and classic non-learning methods on a large set of capture datasets. These diverse results are obtained using only a DTU subset as training data, which evidences the generalization capabilities of our network. This article is an extended version of [14] that provides a complete and self-contained description of the proposed method, with more details about the pipeline from [10] along with the detailed volume sweeping and surface extraction algorithms. Several supplementary experiments were performed to give more insight into the contributions and to study the influence of the parameters. We finally challenged the generalization properties of our network on multiple datasets that were not seen during training, with competitive results compared to both hand-crafted and learned state of the art.

Related Work
Multi-view stereo reconstruction is an active and long-standing vision problem [15]. Stereo and MVS-based approaches are increasingly being used for high-fidelity capture applications [16,17,18,19,11,20,21], possibly complementing other strategies such as depth-based reconstruction [13,22,23,24] by addressing shortcomings that include limited range, sensitivity to high-contrast lighting, and interference when increasing the number of viewpoints. While considering various shape representations, for instance point clouds [16], fused depth maps [25], meshes [26,27], or volumetric discretizations [28,29,30], most MVS methods infer 3D shape information by relying on the photoconsistency principle that rays observing the same scene point should convey similar photometric information. In its simplest form, such similarity can be measured by considering projected color variances among views, as used in early works [28], with limited robustness. In stereo and short-baseline situations, simple normalized forms of 2D window correlation are sufficient to characterize similarity under simple lighting and contrast changes, using e.g. ZNCC, SSD, SHD. For broader geometric and photometric resilience, various features based on scale-invariant gradient characterizations [31,32,33] have been designed, some specialized for the dense matching required by the MVS problem [34]. More recently, image features have been successfully applied to performance capture sequences, e.g. in [20,10].
Generally, MVS methods characterize photoconsistency either with a symmetric, viewpoint-agnostic combination of all pairwise similarities [35], or with a per-image depth map determination through sweeping strategies [36,25]. The latter sweeping approaches have the advantage of simplifying the scene parametrization of occlusions [37,38], which we leverage for our approach and show to yield a robustness advantage over other strategies in our experiments. While classic MVS approaches have been generally successful, recent works aimed at learning stereo photoconsistency have underlined that additional priors and more subtle variability co-dependencies are still discoverable in real-world data. Several works leverage this by learning how to match 2D patch pairs for short-baseline stereo, letting deep networks infer which features are relevant [1,2,3,4]. More recent works extend this principle to short-baseline MVS, with a symmetric combination of 2D learned features [39], or to wide-baseline sparse capture scenarios [40,41]. Most of these methods however use a 2D receptive field for stereo matching. The intuition that volumetric 3D receptive fields may be more informative and ease CNN inference has been explored by some recent works [5,6,7,8], an assertion that the presented approach further verifies. While casting correlations in 3D as well, our approach proposes several key differences. Contrary to the latter, our volumetric receptive field is projective in the camera coordinate frame, similar to some binocular stereo [42] or image-based rendering [43] works. This allows for sweeping along viewing rays, which was proven to be a robust search strategy for binocular stereo plane sweeping [38]. It also enables a per-frame approach, with depth estimations, that appears to be more flexible than global reasoning over all frames. This scheme also avoids decorrelating camera resolution and 3D receptive field resolution, as happens with e.g. voxels, the volumetric receptive field being defined as a back-projection along pixel rays. Additionally, this volumetric receptive field learns local pairwise correlations, a lower-level and easier task than learning occupancy grid patterns. Our evaluation substantiates the aforementioned robustness benefits in a number of qualitative (7.3) and quantitative experiments (7.2) with challenging dynamic capture datasets, showing in particular the improvements over 2D receptive fields (7.1).

Method Overview
Our main objective is to study multi-view photoconsistency within the context of multi-view stereo reconstructions. We consider for that purpose the reconstruction framework, largely adopted over the last decade, that consists in first estimating per-camera depth maps, followed by depth fusion and surface extraction. This framework allows reasoning at the pixel level, enabling therefore each camera to provide local details on the observed surface with local estimations. This is in contrast with global strategies that consider photoconsistency at the shape level, with for instance voxels as in [6]. Comparisons between strategies are provided in the experiments section (see section 7.2). Regarding depth map estimation, we propose to replace the traditional hand-crafted photoconsistency measures used to estimate depths with a learned version. This version is based on CNNs and exploits their ability to learn local photometric configurations near surfaces observed from multiple viewpoints.
As depicted in Figure 2, our approach takes as input a set of calibrated images and outputs a 3D mesh obtained by fusing depth maps. Depths along pixel viewing rays are obtained using a volume sweeping strategy that samples multi-view photoconsistency along rays and identifies the maxima. For a depth point candidate along a viewing ray, the photoconsistency is estimated using a discretized 3D volumetric projective grid centered on that point. In such a 3D grid, color inputs from the primary camera are paired with color inputs from another camera at each volume element of the grid around the depth point candidate. For a given depth candidate, we collect all such paired color volume grids for every camera other than the primary one. A trained CNN is used to recognize photoconsistent configurations given pairs of color samples within the 3D grid. The key aspects of this strategy are:

- The per-camera approach, which, by construction, samples the photoconsistency at a given location as captured, and thus enables more local details to be revealed compared to a global approach, as shown in Figure 17.
- The 3D receptive field for the photoconsistency evaluation, which resolves some 2D projection ambiguities that hindered 2D-based strategies.
- The learning-based strategy using a convolutional neural network, which outperforms traditional photometric features when evaluating the photoconsistency in dynamic captured scenes, as demonstrated by our experiments.

The following sections focus on our main contributions, namely the 3D volume sampling in Section 4.1 and the learning-based approach in Sections 4.2 and 4.3 for the photoconsistency evaluation. We then describe in Section 5 our depth map evaluation procedure, derived from a winner-takes-all strategy suitable to our capture scenario. These depth maps are then fused into an implicit form, from which, without loss of generality, we extract the zero-level set using the surface extraction technique described in Section 6.

4 Learning Photoconsistency

Our reconstruction approach takes as input N images {I_i}_{i=1}^N, along with their projection operators {π_i}_{i=1}^N, and computes depth maps for the input images, which are subsequently fused into a 3D implicit form. This section explains how these maps are estimated. Given a pixel p in an input image I_i, the problem is therefore to find the depth d at which its viewing ray intersects the observed surface. The point along the ray of pixel p at depth d is noted r_i(p, d). Our approach searches along viewing rays using a likelihood function for a point to be on the surface given the input color pairs in the evaluation volume. In contrast to traditional methods that consider handcrafted photoconsistency measures, we learn this function from multi-view datasets with ground-truth surfaces. To this purpose we build a convolutional neural network which, given a reference camera i and a query point x ∈ R^3, maps a local volume of color pair samples around x to a scalar photoconsistency score ρ_i(x) ∈ [0..1]. The photoconsistency score accounts in practice for color information from camera i at native resolution, and for other camera colors in addition to their relative orientations as implicitly encoded in the volume color pair construction. These important features allow our method to adapt to specific ray incidences.
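The notation r_i(p, d) can be grounded with standard pinhole calibration; the sketch below assumes the convention x_cam = R x_world + t and measures d as the Euclidean distance from the camera center along the ray, which is one common choice rather than a detail fixed by the text.

    import numpy as np

    def ray_point(K, R, t, pixel, depth):
        # r_i(p, d): the 3D point at distance `depth` from the camera center
        # along the viewing ray of `pixel`, with camera center c = -R^T t.
        c = -R.T @ t
        ph = np.array([pixel[0], pixel[1], 1.0])       # homogeneous pixel
        d_world = R.T @ (np.linalg.inv(K) @ ph)        # ray direction, world frame
        d_world /= np.linalg.norm(d_world)             # unit direction
        return c + depth * d_world

    # Toy check with identity calibration: the ray through the principal point.
    K, R, t = np.eye(3), np.eye(3), np.zeros(3)
    print(ray_point(K, R, t, (0.0, 0.0), 2.5))         # [0. 0. 2.5]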
Its voluntarily asymmetric nature also allows subsequent inferences to automatically build visibility decisions, e.g. deciding for occlusion when the primary camera i's color is not confirmed by other views' colors. This would not have been possible with a symmetric photoconsistency function such as [39]. We thus cast the photoconsistency estimation as a binary classification problem from these color pairs around the location x, with respect to the reference image I_i and the other images. In the following, we first provide details about the 3D sampling regions before describing the CNN architecture used for the classification and its training. We then explain the volume sweeping strategy that is subsequently applied to find depths along rays.

4.1 Volume Sampling

In order to estimate photoconsistency along a viewing ray, a 3D sampling region is moved along that ray at regular distances. Within this region, pairs of colors back-projected from the images are sampled. Each pair contains a color from the reference image I_i and its corresponding color in another image I_j. Samples within the 3D region are taken at regular depths along viewing rays in the reference image (see Figure 3). The corresponding volume is a truncated pyramid that projects onto a 2D region of constant and given pixel dimension in the reference image. This allows the 3D sampling to adapt to the camera properties, e.g. pixel resolution and focal length. More precisely, we denote r_i(p, d) the 3D location at depth d along the viewing line back-projected from pixel p in the reference image I_i. Volume sampling is always performed with the same orientation and ordering with respect to the reference camera. Convolutions are thus consistently oriented with respect to the camera depth direction.

Volume Size

In practice, we choose k = 8. Our strategy is to learn pairwise photoconsistent configurations along rays. This way, decisions for the surface presence are conditioned on the observation viewpoints, which implicitly enforces visibility rules since only one 3D point per ray can be detected. This is in contrast to more global strategies where such per-viewpoint visibility is less easy to impose, as with regular voxel grids, e.g. [6] with 32^3 or 64^3 grids. In addition, by considering the surface detection problem alone, and letting the subsequent fusion step integrate depths in a robust and consistent way, we simplify the problem and require little spatial coherence, hence allowing for small grids. We provide a more detailed study of the performance of the classifiers with various depth values in Section 7.1.
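The following sketch makes the sampling concrete: it builds one k^3 grid of RGB pairs on k × k subrays of the reference camera, at k regular depths around a candidate point. The pixel half-width and depth step values, the nearest-neighbor color lookup, and the (K, R, t) camera convention are illustrative assumptions, not the paper's exact settings.

    import numpy as np

    def project(K, R, t, X):
        # Pinhole projection of a world point X (x_cam = R X + t).
        x = K @ (R @ X + t)
        return x[:2] / x[2]

    def fetch(img, uv):
        # Nearest-neighbor color lookup with border clamping.
        h, w = img.shape[:2]
        u = int(np.clip(round(uv[0]), 0, w - 1))
        v = int(np.clip(round(uv[1]), 0, h - 1))
        return img[v, u]

    def color_pair_volume(img_i, cam_i, img_j, cam_j, pixel, depth,
                          k=8, half_width_px=4.0, depth_step=0.005):
        # Build the k x k x k x 6 grid of RGB pairs around r_i(pixel, depth):
        # samples lie on k x k subrays of the reference camera i (a truncated
        # pyramid of constant pixel footprint) at k regular depths.
        K_i, R_i, t_i = cam_i
        c_i = -R_i.T @ t_i
        vol = np.zeros((k, k, k, 6), dtype=np.float32)
        offsets = np.linspace(-half_width_px, half_width_px, k)
        depths = depth + depth_step * (np.arange(k) - (k - 1) / 2.0)
        for ia, du in enumerate(offsets):
            for ib, dv in enumerate(offsets):
                ph = np.array([pixel[0] + du, pixel[1] + dv, 1.0])
                d_w = R_i.T @ (np.linalg.inv(K_i) @ ph)   # subray direction
                d_w /= np.linalg.norm(d_w)
                for ic, dd in enumerate(depths):
                    X = c_i + dd * d_w                     # 3D sample point
                    vol[ia, ib, ic, :3] = fetch(img_i, project(*cam_i, X))
                    vol[ia, ib, ic, 3:] = fetch(img_j, project(*cam_j, X))
        return vol

    # Toy example with two nearly identical cameras and a random image.
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3))
    cam_a = (np.eye(3), np.eye(3), np.zeros(3))
    cam_b = (np.eye(3), np.eye(3), np.array([0.05, 0.0, 0.0]))
    print(color_pair_volume(img, cam_a, img, cam_b, (32.0, 32.0), 1.0).shape)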
4.2 Multi-View Neural Network

As explained in the previous section, at a given point x along a viewing ray from image I_i we can build N − 1 color volumes with pairs of views (I_i, I_{j≠i}). Each volume is composed of k^3 cells with pairs of RGB values. In order to detect whether the surface is going through x, we use siamese encoders similar in spirit to [39], with however 3D volumes instead of 2D patches. Each encoder considers as input a pairwise color volume and provides a feature. Features from all color volumes at x are then averaged and fed into a final decision layer. Weight sharing and averaging are chosen to achieve camera order invariance. The network is depicted in Figure 4. The inputs are the N − 1 color volumes of size k^3 × 6, where RGB pairs are concatenated at each sample within the volume. Convolutions are performed in 3D over the 6-valued vectors of RGB pairs. The first layers (encoders) of the network process every volume in parallel, with shared weights. Every encoder is a sequence of two convolutions followed by non-linearities, and max-pooling with stride. The two convolutional layers consist of respectively 16 and 32 filters of kernel 4 × 4 × 4, followed by a Rectified Linear Unit (ReLU) and a max-pooling with kernel 2 × 2 × 2 and stride 2. We then average the obtained 2 × 2 × 2 × 32 features and feed the result to a 128-filter 1 × 1 × 1 convolutional layer, followed by a ReLU and a final 1 × 1 × 1 decision layer, for a total of 72K parameters. The network provides a score ρ_i(r_i(p, d)) ∈ [0..1] for the photoconsistency at depth d along the ray from pixel p in image I_i. We experimented with this network using different configurations. In particular, instead of averaging pairwise comparison features, we tried max-pooling, which did not yield better results. Compared to the volumetric solution proposed by [6], the number of parameters is an order of magnitude smaller. As mentioned earlier, we believe that photoconsistency is a local property that requires less spatial coherence than shape properties.

4.3 Network Training

The network was implemented using TensorFlow [44] and trained from scratch using the DTU Robot Image Dataset [11], which provides multi-view data equipped with ground-truth surfaces that present an accuracy of 0.5mm. From this dataset 11 million k^3 color volumes were generated, from which we randomly chose 80 percent for training, and the remaining part for evaluation. Both positive and negative samples were equally generated by randomly sampling volumes up to 20cm away from ground-truth points, where a volume is considered as positive when it contains at least µ ground-truth points. In theory, the network could be trained with any number of camera pairs; in practice, however, we randomly choose from one up to 40 pairs. Training was performed with the binary cross-entropy function as loss. Model weights are optimized by performing Stochastic Gradient Descent, using Adaptive Moment Estimation [45] over 560,000 iterations with a batch size of 50 comparisons, and with a random number of compared cameras (from 2 up to 40). Since our sampling grids are relatively small and camera dependent, we are able to generate enough sample variability for training, without the need for data augmentation.
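A Keras sketch of the described architecture is given below: siamese 3D encoders with shared weights (16 and 32 filters of kernel 4 × 4 × 4, each followed by a ReLU and a 2 × 2 × 2 max-pooling of stride 2), feature averaging across camera pairs, a 128-filter 1 × 1 × 1 layer and a sigmoid decision layer. The padding choice and the final reduction to a scalar are our assumptions, so the parameter count will not match the reported 72K exactly.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_network(n_pairs, k=8):
        # N-1 pairwise color volumes of shape k^3 x 6 go through a shared
        # (siamese) 3D encoder; features are averaged over pairs, then passed
        # to 1x1x1 decision layers producing a sigmoid score in [0, 1].
        inputs = layers.Input(shape=(n_pairs, k, k, k, 6))
        encoder = models.Sequential([
            layers.Conv3D(16, 4, padding="same", activation="relu"),
            layers.MaxPool3D(pool_size=2, strides=2),
            layers.Conv3D(32, 4, padding="same", activation="relu"),
            layers.MaxPool3D(pool_size=2, strides=2),
        ])
        feats = layers.TimeDistributed(encoder)(inputs)  # (batch, n_pairs, 2, 2, 2, 32)
        avg = layers.Lambda(lambda x: tf.reduce_mean(x, axis=1))(feats)
        x = layers.Conv3D(128, 1, activation="relu")(avg)
        x = layers.Conv3D(1, 1, activation="sigmoid")(x)
        score = layers.GlobalAveragePooling3D()(x)       # scalar score in [0, 1]
        return models.Model(inputs, score)

    # Binary cross-entropy loss and Adam, as in the training described above.
    model = build_network(n_pairs=4)
    model.compile(optimizer="adam", loss="binary_crossentropy")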
5 Depth Estimation

As previously noted, our main motivation is to reconstruct live dynamic scenes, typically humans in motion. In such cases, it is advantageous to focus on the foreground objects in the observed scene rather than modeling the full scene. To this purpose, we limit the search domain for depths along viewing rays to a region defined by image silhouettes. In the following we explain how such a region is defined, and we then detail the volume sweeping we adopt to identify image depths.

5.1 Confidence Volume

We assume we are given a set of N images {I_i}_{i=1}^N observed with a set C of calibrated cameras with known projections {π_i}_{i=1}^N and centers {c_i}_{i=1}^N. We assume we are also given a set of silhouettes {Ω_i}_{i=1}^N, often available in multi-view scenarios by performing image segmentation, for instance background subtraction with constrained capture environments. The silhouettes are generally imprecise, as a result of multiple causes, including color ambiguities, that plague the segmentation. However, the redundant information they provide over several viewpoints can be used to restrict the search domains along viewing rays to segments that are likely to intersect the object surface. To this purpose we define the confidence volume V as:

V = {x ∈ R^3 : ∃_{>α} i (π_i(x) ∈ I_i) ∧ ∃_{>β} i (π_i(x) ∈ Ω_i)},   (1)

the locus of points in R^3 which project into more than α images and more than β silhouettes to which they belong. When α = β = i, V is simply the visual hull obtained with the i images that see the point. α and β are two user-defined constants: α restricts weakly supported depth predictions, and predictions away from the exact visual hull are enabled when β < α. Intuitively, V is a dilated version of the visual hull in the space region seen by at least α images, as shown in Figure 5. As explained in the following section, the intersection of a viewing ray with V defines the starting point of the depth search interval along that ray.

5.2 Volume Sweeping

In order to estimate pixel depths, the sampling volume introduced in Section 4.1 is swept along their viewing rays while computing multi-view photoconsistency using the network detailed in Section 4.2. For every camera, we therefore sample along viewing rays, test possible depth values, and choose the best candidate with respect to the network score. In practice, a reference view I_i is only compared to the other views I_j such that cos(θ_ij) > 0.5, where θ_ij is the angle between the optical axes of cameras i and j. Then, we sample rays from camera i through every pixel p and build colored volumes at every candidate depth, starting at the intersection with the confidence volume introduced in the previous section. Once the probability of surface presence is computed for every candidate, we define the estimated depth d_i as:

d_i = argmax_{d ∈ [d_min, d_max]} ρ_i(r_i(p, d)),   (2)

where ρ_i(r_i(p, d)) is the consistency measure along the ray from p in image I_i, as estimated by the network. [d_min, d_max] is the search range with: d_min = d_V(p), the intersection of the viewing ray at p with the confidence volume; and d_max such that the search is stopped when the accumulated photoconsistency score reaches a given value ρ_max, in a winner-takes-all surface detection strategy:

∫_{d_min}^{d_max} ρ_i(r_i(p, x)) dx ≤ ρ_max.   (3)

Depths for all pixels and from all images are further fused using a truncated signed distance function (TSDF) [46]. The following section explains how we define and extract the zero level-set of the TSDF.

6 Surface Extraction

We explained in the previous section how to compute depth maps for every viewpoint. We now have to fuse them into an implicit form, namely the TSDF [46], from which we can extract the zero-level set that corresponds to the reconstructed surface, which appears in black in Figure 6. Contrary to previous works [47,24,13,22], we do not store TSDF values in a regular voxel grid but rather devise a simple yet efficient sampling procedure derived from Voronoï tessellation strategies, which specifically accommodates multi-view capture scenarios. It is worth mentioning that other works such as [27] also make use of irregular sampling strategies for MVS, but in a volumetric graph-cut framework.

Implicit Form Definition

For a point x ∈ R^3, the truncated signed distance TD(x) ∈ R to the surface is defined as the weighted average of all camera contributions F_i(x), i ∈ C:

TD(x) = Σ_{i∈C_x} ρ'_i(x) F_i(x) / Σ_{i∈C_x} ρ'_i(x),

with per-camera contributions truncated at µ:

F_i(x) = min(µ, η(x)) if η(x) ≥ −µ, and F_i(x) = ∅ otherwise, where η(x) = d_i(π_i(x)) − ‖c_i − x‖,

and where C_x = {i ∈ C : F_i(x) ≠ ∅} and ρ'_i(x) is the photoconsistency measure of the estimated depth along the ray from camera i passing through x. If d_i is undefined at x, e.g. x is outside the camera visibility domain, then camera i does not contribute to the TSDF. When no camera contributes at x but x is inside the confidence volume V, then it is considered as inside, i.e. TD(x) < 0. Note that contributions are weighted by the normalized photoconsistency measure, which means that when cameras disagree about the photoconsistency at x, cameras with higher measures have an increased impact, whereas cameras with low detection probability measures only marginally impact the reconstruction.
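The implicit form above can be evaluated directly from the per-view outputs, as in this sketch; the depth and score map layout, the (K, R, t) convention and the truncation value are illustrative assumptions.

    import numpy as np

    def tsdf_value(x, views, mu=0.02):
        # Evaluate TD(x) from per-view estimates. `views` holds tuples
        # (depth_map, score_map, (K, R, t)): depth_map stores the estimated
        # distance d_i along each pixel ray, score_map the weight rho'_i.
        # mu is the truncation band; all conventions here are illustrative.
        num, den = 0.0, 0.0
        for depth_map, score_map, (K, R, t) in views:
            c = -R.T @ t                         # camera center
            x_cam = R @ x + t
            if x_cam[2] <= 0:                    # behind the camera
                continue
            uv = (K @ x_cam)[:2] / x_cam[2]      # pi_i(x)
            h, w = depth_map.shape
            u, v = int(round(uv[0])), int(round(uv[1]))
            if not (0 <= u < w and 0 <= v < h):  # outside the image
                continue
            d_i = depth_map[v, u]
            if not np.isfinite(d_i):             # no depth estimated on this ray
                continue
            eta = d_i - np.linalg.norm(c - x)    # eta(x) = d_i(pi_i(x)) - ||c_i - x||
            if eta < -mu:                        # F_i undefined: too far behind
                continue
            num += score_map[v, u] * min(mu, eta)
            den += score_map[v, u]
        return num / den if den > 0 else None    # None: no contributing camera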
Extraction Procedure

From the previously defined TSDF, we extract the surface using a sampling strategy based on ray casting and Voronoï tessellation. Figure 6 provides a 2D example of the main steps of the algorithm, which are as follows:

1. (orange) Sample points inside the implicit form defined by the TSDF. This is achieved by randomly selecting pixels in all images and computing the point, along each pixel ray, inside but close to the surface according to the TSDF. The process is iterated until a user-defined number of 3D points is reached.

2. (blue) Determine the Voronoï diagram: given the points inside the shape surface, a Voronoï diagram of this set of points is computed.

3. (green) Clip the Voronoï diagram with the zero level set of the TSDF. This operation extracts the intersection of the Voronoï cells with the surface to form an oriented mesh.

In the above strategy, sampling points close to the surface, and originating from image viewpoints, ensures that the 3D discretization is denser on the surface than inside the volume, and also denser on surface regions observed by more images. The latter enables more precision to be given to surface regions for which more image observations are available.

Fig. 6 Our surface extraction procedure. The zero-level set of the implicit form (black) is observed by different cameras (red). They are used to provide the inside samples (orange) that serve as centroids for the Voronoï tessellation. This tessellation is finally clipped at the zero-level set and the final surface (green) can be extracted.

We visualize in Figure 7 an example of extracted surface. We show 2 of the 40 input views in the top row, and our reconstruction in the middle. The bottom side of the bust is never seen by any camera. We show in the bottom row the difference in sampling resulting from the observation of the shape. The horizontal bottom side of the model is never observed, yet is still correctly reconstructed. On the other hand, the triangles of the mesh in that area are much larger than the ones in the vertical upper part, which is observed by more cameras. This strategy allows for complete reconstructions of captured shapes with an adaptive sampling density depending on the observations of the object, focusing more samples in the regions where the details can be recovered.
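The first two extraction steps can be sketched with a generic TSDF as follows; the ray bisection for step 1 and the use of SciPy's Voronoi for step 2 are illustrative choices, and the geometric cell clipping of step 3 is only indicated.

    import numpy as np
    from scipy.spatial import Voronoi

    def inside_samples(tsdf, rays, t_range=(0.0, 2.0), steps=256, band=1e-3):
        # Step 1: for each ray (origin o, direction d), find a point just
        # inside the surface (TD < 0) but close to the zero level set.
        # `tsdf` maps a 3D point to a signed value (negative inside); the
        # first sign change along the ray is refined by bisection.
        pts = []
        ts = np.linspace(*t_range, steps)
        for o, d in rays:
            vals = np.array([tsdf(o + t * d) for t in ts])
            change = np.where((vals[:-1] > 0) & (vals[1:] <= 0))[0]
            if len(change) == 0:
                continue
            lo, hi = ts[change[0]], ts[change[0] + 1]
            for _ in range(30):                  # bisection refinement
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if tsdf(o + mid * d) > 0 else (lo, mid)
            pts.append(o + (hi + band) * d)      # nudge just inside
        return np.asarray(pts)

    # Toy example: unit-sphere TSDF, rays shot from a shell toward the origin.
    tsdf = lambda x: np.linalg.norm(x) - 1.0     # negative inside the sphere
    rng = np.random.default_rng(1)
    dirs = rng.normal(size=(200, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    seeds = inside_samples(tsdf, [(1.5 * d, -d) for d in dirs])
    vor = Voronoi(seeds)                         # step 2: Voronoi diagram
    # Step 3 (clipping each cell with the zero level set) is omitted here.
    print(len(seeds), "inside samples,", len(vor.vertices), "Voronoi vertices")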
Runtime

The full pipeline allows us to reconstruct one time frame, i.e. 68 images (2048 × 2048), in approximately 30 to 40 minutes using two NVIDIA Titan X GPUs, depending on the number of pixels that observe the shapes.

7 Experiments

Our main goals in this section are (i) to evaluate whether and how our learned photoconsistency contributes with respect to existing methods and (ii) to verify whether these benefits transfer in practice to the more complex case of generic 3D capture scenes, e.g. humans with complex clothing. To this aim, we perform various evaluations to verify and quantify the benefit of our learned multi-view similarity. We start by providing multiple validation experiments to justify the choices for the learning and reconstruction strategies in 7.1. Second, for comparison purposes, we apply in 7.2 our depth estimation approach in the static case using the [11] benchmark and compare it to state-of-the-art MVS methods, both handcrafted and learning based. We make use of the standard accuracy and completeness metrics, both averaged and median, for which the evaluation code is provided by the authors. Finally, we build experiments to test the main claim of improvement with production capture data in 7.3. To this goal we use several dynamic sequences captured on different platforms, which exhibit typical difficulties of such data. In particular, we mainly focus on the Kinovis acquisition platform [9], which consists of 68 RGB cameras of resolution 2048 × 2048, with focal lengths varying from 8mm to 25mm.

We achieve very significant qualitative improvements compared to state-of-the-art approaches, both learning-based [6,8] and handcrafted [10], without fine-tuning and despite the difference of capture setup used for training. We also compare to [23] on an example provided by the authors and achieve slightly better quality using only half the available information.

7.1 Validation

We previously formulated the problem of surface detection along viewing rays as a binary classification problem, as explained in Section 4. In order to assess the benefit of our volumetric strategy, we first focus on the performance of different classifiers. We provide in 7.1.1 receptive field comparisons on the training dataset, to highlight the advantage of casting and learning correlations in 3D. Additionally, Section 7.1.2 provides a study of the depth hyperparameter of the receptive field of our network. Then, since preliminary results of [14] seemed to show a better robustness to a larger baseline, we design an experiment with cameras that are further apart to better quantify this improvement in Section 7.1.3. We finally provide in 7.1.4 an ablation study of the accumulator described in (3) to validate its importance in the depth estimation procedure, in the performance capture scenario. Section 7.1.2 shows that a volume size of 8 × 8 × 8 is a preferred trade-off; it will thus be used from now on, when not specified.

7.1.1 Classifiers Study

In this paragraph, we compare the performance of different classifiers based on various receptive fields:

1. Zero-Mean Normalized Cross-Correlation (ZNCC): ZNCC is applied over the samples within the volumetric support region.
2. Learning (CNN) with a planar support: a planar equivalent of our volumetric solution, with the same architecture and number of weights, in a fronto-facing plane sweeping fashion.
3. Learning (CNN) with a volumetric support: our solution described in the previous sections.

Figure 8 shows, with the classifiers' ROC curves, that the most accurate results are obtained with a volumetric support and learning. Intuitively, a volumetric sampling region better accounts for the local non-planar geometry of the surface than planar sampling regions. This graph also emphasizes the significantly higher discriminative ability of learned correlations compared to deterministic ones.

7.1.2 Volume Sampling

To further demonstrate this, we then proceed to a study of the impact of the depth parameter of the sampling volume. While keeping an 8 × 8 pixel reprojection on the images, we study the performance of classifiers with receptive fields varying in depth. Figure 9 shows classifier performance with depth values ranging from 1 to 12. To perform this experiment, we had to diminish the networks' number of parameters to fit the 12-depth training in memory and keep reasonable training and testing times, which explains the worse performance compared to the previous ROC curves. This experiment demonstrates that the more information the network gathers along the ray, the better the detection of the surface. We choose a depth of 8 as it gives the best trade-off between computational complexity and performance.
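The operating points circled on the ROC curves maximize sensitivity + specificity; for completeness, this criterion can be computed as below from binary labels and classifier scores (a generic sketch, with synthetic data standing in for real volume classifications).

    import numpy as np
    from sklearn.metrics import roc_curve, auc

    def best_operating_point(labels, scores):
        # Pick the ROC threshold maximizing sensitivity + specificity
        # (Youden's J), the criterion used for the circled operating points.
        fpr, tpr, thr = roc_curve(labels, scores)
        j = tpr + (1.0 - fpr)            # sensitivity + specificity
        k = int(np.argmax(j))
        return thr[k], tpr[k], 1.0 - fpr[k], auc(fpr, tpr)

    # Toy example with synthetic scores for positive/negative volumes.
    rng = np.random.default_rng(0)
    labels = np.concatenate([np.ones(500), np.zeros(500)])
    scores = np.concatenate([rng.normal(0.7, 0.2, 500), rng.normal(0.4, 0.2, 500)])
    t, sens, spec, a = best_operating_point(labels, scores)
    print(f"threshold={t:.2f} sensitivity={sens:.2f} specificity={spec:.2f} AUC={a:.2f}")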
7.1.3 Baseline Study

We now evaluate the robustness to various baselines by accounting for a higher number of cameras and more distant cameras in the classification. Table 1 shows the accuracy of the classifiers with a varying number of cameras and for the optimal threshold values in Figure 8.

Table 1 Classifier accuracy (%).

Camera #      5       20      49
ZNCC          64.98   65.46   65.58
Ours Plan.    80.67   77.87   75.92
Ours Vol.     82.95   84.84   83.45

As already noticed in the literature, e.g. [16,19], a planar receptive field gives better results with a narrow baseline, and the accuracy consistently decreases when the inter-camera space grows with additional cameras. In contrast, the classifier based on a volumetric support exhibits more robustness to the variety in the camera baselines. This appears to be an advantage with large multi-camera setups, as it enables more cameras to contribute and hence reduces occlusion issues.

To push this experiment further, we design an experiment to test the robustness of our approach on a sparse capture platform, with lower scene coverage and wider baseline. Since no ground truth exists for this kind of performance capture scenario, we simulate it using a realistic rendering engine to create a synthetic dataset. Similar to [9] in terms of camera parameters and capture volume, we chose to render only 10 randomly placed cameras, evenly distributed on a hemisphere around the capture volume. The average spacing between a camera and its 10 closest neighbors is 8.03m in this case, where it is 2.5m for the 68-POV Kinovis platform and 0.188m in the 49-POV DTU case. For this experiment, we set the neighboring camera acceptance threshold cos(θ_ij) to 0.1, meaning that we accept almost orthogonal cameras. The synthetic cameras render the scene using Filmic Blender [48], a photorealistic configuration for Blender's Cycles ray-tracing engine. The images are generated with random parameters, i.e. the camera parameters vary in terms of position, orientation, focal length, and per-pixel number of samples, the latter directly affecting sensor noise. With this platform, we rendered a dozen models such as procedurally generated geometric shapes, real-life reconstructions or CAD models with various appearances. The multi-view networks are trained from scratch on these synthetic examples, and evaluated on unseen synthetic data. Figure 10 shows an example of our synthetic platform as well as the generated synthetic data.

We show in Figure 11 the impact of a volumetric support: when the baseline between the cameras becomes extreme, it offers more robustness compared to a planar support, which appears very slanted in the compared view. Even though it is only a synthetic dataset, we believe that it gives interesting insights on the versatility of our volume sweeping strategy for the performance capture scenario. A qualitative result of this improved robustness is shown in Figure 12. The area of the face is highly occluded, and the volumetric support helps recover a smoother surface. Also note the details of the belt: the volume allows a sharp reconstruction of finer details, where a plane cannot handle finer geometry.

7.1.4 Accumulation Term

We now provide a qualitative experiment, in Figure 13, to justify the use of the accumulation term of equation (3). This figure demonstrates the importance of the accumulation scheme in the performance capture scenario. The noisy photoconsistency in this case leads to many false positives, creating extreme holes in the reconstructions when the accumulation scheme is not used, i.e. ρ_max → ∞. The addition of this term (ρ_max = 1.6) allows for smooth and faithful reconstructions, still containing most of the important geometric details.
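The sweeping rule ablated here combines the argmax of equation (2) with the stopping criterion of equation (3); a minimal sketch, with an illustrative synthetic photoconsistency profile, reads as follows.

    import numpy as np

    def sweep_depth(rho, depths, rho_max=1.6):
        # Winner-takes-all depth search with the accumulation stopping rule:
        # sweeping stops once the integral of rho reaches rho_max (Eq. 3) and
        # the argmax seen so far is returned (Eq. 2). rho_max -> infinity
        # disables the accumulation term.
        best_d, best_rho, acc = None, -np.inf, 0.0
        step = depths[1] - depths[0]             # assume regular sampling
        for d in depths:
            r = rho(d)
            if r > best_rho:
                best_d, best_rho = d, r
            acc += r * step                      # accumulated score
            if acc >= rho_max:                   # early stop: surface passed
                break
        return best_d, best_rho

    # Toy ray: a strong response at d=0.42, a weaker spurious one at d=0.9.
    rho = lambda d: (np.exp(-((d - 0.42) / 0.02) ** 2)
                     + 0.8 * np.exp(-((d - 0.9) / 0.02) ** 2))
    d, r = sweep_depth(rho, np.linspace(0.0, 1.2, 600))
    print(f"estimated depth: {d:.3f} (score {r:.2f})")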
7.2 Quantitative Comparisons

In this section, we compare our solution to various state-of-the-art methods using the DTU Robot Image Dataset [11]. We use the standard accuracy and completeness metrics to quantify the quality of the estimated surface, as described in [49]: we define the accuracy for a point of the reconstructed shape as the smallest Euclidean distance to the ground truth, and the completeness of a point of the ground truth as the smallest Euclidean distance to the reconstructed shape. For both metrics, we compare the average and median values over all the points of the shapes. To diminish the impact of far outliers in the metrics, we make use of the default thresholding parameter of [49].

We compare to Furukawa et al. [16], Campbell et al. [50] and Tola et al. [34], which are well-known handcrafted strategies, as well as to additional learning-based results from Ji et al. [6] and Hartmann et al. [39]. To conduct a fair comparison with [39], which is a patch-based approach building a depth map with a network comparable to ours, we use the result of our volume sweeping approach on the same depth map. When performing reconstructions on the DTU dataset, we did not use the accumulation scheme of (3), i.e. ρ_max → ∞. To speed up computations, we limit the search along a viewing ray to 5mm around a coarse depth estimation based on image descriptors [51]. Depths are sampled every 0.5mm. As a post-processing step, we simply add a soft bilateral filter, similarly to [39], accounting for color, spatial neighborhood, and probability of the detection. Reconstruction results are depicted in Table 2.

Table 2 Reconstruction accuracy and completeness (mm).

                        Accuracy        Completeness
Measure                 Mean    Med.    Mean    Med.
Tola et al. [49]        0.448   0.205   0.754   0.425
Furukawa et al. [16]    0.678   0.325   0.597   0.375
Campbell et al. [50]    1.286   0.532   0.279   0.155
Ji et al. [6]           0.530   0.260   0.892   0.254
Ours (fused)            0.490   0.220   0.532   0.296

We achieve quality on par with the best performing methods on this dataset, with a median accuracy and completeness in the range of the ground-truth accuracy, which we measured around 0.5mm. It should be noticed that the best accuracy is obtained by Tola et al. [49], which tends to favor accuracy against completeness, whereas Campbell et al. [50], in a symmetric manner, tend to favor completeness against accuracy. We obtain more balanced results on the two criteria, similarly to the widely used approach by Furukawa et al. [16], with however better performances. We also outperform the recent learning-based method SurfaceNet [6] on most measures in this experiment. Compared to Hartmann et al. [39], and under similar experimental conditions, our approach gives better results with two orders of magnitude fewer parameters, thereby confirming the benefit of volumetric supports over planar ones. Compared to SurfaceNet [6] (cube size 64 × 64 × 64, sample step 0.4mm), we obtain reconstructions of slightly better quality with an order of magnitude fewer parameters.
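The accuracy and completeness metrics used above reduce to nearest-neighbor distances between point sets; a minimal sketch with SciPy follows, where the outlier threshold is an illustrative stand-in for the benchmark's default parameter.

    import numpy as np
    from scipy.spatial import cKDTree

    def accuracy_completeness(recon, gt, max_dist=20.0):
        # Accuracy: distance from each reconstructed point to the ground
        # truth; completeness: distance from each ground-truth point to the
        # reconstruction. Distances above `max_dist` are dropped, mirroring
        # the outlier thresholding of the benchmark scripts.
        d_acc = cKDTree(gt).query(recon)[0]
        d_comp = cKDTree(recon).query(gt)[0]
        d_acc = d_acc[d_acc < max_dist]
        d_comp = d_comp[d_comp < max_dist]
        stats = lambda d: (float(np.mean(d)), float(np.median(d)))
        return {"accuracy": stats(d_acc), "completeness": stats(d_comp)}

    # Toy example: a noisy resampling of the same point set.
    rng = np.random.default_rng(0)
    gt = rng.uniform(0, 100, size=(2000, 3))
    recon = gt + rng.normal(0, 0.5, size=gt.shape)
    print(accuracy_completeness(recon, gt))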
7.3 Qualitative Evaluation and Generalization

One of our main goals is to verify whether a learning-based strategy generalizes to real-life dynamic data and how it compares to state-of-the-art approaches in this case. To this purpose, we focus our qualitative evaluation on two different dynamic capture datasets, both drastically different from the training one. We first perform, in Section 7.3.1, reconstructions of dynamic RGB sequences captured by the Kinovis platform [9]. We then test, in Section 7.3.2, our reconstruction method on a different real-life dynamic dataset, captured with the active setup of [23], and compare to their results. It is important to note that the network previously trained on the DTU dataset [11] was kept as such, without any fine-tuning, at all times in this section.

7.3.1 Kinovis Data

We first focus on data captured by [9], that is, a hemispherical setup with 68 cameras of various focal lengths. In this scenario, standard MVS assumptions are often violated, e.g. wide baseline, specular surfaces, motion blur and occlusions, therefore challenging the reconstruction methods. A video demonstrating our results and providing comparisons on dynamic sequences is available online: https://hal.archives-ouvertes.fr/hal-01849286.

Most general-purpose MVS methods we tested tend to fail in the performance capture scenario, either providing incomplete or low-resolution results, or being extremely noisy. Figure 16 illustrates the reconstruction obtained using COLMAP [52], a handcrafted general-purpose MVS pipeline based on Patch-Match Stereo [53]. Both methods perform overall correctly, as seen on the left side of the figure. However, [52] (top-right) struggles to recover fine-grained details while keeping the noise and artifact level low, contrary to our approach (bottom-right). This result demonstrates the benefit of a dedicated method in the context of performance capture.

In order to assess the performance of our learned photoconsistency term, we compare in Figure 1 to [10], a patch-based sweeping method using traditional image features and specifically designed for this scenario. Both methods share a significant part of their pipeline, except for the photoconsistency evaluation, thus providing good insights about the benefits of the proposed learned term. Even though [10] performs well in contrasted regions, the patch-based descriptors reach their limits in image regions with low contrast or low resolution. Figures 15 and 14 give such examples. They show that our strategy helps recover finer surface details, while strongly decreasing noise in low-contrast regions. The results obtained also demonstrate strong improvements in surface details, such as dress folds, that were undetected by the deterministic approach. In addition, they demonstrate lower levels of noise, particularly in self-occluded regions, and more robustness to motion blur, as with the toes or tongue-in-cheek details that appear in Figure 14-bottom.

We then compare to the recent learning-based approach [6] using the code available online (see Figure 17). Reconstructions with this approach were limited to a tight bounding box and different values for the volume sampling step were tested. The best results were obtained with a 2mm step. To conduct a fair comparison with our method, all points falling outside the visual hull were removed from the reconstruction. In this scenario, the point cloud obtained using [6] appeared to be very noisy and incomplete (see Figure 17-middle), plaguing the subsequent surface extraction step. Figure 17-left also shows a horizontal section of the model in a poorly contrasted image region of the dress. The global strategy used in [6] wrongly reconstructs many surface points inside the shape volume (top figure), as a result of the ambiguous appearance of the dress. In contrast, our approach (bottom figure) correctly identifies surface points by maximizing learned correlations along viewing rays.

In addition to this, we also compare to results of [8] provided by the authors in Figure 18. This method outputs a rather dense colored point cloud but, similarly to results from [6], extracting a smooth surface from this point cloud remains a difficult task due to strong noise and missing data. Since the method uses custom and undocumented calibration parameters, it was not straightforward to remove points lying outside the visual hull. Moreover, the precision of the point cloud from [8] restricts its usage for performance capture and realistic reconstruction rendering. Figure 19 provides a close-up of the face of a subject.
The level of detail of the result from [8] is not fine enough to correctly capture facial details, compared to the density of our output surface.

7.3.2 Active Capture Platform

Finally, we compare our reconstructions of a scene captured with the active system of [23] to the results of this method. This setup consists of 52 RGB cameras mounted as stereo pairs. It also differs from the previous dynamic capture scenario, as it features an active system projecting random infrared dots on the shape. 52 infrared cameras, also paired on stereo rigs, then capture the reprojected spots on the shape, resulting in highly contrasted images that help disambiguate the photoconsistency computation, especially in textureless regions, without interfering with the visible appearance of the subject. In Figure 20, we compare to results provided by the authors. While [23] makes use of all the data available, we restrict our method to work with RGB images only. On the other hand, we allow cameras that are far apart to participate in the computation of the photoconsistency. Our results demonstrate the quality of our method, showing detailed reconstructions competitive with the results of [23] even though we only use the passive system, i.e. half of the available information. Figure 21 displays a close-up of the face of the subject. Our method allows us to recover high-frequency facial details, such as the shape of the nostrils or the lip commissures, thus providing highly faithful reconstructions.

8 Conclusion and Future Works

We presented a learning framework for surface reconstruction in passive multi-view scenarios. Our solution consists of an N-view volume sweeping, trained on static scenes from a small-scale dataset equipped with ground truth. Thanks to this new model, we validate the improvement of CNN-learned MVS photoconsistency in the case of complex and dynamic performance capture, with significant challenges typical of these datasets such as low-light areas, low texture content and low perceived resolution. This result is achieved with an order of magnitude fewer training parameters than previous comparable learned MVS works, showing significant network generalization from a training performed only on static DTU inputs, fully leveraging the high-quality ground truth now available with these datasets. Thanks to our local strategy, our method achieves significantly improved detail recovery and noise reduction in complex real-life scenarios, outperforming all existing approaches in this case.

The discretization of the volume around a query point involves a lot of redundancy and is a computationally expensive step for both training and inference. Moreover, even when optimized to process several neighboring depths in parallel, it remains rather memory inefficient. A possible future work could be to find a continuous representation for colored rays crossing the volume of interest, which could be used to infer surface presence probability in a similar manner with a much lighter computational cost. Finally, we believe our approach is a first step towards a data-driven method to unify shape-from-silhouette and multi-view stereo inference, as made possible by the wide-baseline robustness and general volumetric receptive field of our network, with the prospect of increased automation and quality.
Figure captions:

Fig. 2 Method overview. Depth maps, for all input images I_i, are obtained by maximizing, along viewing lines, a learned function that measures photoconsistency at a given depth d along the viewing line of a given pixel p. Depth maps are then fused into an implicit form from which the zero-set surface is extracted.

Fig. 3 The 3D volume used to estimate photoconsistency along rays from the reference image I_i. k^3 samples within the volume are regularly distributed along viewing rays and contain color pairs as back-projected from images I_i and I_j. At a given depth along a ray from I_i, any image I_{j≠i} can define such a pairwise comparison volume.

Fig. 4 CNN architecture. Each cube is a pairwise comparison volume with k^3 samples that contain 6-valued vectors of RGB pairs and over which 3D convolutions are applied. The output score ρ_i(r_i(p, d)) ∈ [0..1] encodes the photoconsistency measure at depth d along the ray from pixel p in image I_i.

Fig. 5 Left: the confidence volume with α = β = 54, equivalent to the visual hull with the 54 cameras that see the subject; right: the confidence volume with α = β = 10.

Fig. 7 Two points of view of a synthetic model (top) and the result of our reconstruction (middle). A close-up of the extracted surface (bottom) at the limit between well-observed and unseen regions. The top part of the close-up is seen by many cameras whereas the bottom part is never observed.

Fig. 8 ROC curves of three different classifiers, ZNCC, planar and volumetric supports, on the DTU dataset [11]. Circles represent thresholds that optimize sensitivity + specificity with the values 0.2, 0.5 and 0.5 respectively.

Fig. 9 ROC curves of four different classifiers using 8 × 8 receptive fields with various depths. Circles represent thresholds that optimize sensitivity + specificity.

Fig. 10 An example of sparse synthetic performance capture data generation. (top) Top and side view of the 10 cameras positioned around a surface. (bottom) Four examples of generated points of view.

Fig. 11 ROC curves of two different classifiers using planar and volumetric receptive fields, on the sparse synthetic data. Circles represent thresholds that optimize sensitivity + specificity.

Fig. 12 (left) 3 input images, (middle) plane-based classifier, (right) volumetric classifier. The face is highly occluded in many views (left), making the reconstruction noisy and inaccurate when using a planar support, whereas the volumetric counterpart yields smoother and more accurate details.

Fig. 13 (top row) Input images of captured subjects. (middle row) Reconstructions without probability accumulation along rays. (bottom row) Results with accumulation.

Fig. 14 (top) Input images, (middle) result with [10], (bottom) result with our method. Motion blur and low contrast are visible in the input images.

Fig. 15 Close-up view of the arm region in Figure 1. (left) Results from [10], (right) our reconstruction.

Fig. 17 Qualitative comparison with [6]. (left) Input image with the horizontal section in red, (middle) point cloud with [6], (right-top) point cloud horizontal section with [6], (right-bottom) point cloud horizontal section with our approach.

Fig. 18 (top) Results provided by [8] on the kick 540 sequence. (middle) Poisson reconstruction of the output point cloud of [8]. (bottom) Our result.

Fig. 19 Point cloud density comparison between results provided by [8] (left) and our output (right).

Fig. 20 Two points of view of a subject from [23] (left). (middle) Reconstruction provided by the authors. (right) Results using our learning strategy.

Fig. 21 Close-up of the face of the subject from [23] (left). The reconstruction provided by the authors (middle) is very smooth compared to our result (right).
A.4 ESTIMATION OF HUMAN BODY SHAPE IN MOTION WITH WIDE CLOTHING
Jinlong Yang, Jean-Sébastien Franco, Franck Hétroy-Wheeler, Stefanie Wuhrer. European Conference on Computer Vision 2016, Oct 2016, Amsterdam, Netherlands. hal-01348837, https://hal.inria.fr/hal-01348837 (submitted on 7 Oct 2016).
Texture alignment loop (pseudocode): with error e_0 initialized, while e_i < e_{i-1} do:
- compute alignment warps {w_k}, k ∈ [1..N], such that A_ref ≈ I_k(x + w_k);
- align texture maps: A_k = I_k(x + w_k);
- update the alignment error: e_i = Σ_k ||A_k − A_ref||²;
- set A_ref as the texture that gives the medoid of the aligned textures.
TABLE 1 Average silhouette overlap error in pixels, 4 sequences (Crane, Jumping, Bouncing, Handstand) at low frame rate; image resolution 1920×1080; methods compared: surICP [2] and ours; extracted values: 7746.40, 8295.58, 16759.29, 9148.94, 6847.72, 9279.57, 9400.76, 11690.61.
Huang received the MSc degree in computer and communication engineering from National Cheng-Kung University in 2010. After one year in Academia Sinica as a research assistant, he started his doctoral study at Technische Universität München in 2012 and obtained the Ph.D. degree in 2016. His research interests include 2D/3D conversion, human motion capture and other 3D-vision related topics. He received a Studying Abroad Scholarship from the Taiwan government and the best paper award runner-up at 3DV'13.
Benjamin Allain received the MSc degree in computer science from Ensimag - Grenoble INP, France, in 2012. He then joined the Morpheo group at Inria Grenoble Rhône-Alpes and obtained his PhD degree in computer science from Université Grenoble Alpes in 2017. Since November 2016, he has been working as a research scientist at Smart Me Up, France. His research interests include non-rigid motion tracking of 3D shapes from multiview video sequences.
Edmond Boyer is a senior research scientist at INRIA, where he leads the MORPHEO research team. He obtained his PhD from the Institut National Polytechnique de Lorraine in 1996. He was a research assistant at Cambridge University in 1998 before joining INRIA. His fields of competence cover computer vision, computational geometry and virtual reality. He has done pioneering work in the area of geometric 3D reconstruction with a focus on objects with complex shapes, like humans. Edmond Boyer is co-founder of 4D View Solutions (http://www.4dviews.com/), one of the leading companies worldwide specialized in multi-view acquisition and processing. His current research interests are 3D dynamic modeling from images and videos, motion perception and analysis from videos.
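The medoid texture-alignment loop quoted above can be read concretely in the following sketch. The warp estimator and warp application are placeholder assumptions (any dense registration method could stand in), and textures are plain arrays.

```python
import numpy as np

def medoid_texture(textures, estimate_warp, apply_warp, max_iters=20):
    """Iteratively align texture maps and keep the medoid as reference.

    textures      : list of N texture maps I_k (2D arrays)
    estimate_warp : (I_k, A_ref) -> warp w_k such that A_ref ~ I_k(x + w_k)
    apply_warp    : (I_k, w_k) -> warped texture A_k
    """
    A_ref = textures[0]                      # arbitrary initialization
    prev_err = np.inf
    for _ in range(max_iters):
        warps = [estimate_warp(I_k, A_ref) for I_k in textures]
        aligned = [apply_warp(I_k, w_k) for I_k, w_k in zip(textures, warps)]
        err = sum(np.sum((A_k - A_ref) ** 2) for A_k in aligned)
        if err >= prev_err:                  # stop when the error no longer decreases
            break
        prev_err = err
        # Medoid: the aligned texture minimizing total distance to the others.
        dists = [sum(np.sum((A - B) ** 2) for B in aligned) for A in aligned]
        A_ref = aligned[int(np.argmin(dists))]
    return A_ref
```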
Jean-Sébastien Franco is assistant professor of computer science at the Ensimag (School of Computer Science and Applied Mathematics, Grenoble Universities), and a researcher at the Inria Grenoble Rhône-Alpes and LJK lab, France, with the Morpheo team since 2010. He obtained his Ph.D. from the Institut National Polytechnique de Grenoble in 2005 with the Inria MOVI / Perception team. He started his professional career as a postdoctoral research assistant at the University of North Carolina's Computer Vision Group in 2006, and as assistant professor at the University of Bordeaux, with the IPARLA team, INRIA Bordeaux Sud-Ouest. His expertise is in the field of computer vision, with several internationally recognized contributions to dynamic 3D modeling from multiple views and 3D interaction.
Ilic is currently senior key expert research scientist at Siemens Corporate Technology in Munich, Perlach. He is also a visiting researcher and lecturer at the Computer Science Department of TUM and works closely with the CAMP Chair. From 2009 until the end of 2013 he led the Computer Vision Group of CAMP at TUM, and before that he was a senior researcher at Deutsche Telekom Laboratories in Berlin. In 2005 he obtained his PhD at EPFL in Switzerland under the supervision of Pascal Fua. His research interests include 3D reconstruction, deformable surface modelling and tracking, real-time object detection and tracking, human pose estimation and semantic segmentation.
A.7 MOULDING HUMANS: NON-PARAMETRIC 3D HUMAN SHAPE ESTIMATION FROM SINGLE IMAGES
Valentin Gabeur (1,2), Jean-Sébastien Franco (1), Xavier Martin (1), Cordelia Schmid (1,2), Grégory Rogez. ICCV 2019 - International Conference on Computer Vision, Oct 2019, Seoul, South Korea. pp. 1-10.
Table 1 Generator architecture.
Layer | Layer type | Output shape
Input | Input | 256x256x3
Conv1 | Conv 7x7 stride=2, GroupNorm, Relu | 128x128x64
Layer1 | Residual module expanded | 128x128x128
Layer2 | Residual module expanded | 128x128x256
Layer3 | Residual module | 128x128x256
Hg1 | Hourglass, skipped connections = 5 | 128x128x2
Hg2 | Hourglass, skipped connections = 5 | 128x128x2
Table 2 Discriminator architecture.
Layer | Layer type | Output shape
Input | Input | 128x128x2
Conv1 | Conv 3x3 stride=1, GroupNorm, Relu | 128x128x64
MP1 | MaxPool 2x2 | 64x64x64
Conv2 | Conv 3x3 stride=1, GroupNorm, Relu | 64x64x128
MP2 | MaxPool 2x2 | 32x32x128
Conv3 | Conv 3x3 stride=1, GroupNorm, Relu | 32x32x256
MP3 | MaxPool 2x2 | 16x16x256
Conv4 | Conv 3x3 stride=1, GroupNorm, Relu | 16x16x512
MP4 | MaxPool 2x2 | 8x8x512
FC1 | Fully connected layer | 1024
FC2 | Fully connected layer | 512
FC3 | Fully connected layer | 1
Table 1 Classifier accuracy (%).
Camera # | 5 | 20 | 49
ZNCC | 64.98 | 65.46 | 65.58
Ours Plan. | 80.67 | 77.87 | 75.92
Ours Vol. | 82.95 | 84.84 | 83.45
Table 2 Reconstruction accuracy and completeness.
Measure | Acc. Mean | Acc. Med. | Compl. Mean | Compl. Med.
Tola et al. [49] | 0.448 | 0.205 | 0.754 | 0.425
Furukawa et al. [16] | 0.678 | 0.325 | 0.597 | 0.375
Campbell et al. [50] | 1.286 | 0.532 | 0.279 | 0.155
Ji et al. [6] | 0.530 | 0.260 | 0.892 | 0.254
Ours (fused) | 0.490 | 0.220 | 0.532 | 0.296
Hartmann et al. [39] | 1.563 | 0.496 | 1.540 | 0.710
Ours (depthmap) | 0.599 | 0.272 | 1.037 | 0.387
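The discriminator in Table 2 of the Moulding Humans section is concrete enough to transcribe layer by layer. The PyTorch sketch below mirrors it; the number of GroupNorm groups (8 here) and padding=1 (needed to keep the stated spatial sizes) are assumptions, since the table does not specify them, and the fully connected layers are transcribed without intermediate activations, as in the table.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out, groups=8):
    # Conv 3x3 stride=1 (padding=1 keeps H x W), GroupNorm, ReLU.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1),
        nn.GroupNorm(groups, c_out),
        nn.ReLU(inplace=True),
    )

discriminator = nn.Sequential(
    conv_block(2, 64),    nn.MaxPool2d(2),   # 128x128x2 -> 64x64x64
    conv_block(64, 128),  nn.MaxPool2d(2),   # -> 32x32x128
    conv_block(128, 256), nn.MaxPool2d(2),   # -> 16x16x256
    conv_block(256, 512), nn.MaxPool2d(2),   # -> 8x8x512
    nn.Flatten(),                            # -> 32768
    nn.Linear(512 * 8 * 8, 1024),            # FC1
    nn.Linear(1024, 512),                    # FC2
    nn.Linear(512, 1),                       # FC3
)

print(discriminator(torch.zeros(1, 2, 128, 128)).shape)  # torch.Size([1, 1])
```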
https://hal.archives-ouvertes.fr/hal-01567758/file/1361-supp.mp4
Due to the chronology of this work, at the time we investigated the alignment methods described in this chapter, we were using mostly polyhedral visual hulls obtained with algorithms from my thesis (Franco, Exact polyhedral visual hulls; Efficient polyhedral modeling from silhouettes), as these were fast, rugged, and implemented with the platform software suite we were using with Kinovis; but this doesn't alter the essence of this discussion.
https://hal.inria.fr/hal-01016981/file/Allain_ECCV2014_On_Mean_Pose_and_Variabiliy.mp4
https://hal.inria.fr/hal-00977755/file/CVPR2014-HR-3D-Shape-Texture-from-Videos.mp4
https://hal.inria.fr/hal-01348837/file/EigenAppearance.mp4
http://hal.inria.fr/hal-01016981
The benchmark can be downloaded at http://dressedhuman.gforge.inria.fr/.
https://kinovis.inria.fr
Acknowledgement. This work was funded by the Seventh Framework Programme EU project RE@CT (grant agreement no. 288369).
Acknowledgements. This work was funded by the Seventh Framework Programme EU project RE@CT (grant agreement no. 288369). It was also supported in part by the INRIA-JSPS Bilateral Program AYAME 146121400001 and the JSPS WAKATE B 26730089.
Acknowledgements. This work was funded by the Seventh Framework Programme EU project RE@CT (grant agreement no. 288369). We thank Li Wang and Franck Hetroy for providing the CVT computation code.
Acknowledgments. Funded by France National Research grant ANR-14-CE24-0030 ACHMOV. We thank Yannick Marion for help with code to efficiently fit an S-SCAPE model to a single frame, Leonid Pishchulin for helpful discussions, Alexandros Neophytou and Adrian Hilton for providing comparison data, and Mickaël Heudre, Julien Pansiot and volunteer subjects for help acquiring the database. The database was acquired using the Kinovis platform: http://kinovis.inrialpes.fr/.
ACKNOWLEDGMENTS. Several datasets proposed in this paper have been acquired using the Kinovis platform at Inria Grenoble Rhône-Alpes (http://kinovis.inrialpes.fr).
Acknowledgements. We thank Pau De Jorge and Jinlong Yang for their help in capturing the data employed in this paper. The dataset was acquired using the Kinovis platform (https://kinovis.inria.fr). This work was supported in part by ERC advanced grant Allegro.
Acknowledgements. This work was conducted at INRIA Grenoble. Funded by France National Research grant ANR-14-CE24-0030 ACHMOV. Images 1-2-15-17 of Anja Rubik courtesy of Ezra Petronio and Self Service Magazine. Geometric model in figure 7 courtesy of 3DScanStore [54].
... dedicated to the high-definition acquisition of subjects in motion (68 4-megapixel cameras, 100 m² studio, 17 twelve-core PC cluster).
- Participant and scientific coordinator for the IPARLA team, for the ANR-DALIA project (French Ministry of Research Project Grant), from September 1st, 2007 to July 2010. 50% participation. Telepresence and collaborative 3D interaction, in collaboration with the MOAIS and Perception teams (INRIA Rhône-Alpes, France) and PRV (LIFO lab, Orléans, France).
- RNTL OCETRE, FP6-IST STREP HOLONICS (European Commission), ACI Jeune Chercheur Cyber I and ACI Masse de Données Cyber II (mixed-reality projects) at INRIA Rhône-Alpes, France.
3. Video available at https://hal.inria.fr/hal-01141207
4. https://hal.inria.fr/hal-01141207/file/Allain_CVPR2015_Volumetric_Tracking.avi
List of publications:
- 45 international peer-reviewed publications, among which:
- 37 international conferences (with 20 ICCV/CVPR/ECCV, high-impact computer vision conferences)
- 7 international peer-reviewed journals (PAMI, IJCV, TVCG, IJMDB)
- 1 book chapter
- 3 international workshops & demos (VRST, AMDO)
- 5 French peer-reviewed conferences (RFIA)
- 3 research reports
04117856
en
[ "sdv" ]
2024/03/04 16:41:26
2022
https://hal.science/hal-04117856/file/poster_alice_michel_FHU_RESPIRE.pdf
Microbiosthme project: dynamics of the upper and lower respiratory tract microbiomes associated with severe infant asthma
A. Michel, C. Hassel, H. Petat, M. Leoz, Meriadeg Le Gouil, Alice Moisan, O. Join-Lambert, A. Vabret, C. Marguet, J.C. Plantier
Normandie Univ, UNIROUEN, UNICAEN, Inserm UMR 1311 DYNAMICURE; CHU de Rouen, Service de Microbiologie, 76000 Rouen; CHU de Caen, Service de Microbiologie Clinique, 14000 Caen, France. Contact: [email protected]
First congress of the FHU RESPIRE, 8-9 November 2022, Forges-les-Eaux, France. hal-04117856
EXPECTED RESULTS:
- Describe possible changes in the composition of the respiratory microbiomes over time in the context of severe asthma, in relation to the clinical data collected.
- Determine whether a link can be established between the "microbiomic", clinical and inflammatory profiles.
ABSTRACT: The human respiratory tract is colonized by micro-organisms that make up the respiratory microbiota. This microbiota plays a major role in protecting the host against the onset of microbial infections. The dynamics of this microbiota are governed by breathing and by the immune system. An imbalance of these microbial communities appears to be involved in the onset of respiratory diseases such as asthma. This condition, which affects about 25% of infants, is characterized by varied clinical and biological profiles, and its pathophysiology suggests an alteration of the microbiota associated with inflammation. The objective of the Microbiosthme project is to determine the overall composition of the respiratory microbiome of infants with severe asthma, to follow its evolution over time and to assess the involvement of the immune system. The cohort will consist of 100 patients followed at the university hospitals (CHU) of Rouen and Caen, whose samples will allow the respiratory microbiomes to be characterized through a metatranscriptomic approach. The potential contribution of immune activation will also be evaluated. The results of this original project will open the way to better patient care and to the discovery of new therapeutic targets.
CONCLUSIONS: The original "Microbiosthme" project will establish the nature of potential dysbioses, the interrelation of the different microbiomes of the respiratory tract, and the potential role of the immune system in the pathogenesis of severe asthma in infants. The exploration and understanding of these data aim at better long-term patient care and the discovery of new therapeutic targets.
Study question: does the composition of the respiratory microbiota constitute a biomarker of severe asthma in infants?
MATERIAL AND METHODS: 100 infants with severe asthma, followed at the CHU of Rouen and Caen; inclusion, then sampling at the 3rd, 6th, 9th and 12th month.
00411787
en
[ "shs.socio" ]
2024/03/04 16:41:26
2009
https://shs.hal.science/halshs-00411787/file/Gutierrez_paper_Warwick_English.pdf
Andriei Gutierrez ([email protected]), Chantal Darsch, Maria Rosa Lombardi, Patrícia Trópia, Marcia Nori, Rafael Machiaverni, Eduardo Guerra, Marcos Paulo
Engineers and Capitalism in Brazil (1)
Based on a partial analysis of an ongoing survey, we have managed to observe that the changes introduced into the Brazilian social and economic environment since 1990 have had a fairly strong impact on the engineers of that country. Both the transformations of the productive structure and the change in the type of state intervention (tending towards neo-liberalism) have played an important role in redefining the professional group of engineers and in the formation of collective identities. This article first presents the way in which the literature analyses these two transformations and their effects on the reproduction of the professional group of engineers. We then present the first results of a survey carried out amongst Brazilian engineers in 2009. We show (1) the general data constituting the professional group, (2) some developing trends amongst engineers and (3) the various structural cleavages which influence the formation of engineers' opinions towards State policy.
1. This research is directed by Paul Bouffartigue (France) and Armando Boito Jr. (Brazil). It was partly funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Capes-Brazil).
INTRODUCTION
Brazilian capitalism has undergone profound transformations since 1990. In the productive sector, new management tools and new information technologies have profoundly altered the traditional mechanisms which defined the professional situations of a great many engineers. From the macro-social viewpoint, the neo-liberal break with the former type of state intervention in economic and social development has been a determining factor for the working conditions and organisation of this professional group on the employment market. This paper considers two elements as part of a more general reflection about the relationship between the effects of these changes on the professional group and how that group reacts within such a context. We attempt to address two areas: political science and the sociology of professional groups. (2) To do this, the research was carried out in several stages. This first paper attempts to look at how engineers think about State policy, especially with regard to industrial policy. We particularly wish to know the ways in which the professional group of engineers is homogeneous or heterogeneous regarding this subject. We are interested in trying to see whether their opinions about State policy have chain reactions on the composition, maintenance and evolution of the professional group of engineers. Also, from a different angle, we want to observe whether the professional group of engineers represents a social force in the political sphere (a pressure group influencing national policy). Within this framework, we undertook an online survey of engineers in Brazil, entitled "Radiografia dos Engenheiros no Brasil". This article corresponds to the analysis of the first 69 responses to the questionnaire, recorded between June and July 2009. (3) Considering that the population of engineers is about 500 000 strong, the study of these initial responses has more value as a frame of reference and a general guide to the field than as a statistical illustration. Our survey contains about 90 questions based on French studies, in particular on the socio-economic situation of engineers and the working conditions of executives. (4)
Taking an average of 20 minutes, the engineers responded to four groups of questions: 1) personal information and the characteristics of their company for those still in employment, educational background, employment history; 2) activity and professional profile, type and level of remuneration, involvement with the financial market, satisfaction and autonomy in the work situation; 3) political profile, trade union and association membership; and 4) social origin and professional mobility. The survey was carried out in four phases. Firstly, we talked with the French researchers. Then the questionnaire was translated into Portuguese and discussed with the Brazilian researchers. The third phase consisted of testing the questionnaire on several engineers and representatives of trade unions and associations. The questionnaire was put online at the beginning of June and remained active until the end of August. For its diffusion, we relied on the wide support of associations, trade unions and other organisations representing the professional category nationwide. In carrying out the questionnaire and in view of the responses, the most populated regions of the country were over-represented. First we tried to reinforce the diffusion of the questionnaire in the regions and areas of activity which were under-represented in the sample. A second limitation to the initial sample was the over-representation of unionised engineers. This can be explained by the fact that this population was the first to respond to the questionnaire. We tried to correct this distortion by reinforcing the diffusion of the questionnaire to associations representing excellence within the profession and to organisations of professional branches. This article is organised into two parts. Firstly we focus on the recent transformations of Brazilian capitalism, and especially their repercussions on the professional group of engineers and how this group has evolved over time. The second part of the article presents the initial results of the survey, focusing on the general characteristics of engineers, the development of certain general trends in the job market and the political cleavages existing within the professional group.
TRANSFORMATIONS OF BRAZILIAN CAPITALISM AND THEIR IMPACT ON ENGINEERS
In this part, our discussion centres on recent changes to engineers' work situations. It is an introduction to the challenges faced by the professional group and to the ways in which engineers have organised their collective reactions. We begin by looking more globally at the transformations of the working conditions of engineers in general and then look more closely at the particular situation of the Brazilian context.
Engineers confronted with transformations in the productive structure. The strong relationship between engineers and the productive structure resides in the fact that their general conditions of work and employment have undergone profound modifications in this area. Changes in the organisation of production, new configurations of technological innovation and the management of human resources have brought about structural changes that impact the professional group of engineers. The change in the organisation of production from huge factories and industries towards a system where products and services are sub-contracted also affects engineers' identity.
Inasmuch as they are no longer concentrated inside large units of production, the productive space is no longer a place where a collective identity can be forged. Even when engineers work in the same physical space, the fact that they belong to different companies and have differing statuses affects their relationship. As for the new paradigms in terms of management, they affect the professional group in several ways. Although the new paradigms give engineers more autonomy for managing ordinary activities, this autonomy is restricted by the rigidity of the objectives imposed and the time pressure for achieving those objectives within strict deadlines (Paradeise, Autonomie et régulation : retour sur deux notions-clés). According to Dubar (Sociologie des Professions), for «professionals» in general, this means that former links are weakened in favour of another type of «professional identity» which is more closely allied to a company viewpoint. Charriaux (Ingénieur : une professionnalité interpellée) considers this movement as a «broadening of the professional sphere» (5) of engineers. The actors stress various levels of this, for example the purely technical level and the functional level (which is related to maintenance for those in charge of projects or products). This involves an enlargement of the work activity towards financial and economic management as well as technology and marketing (Charriaux, Ingénieur : une professionnalité interpellée). We see this situation repeated in Brazil. According to the literature, the engineer's normal activities involve more and more responsibilities. A whole gamut of new tasks must be accomplished within a short period of time: supervising workmen, dealing with employee administration and working conditions, negotiations with trade unions, managing financial reports, liaising with suppliers and clients, carrying out quality controls, checking deadlines, etc. (Lombardi, Perseverança e resistência: a engenharia como profissão feminina; Laudares, A qualificação/requalificação do engenheiro na fábrica globalizada: a necessidade de novos processos de trabalho).
Engineers confronted by transformations concerning State intervention. In developing countries, the engineers' fate is closely linked to the State. To the extent that the economy of these countries is not sufficiently developed, State intervention becomes crucial. Generally speaking, between the 1930's and the 1990's, Brazil's model of national development was guided by direct State intervention in the economy. Between 1990 and the 2000's, this model was gradually deconstructed. In the early 2000's, some investments began to bear fruit, especially in infrastructure; this opened a new period which is today being tested by the economic crisis. The evolution of the engineers' job market and their identity has been strongly influenced by State development. In the 1930's, this created the steel, electrical energy and ore mining sectors. From the 1950's, economic growth was fed by a consolidated industrial base and by the development of other productive activities such as heavy industry and consumer durables, especially the auto industry. This development strategy consisted of having a policy of substitution for imports and stimulating internal production.
The government set up a national bank for economic development (BNDES) and created a huge state oil company (Petrobras), which had a monopoly over the whole sector, from refining the oil to controlling the distribution of derived products (Serra, Ciclos e mudanças estruturais na economia brasileira do pós-guerra). During the 1990's, this model of autonomous development was brought into question by the structural reforms implemented by successive governments. The previous period's policy of substituting imports (by home-grown articles) gave way to a policy of opening up the economy to international competition. This was the start of a strong ideological offensive against public (state-owned) companies and their employees, and resulted in a period of intense privatisations (Cano, O ajuste da década de 1990: neoliberalismo e crise; Biondi, 2003). Both the opening of the economy and the privatisations had an effect on employment, especially in the industrial sector. In 1991, for example, jobs in the industrial sector were cut by 25%. The literature calls this period the «de-nationalisation» of Brazilian companies in the productive sector. The share of foreign capital increased through foreign acquisitions of privatised companies. We can also note several private industries whose bankruptcy was followed by their incorporation into foreign companies (Biondi, 2003; Boito, 1998; Carneiro, Desenvolvimento em crise: a economia brasileira no último quarto do século XX). According to Carneiro, this period is marked by a premature and regressive process of «deindustrialisation» of the main Latin-American economies, through the reduction of the proportion of industrial production in the GDP and the reduction of the high-technology industry throughout the fabric of industrial production. Between 1985 and 1990, during José Sarney's government, the official rate of employment of engineers rose by 3.6%, from 144 000 to 172 000 jobs. After the implementation of the first neo-liberal policies - opening up commerce, privatisation and reducing public investment - between 1991 and 1992 there was a fall of 11.5%. The drop in the employment rate of engineers continued until 1999, with a rate of under 2.7% (Lombardi, 2004, 79). Thus, between 1989 and 1999 about 30% of engineers' employment, or 53 166 official engineering jobs, disappeared. Despite the reversal of this trend since 1999, with an increase of 3.4% of jobs until 2002, a low rate of professional occupation persists for engineers compared with the creation of other jobs. If we compare 2002 and 1989 (a year when there were a great many jobs for engineers: 177 000), we realise that there is still a deficit of 38 000 jobs. If we take into account the number of new engineers graduating during that period, the situation appears even more serious. Between 1991 and 2002, the number of new professionals rose from 13 000 to nearly 20 000 per year (Lombardi, 2004, 96). During the period from 2004 to the end of 2008, the country underwent a new situation. The growth of the global economy made it possible to increase the volume of Brazil's exports (especially in the sectors of agriculture and mineral extraction). This situation also increased the liquidity of Brazilian capital in the world market.
The Lula government, supported by the positive balance of payments, adopted several measures to encourage internal industrialisation and the modernisation of the country's infrastructure. In 2007, the government created the Plano de Aceleração do Crescimento (PAC) to increase new public and private investment in industrialisation and infrastructure. The government's forecast for investments is over 1 thousand billion R$ (about € 300 billion) until 2010. According to government figures, investment in infrastructure increased from 0.64% of GDP in 2006 to 1% in 2008. During the same period, GDP grew as follows: 3.7% in 2006, 5.4% in 2007 and 6.4% in the first three quarters of 2008 (Balanço do PAC, 2009). (6) Figures for engineers' employment over this period are still unavailable. However, from information given by the profession's trade unions, it can be said that the economic bounce-back increased the level of employment of engineers. Until July 2008, some engineers' organisations proposed that the country double the number of graduate engineers over the coming ten years, especially in the sectors of production, mechanics and electronics (Procuram-se engenheiros, 2008).
ENGINEERS AND POLITICS IN BRAZIL
This second part is based on the first results of the survey. (7) This survey was designed to find out more about the engineers and their political attitudes. It enabled us to make direct connections between the engineers' structural situations in their work, their individual career paths and their relationships with political positions. At this stage of the paper we rely on the partial results of the survey to present: a very general picture of the working conditions and job market for engineers in Brazil (1), a few more recent general trends linked to current transformations of capitalism (2) and some elements of cleavages between engineers which strongly impact their political attitudes (3).
Brazilian Engineers
According to the analysis of the first results of the survey, engineering continues to be a largely male profession (82%) concentrated in the South West of the country (43%). Engineers graduate most frequently in the areas of mechanics (21.7%), electricity (20.2%) and construction and public works (18.8%). The majority of engineers are trained in public universities (63.7%). The branches of employment which employ them the most are: civil construction; telecommunications, electronics, information technology; agriculture and energy; food production and the food industry. About 14.4% of engineers are unemployed (60% of them since January 2009); 4.3% are retired and 81.1% are working.
6. The PAC thus met the demands of the National Federation of Engineers and its movement for the country's development, «Cresce Brasil + Engenharia + Desenvolvimento». In 2006, engineers in the organisation issued a manifesto for the urgent uptake of several measures, which were almost all satisfied by the PAC (Manifesto Cresce-Brasil, 2006; FNE, 2007).
7. This research was made up of different phases using open source software. LimeSurvey was used for the creation and diffusion of the questionnaire. For statistical data, R software, especially «R Commander», was invaluable. Then for word processing and tables we relied on OpenOffice. During this phase, we relied on the support of the Lest technicians, particularly Sara Famiglietti, Patrice Cacciuttolo and Gregory Conu.
Amongst these, 64.3% have a permanent contract, 8.9% have a short-term contract, 10.7% have tenure as state officials, 12.5% are self-employed and 3.6% have their own company. Engineers' revenue is concentrated between 5 and 20 minimum salaries per month. (8) However, 10% of engineers have a high level of revenue, over R$ 111 600 per year (equivalent to over 61 000 USD per year). In general, the highest revenues are those of engineers attached to the private sector. The distribution of workers by type of employer is as follows: 55.4% work in the private sector; 16.1% in public companies; 8.9% in the administrations and public services of cities and regions; 14.3% are company owners or self-employed and 5.4% work for NGOs and associations. Amongst private sector engineers, 58% work in a company which depends on a foreign group or company. Even if we do not have previous comparative data, it can be supposed that this figure corresponds to the effects of the process of de-nationalisation of the Brazilian economy. Among the engineers working in the private sector and for the state, a large proportion work for companies with over 500 employees. Amongst those in the private sector, 29% work for companies with 500 to 1 999 employees and 38.7% in companies with over 2 000 employees. For public sector workers, these figures are 33.3% and 44.4%, respectively. Generally speaking, half the engineers from all sectors carry out activities of surveying and design, research, and conception and management of projects. The two most frequent activities in the survey responses are: production (13%) and executive director (8.6%). It is interesting to note that the latter are not those with the highest revenues: amongst the 6 engineers who say they work as executive directors, only 1 belongs to the "richest" 10% (indeed, he is the only one who works in the private sector and who is not the head of a company).
General trends of the evolution of Brazilian capitalism affecting engineers
From the total number of 56 working engineers, 32% undertake a supervisory management role. Amongst these, 72% are project managers. Amongst working engineers who have no management role, the proportion of project managers is just as high, 63.2%. This means that the engineers in our sample have different levels of responsibility and remuneration: (1) there are project managers who are high-level managers and who are generally well paid (50% have a responsibility such as executive director and are paid over 30 minimum salaries); (2) middle-management project managers (supervising a team, an office or a department) with a revenue of between 5 and 20 salaries; and (3) just over one third of the engineers (34.8% of the sample) are responsible for a project but have no supervisory or hierarchical position. The majority of this latter group (75%), whose revenue is no more than 20 salaries, have on the one hand little economic and financial influence over the tasks performed (60%) but are, on the other hand, autonomous in their technological decisions (45%).
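The top-decile threshold quoted above corresponds exactly to the 20-minimum-salary boundary of the revenue scale (minimum salary R$ 465.00, value of June 2009), as this worked check shows:

$$20 \times 465\ \mathrm{R\$/month} \times 12\ \mathrm{months} = 111\,600\ \mathrm{R\$/year}.$$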
It can be asserted that the « path to financialization » is well advanced for one category of engineers: over 40% of engineers possess shares and 10% receive financial profits amounting to between 10% and 30% of their revenue. On noticeable trend for engineers is the weakening of support for nationalism amongst the youngest -those who have entered the job market since the 1990's. We use the term « General nationalism » to designate those engineers who, to a greater or lesser degree (either generally -diffuse or in a more interventionist sense), supported the necessity of maintaining the national nature of the large Brazilian productive companies. 9 . As shown in table 2, engineers born in the 1960's have a strong tendency to favour the national nature of companies (80%). This is reduced by over 20% for the younger generation -(under 60%). On the other hand, this weakening of general nationalism amongst the young does not mean that attitudes are siding towards liberal policies of no radical state intervention in the economy. The engineers' position is that investment in infrastructure and social services still require strong State intervention. For engineers, the idea that the State should be the main motor of investment in infrastructure remains entrenched: 60.9% think that this investment is highly necessary either to develop infrastructure of for the country's sovereignty. Only 8.6% think that the important thing is company investment, and 2.8% consider that the State should reduce its spending -these are the two most liberal positions. Concerning social policy, engineers responses were almost unanimous: their conception being that « the Education, Health and well being of citizens are the obligations of the State » 10 . 9 Question S11 asked engineers their opinion of large privatised Brazilian companies such as Petrobras (oil), Embraer (aeroplanes) and Vale do Rio Doce (mining). There were six possible responses: 1) A free marketeer who specified « The government should not intervene. Let them be regulated by the laws of the global market ». 2) A "diffuse nationalist" who said « They should still be Brazilian with national capital, whether public or private » 3) A nationalist interventionist who said : « The Brazilian State should nationalise them and try to be a major shareholder ». 4) A nationalist for state intervention and management who said : « The Brazilian State should re-nationalise these companies and take over their management ». 5) A nationalist interventionist for certain cases who said « The government should only take back the control of Petrobras »; and 6) « Another opinion». In these responses, we did not refer to the different types of nationalism. The responses were presented in a different order for each new respondent. All questions in the survey concerning the political profile of engineers could be left unanswered. 10 Question S14 asked engineers their opinion of State social policy such as citizens' health, education, leisure and EUROPEAN DOCTORAL WORKSHOP Warwick Business School -24-25 September 2009 Another noticeable trend amongst the young generation is the significant reduction in trade union membership. Table 3 shows a gradual reduction of the rate of trade union membership. Even if trade unionists might be over-represented in our sample, (compared to the 1997 rate which was 13% for men and 8% for women), it is important to mention that this trend exists. 
Elements of cleavage amongst engineers
We noticed three tendencies in engineers' political opinions concerning large Brazilian productive companies. These opinions can be identified either as «diffuse nationalism», «interventionist nationalism» or «free market». By «diffuse nationalism» we understand those political attitudes which agree on the necessarily national nature of companies financed by public or private capital, without however being in favour of State intervention. This latter stance is characteristic of the «nationalist interventionist» engineers. We divided this last type of nationalism into two sub-types: «State interventionists» - those in favour of state intervention in the economy without state management of the companies concerned - and «State management interventionists» - those who defended direct State intervention in company management. The «free market» stance is held by those who are against all State intervention in the economy; they favour «free regulation» by the global market.
We mentioned above a general trend towards diminishing «general nationalism» amongst engineers according to age. If this is further analysed by decade of birth, we observe that age is an element of cleavage between engineers. Specifically, the personal history of different generations of engineers influences their political stance. Engineers born between 1940 and 1950 are those who are most in favour of the State taking back the management of large Brazilian companies. Engineers born in the 1960's are the most interventionist: 33% are in favour of nationalising the companies by buying shares, and 20% are in favour of taking back state management. If we observe Table 4, we see that amongst the youngest generations (born between 1970 and 1980) there is an increase in diffuse nationalism, and also a reduction in interventionist nationalism.
If we consider that an engineer enters the job market aged about 23 (1 year of preparatory classes, then 5 years to finish the diploma), the most nationalist interventionist engineers (those born during the 1960's) are those who entered the job market when the productive structure was being transformed. They thus lived through the immediate effects of changes such as the lessening of State intervention. These engineers, who were over 30 years old during the first neo-liberal reforms of the 1990's, were also those who faced unemployment: 20% have already been unemployed two or three times. On the other hand, engineers born since 1980, who have never worked in a political context of strong State intervention, are those with the lowest percentage of responses in favour of interventionist nationalism (8.4%). The distinction between public and private remains an important element of cleavage among engineers (cf. Table 5).
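To make the typology operational, a small analysis sketch helps. This is a hypothetical pandas reconstruction of the Table 4 cross-tabulation, not the authors' R Commander code; the column names, the response-to-stance mapping (following the wording of question S11 in footnote 9) and the toy records are illustrative assumptions.

```python
import pandas as pd

# Map the S11 response options onto the four political stances.
STANCE = {
    "free_market": "Free Market",
    "national_capital": "Diffuse Nationalism",
    "state_shareholder": "State Interv. Nationalism",
    "state_management": "State Manag. Interv. Nationalism",
}

# Toy records standing in for the survey export (decade of birth + S11 answer).
df = pd.DataFrame({
    "decade": ["1960s", "1960s", "1970s", "1980s", "1980s"],
    "s11":    ["state_shareholder", "state_management",
               "national_capital", "national_capital", "free_market"],
})
df["stance"] = df["s11"].map(STANCE)

# Row-normalized cross-tabulation, as in Table 4 (percentages per decade).
table4 = pd.crosstab(df["decade"], df["stance"], normalize="index") * 100
print(table4.round(1))
```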
Engineers from the public sector working in non-productive sectors (generally in regions, cities and hospitals) are those who are most in favour of interventionist nationalism (80%). This trend is also present - to a lesser extent - amongst public or mixed-capital companies (66.7%). On the contrary, engineers in the private sector are those who most frequently respond in favour of the free market position (38.7%). The increase in the number of engineers attached to the idea of the financialization of the economy is also an element of cleavage in engineers' political attitudes. A comparison of the political position of the whole sample towards large productive companies (Table 6) with that of the 40% who hold shares in the stock market (Table 7) shows a difference. When all engineers are taken together, the proportion of those with political attitudes tending towards diffuse nationalism and the free market is 40 to 27. If we take the group of engineers whose revenue partly depends on financial investments, we observe a proportion of about 45 to 45. This means that this latter group of engineers is more in tune with free market ideas - and policies. Another aspect which attracted our attention was the strong tendency amongst engineers in public companies to take on the ideas of private management. When these engineers are faced with choosing between private or public management, a significant proportion of them chose the private option (44%). On the contrary, 80% of engineers from the non-productive sector, working in towns and regions, especially in administration or services, refused the private option. There are enough elements to suggest that engineers in public companies are not completely opposed to public management. In the questionnaire, we asked about the reasons for preferring private management. The engineers from public companies who responded in favour of it justified their choice as follows: private management is "targeted towards client needs" (50%). This apparent incongruity may be explained by supposing that the integration of certain new management tools into public companies - such as the optimisation of resources and the increase of autonomy - has influenced these engineers towards their favourable attitude. Moreover, when asked about the guarantee of job security for public employees, 75% of these engineers were in favour. In this paper, we have tried to show that the recent transformations of Brazilian capitalism have had a fairly strong influence on the collective identities and political attitudes of engineers in Brazil. This does not mean that there is a relation of cause and effect. Far from saying this, our objective was to bring some elements to bear on a more complex picture involving the relationship between the political situation and groups of actors. From the partial analysis of an ongoing survey, we have been able to observe that the changes introduced into the Brazilian economy and society since the 1990's have impacted engineers fairly strongly. On one side, transformations of the productive structure and, on the other, the change in the type of State intervention (tending towards neo-liberalism) have played an important part in the redefinition of the professional group of engineers and in the formation of their collective identities.
Concerning the organisation of productive activities, there is a polarisation: at one end, a small group of engineers, high-level managers and project leaders who are well paid; at the other end of the spectrum, a mass of engineers who are not managers but who are nevertheless responsible for projects. The characteristic of this latter group is that their professional situation gives them a high level of autonomy in their technological choices, countered by less autonomy in financial and economic areas. The enlargement of these engineers' activities is not matched by the rise in salaries seen at upper management levels. On the other hand, a few elements lead us to think that new management tools may have a positive impact which is felt by those engineers working in public productive companies. A case in point is these engineers' strong preference for private management, which takes account of client needs. We also noticed a trend amongst the youngest engineers towards a gradual reduction in trade union membership and nationalism. In order to better understand this point, we categorised the political opinions of engineers towards large Brazilian companies into four types: «diffuse nationalism», «State interventionist nationalism», «State management interventionist nationalism» and «free market». We have shown that a double trend exists: the strong interventionist nationalism among engineers who lived through the transformations of the employment market from 1980-1990 contrasts with a strong diffuse nationalism - combined with a relatively favourable attitude towards liberal policies - amongst the youngest engineers, who never knew the time of strong State intervention when the country was being developed. Another important transformation in Brazilian society which influences engineers' political attitudes is that of the financialization of the country's economy. Some engineers have been encouraged to participate in the development of financial markets through bonuses and incentives. When this is crossed with their political position, we find that their integration of the reasoning of market finance has influenced their political attitudes towards the liberal field.
References: Biondi, A. (2003), O Brasil privatizado: um balanço do desmonte do Estado. São Paulo, Perseu Abramo. Boito, A. (1998), Política neoliberal e sindicalismo no Brasil. São Paulo, Xamã.
Table 1: Engineers' revenue (in multiples of one minimum salary of R$ 465.00, value of June 2009).
<5 | 5-10 | 10-20 | 20-30 | >30
4.3% | 33.3% | 33.3% | 7.2% | 2.8%
Source: survey «Radiografia dos Engenheiros no Brasil»
Table 2: Age compared with general nationalism among engineers. Source: survey «Radiografia dos Engenheiros no Brasil»
Table 3: Proportion of trade unionists by decade of birth. Source: survey «Radiografia dos Engenheiros no Brasil»
Table 4: Political stance concerning large companies according to decade of birth.
Political stance / Date of birth | Diffuse Nationalism | State Interv. Nationalism | State Manag. Interv. Nationalism | Free Market
1940-50's | 33.3% | 0.0% | 22.2% | 33.3%
1960's | 26.7% | 33.3% | 20.0% | 13.3%
1970's | 38.1% | 19.0% | 0.0% | 33.3%
1980's | 54.2% | 4.2% | 4.2% | 29.2%
Source: survey «Radiografia dos Engenheiros no Brasil»
Table 5: Political stance concerning large companies according to type of employer.
Political stance / Type of employer
Table 6: Political position of the whole sample towards large companies. Source: survey «Radiografia dos Engenheiros no Brasil»
Table 7: Political position towards large companies according to percentage of «financial» revenue. Source: survey «Radiografia dos Engenheiros no Brasil»
5. «Un élargissement de la professionnalité» in the French expression.
8. The question concerning engineers' revenue related to the gross revenue for the whole of 2008 (salaries, advantages, bonuses, etc.). We used 5 levels of remuneration. Without mentioning this on the questionnaire, we used the value of the minimum salary for June 2009 as a reference for the various levels of revenue: (1) under 5 salaries accumulated during one year; (2) between 5 and 10; (3) between 10 and 20; (4) between 20 and 30; and (5) over 30 salaries.
04117872
en
[ "sdv.spee", "sdv.mhep.mi" ]
2024/03/04 16:41:26
2022
https://theses.hal.science/tel-04117872/file/SANDFORT_Mirco_these_2022.pdf
Mirco Sandfort ([email protected])
Amélie Vantaux, Saorin Kim, Thomas Obadia, Anaïs Pepey, Soazic Gardais, Nimol Khim, Dysoley Lek, Michael White, Leanne J. Robinson, Benoit Witkowski, Ivo Mueller, M. Niang
Forest malaria in Cambodia: the occupational and spatial clustering of Plasmodium vivax and Plasmodium falciparum infection risk in a cross-sectional survey in Mondulkiri province, Cambodia
Keywords: Malaria, Plasmodium vivax, Plasmodium falciparum, spatial epidemiology, spatial clustering, spatiotemporal clustering, spatial signature
Without the enormous support of my partner Isabelle this thesis would not have been possible. Isa, thank you for the load you took off my back. I apologize to you and Nea
Abstract: After several decades marked by a global decline in malaria, the countries of the Greater Mekong Subregion have committed to eliminating it by 2030. As elimination efforts intensify, areas of high malaria incidence are becoming restricted to remote regions. Some populations are particularly exposed, notably when travelling into forested areas. The risk of infection must be well described so that intervention strategies can be adapted locally. Delays against the planned elimination targets confirm the need to improve control programmes. This thesis aimed to describe and understand the heterogeneity of infection risk in a remote region of Cambodia. We stratified risk, exploiting demographic and geographic covariates, and applied existing tools for cluster detection. We introduced the notion of the spatial signature of transmission for malaria epidemiology. Starting from PCR-confirmed index cases, we described how prevalence evolves as a function of distance to those cases, and extended this analysis to studies conducted in Cambodia, Brazil, Thailand, the Solomon Islands and Senegal. In Cambodia, we showed that pockets of high prevalence of Plasmodium spp. infections persist, mostly caused by P. vivax infections that are asymptomatic and difficult to detect through routine surveillance. The zone of influence of these pockets does not exceed a few kilometres at the edge of forested areas. Forest work is the most important risk factor at the population level, and the risk extends to all inhabitants of forest villages. In Cambodia, Brazil and Thailand, index and secondary cases are very close, geographically and in time. Cambodia has intensified its control programme at the local level and introduced 'radical cure' of recurrent P. vivax infections. However, passive case detection will continue to miss most infections. The homogeneous risk within forest villages could justify mass antimalarial drug administration campaigns. The small geographic reach of the clusters argues for reactive approaches restricted to neighbouring households. In summary, this thesis has improved the understanding of residual malaria and proposes new tools that will help its elimination.
In contrast to the other species, P. vivax and P. ovale sporozoites form dormant stages in hepatocytes called hypnozoites (Mueller et al., 2009). These re-activate and cause relapses, i.e.
bloodstream infections weeks, months, or years after the initial infection by a mosquito (White, Variation in relapse frequency and the transmission potential of Plasmodium vivax malaria). Contrary to P. falciparum, P. vivax merozoites prefer reticulocytes, potentially causing chronic anaemia. Also, P. vivax gametocytes can develop prior to symptom onset (Adams, The Biology of Plasmodium vivax; Mueller et al., 2009). Artemisinin-based combination therapies (ACTs) are the most effective treatment against uncomplicated malaria. These drugs treat the bloodstream parasitaemia of all human malaria pathogens and can cure P. falciparum, P. malariae, and P. knowlesi. However, they are ineffective against hypnozoites. To prevent relapses from P. vivax or P. ovale and thus achieve 'radical cure', 8-aminoquinolines such as primaquine are needed. However, primaquine can induce haemolysis in patients with glucose-6-phosphate dehydrogenase (G6PD) deficiency, one of the most common enzyme deficiencies worldwide (World Health Organization, 2022).
Malaria epidemiology worldwide
Malaria occurs across many countries in Latin America, Africa, Asia, and Oceania, particularly in tropical and subtropical areas. It thus poses a risk of illness and death to billions of people and puts an additional economic strain on many resource-poor countries (World Health Organization, 2015a). The latest World Malaria Report 2021 from the World Health Organization (WHO) estimated 241 million malaria cases and 627,000 deaths from malaria worldwide in 2020. Ninety-five per cent of these cases and 96% of malaria deaths occurred in the WHO African region, and 77% of all malaria deaths happened in children under five years (World Health Organization, 2021a). Malaria case incidence varied across endemic areas from 233 per 1,000 population at risk in the WHO African region to 2 cases per 1,000 in the Western Pacific region in 2020. Likewise, mortality rates ranged from 62 deaths per 100,000 population at risk to 0.4 per 100,000, respectively. However, low-incidence regions can enclose high-incidence countries, e.g. Papua New Guinea in the Western Pacific region, with 164 cases per 1,000 (World Health Organization, 2021a). Generally, the heterogeneity of malaria risk is 'fractal', i.e. low-incidence world regions include high-incidence countries, low-incidence countries contain high-incidence provinces, and low-incidence provinces comprise high-incidence villages (Lana et al., 2021). Risk diversity extends further, e.g. to the levels of households and individuals within villages (Smith, Revisiting the basic reproductive number for malaria and its implications for malaria control; Woolhouse, Heterogeneities in the transmission of infectious agents: implications for the design of control programs). Cases in sub-Saharan Africa are almost exclusively caused by P. falciparum. In contrast, P. vivax causes a considerable share of cases in the other malaria-endemic WHO regions, e.g. an estimated 30% of all cases were P. vivax mono-infections in the Western Pacific in 2020. In the Americas, as many as 75% of cases were attributed to P. vivax mono-infections (World Health Organization, 2021a). In particular, in countries that are co-endemic with P. falciparum and P. vivax, it has frequently been observed that as malaria case incidence decreased with intensified control interventions, the proportion of cases attributable to P.
vivax increased (Price, Commons, Battle, Thriemer, & Mendis, 2020), see also Figure 2 on page 13. The goal of malaria elimination Since the United Nations included the halt of the spread of malaria in the Millennium Development Goals in 2000, the global incidence of malaria cases has been reduced by a quarter, i.e. from 81 cases per 1,000 population at risk in 2000 to 59 cases per 1,000 in 2020, see Figure 1. Malaria mortality has halved from an estimated 30 deaths per 100,000 population at risk in 2000 to 15 deaths per 100,000 in 2020 worldwide (World Health Organization, 2000). Figure 1 Global malaria case incidence (cases per 1,000 population at risk) from 2000 until 2020 as estimated by the WHO (World Health Organization, 2021a). A massive scale-up of the major malaria control tools made this success in burden reduction possible. It mainly comprised insecticide-treated bed nets (ITNs), including long-lasting insecticide-treated nets (LLINs), indoor residual spraying (IRS), rapid diagnostic tests (RDTs), and ACTs. Given this momentum, the Bill & Melinda Gates Foundation and the WHO called for malaria elimination in 2007 (World Health Organization, 2020). By 2020, 23 countries had reached the WHO category 'prevention of re-introduction', i.e. had reported zero indigenous cases in a year (World Health Organization, 2007). After reporting no locally acquired cases for three years, a country is eligible for malaria-free certification by the WHO. Twelve countries had achieved this status by 2020 (World Health Organization, 2021a), mainly from the Middle East or Central Asia. The milestone of malaria elimination in 8-10 countries by 2015 was surpassed (The Roll Back Malaria Partnership, 2008), and the malaria-free certification of China and El Salvador in 2021 was a great success (World Health Organization, 2021a). However, the Global Technical Strategy for malaria 2016-2030 (GTS) aimed to eliminate malaria from 10, 20, and 35 of the countries in which malaria was endemic in 2015 by 2020, 2025, and 2030, respectively (World Health Organization, 2015a). Six countries have reached the malaria-free criterion since 2016, so the intermediate 2020 target was missed. Overall, progress in reducing the malaria burden has levelled off globally (World Health Organization, 2021a). South East Asia, and especially the Greater Mekong Subregion (GMS), has attracted particular attention regarding malaria elimination since 2011, due to the detection of resistance to artemisinin derivatives in P. falciparum at the Cambodia-Thailand border (Carrara et al.). Challenges to elimination and the need to accelerate Several recent developments threaten global elimination efforts. They include the spread of drug-resistant parasites, particularly in P. falciparum, of strains that evade common RDT target antigens, and of more efficient and resilient vector species (Faulde, Rueda, & Khaireh, 2014; [START_REF] Gamboa | A large proportion of P. falciparum isolates in the Amazon region of Peru lack pfhrp2 and pfhrp3: implications for malaria rapid diagnostic tests[END_REF]; World Health Organization, 2015b, 2021a). In addition, many countries, particularly those from the GMS, face challenges due to inherent features of malaria epidemiology. First, a large share of asymptomatic infections which are not detected during routine, curative case management. Second, the predominance of P. vivax infections entailing transmission prior to symptoms, difficult cure, and relapses.
Third, the overall geographic fragmentation of malaria transmission with persistently high infection levels in remote areas. While the emerging threats render malaria elimination a race against time, the inherent challenges of malaria epidemiology hinder the progress of (inter-)national control programmes. Those roadblocks must be overcome if the ambitious elimination timeline is to be met. This thesis is embedded in the Asia-Pacific International Center of Excellence in Malaria Research (ICEMR) project. It applies state-of-the-art and novel molecular, immunological, entomological, and modelling techniques to better understand malaria transmission in Papua New Guinea and Cambodia [START_REF] Mueller | Asia-Pacific ICEMR: Understanding Malaria Transmission to Accelerate Malaria Elimination in the Asia Pacific Region[END_REF]. Regarding Cambodia, it especially examines the dynamics of forest-based transmission and P. vivax malaria and their implications for malaria elimination [START_REF] Robinson | Asia-Pacific ICEMR: Maximising impact on malaria control policy and public health in Cambodia and Papua New Guinea[END_REF]. This thesis assesses the high levels of asymptomatic and P. vivax infections and the spatial and social heterogeneity in infection risk based on a cross-sectional survey in a remote area in Cambodia. The work introduces a novel approach to describe spatial and temporal clustering of infections and applies it to data from Cambodia, Thailand, Solomon Islands, Brazil, and Senegal. The thesis compares it to established spatial statistical tools and reviews the findings in light of their utility to help accelerate towards malaria elimination. Chapter 2: State of the art Chapter 2 recaps the current knowledge of the progress towards malaria elimination in South East Asia. It lists the presently recommended public health interventions. The focus is on the challenge of a fragmented malaria burden and on statistical tools for spatial risk patterns. Progress towards malaria elimination in South East Asia Over the past two decades, countries in the GMS have made enormous progress in reducing the malaria burden. Annual numbers of indigenous cases of P. falciparum malaria have fallen by 93% from 2000 until 2020 and overall malaria case numbers by 78%, see Figure 2, despite a drastic increase in case numbers between 2006 and 2012. Coordinated efforts against artemisinin-resistant P. falciparum helped to change the course and initiate a decade of even more marked reductions in case numbers (World Health Organization, 2021a). Reductions of 60% or more in malaria incidence and mortality were achieved in all GMS countries. The 2020 GTS milestone of a 40% reduction was thus met (World Health Organization, 2015a, 2021a). In Cambodia, reported case numbers per year have declined drastically over the last 20 years, from 129,167 cases and 608 deaths in 2000 to 4,279 cases and zero deaths (for the fourth consecutive year) in 2021, see Figure 3. Laos and Vietnam also reported zero malaria deaths in 2020. Cambodia reported 10% of cases caused by P. falciparum in 2020 and thus missed the target of P. falciparum elimination by that year (World Health Organization, 2015b). Considering unreported cases, the WHO estimates 69,136 malaria cases in Cambodia in 2020, which puts malaria elimination far out of reach.
Toolbox for malaria elimination The tools that the WHO particularly recommends for the prevention of infection include: 1) distribution of ITNs/LLINs free of charge, 2) IRS if vectors feed and people sleep mainly indoors, 3) preventive chemotherapies irrespective of diagnosis, either targeting risk groups or population-wide via mass drug administration (MDA), and 4) RTS,S/AS01 vaccination against P. falciparum malaria for children (World Health Organization, 2022). Given the unspecific symptoms, treatment against malaria should follow confirmation via light microscopy (LM) or RDTs. Uncomplicated P. falciparum malaria should be treated with ACT daily over three days. In low-transmission settings, a single additional primaquine dose is recommended, without need for G6PD testing, to lower the risk of onward transmission by killing gametocytes. Uncomplicated malaria by any of the other Plasmodium spp. should be treated with ACT (or chloroquine unless the area is known for resistance). Patients with P. vivax or P. ovale infections should additionally be treated with primaquine daily for 7-14 days to prevent relapses if they are not G6PD deficient (World Health Organization, 2022). The geographic heterogeneity of malaria poses strategic and logistical challenges to national malaria programmes (NMPs). Thus, the WHO recommends regionally stratified intervention approaches. In delimited areas with transmission across the general population, 'mass' interventions may be considered. When transmission is (largely) confined to certain high-risk groups, 'targeted' interventions could be applied to group members before, during, or following the exposure. 'Reactive' interventions can be initiated upon the (usually passive) detection of cases: cases are followed up, and interventions are also targeted to co-exposed contacts or people residing with or close to the index case. In all three categories, interventions could be presumptive drug administration or testing and treatment. The Cambodian NMP has mainly tackled malaria through the cost-free distribution of ITNs/LLINs and effective diagnosis by LM or RDTs and treatment with ACT in health centres and more remote health posts (Maude et al., 2014). Access to diagnosis and treatment for remote populations was further improved by the VMW programme, launched in 2004 [START_REF] Canavati | Village malaria worker performance key to the elimination of artemisininresistant malaria: a Western Cambodia health system assessment[END_REF]. These community health workers provide cost-free malaria diagnosis by RDTs, ACT, and behaviour change communication to the general population in remote villages. In addition, mobile malaria workers (MMWs) were established to target migrant populations, e.g. mobile groups that stay in the forest for extended periods for occupational activities. IRS and vaccination are not adopted in Cambodia. Until 2019, all RDT- or LM-confirmed infections had been treated with ACT. Primaquine treatment for P. vivax infections and prior G6PD testing was piloted in 2019-2020 and has been scaled up countrywide since then ([START_REF] Sovannaroth | Accelerating malaria elimination in Cambodia: an intensified approach for targeting at-risk populations[END_REF]; World Health Organization, 2021a, 2021b). Regional challenges to malaria elimination Predominance of P. vivax In the past, P. vivax was reported to cause a minority of cases in countries across the GMS (World Health Organization, 2021a). To date, the share of cases with P. vivax infections has increased in all countries, and in most, P.
vivax now accounts for most of the cases: 42.4% of cases in Vietnam, 54.9% in Laos, 74.1% in Myanmar, 89.5% in Cambodia, and 93.5% in Thailand in 2020 (World Health Organization, 2021a). Case management with ACT after RDT or LM diagnosis (in combination with prevention measures through ITNs and IRS) has proven very successful in reducing case incidence. However, this success has been more pronounced against P. falciparum than against P. vivax (Price et al., 2020; World Health Organization, 2021a). P. vivax infections lead mostly to low-density, asymptomatic infections that can be hard to detect by RDTs or even LM. Several relapses per primary P. vivax infection lead to partial immunity more rapidly and increase the likelihood of a milder or asymptomatic course upon relapse or new primary infection (Mueller et al., 2009). Thus, clinical case management that relies on patients' self-reporting to health facilities targets patients with acute P. falciparum infections more effectively than those with a P. vivax infection. Furthermore, primaquine treatment of P. vivax cases is often not established in endemic countries because of prevalent G6PD deficiency in the population, the risk of drug-induced haemolysis, and because G6PD testing prior to treatment is costly. In addition, the daily dosing over 14 days reduces adherence compared to the 3-day ACT-only regimens. A further challenge to P. vivax control is the early production of gametocytes prior to symptom onset [START_REF] Adams | The Biology of Plasmodium vivax[END_REF] (Mueller et al., 2009), whereas P. falciparum gametocytes are more easily targeted by gametocidal treatment as they appear only days to weeks after clinical manifestation (Lover, Baird, Gosling, & Price, 2018). High prevalence of asymptomatic infections The malaria incidence reported by NMPs is mainly based on clinical cases who consult health facilities. The measure thus misses patients who do not attend such facilities, be it due to mild symptoms or because they frequent traditional healers [START_REF] Chatterjee | Cambodia's fight against malaria[END_REF]. Patients from remote parts of a country, far from the nearest health facility, may also be missed. VMWs, MMWs, or health posts in the periphery of Cambodia may lower but not abolish such barriers. Furthermore, the detection threshold of LM and RDTs does not allow the detection of sub-patent parasitaemia in mildly symptomatic patients. Lastly, asymptomatic individuals will not consult health facilities in the first place. Sub-patent or asymptomatic infections will be particularly prevalent in areas or populations with high transmission levels (due to immunity). Rates of infections will thus exceed incidence, even more so in remote areas with high transmission levels, including the north-eastern provinces in Cambodia [START_REF] Stresman | Association between the proportion of Plasmodium falciparum and Plasmodium vivax infections detected by passive surveillance and the magnitude of the asymptomatic reservoir in the community: a pooled analysis of paired health facility and community data[END_REF]. The disproportionately stronger divergence of reported incidence and actual infection rates in high-transmission areas makes field studies in these areas instrumental, as they actively sample asymptomatic people. The first survey determining infection levels in Cambodia's general population was conducted by Incardona et al. in three high-incidence provinces in peak transmission periods in 2001-2003 (Incardona et al., 2007).
When LM-derived prevalence was compared to reported case incidence, the discrepancy ranged from 5-fold in Sampovloun to more than 100-fold in Koh Kong. Survey-based infection prevalence exceeding NMP records of clinical incidence was likewise observed in other GMS countries, e.g. at the Thailand-Myanmar border in 2012 [START_REF] Baum | Submicroscopic and asymptomatic Plasmodium falciparum and Plasmodium vivax infections are common in western Thailand -molecular and serological evidence[END_REF] or in Myanmar (O'Flaherty et al., 2021). Asymptomatic and sub-patent infections often harbour parasitaemia levels that allow transmission to mosquitoes and thus contribute to the persistence of malaria despite malaria control interventions (Kiattibutr et al., 2017; Tadesse et al., 2018). It is thus crucial to identify and treat this silent reservoir of infections if malaria elimination is to be achieved. Geographic heterogeneity in malaria burden The countries in the GMS show a highly heterogeneous profile of malaria burden. At the lower end, China has had no locally acquired cases since 2017. In terms of incidence of indigenous malaria cases in 2020 in the other countries, the WHO estimates range from <0.1 per 1,000 population at risk in Vietnam and 0.2 per 1,000 in Thailand up to 5.8 per 1,000 in Cambodia. The malaria mortality estimates range from <0.1 per 100,000 population at risk in 2020 in Vietnam and Thailand up to 0.4 per 100,000 in Cambodia. Within the countries, the malaria burden is also distributed highly unevenly, see Figure 5. In Thailand, large parts of the country are malaria-free. Areas with ongoing transmission are mainly restricted to the borders with regions in Myanmar, Malaysia, Laos, and Cambodia that have a relatively high incidence (World Health Organization, 2021e). Also in Vietnam, low-level transmission mainly remains along the border with Cambodia and Laos. By contrast, in Laos, Myanmar, and Cambodia, parts of the countries with high or very high transmission remain. While the mountainous northern half of Laos is mostly free of malaria, high transmission levels occur throughout the country's south and southeast along the border with Thailand, with its plains along the Mekong. In Myanmar, provinces with no or low-level transmission can be found in the central lowlands, but the largest part of the country is covered with high-transmission areas (World Health Organization, 2021c, 2021d, 2021f). In Cambodia, districts with no transmission can only be found in and around the capital Phnom Penh. Apart from these, transmission occurs throughout the country in an area inhabited by 71% of the population. High-incidence provinces were mainly restricted to the west, the area most afflicted by drug-resistant P. falciparum, mainly in Pursat province, and five north-eastern provinces, Preah Vihear, Stung Treng, Ratanakiri, Kratie, and Mondulkiri, see Figure 5 ([START_REF] Dondorp | Artemisinin resistance in Plasmodium falciparum malaria[END_REF][START_REF] Noedl | Evidence of artemisinin-resistant malaria in western Cambodia[END_REF]; World Health Organization, 2021b). By 2020, the incidence had been reduced in almost all provinces, and high-incidence provinces withdrew further into the country's periphery. For example, in north-eastern Cambodia, only three provinces remained with elevated incidence, and the highest incidence of 10-20 cases per 1,000 population was only found in Mondulkiri.
High-burden areas in the GMS are usually found in the foothills and along the coast but, above all, in forested regions. The major vectors in the GMS, An. dirus and An. minimus, prefer this shaded and humid environment. They tend towards exophily and exophagy, i.e. to rest and bite outdoors, and to bite early, limiting the effects of IRS and ITNs. Agricultural activity close to the forest (often at the forest fringe on deforested land) with frequent or long overnight stays or wood logging in the forest ensures frequent and prolonged vector-host contact (Dysoley et al., 2008; [START_REF] Hii | Malaria vectors in the Greater Mekong Subregion: overview of malaria vectors and remaining challenges[END_REF][START_REF] Singhanetra-Renard | Malaria and mobility in Thailand[END_REF]; World Health Organization, 2015b). Cambodia's northeast is densely forested, subject to illegal wood logging, and populated by disproportionately high proportions of marginalized ethnic minorities (Asian Development Bank, 2002; Moul & Seng, 2012). The recurring concept of malaria risk in the GMS, and in Cambodia in particular, is that it is confined to the high-risk groups of people exposing themselves to forested areas temporarily, e.g. slash-and-burn farmers and loggers, but also miners, security forces, and dam construction workers (Dysoley et al., 2008; [START_REF] Hii | Malaria vectors in the Greater Mekong Subregion: overview of malaria vectors and remaining challenges[END_REF][START_REF] Singhanetra-Renard | Malaria and mobility in Thailand[END_REF]; World Health Organization, 2015b). Various methods are at hand for detecting spatial clustering of observations, e.g. malaria cases or infections. Common options are statistical tests for spatial autocorrelation, e.g. Moran's I or Geary's C, or space-time statistics. Based on geo-located households of reported malaria cases in Haiti in a low-transmission setting in 2014-2015, a study detected several areas of 1 km² in size with significant case clustering via a spatial autocorrelation statistic for cluster detection, called Getis-Ord Gi*, and overlapping space-time clusters via space-time permutation modelling [START_REF] Dismer | Detecting Malaria Hotspots in Haiti, a Low-Transmission Setting[END_REF]. Three cross-sectional surveys in northern Guatemala at low malaria endemicity in 2000-2001 found clusters of malaria cases by Moran's I and Getis-Ord Gi*, though these were highly variable in location over time [START_REF] Malvisi | Analysis of the spatial and temporal distribution of malaria in an area of Northern Guatemala with seasonal malaria transmission[END_REF]. In the 1990s, the Trinidad and Tobago Ministry of Health was able to confirm P. vivax and P. malariae outbreaks and calculate their spatial extents via space-time statistics to adjust national outbreak management guidelines (Chadee & Kitron, 1999). Today, the most popular method to locate malaria clusters is spatial scanning, particularly the SaTScan™ implementation (Martin Kulldorff, 1997; M. Kulldorff & Nagarwalla, 1995). This method divides the study area (and, for spatiotemporal clustering, potentially the study period when observations cover multiple time points) into overlapping windows, e.g. circular areas of the study region and time intervals within the study period, and varies the size of these windows. Within the windows, the numbers of observed vs. expected positive observations are compared.
The window with the maximum likelihood is considered the most likely cluster, often referred to as a 'hotspot'. The output of the calculation then is the location and size of that window, e.g. the circle on the map of the study area. Statistical tools for low-scale spatial risk clustering A cross-sectional survey in three villages in Cambodia's northeast, in Ratanakiri province in 2016, found clusters of P. vivax, P. falciparum, and P. malariae in sub-village areas (Durnez et al., 2018). In a cross-sectional survey along the border between French Guiana and Brazil at low prevalence of P. falciparum and P. vivax in 2017, hotspots could be located in remote neighbourhoods on village outskirts and small intra-village clusters [START_REF] Mosnier | Prevalence of Plasmodium spp. in the Amazonian Border Context (French Guiana-Brazil): Associated Factors and Spatial Distribution[END_REF]. During its malaria elimination phase, China used SaTScan analyses on retrospectively collected and geo-coded malaria cases to identify remaining transmission clusters in Shandong province. The study identified several distinct clusters for indigenous and imported cases, which was helpful in guiding elimination interventions [START_REF] Kong | Malaria control and prevention towards elimination: data from an eleven-year surveillance in Shandong Province, China[END_REF]. Often, when applied to observation time series, the SaTScan method finds hotspots, but these are not stable in time. For example, a cross-sectional survey with repeated data collection and sampling time points over two years in three villages in eastern Cambodia, Ratanakiri province, in 2016-2017 found hotspots in 2016. However, having lived in a hotspot in 2016 was not associated with a risk of Plasmodium spp. infection in 2017, whereas having lived in a household with a malaria case was associated with risk in the following year (Bannister-Tyrrell et al., 2019). Based on repeated cross-sectional surveys three months apart in 2013-2014 in three villages in western Cambodia, hotspots of P. vivax and P. falciparum infections were found, with clusters of subclinical infections close to clinical case clusters. However, hotspots were transient and not predictive of stable spatial clusters over time (Parker et al., 2017). Based on the household locations of dengue fever patients in Bangkok over five years, Salje et al. developed a method to generalize the spatial structure of the underlying dengue transmission dynamics (Salje et al., 2012). The authors also applied it to data on HIV and measles cases (Lessler, Salje, Grabowski, & Cummings, 2016). It made use of the knowledge that the viruses of some case pairs were more closely related than others, e.g. belonging to the same of the four dengue virus serotypes. The method then infers spatial clustering by dividing the probability that case pairs within a given distance interval are of the same serotype by the probability that any case pair is of the same serotype regardless of distance, computed across moving distance intervals. An increasing relative probability of more closely related case pairs at shorter distances indicates spatial clustering, see, e.g., Figure 6. Whereas SaTScan-based analyses aim to identify the locations of the clusters of infections (which may change over time), the spatial signature provides a description of the degree of clustering aggregated across a geographic area.
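The pair-based measure of Salje et al. can be sketched in a few lines of R. The function below computes, for each distance band, the probability that pairs of cases within that band are 'related' (e.g. of the same serotype, or both PCR-positive), relative to the same probability across all pairs regardless of distance; values above 1 indicate spatial clustering. The object names (coords, related) and the band widths are illustrative assumptions, not code from the cited studies; a maintained implementation of this family of statistics is available, e.g., in the IDSpatialStats R package.

```r
# Relative probability ("tau") of related case pairs by distance band.
# `coords`: n x 2 matrix of case coordinates (in metres);
# `related`: logical n-vector marking the relatedness trait of interest.
tau_statistic <- function(coords, related, breaks) {
  d <- as.matrix(dist(coords))                 # pairwise distances
  diag(d) <- NA                                # exclude self-pairs
  pair_related <- outer(related, related, "&") # both members of a pair related
  p_overall <- mean(pair_related[!is.na(d)])   # probability regardless of distance
  sapply(seq_len(length(breaks) - 1), function(i) {
    in_band <- d >= breaks[i] & d < breaks[i + 1]
    mean(pair_related[in_band], na.rm = TRUE) / p_overall
  })
}

# Example: moving 100 m bands within 1 km; tau > 1 suggests clustering.
# breaks <- seq(0, 1000, by = 100)
# tau <- tau_statistic(coords, related, breaks)
```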
Cambodia Background In the last two decades, Cambodia has seen a marked decrease in numbers of reported malaria cases from approximately 140,000 cases in 1999 to 60,000 in 2018 [1]. Likewise, mortality by malaria steadily reduced, with zero deaths reported for the first time in 2018. Similarly remarkable reductions have been observed in the other countries of the Greater Mekong Subregion [2]. Since 2014, these countries have pursued the goal of eliminating all malaria by 2030 [3]. This ambitious aim, however, faces several obstacles. After an all-time low of approximately 20,000 reported cases in 2016, numbers increased again in 2017-2018 [1,2]. While the annual case numbers increased for both Plasmodium falciparum and Plasmodium vivax in 2017, they declined for P. falciparum in 2018 but increased further for P. vivax [1,3]. Consequently, P. vivax now accounts for almost three-quarters of all malaria cases in Cambodia [2,4]. The national malaria control programme (NMCP) is based on mass distribution campaigns of long-lasting insecticidal nets (LLIN), diagnosis by light microscopy (LM) and rapid diagnostic tests (RDT) in the public health sector, RDT-based testing by village/mobile malaria workers (VMW/MMW), and treatment of blood stage infections with artemisinin-based combination therapy (ACT) [5]. The spread of multi-drug resistant P. falciparum malaria in Western Cambodia [6] might explain the resurgence of P. falciparum cases in 2017, and the change of first-line treatment from dihydroartemisinin-piperaquine to artesunate-mefloquine might explain their subsequent decline in 2018. For P. vivax, such resistance is not reported. However, this species has dormant liver stages called hypnozoites that escape acute blood stage treatment and cause relapsing infections. Therefore, although LLINs do reduce P. vivax transmission and ACT is effective against blood stage P. vivax infection, neither intervention targets the hypnozoite reservoir. Hence, there is a less marked effect of these interventions on P. vivax than on P. falciparum. The only licensed drug with hypnozoiticidal effect is primaquine, typically administered at 0.25 mg/kg daily over 14 days in Cambodia. The roll-out of P. vivax radical cure is part of the national treatment guidelines but has not yet been implemented because of concerns over the drug's potential haemolytic effect in patients with G6PD deficiency [7,8]. In summary, the control and elimination efforts of the NMCP successfully target clinical P. falciparum malaria (understandably in the context of drug-resistant P. falciparum [6] in the country) but to a lesser extent P. vivax. Another obstacle to elimination is the high prevalence of asymptomatic infections [9-12]. Neither health facilities nor VMWs reach these asymptomatic carriers as they do not seek diagnosis or treatment. Although of lower infectivity [13,14], they nevertheless remain a potential source of onward transmission. Alongside the sustained reduction in countrywide incidence, malaria has also become increasingly fragmented, affecting fewer, usually remote provinces. The highest incidence of malaria in Cambodia and across the Greater Mekong Subregion is now particularly found in the north-eastern provinces such as Ratanakiri and Mondulkiri [1,3]. These settings are characterized by a high proportion of ethnic minorities and agricultural or wood-logging activities as the main income sources.
Exposure to Anopheline mosquitoes and Plasmodium in Cambodia is understood to primarily occur in the forest rather than peri-domestically [15-17]. Together with the Western province Pursat, the highest incidence per province in 2018 was reported for Mondulkiri [1]. However, previous studies on population-level burden and risk of malaria in Cambodia focused on the country's West and North [12, 16-21] or the Eastern province of Ratanakiri [10,11,15,20,22,23]. Here, using a cross-sectional survey conducted in a rural area of Mondulkiri in the dry season in 2018, estimates of the prevalence of Plasmodium infection are shown and key risk factors identified. Methods Study area and census Seventeen villages were selected in the Kaev Seima district, Mondulkiri province, in the Kingdom of Cambodia. The rainy season in Cambodia normally runs from June to October, with the high transmission period from June through December [1,4,24]. Approximately two-thirds of the population in Mondulkiri consists of national ethnic minorities, with the Phnong ethnic group forming the largest proportion [25,26]. In November-December 2017, a census was conducted by visiting each household in the 17 villages and collecting basic demographics of household members such as age and gender. A person's household was defined as the location of main residence in the village according to adult members of the household or the village head. GPS coordinates were collected using Garmin® GPSMAP® 64s devices. When no adult household member could be found, demographic information was obtained from the village head's registry book. Cross-sectional survey Based on the census, a random selection of households was drawn, oversampling small villages to ensure sufficient coverage (Additional file 1: Table ST1). Selected households were visited from mid-December 2017 until mid-April 2018, and all household members aged 2-80 years who had resided in the study area for at least 3 months were invited to participate in the survey. Upon informed consent, a questionnaire on household variables was administered to the head of household or another adult household member by trained interviewers. A questionnaire on individual-level variables was administered to each consenting household member. Children were interviewed with the assistance of a parent (or, rarely, another caregiver in the absence of the parents). Data were collected on tablets and run through automated data quality checks within days after the interview. In case of missing data or discrepancies, the field team was informed for immediate resolution where possible. Fingerprick blood samples were collected as thick and thin film slides and in K+EDTA microtainers. Participants were screened for symptoms, i.e. feeling sick or feverish on the day of interview, having felt feverish over the preceding two days, or having an axillary body temperature of at least 37.5 °C. Upon any indication, the participant was administered a standard malaria RDT (Malaria Ag P.f/P.v, Standard Diagnostics Inc., South Korea) and referred to a local health care provider for treatment if positive.
Detection of infection The microtainer blood samples collected at the interview site were stored in 4 °C ice boxes. At a field laboratory, they were separated into plasma and cell pellet and frozen at -20 °C. Following transport to the main laboratory at Institut Pasteur of Cambodia in Phnom Penh, cell pellets were stored at -20 °C and plasma at -80 °C. Infections with any of the four human Plasmodium parasites were determined by real-time PCR [27]. In case of a positive genus-specific result, qPCR specific for P. falciparum, P. vivax, Plasmodium malariae, and Plasmodium ovale followed. All positive samples and a random selection of 10% of negative samples were assessed by independent double LM readings of asexual and sexual stages, and parasite densities were calculated. No discrepancy in parasite densities above 30% occurred. Parasites were counted per approximately 500 leukocytes, and densities were inferred assuming 8,000 leukocytes per microlitre of blood. Descriptive analyses Villages were classified based on the forest cover in a 750 m radius around the households, computed from the land cover analysis in [28] (Additional file 1: Figure S1). Villages with ≥ 50% of households with ≥ 10% forest cover in their vicinity were considered "inside the forest", those with ≥ 30% of households with ≥ 5% forest cover "at the forest fringe", and "outside the forest" otherwise. Because of very low sample sizes, the two small, neighbouring villages Beng (11 individuals) and Gaty (95 individuals) were analysed as one. Population prevalence was estimated based on post-sampling weights assigned to each participant according to their representation by village, gender, and 10-year age bins compared to the census population (raw numbers of positive survey samples accompany the estimates in brackets). Categorical covariates were compared using the chi-squared test, or Fisher's exact test when low stratum sizes required it. Risk factor analyses The association of covariates with infection by P. vivax, P. falciparum, or all four species was assessed by mixed-effects logistic regression with random intercepts per household and village. Those covariates that were statistically significantly associated with Plasmodium infection at two-tailed α = 5% in univariate regressions were included in the multivariate model. The villages' proximity to the forest, work-unrelated overnight travels (incl. to forest sites), and work in the deep forest were considered part of the multivariate model a priori in order to assess the association of forest exposure with risk of infection. Having slept outdoors and having slept under a bed net the previous night were kept as part of the model based on causal reasoning and prior knowledge, given their association with infection risk in previous publications, e.g. [11]. Gender and age were retained in the model as proxies for risk-related behaviour and thus potential confounders. The other covariates comprised potential proxies for socio-economic status or further exposure variables, and the subset significantly associated with infection risk was assessed by backwards variable selection. The Akaike Information Criterion (AIC) was used to assess model fit. If collinearities and interactions were identified, covariates were retained by comparing the fit of the respective models by AIC. Statistical significance in the multivariate regressions was calculated per covariate by likelihood ratio tests.
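The following R sketch illustrates the analysis steps described above: converting LM counts to parasite densities, deriving post-sampling weights from the census, and fitting the mixed-effects logistic regression with random intercepts for household nested within village. All data frames and column names (census, survey, infected, etc.) are hypothetical placeholders rather than the study's actual code, which was run in R 3.6.3.

```r
library(dplyr)
library(lme4)

# Parasite densities from LM counts: parasites per ~500 leukocytes,
# scaled assuming 8,000 leukocytes per microlitre of blood.
survey <- survey %>%
  mutate(density_per_uL = parasites_counted / leukocytes_counted * 8000)

# Post-sampling weights: census count / survey count per stratum of
# village, gender, and 10-year age bin.
strata <- c("village", "gender", "age_bin")
weights_tbl <- census %>%
  count(across(all_of(strata)), name = "n_census") %>%
  left_join(count(survey, across(all_of(strata)), name = "n_survey"),
            by = strata) %>%
  mutate(w = n_census / n_survey)
survey <- left_join(survey, weights_tbl, by = strata)

# Weighted population prevalence estimate.
prevalence <- with(survey, sum(w * infected) / sum(w))

# Mixed-effects logistic regression with random intercepts per
# household nested within village.
fit <- glmer(infected ~ gender + age_bin + forest_proximity +
               deep_forest_work + overnight_forest_travel +
               slept_outdoors + bed_net_last_night +
               (1 | village/household),
             family = binomial, data = survey)

# Per-covariate significance by likelihood ratio tests.
drop1(fit, test = "Chisq")
```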
Spatial hotspots were identified by a purely spatial scan statistic via a discrete Poisson model with at most a third of the population in a scanning window, adjusted for all covariates of the final multivariate model except for the villages' forest proximity [29]. Software All questionnaires were administered on tablets using the REDCap (Research Electronic Data Capture) software hosted at Institut Pasteur in Paris [30,31]. Data quality control as well as all descriptive and analytical statistics were performed in R 3.6.3 [32]. The SaTScan™ software version 9.6 was used for the spatial scan statistic [33]. Results Census, survey representativeness and survey population From among the 10,053 individuals in 2,351 households identified in the census, the survey recruited 4,200 participants from 1,147 households and oversampled smaller villages on average (Additional file 1: Table ST1). A map of the survey households and the categorization of villages by proximity to the forest is shown in Fig. 1a. Mean age was 26 years in both the census and the survey; with 51% (5,135/10,053) and 53% (2,231/4,200) women, respectively, women were slightly oversampled in the survey. In men, ages of 16 to 40 years were mildly underrepresented (Fig. 1b). The main income source for the survey households was, by a large majority, farming (89.8%, 1,030/1,147). In terms of mobility, three-quarters (75.9%, 3,188/4,200) of the survey participants reported work-unrelated trips to forest or field sites in the last month (8.5%, 358/4,200) or any work trip in the last two months (75.8%, 3,184/4,200). The vast majority of these trips were short and frequent: stays longer than a week were reported in only 4.4% (140/3,188) of instances, and among those who reported any work trip, 89.4% (2,847/3,184) went for work at least once a week. Plasmodium vivax predominated over other human malaria parasites Infection by Plasmodium parasites was detected by PCR in 8.3% (349/4,200) of participants. The proportion of samples positive for P. vivax was 6.4% (268/4,200) compared to 3.0% (125/4,200) for P. falciparum (p < 0.001, Table 1). Plasmodium vivax was found in 77% (268/349) of all infections compared to 36% (125/349) for P. falciparum. Four samples were positive for P. malariae mono-infections and none for P. ovale. Speciation by PCR was unsuccessful in 4 samples, which were analysed as negative. Extrapolation to the entire Kaev Seima population yielded an estimated prevalence of 8.9% for Plasmodium infection, 6.8% for P. vivax, and 3.3% for P. falciparum. Estimated prevalence was highly heterogeneous across the villages, ranging from 0.6% (3/594) to 36.3% (54/152) and from 0% (0/594) to 25.1% (27/106) for P. vivax and P. falciparum, respectively (both p < 0.001, Table 1). Village-level prevalence of P. vivax infection was on average higher in villages inside the forest compared with those at the forest fringe and outside the forest (medians 23.2%, 7.2%, and 5.6%, respectively, Fig. 2; similarly for P. falciparum in Additional file 1: Fig. S2). Plasmodium vivax infections were least detectable by the health care system Of all PCR-detected infections, 83.2% (223/268) were sub-microscopic for P. vivax compared to 74.4% (93/125) for P. falciparum (p≈0.056, Table 2). Plasmodium vivax infections coincided less often with reported or measured symptoms at the interview than those by P. falciparum, with 6.7% (18/268) and 16.8% (21/125), respectively (p < 0.01). Only 2.2% (6/268) of all P. vivax infections were symptomatic and also positive by RDT or LM (i.e.
detectable through the Cambodian health care system), less often than for P. falciparum (7.2%, 9/125, p < 0.05). Asexual parasite stages were detected in all LM+ samples for P. vivax (37/37) and in most (81.5%, 22/27) for P. falciparum (Table 3). In LM+ samples, the geometric mean parasite densities were 149.7 parasites/μL and 431.3 parasites/μL (t-test: p≈0.07), respectively. While gametocytes were found in only one (2.7%, 1/37) P. vivax LM+ sample, more than a third (37.0%, 10/27) of LM+ samples for P. falciparum were gametocyte-positive, with a geometric mean of 51.1 gametocytes/μL. Prevalence was highest in men of working age Estimated prevalence was more than twice as high in men as in women, i.e. 10.4% (188/1,969) vs. 3.6% (80/2,231) for P. vivax (p < 0.001), 4.8% (83/1,969) vs. 1.9% (42/2,231) for P. falciparum (p < 0.001), and 13.3% (239/1,969) vs. 5.0% (110/2,231) regardless of species (p < 0.001). The patterns across age were similarly heterogeneous for both species (Additional file 1: Figure S3). While there was no difference in genus-wide prevalence across age in women, men showed an elevated risk at working age (p < 0.001), regardless of the proximity of the village to the forest (Fig. 3). A fitted interaction term of gender and age was significant in the stratum of villages outside the forest (p < 0.001), but not across those villages at the forest fringe or inside the forest. Risk profiles for P. vivax and P. falciparum infections were similar Risk of infection was associated with individual covariates for both P. vivax and P. falciparum in a highly similar fashion (Table 4 for behavioural variables; the full list of variables with odds ratios is in Additional file 1: Table ST2). Table 1 Prevalence of PCR-detected infections, as proportions of positive blood samples or as census population-level estimates by extrapolation via post-sampling weights (raw numbers of positive samples in brackets). For both species, prevalence was associated with work-unrelated overnight travels and was highest if those occurred to forest sites (22.4%, 60/268, and 14.9%, 40/268, for P. vivax and P. falciparum, respectively) and lowest for urban destinations (1.7%, 1/59, and 0%, 0/59; both p < 0.001). Infections of both species were also more prevalent among those who reported work trips to sites in the nearby and deep forest (assessed separately). In particular, a significantly higher prevalence was observed in participants who reported work trips into the deep forest (25.5%, 37/145, and 18.6%, 27/145) compared to those who did not (5.7%, 231/4,055, and 2.4%, 98/4,055, for P. vivax and P. falciparum, respectively, p < 0.001). Once clustering at household and village level was taken into account in univariate mixed-effects logistic regression, however, work in the nearby forest was no longer significantly associated with risk of infection (Additional file 1: Table ST2). There were fewer infections among those who reported the use of standard protection measures such as sleeping under a bed net (6.1%, 236/3,880, and 2.7%, 104/3,880, vs. 10.0%, 32/320, and 6.6%, 21/320, if no net was used; p < 0.01 and p < 0.001 for P. vivax and P. falciparum, respectively). A higher risk in men, at working age, in villages at the forest fringe or inside the forest, and with indicators of lower socio-economic status was also found similarly for both species (Additional file 1: Table ST2).
Working in and travelling to the forest were strong risk factors of infection Behavioural covariates related to activities in the forest were statistically significant in the multivariate model of Plasmodium infection (Table 5). Having travelled to forest sites (nearby or deep forest) or having worked in the deep forest independently increased the odds of infection two- to three-fold (adjusted odds ratio, aOR, 2.17, p < 0.01, and aOR 2.88, p < 0.001, respectively). Risk of infection was not significantly associated with the other behavioural covariates that were included in the model a priori, namely having slept outdoors (aOR 1.99, p≈0.08) and under a bed net (aOR 0.99, p≈0.96). Risk of infection was increased in males (aOR 3.06, p < 0.001) and at working age (aOR 7.84 in 21-25-year-olds compared to children, p < 0.001). A fitted interaction term of gender and age improved the model (AIC 1,786 vs. 1,802 without interaction, p < 0.001, Additional file 1: Table ST3). Other covariates linked higher socio-economic status with lower odds of infection, such as living in a house with a roof built of relatively high-quality material (aOR 0.50, p < 0.01). The covariate on recent information on malaria via TV (aOR 0.37, p < 0.01) most likely also acts as a proxy for higher socio-economic status, i.e. being able to afford a TV in the first place. Descriptive characterization of the risk factors forest work and forest travels Work-unrelated overnight travel to forest sites was a predominantly male domain (reported by 10.7%, 211/1,969, of the men vs. Residing in a village inside the forest was an independent spatial risk factor Living in a village inside the forest remained associated with risk of infection when adjusting for the significant demographic, socio-economic, and behavioural covariates (aOR 12.47, p < 0.001, Table 5). Households from all four villages inside the forest also formed hotspots of Plasmodium infection, i.e. purely spatial clusters of elevated risk of infection that cannot be explained by the other covariates (Fig. 4, Additional file 1: Tables ST4-ST5). Discussion This is the first detailed study on the prevalence of Plasmodium infection and associated risk factors at population level in Mondulkiri province in Cambodia. The overall prevalence of Plasmodium infection is consistent with that observed in the neighbouring province of Ratanakiri [10,11,22] or other endemic provinces in the West of the country [16]. Plasmodium falciparum has long been the predominant species countrywide [9,[START_REF] Gomez-Barroso | Spatial clustering and risk factors of malaria infections in Bata district, Equatorial Guinea[END_REF]. However, following a scale-up in the VMW programmes, a steady decrease in reported P. falciparum cases was observed in 2009-2011, while numbers of reported P. vivax cases increased [24]. In recent years, cases of both species were reported at an almost equal share [4], and in 2018 P. vivax accounted for approximately three-quarters of reported cases [2]. That P. vivax started to predominate over P. falciparum in Cambodia even earlier is suggested by other studies in high-incidence provinces where molecular and serological diagnosis identified a higher prevalence of P. vivax than P. falciparum infections [11,12,15-19], consistent with the presented study in Mondulkiri. In this survey, 7.2% of the P. falciparum infections could be identified by symptoms and a positive result by LM or RDTs, i.e. the diagnostics used by the health system, compared to only 2.2% for P. vivax.
Low sensitivity of RDTs for P. vivax is a commonly reported problem [12, 35-38]. In addition, higher proportions of asymptomatic infections for P. vivax are regularly observed, as the relapses can lead to higher levels of immunity and thus overall lower parasite densities [39,40]. Data from studies using membrane feeding assays suggest that the densities of asexual stages and gametocytes in the asymptomatic, LM-positive samples in this survey are infectious to mosquitoes [13,14,41-45]. It can thus not be ruled out that these subclinical infections contribute to the ongoing transmission in this area. More asymptomatic infections, lower diagnostic sensitivity for P. vivax, high transmissibility, and the lack of radical cure by primaquine all might add up as possible explanations why the control efforts have been less successful against P. vivax than against P. falciparum. In order to address this high burden of P. vivax, Cambodia is now piloting the use of primaquine against P. vivax infections in four provinces. While an essential step, the effect of radical cure will be limited without point-of-care diagnostics that are more sensitive for P. vivax infections and, ideally, a test allowing the detection of hypnozoite carriage (irrespective of blood stage parasitaemia). New advances in the use of multiple antigens to detect antibodies as serological markers for recent exposure, and thus for potential hypnozoite carriers, are promising in this respect [46]. With as many as 40% of the inhabitants infected in the highest-prevalence village in this survey, it may be necessary to complement the current malaria control efforts by active or reactive case detection schemes [47,48] to reduce the infected (and potentially infectious) reservoir and eventually reach the goal of malaria elimination. Targeting such resource-intensive interventions to high-risk groups will render them significantly more cost-efficient. The similarity of the risk profiles for both P. falciparum and P. vivax presented here is encouraging, as it suggests that such targeted interventions would appropriately address both species. This study emphasizes the predominant role of forest malaria transmission in explaining elevated risk of infection in remote rural areas of Southeast Asia. Previous cross-sectional surveys have also identified an association of infection risk with either work in the forest [15,17,49] or time spent therein [16,50]. However, this study is unique in retaining them both as statistically significant risk factors in one multivariate model while also adjusting for gender, age, socio-economic proxies, and other behavioural covariates. While work in and travels into the forest are most frequent in adolescent and adult men, who are usually considered the forest-related risk group, this behaviour also occurred in women and in participants ranging from children to people in their sixties, and thus actually extends to a much broader range of the population. On the relatively small spatial scale of 17 villages in an area of 22 × 26 km, the prevalence ranged from less than 1% to above 40% in the dry season. The trend towards higher prevalence in villages at the forest fringe and inside the forest compared to those outside the forest is in line with the widely accepted notion of forest-related mosquito abundance and exposure in Cambodia. Other studies have also identified a higher prevalence in villages at close proximity to the forest [21,49,50].
It is apparent that the association of risk with living near or inside the forest was retained in this study even when adjusting for demographic, socio-economic, occupational, and other behavioural differences. Not more than 60% of infections could be attributed to the strong behavioural, forest-related risk factors. In villages outside the forest or at the forest fringe, most infections indeed occurred in working-age men. The predominance of this occupational risk group was much lower in forest fringe villages and entirely lost in villages situated inside the forest. Taking gender and age as proxies for risk-related behaviour and occupational activities, this could indicate that in villages outside the forest, risk can be explained almost completely by such occupational exposure. By contrast, for villages within the forest, the more homogeneous presence of infection across all age groups in both males and females indicates additional exposure inside the village. This is in line with other studies in which household-level risk factors in forested areas were important to explain infection risk [10] or infections clustered in households over 2 years [START_REF] Bannister-Tyrrell | Households or hotspots? Defining intervention targets for malaria elimination in Ratanakiri province, Eastern Cambodia[END_REF] in Ratanakiri, the other Eastern province of Cambodia. The current WHO report on malaria eradication calls for subnational stratification of intervention programmes [START_REF]Malaria eradication: benefits, future scenarios & feasibility[END_REF][START_REF] Stresman | Malaria hotspots: is there epidemiological evidence for fine-scale spatial targeting of interventions?[END_REF]. This study suggests that such a stratification of interventions could happen along a gradient of villages inside or outside the forest. Population-oriented interventions such as targeted mass drug administration or test-and-treat programmes with sensitive molecular or serological diagnostics [46,[START_REF] Lek | Tools to accelerate falciparum malaria elimination in Cambodia: a meeting report[END_REF]] (together with their costs and risk of overtreatment) could be justified in forest villages. By contrast, approaches targeted at risk groups only may be effective at lower costs for the NMCP and the local population in residential settings outside of the forest. Conclusion Despite a substantial reduction in malaria burden in Cambodia over the last decades, this study demonstrates that pockets of high malaria prevalence persist in the country. Given that the vast majority of infections were asymptomatic, the study strengthens the argument to enhance malaria elimination efforts by measures of (re)active detection that also capture asymptomatic infections. Plasmodium vivax infections were detected at a higher prevalence than P. falciparum infections, were more often asymptomatic, and were less detectable by RDT and LM. Consequently, novel tools for the identification of hypnozoite carriers could play a key role in the presented setting, as could the roll-out of routine radical cure treatment. The study corroborates the notion of forest malaria in the Greater Mekong Subregion by demonstrating the independent association of travels to the forest, of forest work, and of living in close proximity to forest with malaria infection risk. However, the study also demonstrates that infection risk is less confined to forest-goers in sub-populations that already live in forested areas and suggests within-village transmission therein.
A focus of interventions solely on forest-goers could thus be insufficient to reach the goal of nation-wide malaria elimination in due time. In the light of the presented results, interventions could be targeted at whole villages inside the forest, and at stratified risk groups in villages outside the forest. Background Globally, enormous successes have been achieved over the last decades in reducing the malaria burden (1). The main instruments were the distribution of long-lasting insecticide-treated nets and clinical case management after passive case detection. The strategies deployed so far have proven particularly successful in battling P. falciparum transmission. However, P. vivax causes relapses (2) and, to a higher degree, asymptomatic low-parasitaemia infections that escape passive case detection but contribute to transmission (3,4). The effectiveness against P. vivax malaria has thus been less pronounced, so that P. vivax now predominates over P. falciparum in many areas (1). In Latin America, South East Asia, and the Western Pacific, the overall reduction of malaria burden in many countries has paved the way for the ambitious goal of malaria elimination by 2030 (1,5). The necessity to incorporate active finding of infections in order to tackle the silent reservoir of ongoing transmission is increasingly acknowledged (3,4,6-9). One strategy considered as part of current recommendations for malaria elimination is reactive interventions, i.e. active finding, testing, and treatment of other infections in a delimited area around passively detected malaria cases (10). However, the radius of the area in which to optimally implement interventions remains unclear. Infection with human Plasmodium spp. through an Anopheles spp. mosquito bite depends on a complex interplay of factors of the pathogen, host, vector, and environment. Each of these factors, such as proximity to the vector's breeding grounds, bed net usage, Anopheles species composition, and host immunity, can vary geographically (11,12). Geographical heterogeneity in malaria infections appears across multiple spatial scales, varying from between households (13-16) to within and between villages (9,11). At low incidence, the possibility to identify single malaria transmission clusters has long been reported (17). Reducing malaria transmission country-wide usually leads to a geographical fragmentation of areas where malaria risk remains high, further increasing spatial heterogeneity (18). Tailoring interventions to high-risk areas within these countries is at the core of recommended strategies for malaria elimination (10,19). Obtaining a good understanding of the spatial dependence between malaria infections will help underpin spatially targeted control efforts. To date, most efforts to characterise the spatial clustering of malaria have relied on comparing infection prevalence at community or household scales (11,13). In addition, some studies have used space-time clustering statistics such as the k-nearest neighbour method to test whether spatiotemporal clustering of cases occurred (17). The introduction of spatial scanning (20,21) to malaria epidemiology further stimulated studies on spatial heterogeneity in disease or infection risk. This method scans the study area for (size-variable) circular areas with statistically significant clustering of infections ('hotspots').
Such hotspots were identified in diverse study areas in South America (22-26), Africa (27-39), and South East Asia (9,40-44), for P. falciparum (25,32,40,41), P. vivax (25,26,32,40,41), and P. malariae (40,41). However, the method's effectiveness in forecasting hotspots remains unclear, and it does not generalize the spatial patterns between infections. In this study, we introduce a global clustering statistic to malaria epidemiology, borrowing strongly from earlier developments in the epidemiology of dengue and other pathogens (45,46). The spatial signature quantifies both the magnitude of clustering of infections and the radius within which it is statistically significant, potentially informing reactive interventions as part of malaria elimination endeavours. We reveal spatial signatures of PCR-confirmed P. vivax and P. falciparum infections by assessing the pooled prevalence of infections around confirmed index infections in widening spatial and temporal windows. We demonstrate spatial and temporal clustering of infections in both cross-sectional surveys and cohort studies from a diverse spectrum of study sites in Brazil (6,47), Thailand (7,8), Cambodia (9), and Solomon Islands (48). Methods Cross-sectional malaria prevalence surveys We used data from 5 cross-sectional surveys from 4 countries, i.e. Brazil, Thailand, Cambodia, and Solomon Islands. The study designs and basic descriptions are summarized in Table 1. We included a cross-sectional survey in 17 villages in the high-incidence, rural province of Mondulkiri, in North-Eastern Cambodia, in the dry season from December 2017 until April 2018 (as described in (9)). The 4,200 study participants were 2-79 years old and from randomly sampled households based on a census prior to the survey. We classified villages of residence into categories of proximity to the forest based on a forest cover remote sensing analysis (49). In this and all following included studies, we tested finger-prick blood samples for P. vivax and P. falciparum infections by real-time PCR and collected GPS locations of households. (Table 1 notes: * Range of monthly prevalence. † Some individuals had reported ages greater than 100 years. In these cases, record keeping was considered not sufficiently robust to provide an accurate estimate of age.) From Kanchanaburi and Ratchaburi provinces in western Thailand, we considered data from a cross-sectional survey in September-October 2012 (7). We enrolled 4,309 study participants aged 0-92 years. We added data from two cross-sectional surveys in a peri-urban part of Manaus, Amazonas State, Brazil, from November 2012 until January 2013 (rainy season) and from August until September 2013 (dry season), respectively (6). After all inhabitants had been invited to the study, 2,010 and 2,073 participated in the two surveys, respectively (963 in both surveys). From Solomon Islands, we used data from a cross-sectional survey in several villages across Ngella, Central Islands province, from May until June 2012 (48). From a representative household-based sample, 3,501 individuals agreed to participate. Longitudinal cohort studies In addition to the cross-sectional surveys, we used data from two longitudinal cohort studies with similar designs: monthly active follow-up with finger-prick blood-sampling for P. vivax and P.
One study was conducted in two villages in Kanchanaburi and Ratchaburi provinces, Thailand, respectively, from May 2013 until June 2014, enrolling 999 participants (Table 1) (8). The other study took place in the peri-urban part of Manaus, Brazil, from April 2013 until March 2014, with 1,274 enrolled participants (47).

Spatial signature of malaria infection

In order to characterise the spatial dependence between infections in both the cross-sectional surveys and for each follow-up period in the longitudinal studies, we calculated the average prevalence 𝜋(𝑑) of PCR-positive infections within a distance 𝑑 of PCR-positive "index" infections. To do this, we identified the study participants that lived within distance 𝑑 of each PCR-positive participant and calculated the proportion that were also PCR-positive. We varied 𝑑 in incremental steps of 1 km for 𝜋 across the entire study region, and of 50 m for 𝜋 within 1 km. We compared the observed prevalence with that expected under a null distribution where no spatial dependence exists. We generated a bootstrap null distribution of 𝜋 after 10,000 random re-allocations of infection locations and defined statistical significance outside of the null's 95th percentile interval.

For the cohort studies, we calculated the spatial signature for P. vivax infections as above, pooling across the monthly study visits, i.e. considering only index-neighbour pairs in the same study month. If multiple monthly PCR results were positive for an individual, we considered only the first, "incident" positive result. We provide the underlying distribution of pairwise distances between the study participants per study site, as well as the underlying distribution of pairwise distances between infections and all study participants, in the Supplementary Figures (S1.1a-S1.7).

Spatiotemporal signature of malaria infection

In the two longitudinal cohort studies, we also explored the extent of spatiotemporal dependence. We calculated the period prevalence 𝜌(𝑑, 𝑡) of PCR-positive neighbouring infections within a distance 𝑑 and within a time window 𝑡 of all incident PCR-positive index infections for P. vivax, as the ratio of the number of pairs of index and neighbouring incident infections within 𝑑 and 𝑡 to the number of pairs of index infections and study participants within 𝑑 (in steps of 1 km for 𝜌 across the entire study region; of 50 m for 𝜌 within 1 km). A study participant was considered an incident infection at the first study visit with a PCR-positive blood sample. We generated a bootstrap null distribution of 𝜌 after 1,000 random re-allocations of locations of infection time series.

From the cohort study in Thailand, PCR-positive blood samples for P. vivax were genotyped as described in (8). We calculated spatiotemporal signatures as described above but respecting only P. vivax infection pairs with a matching genotype.

Comparative analysis of distance to 50% reduction in prevalence

From each spatial signature across the cross-sectional surveys and cohort studies, we drew the distance within which the prevalence around index infections has fallen by 50% (between the prevalence at 0 m and the study's global prevalence). The distance was approximated as the mean of the pair of 50 m-increments of distance 𝑑 within which the spatial signature falls below this 50% threshold.

Software

We described, analysed, and visualised the data in R 4.1.2 (50). We created the maps with the ggmap package in R and background Landsat-8 image courtesy of the U.S. Geological Survey.
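To make the computation concrete, the following is a minimal R sketch of the spatial signature, its bootstrap null band, and the distance to 50% reduction. All variable and function names are hypothetical, and the sketch assumes projected household coordinates in metres with a logical PCR-positivity vector; it illustrates the definitions above rather than reproducing the study code. The spatiotemporal version 𝜌(𝑑, 𝑡) would additionally restrict pairs to those whose visit dates differ by at most 𝑡.

```r
# Sketch: pooled prevalence pi(d) around index infections, a bootstrap null
# band, and the distance to 50% reduction. Hypothetical inputs:
#   coords: n x 2 matrix of projected household coordinates (metres)
#   pos:    logical vector of length n, PCR positivity of each participant
signature_from_D <- function(D, pos, dists) {
  sapply(dists, function(d) {
    within_d <- (D <= d)[pos, , drop = FALSE]           # rows: index infections
    sum(within_d[, pos, drop = FALSE]) / sum(within_d)  # positive / all neighbours
  })
}

D <- as.matrix(dist(coords))
diag(D) <- Inf                    # an index infection is not its own neighbour
dists <- seq(0, 1000, by = 50)    # 50 m increments within 1 km; d = 0 keeps
                                  # co-located pairs, e.g. household members
pi_obs <- signature_from_D(D, pos, dists)

# Null distribution: re-allocate infection status across locations
# (10,000 draws as in the study; written for clarity, not speed)
pi_null <- replicate(10000, signature_from_D(D, sample(pos), dists))
null_band <- apply(pi_null, 1, quantile, probs = c(0.025, 0.975), na.rm = TRUE)

# Distance to 50% reduction between pi(0 m) and the global prevalence
half <- mean(pos) + (pi_obs[1] - mean(pos)) / 2
i <- which(pi_obs < half)[1]
d50 <- mean(dists[(i - 1):i])     # mean of the two 50 m increments straddling the drop
```

Pooling over study months in the cohort analyses amounts to accumulating the numerator and denominator only over pairs observed in the same month before taking the ratio.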
Results

Spatial signatures from cross-sectional surveys

In the Cambodian cross-sectional survey, P. vivax and P. falciparum infections were found in all villages across the study region (Figure 1A). The prevalence of P. vivax around index infections decreased threefold, from 21.3% at 0 km to the global study prevalence of 6.4% at maximum distance (Figure 1B). The estimate at 0 km may include household members and inhabitants of the same building. For P. falciparum, prevalence decreased from 12.9% to 3.0%. Within 1 km, it decreased to 13.3% for P. vivax infections and to 7.9% for P. falciparum infections. Amongst the villages inside the forest, P. vivax prevalence decreased from 38.4% at 0 km to 28.2% within 1 km (Figure 1C). For P. falciparum, prevalence fell from 23.9% to 17.3%. For both species, prevalence remains spatially clustered beyond the null distribution within 1 km amongst the villages inside and outside the forest, but to a lesser degree, if at all, amongst the villages at the forest fringe.

In the cross-sectional survey across three sites in western Thailand (Figure 2A), prevalence of P. vivax infections around index infections decayed from 6.6% at 0 km to 3.6% at 1 km, and for P. falciparum from 3.1% to 1.0% (Figure 2B). Significant spatial clustering for P. vivax infections was found only within a radius of 75 m. In the peri-urban part of Manaus, P. vivax and P. falciparum infections were detected in the first of the two cross-sectional surveys (Figure 2C) and P. vivax infections in the second survey (maps in the Supplementary Figure S3). Only one P. falciparum infection was found in the second survey, precluding any signature analysis. P. vivax prevalence dropped from 11.4% at 0 km around index infections to 5.3% within 1 km in the first survey, and from 9.6% to 3.7% in the second survey (Figure 2D). Prevalence around P. vivax index infections remained higher than the null distribution within 1 km in the first survey and up to 70 m in the second survey. Prevalence of P. falciparum infections in the first survey did not follow a decaying pattern.

In the cross-sectional survey in Solomon Islands, P. vivax prevalence was 23.7% at 0 km around index infections and hovered between 19.3% and 21.8% until a distance of 1 km (a decaying pattern is more apparent from the signature across the entire study region in the Supplementary Figure S5). For P. falciparum, prevalence decreased from 11.1% at 0 km around index infections to 1.8% within 1 km. Prevalence was elevated above the null distribution for both species across the distance of 1 km.

Spatial and spatiotemporal signatures from cohort studies

Pooling across the study visits of the longitudinal cohort studies, P. vivax prevalence around index infections per study visit decreased from 9.5% at 0 km to 4.1% at 1 km in the study in Thailand (Figure 3B) and from 13.1% to 6.4% in the Brazilian study (Figure 3E). The signature remains above the null distribution until 250 m in the Thai study and entirely within 1 km in the study in Brazil.

In the cohort study in Thailand, P. vivax period prevalence decreased from 4.5% at 0 km around incident infections to 2.3% within 1 km and the same study month, from 4.3% to 2.3% within 2 months, and from 3.1% to 1.9% across the entire study period of 14 months (Figure 3C).
Period prevalence was above the null distribution until 150 m within the same study month and within 2 months, and until 25 m for infections across the study period. In the Brazilian cohort study, P. vivax period prevalence fell from 7.3% at 0 km to 3.5% within 1 km and the same study month, from 5.7% to 3.1% within 2 months, and from 4.7% to 2.9% across the entire study period of 13 months (Figure 3F). Period prevalence was above the null distribution until 150 m within the same study month, until 250 m for infections within 2 months, and until 100 m for infections across the study period.

Considering only infection pairs with a matching genotype in the cohort study from Thailand, P. vivax period prevalence decreased from 1.4% at 0 km to 0.4% within 1 km and the same study month, from 0.9% to 0.3% for infections up to 2 months apart, and from 0.7% to 0.2% across the entire study period of 14 months (Figure 4). Period prevalence was above the null distribution until 200 m within the same study month, until 250 m for infections within 2 months, and until 250 m for infections across the study period.

The distances within which the prevalence around index infections fell by 50% are summarized per study (Table 2, Figure 5). Overall, the distance to 50% reduction tends to be shorter with lower prevalence.

Legend: From the spatial signatures (for cross-sectional surveys) or spatiotemporal signatures at a time window of '0 months', i.e. the same month (for cohort studies), for P. vivax and P. falciparum.

Discussion

Across multiple studies in diverse malaria-endemic regions, we demonstrate clustering of P. vivax and P. falciparum infections in close proximity around index infections, decreasing with distance. Comparing the signature to a null distribution after random re-allocation of infection locations shows statistically significant clustering in most settings. In a village in Senegal, we have previously shown spatial clustering of P. falciparum infections with this method (14). The finding of spatial/spatiotemporal clustering is in line with previous studies close to the study sites (23,24,40,41) or in other malaria-endemic regions (22, 25-28, 30-39, 42-44) deploying the SaTScan scanning method.

We did not find clustering for P. falciparum in the first Brazilian cross-sectional survey. Given the very low global P. falciparum prevalence in both surveys (6), this could be due to a lack of statistical power. Considering the distribution of infection locations on the map, almost all infections surround a forested, sparsely inhabited area in which no samples were collected. The resulting spatial signature is possibly an artefact due to a peculiar distribution of household locations in an insufficiently sampled transmission hotspot. The signature of P. falciparum infections in the cross-sectional survey in Thailand also lacks statistical significance but follows a decaying form. This lack of significance most likely stems from low prevalence and insufficient statistical power.

A significant spatial signature was also not observed in the Cambodian cross-sectional survey among the villages at the forest fringe. This is most likely because exposure of inhabitants of villages outside the forest is mainly driven by forest-going behaviour (9), and vectors in this region are mainly found in the forests (12). The P. vivax infection distribution in Solomon Islands' cross-sectional survey showed significant clustering in a flat signature within 1 km distance around index infections, with clustering decreasing at greater distances. There was substantial variation in malaria prevalence between villages (48).
By calculating a spatial signature pooled across all villages that were sampled in the survey, we potentially miss significant within-village clustering in high transmission areas.

We also show spatial clustering in the cohort studies, particularly within shorter time windows. The amplitude of the signatures flattens with increasing time windows, demonstrating temporal clustering. Considering only infection pairs of a matching genotype, the P. vivax spatiotemporal signature in the Thai cohort becomes even more pronounced. While the genotyping methods used here lack the resolution to define separate transmission chains with certainty, the finding that infections with a matching genotype cluster in space and time corroborates the principle of the spatial signature method.

The study is subject to certain limitations. First, all included studies were based on household locations. In an area with peri-domestic vector exposure, spatial clustering and a decrease with distance based on household locations can be readily interpreted, mainly by the vectors' host-seeking. The interpretation in areas with mainly behavioural exposure is less straightforward, e.g. in villages outside the forest in the Cambodian site (9). Significant clustering of infections based on household location could still occur though, e.g. resulting from forest-goers tending to live closer to each other in the villages or more general clustering of household locations based on socio-economic factors. The fact that even in these settings clustering might be found is useful from a programmatic perspective. Another limitation is that the form of the signatures depends on the locations of the households and villages that were sampled, i.e. the underlying distribution of pairwise distances in the data. Pooling across villages attempts to generalize the prevalence dynamics. Studies that sampled across adjacent villages (e.g. the Cambodian survey) lend themselves readily to this approach. However, if sampled villages are far apart, interpretation can become complicated (e.g. the survey in Solomon Islands). We controlled for this limitation by ensuring that no gaps occur in the pairwise distance distributions. However, the P. falciparum signature in the first Brazilian survey may be an example where these distributions cause artefacts in the signature.

It is generally acknowledged that malaria elimination efforts need to be spatially tailored towards areas where (the highest) transmission risk persists (10,19). The scanning method has proven effective for detecting and locating clusters of infections. However, the method detects a single or multiple hotspots, ordered by their likelihood. The localization of hotspots with the strongest clustering can distract from spatial clustering that also exists outside the statistically most significant cluster. While the assessment of village-level prevalence and scanning methods are very valuable for identifying local spatial structure (e.g. identifying hotspots), these tools provide limited information on the more general properties of the spatial nature of malaria infection. Also, while some studies found 'stable hotspots', i.e. recurring hotspots in the same area from one year to another (36-38), others did not find that temporal predictive value (32). Repeating resource-intensive observational studies in order to regularly update hotspot mapping is not feasible for malaria elimination programs. Reactive interventions are part of the recommended toolbox for malaria elimination (10).
Unless they choose levels such as households or villages, malaria control programmes need to be informed about the radius around index infections in which to operate. Scanning methods report the spatial extent of the detected hotspots, which is very specific to the investigated area and only partially generalizable. The spatial signature method allows clustering to be detected, quantifies its magnitude, and shows its dynamics across increasing distance. For that purpose, it generalizes the spatial clustering across the entire study area and data points. From a programmatic perspective, it can inform a cost-benefit approach for the optimal selection of a radius around detected index infections, balancing the trade-off between the total number of households included and the expected proportion of infections found. That clustering on the basis of household locations was found even in areas with occupationally driven exposure is encouraging. From a reactive intervention perspective, such socio-economic clustering may still allow effective targeting, regardless of the underlying dynamics.

We consider the distance to 50% reduction in prevalence a suitable measure for comparing the signatures across study sites. For both P. vivax and P. falciparum, we observed a trend towards stronger spatial clustering at lower global study prevalence. This suggests that reactive infection detection strategies could become increasingly effective in low transmission settings approaching malaria elimination.

Conclusion

We demonstrated spatial and spatiotemporal clustering of P. vivax and P. falciparum infections around index infections across diverse endemic settings. The spatial signature quantifies this clustering and can inform the choice of operational radii for reactive interventions as part of malaria elimination endeavours.

Background. A detailed understanding of the contribution of the asymptomatic Plasmodium reservoir to the occurrence of clinical malaria at individual and community levels is needed to guide effective elimination interventions. This study investigated the relationship between asymptomatic Plasmodium falciparum carriage and subsequent clinical malaria episodes in the Dielmo and Ndiop villages in Senegal.

Methods. The study used a total of 2792 venous and capillary blood samples obtained from asymptomatic individuals and clinical malaria datasets collected from 2013 to 2016. Mapping, spatial clustering of infections, and risk analysis were performed using georeferenced households.

Results. High incidences of clinical malaria episodes were observed to occur predominantly in households of asymptomatic P falciparum carriers. A statistically significant association was found between asymptomatic carriage in a household and a subsequent episode of clinical malaria occurring in that household for each individual year (P values were 0.0017, 6 × 10⁻⁵, 0.005, and 0.008 for the years 2013, 2014, 2015, and 2016, respectively) and the combined years (P = 8.5 × 10⁻⁸), which was not found at the individual level. In both villages, no significant patterns of spatial clustering of P falciparum clinical cases were found, but there was a higher risk of clinical episodes <25 m from asymptomatic individuals in Ndiop, attributable to clustering within households.

Conclusion. The findings provide strong epidemiological evidence linking the asymptomatic P falciparum reservoir to clinical malaria episodes at household scale in Dielmo and Ndiop villagers. This argues for a likely success of a mass testing and treatment intervention to move towards the elimination of malaria in the villages of Dielmo and Ndiop.

Keywords. malaria; asymptomatic; clinical malaria; clustering; interventions.
The World Health Organization (WHO) African Region was home to 93% of the 228 million malaria cases and 94% of the ~405 000 deaths attributed to malaria in 2018 [1], despite the dramatic decline of malaria incidence recorded in past decades in many countries [1], which has led in some places to the belief that elimination could be attained. The move from control to elimination entails the need to shift the primary focus from early detection and rapid case management of clinical malaria cases to the interruption of malaria transmission, including the clearance of asymptomatic infections. In areas where repeated exposure to Plasmodium falciparum infections is commonplace, individuals develop nonsterile immunity to clinical malaria [2] that often is associated with a long-term carriage of asymptomatic infections [3-5]. In low transmission areas, asymptomatic P falciparum infections have been reported to contribute to 20%-50% of human-to-mosquito infections [6], and parasites from asymptomatic individuals have been reported to be even more infectious to vectors than those from symptomatic cases [7,8].

In Dielmo and Ndiop, 2 villages situated in the center of Senegal (West Africa) [9,10], the disease epidemiology has substantially changed to the point that the elimination of the disease is being considered [11-13]. The progress has unfortunately been somewhat hampered by resurgences of malaria attacks [12,13], and a low-level residual malaria transmission attributed to persistent asymptomatic infections is regularly documented in these areas [11,14]. Findings from 3 cross-sectional surveys performed before the malaria transmission season from 2013 to 2015 identified substantial asymptomatic infections in the communities, with average parasite carriage of 7.5% and 9.7% in Dielmo and Ndiop, respectively [11]. The microgeographical analysis of such asymptomatic Plasmodium infections in relation with subsequent clinical malaria cases could serve as a rationale to trigger the application of targeted interventions toward the elimination of the disease. In fact, microgeographical analyses can inform how, when, and where clustering occurs and how it relates to transmission patterns [15]. Major elimination strategies such as mass or focused treatment with or without screening and reactive case detection (RCD) rely on being able to identify clusters of infections that can be targeted with appropriate interventions [16]. A recent study conducted at 4 field sites in The Gambia, Mali, and Senegal found separate spatial, temporal, and spatiotemporal clustering of P falciparum infections [17]. However, the authors visited participating households only once in The Gambia (rainy season in 2012) and Mali (dry season in 2013), detected asymptomatic P falciparum infections by using microscopy only, and did not account for submicroscopic and clinical P falciparum infections in the clustering analysis.

The current study capitalizes on cross-sectional-based asymptomatic P falciparum infections documented over 4 consecutive years and clinical P falciparum episodes of the Dielmo and Ndiop cohorts to explore the relationship between asymptomatic P falciparum carriage and subsequent clinical malaria episodes at the individual and household (HH) levels. The study also explored the spatiotemporal clustering and spatial signature of P falciparum infections in the communities in both villages.
MATERIALS AND METHODS

Study Sites and Samples Collection

The study was conducted in Dielmo and Ndiop, 2 Senegalese villages where malaria has been closely monitored since the 1990s [9,10]. Asymptomatic individuals were recruited before the malaria transmission season through community-based cross-sectional surveys in both villages in July 2013, June 2014, June 2015, and June 2016. No formal sample size calculation was performed because the objective was to sample the largest possible fraction of the project's participants who were present in the villages during the sampling period. A total of 2792 venous or capillary blood samples were provided from asymptomatic individuals during the 4 sampling periods. Each participant was identified with a unique code, and HHs in both villages were geo-referenced (Figure S1) using a handheld global positioning system device (GPS 76, Garmin International Inc.). The study falls within the ongoing Dielmo and Ndiop project and was examined and approved by the Senegalese National Health Research Ethics Committee.

Detection and Characterization of Plasmodium Species by Microscopy and Molecular Methods

The detection of Plasmodium spp. was carried out using both light microscopy (LM) examination of blood films and quantitative real-time polymerase chain reaction (qPCR), as previously described [18] and detailed in Supplementary Material 1. For the qPCR, DNAs from confirmed P falciparum-, Plasmodium malariae-, Plasmodium ovale-, and Plasmodium vivax-infected patients were used as positive controls in all amplifications, whereas sterile water and genomic DNA from uninfected blood samples served as negative controls.

Survey of Clinical Malaria Episodes

Clinical malaria episodes were surveyed among febrile patients (axillary temperature ≥37.5°C or history of fever in the last 24 hours) who presented at the health posts for consultation after the cross-sectional sampling until December of the same year. Patients positive for malaria by rapid diagnostic test and/or LM diagnosis were treated with artemether-lumefantrine as recommended by the National Malaria Control Program. Blood samples were collected from patients for follow-up, and all patients' clinical and epidemiological data were recorded in the Dielmo/Ndiop database at Institut Pasteur, where they were consulted for the purpose of this study.

Risk Analysis and Spatial Mapping

All data were analyzed using the R statistical software program. Differences in proportions were assessed using Fisher's exact test. Logistic regression was used to estimate the odds ratio for the associations between PCR positivity in a survey and the risk of experiencing an episode of clinical malaria during the rest of the year. Individual-level analyses were adjusted for age, sex, village, and year of cross-section. Household-level analyses were adjusted for household size, village, and year of cross-section. The previously described concept of spatial signature was used to assess the degree of spatial clustering of P falciparum infections and clinical episodes of malaria [19]. The relative risk τ of a PCR-positive (or a clinical) case occurring within a distance d of a "neighboring" PCR-positive individual (denoted as the "central" case) in the same year is expressed as τ(d) = π(d)/π(d_max), where π(d) is the ratio of the number of pairs of central and neighboring cases within distance d in the same year (numerator) to the number of pairs of central cases and any study participants within distance d in the same year (denominator), and d_max is the maximum distance between points in the dataset. Ninety-five percent uncertainty intervals were generated from a null distribution of τ after randomly reassigning the clinical and PCR status across the HH locations and time points 1000 times.
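As an illustration, the τ(d) statistic and its permutation null defined above can be computed along the following lines in R. This is a minimal sketch under the assumption of projected coordinates in metres; the data layout and names are hypothetical and it is not the study code.

```r
# Sketch: relative risk tau(d) = pi(d) / pi(d_max) with a permutation null.
# Hypothetical input `df`: one row per participant-year, with columns
# x, y (projected coordinates in metres), year, and case (TRUE/FALSE).
D <- as.matrix(dist(df[, c("x", "y")]))
diag(D) <- Inf                                   # exclude self-pairs
same_year <- outer(df$year, df$year, "==")
d_max <- max(D[is.finite(D)])                    # maximum pairwise distance

pi_d <- function(case, d) {
  ok <- (D <= d) & same_year                     # pairs within d, same year
  sum(ok[case, case]) / sum(ok[case, ])          # case-case / case-anyone pairs
}
tau_d <- function(case, d) pi_d(case, d) / pi_d(case, d_max)

tau_obs <- tau_d(df$case, d = 25)

# Null: reassign case status across household locations/time points, 1000 times
tau_null <- replicate(1000, tau_d(sample(df$case), d = 25))
quantile(tau_null, probs = c(0.025, 0.975))      # 95% uncertainty interval
```

Because π(d_max) includes all same-year pairs, τ(d) > 1 indicates an excess risk around central cases at short range relative to the study area as a whole.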
RESULTS

Characteristics of the Study Population

The 2792 blood samples investigated in this study for the presence of Plasmodium parasites were collected in 2013 (N = 611), 2014 (N = 713), 2015 (N = 741), and 2016 (N = 727) (Table 1). The mean and standard deviation of ages of participants were comparable over the 4 periods: 21.3 years (19.3) in 2013, 22.1 years (18.6) in 2014, 22.3 years (18.9) in 2015, and 22.3 years (18.8) in 2016 (Table 1). Participants' ages ranged from 0.2 to 93.4 years (Table 1).

Submicroscopic and Clinical P falciparum Infections

The proportions of submicroscopic Plasmodium infections and species composition for the 2013, 2014, and 2015 samples were previously reported, and unique P falciparum infections accounted for 3.26% (21/611), 11.1% (90/713), and 5.8% (44/741) in 2013, 2014, and 2015, respectively [11]. In 2016, all samples were negative by LM, whereas qPCR detected 11.8% (86/727) positive samples, all of which were due to P falciparum (Table 2). In both villages, the carriage of asymptomatic P falciparum parasites was similarly distributed and was age-dependent, with frequencies being higher in older individuals (>15 years) from 2013 to 2015 [11] and in 2016 (data not shown). The number of clinical malaria episodes recorded during the follow-up period was 126 in 2013 (45 in Dielmo and 81 in Ndiop), but decreased progressively over time to 86 in 2014 (49 in Dielmo and 37 in Ndiop), 13 in 2015 (3 in Dielmo and 10 in Ndiop), and 11 in 2016 (3 in Dielmo and 8 in Ndiop) (Table 2). The frequencies of clinical malaria cases were greater in individuals aged 5-15 years (Table 2).

Mapping of Asymptomatic and Clinical Plasmodium Infections

To generate a high-resolution picture of the localization of P falciparum infections within HHs in both Dielmo and Ndiop villages, asymptomatic P falciparum infections (green spots) and clinical P falciparum episodes (red spots) were positioned within their respective HHs (Figure 1). In Dielmo, the mapping analysis revealed that 85.7% (6/7), 77.3% (17/22), 21.4% (3/14), and 13.3% (2/15) of HHs with asymptomatic P falciparum infections during the cross-sectional sampling had clinical malaria episodes in 2013, 2014, 2015, and 2016, respectively (Figure 1). Two and 4 clinical malaria episodes occurred among previous asymptomatic carriers (orange spots) in 2013 and 2014, respectively (Figure 1). In Ndiop, all HHs with asymptomatic P falciparum carriers recorded clinical malaria cases documented on individuals different from the asymptomatic carriers in 2013 (Figure 2). The 37 clinical malaria cases of 2014 occurred in 11 HHs, of which only 2 had no previous asymptomatic carriers, whereas 9 occurred in previous asymptomatic carriers (Figure 2). In 2015, 10 clinical malaria cases occurred in 4 households, all of which had previous asymptomatic carriers. In 2016, 8 clinical malaria cases occurred in 6 HHs, 5 of which had previous asymptomatic carriers (Figure 2). Together, the spatiotemporal mapping of P falciparum infections revealed that 68% (160/236) of clinical malaria episodes occurred in HHs where a P falciparum asymptomatic case existed.
Individual Versus Household Risk Analysis in Dielmo and Ndiop

The individual datasets from Dielmo and Ndiop villages collected throughout the 4-year sampling period were pooled and used to investigate the relationship between asymptomatic P falciparum carriage and subsequent clinical malaria episodes at the individual and HH levels by applying Pearson's χ² test for statistical difference between proportions. There was no association at the individual level between PCR positivity and the risk of clinical malaria, except for 2014, when PCR-positive individuals had a higher risk of clinical malaria, although this was only marginally significant (P = .048) (Table 3). At the HH level, a statistically significant association was observed between asymptomatic carriage in a HH and a subsequent episode of clinical malaria occurring in that HH (Table 3) for each individual year (P values were 0.0017, 6.0 × 10⁻⁵, 0.006, and 0.005 for 2013, 2014, 2015, and 2016, respectively) as well as for the combined years (P = 8.5 × 10⁻⁸). The individual analysis of the datasets from each setting showed no significant association at the individual level, whereas a statistically significant association was observed at the HH level in Dielmo (Table S1) and Ndiop (Table S2).

The individual and HH risk analysis was adjusted for confounders, and the association between P falciparum prevalence by PCR and the subsequent incidence of clinical malaria was reanalyzed using logistic regression (Table 4). If an individual was PCR-positive during a survey, he or she was 1.77 (95% confidence interval [CI]: 1.06-2.93; P = .028) times more likely to experience a clinical episode of malaria during the rest of the year. If an individual was PCR-negative during a survey but lived in the same household as a PCR-positive individual, he or she was 1.58 (95% CI: 1.12-2.24; P = .0097) times more likely to experience a clinical episode of malaria during the rest of the year. If an infection was detected by PCR in a HH, that HH was 2.44 (95% CI: 1.00-5.98; P = .0499) times more likely to report a clinical episode of malaria in at least 1 member of the HH (Table 4).

Spatial Clustering of P falciparum Infections

The spatial clustering of P falciparum infections can inform the likelihood of infection occurring in nearby people around an individual with asymptomatic or clinical P falciparum infection compared with other people from the same village. The spatial clustering of P falciparum infections was analyzed by determining the proportion of asymptomatic or clinical cases around a PCR-positive individual according to the distance from the infected individual's HH. No significant patterns of spatial clustering of P falciparum infections were found in Dielmo (Figure 3A). This means that if we identify an individual with an asymptomatic P falciparum infection, nearby people are not more likely to be infected than other people from the same village. By contrast, there was evidence of significant patterns of spatial clustering of P falciparum infections in Ndiop (Figure 3B) (ie, if we identify an individual with P falciparum infection, nearby people are more likely to be infected than other people from the same village).
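For reference, the adjusted odds ratios reported above correspond to standard logistic regression models of roughly the following form. This is a sketch with hypothetical column names, not the study code, and Wald intervals are shown for simplicity.

```r
# Individual level: clinical episode after the survey ~ PCR positivity,
# adjusted for age, sex, village, and year of cross-section
fit_ind <- glm(clinical_episode ~ pcr_positive + age + sex + village + factor(year),
               data = individuals, family = binomial)

# Household level: any clinical episode in the HH ~ any PCR-positive member,
# adjusted for household size, village, and year of cross-section
fit_hh <- glm(any_clinical ~ any_pcr_positive + household_size + village + factor(year),
              data = households, family = binomial)

# Odds ratios with 95% Wald confidence intervals
exp(cbind(OR = coef(fit_ind), confint.default(fit_ind)))
exp(cbind(OR = coef(fit_hh),  confint.default(fit_hh)))
```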
The spatial clustering analysis of P falciparum clinical cases revealed no significant patterns of spatial clustering in Dielmo (Figure 4A) or Ndiop (Figure 4B) (ie, if we identify an individual with asymptomatic P falciparum infection, nearby people are not more likely to experience clinical cases than other people from the same village). The higher risk of clinical episodes <25 m from PCR-positive individuals in Ndiop is attributable to clustering within HHs.

DISCUSSION

The National Malaria Control Program reported a significant decrease of the incidence of malaria in Senegal [20], allowing the country to embark on elimination programs in parts of the country [21,22]. The dramatic decline observed in Dielmo and Ndiop has raised the question of the feasibility of eliminating the disease, which would require a reorientation away from the classical control program toward targeted approaches aimed at detecting and curing all Plasmodium infections, as highlighted by WHO [23]. It is largely acknowledged that the elimination strategies in each setting should be based on a detailed understanding of the local epidemiology to guide the application of specific interventions.

In this study, the high prevalence of 11.8% submicroscopic asymptomatic P falciparum infections observed in 2016 is in line with the level of residual parasite transmission reported in the previous years in the areas [11]. The high number of clinical P falciparum malaria attacks recorded in 2013 was attributed to a rebound following the successful implementation of artemisinin-based combination therapies in June 2006 and long-lasting insecticidal nets in July 2008 [14], and was associated with the fact that people were exposed to infectious mosquito bites while they stayed late outdoors watching television [24]. Throughout the 4-year survey period, clinical malaria attacks were highest in older children 5-15 years old in both villages, a pattern attributed to the shift in age susceptibility [12,28] following long-lasting insecticidal net implementation. Importantly, though the number of asymptomatic and clinical malaria cases varied from 2013 to 2016, the mapping analysis revealed a similar pattern, with clinical malaria predominantly occurring in HHs of asymptomatic P falciparum carriers. We identified in Dielmo (2013 and 2014) and in Ndiop (2014) individuals who were asymptomatic carriers of P falciparum parasites during the cross-sectional survey and who subsequently experienced a clinical malaria episode during the follow-up period. This could be either the result of newly infecting P falciparum clones or the recrudescence of the carried asymptomatic parasites that have evolved from a subclinical to a clinical threshold. The use of parasite genetic data to track infections within and between HHs has been shown to be critical in the understanding of the extent of within- and between-household transmission dynamics [25,26]. The preliminary genetic relatedness study conducted in Dielmo using msp-1 genotyping revealed similar genetic profiles between asymptomatic and clinical P falciparum parasites from the same individuals and within a HH (Niang et al., unpublished). This suggests that the same parasite populations are circulating within the community in Dielmo, with the asymptomatic reservoir contributing to the occurrence of clinical malaria episodes.
The findings from the risk analysis imply the existence of within-HH transmission, with a significant association between the existence of an asymptomatic P falciparum infection within a HH and the subsequent occurrence of clinical malaria episodes in HH members of the asymptomatic P falciparum carrier; this was also true at both the Dielmo and Ndiop sites when the datasets from each village were analyzed individually or compiled. This is of critical importance because it would suggest that a surveillance system that could detect asymptomatic Plasmodium infections during the dry season could also trigger a response to treat all Plasmodium-infected individuals in the HH and prevent the occurrence of clinical malaria in the HH.

The efficiency of an RCD strategy for malaria elimination in a given setting likely depends on the optimal screening radius and the number of HHs to investigate following the identification of an index P falciparum clinical case. RCD is a detection strategy that takes advantage of the spatial clustering of asymptomatic and/or clinical P falciparum-infected individuals by using passively detected cases as triggers to initiate screening and treatment of individuals living in proximity to those cases. The degree of spatial clustering was assessed using the spatial signature of the likely occurrence of asymptomatic and clinical P falciparum cases in relation to a previously documented P falciparum infection. Beyond the household level, we found significant spatial clustering of P falciparum infections in Ndiop but not in Dielmo. Though significant, the degree of spatial clustering in Ndiop was not strong: individuals living within 20 m of another individual with PCR-detectable P falciparum infection were 1.35 times more likely to also be infected. Such short-range effects are likely driven by household factors. The effect remained significant at greater distances, with individuals living within 50 m of another individual with PCR-detectable P falciparum infection being 1.21 times more likely to also be infected (Figure S2). In both villages, there were no significant spatial interactions between PCR positivity in the cross-sectional survey and subsequent risk of episodes of clinical malaria. An individual's level of immunity may have played a role in maintaining parasites at a subclinical level and contributed to the lack of association found at the individual level. In fact, asymptomatic carriage has been described as a very complex process associated with low parasite proliferation and an absence of clinical signs, related to a specific immunity to soluble antigens and an antiparasite immunity that may or may not be strain-specific [27].

Approaches for more efficient malaria elimination interventions have shifted to incorporating spatial dynamics of transmission and to identifying lingering foci. The Framework for Malaria Elimination contained in the WHO Global Technical Strategy for Malaria 2016-2030 included enhanced surveillance, mass drug administration, and active community-based case detection [29]. However, many strategies have encountered variable success in low/moderate transmission settings [30,31], resulting from either the operational challenges of such settings and/or a limited understanding of the heterogeneity in malaria risk and transmission.

Table 4 notes: Subscript "ind" denotes the individual level and subscript "house" denotes the household level. Individual-level regressions were adjusted for cross-section, age, sex, and village.
Household-level regressions were adjusted for cross-section, village, and household size. Abbreviations: CI, confidence interval; OR, odds ratio; Pfclin, P falciparum clinical malaria; PfPCR, P falciparum PCR positive.

Notes

Author contributions. M. N. and A. T-B. conceived and designed the study. F. D. S., J. F., A. B., N. D., C. S., and A. T. B. coordinated the fieldwork and samples collection. M. N., A. F. M., R. S., B. D., and L. G. T. performed the laboratory experiments. M. W., M. S., C. T., J. F., and M. N. curated the data and performed the statistical analyses. M. N. and M. W. drafted the manuscript. All authors contributed to the review and editing of the draft manuscript and approved the final manuscript. All authors read and approved the final manuscript and agree to its submission.

Data availability. All relevant data are within the paper. Specific datasets are available from the corresponding author on reasonable request.

Chapter 6: Discussion, conclusion, and perspectives

Chapter 6 contextualises the findings from the cross-sectional survey and the spatial signature application within the body of knowledge at the time of the studies and since then. It highlights gaps of knowledge addressed and the expected prospects of the work.

Synthesis

Persisting pockets of high malaria risk on the province and district level

Using PCR with high sensitivity for infections (Canier et al.), our survey documented persisting pockets of high malaria risk at the province and district level. The data collected in our survey contributed to further studies by our partners, e.g. a mosquito collection study in the same region that investigated the Anopheles spp. behaviour, species composition, the prevalence of insecticide resistance, and pathogen carriage (Vantaux et al., 2021). The social, behavioural, and spatial risk patterns also informed a study of GPS-tracked mobility in forest-goers in the same study region (Pepey et al.).

The high prevalence of Plasmodium spp. infection in a remote area in Mondulkiri in 2017-2018 was alarming. Together with other surveys, these findings corroborated that malaria persisted in remote provinces in Cambodia. With targets of country-wide elimination approaching, these point estimates should aid NMPs with informed milestone evaluations (Cambodian Ministry of Health, 2011; World Health Organization, 2015b).

Heterogeneity of infection rates on the village level

The prevalence of infection is highly heterogeneous below the province level. In the survey in Mondulkiri, we found P. vivax and P. falciparum infections at a village-level prevalence that ranged between 0.6% and 36.3% for P. vivax and between 0% and 25.1% for P. falciparum. Variation of prevalence between villages in close proximity has been found in other cross-sectional surveys, however not across an equally wide range. In the western Battambang province in the dry season in 2015, a survey of 20 villages found no infections in some villages (sampling only 50 people per village). Where infections were found, the prevalence of P. falciparum infection varied between 2% and 6% and P. vivax prevalence varied between 2% and 26%. Infection prevalence was found at lower extremes, although an ultra-sensitive PCR was used in the study (Tripura et al.).

Our survey in Mondulkiri added a unique observation to the literature on infection prevalence patterns in Cambodia, as it sampled from all villages in a study area, with relatively high sample sizes, and with villages along a gradient of proximity to the forest. We categorized them along this gradient based on forest cover as deduced from remote sensing analysis of satellite imagery, an objective approach no other surveys in this area have used.
The trend of increasing village-level prevalence with closer proximity to the forest was evident and statistically significant. Since the publication of our survey, no further surveys assessing infection risk in the general population have been published for Cambodia. Our approach offers a methodological example for future studies so that the contribution of residential baseline risk vs. behavioural exposure to infection risk in these persistent pockets of malaria transmission in overall elimination settings can be further distinguished.

Spatial clustering of infections beyond the resolution of villages

In a survey of 117 villages over an area of approximately 80 km in Ratanakiri in 2012, spatial scanning based on village coordinates detected hotspots of infection (Sluydts et al., 2014). A survey at the Thailand-Myanmar border (Parker et al., 2015) was, interestingly, the only one from other GMS countries that used the SaTScan method to assess spatial clustering based on household locations. In our study, household-level GPS coordinates also allowed for the detection of clustering below the village level and yielded two clusters of Plasmodium spp. infection. These included households from all study villages located inside the forest. In contrast to the surveys discussed above, detecting artificial spatial clusters due to social clustering was avoided, as the analysis was adjusted for non-spatial, behavioural risk factors. That the hotspots remained significant despite the adjustment could indicate additional peri-domestic risk of infection in these villages.

Further studies are needed to underpin the hypothesis of peri-domestic infections in these areas, e.g. assessments of mosquito biting activity throughout the day, similar to the study by Vantaux et al. (Vantaux et al., 2021). Future studies are also needed to quantify the differential contribution of behavioural vs. peri-domestic exposure in more detail. Under the assumption of peri-domestic exposure, it remains unclear whether in-village host-vector contact patterns are dense enough to sustain peri-domestic transmission. Alternatively, the transmission system could depend on parasite 'importation' by individuals infected due to behavioural exposure, e.g. forest-goers. More comprehensive infection control in these risk groups would then also drain the transmission system in the villages. This would probably require genotyping or sequencing studies, which are resource-intensive but have suggested near-clonal circulating parasite lineages elsewhere (Salla et al., 2020).

The use(fulness) of spatial clusters to guide malaria elimination

The increase in country-wide malaria incidence in Cambodia in 2016-2018 (see section 1) was responded to not only by reinstating the VMW programme and changing the first-line ACT. The NMP also launched the Intensification Plan that ran from 2018 until 2020 and targeted areas and populations with increased malaria risk with further scaled-up testing and treatment by VMWs/MMWs. The clustering of infections at the village or the sub-village level and in hard-to-reach populations, as described in our and other cross-sectional surveys, explicitly informed the programme (Sovannaroth et al.; Stratil et al.).

The concept of transmission 'hotspots' is based on the idea that in every endemic setting, small clusters of households exist that have an increased risk of transmission of Plasmodium spp. infection due to local circumstances that facilitate transmission, e.g. proximity to breeding sites (Bousema et al., 2012). From these overly active transmission systems, infections 'spill' outwards into the surroundings and maintain transmission there, particularly at overall low transmission levels. Likewise, transmission cycles can persist over the low transmission season within these hotspots and cause expanding transmission with the onset of the high transmission season. Targeting such hotspots would be efficient in reducing infection levels. Malaria control and elimination programmes would thus need to survey intervention areas once, detect the hotspots, and target their interventions to them. This concept presumes the stability of such hotspots over time.

Some studies indeed found areas that were repeatedly hotspots in consecutive surveys. In two surveys one year apart among four villages in Tanzania in the dry seasons of 2010 and 2011, being in a P. falciparum hotspot in 2010 was predictive for PCR-detected infection in the second survey (Mosha et al., 2014). In a study from Sudan, consecutive surveys from 1999 until 2008 among the general population in one state found significant hotspots in six of the ten yearly surveys (Nourein et al., 2011). In another study from Sudan, children from 88 villages in a low transmission area were tested for P. falciparum infection by LM in one survey per year each January 1999-2009. Hotspots were detected in all eleven yearly surveys, and the hotspots from ten years overlapped with a spatiotemporal cluster based on the entire study period (Mirghani et al., 2010). However, the hotspots drastically varied in size, from containing only a single village to a diameter of more than 60 km.
Also, the ten yearly hotspots that overlapped with the space-time cluster did not all overlap with each other, and the variation in hotspot size and location was erratic. In a two-year cohort study among children from villages surrounding a hydropower dam in Ethiopia in 2008-2010, the hotspot for P. falciparum malaria clinical case incidence was stable over the two study years. A hotspot of P. vivax malaria cases was found in each study year but on opposite sides of the study area (Seyoum et al., 2017).

Studies from the GMS continue to cast doubt on the stability of (local) hotspots. The survey in Ratanakiri province, Cambodia, in the dry season in 2012 detected a hotspot of PCR-detected P. falciparum infection that was also found in another survey in the rainy season of the same year. However, the area was not a hotspot of PCR-detectable infections in a consecutive survey one year later. Moreover, a cluster-randomized controlled trial in the western Kenyan highlands could not demonstrate the hypothesized effect that reducing malaria transmission in the hotspot would also reduce transmission in the surrounding area (Bousema et al., 2016).

Applying the scanning method across a wide area based on village coordinates can lead to hotspot detections that are limited in their usefulness for malaria control and elimination. For example, in the Ratanakiri survey in 2012, one hotspot was a single village with high village-level prevalence, with no extra benefit over conventional village stratification. Other hotspots were vast in geographical extension, some of which contained relatively few infections; e.g. the P. ovale and P. malariae clusters had radii of 32 km and 30 km and contained 13 cases and 18 cases, respectively (Sluydts et al., 2014).

Even assuming that detected hotspots would have predictive value for future infection and thus define suitable targets for control interventions, it would be resource-intensive for an NMP to survey the entire operational area at sufficient coverage for detailed hotspot mapping, potentially even multiple times over the years. The spatial scanning method also does not generalize hotspot formation. It is unlikely that it would be possible to find common denominators of a set of hotspots in order to extrapolate hotspots in other areas without the need to derive them from detected infections, given the complex interplay of host, mosquito, and pathogen characteristics that make up favourable niches of clustered transmission. Thus, it remains to be demonstrated that spatial scanning for infection hotspots can cost-effectively direct malaria control and elimination interventions and reduce transmission.

Until then, interventions targeting areas of high clinical incidence in the surveillance data and areas populated by known risk groups for infection remain the most practicable guidance for geographical resource allocation by NMPs. However, this stratification of interventions should also be informed by data on the reservoir of sub-patent or asymptomatic infections, as areas of high transmission can have exceptionally high rates of these low-parasitaemia infections due to the build-up of immunity. These data would also require (repeated) surveys with active case detection based on molecular detection of infection across the operational area, though at less coverage than hotspot mapping would depend on.
Generalizing spatial clustering of infection risk by the spatial signature method

SaTScan has proven powerful to detect areas of increased risk of observed outcomes, i.e. to localize (albeit potentially transient) hotspots of parasite carriage, and thus to demonstrate heterogeneity in infection risk on small spatial scales. However, it does not enable such spatial clustering to be generalized and thus transferred to other locations. Also, the radius of areas with increased risk varies greatly depending on the setting and underlying location patterns and thus cannot be generalized. Furthermore, focusing on a detected hotspot bears the risk of losing sight of ongoing transmission chains outside the hotspot within active foci. The method can detect multiple areas with statistically significantly increased risk, but only the highest likelihood cluster is usually reported. Also, the hypothesized effect of reductions outside hotspots, if the transmission is hit inside them, has not been underpinned by statistically significant evidence. We have tried to overcome these methodological gaps by trying to answer the question of how far, on average, around infections (index infections) one would need to search for co-clustering infections (neighbouring infections). For that purpose, we introduced the spatial signature method to malaria epidemiology.

Such generalized distance estimates become particularly relevant in near-elimination surveillance settings, where few infections are rapidly detected, reported, treated, and followed up. In China, for example, the success in the late stages towards malaria elimination was also attributed to their 1-3-7 surveillance, i.e. reporting confirmed cases within one day, investigating cases detected by RDTs within three days, and applying targeted control responses within a week. In Cambodia, an adapted approach was used in an operational district near elimination, including active case detection of co-travellers, household and family members, as well as surrounding households (Kheang et al., 2020). Spatial distances derived from spatial signature analyses could also contribute to the parameters for this intervention and thus allow further acceleration towards malaria elimination in a feasible and cost-effective manner.

Conclusions

The enormous effort of routine surveillance of clinical cases from health care facilities and VMWs/MMWs allows for monitoring malaria incidence and comparing countries or regions within a country. These data confirm the remarkable reductions in malaria transmission over the last two decades in Cambodia and the GMS overall. As country-wide incidence has been reduced, the landscape of persisting transmission has become increasingly fragmented, and the remaining burden is found in more and more remote areas. Field studies in such areas, like our survey, further reveal the extent of high local transmission levels. These findings support the increase of peripheral access to malaria diagnosis and treatment via the establishment and recent scale-up of the VMW programme by the Cambodian NMP.

Beyond geographical fragmentation of malaria incidence, infection risk is highly heterogeneous on the individual level. As corroborated by our survey, travelling to or working in the forest is the primary behavioural risk factor for infection. Thus, the MMW initiative of the Cambodian NMP and its focus on forest-goers appears well targeted. The countries of the GMS have committed to eliminating malaria by 2030 or earlier.
Cambodia aims at the overall elimination of malaria by 2025 and had planned to eliminate P. falciparum by 2020. The challenges that elimination programmes in Cambodia, the GMS in general, but also other regions face are primarily: the high prevalence of asymptomatic, sub-patent infections, the difficulties of curing P. vivax, and the spatial fragmentation of infection risk. These roadblocks to malaria elimination are well established but must be further delineated. As milestones are missed (such as P. falciparum malaria elimination in Cambodia by 2020) and threats such as drug resistance are looming, the need to better inform malaria elimination programmes and render them more efficient with novel tools is apparent.

Perspectives

A better understanding of how relapses contribute to P. vivax transmission remains vital. Membrane feeding studies suggest infectivity of asymptomatic infections in field studies, and mathematical models reconstruct such dynamics (White et al., 2014; White et al., 2018). Tools to disentangle relapses from new blood-stage infections in longitudinal studies and analyses of their transmission share would be illuminating. Generally, further cohort studies need to shed light on transmission dynamics. In the Asia-Pacific ICEMR, such a study took place in the same region as our survey and will assess spatiotemporal infection patterns. Data on infection and household location will allow mathematical models of P. falciparum and P. vivax transmission to be fitted and extended to be spatially explicit. These can then compare approaches to identify and hit pockets of residual transmission, e.g. active vs. passive designs, so that elimination is reached most effectively.

Disillusionment is growing over whether elimination can be achieved with the insufficiently sensitive RDTs and LM. For high-risk groups like forest workers, experts thus discuss prophylaxis or regular presumptive drug administration. Presumptive drug administration could also be considered where entire villages remain at high risk. Including 8-aminoquinolines would be an indirect way to hit the silent hypnozoite reservoir, a potentially strong blow to P. vivax transmission. That hypnozoites cannot be diagnosed remains an obstacle to P. vivax elimination. Antibody responses to P. vivax antigens have allowed recent infections, and thus hypnozoite carriage, to be detected sensitively and specifically (Longley et al., 2020). Serological testing and treatment could thus become a novel tool for relapse prevention and transmission reduction.

At very low incidence, NMPs begin to follow up cases, co-exposed individuals, household members, and potentially neighbours. Spatial signatures could inform that process. Further use in similar settings must show the generalizability of distance estimates. The clustering of infections across diverse settings suggests using spatial signatures for reactive approaches also for burden reduction. The method provides a tool for NMPs to make informed decisions on operational distances, considering the number of households needed for a certain coverage vs. feasibility.
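As a closing illustration of such a cost-benefit decision, the trade-off curve could be sketched in R as follows, using the same hypothetical data layout as the earlier sketches (projected coordinates in metres and an infection indicator). This illustrates the idea only and is not an operational tool.

```r
# Sketch: for each candidate radius, how many households a reactive
# intervention would visit vs the proportion of infections it would capture.
# Hypothetical inputs as before: coords (metres) and pos (infection indicator).
D <- as.matrix(dist(coords))
diag(D) <- Inf
radii <- seq(0, 1000, by = 50)
tradeoff <- t(sapply(radii, function(r) {
  near_index <- colSums(D[pos, , drop = FALSE] <= r) > 0  # within r of any index
  c(radius_m            = r,
    households_visited  = sum(near_index),
    infections_captured = sum(near_index & pos) / sum(pos))
}))
# A programme could pick the radius where infections captured per household
# visited begins to flatten, balancing expected yield against feasibility.
```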
Supplementary Figures

Figure S3 Prevalence of infection by age per gender and species. Significance asterisks for test of differences in prevalence across age groups per gender strata.

Except for China, which reports zero cases since 2017, malaria remains endemic across the GMS, with high malaria incidence and mortality observed until recently in Cambodia (World Health Organization, 2021a, 2021b). In the WHO Strategy for malaria elimination in the Greater Mekong Subregion 2015-2030, all GMS countries committed to eliminating P. falciparum in areas where multidrug resistance is known by 2020, all P. falciparum malaria by 2025, and all malaria by 2030. Some countries even declared earlier targets, e.g. elimination of P. falciparum malaria overall by 2020 and complete malaria elimination by 2025 in Cambodia (Cambodian Ministry of Health, 2011; Chinese Ministry of Health, 2010; World Health Organization, 2015b).

Figure 2 Annual case numbers of indigenous overall and P. falciparum malaria cases in the GMS from 2000 until 2020 (World Health Organization, 2021a).

Figure 3 Malaria case numbers (blue bars, left axis) and deaths (orange line, right axis) as per the Cambodian control programme (Cambodian National Center for Parasitology, 2021).

The WHO advises malaria programmes to tailor elimination interventions to the local distribution of the remaining reservoir of infection and to target high-risk groups, see Figure 4 (Global Malaria Programme, 2017; World Health Organization, 2015a, 2022).

Figure 4 The effect of (a) an untailored intervention programme compared to (b) a programme that stratifies interventions by transmission levels (World Health Organization, 2014).

Figure 5 Provinces in Cambodia (left) and malaria incidence in the GMS in 2012 and 2020 (NIRVn, 2014; World Health Organization, 2021a).

One way to delineate where remaining transmission occurs is to associate disease or infection risk on different spatial scales with increasing granularity, e.g. villages, hamlets, or individual households. Bannister-Tyrrell et al., for example, conducted a cross-sectional survey in three villages in Ratanakiri province, north-eastern Cambodia, in 2016-2017. They found that odds for Plasmodium spp. infection were sufficiently explained by household-level information, by contrast to individual or village-level covariates, suggesting the household as the unit of risk clustering (Bannister-Tyrrell et al., 2018). In a low transmission area in South Africa, Mpumalanga province, SaTScan assisted in identifying clusters of confirmed malaria cases in seven towns over three years, 2002-2005 (Coleman et al., 2009). Based on cross-sectional surveys at the Thailand-Myanmar border in 2011-2012, Parker et al. found clusters of P. vivax infections in the dry season around a few year-round water sources (Parker et al., 2015).

Figure 6 Spatial signature of dengue cases in the same month, i.e. the relative probability (y) of two case pairs being of the same serotype within a distance interval (x) compared to any other case pair carrying the same genotype. The ribbon indicates the 95% confidence interval as derived from a null distribution that erased any spatial clustering in time (Salje et al., 2012).
Chapter 3: Forest malaria in Cambodia: the occupational and spatial clustering of Plasmodium vivax and Plasmodium falciparum infection risk in a cross-sectional survey in Mondulkiri province

Fig. 1 Geographic and demographic description of survey population. a Map of survey household locations, coloured by village in shades of blue, purple, or green if village is in category "outside forest", "forest fringe", or "inside forest", respectively. Background Landsat-8 image courtesy of the U.S. Geological Survey. b Representativeness of age distribution in women and men by overlaying distribution in census (bars) by distribution in survey (points)
Fig. 2 Prevalence of P. vivax infection per village (as boxplots by proximity of villages to the forest in a and as squares in b). Household locations of survey participants in the map background, transparently coloured by village in shades of blue, purple, or green if village is in category "outside forest", "forest fringe", or "inside forest", respectively. Significance asterisk for Kruskal-Wallis test of differences in village-level prevalence
Fig. 4 Spatial clusters of Plasmodium infection (red-shaded circles) as detected by covariate-adjusted SaTScan analysis. White dots represent household locations in the survey, red dots for PCR-positive participants

Figure 1 Spatial signature of P. vivax or P. falciparum infections in the Cambodian cross-sectional survey
Figure 2 Spatial signatures of infections in the cross-sectional surveys in Thailand, Brazil, and Solomon Islands
Figure 3 Spatial and spatiotemporal signatures of infections in the cohort studies in Thailand and Brazil
Figure 4 Spatiotemporal signatures of infections with a matching genotype in the cohort study in …
Figure 5 Comparative analysis of distance to 50% reduction in prevalence by global study prevalence

Clinical malaria episodes numbered 126 in 2013 (45 in Dielmo and 81 in Ndiop), but decreased progressively over time to 86 in 2014 (49 in Dielmo and 37 in Ndiop), 13 in 2015 (3 in Dielmo and 10 in Ndiop), and 11 in 2016 (3 in Dielmo and 8 in Ndiop) (Table 2).

Author contributions. M. N. and A. T-B. conceived and designed the study. F. D. S., J. F., A. B., N. D., C. S., and A. T. B. coordinated the fieldwork and samples collection. M. N., A. F. M., R. S., B. D., and L. G. T. performed …

Figure 3. Spatial clustering of PCR-positive (asymptomatic) Plasmodium falciparum infections in (A) Dielmo and (B) Ndiop villages. Abbreviation: PCR, polymerase chain reaction.
Figure 4. Spatial clustering of clinical cases near PCR-positive Plasmodium falciparum infections in (A) Dielmo and (B) Ndiop villages. Abbreviation: PCR, polymerase chain reaction.
The resurgence of malaria cases was responded to not only by reinstating the VMW programme and changing the first-line ACT. The NMP also launched the Intensification Plan that ran from 2018 until 2020 and targeted areas and populations with increased malaria risk with further scaled-up testing and treatment by VMWs/MMWs. The clustering of infections at the village or the sub-village level and in hard-to-reach populations, as described in our and other cross-sectional surveys, explicitly informed the programme [START_REF] Sovannaroth | Accelerating malaria elimination in Cambodia: an intensified approach for targeting at-risk populations[END_REF][START_REF] Stratil | Eliminating Plasmodium falciparum malaria: results from tailoring active case detection approaches to remote populations in forested border areas in north-eastern Cambodia[END_REF].

The concept of transmission 'hotspots' is based on the idea that in every endemic setting, small clusters of households exist that have an increased risk of transmission of Plasmodium spp. infection due to local circumstances that facilitate transmission, e.g. proximity to breeding sites [START_REF] Bousema | Hitting hotspots: spatial targeting of malaria for control and elimination[END_REF]. From these overly active transmission systems, infections 'spill' outwards into the surroundings and maintain transmission there, particularly at overall low transmission levels. Likewise, transmission cycles can persist over the low transmission season within these hotspots and cause expanding transmission with the onset of the high transmission season. Targeting such hotspots would be efficient in reducing infection levels. Malaria control and elimination programmes would thus need to survey intervention areas once, detect the hotspots, and target their interventions to them. This concept presumes the stability of such hotspots over time. Some studies indeed found areas that were repeatedly hotspots in consecutive surveys. In two surveys one year apart among four villages in Tanzania in the dry seasons of 2010 and 2011, being in a P. …

Figure S1 Classification of villages as to their forest proximity based on forest cover in 750 m …
Figure S2 Prevalence of P. falciparum infection per village (as boxplots by proximity of villages to the forest in a and as squares in b). Household locations of survey participants in the map background, transparently coloured by village in shades of blue, purple, or green if village is in category "outside forest", "forest fringe", or "inside forest", respectively. Significance asterisk for Kruskal-Wallis test of differences in village-level prevalence.

Table 1 PCR positivity overall and extrapolated prevalence per village (positive P. malariae samples omitted)
 | N | PCR positivity Plasmodium | P. vivax | P. falciparum | Co-infections
Proportion (n) of survey samples | 4200 | 8.3% (349) | 6.4% (268) | 3.0% (125) | 1.1% (48)
Extrapolated population prevalence | 10,053 | 8.9% | 6.8% | 3.3% | 1.3%
Extrapolated prevalence per village:
Outside forest: Trapaingphiae | 594 | 0.6% (3) | 0.6% (3) | 0% (0) | 0% (0)
Outside forest: Chhnaeng | 550 | 2.7% (12) | 1.9% (8) | 0.9% (4) | 0.3% (1)
Outside forest: Oham | 333 | 8.0% (20) | 6.2% (16) | 2.9% (7) | 1.1% (3)
Outside forest: Ohrana | 470 | 5.7% (25) | 4.2% (18) | 1.8% (8) | 0.2% (1)
Outside forest: Sraepreas | 228 | 6.5% (13) | 5.6% (11) | 1.7% (3) | 0.8% (1)
Outside forest: Sraektum | 296 | 13.1% (36) | 8.6% (24) | 4.8% (13) | 0.8% (2)
Outside forest: Lapakhe | 71 | 10.6% (4) | 6.7% (3) | 7.8% (2) | 3.9% (1)
Outside forest: Ohkaunpreas | 317 | 6.9% (21) | 5.9% (18) | 1.7% (5) | 0.7% (2)
Outside forest: Trapaingtouk | 201 | 1.9% (3) | 1.2% (2) | 0.6% (1) | 0% (0)
Forest fringe: Poucha | 239 | 9.7% (24) | 6.7% (17) | 4.3% (10) | 1.4% (3)
Forest fringe: Sraeampilkroam | 218 | 11.0% (23) | 7.2% (15) | 3.8% (8) | 0.6% (1)
Forest fringe: Sraeampilleu | 168 | 10.6% (17) | 8.2% (13) | 2.4% (4) | 0% (0)
Inside forest: Beng-Gaty | 106 | 37.9% (40) | 30.9% (32) | 25.1% (27) | 18.1% (19)
Inside forest: Ohchra | 152 | 40.4% (60) | 36.3% (54) | 10.0% (15) | 5.9% (9)
Inside forest: Sraelvy | 154 | 18.7% (28) | 15.5% (23) | 3.9% (6) | 1.3% (2)
Inside forest: Ohtrone | 103 | 19.7% (20) | 10.4% (11) | 12.4% (12) | 3.1% (3)

Table 2 Detectability of infections through the public health care system
Measure | PCR positivity P. vivax | PCR positivity P. falciparum
LM+ | 16.8% (45/268) | 25.6% (32/125)
Symptoms at interview | 6.7% (18/268) | 16.8% (21/125)
Symptoms & LM+ | 2.2% (6/268) | 4.8% (6/125)
Symptoms & RDT+ | 0.4% (1*/268) | 4.0% (5/125)
Symptoms & RDT/LM+ | 2.2% (6/268) | 7.2% (9/125)
Proportion (N) of PCR-positive samples that were also detectable by LM, inspection of symptoms, or by testing of symptomatic individuals with LM or RDTs. *Actually RDT+ for P. falciparum in a co-infection with P. vivax as by PCR.

Table 3 Densities of asexual and sexual stages in LM+ samples
LM positivity | P. vivax | P. falciparum
Gametocytaemia, proportion (N) of LM+ samples with gametocytes | 2.7% (1/37) | 37.0% (10/27)
Gametocytaemia, geometric mean [/μl blood] | 31.9 | 51.1
Gametocytaemia, range [/μl blood] | 31.9-31.9 | 15.6-175.0
Geometric mean and range always given from among samples with non-zero densities.

Table 4 Univariate association of infection risk and behavioural covariates
Covariate | N | P. vivax prevalence | p-value | P. falciparum prevalence | p-value
Work-unrelated travels overnight in the last month to…
None | 3465 | 4.9% (171) | <0.001 | 2.0% (70) | <0.001
Field sites | 90 | 3.3% (3) | | 4.4% (4) |
Forest sites | 268 | 22.4% (60) | | 14.9% (40) |
A village | 177 | 13.6% (24) | | 5.6% (10) |
A city | 59 | 1.7% (1) | | 0% (0) |
Unspecified | 123 | 7.3% (9) | | 0.8% (1) |
Work in the last two months in…
…deep forest: No | 4055 | 5.7% (231) | <0.001 | 2.4% (98) | <0.001
…deep forest: Yes | 145 | 25.5% (37) | | 18.6% (27) |
…cassava field: No | 2115 | 5.6% (118) | <0.05 | 2.9% (62) | n.s.
…cassava field: Yes | 2085 | 7.2% (150) | | 3.0% (63) |
No work: No | 3186 | 7.9% (253) | <0.001 | 3.7% (119) | <0.001
No work: Yes | 1014 | 1.5% (15) | | 0.6% (6) |
Slept outdoors last night: Indoors | 4124 | 6.2% (256) | <0.01 | 2.9% (118) | <0.01
Slept outdoors last night: Outdoors | 76 | 15.8% (12) | | 9.2% (7) |
Sprays repellent usually at bedtime: No | 3668 | 6.7% (247) | <0.05 | 3.2% (116) | <0.10
Sprays repellent usually at bedtime: Yes | 524 | 4.0% (21) | | 1.7% (9) |
Slept under net last night: No | 320 | 10.0% (32) | <0.01 | 6.6% (21) | <0.001
Slept under net last night: Yes | 3880 | 6.1% (236) | | 2.7% (104) |
Prevalence (n) of PCR-positivity across the strata of those behavioural variables that were statistically significant in univariate logistic regression. "n.s.": Not significant
Table 5 Risk factors after multivariate mixed-effects logistic regression for Plasmodium infection as detected by PCR
Covariate | N | aOR | 95% CI | p
Gender: Female | | Reference | | <0.001
Gender: Male | | 3.06 | [2.26-4.13] |
Age [years]: 2-10 | | Reference | | <0.001
Age [years]: 11-15 | | 4.51 | [2.52-8.06] |
Age [years]: 16-20 | | 7.47 | [4.14-13.47] |
Age [years]: 21-25 | | 7.84 | [4.20-14.63] |
Age [years]: 26-30 | | 8.08 | [4.36-14.94] |
Age [years]: 31-35 | | 7.71 | [4.05-14.70] |
Age [years]: 36-40 | | 7.15 | [3.76-13.58] |
Age [years]: 41-45 | | 5.33 | [2.67-10.65] |
Age [years]: 46-50 | | 4.16 | [1.92-8.99] |
Age [years]: 51-80 | | 2.68 | [1.36-5.29] |
Forest proximity of village: Outside forest | | Reference | | <0.001
Forest proximity of village: Forest fringe | | 2.14 | [1.02-4.52] |
Forest proximity of village: Inside forest | | 12.47 | [6.29-24.71] |
Material of roof: Grass/leaves | | 0.96 | [0.26-3.55] | <0.01
Material of roof: Tent | | 3.43 | [1.36-8.64] |
Material of roof: Corrugated iron | | Reference | |
Material of roof: Wood planks/cement/tiles | | 0.50 | [0.30-0.84] |
Household owns a toilet: No | | Reference | | <0.05
Household owns a toilet: Yes | | 0.69 | [0.48-1.00] |
Household owns buffaloes: No | | Reference | | <0.01
Household owns buffaloes: Yes | | 2.11 | [1.21-3.69] |
Household head had received information on malaria via TV in the past 3 months: No | | Reference | | <0.01
Household head had received information on malaria via TV in the past 3 months: Yes | | 0.37 | [0.16-0.81] |
Insecticides had been sprayed inside the house in the past year: No | | Reference | | <0.01
Insecticides had been sprayed inside the house in the past year: Yes | | 0.62 | [0.43-0.89] |
Slept outdoors last night: No | | Reference | | 0.08
Slept outdoors last night: Yes | | 1.99 | [0.93-4.25] |
Slept under net last night: No | | Reference | | 0.96
Slept under net last night: Yes | | 0.99 | [0.61-1.59] |
Work-unrelated travels overnight in the last month to…: None | 3465 | Reference | | <0.01
Work-unrelated travels overnight in the last month to…: Field sites | 90 | 0.63 | [0.25-1.59] |
Work-unrelated travels overnight in the last month to…: Forest sites | 268 | 2.17 | [1.41-3.35] |
Work-unrelated travels overnight in the last month to…: A village | 177 | 1.28 | [0.75-2.17] |
Work-unrelated travels overnight in the last month to…: A city | 59 | 0.27 | [0.03-2.13] |
Work-unrelated travels overnight in the last month to…: Unspecified | 123 | 1.23 | [0.56-2.74] |
Work in the last two months in deep forest: No | 4055 | Reference | | <0.001
Work in the last two months in deep forest: Yes | 145 | 2.88 | [1.69-4.93] |
Work in the last two months in cassava field: No | 2115 | Reference | | <0.05
Work in the last two months in cassava field: Yes | 2085 | 1.37 | [1.01-1.85] |

Table 1 Design and basic summary of included cross-sectional surveys and cohort studies
Study type | Country (reference) | Number of participants | Timing and duration | P. vivax PCR prevalence | P. falciparum PCR prevalence | Median age [range] | Gender
Cross-sectional surveys (sampling once) | Cambodia (9) | 4,200 | Dec 2017-Apr 2018 (dry season) | 6.4% | 3.0% | 22 years [2-79] | Female 53%
Cross-sectional surveys (sampling once) | Thailand (7) | 4,309 | Sep-Oct 2012 (low transmission season) | 3.1% | 0.9% | 20 years [0-92] | Female 52%
Cross-sectional surveys (sampling once) | Brazil (6) | 2,010 (Survey 1) | Nov 2012-Jan 2013 (rainy season) | 4.3% | 0.8% | 23 years [0-100†] | Female 47%
Cross-sectional surveys (sampling once) | Brazil (6) | 2,073 (Survey 2) | Aug-Sep 2013 (dry season) | 3.4% | <0.1% | 25 years [0-100†] | Female 48%
Cross-sectional surveys (sampling once) | Solomon Islands (48) | 3,501 | May-June 2012 (minimal seasonality) | 13.4% | 0.1% | 18 years [1-100] | Female 53%
Cohort studies (monthly sampling) | Thailand (8) | 999 | May 2013-Jun 2014 (14 sampling visits) | 1.7-4.2%* | 0-1.3%* | 23 years [1-82] | Female 54%
Cohort studies (monthly sampling) | Brazil (47) | 1,274 | Apr 2013-Mar 2014 (13 sampling visits) | 2.5-6.5%* | 0-1.0%* | 25 years [0-100†] | Female 49%

Table 2 Distance to 50% reduction in prevalence and global study prevalence across the studies
Study type | Study site | P. falciparum global study prevalence | P. falciparum distance to 50% reduction | P. vivax global study prevalence | P. vivax distance to 50% reduction
Cross-sectional surveys | Cambodia | 3.0% | 975 m | 6.4% | 875 m
Cross-sectional surveys | Thailand | 1.1% | 75 m | 3.3% | 25 m
Cross-sectional surveys | Brazil Survey 1 | - | - | 4.2% | 175 m

The spatial signatures of P. vivax and P. falciparum infections demonstrate spatial clustering across a diverse set of study sites and transmission intensities. Introducing a novel method in spatiotemporal malaria epidemiology, we quantify the distance within which clustering occurs around index infections. These distances are often short, e.g. below 200 m, tending to lower values at lower global study prevalence.
The spatial signature of Plasmodium spp. infections offers a new tool to extract insights on Plasmodium spp. epidemiology from observational epidemiological field studies. It also provides a method to inform reactive infection detection strategies regarding effective and feasible radius choices of interventions around detected infections.

Fine-scale Spatiotemporal Mapping of Asymptomatic and Clinical Plasmodium falciparum Infections: Epidemiological Evidence for Targeted Malaria Elimination Interventions
Makhtar Niang, 1 Mirco Sandfort, 2,3 Adja Fatou Mbodj, 1 Babacar Diouf, 1 Cheikh Talla, 4 Joseph Faye, 4 Rokhaya Sane, 1 Laty Gaye Thiam, 1 Alassane Thiam, 1 Abdoulaye Badiane, 4 Ines Vigan-Womas, 1 Nafissatou Diagne, 5 Fatoumata Diene Sarr, 4 Ivo Mueller, 2 Cheikh Sokhna, 5 Michael White, 2 and Aissatou Toure-Balde 1
1 Institut Pasteur Dakar, Pôle Immunophysiopathologie & Maladies Infectieuses, Dakar, Sénégal; 2 Malaria: Parasites and Hosts Unit, Department of Parasites & Insect Vectors, Institut Pasteur, Paris, France; 3 Sorbonne Université, Collège doctoral, Paris, France; 4 Institut Pasteur Dakar, Pôle Epidémiologie, Recherche Clinique et Science des données, Dakar, Sénégal; and 5 VITROME, Campus international IRD-UCAD, Dakar, Sénégal
Abbreviations: CSS, cross-sectional survey; GPS, global positioning system; PCR, polymerase chain reaction.

Table 1. Demographic Characteristics of the Study Population
 | 2013 (N = 611) | 2014 (N = 713) | 2015 (N = 741) | 2016 (N = 727)
Location, Dielmo | 301 (49.3%) | 348 (48.3%) | 357 (48.2%) | 354 (48.0%)
Location, Ndiop | 310 (50.7%) | 365 (51.7%) | 384 (51.8%) | 373 (52.0%)
Sex, Male | 249 (40.7%) | 327 (45.4%) | 340 (45.9%) | 311 (42.2%)
Sex, Female | 362 (59.3%) | 386 (54.6%) | 401 (54.1%) | 416 (57.8%)
Age, Mean | 21.3 | 22.1 | 22.3 | 22.3
Age, Range | 0.4-91.5 | 0.2-92.5 | 0.2-93.4 | 0.3-82
Age, ±SD | 19.3 | 18.6 | 18.9 | 18.8
Age groups (y), ≤5 | 113 (18.6%) | 120 (16.8%) | 121 (16.3%) | 130 (17.9%)
Age groups (y), 5-15 | 201 (32.9%) | 215 (30.2%) | 226 (30.5%) | 211 (29.0%)
Age groups (y), >15 | 297 (48.5%) | 378 (53.0%) | 394 (53.2%) | 386 (53.1%)
Abbreviation: SD, standard deviation.

Table 2. Microscopy and qPCR-Based Parasite Prevalence and Clinical Plasmodium falciparum Cases in the Studied Areas and Periods
 | 2013 Dielmo | 2013 Ndiop | 2014 Dielmo | 2014 Ndiop | 2015 Dielmo | 2015 Ndiop | 2016 Dielmo | 2016 Ndiop
N (clinical surveillance) | 312 | 332 | 353 | 366 | 357 | 385 | 356 | 376
N (sample) | 301 | 310 | 348 | 365 | 357 | 384 | 354 | 373
Microscopy, N (%) | 0 (0) | 0 (0) | 1 (0.28) | 1 (0.27) | 1 (0.28) | 0 (0) | 0 (0) | 0 (0)
qPCR (all species)ᵃ, N (%) | 14 (4.7) | 9 (2.9) | 45 (12.9) | 47 (12.9) | 25 (7.0) | 21 (5.2) | 22 (6.2) | 64 (17.2)
qPCR (Pf), N (%) | 13 (4.3) | 8 (2.6) | 43 (12.4) | 47 (12.9) | 24 (6.7) | 20 (5.2) | 22 (6.2) | 64 (17.2)
Submicroscopic, N (%) | 13 (4.3) | 8 (2.6) | 42 (12.1) | 46 (12.6) | 23 (6.4) | 20 (5.2) | 22 (6.2) | 64 (17.2)
Clinical malaria, N | 45 | 81 | 49 | 37 | 3 | 10 | 3 | 8
Age groups, y (%), <5 | 1/54 (1.9) | 12/61 (19.7) | 5/49 (10.2) | 6/72 (8.3) | 0/51 (0) | 0/70 (0) | 0/50 (0) | 1/81 (1.2)
Age groups, y (%), 5-15 | 23/112 (20.5) | 31/97 (32.0) | 19/113 (16.8) | 13/104 (12.5) | 1/110 (0.9) | 5/117 (4.3) | 0/106 (0) | 4/106 (3.8)
Age groups, y (%), >15 | 21/146 (14.4) | 38/174 (21.8) | 25/191 (13.1) | 18/190 (9.5) | 2/196 (1.0) | 5/198 (2.5) | 3/200 (1.5) | 3/189 (1.6)
Abbreviations: Pf, Plasmodium falciparum; qPCR, quantitative polymerase chain reaction.
ᵃ The qPCR detected no P. vivax. P. ovale was found as single infections in 2 individuals in 2015, and in co-infections with P. falciparum in 1 and 9 individuals in 2013 and 2014, respectively. P. malariae was not detected in 2015, but 2 unique cases were detected in 2013 and 2014, respectively.
Table 3. Individual Versus Household Risk Analysis of the Relationship Between Asymptomatic Plasmodium falciparum Carriage and Subsequent Clinical Malaria Episode in Dielmo and Ndiop Villages
[Table body, stratified by year (2013-2016, all years) and by individual- versus household-level analysis, could not be recovered from the extracted text.]

Table 4. Results of Logistic Regression for the Association Between Plasmodium falciparum Prevalence by PCR and the Subsequent Incidence of Clinical Malaria
 | Pfclin ind - PfPCR ind (Unadjusted) | Pfclin ind - PfPCR ind (Adjusted) | Pfclin ind - PfPCR house (Unadjusted) | Pfclin ind - PfPCR house (Adjusted) | Pfclin house - PfPCR house (Unadjusted) | Pfclin house - PfPCR house (Adjusted)
Dielmo, OR (95% CI) | 1.57 (0.76-3.24) | 1.76 (0.83-3.73) | 1.95 (1.15-3.30) | 2.54 (1.45-4.45) | 6.86 (3.33-14.13) | 4.51 (1.31-15.62)
Dielmo, P value | .22 | .14 | .013 | .001 | 1.7 × 10^-7 | .017
Ndiop, OR (95% CI) | 1.05 (0.54-2.00) | 1.78 (0.89-3.55) | 0.81 (0.53-1.23) | 1.05 (0.70-1.74) | 5.05 (2.44-10.45) | 1.32 (0.32-5.47)
Ndiop, P value | .88 | .10 | .32 | .66 | 1.2 × 10^-5* | .70
Dielmo and Ndiop, OR (95% CI) | 1.26 (0.78-2.04) | 1.77 (1.06-2.93) | 1.19 (0.86-1.65) | 1.58 (1.12-2.24) | 5.90 (3.54-9.84) | 2.44 (1.00-5.98)
Dielmo and Ndiop, P value | .88 | .028 | .29 | .0097 | 1 × 10^-11 | .0499

Falq et al. detected infections in the northern Preah Vihear province in 2014 at a prevalence of 0-4.6% for P. falciparum, 0-3.6% for P. vivax, or 0-12.1% for Plasmodium spp. across all 21 villages of one district, despite the use of ultra-sensitive PCR and sampling in the rainy season (Falq et al., 2016). Only a survey in Ratanakiri in the rainy season more than 15 years earlier found a more extreme village-level infection prevalence, though ranging less widely (Steenkeste et al., 2010). Between-village variation in prevalence was also described in Thailand (Sattabongkot et al., 2018) and Laos (Iwagami et al., 2017; Phommasone et al., 2016; Pongvongsa et al., 2016). In the survey by Tripura et al. from 2015 in Battambang province, a similar trend of village-level prevalence and proximity to the forest is apparent from the map; however, it is not analytically tested [START_REF] Tripura | Submicroscopic Plasmodium prevalence in relation to malaria incidence in 20 villages in western Cambodia[END_REF]. Incardona et al. sampled from randomly selected villages in three provinces in 2001-2003 and provided maps of the villages, their infection prevalence based on LM, and forest-covered areas (drawn from aerial and satellite photographs and field verifications). Prevalence tended to be high in remote, forested, mountainous areas or camps outside villages at the forest fringe (Incardona et al., 2007).

The subsequent analysis of the survey in Battambang focused on behavioural exposure by spending time in the forest without accounting for the baseline exposure when residing close to the forest [START_REF] Tripura | Submicroscopic Plasmodium prevalence in relation to malaria incidence in 20 villages in western Cambodia[END_REF]. Incardona et al. accounted for distance to the forest in their multivariable regression model for infection risk and found an association in two of the three provinces (Incardona et al., 2007). However, they could not simultaneously account for behavioural exposure. Our survey demonstrated by multivariable mixed-effects logistic regression that both behavioural exposure (i.e. working in or travelling to the forest) and living in villages at the forest fringe or inside the forest were independent risk factors of infection.
Sluydts et al. found hotspots of infection risk with Plasmodium spp. in general, and by species (Sluydts et al., 2014). These hotspots were located mainly at a river, but locations varied per species. The hotspots had a 24-32 km radius and were based on the villages' coordinates, i.e. they could not stratify infection risk beyond the village scale by design. By contrast, in three villages in Ratanakiri in the dry season in 2016, Durnez et al. searched for spatial clusters of PCR-detectable infections per village separately, based on GPS coordinates per household (Durnez et al., 2018). Clusters of P. vivax, P. falciparum, or P. malariae infections could be located among field houses in densely forested areas and inside two villages close to the forest. These hotspots were between less than 100 m and 1.8 km wide. One cluster in a village included only one house inhabited by numerous families, which could hint at social clustering in the village. Spatial clustering of infections at the sub-village level elsewhere in the GMS was shown by Parker et al. in a village at the Thailand-Myanmar border in 2011-2012, where cases clustered in households close to a water drainage system and other continuous water sources.

The same holds for a hotspot of P. vivax infection detected in the dry and rainy seasons in 2012. A P. vivax hotspot detected in 2013 was at a different site. Hotspots based on clinical case incidence were in yet different locations for P. falciparum and would not have predicted the hotspot of P. vivax infections in 2013 (Kerkhof et al., 2016; Sluydts et al., 2014). In two consecutive surveys among households from 3 villages in Ratanakiri province in 2016 and 2017, living in a household in a hotspot of PCR-detected infection in 2016 was not associated with an individual or household-level risk of infection at the follow-up survey one year later (Bannister-Tyrrell et al., 2019). Furthermore, a cluster-randomized trial of hotspot-targeted interventions in Kenya could not observe evidence for an incidence reduction outside the targeted hotspot areas.

The spatial signature method was originally developed and applied to dengue fever, HIV/AIDS, and measles (Lessler et al., 2016; Salje et al., 2012). Its initial development was designed to infer spatial clustering from case locations without knowing the spatial distribution of the underlying population from which the cases arose. Instead, it used data on the genetic relatedness of the virus strains of the cases. Spatial clustering was inferred from cases closer to each other having a higher chance of being infected with more closely related pathogen strains.

In malaria surveys, the genetic relatedness of the parasite strains of infections by P. falciparum, P. vivax, or other species is laborious to determine and rarely reported. However, an underlying population in which the cases were detected is usually known. Hence, we can assess the pooled prevalence of neighbouring infections at a certain distance from index infections and how it develops with increasing distance, see Chapter 4. By randomly re-allocating case locations, statistical significance can be assessed by comparing the observed signature with a null distribution. In that way, several purposes are fulfilled: 1) spatial clustering is tested for and detected if case pair prevalence is elevated at a close distance above the null distribution, 2) the magnitude of the spatial clustering is quantified by the increase of prevalence with a shorter distance, and 3) the clustering is assessed on the continuous scale of distance between observation pairs. Spatiotemporal patterns can be assessed by comparing pairs of observations sampled at different times.
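To make the computation concrete, the following is a minimal sketch of such a spatial signature in Python. The function and variable names are ours, not from the published analyses, which may differ, e.g. in using distance bands rather than cumulative thresholds or in other null-sampling details:

```python
import numpy as np

def spatial_signature(coords, infected, thresholds, n_null=1000, seed=0):
    """Pooled prevalence among neighbours within increasing distances
    around index infections, with a permutation null envelope.

    coords:     (n, 2) array of household positions (e.g. in metres)
    infected:   array of length n with PCR positivity (True/False)
    thresholds: increasing distance thresholds (same unit as coords)
    """
    rng = np.random.default_rng(seed)
    status = np.asarray(infected, dtype=bool)
    # pairwise Euclidean distances between all participants
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

    def signature(inf):
        idx = np.flatnonzero(inf)                    # index infections
        prev = []
        for r in thresholds:
            near = dist[idx] <= r
            near[np.arange(len(idx)), idx] = False   # exclude self-pairs
            neighbours = np.where(near)[1]           # pooled over all index cases
            prev.append(inf[neighbours].mean() if neighbours.size else np.nan)
        return np.array(prev)

    observed = signature(status)
    # null distribution: randomly reallocate infection labels over locations
    null = np.array([signature(rng.permutation(status)) for _ in range(n_null)])
    lower, upper = np.nanpercentile(null, [2.5, 97.5], axis=0)
    return observed, lower, upper
```

Observed prevalence rising above the upper envelope at short distances then indicates significant spatial clustering, and the decay of the observed curve towards the global prevalence underlies the 'distance to 50% reduction' summaries reported below.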
As demonstrated, spatial and spatiotemporal clustering of PCR-detectable infection risk existed to differing degrees across a diverse set of transmission settings from Cambodia, Brazil, Thailand, and Solomon Islands. At medium to low prevalence, the distance to a 50% reduction in prevalence was within 1 km, often even below 100 m around index cases, with a trend to lower distances at lower global study prevalence. That is, infections tend to cluster more in lower transmission settings. Overall, similar distance ranges within which most of the risk reduction occurs were observed in the studies of dengue fever, HIV/AIDS, and measles (Lessler et al., 2016; Salje et al., 2012).

In our study in the two Senegalese villages of Ndiop and Dielmo, we had the chance to apply the spatial signature method in a Sub-Saharan setting and assess the degree of spatial clustering of PCR-detectable P. falciparum infections and clinical malaria episodes (Niang et al., 2021), see Chapter 5. Based on blood samples from the general village populations from each year from 2013 until 2016 and household GPS locations, PCR-detectable infections were found to spatially cluster in Ndiop within 70 m around index infections but not in Dielmo. Clinical cases were not found to spatially cluster beyond the household level.

When applied to the monthly time points of cohort studies of PCR-detectable P. vivax infections in Thailand and Brazil, the dilution of spatial clustering of pairs of incident infections (first detections of an infection in a given individual) with widening temporal windows indicated temporal clustering. However, in both settings and particularly in the Brazilian study, the elevated period prevalence of incident case pairs at close proximity was still statistically significant when assessed across the entire year-long study period, indicating the predictive potential of detecting nearby cases around the location of index cases across a year in time.

6.1.6. Spatially informing (re-)active case detection schemes

As the major recent intervention scheme to accelerate progress towards the malaria elimination targets, the Cambodian NMP launched the Intensification Plan (Sovannaroth et al., 2022). On top of the already established malaria control and intervention activities, the plan contains a further scale-up of VMW and MMW coverage and activities. In identified hotspots, VMWs ought to fill gaps in households owning LLINs and their hammock versions, continue to provide RDT testing and effective ACT treatment, and ensure immediate referral of severe malaria cases to the nearest health clinic, all among the general village populations. Among mobile and migrant populations, MMWs should also provide RDT testing and effective treatment but also actively search for cases that otherwise might not have reported themselves to the MMWs. Firstly, the plan includes active test and treat outreach twice per month to individuals in forest hotspots. Secondly, MMWs are supposed to ask positive forest goers for their co-travellers to test them for infection and treat them if necessary. No active case detection is intended for the village-level, VMW-guided layer of the intervention.

The more homogeneously distributed infection risk across the entire village population in forest villages in our Mondulkiri cross-sectional survey, as well as the short-range distance clustering of infections in our spatial signature analyses, suggest that such an active case detection approach could be helpful in tackling within-village transmission cycles. It would be helpful to repeat the spatial signature analysis on survey data from various regions with high levels of persistent transmission in Cambodia to assess the generalizability of specific estimates of distances within which infections cluster to a certain degree. Based on this, the NMP would then be able to make an informed choice at what distance from index cases to search for other cases, balancing the expected share of clustering infections that would be covered against the feasibility, i.e. the number of households that would need to be screened (a small sketch of this trade-off follows at the end of this subsection).

As Kunkel et al. argue that the sensitivity of the testing by RDTs is insufficient [START_REF] Kunkel | Choosing interventions to eliminate forest malaria: preliminary results of two operational research studies inside Cambodian forests[END_REF] and PCR detection is not feasible at the VMW points of care, a spatially informed drug administration scheme could be considered: families who reside within a certain proximity around index cases are offered treatment without certainty of actual current infection, in analogy to the intermittent preventive treatment of forest goers suggested by Kunkel et al. and Dysoley et al. (Lek et al., 2020).

Once transmission has been largely interrupted in an area, e.g. by the Intensification Plan interventions, malaria elimination programmes need to switch to responsive approaches.

Spatial scanning methods like SaTScan are powerful for scanning study areas for statistically significant clusters of positive observations, e.g. hotspots of PCR-detected cases. Their use in spatial malaria epidemiology has been embedded in the hypothesis that certain locations are particularly favourable for transmission and spill infections into neighbouring areas that otherwise would not sustain transmission. Breaking transmission in these hotspots would also halt the spread beyond the intervention area. However, studies have found hotspots not stable over time or not predictive for future infection. A trial did not yield evidence for incidence reduction outside the intervention area. Furthermore, hotspot detection locates clusters but does not generalize spatial dynamics in a given area. Conducting resource-intense surveys in all areas of operation does not seem feasible for NMPs.

Our survey confirmed the disproportionate association of infection risk with time spent in the forest. However, residents of villages in the forest were at high risk across all ages and gender, and households from these villages formed hotspots of infections despite adjustment for forest-going behaviour. This suggests peri-domestic transmission and that targeting high-risk groups and passive case detection might not suffice for elimination.

The predominance of P. vivax infections affirms the roll-out of radical cure in combination with G6PD testing. Clinical incidence relies on the self-reporting of patients to health care facilities, VMWs, or MMWs in reach, proper diagnosis, and onward notification. The vast majority of asymptomatic infections, as found in field studies with active outreach and molecular parasite detection, means that most infections are missed by curative health care.
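For the coverage-versus-feasibility balance mentioned above, a small companion sketch (again with our own, hypothetical names, and assuming one row per household) could report, for a candidate operational radius, the share of infected neighbour pairs that would be covered and the screening workload per index case:

```python
import numpy as np

def radius_tradeoff(coords, infected, radius):
    """Trade-off for one candidate operational radius: the share of
    infected neighbour pairs falling within `radius` of an index
    infection vs. the mean number of locations to screen per index."""
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    infected = np.asarray(infected, dtype=bool)
    idx = np.flatnonzero(infected)
    near = dist[idx] <= radius
    near[np.arange(len(idx)), idx] = False            # drop self-pairs
    pairs_covered = (near & infected[None, :]).sum()  # infected neighbours reached
    pairs_total = len(idx) * (len(idx) - 1)           # all ordered infected pairs
    share = pairs_covered / pairs_total if pairs_total else float("nan")
    screened = near.sum(axis=1).mean() if len(idx) else float("nan")
    return share, screened
```

Evaluating this over a grid of radii would give an NMP the coverage curve against which to weigh the number of households to visit.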
Even pro-active testing by VMWs or MMWs has a limited effect when diagnosis by LM or RDTs is not sensitive enough to detect the sub-patent, low-parasitaemia infections, which nevertheless contribute to transmission. As the high prevalence calls for burden reduction and risk is high across the general population, the NMP may need to consider MDA or mass testing and treatment.

The spatial signature method calculates the pooled prevalence of infections in widening areas (and potentially time windows) around other infections. The method detects clustering with statistical significance, summarizes the extent of clustering, generalizes spatial dynamics in infection risk across the observed area, and evaluates the radius around infections within which other infections occur at elevated likelihood. It revealed spatial clustering of household locations of PCR-positive individuals within close proximity across several field studies, notably in Brazil, Thailand, and Cambodia. The clustering tended to occur in narrower areas around infections at decreasing overall prevalence in the area. Infections in cohort studies appeared to cluster spatially and in time over a few months. These findings could support reactive interventions, such as reactive drug administration or reactive testing and treatment, and enable NMPs to make informed choices of operational distances.

Table ST2 Full list of covariates and their association with prevalence ("pos.") and risk of infection (as detected by PCR) for P. vivax and P. falciparum. Statistical significance of the covariate based on prevalence by χ² test (or Fisher's exact test if required) and based on univariate logistic regression with random intercepts for households and villages by likelihood ratio test (LRT). "n.s.": Not significant (p≥0.05).
[Table ST2 body not reproduced: the per-covariate N, prevalence ("pos. (n)"), χ² p-values, ORs with 95% CIs, and LRT p-values for both species could not be recovered from the extracted layout.]

Table ST3 Fixed effects parameters of multivariate logistic regression model for Plasmodium spp. infection as detected by PCR, including a gender-age interaction term.
Covariate | β | Std. error | p
(Intercept) | -4.67888 | 0.46687 |
Age [years]: 46-50 | 1.47956 | 0.81076 |
Age [years]: 51-80 | 1.92305 | 0.73188 |
Forest proximity of village: Outside forest | Reference | | <0.001
Forest proximity of village: Forest fringe | 0.77026 | 0.39881 |
Forest proximity of village: Inside forest | 2.59209 | 0.36768 |
Material of roof: Grass/leaves | -0.03273 | 0.67613 | <0.01
Material of roof: Tent | 1.35235 | 0.48477 |
Material of roof: Corrugated iron | Reference | |
Material of roof: Wood planks/cement/tiles | -0.70368 | 0.26758 |
Household owns a toilet: No | Reference | | 0.06
Household owns a toilet: Yes | -0.35946 | 0.18914 |
Household owns buffaloes: No | Reference | | <0.01
Household owns buffaloes: Yes | 0.82853 | 0.28977 |
Household head had received information on malaria via TV in the past 3 months: No | Reference | | <0.01
Household head had received information on malaria via TV in the past 3 months: Yes | -1.02219 | 0.40875 |
Insecticides had been sprayed inside the house in the past year: No | Reference | |
Insecticides had been sprayed inside the house in the past year: Yes | -0.47608 | 0.18622 |
[Remaining rows (gender, age 2-45, the gender-age interaction, sleeping, travel, and work covariates) could not be reassigned reliably from the extracted layout.]
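As a consistency check on how such coefficients translate into the adjusted odds ratios reported in Table 5 (assuming the usual Wald construction, which these tables appear to follow; note that Table ST3 fits a slightly different model with a gender-age interaction), take the insecticide-spraying covariate:

$$\mathrm{aOR} = e^{\hat\beta} = e^{-0.47608} \approx 0.62, \qquad 95\%\ \mathrm{CI} = \left[e^{\hat\beta - 1.96\,\mathrm{SE}},\ e^{\hat\beta + 1.96\,\mathrm{SE}}\right] = \left[e^{-0.84107},\ e^{-0.11109}\right] \approx [0.43,\ 0.89],$$

in line with the 0.62 [0.43-0.89] shown for this covariate in Table 5.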
Table ST4 Details of spatial clusters identified by covariate-adjusted SaTScan analysis.
Cluster | Lat | Long | Radius | # Pop | # Obs | # Exp | RR | p
1 | 12.22155 | 106.9107 | 4.5 km | 273 | 80 | 37 | 2.5 | <0.001
2 | 12.24500 | 106.8488 | 2.5 km | 152 | 60 | 28 | 2.4 | <0.001

Table ST5 Portion of villages that form the spatial clusters in the SaTScan analysis.
Village | Cluster | # households | # households in cluster | % of households in cluster
Beng-Gaty | 1 | 42 | 37 | 88.1%
Sraelvy | 1 | 36 | 20 | 55.6%
Ohtrone | 1 | 23 | 19 | 82.6%
Ohchra | 2 | 39 | 39 | 100%

… including Cambodia, China (Yunnan province), Laos, Myanmar, Thailand, and Vietnam.

Figure 1. Spatiotemporal localization of asymptomatic (AS, green), clinical (Cl, red), and asymptomatic with subsequent clinical (AS + CL, orange) Plasmodium falciparum infections within households in Dielmo village from 2013 to 2016.
Figure 2. Spatiotemporal localization of asymptomatic (AS, green), clinical (Cl, red), and asymptomatic with subsequent clinical (AS + CL, orange) Plasmodium falciparum infections within households in Ndiop village from 2013 to 2016.

Acknowledgements
The authors express their gratitude to the study participants, village heads, VMWs, local health facilities, and officials for making this study possible. We also thank the interviewers, technicians, and other study staff for their support.

Acknowledgements
The authors express their gratitude to the participants and field teams of the original studies. We also thank the reviewers for their availability and agreement to review this manuscript.

Acknowledgments. The authors thank the population of Dielmo and Ndiop villages who volunteered to participate in the study without whose cooperation this study would not have been possible. The authors acknowledge the kind donation of the Plasmodium vivax DNA by Dr. Ambroise Ahouidi (Le Dantec Hospital of Dakar). The healthcare workers in Dielmo and Ndiop are duly acknowledged for their cooperation in conducting this study.

Financial support. The study was funded by Institut Pasteur de Dakar, Institut de Recherche pour le Developpement de Dakar, and by the European & Developing Countries Clinical Trials Partnership (EDCTP), which is part of the EDCTP2 programme supported by the European Union (fellowship number TMA2018SF-2468). The views and opinions of authors expressed herein do not necessarily state or reflect those of EDCTP and other funders.

Potential conflicts of interest. The authors have declared that no competing interests exist.

… for her administrative support and competence, and the rapporteurs and jury members for their time.

Availability of data and materials
The de-identified dataset analysed for this study is being made publicly available in ClinEpiDB repository, https://clinepidb.org/ce/app/. In the meantime, it is available from the corresponding author on reasonable request.

Availability of data and materials
The observational studies from which data were used list their data availability statements in the original publications.

Funding
This study is part of the International Centers of Excellence for Malaria Research program "Understanding, tracking and eliminating malaria transmission in the Asia-Pacific Region", funded by the National Institutes of Health, MD, US (grant 1U19AI129392-01) and received additional funding by NHMRC (Australia, GNT1092789). Some of the included studies were supported by the TransEPI consortium funded by the Bill & Melinda Gates Foundation, USA (www.gatesfoundation.org). IM is supported by an NHMRC Principal Research Fellowship (GNT1155075). MS is part of the PhD program of the doctoral school ED393 Pierre Louis de santé publique and was supported by the Sorbonne Université (contract n°2695/2017). MW is supported by the European Research Council (852373). HS is supported by the European Research Council (804744). LJR is supported by an NHMRC Fellowship (Australia, GNT1161627).
The funding bodies had no role in the design of the study, the collection, analysis, and interpretation of data, and in writing the manuscript.

Annex
Chapter 3 - Supplementary material to chapter 3

Supplementary information
Supplementary information accompanies this paper at https://doi.org/10.1186/s12936-020-03482-4.
Additional file 1. Additional tables ST1-ST5 and additional figures S1-S3.

Abbreviations
ACT: Artemisinin-based combination therapy; AIC: Akaike Information Criterion; aOR: Adjusted odds ratio; G6PD: Glucose-6-phosphate dehydrogenase; GPS: Global positioning system; K+EDTA: Potassium ethylenediaminetetraacetic acid; LLIN: Long-lasting insecticidal net; LM: Light microscopy; MMW: Mobile malaria worker; N: Sample size; NMCP: National malaria control programme; PCR: Polymerase chain reaction; RDT: Rapid diagnostic test; VMW: Village malaria worker; WHO: World Health Organization.

Authors' contributions
AV, LR, BW, and IM conceived and designed the study and were overall responsible for the study. SK, AV, and BW implemented and managed the data collection, with support by SG, NK, and DL, and supervised the sample processing. SK conducted the data collection, supported by data management by MS and TO. TO, MS, SK, AV, LR, and IM designed the questionnaires. MW contributed to the statistical analysis. AP contributed the land cover analysis. MS analysed the data and wrote the first draft of the manuscript. AV, MW, TO, BW, LR, and IM contributed to data interpretation and writing. All authors read and approved the final manuscript.

Funding
This study is part of the International Centers of Excellence for Malaria Research programme "Understanding, tracking and eliminating malaria transmission in the Asia-Pacific Region", funded by the National Institutes of Health, MD, US (grant 1U19AI129392-01) and received additional funding by NHMRC (Australia, GNT1092789). IM is supported by an NHMRC Principal Research Fellowship (GNT1155075). LJR is supported by an NHMRC Career Development Fellowship (GNT1161627). MS is part of the PhD programme of the doctoral school ED393 Pierre Louis de santé publique and supported by the Sorbonne Université (contract n°2695/2017). AP is supported by the Pasteur Institute International Network PhD fellowship programme "Calmette & Yersin". The funding bodies had no role in the design of the study, the collection, analysis, and interpretation of data, and in writing the manuscript.

Ethics approval and consent to participate
This study was approved by the National Ethics Committee for Health Research, Ministry of Health, Kingdom of Cambodia (reference 239NECHR) and by the Institut Pasteur Institutional Review Board (reference 2017-03). Written informed consent to participate in the study was obtained from all participants (or their parent or legal guardian).

Consent for publication
Not applicable.

Competing interests
The authors declare that they have no competing interests.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The spatial signature of Plasmodium vivax and P. falciparum infections: Quantifying the clustering of infections in cross-sectional surveys and cohort studies
Mirco Sandfort 1,2,*, Wuelton Monteiro 3,4, Marcus Lacerda 3,4,5, Wang Nguitragool 6, Jetsumon Sattabongkot 7, Andreea Waltmann 8,9, Henrik Salje 10, Amélie Vantaux 11, Benoit Witkowski 11, Leanne J.
Robinson 8,9,12, Ivo Mueller 1,8,9, Michael White

Abstract
Background: Over the last decades, enormous successes have been achieved in reducing malaria burden globally. In Latin America, South East Asia, and the Western Pacific, many countries now pursue the goal of malaria elimination by 2030. It is widely acknowledged that Plasmodium spp. infections cluster spatially so that interventions need to be spatially informed, e.g. spatially targeted reactive case detection strategies. We introduce the spatial signature method as a tool to quantify the distance around an index infection within which other infections significantly cluster.
Methods: We considered data from cross-sectional surveys from Brazil, Thailand, Cambodia, and Solomon Islands, conducted between 2012 and 2018. Household locations were recorded by GPS and finger-prick blood samples from participants were tested for Plasmodium infection by PCR. We also included cohort studies from Brazil and Thailand with monthly sampling over a year from 2013 until 2014. We calculated the prevalence of PCR-confirmed infections at increasing distance around index infections (and growing time intervals in the cohort studies). Statistical significance was defined as prevalence outside of a 95%-quantile interval of a bootstrap null distribution after random reallocation of locations of infections.
Results: Prevalence of Plasmodium vivax and P. falciparum infections was elevated in close proximity around index infections and decreased with distance in most study sites, e.g. from 21.3% at 0 km to the global study prevalence of 6.4% for P. vivax in the Cambodian survey. In the cohort studies, the clustering decreased with longer time windows. The distance from index infections to a 50% reduction of prevalence ranged from 25 m to 3,175 m, tending to shorter distances at lower global study prevalence.
Conclusions: The spatial signatures of P. vivax and P. falciparum infections demonstrate spatial clustering across a diverse set of study sites, quantifying the distance within which the clustering occurs. The method offers a novel tool in malaria epidemiology, potentially informing reactive case detection strategies.

Declarations
Ethics approval and consent to participate: The observational studies from which data were used list their ethics approval references in the original publications.
Consent for publication: The observational studies from which data were used state the consent for publication in the original manuscripts.
Competing interests: The authors declare that they have no competing interests.

List of posters
• "Spatial heterogeneity and determinants of residual malaria transmission risk in North-Eastern Cambodia", Mirco Sandfort, Thomas Obadia, Saorin Kim, Michael White, Amélie Vantaux, Leanne Robinson, Benoit Witkowski, Ivo Mueller. Retreat of ED393 Pierre Louis de Santé Publique : Epidémiologie et Sciences de l'Information Biomédicale, Saint-Malo, 21-23 Oct 2019.
• "The spatial signature of malaria infection: A new measure to calibrate transmission models and inform trial design of targeted malaria control measures", Mirco Sandfort, Amélie Vantaux, Leanne Robinson, Benoit Witkowski, Ivo Mueller; Michael White. The Malaria Endgame, Keystone Symposium, Addis Ababa, Ethiopia, October 28-November 2, 2019.
• "Pockets of high malaria infection prevalence in villages inside the forest and their microepidemiology: A cross-sectional survey in rural Cambodia", Mirco Sandfort, Amélie Vantaux, Saorin Kim, Thomas Obadia, Soazic Gardais, Nimol Khim, Michael White, …

Annex
Chapter 5 - Supplementary material to chapter 5

Supplementary Material
Detection and characterization of Plasmodium species by LM and qPCR
Microscopic examination
Thick and thin blood films were prepared, stained with 10% Giemsa for 25 minutes, and screened for the presence of malaria parasites using LM with a 100x oil immersion lens.
The number of parasites per 200 white blood cells in the thick film was recorded, and parasite density was estimated from the number of leucocytes per field examined, arbitrarily considering that 8,000 leucocytes were present in 1 µl of blood. Two microscopists read all slides, and in case of discrepancy, a third microscopist examined the slides and the results were combined. A slide was considered positive after two concordant readings, and at least two hundred oil-immersion fields were examined before a slide was declared negative for Plasmodium.

Molecular detection and characterization of Plasmodium species
The detection of Plasmodium spp. was carried out by qPCR following genomic DNA isolation of Plasmodium parasites using the QIAamp DNA Blood Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. Genomic DNA from blood samples of microscopically confirmed P. falciparum, P. malariae, P. ovale, and P. vivax infected patients served as positive controls in all amplifications; sterile water and gDNA from uninfected blood samples served as negative controls to ensure lack of contamination [20]. The qPCR-based detection and characterization of Plasmodium spp. was based on a two-step real-time PCR as previously described [21,22]. Firstly, Plasmodium parasites were detected by a "screening real-time PCR" with genus-specific primers targeting the Plasmodium cytochrome b gene using the Eva Green dye (Solis Biodyne), followed by melt curve analysis of the resulting amplicons from the Bio-Rad CFX96 real-time thermal cycler as described previously [21]. Secondly, positive Plasmodium DNA samples were ten-fold diluted and analyzed for malaria species using nested real-time PCR assays with species-specific primers for P. falciparum, P. vivax, P. malariae, and P. ovale as described by Canier et al. [21]. The presence of P. knowlesi, the fifth human malaria parasite [23], was not investigated in this study.
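The thick-film density estimate described under "Microscopic examination" above amounts to the following (the example counts of 15 parasites against 200 leucocytes are illustrative, not from the study):

$$\text{parasite density}\ [\text{parasites}/\mu\text{l}] = \frac{\text{parasites counted}}{\text{leucocytes counted}} \times 8000, \qquad \text{e.g.}\ \frac{15}{200} \times 8000 = 600\ \text{parasites}/\mu\text{l}.$$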
04117920
en
[ "shs.langue" ]
2024/03/04 16:41:26
2022
https://hal.science/hal-04117920/file/Transcending%20circulations%20of%20southern%20and%20northern%20concepts.pdf
Isabelle Léglise, Sangeeta Bagga-Gupta, Ana Deumert
Transcending circulations of southern and northern concepts: introducing mobile and dialogic perspectives on language

This special issue on Transcending circulations of southern and northern concepts. Towards mobile and dialogic perspectives on language puts the spotlight on the hegemonies and marginalizations of mainstream academic productions and the circulations of concepts in the contemporary fields that are labelled Multilingualism, Applied Linguistics, Sociolinguistics, etc. Transcending the demarcations that have emerged in the Language Sciences broadly conceptualized across time, it particularly attempts to illuminate, through different scholars' engagements, the circulations of concepts inside and across academic circles located in various places on the planet, identified as central or marginal places of production of academic norms, including what is understood as global epistemic circuits. 1 Thus, this special issue attempts to address a new range of questions. The following constitutes a non-exhaustive list:
(1) To what extent is it possible to observe, beyond hegemonic productions, different types of circulations and trajectories, at different scales, including the imaginaries that create centers and margins, as well as how we understand concepts such as global, international, national, northern, and southern?
(2) To what extent can we identify and defend subjugated ideas, proposing norms from the periphery, producing alternative knowledges (in the plural), leading to parallel forms of globalizations?
(3) To what extent are such identified ideas, norms, and knowledges then marginalized, co-opted, appropriated, or re-used by academic disciplines within and across the global south/north?
The late Tope Omoniyi emphasized at an invited symposium that he organized in 2015 ('Illusions and Delusions of the Centre within the Framework of Globalization', at The Sociolinguistics of Globalization Conference, Hong Kong) the need for a debate, that was long overdue, on northern versus southern understandings of 'globalizations' (and thus also modernities). This, he argued, includes interrogating the status of previously colonized territories, territories that are not recognized explicitly as having been colonized, and the status of territories that are or are not seen as being colonizers. In the vocabularies of the late Aimé Césaire, it is only through the 'deeping and the co-existence of all particulars' that a meaningful and rich universal can emerge. It is thus the dialogues between the locally situated and what is glossed as the global or universal that are meaningful. Of particular significance is that scholars with marginalized positionalities across all territories and across time have been forced to take up global and universalizing nomenclature and norms from the global north. This has led to the fact that norms, theories, and methodologies are not just dominant in academia, but are universalizing and constitute what might be understood as lingua-colonialism.
Such a state of affairs plays a crucial role in, and concretely shapes, the reproduction of national, regional, and transnational elites. It also affects the publishing possibilities that scholars have or do not have access to, given the dominant languages of publishing that constitute de facto challenges for writing and publishing, the genre-expectations of such texts, etc. In other words, discourses that have originated in older and newer academic metropoles have become naturalized as 'global science'. While more and more is published, these publications seem to be of less and less relevance to the challenges that humanity and our planet face. Therefore, we ask what imaginaries enable such continuing universalizing reproductions and how 'epistemologically and existentially sustainable' such science can be (Bagga-Gupta forthcoming-b). The concerns that we address in this special issue relate to the increasing body of research across the last twenty years that calls for particularities and situated standpoints, knowledge and imagination [START_REF] Stoetzler | Standpoint Theory, Situated Knowledge and the Situated Imagination[END_REF], the inclusion of southern voices [START_REF] Connell | Southern Theory: Social Science And The Global Dynamics Of Knowledge[END_REF][START_REF] Heugh | A Sociolinguistics of the South[END_REF], and the decolonialization of knowledge itself (Bagga-Gupta forthcoming-b; [START_REF] Mignolo | Epistemic Disobedience, Independent Thought and Decolonial Freedom[END_REF][START_REF] Pennycook | Innovations and Challenges in Applied Linguistics from the Global South[END_REF][START_REF] Deumert | Colonial and Decolonial Linguistics[END_REF]; Deumert & Makoni forthcoming). That being the case, the stronghold of mythical universalizing academic hegemonies means that very few terms proposed by southern scholars (whether they are located within northern or southern spaces), or terms that are grounded in southern contexts, are used or circulate in fields where the language sciences (broadly conceptualized) play a role. This notwithstanding, entanglements between southern and northern academic circles have been documented [START_REF] Kerfoot | Introduction: Entanglement and Orders of Visibility[END_REF]. The marginalization of southern knowledges is also the case within the contemporary fields that are labelled Multilingualism, Applied Linguistics, or Sociolinguistics, etc., either in Anglophone or, even more clearly, in Francophone traditions [START_REF] Léglise | Géopolitique et Circulation des Savoirs en Sociolinguistique du Multilinguisme et de L'éducation au Nord et Dans le Sud Global[END_REF]. Furthermore, older as well as newer concepts and theories that make claims to have global or universal relevance continue to be exported unidirectionally: from Europe or the US to other continents. These older-newer concepts and theories flourish, silencing other equally (if not more) relevant terms and conceptual currents.
Linguists, more than others, it has been suggested, are facing a globalization that is nevertheless marked by an almost complete western/northern hegemony in their own fields and in the multiple fields into which they feed [START_REF] Shi-Xu | Reconstructing Eastern Paradigms of Discourse Studies[END_REF]. By focusing on multiple mobile and dialogic (rather than global or universal) perspectives and their entanglements on a non-universalizing planetary scale, this special issue on Transcending circulations of southern and northern concepts. Towards mobile and dialogic perspectives on language proposes and conceptualizes alternative perspectives that emerge from non-extractive work across northern and southern territories, voices and gazes, and experiences from the south in the north and the north in the south. 2 It thus attempts to go beyond, and contributes to, ontologies and epistemologies that have an important bearing upon both multiple perspectives and northern hegemonies. In this sense, this special issue calls for and contributes towards multidirectional sciences (in the plural) that are of relevance to the multidisciplinary domains of the Language Sciences, where knowledges rooted in western/northern experiences and traditions are simply part of the palette of knowledges. With this as an overarching introduction, we first present a short reflection on the journey of this special issue, then our own ways of framing mobile and dialogic gazing, including our positionalities, before introducing the 'six plus two papers' that make up this special issue. After years of collaboration with colleagues from Amazonian and Caribbean academic circles 3 , regular meetings with colleagues dealing with multilingualism from various places (especially from the Centre for Multilingualism and Diversities Research, University of the Western Cape, and the Centre for Research on Bilingualism in Stockholm), as a member of the Southern Multilingualisms and Diversities consortium (SMDC) 4 , and because of her own experience struggling to navigate within scholarship between several languages and academic traditions and measuring the discriminations and coloniality at play at various levels, Isabelle Léglise was particularly attentive to scientific claims and literature regarding marginalized voices and epistemic violence in Academia. When she was invited in the mid-2010s to write a paper in France on sociolinguistic issues linked to what were called 'exotic fieldworks', she realized how little southern theory and epistemologies, and subaltern, postcolonial, and decolonial voices and traditions of thought, were known among her Parisian colleagues. She set out to make these voices heard within French academia through publications [START_REF] Léglise | Multilingualism and Heterogeneous Language Practices. New Research Areas and Issues in the Global South[END_REF] but also through her monthly seminar on language practices as social practices, which had, at that time, a 15-year history.
Although awareness regarding these issues was high within the small group of close colleagues and students in this seminar, not least because of participants' minoritized backgrounds within French academia, she found the level of awareness within other circles continued to be dismal. Isabelle then decided to address the issue of erasure in itself. To contribute from her positionality to contemporary debates, not least since she had started regularly meeting social scientists working on the circulations of knowledge and academic elites through the Federation of Research Centers on Social Sciences and Global South (F3S), she began working on concepts rooted in southern experiences and their circulations (or erasures) within academic circles, and particularly in Francophone academic circles [START_REF] Léglise | Géopolitique et Circulation des Savoirs en Sociolinguistique du Multilinguisme et de L'éducation au Nord et Dans le Sud Global[END_REF]. At the June 2018 Sociolinguistic Symposium in Auckland, New Zealand, where more scholars from Australia and Asia than usual could attend, two sessions were particularly meaningful for Isabelle Léglise. She approached and discussed her concerns first with Ana Deumert, Alan Carneiro, and Sinfree Makoni after attending their double panel on 'Sociolinguistics and Southern Theory - Voices, Questions and Alternatives', and thereafter with Sangeeta Bagga-Gupta after attending her critical co-authored paper on the genesis and trajectory of Translanguaging across the globe and into Swedish spaces. Ana Deumert's panel included many voices contesting a Euro-centric perspective on organizing academic conferences and regimenting science. The visibly limited number of colleagues from eastern/southern territories attending the 2018 Sociolinguistic Symposium meant that issues regarding western/northern hegemonies could no longer be ignored. Many African colleagues whose papers were accepted, for instance, could not present because they lacked financial possibilities for their travels. Isabelle Léglise invited Sangeeta Bagga-Gupta, Ana Deumert, Sinfree Makoni, and Alan Carneiro to participate in a panel at a conference on 'GlobalizationS and Circulations of Ideas, Knowledges and Norms' that she was co-organizing within the F3S in Paris in September 2019, to get these ideas to travel to Paris. Ana Deumert's experience of the New Zealand conference was one of deep discomfort. As the organizer of the panel 'Sociolinguistics and Southern Theory - Voices, Questions and Alternatives', she had tried for more than a year to find financial support for colleagues from the global south. Although the registration fee was waived, this did not address the challenges of travel costs and accommodation. In the end, she was able to raise funds for accommodation, but travel costs proved to be a stumbling block. As a result, and as noted above, the panel was characterized by too many absent voices, which are, however, made visible in a subsequent publication (Deumert and Makoni forthcoming).
The importance of physical presence at professional events is evident in the fact that this special issue emerged out of in-person conversations at the conference. Yet attending conferences in person can be difficult, if not impossible, for scholars from the global south: 'international' conferences still take place predominantly in northern spaces and charge increasingly unaffordable fees in euros or dollars. The materialities of participation were also at the centre of discussions at the Sociolinguistic Symposium in Ghent (2022), but solutions proved elusive. In addition, one also needs to be cognizant of the fact that the climate catastrophe asks one to avoid, as much as possible, long-distance travel. All this raises an important question: how can north-south conversations take place when taking these challenges into account? Might we need to reimagine our work more radically? This does not mean moving conferences online, an approach that many tried during the COVID-19 pandemic and that has, overall, not been fully successful. Currently, some conferences use hybrid formats which, however, lead to new forms of inclusion/exclusion: those who cannot afford the travel costs and full registration fees often tend to present online and are not part of the important 'side-events' at conferences, for instance the conversations during breaks and in the evenings, conversations which are part of academic engagement and which are important for making connections across academic spaces. Ana has reflected on her positionality in several places: on her political engagement, her experience in the academy, her desire for cross-generational knowledge formation (in Deumert and Makoni forthcoming), as well as on her position in the academy and her intellectual resistance to any grand theories, universalizing statements and the performance of academic expertness and fluency (Deumert forthcoming, Mabandla and Deumert forthcoming). Sangeeta Bagga-Gupta's experiences across time and her nomadic existence, both across spaces and across disciplinary enclaves, made her cognizant early, as a senior (albeit biologically young) scholar in a purportedly global North setting, of her privileged-marginal voice and the potential need to disturb fossilized ways of academia. Unwittingly, she found herself troubling hegemonies in various ways, not least through her publications, which did not get an up-take initially since she was considered 'too multidisciplinary' and, furthermore, did not follow mainstream concepts or references; instead she nurtured a troubling stance with her doctoral students at international workshops and conferences, both in Sweden and in mainstream conferences across the globe, across the Language Sciences, Educational Sciences, Disability Studies and Migration and Ethnicity Studies. Her encounters with colleagues who shared similar concerns in 2014 at AILA in Brisbane and the subsequent formation of the Southern Multilingualism and Diversities Consortium in 2014 in Adelaide brought further home her insider-outsider positionalities everywhere, all the time. Sangeeta's subsequent meetings with Isabelle, Ana, and Sinfree at the 2018 Sociolinguistic Symposium in Auckland, where she had organized a double panel on Deaf Studies and Language Studies, opened up possibilities that lay in creating critical work with others who had formulated similar concerns in different ways. 5 One of the prime reasons she organized this event in Auckland in 2018 was the conference theme: Crossing Borders: South, North, East, and West. She had been experimenting since the late 1990s with creating small-scale workshops and conferences that were multilingual in English, Swedish, Swedish Sign Language, and International Signing, and then working bilaterally with research environments in Mexican and Indian spaces with multilingual presentations. The double panel in New Zealand was multilingual in a mainstream conference setting and resulted in a double special issue of the Deafness and Education International journal in 2019. 6 Sangeeta's epistemological concerns can be gauged in questions like the following: What articulations and sensibilities can be exploited to forge new alliances across continents, across historical trajectories, across disciplinary silos, across language areas and societal sectors, and across academic hierarchies? In what ways can normal-languaging and normal-diversities be articulated in the scholarship to trouble the banal monopoly of the monolingual-hegemonies, the written and oral languaging biases and the white superiority that she was privy to through her work as a senior scholar, mentor, and teacher, as well as the world citizen she considers herself to be?
While not denying the power of face-to-face meetings, Sangeeta's efforts have, over the last 15 years, been augmented both by her scholarly interests in the areas of (multilingual and multimodal) digitalization and by the deployment of digital resources to experiment with and create more inclusive multilingual research events across borders. Identifying resources to enable such events across continents, physically and in digital spaces, is an ongoing long-term effort in her research environment. Even though Isabelle Léglise succeeded in finding financial support for her colleagues to come to Paris in September 2019, she struggled to make the conference she co-organized multilingual. She had to present strong arguments to her co-organizers from other disciplines in the social sciences in favor of accepting papers in different languages. While resistance towards the hegemony of English in academia is very common among French scholars, its less recognized downside, she argued, is the non-circulation of ideas and a compartmentalization of our fields. Her efforts paid off, and English and Spanish were accepted along with French. Isabelle organized and financed simultaneous translations as an inclusive stance that could enable a circulation of ideas. The panel 'Circulation and trajectories of Southern concepts in the fields of Multilingualism, Language and Education', and the discussions when preparing it with her team of young scholars, proved highly interesting. In her view, the Paris conference was successful since simultaneous translations gave access to the circulation of ideas and contestations. Sinfree Makoni, for example, was able to listen to and discuss with Francophone Africans and challenge common assumptions within French academia. A historian discussed the subversive potential of our panel's claims. During the Paris panel, Sinfree suggested the Journal of Multicultural Discourses as a key journal to push our ideas forward. We sent in some first notes as a Special Issue idea. A few months later, the first COVID-19 wave struck the planet. Although some Parisian colleagues found that lockdown was a golden age, away from teaching, where 'everybody published double as much', Isabelle experienced lockdown differently: massive mental overload due to working more than usual, home-schooling her children, hours of remote connections, sometimes with limited internet capacity, in projects with colleagues in Brazil and Cambodia, and issues of mental health impacting her family and friends. Teaching and mentoring also meant helping students from various backgrounds stuck in small rooms with no family around and no financial support from the State. This led to, for example, the creation of a student NGO to provide free food for thousands of students 7 in Paris.
This constitutes a good example of issues well known in the global south that now impacted the northern hemisphere, especially for marginalized populations (see also Sangeeta's introduction in the 2022 double special issue of Bandung Journal of the Global South, where she highlights her predicament in the spaces of affluent Sweden along similar lines; https://brill.com/view/journals/bjgs/9/1-2/bjgs.9.issue-1-2.xml). Ana Deumert's experience of lockdown meant a very heavy load, not only with regard to teaching (which had moved online at great speed), but also with regard to supporting students and colleagues emotionally. The psychological toll of the pandemic was immense: as one was trying to support others, one also suffered, dealt with illness and death as well as the isolation of lockdown. Today, it feels strange to think that just a year ago, we did not travel, we avoided interaction, and we didn't know when it would end. The 'slow emergencies' of racism, capitalism, patriarchy, and the climate catastrophe came into sharp relief during these three years (see Deumert and Makoni forthcoming for a more detailed discussion). COVID-19 also meant a broad shift to digital modes of engagement: while some have embraced these as opportunities for global collaboration, others have found the online space unproductive for the deep political work that needs to be done. And, while working from home did afford space to read, the fact that reading had to happen online created its own challenges: increased screen-time affected eyes and minds. With this pandemic now officially behind us, Ana often asks herself: What have we learned? What will we do differently in the future? Or are we just going back to the super-exploitation of people and the planet? In February 2020, Sangeeta Bagga-Gupta and her team of roughly 15 doctoral students, post-doctoral colleagues and colleagues from other societal sectors in Sweden were working in Mumbai, India, on various projects. In particular, Sangeeta's research environment co-organized its international conference LeaDMe (www.ju.se/ccd/leadme2020) together with partners from the Department of Mass Media at KC College, Mumbai University. After the conference, different sub-teams completed their projects and left for Sweden in a staggered fashion. The sudden and severe lockdown imposed in March 2020, however, stopped Sangeeta from returning. During her several months of severe lockdown (for the first few months she was not allowed to leave the flat where she stayed in a multi-storied building), she became privy to insights that have clearly changed her scholarly priorities. Digitalization enabled her to work seven days a week for the entire period of her lockdown existence in Mumbai.
During this time, she participated in Sinfree Makoni's weekly global forum seminars, read volumes online, enabled through her home library in Sweden, in preparation for the seminars, worked on at least five individual studies and two special issues, and created other academic constellations. She collaborated with Alan Carneiro at this juncture as a reviewer team for a special issue of the International Journal of Multilingualism offering critical insights into an area of her expertise, Deaf Studies (Bagga-Gupta and Carneiro 2021). The double special issue of Bandung Journal of the Global South (2022, 1-2) was curated during the pandemic, enabling the participation of scholars from all corners of the planet, as were her two studies in it (Bagga-Gupta 2022, Bagga-Gupta & Kamei 2022). Working mediated through stable digital resources in a southern territory, in strict lockdown conditions, brought home a number of insights, professional and existential. The morality, poverty, and richness of privileges and marginalizations everywhere on the planet was critically one of these. The severe challenges (and unique possibilities) that the pandemic brought with it meant that we were not able to complete the proposal for the Journal of Multicultural Discourses Special Issue until the start of 2021. The proposal was accepted in April 2021, and we published an open call for papers in June 2021. This was circulated widely and we received a large number of proposals. At that time, we hoped to receive first draft submissions in March 2022. Although we received some, the second and third pandemic waves shaped the continuing trajectory of the Special Issue and many people could not participate. Across the planet, the pandemic affected scholars and academics severely and differently, as our own three experiences are testimony to. As a community we lost colleagues and friends, faced significantly increased workloads due to online teaching and administration, many struggled with issues of health and well-being, and for too many scholars those were also times of economic hardship. The Janus face of privileges and marginalizing effects continued. We are now, in early 2023, finalizing this special issue after many digital exchanges, some online meetings, and our journey as editors, reading and discussing the manuscripts remotely across time-spaces. The final form of the special issue is based on eight papers, this introduction, and a postscript. This special issue deals with reflections grounded in various academic circles and on our own local and mobile experiences from nation-state places as diverse as Angola, Brazil, Cambodia, Canada, France, India, Mexico, Portugal, South Africa, Spain, Sweden, Uruguay, etc.
All three of us, like the authors of the papers in this special issue, live nomadic lives, at least within scholarship. We live and exist in one or more than one nation-state context, and we work regularly across one or more than one other nation-state territories. Furthermore, there is a dialectic that this Special Issue attempts to address. On the one hand, we believe strongly in the need to say from where we speak or from where we gaze, to reflect on our positionalities, and to address local theories and standpoints. On the other hand, we need at a certain stage to say something about global relations of power and hegemonies within academia, and their effect on the local, from the place where we stand but also from what we understand of other voices, in order to initiate and carry out a conversation without silencing other voices or letting our own voices become erased. Offering a non-universalizing stance, mobile and dialogic perspectives on knowledges, grounded in the particularities of local knowledges, enable a polyphony of situated voices and gazes and call for intercultural translations. We identify three main points of convergence between the texts that are part of this special issue, points which are intertwined with one another. First, our special issue deals with the circulation of concepts: in the paper by Isabelle Léglise, sumak kawsay/buen vivir and translanguaging, from southern or marginalized experiences to the north and back; in the paper by Marcelyn Oostendorp, linguistic repertoires, from data from the south to conceptualization in the north and back to southern academic circles; and, in the paper by Raquel Carinhas, Maria Helena Araújo e Sá and Danièle Moore, the circulation of plurilingual education as a concept from the north to the south and new participatory experiences from Uruguay. This Special Issue also revisits old concepts: language, identity, culture, hunger and international in the paper by Sangeeta Bagga-Gupta, and identity and community in the paper by Raquel Carinhas, Maria Helena Araújo e Sá, and Danièle Moore, in light of the multilingual-, boundary- and mobility-turns (in the words of Sangeeta Bagga-Gupta), taking into account subaltern, postcolonial, and decolonial voices (in the words of Isabelle Léglise) or meta-intercultural ontologies and South-South inter-epistemic dialogue (as claimed by Hamza R'boul's paper); it also proposes, following Alan Carneiro's paper, discomfort as a new epistemological tool to rethink marginalized positions.
Second, our special issue deals with the need to fight or deconstruct hegemonies: the sidelining of southern thinking in Marcelyn Oostendorp's paper, the erasure of southern debts and origins through translation and circulation in Isabelle Léglise's paper, and a sharp reminder of marginalizations everywhere across time in Sangeeta Bagga-Gupta's paper. It critically assesses the silencing and excluding of heterogeneous language repertoires, reproducing colonial asymmetries in indigenous education in Mexico, as illustrated by Susana Ayala, Valeria Rebolledo, and Elsie Rockwell, and it also addresses the metapragmatics of knowledge production systems in order to avoid their misrecognition, as in Alan Carneiro's paper.
Third, and through the above, the participants in this special issue contribute to proposing other ways of thinking: as a critical gaze from the south in Marcelyn Oostendorp's paper; as the need to listen to postcolonial and decolonial voices in a polyphonic academic world in Isabelle Léglise's paper; as engaging analytically with global-centric (rather than global, universal) epistemologies in Sangeeta Bagga-Gupta's paper; and as the importance of revoicing our conceptualizations of plurilingual education to include the discourses, multisensory experiences and stories of diversely situated social actors in Raquel Carinhas, Maria Helena Araújo e Sá, and Danièle Moore's paper. In Alan Carneiro's words, this relates to proposing alternative and transdisciplinary epistemologies of diversity and a transhistoric and transtopical way of positioning that offers a potential alternative for bringing about change. In Hamza R'boul's words, problematizing equality and international relations leads to reconciliation among cultures for ensuring a smooth functioning.
The rest of the issue is organized as follows (for space reasons, the first two papers are part of Issue II, this volume):
Alan Carneiro: Following the path of otherwise: subalternized subjects, academic writing, and the political power of discomfort
Hamza R'boul: Epistemological plurality in intercultural communication knowledge
Isabelle Léglise: Circulation of concepts, compartmentalisation and erasures in Western academic circles: sumak kawsay/buen vivir and translanguaging
Marcelyn Oostendorp: Linguistic repertoires: South/North trajectories and entanglements
Maria Bäcke: 'Let the Settlers Set Sail:' The Global North and the Global South in the Literary Works of Angola's Agualusa and Ondjaki and in the Academic Research Drawing on Their Work
Raquel Carinhas, Maria Helena Araújo e Sá, and Danièle Moore: Re-voicing conceptualizations of plurilingual education: 'El plurilingüismo, este concepto de ... ¿cómo se puede decir?'
Sangeeta Bagga-Gupta: Troubling circulating discourses on planet earth. Attending to complexities through a mobile-loitering gaze
Susana Ayala-Reyes, Valerio Rebolledo-Angulo and Elsie Rockwell: Voices silenced by written texts: indigenous languages encountering standardization
Sinfree Makoni, Cristine Severo, and Ashraf Abdelhay: The politics of Southern research in language studies
Notes
1. No single terminology is perfect to grasp the diversity of experiences in the geopolitics of knowledge production as well as the various forms of discrimination within academic circles. If we try to avoid the use of the four cardinal points (north-south-east-west), we must sometimes refer to the terminologies circulating in the literature, such as the global/Global South/south and also the Global/global north/North as a historical provider of norms identified in Western academic traditions. See, among others, discussions in Bagga-Gupta (forthcoming-a), [START_REF] Heugh | A Sociolinguistics of the South[END_REF].
2. This, as some have pointed out, can be called a global-centric stance (see Bagga-Gupta, this issue).
3. On the one hand through her own scientific cooperation in Brazil, French Guiana and Suriname and, on the other, through Creole studies networks such as the SCL (Society for Caribbean Linguistics).
4. A 'group of sociolinguists and applied linguists who work in multilingual contexts around the world and who collaborate with research groups and/or centres with a focus on multilingualism. Our first interest is in the experiences, knowledge and expertise that southern and marginalised communities have of multilingualism and diversity. Our second interest is in how the knowledge and expertise of southern and marginalised communities may contribute to global understandings of diversity. Our third interest is in how knowledge and expertise may be exchanged in reciprocal and respectful ways among marginal and mainstream communities located in southern and northern settings of the world.' (https://webdevtestsite.wordpress.com)
5. The double panel was unique in that it brought together junior and senior scholars from Scandinavian contexts who were deaf and hearing, and wherein all but one participant was a user of a national sign language. Sinfree was one of the two commentators.
6. 2019, 21(2-3): https://www.tandfonline.com/toc/ydei20/21/2-3. Of interest for present purposes is that the journal has a dual editorship handled from the north and the south.
7. On the right to food and access to food in a Hungry France (see [START_REF] Bonzi | Faim de Droits: Le don à L'épreuve des Violences Alimentaires[END_REF]).
04117926
en
[ "shs.langue" ]
2024/03/04 16:41:26
2022
https://hal.science/hal-04117926/file/leglise-circulation-hal%20%281%29.pdf
Isabelle Léglise, email: [email protected]
Circulation of concepts, compartmentalisation and erasures in Western academic circles: sumak kawsay/buen vivir and translanguaging
Keywords: geopolitics of knowledge circulation, circulation of concepts, multilingualism
1. Parts of this article were given as a paper at the conference on 'Globalisations et circulations des idées, des savoirs et des normes', which I co-organised within the Fédération Sciences Sociales aux Suds (2019), and as a plenary at the Sociolinguistic Symposium in Ghent (2022).
Introduction
It is a well-known fact that in the West, scientific disciplines such as linguistics and anthropology developed concomitantly with the description of languages and cultures 1 viewed as exotic [START_REF] Gal | The Boundaries of Languages and Disciplines : How Ideologies Construct Difference[END_REF], in parts of the world which served as a reservoir for Western theorising [START_REF] Comaroff | Theory from the South: Or, how Euro-America is Evolving Toward Africa[END_REF], captured through a violent episteme (Deumert and Storch 2019) and an extractivist process of elicitation and data mining. Simultaneously with the development of nation-states, colonial linguistic descriptions produced 'powerful icons of ethnolinguistic sharedness' [START_REF] Errington | Colonial Linguistics[END_REF] which played a part in the invention of Africa and African languages [START_REF] Mudimbe | The Invention of Africa: Gnosis, Philosophy, and the Order of Knowledge[END_REF][START_REF] Makoni | Disinventing and Reconstituting Languages[END_REF]. Colonised territories were forced to adopt the norms and nomenclature of the West. Educational systems, designed during the colonial period and based on European models, perpetuated colonial ideologies and hierarchies of languages and language varieties (Bamgbose 2000), and continued to disseminate Western forms of knowledge (Makoni and Pennycook 2021). Thus European, and more generally Western, scientific discourse, with its own norms and worldviews, has de facto imposed itself as a 'global science' in the social sciences and in disciplines such as sociology [START_REF] Dufoix | Les enjeux d'une sociologie mondiale nonhégémonique[END_REF]. The tendency to assume that research focused on the WEIRD countries (Western, Educated, Industrialised, Rich and Democratic) is universally valid [START_REF] Henrich | The Weirdest People in the World?[END_REF] has been remarked on repeatedly, as has the tendency to present Western theories, notions and research as ethically sound, pancultural and universally applicable [START_REF] Shi-Xu | Reconstructing Eastern paradigms of discourse studies[END_REF], while at the same time Southern researchers have been assigned to local and empirical domains [START_REF] Mudimbe | The Invention of Africa: Gnosis, Philosophy, and the Order of Knowledge[END_REF][START_REF] Hountondji | The Struggle for Meaning: Reflections on Philosophy, Culture and Democracy in Africa[END_REF].
To quote Dufoix and Macé, the global economy of knowledge in the social sciences is 'still dominated by the West linguistically, academically, financially, editorially, and conceptually, and, within this overall domination, by the English language, which unfailingly produces effects of "peripherisation" and "provincialisation" within the North itself' (2019, 121). This assessment, which relies heavily on the critiques from subaltern, post- and decolonial studies, matches many academic experiences, including my own in multilingual scientific environments, reading and working with colleagues from various institutions around the globe in the field of multilingualism for the last twenty years. As a French scholar engaged in long-term scientific projects both in Amazonia and Southeast Asia, becoming decentred is a necessary experience, as is taking on board alternative theorising and emic perspectives (Léglise 2017) 2 , even though, or because, these Southern theories [START_REF] Connell | Southern Theory: Social Science And The Global Dynamics Of Knowledge[END_REF] or Epistemologies of the South 3 (Santos 2007) are peripheral to the system of knowledge production [START_REF] Medina | Centers and Peripheries in Knowledge Production[END_REF][START_REF] Demeter | Academic Knowledge Production and the Global South: Questioning Inequality and Under-representation[END_REF] and have been the object of erasure on a massive scale. Becoming aware of the silences in Western epistemologies (Mignolo 2009) is one response; identifying interconnections [START_REF] Kerfoot | Introduction: Entanglement and orders of visibility[END_REF] and reappropriations is another; addressing core-periphery problems in academic publishing and its exclusionary effects (Obeng-Odoom 2019) is yet another. None of these responses excludes the others.
2. More than twenty years of companionship experiencing coloniality in French overseas territories, and discussions with French Guianese friends, including local activists and political leaders, have made me very sensitive to Indigenous claims and also receptive to Southern epistemologies. Among many other occasions, the conferences of the Society for Caribbean Linguistics were crucial opportunities for me as a young researcher to first encounter powerful discourses by Caribbean scholars. Years later, regular meetings with colleagues from the Southern Multilingualisms and Diversities Consortium gave me the opportunity to learn from experiences in South Africa, Australia and many other countries.
3. Following Santos (2011, 39) I understand the Global South as 'a metaphor for human suffering caused by capitalism and colonialism on the global level, as well as for the resistance that seeks to overcome or minimise such suffering', a phenomenon that exists both in Western or Northern countries and in Southern contexts.
In the present text I hope to contribute to this field by discussing the circulation of concepts in research on language and society (a subdiscipline identified as sociolinguistics in France), particularly in my own fields, multilingualism and education. I am seeking to go beyond the hegemonic productions of our research fields, and to ask how far concepts rooted in Southern or non-hegemonic experiences are marginalized, co-opted or reused in Western academic circles in general and in France in particular. Epistemological thinking of this sort, whose subject is an active scientific discipline, is not easy to do: in the scientific discipline in which I was trained, the language sciences, epistemological analysis of the 'history of linguistic ideas' is generally practised over the long term [START_REF] Auroux | Histoire des idées linguistiques[END_REF]. However, I still think that epistemological thinking on the contemporary is necessary, since it enables us to question our own place in the chain of production and circulation of knowledge as a whole, and that of concepts and theories in particular. Years of co-writing cross-linguistically 4 and cross-theoretically have made me particularly sensitive to the difficulty of finding shared concepts to communicate cross-culturally, even between Francophone and Anglophone sociolinguistics. As [START_REF] Guilherme | Difference in diversity: multiple perspectives on multicultural, intercultural, and transcultural conceptual complexities[END_REF] show, scientific concepts form complex webs-of-understandings with layers and regions of meanings. Thus, as traditions of thought in sociolinguistics are deeply linked to languages (Léglise 2022), they constitute different 'cultural discourses' which might be analysed through both a transcultural and a crosscultural analysis [START_REF] Shi-Xu | Towards a methodology of cultural discourse studies[END_REF], in search of influence and borrowing of concepts in the first case and of differences and variations in the second. Discussing the circulation of concepts cross-linguistically, as I will, can thus contribute to cultural discourse studies. Although trained in sociolinguistics, I have begun a dialogue with many social scientists over the last decade, particularly through the French Federation of Research Centres working in the Global South, seeking to 'take into account perspectives developed from/on the South to think science' (Dumoulin [START_REF] Kervran | Going South. How STS could think science in and with the South?[END_REF]).
4. Drawing on the conception of comprehension, as articulated by Bourdieu and Culioli, as a particular case of misunderstanding, communication always needs inter-individual adjustments. In my view, named languages, theories and illusions of understanding might constitute obstacles to be overcome to avoid common misunderstandings.
As a consequence, I have been interested in the circulation of knowledge [START_REF] Keim | Circulation of concepts, compartmentalisation and erasures in Western academic circles: sumak kawsay/buen vivir and translanguaging[END_REF] through studies spanning the sociology of elites, intellectual history and reception studies, showing how theories and ideas are circulated across countries and disciplines, mostly by mobile scholars [START_REF] Medina | Centers and Peripheries in Knowledge Production[END_REF][START_REF] Demeter | Academic Knowledge Production and the Global South: Questioning Inequality and Under-representation[END_REF] with a mobile gaze. While ideas flow 'like rivers, from the South to the North, and are then transformed into tributaries which swell into great waves of thought' (Cusicanqui 2012, 116), very few empirical studies focus on the reception of knowledge 'from the South to the North' (but see [START_REF] Brisson | Can the Subaltern Speak (in French)? Reception of Gayatri Spivak in France[END_REF] on the limited reception of Spivak in France, for example). In terms of methodology, I draw here on epistemological analysis of linguistic thought by adopting qualitative and epistemological thinking on the contemporary in my fields of expertise, tracing the archaeology of concepts and their variation in academic texts. I also draw on studies of the circulation of knowledge, by tracing authors' quotations and concepts as they circulate, particularly in publications in sociolinguistics. All this analysis is qualitative, complemented by a systematic analysis of all PhD theses defended in France between 1985 and 2020. The first section of the paper, building on previous studies, addresses the geopolitics of knowledge production in my fields of inquiry, broadly defined as the sociolinguistics of multilingualism, showing firstly the common erasure of research produced by scholars in institutions located in what is described as the Global South and in languages other than English, and secondly the adoption of the Global North as the sole provider of analytic frameworks. In the second and third sections I look at the archaeology and development of two concepts, sumak kawsay/buen vivir and translanguaging, and at how they circulate in these fields in various academic circles. Both examples, I argue, illustrate the circulation of concepts and also erasures and compartmentalisation through language.
The geopolitics of knowledge production and circulation around the sociolinguistics of multilingualism
In the last decades, critical works have opened up a fruitful space of contestation on the global economy of knowledge, although relatively few empirical studies concern the field of the sociolinguistics of multilingualism and education (but see [START_REF] Canagarajah | Nondiscursive' Requirements in Academic Publishing, Material Resources of Periphery Scholars, and the Politics of Knowledge Production[END_REF][START_REF] Bamgbose | Language and Exclusion: The Consequences of Language Policies in Africa[END_REF] and more recent studies cited below) and its exclusionary effect on researchers located in Southern institutions.
Recent publications urge us to decolonise these fields (see among others [START_REF] Pennycook | The Cultural Politics of English as an International Language[END_REF], Makoni 2020, Ndhlovu 2021). At the same time, by including the voices of traditionally excluded, marginalised or silenced populations in both the South and the North [START_REF] Kerfoot | Introduction: Entanglement and orders of visibility[END_REF], scholars situated in Southern institutions are still struggling to make their voices heard in rethinking the sociolinguistics of multilingualism (Heugh et al. 2021). In their introduction to a thematic issue entitled Seeing from the South 5 a few years ago, Tommaso [START_REF] Milani | Seeing from the South: Discourse, Gender and Sexuality from Southern Perspectives[END_REF] note that in reference books introducing sociolinguistics published in English, and disseminated globally through major publishers, postcolonial theorists are never cited, hardly any contributions are associated with Southern institutions, and none deal with theories or data from outside Europe or North America, except for occasional references. These geopolitics of knowledge, already well described in many publications, still privilege Northern perspectives and deter researchers from the South from providing differently positioned interpretations of events and practices that matter to them (Mignolo 2002). I have similarly shown (Léglise 2022) that postcolonial theorists are almost never cited in sociolinguistic research in France. It is as if the numerous works published in postcolonial, subaltern or de-colonial studies during the last decades were non-existent and irrelevant to the study of language in society. I have also shown that contemporary works by scholars situated in Southern institutions simply have no place: in a recent reference book introducing sociolinguistic concepts, 90% of the contributions are from France and none are from outside Europe and North America. Effects of peripheralisation have also been documented in the subfield of multilingualism through the study of citation practices in four academic journals. The study by [START_REF] Liddicoat | Multilingualism research in Anglophone contexts as a discursive construction of multilingual practice[END_REF] shows how the impression that no academic knowledge or tradition exists outside the English-speaking world comes about: 90 to 100% of citations are taken from English-language sources.
5. At the time, the first author was affiliated with a university in South Africa and the second in Singapore, both locations they consider as belonging to the Global South, an expression which 'encapsulates the conflation between geographical positionality and political marginality, as well as captures the complexity of contemporary postcolonial conditions' (ibid.).
Languages other than English, along with their associated epistemologies, are almost invisible, as if they have made only a peripheral contribution to the field of multilingualism and the definition of its theoretical foundations. Research on multilingualism in education also shows that at present, while caught between locally specific demands on the one hand and globalisation on the other, the subject is still strongly influenced by its colonial history, due to the persistence of language policies [START_REF] Bamgbose | Language and Exclusion: The Consequences of Language Policies in Africa[END_REF] as well as the asymmetrical language statuses inherited from colonialism and the resulting language-based social inequalities [START_REF] Migge | Language and colonialism. Applied linguistics in the context of creole communities[END_REF]. [START_REF] Pennycook | Innovations and Challenges in Applied Linguistics from the Global South[END_REF] show that research done on a country in the Global South generally first discusses the theoretical and methodological frameworks developed in the North and then their extension or application to the South. They argue for decolonising university curricula by sensitising students to the historical and political issues of postcolonial conditions, and also by exposing them to the diversity of kinds of knowledge on the subject of language, in order to situate existing Western research on language teaching more clearly and to encourage the development of alternative paradigms which offer a different understanding of multilingualism. To move away from the monolithic vision of languages prevailing in English-language sociolinguistics on multilingualism and education published in Global centres, many concepts have been forged in the last twenty years and widely disseminated, including 'superdiversity', 'crossing', '(poly)languaging' and, in many ways, 'translanguaging'. [START_REF] Bagga-Gupta | Meaning-making or heterogeneity in the areas of language and identity? The case of translanguaging and nyanlända (newly-arrived) across time and space[END_REF] observe that these neologisms appear and are disseminated by academics in the Global North, to be adopted in a significant number of works by 'junior scholars from the Global South' whose research they direct, while 'senior scholars from the Global South' find that they resonate only marginally. [START_REF] Heugh | Diversities, affinities and diasporas: a southern lens and methodology for understanding multilingualisms[END_REF] point out that researchers who claim to have (re)discovered that multilingualism is more than the sum of several languages, each understood as monolingual, present their research, intentionally or not, in a way that is ahistorical and disconnected from the experience of minority and marginalised populations in the Global South. They also note that the work of colleagues from the South is rarely or only summarily cited, giving the impression that two parallel conversations are taking place.
The problem, they argue, is that this new academic fashion not only appropriates and disseminates ideas that have been circulating for a long time in the Global South, presenting them as new discoveries, but also treats multilingualism as if it were a peculiar phenomenon. In English publications that treat multilingualism and education, few circulating terms are clearly identified as rooted in Southern experiences. Among them we can cite 'disinventing languages' [START_REF] Makoni | Disinventing and Reconstituting Languages[END_REF], following the 'invention of Africa' [START_REF] Mudimbe | The Invention of Africa: Gnosis, Philosophy, and the Order of Knowledge[END_REF]. [...] (2) symbiosis with nature, respect for nature, control of the effects of production, (3) economy at the service of society, and (4) the search for balance between material and symbolic aspects of life (economic, political, cultural, ecological).
6. Viteri Gualingua was working with the Sarayaku people in the 1990s; in relation with the Pastaza Indigenous Peoples' Organisation, he 'systematized this concept until it became a theoretical model of welfare and a proposal for social transformation' (Hidalgo-Capitán, Cubillo-Guevara, and Masabalín-Caisaguano 2020).
Although sumak kawsay is said to be untranslatable in languages such as Spanish and English (see quotation 1), it seems that most publications refer to it by its translation in Spanish or English (see quotations 2, 3 and 4).
(1) los significados de la expresión quechua (sumak kawsay) y aimara (suma qamaña) son tan extensos y complejos que resultan prácticamente intraducibles a idiomas como el español o el inglés 7 [Barranquero Carretero and Sáez Baeza (2015)]
7. The meanings of the expressions in Quechua (sumak kawsay) and Aymara (suma qamaña) are so extensive and complex that they are practically untranslatable into languages such as Spanish or English (my translation).
(2) The contemporary notion of buen vivir emerged in the late 1990s from the meeting of ancient indigenous belief systems, the work of critical intellectuals and adoption in the political sphere. Buen vivir is the Spanish translation of the concepts of sumak kawsay in Quechua and suma qamaña in Aymara, as well as similar terms from indigenous languages across the continent 8 (Gudynas and Acosta 2011). The translation is inadequate but serves as a starting point. Literally translated into English as 'good living', this is considered a pale reflection of the original meanings of sumak kawsay. Sumak means full of plenitude, sublime, excellent, magnificent, beautiful, while kawsay is life; to exist in a dynamic, changing, active manner.
[Brown and McCowan (2018: 318)] (3) The purpose of this chapter is to identify the different meanings of Latin American Good Living (buen vivir) and its diverse intellectual wellsprings [Hidalgo-Capitán and Cubillo-Guevara ( 2017)] (4) Good living is a relatively new concept in the social sciences [START_REF] Hidalgo-Capitán | The Ecuadorian Indigenist School of Good Living (Sumak Kawsay)[END_REF]] Analysing these quotations, it seems that not only has the concept travelled through Spanish and then Englishlanguages in which the term always appears first, and is sometimes translated afterbut that it has been coined as a concept via the translation. We may also note that it has been popularised largely through two books by Alberto Acosta [START_REF] Acosta | El buen vivir: una vía para el desarrollo[END_REF][START_REF] Acosta | El Buen Vivir: Sumak Kawsay, una oportunidad para imaginar otro mundo[END_REF] Although it is possible to trace its development quite clearly (see [START_REF] Hidalgo-Capitán | The Ecuadorian Indigenist School of Good Living (Sumak Kawsay)[END_REF], some publications stress its obscure origins and various influences (see for example [START_REF] Altmann | Good Life As a Social Movement Proposal for Natural Resource Use: The Indigenous Movement in Ecuador[END_REF] on the role of German development agencies in the dissemination in Bolivia of what he calls the Léglise, I., 2022, Circulation of concepts, compartmentalisation and erasures in Western academic circles: sumak kawsay/buen vivir and translanguaging, Journal of Multicultural Discourses, vol 17 n°4, 284-297 (doi 10.1080Discourses, vol 17 n°4, 284-297 (doi 10. /17447143.2023.2204840) .2204840) 'Good life', originating with the Aymara elite first described in anthropology). The general impression we get is that it is obscure and complicated. It is as if it had no stable meaning but many variations continuously requiring translation. Moreover, it is clear that the notion travelled through Spanish (either as buen vivir or vivir bien) before being stabilised both in Spanish (as buen vivir) and English (as good living) in various academic and non-academic fields. Looking at its reception in France in academia and particularly in the sociolinguistics of multilingualism, I note that it has not been translated into French or established in France. Example of the travelling concept of translanguaging The second example I would like to focus on is a much more well-known concept in Western academic circles. The term translanguaging became famous via the publications of Ofelia García, a professor at the City University of New York, more than ten years ago (García 2009a;2009b) when referring to 'the practice of bilingualism with no functional separation' of languages in the classroom, unlike a monolingual vision of multilingualism. The term quickly became highly popular in the field of English language teaching. Numerous publications immediately adopted the term, especially in their titles, in reference to Garcia's work [START_REF] Creese | Translanguaging in the Bilingual Classroom: A Pedagogy for Learning and Teaching?[END_REF][START_REF] Wei | Moment Analysis and translanguaging space: Discursive construction of identities by multilingual Chinese youth in Britain[END_REF]Canagarajah 2011). 
The notion has been applied to all kinds of languages and teaching situations, and has rapidly been exported to English-speaking sociolinguistics in general, influencing the field of translation (Baynham, Lee and Lee 2019) as well as the study of everyday behaviour in advertising, urban signage and communication within the family, at work, in health centres and elsewhere [START_REF] Mazzaferro | Translanguaging as Everyday Practice[END_REF]. At the same time the concept was quickly adopted by actors in the world of education, for instance by government agencies, municipalities and inspectors in Sweden (Bagga-Gupta and Dahlberg 2018). As far as I know, the first written occurrence of the term as used by García is in her preface to Makoni and Pennycook's Disinventing and Reconstituting Languages (2007). There she compares what they call translingual language practices to the concept of translanguaging used in English-Welsh bilingual education [START_REF] Baker | Biliteracy and Transliteracy in Wales: Language Planning and the Welsh National Curriculum[END_REF]). The term, taken from the Welsh word trawsieithu, was apparently first used in Cen Williams' doctoral thesis to refer to a pedagogical exercise in which students listen to a lesson in one language and respond in another [START_REF] Williams | Arfarniad o ddulliau dysgu ac addysgu yng nghyd-destun addysg uwchradd ddwyieithog, Bangor[END_REF]. In subsequent publications García acknowledges her indebtedness to Latin American writers (see quotations 5 and 6) for her understanding of the term languaging and of the prefix trans-11 . García and Wei (2013) also made a connection with the decolonial struggles that the term is said to support, in the same way that [START_REF] Mignolo | Local Histories/Global Designs: Coloniality, Subaltern Knowledges, and Border Thinking[END_REF] notion of bilanguaging is said to help correct the asymmetry of languages by denouncing the colonial nature of power and knowledge. ( 284-297 (doi 10.1080/17447143.2023.2204840) Maturana and Varela's theory of autopoesis that argues that we cannot separate our biological and social history of actions from the ways in which we perceive the world [ibid.] In contrast to my attempt here to reconstruct its trajectory and the intellectual debts involved, the vast majority of publications on translanguaging refer purely to authors publishing in the United States or the United Kingdom, sometimes mentioning the Welsh-language origin of the term, but especially citing the two main works that have become most prominent (García 2009a;García and Wei 2013). At the same time many voices have criticised this concept for adopting a hegemonic Global centre perspective (see [START_REF] Bagga-Gupta | Meaning-making or heterogeneity in the areas of language and identity? The case of translanguaging and nyanlända (newly-arrived) across time and space[END_REF] and [START_REF] Heugh | Diversities, affinities and diasporas: a southern lens and methodology for understanding multilingualisms[END_REF]). However, the notion of translanguaging has travelled widely and resonates with the South African experience, described by Leketi Makalela for example. 
He associates it with the concept of ubuntu, stating that when plurilingual students speak different languages in the classroom, this is in fact an advantage, both socially and cognitively, and that this use of translanguaging seems to index the values of ubuntu [START_REF] Makalela | Translanguaging as a Vehicle for Epistemic Access: Cases for Reading Comprehension and Multilingual Interactions[END_REF]. Borrowing the notion of Sankofa from Ghana, he argues that we should draw on pre-colonial societies for models and solutions based on African experience. He offers the example of the Limpopo Valley and the Luanga and Mapungubwe regions (in present-day Zambia) in the eleventh and twelfth centuries, to show that movement among different communities created a language continuum which made it possible to communicate and enable civilisation to develop in the region. On the basis of this philosophy he proposes to construct plurilingual pedagogical practices drawing on translanguaging ubuntu, in order to escape from the monolingual bias which continues 284-297 (doi 10.1080/17447143.2023.2204840) trends and the publication of our work some years later (Alby and Léglise 2018). But for us the use of the concept did not mean adopting a new idea. For the past twenty years we have been concerned with how the plurilingualism of pupils in classrooms is generally erased and only sometimes taken into account [START_REF] Alby | L'enseignement en Guyane et les langues régionales, réflexions sociolinguistiques et didactiques[END_REF], based on how teachers deal with it and on the use of language alternation for pedagogical purposes. French-language research on the construction of knowledge through the use of multilingual resources in the classroom is not new in France and Switzerland [START_REF] Moore | Bouées transcodiques en situation immersive ou comment interagir avec deux langues quand on apprend une langue étrangère à l'école[END_REF][START_REF] Castellotti | Alternance de langues et construction des savoirs[END_REF][START_REF] Gajo | Interactions et acquisitions en contexte. Modes d'appropriation de compétences discursives purilingues par de jeunes immigrés[END_REF]; it draws especially on the didactics of plurilingualism [START_REF] Billiez | De la didactique des langues à la didactique du plurilinguisme : Hommage à Louise Dabène[END_REF]; see [START_REF] Moore | Introduction -French voices on plurilingualism and pluriculturalism: theory, significance and perspectives[END_REF] for an introduction) and on plural approaches to languages and cultures [START_REF] Candelier | Approches Plurielles, didactiques du plurilinguisme : Le même et l'autre[END_REF]). This research is also based on a conception of the plural competences and repertoires of plurilingual individuals that is locally well established in the discipline [START_REF] Coste | Compétence plurilingue et pluriculturelle[END_REF], although the Anglophone sociolinguistics literature treats it as a recent innovation (for an illustration see Blommaert and Backus 2013). The complex example of trawsieithu/translanguaging shows not only erasures and circulation in Western and Southern circles but also several compartmentalisations through language. 
Conclusion We have traced the emergence, revival, interconnections and reappropriations of two concepts emerging in the 1990s and rooted in non-hegemonic circles: sumak kawsay through the work of a Kichwa-Amazonian anthropologist (Carlos Viteri Gualinga, working with the Sarayaku people), and trawsieithu through the work of 284-297 (doi 10.1080/17447143.2023.2204840) scholars in bilingual education (Cen Williams, Dafydd Whittall and Colin Baker) with Welsh, a minority language in the United Kingdom. Both examples show firstly (more or less) circulation from one language to another, from one field to another and from one context and academic location to another. Secondly, both also exemplify erasure: the erasure of their roots (and particularly the partial erasure of the origin language sumak kawsay and trawsieithu, while buen vivir or good living and translanguaging are circulating), although intellectual debts may be acknowledged at some point, and the erasure of the names and inspirations (as in trans-and languaging linked to Latin American experiences) of scholars located in Southern or peripheral institutions. Thirdly, this is particularly obvious in citation practices, where only some prominent books (such as [START_REF] Acosta | El Buen Vivir: Sumak Kawsay, una oportunidad para imaginar otro mundo[END_REF] or Garcia and Li Wei To assess the archaeology of current concepts and trace their circulation and erasure in academic fields associated with sociolinguistics in France is a vast project, one that I have scarcely begun; but mapping their circulation across a number of locations in the academic worldnot only their spread outwards from former colonial 284-297 (doi 10.1080/17447143.2023.2204840) powersmight offer an alternative, non-binary view of the globalisation of knowledge in language and society, in which a polyphony of multiple voices, academic traditions and discourses can be heard by those who are eager to listen. and African languages and the notion of 'linguistic citizenship'(Stroud 2001) which has been disseminated widely(Williams et al 2022). I focus in the next two sections on two other concepts, the first clearly identified as rooted in Southern experiences and the second mostly identified with Northern experiences and look at their circulation and reappropriation.Léglise, I., 2022, Circulation of concepts, compartmentalisation and erasures in Western academic circles: sumak kawsay/buen vivir and translanguaging, Journal of MulticulturalDiscourses, vol 17 n°4, 284-297 (doi 10.1080Discourses, vol 17 n°4, 284-297 (doi 10. /17447143.2023.2204840) .2204840) Example of the travelling concept of buen vivirThe first example I would like to take has travelled as a notion as buen vivir, but comes originally from Quechua sumak kawsay and Aymara suma qamaña. It was forgedor revealed to a broader audiencein the 1990s through the work of the Kichwa-Amazonian anthropologist Carlos Viteri Gualingua 6 and began to be used as an alternative to development discourses in the 2000s. Buen vivir entered the Constitution of Ecuador in 2008 and that of Bolivia in 2009. 
Outside of anthropology (Viteri Gualinga 2002), it was used in activist and political spheres before entering the fields of philosophy, the social sciences and communication, and more recently education (see for example Rodríguez Cruz (2015) regarding its potential for intercultural bilingual education, Brown and McCowan (2018) for its potential for education for development rooted in a comprehensive and contextualised philosophy of living, and Angel Alvarado (2021) for decolonial pedagogies). According to Yánez Cossío (2012) in Quechua kawsay means a living entity, including humans and nature, with energy going through space and time, and sumak refers to a concrete realisation of sumay, including aesthetics and spiritual and physical harmony. Conjoined, the terms refer to holistic thinking. Sumak kawsay combines four principles: (1) social justice for humanity, more egalitarian social relations, equal access to the means and redistribution of production, and particularly through the translation of the latter. El Buen vivir: Sumak Kawsay, una oportunidad para imaginar otros mundos has been translated into many languages. If the Spanish concept Buen vivir generally remains cross-linguistically in the various translations of the title of the book as a loan-word (although sometimes explained in the subtitle, as in Dutch: Latijns Amerikaanse Filosofie over Goed Leven), the original term Sumak Kawsay disappears in the translations of the book title, as in French Le buen vivir: pour imaginer d'autres mondes. (2013)), published twenty years later and disseminated widely, are mentioned in a very prototypical and iconic way: a single quotation illustrates the concept. Finally, both examples show compartmentalisation in (access to) science through ex-colonial languages and traditions of thought or academic discourses. These constitute good examples of what cultural discourse studies call 'transcultural borrowings' of concepts (Shi-xu 2022). I showed particularly the way concepts rooted in non-hegemonic circles are appropriatedthrough translation and partial erasure of their rootsin Western academic circles and recreated through iconicity. Table 1 : 1 The following table shows how often Sumak kawsay and buen vivir are mentioned in PhDs in France defended between 1985 and 2020. 9 Number of PhD theses mentioning both terms Their absence in PhDs in sociolinguistics is particularly meaningful regarding both erasure and language compartmentalisation in science. They may be used in the field of education in France, but it seems essentially limited to activist circles and blogs and not to have gained a real footing in academia.Léglise, I., 2022, Circulation of concepts, compartmentalisation and erasures in Western academic circles: sumak kawsay/buen vivir and translanguaging, Journal of MulticulturalDiscourses, vol 17 n°4, 284-297 (doi 10.1080Discourses, vol 17 n°4, 284-297 (doi 10. 
/17447143.2023.2204840) .2204840) All academic disciplines Sociolinguistics sumak kawsay 40 0 buen vivir 103 1 10 10 Lewis, Jones and[START_REF] Lewis | Translanguaging: origins and development from school to street and beyond[END_REF] discuss the origins of the notion, which was introduced by Cen Williams and his colleague Dafydd Whittall during a training session for school head Léglise, I., 2022, Circulation of concepts, compartmentalisation and erasures in Western academic circles: sumak kawsay/buen vivir and translanguaging, Journal of MulticulturalDiscourses, vol 17 n°4, 284-297 (doi 10.1080Discourses, vol 17 n°4, 284-297 (doi 10. /17447143.2023.2204840) .2204840) 5) I try to theorize translanguaging by reflecting on how the concept emerged for me as a US Latina, born in Cuba and raised in New York City. I, thus, draw In one sense via the notion of cultural transculturation introduced by Ortiz in 1940, and in another sense with respect to the 'trans-formative' and transcendent power of translanguaging. , I., 2022, Circulation of concepts, compartmentalisation and erasures in Western academic circles: sumak kawsay/buen vivir and translanguaging, Journal of Multicultural Discourses, vol 17 n°4, Léglise mostly on Latin American scholarship, and specifically the work of Chilean biologists Humberto Maturana and Francisco Varela, the Cuban anthropologist Fernando Ortiz, and the Argentinean cultural theorist Walter Mignolo. [García and Leiva 2014, 201] (6) It is Maturana and Varela's concept of languaging that shapes my understanding as a Latin American of translanguaging. Languaging is directly linked to teachers. It was first translated into English as translinguifying, but in the end translanguaging was adopted following a discussion between Williams and Baker. They claim that it was the third edition of Baker's work Foundations of Bilingual Education and Bilingualism, in 2001, which 'made the term internationally known' (2012: 645). 11 Table 2 : 2 Léglise, I., 2022, Circulation of concepts, compartmentalisation and erasures in Western academic circles: sumak kawsay/buen vivir and translanguaging, Journal of MulticulturalDiscourses, vol 17 n°4, 284-297 (doi 10.1080Discourses, vol 17 n°4, 284-297 (doi 10. /17447143.2023.2204840) .2204840) to dominate official language practice in South Africa and which he believes creates tensions between actual practice (where the use of one language is incomplete without the practice of others) and expected monolingual practice. Number of PhD theses mentioning translanguaging compared to , I., 2022, Circulation of concepts, compartmentalisation and erasures in Western academic circles: sumak kawsay/buen vivir and translanguaging, Journal of Multicultural Discourses, vol 17 n°4, It seems to me that the reception of the notion of translanguaging in France is limited and only beginning to appear. Quantitative analysis of PhD theses shows that few of them mention the term: All academic disciplines Linguistics departments Plurilinguisme 731 350 Multilinguisme 680 300 Translanguaging 37 28 plurilinguisme and multilinguisme In 2013 Sophie Alby (who is involved in teacher training at the University of French Guiana) and myself made use of the term at a conference in Belgium on 'translanguaging and plurilingual resources in the classroom', which dealt with precisely the phenomena we had been working on for years. 
It became clear to us that we had to adopt the dominant terminology then in circulation in order to be heard; this produced an immediate understanding of the phenomena we wished to deal with, which was not true of our endogenous terms, in particular 'heterogeneous language practices', a concept I work with, derived from a notion introduced more than forty years ago (Boutet, Fiala and Simonin-Grumbach 1976), which non-Francophone scholars find opaque because it does not fit into frameworks familiar to them. The adoption of this dominant terminology led to a rare discussion of Francophone versus non-Francophone Léglise Léglise, I., 2022, Circulation of concepts, compartmentalisation and erasures in Western academic circles: sumak kawsay/buen vivir and translanguaging, Journal of Multicultural Discourses, vol 17 n°4, Léglise, I., 2022, Circulation of concepts, compartmentalisation and erasures in Western academic circles: sumak kawsay/buen vivir and translanguaging, Journal of Multicultural Discourses, vol 17 n°4,
04118044
en
[ "shs.hist" ]
2024/03/04 16:41:26
2023
https://hal.science/hal-04118044/file/UN%20GENTILHOMME-CAMPAGNARD%20ENTRE%20L%E2%80%99HISTOIRE%20ET%20LE%20CREPUSCULE%20Le%20livre%20de%20Raison%20de%20Philippe%20Tamizey%20de%20Larroque%20%281889-1898%29.pdf
Adolphe Magen, Jules Serret, Alexandre Tamizey de Larroque, Pierre [...]

[...] is destined for deposit [...] from which Tamizey suffered at the end of his life. They can also be attributed to Tamizey's convictions: hostile to the modernising of the spelling of old texts when it came to editing them, he evidently also kept to the old spelling, quite simply, to go on writing French at the end of the 19th century (6). This [...]

(6) Regarding the collection formed by Fauris de St-Vincens, who had yielded to the temptation of modernisation, Tamizey wrote that it 'is to manuscripts what the scourge of whitewashing is to ancient monuments': [A. P. Baquier]

INTRODUCTION

Le Livre de raison d'un campagnard (8) is the title that Philippe Tamizey de Larroque (1828-1898), eminent editor of texts and a famous, revered historian of the 16th and 17th centuries, and of the South-West of France most particularly (9), gave to the journal he kept during the last nine years of his life, from 14 July 1889 until 10 May 1898 (10). By this title he laid claim to archaism. In substance as in form, this text was indeed conceived in imitation of the livres de raison whose historical interest Philippe Tamizey de Larroque had, as a pioneer, shown and proved (11). [These livres de raison] were register-journals of a purely private order. They contained facts of domestic bookkeeping and of the management of family property. Through the regular record of births, marriages and deaths and their attendant circumstances, they also gave an account of the vicissitudes of the existence and fortunes of the [families...]

(8) Preserved in the Archives départementales de Lot-et-Garonne, under shelfmark 1 J 804, in the form of a 227-page typescript copy, collated against the original in the 1950s by Claude Maisani: Maisani (Claude), « Sur Philippe Tamizey de Larroque », Revue de l'Agenais, 1961, p. 11-29. - Bourrachot (Lucile), « La fin du pavillon Peiresc à Gontaud », Revue de l'Agenais, 1999, p. 15. - Serin (Pierre), « Quelques anecdotes autour de Philippe Tamizey de Larroque », Revue de l'Agenais, 1999, p. 19-37.

(9) France (Anatole), Le crime de Sylvestre Bonnard [published in 1881], Paris, Calmann-Lévy, n.d., p. 147. In a passage of this book, two young archivist students chat about their studies in the garden of the rue Paradis-au-Marais. Gélis, in his third year at the École des Chartes, is preparing a thesis on the Monasticon gallicanum: 'His friend asked him whether he knew all the manuscript and printed documents relating to his subject... They spoke first of the original sources... Then they came to the works of contemporary criticism... "Have you read," said Boulmier, "Tamizey de Larroque's article in the Revue des questions historiques?" ... "Yes," replied Gélis, "I found useful pointers in it".' - Clemens (Jacques) ed., « Hommage à Tamizey de Larroque », Ville de Marmande, 1975. - Peyrous (Bernard), « L'oeuvre scientifique de Tamizey de Larroque », Revue française d'histoire du livre, n° 74-75, 1992, p. 219-234.

(13) Tamizey de Larroque (Ph.), « Mémoire sur le sac de Béziers dans la guerre des Albigeois et sur le mot : « Tuez-les tous ! » attribué au légat du pape Innocent III », Annales de philosophie chrétienne, t. VI, 5e série, 1862, note 1, p. 9: 'Why did M. Sabatier not see fit to consult the legate himself on this point?'

(14) Tamizey de Larroque (Ph.), Livre-Journal de Pierre de Bessot, Périgueux, 1893, p. 5.

(15) '...I have heard him indignantly compare a volume without notes to the cold bareness of a countryside without trees, of a desert without greenery... A little story comes back to me. Magen was leafing through the first volume of the Lettres de Jean Chapelain, in the magnificent large-paper copy which the Imprimerie Nationale had just sent me: all at once he exclaimed, putting on his most serious air: "You are dishonoured!" I thought some enormous error had been discovered and, quite anxious, I asked: "Why so?" "Why?" he replied, laughing at my alarm, "because here is a page without notes!"': Tamizey de Larroque (Ph.), Adolphe Magen 1818-1893, Agen, Lamy, 1894, p. 5-6 (note).

Thus, by 1896, he no longer hesitated not only to give the notes as much room as the text itself, but even to put himself complacently on stage, straying from [...] The Souvenirs littéraires, a text now lost, retraced the author's childhood and youth up to 1851 (18). Though he abandoned this first autobiographical essay, Philippe Tamizey de Larroque evidently remained haunted by this form of writing. The writing of the Livre de raison certainly owes something to the example of his great friend, the comtesse Marie de Raymond. A genealogist, as devoted to provincial history as he was, she was the benefactress of the Société académique d'Agen, whose distinguished members she received in her salon and among whom Philippe Tamizey de Larroque was particularly assiduous (19). She had already, in 1885, composed her own memoirs and had read them to him (20). Yet the Livre de raison apparently could not suffice for Philippe Tamizey de Larroque.

(16) Tamizey de Larroque (Philippe), Deux jardiniers émérites, Peiresc et Vespasien Robin, J. Remondet, Aix-en-Provence, 1896, p. 9-10. The anecdote is taken up in an article in the newspaper Le Soleil copied into the Livre de raison under the date of 18 November 1896: 'As I held out my big indiscreet hand to the lady in question, very young and very pretty, she told me that my hero interested her only moderately and that she preferred to keep for her poor the sum she would have had to give me. In vain I pressed the various merits of Peiresc... To each of my appeals my witty interlocutor opposed a piquant refusal. It was becoming hopeless, and I was about to resign myself to beating a retreat, when suddenly an idea, a "genial" one as we say nowadays, lit up my brain... Madame, I said to her, remembering one of her innocent passions, you who adore the tuberose, perhaps because you find in it something of your own whiteness and your own perfume (an old man has little right to permit himself a gallant familiarity!), will you refuse your obol to the great amateur who acclimatised that magnificent flower among us? Oh! if that is how it is, she answered at once, I resist no longer; and, smiling, she handed me a small gold coin. Ought I not to restore ill-gotten gains? I humbly submit my scruples to the theologians who will read my little story...'

(17) First because the Livre de raison of Philippe Tamizey de Larroque takes in only a chronological fraction of his life, limited to the last decade of a seventy-year existence. It is therefore silent on the years of his youth and education, but also on most of his adult life.

(18) Henri Tamizey de Larroque recopied this manuscript text and passed it on to Léonce Couture, who used it in writing part of the obituary notice of Philippe Tamizey de Larroque for the Revue de Gascogne: Couture (Léonce), « Notice nécrologique de Philippe Tamizey de Larroque », Revue de Gascogne, t. 39, 1898, p. 499.
After beginning it in mid-July 1889, Philippe Tamizey de Larroque composed, in 1891 and 1892, commemorative booklets on the life and work of friends and colleagues, indeed masters, in history who had recently died: Jules Delpit of Bordeaux (1808-1891) and Adolphe Magen of Agen (1818-1893) (21). This gave him the occasion, while stirring up memories, to say more about his feelings and about the vicissitudes of his own existence. After the shock of the tragedy of the summer of 1895, another dike gave way and let escape, a little further still, the flood of confidence. In the introduction to « Le Maréchal de Biron et la prise de Gontaud en 1580 », published in 1898, and again in the collection of articles entitled Monuments et portraits agenais, Tamizey thus returns to the genesis of his vocation as a historian and, a rare thing, even mentions what he had until then always concealed or only touched upon: the ten years during which he was the first magistrate of his commune (22).

(19) Lauzun (Philippe), La Société académique d'Agen, Picard, Paris, 1900, p. 233-235.

(20) Tamizey de Larroque (Ph.), Madame la comtesse…, op. cit., p. 11.

(21) Tamizey de Larroque (Ph.), Jules Delpit, notes biographiques et bibliographiques, Périgueux, Imprimerie de la Dordogne, 1892 (extr. du Bulletin de la Société historique et archéologique du Périgord). - Tamizey de Larroque (Ph.), Adolphe Magen, Agen, Vve Lamy, 1894, p. 3.

Undeniably, just as his tall frame was cramped, while his pronounced stoutness spilled hopelessly out of the tight clothes of a notable of the late 19th century (the photographs bear witness to it) (23), [...]

[...] acquaintance (10 July 1893). As for the most recent contemporaries, Tamizey cites only some verses of Charles Nodier in literary history. This, it would seem, brought him to history proper, since he came to the point of never putting down Bouillet's dictionary (27), which he learned practically by heart. Curiously, then, Tamizey's vocation as a historian would be late at the very least, and would even amount to a second choice. According to the same source, Tamizey de Larroque left the collège without sitting the baccalauréat. That was not rare at the time, it is true. Tamizey confessed, moreover, a deep aversion to mathematics, which may explain such a gap (28). The young man is said to have gone up to Paris at the end of the 1840s, on the pretext of research in the libraries of the capital, having obtained his father's permission and a provision of 400 francs meant to cover a stay of a few weeks. Tamizey in fact spent several months there, in what looks very much like a bohemian life: 'for 15 francs a month he had a dark closet under the eaves: 1.80 m long (he himself measured 1.84 m): his head touched the ceiling, and he could walk about only stooped... So much for the lodging; here is the board: at noon, two sous' worth of bread and two sous' worth of redcurrants, nibbled, alas! devoured on a bench in the Luxembourg, the whole washed down with the free water of the Medici fountain... And all day long digging away, copying, amassing materials. In the evening he ate at a boarding-house for 1 franc 10 centimes.' Two or three times a week relatives invited him. He then displayed a spectacular appetite: 'His uncle, the general, though a big eater himself, teased him without making him miss a single bite. It was [...]'

(24) Dezeimeris (R.), La reconstitution des vignobles dans le canton de Cadillac, imp. G. Gounouilhou, Bordeaux, 1900, réed. Association St-Blaise-Cadillac, 1999, p. 5.

(27) Bouillet (Marie-Nicolas), 1798-1864, former student of the École normale supérieure, teacher, headmaster and inspector of the Académie de Paris, author of the standard encyclopaedic dictionary of history of the 19th century, the Dictionnaire universel d'histoire et de géographie, whose first edition dates from 1842. Tamizey contributed to the enrichment of this first edition, and the author pays tribute to him in the 20th edition of the work (1864).

In any event, Philippe Tamizey de Larroque's father, Alexandre, resigned from the mayoralty of Gontaud in February 1848, in order to [...]

The political troubles marking the end of the reign of Napoleon III prevented the realisation of the great projects he had for Gontaud: the raising of the town to the rank of chef-lieu de canton and the purchase of a splendid town hall, fit to house more worthily the municipal archives he had begun to inventory: Couture (L.), op. cit., p. 501.

Brelot (Claude-Isabelle), La noblesse réinventée, nobles de Franche-Comté de 1814 à 1870, Annales littéraires de l'Université de Besançon, 1992, t. 2, p. 848, on the possession of a library [a mark of recognition, if not a patent of nobility]: 'Of all collections, libraries are the most common. Much more widespread than portraits of ancestors, they embody, like family property, fine furniture and silverware, the cultural profusion in which the former second order of Franche-Comté lived... Like the billiard room, the library became one of the constituent parts of the château: no château without a library. It thus testifies to the universality of an at least rudimentary culture. Better, it becomes one of those "titles of cultural nobility" without which genuine assimilation is impossible. It is, among other things, by the possession of a library that the second order of the Ancien Régime measured the quality of the new nobilities. It embodies all the factors of their assimilation: studies, culture, personal talents, scientific competence, even patrimonial traditions...'
(51) Tamizey de Larroque (Ph.), Joseph Gonin et le vignoble de Saint-Joseph, Imprimerie Vve Lamy, Agen, 1883, extr. de la Revue de l'Agenais.

(52) op. cit., in Duby (Georges) ed., Histoire de France, Larousse, 1987, t. III, p. 196-197 (de Ricard, Fourès): 'In fact, what these currents have in common lies in a certain nostalgia for the past, for the age of craftsmanship and traditional agrarian life. On the right as on the left, it is another way of refusing economic change. The cultural originality of certain provinces is not limited to their linguistic diversity. Throughout the Midi, the passion for public affairs expresses not only a political position but also a culture whose contours are defined by the multiplicity of associations, the taste for association, and the taste for and love of opera... In Paris itself, the tastes of the man of the world do not coincide with those of the average bourgeois. Long-standing membership of the aristocracy, the feeling of belonging hereditarily to the ruling classes, gives freedom with regard to that culture which, for others, confers the right of citizenship among the bourgeoisie. Snobbery encourages the new cultural forms which find no support in the mass of the bourgeoisie.'

Bourgeat (chanoine Paul), Léonce Couture, érudit gascon, à propos du cinquantenaire de sa mort (1902-1925), Auch, imprimerie F. Cocharaux, 1953, p. 10. In this, even if his conception of history was not free of ideological intentions, Beaucourt's project was innovative in France: Poulat (Émile), Histoire, dogme et critique dans la crise moderniste, Albin Michel, 3e réed., 1996, p. L.

(72) Carbonell (Charles-Olivier), « La naissance de la Revue Historique », Revue Historique, avril-juin 1976, p. 518. - It [the Revue des questions historiques] was a place of sociability, of morality and of innovation. It allowed the spread of the historical method in rather hostile milieux (intransigent Catholics). It was at the heart of the debates that divided Catholic intellectuals: naturalism, modernism, nationalism: Poulat (É.), op. cit.; Laplanche (François), La Bible en France, entre mythe et critique XVIe-XIXe siècles, Albin Michel, 1994. It renewed religious history; it opposed official history and provoked against itself, somewhat excessively, the reaction of the positivist school, as is shown by Larcher (Laurent), « Radiographie de la Revue des questions historiques », in La Revue des revues, Revue internationale d'histoire et de bibliographie, n° 23, 1997.

(73) Revue des Questions Historiques, 1866.

(74) Annales de philosophie chrétienne, Ve série, t. III, 1861.

[...] the outbreak of the Dreyfus Affair. Yet not a word is uttered about these events, which were tearing the French apart. Admittedly, the affair became truly public only after the tragedy of the summer of 1895, which brought a definitive rupture not only in Tamizey's activities but in his very existence, which it well and truly broke. After 9 July 1895 he visibly withdrew into himself. Lamenting his misfortune and the irreparable losses that went with it occupied him through the whole end of 1895 and again in 1896 and 1897, that is, precisely when the affair was really becoming the Affair. It is quite noticeable that Tamizey de Larroque no longer makes any mention of the newspapers and other publications passing through his hands, as he once did. He also notes, on several occasions, the discomfort caused by his ailing eyes. To judge from the photographs taken at the end of his life, he evidently suffered from a cataract, which was operated on in the last months of 1897.

(75) Notice sur le général Delmas de Grammont, impr. de Soye et Bouchet, Paris, 1862. In-8°, 8 p. - A booklet printed in 100 copies and not put on sale: Jacques-Philippe Delmas de Grammont, born at La Sauvetat-en-Dropt in 1796, died at Miramont-de-Guyenne in 1862. A cavalry officer, he served in the African campaign and became general of division in 1853. He was deputy for the Loire in the Legislative Assembly (1849), where he supported the policy of the Élysée and, in 1850, carried a law for the protection of animals. See the Livre de raison, 4 October 1897 and 10 May 1898.

[...] in any case, little success in this project (96). Precisely, was this a way for him to take a stand for or against Dreyfus? For Colonel Picquart or for Colonel Henry? Even if it is impossible to decide, the question deserves to be raised. Indeed, even if he had taken the side of [...]

Tamizey nevertheless had the generous, open-handed fibre of the greatest lords. He showed it by refusing his first wife's fortune: he refused to have called to her deathbed the notary she was asking for in order to settle her estate in his favour, and he refused again to benefit from the testamentary dispositions, bequeathing him all her property, which she had drawn up before the tragic childbirth that cost her her life. Tamizey returned the dowry to her family down to the last sou, not even accepting from his brother-in-law an indemnity of 18,000 francs which the latter offered him to reimburse the wedding presents, a year's residence, the journeys and sundry expenses (103).

[...When we published the Mémoires of] Jean d'Antras [that is, in 1880] 'the expense was greater than we had thought. My dear collaborator was then a poor country priest, and I was an unfortunate landowner already cruelly stricken by phylloxera...': Tamizey de Larroque (Ph.), La comtesse Marie de Raymond, impr. G. Foix, Auch, 1886, p. 12.

In the evening of his life, as he finished his Livre de raison (this is the penultimate note he wrote, on 8 May 1898; he died in the night of 25 to 26 May following) [...]

(115) Tamizey de Larroque (Philippe), « Notes et documents inédits relatifs à la famille de Lau », Revue de Gascogne, 1877, p. 41.

[...] an appetite for publication still to be satisfied (7 March 1890, 7 December 1893), and he prided himself on the mass of his works (30 December 1891). In any event, two great phases can be distinguished in his works (which dealt first with the late Middle Ages and then settled on the 16th and 17th centuries). First, that from 1856 to 1873, during which he contributed above all to the learned journals of Paris and of the provinces (essentially of the South-West). The end of this period coincided with the end of his term as mayor and of his six-month stays in Paris (121). It was certainly a turning point in his life: maturity and even, practically, by the standards of his time, the entry into old age, since he was then 45 years old. But reverses of fortune perhaps also struck him at this moment, constraining him to a more modest provincial style of life and to more lucrative work. In fact, the documentation he had accumulated, notably in the capital (122), like the facilities which the Bibliothèque nationale then offered (123), allowed him to embark on a new type of work.
So, from 1873 onwards, he undertook great epistolary publications concerning major intellectual figures of the early 17th century for the official collection of the Documents inédits sur l'histoire de France: Guez de Balzac (1873), Jean Chapelain (1880-1883) and above all Nicolas-Claude Fabri de Peiresc (1888-1890) (124), whose letters particularly held Tamizey's attention:

'...They are remarkable in several respects. One of their chief merits is their amiable simplicity. One would think it a familiar, savoury conversation. Naturalness flows there in torrents, like a living spring. Never a studied word, never a pretentious image! In this honest and agreeable prose the character of the writer is reflected... Another great merit of the letters I have the honour of publishing is the extreme interest they present. Peiresc, whose mind was so broadly open and who might be nicknamed the most curious of men, tackles in his correspondence with the Dupuy brothers, so curious themselves, the most diverse subjects. Everything attracts him, and his encyclopaedic knowledge, drawn not only from immense reading begun in childhood but also from numerous journeys and from fruitful conversations with most of the eminent men of Europe, allows him to treat, almost playfully, so to speak, questions of every kind. It is a pleasure to see the marvellous ease with which he occupies himself in turn with ancient history and contemporary history, with [...]'

[...] Bibliophiles). It listed only 71 of his principal publications. In 1887 Jules Andrieu took up the task again in his Bibliographie générale de l'Agenais (Paris, t. II; t. III (supplément), Paris, 1891, p. 163-167). Finally, Jules Momméja published in 1901: Philippe Tamizey de Larroque, correspondant de l'Institut. Essai bio-bibliographique, Saint-Denis, 1901 (extract from the Correspondance historique, 1898 to 1901). But the Catalogue général des imprimés de la Bibliothèque Nationale must also be taken into account: Tamizey in fact deposited his offprints at the Bibliothèque Nationale, where his bibliography fills 32 columns and 220 numbers. Several items are not mentioned in Momméja's article: see Peyrous (Bernard), « L'oeuvre d'éditeur scientifique de Tamizey de Larroque », Revue française d'histoire du livre, 61e année, n° 76-77, nouvelle série, 3e et 4e trimestres 1992, p. 220, and, in the appendix, the provisional bibliography of Tamizey de Larroque.

On Philippe Tamizey de Larroque in slippers, in his 'working costume', as he himself called it, at the Pavillon Peiresc, see the account of Jacques Brissaud in Maisani (Claude), « Sur Philippe Tamizey de Larroque », op. cit., p. 26-29. J. Brissaud notes that Tamizey's housekeeper and cook, Antonia Monthus, 'was to open no window and do no dusting, for fear that a draught might scatter the documents or that cleaning might disturb their order; so that the historian lived at once among the ashes of the past and in a very real dust, accumulated everywhere, for days and nights on end'.

(146) Account of Jacques Brissaud in Maisani (Claude), « Sur Philippe Tamizey de Larroque », op. cit., p. 27.

(147) Serin (P.), op. cit., p. 32-33: Henri Tamizey de Larroque lived at Larroque after his father's death: '...he always came back from the hunt more or less empty-handed, with a feather in his hat. Once a duck from their farmyard, called Benjamin, had a broken leg; Henri found nothing better than to treat it by fitting it with a splint. Some time later the duck healed, but it remained lame for life.'

Jean-Paul Michel, William Blake & Co. éd., 1998. - Merlio (Gilbert) et Pelletier (Nicole), eds., Bordeaux au temps de Hölderlin, coll. Contacts, Gallogermania, série II, 20, Peter Lang, 1997, p. 164-227 and p. 265-286.

LIVRE DE RAISON OF PHILIPPE TAMIZEY DE LARROQUE, BEGUN AT LARROQUE ON 14 JULY 1889

Sunday 14 July. Since I have been so much occupied, all these last days, with livres de raison in connection with the old papers of the Fontainemarie family (156), why should I not write, in my turn, my own little memorial (157)? I regret only that I begin it so late, when I have already reached sixty. The first thing I set down in this notebook is the resolution I have just taken, to live here from now on. [I shall leave] Gontaud (159) for Larroque as soon as it can be done.

Monday 15. This night, contrary to my habit, I did not sleep much. I took advantage of it to admire, towards two in the morning, a starry sky of great beauty, in which, through the extreme purity of the air, shone constellations I had never yet had occasion to contemplate. I also took advantage of my insomnia to prepare in my head the plan of my forthcoming little buildings. I have settled on the following project: a pavilion adjoining the present chalet, made up of three rooms, one on the ground floor, one on the first floor with a balcony, and one finally on the second floor, from which there will be a splendid view and which will be my study (160).

(158) From that year on, Tamizey de Larroque had made a habit of spending Friday in the country, except in very bad weather: Couture (Léonce), « Philippe Tamizey de Larroque, correspondant de l'Institut », Revue de Gascogne, 1898, p. 505.

(159) This is where the family house of the Tamizeys had stood for several generations. Larroque, one of the family's three properties, lay about 4 km north-east of Gontaud-de-Nogaret on a rise (spot height 109, carte 1738 Est, Seyches, série bleue 1:25 000, I.G.N., Paris, 1987): Serin (Pierre), « Quelques anecdotes autour de Jacques Philippe Tamizey de Larroque », Revue de l'Agenais, 1999, p. 25.

(160) Description of the pavilion, 'open to every wind', as it stood in 1952, drawn up on the occasion of the revision of the cadastre by Pierre Serin, assistant surveyor: '...a rectangular building... on entering, to the right, a narrow corridor from which a deal staircase with canted turns served the two storeys and the attic, mansarded under the hipped, four-sided roof. On the ground floor... to the left, the dining room, with a corner fireplace at the south-east surmounted by a shelf of red velvet, the walls painted pink, the window facing north. The floor was paved with large terracotta tiles laid diagonally to the four walls of this dining room. On the first floor, the same orientation: this was the scholar's former bedroom, the room opening onto a large terrace running the whole width of the Pavilion.
'The walls of the bedroom were covered with wallpaper, a black-on-white design, representing the effigies of the various kings and queens of France and others, ministers, ecclesiastics... it made an impressive ensemble... a large French window opening onto the balcony to the north, this balcony supported by two round cast-iron pillars. On the second floor, a large library arranged on numerous unpainted deal shelves lining the four walls of this great room filled with books, journals and miscellaneous works... One window to the north-east, another to the north, a third to the south with a small wrought-iron balcony from which one could see the town of Gontaud.'

I shall call this pavilion the Pavillon Peiresc (161), because I shall build it with the proceeds of the first part of my great publication (162), that is, eight thousand francs.

[...] (1776-1900), Paris, Picard, 1900, p. 165-166 and p. 18, 58, 61, 90. - Généalogie de la famille de Godailh by the comtesse Marie de Raymond, nos 4 and 42. - The locality called « Les Peyrès », today cleared of trees, below Larroque: carte 1738 Est, Seyches, série bleue 1:25 000, I.G.N., Paris, 1987.

[... trees, some] dead, the others dying, which I shall have to use as firewood, their removal being meant to favour the growth of the surrounding coppices.

Tuesday 16. My [...]

Friday 23. Shall I have cats at Larroque? I am much inclined to say no, for if they are agreeable companions, above all for a man who, like a worthy disciple of Peiresc (172), loves them as much as I do, how dearly the pleasure of possessing them is paid for by the sorrow of losing them! I write this under the impression of the great grief caused me this week by the death of my poor Nigra, that magnificent cat whom each of my more lettered visitors felt obliged to greet with this quotation: [...]

[Note on the abbé de Carsalade du Pont: born at ...] (Gers). He was the pupil of Léonce Couture, who taught philosophy at the Petit séminaire and edited the Revue de Gascogne. He very soon engaged in historical work bearing essentially on the modern period and published in the Revue de Gascogne (notably « Jugements de maintenue de noblesse », 49 notices spread over five years in that journal, and the Histoire des Trois barons de Poyanne, begun in 1879 and finished in 1883). He himself took part in producing the Archives historiques de la Gascogne (he was secretary general of the Société des Archives Historiques de la Gascogne), publishing Documents sur la Fronde en Gascogne, which was hailed by Chéruel, then the great historian of the period. After serving several parishes, he was appointed by the archbishop of Auch, Mgr Gouzot, archivist of the diocese and his private secretary. At the end of 1891 the abbé de Carsalade founded, with A. Lavergne and Ph. Lauzun, the Société archéologique du Gers. Honorary canon in 1893, at the end of 1899 he became bishop of Perpignan: Andrieu (Jules), Bibliographie générale de l'Agenais, 1886, t. 1, p. 140; obituary notice by A. Clergeac, president of the Société historique de Gascogne, in the Revue de Gascogne, t. 28-29, 1933-34, p. 13-18. On this edition of the Mémoires of J. d'Antras: Tamizey de Larroque (Philippe), Madame la comtesse Marie de Raymond, Auch, G. Foix, 1886, p. 12: 'When M. l'abbé de Carsalade du Pont and I published the Mémoires of Jean d'Antras de Samazan, the expense was greater than we had thought. My dear collaborator was then a poor country priest, and I was an unfortunate landowner already cruelly stricken by phylloxera.'

[Note on the 1889 Exposition:] The Exposition universelle of 1889 drew crowds without precedent. Covering an area of 958,572 m2, it brought together a host of attractions, and it is estimated that it was visited by nearly 33 million people. Particularly remarked were the Galerie des Machines and the central Dome, true marvels of metal construction; the 300-metre Tower, built entirely of iron by the engineer Eiffel; and the luminous fountains of the engineer Bechmann. There were more than 55,000 industrial exhibitors and more than 5,000 exhibitors in the fine arts.

1865 [START_REF]Les Commencements d'une conquête : l'Algérie de 1830 à[END_REF], Privat, 1976, p. 241-242. - Bercé (Françoise), « Arcisse de Caumont et les sociétés savantes », in Nora (Pierre) ed., Les lieux de mémoire, Quarto, Gallimard, t. 1, 1997, p. 1545 [START_REF]L'Arpenterie, livre de géométrie, enseignant à mesurer les champs[END_REF]

A week of planting. I have put fourteen young elms along the edge of the path that skirts the meadow and leads to the spring; near this spring I have planted a Carolina poplar, whose roots will be perpetually refreshed by the water and which will give me plenty of shade in little time. Also [...]

[Note on phylloxera:] While phylloxera was beginning to rage (detected in the Gard in 1863), the pharmacist Jules Émile Planchon, of Ganges near Montpellier, brought back from the United States, in 1874, vines resistant to the aphid, and thanks to his persevering action, supported by that of Viala, the use of American stocks slowly imposed itself as the remedy for the crisis. Slowly, because the winegrowers were divided into two camps: the 'sulfuristes', who believed in the superiority of the French vines and wanted to preserve them, and the 'américanistes', who saw in the imported plants the best means of saving the national vineyard. The solution that reconciled the adversaries existed: grafting the French varieties onto American rootstocks. It was finally adopted, and so there were direct plantings alongside plantings of rootstocks and hybrids. Grafting and replanting really got under way only after 1877 and accelerated at the end of the 1880s, when the government decided, in December 1887, to grant tax exemptions for newly planted vineyards. The use of American vine varieties affected only 17 départements in 1881, but already 34 in 1885, 44 in 1889 and finally 64 in 1899, so that in twenty years the replanted area reached 962,000 ha. At the end of the century American stocks covered 92% of the vineyards of the Hérault, 89% of those of the Aude, 79% of those of the Gard, and only 40% in the Gironde. The phylloxera crisis could then be considered overcome: Duby (Georges) et Wallon (Armand) (dir.), Histoire de la France rurale, t. 3, Seuil, 1976, réed. coll. Points, 1992, p. 360-362. - La reconstitution des vignobles dans le canton de Cadillac, Imprimerie G. Gounouilhou, 1900, Bordeaux (reprint Association Saint-Blaise Cadillac, 1999), p. 50-55 in particular.

(196) Vines planted in the manner known as « à joualles » or « en joualles »: rows of vine-stocks alternate with empty spaces. Generally, one plants [...]

She has only one fault, a grave fault... weak health; but the excellent air of Larroque will strengthen her, and I hope she may become the staff of my old age.

[...] June. Yesterday evening, at half past eight, we lit the Saint John's Eve fire [...]

[...] July. Yesterday I went and sat on the stone of my tomb. I intend to make this stone often the goal of my walks.
Cela me fera du bien. Il faut s'habituer à la pensée de la mort et attendre l'heure dernière sans la désirer ni la craindre. Aujourd'hui, j'adresse au Ministère de l'Instruction publique la demande relative à la continuation de ma publication des , puis au fils de celui-ci, Henri, comte de Paris. 209 Il y a deux comtes d'Haussonville. Joseph Cléron d'Haussonville (1809-1884). Député conservateur de Provins de 1842 à 1848, il se consacra à partir de cette date à des études politiques et historiques qui lui valurent, en 1865, un fauteuil à l'Académie française. Après la guerre de 1870, il fut l'un des plus actifs fondateurs de la Société de protection des Alsaciens-Lorrains, et fut élu sénateur inamovible (1878). Au sénat, il attaqua avec violence les projets de lois de J. Ferry (1880). Il avait épousé, en 1836, la princesse Louise de Broglie, soeur du duc Albert de Broglie. Son fils Bernard (1843Bernard ( -1924)) Le 10 juillet 1860 avait épousé, en secondes noces, sur les instances des siens, sa cousine germaine Olivia-Marie-Henriette Delmas de Grammont (1842Grammont ( -1910) ) : voir 27 septembre 1889. 213 Nés du second mariage de Ph. Tamizey de Larroque avec Olivia Delmas de Grammont, morts tous trois en bas âge. Dans la notice nécrologique Adolphe Magen (1818Magen ( -1893)), Imprimerie V ve Lamy, Agen, 1894, p. 12, Tamizey évoque la disparition de Charlotte : « J'eus le malheur, l'affreux malheur de perdre, en avril 1879, ma plus jeune fille, qui n'ayant pas même atteint sa sixième année, laissait entrevoir, comme une fleur rare en son précieux bouton, les plus délicieuses qualités. Elle emportait un lambeau de moimême dans son petit cercueil » : Audiat (Louis), Ph. Tamizey de Larroque août Je viens de passer quatre jours à Gontaud à l'occasion de la fête de l'Assomption, de la distribution des prix du couvent, etc. Ah ! qu'il faisait chaud dans mon vallon natal, pendant ces quatre journées, et combien j'ai regretté mon ombreux châtaignier ! Au moment où j'écris ces lignes, la fraîcheur de l'air est aussi délicieuse qu'était accablante la température de la ville. Hier, je ressemblais à un malheureux poisson jeté sur le sable brûlant de la plage et s'y rôtissant d'avance. Aujourd'hui je suis comme un poisson en pleine eau. Je revois mon pavillon avec d'autant plus de plaisir, que l'escalier est enfin posé et que je pourrai, dès demain, prendre possession de tous les étages. J'ai déjà même porté dans mon cabinet de travail bon nombre de livres et de paperasses que je vais classer peu à peu. Avant la fin du mois tout sera en ordre du rez-de-chaussée jusqu'à la mansarde et je touche à l'heure de l'installation définitive. août Hier, jour de la fête de Saint-Louis, j'ai pris possession de tous les étages de mon pavillon. J'ai dîné et soupé dans la salle à manger, j'ai dormi dans la chambre à coucher, j'ai travaillé dans la bibliothèque. Ma première occupation, en cette dernière pièce, a été la rédaction d'un article sur la septembre En venant de Gontaud, ce matin, j'ai trouvé le chemin n° 99 jonché de branches de peupliers de la Caroline brisées par le vent de la nuit (peupliers que je fis planter quand j'étais maire de Gontaud, par l'agent-voyer Génibeau, au nom de l'intérêt commun). Cette triste jonchée m'a un peu consolé de la mort du peuplier de la même espèce que j'avais placé près de la fontaine et sous lequel je comptais venir prendre la double fraîcheur du voisinage de l'eau et de l'agitation du feuillage. 
Je renonce à replanter un arbre dont les rameaux sont si cassants, si fragiles, et qui, par le vent qui souffle continuellement à Larroque, serait sans cesse mutilé. La nature vient à mon secours en faisant pousser, tout près du peuplier tué par l'affreuse sécheresse de l'année (il n'a plu un peu sérieusement qu'une seule fois depuis que je suis ici et voilà bientôt trois mois !), un chêne qui n'a pas un mètre de hauteur, mais qui paraît avoir bonne envie de se développer. Je le soignerai bien, j'écarterai de lui la mauvaise compagnie (je veux dire les herbes et les ronces), je l'arroserai, et je serai bien trompé s'il ne devient pas un bel arbre, peut-être même un digne successeur du vieux Chêne dont il est sans doute un rejeton. Il sera moins haut posé que feu son père, mais il sera aussi moins menacé par les tempêtes, par la foudre, et ce qu'il perdra en majesté, il le gagnera en solidité. février C'est le mois des inaugurations métalliques, c'est le mois des triomphes du brave Thouron. Après la pose de la cloche, pose aujourd'hui du petit balcon que j'ai fait établir sous la porte-fenêtre de mon cabinet de travail et où je jouirai tout à la fois de la vue de la vallée de la Garonne et du soleil du printemps et de l'automne. J'aurai ainsi deux installations pour mon travail, une au nord, sur la terrasse, pendant les trop chaudes journées d'été ; l'autre au midi, pendant que la température sera modérée. J'imiterai ces sybarites qui changeaient de climat selon les saisons. février Tout ce mois a été aussi beau que le mois de janvier avait été affreux. Nous n'avons cessé de jouir du plus magnifique soleil. Quelques matinées ont été froides, surtout au commencement du mois, mais, à partir de 9 heures environ, chaque matin, la température était délicieuse. Je n'ai cessé de travailler sur mon nouveau balcon que pendant les heures de l'après-midi où il y faisait vraiment trop chaud. J'ai déjà eu le plaisir de voir un des chèvrefeuilles plantés par moi verdoyer de la plus charmante façon et encore est-il dans une situation ingrate, au nord. Cela me promet pour bientôt le régal de la verdure partout renaissante. Voyons si mes arbres et arbustes devanceront beaucoup le fameux marronnier du 20 mars ! mars Hier, j'ai fait planter autour de mon tombeau quelques pieds de chèvrefeuilles qui remplaceront les pieds morts d'aubépine. Le chèvrefeuille, à cause de la précocité de sa verdure, sera bien placé là : ce sera comme un symbole de résurrection. J'y mettrai aussi, dans quelques jours, une double rangée de violettes afin de former une petite enceinte toute parfumée. mars Mon ami, M. Prosper de Lafitte 284, l'ancien président du comité institué à Agen pour combattre le phylloxera 285 Voilà un mois qui veut finir comme il a commencé, c'est-à-dire par des caprices incessants. En un seul jour, nous avons eu, après une bonne petite gelée, du soleil, de la pluie, du vent, surtout du vent. Jamais peut-être les traditionnelles giboulées de mars n'ont été aussi fréquentes qu'en ces derniers jours. Les froides matinées mettent tout en retard. Pourtant j'ai pu admirer tout à l'heure, entre deux ondées, les fleurs blanches de mes jeunes cerisiers, les fleurs roses de mes jeunes pêchers. avril Troisième anniversaire de la mort de ma mère bien-aimée 290. Il me semble que l'événement est d'hier, tant la blessure reste vive et saignante en mon pauvre coeur.
À la douleur d'avoir perdu ce trésor des trésors se joindra toujours le regret particulier de n'avoir pas été auprès d'elle le 8 avril 1888, pour recueillir son dernier soupir. Je lui avais écrit le samedi, veille de sa mort (je lui écrivais tous les samedis, depuis son départ de Gontaud), et elle eut le temps, le dimanche matin, avant le commencement de la crise si rapide qui nous l'enleva, de lire les pages qui lui apportaient les derniers témoignages de ma tendresse. Quand donc me sera-t-il donné de la retrouver et pour ne jamais plus la perdre ? avril Il a gelé assez fort, ce matin, ce qui rappelle les vers de Victor Hugo sur les fleurs du pommier brûlées par les gelées d'avril. Cette neige odorante du printemps, dont parle le poète, est répandue partout autour de moi. Ce sont surtout les pruniers qui sont poudrés à blanc et qui 290 Marie-Élisabeth-Pauline Delmas de Grammont (1802-1888) mai Je veux donner un bon conseil à ceux qui liront ces pages : qu'ils prêtent seulement l'argent dont ils seront disposés à faire l'abandon ! Je calculais tout à l'heure, en me promenant, que j'ai perdu environ le vingtième de ma petite fortune en prêtant trop facilement soit de petites sommes (20 francs, 40, 50, 100 francs), soit de plus fortes sommes (une fois 400, deux fois 500), soit enfin une somme relativement 299 Audisio (Gabriel), Les Français d'hier, t. 2 : Des croyants XV e -XIX e siècle, A. Colin, Paris, 1996, p. 454-457. 300 Voir 25 avril 1891. 301 Famille de l'Agenais, apparentée à Ph. Tamizey de Larroque : Meller (P.), op. cit., t. III. 302 À 5 km environ au nord-est de Miramont-de-Guyenne sur l'actuelle D 933 : carte au 50 000 e, Marmande-Agen, 56, I.G.N., Paris, 2003. énorme (3000 francs). J'ai prêté ces dix mille francs à des personnes de tout âge, de tout sexe et de toute condition, à des paysans, à des ouvriers, à des bourgeois, à des gentilshommes, à des jeunes filles, à des veuves, à une vieille dévote, à des prêtres. Personne ne m'a jamais rien rendu. Il y a eu, sur le nombre, des débiteurs honnêtes, mais dont la bonne volonté a été paralysée par de malheureuses circonstances ; il y en a eu beaucoup plus qui m'ont demandé de l'argent avec l'intention bien arrêtée de ne pas me le rembourser. Loyaux ou non, tous ceux qui empruntent restent presque fatalement insolvables, et je le répète après une longue expérience (car elle a commencé pour moi dès la vie de collège), il ne faut se laisser emprunter que l'argent que l'on veut perdre. mai Hier, premier orage de l'année. Le tonnerre n'a presque cessé de gronder depuis 10 heures du matin jusqu'en pleine nuit. C'est surtout vers neuf heures du soir qu'il est devenu très violent. La foudre a dû tomber deux ou trois fois non loin du pavillon, tant les détonations étaient rapprochées de nous. J'étais inquiet pour mon vieux chêne, mais il résiste à tout. J'ai été témoin de l'influence de l'électricité sur deux représentants de la race canine : notre pauvre Black était très agité, il avait sur les nerfs, il aboyait à l'orage, il était partagé entre la terreur et l'agacement. Il est venu gratter à la porte de ma chambre à coucher, comme pour me demander aide et protection. Le chien de la ferme a été plus vivement impressionné encore ; il s'est blotti sous un lit et n'a cessé de pousser des gémissements aigus. qui a tout noyé et ravagé. À Auch 306, idem. 1 er juin Le temps est encore maussade. Ciel gris, vilain, pas du tout digne de la fête de St-Clair. Chaleur lourde, orageuse.
J'ai pourtant eu, entre deux ondées, le plaisir de voir près de mon tombeau un chèvrefeuille en fleurs. Dans le jardin, plusieurs roses sont épanouies. Ah ! Si tous mes rosiers avaient pu fleurir, c'eût été comme une immense corbeille de fleurs autour de mon pavillon. Si l'hiver prochain est plus clément que celui de 90-91, j'aurai sans doute, le 1 er juin 92, le doux spectacle de cent rosiers couverts de fleurs. juin Juin voudrait-il ressembler à l'odieux mois de mai ? Hier, averses continuelles. Aujourd'hui ciel nébuleux et grand vent très froid. Ventôse après Pluviôse 307. N'aurons-nous donc plus ni printemps ni été ? -Hier, (il faut que je note ici les plus petites particularités) j'ai eu plaisir de retrouver une petite amie que je croyais perdue, ma jeune chatte qu'en raison de son air éveillé j'ai cru devoir appeler Gredinette. Je m'étais promis -et j'ai même inscrit ma promesse ici -de n'avoir pas de chats à Larroque. Mais c'était un serment de joueur, d'ivrôgne (sic), de poète, d'amoureux. J'ai peu à peu laissé pénétrer dans mon intimité la petite chatte des fermiers ; j'ai fini par l'adopter tout à fait. Cette gentille petite bête m'a pris en affection depuis le jour où, à peine âgée de deux ou trois mois, peu de jours après mon juin Nouvel orage qui a éclaté à 8 h. du soir et s'est prolongé dans la nuit. note en marge : Acte de naissance de ma tante Jeanne Rosalie (1 ère colonne) et de mon père (seconde colonne). (1 re colonne) Palay (Simin), id., p. 917 : petit siège en bois de coin de feu ; il de la vaste cheminée de la cuisine, jusqu'à ces derniers temps. Cette vie se déployait devant moi, avec tous ses grands événements et tous ses petits accidents, comme, par cet éclatant coucher de soleil, apparaissaient, dans toute l'étendue de l'horizon empourpré, les moindres détails du paysage. Tous mes chers morts ont été évoqués, toutes mes affections actuelles ont eu leur souvenir. J'ai revécu mes 62 ans avec leurs cent mille émotions joyeuses ou douloureuses, en constatant une fois de plus que le travail a été le bienfaiteur de ma vie. juillet Il est 4 h. 20 mn. La matinée est de toute beauté et la fraîcheur de l'air invite au travail. Je vais en découdre aujourd'hui de 4 h. 30 à 7 h. 30 ! Du reste, j'ai rarement autant travaillé que tous ces jours-ci. Depuis le 14 juillet, j'ai préparé pour l'impression toute la seconde partie du tome IV de la Correspondance de Peiresc, annotant 46 lettres de mon héros à J. J. Bouchard 331. J'ai l'intention de consacrer le mois de septembre à la préparation de la 3 ème et dernière partie du manuscrit de mon tome III, laquelle sera de beaucoup la plus importante, car elle ne comprend pas moins de 150 lettres environ échangées entre Peiresc et Gassendi. Au printemps de l'an prochain, je m'occuperai de la rédaction des notes du tome V et, en 1893, j'achèverai, avec le tome VI, la seconde série de mon vaste recueil. juillet Hier, journée féconde entre toutes. Le maçon Carsalade a mis la dernière main à deux des choses qui avaient été projetées le jour de la bénédiction du pavillon la petite chambre de la bonne, laquelle couchait jusqu'à présent dans la cuisine, ce qui était un peu trop primitif et patriarcal (je me souviens sert parfois de coffre à sel. Cornelis de Witt 336 m'a demandé ma collaboration. J'en ferai faire un petit tirage à part et ce sera un numéro de plus dans la Bibliographie Tamizeyenne.
juillet Hier, après mon souper, j'aurais pu redire du haut de ma terrasse-observatoire le vers si souvent cité du poète : Le Soleil était rouge à son coucher, ce soir. Il était si rouge qu'on aurait cru voir une masse de fer embrasée sortant de la fournaise. Cette flamboyante lueur annonçait-elle l'orage de la nuit ? Le vent a soufflé très fort, suivi d'une abondante pluie. Dois-je à cette pluie, qui aurait grossi le volume de ma source, le plaisir que j'ai éprouvé pour la première fois d'entendre le murmure de l'eau coulant par le canon de fusil qui sert de tuyau de dégagement ? Jamais musique ne fut plus douce à mon oreille. à Fort-Louis, permettant ainsi de faire échouer le siège. Épuisé et engourdi, incapable de tenir debout en arrivant, il aurait rampé sur la plage pour atteindre son but. Tamizey de Larroque a retracé cette prouesse grâce aux Mémoires de Jacques Chastenet de Puységur (t. 1, p. 58-59) dont il a procuré une nouvelle édition, Société Bibliographique, Paris, 1883, 2 vol. grand in-18, au Mercure françois, t. XIII, 1627-1628 et à Jurien de La Gravière (Vice-amiral), Les origines de la marine française et la tactique navale. Le siège de La Rochelle, Firmin-Didot, Paris, 1891, p. 204-206. 334 Le Paysan du Sud-Ouest, « Organe de la Démocratie rurale, paraissant le Dimanche », journal politique à 5 centimes, dont le premier numéro est daté du 14 décembre 1890, diffusé en 4 pages in-f°, imprimé à Tonneins par l'imprimerie G. Ferrier. Cornelis de Witt en est le directeur-gérant. Ce qui m'a rendu ce bruit encore plus agréable, c'est qu'il a été accompagné d'une grande surprise, car je croyais que l'eau ne monterait jamais assez haut pour se répandre dans l'abreuvoir. Mais les sources sont capricieuses comme les femmes, et la mienne, après m'avoir laissé craindre que ma cannelle fût à perpétuité un objet de luxe, me donne maintenant la certitude que cette cannelle verra couler un petit torrent. juillet Hier, on a achevé de peindre (à la colle) le plancher et les murs de ma salle à manger. Je m'étais bien promis de rester fidèle à une austère simplicité, mais c'était par trop nu, par trop laid. La vue des planches et solives à l'état naturel et des murs enduits de mortier était une souffrance pour l'oeil, d'autant que ma salle à manger (cumularde comme un vieux sénateur) est aussi mon salon de réception. Un peu d'art était donc nécessaire. Un habile peintre de Marmande 337, Prébesty, a très bien arrangé la petite pièce ; il a donné une jolie teinte grise au plancher et aux murailles et il a encadré le tout dans des filets rouges qui produisent le meilleur effet. C'est une décoration faite avec goût. Nos vieilles assiettes clouées aux murs auront une autre mine, leur élégance jurait avec la grossière couche de mortier ; c'étaient de jolies femmes dans un vilain lit. La douce nuance de la peinture les fera, au contraire, valoir et resplendir. août Hier, j'ai eu la visite de treize pensionnaires du couvent de Gontaud 338, accompagnées de deux religieuses, la soeur Antonin août Les petites pluies de la semaine dernière et surtout la grande pluie de samedi ont rafraîchi la température et la verdure. La journée, hier, a été douce et charmante. J'ai contemplé longtemps d'un oeil ravi -faisons un peu de style romantique, une fois n'est pas coutume -, le saphir du ciel et l'émeraude des prairies. Hier matin, j'ai eu la visite de M. Brissaud 344, professeur à la Faculté de droit de Toulouse, homme aussi aimable qu'instruit. J'ai eu grand plaisir à causer avec lui de nos chères études.
Il m'a promis de venir bientôt me donner une Dans les Ardennes, sur la Meuse, la bataille fut livrée le 1 er septembre 1870. Après la défaite de Reichshoffen (6 août), le commandant en chef, Mac-Mahon, avait dû abandonner l'Alsace et se retirer sur Châlons, où fut constituée, sous ses ordres, une nouvelle armée. La première pensée du maréchal avait été de se replier sur Paris. Mais, craignant que ce mouvement de retraite n'amenât des troubles dans la capitale, le ministre de la Guerre, Palikao, à la prière de l'impératrice, imposa au maréchal une offensive sur Metz, destinée à secourir le maréchal Bazaine. La marche vers l'est se fit lentement, mais sans précautions du côté sud. Mac-Mahon croyait n'avoir devant lui que la première armée prussienne (prince de Saxe). En réalité, dès le 26 août, la troisième armée (prince royal) avait convergé vers le nord, de manière à tourner l'armée française ; au combat de Beaumont (30 août), l'aile droite française (5 e corps) était surprise et rejetée sur Sedan. L'armée du maréchal, forte de 100 000 hommes environ, arriva à Sedan, le 31 août et fut dispersée sur les hauteurs qui dominent la ville. Mais déjà, au nord et au sud, les forces allemandes avaient à peu près achevé leur double mouvement tournant, l'armée du prince royal prenant pour objectif la route de Mézières, la seule ligne de retraite qui restât désormais ouverte aux Français. L'attaque commença le 1 er septembre à 4h du matin, à Bazeilles, où se concentrèrent tous les efforts des Français, tandis que le prince royal s'emparait des défilés de Vrigne-aux-Bois, qui lui livraient Sedan au nord. Au début de l'action, le maréchal de Mac-Mahon avait été blessé ; le général de Wimpfen ramena les corps d'armée devant Bazeilles, dans l'espoir de percer les lignes saxonnes, mais un mouvement de retraite esquissé par Ducrot avait donné à l'ennemi de grands avantages ; aux défilés du nord, sur le plateau d'Illy, les chasseurs d'Afrique de la division Margueritte se sacrifièrent pour retarder la marche des Prussiens. À 11h.30, l'empereur Napoléon III rentra dans la ville ; à 2h., il faisait hisser le drapeau blanc en signe de capitulation : l'armée était prisonnière de guerre. Toute tentative, d'ailleurs, était désormais impossible. L'empereur se retira au château de Wilhelmshoehe, près de Cassel, résidence assignée par le vainqueur. op. cit., t. 2, p. 122-123. 381 Cité comme membre résidant de la société académique d'Agen, en 1882, comme « historien » dans Lauzun (Philippe), La Société académique d'Agen (1776-1900), Picard, Paris, 1900, p. 318. 382 Son abondante bibliographie occupe 8 pages du Catalogue des imprimés de la Bibliothèque Nationale. Il s'est intéressé à un grand nombre d'aspects de l'histoire de la Picardie depuis le Moyen Âge jusqu'à l'époque contemporaine ainsi qu'au patrimoine et aux activités des habitants de cette province : notamment sur les faïences picardes ; Les Anglais à Amiens pendant la Révolution. Le colonel Keating, 1792-1793…, impr. de Delattre-Lenoël, 1876, 12 p. (extr. de La Picardie) ; L'iconographie des thèses dites « historiées », gravées notamment par des Picards ; Les rosières de Santerre (1863) ; il a aussi édité quelques documents historiques se rapportant à l'histoire de la Picardie. -A.D. Lot-et-Garonne, 16 J 22, correspondance d'érudits : Fonds Tamizey de Larroque. Archives historiques de la Gironde, Aubry, Paris ; Gounouilhou-Lefevre, Bordeaux. Celles-ci, après une remarquable série de 58 volumes in-4°, inaugurée en 1867, ont cessé de paraître en 1932.
En mettant à la disposition des chercheurs des milliers d'actes du Moyen Âge et de l'époque moderne, intéressant non seulement la Gironde mais aussi les départements voisins, puisés dans des fonds publics et privés, régionaux, nationaux et étrangers, la société des Archives historiques du département de la Gironde a permis de sauvegarder et de diffuser un important patrimoine archivistique voué à l'oubli, sinon à une pure et simple destruction. Une tentative de relance de la collection en 1933-1936 sous un format in-8°, plus économique, fit long feu en dépit des efforts de Paul Courteault (1867-1950). La force de la société résidait dans ses hommes, amateurs éclairés et cultivés, mécènes prodigues de leur temps et de leurs deniers, qui, dans l'esprit des sociétés savantes de l'époque, entretenaient un réseau de correspondants grâce auquel ils purent élargir le champ de leur prospection. L'extinction de la génération des fondateurs (Léo Drouyn (1816-1896), Jules Delpit (1808-1891), Charles Marionneau (1823-1896) et Henri Barckhausen (1834-1914)) coïncida avec ou fut suivie de peu par la disparition tragique de plusieurs de leurs héritiers spirituels, emportés dans la tourmente de la Première Guerre mondiale, tel Pierre Harlé tué au combat en 1915. La paix revenue, les difficultés financières croissantes dans lesquelles la société se débattit durant les années 20 finirent par avoir raison des dernières bonnes volontés : Couture (L.), op. cit., p. 560-561. 1854). Avec le titre d'« ingénieur en chef des embellissements », en fait Paris posséda en lui un véritable préfet des travaux, sous l'Empire puis sous la République. Il fut chargé de créer des squares, de transformer en parcs le bois de Boulogne, le bois de Vincennes, les Buttes Chaumont, de dessiner les parterres des Champs-Élysées et du parc Monceau, d'établir les pépinières et serres de la ville de Paris entre autres. En 1870, Alphand fut nommé directeur des travaux de fortifications. Après la Commune qui l'avait remplacé par Georges Cavalier dit Pipe-en-bois, il refuse d'être préfet, mais est nommé par arrêté du chef du pouvoir exécutif (27 mai 1871), directeur des travaux de Paris (voirie, voie publique, promenades, plan de Paris et travaux d'architecture), en réalité, directeur des travaux à la préfecture de la Seine ; son service comprenait aussi les travaux publics du département et l'architecture départementale, avec les beaux-arts et même les travaux historiques. À la mort de Belgrand (1878), la direction des eaux et égouts lui est rattachée et il dirige ainsi la vie souterraine de la capitale. Entre autres grandes voies, il ouvre le boulevard Saint-Germain, l'avenue de l'Opéra, celle de la République, le boulevard Voltaire, transforme le XVI e arrondissement, achève l'Hôtel-Dieu. Avec l'État, il poursuit les agrandissements du Palais de justice et l'édification de la nouvelle Sorbonne, établit en banlieue les grands cimetières parisiens et un réseau départemental de tramways. Il fit aussi décider l'adduction de nouvelles eaux, l'emploi du système de tout-à-l'égout et le pavage en bois. Il prit une part considérable aux Expositions universelles de 1867, 1878 et 1889. Il est l'auteur de Promenades de Paris (1867-1872) avec gravures et chromolithographies et l'Art des jardins en collaboration avec le baron Ernouf (1868).
Inspecteur général des ponts et chaussées depuis 1869, maintenu en activité par mesure exceptionnelle après avoir atteint la limite d'âge, grand'croix de la légion d'honneur, il est élu membre libre de l'Académie des beaux-arts en 1891, en remplacement d'Haussmann. Le lendemain de sa mort, le conseil municipal de Paris leva la séance en signe de deuil et ses obsèques furent faites aux frais de la ville qui donna pour l'inhumer un terrain au Père-Lachaise. l'acte de mariage de son neveu Maurice Festugières avec M lle Blanche Lavergne, de même, dis-je, que M. Alphand, qui semblait avoir en main une baguette magique, transportait dans les jardins de Paris les grands arbres de la forêt de Fontainebleau, j'ai fait arracher chez un de mes voisins cet arbre déjà si développé et qui, si l'opération réussit, ne tardera pas à me donner beaucoup d'ombre, d'autant plus que c'est un ormeau à larges feuilles. J'ai installé deux noyers au bout de l'allée qui mène au vieux chemin de Minor, de chaque côté du très rustique banc que mes bons voisins les Bouton -un nom en harmonie avec la chose -m'ont promis d'entourer des rejetons de leurs plus beaux rosiers. À propos de rosiers, on en a mis partout, le long de toutes les allées, dans les massifs formés au milieu de la prairie, dans mon petit cimetière, etc. Aux plantations déjà faites, l'an dernier, autour de ce cimetière, j'ai ajouté six lilas venus janvier On vient de me communiquer un fascicule de la collection Joanne : Géographie du dépt. de Dès l'âge de dix-sept ans, il collabora à des journaux de province, eut des duels, donna des articles à « la Charte de 1830 », puis à « la Paix » où il attaqua Thiers (1838). Olivier Fulgence l'emmena à Rome, où il fut conquis au catholicisme. Il visita la Suisse, et publia les Pélerinages en Suisse (1839) ; Rome et Lorette (1840) etc. Sous-chef de bureau au ministère de l'intérieur, secrétaire de Bugeaud en Algérie (1842), attaché au cabinet de Guizot, il démissionna en 1843 pour entrer comme rédacteur à « l'Univers religieux », dont il devint le rédacteur en chef en 1848 et dont il fit l'organe de l'ultramontanisme. Il accueillit avec faveur, en 1848, une République qu'il espérait trouver favorable à l'Église. Il publia alors : Les libres penseurs (1848), l'Esclave Vindex (1849) ; le Lendemain de la victoire ; Petite philosophie (1850). Déçu, il adhéra au régime du 2-Décembre. Il attaqua avec violence les catholiques libéraux, s'attira le blâme de l'archevêque de Paris Sibour (1850), puis de l'évêque d'Orléans son correspondant (J'ai lu à Versailles plus de cent lettres de lui, toutes ravissantes), me disait, un jour que j'étais allé le voir (rue du Bac) : « M lle de Grammont est, à tous les points de vue, une des femmes les plus remarquables qu'il m'ait été donné d'admirer. » Ma cousine, outre sa beauté et ses talents, avait une piété et une vertu peu communes. Nous étions de grands amis et je la regrette de tout mon coeur. février Je suis allé aujourd'hui à Gontaud pour la première fois depuis ma maladie, c'est-à-dire depuis plus de trois mois. J'ai assez bien supporté la fatigue du double voyage fait à pied et, après cette épreuve, je puis me considérer comme rentré en la pleine possession de mes forces. Ce n'est pas sans émotion que j'ai revu ma ville natale. J'ai prié de bon coeur pour le repos de l'âme de ma pauvre cousine Charlotte et pour tous ceux que j'aime, vivants ou morts. 1 er mars Hier, premier orage de l'année, de 5 à 7 heures du soir : éclairs, tonnerre, pluie, grêle.
J'ai aperçu dans le ciel en grande partie couvert de sombres nuages, deux délicieux petits nuages teintés de rose qui produisaient le même effet qu'une robe de gaze de même nuance au milieu de vilains habits noirs dans une soirée dansante. C'était un reflet du soleil couchant qui empourprait ainsi les deux nuages. Aujourd'hui je me suis réinstallé dans mon cabinet de travail que j'avais quitté depuis plus de quatre mois. Ce cabinet, beaucoup plus clair, beaucoup plus commode que ma chambre à coucher, va me voir bien travailler pendant les huit mois qui nous séparent des affreux quatre mois d'hiver. Il y fait jour (à quelques minutes près) de six heures du matin à six heures du soir, et Dupanloup (1851). Mais il reçut l'approbation du pape Pie IX. En 1859 il soutint avec ardeur la cause du pouvoir temporel, rompit avec le gouvernement, qui supprima « l'Univers ». quelqu'un qui n'est pas manchot peut abattre, en douze heures, pas mal de besogne. note en marge : Le dossier Guilleche a été expédié à Bordeaux le 12 de ce mois. mars Le comte de Gubernatis 423, professeur à l'université de Rome, nombre impressionnant de ballots qui formèrent la Collection Doat. Les 258 volumes de celle-ci reliés en maroquin rouge aux armes de Colbert ont été acquis par la Bibliothèque royale (future Bibliothèque Nationale) en 1732. En août 1669, Doat présenta sa note de frais : 42 000 livres, à laquelle vint s'ajouter un petit supplément de 9448 livres. Colbert paya sans mot dire mais il dut trouver cela cher car, lorsqu'on lui proposa de copier les actes du Limousin (Doat fut longtemps en procès avec la généralité de Limoges), de l'Auvergne et du Roussillon, il refusa : Roman d'Amat et Limouzin-Lamothe (R.) sous la dir. de, Dictionnaire de biographie française, Letouzey et Ané, Paris, 1967, t. XI, p. 407-408. 418 Guilloche (Jean) disait-on, mais on n'arrive pas. Désormais la pente sera moins terrible et l'inaccessibilité du pavillon ne sera plus proverbiale. Plus tard, il y aura encore quelques coups de pioche à donner pour rendre le chemin aussi peu escarpé que possible. Si, après cela, je puis aboutir à la route de La Bretonnie en achetant la pièce de terre qui m'en sépare, il y aura de beaux jours pour le vieux marcheur. mai Je viens de faire, tout au matin, autour de mon enclos, une de ces promenades qui sont aussi hygiéniques qu'agréables. Les oiseaux chantaient gaîment (sic) à plein gosier. Si mon oreille a été ravie par ces petits musiciens ailés, qui ne font jamais de fausses notes et qui ne coûtent rien, bien différents à ce double point de vue de tant de musiciens en redingote, mon odorat a été caressé par la bonne senteur des bois et mes yeux ont trouvé un charme indescriptible à contempler ces douces nuances de la verdure renaissante qui semble leur communiquer quelque chose de sa fraîcheur. Si rien n'est salutaire pour l'organisme comme un bain dans l'air vif et pur, rien n'est salutaire pour les yeux fatigués, comme l'aspect et, pour ainsi dire, l'imprégnation de la verdure. Comme des chevaux épuisés, il faut mettre au vert les yeux affaiblis par l'excès de travail. Gascogne, t. 28-29, 1933-1934, p. 13-18. 457 Voir 28 janvier 1891. Il s'agit de la fille de Gabrielle-Marie-Jeanne-Constance de Seguins (1844-1892) qui avait épousé, le 12 octobre 1865, Pierre-Louis-Marie de Gautier, marquis de Saint-Paulet, baron de l'Empire, magistrat : Authier (Michel)-Galbrun (Alain), État de la noblesse française subsistante (1940-1993), vol. 22, p. 215-217.
463 Près de Carpentras, à Aubignan, ce pittoresque vieux manoir illustré par l'un de ses anciens possesseurs, l'humaniste du XVI e siècle, Alexandre Scot : Tamizey de Larroque (Ph.), Deux testaments inédits : Alexandre Scot (1616), Jean-Jacques Bouchard (1661) Fondée en 1859 par M gr de Salinis, archevêque d'Auch, elle publia, jusqu'en 1940, date de sa disparition, la Revue de Gascogne, sous-titrée « Bulletin de la Société historique de Gascogne » : Bourgeat (chanoine Charles), Léonce Couture, érudit gascon, à propos du cinquantenaire de sa mort mars Je reçois de M me veuve H. Taine 529 une lettre de faire-part de avril Je reviens de Sainte-Bazeille où j'ai eu la douleur d'accompagner à sa dernière demeure ma pauvre cousine Amélie de Grammont, douairière de Bentzmann 540, morte le lundi matin 17 du même mois. Quoique cette perte fût prévue depuis plusieurs mois, car Amélie avait été frappée à mort au mois d'août dernier, j'ai été bien malheureux de voir disparaître à jamais ma chère paralysée. Quand on aime bien, on a toujours un peu d'espoir, malgré tout. Que de tristes souvenirs auprès de ce cercueil qui emportait une de mes plus chères amies d'enfance ! J'ai revu à travers mes larmes toutes les victimes fauchées par la mort, depuis que j'ai âge d'homme, dans cette famille sortie des Grammont où les femmes surtout ont été enlevées avant leur vieillesse, exceptée ma bien-aimée mère, ma tante de Boëry, ma soeur Marie, ma pauvre Nathalie, Marie et Charlotte de Grammont 541. Amélie n'avait pas encore 60 ans. Elle semblait devoir vivre long-temps, tant elle était robuste et bien conservée. À qui le tour maintenant ? À moi sans doute. Fourchon, rusé, menteur, envieux, insolent, qui fomente les haines, qui dirige toute la campagne, qui ne craint pas d'exposer au général les revendications menaçantes des villageois. Balzac prétend, dans ce livre, dénoncer « la conspiration permanente du paysan contre le riche ». 540 Voir 10 septembre 1890 : Marie-Amélie Delmas de Grammont, fille du général Jacques-Philippe Delmas de Grammont et de Marie-Anne de Boëry. Née à Miramont-de-Guyenne, en 1835, Marie-Amélie de Bentzmann a publié À la gloire du Sacré-Coeur de Jésus et pour son amour, Bordeaux, impr. Adrien Boussin, 1879, in-8°. juillet Le mois de juillet commence bien. Nous avons eu, hier, à l'ombre, 34 degrés. Cela promet. J'ai oublié de noter que, la semaine dernière, nous avons atteint 35 degrés, ce qui pour Larroque est formidable. Le soir de cette brûlante journée, étant assis sur ma terrasse, de 8 à 9 heures, j'eus l'admirable spectacle de trois orages simultanément allumés. Les éclairs se rejoignaient et embrasaient l'horizon tout entier. On eût dit de gigantesques serpents de feu qui s'entrelaçaient, se séparaient et s'unissaient de nouveau. Je Aujourd'hui j'ai eu la visite du jeune Maurice Calbet, le poète, et de son père 567. Ces messieurs m'ont apporté deux énormes dossiers qui proviennent de la bibliothèque de Boudon de Saint-Amans 568. Parmi ces manuscrits j'ai remarqué surtout les lettres autographes de Lacépède 569, de Lacroix de Gessac Bernard-Germain-Étienne Médard de La Ville de Las, seigneur de Lacépède. Né à Agen, en 1756, mort à Épinay-sur-Seine en 1825. Passionné dès l'enfance, pour les arts et les sciences, il se rendit à Paris et y fut chaudement accueilli par Gluck et Buffon. Il s'occupa d'abord de musique et publia, de 1781 à 1785, une Poétique de la musique, qui lui valut de pompeux éloges.
Mais, Buffon lui ayant fait donner la place de sous-démonstrateur du Cabinet du roi, c'est dans l'exercice de ces fonctions qu'il écrivit (1788-1789), l'Histoire générale et particulière des quadrupèdes ovipares et des serpents pour faire suite à l'Histoire naturelle des animaux de Buffon. Partisan des idées de la Révolution, Lacépède devint membre puis président de la Législative ; mais hostile à la Terreur, il se démit de ses fonctions au Muséum et se retira à la campagne. Rentré à Paris après le 9 Thermidor, il reprit le cours de ses travaux, et une chaire nouvelle fut créée pour lui au Muséum, celle des reptiles et des poissons. Admis à l'Institut en 1795, il publia, de 1798 à 1803, l'Histoire naturelle des poissons, puis, en 1804, l'Histoire naturelle des cétacés. Sénateur en 1799, président du Sénat en 1801, ministre d'État en 1804, Napoléon 1 er le fit comte et le nomma grand chancelier de la Légion d'honneur. Créé pair de France par Louis XVIII, rayé de la liste sous la seconde Restauration, après les Cent-Jours, il fut réintégré en 1819. La fin de sa vie se passa dans la rédaction d'une Histoire générale de l'Europe qui ne vit le jour qu'après sa mort. Outre les ouvrages déjà cités, Lacépède avait écrit une Histoire naturelle de l'Homme, Les âges de la nature et des Mémoires insérés dans les recueils de l'Institut ou du Muséum et dans divers journaux scientifiques. Ses OEuvres complètes d'histoire naturelle ont été publiées en 1826 et dans les années suivantes. Son Salon fut à l'origine de la création de l'Académie d'Agen : Lauzun (Ph.), op. cit., p. 5-8. Il s'agit probablement de Jean-Girard Lacuée, comte de Cessac (1752-1841), né à La Massas (Lot-et-Garonne), militaire et homme politique. Les écrits qu'il publia pour signaler les abus dans l'armée le firent appeler en 1789 au comité de réorganisation de l'armée. Envoyé par le Lot-et-Garonne à l'assemblée législative, il contribua au succès de Valmy. En 1793, il fut envoyé dans les Pyrénées pour y organiser la défense et entra deux ans plus tard au Comité de salut public. Sous l'Empire, Napoléon 1 er le nomma général de division et ministre de l'administration de la guerre. En 1814, il se rallia aux Bourbons. Il est l'un des fondateurs de l'Académie d'Agen : Lauzun (Philippe), op. cit., p. 8-9. 571 Bory de Saint-Vincent (Jean-Baptiste-Geneviève-Marcellin, baron de). Né à Agen, en 1778, mort à Paris en 1846. Naturaliste et géographe célèbre au début du XIX e siècle. Bien que les événements révolutionnaires aient interrompu sa formation, il avait déjà affirmé ses connaissances en histoire naturelle par l'envoi de deux mémoires remarquables à l'Académie de Bordeaux, quand il dut se rendre, à peine âgé de 19 ans, à l'armée de l'Ouest. Le général Brune l'apprécia promptement et le nomma sous-lieutenant. En 1800, pendant qu'il était à la tête d'un détachement occupant un petit fort de Belle-Île-en-Mer, le savant Lacépède le fit mettre au nombre des naturalistes attachés à l'expédition du capitaine Baudin autour du monde ; mais sa santé le força de relâcher aux îles d'Afrique. Il fut successivement attaché, de 1808 à 1814, aux corps d'armée des maréchaux Davout, Ney et Soult, et après la bataille de Toulouse, il commanda pendant quelques jours la garnison d'Agen. Il se trouvait dans cette ville, lors du passage du duc d'Angoulême.
Dès son arrivée au ministère, en 1815, Davout désigna Bory de Saint-Vincent comme l'un des 8 colonels chargés du dépôt de la Guerre, et peu après (3 juin), le département de Lot-et-Garonne le nomma son représentant à la Chambre. Sa fidélité aux souvenirs de l'Empire le fit comprendre dans la liste de proscription du 24 juillet 1815. Pendant 5 ans, traqué hors de C'est le sixième de la somme nécessaire. J'ai donc le droit de croire au succès. octobre Journée remplie de petits événements : On a achevé les semailles à Larroque par un temps aussi beau que jamais. J'ai cueilli un brin de lilas aussi odorant qu'en avril. J'ai planté trois pieds de malaga et trois pieds de muscat blanc qui m'ont été apportés de Tonneins par le brave Mazières. J'ai 602 Extrait de la Revue félibréenne, IX, 1893 : Pour Peiresc, S.V.P., aux bureaux de la Revue félibréenne, 1893, in-8°, 8 p. : Souscription lancée par Tamizey et inspirée par les méthodes commerciales et publicitaires de la fin du XIX e siècle pour restaurer le tombeau de Peiresc. 603 Voir 18 février 1893. 604 Sur le Comité des Travaux Historiques et Scientifiques qui reprend l'action menée par Arcisse de Caumont, au début du XIX e s., notamment dans le cadre de l'Institut des provinces, pour la renaissance et la fédération de sociétés savantes où les nobles sont nombreux : Carbonell (Charles-Olivier), Histoire et historiens, une mutation idéologique des historiens français 1865-1885, Privat, 1976, p. 241-242. -Bercé (Françoise), « Arcisse de Caumont et les sociétés savantes » dans Nora (Pierre) s.d., Les lieux de mémoire, Quarto, Gallimard, t. 1, 1997, p. 1545-1573. 605 Bibliothèque Méjanes fondée, à Aix-en-Provence, en 1786, par le marquis de Méjanes. Contenant environ 200 000 volumes, 1600 manuscrits et 400 incunables, elle était alors installée au 1 er étage à droite du grand escalier de l'Hôtel de ville, sur la place du même nom. eu la visite de M. Brissaud 607 qui m'a promis de faire à Toulouse une active propagande en faveur de mon projet de restauration du tombeau de Peiresc. novembre Hier, faisant ma promenade habituelle de chaque soir, entre cinq et six heures, j'ai entendu sonner en même temps toutes Hautevignes est situé à 4 km environ à l'Est de Gontaud, Labretonie à 7 km environ au Nord-Est, Agmé, à 2 km à l'Ouest de Labretonie au Nord de Gontaud, Puymiclan à 6 km environ au nord de Gontaud sur l'actuelle D 641, Saint-Pierre-de-Nogaret, est à 1 km environ à l'ouest de Gontaud : carte au 50 000 e, Marmande-Agen, 56, I.G.N., Paris, 2003.
609 Adolphe Magen (1818-1893), né à Agen, pharmacien, chimiste, Inspecteur des pharmacies de Lot-et-Garonne, correspondant du ministère de l'Instruction publique, secrétaire perpétuel de la Société académique d'Agen, directeur de la Revue de l'Agenais. Ses travaux et publications sont nombreux et variés, touchant à sa spécialité scientifique, à l'histoire et à la littérature. Une recension à peu près exhaustive en est donnée dans Andrieu (J.), op. cit., t. II, p. 97-102. -Tamizey souligne tout ce qu'il doit aux conseils d'Ad. Magen et à ses encouragements alors qu'il débutait sa carrière de chercheur ainsi qu'à la chaleur et à la constance de l'amitié qui les unissait, dans la plaquette nécrologique qu'il lui consacra : Tamizey de Larroque (Ph.), Adolphe Magen (1818-1893) des Archives historiques de la Gascogne 630, m'écrit que je n'ai jamais été mieux inspiré. Si c'est vrai, on pourra redire, à cette occasion, le mot antique : C'est le coeur qui nous rend éloquents, car j'ai écrit ces pages avec toute l'émotion de mon coeur. février Établissement d'une cressonnière dans le fossé qui va de la fontaine au bois de La Roche-Marais. On a mis là une magnifique espèce de cresson apportée de La Maratonne 631, aux feuilles larges et du plus beau vert. J'espère que la colonie deviendra aussi florissante que la mère-patrie, où de tout temps le cresson s'est développé avec tant de vigueur. Le cresson de la Maratonne est un des meilleurs souvenirs gastronomiques de ma jeunesse… surtout quand il entourait une des grasses poulardes que les métayers apportaient à mon père et qui faisaient partie de la rève 632. Je note qu'en ce commencement de février tout bourgeonne à vue d'oeil et qu'il y a déjà un peu de verdure à mes arbustes, surtout aux chèvrefeuilles. Voilà plus de quinze jours que les ajoncs, au bord de la lande que l'on défriche, se couvrent de fleurs. C'est, à certaines expositions, l'arbuste qui reste le plus (extr. du Bulletin de la Société archéologique du midi de la France, Séance du 5 janvier 1897) et de Jean-François Bladé, notice biographique et bibliographique, L. Cocharaux, Auch, 1904, in-4°, 48 p. (extr. du Bulletin de la Société archéologique du Gers). Voir Bourgeat (Chanoine Ch.), Léonce Couture, érudit gascon…, Imprimerie F. Cocharaux, Auch, 1953, p. 5, p. 11. -A.D. Lot-et-Garonne, 16 J 17, correspondance d'érudits : Fonds Tamizey de Larroque. 630 Constituée pour la publication de pièces inédites, sous la forme de fascicules gr. in-8°, formant chacun un travail spécial et complet et devant représenter chaque année un volume de 5 à 600 pages. Inspirée par les Archives historiques de la Gironde (voir 30 décembre 1891), cette collection est inaugurée par Jules de Carsalade du Pont, Documents inédits sur la Fronde en Gascogne, 1883, 202 p. Tamizey a publié dans le 3 e fasc. : Voyage à Jérusalem de Philippe de Voisins, seigneur de Montaut, en 1883 : voir « Archives historiques de la Gascogne, publiées sous les auspices de la Société historique de Gascogne » dans Revue de Gascogne, 1883, p. 95-97. 631 Carte 1738 Est Seyches, série bleue 1 : 25 000, I.G.N., Paris, 1987 : point coté 72 [286-3244]. 632 Au XV e siècle, la « rève » était un impôt qui se prélevait sur les marchandises à leur sortie du royaume. Elle fut, par la suite, transformée en « traite ». longtemps fleuri, car les mêmes ajoncs étaient encore chamarrés d'or en novembre dernier. Il n'y aurait donc éclipse de leur éclat que pendant les tristes mois de décembre et de janvier, les pires de l'hiver.
Jean-Marie-Hippolyte-Aymar d'Arlot, comte de Saint-Saud (1853-1932), généalogiste et historien : Roumejoux (A. de), Bosredon (Ph. de), Villepet (F.), Bibliographie générale du Périgord, Imprimerie de la Dordogne, Périgueux, 1899, t. 3, p. 92-97. -Chevé (Joëlle), La noblesse du Périgord, au pays des mille châteaux, Perrin, 1998, p. 357-358. -A.D. Lot-et-Garonne, 16 J 25, correspondance d'érudits : Fonds Tamizey de Larroque. février Hier, j'ai fait tailler mes arbres fruitiers et mes arbustes. J'ai constaté avec joie que nous n'avons, cette fois, ni beaucoup de morts, ni beaucoup de blessés. Mes chèvrefeuilles sont partout magnifiques, ceux du tombeau comme ceux de la fontaine, ceux qui voilent le kiosque comme ceux qui grimpent autour des colonnes de la terrasse et des ormes voisins du vieux chêne. Non content de soigner mes plantations déjà magistrature, pour le scrutin de liste, pour le maintien du budget des cultes et pour la politique des résultats de Gambetta. Réélu en 1885, il accepta le portefeuille de l'agriculture dans le cabinet Tirard, puis dans le cabinet Floquet, le conservant donc jusqu'en 1889. Il déposa en cette qualité plusieurs projets de loi, notamment concernant la réforme de l'administration forestière. Il vint, en visite officielle, en Gironde et dans le Midi, à l'été 1888 « pour étudier les diverses questions de la reconstitution et les besoins de la viticulture » : Pouverau (N.), op. cit., p. 97. En 1889, il se prononça notamment pour les poursuites contre les trois députés membres de la Ligue des patriotes et pour les poursuites contre le général Boulanger. Il était l'un des hauts dignitaires de la franc-maçonnerie. mars Nous avons eu aujourd'hui à déjeuner mon beau-frère Eugène Delmas de Grammont Philippe Lauzun (1847-1920) : né à Agen. Son père était maire de la ville voisine de Brax et conseiller général du canton de Laplume (Lot-et-Garonne). Il appartenait à la haute bourgeoisie agenaise. Sa mère était issue d'une très ancienne famille de Valence-sur-Baïse (Gers), apparentée à la meilleure société de ce coin de Gascogne (voir Lauzun (Ph.), Une famille agenaise, les Lamouroux, F. Lamy, Agen, 1893). Il fut élève du Lycée d'Agen puis fit son droit à Bordeaux puis à Paris, où il rencontra Batbie, professeur agrégé à la Faculté de Droit. Il terminait ses études quand éclata la guerre de 1870. Il quitta alors la capitale et se joignit aux mobiles du Lot-et-Garonne et gagna des galons d'officier. Après la guerre, Batbie ayant été élu député, puis devenant ministre de l'Instruction publique, prit Lauzun pour secrétaire. Celui-ci fut tenté par une carrière politique comme en témoigne l'une de ses premières plaquettes : Les députés du Lot-et-Garonne aux anciens États-Généraux et aux Assemblées modernes, mais il renonça finalement à poursuivre dans cette voie, en raison -paraît-il -de son caractère vif et intransigeant, quoiqu'étant unanimement apprécié pour ses manières exquises et son urbanité. Dès l'enfance, il avait manifesté un goût marqué pour l'histoire et les beaux-arts, il partagea désormais son temps entre la capitale et la Gascogne et des voyages en France et à l'étranger (voir Vingt-quatre heures au Mont-Cassin parmi ses publications). Suivant la mode de l'archéologie lancée par Arcisse de Caumont, il fut de toutes les grandes manifestations archéologiques et se montra particulièrement actif à la Société Académique d'Agen dont il devint membre résidant le 2 janvier 1872. Après le départ de son ami G.
Tholin, archiviste départemental du Lot-et-Garonne, il devint secrétaire perpétuel de cette Société. Il lui revenait le soin de l'organisation des réunions mensuelles et la préparation des numéros de la Revue de l'Agenais (voir son étude La Société Académique d'Agen 1776-1900, Picard, Paris, 1900). Habitant Valence-sur-Baïse plus au sud dans le département du Gers et lié au chanoine Carsalade du Pont, il participa à la fondation de la Société archéologique du Gers, inspirée de la Société agenaise. Il devint d'ailleurs président de la Société Archéologique du Gers, en 1901 -après le départ de J. de Carsalade du Pont pour l'évêché de Perpignan -Adrien Lavergne s'effaçant et le désignant aux suffrages. Ph. académique d'Agen. On a beaucoup bu, beaucoup mangé, beaucoup jasé, beaucoup ri. On a aussi beaucoup photographié : Jean de Boëry a croqué successivement mon vieux chêne, moi, Black, puis en bloc tous les convives qui étaient groupés, une fois autour de la table où, sous le châtaignier, on avait savouré le café et l'Armagnac, et une autre fois autour du chêne 647. Journée charmante éclairée par un joyeux soleil. Mon enclos semblait être tout entier en fête, car les pruniers fleuris formaient autour de nous une décoration magnifique. avril Journée passée -en grande partie -au grand air en l'aimable compagnie du colonel de Grammont et de deux savants périgourdins, le comte de Saint-Saud, mon collaborateur au Bessot 648, et M. de Masmontet, mon futur collaborateur au Bacalan 649. Nous avons croqué un pâté d'Amiens entouré d'un excellent gigot et de beaucoup d'autres plats qui ont été fort appréciés. J'ai été bien touché de la cordialité avec laquelle m'a abordé le comte de St-Saud. Comme je m'avançais à la rencontre de mes hôtes, il m'a gracieusement demandé la permission de m'embrasser et on eût dit, à nous voir en ce moment, non des confrères, mais de véritables frères. Nos chers Périgourdins ont été ravis de leur séjour ici et ils Lauzun a une abondante bibliographie à son actif, marquée par une prédilection particulière pour Marguerite de Valois (« la reine Margot ») et surtout pour les châteaux gascons, sur lesquels il rédigea des monographies en nombre accompagnées de clichés qu'il réalisait lui-même : Andrieu (J.), op. cit., t. 2, p. 68-69. -Notices nécrologiques dans Bulletin de la Société Archéologique du Gers, 1920, p. 13, p. 70 ; Revue de l'Agenais, 1920, p. 64-81 et 1921, p. 275. -A.D. Lot-et-Garonne, 16 J 17, correspondance reviendront, comme reviendront tous les ans M. Audiat 650, M. le chanoine Allain 651 et presque tous ceux enfin qui ont une fois tâté de notre hospitalité. avril Nous voilà en plein printemps, car tous mes peupliers ont revêtu leur belle robe verte -costume d'académicien ! -et tous mes lilas sont admirablement fleuris. Mais ce qui pour moi est le signe le plus certain, le meilleur, c'est le retour de mes chères hirondelles. Tout ce matin, elles exécutent de joyeuses croisières autour du pavillon. On dirait qu'elles sont contentes de me revoir, de se montrer à moi. Elles semblent me dire : Comme tes bons amis, nous te sommes toujours fidèles. avril Qu'il y a donc des moments heureux dans la vie, même dans la vie d'un homme qui depuis long-temps n'est plus jeune !
J'étais ce matin, à six heures, sur le balcon de mon cabinet, et j'ai éprouvé, pendant quelques minutes, une sensation délicieuse en laquelle se réunissaient mille petits plaisirs divers, le plaisir d'admirer un beau ciel déjà tout inondé de soleil, de suivre dans les airs les évolutions de mes hirondelles, de jouir de la fraîcheur de la brise qui agite doucement la cime de mes peupliers de plus en plus verdoyants, de sentir la suave odeur des giroflées groupées en masse dans graveur d'Avignon, 1657-1674, Caen, H. Delesques, 1896, 14 p. (extr. du Congrès archéologique de France, 1893). 689 Sur la route accidentée mais très pittoresque entre Carpentras et Apt, en vue de la cité de Venasque, située sur un promontoire rocheux, après un petit pont, on franchissait la Nesque par un gué à pied, pour gagner en deux minutes cette chapelle qui abritait la pierre tombale de Boetius, évêque de Venasque-Carpentras, mort en 604, très bel échantillon de sculpture mérovingienne. À côté se trouvait le prieuré de Saint-Maurice, en avant duquel, suivant la légende, une pierre porte l'empreinte du pied du cheval de Saint Maurice. 726 Daniel Campagne (1851-1914), descendant de la famille Chevalier d'Escage dont Ph. Tamizey de Larroque a publié le livre de raison, est né à Gontaud, élève de Falguière, auteur notamment du gisant du duc de Nemours, fils de Louis-Philippe, dans la chapelle royale de Dreux, de la statue de Fleurette à la garenne de Nérac et du monument funéraire de la famille Dolgorouki au cimetière du Père Lachaise à Paris [A.P. Baquier] : Angelo (Bruno d'), Bulletin de la Société archéologique et historique de l'Albret, n° 24, 2002, p. 6- Garonne. Maurice Boisvert, avocat, officier d'artillerie de réserve, siégea lui-même au Conseil général de Lot-et-Garonne, pour le canton de Seyches, s'occupant avec compétence des questions agricoles. « Après s'être incliné, en termes élevés, devant la volonté de la Nation », il refusa, en 1892, de solliciter le renouvellement de son mandat. Il habitait le château de Boisvert-Beaupuy, à 5 km. de Castelnau-sur-Gupie, à 3 km environ au nord-ouest de Beaupuy et 6 km. de Marmande dans la même direction [A.P. Baquier]. Il est le frère aîné du Docteur François Boisvert qui visite Tamizey le 6 janvier 1897. Philippe Tamizey de Larroque a publié le livre de raison de la famille de Boisvert, tenu entre 1650 et 1816 : Tamizey de Larroque (Ph.), Deux livres de raison de l'Agenais, L. Cocharaux, Auch, 1893. -Correspondance de Tamizey avec Maurice Boisvert : A.D. Lot-et-Garonne, 16 J 28, correspondance d'érudits : Fonds Tamizey de Larroque. 733 Dans la commune de Beaupuy (canton de Marmande, en Lot-et-Garonne), domaine de la famille : Livre de raison de la famille de Fontainemarie, 1640-1774, publié par Ph. Tamizey de Larroque, Agen, Impr. V ve Lamy, 1889, p. 11. 734 Il s'agit probablement de Jean-Baptiste Donatien de Vimeur, qui fut vicomte puis comte de Rochambeau (1725-1807), chef du corps de 6000 hommes envoyé par la France à l'aide des insurgés d'Amérique et qui participa notamment aux côtés de Washington à la prise de Yorktown en 1781, alors qu'il venait d'être nommé lieutenant général. Il reçut par la suite le bâton de maréchal de France, avant d'éviter de justesse la guillotine sous la Terreur. À moins qu'il ne s'agisse de son fils Donatien Marie Joseph de Vimeur, vicomte de Rochambeau (1750-1813), envoyé à St-Domingue en 1792, puis à la Martinique, il chassa les Anglais de cette île (1793), mais dut capituler l'année suivante.
Successeur de Leclerc à St-Domingue (1802), il dut se rendre aux mains des insurgés qui le livrèrent aux Anglais. Libéré en 1811, il fut tué à la bataille de Leipzig. Sans doute s'agit-il du fils de Nicolas-Jean Chirol (1848-…), ancien cocher et homme d'affaires de Philippe Tamizey de Larroque, qui fut, lui aussi, au service des Tamizey. Comme il y est fait ici allusion, Tamizey a publié une plaquette pour conserver le nom et le souvenir de l'humble journalier Justin-Jean Chirol que cite Audiat (L.), Ph. Tamizey de Larroque. Notice biographique, Impr. Texier, La Rochelle, 1898, p. 23. avait été parfait pour moi. Je répète avec une amère tristesse : les meilleurs s'en vont. - ------------------------------------------------J'ai abandonné, pendant plusieurs mois, mon livre de raison, tant j'étais triste et découragé. Non seulement j'ai beaucoup souffert de l'âme, mais aussi beaucoup souffert du corps. J'ai continué à travailler, mais sans enthousiasme et, pour ainsi dire, machinalement. Il me semble qu'en peu de temps j'ai vieilli de cent ans au physique comme au moral. Il y a comme une corde brisée en moi. Me relèverai-je jamais ? Je ne l'espère pas. Sans compter mes infirmités, j'ai trop de Célèbre dynastie d'imprimeurs et d'érudits entre le début du XVI e et le XVII e siècle (au cours duquel la famille s'éteignit). Les Estienne sont restés fameux par la qualité de leurs travaux avec, en particulier, Robert 1 er Estienne (1503-1559), grand humaniste, médecin et sympathisant de la Réforme (ce qui le conduisit à s'exiler à Genève en 1551). Il se spécialisa dans la publication de textes anciens grecs (caractères Garamond) et surtout latins. Il est notamment l'auteur d'un Thesaurus linguae latinae (1531-1532) qui a servi à tous les auteurs de dictionnaires latin-français. Son fils Henri II Estienne (1528-1598), polyglotte, parcourut l'Europe à la recherche de manuscrits grecs et fit paraître la première édition d'Anacréon. Il est l'auteur d'un dictionnaire reconnu pour l'un des plus intelligents que l'on ait jamais faits, le Thesaurus linguae graecae (1572-1573). Ses écrits lui valurent, par ailleurs, d'être mis en prison par le Conseil de Genève. À la génération suivante, les Estienne abjurèrent le calvinisme pour accéder aux fonctions d'imprimeur royal. 751 Il s'agit du président Aimar de Ranconnet, auteur avec Jean Nicot (v. 1530-1600) de l'un des premiers grands dictionnaires de la langue française, Thresor de la langue françoise, publié en 1606. 752 Marie de Raymond (1825-1886), descendait de Florimond de Raymond, premier éditeur, en 1592, des Commentaires de Monluc. Grande généalogiste, mécène, elle recevait, dans ses salons, les séances de la Société académique d'Agen : Tamizey de Larroque (Ph.), La comtesse Marie de Raymond, Impr. G. Foix, Auch, 1886 (extr. de la Revue de Gascogne) ; Lauzun (Ph.)
-Mardi, à cinq heures du matin, la paisible population de Gontaud a été réveillée par le tocsin. La maison qu'habite M me Tamisé de Laroque, était en feu. Tout a été détruit ainsi qu'une bibliothèque d'une grande valeur. L'incendie n'est pas encore complètement éteint. (La France de Bordeaux et du Sud-Ouest 757 , du 10 juillet 1895) GONTAUD. -Incendie.-L'incendie de mardi matin, qui menaçait de détruire tout un quartier, a été arrêté dans sa marche, grâce au concours dévoué de toute la population de Gontaud, des deux pompes de la ville et de celle de la commune de Saint-Pierre de Nogaret 758 . La maison de M. Tamisé 754 Il s'agit du chanoine Bernard Labenazie, prieur de l'église collégiale d'Agen qui vécut à la fin du XVII e siècle qui est l'auteur notamment d'une Histoire de la ville d'Agen et pays d'Agenois, suivie des Annales ou chronique agenoise…colligée par M. Darribeau de Lacassagne…., publiée par le V te Antoine-Godefroy de Dampierre, Tome 1 er , M lle A. Barrès, Saint-Nicolas-de-la-Balerme (Lot-et-Garonne), 1888. 755 Il peut s'agir de Jules-César Scaliger, philologue et médecin italien (1484-1558), mort à Agen où il s'était établi après y avoir suivi son patient Angelo della Rovere, devenu évêque de cette ville. Il polémiqua avec Érasme et Cardan et publia notamment des commentaires sur les oeuvres d'Hippocrate, Aristote et Théophraste. On publia en 1600 des Epistolae de sa plume. Son fils, Joseph-Juste (1540-1609), né à Agen, mort à Leyde, dont il est possible qu'il soit aussi question ici est l'un des plus grands philologues du XVI e siècle, professeur réputé à Genève, puis à Leyde, cible des attaques des jésuites. Ayant embrassé le protestantisme, il avait quitté, en effet, le royaume de France après la Saint-Barthélemy. On a publié de lui après sa mort : Poemata omnia (1615), Epistolae omnes (1627) et Lettres françaises inédites (1879). 756 François Maynard (v. 1582Maynard (v. -1646)), président du présidial d'Aurillac, il souffrit cruellement de cette sorte d'exil exprimant sa mélancolie dans ses poèmes, dans le style de Malherbe, publiés en 1646. Mais la pauvreté l'empêcha, même quand il entra à l'Académie d'abandonner son emploi. Ses Lettres (1653) permettent de se faire une idée de la vie qu'il menait, assombrie par les regrets d'une ambition mal satisfaite. Après la mort d'un de ses fils, il fit un voyage à Rome, en 1634, et y resta environ deux ans, attaché à la personne de Noailles ambassadeur de France. de Larroque a été seule détruïte avec le mobilier et une bibliothèque d'une valeur de plus de 50 000 F. Trois personnes ont été victimes d'accidents qui, heureusement, n'ont aucun caractère de gravité. (La France 759 du 12 juillet 1895.) juillet Je suis content de moi, je ne me laisse pas abattre. Je me montre plus énergique que je ne le pensais. J'ai eu le courage de me remettre au travail. Puisse cette force d'âme se maintenir sans la moindre défaillance ! Mais, chaque fois que le souvenir de la catastrophe se présente à moi -et combien souvent cela arrive ! -il me semble que je reçois un coup de couteau dans le coeur et que les flammes qui ont brûlé mes livres, me brûlent moi-même. Le jour de l'incendie, le vent soufflait dans la direction de mon pavillon. Aussi, sur tous les chemins qui sillonnent la région environnante, dans les champs et les prairies de mes voisins, on retrouve des feuilles nourcies de mes livres et de mes manuscrits. On eût dit dans l'air un nombre infini de papillons noirs. 
Messager d'un nouveau genre, le vent m'a ainsi apporté des débris de vieilles lettres où l'on voyait encore mon adresse. Mes pauvres papiers venaient ainsi me retrouver comme pour me dire un dernier adieu.

juillet
Ma soeur a passé, hier, quelques heures avec nous. Sa visite nous a fait du bien. Malheureuse elle-même, elle nous plaint d'autant plus. J'ai déjà reçu beaucoup de lettres de mes amis : de Vivie 764, de M lle Anna Brugère 765 qui cherche à me consoler avec une tendresse vraiment filiale.

Blaise, Cadillac, 1998. -La reconstitution des vignobles dans le canton de Cadillac, Rapports sur les travaux du comice agricole et viticole de 1884 à 1900, Bordeaux, Impr. G. Gounouilhou, 1900, réed. Association Saint-Blaise, Cadillac, 1999. Issu d'une famille originaire de Villefranche-de-Louchat, à la limite des départements de la Gironde et de la Dordogne, son père, médecin, fit une carrière d'homme politique et d'élu. Il épousa Odelly Baptiste, héritière du domaine de Loupiac-Gaudiet, sur la rive droite de la Garonne, en amont de Cadillac, dans un terroir réputé. Reinhold doit son nom à un chirurgien allemand qui le sauva, lui et sa mère, au moment de sa venue au monde très difficile. Bibliophile, fin helléniste et latiniste, sa fortune personnelle lui permit d'abord de se consacrer pleinement à des travaux savants d'histoire, de philologie et d'archéologie (il ne se maria que très tard, juste après la mort de sa mère qui avait choisi elle-même sa future épouse). Il édita notamment les oeuvres du poète et humaniste bordelais de la Renaissance, Pierre de Brach. Conservateur de la Bibliothèque de la ville de Bordeaux, il participa à la publication des Archives historiques de la Gironde et à la fondation de la Société des Bibliophiles de Guyenne et entra à l'Académie de Bordeaux. Durement frappé par les crises viticoles, dans sa propriété, à partir des années 1850, il devint l'un des promoteurs locaux des campagnes de lutte contre le fléau du phylloxéra et de la mise au point de nouvelles techniques de viticulture. Il mit au point en particulier une nouvelle méthode de taille de la vigne à laquelle Tamizey fait ici référence : Sahut (Félix), Exposé de la taille Dezeimeris, Société nationale d'agriculture de France, Typographie Chamerot et Renouard, Paris, 1892. Maire de Loupiac, il fut conseiller général du canton de Cadillac, sous l'étiquette « Républicain », à partir d'octobre 1877. Président de ce conseil général de 1894 à 1899, il démissionna à la suite de la controverse suscitée par le projet d'établir une École nationale de viticulture et d'oenologie à Cadillac. -A.D. Lot-et-Garonne, 16 J 11, correspondance d'érudits : Fonds Tamizey de Larroque.
763 Georges Tholin, né en 1843 à Amplepuis (Rhône), ancien élève de l'École des chartes, archiviste du département de Lot-et-Garonne depuis 1867. Il a terminé l'inventaire des Archives de Lot-et-Garonne, a dressé celui des Archives communales d'Agen. Sous sa direction la « Bibliothèque départementale » a pris une importance régionale. Il est l'auteur de travaux et de publications de textes portant essentiellement sur le patrimoine et l'histoire de l'Agenais durant la première modernité : Andrieu (J.), Bibliographie générale de l'Agenais, t. 2, p. 340-343. Une notice nécrologique lui est consacrée dans la Revue de l'Agenais, 1922, p. 195. -Tamizey de Larroque (Philippe), Madame la comtesse Marie de Raymond, Auch, G. Foix, 1886, p. 11. -A.D. Lot-et-Garonne, 16 J 1 et 16 J 26, correspondance d'érudits : Fonds Tamizey de Larroque.

juillet
Expertise commencée la veille, achevée aujourd'hui. Nous
avions assuré la maison, les meubles et les livres pour une valeur de 24,000 francs (maison, 11,000 ; mobilier, 6,500 ; bibliothèque, 6,500). On nous a donné 18,417.65 francs.
note en marge : Somme versée par la générale au Crédit Industriel et Commercial le 2 juillet.

juillet
Je reçois tous les jours des lettres de condoléance de mes amis de Paris et de province. Je reçois aussi des livres pour remplacer une partie de ceux que j'ai eu le malheur de perdre.

… été dévorée par les flammes. On comprendra tout ce qu'a de cruel et d'affreux pour notre ami la disparition de sa maison natale, toute remplie de souvenirs sacrés, que l'on se transmettait de père en fils depuis plusieurs générations. Mais avec la maison, avec les meubles, le linge, l'argenterie, ont été consumés la bibliothèque qui ne contenait pas moins de dix mille volumes rares et précieux, ainsi que la collection de titres de famille, d'autographes, de notes et documents rassemblés par l'illustre archéologue, durant cinquante années de patientes et laborieuses recherches. C'est là pour la science historique une perte irréparable.

Pierre Bayle (1647-1706), né au Carlat (Ariège), mort à Rotterdam. Fils d'un pasteur, il se convertit brièvement au catholicisme avant de revenir au protestantisme. Appelé à occuper une chaire de philosophie à Rotterdam, il publia, en 1682, son premier ouvrage important : Pensées sur la comète, dans lequel il s'élève contre les préjugés qui attribuent aux comètes quelque influence sur les événements de la terre. Passant en revue tous les excès commis par la superstition, il en conclut que l'athéisme, dans ses effets, se montre souvent moins funeste que l'idolâtrie. Un tollé général s'éleva contre Bayle. Les esprits s'échauffèrent plus encore quand, peu après, il fit paraître sa Critique de l'histoire du calvinisme du P. Maimbourg où les adversaires de la réforme étaient durement malmenés. Il s'ensuivit que les jésuites obtinrent du roi un ordre pour faire brûler le livre de la main du bourreau. Bayle répliqua en publiant, au moment de la Révocation de l'Édit de Nantes, Ce que c'est que la France toute catholique sous le règne de Louis le Grand (1685). Outre ces ouvrages inspirés par les événements, Bayle rêvait depuis longtemps de fonder une revue pour faire concurrence au Journal des Savants dont le patron était Denis de Sallo, conseiller au parlement de Paris. Il l'intitula les Nouvelles de la République des Lettres ; le premier numéro parut en mars 1684 et le dernier en février 1687. Bayle y analysait les ouvrages scientifiques, y donnait la biographie des grands écrivains disparus, le tout mêlé de digressions. Ayant, d'autre part, la passion de la tolérance, il était souvent blessé de la conduite qu'il voyait tenir aux protestants, parmi lesquels il vivait. Grand liseur, habitué à confronter toutes les opinions, il considérait qu'il était absurde de prétendre posséder toute la vérité et de l'imposer aux autres par la violence. C'est pourquoi il entreprit de formuler les principes généraux de la tolérance, dans un ouvrage qui s'élèverait au-dessus des querelles de secte et de doctrine, ouvrage en 4 volumes, que Bayle donnait pour traduit de l'anglais et imprimé à Cantorbéry : Commentaire philosophique sur les paroles de Jésus-Christ : « Contrain-les (sic) d'entrer » (1686-1688). Il demande que chacun soit libre de professer la religion qu'il croit vraie. Cette idée suscita les pires colères chez les protestants eux-mêmes.
Bayle dut essuyer les coups de son vieil ami, le philosophe Jurieu. Il se vit privé, en fin de compte, par les magistrats d'Amsterdam, de sa chaire de philosophie. Mais il ne s'émut pas trop de cette mesure. Ayant toujours vécu de peu, il crut pouvoir se suffire avec le travail de sa plume. C'est alors qu'il commença son Dictionnaire historique et critique (1695-1697), oeuvre qui fut vraiment le couronnement de sa carrière. Elle obtint un énorme succès (au moins 10 éditions avant 1760). Elle avait été entreprise avant tout pour combler les lacunes des dictionnaires antérieurs. Bayle, en effet, se donne mission de tout remettre en question. Sa prodigieuse érudition lui permet de passer en revue tous les problèmes de morale, de théologie et d'exégèse et même de littérature. Son Dictionnaire est un trésor de renseignements. N'acceptant que ce qui est fondé sur un fait indubitable, il applique à toute chose l'esprit historique. On le considère comme un précurseur de la critique moderne, même si son style diffus, sa passion du détail et de l'inconvenance dans une certaine mesure, appartiennent sans doute bien plus à la Renaissance qu'à son propre siècle : Labrousse (Élisabeth), Pierre Bayle, hétérodoxie et rigorisme, Albin Michel, Paris, 1996.

avec les remarques de (en blanc).

790 Prosper Marchand (1675-1756), né à Guise (Aisne), mort à La Haye. En 1698, il s'établit libraire dans la rue St-Jacques à Paris, à l'enseigne du Phénix. Protestant, il passa en Hollande en 1711 et s'établit à Amsterdam ; puis il quitta le métier de libraire et se livra à des travaux littéraires. En mourant, il légua tous ses livres et ses manuscrits à l'université de Leyde. Parmi ses publications, avec une Histoire critique de l'anti-Cotton [pamphlet anti-jésuite de l'époque d'Henri IV] (1738) et une Histoire de l'origine et des premiers progrès de l'imprimerie (1740), figure le Dictionnaire historique ou Mémoires critiques et littéraires (1758-1759).
791 Joseph Michaud (1767-1839), né à Albens (Savoie), il vint à Paris, en 1790, sous les auspices de Fanny de Beauharnais, et s'y fit une place dans le journalisme. Arrêté comme royaliste après Vendémiaire, il s'évada et revint en 1796. Il se rallia à l'Empire, devint membre de l'Académie française (1813), puis acclama le retour des Bourbons (1814). Censeur des journaux et député de l'Ain (1815), Michaud fut le principal rédacteur de La Quotidienne où il défendit la politique ultra-royaliste. Il fut élu membre de l'académie des inscriptions (1837). Son grand ouvrage, avec la publication en collaboration avec Poujoulat de la Collection des mémoires pour servir à l'histoire de France (1836-1844), 4° (plusieurs réed. 1862, 1869, 1872-73, 1882-83, 1886).
Il s'agit des Éloges des hommes sçavants tirés de l'histoire de de Thou [c'est-à-dire l'Histoire de mon temps -allant de 1543 à 1607 -de Jacques-Auguste de Thou (1553-1617), magistrat, président au Parlement de Paris] par Antoine Tessier, probablement dans l'édition de 1715, Th. Haak, Leyde.
809 Jacques Lelong, né et mort à Paris (1665-1721). Très jeune encore, il fut admis dans l'ordre de Malte, puis retourna à Paris, où il se fit oratorien (1686). Il devint bibliothécaire du séminaire de Notre-Dame-des-Vertus, puis de la maison de l'Oratoire à Paris. Parmi ses importants ouvrages bibliographiques : La Bibliothèque historique de la France, contenant le catalogue de tous les ouvrages qui traitent de l'histoire de ce royaume (1719).
Le Journal des Savants 814 (à partir du 1 er volume,

810 Jean-Pierre Niceron (1685-1738), frère du célèbre Père Jean-François Niceron, de l'ordre des Minimes, mathématicien, spécialiste d'optique notamment. Jean-Pierre, barnabite, est donc l'auteur des Mémoires pour servir à l'histoire des hommes illustres de la république des lettres (1727-1745), en 43 volumes, ouvrage rempli de curieux renseignements.
811 Il s'agit en fait de l'abbé Claude-Pierre Goujet, né et mort à Paris (1697-1767). Il entra dans l'ordre des Oratoriens. Peu après, il fut nommé chanoine de Saint-Jacques-de-l'Hôpital. Érudit de premier ordre et compilateur acharné, Goujet a laissé plus de 60 ouvrages. Parmi les principaux : Vies des saints (1730) ; Supplément au Dictionnaire de Moréri (1735) ; Suite de la bibliothèque des auteurs ecclésiastiques de Dupin (1736) ; Dissertation sur l'état des sciences en France depuis la mort de Charlemagne… (1737), couronnée par l'Académie des inscriptions ; édition du Dictionnaire de Richelet (1738) ; et surtout l'ouvrage ici mentionné, la Bibliothèque française ou histoire littéraire de la France en 18 vol. (1740 et suiv.), son oeuvre la plus connue qui s'étend jusqu'au XVIII e siècle. C'était un janséniste convaincu, fervent apôtre du diacre Pâris.
812 Joseph-Marie Quérard (1797-1865). Il fut d'abord commis de librairie à Rennes où il était né, puis à Paris (1817), enfin à Vienne. De retour à Paris en 1824, il y publia, en 10 vol., La France littéraire ou Dictionnaire bibliographique des savants, historiens et gens de lettres de la France, ainsi que des littérateurs étrangers qui ont écrit en français, plus particulièrement pendant les XVIII e et XIX e siècles, complétée plus tard par un t. 11 : Corrections et additions -Auteurs pseudonymes et anonymes dévoilés, t. 1 (1854-1857) et un t. 12 : XIX e siècle, t. 2 (1859-1864). Il donna ensuite le premier volume d'une publication contemporaine : La littérature française contemporaine (1839) ; mais il se brouilla avec son éditeur qui fit continuer le travail par Charles Louandre, Félix Bourquelot et Alfred Maury. Il entreprit bientôt la rédaction de ses importantes Supercheries littéraires dévoilées, qu'il publia de 1846 à 1854, puis fonda une revue mensuelle, Le Quérard, complément périodique de « la France littéraire » (1855-1856).
Chanoine Ulysse Chevalier. Fils du Dr Ulysse Chevalier, médecin militaire, il profita des conseils de Léopold Delisle, qui appréciait hautement ce jeune prêtre ordonné en 1867, devenu vite un maître. Professeur d'histoire ecclésiastique aux facultés catholiques de Lyon (1887) et d'archéologie au grand séminaire de Romans (1891), membre libre (1912) de l'Académie des inscriptions, il a publié d'innombrables travaux de bibliographie, de liturgie et d'histoire. Notamment : Répertoire des sources historiques du Moyen âge (1905-1907) ; Topobibliographie (1894-1903), vade-mecum du médiéviste ; un précieux Repertorium hymnologicum ; un Regeste dauphinois (1912-1920). Ses études sur le saint-suaire de Lirey-Chambéry-Turin (1900 et suiv.) et sur Notre-Dame-de-Lorette (1906)
Victor Gay (1820-1887) est l'auteur du Glossaire archéologique du Moyen âge et de la Renaissance, A. et J.
Picard, Paris, 1887-1928, 2 vol.
Monlezun (Jean-Justin), chanoine d'Auch, est l'auteur de monographies sur des sanctuaires et lieux de pèlerinages de l'actuel département du Gers ainsi que des Vies des saints évêques de la métropole d'Auch…, Impr. J.-A. Portes, Auch, 1800-1859, et de l'Histoire de la Gascogne depuis les temps les plus reculés jusqu'à nos jours, Impr. J.-A. Portes, puis Brun, Auch, 1846-1850, 7 vol. in-8°.
846 Abbé Joseph Barrère (1808-…), né à Mézin, mort à Agen. Il débuta en 1836 par la petite cure d'Asquets, devint, en 1838, aumônier des prisons de Nérac, et partit, en 1840, pour l'île Bourbon (La Réunion) où il fut nommé vicaire de la cathédrale St-Denis, puis curé de St-Paul. Rentré en France au commencement de l'année 1845 et chargé alors du vicariat de Mézin, il fut appelé, en 1850, à professer les mathématiques et l'archéologie au Petit Séminaire d'Agen. L'abbé Barrère était titulaire de la cure de Daubèze, chanoine honoraire depuis 1875, et desservit longtemps l'aumônerie du couvent de la Miséricorde d'Agen. Il était correspondant du ministère de l'Instruction publique et de la Société des Antiquaires de France. Il est l'auteur estimé de nombreux travaux historiques et archéologiques dont certains ont suscité la controverse. Dans ce dernier cas se trouvent : Histoire religieuse et monumentale du diocèse d'Agen, depuis les temps les plus reculés jusqu'à nos jours ; comprenant la partie des diocèses circonvoisins autrefois renfermés dans l'Agenais, impr. P. Noubel, Agen, 1855-56, 2 vol. in-8° ; …, Cahors, 1883-1886, 4 vol. in-8°.
851 Jean-Bernard Lafon dit Mary Lafon (1810-1884), né à La Française (Tarn-et-Garonne) et mort à Montauban. Bibliothécaire de la ville de Montauban, chevalier de la Légion d'honneur. Après ses études classiques faites à Montauban, il se rendit à Paris et débuta en 1833 dans La France Littéraire. Il collabora tour à tour à l'Institut Historique, au Musée des familles, à la Revue indépendante, au Moniteur, etc. et publia un grand nombre d'ouvrages : poésies, romans, théâtre, histoire littéraire, etc. « qui lui valurent quelques succès académiques, mais ne purent lui procurer une notoriété qu'il rechercha vainement toute sa vie », affirme J. Andrieu (op. cit., t. II, p. 122) dans la notice venimeuse qu'il lui consacre et qui souligne perfidement les avantages procurés à Mary-Lafon par la grosse fortune personnelle qui était la sienne. J. Andrieu dénonce tout particulièrement les critiques mal fondées que Mary-Lafon fit au poète agenais de langue gasconne Jasmin. Parmi ses publications : Histoire politique, religieuse et littéraire du Midi de la France depuis les temps les plus reculés jusqu'à nos jours, Maffre-Capin, Paris, 1841-1845, 4 vol. in-8°, dont un fragment fut publié à part sous le titre de Tableau historique et comparatif de la langue parlée dans le Midi de la France et connue sous le nom de langue romano-provençale, René, Paris, 1841, 56 p., réimprimé sous le titre de Tableau historique et littéraire de la langue parlée dans le Midi de France…, Maffre-Capin, Paris, 1842, in-18, 342 p. ; Monluc, Paris, 1842, in-12 ; il a dirigé le fascicule Guyenne et Gascogne de l'histoire des villes de France, publiée par A.
Guilbert, Furne, Paris, 1844-1849, 6 vol., in-8°, qui a eu une édition spéciale sous le titre Histoire des provinces de Guyenne et de Gascogne, Furne, Paris, 1853) ; enfin Mary-Lafon a laissé ses mémoires : Cinquante ans de vie littéraire, Lévy, Paris, 1882, in-12.
852 Jean-François Bladé (1827-1900) … -Louis, 1860, in-18 ; Contes populaires de la Gascogne, 1861, in-18 ; L'Espagne inconnue, 1861, in-18 ; Les Révolutions imminentes et l'attitude de la France, 1861, in-8 ; Histoire de l'Amour dans l'antiquité, 1862, in-18 ; L'Amour dans les temps modernes, 1863, in-18 ; Dictionnaire gascon-français, 1863, in-8 ; Les Richesses des Pyrénées françaises et espagnoles, 1864, in-8 ; Le Colporteur des Pyrénées ou les aventures de Pierre Ardisan, 1866, in-18 ; Histoire du caractère et de l'esprit français depuis les temps les plus reculés jusqu'à la Renaissance, 1867-1868, 3 vol., in-18 : Bulletin de la Société archéologique du Gers, 1973, p. 357.
857 Abbé Jacques Baurein (1713-1790), auteur des Variétés bordelaises ou essai historique et critique sur la topographie ancienne et moderne du diocèse de Bordeaux, éd. originale, chez les frères Labottinière, Bordeaux, 1784-86, 6 vol. in-12 ; réed. du marquis de Castelnau d'Essenault, Féret et fils, Bordeaux, 1876, 4 vol. in-8°.
858 Il s'agit peut-être d'Auguste-Joseph Bernard, né à Montbrison en 1811 (frère cadet de l'activiste républicain Martin Bernard). Il fit au collège de sa ville natale des études incomplètes. Forcé d'embrasser la profession de son père, imprimeur, il vint à Paris en 1828 et fut employé dans la maison Didot. Il entra ensuite à l'Imprimerie royale, où il ne tarda pas à devenir correcteur. En même temps, il occupait tous les moments dont il pouvait disposer à rassembler des matériaux sur l'histoire, les hommes et les usages de quelques provinces du Centre. C'était l'un des membres les plus actifs de la Société des Antiquaires de France. À la fin de 1862, il avait été nommé inspecteur général de la librairie et de l'imprimerie.
Tarn-et-Garonne) ; Documents Historiques sur le Tarn-et-Garonne. Diocèses, Abbayes, Chapitres, Commanderies, Eglises, Seigneuries, etc., t. I à III, Impr. Forestié neveu, Montauban, 1879-1884, 3 vol. ; Corbarieu et
Normale Supérieure (1886) ; secrétaire de la rédaction, puis directeur de la Revue des Deux Mondes (1893), membre de l'Académie française (1893). Il a publié des ouvrages de critique littéraire qui ont eu un grand écho et une autorité certaine : plusieurs séries d'Études critiques (8 vol., 1880-1907) ; Questions de critique, nouvelles questions de critique (1890) ; Le roman naturaliste (1883) ; L'évolution de la poésie lyrique (1894) ; Les époques du théâtre français notamment.
898 Il s'agit donc de Louis Mézières (1793-1872). Né à Paris, élève de l'École Normale Supérieure, docteur ès lettres (1816), il professa la rhétorique en province, puis fut recteur à Metz. Il est notamment l'auteur de Leçons anglaises de littérature et de morale (1823) ; Histoire critique de la littérature anglaise (1834) ; Éloge de l'économie (1851) ; Jugements, maximes et réminiscences (1857). Il est le père d'Alfred Mézières (1826-1915), lui-même élève de l'École Normale Supérieure et de l'École d'Athènes, docteur ès lettres (1853).
Il devint professeur de littérature étrangère à Nancy (1854…
Le chanoine Charles-Auguste Auber (1804-1892), né à Bordeaux, fit ses études théologiques à Poitiers, enseigna au petit séminaire de Montmorillon, puis dirigea l'école cléricale de Poitiers et fut nommé historiographe du diocèse et chanoine honoraire en 1842. Membre dès l'origine de la Société des antiquaires de l'Ouest, correspondant de la Société des antiquaires de France, membre de nombreuses sociétés savantes, infatigable travailleur qui, au cours de sa longue existence, se constitua une riche bibliothèque que le monastère de Ligugé s'empressa d'acquérir après sa mort. Il a publié l'Histoire générale, civile, religieuse et littéraire du Poitou, Impr. de L.P. Gouraud, Fontenay-le-Comte, 1885-1893, 9 vol. in-8°. Voir sur le chanoine Auber : Semaine religieuse du diocèse de Poitiers, 13 novembre 1892. -Bulletin de la Société des antiquaires de l'Ouest, II e série, t. IV, p. 144 et 225. -Balteau (J.) -Barroux (M.) -Prevost (M.), Dictionnaire de biographie française, Paris, 1939, t. III, p. 1486-1487. -A.D. Lot-et-Garonne, 16 J 3, correspondance d'érudits : Fonds Tamizey de Larroque.

L'Histoire des Empereurs de Le Nain de Tillemont 923 (in-4°). L'Histoire universelle de César Cantu 924 (19 vol. in-8°).

Ainsi L'Histoire des empereurs et des autres princes qui ont régné pendant les six premiers siècles de l'Église… (Paris, 1690-1738, 6 vol.), monument réalisé solitairement par Le Nain de Tillemont, conserve encore au début du XXI e siècle, malgré les immenses progrès dans le champ des sciences historiques et philologiques, sa place parmi les usuels des bibliothèques et son utilité pour les historiens de la fin de l'Antiquité et des débuts du christianisme. La couleur de piété de cet ouvrage lui valut, certes, des critiques, notamment des jésuites de Trévoux et surtout de l'historien anglais Edward Gibbon, son principal utilisateur au XVIII e siècle ; ces lecteurs refusant, quoiqu'éblouis par la science et la conscience de l'auteur, de partager ses convictions théologiques. Ce caractère religieux de l'oeuvre était pourtant essentiel dans l'esprit de Tillemont qui entendait perfectionner les travaux de Baronius, rectifier ceux d'Usher, de Dodwell, de Bull, dans le dessein d'établir indiscutablement les titres de l'Église romaine, la perpétuité de la foi des Apôtres jusqu'à l'époque moderne. La conjugaison de cette perspective avec une critique quasi « positiviste », tendant parfois à l'hypercritique, caractérise l'oeuvre de Tillemont. C'est certainement l'une des dernières à refléter une manière d'équilibre entre recherche historique et certitudes dogmatiques.

L'Histoire des ducs de Bourgogne par P. de Barante 928. Les oeuvres complètes d'Augustin et d'Amédée Thierry, de Jean

927 Alexis-Amans Monteil (1769-1850), né à Rodez, mort à Paris. Fils d'un avocat, il se fit remarquer, dès le collège, par la tournure indépendante et sérieuse de son esprit. Tourmenté par le désir de voir Paris, il tenta vainement de décider son père à l'y envoyer et s'engagea, pour arriver à ses fins, dans un régiment qui se dirigeait sur la capitale, mais son père fit casser son engagement et il revint à Rodez, où il continua seul ses études. En 1789, il fut nommé secrétaire du district de Rodez, puis professeur d'histoire à l'école centrale de Rodez, puis à l'École militaire de Fontainebleau. Mais, au bout d'un an, Monteil se démit de ce poste.
Le milieu militaire dans lequel il était obligé de vivre ne pouvait convenir à un penseur peu enthousiaste de la gloire des armes. Il vint alors à Paris et finit, après quelques années de vaines sollicitations, par obtenir l'emploi de bibliothécaire secrétaire archiviste à l'École militaire de St-Cyr, fonction qu'il conserva jusqu'en 1819, date de suppression de cette école. À cette date, Monteil alla s'établir à St-Germain-en-Laye, puis à Versailles et enfin à Passy. Il vécut à cette époque en achetant un peu partout, à bas prix, d'anciens manuscrits ou des livres rares qu'il revendait soit à des amateurs, soit à des bibliothèques publiques. Pendant ce temps, il n'avait cessé de s'occuper de son Histoire des Français des divers états aux cinq derniers siècles, dont il publia les deux premiers volumes en 1827. Cet ouvrage, aussi remarquable qu'original, n'obtint point, à sa parution, le succès qu'il méritait et fut l'objet d'assez vives critiques. Lorsqu'éclata la révolution de 1830, qu'il salua avec enthousiasme, le travail auquel Monteil consacrait tous ses loisirs n'était point encore achevé. En 1832, l'Académie des sciences morales et politiques ayant été rétablie, Monteil s'y présenta, mais ne fut point élu et cet échec lui fut très pénible. L'Académie française lui décerna plusieurs fois l'un des prix Montyon destinés aux meilleurs livres traitant de l'histoire de la nation. Tenu pour l'un des plus remarquables historiens du début du XIX e siècle, retiré, dans ses dernières années, dans une petite masure dans un village près de Fontainebleau, il mourut pratiquement dans la misère. Il avait publié : De l'existence des hommes célèbres dans les républiques (1799) ; Description du département de l'Aveyron (1801) ; Traité de matériaux manuscrits de divers genres d'histoire (1835) et surtout l'ouvrage que possédait donc Tamizey, l'Histoire des Français des divers états aux cinq derniers siècles (1827 et années suiv. ; 2 e éd. 1842-1844).

La Mennais ou Lamennais (Félicité-Robert de) (1782-1854), fils d'armateur malouin anobli, La Mennais naquit avec une santé fragile et manifesta très tôt une nervosité impulsive. Il fut élevé par un oncle érudit et subit l'influence religieuse de son frère aîné Jean-Marie. Une retraite de plusieurs années (1805-1807, 1810-1814) dans la propriété de La Chesnaye décida de sa vocation religieuse et accentua encore le caractère intransigeant de ses convictions. En 1808, les Réflexions sur l'état de l'Église de France pendant le XVIII e siècle et sur sa situation actuelle, interdites par la police impériale, furent la première manifestation de ses tendances ultramontaines. Professeur de mathématiques au Petit séminaire de St-Malo (1804-…, 1808-1810), fondé par son frère, il prit parti pour la Restauration. Craignant sans doute des poursuites pour un ouvrage sur la Tradition de l'Église… (1814), peu conforme aux conceptions napoléoniennes, il s'exila à Guernesey pendant les Cent Jours, puis s'installa à Paris, dans la communauté des Feuillantines. En 1816, il reçut l'ordination à Vannes. Son Essai sur l'indifférence en matière de religion (1817-1823) le rendit brusquement célèbre et fit de lui l'un des auteurs les plus représentatifs de l'ultramontanisme. Il précisa et durcit encore ses positions dans Défense de l'Essai… (1821).
Au retour d'un voyage à Rome, il publia dans le journal ultra-légitimiste, Le Drapeau blanc, une série d'articles contre l'esprit de l'Université royale pourtant dominée par le clergé. En 1825, des poursuites contre son traité De la religion considérée dans ses rapports avec l'ordre politique et social, préconisant la subordination du pouvoir temporel au catholicisme, accrurent encore sa notoriété. En 1828, il se dressa contre les ordonnances de M gr Frayssinous parce qu'elles limitaient l'activité des séminaires. Il se faisait ainsi le promoteur d'une séparation de l'Église et de l'État, et réclamait pour le catholicisme une entière liberté d'action. Dans cet esprit, il créa, après la révolution de 1830, le journal L'Avenir et l'Agence générale pour la défense de la liberté religieuse, en liaison avec Lacordaire et Montalembert. Ils menèrent côte à côte, pendant plusieurs mois, le combat contre la politique de la monarchie de Juillet, accusée d'exercer sur le catholicisme une tutelle inadmissible, de refuser la liberté d'enseignement et de limiter l'autorité pontificale sur l'Église de France. En dépit de ces tendances très ultramontaines, Lamennais ne put obtenir de la papauté l'appui escompté ; ce fut au retour d'un voyage à Rome, où il n'avait reçu de Grégoire XVI que des encouragements ambigus, qu'il apprit la condamnation de ses idées sur la liberté de l'Église par l'encyclique Mirari vos (1832). Après deux ans de retraite, sa désillusion s'exprima par la rupture : en 1834, les Paroles d'un croyant parurent sous la signature de Lamennais. Cet ouvrage, condamné par l'encyclique Singulari nos (25 juin 1834), contribua à répandre dans les milieux libéraux une sentimentalité catholique et à modifier l'attitude des républicains et des premiers socialistes à l'égard d'une Église trop étroitement associée au pouvoir. Cette orientation s'accentua encore avec les ouvrages qui se succédèrent, affirmant encore davantage le caractère humanitaire d'une pensée à la fois mystique et démocratique : c'est-à-dire Les Affaires de Rome (1836-1837), Le Livre du peuple (1838), L'Esclavage moderne (1839), Le Pays et le gouvernement (1840). Ce dernier ouvrage valut même un an de prison à l'auteur qui incriminait aussi bien l'égoïsme bourgeois du régime que l'abandon par l'Église catholique de ses principes initiaux. La révolution de 1848 fit de La Mennais un représentant du peuple à l'Assemblée constituante et le directeur de l'éphémère journal Le Peuple.

p. 454, 533, 567. -Annuaire-bulletin de la Société de l'histoire de France, 1886, p. 82. -A.D. Lot-et-Garonne, 16 J 4, correspondance d'érudits : Fonds Tamizey de Larroque.
939 Auguste-Honoré Longnon (1844-1911) …
Henri Wallon (1812-1904), élève de l'École Normale Supérieure (1831), agrégé d'histoire (1834), il fut nommé maître de conférences à l'École Normale Supérieure (1840) et suppléant de Guizot à la Sorbonne (1846). Son ouvrage L'Esclavage dans les colonies (1847) lui valut de devenir secrétaire de la commission de l'esclavage (1848), député suppléant de la Guadeloupe à la Constituante, représentant du Nord à l'Assemblée législative (1849) ; il vota pour l'expédition de Rome, mais démissionna à la suite de la loi du 31 mai 1850, restrictive du suffrage universel.
Professeur d'histoire à la Sorbonne, il fut élu membre de l'Académie des inscriptions et belles-lettres (1850). Député du Nord à l'Assemblée nationale (février 1871), il siégea au centre-droit, fit voter, au moment de la libération anticipée du territoire, un ordre du jour portant que « Thiers avait bien mérité de la patrie », mais contribua à renverser le Président au 24 mai 1873. Il se fit remarquer par son intervention dans la discussion relative aux lois constitutionnelles en 1874 et 1875, qui se révéla décisive. En effet son nom est attaché à l'« amendement Wallon », adopté le 30 janvier 1875 (Le président…)

L'Agriculture et maison rustique… avec un brief recueil des chasses, plus la fabrique et usage de la jauge, avec une instruction pour sçavoir en quel temps… on doit semer et replanter… par Charles Estienne (1504 ?-1564) et Jean Liébault, qui tient à la fois du traité d'agronomie, du guide pratique et de l'almanach, est un grand succès de librairie du début de l'époque moderne qui a fait l'objet de nombreuses éditions (1564, 1583, 1586, 1637, 1640-1641, 1658, 1680, 1689).

Combien Lorédan Larchey 950, que j'ai connu jadis à l'Arsenal et…

948 Auguste Geffroy (1820-1895), élève de l'École Normale Supérieure en 1840, professeur à la Faculté des lettres de Bordeaux (1852), puis de Paris (1872). Membre de l'académie des sciences morales à partir de 1874 et directeur de l'École française de Rome de 1875 à 1882, puis de 1888 à 1895. Spécialiste de l'histoire scandinave : Gustave III et la cour de France (1857).
qu'il lui consacra : Tamizey de Larroque (Ph.), Adolphe Magen (1818-1893)

… complètement détruit sa riche bibliothèque de 6000 volumes, livres rares, précieux, introuvables, qu'il avait mis sa vie à collectionner. De plus, avec ses livres, instruments de travail, ont péri tous ses manuscrits, toutes ses notes, résultat de ses longues recherches dans les bibliothèques, trésors d'érudition, fruit de 50 ans de labeur qu'il mettait d'ailleurs si libéralement à la disposition de chacun. Il faut, dans la mesure du possible, réparer l'irréparable. Que chacun fouille dans sa bibliothèque, lui envoie quelques ouvrages, à G-.L. et G. en gare de F (sic). Il y a une chose qui, dans cet affreux malheur, est une consolation : ce sont les témoignages de sympathie adressés par des amis souvent inconnus. Un érudit si fécond, si universel et si généreux, qu'on a toujours trouvé disposé à rendre service, qui passe sa vie à travailler pour les autres, mérite bien qu'on lui vienne en aide ; qu'on récompense un peu son zèle ardent pour la science historique. La Société des Archives a commencé. Nous faisons donc un pressant appel à nos lecteurs, à nos collaborateurs. Il tendait, il y a quelque temps, la main pour Peiresc ; nous la tendons aujourd'hui pour lui. On aura la satisfaction d'obliger un fort aimable homme, un coeur excellent, un érudit consciencieux, un travailleur infatigable, que le malheur frappe si cruellement. La charité, s.v.p.

Victor Cousin (1792-1867). Fils d'un ouvrier joaillier, après de brillantes études secondaires au lycée Charlemagne à Paris, il entra en 1810 à l'École normale supérieure, où il subit l'influence durable de Pierre Laromiguière ; celui-ci l'initia à la pensée de Locke et de Condillac, lui apportant la révélation et le goût de la philosophie qu'on n'enseignait pour ainsi dire pas à cette époque dans les lycées impériaux. En 1813, âgé de 21 ans, il est chargé de conférences de philosophie à l'École normale.
Il est alors remarqué par Royer-Collard, chef politique des doctrinaires et spécialiste de la philosophie écossaise ; en 1815, Cousin devient son assistant à la faculté des lettres. Après quelques mois d'enseignement, il entreprend un voyage en Allemagne afin d'y rencontrer les philosophes contemporains : Hegel, Schelling, Jacobi. Le projet lui en est venu en grande partie sous l'influence de Maine de Biran qui l'avait notamment incité à lire Kant et avait attiré son attention sur l'importance de la philosophie allemande. À Heidelberg, en 1817, puis à Munich, en 1818, il noue avec les grands philosophes précédemment évoqués des contacts qui l'inclinent probablement à cet éclectisme qui devient bientôt la caractéristique de sa pensée. En 1820, au moment de la réaction qui suivit l'assassinat du duc de Berry, Cousin, en raison de ses idées libérales, est privé de son poste d'assistant à la faculté des lettres. En 1822, l'École normale supérieure est fermée. C'est alors qu'il entreprend ses grandes éditions de Proclus et de Descartes, suivies, plus tard, par celles d'Abélard et de Maine de Biran, ainsi que de sa traduction des oeuvres de Platon. Lors d'un nouveau voyage en Allemagne, en 1824, il est arrêté à Dresde pour un motif politique et passe environ 6 mois en prison à Berlin ; pendant cette période, il lit Kant, Fichte, Jacobi et écrit ses Fragments philosophiques (1826). Libéré grâce aux interventions de Hegel, il revient en France, où cette aventure lui vaut beaucoup de sympathie dans le parti libéral. Le soutien de ce parti lui vaut, en 1828, une chaire d'histoire de la philosophie à la Sorbonne. Ainsi parvient-il au sommet de sa carrière proprement professorale. Les principes de la révolution de juillet 1830 coïncidant parfaitement avec les opinions politiques de Cousin, sa carrière s'en trouva favorisée : il fut, coup sur coup, nommé conseiller d'État, membre du Conseil de l'instruction publique (1830), membre de l'Académie française (1831), membre de l'Académie des sciences morales et politiques (1832), pair de France (1832), directeur de l'École normale (1834), ministre de l'Instruction publique dans le gouvernement d'Adolphe Thiers, où il reste 8 mois. C'est lui qui inspire alors la réforme de Guizot sur l'enseignement primaire (1833) : un voyage en Allemagne, en 1831, lui a fourni, en effet, l'occasion d'étudier le système scolaire de ce pays et de publier un Rapport sur l'état de l'instruction publique dans quelques pays de l'Allemagne et particulièrement en Prusse (1833). En 1844, sa controverse avec le parti catholique sur la laïcité de l'enseignement et l'enseignement de la philosophie est une nouvelle victoire, mais la dernière, car la révolution de 1848 met fin à sa carrière politique ; il est contraint, après le coup d'État de 1851, à prendre sa retraite. Il se livre alors à ses recherches philosophiques. L'idée fondamentale de Cousin, qui se caractérise par son éclectisme, consiste à reprendre à chaque système philosophique ce qu'il a de plus valable. Néanmoins ses critères restent vagues et sa pensée peu cohérente. À côté de ses oeuvres philosophiques, V.

… est confondue, peignent sur le vif les goûts et les moeurs de ces hautes et puissantes châtelaines. » Je parlais tout à l'heure des quatre grands périodiques parisiens qui reçoivent mes communications, mais il faut que j'en nomme un 5 e, la Revue
mai
Hier, j'enregistrais la mort d'une jeune fille ; aujourd'hui j'ai à enregistrer la mort d'un savant prêtre, l'abbé Breuils 1022, emporté soudainement dans toute la force de l'âge,

1020 Roman, publié en 1836, par Xavier Saintine (1798-1865). C'est l'histoire d'une fleur. Le comte de Charney, pour avoir conspiré contre Bonaparte, a été enfermé au château de Fénestrelle, en Piémont. Il découvre entre deux pavés de la cour de sa prison une plante qu'il baptise « Picciola », c'est-à-dire « pauvre petite ». Il surveille avec émotion la venue du bouton, puis la fleur. Charney revient à la liberté ; mais Picciola, qu'il a transplantée dans le parc de son château, meurt faute de soins. Grand succès, ce roman fut traduit dans de nombreuses langues et fut récompensé en France par le prix Montyon de trois mille francs (1837).
Gascogne, t. XIV, 1873, p. 231-237, 274-279, 421-425, 466-473, 527-527) (1776-1900), Paris, Picard, 1900, p. 165-166 et p. 18, 58, 61, 90. -Généalogie de la famille de Godailh par la comtesse Marie de Raymond, n° 4 et 42. -Voir 9 août 1889.

… délicieusement du rayon de soleil qui darde dans mon cabinet et du bouquet de violettes, posé sur mon pupitre, qui embaume toute la pièce. Le printemps, qui va venir, achèvera de me rendre le calme et la force qui m'ont tant manqué depuis plus de deux mois.

février
Je viens de faire (entre onze heures et midi) ma première bonne promenade de cette année. La température était très douce, très agréable ; le soleil était de la partie ; la boue avait presque entièrement disparu. J'ai eu du plaisir à voir les bourgeons de mes lilas et la naissante verdure de mes chèvrefeuilles.

février
J'ai fait acheter aujourd'hui à la foire de Gontaud une douzaine de pruniers à 65 centimes ; on les plantera dans la pièce des quatre journaux. Il a été convenu, à l'occasion de cet achat, que mon métayer établirait, en vue des futures plantations, une pépinière de pruniers dans un coin du jardin.

février
Hier, déjeuner donné en l'honneur de ma guérison, sous la présidence de ma soeur, à mon cher médecin, à son frère et à M. Raymond Bazin 1097. La petite fête a été charmante. On a fort apprécié le pâté chaud (l'antique « tourtière » 1098), le dindon rôti et les beignets.

mars
Nouvelle tempête qui a duré trois nuits et deux jours. Nouvelle dégringolade des tuiles de la toiture du pavillon.

mars
Le premier jour du printemps a été magnifique. Puisse-t-il être suivi de beaucoup de jours semblables ! Jamais le pauvre campagnard que je suis n'a mérité autant de dédommagements. Jamais il n'a autant savouré le charme de l'arrivée de la belle saison. Avec quel enthousiasme, dans mes promenades enfin sans boue, j'ai salué aujourd'hui le resplendissement du soleil dans un ciel sans nuages, la fraîcheur de la verdure de mes arbustes, le blanc éclat des fleurs, des cerisiers, des poiriers, de l'aubépine ! Il y a déjà des petites feuilles aux peupliers plantés, pendant l'automne, dans la prairie. Ils ont été si arrosés ! J'ai, depuis quelques semaines, la visite assidue de chardonnerets qui viennent manger sur ma fenêtre les miettes de pain que j'y mets chaque matin.

(cela constituait un groupe impayable), j'ai lu jusqu'au soir des journaux, des revues, des catalogues de librairie, enfin la moitié d'un spirituel volume de Paul Bourget 1165. En somme, journée charmante et qui m'a dédommagé de tant de sombres journées de l'hiver dernier. Que mes saints patrons obtiennent pour moi le plus possible de semblables consolations !
mai
Je suis allé remplir mon devoir de bon citoyen à la mairie de Saint-Pierre de Nogaret
--------------------
Le général Jacques-Philippe Delmas de Grammont est à l'origine de la loi sur la protection des animaux : votée le 2 juillet 1850, elle punit d'une amende de 5 à 15 francs et d'un emprisonnement de un à cinq jours « ceux qui auront exercé publiquement et abusivement de mauvais traitements envers les animaux domestiques ». Clément, « La loi Grammont, répressive des mauvais traitements envers les animaux domestiques », Bulletin de la S.P.A., 1860, p. 289-303 ; Pierre (Éric), Amour des hommes, amour des bêtes : discours et pratiques protectrices dans la France du XIX e siècle, Thèse de doctorat nouveau régime, dactylographiée, Université d'Angers, 1998, p. 99-115 ; Tamizey de Larroque (Ph.)…

Le Livre de raison d'un campagnard de Philippe Tamizey de Larroque (1828-1898) a été établi à partir de la copie déposée par Claude Maisani (né le 10 septembre 1910), Inspecteur d'Académie en résidence à Agen de 1952 à 1975 1, aux Archives départementales de Lot-et-Garonne 2. Claude Maisani a raconté lui-même comment il avait pu avoir entre les mains ce Livre de raison, par l'entremise de son ami et collègue Ernest Entz (né le 8 octobre 1912), Inspecteur d'Académie dans les Basses-Pyrénées en résidence à Pau de 1956 à 1975 3, marié à une parente de la nièce d'Isaure Rumeau, épouse d'Henri de Tamizey de Larroque, fils de Philippe et dernier de sa lignée, mort sans postérité 4. On a perdu la trace de ce texte original. La version dactylographiée des Archives Départementales de Lot-et-Garonne en respecte visiblement la mise en page. On peut vérifier l'authenticité et la conformité de cette version à l'original -et la supputer intégrale -en la comparant aux extraits donnés par Léonce Couture dans sa nécrologie de Tamizey de Larroque qui date de 1898 5 et qui y correspondent parfaitement. On constate des erreurs de graphie dans la version des Archives Départementales de Lot-et-Garonne, notamment sur les noms propres. Elles sont imputables à des fautes de frappe, mais aussi aux problèmes oculaires

1 Dossier de carrière coté 19820668/229 au Centre des Archives Contemporaines de Fontainebleau (Seine-et-Marne)

… », un tiret entre « très » et l'adjectif qui suit ou encore à écrire « long-temps » plutôt que longtemps. Ce trait qu'on observe dans le texte des Archives Départementales de Lot-et-Garonne constitue une attestation supplémentaire de l'exactitude de cette version, d'autant plus que se trouvent en marge des corrections au crayon, de la main de Claude Maisani vraisemblablement, qui rectifient des erreurs de copie. On a choisi de maintenir ces habitudes d'écriture ainsi que les abréviations souvent employées par Ph. Tamizey de Larroque, marques de l'écriture cursive qui est le propre d'un journal : la mention (sic) les signale et des tirets dans les notes infrapaginales rétablissent l'orthographe courante.

… texte du Livre de raison sont le fait de Tamizey lui-même dans l'exemplaire de référence.
NB : l'abréviation « A.P. Baquier » employée dans plusieurs des notes qui suivent renvoie aux Archives et collections privées de Madame Baquier (dossiers et fiches généalogiques ; faire-part).

… membres d'une lignée, sans toutefois donner lieu ni à de grands épanchements, ni à d'indiscrètes confidences. Ils rapportaient plutôt des phénomènes météorologiques plus ou moins exceptionnels et, à l'occasion, des événements ayant fait date dans la contrée ou la province comme le passage du roi ou d'un grand personnage.
Ainsi, d'abord et avant tout conservatoires de la mémoire d'une famille, les livres de raison étaient l'oeuvre d'un auteur, un père-chef de famille le plus souvent, et à peu près toujours un représentant de la bonne société. Ils étaient d'abord le fait de gentilshommes campagnards, nobles provinciaux très éloignés, certes, des Grands de la cour mais bien distingués pourtant dans le monde rural qui les entourait. La tenue de livres de raison, poursuivie assez fréquemment sur plusieurs générations, fut à son apogée au XVII e siècle pour amorcer, ensuite, un déclin. En tout cas, à l'inverse des Mémoires, écrits rétrospectivement et traitant de grands événements et faits publics, ces…

Philippe Tamizey de Larroque était à l'étroit dans le carcan des conventions du livre de raison autant que dans les convenances de son temps qui, les unes et les autres, ne parvenaient qu'imparfaitement à corseter sa personnalité. Ainsi ne parvient-il pas à s'affirmer vraiment dans son Livre de raison qui, bien des fois, est pratiquement un anti-livre de raison. Celui-ci, à côté des notations « traditionnelles », se trouve, en effet, quelque peu dénaturé par des confidences discrètement distillées qui tiennent davantage du journal intime et de la sensibilité des contemporains du XIX e siècle. Ainsi le moment de solitude que fut pour lui la soirée du 14 juillet 1891. De même le Livre de raison se transforme-t-il, à plusieurs reprises, quasiment en compte rendu de lecture (6 septembre 1891) ou bien il hésite entre l'agenda et le mémento (7 mars 1890 ; 14 décembre 1892). Tamizey, il l'avoue, le tient irrégulièrement (14 juillet 1894 ; 10-25 juillet et 18 août 1895) et le compose, même, après coup (4 mai-23 juin 1894). Dès lors, derrière la chronique menue du quotidien d'un historien qu'il présente en apparence, le Livre de raison de Tamizey de Larroque dissimule peut-être, en réalité, une sorte de roman, tant familial que personnel, qui culmine en un drame, le 9 juillet 1895. Celui-ci amène à relire tout ce qui précède comme la chronique de l'annonce de cette tragédie, bel et bien, et de ce texte et de son auteur. Ce qui est, au bout du compte, bien plus que le dépassement des principes du livre de raison : leur négation, pure et simple ! Au fond, Tamizey est sans doute moins entravé par les contraintes qu'impose le genre du livre de raison que par sa propre incapacité à s'exprimer effectivement et à dire sa vérité. Le propos de ce Livre de raison n'est certainement pas hors de l'histoire et passéiste. Ce n'est aucunement une fuite du présent mais plutôt une façon de traiter d'une actualité spéciale, bien à lui et curieusement exprimée jusque dans ce qui est tu. Il fait percevoir une existence bien moins simple et bien moins conformiste, en tout cas, qu'elle ne l'affiche. Sa transparente obscurité a partie liée avec la passion des mots qu'il manifeste. Il s'agit notamment des mots de la langue gasconne (3 août 1890) qui s'imposait véritablement à lui. Rheinhold Dezeimeris, son ami, confrère en érudition et presque voisin, expliquait que c'était la seule langue que connaissaient les domestiques et les petites gens 24. Mais Tamizey a aussi une prédilection jubilatoire pour le latin. Les citations latines émaillent le Livre de raison. C'est le propre assurément d'un homme de bien, d'un notable de la fin du XIX e siècle.

22 Tamizey de Larroque (Philippe), « Le maréchal de Biron et la prise de Gontaud en 1580 », dans Monuments et portraits agenais, 1 er fasc., Agen, 1898, p. 61.
Néanmoins, à travers le latin et le gascon, Tamizey n'est-il pas, au fond, en quête d'une langue, ou plutôt d'un autre langage qui, mieux que le français, lui permettrait de se faire entendre vraiment ? On peut le croire d'autant plus que Tamizey suggère qu'il y a, au tréfonds de son être, un poète refoulé. Les vers et leurs auteurs ne manquent certes pas dans le livre de raison. Mais ce sont surtout les classiques de l'Antiquité latine : … (12 août 1890) ou d'Édouard Pailleron (4 juillet 1893). Ce qui est très sage et bien conventionnel, au début des années 1890 : vingt ans plus tôt avaient paru les Illuminations de Rimbaud, Verlaine, qui avait cessé depuis longtemps d'écrire, se mourait et Mallarmé, alors, travaillait à son subversif et avant-gardiste Coup de dés. Tamizey affichait donc un certain « bon goût » bourgeois et convenu. Pourtant, de l'aveu même du Livre de raison, il était un lecteur assidu et éclectique des dernières productions parisiennes (6 septembre 1891 ; 2 mars 1893) et il n'ignorait ni ne méconnaissait la nouveauté en matière littéraire, ni ceux qui la portaient et faisaient sensation. Il sait parfaitement, par exemple, la notoriété de Baudelaire (18 septembre 1892). En fait, parangon du conformisme en matière de poésie, à première vue, Tamizey l'était-il vraiment et complètement, puisqu'il souligne, à contre-courant de tous les engouements parisiens, qu'il lit des vers gascons (18 juillet 1893) ? Il est certain, en tout cas, qu'une double veine lyrique et élégiaque, présente avec constance dans le Livre de raison, l'imprègne d'une certaine qualité poétique. D'abord par le champ que Philippe Tamizey de Larroque donne à ses émotions et à l'évocation de sujets tendres et tristes comme l'enterrement d'une jeune paysanne des environs (16 mai 1896). Tout cela est étroitement lié à une vive sensibilité à la nature manifestée, par exemple, lorsqu'il parle des hirondelles qu'il voit voler de la fenêtre de son cabinet de travail (25 octobre 1892 ; 7 avril 1894). Le Livre de raison n'est pas le seul exutoire de tels élans. C'est ainsi le cas des innombrables mises au point savantes ou « Notes » que Philippe Tamizey de Larroque a fournies à la Revue de Gascogne. L'une, de 1887, traite de l'identification d'un toponyme : la substance pourrait tenir en une demi-page ; or il développe sur deux pages, parlant de sa promenade hebdomadaire et surtout d'un paysage qui lui tient à coeur : « magnifique vue qui s'étend, d'une part, sur la riche plaine de la Garonne ; d'autre part, sur de pittoresques coteaux… [il dit son admiration pour le] spectacle toujours nouveau pour moi dans sa variété de cette plaine et de ces coteaux… » 25. Cela ne manque pas de rappeler le style et l'inspiration d'auteurs qu'il cite, ceux des pages bucoliques de George Sand (17 juin 1890) ou encore de Victor Hugo (10 avril 1891) et d'Alfred de Musset (21 mars 1891). À l'évidence Philippe Tamizey de Larroque veut imiter les poètes et la poésie de sa jeunesse à la manière de laquelle il reste fidèle. Il se pose comme l'homme d'une génération, celle qui a grandi aux dernières heures de la monarchie de Juillet. Il est et il demeure profondément un romantique. Les bribes conservées du premier journal perdu brossent le portrait d'un jeune Tamizey de Larroque, en 1844, travaillé par la muse de la poésie, au point de devenir un rimailleur forcené. Dans ses vieux jours, il assurait avoir composé quelque 100 000 vers sur les bancs du collège. Il avait tout, alors, d'un « enfant du siècle » excessif et tourmenté.
Rien n'y manquait pratiquement : ni l'amour malheureux, ni le parfum du scandale. On le raillait, rapportait-il, en raison de sa chevelure et de la coupe de son habit. Enfin pour avoir fait des pièces de poésie destinées à la soeur de l'un de ses condisciples, il fut cause d'un esclandre qui fit cesser toute relation avec la demoiselle comme, d'ailleurs, avec le frère 26. Il aurait même tenté de faire carrière dans les lettres à Paris. Mais sa famille, d'après ce que l'on sait des Souvenirs littéraires, contribua, par son manque d'enthousiasme, à calmer la fièvre lyrique du jeune Tamizey ; d'autant qu'il perdait, par ailleurs, à cause de sa passion poétique, ses prix d'excellence. Ainsi aurait-il brûlé lui-même ses cahiers en 1845. Il se serait, dès lors, jeté dans les classiques et…

25 « Une petite découverte. Podiodalphinum », Revue de Gascogne, 1887, p. 78-79.
28 id., p. 498.

… plaisir de l'entendre plus tard avec sa verve gasconne raconter les incidents de ces repas de Pantagruel… Il mimait surtout le désespoir muet mais indigné d'une vieille tante fort avare privée de la possibilité d'« accommoder les restes » 29. Il aurait donc cherché alors à « percer » à la manière d'un Rastignac. Ce premier séjour parisien ne donna lieu, de toute évidence, à aucune publication de plaquette ou de volume de vers ou de prose. Il se termina, en 1850, semble-t-il, et aurait été suivi de cinq années entières passées à Gontaud. Pour expliquer ce quinquennat provincial, surgit l'image de l'aspirant poète plein d'espoirs, éconduit par la capitale et ruminant son échec. À moins qu'il ne s'agisse des conséquences de l'ire paternelle à l'encontre d'un fils rebelle subissant un châtiment à la mesure du courroux provoqué par son inconduite. Celle-ci aurait pu se trouver aggravée par les idées et les prises de position subversives alors courantes chez les jeunes gens de son âge ; autrement dit la génération du second romantisme, qui délaisse la nostalgie de l'Ancien Régime pour cultiver des idées socialement et politiquement hardies. Avoir vingt ans, à Paris, en 1848, alors que la Révolution embrase l'Europe et fait vaciller la Monarchie de Juillet, n'a certainement pas été sans conséquences pour Tamizey. Dans ses Souvenirs littéraires, il aurait avoué avoir passé la plus grande partie de l'année 1848 à s'occuper de politique, écrivant une innombrable quantité de lettres aux grands noms du premier socialisme, à Pierre Leroux, Proudhon, Considerant et Cabet 30.
De fait, on saurait trancher définitivement si Paris a refusé Tamizey ou si ce dernier a renoncé à Paris 39 . En tout cas, en 1855, Tamizey de Larroque y fit, à nouveau, Comté de 1814 à 1870, Annales littéraires de l'université de Besançon, 1992, t. II, p. 882 : « Le statut d'homme de lettres est en contradiction ouverte avec la tradition aristocratique qui se flatte d'une consommation somptuaire et qui refuse toute production comme indigne : le noble qui se fait homme de lettres ou pire publiciste ne se déclasse-t-il pas ? L'honneur de la noblesse de la noblesse, en effet, semble incompatible avec les extravagances de la « vie d'artiste ». » -Bourdieu (Pierre), « Flaubert et l'invention de la vie d'artiste », Actes de la Recherche en sciences sociales, 1975, n° 2. harmonieusement avec un second Empire qui, il est vrai, s'était libéralisé en 1860 44 . Fait avéré, Tamizey démissionna, en 1870, au moment de la Commune 45 . Il perpétua donc, en tous points, la tradition familiale inaugurée par son trisaïeul et continuée par son bisaïeul, son grand-père et son père. A l'évidence, il participait sans équivoque à l'ascension vers la noblesse d'une lignée de gens de bien qui conformément aux idéaux et aux traditions de l'ancienne France trouvait dans l'agrégation au second ordre la consécration d'une réussite et d'une respectabilité, fondée sur la propriété foncière. Pour les Tamizey, elle prenait notamment la forme du domaine de Larroque sur les hauteurs de Gontaud qui leur permettait d'ajouter une particule avantageuse au patronyme familial. Dans cette entreprise d'anoblissement, l'affirmation d'une vocation d'historien de Philippe Tamizey trouvait sa pleine et entière légitimité et ne pouvait que rencontrer l'approbation des siens. Être historien constituait bel et bien une revendication de noblesse pour un personnage issu d'un milieu qui ne l'était que modestement mais aspirait à l'être pleinement. Philippe Tamizey de Larroque en fait bel et bien l'aveu en insérant dans le Livre de raison, des papiers de famille (10 juillet 1891) et surtout, datant de 1762, le certificat de noblesse du père de sa mère, Jean-Joseph, qualifié de chevalier de Grammont (2 septembre 1891). Dans cette perspective la bibliothèque richement fournie dont il fait l'inventaire dans le Livre de raison, à partir du 25 juillet 1895, était certainement pour lui, autant un instrument de travail que le 44 Tamizey de Larroque (Philippe), « De l'opinion de l'Empereur sur l'emplacement d'Uxellodunum », Revue de Gascogne, 1866, p. 245-248. Il y fait un compte-rendu extrêmement flatteur et approbateur du livre publié par Napoléon III l'Histoire de Jules César. 45 signe extérieur spectaculaire de son appartenance au monde distingué et cultivé qui était, par excellence, celui de la noblesse, telle qu'elle se concevait au XIX e siècle 46 . Mais être historien, à la manière de Philippe Tamizey de Larroque, au milieu du XIXe siècle, représentait aussi un acte militant. L'historiographie à laquelle il adhérait n'était ni socialement, ni politiquement neutre. 
Il était l'un des plus actifs collaborateurs à la Revue des Questions Historiques 47, fondée et animée par des descendants de l'ancienne noblesse - le marquis de Beaucourt, le comte Henri de l'Epinois, le comte Hyacinthe de Charencey - et quelques roturiers aussi - Léon Gautier et Marius Sepet notamment - qui partageaient leur goût de l'érudition autant qu'un attachement à la foi catholique et un penchant pour les institutions et l'ordre social d'avant 1789, en d'autres termes pour la pensée de la droite ultramontaine et légitimiste qui avait triomphé à l'époque de l'« ordre moral » 48.

[…] 1865, en tant que maire, il fit installer à Gontaud un bureau de poste et surtout de télégraphe 54. La rapidité de transmission de ce nouveau moyen de communication, à l'avant-garde de la technologie de son temps, le fascinait visiblement. Partie d'Alger à 10 h. 20 du matin, une dépêche lui est remise à 3 h. de l'après-midi (14 septembre 1892) ! De même, il accueille avec joie le confort d'un petit poêle qui lui permet de continuer à travailler l'hiver près des livres de son bureau (14 novembre 1893). Mais il faut aussi rappeler qu'il utilise avec ferveur les méthodes de la publicité la plus récente pour rassembler les fonds nécessaires à un monument à Peiresc et qu'il s'intéresse aux réflexions et aux réalisations de H. Issanchou […] contre les congrégations au printemps 1880, ce geste prenait une certaine dimension provocatrice. Pendant naturel à son royalisme, Tamizey de Larroque affichait ainsi, à contre-courant, son cléricalisme. Mais cette revendication réactionnaire avait cependant quelque chose d'un peu trop théâtral pour ne pas être un tantinet outrée. Elle occultait, là encore, la réalité d'un zèle bien moins permanent et bien moins rigide que Tamizey ne le prétendait : « Je crois à mon catéchisme, et je m'en tiens là », écrivait-il ainsi, sur le tard, à un écrivain spirite, ajoutant : « Je suis un profane en philosophie et je veux mourir dans la peau d'un profane impénitent » 59.

59 La plume libre, numéro du 15 juillet 1898.

C'était apparemment une position de fait et de principe. Léonce Couture, son collaborateur à partir de 1865 60, rapporte ainsi que Philippe Tamizey de Larroque lui aurait confié combien il déplorait l'accueil favorable fait par la Revue critique aux premiers volumes des Origines du christianisme d'Ernest Renan, publiés en 1863 61. Il n'en demeure pas moins que, même converti à un catholicisme romain, Tamizey gardait des goûts qui n'étaient pas tout à fait conformes à la morale compassée de l'inflexible Église catholique de l'époque du Syllabus 62. D'abord, il y avait ses propres attaches familiales qui, dans l'Agenais, vieille terre de présence réformée, le conduisaient forcément à avoir des cousins protestants. Il n'en parle pas, mais sa parfaite connaissance de son arbre généalogique 63 l'empêcherait certainement de souscrire entièrement à l'idée d'une France monarchique toute catholique, s'il n'y avait, de […] descendants des captaux de Buch ou les Malvin de Montazet 98 ; mais il y donne aussi la transcription un peu compromettante de documents de famille plus révélateurs d'une honnête gentilhommerie que d'une très ancienne noblesse 99. Une confidence faite à Léonce Couture va encore dans ce sens : Tamizey avouant appartenir à une noblesse d'accession, arrivée par de continuels services administratifs, judiciaires et militaires, vraie noblesse de clocher, « la seule à peu près qui subsiste encore en Agenais », selon ses propres termes 100.
Tamizey nourrit à l'égard de son statut, certes, une pointe d'orgueil, mais aussi un certain humour : ainsi quand il relève la manière dont les paysans d'alentour appellent « castet », c'est-à-dire « château » en gascon, son modeste pavillon Peiresc 101. À moins que ce ne soit pour lui une façon d'affirmer une certaine idée de la noblesse, celle des gentilshommes campagnards, enracinés dans leur terroir et fort peu aristocratiques, qu'évoqua, quelques années plus tard, Pierre de Vaissière 102. De même, il laisse la gloire et les avantages de la publication des lettres de Richelieu à d'Avenel qui, malade, lui a laissé la charge d'achever et d'indexer l'énorme ouvrage qu'il avait entrepris 104. Le Livre de raison rapporte non sans complaisance - excusée, il est vrai, par l'exemple de Peiresc dont Tamizey se réclame - les cadeaux de livres coûteux à de savants amis impécunieux. Ainsi le monumental glossaire de Du Cange qu'il procure à Lucien Massip en 1892, ou encore, à Louis de Berluc-Perussis, les 3 volumes du recueil de documents historiques publiés par Langlois et Stein, chez Picard, et au chanoine Allain, les 9 volumes de la grande édition des Mémoires du duc de St-Simon, par Arthur de Boislisle 105. Mais le Livre de raison fait aussi état de récriminations contre des débiteurs indélicats, qui sentent sinon la lésine, du moins l'obligation dans laquelle il se trouve de « compter » 106. Philippe Tamizey de Larroque n'a pas les moyens d'être un « grand seigneur ». En fait, il est moins un « seigneur » à l'ancienne qu'un « propriétaire » qui rencontre l'histoire, sous les espèces de la crise agricole de la fin du XIXe siècle. Certes, au moment où il rédige le Livre de raison, le plus fort de la crise phylloxérique est passé. Cependant, de son propre aveu, elle l'a durement atteint. En 1880, c'est uniquement grâce à la générosité de la comtesse de Raymond qu'il peut faire face à la dépense de la publication des Mémoires de Jean d'Antras 107.

107 « Quand M. l'abbé de Carsalade du Pont et moi nous publiâmes les Mémoires de […] »

La catastrophe viticole l'a conduit à se rapprocher de Prosper de Laffitte, propriétaire de Lajoannenque et président du comité institué à […]. Le fléau transforma ce dernier, membre de l'Académie de Bordeaux, distingué helléniste et « rat » de bibliothèque, sinon en véritable ouvrier agricole, du moins en praticien chevronné, animateur du Comice agricole de Cadillac. Dezeimeris a raconté comment il avait pris l'habitude d'aller travailler dans les vignes : « … un vieux pardessus roussi… jeté sur [les] épaules, attaché, en haut, par un lien en raphia dont une floche soutenait le bec-de-corbin d'un parapluie avarié ; à gauche, sur le flanc, un sac de toile contenant des serpettes, des sécateurs et des greffons - Rheinhold Dezeimeris avait mis au point un procédé de taille qui porte son nom (la « taille Dezeimeris ») - ; chapeau de feutre décoloré, en éteignoir, prévoyant la pluie ; pantalons mouillés de boue aux genoux ; éclaboussures partout, et sabots éculés, traînants ». Sortant ainsi, un matin, de sa demeure, qu'on appelait le château dans le pays, pour aller greffer en plaine, il rencontre un mendiant, bien mieux mis que lui, qui, le prenant pour un confrère, l'arrête et lui demande d'[…]. Et Dezeimeris de répondre malicieusement : « On m'a donné, allez-y voir ! » 109. Mais les domaines de Tamizey de Larroque n'ont pas la vocation viticole avantageuse de ceux de Loupiac, et la mise au point de nouvelles techniques agronomiques ne suffit pas à consolider des revenus maigrelets.

[…] Tamizey de Larroque écrit qu'il est « toujours du côté des vaincus ». Il fait allusion au candidat (orléaniste), battu d'avance, pour […] jeu politique alors que triomphe la République radicale. Il avait traversé deux révolutions, pour voir condamner finalement le régime social et politique qu'il soutenait, celui de la monarchie constitutionnelle. Et, sans doute, comparait-il la marginalité à laquelle ses convictions le confinaient avec la réussite publique d'un Reinhold Dezeimeris, président du conseil général de Gironde depuis 1894 - il y représentait le canton de Cadillac, depuis 1877, sous l'étiquette de « républicain » - 116. Mais plus largement, il évoque les causes perdues qui ont sa faveur dans les grandes affaires internationales du moment. Ce qui montre bien sa parfaite connaissance de l'actualité et prouve qu'il continue à lire les journaux et qu'il est donc parfaitement au fait des derniers développements de l'affaire Dreyfus. Il affirme donc son soutien à l'Espagne contre les États-Unis, à la Grèce contre la Turquie. Il est du côté des puissances du passé, contre les puissances du présent : l'Empire des conquistadors, où le Soleil ne se couchait jamais, contre celui des Rough riders de Théodore Roosevelt et du chemin de fer, et l'antique patrie de Thucydide et de Plutarque contre les Ottomans à l'artillerie ultra-moderne. Plus largement, ne dressait-il pas là un bilan pessimiste de son existence qui, alors qu'il était clairement en proie à la dépression depuis pratiquement trois ans, pouvait bien lui apparaître comme un naufrage ? Un naufrage social, tout d'abord, parce qu'il était allé à contre-courant du mouvement qui, à partir du milieu du […] campagne, comme de la province, pour trouver des situations citadines et notamment parisiennes, plus brillantes et plus lucratives 117.

115 Couture (L.), op. cit., p. 540-541.
116 Pouvereau (Norbert), Rheinhold Dezeimeris, érudit loupiacais, essai biographique, Association Saint-Blaise-Cadillac, 1998, p. 65 et p. 110.

Mais Tamizey pouvait aussi avoir le sentiment d'un naufrage personnel. Sa famille était décimée. Des six enfants qu'il avait vus naître, quatre n'avaient pas vécu jusqu'à cinq ans. Tout indiquait qu'il allait mourir presque seul, séparé de […] ne pouvaient, certes, que le laisser amer. Mais, au fond, plus qu'un vaincu, Tamizey était un solitaire qui avait subi, certainement, mais aussi, dans une certaine mesure, organisé sa solitude. Larroque était véritablement un ermitage que Tamizey avait choisi et aménagé pour une vie entièrement dévouée à l'étude, ou plutôt au travail, selon ses propres termes. C'était un endroit qu'il avait aimé depuis l'enfance. Il le ramenait à des moments de lecture que l'on devine enchanteurs, sous le grand chêne planté par ses ancêtres, comme il l'explique au tout début du Livre de raison, le 14 juillet 1889, au moment même où il en commence la rédaction. C'est une date trop bien choisie pour ne pas marquer, délibérément et malicieusement, celle d'une révolution personnelle. En fait, Larroque était déjà devenu, depuis près de dix ans, son asile du vendredi. On peut penser qu'il s'installait alors dans les bâtiments de ferme qui s'y trouvaient, avant qu'il n'entame la construction du Pavillon Peiresc, dont l'achèvement fut à peu près effectif début septembre 1890 118. Là, il s'astreignait à un rythme de travail sinon infernal, du moins inhumain, et qui le condamnait bel et bien à une vie asociale.
Elle commençait par des levers très matinaux […] d'une puissance de travail hors du commun (24 juin 1891 ; 21 juillet 1891 ; 6 mars 1894).

117 Ariès (Philippe), « La Bretagne, les Alpes et les pays d'Aquitaine », dans Histoire des populations françaises, Seuil, 1971, p. 23.
118 Couture (Léonce), « Philippe Tamizey de Larroque, correspondant de l'Institut », Revue de Gascogne, 1898, p. 505.

En dehors de chez lui, invité, par exemple, à consulter le fonds d'archives privées du château de Xaintrailles, il faisait montre de la même ardeur, dont il se flattait visiblement et qu'il ne manque pas de rapporter en tout cas :

… Le temps me manqua malheureusement pour profiter, comme je l'aurais voulu, de cette bonne fortune que je dus à la bienveillance des héritiers de madame la marquise de Lusignan, MM. de Châteaurenard et de Saint-Géry. Mon désir, mon ardent désir était de tout transcrire. Mais il aurait fallu, pour cela, des semaines entières d'un travail assidu, et je ne disposais que de quelques journées. Ces journées, certes, je les utilisai de mon mieux, et l'infatigable rapidité de ma plume, qui, du matin jusqu'au soir, « a solis ortu usque ad occasum » [trad. du lever jusqu'au coucher du soleil] volait sur le papier, faisait, il m'en souvient, l'étonnement des honorables notaires qui, auprès de moi, rédigeaient avec une majestueuse lenteur un interminable inventaire 119.

Effectivement, sa productivité est étonnante. Aujourd'hui encore, sa bibliographie complète n'a pas été établie. Selon les dires de Tamizey lui-même, sa bibliographie réelle se montait à plusieurs milliers d'articles. Selon l'inventaire de Jules Momméja, paru en 1901, assez rigoureux dans les critères qu'il retient (c'est-à-dire les publications de Tamizey ayant fait l'objet d'un tiré à part), il avait 200 titres à son actif 120. En tout cas, début 1890, il manifestait un appétit […]

[…] protection des animaux 134 venait à Tamizey de la petite enfance, explique-t-il, en évoquant un oiseau apprivoisé, dont la disparition fut l'un de ses « gros chagrins » (26 juin 1893). Tamizey apparaît donc, aux détours du Livre de raison, comme assoiffé, dès ses plus jeunes années, d'affection. Il a souffert, certainement, des manières un peu brutales de son père (18 août 1891), comme d'être séparé, pour être envoyé au collège, d'une mère à l'évidence beaucoup plus tendre 135. Période visiblement malheureuse de son existence, dont il parle en expliquant comment la lecture a été alors pour lui une consolation et un refuge (31 juillet 1890). Ce que l'on sait du premier journal de Tamizey était apparemment beaucoup plus explicite sur ce point, montrant le jeune Tamizey, au lycée de Cahors, victime des moqueries de ses camarades à cause de sa grande taille et parce qu'il n'était pas comme les autres, non plus, dans sa mise et ses origines. Jouant de sa prodigieuse mémoire et de son immense capacité de travail, il se serait alors réfugié dans les études comme dans une carapace qui finit, à la fin de sa vie, par l'étouffer 136. Fugitivement, bien sûr, mais certainement, Tamizey laisse entrevoir qu'il rêve d'ailleurs. Peut-être à une carrière militaire manquée, on l'a vu, mais surtout à la liberté des grands espaces. C'est ce qu'il laisse entendre dans la nécrologie qu'il consacre à son ami Adolphe Magen, à propos d'un séjour de plusieurs mois en Algérie à une date indéterminée (avant 1893 en tout cas), dont il garde un souvenir lumineux et dont il évoque l'éblouissement du ciel bleu 137.
Il y a, par exemple, dans le Livre de raison, la fierté qu'il a à recevoir au Pavillon Peiresc la femme et les filles du fameux orientaliste et aventurier anglais Layard (16 octobre 1895), mais aussi la grande curiosité qu'il manifeste pour l'astronomie. On le surprend à lire les comptes rendus de l'Académie des Sciences (7 juin 1891) ou un livre de vulgarisation scientifique d'Henri Guillemin (18 juin 1890 148). […] partager pleinement les exigeantes entreprises et ambitions intellectuelles de ce dernier qui, à son grand désespoir, ne trouva pas à le marier dans leur milieu (10 juillet 1892). La chaleur d'une famille, Tamizey la trouvait plutôt à Lavardac, chez sa sœur cadette Eléonore (1831-1917), mariée à Jean-Baptiste Truaut, et qui menait visiblement, dans une petite maison de plaisance appelée le « châlet », à Ambrus, une vie enjouée et sans apprêt 150. La joie que peuvent donner des enfants brillants, Tamizey la connaissait grâce à ses neveux Boëry, qu'il aimait à l'évidence beaucoup et qu'[…] 1892). Mais tout cela n'arrivait que de temps en temps, et il fallait alors que Tamizey quitte ce cabinet de travail, ces travaux et ces lectures qui étaient devenus pour lui comme une drogue. Ainsi, même s'il avait terriblement besoin de compagnie - ses domestiques n'y pouvant suffire -, il lui en coûtait quelque peu de ne pas pouvoir travailler des heures d'affilée du matin au soir (16 juillet 1890). Or, à l'extrême fin de son existence, Tamizey se trouvait bel et bien privé de ce qui faisait sa vie. Ses problèmes oculaires l'empêchaient de lire et, surtout, privé de la plus grande partie de sa bibliothèque, il n'avait presque plus de quoi lire. Il était livré à lui-même et, dans une certaine mesure, c'est ce qui l'anéantit et le tua 151. Son épitaphe indiquait bien ce à quoi, douloureusement, son existence n'avait pas cessé de le condamner et que, non moins tortueusement, il […]

148 Voir Livre de raison, 27 octobre 1889.
149 Serin (P.), op. cit., p. 32-33. - Dans le fonds Tamizey des Archives départementales de Lot-et-Garonne sont conservées plusieurs lettres d'Henri Tamizey de Larroque. On peut retenir celle datée de Larroque-Peiresc, le 20 mai 1900 : il y parle de sa joie de visiter prochainement l'Exposition Universelle, mais aussi de soucis financiers liés à une « indivision » (A.D. Lot-et-Garonne, 1 J 652 […]).

J'éprouve un besoin impérieux, et comme instinctif, de finir ma vie à la campagne. Est-ce parce que l'air si vif que l'on respire sur ces hauteurs me fait du bien ? Est-ce parce que la paix qui y règne est particulièrement agréable à un homme qui devient vieux ? Est-ce par une sorte de mystérieuse et héréditaire affinité, et suis-je attiré presque invinciblement vers le lieu où ont vécu mes aïeux, ceux-là même qui plantèrent sans doute le chêne plusieurs fois séculaire que la foudre et le temps ont à peu près tué et à l'ombre duquel j'ai […] séjour se prolonge dans une joyeuse tranquillité ; cette tranquillité dont je suis, en quelque sorte, affamé, et qui n'est, en réalité, autre chose que le bonheur. Je ne me lasse pas, en transcrivant les lettres de Peiresc à son frère, de jouir de la verdure des bois. Quel bienfait pour mes pauvres yeux si fatigués que cette douce verdure où ils se baignent et se reposent avec délices ! Je continue à faire des plans, non plus pour l'intérieur de mon ermitage, mais pour l'extérieur. Je veux multiplier les arbres autour de moi. Ce sont des amis qui ne sauraient être trop nombreux. Je planterai des arbres fruitiers tout le long de l'allée qui mène à Minor 163, et je mettrai une vingtaine d'ormeaux dans la prairie voisine de mon pavillon. J'aurai aussi à planter auprès de la fontaine un peuplier de la […] et rafraîchissantes feuilles. Mon tuquet 164 de Larroque était par trop nu, par trop sec, et ressemblait beaucoup trop à un caillou. À force de plantations bien soignées, transformons toute cette nudité et cette sécheresse en une petite oasis.

156 Le Livre de raison de la famille de Fontainemarie 1640-1774, édité par Philippe Tamizey de Larroque, fut publié en 1889 par l'Imprimerie Vve Lamy. Voir introduction et annexe sur les éditions, publications et recensions de livres de raison par Philippe Tamizey de Larroque. Il reçut les exemplaires d'auteur du livre de raison des Fontainemarie six mois plus tard : voir vendredi 16 novembre 1889.
157 La comtesse Marie de Raymond [28 juin 1825-24 avril 1886] avait fait lire à Ph. Tamizey de Larroque, à l'été 1885, des Mémoires personnels ainsi que, quelques semaines avant sa mort, une copie d'un manuscrit intitulé : Comment je travaille. Tamizey de Larroque (Philippe), Madame la comtesse Marie de Raymond, Auch, G. Foix, 1886, p. 11.
[…] le bourg de Saint-Pierre-de-Nogaret et, à l'horizon, à droite, la ville de Marmande… » : Serin (P.), op. cit., p. 34-35. Une photographie du Pavillon Peiresc figure p. 229 dans l'article de Bernard Peyrous cité ci-après.
161 Charles Nicolas Fabri de Peiresc (1580-1637), né et mort à Aix-en-Provence, l'un des grands savants et penseurs du XVIIe siècle. Voir introduction.
162 Peyrous (Bernard), « L'œuvre d'éditeur scientifique de Tamizey de Larroque », Revue française d'histoire du livre, 61e année, nos 76-77, nouvelle série, 3e et 4e trimestres 1992, p. 225-227 : Tamizey de Larroque aurait voulu éditer à la fois les lettres de Peiresc à ses correspondants, et celles qu'il en reçut. Il obtint la seule publication de la correspondance active et fut obligé de restreindre l'annotation. Cependant, telle quelle, la correspondance publiée de Peiresc comprend 7 volumes in-folio s'échelonnant de 1888 à 1898. Et encore n'est-elle pas complète. Tamizey prévoyait plusieurs autres volumes et, à la veille de sa mort, le tome VIII était assez avancé. Il s'agit de l'un des plus beaux monuments de l'érudition française du XIXe siècle. Il annonce déjà, par sa méthode, les célèbres éditions de la Correspondance de Bossuet par Urbain et Levesque ou des Mémoires de Saint-Simon, par Boislisle. Comme Tamizey ne trouvait pas d'éditeur acceptant de réunir la correspondance passive, celle reçue par Peiresc, il fit passer les lettres reçues par Peiresc sous forme d'articles dans les diverses revues savantes de France. Cette série, intitulée Les correspondants de Peiresc, comprit 21 articles échelonnés de 1879 à 1897.
163 La carte 1738 Est, Seyches, série bleue, 1 : 25 000, I.G.N., Paris, 1987, mentionne un lieu-dit « Minon » à l'Est de Larroque [441-4927].

Samedi 20. Je reprends, après trois jours d'interruption et avant de rentrer à Gontaud, mon journal de campagnard. J'ai beaucoup réfléchi, en ces trois jours, à mes projets d'installation et j'ai cherché à les compléter. Le principal des compléments adoptés est celui-ci : je me ferai enterrer à Larroque, non loin de ces grands bois que j'aime tant. Je me reposerai mieux, me semble-t-il, au murmure de leur feuillage. Ce murmure me bercera, comme le refrain de la nourrice berce l'enfant qui s'endort. J'ai choisi, pour l'emplacement de ma tombe, un petit coin ombragé par le vieux genévrier.
Je ferai entourer ce coin d'une haie d'aubépines, de lauriers, de lilas, de rosiers, etc. Les oiseaux viendront chanter sur les arbustes de la haie et les abeilles bourdonneront autour des fleurs du verdoyant rempart. Cela ne vaut-il pas mieux qu'un froid et banal cimetière où, d'ailleurs, ne repose aucun de ceux que j'ai eu le malheur de perdre 165 ?

Vendredi 9 août. Me voici de retour et prêt à recommencer mes bavardages champêtres. Je pense aujourd'hui à ce qui est une grosse question pour le futur ermite de Larroque, la question de viabilité. Il faudrait pouvoir aller facilement rejoindre du haut de mon pain de sucre la route de Labretonnie 166. Le propriétaire du domaine de la Roche-Marais, mon bon voisin, M. de Godailh 167, veut bien me donner l'autorisation de me servir du chemin qui traverse ses bois. À peu de frais, on mettrait ce chemin en assez bon état. Quelques rampes trop raides à adoucir, quelques ornières trop profondes et trop évasées à combler, et tout ira bien. Le gazon remplacera le macadam. Ce chemin, toujours sous le bois, sera ombreux et poétique. Je le ferai prolonger en lacet jusqu'au seuil de ma rustique habitation. En relisant la page précédente, je constate un péché d'omission dans le paragraphe relatif à ma tombe. Me souvenant de ce que j'ai écrit en décembre dernier, en tête du Testament inédit de Peiresc, imprimé à la suite de la belle étude de […]

164 Mot gascon, diminutif de tuc, tuque : tertre, coteau, éminence. Par analogie, tête, crâne. Abe la tuque horte : avoir la tête dure, être une « forte tête ». Palay (Simin), Dictionnaire du Béarnais et du Gascon modernes (Bassin aquitain), Bibliothèque de l'Escole Gastou-Febus, éditions du C.N.R.S., 1961, p. 995-996.
166 […] au nord-est de Gontaud de Nogaret, entre Agmé et le bois de Verteuil, sur l'actuelle D. 314 : carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003.
167 Sans doute s'agit-il du fils du Godailh (1764-1840), professeur de grammaire à l'École centrale d'Agen, qui fut parmi les fondateurs, le 8 prairial an VI (27 mai 1798), de la Société d'agriculture départementale prolongeant la Société libre des Sciences d'Agen, qui se réunissait depuis 1776. Il en fut secrétaire perpétuel jusqu'en 1810. Il démissionna alors « pour raison de santé » de cette «… fonction que son éloignement forcé au temps des sessions législatives l'empêche de remplir utilement ». Il fit don au Musée de la Société, en 1839, d'un plat de B. Palissy et d'une collection minéralogique composée de 400 échantillons : cf. procès-verbal du 21 août 1839, dans Lauzun (Philippe), La société académique d'Agen - […]
[…] Victor), 1826-1910. Ancien élève de l'École des chartes. Entré au Cabinet des Manuscrits (1852), conservateur (1871), devient administrateur de la Bibliothèque Nationale (1874). Il a largement travaillé au Catalogue des manuscrits et fait connaître l'histoire de ce dépôt dans le Cabinet des manuscrits de la Bibliothèque nationale (1868-1881, 3 vol.). Par ses mesures, il assura le rapide inventaire de tous les anciens fonds. C'est à lui qu'est due l'impression du Catalogue général, commencée en 1897. Élu en 1857 à l'Académie des inscriptions. Le catalogue de ses publications dressé en 1902 par Paul Lacombe (avec supplément en 1911) comprend près de 20 000 articles. - Correspondance de Tamizey avec L. Delisle : A.D. Lot-et-Garonne, 16 J 11, correspondance d'érudits : Fonds Tamizey de Larroque.

« Nigra sum sed formosa » 173. Nigra avait pour moi une tendresse d'autant plus flatteuse qu'elle était exclusive.
Toujours près de moi et bien souvent sur moi, tantôt sur mon épaule quand je mangeais, tantôt sur mes genoux, surtout pendant les soirées d'hiver où elle prenait, sensible à la volupté du feu, des attitudes si charmantes. Ce qu'il y avait de plus touchant dans son attachement pour son maître, c'était son agitation nerveuse, son inquiétude profonde, quand je devais quitter la maison : elle devinait mon départ à mes préparatifs, s'établissait sur le banc vert placé près de la porte, allait et venait sur ce banc avec un air désolé, me suppliant, en quelque sorte, de ne pas l'abandonner, et, la porte une fois refermée, elle restait soucieuse, troublée, presque inconsolable. En ces derniers temps, elle ne quittait guère le pupitre sur lequel j'écris ; elle avait choisi sa place à l'angle gauche de ce pupitre, parfois luttant, en un demi-sommeil, de tout l'allongement de ses pattes, contre l'envahissement de mes grands volumes, d'autres fois, tout à fait réveillée, se montrant joyeuse et ronronnante, surtout si du bout de mon porte-plume je caressais sa tête noire. Nigra est morte de vieillesse sans doute, car c'était déjà une grande chatte quand j'allai passer trois mois en Provence 174, il y a neuf ans, et elle avait eu, dès cette époque, les gloires de la maternité. Décidément, je n'aurai pas de chats à Larroque. Que deviendraient-ils après moi ? Gâtés comme je les gâte, ils seraient, après mon départ définitif, les plus malheureux des quadrupèdes. Nigra aura donc été, dans l'ordre des animaux domestiques, ma dernière affection.

172 On attribue à Peiresc l'introduction en France du chat angora : Tamizey de Larroque (Philippe), Deux jardiniers émérites, Peiresc et Vespasien Robin, J. Remondet, Aix-en-Provence, 1896, p. 9-10.
173 Trad. du latin « Je suis noire, moi, mais jolie » : A.T., Cantique des cantiques, 1, 5.

Vendredi 6 septembre. J'ai donné rendez-vous à tous les corps d'état, au maçon Lacube, au charpentier Laliman (adjoint au Maire d'Agmé 175, S.V.P. ! 176), au marchand de bois Duranton, et j'avais vu déjà, en passant près de son usine, le tuilier Mouret. Je suis d'accord avec tout ce monde et j'espère que je serai content de ces braves gens, qui me promettent monts et merveilles. Pour la maçonnerie, je donne à Lacube 30 francs par toise. C'est un prix assez rémunérateur pour qu'après avoir payé les matériaux et les transports (ces derniers sont fort chers, car on lui demande 47 francs 50 centimes pour porter ici cent quartiers pris à la carrière de Gontaud), il ait encore un honnête petit gain. Mais il vaut mieux donner à l'ouvrier un peu trop que pas assez, et je ne voudrais pas que l'on m'appliquât jamais le surnom de « Jugulator » 177 que j'avais […] voulait la donner pour cuisinière à l'abbé Alis 181, le nouveau curé d'Agmé. Les gages de ma servante seront de 190 francs par an, et comme elle recevra de moi 10 francs d'étrennes, cela fera la somme ronde de 200 francs.

174 En dehors de celui de mai 1894, rapporté par le Livre de raison, Louis de Berluc-Perussis évoque les séjours de Tamizey en Provence, à Carpentras et à Aix, « plus d'une longue saison », notamment en 1880 : Berluc-Perussis (L.), Ph. Tamizey de Larroque, Imprimerie Chaspoul et Vve Barbaroux, Digne, 1898 (extr. du Bulletin de la société scientifique et littéraire des Basses-Alpes), p. 4-5.
175 Sur l'actuelle D 265, à 3,5 km environ au nord de Gontaud de Nogaret : carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003.
176 Abréviation qu'affectionne, semble-t-il, Tamizey : le tract publicitaire qu'il a rédigé, en 1893, pour la souscription afin d'offrir à son personnage favori un monument propre à honorer glorieusement sa mémoire s'intitule « Pour Peiresc S.V.P. » : voir bibliographie provisoire de Ph. Tamizey de Larroque en annexe.
177 Trad. du latin « égorgeur, massacreur ».

Vendredi 27. Pendant tout le séjour de mon cher beau-frère Henri de Grammont 182, je ne suis venu qu'une seule fois ; encore n'ai-je fait que toucher barres. Je trouve aujourd'hui les murailles de mon pavillon élevées de plus d'un mètre au-dessus du sol. Les maçons, favorisés par le beau temps, travaillent avec une joyeuse ardeur. « Fervet opus » 183. La construction sera sans doute achevée avant les grandes pluies. J'ai posé la première pierre vendredi dernier, 20 septembre. Puisse cette pierre rester en place pendant de bien longues années ! Puisse-t-elle être vue par les petits-fils des petits-fils de mon fils.

180 Sur la famille Deville (sans particule « de Sardelys ») : Meller (Pierre), Armorial du Bordelais, t. 2, 1906, p. 30.
181 L'abbé Raymond-Louis Alis, né à Villeneuve-sur-Lot, en 1850, ancien élève de St-Sulpice et du Collège Romain, licencié en théologie, avait débuté par un vicariat à la paroisse de Saint-Hilaire d'Agen, en 1876 ; il avait été nommé ensuite curé d'Alès-Cazeneuve en 1881, puis de Mauvezin en 1882, avant de passer à Sainte-Radegonde. Il a publié plusieurs monographies d'histoire locale, notamment une Notice sur le château, les anciens Seigneurs et la Paroisse de Mauvezin, près Marmande…, précédée d'une Description archéologique et accompagnée de nombreux dessins, par Charles Bouillet, architecte, Michel et Médan, Agen, 1887, x-679 p., avec en introduction une Lettre de M. Tamizey de Larroque à l'Auteur, datée de Gontaud, 1er mars 1887, et une Histoire de la ville d'Aiguillon et de ses environs depuis l'époque gallo-romaine jusqu'à nos jours, 1894, reprint C. Lacour éditeur, Nîmes, 2003, dédiée à Ph. Tamizey de Larroque et, elle aussi, portant en introduction une lettre de ce dernier à l'auteur, datée du pavillon Peiresc, à Larroque, 22 novembre 1893 : Andrieu (Jules), Bibliographie générale de l'Agenais et des parties du Condomois et du Bazadais incorporées dans le département de Lot-et-Garonne, t. 1, A-K, Paris-Agen, 1886, p. 382. - Correspondance de Tamizey avec l'abbé Alis : A.D. Lot-et-Garonne, 16 J 2, correspondance d'érudits : Fonds Tamizey de Larroque.
182 Fils d'Urbain Delmas de Grammont, ancien garde du corps de Charles X, receveur des finances à Paris, et de Marie-Alexandrine de James ; frère d'Olivia-Marie-Henriette Delmas de Grammont, seconde épouse de Philippe Tamizey de Larroque [A.P. Baquier]. Également mentionné le 12 juin 1891. Mort en Alger ; Tamizey retrace sa vie : voir le Livre de raison, 14 et 15 septembre 1892 et 18 février 1894.
183 Trad. du latin « On travaille avec entrain (et même avec feu ; le travail est animé) » : Virgile, Géorgiques, 4, 169.

Vendredi 27 octobre. Hélas ! Le beau temps n'a pas duré et au « fervet opus » cité plus haut a dû succéder le « pendent opera interrupta » 184. La pluie, la boue, des voyages m'ont empêché de continuer mon petit journal depuis un mois. Je viens d'écrire le mot voyages. Toute la famille semble prendre l'humeur voyageuse. Mon fils est à Paris, où il admire les dernières magnificences de l'Exposition 185. Ma fille rêve une excursion et même un séjour en Espagne. Sa mère a le projet d'aller passer l'hiver à Dax. Je désire que les uns et les autres, quand ils rentreront au logis, n'aient pas subi les épreuves du pigeon voyageur de La Fontaine 186.
Vendredi 16 novembre. Les longues pluies dont je me plaignais, il y a trois semaines, ont fait écrouler en partie un des murs du pavillon. Quelle tuile sur ma tête ou, pour mieux dire, quelles tuiles, car les pierres ont écrasé la toiture du chai et fourni beaucoup d'ouvrage au couvreur ! Acceptons cet accident avec une douce philosophie. Du reste, dès le premier avis de la catastrophe, j'ai pris gaiement la chose, car, comme mon maçon m'affirmait, le jour de la foire de la Saint-Martin 187, que nul ne travaillait plus solidement que lui, j'ai dit, en riant, qu'il avait plus d'aplomb que son mur. Cela va retarder encore l'achèvement de l'entreprise et je serai bien heureux si le […]

184 Trad. du latin « Les travaux commencés s'arrêtent » : Virgile, Énéide, liv. IV, v. 88, peignant la stagnation qui a succédé, dans Carthage naissante, à la première activité des Tyriens, depuis que Didon, tout entière à sa passion pour Énée, ne songe plus à ses devoirs de reine.

7 mars. […] partie, mardi 25 de ce mois, pour Mustapha-Supérieur 191. Puisse sa santé s'améliorer sous le ciel de l'Algérie !

191 Mustapha : ancienne commune du département d'Alger à l'époque coloniale, aujourd'hui englobée dans l'agglomération algéroise. On y distingue Mustapha inférieur, sur la côte de la Méditerranée, et Mustapha supérieur, au flanc des collines. Là se trouvait le Palais d'été du gouverneur général d'Alger. Henri de Grammont, le beau-frère de Tamizey, après des campagnes militaires en Kabylie notamment, s'était établi en Algérie, où il avait acheté, outre la Villa Grammont à Mustapha-Supérieur, la ferme de Birkadem : voir 15 septembre 1892.

J'avais abandonné, depuis trois mois et demi, mon livre de raison, tant j'ai été contrarié et découragé par les constructeurs de mon pauvre pavillon, lesquels sont décidément les plus pitoyables maçons du monde. Il a fallu, devant d'effrayantes menaces d'écroulement, réclamer la visite d'un architecte et se décider à attendre la belle saison pour consolider l'édifice et l'achever. Je me suis un peu consolé de tous ces retards et de tous ces ennuis en faisant tracer des allées, planter des haies autour de mon jardin et de mon petit cimetière, creuser des trous qui recevront, dans quelques jours, des arbres fruitiers et des arbres d'agrément. Je viens d'achever l'annotation du tôme (sic) III des Lettres de Peiresc aux frères Dupuy 192, presque au moment où mon cher commissaire, confrère et ami Léopold Delisle a offert, en mon nom, le tôme (sic) II à l'Académie des Inscriptions et Belles-Lettres (séance du 21 février). J'ai pris la résolution de préparer, chaque année, en y comprenant celle-ci, le manuscrit d'un volume de mon grand recueil, et comme, d'après le plan que j'ai adopté et qu'adoptera, je l'espère, le Comité des travaux historiques 193, ledit recueil se composera de onze volumes (dix de textes et un de tables), j'aurai terminé ma tâche avant l'accomplissement de ma 70e année. Voici le tableau des opérations :

1890 tome IV, plus 2 fascicules des Correspondants de Peiresc.
1891 tome V, plus 2 id.
1892 tome VI, plus 2 id.
1893 tome VII, plus 2 id.
1894 tome VII, plus 2 id.
1895 tome VIII, plus 2 id.
1896 tome IX, plus 2 id.
total : 14 fascicules qui, joints aux 16 fascicules déjà publiés, donneront un total de 30 fascicules.
1897 tome X et Notice sur les collections de Peiresc.
1898 tome XI.

Arrivé, avec décembre 1898, si le Ciel me les accorde, à mes 70 ans révolus, je n'aurai plus à m'occuper qu'à lire et relire, qu'à recueillir des notes bibliographiques, car dans la période qui s'écoulera des fêtes de Pâques de 1890 jusqu'aux fêtes de Noël de 1898, j'aurai sans doute le temps de préparer, outre le Peiresc, et ses Correspondants, et ses […] établirai-je un banc sous son feuillage, banc sur lequel je relirai parfois les délicieux vers d'Horace sur les claires fontaines et la riante parure dont les arbres les entourent. Dans le jardin ont été plantés des pommiers et poiriers et, des deux côtés de l'allée du tombeau, sont alignés dix-huit pêchers. J'ai remplacé par un pin vulgaire le Cèdre du Liban que, dans mes rêves ambitieux, je voyais déjà monter si haut et qui s'est laissé mourir dans sa seconde année. La semaine prochaine, on mettra des chasselas dans le jardin, pour que je puisse cueillir mon déjeuner de septembre et d'octobre du seuil même de mon pavillon, en quelque sorte, et on fermera de plants américains 195 deux joualles 196 qui borderont l'allée. […] La promenade sera bien agréable au milieu de toutes ces roses de parfums si doux et de couleurs si diverses. Je la rendrai plus agréable encore en mettant entre les rosiers, l'automne prochain, bon nombre de plantes odorantes, telles que Lavande, Menthe, Verveine, etc. Enfin, je ferai grimper autour des colonnettes de fonte qui soutiennent mon balcon-terrasse du jasmin et du chèvrefeuille. Voilà, je dois en convenir, un paragraphe singulièrement poétique, mais ne fallait-il pas inaugurer le printemps ?

197 Philippe Tamizey de Larroque aurait fréquenté l'école primaire de Gontaud, puis, à 11 ans, en 1839, le collège de Marmande, avant de poursuivre ses études au lycée de Cahors à partir d'octobre 1842 : Lacoste (René), « La vie d'un érudit de province : Philippe Tamizey de Larroque », Revue de l'Agenais, 1999, p. 8.

28 mars. J'ai déjà parlé ici du vieux chêne foudroyé, mutilé, dont on apercevait autrefois de si loin la cime majestueuse et dont le tronc, même à peu près dépouillé de son écorce, n'a pas moins de trois mètres vingt-cinq centimètres de tour. Ce pauvre arbre, qui a cinq cents ans au moins, et qui a donné son magnifique ombrage à plusieurs générations de Tamizey, avant que mes prédécesseurs eussent quitté Larroque pour Gontaud - je vais agir en sens inverse et remonter, sinon le courant, du moins le coteau -, ce pauvre arbre, dis-je, m'est cher comme un objet de famille, m'est sacré comme une relique. Je conserverai tant que je pourrai cet antique témoin du passé. Mais qu'il est donc malade et combien ses branches, où s'est réfugié le peu de sève et de vie qui lui restent, sont rares et fragiles ! En attendant que quelque coup de vent achève de le briser ou que la foudre, qui a si cruellement labouré son écorce, renverse le géant plusieurs fois séculaire, j'ai voulu lui donner un vêtement qui l'embellira dans son extrême vieillesse. J'ai mis du lierre tout autour de son tronc vénérable et cela lui fera une robe toujours verdoyante. J'arroserai souvent ce lierre, je favoriserai de mon mieux ses progrès, et quand le chêne tout entier sera enlacé de ses mille et mille luisantes feuilles, on verra de loin monter dans le ciel une colonne d'un vert inaltérable et ce sera, en apparence, un arbre ressuscité, rajeuni.
Peut-être qu'il restera debout, quoique complètement mort, et que son cadavre, ainsi paré d'un feuillage nouveau, ressemblera à ces guerriers que l'on ensevelit revêtus de leurs plus beaux habits. On a osé quelquefois me conseiller de faire couper mon chêne, mais j'ai repoussé cela comme une proposition sacrilège. Il me semblerait, si j'ordonnais jamais de porter la hache sur ce patriarche des chênes de tout le pays, que, comme des flancs de l'arbre de la légende, le sang jaillirait sous le fer. Un tel arbre, n'est-ce pas presque un aïeul ?

8 avril. Visite de l'architecte Guillory. Il croit que le pavillon pourra être consolidé et que je pourrai y dormir en paix. J'avais déjà surnommé les pierres de mon dernier étage des pierres de Damoclès. L'architecte m'a rassuré. Puisse-t-il maintenant raffermir l'édifice ! Il maintient dans ses fonctions le sieur Lacube, malgré l'incapacité notoire de ce singulier constructeur. Mais si nous avions exigé que le pauvre diable donnât sa démission, cela ruinait à jamais sa réputation. L'architecte l'oblige seulement à prendre un auxiliaire capable de conduire les travaux et sous la direction duquel tout se réparera et s'achèvera dans de bonnes conditions.

19 avril. Je suis allé voir, à la carrière de Bistauzac 198, ma pierre tombale. Elle est d'assez bon grain et a fort bonne mine. On comprend, en l'examinant, qu'elle bravera les siècles. J'avais d'abord eu le projet de la mettre debout, comme les pierres levées qui ont une origine si antique et si mystérieuse. Mais elle n'a pas été taillée comme il faudrait pour cela et je dois renoncer à la dresser. Elle sera couchée, comme je serai couché moi-même sous sa protection.

198 Au nord-ouest de Gontaud-de-Nogaret, au nord de Saint-Pierre-de-Nogaret, en surplomb de l'actuelle D 267 : carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003.

27 avril. Aujourd'hui, dimanche, à midi et demi, la foudre est tombée sur le clocher de l'église de Gontaud et en a détaché quelques pierres. L'accident n'a été précédé ni suivi d'aucun coup de tonnerre. En 1834, la foudre était aussi tombée sur le même clocher et l'avait également détérioré. J'avais alors six ans et je me souviens encore du saisissement que me causa le fracas de la foudre, suivi de l'écroulement du sommet de notre malheureux clocher. Il est curieux qu'à plus d'un demi-siècle de distance, les deux accidents se ressemblent tellement qu'il semble que ce soit un seul et même accident.

23 mai. J'ai eu le plaisir de voir, à 5 heures ½ du matin, par un soleil magnifique, la maçonnerie du pavillon enfin achevée, comme l'indiquait la branche de laurier dont l'édifice était surmonté. J'espère voir, vendredi prochain, la même branche reposée par les charpentiers au sommet du pavillon en signe […] (un de ces collaborateurs qui font tout l'ouvrage), au maître maçon Bonneval (de la ville de Saint-Barthélemy 199), qu'à s'occuper du crépissage des murs, de la construction des cheminées et des cloisons des chambres. Ce sera tout au plus l'affaire d'un mois, et s'il ne surgit aucun obstacle imprévu, je pourrai dans les premiers jours de juillet aller pendre la crémaillère 200.

199 Saint-Barthélemy d'Agenais, entre Puymiclan à l'ouest et Tombeboeuf à l'est, sur l'actuelle D 124 : carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003.
200 Pièce de métal munie de crans au moyen desquels on suspend un récipient au-dessus du foyer, à une hauteur variable, pour cuisiner. Moment essentiel donc de l'installation dans une nouvelle demeure, la pendaison de la crémaillère était traditionnellement l'occasion d'un repas festif.

C'est aujourd'hui seulement que la branche de laurier du 23 mai a couronné l'œuvre du charpentier Laliman. D'autres retards sont à prévoir et décidément il faut que je me résigne à une installation provisoire dans les deux anciennes chambres. Là, comme le Sage, j'attendrai les événements.

13 juin. Je suis venu, malgré la pluie, car il faut qu'en ma qualité de campagnard je m'habitue à braver le mauvais temps. C'est lundi prochain, 16, que commencera ma nouvelle vie. Espérons que la journée d'inauguration sera plus belle que celle-ci et plus digne de l'ardent mois de juin. J'ai eu, avant-hier, la visite de mon cousin Joseph de Vivie 201, un homme du passé s'il en fut jamais. Cet ancien magistrat m'a dit que, depuis qu'il n'est plus chef du parquet de Marmande et vit dans la retraite, à la campagne, il écrit tous les jours quelques lignes dans un livre de raison. Il a ajouté que, dans deux ou trois siècles, son manuscrit paraîtrait bien curieux, car il y met un peu de tout. Les deux cousins ont des destinées à peu près pareilles. Puissent leurs deux livres de raison rester en bonnes mains ! Puissent-ils surtout être continués par leurs descendants directs pendant au moins autant d'années qu'en a vécu mon vieux chêne !

201 Louis-Joseph de Vivie de Régie, né à Agnac, le 2 octobre 1841, et décédé à « Lamouthe », le 11 juillet 1932 : successivement substitut à Lombez et procureur à Mirande, puis à Marmande, il fut mis d'office à la retraite après le 16 mai 1877 (échec du rétablissement d'un régime monarchique et avènement de la IIIe République) et se fit alors inscrire au barreau de cette ville. Il avait épousé Marguerite Campagne. Il a publié Un Cadet de 1792, Imprimerie Victor Crespy, Bordeaux, 1886, 32 p. C'est la biographie de Charles de Cornier. Né à Marmande en août 1775, il fut tué à l'armée du Rhin, le 13 octobre 1792. - Correspondance de Tamizey avec J. de Vivie : A.D. Lot-et-Garonne, 16 J 27, correspondance d'érudits : Fonds Tamizey de Larroque.

16 juin. Très beau temps. Il n'est que cinq heures et un soleil radieux, un vrai soleil de fête, monte à l'horizon. Tout resplendit et sourit autour de moi. C'est d'un bon augure. Avant-hier, le vieux père Thouron a posé la rampe en fer ouvragé de mon balcon-terrasse. Ce brave homme voulait absolument mettre en face de la porte d'entrée les initiales mises par moi au bas de tant d'articles bibliographiques et de notes diverses. Il m'a fallu insister beaucoup auprès de lui pour obtenir qu'à T. de L. il substituât la lettre P, la première lettre du nom de Peiresc, qui est le parrain du pavillon et qui en est même le père, car sans les honoraires d'éditeur de sa correspondance, qui me permettent de jouer sur le velours, je ne me serais jamais décidé à construire l'édifice que je lui consacre.

17 juin. Il est 4 heures du matin et je veux me lever à la même heure pendant tout l'été. Il fait si bon ! Quel calme ! Quel silence ! Tout semble encore endormi, même les oiseaux, et je n'entends que la voix de mon fermier qui, aussi vaillant dans son genre que moi dans le mien, gourmande déjà ses vaches. Je vais continuer la transcription des lettres de Peiresc d'après les registres qui m'ont été prêtés par la Bibliothèque Nationale. La journée, hier, a été charmante pour moi, toute remplie de travail et de tranquillité, les deux meilleures étoffes dont la vie puisse être faite.
J'ai pu, du matin au soir, lire et écrire en plein air, tantôt à l'ombre du pavillon, tantôt à l'ombre du Châtaignier 202. Je me suis régalé de la suave senteur des foins coupés, cette senteur dont George Sand a parlé si délicieusement dans je ne sais plus lequel de ses romans. Combien cela ressemble peu aux émanations de la petite ville, aux odeurs de Gontaud !

202 Un majestueux châtaignier se trouvait, à Larroque, à proximité du pavillon Peiresc. Une plaquette d'hommage intitulée Au pavillon Peiresc, le vieux châtaignier lui est consacrée, signée L. Audiat, E. Allain, A. de Gagnaud, Ph. Tamizey de Larroque, D. Granier, J. de Boëry, Charles Boy, s.d. (l'avertissement rédigé par Tamizey est daté du 2 août 1897) ; elle porte la mention « imprimé à 120 exemplaires, tous réservés aux bons amis ».

18 juin. Assisté, hier, de neuf à dix heures, et aux premières loges, à l'éclipse de soleil. J'ai voulu relire, à cette occasion, quelques pages du livre de Guillemin sur le Soleil 203. Lecture d'à-propos s'il en fut jamais. Le livre est très bien fait et surtout très clair, très lumineux. J'ai mis dans ma petite bibliothèque de campagne la série complète des travaux de l'habile vulgarisateur. Je m'amuserai souvent à les parcourir. Ce seront pour moi des livres de circonstances, car le monticule de Larroque n'est-il pas un observatoire naturel ?

20 juin. Je n'ai pas dit encore qu'au cordon-bleu d'Agmé 204, dont il a été question ici, a dû être substituée Antonia Monthus, qui est à notre service depuis le mois de juillet 1888. Je suis très heureux d'avoir auprès de moi cette brave fille qui a un cœur d'or, qui m'a déjà donné force preuves de son dévouement et qui est pour moi moins une servante qu'une petite amie.

[…] Jean. Peu à peu, d'autres feux ont brillé sur tous les coteaux environnants et c'était un très pittoresque spectacle. Vers neuf heures, tout l'horizon était enflammé.

203 Guillemin (Amédée-Victor), 1826-1893, publiciste scientifique, né et mort à Pierre (Saône-et-Loire), auteur de Les Mondes (1861), Le Ciel (1864), La Lune (1866), Les Chemins de fer (1867), Le Soleil (1869), La Vapeur (1873), Les Applications de la physique aux sciences, à l'industrie et aux arts (1873), Les comètes (1874), La lumière et les Couleurs (1875), Le Son (1876), Les Étoiles (1877), Les Nébuleuses (1880), Le Monde physique (1880-1885), vaste tableau d'ensemble et son œuvre capitale, Petite encyclopédie populaire (1886-1891) notamment.
204 Agmé est un village à 5 km environ au nord-est de Gontaud : carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003.

27 juin. Aux ennuis de constructeur, ennuis qui durent encore, car toujours quelque ouvrier se fait attendre et se fait maudire, se joignent les ennuis du propriétaire. Les grandes chaleurs de ce mois nuisent aux arbres plantés au printemps. Un de mes ormeaux est mort, deux autres sont très malades, d'autres encore sont menacés. Chaque matin, mon premier soin est d'arroser ces pauvres mourants. À chacun d'eux je fais l'aumône du contenu de deux arrosoirs. Arrêterai-je ainsi leur dépérissement ? L'eau rafraîchira-t-elle assez leurs racines déjà si desséchées ? Je serais bien contrarié si je ne sauvais pas mes malades, car je n'ai pas le temps d'attendre. Comme compensation aux doubles ennuis dont je parle, que d'agréments dans ma nouvelle existence ! Combien je goûte ce que l'on a si bien appelé la paix des champs ! Quelles circonstances favorables au travail !
Que de fraîcheur sous le feuillage presque toujours agité de ce châtaignier qui constitue, au milieu des ardeurs de l'été, un délicieux salon de verdure ! Et quels beaux levers de soleil, à 4 heures, chaque jour ! Quels beaux couchers de soleil, à 8 heures ! C'est si admirable, que c'est un plaisir toujours nouveau. Orage n° 2, non moins violent que le précédent, mais entièrement nocturne. Éclairs magnifiques, qui semblaient incendier toute ma chambre, et coups de tonnerre épouvantables. Hier, une des hirondelles qui s'amusent à danser le chassé-croisé devant le balcon, a familièrement fait son entrée dans mon cabinet et a croqué une mouche qui bourdonnait à la fenêtre. Comme je déteste les mouches autant que j'aime les hirondelles, je voudrais que ma petite visiteuse s'habituât à entrer souvent chez moi. 21 mai Bonne journée au point de vue bibliophilique. Dans le sac que m'a apporté mon petit courrier spécial (car j'ai amélioré depuis quelque temps le service postal entre Gontaud et Larroque et c'est le bouvier Marcel qui remplace très avantageusement le facteur capricieux et ondoyant), j'ai trouvé deux volumes recommandés et fort recommandables : un ouvrage de bibliographie en italien publié à Rome (grand in-8°) en décembre 1890, que m'envoie, avec sa carte, l'illustre professeur de l'Université de Pise, Alessandro d'Ancona, le Catalogo metodico degli scritti contenuti nelle publicazioni periodiche italiane e straniere 303 . Secondo supplemento, où je suis huit fois mentionné, et un délicieux volume intitulé Livre de l'Institution de la femme chrestienne tant en son enfance que mariage et viduité aussi de l'office du mary nagueres composez en latin par Jehan Loys Vives et nouvellement traduictz en langue françoyse par Pierre de Changy, écuyer avec préface et glossaire, par A. Desbousse (sic) 304 . (Achevé d'imprimer par Lemale au Hâvre (sic) le 11 mai 1891.) (Hommage de l'éditeur et de l'imprimeur). Ce volume se vend 12 303 Alessandro d'Ancona, (1835-1894) né et mort à Pise, prit une part active au mouvement qui prépara l'unité italienne et devint, en 1859, le directeur du journal florentin La Nazione. Surtout il fut le principal introducteur dans l'étude des lettres italiennes de la méthode historique et critique. Professeur à l'université de Pise, depuis 1860, il exerça une grande influence, tant par ses nombreux écrits que par les élèves qu'il forma. Il est notamment l'auteur de Michel de Montaigne avec un essai bibliographique sur ses voyages en Italie (1889).304 Il s'agit certainement d'Achille Delboulle, éditeur d'Anacréon, publié la même année par la même maison. -Correspondance de Tamizey avec A. Delboulle : A.D. Lot-et-Garonne, 16 J 11, correspondance d'érudits : Fonds Tamizey de Larroque. Ah ! le joli mois de mai ! Combien il justifie le joli mot que l'on a attribué à Voltaire : Le mois de mai est l'emblème des réputations usurpées. Nous avons eu, depuis la Saint-Philippe, des brouillards épais comme en novembre, des journées pluvieuses comme en février, des orages d'une fréquence et d'une violence déplorables. Mais de tous ces orages celui qui doit avoir le pompon est l'orage qui a éclaté, hier, d'une heure de l'après-midi à trois heures. Quelle grêle ! Une grêle à tout casser ! Nos vitres en savent quelque chose et nos tuiles aussi. La plupart des grêlons étaient gros comme des noisettes ; quelques-uns étaient du calibre d'une noix. Presque de vrais petits boulets ! Tout a été haché dans le jardin et dans les champs d'alentour. 
De plus, les eaux ont tout raviné et je ne vois que dégâts, soit chez moi, soit chez mes voisins. Ah ! le joli mois de mai !

23 mai. Est-ce encore un des méfaits de la grêle ? Ma cloche, dont j'étais si fier, dont le son joyeux retentissait jusqu'au presbytère d'Agmé et plus loin encore, ma pauvre cloche est bien malade. On dirait la voix cassée d'une vieille brebis. Y a-t-il eu quelque fêlure produite par ces grêlons qui tombaient si dru et qui étaient si gros ? (J'en ai retrouvé encore quelques-uns, le lendemain, qui restaient durs comme des cailloux.) Les journaux d'Agen et de Bordeaux nous apprennent que dans ces deux villes la tempête du 21 a fait de grands ravages, à Bordeaux surtout, où les cheminées pleuvaient à qui mieux mieux. Voilà des grêlons encore plus gros que les miens et la comparaison doit me consoler de mes petites mésaventures. C'est encore là une des supériorités du campagnard sur le citadin.

Note en marge : Le Nouvelliste 305, dans un article spécial intitulé « L'ouragan d'hier », donne beaucoup de détails. Pendant une heure, dit-il, le vent, la pluie et le tonnerre firent rage. Contrevents et devantures enlevés, arbres déracinés, becs de gaz tordus, maisons démolies, etc. L'article débute ainsi : « Une des tempêtes les plus violentes qu'on ait jamais vues, un véritable cyclône a sévi hier soir sur notre ville. »

29 mai. Le mois veut finir comme il a commencé : hier, orage, grand vent, fréquentes ondées. Croirait-on qu'à la veille du brûlant mois de juin, j'ai été obligé d'endosser, pendant quelques heures, mon gros paletot d'hiver ? Un vrai temps de diable déchaîné !

30 mai. Hier, encore un orage ! C'est décidément, en ce fatal mois de mai, un par jour. Et encore était-ce plus qu'un orage : bien une tempête, un ouragan, presque un cyclône (sic). Nuit en plein midi, coups de tonnerre dont l'un n'attendait pas l'autre, vent furieux, pluie qui tournait au déluge. Il ne manquait que la crépitation de la grêle à ce sinistre concert. En un clin d'œil, tous nos fossés ont été transformés en torrents impétueux. Mes grands peupliers, sous l'effort de la rafale, se ployaient comme des joncs. Le vieux chêne n'a perdu dans la bagarre que quelques petites branches extrêmes, à peine de quoi faire un fagot. Un gaillard qui résiste à de telles bourrasques est indestructible.

Note en marge : À Agen, à 4 h. de l'après-midi, trombe […]

305 Le Nouvelliste de Bordeaux : journal politique quotidien, publié entre septembre 1841 et le 20 février 1887 ; il a repris les abonnements au Journal du Peuple, quotidien paru à partir du 2 mars 1848, et de L'Électeur, journal de défense sociale, journal politique et quotidien paru du 1er mars 1873 au 28 janvier 1885.

2 juillet. […] photographie m'a particulièrement ravi, car c'est celle d'un parent que j'aime beaucoup et d'un travailleur que j'admire extrêmement. On lit dans son énergique physionomie, reproduite d'une façon saisissante, qu'il mènera à bonne fin l'œuvre gigantesque de la nouvelle édition - si augmentée et perfectionnée - de la Bibliothèque des écrivains de la Compagnie de Jésus 325.

327 Probablement Verteuil d'Agenais, à 10 km environ à l'est de Gontaud de Nogaret, entre Grateloup-St Gayrand au sud et Tombeboeuf au nord, sur l'actuelle D 120 : carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003.
(*) Sic.
Pour De Cours. [note de Ph. Tamizey de Larroque]

(2e colonne) Le trois juin mil sept cent quatre-vingt-six est né et le lendemain a été baptisé sieur Alexandre Tamisay, fils légitime de sieur Jean Pierre Tamisay, gendarme, et de demoiselle Anne de Montardy. Parrain : sieur Alexandre Degals, et marraine, demoiselle Marthe de Montardy. Signés : Monbrune de Montardit, Montardit-Decours, Anne de Melon, Degals, parrain, Tamizey et Truitard, vicaire. Pour copie conforme. À Gontaud, le 27 juillet 1876. Le Maire, A. de Lagalvagne. (Copie conservée parmi mes papiers de famille.)

14 juillet. Hier, mon fils est parti pour la Bretagne, où il va passer trois semaines. Ma petite bonne est allée, le même jour, voir sa famille à Lavardac et à Calignac 328. Me voilà complètement seul et rien ne trouble mon tête-à-tête avec Peiresc, rien sinon le chien d'Henri, l'infortuné Black, qui, se voyant séparé à la fois de son jeune maître et de la cuisinière qui a tant soin de lui, se cramponne à moi, craignant sans doute de perdre encore ce dernier trésor. Sur ma terrasse, de huit à neuf heures hier, par une des plus belles soirées qui se puissent voir, j'ai revu, en quelques minutes, ma vie tout entière depuis mon extrême enfance, où je lisais, à la lumière du careil 329, assis sur mon petit banc, lou souquet 330, au coin […]

328 À 7 km environ à l'est de Nérac, sur l'actuelle D. 656, entre Nérac et Agen : carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003.
329 Palay (Simin), op. cit., p. 204 : carélh, calélh : chaleil, sorte de lampe à huile en métal ou de terre cuite. Elle diffuse une lumière parcimonieuse, d'où le dicton « abàre coume u carélh », pour désigner un avare ladre.

[…] d'avoir vu, enfant, quatre grands lits aux quatre coins de l'immense cuisine du Petit Léonard, la maison de mon grand-père maternel) et la fontaine dite (en famille) monumentale qui, sous son épaisse voûte, mettra notre magnifique source à l'abri des feuilles d'arbres et autres malpropretés, qui en atténuaient la saveur. La construction de l'édifice coûte bien cher, mais pour des buveurs d'eau comme nous, quoi de plus précieux que la pureté d'une fontaine ? Pendant que le maçon achevait, le matin, la chambrette et, le soir, mettait à la fontaine les dernières truelles de ciment, Bouffartigue employait la journée à établir autour de la volière une petite cour où les malheureux canards et poulets pourront s'ébattre à leur aise. Il était cruel de maintenir, pendant les grandes chaleurs, ces victimes dans l'étroite enceinte du poulailler. À ce carcere duro 332, où l'amaigrissement était inévitable, […] à prendre possession, hier au soir, de leur petit enclos grillagé où ils vont et viennent comme des écoliers en récréation. Adoucissons autant que possible toutes les captivités. La liberté est, pour tous les êtres, une si bonne chose ! À un autre point de vue, la journée du 23 a été aussi très productive. À l'ombre de mon châtaignier, j'ai préparé une notice assez développée et, si je ne m'abuse, assez intéressante, sur Un héros ignoré, le soldat Lapierre, d'Unet 333. Je destine cette notice au Paysan du Sud-Ouest 334, pour lequel son directeur, le petit-fils de M. Guizot 335, M. […] et la sœur Clotilde, Supérieure. On s'est fort amusé ; on a chanté, on a dansé des rondes dans la prairie. J'ai fait servir une petite collation : prunes, amandes, gâteaux, le tout arrosé de flots d'orgeat. J'ai été heureux de la joie de ces enfants. Notre propre bonheur n'est-il pas fait surtout du bonheur des autres ?

332 En italien : emprisonnement étroit, régime pénitentiaire sévère, infligé particulièrement aux prisonniers politiques au début du XIXe siècle : Pellico (Silvio), 1789-1854, affilié au mouvement des Carbonari militant contre l'autorité autrichienne, alors souveraine en Italie du Nord, a rapporté les conditions de sa captivité dans Le mie prigioni [Mes Prisons] (1832), livre qui connut un grand succès.
333 Tamizey de Larroque (Ph.), Le soldat La Pierre d'Unet, Imprimerie George Ferrier, Tonneins, 1891, extr. du Paysan du Sud-Ouest : se rapporte à l'exploit héroïque d'un simple soldat de l'armée du futur général de Toiras, bloquée par les Anglais dans le fort de Saint-Martin, sur l'île de Ré. Il parvint, en nageant 10 à 12 km, malgré une tempête et des chaloupes anglaises, à apporter une lettre, contenue dans une boîte en fer blanc enduite de cire qu'il portait autour du cou, aux forces royales stationnées […]
337 À une quinzaine de km au nord-ouest de Gontaud-de-Nogaret, par l'actuelle D 299 : carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003.
338 Le jugement au tribunal civil de Marmande, enregistré à Gontaud, le 18 février 1904, rendu en faveur de Mlle Gonin, mentionne le « Couvent des Illuminés » et le « Couvent de la petite église » à Gontaud [A.P. Baquier].

10 août. Hier soir, à souper, j'ai été, pendant cinq minutes, entre la vie et la mort. J'ai le malheur d'avoir le gosier très étroit et le tort de manger beaucoup trop vite. Un petit morceau de bœuf rôti - dur comme une pierre - me demeura fort avant au gosier. Malgré tous les efforts, il ne voulait ni remonter ni descendre. C'est en ce moment que j'ai compris combien devait souffrir le loup de la fable. Il n'y a peut-être pas supplice pire au monde. Mon front était inondé des sueurs de l'agonie et je me disais que, dans quelques secondes, si la respiration continuait à me manquer, j'allais avoir le sort du docteur Léon Samendès (sic) 339, que l'on venait d'enterrer deux heures auparavant et qui avait été étouffé par son asthme. J'ai vu en un clin d'œil, avec une admirable lucidité, les diverses conséquences probables de ma suffocation, et j'ai donné un regret particulier à cette si intéressante correspondance de Peiresc et de Gassendi 340 que j'avais passé une partie de la […]

339 Léon Samondès, né et mort au château Laffitte, à Gontaud (4 mai 1852-20 octobre 1935), reçu médecin en 1878. Il avait épousé, le 12 juin 1909, Marie-Joseph Florent-Gonin, née et morte à Gontaud (27 août 1866-14 janvier 1946), née de père inconnu et de Claudine (Marguerite) Brière dite Gonin, fille adoptive du père Gonin - auquel Ph. Tamizey de Larroque a consacré une monographie - et célibataire. Elle était directrice du couvent-pensionnat de la petite église. Voir note précédente [A.P. Baquier].
340 Pierre Gassendi, né près de Digne en 1592, mort en 1655. Quelques protections lui permirent d'étudier d'abord à Digne, puis à Aix-en-Provence. En 1608, il obtint la chaire de rhétorique à Digne. Docteur en théologie à Avignon en 1614, il prit les ordres en 1617 et obtint au concours, la même année, une chaire à l'université d'Aix, qu'il occupa jusqu'en 1623. Nommé prévôt à l'église de Digne en 1626, il ne séjourna en fait dans cette ville que de 1648 à 1653, c'est-à-dire pendant la Fronde. Il résidait, en fait, à Paris, où il connut, à l'hôtel des monnaies, nombre de savants qui s'y réunissaient. Il fut professeur de mathématiques au […]
Tout le respect que je lui portais ne m'empêchait pas, dans un rapide mouvement de mauvaise humeur, de l'envoyer au diable (in petto 342). Mais combien, plus tard, j'ai béni cette salutaire rigueur ! Soit pour ma santé, soit pour mon travail, soit pour mon agrément, je m'applaudis tous les jours d'avoir été habitué à quitter ma couchette de bonne heure. Comme on respire bien, quand on se lève en devançant ce grand paresseux de Soleil ! Quels flots d'air pur dans la poitrine ! Quelles provisions de forces et d'heures de liberté pour le travail ! Et quelle joie d'assister à ces beaux spectacles qui s'appellent l'aube, l'aurore, le matin ! -Je reviens à la météorologie -dont tout ceci m'a bien éloigné -pour noter qu'il tombe en ce moment -pendant que le soleil brille -une ondée qui va compléter à merveille mes insuffisants arrosages de chaque jour. Hier, c'était autour de moi une sorte d'Arabie pétrée 343. La douce pluie d'aujourd'hui va nous rendre la région des oasis.
342 Expression italienne littéralement « dans la poitrine » : c'est-à-dire dans le secret du coeur, en secret.

2 septembre
[…] journée entière. Avec un causeur tel que lui la journée s'écoulera rapide et agréable. Autant je déteste les visites des ennuyeux et des importuns, qui me volent mon temps sans aucune compensation, autant j'aime les visites des hommes de Science et d'esprit dont la conversation est pour moi une fortifiante récréation que je compare à ces parties de barres où les collégiens, après les heures de classes et d'études, font assaut de vigueur et d'agilité. La causerie n'est-elle pas la gymnastique de l'esprit ? M. Charles de Ribbe 345, un des meilleurs amis que je doive à Peiresc, et qui m'a si gracieusement reçu à sa table à Aix en 1880, vient de m'envoyer sa photographie. Je retrouve là cette douce et majestueuse physionomie qui a quelque chose de patriarcal. Notre commun ami M. de Berluc 346 me disait devant lui, un jour que nous nous promenions tous les trois sur le Cours Mirabeau 347, qu'il avait une figure épiscopale. Rien n'est plus vrai, s'il s'agit des évêques du bon vieux temps. M. de Ribbe me promet pour le Polybiblion 348 un compte rendu du Livre de raison de la famille Dudrot 349. Ce sera un compte rendu magistral.
349 [Livre de raison] de la famille Dudrot de Capdebosc (1522-1675), éd. Tamizey de Larroque (Ph.), Paris, Picard, 1891. Le domaine de Cap de Bosc est situé sur la paroisse de Marcadis, commune de Moncrabeau (Lot-et-Garonne), sur la rive droite de l'Osse, très près de la limite du département du Gers. Le domaine était, en 1891, dans la famille depuis le XVIe siècle. En effet, représentée au début du XVIe siècle par deux frères, Pierre (licencié au cadastre de Condom en 1536) et Michel qui finalement apparaît seul, dont les descendants se perpétuent jusqu'à nos jours [après] avoir adopté pour résidence le domaine de Cap-de-Bosc. Le dernier du nom Dudrot connu par Tamizey était Paul-Fernand Dudrot, habitant avec son père. Il avait deux soeurs : Marie-Antoinette, mariée à Ernest Baylin, résidant au Boué, près de Moncrabeau, et Gabrielle-Josèphe, mariée au Dr Labat, médecin à Nérac [A.P. Baquier].
Voici un certificat délivré à mon grand-père maternel. Je le transcris d'après une photographie de l'original prise [au] Ministère de la Guerre et qui m'a été donnée par Henri Delmas de Grammont 350 :
note en marge : Certificat de noblesse délivré à Jean Joseph Delmas de Grammont, père de ma mère.
gentilshommes de la province de Guienne habitans de la ville de Gastillonnès en Agenois, certifions à tous ceux qu'il appartiendra que noble Jean Joseph actuellement Chevalier de Grammont proposé au Régiment d'Enghien pour un employ, est le fils de noble Raimont Delmas de Grammont escuyer et de dame Marie de Jourdat tous deux habitans de la dite ville de Castillonnès 351 prouvé par l'extrait de baptême cy joint ; comme certifions aussi que cette maison a été de tous les tems réputée noble dans le pays ; en foy de quoy avons signés le presant certificat comme conforme à la vérité. Fait à Castillonnès le 15 janvier de l'année mil sept cent soixante-deux. Gironde, lieutenant colonel du royal Roussillon. Baillet Thoumaseau d'Abzac, brigadier des armées du roy et son lieutenant de la ville de Cambray.
350 Il s'agit vraisemblablement de François-Philippe-Henri Delmas de Grammont, historien, neveu du général Delmas de Grammont, né le 7 août 1830, à Versailles. Président de la Société Historique Algérienne, il a publié une « Lettre à M. Tamizey de Larroque sur un livre de M. Abel Ducoudat (Juvinilia, Virilia) », insérée dans la Revue d'Aquitaine, 1866, p. 342 ; Le R'Azouat est-il l'oeuvre de Kheir-ed-Dinn (Barberousse) ?, Impr. X. Dutels, Villeneuve-sur-Lot, 1873, in-8°, 41 p. ; il a également contribué au n° 3 des Plaquettes gontaudaises, publiées par Ph. Tamizey de Larroque : Histoire du massacre des Turcs à Marseille en 1620. Avant-propos, Notes et Appendice, H. Champion, Paris, Gounouilou, Bordeaux, 1879 ; la Relation de l'Expédition de Charles Quint contre Alger, par Durand de Villegaignon, Aubry, Paris, 1874. Il préparait, en 1886, la publication d'un recueil : Documents inédits pour servir l'histoire de l'Afrique française : Andrieu (J.), op. cit., t. 1, p. 336-337.
351 À une vingtaine de km au nord-est de Miramont-de-Guyenne, par l'actuelle D1 : carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003.

Douloureux souvenirs ! C'est à pareil jour, il y a vingt ans, que j'appris, en quittant Paris, la nouvelle du désastre de Sedan 352, dans lequel fut glorieusement enseveli, au milieu de la charge désespérée de la cavalerie commandée par le général Marguerite 353, mon jeune et cher cousin et beau-frère, le lieutenant Robert Delmas de Grammont 354, digne petit-fils du brave colonel mentionné ci-dessus et qui fut un des héros de […]
353 Jean-Auguste Margueritte (1823-1870) : il se distingua pendant l'expédition du Mexique (1860-1863), et fut promu général de brigade en 1866. Au début de la guerre de 1870, il commandait la première brigade (1er et 3e chasseurs d'Afrique) de la division du Barail. Sa belle marche stratégique à travers l'Argonne pour rejoindre l'armée de Châlons à Sedan lui valut les étoiles de général de division (30 août). Le surlendemain sur le plateau d'Illy, il reçut en pleine figure une balle qui lui fracassa la mâchoire. Il avait publié Les Chasses de l'Algérie et des Notes sur les Arabes de l'Afrique du Nord (1862).
354 Fils de Jean-Urbain Delmas de Grammont et de Marie de James de Longueville, frère de Henri et d'Olivia, la seconde épouse de Philippe Tamizey de Larroque [A.P. Baquier].

6 septembre
[J'ai trouvé dans] un magnifique article de Zola au Figaro 357 de la veille, ces phrases que je transcris avec émotion, avec admiration : « Une nation qui a survécu à une pareille catastrophe est une nation immortelle, invincible dans les âges. De cette page affreuse de Sedan, je voudrais qu'il en sortit une vivace confiance, le cri même de notre relèvement.
Par une nuit de lune claire, je suis monté du fond-de-Givonne vers le plateau d'Illy 358 , suivant les chemins creux, traversant les champs où dorment tant de nos morts. Et il m'a semblé que tous ces braves gens se soulevaient de terre, les fantassins frappés isolément derrière une haie, les cavaliers de l'héroïque charge tombés en masse, et que tous ils avaient la joie du 355 Ville du Bas-Rhin, à 68 km. de Strasbourg sur la Lauter et sur la frontière du Palatinat. Théâtre de bataille, lors de la guerre de succession d'Espagne et en 1793 (avant la prise de Landau par Hoche (fin 1793). Le 4 août 1870, la division Abel Douai y fut écrasée par les V e et XI e corps prussiens et par le II e corps bavarois. à Paris en 1873 par Edouard Hervé. Ce fut le premier grand journal politique quotidien mis en vente au prix de cinq centimes.Lorsqu'éclata l'affaire Dreyfus, de Kerhohant, frère d'Ed. Hervé, alors malade, fit dans Le Soleil, qui perdit de ce fait une partie de sa clientèle, une ardente campagne en faveur du capitaine condamné. Après la mort d'Hervé (1899), Ambroise Rendu devint directeur du journal redevenu purement monarchiste. Il fut bientôt remplacé par Louis Baragnon. Il a disparu à la veille de la Première Guerre mondiale. Voir El Gammal(Jean), Politique et poids du passé dans la France « fin de siècle », P.U.LIM, 1999, p. 89. 357 Depuis la fin de l'Empire et la libéralisation de la presse, ÉmileZola (1840Zola ( -1902) ) mène parallèlement une double carrière de journaliste et de romancier, dans les titres de référence de la presse de prestige à laquelle appartient Le Figaro qui se distingue des journaux populaires comme Le Journal, Le petit Journal, Le Matin ou Le Petit Parisien. En 1891, même si le groupe d'hommes de lettres qu'il réunissait, le jeudi soir, à Médan s'est dispersé et que la fresque des Rougon-Macquart, entreprise en 1869, le lasse quelque peu, il est au faîte de la réussite matérielle et de la reconnaissance de son talent. Ce n'est qu'à partir de la publication d'une série d'articles dans le Figaro (25 décembre 1895, 9 et 16 mai 1896) qu'il commence à prendre position contre les campagnes de presse contre la République et les Juifs, ne devenant véritablement le champion du dreyfusisme qu'après la publication du fameux article « J'accuse » dans L'Aurore, le 13 janvier 1898 : Gauthier (Robert) éd., « Dreyfusards ! », souvenirs de Mathieu Dreyfus et autres inédits, Archives, Gallimard-Julliard, 1965. p. 46. 358 Dans les Ardennes à 16 km. de Mézières, près de la Givonne. C'est du Calvaire d'Illy que partirent les héroïques charges de cavalerie conduites par le général Margueritte, puis par le général de Galliffet, le 1 er septembre 1870 et qui arrachèrent au roi de Prusse, le mot fameux : « Oh ! les braves gens ». sacrifice utile, de la grande moisson d'espérances qui germe aujourd'hui de leur sang. » Je citais, avant-hier, le saisissant passage de l'article de Zola. Je viens d'achever la lecture de son dernier livre, et j'ai admiré plus que jamais la puissance de son talent, talent inégal, si l'on veut, trop souvent entaché de bizarrerie et surtout d'inconvenance, mais, au demeurant, un des plus vigoureux de notre époque. Dans l'Argent 359 , que de pages enlevantes ! Que d'inoubliables tableaux ! Quelles magnifiques flammes de verve et d'éloquence mêlées à quelques scories qui disparaissent dans l'éclatante beauté de l'ensemble des descriptions et des récits ! 
Depuis mes jeunes années, où les romans de Balzac étaient si émouvants pour moi, je n'ai rien lu de plus empoignant que les études de Zola et surtout que sa dernière étude. Si ce merveilleux écrivain pouvait se débarrasser des défauts qui déparent son oeuvre gigantesque, s'il allait en se purifiant de plus en plus, ne gardant de sa hardiesse extrême que ce qu'il faut pour donner de grands coups d'aile et s'élever très haut, quelle place il occuperait parmi les plus forts !

11 septembre
Aujourd'hui vendredi départ de M. Louis Audiat 360 que nous possédions depuis dimanche. Le biographe de Palissy 361 est enchanté de son séjour au pavillon Peiresc et il nous a promis d'y revenir tous les ans. C'est un homme très aimable, très instruit. Il a payé son écot en spirituelles causeries. J'ai été heureux de bien le recevoir. J'ai toujours regardé un hôte comme sacré, même un hôte indifférent. Mais combien les devoirs de l'hospitalité deviennent doux à remplir quand on a chez soi un homme bon, aimable et savant ! Combien nous avons taillé de bavettes à l'ombre du châtaignier, sous un ciel qui était si pur et si bleu que le charme de l'entretien en était doublé. Pendant cinq journées je n'ai pas travaillé un seul moment, mais il faut bien de temps en temps prendre des vacances. Et puis comme je vais me rattraper ! J'étais tout à mon hôte : je vais être tout à Peiresc. La récréation prise pendant une semaine me permettra d'apporter plus d'ardeur au travail. Je serai l'athlète qui, après le repos, recommence la lutte avec une vigueur nouvelle.
359 Dix-huitième volume de la série des Rougon-Macquart (20 au total), publié en 1891.
360 Louis Audiat : né à Moulins en 1833, conservateur de la bibliothèque de Saintes où il a fondé une société des arts, sciences et belles-lettres et une société archéologique et historique de la Saintonge. Poète élégiaque, il a publié Poésies (1854) et Nouvelles poésies (1862) ; Essai sur l'imprimerie en Saintonge et en Aunis, Pons, 1879 et « Un petit-neveu de Châteaubriand, Édouard de Blossac, ancien sous-préfet de Marmande », extr. Revue de l'Agenais, 1877, tiré à part, Agen, Lamy, 1877, 35 p. Auteur de plusieurs études sur B. Palissy, chez Aubry, Paris, 1861, XXI-358 pp. et chez Didier, Paris, 1868, in-12. Voir Andrieu (J.), Bibliographie générale de l'Agenais, t. 2, 1886, p. 29-30. -Correspondance de Tamizey avec L. Audiat : A.D. Lot-et-Garonne, 16 J 3, correspondance d'érudits : Fonds Tamizey de Larroque.

25 septembre
Hier, à sept heures du soir, en sonnant ma cloche, j'ai couru un grand danger et je puis dire en style oriental que l'ange de la mort m'a frôlé de son aile. La barre de fer à laquelle la cloche était suspendue et qui était enfoncée dans la muraille en a été brusquement détachée : tout l'appareil a roulé sur la terrasse en brisant plusieurs barreaux de la balustrade, passant à quelques centimètres à peine de ma tête. J'ai été aussi près que possible de l'écrasement, car un tel poids tombant de si haut et lancé avec tant de force, c'était la mort certaine et mon crâne, que rien ne protégeait, eût été broyé comme le blé sous la meule. Voilà deux accidents en quelques semaines, presque à la même heure ! L'étouffement du 10 août et la chûte (sic) de la cloche sont-ils des avertissements ? Les bons paysans, mes voisins, ont l'habitude de dire : Lous tres trucs fan la lutte 362. Gare donc au troisième coup !

2 octobre
Hier, jeudi, retour d'Ambrus 363 où j'ai eu comme toujours le coeur bien serré auprès de la tombe de ma mère bien aimée.
Combien son incomparable affection me manque ! Et combien j'ai besoin de me dire que chaque pas que je fais me rapproche d['elle !]

11 octobre
[Revenant de la] maison de campagne de M. et de Mme de Naurois 365, où j'ai passé quelques heures charmantes entre le descendant du grand Racine et la descendante de ce président de Sevin 366 qui fut un des correspondants de Peiresc, j'ai vu, de sept à huit heures, étant en voiture découverte, briller du plus vif éclat la planète Jupiter. Sa magnifique lumière inondait tout le ciel. Ah ! le beau spectacle et qui était accompagné du bruit si poétique du feuillage des grands pins ! Je dois noter la splendeur particulière de l'automne dont nous jouissons. Toute cette dernière semaine surtout, le ciel a été d'une admirable pureté et le soleil a eu l'éclat des belles journées d'été. Hier, la chaleur a été telle que j'ai été obligé de quitter mon très léger vêtement d'alpaga et de travailler en chemise, comme en pleine canicule. Dans la soirée, entre six et sept heures, je suis resté en contemplation devant le plus magnifique ciel étoilé. J'étais assis sur la pierre de mon tombeau, autour duquel et même sur lequel jouait mon petit chat roux, qui m'accompagne si souvent dans mes promenades. J'ai vu beaucoup d'étoiles filantes, fusées du grand feu d'artifice tiré sans cesse là-haut.
note en marge : 25 degrés à Agen à 1 heure.
362 « Truc » : mot gascon signifiant coup, battement, action habile, combine, tour de main ; « lute » veut dire lutte, jeu d'athlètes, combat, altercation, résistance. L'expression proverbiale utilisée par Tamizey reprend celle signalée par S. Palay : « aus très cops, lute ! » : à la troisième fois, à la troisième invective, on se bat : Palay (S.), op. cit.
[La] carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003, indique un lieu-dit « Bousquet » au sud-ouest d'Agen, à 4 km environ à l'ouest d'Estillac sur la rive gauche de la Garonne.
367 Voir 10 septembre 1890.
368 Au nord-est de Gontaud-de-Nogaret : carte 1738 Est, Seyches, série bleue 1 : 25 000, I.G.N., Paris, 1987.

14 octobre
Visite trop courte de ma chère cousine Amélie de Bentzmann 367. Elle a trouvé mon pavillon charmant et le panorama superbe. Qu'eût-elle dit un jour de beau soleil ? Elle nous a promis de revenir bientôt et de nous donner une journée entière. Que de souvenirs cette rapide apparition évoque en moi ! Toute ma jeunesse vient de revivre en quelques minutes. Mon pauvre coeur retrouve toutes ses vieilles joies et aussi toutes ses vieilles souffrances.

18 octobre
Hier, j'ai recommencé mes plantations d'automne. J'ai mis en terre six pieds de lavande. Je souhaite à ces six pieds le développement qu'a pris à Poncet 368 chez mon voisin le vieux Bégoule -lequel a beaucoup connu mon grand-père paternel qui, déjà pas mal âgé, invitait souvent le jeune homme à croquer un morceau, mais qui, pour maintenir la distance des rangs, le laissait manger debout, pendant que lui-même était assis -le développement, dis-je, qu'a pris la lavande probablement presque aussi vieille que son propriétaire, lavande qu'on peut appeler arborescente car elle forme un massif d'une épaisseur et d'une hauteur vraiment extraordinaires. J'aime beaucoup la lavande qui me rappelle ma chère Provence, cette gueuse parfumée. -Pour passer des fleurs naturelles aux fleurs artificielles, j'ajouterai que j'ai eu l'occasion, hier, de bénir une fois de plus le bon Dieu de la facilité de travail qu'il a daigné me donner. J'avais à rendre compte de la délicieuse anthologie publiée par M. A.
Desbousse sous le titre de : Anacréon et les poèmes anacréontiques 369 . Comme j'avais déjà bien lu ce recueil, je n'ai eu qu'à écrire mes impressions et appréciations. En un quart d'heure sans prendre la peine de faire un brouillon, et sans avoir à raturer un seul mot, j'ai rédigé deux pages envoyées sur le champ à la Revue critique 370 et qui ne sont pas les plus mauvaises de toutes celles que je fournis, depuis plus d'un quart de siècle, à ce recueil si renommé ! puissé-je garder encore long-temps ce don d'improvisation écrite qui me permet de tenir tête à tant d'exigences littéraires ! 20 octobre Depuis quelque temps, les visites féminines pleuvent à Larroque. Deux bien aimables femmes me quittent à l'instant, Madame de Pérès et sa fille, Madame de Lavaissière 371 .Malheureusement elles ont eu à braver la boue et le mauvais temps et cela a bien gâté leur plaisir, comme le mien. J'aurais payé bien cher aujourd'hui quelques rayons de soleil. Puissent mes deux gracieuses amies revenir par un beau jour où tout sourira, la campagne comme les visages ! 31 octobre 369 Anacréon, poète grec (560 ? -478 ? av. J.-C.), Anacréon et les poèmes anacréontiques. Texte grec avec les traductions et imitations des poètes du XVI e siècle, publié par Achille Delboulle, Lemale, Le Havre, 1891, XII-185 p. -A.D. Lot-et-Garonne, 16 J 11, correspondance d'érudits : Fonds Tamizey de Larroque. 370 Revue critique d'histoire et de littérature, publiée à Paris à partir de 1866 (BNF, 8°Z.272). 371 Madame de Pérès, née Inès Brousteau, épouse d'Auguste Lesueur de Pérès dont la fille Marie Marthe était mariée à Lodoïs de La Vaissière. Elle était cousine issue de germains de Philippe Tamizey de Larroque par les Vivie, Bailhès, Paloque et d'Antraygues [A.P. Baquier]. Sur les La Vayssière, famille ancienne originaire d'Auvergne et fixée en Agenais au XIII e siècle, voir Meller (P.), op. cit., t. 2, p. 327. ma solitude. Hier, ce fut une véritable avalanche. À deux heures, arrivèrent les deux amis de mon fils, M. Joseph Coulet 372 , notre nouveau notaire, et M. Raymond Dubrana. À peine avaient-ils pris congé de moi que surgirent M. et M me Beaune 373 , le comte et la comtesse de Dienne 374 , M. de Dordaygue 375 , beau-frère du comte de Dienne, et trois enfants, en tout huit personnes. Une bien aimable invasion ! M. de Dienne avait voulu venir me remercier de vive voix de mon article du Bulletin critique sur son savant livre au sujet du Dessèchement des marais 376 . Il m'a apporté, outre ses actions de grâces, quatre livres de raison qui lui avaient été confiés 372 Voir 12 novembre 1890.373 Guillaume-Marie-Joseph Beaune, né à Lévignac en 1844. Après avoir complété au Cours supérieur de la Capelle St-Mesmin les études classiques commencées au Collège des Jésuites de Bordeaux, il fit son droit à Paris et fut reçu avocat en 1856. Inscrit au barreau d'Agen en 1869, il entra bientôt dans la magistrature, fut nommé juge suppléant à Marmande, en 1871, substitut à Lombez la même année, passa à Nérac en 1872 et à Villeneuve, en 1873. Il se retira volontairement le 27 février 1879. Il a signé Les Décrets du 29 mars 1880 et les lois sur l'Enseignement, Paris, Jules Gervais, 1881, in-12 de 368 p. Ce volume daté de Bistauzac, près Gontaud, a été l'objet de plusieurs analyses et comptes-rendus dans la presse parisienne : Andrieu (J.), op. cit., t. 1, p. 59. Madame Beaune était née de Ricaud, au château de Bistauzac [A.P. Baquier]. -Correspondance de Tamizey avec Joseph Beaune : A.D. 
Lot-et-Garonne, 16 J 5, correspondance d'érudits : Fonds Tamizey de Larroque. 374 La première publication du comte Édouard de Dienne s'intitule Les capitaines saintongeais au XVI e siècle. Jacques de Rabar, Impr. N. Texier, La Rochelle, 1879, 7 p. Il s'est intéressé, à partir de 1886, à l'assèchement des marais et des lacs. Une synthèse de ses travaux parut sous le titre Histoire du déssèchement des lacs et marais en France avant 1789, H. Champion, Paris, 1891, 590 p. Il a signé également une série d'articles d'histoire locale, portant sur le début de la période moderne dans la Revue de l'Agenais principalement. Il a également publié dans la Revue de Haute-Auvergne, des articles historiques sur Aurillac, Vic et la vicomté de Carlat, entre 1899 et 1906. Enfin et surtout, il est l'auteur de la Bibliographie des hommages rendus à la mémoire de Ph. Tamizey de Larroque, correspondant de l'Institut, précédée de notes intimes, Impr. et lithographie agenaises, Agen, 1901, 65 p. Voir Catalogue des imprimés de la Bibliothèque Nationale. Édouard de Dienne avait épousé la soeur de Louis de Dordaygue [A.P. Baquier]. -Correspondance de Tamizey avec le comte de Dienne : A.D. Lot-et-Garonne, 16 J 12, correspondance d'érudits : Fonds Tamizey de Larroque. 375 Louis de Dordaygue était marié à une demoiselle de Beaupuy. Quatre enfants naquirent de cette union dont l'aînée, Mathilde, épousa Robert Cramaix-Hugonis, petit-neveu de Ph. Tamizey de Larroque [A.P. Baquier]. pour moi par notre confrère M. P. Le Blanc (de Brioude) 377 . Mes hôtes d'un moment ont été ravis de tout ce qu'ils ont vu du haut de ma terrasse et de mon balcon. Précisément l'horizon était limpide et lumineux. Il avait un peu gelé, le matin, et un beau soleil faisait de la journée du 30 octobre une des plus magnifiques journées de l'automne. 4 novembre Hier, encore une visite, celle du jeune poète Jean Carrère, l'auteur de Ce qui renaît toujours 378 . Je l'ai vu avec beaucoup de plaisir, car c'est un garçon de talent et qui fera parler de lui. Déjà son recueil a eu beaucoup de succès et sa préface, retentissante comme le clairon d'un jour de bataille, a été très discutée par les uns, très louée par les autres. Il y a des idées d'une généreuse hardiesse soutenues avec toute la verve gasconne. J'éprouve une sympathie d'autant plus vive pour Jean Carrère qu'il est né à Gontaud sous mon Consulat, que son grand-père a été un de mes trois secrétaires de mairie et que sa grand'mère a aidé à venir au monde ma pauvre petite 377 Voir 20 mai 1892. 378 Antoine-Louis-Jean Carrère (1865-1932), né à Gontaud-de-Nogaret, il signait Arthur Carrère les poésies, chroniques, silhouettes d'avocats, de députés et d'artistes, notices et fantaisies agenaises que publiait l'Écho de Gascogne dont il était un collaborateur actif utilisant parfois les pseudonymes de Monocle, Criquette, Pierrette etc. Parmi lesquelles : La Vie ironique. -Ce farceur de Birac. -Émile Zola. -Abel Boyé. -Le sabot de la reine. Conte de Noël (1887) ; Poème d'hiver. -Lever de nuit. -Spleen d'automne (1888) etc. Poète, journaliste, J. Carrère était aussi littérateur et conférencier : Essai sur le génie de Jasmin à propos de l'éd. défintive de ses oeuvres [par Augustin Boyer](Journal de Lot-et-Garonne, n°s des 1, 8 et 15 décembre 1889) ; Conférence sur la Charité, impr. V ve Lamy, Agen, 1890, 30 p. 
(Conférence faite à Agen en 1890 devant les élèves du lycée de cette ville); La Vénus du Mas (article littéraire paru dans Le Siècle, 4 octobre 1890 et reproduit dans le Journal de Lot-et-Garonne du 8 octobre). Jean Carrère qui eut son heure de célébrité au momant des troubles du Quartier latin (affaire Nuga, juillet 1893) était le cousin d'Henri-Adrien Carrère, instituteur, né à Xaintrailles, en 1850, d'abord élève-maître à l'École normale de la Sauve de 1866 à 1869, nommé instituteur public à Andiran en 1869, puis à Francescas (1875), à Barbaste (1878) et enfin à Nérac en novembre 1882, comme directeur de l'École primaire publique. Il est l'auteur d'une méthode originale d'apprentissage de la lecture par l'écriture primée à l'exposition scolaire de 1890. Une nécrologie de Jean Carrère en 3 pages a été publiée dans la Revue de l'Agenais, 1932, p. 249, signée R. Bonnat : Andrieu (J.), op. cit., t. 3, p. 38-39. -[A.P. Baquier]. 8 novembre Admirable été de la Saint-Martin ! Les matinées et les soirées sont froides (2 ou 3 degrés au-dessous de zéro et formation de la glace même sur la hauteur où nous sommes), mais journées splendides, surtout de 9 à 3 heures. Mon cher confrère et ami Gaston Paris 379 vient de m'envoyer sa nouvelle édition d'Extraits de la Chanson de Roland et j'y trouve (p. 7) un vers qui s'applique à merveille à ce regain d'été dont je jouis si délicieusement : Clers fut li jorz e bels fut Li Soleilz. 12 novembre Hier, j'ai acheté, avec l'oie traditionnelle de la Saint-Martin, 12 pruniers, 6 pêchers, 3 poiriers, 3 abricotiers, 12 sapinettes, 12 fusains variés, 20 thuias (sic) de Chine. Aujourd'hui même on plante les arbres verts, et demain on plantera le reste. J'entends dire qu'il s'est fondé une Société dont les membres s'engagent à mettre, tous les ans, un arbre en terre. Je mériterais d'être un des dignitaires de cette Société, moi qui, depuis que je suis propriétaire de Larroque, ai planté en moyenne vingt arbres par an, sans compter les arbustes. 1839-1903), fils de Paulin Paris [voir Tamizey de Larroque (Ph.), À la mémoire de Alexis-Paulin Paris, impr. Durand frères, s.d., Chartres, 14 p. (extr. du Bulletin du Bibliophile, mars-avril 1881)], spécialiste d'histoire littéraire médiévale, professeur au Collège de France. Gaston Paris, ancien élève de l'École des chartes, lui aussi féru d'histoire littéraire médiévale, réputé pour son immense érudition, a imposé les règles et les principes de la philologie scientifique. Il fut directeur à l'École des hautes études, académicien et administrateur du Collège de France. Il a pris une grande part à la fondation et à la direction de la Revue critique (1866) et de la grande revue de philologie, Romania (1872). Il fut aussi un éminent collaborateur du Journal des Savants et de l'Histoire littéraire de la France. -Correspondance de Tamizey avec Paulin Paris : A.D. Lot-et-Garonne, 16 J 21, correspondance d'érudits : Fonds Tamizey de Larroque. refroidissement en allant à Gontaud le dimanche 8 novembre. Le mal s'est aggravé le vendredi 13, jour où j'ai fait planter les douze ormeaux achetés à mes voisins Grand et où j'ai subi les atteintes d'un vent très froid. Il m'a fallu garder la chambre pendant une quinzaine de jours et suspendre tout travail. J'ai beaucoup souffert, surtout par la difficulté de respirer (Bronchite ? Pneumonie ? Influenza ?). Il ne me reste aujourd'hui que beaucoup de faiblesse. J'ai recommencé à travailler, ayant corrigé, hier, les épreuves du Peiresc amoncelées sur mon bureau. 
Béni soit Dieu qui n'a pas voulu que ma grande publication fût inachevée ! -Pendant que j'étais si malade, j'ai appris la mort de mon ancien condisciple du collège de Marmande, Adrien de Forcade 380 , qui fut Conseiller à la Cour de Bordeaux, de M. Henri de Groussou 381 , qui fut avocat général à la Cour d'Agen, avec lequel j'avais depuis longtemps d'excellentes relations, enfin de deux de mes savants amis, M. Ferdinand Pouy 382 d'Amiens, membre de la Société des Antiquaires de Picardie, et M. Gustave Bascle de Lagrèze 383 , ancien Conseiller à la Cour de 380 Sur la famille de Forcade : Meller (P.), de ses oeuvres dans le Catalogue des imprimés de la Bibliothèque Nationale s'étale sur 4 pages et comprend 32 articles. Il s'agit d'une part d'ouvrages de droit : Études sur la révision du Code forestier, impr. de J.-M. Dossun, 1851, 84 p. ; Droit criminel à l'usage des jurés, science morale, code et vocabulaire du jury, Cotillon, Paris, 1854, 454 p. (2 éditions) ; Observations sur les lacunes du Code pénal, Cotillon, Paris, 1856, 34 p. ; De la Réorganisation de la magistrature, impr. d'E. Vignancour, Pau, 1871, 93 p. ; Le Saut du procureur, E. Dentu, Paris, 1879, in-16, 292 p. La plus grande partie de ses publications est j'avais passé deux journées pendant lesquelles il fut le plus gracieux de tous les hôtes pour moi. 11 décembre Un doux soleil vient favoriser ma convalescence et m'invite à me promener. Qu'il fait bon sous ses rayons après un emprisonnement de près de trois semaines ! Je suis encore bien faible et je marche difficilement. J'ai pourtant voulu aller m'asseoir sur ma pierre tombale, après avoir tant failli être mis sous elle. J'ai aussi voulu voir mes plantations qui ont été continuées pendant ma maladie. Arbres verts, rosiers, lierre à larges feuilles, tout abonde, mais tout prospèrera-til ? 15 décembre J'ai reçu force affectueuses lettres au sujet de ma maladie et de ma convalescence. J'en reproduis deux qui m'ont touché d'une façon toute particulière : « Paris, 11 décembre 1891. -Mon cher ami, Vous ne sauriez vous imaginer le plaisir que j'ai éprouvé en revoyant votre écriture. Je vais le faire partager aujourd'hui à nos amis de l'Académie, qui sont, je vous l'assure, fort impatients d'apprendre votre rétablissement. Je ne veux pas vous imposer la fatigue de lire une lettre un peu longue. Il me faut cependant vous remercier du charmant article que la Revue critique vient de publier sur Pierre Magen (sic) 384 . La poste va peut-être apporter au pavillon Peiresc, en même consacrée à l'histoire de la Bigorre et du Béarn, avec un intérêt particulier pour le patrimoine religieux avec notamment Le château de Lourdes et la grotte de l'apparition, T. Telmon, Tarbes, 1875, in-16, 252 p. (3 éditions) ; il s'est intéressé également à l'histoire de la Suède et a procuré une édition de Marca (Pierre de), Antiquités du Béarn…, Pau, 1846 et de Charles VI, roi de Suède, Légendes et poèmes scandinaves, Paris, 1863. -A.D. Lot-et-Garonne, 16 J 17, correspondance d'érudits : Fonds Tamizey de Larroque. temps que cette lettre, un petit paquet d'imprimés ; mais vous ne l'ouvrirez que quand vous n'aurez rien de mieux à faire et que la prudence vous permettra de revenir à vos studieuses et actives occupations. Mais je vous conjure de modérer votre ardeur au travail et de ménager votre précieuse santé. -Tout à vous. L. Delisle. » « Paris, ce 12 décembre 1891. 
-Mon cher confrère et ami, En même temps que je voulais vous remercier pour l'article, beaucoup trop aimable, mais qui m'a bien touché et (avouons-le) bien flatté, que vous avez donné dans la Revue critique sur mon petit livre, j'ai appris hier à l'Académie que vous aviez été sérieusement malade, et, du même coup, que vous étiez en très bonne voie de complet rétablissement. J'ai été très ému de cette nouvelle et je suis bien heureux de ne l'avoir eue que complétée. J'espère que vous allez être bientôt tout à fait remis et en état de reprendre votre merveilleuse activité. Je ne veux pas vous en écrire plus long pour ne pas vous fatiguer, mais je tenais à vous serrer la main avec une double vigueur, pour remercier le critique (Peut-on vraiment l'appeler ainsi ?) et dire à l'ami (Sur ce nom-là point de doute) la part de coeur que je prends à tout ce qui le touche. Votre bien dévoué. G. Paris. »

20 décembre
Faisons un peu de météorologie, sujet que j'ai fort négligé depuis longtemps. J'aurais dû marquer ici l'extraordinaire douceur de la température pendant la seconde moitié de novembre et pendant la première moitié de décembre. Nous avons eu souvent, en ces deux mois, 10 à 12 degrés au-dessus de zéro, le matin. Le 16 décembre, nous avions encore 5 à 6 degrés de chaleur. La situation a changé seulement depuis trois jours. Le 18 décembre, le thermomètre était à 2 degrés au-dessous de zéro et, le lendemain, à 5 degrés idem. (Le matin, bien entendu, car, dans la journée, sous l'influence d'un bon soleil, il remontait de plusieurs degrés.) (*) Allons-nous voir reparaître les grands froids de l'hiver dernier ?
note en marge : Le 20, 6 degrés 50 ; le 21, 6°25 ; le 22, 4°75.

30 décembre
(*) Aujourd'hui j'accomplis ma 63ème année. Trois fois j'ai pu craindre que ce ne fut la dernière : le jour où j'ai failli être étouffé en soupant, le jour où j'ai failli être écrasé par la chûte (sic) de ma cloche, enfin le jour où, pendant ma maladie, je ne pouvais presque plus respirer. Devant ce triple avertissement, dois-je me préparer à partir bientôt ? On me conseille de ménager mes forces, de ne plus autant travailler. Mais le pourrais-je ? Pas plus que le poisson ne pourrait vivre hors de l'eau. Je calculais, un de ces derniers soirs, que, si je vis assez longtemps pour achever le Peiresc et ses correspondants, je n'aurai guère publié moins de dix mille documents inédits environ. (J'en ai inséré trois mille au moins dans les vingt premiers volumes des Archives historiques du dépt. de la Gironde 385). Le nombre de mes articles de [revues est aussi fort] élevé. Dans la Revue Critique seulement, à laquelle je collabore depuis un quart de siècle, j'ai pondu près d'un millier d'articles. (À raison de 40 par an en moyenne. On en trouve quelquefois 2 et même 5 dans un seul numéro.) On voit que j'ai bien le droit de demander que l'on inscrive sur ma tombe Ci-gît un travailleur.
note en marge : En cet avant-dernier jour de l'année, le thermomètre à Agen indiquait (5 h. du matin) une température de printemps : 10 degrés au-dessus de zéro.
Je continue mes plantations d'arbres et d'arbustes avec un redoublement d'ardeur. J'ai fait planter un ormeau magnifique -qui sera bientôt un orme -, tout près de la volière. De même que M. Alphand 386, qui vient de mourir quand j'étais moi-même presque mourant, et que j'ai vu jadis, étant maire, signer […]
386 Alphonse Alphand (1817-1891), né à Grenoble et dauphinois d'origine, fils d'un lieutenant-colonel, ancien polytechnicien.
Il vint à Paris, en 1834, terminer au Lycée Charlemagne des études entreprises au petit séminaire et entra lui-même à l'École polytechnique en 1835, puis à l'École des ponts et chaussées en 1838. Il fut d'abord ingénieur à Bordeaux où se maria avec une demoiselle Holagray et se distingua notamment par sa construction de la rade et d'un quai, ses travaux des Landes et en organisant les fêtes d'octobre 1852. Il était de plus chef d'escadron de la grade nationale, conseiller municipal de Bordeaux, conseiller général du canton de Coutras (qu'il représenta jusqu'en 1867), lorsque son préfet, Haussmann, ayant été nommé à la préfecture de la Seine, le fit venir à Paris ( note en marge : givre, neige. J'ai fait acheter 500 plants racinés d'Herbemont pour commencer la reconstitution du vignoble de Larroque.(6 F le cent). Nous allons d'abord planter onze rangs entre les deux allées qui mènent au chemin de Minor. L'an prochain, nous continuerons nos plantations dans la partie de l'ancien vignoble qui avoisine la terre de Paillé 388 . Il a été convenu entre mon fermier et moi que je payerai, pour les frais de préparation du terrain, 2 francs par rang à planter, c'est-àdire 22 francs, cette fois. Il a été aussi convenu que nous partagerions les produits de la nouvelle vigne, Ducasse se chargeant de l'entretien. Il deviendra donc mon vigneron à moitié fruits. De plus, il s'engage à me donner désormais 50 francs de plus pour le fermage, soit 650 francs chaque année.J'espère que nous serons contents les uns et les autres de ce nouvel arrangement.18 janvierJ'ai fait établir près de la fontaine une pépinière de rosiers et de chèvrefeuilles qui me permettra de remplir, l'an prochain, les vides faits dans mes plantations par la nature, si souvent aidée par la main de l'homme, ou, pour mieux dire, par le pied de l'homme, car combien de petits arbustes sont écrasés par un marcheur inattentif, par un lourdaud funeste ! Tout à l'heure j'ai eu le crève-coeur de voir plusieurs des lilas apportés de Miramont 389 qui ont été broyés sous le sabot de quelque animal. Il faudrait mettre un tuteur auprès de chacun de ces fragiles arbustes. Mais hélas ! Les tuteurs euxmêmes ne seront-ils pas sans cesse menacés par les allants et venants ? Décidément la vie de l'horticulteur n'est faite que de tracasseries.388Au nord-est de Gontaud-de-Nogaret : carte 1738 Est, Seyches, série bleue 1 : 25 000, I.G.N., Paris, 1987. 389 À une vingtaine de km au Nord-est de Marmande et de Gontaud-de-Nogaret : carte au 50 000 e , Marmande-Agen, 56, I.G.N., Paris, 2003. Journée admirablement belle. Vrai soleil de printemps. Quelle bonne promenade je viens de faire ! J'ai pu lire pendant près de deux heures en plein air sans éprouver la moindre sensation de froid. Par la douce température de tous ces derniers jours je me sens tout rebiscoulat 390 et ce qui me restait de faiblesse disparaît à vue d'oeil. Le roseau va redevenir le chêne. J'ai remarqué, dans ma longue promenade -la plus longue que mes jambes aient pu se permettre depuis trois mois -quel bel effet les fleurs de l'ajonc produisent au coeur de l'hiver. On n'admira pas assez cet arbuste qui donne à nos haies une parure perpétuelle (vert et or). C'est un dédaigné qui mérite fort d'être réhabilité. note en marge : À 5 heures du matin 2°75 de chaud. À 1 h. du soir 12° 50 de chaud. Le 23, 6 degrés idem à 5 heures matin et 14,50 à 1 h. soir. Ouragan extraordinaire, toute cette nuit. 
Je donnerai une suffisante idée de sa violence en disant que, moi qui dors toujours si bien, je n'ai pas fermé l'oeil un seul moment. Le pavillon semblait secoué jusque dans ses fondements et, pour employer une phrase de je ne sais plus quel romantique de 1830, les contrevents se tordaient sous les âpres baisers de la tempête. Une pluie impétueuse retentissait sur les toits et ajoutait encore au tapage. C'est le cas de dire que tous les éléments étaient déchaînés. Ce qui m'a tenu éveillé autant que l'assourdissant vacarme, c'était la crainte de quelque grave accident. Je me demandais sans cesse si mon pavillon n'allait pas être décoiffé, comme mon pauvre héros Chapelain 404 ou si mon antique chêne n'allait pas s'abattre sur nous et nous escarbouiller, comme dit Du Bartas 405 . Mais tout s'est passé sans dégat et j'en suis quitte pour une nuit blanche. Je note que mes chats, en proie à l'épouvante, miaulaient en choeur d'une façon désespérée, fait que je n'avais jamais constaté dans les plus grandes tempêtes. Ces miaulements aussi aigus que possible mettaient une note vibrante de plus dans l'affreux concert. C'était hélas ! Toute la lyre 406 . 11 février 404 Jean Chapelain (1595-1674), homme de lettres protégé de Richelieu qui joua un grand rôle dans la fondation et l'animation de l'Académie française, notamment à l'occasion de la Querelle du Cid. Il fut victime des sarcasmes et des plaisanteries surtout de Boileau -auteur d'un Chapelain décoiffé auquel Tamizey fait ici allusion -, de Racine, de La Fontaine et de Molière. Ph. Tamizey de Larroque a publié ses Lettres, en 2 volumes, dans la Collection de documents inédits sur l'histoire de France, 2 e série, Paris, 1880-1883, in-4°. 405 Du Bartas (Guillaume Salluste) 1544-1590, né à Montfort (Gers), protestant, humaniste, homme de confiance d'Henri de Navarre (futur Henri IV) et poète, en langue gasconne, auteur d'une épopée d'inspiration biblique, La Sepmaine (1578), admiré par Ronsard, il a inspiré le grand poète anglais Milton.-Voir Dauphiné (James) éd., Du Bartas, poète encyclopédique du XVI e siècle, Colloque de Pau, mars 1988, La Manufacture, 1988 ; Du Bartas 1590-1990, Colloque d'Auch, avril 1990, éd. interuniversitaires, Mont-de-Marsan, 1992 ; Du Bartas, Colloque de Pau 1993, J&D éditions, Biarritz, 1994. 406 Titre donné au recueil de deux volumes de poèmes posthumes de Victor Hugo, publié par Paul Meurice en 1888. le père, la mère et la fille, la belle Elvina, faire un petit jardin au bout de l'allée qui mène à leur chemin, autour du banc qu'ombrageront plus tard les deux noyers… en herbe plantés il y a quelques semaines. Ces braves gens ont planté des rosiers, des chênes-verts, des violettes, d'autres fleurs encore. Tout cela forme une petite enceinte au milieu de laquelle il fera bon, pendant les beaux jours, lire quelque revue, quelque catalogue. J'ai été bien touché de cette affectueuse manifestation qui caresse en moi une corde bien sensible, en moi qui ne dirai jamais : trop de fleurs ! 17 février Je reçois à l'instant la nouvelle de la mort presque subite de ma cousine Charlotte de Grammont 407 , étouffée par son catarrhe. C'était mon aînée de quelques mois et je deviens le doyen de la famille. Charlotte, comme me l'apprend son cousin et son héritier, le vicomte de Lammerville, lieutenant-colonel du 5 ème Hussards, est morte le samedi 13 et a été enterrée, hier, mardi. Elle a suivi de bien près sa soeur Marie que nous eûmes le malheur de perdre en décembre 1890. 
Charlotte de Grammont, qui avait été belle comme le jour, était une intelligence d'élite. Louis Veuillot 408, qui était son ami et […]
407 Fille de Joseph Auguste Delmas de Grammont et d'Adèle Heurtault de Lammerville, nièce du général Delmas de Grammont et donc cousine germaine de Philippe Tamizey de Larroque [A.P. Baquier].
408 Veuillot (Louis-François) (1813-1883), né à Boynes (Loiret). Fils d'un tonnelier, petit clerc à treize ans, il fit lui-même son instruction.

[…] traverser une période glaciaire. Combien je m'attendais peu à ce retour offensif du froid ! Je me croyais déjà au printemps et voilà une vilaine queue d'hiver pire que le plein hiver lui-même ! Le 3 mars, nous étions à 2 degrés 1/2 au-dessous de zéro ; le 4, à 4 1/2 ; le 5, à 5,75 ; le 6, à 5,50 ; le 7, à 5. Ce brigand de froid, rendu insupportable par le souffle obstiné d'une bise à emporter les oreilles et aussi, pendant quelque temps, par une absence complète de soleil, m'atteint bien vivement comme planteur de lilas, de lauriers et surtout de violettes. Ah ! mes pauvres violettes dont j'avais entouré tous les abords de la fontaine ! Elles sont presque toutes flétries et comme brûlées par la gelée. J'avais voulu faire de ce coin de mon petit jardin le coin le plus verdoyant et le plus parfumé, le coin favori, auquel j'appliquais déjà le si gracieux vers de Virgile : Devenere locos extos (sic) et amena vireta 409. Ah ! comme les Anciens avaient raison de vanter avec enthousiasme les îles où régnait un printemps perpétuel !

11 mars
Hier, journée admirablement remplie. […] enfin rédiger une bonne partie de l'Avertissement du poème de Guilleche 418 (sic) que je vais donner aux Actes de l'Académie de Bordeaux 419 (**). Rarement mon activité intellectuelle a été plus brillante, même au beau temps de ma jeunesse. En cette journée qui de toute façon aura été très féconde, j'ai fait planter une dixaine (sic) de peupliers autour du vivier et le long du fossé qui conduit le trop plein du vivier dans la prairie. Quelques jours avant on avait planté une vingtaine de pruniers dans la pièce de terre qui me sépare de Beyries 420. Voilà mes dernières plantations de cet hiver, car je ne compte pas le petit fagot de rosiers qui me viennent des jardins du château de Longueville 421, et le fagot plus petit encore de bambous que me promet le Dr Samondès 422, le tout devant être mis en terre avant le 15 mars. S'il plaît à Dieu, nous recommencerons en automne. -Hier et cette nuit, un peu de neige.

31 mars
On a enterré aujourd'hui mon voisin, le vieux Bouton, un ancien vigneron de mon père. Je me souviens de l'avoir vu, quand j'étais tout enfant, danser dans le pressoir et écraser sous ses pieds d'innombrables raisins. J'ai regretté de ne pouvoir l'accompagner à sa dernière demeure. Ceux qui ont été nos bons serviteurs méritent d'être traités comme nos bons amis.

10 avril
[Voyant] l'embarras de la dame du logis, il lui dit : « Rassurez-vous, Madame. Nous ne nous coucherons pas. Nous aimons mieux causer que dormir. » Et, comme il était un délicieux conteur, il conta tant et si bien qu'aucun de ses auditeurs ne s'aperçut de la durée de la nuit. Comme, le lendemain, la maîtresse de la maison, dont il prenait congé, le remerciait avec effusion, il lui dit : « Je n'ai eu d'autre mérite que de dire à vos hôtes des contes à dormir debout. »

Il y a quatre ans aujourd'hui j'accompagnai ce qui restait de ma pauvre mère au cimetière d'Ambrus 437. Je vivrais dix mille ans que je n'oublierais rien de cette affreuse journée.
Le ciel était triste comme mon âme elle-même : la pluie tombait ; le bruit du vent dans les grands bois de pins dont le cimetière est entouré, était lugubre. On eût dit comme un autre de profundis chanté par les frémissements et gémissements des branches. Il me sembla que dans le funèbre caveau je laissais quelque chose de mon être. Combien de fois, en revenant par la pensée à Ambrus, j'ai redit cette phrase d'un beau discours du grand orateur espagnol, d'Emilio Castelar 438 , dans laquelle il salue cette croix sous les bras 437 Voir 30 octobre 1890. 438 Emilio Castelar (1832-1899). Né à Cadix. En 1855-1856, il commença une carrière de romancier à succès. Auteur aussi d'études historiques à la valeur scientifique discutée, il obtint la chaire d'histoire à l'Athénée de Madrid, mais son attitude d'opposition démocratique l'obligea à abandonner sacrés de laquelle s'étend le lieu que j'aime et que je vénère le plus ici bas, la tombe de ma mère. 15 avril Je guignais, depuis que je suis installé ici, le petit bois qui est au bout de la prairie et qui appartient à M. de Godailh 439 . Je voulais l'annexer à mon domaine. Je rêvais d'y tracer des allées qui auraient fourni, à 50 pas du pavillon Peiresc, un promenoir charmant. Je me promettais de laisser croître les chênes de façon à obtenir peu à peu la plus ombreuse des garennes. Hélas ! On abat les vieux arbres pour en faire des bûches et les jeunes arbres pour en faire des fagots. Chaque coup de la coignée (sic) qui les frappe retentit jusqu'au fond de mon âme.. Que dis-je ? Il me semblait que le fer aigu du bûcheron pénétrait dans mes chairs frémissantes. Avec ces pauvres arbres tombent tous mes désirs d'acquisition. C'est un écroulement complet. Que ferais-je d'un lopin de terre dégarni, dénudé ? Les grands arbres surtout, c'était l'enchantement. Ne peut-on les comparer aux longs cheveux d'une jolie femme ? 21 avril Nous n'avions pas eu d'hiver, pour ainsi dire, mais nous avons maintenant la monnaie de M. de Turenne 440 . Il gèle assez fort, tous les matins, depuis le jour de Pâques, et en ce jour-là même la gelée fut désastreuse. Les arbres fruitiers et sa chaire. Il prit part à la révolution du 22 juin 1866, réprimée par le maréchal Serrano. Condamné à mort, il se réfugia à Paris. Il revint en Espagne après la révolution de 1868, qui détrôna Isabelle, et fut envoyé aux Cortès constituantes de 1869. Il s'opposa en vain à la proclamation du duc d'Aoste comme roi d'Espagne, sous le nom d' « Amédée 1 er » (19 oct. 1870). Quand Amédée eut abdiqué et que la République eut été proclamée (11 févr. 1873), Castelar fut quelques mois président du Conseil. La monarchie ayant été rétablie avec Alphonse XII, Castelar continua de défendre la cause républicaine. Il venait lorsqu'il mourut, d'être élu député de Murcie et il se proposait d'opposer, sur le terrain constitutionnel, les républicains et les libéraux au parti conservateur qui soutenait le ministère Silvela. la vigne ont beaucoup souffert dans la plaine. Ici nous avons été protégés jusqu'à présent par notre élévation, mais les hommes sont parfois plus dangereux que les froids tardifs. On m'a volé, dans la semaine sainte, les plants de chasselas qui m'avaient été donnés par M. de Lafitte 441 et qui, enfouis dans le sable, attendaient, pour être plantés, que le sol fût moins mouillé. Adieu ma tonnelle et mes beaux raisins dorés ! 30 avril C'est demain le jour de ma fête. 
Elle m'a été bien aimablement souhaitée par un lilas planté de ma propre main auprès de mon tombeau et qui m'a offert tout un panache de belles fleurs épanouies, semblait-il, pour la circonstance. Si c'est un grand plaisir d'admirer et de sentir un gros bouquet sortant d'un bain de rosée, c'est un plus grand plaisir encore de l'admirer et de le sentir quand la tige sur laquelle il se balance a été mise en terre par nous-même. Il y a là un peu des nobles et fières jouissances de la paternité. Ô mon lilas, puisses-tu, long-temps encore après ma mort, me donner, en bon voisin, le parfum de tes fleurs ! 3 mai Il a fait froid depuis le jour de Pâques et il a encore gelé, hier matin, mais il pleut aujourd'hui et cette pluie a déjà fort adouci la température. Espérons que, cette fois, l'hiver est bien fini. On a achevé, hier, à 7 heures du soir, un travail très considérable commencé depuis plusieurs semaines : le transport de je ne sais combien de brouettes et de tombereaux de terre enlevée dans la partie la plus raide de la rampe qui mène à Larroque, en longeant le bois de M. de Godailh 442 . On a comblé la vallée et écrêté la montagne. Un des plus grands inconvénients de mon installation, c'était la difficulté de l'abord. Le pavillon Peiresc est charmant, 441 Voir 16 mars 1891. 442 Voir 11 novembre 1890. j'envoie à la Société historique et archéologique du Périgord 443 la notice qu'elle m'avait fait l'honneur de me demander sur Jules Delpit 444 . J'ai surtout cherché, dans cette notice, à faire aimer l'homme qui a été si bon pour tous, mais surtout pour moi. Il est si doux de payer les vieilles dettes du coeur ! 443 Bercé (Françoise), « Arcisse de Caumont et les sociétés savantes » dans Nora (Pierre) s.d., Les lieux de mémoire. Quarto, Gallimard, t. 1, 1997, p. 1545-1573. Le célèbre bibliophile de Brioude, M. Paul Le Blanc, m'a communiqué un très curieux volume intitulé : Observations météorologiques et économiques faites à Boësses (Loiret) de l764 à 1853 (90 années) par M. Charles Pierre (ancien maire de cette commune), recueillies et collationnées par P. Isidore Pierre, doyen de la faculté des Sciences de Caen (Versailles, 1877, grand in-8° de 151 p.). Moi qui m'occupe souvent ici de météorologie, j'ai été particulièrement intéressé par tant d'indications fournies par un homme qui, « chaque jour, inscrivait ce qu'il avait_vu », par « un paysan doué d'une intelligence supérieure, d'un jugement solide et sain jusqu'à son dernier jour, à qui sa sobriété a permis de vivre jusqu'à l'âge de 91 ans », comme s'exprime l'éditeur des cahiers de son grand-oncle. J'ai lu le recueil en deux fois, une en fleur, une autre fois étant assis sur le banc que j'appelle le banc de Hautes Vignes 445 , auprès des acacias également en fleur qui bordent le chemin. Une attachante lecture entourée de tous ces doux parfums, quoi de plus délicieux ? 28 mai Après plusieurs semaines de sécheresse -car la pluie du commencement de ce mois ne fut qu'une mauvaise plaisanterievoici enfin une bonne ondée qui promet de ne pas cesser de sitôt. Il était temps que la terre se désaltérât, car tout souffrait, tout périssait et surtout mes pauvres arbustes qui sont maintenant si nombreux que mes arrosages étaient pour eux d'une insuffisance dérisoire. Vive donc l'arrosoir céleste ! Je comprends aujourd'hui, en ma qualité d'horticulteur passionné, le mot que m'adressa, il y aura bientôt un demi-445 À 4 km environ à l'est de Gontaud-de-Nogaret : carte au 50 000 e , Marmande-Agen, 56, I.G.N., Paris, 2003. 
-Tamizey de Larroque (Philippe), Histoire de la commune de Hautevignes, impr. de P. Noubel, 1869, 12 p.
siècle, un négociant retiré des affaires et devenu campagnard, ayant par conséquent tout l'enthousiaste zèle d'un néophyte, avec lequel je voyageais dans une voiture publique. J'avais dit, en voyant tomber une pluie qui dérangeait tous mes projets de touriste, comme on disait alors : « Voilà une pluie aussi ennuyeuse que la Henriade 446 ! -Jeune homme, vous blasphémez ! s'écria d'un ton solennel mon voisin que je pris d'abord, très étonné, pour M. Joseph Prud'homme 447. -Vous admirez à ce point, lui répondis-je, le plus assommant des poèmes ? -Je me f… des poèmes, dit-il en s'animant, mais ce qui m'indigne, c'est que vous osiez vous plaindre d'une pluie qui est une pluie d'or. » Eh bien ! C'est aussi une pluie d'or qui tombe maintenant, qui rafraîchit tout, qui sauve tout et qui vaut à elle seule cent fois plus que tous mes arrosages réunis.

30 mai
Hélas ! La pluie n'a pas duré et à peine si le sol en a été humecté. Et moi qui rêvais une de ces pluies qui s'enfoncent dans la terre et qui la rafraîchissent pour une bonne semaine tout au moins ! Voilà les doléances sur la sécheresse qui vont recommencer de plus belle. Tous les journaux se lamentent au sujet de la persistance de cette sécheresse anormale. Où sont les […] ? [Ciel] d'airain, terre d'airain ! Airain partout ! J'ai remarqué, hier, étant assis sous mon châtaignier, une prodigieuse multitude de papillons blancs qui s'ébattaient au-dessus de la prairie. C'était comme un mouvant nuage neigeux. Le spectacle était charmant. Mais cela m'a rappelé que cette année, mes bonnes amies les hirondelles sont d'une désolante rareté. Je n'ai pas revu celles qui, l'été dernier, entraient si librement dans mon cabinet et me frôlaient presque de leurs rapides ailes. On a raconté que l'on s'était livré, dans les premiers jours du printemps, aux bords de la Méditerranée, à un gigantesque massacre de ces gentilles voyageuses. Il paraîtrait que les plumes d'hirondelle sont à la mode et que l'on a détruit des milliers et des milliers de ces pauvres oiseaux. Maudits soient les bourreaux !
446 [La Henriade, poème épique de] Voltaire en 10 chants, paru en 1723. Il a pour héros Henri IV, et s'intitulait d'abord Poème de la Ligue. L'auteur voulait, par la peinture des guerres de religion, inspirer la haine de l'intolérance et du fanatisme. Le poème excita chez les contemporains un enthousiasme certain, qui ne fut pas vraiment partagé par la suite, comme en témoigne Tamizey.
447 Monsieur Joseph Prud'homme : type de la nullité magistrale et satisfaite de soi, dont la création est due à Henri Monnier (1803-1877), auteur des Mémoires de Joseph Prudhomme (1857) et de pièces de théâtre mettant en scène ce personnage. M. Prudhomme est maître d'écriture, calligraphe incomparable, mais en même temps un niais majestueux. Prudhomme devint le porte-voix des caricaturistes. Un certain nombre de ses expressions : « C'est mon opinion et je la partage. » -« Ôtez l'homme de la société, vous l'isolez. » -« Le char de l'État navigue sur un volcan. » -« Napoléon 1er était un ambitieux : s'il avait voulu rester simple officier d'artillerie, il serait peut-être encore sur le trône. » -« Ce sabre est le plus beau jour de ma vie ! Je saurai m'en servir pour défendre nos institutions, et au besoin pour les combattre ! »

1er juin
L'historien de la ville de Cancon, M.
Lucien Massip 448 , à qui j'avais recommandé de se procurer le Glossaire de Du Cange 449 , m'a répondu : « Je suis trop pauvre pour acheter un ouvrage d'aussi grand prix. » Cette réponse m'a touché, (et, comme il faut faire de temps en temps une bonne action, j'ai sacrifié cent vingt francs pour lui offrir l'édition Didot.) Voici copie du petit billet que je lui ai écrit à cette occasion : « Pavillon Peiresc, 31 mai 1892. -Mon cher confrère et ami, Mon héros Fabri de Peiresc aimait à donner à ses amis les livres dont ils avaient besoin pour leurs travaux. Permettezmoi d'imiter ce bon exemple et de vous offrir un exemplaire du Glossaire de Du Cange. Si vous avez autant de plaisir à le recevoir que j'en éprouve à vous le faire envoyer, tout ira 448 Lucien Massip, pharmacien, a publié La Révolution à Cancon, impr. V ve Lamy, Agen, 1888, in-8°, 78 p. et, à compte d'auteur, à Cancon (à une vingtaine de km au nord de Villeneuve-sur-Lot), Histoire de la ville et des seigneurs de Cancon depuis les temps les plus reculés jusqu'en 1789, in-8°, 259 p. -Correspondance de Tamizey avec L. Massip : A.D. Lot-et-Garonne, 16 J 19, correspondance d'érudits : Fonds Tamizey de Larroque. 449 Charles Dufresne du Cange, Glossaire du latin moyen et tardif, Firmin-Didot, 1846, en 7 volumes in 4°. c'était toujours à moi qu'il venait, sur moi qu'il se tenait.Je suis tout ému du sort mystérieux et tragique de Mistigris si choyé et si heureux ici il y a 24 heures à peine. Je voudrais qu'au moins il n'eût pas trop souffert dans le peu de temps qui à dû s'écouler entre sa disparition et sa mort.note en marge : À Bordeaux, la moyenne a été de 30 degrés. À Agen de 29°. Le lendemain, nous avons noté ici, à 2h., 37 degrés ce qui justifie les chiffres Gontaudais de la veille qui m'avaient paru excessifs. À Bordeaux, la température maxima du 28 a été de 33 degrés et à Agen de 32°. Il faudrait en conclure qu'il fait beaucoup plus chaud à Larroque qu'à Bordeaux et qu'à Agen, et encore plus chaud à Gontaud qu'à Larroque. 1 er juillet Les chats, qui auront tenu une si grande place dans ma vie, comme dans celle de Peiresc, tiendront aussi une grande place dans ce livre de raison. Mistigris n'est pas mort. Vive Mistigris ! Nous venons de le retrouver. Le pauvre animal s'était perdu dans les blés. Il est resté plusieurs jours sans manger, car il est maigre comme un clou, lui qui était gras comme un moine. Déjà, dans le malheur, il était devenu un peu sauvage et il a fallu un peu de temps pour l'apprivoiser. Sa mère, la bonne Gredinette, n'avait cessé de le chercher. Cette brave petite bête partait dans toutes les directions, en flairant le terrain, revenait voir si son chaton avait reparu, repartait avec inquiétude, prenant à peine, pendant toute la durée de cette chasse, le temps de manger un peu. Gredinette a prouvé une fois de plus qu'avec la persévérance on arrive à tout. Nous sommes toujours en pleine sécheresse, en pleine rôtissoire. Mes arbustes s'étiolent et meurent de plus j'étais si fier, seront presque toutes à recommencer, pour les arbres comme pour les arbustes. -Reçu, hier, une lettre de M gr l'Archevêque d'Auch qui me remercie et me félicite en termes trop aimables d'avoir si bien parlé de son parent Jules Delpit 456 . 10 juillet Je suis allé, hier, à Bordeaux par une chaleur accablante pour voir le marquis Anatole de Bremond d'Ars-Migré 457 avec lequel je suis en correspondance depuis plus de vingt années. 
Je tenais à faire de visu 458 la connaissance de ce galant homme auquel j'ai présenté mon fils, de même qu'il m'a présenté le sien. J'ai été enchanté de l'entrevue qui semblait -c'était son seul défaut -avoir pour théâtre une des plus brûlantes parties du Sahara. J'ai causé avec M. de Brémond d'Ars d'un projet de mariage pour Henri. Que Dieu veuille favoriser Dieu soit loué ! Les orages ont enfin détendu la situation. Il a plu hier, il a plu cette nuit, il pleut encore aujourd'hui. Vive l'humide 14 juillet ! Ce sera la fête des jardiniers. Certes la pluie n'est pas aussi abondante que je le voudrais, mais à la longue cela entrera, surtout si ça dure encore quelque temps. Pour beaucoup d'arbres et arbustes, 456 Voir 27 mars 1892. Il s'agit de M gr Gouzot qui, en 1884, avait choisi l'abbé Jules de Carsalade du Pont, ami et collaborateur de Tamizey -voir 6 septembre 1889 -pour secrétaire particulier et archiviste du diocèse : voir Clergeac (A.), « Monseigneur Jules de Carsalade du Pont, évêque de Perpignan », Revue de 458 Expression latine : trad. « après l'avoir vu », « pour l'avoir vu ». c'est trop tard, mais pour plusieurs qui n'étaient que languissants, c'est le salut. Je n'aurais jamais cru autrefois que la vue de la pluie pût me faire autant de plaisir. 20 juillet J'ai touché du Ministère la somme de deux mille francs pour mes honoraires d'éditeur de la seconde série des lettres de Peiresc (1 er à-compte). J'avais chargé le Crédit industriel et commercial de souscrire quatre obligations tunisiennes (à 476 francs 25), ce qui représentait à peu près mon indemnité. On m'annonce aujourd'hui que deux obligations seulement me sont attribuées. Je m'étais décidé à choisir cette valeur en souvenir des relations qu'eut Peiresc avec la Tunisie par l'intermédiaire de Thomas d'Arcos 459 .21 juilletReçu, hier, deux mauvaises nouvelles. Mon ami Charles de Ribbe 460 a eu une attaque d'apoplexie qui met ses jours en danger et qui, en tout cas, le tue comme travailleur, lui qui avait encore de si beaux ouvrages à nous donner dont il me parlait complaisamment dans une de ses dernières lettres, se plaignant toutefois d'un grand malaise qui était, je le vois bien, le précurseur du terrible coup qui l'atteint. Un autre de mes bons amis de Provence, le vénérable marquis de Seguins 461 , vient de perdre une de ses filles, M me de Saint-459 Tamizey de Larroque (Philippe) éd., Les correspondants de Peiresc, XV, Thomas d'Arcos. Lettres inédites écrites de Tunis… (1633-1636), Alger, 1889 (extr. de la Revue africaine) : Thomas d'Arcos, né en 1568 à La Ciotat, fut pendant quelques années secrétaire du duc de Joyeuse et exécuta de nombreux voyages en Asie et en Afrique. Capturé par des corsaires en 1628, il subit 2 ou 3 ans de captivité et se fit musulman à la fin de 1632Seguins-Vassieux, (1809-1897). Philippe Tamizey de Larroque avait fait sa connaissance , dès son premier séjour à Carpentras, en 1877. Le marquis de Seguins était un descendant de Thomas de Cohorn, 3 e fils d'Antoine et d'Hélène de Gardane, sa seconde épouse. Il était issu, au 4 e degré, de l'illustre général suédois, Pierre de Cohorn, établi en Avignon, en 1474 et mort au monastère de Montfavet, près de cette ville, en 1479. Il avait épousé Charlotte-Louise-Constance de Froment-Fromente de Castille-Rohan (morte le 12 janvier 1895) : Tamizey de Larroque (Philippe), Une lettre inédite de Thomas de Cohorn à Peiresc,1897 (extr. du Journal du Paulet 462 , jeune encore et douée des plus charmantes qualités. 
J'avais eu le plaisir de la voir souvent chez son père à Carpentras et de passer toute une journée avec elle au château du Rocan 463 où elle fut si aimable et si gaie. Que Dieu ait pitié de la pauvre morte et du pauvre infirme !

30 juillet

Un de mes plus vieux papiers de famille est un contrat d'acquisition d'une maison et de diverses pièces de terre à Larroque (1696) par mon trisaïeul. Voilà près de deux cents ans que Jean Tamizey agrandissait son patrimoine et, comme on dit, s'arrondissait. Je vais donner quelques extraits du vénérable document : « Aujourdhuy trentième du mois de septembre mil six cent quatre-vingt-seize, dans la ville de Gontaud en Agenais, devant mon notaire royal et témoins bas nommés ont été constitués en leurs personnes Jean et Pierre Maurin, tisserans, habitant, des jurisdictions de Calonges et Lagruere et Suzanne Bissiere, fille de Pierre Bisierre, habitante de la paroisse St-Pierre, présente jurisdiction, au nom et comme héritier de fü (sic) Pierre Maurin ont vendu, quitté et à jamais délaissé solidairement l'un pour l'autre, à M. Jean Tamizey bourgeois et jurat et habitant de la présente ville icy present et acceptant, c'est à sçavoir les biens qui en suivent : en premier lieu, une maison, sol, agriaux, vigne et terre le tout joignant situé dans la présente jurisdiction et au lieu et village de Larroque de haut, que confronte du levant à terre de Arnaud Freche, midy à un chemin de service, couchant à une chambre de maison de Marthe Glane et vigne de Bernard Mourguan, nord à un vacant contenant deux journeaux trois quarts quinze escots, plus une pièce de vigne audit lieu, etc. » Mention de vigne « des hoirs Pierre Chirol 464 » dont les descendants sont mes bons voisins et ont eu le mérite de rester toujours de braves cultivateurs. Pour divers lopins de terre énumérés dans le contrat on signale le voisinage de « M. Jean Maisonnade » du « S r Ricaud », du « S r de Mellet » de « César Dubergé » etc. ; « laquelle presente vente est faite pour et moyennant la somme de trente-six livres à vingt sols pour livres ». Ce très faible prix s'explique par le mauvais état du terrain presque tout en bousigue 465 , comme dit le contrat. Mention est faite en ces termes du consulat de mon trisaïeul : « À ce compris quatre livres de reste de la taille desdits biens qu'ils doivent audit Sieur Tamizey comme consul de la presente ville l'année dernière 1695. »

[…] Comtat, tiré à 75 exemplaires). -[A.P. Baquier]. -Authier (Michel)-Galbrun (Alain), État de la noblesse française subsistante (1940-1993), vol. 22, p. 215-217. -A.D. Lot-et-Garonne, 16 J 25, correspondance d'érudits : Fonds Tamizey de Larroque.
462 2 septembre 2

8 août

Un bon abbé avait annoncé dans tous les journaux de la région que la sécheresse dont nous souffrons depuis le Carême, cesserait le 7 de ce mois et que, ce jour-là, tomberait une grande pluie. Il n'est tombé qu'une pluie de feu et je ne prendrai plus des almanachs de ce météorologiste. J'ai été, hier, insuffisamment rafraîchi par l'arrivée de La Cigalo d'or où un de mes confrères en félibrige, M. E. Portal, ingénieur à Palerme, et de la famille languedocienne qui a fourni un savant à l'Institut et un homme d'État à la Restauration (le ministre baron Portal 466 ), m'a dédié de charmants vers sur la Provence : A'N Tamizey de la Roco.
464 Nicolas-Jean Chirol (1848-[…]). C'était l'ancien cocher et l'homme d'affaires de Philippe Tamizey de Larroque : « Je me souviens… je le voyais passer sur le C.D. n° 299 devant notre maison sur un vélo à pignon fixe, donc pas besoin de frein à l'époque, un homme de taille moyenne, maigre, habillé d'un veston d'alpaga, pantalon noir assorti, tantôt béret gascon, tantôt chapeau noir plat. Au printemps seulement il reproduisait parfaitement le chant du coucou. Lorsqu'il était âgé de 71 ans et plus, il faisait encore le trajet de la route d'Agen sur sa bicyclette, 50 km et autant pour le retour. De ce fait, les cycles Terrot de Dijon lui offrirent un vélo, alors qu'il avait déjà un cycle portant cette marque. Chirol me racontait les périples qu'ils faisaient [Ph. Tamizey de Larroque et lui] lors de leurs sorties avec la voiture hippomobile, notamment la visite chez les Delmas de Grammont, à Miramont-de-Guyenne »… Le fils de Chirol fut, lui aussi, au service des Tamizey. Tamizey a publié une plaquette pour conserver le nom et le souvenir de l'humble journalier Justin-Jean Chirol que cite Audiat (L.), Ph. Tamizey de Larroque. Notice biographique, Impr. Texier, La Rochelle, 1898, p. 22-23. -Serin (P.), op. cit., p. 23.

465 Terme signifiant dans le Sud-Ouest, sol inculte, broussailles, pâturage médiocre : Lachiver (Marcel), Dictionnaire du monde rural, les mots du passé, Fayard, 1997, p. 280.

12 août

Hier, nous avons donné un déjeuner splendide à M. Louis Audiat 467 , qui est de nouveau notre hôte depuis quelques jours, à M. J. Beaune 468 , déjà souvent nommé en ces pages, enfin à M. le comte Édouard de Dienne 469 . La journée a été charmante. Nos convives en ont été non moins enchantés que nous-mêmes. On a dépensé beaucoup d'appétit, d'esprit, de gaîté. Ah ! les bonnes réunions que celles où règne, dans le bon air de […]

466 […], baron Portal, né en 1765, près de Montauban, mort à Bordeaux en 1845. Issu d'une famille protestante, nombreuse mais possédant un peu de fortune. Entré à 18 ans chez un armateur de Bordeaux, il devint, en 1789, chef d'une maison d'armements maritimes et subit, pendant les premières années de la révolution, des pertes qui l'obligèrent, en 1796, à repartir de rien. Nommé, sous le consulat, juge au tribunal de commerce et membre du conseil de commerce, il fut ensuite député par le commerce de Bordeaux pour réclamer la restitution d'une grande quantité de marchandises saisies sur des bâtiments américains, il déploya une habileté et une fermeté de caractère qui attirèrent sur lui l'attention de Napoléon. Nommé maître des requêtes en 1811, il se décida, après bien des hésitations, à résigner cet emploi et se retira dans sa famille. S'étant distingué en maintenant l'ordre à Bordeaux en 1814, Louis XVIII le replaça comme maître des requêtes au Conseil d'État. Fonction qu'il résigna pendant les Cent-Jours, au grand mécontentement de l'empereur, qui peu de jours après le nomma maire de Bordeaux. Portal refusa et se retira à la campagne. La première ordonnance que signa Louis XVIII, à son retour, fut celle qui l'appela à faire partie d'une commission chargée de pourvoir au service des armées alliées. Il fut nommé ensuite directeur supérieur des colonies ; il ne consentit à se charger de ces fonctions que pour un temps limité et sans traitement […] les traités de 1815. Élu peu après député de Tarn-et-Garonne, dont il avait présidé le collège électoral, il siégea au centre droit, et fut nommé, en 1818, ministre de la Guerre et des Colonies, poste qu'il occupa avec succès jusqu'en 1821.
Louis XVIII l'éleva à la dignité de pair de France. joyeusement : non pas tous Auvergnats (nous n'en avions qu'un, le comte de Dienne ; encore est-il devenu Gascon par son mariage et son implantation à Cazideroque 470 !), mais tous honnêtes gens et bons campagnards ! 14 août M. Audiat 471 vient de nous quitter, nous laissant de vifs regrets. Son séjour ici a été une fête continuelle. S'il m'a ravi comme causeur, il ne m'a pas moins ravi comme lecteur. Il nous avait apporté le dernier livre de Zola, La Débâcle, et il nous en a lu les plus beaux passages. Jamais lecture ne m'a paru plus émouvante, plus empoignante. Ce sont surtout les descriptions de Bazeilles 472 et de Sedan qui m'ont profondément remué. Le talent de lecteur de mon hôte ajoutait encore à la saisissante impression des récits. Je n'ai pu retenir une larme de douleur à la fois patriotique et fraternelle quand on est arrivé à la charge héroïque des Chasseurs d'Afrique où a péri, fauché dans la fleur de ses vingt-cinq ans, le pauvre Robert Delmas de Grammont 473 . 17 août Je suis allé, hier, voir au chateau de Lalanne 474 , ma chère cousine et amie d'enfance Amélie de Grammont, douairière de Bentzmann 475 , frappée de paralysie et ne pouvant plus parler. Elle a conservé quelque chose de sa belle intelligence, car elle m'a reconnu, comme me l'a montré son doux regard, comme me l'a montré aussi la pression de sa main encore libre. Le spectacle de cette femme qui, en pleine maturité, est frappée à mort, m'a navré. Elle était si active, si bienfaisante ! La voilà clouée sur son lit en attendant qu'en la cloue dans son 470 Près de Tournon d'Agenais (Lot-et-Garonne)est de Ste-Bazeille, sur la rive droite de la Garonne : carte au 50 000 e , Marmande-Agen, 56, I.G.N., Paris, 2003. cercueil ! Elle allait venir à la Villa d'Agmé 476 où elle m'avait gracieusement donné rendez-vous. Et maintenant je ne la verrai plus ! Et, en baisant hier son front avec toute la tendresse désolée de mon coeur, je lui ai dit le dernier adieu. 20 août La semaine qui se termine aujourd'hui mérite de s'appeler à jamais la brûlante semaine. De dimanche à mercredi, le thermomètre s'est tenu plus haut qu'en tout l'été mais, le jeudi, il y a eu un coup de chaleur comme de mémoire d'homme on n'avait subi le pareil. J'ai vu, de mes yeux vu, ce jourlà, le thermomètre monter, à Lalanne 477 , à l'ombre, jusqu'à 41 degrés. Je n'avais pas, dans les plus terribles journées de juillet, en Provence, dépassé 37 degrés. Ce qui nous a valu cette ascension invraisemblable du thermomètre, c'est un vent embrasé qui ressemblait fort au vent du désert. Le soir, à huit heures, nous avions encore 28 degrés et mon pavillon tout entier était comme la gueule d'un four chauffé jusqu'à l'excès. La journée du 16 août restera mémorable dans les annales de la météorologie. 22 août Mon fils vient de partir pour aller faire ses 28 jours 478 à Rochefort 479 . Je le plains d'autant plus, que la chaleur est toujours accablante. Les exercices de l'artillerie de marine seront bien durs par cette température. Pourvu que le pauvre enfant ne tombe pas malade ! Puisse-t-il n'être que fatigué ! 476 À 8 km environ au nord-est de Gontaud : carte au 50 000 e , Marmande-Agen, 56, I.G.N., Paris, 2003. 
477 Amélie de Grammont-Bentzmann, cousine de Tamizey, réside au château de Lalanne : voir 17 août 1892.478Après la défaite de 1871, s'était imposé le principe du service militaire obligatoire pour tous, associé à l'appel des réservistes en temps de enseignants, et des régimes de faveur comme le volontariat d'un an : il n'est pas question par la suite de l'incorporation d'Henri. 479 Rochefort (Charente-Maritime), chef-lieu de préfecture maritime, l'un des principales bases de la marine de guerre française depuis le XVII e siècle.Enfin la voilà cette grande pluie qui semblait ne devoir jamais tomber ! Elle nous a été amenée, hier matin, par un bel orage et, depuis 24 heures, elle rafraîchit les hommes et les plantes. Cette nuit, le bruit qu'elle faisait sur la toiture m'a paru doux comme la plus délicieuse musique. Hier, tête nue, sur ta terrasse, j'ai reçu les premières gouttes de l, de 2 à 5 heures, je suis allé faire une visite à ma bonne voisine et vénérable amie M lle Gonin 480 . En apercevant, du haut de la route de Hautesvignes 481 , au milieu de la magnifique verdure du vignoble ressuscité, la petite maison blanche construite à l'aide des trois mille francs avancés par moi et qui ne rentreront peut-être jamais dans ma poche, j'ai été récompensé de mes sacrifices par un mouvement de joie généreuse, m'applaudissant d'avoir fait cette bonne oeuvre et me disant que je n'avais pas payé trop cher le droit d'être ainsi fier de moi.Depuis quelques jours je déjeune fort agréablement avec les chasselas et les muscats de mon jardin. C'est pour la première fois que je jouis des fruits de mes plantations. Gonin (elle était la fille de son épouse) auquel Ph. Tamizey de Larroque a consacré une monographie : Gonin Joseph et le vignoble de Saint-Joseph, impr ; V ve Lamy, Agen, 1883, 11 p. (extr. de la revue de l'Agenais, X, 1883), elle avait mis au monde hors mariage, dans le plus grand secret -dont Tamizey, maire aux moments des faits, était au courant -une fille et resta célibataire. Voir note 180 [A.P. Baquier). Ph. Tamizey de Larroque évoque ses visites amicales à M lle Gonin et à sa remarquable bibliothèque dans AdolpheMagen, op. cit., p. 16. la fin d'août, précédées et suivies de si fortes chaleurs, ont hâté la maturité de mes raisins. Je ne donnerais pas mon frugal déjeûner pour un déjeûner au Café de Paris 482 et j'inscris ici cet axiome : rien ne vaut un morceau de pain goussé croqué au grand air avec accompagnement de raisins cueillis par la main du producteur.8 septembreTalleyrand dit, dans ses Mémoires, que celui qui n'a pas vécu avant 1789 n'a pas connu la douceur de vivre. Je dirai, à mon tour, que celui qui n'a pas été gravement malade et qui n'a pas, quelques mois plus tard, joui du baume bienfaisant d'un précoce et magnifique automne, n'a pas non plus connu la douceur de vivre. Ah ! qu'il fait bon, surtout après les anéantissantes chaleurs du mois dernier, sous ce soleil attiédi et au milieu de ces brises rafraîchissantes ! Je me sens plus jeune et plus fort. Et aussi comme je travaille.13 septembreDémolition de la halle qui nous cachait une partie de l'horizon au midi. J'ai eu deux bonnes raisons pour la supprimer : d'abord, elle menaçait ruine, ensuite elle était pour les spectateurs le plus vilain de tous les masques. Question de prudence et question d'esthétique. 
J'ajoute 482 Haut-lieu d'élégance et de raffinement gastronomique, rendez-vous de la « fashion » parisienne, selon l'expression en vogue, au début du XIX e siècle, le Café de Paris fut, par la suite (à partir des années 1850-1860) tout en gardant grande réputation, détrôné par la Maison-Dorée et le café Riche pour donner le ton dans la capitale. Situé à l'angle de la rue Taitbout et du Boulevard des Italiens, il était installé dans les vastes appartements qu'avait occupés longtemps le prince Demidoff, au rez-dechaussée d'un hôtel, habité aux étages supérieurs par le célèbre excentrique lord Seymour. L'ouverture eut lieu, le 15 juillet 1822, en grande pompe et avec renforts de publicité de la part de ses fondateurs Angilbert et Guérez. Au temps de la splendeur du Café de Paris, un des grands plaisirs de lord Seymour était, dit-on, de passer des heures entières à regarder le va-et-vient des consommateurs à travers les barreaux de ses persiennes hermétiquement closes. Sous le titre d'Histoires du café de Paris, Charles de Courcy fils a publié, vers 1861, le recueil de ses articles de presse légère, histoires de café, certes, mais qui n'apportent pas vraiment de détails plus spéciaux sur l'établissement mentionné dans le titre : voir Dictionnaire Larousse universel du XIX e siècle. qu'elle était non seulement un éteignoir, mais encore un étouffoir, car elle empêchait la circulation de l'air. Aujourd'hui dégagement parfait. Ni l'air ni le regard ne sont plus arrêtés. Ces piliers vermoulus, ces chevrons galeux feront place à un parterre qui charmera l'oeil comme l'odorat. J'établirai un banc ; sous les ormes qui forment une demienceinte et j'aurai ainsi deux salons de verdure ; très rapprochés, puisque l'ombreux châtaignier n'est qu'à deux pas. 14 septembre Douloureuse nouvelle qui n'était que trop attendue ! Ma fille 483 m'adresse cette dépêche télégraphique, partie d'Alger à 10 h. 20 mn. ce matin, m'est remise à 3 h. de l'après-midi : « Pauvre oncle mort minuit quatorze. -Germaine. » Dans la dernière carte-lettre reçue de mon cher beau-frère 484 , la semaine dernière, et signée de lui, il m'apprenait qu'une ponction venait de lui être faite. Le cher ami était perdu depuis plusieurs mois. C'est une belle intelligence qui s'éteint, un noble coeur qui cesse de battre. J'ai vu peu d'hommes aussi heureusement doués. Nous avons toujours été unis comme deux bons frères et c'est de tout mon coeur que je le regrette et que je me souhaite de le retrouver dans ce groupe de personnes aimées sans lesquelles le bonheur du ciel me paraîtrait imparfait. 15 septembre Grand orage qui a duré toute la nuit, depuis 9 h. du soir jusqu'à 6 h. du matin. J'ai bien peu dormi et j'ai pensé tout le temps à ce pauvre Henri de Grammont que l'on enterre probablement au moment même où je trace ces lignes. Je l'ai suivi pas à pas dans toute sa vie depuis que je l'avais vu, vrai gamin de Paris, tourmentant les ânes du fermier de 483 De santé fragile, elle ne se maria jamais et vivait auprès de sa mère. Une photographie de famille la représente dans Serin (P.), op. cit., p. 24. Boisvert, jusqu'au jour où il est venu inaugurer, en quelque sorte, mon pavillon à peine achevé. 
Entre ces deux dates extrêmes, que d'événements dans la vie de mon cher beau-frère : son engagement dans les Zouaves, ses expéditions en Kabylie, son séjour à Saint-Cyr, son retour en Algérie, sa participation à la guerre de Crimée, sa nomination à la recette des finances de Montbéliard, notre voyage dans le Jura et en Suisse (Neuchâtel, Lausanne, Genève), sa glorieuse démission donnée pour prendre le commandement d'un bataillon de mobiles du Doubs, sa belle campagne de 70-71, sa croix si méritée de chevalier de la Légion d'honneur, son installation en Algérie, ses achats de la Villa Grammont à Mustapha-Supérieur et de la ferme de Birkadem, ses travaux d'histoire et d'érudition dans lesquels je l'avais lancé, son élection de président de la Société historique algérienne, renouvelée pendant de longues années, car c'était un président modèle, dont la démission, qui remonte à quelques mois, prouvait qu'il se sentait déjà frappé à mort. Quelle belle notice on pourrait écrire sur sa belle vie !

18 septembre

Mon ami L. de Berluc-Perussis 485 vient de m'envoyer un fragment d'un catalogue d'un libraire de Paris, A. Voisin (37, rue Mazarine) contenant (article 6124) ce qui suit : « Tamizey de Larroque (Jacques Philippe), érudit, né à Gontaud (L.&G.) en 1828. Lettre autographe signée à Charles Asselineau 486 . Paris, 2 nov. 1869, 4 pages in-8°, 3 francs. Curieuse lettre dans laquelle il s'excuse de s'être montré sévère dans l'[…] »

486 Charles Asselineau ([…]-1874), homme de lettres et surtout critique et bibliophile, auteur de La double vie : nouvelles (1858), d'une Bibliographie romantique : catalogue anecdotique et pittoresque des éditions originales des oeuvres de V. Hugo, A. de Vigny, A. Dumas, J. Janin, Th. Gautier, Petrus Borel etc., 1872 et tout particulièrement de Charles Baudelaire, sa vie et son oeuvre, 1869, qui suscita, à l'évidence, la lettre de Tamizey mentionnée ici.

4 octobre

[…] photographie, avait apporté son appareil et a croqué le pavillon Peiresc.

[…] c'était le travail qui me fortifiait et me conservait ainsi. Je trouve aujourd'hui dans un extrait du recueil que Maxime Du Camp va publier sous le titre de Le crépuscule 525 , cette anecdote que j'aime à rapprocher de ma réponse : « Je disais, un jour, à Michelet 526 : Comme vous restez jeune malgré vos cheveux blancs ! Il me montra son encrier et de sa belle voix sonore me répondit : Voilà ma fontaine de Jouvence ! Le mot n'est pas exagéré ; les grands laborieux le savent bien. »

7 mars

Hier, j'ai fait planter des violettes le long du fossé qui borde une partie de l'enceinte formée sur l'emplacement de la halle démolie. Aujourd'hui j'ai garni un petit monticule qui occupe le milieu de cette enceinte, de deux rangées de rosiers qui m'ont été envoyés par ma parente M me de Bonnegarde 527 (de Clairac) dont la collection est célèbre. Quand les violettes et les roses seront venues, quel doux mélange de parfums pour ceux qui seront assis sur un des trois bancs établis sous les ormes voisins de mon pauvre chêne ! Ce qui achèvera d'embaumer mon salon de verdure, ce sera la douzaine de chèvrefeuilles plantés au pied de ces arbres et qui s'enrouleront alors autour de leurs branches. J'ai aussi fait mettre en terre aujourd'hui par un temps magnifique (18 degrés de chaleur) dix bambous qui m'ont été envoyés de Fauillet par la […]

525 […] de juin 1848. Voyageur et littérateur, ami et correspondant de Gustave Flaubert, académicien depuis 1880.
526 Jules Michelet (1798-1874), fils d'un petit imprimeur ruiné par les sévérités de la censure sous le premier Empire.
Historien et professeur à l'École Normale Supérieure puis au Collège de France à la fin des années 1830 où son enseignement, suivi par une foule enthousiaste d'étudiants devint une sorte d'apostolat en faveur des idées libérales et humanitaires. Il fait paraître alors les premiers volumes de son Histoire de France (jusqu'à Louis XI) (1833-1846). Mais avec ses collègues du Collège de France Edgar Quinet et Adam Mickiewicz, il se donne bientôt tout entier à la bataille contre la politique conservatrice et ultramontaine de Guizot, Veuillot et Montalembert. C'est alors qu'il publia son Étude sur les jésuites (1843) ; Le prêtre, la femme et la Famille (1844) ; Le peuple (1844) ; L'étudiant (1848). Après le coup d'État de Louis Napoléon (1851), son cours fut suspendu ; puis il fut révoqué. Vincens de Tapol 528 . Je calculais, en faisant une bonne promenade où je saluais avec une sorte d'ivresse le beau soleil et les premières pâquerettes, je calculais, dis-je, que, depuis mon installation, j'ai planté près d'un demimillier d'arbres de tout genre et plus d'un millier d'arbustes. Je pense que désormais je devrai me contenter de remplacer les arbres et arbustes qui disparaîtront, car tout est rempli dans le jardin, dans la prairie, dans le verger, et le long des allées. 14 mars Prise de possession, hier, par les nouveaux fermiers, les époux Barthalome, d'une des pièces de terre de Larroque où ils ont semé des graines du fourrage qu'ils récolteront au mois d'octobre prochain. J'ai eu bien de la peine à me décider à me séparer des fermiers actuels, mais d'abord, le père et le fils étaient désunis et la division dans une famille gâte toutes choses ; ensuite, ils étaient l'un et l'autre si peu intelligents, qu'ils n'ont jamais voulu et n'auraient jamais voulu rien faire pour reconstituer le vignoble. L'arrangement dont il a été question ici est resté lettre morte. Barthalome me donne 250 francs de plus, par an (900 francs) et s'engage à planter 45 ares de vignes tous les ans, en fournissant tout, moyennant un secours de 250 francs par hectare. Ce sont des conditions magnifiques et comme je n'aurais pas osé les espérer. 528Jean- Timothée Vincens de Tapol, né à Fauillet, le 31 août 1830, était maire de la commune de Fauillet, à 4 km environ au sud deGontaud [A.P. Baquier]. 529 Hippolyte Taine (1828-1893), philosophe, critique et historien. Entré premier à l'École Normale Supérieure en 1848, l'indépendance de ses convictions lui valut, en 1851, un échec à l'agrégation de philosophie. D'abord nommé professeur à Nevers, puis à Poitiers, il fut bientôt envoyé en disgrâce à Besançon. Il se fit mettre en congé et retourna à Paris où il la mort de son mari (5 mars, à Paris, rue Cassette, 23, dans sa 65 e année, avec cette citation de Saint Mathieu : « Heureux ceux qui ont faim et soif de la justice, car ils seront rassasiés. » 530 ) Je réponds tout de suite en ces termes : « Madame, j'ai profondément regretté Monsieur Taine ; j'avais une vive sympathie pour l'homme, une vive admiration pour le travailleur et pour l'écrivain. Je l'avais rencontré, un jour, dans le salon de notre ami commun, M. Gaston Paris 531 , et j'ai fit ses débuts d'écrivain au Journal des Débats et à la Revue des Deux Mondes. Soutient une thése de doctorat sur les fables de La Fontaine et écrit un Essai sur Tite-Live, couronné par un prix de l'Académie française. Voyage dans les Pyrénées (1854) et en Angleterre (notamment en 1858), en Belgique et en Allemagne. Fut nommé en 1864 examinateur à Saint-Cyr et professa à Oxford en mai 1871. 
Il entra en 1878 à l'Académie. Il demanda en mourant des funérailles protestantes. Inspiré par Condillac, Bain, Hegel et Vacherot, Taine attaqua d'abord l'école sipritualiste (Études sur les philosophes français du XIX e siècle publiées en 1857) avant de faire paraître, en 1870 seulement, son oeuvre philosophique majeure L'intelligence qui reprend et systématise des thèmes et des idées développées dans ses oeuvres précédentes, s'efforçant de définir, à la lumière de la physiologie, les conditions nécessaires du développement de l'esprit. Envisageant l'art et la littérature comme les fonctions naturelles de cet « animal d'espèce supérieure » qu'est l'homme, et appliquant à l'esprit le principe de la subordination des caractères, Taine considérait le génie des grands écrivains et des grands artistes comme gouverné par une « faculté maîtresse ». C'est, par exemple, la faculté poétique chez La Fontaine, la faculté oratoire chez Tite-Live. Mais cette faculté maîtresse est dominée elle-même par des influences géographiques : le sol et le climat (Voyage aux Pyrénées, 1855) et surtout les trois grandes influences parallèles de la race, du moment et du milieu dont l'action efficace dirige l'évolution de toute une littérature (Histoire de la littérature anglaise, 1865). Le jeu des mêmes facteurs se retrouve d'ailleurs dans la création proprement artistique (Philosophie de l'art en Italie, 1866, Philosophie de l'art dans les Pays-Bas, 1868, Philosophie de l'art en Grèce, 1869). Enfin, appliquant la même méthode aux grands phénomènes historiques, Taine choisit pour sujet de sa démonstration la crise révolutionnaire (Origines de la France contemporaine (1871-1894)). De tous ses livres celui-ci par les jugements sévères que Taine y porte successivement sur le régime jacobin et sur Napoléon 1 er souleva dans des camps divers les polémiques les plus vives. Taine publia aussi, en 1867, Vie et opinions de Thomas Graindorge, critique humoristique de la société parisienne où il montre sa conception stoïcienne de l'existence. 530 Évangile de Matthieu, 5. 6 [Sermon sur la montagne]. 531 Gaston Paris (1839-1903), fils de Paulin Paris, spécialiste d'histoire littéraire médiévale, professeur au Collège de France. Gaston Paris, ancien élève de l'École des chartes, lui aussi féru d'histoire littéraire médiévale, réputé pour son immense érudition, a imposé les règles et les principes de la philologie scientifique. Il fut directeur à l'École des hautes études, académicien et administrateur du Collège de France. Il a pris une grande part à la fondation et à la direction de la Revue critique (1866) et de la grande revue de philologie, Romania (1872). Il fut aussi un éminent collaborateur du Journal des Savants et de l'Histoire littéraire de la France. Voir A.D. Lot-et-Garonne, 16 J 21, correspondance d'érudits : Fonds Tamizey de Larroque. -Tamizey de Larroque gardé et garderai de sa causerie un souvenir ineffaçable. Nous parlâmes ensemble des Pyrénées, que nous aimions également, et de la Révolution, que nous jugions de la même façon. Peu de mois avant la mort de l'illustre historien, j'avais dit de M. Taine, dans le Bulletin critique 532 , à propos d'un livre de M. le Chanoine Allain 533 , que ce grand esprit s'élevait vers la vérité, comme l'aigle monte vers le soleil. Aura-t-il pu voir l'hommage que lui rendait son humble confrère ? Vous avez merveilleusement bien choisi la citation dans les livres saints, car votre cher mari était le plus juste des hommes. 
Je vous prie d'agréer, Madame, avec mes plus respectueux hommages, mes compliments de condoléance à partager avec vos enfants et toute la famille. »

1 er avril

Je veux adresser mes compliments au mois qui vient de finir, j'en ai rarement vu d'aussi agréable. Presque pas de caprices et de giboulées, en dépit de sa mauvaise réputation ! À peine quelques gouttes de pluie un jour ou deux. Tout le reste du temps, du soleil à plein ciel, une douce chaleur amenant une végétation précoce. Aussi que de bonnes promenades ! Quelle joie de vivre en face de ces haies où brillent les buissons blancs, en face de ces arbres qui reverdissent tous (excepté les chênes), en face de ces cerisiers et de ces pruniers couverts de fleurs ! Tous mes arbustes sont de la fête et mes lilas rivalisent de bonne odeur avec mes violettes. Quelles délices, en ces beaux jours, nous prodigue la campagne ! Et que je plains donc, du haut de mon rocher si verdoyant et si fleuri les malheureux emprisonnés dans ces basses-fosses que l'on appelle les villes ! Reçu aujourd'hui de M. Maxime Lanusse 534 , professeur au lycée de Grenoble, qui a été autrefois mon hôte à Gontaud, ses deux thèses pour le doctorat ès-lettres avec ce double hommage manuscrit : « Doctissimo viro et dulcissimo amico » 535 , et : « Au plus docte et au plus bienveillant des gascons, un disciple reconnaissant ». La thèse française intitulée De l'influence du dialecte gascon sur la langue française m'est dédiée en ces termes : A M. Ph. Tamizey de Larroque, correspondant de l'Institut, hommage de profonde gratitude.

[…] (Ph.), À la mémoire de Alexis-Paulin Paris… [introduction par Léon Techener], impr. de Durand frères, Chartres, s.d. (extr. du Bulletin du Bibliophile, mars-avril 1881).
534 Maxime Lanusse (1853-1930), De l'influence du dialecte gascon sur la langue française de la fin du XV e siècle à la seconde moitié du XVII e siècle, Thèse présentée à la faculté des Lettres de Paris, publiée à Grenoble, impr. de F. Allier père et fils, 1893, XVI-470 p., complétée par une étude sur le philologue et lexicographe Jean Nicot (v. 1530-1600). Maxime Lanusse est aussi l'auteur de manuels scolaires utilisés par plusieurs générations d'élèves : Cours complet de grammaire française à l'usage de l'enseignement secondaire classique (6 e , classes élémentaires, classes supérieures), plusieurs fois réédité chez Belin entre 1914 et 1935, Cours de langue française à l'usage des cours complémentaires et des écoles primaires supérieures, repris en fonction des réformes de l'enseignement et republiés encore après la Seconde Guerre mondiale par Henri Yvon et Roger Thabault (1948). -A.D. Lot-et-Garonne, 16 J 17, correspondance d'érudits : Fonds Tamizey de Larroque.

7 avril

Mes deux neveux Jean et Guy de Boëry 536 ont passé la journée complète avec nous (de 7 h. 1/2 (sic) du matin à 7 h. 1/2 (sic) du soir). Nous avons ensemble déjeuné, dîné et soupé. Jean a pris ma photographie deux fois (une spéciale, une autre en groupe) ; il a pris aussi la photographie de mon vieux voisin, le château de la Roche-Marais 537 . Il m'a promis de me donner cent vues dudit château pour illustrer autant d'exemplaires d'une brochure que je veux consacrer aux anciens seigneurs du grand domaine qui touchait par ses bois le petit domaine de Larroque. L'artiste a été favorisé par un temps magnifique. Il y avait dans le ciel une intensité de lumière digne d'un ciel de l'Orient.
La journée a été très gaie, embellie encore par deux jeunes filles, les perles de Gontaud et de Saint-Pierre-de-Nogaret, M lles Angèle Coudrey et Valentine Perret, qui ont été croquées à plusieurs reprises par notre cher artiste : elles figurent notamment dans la photographie du château-masure, assises sur l'herbe de la prairie, ce qui m'a permis de leur dire galamment qu'on les prendrait pour deux des plus jolies fleurs de cette prairie.

10 avril

Nous avons aujourd'hui passé ici le bail avec le nouveau fermier, après avoir fait croquer au notaire M. Coutet 538 (sic) et à un des témoins, Raymond Dubrana, un pâté d'Amiens et des beignets avec un beau gigot de mouton. Il a fallu (Ô grand Honoré de Balzac, j'ai pensé aux roueries de tes Paysans 539 !), il a fallu, dis-je, modifier les conventions du 14 mars, en ce qui regarde la plantation des vignes. Il paraît qu'il y avait eu malentendu et qu'au moment de signer l'acte la lumière s'est faite. Nous planterons donc à moitié frais et à moitié produits.

536 Voir 23 septembre 1892. Une photographie les montre à Larroque auprès de Tamizey dans Serin (P.), op. cit., p. 24. -Jean de Boëry a participé à l'élaboration de la plaquette Le vieux châtaignier (note 407).
537 Sur les propriétés voisines de Larroque et notamment le lieu-dit « Marès » : Serin (P.), op. cit., p. 27.
538 Voir 12 novembre 1890 et 3 août 1893.
539 Honoré de Balzac publia Les Paysans en 1845. Le sujet de ce livre consiste dans une lutte, tantôt sourde, tantôt ouverte, entre le général de Montcornet, grand propriétaire terrien et les paysans de sa commune. Parmi ces derniers Balzac campe notamment Tonsard, un ivrogne et un débauché, voleur par occasion ; sa femme, une commère cynique ; ses filles qui courent les champs en compagnie des garçons du village ; enfin et surtout […]

6 mai

[…] figure déjà dans la Bibliographie générale de l'Agenais 544 pour une petite brochure d'économie sociale, figurera dans le Supplément de 1900 pour un récit de voyage Paysages de Suisse et d'Italie dont il m'a soumis le manuscrit et qui va être inséré dans la Revue Catholique de Bordeaux 545 . Voilà encore un de mes fils littéraires ! Nous avons passé une bien agréable journée sub umbra Castaneae 546 . Nous jouissions d'une bonne petite chaleur de 28 degrés, mais il faisait si frais à l'abri du feuillage agité par le vent, que nous étions comme des poissons dans l'eau.

4 juillet

[…] n'ai jamais vu d'aussi magnifiques fusées et décidément le bon Dieu est un maître artificier. Les trois grands foyers d'électricité ont dégagé des flammes toute la nuit. Vers une heure la pluie est arrivée, mais les gouttes d'eau semblaient tomber sur une immense pelle rougie au feu. Je viens de passer plusieurs minutes à contempler (c'est le mot) le vol si rapide et si charmant de mes quatre hirondelles qui semblaient vouloir, tout exprès pour m'amuser, exécuter d'infinis chassés-croisés sous mes fenêtres. Cela m'a rappelé les vers exquis d'Édouard Pailleron 549 dans le sonnet sur l'Hirondelle :

Par la grâce du vol, c'est un oiseau sans pair.
N'est-elle pas jolie, alors que d'un coup d'aile,
Dans les rayures d'ombre et dans le soleil clair,
Elle passe en criant, vive comme un éclair,
La faucheuse d'azur ! Et dirait-on pas d'elle
La navette de jais d'un tisserand de l'air ?
Votre oeil aime à la suivre où son vol s'évertue.
[…] eu à chercher dans les oeuvres d'Horace un passage cité dans une plaquette que je vais réimprimer, et chemin faisant, je me suis oublié à relire presque tout le volume, comme on s'oublie à prolonger une promenade dans un parc délicieux, rempli d'ombre et de fraîcheur. Ah l'aimable poète qu'Horace ! Et combien il a tout à la fois de bon goût, de bon sens et d'esprit ! C'est donc de lui qu'on peut dire surtout qu'il a des grâces et des beautés toujours nouvelles. Je n'avais pas relu mon cher Horace 550 depuis longtemps, et les quatre ou cinq heures que je viens de passer avec lui ont été douces comme des heures passées avec un vieil ami retrouvé.

549 Édouard Pailleron (1834-1899), d'abord avocat, commence en 1860, avec une comédie en un acte en vers (Le Parasite) une carrière de poète -il est généralement jugé médiocre -et de prosateur (notamment avec Le Monde où l'on s'ennuie, en 1881, satire du monde académique). En 1863, il avait épousé la fille de François Buloz et il devint, par la suite, l'un des propriétaires de la Revue des Deux Mondes. Il tint surtout un salon littéraire très fréquenté et entra, en 1882, à l'Académie française. Le poème qui suit est extrait du recueil La souris, publié en 1887.

14 juillet

Pendant que les badauds s'amusent, je travaille fort et ferme. J'avais déjà la plume en main au moment où le canon de Gontaud a tiré son premier coup en l'honneur de la fête (5 h précises). J'achève aujourd'hui de préparer le 20 e fascicule des Correspondants de Peiresc (D r Novel et c ie ) 551 que j'enverrai demain à M. de Berluc 552 pour qu'il le revoie et le soumette ensuite à l'Académie d'Aix 553 . J'ai travaillé, tout ce temps-ci, avec un entrain merveilleux. J'ai presque mis au point les Lettres de Peiresc à Ménestrier et de Ménestrier à Peiresc qui forment la 3 e et dernière partie de mon tome V 554 et qui partiront pour le Ministère avant la fin du mois. J'ai aussi presque entièrement préparé pour l'impression « L'amiral Jaubert de Barrault et les pirates de La Rochelle », que je destine à la Revue Catholique de Bordeaux 555 , et « Un notaire d'autrefois M e Baboulène », que je destine à la Revue de l'Agenais 556 .

550 Poète latin (65-8 av. J.-C.). Le volume évoqué est certainement celui des Odes qui comptaient parmi les classiques étudiés dans le cadre des humanités enseignées dans les collèges et les lycées (à compléter : composition de la bibliothèque familiale).
551 Il s'agit des Lettres inédites du D r A. Novel, écrites à Peiresc et à Valavez, d'Espagne, de Paris, de Bretagne (1625-1634), suivies de Lettres inédites de quelques autres médecins provençaux (Cassagnes, Mérindol, Senelle…) publiées à Aix-en-Provence, en 1894, sous la forme d'Extrait des Mémoires de l'Académie d'Aix-en-Provence.
553 Académie d'Aix-en-Provence : Carbonell (Charles-Olivier), Histoire et historiens, une mutation idéologique des historiens français 1865-1885, Privat, 1976, p. 243.
554 Lettres de Nicolas-Claude Fabri de Peiresc, publiées en 7 tomes entre 1888 et 1898, in-4°, dans la Collection de documents inédits sur l'histoire de France, 2 e série : Peyrous (Bernard), « L'oeuvre d'éditeur scientifique de Tamizey de Larroque », Revue française d'histoire du livre, 61 e année, n°s 76-77 -n elle série 3 e et 4 e trimestres 1992, p. 225-227.
555 Tamizey de Larroque (Philippe), L'Amiral Jaubert de Barrault et les
J'ai écrit la préface de ce dernier opuscule un jour où la chaleur était intolérable et à l'heure la plus terrible, dans l'après-midi, et jamais peut-être ma plume n'a couru sur le papier avec plus de rapidité. Tant il est vrai que le feu sacré se moque des feux du soleil ! 16 juillet J'ai dit dans une note des Lettres inédites de Ramond 557 (p. 27) : « La température est une grande capricieuse au moment où j'écris cas lignes (31 janvier), le soleil rayonne si bien qu'il fait presque chaud comme en plein été. » Aujourd'hui, au plus fort de l'été, il fait tellement froid que je n'ai pu lire mes journaux sous le châtaignier et qu'il m'a fallu me réfugier dans ma chambre, après m'être affublé pirates de La Rochelle, recueil de pièces rares ou inédites, publié avec un avertissement et des notes, Paris, A. Picard et fils, 1894, 96 p., extrait de la Revue catholique de Bordeaux, 1893-1894. Antoine Jaubert de Barrault, vice-amiral de Guyenne, mena cette expédition punitive, début juin 1617 : Duclot (Jean-François)-Saignac (Jean-Pierre), L'ascension d'une famille noble de l'Entre-deux-mers, les Jaubert de Barrault des guerres de religion à la Fronde 1550-1655, collection « Archives et chroniques de l'Entre-deuxmers », 1993, p. 101-114. Voici les résultats de l'expertise faite aujourd'hui à l'occasion de la fin du bail de Ducasse et du commencement du bail de Barthalome : 4 vaches = 1 500 francs, 1 jeune boeuf : 295 francs, 1 génisse : 90 francs, 2 charrettes et la caisse d'un tombereau : 187,50 francs, 2 charrues : 60 francs, 1 rouleau : 30 francs. Total : 2162,50. Ce qui dépasse de 162 francs l'expertise précédente. J'ai donc à remettre au fermier sortant 162 francs 50. Je note ici que le nouveau fermier se contente de 8 hectolitres de semence et nous en abandonne 2 sur les 10 que son prédécesseur nous a remis. Il faut par conséquent remplacer dans le bail les 10 hectolitres qu'il y aurait eu à nous rendre, à la fin du nouveau bail, par 8 hectolitres. 10 août Nous avons eu à déjeuner, outre M. Audiat, M. Raymond Bazin, maire de Castelnau-sur-Gupie, et un autre Raymond, le compagnon de chasse et l'ami de mon fils, Dubrana. Le repas a été très gai et les causeries qui l'ont suivi, jusqu'au souper et qui ont aussi suivi le souper ont été non moins gaies. Toute la journée a été très animée et M. Audiat et moi, qui touchons à la vieillesse, nous avons été heureux de voir les jeunes dépenser tant de belle humeur et de verve endiablée. L'art et la poésie ont embelli par surcroît cette joyeuse journée. M. Bazin, qui dessine fort bien, a croqué avec sa plume le pavillon, la ferme et le vieux chêne. M. Audiat a composé des vers charmants sur le châtaignier sous lequel nous avons tant jasé en 91, 92 et 93. Il a très aimablement identifié l'arbre et le propriétaire et la pièce allégorique est aussi gracieuse que spirituelle 566 . Mon hôte pouvait-il 566 Un majestueux châtaignier se trouvait, à Larroque, à proximité du payer son écot en meilleure monnaie Que pourrait-on mettre audessus des diamants de la poésie ? 19 août Les pèlerinages littéraires au pavillon Peiresc se succèdent. 6 octobre 6 plus éclatante et la plus implacable lumière ! Depuis le 10 août, jour où jaillirent les larmes brûlantes de Saint-Laurent 577 sous la forme d'innombrables étoiles filantes, nous avons eu d'atroces chaleurs. Nous nous maintenions, chaque après-midi, entre 30 et 35 degrés, et un jour même nous avons atteint 36 degrés. 
C'est le jour où mon malheureux neveu Jean de Boëry arriva de la station de la Gazelle 578 à pied, m'apportant les photographies du printemps dernier et sur le point d'expirer, comme le soldat de Marathon. La pluie d'hier, qui a fait tant de bien aux végétaux, ne nous fait pas moins de bien : elle marque la fin de l'ère des hautes températures et nous allons désormais goûter les aimables fraîcheurs de l'automne. 8 septembre Nous venons de posséder pendant trois jours un véritable trésor en la personne de mon ami le chanoine E. Allain 579 , archiviste de 1'archevêché de Bordeaux, homme très savant et très aimable avec lequel on peut également causer de choses sérieuses et de choses légères. C'est un homme d'esprit qui a le coeur sur la main. C'était pour la première fois que nous nous rencontrions, après plusieurs années de relations épistolaires : à peine était-il ici depuis cinq minutes que nous ressemblions à de vieux amis. Je défie les pinsons de mes haies d'avoir jamais été plus gais que nous. Que le bon Dieu me ramène souvent un tel hôte ! 21 septembre Les pluies d'automne ! Nous en avons vu les débuts hier et avant-hier. Cela a quelque peu gâté le séjour qu'ont fait ici 577 Allusion à la mise à mort de saint Laurent martyrisé sur un gril : Audisio (Gabriel), Les Français d'hier, t. 2 : Des croyants XV e -XIX e siècle, A. Colin, Paris, 1996, p. 454-457. 578 À 6 km environ au Nord de Gontaud, à mi-chemin entre Virazeil à l'ouest et Puymiclan à l'est sur l'actuelle D 933 : carte au 50 000 e , Marmande-Agen, 56, I.G.N., Paris, 2003. 579 Voir 29 mars 1893. depuis lundi ma soeur, ma nièce et mes deux petites-nièces Antoinette et Cécile 580 , mais cela n'a pas empêché tout le monde d'être bien gai. Combien j'aime ces réunions de famille où l'affection se ravive au milieu des doux souvenirs évoqués ! Le dimanche 17, MM. Andrieu père et fils 581 ont passé la journée ici et ont déjeuné et soupé avec nous. Le bibliographe et historien de l'Agenais venait pour la première fois au pavillon Peiresc. Il a été ravi de tout ce qu'il a vu, même au point de vue gastronomique, car, par un hasard heureux, nous avons pu servir à nos visiteurs des tranches de jambon d'York, des saucisses truffées, des pieds de cochon truffés, deux perdreaux, etc. La journée a été également bonne pour les invités et pour les inviteurs. 25 septembre J'ai eu la joie de cueillir quelques violettes dans mon jardin et d'en former un petit bouquet qui remplit tout mon cabinet de son délicat parfum. Me voilà payé de la peine prise, tout l'été, en arrosant à la sueur de mon front (littéralement) mes plantations de tout genre ! J'ai aussi en ce moment sous les yeux des roses fournies par le rosier du Bengale voisin de la volière. Me voilà donc en possession des fleurs les plus odorantes de la corbeille du printemps ! 4 octobre 580 Il s'agit vraisemblablement d'Éléonore Tamizey (1831-1917), épouse d'Onézime dit en famille « Henry » Truaut, de sa fille Marie-Marguerite-Élisabeth-Gabrielle Truaut (née à Lavardac le 5 septembre 1851) mariée le 17 février 1873 à Pierre-Henri Cramaix-Hugonis. L'une des filles de cette dernière, ici mentionnée de toute évidence, Marie-Antoinette Cramaix-Hugonis, épousa le 15 janvier 1901, François Briançon [A.P. Baquier]. d'Agen, ce matin : « C'est avec la plus vive douleur que je viens vous apprendre de la part de la famille Azema et Magen le décès de M. Adolphe Magen 583 . Il a rendu le dernier soupir après une longue agonie, hier soir à 11 heures. » M. 
M. Magen était un de mes plus anciens et de mes meilleurs amis, et je lui avais souvent dit : vous êtes pour moi en Agenais ce qu'est Jules Delpit 584 en Bordelais, le n° 1. Nous étions étroitement liés depuis plus de trente ans et il a été bien souvent mon hôte à Gontaud, où ses inspections de pharmacie le ramenaient chaque année. C'était un coeur et un esprit d'élite. Les dernières pages imprimées par lui ont été consacrées à l'analyse de mon recueil des lettres de Ramond. Le mot amicissimus qu'il m'adressait en tête de l'exemplaire du tirage à part n'était que la stricte expression de la vérité, car peu d'hommes m'ont autant aimé que lui. Qu'il repose en paix !

582 J. Andrieu mentionne dans la Bibliographie générale de l'Agenais, op. cit., t. III, p. 334 : Louis-Pierre-Élie Goux, médecin, né à Pachuca (Mexique), le 15 juin 1832. Il appartient à une famille originaire du Passage-d'Agen et il est membre de la Société de Sciences, Lettres et Arts d'Agen. Il a publié une brochure intitulée : Essai sur l'anatomie des Mermyres, Impr. P. Noubel, 1859, in-8°, 11 p. (Extr. du Recueil des Travaux de la Société académique d'Agen, 1 ere série, t. IX, 1880, tirage à 75 exemplaires). -Lauzun (Ph.), op. cit., p. 345.

583 Adolphe Magen (1818-1893), né à Agen, pharmacien, chimiste, Inspecteur des pharmacies de Lot-et-Garonne, correspondant du ministère de l'Instruction publique, secrétaire perpétuel de la Société académique d'Agen, directeur de la Revue de l'Agenais. Ses travaux et publications sont nombreux et variés, touchant à sa spécialité scientifique, à l'histoire et à la littérature. Une recension à peu près exhaustive en est donnée dans Andrieu (J.), op. cit., t. II, p. 97-102. -Tamizey souligne tout ce qu'il doit aux conseils d'Ad. Magen et à ses encouragements alors qu'il débutait sa carrière de chercheur ainsi qu'à la chaleur et à la constance de l'amitié qui les unissait, dans la plaquette nécrologique qu'il lui consacra (voir note 427). -Lauzun (Ph.), op. cit. -A.D. Lot-et-Garonne, 16 J 1 et 16 J 18, correspondance d'érudits : Fonds Tamizey de Larroque.

584 Jules Delpit (1808-1891). Licencié en droit en 1830, il suivit les cours de l'École des chartes où il se lia à J. Quicherat et Augustin Thierry qui le recommandèrent au ministre Villemain. Chargé le 11 mai 1842 d'aller rechercher en Angleterre les archives enlevées à Philippe-Auguste à Fréteval, il n'en trouva pas trace mais en rapporta beaucoup d'autres documents. Mais sa mission fut finalement interrompue faute de crédits. Le père de Jules Delpit, notable bordelais, ancien girondin, fit imprimer le fruit de ses travaux. J. Delpit contribua au Catalogue des manuscrits de la Bibliothèque de Bordeaux, et à la publication de la Chronique de Gaufreteau et de la Chronique du Parlement de Bordeaux (d'Étienne Cruseau, 1588-1616). Il était membre de l'Académie de Bordeaux dont il fut secrétaire général pendant 30 ans. Il participa, en 1845, à la création de deux autres sociétés, c'est-à-dire la Commission des Archives municipales de Bordeaux et la Société des Bibliophiles de Guyenne. Parmi ses ouvrages : Réfutation du livre de M. L. Veuillot sur le droit du seigneur ou Réponse d'un campagnard à un Parisien (1857), L'Origine de l'imprimerie en Guyenne (1869), Le droit du seigneur, seconde réponse à M. Louis Veuillot (1873), Le prince ridicule, mazarinade composée en 1650 (1875) : L'École Nationale des chartes, Histoire de l'École depuis 1821, Gérard Klopp éditeur, 1997. -A.D. Lot-et-Garonne, 16 J 11, correspondance d'érudits : Fonds Tamizey de Larroque.
Sa mémoire me sera toujours chère. Je viens de lire les discours prononcés sur la tombe de mon pauvre ami Magen et les articles nécrologiques qui lui ont été consacrés par M. Jules Andrieu dans la Constitution 585 et par M. Xavier de Lassalle 586 dans le Lot-et-Garonne 587 . On a dignement retracé les mérites si divers et si considérables de mon cher mort. Il faudra qu'à mon tour je lui rende un public hommage, car notre commun ami M. G. Tholin 588 me demande un 585 Sous-titrée « Journal quotidien de la démocratie républicaine » paraît de 1877 à 1901 [une publication bi-hebdomadaire entre 1875 et 1892 porte aussi le titre de « La Constitution », elle se définit comme le « journal du gouvernement du pays par le pays »]. La Constitution remplace Le Réveil de Lot-et-Garonne et de Villeneuve-sur-Lot dont il reprend en mars 1883 la numérotation d'année. Entre le 31 décembre 1897 et le 1 er juillet 1901, il prend comme titre 601 Peyrous (Bernard), « L'oeuvre d'éditeur scientifique de Tamizey de Larroque », Revue française d'histoire du livre, 61 e année, n°s 76-77 -n elle série 3 e et 4 e trimestres 1992, p.225-227. -Voir 16 novembre 1889, 7 mars 1890, 17 juin 1890 et 30 décembre 1891. Je reçois aujourd'hui cent exemplaires de ma petite brochure : Pour Peiresc, s.v.p. ! 602 35 autres exemplaires ayant été envoyés à M.L. Delisle 603 , pour être distribués à l'Institut et au Comité des Travaux historiques 604 , et 15 autres encore à M. Vidal, bibliothécaire de la Méjanes 605 , pour être distribués à Aix. Réussirai-je à recueillir assez de souscriptions pour que la restauration de la chapelle funéraire des Fabri puisse se faire l'année prochaine ? Que Dieu m'assiste dans ma pieuse entreprise ! La livraison de la Revue félibréenne 606 d'où est tirée ma brochure m'apprend que nous avons déjà recueilli -d'avance -près de 500 francs. , Puymiclan, Saint-Pierre 608 . C'était dans le silence de la campagne d'un effet saisissant. Tous ces funèbres tintements appelaient irrésistiblement la prière pour les morts dont c'est aujourd'hui la triste fête J'ai prié pour mes morts de ces derniers mois, ma tante et mon ami Magen 609 . J'ai prié aussi pour mes morts d'autrefois, ceux de ma famille et ceux de cette autre famille que l'on appelle le groupe des 607 Jean-Baptiste Brissaud, né à Puysserampion (Lot-et-Garonne) en 1854. Docteur en droit de la faculté de Bordeaux, en 1879, ayant soutenu une thèse sur la notion de cause dans les obligations conventionnelles en droit romain et en droit français. Il fut nommé en 1880, professeur de droit français à l'université de Berne et promu à l'ordinariat en 1881. Il passa en janvier 1883 à la faculté de droit de Montpellier où il professait le cours d'histoire, puis de Toulouse en 1885. Membre de l'Académie de législation, de l'Académie des Sciences de Toulouse et de la Société archéologique du Midi. Il fut l'un des directeurs de la Revue générale de Droit et a signé de nombreux ouvrages : Andrieu (J.), op. cit., t. 1, p. 117-118. -Son fils Jacques a laissé le récit d'une visite à Larroque : « Une visite à Peiresc, écrit à Fauillet, le 25 janvier 1961 » dans Maisani (Claude), « Communication… sur Philippe Tamizey de Larroque » in Revue de l'Agenais, 87 e année, 1 er Bulletin Trimestriel, janvier-mars 1961, p. 25-30. -A.D. Lot-et-Garonne, 16 J 7, correspondance d'érudits : Fonds Tamizey de Larroque. 608 618 À 3 km. environ au sud-est d'Agen avec une église romane fortifiée monographie d'Aiguillon : nous avons travaillé ensemble à le compléter, à l'améliorer. 
Ce sera un ouvrage fort intéressant et où j'ai retrouvé avec plaisir les documents jadis découverts et transcrits par moi à la Bibliothèque Nationale pour mon ami M. Lafargue, alors chef de division à la préfecture d'Agen et maintenant conseiller de préfecture à Albi. Le bon curé est parti enchanté de son séjour et de son accroissement de bagage historique. Je l'ai accompagné jusqu'au pont de Rouderat 619 et j'ai profité de ma politesse pour aller voir mes fermiers qui font un nouveau transport de terre aux Quatre journaux. Mais ce n'est plus à pleins chariots, c'est à pleins tombereaux qu'ils transportent la terre entassée au bord du ruisseau, engraissée par les feuilles des peupliers et qu'ils en recouvrent tout le haut de la pièce. C'est du terreau auquel on n'avait pas touché depuis un demi-siècle. Chaque tombereau vaut dix chariots et il y en aura pour toute la semaine, plus de 300 tombereaux à 50 par jour en moyenne. C'est un travail digne de ces grands remueurs de terre qui s'appelaient les Romains. Les gigantesques travaux de mes piocheurs ont été interrompus par la pluie. J'ai mis à profit ce statu quo pour établir une pièce d'eau en face de celle qui existait. Elle est plus petite que son aînée, mais d'un carré parfait. Il y avait là un trou alimenté par ce que nous appelons un sourdiou 620 . Le trou, approfondi, élargi, devient une succursale du grand lavoir et un bassin pour l'arrosage. C'est un progrès. Vive le progrès dans les petites comme dans les grandes choses !

13 novembre
Nouveau témoignage en faveur du climat de l'Agenais et particulièrement du climat de Larroque. La neige vient de tomber dans les départements voisins, à Auch, à Montauban, à Périgueux : nous n'en avons pas vu un seul flocon. Le froid a été très vif non seulement dans le Nord de la France, notamment à Paris, mais encore tout près de nous, dans le Périgord où le thermomètre a baissé de plusieurs degrés au-dessous de zéro. Ici nous l'avons vu se maintenir au contraire à plusieurs degrés au-dessus et nous en avons été quittes pour deux journées sans soleil, mais non sans vent d'une extrême fraîcheur. Aujourd'hui l'été de la Saint-Martin reparaît. J'ai pu lire, assis sur la pierre de mon tombeau, mes trois journaux quotidiens sans éprouver la moindre sensation de froid. L'astre du jour, comme disait l'école de Delisle, est étincelant et cela nous promet une magnifique queue d'automne.

14 novembre
Je profite du beau temps dont je parlais hier pour faire planter, à la place des quatre ormeaux morts dans la prairie, un platane, un marronnier et deux tilleuls. Espérons que ces arbres seront plus heureux que leurs prédécesseurs. J'ai à mon service, tous ces jours-ci, un jardinier de passage : il était autrefois dans les grandeurs et il était chargé de l'entretien des jardins si admirables des environs de Pau. Il a planté, autour de mon tombeau et le long des allées, divers arbustes : lauriers, lilas, rosiers, etc. Il a créé deux massifs de verdure et il a modifié l'alignement de deux des allées de façon à leur donner une courbe plus gracieuse.

620 Un sourdiù en gascon est une source ou une petite fontaine au bas d'une éminence et où l'eau est amenée par une conduite (tuiles ou tuyau) : Palay (Simin), op. cit., p. 918.
Il aurait des projets d'embellissement grands comme le monde, mais je l'arrêterai sur cette pente dangereuse : ami de la simplicité dans mon jardin comme partout, je repousserai des plans ambitieux qui exigeraient du jardinier errant un séjour de plus d'un mois et, à la fin de la semaine, je lui dirai d'une voix ferme : tu n'iras pas plus loin.

Même jour : inauguration en mon cabinet de travail, du petit poêle envoyé d'Agen par le nouveau secrétaire perpétuel de la Société des Sciences, Lettres et Arts de cette ville, mon ami Jules Andrieu 621 (prix : 24 francs). L'instrument marche à merveille : il chauffe beaucoup sans fumée et son murmure m'est très agréable. C'est comme le ronron d'un gros chat. Je gagne bien des choses à l'installation de mon poêle : je reste ainsi près de mes livres et de mes dossiers ; j'ai beaucoup plus de clarté qu'au premier étage ; dès que brillera le plus petit rayon de soleil, j'en jouirai, tandis que dans ma chambre toute au nord j'étais en une sorte de froide et obscure prison ; enfin il est plus sain d'habiter au milieu des livres que dans une chambre à coucher. Toutes ces considérations me font regarder mon petit ronfleur comme un petit bienfaiteur.

16 novembre
Malgré la pluie, j'ai fait planter une haie derrière mes barrières, de façon à avoir double clôture, clôture de bois et clôture de verdure. Cette haie, qui se prolonge en dehors des barrières, jusqu'à la fontaine, et, en dehors du jardin, derrière mon tombeau, est formée d'arbustes qui ont l'avantage de n'être jamais flétris, ni par les rigueurs de l'hiver, ni par les ardeurs de l'été. Ce sera autour du jardin et d'une partie de la prairie une parure perpétuelle aussi bien qu'un solide rempart.

22 novembre
Mon fermier Barthalome a commencé aujourd'hui le défrichement de la mauvaise prairie ou plutôt de la véritable petite lande voisine de l'enclos de Bouton. C'est là que nous planterons notre première vigne. Rude piocheur que ce Barthalome ! Il a tout seul fouillé un grand espace de terrain si résistant, si rebelle, et à une grande profondeur. Je l'ai vu travailler avec une joie mêlée d'admiration. Je saluais déjà les récoltes futures et il me semblait que de chaque coup de pioche jaillissaient de magnifiques raisins.

24 novembre
Mon illustre jardinier de passage n'était qu'un vulgaire coquin. Il est parti brusquement de Gontaud, oubliant de payer sa note à l'hôtelier, de même que le chevalier de Gramont s'éloignait de Londres à tire d'aile, ayant oublié d'épouser la soeur d'Hamilton 622 . J'ai remplacé le chevalier d'industrie

621 Jules Andrieu, né en 1839, conducteur des Ponts et Chaussées, membre de la Société des Sciences, Lettres et arts d'Agen, correspondant de l'Académie de Bordeaux, collaborateur actif au journal Sud-Ouest, auteur d'études sur Jasmin (1881) et Théophile de Viau (1877) et d'autres aspects de la vie culturelle agenaise sous l'Ancien Régime ; il a surtout publié la Bibliographie générale de l'Agenais (1886-1887) dont est extraite une Bibliographie Tamizeyenne (1862-1887), Agen, Impr. V. Lentéric, 1887, 22 p. Sur son oeuvre et sa disparition prématurée voir Lauzun (Ph.), op. cit., p. 261-263. -A.D. Lot-et-Garonne, 16 J 2 et 16 J 3, correspondance d'érudits : Fonds Tamizey de Larroque.
622 Philibert, chevalier puis comte de Gramont, frère d'Antoine III, duc de Gramont (1621-1707) : destiné à être homme d'Église, il préféra la carrière des armes et se fit remarquer par Condé et par Turenne en Franche-Comté (1668) et en Hollande (1672).
Il fut nommé lieutenant général du Béarn et gouverneur de l'Aunis. Ayant disputé à Louis XIV la conquête de

par l'honnête Chirol qui a planté aujourd'hui quatre petits cèdres et une trentaine d'arbustes divers (le tout m'ayant coûté 5 francs). Chirol a aussi fait un terrassement autour du vieux chêne et nous y établirons une sorte de petit jardin suspendu. Deux des cèdres ont été plantés de chaque côté du portail, et s'ils se développent heureusement, ils seront les majestueux gardiens de l'entrée. Auprès de cette même entrée nous avons créé un massif formé d'une dixaine (sic) d'arbres et arbustes toujours verts. Ce sera d'un effet très décoratif. Chirol reviendra lundi prochain avec son fils aîné et ils arrangeront ensemble la grande allée qui part du pavillon. De nouvelles plantations achèveront de faire disparaître les ravages de la sécheresse.

28 novembre
Première gelée sérieuse. Continuation du défrichement du futur vignoble. Admirable soleil qui m'a permis de faire une longue lecture en plein air, non loin des défricheurs (le père et le fils) dont les pioches tombaient en cadence et formaient une sorte de chanson du bon travailleur très douce à mon oreille. J'ai fait planter une grande croix en coeur de chêne qui abritera mon tombeau. Peinte en coaltar, elle résistera autant que ma pierre tombale elle-même aux injures du temps. J'ai profité de l'occasion pour mettre une couche de coaltar sur les pieux et les planches qui forment barrière autour du jardin. Quand la verdure s'étalera sur ce noir, ce sera d'un bon effet.

1 er décembre
Presque tout le mois qui vient de s'écouler a été consacré à une correspondance effrénée au sujet du Pour Peiresc s.v.p. Je suis au-dessous de la vérité en indiquant ici plus de 200 lettres de demandes, plus de 50 lettres de remerciements. L'affaire marchera-t-elle ? J'ai beau lui donner tout mon temps, tout mon zèle, il y a des blocs de glace que je ne parviens pas à réchauffer. Que de déceptions ! Que de défections ! Que de prétendus amis qui aiment mieux l'argent que l'amitié ! Parmi les méridionaux combien ne sont que de splendides farceurs qui, après avoir donné de la voix, ne donnent pas autre chose ! En revanche, combien j'ai été consolé par d'affectueuses générosités ! Mon co-peirescien L. Delisle a été un des premiers à souscrire et a donné 80 francs. Et combien d'autres ont fourni 500 francs et, de son côté, Paul Mariéton 627 en a touché à peu près autant. Voyons si décembre nous amènera un second mille !

635
636 Leo Drouyn (1816-1896), né à Izon (Gironde) et mort à Bordeaux. Il se fit d'abord apprécier comme peintre, ses Paysages des Landes étant exposés aux Salons de Paris de 1851 à 1860. Il doit ensuite et surtout sa célébrité et sa notoriété à ses gravures à l'eau-forte : Vues de Bordeaux, Vues de Bazas et à son travail d'archéologue : Choix de types d'architecture religieuse de la Gironde (1845) notamment. -Andrieu (J.), op. cit., t. 1, p. 246. -A.D. Lot-et-Garonne, 16 J 12, correspondance d'érudits : Fonds Tamizey de Larroque. -Portelli-Zavialoff (Frédérique)-Lacoste (Jacques), Léo Drouyn, artiste et archéologue, Mollat, 1997.

lue deux fois avant que M. Vivie 637 , notre Secrétaire général, en ait lu deux pages, hier soir, dans notre séance générale. Elles ont été applaudies par tout le monde. »

12 février
Je reste toujours le planteur infatigable. Aujourd'hui j'ai fait mettre en terre une trentaine de plants de lauriers et une douzaine de petits chèvrefeuilles, sans parler de plusieurs pieds de violettes.
Ceux qui m'accuseraient d'avoir négligé l'arbusticulture seraient aussi injustes que ceux qui me reprocheraient de n'avoir pas assez travaillé. Si parva licet componere magnis 638 , je rappellerai que mon héros Peiresc se plaisait presque autant dans son jardin que dans sa bibliothèque 639 .

18 février
Les journaux m'apprennent la mort du député et ancien ministre M. Viette 640 que j'ai jadis vu à Montbéliard chez son ami mon pauvre beau-frère Henri de Grammont 641 , il y a plus de 25 ans. M. Viette, qui était tout jeune alors, était le plus gai et le plus aimable des hommes. Nous avons bien ri ensemble soit chez lui, à Blamont, où il nous donna la plus cordiale hospitalité, soit dans notre voyage dans le Jura et en Suisse. J'étais alors un maire de l'Empire et M. Viette était déjà un républicain ardent. Nous nous livrions des combats épiques avec une verve endiablée. M. Viette n'avait jamais oublié son ancien adversaire et, ce qui est une bonne note, il était resté bon enfant au milieu des grandeurs. Pendant son dernier ministère, il se montra très gentil pour moi qui lui avais recommandé un de mes amis. Une autre fois, il m'avait écrit qu'il serait toujours heureux de pouvoir me rendre service. J'ai applaudi avec sympathie aux paroles du président de la Chambre des Députés qui, entre autres éloges, a donné à M. Viette cet éloge, qu'il « est toujours resté fidèle à ses opinions et à ses amitiés ».

637 Il peut s'agir de Jules de Vivie de Régie (1862-1927), avocat à la cour d'appel de Bordeaux, à Nontron puis à Lourdes, fils de Louis-Joseph de Vivie de Régie, cousin de Tamizey ou plus certainement de Jean de Vivie de Régie, son cousin [A.P. Baquier]. La séance ici évoquée est peut-être celle de la Société archéologique de Bordeaux, fondée le 2 mai 1873 ou de la Société des Antiquaires de France, à moins qu'il ne s'agisse de l'Académie de Bordeaux dont L. Drouyn était membre. -A.D. Lot-et-Garonne, 16 J 27, correspondance d'érudits : Fonds Tamizey de Larroque. -Voir 13 juin 1890.
638 Trad. du latin : « Si tant est que l'on puisse rapprocher les petites choses des grandes. »
639 Tamizey de Larroque (Philippe), Deux jardiniers émérites, Peiresc et Vespasien Robin, J. Remondet, Aix-en-Provence, 1896. Peiresc est notamment réputé avoir introduit en Europe l'acacia et la tubéreuse (à tort, dans ce dernier cas, reconnaît Tamizey).
640 Jules-François-Stanislas Viette, né à Blamont (Doubs) en 1843. Il combattit l'empire dans les journaux républicains de l'Est et notamment dans la Démocratie franc-comtoise dont il fut l'un des fondateurs. Capitaine des mobilisés du Doubs pendant la guerre de 1870, il fut cité à l'ordre du jour de l'armée. Par la suite, sur la recommandation publique de Gambetta, il fut élu, le 20 février 1876, député de l'arrondissement de Montbéliard. Dans sa profession de foi, il demandait une République sagement progressive, la réduction du service militaire, la liberté des cultes, l'instruction laïque, gratuite et obligatoire. Il prit place à gauche et fut des 363 qui refusèrent un vote de confiance au ministère du 16 mai. Réélu en 1877 et en 1881, il déclara (février 1883) dans la discussion du projet Fabre sur l'expulsion des prétendants, qu'il n'y avait pas de droit commun pour les princes. Aussi signa-t-il en février 1886 la proposition d'expulsion Ballue-Duché. Il se prononça pour la réforme de la
641 Voir 27 septembre 1889, 14 et 15 septembre 1892.

6 mars
prospères, j'ai mis en terre, hier et aujourd'hui, des lauriers, des myrthes, des violettes.
De plus, j'ai planté de ma propre main, pour remplacer quelques-uns des bouleaux tués par la sécheresse de l'an dernier, plusieurs branches de peuplier. Ah ! Combien tout cela sera joli, plus tard ! Je m'attache de plus en plus à mon pauvre rocher qui finira par être aussi verdoyant et aussi fleuri que le jardin le mieux situé et le mieux tenu.

28 février
Nouvelles plantations de myrthes et de violettes. Tout est garni maintenant et je n'ai plus rien à désirer. Peut-être même mes richesses végétales seront-elles surabondantes et faudra-t-il opérer quelques retranchements pour éviter le trop plein. J'ai fait mettre des roseaux et une petite oseraie autour du vivier. Il n'y aura là qu'arbustes aquatiques, des bambous y ayant été installés en automne. Je note aussi qu'un verger vient d'être établi dans la pièce que j'appelle du genévrier, à cause du voisinage de celui qui, comme une borne toujours verdoyante, s'élève très droit et déjà très haut entre la propriété de Bergé et la mienne : nous avons planté là, après un bon transport de terre, des pêchers, des poiriers, au nombre d'une vingtaine.

5 mars
Après les pêchers, les poiriers, les pommiers, plantés la semaine dernière, c'est le tour des pruniers. Mon fermier en a apporté 18 de son petit enclos de Rebec 642 , tous de jolie taille et de bonne mine. L'an prochain, nous ferons une plantation plus considérable, car je voudrais garnir de pruniers deux allées, une se dirigeant vers le nouveau verger, l'autre vers le chemin que l'on suit pour gagner la maison de mes voisins Bergé. Le printemps s'approche. Salut à lui ! Mes abricotiers ont déjà leur panache rose et les boutons d'or tapissent mes allées. Ce ne sont pas seulement les fleurs qui m'annoncent cette saison que je bénis avec enthousiasme, ce sont aussi les journées plus claires et plus longues. Je puis commencer à travailler vers 6 heures 1/2, chaque matin, et ne m'arrêter que vers 6 h. 1/2, chaque soir. Aujourd'hui de 6 h. 1/2 à 11 h. 1/2, j'ai pioché sans la moindre interruption et avec une telle ardeur, que j'ai pu pousser la Table alphabétique du tome IV des Lettres de Peiresc, de la page 150 à la page 200 643 . C'est marcher à raison de 10 pages à l'heure, c'est-à-dire à toute vapeur.

10 mars
Aujourd'hui, en allant voir un nouveau transport de terre de mes vaillants fermiers, j'ai passé devant la haie qui borde mon mauvais chemin et j'ai eu le plaisir d'y trouver l'aubépine en fleur. Encore un signe de l'approche du printemps ! La floraison a été hâtée par la situation de la haie en plein soleil toute la journée. Question d'exposition. Dans quelques jours nous aurons partout de ces beaux buissons blancs qui m'ont toujours été si chers. J'ai fait greffer plusieurs églantiers dans le cabinet de verdure qui est au bout de ma grande allée et dans la haie voisine de mon tombeau. Si la double opération réussit, si mes rosiers sauvages se civilisent, j'aurai de jolies roses mêlées, d'une part, au feuillage des jeunes chênes, mêlées, d'autre part, au feuillage du vieux genévrier.

642 Au nord de Gontaud, entre Birac au sud-ouest et Puymiclan au nord-est [437-3246] : Carte 1738 Est Seyches : série bleue 1 : 25 000, I.G.N., Paris, 1987.
643 Voir 20 janvier 1893 : Peyrous (Bernard), « L'oeuvre d'éditeur scientifique de Tamizey de Larroque », Revue française d'histoire du livre, 61 e année, n°s 76-77 -n elle série 3 e et 4 e trimestres 1992, p. 225-227.

Voilà bien les giboulées de mars ! Nous avons, depuis ce matin, les quatre-temps : pluie, soleil, vent et grêle.
-Je note avec tristesse que ma pauvre chatte Gredinette, dont j'ai plusieurs fois fait mention, a disparu depuis le commencement du mois et qu'elle a probablement été tuée dans les bois où elle allait sans cesse chasser. C'était une bien jolie bête avec sa robe aux trois couleurs et sa mine futée. Elle était toute jeune encore, car elle venait de naître quand je m'installai ici.

16 mars
Le lendemain du jour où j'ai écrit la petite oraison funèbre de Gredinette, elle a reparu, mais en quel état de maigreur et d'affaiblissement ! C'est un squelette à demi-vivant. La pauvre bête avait dû, pendant les douze ou treize jours de sa disparition, être enfermée, sans nourriture. Survivra-t-elle aux souffrances de sa captivité mystérieuse ? Elle qui avait la vivacité même d'un écureuil, elle se traîne languissamment, comme épuisée, anéantie.

22 mars
Vraie journée d'été, tant le soleil était flamboyant. Ma promenade du soir (car j'en fais deux par jour, une avant le dîner, l'autre avant le souper) a été attristée par l'assassinat fait à trois pas de moi par mon jeune fermier d'un joli oiseau, un merle voyageur, qui s'était arrêté depuis le matin près du vivier où il avait bu plusieurs fois. Le pauvre oiseau, que son assassin appelle un merle blanc parce qu'il avait autour du cou comme une cravate blanche qui contrastait avec son plumage d'un noir lustré, s'était installé sur un orme tout entouré de lierre. Il avait ainsi à manger et à boire. Peut-être comptait-il vivre désormais en un lieu si bien choisi ! Foudroyé par le coup de fusil, il est resté comme enseveli dans le lierre et son bourreau a été obligé de grimper le long de l'arbre pour aller le chercher, froissant ainsi à la fois les feuilles du lierre et moi-même qui aurais donné bonne chose pour conserver la vie de mon pauvre petit visiteur.

650 Louis Audiat : né à Moulins en 1833, conservateur de la bibliothèque de Saintes où il a fondé une société des arts, sciences et belles-lettres et une société archéologique et historique de la Saintonge. Poète élégiaque, il a publié Poésies (1854) et Nouvelles poésies (1862) ; Essai sur l'imprimerie en Saintonge et en Aunis, Pons, 1879 et « Un petit-neveu de Châteaubriand, Édouard de Blossac, ancien sous-préfet de Marmande », extr. Revue de l'Agenais, 1877, tiré à part, Agen, Lamy, 1877, 35 p. Auteur de plusieurs études sur B. Palissy, chez Aubry, Paris, 1861, XXI-358 pp. et chez Didier, Paris, 1868, in-12. Voir Andrieu (J.), Bibliographie générale de l'Agenais, t. 2, 1886, p. 29-30. -A.D. Lot-et-Garonne, 16 J 3, correspondance d'érudits : Fonds Tamizey de Larroque.

le parterre qui a remplacé la vieille halle. Tout cela m'a fait oublier de bien pénibles préoccupations et, comme un baume souverain, a mis dans mon coeur enfiévré un apaisement plein de douceur.

17 avril
Nouvelle visite de l'abbé Breuils, curé de Caseneuve, qui n'a fait que souper et coucher, étant obligé de repartir pour Bordeaux, ce matin même. Il va faire imprimer son travail sur Saint-Austinde 652 et il a reproduit dans son prospectus une lettre d'encouragement que je lui écrivis, l'an dernier. Puisse ma lettre faire du bien à son livre qui sera, du reste, un livre excellent !

30 avril
Pendant presque tout ce mois, mais surtout pendant la dernière quinzaine, nous avons eu d'abondantes pluies avec deux ou trois orages et de grands coups de vent. Au moment où j'écris, la pluie tombe encore et le vent souffle avec rage.
Je vais partir, le 4 du mois prochain, pour Aix-en-Provence, où je tâcherai de préparer toutes choses pour que le monument de Peiresc soit érigé en 1895 653 . (On m'a déjà nommé président du comité d'organisation). Je quitterai Aix le samedi 12 mai pour aller passer sept ou huit semaines à Carpentras et achever les transcriptions destinées aux quatre derniers volumes des Lettres de Peiresc. Ce sera là ma dernière campagne peirescienne et je prie le bon Dieu de la bénir. Une fois de retour, si je reviens, je ne quitterai plus mon cher pavillon dont je ne me sépare qu'avec un extrême regret. Mais il le faut, car l'achèvement de la publication de la correspondance de mon héros est, en quelque sorte, une question d'honneur et on ne doit jamais transiger en ces questions-là. Abandonner mon entreprise par désir de tranquillité, par égoïsme, ce serait déserter à la veille d'une décisive bataille. Mais avec quelle joie je reverrai, à la fin de juin ou au commencement de juillet, mon pavillon et mon jardin, et reprendrai mon livre de raison qui est pour moi un si bon confident et un si doux ami !

652 Voir 22 février 1893.
653 Ce qui fut fait, mais en l'absence de Tamizey : voir Couture (L.), op. cit., p. 509 et introduction.

24 juin
Je suis arrivé hier au soir. Le bon Dieu sait avec quelle joie j'ai revu le pavillon Peiresc. Je suis si accablé de fatigue et je vais être si occupé pour réparer les retards apportés en toutes choses par près de deux mois d'absence, que je n'écrirai ici que peu de lignes jusqu'à ce que ma barque soit remise à flot. Ce que je tiens à constater tout d'abord, c'est combien était beau le spectacle que m'ont offert, ce matin, les églantiers chargés de fleurs qui entourent mon tombeau. Des milliers de roses brillent dans la verdure de la haie : elles jettent un voile charmant non seulement sur le vieux genévrier, mais sur la croix dont le sommet émerge à peine de toutes ces touffes qui semblent vouloir l'envelopper entièrement. Je suis resté plusieurs minutes en contemplation devant ce gigantesque bouquet. Le bonheur de me retrouver chez moi ajoutait encore à ma douce extase.

30 juin
note en marge : Séjour à Aix en mai 1894
Je ne puis consigner ici tous les souvenirs de mon voyage, car les pages blanches de mon registre n'y suffiraient pas. Je vais seulement donner par ordre chronologique de brèves indications sur les principaux incidents de ce beau voyage : 4 mai vendredi. Diner (sic) et coucher à Cette 654 à l'hôtel du Grand Galion.

654 C'est-à-dire Sète. Tamizey emploie la graphie en usage au XVII e siècle pour ce toponyme.

, p. 18 mentionne une famille Ribbe, sans particule, comme une lignée «… de la ville d'Aix-en-Provence dont il est parlé dans l'Histoire héroïque de la noblesse de Provence ». D'une « famille anoblie à la fin du XVII e siècle », Charles de Ribbe est secrétaire perpétuel de l'Académie d'Aix : Carbonnel (Charles-Olivier), Histoire et historiens, une mutation idéologique des historiens français 1865-1885, Privat, 1976, p. 243.
Les Commencements d'une conquête : l'Algérie de 1830 à
-Surtout Charles de Ribbe était l'un des grands connaisseurs des livres de raison. Ph. Tamizey de Larroque lui rend hommage dans l'avertissement du Livre de raison de la famille de Fontainemarie 1640-1774, Agen, Impr. V ve Lamy, 1889, p. 5 et p. 121-123. Ch. de Ribbe a notamment publié La vie domestique, ses modèles et ses règles, d'après des documents originaux, Paris, Baltenweck (3 e édition en 2 vol. : le t.
II est presque en entier occupé par l'important livre de raison de la famille de Courtois-Durefort, commencé en 1812 par Antoine de Courtois, p. 107-253. Cet ouvrage a été traduit en allemand (Colmar, Hoffmann, 1880) et, la même année, Les livres de raison et leur rétablissement dans la coutume des familles comme moyen de réforme, compte-rendu de la discussion ouverte sur ce sujet dans la séance de la Société des études pratiques d'économie sociale, en date du 19 mai 1878, Paris, au siège de la Société. Grand in-8° de 36 p. Ch. de Ribbe a également signé Une famille au XVI e siècle, d'après des documents originaux [la famille du Laurens], Tours, A. Mame, 1879, 220 p. in-18 et surtout cette même année, chez le même éditeur : Les familles et la Société en France avant la révolution, d'après des documents originaux, 2 vol. (il s'agissait de la 4 e édition refondue et considérablement augmentée de cet ouvrage) et Le livre de famille, reproduction d'une conférence faite à l'assemblée des Catholiques, le 12 juin 1878 qui fit l'objet d'une seconde édition à la même librairie, in-8° de 24 p. Le Livre de raison est précédé d'une lettre chaleureusement approbative du cardinal Donnet, archevêque de Bordeaux, le 25 février 1879. Ch. de Ribbe est aussi l'auteur d'une « sympathique étude » (selon les termes de Ph. Tamizey de Larroque) sur l'économiste Le Play « son intime ami, digne de ce surnom qu'aimait le XVI e siècle : grand homme de bien » : Le Play d'après sa correspondance, Paris, Firmin-Didot, 1884, in-18. -A.D. Lot-et-Garonne, 16 J 24, correspondance d'érudits : Fonds Tamizey de Larroque. Seguins 686 . Après déjeuner, longue promenade sous les platanes avec leur charmante petite-fille, M lle Marguerite de Saint-Paulet. 20 mai, dimanche. Journée passée à Avignon. Déjeuné chez le chanoine Paul de Terris avec sa belle-soeur et ses deux nièces, Mme et Mlles Jules de Terris 687 . Visite à l'abbé Requin 688 . 690 Correspondance de Tamizey avec Paul de Faucher : A.D. Lot-et-Garonne, 16 J 13, correspondance d'érudits : Fonds Tamizey de Larroque. Paul de Faucher a publié sur l'histoire de la Provence et plus précisément du Comtat-Venaissin, de Bollène et de Carpentras dans la Revue de Provence, les Mémoires de l'Académie d'Aix-en-Provence et le Bulletin de la Société scientifique et littéraire des Basses-Alpes. Il est aussi l'auteur de plusieurs articles pour les Archives de la Société française des collectionneurs d'ex-libris et la revue de cette société. Les discours qu'il a prononcés pour la réception des nouveaux curés-doyens de Bollène, en 1890 et 1894 ont été imprimés. Il a publié, en dehors d'études historiques touchant l'époque moderne de la fin du XV e au XVIII e siècle, Auguste de Merles, de Valréas… ancien commandant du 2 e bataillon des mobiles de Vaucluse en 1870-1871, 1815-1886… (extr. de l'ouvrage Mobiles et mobilisés du Vaucluse en campagne) Avignon, F. Seguin, 1903, In-16, 22 p. Il a aussi édité de Morel (Pierre-Joseph), Les souvenirs de l'Aïeul, poésies fugitives et humoristiques, Carpentras, 1901, In-16. 691 Sur un promontoire rocheux, près de la route entre Carpentras et Apt, à une dizaine de km de N.-D. de Vie, siège au Haut Moyen âge de l'évêché de Carpentras, abritant des curiosités romaines et médiévales. 692 Gustave Barcilon est l'auteur de La magistrature et les décrets du 29 mars 1880, Avignon, Seguin frères, 1880-1881, 2 vol., in-18. 
Il est très probablement apparenté à Félicien-Jacques-Augustin Barcilon (1822-1892), né à Carpentras, avocat dans cette ville et conservateur militant, qui avait été député du Vaucluse de 1877 à 1878 et l'adversaire du Dr Poujade (voir 19 juin). -De plus, il existe un livre de raison de Charles Barcilon, de Carpentras, commencé le 1 er juillet 1700, mentionné par le Livre de raison de la famille de Fontainemarie, publié par Ph. Tamizey de Larroque, Agen, Impr. V ve Lamy, 1889, p. 142. -A.D. Lot-et-Garonne, 16 J 4, correspondance d'érudits : Fonds Tamizey de Larroque.

Baumes 700 , avec M. et M me Gabriel Pyriès 701 . 18, lundi. Déjeuner au Rocan chez le comte et la comtesse de Saint-Paulet, déjeuner que nous rendons le lendemain. 19, mardi. Déjeuner à l'Isle-sur-Sorgue 702 avec M. et M me de Saint-Paulet. Course à Vaucluse 703 . Visite d'adieu au marquis et à la marquise de Seguins et à M lle de Saint-Paulet.

707 Louis-Cyprien Poujade, né à Canet (Aveyron) en 1823, fit ses études de médecine à Paris et fut reçu docteur en 1855, et s'établit ensuite à Carpentras dont il devint conseiller municipal et où il se fit remarquer par une vive opposition à l'Empire. Nommé préfet du Vaucluse, le 6 septembre 1870, il fut élu le 8 février 1871, représentant du département à l'Assemblée nationale. Il donna sa démission de député avec ses collègues, quand une enquête fut ordonnée par l'assemblée sur les élections de Vaucluse. Maire de Carpentras, en 1872, membre puis président du conseil général, il se présenta à la députation sur un programme portant « que la France a soif de réforme libérale ». Élu député en 1876, contre Barcilon, légitimiste, il le resta jusqu'en 1877, siégeant dans les rangs de la gauche républicaine. Il fut encore élu de 1878 à 1885, reprenant sa place à l'Union républicaine et appuyant la politique scolaire et coloniale du gouvernement. Il échoua à l'élection au Sénat en 1883, battu par Naquet et ne se représenta pas aux élections législatives de 1885.
Il s'agit vraisemblablement de Léon-Gabriel Pélissier, professeur à la faculté des Lettres de Montpellier, grand historien et spécialiste des archives de l'Italie et de la France surtout à la fin du XV e et au début du XVI e siècle et éditeur de textes et de documents touchant les règnes de Charles VIII et de Louis XII. Sa bibliographie occupe 18 pages du Catalogue des imprimés de la Bibliothèque Nationale. Il est l'auteur notamment d'une Collection de textes inédits tirés des manuscrits de l'Inguimbertine, II. Un voyage du Pont-Saint-Esprit à Paris en 1658 (Cod. Inguimb. 447)…, Paris, A. Picard, s.d., 62 p. (extr. de la Revue des études historiques, mai-juin, juillet-août 1904). -A.D. Lot-et-Garonne, 16 J 21, correspondance d'érudits : Fonds Tamizey de Larroque.
710 À 4 km au sud de Laplume et à une dizaine de kilomètres donc au sud d'Agen sur l'actuelle D 931 : carte au 50 000 e , Marmande-Agen, 56, I.G.N., Paris, 2003.
723 Il s'agit probablement du fils de Jean-Timothée Vincens de Tapol, né à Fauillet, le 31 août 1830, qui était maire de la commune de Fauillet, à 4 km environ au sud de Gontaud [A.P. Baquier]. Voir 30 mars 1892. Ami du fils de Tamizey, Henri, déjà venu à Larroque avec Léaumont et Cordier.

(Brissaud, né à Puysserampion (Lot-et-Garonne) en 1854. Docteur en droit de la faculté de Bordeaux, en 1879, ayant soutenu une thèse sur la notion de cause dans les obligations conventionnelles en droit romain et en droit français. Il fut nommé en 1880, professeur de droit français à l'université de Berne et promu à l'ordinariat en 1881.
Il passa en janvier 1883 à la faculté de droit de Montpellier où il professait le cours d'histoire, puis de Toulouse en 1885. Membre de l'Académie de législation, de l'Académie des Sciences de Toulouse et de la Société archéologique du Midi. Il fut l'un des directeurs de la Revue générale de Droit et a signé de nombreux ouvrages : Andrieu (J.), op. cit., t. 1, p. 117-118. -Son fils Jacques a laissé le récit d'une visite à Larroque : « Une visite à Peiresc, écrit à Fauillet, le 25 janvier 1961 » dans Maisani (Claude), « Communication… sur Philippe Tamizey de Larroque » in Revue de l'Agenais, 87 e année, 1 er Bulletin trimestriel, janvier-mars 1961, p. 25-30. -A.D. Lot-et-Garonne, 16 J 7, correspondance d'érudits : Fonds Tamizey de Larroque.
732 Maurice Boisvert, né à Marmande en 1850, fils de l'ancien maire de Marmande qui fut, pendant 20 ans, membre du Conseil général de Lot-et-Garonne.

aller voir, (à peu de distance) le château de Castécu 733 dont il est si souvent question dans le livre de raison des Fontainemarie. Mon ami Boisvert m'a donné un autographe du général Vicomte de Rochambeau 734 et plusieurs haches de pierre, enrichissant ainsi doublement mes modestes collections. J'ai rangé les haches trouvées à Beaupuy et à Mauvezin 735 autour des objets préhistoriques trouvés dans le département de Vaucluse et qui m'ont été envoyés par le D r Poujade 736 . Cela fait un petit musée dans un petit coin de mon cabinet. Au retour de notre excursion à Beaupuy nous avons dîné chez M me de Pérès 737 .

735 environ au nord de Marmande et à 15 km environ au nord-ouest de Gontaud ; Mauvezin-sur-Gupie se trouve à 3 km au nord de Beaupuy sur l'actuelle D 115 : carte au 50 000 e , Marmande-Agen, 56, I.G.N., Paris, 2003.
737 Il s'agit d'Inès Brousteau, épouse d'Auguste Lesueur de Pérès, dont la fille Marie-Marthe était mariée à Lodoïs de La Vaissière. Elle était cousine issue de germains de Philippe Tamizey de Larroque via les Vivie, Bailhès, Paloque et d'Antraygues [A.P. Baquier]. La Chronique d'Isaac de Pérès… (1554-1611), a été publiée par A. Lesueur de Pérès…, avec le concours de Tamizey de Larroque, Faugère-Dubourg, J.-B. de Laffore et Ad. Magen, Agen, 1879.

Je viens de faire une rapide excursion en Périgord. Le 3, je suis allé déjeuner chez Paul de Boëry, à Soumensac 738 , où je n'étais pas revenu depuis 25 ans ; le soir, je suis allé au château de Fayolle où j'ai dîné et couché. J'en suis reparti le lendemain après déjeuner, enchanté de l'hospitalité des dames de Masmontet 739 et de mon jeune confrère en paléographie. J'ai rapporté de Fayolle, outre d'excellents souvenirs, des greffes des beaux rosiers de ces dames, et, si mon jardinier Chirol 740 , le héros de ma dernière plaquette, a la main heureuse, quelle amélioration dans mon jardin !

19 septembre
Règlement -orageux -avec mon fermier de Larroque. J'ai eu beau lui abandonner une cinquantaine de francs, il s'est montré très mécontent et il a déclaré qu'il se retirerait au bout des trois premières années et qu'il ne s'occuperait plus de plantations de vignes et de transports de terre. Que de désagréments j'entrevois ! Décidément il y a une fatalité sur

738 Ph. Tamizey de Larroque est apparenté aux Boëry par son premier mariage avec Nathalie de Boëry qui était déjà sa cousine par alliance, son oncle et sa tante, du côté de sa mère, c'est-à-dire Marie-Henriette Delmas de Grammont et Jacques-Philippe Delmas de Grammont (le général) avaient épousé respectivement Jean-Pierre de Boëry (le père de Jean et de Guy que Tamizey reçoit à plusieurs reprises à Larroque) et Anne-Marie de Boëry [A.P.
Baquier]. -Soumensac à 10 km environ au nord de Miramont-de-Guyenne et à 12 km à l'Est de Duras, sur l'actuelle D 313 : carte au 50 000 e , Marmande-Agen, 56, I.G.N., Paris, 2003.
739 Jean-Edmond de Boisserie de Masmontet a signé avec Aymard de Saint-Saud et R. de Manthe, Généalogie de Bideran, Périgord, Agenais, Quercy, Poitou, J. Castanet, Bergerac, 1896, in-8°, 238 p. Il est aussi l'auteur de Maisons-Laffitte et son château, L. Milly, 1902, In-16, 40 p. ; Rigaud ou Rigault de Granfont, du Marchet, des Baratous, du Mineur, de Lartigue, des Guignards… etc. en Guienne, aux bureaux de la Revue héraldique, Paris, 1906, in-8°, 21 p. et d'une Monographie du canton de Sigoulès [extr. de l'Histoire de l'arrondissement de Bergerac], s.l., 1907, in-8°, 66 p. -Correspondance de Tamizey avec E. de Boisserie de Masmontet : A.D. Lot-et-Garonne, 16 J 28, correspondance d'érudits : Fonds Tamizey de Larroque.
740

douloureuses et obsédantes préoccupations. Je suis ce que l'on appelle vulgairement un homme fini. Je n'aurais pas songé à revenir à mon livre de raison si un événement très grave et très malheureux ne m'obligeait, en quelque sorte, à en consigner le souvenir en la page suivante. Je puis dire que cet événement empoisonnera le reste d'une vie que je supplie Dieu d'abréger.

anniversaire de mon mariage. Depuis plusieurs jours, j'éprouvais un redoublement de tristesse, un malaise qui allait jusqu'à l'accablement. Je redoutais vaguement quelque mystérieux malheur. Je me sentais menacé par un invisible ennemi et je cherchais vainement à chasser mes idées noires. Combien mes pressentiments devaient être justifiés ! Hier matin, à 4 heures 1/2, mon fils et moi nous sommes réveillés par notre servante qui nous annonce qu'on sonne le tocsin à Gontaud. Nous nous levons en toute hâte et nous voyons bientôt des flammes qui s'élèvent à une grande hauteur, accompagnées d'une épaisse fumée qui obscurcit une partie du ciel. Nous nous demandons avec anxiété quelle est la maison qui brûle ainsi. Au bout de quelques minutes Jean Léaumont 745 , qui était venu sur sa bicyclette jusqu'au bas de la côte, arrive tout ému et nous apprend que le feu est chez nous et que déjà tout est presque dévoré. Je ne décrirai pas notre saisissement. Pendant que mon fils s'élance vers Gontaud, je reste affaissé comme le boeuf frappé en plein front d'un

, ranimé, comme par un coup de fouet du désespoir, à la pensée de ma pauvre maison natale anéantie, cette maison si remplie de souvenirs sacrés, à la pensée de mes livres bien-aimés réduits en cendres, je n'avais pleuré comme un enfant. Ce flot de larmes m'a sauvé. J'ai pu respirer plus librement et faire appel à toute mon énergie. Des détails qui m'ont été donnés par divers témoins oculaires de l'immense désastre, il résulte que le feu a éclaté vers quatre heures, que diverses circonstances n'ont pas permis

, ont été la proie des flammes. Je veux indiquer ici peu à peu les trésors que contenait l'une des plus belles bibliothèques de toute la région, une bibliothèque à laquelle mes savants amis avaient donné grande célébrité et que j'avais mis plus d'un demi-siècle à former, on ne saura jamais avec quel soin et quel amour. Tout cela perdu en quelques instants !!

745 Voir 30 mars 1892 : Sur la famille de Léaumont, connue depuis le début du XIII e siècle et établie dans le pays de Lomagne, en Guyenne et les contrées environnantes : De La Chenaye-Desbois et Badier, Dictionnaire de la noblesse, t. XI, Schlesinger, Paris, 1867, p. 818-825.
Aujourd'hui je signalerai seulement la disparition de mes papiers de famille, d'un grand nombre d'autographes, d'un nombre encore plus considérable de copies de documents, de notes recueillies un peu partout depuis que j'ai commencé à travailler, c'est-à-dire, depuis mon extrême jeunesse. Parmi les manuscrits d'une grande valeur, je citerai les Mémoires du marquis de Saint-Hilaire 746 , lieutenant-général, d'après le texte de la bibliothèque du Louvre 747 , beaucoup plus complet que le texte des 4 volumes in-12 imprimés en 1766, et les notes marginales dont le bibliographe J.F. Adry (de l'Oratoire) 748 avait enrichi un exemplaire de la Bibliothèque française, de la Croix du Maine et du Verdier 749 .

746 Il s'agit probablement du texte qui fut publié sous le titre de Mémoires de M. de S.H.*** [Armand de Mormès de Saint-Hilaire] contenant ce qui s'est passé de plus considérable en France depuis le décès du cardinal de Mazarin jusqu'à la mort de Louis XIV, Arkstée et Merkus, Amsterdam, 1766, 4 vol. in-12.
747 Par le décret du 27 juillet 1793, la Convention ordonna la création dans le Palais du Louvre du Muséum de la République ou Musée central des arts, ouvert au public le 8 novembre 1793 et doublé, à l'instar du British Museum de Londres, d'une bibliothèque. Mais les lieux où ses collections étaient installées pendant la guerre civile de 1871, l'aile droite du Louvre restauré sous le Second Empire, entre les pavillons de Rohan et de Marsan, ainsi que, de l'autre côté, le pavillon de Flore, souffrirent beaucoup de l'incendie allumé le 24 mai en vue de détruire tout le monument.
748 Père Jean-Félicissime Adry, ancien élève du collège de Juilly, auteur notamment du Catalogue chronologique des imprimeurs et libraires du Roy. Cet ouvrage a été publié par M. Le Roux de Lincy, Paris, impr. de Guiraudet et Jouaust, 1849 (extr. du Journal de l'Amateur de livres, année 1849).
749 Abbé Augustin Ingold (1852-…), né à Cernay (Haut-Rhin) et mort à
Laurens, leur vie, leur oeuvre). Il fut conservateur du Musée d'Agen dont il aménagea les salles du rez-de-chaussée et du premier étage. Sa fille avait épousé le docteur Villeneuve, adjoint au maire de Moissac en
757 La France de Bordeaux et du Sud-Ouest, sous-titrée « édition régionale » puis « journal absolument indépendant », puis « grand journal républicain », quotidien, publié à Bordeaux entre le 26 mai 1887 et le 12 septembre 1944, faisant paraître un supplément littéraire illustré, puis un supplément agricole du dimanche et enfin un supplément illustré du dimanche.
758 À 1 km. environ à l'ouest de Gontaud. La propriété de Tamizey à Larroque se trouve dans son ressort.
Paris, à partir de 1857.

reconnaissance 762 , de M. Tholin 763 , de mon cousin Joseph de

760 Voir 18 février 1893. -Delisle (Léopold) 1826-1910. Né à Valognes (Manche), fils d'un médecin, élève des Lassaliens puis du collège municipal où l'abbé Tollemer, grand érudit et historien -ce fut le premier éditeur du fameux Journal du sire de Gouberville, du début du XVI e siècle -, était principal. Il fut initié à l'histoire par un hobereau ancien émigré, un des fondateurs de la Société des Antiquaires de Normandie et correspondant de l'Académie des inscriptions et belles-lettres, Charles Duhérisson de Gerville. Ce dernier révèla à L. Delisle l'existence de l'École des Chartes dont il sortit diplômé fin 1845. Médiéviste, deux années de suite, il obtint le prix Gobert, décerné par l'Académie française au meilleur ouvrage d'histoire.
Il songeait à diriger les Archives de la Seine-Maritime mais l'un de ses maîtres de l'École des Chartes, Benjamin Guérard, lui enjoint de ne pas quitter Paris. En 1852, en effet, il est nommé à la Bibliothèque Nationale, au Cabinet des Manuscrits dont Guérard vient d'être nommé conservateur -ce dernier reste à ce poste jusqu'en 1854, date à laquelle il est remplacé par Natalis de Wailly qui, lui-même, se retira en 1870. L. Delisle prit alors également la charge de la bibliothèque de l'École des Chartes qu'il conserva durant plus de 50 ans jusqu'en 1905. À 31 ans, il fut élu, fin 1857 (l'année de son mariage avec la fille de l'indianiste Burnouf), à l'Académie des inscriptions et belles-lettres. Bibliothécaire de la Bibliothèque Nationale depuis 1866, il résista, en mai 1871, aux agents de la Commune qui le révoquèrent. Nommé, en remplacement de Natalis de Wailly, conservateur au département des Manuscrits, il a fait connaître l'histoire de ce dépôt dans le Cabinet des manuscrits de la Bibliothèque nationale (1868-1881, 3 vol.). Par ses mesures, il assura le rapide inventaire de tous les anciens fonds. Il resta aux Manuscrits jusqu'en 1874, étant nommé alors administrateur général de la B.N. en remplacement de Jules Taschereau. Dans ces fonctions, il assura, d'une part, le classement et le catalogage des collections. C'est à lui qu'est due l'impression du Catalogue général, commencée en 1897. D'autre part, il parvint à récupérer et à faire entrer au département des Manuscrits les documents volés dans les dépôts publics de France par Libri et Barrois et passés en Angleterre. Le 21 février 1905, il apprit par le Journal Officiel sa mise à la retraite : sa femme ne s'en remit pas et en mourut. Il se réfugia alors dans l'étude de sa province natale et des Actes d'Henri II, roi d'Angleterre et duc de Normandie. Depuis 1897, en qualité de conservateur du Musée Condé, il disposait d'un appartement à Chantilly où il mourut subitement le 22 juillet 1910 ; peu avant, il avait confié à son visiteur qui lui parlait de la gloire « qui vient sans qu'on l'ait cherché » : « Je n'ai cherché que la vérité et le bon résultat des entreprises qui m'étaient confiées. » Le catalogue de ses publications dressé en 1902 par Paul Lacombe (avec supplément en 1911) comprend près de 20 000 articles : L'École Nationale des Chartes, histoire de l'École depuis 1821, Gérard Klopp éditeur, 1997. -J. Balteau, M. Barroux, M. Prévost, Roman d'Amat et alii, Dictionnaire de biographie française, Paris, 1933-1986. Voir le Discours de Léopold Delisle, président de la section d'histoire et de philologie, devant le Comité des Travaux historiques et scientifiques, prononcé le 6 juin 1898 dans Philippe Tamizey de Larroque (30 décembre 1828-26 mai 1898), Société d'agriculture, sciences et arts d'Agen, Imprimerie et lithographie agenaises, Agen, 1898, p. 5-24 : il s'agit d'un hommage à l'oeuvre de Tamizey de Larroque et de l'évocation des relations d'amitié et de travail entre les deux hommes, nouées au département des Manuscrits de la B.N. au tout début des recherches de Tamizey. -A.D. Lot-et-Garonne, 16 J 11, correspondance d'érudits : Fonds Tamizey de Larroque.
761 Voir 2 juillet 1894. -L'abbé Louis Bertrand a publié Les hommes d'Église de la famille de Poudenx, Impr. de G. Lescher-Montoué, Pau, 1902, in-8°, 19 p. (extr. des Études historiques et religieuses du diocèse de Bayonne). -A.D. Lot-et-Garonne, 16 J 6, correspondance d'érudits : Fonds Tamizey de Larroque.
764 Joseph de Vivie de Régie, né en 1841, à « Costy » (encore propriété de la famille de Vivie de Régie au début du XXI e siècle). Il fut magistrat à Mirande (Gers) et décéda à « Lamouthe », le 11 juillet 1931. Il eut 5 enfants de son mariage avec Mademoiselle de Madaillan dont le grand-père était Antoine de Cornier. C'est par les Madaillan et les Delmas de Grammont que les familles Vivie et Tamizey de Larroque sont apparentées [A.P. Baquier]. -A.D. Lot-et-Garonne, 16 J 27, correspondance d'érudits : Fonds Tamizey de Larroque.

Parmi les plus empressés donateurs je nomme M. Castaigne 766 , censeur du Lycée de Bar-le-Duc, M. Dejeb, maître de conférences à la Faculté des lettres de Paris, M. l'abbé Dubarat 767 , aumônier du Lycée de Pau, les Pères Chérot 768 et Sommervogel 769 .

766 Eusèbe-Joseph Castaigne, signalé comme proviseur du lycée de Moulins dans le Catalogue des imprimés de la Bibliothèque Nationale, est l'auteur de vers pour le banquet de la Saint-Charlemagne (au Lycée Charlemagne) en 1879 ; d'un discours de distribution des prix au Lycée du Mans, le 31 juillet 1893 ainsi que de Petites études littéraires avec deux lettres de M. Victorien Sardou, Picard, 1888, in-12, VI-129 p. et de Trois fabulistes, Ésope, Phèdre et La Fontaine, étude bibliographique et littéraire, Paris, A. Picard, 1889, in-16, 29 p.
767 L'abbé Victor Dubarat (1855-1939), historien de Bayonne, du Béarn et du Pays basque, dirigeait la revue Études historiques et religieuses du diocèse de Bayonne. Il a publié Lettres de M. Tamizey de Larroque [à M. l'abbé Dubarat, publiées par ce dernier], précédées d'une notice par M. Léopold Delisle, Impr. de Vignancour, Pau, 1898 (extr. factice des Études historiques et religieuses du diocèse de Bayonne). Voir 31 mai 1897. -A.D. Lot-et-Garonne, 16 J 12, correspondance d'érudits : Fonds Tamizey de Larroque.
768 Henri Chérot (1855-1906), éditeur des oeuvres ou de la correspondance d'hommes et de femmes d'Église du XVII e siècle, du Père Le Moyne, de Bossuet, de Jeanne de Lestonnac et de Bourdaloue. Il est l'auteur d'articles et d'études sur l'éducation du Grand Condé, sur Anne de Caumont (1574-1642), duchesse de Fronsac et fondatrice des Filles de St-Thomas à Paris, sur les carmélites de Compiègne, martyres de la Révolution. Il a aussi signé un texte sur la conversion d'Augustin Thierry (10 novembre 1895) à propos du Centenaire de sa naissance, un hommage à Carlos Sommervogel et enfin Un maître de l'érudition française, Philippe Tamizey de Larroque 1828-1898, l'homme, l'érudit, Paris, Impr. de Dumoulin, 1898, in-8°, 32 p. (extr. des Études). -A.D. Lot-et-Garonne, 16 J 9, correspondance d'érudits : Fonds Tamizey de Larroque.
769 Carlos Sommervogel (1834-1902), né à Strasbourg. Entré en 1853 dans la compagnie de Jésus. Surveillant dans les collèges de Vaugirard (1856-1865) et d'Amiens (1865-1867). Il fut attaché à partir de cette date à la revue jésuite Études, qu'il ne quitta plus. C'est à Vaugirard que se révéla sa vocation bibliographique par les rectifications et les additions qu'il fournit aux PP. Aloys et Augustin de Backer. En 1882 seulement, on le laissa se consacrer à la refonte et au perfectionnement de leur travail.

note en marge : J'ajoute à la liste de mes bienfaiteurs les noms de MM.
Rodolph Reuss 770 , bibliothécaire de la ville de Strasbourg, Ristelhuber 771 , Antoine Thomas 772 , Léon Pélissier 773 , Paul Meyer 774 , Eugène de Rozière 775 ,

Ainsi la Bibliothèque de la Compagnie de Jésus, rédigée sous sa direction, fut-elle publiée entre 1890 et 1900, en 9 volumes auxquels s'ajouta, en 1904, un volume de tables. En plus de sa collaboration à Études et à divers recueils, il a signé la table méthodique des « Mémoires de Trévoux » (1864-1865) ; Comment on servait autrefois : le maréchal de Bellefonds (1872) ; le Dictionnaire des ouvrages anonymes et pseudonymes de la Compagnie de Jésus (1884) et la « Bibliotheca mariana » de la Compagnie de Jésus (1885). -A.D. Lot-et-Garonne, 16 J 25, correspondance d'érudits : Fonds Tamizey de Larroque.

770 Rodolph Reuss (1841-1924), fils d'Édouard Reuss, le grand théologien protestant et éditeur des oeuvres de Calvin. Il soutint un Doctorat de philosophie à l'université de Göttingen en 1864. Il devint ensuite professeur d'histoire au séminaire protestant, professeur de littérature allemande au gymnase de Strasbourg, puis bibliothécaire de la ville en 1873. En 1896, il fut nommé professeur à l'École des hautes études de Paris et, en 1902, il fut élu correspondant de l'Académie des sciences morales et politiques. Il est l'auteur de Ernest de Mansfeld pendant la guerre de Bohême (en allemand) ; La destruction du protestantisme en Bohême (1868) ; La sorcellerie au XVI e et XVII e siècle, particulièrement en Alsace (1871) ; L'Alsace au XVII e siècle (1897-1898) qui a obtenu le grand prix Gobert ; L'Histoire de l'Alsace (1912) ; La France et l'Alsace à travers l'histoire (1915) ; L'Histoire de Strasbourg depuis ses origines jusqu'à nos jours (1922). -A.D. Lot-et-Garonne, 16 J 23, correspondance d'érudits : Fonds Tamizey de Larroque.
771 Paul Ristelhüber, né à Strasbourg en 1834, fils d'un médecin réputé. Bibliophile, collaborateur à divers recueils littéraires alsaciens et auteur notamment de Bouquet de lieder, traduit de l'allemand, Strasbourg, 1856 ; Intermezzo, traduit de H. Heine en vers français, Strasbourg, 1857 ; Faust d'après Goethe, adapté à la scène française, 1861 ; Faust dans l'histoire et dans la légende, essai sur l'humanisme superstitieux du XVI e siècle, 1863 ; avec M. Baquet : sur les Archives de la ville de Strasbourg (1866). Il a aussi édité, en 1862, le Liber vagatorum ou Livre des gueux du XVI e siècle (voir Geremek (Bronislaw), Les fils de Caïn, Flammarion, 1991). Quelques-unes de ses publications sont remarquables du point de vue typographique. -A.D. Lot-et-Garonne, 16 J 24, correspondance d'érudits : Fonds Tamizey de Larroque.
772 Antoine Thomas (1857-1935). Professeur de philologie romane à la Faculté des lettres de Paris, il a publié de savantes études à la fois historiques et philologiques : Francesco de Barberino (1883) ; Essais et nouveaux Essais de philologie française (1897-1904) ; Mélanges d'étymologie française (1902). Il a collaboré avec Darmsteter et Hatzfeld à la rédaction du Dictionnaire général de la langue française du commencement du XVII e siècle jusqu'à nos jours (1889-1900). Élu en 1904 à l'Académie des Inscriptions et belles-lettres. Il fut l'éditeur scientifique de la revue des Annales du Midi à partir de sa fondation en 1889. -A.D. Lot-et-Garonne, 16 J 26, correspondance d'érudits : Fonds Tamizey de Larroque.
773 Voir 20 juin 1894.
774 Paul Meyer (1840-1917), né à Paris et mort à Saint-Mandé, ancien élève de l'École des chartes (1858), archiviste à Tarascon (1861), puis aux Archives nationales -alors impériales -(1866), nommé suppléant (1869) puis titulaire (1882) à la chaire des langues romanes à l'École des chartes. Il avait fondé, en 1872, la revue Romania avec G. Paris ; d'autre part, il prit, en 1876, la succession d'E. Quinet au Collège de France, dans la chaire des langues et littératures méridionales de l'Europe. En 1882, il devint aussi directeur de l'École des chartes ; en 1884, il fut élu membre de l'Académie des inscriptions et belles-lettres. Ses travaux sur les langues romanes sont d'une absolue rigueur de méthode, notamment : Note sur la métrique du « Chant de Sainte-Eulalie » (1861) ; Recherches sur les auteurs de « la Chanson de la croisade albigeoise » (1866), Les derniers troubadours de la Provence (1872), La Chanson de la croisade contre les Albigeois commencée par Guillaume de Tudèle (1875-1879), Daurel et Belon, chanson de geste provençale (1881), Alexandre le Grand dans la littérature française du Moyen âge, 2 vol. (1886), Histoire de Guillaume le Maréchal… régent d'Angleterre, 3 vol. (1891-1894-1901), Le chansonnier français de Saint-Germain-des-Prés (1892), Documents linguistiques du Midi de la France (1909). -A.D. Lot-et-Garonne, 16 J 19, correspondance d'érudits : Fonds Tamizey de Larroque.

volumes in-4° de l'Histoire de Provence de Papon 777 .

note en marge : Mention particulière doit être faite d'une belle et éloquente lettre de M lle Louise de Guérines (M me de Marcey) 778 .

-------------------
PERTE IRRÉPARABLE. -Divers journaux de Paris et de la province ont annoncé que le 9 courant, la maison de M. Tamizey de la Roque, le savant aimable et distingué bien connu dans notre ville, avait
(Paul de Faucher 779 ). (Journal du Comtat 780 -4 août)
---------------
Dans le Bulletin de la Société bibliographique d'août, le marquis de Beaucourt 781 a inséré (p. 196) cette note : Nous apprenons avec le plus vif regret que notre excellent et si érudit confrère, M. Tamizey de Larroque, correspondant de l'Institut, vient d'être victime d'une catastrophe qui ne saurait laisser indifférent aucun de ses nombreux amis. La bibliothèque qu'il possédait à Gontaud vient d'être détruite par un incendie. Il lui reste heureusement les livres du Pavillon Peiresc qui lui tiennent compagnie dans sa studieuse demeure. Mais le coup n'en est pas moins terrible pour ce vaillant travailleur, et nous lui envoyons l'expression de nos chaleureuses sympathies.
Ont encore mentionné mon désastre avec des regrets et des éloges : A. Chuquet, dans la Chronique de la Revue critique 782 du (en

779 Voir 10 et 30 juin 1894.
780 Journal du Comtat, politique, littéraire et commercial, publié à Carpentras, entre 1893 et 1901.
781 Gaston Du Fresne, marquis de Beaucourt, né à Paris en 1833, mort au château de Morainville en 1902. Descendant d'une famille alliée au célèbre érudit Du Cange. Il fréquenta, pour acquérir une formation critique, en auditeur libre, l'École des chartes. Il consacra de longues années à établir sur des bases solides sa grande Histoire de Charles VII (1881-1892). En 1866, il créa la Revue des Questions historiques. En 1868, il fonda la Société bibliographique. Le Polybiblion, organe littéraire de la Société (1868) fut aussi sa création. Outre de nombreux articles, il a publié : Chronique de Mathieu d'Escouchy (1863-1864) ; Captivité et derniers moments de Louis XVI (1892) ; Lettres de Marie-Antoinette (1895-1896).
Ces deux derniers recueils de textes ont été faits sous les auspices de la « Société d'histoire contemporaine », autre création de Beaucourt. -A.D. Lot-et-Garonne, 16 J 5, correspondance d'érudits : Fonds Tamizey de Larroque.
782 Revue critique d'histoire et de littérature, fondée en 1866 par Paul Meyer, Charles Morel, Gaston Paris et Hermann Zotenberg pour faire connaître les principales productions de l'érudition française et

mémoires relatifs à l'histoire de France, 6°) en ouvrages relatifs à l'histoire et à la littérature de la région méridionale, 7°) en monographies d'hommes célèbres et enfin 8°) en mélanges.
BIOGRAPHIE. -1°) -Le Dictionnaire de Moréri 788 , édition de 1759, 10 vol. in-f°. Le Dictionnaire de Bayle 789 , édition de 1740, 5 vol. in-f°.

788 Louis Moréri, né à Bargemon, en Provence et mort à Paris en 1680. Il entra dans les ordres et prêcha longtemps avec succès à Lyon, puis se consacra à son Grand Dictionnaire historique ou Mélange curieux de l'histoire sacrée et profane (1674), compilation qui eut beaucoup de succès.
789
792 Il s'agit de la Nouvelle Biographie Générale (d'abord intitulée Nouvelle Biographie Universelle), publ. par Firmin-Didot frères, sous la direction du D r Ferdinand Hoefer, Firmin-Didot, Paris, 1852-1866, 46 vol.
793 Il s'agit de dom Louis-Mayeul Chaudon (1737-1817), né à Valensoles (Alpes de Haute-Provence). Son ouvrage le plus connu est le Dictionnaire historique qu'il publia en 1766 et qui fut plusieurs fois réimprimé. En 1804, il en publia la 8 e édition avec Delandine, qui eut part surtout aux articles concernant les hommes de la Révolution. L'édition remaniée et augmentée, donnée par Prudhomme (1810-1812) du consentement de Chaudon était peu estimée. Le dictionnaire de Chaudon a été largement mis à contribution par Feller. Malgré les erreurs qui l'émaillent, il a connu un grand succès à cause de la modération et de l'impartialité relative des jugements qu'il portait. Parmi les autres écrits de dom Chaudon : Dictionnaire anti-philosophique (1767-1769) ; Éléments de l'histoire ecclésiastique (1785, 2 vol.) etc. Il a aussi édité le Dictionnaire historique des auteurs ecclésiastiques avec le catalogue de leurs ouvrages (1764, 4 vol.).
Maurice Campagne fit la guerre de 1870 comme capitaine de mobiles, puis devint avocat et entra dans l'Administration dont il démissionna en 1878, alors qu'il était sous-préfet de Rochechouart. Retiré à Escages, il s'adonna à l'histoire et à la généalogie
collaboration bibliographique de Charles Nodier.
794 Marie-Nicolas Bouillet (1798-1864), ancien élève de l'École Normale Supérieure, professeur, proviseur et inspecteur de l'Académie de Paris, s'est fait connaître par des travaux de lexicographie notamment par ce Dictionnaire universel d'histoire et de géographie dont la première édition date de 1842.

Le Dictionnaire de Bouillet 794 (l'édition à laquelle j'avais fourni tant de corrections et additions). Le Dictionnaire de Bachelet et Dezobry 795 . Le Dictionnaire du dept. de Maine-et-Loire par Célestin Port 796 . 3 vol. gr. In-8°. La Vie des hommes illustres de Plutarque, traduction de Ricard 797 . 3 vol. In-8° compacts. note en marge : C'est l'occasion d'indiquer que je possédais presque tous les auteurs de l'antiquité grecque et romaine, le plus souvent avec texte et traduction. J'avais presque toute la collection des auteurs latins de Nisard 798 et un grand nombre de
795 Théodore Bachelet (1820-1879) et Charles Dezobry (1798-1871) ont signé le Dictionnaire général de biographie et d'histoire, de mythologie, de géographie ancienne et moderne, Delagrave, Paris, 1857-1861, 2 vol. in- 796 CélestinPort (1828Port ( -1901) ) a publié le Dictionnaire historique, géographique et biographique deMaine-et-Loire et de l'ancienne province d'Anjou, J.-B. Dumoulin, Paris et Lachèse et Dolbeau, Angers, 1874-1878, 3 vol., pl., gr. -A.D. Lot-et-Garonne, 16 J 22, correspondance d'érudits : Fonds Tamizey de Larroque. recueils biographiques. L'avant-dernière édition du Vapereau. Le Dictionnaire bio-bibliographique publié par le Comte de Gubernatis (où j'ai une notice). Beaucoup de biographies locales, telles que le volume de MM. Arbellot et Du Beys 803 . Sur les hommes célèbres du Limousin, le volume de M. de Richemont 804 sur ceux de l'Aunis et de la Saintonge, etc. Les 10 vol. de B. Hauréau 805 sur l'Histoire littéraire du 1808 et quand Napoléon créa, en 1810, les écoles de marine, il se présenta au concours ; il fut admis, le 4 juin 1811. Nommé aspirant en février 1815, en mars suivant il forma à Lyon une compagnie d'aspirants de marine qui prit part à la défense de Paris. Victime de l'épuration royale, il fut rayé des cadres en mai 1817. Il vint alors à Paris où il vécut chichement ; il travailla chez un tireur d'or, puis chez un passementier ; il donna aussi des leçons à des élèves étrangers à 5 fr le cachet ; il fit de la peinture et un peu de journalisme au Fureteur et au Musée des familles et publia, entre 1818 et 1829 des essais historiques et artistiques. Chroniqueur au Constitutionnel, il fut envoyé par ce journal suivre l'expédition d'Alger et, à son retour, il est, le 1 er juillet 1831, attaché à la section historique du ministère de la Marine, comme conservateur des archives. Il continue alors à publier des ouvrages de caractère plus nettement historique. Après une mission en Turquie et en Grèce, en 1841, il consacre tout son temps à préparer le Glossaire nautique (1850) et surtout au Dictionnaire critique de biographie et d'histoire (1867) que possédait Tamizey. Il est aussi l'auteur d'Abraham Du Quesne et la marine de son temps (1873). Ses Souvenirs d'un homme de lettres ont été publiés en 1877 par P. Margry. Il était officier de la légion d'honneur depuis le 24 mai 1846. Voir Bouvet (Ch.), « Un historiographe de la marine, Augustin Jal » dans La Revue maritime, juillet 1926, p. 20-40. -Correspondance de Tamizey avec A. Jal : A.D. Lot-et-Garonne, 16 J 16, correspondance d'érudits : Fonds Tamizey de Larroque. 803 L'abbé François Arbellot (1816-1900) et Auguste du Boys [confusion probablement avec Charles Beys (1610-1659), poète et dramaturge, ami de Tristan l'Hermite, St-Amant, Scarron et surtout Colletet] sont les auteurs de la Biographie des hommes illustres de l'ancienne province du Limousin, Impr. Ardillier Fils, Limoges, 1854. 804 Il s'agit vraisemblablement de Louis-Marie Meschinet de Richemond qui, avec Hervé Feuilleret, a publié la Biographie de la Charente inférieure (Aunis et Saintonge), L. Clouzot, Niort, 1875-1877, 2 vol., in-16. -Correspondance de Tamizey avec A. de Richemont : A.D. Lot-et-Garonne, 16 J 24, correspondance d'érudits : Fonds Tamizey de Larroque. 805 Jean-Barthélemy Hauréau (1812-1896), Après ses études secondaires à Louis-le-Grand, il se lance dans le journalisme (1832) ; il collabore à la Tribune et au Journal du peuple. 
En 1837, il est rédacteur en chef du Courrier de la Sarthe et, en décembre 1840, il est nommé bibliothécaire adjoint à la bibliothèque municipale du Mans. Il met alors en chantier son Histoire littéraire du Maine, 1842-1852, 4 vol. (2e éd., 1870-1876, 10 vol.). Il est destitué en 1845 pour avoir approuvé le discours adressé au duc de Nemours par son ami Trouvé-Chauvel. Il vient alors à Paris et, trois […]
807 Charles Perrault (1628-1703), l'auteur des célèbres Contes de ma mère l'Oye (1691), engagé dans la Querelle des Anciens et des Modernes en faveur des Modernes, a publié Les Hommes illustres qui ont paru en France pendant le XVIIe siècle (1697-1701).
808 […]
Mémoires de Niceron 810 et Bibliothèque de l'abbé Goujet 811 (près de 100 volumes en tout). Histoire littéraire de la France par les Bénédictins et par l'Académie des Inscriptions, 30 vol. in-4°. La Bibliothèque des ouvrages de la Compagnie de Jésus par les PP. de Backer et Sommervogel. 3 in-f°. La France littéraire de Quérard 812, plus 2 vol. du recueil intitulé : Le Quérard. La France littéraire au XVe siècle par Gustave Brunet 813.
813 Gustave Brunet (1807-1896), né et mort à Bordeaux. On lui doit des traductions annotées, parmi lesquelles il faut citer celles de la Légende dorée de J. de Voragine (1843), des Évangiles apocryphes (1849) ; des éditions d'ouvrages et un grand nombre de travaux : Dictionnaire de bibliographie catholique (1859), la Papesse Jeanne (1862) ; La France littéraire au XVe siècle (1865). Sous le pseudonyme de Philomneste Junior, il a publié : les Fous littéraires (1880) ; La Bibliomanie (1878 à 1885) ; Livres perdus sur les livres demeurés introuvables (1882) notamment. -A.D. Lot-et-Garonne, 16 J 7, correspondance d'érudits : Fonds Tamizey de Larroque.
[Le Journal des Savants 814 (collection] petit format, XVIIe siècle et XVIIIe.) Le Dictionnaire des Anonymes de Barbier 815. Les Supercheries littéraires de Quérard. Édition Daffis (pour ces deux derniers ouvrages : signalés par une accolade). Le Recueil d'Oettinger 816 ; le Répertoire de l'abbé Ulysse Chevalier 817.
814 C'est le plus ancien recueil littéraire de la France. Par privilège du 8 août 1664, Denis de Sallo, conseiller au Parlement de Paris, obtint la permission de publier un Journal des sçavans, dont le premier numéro parut en janvier 1665. Il devait contenir le titre et l'analyse des ouvrages nouveaux, des notices sur les écrivains célèbres, des renseignements sur les découvertes scientifiques, les décisions des tribunaux ecclésiastiques et séculiers, les censures des universités. Sallo, aidé notamment par Bourzéis, Gomberville et Chapelain, porta son recueil à une quasi-perfection. La sévérité de ses jugements, qui suscita des ennemis au Journal, le fit supprimer momentanément ; l'abbé Gallois reprit la publication (1666-1674), d'une manière intermittente. En 1674, l'abbé Laroque prit sa succession. Le président Cousin donna au Journal des Savants plus d'autorité encore. En 1702, il fut transformé par Pontchartrain qui forma un comité de direction. Il cessa de paraître en 1792. Une tentative de résurrection en l'an V n'eut pas de suites. En 1816, il fut réorganisé sous la direction du garde des sceaux. Les membres, divisés en assistants et en auteurs, se recrutaient par élection. Il continue à paraître au début du XXIe siècle sous la direction de l'Académie des inscriptions.
815 [Antoine-Alexandre Barbier (1765-1825) …] de l'École Normale Supérieure, il fut chargé de recueillir pour l'État les livres et objets d'art provenant des établissements que la Révolution avait supprimés.
Il fut bibliothécaire du Directoire, du Conseil d'État et de Napoléon 1 er , et, sous la Restauration, adminstrateur des bibliothèques de la couronne jusqu'en 1822. Son ouvrage capital est ce Dictionnaire des anonymes et pseudonymes (1806-1808). 816 Oettinger (Édouard-Marie), littérateur et bibliographe allemand, né à Breslau d'une famille israélite ruinée par la guerre. Il acheva ses études à l'université de Vienne et entama une carrière mouvementée et vagabonde de journaliste satirique à Berlin, Vienne, Münich, Hambourg, Stuttgart, Mayence, en Suisse, puis il s'installa à Paris, en 1851, d'où il alla se réfugier à Bruxelles en 1852. Il est aussi l'auteur de romans, de pièces de théâtre et de poésies. Ses principaux travaux bibliographiques : Archives historiques, Karlsruhe, 1841 ; Bibliotheca Schahiladii, Leipzig, 1844 ; Iconographia Mariana, Leipzig, 1852 ; Bibliographie biographique, Leipzig, 1850, gr. in-8 ; 2 e éd. avec Supplément, Bruxelles, 1854 [c'est peut-être l'ouvrage que possèdait Tamizey] ; Le Moniteur des Dates, Dresde, 1866 et suiv., in-4. 817 qui ont soulevé d'âpres polémiques, sont des modèles de critique Le Dictionnaire de bibliographie universelle de F. Denis 819 , de Martenne 820 , etc.Le Catalogue de la Bibliothèque nationale. Histoire de France ; le Cabinet des manuscrits et tous les autres ouvrages de Léopold Delisle relatifs à la Bibliothèque Nationale.Les ouvrages de Madden 821 , de P. Lacroix, d'Émile Picot, de hagiographique. -A.D. Lot-et-Garonne, 16 J 9, correspondance d'érudits : Fonds Tamizey de Larroque.818Étienne-Gabriel Peignot (1767[START_REF] Quicherat | né et mort à Paris. Élève de l'École des Chartes en 1834[END_REF], né à Arc-en-Barrois, mort à Dijon. Il fut successivement avocat à Besançon, garde de Louis XVI, inspecteur de la librairie (1813), inspecteur d'académie et proviseur du collège de Dijon (1815). Il a composé une grande quantité de petits traités traitant de particularités piquantes et peu connues : Dictionnaire raisonné de bibliologie (1802-1804) ; Essai de curiosités bibliographiques (1804) continué par les Variétés, notices et raretés bibliographiques des principaux condamnés au feu, supprimés ou censurés (1806) ; Répertoire bibliographique universel (1812) ; Documents historiques et détails curieux sur les dépenses de Louis XIV (1829) ; Recherches sur les autographes (1836) notamment. Une série de catalogues de bibliothèques du siècle dernier et de notre siècle (plus de 200 volumes in-8° et quelques-uns in-4°). La Bibliographie Oratorienne de l'Abbé Ingold. La Bibliographie bénédictine par [laissé en blanc] 822 Bibliographie de l'Histoire de France par Gabriel Monod. La collection du Bulletin du Bouquiniste 823 , du Bulletin du Bibliophile, du Polybiblion, de la Correspondance littéraire 824 (où j'ai fait mes premières armes en 1856). L'Athenaeum 825 (4 vol. in-4°). La Bibliographie de la Gaule par Ruelle . manuscrits, et conservateur titulaire en 1837. Il a gardé ces fonctions jusqu'en 1866. Sir Fr. Madden, membre de la Société des Antiquaires de Londres, fut créé chevalier de l'ordre de Hanovre par Guillaume IV, en 1832. 
Ses principaux travaux ont trait aux premiers siècles de la littérature anglaises, dont il remit plusieurs monuments en lumière, notamment : Hevelock le Danois [Havelock the Dane] (1828), chronique rimée du XIII e siècle, imprimée pour le club Roxburghe et accompagnée d'un glossaire ; Dépenses privées de Marie Stuart [Privy purse expenses of the Queen Mary] (1831) ; William and the Werwolf (1832) ; Ornements tirés des manuscrits et des premiers livres imprimés [Illuminated ornaments] (1833, in-4) ; Gesta Romanorum (1838) ; Sir Gawayne (1839), collection d'anciennes légendes anglaises et écossaises sur ce chevalier ; Layamon's Brut (1847, 3 vol.), paraphrase poétique du poème de Wace, traduite du saxon avec notes et glossaire ; Paléographie universelle [Universal palaeography] (1850, 2 vol.), version de l'ouvrage français de Silvestre ; enfin La Sainte Bible [The Holy Bible] (1850, 4 vol. in-4), éditée d'après la version de Wycliff et contenant d'un bout à l'autre les variantes des deux plus anciens manuscrits. Sir F. Madden a travaillé 22 ans à la collection de ce grand ouvrage, qu'il a publié de concert avec son collègue, le révérend J. Forshall. 822 Il s'agit vraisemblablement de la Bibliographie des bénédictins de la Congrégation de France par des pères de la même congrégation, impr. St-Pierre, Solesme, 1889, in-8° [dom Fernand Cabrol, éd. scientifique]. 823 Le Bulletin du bouquiniste a été publié à Paris, entre 1857 et 1896, totalisant 656 numéros. Cote Bibliothèque Nationale de France : NUMP-757 et Q-3547. 824 La Correspondance littéraire : s'agit-il du Correspondant : parmi ses rédacteurs : Montalembert, Ozanam, Lacordaire, Lenormant, de Falloux, Aug. Cochin, de Champagny, Cantù, de Laprade, de Pontmarin, Foisset, M gr d'Hulst notamment, lesquels professaient tous des idées à la fois catholiques et libérales.825 The Athenaeum, journal de littérature anglaise et étrangère, de science, des beaux-arts, de musique et d'art dramatique, fondé en 1828 par James Silk Buckingham, qui lui assigna pour objet d'être « Comme l'Athenaeum de l'antiquité, le rendez-vous des philosophes, historiens, orateurs et poètes les plus distingués de nos jours ». Les Tables de la Revue des Deux-Mondes et de plusieurs autres recueils périodiques. RECUEILS ÉPISTOLAIRES. -3° -(sans parler des Lettres de Ciceron, de Pline le Jeune, etc) : Lettres de Scaliger (père et fils), de Juste (mention illisible : Lipse ?), Correspondance des Réformateurs (recueil Herminjard), de M me de Sévigné, de M me du Deffand, de la marquise de Balleroy, de Guez de Balzac (édition in-f°), de Bussy-Rabutin, de Chapelain (une édition en grand papier), de Colbert, de Richelieu, de Mazarin, de Henri IV, note en marge : édition des grands écrivains de la France, avec toutes les autres éditions de ces grands écrivains données par la maison Hachette. de Lacordaire, Guizot, Sainte-Beuve, note en marge : Je possédais les oeuvres complètes de Sainte-Beuve en plus de 50 volumes. J'avais deux éditions de son livre sur le XVI e siècle. H. de Balzac, Guy Patin (éditions du XVIII e siècle), Cardinal d'Ossat, Philippe de Mornay, Seigneur du Plessis (édition Auguis en une douzaine d'in-8°), de Salignac-Fénelon (édition Teulet en 7 vol. in-8°), Lettres d'Espagne de la Comtesse de Robersart 827 , Lettres de Buffon, note en marge : J'avais la belle édition de son Histoire naturelle donnée par Flourens. de Voltaire (sans parler de la correspondance en plusieurs volumes qui fait partie de l'édition des Oeuvres complètes d'Armand Aubris en 50 vol. 
in-8°, j'avais presque tout ce qui a été publié isolément, c'est-à-dire une bonne douzaine de volumes ou brochures), Lettres de Saint Vincent de Paul (en 4 vol. in-8°), de Louis Veuillot, de Grimm (édition Tourneux), Lettres de divers dans les Archives de la Bastille de Ravaisson 828, dans la correspondance des Contrôleurs généraux, de Mme de Maintenon (Lavallée et Geffroy), Lettres françaises de Calvin (édition Bonnet), Correspondance de Madame, mère du Régent, Lettres de Prosper Mérimée à Panizzi (2 in-8°), à une inconnue (1 in-8°).
[Charles-Émile Ruelle, …] la musique grecque ancienne, a publié, en 1886, la Bibliographie générale des Gaules.
PHILOLOGIE. -4°) -Dictionnaire de Trévoux. Dernière édition. 8 vol. in-f° 829. Glossaire de Du Cange. Première édition en 3 vol. in-f°. Glossaire de La Curne de Sainte-Palaye. 8 vol. in-4° 830. Dictionnaire de l'Académie française. Deux éditions, la dernière et celle de 1798. Dictionnaire de Littré. 5 vol. in-4°. Dictionnaire de Gabriel Azaïs (Languedoc). 3 in-8° 831.
828 Ravaisson-Mollien (François) 1811-1884, secrétaire-trésorier, puis conservateur de la Bibliothèque de l'Arsenal, puis Ravaisson-Mollien (Louis) 1851-1922 [pour les derniers tomes], Archives de la Bastille : documents inédits recueillis et publiés par…, A. Durand et Pedone-Lauriel, Paris, 1866-1904, 19 vol. in-8°.
829 Dictionnaire universel français et latin dit Dictionnaire de Trévoux, composé par les Pères de la Compagnie de Jésus. L'édition de 1771 comporte 8 vol. in-8°. Est-ce cette « dernière édition » que possédait Tamizey ?
830 Jean-Baptiste de la Curne de Sainte-Palaye (1697-1781), Dictionnaire historique de l'ancien langage françois ou glossaire de la langue française depuis son origine jusqu'au siècle de Louis XIV ; une réédition de cet ouvrage avait été publiée par L. Favre, Niort et H. Champion, Paris, 1875-1882, en 10 vol. in-4°.
MÉMOIRES SUR L'HISTOIRE DE FRANCE. -5°) -Je possédais presque tout ce qui a paru dans les trois grandes collections du Panthéon littéraire, de la Société de l'Histoire de France et des Documents inédits publiés par le Ministère de l'Instruction publique. Il faut y ajouter presque tous les autres mémoires édités en dehors de ces collections, auxquelles j'allais oublier de joindre la collection des Mémoires relatifs au XVIIe siècle publiés chez Didot par Barrière et par Lescure.
note en marge : J'avais aussi quelques volumes de la collection Petitot, notamment les Mémoires du Cardinal de Richelieu, ceux de Montrésor. Je n'exagère certainement pas en déclarant que j'avais réuni près d'un millier de volumes de Mémoires.
Parmi les publications isolées je citerai : Mémoires de Puységur (l'édition ancienne et mon édition publiée pour la Société bibliographique), du président Hénault, de Miot (3 in-8°), du P. Rapin (4 in-8°), de Dutens (3 in-8°), du colonel […]
note en marge : J'avais tous les autres mémoires de la Bibliothèque elzévirienne, et, du reste, à peu près tous les volumes de cette précieuse collection (200 numéros au moins.)
840 [Jean-François Bladé], né à Lectoure (Gers), mort à Paris. Folkloriste, connu aussi par ses recherches sur la littérature, l'histoire et la géographie du Sud-Ouest français pendant l'Antiquité et le Moyen âge. Membre correspondant de l'Académie des inscriptions (1882).
Il a publié : Études sur l'origine des Basques (1869) ; Contes populaires recueillis en Agenais (1871), Géographie juive, albigeoise et calviniste de la Gascogne (1877), Poésies populaires en langue française recueillies dans l'Armagnac et l'Agenais (1879) ; Poésies populaires de la Gascogne (1881) ; Contes populaires de la Gascogne (1886) ; Le Sud-Ouest de la Gaule sous le haut et le bas empire (1887) ; Géographie historique de la Vasconie espagnole (1891) ; Les Ibères (1892) qui se trouvaient certainement dans la bibliothèque de Tamizey à Gontaud ; l'Essai sur l'histoire de la transhumance n'ayant paru, par contre, qu'en 1898. -Correspondance de Tamizey avec J.-F. Bladé : A.D. Lot-et-Garonne, 16 J 6, correspondance d'érudits : Fonds Tamizey de Larroque. 853 Marie-Armand-Pascal d'Avezac-Macaya, né à Bagnères-de-Bigorre en 1799, mort à Paris en 1875. Géographe et érudit, il fut membre de l'Académie des inscriptions et belles-lettres et composa nombre de mémoires relatifs à l'histoire de la géographie et des découvertes au Moyen âge et au XVI e siècle. -A.D. Lot-et-Garonne, 16 J 3, correspondance d'érudits : 894 Il s'agit probablementde Jean-François Boissonade (1774-1857) se fit connaître comme hélléniste par son édition et son commentaire des Héroïques de Philostrate (1806). Dès 1809, il fut attaché à la nouvelle faculté des lettres de Paris ; et plus tard, il y hérité de la chaire de Larcher. Il fut élu, en 1813, comme membre de l'Académie des Inscriptions, et en 1829, il remplaça Gail au Collège de France. Il a joué un rôle décisif dans la remise en l'honneur en France des études grecques, publiant notamment Eunapii vitae sophistarum (1822) ; Aristenaeti epistolae (1822) ; Poetarum graecorum syllage (1828-1832) ; Novum testamentum graecum (1824) ; Fables de Brabius (1844-1848)… à moins qu'il ne s'agisse de son fils, Gustave-Émile Boissonade (né en 1825), avocat puis professeur de droit à la faculté de Grenoble, puis à la faculté de Paris, auteur de l'Histoire de la réserve héréditaire (1873) ; Histoire des droits de l'époux survivant (1874), ouvrages qui auraient pu intéresser éventuellement Tamizey. 895 Edmond Rostand (1868-1918) à cette date n'a publié qu'un volume de vers Les Musardises (1890) et des fantaisies théâtrales Les Romanesques (1894) ; La princesse lointaine (1895). Il n'a pas fait oeuvre de critique et ses grands succès, Cyrano de Bergerac (1898) et L'Aiglon (1900) datent d'après juillet 1895. Il y a certainement confusion avec Eugène Rostand (1843-1915), né à Marseille, mort à Cambo, le père d'Edmond. Licencié ès lettres et en droit, qui publia d'abord des recueils de vers : Ébauches (1865), La seconde page (1866), Poésies simples (1874), Sentiers unis (1886) ; puis une traduction en vers des Poésies de Catulle (1880) ; puis il se voua à l'étude des questions économiques et inaugura à Marseille un mouvement de progrès social pratique (habitations ouvrières, lutte contre l'alcoolisme notamment). Il a fondé une banque populaire et publié : Les Questions d'économie dans une grande ville populaire (1889), L'Action sociale par l'initiative privée (1893-1907) ; La Réforme des caisses d'épargne françaises (1891). Il a été élu membre de l'Académie des sciences morales et politiques. 896 Duméril (Edelestand-Pontas) (1801-1871), né à Valognes, mort à Passy. 
Philologue et paléographe, il fit une étude toute particulière de l'histoire du Moyen âge et publia Essai philosophique sur le principe et la formation de la versification (1841), Essai sur l'origine des rimes (1844), Origines latines du théâtre moderne (1849), Essai philosophique sur la formation de la langue française (1852), Des formes du mariage et des usages qui s'y rattachaient pendant le Moyen âge (1861), Histoire de la comédie, période primitive (1864-1869) notamment. 897 Ferdinand Brunetière (1849-1906), maître de conférences à l'École (père de l'Académicien) 898 , Ernest Renan 899 , Prosper Mérimée 900 , Silvestre de Sacy 901 . ), puis à La Sorbonne (1863). Élu membre de l'Académie française en 1874. Il a été député républicain de Briey en 1881 et réélu enMeurthe-et-Moselle en 1885, 1889, 1893[START_REF] Lot-Et-Garonne | 1169 À 5 km environ au nord de Gontaud entre St-Pierre de Londres à l'ouest et Puymiclan à l'est, en surplomb de la D 124 actuelle[END_REF] et en 1900 sénateur de Meurthe-et-Moselle. 899 Rallié définitivement, à la suite du 16-Mai, à la République, Ernest Renan(Tréguier, 1823-Paris, 1892) en était devenu une sorte de personnage officiel, étant nommé, en 1883, administrateur du Collège de France. Ayant renoncé au sacerdoce en 1845, influencé par la pensée allemande, ami du grand physicien et chimiste Marcelin Berthelot, reçu premier à l'agrégation de philosophie en 1848, collaborateur de La Liberté de Penser, dirigée par Jules Simon, de la Revue des Deux Mondes et du Journal des Débats, grande autorité intellectuelle, introduit notamment dans le Salon de la princesse Mathilde et commensal du prince Napoléon, fils de Jérôme dont il partage les vues libérales. Il avait soutenu, en 1852, une thèse sur Averroès et l'averroïsme et s'était imposé par ses compétences de philologue et de linguiste (il avait obtenu, en 1847, le prix Volney pour un Essai historique et théorique sur les langues sémitiques). En 1862, il fut nommé à la chaire d'hébreu au Collège de France, mais dès sa première leçon, il souleva un tumulte pour avoir appelé le Christ un « homme incomparable ». Auteur d'une très controversée Vie de Jésus (1863), premier volume d'une Histoire des origines du christianisme, 7 vol., publ. entre 1863 et 1881 et de l'Histoire du peuple d'Israël (5 t., 1887-1893), il a aussi publié Essais de morale et de critique (1859), Questions contemporaines (1868), Dialogues et frgaments philosophiques (1876), Souvenirs d'enfance et de jeunesse où figure sa célèbre Prière sur l'Acropole (1883) et enfin Les Drames philosophiques (Caliban, L'Eau de Jouvence, Le Prêtre de Némi, l'Abbesse de Jouarre) parus en 1886. 900 Prosper Mérimée (1803-1870) qui fut l'ami de Stendhal et de la mère d'Eugénie de Montijo, la future épouse de Napoléon III, il est l'auteur de romans d'inspiration « étrangère » (Colomba 1840, Carmen 1845, notamment) et s'attacha dans les dernières années de sa vie littéraire à faire connaître en France la littérature russe. Inspecteur des monuments historiques à partir de 1841, il contribua activement à la conservation des anciens édifices français. Voir notamment Fermigier (André), « Mérimée et l'inspection des monuments historiques », p. 1599 -dans Nora (Pierre) s.d. Les lieux de mémoire, t. 1, Quarto, Gallimard, Paris, 1997. Fermigier (André), p. 1599-1614.901 Antoine-Isaac, baron Silvestre de Sacy (1758-1838), fils d'un notaire et janséniste fervent, Abraham Silvestre (mais qui n'appartenait en rien à la famille des Lemaistre de Sacy). 
Il commença très jeune des études orientales avec Dom Berthereau, fut nommé, en 1781, conseiller à la Cour des monnaies et, quatre ans plus tard, académicien libre aux Inscriptions et belles-lettres. Lors de la création par la Convention de l'École des langues orientales (1795), il y enseigna l'arabe. En 1806, il fut nommé professeur de persan au Collège de France. Député de la Seine en 1808, il fut l'un des chauds partisans de la Restauration. Il fut recteur de l'université de Paris (1815), administrateur du Collège de France (1823), puis de l'École des langues orientales, pair de France (1832) et secrétaire perpétuel de l'Académie des inscriptions. Par l'étendue de son érudition et la méthode critique qu'il inaugura, il est considéré comme le fondateur des études arabes en France. Sa Grammaire arabe (1810) est un monument philologique. Il contribua aussi à l'étude de la langue copte et au déchiffrement des hiéroglyphes. Tamizey possédait plus certainement les ouvrages de son fils Samuel-Ustazade Silvestre de Sacy (1801-1879) qui, à 27 ans, après s'être fait recevoir avocat, entra à la rédaction du Journal des Débats dont il ne cessa plus de faire partie. En 1836, il devint conservateur de la bibliothèque Mazarine. Il fut appelé, en 1854, à succéder à Jay comme membre de l'Académie française. En 1865, l'Empire le fit sénateur. Il recueillit ses meilleures études disséminées et il en forma les deux volumes des Variétés littéraires, morales et historiques (1858) où il se présente en homme antique par ses goûts, classique exclusif et sans réserve, et se révèle lui-même investi des qualités de style dont il recommande l'admiration. Sous le titre de Bibliothèque spirituelle, il a édité un certain nombre d'ouvrages religieux et donné, en outre, une édition des Lettres de Mme de Sévigné (1861-1864).
Parmi les livres non mentionnés dans les sept catégories, je me contenterai de citer L'Histoire ecclésiastique attribuée à Th. de Bèze. Dernière édition en 3 beaux volumes 902. L'Histoire ecclésiastique de l'abbé Fleury 903. L'Histoire de la Grèce par V. Duruy. L'Histoire romaine par V. Duruy 904.
902 Théodore de Bèze (1519-1605), né à Vézelay, mort à Genève. Théologien et prédicateur réformé, qui contribua largement à la conversion de Jeanne d'Albret, reine de Navarre et mère du futur Henri IV. Il est le plus proche collaborateur de Calvin qu'il représenta au Colloque de Poissy. Grand poète, il est aussi l'auteur de cette très apologétique Histoire ecclésiastique des Églises réformées au royaume de France dont l'édition originale date de 1580.
903 L'abbé Claude Fleury (1640-1723) est l'auteur d'un Discours sur l'histoire ecclésiastique… [et de] Discours sur la poésie des hébreux, l'Écriture sainte, la prédication et les libertés de l'Église gallicane…, paru d'abord en 1720, chez Mariette, in-8° [avec un 9e Discours sur les libertés de l'église gallicane, exclu des autres éditions qui n'en comptent que 8] et chez Emery, in-12. Dans l'édition de 1785, chez Pierre Beaume, Nîmes, in-12, a été ajouté le Discours sur le renouvellement des Études ecclésiastiques depuis le XIVe siècle, par l'abbé Goujet.
904 Victor Duruy (1811-1894), né et mort à Paris. Fils d'ouvrier, il sortit de l'École Normale Supérieure, fut successivement professeur […]
Il débuta en 1849, à Paris, par des articles d'art et de théâtre dans un journal de modes, et, en 1850, devint l'un des collaborateurs du Suffrage universel « journal politique avancé » selon l'expression de J. Andrieu. Arrêté après le coup d'État de Décembre et transporté en Algérie, il revint en France, lors de l'aministie partielle qui suivit le mariage de Napoléon III et se consacra à des travux de littérature et d'histoire. Parmi ses publications, celles que Tamizey avait certainement en sa possession : la Généalogie de la maison du Pleix de Calignan, Paris, 1861 et surtout les Maisons historiques de Gascogne…, Aubry-Dumoulin, Paris, 1865, 2 t. -Correspondance de Tamizey avec J. Noulens : A.D. Lot-et-Garonne, 16 J 20, correspondance d'érudits : Fonds Tamizey de Larroque. 921 Il s'agit des Documents historiques et généalogiques sur les familles et les hommes remarquables du Rouergue dans les temps anciens et modernes, Impr. de N. Ratery, Rodez, 1853-1860, 4 vol., in-8° publiés par Eugène-Hippolyte de Barrau. 922 923Louis- Sébastien Le Nain de Tillemont (1637-1698), issu d'une grande famille parlementaire parisienne, vouée à l'étude, à la charité et à une piété ouvertement janséniste, instruit aux Petites Écoles de Port-Royal, dont il fut, selon Racine, le meilleur élève, formé à Beauvais aux sciences sacrées et à l'histoire, Tillemont se joignit aux Messieurs de Port-Royal dès la paix de l'Église de 1668, vivant à proximité du monastère des religieuses de Port-Royal-des-Champs, où il se rendit souvent avant et après son élévation au sacerdoce, en 1676. A la dispersion des solitaires et des familiers, en 1679, il se retira à Tillemont, terre de sa famille, située à Montreuil, près de Paris, et y mena jusqu'à sa mort une existence extrêmement retirée et austère, tendue vers la perfection des vertus chrétiennes et l'achèvement d'un travail historique qui constituait pour lui, au-delà d'une recherche scientifique convenant à son esprit honnête et exact, un exercice spirituel. Très tôt, les maîtres de Port-Royal, Tillemont pour l'étude des six premiers siècles de l'Église, période que les théologiens et les polémistes catholiques ou réformés privilégiaient alors comme un âge d'or doctrinal et moral. Ils confièrent au jeune prêtre le soin de continuer et de perfectionner une des entreprises qu'ils avient à coeur -avec la traduction de l'Écriture sainte et la « perpétuité de la foi » -celle d'une histoire des saints et des solitaires ébauchée par Antoine Lemaistre. Les immenses lectures de Tillemont, son souci de la chronologie, son labeur incessant, acharné transformèrent l'entreprise édifiante en un véritable corps d'histoire. 925Pierre-Claude-FrançoisDaunou (1761Daunou ( -1840)). Oratorien , puis membre du clergé assermenté, il fut élu par le Pas-de-Calais à la Convention. Il vota la proscription de Louis XVI, protesta contre l'exclusion des Girondins et fut incarcéré jusqu'au 9-Thermidor. Il prit une part importante à l'organisation de l'Instruction publique, puis de l'Institut, où il fut aussitôt admis. Principal auteur de la Constitution de l'an III, membre du Conseil des Cinq-Cents, il fut chargé en 1798 de l'oragnisation de la République romaine. D'esprit trop libéral pour approuver le coup d'État du 18 brumaire, il s'efforça de mettre d'accord les conceptions de Bonaparte et de Sieyès dans la Constitution de l'an VIII ; Bonaparte modifia son projet. Membre du Tribunat, puis exclu pour son esprit d'indépendance, il fut cependant nommé archiviste de l'Empire. 
Destitué en 1815, il devint sous la Restauration, professeur au Collège de France, et député. Nommé à la Chambre des pairs en 1839, il consacra toute la seconde partie de sa vie à l'étude de l'histoire du Moyen âge. Secrétaire perpétuel de l'Académie des inscriptions, membre de l'Académie des sciences morales, il collabora à la publication des Historiens de France et à l'Histoire littéraire de la France pour laquelle il a composé un remarqué Discours sur l'état des lettres en France au XIII e siècle. 926 Ce livre religieux qui date du début du XV e siècle a trouvé auprès des laïcs et même dans le public non chrétien un très grand succès. On compte plus de 60 traductions françaises. Celle que Pierre Corneille a écrite en vers, celle du Père de Gonnelieu, celle qui est connue sous le nom de Lammennais sont les plus célèbres. Cet ouvrage anonyme (d'où les vives controverses sur son auteur), écrit en latin, renferme 4 livres, indépendants les uns des autres, qui ont dû circuler d'abord isolément et qui sont remarquables par leur pitié simple et confiante et aussi par une profonde connaissance du coeur humain : 1. Conseils utiles pour la vie spirituelle ; 2. Conseils pour la vie intérieure ; 3. De la consolation intérieure ; 4. Dévote exhortation à la sainte Communion. Le premier livre a pour but de détacher l'homme de lui-même et du monde ; le second l'aide à descendre dans son propre coeur ; le troisième l'initie aux mystères de l'amour divin ; le quatrième, enfin, l'unit à Dieu dans le sacrement de l'eucharistie. -Voir note 764 . -Voir Tamizey de Larroque (Ph.), Preuves que Thomas a Kempis n'a pas composé l'Imitation de N.S.J.C., par Philippe Tamizey de Larroque… Paris, A. Durand, 1862. In-8°, 82 p. (Extrait des Annales de philosophie chrétienne, 5 e série. t. III et IV, 1861.) Ruysbroek, Klopstock, Goethe, Lord Byron, Macaulay, etc. L'Histoire des Français par Alexis Monteils 927 . 928 Amable-Guillaume-Prosper Brugière, baron deBarante (1782Barante ( -1866)), nè à Riom, mort à Barante, fut successivement auditeur au Conseil d'État (1806) et préfet sous Napoléon 1 er , député et directeur des contributions indirectes sous la Restauration, pair de France et ambassadeur sous Louis-Philippe. Il appartint toujours à l'opinion libérale doctrinaire qui trouva son expression dans le régime de Juillet. Son Histoire des ducs de Bourgogne (1824) le fit entrer à l'Académie française et le posa comme chef de l'école narrative qui limite la tâche de l'écrivain à la reproduction aussi fidèle et aussi vivante que possible du passé, en lui interdisant toute appréciation personnelle et toute réflexion philosophique. Barante, écrivain fécond et aussi curieux des questions du jour que de celles du passé, a laissé sur le monde politique et littéraire de son temps des Souvenirs édité par son petit-fils. Jacçues Rousseau (édition du XVIII e siècle), un grand nombre de volumes de Michelet, tout Alfred de Musset, les oeuvres complètes d'Agrippa d'Aubigné, de Ronsard, de Du Bartas, de Montalembert 929 , de Villemain 930 , de Victor Cousin, de Patin, de Th. Gautier, de Flaubert, de Joseph de Maistre, du président Fauchet 931 , de Lamennais 932 , de Montesquieu (édition Laboulaye929 Charles Forbes, comte de Montalembert (1810-1870), né à Londres et mort à Paris. Fils d'un émigré, marié à une écossaise protestante, il fit la rencontre décisive pour lui de Lamennais, et participa à la fondation de L'Avenir (1830). 
Il y défendit la cause des Irlandais et des Polonais, ainsi que celle de la liberté de l'Église, puis, le journal ayant été condamné par Rome, il tenta vainement de retenir La Mennais dans sa révolte, et rompit avec lui en 1834. Devenu, en fait, le chef des catholiques libéraux, et siégeant, depuis 1835, à la Chambre des pairs, il y plaida sans cesse pour la liberté d'enseignement et préconisa, à cette fin, la formation d'un parti « catholique avant tout ». Dans ce dessein fut constitué un « Comité directeur pour la défense de la liberté religieuse » (1844) qui avait pour organe le journal L'Univers et qui contribua à l'élection de 140 députés partisans de la liberté de l'enseignement (1846). Représentant du Doubs à l'Assemblée en 1848, il constitua un comité électoral pour la liberté religieuse, groupant les catholiques, mais son attitude sociale (discours de 1840 sur les travail des enfants) ne l'empêcha pas d'approuver la répression des journées de Juin (après la Révolution de février 1848). Se méfiant dès lors de la démocratie, il soutint la politique d'ordre du prince-président. Il fut partiellement satisfait par le vote de la Loi Falloux (1850), mais peu après le coup d'État de Louis-Napoléon, il passa à l'opposition. Renonçant à la politique après son son échec aux élections de 1857, il prit en main Le Correspondant pour l'opposer aux ultramontains intransigeants de L'Univers. Désapprouvé par Pie IX après son discours au congrès de Malines (1863) et affligé par la publication du Syllabus, il meurt après avoir exprimé son inquiétude sur l'orientation très conservatrice du Concile du Vatican. Il a laissé de nombreuses brochures de circonstances et a contribué au renouveau des études médiévales par l'Histoire de sainte Élisabeth de Hongrie (1836) et Les Moines d'Occident depuis saint Benoît jusqu'à saint Bernard (1860-1877). Il était entré à l'Académie française en 1851. 930 Villemain (Abel-François) 1790-1870. Né et mort à Paris. Sa carrière universitaire et politique a été exceptionnellement brillante : professeur de littérature française à la Sorbonne dès 1816, il y enseigne jusqu'en 1830. Il est élu député d'Évreux peu avant les journées de Juillet, et devient pair de France en 1832. Après avoir fait partie du ministère Soult (1839-1840), il reçoit de Guizot le minisère de l'Instruction publique (1840-1844) : il déploie une grande activité en faveur de la réforme de l'enseignement secondaire. Sa vie est ensuite uniquement consacrée aux travux littéraires. Son Cours de littérature française (1828-1829), ses Études de littérature ancienne et étrangère (1846), son Choix d'études sur la littérature contemporaine (1857), son Essai sur le génie de Pindare (1859) sont ses ouvrages les plus appréciés. De tendance libérale en politique, il a, en littérature, agi sur le mouvement romantique : substituant la critique historique à la critique dogmatique, il juge la littérature française en la comparant aux littératiures étrangères. Entré à l'Académie française en 1821, il en devint Secrétaire perpétuel en 1834. 931 Claude Fauchet (1530-1601), né et mort à Paris, Président de chambre à la cour des Monnaies (1569), dont il devint, en 1581, premier président. Il publia divers travaux qui lui valurent, sous Henri IV, le titre d'historiographe de France. 
Les Antiquités gauloises et françoises (1599-1602) est tenu pour son chef d'oeuvre (nouvelle édition augmentée en 1611 sous le titre Les Antiquités et histoires gauloises et françoises) ; Origine des dignitez et magistrats de France (1600) ; Traité des libertez de l'Église gallicane (1608) et aussi Recueil de l'origine de la langue et poésie françoise, ryme et roman. 932 937 GastonBoissier (1823Boissier ( -1908)), né à Nîmes, élève de l'École Normale Supérieure, professeur à Nîmes, à Paris (Lycée Charlemagne), puis à l'École Normale Supérieure, enfin professeur d'éloquence latine au Collège de France dont il fut nommé administrateur en 1892, il fut élu membre de l'Académie française (1876), membre de l'Académie des Inscriptions (1886) et secrétaire perpétuel de l'Académie française en 1895. Érudit et archéologue, en même qu'écrivain, il a écrit, sur la littérature et les moeurs romaines anciennes, une série d'études : Le poète Attius, étude sur la tragédie latine pendant la république (1857) ; Étude sur la vie et les ouvrages de Terentius Varron, thèse (1861) ; Cicéron et ses amis, étude sur la société romaine au temps de César (1865) ; La religion romaine d'Auguste aux Antonins (1874) ; L'opposition sous les Césars (et Virgile (1886) et aussi M me de Sévigné (1887) ; La fin du paganisme (1891) ; Saint-Simon (1892), L'Afrique romaine (1895), ouvrages que pouvait posséder Tamizey. 938 Maximin Deloche (1817-1900), né à Tulle (Corrèze), mort à Paris. Il commença sa carrière comme avocat au barreau de Bordeaux (1836), puis il partit pour l'Algérie où il fut tour à tour directeur des travaux publics à Alger, puis des affaires civiles à Constantine. Il revint en France en 1850, entra au ministère de l'Agriculture (1853), et devint directeur de la comptabilité centrale. Il prit sa retraite en 1880 pour se donner tout entier aux travaux d'érudition dans lesquels il s'était acquis une notoriété qui lui valut le grand prix Gobert en 1859 et un siège à l'Académie des inscriptions, en 1871. Il a publié Étienne de Baluze (1856) ; le Cartulaire de l'abbaye de Beaulieu (1859) ; Étude sur la géographie historique de la Gaule (1861-1864) ; Description des monnaies mérovingiennes du Limousin (1863) ; Le Port des anneaux dans l'antiquité romaine et les premiers siècles du Moyen âge (1892) notamment. -A.D. Lotet-Garonne, 16 J 11, correspondance d'érudits : Fonds Tamizey de Larroque. de la République est élu à la majorité des suffrages par le Sénat et par la chambre des députés réunis en Assemblée nationale. Il est nommé pour 7 ans. Il est rééligible). Ce qui ouvrait la perspective d'une restauration monarchique. Ministre de l'Instruction publique, des Cultes et des Beaux-Arts dans les cabinets Buffet (1875) et Dufaure (1876), il créa les facultés de droit à Lyon, et de médecine à Lille. Élu dès décembre 1875, sénateur inamovible, il défendit les intérêts catholiques dans la discussion des lois relatives à l'enseignement (1879-1886), au divorce (1884) et au droit d'association (1901). Secrétaire perpétuel de l'Académie des inscriptions depuis 1873, doyen de la faculté des Lettres de Paris (1876-1887), il a écrit, entre autres ouvrages : Du monothéïsme chez les races sémitiques (1859) ; Jeanne d'Arc (1860) ; Saint Louis et son temps (1875). 942 Gustave Saige (1838-1905) a notamment publié le Journal des guerres civiles de Dubuisson-Aubenay [source importante sur l'histoire de la Fronde de François-Nicolas Baudot Dubuisson-Aubenay 1590-1652], H. Champion, Paris, 1883-1885, 2 vol ; in-8°. -A.D. 
Lot-et-Garonne, 16 J 25, correspondance d'érudits : Fonds Tamizey de Larroque.
édition in-4°) 945 ; diverses oeuvres de G. Colletet, de Belleforest, de Charron, les oeuvres complètes de Lachaize 946, de Joseph de Maistre, de Jean de Silhon 947, les 2 éditions de […]
943 Thomas Hemerken dit a Kempis (1379-1471), écrivain mystique né à Düsseldorf. Formé à Deventer, aux Pays-Bas (1392-1399), par le fameux Radewyns, fondateur des chanoines réguliers de Windesheim, promoteur de la « devotio moderna » dont Érasme fut l'élève. Il entra, en 1399, dans la maison de l'ordre récemment fondé au Mont-Saint-Agnès, près de Zwolle, dont il devint, par la suite, sous-prieur. On lui attribue généralement la rédaction d'un traité qui connut un très grand succès, bien au delà du Moyen-âge : L'Imitation de Jésus-Christ. Voir Tamizey de Larroque (Ph.), Preuves que Thomas a Kempis n'a pas composé l'Imitation de N.S.J.C., par Philippe Tamizey de Larroque… Paris, A. Durand, 1862. In-8°, 82 p. (Extrait des Annales de philosophie chrétienne, 5e série, t. III et IV, 1861.)
944 Egger (Émile) (1813-1885), philologue et helléniste. Docteur ès lettres en 1833, agrégé des lettres en 1834, Egger fut d'abord professeur dans divers lycées de Paris, puis (1839-1861) maître de conférences de grammaire à l'École Normale Supérieure. Il suppléa Boissonnade dans sa chaire de littérature grecque à la Faculté des lettres de Paris dont il devint titulaire en 1855. En 1854, il était entré à l'Académie des inscriptions et belles-lettres. Parmi ses publications : Examen critique des historiens anciens de la vie et du règne d'Auguste (1844), Essai sur l'histoire de la critique chez les Grecs (1849), Mémoires de littérature ancienne (1862), Mémoires d'histoire ancienne (1863), l'Hellénisme en France (1869), Histoire du livre depuis ses origines jusqu'à nos jours (1880). -A.D. Lot-et-Garonne, 16 J 13, correspondance d'érudits : Fonds Tamizey de Larroque.
945 […]
J'apprends la mort d'un homme avec lequel j'étais fort lié depuis près de 40 années, M. Aug. Geffroy 948, de l'Institut, ancien directeur de l'École française de Rome. C'était un des plus aimables savants qu'il m'ait été donné de connaître. Je transmets en toute hâte à sa veuve l'expression de mes très vifs regrets. Combien d'amis je perds depuis quelque temps et que je ne remplacerai pas !
note en marge : Je viens de saluer un des meilleurs de tous mes chers disparus, Jules Andrieu, dans un article nécrologique des Annales du Midi 949 (livraison de juillet, p. 564-565). Voici les dernières lignes de cet article daté du Pavillon Peiresc, 14 mai 1895 : Je ne sais s'il y a eu jamais quelqu'un qui aimât plus que lui sa famille et ses amis. Pour ma part, j'ai reçu de lui, pendant de longues années, de tels témoignages d'affection, que j'en reste profondément touché, et qu'à jamais, je garderai de Jules Andrieu, un souvenir dans lequel se confondront le regret, la vénération et la reconnaissance.
Nous avons eu, hier, un hôte bien aimable, Paul Bonnefon 961, bibliothécaire à l'Arsenal, que je n'avais pas vu depuis plusieurs années. (Sa dernière visite m'avait été faite à Gontaud en 1888.) Très spirituel causeur, il nous a fait passer une fort agréable journée. Nous avions avec lui nos deux neveux de Boëry 962. Les anecdotes de Paul Bonnefon ont beaucoup amusé ces jeunes gens. Il leur semblait entendre une chronique du Figaro 963.
28 septembre Je viens de faire ma première sortie, n'ayant pas un seul moment quitté mon pavillon depuis le 9 juillet Je suis allé passer deux jours au chalet de ma soeur 964 , à Ambrus 965 . J'ai 961 Paul Bonnefon, né à Sauveterre-de-Guyenne (Gironde) en 1861, mort à Paris en 1922. Il fut nommé bibliothècaire à l'Arsenal en 1889. Il a publié : OEuvres complètes d'Étienne de la Boétie (1892) et Montaigne, l'homme et l'oeuvre (1893), ce dernier ouvrage, a été réimprimé avec des additions en 1898 sous le titre : Montaigne et ses amis : La Boétie, Charron et Melle de Gournay. -A.D. Lot-et-Garonne, 16 J 28, correspondance d'érudits : Fonds Tamizey de Larroque. 962 Voir 8 avril 1893 notamment. 963 D'abord fondé à Paris en 1825 par Maurice Alboy et conçu comme un journal satirique et batailleur dont une cascade de procès précipita la mort. Le nouveau Figaro fondé par de Villemessant, parut le 2 avril 1854 et fut au début hebdomadaire puis bi-hebdomadaire. Son succès fut rapide. Cette feuille à la fois agressive, littéraire, et mondaine compta parmi ses rédacteurs la plupart des chroniqueurs célèbres de l'époque. En 1866, le Figaro devint quotidien et, en 1867, politique. Les articles qu'y publia Rochefort contre le pouvoir mirent en péril le journal , qui dut se séparer du célèbre pamphlétaire. Après la guerre de 1870-71, Villemessant fit du Figaro, un journal essentiellement monarchiste. À sa mort (1879), Francis Magnard, F. de Rodays et Périvier en devinrent les directeurs. Le Figaro tout en restant l'organe des idées conservatrices, religieuses et mondaines, perdit son caractère agressif. À l'aube du XX e siècle, ce journal occupait la première place dans la presse littéraire et mondaine. 964 Il s'agit d'Anne-Élénore Tamizey de Larroque (2 e enfant d'Alexandre Tamizey de Larroque et de Marie-Élisabeth Delmas de Grammont), née le 2 décembre 1831, à Gontaud et morte le 3 janvier 1917 à Lavardac (Elle est inhumée à Ambrus dans le tombeau de la famille de son mari). Elle avait épousé à Gontaud, le 26 novembre 1850, Onézime Truaut -dit Henry en famille -(1819-1894) [A.P. Baquier]. Voir 12 juin 1891, 21 septembre 1893, 27 septembre 1894, 13-18 septembre 1897, 25 septembre 1897. 965 Sur la rive gauche de la Garonne, à 4 km environ au sud-ouest de Buzet-sur-Baïse : carte au 50 000 e , Marmande-Agen, 56, I.G.N., Paris, 2003. assisté, le jeudi 26, à la messe du bout de l'an dite pour mon beau-frère Henri Truaut 966 . Le prêtre qui a dit la messe est le nouveau curé de Saint-Pierre de Buzet, l'abbé Dubois 967 , l'ancien vicaire et futur historien de Monclar, lequel était venu passer jadis une journée ici pour y prendre des conseils et des notes. J'ai retrouvé avec plaisir mon jeune confrère, lequel est animé du plus beau zèle et a déjà beaucoup recueilli. Nous avons subi, pendant notre séjour à Ambrus, une chaleur étouffante, une chaleur de fournaise. J'ai salué d'un coeur ému, dans le cimetière d'Ambrus, le tombeau où repose ma pauvre mère. 1 er octobre Je viens de recevoir d'un de mes confrères de Strasbourg, M. Paul Ristelhuber, un gros paquet de livres : ses oeuvres complètes, que je possédais à Gontaud. Décidément, l'Alsace, en général, et la ville de Strasbourg, en particulier, se montrent bien secourables pour le pauvre incendié. notre ami commun, qui a été entre nous le trait d'union sacré que rien ne rompra jamais. 
2 novembre
J'arrive de Soumensac où j'avais été appelé, le 31 octobre, par un télégramme de mon neveu Jean de Boëry m'annonçant la mort de son père Étienne Paul de Boëry, mon cousin germain et beau-frère, arrivée le mercredi soir, 30 octobre. Paul, qui n'était âgé que de 62 ans, était malade depuis assez longtemps (albuminurie), mais personne, même dans sa famille, ne s'attendait à le voir mourir si tôt. Sa femme et son fils cadet n'ont pu arriver à Soumensac qu'une heure environ avant qu'il rendît le dernier soupir, et son fils aîné, qui était à Toulouse, a trouvé son père déjà mort depuis une heure à peu près. Je plains beaucoup le pauvre Paul que j'aimais tant pour ses qualités personnelles que comme fils de la soeur de mon père, ma chère tante Henriette, et surtout comme frère de ma toujours bien aimée Nathalie 977. Paul était un de mes plus anciens et meilleurs amis. Moi qui disais ici, l'avant-veille de son décès, combien j'avais perdu de camarades de route, combien j'étais loin de penser que, 48 heures plus tard, disparaîtrait celui que j'aimais tant dès mon enfance et qui avait plusieurs années de moins que moi ! On a enterré mon pauvre ami le jour de Toussaint au milieu des regrets d'une nombreuse assistance. Son fils aîné et moi nous menions le deuil avec une affliction presque égale et qui s'augmentait chez moi de toutes sortes de souvenirs cruels.
11 novembre
Reçu de M. le doyen honoraire de la Faculté des Lettres d'Aix, président du Comité du monument Peiresc, la dépêche suivante : Gontaud, de Aix-en-Provence. n° 620. Mots : 31. Dépôt le 10/11 à 7 h 37 mn. du soir. « Fêtes célébrées avec succès. Peiresc glorifié aujourd'hui presque populaire. Tous vous remercions de votre initiative en regrettant profondément votre absence. […] »
J'ai laissé passer trois mois sans rien inscrire ici. J'étais si accablé de tristesse que je n'avais pas le courage de continuer mon livre de raison. Il y a de si cruels souvenirs pour moi dans le passé ! Tant de douloureuses préoccupations pour moi dans l'avenir ! Et puis, quel lamentable hiver ! Quelle affreuse et interminable série de journées assombries par le brouillard ! Aujourd'hui ces voiles épais ont enfin disparu. Un beau soleil éclaire mon cabinet et me vivifie. En revoyant cet ami si longtemps enseveli dans les brumes, je rouvre mon registre, et, pour bien commencer, je vais énumérer avec un vif sentiment de reconnaissance les personnes et les sociétés savantes qui ont témoigné leur sympathie à l'incendié par l'envoi de leurs livres, depuis la Toussaint jusqu'à ce jour. Je transcris, d'abord, cette lettre :
« Mairie de Bordeaux. -Cabinet du Maire. Bordeaux, 5 novembre 1895. Monsieur, J'ai l'honneur de vous faire hommage de la monographie de Bordeaux (4 volumes) et de la collection des publications municipales de la ville de Bordeaux. Veuillez agréer, etc… Le Maire de Bordeaux, Alfred DANEY 988. »
Les sociétés savantes qui m'ont envoyé leurs publications sont : La Société de l'Histoire de France (une cinquantaine de volumes) ; (Lettres d'A. de Boislisle 989 et de Lecestre). La Société des Anciens Textes (une cinquantaine de volumes) ; (Lettres de Paul Meyer 990 et de G. Reynaud). La Société des Sciences, Lettres et Arts d'Agen (une quarantaine de volumes) ; (lettres de Tholin 991). La Société des Archives historiques de la Saintonge (une quarantaine de volumes) ; (lettres d'Audiat 992). L'Académie de Reims (une douzaine de volumes) ; (Lettres de H. Jadart). Voici les donateurs particuliers : MM. Hachette (m'ont rendu tout ce qui me manquait du Saint-Simon, des Grands écrivains de la France) ; M. H. Baguenier-Desormeaux 993 ; […]
988 Alfred Daney (1832-1911), né à Marmande. Il succéda à son père dans la maison de commerce qu'il avait créée rue de la Rousselle à Bordeaux. Entré au conseil municipal sur la liste républicaine, il y siégea 22 ans entre 1870 et 1908 dont 12 ans à titre de maire. Son oeuvre comme maire fut marquante. Il veilla tout particulièrement à l'application de la loi sur l'enseignement laïque et obligatoire, et à promouvoir les institutions sociales. Son administration financière prudente lui attira à la fois éloges et critiques pour ne pas avoir entrepris des travaux par souci d'économie. On lui doit le dégagement des abords de l'Hôtel de ville, l'ouverture du Parc bordelais, l'aménagement du cours Pasteur, la construction des Facultés des lettres et sciences et de médecine, ainsi que la création du Musée du Vieux Bordeaux : Catalogue de l'exposition Bordeaux 2000 ans d'histoire, Bordeaux, Musée d'Aquitaine, 13 février-30 juin 1973 (comité d'organisation présidé par Charles Higounet), p. 517.
989 Arthur Michel de Boislisle (1835-1908) a publié la Chambre de comptes de Paris (1873), auquel l'Académie des inscriptions a décerné le grand prix Gobert ; puis sa monumentale édition des Mémoires de Saint-Simon. Il fut, à partir de 1884, membre libre de l'Académie des inscriptions et belles-lettres.
1007 Abbé Philippe-André Grandidier (1752-1787), bénédictin alsacien, auteur d'essais historiques touchant en particulier l'histoire religieuse et ecclésiastique de Strasbourg et des environs. L'abbé Ingold, aidé pour certains volumes par des collaborateurs, a publié entre 1895 et 1898, chez A. Picard et fils, à Paris, les Correspondants de Grandidier, 11 tomes, inspirés à l'évidence par le modèle des Correspondants de Peiresc, publiés en 21 volumes par Tamizey entre 1879 et 1897.
1009 Correspondance de Tamizey avec C. Urbain : A.D. Lot-et-Garonne, 16 J 29, correspondance d'érudits : Fonds Tamizey de Larroque.
[…la Revue] d'Histoire littéraire de la France où j'ai publié (livraison du 15 avril) une « Notice inédite de Guillaume Colletet sur Marc Antoine Muret, suivie d'une lettre de Muret également inédite » et où je vais publier (livraison du 15 juillet) un document à retrancher du recueil de lettres missives du roi Henri IV 1019.
Cousin a publié une série d'études sur les femmes du XVIIe siècle : Jacqueline Pascal (1845), Madame de Chevreuse et Madame de Hautefort (1856), La société française au XVIIe siècle (1858).
Ce matin, j'ai trouvé tout fleuri un jeune acacia que j'ai fait planter, l'hiver dernier, auprès de la porte d'entrée du jardin. J'ai éprouvé devant ce doux spectacle inattendu, quelque chose du ravissement où la vue d'une simple fleur jeta le prisonnier rendu célèbre par l'auteur de Picciola 1020. Ce jour même, les fleurs de mon acacia m'ont aidé à rendre un pieux et funèbre hommage à une jeune fille de 18 ans, charmante de bonté et de beauté, Angèle Plurimet, que l'on va enterrer dans le cimetière de Rebec 1021 : elles ont été jointes à des roses blanches et à des branches d'aubépine fournies aussi par mon jardin, qui ont formé une couronne que l'on déposera sur le cercueil de la pauvre enfant. Angèle, qui était elle-même une fleur, trouvera dans le Ciel son complet épanouissement.
[…] d'une manifestation qui jamais ne fut mieux méritée. » Je note ici que G. Paris, à mes félicitations pour son élection, a mis bien des choses en ces quatre mots : « Merci à Peiresc et à vous. »
9 juillet
Anniversaire du jour où ma maison a été brûlée. Je retrouve toute vive l'impression indicible que produisit sur moi la catastrophe. Il me semble entendre la cloche dont les vibrations me remplissaient d'épouvante et revoir les flammes qui jetaient sur l'horizon leur pourpre sinistre. Je me demande comment j'ai pu résister à un tel coup. Plusieurs ont pensé que j'en mourrais. Mais la pensée de Dieu m'a fortifié, et le travail aide à me consoler. J'ai subi avec courage les regrets de chaque instant et aussi les troubles fort graves apportés dans ma santé par tant de malheurs accumulés. Aujourd'hui je suis beaucoup mieux physiquement et moralement. Je dois ce double mieux à l'apaisante influence du temps -l'admirable vers de La Fontaine est la vérité même : « Sur les ailes du temps la tristesse s'envole » -et aussi à celle du beau temps. J'ai constaté que le relèvement de mon pauvre être a été favorisé par les radieuses journées qui ont suivi la fin de l'hiver. Le rajeunissement du vieillard brisé par les plus pénibles émotions a concordé avec le rajeunissement de la nature. Il y a autour de moi comme une magique électrisation dans le parfum de l'aubépine, de la violette et du lilas. Ces premières fleurs du printemps ont été pour moi d'aussi douces que puissantes bienfaitrices. L'influence du renouveau n'a pas manqué de se faire sentir dans mon cabinet de travail. Depuis près de trois mois, de plus en plus je me ranime, je me ressaisis, je retrouve cet enthousiasme, cette sorte d'ivresse qui rend les labeurs à la fois si faciles et si agréables.
15 juillet
Nous avons eu, hier, deux grands orages, un le matin, l'autre, le soir. Heureusement, la grêle ne s'est pas mêlée au vent et à la pluie. Dans l'orage du matin, vers 2 heures, la foudre, capricieuse comme une jolie femme, et dédaignant, cette fois, les hauteurs, est tombée au bas de notre coteau, dans la prairie de notre voisin Berger, sur un peuplier qu'elle a mis en pièces. Puissions-nous n'avoir jamais à subir la terrible visiteuse ! Le vent était si violent qu'il a brisé, dans le jardin, mon plus beau genévrier, lequel n'était déjà plus un arbuste, mais avait les belles proportions d'un arbre. Je le regrette comme on regrette un jeune ami. L'orage du soir a été surtout caractérisé par une pluie torrentielle. C'était presque le retour du déluge du 25 juillet 1891. Mon pauvre pavillon, que des paysans trop respectueux appellent lou castet 1030, était un château d'eau.
1030 Castèt (ou castèth ou castéyt) signifie en gascon : château, habitation seigneuriale, forteresse, mais aussi grand nuage de type cumulonimbus. Il existe deux dictons qui l'utilisent : « créde-s d'û castèt » (croire être d'un château, être présomptueux, fat), « n'ey pas la pene d'en ha castèts » (ce n'est pas la peine d'en faire des montagnes) : Palay (S.), op. cit., p. 213.
20 juillet
[…] mois, avec mes chasselas, mes malagas, mes muscats ! Ces grappes sur lesquelles je comptais tant sont déjà pestiférées. Rien d'aussi horrible, comme, il y a quelques jours encore, rien n'était aussi riant. On me dit, de plus, que mes rosiers, dont j'étais peut-être plus fier que de mes ceps, sont atteints d'une sorte de gale. Le mal est donc partout ?
15 août
Le mois d'août nous amène, tous les ans, un flot d'aimables visiteurs. Cette première quinzaine a été remplie presque en entier par de douces invasions. En voici la liste : Lundi, 3, et mardi, 4, séjour de M.
l'abbé Dubois 1036 , curé de Saint-Pierre de Buzet, un des habitués du pavillon Peiresc ; du 10 au 14, séjour de ma chère soeur 1037 ; le lundi, 10, nous avons eu à dîner l'abbé Dabos (sic)1038 , curé de Saint-Léon, un des plus 1036 L'abbé Jean Dubois a classé et travaillé les archives du château de Rayne-Vigneau à Bommes (Gironde). Il a correspondu avec Ph. Tamizey de Larroque et son fils Henri. Il a signé la nécrologie de l'abbé Dubos dans la Revue de l'Agenais, 1928 (mai-juin), 55 e a. n°3. -Le catalogue des imprimés de la Bibliothèque Nationale ne mentionne aucune publication de son fait.-A.D. Lot-et-Garonne, 16 J 12, correspondance d'érudits : Fonds Tamizey de Larroque. 1037 Il s'agit d'Anne-Élénore Tamizey de Larroque (2 e enfant d'Alexandre Tamizey de Larroque et de Marie-Élisabeth Delmas de Grammont), née le 2 décembre 1831, à Gontaud et morte le 3 janvier 1917 à Lavardac (Elle est inhumée à Ambrus dans le tombeau de la famille de son mari). Elle avait épousé à Gontaud, le 26 novembre 1850, Onézime Truaut -dit Henry en famille -(1819-1894), né à Xaintrailles et mort au « châlet » d'Ambrus, par Buzet. Il était fils de Julie Dubédat (1786-1849) et de Jean-Baptiste Truaut, propriétaire, ancien notaire à Lavardac de 1805 à 1845, qui s'était retiré sur la commune d'Ambrus, au château de Pradères [A.P. Baquier]. Voir 12 juin 1891, 21 septembre 1893, 27 septembre 1894, 13-18 septembre 1897, 25 septembre 1897. 1038 Il s'agit en fait de l'abbé Léon Dubos (1839-1928). Issu d'une famille nombreuse de paysans aisés de Sainte-Abondance. Ordonné prêtre en 1864, il débuta comme vicaire de Lévignac, puis passa à Mézin où il professa en même temps au Petit Séminaire de cette ville. En 1868, il devint curé d'Armillac, puis il occupa successivement les paroisses d'Estillac, Veyries, Lamarque, Lachapelle et Saint-Léon. En 1897, il devint aumônier des dernières prières, à Agen. Dès cette époque, ses visites aux archives départementales devinrent plus fréquentes. Son assiduité le fit bien vite remarquer et, en 1899, il fut élu membre résident de la Société Académique d'Agen. Le 1 er janvier 1906, il se retira à St-Sernin d'Eysses.actifs travailleurs du diocèse ; le mercredi, mon petit-neveu, Robert Cramaix-Hugonis 1039 , maréchal des logis aux Chasseurs d'Auch, est venu, rejoindre sa grand'mère et a passé deux jours avec nous ; le jeudi, mon avocat, M e Lefèvre, a déjeuné avec nous tous. Au moment où il nous quittait, à 3 heures de l'après-midi, sont arrivés M. A. Boyer, d'Agen, sa femme, la très habile artiste, de la célèbre famille Breton 1040 , et leur Peu après, son frère, prêtre aussi et non moins érudit, l'abbé Paulin Dubos, vint le rejoindre à ce poste de retraite. L'un et l'autre avaient employé et employaient les loisirs que leur laissait le saint ministère à la culture des abeilles. En rapport tous les deux avec le syndicat agricole de Marmande, ils étudiaient l'apiculture avec une méthodique ferveur. Pour eux, comme pour tant d'autres, explique l'abbé Jean Dubois, cette étude d'une branche de l'histoire naturelle fut la préparation toute naturelle aux recherches historiques. Le même abbé Dubois retrace la genèse de la vocation d'historien des deux frères. Le cadet, Paulin, était curé de saint-Avit, à ses débuts. Il reconstruisait le clocher de son église et il désira connaître les familles qui dans le passé avaient habité sa paroisse. 
L'aîné, Léon, dont il est ici fait mention, était alors curé de Lachapelle et ayant un procès à soutenir, eut besoin pour le gagner de faire de sérieuses recherches. Les deux frères, s'encourageant mutuellement et s'entraidant à l'occasion, arrivèrent à leurs fins. Le premier construisit plus facilement son clocher, tandis que son frère arrivait à gagner son procès. Encouragés par ces succès, ils continuèrent à marcher, jusqu'à leur mort, dans la voie où ils venaient de s'engager. -L'abbé Jean Dubois a signé sa nécrologie dont les précédentes informations sont tirées dans la Revue de l'Agenais, 1928 (mai-juin), 55 e a. n°3, p. 127-128. 1039 C'est le fils de la nièce de Tamizey, Marie-Marguerite-Élisabeth-Gabrielle Truaut (fille de sa soeur Anne-Éléonore mariée à Onézime-Henry Truaut), née à Lavardac, en 1851, qui avait épousé, en 1873, Pierre-Henri Cramaix, né à Sainte-Foy (Gironde), en 1846, sans profession, fils d'Ambroise Cramaix, négociant à Sainte-Foy et de Marguerite-Marie Hugonis, avec laquelle il demeurait, à Sainte-Foy, au moment de son mariage [A.P. Baquier]. 1040 Jean-Auguste Boyé, dit Augustin Boyer, né à Agen, en 1857. Il quitta en 1879 le séminaire d'Agen à la suite d'un incident qui le conduisit à Rome pour soulager sa conscience. -Ce fut, du reste, un changement de voie. En mars 1881, il entra chez un négociant de Cette, qu'il quitta le 16 octobre suivant pour retourner en Italie. Revenu en France, en juillet 1882, Augustin Boyer se fixa à Paris et s'adonna désormais exclusivement à des travaux littéraires auxquels il s'était déjà livré au séminaire et lors de son séjour à Rome, ayant notamment publié, sous le pseudonyme de Jean Passefont, une brochure : Le Panthéon restauré, Rome, impr. Perino, novembre 1881, in-4°. Il est l'auteur notamment de Souvenirs du Cloître et Portraiture séculière. -Contes à la d'Ouest-Ange, Paris, 1884 [tirage à part de la Revue critique d'Émile Max (1883)] ; La légende hugolienne. -Première série : Les Petites Épopées -Paris, A. Laurent, 1886, in-8° de 355 p. : la substance de ce livre dans la Revue Critique (1883), la Nouvelle Revue (1886), la Revue Internationale de Florence (25 octobre -10 novembre 1885) ; Petites Épopées (suite), et les Grandes Histoires ; Lettres de Gascogne publiées d'abord dans les journaux : Journal de Lot-et-Garonne du 31 août au 28 septembre 1884 et 8 janvier 1886 ; Pris-Rome, des 20 décembre 1885, 9 janvier 1886, etc. ; supplément au Petit Journal etc. En outre, Augustin Boyer a donné à la fils, garçon de six ans. Ils ont soupé ici et ils sont repartis à neuf heures du soir. En de si courtes heures, M me Boyer-Breton, qui est infiniment gracieuse, a trouvé moyen de faire de moi sur panneau un portrait assez ressemblant. Pendant qu'elle croquait ma pauvre vieille frimousse, son mari, qui a une verve de tous les diables, nous racontait une foule d'anecdotes italiennes et parisiennes. Il n'est pas moins pittoresque comme causeur que comme écrivain. J'ai été heureux de recevoir ce couple fort aimable et fort distingué. Malheureusement, leur séjour a été de trop courte durée et j'ai pu les comparer très justement à ces météores qui brillent et disparaissent au même instant. 
Le 14, dans la soirée, sont arrivés mon neveu, Arthur Grizot 1041 , président du Revue critique, en 1883 et 1884 : Pauvre Homme (15 juillet 1883) ; Le Bandit du Pont-d'Espagne (22 juillet) ; La Culotte de Bertrand (2 septembre) ; Un souvenir de Boccace (23 septembre) ; Une page (7 octobre) ; Souvenirs du Cloître (4 novembre) ; Un Gringoire (15 novembre) ; Barbara de Gransano (2 décembre) ; le Recteur de San-Gennaro (décembre) ; Le Testament (janvier 1884) etc. La nouvelle publiée le à l'auteur, le 7 février 1884, une condamnation à huit jours de prison et 200 fr. d'amende pour outrage aux bonnes moeurs. Le gérant de la Revue Critique reçut, pour sa part, 6 jours de prison et 100 fr. d'amende. S'y ajoutent Franz et Bettine dans le Courrier du Soir d'octobre 1884 ; Le Petit Chose et les Ballades de la pluie dans la Revue du Sud-Ouest de février et d'avril 1885 ; Le Manuscrit de Grand'Mère (Riola Mancini) dans la Revue Britannique d'août 1885 entre autres. Jules Andrieu dresse un tableau assez peu favorable de l'oeuvre d'Augustin Boyer dans sa Bibliographie générale de l'Agenais, t. 1, 1886, p. 111-113. Il évoque notamment la publication qu'il fit dans la nouvelle Revue Contemporaine, n° 2, du 25 février 1885 : Deux Chants de Dante, retrouvés et traduits et de la contestation qu'il suscita. Le Journal des Débats du 11 mars, dans un article signé M. M. (Marc Monnier) mit en cause le caractère inédit des textes publiés par A. Boyer et l'accusa pratiquement d'avoir plagié une brochure d'Ignazio Giorgi, extraite du Giornale de Filologia romanza : aneddoto di un codice Dantesco, Livourne, Vigo, 1880. A. Boyer s'empressa de reconnaître les droits revendiqués par I. Giorgi lui-même (journal l'Italie du 20 mars 1885). Par une lettre insérée dans le Figaro du 20 mars, il déclara que la Revue Contemporaine avait modifié le titre de son manuscrit qui portait exactement : Dante Alighieri, à propos de deux chants apocryphes. Adrien Remacle, directeur de la Revue Contemporaine, répliquant le lendemain dans le même journal, soutint qu'A. Boyer lui avait donné les deux chants comme inédits, s'offrant à faire la preuve de son assertion. En janvier 1886, A. Boyer était encore en instance pour obtenir d'A. Remacle une rectification du titre. -Correspondance de Tamizey avec Augustin Boyer : A.D. Lot-et-Garonne, 16 J 28, correspondance d'érudits : Fonds Tamizey de Larroque. 1041 Arthur Grizot, né à Laon (Aisne) en 1841, fils d'Antoine Grizot et d'Adèle Coffignon, propiètaires, domiciliés à Vorges (Aisne), fut juge au tribunal de Première instance de Marmande. Il avait épousé, en 1877, la tribunal de Condom, et son fils Paul, charmant garçon de quinze ans, lesquels viennent de nous quitter. s'est très bien passé et, après la transmission du cheptel, le sortant et l'entrant ont amicalement trinqué. Nous allons voir si le métayage nous sera plus avantageux que le fermage. Nous tiendrons note de toutes les dépenses et de toutes les recettes et nous comparerons le total de notre revenu aux 900 francs que nous touchions de Barthalome et que nous lui avions proposé de réduire à 800 francs. Je serais agréablement surpris si le métayage produisait jamais cette dernière somme. Samedi 22 août Nous avons possédé, toute cette semaine, M. le chanoine Allain 1042 , curé de Saint-Ferdinand de Bordeaux. J'ai déjà eu l'occasion de dire ici combien c'était un hôte aimable. Aussi, sous le charme de sa causerie, la semaine s'est-elle écoulée avec la rapidité d'un songe. 
Je ne veux pas répéter au sujet de cette visite ce que j'ai déjà dit de ses visites précédentes. Qu'il me suffise de constater que le nouveau curé de Saint-Ferdinand est un des plus spirituels et des meilleurs curés du monde ! nièce de Tamizey, Marie-Virginie-Henriette Truaut (née en 1853), fille d'Anne-Éléonore Tamizey de Larroque épouse d'Onézime-Henry Truaut [A.P. Baquier]. 1051Jean- Baptiste Brissaud, né à Puysserampion (Lot-et-Garonne) en 1854. Docteur en droit de la faculté de Bordeaux, en 1879, ayant soutenu une thèse romain et en droit français. Il fut nommé en 1880, professeur de droit français à l'université de Berne et promu à l'ordinariat en 1881. Il passa en janvier 1883 à la faculté de droit de Montpellier où il professait le cours d'histoire [il a publié un Rapport sur les Concours ouverts devant la faculté de Droit de Montpellier (année 1882), Impr. Martel aîné, Montpellier, 1883, in-8° de 20 pp] puis de Toulouse en 1885. Membre de l'Académie de législation, de l'Académie des Sciences de Toulouse et de la Société archéologique du Midi, il a fourni des articles à la Zeitschrift der bernischen juristenvereins qui se publie à Berne, et à la Revue générale du Droit et de la Législation (Thorin, éditeur), notamment une étude sur le Nouveau Code du commerce italien (1884). Il fut l'un des directeurs de la Revue générale de Droit et a signé de nombreux ouvrages : Andrieu (J.),op. cit., t. 1, p. 117-118. -Son fils Jacques a laissé le récit d'une visite à Larroque : « Une visite à Peiresc, sur Philippe Tamizey de Larroque » in Revue de l'Agenais, 87 e année, 1 erBulletin Trimestriel, janvier-mars 1961, p. 25-30. -A.D. profité des reliefs du festin (croûte du canard d'Amiens.) Il est venu nous faire ses adieux, partant pour Paris où il est nommé membre du jury de 1'Agrégation, juste hommage rendu à un de nos plus savants professeurs de droit. Je me réjouis fort du brillant avenir que cela promet à un compatriote que j'apprécie beaucoup, et qui est aussi aimable qu'instruit. On lui reproche d'être un grand bavard, et la vérité est qu'il n'a cessé de nous raser toute la journée. Mais quand le rasoir est très doux, faut-il se plaindre ? Un bavard n'est ennuyeux que s'il n'est pas spirituel. M. Brissaud se fait pardonner son intarissable verve tant elle est charmante, tandis que j'aurais maudit le monologue d'un sot.8 septembre Nous avons possédé, pendant une semaine, un autre bien intrépide, mais bien aimable causeur, M. Louis Audiat 1052 , un de ces absorbants que l'on ne se lasse pas d'écouter. C'était son quatrième séjour ici. Je voudrais le revoir encore pendant vingt années successives, ce qui nous donnerait un âge fort raisonnable. M. Audiat nous a conté mille anecdotes, les unes fort curieuses, les autres fort réjouissantes. Nous l'avons fait souper, le jour de son arrivée, avec notre bon voisin, Maurice Campagne 1053 . Je m'amuse à dire, à cause de l'abondance de mes hôtes, que le pavillon Peiresc que j'appelle souvent un ermitage, mérite maintenant le surnom de caravansérail. Je note que, pendant le séjour de M. Audiat, un des hirondeaux nés il y a quelques semaines, sous ma terrasse, est venu se poser sur le barreau de fer établi devant la fenêtre dans l'embrasure de laquelle je travaille et qu'il a fait là Lot-et-Garonne, 16 J 7, correspondance d'érudits : Fonds Tamizey de Larroque. 1052 Voir notamment 6 août 1893. 1053 Voir 22 mai 1896. 
toutes sortes de gentillesses, montrant son joli costume blanc, jaune et brun, caressant de son petit bec son plumage, agitant ses ailes, semblant, en un mot, me remercier de ma bonne hospitalité. Il a passé en ces exercices dix minutes à peu près, me donnant ainsi une petite fête qui a été, pour le vieux travailleur, ce qu'était, il y a 60 ans bientôt, pour l'enfant que j'étais, le fugitif quart d'heure de la récréation au collège. 12 septembre Je viens de recevoir de mon ami Philippe Lauzun 1054 , un paquet des photographies prises par lui, soit ici, soit à Gontaud, dans la mémorable journée du 25 août. Le tout a été fait avec grand succès. Le groupe des sept convives est aussi bien croqué qu'ils avaient eux-mêmes bien croqué. Bravo aussi pour le groupe formé des cinq femmes ou filles qui ont posé après nous (mon petit cordon-bleu, Seconde Chirol, sa belle-fille Angèle, Marie Vigneau et Alice Borie). Les photographies de la façade de l'église, de l'ensemble des constructions appelées le château, de la tour quadrangulaire avec sa magnifique porte d'entrée, d'une monumentale cheminée de la salle principale, sont remarquables. L'artiste mérite mes plus reconnaissants applaudissements.J'ai oublié de noter (à la date du 5 de ce mois), l'arrivée d'une charmante lettre de M. Édouard Hervé 1055 , de l1835-1899). Sous l'Empire, il fut dans la presse le champion des idées libérales, mais après 1870 il devint un ardent défenseur des idées monarchiques et fonda, en 1873, Le Soleil, qu'il dirigea jusqu'à sa mort. Il se prononça pour la fusion des deux branches des Bourbons et défendit le Septennat et la politique du 16-mai en vue de la restauration d'un régime monarchique. Mais, en 1879, Hervé rompit avec le parti légitimiste. Son journal devint l'organe du parti orléaniste et il fut luimême le conseiller et l'ami dévoué du comte de Paris. De 1881 à 1884, il siégea au Conseil municipal de Paris et fut élu, en 1886, membre de l'Académie française. Il est l'auteur de La presse et la législation deFrançaise, accompagnée de quatre volumes in-8° (dont un relié) que l'éminent écrivain a daigné m'offrir à l'occasion de l'incendie de ma bibliothèque. Cette preuve de sympathie, venue de si haut, m'a profondément touché.25 septembreNous aussi, après les Parisiens, nous avons eu notre cyclone.Toute la région a été ravagée. note en marge : Voir le Nouvelliste de Bordeaux 1056 de ce jour. Les dégâts dans cette ville ont été considérables, ainsi qu'aux environs. Le pavillon Peiresc a particulièrement souffert. La toiture a été enlevée de deux côtés (au Nord et au Midi) et le vent furieux qui s'engouffrait car les deux larges brèches menaçaient de tout emporter. J'ai cru pendant plusieurs heures que le pavillon allait être entièrement décoiffé, comme le fut mon héros Jean Chapelain 1057 quand il perdit sa majestueuse perruque. La terrasse était jonchée de tuiles brisées qui, tombant aussi du côté opposé, ont fracassé les tuiles ordinaires. Mes arbustes n'ont pas été plus épargnés que mes tuiles. L'ouragan a rompu beaucoup de branches de peupliers. J'ai tout aussitôt utilisé ces pauvres branches et j'en ai planté un assez grand nombre autour de la prairie, tirant ainsi le bien du mal, le progrès du désordre. Si je vois jamais cette plantation en pleine prospérité, je me réjouirai en admirant les grands peupliers du 25 septembre de n'avoir 1852 (1866) ; Une page d'histoire de l'Angleterre (1869) ; La Crise irlandaise depuis la fin du XVIII e siècle (1886). 
1056 Le Nouvelliste de Bordeaux : journal politique quotidien, publié entre septembre 1841 et le 20 février 1887, il a repris les abonnements au Journal du Peuple, quotidien, paru à partir du 2 mars 1848 et de L'Électeur, journal de défense sociale, journal politique et quotidien paru du 1 er mars 1873 au 28 janvier 1885. pas courbé la tête devant le fléau et d'avoir opposé l'espérance à la destruction. L'homme ne doit jamais se laisser abattre ni par les petits accidents, ni par les grands événements. La résistance aux cours de vent comme aux coups du sort, est féconde en mâles joies. Vive le lutteur ! 29 septembre Visite de deux actifs travailleurs, M. l'abbé Dubois 1058 (un habitué du pavillon Peiresc) et M. Brassier, qui y venait pour la première fois et qui a été ravi de sa journée. restait débiteur envers moi d'une somme de 55 francs. Je lui en ai fait l'abandon, le pauvre diable n'ayant pas gagné grand'chose en ses trois années de fermage. Il faut toujours laisser les gens qui partent, contents de nous et pouvoir leur adresser le vieux mot : Quittes et bons amis. Jean Dubois a classé et étudié les archives du château de Marcellus (Lot-et-Garonne) du château de Rayne-Vigneau à Bommes (Gironde) et surtout celles des Pontac aux Jaubertes (Gironde). Il a correspondu avec Ph. Tamizey de Larroque et son fils Henri. Il a signé la nécrologie de l'abbé Dubos dans la Revue de l'Agenais, 1928 (mai-juin), 55 e a. n°3. -Le catalogue des imprimés de la Bibliothèque Nationale ne mentionne aucune publication de son fait. -A.D. Lot-et-Garonne, 16 J 12, correspondance d'érudits : Fonds Tamizey de Larroque. eu un bon mouvement, un mouvement bien peirescien. Mon ami Jules Momméja m'ayant écrit qu'on lui offre un B. de Montfaucon 1064 à 150 francs et qu'il regrette fort de ne pouvoir le prendre, je lui ai envoyé un chèque pour en payer le prix. C'est un devoir pour un travailleur dans l'aisance de venir au secours des travailleurs qui sont pauvres. De même que j'ai naguère été heureux de donner un Du Cange à mon confrère Lucien Massip 1065 , j'ai été heureux de donner un Dom Montfaucon : Peyrous (Bernard), « L'oeuvre d'éditeur scientifique de Tamizey de Larroque », Revue française d'histoire du livre, 61 e a. n n°s 74-75, n elle série, 1992, p. 230. Sur Jules Momméja : voir 22 mai 1896. 1065 Voir 1 er juin 1892 : Lucien Massip, pharmacien, a publié La Révolution à Cancon, impr. V ve Lamy, Agen, 1888, in-8°, 78 p. et, à compte d'auteur, à Cancon (à une vingtaine de km au nord de Villeneuve-sur-Lot), Histoire de la ville et des seigneurs de Cancon depuis les temps les plus reculés jusqu'en 1789, in-8°, 259 p. -Correspondance de Tamizey avec L. Massip : A.D. Lot-et-Garonne, 16 J 19, correspondance d'érudits : Fonds à l'excellent archéologue de Monteils-en-Quercy. Il m'a semblé que l'illustre moine, dont le portrait est tout près de moi, m'adresse tout aujourd'hui un sourire approbatif. 16 octobre Visite de mon cousin Joseph de Vivie 1066 avec lequel, laudatores temporis acti 1067 , nous avons beaucoup parlé des choses du passé, notamment des choses de famille. Il était accompagné de ses deux fils, Jacques, dont il veut faire un avocat, et…[en blanc], déjà Saint-Cyrien, dont il veut faire un général. Que Dieu vienne en aide à cet excellent père ! 1 er novembre Les jours deviennent si courts et je suis si occupé que je n'ai pas eu le temps de noter divers petits événements du mois qui vient de finir. Commençons par les petits événements bibliographiques. 
Aux livres si aimablement donnés, en septembre, par un des Quarante, ont succédé trois beaux et précieux ouvrages qui m'ont été envoyés : 1°) par M. Arthur de la Borderie, de l'Institut (tome I er de son Histoire de la Bretagne), 2°) par M. André Steyert (tome I er de sa Nouvelle Histoire de Lyon), 3°) par M. Albert Tamizey de Larroque. 1066 Voir 4 mai 1895, 2 février et 28 mars 1894 : l'étudiant en droit, fils de Joseph de Vivie, que Tamizey prénomme Jacques, pourrait bien être le Roger de Vivie de Régie, signalé par le Catalogue des imprimés de la Bibliothèque Nationale, auteur des Femmes et la société de nos derniers parlementaires toulousains, Impr. de Lagarde et Sebille, 1901, in-16, 144 p. (Discours prononcé le 2 décembre 1900 à la rentrée solennelle de la conférence des avocats stagiaires). 1067 Trad. du latin : « Ceux qui font l'éloge du passé ». Cette expression reprend la fin d'un vers d'Horace (Art poétique, 173), où il fait ressortir ce défaut ordinaire aux vieillards de dénigrer le présent au profit du passé. de Naurois (Voyage en Orient par le prince Oukhtomsky), magnifique in-f° illustré de 178 compositions de Karasine). Il faut y joindre un gros paquet de livres divers -ouvrages nouveaux et vieux bouquins -envoyé par le comte G. Baguenault de Puchesse 1068 . Tout cela renforce singulièrement ma pauvre bibliothèque. -Les autres événements sont deux visites qui couronnent la série des nombreuses invasions relatées en ces derniers mois : visite du comte de Saint-Saud 1069 (du 24 au 27), visite de M. Jules Momméja 1070 (29). Ces deux hommes d'esprit et de savoir m'ont fait passer de bien agréables heures. Nous avons avec l'un comme avec l'autre fait de très beaux projets de publications, les unes agenaises et périgourdines, les autres peiresciennes. Voilà deux collaborateurs dont je serai enchanté, je l'assure d'avance, tant je connais bien leur extrême bonne volonté et sympathie et les ressources de leur érudition. 4 novembre J'ai fait porter du jardin de Gontaud douze petits marronniers et ils ont été plantés le long du chemin qui sépare la prairie du bois de M. de Godailh 1071 . Plus tard, 'agit-il du fils du Godailh (1764-1840), professeur de grammaire à l'École centrale d'Agen, qui fut parmi les fondateurs, le 8 prairial an VI (27 mai 1798), de la Société d'agriculture départementale prolongeant la Société libre des Sciences d'Agen qui se réunissait depuis 1776. Il en fut secrétaire perpétuel jusqu'en 1810. Il démissionna alors « pour raison de santé » de cette «… fonction que son éloignement forcé au temps des sessions législatives l'empêche de remplir utilement ». Il fit don au Musée de la Société en 1839 d'un plat de B. Palissy et d'une collection minéralogique composée de 400 échantillons cf Procès-verbal du 21 août 1839 dans Lauzun (Philippe), La société académique d'Agen 2 février 2 chênes de mon voisin et les dits marronniers mêleront fraternellement leurs branches et formeront une voûte de verdure sous laquelle il fera bon se reposer ou se aux prières du Monastère de Notre-Dame des Gardes, de la congrégation cistercienne de la Trappe, près Chemillé, diocèse d'Angers, signées par Soeur Marie Hyacinthe, Prieure. C'est une récompense accordée à l'auteur de l'Humble requête d'un boeuf à M. 
le président du Conseil des Ministres, insérée, d'abord, dans Le Paysan du Sud-Ouest 1072 du 30 août dernier (avec l'image du quadrupède en guise de signature), et reproduite successivement dans le Journal du Comtat du 13 septembre (avec une nouvelle image du plaignant et un avant-propos où l'on dit que La Fontaine n'aurait pas mieux fait parler la pauvre bête), dans l'Écho des Bouches-du-Rhône du même jour (avec cette formule finale Pour copie conforme. Ph. Tamizey de Larroque, Pavillon Peiresc, 29 août 1896), dans la France libre du 18 octobre (Lyon), etc. Enfin, dans la Jeunesse royaliste du 1 er novembre (Paris), ma protestation a obtenu cette mention honorable (due à mon confrère le comte A. de Bourmont 1073 , Vice-Président du Comité central) : Très jolie la requête présentée par un de nos meilleurs amis au nom du boeuf saisi par l'administration républicaine sur les pauvres religieuses trappistines des Gardes. Sud-Ouest, « Organe de la Démocratie rurale, paraissant le Dimanche », journal politique à 5 centimes, dont le premier numéro est daté du 14 décembre 1890, diffusé en 4 pages in-f°, imprimé à Tonniens par l'imprimerie G. Ferrier, Cornelis de Witt, le petit-fils de Guizot, en est le directeur-gérant : voir 28 juillet 1891. 1073 Correspondance de Tamizey avec Amédée de Bourmont : A.D. Lot-et-Garonne, 16 J 28, correspondance d'érudits : Fonds Tamizey de Larroque. Toute cette nuit a soufflé un vent furieux accompagné d'une pluie torrentielle. La tempête était si forte que le pavillon était ébranlé jusqu'en ses fondements. Pour qu'il résiste à de pareils assauts, il faut qu'il soit indestructible. Mais ce qui est plus étonnant encore, c'est la solidité du vieux chêne bravant toute la violence des vents déchaînés. Tel un vétéran reste debout sur le champ de bataille ! 18 novembre Au sujet de ma petite, toute petite plaquette sur Deux jardiniers émérites, Peiresc et Vespasien Robin 1074 , j'ai reçu un grand nombre de lettres charmantes, surtout de lettres féminines (comtesse de Dienne 1075 , Jeanne Sarramia de Père 1076 , M me de Marcey 1077 , M me Gaston Paris 1078 , etc.). À ces flatteurs temoignages épistolaires, je joins (lettre ci-contre de Nous venons d'avoir une série de bourrasques qui ont duré quatre jours et quatre nuits. Quel temps fut jamais plus fertile en tempêtes ? Que de vent ! Que de pluie ! Hier, dans l'après-midi, nous avons entendu un formidable coup de tonnerre. L'avant-veille, à 4 heures et à 5 heures du matin, de la grêle ! Tous les journaux de la région signalent les ravages de ces anormales débauches d'électricité. Nous nous souviendrons de la gravité des orages de décembre ! 20 décembre Pendant huit jours, nouvelles grandes pluies, nouvelles violentes rafales. C'est aujourd'hui seulement que le temps redevient beau. Nous allons voir enfin arriver le froid. Vive le froid sec succédant au froid humide ! Dans les rares journées de ce mois qui n'ont pas été pluvieuses, divers travaux d'assainissement et d'embellissement ont été faits autour du pavillon Peiresc. En ce qui regarde l'assainissement, mon nouveau métayer, -balai neuf balaie toujours bien -a répandu trois charretées de décombres apportés de Gontaud devant l'étable et, grâce à cet empierrement, nous n'aurons plus sous les yeux et sous le nez du haut du. balcon du cabinet, le fumier croupissant dans les eaux pluvieuses mêlées au purin. 
Précieuse amélioration que la suppression de ce cloaque les Souvenirs d'émigration de la marquise Béatrix-Etiennette Renart de Fuchsamberg d'Amblimont, Évreux, 1869 : A.D. Lot-et-Garonne, 16 J 17, correspondance d'érudits : Fonds Tamizey de Larroque. 1082 Voir 31 octobre 1893 notamment. permanent ! J'ai fait combler le fossé inutile qui séparait en deux portions la prairie et dans la terre remplissant le dit fossé, j'ai fait planter une douzaine de peupliers qui croîtront à vue d'oeil. Une autre plantation de peupliers a été faite dans une rigole creusée entre le petit pont et le petit bois dans lequel nous établirons une cabane qui sera fermée par les branches entrecroisées des jeunes chênes. Des genévriers ont été transportés de la prairie dans la haie de l'allée du tombeau et dans le jardin. Toujours plus d, 1783), les 7 volumes des Lettres et papiers d'État du Cardinal de Richelieu, dont le septième est un peu mon oeuvre, puisque j'en ai surveillé l'impression et que j'ai rédigé la table qui en occupe une bonne partie 1085 . Puisse l'année qui va commencer m'apporter en ce genre beaucoup de consolations et de dédommagements ! 1085 Tamizey a participé activement à la publication des Lettres, instructions diplomatiques et papiers d'État du cardinal de Richelieu, qui furent édités en 8 volumes de 1852 à 1877. Cette édition est célèbre, mais la part prise par Tamizey est méconnue. Elle avait été confiée à un érudit renommé, Denis-Louis-Martial Avenel. Il mena sa tâche à bien jusqu'en 1873, mais à cette date, il tomba malade, comme il l'explique dans Lettres, instructions diplomatiques et papiers d'État du cardinal de Rchelieu, t. VIII, Paris, 1877, p. 20, n ° 1. Il fit appel alors à Tamizey pour achever l'entreprise, c'est-à-dire de rédiger le t. VIII entièrement conscaré à des additions ou corrections des 7 volumes précédents, en somme il fallait relire et vérifier toute l'oeuvre précédente. Tamizey accomplit cet immense travail avec une prodigieuse érudition, mais il refusa de laisser son nom figurer sur la page de titre. Seul celui d'Avenel y demeura, et le travail de Tamizey de Larroque est seulement rappelé par deux petites notes à peine visibles : Peyrous (Bernard), op. cit., p. 221-222. -Couture (L.), op. cit., p. 512 et p. 566. un autre paquet qui m'est envoyé de Dijon par le professeur Roy 1089 . Cela m'oblige à crier (comme un sourd) : Vive l'Université ! 31 janvier Les tempêtes de l'an dernier n'étaient qu'une plaisanterie auprès de celle d'aujourd'hui. Pendant presque toute la journée le vent a soufflé avec une violence effrayante. Une grande partie de la toiture du pavillon a, de nouveau, été emportée. Un orme a été arraché près de la fontaine. Mais ce qui est bien plus regrettable et me fera maudire à jamais le 31 janvier 1897, c'est la chute de mon vieux chêne (aux environs de midi). Moi qui le croyais pour bien longtemps étais si content de voir le géant mutilé élever fièrement dans les airs ce qui restait de ses branches ! Plusieurs fois par jour j'admirais cet arbre dix fois centenaire qui était certainement le plus remarquable de tous les arbres de la région. Je ne saurais t'exprimer le chagrin que j'éprouve de sa disparition. Larroque aura désormais pour moi quelque chose d'incomplet. Un vide pénible subsistera non seulement pour mes yeux mais pour mon coeur, car j'aimais mon vieux chêne comme on aime un ami. Le Journal du Comtat 1090 du 31 janvier m'apprend la mort de française. 
En 1878, il fut chargé d'un cours de langue romane à la faculté des lettres de Montpellier, et, en 1886, il fut nommé membre correspondant de l'Institut. Outre de nombreux articles dans la Revue des langues romanes, on lui doit : Grammaire limousine (1876) ; Biographie des troubadours en langue provençale (1885) ; La langue et la littérature du Limousin (1892) et de nombreuses éditions d'anciens textes provençaux. -Correspondance de Tamizey avec C. Chabaneau : A.D. Lot-et-Garonne, 16 J 8, correspondance d'érudits : Fonds Tamizey de Larroque. 1089 Il s'agit d'Émile Roy (1856-1929) probablement à cause de la publication des Lettres et la société dans la 1 re moitié du XVII e siècle. Leçon d'ouverture du cours de littérature française à la faculté des Lettres de Dijon, 1896, Dijon, impr. de Darantière, in-8°, 29 p., extr. de la Revue bourguignonne de l'enseignement supérieur, 1896. -Correspondance de Tamizey avec É. Roy : A.D. Lot-et-Garonne, 16 J 24, correspondance d'érudits : Fonds Tamizey de Larroque. 1090 Journal du Comtat, politique, littéraire et commercial, publié à Carpentras, entre 1893 et 1901. mon vénérable ami M. le marquis Edmond de Seguins-Vassieux 1091 . Il est décédé subitement, à l'âge de quatre-vingt-huit ans, dans ce château du Rocan 1092 où il m'a donné plusieurs fois une si cordiale hospitalité. Je ne puis m'empêcher de faire un rapprochement entre ce vieillard resté si robuste malgré son grand âge, et mon pauvre chêne. Tous les deux sont tombés en un instant et presque simultanément. M. de Seguins était un homme de bien dans toute la force du terme. J'ai connu peu d'hommes aussi distingués à tous égards et dont la vertu fut plus haute et l'intelligence plus cultivée. Quelles bonnes causeries nous faisions ensemble soit dans son cabinet de travail, où les collections de livres et surtout de manuscrits étaient si précieuses, soit dans nos promenades en la ville et autour de la ville de Carpentras, promenades où nous étions accompagnés par sa petite fille, M lle Marguerite de Saint-Paulet 1093 , une des plus charmantes de toutes les Comtadines ! 5 février De nouveaux dons de livres me sont faits. Le comte de Gaudemaris 1094 , chez lequel mon fils et moi nous avons passé une si bonne journée en juin 1894, pendant notre séjour à Carpentras, m'envoie deux volumes que je possédais autrefois, une étude sur Aeria retrouvée (sa voisine de campagne), et une Monographie de Baumes de Venisse. D'autre part, la Société historique et archéologique du Périgord s'expédie la collection de son Bulletin 1095 (de 1880 à 1890). Bien des vides se comblent ainsi dans ma bibliothèque, mais tout ce qui m'a été donné jusqu'à présent ne représente pas même le centième partie de ce que j'ai perdu.), « Arcisse de Caumont et les sociétés savantes » dans Nora (Pierre) s.d., Les lieux de mémoire. Quarto, Gallimard, t. 1, 1997, p. 1545-1573. Notre vieux chien Black est mort cette nuit. C'est un nouveau sujet de tristesse pour moi, car je m'étais fort attaché à ce pauvre animal qui était avec nous depuis notre installation ici. Et que d'autres tristesses j'éprouve ! Sans parler de ma santé qui laisse tant à désirer, combien le mauvais temps que nous subissons depuis près de trois mois m'est insupportable ! Jamais hiver n'a été aussi pluvieux, aussi boueux. Toute promenade est impossible, même dans mes meilleures allées. 
Au moment où j'écris ceci, la pluie tombe à flots non seulement à l'extérieur, mais aussi à l'intérieur, car les tuiles enlevées par l'ouragan du 31 janvier laissent libre entrée à l'eau et l'escalier est inondé. Ce qui me désole surtout, c'est la vue du chêne renversé. Son gigantesque cadavre (18 mètres de longueur) note en marge : 4 m. de circonférence à la base, 3 m. 50 en moyenne plus haut. entouré d'une épaisse masse de lierre, me fait penser à un guerrier d'autrefois couché dans son manteau sur le champ de bataille. Je ne puis détacher mes yeux de ce spectacle qui est aussi triste que grandiose. 11 février Sixième visite du docteur Boisvert 1096 . Mon excellent médecin me trouve beaucoup mieux. Ce qui me préoccupait le plus, l'état de me vue, s'améliore, de jour en jour. Les longues heures de travail redeviennent possibles. Il me restera une demi-surdité, car le tympan de l'oreille gauche a été perforé par l'abcès dont j'ai tant souffert en janvier, mais je me consolerai de cette infirmité on répétant le mot philosophique de la vieille dame qui disait : j'entendrai moins de sottises. Au moment où j'écris ces lignes, je constate que le relèvement moral 1097C 'est le maire de Castelnau-sur-Gupie. Voir 10 août 1893. 1098 Habasque (Francisque), « Comment Agen mangeait au temps des derniers Valois », Agen, Impr. Vve Lamy, 1887 (extr. de la Revue de l'Agenais). -Tamizey de Larroque lui-même est l'auteur de : Documents inédits relatifs à l'histoire des Terrines de Nérac, publiés par Un Gourmet, Ludovic Durey, Nérac, 1885, petit in-16 de 23 p. (extr. du Journal de Nérac). J. Andrieu dévoile le pseudonyme : voir Andrieu (J.), op. cit., t. 3, p. 325-326. Décidément, j'ai à payer bien cher, depuis quelques mois, le plaisir d'être très haut perché. De grandes pluies compliquent encore la situation. Nous sommes inondés en dedans. Et, à un point de vue général, quelle calamité que ce continuel déluge qui retarde tous les travaux agricoles et qui compromet les récoltes de toute l'année ! 5 mars Le vent ne souffle plus aussi furieusement, mais la pluie tombe toujours à flots. Nous avons eu aussi à plusieurs reprises de la grêle avec accompagnement de violents coups de tonnerre. Les journaux nous apprennent que toute la région du Sud-Ouest a beaucoup souffert des bourrasques des 2cette première semaine du mois où s'ouvre le printemps ! Pourvu qu'avec ces orages qui nous font tant de mal, nous ne voyions pas surgir, du côté de l'Orient, des orages d'un autre genre ! 15 mars Toujours la pluie ! Elle n'a cessé de tomber depuis le commencement de mars et, on peut presque le dire, depuis le commencement de novembre rendant deux nuits, la nuit du 12 et la nuit du 15, nous avons eu des éclairs, de la grêle, des rafales. Mon pauvre livre de raison devient aussi monotone que la pluie de cet abominable hiver. Quand donc reverrai-je un beau ciel bleu, un beau soleil rayonnant m'apportant enfin la joie de vivre ? 4 août 10 novembre 410 photographies de ma personne surabondent. On dirait que ça tombe comme une pluie. M. R. Bazin m'a saisi quand je débitais mon discours du 3 octobre, et il m'a admirablement saisi. Seul, debout auprès du piédestal du monument, entouré d'une foule innombrable, j'apparais comme le héros de la fête. 
Et cela coïncide avec la splendide publication de la plaquette consacrée au Vieux Châtaignier, dont mon ami Charles Boy a fait une merveille et où a été reproduit le groupe petits événements de la première dixaine (sic) de ce mois : Départ de mes chères hirondelles, retardé, cette année, par le splendide temps dont nous jouissons depuis plus de six semaines.Le jour même du départ, elles sont venues frapper de leur bec les vitres de ma fenêtre, comme pour appeler mon attention sur leurs adieux. -Agrandissement et régularisation du trou creusé dans la prairie, devenu ainsi un bassin gracieusement arrondi autour duquel j'ai fait planter dix peupliers. -Sciage du tronc de mon pauvre vieux Chêne, qui nous fournira une large provision de bûches pour plusieurs hivers. -Visite d'un excellent travailleur et d'un homme excellent, M. G. B. Champeval de Vyers, que nous avons gardé pendant quatre jours et qui m'a apporté son ouvrage sur le Bas-Limousin seigneurial et religieux, ou Géographie historique de la Corrèze 1157 . -Enfin, visite d'un savant religieux de Garaison 1158 , le R.P. Rigaudie, qui est venu dîner, souper et -entre les deux repas -travailler avec moi ; présent, mon cher neveu Jean de Boëry. 30 décembre. J'entre aujourd'hui dans ma 70 e année ; j'y entre fort tristement. Je suis fort souffrant depuis plusieurs semaines et mes yeux surtout sont gravement malades. Je me demande avec la plus cruelle anxiété si je pourrai continuer à travailler. Si cette consolation m'était enlevée, ce serait pour moi la 1157 moi la main pleine de vieux papiers domestiques. Il m'a notamment apporté un livre de raison dont la rédaction commence à la fin du XVI e siècle et se prolonge jusqu''illustre comte de Lacépède 1163 et du père de cet Académicien, lettres adressées au grand-père de mon visiteur, et qui m'aideront à prouver que les Laville de Lacépède sont de la même famille que les Laville des Combois. J'ai l'intention de publier le livre de raison de la famille de Laville-Monbazon, et je remplirai ce devoir de bon parent avec un plaisir particulier, car M. Adrien de Laville, père de mon hôte, avait été l'intime ami de mon père, son cousin, lequel a toujours trouvé aux Combois une cordiale hospitalité dont il aimait à nous parler. 1 er mai Ce matin, quand j'ai ouvert mes fenêtres, à 5 heures, les lilas de mon jardin ont semblé vouloir me souhaiter ma fête, tant leurs grappes de fleurs pleinement épanouies m'ont envoyé de suaves parfums. Quel baume pour la poitrine -et pour l'âme aussi -que l'air si frais du matin m'apportant de tels parfums ! Le reste de la journée a répondu à beaucoup de bouquets, dont un, formé de fleurs des champs, ne m'a pas été le moins agréable. J'ai travaillé jusqu'à midi à l'annotation des cent documents gontaudais 1164 que je vais publier dans le recueil académique d'Agen. Après le dîner, je me suis accordé congé jusqu'au soir, et, assis par un très beau soleil, sous le vieux châtaignier, entre Rousseau et Perdreau, étendus sur mon banc 1162 Voir inventaire des publications touchant à des livres de raison dans Livre de raison de la famille de Fontainemarie, publié par Ph. Tamizey de Larroque, Agen, Impr. Vve Lamy, 1889, p. 118-173. -Ce livre de raison ne figure pas dans la liste des ouvrages de Tamizey donnée par le Catalogue général des imprimés de la Bibliothèque Nationale. 1163 Voir 19 août 1893. 
1164 « Une centaine de documents inédits pour servir à l'histoire de la ville de Gontaud (1532-1789) », dans Recueil des travaux de la Société d'Agriculture, Sciences et Arts d'Agen, 2 e série, t. XIII, 2 e partie, Agen, 1898. latin : « Après les ténèbres la lumière ». 1172 , Notice sur le Général Delmas de Grammont, Paris, Soye et Bouchet imprimeurs, 1862. BULLETIN DE NAISSANCE Le maire de la commune de GONTAUD certifie que, Le 30 décembre 1828 est né un enfant de sexe masculin, nommé TAMIZEY DE LARROQUE, prénommé Jacques, Philippe, fils de Alexandre TAMIZEY DE LARROQUE, profession de propriétaire, et de Marie, Elisabeth, Pauline DELMAS DE GRAMMONT, profession de ' UN GENTILHOMME-CAMPAGNARD ENTRE L'HISTOIRE ET LE CREPUSCULE JOURNAL DE PHILIPPE TAMIZEY DE LARROQUE (1889-1898) Mademoiselle Bourrachot, de Maître Fortin et tout particulièrement de Monsieur Serin qui m'a fait partager sa connaissance approfondie des lieux, des gens et des choses touchant à Tamizey, ainsi que de Messieurs Louvel avec lesquels nous avons planté un arbre près de la tombe de l'auteur du Livre de raison. Il me faut encore exprimer toute ma reconnaissance envers ceux qui, de bon coeur, ont éclairci cette fin du XIX e siècle, parfois un peu ténébreuse pour moi, plus familière des siècles précédents : le Professeur Marc Agostino, Monsieur Jean-Claude Drouin, Monsieur Nicolas Champ et Monsieur Fabien Oppermann. travail qu'elle a grandement aidé en m'ouvrant ses notamment, au très utile instrument de recherche sur le fonds Tamizey de Larroque accessible en ligne, constitué par Mesdames Juliette Billard et Isabelle Brunet, sous la direction de Madame Salmon-Dalas. Je ne saurais oublier l'accueil si sympathique et si touchant reçu à Gontaud et à Larroque auprès de EDITE ET ANNOTE PAR VERONIQUE LARCADE à Oma REMERCIEMENTS De nombreuses personnes ont généreusement et chaleureusement contribué à l'achèvement de cette édition du Livre de raison de Tamizey de Larroque. Le meilleur leur revient et les défauts de cette publication n'appartiennent qu'à moi. Je dois d'abord une infinie gratitude à Madame Baquier dont l'enthousiasme et l'affectueuse sollicitude ont ensoleillé ce collections de rares tirés à part de Ph. Tamizey de Larroque et de faire-part ainsi qu'en constituant, pour moi, en nombre, de très précieux dossiers généalogiques sur les parents et les relations de « l'ermite de Larroque », comme il disait plaisamment de lui-même. Mes remerciements vont également à la compétence et à l'amabilité de Madame Salmon-Dalas, la directrice et au personnel des Archives départementales de Lot-et-Garonne, à Agen, qui ont mis à ma disposition le texte du Livre de raison et amplement fourni à mes questions grâce, Cet ouvrage n'existerait pas sans la confiance, la patience et les encouragements du Professeur Michel Figeac qui a accepté d'accueillir, dès le début du projet, cette édition du Livre de raison aux Presses Universitaires de Bordeaux. Elle doit beaucoup au généreux soutien de la Société des Bibliophiles de Guyenne et à son président Monsieur Jean Barbet, qu'ils trouvent ici mes plus vifs remerciements. Mais cet ouvrage n'existerait sans doute pas, non plus, sans ce que le Professeur Yves-Marie Bercé, avait su me dire, il y a plusieurs années, comme j'entamais des travaux tout autres sous sa direction. Il m'avait parlé alors, non seulement de l'historien, mais aussi de la personne de Philippe Tamizey de Larroque, suggérant combien il serait piquant et sans doute passionnant d'en savoir un peu plus sur son compte. 
Une fois de plus, il a été de très bon conseil. Livre-journal…1603-1652, éd. Tamizey de Larroque (Ph.), Huet (Paul) et Saint-Saud (comte de), extr. du Bulletin de la Société historique et archéologique du Périgord, Paris, 1893. -Livre de raison de la famille Dudrot de Capdebosc (1522-1675), éd. Tamizey de Larroque (Ph.), Paris, Picard, 1891. -Livre de raison de la famille de Fontainemarie, éd. Tamizey de Larroque (Ph.), Agen, V ve Lamy, 1889. -Livre de raison de la famille de Chevalier d'Escage en Agenais modestes de l'ordinaire de la vie, rédigés au fil des jours ou presque, n'étaient pas destinés à être publiés, pas plus qu'ils n'étaient l'expression d'une création littéraire ; ils s'en tenaient, fondamentalement, à la simplicité de la chronique restreinte du cercle étroit de la parenté et du voisinage comme aux bornes du domaine patrimonial de leurs rédacteurs 12 . Conformément à cette tradition, Tamizey de Larroque consigne ainsi, le 16 août 1889, par exemple, les revenus de sa propriété de Larroque, à la suite de la visite des terres et dépendances qu'il avait effectuée le matin même avec son fermier Pierre Ducasse ; mais aussi les modalités de Bentzmann » faisant de lui le doyen de la famille. Mais il Tamizey de Larroque parle aussi des soins du jardin et notamment de plantations, faites en abondance le 14 mars 1890, par exemple et le 3 mars 1891 ou encore le 28 février 1894. Il consigne enfin des ce qu'un livre de raison ? » dans Revue de l'Association pour l'autobiographie, février 2005. -Site : www.ecritsduforprive.fr famille Boisvert, 1650-1816 et de N. de Lidon, sieur de Savignac, 1650-1664], suivis d'extraits d'autres registres domestiques [de dame Boucharel, 1682-1687, de Bertrand Noguères, 1649-1682] et d'une liste récapitulative des livres de raison publiés ou inédits, éd. Tamizey de Larroque (Ph.), Auch, L. Cocharaux, 1893. -Bessot (Pierre de), (1746-1792), éd. Tamizey de Larroque (Ph.), extr. de l'Annuaire du Conseil Héraldique de France 1895, Destenay-Bussière f res , Saint-Amand du Cher, 1895. Il convient d'ajouter à cet inventaire la Notice inédite sur le livre de raison du Muet de Laincel, d'après les manuscrits de Peiresc, publiée par Ph. Tamizey de Larroque, Digne, impr. de Chaspoul et V ve Barbaroux, 1895. registres négociation d'un nouveau contrat de fermage et le prix de vente des produits de la terre (9 août 1893). Le projet de plantation d'une vigne est signalé du même coup. Il est aussi question de l'engagement de domestiques, ainsi le recrutement d'une nouvelle cuisinière, le 20 juin 1890. De même, il signale à la date du 7 janvier 1891, une impressionnante tempête qui dura 48 heures, le 14 et le 20 mai suivants, il s'agit de terribles orages et d'intempéries exceptionnelles, mais entre le 10 et le 30 août 1893, c'est la canicule avec une température de 30 à 35° tous les après-midi et même 36° un jour. Il note aussi les visites reçues et, avec beaucoup de minutie, les décès dans son entourage et sa parentèle plus ou moins éloignée ; ainsi le 19 avril 1893, celui de sa cousine Amélie de Grammont « douairière de [de la famille] de 12 Cassan (Michel) et Landou (Noël) éd., Écrits de Jean-Baptiste-Alexis Chorllon, Paris, Champion, 2002. -Cassan (Michel), « Qu'est- son propos. Sous le prétexte d'un scrupule d'érudition, il a la coquetterie de se montrer, pétillant d'esprit et de galanterie, à Aix-en-Provence, gagnant une nouvelle contribution pour un monument à la mémoire du grand Nicolas- Claude Fabri de Peiresc (1580-1637) 16 . 
Pourtant si Tamizey se raconte, il ne se révèle pas entièrement 17 . Le Livre de raison d'un campagnard cache, en fait, un autre journal, mis à l'écart mais jamais détruit pourtant ; puisqu'il restait, après la disparition de Philippe Tamizey de Larroque, en la possession de son fils. Intitulé marquer son opposition aux événements comme l'avait fait son propre père en 1789 31 .A la fin de sa vie, Philippe Tamizey de Larroque tournait en dérision et reniait même ouvertement cet épisode de sa jeunesse, traitant d'« élucubrations » ses écrits « politiques » et de « charlatans », les grandes figures de l'utopie et de l'anarchie auxquelles il s'était adressé. Pourtant il n'avait pas détruit ces lettres et les gardait dans un cahier sous le titre de Varia 32 . Jusqu'à quel point faut-il vraiment accorder crédit à ce récit à demi-mot d'une jeunesse « folle » ? Il correspond trop bien aux poncifs pour ne pas avoir une part d'invention ou d'exagération coquette, pour le moins. Sans qu'on puisse exactement en déterminer leur date d'origine, Tamizey de Larroque avait bien des contacts dans les milieux littéraires parisiens, avec Louis Veuillot 33 et Hippolyte Taine 34 , au moins, dont parle le Livre de raison. Ce sont plutôt des Constitutionnel, entremettant le fameux Sainte-Beuve en personne, pour le gagner. Granier aurait même tenté encore, en 1866, de le recruter pour son nouveau journal Le Pays. Toutes ces offres furent, en tout cas, repoussées successivement par Tamizey 35 . Ses refus répétés étaient-ils motivés par le dépit de ne pas avoir été reconnu à ses débuts ou s'agissait-il, pour lui, de récuser définitivement une « vie d'artiste » indécente 36 ? 29 id., p. 499. 30 Couture (L.), op. cit., p. 499. révolutionnaires conservateurs bon teint que de hardis pionniers de l'avantgarde. Tamizey de Larroque aurait même été invité à participer comme critique hebdomadaire à L'Univers, le journal de Louis Veuillot, tandis que Granier de Cassagnac, le pétulant journaliste originaire des environs de Mirande (Gers) et activiste bonapartiste, lui offrait un rôle analogue dans le 31 id., p. 501. 32 id., p. 499. 33 Livre de raison, 17 février 1892. 34 Livre de raison, 29 mars 1893. 35 Couture (L.), op. cit., p. 554. 36 Brelot (Claude-Isabelle), La noblesse réinventée, nobles de Franche- : Les rapports entre Paris et la province : la centralisation culturelle estelle aussi avancée que la centralisation politique ? Le sens de l'évolution n'est pas niable : la culture bourgeoise se veut instrument d'unification du pays. L'attitude à l'égard des langues régionales en est la preuve : Pourtant le monarchisme de Philippe Tamizey de Larroque n'était pas exactement crispé dans le culte de l'Ancien Régime. Certes, les murs de sa chambre au Pavillon Peiresc étaient « recouverts de tapisserie, dessin noir sur blanc, représentant l'effigie des divers rois et reines de France et autres… » 53 . Mais il adoptait la forme la plus libérale du royalisme. Il était résolument orléaniste. Il le revendique dans le Livre de raison quand il se décrit lisant Le Soleil, Maurras, pourraient le faire croire. 
Mais il existe aussi un « félibre rouge », en Provence même, avec Pierre Devoluy et surtout en Languedoc l'organe du parti (18 novembre 1895) ou en évoquant ses liens suivis et sa correspondance, par exemple, avec Cornelis de Witt, directeur du « Paysan du Sud-Ouest » et surtout petit- fils de Guizot, le grand ministre de la Monarchie de Juillet (23 juillet 1891) ou plus clairement encore quand il parle du duc Philippe d'Orléans (1869-1926), « le prisonnier de Clairvaux », arrière-petit-fils du roi Louis-Philippe (7 juillet 1890). félibrige provençal avec Roumanille ou Aubanel, ainsi que les thèses décentralisatrices de aucun effort pour les sauvegarder, même dans les universités ; De plus, l'attachement de Tamizey de Larroque à la forme l'enseignement français dans sa totalité nie la possibilité d'une diversité culturelle régionale. En 1913, le président de la République peut bien visiter Mistral, ce n'est que la reconnaissance d'une valeur devenue internationale. Pourtant cette renaissance littéraire de tous les pays de langue d'oc pose un problème ; le félibrige n'est pas uniquement provençal (de l'autre côté du Rhône, Toulouse, Nîmes ou Montpellier sont des centres de littérature occitane). Le mouvement s'appuie sur un public populaire, mais il est animé par des érudits, notables provinciaux ; une partie de la bourgeoisie locale les soutient. Phénomène politique ? Une manière pour le royalisme de s'opposer au pouvoir ? Les tendances dominantes du . . Les procédures les plus achevées en matière d'analyse et d'étude des faits et des documents historiques, la « méthode critique » en un mot, quittait ainsi le monde clos de France » et ce dix ans avant la Revue Historique de Gabriel Monod et de Gustave Fagniez 72 . Les premières publications de Tamizey, en tout cas, avaient clairement mis en évidence, dans cette logique, la ligne qu'il se fixait : « Mémoire sur le Sac de Béziers dans la guerre des un dignitaire franc-maçon. De même Tamizey recopie complaisamment l'hommage que Gabriel Monod, descendant d'une lignée de pasteurs genevois et surtout chef de file parmi les l'École des Chartes pour atteindre le grand public 71 . historiens universitaires et les intellectuels républicains 70 , Effectivement, la Revue des Questions Historiques fut la rend à ses travaux, ajoutant lui-même à propos de ce dernier première revue « d'histoire générale à prétentions « un des travailleurs que je prise et que j'aime le plus » (1 er juin 1896). Il tournait ainsi le dos à l'antagonisme habituellement monté en épingle entre la Revue historique dont Gabriel Monod est l'un des fondateurs et la Revue des Questions historiques à laquelle collaborait assidûment Philippe Tamizey de Larroque, pratiquement depuis sa fondation. En fait, par-delà des milieux d'origine a priori inconciliables et des prises de position politiques diamétralement opposées, les tenants des deux écoles historiques dont ces revues étaient les porte-parole, scientifiques publiée en tôt : « Preuves que Thomas A. Kempis n'a pas composé Nous nous engageons dans l'étude des questions historiques sans parti pris, avec le seul désir de chercher la vérité et de la dire… C'est aux faits que nous nous attaquons, c'est à l'aide des sources originales soigneusement recherchées, au moyen de textes scrupuleusement étudiés, des témoignages sévèrement contrôlés, que nous tâcherons de rétablir la vérité historique, et que nous nous efforcerons de donner sur chaque question le dernier mot de la science. partageaient une même exigence intellectuelle. 
Une semblable passion de la vérité les réunissait et pouvait bel et bien créer, contre toute apparence, une profonde connivence entre Gabriel Monod et Philippe Tamizey de Larroque. Ainsi s'exprimait Gaston de Beaucourt dans le numéro de lancement de la Revue des Questions Historiques, le 1 er juillet 70 Julliard (Jacques) et Winock (Michel), Dictionnaire des intellectuels français, Seuil, 1996, p. 799-800. 1866Albigeois et sur le mot « Tuez-les tous ! » attribué au légat du pape Innocent III » 73 . Il s'agissait pour lui de rectifier cette expression employée mal à propos par Guizot lors de la Réception de Lacordaire à l'Académie française. Ce qui faisait suite à une tout aussi éloquente étude, publiée cinq ans plus l'imitation de Jésus-Christ » 74 . Mais jusqu'où exactement Philippe Tamizey de Larroque pouvait-il manifester son indépendance d'esprit ? Le Livre de raison est exactement contemporain des débuts et du plein 71 . Ce serait assez, certes, pour expliquer que se réduisent ses lectures et que se restreigne le champ de ses curiosités et de ses intérêts. Mais ne peut-il effectivement rien voir, ou ne veut-, ne dit rien de son opinion et de sa prise de particulièrement celle qu'il consacre à son oncle et parrain le général Delmas de Grammont auquel il voue une admiration et à l'évidence une affection profondes 90 . Il note bien parmi le titre de « gendarme » que ses ancêtres mettent en avant comme un titre de gloire 91 comme il rapporte les grades et états de service de ses beaux-frères, Eugène et surtout Henri 92 . Ce qui ne va pas, certainement, sans envie, de la part de Tamizey, pour une carrière d'officier que son statut d'aîné et d'héritier, sinon sa myopie lui avait interdit. En fait, Philippe Tamizey de Larroque restait marqué inguérissablement par la Défaite de 1870. Elle l'a atteint dans ses proches. Son il plutôt rien voir ? Ainsi fin 1897, entre le 13 novembre -date à laquelle Le Temps publia la lettre de Scheurer-Kestner affirmant l'innocence de Dreyfus et révélant que le vrai coupable était connu, sans préciser son nom -et le 25 novembre -quand Zola fit paraître dans Le Figaro, le premier de ses articles dreyfusards conclu par ses mots : « La vérité est en marche, et rien ne l'arrêtera… » 76 -Philippe Tamizey de Larroque parle de grandes tempêtes secouant les choses et les gens alentour, au propre certainement, mais aussi possiblement au figuré selon un procédé poétique qu'il n'ignore sûrement pas, mais par la fin de la tradition du joyeux et gastronomique dîner annuel au château de Lafon, sous la qui, clairementnotamment présidence de M. d'Amblard, oncle de Philippe Lauzun, jeune cousin et beau-frère, le lieutenant Robert Delmas de Grammont, a trouvé la mort, le 1 er septembre 1870, devant Sedan, dans les rangs des Chasseurs d'Afrique de la division Margueritte qui se sacrifièrent pour retarder la marche des Prussiens, sur le plateau d'Illy 93 . Cette défaite a été, avant le drame de l'été 1895, l'occasion d'une fracture dans la vie de Tamizey. Il y a eu un avant et un après Sedan qui se marqua l'historien de la Société Académique d'Agen 94 , comme le rapporte Tamizey dans sa nécrologie d'Adolphe Magen : (1867) ; L'amiral Jaubert de Barrault et les pirates de La Rochelle (1894) et la réed. en 2 vol. in-12 (1883) des Mémoires de J. Chastenet de Puységur : voir Bibliographie provisoire de Tamizey de Larroque en annexe. 90 Philippe Tamizey de Larroque, impr. Chaspoul et V ve Barbaroux, Digne, 1898, p. 4. -Voir Livre de raison, 23 août 1889. 
[His edition of the Peiresc correspondence] was for him the occasion of new journeys, now chiefly towards Provence, into Peiresc's own country, with prolonged stays at the Bibliothèque Inguimbertine in Carpentras, at the Bibliothèque Méjanes in Aix-en-Provence, in Avignon and also in Montpellier (126). The correspondence ranged over « […] archaeological research, natural history, geographical and philological work, astronomy and bibliography. One follows with ever-growing sympathy this guide of universal knowledge, who gives us an admirable acquaintance with twenty years of the first half of the seventeenth century… » (125).

To this, published continually in parallel with these colossal undertakings, were added a multitude of abundant book reviews. They were analytical rather than critical, for they brought a surplus of information, faithful to the style that the great learned publications of the seventeenth and eighteenth centuries — the Journal des Savants and the Dictionnaire de Trévoux essentially — had imposed on the genre. Of the same spirit were the innumerable « Notes et fragments » welcomed by a column in the Revue de Gascogne and by L'intermédiaire des chercheurs et curieux. Tamizey supplied them so abundantly that he took pseudonyms to get them into print: « Yezimat », « Euquoral », « un vieux chercheur », « Ph. ». This earned him a quip from the great Natalis de Wailly (127): « You will leave us a little unpublished material all the same, won't you? » (128)

But in the very abundance of Philippe Tamizey de Larroque's publications there is an excess that breathes uneasiness more than genuine fulfilment. Faced with Tamizey's appetite for publication and his generous girth, how can one not think of the copious bibliography and the no less portly silhouette of his elder, Alexandre Dumas (1802-1870), whose literary and material bulimia were manifestly in inverse proportion to the inner void he felt (129)? Tamizey cultivated his difference politically, as we have seen, but also in his daily behaviour. He did not drink, in a country of great wine-growing tradition where the pleasures of the table — which he nevertheless appreciated — necessarily go with wine (130). Nor was he a hunter, although that activity was precisely part of the traditional way of life of the country gentleman (131). Yet he suffered from the solitude and isolation in which he lived. In the Livre de raison, the care taken to enumerate his guests and to give the details of his hospitality partakes of the same accumulative excess that characterizes his publications (132).

The love and solicitude Tamizey showed towards animals has a fairly obvious compensatory side. How can one fail to be struck by the contrast between the laconic way he mentions his daughter (14 September 1892) and the long developments he devotes to the disappearance of the cat Mistigris (28 June-1 July 1892) (133)? This love of animals was also a fidelity to his uncle and godfather, General de Grammont, promoter of the first French law on the protection of animals […].

It is above all necessary to consider how manifestly Tamizey identified himself with Peiresc (1580-1637), who was not only a man of the study but also an animator at the centre of a network of correspondents: Peiresc had friends everywhere, and there was practically nothing of the things of earth and heaven of which he was ignorant. One of the most astonishing minds of the seventeenth century, he had spent his life taking an interest in every science and frequenting every esteemed scholar in Europe. He was connected with everyone who counted intellectually in the society of his time, in France and abroad: the Paduan Jean-Vincent Pinelli, the Fleming Rubens, the Dutch grand pensionary Heinsius, the Frenchmen Malherbe, Guillaume du Vair, Jacques-Auguste de Thou, Louis and Scévole de Sainte-Marthe, Pierre Pithou, Guez de Balzac, Chapelain, Ménage, Huet, Gassendi, Père Mersenne, Jérôme Bignon, Du Cange, Sirmond, among others. He associated himself with their ideas and their experiments. He was one of the first to use the astronomical telescopes recently invented by the Dutch and perfected by Galileo; thanks to them he discovered, on 26 November 1610, the great Orion nebula. In 1611 he observed Mercury and Venus in full daylight, thereby laying the foundations of meridian astronomy. From 1633-1635 his work on longitudes enabled him to correct the maps of the eastern Mediterranean. In 1636 he drew up the first lunar map and had it engraved — a lunar crater still bears his name — and in 1634 he discovered, with Gassendi, the chyliferous vessels in man. His death was a public mourning: he was celebrated, it is said, in forty languages, and the principal dissertations on him were collected in an extremely curious volume published in Rome in 1638 (138). What is more, Peiresc had made a habit of keeping his journal (139)…

In the pages of his Livre de raison Tamizey displays a thirst for recognition (140) which makes him exult during his triumphal stay in Provence in May-June 1894, and feel very bitterly his missed promotion in the Légion d'honneur (30 June 1897). It is certainly the measure of a deep emotional insecurity which a series of personal dramas could only increase. First of all came the loss of his first wife, his very « dear Nathalie », who died giving birth to a child that did not survive and was buried in the young woman's coffin (141). Then there was the remarriage his family imposed on him, for reasons of interest (142). It was not happy: children dead in infancy (143), but above all an evident discord. Tamizey speaks only a very little, and then with the tip of his pen, of Olivia Delmas de Grammont, his second wife (144). He admits more than once in the Livre de raison to being sensitive to feminine beauty and to enjoying the company of young women (21 April 1892, 8 April 1893): was he for all that unfaithful, a skirt-chaser, justifying the sour recriminations of a wronged wife? It is certain that there was between the spouses a great difference of fortune, to the advantage of Madame Tamizey de Larroque (12 November 1890), which could embitter their relations. It may have driven Tamizey to withdraw still further into his works of erudition, because they were a means for him to earn money that belonged to him alone — or quite simply to have peace (145). In any case the couple were separated de facto, with Tamizey's permanent installation at the Pavillon Peiresc, at Larroque, in 1890, Madame remaining at Gontaud in the old family house of the Tamizeys and apparently staying frequently in Algeria with her brother Henri (19 September 1892). It was indeed towards her husband's books that Madame Tamizey de Larroque had by then come to nourish a devouring jealousy, and public rumour strongly suspected her of being behind the tragedy of July 1895.

Tamizey found little comfort in his children. His daughter Olivia, whom he describes as being « of fragile health » — to conceal, it seems, a mental handicap — lived with her mother. Henri, his only surviving son, was certainly closer to him. But he was a hunter — not a very skilful one, indeed, and known for coming home eternally empty-handed (146) — who preferred the company of his young friends (30 March 1892) and left his dog Black in his father's keeping at Larroque (14 August 1891, 28 March 1894, 7 February 1897). Henri was what was then called an « original » (147), no doubt somewhat limited in his activities and his intelligence. He was for all that terribly likeable and endearing, a bon vivant, amusing himself like a big child at the Exposition Universelle of 1889 (148) and flirting with the young lady of the post office, to whom his father often had to send him (149). But he was hardly capable of […].

[…] the epitaph he had never, at bottom, ceased to want: « Ci-gît un travailleur » — Here lies a worker. It is today the sole mark of his presence on the hillside of Larroque; there he found, well and truly, his ultimate truth. Of the Pavillon Peiresc there remain only stretches of wall and rubble; what was left of the furniture, the library and the papers has been dispersed and partly lost (152). Henri had no children from his marriage with Isaure Rumeau, the pretty young lady of the post office (153); his sister never married, and the line of the Tamizey de Larroque thus died out at the beginning of the twentieth century. In one sense he failed at everything and lost everything, failing even to perpetuate durably his own people, that race of the country gentlemen of old France (154) which he wished precisely to continue by keeping a livre de raison. Philippe Tamizey de Larroque, whose publications at the beginning of the twenty-first century are still works of reference, no longer exists except on library shelves and in footnotes. Unless one considers, on the contrary and for that very reason, that he was stronger than all the misfortunes, all the affronts, all the disappointments of his life, and that it was given to him, by the very force and excellence of his work — while his verses and his lyrical impulses remained stifled, and he never quite knew or was able to say things and speak as he wished — to know, at bottom, the most magnificent of successes, by exactly equalling « the poets [who] alone found what endures ».

Notes:
124. See the remarkable analysis of these publications in Peyrous (Bernard), op. cit.
125. Tamizey de Larroque (Ph.), Correspondance de Peiresc, t. I, p. IV-V.
126. Berluc-Perussis (Léon de), […]
127. Joseph-Noël, known as Natalis de Wailly (born at Mézières in 1805, died in Paris in 1886). A lawyer, he obtained a post in the Archives royales; member of the Académie des inscriptions et belles-lettres in 1840. He established himself as one of the best palaeographers, philologists and historians of his generation, becoming in 1854 keeper of manuscripts at the Bibliothèque Nationale. He published: Éléments de paléographie (1838); Notice sur Guillaume Guiard (1846); volumes XXI and XXII of the Recueil des historiens des Gaules, with Guigniaut and Delisle (1855-1865); an Histoire de Saint Louis by Joinville, with the text brought close to modern language (1865); Mémoire sur la langue de Joinville (1868); Mémoire sur Joinville et les enseignements de Saint Louis à son fils (1875); Mémoire sur le Romant ou Chronique en langue vulgaire dont Joinville a reproduit plusieurs passages (1875); Recueil d'un ménestrel de Reims au XIIIe siècle (1875), among others.
129. Schopp (Claude), Alexandre Dumas, Mazarine, 1985, p. 515.
130. As Tamizey's son underlines: Couture (L.), op. cit., p. 517.
131. He shared the convictions of his uncle and godfather General de Grammont on the protection of animals: see notably 10 May 1898.
132. See Livre de raison, 15 and 25 August 1896.
133. On the animals around Tamizey, testimony is given by the account of Jacques Brissaud, written at Fauillet on 25 January 1961 and entitled « Une visite à Peiresc », in Maisani (Claude), « Sur Philippe Tamizey de Larroque », Revue de l'Agenais, 1961, p. 27. He was one of the sons of Professor Brissaud, whose visit to the Pavillon Peiresc Ph. Tamizey de Larroque records in the Livre de raison under 19 August 1897. Jacques Brissaud reports his memories as a small boy of, he says, 8 or 9 years. He speaks in particular of a little Irish goat named Zoé which grazed in the park; charging at her own reflection, Ph. Tamizey de Larroque related, she had broken a pier-glass inside the Pavillon representing a hunting scene. — Livre de raison de la famille de Fontainemarie, impr. Vve Lamy, Agen, 1889, p. 152.
138. Peyrous (B.), op. cit., p. 225-226.
139. Tamizey de Larroque (Ph.), […]
150. One may wonder that Tamizey does not speak of Raoul-Joseph Delmas de Grammont, born at Miramont on 15 March 1832, son of General de Grammont and his first cousin. Former equerry of Napoléon III, former departmental councillor of Lot-et-Garonne, chevalier of the Légion d'honneur, he had published under the initials R. de G. a brochure entitled Quelques mots sur la situation présente, imp. Renou et Maulde, Paris, 1860, in-8° of 23 p., 2nd ed. (Andrieu (J.), Bibliographie générale de l'Agenais, 1886, t. 1, p. 336). Had Tamizey fallen out with part of the Delmas de Grammont family?
152. See the account, as picturesque as it is edifying, of the rescue which the dilapidation of the Pavillon required, and of which Mlle Bourrachot, René Galibert and Claude Maisani were the heroes: Bourrachot (Lucile), « La fin du pavillon Peiresc à Gontaud », in Revue de l'Agenais, 1999, p. 15-18.
153. The latter was 39 when she married Henri Tamizey de Larroque on 1 July 1909, which left her little hope of motherhood. The prospect of such a misalliance had, moreover, aroused the opposition of Tamizey's widow, Olivia Delmas de Grammont; the marriage became possible only after a judgment of the tribunal of Marmande: Serin (P.), op. cit., p. 33.
— "…Es nehmet aber / Und gibt Gedächtnis die See, / Und die Lieb auch heftet fleißig die Augen, / Was bleibet aber, stiften die Dichter" [… But if the sea / Takes away and gives memory, and love too / Tirelessly holds the gaze, / What endures, the poets alone found]: Hölderlin (Friedrich), last lines of « Andenken », in Souvenir de Bordeaux (1802), original text and trans. Kenneth White and […].
[…] given to the printer of the Mémoires de Jean d'Antras (178) — a vengeful pleasantry which consoled my dear collaborator, the abbé de Carsalade (179), and me for the rigours of our common executioner. The various trades assembled at Larroque assure me that the pavilion will be entirely ready in two months. May the good Lord hear them. This same day I engaged, to serve me at Larroque from 15 April 1890, Virginie Dupuy, widow Verdelet, aged 47, a woman of whom everyone has spoken well to me, particularly the […]

[…] the pavilion is entirely built by Christmas. I received this week (on Sunday the 10th) the Petits mémoires inédits de Peiresc (188), come from Antwerp, and the Livre de raison de la famille de Fontainemarie (189), come from Agen. These offprints from the Bulletin Rubens and the Revue de l'Agenais bring the total of my publications, great and small, to 140. That is twenty fewer than my friend Léonce Couture counts in this gracious sentence in this month's number of the Revue de Gascogne (p. 331): « I believe he has already given us more than one hundred and sixty publications, and everyone knows that not one has failed to add something to Science. May he double that number again!... » That is asking far too much. Time and strength will very probably fail me to reach even the number of 200 (190).

Notes:
178. On the printer Jean Chollet: Féret, Statistique générale de la Gironde, t. III, Biographie, p. 143, and Lurton (Michel), « La Revue des Bibliophiles publiée à Sauveterre-de-Guyenne (déc. 1878-déc. 1882) », Revue française d'histoire du livre, n° 74-75, 1992, p. 237. — Tamizey's correspondence with J. Chollet: A.D. Lot-et-Garonne, 16 J 9, correspondance d'érudits : Fonds Tamizey de Larroque.
179. Jules de Carsalade du Pont, born at Simorre (Gers) in 1847, died at Perpignan in 1932. From an old Gascon family, he studied at the Petit and the Grand Séminaire of Auch […].
186. La Fontaine, « Les deux pigeons », Fables, book IX.
187. That is, 11 November: Audisio (Gabriel), Les Français d'hier, t. 2 : Des croyants XVe-XIXe siècle, A. Colin, Paris, 1996, p. 454-457.
188. Peiresc (Nicolas-Claude Fabri de), Petits mémoires inédits… [list of Peiresc's correspondents from 1622 to 1632], Anvers, 1889 (extr. from the Bulletin Rubens, IV).
189. See 14 July 1889.
190. Tamizey de Larroque made an attempt at a survey of his historical production in 1881: Le Père Cortade. Notes et extraits, suivis d'une bibliographie Tamizeynne, Sauveterre-de-Guyenne, 1881 (extr. from the Revue des Bibliophiles de Guyenne); he then listed only 71 of his principal publications. In 1887 Jules Andrieu took the problem up again in his Bibliographie générale de l'Agenais (Paris-Agen, t. II, 1887, p. 315-330; t. III (supplément), 1891, p. 163-167). Finally Jules Momméja published in 1901: Philippe Tamizey de Larroque, correspondant de l'Institut. Essai bio-bibliographique, Saint-Denis, 1901 (extr. from the Correspondance historique, 1898-1901) — by far the most exhaustive catalogue. It must be completed with the general catalogue of printed books of the Bibliothèque Nationale: Tamizey deposited his offprints there, and his bibliography occupies 32 columns and 220 numbers in it (see Annex). Even so it would not quite exhaust the matter: « But how many articles did he not publish in the most diverse periodicals, often little articles signed with a host of pseudonyms… »: Lacoste (René), « Une vie d'érudit de province : Philippe Tamizey de Larroque », Revue de l'Agenais, 1999, p. 11.
[…] Collections, some twenty booklets, which will contain all the material I still have left to use. At a pinch, I could apply to work of this kind that fell behind the two years that would still separate me from 31 December 1900, which would allow me to write my closing word just at the end of the nineteenth century (194).

[…] funerary monument, where peaches and grapes will strike an agreeable note in summer and autumn. P.S. I was forgetting to record the price of my plantings: the elms cost 75 centimes each, the grafted peach trees 75 centimes, the ungrafted peach trees destined to become nectarine trees and the apple and pear trees 60 centimes, the grafted and rooted chasselas vines 25 centimes, and finally the ordinary American stocks 10 centimes each.

21 March. — Spring is beginning well. A fine sun shines in a blue sky. With what pleasure I looked at last week's plantings and this week's! It seems to me that in this magnificent weather everything will develop very quickly. I have had the idea of putting beside each elm a vine stock, which will grow with it and thus have a natural and thoroughly solid stake. Vine shoots and grapes will look well among the branches of the trees. It will be altogether pretty and productive, for what is more fruitful than the marriage — sung by the poets — of the vine and the elm? I have made in the garden, along the hedge that separates it from the path to the fountain, an alley of rose bushes. There are more than fifty of them there, of every variety, given to me, some by my old schoolfellow and deputy N. Laguionie (197), whose collection is so rich, the others by my neighbours Batanisse, Bégoule and Berger — all marked with a B, as I said in my joyful gratitude, B standing in such a case for Bonté (by […]).

3 July. — I had yesterday the visit of my old friend and schoolfellow Alexandre de La Barrière (205), who had never seen Larroque and who was delighted with the pavilion, and above all with the panorama, though it was less fine than usual, the horizon being veiled by a half-fog. As I am to note in my journal the accidents of the weather, I shall say in this connection that since 1 July we have been enjoying here an autumnal temperature, and that at certain hours the wind and rain have even brought it down to winter cold. […] Lettres de Peiresc. I should like to be granted, for this second part, three volumes, as for the first. If, as I hope, my request is favourably received, there is bread on the board — work for several years!

5 July. — Yesterday the hermit of Larroque had the visit of his good neighbour M. l'abbé Alis (206), curé of Agmé, who was kind enough to bring me an enormous bundle of old papers entrusted to him by M. de Bonneau (207). From these seventeenth-century papers I shall draw the matter of an interesting little collection. That will make one more loaf on the board I spoke of here the day before yesterday.

7 July. — […] a letter of thanks for the parcel I had sent him at Clairvaux (208), through the comte d'Haussonville (209) — my little sheaf of unpublished documents, in which a letter of Guizot (210) must have interested him above all, for in it Guizot relates a charming anecdote about the comte de Paris (211). I am happy to think that my little volume and my attention may have been agreeable to the chivalrous prisoner. Persistence of an abnormal temperature. This poor month of July looks more and more like a month of March, the more so as we have all the usual squalls of that month — winds, rains, etc. I shall remember the pale sun of July 1890. Never before had I been obliged to put on a winter frock-coat in high summer. I wonder whether the eclipse of the sun the other day has not something to do with all these atmospheric disturbances and anomalies.

8 July. — Today is the thirtieth anniversary of my wedding day (212). What painful events in my life since that time! In these thirty years I have seen die three of my children, Marc, Jeanne and Charlotte (213), my father (214), my mother (215), three of my mother's brothers (216) — one of them, the eldest, the general, was my godfather (217); another, the youngest, my excellent father-in-law (218), was truly a second father to me. How many friends too, good friends, I have had the misfortune to lose in these thirty years! I send all these dear dead a cordial salute and pray God to let me find them all again in a better world than this. P.S. — Today, in the morning, I copied seven Peiresc documents.

10 July. — I have just finished reading Pierre Loti's latest book, Le roman d'un enfant (221). There are charming details in it, in which I recognized some of my own childhood impressions. By a singular coincidence I too read the Télémaque (222) with delight at the age at which Pierre Loti read it, and in much the same circumstances — in a hiding-place where I took refuge to escape my schoolfellows' games. What interested me greatly in Loti's recollections is his stay in Quercy, at Castelnau. I have heard much about that stay from a person who lived at Castelnau at the time and was Loti's faithful companion on his excursions. He has forgotten his little companion, the ingrate! But she still remembers him.

14 July. — Just a year ago, at about this same hour (5 in the morning), I began my journal. Who would have told me, at the moment when I expressed here my desire to become a countryman, that on 14 July 1890 I should not yet be installed in the pavilion of my dreams? Ah! it is because in every undertaking one must always allow a large part for the unforeseen, for ill luck — what may be called the devil's share. Wise and well-advised indeed is he who foresees it in the enterprises that seem simplest and easiest. I have been at Larroque a month now, and how quickly this month has passed! And how agreeably! Decidedly, I was created and brought into the world to be a countryman, and I have given too tardy a satisfaction to my vocation. Here everything pleases and enchants me, and what I lack in point of comfort is nothing beside the joys that nature, tranquillity and work give me. And how the work goes — work which, begun at 4 o'clock (the hour of the most paradoxical of critics, Père Hardouin (219)), is interrupted only by my little walks and my waterings! Since 16 June I have already drafted a good dozen bibliographical articles and transcribed a good hundred letters of Peiresc. I shall continue with ardour the transcription of the two manuscripts entrusted to me by the Bibliothèque Nationale; I shall then prepare the first volume of the second series of my hero's correspondence, a volume I should like to submit in November to the Comité des Travaux historiques. Then will come the enormous task of the index to the three volumes of the Peiresc-Dupuy letters. The misfortune is that the length of the day is going to shrink, and that already, towards the end of the month, I shall scarcely be able to see at 4 o'clock. Let us then make good use of the eight hours that separate my rising from my dinner, for the free hours of the afternoon are almost all taken up with reading the newspapers, the reviews and the new books, with correcting proofs, with correspondence, etc. The real work is from dawn to noon; I call the rest of the day my hours of recreation.

23 July. — July continues to be a deranged month. Yesterday morning it was so cold that it seemed to have frozen during the night. The harvesters cannot remember ever having cut their wheat under a milder sky. For several weeks now the sun has been almost always veiled. And so the greenery, usually scorched at this time of year, stays fresh and charming; one would think it still spring. I cannot tire of admiring: the tender green of the meadows, the dark green of the woods.

29 July. — Today I crunched the first tomatoes picked in my little garden. I found them particularly delicious — the taste, they say, that the hunter finds in the game he has killed! And I, who when I was a townsman mocked the enthusiasm of owner-producers, and had not epigrams enough for the sensitive, naïve bourgeois watering his lettuces, as I used to say, with tears of almost paternal tenderness! It is my turn to feel emotions I did not think were made for me. Beside the joy with which I savoured my tomatoes I shall set the despair with which, the other day, I saw a gash made by some dreadful urchin's knife in the bark of my young walnut tree! It was as if I had received that knife-stroke in my own flesh. I recently transcribed a letter from Peiresc to his brother in which, painting energetically a wholly similar impression, my hero declares that it made his heart sick to see the decay of the orange trees of his beautiful gardens at Belgentier (220).

3 August. — I have written today an article intended for the Bulletin du Bibliophile (223) on the first fascicle of the Dictionnaire de la langue française of MM. Darmesteter, Hatzfeld and Thomas (224). While I was engaged on this work, I was struck by the difference of the popular speech spoken around me by three persons: a workman come from Agmé, who represented Périgord (for Agmé, which as the crow flies is only 4 or 5 kilometres from here, is already a frontier locality); my little maid, come from Lavardac (225), who represents the region from which the Garonne so deeply separates us; and finally my farmer who, a native, represents the portion of our department that may be called the very heart of the Agenais. These three speeches — which, to take a single example, each have a particular word for the cock (Zaou at Agmé, ajan at Lavardac, biguers here) — do they not point to three different sources? For my part I am persuaded that this variety of language corresponds to a variety of race, and that if Lavardac, which is the gateway to the Landes, seems to have belonged to the mysterious Iberians, the two other points must have belonged to two others of the primitive peoples who inhabited Gaul. How curious these philological questions are! And what help popular speech, minutely studied locality by locality, could lend to the history of our origins, still so obscure and so uncertain!

8 August. — How little this month resembles the bizarre and frigidus (226) month of July! The fiery maw of the celestial dog, as poor J.-B. Rousseau (227) said, pours all its ardours upon us. How happy I am to be able, in the height of the dog-days, to shelter under the parasol of my old chestnut tree! How I bless its thick shade, which lets me work amid perpetual coolness! I have just compared it to a parasol; but it is also a fan, a gigantic fan, for its foliage is ceaselessly astir. I ask myself with dread what would become of me on my rock « exposed on all sides to the sun », on my burning, burnt rock, if I could not take refuge under this cool verdure. It is my sole resource. May my protector live as long as I! May it at least live until the day when its two neighbours, the young pine and the young walnut, unite their branches above my head and, without pretending to replace the incomparable shelter, will constitute for me a sort of half-equivalent of my dear chestnut and will form, in their turn, the small change of M. de Turenne (228)!

12 August. — I saw this morning a swallow on the ridge of my pavilion. Against a very blue, very pure sky the gentle little bird stood out admirably. It stayed perched there for several minutes. I wish it would come back, grow used to my roof, and bring its parents and friends there. I remember a saying of Charles Nodier (229) that struck me from childhood: Happy the house adorned with swallows' nests!

Notes:
193. On the Comité des Travaux Historiques et Scientifiques, which took up the action led by Arcisse de Caumont at the beginning of the nineteenth century, notably within the Institut des provinces, for the revival and federation of learned societies in which nobles were numerous: Carbonell (Charles-Olivier), Histoire et historiens, une mutation idéologique des historiens français […].
— On planting vines « en jouelle »: two furrows of vine are separated by two furrows of earth; the intervening ground is cultivated with cereals, vegetables, etc., and receives manurings from which the vine profits. « Vigne…bonne à mettre en appuy ou jouelle » (1555), in Cotereau, Columelle, III, 2: Hatzfeld (Adolphe) and Darmesteter (Arsène), Dictionnaire général de la langue française du commencement du XVIIe siècle jusqu'à nos jours, Paris, Delagrave.
205. The La Barrière family comes from Tonneins (filiation going back to 1490; maintenance of nobility: 1716, 1768): Meller (P.), Armorial du Bordelais, Féret, Bordeaux, 1906, t. 2, p. 243.
208. Orléans (Louis-Philippe-Robert, duc d') (Twickenham, 1869 - Palermo, 1926). Son of the comte de Paris (and thus great-grandson of King Louis-Philippe), brought up in France but banished by the law of 23 June 1886, he settled in England. In 1890, aged 21, he presented himself at the recruiting office in Paris to ask to fulfil his military duties; arrested, he was sentenced to two years' imprisonment and confined at Clairvaux, but soon pardoned. The prince returned to England, then travelled in Europe, Persia and Egypt. Having become, on his father's death (1894), the representative of the traditional monarchy in France, he drew public attention to himself by letters addressed to his partisans, by his manifesto of September 1898 on the Dreyfus affair, and by his speech of January 1900 to the royalists prosecuted before the Haute Cour. He undertook great voyages to the North Pole in 1905, 1907 and 1919, and to British East Africa in 1922-23, bringing back important zoological collections which he bequeathed to France. During the war of 1914-1918 he asked in vain to serve, whether in the French or in the Allied armies. Married since 1896 to the Archduchess Marie-Dorothée of Austria, he had no children by her: his dynastic rights passed to the duc de Guise (son of the duc de Chartres).
209. […] was elected in 1871 deputy for Seine-et-Marne to the Assemblée nationale. Returning to private life in 1876, he occupied himself with social studies — Les Établissements pénitentiaires en France et aux colonies (1875); L'enfance à Paris (1879) — and literary studies: Sainte-Beuve, sa vie et ses oeuvres (1875); Le salon de Mme Necker (1882); Études biographiques et littéraires (1879-1888). Elected in 1888 to the Académie française, then in 1904 to the Académie des sciences morales. In 1891 he was called upon by the comte de Paris to represent him in France with the monarchist committees. He also published: Madame de La Fayette (1891); Socialisme et charité (1895); Lacordaire (1896); La duchesse de Bourgogne et l'alliance savoyarde (1898-1903); Salaires et Misères de femmes (1900); with Gabriel Hanotaux, Souvenirs sur Mme de Maintenon (1903-1904); Ombres françaises et Visions anglaises (1914). It is most probably this last comte d'Haussonville who is in question here.
210. François Guizot (1787-1874). From a family of Protestant notables of Nîmes. Professor of modern history at the Sorbonne, he was an opponent of the First Empire. Elected deputy for Lisieux in January 1830, he took position against the July ordinances, and Louis-Philippe made him Minister of the Interior, then Minister of Public Instruction (1832-1837), a post in which he committed the State to the charge of primary education. He was then Minister of Foreign Affairs from 1840 and showed himself a convinced partisan of the Entente cordiale with Great Britain. Guizot became president of the Council only in 1847, without losing his first portfolio, but he had been the effective head of the government since 1840 (under the nominal presidency of Marshal Soult). On 23 February 1848, under pressure from the National Guard, he was compelled to resign; Guizot then withdrew from political life and returned to his historical works.
211. This is the father of the duc d'Orléans mentioned above.
213. […] Notice biographique, La Rochelle, 1898, p. 15 (extr. from the Bulletin de la Société des Archives historiques de la Saintonge et de l'Aunis, July 1898). — Serin (P.), op. cit., p. 29. The dates of these deaths are given on 25 April 1891.
214. Alexandre Tamizey de Larroque (1786-1876), mayor of Gontaud from 1840 to 1848 (when he resigned because of the revolutionary events). On this figure: Tamizey de Larroque (Ph.), Le Chroniqueur Proché, documents inédits [edited by H. Tamizey de Larroque], Impr. et lith. agenaises, Agen, 1898, p. 15-16, 19 (extr. from the Revue de l'Agenais, XXV, 1898). — Couture (L.), op. cit., p. 491.
219. Père Jean Hardouin, Jesuit (1646-1729), published very learned works on the history of numismatics, in particular his treatise Nummi antiqui (1684), as well as a much-esteemed edition of Pliny the Elder (1685) and a Collectio Conciliorum. He liked to maintain rather strange opinions: for example, that the Aeneid had been composed in the Middle Ages by Benedictine monks, or that there had been no council in the Church before the Council of Trent.
220. Peiresc's natal château and residence in the countryside near Aix-en-Provence.
221. Published in 1890, it is the account of the strict Protestant childhood, at Rochefort, of Julien Viaud, known as Pierre Loti (1850-1923).
222. Les Aventures de Télémaque (published in 1699) are at once an erudite compilation, a course of morals and a manual of politics. François de Salignac de La Mothe-Fénelon (1651-1715), from a family of Périgord nobility, began his studies at Cahors; archbishop of Cambrai, he was between 1689 and 1697 the tutor of the duc de Bourgogne, grandson of Louis XIV and heir to the throne. Intended for his pupil, the work applies itself first to reviving all the literary memories of Homer and Virgil which, according to the traditions of the classical age, constitute the normal baggage of a cultivated man; it is also constantly concerned to impose directives on his private conduct; finally it seeks to trace for him a plan of action for the day he should be called to mount the throne.
223. Published since 1834 in Paris, continued today by the Bulletin du bibliophile, du bibliothécaire et de la Société des amis de la Bibliothèque Nationale et des grandes bibliothèques de France, BNF, Q-3714.
224. Published between 1890 and 1900, that is after the death of its master-builder Arsène Darmesteter (1846-1888) — philologist, professor from 1881 of medieval French language and literature at the Faculté des Lettres of Paris — by the care of his collaborator Ad. Hatzfeld and of Antoine Thomas.
225. On the left bank of the Garonne, between Vianne and Barbaste, on the Baïse, a tributary of the Garonne: carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003.
226. From the Latin: « icy ».
227. Jean-Baptiste Rousseau (1671-1741), born in Paris, a shoemaker's son. He received the counsels of the old Boileau, and the maréchal de Tallard took him to London in 1701 as his secretary. The Académie des inscriptions opened its doors to him in 1705. It is around 1707 that the still obscure affair that poisoned the rest of his life took place: J.-B. Rousseau was disputing with La Motte the succession of Thomas Corneille at the Académie when calumnious and defamatory couplets against several men of letters and the habitués of the café Laurent were circulated under his name; one of the latter, La Faye, a captain in the guards, struck Rousseau. Although not condemned in a first trial brought on this occasion, Rousseau felt himself covered with public contempt, and to avenge himself he tried, by rather unworthy means, to throw the accusation onto Saurin of the Académie des sciences. But the parlement declared him guilty at once […].
28 August. — Since Monday, a considerable drop in temperature. Is autumn already making itself felt, or at least foreseen? All this morning there was a veil of mist on the hillsides. I work on my terrace, and I can see plainly that it is there I shall keep most often. I am so well there, facing so vast and so beautiful a horizon and drinking in such good air! I shall leave this post of combat only on days of bad weather or days of extreme heat. From the top of my terrace, each time my gaze sweeps the half-circle that stretches around me, I catch sight of the old juniper that shades my tombstone; but that does not trouble me, and, less weak than Louis XIV, I do not complain of the neighbourhood of Saint-Denis (231). […] the Terre provençale of Paul Mariéton (230). I owed that preference indeed to the amiable writer who, on the first page of his book, set this crushing homage: To the master of French bibliography (these southerners doubt nothing!), and who already in the book itself, speaking of the library of Carpentras, had said (a little less rashly): « One of the princes of bibliography, M. Tamizey de Larroque. » So here I am raised in rank in the short time that elapsed between the composition of the work by its author and its composition by the printer! There are a good many other amusing exaggerations in the Terre provençale, but the Journal de route is written with such witty verve and such lively poetry that one forgives the chancellor of the félibrige everything.

29 August. — My son (232) killed his first partridge yesterday. The young hunter's joy did me good. Never did a general who had won a great battle feel prouder of his victory. May that partridge be followed into my son's game-bag by numerous company! While Henri gave himself up to his cynegetic exploits, I was having hung, in the little room we used to call the « chalet » and which today becomes the parlour, the portrait of General Jacques-Philippe Delmas de Grammont (233); in the dining-room, on the chimney-piece, the portrait of the comte de Paris and of his son the duc d'Orléans (234) (who will reign more in the hearts of their faithful than on the throne), and against the wall two pictures which represent — or mean to represent — a jay and a partridge (this last image will flatter my son), and finally a barometer-thermometer and a carved wooden clock which come to us from the Ménagère (235); in my bedroom, the portrait-photograph of my poor mother (236), whose charming smile seems alive and lights up the whole room; in my study, facing the table on which, God willing, I shall finish the works I have begun, two frames containing, the one the portrait of Peiresc by Mellan (237), the gift of my friend the comtesse Juliette de Robersart (238), the other an autograph letter of my hero (239), addressed to a physician of the Agenais, M. de la Ferrière, and ceded to me by M. Calbet (240), who had it from… but let us go back no further. Facing my hero's handwriting and the best of the engravings that have preserved his fine and handsome head, I can only work well. Where could I find more precious sources of inspiration and energy?

30 August. — There is no feast without a morrow. Yesterday, another partridge killed by my son: almost what may be called a double shot. The victor's joy now knows no bounds. May Diana smile on him again today! But I ask myself with a certain perplexity what will become of my heir the day he has the glory of bringing down a hare.

4 September. — M. Guillory the architect came to lunch with us and to proceed to the acceptance of the works and the settling of the accounts. The total expense will not stray far from my forecasts (8,000 francs). The most considerable portion of this expense, the masonry (Lacube's account), scarcely exceeds 2,200 francs (in reality 2,209 francs 19 centimes), which 2,200 francs apply (1) to the construction of 70 and two-thirds toises (at 30 francs the toise, 2,122.72) and to accessory works for underpinning the foundations, demolition of the earthen wall, etc. (86 francs 47 centimes). I note that the walls are 10 m 33 cm high and that the façade is 8 m 40 cm long.

10 September. — Yesterday I spent the day with Henri at the villa of Agmé (241), where we had the pleasure of finding my dear cousin Amélie de Bentzmann (242), and the sister and niece of the marquise, Madame and Mademoiselle de Pellan — the latter a delicious young girl of 18, with something of a goddess gliding on the clouds. The joy of chatting with all these amiable women had to be paid for by a very painful walk under a July sun, a real July sun. I came back to the pavillon Peiresc worn out with fatigue, and felt before falling asleep that very agreeable sensation of the sound of the wind in the woods, a sound that seems to lend more sweetness to the rest one enjoys. When the storms of winter come, I shall repeat, before that wild harmony, the beautiful lines of Lucretius (243).

13 September. — Yesterday I finished the transcription of the two manuscripts that had been entrusted to me by the Bibliothèque Nationale, from which I had more than 500 documents to take. I wish to record that we had that day a very high temperature (nearly 30 degrees). How pleasant it was in the evening on my terrace! And with what admiration, while the air was freshened by a little breeze, I contemplated the stars, which seemed to me more numerous than ever! It was a veritable swarming of heavenly bodies, and those who have poetically called the stars « those flowers of the sky » could have said that the parterre was at full muster.

1 October. — Return to the pavillon Peiresc, which we had left a fortnight ago to receive at Gontaud my brother-in-law Henri de Grammont (244), and then to spend 48 hours at my sister's, at the chalet (245), where we found the whole family gathered. During this fortnight I came several times to Larroque, once notably (Thursday, 25 September) with Henri de Grammont, to whom I was anxious to do the honours of my hermitage. We lunched gaily together and afterwards came to agreement on several important points — an agreement that removes several present and future difficulties. During the second half of September the mason Bonneval came to finish his work inside (a partition in the chai and repairs in the parlour and the kitchen) as well as outside (rough-casting, privy, aviary, etc.).

6 October. — I have not yet said that the stone of my tomb has had to be replaced by a stone of better quality. I had decidedly judged too favourably the monolith come from the quarry of Bistauzac: it was already beginning to crumble, and that even before the bad weather! What would have happened after several winters? A complete pulverization. Bonneval has supplied me with a magnificent block extracted from the quarry of Pardaillan (246), hard as diamond and white as alabaster. Amid the greenery of the neighbouring trees this stone, which stands upright in accordance with my first intentions, produces by its extreme whiteness a poetic effect. It will be the austere ornament of one corner of the beautiful landscape I have ceaselessly under my eyes from the top of my terrace.

Notes:
230. Mariéton (Paul), author of La terre provençale, 1890 (Lyon, 1862 - Nice, 1911). A Provençal by adoption, he was one of the most zealous promoters of the félibre movement. In 1885 he founded the Revue félibréenne. To him we owe La terre provençale (1890); Les voyages félibréens et cigaliers (1891-1894-1897); Jasmin (1898); La Provence nouvelle, histoire du félibrige (1901). He also published collections of verse — Souvenance (1884); La viole d'amour (1886); Hellas (1888); Mélancolie (1895); Hippolyta (1901) — and literary studies: Soulary et la Pléiade lyonnaise (1884); une Histoire d'amour : les Amants de Venise [George Sand and Musset].
231. The basilica of Saint-Denis was the traditional burial place of the kings of France.
232. Eldest and only surviving son of Philippe Tamizey de Larroque and of Olivia Delmas de Grammont, his second wife: Henri-François-Philippe Tamizey de Larroque, born 9 October 1865 in Paris (VIe arrondissement), died at Saint-Pierre de Nogaret, « au lieu-dit Larroque », on 3 May 1929: Serin (P.), op. cit., p. 32-33.
233. Born at La Sauvetat-en-Dropt in 1796, died at Miramont-de-Guyenne in 1862. A cavalry officer, he served in the African campaign and became general of division in 1853. Deputy for the Loire in the Legislative Assembly (1849), where he supported the policy of the Élysée and carried a law for the protection of animals. See 4 October 1897 and 10 May 1898.
234. See 7 July 1890.
235. A household-goods shop still bears this name today in the centre of Valence d'Agen (3 Allée du 4 septembre).
236. Marie-Élisabeth-Pauline Delmas de Grammont (1802-1888), sister of General Jacques-Philippe Delmas de Grammont (see note 74), daughter of Jean-Joseph Delmas de Grammont — born at Castillonnès on 23 December 1746, died at Miramont-de-Guyenne on 25 October 1809, colonel, chevalier de Saint-Louis — and of Marie-Henriette de Vivie de Duvivier (1765-1827), married as his second wife at La Sauvetat-en-Dropt on 21 December 1790 [A.P. Baquier]. Jean-Joseph was « the hero of Wissembourg [1793] ». In Ph. Tamizey de Larroque's words, his mother for sixty years « lit up this little town [Gontaud-de-Nogaret] by her goodness and her charity »: Tamizey de Larroque (Ph.), Le chroniqueur Proché… op. cit., p. 15-19. — Couture (L.), op. cit., p. 491. See 29 September 1894 and 25 September 1897.
237. Mellan (Claude), French engraver and draughtsman, born at Abbeville in 1598, died in Paris in 1688. He executed in Rome a great number of works — vignettes, book frontispieces, portraits. He devised a new manner of engraving, consisting in using only parallel lines, curved and swelling. On leaving Italy, Mellan went to Aix (1636), then to Paris (1637), where he was lodged at the Louvre and appointed to engrave the antique statues and bronzes of the king's cabinet. One of his best-known feats of skill is La Sainte Face, engraved in 1649 in a single spiral stroke. He engraved more than 300 plates.
238. The family of the lords of Robersart, whose filiation goes back to the beginning of the thirteenth century, comes from Hainaut. Juliette Robert, comtesse de Robersart, inherited in 1866 from her uncle, the vicomte Obert de Quévy, the château of Wambrechies, in existence since the eleventh century. She was its last châtelaine, dying unmarried and without descendants on 2 January 1900: see the internet portal of the Conseil Général du Nord.
239. That is, Nicolas-Claude Fabri de Peiresc. See 16 July 1889.
240. Maurice Calbet published Juvenilia, Marmande, impr. de E. Demeaux, 1892, in-16, 66 p. — In 1891 Philippe Tamizey de Larroque gave him a copy of the Livre de raison de la famille Dudrot de Cap de Bosc (see note 120). Maurice Calbet then lived at 4 rue Henri Martin, Agen [A.P. Baquier]. — A.D. Lot-et-Garonne, 5 J 579: letter-card from Henri Tamizey de Larroque to the abbé Dubois, curé of Saint-Pierre de Buzet and Ambrus, par Buzet, 8 October 1899: « …you would arrive the day before, Thursday, to lunch with M. Calbet, who is coming to fetch the rest of his old papers. You will help me to bear his visit, for he is as tiresome as can be… ». Tamizey's correspondence with Maurice Calbet: A.D. Lot-et-Garonne, 16 J 8, correspondance d'érudits : Fonds Tamizey de Larroque.
241. See 6 September 1889.
242. Noble and ancient, the Bentzmann family was originally from Poland, but took refuge and established itself at Danzig following troubles in Poland. In the first half of the seventeenth century a branch came to settle in France. On 31 January 1856 Léon-Jean-Charles-Marie de Bentzmann (1813-1884) married Marie-Amélie Delmas de Grammont, daughter of General Jacques-Philippe Delmas de Grammont and of Marie-Anne de Boëry [A.P. Baquier]. Born at Miramont-de-Guyenne in 1835, Marie-Amélie de Bentzmann published À la gloire du Sacré-Coeur de Jésus et pour son amour, Bordeaux, impr. Adrien Boussin, 1879, in-8° of 37 p.; Élévations sur les Douleurs et les Enseignements du Coeur de Jésus pendant le Chemin de la Croix d'après les écrits de la Bienheureuse Marguerite-Marie ; suivies de prières pour le salut de la France, et d'un Exercice pour le Chemin de la Croix, Paris, Adolphe Josse, 1881, in-18 of VI-214 p.; Petit Bréviaire du Sacré-Coeur de Jésus; Petits Offices pour chaque jour de la semaine, et Exercice pendant la Messe. Extr. de la vie et des oeuvres authentiques de la Bienheureuse Marguerite-Marie, 5th ed., Nancy, Soc. nancéienne de Propagande, 1882, in-32 of 143 p.
243. Latin poet (c. 98 - c. 55 BC), author of De la nature des choses, of Epicurean inspiration.
244. See 27 September 1889.
245. At Ambrus (on the left bank of the Garonne, halfway between St-Pierre-de-Buzet to the north and Xaintrailles to the south, off the present D.108: carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003): the pleasure residence established away from the château of Padères, property of the father-in-law of Anne-Eléonore Tamizey de Larroque (1831-1917), who had married Onézime Truaut (1819-1894) at Gontaud on 26 November 1850 [A.P. Baquier].
Il a fallu payer la joie de la causerie avec toutes Rentrée au pavillon Peiresc que nous avions quitté depuis ornement d'un coin du beau paysage que j'ai sans cesse sous ces aimables femmes par une marche très pénible sous un soleil quinze jours pour aller recevoir à Gontaud mon beau-frère les yeux du haut de ma terrasse. de juillet, du vrai juillet. Je suis rentré au pavillon Henri de Grammont 244 et pour aller, ensuite, passer 48 heures Peiresc rompu de fatigue et j'ai éprouvé, avant de m'endormir, chez ma soeur, au chalet 245 , où nous avons trouvé toute la cette sensation si agréable du bruit du vent dans les bois, bruit qui semble donner plus de douceur au repos que l'on goûte. Quand viendront les tourmentes de l'hiver, je 241 Voir 6 septembre 1889. 242 Noble et ancienne, la famille de Bentzmann était originaire de famille réunie. ensemble et nous nous sommes après mis d'accord sur plusieurs Pologne, mais réfugiée et établie à Dantzig où elle s'était réfugiée par suite de troubles en Pologne. Dans la première moitié du XVII e siècle, une points importants, accord qui fait disparaître plusieurs branche vint se fixer en France. Le 31 janvier 1856, Léon-Jean-Charles-Marie de Bentzmann (1813-1884) épousa Marie-Amélie Delmas de Grammont, difficultés présentes et futures. Pendant la seconde quinzaine fille du général Jacques-Philippe Delmas de Grammont et de Marie-Anne de Boëry [A.P. Baquier]. Née à Miramont-de-Guyenne, en 1835, Marie-Amélie de Bentzmann a publié A la gloire du Sacré-Coeur de Jésus et pour son amour, 243 Poète latin (v. 98 av. J.-C.-v. 53 ap. J.-C.), auteur De la nature Bordeaux, impr. Adrien Boussin, 1879, in-8) de 37 p. ; Élévations sur les des choses, d'inspiration épicurienne. Douleurs et les Enseignements du Coeur de Jésus pendant le Chemin de la 244 Voir 27 septembre 1889. Crois d'après les écrits de la Bienheureuse Marguerite-Marie ; suivies de 245 À Ambrus (sur la rive gauche de la Garonne, à mi-chemin de St- prières pour le salut de la France, et d'un Exercice pour le Chemin de la Pierre-de-Buzet au nord et de Xaintrailles au sud, à l'écart de l'actuelle Croix, Paris, Adolphe Josse, 1881, in-18 de VI-214 p. -Petit Bréviaire du D.108 : carte au 50 000 e , Marmande-Agen, 56, I.G.N., Paris, 2003), il Sacré-Coeur de Jésus. -Petits Offices pour chaque jour de la semaine, et s'agit de la demeure de plaisance établie, à l'écart du château de Padères, Exercice pendant la Messe. Extr. de la vie et des oeuvres authentiques de la propriété du beau-père d'Anne-Eléonore Tamizey de Larroque (1831-1917) qui Bienheureuse avait épousé le 26 novembre 1850, à Gontaud, Onézime Truaut (1819-1894) [A. Marguerite-Marie, 5 e éd., Nancy, Soc. nancéienne de Propagande, 1882, in-32 de 143 p. P. Baquier]. 13 octobre Tous ces jours derniers j'ai fait répandre du sable sur les allées qui entourent le pavillon. Je pourrai désormais attendre de pied ferme les pluies de l'hiver. Je me moquerai du mauvais temps et sans craindre la moindre boue je me empierrer, entre l'étable et le châtaignier, le chemin qui Psalmiste : eripe me a luto 259 . Le chemin du bois de M. de magnifique été de la Saint-Martin 262 ! Et combien je jouis de était jusqu'à présent, de novembre à avril, un affreux Puymiclan 253 , où j'ai passé tant de joyeuses journées en ma Godailh 260 , auquel j'avais autrefois pensé, est transformé en ce renouvellement du beau temps ! Le ciel est d'un bleu très cloaque. Je veux que l'on en fasse une solide petite voie jeunesse, jusqu'à Miramont 254 et la Sauvetat 255 , où se torrent. 
30 October

I have not yet mentioned the purchase, the happy purchase, I made of a fragment of the Cassini map [249], on which I find all the localities that can be seen from here and many others dear to me on various accounts. Besides Larroque, whose name is printed La Roque, I see on it my estate of La Carrère [250], the whole valley of the Garonne as far as Casseuil [251], the whole valley of the Lot as far as Villeneuve [252], […] [253], Miramont [254], […] La Sauvetat [255], which gather for me so many family memories, and, further off, Castillonnès [256], cradle of the Delmas, and Bournel (near Villeréal) [257], where those same Delmas once lived. On the side of the Landes my map stops near Ambrus [258], where my poor mother rests and where, on the last day of last month, I went to pray at her grave, a duty I shall perform every year for as long as I can.

[…] The sky is pure, and one would think it April if the trees had not exchanged their green mantle for a mantle of yellow or rust. I am taking advantage of this end of autumn to begin my plantings of roses, myrtles and laurels over again. I am putting them in a little everywhere, always applying to these shrubs the old maxim of the jurists: what abounds does no harm.

11 November

I have greatly praised, ever since beginning this journal, the charms of the country. But here is the reverse of the medal: bad weather! For three weeks, abominable rains. The roads are so bad that my hermitage is becoming inaccessible [260]. Yesterday I was stuck in the mud like a train in distress in the snow. I cannot think of going to Gontaud tomorrow for the Saint-Martin fair, and most likely I shall not even be able to go there next Sunday. I have renewed acquaintance with that sticky, tenacious Larroque mud, which is of a peculiar kind and in the midst of which, as a child, I so often left my shoes behind. Yesterday I wanted to cry out, like the […]: [Eripe me de luto] [259]. Fortunately I have, for my little walks at moments when the sky clears, my three sanded alleys, to which a fourth, longer than the others, will soon be joined. [It will be] a Roman road, a camin herrat [247], according to the energetic popular expression. We shall fill in the ditch where stagnated that liquid manure which, according to a widespread but no less erroneous belief, my dear Palissy [248] is supposed to have so poetically called liquid gold, and in the earth that fills it I shall plant roses and laurels intermingled: sanitation and embellishment at once. […] I shall give myself the pleasure of a walk several hundred metres long. It will be a double pleasure, through the contrast between my ground, always dry, and the surrounding lands, always miry. I propose two posts, two boards, two inscriptions: on the one hand, Here one may at all times walk in slippers; on the other, Here one is condemned to paddle in mud in perpetuity.

12 November

In a few weeks I shall be a full 62 years old. Just as I prepared my tomb in advance, I did not want to wait any longer before drawing up my will.* It is commonly said that this does not make one die. For my part I hold that it does one good, for it sets the mind at rest, and I have long regarded tranquillity as the ninety-ninth part of happiness. I thought it my duty in my will to favour my son considerably, first because his sister will not marry, and then because she will have a larger share than her brother in her mother's considerable fortune. In giving Henri the house at Gontaud and the estate of Larroque by preciput, I am only restoring the balance, and I am therefore performing an act of justice which his sister, I hope, will not fail to approve.

(*) [Marginal note by Ph. Tamizey de Larroque:] This will was deposited on 23 November 1894 in the hands of Me Coulet, notary at Gontaud [261].

25 November

Under a fine sun, a sun which allowed me, almost on the eve of the coldest month of the year, to do a fairly long spell of reading in the open air, seated on my tombstone. What […] We shall, a little later, […]

28 November

I want to record here an extraordinary cold which has lasted for two days. Larroque is a little Siberia. My thermometer has shown as much as 7 degrees below zero. It is true that at that moment the most glacial of all the north winds was blowing. Agen recorded, on the same day, only 5 1/2 degrees. This sudden, accursed great cold must have killed the artichokes I had had planted the week before, and which in thought I was already crunching. (Perrette and her spilt milk-pot, as ever [263]!) This devil of a cold must also have killed a good number of the poor shrubs put into the ground these last days, companions in misfortune of the shrubs scorched by the interminable drought of which I have complained so much here. How many dangers threaten plantings in winter as in summer, and how little one may count on final success!

1 December

The cold, so violent yesterday and the day before, is easing a little. Agen and Larroque had roughly the same temperature on 29 and 30 November, that is, about 9 degrees at the start of each of those harsh days (now a little under 9, now a little over). What a beginning to the winter of 90-91! Happily the sun lights up this terrible start, so that a walk is not only possible but even agreeable. I have just now done my little kilometre, and I brought back from the outing a bracing freshness owed to the keen air that whipped my cheeks and quickened the circulation of my blood. I pledge myself to go out every day for half an hour, even in the greatest cold, provided the cold be sunny. [Added in the margin: Agen, 1 December, at 5 in the morning, 7.50 degrees.]

2 December

The thermometer is rising, and yet the cold seems to me more perceptible, more penetrating. Snow is on the way. A little has already fallen at Gontaud during the night, while here we have not had the smallest flake. The absence of any ray of sunshine would have made the day a sad one for me had I not tasted, beside a good fire, one of the sweetest pleasures of this world, the pleasure of reading an interesting book. That book is the second volume of the Monographie du château de Plassac by the Marquis de Dampierre [264], who had the exquisite kindness to have my name printed on the verso of the half-title.
This whole volume, of more than 400 pages, is devoted to our common kinsmen, the Montazet [265], and chiefly to the two great men of the family, the archbishop of Lyon, member of the Académie française, and the lieutenant-general of the king's armies. Their great-great-nephew has read with pride pages which do us so much honour, and the day of 2 December, so gloomy out of doors, was for him, a prisoner indoors, altogether sweet and charming, one of those days of intimate festivity whose memory one keeps for a long time.

10 December

I am taking advantage of the truce (sic) granted us by winter to continue my little plantings of shrubs and above all of roses. If only half of these plantings were to succeed, this would be a true garden of roses. But how few of the plantings made at the same season last year survived the dangers that particularly threaten every infancy! Nothing is so fragile as these poor little stems. Our fathers used to say: for one pleasure, a hundred pains. I see that one must also say: for one rose-bush that prospers, a hundred rose-bushes die in infancy.

13 December

I receive very sad news, that of the quite unexpected death of my dear colleague and friend M. Charles Ruelens [266], keeper of manuscripts at the Bibliothèque royale in Brussels, officer of the Order of Léopold, chevalier of the Légion d'honneur, officier de l'Instruction publique, etc. He "died piously at Saint-Josse-ten-Noode, on 8 December, at the age of 70," as I learn from the announcement sent me by the family. I became attached to M. Ruelens at Carpentras, where for several weeks we worked together, he on his collection of the letters of Rubens, I on my collection of the letters of Peiresc. Just as our respective heroes had been great friends, their respective editors became great friends. I told all that in a dedicatory letter at the head of the Petits mémoires of Peiresc [267]. How far I was from expecting to be so soon parted from this amiable travelling companion, to whom I said, in the document just cited: "May we, before taking our ticket for the mysterious station at the end of the line, exchange our most cordial congratulations before the complete and rather imposing row of those stout volumes to which we shall have devoted the best of our existence, and, hand in hand, say to each other with serenity that we have conscientiously fulfilled our mission!" Alas! Poor Ruelens was able to give us only a single volume of his Rubens, and he leaves many other works interrupted besides. It is with a pang that I see the passing of this most obliging friend, from whom scarcely a month ago I received so cordial, so affectionate a note, in which he thanked me for sending my Borrilly [268], of which he promised to give an account, and in which he thanked me too for the praise I had just given his latest publications, whether as chronicler of the Polybiblion [269] or as contributor to the Intermédiaire [270]. I pray the good Lord to bless the dear […]

[…]

Another sad piece of news. I learn from a letter of my cousin Charlotte de Grammont that her sister died a few hours after M. Ruelens: she was buried at Versailles the day before yesterday, Monday the 15th of this month. Marie had a very beautiful soul, and I am sure she is in heaven.

1891

2 January

I had been at Gontaud since two days before Christmas. I return here under a charming sun and in a temperature as mild as on one of the finest days of spring. One would say the countryside has put on a festive air, the better to make itself appreciated by the ingrate who has just abandoned it for more than a week. As if everything were to make my return more agreeable, I find all sorts of improvements made in my absence by the farmers: sand spread all round the pavilion, the alleys levelled, shrubs planted (junipers above all) to the north and the south, roses around the old oak, etc. May the new year see all these plantings prosper, and above all see prosper all those I love!

5 January

At the very moment when I am having so many roses planted, my learned colleague M. Charles Joret sends me, as a New Year's gift, a dissertation as erudite as it is interesting on the Légende de la rose au Moyen-Âge chez les nations romanes et germaniques [271]. There is in this a coincidence, a harmony, truly charming, which made still more agreeable to me the reading of pages devoted to the queen of flowers by a scholar equally versed in the knowledge of botany and of literature.

7 January

Since I am to record meteorological matters here, I will say that a wind of extreme violence has been blowing for more than 48 hours, day and night. These are gusts fit to carry everything away. May the Peiresc pavilion withstand the efforts of the tempest! So far nothing has given way, not even a single tile of the roof. Though the wind cuts like a great sabre, and though it is very cold, I took my daily walk. In such a case, only the first step counts. I spoke here last year of the pleasure I would take in hearing, beside a good fire or in a good bed, the wind whistle around the pavilion. I have been feasting on that pleasure since Monday, and it promises to last a while yet.

8 January

I received yesterday, from the president of the history section of the Comité des travaux historiques [272], a little note which brought me a very good piece of news and which I take great pleasure in transcribing here: "Paris, 6 January 1891. - My dear friend, the history section unanimously adopted, at yesterday's sitting, your plan to publish a new series of three volumes of the correspondence of Peiresc. Yours ever. Léopold Delisle [273]."

10 January

We have just observed (9 a.m.) that the thermometer marks 9 degrees. Let it be set down, for the record, as our fathers said, that on the day of the dance of the millions (the Rouvier loan [274]) the cold here reached its maximum for the first ten days of the month. [Note in the margin: At Agen, on the 11th there were 9°25, and on the 12th, 10 degrees at 5 a.m.]

[…] The wind blows and roars still, and the cold remains very keen (6 to 7 degrees).

[…] Redoubling of the cold. Yesterday we had 10 degrees here, and today we have 12. The night before last the wind blew with such violence that I have rarely heard such music. I trembled for my old oak, but it has not lost a single one of its branches in the turmoil, and one may say that, though dead, it is solid still.

13 January

Snow fell quite abundantly during the night. It is a triple gain: first, an easing of the situation, the temperature rising by several degrees; next, free manuring of my garden, my meadows and my fields; lastly, an embellishment of the landscape. I shall not take advantage of so convenient an occasion to describe the effects of snow on the trees and the hillsides; I shall avoid the banal metaphors, the indestructible clichés; but I shall note, since it is the first time in a very long while that I see snow in open country, that the spectacle is charming, above all for a man seated before the wide window of a well-heated room. Just as the terrible north wind of all these last days made the rest of my bed the sweeter, so the bright flame shining in my hearth makes more agreeable to me the poetic look of all these unaccustomed whitenesses.

28 January

Old father Thouron today hung the bell I had commissioned him to buy. This bell is small (weighing only 3 1/2 kg), but it has a fine sound and I think it will be heard a long way off. I shall have it rung every day to announce dinner, and that will be a warning to my son when the passion of the chase makes him forget noon, the invariable hour, the sacramental hour of the principal meal. [Note in the margin: At 4 francs the kilo = 14 francs, plus 6 francs for the fittings.]

5 February

Ten days after noting such low temperatures and such terrible storms, I find that I am working, windows open, in my study where the merry rays of an already lively sun are dancing. I find too that I have been able to read in the open air, seated on my tombstone, various newspapers and various catalogues. It is almost spring already, so fine and mild is the weather. How much more favoured we have been, even when my thermometer marked 12 degrees, than other regions! This very day I received a letter from M. A. de Brémond d'Ars, marquis de Migré [275], written in that same Hôtel des bains Sextius at Aix-en-Provence where I spent such sweet weeks in the year of grace and of great labour 1880 (that year I transcribed, at Montpellier, Aix and Carpentras, nearly a thousand documents, and my notes and copies earned me the payment of a fairly honest excess-baggage charge); I received, I say, a letter from the Marquis de Migré informing me that under that much-vaunted sky they had reached 13 degrees. This reminds me that I once heard two much-travelled men, my cousin Léon de Bentzmann [276] and M. Alphonse Paillard [277], prefect of Lot-et-Garonne, declare in almost the same terms that nowhere had they found a climate as mild and as agreeable as in the Agenais. M. Paillard even added with enthusiasm: "It is the paradise of France."

[…]

This afternoon I finished annotating the livre de raison of the Dudrot de Cap-de-Bosc family [278]. I shall carry my manuscript, barely dry, tomorrow to Lavardac [279] and the day after to Condom [280], where I shall put it to the two native scholars, MM. Gardère [281] and Soubdès [282]. After revision, these gentlemen will send it on to the Revue de Gascogne [283], where it will be inserted little by little. On my return from Lavardac, on Ash Wednesday, I shall begin the arid task of the Table of the three Peiresc-Dupuy volumes, a worthy occupation for a season of penance. With what spirit, if I have finished by Easter, I shall cry Alleluia! [Note in the margin: At Lavardac, 5 degrees of frost on Shrove Tuesday and 6 on Ash Wednesday, 10 and 11 February.]

[…]

[…] installation, I went to pick her off one of the high branches of the chestnut tree where she was in distress. She […] stretches out on my desk, accompanying with her purr the sound of my pen running over the paper. The other day she went with me on a long walk, now ahead of me, now behind, as if she had wanted to play hide-and-seek with me. She jumped onto my tomb, the irreverent creature! And she followed me, like a faithful dog, into the depths of the wood. She had disappeared for two days, carried off by her mad passion for hunting. I thought she had been killed by some poacher or throttled by some dog when, yesterday, toward four in the morning, she miaowed at my door and, once let into my room, showed me her joy and her fellow-feeling with capers on my bed, caresses, purrs. I, for my part, gave Gredinette a good welcome, but I foresee that it will end badly for the little huntress: she who courts danger will perish by it. True of cats as of us.

9 January

[…] of Miramont, my mother's native town, and the thought is sweet to me of this neighbourhood of shrubs which I call her compatriot lilacs. More than sixty other lilacs have been brought to me from the gardens of Bouillaguet which, after belonging to General de Grammont, now belong to his grandson Christian de Bentzmann [387].
Spread through the garden, the clumps and the hedges, all these lilacs will work wonders. I have besides sent to the woods for junipers and hollies, the latter to the number of more than a hundred. Hollies, […] and the roses planted in these last days make a total of at least three hundred shrubs. If all this comes to prosper, how flowery my rock will be, and how those who remembered its former bareness will marvel at its smiling transformation!

[…]

[…] Lot-et-Garonne, 2nd edition (Paris, Librairie Hachette, 1884), where the late Adolphe Joanne, author of the Dictionnaire géographique and of the Itinéraire général de la France, gives me (p. 32) a most flattering place among the "celebrated personages" of the department. The list, which begins with Sainte-Foy, martyr, in the 3rd century, ends with "the erudite Tamizey de Larroque, born at Gontaud in 1828." My nearest neighbours on this list are two members of the Académie des Sciences, the anatomist Étienne Serres [391] (of Clairac) and the naturalist de Lacaze-Duthiers [392] (of Montpezat), who were in some degree my neighbours in the department as well, for the cantons of Tonneins and Prayssas are not far from my native town. I am quite proud to have been admitted to a list where I find so many illustrious names, such as Poton de Xaintrailles [393], Bernard Palissy [394], Joseph Scaliger [395], Pierre Dupuy [396], Théophile de Viau [397], the Maréchal […] [398]. I have occupied myself with patriotic zeal with all these personages, regretting that they forgot to join to them Florimond de Raymond [399] and the great Hellenist Combefis [400]. May a new edition grant them a reparative mention, while pruning away Mme Cottin [401], who does not belong to us!

2 February

This morning, on waking, after giving a thought full of emotion to an event which will always remain deeply painful to me and of which today is the 34th anniversary [402], I went over in my mind the history of my illness. I was able to reconstruct it almost entirely. I will note here only two striking particulars. The first is that, in full possession of my intelligence, and under the impress of the fever that devoured me, I was obsessed by a fixed idea: I was persuaded that my bed, which is excellent, made up of several mattresses worthy of the canons of the Lutrin [403], was nothing but a hideous pallet on which my bones were breaking. This disturbance of the brain persisted for several days and several nights. It was a touch of delirium applied to one particular object, while in every other respect I kept all the lucidity of my mind. The other particular is that at the moment when I was in greatest danger, a sort of complete indifference came over me, rather like what one feels in seasickness. I, who was so concerned for my children, my works, and above all the Peiresc, had come to care about nothing at all. It was like a lethargic state. I no longer felt my illness any more than regret. I was in a double coma, physical and moral, and had I died then, I should have died without feeling the least suffering. Is this the effect of every very grave illness, or only of the particular illness under which I was sinking? I have sometimes read that the dying are no longer conscious of what is happening within them and around them; that in the hours preceding departure there is an abolition of the action of the senses as much as of the faculties of the understanding; that it is, so to speak, an anticipated burial. The absolute indifference I have just mentioned seems to bear out those who believe that death is preceded by a numbing of soul and body like that brought on by narcotic draughts.

[…]

[…] From six to six I was able to correct three proof-sheets of the Peiresc-Gassendi [411], transcribe an unpublished notice by L.-J. Leclerc on Maussac [410], write nine letters, several of them of four pages, read, besides my three newspapers, two booksellers' catalogues and an article of the Revue des Deux Mondes [412], and run through, while cutting its pages, a volume only just arrived, volume II of the Comptes consulaires de la ville de Riscle [413], where I had the pleasure of finding a kind mention of one of my opuscules. MM. Parfouru [414] and J. de Carsalade [415], the editors of this volume, say (note to page 346): "One must read the curious and unpublished details of the expedition of Jean Raphaël in a charming plaquette by M. T. de L. entitled Les infortunes d'un Commissaire au XVe siècle [416], Agen, 1887. The document analysed by the witty and learned author is taken from the Doat collection [417]."

[…]

[…] asks me to inscribe a few words of my dear Gascon tongue on a blank leaf detached from the album which he means to devote to Christopher Columbus [423] (on the occasion of the fourth centenary of the discovery of America). Here is the small tribute which the unworthy majoral of the Félibrige that I am furnishes to the celebration: "Ço qué faou lou mey louga en Christopho Couloumb, acos sa counifience et sa counstence. L'admirablé testut a hey beze que dan aquelles duous ales l'hommé pod ana ta len et ta haou qué boou." [424]
16 March

Yesterday was well spent from the horticultural point of view. I had a great number of shrubs planted along the great alley (from the fish-pond to the end of the said alley): lilacs, spindle-trees, roses, etc. After pleasure, utility: utile dulci [425]. I have created a nursery of plum-trees, where more than a hundred young plants will await the growth required for transplanting. The same day I received from M. Prosper de Lafitte [426] 120 chasselas vines destined for the arbour so long projected.

26 March

I have paid my farmer Ducasse, for work done for me by him and his two sons since 4 November last, the sum of 176 francs 50 centimes. Days are reckoned at 2 francs (the average adopted for all seasons). As for the days with cows and tip-cart for carting earth, they cost me 5 francs each. That same day Ducasse and I bought in halves, at the Tonneins fair, a pig weighing 496 pounds, which cost us 222 francs. It was the most magnificent animal at the fair. Until now we had wished to avoid here the bother and vexation of what is called the cuisine du cochon, but we have recognized that in the country one absolutely cannot do without the provisions furnished by the quadruped which had the signal honour of being sung by Homer.

27 March

A telegram from M. René Delpit, which arrived late since it is dated the day before, informed me yesterday that his uncle and father-in-law M. Jules Delpit [427] died on the twenty-fifth at Izon, at 4 in the morning. [Note in the margin: A short obituary notice in the Nouvelliste [428] of the 26th. It is said there that the deceased was "as generally esteemed for his private qualities as for the culture of his intelligence."] M. Delpit was 84 years old. I lose in him one of my oldest and best friends. I knew him when I was still very young, and I may say that it was above all his counsels that brought me into the career of erudition. So I liked to call him my dear and venerated master. M. Delpit had for me a wholly paternal affection, and I have seen him as proud of my successes as a real father could have been. He is to be buried tomorrow, at 9 o'clock, in that cemetery of Izon [429] where he showed me his place more than thirty years ago. He was a man who, with a few faults (a great deal of oddity above all), had great qualities: an extreme goodness in particular, and also an inflexible loyalty. I remember, in this connection, that on the day when cardinal Donnet [430] came to pay me a visit, to thank me for having addressed him, as mayor, at his solemn entry into Gontaud, we talked of my friend Delpit, and the archbishop of Bordeaux paid him this fine tribute: "He is the most upright of men." Izon is too far for it to have been possible for me to go and salute his coffin, but I salute it from here with affectionate and grateful respect.

29 March

The snow falls as in deep winter, and the wind that sends it whirling is as glacial as it is violent. It is old Boreas [431] blowing as on the ugliest days of December. What a contrast with the preceding days, so mild and so charming! It is rightly said that March is a month of surprises, a month comparable to those pretty, capricious women on whom one can never rely. Last Sunday, when I was coming back from Gontaud, I was enjoying, on the contrary, all the delicious magic of spring. The sky was pure, the sun sparkling, the wheat greening; the hawthorn was flowering in all the hedges. In the afternoon, when I made my accustomed round of my enclosure, I was struck by the abundance of the buttercups which covered, like the most magnificent of carpets, the whole alley leading from the pavilion to the chemin de Minor. By a singular coincidence I read, that very day, in an illustrated paper, a eulogy of the flower vulgarly called bouton d'or. Here are the first lines of this eulogy, entitled First smile of spring: "There is a floweret so humble, so simple, so hidden that very few know it, and even those who do not ignore it trample it underfoot with disdain. Its name, however, is fresh and full of promise, and it is the Sun, it seems, which, to form its corolla, throws down upon the earth shreds of its golden robe. It is called éclairette… Well before the swallow, well before April, it announces the reawakening of fine days, and its bright petals cast a kind of light in the grass of the meadows and on the verge of the roads."

30 March

We had three young men to supper yesterday, friends of my son: MM. Joseph de Corbier [432], Jean de Léaumont [433] and Henri Vincent de Tapol [434]. They were cut off here by the snow which, during and after the meal, was falling in great flakes. ("Lou boun Diou plumo sas aouques" [435], people would say to me in such weather when I was a child.) Retreat was impossible: it would have been the retreat from the Berezina. Our guests spent the night under my roof, and as we had no beds to offer them, they replaced sleep with talk, play, and infusions (tea and coffee): there was even a réveillon at 4 in the morning. All in all, a very gay night. It reminded me that my grandfather, colonel de Grammont [436], arrived one evening, in the country, with a numerous company, at a house where there were not beds enough for so many people. Seeing […]
[…] "… well. In leafing through it, you will sometimes have an affectionate thought for the editor of Peiresc, and that thought is already a reward for me. Another reward which will also be most agreeable to me is the profit you will draw, for your studies, from the treasure I am placing in your hands. To furnish a good tool to an excellent workman, is that not to become in some measure his collaborator, his associate, and consequently to have a share in his successes? You see that in this way I shall be almost as much obliged to you as you will be to me. On which, I shake your hand most amicably…"

3 June

We have just kept for 24 hours M. Henri Issanchou [450], post-office clerk and publicist, twice famous as the father of the letter-card and as the father of that philanthropic association he calls the Centime quotidien. His first creation has succeeded well. May the second have no less happy a destiny! My guest is very intelligent, very original. His head is in constant ferment. I compare it to a furnace where all sorts of new and bold ideas are seething. I, who greatly love impulsive minds, took great pleasure in talking with this inventor, this peaceable revolutionary, to whom I wish with all my heart success and prosperity.

14 June

After a day of suffocating heat there was, last night, a hurricane fit to smash everything, followed by a torrential rain which unfortunately lasted no more than half an hour. It was not penetrating enough, above all after a roasting of more than two months; let us hope it will come again, and that the sky of brass I have cursed so much will not reappear for some days. My unhappy plantings have been scorched by the implacable drought of the whole spring. Those that have held out are very thirsty. Last night's rain was for them no more than a small glass of water would be for a very parched man.

17 June

A smiling chance has willed that an eglantine should grow vigorously beside the old juniper that shades my tombstone. This morning its white flowers spangled the dark green of the juniper, which seemed itself to have produced all those roses, so mingled are they with its own branches. The effect was charming. - To pass from the poetry of nature to a very different subject, I will note that this same day I bought, at 2 francs the litre, a small cask of Armagnac brandy (very authentic, but still very young). It will acquire virtues in ageing, something we men do not always do. It will be a resource, in eighteen months or two years, for our dear guests, to whom until now we have, to our great regret, been able to offer only a very ordinary brandy, we who once produced at La Maratonne [451] a nectar of such repute. I remember that we sent a barrel of it to the present minister of Public Works, M. Viette [452], who, at his house at Blamont [453], where he gave us such cordial hospitality in 1867, had it tasted by some Swiss, his good neighbours, who declared with the most ardent enthusiasm that such a liqueur ought to be drunk kneeling. - "Kneeling or not, at the rate you are going," M. Viette said to them, laughing, "you would soon have drunk my whole barrel."

24 June

For the third time since I began this journal we have lit a fine fire in honour of the feast of Saint John. Yesterday's fire surpassed in brilliance those of the preceding years. Ah! if Saint John, grateful for all the faggots we have just burned for him, deigned to procure us a little water, what an amiable Saint he would be! Come, Saint John, one good impulse, which must be easy for Saints, and above all for a great Saint such as you! Give us back, in exchange for our fire, the water we need so badly! Everything is drying up. My poor shrubs are dead or dying. The lilacs of Miramont [454] are withered for ever. The trees themselves are wasting away: walnuts, elms, etc. It is a disaster that puts death in my soul.

28 June

Saint John [455] has not heard my prayer. It even seems, on the contrary, that we are more than ever under the torrid zone. Yesterday the heat was frightful. I am assured that at Gontaud there were 38 degrees in the shade (at 4 in the afternoon) and still 30 degrees at 6 o'clock. The thermometer in the sun is said to have reached 55 degrees. By the observations of the Peiresc pavilion this would be a little forced; but, allowance made for exaggeration, there would still remain, for the day of the 27th, overwhelming figures. It must have been very hot indeed for me to have been obliged, in the afternoon, to break off my work and rest for about an hour on my bed, which had never yet happened to me. My farmers, who were loading their hay onto their carts, declared to me that they had thought they would stifle under that implacable sky, which looked like iron reddened in the fire. - I note here the disappearance of my little cat Mistigris, who had such fine black stripes on his fine grey coat. (He was a half-angora.) Never was an animal more engaging at once in dress and in character. He had eyes that shone in his dark head like diamonds, on a ball night, in the midst of a black head of hair. I had already grown very attached to this poor little cat, who was so lively, so gay, so amusing, and who showed me great confidence, for […]

[…]

[…] a work of his on Saint Austinde to the Marquis de Beaucourt [523] for the Revue des Questions historiques [524]. The abbé Breuils is not only an excellent digger: he is also an agreeable talker, and the good Lord knows whether we talked! His stay was troubled by a frightful tempest, which reached its maximum of violence on the 21st, from 8 to 9 in the morning. A large branch of my old oak was flung onto the roof of the pavilion and broke several tiles. The furious wind that rushed in through this hole, with torrents of rain, seemed bent on carrying everything away, the pavilion and its inhabitants. We got off with a few emotions and also a few damages, which the tiler will repair. The curé […]

Notes

247. Palay (Simin), op. cit., p. 104: Gascon camin herrat, "paved way, high road."
248. Bernard Palissy (Agenais, c. 1510 - Paris, 1590), the celebrated ceramist and thinker converted to Protestantism, born at Lacapelle-Biron (Lot-et-Garonne), where a museum is devoted to him; he died in the Bastille after having settled at Saintes. See Andrieu (J.), Bibliographie générale de l'Agenais, Paris-Agen, t. 2, 1887, p. 174. Palissy's first great biographer was Tamizey's great friend and fellow historian, the Saintongeais Louis Audiat: Bernard Palissy, Magnin Blanchard, Paris, 1864 (digitized text on the B.N.F. Gallica site). - Bernard Palissy (1510-1590), l'écrivain, le réformé, le céramiste, Actes du colloque de Saintes, Abbaye-aux-Dames, 29-30 June 1990, ed. Frank Lestringant, Éd. Interuniversitaires, Mont-de-Marsan, 1992. - Amico (Leonard N.), Bernard Palissy et ses continuateurs à la recherche du paradis terrestre, Flammarion, Paris, 1996. - Boudon-Duaner (Marguerite), Bernard Palissy, le potier du Roy, Carrières-sous-Poissy, La Cause, 1999.
249. That is, the great map of France in 180 sheets drawn up by César-François Cassini de Thury (1714-1784), grandson of Louis XIV's astronomer and himself a specialist in geodesy.
250. At Fauguerolles, south-west of Gontaud (place-name marked on the carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003), where the Bordeaux-Toulouse railway approaches the N113. Carrère: same map.
251. Casseuil, on the right bank of the Garonne, downstream of La Réole and of Gironde-sur-Dropt, about 5 km upstream of Saint-Macaire: same map.
252. That is, Villeneuve-sur-Lot, upstream of Ste-Livrade: same map.
253. North of Gontaud, about 5 km, by the present D 641, at the crossing of the D 124: same map.
254. Miramont-de-Guyenne, where the Delmas de Grammont settled, about 15 km north of Gontaud: same map.
255. La Sauvetat-du-Dropt, 5 km north of Miramont-de-Guyenne. General Delmas de Grammont was born there.
256. Some twenty km north-east of Miramont by the present D1: same map.
257. About 5 km south-east of Castillonnès, Bournel, a little off the present D 2, is served by the D 218 and the D 257: same map.
258. Ambrus (on the left bank of the Garonne, halfway between St-Pierre-de-Buzet to the north and Xaintrailles to the south, off the present D 108): same map.
259. From the Latin, "pluck me out of the mire": Ps. 68:3 (Hebrew 69).
260. On the difficulties of access to Larroque: Serin (P.), op. cit., p. 246.
261. Today kept in the office of his successor, Me Michel Fortin, notary at Gontaud-de-Nogaret.
262. See 16 November 1889.
263. La Fontaine, Fables, book VII, 10: "La laitière et le pot au lait."
264. Élie-Adrien-Roger, marquis de Dampierre, Monographie du château de Plassac en Saintonge. La Saintonge et les seigneurs de Plassac (1215-1669), impr. N. Texier, La Rochelle, 1888.
265. The Malvin were originally from the Albret. One of them, having passed into the Agenais in the 16th century, settled there by marriage with a daughter of the lord of Montpezat and Aiguillon; from the fief of Montazet, which she brought, the Malvin of the Agenais took their surname. Antoine de Malvin de Montazet (1733-1788) [sic], chaplain to the king in 1742, was a zealous champion of the Gallican Church and an ardent partisan of the Jansenists, whom he defended against Beaumont, archbishop of Paris. His brother Antoine-Marie, baron de Quissac (1771-1768) [sic], lieutenant-general of the king's armies (1760), was ambassador in Hungary and grand-croix of Saint-Louis: see Meller (P.), op. cit.; Andrieu (J.), op. cit., t. 2, p. 148-149; Audiat (L.), M. E. de Blossac (un petit-neveu de Chateaubriand, sous-préfet à Marmande), Agen, 1877.
266. Charles-Louis Ruelens is the author of several articles on the history and art history of the old Spanish Netherlands. He edited notably the Correspondance et documents épistolaires de Pierre-Paul Rubens, published at Brussels, 1877, and at Antwerp, 1887, and signed the Notes relatives aux lettres adressées à Rubens ou à d'autres Belges. - Correspondence of Tamizey with Ch. Ruelens: A.D. Lot-et-Garonne, 16 J 24, correspondance d'érudits, Fonds Tamizey de Larroque.
267. See notably 7 March 1890. - Peiresc (Nicolas-Claude Fabri de), Petits mémoires inédits…, éd. Tamizey de Larroque (Ph.), Anvers, 1889, t. 1.
268. Tamizey de Larroque (Ph.) éd., Les correspondants de Peiresc, XVIII: Boniface Borrilly, Lettres inédites écrites d'Aix… (1618-1631)…, Aix-en-Provence, 1890 (extr. des Mémoires de l'Académie d'Aix).
269. A bibliographical review of a markedly Catholic character, founded in Paris in 1866 and published by the Société bibliographique to serve scholars and society readers alike. It comprised a technical part (bibliographical list of the principal books published in France and abroad, with summaries of reviews and even of newspapers) and a literary part (analyses of works, chiefly French, of all genres), completed by a chronicle of literary news and, for a long time, a section of questions and answers. From 1875 the technical part was more clearly separated from the literary.
270. L'Intermédiaire des chercheurs et curieux, a periodical founded in 1864 by Charles Read, under the anagram Carle de Rash, in imitation of the English Notes and Queries (founded 1849), to serve as a link between scholars wishing to clear up obscure points. Read's successors in the direction were Lucien Faucou (1884), then général Iung (1895).
271. Charles Joret (1829-1914), born at Formigny (Calvados), philologist, graduate of the École des hautes études (1874) with a remarkable memoir, Du C dans les langues romanes; docteur ès lettres (1875), professor of foreign literature at the faculty of letters of Aix-en-Provence until 1893; corresponding member (1887), then member, of the Académie des inscriptions. His bibliography is abundant: Herder et la Renaissance littéraire en Allemagne au XVIIIe siècle (1875); De la littérature allemande au XVIIIe siècle dans ses rapports avec la littérature française et avec la littérature anglaise (1876); La légende de Saint Alexis en Allemagne (1881); Des caractères et de l'extension du patois normand (1883); Flore populaire de la Normandie (1887); Le voyageur Tavernier (1889); La rose dans l'Antiquité et au Moyen Âge (1891); Les jardins dans l'ancienne Égypte (1894); Fabri de Peiresc (1894); Les plantes dans l'Antiquité et au Moyen Âge (2 vol.).
275. An announcement records the death of Guillaume-Joseph, marquis de Brémond d'Ars…, general of division, former senator, first cousin of comte Anatole de Brémond d'Ars, marquis de Migré [the man of whom Tamizey speaks here]; the latter was honorary president of the Conseil héraldique de France, chevalier of Saint-Jean de Jérusalem and of Saint-Sylvestre, member and secretary of the conseil général of the Finistère, former president of the Société archéologique de Nantes et de la Loire-Inférieure, of the association des Chevaliers Pontificaux, etc.; he lived at 5, rue Harroÿs, Nantes (Loire-Inférieure) and at the château de la Porte Neuve, par Riec (Finistère) [coll. Baquier]. - Correspondence of Tamizey with A. de Brémond d'Ars: A.D. Lot-et-Garonne, 16 J 7, correspondance d'érudits, Fonds Tamizey de Larroque.
276. He is the husband of Marie-Amélie Delmas de Grammont. See 10 September 1890.
277. Alphonse Paillard, born at St-Mihiel (Meuse) in 1817. On leaving the École des chartes (1842) he was appointed substitut, but abandoned the magistracy in 1848 and became successively sub-prefect at Forcalquier (1849) and Dunkerque (1851), prefect of the Cantal (1852), of Lot-et-Garonne (1858), of the Puy-de-Dôme (1864) and of the Pas-de-Calais (1866); he ended his administrative career in 1870. Member of the Société académique d'Agen, laureate of the Institut in 1839 for a Mémoire sur les monnaies des Northmans au midi de la Loire, and of the Académie royale de Belgique in 1841 and 1842 for two memoirs on L'influence des Institutions religieuses au VIIe siècle and on the Invasions Northmandes au IXe siècle. He also published, on the occasion of his stay in the Agenais, Histoire de la préfecture d'Agen, impr. P. Noubel, 1860 (extr. du Recueil des Travaux de la Soc. des Lett. et Arts d'Agen, 2e série, t. 1; reproduced in L'Abeille Agenaise, nos of 2 April ff.) and a Discours sur la part que les lettres antiques ont eue à l'unité de la civilisation des peuples modernes, Agen, ibid., 1862; he contributed several notices to the Bibliothèque de l'École des Chartes (1848-1876) and published a work of reference, Histoire de la transmission du pouvoir impérial à Rome et à Constantinople, Paris, Plon, 1875: Andrieu (J.), op. cit., t. II, p. 174. He guided and encouraged the first historical works of Philippe Tamizey de Larroque: "Le maréchal de Biron et la prise de Gontaud en 1580", in Monuments et portraits agenais, 1er fasc., Agen, 1898, p. 63. - Correspondence of Tamizey with Ch. Paillard: A.D. Lot-et-Garonne, 16 J 21, correspondance d'érudits, Fonds Tamizey de Larroque.
278. Livre de raison de la famille Dudrot de Capdebosc (1522-1675), éd. Tamizey de Larroque (Ph.), Paris, Picard, 1891. The estate of Cap de Bosc is situated in the parish of Marcadis, commune of Moncrabeau (Lot-et-Garonne), on the right bank of the Osse, very near the boundary of the Gers. In 1891 the estate had been in the family since the 16th century, represented at the beginning of that century by two brothers, Pierre (entered in the cadastre of Condom in 1536) and Michel, who in the end appears alone and whose descendants, continuing to our own day, adopted Cap-de-Bosc as their residence. The last of the name known to Tamizey was Paul-Fernand Dudrot, living there with his father; his two sisters were Marie-Antoinette, married to Ernest Baylin, residing at Le Boué, near Moncrabeau, and Gabrielle-Josèphe, married to Dr Labat, physician at Nérac [A.P. Baquier].
281. Gardère (Joseph), archivist-librarian of the town of Condom; he published, with Ph. Lauzun, Le couvent de Prouillan ou de Pont-Vert à Condom, impr. L. Cocharaux, Auch, 1904, 14 p. (extr. du Bulletin de la Société archéologique du Gers); L'instruction publique à Condom sous l'Ancien Régime, impr. G. Foix, Auch, 1889, 224 p.; and the Inventaire sommaire des archives hospitalières antérieures à 1790… Hospice de Condom (Gers), impr. de Cocharaux frères, Auch, 1883. - Tamizey de Larroque (Ph.), "Le maréchal de Biron et la prise de Gontaud en 1580", in Monuments et portraits agenais, 1er fasc., Agen, 1898, p. 65. - A.D. Lot-et-Garonne, 16 J 14, correspondance d'érudits, Fonds Tamizey de Larroque.
282. There exist by Soubdès (Jean-Louis), deputy of the Gers to the Conseil des Anciens, Lettres écrites en 1808, par M. Soubdès… à son fils, alors sous-lieutenant, aujourd'hui capitaine au 5e régiment de cuirassiers, au sujet de son exclusion du tribunal de 1ère instance du département de la Seine, impr. de Ballard, s.d., 28 p. - Jean-Marie Soubdès published La noblesse française en 1858, nécessité de la reconstituer sur de nouvelles bases, impr. de Dupouy, Condom, 1858, 24 p.; La papauté temporelle, son origine dans le VIIIe siècle, sa plus haute élévation dans le XIVe et son déclin dans le cours du XIXe siècle, impr. de P. Dupouy, Condom, 1860, 87 p.; Précis d'un ouvrage inédit… intitulé : Constitution française telle que la réclament les besoins de l'époque actuelle, impr. de Dupouy jeune, Condom, 1838, 64 p. - A.D. Lot-et-Garonne, 16 J 26, correspondance d'érudits, Fonds Tamizey de Larroque.
283. […] "La Société Historique de Gascogne", Bulletin de la Société Archéologique du Gers, 1961, p. 5-42; Imprimerie F. Cocharaux, Auch, 1953, p. 8.
306. Prefecture of the department of the Gers and archbishopric: some sixty km south of Agen by the present N 21.
307. A mischievous inversion by Tamizey of the months of the revolutionary calendar (ventôse corresponding to January-February and pluviôse to February-March), established by the poet Fabre d'Églantine and put into force from 5 October 1793: Furet (François) - Richet (Denis), La Révolution française, Fayard, 1973, p. 235.
387. See note 137: carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003. - Christian de Bentzmann (1857-1883) [sic] is the son of Léon de Bentzmann (1813-1884) and of his wife Marie-Amélie, née Delmas de Grammont; see 10 September 1890 [A.P. Baquier].
390. From the Gascon: arrebiscoulà, rebiscoulà means to revive, comfort, resuscitate, as in rebiscoulà lo heuc, to revive the fire. See Palay (S.), op. cit., p. 59.
391. Étienne Serres, born at Clairac in 1787, son of a physician who intended him for the same profession; he came to Paris for his studies, was appointed interne by competition in 1808 and received the doctor's diploma in 1810. One of the inspectors of the Hôtel-Dieu (1812) and head of anatomical work at the central amphitheatre (1814), he distinguished himself during the two foreign invasions by his zeal and courage in tending the wounded, whether in Paris or in its surroundings. The services he had rendered contributed no less than his works of physiology and embryogeny to his being given, in 1822, the post of chief physician at La Pitié; he continued in those active functions and gave up the practice of his art only on coming to replace the great physician Flourens in the chair of comparative anatomy at the Jardin des Plantes (January 1839). Attached to the Académie de médecine, where for that matter he rarely appeared, he was elected in 1828 a member of the Académie des sciences in the place of Chaussier; called in 1841 to preside over that learned body, he received on that occasion the cross of officer of the Légion d'honneur, and that of commander in 1846; among the commissions on which he served was that of higher scientific and literary studies in 1846. His principal works: Des lois de l'ostéogénie, Paris, 1815, in-fol. with atlas (prize of experimental physiology proposed by the Académie des sciences in 1820); Essai sur l'anatomie et la physiologie des dents, 1817; Anatomie comparée du cerveau dans les quatre classes des animaux vertébrés, Paris, 1824-26, 2 vol. in-8° with atlas in-4° (grand prix of the Académie des sciences in 1821); Anatomie comparée des monstruosités, in-fol., manuscript presented to the Académie in 1825; Traité des maladies organiques de l'axe cérébro-spinal du système nerveux, in-fol., manuscript communicated to the Académie in 1828; Théorie des formations et déformations organiques appliquée à l'anatomie de Rita-Christina et de la Duplicité monstrueuse, Paris, 1832, in-4° with atlas; Principes d'organogénie, Paris, 1842. He also wrote a very great number of memoirs or articles for the proceedings of the Académie des sciences and of the Museum d'histoire naturelle, the Archives générales de médecine, the Encyclopédie des sciences médicales, the Revue médicale, the Annales des sciences naturelles, etc.
395. Joseph-Juste Scaliger is one of the greatest philologists of the 16th century, a celebrated professor at Geneva, then at Leyden, and a target of the attacks of the Jesuits; having embraced Protestantism, he had left the kingdom of France after the Saint-Barthélemy. […] della Rovere, who became bishop of that town. There were published after his death Poemata omnia (1615), Epistolae omnes (1627) and Lettres françaises inédites (1879).
396. The Dupuy brothers, correspondents of Peiresc, sons of the jurisconsult Pierre Dupuy [sic], conseiller at the Parlement de Paris. The eldest, Christophe (Paris, 1579 - Rome, 1654), of the Carthusian order, long resided in Rome and was procurator general of his order; we have from him the Perroniana (1669), anecdotes on cardinal Du Perron. The second, Pierre (Agen, 1582 - Paris, 1651), drew up in 1615 an inventory of the trésor des chartes and in 1645 became, with his brother Jacques, keeper of the king's library. The youngest, Jacques (Paris, 1586-1656), after administering with his brother Pierre the library of président de Thou, their kinsman, became in 1645 keeper of the king's library, to which he bequeathed by will his own library of 9,000 volumes and ancient manuscripts; his collection of juridical, historical and other papers was ceded to the king in 1754 by its last possessor, Joly de Fleury.
397. Théophile de Viau (1590-1626), born at Clairac-en-Agenais, died in Paris. From a family of minor Protestant nobility, a successful poet and free-thinker who made no mystery of his homosexuality; protected by great lords, he was nevertheless prosecuted and condemned, notably after the publication in 1623 of the Parnasse satyrique; he escaped the stake only narrowly and died from the effects of his imprisonment.
398. […] Tamizey published and annotated, in 1872, the Relation de la défense de Dunkerque (1651-1652) which the marshal had left in manuscript at his death (impr. Gounouilhou, Bordeaux, in-8° of 88 pp.); letters, memoirs and negotiations from his diplomatic missions have also been published: Andrieu (J.), op. cit., t. 2, p. 283-284.
399. Florimond de Raymond (Agen, c. 1540 - Bordeaux, 1601): magistrate and man of letters, author notably of a Histoire de la naissance, des progrez et de la decadence de l'hérésie [la Réforme] en ce siècle, Bordeaux, 1605, denounced for its anti-Protestant sectarianism by P. Bayle in his Dictionnaire, Amsterdam, 1734, t. IV. - Tamizey de Larroque (Ph.), Essai sur la vie et les ouvrages de Florimond de Raymond, Conseiller au parlement de Bordeaux, Paris, A. Aubry, 1867. - Darricau (Raymond), "Bibliographie des oeuvres de Florimond de Raemond", Revue française d'histoire du livre, 1971, p. 128. - Dubois (Claude-Gilbert), La conception de l'histoire en France au XVIe siècle (1560-1610), A. G. Nizet, Paris, 1977, p. 46-54.
400. Combefis (François), 1605-1679, born at Marmande; he made profession in 1624 with the Dominicans at Bordeaux and taught philosophy and theology successively at Bordeaux, Saint-Maximin and Paris. Devoting himself to the reading of the Fathers of the Church and the Greek authors, he established himself as a Hellenist without peer. In 1653, Father Goar having fallen ill while working, by the king's order, on the Byzantine history then printing at the Louvre, Combefis was obliged to take his place. The prelates of France, assembled in Paris in 1655, chose him to work on the new editions of the Greek Fathers they wished to undertake, and granted him in 1656 a pension of five hundred livres, later doubled.
401. Cottin (Marie Risteau, dame), born at Tonneins in 1770, died near Palaiseau in 1807. Married young to a rich banker of Bordeaux who was ruined and died in 1793, she had to live on the proceeds of her books. Her five novels, Claire d'Albe (1799), Malvina (1801), Amélie de Mansfield (1803), Mathilde (1805) and Élisabeth ou les exilés de Sibérie (1806), full of romantic melancholy, had a prodigious success.
402. On 2 February 1857 had died, after ten months of marriage, Nathalie de Boëry, the first wife of Ph. Tamizey de Larroque, in giving birth to a son who did not survive and who was buried in his mother's coffin: see 25 April 1891. - Couture (L.), op. cit., p. 500.
403. Le Lutrin is a mock epic (1,228 alexandrines in six cantos) composed by the poet and critic Nicolas Boileau (1636-1711). It takes up a challenge thrown to Boileau by président Lamoignon: to compose a heroic poem on a subject as slight as a dispute, arbitrated by him, between the cantor and the treasurer of the Sainte-Chapelle in Paris, in which their fellow canons find themselves involved.
409. From the Latin [laetos] in the text: "They came to the happy places, softly turfed": Virgil, Aeneid, 6, 638.
410. Laurent-Josse Leclerc (1677-1736). Admitted into the community of the priests of Saint-Sulpice, he became licencié of the Sorbonne in 1704 and taught theology at Tulle and Orléans; in 1722 he went to Lyon to direct the seminary, a charge he kept until his death. He is the author of Remarques sur différents articles des trois premiers volumes du Dictionnaire de Moréri of the 1718 edition, Orléans, 1719-1721, 3 vol. (corrections taken up in the edition of 1725; the sequel was not printed); of the Bibliothèque des Auteurs cités dans le Dictionnaire de Richelet, placed at the head of the Lyon edition of that work, 1728; of a Lettre critique sur le Dictionnaire de Bayle, with a preface containing a judgement of that dictionary, La Haye (Lyon), 1732; and of a Lettre pour servir d'éclaircissement to articles 82 and 88 of the Mémoires de Trévoux (Aug. and Sept. 1735), inserted in the same collection (May 1736), in which he vindicates his father, the engraver Sébastien Leclerc (1637-1714), from the accusation of plagiarism raised against him by d'Aleman over the ordre françois, among others. The abbé Leclerc is esteemed for the exactness of his criticism, but also has the reputation of losing himself in the most minute details. He is further the author of a Histoire des Papes, a Chronologie des rois de France de la première race, an abridged Vie of his father with the catalogue of his works, a Traité du plagiat and an Apologie du P. Labbe, notably; but none of his manuscripts saw the light. - Philippe-Jacques Maussac (1590-1650), magistrate and great Hellenist of Languedoc origin, president of the chambre des comptes of Montpellier from 1628, correspondent of the principal savants of his time: Saumaise, Sirmond, Dupuy, etc.
411. Tamizey de Larroque edited the Impressions de voyage… dans la Provence alpestre of Pierre Gassendi, Digne, 1888 (extr. des Annales des Basses-Alpes, bulletin de la Société scientifique et littéraire de Digne).
412. A publication founded in August 1829, intended first as a "journal des voyages" and a "recueil de la politique, de l'administration et des moeurs". Taken over two years after its creation by François Buloz, it became fortnightly; through the contributions of academicians, university men and men of letters its director made it the rallying point of the constitutional monarchy and the liberal bourgeoisie. The Revue des Deux Mondes thenceforth published all the romantic authors who have remained most famous (from Chateaubriand to Hugo); Sainte-Beuve and Gustave Planche there reviewed the plays of Scribe and Dumas and the novels of Stendhal, Mérimée, Heine and Turgenev, all published in the review; in 1855 more than 10,000 subscribers discovered there the poems of an unknown, Charles Baudelaire. After the revolution of February 1848 Buloz defined his review as "a great literary centre": the romantic government of Lamartine served his interests. Under the Second Empire he showed a less and less muffled opposition and won a wide public by entrusting each rubric to specialists: the fine arts to Delacroix, then Henri Regnault; the sciences to Littré, Taine and Claude Bernard; politics and economics to Victor Cousin, Guizot, Quinet; history to Renan and Michelet. Succeeding his father in 1877, Charles Buloz kept the same editorial line; but the arrival of Fernand Brunetière in 1893 turned the review toward a violently anti-Dreyfusard Catholicism, which changed at the beginning of the 20th century into a very marked right-wing conservatism and traditionalism, lasting until his death in 1906 and bending thereafter: Sirinelli (J.-F.) and Couty (D.) dir., Dictionnaire de l'histoire de France, A. Colin, t. 2, 1999, p. 1376.
413. Les Comptes consulaires de Riscle de 1440 à 1507 (texte gascon), published in the collection of the Archives Historiques de la Gascogne by Paul Parfouru, 1886-1892.
414. Paul Parfouru (1846-1905), former pupil of the École des chartes, author of a study on the history of the théâtre de l'Odéon in Paris (1876-1882). First archivist of the Gers, he published in 1882 Construction de la voûte du choeur de la cathédrale d'Auch 1617-1620; in 1884 the Catalogue des incunables de la bibliothèque d'Auch, précédé d'une notice historique…; in 1885 Lettres et mémoires inédits de M. d'Etigny, intendant de la généralité d'Auch et Pau de 1751 à 1767; in 1887 Voyages de deux bourgeois d'Auch à la cour de France en 1528 et 1529 and L'instruction publique à Fleurance avant 1789; in 1890 he contributed a letter to Du Bartas (Guillaume de Salluste, seigneur), Choix de poésies françaises et gasconnes, avec notice et notes littéraires, par Olivier de Gourcuff et Paul Bénétrix…; finally he signed, in 1892, the Inventaire de la série C des Archives départementales du Gers and, in 1924, with R. Pagel, archivist of the Gers, the Inventaire de la série E of the same archives. Continuing his career at the Archives départementales d'Ille-et-Vilaine, he also published pieces and inventories of Breton fonds between 1892 and 1896. - Correspondence of Tamizey with P. Parfouru: A.D. Lot-et-Garonne, 16 J 21, correspondance d'érudits, Fonds Tamizey de Larroque.
415. See 23 August 1889.
416. Tamizey de Larroque (Ph.), Les infortunes d'un commissaire du XVe siècle [Jean Raphaël], impr. de V. Lenthéric, 1887, in-16, 15 p. (extr. du Sud-Ouest). - The commissaire's trace is found again in Centini (Nicolas), Les Coutumes de Port Ste-Marie (texte latin de 1501), mémoire de maîtrise, Université Bordeaux III, 2001. - […] Le Bien ducal, poème…, éd. Tamizey de Larroque (Ph.), Bordeaux, 1893.
417. Kept in the Département des manuscrits of the Bibliothèque Nationale, the collection issues from the labours of Jean Doat, born at the beginning of the 17th century, died in 1683. Become President of the Chambre des comptes of Pau in 1646, he bought in 1663 the seigneury of Doat, near Montaner, whose name he bore only by accident. About the same time he proposed to Pierre de Carcavy, Colbert's librarian, to have copies made for him of ancient acts of historical interest, which accorded with the minister's ideas. He first sent an inventory of Béarnais titles, on which Colbert ticked the acts to be copied; a first piece of work was executed, and other inventories, concerning the Périgord and the Albret, were submitted to Carcavy. From 1664 to 1668 eight bales of copies were dispatched to Paris. Doat then passed into Guyenne, and Colbert left him free to copy whatever he judged fit. Accompanied by a valet de chambre, two lackeys and eight copyists, Doat made a great tour through Guyenne and Languedoc, passing by Foix, Rodez, St-Geniez, Millau, Vabres, Rodez and Foix again (1666), Tarascon, Dax, Pamiers, Carcassonne, Mirepoix, Béziers, Narbonne (1668), Pau, Auch, Toulouse, Albi. Letters patent of 1 April and 23 October 1667 had given him a posteriori the mission of searching the archives for titles concerning the rights of the Crown and such as might serve history. He thus visited 135 depositories of civil and ecclesiastical archives and sent to Paris a […]
419. Figeac (Michel) dir., La Gironde de la Préhistoire à nos jours, Éditions Bordessoules, Saint-Jean d'Angély, 2005, p. 316-317.
420. North-east of Gontaud-de-Nogaret: carte 1738 Est, Seyches, série bleue 1 : 25 000, I.G.N., Paris, 1987.
421. Perhaps Longueville, about 6 km south-east of Marmande: carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003.
422. See 10 August 1891.
423. Gubernatis (comte Angelo de), 1840-1913. Professor at the institute of higher studies of Florence (1863-1890), then professor of Italian literature at the University of Rome, he published numerous works on literature and folklore: Sources védiques de l'épopée (1867), Mythologie zoologique (in English, 1872), Mythologie des plantes (in French, 1878-1880); several biographical dictionaries, of which the best known are the Dictionnaire biographique des écrivains contemporains (1879-1882) and the Dictionnaire international des écrivains du monde latin (1905).
424. From the Gascon: "What makes me praise Christopher Columbus the most is his determination and his constancy. That admirable stubborn head has shown […] that man can go as far and as high as he wills."
425. From the Latin, "with agreeable utility": Horace, P. 343, 164; O. 4, 9, 41.
427. Jules Delpit (1808-1891): born at Bordeaux. Licencié en droit in 1830, he followed the courses of the École des chartes and formed ties with Jules Quicherat, the famous medievalist historian of Joan of Arc, and with Augustin Thierry, the great historiographical authority of the early 19th century, who recommended him to the minister Villemain. He was thus charged, on 11 May 1842, with going to seek in England the archives carried off from Philippe-Auguste at Fréteval; he found no trace of them, but brought back many other documents. His mission, however, was interrupted for lack of funds. His father, a Bordeaux notable and former Girondin, printed the fruit of his labours at his own expense. Jules Delpit signed the Catalogue des manuscrits de la Bibliothèque de Bordeaux, the edition of the Chronique bordelaise of abbé Jean de Gaufretau and that of the Chronique du Parlement de Bordeaux of Étienne Cruseau (1588-1616). A member of the Académie de Bordeaux, he was for thirty years its secretary general. He took part in 1845 in the creation of two other societies: the Commission des Archives municipales de Bordeaux and the Société des bibliophiles de Guyenne. Among his works: Réfutation du livre de M. L. Veuillot sur le droit du seigneur ou Réponse d'un campagnard à un parisien, 1857; Origine de l'imprimerie en Guyenne, 1869; Le droit du seigneur, seconde réponse à M. Louis Veuillot, 1873; Le Prince ridicule, mazarinade inédite composée en 1650, 1875. See L'École Nationale des chartes, histoire de l'École depuis 1821, Gérard Klopp éditeur, 1997. - Tamizey de Larroque (Ph.), Jules Delpit. Notes biographiques et bibliographiques, impr. de la Dordogne, Périgueux, 1892 (extr. du Bulletin de la Société historique et archéologique du Périgord). - Correspondence of Tamizey with J. Delpit: A.D. Lot-et-Garonne, 16 J 11, correspondance d'érudits, Fonds Tamizey de Larroque.
429. Near Saint-Loubès (Gironde).
430. […] which he defended at the First Vatican Council. In 1867, a movement of opinion having arisen among Catholics in favour of the canonization of Christopher Columbus, cardinal Donnet took its direction: Tulard (J.) dir., Dictionnaire du Second Empire, Fayard, 1995, p. 433-434.
431. In Greek mythology Boreas personifies the north wind.
432. Is he related to baron L. du Corbier, who published Le Comté de Dognon et la Marche, 1907, and La vicomté de Limoges et le comté de Périgord, leur réunion à la couronne à l'avènement d'Henri IV. Étude historique sur le domaine royal en Limousin, Ducourtieux et Gout, Limoges, 1913, in-8°, 48 p. (extr. du Bulletin de la société archéologique du Limousin)?
433. On the Léaumont family, known since the beginning of the 13th century and established in the pays de Lomagne, in Guyenne and the surrounding regions: De La Chenaye-Desbois and Badier, Dictionnaire de la noblesse, t. XI, Schlesinger, Paris, 1867, p. 818-825.
434. Jean-Timothée Vincens de Tapol, born at Fauillet on 31 August 1830, was mayor of the commune of Fauillet, about 4 km south of Gontaud [A.P. Baquier].
435. From the Gascon: "The good Lord is plucking his geese."
436. This is Jean-Joseph Delmas de Grammont (born at Castillonnès, 23 December 1746, died at Miramont, 25 October 1809), colonel of cavalry and chevalier de Saint-Louis; he married, first, at La Sauvetat, 5 February 1787, Marthe-Sophie de Vivie de Duvivier (1767-1789) and, second, again at La Sauvetat, 21 December 1790, Marie-Henriette de Vivie de Duvivier (1765-1827); of this union were born nine children, among them Marie-Élisabeth, who married Alexandre Tamizey de Larroque and gave birth to Philippe Tamizey de Larroque [A.P. Baquier].
450. Henri Issanchou signed Le Livre d'or des postes (Réformes postales), Paris, bibliothèque européenne, 1885, VIII-230 p. (it is doubtless to this work that Tamizey alludes); Un système français de chèques, de recouvrements et de comptes courants postaux…, Paris, bureaux de la "Revue des ambulants", 1916, in-8°, 22 p. (publication of the Ligue du public contre les pertes de temps occasionnées par les administrations); Rôle et avantage du timbre-sauvegarde et de ses accessoires dans les sociétés mutuelles, les syndicats et les groupements de toute nature, Impr. des mutuelles, Paris, 1897, in-8°, 26 p.; La fin d'un monde… de paperasses, with preface by Ch.-M. Limousin, aux bureaux de "L'Écho des mutuelles", s.d., in-12, 251 p. He is also, often publishing at his own expense, the author of the Manuel du jeu des renards, l'éditeur alopécien, 1882, in-16, 48 p., which went through several editions (1889, 1891); of Les Jeux du foyer, 1891, and the Traité du jeu de la bataille, Paris, 1891, in-16, 12 p.; of La lecture apprise en dix heures sous forme de loto, jeu éducatif, Impr. de Mazeyrie, Tulle, 1891, in-32, 40 p.; and of L'abeille latine ou le latin pour tous, dicsionaire de citacions latines traduites en français…, édicion de la "Plume libre", Paris, 1901, 500 p. Under his direction were published the Clé du "Pantographe scriptolégique" et du "loto mnémotechnique et orthographique"… and the Vade-mecum de l'oedipe et du sphinx. - Dictionnaire des types de solutions à figures compliquées…, Paris, 5 parts in 1 vol., 1890, fig., of which there are likewise several editions. Finally H. Issanchou was the editor of the Galerie des contemporains…, Paris, bibliothèque européenne, 1885-1887, 6 vol. (he himself is the subject of no. 4, and Louis de Berluc-Pérussis of no. 5) and of the Panthéon du mérite, revue biographique et photographique, published for two years, Paris, 1887-1888 (2 vol.).
451. North of Gontaud, on the height above Traversat, on the present D 641; spot height 72 [286-3244]: carte 1738 Est, Seyches, série bleue 1 : 25 000, I.G.N., Paris, 1987.
454. See 30 October 1890.
455. Kept, by the old liturgical calendar, on 24 June: Audisio (Gabriel), Les Français d'hier, t. 2: Des croyants XVe-XIXe siècle, A. Colin, Paris, 1996, p. 454-457.
[…]. […] (Francisque), 1842-1917, born at Saint-Brieuc (Côtes d'Armor). Substitut at Barbézieux in 1865 and at Périgueux in 1867, he was sent as procureur to Limoges in 1870 and passed as avocat général to Agen in 1875; appointed conseiller at the cour d'appel of Bordeaux in 1883; chevalier of the Légion d'honneur, officier d'Académie, correspondent of the ministry of Public Instruction, member of the Société des Sciences, Lettres et Arts d'Agen, president of the Société des Archives Historiques de la Gironde. Under the pseudonym H. Loho he signed Du progrès de la Science pénitentiaire (Cour d'Agen, discours de rentrée de 1880), impr. Vve Lamy, Agen, 1880. He published Un magistrat au XVIe siècle, Étienne de La Boétie, impr. P. Noubel, Agen, 1876, 54 p. (Cour d'Agen, discours de rentrée de 1876); La Cour de France à Agen 1564-1565, impr. F. Lamy, 1878 (extr. Revue de l'Agenais, t. V, 1878); La Vie en province au XVIe siècle. - Comment Agen mangeait au temps des derniers Valois, impr. Vve Lamy, 1887; La Domination de la Reine de Navarre à Agen, a communication made at the Sorbonne, at the Congrès des Sociétés savantes of 1890 and printed in the Bulletin historique et philologique du Comité des Travaux historiques, 1890, nos 2-3, p. 226-256. He edited the Livre doré du Présidial d'Agen and wrote studies on the theatre of Agen and on markets and victuals in that town in the time of the Valois, from which he drew a fantasy published under a pseudonym in the Revue de l'Agenais in 1880: "De la part de Maître François Jauffrion" [the name of the apothecary of the […]].
[…]. […], Impr. de Rouillé-Ladevèze, 1886, in-8°, 12 p. (extr. du Bulletin critique, 1886). - [A.P. Baquier].
524. […] Histoire et historiens, une mutation idéologique des historiens français (1865-1885), Privat, 1976, p. 330. - A.D. Lot-et-Garonne, 16 J 5, correspondance d'érudits, Fonds Tamizey de Larroque.
The curé de Caseneuve will perhaps remember my cordial hospitality; he will certainly remember the hurricane of the 21st (which, for that matter, extended very far, for the newspapers of the South-West are all filled with the account of its ravages).

2 March
Someone was saying to me the other day, seeing me still so straight, so solid, that I was astonishing for my age; I replied that [...]

Notes:
524. Larcher (Laurent), « Radiographie de la Revue des questions historiques », La Revue des revues, revue internationale d'histoire et de bibliographie, no 23, 1997. - Revue des Questions historiques, October 1894 (a paper read on 6 September 1894 at the international Catholic scientific congress of Brussels). - A.D. Lot-et-Garonne, 16 J 2 (Fonds Tamizey de Larroque).
[Fragment of a note:] Le dernier duc d'Aquitaine Xavier de France (1753-1754). Étude historique; suivie de la réimpression des Vers sur la naissance de Mgr le duc d'Aquitaine, célébrée dans le Collège des Jésuites de Bordeaux, et de Pièces justificatives. - A. Picard, Paris; Féret et fils, Bordeaux (Impr. V. Lenthéric, Agen, 1890), 213 p.
532. An offshoot of the Revue historique, published since 1876. A Bulletin critique de littérature, d'histoire et de théologie appeared at Paris between 1881 and 1887, under the direction of Père Ingold of the Oratory.
533. The chanoine Ernest Allain contributed to the inventory of the old ecclesiastical holdings in the departmental archives of the Gironde: Inventaire sommaire des archives départementales antérieures à 1790. Gironde. Archives ecclésiastiques. Série G (nos 1 à 920). Inventaire des fonds de l'archevêché et du chapitre métropolitain de Bordeaux, Bordeaux, impr. G. Gounouilhou, 1892. He is the author of studies in religious history: Une vie inédite de saint Emilion d'après le ms Y 1 de l'archevêché de Bordeaux, Bruxelles, impr. de Polleunis et Ceuterick, 1894 (extr. Analecta bollandiana, t. 13, 1894); Un Ordo ad sponsandum bordelais du XVe siècle, Impr. nationale, 1895 (extr. du Bulletin historique et philologique, 1894); « L'Église de Bordeaux au dernier siècle du Moyen Âge (1350-1450) », Revue des Questions historiques, Oct. 1895. But the chanoine Allain published above all on the schools question, notably: L'instruction primaire avant la Révolution, which went through several editions, one of them in 1881 with a preface by Mgr de La Bouillerie, Paris, librairie de la Société bibliographique (a reworking of an article published in 1875 in the Revue des Questions historiques); L'oeuvre scolaire de la Révolution. 1789-1802. Études critiques et documents inédits, Paris, Firmin-Didot, 1891 (expanding an article published in Le Contemporain in February 1883); « L'organisation administrative et financière du diocèse de Bordeaux avant la Révolution ».
3 August
Yesterday we spent a good day in the company of my sister, my brother-in-law, my great-nephew Robert Cramaix-Hugonis (561), the future cavalry colonel (a brigadier at nineteen), the notary Joseph Goulet (562), and M. l'abbé Roux, curate at the church of Saint-Louis in Bordeaux, former master at the collège of Bazas, son of the former dean of the faculty of letters. I was happy to receive M. l'abbé Roux under my roof, first because he is an amiable and learned priest, then because I keep a very grateful memory of his father, with whom I had excellent relations and who, as president of the Académie de Bordeaux (563), presented me with the gold medal for my Florimond de Raymond (564), accompanying the presentation with the most flattering words.

A former prefect of the Aube and of the Bas-Rhin, M. Isidore Salles, sends me today a magnificent volume of Gascon poems. One of the pieces of the collection, entitled Lous Sabens (558), is dedicated to me. It is an honour to which I am very sensitive. For the rest, fine books are raining upon the Peiresc pavilion, and I have also just received, from the author's own hand, the first volume (printed by Didot (559)) of M. Gabriel Hanotaux's Histoire du Cardinal de Richelieu (560).

6 August
Today, Sunday, arrival of M. Louis Audiat (565), who, for the third time, comes to spend a few days with us. May the good Lord often give me such visitors, as amiable as they are learned, whose conversation is as sweet as it is fruitful! This adjective reminds me that my dear president is a great lover of fruit, and he arrives most opportunely, for my orchard will offer him peaches, pears, chasselas and muscats, all already ripened by the [...]

Notes:
[Fragments of a devotional bibliography:] ... de 37 p.; Élévations sur les Douleurs et les Enseignements du Coeur de Jésus pendant le Chemin de la Croix d'après les écrits de la Bienheureuse Marguerite-Marie; suivies de prières pour le salut de la France, et d'un Exercice pour le Chemin de la Croix, Paris, Adolphe Josse, 1881, in-18, VI-214 p. - Petit Bréviaire du Sacré-Coeur de Jésus. - Petits Offices pour chaque jour de la semaine, et Exercice pendant la Messe. Extr. de la vie et des oeuvres authentiques de la Bienheureuse Marguerite-Marie, 5e éd., Nancy, Soc. nancéienne de Propagande, 1882, in-32, 143 p.
[On Ramond:] Lettres inédites du baron Ramond de Carbonnières..., edited by Philippe Tamizey de Larroque, Toulouse, 1893 (extr. de la Revue des Pyrénées et de la France méridionale, March-April 1893). Louis-François-Élisabeth, baron Ramond de Carbonnières (1753-1827), geologist and politician. Born at Strasbourg, an intimate counsellor of the cardinal de Rohan, he was deputy to the Legislative Assembly and belonged to the constitutional-royalist party; he therefore had to give up his seat after 10 August. In 1796 he became professor of natural history at the école centrale of Tarbes, then deputy to the Corps législatif (1800-1806). In 1804 he composed, at Napoleon's request, a brochure entitled Naturel et Légitime, to demonstrate the necessity of transforming the consulate into an empire. He received as reward the title of baron and, in 1806, the prefecture of the Puy-de-Dôme. Louis XVIII named him maître des requêtes (1815) and conseiller d'État (1818). Ramond studied above all the geomorphology of the Pyrenees: Observations faites dans les Pyrénées (1789), Voyage au mont Perdu (1801), Coup d'oeil général et comparatif sur les Alpes et les Pyrénées (1834).
556. Tamizey de Larroque (Philippe), Un notaire d'autrefois, Me Baboulène [de Beauville], peint par lui-même dans sa correspondance inédite avec le comte de Galard de Brassac-Béarn, Agen, impr. Vve Lamy, 1893, 30 p. (extr. de la Revue de l'Agenais, XX, 1893). - La Revue de l'Agenais et des anciennes provinces du Sud-Ouest, historique, littéraire, scientifique et artistique, bulletin de l'Académie des Sciences, Lettres et Arts d'Agen, was published at Agen from 1874. On the circumstances of its foundation: Lauzun (Philippe), La Société académique d'Agen 1776-1900, Picard, Paris, 1900, p. 171.
558. Isidore Salles (de Gosse), 1821-1900, published Gascounhe. Le brabe yent de noste. Navets debis, with a preface by V. Lespy, Paris, Maisonneuve et C. Leclerc, 1893, in-8°, XVI-475 p., in which Lous Sabens (Les Savants) appears.
559. The prestigious Parisian publishing house whose origins go back to the seventeenth century. Made illustrious under the direction of Firmin Didot (1764-1836), inventor of stereotypy, it became in 1811 the printing house of the Institut. The Didot typeface, created by François-Ambroise Didot (1730-1804), is at the base of modern French typography.
560. Gabriel Hanotaux (1853-1944). Former student of the École des Chartes. Attached in 1879 to the archives of the ministry of Foreign Affairs, he became chef de cabinet of the minister under Gambetta and Jules Ferry, and in 1885 was named counsellor of embassy at Constantinople. After a passage through the Chamber as deputy for the Aisne (1886-1889), he returned to the ministry of Foreign Affairs, where he had been director of consulates since 1892 when he was called to head the ministry in the Dupuy and Ribot cabinets (May 1894-Nov. 1895). He held the same portfolio again in the Méline cabinet (May 1896-June 1898) and in that capacity handled the important questions attached to the tightening of the Franco-Russian alliance and to the delimitation of the French and English possessions in Africa. From 1898 he gave up politics to devote himself to his historical work. In 1897 the completion of the two-volume Histoire du cardinal de Richelieu won him a seat in the Académie française. On this work and its author: Jouhaud (Christian), La main de Richelieu ou le pouvoir cardinal, Gallimard, 1991, p. 14-35.
561. The son of Tamizey's niece Marie-Élisabeth-Gabrielle Truaut, born at Lavardac on 5 September 1851, who on 17 February 1873 had married Pierre-Henri Cramaix, born at Ste-Foy (Gironde) on 25 May 1846, son of Ambroise Cramaix, merchant at Ste-Foy, and of dame Hugonis [A.P. Baquier].
562. Tamizey deposited his will in his office, at Gontaud, on 23 November 1894: see 12 November 1890.
563. The abbé A. Roux published Le Pape saint Gélase 1er (492-496), étude sur sa vie et ses écrits, Paris, E. Thorin, 1889, 224 p. His father, Philippe-Jacques Roux, professor at the faculty of letters of Bordeaux, published between 1861 and 1872 several articles in the Actes de l'Académie des sciences, belles-lettres et arts de Bordeaux, among them « Des formes diverses de la satire dans la littérature française du Moyen âge » (72 p., 2e trim. 1861); « Rapport sur le concours de poésie de 1862 » (3e et 4e trim. 1862); « Tableau général de la civilisation et de la littérature française à toutes leurs époques, discours de réception à l'Académie des sciences... de Bordeaux » (21 p., idem); « Considérations générales sur l'histoire de la prose française depuis l'époque de ses premiers essais jusqu'au siècle de Louis XIV, lecture faite à l'Académie dans la séance du 12 mars 1863 » (13 p., 1863); « Rapport général sur les travaux de l'Académie des sciences, belles-lettres et arts de Bordeaux pour l'année 1864-1865 » (4e trim. 1864) and « pour l'année 1866-1867 » (4e trim. 1866); « Transformation épique du Charlemagne de l'histoire » (38 p., 1er trim. 1865); « Réflexions sur "le Misanthrope" » (30 p., 1866); « Étude sur le Mithridate de Racine » (36 p., 1868); « Du Génie et des influences de la littérature française depuis ses premières origines jusqu'au XVIe siècle » (32 p., 1872); and the « Rapport de M. Roux sur la publication du Breviari d'amor de Matfre Ermengaud ». - A.D. Lot-et-Garonne, 16 J 24 (Fonds Tamizey de Larroque).
564. Tamizey de Larroque (Philippe), Essai sur la vie et les ouvrages de Florimond de Raymond, conseiller au parlement de Bordeaux, Paris, A. Aubry, 1867, 135 p.
[From an entry on a visit from M. Maurice Calbet (567) and friends:] [...] famous scholars of the Agenais. M. Maurice Calbet would like to publish the most curious of these documents with my assistance. We would make of them a little collection under the title of Plaquettes agenaises (573). Can this project be realized? I would dearly like it. MM. [...] Calbet would furnish the copies, and I would take charge of the sauce. My guests were most amiable to me: they gave me, for my collection of autographs, a letter of Ramond (574), a letter of Lacépède (575) and a letter of Lacuée (576), and they promised me still others.

30 August
Good rain yesterday, which lasted several hours and which, please God, will begin again harder than ever today, for the sky is quite grey. What a pretty and gentle shade grey is, when one has been blinded for several weeks by the [...]

Notes:
567. Maurice Calbet had published, in 1892, Juvenilia, Marmande, impr. de E. Demeaux, in-16, 66 p. See note 84.
568. Philippe Tamizey de Larroque edited the Lettres inédites de Jean-Florimond Boudon de Saint-Amans, suivies de Lettres de Philippe Dumas. Causerie sur Saint-Amans par son fils Casimir, Paris, 1895, in the collection Les Correspondants de Grandidier, IV. On his role in the beginnings of the Académie d'Agen, see Lauzun (Ph.), op. cit., p. XII-XIII and p. 349. - On Saint-Amans, see Chaudruc de Crazannes, Notice sur la vie et les ouvrages de Boudon de Saint-Amans, Agen, 1832, in-8°.
569. [The opening of the note is lost; it concerns Bory de Saint-Vincent.] Hounded from France by the police of the new regime, he led a turbulent and difficult existence which ended only with his repatriation in 1820. In 1829 he was charged with a scientific expedition to the Morea, and he then became head of the historical bureau at the ministry of War. His scientific publications are abundant (8 pages of the Catalogue des imprimés de la Bibliothèque Nationale) [he co-signed, notably, with Jules Dumont d'Urville, the Récit du voyage autour du monde, sur le navire « La Coquille », 1822-1825]; to these are added press articles and a few literary pieces: Justification de la conduite et des opinions de M. Bory de St-Vincent, Membre de la Chambre des Représentants et proscrit par l'ordonnance du 24 juillet, Eymery, Paris, 1815, and Bruxelles, 1816, 110 p.; La Fille Grenadier, comédie en un acte, mêlée de couplets, Barba, Paris, 1817; Le mariage par billet de logement, staged at Seville in 1822; Samuel ou le Livre du Seigneur. Traduction d'un manuscrit hébreu exhumé de la Bibliothèque ci-devant Impériale. - Histoire authentique de l'Empereur Apollyon et du Roi Béhémot, par le Très-Saint-Esprit, Liège et Paris, 1816 [a rather satirical anonymous text, dedicated to Chateaubriand, which had a certain resonance]: Andrieu (J.), op. cit., t. 1, p. 97-100. - Lauzun (Ph.), op. cit., p. 94.
572. The abbé Joseph Labrunie. Des Extraits des essais historiques et critiques d'Argenton sur l'Agenais par Joseph Labrunie. Première dissertation: les Nitiobriges were published by Adolphe Magen, Agen, impr. P. Noubel, 1856, 76 p. (Collection de documents inédits sur l'histoire de l'Agenais. - Extrait du Recueil des travaux de la Société d'agriculture, sciences et arts d'Agen, t. VIII). The same Adolphe Magen also published and annotated Les livres liturgiques de l'église d'Agen, considérés comme monuments historiques, IIe dissertation de J. Labrunie, Impr. P. Noubel, 1860, 79 p. In 1892, Mathieu-Oswald Fallières published the Abrégé chronologique des antiquités d'Agen par l'abbé Joseph Labrunie, Agen, Ferran frères, 1892, LIII-214 p. (extr. de la Revue de l'Agenais). See Lauzun (Ph.), op. cit., p. 15 ff., p. 205, p. 345.
573. As such, the project came to nothing. After publishing, in a Collection méridionale which appeared between 1869 and 1875, in in-8° volumes, a first series of unpublished documents [Mémoires de Bertrand de Vignoles (1621-1622); Sonnets exotériques du poète condomois G.-M. Imbert; La Relation du siège de Dunkerque par le maréchal d'Estrades; Les Lettres inédites du cardinal d'Armagnac; Les OEuvres de Jean Rus], Ph. Tamizey de Larroque had, from 1868 to 1880, published another collection of unpublished documents under the title Plaquettes gontaudaises, which numbers six items: Vie d'Eustorg de Beaulieu, poète huguenot, ex-organiste de Lectoure; Quelques lettres d'Isaac de Lapeyrère, le préadamite; Quelques Mazarinades inconnues; Sonnets inédits d'Olivier de Magny; Récit de l'assassinat de Boisse-Pardaillan. Pierre Serin has found a publication not recorded in the Catalogue des imprimés de la Bibliothèque Nationale, entitled « Une centaine de documents inédits pour servir à l'histoire de la ville de Gontaud (1532-1789) », in Recueil des travaux de la Société d'Agriculture, Sciences et Arts d'Agen, 2e série, t. XIII, 2e partie, Agen, 1898. It is the last, or at least one of the very last, of Tamizey's productions at the close of his life, announcing further publications. See Andrieu (J.), op. cit., t. II, t. III.
574. See 16 July 1893.
575. See 19 August 1893 (note 567).
576. Jean [...] (Gascogne), lawyer at the Cour d'appel de Paris, member of the Société des auteurs dramatiques and of the Société des gens de lettres, mayor of Roquefort d'Agen (1862-...). In 1898 he also wrote the account of the festivities given at Agen on 6 and 7 August for the centenary of Jasmin, published at Agen, Impr. et lithographie agenaises, 54 p. He also signed, in 1854, several brochures on the bill modifying the law of 21 March 1832 on army recruitment (printed by J. Claye, Paris). - Tamizey de Larroque (Philippe), Madame la comtesse Marie de Raymond, Auch, G. Foix, 1886, p. 9 (note). - A.D. Lot-et-Garonne, 16 J 17 (Fonds Tamizey de Larroque).

[From an entry on the death of Adolphe Magen:] [...] has asked me for a necrological article for the Revue de l'Agenais, and I will answer his appeal as best I can (589).
I had, moreover, summarized in advance, in four lines, the life of that great worker, in this dedication (sic) of my collection of Lettres inédites de quelques hommes célèbres de l'Agenais, published only a few weeks ago (590):

To my old friend Adolphe Magen,
who for more than half a century
has devoted himself entirely to our dear Agenais,
fraternal homage.

10 October
Arrival of a cask of white wine offered me by my dear cousin and brother-in-law Paul de Boëry (591). This wine comes from Soumensac (592) and reminds me of the playful etymology I gave, in my youth, for the name of that locality, where, I was told, there had once been a house of Templars: by Jove! I cried, Templars as they were, they drank so much and so much of that good wine that they were to be seen rolling under the table, sub mensa. We shall not broach our cask until after Easter, and we shall draw from it... strength to resist the heats of summer.

12 October
A good mark for my new farmers! They have just carried out a magnificent moving of earth. It was a pleasure to see them, the man, the woman and the son, filling the carts and pushing them the whole day long. This work had been neglected for many years, and behind the ovens there was an accumulation of earth and composts whose disuse was pitiful. I have warmly encouraged and congratulated my farmers. I hope they will continue to walk in this good path. The more the earth is stirred, the more it is worth. It is an alma parens (593) which gives the more, the more it is turned over.

13 October
For me mournings follow upon mournings. Yesterday, at seven in the evening, I received a telegram, sent from Algiers (596) at ten o'clock, by which Colonel Eugène Delmas de Grammont (597) tells me of the death of my aunt and mother-in-law in these terms: "My mother died this morning." Madame Delmas de Grammont, née de James (598), was more than an octogenarian. She had a disease of the liver and, for some weeks, this disease had worsened in the most alarming fashion. The event was therefore foreseen; it is no less painful, for my aunt had always been very good to me and to my children. There was a time when she said of me with enthusiasm: "He is the pearl of sons-in-law." She was a woman of infinite wit who told a story delightfully. How many curious details I owe to her about the court of Louis-Philippe! She had been brought up by her aunt, Mme de Mallet (née de Bellonde) (599), with the princesses of Orléans, and she had been the intimate friend of the princesse Marie and of the princesse Louise, queen of the Belgians (600). She was my last relative of the generation before mine. She alone represented a whole past which is very dear to me. I am now, more than ever, the doyen of the family.

My swallows have been wheeling about in a frantic way. Are they preparing to leave us? Is it an exercise meant to limber up their wings for the great voyage? A thousand times they have passed and repassed before the windows of my study. Could it be a farewell? They would have fine weather for it, for the day is superb and, all sunlit, repays us for the ugly rains of all this beginning of October.

16 October
A magnificent day, like all those of the past week. The sun beams as in summer; so my swallows have put off their departure and wheel gracefully before my windows. The sowing is being done under the most favourable conditions. Each morning since the operation began I have gone to watch the grain being cast into the deeply dug furrows. May the good Lord give an abundant harvest to all cultivators in general, and to my valiant farmers in particular!

It is not one good mark, it is a thousand good marks that I owe to the chanoine Allain, who has just sent me 25 copies (one of them on hollande (594)) of the offprint of the two charming pieces of prose and verse published by him and by M. Louis Audiat in the Revue Catholique de Bordeaux, in honour of the Peiresc pavilion and of my old Chestnut. The plaquette, printed in only forty copies, will later be much sought after (595). The prose-writer and the poet were equally amiable toward their host and friend, and if I am to be judged by their too-fine praises, I shall have nothing to complain of.

18 October
The fine weather continues. It is amid the splendours of an admirable sun that I have just completed the preparation of the sixth volume of the Lettres de Peiresc (601). While my farmers work with such ardour around me, I work with no less ardour, and they and I alike can say to ourselves, each evening, with a just [...]

[...] good friends. May God give them all eternal peace! - I forgot to note under the month of October that I have twice imitated my hero Peiresc in his generous concessions of books to his fellow workers: I had Picard send the 3 volumes of the Langlois-Stein collection (610) to M. L. de Berluc-Perussis, and Hachette the 9 volumes of the Saint-Simon-Boislisle (611) to M. le chanoine Allain (612). I felt a lively pleasure on receiving the warm thanks of my two friends.

3 November
Today I sent to the imprimerie Lamy, through the archivist Tholin (613), the manuscript of my notice on Adolphe Magen, finished on the day of the Dead. By another touching coincidence, I received from Madame Magen a very rare and admirably bound volume which she gives me in memory of her poor husband: the Latin poem of Raymond de Massac, a physician born at Clairac, on the waters of Pougues.

Note in the margin: A copy mentioned in the Bibliographie générale de l'Agenais (II, 125). It will be for me a doubly precious volume.

4 November
The librairie Hachette has just sent me the new edition (entirely recast and considerably enlarged) of the Dictionnaire universel des Contemporains. I have been done the honour of a very long article there (it has no fewer than 45 lines), and I am thus more favourably treated than many men more notable than I. Unfortunately, while some of my smallest publications are exactly recorded, such as the 8-page notice on Hautesvignes, some of my most important ones have been forgotten, such as the monograph on Marmande, my Lettres de Balzac (in-4°) and my Lettres de Joseph Scaliger (gr. in-8°). Moreover, only a single volume, the first, of the Lettres de Jean Chapelain (615) is mentioned, and a double and grave error has been committed in giving 17 volumes to my Correspondance de Peiresc, which in all will have only ten, and in dating the beginning of the publication 1839 instead of 1888.

8 November
I continue to imitate, in a small way, my great model, my great hero Fabri de Peiresc (616). After giving books, as he did, to my friends to help them in their work, I give them the most cordial hospitality. I have kept for three days my former neighbour, M. l'abbé Alis (617), at present curé of Sainte-Radegonde (618). He had brought me his manuscript of the [...]

Notes:
586. Xavier de Lassalle published in the Journal de Lot-et-Garonne several necrological articles, notably on Dr Belloc (number of 14-15 January 1890); in 1898, on Henri Noubel; in 1904, finally, on Emmanuel [...]
587. The Journal de Lot-et-Garonne replaced the Messager de Lot-et-Garonne; bi-weekly, then tri-weekly or daily, then weekly, it was published at Agen between March 1806 and May 1917. From 17 September 1870 this « journal politique et littéraire de Lot-et-Garonne » dated its numbering from 1789, the foundation year of the Journal patriotique de l'Agenois. - Le Républicain progressiste de Lot-et-Garonne: see Andrieu (J.), op. cit., t. 1, p. 105, and t. 3, p. 393. - See Andrieu (J.), op. cit., t. 3, p. 389.
588. Georges Tholin, born in 1843 at Amplepuis (Rhône), former student of the École des chartes, archivist of the department of Lot-et-Garonne since 1867. He completed the inventory of the Archives of Lot-et-Garonne and drew up that of the communal archives of Agen. Under his direction the « Bibliothèque départementale » took on regional importance. He is the author of works and editions of texts bearing essentially on the heritage and history of the Agenais in the early modern period: Andrieu (J.), Bibliographie générale de l'Agenais, t. 2, p. 340-343. A necrological notice is devoted to him in the Revue de l'Agenais, 1922, p. 195. - Tamizey de Larroque (Philippe), Madame la comtesse Marie de Raymond, Auch, G. Foix, 1886, p. 11. - A.D. Lot-et-Garonne, 16 J 1 and 16 J 26 (Fonds Tamizey de Larroque).
589. Philippe Tamizey de Larroque published in the Revue de l'Agenais, XX, 1893, a long memorial article in which he pays him vibrant homage: Adolphe Magen (1818-1893), Impr. Vve Lamy, Agen, 1894, in-4°, 23 p. - A.D. Lot-et-Garonne, 16 J 1 and 16 J 18 (Fonds Tamizey de Larroque).
590. Lettres inédites de quelques hommes célèbres de l'Agenais, recueillies, publiées et annotées par Ph. Tamizey de Larroque, Agen, Ferran frères, 1893, 168 p. (extr. du Recueil des Travaux de la Société des lettres, sciences et arts d'Agen).
591. Marie-Henriette Delmas de Grammont, elder sister of the mother of Philippe Tamizey de Larroque, and therefore his aunt, had married Jean-Pierre de Boëry. Of this union were born Nathalie, Tamizey's first wife, and Paul, here mentioned [A.P. Baquier].
592. About 25 km north of Gontaud and some ten km north of Miramont, the traditional residence of the Delmas de Grammont: carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003.
593. From the Latin: "nourishing mother", or rather "generous parent".
594. That is, on the slightly heavy laid luxury paper known as "papier hollande".
595. A majestic chestnut tree stood at Larroque, near the Peiresc pavilion. A plaquette of homage entitled Au pavillon Peiresc, le vieux châtaignier is devoted to it, signed L. Audiat, E. Allain, A. de Gagnaud, Ph. Tamizey de Larroque, D. Granier, J. de Boëry, Charles Boy, s.d. (the foreword, written by Tamizey, is dated 2 August 1897); it bears the statement « imprimé à 120 exemplaires, tous réservés aux bons amis » [A.P. Baquier].
599. See 30 July 1892. - Mr de Belonde is the author of a Relation inédite de l'assassinat du duc [...] d'Angleterre, published by Tamizey in the Revue des Questions Historiques in 1872. Mr de Belonde, born at Lamothe d'Alès in 1742, was commissaire ordonnateur des guerres in 1772 and chevalier de St Louis. He is the great-grandfather of Philippe Tamizey de Larroque: Andrieu (J.), op. cit., t. 1, p. 328.
600. These are the two eldest daughters of Louis-Philippe and of Marie-Amélie des Deux-Siciles, who gave birth to three daughters and five sons. The princesse Louise (1812-1850) married, in 1832, Léopold I, king of the Belgians. The princesse Marie married duke Frédéric-Guillaume-Alexandre of Wurtemberg (1813-1839).
610. Archives de l'histoire de France, published in 1891-1893 by the historians, former students of the École des chartes, Charles-Victor Langlois (1863-1929) and Henri Stein (1862-1940).
611. This is the reference edition of the Mémoires of Louis de Rouvroy, duc de Saint-Simon (1675-1755), procured by Arthur de Boislisle (1835-1908), published in 41 volumes from 1879 in the collection of the « Grands écrivains de la France ».
612. See 29 March 1893.
613. See 6 October 1893.
615. Notice sur la ville de Marmande, Villeneuve-sur-Lot, impr. de X. Duteix, 1872, 136 p. (Préfecture de Lot-et-Garonne. Monographies historiques publiées sous les auspices du conseil général); Tamizey de Larroque (Ph.) ed., Lettres de Jean-Louis Guez de Balzac, Paris, 1873, in-4° (extr. des Documents inédits publiés par les soins du Ministère de l'Instruction publique); Douze lettres inédites à René de Voyer de Paulmy, comte d'Argenson et à Jacques Dupuy..., Paris, 1863 (extr. des Actes de l'académie impériale des sciences, belles-lettres et arts de Bordeaux, 3e et 4e trimestres 1862); Tamizey de Larroque (Ph.) ed., Lettres de Jean Chapelain..., Paris, 1880-1883, 2 vol., in-4° (Collection de documents inédits sur l'histoire de France, 2e série).
616. See 20 January 1893.
617. Alis (abbé R.-L.) is the author of the Histoire de la ville d'Aiguillon et de ses environs depuis l'époque gallo-romaine jusqu'à nos jours, Agen, Ferran frères, 1895, in-8°, a work he dedicates (p. VII) to Philippe Tamizey de Larroque. He also published the Histoire de la ville, du château et des seigneurs de Caumont, Agen, Ferran frères, 1898, in-8°; the Histoire de la ville et de la baronnie de Ste-Bazeille (de l'ancien diocèse de Bazas), depuis l'époque gallo-romaine jusqu'à nos jours, Agen, Michel et Médan, 1892, in-8°; and the Notice sur le château, les anciens seigneurs et la paroisse de Mauvezin (près Marmande)..., précédée d'une description archéologique et accompagnée de nombreux dessins par Charles Boullet..., Agen, Michel, 1887 (preceded by a letter of Ph. Tamizey de Larroque). - A.D. Lot-et-Garonne, 16 J 2 (Fonds Tamizey de Larroque).
7 December
Received today from the imprimerie Gounouilhon (sic) the offprint of my edition of Le Bien ducal, a poem of the end of the fifteenth century by Jean Guilloche (of Bordeaux), published for the first time from the unique manuscript of the library of Turin (628). My plaquette looks very well and has rejoiced the heart of its old father. It is no. 19 of my publications of 1893 and no. 167 of my publications in all. Shall I reach no. 200?

8 December
I have had violets planted all around my ancient oak, to set life, in its most smiling aspect, beside death. What a mild temperature today! The sun was so good and so warm that I was able to sit on the grass of the meadow which is being broken up for planting vines.

30 December
I enter today upon my 66th year. I feel very strong for my age and, if no crushing unforeseen tile, what is called a serious accident, falls upon me, I shall reach, I believe, the ninety years of my poor father. My intelligence too keeps all its vigour, my memory all its freshness, my pen all its facility. May the good Lord preserve all these inestimable gifts for me! If, with that, he gives me the peace so necessary to all men, but ultra-necessary to those who are growing old, I shall have nothing to complain of in my lot. While working in my study as if I were still in the prime of life, I continue to have work done in my garden (where box, junipers and holly have again been planted quite recently) and in my enclosure (where the farmers have carried out about half of the clearing of the moor that I mean to transform into a vineyard). I note with joy, as I finish this page and this year, that my project of a monument to be raised in honour of Peiresc seems well under way. What am I saying? Instead of one monument we shall have two: the funerary chapel repaired, and a marble column surmounted by the bust of my hero, set up on one of the public squares of the town of Aix. Let us see whether the double project will be realized by this time next year!

- 1894 -

5 January
Here we are in the great colds, in the very great colds, the excessive colds. This morning, entering my study, which had remained shut all night, I found that the thermometer showed 5 degrees below zero. Outside, the thermometer marked 12, exceeding by one degree the maximum colds of last year. Yesterday we were already at nine degrees below zero in the morning, and at 7 degrees at 1 in the afternoon. 1894 is beginning well!

11 January
Gay rays of sunshine dance in my study, where I work with no other fire.

20 January
I receive this instant from the imprimerie Lamy a hundred copies of my notice on Adolphe Magen, plus one copy on papier de Hollande. May this notice make well known and well admired the man who was one of my best friends! My confrère M. Adrien Lavergne (629), vice-president of the Société [...]

The weeks follow one another and do not resemble one another. We are enjoying today a spring-like temperature. Twelve degrees below zero have been succeeded by twelve degrees above.

11 February
I received today the visit of a young Périgourdin scholar, M. Edmond Boisserie de Masmontet (633) (living at the château de Fayolles (634), by Gardonne), who has already worked a great deal, though he is barely 23. He occupies himself above all with Périgourdin genealogies and monographs, and he has gathered several thousand index cards furnished by his combing of unpublished documents. My passing guest seemed to me very amiable, and I hope he will soon come back to spend a whole day here with his neighbour and friend the Comte de Saint-Sand (sic) (635). - In the morning I had received from the patriarch-archaeologist Leo Drouyn (636) a letter in which I found these lines on my Adolphe Magen: "It is a plaquette that I had [...]"

[From the account of a gathering:] [...] (644), retired lieutenant-colonel, my cousin Joseph de Vivie, former procureur de la République, his son Jacques, law student and future archiviste paléographe (645), my two nephews Jean and Guy de Boëry, and finally my confrère Philippe Lauzun (646), former president of the Société [...]

Notes:
626. Eugène Halphen (1820-1912), who edited several unpublished documents relating to the reigns of Henri IV and Louis XIII, notably Discours, harangues et lettres of the first Bourbon (1879, 1886); the Lettres to the ambassadors Béthune (1889-1901), Sillery (1866) and Villiers (1885) and to the chancellor de Bellièvre (1872-1883); the Journal inédit de Robert Arnauld d'Andilly (1620-1630), Paris, 1888-1909; the Lettres inédites de Jacques Faye, seigneur d'Espeisses et de Charles Faye, Paris, 1880, in-16; as well as the Véritable discours de la naissance et de la vie de Mgr le prince de Condé..., de René de Cumont, sgr de Fiefbrun, Paris, 1861; the Mémoires-journaux de Pierre de l'Estoile, Paris, 12 vol. in-8°, 1875-1896, an edition to which Ph. Tamizey de Larroque contributed; the Lettres inédites de Louis XIII... à M. de Césy, ambassadeur de France à Constantinople (1631-1639), Paris, 1904; and the account of the journey to Brussels of the prince and princesse de Condé in 1609-1610 by Claude-Enoch Virey. - A.D. Lot-et-Garonne, 16 J 16 (Fonds Tamizey de Larroque).
627. Paul Mariéton (1862-1911), born at Lyon and Provençal by adoption, was one of the most zealous promoters of the Félibrige movement. In 1885 he founded the Revue félibréenne. Besides collections of verse and literary studies properly French, he published La terre provençale (1890), Les voyages félibréens et cigaliers (1891-1894-1897), Jasmin (1898), and La Provence nouvelle, histoire du félibrige (1901).
628. Extract from the Actes de l'Académie nationale des sciences, belles-lettres et arts de Bordeaux, 1893, published by the imprimerie G. Gounouilhou of Bordeaux.
629. Adrien Lavergne signed the accounts of the excursions of the Société française d'archéologie in the Couserans (1904) (extr. de la Revue de Gascogne) and in the Gers (1881) (extr. de la Revue de Gascogne), as well as of the excursions of the Société archéologique du Gers, of which he was approached to be president (see note 473), published in the Bulletin de la Société archéologique du Gers in 1893-1895, 1904 and 1907. His necrological notice is in the Bulletin de la Société Archéologique du Gers, 1914, p. 173. He is also the author of Les Chemins de Saint-Jacques en Gascogne, P. Chollet, Bordeaux, 1887, in-8°, 76 p. (extr. de la Revue de Gascogne); Études archéologiques dans le Gers, H. Delesques, Caen, 1903, in-8°, 13 p. (extr. du Compte rendu du LXVIIIe Congrès archéologique de France, tenu en 1901 à Agen et à Auch); and Peyrusse-Grande, Peyrusse-Vieille et Mouchan (Gers), impr. de A. Chauvin et fils, s.d., in-8°, 4 p.
633. Jean-Edmond de Boisserie de Masmontet signed, with Aymard de Saint-Saud and R. de Manthe, the Généalogie de Bideran, Périgord, Agenais, Quercy, Poitou, J. Castanet, Bergerac, 1896, in-8°, 238 p. He is also the author of Maisons-Laffitte et son château, L. Milly, 1902, in-16, 40 p.; Rigaud ou Rigault de Granfont, du Marchet, des Baratous, du Mineur, de Lartigue, des Guignards... etc. en Guienne, aux bureaux de la Revue héraldique, Paris, 1906, in-8°, 21 p.; and a Monographie du canton de Sigoulès [extr. de l'Histoire de l'arrondissement de Bergerac], s.l., 1907, in-8°, 66 p. - A.D. Lot-et-Garonne, 16 J 28 (Fonds Tamizey de Larroque).
634. In the present department of the Dordogne, near La Force.
635. [Fragment:] ... Paul Huet et le Comte de Saint-Saud, Paris, 1893 (extr. du Bulletin de la Société historique et archéologique du Périgord). - There is no trace of such a title among Tamizey's publications known to date, nor of a periodical of that name appearing at that date.
644. See 13 October 1893.
645. See 11 February 1894 and 16 October 1896, where Jacques de Vivie de Régie is again in question.
646. [The text of the note is lost apart from its archival reference:] Fonds Tamizey de Larroque.
[From the account of the stay at Aix:] [...] Bresc (655), his brother-in-law M. de Berluc, the bibliophile justice of the peace Alexandre Mouttet (656) and the librarian of the Méjanes, François Vidal (657). After lunch, visit to M. le Doyen Guibal (658). In the evening, at the hôtel de ville, meeting of the adherents to the project of forming the committee for the monument to be raised to Peiresc. I am named by acclamation honorary president of this committee, and M. Guibal its actual president. I make the acquaintance, this evening, of two amiable men with whom I had long been in correspondence, the marquis de Boisgelin (659) and Gustave Mouravit (660).

6 May, Sunday. Various visits: to the mayor, to the rector, to my old friend Lorédan Larchey (661), to my old friend Mlle Marie [Pellechet] (662) [...]

[...] [Mgr the archbishop (663)], who showed me a wholly paternal kindness and who most generously gave me 300 francs for the monument to Peiresc. Lunched, that day, at the house of M. le Marquis de Boisgelin (664), in whose study I worked several hours. While I talked with the learned genealogist, Mlle Nathalie, his daughter, made a pencil sketch of my head, from which, a few days later, the sculptor Gondran (665) moulded my medallion in terracotta.

Note in the margin: It is this same sculptor who likewise moulded in terracotta the medallion of Peiresc, after the portrait by Finsonius (666), of which I brought back from Aix so fine a copy, which adorns my dining-room. I bought three copies of the Peiresc medallion: one for myself, another for the study of M. Léopold Delisle, and the third for the library of Carpentras (667).

[From the account of a dinner:] [...] host M. de Bresc. All the important members of the Peirescian Committee were there. My health was drunk. I talked much with M. de Ribbe (668) and with the marquis de Saporta (669). I kept [...]

Notes:
[On Maurice Campagne:] Maurice Campagne, of whom there is question several times in the Livre de raison, published the Histoire de Bacalan du XVe au XXe s., Impr. de J. Castanet, Bergerac, 1905, in-8°, XV-308 p.
647. See 10 August 1893.
648. Livre-journal... (1609-1652) de Pierre de Bessot, published by Ph. Tamizey de Larroque.
649. See note 461.
655. Louis de Bresc, advocate at the Cour d'Aix, is the author of Fêtes d'Aups, à l'occasion de la Saint-Pancrace, bravade et entrée historique de Charles, comte d'Anjou et de Provence, Aix, impr. de Illy, 1857, in-16, 14 p. (extr. de « L'écho des Bouches-du-Rhône » of 31 May 1857); Armorial des communes de Provence ou Dictionnaire géographique et héraldique des villes et villages des Bouches-du-Rhône, du Var, des Basses-Alpes, de Vaucluse et des Alpes-Maritimes, Paris, Bachelin-Deflorenne, 1867, LI-370 p.; Épisode des guerres de religion en Provence, massacre d'Aups (octobre 1574), Draguignan, impr. A. Latil, 1877, 21 p. (extr. du Bulletin de la Société d'études scientifiques et archéologiques de la ville de Draguignan); Excursion d'Aix à Fontaine-l'Evêque, Draguignan, impr. de C. et A. Latil, 1889, 20 p.; and a piece printed at Aix-en-Provence, impr. de J. Barthélemy, 1898, 58 p. He also interested himself in another Provençal glory, Mirabeau, publishing: À propos de l'acte de naissance de Mirabeau, Aix-en-Provence, impr. de A. Makaire, 1888, 7 p.; La galerie du château de Mirabeau, Aix-en-Provence, impr. de J. Remondet-Aubin, 1894, 37 p.; and Une arrière-petite-nièce de Mirabeau, homme de lettres... [the comtesse de Martel de Janville, née Mirabeau, "Gyp" in literature], Aix-en-Provence, A. Makaire, 1889, 7 p., continued by Une petite-nièce de Mirabeau, notes généalogiques et anecdotiques, Manosque, impr. de A. Demontoy, 1890, 32 p. He is, moreover, the author of Méry et le salon de lady Greig à Marseille, notes et souvenirs, Toulon, impr. de L. Laurent, 1879, 35 p. (extr. du Bulletin de l'Académie du Var), and of La Saint-Huberty au théâtre d'Aix, 1783, Aix-en-Provence, impr. de Garcin et Didier, 1893, 23 p. (extr. des Mémoires de l'Académie d'Aix). - On Sigaud de Bresc: C. d'E.-A., Dictionnaire des familles françaises anciennes et notables, t. 4, Évreux, Impr. de Charles Hérissey, 1905, p. 4. - A.D. Lot-et-Garonne, 16 J 7 (Fonds Tamizey de Larroque).
656. Alexandre Mouttet (1814-1901) co-signed with Ph. Tamizey de Larroque Autour de Peiresc: le baptistaire de Nicolas Fabri; sa biographie anecdotique par J.-J. Bouchard; les jardins de Belgancier; le testament de Peiresc; son tombeau; les héritiers et les continuateurs de Peiresc...; also Souvenirs et notes littéraires; I. Auguste Garbeiron. II. Frelons, Toulon, impr. de L. Laurent, 1876, 82 p. - A.D. Lot-et-Garonne, 16 J 20 (Fonds Tamizey de Larroque).
657. See 27 October 1893. - A.D. Lot-et-Garonne, 16 J 27 (Fonds Tamizey de Larroque).
658. Dean of the faculty of letters of Aix-en-Provence, president of the Peiresc committee (see 11 November 1896). - Correspondence of Tamizey with Georges Guibal: A.D. Lot-et-Garonne, 16 J 15 (Fonds Tamizey de Larroque).
659. Charles-Joseph-Eugène, marquis de Boisgelin, had contributed to volume XVII of the Correspondants de Peiresc (Lettres inédites écrites de Provence et de Syrie à Peiresc par François de Galaup-Chasteuil, published by Ph. Tamizey de Larroque), signing a Notice généalogique sur la famille de Galaup-Chasteuil, Digne, 1890 (extr. des Annales des Basses-Alpes, bulletin de la Société scientifique et littéraire de Digne). The marquis de Boisgelin is also the author of other genealogical studies: Alayer, seigneurs de Champourein, Costemore, Le Poil, Digne, impr. de Chaspoul et Vve Barbaroux, 1899, 9 p. (extr. du Bulletin de la Société scientifique et littéraire des Basses-Alpes); Les Castellane à ..., Aix, A. Makaire, 1894, 12 p. (extr. du Bulletin de la Société scientifique et littéraire des Basses-Alpes); Esquisses généalogiques sur les familles de Provence, t. I, 1re partie, Draguignan, impr. de C. et A. Latil, 1900; Les Thomas, de La Garde, barons de Ste-Marguerite etc. Généalogie, Aix-en-Provence, A. Makaire, 1896, 149 p. (extr. du Bulletin de la Société d'études scientifiques et archéologiques de la ville de Draguignan, t. XX, 1894-1895). He edited, finally, the work of Maurice de Duranti de la Calade, La famille d'André, Digne, 1902 (extr. du Bulletin de la Société scientifique et littéraire des Basses-Alpes). - A.D. Lot-et-Garonne, 16 J 28 (Fonds Tamizey de Larroque).
660. Gustave Mouravit, born in 1840, notably published Le Livre et la petite bibliothèque d'amateur, essai de critique, d'histoire et de philosophie morale sur l'amour des livres, Paris, A. Aubry, 1869, XXIV-448 p.; Poètes et bibliophiles: les devises des vieux poètes, étude littéraire et bibliographique, Paris, D. Morgand et C. Fatout, 1879, in-4°, 47 p.; Les incunables de la Méjanes, rapport et voeu de l'Académie des sciences, agriculture, arts et belles-lettres d'Aix (séance du 18 mars 1889), Aix-en-Provence, impr. de Illy-Brun, 1889, 35 p. He also edited the Chroniques indiscrètes sur la Régence... of Charles-Pinot Duclos, Paris, 1878 (extr. des Mémoires secrets), and brought out at Bordeaux: Discours sur les lettres françaises au Moyen âge [signed Auguste Timavour], Bordeaux, impr. de J. Delmas, 1865, 20 p.; La Rose et l'Abeille. Rêverie, l'an des roses 6866, d'après Barbier, 20 p., Bordeaux, impr. de J. Delmas, 1866; Essai sur les poésies inédites de Paul Reynier, conférence faite à Marseille, le 2 mars 1868, Bordeaux, impr. de J. Delmas, 1868, 14 p. To him we also owe Édouard Tricotel et ses « Variations bibliographiques », Paris, aux bureaux de la Revue biblio-iconographique, 1900, 48 p., and Napoléon bibliophile; recherches spéciales de psychologie napoléonienne, avec documents inédits, Paris, A. Blaizot, 1905, 144 p. - A.D. Lot-et-Garonne, 16 J 20 (Fonds Tamizey de Larroque).
661. Lorédan Larchey (1831-1902), born at Metz, librarian (from 1873) then keeper (1890) of the Bibliothèque de l'Arsenal. Author of works full of originality, notably an argot dictionary, Excentricités du langage (1860); the Dictionnaire des noms contenant la recherche étymologique des formes anciennes (1880); the Almanach des noms, explaining 2,800 personal names (1881); and also the Journal de marche du sergent Fricasse de la 127e demi-brigade (1792-1802), d'après le manuscrit original (1882); Les Cahiers du capitaine Coignet d'après le manuscrit original (1883); L'Esprit de tout le monde (1892-1893); Le Monde féodal: costumes vrais (1899). - A.D. Lot-et-Garonne, 16 J 17 (Fonds Tamizey de Larroque).
662. Marie Pellechet (1840-1900) devoted her life to the study and description of French incunabula. Author of the Notes sur les livres liturgiques des diocèses d'Autun, de Châlons et de Mâcon (1883) and of the Notes sur les imprimeurs du Comtat Venaissin (1887), she published the catalogues of the incunabula of the public libraries of Dijon (1886), Versailles (1889) and Lyon (1893), and a survey of the incunabula of Colmar. She also worked on the edition of the catalogue, drawn up by Daunou, of the incunabula of the bibliothèque Sainte-Geneviève. The first volume of her capital work, the Catalogue général des incunables des bibliothèques publiques de France, was printed in 1897. - A.D. Lot-et-Garonne, 16 J 21 (Fonds Tamizey de Larroque).
663. Mgr Goulhe-Soulard.
664. See 6 May 1894.
665. The Dictionnaire critique et documentaire des peintres, sculpteurs... of E. Bénezit, Gründ, 1951, t. 4, makes no mention of this artist. In the necrological notice he published (Philippe Tamizey de Larroque, Digne, impr. de Chaspoul et Vve Barbaroux, 1898, extr. du Bulletin de la Société scientifique et littéraire des Basses-Alpes), L. de Berluc-Perussis indicates in a note (p. 7) that « un habile artiste aixois, M. J. Gondran, a modelé et édité en terre cuite les médaillons de Peiresc et de Larroque, se faisant pendant, le premier d'après la belle toile de Finsonius, le second d'après un crayon, fort ressemblant, dû à Mlle de Boisgelin, et datant du dernier voyage de Larroque en Provence (1894) ».
666. Louis Finson or Finsonius, Flemish painter, born at Bruges about 1580, died at Arles in 1632. He went to study in Italy, where he was much influenced by Caravaggio. He then stayed long in the Midi, where he left his best works at Aix and in Arles.
667. On L. Delisle: see 18 February 1893 in particular. - This is the Bibliothèque Inguimbertine, founded at Carpentras in 1746 by Malachie d'Inguimbert. He had built up a library and a « cabinet », above all in Italy, when, as a simple religious, he was in the service of cardinal Laurent Corsini, the future pope Clement XII. Named bishop of Carpentras in 1735, he opened his collections to the public in 1745 and, when he died in 1757, bequeathed them to his native town. He had by then joined to his own holdings the collections of Peiresc and of the Thomassin de Mazaugues. Other important gifts from Comtadins in the nineteenth century came to complete them. The old holdings of this library thus comprise about 150,000 volumes, 2,300 manuscripts, 1,300 bundles and registers from the town archives, from the thirteenth to the nineteenth century, 200 incunabula, partly Italian, rare editions of the sixteenth century, and a rich musical library bequeathed by Bonaventure Laurens. Since 1847 the Bibliothèque Inguimbertine has been housed in an eighteenth-century hôtel particulier built by the architect Antoine d'Allemand, on the boulevard Albin-Durand. - Dictionnaire de la noblesse of La Chenaye-Desbois and Badier, 1866, t. 17.
668. Le [...]
[On the abbé Paul de Terris:] ..., Toulon, Impr. de E. Costel, 1883, 24 p.; Institution Sainte-Marie, La Seyne-sur-Mer, près Toulon. Nos marins provençaux, discours prononcé... à la distribution des prix, le 24 juillet 1883, Toulon, Impr. de E. Costel, 1883, 22 p.; Panégyrique de saint Agricol, prononcé le 2 septembre 1883... en l'église paroissiale de St-Agricol, Avignon, impr. de Aubanel frères, 1883, 24 p.; Souvenir du triduum célébré en l'église métropolitaine d'Avignon en l'honneur du bienheureux J.-B. de La Salle, les 15, 16, 17 juin 1888. Panégyrique..., Avignon, impr. de Aubanel frères, 1888, 56 p.; Apt, la ville sainte de Provence, discours prononcé en la basilique de Sainte-Anne d'Apt... à l'occasion du pèlerinage régional du 29 avril 1895, Apt, impr. de Vve A. Jean, 1895, in-16, 16 p.; Le triomphe de la religion catholique à Orange, discours prononcé à l'occasion des fêtes du troisième centenaire, le... 12 février 1899, Avignon, impr. de Aubanel frères, 1899, 23 p. Several of his funeral eulogies and necrologies, published notably in La Semaine religieuse d'Avignon, were also issued as offprints. Paul de Terris further published Theses theologicae dogmaticae et morales... has theses propugnabunt: pro parte dogmatica D. Paul J. Terris, pro parte morali D. Josephus Lautier..., Avignon, impr. de Aubanel frères, 1864, in-fol., and the Nouveau mois du Sacré-Coeur, trente méditations sur les litanies du Coeur de Jésus..., Avignon, impr. de Aubanel frères, 1893, in-16, XIV-379 p. He is likewise the author of historical studies, among them: Recherches historiques et littéraires sur l'ancienne liturgie de l'église d'Apt, Avignon, Impr. de F. Seguin aîné, 1874, 78 p. (extr. des Mémoires de la Société littéraire, scientifique et artistique d'Apt, nouvelle série, t. 1); Joseph-François de Remerville, étude biographique, critique et littéraire..., Avignon, Seguin frères, 1881, 64 p.; La Charte de Montrieux et les traditions provençales, étude historique..., Apt, Impr. de Vve A. Jean, 1897, 14 p. (extr. de la Revue Sainte-Anne d'Apt); and quite particularly Un père de famille au XVIIe siècle, d'après un document original et inédit (extr. du 5e vol. des Annales de la Société littéraire, scientifique et artistique d'Apt, Apt, 1870, 18 p.) [an analysis of the livre de raison of one of his forebears, Gaspard de Mongé du Caire, seigneur du Caire et en partie de Puimichel, secrétaire du roi en la chancellerie, près la cour de Parlement de Provence, who died on 25 October 1726], mentioned in the Livre de raison de la famille de Fontainemarie, published by Ph. Tamizey de Larroque, Agen, Impr. Vve Lamy, 1889, p. 119-120.
Lot-et-Garonne, 16 J 26, scholars' correspondence: Fonds Tamizey de Larroque.

[Diary] 24, Thursday. Lunch at Notre-Dame de Vie 689 with the chaplain, the abbé de Faucher, together with his brother and sister-in-law, M. and Mme Paul de Faucher 690. Excursion to Vénasque 691. Supper with M. Gustave Barcilon 692.
28 May, Monday. Half a day spent at the château du Rocan with [...]
20, Wednesday. Henri leaves for Lyon, and his father for Montpellier; we part at Avignon. Lunched with Camille Chabaneau 708; supped and slept at the house of M. and Mme Léon Pélissia 709.
22, Friday. A day spent at Lamontjoie 710 with my friend Mlle Anna Brugère.
23, Saturday. Lunch with M. Jules Andrieu 711.
Marginal note: For several days I saw, at the Inguimbertine 704 and at my lodgings, Father Roy, a Jesuit, and canon Albanès 705. I also saw very often the artist Jules Laurens 706, who gave me a magnificent drawing of Mont Ventoux, and Dr Poujade 707, former prefect of Vaucluse and former deputy.
[Diary] Today we had four young men around our table: our cousin Raoul de Bonnegarde 720 and MM. de Cordier 721, [...]

[The abbé Requin] published several studies on the history of art and, more especially, of printing in Avignon and Provence: Documents inédits sur les origines de la typographie, Paris, E. Leroux, 1890, 24 p. (extr. du Bulletin historique et philologique du Comité des travaux historiques et scientifiques); Documents inédits sur les peintres, peintres-verriers et enlumineurs d'Avignon au XVe siècle, Paris, impr. de E. Plon, Nourrit et Cie, 1889, III-99 p.; Histoire de la faïence artistique de Moustiers…, t. 1, Paris, G. Rapilly, 1903, gr. in-4°; Jean de Fontay et le tombeau d'Alain Chartier, Paris, E. Leroux, 1893, 10 p. (extr. du Bulletin archéologique du Comité des travaux historiques et scientifiques, 1892); Laugier Sapor, évêque de Gap et chancelier de Provence, son emprisonnement dans le château de Tarascon (1425-1427), Gap, L. Jean et Peyrot, 1912, 86 p. (extr. du Bulletin de la Société d'études historiques, scientifiques et littéraires des Hautes-Alpes, 3e trimestre 1912); Origines de l'imprimerie en France (Avignon, 1444), Paris, Cercle de la librairie, 1891, 37 p., fig. (extr. du Journal général de l'imprimerie et de la librairie, du 28 février 1891); Philippe Mellan (1686-1762), discours prononcé à la distribution des prix… le 23 juillet 1883.
685 [...]
686 See 17 May 1894.
687 See 14 May 1894.
688 The abbé H. [...]
Alphonse, marquis de Gaudemaris, cavalry colonel, married to Mlle Cramail de Tronchay, died at Cannes in 1895. He descended from a family of the Comtat Venaissin, honourably known from the seventeenth century, which attained nobility by way of the doctorate in civil law of the University of Avignon, held successively in 1733 and 1757 by two of its members, since in the Comtat that degree conferred nobility in the first degree on its holders. Antoine-Jérôme-Félix-Augustin obtained from the pope, in May 1755, letters of rehabilitation of nobility with the grant of the title of marquis without infeudation. His son Charles-Imbert de Gaudemaris, justice of the peace, who died in 1831, never bore that title, which seems to have been resumed only in the second half of the nineteenth century.
699 The Gaudemaris had been allied since 1881 with the Gascon family of Lacave-Laplagne-Barris: C. d'E.-A., Dictionnaire des familles françaises anciennes ou notables à la fin du XIXe siècle, t. 20, Évreux, impr. Ch. Hérissey, 1929, p. 234-235.
700 This is Beaumes-de-Venise, near Carpentras, to the north in the direction of Mont Ventoux; it is the cradle of the Gaudemaris family.
701 Is there an error of transcription? Is this the M. Eyriès mentioned on 16 October 1894? Tamizey's correspondence with Eyris: A.D. Lot-et-Garonne, 16 J 13, scholars' correspondence: Fonds Tamizey de Larroque.
702 South of Carpentras, on the road to the Fontaine de Vaucluse.
703 La Fontaine-de-Vaucluse is a traditional excursion for tourists coming from Avignon or Carpentras, on account of the spectacular resurgence at the source of the Sorgue and of the memory of the life and works of the Italian poet Petrarch (1304-1374), broken and inspired by his unhappy love for the fair Laura, a memory marked notably by a column raised in his honour in 1804. It was to console himself for this hopeless passion that Petrarch often withdrew into the solitude of the Vaucluse, where he rhymed part of his famous sonnets. In particular, on 26 April 1336, he made the ascent of the isolated and majestic summit of Mont Ventoux, further north.
704 See 6 May 1894.
705 Canon Albanès signed, from 1886 onward, several works touching the history of Provence. He published notably the Chronique de l'abbaye de Saint Victor de Marseille; Saint Bénezet, fondateur du Pont d'Avignon, a thirteenth-century Provençal text; Sainte Douceline, fondatrice des béguines de Marseille, likewise a thirteenth-century text in the Provençal tongue; and the catalogue of the manuscripts of the Bibliothèque d'Arles. He also signed studies on the abbey of Silvacane, on Protestantism in Provence, on Jean Huet, bishop of Toulon in the service of King René, on the Grimoard family, on Marguerite-Marie Alacoque (1647-1690) and on Anne-Madeleine de Remuzat, propagator of the cult of the Sacré-Coeur de Jésus (1696-1730), and, finally, the Inventaire analytique des titres de la maison de Forbin, recueillis au château de St-Marcal, par M. le marquis de Forbin d'Oppède, Imprimerie marseillaise, Marseille, 1900. - A.D. Lot-et-Garonne, 16 J 2, scholars' correspondence: Fonds Tamizey de Larroque.
706 Jules-Joseph Laurens, draughtsman, watercolourist, lithographer, painter and engraver, born at Carpentras in 1825, died at St-Didier (Vaucluse) in 1901. Author of an abundant production, he exhibited chiefly lithographs at the Salon; he was made chevalier of the Légion d'honneur in 1868: Bénézit (E.), Dictionnaire critique et documentaire des peintres, sculpteurs…, Gründ, 1951, t. 5, p. 436.
707 This is probably Pierre- [...]
Loriol-du-Comtat, near Carpentras.
717 The famous historian Camille Jullian was born at Marseille in 1859 and died in Paris in 1933. A former student of the École Normale Supérieure, agrégé in history in 1880, member of the École française de Rome, he took his doctorate in letters in 1883 with a thesis on Les Transformations politiques de l'Italie sous les empereurs romains. Appointed professor at the Faculty of Letters of Bordeaux, he occupied himself above all with local history: Mélanges d'épigraphie bordelaise (1884); Inscriptions romaines de Bordeaux (1886); Ausone et Bordeaux (1893); Histoire de Bordeaux (1895). He then turned his activity to Gaulish antiquities with Gallia (1892), Vercingétorix (a vivid, colourful biography published in 1901) and the Recherches sur la religion gauloise (1904).
Appointed in 1905 to the chair of national antiquities at the Collège de France, he published a monumental Histoire de la Gaule in 8 volumes between 1907 and 1928 and, finally, in 1922, a great synthesis, De la Gaule à la France. He completely renewed this period of history, which until him had remained practically unexplored, bringing to his inquiry an exacting method and a minute erudition, but also a generous patriotism and a great warmth of expression. C. Jullian also directed the posthumous publication of the great work of his master Fustel de Coulanges, the Histoire des institutions politiques de l'ancienne France, and collected various papers of that historian in two volumes: Nouvelles recherches sur quelques problèmes d'histoire (1891) and Questions historiques (1892). A member of the Académie des inscriptions et belles-lettres from 1908, he was elected to the Académie française in 1924.
718 Henri-François Ouvré defended his two theses, one in Latin, as was then the custom (De Monarchia Dantis Aligherii florentini commentationem historicam scripsit H. Ouvré, in-8°, 56 p.), the other in French (Aubéry Du Maurier, étude sur l'histoire de la France et de la Hollande, 1566-1636, in-8°, 355 p.), both published in 1853 by A. Durand in Paris. He is also the author, in the Mémoires de la Société des antiquaires de l'Ouest, of studies on the history of Poitiers (1595-1628) and of the Ligue in that town (1855, t. XXI, and 1856, t. XXII), as well as on the sixteenth-century Poitevin poet and historian Jean Bouchet (1857, t. XXIV: Discours prononcé à la séance publique de la Société des antiquaires de l'Ouest, le 27 décembre 1857). We also have from him L'enseignement au moyen âge et les Facultés des lettres, discours prononcé à la séance de rentrée des Facultés de théologie, de droit et des lettres de l'Académie d'Aix, Aix, impr. de Pardigon, 1862, 39 p., and a Discours prononcé… à la distribution solennelle des prix du lycée d'Agen, le 6 août 1883, Agen, impr. de Bonnet et fils, 1883, 7 p.
719 See 29 March 1893.
720 See 2 January 1893.
721 Cited as a friend of Henri Tamizey de Larroque, having already visited Larroque with Vincens de Tapol and Léaumont on 30 March 1892: is there an error of transcription in his surname, written "Corbier" on 30 March 1892? The Armorial du Bordelais records a family "du Corbiers", lords of La Mothe, Rousselet, Mataplane, le Cluseau, Saint-Georges and Clairac, a noble family widespread in the Bordelais and the Blayais; nobility of Bordeaux (1789): Meller (P.), op. cit., t. 1, p. 273.
722 Jean de Léaumont, a friend of Tamizey's son Henri; he had already visited Larroque with Cordier (or Corbier) and Vincens de Tapol on 30 March 1892. On the Léaumont family, known from the beginning of the thirteenth century and established in the pays de Lomagne, in Guyenne and the surrounding districts: De La Chenaye-Desbois and Badier, Dictionnaire de la noblesse, t. XI, Schlesinger, Paris, 1867, p. 818-825.

[Diary] Tuesday 3 July. Lunched again at canon Allain's 719 with MM. the abbé Lafargue, parish priest of Saint-Médard-en-Jalles, Barckhausen and Habasque. Returned in the evening, with my son (back from Paris), to the pavillon Peiresc.
Marginal note: I record here (my livre de raison having been kept very irregularly since my return) the various visits, stays and meals of Jean de Boëry; of canon Allain 724; of the abbé Durey de Louga, parish priest of Lamontjoie; of the parish priests of Gontaud and of Saint-Pierre de Nogaret; of professor Brissaud 725; of the sculptor Daniel Campagne 726 and his son; of the abbé Alis 727 and his niece; of the abbé de Maisonneuve, parish priest of Birac; of M. Joseph Sarramia 728; and finally (7 October) of MM. Jules [...]
Saturday 14 July. Yesterday evening, from 5 to 7 o'clock, we suffered one of the most terrible storms that have ever devastated our region. We had three scourges together, and they seemed to vie with one another in fury: wind, hail and rain. Not only were the fruits (plums and grapes) lost, but the trees were broken and torn up. It is an incalculable loss. We suffered much here, but there was even more damage in other communes, notably in the commune of La Bretonnie 731, where the hillsides, still so green yesterday, are entirely stripped bare and seem to have been scorched by an immense fire. The wind was so violent that I feared to see it carry off my pavilion. Tiles were torn from the roof, and the water flooded the whole attic and rolled down the staircase like a torrent. In truth it was deafening, the cascade from the tiles "mingling with the din of the hail and the wind". My poor grapes, on which I was counting for my autumn lunches, were shredded to the very last. What consoles me a little is that my trees, with the exception of my chestnut, half of which was carried away, did not suffer as much as I feared, and that my old oak in particular withstood all the efforts of the tempest.
2 August. A little trip to Beaupuy with Henri. We lunched with M. and Mme Maurice Boisvert 732. I took advantage of the occasion to [...]

727 See 8 November 1893.
728 Tamizey's correspondence with Joseph Sarramia de Père: A.D. Lot-et-Garonne, 16 J 25. The latter edited the Relation de ce qui s'est passé à Villeneuve d'Agennois par les généreux exploits des Bourgeois…, Paris, N. Vivenay, 1652, Agen, Imprimerie moderne, 1895, 8 p.: A.D. Lot-et-Garonne, cote 10 PL 14/1 [n° inv. 16170].
729 See 23 June 1894.
730 See 3 November 1893.
731 Labretonie is about 8 km north-east of Gontaud: carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003.
[...] op. cit., p. 233-235. Tamizey's correspondence with the comtesse Marie de Raymond: A.D. Lot-et-Garonne, 16 J 23: Fonds Tamizey de Larroque.
753 That is to say, manuscript.

[Diary: inventory of the lost library] [...] the principal parts of the manuscript of Labenazie 754; a copy of the Lettres à J. Scaliger 755 and of the letters of président Maynard 756, which I meant to reprint; the Dictionnaire of Bayle, Beuchot edition; the Dictionnaire of Prosper Marchand 790; the Biographie universelle known as Michaud's, last edition, 52 vol. gr. in-8° 791; the Nouvelle Biographie générale, published by Didot under the direction of Dr Hoefer, 46 vol. in-8° 792; the Dictionnaire of Dom Chaudon 793, 1780, vol. in-8°; and the Hommes illustres de Provence by Father Bougerel 806; the Hommes illustres of Perrault 807; the collection of Teissier drawn from the Histoire of président de Thou 808 (4 vol. in-12).
BIBLIOGRAPHY AND LITERARY HISTORY. - 2°) - The Bibliothèque historique de la France of Lelong-Fontette 809, etc., 5 vol. in-f°.
[Note, beginning lost] [...] years later he joined the National, and was then appointed keeper of manuscripts at the Bibliothèque Nationale. On 4 June 1848 he was elected deputy for the Sarthe to the Assemblée nationale; he voted for the abolition of the death penalty, for the constitution, against the Rome expedition and against the principle of freedom of teaching. He resigned after the coup d'État and was dismissed from his post at the Bibliothèque Nationale. In 1861 he became librarian of the order of advocates; in 1862 he was elected a member of the Académie des inscriptions et belles-lettres. On 6 September 1871 he was appointed director of the Imprimerie nationale, remaining there until his retirement in 1882; in 1893 he became the first director of the fondation Thiers. He was a commander of the Légion d'honneur (June 1878). He had been editorial secretary of the Histoire littéraire de la France and one of the most assiduous collaborators of the Notices et extraits des manuscrits de la Bibliothèque Nationale, and he completed the Gallia Christiana, t. XIV-XVI. See Prevost (M.), Roman d'Amat, Tribout de Morembert (H.), Dictionnaire de biographie française, Paris, 1989, p. 726. - A.D. Lot-et-Garonne, 16 J 16, scholars' correspondence: Fonds Tamizey de Larroque.

[Diary: inventory continued] The publications of all the learned societies to which I belong, notably those of Agen, Aix-en-Provence, Béziers, Dijon, Périgueux and Toulouse.
Dom Brugelles 844: Chronique ecclésiastique d'Auch, in-4°. Marginal note: a magnificent copy, adorned with a princely binding, which had been given to me at the château de St-Roch by M. Georges de Monbrison in the following circumstances: a few minutes before my departure, I asked my dear host to let me take a note from Dom Brugelles. "I would rather," he said to me all too graciously, "that you carry off my volume and leave me the enjoyment of your conversation."
Canon Monlezun 845: Histoire de la Gascogne. Almost all the publications relating to the Agenais: the abbé Barrère 846, [...] the works of Mary Lafon 851, of Bladé 852, of d'Avezac 853, of Bernard 858 (the amiable bibliophile had given me everything he had published, more than fifty pamphlets or volumes), of the abbé Rance 859 (on Arles), of Dr Barthélemy 860 (on Aubagne), [...]

806 Father Joseph Bougerel, of the congregation of the Oratory (1680-1753), is the author of the Mémoires pour servir à l'histoire de plusieurs hommes illustres de Provence, C. Herissart fils, Paris, 1752, in-12. He had previously published, in 1718, a Projet d'une histoire des hommes illustres de Provence, and a Vie de Pierre Gassendi (1737).
844 Dom Louis Clément de Brugèles, author of the Chronique ecclésiastique du diocèse d'Auch, suivies de celles des comtes du même diocèse, J.-F. Robert, Toulouse, 1746, in-4°.
845 [...]
846 [Among the abbé Barrère's publications:] Ermitage de St-Vincent-de-Pompéjac, depuis son origine jusqu'à sa restauration par les Carmes déchaussés; comprenant le rétablissement providentiel de ces Religieux en France, une Dissertation sur l'Épiscopat de St Caprais et plusieurs pièces justificatives, impr. P. Noubel (au Couvent des Carmes), Agen, 1865, in-12, 372 p. ["The site of Pompéjac and the episcopate of St Caprais are obscure questions which the abbé Barrère treated with more conviction than authority," says J. Andrieu]; Berguille ou l'Extatique de Pontet. Apparitions accompagnées de divers prodiges, impr. S. Demeaux; libr. Ach. Chairoux; Paris et Bordeaux, s.d. (1874), in-18.
851 [Mary-Lafon:] this last work is, in fact, a literal reproduction of the Bibliographie des Patois of Pierquin de Gembloux. Quérard notes on this subject in the Supercheries littéraires, ed. of 1870, t. II, p. 1070: "There exists in the archives of the Société des Gens de lettres a report on the plagiarism of M. Mary-Lafon, and a judgment condemned him to 300 fr. Mary-Lafon nonetheless claimed the prix Gobert for his Histoire du Midi, in a letter to M. Guigniant, president of the Académie des Inscriptions et Belles-Lettres, Paris, impr. de Duverger, 1844, in-8°, 24 p." This did not discourage Mary-Lafon from going on to publish a Histoire littéraire du Midi de la France, C. Reinwald, Paris, 1882, XIII-421 p.; he is, besides, the author of a drama in three acts, Le Maréchal de [...]
854 Charles Defrémery, orientalist, born in 1822 at Cambrai (Nord). From 1840 to 1842 he gave himself to the thorough study of the oriental languages at the Collège de France (under professors Reinaud and Caussin for Arabic, and Quatremère and Jaubert for Persian). He was admitted to the Société asiatique following the publication of L'Histoire des sultans du Kharezm, par Mirkhoud (1842), a Persian text accompanied by historical and grammatical notes. In the same circle of studies he published the Histoire des sultans Ghourides (1844), translated from the Persian of Mirkhoud; the Histoire des Samanides (1845), translated from the same author; the Histoire des Seljoukides et des Ismaéliens ou Assassins de l'Iran (1849, translated from the Persian), on whom he made Nouvelles recherches; and the Histoire des Khans mongols du Turkestan et de la Transoxiane (1862), translated from the Persian of Khondémir. He also published works of historical geography, notably the annotated translation of the Voyages d'Ibn Batoutah dans la Perse, l'Asie centrale et l'Asie Mineure (1848-1851, 2 vol.), reprinted with the Arabic original (1853-1856, 3 vol.), and the Fragments de géographes et d'historiens arabes et persans inédits (1849), relating to the ancient peoples of the Caucasus and of southern Russia. An assiduous contributor to the Journal asiatique of Paris from 1842, several of his articles were collected under the title Mémoires d'histoire orientale, etc. (1854, first part). He finally published the Gulistan ou le parterre des roses (1858), translated from Sadi. Tamizey's correspondence with C. Defrémery: A.D. Lot-et-Garonne, 16 J 11, scholars' correspondence: Fonds Tamizey de Larroque.
856 [Cénac-Moncaut; the beginning of this note is lost] [...] entitled L'Église romaine et la liberté, Lyon-Paris, January 1848, in-8°. After the proclamation of the Republic, he wrote against the democrats an essay in allegorical pamphleteering, Fortun-peda ou les Aventures d'un grand agitateur, Auch; he also published Avant et pendant, Le commissaire malgré lui and L'École des représentants, political comedies in verse "imitated from Molière", Auch, 1849, in-12. The Crimean expedition inspired in him L'Europe et l'Orient, poème en 6 chants, Paris, 1857, in-8. His most important work is the Histoire des Pyrénées et des rapports internationaux de la France avec l'Espagne depuis les temps les plus reculés jusqu'à nos jours, Paris, 1853-1854, 5 vol. in-8. Entrusted in 1853 and 1857 with two scientific missions, the first to the Pyrenees, the second to the Netherlands, he published the results of the first under the title Voyages archéologiques dans les Pyrénées, 1857, 6 vol. His work on the Pyrenees earned, in 1861, a very honourable mention from the Académie des inscriptions et belles-lettres. Mayor of Saint-Elix (Gers), Cénac-Moncaut was elected by the canton of Mirande a member of the Conseil général of the Gers. Among his other publications: Le Congrès des brochures, 1860, in-4; La France et l'Europe latine, 1860, in-8; Marguerite, histoire du temps de Saint [...]

Marginal note: M. d'Avezac had given me all his publications, great and small. His son-in-law, M. Defrémery 854, had done the same.
The two collections, complete, are infinitely rare. Several other Academicians had likewise given me all their works, notably MM. Léopold Delisle and Paul Meyer.

[Diary: inventory continued] [...] of de Curie-Seimbres 855, of Cénac-Moncaut 856, of Bascle de Lagrèze; of canon Paul Guillaume 861 (on the Dauphiné); Louis XIII et le Béarn by the abbé Puyol 862. Footnote: I also had his Richer 863 and his Gap, and his work on the Imitation de J. C. 864. With his excellent books I have lost a great number of his letters, full of wit and heart.
All the works of Paul Raymond 865, given by him; of E. Gaussieur 866 (sic); Alain d'Albret by Luchaire 867; all the publications of G. Clément-Simon, of L. de Berluc-Perussis, of Charles de Ribbe, of Germain, dean of the faculty of Montpellier and member of the Institut, of Jules Delpit, of Commanay 868 (sic), of Leo Drouyn, of Dezeimeris; the Histoire du Périgord of Léon Dessalles (in 3 vol. in-8°); all the publications of Jules Andrieu, of Adolphe Magen, of Ph. Lauzun, of G. Tholin 869, of F. Moulenq 870, of the abbé Canéto 871, of the abbé [...]

858 [Auguste Bernard,] author of several archaeological notices inserted in the Mémoires de la Société des Antiquaires, the Revue du Lyonnais and the Bulletin du Bibliophile, and of the Histoire de l'Imprimerie royale du Louvre (1867). In 1853 he had published a work that was the fruit of long and laborious research, De l'Origine et des débuts de l'imprimerie en Europe, 2 vol. in-8°, with numerous facsimiles and a very detailed index. He signed, among others, Les d'Urfé (1839); Recherches bibliographiques sur le roman d'Astrée (1859); Le temple d'Auguste et la nationalité gauloise (Lyon, 1864).
859 The abbé A.-Joseph Rance-Bourrey published L'Académie d'Arles au XVIIe siècle d'après les documents originaux. Étude historique et critique, 1886-1890; L'ancien clergé d'Arles, Gaspard de St-Andiol et Gilles du Port, 1886; Une thèse de rhétorique au collège des Jésuites d'Arles (26 août 1683), précédé d'un aperçu historique sur le collège d'Arles, 1887.
861 Canon Paul Guillaume (1842-1914), born at Vars (Hautes-Alpes). The son of small farmers, he studied at the minor and major seminaries of Bordeaux and completed his studies in Rome, becoming professor at the abbey of Monte Cassino in 1868 and at the abbey of Cava, near Salerno, in 1870 (abbeys on which he published historical monographs). Back in France in 1878, having sought in vain from the bishop of Gap the title of historiographer of the diocese, he decided to follow the courses of the École des chartes as a free auditor while serving as chaplain in a convent of nuns at Champigny. He obtained the certificat d'aptitude paléographique, which enabled him to be appointed, in 1879, departmental archivist of the Hautes-Alpes, a post he occupied for 35 years, publishing: Inventaire sommaire des archives départementales… des Hautes-Alpes, 1887-1913, 13 vol.; Inventaire des Archives seigneuriales de l'Argentière, 1888; Inventaire sommaire des archives communales… des Hautes-Alpes, 1906-1913, 3 vol.; Répertoire des minutes notariales; Le clergé ancien et moderne du diocèse de Gap (extract from the inventory of series G), 1909. In 1879 he organized a departmental museum at Gap; in 1881 he founded the Société d'études des Hautes-Alpes, and in 1891 he began the publication of the Annales des Hautes-Alpes, of which he was the principal editor. An indefatigable worker, he published numerous works and studies whose titles fill 15 columns of the Catalogue général des imprimés de la Bibliothèque Nationale. He edited notably Le mystère de S. Eustache (joué en 1504), 1883; Le mystère de S. Anthoni de Viennès (copie de 1503), 1884; Istoria Petri et Pauli (mystère du XVe s.), 1887; Istorio de sanct Poncz (mystère en langue provençale du XVe s.), 1888; Les chartes de N.-D. de Bertaud (1188-1449), 1888; the Histoire générale des Alpes-Maritimes ou cottiennes of Marcellin Fornier (1642), 1890-92; Les chartes de Durbon (1116-1452), 1893; and La période révolutionnaire, le Consulat, l'Empire, la Restauration dans les Hautes-Alpes of Théodore Gautier, 1895. See Chevalier (U.), Le chanoine Paul Guillaume, 1915; Faucher (B.), Notice nécrologique sur le chanoine Paul Guillaume, 1917; Bulletin de la Société d'études des Hautes-Alpes, LVII, 1965, p. 99-122 (list of his works). Tamizey's correspondence with P. Guillaume: A.D. Lot-et-Garonne, 16 J 15, scholars' correspondence: Fonds Tamizey de Larroque.
862 Puyol (abbé) [the Catalogue des imprimés of the Bibliothèque Nationale gives Mgr Pierre-Édouard Puyol], Louis XIII et le Béarn ou rétablissement du catholicisme en Béarn et réunion du Béarn et de la Navarre à la France, Impr. de De Soye, Paris, 1872, 583 p., in-8°.
863 Edmond Richer, born at Chaource (Aube) in 1559, died in Paris in 1633. Born into a poor family, he had, in order to pursue his studies, to take service in a Paris college. He became professor of belles-lettres, doctor of the Sorbonne (1590), grand master of the collège du cardinal Lemoine (1594), and syndic of the Université de Paris, in which capacity he tried to set obstacles to the influence of the religious orders, principally the Jesuits. These denounced the ultra-Gallican opinions he had set out in his treatise De ecclesiastica et politica potestate (1611). The book having been condemned by the clergy of France (1612), Richer had to give up his functions; he was even for a time confined at Saint-Victor. Having become a canon of Paris, he retracted his doctrine before dying. We owe to him various works, notably for the classroom.
864 This religious book, dating from the beginning of the fifteenth century, has met with very great success among the laity and even among non-Christian readers. More than 60 French translations are counted; that which Pierre Corneille wrote in verse, that of Father de Gonnelieu, and the one known under the name of Lamennais are the most celebrated. This anonymous work (whence the lively controversies over its author), written in Latin, contains four books, independent of one another, which must at first have circulated separately, and which are remarkable for their simple and confident piety as well as for a deep knowledge of the human heart: 1. Useful counsels for the spiritual life; 2. Counsels for the interior life; 3. On interior consolation; 4. Devout exhortation to Holy Communion. The first book aims to detach man from himself and from the world; the second helps him to descend into his own heart; the third initiates him into the mysteries of divine love; the fourth, finally, unites him to God in the sacrament of the Eucharist. See note 764. See Tamizey de Larroque (Ph.), Preuves que Thomas a Kempis n'a pas composé l'Imitation de N.S.J.C., par Philippe Tamizey de Larroque…, Paris, A. Durand, 1862, in-8°, 82 p. (extract from the Annales de philosophie chrétienne, 5e série, t. III and IV, 1861).
867 Luchaire (Jean-Denis-Achille), born in Paris in 1846. He was professor of geography and of the history of the languages of the South of France at the Faculty of Letters of Bordeaux before becoming maître de conférences at the Faculty of Letters of Paris. The work Tamizey mentions here is Alain le Grand, sire d'Albret. L'administration royale et la Féodalité du Midi (1440-1552), Hachette, Paris, 1877, in-8°, 240 p.
868 This is Arnaud Communay, born at Pau in 1845. Licencié in law, former employee of the Bibliothèque de Bordeaux, attaché libre at the Archives départementales de la Gironde, member of the Société académique d'Agen and of the Conseil héraldique de France, vice-president of the Archives Historiques de la Gironde.
868 (continued) He published notably Le Parlement de Bordeaux. Notes biographiques sur ses principaux officiers, Impr. Olivier-Louis Favraud, Bordeaux, 1886, in-8°, 288 p.; Les Huguenots dans le Béarn et la Navarre. Documents inédits, Archives Historiques de la Gascogne, Auch, Paris, 1885, in-8°, 198 p.; Essai généalogique sur les Montferrand de Guyenne, 1888; Esquisses biographiques : les grands négociants bordelais au XVIIIe siècle, 1888; L'Ormée à Bordeaux d'après le journal inédit de J. de Filhot, 1888. See Andrieu (J.), op. cit., t. 1, p. 185-186. Tamizey's correspondence with A. Communay: A.D. Lot-et-Garonne, 16 J 10, scholars' correspondence: Fonds Tamizey de Larroque.
869 Georges Tholin, born in 1843 at Amplepuis (Rhône), former student of the École des chartes, archivist of the department of Lot-et-Garonne since 1867. He completed the inventory of the Archives of Lot-et-Garonne and drew up that of the communal archives of Agen; under his direction the "Bibliothèque départementale" acquired a regional importance. He is the author of studies and of editions of texts bearing essentially on the heritage and the history of the Agenais during the early modern period: Andrieu (J.), Bibliographie générale de l'Agenais, t. 2, p. 340-343. An obituary notice is devoted to him in the Revue de l'Agenais, 1922, p. 195. - Tamizey de Larroque (Philippe), Madame la comtesse Marie de Raymond, Auch, G. Foix, 1886, p. 11. - A.D. Lot-et-Garonne, 16 J 1 and 16 J 26, scholars' correspondence: Fonds Tamizey de Larroque.
870 Moulenq (François), born in 1814 at Bellecombe (commune of Perville, canton of Valence d'Agen). Advocate at the Cour de Paris in 1836, notary at Montauban in 1843, then at Valence-d'Agen in 1852; he was mayor of that town in 1860. Chevalier of the Order of Isabella the Catholic, he was secretary general of the Société Archéologique de Tarn-et-Garonne and a member of the Société Académique d'Agen. He is the author of several remarkable works of erudition, notably: Albias et ses coutumes, Impr. Forestié neveu, Montauban, 1869, 34 p. (extr. du Bulletin de la Société Archéologique de Tarn-et-Garonne); Un chapitre de l'histoire des Colonies au XVIIe siècle, Impr. Forestié neveu, Montauban, 1870, 52 p. (extr. du Bulletin de la Société Archéologique de Tarn-et-Garonne); La justice au XVIIe siècle. Épisode de l'Histoire d'Auvillars, Impr. P. Noubel, Agen, 1874, 90 p. (extr. du Recueil des Travaux de la Société des Sciences, Lettres et Arts d'Agen, 2e série, t. IV); Études sur la Topographie des Gaules, Impr. Forestié neveu, Montauban, 1876, 18 p. (extr. du Bulletin de la Société Archéologique de Tarn-et-Garonne); [...] ses seigneurs, Impr. Forestié neveu, Montauban, 1880, 43 p. (extr. du Bulletin de la Société Archéologique de Tarn-et-Garonne): Andrieu (J.), op. cit., t. II, p. 152-153. - A.D. Lot-et-Garonne, 16 J 20, scholars' correspondence: Fonds Tamizey de Larroque.
860 This is the Histoire d'Aubagne, chef-lieu de baronnie, depuis son origine jusqu'en 1789, Impr. de Barlatier et Barthelet, Marseille, 1889, 2 vol. in-8°, by Dr Louis Barthélemy.
865 Paul Raymond (1833-1878) is the author of a Dictionnaire topographique du département des Basses-Pyrénées, 1863. He edited Un baron béarnais au XVe siècle, a Béarnais text; the history of Béarn and Navarre of Nicolas de Bordenave, historiographer (1530?-1661); the Dénombrement général des maisons de la vicomté de Béarn en 1385, drawn up by order of Gaston Phébus; the Cartulaire de l'abbaye St-Jean de Sorde (1105-1176); and Récits d'histoire sainte en béarnais, 2 t., 1876-1877. Tamizey's correspondence with P. Raymond: A.D. Lot-et-Garonne, 16 J 23, scholars' correspondence: Fonds Tamizey de Larroque.
866 This is most likely Ernest Gaullieur, born at Bordeaux in 1827, archivist of the city of Bordeaux. Among his numerous works: Les Gascons et l'artillerie bordelaise au siège de Fontarabie, Bordeaux, 1875; Histoire du Collège de Guyenne d'après un grand nombre de documents inédits, Sandoz et Fischbacher, Paris-Bordeaux, 1878; Histoire de la réformation à Bordeaux et dans le ressort du Parlement de Guyenne, Bordeaux, 1881, 589 p. (a single volume published, stopping at the Peace of Amboise, 1563). E. Gaullieur also wrote witty vaudevilles which were staged at Bordeaux.
883 Marville, or Vigneul-de-Marville, is the literary pseudonym of Noël, called Bonaventure d'Argonne (1634-1704). Born in Paris, he was an advocate at the Parlement, then a Carthusian. His new profession led him to study the sacred authors and to publish a Traité de la lecture des Pères (1658). He attacked La Bruyère in his Sentiments critiques sur les « Caractères » (1701). His capital work, the Mélanges d'histoire et de littérature (1701), supplies a quantity of curious, sometimes satirical information. It is this last work that Tamizey owned, probably in the edition of C. Prudhomme, Paris, 1713, 3 vol. in-12.
884 Édouard Fournier (1819-1880), born at Orléans, known above all as an érudit (Essai historique sur l'orthographe, 1849; Essai sur l'art lyrique au théâtre, 1849; Histoire des hôtelleries et des cabarets, 1850; Le roman de Molière, 1863; L'art de la reliure en France aux derniers siècles, 1864, etc.). He also worked for the theatre, most often in collaboration. Editor-in-chief, from 1853 to 1865, of the journal Le Théâtre, he contributed to La Patrie, where he wrote a Paris chronicle, then kept the weekly review of theatre and books at the Figaro and at the Gaulois (1869), among others. Decorated with the Légion d'honneur in 1862, he took over, at the end of 1863, the direction of the Revue des provinces, organ of literary decentralization. The work Tamizey owned: Les Variétés historiques et littéraires, 9 vol., in-12, in the Bibliothèque elzévirienne of P. Jannet. - Correspondence with Tamizey: A.D. Lot-et-Garonne, 16 J 13, scholars' correspondence: Fonds Tamizey de Larroque.
885 Jacques-Germain Chaudesaigues, born in Piedmont in 1814, studied at Turin and at Grenoble and, a handsome young man and at bottom a naive provincial, arrived in Paris at the height of the Romantic battle. Having a certain fortune, he led an elegant life, attached himself to the group of young men of letters, and published a novel and a collection of verse. His feminine conquests having devoured the little that remained to him, and definitively ruined at the gaming table, he had himself introduced by his friend Gustave Planche to L'Artiste, the Revue de Paris and the Courrier français, where he gave articles of theatrical criticism collected under the title Les écrivains modernes de la France, 1841. He was appointed librarian at the Sorbonne on 1 January 1847 and died of apoplexy on the 26th or 29th of January following.
886 Gustave Planche (1808-1857), born and died in Paris. After making his début in L'Artiste and in Le Globe, he wrote in the Revue des Deux Mondes, from 1831 on, the articles that made his name, notably "Les Haines littéraires", "Les royautés littéraires", "La Moralité de la poésie" and "La Critique française". In 1836, Balzac, having bought the Chronique de Paris, engaged him as a collaborator. In 1840 he left for Italy, where he spent six years; on his return he resumed his place at the Revue des Deux Mondes. His most important studies were collected in volumes under the following titles: Portraits littéraires (1846-1849); Nouveaux portraits littéraires (1854); Études sur les arts (1855). G. Planche made himself many enemies by his surly and peremptory severity; he treated Hugo, Lamartine and Balzac from on high. The authority he acquired was owed above all to the decisiveness of his judgment, to his dogmatic manner, and to the gravity and independence he affected.
887 Hippolyte Rigault (1821-1858). Former student of the École Normale Supérieure, agrégé des lettres in 1844, he was appointed professor at Caen, then at the collège Charlemagne, tutor to the comte d'Eu (1847), professor at the Collège de Versailles (1850) and, in 1853, professor of rhetoric at the lycée Louis-le-Grand. In 1856 he defended with great brilliance his doctoral thesis, the Histoire de la Querelle des Anciens et des Modernes (1856). He substituted for Ernest Havet in the chair of Latin eloquence at the Collège de France; put on notice to leave the editorial staff of the Journal des Débats, he preferred to renounce teaching. From 1857 to 1858 he wrote in that paper fortnightly reviews remarkable for their mixture of wit, sincerity and verve.
888 Léon Feugère (1810-1858), born in the Yonne, died in Paris. Professor of rhetoric, then censeur, he is the author of studies on men and women of letters of the sixteenth century: Étienne de La Boétie (1845); Étienne Pasquier, Henri Estienne, Mlle de Gournay and Agrippa d'Aubigné (1848-1855); as well as an Étude sur la vie et les travaux de Du Cange (1852); Caractères et Portraits littéraires du XVIe siècle (1859); and, finally, Les femmes poètes du XVIe siècle (1860). He moreover edited, with introduction and notes, the OEuvres complètes of La Boétie (1846); the OEuvres choisies of Étienne Pasquier (1849); La Précellence du langage françois of Henri Estienne (1850); and the Conformité du langage françois avec le grec by the same (1853).
889 See note 794.
890 Alfred-Auguste Cuvillier-Fleury, born and died in Paris (1802-1887). First secretary to the former king of Holland, Louis Bonaparte, whom he followed to Rome and Florence, he became in 1827 tutor to the duc d'Aumale and, later, secretary of his commandements. About 1834 he joined the editorial staff of the Journal des Débats; in 1866 he became a member of the Académie française. He had notably published, before July 1895, Études et Portraits (1865-1868); La duchesse d'Aumale (1870); Réforme universitaire (1872).
891 Marc Girardin, called Saint-Marc Girardin (1801-1873). Licencié in law and agrégé des lettres (1823), he became, in 1826, professor at the lycée Louis-le-Grand. The Académie rewarded his Éloge de Lesage (1826), his Éloge de Bossuet (1827) and his Tableau de la littérature au XVIe siècle (1828). A contributor to the Journal des Débats, he wrote in it assiduously for 45 years and defended there the ideas of the liberal bourgeoisie. After the July revolution he was appointed maître des requêtes at the Conseil d'État and professor of French poetry at the Faculty of Letters of Paris (1834). Elected deputy in 1835, he kept his seat until 1848. In 1837 he became councillor of state. He was elected in 1844 to the Académie française. He maintained a moderate but constant opposition to the Empire. In 1871 he was sent to the Assembly of Bordeaux, where he sat on the right and, appointed vice-president of the Chamber, contributed to the overthrow of Thiers. His principal works of literary criticism are the Essais de littérature et de morale (1844); La Fontaine et les fabulistes (1867); an Étude sur Jean-Jacques Rousseau (1870); but above all a Cours de littérature dramatique (1843-1863), which he first professed at the Sorbonne. In it he studies, one after another, the most essential feelings of the human heart and shows what diverse expressions the ancients, the moderns and the classics have given them. The work is above all a polemic against Romanticism.
892 Magnin (Charles), 1793-1862. Attached from 1813 to the Bibliothèque Nationale, he became in 1825 the theatre critic of the Globe, where he showed himself favourable to the Romantic innovations. From 1830 to 1833 he remained attached as critic to the National. He was appointed keeper-administrator of printed books at the Bibliothèque Royale (1832) and substituted for Fauriel at the Faculty of Letters (1834-1836). He was admitted to the Académie des Inscriptions et belles-lettres in 1838. He is the author of the Origines du théâtre en Europe (1838), the work Tamizey must have owned, rather than his Histoire des marionnettes en Europe depuis l'antiquité jusqu'à nos jours (1852).
893 Larroumet (Gustave), born at Gourdon in 1852, died in Paris in 1903. Maître de conférences in French literature at the Sorbonne (1884); chef de cabinet to Lockroy at the ministry of Public Instruction, he became director of Fine Arts (1888), professor at the Sorbonne (1892), free member and then permanent secretary of the Académie des Beaux-Arts (1898). He succeeded Fr. Sarcey as drama critic of Le Temps. He is the author of Marivaux, sa vie et ses oeuvres [doctoral thesis] (1883); La Comédie de Molière, l'auteur et le milieu (1886); Études d'histoire et de critique dramatiques (1892); Le XVIIIe et la Critique contemporaine (1892), as regards the publications Tamizey may have had.
[The abbé d'Artigny] turned to bibliographical research and erudition. He led a very retired life, plunged in reading and work. Tamizey owned his best-known work, the Nouveaux mémoires d'histoire, de critique et de littérature, Paris, 1749-1756, 7 vol. in-12, to which the Journal de Trévoux, Mémoires pour l'histoire…, gave a favourable welcome as early as October 1749 (Paris, 1749, in-12, p. 2138). The abbé d'Artigny published in it documents then little known, notably, in t. V, the Pièces originales concernant le procès de Cinq-Mars and, in t. I, the Particularités sur la reine Christine, which concern her passage through Vienne in 1656. See Balteau (J.), Barroux (M.), Prevost (M.), Dictionnaire de biographie française, Paris, 1939, t. III, p. 1184.

[Diary: inventory continued] [...] elzévirienne. A hundred or so collections of articles by various contemporary critics: Chaudes-Aigues 885, Gustave Planche 886, Saint-Marc Girardin 891, Magnin 892, Gustave Larroumet 893, [...]

[Victor Duruy taught] history at Reims and in Paris; inspector of the Académie de Paris in 1851, maître de conférences at the École Normale Supérieure, inspector general and professor at the École polytechnique. Distinguished by Napoleon III, whom he had helped in his researches for the Vie de César, he received in 1863 the ministry of Public Instruction, which he kept for six years and where he carried out several reforms: re-establishment of the agrégation in philosophy, suppression of the "bifurcation Fortoul", introduction of contemporary history into the curriculum, creation of special (technical) secondary education, opening of numerous primary schools and of special courses for girls. After his retirement (1869) he devoted himself entirely to his historical studies; he entered in 1873 the Académie des inscriptions, in 1879 the Académie des sciences morales, in 1884 the Académie française. He is the author of numerous works: first a series of school manuals which became classics, and a collection of Histoire universelle for which he himself composed several volumes. Tamizey owned the two major books of Victor Duruy: the Histoire du peuple romain, in 7 illustrated volumes, and the Histoire grecque (1887-1889), in 3 volumes illustrated on the same plan as the Roman history.
906 Hardouin Beaumont de Péréfixe (1605-1670), tutor to Louis XIV (1644), bishop of Rodez (1648), member of the Académie française (1654); he was appointed in 1662 archbishop of Paris and presided at the height of the repression of the Jansenist movement (the affair of the "Formulaire" and the dispersal of the intransigent nuns of Port-Royal). He is the author of a Histoire de Henri le Grand, which appeared in 1661 and was completed in 1664 by the Recueil de quelques belles actions et paroles mémorables du roi Henri le Grand, which contributed notably to the legend of the "bon roi Henri". See Avezou (Laurent), Sully à travers l'histoire : les avatars d'un mythe politique, École des chartes, Paris, 2001.
908 Charles-Henry-Joseph de Batz-Trenquelléon, born at the château de Cuq, near Le Mas-d'Agenais, in 1834. Son of a former garde du corps of Louis XVIII and of Charles X, he finished his studies at the lycée of Angoulême and began his literary career at Calais under the pseudonym of "Georges Linois", which he used until 1857. In 1863 he became one of the principal editors of La France Centrale, a royalist paper of Blois, and in 1868 [...]
910 This is certainly, by Charles de Lacombe (1832-1904), Henri IV et sa politique, Didier, Paris, 1861, 518 p., in-8°. Tamizey's correspondence with Ch. Lacombe: A.D. Lot-et-Garonne, 16 J 17, scholars' correspondence: Fonds Tamizey de Larroque.
911 Enrico Caterino Davila, Italian historian (1576-1631). Brought to France in 1582, he became a page of Catherine de Médicis; back in Italy, he passed into the service of Venice. He was killed in a brawl. His great work is the Historia delle guerre civili di Francia (Venice, 1630), divided into 15 books which treat the years 1559-1598.
912 Gallia christiana [that is, "Christian Gaul"]: a history, province by province, of the bishoprics of France and of the monasteries comprised within their bounds. It was Claude Robert who gave this title to the collection he published. Its gaps being felt, its recasting was undertaken by the brothers Scévola and Louis de Sainte-Marthe, encouraged by the assembly of the clergy of France (1645). Their unfinished work was completed by the sons of Scévola, Pierre-Abel and Nicolas, and the first volume appeared in 1656. The assembly of the clergy of 1710 charged another Sainte-Marthe, Denys, with recasting the work on a new plan.
Whereas in the first edition (usually designated under the name of Gallia christiana vetus) the archbishoprics, bishoprics and abbeys [...]

[Diary: inventory continued] [...] genealogical works, notably those (complete) of Jules de Laffore 919 and of Noulens 920, without forgetting the 4 volumes of M. de Barrau 921 on the Rouergue, which had been given to me by the comtesse de Raymond. The Histoire du Poitou of canon Aubert 922. (Indication in a brace concerning these last two works: edition in-4° with bindings; volumes at 35 francs each 905.)
The Histoire of président de Thou, 16 vol. in-4°. The Mémoires de Condé, 8 vol. in-4°. The Mémoires de la Ligue, 6 vol. in-4°. More than 50 volumes on Henri IV, from that of Perefixe 906 down to those of Poirson 907 (4 vol.), of Balz (sic) de Trenquellon 908, of Dussieux 909, of Mercier de Lacombe 910, of Eug. [...]
The Collection des meilleures dissertations relat. à l'Histoire de France, by C. Leber, etc., 20 vol. in-8°. The Cours d'études historiques of Daunou 925 (20 vol. in-8°). A great number of editions and translations of the Imitation de N.S. J.C. (with the manuscript of my own translation, made in 1857 926). All the novels of Balzac and of George Sand, and almost all the remarkable novels that have appeared in our time. All the masterpieces of foreign literature: the [...]

905 See 20 March 1892 for a scale of prices (a day's labour costs Tamizey 2 francs) and 9 January 1893 on the value of the farm rents.
907 Auguste Poirson (1795-1871) taught rhetoric and history in Paris, then became proviseur of the collège Saint-Louis (1823), then of the collège Charlemagne (1837), and was councillor of the university (1845). Besides articles in various collections, he is the author of handbooks of history for teaching. His great work is this Histoire de Henri IV (1857). See Avezou (Laurent), Sully à travers l'histoire : les avatars d'un mythe politique, École des chartes, Paris, 2001.
918 Victor de Saint-Allais is the author of the Nobiliaire universel de France ou Recueil général des généalogies historiques des maisons nobles de ce royaume, Librairie Bachelin-Deflorenne, Paris, 1872-1878, 21 vol. in-8° [in 1871-1876, with the same publisher, a Nobiliaire universel de France had been published in 20 vol. in-8°].
919 Pierre-Jules Bourrousse de Laffore (1811-1890) had continued the publication begun by Gabriel O'Gilvy of the Nobiliaire de Guienne et de Gascogne : revue des familles d'ancienne chevalerie ou anoblies de ces provinces antérieures à 1789, avec leurs généalogies et leurs armes ; traité héraldique sous forme de dictionnaire, Dumoulin-H. Champion, Paris, 1856-1883, 4 vol. in-4° (printed by G. Gounouilhou). From 1885 Laffore published in the Revue de l'Agenais an abundantly annotated eighteenth-century manuscript, the Estat de la Noblesse et des Vivant noblement de la sénéchaussée d'Agenois en 1777. He was also the author of Du progrès alarmant de la mortalité dans le département de Lot-et-Garonne et en particulier dans la commune d'Agen : des causes d'insalubrité qui le produisent et des moyens de le faire disparaître, impr. P. Noubel, Agen, 1847, 208 p., in-8°. On P.-J. Bourrousse de Laffore see Andrieu (J.), op. cit., t. 3, p. 112, and the obituary notice by Adolphe Magen in the Revue de l'Agenais, t. XVII, 1890, p. 449-460. Tamizey's correspondence with J. de Bourrousse de Laffore: A.D. Lot-et-Garonne, 16 J 28, scholars' correspondence: Fonds Tamizey de Larroque.
924 Cesare Cantù (1804-1895), born at Brivio (Lombardy), died at Milan. A figure of the Risorgimento, compromised in the movements of "Young Italy", he wrote in an Austrian dungeon his patriotic novel, Margherita Pusterla (1835), in the style of Manzoni's Betrothed. In 1848 he took part in the provisional government of Milan, but had to flee after the retreat of the Piedmontese army; he thereafter abandoned all political activity and lived in retirement, watched by the police. Lombardy once freed, in 1859, he was elected deputy to the Italian parliament where, remaining faithful to the liberal Catholicism of Gioberti and Balbo, he could play only a modest role. But another celebrity came to him from his historical works: the Storia di cento anni (1750-1850); Gli Ultimi trent'anni; the Cronistoria dell'independenza italiana; and above all his Storia universale in 72 vol. (1838-1846), rich in facts and documents and conceived in a Catholic and enthusiastic spirit.
[Armand Baschet published a series] of studies: Le roi chez la reine, 1864, where he retraced the relations of Louis XIII and Anne of Austria. A mission to Mantua led him to recover the correspondence between Rubens and the prince of Mantua and unpublished documents on Aretino. The libraries of Paris furnishing him other unpublished material, his publications grew, notably with Alde Manuce l'Ancien, lettres et documents, 1867, and the Journal du concile de Trente of Antonio Milledonne, 1870. His researches in the archives of the Affaires étrangères enabled him to bring out Le duc de Saint-Simon, son cabinet historique et l'historique de ses manuscrits, 1874, then the Histoire du dépôt des archives des Affaires étrangères, 1875, in which he denounced the administrative pettifoggery that hampers researchers. Of the same vein are Le répertoire général de toutes les dépêches et autres documents appartenant aux correspondances des ambassadeurs de France, successivement accrédités en Angleterre depuis le règne de Henri VIII jusqu'à celui de George Ier, 1876; Le duc de St-Simon et le cardinal Gualterio, 1878; the Recueil général des dépêches des ambassadeurs vénitiens en France pendant les XVIe, XVIIe et XVIIIe siècle, 1877; and the Mémoire d'Armand Du Plessis de Richelieu écrit de sa main l'année 1607 ou 1610, 1880. The last work of A. Baschet is Les comédiens italiens à la cour de France sous Charles IX, Henri III, Henri IV et Louis XIII, 1882. See Dufay, Un érudit au XIXe siècle, 1887; L'Intermédiaire des chercheurs et curieux, 1887.
[Note, beginning lost] First a journeyman shoemaker (1856-1869), he devoted his leisure to the study of history, was received in 1869 at the École Pratique des Hautes Études, and was appointed auxiliary at the Archives impériales (1870), then titular archivist (1871). He became a member, in 1886, of the Académie des Inscriptions et belles-lettres. He established himself as a specialist in the historical geography of France, veritably creating that discipline, which he taught at the École des Hautes Études (1879) and at the Collège de France (1892). Among his publications, Tamizey probably owned La formation de l'unité française (1891), and perhaps La géographie de la Gaule au VIe siècle (1878), his capital work, with its atlas. The Atlas historique de la France depuis César jusqu'à nos jours (1885-1907) remained unfinished.

[Diary: inventory continued] [...] Généraux), C. Port, de Rozière, Siméon Luce, Paul Viollet, H. Wallon 941, C. Joret, G. Saige 942, Lud.
Lalanne, H. Stein, Jules Loiseleur; de La Mothe-Le-Vayer (14 vol. in-8°); de [...]

940 Eugène Müntz (1845-1902), born at Soultz-sous-Forêts (Bas-Rhin). In 1893 he had been elected a member of the Académie des inscriptions. He multiplied works of erudition in the field of Italian art and of the Renaissance. Among his works, Tamizey could probably have owned some of the following: Les Arts à la cour des papes pendant le XVe et le XVIe siècle (1878-1898); Les Précurseurs de la Renaissance (1881); Raphaël, sa vie, son oeuvre et son temps (1881); La Tapisserie (1882); Les Collections des Médicis au XVe siècle (1887); Histoire de l'art pendant la Renaissance (1890-1895). - A.D. Lot-et-Garonne, 16 J 20, scholars' correspondence: Fonds Tamizey de Larroque.
941 [...]
[Note, fragment] [...], with additions, for example La chasse au loup nécessaire à la maison rustique by Jean Clamorgan, Vve de G. Machuel, Rouen, 1676, 2 parts in 1 vol. in-4°). This book traditionally figured in the library of country gentlemen.
946 François d'Aix de La Chaise or Chaize (1624-1709), Jesuit father, known for having been the confessor of Louis XIV from 1674 and for 34 years. A distinguished numismatist, he was named a member of the Académie des inscriptions et belles-lettres in 1701. His coin cabinet formed an important part of the numismatic collection of the Jesuits of Paris. His name remains attached to the Jesuits' country house on Mont Louis, to the north-east of Paris, where he liked to withdraw and where, from 1806, was established the cemetery that bears his name. The abbé Canéto published Du trésor découvert à Auch, le 1er avril 1690, correspondance inédite du R. P. de La Chaise avec d'autres jésuites et divers personnages du temps à l'occasion de ce trésor, Foix frères, Auch, 1858 (extr. de la Revue d'Aquitaine). Unless this is Nicolas Filleau de la Chaise (1631-1688), author notably of a Discours sur les Pensées de Pascal and of a Histoire de Saint Louis, in XV books, 1688.
947 Jean de Silhon, born at Sos (Lot-et-Garonne) in 1596, died in Paris in 1667. Secretary to Richelieu, then councillor of state and academician, he left Les Deux vérités (1626); Le ministre d'État (1631); De l'immortalité de l'âme (1634), notably. See Andrieu (J.), op. cit., t. 2.
[Note, fragment] [...], on the reign of Louis XIV, Mme de Maintenon d'après sa correspondance (1887), and, on archaeology, L'École française de Rome, ses origines, son objet, ses premiers travaux (1876). - A.D. Lot-et-Garonne, 16 J 14, scholars' correspondence: Fonds Tamizey de Larroque.
[Entry, the beginning lost:] […] whom I had rediscovered with so much pleasure at Carpentras a dozen years ago, and at Aix last year, is right to write to me this very day that our decline rarely passes without bringing about us things which seem to show that life is determined not to be missed! I have here a few of the charming letters written to me at various times by M. Geffroy, but the greater number had remained among the many bundles not yet sorted at Gontaud, which I meant to put in order as soon as I returned to my poor dear house.
28 August. I have just received from a Danish scholar, M. Émile Gigas (951), the three volumes of which I had given an account in the Revue critique, with these words written on the first page of each volume: Hommage reconnaissant de l'auteur. This kind token, come from so far, touched me deeply. I congratulate myself more than ever for having three times in succession given high praise to the collection of an excellent worker who is, moreover, a man of heart. I have sent my moved thanks to Copenhagen.
31 August. I was invited to attend today at Agen, in one of the rooms of the Musée, the unveiling of the bust of Adolphe Magen (952), a bust of which I was the first to speak in my notice on that dear friend. The author of the bust, Daniel Campagne (953), had come here the other day to propose that I go with him. I much regretted being unable to accompany him.
2 September. My swallows have left me. They had built a nest under the vault of the terrace; they had had young ones, which had taken flight. All that little world came and went joyously; around the pavilion there was a perpetual bustle. Unhappily, on one of the last nights of the past month, the rain, flooding the terrace, detached the nest, which I found on the ground when I rose. The swallows have fled a house that seemed to them inhospitable. They return from time to time, but for a few minutes only, and, after circling the pavilion, fly off again at full speed. I fear that, discouraged for ever by last month's accident, they will never again settle with me. It is a small sorrow added to my immense sorrows.
5 September. […] my friend Anatole de Montaiglon (955), with whom I had become particularly close during my last two stays in Paris (1874 and 1875). We had formed the plan of publishing together a new edition of the Lettres de Guy Patin (956), revised and corrected (from the autographs) and considerably enlarged (whether with documents not yet published or with very abundant notes). I had hoped we should find some publisher strong enough and brave enough to accept our ten or twelve volumes grand in-8°, but all our approaches proved fruitless. I had begun the preparation of the commentary with ardour and sent my collaborator at one stroke several hundred notes bearing on the first hundred letters of the new edition. My notes will not be lost, for Montaiglon, one day when I was lunching in his peaceful apartment on the Place Royale, told me that he would leave to the École des Chartes, for that school's special library, all his books and manuscripts. My notes will consequently be preserved in the said library. May they one day be turned to account by a serious worker who will give the complete edition of Guy Patin's correspondence such as I should so much have wished to give it.
7 September. M. Louis Audiat (957) came to spend two days with us, bringing us his liveliest sympathy, his most cordial consolations. He sought with perfect kindness to distract me, showing himself as agreeable a talker as I could desire. I remain very grateful to him for his charitable visit. I hope to keep him longer next year, and to let him see less of my deep sadness.
11 September. At last the temperature is no longer tropical! A little rain has freshened the air: yesterday we had 32 degrees in the shade; today we are at 22. The first ten days of September were more burning than the most terrible days of July and August. When I entered my study at five in the morning, the thermometer already marked 25 or 26 degrees; in the afternoon it rose to 33 and even to 35. Perhaps such stifling heat has never been seen on the verge of autumn. Everything is dried out, everything is scorched. I have lost in a few days a great number of shrubs that had withstood the heats of the preceding months: laurels, poplars, box and junipers, planted last year, have miserably perished. My poor garden has a lamentable look. Like my swallows, my plantations are going. Farewell, gay birds! Farewell, smiling greenery!
20 September. I have just spent a few hours going through the first sixteen fascicles of the Dictionnaire général de la langue française (958), of which until now I had consulted only a few articles, as the moment required. I noted with pleasure that my publications have furnished a good number of citations to the learned authors of this excellent work, above all my Balzac, my Chapelain and my Peiresc (959). Even my Correspondants de Peiresc has not been neglected: thus under the word café an example has been placed drawn from the Lettres de Thomas d'Arcos (960). What a reward for a worker, the thought that his labours will thus have been useful to so many other workers!
Notes:
949 See 21 September 1893. -Les Annales du Midi, subtitled « revue archéologique, historique et philologique de la France méridionale », published from January 1889, distributed by Privat, Toulouse, and Picard, Paris, with Antoine Thomas (1857-1935) as scientific editor between 1889 and 1935.
950 Lorédan Larchey (1831-1902), born at Metz, librarian (from 1873) then keeper (1890) of the Bibliothèque de l'Arsenal. Author of highly original works, notably a slang dictionary, Excentricités du langage (1860); Dictionnaire des noms contenant la recherche étymologique des formes anciennes (1880); Almanach des noms expliquant 2800 noms de personnes (1881); and also Journal de marche du sergent Fricasse de la 127e demi-brigade (1792-1802), d'après le manuscrit original (1882); Les Cahiers du capitaine Coignet d'après le manuscrit original (1883); L'Esprit de tout le monde (1892-1893); Le Monde féodal : costumes vrais (1899). -A.D. Lot-et-Garonne, 16 J 17, correspondance d'érudits : Fonds Tamizey de Larroque.
951 See 4 and 5 October 1893 in particular. The B.N.F.-Tolbiac keeps La première ébauche d'un ouvrage célèbre [le Dictionnaire historique et critique de P. Bayle], signed Émile Gigas, Copenhague, with no other mention of place or date (shelfmark: 8-Z pièce-2084).
952 […], Impr. Vve Lamy, Agen, 1894. -A.D. Lot-et-Garonne, 16 J 1 et 16 J 18, correspondance d'érudits : Fonds Tamizey de Larroque.
953 See 8 September 1896, on his elder brother Maurice Campagne. -Correspondance de Tamizey avec Maurice Campagne : A.D. Lot-et-Garonne, 16 J 8, correspondance d'érudits : Fonds Tamizey de Larroque. Daniel Campagne, a descendant of the Chevalier d'Escage family whose livre de raison Ph. Tamizey de Larroque published, was born at Gontaud in 1851 and died in 1914. A pupil of Falguière, he made his début in 1889 with a bust at the Salon des Artistes Français. He afterwards exhibited there « Autour du Drapeau », a group whose bronze was erected at Agen in 1896, place de la Préfecture, as the monument to the dead of the war of 1870-1871 (in which Maurice, his brother, was a veteran). He is also the author, notably, of the recumbent effigy of the duc de Nemours, son of Louis-Philippe, in the royal chapel at Dreux, of the statue of Fleurette at the Garenne de Nérac, and of the funerary monument of Madame Lobanof de Rostoff (née Dolgorouky) at the Père-Lachaise cemetery in Paris (48th division, 3rd row). Daniel Campagne had married, on 27 May 1873, at Lévignac-de-Guyenne, Marie-Élisabeth Beaune (1850-[…]). At Gontaud the couple lived in an old house still full of character, called the « petit château » or « Château Duverrier » [A.P. Baquier]: Angelo (Bruno d'), Bulletin de la Société archéologique et historique de l'Albret, n° 24, 2002, p. 6-15.
954 A daily political and literary newspaper of Orleanist sympathies, founded in Paris in 1873 by Édouard Hervé. It was the first great political daily sold at five centimes. When the Dreyfus affair broke out, de Kerhohant, brother of Éd. Hervé, who was then ill, waged in Le Soleil, which thereby lost part of its readership, an ardent campaign in favour of the condemned captain. After Hervé's death (1899), Ambroise Rendu became director of the paper, which had again become purely monarchist. He was soon replaced by Louis Baragnon. The paper disappeared on the eve of the First World War.
955 Anatole de Courde de Montaiglon (1824-1895). A former student of the École des Chartes, where he was professor of bibliography from 1868. He was the editor, notably, of Gringore (1858-1877), of Rabelais (1868-1872), of the Heptaméron of Marguerite de Navarre (1880), of Molière (1881-1891), and of the Contes et nouvelles of La Fontaine (1882), among others. -A.D. Lot-et-Garonne, 16 J 19, correspondance d'érudits : Fonds Tamizey de Larroque.
957 Louis Audiat: born at Moulins in 1833, keeper of the library of Saintes, where he founded a society of arts, sciences and belles-lettres and an archaeological and historical society of Saintonge. An elegiac poet, he published Poésies (1854) and Nouvelles poésies (1862), and above all historical, archaeological and bibliographical works, notably: Essai sur l'imprimerie en Saintonge et en Aunis, Pons, 1879, and « Un petit-neveu de Châteaubriand, Édouard de Blossac, ancien sous-préfet de Marmande », extr. Revue de l'Agenais, 1877, tiré à part, Agen, Lamy, 1877, 35 p. Author of several studies on B. Palissy, Aubry, Paris, 1861, XXI-358 p., and Didier, Paris, 1868, in-12. See Andrieu (J.), Bibliographie générale de l'Agenais, t. 2, 1886, p. 29-30. -A.D. Lot-et-Garonne, 16 J 3, correspondance d'érudits : Fonds Tamizey de Larroque.
958 The Dictionnaire général de la langue française depuis le commencement du XVIIe siècle jusqu'à nos jours, published between 1890 and 1900, undertaken by Arsène Darmesteter (1846-1888) with Adolphe Hatzfeld, and thus published, after his death, by Hatzfeld and Antoine Thomas.
959 See the provisional bibliography of Ph. Tamizey de Larroque in the appendix.
960 Tamizey de Larroque (Philippe) ed., Les correspondants de Peiresc, XV, Thomas d'Arcos. Lettres inédites écrites de Tunis… (1633-1636), Alger, 1889 (extr. de la Revue africaine). Thomas d'Arcos, born in 1568 at La Ciotat, was for some years secretary to the duc de Joyeuse and made many journeys in Asia and Africa. Captured by corsairs in 1628, he underwent two or three years of captivity and turned Muslim at the end of 1632.
[Entry, the beginning lost:] […] hours almost as short as minutes. Unhappily he came through intense rain and frightful mud, and his visit had all the merit of a sacrifice. What a pity that the superabundant rain of October did not fall, in some small part, in September!
16 October. Yesterday, Tuesday, I had the honour of receiving at my table Mme de Layard and her two daughters, Mesdemoiselles Florence and Ida. Mme de Layard is the widow of an English general and the sister-in-law of the celebrated orientalist who was ambassador at Constantinople in 1877 and foreign associate of the Institut in 1889. We received these ladies as best we could, and I believe they were displeased neither with our luncheon nor with our talk. For our part, we were delighted with their wit and their amiability. The temperature was very agreeable and we were able to spend several hours under the chestnut tree. Our gracious visitors greatly admired our vast horizon, lit by a fine sun. I hope that, all in all, they will have carried away a good memory of the Pavillon Peiresc. [Marginal note: See in the Vapereau (971) the notice on Sir Austen Henry Layard (972).]
28 October. Reading the obituary devoted to Anatole de Montaiglon in the Chronique of the Polybiblion (973) of this month, I thought of the very great number of friends I have had the misfortune to lose in these last years, almost all of them old friends of the kind one does not replace. Since I settled at the Pavillon Peiresc I have seen more than a score of them die. At my age and in my solitude, how form new attachments? I should nevertheless be unjust if I did not note that I have […]. My dear M. Rodolphe Reuss (968), keeper of the Strasbourg library, and a cousin of Carlos Sommervogel, M. Ristelhuber of Strasbourg, have vied in sympathy and generosity towards me. Moreover, abbé Ingold has had the kindness to send me from Colmar, as I have already said here, a great number of […].
I have received, since my return from Soumensac, two letters too important for me not to transcribe here what has been written to me by two of the greatest scholars of Europe, who honour me with all their sympathy:
Notes:
966 Onézime Truaut, called Henry in the family (1819-1894), born at Xaintrailles and died at the « châlet » of Ambrus, near Buzet. He was the son of Julie Dubédat (1786-1849) and of Jean-Baptiste Truaut, landowner, formerly notary at Lavardac from 1805 to 1845, who had retired to the commune of Ambrus, at the château de Pradères [A.P. Baquier].
967 Abbé Jean Dubois sorted and worked on the archives of the château de Rayne-Vigneau at Bommes (Gironde). He corresponded with Ph. Tamizey de Larroque and his son Henri. He signed the obituary of abbé Dubos in the Revue de l'Agenais, 1928 (May-June), 55e a., n° 3. -The catalogue of printed books of the Bibliothèque Nationale mentions no publication of his. -A.D. 16 J 12, correspondance d'érudits.
971 Vapereau (G.), Dictionnaire universel des contemporains contenant toutes les personnes notables de la France et des pays étrangers, Hachette; in the 1870 edition this notice is at page 1078.
972 Austen Henry Layard, descendant of Huguenots who took refuge in England after the revocation of the Edict of Nantes. Born in Paris in 1817. He abandoned his law studies to travel with one of his friends, and traversed Asia Minor and Syria during the autumn of 1839 and the winter of 1840, in the regions reputed to contain the lost ancient capital of Nineveh. In 1842 the French consul Botta communicated to him the drawings of the gigantic sculptures and bas-reliefs he had just revealed to the learned world. Having obtained from the English ambassador at Constantinople the funding of an archaeological expedition, he left in October 1845 for Turkey-in-Asia, taking care to keep the object of his journey absolutely secret. From Mosul he went down the Tigris on a frail raft, landed on the left bank after a few hours' navigation, took into his pay a group of wandering Arabs, and began excavations on a mound east of the village bearing the characteristic name of Nemroud. From the first day they produced important results and convinced him that he had indeed discovered the Nineveh of biblical times. The numerous bas-reliefs, sculptures and inscriptions he exhumed were promptly transported to the British Museum in London. All this traveller's discoveries were engraved and published in a folio atlas; he also described them himself with scrupulous care in his book Niniveh and its remains, published in French in 1849 [Ninive et ses ruines], which went through several editions. On his return, Layard was appointed, in reward for his labours, attaché of embassy at Constantinople. On Lord Palmerston's retirement in 1852 he was called by Lord J. Russell to the eminent and lucrative post of under-secretary of State for foreign affairs, and entered the same year the House of Commons, where he soon became one of the leading voices of the liberal party. It was he whose repeated efforts carried the motion for the inquiry into the events of the Crimea. He did not succeed likewise in 1855, when he made himself the spokesman of the general complaints against the civil administration and set out plans of reform. At the same time he formally declined the offers of Lords Derby and Aberdeen of a place in their cabinet, preferring to remain faithful to his political convictions. In 1854 he had followed as an amateur the operations of the allied army as far as the Crimea, and in 1856, after the conclusion of peace, he founded at Constantinople a national bank of which he was president. He was named a member of the Privy Council in 1868. He had been elected in 1854 a correspondent of the Institut de France (Académie des inscriptions et belles-lettres).
973 A bibliographical review of distinctly Catholic character, founded in Paris in 1866, published by the Société bibliographique to serve scholars and men of the world alike. It consists of two parts: a technical part, a bibliographical list of the principal books appearing in France and abroad, with the contents of reviews and even of newspapers; and a literary part, analysing works, chiefly French, of every kind. A chronicle gives the literary news, and for a long time space was given to questions and answers. From 1875 the technical part was more clearly separated from the literary part.
« 1 November 95. My dear colleague and friend, I have learned what trials have come to sadden a life that deserved to remain to the end as happy as it is honoured. I did not write to you, knowing by experience that condolences often make the tears flow without relieving the wounded man. Work is the great consoler, the one that gives you, at least for the moment, forgetfulness. I see by your two pamphlets, for which I thank you sincerely, that you have bravely set yourself to work again. As an Alsatian I take a particular interest in Grandidier. Your Muet (978) is likewise a curious specimen of a thing still so little known, private life in the provinces in the 17th (sic) century. You were missed at the Centenary. You will be missed still more at Aix, if you really do not appear at that festival which is your work and where everything will speak of you. But thus it is that human affairs deceive all forecasts. If you look closely, you will see that, more or less, we are all touched at the most sensitive point. I shall hear no more than you the verses of Mistral (979) and the speech of Gaston Paris (980). It is towards the north that family obligations make it my duty to turn. I send you a little pamphlet that I ought to have dedicated to the memory of Peiresc, since he was of the small number of reasonable Etruscologists. Au revoir all the same, and yours with all my heart. Michel BRÉAL (981). »
« Paris, 2 November 1895. My dear colleague and friend, It is a great regret to me to think that I shall not see you at Aix. It is you who should preside over this festival, and you will preside over it always in the thoughts of us all. I have not forgotten you in the little speech I shall deliver, in which I have tried to pay homage to your illustrious friend. I had asked the Ministry to give you on this occasion the well-deserved token of the gratitude owed to you, by naming you officer of the Légion d'honneur; but it appears that politics stands in the way and that your prefect sends the most unfavourable reports about you. I should nevertheless have been very happy to proclaim so just a decision! We leave tomorrow, my wife and I. As soon as I am back, I shall see to gathering what I can of my works and shall send them to Meyer so that he may join them to the others. My wife is much touched by your remembrance and sends you all her friendly greetings, to which I join my own, whose sincerity you have long known. Yours cordially, G. PARIS. »
7 November. The amiable as well as learned departmental archivist, M. Georges Tholin (982), came to spend yesterday with us. We had this great hunter lunch with a hunter far less renowned, Raymond Dubrana. The guests did honour to the jugged hare and the roast turkey that were served to them. M. Tholin most agreeably told us a host of little stories of Agen and elsewhere. Such visits are veritable strokes of fortune! [Marginal note: I forgot to say that we had one of the hottest days of this admirable St Martin's summer (983), which has never been so splendid. The thermometer marked 20 degrees in my study. We spent three or four hours in the open air. It was the month of May in November.]
8 November. Received today from Aix a manuscript sonnet in the Provençal tongue from my colleague and friend F. Vidal (984), president of the École de Lar and librarian of the Méjanes, on the festival of the unveiling of the monument to Peiresc. The poet charmingly praises the hero of the festival, the president of the festival, G. Paris (985), and the promoter of the festival, deploring as affectionately as poetically the absence of the last.
Notes:
978 Tamizey de Larroque (Philippe), Notice inédite sur le livre de raison du Muet de Laincel, d'après les manuscrits de Peiresc, Impr. de Chaspoul et Vve Barbaroux, Digne, 1895, in-8°, 23 p., pl.
979 Frédéric Mistral (1830-1914), born and died at Maillane (Bouches-du-Rhône). With Roumanille he was the artisan of the Provençal renaissance, gathering the hitherto isolated poets writing in the language of the Midi. In 1854 he was one of the seven poets present at the meeting at the château de Font-Ségugne from which came, with the name of félibres, the first organisation of the félibrige movement. He then contributed to the creation of the Armana prouvençau (1855), which became the organ of félibrean propaganda, and in 1859 he published at Avignon his celebrated rustic poem Miréio (Mireille), whose appearance Lamartine greeted with enthusiasm. Together with a linguistic renaissance, Mistral also pursued a work of national renovation, and his efforts to preserve the particular character of the southern provinces tended naturally towards decentralisation. He even extended the moral action of the félibrige beyond the frontiers: thus in the Chant de la coupe (1868) he called on the Catalan poets to fraternise with the félibres, and after 1870 he exalted in the Hymne à la race latine (1878) the idea of a rapprochement of the Latin nations. Meanwhile his role as organiser kept growing: in 1876, taking as his basis the division of the pays de langue d'oc into dialects, he gave the félibrige a new statute and was proclaimed its grand master (Capoulié). Beside works of epic inspiration such as Calendal (1867), a glorification of heroic Provence, Mistral also did the work of a philologist: Le Trésor du félibrige (1878-1886), a Franco-Provençal dictionary and a veritable encyclopaedia of the langue d'oc. He further founded the Museon Arlaten (Arlesian museum) in 1899, where he sought to bring together all the forms of Provençal life, and left Mémoires (1906) in which, telling his own story, he brings back to life the Provence and the Provençaux of former days. The Nobel Prize for literature was awarded him in 1904. Jules Andrieu devotes an article to him, maintaining that the great Agen poet of the Gascon tongue, Jacques Boé called Jasmin (1793-1864), is far superior to Mistral: Bibliographie générale de l'Agenais, 1887, t. 2, p. 143-144. Andrieu returns to Mistral's work and action in the long article « Patois » of the same work, where he echoes the debates and controversies around the félibrige: id., p. 181-186.
980 See 29 March 1893.
981 Michel Bréal (1832-1915). The German E. Leisi in the 1830s had attempted to found a science of meanings which he called semasiology. In France, A. Darmesteter had taken the problem up again in his Vie des mots, where the study of the phenomena was distorted by a Darwinian interpretation at times schematic if not caricatural. It was Michel Bréal who laid the foundations of a scientific study of the meaning of words in the Essai de sémantique (1897), thereby fixing the definitive name of this new domain of linguistics.
983 A late phase of fine, mild weather in autumn around 11 November, the feast of St Martin: Audisio (Gabriel), Les Français d'hier, t. 2 : Des croyants XVe-XIXe siècle, A. Colin, Paris, 1996, p. 454-457.
-Received by the same post the November number of the Revue de Saintonge et d'Aunis, in which M. Louis Audiat (986) (p. 400-401) has inserted this too flattering article: « It was already known, through the Revue critique, the Revue de Gascogne and elsewhere, what irreparable misfortune has struck our excellent colleague, M. T. de L., correspondent of the Institut. A fire has […] » [Marginal note: One reads at page 394 of the same number (sitting of 18 October): On the proposal of the president, as a token of sympathy to one of its most devoted collaborators and to repair a little the losses suffered in the fire, the Society grants to M. T. de L. the volumes of its Archives at its disposal.]
25 February. Following the list of my benefactors in books, I wish to reproduce a few lines printed by my benefactors in consolations. In the proceedings of the Académie des Inscriptions et Belles-Lettres (comptes rendus des séances de l'année 1895, Bulletin de 9bre-Xbre, Paris, Imprimerie Nationale, p. 598) one reads that M. de Barthélemy, on behalf of his colleague M. A. de Boislisle (1005), presented, on 22 November (1006), the following works of M. Ph. T. de L., correspondent of the Institut: 1° Gault; 2° Muet de Laincel; 3° Boudon de St-Amans. Here is the speaker's exordium: « Although our industrious correspondent has recently been struck, I might almost say in his dearest affections, since a fire destroyed his library at Gontaud, his spirit of labour has triumphed over this hard trial, and he has been able to set to work again thanks to the preservation of the manuscripts and working instruments that escaped the disaster. » In the chronicle of the Revue des Questions historiques of January, the marquis de Beaucourt (1007) thus announced my publication on Boudon de St-Amans: […]
Notes:
984 See 5 May 1894.
985 See 29 March 1893.
986 See in particular 12 October 1893.
993 Baguenier-Desormeaux (H.) is the author of Bonchamps et le passage de la Loire par l'armée vendéenne en 1793, Lafolye, Vannes, 1896, 79 p., and of Mémoires et documents concernant les guerres de la Vendée, publiés avec des notes et des éclaircissements, Germain et G. Grassin, Angers, 1896, 374 p.
[Note on Oscar de Poli:] Oscar de Poli began his studies at the military college of La Flèche and finished them at the seminary of Orléans. In 1860 he enlisted in the corps of the papal zouaves, was gravely wounded at Castelfidardo, and published on his return, in La Gazette de France, the Souvenirs du bataillon des zouaves pontificaux, issued in book form in 1861. The pontifical committee charged him, about the same time, with conducting the Irish brigade of Saint-Patrick from Paris back to Dublin. O. de Poli then wrote in L'Union the Lettres à un campagnard, revived the old Mercure de France, and published, from 1861 to 1866, a series of successful novels and narratives of contemporary history, notably Le Dernier des Plantagenêts (1862) and Jean Poigne d'acier, récits d'un vieux chouan (1866). O. de Poli had been made personally a Roman count by Pius IX in 1865. He had married, in May 1865, Mlle de Choiseul-Gouffier, great-granddaughter of the French ambassador at Constantinople under Louis XV. -A.D. Lot-et-Garonne, 16 J 4, correspondance d'érudits : Fonds Tamizey de Larroque.
[Tail of a note:] …de Tamizey avec F. de Mély : A.D. Lot-et-Garonne, 16 J 19, correspondance d'érudits : Fonds Tamizey de Larroque.
[Entry, Easter:] Never did a gift better deserve the name of Easter eggs. I had great joy in going through so many excellent and precious volumes at once, cutting their pages. The shelves of my library are filling up rapidly. I have had sent from the Hachette bookshop the volumes I lacked of the Collection des Grands Écrivains. Can one do without La Rochefoucauld, La Fontaine, Molière, the cardinal de Retz, Mme de Sévigné? It is as necessary as the air one breathes. I saw my swallows arrive this morning, and I had a telegram announcing another agreeable arrival for tomorrow, that of my nephew Jean de Boëry, whom I love like a second son (1012).
[Entry, the beginning lost:] […] a good and agreeable day, one of the best it has been given me to pass for a long time. I had three gardeners and three young and pretty gardeneresses. The garden has been very well arranged, the walks as well as the interior; the whole has a charming look. Under a very fine sun the little rustic festival was a comfort to me. I grew drunk on the sweet scent of the lilacs, which are in full flower. I delighted too in the sight of the white heads of my cherry trees and the green leaves of my poplars, even of those planted this winter. How beautiful nature is at this moment! And how it rejoices the heart of man, even of the most unhappy! She is an irresistible enchantress. I had forgotten all my pains and preoccupations in this day spent almost entirely in the open air. The last word was deliciously spoken by a nightingale, my very near neighbour, who, as I was falling asleep, gave me an incomparable serenade. The divine singer reminded me of a saying of my childhood: there must be many nightingales in paradise.
6 April. Easter Monday. Alleluia! My venerated and dear friend abbé Louis Bertrand (1011) sent me yesterday the collection of his complete works.
12 April. My tenant Barthalome being no longer willing to pay 900 francs a year and offering me only 600 francs, and another candidate tenant, one Plurimet, having likewise proposed only the derisory sum of 600 francs, I have decided to try sharecropping. Yesterday I came to an agreement with the couple Pierre Borie and Marie Mathieu, who have the reputation of being honest and hard-working. They were very reasonable about the terms made. May they always remain as reasonable! They are young and have three small children. I, who so detest changes, should like to keep them until I am obliged to hand in my resignation as landowner.
1 May. My name-day. I have sent to the Ministry the manuscript of the Avertissement of volume VI of the Lettres de Peiresc. I hope the volume, whose publication has been much delayed, will appear towards the end of the summer. I have, for the rest, worked a great deal since the return of fine days, preparing numerous articles for the four great Parisian periodicals to which I have long contributed, the Revue des Questions historiques (1013), the Polybiblion (1014), the Revue critique and the Bulletin critique (1015), and for our regional periodicals (Agen, Auch, Bordeaux and Toulouse). Some of these articles are of considerable extent, such as « Le Cardinal d'Armagnac et François de Seguins » (Annales du Midi of next July) and « Marguerite de Lustrac et Anne de Caumont » (Revue de l'Agenais, running late, of March-April) (1016). I transcribe what L'Avenir de Lot-et-Garonne of 20 April last said of this latter work: « At its monthly sitting held on Friday, this Society (of agriculture, sciences and arts) heard the reading, by M. Ph. Lauzun (1017), of a study sparkling with humour and wit by M. T. de L. on two celebrated women of the Agenais in the 16th century… The analysis of M. T. de L. is strewn with fine and delicate insights recalling the manner of M. Cousin (1018), the passionate admirer of the heroines of the age of […] ».
[Fragment of a letter:] « …Monsieur, I am happy to have, in the name of the Société d'histoire littéraire, to announce to you that the Board of Administration, at its Thursday sitting, named you vice-president of the Society. In transmitting this decision to you I add, Monsieur, my personal congratulations… The Secretary, Brunot. »
Notes:
998 Amédée, vicomte Caix de Saint-Aymour (1843-1921), born at Senlis (Oise). He published Les Pays sud-slaves de l'Austro-Hongrie (1883); Hugues de Groot (1884); Les Intérêts français dans le Soudan éthiopien (1884); La France en Éthiopie (1886); Instructions aux ambassadeurs et ministres de France en Portugal (1886); Arabes et Kabyles (1891); Anne de Russie, reine de France (1894); Histoire illustrée de la France (1899-1900); Autour de Noyon : sur les traces des Barbares (1917); Les Boullongne (1919).
999 See 3 November 1893.
1000 Eugène Beauvois, born in 1835 at Corberon (Côte-d'Or), archaeologist and historian, specialised in the political history of the Scandinavian countries. His principal works: Histoire légendaire des Francs et des Burgondes (1867); La colonisation de la Russie et du Nord scandinave, translated from the Danish (1875); L'Élysée des Mexicains comparé à celui des Celtes (1885); Les Trois Chamilly (1886).
1001 See 28 August 1895.
1002 See 1 December 1893.
1003 Correspondance de […].
1004 Edmond Bonnaffé (1825-1903), born at Le Havre, art critic. He published: Les collectionneurs de l'ancienne Rome (1867); Les collectionneurs de l'ancienne France (1873); Causeries sur l'art et la curiosité (1878); Physiologie du curieux (1881); Dictionnaire des amateurs français du XVIIe siècle (1884); Le meuble en France (1887), notably.
1005 Arthur Michel de Boislisle (1835-1908) published the Chambre des comptes de Paris (1873), to which the Académie des inscriptions awarded the grand prix Gobert, then his monumental edition of the Mémoires of Saint-Simon. From 1884 he was a free member of the Académie des inscriptions et belles-lettres.
1006 See notes 783 and 785.
1010 Max Rooses (1839-1914), born and died at Antwerp, art historian and keeper of the Musée Plantin-Moretus at Antwerp, a great specialist of the Flemish painters and author of monographs on Rubens, van Dyck and Jordaens in particular. He had published in 1884 Christophe Plantin, imprimeur anversois and, the following year, the Correspondance de Christophe Plantin.
1011 See 2 July 1894. -A.D. Lot-et-Garonne, 16 J 6, correspondance d'érudits : Fonds Tamizey de Larroque.
1012 See 8 April and 30 August 1893 in particular.
1013 The Revue des Questions Historiques, founded in 1866 by the marquis de Beaucourt, of whom Tamizey is one of the most regular and most active collaborators. See Introduction, note 46 in particular.
1014 See 28 October 1895.
1015 Revue critique d'histoire et de littérature, published in Paris from 1866 (BNF, 8° Z.272).
1016 Tamizey de Larroque (Philippe), Le cardinal d'Armagnac et François de Seguins, documents inédits, É. Privat, Toulouse, 1896, in-8°, 31 p. (extr. des Annales du Midi, VIII, 1896). In listing Tamizey's publications, the Catalogue des Imprimés de la Bibliothèque Nationale records only a Document inédit relatif à l'enlèvement d'Anna de Caumont, Impr. de Pillet fils aîné, Paris, 1873, in-8°, 12 p. [the justification presented to King Henri III by the duc de Mayenne, author of the abduction; extract from the Cabinet historique, XIX, 1878].
1017 See 28 March 1894.
[Entry, the beginning lost:] […] It is a relative of the dear curé of Cazeneuve, a curé whom I liked to nickname the abbé Gorini (1023) of Gascony, who, on the very day of the death, brings me news as sad as it was unforeseen. It is a great loss, that of the biographer of Saint Austinde, archbishop of Auch; I have mentioned here his two visits, and a few days ago I had procured him the favour of visiting Chantilly. His book I publicly praised before and after its appearance: before, in a letter to the author reproduced in the prospectus printed for the subscription; after, in an article published by the Revue des Questions historiques (number of 1 April). Abbé Breuils had spoken to me with enthusiasm, at his last visit, of his fine project of writing in several volumes the history of the counts of Armagnac. Who will give us the great work he could so well have given us, he who worked so much and worked so well?
[Entry:] M. Jules Momméja (1024) has just left us, after spending three days here. He had arrived last Wednesday with our common friend M. Georges Tholin (1025), and that day we had invited to luncheon our excellent neighbour M. Maurice Campagne, the new mayor of Saint-Pierre de Nogaret (1026). I was delighted to make the full acquaintance of M. Momméja, who is a man of science, of talent and of heart. We talked together so much and on so many subjects that he is now to me like an old friend. Intimate confidences are to affection what voyages are to wine: they age and perfect it in little time. Not only did we talk a great deal, we also worked a great deal. My dear guest found in my library, so reduced, alas!, precious information for his history of the Seneschals of the South-West, and I helped him as best I could to gather that information, only too happy to give him my collaboration. Was that not one of the duties of hospitality? One thing alone spoiled my new friend's stay: a furious wind which has not ceased blowing for more than a week and which prevented us from sitting under the chestnut tree. We had to shut ourselves up in my study and do without the sweet neighbourhood of my roses.
15 June. As windy as the second half of May was, so rainy has been the first half of June. One would have called it a winter month. It reminded me of the month of June at Carpentras, when every day the rain fell in floods, making the beautiful blue sky of the Comtat resemble the ugly grey sky of England.
20 June. I learn the news of the death of M. de Rozière (1027), member of the Institut and of the Senate, one of those of my great colleagues who always showed me the most kindness. I shall never forget his warm sympathy for the worker deprived of his books and manuscripts. Not only did he write me a touching letter, he sent me a great number of his learned publications. I have from him several charming letters, full of heart and wit (the wit of an amiable hunchback!), two above all: one, already somewhat old, in which he joked about his spine and gave me curious details of his stay in the country, in his beloved mountains of the Lozère; the other quite recent, in which he recommended to me the book of his grandson, Robert de Flers (1028), and gave too flattering praise to my articles in the Revue critique.
24 June. I have just received a circular inviting me to contribute to the subscription organised to celebrate the election of M. Gaston Paris (1029) to the Académie française. I reply in these terms (with a remittance of twenty francs) to the treasurer, Édouard Ande, avocat, doctor of law: « Pavillon Peiresc, 24 June 1896. -Monsieur, Let us honour the great minds, the great hearts. They are benefactors who give us light and strength. For their lessons, for their examples, we owe them as much gratitude as admiration. I thank you and congratulate you, you and your two collaborators, on your generous initiative, expressing my liveliest wishes for its resounding success… »
Notes:
1021 North of Gontaud, between Birac to the south-west and Puymiclan to the north-east: Carte 1738 Est Seyches, série bleue 1 : 25 000, I.G.N., Paris, 1987.
1022 Abbé Alphonse Breuils published Saint Austinde (1000-1068) et la Gascogne au XIe siècle, Auch, Léonce Cocharaux, 1895, VI-359 p., ill. Born at Nogaro in 1855, he was at the beginning of his ecclesiastical career a teacher at the petit séminaire of Auch, before becoming in 1882 curate of St Jacques de Condom and in 1883 curate under abbé Bartherote at Mirande. He published his first article in June 1885. Adrien Lavergne, in the obituary notice he devoted to him in the Revue de Gascogne, 1896, p. 325, lists in all 33 publications classed under three main headings: Gallo-Roman archaeology; medieval religious and civil history, with notably a study of Jean I, count of Armagnac in the time of the Black Prince. He went to Paris for the Congress of the Sorbonne of 1896, where he was able to read only one of the papers he had brought in answer to the questions of the programme, on baptismal names in the early Middle Ages in Gascony. For this communication he received the congratulations of Gaston Boissier, of the Académie française, president of the sitting, who retained it for insertion in the Bulletin du Comité des Travaux Historiques. He came back from this Congress ill and died of it, at 40, on 15 May 1896. « He leaves, » writes Adrien Lavergne, « a great quantity of notes unfortunately written on minuscule slips. He leaves manuscript notebooks, perhaps works ready for printing… What would he not have done had death not carried him off! We know that a learned society had placed at his disposal a sum sufficient to go to London to study the considerable documents carried away by the English after their evacuation of Gascony. » -A.D. Lot-et-Garonne, 16 J 7, correspondance d'érudits : Fonds Tamizey de Larroque.
1023 Jean-Marie-Sauveur Gorini (1803-1859), priest and scholar, born and died at […] Bourg-en-Bresse. He published in 1853 a book that was the fruit of long researches, Défense de l'Église contre les erreurs historiques de MM. Guizot, Aug. et Am. Thierry, etc., which made a stir.
1024 Jules Momméja (1855-1928) published notably Les Fresques du château de Bioule (Tarn-et-Garonne), impr. de E. Plon, Nourrit et Cie, Paris, 1889, 15 p. (paper read at the Réunion des sociétés des beaux-arts des départements, at the École des beaux-arts, sitting of 12 June 1889); L'Hôtel de ville de St-Antonin (Tarn-et-Garonne), contributions à l'histoire de l'art à travers les moeurs, impr. de E. Plon, Nourrit et Cie, Paris, 1889, 32 p.; Souvenirs du Mont-Cassin, une analyse graphique du poème de Dante, impr. de E. Forestié, Montauban, 1889, 15 p.; Les Plate-tombes du Moyen âge, essai d'esthétique archéologique, impr. de E. Forestié, Montauban, 1890, in-8°, 19 p. (extr. du Bulletin de la Société archéologique de Tarn-et-Garonne); Du rôle des moines dans l'architecture du Moyen âge, analysis of the lecture given by M. Anthyme Saint-Paul at the public sitting of the Société archéologique de Tarn-et-Garonne, 19 March 1892, impr. de E. Forestié, Montauban, 1892, 23 p. (extr. du Bull. archéologique de Tarn-et-Garonne); Mosaïques du Moyen âge et carrelages émaillés de l'abbaye de Moissac, Impr. nationale, Paris, 1894 (extr. Bulletin archéologique, 1894); Les Sarcophages chrétiens antiques du Quercy, J. Girma, Cahors, 1895, 67 p.; Un numismate montalbanais au XVIe siècle, impr. de A. Chauvin et fils, Toulouse, 1896, 8 p.; Quelques marbres antiques chrétiens et païens du musée de Cahors, Impr. nationale, Paris, 1896, in-8°, 12 p. (extr. du Bulletin archéologique, 1895); L'Oppidum des Nitiobriges, H. Delesques, Caen, 1903, 78 p. (extr. du Compte-rendu du LXVIIIe Congrès archéologique de France, held in 1901 at Agen and Auch); La roue de la fortune du château de Mazères, notes pour servir à l'histoire des carrelages émaillés du Moyen âge, impr. de L. Cocharaux, Auch, 1904, 28 p. (extr. du Bulletin de la Société archéologique du Gers); Ingres… biographie critique…, H. Laurens, Paris, 1904, 128 p. (Les Grands artistes, leur vie, leur oeuvre). He was keeper of the Museum of Agen, whose ground-floor and first-floor rooms he fitted out. His daughter had married Dr Villeneuve, deputy mayor of Moissac in 1928: « Nécrologie », Revue de l'Agenais, 55e a., n° 1-2, Jan.-Apr. 1928. -Lauzun (Ph.), op. cit., p. 265. -A.D. Lot-et-Garonne, 16 J 19, correspondance d'érudits : Fonds Tamizey de Larroque.
1025 See 3 November 1893.
1026 Maurice Campagne (1849-1906), son of Eugène Campagne and elder brother of the sculptor Daniel Campagne (1851-1914). The Campagne family comes from Béarn, but one of its representatives, François Campagne, master surgeon, settled at Caudrot (Gironde) by marriage in 1771. He was procureur du roi until the Revolution, from which he and his four sons had much to suffer: father and children were imprisoned at Bordeaux, then released after 9 Thermidor (Maurice Campagne mentions it in a pamphlet entitled Un procès sous Charles X, p. 13, n. 2). It was through the marriage of the second of his sons, Pierre-Hippolyte, on 13 February 1817, aged 42, chevalier of the « Brassard » (a royalist order of the Restoration), captain of the National Guard, with Marguerite Célinie de Chevalier d'Escages, one of the three daughters in whose persons that family died out (Tamizey published its Livre de raison), that the Campagnes settled at Escages, near Gontaud, not far from Larroque. Maurice Campagne fought in the war of 1870 as a captain of mobiles, then became a barrister and entered the Administration, from which he resigned in 1878 while sub-prefect of Rochechouart. Retired to Escages, he gave himself to history and genealogy (see Revue de l'Agenais, 1906, in the appendix). When in 1900 he brought out the Histoire de la maison de Madaillan, Camille Jullian, the great historian and member of the Institut, wrote: « …the clarity is remarkable… you have piously and wisely done well to dedicate it to Ph. Tamizey de Larroque. The qualities of our dear and lamented master have passed into you » [A.P. Baquier]. -Correspondance de Tamizey avec Maurice Campagne : A.D. Lot-et-Garonne, 16 J 8, correspondance d'érudits : Fonds Tamizey de Larroque.
1027 See 25 July 1895. -Correspondance de Tamizey avec Eugène de Rozière : A.D. Lot-et-Garonne, 16 J 24, correspondance d'érudits : Fonds Tamizey de Larroque.
1028 Robert Pellevé de La Motte-Ango, marquis de Flers (1872-1927), highly successful author of comic operas and above all of boulevard plays written in collaboration with G.-A. Caillavet. But that theatrical career began in earnest only from 1900. In 1896 he had just brought out his first work, a short story, La Courtisane Taïa et son singe vert, and a volume of impressions, Vers l'Orient.
1029 See 29 March 1893.
[Unplaced cross-reference:] See 22 February 1893 and 17 April 1894.
Je vous remercie et je vous félicite, vous et vos deux collaborateurs, de votre généreuse initiative, vous Maurice Campagne (1849-1906), fils d'le vin : elles la vieillissent et la perfectionnent en peu de bibliothèque si réduite : hélas ! de précieux renseignements qui m'ont toujours témoigné le plus de bienveillance. Je exprimant mes voeux les plus vifs pour l'éclatant succès : leur évacuation de la Gascogne. » : Voir 22 février 1893 et 17 avril 1894. annexe). Lorsqu'en 1900, il fit paraître l'Histoire de la maison de 1029 Voir 29 mars 1893. 1022 L'abbé Alphonse Breuils a publié -A.D. Lot-et-Garonne, 16 J 7, correspondance d'érudits : Fonds Tamizey de Larroque. 1023 Jean-Marie-Sauveur Gorini (1803-1859), prêtre et érudit né et mort à Madaillan, Camille Jullian, le grand historien et membre de l'Institut, écrivit « …la clarté est remarquable…vous avez pieusement et sagement bien fait de la dédier à Ph. Tamizey de Larroque. Ce sont les qualités de notre cher et regretté maître qui sont passées en vous » [A.P. Baquier].-Correspondance de Tamizey avec Maurice Campagne : A.D. Lot-et-Garonne, 16 J 8, correspondance d'érudits : Fonds Tamizey de Larroque. 1028 Gustave Baguenault de Puchesse, né et mort à Orléans, fils de l'un des chefs du catholicisme libéral, ami de M gr Dupanloup. Il obtint en 1869 le grade de docteur ès lettres et fut membre du Comité des travaux historiques. Il a fait d'importantes publications touchant l'histoire du XVI e siècle : Les ducs François et Henri de Guise (1867) ; Jean de Morvilliers, évêque de d'Orléans, garde des sceaux de France (1869) ; La Saint-Barthélemy à Orléans (1873) ; Lettres de Catherine de Il s'agit de l'hélléniste Anatole Bailly (1833-1911), né et mort à Orléans. Élève de l'École Normale Supérieure en 1853, agrégé de grammaire, professeur. Il se consacra entièrement à sa grande oeuvre : le Dictionnaire grec-français (1894) pour lequel il s'attacha à relever toutes les formes attestées par les inscriptions et les bons manuscrits, à indiquer les références des exemples qu'il citait et à marquer la quantité. Il est aussi l'auteur d'un Manuel pour l'étude des racines grecques et latines (1867), d'une Grammaire grecque (1872), exposée d'après la méthode comparative alors presque inconnue en France. Il a enfin collaboré aux Mots latins et aux Mots grecs de Bréal.d'Orléans, etc. Tous ces livres, dont l'aimable envoi coïncide Le mois de juillet, que j'ai surnommé le père des terribles orages, ne s'achève pas sans nous en amener un qui dépasse en violence ceux du 13 de ce mois. Au moment où j'écris ceci, la pluie tombe avec une sorte de furie. Tout est inondé dans le pavillon comme dehors. Les eaux ont déjà raviné mes allées, envahi nos prairies que l'on voit d'ici entièrement submergées et qui resteront déplorablement ensablées, et entraîné force tombereaux de terre dans les champs de nos voisins d'en bas. C'est surtout la pièce des Quatre-journaux 1035 qui a été ravagée par l'avalanche. Tout le dessus de notre panier nous a été enlevé en quelques minutes. C'est un véritable désastre.O infortunes du propriétaire ! Hier, je me plaignais de l'orage qui avait écrémé mes meilleurs semis. Aujourd'hui j'ai constaté que les raisins de ma petite vigne sont fort malades.et les temps modernes (1863), Les anciennes institutionsde France (1866), Problèmes historiques (1867) etc. 
Il avait autrefois donné au théâtre du Gymnase une comédie, Lénore, et inséré des articles d'érudition et de critiques dans les recueils des sociétés savantes de l'Orléanais et de la Touarine, ainsi que dans le Journal du Loire. Il avait également fourni à la Revue contemporaine des études historiques, dont plusieurs ont été tirées à part, notamment le Château de Sully, souvenir de la jeunesse de Voltaire. Quatre-journaux » ou « lous quate journaou » : champ situé en contrebas de la propriété de Larroque, face au lieu-dit « roudérat », longeant un tout petit cours d'eau dit « la Croze » : Serin (P.), op. cit., p. 26. Je viens de recevoir d'Orléans une grande caisse de livres qui m'est envoyée par mes confrères de la Société archéologique de l'Orléanais, et qui contient, outre la série avec un douloureux anniversaire, sont pour moi des des Mémoires de cette Société, de nombreux volumes et de consolateurs. Que Dieu rende aux généreux donateurs -et au nombreuses brochures offerts à l'incendié par MM. G. Ba-centuple -le bien qu'ils m'ont fait ! guenault de Puchesse 1031 , Bailly 1032 , Bimbenet 1033 , Cuissart, Vicaire Général Desnoyers, Herluison. Jarry, Loiseleur 1034 . 31 juillet Médicis (1897-1909) notamment. - A.D. Lot-et-Garonne, 16 J 4, correspondance d'érudits : Fonds Tamizey de Larroque. 1 er août 1031 1032 1035 Les « Malheureusement ces doctes et aimables auscitains n'ont pu se joindre à leurs confrères. Ils ont été remplacés par mes neveux Jean et Guy de Boëry 1050 , moins érudits que les deux chers absents, mais qui ne laissent rien à désirer comme brillantes fourchettes. Le déjeûner a été très gai. Mon petit cordon-bleu s'est surpassé. Tout a été trouvé exquis et on l'a bien prouvé. Voici le menu : bouchées à la reine Margot, poule au pot Henri-IV, canard aux olives, civet de lièvre, pâté de canard d'Amiens, gigot de mouton, beignets. À cause du trop précipité départ du train d'Agen (4 h.), la petite fête a été interrompue aux environs de 3 h., ce qui a été grand dommage.On s'est promis de mieux arranger les choses, l'an prochain.Auch, 1909. Il a signé l'introduction de Bénétrix (P.), Les Conventionnels du Gers, impr. J. Capin, Auch, 1894 et l'avertissement à Rodière (Roger), l'Épigraphie du Pas-de-Calais, t.8, Fontenay-le-Comte, 1932 et édité le Journal de Balthsar Sentex, huissier et archer du Vice-Sénéchal d'Auch et monagraphie du villagede Castin, Impr. centrale, Auch, 1904, 96 p. -A.D. Lot-et-Garonne, 16 J 26, correspondance d'érudits : Fonds Tamizey de Larroque. Jeudi 27 août Après l'inondation d'avant-hier, un tout petit flot aujourd'hui, représenté par M. le professeur Brissaud 1051 . Il a d'Armagnac, 1050 Voir 8 avril 1893 notamment. 1583, in-12. Il a, en outre, édité les oeuvres de Sidoine Apollinaire, d'Eutrope, de Perse, d'Ausone, de Censorinus et de Pomponius Mela ainsi que le Polyhistor de Solin. 1063 Gabriel de Lurbe, né à Bordeaux, mort en 1613. Il fut d'abord avocat, puis devint procureur syndic à Bordeaux. Très versé dans l'histoire de sa province natale, il en est le premier historiographe ayant notamment publié : De illustribus Aquitaniae viris, a Constantino ad nostra tempora, 1591, contenant 113 biographies ; Anciens et nouveaux statuts de Bordeaux, Bordeaux, 1593 ; LurbaeiGarumna, seu de fluviis et urbibus Aquitaniae, 1593. 1064 Dom Bernard de Montfaucon (1655-1741), bénédictin de la congrégation de Saint-Maur, qui y continua l'oeuvre de Dom Mabillon (1632-1701), genèse de l'historiographie méthodique. 
He is known for having edited a collection of ancient documents, the Monuments de la Monarchie Française, published between 1729 and 1733, to which allusion is probably made here. He is also the founder of Greek palaeography: having published in 1707 a Palaeographia graeca, he invented the word, which he uses in French (« paléographie ») for the first time in a letter of 14 January 1708, and gave it a very broad extension embracing both codicology and the study of book hands; diplomatics and documentary hands are treated incidentally. He distinguishes two great categories of scripts: the uncial (majuscules, or capitals), whose name he borrows from the Latinists, and the « linked scripts », that is the « minuscules », of which he possesses an incomparable knowledge through his researches in the Cabinet du roi and in the collection of the duc de Coislin: Bourdé (Guy) and Martin (Hervé), Les écoles historiques, Points, Seuil, p. 86-96. Tamizey interested himself in Dom Bernard de Montfaucon, as in other great scholars of the 17th century, publishing notably De la correspondance inédite de Dom Bernard de Montfaucon, Paris, 1873 (extr. de la Revue de […]).
Sunday 28 March. This morning, coming back from Gontaud, where, whether because of my illness or because of the bad weather, I had not been since the month of November, I felt, as I climbed towards my pavilion, a painful sensation: the hillside of Larroque, appearing to me for the first time since it was no longer dominated by the oak, seemed all stripped, all impoverished; one would have said a discrowned king. As consolation for the disappearance of that great ruin, whose effect was so majestic, so poetic, I saw my dear swallows arrive in the evening. The poor little things will not be cold, for during the last few days we have enjoyed here, in the afternoon, a temperature of 20 to 22 degrees centigrade. Under the influence of this abnormal heat, succeeding so brusquely to so much damp, everything has developed, in the fields and in the woods, with extraordinary rapidity. In a few hours, so to speak, I have seen all my lilacs open and the greenery spread everywhere. It is like an opera set. Never were my little friends more joyous than today: they too feel the happy influence of the coming of spring!
[Entry, the beginning lost:] […] Cantal, brother of the late president of the tribunal de la Seine. A double good visit, which would have left me only agreeable impressions had not my dear guests, obliged to leave again at 5 o'clock, had to endure a rain only too worthy of this frightful month of May. I suffered to see my visitors strike into the mud under the mass of water that fell with fury. I hope they reached port without too many falls, if not without too many slips. But I think they will long remember the road turned torrent that they followed, and the torrents of rain that swelled it without cease. Torrents above and below, torrents everywhere!
28 May. Will the month end as it began? Yesterday the feast of the Ascension was very wet; today the weather is still worse. And what cold! Last night I woke quite frozen, having made the mistake of keeping too light a blanket. I have another grievance against this fatal month of May. The perpetual rains have discouraged a nightingale that had taken up residence, from the first days of spring, in the leafy elm I had planted near the aviary. My harmonious little neighbour has flown away, perhaps stricken with a bronchitis caused by the damp, a very ugly ailment for a singer! In the evening before falling asleep, in the morning before rising, it was a charm for me to hear that winged poet, who outdid all the other poets. May he return when we enjoy at last those beautiful summer nights that will console us for our eight months of winter!
31 May. I have passed almost the whole of this month, surnamed by me the aquatic, in clearing the ground with a view to carrying out my project of the first day of the said month. I have had a great number of books lent me by good friends, and also several new books of which I had to give an account. I was struck, in these readings, by the frequent mention made of my publications. I have worked so much and on so many subjects that one meets me on every road, as the dear Père Sommervogel said at the head of his Bibliothèque de la Compagnie de Jésus (1111). Among the books lent I will cite above all the Vie de Marca by abbé Dubarat (1112), where my name recurs more than fifty times, often accompanied by too flattering praise. Among the new books I will mention the works on Bossuet and on Pascal by M. l'abbé (left blank) and by M. Michaud. I will mention above all the Jeanne d'Albret of the baron de Ruble (1113), where I am cited almost as often as in the Vie de Marca.
[Entry, the beginning lost:] Yesterday the first great heats began. We reached 25 degrees in the morning and 30 in the afternoon. The night was stifling. I got up to open my window and let a little fresh air into the furnace. The night was admirable; the stars shone with the most magnificent brilliance. I remained long in contemplation of the sublime spectacle, my thought losing itself in the infinity of all those worlds beside which ours is so small. I breathed with delight the penetrating perfume of the neighbouring lavender. But I had the regret of not hearing my poor nightingale: he has decidedly abandoned for ever the elm that was his verdant concert hall. In place of his divinely melodious songs I heard, in the calm silence of that beautiful night, only the croaking of the frogs. I was indignant at the ignoble din, and I understood the need the castellans of the Middle Ages had, according to legend, of making their vassals beat the surrounding ponds to discourage the abominable performers.
Note:
[Tail of a note on Édouard de Dienne:] …et lithographie agenaises, Agen, 1901, 65 p. See the Catalogue des imprimés de la Bibliothèque Nationale. Édouard de Dienne had married the sister of Louis de Dordaygue [A.P. Baquier]. -A.D. Lot-et-Garonne, 16 J 12, correspondance d'érudits : Fonds Tamizey de Larroque.
Nommé historiographe du roi, il écrivit le récit de la campagne de Hollande de 1672, « modèle de ce genre difficile et délicat d'une histoire immédiate visant à la glorification » Grosperrin (Bernard), art. Pellisson dans Bluche (Fr.) s.d., Dictionnaire du GrandSiècle, Paris, Fayard, 1989, p. 1180. aussi ressentent l'heureuse influence de l'arrivée du printemps ! 28 mars -Dimanche 30 juin Je me suis amusé à raconter que la Préfecture répondit au Ministre de l'Instruction Publique, lui demandant des renseignements politiques sur moi, à propos de la rosette d'officier cette chaleur anormale, succédant brusquement à tant d'humidité, tout s'est développé, dans les champs et dans les bois, avec une rapidité extraordinaire. En quelques heures, pour ainsi dire, j'ai vu tous mes lilas s'épanouir et la verdure s'étendre partout. C'est comme un décor d'opéra. du 1166 , étant toujours du côté des vaincus, dans les luttes électorales comme dans la guerre de la Grèce Amérique 1167 . Après avoir voté pour le brave soldat de la bonne cause, l'abbé Rambaud 1168 , j'ai rempli un autre devoir, un devoir de charité, en allant voir au Faudon 1169 mon condisciple, parent et ami Charles de Ricaud 1170 , lequel est presque mourant.Ma visite lui a fait du bien, et, en lui serrant la main poux la dernière fois, j'ai senti se mêler à mort chagrin la joie consolante d'avoir payé une dette sacrée. est inscrit et portant la célèbre devise : Post tenebras lux 1171 , médaille qui m'avait été annoncée par la lettre que je vais transcrire : Union romande pour la protection des animaux. -Fribourg, le 24 mars 1898. Monsieur, j'ai l'honneur de vous annoncer que dans son assemblée générale tenue à Genève le 17 mars dernier, l'Union romande des Sociétés protectrices des animaux, sur la proposition de Monsieur Eugène de Budé, son vice-président, vous a décerné à l'occasion du récent centenaire du Général de Grammont, sa médaille d'honneur en argent, an témoignage de reconnaissance poux les services que vous avez rendus à la cause protectrice 1172 . Nous espérons que cette distinction que nous sommes heureux de vous offrir, contribuera à resserrer les bonnes relations que nos Sociétés suisses ont menées avec vous, Monsieur, lors des belles fêtes de Miramont. Agréez, Monsieur, l'assurance de notre considération disitinguée. contre la Turquie et la guerre de l'Espagne contre l'Abbé Charles de Raemy, Curé de Bourguillon, président. REPUBLIQUE FRANCAISE DEPARTEMENT DE LOT-ET-GARONNE - Cote : 1 J 804 -Don ClaudeMaisani (novembre 1987) Dossier de carrière coté 19820668/114 au Centre des Archives Contemporaines de Fontainebleau (Seine et Marne) Bourrachot (Lucile), « La fin du pavillon Peiresc à Gontaud », Revue de L'Agenais, 1999, p. 15. Philippe Tamizey de Larroque est décédé le 26 mai 1898 : Philippe Tamizey de Larroque. Discours de Léopold Delisle, président de la section d'histoire et de philologie, prononcé devant le Comité des Travaux historiques, le 6 juin 1898, Société d'agriculture, sciences et arts d'Agen, Agen, imprimerie et lithographie agenaises, 1898. Il en a publié une série : Deux livres de raison de l'Agenais [de la Serin (Pierre), op. cit., p. Couture (L.), op. cit., p. 492. Serin (P.), op. cit., p. 35. Simple homme de troupe de l'armée du futur général de Toiras, bloquée par les Anglais dans le fort de Saint-Martin sur l'île de Ré. 
Swimming 10 to 12 km, despite a storm and English longboats, he managed to carry a letter, enclosed in a tin box coated with wax that he wore around his neck, to the royal forces stationed at Fort-Louis, thus helping to make the siege fail. Exhausted and numb, unable to stand on arrival, he is said to have crawled up the beach to reach his goal. Tamizey de Larroque's militarism, at the moment when the alleged treason of Captain Dreyfus was being revealed, would thus have driven him to exalt, by reaction, the heroic accomplishment of duty by a simple soldier sprung from the peasantry of the Agenais. He won, in all [...]. - Tamizey de Larroque (Ph.), Adolphe Magen (1818-1893), Impr. Vve Lamy, Agen, 1894, p. 8. - On 20 January 1871, Tamizey, whose first two children had already died in infancy, lost a little girl, Jeanne-Marie, not yet one year old: see Livre de raison, 25 April 1891. - Couture (L.), op. cit., p. 500. - There are three partial surveys of his works: Tamizey de Larroque himself attempted a survey of his historical production in 1881: Le Père Cortade. Notes et extraits, suivis d'une bibliographie Tamizeynne, Sauveterre-de-Guyenne, 1881 (extract from the Revue des [...]); Couture (L.), op. cit., p. 551-557; and, among many others, "À propos de Charron et Du Bartas", Revue de Gascogne, 1883, p. 299. - Couture (L.), op. cit., p. 518. - See note 87. See 15 July 1889. See 15 July 1889. See 6 September 1889. - On the Bonneau family: Meller (P.), op. cit., t. 1, p. 134-135. - Marie-Élisabeth-Pauline Delmas de Grammont (1802-1888), sister of general Jacques-Philippe Delmas de Grammont (see note 74), daughter of Jean-Joseph Delmas de Grammont, born at Castillonnès on 23 December 1746 and died at Miramont-de-Guyenne on 25 October 1809, colonel, chevalier de Saint-Louis, and of Marie-Henriette de Vivie de Duvivier (1765-1827), married in second wedlock at La Sauvetat-en-Dropt on 21 December 1790. Jean-Joseph was "the hero of Wissembourg [1793]". In the words of Ph. Tamizey de Larroque, his mother for sixty years "lit up this little town [Gontaud-de-Nogaret] by her goodness and her charity": Tamizey de Larroque (Ph.), Le chroniqueur Proché..., op. cit., p. 15-19. - [A.P. Baquier]; Couture (L.), "Philippe Tamizey de Larroque", Revue de Gascogne, 1898, p. 491. See notably 29 September 1894 and 25 September 1897. 216. Born of the second marriage of Jean-Joseph Delmas de Grammont (1746-1809) with Marie-Henriette de Vivie de Duvivier (1765-1827), that is: Jacques-Philippe and Jean-Urbain, discussed in the following note; two other sons were born of this union, and allusion is made here to one of them, either Joseph-Auguste, husband of Adèle Heurtault de Lammerville, or Raymond, husband of Marie-Anne Le Roy de Lanauze [A.P. Baquier]. 217. Jacques-Philippe Delmas de Grammont (1796-1862), husband of Anne-Marie de Boëry: Tamizey de Larroque (Philippe), Notice sur le général Delmas de Grammont, de Soye et Bouchet impr., Paris, 1862. See 4 October 1897. 218. Jean-Urbain Delmas de Grammont. See 27 September 1889 [A.P. Baquier]. - [The Parlement held him guilty] of having composed the couplets and slandered Saurin, and condemned him to perpetual banishment. The poet had already gone into exile in Switzerland, at Soleure. There he interested in his cause the ambassador of France, the comte du Luc, who remained his staunch defender and to whom he dedicated his Ode à la Fortune. Rousseau followed him to Vienna.
Meanwhile a reaction was taking place in France in his favour, and the baron de Breteuil obtained his recall; but Rousseau refused to return without a fuller vindication. Later he was himself to solicit a recall, which was not granted. He nevertheless returned to Paris, and the authorities closed their eyes; but, isolated and ill, he soon went to Brussels, where he died. Rousseau had tried the theatre, without success. He owed his very great celebrity to his Cantates and his Psaumes, but also to epistles and epigrams. 228. Henri de La Tour d'Auvergne, vicomte de Turenne and maréchal de France (1611-1675), a great man of war who distinguished himself under the reigns of Louis XIII and Louis XIV, much admired and respected by his contemporaries and by posterity. He died at the head of his troops on 27 July 1675. Following his glorious death, Louis XIV appointed, to "replace" him, eight marshals of France, who were thus the "monnaie de Turenne" ["Turenne's small change"]: Contamine (Ph.), ed., t. 1, Des origines à 1715, in Corvisier (André), ed., Histoire militaire de la France, P.U.F., Paris, 1992, p. 421. - Charles Nodier (1783-1844), a native of Besançon, first secretary to Pichegru and affiliated to secret societies, was the author of a satirical ode, La Napoléone, which earned him persecution by the imperial police; a refugee at Besançon, then at Laybach in Illyria, librarian and journalist, he published works of natural history, novels and treatises (Les Tablettes d'un suicidé, the Dictionnaire des onomatopées). Back in France, he joined the Journal des Débats and the Quotidienne. He then professed royalist ideas and brought out, in 1815, a Histoire des sociétés secrètes dans l'armée. Shortly afterwards he began to publish the tales of mystery and fantasy which henceforth became his speciality, together with works of criticism, history and bibliography. Appointed librarian of the bibliothèque de l'Arsenal in Paris in 1823, he gathered at his famous evenings the whole young Romantic school. In 1833 he entered the Académie française. He also signed Les papillotes du perruquier d'Agen, Paris, Techener, 1835 (extract from the newspaper Le Temps, where this notice on Jasmin first appeared on 10 October 1835 under the title Bibliographie patoise. Les papillotes du Coiffeur d'Agen, reproduced in the feuilleton of the Journal politique et littéraire de Lot-et-Garonne of 15 October 1835). To be compared with another piece by Nodier: "Comment les patois furent détruits en France", Bulletin des Bibliophiles, no. 14. - See 7 March 1890. See 9 August 1889. - Maurice Rouvier (1842-1911), born at Aix-en-Provence. Deputy 1871-1903; senator 1903-1911. He had begun his career under the protection of Gambetta and contributed to the latter's newspaper, La République Française. He took part in numerous governments of the Third Republic, playing there the role of agent of high finance: seven times Minister of Finance, and President of the Council (June-December 1887). He was gravely compromised in the Panama scandal alluded to here. From 1885, indeed, the project of a lottery loan had been launched, a kind of raffle serving as bait for new subscribers to the building of the Panama canal. To carry it out, a law had to be passed.
To that end the Panama Company bought the votes of a certain number of deputies. The law was indeed passed on 9 June 1888, but as early as 4 February 1889 the Company went into liquidation, ruining 85,000 small subscribers. Only in June 1891, more than two years after the liquidation began, was an inquiry opened for breach of trust and fraud. In fact the "Panama scandal" broke when E. Drumont, in September 1892, published in the Libre Parole a series of articles on "Les dessous de Panama". Following this scandal, from December 1892 Rouvier remained excluded from power until Combes, wishing to profit from his connections with the high bank, took him back as Minister of Finance (1902-1905). - On the left bank of the Garonne and the right bank of the Baïse, 3 km south of Vianne and about 5 km north of Nérac: carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003. - Sub-prefecture of the department of the Gers, 22 km south of Nérac on the Baïse. - Founded in 1859 by Mgr de Salinis, archbishop of Auch, under the name Bulletin de la Société historique de Gascogne, it took the title Revue de Gascogne from 1864 until 1940, the date of its disappearance: Rouleau (Pierre), Revue de Gascogne. Tables (1859-1940), preface by Charles Samaran, Albi, 1973. - The vineyards of the Aquitaine South-West had to face a series of diseases in the second half of the 19th century. First oidium: from 1850 a campaign against it was undertaken. That year a very severe attack ruined the harvest, and seven years were needed to obtain results. Spring would bring its promises, the shoots developed normally, the vines flowered, the young clusters promised a fine vintage; but from July oidium raged in all its horror, the berries dried up, and where 200 casks had once been harvested, seven or eight came in, and not excellent ones. After much groping, the appropriate antidote was at last found: sulphur. Prosperity returned between 1860 and 1870. The fight against mildew and black rot gave convincing results only after 1885, thanks to the "bouillie bordelaise" developed by professors Millardet and Gayon of the faculty of Bordeaux. The phylloxera attack took effect in Aquitaine in 1875 and was aggravated by a recurrence of mildew in 1882: Pouvereau (Norbert), Reinhold Dezeimeris, érudit loupiacais. Essai biographique, Association Saint-Blaise, Cadillac, 1998, p. 76-84. - See 14 March 1889. - Marie-Henriette de Boëry, née Delmas de Grammont, elder sister of general Delmas de Grammont. Jean-Pierre de Boëry [A.P. Baquier]. - Now Annaba, in eastern Algeria, about a hundred kilometres from the Tunisian border. - A locality about 2 km south-east of Miramont-de-Guyenne, beside the present D 667: carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003. - See 30 October 1890. See 16 July 1889. - Joseph Roumanille (1818-1891), born and buried at Saint-Rémy-de-Provence, poet, prose-writer and promoter of a renaissance of the Provençal language. His first collection of verse, Li Margarideta ["The Daisies"] (1847), brought him to notice.
He worked thereafter to bring together all the writers of the Provençal language and published an anthology of their works: Li Provençalo ["The Provençal Women"] (1852). He organized the congresses of Arles (1852) and Aix (1853). But his project of purifying the Rhodanian dialect met the resistance of undisciplined "patoisants". He then founded separately (1854), with a few Avignon friends, the Félibrige group, whose organ was the Armana prouvençau. Roumanille, who from school usher had become a bookseller, made himself the publisher of the Armana prouvençau and of most Félibrige works. He was, after Mistral and until his death, "Capoulié" of the Félibrige. "Born of an old family of Saint-Rémy peasants and gardeners (their trace is found as far back as the Middle Ages), the writer was of considerable importance in the history of Provençal literature. He was the first to discover and encourage the talents of a young Avignon boarder, a native of Maillane, called... Frédéric Mistral. With the latter he founded the Félibrige, an association over which he presided from 1888 until his death. He was the principal author of the spelling and orthographic rules with which Provençal is still written today, whose modernity and great qualities need no further proof...": extr. Pascal (Rc.), "Saint-Rémy-de-Provence: 24 mai - Centenaire de la mort de Joseph Roumanille", Petites affiches de Marseille, 3 July 1991 [A.P. Baquier]. - Jehan Froissart (1333-c. 1400), born at Valenciennes in Hainaut, author of Chroniques of the history of his time, composed notably on the occasion of his travels, in particular in 1388 during his stay in Béarn at the court of Gaston Phébus. - Mathieu (Anselme) (1829-1895), born and died at Châteauneuf-du-Pape. A schoolfellow and friend of Mistral, he was in 1854 one of the seven founders of the Félibrige at the memorable gathering of the château de Font-Ségugne. His collection La Farandola ["The Farandole"] (1862) represents the essential part of his work. The large place he gave there to love and the joy of living has caused him to be compared to Catullus. He also published poems and tales in the Armana prouvençau. - Camille Chabaneau (1831-1908), born and died at Nontron (Dordogne). First an employee of the postal administration, he educated himself alone and published, in 1868, Histoire et théorie de la conjugaison française. In 1878 he was given a course on Romance language at the faculty of letters of Montpellier, and in 1886 he was named corresponding member of the Institut. Besides numerous articles in the Revue des langues romanes, we owe him: Grammaire limousine (1876); Biographie des troubadours en langue provençale (1885); La langue et la littérature du Limousin (1892); and numerous editions of old Provençal texts. - A.D. Lot-et-Garonne, 16 J 8, scholars' correspondence: Fonds Tamizey de Larroque. - Adolphe Magen (1818-1893), born at Agen, pharmacist, chemist, Inspector of pharmacies of Lot-et-Garonne, correspondent of the Ministry of Public Instruction, perpetual secretary of the Société académique d'Agen, director of the Revue de l'Agenais. His works and publications are numerous and varied, touching on his scientific speciality, on history and on literature. An almost exhaustive survey of them is given in Andrieu (J.), op. cit., t. II, p. 97-102. - Tamizey underlines all that he owed to the advice of Ad.
Magen and to his encouragement when he was beginning his career as a researcher, as well as to the warmth and constancy of the friendship that united them, in the necrological booklet he devoted to him: Tamizey de Larroque (Ph.), Adolphe Magen (1818-1893), Impr. Vve Lamy, Agen, 1894. - A.D. Lot-et-Garonne, 16 J 1 and 16 J 18, scholars' correspondence: Fonds Tamizey de Larroque. - A little outside Gontaud, to the west, on the D 267; the domain of Larroque is attached to it: carte au 50 000e, Marmande-Agen, 56, I.G.N., Paris, 2003. - See 7 March 1890. See 7 July 1890. - The de Witts, Dutch merchants, originally of Dordrecht (northern Netherlands), Protestants known from the 16th century and made illustrious by Jean de Witt, grand pensionary of Holland (born in 1632 and died, massacred, in 1672). They came to France at the beginning of the 19th century. Cornelis de Witt, born in 1828, knew at school in Paris Taine and François-Jean-Guillaume Guizot (1815-1837), son of the President of the Council (see note 53), who also had a daughter, Pauline, whom Cornelis married. He was a royalist and died in 1889. His grandson, named Cornelis like him, married Madeleine de La Bruyère, of a very old Protestant and royalist family which owned the château de Peyreguilhot, near Tonneins, not far from Larroque. This Cornelis de Witt was elected royalist conseiller général of the canton of Castelmoron. In 1887 he settled at the château de Morin, beside Puch, near Tonneins, and from that date devoted his time to the weekly Le paysan du Sud-Ouest. The family was still present at Tonneins at the beginning of the 20th century in the person of Jean de Witt, who died in 1954, husband of Simone de Pourtalès, who died in 1977 [A.P. Baquier]. - For geographers, following the old terminology, this is the central part of the peninsula, the hottest and driest, as opposed to the "Arabia Felix" of the south-west coast. - Account of a visit to the Pavillon Peiresc by his son, aged 9, in [...]. See 13 October 1890. - Jean Racine (1639-1699), the famous dramatist, after noisy liaisons with actresses, notably the celebrated Mademoiselle du Parc, had married, on 1 June 1677, Catherine de Romanet, by whom he had seven children: two sons, Jean-Baptiste and Louis, and five daughters, three of whom became nuns. - Sevin (Pierre de), Trois lettres inédites... à Peyresc, Agen, 1884 (extract from the Revue de l'Agenais, XI, 1884). - See note 216. - An offshoot of the Revue historique, published since 1876. A Bulletin critique de littérature, d'histoire et de théologie appeared in Paris between 1881 and 1887, under the direction of Père Ingold of the Oratory. - Is this a confusion with Adolphe Magen? See 6 June 1891. - Henri de Lacaze-Duthiers (1821-1901), born at Montpezat (Lot-et-Garonne), of an old noble family of the Agenais. After medical studies he turned to zoology and took a doctorate in natural sciences. In 1854 he was appointed professor at the faculty of sciences of Lille and, in 1864, maître de conférences at the École normale supérieure. In 1865 he became professor at the Muséum d'histoire naturelle and, in 1868, professor at the Sorbonne.
In 1873 he founded the Archives de zoologie expérimentale, and two years later he created a laboratory of marine zoology at Roscoff, followed by another at Banyuls. In 1871 he had been elected a member of the Académie des sciences, over which he presided in 1893, and in 1886 he became a member of the Académie de médecine. 393. Poton de Xaintrailles, born around 1390 or 1400 (Xaintrailles lies near Nérac in the present department of Lot-et-Garonne), died at Bordeaux in 1461. A cadet of Gascony who distinguished himself among the companions of Joan of Arc: Carsalade du Pont (J.), Jehanne d'Arc et les capitaines gascons, Cocharaux, Auch, 1892. 394. See 13 October 1890. - Joseph Juste Scaliger (1540-1609), born at Agen and died at Leyden. Son of Jules-César Scaliger, Italian philologist and physician (1484-1558), who died at Agen, where he had settled after following there his patient Angelo della [...]. - Godefroy, comte d'Estrades (1607-1686), born at Agen, died in Paris. Of an ancient and illustrious family of the Agenais, gifted with a lively mind and remarkable physical qualities, he entered Richelieu's pages at fifteen and, four years later, obtained in Holland, under Maurice of Nassau, the most expert soldier of his time, a company in the regiment commanded by his uncle, Pierre de Secondat. He performed several brilliant actions in that campaign. The fortune of the comte d'Estrades was rapid and brilliant. His high abilities were appreciated by Richelieu, Mazarin and Louis XIV, whom he served in turn with fidelity and devotion. At twenty-five he was one of the handsomest horsemen of the kingdom; the Précieuses of the Hôtel de Rambouillet made much of him, and song and romance popularized him under the pseudonym of Théodat. Colonel at thirty, he was chosen by Richelieu in 1637 for a difficult mission, and Mazarin, to bind him more closely, made him captain of his guards. On 12 December 1643 he was one of the actors in a famous duel between Coligny (descendant of the admiral assassinated in 1572) and the duc de Guise: as Coligny's second, he fought the marquis de Bridieu, a valorous colossus, whom he defeated. Appointed governor of Dunkirk, he performed real prodigies of bravery in 1652 before that place, besieged by the archduke of Austria. During the troubles of the Fronde he was lieutenant-general for the king in Guyenne, took Bourg and Libourne, entered Bordeaux in September 1653 and became governor of Guyenne and perpetual mayor of Bordeaux on 10 October following. Already, from 1637 to 1647, he had carried out various missions of trust; but it was above all from 1661 that he was charged with great embassies, first to London, then to Holland (1668), etc. He served in the campaigns of the Dutch war from 1672 to 1675, was raised to the dignity of maréchal de France on 30 July 1675, and conducted with consummate skill the negotiations of the peace of Nijmegen. Towards the end of his life Godefroy d'Estrades, whom Louis XIV had also made conseiller d'État, knight of his Orders and even, in 1663, viceroy of America, was appointed governor of the duc de Chartres. He is one of the most attractive and remarkable personalities of the Grand Siècle. - See 9 March 1891. See 23 May 1891. - Ferdinand Donnet (1795-1882), born at Bourg-Argental (Loire), died at Bordeaux. He studied theology at the grand séminaire of Lyon. Between 1822 and 1827 he directed a society of missionary priests in the diocese of Tours.
When he was to have succeeded the bishop of Nancy, he was instead promoted archbishop of Bordeaux on 30 November 1836. He took advantage of the freedom of assembly under the Second Republic to organize a provincial council. A man of conciliation, he moreover approved the compromise represented by the loi Falloux. Having rallied ostensibly to Louis-Napoléon Bonaparte after the coup d'État, he was made cardinal by Pope Pius IX on 15 March 1852, before welcoming the Prince-President to Bordeaux in October of the same year. Within the Church he distrusted the excesses of Louis Veuillot's Univers [see note 249] and rather shared the views of Dupanloup, while remaining discreet, which spared him a quarrel with any party. His support for the Empire was unfailing in the following years, which explains how he was able to place a great number of priests in the episcopate. His good relations with the regime nevertheless grew somewhat strained over the Roman question. From February 1859, commenting on the pamphlet Napoléon III et l'Italie, he defended the temporal power of the pope and said he feared war against Austria. The Senate finally offered him a choice platform: there he protested, for example, in March 1860 against the annexation of the Romagna, then replied in March 1861 to the speech of Prince Napoléon on the Roman question, and did so again the following year against Bonjean. In the meantime he had organized, in October 1860, the work of the denier de Saint-Pierre, then went to Rome in 1862. The following year he also took a position in favour of the victims of the Polish insurrection. The Syllabus visibly embarrassed him, for it called into question the oath to the regime, which did not prevent him from protesting to the minister against the ban on publishing it. Finally, during the discussion of the address in the Senate in March 1865, he gave a relatively liberal interpretation of the pontifical documents. He was in fact a moderate ultramontane: he rejected the intransigent theses of Veuillot's school, but showed himself attached to the definition of the dogma of pontifical infallibility, which he [...]. - See 9 August 1889. See 8 August 1890. See 27 March 1892. - Jules-François-Stanislas Viette, born at Blamont (Doubs) in 1843. He fought the Empire in the republican newspapers of the East, notably in the Démocratie franc-comtoise, of which he was one of the founders. A captain of the Doubs mobilisés during the war of 1870, he was mentioned in the army's order of the day. Subsequently, on the public recommendation of Gambetta, he was elected deputy for the arrondissement of Montbéliard on 20 February 1876. In his profession of faith he called for a wisely progressive Republic, the reduction of military service, freedom of worship, and lay, free and compulsory schooling. He took his seat on the left and was one of the 363 who refused a vote of confidence in the ministry of 16 May. Re-elected in 1877 and 1881, he declared (February 1883), in the discussion of the Fabre bill on the expulsion of the pretenders, that there was no common law for princes; accordingly, in February 1886 he signed the Ballue-Duché expulsion proposal. He pronounced for the reform of the magistrature, for list voting, for maintaining the budget of public worship, and for Gambetta's politics of results. Re-elected in 1885, he accepted the portfolio of agriculture in the Tirard cabinet, then in the Floquet cabinet, keeping it until 1889. In that capacity he introduced several bills, notably concerning the reform of the forest administration.
In 1889 he pronounced notably for the prosecution of the three deputies who were members of the Ligue des patriotes and for the prosecution of general Boulanger. He was one of the high dignitaries of freemasonry. - In the department of the Doubs, near Montbéliard. - See 11 September 1891. See 31 October 1891. See 31 October 1891. See 6 September 1889. See 20 May 1892. See 27 September 1889. See 31 August 1892. See 2 January 1893. - Translated from the Latin: "to the most learned of men and the dearest of friends". - See 25 April 1891. See 11 September 1891. - Jules Andrieu, born in 1839, conducteur des Ponts et Chaussées, member of the Société des Sciences, Lettres et Arts d'Agen, correspondent of the Académie de Bordeaux, active contributor to the newspaper Sud-Ouest, author of studies on Jasmin (1881) and Théophile de Viau (1877) and of other aspects of Agen's cultural life under the Ancien Régime; he published above all the Bibliographie générale de l'Agenais (1886-1887), from which is extracted a Bibliographie Tamizeyenne (1862-1887), Agen, Impr. V. Lentéric, 1887, 22 p. On his work and premature death, see Lauzun (Ph.), op. cit., p. 261-263. - A.D. Lot-et-Garonne, 16 J 2 and 16 J 3, scholars' correspondence: Fonds Tamizey de Larroque. - La Revue félibréenne, a Franco-Provençal literary publication, published in Paris between 1885 and 1909. - See 6 October 1893. - November. The Saint Martin's summer, never so long or so warm (I have before my eyes a quite springlike bouquet of roses, violets and lilacs), has at last been watered, this night, by a very abundant rain which will probably last several days. We had not seen a drop of water fall for several weeks. On the point of entering the series of autumn rains, I wish to salute the extreme mildness and extreme beauty of the Saint Martin's summer we have just enjoyed. I do not remember ever having seen, in November, so agreeable a temperature and so [...] a sky. - ...in the 15th century, one of the curiosities of the pays de Serres, where it stands. 619. Below Larroque, along a very small watercourse called "La Croze" [439-4927]: Carte 1738 Est Seyches, série bleue 1:25 000, I.G.N., Paris, 1987. - Serin (P.), op. cit., p. 26. - [Having courted] Mlle de La Mothe-Houdancourt, he was exiled and took refuge in England, where he shone at the court of King Charles II. There he knew Saint-Évremond and Antoine Hamilton, whose sister he married and who wrote, before the death of the party concerned, the Mémoires du comte de Gramont. The latter approved them, and they were published in 1713. Antoine Hamilton was a Scot. Having come to France very young and having then followed the half-French court of Charles II, he attained a perfect mastery of the language, which makes the Mémoires de Gramont a most pleasing work. It shows Gramont a valiant soldier, a seductive and fickle philanderer, an intrepid and skilful gambler. His adventures at Turin with Mlle de Saint-Germain, then Mme de Sénantes, cast him as a gay, supple and unscrupulous Don Juan. His contest with the great king for the favours of Mlle de La Mothe-Houdancourt, and his disgrace, are the pretext for lively and witty portraits of great figures such as Richelieu, Cromwell or Mazarin: Hamilton (Antoine), Mémoires du chevalier de Gramont, Julliard, 1965. - Paul Arbaud (1832-1911), son of a rich landowner of Manosque, settled at Aix-en-Provence.
From 1854 he began to collect books, manuscripts and pamphlets relating to Provence or written in the Provençal language, together with portraits, engravings and prints, curios, objets d'art and paintings of interest to his province. He thus assembled remarkable bibliographic series, which won the highest award at the national book exhibition of 1895, and objects of inestimable value. In 1884 he had his hôtel in the rue du Quatre-Septembre at Aix fitted out and decorated by the painters Denis and Audibert, the sculptor Blanqui and the ornamentalist André-Joseph Allar; he had decided to install his collections there, bequeathing the whole by will to the Académie d'Aix, of which he had been a member since 1883 and which, at his death, transformed the hôtel Arbaud into the musée Arbaud. Paul Arbaud, who belonged to numerous learned societies, contributed to the Almanach du sonnet and gave articles to the newspapers under the pseudonym Alfred de Lohéac. He also published in the Bulletin des bibliophiles several studies issued separately, among which should be mentioned Les prédictions perpétuelles de Nostradamus, Marseille, 1855, and above all Peiresc bibliophile, Aix, 1871. - See 14 February 1893. - He signed Les Reclus de Toulouse sous la Terreur, registres officiels concernant les citoyens emprisonnés comme suspects, published and annotated by the baron de Bouglon, issued in several editions between 1893 and 1912, Privat, Toulouse. On the Bouglon family, known in Guyenne since the 12th century and descended from the captals de La Tresne: Meller [...]. - See 29 March 1893. See 21 September, 6 October and 14 November 1893 notably. See 22 February 1893. - Rheinhold Dezeimeris (1835-1913): Pouvereau (Norbert), Reinhold Dezeimeris, érudit loupiacais, essai biographique, Association Saint-Blaise [...]. - See 22 June 1894. - Jean-Ferdinand Denis, born in 1798. He was destined by his father for a diplomatic career; the study of languages and a taste for travel turned him from it, and he left in 1816 for America. On his return, while preparing various geographical, historical or literary works, he planned another journey to the Orient, whose idioms he had also studied; he restricted his excursion to Spain and Portugal. Appointed first, in 1838, librarian at the Ministry of Public Instruction, he was later attached (1841), as conservateur, to the Bibliothèque Sainte-Geneviève, to which he had already been attached from 1833 to 1838 and of which he became administrator in March 1865. Decorated with the Legion of Honour in March 1839, F. Denis also received various decorations from Portugal and Brazil, as well as the title of member of the Academy of Sciences of Lisbon. He published numerous works, most of them the fruit of his excursions (Paraguay, Brazil, Guiana, Portugal), as well as moral or instructive novels likewise steeped in exoticism. He is also the author of notices for various encyclopaedias, notably the Nouvelle Biographie Générale, and he signed, with de Martonne and Pinçon, the Nouveau manuel de la bibliographie universelle, 1857, gr. in-8, small text in 3 columns, among other collaborations. 820. This is in fact Louis-George-Alfred de Martonne, born at Le Havre in 1820, son of an archaeologist magistrate retired in 1849. A former student of the École des chartes, he was in 1848 and 1849 professor of history at Draguignan; he then became editor of the Journal de la Haute-Saône and of the Journal de Saint-Quentin. He was appointed archivist of Loir-et-Cher in 1854.
He became correspondent of the Ministry of Public Instruction for historical works, associate-correspondent of the Antiquaires de France and a member of numerous learned societies. Among his many publications: an Examen de l'histoire de la littérature française de M. Nisard, 1848, in-18, and the work mentioned in the preceding note. 821. Sir Frederick Madden, born at Portsmouth in 1801, seventh son of a captain of marine infantry. On the recommendation of Roscoe, whom he had assisted in drawing up a catalogue of manuscripts, he entered the library of the British Museum in 1826 to work on the classification of the printed books. He became, in 1828, assistant keeper in the department of [...]. - See 29 August 1890. - Gabriel Azaïs, Dictionnaire des idiomes languedociens, étymologique [...]. - See note 690 and note 699. - Michault (Jean-Bernard) [pseud. J.-P. Gilkin Mureau de Cherval] is the author of Mélanges historiques et philologiques, with notes, N.M. [...]. - Patin (Guy) (1601-1672), a physician reputed mediocre and above all an enemy [...]. - On these three figures, Paul Ristelhuber, Carlos Sommervogel and Rodolphe Reuss: see 25 July 1895. - See notably 4 November 1891. - The first publication of comte Édouard de Dienne is entitled Les capitaines saintongeais au XVIe siècle. Jacques de Rabar, Impr. N. Texier, La Rochelle, 1879, 7 p. From 1886 he interested himself in the draining of marshes and lakes; a synthesis of his work appeared under the title Histoire du dessèchement des lacs et marais en France avant 1789, H. Champion, Paris, 1891, 590 p. He also signed a series of articles of local history, bearing on the beginning of the modern period, chiefly in the Revue de l'Agenais, and published in the Revue de Haute-Auvergne, between 1899 and 1906, historical articles on Aurillac, Vic and the vicomté of Carlat. Finally and above all, he is the author of the Bibliographie des hommages rendus à la mémoire de Ph. Tamizey de Larroque, correspondant de l'Institut, précédée de notes intimes, Impr. et lithographie agenaises, Agen, 1901, 65 p. See the Catalogue des imprimés de la Bibliothèque Nationale. Édouard de Dienne had married the sister of Louis de Dordaygue [A.P. Baquier]. Necrological notice in Revue de l'Agenais, 1920, p. 115. - A.D. Lot-et-Garonne, 16 J 12, scholars' correspondence: Fonds Tamizey de Larroque. - See 25 April 1891 and 10 October 1893. - See, in the annex, the provisional Bibliography of Philippe Tamizey de Larroque. - Jean-Eugène Bimbenet, chief clerk of the imperial court of Orléans, born in that town in 1801, conservateur of the municipal library, founding member of the archaeological society of that town, made himself known by a number of publications: Relation fidèle de la fuite du roi Louis XVI et de sa famille à Varennes, extraite des pièces judiciaires et administratives produites devant la haute cour nationale alors établie à Orléans et déposées au greffe (1844); Monographie de l'hôtel de la mairie d'Orléans (1851, recast edition in 1855); Histoire de l'Université de lois d'Orléans (1853), etc. He had contributed to the Revue orléanaise "Recherches sur les inondations de la Loire" (1847); to the Mémoires de la Société des antiquaires de Picardie, a "Mémoire sur les écoliers de la nation picarde, à l'Université d'Orléans" (1850); to the Revue critique de législation, "Recherches sur l'état de la femme, l'institution du mariage et le régime nuptial" (1855-1856); and Les Essais de Montaigne dans leurs rapports avec la législation moderne (1864), etc.
...as well as other works remaining in manuscript: Recherches sur la fondation de la bibliothèque publique d'Orléans; Rangement méthodique et chronologique des archives judiciaires de la province de l'Orléanais; and Jurisprudence de la Cour impériale d'Orléans, table analytique de ses arrêts, depuis l'an VIII. - Jules Loiseleur, born at Orléans in 1816, had become the librarian of that town. A member of the municipal council of Orléans, it was on his initiative that the equestrian statue of Joan of Arc by Foyatier was erected on the principal public square of the town. J. Loiseleur was named correspondent of the Ministry of Public Instruction for historical works. Decorated with the Legion of Honour in May 1868, he is the author of several works of history and erudition: Résidences royales de la Loire (1863), Les Crimes et les Peines dans l'Antiquité [...]. - See 12 October 1893. See 3 February 1892. See 8 April 1893. - ...50 francs. Two bibliophiles as rich as they are amiable, M. Paul Arbaud and M. A. de Naurois, gave me 100 francs each. My cousin the baron de Bouglon sent me a money order for [...]. - Vicomte Caix de Saint-Aymour; M. Georges Tholin; M. Frank Andrieu; MM. Delhomme and Briguet, bookseller-publishers at Lyon; M. Delagrave, bookseller-publisher in Paris; M. Beauvois (E.); M. Gigas (Émile), of Copenhagen; M. Halphen (Eugène); M. F. de Mély; Mme Le Borgne (of Vannes); the marquis de Surgères; M. Edmond Bonnaffé. - ...books, some published by him, others graciously given by him. All this will make me regret our noble Alsace still more bitterly. And, by a charming coincidence, on the dreary terrace where I reread this note, a tuberose is at this moment in bloom, whose white brilliance and sweet scent are for me a double enchantment. - October. The comte de Dienne came to bring me his condolences. That amiable man spent three hours with us, and those three hours, thanks to the charm of his conversation, seemed to us [...]. 969. Tamizey de Larroque (Philippe), Deux jardiniers émérites, Peiresc et Vespasien Robin, J. Remondet, Aix-en-Provence, 1896.
00411825
en
[ "math.math-ap" ]
2024/03/04 16:41:26
2009
https://inria.hal.science/inria-00411825/file/RT-0369.pdf
Keywords: wave propagation, mass lumping, numerical scheme, network, tree, Kirchhoff

Numerical solution of the wave equation on a network of slots (Résolution numérique de l'équation des ondes sur un réseau de fentes)

Abstract (translated from the French résumé): In this technical report, we present a theoretical and numerical model to simulate wave propagation in finite networks of slots, with classical and improved Kirchhoff conditions at the nodes of the networks. We first describe the continuous framework, then discretize the problem using the mass-lumped finite element technique introduced by G. Cohen and P. Joly (see [3]). Finally, we present an implementation of the numerical scheme in a C++ code written in collaboration with K. Boxberger, together with some results and error estimates.

Introduction

In this technical report, we extend the results on the propagation of acoustic waves in a general network of thin slots. The study of waves on networks is not new: see for example the works of B. Dekoninck and S. Nicaise [5] and, more recently, those of B. Maury, D. Salort and C. Vannier [Maury, Salort & Vannier, Trace theorems for trees, application to the human lungs]. It is also fairly easy to run numerical simulations in particular classes of networks (see for example the work of Y. Achdou, C. Sabot and N. Tchou [1]). At present, however, no efficient numerical code is available that solves the wave propagation problem on a general network. To address this, a subject was proposed for an internship during the second trimester of 2009. During this internship, supervised by P. Joly and A. Semin, K. Boxberger studied the propagation of acoustic waves in "semi-homogeneous" networks, i.e. networks whose geometrical and physical parameters are constant on each edge but not on the whole network. The approach used there was a finite difference method; it produced a good numerical scheme, but one order of accuracy is lost when dealing with the boundary conditions. The idea is therefore to use a finite element approach. One could object that a finite element method is numerically slower than a finite difference method; this is why we use the mass-lumping method. The outline of this report is the following. In section 1, we present the continuous problem we want to solve. In section 2, we present the associated discrete problem: first the space discretization, then the time discretization. In section 3, we present an implementation in the numerical code netwaves (whose writing took place during the previously cited internship; the code will soon be available at http://gforge.inria.fr/projects/netwaves/). In section 4, we give error estimates for the problems introduced in sections 1 and 2. Finally, in section 5, we deal with improved Kirchhoff conditions.

1 Setting of the continuous problem

1.1 Geometry of the problem

In this part, we recall some notations about the geometry we consider, in the spirit of [Maury, Salort & Vannier], but taking the geometry in $\mathbb{R}^d$, with $d = 2$ or $d = 3$. The reader can find more information in [2] or in [Joly & Semin, Construction and Analysis of Improved Kirchhoff Conditions for Acoustic Wave Propagation in a Junction of Thin Slots] about the choice of the conductances.
Let $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mu)$ denote a graph: $\mathcal{V}$ is a finite set of vertices in $\mathbb{R}^d$; $\mathcal{E}$ is a subset of $\mathcal{V} \times \mathcal{V}$ such that, for $((v_a, v_b), (v_c, v_d)) \in \mathcal{E} \times \mathcal{E}$ with $(v_a, v_b) \neq (v_c, v_d)$, the open segments $(v_a v_b)$ and $(v_c v_d)$ do not intersect in $\mathbb{R}^d$ and the intersection of the closed segments $[v_a v_b]$ and $[v_c v_d]$ is a (possibly empty) subset of $\mathcal{V}$; and $\mu \in (0, +\infty)^{\mathcal{E}}$ is a conductance field. The condition we put on $\mathcal{E}$ means that each edge is counted only once, that is to say $(v_a, v_b) \in \mathcal{E} \Rightarrow (v_b, v_a) \notin \mathcal{E}$. Examples of such sets are given by figure 1.1.

We now introduce some definitions to describe more precisely the structure of our network.

- Edges connected to a vertex. Given $v \in \mathcal{V}$, the set $\mathcal{E}(v)$ of edges connected to $v$ is the subset of $\mathcal{E}$ of elements having $v$ as one of their extremities: $\mathcal{E}(v) := \{(v_a, v_b) \in \mathcal{E} \mid v_a = v \text{ or } v_b = v\}$.
- Inner and outer vertices. A vertex $v \in \mathcal{V}$ is called an outer vertex if the number of edges connected to $v$ is equal to 1; otherwise, it is called an inner vertex. We call $\mathcal{V}_i$ the set of inner vertices and $\mathcal{V}_o$ the set of outer vertices: $\mathcal{V}_i = \{v \in \mathcal{V} \mid \#\mathcal{E}(v) \geqslant 2\}$ and $\mathcal{V}_o = \{v \in \mathcal{V} \mid \#\mathcal{E}(v) = 1\}$.

Remark. In this definition, one assumes that there is no non-connected vertex (i.e. a vertex satisfying $\#\mathcal{E}(v) = 0$).

To define the wave equation on this kind of structure, we consider a parametrization of each edge of the graph, and we define precisely the equation on each edge $e \in \mathcal{E}$ and the conditions at each vertex $v \in \mathcal{V}$. Each edge $e = (v_a, v_b) \in \mathcal{E}$, of length $l_e$, is parametrized by its curvilinear abscissa through $x_e : [0, l_e] \to \mathbb{R}^d$.

Definition 1.3 (Sobolev spaces on an edge). Given $k \in \mathbb{N}$ and $e \in \mathcal{E}$, a function $u$ defined on $e$ belongs to the Sobolev space $\mathrm{H}^k(e)$ if and only if $u \circ x_e$ belongs to $\mathrm{H}^k(]0, l_e[)$.

Definition 1.4 (Discontinuous Sobolev spaces on a graph). Let $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mu)$ be a graph and $k \in \mathbb{N}$. A function $u$ defined on $\mathcal{G}$ belongs to the discontinuous Sobolev space $\mathrm{H}^k_d(\mathcal{G})$ if and only if the restriction of $u$ to each edge $e \in \mathcal{E}$ belongs to $\mathrm{H}^k(e)$.

1.2 Continuous problem

We want to solve the following problem: find $u \in \mathrm{C}^0(\mathbb{R}_+, \mathrm{H}^1(\mathcal{G})) \cap \mathrm{C}^1(\mathbb{R}_+, \mathrm{H}^0(\mathcal{G}))$ such that

$$ \begin{cases} \dfrac{\partial^2 u}{\partial t^2} - \Delta u = 0 & \text{on } e,\ \forall e \in \mathcal{E},\ \forall t \in \mathbb{R}_+^*, \\[4pt] \displaystyle\sum_{e \in \mathcal{E}(v)} \mu(e)\, \dfrac{\partial u}{\partial n_{e,v}} = 0 & \text{for each } v \in \mathcal{V}_i,\ \forall t \in \mathbb{R}_+^*, \\[4pt] u = u_0 & \text{for } t = 0,\ \forall e \in \mathcal{E}, \\[2pt] \dfrac{\partial u}{\partial t} = u_1 & \text{for } t = 0,\ \forall e \in \mathcal{E}, \end{cases} \tag{1.1} $$

where:
- $\Delta u$ means the second derivative of $u$ with respect to the curvilinear abscissa, i.e.
$$ \Delta u = \frac{\partial^2 (u \circ x_e)}{\partial s^2} \circ x_e^{-1}, \tag{1.2} $$
- $\dfrac{\partial u}{\partial n_{e,v}}$ means the inner derivative of $u$ with respect to the curvilinear abscissa on an edge $e \in \mathcal{E}(v)$, for $v \in \mathcal{V}$. To be more precise, for $v \in \mathcal{V}$ and $e \in \mathcal{E}(v)$, we define
$$ \frac{\partial u}{\partial n_{e,v}}(v) = \begin{cases} \dfrac{\partial (u \circ x_e)}{\partial s}\big(x_e^{-1}(v)\big) & \text{if } x_e^{-1}(v) = 0, \\[4pt] -\dfrac{\partial (u \circ x_e)}{\partial s}\big(x_e^{-1}(v)\big) & \text{otherwise.} \end{cases} $$

Remark. The sign of this quantity does not depend on the direction of the parametrization $x_e$.

Of course, one can define the outer derivative of $u$ on an edge $e \in \mathcal{E}(v)$, for $v \in \mathcal{V}$, as the opposite of the inner derivative: $\dfrac{\partial u}{\partial \nu_{e,v}}(v) = -\dfrac{\partial u}{\partial n_{e,v}}(v)$. The Cauchy data $u_0 \in \mathrm{H}^1(\mathcal{E})$ and $u_1 \in \mathrm{H}^1(\mathcal{E})$ vanish near any vertex. The first line of this system is the "classical" wave equation (thanks to (1.2), this line becomes the classical 1D wave equation on $\mathbb{R}_+^* \times (0, l_e)$). The second line, together with the fact that $u$ is continuous at the nodes, forms the so-called Kirchhoff conditions, named in this way by analogy with the conditions Kirchhoff wrote in 1906 for electrical networks. We will see in section 5 what changes if we use another type of conditions.
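As a quick sanity check of the Kirchhoff condition (a worked special case, not taken from the report), consider an inner vertex $v$ joining exactly two edges $e_1, e_2$ with equal conductances $\mu$; the junction then reduces to an ordinary interior point of a single slot:

```latex
% Kirchhoff at a two-edge vertex with mu(e1) = mu(e2) = mu:
%   mu * du/dn_{e1,v}(v) + mu * du/dn_{e2,v}(v) = 0.
% Both inner derivatives point "into" their own edge, so along a common
% curvilinear abscissa running through v they carry opposite signs.
% Together with the continuity of u at v, the condition says that u and
% du/ds are both continuous across v: the two slots behave like a single
% slot of length l_{e1} + l_{e2}.
\[
  \mu\,\frac{\partial u}{\partial n_{e_1,v}}(v)
  + \mu\,\frac{\partial u}{\partial n_{e_2,v}}(v) = 0
  \quad\Longleftrightarrow\quad
  \frac{\partial (u \circ x_{e_1})}{\partial s}\bigg|_{v}
  = \frac{\partial (u \circ x_{e_2})}{\partial s}\bigg|_{v},
\]
```

where the equivalence assumes the two parametrizations are oriented consistently through $v$ (one ending at $v$, the other starting there).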
To complete the system (1.1), one has to add boundary conditions on the outer vertices:

- (In)homogeneous Dirichlet condition: on a given $v_0 \in \mathcal{V}_o$, one imposes $u(t, v_0) = f_{v_0}(t)$, where $f_{v_0} \in \mathrm{C}^1(\mathbb{R}_+^*)$.
- (In)homogeneous Neumann condition: on a given $v_0 \in \mathcal{V}_o$, one imposes $\dfrac{\partial u}{\partial n}(t, v_0) = g_{v_0}(t)$, where $g_{v_0} \in \mathrm{C}^0(\mathbb{R}_+^*)$.

  Remark. We write $\partial u / \partial n$ instead of $\partial u / \partial n_{e,v}$: on outer vertices there is, by definition, only one connected edge, so there is no ambiguity about $e$. We also omit $v$ for convenience (the convention for normal derivatives is to consider outer normal derivatives).
- Outgoing condition, a particular case of the Sommerfeld radiation condition: on a given $v_0 \in \mathcal{V}_o$, one imposes $\dfrac{\partial u}{\partial t}(t, v_0) + \dfrac{\partial u}{\partial n}(t, v_0) = 0$.

We call $\mathcal{V}_{o,D}$, $\mathcal{V}_{o,N}$ and $\mathcal{V}_{o,O}$ respectively the sets of outer vertices with Dirichlet, Neumann and outgoing conditions. With these conditions, multiplying the first line of (1.1) by the product of $\mu(e)$ and a test function $w \in \mathrm{H}^1(\mathcal{G})$ which vanishes on each vertex $v \in \mathcal{V}_{o,D}$, integrating by parts the term $(-\Delta u)\, w$, and writing $\nabla u = \frac{\partial(u \circ x_e)}{\partial s} \circ x_e^{-1}$, one gets, by summing the integrals over all edges:
$$ \sum_{e \in \mathcal{E}} \int_e \mu(e) \left( \frac{\partial^2 u}{\partial t^2}(t, \cdot)\, w + \nabla u(t, \cdot)\, \nabla w \right) + \sum_{v \in \mathcal{V}_{o,N}} \mu(v)\, g_v(t)\, w(v) + \sum_{v \in \mathcal{V}_{o,O}} \mu(v)\, \frac{\partial u}{\partial t}(t, v)\, w(v) = 0. \tag{1.3} $$

Remark. The notation $\mu(v)$ is abusive; we should rather write $\mu(e(v))$, where $e(v)$ is the sole edge connected to $v$.

2 Discretization of the continuous problem

2.1 Discretization in space

Once the continuous problem is written in its variational form (1.3), one can discretize it using $\mathbb{P}^k$ finite elements with mass lumping. Besides allowing lighter storage and improved computation time, mass lumping also allows us to uncouple the computation on open edges from the computation at vertices. In the following we take $k = 1$. On a given edge $e \in \mathcal{E}$, we introduce a local space step $\Delta x_e$ such that $l_e / \Delta x_e$ is a (large) positive integer, denoted $N_e$. We then introduce, for $0 \leqslant i \leqslant N_e$, the points $s_{e,i} = x_e(i\, \Delta x_e)$. Note that some couples $(e, i)$ give the same point of $\mathbb{R}^d$: these points correspond to the inner vertices of the graph. Once the discretization points are introduced, we define on each edge $e \in \mathcal{E}$ the $\mathbb{P}^1$ basis functions $(\Phi_{e,i})_{0 \leqslant i \leqslant N_e}$ such that:
- for any $i \in \{1, \ldots, N_e\}$, $\Phi_{e,i} \circ x_e$ is affine on the open segment $\big(x_e^{-1}(s_{e,i-1}),\, x_e^{-1}(s_{e,i})\big)$,
- for any $i, j \in \{0, \ldots, N_e\}$, $\Phi_{e,i}(s_{e,j}) = \delta_{ij}$, where $\delta$ is the Kronecker symbol.

We approximate $u$ and $w$ by writing, on each edge $e \in \mathcal{E}$, $u|_e(t, \cdot) = \sum_{i=0}^{N_e} U_{e,i}(t)\, \Phi_{e,i}$ and $w|_e = \sum_{i=0}^{N_e} W_{e,i}\, \Phi_{e,i}$, which can be rewritten in condensed form, denoting $X_e = (X_{e,0}, \ldots, X_{e,N_e})^T$ for $X \in \{U, W, \Phi\}$:
$$ u(t) = \sum_{e \in \mathcal{E}} \Phi_e^T\, U_e(t), \qquad w = \sum_{e \in \mathcal{E}} \Phi_e^T\, W_e. \tag{2.1} $$

Plugging the vectorial approximations (2.1) into the variational formulation (1.3), two natural sets of matrices appear:
- the rigidity matrices $(K_e)_{e \in \mathcal{E}}$, given by
$$ K_e = \int_e \nabla \Phi_e\, \nabla \Phi_e^T = \frac{1}{\Delta x_e} \begin{pmatrix} 1 & -1 & & & \\ -1 & 2 & \ddots & & \\ & \ddots & \ddots & \ddots & \\ & & \ddots & 2 & -1 \\ & & & -1 & 1 \end{pmatrix}, \tag{2.2} $$
- the mass matrices $(M_e)_{e \in \mathcal{E}}$, given (with the standard $\mathbb{P}^1$ entries) by
$$ M_e = \int_e \Phi_e\, \Phi_e^T = \frac{\Delta x_e}{6} \begin{pmatrix} 2 & 1 & & & \\ 1 & 4 & \ddots & & \\ & \ddots & \ddots & \ddots & \\ & & \ddots & 4 & 1 \\ & & & 1 & 2 \end{pmatrix}. \tag{2.3} $$

Instead of using the mass matrices (2.3), we use the mass-lumping technique (we give it here only at order 1; the reader may see [3] for mass-lumped matrices for any $\mathbb{P}^k$). We set
$$ M_e^{\ell} = \Delta x_e \begin{pmatrix} \tfrac{1}{2} & & & & \\ & 1 & & & \\ & & \ddots & & \\ & & & 1 & \\ & & & & \tfrac{1}{2} \end{pmatrix}. \tag{2.4} $$
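Concretely, the lumped matrix (2.4) is diagonal, so applying its inverse is a pointwise division. A minimal C++ sketch of its diagonal (illustrative names, not the netwaves API):

```cpp
#include <vector>

// Diagonal of the P1 lumped mass matrix (2.4) on an edge with Ne + 1
// points and step dxe: dxe * diag(1/2, 1, ..., 1, 1/2).
std::vector<double> lumped_mass_diagonal(int Ne, double dxe)
{
    std::vector<double> d(Ne + 1, dxe);
    d.front() *= 0.5;
    d.back()  *= 0.5;
    return d;
}
```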
Moreover, for any point $v \in \mathcal{V}$, we introduce:
- $e(v)$, the sole element belonging to $\mathcal{E}(v)$,
- $B_{e,v}$, the $(N_e + 1)$-square matrix which is identically zero, except for one coefficient (equal to 1): the coefficient on the first line and first column if $x_e^{-1}(v) = 0$, and the coefficient on the last line and last column otherwise.

And finally, for any point $v \in \mathcal{V}$ (yes, even for inner vertices) and any $e \in \mathcal{E}(v)$, we introduce the two following index functions:
- $0(e, v) = 0$ if $x_e^{-1}(v) = 0$, and $0(e, v) = N_e$ otherwise (the index of the discretization point at the vertex),
- $1(e, v) = 1$ if $x_e^{-1}(v) = 0$, and $1(e, v) = N_e - 1$ otherwise (the index of the neighbouring point).

Remark. For these two functions, we omit the index $e$ when $v \in \mathcal{V}_o$.

2.2 Discretization in time

For the time discretization, we introduce a global time step $\Delta t$, and we denote by $U^n$ the approximation of $U(t)$ at time $t = n \Delta t$. We discretize the first-order time derivative with a centered scheme:
$$ \frac{\partial U}{\partial t}(n \Delta t) \;\Rightarrow\; \frac{U^{n+1} - U^{n-1}}{2 \Delta t}, \tag{2.5} $$
and the second-order time derivative with the classical scheme:
$$ \frac{\partial^2 U}{\partial t^2}(n \Delta t) \;\Rightarrow\; \frac{U^{n+1} - 2U^n + U^{n-1}}{\Delta t^2}. \tag{2.6} $$

2.3 Writing the discrete problem and its resolution

Finally, using the notations and matrices of section 2.1 together with the time discretizations (2.5) and (2.6) leads to the numerical scheme:
$$ \sum_{e \in \mathcal{E}} \mu(e)\, W_e^T \left( M_e^{\ell}\, \frac{U_e^{n+1} - 2U_e^n + U_e^{n-1}}{\Delta t^2} + K_e\, U_e^n \right) + \sum_{v \in \mathcal{V}_{o,N}} \mu(v)\, g_v(n \Delta t)\, W(v) + \sum_{v \in \mathcal{V}_{o,O}} \mu(v)\, \frac{U^{n+1}(v) - U^{n-1}(v)}{2 \Delta t}\, W(v) = 0, \tag{2.7} $$
under the two following constraints:
- continuity of the solution at the inner vertices: for any $v \in \mathcal{V}_i$ and any $e, e' \in \mathcal{E}(v)$,
$$ U^{n+1}_{e,\, 0(e,v)} = U^{n+1}_{e',\, 0(e',v)}, \tag{2.8} $$
- Dirichlet condition on some outer vertices: for any $v \in \mathcal{V}_{o,D}$,
$$ U^{n+1}_{e(v),\, 0(v)} = f_v\big((n+1) \Delta t\big). \tag{2.9} $$

2.3.1 Computation at initial time

One has to compute the solution at iterations $n = 0$ and $n = 1$ (corresponding respectively to times $t = 0$ and $t = \Delta t$). To be more precise about the "near node vanishing" Cauchy data of the continuous problem (1.1), we suppose:

Hypothesis 2.1. For any $v \in \mathcal{V}$, any $e \in \mathcal{E}(v)$ and any $\varphi \in \{u_0, u_1\}$, one has
$$ \varphi_{e,\, 0(e,v)} = \varphi_{e,\, 1(e,v)} = 0. \tag{2.10} $$

In the following, we treat separately the contribution from the Cauchy data and the contribution from the boundary conditions.

Contribution from the Cauchy data. For $U^0$, we simply write $U^0_e = (u_0)_e$ for each edge $e \in \mathcal{E}$. For $U^1$, one starts from the d'Alembert formula for the solution of the 1D time-wave equation:
$$ u(t, x) = \frac{1}{2} \big( u_0(x - t) + u_0(x + t) \big) + \frac{1}{2} \int_{x-t}^{x+t} u_1(s)\, \mathrm{d}s. \tag{2.11} $$
We can use this formula at any inner discretization point of each edge; one sees easily that $U^1(v) = 0$ for any $v \in \mathcal{V}$. Using our interpolation formula in (2.11), one gets (assuming $\Delta t \leqslant \Delta x_e$ for any edge $e$):
$$ \forall e \in \mathcal{E},\ \forall\, 1 \leqslant i \leqslant N_e - 1,\quad U^1_{e,i} = \frac{\Delta x_e - \Delta t}{\Delta x_e}\, u_{0,i} + \frac{\Delta t}{2 \Delta x_e} \big( u_{0,i-1} + u_{0,i+1} \big) + \frac{\Delta t}{2 \Delta x_e} \left( \frac{\Delta t}{2}\, u_{1,i-1} + \big( 2 \Delta x_e - \Delta t \big)\, u_{1,i} + \frac{\Delta t}{2}\, u_{1,i+1} \right). \tag{2.12} $$

Contribution from the boundary conditions. For $U^0$, we simply take the Dirichlet part into account: for any $v \in \mathcal{V}_{o,D}$, $U^0_{e(v),\, 0(v)} = f_v(0)$. For $U^1$, we have a little more work to do; there are three parts to consider:
- Dirichlet part: for any $v \in \mathcal{V}_{o,D}$, $U^1_{e(v),\, 0(v)} = f_v(\Delta t)$;
- propagation of the Dirichlet part: using (2.7) for $n = 0$ (assuming $U^{-1}$ is identically zero near the vertices) gives, for any $v \in \mathcal{V}_{o,D}$, $U^1_{e(v),\, 1(v)} = \dfrac{\Delta t^2}{\Delta x_{e(v)}^2}\, f_v(0)$;

  Remark. Saying that $U^{-1}$ is identically zero is valid if and only if the functions $f_v$ and $g_v$ are supported in $\mathbb{R}_+$ and the Cauchy data vanish near the vertices.
- Neumann part: still using (2.7) for $n = 0$, one gets, for any $v \in \mathcal{V}_{o,N}$, $U^1_{e(v),\, 0(v)} = -\dfrac{2 \Delta t^2}{\Delta x_{e(v)}}\, g_v(0)$.

2.3.2 Computation over time

For the computation at the other times, one simply starts from the numerical scheme (2.7). Knowing $U^{n-1}$ and $U^n$, it is very simple to compute $U^{n+1}$: since each matrix $M_e^{\ell}$ is diagonal, each discretization point can be computed independently of the others.
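The three cases (edge interior, inner vertices, outer vertices) are detailed in the next paragraphs. As a preview, a minimal C++ sketch gathering the first two of them reads as follows; the names (EdgeData, step_interior, ...) are illustrative and do not match the netwaves code:

```cpp
#include <cstddef>
#include <vector>

// Per-edge data for one explicit leapfrog step.
struct EdgeData {
    double dx;                           // space step on this edge
    double mu;                           // conductance mu(e)
    std::vector<double> Unm1, Un, Unp1;  // U^{n-1}, U^n, U^{n+1}
};

// Interior points: (U^{n+1}-2U^n+U^{n-1})/dt^2 = (U_{i+1}-2U_i+U_{i-1})/dx^2.
void step_interior(EdgeData& e, double dt)
{
    const double c = (dt * dt) / (e.dx * e.dx);
    for (std::size_t i = 1; i + 1 < e.Un.size(); ++i)
        e.Unp1[i] = 2.0 * e.Un[i] - e.Unm1[i]
                  + c * (e.Un[i + 1] - 2.0 * e.Un[i] + e.Un[i - 1]);
}

// Inner vertex, formula (2.14): neighbour[k] holds U^n at the discretization
// point next to the vertex on the k-th connected edge.
double step_inner_vertex(double Uv_n, double Uv_nm1, double dt,
                         const std::vector<const EdgeData*>& edges,
                         const std::vector<double>& neighbour)
{
    double num = 0.0, den = 0.0;
    for (std::size_t k = 0; k < edges.size(); ++k) {
        num += edges[k]->mu / edges[k]->dx * (neighbour[k] - Uv_n);
        den += edges[k]->mu * edges[k]->dx;
    }
    return 2.0 * Uv_n - Uv_nm1 + 2.0 * dt * dt * num / den;
}
```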
Computation of the inner discretization points of the edges. One simply has
$$ \forall e \in \mathcal{E},\ \forall\, 1 \leqslant i \leqslant N_e - 1,\quad \frac{U^{n+1}_{e,i} - 2U^n_{e,i} + U^{n-1}_{e,i}}{\Delta t^2} - \frac{U^n_{e,i+1} - 2U^n_{e,i} + U^n_{e,i-1}}{\Delta x_e^2} = 0. $$

Computation of the inner vertices. Given $v_0 \in \mathcal{V}_i$, and because of condition (2.8), we denote by $U^{n+1}_{v_0}$ the common value of $U^{n+1}$ at vertex $v_0$. Using the numerical scheme (2.7) and extracting the parts corresponding to this vertex, one has
$$ \sum_{e \in \mathcal{E}(v_0)} \left[ \frac{\mu(e)\, \Delta x_e}{2 \Delta t^2} \big( U^{n+1}_{v_0} - 2U^n_{v_0} + U^{n-1}_{v_0} \big) + \frac{\mu(e)}{\Delta x_e} \big( U^n_{v_0} - U^n_{e,\, 1(e,v_0)} \big) \right] = 0, \tag{2.13} $$
which gives
$$ U^{n+1}_{v_0} = 2U^n_{v_0} - U^{n-1}_{v_0} + 2 \Delta t^2\, \frac{\displaystyle\sum_{e \in \mathcal{E}(v_0)} \frac{\mu(e)}{\Delta x_e} \big( U^n_{e,\, 1(e,v_0)} - U^n_{v_0} \big)}{\displaystyle\sum_{e \in \mathcal{E}(v_0)} \mu(e)\, \Delta x_e}. \tag{2.14} $$

Computation of the outer vertices. We have to treat the Neumann and outgoing conditions; the Dirichlet condition is given explicitly by (2.9).
- Neumann condition: for any $v \in \mathcal{V}_{o,N}$,
$$ \frac{\Delta x_{e(v)}}{2 \Delta t^2} \big( U^{n+1}_{e(v),0(v)} - 2U^n_{e(v),0(v)} + U^{n-1}_{e(v),0(v)} \big) + \frac{U^n_{e(v),0(v)} - U^n_{e(v),1(v)}}{\Delta x_{e(v)}} + g_v(n \Delta t) = 0. \tag{2.15} $$
- Outgoing condition: for any $v \in \mathcal{V}_{o,O}$,
$$ \frac{\Delta x_{e(v)}}{2 \Delta t^2} \big( U^{n+1}_{e(v),0(v)} - 2U^n_{e(v),0(v)} + U^{n-1}_{e(v),0(v)} \big) + \frac{U^n_{e(v),0(v)} - U^n_{e(v),1(v)}}{\Delta x_{e(v)}} + \frac{U^{n+1}_{e(v),0(v)} - U^{n-1}_{e(v),0(v)}}{2 \Delta t} = 0. \tag{2.16} $$

3 Implementation of the discrete problem

Here we show the implementation of the numerical scheme described above in the numerical code netwaves. This program mainly uses two types of structures: a structure for the geometrical graph and a structure for the problem. In its current state, the code considers homogeneous Dirichlet and Neumann conditions.

Remark. In this section and its subsections, we introduce some classes and methods (not all of them; one can check the others by retrieving the sources and running Doxygen, available at http://www.stack.nl/~dimitri/doxygen/). We prefix all members of our classes with an underscore, to differentiate them from local variables in the class methods (in addition to using the object *this).

3.1 Geometrical graph structure

The two associative maps _vertices and _edges respectively contain $\mathcal{V}$ and $\mathcal{E}$. More precisely:
- the index of the map _vertices is the label of a vertex of the graph, and the value holds the coordinates of that vertex;
- the index of the map _edges is the label of an edge of the graph, and the value holds the two vertices linked by that edge.

To describe the following functions more quickly, we introduce the two following typedefs:

```cpp
typedef std::map<int, std::vector<real> > tabular_vertices;
typedef std::map<int, std::vector<int> >  tabular_edges;
```

3.2 Problem structure

Each associative map (except the map _tabular_boundary_conditions, whose index is linked to the index of the map _vertices of the member _geom) has its indexes linked to the index of the map _edges of the member _geom. We store:
- in the map _hspaces, the couples $(e, \Delta x_e)$ for $e \in \mathcal{E}$;
- in the map _relative_width, the couples $(e, \mu(e))$ for $e \in \mathcal{E}$ (we named this map this way because the conductance is linked to some geometrical parameters of our network);
- in the maps _Unminus and _Un, the solution computed at the discretization points at iterations $n - 1$ and $n$.

3.3 Setting up the problem

Compute initial data. Assume that we only know the structure of the graph and the desired value $\widetilde{\Delta x}$ of the space step on each edge. For each edge $e$:
1. we retrieve the coordinates of the two vertices associated with this edge;
2. we compute the length $l_e$;
3. we compute $N_e = \lfloor l_e / \widetilde{\Delta x} \rfloor$ and then $\Delta x_e = l_e / N_e$; this ensures $\Delta x_e \geqslant \widetilde{\Delta x}$, so that if the CFL condition is satisfied for the choice $\widetilde{\Delta x}$, it is satisfied for the choice $\Delta x_e$;
4. we create a pointer on a vector of size $N_e + 1$ whose elements are equal to 0;
5. we compute the initial data by using formula (2.12) (thanks to Hypothesis 2.1, no special treatment is needed near the vertices apart from the boundary contributions of section 2.3.1).
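A minimal sketch of step 5, implementing (2.12) on one edge; the signature is illustrative (the netwaves code stores these vectors in its _Unminus/_Un maps instead):

```cpp
#include <cstddef>
#include <vector>

// Fill U^1 on one edge from (2.12), the P1-interpolated d'Alembert formula.
void init_edge_U1(double dxe, double dt,
                  const std::vector<double>& u0,  // u_0 at the Ne+1 points
                  const std::vector<double>& u1,  // u_1 at the Ne+1 points
                  std::vector<double>& U1)        // output: U^1
{
    const double r = dt / dxe;  // at most 1 under the CFL condition
    U1 = u0;                    // end values are overwritten afterwards by
                                // the boundary contributions described above
    for (std::size_t i = 1; i + 1 < u0.size(); ++i)
        U1[i] = (1.0 - r) * u0[i]
              + 0.5 * r * (u0[i - 1] + u0[i + 1])
              + 0.5 * r * (0.5 * dt * u1[i - 1]
                           + (2.0 * dxe - dt) * u1[i]
                           + 0.5 * dt * u1[i + 1]);
}
```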
3.4 Computation over time

Assume that _Unminus and _Un contain the vectors $U^{n-1}$ and $U^n$. We then compute $U^{n+1}$ as described in section 2.3.2. The computation here is sequential (one can see that it would be easy to parallelize the code using shared memory). Some remarks about the corresponding routine:
- first, we gather in the map tabular_edges_connected the sets $\mathcal{E}(v)$ for each $v \in \mathcal{V}$ (without distinguishing whether $v \in \mathcal{V}_i$ or not);
- since the values $U^{n-1}(v)$ and $U^n(v)$ do not depend on the edge, we look for these values on the first edge (in the sense of the std::less order associated with int), and we write $U^{n-1}(v) = U^{n-1}_{e,\, 0(e,v)}$ and $U^n(v) = U^n_{e,\, 0(e,v)}$, treating separately the case $0(e, v) = 0$ and the other case;
- finally, for $v \in \mathcal{V}_o$, we compute the values of $0(v)$ and $1(v)$, and use them in the various boundary conditions.

4 Error estimates

In this section we give some energy estimates associated with both the continuous framework (section 1) and the discrete framework (section 2). One has to define an energy, and then to check the estimates both theoretically and numerically.

4.1 Continuous framework

One can check in (1.3) that taking $w = \dfrac{\partial u}{\partial t}$ leads to the relation
$$ \frac{1}{2} \frac{\partial}{\partial t} \left[ \sum_{e \in \mathcal{E}} \mu(e) \int_e \left| \frac{\partial u}{\partial t}(t, \cdot) \right|^2 + \left| \nabla u(t, \cdot) \right|^2 \right] + \sum_{v \in \mathcal{V}_{o,N}} \mu(v)\, g_v(t)\, \frac{\partial u}{\partial t}(t, v) + \sum_{v \in \mathcal{V}_{o,O}} \mu(v) \left| \frac{\partial u}{\partial t}(t, v) \right|^2 = 0. \tag{4.1} $$
From (4.1), one can define a continuous energy as follows.

Definition 4.1. The energy of the solution $u$ of (1.3) at time $t$ is defined by
$$ E(t) = \frac{1}{2} \sum_{e \in \mathcal{E}} \mu(e) \int_e \left| \frac{\partial u}{\partial t}(t, \cdot) \right|^2 + \left| \nabla u(t, \cdot) \right|^2. \tag{4.2} $$

The idea is to get an a priori estimate. Here we suppose that there are only Neumann or outgoing conditions (for Dirichlet conditions, one introduces a function, locally supported on the graph, satisfying the Dirichlet conditions, and checks which problem is satisfied by the difference). We also suppose that the Neumann functions $g_v$ belong to $\mathrm{H}^1(\mathbb{R}_+^*)$. Starting from (4.1), integrating over $t \in [0, T]$, and using the fact that some terms are positive, one gets:
$$ E(T) - E(0) \leqslant -\sum_{v \in \mathcal{V}_{o,N}} \mu(v) \int_0^T g_v(t)\, \frac{\partial u}{\partial t}(t, v)\, \mathrm{d}t. \tag{4.3} $$
Integrating by parts in (4.3) and using $u(0, v) = 0$ for each $v \in \mathcal{V}_{o,N}$, one gets
$$ E(T) - E(0) \leqslant -\sum_{v \in \mathcal{V}_{o,N}} \mu(v)\, g_v(T)\, u(T, v) + \sum_{v \in \mathcal{V}_{o,N}} \mu(v) \int_0^T g_v'(t)\, u(t, v)\, \mathrm{d}t. \tag{4.4} $$
To conclude, one uses the two following lemmas.

Lemma 4.1 (Poincaré-Wirtinger inequality). For any $v \in \mathcal{V}_{o,N}$, taking $e = e(v)$, there exists a constant $C_e$ depending only on $e$ such that, for any $t \in \mathbb{R}_+$,
$$ |u(t, v)| \leqslant \| u \|_{\mathrm{H}^1(e)} \leqslant C_e \big( \| u(t, \cdot) \|_{\mathrm{L}^2(e)} + \| \nabla u(t, \cdot) \|_{\mathrm{L}^2(e)} \big). $$

Lemma 4.2. For any $T \in \mathbb{R}_+$ and any $e \in \mathcal{E}$, one has
$$ \| u(T, \cdot) \|_{\mathrm{L}^2(e)} \leqslant 2 \left( \| u(0, \cdot) \|_{\mathrm{L}^2(e)} + \sqrt{T} \left( \int_0^T \left\| \frac{\partial u}{\partial t}(t, \cdot) \right\|_{\mathrm{L}^2(e)}^2 \mathrm{d}t \right)^{1/2} \right). $$

Proof of Lemma 4.2. One starts from
$$ \| u(T, \cdot) \|_{\mathrm{L}^2(e)}^2 = \int_e u(T, x)^2 = \int_e u(T, x) \left( u(0, x) + \int_0^T \frac{\partial u}{\partial t}(t, x)\, \mathrm{d}t \right) \leqslant \int_e |u(T, x)| \left( |u(0, x)| + \sqrt{T} \left( \int_0^T \left| \frac{\partial u}{\partial t}(t, x) \right|^2 \mathrm{d}t \right)^{1/2} \right). $$
Then the Cauchy-Schwarz inequality gives the relation of the lemma.

By using both Lemmas 4.1 and 4.2, there exist three constants $C$, $C'$ and $C''$, depending on time, such that
$$ E(T) \leqslant E(0) + C(T) + \int_0^T C'(t) \sqrt{E(t)}\, \mathrm{d}t + C''(T) \sqrt{E(T)}. \tag{4.5} $$
Remark. One can easily see that all these constants are equal to zero when all the Neumann conditions are homogeneous.

Now using the bound $C''(T) \sqrt{E(T)} \leqslant \frac{(C''(T))^2}{2} + \frac{1}{2} E(T)$ in (4.5) shows that there exist two constants $\widetilde{C}(T)$ and $C'(t)$ (the latter being the same constant as before) such that
$$ E(T) \leqslant \widetilde{C}(T) + \int_0^T C'(t) \sqrt{E(t)}\, \mathrm{d}t. \tag{4.6} $$
Finally, Gronwall's inequality leads to the following a priori bound.
4 Error estimates

In this section we give some error estimates associated with both the continuous framework (section 1) and the discrete framework (section 2). One has to define some energy, and then to check error estimates both theoretically and numerically.

4.1 Continuous framework

One can check in (1.3) that taking $v = \partial u/\partial t$ leads to the relation

$$\frac{1}{2}\frac{\partial}{\partial t}\left[\sum_{e\in\mathcal{E}} \mu(e)\int_e \left|\frac{\partial u_e}{\partial t}(t,\cdot)\right|^2 + |\nabla u_e(t,\cdot)|^2\right] + \sum_{v\in\mathcal{V}_{o,n}} \mu(e(v))\, f_v(t)\,\frac{\partial u}{\partial t}(t,v) + \sum_{v\in\mathcal{V}_{o,o}} \mu(e(v)) \left|\frac{\partial u}{\partial t}(t,v)\right|^2 = 0. \qquad (4.1)$$

From (4.1), one can define a continuous energy as follows:

Definition 4.1. The energy of the solution $u$ of (1.3) at time $t$ is defined by

$$E(t) = \frac{1}{2}\left[\sum_{e\in\mathcal{E}} \mu(e)\int_e \left|\frac{\partial u_e}{\partial t}(t,\cdot)\right|^2 + |\nabla u_e(t,\cdot)|^2\right]. \qquad (4.2)$$

The idea is to get some a priori error estimates. For this case, we suppose that there are only Neumann or outgoing conditions (for the Dirichlet conditions, one has to introduce some function $w$ locally supported on the graph satisfying the Dirichlet conditions, and to determine which problem is satisfied by $u - w$). We will also suppose that the Neumann data $f_v$ belong to the set $\mathrm{H}^1(\mathbb{R}^*_+)$. One starts from (4.1), integrates over $s \in [0, t]$, and, using the fact that some terms are positive, one gets:

$$E(t) - E(0) \leqslant -\sum_{v\in\mathcal{V}_{o,n}} \mu(e(v)) \int_0^t f_v(s)\,\frac{\partial u}{\partial s}(s, v)\,\mathrm{d}s. \qquad (4.3)$$

Using an integration by parts in (4.3) and using that $f_v(0) = 0$ for each $v \in \mathcal{V}_{o,n}$, one gets

$$E(t) - E(0) \leqslant -\sum_{v\in\mathcal{V}_{o,n}} \mu(e(v))\, f_v(t)\, u(t, v) + \sum_{v\in\mathcal{V}_{o,n}} \mu(e(v)) \int_0^t f_v'(s)\, u(s, v)\,\mathrm{d}s. \qquad (4.4)$$

To be able to conclude, one has to use the two following lemmas.

Lemma 4.1 (Poincaré-Wirtinger inequality). For any $v \in \mathcal{V}_{o,n}$, taking $e = e(v)$, there exists a constant $C_e$ which depends only on $e$ such that, for any $t \in \mathbb{R}_+$,

$$|u(t, v)| \leqslant C_e\,\|u_e(t,\cdot)\|_{\mathrm{H}^1(e)} \leqslant C_e\left(\|u_e(t,\cdot)\|_{\mathrm{L}^2(e)} + \|\nabla u_e(t,\cdot)\|_{\mathrm{L}^2(e)}\right).$$

Lemma 4.2. For any $t \in \mathbb{R}_+$ and for any $e \in \mathcal{E}$, one has

$$\|u_e(t,\cdot)\|_{\mathrm{L}^2(e)} \leqslant 2\left(\|u_e(0,\cdot)\|_{\mathrm{L}^2(e)} + \sqrt{t}\int_0^t \left\|\frac{\partial u_e}{\partial s}(s,\cdot)\right\|_{\mathrm{L}^2(e)} \mathrm{d}s\right).$$

Proof of Lemma 4.2. One starts from

$$\|u_e(t,\cdot)\|^2_{\mathrm{L}^2(e)} = \int_e u_e(t,x)^2\,\mathrm{d}x = \int_e u_e(t,x)\left(u_e(0,x) + \int_0^t \frac{\partial u_e}{\partial s}(s,x)\,\mathrm{d}s\right)\mathrm{d}x \leqslant \int_e |u_e(t,x)|\left(|u_e(0,x)| + \sqrt{t}\left(\int_0^t \left|\frac{\partial u_e}{\partial s}(s,x)\right|^2 \mathrm{d}s\right)^{1/2}\right)\mathrm{d}x.$$

Then using the Cauchy-Schwarz inequality gives the relation of the lemma.

By using both Lemmas 4.1 and 4.2, there exist three constants $C$, $C'$ and $C''$, depending on time, such that

$$E(t) \leqslant E(0) + C(t) + \int_0^t C'(s)\sqrt{E(s)}\,\mathrm{d}s + C''(t)\sqrt{E(t)}. \qquad (4.5)$$

Remark. One can easily see that all these constants are equal to zero when all Neumann conditions are homogeneous Neumann conditions.

Now using the fact that $C''(t)\sqrt{E(t)} \leqslant \frac{(C''(t))^2}{2} + \frac{1}{2}E(t)$ in (4.5) shows that there exist two constants $\widetilde{C}(t)$ and $C'(s)$ (the latter being the same constant as before) such that

$$E(t) \leqslant \widetilde{C}(t) + \int_0^t C'(s)\sqrt{E(s)}\,\mathrm{d}s. \qquad (4.6)$$

Finally, using the Gronwall inequality leads to the following a priori bound:

Theorem 4.3. There exist functions $\widetilde{C}$ and $C'$, which depend on time and on the boundary conditions of problem (1.1), such that, for all $t \in \mathbb{R}_+$,

$$E(t) \leqslant \left(\sqrt{\widetilde{C}(t)} + \frac{1}{2}\int_0^t C'(s)\,\mathrm{d}s\right)^2,$$

where $\widetilde{C}$ and $C'$ are the previously defined constants.

Remark. When all Neumann conditions are homogeneous Neumann conditions, Theorem 4.3 shows that the energy is bounded over time (see the remark associated with equation (4.5)).

4.2 Discrete framework

Here we will suppose for convenience that we have only homogeneous Neumann conditions. We recall the numerical scheme (2.7) under this hypothesis:

$$\sum_{e\in\mathcal{E}} \mu(e)\left(M_e\,\frac{u_e^{n+1} - 2u_e^{n} + u_e^{n-1}}{\Delta t^2} + B_e\, u_e^{n}\right) = 0,$$

where $M_e$ and $B_e$ denote the mass and stiffness matrices of the edge $e$. One has to multiply this line by $u_e^{n+1} - u_e^{n-1}$, and one gets, by defining the following discrete energy

$$E^{n+1/2} = \sum_{e\in\mathcal{E}} \mu(e)\left(\frac{u_e^{n+1}-u_e^{n}}{\Delta t}\right)^{\!\mathsf{T}} M_e \left(\frac{u_e^{n+1}-u_e^{n}}{\Delta t}\right) + \sum_{e\in\mathcal{E}} \mu(e)\,(u_e^{n+1})^{\mathsf{T}} B_e\, u_e^{n}, \qquad (4.7)$$

the fact that this energy is constant over time. However, one can also check that not all the terms of the energy defined by (4.7) are positive. To avoid that, one remarks that, thanks to the symmetry of each $B_e$,

$$(u_e^{n+1})^{\mathsf{T}} B_e\, u_e^{n} = \frac{1}{2}\left((u_e^{n+1})^{\mathsf{T}} B_e\, u_e^{n+1} + (u_e^{n})^{\mathsf{T}} B_e\, u_e^{n}\right) - \frac{\Delta t^2}{2}\left(\frac{u_e^{n+1}-u_e^{n}}{\Delta t}\right)^{\!\mathsf{T}} B_e \left(\frac{u_e^{n+1}-u_e^{n}}{\Delta t}\right). \qquad (4.8)$$

Using (4.8) in (4.7) allows us to rewrite the discrete energy in a different way: one has

$$E^{n+1/2} = \sum_{e\in\mathcal{E}} \mu(e)\left(\frac{u_e^{n+1}-u_e^{n}}{\Delta t}\right)^{\!\mathsf{T}} \left(M_e - \frac{\Delta t^2}{2}B_e\right) \left(\frac{u_e^{n+1}-u_e^{n}}{\Delta t}\right) + \sum_{e\in\mathcal{E}} \frac{\mu(e)}{2}\left((u_e^{n+1})^{\mathsf{T}} B_e\, u_e^{n+1} + (u_e^{n})^{\mathsf{T}} B_e\, u_e^{n}\right). \qquad (4.9)$$

Let us define, for each edge $e \in \mathcal{E}$, the two following rescaled matrices:

$$\widetilde{M}_e = \frac{2}{\Delta x_e}\,M_e, \qquad (4.10) \qquad\qquad \widetilde{B}_e = \Delta x_e\, B_e. \qquad (4.11)$$

One has then, thanks to (4.10) and (4.11),

$$M_e - \frac{\Delta t^2}{2}B_e = \frac{\Delta x_e}{2}\left(\widetilde{M}_e - \frac{\Delta t^2}{\Delta x_e^2}\,\widetilde{B}_e\right). \qquad (4.12)$$

The advantage of introducing the matrices $\widetilde{M}_e$ and $\widetilde{B}_e$ is that their coefficients do not depend on the discretization. Let us now solve the generalized eigenvalue problem: find $(\lambda, w) \in \mathbb{R} \times \mathbb{R}^{N_e+1}$ such that

$$\widetilde{B}_e\, w = \lambda\, \widetilde{M}_e\, w. \qquad (4.13)$$

Then one can easily say that, for all $w \in \mathbb{R}^{N_e+1}$,

$$w^{\mathsf{T}}\left(\widetilde{M}_e - \frac{\Delta t^2}{\Delta x_e^2}\,\widetilde{B}_e\right) w \geqslant \left(1 - \lambda_{\max}\,\frac{\Delta t^2}{\Delta x_e^2}\right) w^{\mathsf{T}}\,\widetilde{M}_e\, w, \qquad (4.14)$$

where $\lambda_{\max}$ is the largest eigenvalue of problem (4.13). A rough calculation shows that $\lambda_{\max} \leqslant 1$. Then one can say that:

Theorem 4.4. The energy contains only positive terms (and then the numerical scheme is stable) if and only if

$$\forall e \in \mathcal{E}, \quad \frac{\Delta t}{\Delta x_e} \leqslant 1.$$

Remark. One can also check by computation that the numerical scheme is not stable (with exponential growth of the computed solution) if the condition of Theorem 4.4 is not satisfied on at least one edge.
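As a numerical counterpart to Theorem 4.4, one can monitor the discrete energy (4.7) along the computation: the scheme conserves it exactly, and its positivity breaks down when the CFL condition fails. The following self-contained sketch does this for a single edge with homogeneous Neumann ends; the lumped mass and stiffness actions and all function names below are our own illustration, not part of netwaves.

    #include <cstdio>
    #include <vector>
    #include <cmath>

    typedef std::vector<double> Vec;

    // Action of the stiffness matrix B (homogeneous Neumann ends):
    // (B u)_k = (2 u_k - u_{k-1} - u_{k+1}) / dx, one-sided at the ends.
    static Vec applyB(const Vec& u, double dx) {
        int N = u.size();
        Vec r(N, 0.);
        r[0]   = (u[0]   - u[1])   / dx;
        r[N-1] = (u[N-1] - u[N-2]) / dx;
        for (int k = 1; k < N-1; ++k) r[k] = (2.*u[k] - u[k-1] - u[k+1]) / dx;
        return r;
    }

    // Discrete energy (4.7): v^T M v + (u^{n+1})^T B u^n, with lumped mass
    // M = dx * diag(1/2, 1, ..., 1, 1/2) and v = (u^{n+1} - u^n)/dt.
    static double energy(const Vec& un, const Vec& unp, double dx, double dt) {
        int N = un.size();
        double kin = 0.;
        for (int k = 0; k < N; ++k) {
            double v = (unp[k] - un[k]) / dt;
            double m = (k == 0 || k == N-1) ? 0.5*dx : dx;
            kin += m * v * v;
        }
        Vec Bu = applyB(un, dx);
        double pot = 0.;
        for (int k = 0; k < N; ++k) pot += unp[k] * Bu[k];
        return kin + pot;
    }

    int main() {
        const int N = 101; const double dx = 0.01, dt = 0.009; // dt/dx < 1
        Vec um(N), un(N), up(N);
        for (int k = 0; k < N; ++k)  // smooth bump as initial data
            um[k] = un[k] = std::exp(-100.*(k*dx - 0.5)*(k*dx - 0.5));
        const double c = dt/dx;
        for (int n = 0; n < 200; ++n) {
            up[0]   = 2.*un[0]   + 2.*c*c*(un[1]  -un[0])   - um[0];  // Neumann
            up[N-1] = 2.*un[N-1] + 2.*c*c*(un[N-2]-un[N-1]) - um[N-1];
            for (int k = 1; k < N-1; ++k)
                up[k] = 2.*(1.-c*c)*un[k] + c*c*(un[k+1]+un[k-1]) - um[k];
            if (n % 50 == 0) std::printf("E = %.6e\n", energy(un, up, dx, dt));
            um = un; un = up;
        }
        return 0;
    }

Running this with dt slightly above dx makes the printed energy, and the solution itself, blow up, in line with the remark following Theorem 4.4.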
5 Dealing with improved Kirchhoff conditions

Instead of writing problem (1.1) with Kirchhoff conditions, we want to solve the following problem: find $u \in \mathrm{C}^0(\mathbb{R}_+, \mathrm{H}^1_d(G)) \cap \mathrm{C}^1(\mathbb{R}_+, \mathrm{H}^0(G))$ such that

$$\begin{cases} \dfrac{\partial^2 u_e}{\partial t^2} - \Delta u_e = 0 & \text{in } e,\ \forall e \in \mathcal{E} \text{ and } \forall t \in \mathbb{R}^*_+,\\[1mm] u_e = f_e & \text{for } t = 0 \text{ and } \forall e \in \mathcal{E},\\[1mm] \dfrac{\partial u_e}{\partial t} = g_e & \text{for } t = 0 \text{ and } \forall e \in \mathcal{E}, \end{cases} \qquad (5.1)$$

with some improved Kirchhoff conditions, introduced by P. Joly and A. Semin in [Joly and Semin, Propagation of an acoustic wave in a junction of two thin slots] for the particular case of two slots and in [Joly and Semin, Construction and Analysis of Improved Kirchhoff Conditions for Acoustic Wave Propagation in a Junction of Thin Slots] for the general case (which interests us). We first have to define some notation; in particular, the notation here differs from the notation introduced in section 2.3.2.

5.1 Improved Kirchhoff conditions

We first have to define an order function, which tells us how we locally number our edges.

Definition 5.1. An order function on the graph is a function $\sigma$ defined on $\mathcal{V} \times \mathcal{E}(\mathcal{V}) = \{v, \mathcal{E}(v)\}_{v\in\mathcal{V}}$, with values in $\mathbb{N}^*$, such that for any $v \in \mathcal{V}$, $\sigma(v, \cdot)$ is a bijection between $\mathcal{E}(v)$ and $\{1, \ldots, \#\mathcal{E}(v)\}$. We will then denote $\sigma^{-1}(v, \cdot) : \{1, \ldots, \#\mathcal{E}(v)\} \to \mathcal{E}(v)$ the reciprocal bijection.

Once we have defined our order function $\sigma$, for any $v \in \mathcal{V}_i$, one can denote:

• $u(v)$ the vector in $\mathbb{R}^{\#\mathcal{E}(v)}$ whose $k$-th component is the value at $v$ of $u$ defined on the edge $\sigma^{-1}(v,k)$:

$$u(v) = \begin{pmatrix} u_{\sigma^{-1}(v,1)}(t, v) \\ \vdots \\ u_{\sigma^{-1}(v,\#\mathcal{E}(v))}(t, v) \end{pmatrix}; \qquad (5.2)$$

this vector is not proportional to the vector $(1, \ldots, 1)^{\mathsf{T}}$, since the function $u$ is not continuous at each vertex $v \in \mathcal{V}_i$;

• $\partial u(v)$ the vector in $\mathbb{R}^{\#\mathcal{E}(v)}$ defined by

$$\partial u(v) = \begin{pmatrix} \dfrac{\partial u_{\sigma^{-1}(v,1)}}{\partial n}(t, v) \\ \vdots \\ \dfrac{\partial u_{\sigma^{-1}(v,\#\mathcal{E}(v))}}{\partial n}(t, v) \end{pmatrix}. \qquad (5.3)$$

The improved conditions are given as follows: for any $v \in \mathcal{V}_i$, there exist two matrices $J_v$ and $A_v$ in $\mathcal{M}_{\#\mathcal{E}(v)}(\mathbb{R})$ such that

$$\partial u(v) = \left(J_v + A_v\,\frac{\partial^2}{\partial t^2}\right) u(v). \qquad (5.4)$$

The problem with improved Kirchhoff conditions is (5.1) with (5.4), i.e.: find $u \in \mathrm{C}^0(\mathbb{R}_+, \mathrm{H}^1_d(G)) \cap \mathrm{C}^1(\mathbb{R}_+, \mathrm{H}^0(G))$ such that

$$\begin{cases} \dfrac{\partial^2 u_e}{\partial t^2} - \Delta u_e = 0 & \text{in } e,\ \forall e \in \mathcal{E} \text{ and } \forall t \in \mathbb{R}^*_+,\\[1mm] \partial u(v) = \left(J_v + A_v\,\dfrac{\partial^2}{\partial t^2}\right) u(v) & \forall v \in \mathcal{V}_i \text{ and } \forall t \in \mathbb{R}^*_+, \end{cases} \qquad (5.5)$$

together with the classical conditions (Dirichlet, Neumann, outgoing) at the outer vertices and the initial conditions of (5.1). The variational formulation is: find $u \in \mathrm{C}^0(\mathbb{R}_+, \mathrm{H}^1_d(G)) \cap \mathrm{C}^1(\mathbb{R}_+, \mathrm{H}^0(G))$ satisfying the inhomogeneous Dirichlet condition such that, for any $w \in \mathrm{H}^1_d(G)$ satisfying the associated homogeneous Dirichlet condition, one has (denoting $w(v)$ the vector defined for $w$ as in (5.2)):

$$\sum_{e\in\mathcal{E}} \int_e \mu(e)\,\frac{\partial^2 u_e}{\partial t^2}(t,\cdot)\,w_e + \mu(e)\,\nabla u_e(t,\cdot)\,\nabla w_e + \sum_{v\in\mathcal{V}_{o,n}} \mu(e(v))\, f_v(t)\, w(v) + \sum_{v\in\mathcal{V}_i} w(v)^{\mathsf{T}}\left(J_v + A_v\,\frac{\partial^2}{\partial t^2}\right) u(v) + \sum_{v\in\mathcal{V}_{o,o}} \mu(e(v))\,\frac{\partial u}{\partial t}(t,v)\, w(v) = 0. \qquad (5.6)$$

5.2 Discretization

The discrete problem associated with the new variational formulation (5.6) is the same as the discrete problem described in section 2.3, except for inner vertices, which are dealt with as follows: for any $v \in \mathcal{V}_i$, one introduces

• the vector $u_v^n$, defined as the vector containing the values of $u^n$ on each discretization point associated with the vertex $v$ on each edge of $\mathcal{E}(v)$, i.e.

$$u_v^n = \begin{pmatrix} u^n_{\sigma^{-1}(v,1),\,0(v)} \\ \vdots \\ u^n_{\sigma^{-1}(v,\#\mathcal{E}(v)),\,0(v)} \end{pmatrix};$$

• the vector $u_{v,\Delta}^n$, defined as the vector containing the values of $u^n$ on each discretization point next to the discretization points associated with the vertex $v$ on each edge of $\mathcal{E}(v)$.

Then, for an inner vertex, instead of computing (2.14), one has to solve this system (where $D_{\Delta x}$ denotes the diagonal matrix whose $k$-th entry is $\Delta x_{\sigma^{-1}(v,k)}$):

$$\frac{1}{2\Delta t^2}\,D_{\Delta x}\left(u_v^{n+1} - 2u_v^{n} + u_v^{n-1}\right) + D_{\Delta x}^{-1}\left(u_v^{n} - u_{v,\Delta}^{n}\right) + J_v\,\frac{u_v^{n+1} + 2u_v^{n} + u_v^{n-1}}{4} + A_v\,\frac{u_v^{n+1} - 2u_v^{n} + u_v^{n-1}}{\Delta t^2} = 0. \qquad (5.7)$$

Remark. One can check in (5.7) that the term associated with $J_v$ has been treated implicitly. This is not classical, and it is due to the fact that if we do not make this term implicit, there might be some resonance frequencies (linked to the amplitude of the matrix $J_v$) that would corrupt the CFL condition of Theorem 4.4; the case of two slots has been proved in [Semin, Propagation d'ondes dans des jonctions de fentes minces].

5.3 Implementation

In our class struct_problem, we add some associative maps, whose indexes are indexed on the indexes of _geom->_vertices:

    template<class geometry, class real>
    class struct_problem {
    protected:
        // ... previously defined members
        std::map<int, std::vector<int> > _order;
        std::map<int, Matrix*>           _J;
        std::map<int, Matrix*>           _A;
    };

The class Matrix is the matrix class of the external library newmat, written by R. B. Davies (see [4] for instance). For the initialization of the problem, the matrices _J and _A are read from a data file, as is the vector _order, and one checks that the dimensions of those objects match the number of edges connected to the associated vertex. Moreover, one checks (when building the vector _order) that there is no element associated with two different keys, with the following code:

    std::map<int, int> order2;
    const std::vector<int>& order = this->_order.find(vertex_label)->second;
    for (std::size_t k = 0; k < order.size(); ++k) {
        order2[order[k]] = int(k);
    }
    if (order.size() != order2.size()) {
        // In this case, two different indexes carried the same value.
        // We exit with failure.
        exit(EXIT_FAILURE);
    }
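At each time step and each inner vertex, (5.7) is a small dense linear system in $u_v^{n+1}$. netwaves relies on newmat for this; as a library-independent illustration, here is a minimal sketch of our own that assembles the system matrix following (5.7) and solves it by Gaussian elimination. Everything below, including the function names, is illustrative code, not the netwaves implementation.

    #include <vector>
    #include <cmath>
    #include <algorithm>

    typedef std::vector<double> Vec;
    typedef std::vector<Vec>    Mat;

    // Solve K x = b by Gaussian elimination with partial pivoting.
    static Vec solve(Mat K, Vec b) {
        int d = b.size();
        for (int i = 0; i < d; ++i) {
            int p = i;  // pivot search
            for (int r = i+1; r < d; ++r)
                if (std::fabs(K[r][i]) > std::fabs(K[p][i])) p = r;
            std::swap(K[i], K[p]); std::swap(b[i], b[p]);
            for (int r = i+1; r < d; ++r) {
                double f = K[r][i] / K[i][i];
                for (int c = i; c < d; ++c) K[r][c] -= f * K[i][c];
                b[r] -= f * b[i];
            }
        }
        Vec x(d, 0.);
        for (int i = d-1; i >= 0; --i) {  // back substitution
            double s = b[i];
            for (int c = i+1; c < d; ++c) s -= K[i][c] * x[c];
            x[i] = s / K[i][i];
        }
        return x;
    }

    // One inner-vertex update following (5.7):
    //   [D/(2 dt^2) + J/4 + A/dt^2] u^{n+1} =
    //     D (2 u^n - u^{n-1})/(2 dt^2) - D^{-1}(u^n - u_delta^n)
    //     - J (2 u^n + u^{n-1})/4 + A (2 u^n - u^{n-1})/dt^2
    Vec vertex_update(const Vec& dx, const Mat& J, const Mat& A,
                      const Vec& un, const Vec& unm, const Vec& udelta,
                      double dt) {
        int d = dx.size();
        Mat K(d, Vec(d, 0.));
        Vec b(d, 0.);
        for (int i = 0; i < d; ++i) {
            for (int j = 0; j < d; ++j) {
                K[i][j] = J[i][j]/4. + A[i][j]/(dt*dt);
                b[i] -= J[i][j] * (2.*un[j] + unm[j]) / 4.;
                b[i] += A[i][j] * (2.*un[j] - unm[j]) / (dt*dt);
            }
            K[i][i] += dx[i] / (2.*dt*dt);
            b[i] += dx[i] * (2.*un[i] - unm[i]) / (2.*dt*dt)
                  - (un[i] - udelta[i]) / dx[i];
        }
        return solve(K, b);
    }

Since the system size equals the number of edges meeting at the vertex, it stays tiny in practice, so the choice of solver is immaterial; the point is only that the $J_v$ term sits on the left-hand side, as required by the remark above.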
Conclusion

We showed the existence of a numerical scheme solving the propagation of acoustic waves in networks. Moreover, we can say that this numerical scheme is stable under some classical conditions, and the implementation of this scheme is easy: one does not need to use matrices to implement it. However, the time step used is a global time step; one expects to use a local time step instead, in the spirit of the works of J. Rodriguez.

Acknowledgment

I would like to thank P. Joly for his knowledge, which helped me on some technical points, and K. Boxberger, who helped me to produce the code netwaves.

Figure 1.1: On the left: an example of a set $\mathcal{V}$ in $\mathbb{R}^2$. On the right: an example of the set $\mathcal{E}$ associated with $\mathcal{V}$.

Definition 1.2 (Parametrization). Given $e = (v_a, v_b) \in \mathcal{E}$, we introduce the parametrization $\gamma_e : \,]0, L_e[\, \to e$, where $L_e$ is the length (in the sense of the Euclidean norm) of $e$.
UNIX Shell Cheatsheet

Flavors of UNIX: Linux, FreeBSD, Darwin (MacOSX), Cygwin, WSL (Windows), ...
Flavors of Shell: Bash, Zsh, Fossil, ...

UNIX wisdom: everything is a file.
Shell wisdom: everything is a text stream.
Script wisdom: everything is a string.

Shell calls:

    #!/bin/bash    # run this script with bash
    cmd            # call cmd
    x=$(cmd)       # capture cmd standard output
    x=$((1+2))     # integer arithmetics

Options convention:

    cmd [options] ARG [OPTARG]
    Short: -h -v       (or combined: -al)
    Long: --help --verbose
    Arguments: --opt[=]<value>
    Example: ls -a -l .

Functions, returning by variable or by stdout:

    function g {
        local str="${1}"
        if [[ -z "$str" ]]; then   # string is empty
            return 1               # error
        else
            ret="$str"             # return by var
            printf "$str"          # return by stdout
            return 0               # OK
        fi
    }
    printf "$(g "First")"
    if g "Second"; then printf "$ret"; fi

File conditions: -r (is readable), -f (is a file), -d (is a directory), -x (is executable).

    if [[ -w "$fname" ]]; then
        # file is writable
    fi

Special variables: $0 (program name), $1 (1st argument), ..., $* (all arguments), $# (number of arguments), $IFS (Internal Field Separator).

Loops:

    for x in $X; do
        # iterate over split X
    done
    for ((i=0; i<10; i++)); do
        # numeric counter i
    done

Reading a file line by line:

    while read -r line || [[ -n "$line" ]]; do
        printf "$line\n"
    done < "$filename"

Aliases and functions:

    alias f="cmd -opt"

Jobs and sessions: background = depends on the session; detached = does not depend on the session. Detachable shell(s): detach the shell(s), get back the detached shell(s).
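As a closing illustration that combines several of the entries above (argument handling, file tests, loops, arithmetic), here is a short script of our own; it is not part of the original poster.

    #!/bin/bash
    # Print every readable regular file passed as argument and count them.
    n=0
    for f in "$@"; do                      # "$@": all arguments, one per item
        if [[ -f "$f" && -r "$f" ]]; then  # regular file and readable
            printf 'reading %s\n' "$f"
            n=$((n+1))                     # integer arithmetics
        fi
    done
    printf '%d file(s) read\n' "$n"
    [[ "$n" -gt 0 ]]                       # exit status: 0 if something was read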
Nicolas Verzelen, email: [email protected]

Technical appendix to "Adaptive estimation of stationary Gaussian fields"

AMS 2000 subject classifications: primary 62H11; secondary 62M40.
Keywords: Gaussian field, Gaussian Markov random field, model selection, pseudolikelihood, oracle inequalities, minimax rate of estimation.

1 Proof of Proposition 8.1

Proof of Proposition 8.1. First, we recall the notations introduced in [1]. Let $N$ be a positive integer. Then $\mathcal{I}_N$ stands for the family of subsets of $\{1, \ldots, N\}$ of size less than 2. Let $\mathcal{T}$ be a set of vectors indexed by $\mathcal{I}_N$. In the sequel, $\mathcal{T}$ is assumed to be a compact subset of $\mathbb{R}^{(N(N+1)/2)+1}$. The following lemma states a slightly modified version of the upper bound in remark 7 in [1].

Lemma 1.1. Let $T$ be a supremum of Rademacher chaos indexed by $\mathcal{I}_N$ of the form

$$T := \sup_{t\in\mathcal{T}} \left( \sum_{\{i,j\}} U_i U_j\, t_{\{i,j\}} + \sum_{i=1}^N t_{\{i\}} + t_\emptyset \right),$$

where $U_1, \ldots, U_N$ are independent Rademacher random variables. Then for any $x > 0$,

$$\mathbb{P}\{T \geq \mathbb{E}[T] + x\} \leq 4\exp\left[-\left(\frac{x^2}{L_1\,\mathbb{E}[D]^2} \wedge \frac{x}{L_2\, E}\right)\right], \qquad (1)$$

where $D$ and $E$ are defined by

$$D := \sup_{t\in\mathcal{T}}\ \sup_{\alpha:\,\|\alpha\|_2\leq 1}\ \sum_{i=1}^N U_i \sum_{j\neq i} \alpha_j\, t_{\{i,j\}}, \qquad E := \sup_{t\in\mathcal{T}}\ \sup_{\substack{\alpha^{(1)},\,\alpha^{(2)}\\ \|\alpha^{(1)}\|_2\leq 1,\ \|\alpha^{(2)}\|_2\leq 1}}\ \sum_{i=1}^N \sum_{j\neq i} t_{\{i,j\}}\, \alpha^{(1)}_i\, \alpha^{(2)}_j.$$

Contrary to the original result of [1], the chaos are not assumed to be homogeneous. Besides, the $t_{\{i\}}$ are redundant with $t_\emptyset$. In fact, we introduced this family in order to emphasize the connection with Gaussian chaos in the next result. A suitable application of the central limit theorem enables us to obtain a corresponding bound for Gaussian chaos of order 2.

Lemma 1.2. Let $T$ be a supremum of Gaussian chaos of order 2,

$$T := \sup_{t\in\mathcal{T}} \left( \sum_{\{i,j\}} t_{\{i,j\}}\, Y_i Y_j + \sum_i t_{\{i\}}\, Y_i^2 + t_\emptyset \right), \qquad (2)$$

where $Y_1, \ldots, Y_N$ are independent standard Gaussian random variables. Then, for any $x > 0$,

$$\mathbb{P}\{T \geq \mathbb{E}[T] + x\} \leq \exp\left[-\left(\frac{x^2}{\mathbb{E}[D]^2\, L_1} \wedge \frac{x}{E\, L_2}\right)\right], \qquad (3)$$

where

$$D := \sup_{t\in\mathcal{T}}\ \sup_{\substack{\alpha\in\mathbb{R}^N\\ \|\alpha\|_2\leq 1}}\ \sum_{i,j} Y_i\,(1+\delta_{i,j})\,\alpha_j\, t_{\{i,j\}}, \qquad E := \sup_{t\in\mathcal{T}}\ \sup_{\substack{\alpha_1,\ \|\alpha_1\|_2\leq 1\\ \alpha_2,\ \|\alpha_2\|_2\leq 1}}\ \sum_{i,j} \alpha_{1,i}\,\alpha_{2,j}\, t_{\{i,j\}}\,(1+\delta_{i,j}).$$

The proof of this lemma is postponed to the end of this section. To conclude, we derive the result of Proposition 8.1 from this last lemma. For any matrix $R \in \mathcal{F}$, we define the vector $t^R \in \mathbb{R}^{nr(nr+1)/2+1}$ indexed by $\mathcal{I}_{nr}$ as follows:

$$t^R_{\{(i,k),(j,l)\}} := \delta_{k,l}\,(2-\delta_{i,j})\,\frac{R[i,j]}{n}, \qquad t^R_{\{(i,k)\}} := \frac{R[i,i]}{n}, \qquad t^R_\emptyset := -\mathrm{tr}(R),$$

where $\delta_{i,j}$ is the indicator function of $i = j$. In order to apply Lemma 1.2 with $N = nr$ and $\mathcal{T} = \{t^R\,|\,R \in \mathcal{F}\}$, we have to work out the quantities $D$ and $E$. Instantiating the definition of $D$ for the vectors $t^R$ gives

$$D = \sup_{t^R\in\mathcal{T}}\ \sup_{\substack{\alpha\in\mathbb{R}^{nr}\\ \|\alpha\|_2\leq 1}}\ \sum_{1\leq i,j\leq r}\ \sum_{1\leq k,l\leq n} Y_i^{(k)}\,(1+\delta_{i,j}\delta_{k,l})\,\alpha_j^{(l)}\, t^R_{\{(i,k),(j,l)\}}. \qquad (4)$$

Let us now turn to the constant $E$:

$$E = \sup_{t^R\in\mathcal{T}}\ \sup_{\substack{\alpha_1,\,\alpha_2\in\mathbb{R}^{nr}\\ \|\alpha_1\|_2\leq 1,\ \|\alpha_2\|_2\leq 1}}\ \sum_{1\leq i,j\leq r}\ \sum_{1\leq k,l\leq n} (1+\delta_{i,j}\delta_{k,l})\, t^R_{\{(i,k),(j,l)\}}\,\alpha^{k}_{1,i}\,\alpha^{l}_{2,j} = \sup_{R\in\mathcal{F}}\ \sup_{\substack{\alpha_1,\,\alpha_2\in\mathbb{R}^{nr}\\ \|\alpha_1\|_2\leq 1,\ \|\alpha_2\|_2\leq 1}}\ \frac{2}{n} \sum_{1\leq i,j\leq r}\ \sum_{1\leq k\leq n} R[i,j]\,\alpha^{k}_{1,i}\,\alpha^{k}_{2,j}.$$

From this last expression, it follows that $E$ is a supremum of $L_2$ operator norms:

$$E = \frac{2}{n}\,\sup_{R\in\mathcal{F}}\ \varphi_{\max}\left(\mathrm{Diag}^{(n)}(R)\right),$$

where $\mathrm{Diag}^{(n)}(R)$ is the $(nr \times nr)$ block diagonal matrix whose diagonal blocks are all equal to the matrix $R$. Since the largest eigenvalue of $\mathrm{Diag}^{(n)}(R)$ is exactly the largest eigenvalue of $R$, we get

$$E = \frac{2}{n}\,\sup_{R\in\mathcal{F}}\ \varphi_{\max}(R). \qquad (5)$$

Applying Lemma 1.2 and gathering identities (4) and (5) yields

$$\mathbb{P}(Z \geq \mathbb{E}(Z) + t) \leq \exp\left[-\left(\frac{t^2}{L_1\,\mathbb{E}(V)} \wedge \frac{t}{L_2\, B}\right)\right],$$

where $B = E$ and $V = D^2$.

Proof of Lemma 1.1. This result is an extension of Corollary 4 in [1]. We shall closely follow the sketch of their proof, adapting a few arguments.
First, we upper bound the moments of (T -E(T )) + . Then, we derive the deviation inequality from it. Here, x + = max(x, 0). Lemma 1.3. For all real numbers q ≥ 2, (T -E(T )) + q ≤ LqE(D) + LqE , (6) where T q q stands for the q-th moment of the random variable T . The quantities D and E are defined in Lemma 1.1. By Lemma 1.3, for any t ≥ 0 and any q ≥ 2, P (T ≥ E(T ) + t) ≤ E (T -E(T )) q + t q ≤ √ LqE(D) + LqE t q . The right-hand side is at most 2 -q if √ LqE(D) ≤ t/4 and LqE ≤ t/4. Let us set q 0 := t 2 16LE(D) 2 ∧ t 4LE . If q 0 ≥ 2, then P (T ≥ E(T ) + t) ≤ 2 -q0 . On the other hand if q 0 < 2, then 4 × 2 -q0 ≥ 1. It follows that P (T ≥ E(T ) + t) ≤ 4 exp - log(2) 4L t 2 4E(D) 2 ∧ t E . Proof of Lemma 1.3. This result is based on the entropy method developed in [START_REF] Boucheron | Moment inequalities for functions of independent random variables[END_REF]. Let f : R N → R be a measurable function such that T = f (U 1 , . . . , U N ). In the sequel, U ′ 1 , . . . , U ′ N denote independent copies of U 1 , . . . , U N . The random variable T ′ i and V + are defined by T ′ i := f (U 1 , . . . , U i-1 , U ′ i , U i+1 , . . . , U N ) , V + := E N i=1 (T -T ′ i ) 2 + |U N 1 , where U N 1 refers to the set {U 1 , . . . , U N }. Theorem 2 in [START_REF] Boucheron | Moment inequalities for functions of independent random variables[END_REF] states that for any real q ≥ 2, (T -E(T )) + q ≤ Lq √ V + q . (7) To conclude, we only have bound the moments of √ V + . By definition, T = sup t∈T {i,j} U i U j t {i,j} + N i=1 t {i} + t ∅ . Since the set T is compact, this supremum is achieved almost surely at an element t 0 of T . For any 1 ≤ i ≤ N , (T -T ′ i ) 2 + ≤ (U i -U ′ i ) j =i U j t 0 {i, j} 2 . Gathering this bound for any i between 1 and N , we get V + ≤ N i=1 E   (U i -U ′ i ) j =i U j t 0 {i, j} 2 U N 1   ≤ 2 N i=1 j =i U j t 0 {i, j} 2 ≤ 2 sup α∈R N , α 2 ≤1 N i=1 α i j =i t 0 {i,j} U j 2 ≤ 2 sup t∈T sup α∈R N , α 2 ≤1 N i=1 U i j =i α j t {i,j} 2 = 2D 2 . Combining this last bound with ( 7) yields (T -E(T )) + q ≤ Lq √ 2 D q ≤ Lq E(D) + |(D -E(D)) + q . ( 8 ) Since the random variable D defined in Lemma 1.1 is a measurable function f 2 of the variables U 1 , . . . , U N , we apply again Theorem 2 in [START_REF] Boucheron | Moment inequalities for functions of independent random variables[END_REF]. (D -E(D)) + q ≤ Lq V + 2 q , where V + 2 is defined by V + 2 := E N i=1 (D -D ′ i ) 2 + U N i , and D ′ i := f 2 (U 1 , . . . , U i-1 , U ′ i , U i+1 , . . . , U N ). As previously, the supremum in D is achieved at some random parameter (t 0 , α 0 ). We therefore upper bound V + 2 as previously. V + 2 ≤ N i=1 E   (U i -U ′ i ) j =i α 0 j t 0 {i,j} 2 U N 1   ≤ 2 N i=1 j =i α 0 j t 0 {i,j} 2 ≤ 2 sup α (2) ∈R N , α 2≤1 N i=1 α (2) j j =i α 0 i t {i,j} 2 = 2E 2 . Gathering this upper bound with (8) yields (T -E(T )) + q ≤ LqE(D) + LqE . Proof of Lemma 1.2. We shall apply the central limit theorem in order to transfer results for Rademacher chaos to Gaussian chaos. Let f be the unique function satisfying T = f (y 1 , . . . , y N ) for any (y 1 , . . . , y N ) ∈ R N . As the set T is compact, the function f is known to be continuous. Let (U (j) i ) 1≤i≤N,j≥0 an i.i.d. family of Rademacher variables. For any integer n > 0, the random variables Y (n) and T (n) are defined by Y (n) := n j=1 U (j) 1 √ n , . . . , n j=1 U (j) N √ n , T (n) := f Y (n) . Clearly, T (n) is a supremum of Rademacher chaos of order 2 with nN variables and a constant term. 
By the central limit theorem, T (n) converges in distribution towards T as n tends to infinity. Consequently, deviation inequalities for the variables T (n) transfer to T as long as the quantities E D (n) , E (n) , and E[T (n) ] converge. We first prove that the sequence T (n) converges in expectation towards T . As T (n) converges in distribution, it is sufficient to show that the sequence T (n) is asymptotically uniformly integrable. The set T is compact, thus there exists a positive number t ∞ such that T (n) ≤ t ∞ i,j |Y (n) i Y (n) j | + 1 ≤ t ∞ 1 + (N + 1)/2 N i=1 Y (n) i 2 . It follows that T (n) 2 ≤ t 2 ∞ N + 1 2 2 N + 2 2 1 + N i=1 Y (n) i 4 . (9) The sequence Y (n) i does not only converge in distribution to a standard normal distribution but also in moments (see for instance [START_REF] Billingsley | Probability and measure[END_REF] p.391). It follows that limE T (n) 2 ≤ ∞ and the sequence f Y (n) is asymptotically uniformly integrable. As a consequence, lim n→∞ E T (n) = E[T ] . Let us turn to the limit of E D (n) . As the variable T (n) equals T (n) = sup t∈T {i,j} t {i,j} 1≤k,l≤n U (k) i U (l) j n + i t i 1≤k≤n U (k) i √ n l =k U (l) i √ n + t ∅ + i t i , it follows that D (n) = sup t∈T sup α∈R nN , α 2 ≤1 1≤i≤N 1≤k≤n U (k) i j =i t {i,j} n 1≤l≤n α (l) j + 2 l =k 2 t {i} n α (l) i ≤ sup t∈T sup α∈R nN , α 2 ≤1 i U (k) i √ n j (1 + δ i,j )t {i,j} 1≤l≤n α (l) j √ n + A (n) , (10) where the random variable A (n) is defined by A (n) := sup t∈T sup α∈R nN , α 2≤1 N i=1 n j=1 t {i} U (j) i n α j i . Straightforwardly, one upper bounds A (n) by t ∞ /n N i=1 n j=1 U (j) i 2 and its expectation satisfies E A (n) ≤ t ∞ N n , which goes to 0 when n goes to infinity. Thus, we only have to upper bound the expectation of the first term in (10). Clearly, the supremum is achieved only when for all 1 ≤ j ≤ N , the sequence (α (l) j ) 1≤l≤n is constant. In such a case, the sequence (α (1) j ) 1≤j≤N satisfies α (1) 2 ≤ 1/ √ n. it follows that E D (n) = E sup t∈T sup α∈R N α 2 ≤1 E i Y (n) i j (1 + δ i,j )α j + O 1 √ n . Let g be the function defined by g (y 1 , . . . , y N ) = sup t∈T sup α∈R N α 2 ≤1 i y i j (1 + δ i,j )α j , for any (y 1 , . . . , y N ) ∈ R N . The function g(.) is measurable and continuous as the supremum is taken over a compact set. As a consequence, g(Y (n) ) converges in distribution towards g(Y ). As previously, the sequence is asymptotically uniformly integrable since its moment of order 2 is uniformly upper bounded. It follows that lim E D (n) = E [D]. Third, we compute the limit of E (n) . By definition, E (n) = sup t∈T sup α1,α2∈R nN , α1 2≤1, α2 2 ≤1 N i=1 n k=1 α k 1,i j =i n l=1 α (l) 2,j t {i,j} n + 2 l =k α (l) 2,i t {i} n = sup t∈T sup α1,α2, α1 2 ≤1, α2 2≤1 N i=1 N j=1 (1 + δ i,j ) t {i,j} n n k=1 n l=1 α (k) 1,i α (l) 2,j + O 1 n . As for the computation of D (n) , the supremum is achieved when the sequences (α k 1,i ) 1≤k≤n and (α l 2,j ) 1≤l≤n are constant for any i ∈ {1, . . . , N }. Thus, we only have to consider the supremum over the vectors α 1 and α 2 in R N . E (n) = sup t∈T sup α1,α2∈R N αi 2≤1 N i=1 N j=1 (1 + δ ij )t i,j α 1,i α 2,j + O 1 n . It follows that E (n) converges towards E when n tends to infinity. The random variable T (n) -E(T (n) ) converges in distribution towards T -E(T ). By Lemma 1.1 , P(T -E(T ) ≥ x) ≤ lim exp - x 2 E[D (n) ] 2 L 1 ∧ x E (n) L 2 , for any x > 0. Combining this upper bound with the convergence of the sequences D (n) and E (n) allows to conclude. Proof of Theorem 3.1 Proof of Lemma 8.3. 
We only consider here the anisotropic case, since the isotropic case is analogous. This result is based on the deviation inequality for suprema of Gaussian chaos of order 2 stated in Proposition 8.1. For any model m ′ belonging to M, we shall upper bound the quantities E(Z m ′ ), B m ′ , and E(W m ′ ) defined in (42) in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. 1. Let us first consider the expectation of Z m ′ . Let U ′ m,m ′ be the new vector space defined by U ′ m,m ′ := U m,m ′ √ D Σ p , where U m,m ′ is introduced in the proof of Lemma 8.2 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. This new space allows to handle the computation with the canonical inner product in the space of matrices. Let B (2) m 2 ,m ′2 be the unit ball of U ′ m,m ′ with respect to the canonical inner product. If R belongs to U m,m ′ , then R H ′ = R √ D Σ /p F , where . F stands for the Frobenius norm. Z m ′ = sup R∈B H ′ m 2 ,m ′2 1 p 2 tr RD Σ (YY * -I p 2 ) = sup R∈B (2) m 2 ,m ′2 tr R √ D Σ p YY * -I p 2 (11) = Π U ′ m,m ′ √ D Σ p YY * -I p 2 F , where Π U ′ m,m ′ refers to the orthogonal projection with respect to the canonical inner product onto the space U ′ m,m ′ . Let F 1 , . . . , F d m 2 ,m ′2 de- note an orthonormal basis of U ′ m,m ′ . E(Z 2 m ′ ) = d m 2 ,m ′2 i=1 E tr 2 F i D Σ p 2 YY * -I p 2 = d m 2 ,m ′2 i=1 E p 2 j=1 F i [j,j] √ D Σ [j,j] p (YY * [j,j] -1) 2 = d m 2 ,m ′2 i=1 2 np 2 tr(F i D Σ F i ) ≤ d m 2 ,m ′2 i=1 2ϕ max (D Σ ) np 2 = 2d m 2 ,m ′2 ϕ max (Σ) np 2 . Applying Cauchy-Schwarz inequality, it follows that E(Z m ′ ) ≤ 2d m 2 ,m ′2 ϕ max (Σ) np 2 . ( 12 ) 2. Using the identity (11), the quantity B m ′ equals B m ′ = 2 n sup R∈B (2) m 2 ,m ′2 ϕ max R √ D Σ p . As the operator norm is under-multiplicative and as it dominates the Frobenius norm, we get the following bound B m ′ ≤ 2 ϕ max (Σ) np . (13) 3. Let us turn to bounding the quantity E(W m ′ ). Again, by introducing the ball B (2) m 2 ,m ′2 , we get W m ′ = 4 n sup R∈B H ′ m 2 ,m ′2 1 p 2 tr RYY * D Σ R ≤ 4ϕ max (Σ) np 2 sup R∈B (2) m 2 ,m ′2 tr RYY * R ≤ 4ϕ max (Σ) np 2 1 + sup R∈B (2) m 2 ,m ′2 tr R YY * -I p 2 R . Let F 1 , . . . F d m 2 ,m ′2 an orthonormal basis of U ′ m,m ′ and let λ be a vector in R d m 2 ,m ′2 . We write λ 2 for its L 2 norm. E sup R∈B (2) m 2 ,m ′2 tr R YY * -I p 2 R 2 = E sup λ 2 ≤1 d m 2 ,m ′2 i,j=1 λ i λ j tr F i F j (YY * /n -I p 2 ) 2 ≤ d m 2 ,m ′2 i,j=1 E tr F i F j (YY * /n -I p 2 ) 2 . The second inequality is a consequence of Cauchy-Schwarz inequality in R (d m 2 ,m ′2 ) 2 since the l 2 norm of the vector (λ i λ j ) 1≤i,j≤d m 2 ,m ′2 ∈ R d 2 m 2 ,m ′2 is bounded by 1. Since the matrices F i are diagonal, we get E sup R∈B (2) m 2 ,m ′2 tr [R(YY * /n -I)R] 2 ≤ 2 n d m 2 ,m ′2 i,j=1 F i F j 2 2 . It remains to bound the norm of the products F i F j for any i, j between 1 and d m 2 ,m ′2 . d m 2 ,m ′2 i,j=1 F i F j 2 2 = d m 2 ,m ′2 i,j=1 p 2 k=1 F i [k,k] 2 F j [k,k] 2 = p 2 k=1   d m 2 ,m ′2 i=1 F i [k,k] 2   2 . For any k ∈ {1, . . . , p 2 }, d m 2 ,m ′2 i=1 F i [k,k] 2 ≤ 1 since (F 1 , . . . , F d m 2 ,m ′2 ) form an orthonormal family. Hence, we get d m 2 ,m ′2 i,j=1 F i F j 2 2 ≤ p 2 k=1 d m 2 ,m ′2 i=1 F i [k,k] 2 = d m 2 ,m ′2 . All in all, we have proved that E(W m ′ ) ≤ 4ϕ max (Σ) np 2 1 + 2d m 2 ,m ′2 n . 
(14) Gathering these three bounds and applying Proposition 8.1 allows to obtain the following deviation inequality: P Z m ′ ≥ 2ϕmax(Σ) n 1 + α/2 d m 2 ,m ′2 + ξ ≤ exp - √ 1+α/2-1 √ d m 2 ,m ′2 +ξ 2 2L1(1+ √ 2d m 2 ,m ′2 /n) √ n √ 1+α/2-1 √ d m 2 ,m ′2 +ξ √ 2L2 ≤ exp - ω 2 m,m ′ 2L1(1+ √ 2d m 2 ,m ′2 /n) √ nω m,m ′ √ 2L2 - ξω m,m ′ L1[1+ √ 2d m 2 ,m ′2 /n] √ nξ √ 2L2 , where ω m,m ′ = 1 + α/2 -1 d m 2 ,m ′2 . As n and d m 2 ,m ′2 are larger than one, there exists a universal constant L ′ 2 such that ( 1 + α/2 -1) 2 d m 2 ,m ′2 2L 1 1 + 2d m 2 ,m ′2 /n √ n( 1 + α/2 -1) d m 2 ,m ′2 √ 2L 2 ≥ 4L ′ 2 d m 2 ,m ′2 1 + α/2 -1 2 ∧ 1 + α/2 -1 . Since the vector space U m,m ′ contains all the matrices D(θ ′ ) with θ ′ belonging to m ′ , d m 2 ,m ′2 is larger than d m ′ . Besides, by concavity of the square root function, it holds that 1 + α/2 -1 ≥ α[4 1 + α/2] -1 . Setting L ′ 1 := [4L 1 (1 + √ 2)] -1 ∧ [ √ 2L 2 ] -1 and arguing as previously leads to ξ( 1 + α/2 -1) d m 2 ,m ′2 L 1 1 + 2d m 2 ,m ′2 /n √ nξ √ 2L 2 ≥ L ′ 1 ξ α 1 + α/2 ∧ √ n . Gathering these two inequalities allows us to conclude that P Z m ′ ≥ 2ϕmax(Σ) n (1 + α/2) d m 2 ,m ′2 + ξ ≤ exp -L ′ 2 d m ′ α 1 + α/2 ∧ α 2 1 + α/2 -L ′ 1 ξ α 1 + α/2 ∧ √ n . Proof of Lemma 8.4 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. The approach falls in two parts. First, we relate the dimensions d m and d m 2 to the number of nodes of the torus Λ that are closer than r m or 2r m to the origin (0, 0). We recall that the quantity r m is introduced in Definition 2.1 of [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. Second, we compute a nonasymptotic upper bound of the number of points in Z 2 that lie in the disc of radius r. This second step is quite tedious and will only give the main arguments. Let m be a model of the collection M 1 . By definition, m is the set of points lying in the disc of radius r m centered on (0, 0). Hence, Θ m = vect {Ψ i,j , (i, j) ∈ m} , where the matrices Ψ i,j are defined by Eq. ( 14) in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. As Ψ i,j = Ψ -i,-j , the dimension d m of Θ m is exactly the number of orbits of m under the action of the central symmetry s. As d m 2 is defined as the dimension of the space U m , it also corresponds to the dimension of the space which is clearly in one to one correspondence with U m . Straightforward computations lead to the following identity: vect {C(θ), θ ∈ Θ m } + vect C(θ) 2 , θ ∈ Θ m , (15) C(Ψ i1,j1 )C(Ψ i2,j2 ) = C(Ψ i1+i2,j1+j2 ) [1 + s i1+i2,j1+j2 ] + C(Ψ i1-i2,j1-j2 ) [1 + s i1-i2,j1-j2 ]) , where s x,y is the indicator function of x = -x and y = -y in the torus Λ. Combining this property with the definition of Θ m , we embed the space (15) in the space vect {C (Ψ i1+i2,j1+j2 ) , (i 1 , j 1 ), (i 2 , j 2 ) ∈ m ∪ {(0, 0)}} , and this last space is in one to one correspondence with vect {Ψ i1+i2,j1+j2 , (i 1 , j 1 ), (i 2 , j 2 ) ∈ m ∪ {(0, 0)}} . (16) In the sequel, N (m) stands for the set {(i 1 + i 2 , j 1 + j 2 ), (i 1 , j 1 ), (i 2 , j 2 ) ∈ m ∪ {(0, 0)}} . Thus, the dimension d m 2 is smaller or equal to the number of orbits of N (m) under the action of the symmetry s. To conclude, we have to compare the number of orbits in m and the number of orbits in N (m). We distinguish two cases depending whether 2r m + 1 ≤ p or 2r m + 1 > p. First, we assume that 2r m + 1 ≤ p. 
For such values the disc of radius r m centered on the points (0, 0) in not overlapping itself on the torus except on a set of null Lebesgue measure. In the sequel, ⌊x⌋ refers to the largest integer smaller than x. We represent the orbit space of m as in Figure 1. To any of these points, we associate a square of size 1. If we add 2 + 2⌊r m ⌋ squares to the d m first squares, we remark that the half disc centered on (0, 0) and with length r m is contained in the reunion of these squares. Then, we get d m + 2 + 2⌊r m ⌋ ≥ πr 2 m 2 . ( 17 ) The points in N (m) are closer than 2r m from the origin. Consequently, all the squares associated to representants of N (m) are included in the disc of radius 2r m + √ 2. d m 2 + 2 + 2⌊2r m ⌋ ≤ π 2 2r m + √ 2 2 . Combining these two inequalities, we are able to upper bound d m 2 2 + 2⌊2r m ⌋ + d m 2 ≤ 4 1 + √ 2 2r m 2 (d m + 1 + 2⌊r m ⌋) , d m 2 ≤ 4 1 + √ 2 2r m 2 d m + 4 1 + √ 2 2r m 2 (1 + 2⌊r m ⌋) . Applying again inequality (17), we upper bound r m : r m ≤ 2 π 1 + 1 + π 2 (1 + d m ) . Gathering these two last bounds yields d m 2 ≤ 4 1 + √ 2 2r m 2 1 + 1 d m 1 + 4 π 1 + 1 + π 2 (1 + d m ) d m . This upper bound is equivalent to 4d m , when d m goes to infinity. Computing the ratio d m 2 /d m for every model m of small dimension allows to conclude. Let us turn to the case 2r m + 1 > p. Suppose that p is larger or equal to 9. The lower bound (17) does not necessarily hold anymore. Indeed, the disc is overlapping with itself because of toroidal effects. Nevertheless, we obtain a similar lower bound by replacing r m by (p -1)/2: d m + 2 + 2⌊ p -1 2 ⌋ ≥ π(p -1) 2 8 . The number of orbits of Λ under the action of the symmetry s is ( p 2 + 1)/2 if p is odd and [(p + 1) 2 -1]/2 if p is even. It follows that d m 2 ≤ [(p + 1) 2 -1]/2. Gathering these two bounds, we get d m 2 d m ≤ (p + 1) 2 π(p -1) 2 /4 -2(p + 1) . This last quantity is smaller than 4 for any p ≥ 9. An exhaustive computation of the ratios when p < 9 allows to conclude. Let us turn to the isotropic case. Arguing as previously, we observe that the dimension d iso m is the number of orbits of the set m under the action of the group G introduced in in [6] Sect.1.1 whereas d m 2 is smaller or equal to the number of orbits of N iso (m) under the action of G. As for anisotropic models, we choose represent these orbits on the torus and associate squares of size 1 (see Figure 2). Assuming that r m < (p -1)/2, we bound d m and d m 2 . d m + 1 ≥ 1 8 πr 2 m + 1 2 ⌊ √ 2r m 2 ⌋ , d m 2 ≤ 4 1 + √ 2 2r m 2 1 8 πr 2 m + 1 2 ⌊ √ 2r m ⌋ . Gathering these two inequalities, we get d m 2 ≤ 4 1 + √ 2 2r m 2 d m . As a consequence, d m 2 is smaller than 4d m when d m goes to infinity. As previously, computing the ratio d m 2 /d m for models m of small dimension allows to conclude. The case r m > (p -1)/2 is handled as for the anisotropic case. Proofs of the minimax bounds Proof of Lemma 8.5 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. This lower bound is based on an application of Fano's approach. See [START_REF] Yu | Assouad, Fano, and Le Cam[END_REF] for a review of this method and comparisons with Le Cam's and Assouad's Lemma. The proof follows three main steps: First, we upper bound the Kullback-Leibler entropy between distributions corresponding to θ 1 and θ 2 in the hypercube. Second, we find a set of points in the hypercube well separated with respect to the Hamming distance. Finally, we conclude by applying Birgé's version of Fano's lemma. Lemma 3.1. 
The Kullback-Leibler entropy between two mean zero-Gaussian vectors of size p 2 with precision matrices I p 2 -C(θ 1 ) /σ 2 and I p 2 -C(θ 2 ) /σ 2 equals K(θ 1 , θ 2 ) = 1/2 log |I p 2 -C(θ 1 )| |I p 2 -C(θ 2 )| + tr I p 2 -C(θ 2 ) I p 2 -C(θ 1 ) -1 -p 2 , where for any square matrix A, |A| refers to the determinant of A. This statement is classical and its proof is omitted. The matrices (I p 2 -C(θ 1 )) and (I p 2 -C(θ 2 )) are diagonalizable in the same basis since they are symmetric block circulant (Lemma A.1 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]). Transforming vectors of size p 2 into p × p matrices, we respectively define λ 1 and λ 2 as the p × p matrices of eigenvalues of (I p 2 -C(θ 1 )) and (I p 2 -C(θ 2 )). It follows that K(θ 1 , θ 2 ) = 1/2 1≤i,j≤p λ 2 [i,j] λ 1 [i,j] -log λ 2 [i,j] λ 1 [i,j] -1 . For any x > 0, the following inequality holds x -1 -log(x) ≤ 9 64 x - 1 x 2 . It is easy to establish by studying the derivative of corresponding functions. As a consequence, λ 2 [i,j] λ 1 [i,j] -log λ 2 [i,j] λ 1 [i,j] -1 ≤ 9 64 λ 2 [i,j] λ 1 [i,j] - λ 1 [i,j] λ 2 [i,j] 2 ≤ 9 64 1 λ 1 [i,j] + 1 λ 2 [i,j] 2 (λ 1 [i,j] -λ 2 [i,j]) 2 . ( 18 ) Let us first consider the anisotropic case. Let m be a model in M 1 and let θ ′ belong Θ m ∩ B 1 (0 p , 1). We also consider a positive radius r such that (1θ ′ 1 -2rd m ) is positive. For any θ 1 , θ 2 in C m (θ ′ , r) the matrices (I p 2 -C(θ 1 )) and (I p 2 -C(θ 2 )) are diagonally dominant and their eigenvalues λ 1 [i,j] and λ 2 [i,j] are larger than 1 -θ ′ 1 -2rd m . K(θ 1 , θ 2 ) ≤ 9 16(1 -θ ′ 1 -2rd m ) 2 1≤i,j≤p (λ 1 [i,j] -λ 2 [i,j]) 2 ≤ 9 16(1 -θ ′ 1 -2rd m ) 2 C(θ 1 ) -C(θ 2 ) 2 F ≤ 9d m r 2 p 2 8(1 -θ ′ 1 -2rd m ) 2 . ( 19 ) We recall that . F refers to the Frobenius norm in the space of matrices. Let us state Birgé's version of Fano's lemma [START_REF] Birgé | A new lower bound for multiple hypothesis testing[END_REF] and a combinatorial argument known under the name of Varshamov-Gilbert's lemma. These two lemma are taken from [START_REF] Massart | Concentration inequalities and model selection[END_REF] and respectively correspond to Corollary 2.18 and Lemma 4.7. Lemma 3.2. (Birgé's lemma) Let (S, d) be some pseudo-metric space and {P s , s ∈ S} be some statistical model. Let κ denote some absolute constant smaller than one. Then for any estimator s and any finite subset T of S, setting δ = min s,t∈T,s =t d(s, t), provided that max s,t∈T K(P s , P t ) ≤ κ log |T |, the following lower bound holds for every p ≥ 1, sup s∈S E s [d p (s, s)] ≥ 2 -p δ p (1 -κ) . E θ d H θ, θ ≥ d m 8 (1 -κ) , (20) provided that 9d m r 2 p 2 n 8(1 -θ ′ 1 -2rd m ) 2 ≤ κd m 8 . (21) Let us express (20) in terms of the Frobenius . F norm. sup θ∈Cm(θ ′ ,r) E θ C( θ) -C(θ) 2 F ≥ d m r 2 p 2 4 (1 -κ) . Since for every θ in the hypercube, σ -2 (I p 2 -C(θ)) is diagonally dominant, its largest eigenvalue is smaller than 2σ -2 . The loss function l( θ, θ) equals σ 2 /p 2 tr{[C( θ) -C(θ)](I -C(θ)) -1 [C( θ) -C(θ)]}. It follows that sup θ∈Cm(θ ′ ,r) E θ l θ, θ ≥ σ 2 d m r 2 8 (1 -κ) . (22) Condition (21) is equivalent to r 2 (1 -θ ′ 1 -2rd m ) -2 ≤ κ/(9p 2 n). If we assume that r 2 ≤ κ (1 -θ ′ 1 ) 2 18p 2 n , ( 23 ) then 1-θ ′ 1 -2rd m ≥ (1 -θ ′ 1 ) 1 -2d m κ/(18np E θ l θ, θ ≥ inf θ sup θ∈Cm θ ′ ,r∧(1-θ ′ 1) κ 18p 2 n E θ l θ, θ ≥ L r 2 ∧ (1 -θ ′ 1 ) 2 np 2 d m σ 2 . One handles models of dimension d m between 1.5( √ 2 -1) np 2 /κ and √ np by changing the constant L in the last lower bound. 
Let us turn to sets of isotropic GMRFs. The proof is similar to the nonisotropic case, except for a few arguments. Let m belongs to the collection M 1 and let θ ′ be an element of Θ iso m ∩ B 1 (0 p , 1). Let r be such that 1θ ′ 1 -8d iso m is positive. If θ 1 and θ 2 belong to the hypercube C iso m (θ ′ , r), then K(θ 1 , θ 2 ) ≤ 9d m r 2 p 2 2(1 -θ ′ 1 -8rd iso m ) 2 . Applying Lemma 3.2 and 3.3, it follows that inf θ sup θ∈C iso m (θ ′ ,r) E θ d H θ, θ ≥ d iso m 8 (1 -κ) , provided that 4.5d m r 2 p 2 n(1 -θ ′ 1 -8rd iso m ) -2 ≤ κd iso m /8. As a consequence, inf θ sup θ∈C iso m (θ ′ ,r) E θ l θ, θ ≥ d iso m r 2 8 (1 -κ) , if r 2 1 -θ ′ 1 -8rd iso m -2 ≤ κ(36p 2 n) -1 . We conclude by arguing as in the isotropic case. Proof of lemma 8.6 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. Let m be a model in M 1 , r be a positive number smaller than 1/(4d m ), and θ be an element of the convex hull of C m (0 p , r). The covariance matrix of the vector X v is Σ = σ 2 [I -C(θ)] -1 . Since the field X is stationary, Var θ (X[0,0]) equals any diagonal element of Σ. In particular, Var θ (X[0,0]) corresponds to the mean of the eigenvalues of Σ. The matrix (I -C(θ)) is block circulant. As in the proof of Lemma 20, we note λ the p × p matrix of the eigenvalues of (I p 2 -C(θ)). By Lemma A.1 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF], λ[i,j] = 1 + (k,l)∈Λ θ[k,l] cos 2π ik p + jl p , for any 1 ≤ i, j ≤ p. Since θ belongs to the convex hull of C m (0 p , r), θ[k,l] is zero if (k, l) / ∈ m and |θ[k,l]| ≤ r if (k, l) ∈ m. Thus (k,l)∈Λ |θ[k,l] | is smaller than 1/2. Applying Taylor-Lagrange inequality, we get 1 1 + x ≤ 1 -x + x 2 (1 -|x|) 3 , for any x between -1 and 1. It follows that λ[i,j] -1 ≤ 1 - k,l∈Λ θ[k,l] cos 2π ik p + jl p + 8    k,l∈Λ θ[k,l] cos 2π ik p + jl p    2 . Summing this inequality for all (i, j) ∈ {1, . . . , p} 2 , the first order term turns out to be tr[C(θ)]/p 2 which is zero whereas the second term equals 8tr[C(θ) 2 ]/p 2 . Since there are less than 2d m non-zero terms on each line of the matrix C(θ), its Frobenius norm is smaller than 2d m p 2 r 2 . Consequently, we obtain Var θ (X[0,0]) ≤ σ 2 1 + 16d m r 2 . Proof of Lemma 8.7 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. This property seems straightforward but the proof is a bit tedious. Let i be a positive integer smaller than Card(M 1 ). By definition of the radius r m in Equation ( 10) in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF], the model m i+1 is the set of nodes in Λ\{(0, 0)} at a distance smaller or equal to r mi+1 from (0, 0), whereas the model m i only contains the points in Λ \ {(0, 0)} at a distance strictly smaller than r mi+1 from the origin. Let us first assume that 2r mi+1 ≤ p. In such a case, the disc centered on (0, 0) with radius r mi+1 does not overlap with itself on the torus Λ. To any node in the neighborhood m i+1 and to the node (0, 0), we associate the square of size 1 centered on it. All these squares do not overlap and are included in the disc of radius r mi+1 + √ 2/2. Hence, we get the upper bound 2d mi+1 + 1 ≤ π(r mi+1 + √ 2/2) 2 . Similarly, the disc of radius r mi+1 -√ 2/2 is included in the union of the squares associated to the nodes m i ∪ {0, 0}. It follows that 2d mi + 1 is larger or equal to π r mi+1 -√ 2/2 2 . 
Gathering these two inequalities, we obtain d mi+1 d mi ≤ r mi+1 + √ 2/2 2 -1 r mi+1 - √ 2/2 2 -1 , if r mi+1 is larger than 1 + √ 2/2. If r mi+1 larger than 5, this upper bound is smaller than two. An exhaustive computation for models of small dimension allows to conclude. If 2r mi+1 ≥ p and 2r mi < p, then the preceding lower bound of d mi and the preceding upper bound of d mi+1 still hold. Finally, let us assume that 2r mi ≥ p. Arguing as previously, we conclude that 2d mi + 1 ≥ π(p/2 -√ 2/2) 2 . The largest dimension of a model m ∈ M 1 is (p 2 -1)/2 if p is odd and ((p + 1) 2 -3)/2 if p is even. Thus, d mi+1 ≤ [(p + 1) 2 -3]/2. Gathering these two bounds yields d mi+1 d mi ≤ 4 (p + 1) 2 -3 p - √ 2 2 , which is smaller than 2 if p is larger than 10. Exhaustive computations for small p allow to conclude. Proof of Proposition 6.7 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. This result derives from the upper bound of the risk of θ ρ1 stated in Theorem 3.1 and the minimax lower bound stated in Proposition 6.6 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. Let E(a) be a pseudo-ellipsoid that satisfies Assumption (H a ) and such that a 2 1 ≥ 1/(np 2 ). For any θ in E(a) ∩ B 1 (0 p , 1) ∩ U(ρ 2 ), the penalty term satisfies pen(m) = Kσ 2 ρ 2 1 ρ 2 d m /np 2 is larger than Kd m ϕ max (Σ)/np 2 . Applying Theo-rem3.1, we upper bound the risk θ ρ1 E θ l θ ρ1 , θ ≤ L 1 (K) inf m∈M1 [l(θ m,ρ1 , θ) + pen(m)] + L 2 (K)ρ 2 σ 2 np 2 , for any θ ∈ E(a) ∩ B 1 (0 p , 1) ∩ U(ρ 2 ). It follows that sup θ∈E(a)∩B1(0p,1)∩U (ρ2) E θ l θ ρ1 , θ ≤ L(K) inf m∈M1 , dm>0 l(θ m,ρ1 , θ) + ρ 2 1 ρ 2 σ 2 d m np 2 . Let i be a positive integer smaller or equal than Card(M 1 ). We know from Section 4.1 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF] that the bias l(θ mi , θ) of the model m i equals Var(X[0,0]|X mi )σ 2 . Since θ belongs to the set E(a) ∩ B 1 (0 p , 1), the bias term is smaller or equal to a 2 i+1 with the convention a 2 Card(M1)+1 = 0. Hence , the previous upper bound becomes E θ l θ ρ1 , θ ≤ L(K) inf 1≤i≤Card(M1) a 2 i+1 + ρ 2 1 ρ 2 σ 2 d mi np 2 ≤ L(K, ρ 1 , ρ 2 ) inf 1≤i≤Card(M1) a 2 i+1 + σ 2 d mi np 2 . ( 24 ) Applying Proposition 6.6 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF] to the set E(a) ∩ B 1 (0 p , 1) ∩ U(2), we get inf θ sup θ∈E(a)∩B1(0p,1)∩U (ρ2) E θ l θ, θ ≥ inf θ sup θ∈E(a)∩B1(0p,1)∩U (2) E θ l θ, θ ≥ L sup 1≤i≤Card(M1) a 2 i ∧ σ 2 d mi np 2 . Let us define i * by i * := sup 1 ≤ i ≤ Card(M 1 ) , a 2 i ≥ σ 2 d mi np 2 , with the convention sup ∅ = 0. Since a 2 1 ≥ σ 2 /np 2 , i * is larger or equal to one. It follows that inf θ sup θ∈E(a)∩B1(0p,η) E θ l θ, θ ≥ L 2 a 2 i * +1 ∨ σ 2 d m i * np 2 . Meanwhile, the upper bound (24) on the risk of θ ρ1 becomes E θ l θ ρ1 , θ ≤ L(K, ρ 1 , ρ 2 ) a 2 i * +1 + σ 2 d m i * np 2 ≤ 2L(K, ρ 1 , ρ 2 ) a 2 i * +1 ∨ σ 2 d m i * np 2 , which allows to conclude. Proof of the asymptotic risks bounds Proof of Corollary 4.6 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. For the sake of simplicity, we assume that for any node (i, j) ∈ m, the nodes (i, j) and (-i, -j) are different in Λ. If this is not the case, we only have to slightly modify the proof in order to take account that Ψ i,j 2 F may equal one. The matrix V is the covariance of the vector of size d m X i1,j1 + X -i1,-j1 , . . . 
, X i dm ,j dm + X -i dm ,-j dm . (25) Since the matrix Σ of X v is positive, V is also positive. Moreover, its largest eigenvalue is larger than 2ϕ max (Σ). Let us assume first the θ belongs to Θ + m and that Assumption (H 1 ) is fulfilled. By the first result of Proposition 4.4 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF], lim n→+∞ np 2 E l θ m,ρ1 , θ = 2σ 4 tr IL m V -1 ≥ σ 4 ϕ max (Σ) tr[IL m ] = 2σ 4 d m ϕ max (Σ) , which corresponds to the first lower bound (30) in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. Let us turn to the second result. We now assume that θ satisfies Assumption (H 2 ). By the identity (28) of Proposition 4.4 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF], we only have to lower bound the quantity tr V W -1 . tr V -1 W ≥ ϕ max (V ) -1 tr [W ] ≥ 1 2ϕ max (Σ) tr[W ] . Since the matrix Σ -1 = σ -2 I p 2 -C(θ) is diagonally dominant, its smallest eigenvalue is larger than σ -2 (1-θ 1 ). The matrix I p 2 -C(θ m,ρ1 ) 2 I p 2 -C(θ) is symmetric positive. It follows that W is also symmetric positive definite. Hence, we get tr V -1 W (26) ≥ σ -2 2 [1 -θ 1 ] dm k=1 tr C(Ψ i k ,j k ) 2 I p 2 -C(θ m,ρ1 ) 2 I p 2 -C(θ) -2 p 2 . The largest eigenvalue of I p 2 -C(θ) is smaller than 2 and the smallest eigenvalue of I p 2 -C(θ m,ρ1 ) is larger than 1-θ m,ρ1 1 . By Lemma A.1 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF], these two matrices are jointly diagonalizable and the smallest eigenvalue of I p 2 -C(θ m,ρ1 ) 2 I p 2 -C(θ) -2 is therefore larger than (1θ m,ρ1 1 ) 2 /4. Gathering this lower bound with (26) yields tr V -1 W ≥ d m σ -2 2 [1 -θ 1 ] [1 -θ m,ρ1 1 ] 2 . Lemma 4.1 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF] states that θ m,ρ1 1 ≤ θ 1 . Combining these two lower bounds enables to conclude. Proof of Example 4.8 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. Lemma 4.1. For any θ is the space Θ +,iso m1 , the asymptotic variance term of θ iso m1,ρ1 equals lim n→+∞ np 2 E θ l θ iso m1,ρ1 , θ = 2σ 4 tr H 2 tr (H 2 Σ) . If θ belongs to Θ +,iso and also satisfies (H 2 ), then lim n→+∞ np 2 E θ l θ iso m1,ρ1 , θ iso m1,ρ1 = 2 tr (I -θ iso m1,ρ1 [1, 0]H)HΣ 2 tr(H 2 Σ) , (27) where the p 2 × p 2 matrix H is defined as H := C Ψ iso 1,0 . Proof of Lemma 4.1. Apply Proposition 4.4 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF] noting that V = tr[HΣH]/p 2 and W = tr (I -θ m iso 1 [1,0]H )HΣ 2 σ 4 p 2 . To prove the second result, we observe that Θ +,iso m1 equals Θ +,iso m1,2 . It is stated for instance in Table 2 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. Since the matrix θ belongs to Θ +,iso m1 , we may apply the second result of Lemma 4.1. Straightforward computations lead to tr(H 2 ) = C Ψ iso 1,0 2 F = 4p 2 and tr(H 2 Σ) = 4p 2 [Var(X[0,0]) + 2cov θ (X[0,0], X[1,1]) + cov θ (X[0,0], X[2,0])] . Since the field X is an isotropic GMRF with four nearest neighbors, X[0,0] = θ[1,0] (X[1,0] + X[-1,0] + X[0,1] + X[0,-1]) + ǫ[0,0] , where ǫ[0,0] is independent from every variable X[i,j] with (i, j) = 0. 
Multiplying this identity by X[1,0] and taking the expectation yields cov θ (X[0,0], X[1,0]) = θ[1,0] [Var (X[0,0]) + 2cov θ (X[0,0], X[1,1]) + cov θ (X[0,0], X[2,0])] . Hence, we obtain tr H 2 Σ = 4cov θ (X[0,0], X[1,0])/θ [START_REF] Billingsley | Probability and measure[END_REF]0] and tr(H 2 ) tr(H 2 Σ) = θ[1,0] cov θ (X[0,0], X[1,0]) , which concludes the first part of the proof. This second part is based on the spectral representation of the field X and follows arguments which come back to Moran [START_REF] Moran | A Gaussian Markovian process on a square lattice[END_REF]. We shall compute the limit of cov θ (X[0,0], X[1,0]) when the size of Λ goes to infinity. As the field X is stationary on Λ, we may diagonalize its covariance matrix Σ applying Lemma A.1 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. We note D Σ the corresponding diagonal matrix defined by Applying Lemma A.1 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF] to the matrix Σ -1 and noting that θ ∈ Θ iso,+ allows to get another expression of the eigenvalues of Σ D Σ [(i-1)p+j,(i-1)p+j] = σ 2 1 -2θ[1,0] cos 2πi p + cos 2πj p . We then combine these expression. By symmetry between i and j we get dxdy . cov θ (X[0,0], X[1,0]) = σ 2 This last elliptic integral is asymptotically equivalent to log 16[4(1 -4θ[1,0])] -1 when θ[1,0] → 1/4 as observed for instance by Moran [START_REF] Moran | A Gaussian Markovian process on a square lattice[END_REF]. We conclude by substituting this limit in expression (33) in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. Proof of Example 4.9 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. First, we compute [θ (p) ] iso m1 [START_REF] Billingsley | Probability and measure[END_REF]0]. By Lemma 4.1 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF], it minimizes the function γ(.) defined in (19) in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF] over the whole space Θ m iso 1 . We therefore obtain [θ (p) ] iso m1 [1,0] = tr [ΣH] tr [ΣH 2 ] . Once again, we apply Lemma A.1 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF] to simultaneously diagonalize the matrices H and Σ -1 . As previously, we note D Σ the corresponding diagonal matrix of Σ. D Σ [(i-1)p+j,(i-1)p+j] = σ 2 1 -2α cos 2π pi 4p + pj 4p + cos 2π -pi 4p + pj 4p = σ 2 1 -4α cos π i 2 cos π j 2 . Analogously, we compute the diagonal matrix D Ψ iso 1,0 . As each term of this sum is non-negative, we may only consider the coefficients i and j which are congruent to 0 modulo 4. tr(H np 2 R θ (p) θ (p) iso,ρ1 m1 , [θ (p) ] iso m1 ≥ Lσ 2 1 -4α . Miscellaneous Proof of Lemma 1.1 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. Let θ be a p × p matrix that satisfies condition (3) in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. For any 1 ≤ i 1 , i 2 ≤ p, we define the p × p submatrix C i1,i2 as C i1,i2 [j1,j2] := C(θ)[(i1-1)p+j1,(i2-1)p+j2] , for any 1 ≤ j 1 , j 2 ≤ p. For the sake of simplicity, the subscripts (i 1 , i 2 ) are taken modulo p. By definition of C(θ), it holds that C i1,i2 = C 0,i2-i1 for any 1 ≤ i 1 , i 2 ≤ p. Besides, the matrices C 0,i are circulant for any 1 ≤ i ≤ p. 
In short, the matrix C(θ) is of the form C(θ) =    C 0,1 C 0,2 • • • C 0,p . . . . . . . . . . . . C 0,p C 0,1 • • • C 0,p-1    , where the matrices C 0,i are circulant. Let (i 1 , i 2 , j 1 , j 2 ) be in {1, . . . , p} 4 . By definition, C(θ)[(i1-1)p+j1,(i2-1)p+j2] = θ[i2-i1,j2-j1] . Since the matrix θ satisfies condition (3) in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF], θ[i2-i1,j2-j1] = θ[i1-i2,j1-j2]. As a consequence, C(θ)[(i1-1)p+j1,(i2-1)p+j2] = C(θ)[(i2-1)p+j2,(i1-1)p+j1] and C(θ) is symmetric. Conversely, let B be a p 2 × p 2 symmetric block circulant matrix. Let us define the matrix θ of size p by θ[i,j] := B[1,(i-1)p+j] , for any 1 ≤ i, j ≤ p. Since the matrix B is block circulant, it follows that C(θ) = B. By definition, θ[i,j] = C(θ)[1,(i-1)p+j] and θ[-i,-j] = C(θ)[(i-1)p+j,1] for any integers 1 ≤ i, j ≤ p. Since the matrix B is symmetric, we conclude that θ[i,j] = θ[-i,-j]. Proof of Lemma 2.2 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. For any θ ′ ∈ Θ + , γ n,p (θ ′ ) is defined as γ n,p (θ ′ ) = 1 p 2 tr (I p 2 -C(θ ′ ))X v X v * (I p 2 -C(θ ′ )) . Applying Lemma A.1 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF], there exists an orthogonal matrix P that simultaneously diagonalizes Σ and any matrix C(θ ′ ). Let us define Y i := √ Σ -1 X i and D Σ := P ΣP * . Gathering these new notations yields γ n,p (θ ′ ) = 1 p 2 tr I p 2 -D(θ ′ ) D Σ YY * I p 2 -D(θ ′ ) , where the vectors Y i are independent standard Gaussian random vectors. Except YY * , every matrix involved in this last expression is diagonal. Besides, the diagonal matrix D Σ is positive since Σ is non-singular. Thus, tr (I p 2 -D(θ ′ ))D Σ YY * (I p 2 -D(θ ′ )) is almost surely a positive quadratic form on the vector space generated by I p 2 and D(Θ + ). Since the function D(.) is injective and linear on Θ + , it follows that γ n,p (.) is almost surely strictly convex on Θ + . Proof of Lemma 4.1 and Corollary 4.2 in [START_REF] Verzelen | Adaptive estimation of stationary Gaussian Markov random fields[END_REF]. The proof only uses the stationarity of the field X on Λ and the l 1 norm of θ. However, the computations are a bit cumbersome. Let θ be an element of Θ + . By standard Gaussian properties, the expectation of X[0,0] given the remaining covariates is E θ X[0,0]|X -{0,0} = (i,j)∈Λ\(0,0) θ[i,j]X[i,j] . By assumption (H 2 ), the l 1 norm of θ is smaller than one. We shall prove by backward induction that for any subset A of Λ\{(0, 0)} the matrix θ A uniquely defined by E θ (X[0,0]|X A ) = (i,j)∈A θ A [i,j]X [i,j] and θ A [i,j] = 0 for any (i, j) / ∈ A satisfies θ A 1 ≤ θ 1 . The property is clearly true if A = Λ\{(0, 0)}. Suppose we have proved it for any set of cardinality q larger than one. Let A be a subset of Λ\{(0, 0)} of cardinality q -1 and (i, j) be an element of Λ\(A ∪ {(0, 0)}). Let us derive the expectation of X[0,0] conditionally to X A from the expectation of X[0,0] conditionally to X A∪{(i,j)} . E θ (X[0,0]|X A ) = E θ E(X[0,0]|X A )|X A∪{(i,j)} = (k,l)∈A θ A∪{(i,j)} [k,l]X [k,l] + θ A∪{(i,j)} [i,j]E θ [X[i,j]|X A ] . ( 28 ) Let us take the conditional expectation of X[i,j] with respect to X A∪{(0,0)} . Since the field X is stationary on Λ and by the induction hypothesis, the unique matrix θ A∪{(0,0)} (i,j) defined by since θ A∪{i,j} [i,j]θ i,j A∪{0,0} [0,0] < 1. 
Taking the expectation conditionally on X_A of this previous expression leads to
\[
\mathbb{E}_\theta\big(X[i,j]\,\big|\,X_A\big) = \sum_{(k,l)\in A}\theta^{(i,j)}_{A\cup\{(0,0)\}}[k,l]\,X[k,l] + \theta^{(i,j)}_{A\cup\{(0,0)\}}[0,0]\,\mathbb{E}_\theta\big(X[0,0]\,\big|\,X_A\big). \tag{29}
\]
Gathering identities (28) and (29) yields
\[
\mathbb{E}_\theta\big(X[0,0]\,\big|\,X_A\big) = \sum_{(k,l)\in A}\frac{\theta_{A\cup\{(i,j)\}}[k,l] + \theta_{A\cup\{(i,j)\}}[i,j]\,\theta^{(i,j)}_{A\cup\{(0,0)\}}[k,l]}{1 - \theta_{A\cup\{(i,j)\}}[i,j]\,\theta^{(i,j)}_{A\cup\{(0,0)\}}[0,0]}\;X[k,l],
\]
since $\theta_{A\cup\{(i,j)\}}[i,j]\,\theta^{(i,j)}_{A\cup\{(0,0)\}}[0,0] < 1$. Then, we upper bound the ℓ₁ norm of $\theta_A$, using that $\|\theta_{A\cup\{(i,j)\}}\|_1$ and $\|\theta^{(i,j)}_{A\cup\{(0,0)\}}\|_1$ are smaller than or equal to $\|\theta\|_1$:
\[
\|\theta_A\|_1 \le \frac{\sum_{(k,l)\in A}\big|\theta_{A\cup\{(i,j)\}}[k,l]\big| + \big|\theta_{A\cup\{(i,j)\}}[i,j]\big|\sum_{(k,l)\in A}\big|\theta^{(i,j)}_{A\cup\{(0,0)\}}[k,l]\big|}{1 - \big|\theta_{A\cup\{(i,j)\}}[i,j]\big|\,\big|\theta^{(i,j)}_{A\cup\{(0,0)\}}[0,0]\big|}
\le \frac{\|\theta\|_1\big(1+\big|\theta_{A\cup\{(i,j)\}}[i,j]\big|\big) - \big|\theta_{A\cup\{(i,j)\}}[i,j]\big|\big(1+\big|\theta^{(i,j)}_{A\cup\{(0,0)\}}[0,0]\big|\big)}{1 - \big|\theta_{A\cup\{(i,j)\}}[i,j]\big|\,\big|\theta^{(i,j)}_{A\cup\{(0,0)\}}[0,0]\big|}
= \frac{\|\theta\|_1 + \big|\theta_{A\cup\{(i,j)\}}[i,j]\big|\big(\|\theta\|_1-1\big) - \big|\theta_{A\cup\{(i,j)\}}[i,j]\big|\,\big|\theta^{(i,j)}_{A\cup\{(0,0)\}}[0,0]\big|}{1 - \big|\theta_{A\cup\{(i,j)\}}[i,j]\big|\,\big|\theta^{(i,j)}_{A\cup\{(0,0)\}}[0,0]\big|}.
\]
Since $\|\theta\|_1$ is smaller than one, it follows that $\|\theta_A\|_1 \le \|\theta\|_1$. Let m be a model in the collection M₁. Since m stands for a set of neighbors of (0,0), we may define $\theta_m$ as above. It follows that $\|\theta_m\|_1 \le \|\theta\|_1$. Since the field X is stationary on the torus, X follows the same distribution as the field $X^s$ defined by $X^s[i,j] = X[-i,-j]$. By uniqueness of $\theta_m$, we obtain that $\theta_m[i,j] = \theta_m[-i,-j]$. Thus, $\theta_m$ belongs to the space $\Theta_m$. Moreover, $\theta_m$ minimizes the function γ(.) on $\Theta_m$. Since the ℓ₁ norm of $\theta_m$ is smaller than one, $\theta_m$ belongs to $\Theta^+_{m,2}$. The matrices $\theta_m$ and $\theta_{m,\rho_1}$ are therefore equal, which concludes the proof in the non-isotropic case.

Let us now turn to the isotropic case. Let θ belong to $\Theta^{iso,+}$ and let m be a model in M₁. As previously, the matrix $\theta_m$ satisfies $\|\theta_m\|_1 \le \|\theta\|_1$. Since the distribution of X is invariant under the action of the group G, $\theta_m$ belongs to $\Theta^{iso}_m$. Since $\|\theta_m\|_1 \le \|\theta\|_1$, $\theta_m$ lies in $\Theta^{+,iso}_{m,2}$. It follows that $\theta_m = \theta^{iso}_{m,\rho_1}$.

Proof of Corollary 4.3 in [6]. Let θ be a matrix in $\Theta^+$ such that (H₂) holds and let m be a model in M₁. We decompose $\gamma(\widetilde\theta_{m,\rho_1})$ using the conditional expectation of X[0,0].
Lemma 1.1 Let T be a supremum of Rademacher chaos indexed by $I_N$ of the form $T := \sup_{t\in\mathcal T}\sum_{\{i,k\}} t^{\{i,k\}}\varepsilon_i\varepsilon_k$. For a matrix R, the associated coefficients are $t_R^{\{i,k\}} := (1+\delta_{i,k})\,R[i,k]/n$ and $t_R^{\emptyset} := -\mathrm{tr}(R)$, where $\delta_{i,j}$ is the indicator function of i = j. In order to apply Lemma 1.2 with N = nr and $\mathcal T = \{t_R\,|\,R\in\mathcal F\}$, we have to work out the quantities D and E.

Figure 1. The black dots represent the orbit space of m and the white dots represent the remaining points of the orbit space of N(m).

Figure 2. The black dots represent the orbit space of m under the action of G and the white dots represent the remaining points of the orbit space of $N^{iso}(m)$.

Lemma 3.3 (Varshamov-Gilbert's lemma) Let $\{0,1\}^d$ be equipped with the Hamming distance $d_H$. There exists some subset Φ of $\{0,1\}^d$ with the following properties: $d_H(\phi,\phi') > d/4$ for every $(\phi,\phi')\in\Phi^2$ with $\phi\ne\phi'$, and $\log|\Phi| \ge d/8$.

Applying Lemma 3.2 with the Hamming distance $d_H$ and the set Φ introduced in Lemma 3.3 yields the desired bound on $\sup_{\theta\in\mathcal C_m(\theta',r)}$.
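As a numerical sanity check of the block-circulant diagonalization used repeatedly above, one can compare the covariance $\mathrm{cov}_\theta(X[0,0],X[1,0])$ obtained by direct matrix inversion with the eigenvalue sum. The following Python sketch is not part of the paper; p, σ² and θ[1,0] are arbitrary illustrative values (θ[1,0] < 1/4 keeps $I - C(\theta)$ positive definite), and indices are 0-based where the paper's are 1-based.

import numpy as np

p, sigma2, theta = 16, 1.0, 0.2  # illustrative values; theta < 1/4

# Build C(theta): nearest-neighbour weights on the p x p torus, site (i, j) -> row i*p + j.
C = np.zeros((p * p, p * p))
for i in range(p):
    for j in range(p):
        for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            C[i * p + j, ((i + di) % p) * p + ((j + dj) % p)] += theta

Sigma = sigma2 * np.linalg.inv(np.eye(p * p) - C)
cov_direct = Sigma[0, 1 * p + 0]  # cov(X[0,0], X[1,0]) by inversion

# Spectral formula: eigenvalues sigma^2 / (1 - 2 theta (cos(2 pi i/p) + cos(2 pi j/p))),
# combined with the cosine weights as in the displayed identity above.
ii, jj = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
lam = sigma2 / (1 - 2 * theta * (np.cos(2 * np.pi * ii / p) + np.cos(2 * np.pi * jj / p)))
cov_spectral = (lam * np.cos(2 * np.pi * ii / p)).sum() / p**2

print(cov_direct, cov_spectral)  # the two values coincide up to rounding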
04118337
en
[ "info" ]
2024/03/04 16:41:26
2019
https://hal.science/hal-04118337/file/CIKM.pdf
Marie Le Guilly email: [email protected] Jean-Marc Petit email: [email protected] Ihab F Ilyas email: [email protected]

ExplIQuE: Interactive Databases Exploration with SQL

Keywords: SQL, extensions, imprecise queries

To help database users who have just started learning SQL or are not familiar with their database, we propose ExplIQuE, an exploration interface with query extensions. Its purpose is to assist users in diving smoothly into data exploration, and in expressing imprecise questions over their data. Indeed, such situations are more and more common with the increasing desire of users to get value out of their data. In this configuration, in addition to classic SQL querying possibilities, ExplIQuE offers the possibility to extend a given SQL query, by suggesting a set of possible selection predicates to add to the query, that aim at dividing the initial answer set to identify interesting exploration zones. In addition, ExplIQuE proposes some indicators to help the user in choosing her desired extension and in understanding her data, as well as interactive visualizations of the result set, in two dimensions revealed by PCA techniques. In this demonstration, we offer the audience the possibility to try the various functionalities of ExplIQuE by trying to express an imprecise question over a scientific database on bacterial colonies, through an iterative process. A video of the proposed demonstration is available at https://youtu.be/oK8xWGCWj_A.

INTRODUCTION

SQL is a language that appeared in the 1970s and is still widely used nowadays. Over the years, the volume of data that is stored has increased tremendously, creating new needs for users to be able to store, access, and analyze their data. Moreover, with the recent development and popularity of data analysis techniques, combined with easier access to data collecting and storing devices, more and more users are trying to make sense of and get value out of their data. As a consequence, the structure of databases tends to change, with bigger schemas, larger tables, and of course, more and more tuples. In addition, the use of databases is also evolving, because the data users are looking for is harder to find, because users are less experienced, and because the question they are trying to answer is not always totally clear: we call such questions imprecise queries, which are expressed in natural language but not easily translatable into SQL. This is especially true in exploratory contexts, where it is necessary to deeply understand the data, and to try several different SQL queries, before translating the initial question into SQL and reaching the desired data.
Indeed, when a user is confronted with a relational database and a question that is not necessarily clear in the first place, she will go through different phases. First, she can explore and navigate the database to better understand its content and structure. Then, once more familiar with the data structure, she can express a first simple and general query, before refining it over and over, until she answers the initial question at hand. As a result, the desired tuple set is not reached with a single query, but with a sequence of iterative refinements that, one by one, get the user closer to what she is looking for and gradually build her final answer set. This query refinement process is not always easy for users, especially when they do not know where to start, or when, for example, their initial query returns an answer set with too many tuples. Tools to assist users in such situations are therefore necessary, such as the one for interactive query refinement presented in [START_REF] Mishra | Interactive query refinement[END_REF]. More specifically, there is a gap to bridge to help SQL users get confident about their data and queries, and to tackle real-life problems. Such users have often just started to master the basic functionalities of SQL, and have been trained on well-defined questions over small datasets. But whenever they are confronted with fuzzy questions on new databases, they can get lost. This situation is common, for example for students facing their first internship experiences, or for freshly trained employees in companies that are implementing new data analysis processes to make use of the data they collect daily. Helping these new users to smoothly dive into data exploration is therefore an important challenge that would be beneficial in many domains. In this paper, we present ExplIQuE, our exploration interface for SQL databases with query extensions. The purpose of this interface is to help users refine an initial SQL query over a database, through an iterative process, by suggesting a set of possible extensions of this query, consisting of additional selection conditions for its WHERE clause. These extensions give new queries, which can then be explored and extended, until the desired result is reached. Assistance is provided to understand and select the desired extensions, with metrics and visual representations. These extensions have several objectives:
• Help the user understand her database by drawing attention to interesting attributes she might have overlooked, and by observing how the query result is divided and distributed, using 2D visualizations.
• Unblock the user when she does not know where to go or where to start, by providing suggestions even for a very general query.
• Provide semantic help, where most SQL editors only provide syntactic help that does not aid in understanding the content of the database itself.
ExplIQuE is an interface designed to connect to any database and to provide, in addition to classic SQL querying options, extensions to queries, with interactive and intuitive visual support. In this demonstration, we offer the audience the possibility to try the various functionalities of ExplIQuE by trying to express an imprecise question over a scientific database on bacterial colonies, through an iterative process.
SYSTEM OVERVIEW 2.1 Query extensions The purpose of extensions is to give suggestions to the user, so that she has directions to refine an initial query Q by adding additional selection predicates to its WHERE clause. More specifically, given a query Q, we propose to compute a set of k extensions, such that the extensions in this set do not overlap in terms of results, but together cover all the initial tuples returned by Q. Our solution to compute such extensions, presently being patented (patent number FR1757682, filed on 14/08/2017 in France), is based on two machine learning algorithms:

Clustering. In order to identify interesting zones of refinement in Q's answer set (denoted by ans(Q, d)), we propose to group similar tuples together using a clustering algorithm. ans(Q, d) is thus divided into k clusters, and each cluster becomes a possible refinement zone for the user. The intuition behind this process is that users formulate queries in order to reach a specific set of tuples answering a question, and that the results to be found are not random, but likely to contain tuples that fit together and have similar characteristics. We therefore chose to base the identification of refinement zones on the k-means algorithm with a Euclidean distance (see [START_REF] Lloyd | Least Squares Quantization in PCM[END_REF]).

Decision tree. The second part of our extension process consists in finding a query to describe each refinement zone identified by clustering. To do so, an additional column label is added to the dataset from ans(Q, d), and filled with the number of the cluster each tuple has been assigned to. A binary decision tree is then built to discriminate between the different clusters. For each leaf of the tree, the decision path from the root to this leaf is computed, and the conjunction of its decision clauses gives a new selection predicate. In order to get as many extensions as clusters, we offer two possibilities: either we limit the depth of the tree so as to obtain only k leaves, in which case each leaf gives exactly one extension; or we do not limit the depth of the tree, and all the leaves corresponding to the same cluster are joined by a disjunction, together forming a single extension.

These algorithms require some parameters. Predefined ones are proposed to the user, who can then personalize them using ExplIQuE's interface. For the clustering, several values of k are used (by default k = 2 to k = 10), each giving an extension set of a different size. These sets are presented to the user, ranked according to the well-known silhouette coefficient clustering score [START_REF] Peter | Silhouettes: a graphical aid to the interpretation and validation of cluster analysis[END_REF]. Inside a single extension set, extensions are ranked according to the size of their result, showing the most general extension first (with the most tuples) and the most specific one last. The algorithm to compute a k-extension set, given a fixed value of k and a tree with k leaves, is given in Algorithm 1. It should be noted that the data is preprocessed and normalized for clustering, and that the projection of the query is removed, allowing other attributes that the user might have overlooked to be included in the extensions.
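Before the formal statement in Algorithm 1 below, the two-step pipeline can be sketched concretely. The following Python sketch is a hypothetical illustration built on scikit-learn (the library used in the implementation), not ExplIQuE's actual code: the rule-extraction helper, the use of max_leaf_nodes in place of a depth limit, and the assumption of purely numeric columns are all simplifications.

from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, _tree

def extensions(rows, columns, k):
    """rows: ans(Q, d) as a 2D numeric array; returns k selection predicates."""
    # Cluster on normalized data, as described above.
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(StandardScaler().fit_transform(rows))
    # Train the tree on the raw values so thresholds stay in the original units.
    tree = DecisionTreeClassifier(max_leaf_nodes=k).fit(rows, labels)
    t, preds = tree.tree_, []
    def walk(node, conds):
        if t.children_left[node] == _tree.TREE_LEAF:
            preds.append(" AND ".join(conds) or "1=1")  # root-only tree: trivial predicate
            return
        col, thr = columns[t.feature[node]], t.threshold[node]
        walk(t.children_left[node], conds + [f"{col} <= {thr:.4g}"])
        walk(t.children_right[node], conds + [f"{col} > {thr:.4g}"])
    walk(0, [])
    return preds  # each predicate extends Q's WHERE clause

Training the tree on the raw (unnormalized) values keeps the generated predicates in the original units of the columns, which is what makes them directly readable as SQL.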
procedure Extension(Q, d, k)
Input: a query Q over R, a database d over R, k the number of extensions
Output: S_c, a set of k extensions of Q
if Q = π_X(Q') then
    Q := Q'                 // remove the projection
end
wd := ans(Q, d)             // wd: working data
lwd := kmeans(wd, k)        // lwd: labelled wd
tree := DecisionTree(lwd, k)
conjunctions := getRules(tree)
S_c := {}
foreach c in conjunctions do
    S_c := S_c ∪ {σ_c(Q)}
end
return S_c
Algorithm 1: Query extension procedure

Data visualization To understand the extensions, and to assist the user in choosing or refining one, ExplIQuE offers several hints. First, for each extension, two scores are displayed, in addition to the silhouette-based ranking among extension sets:
• The narrowing ratio, which is the percentage of tuples removed from the initial query when adding the extension.
• The result set size of the extension, when added to the initial query.
In addition, two visualizations are available to assist the user in choosing an interesting extension. The first is a scatterplot of the results of the query being extended, where the tuples are grouped by extension. The data is projected onto two dimensions using principal component analysis (PCA), and each extension is shown in a different color. Moreover, the visualization is interactive, as the user can see the extension corresponding to the data points by moving the mouse over the scatterplot. The purpose of this visualization is to show at a glance the size and dispersion of an extension, as well as how separated each extension is from the others. Such a visualization is presented in Figure 2. In the specific case of image databases, where tuples are associated with images, another visualization is possible: the images associated with the results of an extension can be displayed as a mosaic in ExplIQuE. This is useful for the user to quickly assess the diversity of data that an extension represents: she might see at a glance that the extension contains homogeneous images, or on the contrary easily identify outliers. In this demonstration, we propose to use such a database to demonstrate this additional functionality.

Implementation ExplIQuE is implemented as a web interface. The backend relies on the Flask framework (http://flask.pocoo.org/) and is therefore implemented in Python 3. The clustering and decision tree algorithms are the ones from the scikit-learn library [START_REF] Pedregosa | Scikit-learn: Machine Learning in Python[END_REF]. Data is stored in a relational database that can be either MySQL or Oracle. Optional external files can be stored outside the database in a dedicated folder. The user has access to a web interface implemented using the React JavaScript library (https://reactjs.org/). A first page allows the user to connect to the desired database, before accessing the second page, where queries over it can be extended. A snapshot of this page is presented in Figure 2, showing the main functionalities. On the left panel, the user can query the database using SQL and, when needed, ask to extend the current query. If necessary, the schema can be displayed using the link in the top left corner. On the right panel, the extensions are displayed. For a given value of k, a global visualization is shown, and the corresponding extension is highlighted when the mouse is over a cluster, to link the visualization to the SQL extension it represents. For each extension, additional information, such as the reduction ratio, is given to the user when she clicks on it.
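The 2D scatterplot described above can be approximated offline with a few library calls. This is a minimal, hypothetical sketch (the actual interface renders the plot interactively in React; the color map and marker size here are arbitrary choices):

import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_extensions(rows, labels):
    # Project the result set onto its first two principal components
    # and color each point by the extension (cluster) it falls into.
    xy = PCA(n_components=2).fit_transform(rows)
    plt.scatter(xy[:, 0], xy[:, 1], c=labels, cmap="tab10", s=12)
    plt.xlabel("PC1")
    plt.ylabel("PC2")
    plt.show()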
Finally, the extension can be added to the current query in just one click, it can be edited in the query-writing zone, and the process can be repeated iteratively until convergence. As this is an online system, the extensions have to be computed in a reasonable amount of time. This can be challenging on large instances: as a result, we use random sampling, which allows a trade-off between computing time and the quality of the results. Experiments on this specific problem are out of the scope of this demonstration paper.

Experimentation results A preliminary version of ExplIQuE (see [START_REF] Le Guilly | Partitioning queries for data exploration using query extensions[END_REF]) was evaluated with a group of 70 computer science students who had just started their lessons on SQL (more details at https://marielgy.github.io/sql_experimentation/). They were divided into two groups, one with access to ExplIQuE (group EXT), the other with a similar interface but only classic SQL querying possibilities (group NoEXT). They were asked to answer a series of ten questions over a database designed for the test:
• The first three questions were precise and easy to translate into SQL. They were used to assess the participants' level in SQL, and to ensure that the performances of both groups were balanced.
• The others were imprecise questions that had been designed to be deliberately fuzzy, with a more exploratory purpose. As such, they were not easily translatable into SQL, and answering them with only classic SQL tools required some fumbling around and playing with SQL.
The answering time of each participant on each question was monitored, in order to compare the performances of both groups. The results are presented in Figure 1, which shows the boxplot of answering time per question for each group, counting only correctly answered questions. On the first three questions, the results are very similar for both groups, which is what was expected: these questions were easy and did not require the use of extensions. On the other questions, however, striking differences can be observed, as students who had access to extensions performed much faster (the difference on question 10 is due to the fact that very few students from group NoEXT had the time to answer it correctly). This experiment therefore showed how extensions can help users better understand their data and write their SQL queries faster. This is what we want to show in this demonstration.

DEMONSTRATION SCENARIO The audience will have the opportunity to use ExplIQuE's web interface over a scientific database, in order to answer a list of imprecise questions on the database. The interface is presented in Figure 2, showing its main features.

Running database The database used for the demonstration comes from a study where colonies of different bacteria grow on solid plates (the authors would like to thank Christopher Pease from Darlington EURL for releasing the dataset). It contains 10000 tuples over 29 columns, which correspond to measurements of the development of the bacteria. It is both simple (only one table) and difficult, as the content of its columns is not easy to understand at first glance, with some confusing column names. It describes the shape, texture, and color of the colonies. Scientists use it to detect colonies with specific characteristics, for example bacteria belonging to the same species. Each tuple is also associated with an image representing the bacteria. We chose this dataset so that the audience would be in the conditions where ExplIQuE can be useful, as they are not likely to have prior knowledge of this subject.
Moreover, the 29 columns, with names that are not very expressive except for domain experts, will make the extensions useful for understanding what they contain and the type of data they represent.

Audience interaction The audience will have the opportunity to experience the various features of ExplIQuE. First, as with any DBMS, the audience will be able to browse the database's schema, to better understand its structure and content. Moreover, ExplIQuE allows users to evaluate queries like any other DBMS interface. The audience will then be allowed to play with the dataset, by querying it in a traditional setting. Second, the audience will be asked to answer questions, like the following one: What are the bacteria that have a very similar texture, and all have a circular shape? Using our query extensions, the answer to this question can be found in three iterations, and in less than five minutes. Users will be invited to start with a very general query, and to use the suggestions of the extensions to reach the desired result set. They will also be invited to play with the different visualizations to understand and select the most useful extensions. Finally, we also propose to play with several parameters available to tune the extensions: the audience will have the possibility to change the number of clusters to produce, and to change how the decision tree is computed.

Figure 1: Boxplot of answering time for students with (EXT) and without extensions (NoEXT)
Figure 2: Web interface for ExplIQuE
04118563
en
[ "math.math-pr" ]
2024/03/04 16:41:26
2023
https://hal.science/hal-04118563/file/Approximation%20of%20reflected%20SDEs-HAL.pdf
Keywords: Reflected SDEs, Time-dependent domains, Generalized BSDEs, Viscosity solution of PDEs, Nonlinear Neumann boundary conditions. 2020 MSC: 35G30, 60H10, 60H30

Approximation of reflected SDEs in non-smooth time-dependent domains and application to fully nonlinear PDEs with Neumann boundary condition on time-dependent domains

Manal Jakani

Introduction

Let d ≥ 1, T > 0 be fixed, and let D′ be a bounded open connected subset of $R^{1+d}$. We will refer to $D = D' \cap ([0,T] \times R^d)$ as a time-dependent domain. For t ∈ [0,T], the time sections of D are defined by $D_t = \{x : (t,x) \in D\}$ and are assumed to be convex and increasing in time. We deal with the normally reflected SDE in a time-dependent domain of the following form: for all t ≤ T,
\[
X_t = x + \int_0^t b(r,X_r)\,dr + \int_0^t \sigma(r,X_r)\,dW_r + \int_0^t \vec n(r,X_r)\,d|\Lambda|_r; \qquad X_t \in \bar D_t, \qquad |\Lambda|_t = \int_0^t \chi_{\{X_r\in\partial D_r\}}\,d|\Lambda|_r < \infty. \tag{1.1}
\]
This class of reflected SDEs has been introduced by Costantini-Gobet-El Karoui [START_REF] Costantini | Boundary sensitivities for diffusion processes in time dependent domains[END_REF] for smooth time-dependent domains. Then, in the case of SDEs with oblique reflection in non-smooth time-dependent domains, existence of weak solutions was established in earlier work. This result has been generalized by Lundström-Önskog [START_REF] Lundström | Stochastic and partial differential equations on nonsmooth timedependent domains[END_REF]: the authors proved the existence and uniqueness of a strong solution for obliquely reflected SDEs in non-smooth time-dependent domains. In the previous papers, the results of existence and uniqueness are obtained mainly by solving the Skorohod problem. The aim of this work is to provide an approximation scheme for the reflected SDE (1.1) using standard SDEs. This problem has been considered by Lions-Menaldi-Sznitman [START_REF] Lions | Construction de processus de diffusion réfléchis par pénalisation du domaine[END_REF] and by Menaldi [START_REF] Menaldi | Stochastic Variational Inequality for Reflected Diffusion[END_REF], where an approximation is given in convex time-independent domains when b and σ are Lipschitz. The same domain was considered by Bahlali-Maticiuc-Zalinescu [START_REF] Bahlali | Penalization method for a nonlinear Neumann PDE via weak solution of reflected SDEs[END_REF] when the coefficients are only measurable; the authors established a weak convergence result. In general time-independent domains, Ren-Wu [START_REF] Ren | Penalization of Reflected SDEs and Neumann Problems of HJB Equations[END_REF] extended these results to the case where the domain may have corners. In this paper, we are able to extend similar convergence results to the case of not necessarily smooth time-dependent domains whose time sections are increasing with time, which constitutes the main result of this paper. As an application, we consider the following generalized backward stochastic differential equation (BSDE for short): for all t ≤ s ≤ T,
\[
Y^{t,x}_s = h(X^{t,x}_T) + \int_s^T f(r,X^{t,x}_r,Y^{t,x}_r,Z^{t,x}_r)\,dr + \int_s^T \psi(r,X^{t,x}_r,Y^{t,x}_r)\,d|\Lambda^{t,x}|_r - \int_s^T Z^{t,x}_r\,dW_r. \tag{1.2}
\]
This class of BSDEs has been first introduced and studied by Pardoux-Zhang [START_REF] Pardoux | Generalized BSDEs and nonlinear Neumann boundary value problems[END_REF].
Moreover, the authors showed that if $(X^{t,x},\Lambda^{t,x})$ is the solution of a normally reflected SDE in a smooth time-independent domain, then the solution of (1.2) provides a probabilistic formula for u, a solution of a system of PDEs with nonlinear Neumann boundary conditions on a smooth time-independent domain. Based on this connection between BSDEs and PDEs, and on the approximation by standard SDEs of reflected SDEs in regular convex time-independent domains, many authors studied the convergence in the S-topology of the approximation of the generalized BSDE (1.2) associated with a reflected SDE in a regular convex domain, when the driver function f does not depend on z. They then give an approximation (by standard PDEs) for the associated PDE with nonlinear Neumann boundary conditions on a smooth time-independent domain (see e.g. [START_REF] Boufoussi | An approximation result for a nonlinear Neumann boundary value problem via BSDEs[END_REF][START_REF] Bahlali | Penalization method for a nonlinear Neumann PDE via weak solution of reflected SDEs[END_REF][START_REF] Bahlali | Penalization for a PDE with a nonlinear Neumann boundary condition and measurable coefficients[END_REF]). A recent paper by Bahlali-Boufoussi-Mouchtabih [START_REF] Bahlali | Approximation of a degenerate semilinear PDE with a nonlinear Neumann boundary condition[END_REF] treats the case where the driver function is allowed to depend on the variable z. Moreover, the authors provide a strong approximation for the generalized BSDE using a sequence of standard BSDEs, which they then use to obtain, once again, an approximation by standard PDEs of a PDE with nonlinear boundary condition on a time-independent regular convex domain. Using the approximation of the normally reflected diffusion (1.1) that we give in the first part of this paper, we get an approximation of the associated generalized BSDE (1.2), when randomness stems from (1.1), using a sequence of standard BSDEs. As is known (see Jakani [START_REF] Jakani | System of nonlinear second-order parabolic partial differential equations with interconnected obstacles and oblique derivative boundary conditions on non-smooth time-dependent domains[END_REF]), the solution of the generalized BSDE (1.2) provides a solution of the following PDE with nonlinear Neumann boundary condition on the time-dependent domain $D' \cap ([0,T]\times R^d)$: for all i = 1,…,m,
\[
\begin{cases}
\partial_t u_i(t,x) + \mathcal L u_i(t,x) + f_i\big(t,x,u(t,x),\sigma^\top(t,x) D_x u_i(t,x)\big) = 0, & (t,x)\in D'\cap\big([0,T)\times R^d\big);\\[4pt]
\dfrac{\partial u_i}{\partial\vec n}(t,x) + \psi_i\big(t,x,u(t,x)\big) = 0, & (t,x)\in \bar{D'}\setminus\big(D'\cap([0,T)\times R^d)\big);\\[4pt]
u(T,x) = h(x), & x\in \bar D_T,
\end{cases} \tag{1.3}
\]
where the operator $\mathcal L$ is defined by $\mathcal L = \frac12\mathrm{Tr}\big(\sigma\sigma^\top\big)D^2_{xx}(.) + b^\top D_x(.)$, and at a point $(t,x)\in\partial D$ we set $\frac{\partial}{\partial\vec n} = \langle\vec n(t,x), D_x(.)\rangle$.

The paper is organized as follows. In Section 2, we define the geometry of the domain considered in this paper and collect some existing results on reflected SDEs in time-dependent domains. Section 3 is devoted to the results of convergence: we propose a sequence of standard SDEs, for which we establish a priori estimates, and we show that it converges to the solution of the reflected SDE in the time-dependent domain D.
Then, in Section 4, we use the approximation of reflected SDEs in a smooth time-dependent domain to provide an approximation both for generalized BSDEs associated with reflected diffusions in time-dependent domains and for PDEs with normal derivative and nonlinear Neumann boundary condition defined on time-dependent domains.

Preliminaries and formulation of the problem

2.1 Geometry of the time-dependent domain

Let d ≥ 1, T > 0 be fixed. We follow the notation of [START_REF] Lundström | Stochastic and partial differential equations on nonsmooth timedependent domains[END_REF] and let D′ be a bounded open connected subset of $R^{1+d}$. We will refer to $D = D'\cap([0,T]\times R^d)$ as a time-dependent domain. Given D and t ∈ [0,T], we define the time sections of D as $D_t = \{x : (t,x)\in D\}$. We assume that
\[
D_t \ne \emptyset \quad \text{for every } t\in[0,T]. \tag{2.1}
\]
Our basic assumption is the following:
\[
D_t \subset D_{t'} \quad \text{whenever } t\le t',\ t,t'\in[0,T]. \tag{2.2}
\]
Let t ∈ [0,T]; the boundary of $D_t$ will be denoted $\partial D_t$. Then define N(t,x), the cone of unit inward normal vectors at a boundary point $x\in\partial D_t$, which is nonempty thanks to assumption (2.1). Note that under this assumption, the domain may have corners, which does not rule out the possibility of several unit inward normal vectors at the same boundary point. Now, let ⟨•,•⟩ denote the standard inner product on $R^d$ and $|x| = \langle x,x\rangle^{1/2}$ the Euclidean norm of $x\in R^d$. For $x\in R^d$ and r > 0, let B(x,r) and S(x,r) denote the ball and sphere of radius r centered at x, respectively, i.e. $B(x,r) = \{y\in R^d : |x-y|<r\}$ and $S(x,r) = \{y\in R^d : |x-y|=r\}$. We assume that there exists a radius $r_0 > 0$ such that the exterior sphere condition holds for all time sections of D. This implies that for any t ∈ [0,T] we have
\[
B\big(x - r_0\vec n(t,x),\, r_0\big) \subset D^c_t, \tag{2.3}
\]
whenever $x\in\partial D_t$ and $\vec n(t,x)\in N(t,x)$. This is equivalent to saying that
\[
\langle y-x,\vec n(t,x)\rangle + \frac{1}{2r_0}|y-x|^2 \ge 0, \qquad \forall x\in\partial D_t,\ y\in\bar D_t, \tag{2.4}
\]
whenever $\vec n(t,x)\in N(t,x)$, for t ∈ [0,T]. Unless otherwise stated, we fix $\vec n(t,x)$ in N(t,x) and we assume that
\[
\vec n \in C^{1,2}_b\big(R^{1+d},\, S(0,1)\big), \tag{2.5}
\]
where $C^{1,2}_b$ denotes the space of bounded functions that are continuously differentiable once with respect to the time variable and twice with respect to the space variable, with bounded derivatives. Then, let us recall the temporal variation of the domain,
\[
d(t,x) := \inf_{y\in \bar D_t}|y-x|, \qquad \forall t\in[0,T],\ x\in R^d,
\]
which is assumed to satisfy, for some p ∈ (1,∞),
\[
d(\cdot,x) \in W^{1,p}\big([0,T],[0,\infty)\big), \tag{2.6}
\]
for all $x\in R^d$, where $W^{1,p}([0,T],[0,\infty))$ denotes the Sobolev space of functions whose first order weak derivatives belong to $L^p([0,T])$, with Sobolev norm uniformly bounded in space and such that the first weak derivative $\partial_t d(t,x)$ is jointly measurable in (t,x).

Remark 2.1 Thanks to assumption (2.3), the exterior cone condition assumed in [START_REF] Lundström | Stochastic and partial differential equations on nonsmooth timedependent domains[END_REF], which is weaker than the uniform sphere condition (2.3), holds; i.e., there exists a constant ρ ∈ (0,1) such that
\[
\bigcup_{0\le\xi\le\rho} B\big(x-\xi\vec n(t,x),\,\xi\rho\big) \subset D^c_t, \qquad \forall t\in[0,T],\ x\in\partial D_t. \tag{2.7}
\]
It follows that the interior cone condition is satisfied as well:
\[
\bigcup_{0\le\xi\le\rho} B\big(x+\xi\vec n(t,x),\,\xi\rho\big) \subset D_t, \qquad \forall t\in[0,T],\ x\in\partial D_t. \tag{2.8}
\]
Moreover, by Remark 2.2 in [START_REF] Lundström | Stochastic and partial differential equations on nonsmooth timedependent domains[END_REF], there exist α = 1 − 1/p ∈ (0,1) and K ∈ (0,∞) such that for all s,t ∈ [0,T], $x\in R^d$,
\[
|d(s,x) - d(t,x)| \le K|s-t|^\alpha. \tag{2.9}
\]

2.2 Reflected SDEs in time-dependent domains

Let (Ω, F, P) be a fixed probability space on which is defined an n-dimensional Brownian motion $W = (W_t)_{0\le t\le T}$, where $F = (F_t)_{0\le t\le T}$ is the completed filtration of $(\sigma(W_s, 0\le s\le t))_{t\le T}$ with all P-null sets of F. Let $b : [0,T]\times R^d \to R^d$ and $\sigma : [0,T]\times R^d \to R^{d\times n}$ be measurable functions. We start by recalling the definition of the solution:

Definition 2.1 (Lundström-Önskog [START_REF] Lundström | Stochastic and partial differential equations on nonsmooth timedependent domains[END_REF]) A strong solution of the reflected SDE in D driven by W, with coefficients b and σ, direction of reflection along $\vec n$ and initial condition $x\in\bar D_0$, is an $F_t$-adapted stochastic process $X_t$ which satisfies, P-almost surely, whenever t ∈ [0,T],
\[
X_t = x + \int_0^t b(r,X_r)\,dr + \int_0^t \sigma(r,X_r)\,dW_r + \int_0^t \vec n(r,X_r)\,d|\Lambda|_r; \qquad X_t\in\bar D_t, \qquad |\Lambda|_t = \int_0^t \chi_{\{X_r\in\partial D_r\}}\,d|\Lambda|_r < \infty. \tag{2.10}
\]

We introduce the following assumptions:
(a) The functions b and σ are Lipschitz continuous with respect to x, i.e., there exists a positive constant C such that
\[
|b(t,x)-b(t,x')| + |\sigma(t,x)-\sigma(t,x')| \le C|x-x'|, \qquad \forall (t,x,x')\in[0,T]\times R^d\times R^d. \tag{2.11}
\]
(b) The functions b and σ are of linear growth in (t,x), i.e., there exists a positive constant C such that
\[
|b(t,x)| + |\sigma(t,x)| \le C(1+|x|), \qquad \forall (t,x)\in[0,T]\times R^d. \tag{2.12}
\]

Next, we recall the following result of existence and uniqueness for the solution of the reflected SDE (2.10) in the above geometric setting:

Theorem 2.1 (Lundström-Önskog [START_REF] Lundström | Stochastic and partial differential equations on nonsmooth timedependent domains[END_REF]) Under assumption (a), the reflected SDE (2.10) has a unique strong solution.

Moreover, it has been shown in [START_REF] Jakani | System of nonlinear second-order parabolic partial differential equations with interconnected obstacles and oblique derivative boundary conditions on non-smooth time-dependent domains[END_REF] that the solution satisfies the following properties:

Proposition 2.1 There exists a constant C such that for all $x,x'\in\bar D_0$,
\[
\mathbb{E}\Big[\sup_{0\le t\le T}|X^x_t - X^{x'}_t|^4 + \big||\Lambda^x|_t - |\Lambda^{x'}|_t\big|^4\Big] \le C|x-x'|^4. \tag{2.13}
\]
Moreover, for each µ > 0 and t ∈ [0,T], there exists C(µ,t) such that for all $x\in\bar D_0$,
\[
\mathbb{E}\big[e^{\mu|\Lambda^x|_t}\big] \le C(\mu,t). \tag{2.14}
\]

Approximation of reflected SDEs in time-dependent domains

Let D be a time-dependent domain satisfying (2.1)-(2.6). From now on, we assume that the functions b and σ satisfy assumptions (a) and (b). Let $x\in\bar D_0$; we introduce the following penalized SDE: for all n ≥ 1 and t ∈ [0,T],
\[
X^n_t = x + \int_0^t b(r,X^n_r)\,dr + \int_0^t \sigma(r,X^n_r)\,dW_r - n\int_0^t\big(X^n_r - \pi(r,X^n_r)\big)\,dr. \tag{3.1}
\]
Note that if $X^n_t \notin \bar D_t$, the vector $-\frac{X^n_t - \pi(t,X^n_t)}{|X^n_t - \pi(t,X^n_t)|}$ is an element of $N(t,\pi(t,X^n_t))$. The penalized SDE (3.1) can then be written as
\[
X^n_t = x + \int_0^t b(r,X^n_r)\,dr + \int_0^t \sigma(r,X^n_r)\,dW_r + \Lambda^n_t,
\]
where $\Lambda^n$ and $|\Lambda^n|$ are given by
\[
\Lambda^n_t = -n\int_0^t\big(X^n_r - \pi(r,X^n_r)\big)\,dr \quad\text{and}\quad |\Lambda^n|_t = n\int_0^t\big|X^n_r - \pi(r,X^n_r)\big|\,dr = n\int_0^t d(r,X^n_r)\,dr.
\]
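Before turning to the estimates, note that the scheme (3.1) is straightforward to simulate. The following Python sketch is a minimal, hypothetical Euler-Maruyama discretisation, not taken from the paper: the domain (a ball whose radius grows in time, so that the time sections increase as required by (2.2)), the coefficients b and σ, and all numerical constants are illustrative choices.

import numpy as np

def penalised_path(x0, n, T=1.0, steps=2000, seed=0):
    # Explicit scheme for dX = b dt + sigma dW - n (X - pi(t, X)) dt;
    # keep n * (T / steps) well below 1 so the penalisation term stays stable.
    rng = np.random.default_rng(seed)
    dt = T / steps
    X = np.asarray(x0, dtype=float)
    d = X.size
    for m in range(steps):
        t = m * dt
        R = 1.0 + t                          # time section D_t = B(0, 1 + t)
        r = np.linalg.norm(X)
        proj = X if r <= R else X * (R / r)  # pi(t, X): projection onto the closure of D_t
        X = (X - 0.5 * X * dt                             # b(t, x) = -x/2, Lipschitz
               + 0.3 * np.sqrt(dt) * rng.standard_normal(d)  # sigma = 0.3 * identity
               - n * (X - proj) * dt)                     # penalisation pushes X back into D_t
    return X

For a fixed step size, increasing n shrinks the excursion $\sup_t d(t,X^n_t)$, in line with Proposition 3.2 below.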
3.1 A priori estimates

Proposition 3.1 Under assumptions (a) and (b), for any q ≥ 1 we have
\[
\sup_{n\ge1}\mathbb{E}\Big[\sup_{0\le t\le T}|X^n_t|^{2q} + \sup_{0\le t\le T}|\Lambda^n|^q_t\Big] < \infty. \tag{3.2}
\]

PROOF. From Itô's formula we have, for all t ∈ [0,T],
\[
|X^n_t - P_0|^2 + 2n\int_0^t\langle X^n_r - P_0,\, X^n_r - \pi(r,X^n_r)\rangle\, dr = |x-P_0|^2 + \int_0^t|\sigma(r,X^n_r)|^2 dr + 2\int_0^t\langle X^n_r-P_0,\, b(r,X^n_r)\rangle dr + 2\int_0^t\langle X^n_r-P_0,\, \sigma(r,X^n_r)\, dW_r\rangle.
\]
Recall Lemma 2.1-(d); then there exist $P_0\in D_0$ and $1\le\gamma<\infty$ such that
\[
\langle X^n_r - P_0,\, X^n_r - \pi(r,X^n_r)\rangle \ge \frac{1}{\gamma}\,|X^n_r - \pi(r,X^n_r)|. \tag{3.3}
\]
This implies that
\[
2n\int_0^t\langle X^n_r-P_0,\, X^n_r-\pi(r,X^n_r)\rangle dr \ge \frac{2n}{\gamma}\int_0^t|X^n_r-\pi(r,X^n_r)|\, dr.
\]
Besides, from assumption (a), we deduce that for some constant M > 0 depending on the Lipschitz constant of b and σ,
\[
|b(r,X^n_r)| \le |b(r,P_0)| + M|X^n_r - P_0|, \tag{3.4}
\]
\[
|\sigma(r,X^n_r)| \le |\sigma(r,P_0)| + M|X^n_r - P_0|. \tag{3.5}
\]
Therefore,
\[
|X^n_t-P_0|^2 + \frac{2n}{\gamma}\int_0^t|X^n_r-\pi(r,X^n_r)|\, dr \le |x-P_0|^2 + (M^2+M+1)\int_0^t|X^n_r-P_0|^2 dr + \int_0^t|b(r,P_0)|^2 dr + \int_0^t|\sigma(r,P_0)|^2 dr + 2\int_0^t\langle X^n_r-P_0,\, \sigma(r,X^n_r)\, dW_r\rangle.
\]
Then, from the boundedness of $\bar D_0$ and thanks to assumption (b), we get
\[
|X^n_t-P_0|^2 + \frac{2n}{\gamma}\int_0^t|X^n_r-\pi(r,X^n_r)|\, dr \le C_{\bar D_0,\sigma,b,x} + C\int_0^t|X^n_r-P_0|^2 dr + 2\int_0^t\langle X^n_r-P_0,\, \sigma(r,X^n_r)\, dW_r\rangle,
\]
and for q ≥ 1,
\[
|X^n_t-P_0|^{2q} + \Big(\frac{2n}{\gamma}\int_0^t|X^n_r-\pi(r,X^n_r)|\, dr\Big)^q \le C_{\bar D_0,\sigma,b,x} + C\int_0^t|X^n_r-P_0|^{2q} dr + C\Big|\int_0^t\langle X^n_r-P_0,\, \sigma(r,X^n_r)\, dW_r\rangle\Big|^q. \tag{3.6}
\]
First, let us examine the term $|X^n_t-P_0|^{2q}$. Taking the supremum over [0,t] and the expectation, we get
\[
\mathbb{E}\Big[\sup_{0\le r\le t}|X^n_r-P_0|^{2q}\Big] \le C_{\bar D_0,\sigma,b,x} + C\,\mathbb{E}\Big[\int_0^t\sup_{0\le u\le r}|X^n_u-P_0|^{2q}\, dr\Big] + C\,\mathbb{E}\Big[\sup_{0\le r\le t}\Big|\int_0^r\langle X^n_u-P_0,\, \sigma(u,X^n_u)\, dW_u\rangle\Big|^q\Big]. \tag{3.7}
\]
From the BDG inequality, it follows that
\[
\mathbb{E}\Big[\sup_{0\le r\le t}\Big|\int_0^r\langle X^n_u-P_0,\, \sigma(u,X^n_u)\, dW_u\rangle\Big|^q\Big] \le C\,\mathbb{E}\Big[\Big(\int_0^t|X^n_r-P_0|^2|\sigma(r,X^n_r)|^2 dr\Big)^{q/2}\Big] \le C\,\mathbb{E}\Big[\Big(\int_0^t\big(|X^n_r-P_0|^4 + |\sigma(r,X^n_r)|^4\big) dr\Big)^{q/2}\Big] \le C\,\mathbb{E}\Big[\int_0^t|X^n_r-P_0|^{2q} dr + \int_0^t|\sigma(r,X^n_r)|^{2q} dr\Big].
\]
Again using (3.5), we deduce from (3.7) that
\[
\mathbb{E}\Big[\sup_{0\le r\le t}|X^n_r-P_0|^{2q}\Big] \le C_{\bar D_0,\sigma,b,x} + C\int_0^t\mathbb{E}\Big[\sup_{0\le u\le r}|X^n_u-P_0|^{2q}\Big] dr.
\]
Finally, we apply Gronwall's lemma and obtain, for all q ≥ 1 and t ≤ T,
\[
\mathbb{E}\Big[\sup_{0\le r\le t}|X^n_r-P_0|^{2q}\Big] \le C, \quad\text{which implies}\quad \mathbb{E}\Big[\sup_{0\le t\le T}|X^n_t|^{2q}\Big] \le C, \quad \forall n\ge1.
\]
Moreover, using the latter estimate and taking into account (3.6), it follows that for all q ≥ 1,
\[
\mathbb{E}\Big[\sup_{0\le t\le T}|\Lambda^n|^q_t\Big] = \mathbb{E}\Big[\sup_{0\le t\le T}\Big(n\int_0^t|X^n_r-\pi(r,X^n_r)|\, dr\Big)^q\Big] \le \Big(\frac{\gamma}{2}\Big)^q C, \qquad \forall n\ge1. \qquad\square
\]

3.2 Uniform control of the distance

In this part, we are interested in the uniform control of $d(t,X^n_t)$. Note that assumptions (2.1) and (2.6) do not ensure the smoothness of the boundary. Inspired by [START_REF] Nyström | Reflected BSDE of Wiener-Poisson type in time-dependent domains[END_REF], we use a smooth approximation of D that allows us to apply Itô's formula with a function involving the distance. More precisely, we recall the following lemma from the same work:

Lemma 3.1 Let ε > 0. There exists a $C^\infty$-smooth time-dependent domain $D^\varepsilon \subset D'$ satisfying (2.1) and (2.2) such that
\[
h(D_t, D_{\varepsilon,t}) < \varepsilon, \qquad \forall t\in[0,T], \tag{3.8}
\]
where h stands for the Hausdorff distance, defined by $h(F,G) = \max\big(\sup\{d(y,F);\, y\in G\},\ \sup\{d(y,G);\, y\in F\}\big)$ for any two sets F and G of $R^d$.

Thanks to Lemma 3.1, we deduce that the cone of unit inward normal vectors at each boundary point of $D^\varepsilon$ reduces to a unique vector, which we denote $\vec n_\varepsilon$. Then, for t ∈ [0,T] and $x\in R^d\setminus\bar D_{\varepsilon,t}$, the projection of x along $\vec n_\varepsilon(t,x)$ will be denoted $\pi_\varepsilon(t,x)$. Note that $\pi_\varepsilon$ satisfies
\[
\pi_\varepsilon(t,y) = y, \quad \forall y\in\bar D_{\varepsilon,t},\ \forall t\in[0,T], \qquad\text{and}\qquad d_\varepsilon(t,y) = d(y,\bar D_{\varepsilon,t}) = |y-\pi_\varepsilon(t,y)|, \quad \forall y\in R^d,\ \forall t\in[0,T].
\]
Proposition 3.2 Let $(X^n)_{n\ge1}$ be the unique solution of the penalized SDE (3.1). Then for any p > 2, there exists c > 0 such that, for all n ≥ 1,
\[
\mathbb{E}\Big[\sup_{0\le t\le T} d(t,X^n_t)^p\Big] \le \frac{c}{n^{(p-2)/2}}, \tag{3.9}
\]
and
\[
\mathbb{E}\Big[\int_0^T d(t,X^n_t)^p\, dt\Big] \le \frac{c}{n^{p/2}}. \tag{3.10}
\]

First, we shall recall some properties of the projection $\pi_\varepsilon$, as stated in the following lemma, which is borrowed from [START_REF] Nyström | Reflected BSDE of Wiener-Poisson type in time-dependent domains[END_REF].

Lemma 3.2 Let $D^\varepsilon$ be a smooth approximation of D satisfying (3.8). Then there exists a constant c ≥ 0 such that, if ε ∈ (0,1), $y\in R^d$ and t ∈ [0,T], we have:
(i) $|\pi(t,y) - \pi_\varepsilon(t,y)| \le c\,\min\big(\varepsilon^2 + \varepsilon\, d_\varepsilon(t,y);\ \sqrt\varepsilon\,(1+d_\varepsilon(t,y));\ \varepsilon^2 + \varepsilon\, d(t,y)\big)$;
(ii) $|\pi(t,y) - \pi_\varepsilon(t,y)| \le c\sqrt{\varepsilon\, d_\varepsilon(t,y)}$ whenever $d_\varepsilon(t,y) > \varepsilon$.

PROOF OF PROPOSITION 3.2. We establish a uniform control of $d_\varepsilon(t,X^n_t)$. This will be done using Itô's formula with the function $\phi_\varepsilon(t,y) := (d_\varepsilon(t,y))^p = |y-\pi_\varepsilon(t,y)|^p$, p > 2, which is continuously differentiable with respect to y and whose derivative with respect to the time variable exists. Now, recall Lemma 3.1; then $(D_{\varepsilon,t})_{t\ge0}$ is increasing in time and, by definition of $\phi_\varepsilon$, we have $\partial_t\phi_\varepsilon \le 0$. Using Itô's formula and thanks to the previous remark, we get, for all t ∈ [0,T],
\[
\phi_\varepsilon(t,X^n_t) \le \phi_\varepsilon(0,x) + \int_0^t\langle\partial_x\phi_\varepsilon(s,X^n_s),\, b(s,X^n_s)\rangle ds + \int_0^t\langle\partial_x\phi_\varepsilon(s,X^n_s),\, \sigma(s,X^n_s)\, dW_s\rangle - n\int_0^t\langle\partial_x\phi_\varepsilon(s,X^n_s),\, X^n_s-\pi(s,X^n_s)\rangle ds + \frac12\int_0^t\sigma^\top(s,X^n_s)\,\partial_{xx}\phi_\varepsilon(s,X^n_s)\,\sigma(s,X^n_s)\, ds.
\]
First, note that Lemma 3.2-(i) gives the following upper bound on $\phi_\varepsilon(0,x)$, since $x\in\bar D_0$:
\[
\phi_\varepsilon(0,x) = |x-\pi_\varepsilon(0,x)|^p = |\pi(0,x)-\pi_\varepsilon(0,x)|^p \le c^p\big(\varepsilon^2 + \varepsilon\, d(x,D_0)\big)^p \le (c\varepsilon)^p.
\]
Next, for the term involving $\partial_x\phi_\varepsilon$, we observe that
\[
\partial_x\phi_\varepsilon(t,x) = \partial_x\big(d_\varepsilon(t,x)^2\big)^{p/2} = \frac{p}{2}\,\partial_x\big(d_\varepsilon(t,x)^2\big)\times\big(d_\varepsilon(t,x)^2\big)^{p/2-1} = p\,(x-\pi_\varepsilon(t,x))\, d_\varepsilon(t,x)^{p-2}.
\]
Then, taking the norm, we get $|\partial_x\phi_\varepsilon(t,X^n_t)| \le p\, d_\varepsilon(t,X^n_t)^{p-1}$. Note that for any $c_1>0$ there exists $C_1$, depending on $c_1$ and p, such that
\[
p\, d_\varepsilon(s,X^n_s)^{p-1}|b(s,X^n_s)| \le c_1 n\, d_\varepsilon(s,X^n_s)^p + \frac{C_1}{n^{p-1}}|b(s,X^n_s)|^p. \tag{3.11}
\]
We then obtain the following inequality, for all $c_1>0$:
\[
\int_0^t\langle\partial_x\phi_\varepsilon(s,X^n_s),\, b(s,X^n_s)\rangle ds \le p\int_0^t d_\varepsilon(s,X^n_s)^{p-1}|b(s,X^n_s)| ds \le c_1 n\int_0^t d_\varepsilon(s,X^n_s)^p ds + \frac{C_1}{n^{p-1}}\int_0^t|b(s,X^n_s)|^p ds. \tag{3.12}
\]
Next, note that $-\frac{x-\pi_\varepsilon(t,x)}{|x-\pi_\varepsilon(t,x)|}$ coincides with $\vec n_\varepsilon(t,x)$, the unit normal vector pointing toward the interior of $D_{\varepsilon,t}$, whenever $x\in\partial D_{\varepsilon,t}$ for each t ∈ [0,T], and is null elsewhere. Hence, we can see that
\[
\partial_{xx}\phi_\varepsilon(t,x) = \partial_x\big(-p\,\vec n_\varepsilon(t,x)\, d_\varepsilon(t,x)^{p-1}\big) = -p\,\vec n_\varepsilon(t,x)\,\partial_x\big(d_\varepsilon(t,x)^2\big)^{(p-1)/2} - p\, d_\varepsilon(t,x)^{p-1}\,\partial_x\vec n_\varepsilon(t,x).
\]
Taking into account the smoothness of $D^\varepsilon$ and the boundedness of D′, the derivative $\partial_x\vec n_\varepsilon$ is bounded. Thus, taking the norm, there exists a constant c > 0, independent of ε, such that
\[
\frac12\int_0^t\big(\sigma^\top\partial_{xx}\phi_\varepsilon\,\sigma\big)(s,X^n_s)\, ds \le \int_0^t\big[c\,p\, d_\varepsilon(s,X^n_s)^{p-1} + c\,p(p-1)\, d_\varepsilon(s,X^n_s)^{p-2}\big]\,|\sigma(s,X^n_s)|^2\, ds.
\]
Similarly, we can see that for any $c_2,c_3>0$ there exist $C_2,C_3>0$, depending respectively on $c_2$ and $c_3$ and on p, such that
\[
c\,p\, d_\varepsilon(s,X^n_s)^{p-1}|\sigma(s,X^n_s)|^2 \le c_2 n\, d_\varepsilon(s,X^n_s)^p + \frac{C_2}{n^{p-1}}|\sigma(s,X^n_s)|^{2p}
\]
and
\[
c\,p(p-1)\, d_\varepsilon(s,X^n_s)^{p-2}|\sigma(s,X^n_s)|^2 \le c_3 n\, d_\varepsilon(s,X^n_s)^p + \frac{C_3}{n^{(p-2)/2}}|\sigma(s,X^n_s)|^p.
\]
Therefore, the following upper bound holds, for all $c_2,c_3>0$:
\[
\frac12\int_0^t\big(\sigma^\top\partial_{xx}\phi_\varepsilon\,\sigma\big)(s,X^n_s)\, ds \le (c_2+c_3)\, n\int_0^t d_\varepsilon(s,X^n_s)^p ds + \frac{C_2}{n^{p-1}}\int_0^t|\sigma(s,X^n_s)|^{2p} ds + \frac{C_3}{n^{(p-2)/2}}\int_0^t|\sigma(s,X^n_s)|^p ds.
\]
Now, let us examine the penalization term:
\[
-n\int_0^t\langle\partial_x\phi_\varepsilon(s,X^n_s),\, X^n_s-\pi(s,X^n_s)\rangle ds = -np\int_0^t d_\varepsilon(s,X^n_s)^p ds - np\int_0^t d_\varepsilon(s,X^n_s)^{p-2}\langle X^n_s-\pi_\varepsilon(s,X^n_s),\, \pi_\varepsilon(s,X^n_s)-\pi(s,X^n_s)\rangle\,\chi_{\{d_\varepsilon(s,X^n_s)>\varepsilon\}}\, ds - np\int_0^t d_\varepsilon(s,X^n_s)^{p-2}\langle X^n_s-\pi_\varepsilon(s,X^n_s),\, \pi_\varepsilon(s,X^n_s)-\pi(s,X^n_s)\rangle\,\chi_{\{d_\varepsilon(s,X^n_s)\le\varepsilon\}}\, ds.
\]
On the one hand, Lemma 3.2-(ii) yields
\[
\Big|-np\int_0^t d_\varepsilon^{p-2}\langle X^n_s-\pi_\varepsilon,\, \pi_\varepsilon-\pi\rangle\,\chi_{\{d_\varepsilon>\varepsilon\}}\, ds\Big| \le npc\sqrt\varepsilon\int_0^t d_\varepsilon^{p-2}\,|X^n_s-\pi_\varepsilon(s,X^n_s)|\; d_\varepsilon^{1/2}\,\chi_{\{d_\varepsilon>\varepsilon\}}\, ds \le npc\sqrt\varepsilon\int_0^t d_\varepsilon(s,X^n_s)^{(2p-1)/2}\, ds.
\]
This implies that for any $c_4>0$ there exists $C_4>0$, depending on $c_4$ and p, such that
\[
\Big|-np\int_0^t d_\varepsilon^{p-2}\langle X^n_s-\pi_\varepsilon,\, \pi_\varepsilon-\pi\rangle\,\chi_{\{d_\varepsilon>\varepsilon\}}\, ds\Big| \le c_4 n\int_0^t d_\varepsilon(s,X^n_s)^p\, ds + C_4\, n^{1/(2p)}\varepsilon^p.
\]
On the other hand, the second term can be dominated as follows, using Lemma 3.2-(i):
\[
np\int_0^t d_\varepsilon^{p-2}\,|X^n_s-\pi_\varepsilon|\,|\pi_\varepsilon-\pi|\,\chi_{\{d_\varepsilon\le\varepsilon\}}\, ds \le np\sqrt\varepsilon\int_0^t d_\varepsilon^{p-1}\big(1+d_\varepsilon(s,X^n_s)\big)\, ds \le c\,n\,\varepsilon^{(p-1)/2}.
\]
As a conclusion, we have
\[
\phi_\varepsilon(t,X^n_t) + (p-c_1-c_2-c_3-c_4)\, n\int_0^t\phi_\varepsilon(s,X^n_s)\, ds \le \big(c^p + C_4 n^{1/(2p)}\big)\varepsilon^p + c\,n\,\varepsilon^{(p-1)/2} + \frac{C_1}{n^{p-1}}\int_0^t|b(s,X^n_s)|^p ds + \int_0^t\Big[\frac{C_2}{n^{p-1}}|\sigma(s,X^n_s)|^{2p} + \frac{C_3}{n^{(p-2)/2}}|\sigma(s,X^n_s)|^p\Big] ds + \int_0^t\langle\partial_x\phi_\varepsilon(s,X^n_s),\, \sigma(s,X^n_s)\, dW_s\rangle. \tag{3.13}
\]
Therefore, the following upper bound holds: ∀c 2 , c 3 > 0, 1 2 t 0 (σ ⊤ ∂ xx ϕ ε σ (s, X n s )ds ≤ (c 2 + c 3 )n t 0 d ε (s, X n s ) p ds + C 2 n p-1 t 0 |σ (s, X n s )| 2p ds + C 3 n p-2 2 t 0 |σ (s, X n s )| p ds. Now, let us examine the term involving the penalization term, -n t 0 <∂ x ϕ ε (s, X n s ), X n s -π(s, X n s ) > ds = -np t 0 d ε (s, X n s ) p ds -np t 0 d ε (s, X n s ) p-2 ⟨X n s -π ε (s, X n s ), π ε (s, X n s ) -π(s, X n s )⟩χ {dε (s,X n s )>ε} (s, X n s )ds -np t 0 d ε (s, X n s ) p-2 ⟨X n s -π ε (s, X n s ), π ε (s, X n s ) -π(s, X n s )⟩χ {dε (s,X n s )⩽ε} (s, X n s )ds. On the one hand, Lemma 3.2-(ii) yields: | -np t 0 d ε (s, X n s ) p-2 ⟨X n s -π ε (s, X n s ), π ε (s, X n s ) -π(s, X n s )⟩χ {dε (s,X n s )>ε} (s, X n s )ds| ≤ npc √ ε t 0 d ε (s, X n s ) p-2 |X n s -π ε (s, X n s )|d ε (s, X n s ) 1 2 χ {dε (s,X n s )>ε} (s, X n s )ds ≤ npc √ ε t 0 d ε (s, X n s ) 2p-1 2 ds. This implies that, for any c 4 > 0 there exists C 4 > 0 depending on c 4 and p such that: | -np t 0 d ε (s, X n s ) p-2 ⟨X n s -π(s, X n s ), π ε (s, X n s ) -π(s, X n s )⟩χ {dε (s,X n s )>ε} (s, X n s )ds| ≤ c 4 n t 0 d ε (s, X n s ) p ds +C 4 n 1 2p ε p . On the other hand, the second term can be dominated as follows: np t 0 d ε (s, X n s ) p-2 |X n s -π ε (s, X n s )||π ε (s, X n s ) -π(s, X n s )|(1 -χ {dε (s,X n s )⩽ε} (s, X n s ))ds ≤ np √ ε t 0 d ε (s, X n s ) p-1 (1 + d ε (s, X n s ))ds ≤ cnε p-1 2 . The last line follows from Lemma 3.2-(i). As a conclusion, we have: ϕ ε (t, X n t ) + (p -c 1 -c 2 -c 3 -c 4 )n t 0 ϕ ε (s, X n s )ds ≤ (c p +C 4 n 1 2p )ε p + cnε p-1 2 + C 1 n p-1 t 0 |b(s, X n s )| p ds + t 0 C 2 n p-1 |σ (s, X n s )| 2p + C 3 n p-2 2 |σ (s, X n s )| p ds + t 0 ⟨∂ x ϕ ε (s, X n s ), σ (s, X n s )dW s ⟩. ( 3 E d ε (t, X n t ) p + n t 0 d ε (s, X n s ) p ds ≤ (c p +C 4 n 1 2p )ε p + cnε p-1 2 + C n p-2 2 E t 0 |b(s, X n s )| p + |σ (s, X n s )| p + |σ (s, X n s )| 2p ds . (3.14) By taking the supremum over [0, T ] and by recalling (3.2) and assumption (b), we conclude that: sup 0≤t≤T E d ε (t, X n t ) p ⩽ (c p +C 4 n 1 2p )ε p + cnε p-1 2 + C n p-2 2 , ∀p > 2. Next, using B-D-G inequality there exists c > 0 such that: E sup 0≤t≤T | t 0 ⟨∂ x ϕ ε (s, X n s ), σ (s, X n s )dW s ⟩| ≤ cE T 0 d ε (s, X n s ) p-1 |σ (s, X n s )|ds ≤ c 5 E T 0 nd ε (s, X n s ) p ds + C 5 n p-1 E T 0 |σ (s, X n s )| p ds , ∀c 5 > 0, where C 5 > 0 is independent of n. Hence, from (3.13) and (3.14), we conclude that there exists c > 0 such that E sup 0≤t≤T d ε (t, X n t ) p ⩽ (c p +C 4 n 1 2p )ε p + cnε p-1 2 + C n p-2 2 , ∀p > 2. Finally, since h(D t , D ε,t ) < ε, we have d(s, X n ) ≤ d ε (s, X n s ) + ε. Then, it suffices to take the limit as ε → 0. Convergence of the Penalized SDE First, we show that (X n ) n⩾1 is a Cauchy sequence as stated in the following proposition: Proposition 3.3 Let (X n ) n⩾1 be |X n t -X m t | 2 = 2 t 0 ⟨X n s -X m s , b(s, X n s ) -b(s, X m s )⟩ds + 1 2 t 0 |σ (s, X n s ) -σ (s, X m s )| 2 ds -2n t 0 ⟨X n s -X m s , X n s -π(s, X n s )⟩ds + 2m t 0 ⟨X n s -X m s , X m s -π(s, X m s )⟩ds + 2 t 0 ⟨X n s -X m s , (σ (s, X n s ) -σ (s, X m s ))dW s ⟩. Using the Lipschitz continuity of b and σ , we deduce that: |X n t -X m t | 2 ≤ c b,σ t 0 |X n s -X m s | 2 ds -2n t 0 ⟨X n s -X m s , X n s -π(s, X n s )⟩ds + 2m t 0 ⟨X n s -X m s , X m s -π(s, X m s )⟩ds + 2 t 0 ⟨X n s -X m s , (σ (s, X n s ) -σ (s, X m s ))dW s ⟩. 
Note that
\[
-\langle X^n_s-X^m_s,\, X^n_s-\pi(s,X^n_s)\rangle = -|X^n_s-\pi(s,X^n_s)|^2 + \Big\langle \pi(s,X^n_s)-\pi(s,X^m_s),\, -\frac{X^n_s-\pi(s,X^n_s)}{|X^n_s-\pi(s,X^n_s)|}\Big\rangle\, d(s,X^n_s) - \big\langle \pi(s,X^m_s)-X^m_s,\, X^n_s-\pi(s,X^n_s)\big\rangle. \tag{3.17}
\]
By recalling the exterior sphere property (2.4) and using the Lipschitz continuity of π with respect to y, we deduce that
\[
-2n\int_0^t\langle X^n_s-X^m_s,\, X^n_s-\pi(s,X^n_s)\rangle ds \le cn\int_0^t|\pi(s,X^n_s)-\pi(s,X^m_s)|^2\, d(s,X^n_s)\, ds + 2n\int_0^t d(s,X^n_s)\, d(s,X^m_s)\, ds \le cn\int_0^t d(s,X^n_s)\,|X^n_s-X^m_s|^2\, ds + 2n\int_0^t d(s,X^n_s)\, d(s,X^m_s)\, ds.
\]
We do likewise with the third term on the right-hand side of (3.16). Then we obtain the inequality
\[
|X^n_t-X^m_t|^2 \le M^{m,n}_t + H^{m,n}_t + \int_0^t\Psi^{m,n}_s\,|X^n_s-X^m_s|^2\, ds,
\]
where $M^{m,n}$ is a local martingale and the processes $H^{m,n}$ and $\Psi^{m,n}$ are defined by
\[
H^{m,n}_t := 2(m+n)\int_0^t d(s,X^n_s)\, d(s,X^m_s)\, ds, \qquad \Psi^{m,n}_s := c_{b,\sigma,r}\big(1 + n\, d(s,X^n_s) + m\, d(s,X^m_s)\big).
\]
Then, thanks to the stochastic Gronwall inequality (see e.g. Lemma 2.3 in Ren-Wu [START_REF] Ren | Penalization of Reflected SDEs and Neumann Problems of HJB Equations[END_REF], Theorem 4 in Scheutzow [START_REF] Scheutzow | A stochastic Gronwall lemma[END_REF]), we get
\[
\mathbb{E}\Big[\sup_{0\le t\le T}|X^n_t-X^m_t|\; e^{-\frac{c_{b,\sigma,r}}{2}\int_0^T(1+n d(s,X^n_s)+m d(s,X^m_s))\, ds}\Big] \le 3c\,\mathbb{E}\Big[\Big((n+m)\int_0^T d(s,X^n_s)\, d(s,X^m_s)\, ds\Big)^{1/2}\Big] \le c\,\mathbb{E}\Big[\Big(\sup_{0\le t\le T}d(t,X^m_t)\int_0^T n\, d(s,X^n_s)\, ds + \sup_{0\le t\le T}d(t,X^n_t)\int_0^T m\, d(s,X^m_s)\, ds\Big)^{1/2}\Big] \le c\,\Big(\mathbb{E}\Big[\sup_{0\le t\le T}d(t,X^m_t)+\sup_{0\le t\le T}d(t,X^n_t)\Big]\Big)^{1/2}\Big(\mathbb{E}\Big[n\int_0^T d(s,X^n_s)\, ds + m\int_0^T d(s,X^m_s)\, ds\Big]\Big)^{1/2}.
\]
Recall Proposition 3.1; from (3.2) we deduce that $\mathbb{E}\big[n\int_0^T d(s,X^n_s)\, ds\big] = \mathbb{E}\big[|\Lambda^n|_T\big] \le c$, which implies that
\[
\mathbb{E}\Big[n\int_0^T d(s,X^n_s)\, ds + m\int_0^T d(s,X^m_s)\, ds\Big] \le c.
\]
On the other hand, from (3.9), it follows that for p > 2 there exists c > 0, which may change from line to line, such that
\[
\Big(\mathbb{E}\Big[\sup_{0\le t\le T}d(t,X^m_t)+\sup_{0\le t\le T}d(t,X^n_t)\Big]\Big)^{1/2} \le \Big(\mathbb{E}\Big[\Big(\sup_{0\le t\le T}d(t,X^m_t)+\sup_{0\le t\le T}d(t,X^n_t)\Big)^p\Big]\Big)^{1/(2p)} \le c\,\Big(\mathbb{E}\Big[\sup_{0\le t\le T}d(t,X^m_t)^p+\sup_{0\le t\le T}d(t,X^n_t)^p\Big]\Big)^{1/(2p)} \le c\,\Big(\frac{1}{m^{(p-2)/2}}+\frac{1}{n^{(p-2)/2}}\Big)^{1/(2p)}.
\]
Therefore, the following holds for all p > 2:
\[
\mathbb{E}\Big[\sup_{0\le t\le T}|X^n_t-X^m_t|\, e^{-\frac{c_{b,\sigma,r}}{2}(1+|\Lambda^n|_T+|\Lambda^m|_T)}\Big] \le c\,\Big(\frac{1}{m^{(p-2)/2}}+\frac{1}{n^{(p-2)/2}}\Big)^{1/(2p)} \xrightarrow[m,n\to\infty]{} 0.
\]
Next, observe that for all κ > 0,
\[
\sup_{0\le t\le T}|X^n_t-X^m_t| = \sup_{0\le t\le T}|X^n_t-X^m_t|\Big(\chi_{\{e^{-\frac{c_{b,\sigma,r}}{2}(1+|\Lambda^n|_T+|\Lambda^m|_T)}\le\kappa\}} + \chi_{\{e^{-\frac{c_{b,\sigma,r}}{2}(1+|\Lambda^n|_T+|\Lambda^m|_T)}>\kappa\}}\Big) \le \sup_{0\le t\le T}|X^n_t-X^m_t|\,\chi_{\{1+|\Lambda^n|_T+|\Lambda^m|_T\ge\ln(1/\kappa)\}} + \frac{1}{\kappa}\sup_{0\le t\le T}|X^n_t-X^m_t|\, e^{-\frac{c_{b,\sigma,r}}{2}(1+|\Lambda^n|_T+|\Lambda^m|_T)}.
\]
Taking the expectation and using Cauchy-Schwarz together with Markov's inequality, we obtain, for all κ > 0,
\[
\mathbb{E}\Big[\sup_{0\le t\le T}|X^n_t-X^m_t|\Big] \le \Big(\mathbb{E}\Big[\sup_{0\le t\le T}|X^n_t-X^m_t|^2\Big]\Big)^{1/2}\Big(\frac{\mathbb{E}\big[1+|\Lambda^n|_T+|\Lambda^m|_T\big]}{\ln(1/\kappa)}\Big)^{1/2} + \frac{1}{\kappa}\Big(\frac{1}{m^{(p-2)/2}}+\frac{1}{n^{(p-2)/2}}\Big)^{1/(2p)}.
\]
Thanks to the estimate (3.2), there exists a constant c > 0, independent of n and m, such that, for all κ > 0,
\[
\mathbb{E}\Big[\sup_{0\le t\le T}|X^n_t-X^m_t|\Big] \le c\,\Big(\frac{1}{\ln(1/\kappa)}\Big)^{1/2} + \frac{1}{\kappa}\Big(\frac{1}{m^{(p-2)/2}}+\frac{1}{n^{(p-2)/2}}\Big)^{1/(2p)}.
\]
It follows that $\limsup_{m,n\to\infty}\mathbb{E}\big[\sup_{0\le t\le T}|X^n_t-X^m_t|\big] = 0$. Again from the estimate (3.2), we see that for any p ≥ 1 the family $(|X^n_t-X^m_t|^p)_{m,n}$ is uniformly integrable. Therefore, the following convergence holds for all p ≥ 1:
\[
\mathbb{E}\Big[\sup_{0\le t\le T}|X^n_t-X^m_t|^p\Big] \xrightarrow[m,n\to\infty]{} 0.
\]
It follows that $(X^n)_{n\ge1}$ is a Cauchy sequence in the space $S^p$ defined by
\[
S^p = \Big\{(\psi_t)_{0\le t\le T}\ F_t\text{-progressively measurable such that } \mathbb{E}\big[\sup_{0\le t\le T}|\psi_t|^p\big]<\infty\Big\},
\]
hence $(X^n)_{n\ge1}$ converges. To conclude this section, we show that the limit of $(X^n,\Lambda^n)_{n\ge1}$ is the solution of the reflected SDE (2.10). □
Proposition 3.4 For any p ≥ 1, we have
\[
\lim_{n\to\infty}\mathbb{E}\Big[\sup_{0\le t\le T}|X^n_t-X_t|^p\Big] = 0. \tag{3.18}
\]

PROOF. Let n ≥ 1. Applying Itô's formula to $|X^n_t-X_t|^2$, we get
\[
|X^n_t-X_t|^2 = 2\int_0^t\langle X^n_s-X_s,\, b(s,X^n_s)-b(s,X_s)\rangle ds + \frac12\int_0^t|\sigma(s,X^n_s)-\sigma(s,X_s)|^2 ds + 2\int_0^t\langle X^n_s-X_s,\, (\sigma(s,X^n_s)-\sigma(s,X_s))\, dW_s\rangle + 2\int_0^t(X^n_s-X_s)\, d\Lambda^n_s - 2\int_0^t(X^n_s-X_s)\, d\Lambda_s.
\]
The Lipschitz continuity of b and σ implies that
\[
|X^n_t-X_t|^2 \le c_{b,\sigma}\int_0^t|X^n_s-X_s|^2 ds + 2\int_0^t\langle X^n_s-X_s,\, (\sigma(s,X^n_s)-\sigma(s,X_s))\, dW_s\rangle - 2n\int_0^t\langle X^n_s-X_s,\, X^n_s-\pi(s,X^n_s)\rangle ds + 2\int_0^t(X^n_s-X_s)\, d\Lambda_s. \tag{3.19}
\]
Then, repeating the same computation as in the previous proof, we obtain
\[
-2n\int_0^t\langle X^n_s-X_s,\, X^n_s-\pi(s,X^n_s)\rangle ds \le cn\int_0^t|\pi(s,X^n_s)-\pi(s,X_s)|^2\, d(s,X^n_s)\, ds + 2n\int_0^t d(s,X^n_s)\, d(s,X_s)\, ds \le cn\int_0^t d(s,X^n_s)\,|X^n_s-X_s|^2\, ds + 2n\int_0^t d(s,X^n_s)\, d(s,X_s)\, ds.
\]
Since $(X_t)_{0\le t\le T}$ is the solution of the reflected SDE (2.10) in the time-dependent domain, $X_t\in\bar D_t$, which implies $d(t,X_t) = 0$ for any 0 ≤ t ≤ T. Therefore, the inequality (3.19) becomes
\[
|X^n_t-X_t|^2 \le M^n_t + c\int_0^t\big(1+n\, d(s,X^n_s)\big)\,|X^n_s-X_s|^2\, ds + 2\int_0^t(X^n_s-X_s)\, d\Lambda_s,
\]
where $M^n$ is a local martingale. Besides, note that $|X^n_s-X_s| \le |X^n_s-\pi(s,X^n_s)| + |\pi(s,X^n_s)-X_s| \le d(s,X^n_s) + c$, since $\pi(s,X^n_s)$ and $X_s$ belong to the bounded set $\bar D_T$. It follows that
\[
|X^n_t-X_t|^2 \le M^n_t + c\int_0^t\big(1+n\, d(s,X^n_s)\big)\,|X^n_s-X_s|^2\, ds + c\int_0^t\big(1+d(s,X^n_s)\big)\, d|\Lambda|_s.
\]
Again, thanks to the stochastic Gronwall inequality (see e.g. Lemma 2.3 in Ren-Wu [START_REF] Ren | Penalization of Reflected SDEs and Neumann Problems of HJB Equations[END_REF], Theorem 4 in Scheutzow [START_REF] Scheutzow | A stochastic Gronwall lemma[END_REF]), we get, for p > 2, q > 1 with 1/p + 1/q = 1,
\[
\mathbb{E}\Big[\sup_{0\le t\le T}|X^n_t-X_t|\; e^{-\frac{c_{b,\sigma,r}}{2}\int_0^T n\, d(s,X^n_s)\, ds}\Big] \le 3c\,\mathbb{E}\Big[\Big(\int_0^T d(s,X^n_s)\, d|\Lambda|_s\Big)^{1/2}\Big] \le c\,\mathbb{E}\Big[\Big(\sup_{0\le t\le T}d(t,X^n_t)\,|\Lambda|_T\Big)^{1/2}\Big] \le c\,\Big(\Big(\mathbb{E}\Big[\sup_{0\le t\le T}d(t,X^n_t)^p\Big]\Big)^{1/p}\Big(\mathbb{E}\big[|\Lambda|_T^q\big]\Big)^{1/q}\Big)^{1/2}.
\]
Thanks to (2.14) and (3.9), we deduce that the right-hand side tends to zero as n → ∞. The remainder of the proof is similar to the proof of Proposition 3.3. □

Note that the above convergence implies the convergence of $\Lambda^n$ to Λ, as stated in the following corollary:

Corollary 3.1 For any p ≥ 1, we have
\[
\lim_{n\to\infty}\mathbb{E}\Big[\sup_{0\le t\le T}|\Lambda^n_t-\Lambda_t|^p\Big] = 0. \tag{3.20}
\]

PROOF. Recall that
\[
\Lambda^n_t - \Lambda_t = X^n_t - X_t + \int_0^t\big(b(r,X_r)-b(r,X^n_r)\big)\, dr + \int_0^t\big(\sigma(r,X_r)-\sigma(r,X^n_r)\big)\, dW_r.
\]
Then, using the Lipschitz continuity of b and σ, we get, for all p ≥ 1,
\[
|\Lambda^n_t-\Lambda_t|^p \le c\Big(|X^n_t-X_t|^p + \Big|\int_0^t\big(b(r,X_r)-b(r,X^n_r)\big)\, dr\Big|^p + \Big|\int_0^t\big(\sigma(r,X_r)-\sigma(r,X^n_r)\big)\, dW_r\Big|^p\Big).
\]
Taking the expectation, together with the BDG inequality and the convergence (3.18), we obtain the result. □

Remark 3.1 It should be pointed out that the smoothness of the direction of reflection (2.5) and the assumption on the boundary of the domain (2.6) are not required to obtain the convergence of the approximation (3.1). However, when these conditions are not satisfied, the existence and uniqueness of a solution of the reflected SDE (2.10), as well as its properties, are no longer ensured within the geometric setting of Lundström-Önskog [START_REF] Lundström | Stochastic and partial differential equations on nonsmooth timedependent domains[END_REF].
In another framework, when the SDE (2.10) is reflected in $D \subset R^{d+1}$, assumed to be at least an $H^2$ time-dependent domain, the cone of unit inward normal vectors reduces to a unique element and the time sections of D satisfy the uniform exterior and interior sphere condition. Therefore, the existence and uniqueness of the solution of the reflected SDE (2.10) within this geometric setting, as well as the estimate (2.14), follow without the need for assumptions (2.5) and (2.6) (see Theorem 3.2, Theorem 3.4 and Proposition 3.5 in Costantini-Gobet-El Karoui [START_REF] Costantini | Boundary sensitivities for diffusion processes in time dependent domains[END_REF]). Moreover, with this type of regularity, the control of the distance $d(t,X^n_t)$ is obtained without resorting to the smooth approximation of the domain given in Lemma 3.1.

Remark 3.2 In the case of time-independent domains, the estimates (3.2) coincide with the estimates obtained for the approximation of reflected SDEs in a regular convex domain given in Bahlali-Maticiuc-Zalinescu [START_REF] Bahlali | Penalization method for a nonlinear Neumann PDE via weak solution of reflected SDEs[END_REF] and Menaldi [START_REF] Menaldi | Stochastic Variational Inequality for Reflected Diffusion[END_REF]. We also note that the convergence results (3.18) and (3.20) are similar to the results of convergence obtained in time-independent domains: for convex domains (Menaldi [10] and Slominski [START_REF] Slominski | Weak and strong approximations of reflected diffusions via penalization methods[END_REF]) and for non-convex domains (see e.g. Lions-Sznitman [START_REF] Lions | Stochastic differential equations with reflecting boundary conditions[END_REF] and Ren-Wu [START_REF] Ren | Penalization of Reflected SDEs and Neumann Problems of HJB Equations[END_REF]).
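As a side remark, a rate for the excursion of the penalized paths outside the domain follows immediately from (3.9) via Jensen's inequality; this consequence holds under the same assumptions, although it is not stated explicitly above:
\[
\mathbb{E}\Big[\sup_{0\le t\le T} d(t,X^n_t)\Big] \;\le\; \Big(\mathbb{E}\Big[\sup_{0\le t\le T} d(t,X^n_t)^p\Big]\Big)^{1/p} \;\le\; \frac{c^{1/p}}{n^{(p-2)/(2p)}},
\]
so that, for every δ > 0, choosing p large enough yields an excursion of order $O(n^{-1/2+\delta})$.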
Now, let $(t,x) \in D$ be fixed. We define $(X^{t,x}_s, \Lambda^{t,x}_s)_{t\leqslant s\leqslant T}$ as the unique solution of: for all $t \leqslant s \leqslant T$,
$$\begin{cases}
X^{t,x}_s = x + \displaystyle\int_t^s b(r,X^{t,x}_r)\,dr + \int_t^s \sigma(r,X^{t,x}_r)\,dW_r + \int_t^s \vec n(r,X^{t,x}_r)\,d|\Lambda^{t,x}|_r;\\[1mm]
X^{t,x}_s \in \overline{D}_s,\qquad |\Lambda^{t,x}|_s = \displaystyle\int_t^s \chi_{\{X^{t,x}_r \in \partial D_r\}}\,d|\Lambda^{t,x}|_r.
\end{cases} \qquad (4.1)$$
Then, let $m \geqslant 1$ and let us introduce the following functions:
$$\begin{aligned}
f &: (t,x,\vec y,z) \in [0,T]\times\mathbb{R}^d\times\mathbb{R}^m\times\mathbb{R}^{m\times n} \longmapsto f(t,x,\vec y,z) = \big(f_i(t,x,\vec y,z_i)\big)_{i=1,\dots,m} \in \mathbb{R}^m,\\
\psi &: (t,x,\vec y) \in [0,T]\times\mathbb{R}^d\times\mathbb{R}^m \longmapsto \psi(t,x,\vec y) = \big(\psi_i(t,x,\vec y)\big)_{i=1,\dots,m} \in \mathbb{R}^m,\\
h &: x \in \mathbb{R}^d \longmapsto h(x) = \big(h_i(x)\big)_{i=1,\dots,m} \in \mathbb{R}^m.
\end{aligned}$$
From now on, we make the following assumptions:

$(\mathbf{H_0})$ The function $h$ is continuous and of polynomial growth.

$(\mathbf{H_1})$ (i) $(t,x) \longmapsto f(t,x,\vec y,z)$ and $(t,x) \longmapsto \psi(t,x,\vec y)$ are uniformly continuous with respect to $(\vec y,z)$ and $\vec y$ respectively.
(ii) $(t,x) \longmapsto f(t,x,0,0)$ and $(t,x) \longmapsto \psi(t,x,0)$ are of polynomial growth.
(iii) $f$ and $\psi$ are Lipschitz continuous with respect to $(\vec y,z)$ and $\vec y$ respectively.
(iv) There exists $\beta < 0$ such that $\langle y - \bar y,\, \psi(t,x,y) - \psi(t,x,\bar y)\rangle \le \beta\,|y - \bar y|^2$.

Associated BSDEs

Now, let $(X^{t,x}_s, \Lambda^{t,x}_s)_{t\leqslant s\leqslant T}$ be the unique solution of the reflected SDE (4.1). We introduce the associated multidimensional generalized BSDE: for all $t \leqslant s \leqslant T$,
$$Y^{t,x}_s = h(X^{t,x}_T) + \int_s^T f(r,X^{t,x}_r,Y^{t,x}_r,Z^{t,x}_r)\,dr + \int_s^T \psi(r,X^{t,x}_r,Y^{t,x}_r)\,d|\Lambda^{t,x}|_r - \int_s^T Z^{t,x}_r\,dW_r, \qquad (4.2)$$
which has a unique solution, denoted $(Y^{t,x}, Z^{t,x})$, thanks to Theorem 1.6 in Pardoux-Zhang. Moreover, the following estimates hold:
$$\mathbb{E}\Big[\sup_{t\leqslant s\leqslant T}|Y^{t,x}_s|^2 + \int_t^T |Y^{t,x}_r|^2\,d|\Lambda^{t,x}|_r + \int_t^T \|Z^{t,x}_r\|^2\,dr\Big] < \infty. \qquad (4.3)$$
Next, let $n \geqslant 1$ and let $X^{t,x,n}$ be the unique solution of the penalized SDE: for all $t \leqslant s \leqslant T$,
$$\begin{cases}
X^{t,x,n}_s = x + \displaystyle\int_t^s b(r,X^{t,x,n}_r)\,dr + \int_t^s \sigma(r,X^{t,x,n}_r)\,dW_r + \Lambda^{t,x,n}_s;\\[1mm]
\Lambda^{t,x,n}_s = -n\displaystyle\int_t^s \big(X^{t,x,n}_r - \pi(r,X^{t,x,n}_r)\big)\,dr,\qquad |\Lambda^{t,x,n}|_s = n\displaystyle\int_t^s \big|X^{t,x,n}_r - \pi(r,X^{t,x,n}_r)\big|\,dr.
\end{cases} \qquad (4.4)$$
Then, consider the associated sequence of standard BSDEs: for all $n \geqslant 1$ and $t \leqslant s \leqslant T$,
$$Y^{t,x,n}_s = h(X^{t,x,n}_T) + \int_s^T f(r,X^{t,x,n}_r,Y^{t,x,n}_r,Z^{t,x,n}_r)\,dr + \int_s^T \psi(r,X^{t,x,n}_r,Y^{t,x,n}_r)\,d|\Lambda^{t,x,n}|_r - \int_s^T Z^{t,x,n}_r\,dW_r. \qquad (4.5)$$
We first give the estimates for the penalized BSDE:

Proposition 4.1 Let $n \geqslant 1$ and let $(Y^{t,x,n}, Z^{t,x,n})$ be the solution of the penalized BSDE (4.5). Then we have the following estimates:
$$\forall n \geqslant 1,\qquad \mathbb{E}\Big[\sup_{t\leqslant s\leqslant T}|Y^{t,x,n}_s|^2 + \int_t^T |Y^{t,x,n}_r|^2\,d|\Lambda^{t,x,n}|_r + \int_t^T \|Z^{t,x,n}_r\|^2\,dr\Big] < c, \qquad (4.6)$$
where $c$ is a constant independent of $n$.

PROOF. From Itô's formula, we have:
$$\begin{aligned}
|Y^{t,x,n}_s|^2 + \int_s^T \|Z^{t,x,n}_r\|^2\,dr ={}& |h(X^{t,x,n}_T)|^2 + 2\int_s^T \big\langle Y^{t,x,n}_r,\, f(r,X^{t,x,n}_r,Y^{t,x,n}_r,Z^{t,x,n}_r)\big\rangle\,dr \\
&+ 2\int_s^T \big\langle Y^{t,x,n}_r,\, \psi(r,X^{t,x,n}_r,Y^{t,x,n}_r)\big\rangle\,d|\Lambda^{t,x,n}|_r - 2\int_s^T \big\langle Y^{t,x,n}_r,\, Z^{t,x,n}_r\,dW_r\big\rangle. \qquad (4.7)
\end{aligned}$$
Besides, using assumptions $(\mathbf{H_1})$(ii)-(iv), we have:
$$\big\langle Y^{t,x,n}_r,\, f(r,X^{t,x,n}_r,Y^{t,x,n}_r,Z^{t,x,n}_r)\big\rangle \le |Y^{t,x,n}_r|\,\big|f(r,X^{t,x,n}_r,Y^{t,x,n}_r,Z^{t,x,n}_r)\big| \le |Y^{t,x,n}_r|\,c\big(1 + |X^{t,x,n}_r| + |Y^{t,x,n}_r| + |Z^{t,x,n}_r|\big) \le c_{c_1}\big(1 + |X^{t,x,n}_r|^2 + |Y^{t,x,n}_r|^2\big) + c_1\|Z^{t,x,n}_r\|^2$$
and
$$\big\langle Y^{t,x,n}_r,\, \psi(r,X^{t,x,n}_r,Y^{t,x,n}_r)\big\rangle \le \big\langle Y^{t,x,n}_r,\, \psi(r,X^{t,x,n}_r,Y^{t,x,n}_r) - \psi(r,X^{t,x,n}_r,0)\big\rangle + \big\langle Y^{t,x,n}_r,\, \psi(r,X^{t,x,n}_r,0)\big\rangle \le \beta|Y^{t,x,n}_r|^2 + c_2|Y^{t,x,n}_r|^2 + c_{c_2}\big(1 + |X^{t,x,n}_r|^2\big),$$
where $c_{c_1}$ and $c_{c_2}$ denote constants depending on $c_1$ and $c_2$ respectively. Note that the constants $c_1$ and $c_2$ can be chosen such that $(-\beta - c_2) > 0$ and $(1 - c_1) > 0$.
Therefore, plugging the above bounds into (4.7) yields:
$$\begin{aligned}
|Y^{t,x,n}_s|^2 &+ (-\beta - c_2)\int_s^T |Y^{t,x,n}_r|^2\,d|\Lambda^{t,x,n}|_r + (1 - c_1)\int_s^T \|Z^{t,x,n}_r\|^2\,dr \\
&\le c\Big(1 + |X^{t,x,n}_T|^2 + \int_s^T |X^{t,x,n}_r|^2\,dr + \int_s^T |X^{t,x,n}_r|^2\,d|\Lambda^{t,x,n}|_r\Big) + c\int_s^T |Y^{t,x,n}_r|^2\,dr - 2\int_s^T \big\langle Y^{t,x,n}_r,\, Z^{t,x,n}_r\,dW_r\big\rangle.
\end{aligned}$$
Next, using the estimates (3.2) and taking the expectation, we get:
$$\mathbb{E}\Big[|Y^{t,x,n}_s|^2 + (-\beta - c_2)\int_s^T |Y^{t,x,n}_r|^2\,d|\Lambda^{t,x,n}|_r + (1 - c_1)\int_s^T \|Z^{t,x,n}_r\|^2\,dr\Big] \le c\Big(1 + \mathbb{E}\Big[\sup_{t\le r\le T}|X^{t,x,n}_r|^2\Big] + \mathbb{E}\big[|\Lambda^n|^q_{[t,s]}\big] + \int_s^T \mathbb{E}\big[|Y^{t,x,n}_r|^2\big]\,dr\Big).$$
Then, thanks to Gronwall's lemma, we obtain:
$$\sup_{t\le s\le T}\mathbb{E}\Big[|Y^{t,x,n}_s|^2 + \int_s^T |Y^{t,x,n}_r|^2\,d|\Lambda^{t,x,n}|_r + \int_s^T \|Z^{t,x,n}_r\|^2\,dr\Big] \le c.$$
Finally, it suffices to apply the B-D-G inequality to conclude. $\square$

The convergence of $(Y^{t,x,n}, Z^{t,x,n})_{n\geqslant 1}$ is stated in the following proposition:

Proposition 4.2 Let $(Y^{t,x,n}, Z^{t,x,n})_{n\geqslant 1}$ and $(Y^{t,x}, Z^{t,x})$ be the unique solutions of the sequence of BSDEs (4.5) and of the generalized BSDE (4.2) respectively. Then the following convergences hold true:
$$\lim_{n\to\infty}\mathbb{E}\Big[\sup_{t\le s\le T}|Y^{t,x,n}_s - Y^{t,x}_s|^2 + \int_t^T \|Z^{t,x,n}_r - Z^{t,x}_r\|^2\,dr\Big] = 0. \qquad (4.8)$$

Remark 4.2 The convergence (4.8) was established in Bahlali-Boufoussi-Mouchtabih [3] when $(X^{t,x}_s, \Lambda^{t,x}_s)_{t\leqslant s\leqslant T}$ is the solution of the reflected SDE (4.1) in a regular convex time-independent domain. The proof is very technical and relies essentially on the convergence results and properties of the penalized SDE (4.4) established in Bahlali-Maticiuc-Zalinescu, Boufoussi-van Casteren and Slominski, and on the properties of the reflected SDE in a regular convex time-independent domain that can be found in Lions-Sznitman, Tanaka and Menaldi [10]. The properties of the solution $(X^{t,x}_s, \Lambda^{t,x}_s)_{t\leqslant s\leqslant T}$ and the convergence results obtained in Section 2 and Section 3 are the generalizations of these results within our geometric setting. Therefore, the proof of Proposition 4.2 can be obtained by mimicking the proof of Theorem 6 in Bahlali-Boufoussi-Mouchtabih [3]; we omit any further details.

Associated PDEs

Now, we are in a position to get an approximation of PDEs with nonlinear Neumann boundary condition on time-dependent domains. Let us set $D = D' \cap \big([0,T] \times \mathbb{R}^d\big)$. Then, following the notation of Lundström-Önskog [9], let us recall the spaces:
$$D^o = D' \cap \big([0,T) \times \mathbb{R}^d\big),\qquad \partial D = \overline{D'} \setminus \big(D' \cap ([0,T) \times \mathbb{R}^d)\big).$$
Next, let us consider the following system of PDEs: for all $i = 1,\dots,m$,
$$\begin{cases}
\partial_t u_i(t,x) + \mathcal{L} u_i(t,x) + f_i\big(t,x,u(t,x),\sigma^\top(t,x) D_x u_i(t,x)\big) = 0, & (t,x) \in D^o;\\[1mm]
\dfrac{\partial u_i}{\partial \vec n}(t,x) + \psi_i\big(t,x,u(t,x)\big) = 0, & (t,x) \in \partial D;\\[1mm]
u(T,x) = h(x), & x \in D_T,
\end{cases} \qquad (4.9)$$
where the operator $\mathcal{L}$ is defined by $\mathcal{L} = \frac12 \mathrm{Tr}\big(\sigma\sigma^\top D^2_{xx}(\cdot)\big) + b^\top D_x(\cdot)$ and, at a point $(t,x) \in \partial D$, we set $\frac{\partial}{\partial \vec n} = \langle \vec n(t,x), D_x(\cdot)\rangle$.

Let $u : D \to \mathbb{R}^m$ be the deterministic function defined through $Y^{t,x}$, the solution of the multidimensional generalized BSDE (4.2), as follows:
$$u_i(t,x) := Y^{t,x,i}_t,\qquad \forall i = 1,\dots,m. \qquad (4.10)$$
It follows from Theorem 2 in Jakani that $u$ is a viscosity solution of the system of PDEs with boundary condition of Neumann type on the time-dependent domain (4.9), in the sense of Definition 2.2 in Jakani.

Finally, let $(Y^{t,x,n}, Z^{t,x,n})_{n\geqslant 1}$ be the unique solution of (4.5). It is well known that the sequence of deterministic functions $(u^n)_{n\geqslant 1}$, given by $u^n(t,x) = Y^{t,x,n}_t$ for any $n \geqslant 1$, is a viscosity solution of the following system of PDEs: for all $x \in \mathbb{R}^d$, $0 \leqslant t < T$ and $i = 1,\dots,m$,
$$\begin{cases}
\partial_t u^n_i(t,x) + \mathcal{L} u^n_i(t,x) + f_i\big(t,x,u^n(t,x),\sigma^\top(t,x) D_x u^n_i(t,x)\big) - n\,\psi_i\big(t,x,u^n(t,x)\big)\,\big\langle \vec n(t,x),\, x - \pi(t,x)\big\rangle = 0,\\[1mm]
u^n(T,x) = h(x).
\end{cases} \qquad (4.11)$$
As an application of the approximation provided for the generalized BSDEs (4.5), we obtain an approximation for the system of PDEs (4.9):

Proposition 4.3 The following convergence holds:
$$\forall (t,x) \in D,\ \forall i = 1,\dots,m,\qquad \lim_{n\to\infty} u^n_i(t,x) = u_i(t,x). \qquad (4.12)$$

PROOF. Let $(t,x) \in D$. Then, from the definitions of $(u^n_i)_{i=1,\dots,m}$ and $(u_i)_{i=1,\dots,m}$, it follows that, for all $i = 1,\dots,m$:
$$\lim_{n\to\infty} |u^n_i(t,x) - u_i(t,x)|^2 = \lim_{n\to\infty} |Y^{t,x,n,i}_t - Y^{t,x,i}_t|^2 \le \lim_{n\to\infty} \mathbb{E}\Big[\sup_{t\le s\le T}|Y^{t,x,n,i}_s - Y^{t,x,i}_s|^2\Big] = 0. \qquad \square$$
03713603
en
[ "sdv" ]
2024/03/04 16:41:26
2012
https://hal.uvsq.fr/hal-03713603/file/Chapitre%20P27-1.pdf
Using an ELISA test, we found that the antibody immune response to P27-PPE36 in the sera of patients was dominated by an IgA response, accompanied by the absence of an IgG response. The immune response against the P27-PPE36 protein was also investigated in mice, in the context of different pathogen-associated molecular patterns (PAMPs). BALB/c mice were immunized either with the P27-PPE36 recombinant protein in Freund's adjuvant or in phosphate-buffered saline (PBS), with a pcDNA3 plasmid containing the gene encoding the P27-PPE36 protein, or with Escherichia coli bacteria expressing the P27-PPE36 protein genetically fused into the flagellin. We found that P27-PPE36 expressed into the flagellin led to the strongest cellular responses, with the highest production of IFN-γ and the strongest cell proliferation, an indication of a specific Th1-like orientation of the immune response.
¶(14pt) 2. Early works on Rv2108 and genetic analysis ¶(14pt) Mtb PCR-based assay detection test ¶(6pt)
In an Mtb PCR-based detection assay, one pair of primers and two oligonucleotide probes were successfully used to amplify and detect the DNA of strains belonging to the M. tuberculosis complex. These primers and probes did not hybridize with DNA from any of the 21 other mycobacterial species tested (M. avium, M. intracellulare, M. gordonae, M. chelonae, M. xenopi, M. kansasii, M. peregrinum, M. fortuitum, M. terrae, M. interjectum, M. genavense, M. paratuberculosis and M. szulgaï, among others). No ortholog of Rv2108 was initially found in the genomes of M. marinum, M. ulcerans (strain Agy99) or M. avium subsp. paratuberculosis (Stinear et al.), and this despite the fact that some bacteria, such as M. marinum, have a higher number of PPE genes than M. tuberculosis (106 vs. 69) (Stinear et al.). However, another analysis found an Rv2108 ortholog in the same strain (Agy99) of Mycobacterium ulcerans (Riley et al.). This work also shows that the Rv2108 gene is deleted in strain C of Mycobacterium tuberculosis, while it is present in the strains CDC1551, F11, H37Rv and Haarlem, as well as in the two strains of Mycobacterium bovis tested (BCG strain Pasteur 1173 and AF2122/97). According to another study (Gey van Pittius et al.), Rv2108 has no orthologues in M. smegmatis, M. avium subsp. paratuberculosis, M. leprae, M. ulcerans or M. marinum. These results confirm the interest of this gene as a diagnostic tool. M. tuberculosis has become highly specialized for intracellular survival in a very restricted range of mammalian hosts, and several recent studies have shown that lateral gene transfer (LGT) has been a major force in the evolution of the M. tuberculosis complex from an environmental Mycobacterium (Kinsella et al.; Gutierrez et al.; Rosas-Magallanes et al.; Becq et al.). In fact, Rv2108 appears to belong to one of the 80 regions (the minimal number identified, containing 360 protein-coding sequences (CDS)) that have probably been acquired by LGT in Mtb (Stinear et al.). Whether acquired by LGT or by other means, some of these M. tuberculosis-specific regions contain known virulence genes or code for adaptation factors, making them potentially important for bacteria belonging to the Mtb complex.
¶(14pt) Genomic organization ¶(6pt)
Analysis of the genomic environment of the Rv2108 gene reveals that it is situated downstream of a member of the pe gene family, Rv2107, coding for the PE22 protein (Fig. 1). These adjacent Rv2107 and Rv2108 genes lie in the same orientation. It can be noted that an IS6110 insertion site is localized between these two genes in the strains H37Rv and CDC1551 (Beggs et al.; Sampson et al., 2001). Genome analysis by the operon/gene cluster method (Strong et al.; Bowers et al.) suggests that the PE and PPE families are functionally linked (Gey van Pittius et al.; Tekaia et al.; Strong et al.; Tundup et al.); that is, pe and ppe genes tend to be in close chromosomal proximity on the Mtb genome (Strong et al.; Bowers et al.). Based on their short intergenic distance (56 bp) and their identical transcription direction, Rv2107 and Rv2108 were assumed to belong to the same operon (Fig. 1) and thus to be co-transcribed.
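This same-strand, short-gap operon criterion is simple enough to mechanize. The Python sketch below illustrates it on invented coordinates; the gene positions and the 100 bp threshold are placeholders chosen for illustration, not values taken from the operon-prediction methods cited above (only the 56 bp Rv2107-Rv2108 gap is from the text).

# Hypothetical sketch of the same-operon heuristic: adjacent genes on the
# same strand separated by a short intergenic gap are grouped together.
# Coordinates below are illustrative placeholders, not real H37Rv values.
genes = [
    dict(name="Rv2107", start=1000, end=1400, strand="+"),   # pe22
    dict(name="Rv2108", start=1456, end=2187, strand="+"),   # ppe36 (56 bp gap)
    dict(name="Rv2109c", start=2500, end=3200, strand="-"),
]

def predicted_operons(genes, max_gap=100):
    """Group consecutive same-strand genes whose intergenic distance
    does not exceed max_gap (threshold chosen for illustration)."""
    genes = sorted(genes, key=lambda g: g["start"])
    operons, current = [], [genes[0]]
    for prev, nxt in zip(genes, genes[1:]):
        gap = nxt["start"] - prev["end"]
        if nxt["strand"] == prev["strand"] and gap <= max_gap:
            current.append(nxt)
        else:
            operons.append(current)
            current = [nxt]
    operons.append(current)
    return operons

for op in predicted_operons(genes):
    print([g["name"] for g in op])
# -> ['Rv2107', 'Rv2108'] grouped together, 'Rv2109c' alone.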
In the Mtb genome, these same-operon PE/PPE pairs comprise less than 10% of the total number of pe and ppe genes (14 pairs of PE and PPE genes, in the same orientation and at minimal intergenic distance, are found adjacent in the genome) (Riley et al.). Genes separated by short intergenic sequences tend to have related functions and to interact physically (Jacob et al.). The structure of a complex of one PE/PPE protein pair was recently characterized (Strong et al.; Tundup et al.). These results indicate that there may be many other instances of interactions between PE and PPE proteins. Like the PE and PPE proteins encoded by the genes Rv2431c (PE25) and Rv2430c (PPE41), which interact in vitro and probably in vivo (Strong et al.; Tundup et al.), PPE36 and PE22 most probably behave in the same way. In fact, computational methods predict that the PE22/PPE36 interaction probability is almost the strongest of all the PE/PPE combinations tested (Riley et al.). Furthermore, according to this analysis, this putative complex is predicted to interact specifically; that is, PPE36 does not appear to interact with PEs other than its operon partner PE22, and vice versa (Riley et al.), although this prediction would need to be confirmed experimentally. However, since Rv2108 is absent in M. tuberculosis strain C and Rv2107 is absent in M. tuberculosis strain F11, it is possible that another interacting partner is able to bind the orphaned gene product, possibly restoring the PE/PPE complex's function, or introducing new complexes that help these strains survive in their environmental niches (Riley et al.). A putative PE22/PPE36 interaction would probably take the form of a 1:1 heterodimeric complex (Strong et al.). In their study, the authors found, as we did (Le Moigne et al.), that PPE36 is insoluble when expressed alone.
The association with the related PE protein would lead to a soluble complex: their experiments showed that the proteins PE Rv2431c and PPE Rv2430c, which are insoluble when expressed on their own, are soluble when expressed together (Strong et al.; Tundup et al.).
¶(9pt) Fig. 1. Genomic environment of the Rv2108 gene. ¶(9pt)
¶(14pt) Regulation of Rv2108 expression ¶(6pt)
A fundamental step in understanding the role of pe and ppe genes is elucidating how their expression is regulated. Although various studies have demonstrated that pe and ppe genes are expressed under a range of in vitro and in vivo conditions, they have not revealed any obvious indication of global pe and ppe gene regulation (Voskuil et al., 2004 (b)). The group of Voskuil and Smith has tested a large variety of conditions to analyse gene expression in Mtb (Manganelli et al.; Sherman et al.; Rodriguez et al.; Schnappinger et al.; Voskuil et al.; Voskuil et al., 2004 (a); Voskuil et al., 2004 (b)). Among them, only two conditions induce a variation in Rv2108 expression (of at least two-fold): Rv2108 expression is repressed in the presence of 0.05% sodium dodecyl sulfate (SDS) for 90 min (Manganelli et al.), as well as after 14 days of stationary-phase culture (Voskuil et al., 2004 (a)). The other conditions (IFN-γ-activated macrophages, 24 h; diethylenetriamine/nitric oxide adduct (DETA/NO or DNO) 0.5 mM, 40 min; hydrogen peroxide (H2O2) 10 mM, 40 min; hypoxia (oxygen from 20% to 0.20%), 2 h (Sherman et al.; Rustad et al.); palmitic acid 50 µM, 4 h; non-replicating persistence (NRP) dormancy model, 20 days; high vs. low iron; diamide 5 mM, 1 h; potassium cyanide (KCN) 0.5 mM, 1 h; carbonyl cyanide 3-chlorophenylhydrazone (CCCP) 0.5 mM, 1 h; ethambutol 10 µM, 24 h; nutrient starvation, 24 h (Betts et al.); heat shock (45°C), 30 min (Stewart et al.); acid shock (pH 5.5 vs. 6.9) (Fisher et al.)) do not appear to generate a variation (of more than two-fold) in Rv2108 expression. The associated pe gene, Rv2107 (pe22), was found to be induced in macrophage cultures of Mtb (Schnappinger et al.) and in the presence of 0.5 mM DETA/NO (Voskuil et al.). Conversely, Park et al. found that the Rv2108 gene is induced by hypoxia (although this needs confirmation, since the standard error is elevated). However, contrary to the majority of genes powerfully regulated by hypoxia, its induction does not require the putative transcription factor Rv3133/DosR. As is the case for the majority of the other ppe genes (54 of 69), Lsr2, a small basic protein highly conserved in mycobacteria that binds DNA and is implicated in gene regulation, is able to bind the Rv2108 sequence (Gordon et al.). The binding of Lsr2 to the majority of pe/ppe genes suggests that this factor may negatively affect the expression of these antigenic proteins to modulate interactions with the host. More generally, Rv2108 shows low expression in the diverse M. tuberculosis strains that have been tested (Gao et al.), and there does not seem to be any difference in Rv2108 gene expression between M. bovis and M. tuberculosis in microarray analyses (Rehren et al.). Furthermore, a study showed, by high-density mutagenesis experiments, that Rv2108 is not an essential gene for mycobacterial growth (Sassetti et al.). In these experiments, only three pe and ppe genes met the criteria defining growth-attenuating mutations (Rv1807, Rv3872 and Rv3873). Although mutations in several other pe and ppe genes appeared to cause subtle defects, the fact that such a small fraction is detected in this system suggests either that most of these genes are able to functionally complement each other, or that they are required under conditions that have not been tested. In the same study, the Mycobacterium leprae gene ML0411 is presented as an orthologue of Rv2108. In the Sanger Institute Mycobacterium leprae genome project, ML0411 is indeed described as being similar to Rv2108. ML0411 codes for a 408-amino-acid protein named serine-rich antigen (Sra), which has been extensively described (Vega-Lopez et al., 1993; de Wit et al.; Macfarlane et al.; Parkash et al.). Rv2108 belongs to the 27% of genes that are not required for in vitro growth and yet have M. leprae orthologues, whereas the majority (78%) of the genes predicted to be required for the optimal growth of M. tuberculosis have an orthologue in the M. leprae genome. Thus, M. leprae appears to have selectively conserved the majority of genes that are necessary for optimal growth (Cole et al.).
¶(14pt) Characterization of the P27-PPE36 protein ¶(6pt)
The Rv2108 nucleotide sequence encodes a protein of 243 amino acids. The P27-PPE36 antigen belongs to the PPE protein family, a large family of proteins present in Mtb, which represents 3% of the genome of this bacterium (Cole et al.). Together with the related PE protein family, they account for 10% of the genome. These families appear to have originated in the fast-growing mycobacterial species before undergoing extensive expansion and diversification in certain slow-growing species, particularly M. ulcerans, M. marinum and members of the M. tuberculosis complex (Gey van Pittius et al.). This asparagine- or glycine-rich protein family, containing 69 members, has been termed PPE after the characteristic Pro-Pro-Glu motif near the N-terminus, at positions 8-10. The relatively conserved N-terminal domain is about 180 amino acids long, while the C-terminal segments vary in sequence and length. According to this C-terminal region, the PPE proteins are classified into four subfamilies: the first subfamily (24 members), named PPE-SVP, has the well-conserved motif Gly-X-X-Ser-Val-Pro-X-X-Trp located approximately at position 350; the second (23 members) constitutes the major polymorphic tandem repeat (MPTR) subfamily and is characterized by the presence of multiple tandem repeats of the motif Asn-X-Gly-X-Gly-Asn-X-Gly; the third subfamily (10 members), named PPE-PPW, is characterized by a highly conserved region comprising Gly-Phe-X-Gly-Thr and Pro-X-X-Pro-X-X-Trp motifs; and the last PPE subfamily (12 members) includes proteins with a low percentage of homology at the C-terminus, unrelated other than by having the PPE motif (Gordon et al.; Adindla et al.; Gey van Pittius et al.). P27-PPE36 belongs to this last subfamily. A recent phylogenetic analysis of the 69 ppe genes present in the M. tuberculosis reference strain H37Rv has uncovered their evolutionary relationships and revealed that they can be divided into 5 sublineages, which globally match the subfamilies described above (Gey van Pittius et al.).
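Since the four subfamilies are defined by C-terminal sequence motifs, a crude first-pass classification can be written as a regular-expression screen. The Python sketch below encodes the motifs quoted above, with X standing for any residue; it deliberately ignores the approximate position (~350) of the SVP motif and the length heterogeneity of real PPE sequences, so it is only an illustration of the definitions, not a validated classifier.

import re

# Motifs quoted in the text, with X = any amino acid.
MOTIFS = {
    "PPE-SVP": r"G..SVP..W",        # Gly-X-X-Ser-Val-Pro-X-X-Trp
    "MPTR": r"(N.G.GN.G){2,}",      # tandem repeats of Asn-X-Gly-X-Gly-Asn-X-Gly
    "PPE-PPW": r"GF.GT.*P..P..W",   # Gly-Phe-X-Gly-Thr ... Pro-X-X-Pro-X-X-Trp
}

def classify_ppe(seq):
    """Crude subfamily call from C-terminal motifs; PPE proteins matching
    no motif fall into the fourth, low-homology C-terminus group."""
    if not seq.startswith("M") or "PPE" not in seq[:15]:
        # PPE family hallmark: Pro-Pro-Glu near the N-terminus (positions 8-10).
        return "not a PPE protein?"
    for name, pattern in MOTIFS.items():
        if re.search(pattern, seq):
            return name
    return "PPE (low C-terminal homology)"  # the subfamily of P27-PPE36

# Toy sequence with the PPE motif at positions 8-10 and no C-terminal motif:
toy = "MSFVVTIPPE" + "A" * 200
print(classify_ppe(toy))  # -> PPE (low C-terminal homology)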
Rv2108 is classified in sublineage III, showing the greatest similarity to Rv3892c. The role of the PPE proteins is still unknown. They were first thought to be implicated in increasing antigenic variation and immune evasion, owing to the highly polymorphic nature of their C-terminal domains (Cole et al.; Karboul et al.). In this respect, an interesting study by Plotkin et al. shows that PE/PPE proteins are under strong selection for amino-acid substitutions. The authors calculate the volatility of codons, which is the proportion of their point-mutation neighbours that encode different amino acids. The volatility of a codon is used to quantify the chance that the most recent nucleotide mutation to that codon caused an amino-acid substitution. According to their calculation, Rv2108 has a volatility value of 0.1029, which places it at rank 594 among the 4,099 genes for which values were calculated, counting from the highest volatility. Furthermore, in agreement with the theory of a role in antigenic variation, it has been observed that many PPE proteins present high levels of polymorphism, for example PPE38 (Rv2352c), PPE39 (Rv2353c) and PPE40 (Rv2356c) (McEvoy et al.), PPE34 (Rv1917c) (Sampson et al., 2001 (a)), PPE42 (Rv2608) (Chakhaiyar et al.), PPE8 (Rv0355c) (Srivastava et al.) and PPE18 (Rv1196) (Hebert et al.), and sequence variation has been observed between the orthologues of the PE and PPE protein families in in silico analyses of the sequenced genomes of M. tuberculosis H37Rv, M. tuberculosis CDC1551 and M. bovis (Gordon et al.; Fleischmann et al.; Garnier et al., 2003). However, this variability cannot be extended to all pe/ppe family members, since some are in fact conserved across strains and species (Cubillos-Ruiz et al.).
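The codon-volatility statistic used by Plotkin et al. is easy to recompute from the standard genetic code: for each codon, one enumerates the nine single-nucleotide neighbours and counts the fraction that encode a different amino acid. The sketch below is a minimal Python illustration; counting conventions differ between implementations (for instance in the treatment of stop-codon neighbours, which are simply skipped here), so the values it prints are not expected to reproduce the published ranking exactly.

from itertools import product

BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG")
# Standard genetic code, codons ordered TTT, TTC, TTA, TTG, TCT, ...
CODE = {a + b + c: AMINO[16 * BASES.index(a) + 4 * BASES.index(b) + BASES.index(c)]
        for a, b, c in product(BASES, repeat=3)}

def volatility(codon):
    """Fraction of one-point-mutation neighbours encoding a different
    amino acid (stop-codon neighbours skipped in this sketch)."""
    aa = CODE[codon]
    changed = total = 0
    for pos in range(3):
        for base in BASES:
            if base == codon[pos]:
                continue
            neighbour = codon[:pos] + base + codon[pos + 1:]
            if CODE[neighbour] == "*":
                continue
            total += 1
            changed += CODE[neighbour] != aa
    return changed / total

print(volatility("TGG"))  # Trp: every non-stop neighbour changes the amino acid
print(volatility("CTG"))  # Leu: many synonymous neighbours, hence low volatility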
It has since been suggested that the PPE proteins may play a role in the virulence of Mtb (Rindi et al.; Li et al.), in the maintenance of bacterial growth in macrophages (Camacho et al.; Dubnau et al.; Hou et al.; Li et al.; Sassetti et al.) and in the regulation of the bacterial iron-starvation and oxidative-stress responses (Rodriguez et al.). In addition, PPE proteins might be targets of the protective immune response in experimental mouse models (Skeiky et al.). The hypothesis has also been put forward that PPE proteins, owing to their abundance of asparagine, could have a storage function for this amino acid, which is one of the preferred nitrogen sources of the tubercle bacilli (Tekaia et al.). Some PPE proteins, such as PPE31 (Rv1807), could be involved in protection from antibiotic stress targeting the envelope and could help to confer the basal level of Mtb resistance to antibacterial drugs (Provvedi et al.). Many PPE proteins are also known to induce strong T-cell and B-cell responses and to associate with the cell wall. Following surface exposure, these PPE proteins could act as agonists of various surface receptors of APCs, resulting in modulation of the host immune responses (Choudhary et al.; Tundup et al.; Mishra et al.; Chaitra et al., 2008 (a); Chaitra et al., 2008 (b)).
Recently, two PPE proteins, PPE18 (Rv1196) and PPE34 (Rv1917c), were found to interact specifically with the innate immune receptor TLR2 (Nair et al.; Bansal et al.). Very little is known about the protein encoded by the Rv2108 gene. The theoretical properties of the P27-PPE36 protein include a low pI (4.59) and an amino-acid composition dominated by alanine (12%) and glutamic acid (9%). Secondary-structure prediction indicates that the protein would be mainly composed of α-helices (58.5%), with no β-sheet; the remaining amino acids (31%) would be in random coil.
¶(14pt) Expression and purification of the PPE36 protein ¶(6pt)
The Rv2108 gene was amplified, inserted into bacterial vectors, sequenced, and expressed as a recombinant protein, using either the GST system (pGEX-4T-3) in E. coli DH5α or the pET system (pET15b) in E. coli BL21 (DE3). Induction of the P27-PPE36 protein in these various expression systems led to the expression of a protein with an apparent molecular mass of 43 kDa in sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) analysis (Fig. 2A and B).
¶(9pt) Fig. 2. SDS-PAGE analysis of the recombinant protein: (A) PPE36; (B) GST-PPE36 and GST. ¶(9pt)
This value was higher than the theoretical mass of 27 kDa predicted by translation of its DNA sequence. Mass spectrometric analysis of the protein expressed in the pET system revealed a molecule at 29 kDa, which corresponds to the estimated mass of the putative P27-PPE36 protein plus 2 kDa for the polyhistidine fusion tag (Le Moigne et al.). This result was confirmed by partial sequencing of the N-terminal region of the recombinant protein. The reason for this difference may lie in the nature of the P27-PPE36 protein, which belongs to a family of intrinsically unstructured proteins (IUP) with an atypical amino-acid composition (Tompa et al.). It notably presents a high proportion of proline dimers (3 in 243 amino acids). These proteins bind less SDS than most other proteins, and their apparent molecular mass is often 1.2-1.8 times higher than the real value calculated from sequence data or measured by mass spectrometry (Dunker et al.). Such abnormal electrophoretic migration has been observed for another protein belonging to the PE protein family of Mtb: the product of the gene Rv1441c has an apparent molecular weight of about 60 kDa instead of a theoretical MW of 40.7 kDa (Banu et al.). Generally, PE and PPE proteins do not express well, or are expressed in insoluble or unfolded forms (Strong et al.).
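As an aside, the theoretical values quoted above (pI, amino-acid composition, helix content) can be recomputed from the translated sequence alone, for instance with Biopython's ProtParam module, as sketched below. The sequence shown is a placeholder to be replaced by the actual 243-residue Rv2108 translation, and the exact pI returned depends on the pK table used by the library, so small deviations from the 4.59-4.8 values quoted in this chapter are expected.

# Sketch of the in-silico characterization; requires Biopython.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder: substitute the real 243-aa translation of Rv2108 here.
ppe36_seq = "MDFALLPPEVNSALIY" + "A" * 227

pa = ProteinAnalysis(ppe36_seq)
print("length:", len(ppe36_seq))
print("theoretical pI:", round(pa.isoelectric_point(), 2))
print("MW (Da):", round(pa.molecular_weight()))
comp = pa.get_amino_acids_percent()
print("Ala fraction:", round(comp["A"], 3), "Glu fraction:", round(comp["E"], 3))
helix, turn, sheet = pa.secondary_structure_fraction()
print("helix-favouring fraction:", round(helix, 3))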
Our attempts to express P27-PPE36 as a recombinant protein confirmed this rule and led to an insoluble protein (Le Moigne et al.), as confirmed later by another study (Strong et al.). Given the lack of apparent transmembrane elements, a possible explanation for their failure to express on their own is that they need protein partners to fold (Strong et al.), as explained above in the Genomic organization paragraph.
¶(14pt) Physico-chemical characteristics of the PPE36 protein ¶(6pt)
Based on the DNA and protein sequences, the expected pI value of the P27 protein should be 4.8. To determine the pI value of the expressed P27 protein, two-dimensional gel electrophoresis was applied to cell lysates of the BCG strain. After transfer of the gel to a nitrocellulose membrane and blotting with the P27-PPE36-specific antibodies, only one spot, with a pI between 4.5 and 5 at the same molecular mass level as observed by SDS-PAGE, was recognized on the membrane (Le Moigne et al.).
¶(14pt) Anti-P27-PPE36 antibody production and localization of P27-PPE36 ¶(6pt)
Very little is known about the cellular localization of the PPE protein family: a 143-kDa PPE protein encoded by the Rv1917c gene (PPE34) was found to be a cell-wall-associated and probably surface-exposed protein (Sampson et al., 2001), as was the PPE68 protein (Rv3873 gene), located in the cell envelope (Pym et al.; Okkels et al.; Demangel et al.). We generated specific mouse monoclonal and rabbit polyclonal antibodies to P27-PPE36 and used them for the immunochemical characterization and cellular localization of this protein. Specific immunoblot analysis confirmed the presence of the P27-PPE36 antigen in the Mycobacterium bovis BCG strain and in human clinical isolates of M. tuberculosis from infected patients (Fig. 3), but not in the other mycobacteria tested, which do not belong to the Mtb complex (Le Moigne et al.).
¶(9pt) Fig. 3. Western blot analysis of bacterial lysates from different mycobacterial species. P27-PPE36 is the recombinant PPE36 protein. TB1, TB2 and TB3 are Mycobacterium tuberculosis clinical strains isolated from infected patients. Mycobacterium aurum is a fast-growing mycobacterium, and BCG is the Calmette-Guérin bacillus. The first antibody is a mouse monoclonal IgG antibody directed against PPE36. ¶(9pt)
Then, after demonstrating that the P27-PPE36 protein was present in the M. bovis BCG strain and in clinical isolates of M. tuberculosis, we attempted to localize this PPE protein in the BCG strain. To achieve this, bacteria were washed and fixed, and ultrathin sections were prepared and analysed by electron microscopy using an immunohistochemical test with specific anti-P27-PPE36 antibodies. Results generated with monoclonal (Fig. 4A) and polyclonal antibodies (Fig. 4B) revealed a peripheral localization of this protein on the cell membrane. Similar results were obtained by western blot analysis (Fig. 4D) of the Mtb cell fractions with the monoclonal anti-P27-PPE36 antibody, indicating that the P27-PPE36 protein is localized in the cell membrane (Le Moigne et al.). This protein was the third member of its family to be localized at the periphery of the cell (Sampson et al., 2001; Pym et al.; Okkels et al.; Demangel et al.), and since then the same localization has been assigned to other PPE proteins, for example Map3420c and Map1506 in Mycobacterium avium subsp. paratuberculosis (Newton et al.). In Mycobacterium immunogenum, a PPE protein (accession no. YP_001288073) has been found to be a cell-membrane-associated antigen (Gupta et al.). A recent detailed analysis of the Mycobacterium marinum capsule, using cryo-electron microscopy in conjunction with liquid chromatography-mass spectrometry (LC-MS), demonstrated that 5 (MM1129, MM1402, MM0186, MM5047 and MM1497) of the 25 major cell surface proteins are members of the PPE family (Sani et al.). Similarly, high-throughput proteomic MALDI-MS and LC-MS approaches have been used by Målen et al. to identify 8 PPEs in M. tuberculosis envelope fractions (PPE18, PPE20, PPE26, PPE32, PPE33, PPE51, PPE60 and PPE68). Therefore, these results suggest that cell wall/surface localization is a characteristic of several PE/PPE proteins, although another PPE protein, PPE41, has been shown to be secreted by pathogenic mycobacteria (Abdallah et al.). So, while the majority of PE and PPE proteins localize to the cell wall, some of them may be secreted into the extracellular environment.
As explained above, P27-PPE36, being a disordered protein that needs a partner to fold, should be associated with the PE protein PE22 (Gey van Pittius et al.; Strong et al.). Moreover, this putative PPE36-PE22 complex could be associated with a system dedicated to the secretion of members of the potent T-cell antigen family of the 6-kDa Early Secreted Antigenic Target (ESAT-6) (Gey van Pittius et al.). According to this computational study, which reconstructs the evolutionary history of the pe and ppe gene families, the Rv2107 and Rv2108 genes are hypothesized to have been duplicated from the ESAT-6 (esx) gene cluster regions, as they are highly homologous to their paralogues within the ESAT-6 (esx) gene clusters and have the same paired genomic orientation. These esx clusters encode the so-called Type VII or ESX secretion systems, of which there are 5 in Mtb (Gey van Pittius et al.). Thus, Rv2107 and Rv2108 would derive from ESAT-6 gene cluster Region 2, i.e. from Rv3893c (PE36) and Rv3892c (PPE69).
¶(14pt) Serological studies ¶(6pt)
Diverse reports point out the potentially immunodominant nature of PPE proteins. Antibodies against other PPE proteins have been found in mycobacteria-infected humans or animals: in humans against PPE17 (Rv1168c) (Khan et al.), PPE41 (Rv2430c) (Choudhary et al.), PPE42 (Rv2608) (Chakhaiyar et al.), PPE55 (Rv3347c) (Singh et al.) and PPE57 (Rv3425) (Zhang et al.); in humans and mice against PPE68 (Rv3873) (Daugelat et al.); in humans (Rindi et al.) and mice (Romano et al.; Bonanni et al.) against PPE44 (Rv2770c); in cattle against PPE68 (Rv3873) (Cockle et al.); and against a PPE protein of Mycobacterium avium subsp. paratuberculosis (Newton et al., 2008). Other studies highlight the capacity of PPE proteins such as PPE41 (Rv2430c) to induce a high B-cell response in TB patients (Tundup et al.). Conversely, one study shows that patients with tuberculosis do not develop a strong humoral response against the PPE44 protein (Zanetti et al.). In cattle, no difference is seen in the humoral response to PPE44 (Rv2770c) between infected and TB-free animals (Molicotti et al.). The expressed P27-PPE36 protein is immunologically active and reacts, in western blot and ELISA, with antibodies from the sera of patients infected with Mtb (Le Moigne et al.) (Fig. 5).
¶(9pt) Fig. 5. Western blot analysis of the presence of anti-recombinant PPE36 antibodies in the sera of TB patients, in comparison with serum from healthy donors and from a recombinant PPE36-hyperimmunized rabbit. Lanes: recombinant PPE36 probed with rabbit anti-PPE36 serum, TB-infected patient serum and healthy donor serum; the band is detected at 45 kDa. ¶(9pt)
We then studied the PPE36-specific antibody isotype distribution in the sera of pulmonary tuberculosis patients and compared it to that in sera from healthy controls by enzyme-linked immunosorbent assay (ELISA). Our results showed a significant increase of the IgA antibody response in patients' sera, a less important IgM response, a total absence of IgG2, IgG3 and IgG4 responses, and a weak IgG1 response in a few patients' sera (unpublished results). The absence of an IgG response in the sera of patients led us to check for the presence of immune complexes that might inhibit the interaction of antibodies with our antigen on the plate. Using an immunoprecipitation test with goat anti-human immunoglobulin antibodies, no immune complex containing P27-PPE36 was detected in the patients' sera. The significance of the IgA and IgM responses is not clear. The IgA response is the more interesting and intriguing result for this protein, because this is the first study showing the presence of IgA alone with the absence of an IgG response against a peptidic antigen. The IgA response is often considered to be local (mucosa and body surfaces) rather than systemic (sera). It has also been reported to be more specific for non-peptidic antigens than the IgG response, which is more reactive (Julián et al., 2005). The IgM response is in general related to the natural autoantibodies found in the sera of healthy and infected people and animals, and we could not ascribe it a diagnostic value, though we note its increase during infection. These antibodies are in general polyspecific, with weak affinity for their antigen.
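The chapter does not detail how the ELISA optical densities were dichotomized into positive and negative sera; a common convention, shown in the Python sketch below with made-up OD values, is to call positive any serum whose OD exceeds the mean of the healthy controls by two (or three) standard deviations. This is only an illustration of that convention, not the analysis actually performed in the studies cited here.

import statistics

# Made-up optical densities (OD 450 nm) for illustration only.
healthy_od = [0.11, 0.09, 0.14, 0.10, 0.12, 0.08, 0.13]
patient_od = [0.45, 0.38, 0.09, 0.52, 0.30, 0.12]

# A common cutoff: mean of negative controls + 2 standard deviations.
cutoff = statistics.mean(healthy_od) + 2 * statistics.stdev(healthy_od)

positives = [od for od in patient_od if od > cutoff]
print(f"cutoff = {cutoff:.3f}; {len(positives)}/{len(patient_od)} sera IgA-positive")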
The occurrence of antibodies against the PPE proteins is highly controversial; different studies highlighted the capacity of PPE proteins to induce high B cell response in TB human patients or infected animals [START_REF] Tundup | The co-operonic PE25/PPE41 protein complex of Mycobacterium tuberculosis elicits increased humoral and cell mediated immune response[END_REF][START_REF] Singh | Immunogenicity of the Mycobacterium tuberculosis PPE55 (Rv3347c) protein during incipient and clinical tuberculosis[END_REF]. Inversely, a study showed that patients with tuberculosis do not develop a strong humoral response against a PPE protein [START_REF] Zanetti | Patients with pulmonary tuberculosis develop a strong humoral response against methylated heparin-binding hemagglutinin[END_REF]. In comparison with other PPE proteins, P27-PPE36 proved to be less useful as a basis for the development of a TB diagnostic test. However, the presence of an IgA response in the absence of an IgG one, could be exploited as an indicator for Mtb diagnosis. A large number of sera should be tested to gather further information on the immune responses to this antigen. ¶(14pt) Immune response against P27-PPE36 by different immunisation ways ¶(6pt) We have studied the immune response of mice against the Mtb P27-PPE36 protein. The peripheral localization of the P27-PPE36 protein led to the belief that they might play an important immunological role either in diagnosis or in protection. So, we examined the immune response against the P27-PPE36 protein using different Pathogen associated molecular patterns (PAMPs) as adjuvants and vectors for immunization. PAMPs are expressed only by micro-organisms and are recognized by the eukaryotic cells through the pattern recognition receptors (PRRs) of the innate immune system such as the Toll-like receptors (TLRs) [START_REF] Medzhitov | Innate immune recognition: mechanisms and pathways[END_REF]. The interaction of PAMPs with their corresponding TLRs helps to identify the nature of the PAMP and to guide the adequate adaptive immune response [START_REF] Medzhitov | Innate immune recognition: mechanisms and pathways[END_REF]. Muramyl dipeptides, a major element of the Freund's complete adjuvant, bacterial DNA, and bacterial flagellin are three PAMPs recognized by TLR2, TLR9, and TLR5, respectively. Different immunization protocols were used to study immunological potential of the P27-PPE36 protein. BALB/c mice were immunized either with the P27-PPE36 recombinant protein in Freund's adjuvant or in phosphate saline buffer (PBS) (classical immunization), with a pcDNA3 plasmid containing the gene encoding the P27-PPE36 protein (DNA immunization), or with the Escherichia coli bacteria expressing the P27-PPE36 protein genetically fused into the flagellin (flagellin immunization) [START_REF] Le Moigne | Flagellin as a good carrier and potent adjuvant for Th1 response: Study of mice immune response to the p27 (Rv2108) Mycobacterium tuberculosis antigen[END_REF]. (A): Proliferation of splenic cells of immunized mice after incubation in vitro with different concentrations (( ) 0.0 µg/ml, ( ) 1.1µg/ml, ( ) 3.3 µg/ml and ( ) 10µg/ml) of purified p27 recombinant protein. The proliferation was monitored by [ 3 H] thymidine uptake at 66 h after stimulation. (B): Cytokine secretion by splenic cells of immunized mice. Splenic cells were stimulated in vitro by the recombinant P27-PPE36 protein and IFN- was quantified in the supernatant after one week of culture. 
We found that P27-PPE36 expressed within the flagellin led to the strongest cellular responses, with the highest IFN-γ production (Fig. 6B) and cell proliferation (Fig. 6A), indicating a specific Th1-like orientation of the immune response. DNA immunization was less potent in inducing such responses. We confirmed the role of flagellin in this response by using different immunization combinations (Le Moigne et al.). However, the specific antibody response was weak with either method (Fig. 6C). On the other hand, classical immunization with the recombinant protein, soluble or incorporated in Freund's adjuvant, still yielded the best antibody response (Fig. 6C). The best cellular and humoral responses were obtained in the group of mice primed with the recombinant protein and boosted with the antigen presented on the modified flagellin (Le Moigne et al.). In general, the P27-PPE36 antigen induced a strong proliferative response accompanied by a high production of IFN-γ and a low amount of IL-4 (Le Moigne et al., 2008), independently of the PAMP used. These results indicate that this antigen may be involved in the establishment of host cellular immune responses against Mtb.

Protective anti-mycobacterial immunity is primarily mediated by cellular immune responses (Flynn et al.; Caruso et al.). Mtb is rich in antigens that induce IFN-γ secretion, and such antigens have been reported in purified cell walls, the cytosolic fraction, and short-term culture filtrates (ST-CF) (Mustafa). The importance of antibodies in tuberculosis is much debated, but it has been suggested that certain antibody specificities directed against bacterial surface epitopes, and of the correct isotype, may confer protection against intracellular infections (Glatman-Freedman; Glatman-Freedman and Casadevall; Casadevall).
Other PPE proteins have been reported to be strongly immunogenic (Choudhary et al.; Demangel et al.; Okkels et al.; Dillon et al.; Skeiky et al.). Antibodies against PPE41 (Rv2430c) are present in TB patients but not in healthy individuals (Choudhary et al.); PPE68 (Rv3873) induces IFN-γ production from splenocytes of M. tuberculosis-infected mice, from peripheral blood mononuclear cells of TB patients and PPD-positive healthy individuals (Demangel et al.; Okkels et al.), and from cattle blood cells (Cockle et al.; Mustafa et al.). Immune responses elicited by PPE18 (Rv1196) and PPE14 (Rv0915c) have been shown to provide some protection in mice infected with M. tuberculosis (Dillon et al.; Skeiky et al.). Together, these studies suggest that several PPE proteins are expressed in vivo. In other mycobacteria, further PPE proteins have been shown to induce immune responses. For example, in M. avium subsp. paratuberculosis, two PPE proteins named Map39 and Map41 significantly elicited IFN-γ production in peripheral blood mononuclear cells from infected cattle (Nagata et al.).
When used to immunize mice, PPE57 (Rv3425) and PPE46 (Rv3018c) also induce strong humoral and cellular responses (Wang et al.; Chaitra et al.).

Conclusion

The P27-PPE36 protein is the third member of its family shown to be localized at the periphery of the cell (Sampson et al., 2001; Pym et al.; Okkels et al.). Other PPE proteins have since been found to have a similar localization. This may shed some light on its role in the diagnosis and pathogenesis of Mtb. In conclusion, the P27-PPE36 protein was found to be an antigen specific for the Mtb complex, recognized by the sera of tuberculosis patients, and localized in the membrane of the bacterial cell.

Acknowledgment

We thank Georges Robreau, Jean-Luc Guesdon and Caroline Borot for their help.

Fig. 2. Coomassie blue staining of bacterial lysates and purified recombinant PPE36 protein expressed with a His-tag (A) or with GST (B). (A): Lanes 1 and 2: bacterial extracts of E. coli BL21 (DE3) without or with IPTG induction, respectively. Lane 3: purified PPE36 protein. (B): Lanes 1 and 2: PPE36 and the GST-PPE36 fusion protein, partially cleaved or not cleaved by thrombin, respectively.

Fig. 4. Localization of the P27-PPE36 antigen: immunogold electron microscopy images (A-C) showing the peripheral localization of the P27-PPE36 protein in cryosectioned M. bovis BCG, and western blot on various M. tuberculosis cell fractions (D). Incubation was performed either with a mouse monoclonal antibody (A) or with rabbit polyclonal anti-P27-PPE36 antibodies (B); the negative control was done with normal rabbit serum (C). (A: ×49,000; B and C: ×23,000.) (D): Immunoblot analysis of different cell fractions of M. tuberculosis, obtained from the Tuberculosis Research Materials and Vaccine Testing Laboratory, Colorado State University, using a monoclonal anti-P27-PPE36 antibody: recombinant P27-PPE36 (rP27), cytosol fraction (CYT), cell membrane fraction (MEM), culture filtrate proteins (CFP), cell wall fraction (CW), and SDS-soluble cell wall proteins (SCWP).

These results suggest that cell wall/surface localization is a characteristic of several PE/PPE proteins, although another PPE protein, PPE41, has been shown to be secreted by pathogenic mycobacteria (Abdallah et al.). Thus, while the majority of PE and PPE proteins localize to the cell wall, some of them may be secreted into the extracellular environment.